# Settings
Configure providers, agents, approval rules, and plugin behavior
All UND settings are managed through `UUNDSettings`, a `UDeveloperSettings` subclass whose values are saved in your project's config file at:
`{Project}/Config/DefaultEditor.ini`
Section: `[/Script/UnrealNeuralDirector.UNDSettings]`
The plugin ships its own defaults in `Plugins/UnrealNeuralDirector/Config/DefaultEditor.ini`, which your project-level file overrides.
You can edit settings in two ways:
- Through the Settings view in the UND panel
- Through Edit > Project Settings > Plugins > Unreal Neural Director
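Settings edited either way are serialized to the config file above. As a sketch, a minimal project-level override might look like the following (the setting names come from the tables below; the exact serialized forms UE writes may differ from this hand-written example):

```ini
[/Script/UnrealNeuralDirector.UNDSettings]
ActiveProviderProfile=Default
ActiveAgentProfile=Coder
MaxAgentTurns=100
bShowTokenUsageBar=True
```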
## Provider Configuration
These settings control which AI provider UND connects to and how.
| Setting | Default | Description |
|---|---|---|
| ProviderProfiles | [Default Anthropic] | Array of provider configurations. Each profile specifies a provider type (Anthropic, OpenAI-Compatible, or Gemini), model name, API key environment variable, and endpoint URL. |
| ActiveProviderProfile | "Default" | The currently selected provider profile name. Switch between profiles to use different models or providers. |
### Setting Up a Provider Profile
Each provider profile contains:
- Profile Name -- A human-readable label
- Provider Type -- Anthropic, OpenAI-Compatible, or Gemini
- Model -- The model identifier (e.g., `claude-sonnet-4-20250514`, `gpt-4o`, `gemini-2.5-pro`)
- API Key Env Var -- The name of the environment variable holding your API key (e.g., `ANTHROPIC_API_KEY`). UND looks for the key in two places: first the system environment variable, then `Saved/UND/apikeys.cfg` (format: `ENV_VAR_NAME=key`, one per line)
- Endpoint URL -- Override for custom endpoints (OpenAI-compatible servers, proxies)

API keys are never stored in UE config files. UND reads them from environment variables or the local `apikeys.cfg` file at runtime. See Providers for setup details.
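Per the format described above, the `apikeys.cfg` fallback is a plain key/value file with one entry per line. A sketch with placeholder values (not real keys):

```ini
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxx
```

Since this file holds secrets, keep it out of version control; UE projects typically ignore the `Saved/` directory already.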
## Agent Configuration
These settings control the AI agent's behavior and capabilities.
| Setting | Default | Description |
|---|---|---|
| AgentProfiles | [Orchestrator, Coder, Architect, Reviewer, LevelDesigner] | Array of agent configurations. Each profile defines a system prompt template, allowed/denied tools, and delegation rules. |
| ActiveAgentProfile | "Coder" | The currently selected agent profile. Determines which system prompt and tool set the AI uses. |
| MaxAgentTurns | 100 | Maximum number of tool-call round-trips per user message. The agent receives a warning at 75% of this budget. |
### Built-in Agent Profiles
| Profile | Description |
|---|---|
| Orchestrator | Decomposes complex requests into sub-tasks and delegates to specialized agents |
| Coder | General-purpose implementation agent (default). Has access to all tools. |
| Architect | Focuses on system design, planning, and architecture decisions |
| Reviewer | Reviews code and Blueprint quality, suggests improvements |
| LevelDesigner | Specializes in level layout, environment art, and world building |
Each profile's `AllowedTools` and `DeniedTools` lists filter which of the 31 built-in tools the agent can use.
## Approval Rules
| Setting | Default | Description |
|---|---|---|
| AutoApprovalRules | [] | Persistent allow/deny rules created from the approval dialog. When you click "Always Allow" or "Always Deny" on a tool call approval prompt, a rule is saved here. |
| ProtectedPaths | [] | Glob patterns for paths that are permanently write-protected. File system tools will refuse to modify files matching these patterns. |
| IgnorePatterns | [] | Glob patterns for paths hidden from file system tools. Matching files and directories will not appear in list_files, search_files, or search_in_files results. |
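Both ProtectedPaths and IgnorePatterns are arrays, so in config-file form they would use UE's additive array syntax. A hypothetical example; the specific glob patterns shown are illustrative, not defaults:

```ini
[/Script/UnrealNeuralDirector.UNDSettings]
+ProtectedPaths=Config/*.ini
+ProtectedPaths=Source/ThirdParty/**
+IgnorePatterns=Intermediate/**
+IgnorePatterns=Saved/**
```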
## UI Settings
| Setting | Default | Description |
|---|---|---|
| bShowTokenUsageBar | true | Display the context window utilization bar at the top of the chat view, showing how much of the model's context window is in use. |
| bShowToolDetails | false | Show raw tool call JSON in chat messages. Useful for debugging but verbose for normal use. |
| MaxMessagesInView | 200 | Maximum number of chat messages visible in the chat view. Older messages are hidden (not deleted) to keep the UI responsive. |
## Knowledge Injection
These settings control UND's automatic knowledge injection system, which provides the AI with relevant documentation based on what you are asking about.
| Setting | Default | Description |
|---|---|---|
| bEnableKnowledgeInjection | true | Automatically inject relevant knowledge modules into the system prompt. Disable to reduce prompt size or if you prefer manual context management. |
| KnowledgeInjectionTokenBudget | 2000 | Maximum number of tokens allocated for injected knowledge modules per turn. Higher values provide more context but consume more of the model's context window. |
## Auto-RAG (Code Search)
Auto-RAG maintains a BM25 index of your project's source code and automatically injects relevant code snippets into the prompt.
| Setting | Default | Description |
|---|---|---|
| bEnableAutoRAG | true | Automatically inject BM25 code search results into the system prompt. Provides the AI with awareness of existing project code. |
| AutoRAGMaxResults | 5 | Maximum number of code snippets injected per turn. |
| AutoRAGTokenBudget | 2000 | Maximum number of tokens allocated for Auto-RAG snippets per turn. |
| BM25MaxIndexedDocs | 5000 | Maximum number of document chunks in the BM25 index. Larger values improve search coverage but increase memory usage. |
| EmbeddingBackend | Local BM25 | Search backend type. "Local BM25" uses the built-in text search index. "ExternalAPI" connects to an external embedding service for vector-based semantic search. |
## Checkpoints
Checkpoints save conversation snapshots so you can restore previous states.
| Setting | Default | Description |
|---|---|---|
| bEnableCheckpoints | true | Automatically save conversation checkpoints after each tool execution batch. |
| MaxCheckpointsPerTask | 20 | Maximum number of checkpoint snapshots stored per task. Older checkpoints are pruned when this limit is reached. |
| MaxCheckpointStorageMB | 500 | Global disk budget for all checkpoint files across all tasks. |
| bCompressCheckpoints | true | Compress checkpoint files with gzip to reduce disk usage. |
Checkpoints are saved to `{Project}/Saved/UND/Tasks/{TaskId}/checkpoint_{N}.json`.
## MCP Servers
| Setting | Default | Description |
|---|---|---|
| MCPServers | [] | Array of MCP (Model Context Protocol) server configurations. Each entry defines a server name, command, arguments, and enabled state. MCP servers extend UND's tool set with external capabilities. |
MCP servers communicate via stdio using JSON-RPC 2.0. Tools from MCP servers appear in the tool registry as `mcp__{ServerName}__{ToolName}` and are used by the AI alongside built-in tools.
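As a sketch of what an MCPServers entry could look like in config form, using UE's struct-array serialization; the field names here (Name, Command, Args, bEnabled) are inferred from the description above and may not match the plugin's actual struct:

```ini
+MCPServers=(Name="filesystem",Command="npx",Args=("-y","@modelcontextprotocol/server-filesystem","C:/Projects/Shared"),bEnabled=True)
```

A server registered as "filesystem" would then surface its tools in the registry as `mcp__filesystem__{ToolName}`.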
## Output Limits
These are compile-time constants in `UNDToolLimits.h` and cannot be changed through settings. They define upper bounds for tool output to prevent context window overflow.
| Constant | Value | Description |
|---|---|---|
| MaxToolOutputChars | 100,000 | Global output truncation for any tool |
| DefaultFileListResults | 2,000 | Default result count for list_files |
| MaxFileListResults | 10,000 | Upper bound for list_files |
| DefaultSearchInFileResults | 50 | Default result count for search_in_files |
| MaxSearchInFileResults | 2,000 | Upper bound for search_in_files |
| DefaultSemanticSearchResults | 5 | Default for semantic_search |
| MaxSemanticSearchResults | 20 | Upper bound for semantic_search |
| MaxPythonOutputChars | 50,000 | Python execution output cap |
| MaxQueryEngineResults | 50 | query_ue_api results cap |
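For reference, the table above corresponds to a header of compile-time constants. The sketch below uses the names and values from the table; the actual declarations in `UNDToolLimits.h` (namespace, UE `int32` vs. standard integer types) may differ:

```cpp
// Sketch of UNDToolLimits.h -- illustrative only, not the plugin's actual source.
#include <cstdint>

namespace UNDToolLimits
{
    // Global output truncation applied to any tool result
    constexpr int32_t MaxToolOutputChars = 100000;

    // list_files
    constexpr int32_t DefaultFileListResults = 2000;
    constexpr int32_t MaxFileListResults = 10000;

    // search_in_files
    constexpr int32_t DefaultSearchInFileResults = 50;
    constexpr int32_t MaxSearchInFileResults = 2000;

    // semantic_search
    constexpr int32_t DefaultSemanticSearchResults = 5;
    constexpr int32_t MaxSemanticSearchResults = 20;

    // Python execution and query_ue_api caps
    constexpr int32_t MaxPythonOutputChars = 50000;
    constexpr int32_t MaxQueryEngineResults = 50;
}
```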