Class LlmChatExecutor
java.lang.Object
ai.nervemind.app.executor.LlmChatExecutor
- All Implemented Interfaces:
NodeExecutor
Executor for the "llmChat" node type - sends messages to LLM providers.
Enables AI-powered workflows by interacting with various Large Language Model providers. Supports multiple providers with a unified interface, template interpolation for dynamic prompts, and structured response handling.
Node Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| provider | String | "openai" | "openai", "anthropic", "ollama", "azure", "google", "custom" |
| model | String | provider-specific | Model name (e.g., "gpt-4", "claude-3-opus") |
| apiKey | String | from settings | API key or ${credential.name} reference |
| baseUrl | String | provider default | Custom base URL for API |
| messages | List | [] | Array of {role, content} message objects |
| systemPrompt | String | - | System message prepended to conversation |
| prompt | String | - | Single user message (alternative to messages) |
| temperature | Double | 0.7 | Sampling temperature (0.0-2.0) |
| maxTokens | Integer | 1024 | Maximum tokens in response |
| timeout | Integer | 120 | Request timeout in seconds |
| responseFormat | String | "text" | "text" or "json" |
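As a sketch, the parameters above could be assembled into a plain map like the following. The map literal and the `{{input.text}}` placeholder syntax are illustrative assumptions, not the engine's actual configuration API.

```java
import java.util.Map;

public class LlmChatParamsExample {
    /** Illustrative parameter map for an "llmChat" node, mirroring the table above. */
    public static Map<String, Object> exampleParams() {
        return Map.of(
            "provider", "openai",
            "model", "gpt-4",
            // Placeholder syntax for template interpolation is an assumption.
            "prompt", "Summarize this in one sentence: {{input.text}}",
            "temperature", 0.7,
            "maxTokens", 1024,
            "timeout", 120,
            "responseFormat", "text"
        );
    }
}
```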
Supported Providers
- openai - OpenAI GPT models (GPT-4, GPT-3.5-turbo)
- anthropic - Anthropic Claude models
- ollama - Local Ollama server (Llama, Mistral, etc.)
- azure - Azure OpenAI Service
- google - Google Gemini models
- custom - OpenAI-compatible custom endpoints
Message Format
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "Hello!" },
{ "role": "assistant", "content": "Hi! How can I help?" },
{ "role": "user", "content": "What's 2+2?" }
]
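The same conversation can be built in Java as a list of role/content maps; this sketch assumes the executor accepts plain `Map` message objects as shown in the JSON above.

```java
import java.util.List;
import java.util.Map;

public class MessagesExample {
    /** Builds the conversation shown above as a List of {role, content} maps. */
    public static List<Map<String, String>> conversation() {
        return List.of(
            Map.of("role", "system", "content", "You are a helpful assistant."),
            Map.of("role", "user", "content", "Hello!"),
            Map.of("role", "assistant", "content", "Hi! How can I help?"),
            Map.of("role", "user", "content", "What's 2+2?")
        );
    }
}
```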
Output Data
| Key | Type | Description |
|---|---|---|
| response | String | The LLM's text response |
| usage | Map | Token usage info (promptTokens, completionTokens, totalTokens) |
| model | String | Model that was used |
| finishReason | String | Why response ended (stop, length, etc.) |
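A downstream consumer would read these keys back out of the result map. The sketch below uses a hand-built sample map standing in for a real execution result; the casts follow the types listed in the table.

```java
import java.util.Map;

public class OutputExample {
    /** Extracts the typed fields from a (simulated) llmChat output map. */
    public static String summarize(Map<String, Object> output) {
        String response = (String) output.get("response");
        @SuppressWarnings("unchecked")
        Map<String, Object> usage = (Map<String, Object>) output.get("usage");
        int total = (Integer) usage.get("totalTokens");
        return response + " (" + total + " tokens, " + output.get("finishReason") + ")";
    }

    public static void main(String[] args) {
        Map<String, Object> sample = Map.of(
            "response", "4",
            "usage", Map.of("promptTokens", 12, "completionTokens", 1, "totalTokens", 13),
            "model", "gpt-4",
            "finishReason", "stop"
        );
        System.out.println(summarize(sample)); // prints: 4 (13 tokens, stop)
    }
}
```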
API Key Resolution
API keys are resolved in this order:
1. Explicit apiKey parameter
2. Credential reference: ${credential.OPENAI_API_KEY}
3. Settings service: stored API keys per provider
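The three-step fallback above can be sketched as follows. This is a minimal illustration, not the executor's actual implementation: the real class consults SettingsServiceInterface, while the maps here stand in for credential storage and per-provider settings.

```java
import java.util.Map;

public class ApiKeyResolution {
    /** Sketch of the resolution order: explicit key, credential reference, stored setting. */
    public static String resolve(String apiKeyParam,
                                 Map<String, String> credentials,
                                 Map<String, String> providerSettings,
                                 String provider) {
        if (apiKeyParam != null && !apiKeyParam.isBlank()) {
            // Step 2: a ${credential.NAME} reference is looked up by name.
            if (apiKeyParam.startsWith("${credential.") && apiKeyParam.endsWith("}")) {
                String name = apiKeyParam.substring("${credential.".length(),
                                                    apiKeyParam.length() - 1);
                return credentials.get(name);
            }
            // Step 1: an explicit literal key wins outright.
            return apiKeyParam;
        }
        // Step 3: fall back to the stored per-provider key.
        return providerSettings.get(provider);
    }
}
```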
Constructor Summary
| Constructor | Description |
|---|---|
| LlmChatExecutor(ai.nervemind.common.service.SettingsServiceInterface settingsService) | Creates a new LLM chat executor with a configured HTTP client. |
Method Summary
| Modifier and Type | Method | Description |
|---|---|---|
| Map<String, Object> | execute(ai.nervemind.common.domain.Node node, Map<String, Object> input, ExecutionService.ExecutionContext context) | Executes the business logic for this node type. |
| String | getNodeType() | Unique identifier for the node type this executor handles. |
Constructor Details
LlmChatExecutor
public LlmChatExecutor(ai.nervemind.common.service.SettingsServiceInterface settingsService)
Creates a new LLM chat executor with a configured HTTP client.
Parameters:
settingsService - the settings service for configuration access
Method Details
execute
public Map<String, Object> execute(ai.nervemind.common.domain.Node node, Map<String, Object> input, ExecutionService.ExecutionContext context)
Description copied from interface: NodeExecutor
Executes the business logic for this node type.
Specified by:
execute in interface NodeExecutor
Parameters:
node - the node definition containing parameters and configuration. Use Node.parameters() to access user settings.
input - combined output from all upstream nodes connected to this node. For simple flows, this contains the direct predecessor's output; for merge nodes, it contains the combined data.
context - execution context providing access to workflow-scoped services, the logger, and execution metadata.
Returns:
a Map containing the results of this node's execution. Keys in this map become available as variables for downstream nodes. Note: returning null is treated as an empty map.
getNodeType
public String getNodeType()
Description copied from interface: NodeExecutor
Unique identifier for the node type this executor handles. This must match the 'type' field in the node's JSON definition.
Specified by:
getNodeType in interface NodeExecutor
Returns:
the unique type string (e.g., "httpRequest", "llmChat").
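The execute/getNodeType contract can be illustrated with a minimal custom executor. The simplified interface below is a stand-in assumption; the real NodeExecutor signature takes Node, input map, and ExecutionContext as documented above.

```java
import java.util.Map;

/** Simplified stand-in for the real NodeExecutor interface (assumption). */
interface SimpleNodeExecutor {
    String getNodeType();
    Map<String, Object> execute(Map<String, Object> params, Map<String, Object> input);
}

/** Minimal custom executor illustrating the getNodeType()/execute() contract. */
public class UppercaseExecutor implements SimpleNodeExecutor {
    @Override
    public String getNodeType() {
        // Must match the 'type' field in the node's JSON definition.
        return "uppercase";
    }

    @Override
    public Map<String, Object> execute(Map<String, Object> params, Map<String, Object> input) {
        String text = String.valueOf(input.getOrDefault("text", ""));
        // Keys in the returned map become variables for downstream nodes.
        return Map.of("text", text.toUpperCase());
    }
}
```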