Class LlmChatExecutor

java.lang.Object
ai.nervemind.app.executor.LlmChatExecutor
All Implemented Interfaces:
NodeExecutor

@Component public class LlmChatExecutor extends Object implements NodeExecutor
Executor for the "llmChat" node type - sends messages to LLM providers.

Enables AI-powered workflows by interacting with various Large Language Model providers. Supports multiple providers with a unified interface, template interpolation for dynamic prompts, and structured response handling.

Node Parameters

LLM Chat node configuration parameters
Parameter       Type     Default            Description
provider        String   "openai"           One of "openai", "anthropic", "ollama", "azure", "google", "custom"
model           String   provider-specific  Model name (e.g., "gpt-4", "claude-3-opus")
apiKey          String   from settings      API key or ${credential.name} reference
baseUrl         String   provider default   Custom base URL for the API
messages        List     []                 Array of {role, content} message objects
systemPrompt    String   (none)             System message prepended to the conversation
prompt          String   (none)             Single user message (alternative to messages)
temperature     Double   0.7                Sampling temperature (0.0-2.0)
maxTokens       Integer  1024               Maximum tokens in the response
timeout         Integer  120                Request timeout in seconds
responseFormat  String   "text"             "text" or "json"
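
A minimal node configuration using these parameters might look like the sketch below. The values are illustrative only, and the {{input.text}} interpolation syntax is an assumption standing in for whatever template syntax the executor actually uses:

```json
{
  "type": "llmChat",
  "parameters": {
    "provider": "openai",
    "model": "gpt-4",
    "apiKey": "${credential.OPENAI_API_KEY}",
    "systemPrompt": "You are a helpful assistant.",
    "prompt": "Summarize the following text: {{input.text}}",
    "temperature": 0.2,
    "maxTokens": 512,
    "responseFormat": "text"
  }
}
```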

Supported Providers

  • openai - OpenAI GPT models (GPT-4, GPT-3.5-turbo)
  • anthropic - Anthropic Claude models
  • ollama - Local Ollama server (Llama, Mistral, etc.)
  • azure - Azure OpenAI Service
  • google - Google Gemini models
  • custom - OpenAI-compatible custom endpoints

Message Format

"messages": [
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "Hello!" },
  { "role": "assistant", "content": "Hi! How can I help?" },
  { "role": "user", "content": "What's 2+2?" }
]

Output Data

Output keys added by this executor
Key           Type    Description
response      String  The LLM's text response
usage         Map     Token usage info (promptTokens, completionTokens, totalTokens)
model         String  The model that was used
finishReason  String  Why the response ended ("stop", "length", etc.)
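
As a hedged illustration of consuming these keys, a downstream step might check finishReason to detect a response truncated by maxTokens. The class and method names here are invented for the example; only the output keys come from the table above:

```java
import java.util.Map;

public class LlmChatOutputExample {
    // Illustrative helper: reads the documented output keys from the executor's result map.
    static String describe(Map<String, Object> output) {
        String response = (String) output.get("response");
        String finishReason = (String) output.get("finishReason");
        if ("length".equals(finishReason)) {
            // finishReason "length" means the response hit maxTokens and may be cut off.
            return response + " [truncated]";
        }
        return response;
    }

    public static void main(String[] args) {
        Map<String, Object> output = Map.of(
            "response", "4",
            "model", "gpt-4",
            "finishReason", "stop",
            "usage", Map.of("promptTokens", 20, "completionTokens", 1, "totalTokens", 21)
        );
        System.out.println(describe(output));
    }
}
```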

API Key Resolution

API keys are resolved in this order:

  1. Explicit apiKey parameter
  2. Credential reference: ${credential.OPENAI_API_KEY}
  3. Settings service: stored API keys per provider
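
The resolution order above can be sketched as follows. This is a simplified stand-in, not the real implementation: the CREDENTIALS and SETTINGS maps substitute for the actual credential store and SettingsServiceInterface:

```java
import java.util.Map;
import java.util.Optional;

public class ApiKeyResolutionSketch {
    // Hypothetical credential store standing in for the real credential service.
    static final Map<String, String> CREDENTIALS = Map.of("OPENAI_API_KEY", "sk-from-credentials");
    // Hypothetical per-provider settings standing in for the settings service.
    static final Map<String, String> SETTINGS = Map.of("openai", "sk-from-settings");

    static Optional<String> resolveApiKey(String apiKeyParam, String provider) {
        if (apiKeyParam != null && !apiKeyParam.isBlank()) {
            // 2. Credential reference of the form ${credential.NAME}
            if (apiKeyParam.startsWith("${credential.") && apiKeyParam.endsWith("}")) {
                String name = apiKeyParam.substring("${credential.".length(), apiKeyParam.length() - 1);
                return Optional.ofNullable(CREDENTIALS.get(name));
            }
            // 1. Explicit apiKey parameter
            return Optional.of(apiKeyParam);
        }
        // 3. Settings service: stored API key for the provider
        return Optional.ofNullable(SETTINGS.get(provider));
    }

    public static void main(String[] args) {
        System.out.println(resolveApiKey("sk-explicit", "openai").orElse("none"));
        System.out.println(resolveApiKey("${credential.OPENAI_API_KEY}", "openai").orElse("none"));
        System.out.println(resolveApiKey(null, "openai").orElse("none"));
    }
}
```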
  • Constructor Details

    • LlmChatExecutor

      public LlmChatExecutor(ai.nervemind.common.service.SettingsServiceInterface settingsService)
      Creates a new LLM chat executor with a configured HTTP client.
      Parameters:
      settingsService - the settings service for configuration access
  • Method Details

    • execute

      public Map<String,Object> execute(ai.nervemind.common.domain.Node node, Map<String,Object> input, ExecutionService.ExecutionContext context)
      Description copied from interface: NodeExecutor
      Executes the business logic for this node type.
      Specified by:
      execute in interface NodeExecutor
      Parameters:
      node - The node definition containing parameters and configuration. Use Node.parameters() to access user settings.
      input - Combined output from all upstream nodes that connected to this node. For simple flows, this contains the direct predecessor's output. For merge nodes, it contains combined data.
      context - Execution context providing access to workflow-scoped services, logger, and execution metadata.
      Returns:
      A Map containing the results of this node's execution. Keys in this map become available variables for downstream nodes.
      Note: Returning null is treated as an empty map.
    • getNodeType

      public String getNodeType()
      Description copied from interface: NodeExecutor
      Unique identifier for the node type this executor handles. This must match the 'type' field in the JSON definition of the node.
      Specified by:
      getNodeType in interface NodeExecutor
      Returns:
      The unique type string (e.g., "httpRequest", "llmChat").
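
To illustrate the execute/getNodeType contract, here is a self-contained sketch. The simplified NodeExecutor interface below is a stand-in for the real ai.nervemind one (it drops the Node and ExecutionContext types), and the echo logic replaces the actual LLM call:

```java
import java.util.Map;

public class NodeExecutorSketch {
    // Simplified stand-in for the real NodeExecutor interface.
    interface NodeExecutor {
        Map<String, Object> execute(Map<String, Object> parameters, Map<String, Object> input);
        String getNodeType();
    }

    // Toy executor: echoes the prompt instead of calling an LLM provider.
    static class EchoChatExecutor implements NodeExecutor {
        @Override
        public Map<String, Object> execute(Map<String, Object> parameters, Map<String, Object> input) {
            String prompt = (String) parameters.getOrDefault("prompt", "");
            // Keys returned here become available variables for downstream nodes.
            return Map.of("response", "echo: " + prompt, "finishReason", "stop");
        }

        @Override
        public String getNodeType() {
            // Must match the 'type' field in the node's JSON definition.
            return "llmChat";
        }
    }

    public static void main(String[] args) {
        NodeExecutor executor = new EchoChatExecutor();
        Map<String, Object> out = executor.execute(Map.of("prompt", "Hello"), Map.of());
        System.out.println(executor.getNodeType() + " -> " + out.get("response"));
    }
}
```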