LanguageModelNode
Generates AI responses using language models from OpenAI, Groq, Anthropic, and more. This is the core AI node for generating conversational responses.
Overview
The LanguageModelNode is one of the most commonly used nodes in ConvoFlow. It takes a query (and optional context) and generates AI-powered responses using various language model services.
Inputs
query *
Type: string | Required: Yes
The main text query to send to the language model. Typically connected from QueryNode's output.
context
Type: string | Required: No
Additional context from knowledge base or other sources. This is often connected from KnowledgeBaseRetrievalNode's output to provide relevant information for the AI to use in its response.
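When context is supplied, the node has to merge it with the query before calling the model. How LanguageModelNode actually formats the prompt is an implementation detail of ConvoFlow; the sketch below is only one plausible assembly (the `build_messages` helper is hypothetical, not part of the documented API):

```python
def build_messages(query, system_prompt="You are a helpful AI assistant.", context=None):
    """Assemble a chat-style message list from the node's inputs.

    Illustrative only: ConvoFlow's real prompt format may differ.
    """
    messages = [{"role": "system", "content": system_prompt}]
    if context:
        # Prepend retrieved context so the model can ground its answer in it.
        messages.append({
            "role": "user",
            "content": f"Context:\n{context}\n\nQuestion: {query}",
        })
    else:
        messages.append({"role": "user", "content": query})
    return messages
```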
Outputs
response
Type: string
The generated response from the language model. This is typically connected to ResponseNode for final output.
Parameters
service *
Type: string | Required: Yes
Language model service to use. Available options:
- openai - OpenAI models (GPT-3.5, GPT-4, etc.)
- groq - Groq models (fast inference)
- ollama - Local Ollama models
Default: "openai"
model
Type: string | Required: No
Specific model to use. If left empty, the default model for the selected service will be used.
Default: "" (uses service default)
system_prompt
Type: string | Required: No
System/base prompt to set AI behavior. This defines the role and personality of the AI assistant.
Default: "You are a helpful AI assistant."
temperature
Type: float | Required: No
Creativity/randomness level. Range: 0.0 to 1.0. Lower values (0.0-0.5) produce more focused and consistent responses. Higher values (0.6-1.0) produce more creative and varied responses.
Default: 0.7
max_tokens
Type: integer | Required: No
Maximum number of tokens in the response. Controls the length of the generated text.
Default: 500
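The parameters above can be summarized as a small validation step: fall back to the service's default model when `model` is empty, and keep `temperature` inside its documented range. This is a hypothetical sketch, not ConvoFlow's actual code, and the model names in `DEFAULT_MODELS` are placeholders:

```python
# Placeholder default models per service -- assumed for illustration only.
DEFAULT_MODELS = {
    "openai": "gpt-3.5-turbo",
    "groq": "llama3-8b-8192",
    "ollama": "llama3",
}

def resolve_params(service="openai", model="",
                   system_prompt="You are a helpful AI assistant.",
                   temperature=0.7, max_tokens=500):
    """Validate node parameters and fill in documented defaults."""
    if service not in DEFAULT_MODELS:
        raise ValueError(f"Unknown service: {service!r}")
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be in the range 0.0 to 1.0")
    return {
        "service": service,
        "model": model or DEFAULT_MODELS[service],  # "" -> service default
        "system_prompt": system_prompt,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
```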
Tools Used
LanguageModelNode uses the LanguageModelTool to interact with language model APIs.
```python
from tools.language_model_tool.language_model_tool import LanguageModelTool

tool = LanguageModelTool()
result = tool.generate_response(
    query="Hello, how are you?",
    service="openai",
    model="gpt-3.5-turbo",
    system_prompt="You are a helpful assistant.",
    temperature=0.7,
    max_tokens=500,
)
```

Learn more about creating tools: Creating Tools
Required Credentials
Required API Keys:
- OPENAI_API_KEY - Required for OpenAI service
- GROQ_API_KEY - Required for Groq service
- ANTHROPIC_API_KEY - Required for Anthropic service
Configure these in your environment variables or through the ConvoFlow admin interface.
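A fail-fast check on these variables saves a confusing API error later. The sketch below is illustrative (the `require_api_key` helper and the keyless treatment of Ollama are assumptions, not ConvoFlow's documented behavior):

```python
import os

# Assumed mapping of each hosted service to its required env var.
SERVICE_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "groq": "GROQ_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def require_api_key(service):
    """Return the API key for a service, or raise if it is not configured."""
    var = SERVICE_ENV_VARS.get(service)
    if var is None:
        return None  # e.g. a local Ollama instance needs no key
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set; configure it before using the {service} service"
        )
    return key
```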
Example Usage
Here's a common workflow pattern using LanguageModelNode:
// Basic AI Chatbot:
QueryNode (query: "What is AI?")
↓ [query]
LanguageModelNode (
service: "openai",
model: "gpt-3.5-turbo",
system_prompt: "You are a helpful assistant.",
temperature: 0.7
)
↓ [response]
ResponseNode
→ "AI, or Artificial Intelligence, is..."

// RAG (Retrieval-Augmented Generation):
QueryNode (query: "How do I install ConvoFlow?")
↓ [query]
KnowledgeBaseRetrievalNode (collection: "docs")
↓ [response as context]
LanguageModelNode (
service: "openai",
system_prompt: "Answer based on the provided context.",
temperature: 0.3
)
↓ [response]
ResponseNode
→ "To install ConvoFlow, first..."

Styling
LanguageModelNode has a distinctive design to indicate it's an AI node:
- Shape: Rounded rectangle
- Border Color: Purple (#a78bfa)
- Background: Dark (#1f1f1f)
- Subtitle: "GENERATES TEXT"
- Icon: Sparkles icon (✨)
Related Nodes
- QueryNode - Provides the input query
- KnowledgeBaseRetrievalNode - Provides retrieval context
- ResponseNode - Outputs the final response