
LLM Usage Patterns: Chats, Workflows, and Agents in AI

Owner: Vadim Rudakov, lefthand67@gmail.com
Version: 0.1.2
Birth: 2025-10-19
Last Modified: 2025-12-16


Large Language Models (LLMs) like GPT have transformed how we build AI applications. But to design effective, production-ready AI systems, engineers must understand the theoretical and practical differences between three fundamental usage patterns: chats, workflows, and agents.

1. What is a Chat?

Theoretical View

A chat primarily involves conversational interaction: the AI model responds directly to user inputs with natural language outputs. It focuses on understanding intent and generating coherent, context-aware text.

Practical Implementation

Example: A customer support chatbot answering product FAQs.
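What makes a chat "context-aware" is simply that the full message history is re-sent to the model on every turn. The following is a minimal sketch of that pattern; `call_llm` is a hypothetical stand-in for any chat-completion client, and the message format mirrors the common system/user/assistant role convention:

```python
def call_llm(messages):
    # Placeholder: a real client (e.g. an OpenAI- or Anthropic-style SDK)
    # would send `messages` to the model and return the assistant's reply.
    return f"(model reply to: {messages[-1]['content']})"

class ChatSession:
    def __init__(self, system_prompt="You are a helpful support assistant."):
        # The system prompt sets the bot's behavior for the whole session.
        self.messages = [{"role": "system", "content": system_prompt}]

    def send(self, user_input):
        self.messages.append({"role": "user", "content": user_input})
        reply = call_llm(self.messages)  # the model sees the entire history
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = ChatSession()
chat.send("What are the benefits of solar energy?")
chat.send("And what are its main drawbacks?")  # "its" resolves via history
```

Because the history grows with every turn, production chatbots usually truncate or summarize old messages to stay within the model's context window.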

2. What is a Workflow?

Theoretical View

Workflows use LLMs as components within a predefined, structured pipeline of AI and non-AI tasks.

Note on Determinism: While workflows may include conditional branching and error handling that appear dynamic, their decision logic is entirely scripted by human engineers. This means workflows are deterministic and lack genuine autonomy — they do not adapt or revise their sequence of steps based on situational understanding beyond predefined, hard-coded rules.
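The determinism point can be made concrete: even a branching workflow is just ordinary control flow written by an engineer. The sketch below, based on the resume-screening example, uses a hypothetical `call_llm` helper and a made-up relevance threshold; note that the LLM fills in a value, but never chooses which branch to take:

```python
def call_llm(prompt):
    # Placeholder for a real LLM client; returns a score as text here.
    return "0.9"

def score_relevance(resume_text, job_description):
    reply = call_llm(
        f"Score from 0 to 1 how well this resume matches the job.\n"
        f"Resume: {resume_text}\nJob: {job_description}"
    )
    try:
        return float(reply)
    except ValueError:
        return 0.0  # scripted error handling, not model-driven adaptation

def screen_resume(resume_text, job_description, threshold=0.7):
    score = score_relevance(resume_text, job_description)
    if score >= threshold:  # the branch condition is hard-coded by a human
        return "shortlist"
    return "reject"
```

Changing the pipeline's behavior (e.g. the threshold, or what happens on a parse failure) requires an engineer editing the code, which is exactly what distinguishes a workflow from an agent.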

Practical Implementation

Example: A resume screening system that extracts skills, scores relevance, then generates a summary report.

3. What is an Agent?

Theoretical View

Agents are autonomous AI systems that perceive environment inputs, plan actions, dynamically select and execute tasks, and adapt based on context and feedback.

The agent’s power comes from integrating LLM reasoning with external tool/API invocation. It has dynamic decision-making capabilities, choosing the next step based on the result (observation) of the previous step, enabling iterative self-correction and feedback loops.

In sophisticated agent architectures, the planner-executor model is crucial: the LLM (planner) generates the next high-level decision, and the executor system performs the specific action (tool/API call) and reports the observation back to the LLM for the next planning cycle. This enables true self-direction beyond scripted automation.

Practical Implementation

Example: A virtual assistant that plans a trip by querying weather, dynamically booking flights based on availability, and updating the itinerary as new information arrives.

4. Key Differences Summary

| Aspect | Chat | Workflow | Agent |
| --- | --- | --- | --- |
| Purpose | Human-like conversation | Structured multi-step tasks | Autonomous task planning & execution |
| Architecture | Single LLM interaction | Fixed pipeline of modular steps | Dynamic, iterative control loop with tools & APIs |
| Autonomy | Low; reactive | Medium; scripted process (deterministic) | High; adaptive and self-directed (non-deterministic sequence) |
| Complexity | Simple | Moderate | High |
| Operational cost | Low | Moderate | High (token usage / latency) |

5. Practical Tips for AI Engineers 🛠️

- Start with the simplest pattern that solves the problem: a chat for conversational Q&A, a workflow for predictable multi-step tasks, and an agent only for open-ended tasks that require dynamic planning.
- Prefer workflows over agents when the sequence of steps is known in advance: they are cheaper, faster, and far easier to test and debug.
- When you do use agents, always bound the control loop (e.g. a maximum step count) and validate tool outputs, since each iteration adds token cost and latency.

6. Example Code Skeletons

Chat (Python + Prompt)

# `llm` is assumed to be a thin client wrapper around a chat-completion API.
user_input = "What are the benefits of solar energy?"
prompt = f"Answer in a friendly tone: {user_input}"
response = llm.call(prompt)
print(response)

Workflow (Chained Steps)

def extract_topics(text):
    return llm.call(f"Extract main topics from: {text}")

def summarize(topics):
    return llm.call(f"Write a summary based on: {topics}")

business_question = "Explain renewable energy trends."
topics = extract_topics(business_question)
summary = summarize(topics)
print(summary)

Agent (Iterative Decision + Tool Invocation)

This architecture demonstrates the iterative nature where the LLM is called repeatedly to make the next decision based on the current state and observed results.

class Agent:
    def __init__(self, tools):
        self.tools = tools

    def act(self, user_input, max_steps=5):
        current_state = {'task': user_input, 'history': []}

        for step_count in range(max_steps):
            # 1. LLM plans the NEXT action based on the *current state*
            decision = llm.call(
                f"Based on history {current_state['history']}, "
                f"what is the best next action for task: {user_input}?"
            )

            tool_name, args = parse_decision(decision)  # hypothetical parser for the tool choice

            if tool_name == "FINISH":
                print("Task complete.")
                # Return the last observation as the final result
                history = current_state['history']
                return history[-1]['observation'] if history else "Task finished with no actions."

            if tool_name in self.tools:
                # 2. Execute the action (Tool invocation)
                result = self.tools[tool_name](args)
                
                # 3. Update state with the observation/result for the next loop
                current_state['history'].append({'action': tool_name, 'observation': result})
            else:
                return f"Error: Invalid tool {tool_name} used."
        
        return "Max steps reached without finishing the task."

# Hypothetical usage
agent = Agent(tools={'weather_api': get_weather, 'email': send_email})
agent.act("Schedule outdoor event next week")
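The `parse_decision` helper above is left hypothetical. One common approach is to instruct the LLM to reply in a fixed `TOOL_NAME: arguments` format and parse it with plain string handling; the exact format is a prompt-engineering choice, not a library convention. A minimal sketch:

```python
def parse_decision(decision):
    # Expects replies of the form "TOOL_NAME: arguments",
    # e.g. "weather_api: next week", or just "FINISH" with no arguments.
    tool_name, _, args = decision.partition(":")
    return tool_name.strip(), args.strip()
```

Production agent frameworks typically harden this step further, e.g. by requesting structured (JSON) output and retrying when the model's reply fails to parse.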