What’s an Agent?

Think of agents as your smart AI assistants that can handle complex tasks, remember conversations, and even work with other agents. They’re not just chatbots - they’re intelligent systems that can analyze data, make decisions, and take actions.

Smart & Contextual

Remembers conversations, understands context, and learns from interactions

Tool-Powered

Uses custom tools to access databases, APIs, and perform real actions

Multi-Model Support

Works with OpenAI, Anthropic Claude, and Google Gemini models

Team Players

Agents can delegate tasks to other specialized agents

Vector Memory

Built-in semantic search and RAG capabilities with public API access

Testing Friendly

Public methods make testing and experimentation a breeze

Creating Your First Agent

Ready to build your first AI agent? It’s easier than you think! Let’s create a helpful customer support agent that can answer questions and solve problems.

Quick Start with Artisan

Fire up your terminal and run this magical command:
Terminal
php artisan vizra:make:agent CustomerSupportAgent
Boom! You’ve just created your first agent! Let’s peek inside and see what makes it tick:

The Agent Blueprint

app/Agents/CustomerSupportAgent.php
<?php

namespace App\Agents;

use Vizra\VizraADK\Agents\BaseLlmAgent;

class CustomerSupportAgent extends BaseLlmAgent
{
    protected string $name = 'customer_support';

    protected string $description = 'Handles customer inquiries and support requests';

    protected string $instructions = "You are a helpful customer support assistant.
                Be friendly, professional, and solution-oriented.
                Always prioritize customer satisfaction.";

    // Optional: Specify model and parameters
    protected string $model = 'gpt-4o';
    protected ?float $temperature = 0.7;
    protected ?int $maxTokens = 1000;
}

Agent Configuration Options

class AdvancedAgent extends BaseLlmAgent
{
    // Model configuration
    protected string $model = 'gpt-4o';
    protected ?float $temperature = 0.7;
    protected ?int $maxTokens = 1000;
    protected ?float $topP = 0.9;

    // Tools this agent can use
    protected array $tools = [
        OrderLookupTool::class,
        RefundProcessorTool::class,
        EmailSenderTool::class,
    ];

    // Sub-agents this agent can delegate to
    protected array $subAgents = [
        TechnicalSupportAgent::class,
        BillingSupportAgent::class,
    ];

    // Enable streaming responses
    protected bool $streaming = false;
}

Unleashing Your Agent’s Powers

Now comes the fun part - putting your agent to work! Vizra ADK gives you a powerful and simple way to interact with your agents.
One Method to Rule Them All - The run() method is your gateway to agent intelligence. Simple, powerful, and flexible - it handles everything from conversations to complex data processing.

Let’s Chat! Using the Fluent API

Working with agents feels natural with our fluent API. Check out these examples:
// Basic conversational usage
$response = CustomerSupportAgent::run('How do I reset my password?')
    ->go();

// With user context
$response = CustomerSupportAgent::run('Show me my recent orders')
    ->forUser($user)
    ->go();

// With session for conversation continuity
$response = CustomerSupportAgent::run('What was my previous question?')
    ->withSession($sessionId)
    ->go();

// Event-driven execution
OrderProcessingAgent::run($orderCreatedEvent)
    ->async()
    ->go();

// Data analysis with context
$insights = AnalyticsAgent::run($salesData)
    ->withContext(['period' => 'last_quarter'])
    ->go();

// Process large datasets asynchronously
DataProcessorAgent::run($largeDataset)
    ->async()
    ->onQueue('processing')
    ->go();

// Monitor system health with thresholds
SystemMonitorAgent::run($metrics)
    ->withContext(['threshold' => 0.95])
    ->go();

// Generate reports with specific formats
$report = ReportAgent::run('weekly_summary')
    ->withContext(['format' => 'pdf'])
    ->go();

Real-time Magic with Streaming

Want to see your agent think in real-time? Enable streaming for that ChatGPT-like experience:
// Enable streaming for real-time responses
$stream = StorytellerAgent::run('Tell me a story')
    ->streaming()
    ->go();

foreach ($stream as $chunk) {
    echo $chunk;
    flush();
}

Managing Conversation History

Important: All messages are always saved to the database for session continuity. These settings only control what previous messages are sent to the LLM for context.
By default, agents don’t send conversation history to the LLM (for better performance and lower costs). But for chat agents, you’ll want context! Here’s how to control what history gets sent to the LLM:
// Enable history for conversational agents
class ChatAgent extends BaseLlmAgent
{
    protected bool $includeConversationHistory = true; // Send history to LLM
    protected string $contextStrategy = 'recent'; // 'none', 'recent', 'full'
    protected int $historyLimit = 10; // last 10 messages (for 'recent' strategy)
}

// Override at runtime
$context->setState('include_history', true);
$context->setState('history_depth', 5);

Customizing Agent Behavior

Want to add your own special sauce? Agents come with powerful lifecycle hooks that let you customize exactly how they work. It’s like having backstage passes to your agent’s brain!

Available Hooks

  • beforeLlmCall - Tweak messages before they hit the AI
  • afterLlmResponse - Process AI responses your way
  • beforeToolCall - Modify tool inputs on the fly
  • afterToolResult - Transform tool outputs
  • onToolException - Handle tool execution errors
Here’s how to use these superpowers:
class CustomAgent extends BaseLlmAgent
{
    public function beforeLlmCall(array $inputMessages, AgentContext $context): array
    {
        // Modify messages before sending to LLM
        // This is also where tracing starts
        return $inputMessages;
    }

    public function afterLlmResponse(Response|Generator $response, AgentContext $context, ?PendingRequest $request = null): mixed
    {
        // Process the LLM response
        // Access token usage: $response->usage
        // Access original request: $request
        return $response;
    }

    public function beforeToolCall(string $toolName, array $arguments, AgentContext $context): array
    {
        // Modify tool arguments before execution
        return $arguments;
    }

    public function afterToolResult(string $toolName, string $result, AgentContext $context): string
    {
        // Process tool results
        return $result;
    }

    public function onToolException(string $toolName, Throwable $e, AgentContext $context): void
    {
        // Handle tool execution errors
        // Log errors, send alerts, or implement recovery strategies
    }
}

Dynamic Prompts

Want to test different personalities without changing code? Prompt versioning lets you A/B test, switch between tones, and evolve your agent’s voice on the fly!
Switch Prompts at Runtime - No more hardcoding prompts! Store different versions and switch between them instantly.
// Use a specific prompt version
$response = CustomerSupportAgent::run('Help me with my order')
    ->withPromptVersion('friendly')
    ->go();

// A/B test different tones
$version = rand(0, 1) ? 'professional' : 'casual';
$response = CustomerSupportAgent::run($query)
    ->withPromptVersion($version)
    ->go();
Store prompts as .md files in resources/prompts/{agent_name}/ for easy version control! Learn more about Dynamic Prompts

Dynamic Prompts with Blade Templates

Create dynamic, context-aware prompts using Laravel’s Blade templating engine. Your prompts can adapt based on user data, session state, and custom variables.

Quick Example

Save your prompt as .blade.php to enable dynamic content:
resources/prompts/agent_name/default.blade.php
You are {{ $agent['name'] }}, a helpful assistant.

@if(isset($user_name))
Hello {{ $user_name }}! How can I help you today?
@else
Hello! How can I assist you?
@endif

@if($context && $context->getState('premium_user'))
Premium support mode activated
@endif

@if($tools->isNotEmpty())
I can help you with: {{ $tools->pluck('description')->join(', ') }}
@endif

Adding Custom Variables

Inject your own data by implementing getPromptData() in your agent:
class CustomerSupportAgent extends BaseLlmAgent
{
    protected function getPromptData(AgentContext $context): array
    {
        return [
            'company_name' => config('app.company_name'),
            'support_hours' => '9 AM - 5 PM EST',
            'ticket_count' => $this->getUserTicketCount($context),
        ];
    }
}
Then use them in your template:
Welcome to {{ $company_name }} support!
Our hours: {{ $support_hours }}

@if($ticket_count > 0)
You have {{ $ticket_count }} open tickets.
@endif
Learn more about Blade Templates & Advanced Features

SubAgents: Building Agent Teams

Why have one agent when you can have a whole team? SubAgents let you create specialized agents that work together, with a manager agent delegating tasks to the right specialist. Think of it as building your own AI company!

What Are SubAgents?

SubAgents are specialized agents that a parent agent can delegate tasks to. When you define subAgents on an agent, Vizra ADK automatically adds the DelegateToSubAgentTool - allowing your manager agent to intelligently route requests to the right specialist.
class CustomerServiceManager extends BaseLlmAgent
{
    protected string $name = 'customer_service_manager';

    protected string $instructions = "You are a customer service manager.
        Route technical questions to the technical support agent.
        Route billing questions to the billing agent.
        Handle general inquiries yourself.";

    protected array $subAgents = [
        TechnicalSupportAgent::class,
        BillingSupportAgent::class,
        SalesAgent::class,
    ];
}

How SubAgent Delegation Works

When a user asks a question, the manager agent decides whether to handle it directly or delegate:
User: "My payment failed"

1. Manager Agent analyzes the request
2. Decides: "This is a billing issue"
3. Delegates to BillingSupportAgent
4. BillingSupportAgent handles the request
5. Response returned to user
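Putting the flow together, calling the manager looks like any other agent run; the delegation happens internally via the automatically registered DelegateToSubAgentTool. A minimal sketch using the fluent API shown earlier ($user and $sessionId are assumed to come from your own auth and session handling):

```php
<?php

use App\Agents\CustomerServiceManager;

// The user never addresses a specialist directly - the manager
// decides whether to answer itself or delegate, based on each
// subAgent's description property.
$response = CustomerServiceManager::run('My payment failed')
    ->forUser($user)          // assumed: your authenticated user
    ->withSession($sessionId) // keeps delegation within one conversation
    ->go();

// The final answer - produced by BillingSupportAgent and relayed
// back through the manager.
echo $response;
```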

Creating Specialized SubAgents

Each subAgent should be focused on a specific domain:
class TechnicalSupportAgent extends BaseLlmAgent
{
    protected string $name = 'technical_support';

    protected string $description = 'Handles technical issues, bugs, and integration problems';

    protected string $instructions = "You are a technical support specialist.
        Help users with technical issues, debugging, and integration problems.
        Be patient and thorough in your explanations.";

    protected array $tools = [
        LogLookupTool::class,
        SystemStatusTool::class,
    ];
}

class BillingSupportAgent extends BaseLlmAgent
{
    protected string $name = 'billing_support';

    protected string $description = 'Handles billing, payments, and subscription questions';

    protected string $instructions = "You are a billing specialist.
        Help users with payment issues, invoices, and subscription management.";

    protected array $tools = [
        InvoiceLookupTool::class,
        RefundProcessorTool::class,
    ];
}
Pro Tip: Write clear description properties on your subAgents - the manager agent uses these to decide which specialist to delegate to!

SubAgent Benefits

Separation of Concerns

Each agent focuses on what it does best

Specialized Tools

SubAgents can have their own unique tools

Different Models

Use GPT-4 for complex tasks, GPT-3.5 for simple ones

Easier Testing

Test each specialist agent independently

Advanced Agent Techniques

Ready to level up? Here are some pro tips and advanced features that’ll make your agents work harder and smarter!

Vector Memory & RAG

Give your agents superpowers with built-in semantic search and knowledge retrieval! Perfect for building documentation assistants, knowledge bases, and intelligent Q&A systems.
Agent with Vector Memory
class DocumentationAgent extends BaseLlmAgent
{
    protected string $name = 'docs_agent';

    public function loadKnowledge(string $content): void
    {
        // Simple: just add content
        $this->vector()->addDocument($content);

        // Advanced: with full metadata and organization
        $this->vector()->addDocument([
            'content' => $content,
            'metadata' => ['type' => 'docs', 'version' => '2.0'],
            'namespace' => 'knowledge_base',
            'source' => 'user_upload'
        ]);
    }

    public function answerQuestion(string $question, AgentContext $context): mixed
    {
        // Generate contextual answer using RAG
        $ragContext = $this->rag()->generateRagContext($question, [
            'namespace' => 'knowledge_base',
            'limit' => 5,
            'threshold' => 0.7
        ]);

        if ($ragContext['total_results'] > 0) {
            $contextualInput = "Based on this knowledge:\n" .
                             $ragContext['context'] .
                             "\n\nAnswer: " . $question;
        } else {
            $contextualInput = $question;
        }

        return parent::run($contextualInput, $context);
    }
}

// Public access means you can test directly:
$agent = Agent::named('docs_agent');
$agent->vector()->addDocument('Laravel is awesome!');
$results = $agent->vector()->search('framework');
Vector Memory Pro Tips
  • Use the progressive API: simple strings for prototypes, arrays for production
  • Public methods make testing in Tinkerwell super easy
  • Organize with namespaces: ‘docs’, ‘faqs’, ‘policies’, etc.
  • Perfect for building chatbots that remember context

Background Processing with Async

Got heavy lifting to do? Send your agents to work in the background:
// Execute agent asynchronously via queue
$job = DataProcessorAgent::run($largeDataset)
    ->async()
    ->onQueue('processing')
    ->go();

// With delay and retries
$job = ReportAgent::run('quarterly_report')
    ->delay(300) // 5 minutes
    ->tries(3)
    ->timeout(600) // 10 minutes
    ->go();

Vision & Multimodal Magic

Your agents aren’t just text wizards - they have eyes too! Send images, documents, and watch the magic happen:
// Send images with your request
$response = VisionAgent::run('What\'s in this image?')
    ->withImage('/path/to/image.jpg')
    ->go();

// Multiple images and documents
$response = DocumentAnalyzer::run('Summarize these documents')
    ->withDocument('/path/to/report.pdf')
    ->withImage('/path/to/chart.png')
    ->withImageFromUrl('https://example.com/diagram.jpg')
    ->go();

Fine-Tune on the Fly

Need more creativity? Want faster responses? Override any parameter at runtime:
// Override agent parameters at runtime
$response = CreativeWriterAgent::run('Write a poem')
    ->temperature(0.9) // More creative
    ->maxTokens(500)
    ->go();

// Set multiple parameters
$response = AnalyticalAgent::run($data)
    ->withParameters([
        'temperature' => 0.2,
        'max_tokens' => 2000,
        'top_p' => 0.95
    ])
    ->go();

Memory That Actually Remembers

Unlike that friend who forgets your birthday every year, your agents have perfect memory! They remember everything important about your conversations and context.

Conversation History

Every message in the session

Tool Results

What tools did and returned

User Context

Who they’re talking to

Custom Data

Any context you provide
// Add custom context
$response = ShoppingAssistant::run('Find me a laptop')
    ->withContext([
        'budget' => 1500,
        'preferences' => ['brand' => 'Apple'],
        'location' => 'New York'
    ])
    ->go();

Understanding the Agent Lifecycle

When you ask an agent to do something, it goes through a carefully orchestrated lifecycle. Understanding this flow helps you build more powerful agents and debug issues faster!

The Request Journey

1. Your Code → AgentExecutor (Fluent API)
2. AgentExecutor → AgentManager (Orchestration)
3. AgentManager → Agent Instance (Your Logic)
4. Agent → LLM Provider (AI Processing)
5. LLM → Tools (If Needed)
6. Response → Back Through the Chain

Lifecycle Hooks - Your Power Points

Agents provide strategic hooks where you can intercept and modify behavior. These are your control points for customization:

beforeLlmCall() - Pre-Processing

Called before sending messages to the AI. Perfect for adding context, filtering messages, or injecting system prompts.
public function beforeLlmCall(array $messages, AgentContext $context): array
{
    // Add user preferences to system message
    $messages[0]['content'] .= "\nUser prefers formal tone.";
    return $messages;
}

afterLlmResponse() - Post-Processing

Called after receiving the AI’s response. Transform responses, extract insights, or trigger side effects. Now includes the original request for complete logging.
public function afterLlmResponse(Response $response, AgentContext $context, ?PendingRequest $request): mixed
{
    // Log token usage and request details for monitoring
    Log::info('LLM call completed', [
        'model' => $request?->model(),
        'usage' => $response->usage
    ]);
    return $response;
}

Tool Execution Hooks

Control tool execution with beforeToolCall() and afterToolResult() hooks.
public function beforeToolCall(string $toolName, array $arguments, AgentContext $context): array
{
    // Add API keys or validate permissions
    if ($toolName === 'weather_api') {
        $arguments['api_key'] = config('services.weather.key');
    }
    return $arguments;
}

Lifecycle Events

As your agent works, it broadcasts events you can listen to for monitoring, logging, or triggering other actions:
Event                     Description
AgentExecutionStarting    When execution begins
LlmCallInitiating         Before calling the AI
ToolCallInitiating        Before using a tool
ToolCallCompleted         After tool finishes
LlmResponseReceived       AI response received
AgentExecutionFinished    All done!
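Because these are ordinary Laravel events, you can subscribe to them with a standard listener. A minimal sketch, assuming the event classes live under the package's Events namespace and expose the agent name (check the actual event classes for their real properties):

```php
<?php

namespace App\Listeners;

use Illuminate\Support\Facades\Log;
use Vizra\VizraADK\Events\AgentExecutionFinished; // assumed namespace

class LogAgentActivity
{
    public function handle(AgentExecutionFinished $event): void
    {
        // Hypothetical payload - inspect the event class for the
        // properties it actually exposes.
        Log::info('Agent execution finished', [
            'agent' => $event->agentName ?? null,
        ]);
    }
}

// Register it in your EventServiceProvider or a boot() method:
// Event::listen(AgentExecutionFinished::class, LogAgentActivity::class);
```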

Pro Tips from the Trenches

Want to build agents that users actually love? Here’s the wisdom we’ve gathered from building hundreds of agents:

Crystal Clear Instructions

Write instructions like you’re explaining to a smart friend. Be specific about what you want!

Right Model for the Job

GPT-4 for complex reasoning, GPT-3.5 for quick tasks. Don’t use a Ferrari to go to the corner store!

Hook Into Everything

Use lifecycle hooks for debugging and monitoring. You’ll thank yourself later!

Delegate Like a Boss

Complex workflows? Use sub-agents! Let specialists handle what they do best.

Stream for the Win

Enable streaming for long responses. Users love seeing agents “think” in real-time!

Ready to Build Something Amazing?

You’ve got the knowledge, now let’s put it to work!