Think of agents as your smart AI assistants that can handle complex tasks, remember conversations, and even work with other agents. They’re not just chatbots - they’re intelligent systems that can analyze data, make decisions, and take actions.
Smart & Contextual
Remembers conversations, understands context, and learns from interactions
Tool-Powered
Uses custom tools to access databases, APIs, and perform real actions
Multi-Model Support
Works with OpenAI, Anthropic Claude, and Google Gemini models
Team Players
Agents can delegate tasks to other specialized agents
Vector Memory
Built-in semantic search and RAG capabilities with public API access
Testing Friendly
Public methods make testing and experimentation a breeze
Ready to build your first AI agent? It’s easier than you think! Let’s create a helpful customer support agent that can answer questions and solve problems.
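Before running anything, you need the agent class itself. A minimal sketch might look like this (the `BaseLlmAgent` base class and the `$name`/`$instructions` properties follow the patterns shown later on this page; the exact namespace is an assumption):

```php
<?php

namespace App\Agents;

// Namespace of the base class is an assumption; adjust to your installation
use Vizra\VizraADK\Agents\BaseLlmAgent;

class CustomerSupportAgent extends BaseLlmAgent
{
    protected string $name = 'customer_support_agent';

    protected string $instructions = 'You are a friendly customer support agent. '
        . 'Answer questions clearly and help users solve their problems.';
}
```

With the class in place, the `run()` examples below work as written.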
Now comes the fun part - putting your agent to work! Vizra ADK gives you a powerful and simple way to interact with your agents.
One Method to Rule Them All: the run() method is your gateway to agent intelligence. Simple, powerful, and flexible, it handles everything from conversations to complex data processing.
Working with agents feels natural with our fluent API. Check out these examples:
```php
// Basic conversational usage
$response = CustomerSupportAgent::run('How do I reset my password?')
    ->go();

// With user context
$response = CustomerSupportAgent::run('Show me my recent orders')
    ->forUser($user)
    ->go();

// With session for conversation continuity
$response = CustomerSupportAgent::run('What was my previous question?')
    ->withSession($sessionId)
    ->go();

// Event-driven execution
OrderProcessingAgent::run($orderCreatedEvent)
    ->async()
    ->go();

// Data analysis with context
$insights = AnalyticsAgent::run($salesData)
    ->withContext(['period' => 'last_quarter'])
    ->go();

// Process large datasets asynchronously
DataProcessorAgent::run($largeDataset)
    ->async()
    ->onQueue('processing')
    ->go();

// Monitor system health with thresholds
SystemMonitorAgent::run($metrics)
    ->withContext(['threshold' => 0.95])
    ->go();

// Generate reports with specific formats
$report = ReportAgent::run('weekly_summary')
    ->withContext(['format' => 'pdf'])
    ->go();
```
Want to see your agent think in real-time? Enable streaming for that ChatGPT-like experience:
```php
// Enable streaming for real-time responses
$stream = StorytellerAgent::run('Tell me a story')
    ->streaming()
    ->go();

foreach ($stream as $chunk) {
    echo $chunk;
    flush();
}
```
Important: All messages are always saved to the database for session continuity. These settings only control what previous messages are sent to the LLM for context.
By default, agents don't send conversation history to the LLM, which keeps performance high and costs low. For chat agents, though, you'll want context. Here's how to control what history gets sent to the LLM:
```php
// Enable history for conversational agents
class ChatAgent extends BaseLlmAgent
{
    protected bool $includeConversationHistory = true; // Send history to LLM
    protected string $contextStrategy = 'recent';      // 'none', 'recent', 'full'
    protected int $historyLimit = 10;                  // Last 10 messages (for 'recent' strategy)
}

// Override at runtime
$context->setState('include_history', true);
$context->setState('history_depth', 5);
```
Want to add your own special sauce? Agents come with powerful lifecycle hooks that let you customize exactly how they work. It’s like having backstage passes to your agent’s brain!
Want to test different personalities without changing code? Prompt versioning lets you A/B test, switch between tones, and evolve your agent’s voice on the fly!
Switch Prompts at Runtime: no more hardcoding prompts! Store different versions and switch between them instantly.
```php
// Use a specific prompt version
$response = CustomerSupportAgent::run('Help me with my order')
    ->withPromptVersion('friendly')
    ->go();

// A/B test different tones
$version = rand(0, 1) ? 'professional' : 'casual';
$response = CustomerSupportAgent::run($query)
    ->withPromptVersion($version)
    ->go();
```
Create dynamic, context-aware prompts using Laravel’s Blade templating engine. Your prompts can adapt based on user data, session state, and custom variables.
Save your prompt with a .blade.php extension to enable dynamic content:
resources/prompts/agent_name/default.blade.php
```blade
You are {{ $agent['name'] }}, a helpful assistant.

@if(isset($user_name))
    Hello {{ $user_name }}! How can I help you today?
@else
    Hello! How can I assist you?
@endif

@if($context && $context->getState('premium_user'))
    Premium support mode activated
@endif

@if($tools->isNotEmpty())
    I can help you with: {{ $tools->pluck('description')->join(', ') }}
@endif
```
Why have one agent when you can have a whole team? SubAgents let you create specialized agents that work together, with a manager agent delegating tasks to the right specialist. Think of it as building your own AI company!
SubAgents are specialized agents that a parent agent can delegate tasks to. When you define subAgents on an agent, Vizra ADK automatically adds the DelegateToSubAgentTool - allowing your manager agent to intelligently route requests to the right specialist.
```php
class CustomerServiceManager extends BaseLlmAgent
{
    protected string $name = 'customer_service_manager';

    protected string $instructions = "You are a customer service manager.
        Route technical questions to the technical support agent.
        Route billing questions to the billing agent.
        Handle general inquiries yourself.";

    protected array $subAgents = [
        TechnicalSupportAgent::class,
        BillingSupportAgent::class,
        SalesAgent::class,
    ];
}
```
When a user asks a question, the manager agent decides whether to handle it directly or delegate:
```text
User: "My payment failed"
        ↓
Manager Agent analyzes the request
        ↓
Decides: "This is a billing issue"
        ↓
Delegates to BillingSupportAgent
        ↓
BillingSupportAgent handles the request
        ↓
Response returned to user
```
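From the caller's perspective, running a manager agent is no different from running any other agent; delegation happens behind the scenes. A sketch, using the fluent API shown earlier:

```php
// The manager decides whether to answer directly or delegate
$response = CustomerServiceManager::run('My payment failed')
    ->forUser($user)
    ->go();

// $response comes back from whichever agent handled the request
echo $response;
```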
Give your agents superpowers with built-in semantic search and knowledge retrieval! Perfect for building documentation assistants, knowledge bases, and intelligent Q&A systems.
Agent with Vector Memory
```php
class DocumentationAgent extends BaseLlmAgent
{
    protected string $name = 'docs_agent';

    public function loadKnowledge(string $content): void
    {
        // Simple: just add content
        $this->vector()->addDocument($content);

        // Advanced: with full metadata and organization
        $this->vector()->addDocument([
            'content' => $content,
            'metadata' => ['type' => 'docs', 'version' => '2.0'],
            'namespace' => 'knowledge_base',
            'source' => 'user_upload',
        ]);
    }

    public function answerQuestion(string $question, AgentContext $context): mixed
    {
        // Generate contextual answer using RAG
        $ragContext = $this->rag()->generateRagContext($question, [
            'namespace' => 'knowledge_base',
            'limit' => 5,
            'threshold' => 0.7,
        ]);

        if ($ragContext['total_results'] > 0) {
            $contextualInput = "Based on this knowledge:\n"
                . $ragContext['context']
                . "\n\nAnswer: " . $question;
        } else {
            $contextualInput = $question;
        }

        return parent::run($contextualInput, $context);
    }
}

// Public access means you can test directly:
$agent = Agent::named('docs_agent');
$agent->vector()->addDocument('Laravel is awesome!');
$results = $agent->vector()->search('framework');
```
Vector Memory Pro Tips
Use progressive API: simple strings for prototypes, arrays for production
Public methods make testing in Tinkerwell super easy
Organize with namespaces: 'docs', 'faqs', 'policies', etc.
Perfect for building chatbots that remember context
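The namespace tip above might look like this in practice (a sketch using the `addDocument` and `generateRagContext` calls shown earlier; the 'policies' and 'faqs' namespaces are illustrative):

```php
// Keep different document types in separate namespaces
$agent->vector()->addDocument([
    'content'   => 'Refunds are processed within 5 business days.',
    'namespace' => 'policies',
]);

$agent->vector()->addDocument([
    'content'   => 'Q: How do I reset my password? A: Use the "Forgot password" link.',
    'namespace' => 'faqs',
]);

// Retrieve only from one namespace at question time
$ragContext = $agent->rag()->generateRagContext('refund timeline', [
    'namespace' => 'policies',
]);
```

Scoping retrieval to a namespace keeps unrelated content (like FAQs) from diluting the results.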
Your agents aren’t just text wizards - they have eyes too! Send images, documents, and watch the magic happen:
```php
// Send images with your request
$response = VisionAgent::run('What\'s in this image?')
    ->withImage('/path/to/image.jpg')
    ->go();

// Multiple images and documents
$response = DocumentAnalyzer::run('Summarize these documents')
    ->withDocument('/path/to/report.pdf')
    ->withImage('/path/to/chart.png')
    ->withImageFromUrl('https://example.com/diagram.jpg')
    ->go();
```
Unlike that friend who forgets your birthday every year, your agents have perfect memory! They remember everything important about your conversations and context.
When you ask an agent to do something, it goes through a carefully orchestrated lifecycle. Understanding this flow helps you build more powerful agents and debug issues faster!
Called before sending messages to the AI. Perfect for adding context, filtering messages, or injecting system prompts.
```php
public function beforeLlmCall(array $messages, AgentContext $context): array
{
    // Add user preferences to the system message
    $messages[0]['content'] .= "\nUser prefers formal tone.";

    return $messages;
}
```
Called after receiving the AI’s response. Transform responses, extract insights, or trigger side effects. Now includes the original request for complete logging.
```php
public function afterLlmResponse(Response $response, AgentContext $context, ?PendingRequest $request): mixed
{
    // Log token usage and request details for monitoring
    Log::info('LLM call completed', [
        'model' => $request?->model(),
        'usage' => $response->usage,
    ]);

    return $response;
}
```