LaravelAI (easybdit/laraveleasyai), maintained by easybdit
Why LaravelAI?
Building AI features in Laravel normally means a separate SDK, a different response format, and custom error handling for every provider. LaravelAI eliminates all of that.
// Same code. Any provider. Just change the name.
$response = AI::provider('ollama')->chat($messages); // Self-hosted, free
$response = AI::provider('openai')->chat($messages); // ChatGPT
$response = AI::provider('anthropic')->chat($messages); // Claude
$response = AI::provider('deepseek')->chat($messages); // DeepSeek
Built on Laravel's driver pattern — same architecture as Mail, Cache, and Queue.
📦 Installation
Step 1: Install via Composer
composer require easybdit/laraveleasyai
Step 2: Publish config and assets
php artisan vendor:publish --tag=ai-config
php artisan vendor:publish --tag=ai-chat-assets
Step 3: Run migrations
php artisan migrate
Step 4: Add to .env
AI_PROVIDER=ollama
AI_OLLAMA_URL=http://127.0.0.1:11434
AI_OLLAMA_MODEL=qwen2:1.5b
Step 5: Visit /ai-chat in your browser ✅
Requirements
| Requirement | Version |
|---|---|
| PHP | 8.2+ |
| Laravel | 10, 11, 12, 13 |
🚀 Quick Start
use EasyAI\LaravelAI\Facades\AI;
$response = AI::chat([['role' => 'user', 'content' => 'What is Laravel?']]);
echo $response->content;
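chat() returns an AIResponse object, not a bare string; besides the reply text you can read usage metadata off it (full list in the API reference below):

$response = AI::chat([['role' => 'user', 'content' => 'What is Laravel?']]);

echo $response->content;     // the reply text
echo $response->model;       // model that answered
echo $response->totalTokens; // prompt + reply tokens combined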
One-Liner Helper
$answer = ai('What is Laravel?');
Test in Tinker
php artisan tinker
>>> AI::provider('ollama')->health()
=> true
>>> ai('Say hello in 3 words')
=> "Hello there, friend!"
💬 Built-in Chat UI
New in v1.3.0 — a full ChatGPT-like chat app is included, with zero extra setup required.
What you get out of the box
| Feature | Description |
|---|---|
| 💬 Chat UI | ChatGPT-like sidebar with session history |
| ⚡ Streaming | Real-time typing effect |
| 📝 Markdown | Full rendering with syntax-highlighted code |
| 📋 Copy buttons | Per message and per code block |
| 🔄 Provider switcher | Switch Ollama / OpenAI / Claude / DeepSeek live |
| 💾 DB persistence | History survives page refresh |
| 🏷️ Auto-title | First message becomes session title |
| 📁 Projects | RAG-powered knowledge bases (v1.4.0) |
| 📦 Offline assets | No CDN dependency |
Customize the view
php artisan vendor:publish --tag=ai-chat-views
# → resources/views/vendor/laravelai/chat.blade.php
Routes registered automatically
| Method | URL | Description |
|---|---|---|
| GET | `/ai-chat` | Chat UI |
| POST | `/ai-chat/api/sessions` | Create session |
| DELETE | `/ai-chat/api/sessions/{id}` | Delete session |
| GET | `/ai-chat/api/stream` | SSE streaming |
| POST | `/ai-chat/api/provider` | Switch provider |
| GET | `/ai-chat/api/projects` | List projects |
| POST | `/ai-chat/api/projects` | Create project |
| DELETE | `/ai-chat/api/projects/{id}` | Delete project |
| POST | `/ai-chat/api/projects/{id}/files` | Upload & ingest file |
| DELETE | `/ai-chat/api/projects/{id}/files/{fid}` | Delete file |
🗂️ Projects & Knowledge Bases
New in v1.4.0 — Self-hosted Claude-like Projects. Create knowledge bases, upload documents, and get RAG-powered answers scoped per project.
How it works
Create Project → Upload Files → Chat Inside Project → RAG answers from your docs
- Click + next to Projects in the sidebar
- Upload `.txt`, `.md`, or `.pdf` files — auto-ingested into RAG on upload
- Click the project to start a new RAG-powered chat session
- Every message retrieves relevant context from that project's documents only
- Normal chats outside projects are completely unaffected
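The UI drives everything through the routes listed above, so Projects can also be scripted. A minimal sketch using Laravel's HTTP client; the `name` payload key, the `file` form field, and the JSON response shape are assumptions here, so verify them against the actual controllers:

use Illuminate\Support\Facades\Http;

// Hypothetical payload: the 'name' key is an assumption, not confirmed by the docs
$project = Http::post(url('/ai-chat/api/projects'), ['name' => 'Product Docs'])->json();

// Upload a document into the project; files are auto-ingested into RAG
Http::attach('file', file_get_contents('manual.pdf'), 'manual.pdf')
    ->post(url("/ai-chat/api/projects/{$project['id']}/files"));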
What you see in the UI
- 📁 Projects section in sidebar with file count badge
- 🧠 RAG ON badge in chat header when inside a project session
- 📎 Manage Files button — upload, view ingestion status, delete files
- 🟢 Status per file: `pending` → `ingested` → `failed`
- Project context active indicator in the input footer
PDF support (optional)
composer require smalot/pdfparser
RAG Scoping API
$results = AI::rag()->source('project_5')->search('your query');
$answer = AI::rag()->source('project_5')->ask('your question');
AI::rag()->flush('project_5');
🧠 RAG (Built-in)
No external vector database required — uses your existing SQL database.
Setup
ollama pull nomic-embed-text
php artisan migrate
AI_RAG_PROVIDER=ollama
AI_RAG_EMBED_MODEL=nomic-embed-text
Usage
// Store
AI::rag()->ingest('Laravel is a PHP framework using MVC.', 'docs');
// Ask
$answer = AI::rag()->ask('What is Laravel?');
// Search
$results = AI::rag()->search('MVC pattern');
// [['content' => '...', 'source' => 'docs', 'score' => 0.91]]
// Scoped
$results = AI::rag()->source('project_5')->search('your query');
// Flush
AI::rag()->flush();
AI::rag()->flush('project_5');
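Put together, a simple documentation Q&A endpoint might look like this. A sketch only: the route and the `docs` source name are illustrative, while the RAG calls are the ones documented above:

use EasyAI\LaravelAI\Facades\AI;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::post('/docs/ask', function (Request $request) {
    // Answer using only chunks previously ingested under the 'docs' source
    $answer = AI::rag()->source('docs')->ask($request->input('question'));

    return response()->json(['answer' => $answer]);
});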
Artisan
php artisan ai:rag:ingest storage/docs/manual.txt --source=manual
php artisan ai:rag:ingest storage/docs/ --flush
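If your documents change regularly, the ingest command can run on a schedule like any other Artisan command. A sketch assuming Laravel 11+, where scheduling lives in routes/console.php (the path and cadence are illustrative):

// routes/console.php: re-ingest the docs folder nightly, replacing old chunks
use Illuminate\Support\Facades\Schedule;

Schedule::command('ai:rag:ingest storage/docs/ --flush')->daily();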
RAG Configuration
| `.env` Key | Default | Description |
|---|---|---|
| `AI_RAG_PROVIDER` | `ollama` | Embedding provider |
| `AI_RAG_EMBED_MODEL` | `nomic-embed-text` | Embedding model |
| `AI_RAG_CHUNK_SIZE` | `2000` | Max chars per chunk |
| `AI_RAG_TOP_K` | `3` | Chunks retrieved per query |
| `AI_RAG_TABLE` | `ai_documents` | Database table |
🤖 Providers
Ollama — Self-Hosted & Free
AI_PROVIDER=ollama
AI_OLLAMA_URL=http://127.0.0.1:11434
AI_OLLAMA_MODEL=qwen2:1.5b
AI_OLLAMA_TIMEOUT=120
Note for small models (qwen2, qwen2.5): If you get 400 errors with RAG context, set `num_ctx` to match your model's context window:

ollama show qwen2:1.5b --modelfile > /tmp/modelfile
echo "PARAMETER num_ctx 2048" >> /tmp/modelfile
ollama create qwen2-fixed -f /tmp/modelfile

Then use `AI_OLLAMA_MODEL=qwen2-fixed` in `.env`.
OpenAI (ChatGPT)
AI_OPENAI_KEY=sk-your-api-key
AI_OPENAI_MODEL=gpt-4o-mini
Anthropic (Claude)
AI_ANTHROPIC_KEY=sk-ant-your-api-key
AI_ANTHROPIC_MODEL=claude-sonnet-4-20250514
DeepSeek
AI_DEEPSEEK_KEY=sk-your-api-key
AI_DEEPSEEK_MODEL=deepseek-chat
✨ Features
Fluent Builder API
$response = AI::provider('ollama')
->model('qwen2:1.5b')
->temperature(0.9)
->maxTokens(500)
->systemPrompt('You are a helpful Laravel expert.')
->chat([['role' => 'user', 'content' => 'Explain middleware']]);
Streaming
AI::provider('ollama')->stream(
[['role' => 'user', 'content' => 'Write a poem']],
function (string $chunk) { echo $chunk; }
);
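To expose streaming over HTTP yourself (the bundled chat UI already does this via its `/ai-chat/api/stream` route), one pattern is a streamed response that forwards each chunk as a Server-Sent Event. A sketch, not the package's internal implementation:

use EasyAI\LaravelAI\Facades\AI;
use Illuminate\Support\Facades\Route;

Route::get('/ask-stream', function () {
    return response()->stream(function () {
        AI::provider('ollama')->stream(
            [['role' => 'user', 'content' => 'Write a poem']],
            function (string $chunk) {
                // Forward each chunk to the browser as an SSE "data:" frame
                echo 'data: ' . json_encode($chunk) . "\n\n";
                @ob_flush();
                flush();
            }
        );
    }, 200, [
        'Content-Type' => 'text/event-stream',
        'Cache-Control' => 'no-cache',
        'X-Accel-Buffering' => 'no', // disable nginx buffering so chunks flush immediately
    ]);
});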
Health Check + Fallback
foreach (['ollama', 'deepseek', 'openai'] as $provider) {
try {
if (!AI::provider($provider)->health()) continue;
return AI::provider($provider)->chat($messages)->content;
} catch (\Throwable $e) {
Log::warning("{$provider} failed: {$e->getMessage()}");
}
}

// All providers failed or were unhealthy; surface that instead of returning null
throw new \RuntimeException('No AI provider is currently available.');
Token Estimation
$tokens = AI::estimateTokens('Hello world');
$tokens = AI::estimateTokens($messagesArray);
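This makes a handy pre-flight guard before an expensive call; a small sketch (the 4000-token ceiling is an arbitrary example, not a package default):

$messages = [['role' => 'user', 'content' => request('question')]];

// Reject oversized prompts before they ever reach the provider
if (AI::estimateTokens($messages) > 4000) {
    abort(422, 'Prompt too long, please shorten your question.');
}

$response = AI::chat($messages);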
Ollama Advanced Features
AI::provider('ollama')->format('json')->chat($messages);
AI::provider('ollama')->embed('Hello world');
AI::provider('ollama')->keepAlive('10m')->chat($messages);
AI::provider('ollama')->options(['num_ctx' => 2048])->chat($messages);
AI::provider('ollama')->pullModel('llama3.1:8b');
AI::provider('ollama')->runningModels();
AI::provider('ollama')->deleteModel('old-model');
Error Handling
use EasyAI\LaravelAI\Exceptions\ConnectionException;
use EasyAI\LaravelAI\Exceptions\ProviderException;
try {
$response = AI::provider('openai')->chat($messages);
} catch (ConnectionException $e) {
Log::error("Connection failed: " . $e->getMessage());
} catch (ProviderException $e) {
Log::error("Provider [{$e->getProvider()}]: " . $e->getMessage());
}
📖 API Reference
Facade Methods
| Method | Returns | Description |
|---|---|---|
| `AI::chat(array $messages)` | `AIResponse` | Chat with default provider |
| `AI::provider(string $name)` | `AIProvider` | Switch provider |
| `AI::estimateTokens(string\|array)` | `int` | Estimate token count |
| `AI::rag()` | `RAGManager` | Access RAG system |
Provider Methods (Chainable)
| Method | Description |
|---|---|
| `->model($name)` | Set the model |
| `->temperature($float)` | Creativity (0–2) |
| `->maxTokens($int)` | Max response tokens |
| `->systemPrompt($text)` | Set instructions |
| `->timeout($seconds)` | Request timeout |
| `->chat(array $messages)` | Send and get response |
| `->stream(array $messages, callable)` | Stream token by token |
| `->health()` | Check provider reachable |
| `->models()` | List available models |
RAG Methods
| Method | Description |
|---|---|
| `->ingest($text, $source)` | Store as embeddings |
| `->search($query)` | Similarity search |
| `->ask($question)` | RAG-powered Q&A |
| `->source($name)` | Scope to one source |
| `->flush($source?)` | Delete documents |
Ollama-Only Methods
| Method | Description |
|---|---|
| `->format('json')` | Force JSON output |
| `->embed($text)` | Generate embedding |
| `->keepAlive($duration)` | Keep model in memory |
| `->options($array)` | Raw Ollama options (e.g. `num_ctx`) |
| `->pullModel($name)` | Download model |
| `->showModel($name)` | Model details |
| `->deleteModel($name)` | Remove model |
| `->copyModel($src, $dst)` | Copy model |
| `->runningModels()` | List loaded models |
AIResponse Object
| Property | Type | Description |
|---|---|---|
| `$response->content` | `string` | AI reply text |
| `$response->model` | `string` | Model used |
| `$response->promptTokens` | `int` | Input tokens |
| `$response->replyTokens` | `int` | Output tokens |
| `$response->totalTokens` | `int` | Total tokens |
| `$response->provider` | `string` | Provider name |
| `$response->getRaw()` | `array` | Raw API response |
| `(string) $response` | `string` | Cast to string |
Helper Function
ai('Your question')
ai('Your question', 'openai')
ai('Your question', 'anthropic', 'claude-haiku-...')
⚙️ Configuration
// config/ai.php
return [
'default' => env('AI_PROVIDER', 'ollama'),
'providers' => [
'ollama' => ['driver' => 'ollama', 'url' => env('AI_OLLAMA_URL'), 'model' => env('AI_OLLAMA_MODEL', 'qwen2:1.5b'), 'timeout' => env('AI_OLLAMA_TIMEOUT', 120)],
'openai' => ['driver' => 'openai', 'api_key' => env('AI_OPENAI_KEY'), 'model' => env('AI_OPENAI_MODEL', 'gpt-4o-mini'), 'timeout' => 60],
'anthropic' => ['driver' => 'anthropic', 'api_key' => env('AI_ANTHROPIC_KEY'), 'model' => env('AI_ANTHROPIC_MODEL'), 'timeout' => 60],
'deepseek' => ['driver' => 'deepseek', 'api_key' => env('AI_DEEPSEEK_KEY'), 'model' => env('AI_DEEPSEEK_MODEL', 'deepseek-chat'), 'timeout' => 60],
],
'rag' => [
'embed_provider' => env('AI_RAG_PROVIDER', 'ollama'),
'embed_model' => env('AI_RAG_EMBED_MODEL', 'nomic-embed-text'),
'chat_provider' => env('AI_RAG_CHAT_PROVIDER', null),
'chunk_size' => (int) env('AI_RAG_CHUNK_SIZE', 2000),
'top_k' => (int) env('AI_RAG_TOP_K', 3),
'table' => env('AI_RAG_TABLE', 'ai_documents'),
'system_prompt' => env('AI_RAG_SYSTEM_PROMPT', 'Answer using ONLY the context below. If unsure, say so.'),
],
];
Complete .env Reference
# Provider
AI_PROVIDER=ollama
# Ollama (self-hosted, free)
AI_OLLAMA_URL=http://127.0.0.1:11434
AI_OLLAMA_MODEL=qwen2:1.5b
AI_OLLAMA_TIMEOUT=120
# OpenAI
AI_OPENAI_KEY=sk-proj-xxxx
AI_OPENAI_MODEL=gpt-4o-mini
# Anthropic (Claude)
AI_ANTHROPIC_KEY=sk-ant-xxxx
AI_ANTHROPIC_MODEL=claude-sonnet-4-20250514
# DeepSeek
AI_DEEPSEEK_KEY=sk-xxxx
AI_DEEPSEEK_MODEL=deepseek-chat
# RAG (chunk size and top-k are tuned down here for small models; package defaults are 2000 and 3)
AI_RAG_PROVIDER=ollama
AI_RAG_EMBED_MODEL=nomic-embed-text
AI_RAG_CHUNK_SIZE=500
AI_RAG_TOP_K=1
AI_RAG_TABLE=ai_documents
# For small models, also consider capping the Ollama context window
# AI_OLLAMA_NUM_CTX=2048
🧪 Testing
vendor/bin/phpunit
vendor/bin/phpunit --filter=test_ollama_chat
Uses Http::fake() — no real API calls needed.
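The same trick works in your own application tests. A sketch assuming the default Ollama provider; the faked URL and payload follow Ollama's /api/chat response shape, which is an assumption about what the driver parses, so adjust to your setup:

use EasyAI\LaravelAI\Facades\AI;
use Illuminate\Support\Facades\Http;

public function test_chat_returns_the_faked_reply(): void
{
    // Intercept HTTP calls to the local Ollama server; no real model required
    Http::fake([
        'http://127.0.0.1:11434/*' => Http::response([
            'message' => ['content' => 'Laravel is a PHP framework.'],
        ]),
    ]);

    $response = AI::provider('ollama')->chat([
        ['role' => 'user', 'content' => 'What is Laravel?'],
    ]);

    $this->assertSame('Laravel is a PHP framework.', $response->content);
}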
🗺️ Roadmap
| Version | Feature | Status |
|---|---|---|
| v1.0 | Ollama, OpenAI, Anthropic, DeepSeek | ✅ Released |
| v1.1 | Laravel 12 & 13 support | ✅ Released |
| v1.2 | Built-in RAG system + Ollama advanced | ✅ Released |
| v1.3 | Built-in Chat UI | ✅ Released |
| v1.4 | Projects + RAG scoping (self-hosted Claude Projects) | ✅ Released |
| v2.0 | Function / Tool calling | 🔜 Planned |
| v2.0 | Vision / Image input | 🔜 Planned |
| v2.1 | Groq driver | 🔜 Planned |
| v2.1 | Google Gemini driver | 🔜 Planned |
| v2.2 | Response caching | 🔜 Planned |
| v3.0 | Image generation | 🔜 Planned |
❤️ Support
- ⭐ Star this repo on GitHub
- 🐛 Report bugs via Issues
- 🔀 Submit a PR — contributions welcome
- 📢 Share with your developer friends
👤 Credits
Md Murad Hosen — Full-Stack Laravel Vue Developer and DevOps Engineer from Chittagong, Bangladesh 🇧🇩
📄 License
MIT License — free to use in personal and commercial projects. See LICENSE for details.