# Laravel AI Gateway

Maintained by fiachehr.

Unified AI gateway for Laravel: one fluent API for multiple providers, normalized response DTOs, and database logging that never breaks the main request flow.
## Table of contents
- Documentation
- Supported providers
- Features
- Requirements
- Installation
- Environment variables
- Configuration
- Usage
- Response object
- Model listing
- Logging
- Error handling
- Extending with new providers
- Testing
- Troubleshooting
- Security
- License
## Documentation

- Hosted HTML: Laravel AI Gateway — Documentation
## Supported providers

| Provider | Key | Model listing |
|---|---|---|
| OpenAI | `openai` | Yes |
| Gemini | `gemini` | Yes |
| DeepSeek | `deepseek` | Yes |
| Claude | `claude` | No (throws capability exception) |
| Ollama | `ollama` | Yes |
## Features

- Fluent chain: `AI::provider()->model()->systemPrompt()->userPrompt()->options()->metadata()->send()`
- `prompt()` alias for the user prompt (backward compatible)
- Service API: `app(AIManager::class)->send(...)` and `listModels()`
- DTOs: `AIRequestData`, `AIResponseData`, `ModelData`
- Factory-based provider resolution; strategy-style adapters
- Eloquent-backed `ai_logs` table via the `AILog` model and repository
- Log failures isolated so they never interrupt AI calls
## Requirements
- PHP 8.3+
- Laravel 10.x, 11.x, or 12.x
## Installation

```bash
composer require fiachehr/laravel-ai-gateway
```

Publish assets and migrate:

```bash
php artisan vendor:publish --tag=aigateway-config
php artisan vendor:publish --tag=aigateway-migrations
php artisan migrate
```

The service provider registers automatically. The facade alias is `AI` → `Fiachehr\AiGateway\Facades\AI`.
## Environment variables

Minimal `.env` example (adjust per provider you use):

```env
AI_GATEWAY_DEFAULT_PROVIDER=openai
AI_GATEWAY_DEFAULT_MODEL=gpt-5-mini
AI_GATEWAY_TIMEOUT=60

OPENAI_API_KEY=
OPENAI_BASE_URL=https://api.openai.com/v1

GEMINI_API_KEY=
GEMINI_BASE_URL=https://generativelanguage.googleapis.com/v1beta

DEEPSEEK_API_KEY=
DEEPSEEK_BASE_URL=https://api.deepseek.com/v1

CLAUDE_API_KEY=
CLAUDE_BASE_URL=https://api.anthropic.com/v1
CLAUDE_API_VERSION=2023-06-01

OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_API_KEY=
```
## Configuration

After publishing, edit `config/aigateway.php`. Defaults come from `env()` as in the block above. Each provider entry has a `base_url` and usually an `api_key` (Ollama may omit the key on a local install).
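As a rough sketch of the published file's shape — the field names below are inferred from the env block above, so check your published copy for the authoritative structure:

```php
<?php

// config/aigateway.php — illustrative shape only; verify against the published file.
return [
    'default_provider' => env('AI_GATEWAY_DEFAULT_PROVIDER', 'openai'),
    'default_model'    => env('AI_GATEWAY_DEFAULT_MODEL', 'gpt-5-mini'),
    'timeout'          => env('AI_GATEWAY_TIMEOUT', 60),

    'providers' => [
        'openai' => [
            'api_key'  => env('OPENAI_API_KEY'),
            'base_url' => env('OPENAI_BASE_URL', 'https://api.openai.com/v1'),
        ],
        'ollama' => [
            // Local installs may omit the key entirely.
            'base_url' => env('OLLAMA_BASE_URL', 'http://localhost:11434'),
        ],
        // gemini, deepseek, and claude follow the same pattern.
    ],
];
```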
## Usage

### Fluent API (recommended)

```php
use Fiachehr\AiGateway\Facades\AI;

$response = AI::provider('openai')
    ->model('gpt-5-mini')
    ->systemPrompt('You are a concise technical assistant.')
    ->userPrompt('Write a short welcome message for new users.')
    ->send();

$text = $response->responseText;
```
### `prompt()` as the user message

```php
$response = AI::provider('gemini')
    ->model('gemini-2.5-flash')
    ->systemPrompt('Respond in Persian.')
    ->prompt('Summarize this text.')
    ->options(['generationConfig' => ['temperature' => 0.2]])
    ->metadata(['source' => 'admin-panel'])
    ->send();
```
### Service layer

```php
use Fiachehr\AiGateway\Services\AIManager;
use Illuminate\Support\Str;

$response = app(AIManager::class)->send(
    provider: 'deepseek',
    model: 'deepseek-chat',
    prompt: 'Explain SOLID briefly.',
    systemPrompt: 'Keep it short and practical.',
    options: [],
    metadata: ['request_id' => (string) Str::uuid()]
);
```
### Default provider and model

If you omit `provider()` or `model()`, the values from `config('aigateway.default_provider')` and `config('aigateway.default_model')` are used.
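With defaults configured, the chain can start straight at the prompt — a minimal sketch, assuming the fluent entry points shown above:

```php
use Fiachehr\AiGateway\Facades\AI;

// Falls back to config('aigateway.default_provider')
// and config('aigateway.default_model').
$response = AI::userPrompt('Give me three Laravel tips.')->send();

echo $response->responseText;
```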
## Response object

`send()` returns `Fiachehr\AiGateway\DTOs\AIResponseData` (readonly). Common properties:

| Property | Type | Description |
|---|---|---|
| `responseText` | `string` | Normalized assistant text |
| `provider` | `string` | Provider key |
| `model` | `string` | Model id used |
| `requestPayload` | `array` | Payload sent to the provider |
| `responsePayload` | `array` | Raw decoded JSON response |
| `requestId` | `?string` | Set when the API returns one |
| `inputTokens` | `?int` | Usage when available |
| `outputTokens` | `?int` | Usage when available |
| `totalTokens` | `?int` | Usage when available |
| `estimatedCost` | `?float` | Reserved; not computed by the package |
| `metadata` | `array` | Merged metadata from request/response |

Use `->toArray()` for logging or API responses.
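For example, a controller action can pick the safe fields off the DTO for a JSON response — a sketch only; the controller and route are hypothetical:

```php
use Fiachehr\AiGateway\Facades\AI;
use Illuminate\Http\JsonResponse;

class SummaryController
{
    public function __invoke(): JsonResponse
    {
        $response = AI::provider('openai')
            ->model('gpt-5-mini')
            ->userPrompt('Summarize our changelog in one sentence.')
            ->send();

        // Avoid echoing requestPayload/responsePayload to untrusted
        // clients; they may contain prompts or provider internals.
        return response()->json([
            'text'   => $response->responseText,
            'model'  => $response->model,
            'tokens' => $response->totalTokens,
        ]);
    }
}
```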
## Model listing

```php
use Fiachehr\AiGateway\Facades\AI;

$models = AI::listModels('openai');
```

Each entry is a `ModelData` instance with `id`, `provider`, `name`, `metadata`, and `toArray()`. For `claude`, listing throws `CapabilityNotSupportedException` by design.
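When iterating several providers, you can guard against that capability gap — a sketch, assuming `listModels()` returns an iterable of `ModelData`:

```php
use Fiachehr\AiGateway\Facades\AI;
use Fiachehr\AiGateway\Exceptions\CapabilityNotSupportedException;

foreach (['openai', 'gemini', 'claude'] as $provider) {
    try {
        foreach (AI::listModels($provider) as $model) {
            echo "{$model->provider}: {$model->id}\n";
        }
    } catch (CapabilityNotSupportedException $e) {
        // Claude does not support listing; skip it.
        continue;
    }
}
```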
## Logging

Rows are written to `ai_logs` (Eloquent model: `Fiachehr\AiGateway\Models\AILog`) with:

- `provider`, `model`, `prompt`
- `request_payload`, `response_payload`, `response_text`
- `status` (`success`/`failed`), `error_message`, `request_id`
- `input_tokens`, `output_tokens`, `total_tokens`, `estimated_cost`, `latency_ms`
- `metadata` (includes `system_prompt` and `user_prompt` when set)

If the database write fails, the error is swallowed so your application flow continues.
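Because `AILog` is a normal Eloquent model, you can query the table directly — a sketch using the column names listed above:

```php
use Fiachehr\AiGateway\Models\AILog;

// The 20 most recent failed calls, for a dashboard or alerting job.
$failed = AILog::query()
    ->where('status', 'failed')
    ->latest()
    ->limit(20)
    ->get(['provider', 'model', 'error_message', 'latency_ms']);
```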
## Error handling

```php
use Fiachehr\AiGateway\Facades\AI;
use Fiachehr\AiGateway\Exceptions\AIException;
use Fiachehr\AiGateway\Exceptions\ProviderRequestException;

try {
    $response = AI::provider('claude')
        ->model('claude-3-5-sonnet')
        ->prompt('Hello')
        ->send();
} catch (ProviderRequestException $e) {
    report($e);
} catch (AIException $e) {
    report($e);
}
```

Exception hierarchy (all extend `AIException` unless noted):

- `ProviderNotSupportedException` — unknown provider key
- `CapabilityNotSupportedException` — e.g. model listing not supported
- `ProviderRequestException` — base for HTTP-level failures
- `ProviderAuthenticationException` — 401/403
- `ProviderRateLimitException` — 429
- `ProviderResponseException` — 5xx from provider
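One way to use the hierarchy is a hand-rolled retry for 429s only — a sketch; the backoff policy is your own choice, not something the package provides:

```php
use Fiachehr\AiGateway\Facades\AI;
use Fiachehr\AiGateway\Exceptions\ProviderRateLimitException;

$response = null;

for ($attempt = 1; $attempt <= 3; $attempt++) {
    try {
        $response = AI::prompt('Hello')->send();
        break;
    } catch (ProviderRateLimitException $e) {
        if ($attempt === 3) {
            throw $e; // give up after three tries
        }
        usleep(500_000 * $attempt); // crude linear backoff
    }
}
```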
## Extending with new providers

- Create a class implementing `Fiachehr\AiGateway\Contracts\AIProviderContract` (`send()` returns `AIResponseData`).
- Optionally implement `Fiachehr\AiGateway\Contracts\ModelListableContract` for `listModels()`.
- Register the provider key in `Fiachehr\AiGateway\Factories\ProviderFactory::make()`.
- Add config under the `providers` key in `config/aigateway.php` and document the env keys in your README.

Bind the concrete class in `AiGatewayServiceProvider` if you need container injection.
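A skeletal custom provider might look like this. This is a sketch only: the contract's exact method signature and the `AIResponseData` constructor arguments are assumptions based on the names above, so check the shipped interfaces before copying:

```php
use Fiachehr\AiGateway\Contracts\AIProviderContract;
use Fiachehr\AiGateway\DTOs\AIRequestData;
use Fiachehr\AiGateway\DTOs\AIResponseData;
use Illuminate\Support\Facades\Http;

class MyProvider implements AIProviderContract
{
    public function __construct(
        private readonly string $apiKey,
        private readonly string $baseUrl,
    ) {}

    public function send(AIRequestData $request): AIResponseData
    {
        // Map the normalized request onto your provider's wire format.
        $payload = [
            'model' => $request->model,
            // ...prompts, options, etc.
        ];

        $raw = Http::withToken($this->apiKey)
            ->post("{$this->baseUrl}/chat", $payload)
            ->throw()
            ->json();

        // Field names here are hypothetical; normalize your provider's
        // response into the package DTO.
        return new AIResponseData(
            responseText: $raw['text'] ?? '',
            provider: 'myprovider',
            model: $request->model,
            requestPayload: $payload,
            responsePayload: $raw,
        );
    }
}
```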
## Testing

From a clone of this repository:

```bash
composer install
./vendor/bin/phpunit -c phpunit.xml
```
## Troubleshooting

| Symptom | What to check |
|---|---|
| `Class "Fiachehr\AiGateway\Facades\AI" not found` | Run `composer dump-autoload`; clear the config cache with `php artisan config:clear`. |
| Migration errors | Publish migrations once; avoid duplicate `ai_logs` table names. |
| Empty `responseText` | Inspect `responsePayload` in the logs; the model or API shape may have changed. |
| Gemini 404 on model | Use the full model id from the Google docs (e.g. `gemini-2.5-flash`). |
| Claude model list fails | Expected: catch `CapabilityNotSupportedException`, or call listing only on supported drivers. |
## Security

- Store API keys only in `.env` / secret managers, never in VCS.
- Treat prompts as sensitive; avoid putting secrets inside logged prompts.
- Use HTTPS `base_url` values for cloud providers.
## License
This package is released under the MIT License.