# laravel-ai-costs

Maintained by aaix.
Laravel AI Costs provides a zero-config way to calculate API costs for Laravel AI agent responses. Pricing for 2,600+ models is resolved automatically from the LiteLLM community pricing database (cached daily). Pass in a response, usage object, or raw token counts — get back a clean DTO with USD costs.
## How Pricing Works

Pricing is resolved in order:

1. **Local config overrides** — your published `ai-costs.php` (if any)
2. **LiteLLM database** — 2,600+ models, cached once daily
3. **Exception** — thrown if neither source knows the model
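The fallback order above can be sketched as a simple chain of lookups. This is an illustrative sketch only, not the package's actual internals — `resolvePricing` and its array shapes are hypothetical:

```php
// Hypothetical sketch of the resolution order: local overrides win,
// then the cached LiteLLM database, otherwise an exception is thrown.
function resolvePricing(string $model, array $localOverrides, array $litellm): array
{
    if (isset($localOverrides[$model])) {
        return $localOverrides[$model]; // 1. published ai-costs.php wins
    }

    if (isset($litellm[$model])) {
        return $litellm[$model];        // 2. cached LiteLLM database
    }

    throw new RuntimeException("Unknown model: {$model}"); // 3. give up loudly
}
```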
## Requirements

- PHP 8.2+
- Laravel 11 or 12
- `laravel/ai` ^0
## Installation

```bash
composer require aaix/laravel-ai-costs
```
Optionally publish the config to customize model pricing:
```bash
php artisan vendor:publish --tag=ai-costs-config
```
## Usage

### With the Trait (recommended)
Replace `Promptable` with `TracksAiCost` on your agent — it wraps `prompt()` and tracks costs automatically:
```php
use Aaix\LaravelAiCosts\Concerns\TracksAiCost;
use Laravel\Ai\Contracts\Agent;

class MyAgent implements Agent
{
    use TracksAiCost; // replaces Promptable

    // ...
}
```
Every `prompt()` call is tracked transparently:
```php
$agent = MyAgent::make();

$agent->prompt('Analyze this data...');
$agent->prompt('Summarize the results...');

$agent->lastCost();               // AiCostResult for the last prompt
$agent->lastCost()->totalCostUsd; // 0.000345

$agent->costs();        // array of all AiCostResult objects
$agent->totalCostUsd(); // sum of all prompts
$agent->resetCosts();   // clear tracked costs
```
### Direct Calculator
```php
use Aaix\LaravelAiCosts\Services\AiCostCalculator;

// From a laravel/ai response (model & provider auto-resolved)
$cost = AiCostCalculator::fromResponse($response);

// From a laravel/ai Usage object
$cost = AiCostCalculator::fromUsage($usage, 'claude-sonnet-4-6');

// From raw token counts (input, output, model)
$cost = AiCostCalculator::fromTokens(10000, 500, 'deepseek-chat');
```
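The arithmetic behind a token-based calculation is straightforward. The sketch below assumes prices are quoted in USD per 1M tokens (the unit used by the `gpt-4o` example in the Configuration section); the rates shown are hypothetical placeholders, not actual `deepseek-chat` pricing:

```php
// Illustrative cost arithmetic only — real rates come from LiteLLM.
$inputTokens  = 10_000;
$outputTokens = 500;
$inputPrice   = 0.27; // USD per 1M input tokens (hypothetical)
$outputPrice  = 1.10; // USD per 1M output tokens (hypothetical)

$inputCost  = $inputTokens  / 1_000_000 * $inputPrice;  // ≈ 0.0027
$outputCost = $outputTokens / 1_000_000 * $outputPrice; // ≈ 0.00055
$totalCost  = $inputCost + $outputCost;                 // ≈ 0.00325
```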
## AiCostResult DTO

All methods return an `AiCostResult` readonly DTO:
| Property | Type | Description |
|---|---|---|
| `inputCostUsd` | `float` | Input token cost in USD |
| `outputCostUsd` | `float` | Output token cost in USD |
| `totalCostUsd` | `float` | Combined cost in USD |
| `inputTokens` | `int` | Number of input tokens |
| `outputTokens` | `int` | Number of output tokens |
| `model` | `string` | Model identifier |
| `provider` | `string` | Auto-detected or explicit provider |
| `totalCostCents()` | `float` | Combined cost in cents |
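The table above implies a shape like the following readonly value object. This is a sketch for illustration (hence the `Sketch` suffix), not the package's actual source:

```php
// Minimal readonly value object matching the documented DTO shape.
// Requires PHP 8.2+ for readonly classes, as listed in Requirements.
final readonly class AiCostResultSketch
{
    public function __construct(
        public float $inputCostUsd,
        public float $outputCostUsd,
        public float $totalCostUsd,
        public int $inputTokens,
        public int $outputTokens,
        public string $model,
        public string $provider,
    ) {}

    // Convenience helper: USD total expressed in cents.
    public function totalCostCents(): float
    {
        return $this->totalCostUsd * 100;
    }
}
```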
## Configuration
Out of the box, no configuration is needed — pricing comes from LiteLLM automatically.
To override pricing for specific models, publish the config and add entries:
```bash
php artisan vendor:publish --tag=ai-costs-config
```
```php
// config/ai-costs.php
'models' => [
    // Local overrides take precedence over LiteLLM
    'my-custom-model' => ['input' => 1.00, 'output' => 3.00],
    'gpt-4o' => ['input' => 2.50, 'output' => 10.00], // pin a specific price
],
```
Model names have dots removed for config key compatibility (e.g. `gpt-4.1` becomes `gpt-41`); the calculator handles this normalization automatically. Prefix matching is supported — `my-custom*` matches `my-custom-model`, `my-custom-v2`, etc.
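The two lookup rules just described can be sketched as a small helper. `matchConfigKey` is hypothetical and simplified — it only illustrates dot normalization and the trailing-`*` wildcard, not the package's actual matching code:

```php
// Hypothetical sketch: strip dots from the model name, then try an
// exact match before falling back to trailing-'*' prefix wildcards.
function matchConfigKey(string $model, array $modelKeys): ?string
{
    $normalized = str_replace('.', '', $model); // gpt-4.1 -> gpt-41

    foreach ($modelKeys as $key) {
        if ($key === $normalized) {
            return $key; // exact match on the normalized name
        }
        if (str_ends_with($key, '*')
            && str_starts_with($normalized, rtrim($key, '*'))) {
            return $key; // prefix wildcard match
        }
    }

    return null; // no config override; fall through to LiteLLM
}
```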
## Cache

LiteLLM pricing is cached for 24 hours by default. You can adjust this via an environment variable:

```env
AI_COSTS_CACHE_TTL=86400
```
To refresh the cache manually:

```php
use Aaix\LaravelAiCosts\Support\LitellmPricingProvider;

LitellmPricingProvider::clearCache();
```
## Testing

```bash
php artisan test --filter=AiCostCalculator
```
## Contributing
Contributions are welcome! Please open an issue or submit a pull request.
## License
The MIT License (MIT). Please see License File for more information.