Simple, transparent pricing
Choose the plan that fits your needs. Scale as you grow. No hidden fees.
Argy Virtual Tokens: radical transparency on AI costs
The rule: 1 Argy credit = 1M virtual tokens = €50. For every 1M virtual tokens, your team accesses ~3.8M real provider tokens on average (OpenAI, Anthropic, Mistral, Google, xAI…) — thanks to intelligent routing. The console shows consumption in millions. No opaque credits, no hidden margin.
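In code, the published rule works out as follows (a minimal sketch; the ×3.8 figure is the advertised average, not a guarantee, since actual ratios depend on routing):

```python
# Argy virtual-token pricing: 1 credit = 1M virtual tokens = EUR 50.
CREDIT_VIRTUAL_TOKENS = 1_000_000
CREDIT_PRICE_EUR = 50.0
AVG_REAL_PER_VIRTUAL = 3.8  # stated average, not a contractual ratio

def virtual_to_real(virtual_tokens: float) -> float:
    """Approximate real provider tokens accessed for a virtual-token amount."""
    return virtual_tokens * AVG_REAL_PER_VIRTUAL

def cost_eur(virtual_tokens: float) -> float:
    """Euro cost of a virtual-token amount at the flat credit price."""
    return virtual_tokens / CREDIT_VIRTUAL_TOKENS * CREDIT_PRICE_EUR

# One full credit:
print(virtual_to_real(1_000_000))  # 3800000.0
print(cost_eur(1_000_000))         # 50.0
```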
Free
Discover Argy for free. Perfect for testing the platform.
Included quotas
- 1 project. A project groups an application/service and its environments (dev/staging/prod); it is the unit where you apply modules, deployments, and policies.
- 3 active modules. An Argy module is a versioned workflow made of actions (nodes) and connections, with inputs/outputs schemas. Through its actions, including the Argy AI action (a module-specific subagent), it can behave like an agent with tools.
- 10 pipelines/month. A pipeline is an execution (run) with steps, status, real-time logs, artifacts, and outputs.
- 0.01M virtual tokens/month (~0.038M real provider tokens). AI tokens measure LLM consumption; Argy governs them through the LLM Gateway (quotas, audit, filtering) to control costs and risks.
- 0 RAG documents. RAG (Retrieval-Augmented Generation) augments prompts with passages retrieved from your documents to deliver grounded, context-aware responses.
- 0 indexed RAG tokens
- 0 MB RAG storage
- 0 RAG queries/month
Features
- Full Argy Console
- Standard module catalog
- Git integrations (GitHub, GitLab)
- LLM Gateway SaaS included
- Argy Chat included
- Real-time AI consumption dashboard
- Community support
- 99.9% availability SLA (SaaS)
- No SSO
Starter
For startups and small projects looking to standardize quickly.
Included quotas
- 5 projects
- 20 active modules
- 100 pipelines/month
- 0.1M virtual tokens/month (~0.38M real provider tokens)
- 20 RAG documents
- 1,000,000 indexed RAG tokens
- 200 MB RAG storage
- 200 RAG queries/month
Features
- Everything in Free +
- Multi-provider LLM Gateway SaaS (OpenAI, Anthropic, Mistral…)
- Argy Code — governed CLI agent
- Argy Chat included
- Module Studio
- Smart routing — cheapest model fit for each task
- Email support
- Cloud integrations (AWS, Azure, GCP)
- 99.9% availability SLA (SaaS)
Growth
For scale-ups and critical projects ready to scale.
Included quotas
- 25 projects
- 100 active modules
- 1,000 pipelines/month
- 0.5M virtual tokens/month (~1.9M real provider tokens)
- 100 RAG documents
- 10,000,000 indexed RAG tokens
- 2,000 MB RAG storage
- 1,000 RAG queries/month
Features
- Everything in Starter +
- Argy Code — unlimited access, all models
- Argy Chat included
- Self-hosted execution agents (add-on)
- Module Studio — reusable Golden Paths
- Tenant-aware RAG (Retrieval-Augmented Generation)
- Advanced RBAC + Audit logs (AI Act compliance)
- Approval workflows
- Shadow AI detection — centralizes ungoverned AI usage
- Priority support
- 99.9% availability SLA (SaaS)
Enterprise
For large enterprises with compliance and sovereignty requirements.
Included quotas
- Unlimited projects
- Unlimited active modules
- Unlimited pipelines
- Negotiated virtual tokens (~3.8M real tokens per 1M virtual tokens)
- Unlimited RAG documents
- Unlimited indexed RAG tokens
- Unlimited RAG storage
- Negotiated RAG queries/month
Features
- Everything in Growth +
- On-premises LLM Gateway — AI data stays in your perimeter
- Argy Chat included
- Self-hosted execution agents
- RAG on internal data
- SSO (OIDC/SAML) + SCIM
- ITSM integration
- Native AI Act compliance — audit logs, PII traceability
- EU Sovereign Cloud or on-premises deployment
- 99.9% SLA guaranteed
- Dedicated 24/7 support
Enterprise supports SaaS, dedicated, and on-premises deployments. Get a custom quote tailored to your organization’s needs.
Compare plans at a glance
All the details to help you choose the right plan for your team.
| Feature | Free | Starter | Growth | Enterprise |
|---|---|---|---|---|
| Projects | 1 | 5 | 25 | Unlimited |
| Active modules | 3 | 20 | 100 | Unlimited |
| Pipelines/month | 10 | 100 | 1,000 | Unlimited |
| Virtual tokens/month | 0.01M | 0.1M | 0.5M | Negotiated |
| RAG documents | 0 | 20 | 100 | Unlimited |
| Indexed RAG tokens | 0 | 1,000,000 | 10,000,000 | Unlimited |
| RAG storage | 0 MB | 200 MB | 2,000 MB | Unlimited |
| RAG queries/month | 0 | 200 | 1,000 | Negotiated |
| LLM Gateway | ✓ | ✓ | ✓ | ✓ |
| Argy Code | — | ✓ | ✓ | ✓ |
| Argy Chat | ✓ | ✓ | ✓ | ✓ |
| Module Studio | — | ✓ | ✓ | ✓ |
| RAG | — | ✓ | ✓ | ✓ |
| Advanced RBAC | — | — | ✓ | ✓ |
| SSO + SCIM | — | Add-on | Add-on | ✓ |
| Self-hosted agents | — | — | Add-on | ✓ |
| LLM Gateway on-premises | — | — | Add-on | ✓ |
| SLA | 99.9% | 99.9% | 99.9% | 99.9% |
| Support | Community | Email | Priority | Dedicated 24/7 |

Glossary for the rows above:
- LLM Gateway: centralizes and secures AI calls (providers, quotas, audit, filters) and exposes an OpenAI-compatible API.
- Argy Code: a developer AI agent in the terminal, with interactive (TUI) and autonomous execution (argy run) for automation, plus built-in tools (Bash, filesystem, Git, MCP).
- Argy Chat: the governed conversational assistant: projects/folders/conversations workspace, document uploads and indexing (RAG), MCP integrations, private or tenant-shared conversations, and real-time streaming.
- Module Studio: the visual drag-and-drop editor for designing modules: actions, connections, simulation, publishing, and versioning. An AI assistant can generate and configure workflows from natural language.
Add-ons & Extra capacity
Extend Argy with optional packs—keep pricing predictable while you scale.
+1M Argy virtual tokens (1 credit)
1 credit = 1M virtual tokens = ~3.8M real provider tokens on average. Increase your quota for LLM Gateway, Argy Code and RAG. Buy as many credits as needed.
Available for: Free, Starter, Growth, Enterprise
+100 pipelines/month
Run more deployment pipelines each month.
Available for: Starter, Growth, Enterprise
Self-hosted execution agents (hybrid)
Enable a self-hosted agent to run sensitive actions inside your network.
Available for: Growth
On-premises LLM Gateway
Deploy the LLM Gateway on your premises to keep AI data internal.
Available for: Growth
SSO (OIDC/SAML)
Single sign-on via your identity provider.
Available for: Starter, Growth (included in Enterprise)
SCIM / Directory provisioning
Automatic user and group synchronization.
Available for: Growth, Enterprise
Onboarding training
Personalized training session for your teams.
Available for: All plans
Dedicated support
Priority access to a dedicated support engineer.
Available for: Starter, Growth (included in Enterprise)
Frequently asked questions
Is there a limit on the number of users?
No! All Argy plans include unlimited users. You pay for capabilities (projects, modules, pipelines, tokens), not seats.
What is an Argy virtual token?
The tokens shown in your Argy quotas are Argy Virtual Tokens — a normalized, transparent unit. The rule is simple and fully transparent: 1 Argy credit = 1 million virtual tokens = €50. For every million virtual tokens, Argy's LLM Gateway gives you access to an average of 3.8 million real provider tokens (OpenAI, Anthropic, Mistral, Google, xAI…), thanks to intelligent routing that picks the most efficient model for each task. No hidden margin: the dashboard shows consumption in millions of virtual tokens and the real equivalent per provider, in real time.
How does AI token billing work?
1 Argy credit = 1 million virtual tokens = €50. Tokens are consumed on every LLM Gateway call (Argy Code, AI assistant, RAG). The console displays consumption in millions (e.g. 0.84M virtual → ~3.2M real this month). Smart routing directs each request to the most cost-efficient model, maximizing the real token equivalent you get for every virtual token spent.
Can I change plans at any time?
Yes, you can upgrade at any time. The change is effective immediately and billing is prorated. For downgrades, contact our team.
What is the on-premises LLM Gateway?
The on-premises LLM Gateway allows you to deploy the AI gateway in your infrastructure. Your data and LLM API keys stay within your perimeter, ideal for enterprises with sovereignty and AI Act compliance requirements.
Are self-hosted agents secure?
Yes. Agents only establish outbound connections (HTTPS) to Argy. No inbound ports are exposed. Credentials stay in your infrastructure.
Do you offer discounts for annual commitments?
Yes, we offer a 15% discount for annual commitments. Contact our sales team to learn more.
How does Argy compare to GitHub Copilot or Claude Code?
GitHub Copilot and Claude Code are excellent tools but locked to a single provider and lack enterprise governance. Argy Code delivers the same code generation power while being provider-agnostic: switch from Claude Sonnet to GPT-5 or Mistral without changing your workflow. Add AI Act compliance, Shadow AI detection, and transparent costs via virtual tokens — that is the Argy advantage.
Ready to transform your DevSecOps?
Start with the Free plan to test Argy in self-service. Request a demo if you want a guided onboarding or enterprise rollout guidance.
No credit card required • 15% discount on annual plans • Cancel anytime
Total visibility on every AI request
1 Argy credit = 1M virtual tokens = €50. For every 1M virtual tokens, your team gets ~3.8M real provider tokens on average. The console always shows consumption in millions — no black box, no surprise bill.
- 0.84 M virtual tokens consumed this month (1 credit = 1M = €50)
- ~3.2 M real provider tokens: the effective equivalent (~3.8× avg.)
- −67% savings vs direct pricing, via smart routing
- 100% AI Act compliance: audited & traceable requests
Token consumption — March 2026
Breakdown by model · in millions (M) · virtual tokens → real provider tokens
| Model | Virtual (M) | Real provider (M) | Ratio | Usage share |
|---|---|---|---|---|
| claude-sonnet-4-6 | 0.32 M | 0.96 M | ×3.0 | 38% |
| gpt-5.4 | 0.22 M | 0.55 M | ×2.5 | 26% |
| gemini-3.1-pro | 0.14 M | 0.57 M | ×4.0 | 17% |
| devstral | 0.09 M | 0.47 M | ×5.0 | 11% |
| grok-code | 0.06 M | 0.24 M | ×4.0 | 8% |
| Total (avg. ratio) | 0.83 M | ~3.2 M | ~×3.8 | 100% |
Make the business case in 60 seconds
Estimate potential savings using your assumptions (time reclaimed, reduced operational overhead). Indicative only: avoid double-counting and calibrate inputs to your own baseline.
Payback = monthly investment / estimated monthly savings.
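The formula above can be sketched as follows (the figures in the example are placeholders, not Argy pricing):

```python
# Payback = monthly investment / estimated monthly savings, expressed in months.

def payback_months(monthly_investment_eur: float, monthly_savings_eur: float):
    """Return payback in months, or None when there are no savings yet."""
    if monthly_savings_eur <= 0:
        return None
    return monthly_investment_eur / monthly_savings_eur

print(payback_months(500.0, 2000.0))  # 0.25
print(payback_months(500.0, 0.0))     # None
```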
What this model doesn't capture
- Audit and compliance time saved
- Incidents avoided through standardization
- Faster onboarding and reduced attrition risk
- AI cost control via quotas and routing
Architecture & deployment
A clear view of SaaS, hybrid, and on‑prem deployment models, aligned with product options.
SaaS hosting (EU)
- Azure Kubernetes Service (AKS) — EU region: France Central.
- EU data hosting (GDPR by design).
- 99.9% availability SLA (SaaS).
Deployment models
- SaaS (Cloud-Managed): managed control plane on Azure (EU).
- Hybrid: SaaS control plane + execution agents in your infrastructure.
- On-prem: Kubernetes deployment in your environment + local LLM Gateway (Enterprise + add-on).
- Helm charts available (centralized configuration via values.yaml).
Sovereignty and network constraints can influence the recommended model.
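A Helm deployment with centralized configuration might look like the fragment below. Every key name here is hypothetical, shown only to illustrate the idea; the chart's actual values.yaml schema is authoritative.

```yaml
# Hypothetical values.yaml fragment for an on-prem Argy deployment.
# Key names are illustrative, not the chart's real schema.
global:
  region: eu
controlPlane:
  replicas: 2
llmGateway:
  enabled: true        # local LLM Gateway (Enterprise + add-on)
agents:
  selfHosted: true     # outbound-only HTTPS to the control plane
```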
FAQ
Common questions.
Does Argy replace your existing tools?
No. Argy integrates with your stack (Git, CI/CD, cloud, Kubernetes, observability, identity). Argy’s role is to standardize, automate, and govern through versioned modules (golden paths).
What is an Argy 'module'?
An Argy module is a versioned workflow made of actions (nodes) and connections (DAG), with inputs/outputs schemas. It can behave like an agent with tools through its actions, including the Argy AI action (a module-specific subagent).
What is a Golden Path?
A Golden Path is a versioned module that is validated and approved by the organization. It captures best practices and enables self-service with governance.
How does Argy govern LLM usage?
Through the LLM Gateway: multi-provider routing, fallback chains, quotas, security filters (PII/secrets/prompt injection/forbidden topics), and full request auditability.
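Since the gateway exposes an OpenAI-compatible API, a governed call could look like the sketch below, which only assembles the request. The endpoint URL, header name, and "auto" model alias are hypothetical; check your console for the real values.

```python
import json

def build_chat_request(tenant_id: str, model: str, prompt: str) -> dict:
    """Assemble URL, governance headers, and an OpenAI-style chat body."""
    return {
        "url": "https://gateway.example.invalid/v1/chat/completions",  # hypothetical
        "headers": {
            "Authorization": "Bearer YOUR_ARGY_GATEWAY_KEY",
            "x-tenant-id": tenant_id,  # tenant isolation header
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,  # e.g. an alias letting routing pick the model
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("acme", "auto", "Summarize our deployment policy.")
print(req["url"])
```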
What about compliance and traceability?
Approval policies, exportable audit logs (CSV), 90-day minimum retention, correlation IDs, and multi-tenant isolation (PostgreSQL RLS, Redis key prefixes, validated x-tenant-id/x-org-id headers).
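The CSV export described above could be sketched like this. The field names are illustrative, not Argy's actual export schema:

```python
import csv
import io

# Example audit records: one row per gateway request, keyed by correlation ID.
records = [
    {"correlation_id": "req-001", "tenant_id": "acme",
     "model": "claude-sonnet-4-6", "virtual_tokens": 1200, "decision": "allowed"},
    {"correlation_id": "req-002", "tenant_id": "acme",
     "model": "gpt-5.4", "virtual_tokens": 800, "decision": "filtered-pii"},
]

def export_audit_csv(rows: list[dict]) -> str:
    """Serialize audit rows to CSV, one line per request plus a header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(export_audit_csv(records).splitlines()[0])
# correlation_id,tenant_id,model,virtual_tokens,decision
```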
Is Argy suitable for large enterprises?
Yes. Argy is built for demanding environments: passwordless-first IAM, RBAC, approval workflows, full auditability, SaaS/hybrid/on‑prem options, and EU sovereignty posture (EU hosting).
European SaaS
GDPR compliant & hosted in EU
No Lock-in
Built on open standards
API-First
Everything is automatable
Ready to get started with Argy?
Start with the Free plan. Upgrade when you're ready, or contact us for an enterprise rollout.