Architecture & deployment
A clear view of SaaS, hybrid, and on‑prem deployment models, aligned with product options.
SaaS hosting (EU)
- Azure Kubernetes Service (AKS) — EU region: West Europe.
- EU data hosting (GDPR by design).
- 99.9% availability SLA (SaaS).
Deployment models
- SaaS (Cloud‑Managed): managed control plane on Azure (EU).
- Hybrid: SaaS control plane + execution agents in your infrastructure.
- On‑prem: Kubernetes deployment in your environment + local LLM Gateway (Enterprise + add-on).
- Helm charts available (centralized configuration via values.yaml).
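For the on-prem and hybrid models, configuration is centralized in a values.yaml file. The excerpt below is a hypothetical sketch of what such a file might contain; the key names are illustrative, not the actual Argy chart schema, so consult the published chart for the real values.

```yaml
# Hypothetical values.yaml excerpt for a Helm-based Argy install.
# All key names below are illustrative assumptions, not the real chart schema.
global:
  region: westeurope        # EU hosting posture
llmGateway:
  enabled: true             # local LLM Gateway (Enterprise + add-on)
agents:
  selfHosted: false         # set true for the hybrid model (agents in your network)
```

A deployment would then follow the usual Helm workflow, e.g. `helm install argy <chart> -f values.yaml`.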
Sovereignty and network constraints can influence the recommended model.
Make the business case in 60 seconds
Estimate potential savings using your own assumptions (time reclaimed, reduced operational overhead). Figures are indicative only: avoid double-counting, and calibrate inputs to your baseline.
Payback = monthly investment / estimated monthly savings.
What this model doesn't capture
- Audit and compliance time saved
- Incidents avoided through standardization
- Faster onboarding and reduced attrition risk
- AI cost control via quotas and routing
Simple, transparent pricing
Choose the plan that fits your needs. Scale as you grow. No hidden fees.
Free
Discover Argy for free. Perfect for testing the platform.
Included quotas
- 1 project. A project groups an application/service and its environments (dev/staging/prod); it is the unit where you apply modules, deployments, and policies.
- 3 active modules. A module is a versioned workflow made of actions (nodes) and connections, with input/output schemas; it can behave like an agent with tools through its actions, including the Argy AI action (a module-specific subagent).
- 10 pipelines/month. A pipeline is an execution (run) with steps, status, real-time logs, artifacts, and outputs.
- 10,000 AI tokens/month. AI tokens measure LLM consumption; Argy governs them through the LLM Gateway (quotas, audit, filtering) to control costs and risks.
- 0 RAG documents, 0 indexed RAG tokens, 0 MB RAG storage, 0 RAG queries/month. RAG (Retrieval-Augmented Generation) augments prompts with passages retrieved from your documents to deliver grounded, context-aware responses.
Features
- Full Argy Console
- Standard module catalog
- Git integrations (GitHub, GitLab)
- LLM Gateway SaaS included
- Argy Chat included
- Community support
- 99.9% availability SLA (SaaS)
- No SSO
Starter
For startups and small projects looking to standardize quickly.
Included quotas
- 5 projects
- 20 active modules
- 100 pipelines/month
- 100,000 AI tokens/month
- 20 RAG documents
- 1,000,000 indexed RAG tokens
- 200 MB RAG storage
- 200 RAG queries/month
Features
- Everything in Free +
- LLM Gateway SaaS included
- Argy Code
- Argy Chat included
- Module Studio
- Email support
- Cloud integrations (AWS, Azure, GCP)
- 99.9% availability SLA (SaaS)
Growth
For scale-ups and critical projects ready to scale.
Included quotas
- 25 projects
- 100 active modules
- 1,000 pipelines/month
- 500,000 AI tokens/month
- 100 RAG documents
- 10,000,000 indexed RAG tokens
- 2,000 MB RAG storage
- 1,000 RAG queries/month
Features
- Everything in Starter +
- Unlimited Argy Code
- Argy Chat included
- Self-hosted execution agents
- Module Studio
- RAG (Retrieval-Augmented Generation)
- Advanced RBAC + Audit logs
- Approval workflows
- Priority support
- 99.9% availability SLA (SaaS)
Enterprise
For large enterprises with compliance and sovereignty requirements.
Included quotas
- Unlimited projects
- Unlimited active modules
- Unlimited pipelines
- Negotiated AI tokens
- Unlimited RAG documents
- Unlimited indexed RAG tokens
- Unlimited RAG storage
- Negotiated RAG queries
Features
- Everything in Growth +
- On-premises LLM Gateway
- Argy Chat included
- Self-hosted execution agents
- RAG on internal data
- SSO (OIDC/SAML) + SCIM
- ITSM integration
- 99.9% SLA guaranteed
- Dedicated 24/7 support
- Dedicated or on-premises deployment
Enterprise supports SaaS, dedicated, and on-premises deployments. Get a custom quote tailored to your organization’s needs.
Compare plans at a glance
All the details to help you choose the right plan for your team.
| Feature | Free | Starter | Growth | Enterprise |
|---|---|---|---|---|
| Projects | 1 | 5 | 25 | Unlimited |
| Active modules | 3 | 20 | 100 | Unlimited |
| Pipelines/month | 10 | 100 | 1,000 | Unlimited |
| AI tokens/month | 10,000 | 100,000 | 500,000 | Negotiated |
| RAG documents | 0 | 20 | 100 | Unlimited |
| Indexed RAG tokens | 0 | 1,000,000 | 10,000,000 | Unlimited |
| RAG storage | 0 MB | 200 MB | 2,000 MB | Unlimited |
| RAG queries/month | 0 | 200 | 1,000 | Negotiated |
| LLM Gateway (centralizes and secures AI calls: providers, quotas, audit, filters; exposes an OpenAI-compatible API) | ✓ | ✓ | ✓ | ✓ |
| Argy Code (developer AI agent in the terminal; interactive TUI and autonomous execution via argy run, with built-in tools: Bash, filesystem, Git, MCP) | — | ✓ | ✓ | ✓ |
| Argy Chat (governed conversational assistant: projects/folders/conversations workspace, document uploads and RAG indexing, MCP integrations, private or tenant-shared conversations, real-time streaming) | ✓ | ✓ | ✓ | ✓ |
| Module Studio (visual drag-and-drop editor for modules: actions, connections, simulation, publishing, versioning; an AI assistant can generate and configure workflows from natural language) | — | ✓ | ✓ | ✓ |
| RAG | — | ✓ | ✓ | ✓ |
| Advanced RBAC | — | — | ✓ | ✓ |
| SSO + SCIM | — | Add-on | Add-on | ✓ |
| Self-hosted agents | — | — | ✓ | ✓ |
| On-premises LLM Gateway | — | — | Add-on | ✓ |
| SLA | 99.9% | 99.9% | 99.9% | 99.9% |
| Support | Community | Email | Priority | Dedicated 24/7 |
Add-ons & Extra capacity
Extend Argy with optional packs—keep pricing predictable while you scale.
+100,000 AI tokens
Increase your token quota for RAG indexing and the LLM Gateway.
Available for: Free, Starter, Growth, Enterprise
+100 pipelines/month
Run more deployment pipelines each month.
Available for: Starter, Growth, Enterprise
Self-hosted execution agents (hybrid)
Enable a self-hosted agent to run sensitive actions inside your network.
Available for: Growth
On-premises LLM Gateway
Deploy the LLM Gateway on your premises to keep AI data internal.
Available for: Growth
SSO (OIDC/SAML)
Single sign-on via your identity provider.
Available for: Starter, Growth (included in Enterprise)
SCIM / Directory provisioning
Automatic user and group synchronization.
Available for: Growth, Enterprise
Onboarding training
Personalized training session for your teams.
Available for: All plans
Dedicated support
Priority access to a dedicated support engineer.
Available for: Starter, Growth (included in Enterprise)
Frequently asked questions
Is there a limit on the number of users?
No! All Argy plans include unlimited users. You pay for capabilities (projects, modules, pipelines, AI tokens), not seats.
How does AI token billing work?
1 Argy credit = 1,000,000 tokens. Tokens are consumed when calling the LLM Gateway (Argy Code, AI assistant, RAG). You can track your consumption in real-time in the console.
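The credit-to-token conversion stated above is simple arithmetic; this short sketch makes it explicit (the function name is illustrative, not an Argy API).

```python
TOKENS_PER_CREDIT = 1_000_000  # per the FAQ: 1 Argy credit = 1,000,000 tokens

def credits_consumed(tokens: int) -> float:
    """Convert raw LLM token usage into Argy credits."""
    return tokens / TOKENS_PER_CREDIT

print(credits_consumed(2_500_000))  # 2.5 credits
```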
Can I change plans at any time?
Yes, you can upgrade at any time. The change is effective immediately and billing is prorated. For downgrades, contact our team.
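As a rough illustration of proration, the sketch below assumes simple daily proration of the price difference; the actual billing rules may differ, and all names here are hypothetical.

```python
# Assumption: upgrades are charged as the price difference, prorated by the
# number of days left in the current billing cycle. Illustrative only.
def prorated_upgrade_charge(old_monthly: float, new_monthly: float,
                            days_remaining: int, days_in_cycle: int) -> float:
    """Charge only the price delta for the remainder of the cycle."""
    return round((new_monthly - old_monthly) * days_remaining / days_in_cycle, 2)

# Upgrading halfway through a 30-day cycle from a €49 to a €199 plan:
print(prorated_upgrade_charge(49.0, 199.0, 15, 30))  # 75.0
```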
What is the on-premises LLM Gateway?
The on-premises LLM Gateway allows you to deploy the AI gateway in your infrastructure. Your data and LLM API keys stay within your perimeter, ideal for enterprises with sovereignty requirements.
Are self-hosted agents secure?
Yes. Agents only establish outbound connections (HTTPS) to Argy. No inbound ports are exposed. Credentials stay in your infrastructure.
Do you offer discounts for annual commitments?
Yes, we offer a 15% discount for annual commitments. Contact our sales team to learn more.
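The 15% annual discount works out as follows (a minimal sketch; the example monthly price is made up):

```python
ANNUAL_DISCOUNT = 0.15  # 15% off for annual commitments, per the FAQ

def annual_price(monthly_price: float) -> float:
    """Annual total after the 15% commitment discount."""
    return round(monthly_price * 12 * (1 - ANNUAL_DISCOUNT), 2)

# A hypothetical €100/month plan billed annually:
print(annual_price(100.0))  # 1020.0
```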
Ready to transform your DevSecOps?
Start with the Free plan to test Argy in self-service. Request a demo if you want a guided onboarding or enterprise rollout guidance.
No credit card required • 15% discount on annual plans • Cancel anytime
FAQ
Common questions.
Does Argy replace your existing tools?
No. Argy integrates with your stack (Git, CI/CD, cloud, Kubernetes, observability, identity). Argy’s role is to standardize, automate, and govern through versioned modules (golden paths).
What is an Argy 'module'?
An Argy module is a versioned workflow made of actions (nodes) and connections (DAG), with inputs/outputs schemas. It can behave like an agent with tools through its actions, including the Argy AI action (a module-specific subagent).
What is a Golden Path?
A Golden Path is a versioned module that is validated and approved by the organization. It captures best practices and enables self-service with governance.
How does Argy govern LLM usage?
Through the LLM Gateway: multi-provider routing, fallback chains, quotas, security filters (PII/secrets/prompt injection/forbidden topics), and full request auditability.
What about compliance and traceability?
Approval policies, exportable audit logs (CSV), 90-day minimum retention, correlation IDs, and multi-tenant isolation (PostgreSQL RLS, Redis key prefixes, validated x-tenant-id/x-org-id headers).
Is Argy suitable for large enterprises?
Yes. Argy is built for demanding environments: passwordless-first IAM, RBAC, approval workflows, full auditability, SaaS/hybrid/on‑prem options, and EU sovereignty posture (EU hosting).
European SaaS
GDPR compliant & hosted in EU
No Lock-in
Built on open standards
API-First
Everything is automatable
Ready to get started with Argy?
Start with the Free plan. Upgrade when you're ready, or contact us for an enterprise rollout.