
Architecture & deployment

A clear view of SaaS, hybrid, and on‑prem deployment models, aligned with product options.

SaaS hosting (EU)

  • Azure Kubernetes Service (AKS) — EU region: West Europe.
  • EU data hosting (GDPR by design).
  • 99.9% availability SLA (SaaS).

Deployment models

  • SaaS (Cloud‑Managed): managed control plane on Azure (EU).
  • Hybrid: SaaS control plane + execution agents in your infrastructure.
  • On‑prem: Kubernetes deployment in your environment + local LLM Gateway (Enterprise + add-on).
  • Helm charts available (centralized configuration via values.yaml).

Sovereignty and network constraints can influence the recommended model.

ROI estimator

Make the business case in 60 seconds

Estimate potential savings using your assumptions (time reclaimed, reduced operational overhead). Figures are indicative only: avoid double-counting and calibrate inputs to your baseline.

The estimator reports estimated annual savings (split across engineering time reclaimed and reduced Ops/SecOps ticket load) and the payback period.

Payback = monthly investment / estimated monthly savings.
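As a quick illustration of the formula above, a minimal sketch in Python (the figures are made-up inputs, not Argy pricing):

```python
from typing import Optional

def payback_months(monthly_investment: float, monthly_savings: float) -> Optional[float]:
    """Payback = monthly investment / estimated monthly savings.

    Returns None (shown as "n/a") when no savings are modeled yet.
    """
    if monthly_savings <= 0:
        return None
    return monthly_investment / monthly_savings

# Hypothetical inputs: €590/month invested, €2,000/month estimated savings.
print(payback_months(590, 2000))  # fraction of a month to break even
print(payback_months(590, 0))     # nothing modeled yet -> None ("n/a")
```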

What this model doesn't capture

  • Audit and compliance time saved
  • Incidents avoided through standardization
  • Faster onboarding and reduced attrition risk
  • AI cost control via quotas and routing
Unlimited users on all plans

Simple, transparent pricing

Choose the plan that fits your needs. Scale as you grow. No hidden fees.

Free

Discover Argy for free. Perfect for testing the platform.

€0 • Free forever

Included quotas

  • 1 project. In Argy, a project groups an application/service and its environments (dev/staging/prod); it's the unit where you apply modules, deployments, and policies.
  • 3 active modules. An Argy module is a versioned workflow made of actions (nodes) and connections, with inputs/outputs schemas; it can behave like an agent with tools through its actions, including the Argy AI action (a module-specific subagent).
  • 10 pipelines/month. A pipeline is an execution (run) with steps, status, real-time logs, artifacts, and outputs.
  • 10,000 AI tokens/month. AI tokens measure LLM consumption; Argy governs them through the LLM Gateway (quotas, audit, filtering) to control costs and risks.
  • 0 RAG documents. RAG (Retrieval-Augmented Generation) augments prompts with passages retrieved from your documents to deliver grounded, context-aware responses.
  • 0 indexed RAG tokens
  • 0 MB RAG storage
  • 0 RAG queries/month

Features

  • Full Argy Console
  • Standard module catalog
  • Git integrations (GitHub, GitLab)
  • LLM Gateway SaaS included
  • Argy Chat included
  • Community support
  • 99.9% availability SLA (SaaS)
  • No SSO

Starter

For startups and small projects looking to standardize quickly.

€590 excl. VAT/month • Unlimited users

Included quotas

  • 5 projects
  • 20 active modules
  • 100 pipelines/month
  • 100,000 AI tokens/month
  • 20 RAG documents
  • 1,000,000 indexed RAG tokens
  • 200 MB RAG storage
  • 200 RAG queries/month

Features

  • Everything in Free +
  • LLM Gateway SaaS included
  • Argy Code
  • Argy Chat included
  • Module Studio
  • Email support
  • Cloud integrations (AWS, Azure, GCP)
  • 99.9% availability SLA (SaaS)
Most popular

Growth

For scale-ups and critical projects ready to scale.

€2,950 excl. VAT/month • Unlimited users

Included quotas

  • 25 projects
  • 100 active modules
  • 1,000 pipelines/month
  • 500,000 AI tokens/month
  • 100 RAG documents
  • 10,000,000 indexed RAG tokens
  • 2,000 MB RAG storage
  • 1,000 RAG queries/month

Features

  • Everything in Starter +
  • Unlimited Argy Code
  • Argy Chat included
  • Self-hosted execution agents
  • Module Studio
  • RAG (Retrieval-Augmented Generation)
  • Advanced RBAC + Audit logs
  • Approval workflows
  • Priority support
  • 99.9% availability SLA (SaaS)

Enterprise

For large enterprises with compliance and sovereignty requirements.

Custom pricing: starting at ~€8,000 excl. VAT/month

Included quotas

  • Unlimited projects
  • Unlimited active modules
  • Unlimited pipelines
  • Negotiated AI tokens
  • Unlimited RAG documents
  • Unlimited indexed RAG tokens
  • Unlimited RAG storage
  • Negotiated RAG queries

Features

  • Everything in Growth +
  • On-premises LLM Gateway
  • Argy Chat included
  • Self-hosted execution agents
  • RAG on internal data
  • SSO (OIDC/SAML) + SCIM
  • ITSM integration
  • 99.9% SLA guaranteed
  • Dedicated 24/7 support
  • Dedicated or on-premises deployment

Enterprise supports SaaS, dedicated, and on-premises deployments. Get a custom quote tailored to your organization’s needs.

Compare plans at a glance

All the details to help you choose the right plan for your team.

Feature | Free | Starter | Growth | Enterprise
Projects | 1 | 5 | 25 | Unlimited
Active modules | 3 | 20 | 100 | Unlimited
Pipelines/month | 10 | 100 | 1,000 | Unlimited
AI tokens/month | 10,000 | 100,000 | 500,000 | Negotiated
RAG documents | 0 | 20 | 100 | Unlimited
Indexed RAG tokens | 0 | 1,000,000 | 10,000,000 | Unlimited
RAG storage | 0 MB | 200 MB | 2,000 MB | Unlimited
RAG queries/month | 0 | 200 | 1,000 | Negotiated
LLM Gateway | SaaS | SaaS | SaaS | SaaS or on-premises
Argy Code | - | Included | Unlimited | Unlimited
Argy Chat | Included | Included | Included | Included
Module Studio | - | Included | Included | Included
RAG | - | Included | Included | Included
Advanced RBAC + audit logs | - | - | Included | Included
SSO + SCIM | - | Add-on | Add-on | Included
Self-hosted agents | - | - | Add-on | Included
On-premises LLM Gateway | - | - | Add-on | Included
SLA | 99.9% | 99.9% | 99.9% | 99.9% guaranteed
Support | Community | Email | Priority | Dedicated 24/7

Product definitions:

  • LLM Gateway: centralizes and secures AI calls (providers, quotas, audit, filters) and exposes an OpenAI-compatible API.
  • Argy Code: a developer AI agent in the terminal. It supports interactive (TUI) and autonomous execution (argy run) for automation, with built-in tools (Bash, filesystem, Git, MCP).
  • Argy Chat: the governed conversational assistant: projects/folders/conversations workspace, document uploads and indexing (RAG), MCP integrations, private or tenant-shared conversations, and real-time streaming.
  • Module Studio: the visual editor (drag-and-drop) to design modules: actions, connections, simulation, publishing, and versioning. An AI assistant can generate and configure workflows from natural language.

Add-ons & Extra capacity

Extend Argy with optional packs—keep pricing predictable while you scale.

+100,000 AI tokens

Increase your token quota for RAG indexing and the LLM Gateway.

€20 excl. VAT/month

Available for: Free, Starter, Growth, Enterprise

+100 pipelines/month

Run more deployment pipelines each month.

€50 excl. VAT/month

Available for: Starter, Growth, Enterprise

Self-hosted execution agents (hybrid)

Enable a self-hosted agent to run sensitive actions inside your network.

€1,000 excl. VAT/month

Available for: Growth

On-premises LLM Gateway

Deploy the LLM Gateway on your premises to keep AI data internal.

€1,000 excl. VAT/month

Available for: Growth

SSO (OIDC/SAML)

Single sign-on via your identity provider.

€150 excl. VAT/month

Available for: Starter, Growth (included in Enterprise)

SCIM / Directory provisioning

Automatic user and group synchronization.

€250 excl. VAT/month

Available for: Growth, Enterprise

Onboarding training

Personalized training session for your teams.

€990 excl. VAT (one-time)

Available for: All plans

Dedicated support

Priority access to a dedicated support engineer.

€490 excl. VAT/month

Available for: Starter, Growth (included in Enterprise)

Frequently asked questions

Is there a limit on the number of users?

No! All Argy plans include unlimited users. You pay for capabilities (projects, modules, pipelines, AI tokens), not seats.

How does AI token billing work?

1 Argy credit = 1,000,000 tokens. Tokens are consumed when calling the LLM Gateway (Argy Code, AI assistant, RAG). You can track your consumption in real time in the console.
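The conversion can be sketched directly; the quotas below are the monthly plan allowances from this page, and the helper name is made up:

```python
TOKENS_PER_CREDIT = 1_000_000  # stated conversion: 1 Argy credit = 1,000,000 tokens

def tokens_to_credits(tokens: int) -> float:
    """Convert consumed LLM tokens into Argy credits."""
    return tokens / TOKENS_PER_CREDIT

# Monthly included token quotas from the plans on this page:
for plan, quota in [("Free", 10_000), ("Starter", 100_000), ("Growth", 500_000)]:
    print(f"{plan}: {quota:,} tokens/month = {tokens_to_credits(quota)} credits/month")
```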

Can I change plans at any time?

Yes, you can upgrade at any time. The change is effective immediately and billing is prorated. For downgrades, contact our team.

What is the on-premises LLM Gateway?

The on-premises LLM Gateway allows you to deploy the AI gateway in your infrastructure. Your data and LLM API keys stay within your perimeter, ideal for enterprises with sovereignty requirements.

Are self-hosted agents secure?

Yes. Agents only establish outbound connections (HTTPS) to Argy. No inbound ports are exposed. Credentials stay in your infrastructure.

Do you offer discounts for annual commitments?

Yes, we offer a 15% discount for annual commitments. Contact our sales team to learn more.
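The discount arithmetic is simple enough to sketch (monthly prices from this page; the helper is illustrative, not a billing API):

```python
ANNUAL_DISCOUNT = 0.15  # 15% off for annual commitments

def annual_price(monthly_price: float) -> float:
    """Annual cost, excl. VAT, with the annual-commitment discount applied."""
    return monthly_price * 12 * (1 - ANNUAL_DISCOUNT)

print(annual_price(590))    # Starter, €590/month billed annually
print(annual_price(2950))   # Growth, €2,950/month billed annually
```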

Ready to transform your DevSecOps?

Start with the Free plan to test Argy in self-service. Request a demo if you want a guided onboarding or enterprise rollout guidance.

No credit card required • 15% discount on annual plans • Cancel anytime

FAQ

Common questions.

Does Argy replace your existing tools?

No. Argy integrates with your stack (Git, CI/CD, cloud, Kubernetes, observability, identity). Argy’s role is to standardize, automate, and govern through versioned modules (golden paths).

What is an Argy 'module'?

An Argy module is a versioned workflow made of actions (nodes) and connections (DAG), with inputs/outputs schemas. It can behave like an agent with tools through its actions, including the Argy AI action (a module-specific subagent).

What is a Golden Path?

A Golden Path is a versioned module that is validated and approved by the organization. It captures best practices and enables self-service with governance.

How does Argy govern LLM usage?

Through the LLM Gateway: multi-provider routing, fallback chains, quotas, security filters (PII/secrets/prompt injection/forbidden topics), and full request auditability.
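Because the gateway exposes an OpenAI-compatible API, existing OpenAI-style clients can target it with a plain chat-completions payload. A minimal sketch of the request shape; the URL and model name here are hypothetical, and auth/tenant headers depend on your deployment:

```python
import json

# Hypothetical endpoint; the real gateway URL is deployment-specific.
GATEWAY_URL = "https://llm-gateway.example.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "default") -> dict:
    """Build an OpenAI-style chat-completions payload.

    Quotas, auditing, and security filters (PII, secrets, prompt injection)
    are enforced server-side by the gateway, so the client payload stays
    plain OpenAI format.
    """
    return {
        "model": model,  # the gateway can route/fallback across providers
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Summarize last night's failed deploys")
print(json.dumps(payload, indent=2))
```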

What about compliance and traceability?

Approval policies, exportable audit logs (CSV), 90-day minimum retention, correlation IDs, and multi-tenant isolation (PostgreSQL RLS, Redis key prefixes, validated x-tenant-id/x-org-id headers).
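The validated x-tenant-id/x-org-id headers can be pictured as a request-scoped check before any tenant-isolated work happens. A minimal sketch with made-up validation rules (Argy's actual rules are not documented here):

```python
import re
from typing import Mapping, Tuple

# Assumed format: short, slug-like identifiers. This is an illustration,
# not Argy's real validation logic.
_ID_PATTERN = re.compile(r"^[a-z0-9][a-z0-9-]{1,62}$")

def validate_tenant_headers(headers: Mapping[str, str]) -> Tuple[str, str]:
    """Return (tenant_id, org_id), or raise ValueError on missing/malformed headers."""
    tenant = headers.get("x-tenant-id", "")
    org = headers.get("x-org-id", "")
    for name, value in (("x-tenant-id", tenant), ("x-org-id", org)):
        if not _ID_PATTERN.match(value):
            raise ValueError(f"missing or malformed {name} header")
    return tenant, org

print(validate_tenant_headers({"x-tenant-id": "acme", "x-org-id": "acme-eu"}))
```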

Is Argy suitable for large enterprises?

Yes. Argy is built for demanding environments: passwordless-first IAM, RBAC, approval workflows, full auditability, SaaS/hybrid/on‑prem options, and EU sovereignty posture (EU hosting).

  • European SaaS: GDPR compliant & hosted in the EU
  • No lock-in: built on open standards
  • API-first: everything is automatable

Ready to get started with Argy?

Start with the Free plan. Upgrade when you're ready, or contact us for an enterprise rollout.