# Deployment Guide
Argy deployment options: SaaS, Hybrid with Self-Hosted Agent, and Enterprise with On-Premises LLM Gateway.
Argy offers multiple deployment modes to adapt to your security, compliance, and data sovereignty requirements.
## SaaS Architecture (High Level)
Argy runs a managed control plane in Europe with dedicated entry points for the Portal, API, and LLM Gateway. Governance is centralized (quotas, audit, policies) while customer workloads remain isolated and scalable.
## High Availability & Continuity
- Multi‑region active/standby: France Central (primary) + North Europe (standby).
- Multi‑zone distribution with autoscaling.
- Automated public endpoint failover.
- Target RPO: 2h (replication + geo backups).
- Target RTO: 30 min (automated failover).
## Deployment Options
### 1. SaaS (Managed Cloud)
The default mode, ideal for a quick start.
Features:
- Managed infrastructure on Azure in Europe
- Native GDPR compliance
- 99.9% availability SLA
- Automatic and transparent updates
- Support included based on your plan
Prerequisites:
- No infrastructure to provision
- Internet access for your users
Access URLs:
- Console: https://portal.argy.cloud
- API: https://api.argy.cloud
- LLM Gateway: https://llm.argy.cloud
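If outbound traffic from your network is filtered, a quick reachability probe of these endpoints can save a support round-trip. This is a minimal sketch; any HTTP status code (even 401/403) proves the endpoint is reachable, while a timeout or DNS failure points to a proxy or firewall issue on your side:

```bash
# Probe the three public Argy endpoints over HTTPS
for host in portal.argy.cloud api.argy.cloud llm.argy.cloud; do
  curl -s -o /dev/null -w "%{http_code}  $host\n" --max-time 10 "https://$host/"
done
```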
### 2. Hybrid (Self-Hosted Agent)
A SaaS control plane combined with agents deployed in your infrastructure to execute sensitive actions.
Features:
- Lightweight Docker agents deployed in your network
- Outbound-only connection (no inbound ports exposed)
- Direct access to your internal resources (Git, Kubernetes, Cloud)
- Real-time log streaming to Argy
Ideal for:
- Enterprises with internal resources not exposed to the Internet
- Environments with network security constraints
- Access to private Kubernetes clusters
Available on: Growth and Enterprise plans.
### Agent Installation
Prerequisites:
- Docker or Kubernetes
- Outbound access to api.argy.cloud:443
- Agent token generated from the Argy console
#### Step 1: Generate an agent token
- Log in to the Argy console
- Go to Settings → Agents
- Click Create an agent
- Name your agent (e.g., agent-prod-datacenter-paris)
- Copy the generated token (it won't be displayed again)
#### Step 2: Deploy the agent with Docker
```bash
docker run -d \
  --name argy-agent \
  --restart unless-stopped \
  -e ARGY_AGENT_TOKEN="your-token-here" \
  -e ARGY_API_URL="https://api.argy.cloud" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ~/.kube:/root/.kube:ro \
  ghcr.io/argy/agent:latest
```
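To confirm the container came up and is talking to the control plane, the standard Docker commands below are usually enough (the agent's exact log format may differ):

```bash
# Confirm the container is running, then inspect its startup logs
docker ps --filter name=argy-agent
docker logs -f argy-agent
```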
#### Step 3: Deploy the agent on Kubernetes
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argy-agent
  namespace: argy-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: argy-agent
  template:
    metadata:
      labels:
        app: argy-agent
    spec:
      serviceAccountName: argy-agent
      containers:
        - name: agent
          image: ghcr.io/argy/agent:latest
          env:
            - name: ARGY_AGENT_TOKEN
              valueFrom:
                secretKeyRef:
                  name: argy-agent-secret
                  key: token
            - name: ARGY_API_URL
              value: "https://api.argy.cloud"
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Secret
metadata:
  name: argy-agent-secret
  namespace: argy-system
type: Opaque
stringData:
  token: "your-token-here"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argy-agent
  namespace: argy-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argy-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # Adjust according to your needs
subjects:
  - kind: ServiceAccount
    name: argy-agent
    namespace: argy-system
```
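Assuming the manifests above are saved as argy-agent.yaml (the filename is illustrative), a typical apply-and-verify sequence looks like this:

```bash
# Create the namespace if needed, apply the manifests, and wait for the rollout
kubectl create namespace argy-system --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -f argy-agent.yaml
kubectl -n argy-system rollout status deployment/argy-agent
# Tail the agent logs to confirm it reached api.argy.cloud
kubectl -n argy-system logs deployment/argy-agent --tail=50
```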
#### Step 4: Verify the connection
In the Argy console, the agent should appear as "Connected" in Settings → Agents.
### 3. Enterprise (On-Premises LLM Gateway)
Deploy the LLM Gateway in your infrastructure to keep your AI data internal.
Features:
- LLM Gateway deployed on your premises
- Your LLM API keys stay within your perimeter
- Complete local audit of AI requests
- Quota synchronization with Argy SaaS
- Full on‑prem option (Enterprise)
Ideal for:
- Large enterprises with strict sovereignty requirements
- Regulated sectors (finance, healthcare, defense)
- Organizations with sensitive data policies
### On-Premises LLM Gateway Installation
Prerequisites:
- Kubernetes cluster (1.25+)
- PostgreSQL 15+ (for traces)
- Redis 7+ (for cache, recommended)
- Outbound access to LLM providers
- Outbound access to api.argy.cloud:443 (quota synchronization)
#### Step 1: Prepare the database
```sql
-- Create the database
CREATE DATABASE argy_llm_gateway;

-- Create the user
CREATE USER argy_llm WITH PASSWORD 'your-secure-password';
GRANT ALL PRIVILEGES ON DATABASE argy_llm_gateway TO argy_llm;
```
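Before deploying the gateway, it can be worth confirming that the new role can actually connect; the host and port below are placeholders for your PostgreSQL instance:

```bash
# Verify connectivity with the new role; \conninfo prints the active connection details
psql "postgresql://argy_llm:your-secure-password@postgres:5432/argy_llm_gateway" -c '\conninfo'
```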
#### Step 2: Configure Kubernetes secrets
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argy-llm-gateway-secrets
  namespace: argy-system
type: Opaque
stringData:
  # Connection to Argy SaaS for quota sync
  ARGY_API_URL: "https://api.argy.cloud"
  ARGY_TENANT_TOKEN: "your-tenant-token"
  # Database
  DATABASE_URL: "postgresql://argy_llm:your-secure-password@postgres:5432/argy_llm_gateway"
  # Redis (optional but recommended)
  REDIS_URL: "redis://redis:6379"
  # LLM provider API keys
  OPENAI_API_KEY: "sk-..."
  ANTHROPIC_API_KEY: "sk-ant-..."
  GOOGLE_API_KEY: "..."
  AZURE_OPENAI_API_KEY: "..."
  AZURE_OPENAI_ENDPOINT: "https://your-resource.openai.azure.com"
```
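If you would rather keep real API keys out of files under version control, the same Secret can be created imperatively; the values below are placeholders, and you would repeat --from-literal for each remaining provider key:

```bash
# Create the secret imperatively so API keys never land in a manifest file
kubectl -n argy-system create secret generic argy-llm-gateway-secrets \
  --from-literal=ARGY_API_URL="https://api.argy.cloud" \
  --from-literal=ARGY_TENANT_TOKEN="your-tenant-token" \
  --from-literal=DATABASE_URL="postgresql://argy_llm:your-secure-password@postgres:5432/argy_llm_gateway" \
  --from-literal=REDIS_URL="redis://redis:6379" \
  --from-literal=OPENAI_API_KEY="sk-..."
```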
#### Step 3: Deploy the LLM Gateway
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argy-llm-gateway
  namespace: argy-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: argy-llm-gateway
  template:
    metadata:
      labels:
        app: argy-llm-gateway
    spec:
      containers:
        - name: llm-gateway
          image: ghcr.io/argy/llm-gateway:latest
          ports:
            - containerPort: 3009
          envFrom:
            - secretRef:
                name: argy-llm-gateway-secrets
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3009
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3009
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: argy-llm-gateway
  namespace: argy-system
spec:
  selector:
    app: argy-llm-gateway
  ports:
    - port: 443
      targetPort: 3009
  type: ClusterIP
```
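Once applied, you can watch the rollout and hit the /health endpoint used by the liveness probe through a temporary port-forward (a sketch; adjust the local port as needed):

```bash
# Wait for the gateway pods, then tunnel to the service and check health
kubectl -n argy-system rollout status deployment/argy-llm-gateway
kubectl -n argy-system port-forward svc/argy-llm-gateway 8443:443 >/dev/null &
PF_PID=$!
sleep 2  # give the tunnel a moment to establish
curl -s http://localhost:8443/health
kill "$PF_PID"
```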
#### Step 4: Configure Ingress (optional)
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argy-llm-gateway
  namespace: argy-system
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - llm.your-domain.internal
      secretName: llm-gateway-tls
  rules:
    - host: llm.your-domain.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argy-llm-gateway
                port:
                  number: 443
```
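With internal DNS and the TLS secret in place, the gateway should answer on the ingress hostname. The -k flag skips certificate verification; drop it once your internal CA is trusted by the client:

```bash
# Smoke-test the gateway through the ingress
curl -sk https://llm.your-domain.internal/health
```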
#### Step 5: Configure the tenant in Argy
- Contact Argy support to enable on-premises mode
- Provide your LLM Gateway URL (e.g., https://llm.your-domain.internal)
- Argy will configure the `llm_gateway_url` claim in your users' JWTs (you can verify this with the sketch below)
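Once support confirms the claim is enabled, you can spot-check a user JWT locally. This is a rough sketch that assumes the token is in $TOKEN and that jq is installed:

```bash
# Extract the JWT payload (second dot-separated segment, base64url-encoded),
# pad it to a multiple of 4, decode it, and print the llm_gateway_url claim
payload=$(echo "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
echo "$payload" | base64 -d | jq '.llm_gateway_url'
```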
## Network Flows to Open
### Hybrid Mode (Self-Hosted Agent)
| Source | Destination | Port | Protocol | Direction | Description |
|---|---|---|---|---|---|
| Argy Agent | api.argy.cloud | 443 | HTTPS/gRPC | Outbound | Agent → Control Plane communication |
| Argy Agent | Your internal resources | Variable | Variable | Internal | Action execution (Terraform, K8s, Git) |
### Enterprise Mode (On-Premises LLM Gateway)
| Source | Destination | Port | Protocol | Direction | Description |
|---|---|---|---|---|---|
| Argy Agent | api.argy.cloud | 443 | HTTPS/gRPC | Outbound | Agent → Control Plane communication |
| LLM Gateway | api.openai.com | 443 | HTTPS | Outbound | OpenAI calls |
| LLM Gateway | api.anthropic.com | 443 | HTTPS | Outbound | Anthropic calls |
| LLM Gateway | generativelanguage.googleapis.com | 443 | HTTPS | Outbound | Google Gemini calls |
| LLM Gateway | api.x.ai | 443 | HTTPS | Outbound | xAI Grok calls |
| LLM Gateway | api.mistral.ai | 443 | HTTPS | Outbound | Mistral AI calls |
| LLM Gateway | *.openai.azure.com | 443 | HTTPS | Outbound | Azure OpenAI calls |
| LLM Gateway | api.argy.cloud | 443 | HTTPS | Outbound | Quota and audit synchronization |
| Argy Code / IDE | Internal LLM Gateway | 443 | HTTPS | Internal | AI requests from dev workstations |
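You can validate these egress flows from inside the cluster before deploying, for example with a throwaway curl pod. The image and endpoint list here are illustrative; trim the list to the providers you actually use:

```bash
# Run a temporary pod and test outbound HTTPS to each provider endpoint
kubectl -n argy-system run egress-check --rm -it --image=curlimages/curl \
  --restart=Never --command -- sh -c '
  for host in api.argy.cloud api.openai.com api.anthropic.com api.mistral.ai; do
    curl -s -o /dev/null -w "%{http_code}  $host\n" --max-time 10 "https://$host/" || echo "FAIL  $host"
  done'
```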
## Architecture Diagram

## Deployment Checklist
### SaaS Mode
- Create an account on portal.argy.cloud
- Configure SSO (optional)
- Invite team members
- Create your first product
### Hybrid Mode
- Generate an agent token in the console
- Deploy the agent (Docker or Kubernetes)
- Verify the connection in the console
- Configure access to internal resources
- Test a first deployment
### Enterprise Mode
- Contact Argy support for activation
- Provision PostgreSQL and Redis
- Deploy the LLM Gateway
- Configure LLM provider API keys
- Configure the LLM Gateway URL in Argy
- Test AI calls from Argy Code