Argo Server

v2.5 and after


Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP.

The Argo Server is a server that exposes an API and UI for workflows. You'll need to use this if you want to offload large workflows or the workflow archive.

You can run this in either "hosted" or "local" mode.

It replaces the Argo UI.

Hosted Mode

Use this mode if:

  • You want a drop-in replacement for the Argo UI.
  • You need to prevent users from directly accessing the database.

Hosted mode is provided as part of the standard manifests, specifically in argo-server-deployment.yaml.

Local Mode

Use this mode if:

  • You want something that does not require complex set-up.
  • You do not need to run a database.

To run locally:

argo server

This will start a server on port 2746, which you can view at https://localhost:2746.
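As a quick smoke test, you can start the server in the background and call its version endpoint. This is a sketch that assumes the argo CLI is installed and your kubeconfig points at a cluster; since v3.0 the server serves HTTPS with a self-signed certificate by default, so curl needs -k:

```shell
# Start the Argo Server locally in the background.
argo server &

# Give it a moment to come up, then query the version endpoint.
# -k skips certificate verification (the default cert is self-signed);
# -s silences the progress meter.
sleep 5
curl -ks https://localhost:2746/api/v1/version
```

The response is a small JSON object describing the server build.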


Auth Mode

See auth.

Managed Namespace

See managed namespace.


Base HREF

If the server is running behind a reverse proxy with a sub-path different from / (for example, /argo), you can set an alternative sub-path with the --basehref flag or the BASE_HREF environment variable.
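For example, when running locally, either of the following should serve the API and UI under /argo/ (the flag and the environment variable are equivalent; note the trailing slash):

```shell
# Set the sub-path via the flag...
argo server --basehref /argo/

# ...or via the environment variable.
BASE_HREF=/argo/ argo server
```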

You should probably now read how to set up an ingress.

Transport Layer Security

See TLS.


SSO

See SSO. See here about sharing Argo CD's Dex with Argo Workflows.

Access the Argo Workflows UI

By default, the Argo UI service is not exposed with an external IP. To access the UI, use one of the following:

kubectl port-forward

kubectl -n argo port-forward svc/argo-server 2746:2746

Then visit: https://localhost:2746
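With the port-forward running, you can also hit the API directly from the command line. A minimal sketch, assuming the default self-signed certificate:

```shell
# In another terminal, keep the port-forward running:
#   kubectl -n argo port-forward svc/argo-server 2746:2746

# Query the server's info endpoint; -k skips verification of the
# self-signed certificate that v3.0+ serves by default.
curl -ks https://localhost:2746/api/v1/info
```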

Expose a LoadBalancer

Update the service to be of type LoadBalancer.

kubectl patch svc argo-server -n argo -p '{"spec": {"type": "LoadBalancer"}}'

Then wait for the external IP to be made available:

kubectl get svc argo-server -n argo
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
argo-server   LoadBalancer   10.43.43.130   172.18.0.2    2746:30008/TCP   18h


Ingress

You can get ingress working as follows:

Add BASE_HREF as an environment variable to deployment/argo-server. Do not forget to add a trailing '/' character.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
spec:
  selector:
    matchLabels:
      app: argo-server
  template:
    metadata:
      labels:
        app: argo-server
    spec:
      containers:
      - args:
        - server
        env:
          - name: BASE_HREF
            value: /argo/
        image: argoproj/argocli:latest
        name: argo-server

Create an ingress, with the annotation ingress.kubernetes.io/rewrite-target: /$2:

If TLS is enabled (default in v3.0 and after), the ingress controller must be told that the backend uses HTTPS. The method depends on the ingress controller; e.g. Traefik expects an annotation, while ingress-nginx uses nginx.ingress.kubernetes.io/backend-protocol: https.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argo-server
  annotations:
    ingress.kubernetes.io/rewrite-target: /$2
    ingress.kubernetes.io/protocol: https # Traefik
    nginx.ingress.kubernetes.io/backend-protocol: https # ingress-nginx
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: argo-server
                port:
                  number: 2746
            path: /argo(/|$)(.*)
            pathType: Prefix

Learn more


Security

Users should consider the following in their set-up of the Argo Server:

API Authentication Rate Limiting

Argo Server does not perform authentication directly. It delegates this to either the Kubernetes API Server (when --auth-mode=client) or the OAuth provider (when --auth-mode=sso). In either case, it is recommended that the delegate implements any authentication rate limiting you need.
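For example, with --auth-mode=client the server expects a Kubernetes token in the Authorization header and passes it to the Kubernetes API Server for validation. A hedged sketch, assuming a service account exists in the argo namespace and the server is reachable on localhost:2746:

```shell
# Mint a short-lived token for an existing service account
# (kubectl create token requires Kubernetes v1.24+; the account
# name here is illustrative).
ARGO_TOKEN="Bearer $(kubectl -n argo create token argo-server)"

# Present the token; in client mode the Argo Server delegates its
# validation (and any rate limiting) to the Kubernetes API Server.
curl -ks -H "Authorization: $ARGO_TOKEN" \
  https://localhost:2746/api/v1/workflows/argo
```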

IP Address Logging

Argo Server does not log the IP addresses of API requests. We recommend you put the Argo Server behind a load balancer, and that load balancer is configured to log the IP addresses of requests that return authentication or authorization errors.

Rate Limiting

v3.4 and after

Argo Server rate limits requests to 1000 per IP per minute by default. You can configure this with the --api-rate-limit flag. Additional information is exposed through the following response headers:

  • X-Rate-Limit-Limit - the rate limit ceiling that is applicable for the current request.
  • X-Rate-Limit-Remaining - the number of requests left for the current rate-limit window.
  • X-Rate-Limit-Reset - the time at which the rate limit resets, specified in UTC time.
  • Retry-After - indicates when a client should retry requests (when the rate limit expires), in UTC time.
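You can inspect these headers with a HEAD request. A sketch assuming the server is reachable on localhost:2746 with the default self-signed certificate:

```shell
# -I issues a HEAD request and prints only the response headers;
# grep pulls out the rate-limit fields described above.
# Note: Retry-After is only sent once the limit has been exceeded.
curl -ksI https://localhost:2746/api/v1/version \
  | grep -i -E 'x-rate-limit|retry-after'
```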