## What is Argo Workflows?
Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition).
- Define workflows where each step is a container.
- Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a directed acyclic graph (DAG).
- Easily run compute-intensive jobs for machine learning or data processing in a fraction of the time using Argo Workflows on Kubernetes.
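Because a Workflow is just a Kubernetes custom resource, a minimal example is a short YAML manifest. The sketch below shows a single-step Workflow running one container (the template name and image are illustrative choices, not required values):

```yaml
# A minimal Workflow: one step, one container.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # the controller appends a random suffix
spec:
  entrypoint: main             # which template to run first
  templates:
    - name: main
      container:
        image: busybox
        command: [echo, "hello world"]
```

You can submit a manifest like this with `kubectl create -f` or the Argo CLI, and the controller schedules the step as a pod on the cluster.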
Argo is a Cloud Native Computing Foundation (CNCF) graduated project.
## Use Cases
- Machine Learning pipelines
- Data and batch processing
- Infrastructure automation
- CI/CD
- Other use cases
## Why Argo Workflows?
- Argo Workflows is the most popular workflow execution engine for Kubernetes.
- Lightweight, scalable, and easy to use.
- Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments.
- Cloud agnostic and can run on any Kubernetes cluster.
Read what people said in our latest survey.
## Try Argo Workflows
You can try Argo Workflows via one of the following:
## Who uses Argo Workflows?
Over 200 organizations are officially using Argo Workflows.
## Ecosystem
Just some of the projects that use or rely on Argo Workflows (complete list here):
- Argo Events
- Couler
- Hera
- Katib
- Kedro
- Kubeflow Pipelines
- Netflix Metaflow
- Onepanel
- Orchest
- Piper
- Ploomber
- Seldon
- SQLFlow
## Client Libraries
Check out our Java, Golang, and Python clients.
## Quickstart
## Documentation
You're here!
## Features
An incomplete list of features Argo Workflows provides:
- UI to visualize and manage Workflows
- Artifact support (S3, Artifactory, Alibaba Cloud OSS, Azure Blob Storage, HTTP, Git, GCS, raw)
- Workflow templating to store commonly used Workflows in the cluster
- Archiving Workflows after execution for later access
- Scheduled workflows using cron
- Server interface with REST API (HTTP and gRPC)
- DAG- or Steps-based declaration of workflows
- Step-level inputs & outputs (artifacts/parameters)
- Loops
- Parameterization
- Conditionals
- Timeouts (step & workflow level)
- Retry (step & workflow level)
- Resubmit (memoized)
- Suspend & Resume
- Cancellation
- K8s resource orchestration
- Exit Hooks (notifications, cleanup)
- Garbage collection of completed workflows
- Scheduling (affinity/tolerations/node selectors)
- Volumes (ephemeral/existing)
- Parallelism limits
- Daemoned steps
- DinD (Docker-in-Docker)
- Script steps
- Event emission
- Prometheus metrics
- Multiple executors
- Multiple pod and workflow garbage collection strategies
- Automatically calculated resource usage per step
- Java/Golang/Python SDKs
- Pod Disruption Budget support
- Single sign-on (OAuth2/OIDC)
- Webhook triggering
- CLI
- Out-of-the-box and custom Prometheus metrics
- Windows container support
- Embedded widgets
- Multiplex log viewer
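To illustrate the DAG-based declaration from the feature list above, here is a hedged sketch of the classic diamond-shaped workflow: task A runs first, B and C run in parallel once A finishes, and D waits for both. All names, images, and parameter values are illustrative:

```yaml
# Illustrative diamond DAG: A -> (B, C in parallel) -> D.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: diamond
  templates:
    - name: diamond
      dag:
        tasks:
          - name: A
            template: echo
            arguments:
              parameters: [{name: message, value: A}]
          - name: B
            depends: A            # runs after A succeeds
            template: echo
            arguments:
              parameters: [{name: message, value: B}]
          - name: C
            depends: A            # runs in parallel with B
            template: echo
            arguments:
              parameters: [{name: message, value: C}]
          - name: D
            depends: "B && C"     # waits for both branches
            template: echo
            arguments:
              parameters: [{name: message, value: D}]
    - name: echo
      inputs:
        parameters:
          - name: message
      container:
        image: busybox
        command: [echo, "{{inputs.parameters.message}}"]
```

The same four tasks could instead be declared as sequential `steps`; the DAG form lets the controller infer the maximum safe parallelism from the declared dependencies.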
## Community Meetings
We host monthly community meetings where we and the community showcase demos and discuss the current and future state of the project. Feel free to join us! For Community Meeting information, minutes and recordings, please see here.
Participation in Argo Workflows is governed by the CNCF Code of Conduct.
## Community Blogs and Presentations
- Awesome-Argo: A Curated List of Awesome Projects and Resources Related to Argo
- Automation of Everything - How To Combine Argo Events, Workflows & Pipelines, CD, and Rollouts
- Argo Workflows and Pipelines - CI/CD, Machine Learning, and Other Kubernetes Workflows
- Argo Ansible role: Provisioning Argo Workflows on OpenShift
- Argo Workflows vs Apache Airflow
- Beyond Prototypes: Production-Ready ML Systems with Metaflow and Argo
- CI/CD with Argo on Kubernetes
- Define Your CI/CD Pipeline with Argo Workflows
- Distributed Machine Learning Patterns from Manning Publications
- Engineering Cloud Native AI Platform
- Managing Thousands of Automatic Machine Learning Experiments with Argo and Katib
- Revolutionizing Scientific Simulations with Argo Workflows
- Running Argo Workflows Across Multiple Kubernetes Clusters
- Scaling Kubernetes: Best Practices for Managing Large-Scale Batch Jobs with Spark and Argo Workflow
- Open Source Model Management Roundup: Polyaxon, Argo, and Seldon
- Producing 200 OpenStreetMap extracts in 35 minutes using a scalable data workflow
- Production-Ready AI Platform on Kubernetes
- Argo integration review
- TGI Kubernetes with Joe Beda: Argo workflow system
## Project Resources
## Security
See Security.