Speed up research and production
with cost-effective data pipelines
Setting up the infrastructure for large-scale parallel data processing is not easy, and neither is managing the dependencies across a series of such jobs. With our workflow automation, you configure the pipeline once and let data orchestration do the rest.
Configurable system
Whether it is a preprocessing, training, or evaluation job, you can configure the system to run them all. Containers host the environment and the code, workers are spawned on demand, and an optimized planner shortens execution time.
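To make this concrete, here is a minimal sketch in Python of what such a job configuration could capture. The dataclasses, field names, and image URLs are illustrative assumptions, not the product's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: these classes model the configuration described
# above (containerized jobs, on-demand workers, job dependencies).
@dataclass
class JobConfig:
    name: str                     # e.g. "preprocess", "train", "evaluate"
    image: str                    # container image hosting env and code
    command: list[str]            # entry point run inside the container
    workers: int = 1              # workers spawned on demand
    depends_on: list[str] = field(default_factory=list)  # upstream jobs

# One schema covers the whole pipeline: preprocessing, training,
# and evaluation jobs are all configured the same way.
pipeline = [
    JobConfig("preprocess", "registry.example/prep:1.0",
              ["python", "preprocess.py"], workers=8),
    JobConfig("train", "registry.example/train:1.0",
              ["python", "train.py"], workers=4, depends_on=["preprocess"]),
    JobConfig("evaluate", "registry.example/eval:1.0",
              ["python", "evaluate.py"], depends_on=["train"]),
]
```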
Schedule a workflow by selecting a trigger event: a cron schedule, a new commit, a new branch, and more. Stay on top of routine work with automated recurring tasks that save both time and effort.
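For example, trigger selection could be expressed along these lines; the `Trigger` class and event names below are hypothetical stand-ins based on the event types listed above, not a documented interface.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: trigger types modeled on the events described above
# (cron schedule, new commit, new branch).
@dataclass
class Trigger:
    event: str                       # "cron" | "on_commit" | "on_branch"
    cron_expr: Optional[str] = None  # required when event == "cron"

# Run nightly at 02:00, and also on every new commit.
triggers = [
    Trigger(event="cron", cron_expr="0 2 * * *"),
    Trigger(event="on_commit"),
]
```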