Get in touch
- +(91) 902 999 3008
- hello@quml.ai
- Mumbai — B-8, Star Trade Center, Chamunda Circle, Borivali West, 400092



MLOps Engineer
We're seeking an MLOps Engineer to build and maintain the infrastructure that powers our machine learning systems. You'll bridge the gap between data science and production, creating robust pipelines for training, deploying, and monitoring ML models at scale. This role is critical for ensuring our AI products are reliable, scalable, and continuously improving.
What You'll Do
- Design and implement CI/CD pipelines for ML model training and deployment
- Build and maintain ML infrastructure using Kubernetes, Docker, and cloud platforms
- Develop automated workflows for data preprocessing, model training, and evaluation
- Implement model monitoring systems to track performance degradation and data drift
- Create model versioning and experiment tracking systems (MLflow, Weights & Biases)
- Optimize ML model serving infrastructure for low latency and high throughput
- Establish best practices for reproducible ML experiments and deployments
- Implement automated testing for ML models and data pipelines
- Collaborate with data scientists to streamline model development workflows
- Manage cloud resources and optimize costs for ML workloads
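To give a flavor of the monitoring work above: tracking data drift often starts with a simple distributional comparison between training data and live traffic. The sketch below is a minimal, illustrative implementation of the Population Stability Index (PSI), a common drift metric; the function name and the 0.2 alert threshold are conventions for illustration, not a prescribed part of our stack.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a live feature distribution ('actual') against the training
    baseline ('expected'). A PSI above ~0.2 is commonly treated as
    significant drift worth an alert."""
    # Bin both samples using edges derived from the baseline distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions; clip to avoid log(0) for empty bins
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

In practice a check like this would run on a schedule inside the pipeline, with results exported to the monitoring dashboards described below.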
Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or related field
- 2-5 years of experience in DevOps, ML engineering, or related roles
- Strong proficiency in Python and shell scripting
- Experience with containerization (Docker) and orchestration (Kubernetes)
- Knowledge of cloud platforms (AWS, GCP, or Azure) and their ML services
- Familiarity with CI/CD tools (GitHub Actions, GitLab CI, Jenkins)
- Understanding of ML workflows and model deployment challenges
- Experience with infrastructure as code (Terraform, CloudFormation)
- Proficiency with Linux systems and command-line tools
Preferred Qualifications
- Experience with ML platforms (Kubeflow, MLflow, SageMaker, Vertex AI)
- Knowledge of model serving frameworks (TorchServe, TensorFlow Serving, Triton)
- Familiarity with monitoring tools (Prometheus, Grafana, ELK stack)
- Experience with GPU infrastructure and optimization
- Understanding of distributed training and model parallelism
- Knowledge of data versioning tools (DVC, Pachyderm)
- Experience with feature stores (Feast, Tecton)
- Background in data engineering or ML engineering
- Certifications in cloud platforms or Kubernetes
What You'll Work On
- Building end-to-end ML pipelines from data ingestion to model deployment
- Creating automated retraining systems for continuous model improvement
- Implementing blue-green deployment strategies for ML models
- Developing monitoring dashboards for ML system health and performance
- Optimizing inference infrastructure to reduce costs and improve latency
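As a sketch of the blue-green deployment idea mentioned above: serving traffic is split between a stable ("blue") model and a candidate ("green") one, and the candidate is promoted once validated. This toy router is purely illustrative; the class and method names are assumptions, and real deployments would do this at the infrastructure layer (e.g. via a service mesh or load balancer) rather than in application code.

```python
import random

class BlueGreenRouter:
    """Route inference requests between a stable ('blue') model and a
    candidate ('green') model, with a configurable traffic split."""

    def __init__(self, blue, green, green_fraction=0.0):
        self.blue = blue
        self.green = green
        self.green_fraction = green_fraction  # share of traffic sent to the candidate

    def predict(self, x, rng=random):
        # Randomly route each request according to the current split
        model = self.green if rng.random() < self.green_fraction else self.blue
        return model(x)

    def promote(self):
        """Candidate passed validation: make green the new stable model."""
        self.blue = self.green
        self.green_fraction = 0.0
```

Shifting `green_fraction` gradually from 0 toward 1 gives a canary rollout; calling `promote()` completes the cutover with no downtime.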
Perks & Benefits
- Competitive salary package (₹12-24 LPA based on experience)
- Remote-first culture with flexible working arrangements
- Health insurance coverage
- Learning budget for certifications and training
- Access to cloud credits for experimentation and development
- Stock options and annual performance incentives
- Work with modern MLOps tools and cutting-edge infrastructure
- Collaborative team environment with knowledge sharing
- Flexible hours and work-life balance
- Premium development tools and hardware
We're committed to creating an inclusive and diverse workplace where everyone can thrive. Quml is proud to provide equal opportunity to all qualified applicants regardless of race, color, ancestry, religion, gender, age, or any other protected characteristic.
