ZenML

Information Technology & Services

#Opensource #MLOps #Framework that integrates all your ML tools. Run ML pipelines on any stack with minimum effort!

About us

ZenML is an extensible, open-source MLOps framework for creating portable, production-ready ML pipelines. It's built for data scientists, ML engineers, and MLOps developers to collaborate all the way from development to production. ZenML has a simple, flexible syntax, is cloud- and tool-agnostic, and provides interfaces and abstractions catered to ML workflows. It brings together all your favorite tools in one place so you can tailor your workflow to your needs.

Website
https://zenml.io/
Industry
Information Technology & Services
Company size
11-50 employees
Headquarters
Munich
Type
Privately Held
Founded
2021
Specialties
MLOps, ProductionML, reproducibleML, opensource, and framework

Locations

Employees at ZenML

Updates

  • View organization page for ZenML

    7,865 followers

⛩️ Run your first #MLOps pipeline in just 11 minutes! 🧘🏽♀️ In this video, Hamza provides an in-depth, step-by-step guide to creating your first MLOps pipeline and demonstrates how ZenML seamlessly integrates with your favorite tools and existing infrastructure. He also shows how easy it is to move from local debugging to cloud production, and how to maintain visibility and control over your models and data. 📊 With ZenML, you can easily convert legacy code into a pipeline, automate dataset versioning, scale up to the cloud for more resources, and even deploy models with a single click. 👌 -> Watch it on YouTube: https://lnkd.in/dADeBehJ -> And try it yourself: https://www.zenml.io/ #opensource

  • ZenML reposted this

    View profile for Adam Probst

    CEO @ ZenML ⛩️ open-source MLOps Framework

⛵️ Join the ZenML Crew! Setting Sail for Growth with Our Open-Source MLOps Framework! 🏴☠️ Ahoy there! We're charting new waters as our open-source MLOps framework gains momentum. As we hoist our sails for expansion, we're looking for new crew members to join our voyage: 🚢 Platform Solutions Engineer (m/f/*) For those who navigate both cloud infrastructures AND help customers chart their course. You'll maintain our AWS-based platform while guiding enterprise customers through smooth sailing. If you're experienced with infrastructure, Kubernetes, and Python - and enjoy solving technical challenges across different seas - we want you on deck! 👉 Drop anchor through the link in the comments! 🧭 Senior Frontend Engineer (m/f/*) Help us navigate uncharted UI/UX waters! You'll transform complex ML pipelines into clear navigation systems. If you have strong React and TypeScript experience and enjoy creating intuitive interfaces that help users sail through complicated workflows, come aboard! 👉 Set your course via the link in the comments ⚓ Our Ship's Culture We're a diverse crew (7 nationalities and counting!) with deckhands working remotely and at our Munich harbor. We're serious about our voyage but enjoy the journey, too. As you can see from this photo of our recent team sailing expedition, we truly embrace the spirit of teamwork and adventure on the high seas! #MLOps #opensource #hiring

  • ZenML reposted this

    View profile for Paul Iusztin

    Senior ML/AI Engineer • Founder @ Decoding ML ~ Posts and articles about building production-grade ML/AI systems.

RAG has kept “dying” for the past 4 years. But here’s why that will never happen: all LLMs (even the most advanced ones) struggle without the right context. It doesn’t matter if your model has 128k+ token windows or cutting-edge fine-tuning... if it doesn’t retrieve the right data, or the context is full of noise or formatted incorrectly, it won’t generate the right answers. That’s why retrieval is the hardest part of RAG. Most RAG failures aren’t about generation - they happen before the LLM even sees the data. If the retrieval step is weak, your AI assistant will: - Fetch irrelevant information - Miss critical details - Hallucinate confidently wrong responses But more context isn’t the answer... better context is. Lesson 5 of the Second Brain AI Assistant course is all about fixing retrieval with a production-ready RAG feature pipeline. (And it's now live!) In this lesson, you will learn: - The fundamentals of RAG - How to design and implement a production-ready RAG pipeline - How to implement contextual retrieval (an advanced RAG technique) from scratch - How to implement parent retrieval (another advanced RAG technique) using LangChain - How to extend LangChain to add custom behavior using OOP - The critical role of chunk size in optimizing retrieval quality - How to write a configuration layer to switch between algorithms and models dynamically - How to manage everything with an MLOps framework (we use ZenML) By the end of this lesson, you’ll be equipped to build a flexible, modular RAG feature pipeline. This pipeline gives our AI assistant access to our Second Brain and provides reliable context to generate meaningful answers. Sounds interesting? Pick up lesson 5 today. (The link is in the comments.) Thank you, Anca Ioana Muscalagiu, for contributing another fantastic lesson to @Decoding ML!
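The parent-retrieval idea mentioned above can be sketched without any framework: index small child chunks for matching precision, but return the whole parent document as context. This is a toy stdlib illustration of the pattern only; the course implements it with LangChain's retrievers, and the documents here are made up:

```python
# Toy parent-retrieval sketch: match on small child chunks, return the
# whole parent document so the LLM sees the surrounding context.

def chunk(text: str, size: int = 5) -> list[str]:
    """Split a document into small word-level child chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def build_index(parents: dict[str, str]) -> list[tuple[str, str]]:
    """Map each child chunk back to its parent document id."""
    return [(child, pid) for pid, doc in parents.items() for child in chunk(doc)]

def retrieve(query: str, index: list[tuple[str, str]], parents: dict[str, str]) -> str:
    """Score child chunks by word overlap, return the best chunk's parent."""
    q = set(query.lower().split())
    best = max(index, key=lambda item: len(q & set(item[0].lower().split())))
    return parents[best[1]]

parents = {
    "doc1": "ZenML pipelines orchestrate steps on any stack with full lineage",
    "doc2": "Vector databases store embeddings for fast similarity search",
}
index = build_index(parents)
print(retrieve("how do pipelines orchestrate steps", index, parents))
```

A production version replaces the word-overlap score with embedding similarity, but the child-to-parent mapping is the essence of the technique.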

  • View organization page for ZenML


Read more on ZenML's blog today: https://lnkd.in/d2n8gb4B

    View profile for Hamza Tahir

    Co-Founder @ ZenML

Query rewriting is really underrated for RAG systems. I've seen firsthand how query rewriting can dramatically improve retrieval quality - yet most teams skip proper evaluation of this critical step. Implementing fancy query rewriting without rigorous evaluation is like driving blindfolded: you think you're heading in the right direction, but you're likely headed for trouble. Real-world example from our RAG bot: a query about "monitoring my pipeline" gets rewritten to "visualizing data in MLflow" - completely missing that the user wanted ZenML pipeline monitoring. These subtle misalignments destroy user trust. Common pitfalls: • Ambiguity introducing inaccuracy • Missing domain-specific nuances • Over/under-specificity tradeoffs • Temporal drift in terminology What actually works: building systematic evaluation pipelines, tracking original queries alongside rewrites, implementing automated scoring functions, and comparing performance across rewriting models. You can then set up regression alerts when quality drops. The most impressive RAG pattern is worthless without evidence that it's actually working. Continuous evaluation isn't optional - it's the foundation of trustworthy systems. Jayesh wrote a great guide on this on our recent blog - give it a read if you're optimizing your RAG systems: https://lnkd.in/dGGrM3wU
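The evaluation loop described above can be sketched in plain Python: log original queries next to their rewrites, score each rewrite automatically, and flag regressions below a threshold. The scoring function here is a deliberately simple keyword-overlap stand-in (a real pipeline would use retrieval metrics or an LLM judge), and the queries besides the MLflow example from the post are made up:

```python
# Sketch of a query-rewrite evaluation loop: track originals alongside
# rewrites, score each rewrite, and flag regressions.

def rewrite_score(rewrite: str, expected_terms: set[str]) -> float:
    """Fraction of expected domain terms preserved by the rewrite (toy metric)."""
    words = set(rewrite.lower().split())
    return len(words & expected_terms) / len(expected_terms)

def evaluate(pairs: list[tuple], threshold: float = 0.5) -> list[dict]:
    """pairs: (original, rewrite, expected_terms). Returns a scored report."""
    report = []
    for original, rewrite, terms in pairs:
        score = rewrite_score(rewrite, terms)
        report.append({"original": original, "rewrite": rewrite, "score": score})
        if score < threshold:
            print(f"REGRESSION: {original!r} -> {rewrite!r} ({score:.2f})")
    return report

pairs = [
    # The MLflow failure case from the post: the rewrite drops every
    # term the user actually cared about.
    ("monitoring my pipeline",
     "visualizing data in MLflow",
     {"pipeline", "monitoring", "zenml"}),
    ("how to cache steps",
     "zenml step caching configuration",
     {"zenml", "caching", "steps"}),
]
report = evaluate(pairs)
```

Wiring a loop like this into CI is what turns "the rewriter seems fine" into evidence, and the threshold check is where regression alerts hook in.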

  • View organization page for ZenML


    Get your MCP fix here

    View profile for Hamza Tahir

    Co-Founder @ ZenML

Model Context Protocol is one of two things I've been following closely since December 2024, and I'm pleased that it seems to have gone viral last week. The Model Context Protocol (MCP) is an emerging standard, initially developed by Anthropic, designed to enhance how AI models - particularly large language models (LLMs) - interact with external data sources and tools. It aims to simplify context management, making it easier for AI applications to access and use relevant information, which can improve response quality and scalability. I think it's instinctive for many to say "Oh, this is just an API", but you need to zoom out a bit to "get it". Yes, of course it's just a typical client/server API, but if everyone starts adopting *this particular way of writing an API*, *then* it has the potential to be disruptive. Case in point: at ZenML we just released our own MCP server to the world. We encoded all the tools the LLM needs to interact with a ZenML deployment (fetching pipelines, logs, stacks, etc.). Now users can plug it into Claude or Cursor or whatever, and boom - they have a natural language interface to ZenML. We didn't need to build a chatbot or integrate anything into our dashboard. It's just available to our community. This GIF walks through how easy it is to generate a report with it: it's basically "chat to your MLOps pipeline" baked right into Claude. Play with our new MCP server now (there's an easter egg for the eagle-eyed ;-) )! To do that, follow the instructions I'll link below. What do you think of the MCP standard? Comment below!
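"Encoding your tools" boils down to a registry the client can list and invoke by name with JSON arguments - that is the shape MCP standardizes. The real ZenML MCP server speaks the actual MCP protocol via the official SDK; this stdlib sketch only illustrates the pattern, and the tool name and its return data are hypothetical:

```python
# Toy tool registry: the shape that MCP standardizes across servers.
# A real MCP server exposes the same name/description/handler triples
# over the protocol so any MCP client (Claude, Cursor, ...) can call them.
import json

TOOLS: dict[str, dict] = {}

def tool(name: str, description: str):
    """Register a function as a named, described tool."""
    def decorator(fn):
        TOOLS[name] = {"description": description, "handler": fn}
        return fn
    return decorator

@tool("list_pipelines", "List recent pipeline runs")
def list_pipelines() -> list[str]:
    # Hypothetical data; a real server would query the ZenML deployment.
    return ["training_pipeline", "feature_pipeline"]

def handle(request: str) -> str:
    """Dispatch a JSON request like {"tool": ..., "args": {...}}."""
    req = json.loads(request)
    result = TOOLS[req["tool"]]["handler"](**req.get("args", {}))
    return json.dumps({"result": result})

print(handle('{"tool": "list_pipelines"}'))
```

The disruptive part is not the dispatch code - it is that every server exposing this shape works with every client, with no bespoke chatbot or dashboard integration.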

  • ZenML reposted this

    View profile for Paul Iusztin

    Senior ML/AI Engineer • Founder @ Decoding ML ~ Posts and articles about building production-grade ML/AI systems.

Scaling LLMs presents enormous challenges. To solve them, study other use cases. An LLMOps Database with 372 comprehensive case studies ↓ This curated database is a game-changer for teams and organizations looking to streamline and scale their LLM operations efficiently. As ZenML labels it: "A curated knowledge base of real-world LLMOps implementations, with detailed summaries and technical notes." Top use cases include: Automated customer support - integrate LLMs into chatbots to provide instant, human-like responses to customer queries. Content generation - use LLMs to create high-quality, SEO-optimized content at scale. Code assistance - integrate LLMs to improve development workflows. Sentiment analysis - analyze large volumes of text data to understand customer sentiment, market trends, and feedback. Document summarization - streamline the extraction of key insights from long reports or documents. ZenML's LLMOps Database grants you access to everything needed to operationalize these use cases seamlessly: - Tools - Workflows - Integrations Why it matters: LLMs are powerful, but achieving their full potential can be challenging without the right infrastructure and operational support. Curious to learn more? Explore the ZenML LLMOps Database today: https://lnkd.in/dC6BEszB Thank you, Hamza Tahir and Alex S., for compiling this amazing list! #machinelearning #artificialintelligence #generativeai #mlops

  • ZenML reposted this

    View profile for Alejandro Saucedo

    AI & Data Executive @ Zalando | Advisor @ UN, EU, ACM, etc | Join 70k+ ML Newsletter

Exciting news and resources this week in the Machine Learning Ecosystem: Technical University of Munich on Responsible AI, Google GenAI Course, ZenML LLMOps Database, Google DeepMind Simulation AI, Microsoft Quantifying Bad Days + more 🚀 Check out the deep dives and resources in this week's edition! For anyone looking for exciting ways to develop your ML Engineering skills in 2024, you can join 60,000+ ML practitioners & enthusiasts for weekly news, tutorials, articles, and MLOps events 📅 + more 🚀 #ML #MachineLearning #ArtificialIntelligence #AI #MLOps #AIOps #DataOps #augmentedintelligence #deeplearning #privacy #kubernetes #datascience #python #bigdata

  • View organization page for ZenML


    Interesting take from our co-founder about doing MLOps with Airflow

    View profile for Hamza Tahir

    Co-Founder @ ZenML

    "I have 47 nearly identical Airflow DAGs, each serving a different ML model variant." If that made you wince, you're not alone. Let's talk about the real mess of ML workflows in Airflow. Common failure patterns I keep seeing: 1. Massive monolithic DAGs that try to handle every edge case   (Good luck debugging that 4000-line pipeline.py) 2. Task dependencies that look like spaghetti   (Because someone had to handle "just one more feature flag") 3. Hardcoded paths and config scattered across tasks   (The classic "it works on my branch" syndrome) 4. Running preprocessing in notebooks, then wondering why prod is broken (Those magic .transform() calls need version control too) What actually works: • Dynamic DAG generation from config • Modular tasks with clear contracts • Version EVERYTHING (yes, even those sklearn transformers) • Standardized failure handling patterns • Parameterized model artifacts Real talk: Your Airflow DAGs should be boring. All the ML complexity should live in versioned packages, not task definitions. Your data scientists will never read your Airflow docs. But they will use your templates if you make them easier than notebooks. The best ML pipelines are the ones you can explain to a new team member in 10 minutes. #MLOps #DataEngineering #MachineLearning #Airflow

  • View organization page for ZenML


    How to deploy RAG in production for the enterprise

    View profile for Hamza Tahir

    Co-Founder @ ZenML

Let's talk RAG in the enterprise! 🫸 We now know that prototyping a Hello World RAG app and getting it reliably into production are two distinct problems (surprise, surprise) 🫸 Problems in production include inconsistent reasoning, hallucinations, incompleteness, performance degradation, etc. 🫸 A good framing is that RAG systems are data pipelines at heart (ingestion, reranking, evaluation, PII detection, etc.) 🫸 It's clear that building a data flywheel for your LLMOps (as for MLOps before it) is going to be a key competitive differentiator moving forward 🫸 Start simple, but frame this as a data problem! A solid ML platform foundation is worth the investment to enable GenAI use cases reliably across the enterprise. Picture: me speaking about how to architect a RAG system for the enterprise at ElasticON last week in Munich
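The "data pipelines at heart" framing can be made concrete: each concern named above (ingestion, PII detection, reranking, evaluation) becomes one composable stage. The stage bodies below are toy stand-ins for real implementations, and the documents and query are invented:

```python
# RAG as a data pipeline: ingestion -> PII scrub -> rerank -> evaluate.
# Each stage is a plain function, so stages can be swapped, tested, and
# versioned independently.
import re

def ingest(raw_docs: list[str]) -> list[str]:
    """Normalize and drop empty inputs."""
    return [d.strip() for d in raw_docs if d.strip()]

def scrub_pii(docs: list[str]) -> list[str]:
    """Toy PII step: mask anything that looks like an email address."""
    return [re.sub(r"\S+@\S+", "[EMAIL]", d) for d in docs]

def rerank(docs: list[str], query: str) -> list[str]:
    """Toy reranker: order documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))

def evaluate(docs: list[str], query: str) -> float:
    """Toy metric: does the top document share any term with the query?"""
    q = set(query.lower().split())
    return 1.0 if docs and q & set(docs[0].lower().split()) else 0.0

docs = ingest(["  contact bob@corp.com about billing ", "pipeline caching guide"])
docs = scrub_pii(docs)
ranked = rerank(docs, "pipeline caching")
print(ranked[0], evaluate(ranked, "pipeline caching"))
```

Once the system is shaped like this, the data-flywheel argument follows naturally: every stage emits artifacts you can log, version, and feed back into improvement.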

  • View organization page for ZenML


    Candid thoughts

    View profile for Hamza Tahir

    Co-Founder @ ZenML

    A candid note to ML platform leads: Building that internal ML platform was the easy part, wasn't it? The real gut punch came when: ➡️ Your data scientists kept using their notebooks instead of your carefully crafted workflows. ➡️ Your ML engineers still copied model files manually rather than use your artifact store. ➡️ Your expensive GPU scheduler sat unused while teams spun up their own instances. ➡️ Your documentation went unread while Slack filled with the same basic questions. You did everything "right": ✓ Kubernetes-native architecture  ✓ Full model versioning ✓ Automated CI/CD ✓ Standardized environments ✓ Role-based access But you built what you thought they needed, not what they actually needed. I learned this the hard way: A half-implemented solution that teams actually use beats a perfect platform that they don't. Start with one team's actual workflow. Make it 10% better. Repeat. #MLOps #ML #SoftwareEngineering #PlatformEngineering

Similar pages

Browse jobs

Funding

ZenML 2 total rounds

Last Round

Seed

US$ 3.7M

See more info on Crunchbase