Bristech is back with Bristech: MLOps, a one-day online event focused on the nascent and increasingly important field of MLOps, streamed live from Floating Harbour Studio in Bristol on Thursday 24 September – and you can grab your tickets here!

As usual, the event promises a wealth of knowledge sharing, with a focus on the emerging field of MLOps. The programme encompasses a multitude of perspectives, ranging from key challenges and tooling approaches to practitioner experiences and bridging gaps in understanding between Data Scientists, Machine Learning (ML) Infra Engineers, ML Researchers, Managers and Tech Founders.

The full programme is out now, including presentations, workshops and “at the bar” sessions for a combination of introductions, deep-dive learning and networking, but we’ve selected a few of our favourites.

What not to miss

Confessions of an ML Infrastructure Engineer

Delivered by Ettie Eyre and Jose Navarro, Machine Learning Infrastructure Lead and ML Infra Engineer at Cookpad, this talk sees the two seasoned experts walk you through how to go collaboratively from Research through to Delivery at scale.

They will be sharing stories from their two years of ML infra at Cookpad – some successes, some lessons learned and some insights into how far they have come towards their goal of a gold-plated MLOps platform.

Ahead of the talk, you can catch Bristech Founder Nic Hemley in conversation with Ettie, discussing what to expect and asking how ML engineers can work with ML Researchers and Data Scientists to ensure deployment happens as effectively as possible.

Model Observability: Challenges and best practice

In this talk, Aparna Dhinakaran, Co-Founder and CPO of Arize AI (a Berkeley-based startup focused on ML Observability), will discuss the state of the commonly seen ML production workflow and its challenges. She will focus on the lack of model observability, its impacts, and how Arize AI can help.

This talk will highlight common challenges seen in models deployed in production, including model drift, data quality issues, distribution changes, outliers, and bias. The talk will also cover best practices to address these challenges and where observability and explainability can help identify model issues before they impact the business.
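To make one of those challenges concrete: drift is often quantified by comparing the distribution a model was trained on against what it sees in production. A common metric for this is the Population Stability Index (PSI); the sketch below is a generic, from-scratch illustration of the idea, not a description of how Arize's platform works under the hood.

```python
import math
from collections import Counter

def psi(reference, production, bins=10):
    """Population Stability Index: ~0 means the production
    distribution matches the reference; larger values mean drift."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0

    def bucket_fractions(values):
        counts = Counter(
            min(max(int((v - lo) / width), 0), bins - 1) for v in values
        )
        n = len(values)
        # Floor empty buckets at a tiny fraction to avoid log(0).
        return [max(counts.get(b, 0) / n, 1e-4) for b in range(bins)]

    ref = bucket_fractions(reference)
    prod = bucket_fractions(production)
    return sum((p - r) * math.log(p / r) for r, p in zip(ref, prod))

scores_train = [i / 100 for i in range(100)]
scores_live = [s + 0.5 for s in scores_train]  # distribution has shifted
print(psi(scores_train, scores_train))  # ~0: no drift
print(psi(scores_train, scores_live))   # large: raise a drift alert
```

A rule of thumb in practice is to alert once PSI crosses a threshold (often around 0.2); production-grade observability tools track this continuously per feature and per model output.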

Aparna will be sharing a demo of how the Arize AI platform can help companies validate their models' performance, provide real-time performance monitoring and alerts, and automate troubleshooting of slices of model performance with explainability. The talk will cover best practices in ML Observability and how companies can build more transparency and trust around their models.

Kubeflow + Rok + Kale: The easiest path to reproducible ML

If you’re after information on how to make ML reproducible, the talk from Stefano Fioravanzo of Arrikto is not one to miss!

Kubeflow is an open source toolkit for Machine Learning on Kubernetes, designed to make deployments of Machine Learning workflows on Kubernetes simple, portable, and scalable. It is a fast-growing project, very popular among data scientists, with outstanding community and industry support.

Using Kubeflow makes it easier for Data Scientists to automate and operationalize common Machine Learning workflows, like distributed training, hyperparameter tuning, running complex data pipelines, logging and storing metadata and artifacts, as well as working in shared JupyterLab environments. Kubeflow strives to provide all the bells and whistles of a comprehensive ML environment, but given the inherent complexity of running Machine Learning workflows at scale, Kubeflow remains more suited to Software/ML Engineers who possess a fair understanding of Kubernetes concepts, specific SDKs, and practices.
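As a flavour of one workflow on that list, hyperparameter tuning is at its core a search over a parameter space for the best validation score. Kubeflow distributes that search across a cluster; the stand-alone sketch below only shows the underlying idea, with a made-up objective function standing in for real model training (it is not Kubeflow's API).

```python
from itertools import product

# Hypothetical objective: in a real pipeline this would train a model
# and return its validation score. A quadratic with a known optimum
# stands in here so the example is self-contained.
def objective(learning_rate, batch_size):
    return -((learning_rate - 0.01) ** 2) - 1e-6 * (batch_size - 64) ** 2

search_space = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [32, 64, 128],
}

def grid_search(space, score_fn):
    names = list(space)
    # Try every combination and keep the highest-scoring one.
    return max(
        (dict(zip(names, combo)) for combo in product(*space.values())),
        key=lambda params: score_fn(**params),
    )

best = grid_search(search_space, objective)
print(best)  # {'learning_rate': 0.01, 'batch_size': 64}
```

Real tuners replace the exhaustive loop with smarter search strategies and run trials in parallel, but the contract is the same: a parameter space plus a scoring function.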

In this talk, Stefano will be lowering that complexity bar, taking Kubeflow’s MLOps paradigm to the next level and empowering Data Scientists to leverage the Ops while focusing just on the ML.

Workshop – What does production-ready in ML actually look like?

Delivered by Hamza & Ben @ Maiot, this workshop will provide an insight into the workings of Maiot – the team behind the Core Engine, an MLOps platform for reproducible ML and the result of years of ML and DevOps experience working together on applied ML.

Hamza and Ben, co-founders and creators of the Core Engine, are going to lay out a definition of ML in production and give a walkthrough of the platform. The goal is to provide a boilerplate for getting models into production reproducibly, quickly and iteratively.

They will show that Data Scientists do not have to change their workflows to run immutable ML pipelines with full data provenance. No prior DevOps knowledge is required to achieve reproducible results and continuously production-ready models.
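"Immutable pipelines with full data provenance" roughly means every run can be traced back to exactly the data and configuration that produced it. One minimal way to express that idea (purely illustrative – not the Core Engine's actual API) is to content-address each run with a hash of its inputs:

```python
import hashlib
import json

def run_fingerprint(dataset_bytes, params):
    """Content-addressed run ID: identical data + identical config
    always yields the same fingerprint, so a past result can be
    reproduced (or reused) instead of recomputed."""
    h = hashlib.sha256()
    h.update(dataset_bytes)
    # Canonical JSON so dict key order cannot change the hash.
    h.update(json.dumps(params, sort_keys=True).encode())
    return h.hexdigest()

data = b"feature_a,feature_b,label\n1,2,0\n3,4,1\n"
fp1 = run_fingerprint(data, {"model": "logreg", "l2": 0.1})
fp2 = run_fingerprint(data, {"l2": 0.1, "model": "logreg"})
print(fp1 == fp2)  # True – key order is irrelevant
fp3 = run_fingerprint(data + b"5,6,0\n", {"model": "logreg", "l2": 0.1})
print(fp1 == fp3)  # False – changed data means a new run
```

Once runs are identified this way, "immutable" falls out naturally: a fingerprint never points at anything but the exact inputs that produced it, which is the property that makes results reproducible without any DevOps ceremony from the Data Scientist.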