15 October 2024
virtual

Getting new ML models into production with ease

Shipping new algorithms to production, running experiments in live systems, or incorporating newly sourced predictive data signals consumes many engineering and DevOps hours, delaying the vital discovery of best-in-breed models and strategies. Mather/Sophi has developed tooling that eliminates this dependency on engineering and DevOps, enabling efficient model delivery and safe experimentation on production systems. Publishers investing in their own data science capabilities often face similar challenges, which slow their pace of innovation. In this session, Ash Ahmed will share how Mather/Sophi removed this bottleneck and how publishers can achieve the same benefits, better enabling their data science teams to tackle more use cases to delight and best monetize their audiences.

QUICK RECAP

Scaling Machine Learning Operations Challenges

Ashraf discussed the challenges of scaling machine learning operations at Mather/Sophi, including the lack of established guidelines, the difficulty of measuring data quality, and managing large volumes of data and models with limited resources. He emphasized efficient data modeling with tools like flexible catalogs and data tiering, continuous model delivery with version control, safe experimentation through policy systems and data collection, and a deliberate architecture that separates concerns between data scientists and system engineers.

What’s Next

  • Data Science team to explore using semantic layers and flexible catalogs for improved model governance.
  • Engineering team to invest in developing a robust model training harness to increase efficiency of model delivery.
  • ML Ops team to implement dark mode testing capabilities for safer experimentation with new models.
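The "dark mode" testing mentioned above is commonly implemented as shadow deployment: a candidate model scores live traffic and its predictions are logged for offline comparison, but only the production model's output is ever returned to callers. This is a hedged sketch of the general pattern, not Mather/Sophi's implementation; all names are illustrative.

```python
from typing import Callable

Prediction = float
Model = Callable[[dict], Prediction]

# Captured (features, live, shadow) triples for offline comparison.
shadow_log: list[tuple[dict, Prediction, Prediction]] = []


def predict_with_shadow(features: dict, champion: Model, challenger: Model) -> Prediction:
    live = champion(features)
    try:
        dark = challenger(features)  # runs on live traffic but never affects the response
        shadow_log.append((features, live, dark))
    except Exception:
        pass  # a failing shadow model must never break production serving
    return live


# Toy champion/challenger models for illustration.
champion = lambda f: 0.2 * f["visits"]
challenger = lambda f: 0.25 * f["visits"] + 0.1

result = predict_with_shadow({"visits": 10}, champion, challenger)
print(result)  # 2.0 -- only the champion's score is served
```

The key safety property is that the challenger is isolated behind the try/except: it can be wrong, slow, or crash, and callers still see only the champion's predictions, which is what makes experimentation on production systems safe.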

🎥 REPLAY THE MEET-UP

⬇️ DOWNLOAD THE PDF

Do you have any questions or concerns? Feel free to contact us.