January 23, 2019
ParallelM CTO Nisha Talagala to Lead a Workshop and Present about Machine Learning Production at the 3rd Annual Global Artificial Intelligence Conference
BY PARALLELM

** MEDIA ADVISORY **

SUNNYVALE, CA, January 23, 2019 – ParallelM CTO Nisha Talagala will lead a workshop, “Bringing Your Machine Learning and Deep Learning Algorithms to Life: From Experiments to Production Use,” and deliver a presentation, “REST and Microservices based ML deployment with Containers: From Data Science experiments to Production ML,” at the 3rd Annual Global Artificial Intelligence Conference, Jan. 23-25, 2019, in Santa Clara, California.

WHAT: Bringing Your Machine Learning and Deep Learning Algorithms to Life: From Experiments to Production Use
Nisha Talagala, CTO of ParallelM, will lead a hands-on workshop in which attendees will learn how to take Machine Learning and Deep Learning programs into production use and manage the full production lifecycle. The workshop is targeted at data scientists with basic knowledge of Machine Learning (ML) and/or Deep Learning (DL) algorithm types who would like to turn promising experimental results into production success.

In the first half of the workshop, attendees will learn how to develop an ML algorithm in a Jupyter notebook and transition it into an automated production scoring environment using Apache Spark. They will then learn how to diagnose production scenarios for their application (for example, data and model drift) and further optimize ML performance through retraining. In the second half, attendees will perform a similar exercise for DL: they will experiment with Convolutional Neural Network algorithms in TensorFlow, deploy a chosen algorithm into production use, learn how to monitor the behavior of DL algorithms in production, and explore approaches to optimizing production DL behavior via retraining and transfer learning.

All experiments will use Python. Environments will be provided in Azure for hands-on use by all attendees; each attendee will receive an account for use during the workshop, with access to the notebook environments, Spark and TensorFlow engines, and an ML lifecycle management environment. For the ML experiments, sample algorithms and public data sets will be provided for Anomaly Detection and Classification; for the DL experiments, sample algorithms and public data sets will be provided for Image Classification and Text Recognition.

REST and Microservices based ML deployment with Containers: From Data Science experiments to Production ML
Machine Learning (ML) is everywhere. Putting promising ML algorithms into production, however, is complicated by challenges that experimental data science environments do not solve well. This talk describes a holistic technical approach to MLOps that uses REST, Microservices, and Containers, enabling organizations to extend existing software development and DevOps practices to fully operationalize the Machine Learning Lifecycle for their business. Nisha Talagala will show how REST-based containerized environments enable and drive this trend, share best practices, and demonstrate a full Machine Learning and Deep Learning Lifecycle with Python/R, Docker Containers, and Kubernetes.

WHO: Nisha Talagala is CTO and VP of Engineering at ParallelM, with more than 15 years of expertise in software development, distributed systems, I/O solutions, persistent memory, and flash. Prior to ParallelM, she was a Fellow at SanDisk and a Fellow/Lead Architect at Fusion-io, where she drove innovation in non-volatile memory, including the industry’s first persistent memory solution. She was technology lead for server flash at Intel, where she led server platform non-volatile memory technology development, storage-memory convergence, and partnerships. Nisha holds 54 patents in distributed systems, networking, storage, performance, and non-volatile memory, and serves on multiple industry and academic conference program committees.

WHEN: “Bringing Your Machine Learning and Deep Learning Algorithms to Life: From Experiments to Production Use” Workshop, Jan. 24 from 9 am – 12:50 pm PT in room 207

“REST and Microservices based ML deployment with Containers: From Data Science experiments to Production ML” Presentation, Jan. 24 from 4:20 – 5:00 pm PT in room 206

WHERE: 3rd Annual Global Artificial Intelligence Conference, Santa Clara Convention Center, 5001 Great America Parkway, Santa Clara, California

About ParallelM
ParallelM is the first and only company completely focused on delivering machine learning operationalization (MLOps) at scale. ParallelM’s breakthrough MCenter™ solution is built specifically to power the deployment, optimization, and governance of machine learning pipelines in production so that companies can scale machine learning across their business applications. ParallelM’s approach is that of a single, unified MLOps solution that embeds best practice processes in technology, enabling all ML stakeholders to unlock the business value of AI. Please visit www.parallelm.com or email us at info@parallelm.com.
Media Contact:
Marianne Dempsey/Jenna Beaucage
Phone: 508-475-0025 x115/ x124
Email: parallelm@rainierco.com
