MCenter: DEPLOY AND MANAGE MODELS IN PRODUCTION
MCenter makes it easy to deploy machine learning models, providing pre-built components and a pipeline builder for combining pipelines into ML applications called MLApps.
Production Pipeline Builder – MCenter comes with a library of components and a drag-and-drop pipeline builder, so you can build production pipelines in minutes, not hours.
Advanced ML Health Monitoring – MCenter automatically alerts you when production data deviates from training data, or when your model’s results start to drift apart from a trusted canary model, so you can focus on building new models and update deployed models only when needed.
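To make the idea concrete, here is a minimal sketch of one common way to detect deviation between production and training data: the Population Stability Index (PSI) over a single feature. This is an illustrative technique only, not MCenter's actual monitoring implementation, and all names and thresholds below are assumptions.

```python
import math

def psi(train, prod, bins=10):
    """Population Stability Index between two samples of one feature.

    Bin edges are derived from the training sample; a high PSI means the
    production distribution has shifted away from training.
    """
    lo, hi = min(train), max(train)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = sum(x > e for e in edges)  # bin index for value x
            counts[i] += 1
        n = len(sample)
        # Smooth empty bins so the log ratio below stays finite.
        return [max(c / n, 1e-4) for c in counts]

    p, q = frac(train), frac(prod)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

# Hypothetical data: a production feature shifted upward by 3 units.
training = [0.1 * i for i in range(100)]
production = [0.1 * i + 3.0 for i in range(100)]

score = psi(training, production)
drifted = score > 0.2  # common rule of thumb: PSI > 0.2 flags drift
```

The same comparison can be run per feature on each batch of production traffic, firing an alert whenever any feature's PSI crosses the chosen threshold.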
Built-in A/B Testing – Prove that new models are better than incumbents with built-in A/B testing and easy-to-read results.
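One standard way such a comparison can be scored is a two-proportion z-test on each model's success rate over a traffic split. The sketch below is a generic illustration with made-up counts, not MCenter's built-in test.

```python
import math

def ab_z_score(hits_a, n_a, hits_b, n_b):
    """z-score for the difference in success rates of models A and B."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)  # pooled success rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: incumbent correct on 820/1000 requests,
# challenger correct on 861/1000.
z = ab_z_score(820, 1000, 861, 1000)
significant = z > 1.645  # one-sided test at the 5% level
```

If `significant` is true, the challenger's improvement is unlikely to be noise, and promoting it to production is justified.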
Complete Model Governance – MCenter includes control and tracking of all actions taken in the system, so you can control who can put models into production and see which model produced a given prediction.
Building Advanced MLApps – With MCenter you can combine multiple pipelines together into an ML application to serve your business use case. Common pipeline combinations include automated batch retraining with REST inference, ensemble models, and sequential pipelines where multiple pipelines feed each other to create an output.
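The sequential-pipeline pattern above can be sketched as plain function composition: each pipeline consumes the previous pipeline's output. The stage names and weights below are purely illustrative, not MCenter's API.

```python
def featurize(record):
    """First pipeline: turn a raw record into a feature vector."""
    return [record["a"], record["b"], record["a"] * record["b"]]

def score(features):
    """Second pipeline: score the features with a stand-in linear model."""
    weights = [0.5, 0.3, 0.2]  # hypothetical model weights
    return sum(w * x for w, x in zip(weights, features))

def sequential_app(stages, record):
    """Run pipelines in order, each stage feeding the next."""
    out = record
    for stage in stages:
        out = stage(out)
    return out

prediction = sequential_app([featurize, score], {"a": 2.0, "b": 4.0})
```

Ensembles follow the same shape, except that several scoring pipelines run on the same input and their outputs are merged into a single result.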
Schedule a demo to learn more!
MCenter Agents – MCenter agents trigger analytics engines and manage local ML pipelines. They provide visibility into pipeline activity and send alerts, events, and statistics to the MCenter server. They are compatible with popular analytic engines including Spark, TensorFlow, and Flink.
MCenter Server – The MCenter server orchestrates ML Applications and pipelines via the MCenter agents. It executes policies, manages configuration, and sends data to the MCenter console. The MCenter server enables automation of all the critical tasks related to the deployment and management of ML.
Flexible Deployment Options – MCenter can be deployed in the cloud, on-premises, or in hybrid scenarios. It also works across distributed computing architectures that interoperate with a variety of analytic engines and platforms (Spark, TensorFlow, Kubernetes, PyTorch, and more). ParallelM works with you to define the best deployment configuration for your specific needs.