November 28, 2017
Model Governance: What is it and why is it needed in production ML?
BY VINAY SRIDHAR

Machine Learning applications are complex in nature. Managing them in production is non-trivial, as we have detailed in our other blogs (More Machine Learning models than ever, but are they making it into production?). These ML applications produce results and outcomes that are used to inform business decisions, perform audits on ML behavior, diagnose faults in business logic, and more. At the core of all these actions is a Machine Learning ‘Model’. In this blog we describe what we mean by ‘Model Governance’ and why it is necessary for ML in production.


What is Model Governance?

What makes ML applications and their management unique is the very nature of Machine Learning itself. Every execution and its outcome depend on numerous environmental and situational factors as well as on the data fed into the application. An ML application can alter its behavior across two otherwise identical runs. Further, ML applications produce objects that can affect the outcomes and behavior of other applications. For example, models produced by a training program can be used to infer on data in an entirely different environment and context. Apart from models, other key products of an ML application include predictions, prediction analytics, statistics, KPIs, and reports.

For enterprises to perform tasks like auditing, report generation, business decision analysis, or fault analysis, it is important to be able to link the generation of these objects to the decisions made at any point during the execution of the ML application. This ability to understand the linkages between the many facets of an ML initiative, and to answer questions about them, is what we generally call ‘Model Governance’.
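To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of the kind of linkage tracking this implies. The `GovernanceLedger` class, its methods, and all object names are illustrative assumptions, not any specific product's API.

```python
# Illustrative sketch: record governed objects and the links between them so that
# questions like "which dataset and model produced this report?" can be answered later.
import uuid
from dataclasses import dataclass, field


@dataclass
class GovernedObject:
    kind: str                      # e.g. "dataset", "model", "prediction-report"
    name: str
    metadata: dict = field(default_factory=dict)
    object_id: str = field(default_factory=lambda: str(uuid.uuid4()))


class GovernanceLedger:
    """Stores objects and directed links such as 'dataset X was used to train model Y'."""

    def __init__(self):
        self.objects = {}
        self.links = []            # (source_id, relation, target_id)

    def register(self, obj: GovernedObject) -> GovernedObject:
        self.objects[obj.object_id] = obj
        return obj

    def link(self, source: GovernedObject, relation: str, target: GovernedObject):
        self.links.append((source.object_id, relation, target.object_id))

    def upstream_of(self, obj: GovernedObject):
        """Walk links backwards to find everything that contributed to an object."""
        found = []
        for src, relation, dst in self.links:
            if dst == obj.object_id:
                parent = self.objects[src]
                found.append((relation, parent))
                found.extend(self.upstream_of(parent))
        return found


# Usage: trace a prediction report back to its model and training data.
ledger = GovernanceLedger()
data = ledger.register(GovernedObject("dataset", "ad_clicks_2017_10"))
model = ledger.register(GovernedObject("model", "revenue_predictor_v3",
                                       {"algorithm": "gbt", "retrain_policy": "weekly"}))
report = ledger.register(GovernedObject("prediction-report", "q4_budget_forecast"))
ledger.link(data, "used to train", model)
ledger.link(model, "produced", report)

for relation, parent in ledger.upstream_of(report):
    print(f"{relation} <- {parent.kind}: {parent.name}")
```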


Why do we need Model Governance in Production? – A diagnosis example

Often in production scenarios, issues occur that require detailed fault analysis. For example, suppose a client’s advertisement budget was proposed based on revenues predicted from ad placements, and the model used for that prediction was put into production to place bids on ads. At some point the actual revenue fell well below the prediction, and an analysis was needed to find the root cause.


Figure 1.

As seen in Figure 1, there are two distinct phases here: A, the process of generating the model, and B, the process of generating bidding prices for the ads. Between A and B, there could be other processes that link the two, either manual or via policy-based automation. While training the model, there may be explicit human interventions or policy-based decisions to retrain or to feed in certain configuration parameters. When choosing a specific model to deploy, again, there may be human actions or automated policy decisions (e.g., data-driven evaluations with filters). The specific business decisions, in this case the prices chosen for ads, are the result of a combination of multiple inputs and settings – the policy applied to the inference pipeline, the environment and configuration parameters, and of course, the actual live data itself. Clearly, multiple distinct events contributed to the ad pricing. To identify the exact sequence at every stage that led to the faulty decision, all relevant events, statistics, and other information (and the linkages between them) need to be preserved.

Model Governance provides the ability to diagnose such faults and reliably repeat such operations through coordinated preservation of all related objects and their linkages, as sketched below.
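As a rough illustration of what preserving those events could look like, the sketch below logs decisions from both phases against a single append-only event log. The `record_event` helper, the phase labels, and all values are hypothetical and only meant to show the shape of the information that would need to be retained.

```python
# Hypothetical sketch: preserve the events behind one business decision (an ad bid),
# spanning both the model-generation phase (A) and the bid-generation phase (B).
import json
import time

EVENT_LOG = []  # in practice this would be durable, append-only storage


def record_event(phase, stage, detail):
    """Append a timestamped event so the full decision sequence can be replayed later."""
    EVENT_LOG.append({
        "timestamp": time.time(),
        "phase": phase,          # "A" = model generation, "B" = bid generation
        "stage": stage,
        "detail": detail,
    })


# Phase A: decisions made while generating the model.
record_event("A", "retrain-trigger", {"reason": "weekly policy", "approved_by": "policy-engine"})
record_event("A", "training-config", {"learning_rate": 0.05, "features": ["ctr", "placement"]})
record_event("A", "model-selected", {"model": "revenue_predictor_v3", "filter": "lowest validation error"})

# Phase B: decisions made while generating bid prices with that model.
record_event("B", "inference-config", {"model": "revenue_predictor_v3", "bid_cap_usd": 2.50})
record_event("B", "bid-issued", {"ad_id": "ad-1187", "predicted_revenue": 4.10, "bid_usd": 1.90})

# Fault analysis: when revenue misses the prediction, replay every event that
# contributed to the bid, in order.
for event in EVENT_LOG:
    print(json.dumps(event))
```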


Why do we need Model Governance in Production? – A provenance example

As we’ve seen above, there are two distinct phases: the development of models and their actual usage in production. The process of developing a model is itself not straightforward; it involves a data scientist experimenting with multiple configurations, algorithmic choices and variants, training datasets, and so on. Consider the scenario where a data scientist has left behind piles of algorithmic variants and models but no documentation. How can those models be transitioned to production without provenance details?


Figure 2.


As seen in Figure 2, a model’s provenance includes the dataset used to train it, the algorithm itself (including the specific variant), the configuration parameters, and possibly the environment settings. Every time the data scientist retrains the model, perhaps after an algorithmic tweak or a dataset change, a new model is added to the repository. Throw many data scientists into the mix, each working on a different combination of the above, and we end up with a diverse set of models with multiple versions of each.
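One way to picture that repository is as a set of provenance records, one per retraining. The sketch below is a minimal, assumed illustration; the `ModelRepository` class and its fields are not any particular product's schema.

```python
# Illustrative sketch: capture a model's provenance (training dataset, algorithm
# variant, configuration, environment) every time it is retrained and versioned.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class ModelVersion:
    name: str
    version: int
    dataset: str                   # which dataset was used to train
    algorithm: str                 # algorithm and specific variant
    config: dict                   # hyperparameters and other settings
    environment: dict              # e.g. framework versions, hardware
    trained_at: str = field(default_factory=lambda: datetime.utcnow().isoformat())


class ModelRepository:
    def __init__(self):
        self._versions: List[ModelVersion] = []

    def add(self, **provenance) -> ModelVersion:
        name = provenance["name"]
        version = 1 + sum(1 for v in self._versions if v.name == name)
        entry = ModelVersion(version=version, **provenance)
        self._versions.append(entry)
        return entry

    def history(self, name: str) -> List[ModelVersion]:
        """All recorded versions of a model, so even an undocumented pile of
        models can be traced back to how each one was produced."""
        return [v for v in self._versions if v.name == name]


# Usage: two retrainings of the same model leave two provenance records.
repo = ModelRepository()
repo.add(name="revenue_predictor", dataset="ad_clicks_2017_09",
         algorithm="gradient-boosted-trees", config={"depth": 6},
         environment={"framework": "spark-2.2"})
repo.add(name="revenue_predictor", dataset="ad_clicks_2017_10",
         algorithm="gradient-boosted-trees/variant-b", config={"depth": 8},
         environment={"framework": "spark-2.2"})

for v in repo.history("revenue_predictor"):
    print(v.version, v.dataset, v.algorithm, v.config)
```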


With Model Governance, the vast and growing number of production models and their supporting objects are managed in a coordinated way, with the results available for audit, diagnosis, and other purposes.


Why do we need Model Governance in Production? – Emerging standards and regulatory requirements

Beyond the examples above, emerging standards such as GDPR and other equivalent data protection and privacy measures mandate that organizations provide reasons for algorithmic decisions. Organizations also often want to audit their own processes, use data to evaluate processes and decisions, perform historical analysis once later KPI metrics become available, and so on. All of these require fine-grained tracking of the objects produced by ML applications, and that in turn necessitates production Model Governance.


In future blogs, we will describe how ParallelM addresses the above needs and enables powerful production Model Governance.
