Do you trust the predictions your models are generating? Let’s think about this for a moment.
Machine learning models learn from historical data and use the patterns they discover to make predictions about new data. If this were a perfect world, the model would have a stable problem to solve, copious amounts of training data, and the new data received at inference time would always have the same characteristics as the training data.
In an ideal world where such conditions are always consistent, it would be easy to have a great deal of confidence in every single prediction. Trusting your AI would be a given. But as we all know, especially today, consistency and stability are luxuries that we do not have.
In reality, most machine learning takes place in evolving systems with fuzzy data. Most models will be tasked with scoring unusual, anomalous, or unexpected data at some point after they are deployed. Monitoring your model to detect data drift is one method to address this problem, but it requires analyzing the data over longer periods of time and at an aggregated level.
Therefore, in addition to data drift monitoring, the next step is real-time protection, which gives us the option to act on any individual prediction as soon as it occurs. To do this, we need the ability to define rules that can trigger an action on any individual prediction. These rules must be based on how certain we are that the prediction just generated is an accurate one.
The more certain we are of expected conditions at the time of prediction, the more we will trust that prediction. Furthermore, if the system itself tells us how sure it is, or how unsure it is, we can start to think of it as a humble system, not an arrogant system, and trust it even more.
In Release 6.1 of DataRobot’s Enterprise AI Platform, we are addressing these challenges with the addition of an exciting new capability we are calling Humble AI.
How Does Humble AI Work?
DataRobot’s Humble AI allows you to define a set of rules for any of your deployed models to be synchronously applied at prediction time. Each rule consists of a trigger and an assigned action. The trigger acts as the condition which will be checked at prediction time on every row, and the action defines what will happen if the condition is met.
Currently, there are three triggers supported:
- Uncertain prediction will detect if the model prediction is outside the range of normal values (for a regression problem) or within a region of low confidence (for a classification problem).
- Outlying input will detect if a numeric feature is not similar to what the model has seen during training.
- Low observation region will detect if a categorical feature value is one the user has specified as unexpected or inappropriate.
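The three triggers above can be sketched as simple per-row checks. This is an illustrative example only, not DataRobot's internal implementation; the thresholds, feature names, and the set of unexpected categories are all assumptions for the sake of the sketch.

```python
# Illustrative sketch of the three Humble AI trigger conditions.
# All bounds and values below are hypothetical examples.

def uncertain_prediction(pred, low=10_000.0, high=90_000.0):
    """Regression case: flag predictions outside the expected value range."""
    return not (low <= pred <= high)

def outlying_input(value, train_min=0.0, train_max=120.0):
    """Flag a numeric feature value unlike anything seen during training."""
    return not (train_min <= value <= train_max)

def low_observation_region(value, unexpected=frozenset({"unknown", "other"})):
    """Flag a categorical value the user has marked as unexpected."""
    return value in unexpected

# A hypothetical scoring row where every trigger fires:
row = {"age": 150.0, "segment": "unknown"}
triggered = {
    "uncertain_prediction": uncertain_prediction(95_000.0),
    "outlying_input": outlying_input(row["age"]),
    "low_observation_region": low_observation_region(row["segment"]),
}
print(triggered)  # all three conditions fire for this row
```

In the real product these conditions are configured per deployment and evaluated synchronously on every prediction request, so no extra scoring pass is needed.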
Based on these triggers, there are three actions supported:
- No operation allows you to monitor how often the rule’s condition is met without affecting predictions at all. You can use it to verify that the condition was defined correctly and is not triggering too often or too rarely.
- Overriding prediction allows you to set the prediction to a specified value, regardless of the model’s output. It enables you to enforce business rules on your deployment, since you control what the predicted value will be. For example, you might specify a “safe” prediction value that ensures the least risky action is taken whenever the model is unsure under the identified condition.
- Returning error allows you to discard the prediction completely.
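The three actions can likewise be sketched as a small dispatch applied whenever a rule's condition is met. Again, this is a hypothetical illustration of the concept, not DataRobot's code; the action names, the counter structure, and the override value are assumptions.

```python
# Illustrative sketch of the three Humble AI actions (hypothetical names).
NO_OPERATION, OVERRIDE, ERROR = "no_op", "override", "error"

def apply_rule(prediction, condition_met, action, override_value=0.0, counters=None):
    """Return the (possibly modified) prediction; count every rule firing."""
    if counters is None:
        counters = {}
    if condition_met:
        # Every firing is counted, so even a no-op rule can be monitored.
        counters[action] = counters.get(action, 0) + 1
        if action == OVERRIDE:
            return override_value  # force the "safe" business value
        if action == ERROR:
            raise ValueError("prediction discarded by humility rule")
    return prediction  # no-op (or condition not met): prediction unchanged

counters = {}
print(apply_rule(42.0, True, NO_OPERATION, counters=counters))  # 42.0
print(apply_rule(42.0, True, OVERRIDE, 0.0, counters))          # 0.0
```

Note that the no-operation action still increments the counter: that is what makes the monitoring-only mode useful for tuning a rule before giving it teeth.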
In addition to triggers and actions, every rule in DataRobot’s Humble AI is monitored so you can view how often each rule has triggered. This information is provided on a summary page that can be accessed on the new Humility tab of your model deployment.
Monitoring humility rules over time allows you to see when and how often your triggers are firing so you can adjust your rules accordingly. This essentially allows you to fine-tune and improve humility rules for your model as you learn more about it.
Get Started with Humble AI Today
DataRobot’s Humble AI feature is now included in the MLOps product. No extra actions are required to enable the feature, and all existing deployments support it too. If you want to benefit from it, just head over to the new Humility tab for any deployment and enable it.
According to many studies and articles, only 10% of models get deployed to production, where they bring the most value to your decision-making. At DataRobot, we have an entire AI Trust team that works tirelessly to help you build models that are robust, reliable, stable, and now humble. With Humble AI, you get real-time analysis and protection for the predictions generated by any of your deployed models. This makes it easier to trust your deployed model and to get more value from it. We encourage you to begin incorporating Humble AI into your deployments and to explore the other cool new features in our latest 6.1 release.