Predictive AI routinely fails to deploy, so data scientists are spearheading a movement to focus on its business value. But stakeholders need a better understanding.
Predictive AI routinely fails to make it into production. The number crunching is sound and the data scientist delivers a viable machine learning model – but stakeholder objections sadly preclude deployment.
Rather than sticking with the traditional technical metrics that report on ML model performance, a proactive minority of data scientists bust out of their nerdy cubicles and deliver ML valuation: an appraisal of a model's potential value in concrete business terms. But this practice will face certain objections until it is better understood and more widely adopted. Here are four common stakeholder concerns about ML valuation and how to address them.

[Figure: A profit curve for targeting marketing with a machine learning model. As more customers are contacted, the profit goes up and then back down.]

However, a profit curve alone doesn't solve the business problem of planning and selling deployment. Why? Because it's usually based on certain business assumptions – such as the false positive and false negative costs – and uncertainty in those assumptions can call the entire curve into question.

Interactively adjusting those assumptions provides a much-needed intuition, a "feel" for how much these factors matter when making deployment decisions. As the shape of each chart responsively morphs, the user gets to visualize the impact of each factor. In many cases, changes to the curve remain within the range of acceptability, so deployment decisions can be made with confidence. In other cases, a curve may change drastically or detrimentally, signaling that the range of uncertainty is untenable. This means that ranges of uncertainty would need to be narrowed before gaining the confidence in model value needed to greenlight deployment.

This practice empowers you to valuate models despite uncertainties. You may not have direct knowledge of, for example, the monetary loss for each false positive, because it is privy to other business units, or because it would require new investigations or experimental discovery. By interactively altering the value of such variables, you gain instant insight into how much the uncertainty matters for driving deployment decisions. In this way, you can narrow that range, determining the limits within which the values would have to land for model deployment to be valuable.
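To make the profit-curve idea concrete, here is a minimal Python sketch. All figures below (contact cost, revenue per sale, model scores, purchase outcomes) are invented for illustration, not taken from the article. Cumulative profit rises while strong leads convert, then declines as weaker leads merely add contact cost:

```python
# Sketch of a profit curve for model-targeted marketing.
# All numbers here are illustrative assumptions.

COST_PER_CONTACT = 2.0    # assumed cost of contacting one customer
REVENUE_PER_SALE = 50.0   # assumed revenue when a contacted customer buys

# (model_score, actually_buys) pairs, pre-sorted by descending model score
customers = [(0.9, True), (0.8, True), (0.7, False), (0.6, True),
             (0.5, False), (0.4, False), (0.3, False), (0.2, False)]

def profit_curve(customers):
    """Cumulative profit after contacting the top-k scored customers."""
    profits, total = [], 0.0
    for _, buys in customers:
        total += (REVENUE_PER_SALE if buys else 0.0) - COST_PER_CONTACT
        profits.append(total)
    return profits

curve = profit_curve(customers)
best_k = max(range(len(curve)), key=lambda k: curve[k]) + 1
print(curve)    # profit rises, peaks, then declines as weak leads are contacted
print(best_k)   # number of customers to contact at the peak of the curve
```

Plotting `curve` against the number of customers contacted yields the rise-then-fall shape of the profit curve; `best_k` marks where a deployment would stop contacting under these assumed costs.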
By viewing how the shape of the curves morphs and how other pertinent metrics change, you gain critical intuition as to how big a difference such factors make – whether a deployment plan may be copacetic nonetheless, or whether some factors are "too uncertain" to move forward without additional efforts to narrow the range of uncertainty. Even if you already hold fairly ideal visibility into the business factors, some of them will inevitably remain subject to potential change or uncertainty – there are always business variables with such "wiggle room."

Moving from standard ML evaluation to ML valuation does not constitute an audit in the usual sense of the word. In fact, doing so usually strengthens the perception of an ML model rather than weakening it. The main outcome and purpose is to empower you to maximize deployed value and to demonstrate that potential value to your customers, colleagues and other decision makers. Stakeholders often perceive ML valuation as a validation of business value that they already intuitively believed was there. This drives deployment. A value-oriented lens on model performance provides vital evidence to help you convince others and ensure that your model gets deployed – and that it gets deployed more optimally.

At the same time, certain "audits" help rather than hurt. Audits can be oriented toward unearthing, proving and communicating potential value – placing a spotlight on an initiative's purpose and value so that the value will be realized. Moreover, in some cases assessing the potential business value might help you by revealing an addressable weakness in a model.

Most predictive AI projects plan to assess business results only after the ML model is already deployed. Accordingly, most fail to deploy. This kind of after-the-fact evaluation fails for a couple of reasons. The only way to pursue business value during model development is to appraise the model's business value along the way.
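The "wiggle room" analysis described earlier can be sketched as a simple sweep over one uncertain assumption. The counts and dollar figures below are hypothetical; the point is that solving for the break-even false-positive cost tells you the limit within which the true cost must land for deployment to remain valuable:

```python
# Sketch of a sensitivity check on an uncertain business assumption.
# All counts and dollar figures are illustrative, not from the article.

TRUE_POSITIVES = 400        # frauds caught at the chosen threshold (assumed)
FALSE_POSITIVES = 1200      # legitimate transactions flagged (assumed)
SAVINGS_PER_CATCH = 500.0   # assumed savings per fraud caught

def net_value(fp_cost):
    """Net business value at one threshold, given a false-positive cost."""
    return TRUE_POSITIVES * SAVINGS_PER_CATCH - FALSE_POSITIVES * fp_cost

# Sweep the uncertain assumption instead of fixing it at one guess:
for fp_cost in (25, 50, 100, 150, 200):
    print(fp_cost, net_value(fp_cost))

# Solve for the break-even cost: deployment stays profitable as long as
# the true false-positive cost lands below this limit.
break_even = TRUE_POSITIVES * SAVINGS_PER_CATCH / FALSE_POSITIVES
print(round(break_even, 2))
```

An interactive version of this sweep – sliders for each assumed cost, with the value curve redrawn on every change – is what gives stakeholders the "feel" for which uncertainties matter.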
And the only way to make prudent business decisions as to whether to deploy, which model to deploy and precisely how to deploy is to drive those decisions according to business value. Moreover, without an estimation of value, the model will likely never get deployed, so the project won't ever even reach a post-deployment evaluation. Explicitly planning for value increases value. It is possible that a model evaluated only technically could turn out to realize value if deployed – but that value would have been left unnecessarily to luck, since the process wouldn't have explicitly optimized for it. What's worse, the value would typically be nil, since most models that aren't valuated aren't deployed at all. Technical performance fails to compel stakeholders.

ML valuation as a practice also maintains ongoing value after deployment. By monitoring performance in business terms, changes to the model or to its deployment particulars can be driven to maximize business value. ML projects must be continually revisited and potentially redeployed, so model valuation is a must not only pre-deployment, but also "pre-redeployment."

For example, in addition to the bottom-line money saved with fraud detection, there's another important consideration: the sheer number of times a legitimate transaction is disrupted – that is, the number of false positives. A medium-sized bank may stand to win $26 million by placing the decision boundary where the savings curve peaks, but as they say, money isn't everything. The cost of those disruptions is already factored into the bottom-line savings, but they can also incur intangible or longer-term costs that haven't been accounted for – for example, by contributing to the bank's reputation for inconveniencing customers. A small sacrifice to the monetary bottom line can sometimes greatly reduce transactional disruptions.
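A minimal sketch of that trade-off, assuming a handful of hypothetical candidate thresholds with precomputed savings and false-positive counts (the numbers are made up, chosen only to mirror the magnitudes discussed here): among thresholds that give up at most a chosen fraction of peak savings, pick the one with the fewest false positives.

```python
# Sketch: trading a small bottom-line sacrifice for far fewer false positives
# by sweeping the decision threshold. All candidate outcomes are assumed.

candidates = [
    # (threshold, net_savings_$, false_positives)
    (0.50, 26_000_000, 41000),   # savings-maximizing threshold
    (0.60, 25_400_000, 24000),
    (0.70, 24_700_000, 16800),
    (0.80, 21_000_000,  9000),
]

def best_with_sacrifice(candidates, max_sacrifice=0.05):
    """Among thresholds giving up at most `max_sacrifice` of peak savings,
    pick the one with the fewest false positives."""
    peak = max(savings for _, savings, _ in candidates)
    ok = [c for c in candidates if c[1] >= (1 - max_sacrifice) * peak]
    return min(ok, key=lambda c: c[2])

threshold, savings, fps = best_with_sacrifice(candidates)
print(threshold, savings, fps)
```

Under these invented numbers, moving the threshold from 0.50 to 0.70 cuts false positives by roughly 59% while sacrificing 5% of the savings – the same kind of trade described next.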
In one case, false positives are reduced by 59% with only a 5% sacrifice in the bottom-line money saved – while also blocking 50% fewer transactions overall, cutting the disruption of commerce in half. You can read about a similar example for misinformation detection, where more misinformation is prevented by way of only a small sacrifice to the bottom line.

Get those models deployed! Addressing these four concerns will go a long way toward establishing ML valuation as a much-needed, widely adopted best practice – thereby greatly improving predictive AI's deployment track record.
Artificial Intelligence Data Science Machine Learning Predictive AI Predictive Analytics ML Valuation