December 14, 2020

What’s “Good” in Data Science? It Depends Who You Ask.

Photographs: Alistair Croll (left) and Justin Norman (right).

At our November Scaletech event, Justin Norman, VP of Data Science at Yelp, talked about how his team approaches data science.

Justin recently wrote an article on bringing an AI product to market, in which he mentioned the old truism about product delivery: products can be fast, cheap, or good, and you get to pick only two.

The diagram shows three overlapping circles labelled good, fast, and cheap:

  • Fast + cheap: Want it now, and don’t have the money? It’s going to suck.
  • Fast + good: You’ll pay serious premiums to get something done well quickly.
  • Good + cheap: If you can wait, a small team can craft something amazing. But you don’t get to ask for a deadline.

But this doesn’t answer the obvious question: what counts as good? Good depends on who you ask, and Justin described (I’m paraphrasing here) three archetypes who care about AI products:

  • The Operator. This person has to implement the product. They have to keep it running smoothly, with high uptime and few changes. The product has to adapt to changes in data and parameters, and new models shouldn’t cost a lot to train.
  • The Scientist. This is the researcher, who wants to uncover new insights. They care about precision, accuracy and reproducibility. They seek to minimize uncertainty.
  • The Product Owner. This is the businessperson, who wants to find market advantage. When they market the product, they need certainty: What are the benefits, what’s the ROI, will it definitely work for customers?

Understanding the (healthy) tension between these three stakeholders is key to delivering great AI-powered products. It defines the metrics for success, the trade-offs we’re willing to make, the testing we do and the cost and speed of the final product.

The Operator
  • Main focus: Is it implementable?
  • Metrics that matter: Mean time between failures, drift, computing cost.
  • Perspective: The past. Will things continue as they were?
  • Audience: Collaborators.

The Scientist
  • Main focus: Is it correct?
  • Metrics that matter: Confidence intervals, p-values, reproducibility.
  • Perspective: The present. Is it correct now?
  • Audience: Academic peers.

The Product Owner
  • Main focus: Is it profitable?
  • Metrics that matter: Return on investment (ROI), total cost of ownership (TCO), predictability, consistency.
  • Perspective: The future. How will this improve the business?
  • Audience: Business owners.
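To make the contrast concrete, here is a minimal Python sketch computing one headline metric per stakeholder. All of the numbers are hypothetical, invented purely for illustration; they don’t come from Yelp or any real product.

```python
import math
import statistics

# Operator: mean time between failures, from hypothetical hours between incidents.
incident_gaps_hours = [160, 210, 185, 240, 175]
mtbf = statistics.mean(incident_gaps_hours)

# Scientist: a 95% confidence interval for model accuracy, using the
# normal approximation over n hypothetical evaluations.
accuracy, n = 0.91, 2000
stderr = math.sqrt(accuracy * (1 - accuracy) / n)
ci_low, ci_high = accuracy - 1.96 * stderr, accuracy + 1.96 * stderr

# Product Owner: return on investment from hypothetical revenue and cost figures.
revenue_lift, total_cost = 120_000, 80_000
roi = (revenue_lift - total_cost) / total_cost

print(f"MTBF: {mtbf:.0f} h")
print(f"Accuracy 95% CI: [{ci_low:.3f}, {ci_high:.3f}]")
print(f"ROI: {roi:.0%}")
```

Each stakeholder would call their number “good” in isolation; the management task described below is deciding how to weigh them against each other.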

These three perspectives explain why the Netflix Prize-winning algorithm, though technically better than simpler competing solutions, was never put into production: it was too complicated to operate. Healthy AI product management is a trade-off between business value, scientific correctness, and pragmatic manageability.

Certain factors determine which of the three stakeholders takes priority.

  • For AI products that have to scale, or where downtime would have a widespread impact on the business and its users, the Operator will have the upper hand.
  • For products where the cost of false positives or false negatives is high, or where the algorithms will be reviewed by regulators, the Scientist’s perspective will prevail. The research needs to be defensible and explainable.
  • When the business is riding on a strategic feature, competitive pressures are high, or the sales cycle is too long, the Product Owner’s perspective will win out. The product needs to move the bottom line or the business will fail.

Managers and executives need to set the balance between these three competing viewpoints, communicate them clearly to teams, and follow up with the appropriate metrics for each perspective. Only then will we have an idea of what “good” means.
