If Algorithms Know All, How Much Should Humans Help? - NYTimes.com

Steve Lohr writes for NYTimes.com:

Armies of the finest minds in computer science have dedicated themselves to improving the odds of making a sale. The Internet-era abundance of data and clever software has opened the door to tailored marketing, targeted advertising and personalized product recommendations.
Shake your head if you like, but that’s no small thing. Just look at the technology-driven shake-up in the advertising, media and retail industries.
This automated decision-making is designed to take the human out of the equation, but it is an all-too-human impulse to want someone looking over the result spewed out of the computer. Many data quants see marketing as a low-risk — and, yes, lucrative — petri dish in which to hone the tools of an emerging science. “What happens if my algorithm is wrong? Someone sees the wrong ad,” said Claudia Perlich, a data scientist who works for an ad-targeting start-up. “What’s the harm? It’s not a false positive for breast cancer.”

I have written here many times that analytics is a combination of "art" and "science".  Pairing data with human insight leads to the best action, yet some data scientists want to remove the "art" from the equation.  The belief is that computers and algorithms can see more in the data and the behavior than a human ever could, and that once there is enough data about an individual's behavior, no "art" is left: all the data points are accounted for, so the "science" is indisputable.

However, I have a hard time believing that the "art", or human insight, will ever be replaced.  There are too many variables still unknown, and a computer can't account for all of them.  The "science" will keep getting better at explaining "what" happened, but it doesn't understand the business operations and strategy behind the decisions that were made.  I am a true believer in "big data" coming of age.  I believe it is fundamentally changing the way companies have to do business, but never forget the human side: the "art" of understanding "why" the data is telling you "what" is happening.

These questions are spurring a branch of academic study known as algorithmic accountability. Public interest and civil rights organizations are scrutinizing the implications of data science, both the pitfalls and the potential. In the foreword to a report last September, “Civil Rights, Big Data and Our Algorithmic Future,” Wade Henderson, president of The Leadership Conference on Civil and Human Rights, wrote, “Big data can and should bring greater safety, economic opportunity and convenience to all people.”
Take consumer lending, a market with several big data start-ups. Its methods amount to a digital-age twist on the most basic tenet of banking: Know your customer. By harvesting data sources like social network connections, or even by looking at how an applicant fills out online forms, the new data lenders say they can know borrowers as never before, and more accurately predict whether they will repay than they could have by simply looking at a person’s credit history.
The promise is more efficient loan underwriting and pricing, saving millions of people billions of dollars. But big data lending depends on software algorithms poring through mountains of data, learning as they go. It is a highly complex, automated system — and even enthusiasts have qualms.
“A decision is made about you, and you have no idea why it was done,” said Rajeev Date, an investor in data-science lenders and a former deputy director of the Consumer Financial Protection Bureau. “That is disquieting.”
Black-box algorithms have always been troubling for most people, even the smartest executives trying to understand their own business.  Humans need to see why.  There is a reason Decision Trees are among the most popular data models, even though they typically have less predictive power than counterparts like Neural Networks.

Decision Trees output a result that a human can interpret: a road map to why the prediction was made.  This makes us humans feel comfortable.  We can tell a story around the data that explains what is happening.  With a black-box algorithm, we have to trust that what is going on inside is correct.  We do have results to measure against, but as these algorithms become more commonplace, it will be imperative that humans can trust them.  In the bank loan example above, an applicant who is denied needs to understand why and what actions they can take to secure a loan in the future.
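To make that concrete, here is a minimal sketch using scikit-learn.  The loan features, values, and the denied applicant below are made up for illustration, not drawn from the article; the point is only that a fitted tree prints as readable rules, and a single denial can be traced back through the exact conditions that fired.

```python
# A minimal sketch with scikit-learn. Feature names, data, and the
# applicant are hypothetical, invented for this illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["income", "late_payments", "owns_home"]
X = [
    [45000, 2, 0],
    [120000, 0, 1],
    [30000, 5, 0],
    [90000, 1, 1],
]
y = [0, 1, 0, 1]  # 1 = repaid, 0 = defaulted

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The whole model prints as readable if/else rules -- the "road map".
print(export_text(tree, feature_names=feature_names))

# For one denied applicant, walk the decision path and list the exact
# conditions that fired, i.e., reasons a loan officer could relay.
applicant = np.array([[30000, 5, 0]])
for node in tree.decision_path(applicant).indices:
    feat = tree.tree_.feature[node]
    if feat < 0:  # leaf node: no test to report
        continue
    threshold = tree.tree_.threshold[node]
    op = "<=" if applicant[0, feat] <= threshold else ">"
    print(f"{feature_names[feat]} = {applicant[0, feat]} {op} {threshold:.1f}")
```

Those printed conditions are exactly the kind of simple narrative a customer could be handed, which a neural network's weights cannot provide directly.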

This ties into creating superior customer experiences.  Companies that can harness "big data" and black-box algorithms while creating simple narratives customers can understand will have a significant competitive advantage.  Building algorithms to maximize profit is a very businesslike approach, but it leaves out the customer experience.  Over time, customers will resent the lack of knowledge and communication, and they will not come back.  A bank may say that is fine, those applicants would have defaulted anyway.  But what happens when too many people have bad experiences?  I don't believe that is a good long-term strategy.

In a sense, a math model is the equivalent of a metaphor, a descriptive simplification. It usefully distills, but it also somewhat distorts. So at times, a human helper can provide that dose of nuanced data that escapes the algorithmic automaton. “Often, the two can be way better than the algorithm alone,” said Gary King, the Harvard social scientist quoted in the article.
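One way to picture that pairing, sketched below with score thresholds of my own choosing rather than anything from the article: let the algorithm decide the clear-cut cases and route the ambiguous middle to a person who can supply the nuance the model lacks.

```python
# A minimal human-in-the-loop sketch. The thresholds (0.8 / 0.2) are
# hypothetical: the model handles confident cases on its own, and a
# human reviews everything in between.
def route_application(repay_probability: float,
                      approve_at: float = 0.8,
                      deny_at: float = 0.2) -> str:
    """Turn any model's repayment score into an action."""
    if repay_probability >= approve_at:
        return "auto-approve"
    if repay_probability <= deny_at:
        return "auto-deny"
    return "send to human reviewer"  # the "art" gets the ambiguous middle

for score in (0.95, 0.50, 0.10):
    print(f"score={score:.2f} -> {route_application(score)}")
```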

Businesses also need to focus on the human side.  When we forget there is an "art" that enhances all of these great algorithms, businesses become too focused on transaction efficiency instead of customer experience, which in turn leads to lower sales.

Source: http://www.nytimes.com/2015/04/07/upshot/i...