The Truth about Machine Learning: What It Is And Isn’t, What It Can And Can’t Do

April 25, 2019

by Jelani Harper

SAN FRANCISCO - Machine learning is regarded in a variety of ways: as the enterprise-wide savior of horizontal business problems, as a synonym for artificial intelligence, and even as the pathway to a future of intelligent machines rivaling humans.

In reality, however, machine learning is a statistical methodology for detecting patterns in historical and current data, from which it learns to make predictions. It's a single branch of AI and, as useful as the technology is, it has its share of flaws. Machine learning models show an undue propensity towards bias, complicated models struggle with explainability, and in several instances deep learning applications exacerbate both issues.

According to Franz CEO Jans Aasman, many of these shortcomings are aggravated by people's readiness to accept machine learning's results without understanding why they were produced. "The why part is the thing with explainability," says Aasman. "Explainability is about why [models produce their particular results]."


Correlation versus causation

One way to understand why machine learning models produce the results they do pertains to the notion of causation, which is easily confused with correlation.

Judea Pearl’s The Book of Why: The New Science of Cause and Effect emphasizes the value of causation, as opposed to correlation, for machine learning results. “Everyone in statistics learns that term correlation,” Aasman acknowledges. “But they kind of got stuck in that.” The massive pattern identification capabilities of machine learning can discern correlations between any number of events. Determining causation matters far more, however, because mere correlation between factors and model outputs is often not enough to explain them.

Aasman references a use case in which “we found that there’s a very high correlation between bipolar and HIV. But that’s correlation; that doesn’t mean anything.” Although one could infer any number of conclusions from this correlation, the search for causality revealed that “if you have HIV then they instantly unleash a whole battery of tests on you, so they also find out that you’re bipolar,” Aasman says.

This example illustrates a critical aspect of machine learning: “machine learning and statistics will give you interesting numbers, but it doesn’t mean [a thing] if you don’t know the context,” Aasman warns.
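To make the confounder concrete, here is a minimal Python sketch; the prevalences, testing rates, and variable names are invented for illustration and are not data from the use case Aasman describes. Two independent conditions end up correlated in the recorded diagnoses purely because one of them triggers extra testing.

```python
# Minimal sketch of a testing confounder; all rates below are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# The two underlying conditions are generated independently.
has_hiv = rng.random(n) < 0.01
has_bipolar = rng.random(n) < 0.03

# HIV patients get a full battery of tests, so their bipolar disorder is
# almost always detected; for everyone else it is detected only rarely.
detection_rate = np.where(has_hiv, 0.95, 0.10)
bipolar_recorded = has_bipolar & (rng.random(n) < detection_rate)

# The recorded diagnoses show a clear positive association even though
# neither condition causes the other.
rate_hiv = bipolar_recorded[has_hiv].mean()
rate_rest = bipolar_recorded[~has_hiv].mean()
print(f"recorded bipolar rate, HIV patients:  {rate_hiv:.4f}")
print(f"recorded bipolar rate, everyone else: {rate_rest:.4f}")
print(f"correlation: {np.corrcoef(has_hiv, bipolar_recorded)[0, 1]:.3f}")
```

Run on the simulated patients, the recorded bipolar rate among HIV patients comes out roughly ten times the rate among everyone else, even though the simulation contains no causal link between the two conditions.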


Biased versus fair models

Building machine learning models that yield unbiased, fair results is intrinsically hard, and even deliberate attempts to remove bias do not always succeed. Aasman details a criminal justice use case in which a deep learning model analyzed the lengths of the sentences judges handed down “so they could replace the judge and make a more objective punishment for a crime.”

After building a complicated model spanning a host of judges, crimes, lengths of prison stays, and factors such as the nature of the crime and the perpetrator’s age, the team found that the top factor determining how long someone was sentenced was whether the accused was “black or not,” Aasman says. “That was a completely racist model.”

The modelers then removed race from the model, ran it again, and found that race was effectively still the top factor for jail sentences: zip codes reflect racial distributions and acted as a proxy for the removed attribute. The episode attests to the complexity of creating unbiased machine learning models.
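A toy simulation makes the proxy effect visible. The sketch below uses entirely invented data and scikit-learn's random forest rather than the original study's deep learning model: race is dropped from the training features, yet zip code, which tracks race, absorbs the biased signal.

```python
# Minimal sketch of proxy bias; the data and effect sizes are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 20_000

race = rng.integers(0, 2, n)                       # sensitive attribute
# Residential segregation: zip code strongly tracks race.
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)
age = rng.integers(18, 70, n)
severity = rng.integers(1, 10, n)

# Historical sentences encode the bias: race directly adds years.
sentence = 2 * severity + 4 * race + rng.normal(0, 1, n)

# Train *without* race; only the seemingly neutral features remain.
X = np.column_stack([zip_code, age, severity])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, sentence)

# zip_code absorbs the racial signal and ranks far above age, a feature
# that genuinely has no effect on the simulated sentences.
for name, score in zip(["zip_code", "age", "severity"], model.feature_importances_):
    print(f"{name}: {score:.2f}")
```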

The notion of explainability is almost indispensable to creating unbiased models because it pinpoints exactly which weights and measures produced which effects on models’ scores. According to Aasman, a large medical center used a random forest model and a deep learning model to predict whether patients would require intubation. Although the latter may be more accurate, it isn’t favored because “they can’t explain it,” Aasman explains. “They prefer the random forest model because then you can say okay, the first factor is age, the second factor is weight, and you can kind of go down the decision tree. At least now a human being can look at it.”
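The white-box appeal of the decision tree is easy to demonstrate. The following minimal sketch uses invented age and weight data and scikit-learn, not the medical center's actual model; the point is only that a fitted tree's rules can be printed and read top to bottom.

```python
# Minimal sketch of a readable, white-box model; the data is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 5_000
age = rng.integers(20, 90, n)
weight = rng.normal(80, 15, n)

# Toy ground truth: older, heavier patients are likelier to need intubation.
p = 1 / (1 + np.exp(-(0.06 * (age - 60) + 0.03 * (weight - 85))))
intubated = rng.random(n) < p

X = np.column_stack([age, weight])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, intubated)

# Unlike a deep net's hidden layers, the fitted rules print as plain text
# that a clinician can walk through factor by factor.
print(export_text(tree, feature_names=["age", "weight"]))
```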


The deep learning effect

It’s not uncommon for deep learning models to render explainability even more difficult than it is for traditional machine learning. Nevertheless, the distinctions between conventional machine learning and deep learning (a form of advanced machine learning) may be smaller than expected.

According to Aasman, the primary differences between deep learning and classic machine learning are that the former involves more computers, much more compute power, and many more parameters and hyperparameters. Beyond that scale, “you cannot make a distinction between classic machine learning and neural networks,” Aasman explains.

The tradeoff is that deep learning models with complicated neural networks, commonly referred to as deep neural nets, tend to be more accurate than traditional machine learning. The intricacy of the inner layers of these networks, however, makes explainability much more difficult than it is with so-called white-box techniques like decision trees.
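A rough way to see that scale gap is simply to count parameters. The back-of-the-envelope sketch below (with arbitrarily chosen layer widths, not figures from Aasman) compares a linear model to a modest fully connected network over the same 100 input features.

```python
# Back-of-the-envelope parameter counts; layer widths are arbitrary.
def dense_params(widths):
    """Weights plus biases for fully connected layers of the given widths."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(widths, widths[1:]))

print("linear model:  ", dense_params([100, 1]))            # 101 parameters
print("small deep net:", dense_params([100, 256, 256, 1]))  # 91,905 parameters
```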


Just the truth

The truth about machine learning is that it encompasses both neural networks and straightforward white-box techniques like regression models, although even complex random forest models may become black boxes that are arduous to explain. Deep learning is advanced machine learning with greater compute power and larger numbers of parameters and hyperparameters in its models. And although the pattern recognition of machine learning is ideal for finding correlations, causation is required to truly leverage its results to solve complicated business problems. Explainability can considerably aid the building of fair models, though it may prove difficult to remove bias from machine learning models completely.

Still, the task of doing so is at the forefront of the agendas of many data scientists and data modelers.

Jelani Harper is an editorial consultant serving the information technology market, specializing in data-driven applications focused on semantic technologies, data governance and analytics.
