
Bias in Machine Intelligence: Changing the Language

02/18


Written by Dr. Drew Lipman, Lead Data Scientist at Hypergiant



When a layman hears the words “Artificial Intelligence”, they might think of an industrially designed robot capable of bending humanity to its will. They might even think of a faceless operating system they can share intimate life details with, one that also happens to be voiced by Scarlett Johansson. In a more likely scenario, though, they might think of an invisible intelligence that is cold and emotionless but rooted in truth. People in this latter camp might believe that these invisible intelligences converge on fact because they were trained with honesty, awareness, and without bias. That assumption could not be further from the truth.

The reality is that the majority of intelligent systems are fundamentally biased. These systems are often trained on skewed data sets, on interactions with the worst of humanity, and with algorithms that are subjective rather than objective. Historically, these sources of garbage data have tended to go unnoticed. In recent years, however, “bias in computer algorithms” is increasingly seen as an issue that needs to be addressed. From machine learning systems guessing gender to computer vision labeling a black couple as gorillas, we are inundated with examples of how biased Artificial Intelligence programs are, and with proposals for how to address the problem.

But what, really, is the problem? “Bias”, like so many words that live simultaneously in technical and social dialects, has multiple definitions. According to Merriam-Webster it can mean “an inclination of temperament or outlook; especially: a personal and sometimes unreasoned judgment: prejudice”, or it can mean a “deviation of the expected value of a statistical estimate from the quantity it estimates” and the “systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.” These two definitions, one social and one technical, are the two that need to be addressed. The former often produces illegal or immoral results; the latter is an artifact of statistics, and hence of machine intelligence. Which is the correct definition in this context? The answer is: it depends. The single word “bias”, however, paints the entire issue as one simple problem: we will just remove the bias! The truth, as it almost always is, is filled with subtlety and minutiae. Consider an overly simplified toy example: XYZ Corp. uses a machine learning algorithm to help HR parse the applicants to a position, and the algorithm disproportionately favors one ethnicity over another. What is the cause of this “bias”? Is it the algorithm itself? The people who wrote the algorithm? The data the algorithm was trained on? The goal that was framed when the algorithm was designed? Or some other source? Without a refinement of our understanding, and our language, the problem can be daunting in size.

So, how do we fix this issue? Painting everything with a single brush is not the way. We need to refine our terminology and educate people about the issue. To help distinguish between the two definitions, we’ll use “social bias” for the first, an inclination of temperament or outlook, especially a personal and sometimes unreasoned judgment or prejudice, and “statistical bias” for the second. The methods for addressing these, highlighted below, are fundamentally different and should be thought of separately, even if they produce similar results.

To understand why these are fundamentally different, we need a better grasp of statistical bias. Suppose there is a fundamental truth about the world we would like to know, a collection of data points related to this truth, and a function that uses the data to estimate that truth. The function is statistically biased when its predictions are systematically skewed away from the truth. For example, suppose we want to predict the age at which a potential employee will leave the company, using data about that employee, but the algorithm has no way to anticipate the medical issues that force some employees into early retirement; then its predictions will tend to be too large. This is statistical bias: the produced results are skewed too high. However, if all you see is that the function consistently overestimates employment duration, it can look like social bias. To illustrate the point: suppose our HR algorithm at XYZ Corp. uses predictions of employment duration when deciding which candidates to propose. It would then appear to be biased against unhealthy candidates, and hence against candidates from poorer neighborhoods.
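To make the idea concrete, here is a minimal sketch in Python. The scenario, rates, and numbers are invented purely for illustration: a model that cannot see health-related attrition will, on average, predict the healthy baseline and therefore systematically overestimate true employment duration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: true employment duration (years) for 10,000 employees.
# A fraction leave early for medical reasons the model has no features to predict.
n = 10_000
baseline = rng.normal(loc=12.0, scale=3.0, size=n)   # duration absent health issues
medical_exit = rng.random(n) < 0.15                   # assumed 15% leave early
true_duration = np.where(medical_exit, baseline * 0.4, baseline)

# An estimator that only models the healthy baseline will, on average,
# predict that baseline -- its best guess given the features it can see.
prediction = baseline

bias = np.mean(prediction - true_duration)
print(f"Average overestimate: {bias:.2f} years")  # positive => systematically too large
```

Nothing in this sketch is prejudiced in the social sense; the skew comes entirely from information the estimator never had.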

Now, to counter that, suppose the algorithm is a knowledge-based system. That is, it is developed by watching the HR employees at XYZ Corp. and attempting to simulate their decisions, methods, and processes. If there is a social bias within HR against poor people, and the algorithm faithfully captures it, then we get a result that looks remarkably similar to the first example, and yet is fundamentally different in how it should be treated.
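Here is a minimal sketch of that second scenario, again with entirely invented data: the labels come from simulated human decisions that weigh an applicant’s postcode alongside skill, and a simple classifier trained on those decisions learns to reproduce the prejudice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical illustration: historical HR decisions that (unfairly) favour
# applicants from wealthier postcodes, independent of skill.
n = 5_000
skill = rng.normal(size=n)                    # genuine qualification signal
wealthy_area = rng.integers(0, 2, size=n)     # 1 = wealthy postcode (proxy feature)
# Simulated human decisions: skill matters, but postcode is weighted in as well.
hired = (0.8 * skill + 1.5 * wealthy_area + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, wealthy_area])
model = LogisticRegression().fit(X, hired)

# The learned weights reproduce the prejudice baked into the labels.
print(dict(zip(["skill", "wealthy_area"], model.coef_[0].round(2))))
```

The code and the fit are statistically unremarkable; the problem lives in the labels, which is exactly why this case calls for a different remedy than the previous one.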

To further highlight these dissimilarities, consider an idealized design cycle for a machine learning program: 1) XYZ Corp. determines that there is a problem it would like to solve, 2) the structure of the program is outlined, 3) data are collected, 4) the algorithm is trained, and 5) the program is implemented. Consider step one: there are no data points or algorithms yet, so any bias that appears at this stage must be a social bias. Step two is similarly easy to parse: if the method outlined has a fundamental statistical bias, the remaining steps will reflect that bias. With step three, however, the issue is much less clear, which raises the question: can data be biased? I believe the answer is no. Data can be collected in a biased manner, or misrepresented in a biased manner, but the data, in and of itself, has neither an inclination nor a statistical expected value. To paraphrase: “Science is never wrong. Scientists are frequently wrong.” So, stop blaming the data! Biased collection is generally a statistical problem; if the data were deliberately collected in a biased manner, then what the data set represents is being misrepresented, which produces social bias. Bias in step four, as with step three, depends more on intent than on anything more substantial. Many training procedures partition the data into training and testing sets; if the partition is manipulated, the training set is overused, or any of a number of other problems creeps in, bias can be introduced. This is part of what makes machine learning more of an art than a science: knowing how to avoid the myriad pitfalls and snares in the underbrush of training. Finally, implementation, which, if left untampered with, is generally bias-free.
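As a small illustration of step four, consider the sketch below; the data are pure noise, invented for the example. A model judged on the data it was trained on looks far better than it really is, while a properly held-out test set gives the honest, roughly chance-level picture.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

# Hypothetical illustration: the features are pure noise, so honest
# accuracy on unseen data should hover around 50%.
X = rng.normal(size=(500, 20))
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("training accuracy:", accuracy_score(y_train, model.predict(X_train)))  # near 1.0 (memorized)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))    # near 0.5 (honest)
```

Report the first number instead of the second, or peek at the test set while tuning, and you have introduced exactly the kind of training-stage bias described above.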

So, how can we tell them apart? Well, other than in obvious cases (I very much doubt the folks at Google told their algorithm that African Americans were, in fact, gorillas), you can’t. It is very hard to understand exactly what each neuron in a network does; this is part of the magic of neural networks. Moreover, determining where in a deep neural network, with a dozen layers and thousands of neurons in each layer, a particular result appears is a task of herculean proportions.

How do we deal with these biases? This issue is at once more complex and more addressable. As a culture we have been trying, with mixed success, to deal with social biases in various forms since the very beginning. Since I am neither a social engineer nor a psychologist, I will leave those to people who are better equipped to approach the problem. For statistical bias there is a plethora of options, but most of them reduce to one: more education. The more we are educated on statistical methods, sampling, minimum-variance unbiased estimators, minimum mean squared error estimators, the better prepared we are to be aware of statistical error and eliminate it up front. The better educated we are on the nuts and bolts of the machine learning techniques being implemented, the better we can see the problems that may, and do, arise from overfitting, oversimplification, and other issues.
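A classic, concrete instance of what that education buys you is the textbook variance estimator: dividing by n gives a statistically biased estimate, while dividing by n − 1 corrects it. A quick sketch, with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# A textbook example of statistical bias: the "divide by n" variance estimator
# systematically underestimates the true variance; dividing by n - 1 corrects it.
true_var = 4.0
samples = rng.normal(loc=0.0, scale=np.sqrt(true_var), size=(100_000, 5))  # many small samples

biased = samples.var(axis=1, ddof=0).mean()     # divide by n
unbiased = samples.var(axis=1, ddof=1).mean()   # divide by n - 1

print(f"true variance:      {true_var}")
print(f"biased estimator:   {biased:.3f}")      # noticeably below 4.0
print(f"unbiased estimator: {unbiased:.3f}")    # close to 4.0
```

Knowing that this correction exists, and why, is exactly the kind of up-front awareness that keeps statistical bias from quietly leaking into a production model.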

In summary, the more we know about what we are attempting to do, on all levels, the more control we have over reducing biases. But for humans to understand ideas, we need words for those ideas. The more refined the idea, the more subtle the concept, the better the nomenclature needs to be. The current approach of labeling all biases as just “bias” does not provide the refinement we need in order to approach, and solve, the fundamental problem.