Is Neural Network Hype Killing Machine Learning?

09/18






In the fall semester of 2016, I attended a seminar series taught by Dr. Sather-Wagstaff on the applications of algebraic topology, in particular homology. Homology is a way to connect two sequences of objects, often from different branches of mathematics, to each other in a meaningful way; ideally, you gain information about each sequence by studying the other. Throughout that semester, we learned several abstract concepts and ideas that have proven useful when approaching problems. Consider, for example, the difficult and relevant problem of determining the number of separate geometric regions a problem defines (and whether there are any holes), and turning it into a computational one: computing Betti numbers.
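To make that reduction concrete, here is a minimal sketch of my own (not the seminar's material): given the boundary matrices of a small simplicial complex, each Betti number falls out of a pair of matrix ranks via rank-nullity. The ranks here are computed over the rationals, which is enough when there is no torsion.

```python
import numpy as np

def betti_numbers(boundary_maps, num_simplices):
    """b_k = dim ker(d_k) - dim im(d_{k+1}), i.e.
    b_k = n_k - rank(d_k) - rank(d_{k+1}) by rank-nullity."""
    betti = []
    for k, n_k in enumerate(num_simplices):
        rank_dk = np.linalg.matrix_rank(boundary_maps[k]) if boundary_maps[k].size else 0
        rank_dk1 = (np.linalg.matrix_rank(boundary_maps[k + 1])
                    if k + 1 < len(boundary_maps) and boundary_maps[k + 1].size else 0)
        betti.append(n_k - rank_dk - rank_dk1)
    return betti

# A hollow triangle: three vertices, three edges, no filled-in face.
d0 = np.zeros((0, 3))               # vertices have trivial boundary
d1 = np.array([[-1, -1,  0],        # columns: edges (0,1), (0,2), (1,2)
               [ 1,  0, -1],
               [ 0,  1,  1]])
print(betti_numbers([d0, d1], [3, 3]))  # [1, 1]: one region, one hole
```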



What does this have to do with machine learning, exactly? Ask yourself: Is converting an abstract problem into a numeric computation that can be solved not one of the fundamental problems of machine intelligence? The goal is not, never has been, and should not be to build larger or more complex neural networks. The goal of machine learning has always been to make abstract problems understandable to machines, that is, to compute answers to problems. Data scientists and industrial AI developers seem to get hung up on neural networks, or "Software 2.0" as some call it, as the only acceptable approach to a problem. Consequently, the news, and therefore public opinion, says little about more fundamental techniques for converting the abstract into the computational, and is instead filled with stories about how neural networks have achieved this or that result.



That being said, neural networks do offer a wide range of applications: when trained on a large, properly curated dataset with enough computational power and time, they have produced results that match or exceed both human benchmarks and the performance of any other machine learning algorithm. This is why they represent such a rich area of research and development.



The Faults at Hand

What, then, are some fundamental problems with neural networks? There are several, but we will discuss three in particular: the transparency of the resulting function, failure analysis when things go wrong, and the general lack of understanding in the community.



What do we mean by transparency? Consider the following toy example: an insurance company hires a data scientist to help streamline its process of identifying people eligible for coverage. After an appropriate amount of time analyzing the data, environmental factors, health metrics, and employment trends (not to mention a lot of natural language processing of news articles), the data scientist builds a neural network that takes applicant data as input and outputs details such as the expected return value and policy duration.
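To give a sense of what such a model might look like, here is a small sketch using scikit-learn; the features and targets are random stand-ins, since no real applicant data is described here.

```python
# Hypothetical illustration only: random features stand in for applicant
# data, and the two targets stand in for expected return and duration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                 # stand-in applicant features
y = np.column_stack([X @ rng.normal(size=6),  # stand-in expected return
                     X @ rng.normal(size=6)]) # stand-in policy duration

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(X, y)
print(model.predict(X[:1]))   # a prediction, but no explanation of *why*
```

The model produces predictions, but nothing inside it explains why a given applicant was scored the way they were, and that is exactly the problem.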



That may seem reasonable, but under a transparency law similar to the one the EU proposed, rejected applicants have the right to ask how their data was processed and used in insurance risk assessments. A neural network does not really offer that kind of transparency, especially in the insurance business, where it is immoral, and illegal, to deny an individual coverage based on factors such as ethnicity. How, for example, does a company defend itself against the allegation that its protocol denies people insurance based on ethnicity? How do you prove that the neural network is not, in fact, doing that? Neural networks are very good at correlating data, and they often find correlations that are not really there or (worse) are illegal to use. Sure, you can simply remove the sensitive fields from the training set, but a feature such as hometown is genuinely useful and also carries a large amount of statistical evidence about sensitive characteristics such as ethnicity and religion. In short, even if you never give the network the kind of data that might produce illegal correlations, it may make those connections anyway through proxies.
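Here is a synthetic illustration of that proxy effect; every feature and number below is made up. The sensitive attribute is withheld from training, yet a correlated "hometown" code recovers it almost perfectly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
sensitive = rng.integers(0, 2, size=n)                   # never given to the model
hometown = sensitive * 10 + rng.integers(0, 3, size=n)   # strongly correlated proxy
income = rng.normal(50, 10, size=n)                      # unrelated feature

# A probe trained only on the "innocuous" features reconstructs
# the withheld sensitive attribute from the proxy.
X = np.column_stack([hometown, income])
probe = LogisticRegression().fit(X, sensitive)
print(f"sensitive attribute recoverable: {probe.score(X, sensitive):.0%}")
```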



Now, consider autonomous vehicles. They are the great, long-hoped-for promise of computer vision; if nothing else, easing the tribulations of long-distance driving would make them worthwhile. Inevitably, however, any accident raises the question of whether the driver, the vehicle's mechanics, or the AI is responsible. Who is to be sued for damages?



Knowing why an artificial intelligence failed at a task is often just as important as knowing when it failed. Consider the oft-cited example in which a neural network image classifier labeled African Americans as gorillas. We know that the neural network failed, but how did it fail, and what can Google do to fix the problem? Is it truly a cultural diversity problem, as suggested in the article? Or is it an artifact of the training set reflecting population dynamics and the fact that African Americans appear in only 16% of human photos? Could lighting conditions have confused the classifier, or was it some other cause entirely? Given the susceptibility of neural networks to malicious real-world attacks and fault injection, determining the cause of any failure is very important.
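To give one concrete flavor of such attacks, below is a minimal sketch of the fast gradient sign method (FGSM), one standard adversarial attack; the linear model, inputs, and epsilon budget are toy placeholders.

```python
# A minimal FGSM sketch: nudge the input in the direction that most
# increases the loss, by at most epsilon per coordinate.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Toy usage: a linear "classifier" over four input features.
model = torch.nn.Linear(4, 3)
x, label = torch.rand(1, 4), torch.tensor([0])
x_adv = fgsm(model, x, label)
print((x_adv - x).abs().max().item())  # perturbation bounded by epsilon
```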



The issue with neural networks is that, many times, we just do not know what caused an incorrect classification.



Some Much-Needed Clarification

Ask yourself: What are neural networks? If you answered "machine learning models based on the behavior of neurons," then congratulations, you have bought into the hype. Try this: ask your friendly neighborhood neurobiologist what artificial neural networks have to do with biological neural networks. One of three things will happen: you will be laughed out of the office, argued with, or given a look of disgust. People believe that biological neural nets closely resemble artificial ones, and that the former genuinely inspired the latter, when in fact they do not.



A neuron (artificial or organic) receives inputs, often of varying strength, from other neurons and does something as a result. That is the extent of the similarity between the two. Basically, artificial neural nets and biological neural nets have about as much in common as I have with Charlize Theron. In short, one is an example of beauty and efficiency, while the other is an amusing facsimile created late one night in a basement. This is not to say that there is no value in using biology to inspire programs, but (as is the case with most biologically inspired algorithms) the similarities are superficial at best, and more about having a cool-sounding name to attract interest, and subsequently funding and citations, for your research.
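For reference, here is essentially all the "biology" an artificial neuron contains, as a minimal sketch with arbitrary weights and inputs:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """Weighted inputs of varying strength in, one activation out.
    This weighted sum plus a nonlinearity is the whole analogy."""
    return np.tanh(np.dot(weights, inputs) + bias)

print(neuron(np.array([0.5, -1.2, 3.0]),
             np.array([0.8,  0.1, -0.4]),
             bias=0.2))
```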



Where Does That Leave Us?

We do not understand how neural networks make decisions and cannot explain them to others when needed; fixing a neural network when it goes wrong is such a monumental task that Google often just begins again; and the public generally does not understand what neural networks are or how they are designed. All of that being said, when a problem comes up and a so-called "intelligent" solution is called for, what do most people want? A neural network! When do they want it? Right now! There seems to be no real interest in pursuing the classical methods that often provide solutions of equal or superior quality for the datasets provided, even though a comparison like the one below takes minutes to run.
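As a hedged illustration of that point, here is a quick comparison on a standard scikit-learn dataset, with no tuning on either side. Which model wins will vary by dataset, but classical methods are routinely competitive on tabular data like this.

```python
# Cross-validated accuracy of a classical model versus a small
# neural network, both at near-default settings.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
models = {
    "gradient boosting (classical)": GradientBoostingClassifier(random_state=0),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(max_iter=2000, random_state=0)),
}
for name, clf in models.items():
    print(f"{name}: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```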



Which raises the question: Is this hype killing the development of other machine learning techniques that provide equivalent (or greater) value?