The World of Thinking Machines

On December 28, 2013, the New York Times published an article by John Markoff titled “Brainlike Computers, Learning from Experience.” The report discussed a new kind of computer chip, expected to be released later in 2014, that is designed to automate tasks that currently require direct programming. Essentially, the new approach is modeled on the biological nervous system. Although the Markoff article suggests that designers of these systems do not expect them to lead to computers “thinking,” we are not convinced this is true. After all, part of human learning is tied to observing the world and drawing broader conclusions.

What makes these developments intriguing from a geopolitical perspective is that machines could “teach” themselves to perform loosely defined search tasks and, by analyzing patterns, make “decisions.” At the same time, Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, admits that scientists don’t fully understand how the brain functions. Even so, the goal appears to be to make computers more “brainlike.” Dharmendra Modha, an I.B.M. computer scientist, suggested that “instead of bringing the data to computation as we do today, we can now bring computation to data.” Although the concept is alluring, we believe it is quite simplistic and may lead to unexpected consequences.

In this report, we will open the discussion with an examination of the philosophy of learning and attempt to show that, outside of certain religious experiences, nothing in human perception comes from direct observation. Thus, unless computer “thinking” is fundamentally different from human thinking, computers will likely carry the biases tied to human knowledge. From this vantage point, we will then discuss the potential dangers of such machines, including the ability to perform humanlike actions without a moral sense. We will also examine the potential economic and social side effects. As always, we will conclude with potential market ramifications.

Thinking about Thinking

The branch of philosophy that examines human knowledge is called epistemology. Although it is a complicated discipline, we will focus on one major question: how do we “know”? Essentially, there are two ways of knowing: a priori, which is learning without sense experience, and a posteriori, which is learning that comes from experience. Knowledge of the former kind is considered a “self-evident truth” (famously invoked by Thomas Jefferson in the Declaration of Independence). However, as the renowned Scottish skeptical philosopher David Hume noted, such self-evident truths are usually statements of bias. If they were truly self-evident, disputes over morals and religion would vanish; in fact, the endless arguments over such “truths” (who God is, what is moral, etc.) show they are anything but self-evident.

Instead, a priori statements eventually devolve into tautologies like “all unmarried men are bachelors.” Since the subject is defined by the predicate (the two are, in fact, identical), such “truths” can be reduced to A=A.

Essentially, self-evident statements are statements of faith or desire. Most religions are based on a creed, a set of statements considered true because they are derived from revelation. As such, they cannot be verified by analysis or empirical evidence; they can only be accepted or rejected. Once the basic creed is accepted, one can apply logic to test whether the laws derived from it are consistent with it. In other words, the laws are deduced from the initial creed.

The other way we learn is a posteriori, by gathering individual observations and generalizing rules or laws from them. Most science is done by induction; scientists observe events, postulate a cause and try to repeat the process through experiments. If the process is repeatable, the postulate thought to explain the event can become a theory.

Although humans use induction instinctually (David Hume called humans “induction machines”), there are two fundamental problems with induction. The first is that an inductive conclusion holds only until contrary evidence is found. Nassim Nicholas Taleb1, a modern-day Hume, referred to this as the “turkey problem.” Based on what the turkey experiences, humans are friendly and supportive; they provide food, water and shelter for nearly 1,000 days until Thanksgiving. Just before the holiday, however, the turkey’s perception of humans becomes decidedly more sinister!

Taleb’s book title refers to the belief, once held by most Europeans, that all swans were white. The discovery of black swans in Australia, of course, overturned that belief. The real moral of the story, though, is that conclusions drawn from experience offer, at best, a limited view of what the future might bring.
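To make the turkey problem concrete, the short sketch below (our illustration, not drawn from Taleb or the Markoff article) shows a naive inductive forecaster that extrapolates from 1,000 identical observations and is therefore at its most confident on the very day the pattern breaks.

```python
# A minimal sketch of the "turkey problem" (our illustration, not from the
# report or from Taleb): a naive inductive forecaster extrapolates from
# 1,000 identical observations and is most confident right before the break.

days_of_feeding = 1000

# The turkey's daily experience: 1 means "the humans fed me today."
observations = [1] * days_of_feeding

def inductive_forecast(history):
    """Estimate tomorrow's outcome as the frequency of past positive outcomes."""
    return sum(history) / len(history)

# After 1,000 identical days, the forecast assigns a 100% chance of being
# fed on day 1,001, which happens to be Thanksgiving.
confidence = inductive_forecast(observations)
print(f"Estimated chance of being fed tomorrow: {confidence:.0%}")

# The outcome on Thanksgiving is not generated by the process the turkey
# observed, so the forecast fails exactly when it matters most.
thanksgiving_outcome = 0  # hypothetical: the benign pattern ends
print(f"Actual outcome on day {days_of_feeding + 1}: {thanksgiving_outcome}")
```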

The second problem with induction is tied to human nature. After observing a series of events, we tend to assume they will continue. To bolster that belief, we postulate theories as to why these things occur. By identifying the underlying cause in a cause-and-effect relationship, we create the illusion of order in our world. Unfortunately, the real world is complicated; the set of conditions that led to a specific event is rarely repeated exactly, and what appear to be minor conditions are often far more important than believed. The pre-Socratic philosopher Heraclitus said, “we cannot step into the same river twice,” meaning that the underlying conditions that created a causal relationship are almost never the same.

The human tendency to focus on the narrative that supposedly explains the cause-and-effect relationship can thus lead us astray. For example, the theory that the money supply will always determine inflation has clearly been proven wrong; the persistence of this belief among investors and some analysts shows that the theory has moved from being an observation drawn from data to a creed of sorts. In fact, many of these narratives harden into beliefs that blind investors, policymakers and others to the possibility that conditions have changed and the narrative is no longer true or, perhaps worse, that the narrative has become almost a religious tenet, so that admitting it is wrong would destroy one’s worldview.

Left with the choice between radical skepticism and the belief that induction may really offer insights, most humans pick the latter. Humans have a strong need to know what the future holds; throughout history, we have relied on soothsayers, astrologers and fortune tellers. In a scientific age, we have moved on to analysts and pundits. Science does seem to offer promise; after all, in controlled experiments we can predict outcomes. However, as noted above, once we leave the laboratory the usefulness of induction declines significantly because it becomes impossible to recreate the same causal circumstances.

The Thinking Computer

The idea of bringing a neural network computer to the data, having it “learn” and then accurately adapt to future circumstances, is seductive. However, as discussed above, such computers will certainly suffer from the same problems with induction that humans experience. In fact, they may fare worse, because humans can, on occasion, intuit that “something is different” and react against experience. It seems doubtful that computers will be able to make this leap.
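As a rough illustration of why this is so, consider the toy pattern-learner below (our sketch, not the architecture described in the Markoff article). Having only ever “experienced” two patterns, it files even a completely novel input under one of the categories it already knows; nothing in its design lets it conclude that something is different.

```python
# A toy sketch (our illustration, not the system described in the Markoff
# article) of a machine that "learns" patterns from experience. A
# nearest-centroid classifier trained on two kinds of input will still file
# a completely novel input under one of the categories it already knows;
# it has no built-in way to say "something is different."

def centroid(examples):
    """Average the feature vectors of one class of training examples."""
    n, dims = len(examples), len(examples[0])
    return [sum(e[i] for e in examples) / n for i in range(dims)]

def classify(x, centroids):
    """Assign x to whichever learned pattern it most resembles."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Hypothetical training experience: the machine has only ever seen two patterns.
centroids = {
    "pattern_A": centroid([[1.0, 0.1], [0.9, 0.0]]),
    "pattern_B": centroid([[0.0, 1.0], [0.1, 0.9]]),
}

# An input unlike anything in its experience is still forced into a known bin.
novel_input = [5.0, 4.0]
print(classify(novel_input, centroids))  # prints a known label, never "unknown"
```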

The world is complex…learning the lessons of history requires a good bit of nuance and analysis. This is why historians constantly argue about the causal factors of various events. Will computers do better than humans at sorting out the factors and making better decisions? Perhaps, but probably not. As Heraclitus noted, each circumstance is different and the historical patterns are, at best, guidelines.

The Dangers

The promise of neural networks is that they can rapidly build pattern recognition, allowing a computer to scan documents or faces and find a particular item within large amounts of data. Since the computer is tireless and unblinking, it should perform these tasks better than humans. However, it won’t necessarily prove better than humans at determining whether a face belongs to a miscreant or is a case of mistaken identity. That, by itself, isn’t a big deal; one would expect human intervention beyond the identification stage. But if law enforcement or the courts were to treat computer scans as more reliable than witnesses or human sorting, the scans could acquire an “aura of perfection” that may not be justified.
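A back-of-the-envelope calculation shows why that aura would be unwarranted. Using standard Bayes’ rule and purely hypothetical accuracy figures (none of these numbers come from the report), even a scanner that is rarely wrong will produce mostly false matches when the person being sought is rare in the scanned population.

```python
# A back-of-the-envelope sketch using standard Bayes' rule and hypothetical
# numbers (none of these figures come from the report). Even a very accurate
# scanner should not carry an "aura of perfection": if the person being sought
# is rare in the scanned population, most matches are mistaken identity.

prevalence = 1 / 100_000       # assumed share of scanned faces that are the target
true_positive_rate = 0.99      # assumed chance the scanner flags a real target
false_positive_rate = 0.001    # assumed chance it wrongly flags an innocent person

# P(target | flagged) via Bayes' rule
p_flagged = (true_positive_rate * prevalence
             + false_positive_rate * (1 - prevalence))
p_target_given_flag = true_positive_rate * prevalence / p_flagged

print(f"Chance a flagged face is actually the target: {p_target_given_flag:.1%}")
# With these assumptions the answer is roughly 1%, i.e., about 99 of every 100
# matches would be cases of mistaken identity, so human review remains essential.
```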

The temptation for law enforcement to scan behaviors for tendencies will be difficult to resist. If these computers discover that a certain set of characteristics seems to be common among school shooters or sex offenders, the programs could flag these individuals to security officials, who could then monitor their whereabouts or, perhaps, arrest them before they actually commit a crime. Again, if neural networks gain an “aura of perfection,” it would seem reasonable to try to prevent such horrific crimes before they occur.

For governments, neural networks reduce the odds of another Edward Snowden. Instead of having human analysts work through the data, neural network computers could do the sorting, meaning fewer people would be needed to supervise the effort. This would allow government intelligence units to rely on a smaller group of supervisors who could be more closely screened for loyalty. The likelihood of whistleblowers would be reduced.

In the private sector, the ability of firms to make connections between our habits and our wants and needs would be enhanced. Even without neural networks, firms have become remarkably adept at anticipating our needs. Who hasn’t noticed that a smartphone will report not only the temperature but also how long the morning commute will take, or that after a search for exercise equipment, banner ads for weight loss products suddenly appear? With neural network computers, the ability of firms to follow our patterns and offer us products and services would likely mushroom. Although this isn’t by itself a bad thing, it is a small leap to envision firms bombarding alcoholics or problem gamblers with advertising that entices them to succumb to their vices.

Finally, the economic effects could be enormous. Globalization and technology have combined to undermine middle class wages in the developed world through outsourcing and automation. Until now, at least those who programmed the technology that replaced these jobs had ample and lucrative work. These new machines may take even those positions away, as computers would no longer need painstaking, handwritten code to perform more sophisticated tasks.

Ramifications

This new technology clearly has the potential to have a significant impact on civil liberties, privacy and the economy. However, the primary victor could be the owners of capital. Essentially, thinking machines could have human-like characteristics without the “side effects” of needing sleep, organizing into a union, demanding a raise or becoming a whistleblower.

Geopolitically, the enhancement of the power of the owners of capital will tend to undermine nation-states. Although it may take many years before the full promise of these new computers and software is known, the potential disruptive impact is a major concern. As we have observed over the years, new developments in technology seem to occur at an ever faster pace, so it is likely that this technology will develop faster than societies can cope with it.

Bill O’Grady

January 6, 2014

This report was prepared by Bill O’Grady of Confluence Investment Management LLC and reflects the current opinion of the author. It is based upon sources and data believed to be accurate and reliable. Opinions and forward looking statements expressed are subject to change without notice. This information does not constitute a solicitation or an offer to buy or sell any security.

1 Taleb, Nassim Nicholas, The Black Swan, Random House, 2007.

© Confluence Investment Management

www.confluenceinvestment.com
