Was the Unabomber Right About the Dangers of AI?
How worried should we be about artificial intelligence (AI) and what should we do about it? There is risk on both sides: of not taking warnings about AI seriously enough, and of taking them too seriously.
We can begin with what it already can do.
Ordering a pizza in 2023
CALLER: Is this Pizza Hut?
GOOGLE: No sir, it's Google Pizza.
CALLER: I must have dialed a wrong number, sorry.
GOOGLE: No sir, Google bought Pizza Hut last month.
CALLER: OK. I would like to order a pizza.
GOOGLE: Do you want your usual, sir?
CALLER: My usual? You know me?
GOOGLE: According to our caller ID data sheet, the last 12 times you called you ordered an extra-large pizza with three cheeses, sausage, pepperoni, mushrooms and meatballs on a thick crust.
CALLER: Super! That's what I'll have.
GOOGLE: May I suggest that this time you order a pizza with ricotta, arugula, sun-dried tomatoes and olives on a whole wheat gluten-free thin crust?
CALLER: What? I don't want a vegetarian pizza!
GOOGLE: Your cholesterol is not good, sir.
CALLER: How the hell do you know that?
GOOGLE: Well, we cross-referenced your home phone number with your medical records. We have the result of your blood tests for the last seven years.
CALLER: Okay, but I do not want your rotten vegetarian pizza! I already take medication for my cholesterol.
GOOGLE: Excuse me sir, but you have not taken your medication regularly. According to our database, you purchased only a box of 30 cholesterol tablets once at Lloyds Pharmacy, four months ago.
CALLER: I bought more from another pharmacy.
GOOGLE: That doesn't show on your credit card statement.
CALLER: I paid in cash.
GOOGLE: But you did not withdraw enough cash according to your bank statement.
CALLER: I have other sources of cash.
GOOGLE: That doesn't show on your latest tax returns – unless you got it from an undeclared income source, which is against the law!
CALLER: WHAT THE HELL?
GOOGLE: I'm sorry sir, we use such information only with the sole intention of helping you.
CALLER: Enough already! I'm sick to death of Google, Facebook, Twitter, WhatsApp and all the others. I'm going to an island with no internet, no TV, no phone service, and no one to watch me or spy on me.
GOOGLE: I understand sir, but you need to renew your passport first, it expired six weeks ago…
How worried should we be, and what should we do about it?
This scenario is fiction and a joke, but rest assured, Google could do this. It won’t be a human phone answerer. It will be an AI bot. Google could access all this information about you, if it tried, and AI may soon be able to carry out this dialogue, if it can’t already.
Of course, Google wouldn’t do it. That would be too intrusive – and it would make too obvious the fact that it could access all that data and could use it. But maybe you would want it to do it, as a service – to help you keep your life on track. Then it might do it and charge you a fee for it.
Should we be frightened?
Mustafa Suleyman, co-founder of DeepMind and co-founder and CEO of Inflection AI, seems to think so. In his book, The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma, Suleyman argued that the recent rapid development of AI, as well as of biotechnology, brings us to a historic turning point: “[W]e are faced with a choice – a choice between a future of unparalleled possibility and a future of unimaginable peril. The fate of humanity hangs in the balance.”
The book is co-authored by writer and publisher Michael Bhaskar. But all personal observations are stated in the first-person singular – presumably Suleyman’s voice – not “we.” I assume the book is an “as told to” work, and I will therefore often refer to the author as Suleyman. In fact, the book reads like the transcript of a series of interviews with a subject who gushes thoughts and words, often repetitively. The word “logorrhea” came to mind more than once as I read it.
There is a rather curious paragraph in the book:
I often hear people say something along the lines of “AGI [artificial general intelligence1] is the greatest risk humanity faces today! It’s going to end the world!” But when pressed on what this actually looks like, how this actually comes about, they become evasive, the answers woolly, the exact danger nebulous.
This is a curious paragraph because it is exactly what I would say about Suleyman’s whole book. Suleyman does appear to warn that AI is the greatest risk humanity faces today – see quote above: “The fate of humanity hangs in the balance.”
But although I watched carefully for clear descriptions of how the worst comes about, I was disappointed. Yes, there are plenty of speculations about worst cases, most of them nebulous, and some could turn out to be right.
But before September 11, 2001, someone could have speculated that “the greatest risk humanity faces today is unlocked cockpit doors” – and turned out to have been right. Speculate enough and there’s a chance you might get something right.
What, then, is the biggest risk?
The risk that Suleyman and others are warning about was stated as clearly as it could be, more than 23 years ago – better than any other statement I have read.
In April 2000 an essay was published in Wired magazine titled “Why the future doesn’t need us,” written by Sun Microsystems’ cofounder and chief scientist Bill Joy. It was widely circulated and feverishly discussed. Near the beginning of the essay there is an extended quote. It is a lengthy but very clear explication of the exact risks of artificial intelligence. It begins:
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
The quote then goes on to consider each of these two cases separately. In each case the consequences are dire. The scenarios are quite convincing; their consequences are indeed of great concern.
Who is being quoted? None other than Theodore Kaczynski. Kaczynski, it may be remembered, was the Unabomber – the person who, over a period of nearly 20 years, murdered three people and injured 23 by planting and mailing bombs. All of his targets were people pursuing technological innovation. The manhunt for him was one of the longest in FBI history.
Kaczynski wrote a 35,000 word “manifesto” explaining his reasons. “In 1995,” according to Wikipedia, “Kaczynski sent a letter to The New York Times promising to ‘desist from terrorism’ if the Times or The Washington Post published his manifesto, in which he argued that his bombings were extreme but necessary in attracting attention to the erosion of human freedom and dignity by modern technologies.”
The Times and the Post were reluctant to publish it because they didn’t want to glorify a criminal’s thoughts. But they thought its publication might help in the FBI’s search.
That is indeed what happened. The wife of Kaczynski’s brother recognized strong similarities between the Unabomber’s writing and letters from her husband’s brother, Ted. The couple told the FBI and negotiated to help it find him in exchange for a promise that he would not face the death penalty.2
Kaczynski, a vicious murderer despite his brilliance, might be considered Exhibit A of the risk of taking warnings about AI too seriously.
Thus, there is risk on both sides; the risk of not taking warnings about AI seriously enough, and the risk of taking them too seriously.
What, exactly, is artificial general intelligence (AGI)?
Kaczynski’s definition of AGI is as good as any: “intelligent machines that can do all things better than human beings can do them.” When AGI is spoken of it is taken for granted that this is its meaning and that it is possible, even inevitable.
But that is ridiculous. There is no being that can do all things better than even one other being, let alone all other beings. Many animals can do many things better than we can. Green plants can photosynthesize, which we cannot do at all.
The premise is absurd on its face. This taints the entire discussion.
Furthermore, those who discuss this are mostly from the digerati (people who are highly skilled and knowledgeable about digital technology). Like so many of them, they make the mistake of thinking that the speed of improvement of digital technology is, or will be, the speed of technology in general. For example, Suleyman says, “innovation in the ‘real world’ could start moving at a digital pace.” Will innovation in the production of Vaclav Smil’s four pillars of modern civilization – cement, steel, plastics, and ammonia – start to move at the digital pace of, say, Moore’s Law? This, as Smil makes clear, is a fantasy.
Unintended consequence: the demographic transition accelerates
In 1968, ecologist Paul Ehrlich and his wife Anne Ehrlich published a bombshell of a book, aptly titled The Population Bomb. In it, they predicted that hundreds of millions of people would starve in the coming decade because population growth would outstrip the food supply.
They turned out to be dead wrong for two reasons. One was the Green Revolution that was already underway when they wrote. It increased the yields of staple crops like wheat, rice and beans by multiples of three to five. The second was the demographic transition. Better health and survival rates of children and other factors were resulting in families having fewer children. The population explosion abated to a much greater degree than the Ehrlichs expected.
Could AI bring about an acceleration of that transition? I didn’t think of this until I read a post on Bari Weiss’s invaluable website, The Free Press, titled “The Dating Pool Dropouts.”
The post says that many men, who sound perfectly good and eminently marriageable, are giving up on the dating market. They meet with too many rejections – beginning with online rejection. It costs too much money, the rewards are slim if any, the pain great, and many of them conclude it is not worth the trouble.
And then comes this:
Over the past few years, start-ups like Replika, Character.ai, and Inflection AI, have rolled out a universe of virtual companions that users can customize to meet their every desire. One alluring chatbot, Eva AI, woos customers with the promise: “Build relationship and intimacy on your terms.” And one influencer, Caryn Marjorie, says she created an AI version of herself – so far with more than 18,000 “boyfriends” – to “cure loneliness.”
But of course. Surely, AI could make virtual women more alluring than real ones – and they could be trained over time to be exactly what their users want them to be. The result could well be a decline in the number of babies.
And yet Suleyman makes no mention of this, even though his own company, Inflection AI, is mentioned among those that have rolled out “virtual companions that users can customize to meet their every desire.” The only sentence in his book that slightly touches on this is near the end: “Many will disappear almost entirely into virtual worlds.”
Conclusion
We have an epidemic of hype: an epidemic of shrill warnings of catastrophes that could threaten our very existence – or the existence of some of us – from warnings about climate change, to people of a different political persuasion threatening to destroy our way of life or hunt us with guns, to China’s “aggression,”3 and now, of the potential catastrophic consequences of AI. Almost all of these are overblown, at least when they come from the hysterical fringe. In the case of AI, some of that hysterical fringe is situated in the mainstream of the digerati.
Nevertheless, it is a good thing that a substantial portion of AI developers are calling for a slowdown, simply because they say they don’t know how AI is doing what it’s doing – for example, how GPT-4, the latest model behind ChatGPT, does what it does.
If you don’t know how your technology does what it does, it is time to step back and try to understand it better.
Economist and mathematician Michael Edesess is adjunct associate professor and visiting faculty at the Hong Kong University of Science and Technology, managing partner and special advisor at M1K LLC. In 2007, he authored a book about the investment services industry titled The Big Investment Lie, published by Berrett-Koehler. His new book, The Three Simple Rules of Investing, co-authored with Kwok L. Tsui, Carol Fabbri and George Peacock, was published by Berrett-Koehler in June 2014.
1 AGI, artificial general intelligence, is taken to mean AI that performs better than human intelligence in any and all areas, while AI in general refers to performance in a single area, such as speech recognition.
2 Disclosure: to my surprise, the woman who realized that Ted Kaczynski was the Unabomber – his sister-in-law – was a friend of mine many years earlier at university. I had even attended her wedding (to her first husband). I later reconnected with her and her husband and found them to be very good people.
3 During the Vietnam War, The New York Times enclosed the word “aggression” in quotes when it referred to statements of the Viet Cong or the North Vietnamese about United States “aggression,” but did not enclose it in quotes when referring to Viet Cong or North Vietnamese aggression.