As usual, some experts have played down public fears about AI, emphasizing that such astounding progress is no cause for alarm, given that playing Go isn’t a useful, real-world skill. It isn’t close to the sort of general intelligence that humans are capable of.
Yet many others rightly worry that AI will do great harm to society: putting people out of work, adding to inequality, removing warfare from human control and even posing an existential risk to the long-term future of Homo sapiens. Whether you believe, like the technologist Ray Kurzweil, that the arrival of human-level AI signals the dawn of paradise, or, like the philosopher Nick Bostrom, the physicist Stephen Hawking and the entrepreneur Elon Musk, that it signals the sunset of the age of humans, there is no question that AI will profoundly influence the fate of humanity.
There is one way to deal with this growing threat to our way of life. Instead of limiting further research into AI, we should turn it in an exciting new direction. To keep up with the machines we’re creating, we must move quickly to upgrade our own organic computing machines: We must create technologies to enhance the processing and learning capabilities of the human brain.
AI was essentially born in the summer of 1956, when scientists, mathematicians and engineers convened at Dartmouth College to discuss so-called thinking machines. Since then, we’ve lived through stunning progress. In 1997, IBM’s Deep Blue computer defeated the reigning world chess champion, Garry Kasparov. In 2005, AI learned to drive, when an autonomous vehicle completed a 132-mile off-road course in the Nevada-California desert in under seven hours. In 2011, another IBM computer, Watson, bested human champions in the quiz show “Jeopardy!” Last year, AlphaGo (a predecessor of AlphaGo Zero) rose to international prominence by unexpectedly beating Lee Sedol, one of the world’s top Go players. AlphaGo was trained on a database of 160,000 previously played Go games. AlphaGo Zero dispensed with any accumulated human wisdom and decisively beat its parent, AlphaGo, 100 games to 0.
By now, machines are better than humans in games such as checkers, chess and Go, in which every player can see everything. And computers are gaining the edge in games involving gambling, deception and other social skills, too. Earlier this year, Libratus, software developed at Carnegie Mellon University, beat four top players in a 20-day tournament of No-Limit Texas Hold ’em poker. Code doesn’t need to bluff—it simply outthinks humans.
AI has learned to listen and speak as well, in the form of digital personal assistants. We now have Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and Google Assistant, though their conversational skills are still minimal. Within a decade or two, their voices will become indistinguishable from a human’s—except that they will be endowed with perfect recall, poise and patience.
These spectacular advances are powered by Moore’s law, the empirical observation that the number of components per integrated circuit doubles roughly every two years. It isn’t easy to wrap your head around such exponential growth. The raw computational power of computers has increased by a factor of about 10 billion since the first machines were built to help design atomic bombs. We’re now seeing the first commercial quantum computers, which could boost computational power further still.
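To make the arithmetic concrete: a factor-of-10-billion increase corresponds to about 33 successive doublings. A few lines of Python spell this out; the growth factor comes from the text, while the assumption of one doubling every two years is the commonly cited pace of Moore’s law.

```python
import math

# A factor-of-10-billion increase in raw computing power corresponds to
# log2(1e10) successive doublings of capability.
growth_factor = 1e10
doublings = math.log2(growth_factor)
print(round(doublings, 1))  # ~33.2 doublings

# Assuming one doubling every two years (the commonly cited pace of
# Moore's law), that is roughly 66 years of sustained exponential growth.
years = doublings * 2
print(round(years))  # ~66 years
```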
‘In the future, there are no guarantees that all or even most adults will have a job.’
All of us will be swept up by the changes brought on by this fourth industrial revolution. The first, powered by the steam engine, moved us from agricultural to urban societies. The second, powered by electricity, ushered in mass production and created consumer culture. The third, centered on computers and the internet, shifted the economy from manufacturing into services.
All of them profoundly increased human productivity, welfare and lifespan. Employment adapted as machines gradually took over more and more aspects of human labor.
Yet this is not a law of nature. In the future, there is no guarantee that all or even most adults will have a job, particularly as the pace of technologically driven disruption accelerates. At some point, the speed of progress will exceed the ability of individuals, and of society at large, to adapt. This could prove catastrophic.
A recent study by the McKinsey Global Institute estimated that 10% to 50% of job tasks in the U.S. could be automated using existing AI and robotic technology. In about 60% of the 800 occupations surveyed, at least 30% of the constituent activities could be replaced by software, and some jobs (such as driver, retail worker and fast-food employee) could become entirely obsolete.
Automation will bring many benefits, including a range of powerful new products we can’t even fully imagine today—but only for those who are wealthy or gainfully employed. AI will further accelerate the inequality between the haves and the have-nots.
Machine learning will also transform warfare. Once developed, weaponized AI could fight armed conflicts at a far larger scale, and at a far faster speed, than humans can comprehend and react to. Sooner or later, willfully or not, AIs will have the capability to kill without a human in the loop to override their lethal decisions.
Some see a time when we reach the singularity—an ill-defined point in time when machines surpass humans in intelligence, triggering even more rapid technological progress and a new era that is beyond our current comprehension.
‘How can the human species keep up?’
Unlike, say, the speed of light, intelligence has no known theoretical limit. While our brain’s computational power is more or less fixed by evolution, computers are constantly growing in power and flexibility. This is made possible by a vast ecosystem of several hundred thousand hardware and software engineers building on one another’s freely shared advances and discoveries. How can the human species keep up?
The traditional answer is education. But training (and retraining) people takes time, and not everybody can, or wants to, switch from driving trucks, serving fast food or scanning items at the supermarket to developing code, designing computer chips, walking dogs or caring for elders (to list a few jobs that won’t be made redundant anytime soon).
In the face of this relentless onslaught, we must actively shape our future to avoid dystopia. We need to enhance our cognitive capabilities by directly intervening in our nervous systems.
Transcranial direct-current stimulation is a noninvasive brain technology that induces a weak electric field in the cortex underlying the skull. Research in animals and in human volunteers suggests that this may enhance neuroplasticity, the process by which the brain improves its performance as an action is repeated over and over. Users wear headsets that gently stimulate their motor cortex while lifting weights, swinging a golf club or playing the piano. With time, the user learns faster or performs better.
Another consumer product senses the slow brain waves characteristic of deep sleep via electroencephalogram (EEG) electrodes built into a headset. When it detects them, the device plays low sounds that enhance the depth and strength of these waves, leading to more restful sleep.
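The underlying signal-processing idea can be sketched in a few lines: isolate the slow (delta-band, roughly 0.5–4 Hz) component of the EEG and flag moments when its amplitude is large. The sketch below is illustrative only; the sampling rate, smoothing window and 75-microvolt threshold are assumptions, and a real device would use a proper band-pass filter and more robust detection logic.

```python
import numpy as np

def detect_slow_waves(eeg, fs=250.0, threshold_uv=75.0):
    """Crude slow-wave detector: smooth the EEG with a moving average
    (a rough low-pass that keeps the delta band) and flag samples whose
    smoothed amplitude exceeds a threshold. All numbers are illustrative."""
    win = int(fs / 4)                      # ~0.25 s window passes <~4 Hz
    kernel = np.ones(win) / win
    delta = np.convolve(eeg, kernel, mode="same")
    return np.abs(delta) > threshold_uv    # candidate moments to play sound

# Synthetic example: a 1 Hz, 100-microvolt slow wave buried in fast noise.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = 100 * np.sin(2 * np.pi * 1.0 * t) + 20 * rng.standard_normal(t.size)
mask = detect_slow_waves(eeg, fs)
print(bool(mask.any()))  # True: the slow wave is detected
```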
However, the billions of tiny nerve cells inside the skull are quite remote from the scalp, and only the faint echoes of neuronal chatter can be picked up by EEG. We aren’t anywhere close to selectively silencing or amplifying the activity of small cliques of neurons. Ultimately, to boost our brain power, we need to directly listen to and control individual neurons: the atoms of perception, action, memory and consciousness. And for that we need to directly access brain tissue, requiring (for now) at least some neurosurgery to penetrate the skull.
Progress has been much faster than expected, in particular for brain-machine interfaces. Consider Nancy Smith, who was injured in a car accident seven years ago. Now tetraplegic, she can move only her shoulders and head. Neurosurgeons and neuroscientists in California implanted a tiny “bed of nails” array of electrodes in the region of her cortex that encodes her intention to grasp a cup or to press piano keys. Algorithms decode her neural signals and pass instructions to a musical synthesizer, so that she can play music with her mind.
‘Writing the cortex isn’t far behind.’
Bill Kochevar was likewise paralyzed below the shoulders following a bike accident many years ago. A Cleveland-based team of doctors and neuroscientists placed electrodes into his left motor cortex; these read out the electrical tremors of about 100 neurons, decoding the patient’s intention and then electrically stimulating muscles in his arm and hand to enable him to reach and to grasp. Such functional electrical stimulation is akin to “writing” the nervous system, giving instructions that mimic, however crudely, what occurs naturally. Functional stimulation lets Mr. Kochevar eat and drink by himself. There are more than 50 such patients with listening devices installed in their brains.
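At its simplest, the decoding step in such brain-machine interfaces can be thought of as a linear map from the firing rates of the recorded neurons to an intended movement variable. The following sketch, on synthetic data, is not the clinical teams’ actual pipeline; it only illustrates the idea of fitting a linear read-out by least squares.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for ~100 motor-cortex neurons: each neuron's firing
# rate is a noisy linear function of the intended 2-D hand velocity.
n_neurons, n_samples = 100, 2000
true_map = rng.standard_normal((n_neurons, 2))   # tuning of each neuron
velocity = rng.standard_normal((n_samples, 2))   # intended (vx, vy)
rates = velocity @ true_map.T + 0.5 * rng.standard_normal((n_samples, n_neurons))

# "Decoding": least-squares fit of a linear read-out from rates to velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
decoded = rates @ decoder

# With ~100 noisy neurons, the linear read-out recovers intent closely.
err = np.mean((decoded - velocity) ** 2)
print(err < 0.05)  # True
```

Pooling across many neurons is what makes this work: each cell is noisy, but a weighted sum of a hundred of them averages the noise away.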
Writing the cortex isn’t far behind. When we move in the world, our bodies receive massive feedback from sensors in our limbs that signal their location in space and from touch sensors in the skin. Neuroscientists are seeking to replace these signals in patients who don’t feel their limbs by electrically stimulating their somatosensory cortex using implanted electrodes.
Funding for such research comes through the Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a public-private collaboration started in 2013 whose partners include the National Institutes of Health and U.S. defense and intelligence agencies. The 12-year initiative is expected to inject more than $4 billion into research for therapies. Its portfolio of funded grants includes direct brain stimulation for obsessive-compulsive disorder, treatment-resistant depression, essential tremor, Parkinson’s disease, epilepsy, stroke recovery and blindness.
The Allen Institute for Brain Science, which I direct, is adding to the effort. We just freely released data showing the intricate, wispy axons and dendrites of hundreds of cortical neurons from living human neurosurgical samples, along with their electrical responses when their cell bodies are tickled by tiny currents. With the permission of patients, we receive these sugar-cube-sized chunks of cortical tissue, extracted during surgery to reach a deep-seated tumor or epileptic focus (and usually discarded as medical waste), and put the samples on life support to study their structure and function for days on end in our laboratories.
This constitutes a remarkable advance, as almost everything we know about human nerve cells derives from postmortem (dead) brains, without a trace of electrical activity. In tandem, we provide computer code to simulate the electrical behavior of these cells.
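To give a flavor of what simulating a neuron’s electrical behavior involves (the released models are far more biophysically detailed than this), here is a textbook leaky integrate-and-fire model with illustrative parameter values:

```python
def simulate_lif(i_inj_pa, t_ms=200.0, dt_ms=0.1,
                 c_pf=200.0, g_leak_ns=10.0, v_rest=-70.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire neuron driven by a constant injected
    current (picoamps), integrated with Euler steps. Returns spike count."""
    v = v_rest
    spikes = 0
    for _ in range(int(t_ms / dt_ms)):
        # Membrane equation: C dV/dt = -g_leak * (V - V_rest) + I
        # (nS * mV = pA, and pA / pF = mV/ms, so the units are consistent)
        dv = (-g_leak_ns * (v - v_rest) + i_inj_pa) / c_pf
        v += dv * dt_ms
        if v >= v_thresh:          # threshold crossing: spike, then reset
            spikes += 1
            v = v_reset
    return spikes

# A current too small to reach threshold gives no spikes; a larger
# "tickling" current makes the model fire repeatedly.
print(simulate_lif(100.0), simulate_lif(400.0))
```

Below a threshold current the membrane voltage settles short of the spiking threshold; above it, the model fires rhythmically, mimicking the all-or-none responses recorded from real cells.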
This confluence of basic knowledge about the human brain with the burgeoning neurotech industry is helping neurological patients use their minds to recover lost functionality, including driving a car.
With more research, enhanced cognition could be within reach for all of us.
Brain enhancement could help older people who have trouble adapting to a new workplace by giving them back the mental flexibility they had as children, when they effortlessly soaked up dozens of new words a day and picked up novel skills and facts without even trying. Once we fully understand neuroplasticity, we should be able to control its mechanisms at will.
My hope is that someday, a person could visualize a concept—say, the U.S. Constitution. An implant in his visual cortex would read this image, wirelessly access the relevant online Wikipedia page and then write its content back into the visual cortex, so that he can read the webpage with his mind’s eye. All of this would happen at the speed of thought. Another implant could translate a vague thought into a precise and error-free piece of digital code, turning anyone into a programmer.
People could set their brains to keep their focus on a task for hours on end, or control the length and depth of their sleep at will.
Another exciting prospect is melding two or more brains into a single conscious mind by direct neuron-to-neuron links—similar to the corpus callosum, the bundle of some 200 million fibers that links the two cortical hemispheres of a single brain. This entity could call upon the memories and skills of its member brains but would act as one “group” consciousness, with a single, integrated purpose to coordinate highly complex activities across many bodies.
These ideas are compatible with everything we know about the brain and the mind. Turning them from science fiction into science fact requires a crash program to design safe, inexpensive, reliable and long-lasting devices and procedures for manipulating brain processes inside the skull’s protective shell. It must be focused on the end-to-end enhancement of human capabilities.
To accelerate the diffusion of this technology, the relevant government agencies, academia, the biomedical device industry and the smaller companies that are the true risk takers and pioneers must freely, openly and rapidly share data and procedures to speed up innovation. And we must shorten the very lengthy regulatory process to quickly bring these benefits to everyone.
While the 20th century was the century of physics—think of the atomic bomb, the laser and the transistor—this will be the century of the brain. In particular, it will be the century of the human brain—the most complex piece of highly excitable matter in the known universe. It is within our power to enhance it, to reach for something immensely powerful that we can barely discern.
Dr. Koch is the chief scientist and president of the Allen Institute for Brain Science in Seattle.