The Future Of AI: Is Something Different This Time?

Hero Images / Getty Images

We grew up with the fantasy and the nightmare.

The crew of the Enterprise talks directly to their ship's intelligent computer. HAL 9000 of 2001: A Space Odyssey runs a deep-space exploration vessel while simultaneously trying to kill its astronaut crew. The machines in The Matrix enslave humans. The robots in Star Wars are our friends.

Clearly, the idea of Artificial Intelligence (AI) has been with us in pop culture for a while. But enthusiasm among scientists for making AI a reality has waxed and waned a number of times over the last half-century.

Right now, researchers are riding high on another wave of enthusiasm. Advances across a number of fields have experts hoping that something like AI may be getting close. So is something really different this time? If so, what does that mean for the rest of us?

Should we, in other words, be worried or excited?

I recently read an excellent article called "The Future of AI" by Vasant Dhar. Based on a conference of the same name held at NYU last January, the article does a wonderful job of explaining what's changed.

The biggest shift in AI research has come through two developments. The first is the new capacity for machines to learn for themselves. In previous generations of AI studies, scientists had to spoon-feed computers the distinctions and reasoning they felt were central to the operation of intelligence. Over the last decade or two, researchers have pushed successfully at the frontiers of "Machine Learning."

One of their breakthroughs involves the use of a technique called deep learning. Though it takes many forms, one example of deep learning is the creation of electronic "neural networks" that can mimic different basic operations occurring in real webs of neurons. The "deep" part comes from stacking up the operations of the electronic neural networks. That means lower-level networks deal with very simple operations (like finding edges in an image) and then hand their results up to other networks higher in the stack. In the end, hopefully, these integrated networks allow a computer to execute a truly higher-level operation, like facial recognition.
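To make the "stacking" idea concrete, here is a toy sketch in Python. It is purely my own illustration (not code from Dhar's article or from any real system): each "layer" is just a weighted sum of its inputs followed by a simple nonlinearity, and each hands its output to the layer above it. Real deep-learning systems also learn their weights from data; this sketch uses random weights only to show the structure.

```python
# A toy "deep" stack of layers: lower layers compute simple transformations
# and pass their results up to higher layers. All numbers here are random,
# purely to illustrate how information flows through a stack.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer: a weighted sum of inputs followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ weights + bias)

x = rng.random(64)                                     # raw input, e.g. 64 pixel values
w1, b1 = rng.standard_normal((64, 32)), np.zeros(32)   # low level (think: edge-like features)
w2, b2 = rng.standard_normal((32, 16)), np.zeros(16)   # middle level (combinations of edges)
w3, b3 = rng.standard_normal((16, 2)), np.zeros(2)     # top level (a final decision, e.g. face / not face)

h1 = layer(x, w1, b1)     # simple features
h2 = layer(h1, w2, b2)    # features built from features
out = layer(h2, w3, b3)   # the stack's final output
print(out)
```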

And, Dhar says, it works. Machine learning really happens on its own:

"Newer systems can take the visual, auditory, or language input directly. This advancement enables the machine to take direct inputs from the world without human involvement and create its own internal representation for further processing."

The other development driving this wave of excitement is the ubiquity of Big Data. As Dhar puts it:

"... we have witnessed a mushrooming of machine learning systems in virtually every domain where large data sets have become available. It is becoming more common for computers to perform tasks better than the best humans can."

From self-driving cars to robot assistants that anticipate your needs before you have them, it seems clear that we are already creating a fundamentally different kind of world where some form of artificial intelligence is baked in. In a sense, these systems are becoming "effectively intelligent" via their independence.

Such effective intelligence is an important point, because we need not have machines that have "woken up" before we run headlong into fundamental questions about ethics and existential dangers. That's why one of Dhar's section headings is "How Do We Control What We Don't Fully Understand?" He writes:

"Machines are now ... equipped with the appropriate epistemic criteria (such as predictive accuracy or model identification), [that] they can design themselves to solve problems and discover new knowledge. Once such machines are integrated into the fabric of our lives, we may not be able to "turn them off" if they start behaving in a way we don't understand!"

Again, it's critical to emphasize that we don't need machines to go all Matrix or Skynet on us (meaning a fully self-aware, fully evil AI). Machines just have to become intelligent enough, in the way that scientists are already developing, for society to run into difficulties.

To see how quickly the rising tide of "effective AI" may smash into daily life, consider the legal ramifications of machines behaving in ways we don't understand. How do we create regulations (i.e., robot law) when they must account for the possibility of robot decisions we don't understand? As Dhar asks, "Who is responsible for the actions of a robot that designs itself and learns to get better over time?" Better yet, who is responsible if a network of robots, acting in a way consistent with their design, carries out actions that turn dangerous — or even deadly?

The point is that something really does seem to be different this time — and it's cause for both excitement and worry. Our machines don't need to become conscious to rewire our world. They just need to become intelligent enough. That day may be approaching faster than we are preparing for its arrival.

Adam Frank is a co-founder of the 13.7 blog, an astrophysics professor at the University of Rochester, a book author and a self-described "evangelist of science." You can keep up with more of what Adam is thinking on Facebook and Twitter: @adamfrank4.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Adam Frank was a contributor to the NPR blog 13.7: Cosmos & Culture. A professor at the University of Rochester, Frank is a theoretical/computational astrophysicist and currently heads a research group developing supercomputer code to study the formation and death of stars. Frank's research has also explored the evolution of newly born planets and the structure of clouds in the interstellar medium. Recently, he has begun work in the fields of astrobiology and network theory/data science. Frank also holds a joint appointment at the Laboratory for Laser Energetics, a Department of Energy fusion lab.