In a World Increasingly Filled With AI, How Do We Ensure That Machines Know Right From Wrong?

Editor’s note: The more machines do for us, the more it behooves us to consider ethical issues. This post is an excerpt of an article on that subject that ran in the CIMS Innovation Management Report (IMR) in July/August 2018. It features remarks made by Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, at a recent National Academy of Engineering annual meeting:

Patrick Lin, Ph.D.

Let’s start with a working definition of Artificial Intelligence. It’s not this magical, mystical thing, but just a complex computer program that’s designed to automate decisions and actions. And it does this with the appearance of intelligence.

You can make a comparison between AI and bureaucracy. Just like AI, bureaucracy defines and automates certain decisions. I believe this foreshadows an ethical problem for AI (and bureaucracy): If you’re not careful, you can slip in systemic risk and other vulnerabilities.

Six Domains of AI

On land, we have AI in our everyday devices, from the smartphones in our pockets and virtual assistants on the kitchen table to smart thermostats, connected refrigerators, self-driving cars, factory and mall security robots, and more. In the air, airplanes are already autonomous, military drones are really semi-autonomous, and some cruise missiles are fully autonomous.

On the sea, any place where you find a human being is a market for AI, just as in the other domains. We’re already seeing autonomous surface ships and submarines, for example. In outer space, researchers are working on robot astronauts, and we find AI in Mars rovers, satellites, and other systems.

Then there’s inner space—the space inside our bodies—where AI is being used for medical diagnosis as well as connecting to us more intimately, for instance, in augmented reality flight helmets and the “neural lace” that Elon Musk is planning to make. AI can indeed shape the way we view the world.

Finally, there’s cyber space. AI is in expert systems for criminal sentencing, hiring, and banking. Medicine and science are using it to uncover principles that humans had not yet discovered on their own.

Super-Automated Decision-Making

All these different forms of AI present different kinds of issues, from job displacement to unclear responsibility, privacy, psychological effects, and a possible AI arms race. But I’ll focus here on the core issue of proper decision-making. If AI is really about super-automation of decisions, it’s fundamental to ask whether we are doing that right.

There are three kinds of decisions: decisions that are right, decisions that are wrong, and a weird gray space of decisions that are neither wrong nor right. These are decisions that require a judgment call, or an ethics call. To give AI the best chance possible, let’s assume that it works as designed, that none of the sensors are broken, and that the system has not been hacked.

Even if we built an AI system right—if we built it to spec—we should ask whether we made the right thing. Did we create what was actually needed? With that in mind, I’ll ignore the decisions that are right and look first at the wrong ones.

Wrong Decisions

One way AI can go wrong is by exhibiting emergent behavior—behavior we didn’t expect and don’t want. We’ve already seen this with flash crashes, where autonomous financial trading bots, working against each other at digital speeds, crash the stock market and cause real losses.

This actually happens more often than you might think; it happens because AI is incredibly complex, and with complexity you often get unpredictability. When you have two autonomous systems meeting for the first time, that compounds the unpredictability.

Several years ago, for instance, Amazon had this weird case where a couple of pricing bots were essentially locked in a pricing war with each other over a used textbook about flies. They drove up its price to $23 million! Other worries include the prospect of our robot army meeting an adversary’s robot army for the first time. We can’t predict what the effects will be, and our adversary is unlikely to lend us their robots to make sure they’re interoperable with ours.
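To see how two individually sensible rules can spiral into an absurd outcome, here is a minimal sketch of that kind of pricing feedback loop. The multipliers and starting prices are invented for illustration; the sellers’ actual repricing rules were never published.

```python
# Minimal sketch of two repricing bots reacting only to each other's prices.
# Multipliers and starting prices are illustrative, not the sellers' actual rules.

def reprice_a(competitor_price: float) -> float:
    """Seller A undercuts the competitor by a hair to win the sale."""
    return round(competitor_price * 0.998, 2)

def reprice_b(competitor_price: float) -> float:
    """Seller B prices above the competitor, e.g., to cover sourcing the book elsewhere."""
    return round(competitor_price * 1.27, 2)

price_a, price_b = 35.00, 40.00          # plausible used-textbook starting prices
for day in range(1, 31):                 # each bot reprices once a day
    price_a = reprice_a(price_b)
    price_b = reprice_b(price_a)
    print(f"day {day:2d}: A = ${price_a:,.2f}   B = ${price_b:,.2f}")

# Because 0.998 * 1.27 > 1, both prices grow exponentially -- emergent
# behavior that neither rule exhibits on its own.
```

Neither bot is buggy in isolation; the runaway price is a property of the interaction.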

There are similar concerns with self-driving cars. Will cars from different manufacturers be able to negotiate around each other and interoperate without an industry standard, for instance?

Gaming the AI

Another way an AI system can make a wrong decision is if you game it by, for example, introducing an adversarial example. I’m talking here about learning AI: neural nets that require tons of examples, or training data, to identify a pattern and figure out things for themselves.

In one 2014 exercise, Google researchers created an AI system that could learn to identify pandas, cats, and other animals by analyzing millions of images. In this case, it identified a panda image with about 60 percent confidence.

Then the researchers slipped a tiny bit of imperceptible, pixelated noise into that image. As a result, the AI classified the composite image as a gibbon, with 99 percent confidence! That’s one way we can trick learning AI—we can’t always predict what it’s going to learn.
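For readers who want the mechanics, the technique behind the panda example is usually called the fast gradient sign method. Below is a rough sketch assuming a differentiable PyTorch image classifier; the model, image, and class index are placeholders, not the actual 2014 setup.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.007):
    """Fast-gradient-sign-method sketch: add noise too small for a person to
    notice, chosen to push the classifier away from the true label.

    `model` is any differentiable image classifier (a placeholder here);
    `image` is a tensor of shape (1, 3, H, W) with values in [0, 1]."""
    image = image.clone().detach().requires_grad_(True)

    # Measure how "wrong" the model is about the true label...
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()

    # ...then nudge every pixel slightly in the direction that increases that loss.
    noise = epsilon * image.grad.sign()
    return (image + noise).clamp(0, 1).detach()   # still looks like the same photo

# Usage sketch (names are placeholders, not a published model):
# panda = load_image("panda.jpg")                     # classified "panda", ~60% confidence
# tricked = fgsm_perturb(classifier, panda, torch.tensor([PANDA_CLASS_INDEX]))
# classifier(tricked)                                 # now confidently mislabeled
```

The stop-sign example below rests on a similar idea, with the perturbation applied in the physical world rather than to stored pixels.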

We can see how this translates into a real-world risk. For instance, the camera from a self-driving car might be filming its surroundings, but if you introduce the right adversarial example, pedestrians can become invisible to the computer.

Recently, researchers have shown that you don’t even need to hack into the system. All you really need are little bits of tape strategically placed on a sign, manipulating it just a tiny bit. Now, all of a sudden, this bright red stop sign appears to the car’s computer like a 45-mile-an-hour speed limit sign. Computer vision is still imperfect! There’s still a lot of work that needs to be done.

Algorithms Crystallize Bias

Even if you’re not gaming AI, it can go wrong by producing biased answers. This applies mostly to learning algorithms that rely on lots of data. If you start with bad data, you have the garbage-in, garbage-out problem. A famous example: a man searching for a job online is more likely to be shown ads for CEO positions than a woman is. That’s an accurate reflection of the data, but the data may be a broken reflection of a society in which CEOs have historically been mostly white men.

As my Cal Poly colleague Ryan Jenkins puts it, algorithms tend to crystallize bias: once a bias is baked into an algorithm, further thinking tends to stop. The data scientist Cathy O’Neil, who wrote the excellent book Weapons of Math Destruction, puts it nicely: algorithms are opinions embedded in code. That means data and algorithms aren’t as objective as you might think they are.
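Here is a toy illustration of the garbage-in, garbage-out point: train a model on hiring data that encodes a historical bias and it will faithfully reproduce, and crystallize, that bias. The data and features below are invented for illustration, not taken from any real ad-targeting system.

```python
from sklearn.linear_model import LogisticRegression

# Invented historical data: [years_of_experience, is_man] -> was_shown_ceo_ad
# In this made-up history, the CEO ads went overwhelmingly to men.
X = [[12, 1], [15, 1], [11, 1], [9, 1], [14, 0], [13, 0], [16, 0], [10, 0]]
y = [  1,      1,       1,      1,      0,       0,       0,       0     ]

model = LogisticRegression().fit(X, y)

# Two equally qualified candidates who differ only in the gender field:
man, woman = [14, 1], [14, 0]
print(model.predict_proba([man])[0][1])    # noticeably higher chance of seeing the CEO ad
print(model.predict_proba([woman])[0][1])  # noticeably lower chance: the bias is now code
```

Nothing in the training step is malicious; the model is accurately reflecting biased data, which is exactly the problem.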

Decisions Neither Wrong Nor Right

Let’s look now at the subtler category of decisions that are not wrong but also not right. Imagine you’re in a robot car driving on your local freeway. You’re in the middle lane and, for whatever reason, you’ve got to swerve. Do you swerve right and hit the smaller car, or swerve left and hit the larger one? There are reasons to go either way. If you’re worried about the people in the other vehicles, you should swerve left into the larger car, whose occupants are better protected in a crash. On the other hand, if you’re worried about your own life, then you should hit that smaller vehicle, since that collision poses less danger to you.

Either way is reasonable, but once you make that decision, you are systematically discriminating against a particular class of vehicles through no fault of their owners, other than that they couldn’t afford a larger car or that they have a large family. It’s important to remember that programmed decisions are premeditated decisions, and law and ethics treat the two differently. This is the difference between an innocent accident and potentially premeditated homicide.

This doesn’t only apply to weird crash scenarios. There are ethical dilemmas in the everyday decisions a self-driving car has to make, as my colleague Noah Goodall in Virginia also points out. For instance, imagine you’re going down a narrow road and encounter a group of five people on one side and a single person on the other. Where do you position the car? Do you weigh all six lives equally and let it drive straight down the middle, or do you give the group more space because you decide the five people are worth more than one?

There are many such scenarios, even today with navigation apps that choose a route for you; they tend to default to the fastest path, no matter the risk. These scenarios all raise questions about how we value different lives and what tradeoffs we’re willing to make. There’s no obviously right way, but you should recognize that whatever decision you or your robot makes involves transferring risk among one or more parties, and no one asked you to do that.
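As a small illustration of how that risk transfer can be made explicit rather than hidden in a default, here is a sketch of a route chooser whose single weight encodes the value judgment. The routes, times, and risk scores are invented.

```python
# Hypothetical route chooser that makes the time-vs-risk tradeoff explicit.
# Routes, times, and risk scores are invented for illustration.

routes = [
    {"name": "freeway",     "minutes": 22, "risk": 4.0},   # risk: expected injuries per million trips
    {"name": "arterial",    "minutes": 26, "risk": 2.5},
    {"name": "residential", "minutes": 31, "risk": 1.0},
]

def route_cost(route, minutes_per_unit_of_risk):
    """Lower is better. The weight is the value judgment: how many extra
    minutes of driving we would accept to avoid one unit of risk."""
    return route["minutes"] + minutes_per_unit_of_risk * route["risk"]

# A "fastest path" app is simply the special case where risk is weighted at zero.
for weight in (0, 3, 10):
    best = min(routes, key=lambda r: route_cost(r, weight))
    print(f"risk weight {weight:2d}: choose the {best['name']} route")
```

Whoever picks the weight is making the ethical call, whether they notice it or not.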

Superhero Ethics

It’s often helpful to connect new, unfamiliar things—like AI ethics—to the more familiar. I’d suggest that we can think about technology as a super power. Drones, for instance, are giving us the ability to fly. With drones, you could see into the windows of a 38-floor apartment building just like Superman could. Surveillance cameras and tiny sensors give us super senses—super vision, super hearing. AI combined with big data gives us omniscience!

So Facebook might know a lot more about your family history than you even know. A few years ago, Target reportedly figured out that a teenage girl was pregnant before her own father knew. That’s omniscience.

Bionic exoskeletons give us super strength. Biotechnology has given us super-metabolism; the military is working on soldiers who don’t need to eat or sleep. Nanotechnology has given us metamaterials, some of which act like invisibility cloaks right out of Harry Potter. Brain-computer interfaces are essentially giving us the power of telepathy. CRISPR and gene editing let us create mutants.

These are literally super powers that jump off the pages of comic books, which means we can think of technology ethics as superhero ethics. Think of it as asking: What happens when we get super powers? How do super powers change ethics? How do they change our institutions, like privacy and education? How do they change our norms? As the saying goes, with great power comes great responsibility. Today, new powers come with new responsibilities.

Imagine we’re in Metropolis, where Lex Luthor has planted a bomb. You’re not obligated to pick up the bomb and throw it into outer space. It makes no sense to say you have a responsibility to do something you physically cannot do. But Superman has this responsibility. He can do it. And arguably, he should do it. So, we have to think about the ways that technology is changing us—changing our obligations and responsibilities.

We’re all stakeholders in this technology-driven world. You might not be interested in robot cars, but robot cars might be interested in you as they drive alongside you and your friends and family. It’s like what British scientist Sir Martin Rees once said in The Guardian: “Scientists surely have a special responsibility. It’s their ideas that form the basis of new technology. They should not be indifferent to the fruits of their ideas.”

When law and policy concerning a new technology are unclear—our driving laws largely don’t contemplate an AI driver, for instance—it’s often helpful to go back to first principles. Go back to ethics as your moral North Star, your moral compass, to help point the way to sound law and policy.

Professor Patrick Lin is the co-editor, along with Keith Abney and Ryan Jenkins, of Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. To have access to the insights of other innovation leaders, subscribe to the CIMS Innovation Management Report today.