Ethical Considerations in Advanced Robotics and AI

Written by Eric Miller
Published on Jan 20, 2017
Topic: Technology


Advanced robotics and artificial intelligence are new territory for both scientific and sociological discussion. As these products become increasingly complex, problems multiply, and when something goes wrong it’s not always clear whether responsibility lies with the user or with the manufacturer. I was inspired by an article whose author discussed tracking the ownership of advanced robots through a kind of legal registry, an idea I’ll come back to below.

Important Terms

AI: Artificial Intelligence - A human-created entity that can respond to external stimuli with a sophistication comparable to a human’s.

Neural Network: Neural Net - A process that automatically improves itself using feedback.

The Complexities of Neural Networks

You may have heard the term “neural network” come up in discussions of software engineering. This is the technology that drives many of the more impressive upcoming products, like Google’s self-driving cars. It’s a paradigm born of the need to develop software more complex than humans are able to wrap their minds around. I’m delving into this subject first because understanding the basic ideas behind this technology is imperative to discussing the ethical implications further down the line.

So what are neural networks?

Neural networks are a new paradigm in programming: rather than telling the computer what to do, you tell it what you want done. Instead of walking the computer through every step of a task, you evaluate its performance at the task and let it use that feedback to improve. This process is inspired both by how human beings learn (by continuously folding feedback into how we do things) and by the principles of evolution.

For example, say you write a program to cook pancakes. (Because I like pancakes.) In traditional programming, you would outline something like a recipe: it would go through all the steps, with clearly defined ingredients, quantities, and cooking times. Then, once you eat the pancakes, you might go back and adjust the recipe for better pancakes.

Using a neural network, the process is a bit different. You may start with a recipe, or even just a list of ingredients; most likely, you’d program it with an initial, very basic recipe. Then you’d develop a scoring system for the pancakes: some points for taste, some for nutritional value, and a small penalty for deviation from the original recipe (so that as the “recipe” changes, it keeps making recognizable pancakes rather than drifting into something else entirely). Next, the program takes the original recipe, makes minor random changes, and serves the modified recipes to people. If any modification scores higher than the original, it uses that recipe going forward. Over time (and a lot of samplings), the pancakes will be far superior to the original recipe.
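To make that loop concrete, here’s a minimal Python sketch of the evaluate-and-mutate cycle described above. The recipe, the scoring formula, and the mutation step are all invented for illustration; strictly speaking this is a simple evolutionary (hill-climbing) search rather than a literal neural network, but it captures the same feedback principle the pancake example does.

    import random

    # Hypothetical starting recipe: ingredient -> amount (arbitrary units).
    recipe = {"flour": 1.0, "milk": 1.0, "egg": 1.0, "sugar": 0.2, "salt": 0.05}
    ORIGINAL = dict(recipe)

    def score(r):
        # Stand-in for human taste-testers. This formula is made up: some
        # points for "taste," some for "nutrition," minus a small penalty
        # for drifting too far from the original recipe.
        taste = -abs(r["sugar"] - 0.25) - abs(r["salt"] - 0.04)
        nutrition = -abs(r["egg"] - 1.2)
        deviation = sum(abs(r[k] - ORIGINAL[k]) for k in r)
        return taste + nutrition - 0.1 * deviation

    def mutate(r):
        # Make one minor random change, as described above.
        child = dict(r)
        ingredient = random.choice(list(child))
        child[ingredient] = max(0.0, child[ingredient] + random.uniform(-0.1, 0.1))
        return child

    # Keep any modified recipe that scores higher than the current one.
    for _ in range(10000):
        candidate = mutate(recipe)
        if score(candidate) > score(recipe):
            recipe = candidate

    print(recipe)

After enough iterations the surviving recipe can look quite different from where it started, which is exactly the behavior the next section worries about.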

The Scary Side of Neural Networks

This is where things get kind of unnerving. Sticking with the pancake metaphor, you’ll start to see the neural network doing really weird things. If you look at the recipe, it will be extremely long and full of steps that make no sense: seasonings you’d never put in pancakes, bizarrely complex ways of mixing batter, and other things a human being would never do. Most likely, a human trying to read the recipe wouldn’t be able to understand it at all.

I remember a case study where some scientists made a neural network to solve a relatively predictable problem; either solving an equation or navigating a maze, I don’t remember. After running the neural network for long enough, they found two things: 1) the process it followed was borderline gibberish and extremely difficult to understand, and 2) the algorithm had become so sophisticated that it was actually taking advantage of previously undiscovered flaws in the hardware to optimize its speed.

So as an algorithm becomes increasingly complex, it eventually moves beyond human comprehension, in unpredictable ways. This gives way to an important, and perhaps unanswerable, question: at what point of complexity, if any, are we forced to consider the possibility that an electronic device may be conscious?

Add to this the fact that, via the internet, massive numbers of these robots or computers could be joined together into a single neural network. Imagine the skill of an individual who spent an entire lifetime studying a trade; a networked neural net could accumulate the equivalent of tens, hundreds, or thousands of lifetimes of experience in that trade.

The answer, pretty much unanimously, is “we’re not sure, but we’re pretty sure we’re not there yet.” Perhaps by following certain guidelines, we can find a way to ensure we never reach that point. Either way, this will give way to entirely new discussions about ethics and technology, the likes of which we’ve never seen. Further, it raises the problem of how human beings will function in a post-scarcity society, or in a society where only some people are post-scarcity, or whether a post-scarcity society is even possible once labor becomes effectively infinite.

Legal Status of Advanced Robotics

As time goes on, it’s becoming apparent that we will need a separate legal status for sufficiently dangerous equipment. The article mentioned above suggested tracking possession of these robots through a kind of registry, which is an interesting idea. What if, like land, sufficiently advanced robots transferred ownership by way of a legal title or deed? That would allow anyone to track down and identify the owner if a robot goes rogue.
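As a thought experiment, here’s what the bones of such a registry might look like in Python. All of the names and fields here are hypothetical; a real registry would also need identity verification, legal process, and tamper resistance.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Title:
        serial_number: str
        owner: str
        history: list = field(default_factory=list)  # chain of past owners

    class RobotRegistry:
        def __init__(self):
            self._titles = {}

        def register(self, serial_number, owner):
            # Issue a title when a robot is first sold, like a land deed.
            self._titles[serial_number] = Title(serial_number, owner)

        def transfer(self, serial_number, new_owner):
            # Record the change of ownership so the chain stays traceable.
            title = self._titles[serial_number]
            title.history.append((title.owner, datetime.now()))
            title.owner = new_owner

        def owner_of(self, serial_number):
            # Identify the current legal owner, e.g. if a robot goes rogue.
            return self._titles[serial_number].owner

The key design point is the history list: like a chain of deeds, it lets an investigator walk ownership backward in time.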

But further, who gets held accountable when things go wrong? Say a piece of highly intelligent industrial equipment in a machine shop, programmed using a neural net, injures a worker. Most would agree that the worker needs to be compensated, but should the robot’s legal owner be held accountable, or its manufacturer? It’s even possible that the neural net is part of a “hive mind,” where all devices operate according to the same shared rules. This “hive mind” could be fully distributed, with no owner at all, like the internet. Perhaps the manufacturer of the robot never even touched its programming.

But in a system where a robot becomes as complex as, or even more complex than, a human being, could we hold the robot itself responsible? And if the robot is part of a hive mind, how can you hold anyone responsible? The hive mind is, at that point, beyond the control or understanding of any human being, yet it’s also not sentient and is incapable of malicious intent. In this scenario, there are strong arguments that no one could fairly be held accountable, since the only point of failure was not controlled by any sentient being.

At this point, we have to consider granting the “hive mind” the status of a legal entity, because it must be held accountable for its actions; no one else can be. It would then need money, in order to compensate people when things go wrong. But this raises entirely new questions:

  • How does it decide when to go to court and when to settle?
  • Who argues for it in court? Does it hire a “lawyer” hive mind, or does a human lawyer handle it?
  • How could a human judge assess a being that they not only can’t understand, but can’t empathize with?
  • What happens when a “hive mind” wrongs another “hive mind”? Does it have to pay the other hive mind?
  • Where will the hive mind get its money from? Who does it charge?
  • Would all of this be decided by its original creator? Would it owe its creator royalties? Could its creator “disown” it (similarly to what happens with some open-source development projects)?

Closing Thoughts

This is a lot to think about, and ultimately we won’t have any concrete answers until we get there. However, I’ve got a few ideas and principles that I believe could make things easier going forward.

  • Intentional creation of sentient beings should, at least at the beginning, be illegal.
  • We need a definition of sentient for this context.
  • Ultimately, I believe that the creation of special legal status (similar to corporations) for sufficiently intelligent artificial entities is necessary.
  • How is a legal artificial entity different from a human, or a corporation? Can it declare bankruptcy? What legal rights and responsibilities does it have?
  • At exactly what point does a neural network become sufficiently detached from human control that an individual human can’t be held accountable?