
Image source: http://recode.net/2014/05/27/googles-new-self-driving-car-ditches-the-steering-wheel/ 


Dr Paul Duignan on Outcomes 

Nicholas Carr has just written a book called The Glass Cage on the effects of automation. One of the points he makes is that once we develop autonomous systems, they need to include some framework for ethical reasoning. This is the case whether they are self-driving cars or killer drones. You can get a taste of his argument from his Salon interview.

He argues that an autonomous car needs ethical decision rules for situations where it is about to run over something on the road, for instance an animal. Should it run over it, or attempt to avoid it and put the car at risk of a crash?

He provides a list of the possible angles from which the car could get guidance on what to do: the car owner, the software programmer, the insurance company, and the government. There are others as well, for instance an animal rights point of view. Carr raises the point that you need some way of determining which of these perspectives the autonomous vehicle will take into account when deciding what action to take in this type of situation.

This is an interesting problem and it set me to wondering whether outcomes theory could help out, even if only as a way of working with the problem in a visual format.

From an outcomes theory point of view, the different angles would be viewed as different outcomes models. In theory you could draw an outcomes model for each of them and then compare these models for commonalities and differences. Presumably many of the differences would revolve around the probability of different consequences occurring. For instance, the insurance company's outcomes model would tolerate a lower probability of emergency braking sparking a crash with a car behind than an outcomes model drawn from an animal rights perspective would.
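
To make the comparison concrete, here is a minimal sketch of how these perspectives might be represented and compared in code. This is purely my own illustration, not anything from Carr or from any existing outcomes software; the perspective names, the risk thresholds, and the scenario figure are all invented for the example.

```python
# Hypothetical sketch: each stakeholder perspective is an "outcomes model"
# reduced to a single parameter -- the maximum crash probability it will
# accept in order to avoid hitting an animal. All values are invented.

from dataclasses import dataclass

@dataclass
class OutcomesModel:
    perspective: str
    max_acceptable_crash_risk: float  # crash probability this model tolerates

# One model per angle listed above, plus the animal rights one.
models = [
    OutcomesModel("car owner", 0.02),
    OutcomesModel("software programmer", 0.01),
    OutcomesModel("insurance company", 0.005),
    OutcomesModel("government", 0.01),
    OutcomesModel("animal rights", 0.05),
]

def decide(swerve_crash_risk: float, model: OutcomesModel) -> str:
    """Swerve only if the estimated crash risk is within the model's tolerance."""
    if swerve_crash_risk <= model.max_acceptable_crash_risk:
        return "swerve to avoid the animal"
    return "brake in lane and continue"

# Run the same scenario through every model to surface their differences.
scenario_risk = 0.015  # estimated probability that swerving sparks a crash
for m in models:
    print(f"{m.perspective:20s} -> {decide(scenario_risk, m)}")
```

Even in this toy form, the comparison does what the visual comparison of outcomes models would do: the commonality (everyone weighs animal harm against crash risk) and the difference (where each perspective sets the threshold) both become explicit.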

Perhaps in the future, when you buy your insurance, you could decide on some of these probabilities yourself and, depending on the options you selected, pay higher or lower premiums. Or maybe the government would make an executive decision and, on balance, settle on the probability of sparking a crash that would be programmed into such autonomous cars.
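
As a back-of-the-envelope sketch of that pricing idea (again, every number here is invented for illustration), premiums might simply be loaded against the crash-risk tolerance the buyer selects:

```python
# Hypothetical sketch: the premium rises with the crash-risk tolerance the
# buyer selects, since a more animal-protective setting means the car will
# accept more crash risk. Base premium and loading factor are made up.

BASE_PREMIUM = 800.0          # annual premium at the most crash-averse setting
LOADING_PER_UNIT_RISK = 20000.0  # extra cost per unit of accepted crash probability

def annual_premium(chosen_crash_risk: float) -> float:
    """Linear loading: more tolerated crash risk -> higher premium."""
    return BASE_PREMIUM + LOADING_PER_UNIT_RISK * chosen_crash_risk

for risk in (0.005, 0.015, 0.05):
    print(f"accepted crash risk {risk:.3f} -> premium ${annual_premium(risk):,.2f}")
```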

I don't have a feel for how much outcomes theory has to offer anyone working on this class of problem. If you're interested in ways of reasoning about these types of things, get in touch and I could mock something up to see whether it would add any value.

Paul Duignan, PhD. You can follow me on Twitter.com/PaulDuignan or contact me here. Discuss this post in the Linkedin DoView Community of Practice here.

