You’re riding along in your driverless car when suddenly a group of five people step off the footpath and into the path of your car.

There’s no time for the car to stop.

It can veer violently off the road, potentially killing its passenger.

Or it can plough through the group, potentially causing five fatalities.

What should the driverless car do?

Or should the question be, what does the car’s programmer tell the car it should do?

And who tells the programmer what to do?

The never-ending questions

Each question on ethics in the era of artificial intelligence (AI) seems to raise another.

Professor Liz Bacon is the Deputy Pro Vice-Chancellor in the Faculty of Architecture, Computing and Humanities at the University of Greenwich in the UK. She also has a PhD in artificial intelligence, and is a Past Chair of the British Computer Society.

Speaking at the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne last week, Bacon said those debating AI laws need to wade through a myriad of complex questions.

In the driverless car example above, Bacon said the AI programmed into the car has to stand in for the thinking a human driver would do in the same situation.

“These days, as a driver, you have to make a split-second decision, but [for autonomous vehicles], you’re going to have to program that algorithm to say what are the consequences. What do you do in a situation where there is no choice but somebody has to be harmed?

“You could do it on straight numbers. The car could say ‘I’m going to veer off the road and kill the person in my own vehicle – and I’m going to save those five people on the road.’

“Or you could do it, depending on what the AI of the car knows, on the value of the people to society. Say the person in the car happens to have the cure for cancer, and the five people are escapees from jail.

“Maybe you say their value to society is worth less, so maybe you say ‘I’m going to save the person in the car’.

“There are horrendous implications there, if you were to actually consider the value of people to society.”

“If you have a computer that can do split-second look-ups, autonomous cars have all sorts of knowledge of who’s around them to make that actual decision. It’s scary to think about how you program that.”

Then there’s the question of whether programming the algorithm to ‘kill these people over that person’ could be considered by a court to be premeditated murder.

“These are really huge ethical and societal questions that we are going to have to [answer] when people are building autonomous cars,” said Bacon.
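To make the “straight numbers” option concrete, here is a deliberately naive sketch in Python. Everything in it is hypothetical: the class, the casualty estimates and the decision rule are illustrative assumptions, not how any real vehicle is programmed. It simply shows how bare harm-minimisation could be expressed in code.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "straight numbers" rule Bacon describes:
# choose the manoeuvre expected to harm the fewest people, with no
# judgement about who those people are. All names and numbers here
# are illustrative assumptions.

@dataclass
class Manoeuvre:
    description: str
    expected_casualties: int  # estimated number of people harmed

def choose_manoeuvre(options: list[Manoeuvre]) -> Manoeuvre:
    """Pick the option with the lowest expected casualty count."""
    return min(options, key=lambda m: m.expected_casualties)

# The scenario from the top of this article:
options = [
    Manoeuvre("veer off the road, endangering the passenger", 1),
    Manoeuvre("continue ahead, endangering the five pedestrians", 5),
]

print(choose_manoeuvre(options).description)
# -> veer off the road, endangering the passenger
```

Note how little code this is: swap the casualty count for any weighted “value to society” score and the same one-line decision rule inherits all of the ethical problems, and all of the scope for bias, that Bacon goes on to describe.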

Who gets to make those decisions?

Bacon said the government of the day needs to step up and tackle the hard questions.

“When governments catch up – and I don’t think they have; their legislation tends to be behind reality – I think they need to try and put some laws and some legislation into place.

“The actual fine-tuning of decisions could be made at company level, at a mid-level managerial level, it could be done at the actual level of the programmer – and that’s where we have issues of potential bias coming in, which could be conscious bias or unconscious bias.”

But I don’t want to program a car that kills

When it’s your job to program autonomous cars, you may disagree with the company line on how they should handle different situations on the road.

Would you feel comfortable programming a car to kill pedestrians in order to protect its occupants at all costs?

“Bias can creep in at any level of the production,” said Bacon.

“Everyone has their own sense of right and wrong, and their own personal threshold about what they feel they are able to engage with.

“It can be very difficult if you’re under pressure, you don’t want to lose your job, and you’ve got your boss pressuring you to do something you’re not comfortable with. When are you going to walk? When are you going to whistle-blow? Are you going to be the next Edward Snowden?”

Concerns

Hacking into an autonomous car isn’t as far-fetched as it sounds. In the US, hackers have successfully broken into cars and changed their behaviour, including applying the brakes.

“Across the world, artificial intelligence can do massive, massive good.

“But the sad thing is that there are people out there that want to harm us, who are unscrupulous and unethical.

“If you wanted to kill someone, you could hack into their car and theoretically have it drive off a cliff or into a wall,” said Bacon. “You could make it look like an accident.”

Bacon said that while there will continue to be deaths on our roads, the move to autonomous cars will lead to fewer deaths overall, and that they will be “different types” of deaths.

“The human driver may have killed Person A but the autonomous car may have killed Person B because their algorithm is different.”

In addition to autonomous cars, Bacon added, AI will infiltrate other areas, such as battle zones.

“It won’t be long before we stop sending people into wars,” she said. “We send drones, we send robots, we send bombs. People are remotely controlling robots doing the fighting. That’s the reality, that’s where it’s going.

“In that sense, it’s better that people aren’t potentially getting killed on the front line anymore when we have robots that can do that. It changes the nature of war, it changes the nature of decision-making.”

I want to sue

Who does one sue when a relative gets killed by an autonomous car?

“That’s a biggie,” said Bacon. “And I do think that’s where we need government help in terms of legislation because that’s a really, really hard question. There’s some discussion about suing the robots, although I don’t really understand how that works because robots don’t have bank accounts and money to pay people. There’s the organisation that developed it, there’s the programmer – there’s going to be a whole bunch of stakeholders, and I really don’t have the answer.

“But insurance companies are going to have to have the answer because when things crash, they have to know who’s at fault. That’s a big issue and may well slow down the release of autonomous cars onto roads.”

Ignorance is bliss

Not many people yet understand the implications of having autonomous vehicles on the roads, and the education process needs to begin while the technology is being developed.

“I think the whole of society needs the education now so they can understand what AI is, and the impact on life,” said Bacon.

“What it means, what it can do, what its potential for society is – the good and the bad – so that they are better qualified to make their own judgement.”

Above all, said Bacon, people need to have absolute confidence in these systems.

“Trust is going to be key,” she said. “People bringing out autonomous cars are not going to sell anything unless people actually trust the technology.”