If you think the idea of robots having rights is science fiction, you’d be dead wrong.

As unbelievable as it sounds, the European Commission is currently considering a proposal to give rights to robots, including adopting science fiction writer Isaac Asimov’s Three Laws of Robotics.

Experts are furiously debating the implications this could have for society.

As an organiser of last week’s International Joint Conference on Artificial Intelligence in Melbourne, ACS assembled a panel of AI experts to discuss an alternate reality we’re not yet ready for.

“We already have laws about working with machines, such as cars. Do we have laws about working with machines that have some form of intelligence or brain? Probably not,” said ACS President Anthony Wong.

“Why are we considering giving rights to robots? Because that changes everything.

“If a robot creates some work, such as a painting or music, who owns that? A robot is a machine; it’s not a legal person and cannot own it, so who should have the right to it?

“If I were a VC investing in AI research, I’d want to know what rights I have to those works, because some of that AI will be creating them.

“And who’s responsible if things go wrong? The manufacturer, the person controlling it, or the robot itself?

“We need to wrestle with these questions, and the Australian Government needs to look at them too, because the world is asking the same questions. We need to have an informed discussion.”

Professor Liz Bacon from the University of Greenwich agreed, saying governments had “huge issues” to address, including ethics, privacy and security.

Outcomes

The panel also debated the varied, and sometimes unexpected, outcomes AI can produce.

While many people think you simply program an AI and move on, you don’t always get the outcome you expect, said AI entrepreneur Marita Cheng.

A good example of this was DeepMind’s AlphaGo, Cheng said, which was trained on the best data sets available -- moves from past championship games.

But when it came to playing the [human] world champion, the machine stunned the Go world with a move no-one had ever seen before.

“AI can learn from everything that you’ve taught it and then make its own decisions, based on the objectives that you’ve given it,” Cheng said.
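Cheng’s point can be illustrated with a deliberately tiny sketch (hypothetical Python, not how AlphaGo actually works): the agent below is never told which move to play for any position. It tabulates a score for each move from example games, then chooses whatever best serves the objective it was given.

```python
# Hypothetical toy, not AlphaGo: a pile-of-stones game where each turn
# removes 1 or 2 stones, and a move "wins" in our example data if it
# leaves a multiple of 3 behind.
examples = [
    (state, move, (state - move) % 3 == 0)   # (state, move, won?)
    for state in range(1, 20)
    for move in (1, 2)
    if move <= state
]

# "Training": accumulate a score for every (state, move) pair seen in the data.
value: dict[tuple[int, int], int] = {}
for state, move, won in examples:
    value[(state, move)] = value.get((state, move), 0) + (1 if won else -1)

def choose_move(state: int, legal_moves: list[int]) -> int:
    """Pick the move with the best learned value -- the agent's own decision,
    driven by its objective rather than by a hand-written rule per position."""
    return max(legal_moves, key=lambda m: value.get((state, m), 0))

print(choose_move(10, [1, 2]))  # -> 1 (leaves 9, a multiple of 3)
```

The decision rule falls out of the data and the objective, not out of explicit move-by-move instructions -- which is why, at scale, such a system can surprise even its builders.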

IFIP President Mike Hinchey said bias was one of the biggest risks when programming machines.

“When humans are programming, a lot of the time, they’re building in their own biases, because it’s easy to program what you think is right.

“At NASA, we’ve had autonomous spacecraft, and because of the way they were programmed, they would always be biased towards certain decisions, rather than ones that would actually have been better.”
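A hedged illustration of what Hinchey describes (entirely invented code, not NASA’s): a hand-written rule that bakes in the programmer’s assumption -- here, “familiar options are safer” -- will keep choosing them even when the stated objective points elsewhere.

```python
# Hypothetical sketch of programmer bias baked into a decision rule.
# The names and numbers are invented for illustration only.
options = [
    {"name": "route_a", "familiar": True,  "fuel_cost": 9.0},
    {"name": "route_b", "familiar": False, "fuel_cost": 4.0},
]

def biased_choice(options):
    # The programmer assumed familiar options are always preferable,
    # so unfamiliar ones are only considered when no familiar one exists.
    familiar = [o for o in options if o["familiar"]]
    return min(familiar or options, key=lambda o: o["fuel_cost"])

def unbiased_choice(options):
    # Judged purely on the stated objective (fuel cost), the answer differs.
    return min(options, key=lambda o: o["fuel_cost"])

print(biased_choice(options)["name"])    # route_a -- the built-in bias wins
print(unbiased_choice(options)["name"])  # route_b -- the cheaper option
```

Nothing in the biased rule looks wrong in isolation; the programmer simply encoded what they thought was right, which is exactly Hinchey’s warning.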