
AI Wants to Be Your Self-Improvement Coach

Friday, 28 Oct 2016


How ready are you to take on board what your computer says?

There is a growing move to redefine the acronym AI as "augmented" rather than "artificial" intelligence, a nod to its intended role of helping humans rather than replacing them at work.

But it remains an open question just how willing people will be to take constructive feedback on the way they work from an algorithm or machine.

Researchers at MIT Media Lab are trying to understand that dynamic better.

The researchers want to understand how people’s biases affect how they react, and to get a computer to provide feedback that might help them to change their behaviour.

“We call this human-in-the-loop machine learning,” lab director Joichi Ito told attendees of the IBM World of Watson conference in Las Vegas.

“The idea is can we make the computer smarter by understanding the biases of the humans, but then can we go back to the human being and help them improve?”

Traditionally, Ito said, computer scientists built machine learning models and tweaked them incessantly until the results they produced "roughly match reality".

Only then were people invited to use the models and provide feedback on their relative success.

MIT’s research changes that.

“What we’re doing here is actively putting a human being into the training loop, and we can also create an interface where we’re providing [them] feedback in real time,” Ito said.

In other words, Ito and his team want to build machine learning models and algorithms not just from data, but also from the interpretations that industry professionals bring to that data.

That interpretation – called a “lens” – is used to help the machine understand the different inferences possible, potentially making it more accurate in recognising the right answer, as well as where the humans trying to work out the right answer are going wrong.
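The loop Ito describes can be illustrated with a toy example. This is not code from the MIT project; it is a generic uncertainty-sampling sketch in which a simple model repeatedly asks a (simulated) human to label the point it is least sure about, then retrains. All names here (`fit`, `margin`, `human_oracle`) are illustrative assumptions, not the researchers' API.

```python
# Toy human-in-the-loop training loop: the model queries a human for the
# label it is least certain about, then retrains on the enlarged set.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def fit(labeled):
    """One centroid per class -- a deliberately simple 'model'."""
    return {lab: centroid([p for p, l in labeled.items() if l == lab])
            for lab in sorted(set(labeled.values()))}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda lab: dist(model[lab], x))

def margin(model, x):
    """Gap between the two nearest centroids; a small gap means the
    model is uncertain about x."""
    ds = sorted(dist(c, x) for c in model.values())
    return ds[1] - ds[0]

# Two seed labels plus an unlabeled pool.
labeled = {(0.0, 0.0): "A", (5.0, 5.0): "B"}
pool = [(1.0, 1.0), (2.4, 2.6), (4.0, 4.2), (2.6, 2.4)]

# Stand-in for the human in the loop.
def human_oracle(x):
    return "A" if x[0] + x[1] < 5 else "B"

for _ in range(2):
    model = fit(labeled)
    # Pick the most ambiguous pool point and ask the human about it.
    query = min(pool, key=lambda x: margin(model, x))
    labeled[query] = human_oracle(query)
    pool.remove(query)

model = fit(labeled)
```

The key design choice, mirroring the passage above, is that the human is consulted *during* training, on the cases where the model's inferences diverge most, rather than only evaluating the finished model.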
