Elon Musk wants to keep AI in check

Written by Unknown on Saturday, 17 January 2015 | 00:32

From "2001: A Space Odyssey" to the "Terminator" movies, Hollywood has warned about brainiac robots running amok and turning on us, their human creators. Now the genius behind Tesla Motors and SpaceX is giving a $10 million shot in the arm to a 
local nonprofit dedicated to ensuring robotic weapons and cars don't get too smart for their own circuits.

It's a scenario that has Elon Musk unnerved. He compared artificial intelligence to "summoning the demon" at a Massachusetts Institute of Technology conference last fall, and has called AI "potentially more dangerous than nukes."

"Certainly you could construct scenarios where recovery of human civilization does not occur," Musk said in a video yesterday introducing his donation to the Future of Life Institute. "When the risk is that 
severe, you should be proactive and not reactive."

The nonprofit institute, based in Cambridge, is focused on maximizing the potential benefits of artificial intelligence and minimizing the inherent risks of smart machines. It's backed by an array of mathematicians and computer science experts, including Jaan Tallinn, a co-founder of Skype, and plans to use Musk's donation to begin accepting grant applications next week from researchers working on artificial intelligence safety.

"There's obviously nothing intrinsically benevolent about machines," said Max Tagmark, Future of Life Institute president and a Massachusetts Institute of Technology professor. "The reason that we humans have more power on this planet is because we're smarter. If we start to create entities that are smarter than us, then we have to be quite careful when we start to do that to make sure whatever goals they have are aligned with our human goals."

Among the potential pitfalls of artificial intelligence are:

  • Autonomous weapons and drones that could trigger an accidental war. A U.N. expert in 2013 called for a global ban on armed robots that could select and kill targets without human control.
  • The ethical and legal implications of allowing a self-driving car to decide what action to take in a dangerous situation. For example, a self-driving car might swerve off the road and kill the passenger to avoid a family crossing the street.
  • Economic effects, such as developing software that completely automates a specific job, replacing humans and causing massive unemployment.

The key, said Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence and an Oregon State University professor, is ensuring the software behaves the way we want it to.

"We will soon be able to say to our cars, 'Get me to the airport as quickly as possible,' " Dietterich said, "but we don't want the car to drive 300 mph and run over pedestrians."

With technological advances moving artificial intelligence out of labs and into the real world, these are questions that need to be addressed sooner rather than later, Tegmark said.

"If you're building a self-driving car for example, it's a lot more important that it works correctly than a Roomba," he said. "That kind of low quality stuff won't cut it when we have stuff that affects our lives. These questions of making artificial intelligence robust and beneficial to society are more important."

