
The Evolution of AI: Can Morality be Programmed?

Published on July 2, 2016

FUTURISM

Recent advances in artificial intelligence have made it clear that our computers need to have a moral code. Disagree? Consider this: A car is driving down the road when a child on a bicycle suddenly swerves in front of it. Does the car swerve into an oncoming lane, hitting another car that is already there? Does the car swerve off the road and hit a tree? Does it continue forward and hit the child?

Each solution comes with a problem: It could result in death.

It’s an unfortunate scenario, but humans face such scenarios every day, and if an autonomous car is the one in control, it needs to be able to make this choice. And that means that we need to figure out how to program morality into our computers.

Vincent Conitzer, a Professor of Computer Science at Duke University, recently received a grant from the Future of Life Institute to figure out just how we can make an advanced AI that is able to make moral judgments…and act on them.

Making Morality

At first glance, the goal seems simple enough—make an AI that behaves in a way that is ethically responsible; however, it’s far more complicated than it initially seems, as there is a surprising number of factors that come into play. As Conitzer’s project outlines, “moral judgments are affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems.”

That’s what we’re trying to do now.

In a recent interview with Futurism, Conitzer clarified that, while the public may be concerned about ensuring that rogue AI don’t decide to wipe out humanity, such a thing really isn’t a viable threat at the present time (and it won’t be for a long, long time). As a result, his team isn’t concerned with preventing a global robotic apocalypse by making selfless AI that adore humanity. Rather, on a much more basic level, they are focused on ensuring that our artificial intelligence systems are able to make the hard, moral choices that humans make on a daily basis.
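To picture what “building morally relevant factors into an AI system” might look like in the driving scenario above, here is a deliberately toy sketch: each candidate action is scored on estimated harm plus penalties for factors like rights violations or broken promises, and the system picks the least-bad option. The weights, field names, and scoring rule are all hypothetical illustrations, not Conitzer’s actual approach.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float           # estimated severity of harm, 0 (none) to 1 (worst)
    violates_rights: bool = False  # e.g., a privacy or rights violation
    breaks_promise: bool = False   # conflicts with a past commitment

def moral_score(a: Action) -> float:
    """Lower is better. Weights are arbitrary, purely for illustration."""
    score = a.expected_harm
    if a.violates_rights:
        score += 0.5
    if a.breaks_promise:
        score += 0.3
    return score

# The three options from the swerving-car dilemma, with made-up harm estimates.
options = [
    Action("swerve into oncoming lane", expected_harm=0.9),
    Action("swerve off road into tree", expected_harm=0.7),
    Action("continue toward cyclist", expected_harm=0.95),
]

best = min(options, key=moral_score)
print(best.name)  # prints: swerve off road into tree
```

The point of the sketch is not the numbers, which are invented, but the structure: the hard part of the research problem is deciding which factors belong in `moral_score` and how to weigh them, which is exactly what the quoted project description says has not yet been built into AI systems.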

Read More HERE


