How NOT to destroy humanity with AI | Stuart Russell
Professor Stuart J. Russell warns that AI could pose an existential threat to humanity unless we can ensure that these systems remain aligned with human values and goals.

Subscribe to Freethink on YouTube ► https://freeth.ink/youtube-subscribe
Explore the Creativity & AI Special Issue ► https://www.freethink.com/special-issues/creativity-a-i/?utm_source=youtube&utm_medium=video&utm_campaign=youtube_description

Dr. Stuart J. Russell is a Professor of Computer Science at UC Berkeley and has been studying the development of artificial intelligence for decades.

While he doesn’t think this latest crop of generative AI tools necessarily presents a significant threat to humanity, he does think it has helped to open the public’s eyes to the potential risks of more intelligent AI that could be coming in the future.

“They’re giving people now, in a very real sense, what would it be like if we had artificial general intelligence on tap available 24/7 to solve any problem that we might have. And they’re also seeing in a very visceral way that could present real risks,” Russell explained in a recent interview with Freethink.

As Russell argues in his book Human Compatible: Artificial Intelligence and the Problem of Control, we need to be appropriately concerned about the future threat of human-level, artificial general intelligence (AGI) which could pose an existential threat to humanity unless we can ensure that these systems remain aligned with human values and goals.

He contends that the standard approach to designing AI systems — in which machines are programmed to maximize some objective function — is fundamentally flawed, because these machines don’t understand the world around them in any comprehensive way. In his view, a machine that can’t sufficiently anticipate the consequences of its own actions could cause unintended and catastrophic failures. Instead, he proposes a new approach to AI design in which machines are explicitly programmed to defer to humans in matters of value and to operate within a framework of uncertain and incomplete knowledge. By ensuring that AI systems are “human-compatible” in this way, Russell argues, we can harness the enormous potential of AI while minimizing the risk of catastrophic outcomes.

Watch on Freethink.com ► https://www.freethink.com/robots-ai/how-to-stop-runaway-ai/?utm_source=youtube&utm_medium=video&utm_campaign=youtube_description

◠◠◠◠◠◠◠◠◠◠◠◠◠◠◠◠◠◠
Dr. Stuart J. Russell is a Professor of Computer Science at the University of California at Berkeley.
◡◡◡◡◡◡◡◡◡◡◡◡◡◡◡◡◡◡◡

Watch our original series:
► Hard Reset: https://youtube.com/playlist?list=PLXthoedLVIdLvnNgiCshQvqKdS7T_qeGY
► Just Might Work: https://youtube.com/playlist?list=PLXthoedLVIdIS7K-6oNkrya-v-k-X4zYI
► Challengers: https://youtube.com/playlist?list=PLXthoedLVIdKeeuwpDPSyHSC54obntRxB

◠◠◠◠◠◠◠◠◠◠◠◠◠◠◠◠◠◠◠
About Freethink
No politics, no gossip, no cynics. At Freethink, we believe the daily news should inspire people to build a better world. While most media is fueled by toxic politics and negativity, we focus on solutions: the smartest people, the biggest ideas, and the most groundbreaking technology shaping our future.
◡◡◡◡◡◡◡◡◡◡◡◡◡◡◡◡◡◡◡

Enjoy Freethink on your favorite platforms:
► Daily editorial features: https://www.freethink.com/?utm_source=youtube&utm_medium=video&utm_campaign=youtube_description
► Solutions-based stories, straight to your inbox: https://www.freethink.com/subscribe/?utm_source=youtube&utm_medium=video&utm_campaign=youtube_description
► Facebook: https://www.facebook.com/freethinkmedia/
► Instagram: https://www.instagram.com/freethink/
► Twitter: https://twitter.com/freethinkmedia/
► Join the Freethink forum: http://www.facebook.com/groups/freethinkforum/


