Hassabis, Altman and AGI Labs Unite – AI Extinction Risk Statement [ft. Sutskever, Hinton + Voyager]
The leaders of almost all of the world's top AGI labs have united behind a statement on AI extinction risk, declaring that mitigating it should be a global priority. This video covers not just the statement and its signatories, who range from Geoffrey Hinton and Ilya Sutskever to Sam Harris and Lex Fridman, but also goes deeper into the 8 Examples of AI Risk outlined at the same time by the Center for AI Safety.

Top academics from China have joined in, while Meta demurs, claiming autoregressive LLMs will 'never be given agency'. I briefly cover the Voyager paper, in which GPT-4 is given agency to play Minecraft and does so at state-of-the-art levels.

Statement: https://www.safe.ai/statement-on-ai-risk
Natural Selection Paper: https://arxiv.org/pdf/2303.16200.pdf
Yann LeCun on 20VC w/ Harry Stebbings: https://www.youtube.com/watch?v=OgWaowYiBPM
Voyager Agency Paper: https://arxiv.org/pdf/2305.16291.pdf
Karpathy Tweet: https://twitter.com/karpathy/status/1663392621690249218
Hassabis Benefit Speech: https://www.youtube.com/watch?v=KHFmIknP_Hc
Stanislav Petrov: https://en.wikipedia.org/wiki/Stanislav_Petrov
Bengio Blog: https://yoshuabengio.org/2023/05/07/ai-scientists-safe-and-useful-ai/

https://www.patreon.com/AIExplained


