Featuring Ilya Sutskever’s thoughts on alignment, Sam Altman’s blog musings, Max Tegmark’s proposals, Nick Bostrom and his Superintelligence quote, a top Google Bard worker on whether it is possible, Emad Mostaque and his thoughts, as well as DeepMind and Demis Hassabis. The six-month moratorium was called for in an open letter signed by, among others, Emad Mostaque, Elon Musk, Max Tegmark, and Yuval Noah Harari.
Open Letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Altman Interview: https://www.youtube.com/watch?v=L_Guz73e6fw
Altman Blog: https://blog.samaltman.com/machine-intelligence-part-1
Sutskever Interview: https://www.youtube.com/watch?v=Yf1o0TQzry8
Hassabis Interview: https://time.com/6246119/demis-hassabis-deepmind-interview/
Emad Mostaque: https://twitter.com/EMostaque/status/1640989142598205446
X-risk Analysis: https://arxiv.org/pdf/2206.05862.pdf
Richard Ngo: https://twitter.com/RichardMCNgo/status/1640568775018975232
The Alignment Problem: https://arxiv.org/pdf/2209.00626.pdf
Current and Near Term AI: https://arxiv.org/pdf/2209.10604.pdf
Bostrom Talk: https://www.youtube.com/watch?v=MnT1xgZgkpk
Tegmark Interview w/ Lex Fridman: https://www.youtube.com/watch?v=RL4j4KPwNGM
Anthropic AI Safety: https://www.anthropic.com/index/core-views-on-ai-safety#:~:text=Anthropic%E2%80%99s%20role%20will%20be%20to%20provide%20as%20much,effort%20towards%20preventing%20the%20development%20of%20dangerous%20AIs.
NBC News Interview: https://www.nbcnews.com/tech/tech-news/tech-watchdog-raised-alarms-social-media-warning-ai-rcna76167
AI Impacts Survey: https://aiimpacts.org/wp-content/uploads/2023/03/HLMIvalue2016final4.png
Dustin Tran: https://twitter.com/dustinvtran
Nadella: https://www.fastcompany.com/90696770/microsoft-satya-nadella-book-recommendations