Content provided by Center for AI Safety. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Center for AI Safety or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://podcastplayer.com/legal.

AISN #65: Measuring Automation and Superintelligence Moratorium Letter

Duration: 6:29
 

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: A new benchmark measures AI automation; 50,000 people, including top AI scientists, sign an open letter calling for a superintelligence moratorium.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

CAIS and Scale AI release Remote Labor Index

The Center for AI Safety (CAIS) and Scale AI have released the Remote Labor Index (RLI), a benchmark that tests whether AIs can automate a wide array of real computer-based work projects. RLI is intended to inform policymakers, AI researchers, and businesses about the effects of automation as AI continues to advance.

RLI is the first benchmark of its kind. Previous AI benchmarks measure intelligence or performance on isolated, specialized tasks, such as basic web browsing or coding. While these benchmarks capture useful capabilities, they don't show how AIs can affect the economy. RLI is the first to collect computer-based work projects from the real economy, spanning many professions, such as architecture, product design, video game development, and design.

Examples of RLI Projects

Current [...]

---

Outline:

(00:29) CAIS and Scale AI release Remote Labor Index

(02:04) Bipartisan Coalition for Superintelligence Moratorium

(04:18) In Other News

(05:56) Discussion about this post

---

First published:
October 29th, 2025

Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-65-measuring

---

Want more? Check out our ML Safety Newsletter for technical safety research.

Narrated by TYPE III AUDIO.

---

Images from the article:

Examples of RLI Projects
Current AI agents complete at most 2.5% of projects in RLI, but are improving steadily.
Survey statistics showing U.S. adults' views on AI development and regulation.

