‘Stranger Things’ creators may be leaving Netflix plus AI stuffed animals and Anthropic says some Claude models can now end ‘harmful or abusive’ conversations
Earlier this week, Variety and other Hollywood publications reported that Matt and Ross Duffer, the brothers who created “Stranger Things” (and wrote and directed many episodes), were in talks to sign an exclusive deal with Paramount (now under the ownership of David Ellison’s Skydance). Then on Friday evening, Puck’s Matthew Belloni posted that the Duffers had in fact “made their choice” and were going to Paramount. Also, do AI chatbots packaged inside cute-looking plushies offer a viable alternative to screen time for kids? That’s how the companies selling these AI-powered kiddie companions are marketing them, but The New York Times’ Amanda Hess has some reservations. And Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the AI model itself.
Learn more about your ad choices. Visit podcastchoices.com/adchoices