Content provided by Spencer Greenberg. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided by Spencer Greenberg or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://fi.player.fm/legal.

Should we pause AI development until we're sure we can do it safely? (with Joep Meindertsma)

1:01:28
Manage episode 414626866 series 2807068
Read the full transcript here.

Should we pause AI development? What might it mean for an AI system to be "provably" safe? Are our current AI systems provably unsafe? What makes AI especially dangerous relative to other modern technologies? Or are the risks from AI overblown? What are the arguments in favor of not pausing — or perhaps even accelerating — AI progress? What is the public perception of AI risks? What steps have governments taken to mitigate AI risks? If thoughtful, prudent, cautious actors pause their AI development, won't bad actors still keep going? To what extent are people emotionally invested in this topic? What should we think of AI researchers who agree that AI poses very great risks and yet continue to work on building and improving AI technologies? Should we attempt to centralize AI development?

Joep Meindertsma is a database engineer and tech entrepreneur from the Netherlands. He co-founded the open-source e-democracy platform Argu, which aimed to get people involved in decision-making. Currently, he is the CEO of Ontola.io, a software development firm from the Netherlands that aims to give people more control over their data; and he is also working on a specification and implementation for modeling and exchanging data called Atomic Data. In 2023, after spending several years reading about AI safety and deciding to dedicate most of his time to preventing AI catastrophe, he founded PauseAI and began actively lobbying for slowing down AI development. He's now trying to grow PauseAI and get more people to take action. Learn more about him on his GitHub page.

Staff

Music

Affiliates

370 episodes

