
Content provided by WNYC Studios and The New Yorker. WNYC Studios and The New Yorker, or their podcast platform partner, upload and deliver all podcast content, including episodes, graphics, and podcast descriptions. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://fi.player.fm/legal.

We've Been Wrong to Worry About Deepfakes (So Far)

28:47
Episode 384025047 · series 248

Deepfakes, videos generated or manipulated by artificial intelligence, allow people to create content at a level of sophistication once only available to major Hollywood studios. Since the first deepfakes arrived seven years ago, experts have feared that doctored videos would undermine politics, or, worse, delegitimize all visual evidence. In this week’s issue of The New Yorker, Daniel Immerwahr, a professor of history at Northwestern University, explores why little of this has come to pass. As realistic as deepfakes can be, people seem to have good instincts for when they are being deceived. But Immerwahr makes the case that our collective imperviousness to deepfakes also points to a deeper problem: that our politics rely on emotion rather than evidence, and that we don’t need to be convinced of what we already believe.

You can read Daniel Immerwahr’s essay in The New Yorker’s first-ever special issue about artificial intelligence—out now.


932 episodes

