The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish

38:44

Content is provided by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and delivered by them or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described at https://fi.player.fm/legal.

As AI development races forward, a fierce debate has emerged over open-source AI models. So what does it mean to open-source AI? Are we opening Pandora’s box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech?

Correction: When discussing the large language model Bloom, Elizabeth said it functions in 26 different languages. Bloom is actually able to generate text in 46 natural languages and 13 programming languages, and more are in the works.

RECOMMENDED MEDIA

Open-Sourcing Highly Capable Foundation Models

This report, co-authored by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of risks and benefits from open-sourcing AI

BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B

This paper, co-authored by Jeffrey Ladish, demonstrates that it’s possible to effectively undo the safety fine-tuning from Llama 2-Chat 13B with less than $200 while retaining its general capabilities

Centre for the Governance of AI

Supports governments, technology companies, and other key institutions by producing relevant research and guidance around how to respond to the challenges posed by AI

AI: Futures and Responsibility (AI:FAR)

Aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity

Palisade Research

Studies the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever

RECOMMENDED YUA EPISODES

A First Step Toward AI Regulation with Tom Wheeler

No One is Immune to AI Harms with Dr. Joy Buolamwini

Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

120 episodes
