17 - Training for Very High Reliability with Daniel Ziegler

1:00:59

Sometimes, people talk about making AI systems safe by taking examples where they fail and training them to do well on those. But how can we actually do this well, especially when we can't use a computer program to say what a 'failure' is? In this episode, I speak with Daniel Ziegler about his research group's efforts to try doing this with present-day language models, and what they learned.
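For listeners who want the shape of the method up front, here is a minimal, hypothetical sketch of the adversarial-training loop the episode discusses. The names (propose_adversarial_examples, human_judges_failure, finetune_on) are illustrative stand-ins, not Redwood's actual code; in the paper, the "failure" judgment comes from human labellers precisely because no program can check it.

```python
# Hypothetical sketch of one round of adversarial training for reliability.
# This is NOT Redwood's implementation; all function names are invented.

def adversarial_training_round(classifier, propose_adversarial_examples,
                               human_judges_failure, finetune_on):
    """Search for inputs the classifier handles badly, then train on them."""
    # Adversaries (human red-teamers, or another model) propose candidate
    # inputs designed to slip past the classifier.
    candidates = propose_adversarial_examples(classifier)

    # A human labeller, not a program, decides which candidates are real
    # failures -- the crux of the high-stakes setting discussed here.
    failures = [x for x in candidates if human_judges_failure(classifier, x)]

    # Fine-tune the classifier on the discovered failures, then repeat the
    # round until adversaries stop finding failures.
    finetune_on(classifier, failures)
    return len(failures)  # zero new failures is (weak) evidence of robustness
```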

Listeners beware: this episode contains a spoiler for the Animorphs franchise around minute 41 (in the 'Fanfiction' section of the transcript).

Topics we discuss, and timestamps:

- 00:00:40 - Summary of the paper

- 00:02:23 - Alignment as scalable oversight and catastrophe minimization

- 00:08:06 - Novel contributions

- 00:14:20 - Evaluating adversarial robustness

- 00:20:26 - Adversary construction

- 00:35:14 - The task

- 00:38:23 - Fanfiction

- 00:42:15 - Estimators to reduce labelling burden

- 00:45:39 - Future work

- 00:50:12 - About Redwood Research

The transcript: axrp.net/episode/2022/08/21/episode-17-training-for-very-high-reliability-daniel-ziegler.html

Daniel Ziegler on Google Scholar: scholar.google.com/citations?user=YzfbfDgAAAAJ

Research we discuss:

- Daniel's paper, Adversarial Training for High-Stakes Reliability: arxiv.org/abs/2205.01663

- Low-stakes alignment: alignmentforum.org/posts/TPan9sQFuPP6jgEJo/low-stakes-alignment

- Red Teaming Language Models with Language Models: arxiv.org/abs/2202.03286

- Uncertainty Estimation for Language Reward Models: arxiv.org/abs/2203.07472

- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
