LW - Why I funded PIBBSS by Ryan Kidd
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I funded PIBBSS, published by Ryan Kidd on September 15, 2024 on LessWrong.
I just left a comment on PIBBSS' Manifund grant request (which I funded with $25k) that people might find interesting. PIBBSS needs more funding!
Main points in favor of this grant
1. My inside view is that PIBBSS mainly supports "
blue sky" or "
basic" research, some of which has a low chance of paying off, but might be critical in "
worst case" alignment scenarios (e.g., where "
alignment MVPs" don't work, "
sharp left turns" and "
intelligence explosions" are more likely than I expect, or where we have more time before AGI than I expect). In contrast, of the technical research MATS supports, about half is
basic research (e.g., interpretability, evals, agent foundations) and half is
applied research (e.g., oversight + control, value alignment). I think the MATS portfolio is a better holistic strategy for furthering AI safety and reducing AI catastrophic risk.
However, if one takes into account the research conducted at AI labs and supported by MATS, PIBBSS' strategy makes a lot of sense: they are supporting a wide portfolio of blue sky research that is particularly neglected by existing institutions and might be very impactful in a range of possible "worst-case" AGI scenarios. I think this is a valid strategy in the current ecosystem/market and I support PIBBSS!
2. In MATS' recent post, "
Talent Needs of Technical AI Safety Teams", we detail an AI safety talent archetype we name "Connector". Connectors bridge exploratory theory and empirical science, and sometimes instantiate new research paradigms. As we discussed in the post, finding and developing Connectors is hard, often their development time is on the order of years, and there is little demand on the AI safety job market for this role.
However, Connectors can have an outsized impact on shaping the AI safety field, and the few that make it become "household names" in AI safety who usually build organizations, teams, or grant infrastructure around them.
I think that MATS is far from the ideal training ground for Connectors (although some do pass through!): our program is only 10 weeks long (with an optional 4-month extension) rather than the ideal 12-24 months; we select scholars to fit established mentors' preferences rather than on the basis of their original research ideas; and our curriculum and milestones generally focus on building object-level scientific/engineering skills rather than research ideation and "identifying gaps".
It's thus no surprise that most MATS scholars are "Iterator" archetypes. I think there is substantial value in a program like PIBBSS existing, to support the long-term development of "Connectors" and pursue impact in a higher-variance way than MATS.
3. PIBBSS seems to have a decent track record of recruiting experienced academics in non-CS fields and helping them repurpose their advanced scientific skills to develop novel approaches to AI safety. Highlights for me include Adam Shai's "computational mechanics" approach to interpretability and model cognition, Martín Soto's "logical updatelessness" approach to decision theory, and Gabriel Weil's "tort law" approach to making AI labs liable for their potential harms to the long-term future.
4. I don't know Lucas Teixeira (Research Director) very well, but I know and respect Dušan D. Nešić (Operations Director) a lot. I also highly endorsed Nora Ammann's vision (albeit while endorsing a different vision for MATS). I see PIBBSS as a highly competent and EA-aligned organization, and I would be excited to see them grow!
5. I think PIBBSS would benefit from funding from diverse sources, as mainstream AI safety funders have pivoted more towards applied technical research (or more governance-relevant basic research like evals). I think Manifund regrantors are well-positio...