In episode 105 of The Gradient Podcast, Daniel Bashir speaks to Eric Jang.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter


Outline:

(00:00) Intro

(01:25) Updates since Eric’s last interview

(06:07) The problem space of humanoid robots

(08:42) Motivations for the book “AI is Good for You”

(12:20) Definitions of AGI

(14:35) AGI timelines

(16:33) Do we have the ingredients for AGI?

(18:58) Rediscovering old ideas in AI and robotics

(22:13) Ingredients for AGI

(22:13) Artificial Life

(25:02) Selection at different levels of information—intelligence at different scales

(32:34) AGI as a collective intelligence

(34:53) Human-in-the-loop learning

(37:38) From getting correct answers to doing things correctly

(40:20) Levels of abstraction for modeling decision-making — the neurobiological stack

(44:22) Implementing loneliness and other details for AGI

(47:31) Experience in AI systems

(48:46) Asking for Generalization

(49:25) Linguistic relativity

(52:17) Language vs. complex thought and Fedorenko experiments

(54:23) Efficiency in neural design

(57:20) Generality in the human brain and evolutionary hypotheses

(59:46) Embodiment and real-world robotics

(1:00:10) Moravec’s Paradox and the importance of embodiment

(1:05:33) How embodiment fits into the picture—in verification vs. in learning

(1:10:45) Nonverbal information for training intelligent systems

(1:11:55) AGI and humanity

(1:12:20) The positive future with AGI

(1:14:55) The negative future — technology as a lever

(1:16:22) AI in the military

(1:20:30) How AI might contribute to art

(1:25:41) Eric’s own work and a positive future for AI

(1:29:27) Outro

Links:

Eric’s book

Eric’s Twitter and homepage

Read More in The Gradient