“Success in creating effective A.I.,” said the late Stephen Hawking, “could be the biggest event in the history of our civilization. Or the worst. We just don’t know.” Are we creating the instruments of our own destruction or exciting tools for our future survival? Once we teach a machine to learn on its own—as the programmers behind AlphaGo have done, with wondrous results—where do we draw moral and computational lines? In this program, leading specialists in A.I., neuroscience, and philosophy tackle the very questions that may define the future of humanity.
PARTICIPANTS: Yann LeCun, Susan Schneider, Max Tegmark, Peter Ulric Tse
MODERATOR: Tim Urban
MORE INFO ABOUT THE PROGRAM AND PARTICIPANTS: https://www.worldsciencefestival.com/programs/teach-robots-well-will-self-taught-robots-end-us/.
This program is part of the BIG IDEAS SERIES, made possible with support from the JOHN TEMPLETON FOUNDATION.
- SUBSCRIBE to our YouTube Channel and "ring the bell" for all the latest videos from WSF
- VISIT our Website: http://www.worldsciencefestival.com
- LIKE us on Facebook: https://www.facebook.com/worldscience...
- FOLLOW us on Twitter: https://twitter.com/WorldSciFest
- Opening film on the history and future of artificial intelligence. 00:06
- Participant intros. 06:05
- What is machine learning? 07:34
- What are neural networks and how do they learn? 09:30
- Teaching computers to create internal models of the world. 12:00
- What do the next 10 years in AI look like? 13:50
- Artificial narrow intelligence and mental models. 14:35
- How is AI changing the world of art and creativity? 16:01
- Can computers be creative? 19:35
- An AI writes a movie screenplay; how did it turn out? 23:20
- What is artificial general intelligence? 25:30
- How far away are we from developing artificial general intelligence equivalent to human intelligence? 27:00
- Will advanced AI turn into Terminators and take over the world? 28:30
- What's so special about human intelligence? 31:10
- What is human consciousness and will machines ever experience consciousness? 31:11
- Separating intelligence from consciousness. 41:34
- Defining morality in AI agents. 44:34
- Will machines ever have emotions? 46:45
- Should we be looking at other forms of non-human intelligence to model in our machines? 50:05
- How do you align the drives of AI with human values? 52:25
- Will artificial general superintelligence be good or bad for humankind? 53:10
- Creating a new ethics of AI. 56:15
- When will we ever have super-AGI? 58:40
- Produced by Christy Wegener
- Associate Produced by Laura Dattaro
- Opening film written and produced by Christy Wegener; edited by Gil Seltzer
- Music provided by APM
- Additional images and footage provided by: Getty Images, Shutterstock, Videoblocks
This program was recorded live at the 2018 World Science Festival and has been edited and condensed for YouTube.