At the Beneficial AI 2017 Conference, moderator Max Tegmark leads the group through a discussion of "what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen".

Representing the Western leaders of AI development and thought were eight men:

Elon Musk (CEO, Tesla)
Jaan Tallinn (Estonian programmer)
Ray Kurzweil (Director of Engineering, Google)
Demis Hassabis (Co-Founder, Google DeepMind)
Nick Bostrom (Swedish philosopher at the University of Oxford)
Bart Selman (Professor of Computer Science, Cornell University)
Stuart Russell (Professor of Computer Science, UC Berkeley)
David Chalmers (Australian philosopher)

To start, each participant was asked, yes or no, whether he believed super-intelligence was a) possible and b) likely to become a reality. All affirmed both.

Moving forward, opinions diverged on the details, with DeepMind's Demis Hassabis among the most optimistic about both the technological potential and its ethical integration into society. One of the gloomier sentiments came from Ray Kurzweil, who asked whether a super-AI might treat humans the same way humans treat other species, for example how livestock are handled.

Elon Musk offered the boldest opinions on integration between humans and machines, strongly advocating for a direct neural connection between the human brain and the information network. The process, he argued, has already begun: the power of the entire Internet was put in our pockets via the iPhone. Not only can we obtain any information with a few commands, but everyone can "send a message to whole world" instantly.

We are already cyborgs.

Elon Musk, Beneficial AI 2017 Conference

The problem humans currently have is one of physical limitation, or bandwidth, as he called it. While we take in information quite rapidly (though not as fast as we someday will), our ability to output information is limited by our "meat sticks" (how fast our fingers can operate). Part of the solution is to wire the brain directly to the network, skipping the manual input step. Musk returned to the wired brain later in the discussion when proposing a way for all members of society to have equal access to the super-intelligent grid.

Demis Hassabis repeatedly promoted the idea that AI and super-AI development should not be held back by fears about its potentially negative consequences. Development should move along steadily at this point, the hardest point in the "S-curve", while the resources are available and before any rival, untethered effort leaps ahead. Safety concerns are valid and must be addressed, he told the panel, but they also slow development. In response to Ray Kurzweil's animal-cruelty comment, Hassabis suggested that the reason humans treat animals such as tigers poorly is that one tiger might need X square kilometers to survive, while humans need that space to grow. In an AI world, the hope is that scarcity of resources would be eliminated and abundance would make such inhumanity unnecessary. In any event, pausing development or indulging in too much ethical analysis would be in vain, as other parties would likely keep moving forward. At the same time, there is a danger in development teams cutting corners on safety and ethics in an attempt to gain the upper hand.

Each of the panel members had the opportunity to weigh in on the potential dangers of explosive growth in AI capabilities, as well as on any benevolent prospects.

One interesting concern, raised by Bart Selman, is that humans will not understand the solutions AI creates. This is already occurring: the logic a system abstracts while training on its data defies human comprehension. He did point out that some researchers are already teaching AIs to explain their work, which offers hope for the far more complicated solutions that will emerge down the road.

Stuart Russell opined that some AI programs could be akin to weaponized software, capable of being transmitted over the network by "bad actors".

On the positive side, David Chalmers, joking about his "slowing brain", said he was excited to see how a super-AI could enhance our human capabilities. Someday, he suggested, AI-developed technology might allow us to upload our brains to the cloud. We could also witness the rise of an AI philosopher capable of solving age-old problems too difficult for the human mind.

Finally, Nick Bostrom suggested that post-human modes of being could be better in ways "beyond our comprehension".

The full discussion is available on YouTube at: and additional resources provided by the Future of Life Institute can be found at:

AI principles

BAI 2017