Elon had tried this line of presentation last year too. It isn't working.
Steven Pinker, the Harvard professor and optimist, pushed back. He said, "If Elon Musk was really serious about the AI threat he'd stop building those self-driving cars."
This was Elon's response on Twitter: "Wow, if even Pinker doesn't understand the difference between functional/narrow AI (e.g. car) and general AI, when the latter *literally* has a million times more compute power and an open-ended utility function, humanity is in deep trouble"
The question is: why is Elon having trouble communicating what he sees as the biggest risk? Why is he doubling down on hyperbolic headlines?
Could communication be his problem?
Facts Tell, Stories Stick.
I like this story from the renowned scientist Michio Kaku. He makes a similar point about aliens, one that could just as well be true for artificial intelligence.
“The real danger to a deer in the forest is not the hunter with a gigantic rifle, but the developer.
The guy with blueprints, the guy in the three-piece suit, the guy with the slide rule and calculator.
The guy that is going to pave the forest and perhaps destroy a whole eco-system.”
The recently deceased physicist Stephen Hawking made a great point that mirrors the hunter with the gigantic rifle.
“The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”
Elon, through his fleeting references to an "existential crisis", worries about the developer in the three-piece suit.
How does he make his point? By pushing back on the so-called experts.