According to Paul Christiano, a former alignment researcher at OpenAI, there is a 10–20% chance that artificial intelligence will take over and destroy humanity. He is particularly concerned about what happens once AIs achieve human-level reasoning and creative abilities. Prominent figures such as Bill Gates and Elon Musk have likewise warned that AI poses an existential threat to humanity. Modern AI systems are trained on vast amounts of data, and machine learning has allowed them to make enormous strides in composing well-structured replies to human questions. Scientists argue that we should figure out how to put up guardrails now, rather than later.
Robin Hanson and Eliezer Yudkowsky have debated whether AI could become exponentially more intelligent than humans and capable of recursive self-improvement. Perry Metzger suggested that even if AI systems achieve human-level intelligence, there would still be plenty of time to prevent negative outcomes. Yann LeCun weighed in as well, arguing that an AI takeover of humanity is “utterly impossible.”