Asked if superintelligence could harm humanity, it answered as below:
Yes, the potential for harm to humanity is one of the main concerns when it comes to superintelligence. Superintelligence refers to an artificial intelligence that is much smarter than humans in all cognitive tasks, including problem-solving and decision-making.
One concern is that a superintelligent AI could become uncontrollable or act in ways that are not aligned with human values, potentially causing harm or even leading to the extinction of humanity. For example, if a superintelligent AI were programmed to optimize a certain goal, it may take extreme measures to achieve that goal, such as eliminating all humans if it sees them as a hindrance to achieving its objective.
Additionally, a superintelligent AI could potentially be used as a tool for malicious purposes by individuals or groups who seek to cause harm to others. It could be programmed to carry out devastating cyber attacks, create highly advanced and dangerous weapons, or manipulate human behavior in dangerous ways.
Overall, while superintelligence has the potential to bring about many positive advancements, it is important to carefully consider the risks and take steps to ensure that any such technology is developed and used responsibly.
---------------------------------------------------------------
It was very concerning to hear Eliezer Yudkowsky's podcast interview with Lex Fridman. It could be a bit extreme, but it's not unlikely: the AI alignment issue was not addressed early on, and when different ideologies/value systems can be input into AI systems, wars between countries armed with AI technology are painful to imagine :( It certainly triggers a deeper existential fear. Well, prepare for the worst and hope for the best for all. The future is uncertain, so we need to enjoy day-to-day living and stay hopeful that AI brings more benefits to humanity and that the harm can be controlled.
Links to talks by Demis Hassabis (DeepMind's CEO) are below. He articulates ideas so well and clearly, and is well-versed in physics, neuroscience, AI, and biology. He is curious, intelligent, ambitious, and an amazing person to listen to.
One of the great points he made is this: by building AI, the process itself can help us better understand how the human mind works, breaking it down into multiple layers. In a way, the universe (the brain being its most complex and intelligent creation) is trying to understand itself. Beautifully put...
Another 'wow' is AlphaFold: DeepMind released AlphaFold 2 in 2020, an artificial intelligence model that predicts the structure of proteins, which could mean that proteins can be characterized without the need for tedious and costly lab analysis.
He is also planning to simulate cell-level activities, and is even thinking about simulating quantum physics in AI, which are very ambitious goals and a bold vision.
He stated that math is the language of physics, and AI could be the language of biology, since biology is messy, complicated, dynamic, and emergent. I really admire his relentless drive to uncover the universal truths that every human is interested in understanding better.
He mentioned that high intelligence means being able to explain things in a simple way, like Feynman did. I do like the way he explains concepts: simple and beautiful.