A picture of Chinese president Xi Jinping is reflected in the visor of a robot

“AI is a risk for all humanity”: an interview with Nick Bostrom

Nick Bostrom is a Swedish philosopher and professor who heads the Future of Humanity Institute at the University of Oxford. Bostrom’s field of study is transhumanism, and it embraces areas such as cloning, Artificial Intelligence (AI), superintelligence, the possibility of transferring consciousness to technological substrates, nanotechnologies and theses on simulated reality. As far as AI and its future developments are concerned, the professor and researcher holds a long-term “pessimistic” position: in essence, Bostrom believes that when AI reaches the level of superintelligence, i.e. when it exceeds human intelligence, it could easily pose a danger to all of humanity. This is something that might unfold over a very long, even generational, time span, but that in his view could easily happen. To explore this position and understand what the risks really are, and above all how AI works in view of the possible advent of superintelligence, we interviewed him.

Professor, you are strongly critical of the development of AI, with a very pessimistic long-term perspective on the dangers it could pose for all of humanity. So I ask you: is it possible to teach a machine ethics, empathy and compassion, values that are exclusively human? If so, how?

I’d say rather that I’m ambivalent about AI. I’m very optimistic about the potential upside, if we get things right, though I do also think there are serious risks involved with developing machine superintelligence. Exactly what we need to steer advanced AI systems to get a good outcome is not known yet, and that is an active research field. Many of the smartest people I know are now working in that field.

If human values could be taught to AI, knowing that humans are not only characterized by positive values but can also be petty, sneaky and violent, could there be a risk that the machine, autonomously, knowing us, would adopt these behaviors and become aggressive towards us?

Yes, though it could also become aggressive towards us without such values.

Can Asimov’s laws of robotics be applied to AI? Why, or why not?

Asimov’s approach was too simplistic.

You believe that controlling the capabilities of AI can only be a temporary measure. Wouldn’t it be enough to insert a different type of algorithm that automatically inhibits any aggressive behavior? Is it possible to define such an algorithm?

Is it aggressive behavior when we pave over an ant colony to build a parking lot?

Knowing that AI works on the basis of the dataset we give it, wouldn’t it be enough to choose the dataset wisely to avoid nasty surprises?

Smart AIs can generalize beyond their training data, and they can build models of the world and use those models to understand novel situations and to make plans.

AI machines are faster than humans at learning concepts and reasoning, but they are still not capable of performing the automatic actions humans carry out on instinct. Will this gap ever be filled?

I think currently AIs are slower than humans at learning concepts. For example, GPT-4 was trained on approximately all of the internet, but individual humans achieve similar concept understanding from vastly less data.