HOW DO WE STOP THE ROBOT TAKEOVER? OXFORD DONS HAVE A PLAN


As AI gets smarter, meet the academics on a mission to save humanity from the matrix. In the news: Sunday Times feature on the Institute for Ethics in AI

The Institute for Ethics in AI is one of two research institutes that will be based in the Schwarzman Centre when it opens in the 2025-26 academic year. The Institute was established in 2019 as part of Mr Stephen A. Schwarzman’s gift to Oxford University.

Yet the Institute is already up and running with a team of more than 10 researchers who are making a vital contribution to the public and political debate on how to respond to the rapid rise of artificial intelligence. The Sunday Times recently visited them and published a major feature on 29 January 2023. The full feature can be found here, and an extract is below:

In the past few years there has been a conspicuous attempt by the large AI companies to get ahead of these questions. The likes of Microsoft, Google (which owns DeepMind) and OpenAI have all appointed in-house ethicists. If you ask the latest GPT chatbot to say something racist or homophobic, it will generally refuse. This was not the case with some earlier chatbots. But there’s an enduring sense that ethics are mostly an afterthought for these fast-moving and profit-driven organisations, a compliance hoop they must jump through. [Professor John Tasioulas, Director of the Institute] and the crew of philosophers he has assembled are arguing that ethics should be foundational. So rather than just assessing how to ensure the chatbot isn’t racist, we might wonder whether it’s actually a good idea to try to make sentient chatbots in the first place.

What keeps Tasioulas up at night is not visions of The Matrix or Blade Runner, but the steady erosion of our humanity by AI that is focused on maximising Silicon Valley profits. “The scenario that really worries me is that you live in a dehumanised world where decisions that affect you are taken by automated systems,” he says. “That you don’t play an active role in this decision-making and this is sold to you as a way of getting your preferences fulfilled. That we become demoralised into thinking that human action is futile or superseded by the existence of these structures. That there is this sense of alienation and passivity and people are not able to be autonomous rational agents shaping their own future.”

One interesting aspect of AI ethics is how much it varies from one domain to another. Some offer huge potential benefits. Take cancer diagnosis. Few of us would object to robots becoming exceptionally good at spotting cancers and mapping the impact of their removal, which they should be able to do over the coming years. But using similar technology on the battlefield has very different moral implications.

Image credit: Sunday Times/OpenAI