UK experts raise alarm over rising popularity of China’s DeepSeek

The logo of DeepSeek is displayed alongside its AI assistant app on a mobile phone, in this illustration picture taken January 28, 2025. — Reuters

UK experts have raised concerns over the rapid rise in the use of DeepSeek, urging users to be cautious about the Chinese artificial intelligence platform.

The experts cited concerns about the chatbot spreading misinformation and warned that the Chinese state could exploit information and data handed over by users.

The British government has left it up to individuals to decide whether to use the new AI, but says officials are monitoring anything that might threaten national security and will take action against any threats, The Guardian reported.

Michael Wooldridge, a professor of the foundations of AI at the University of Oxford, said it would not be unreasonable to assume that data shared with the chatbot could be shared with the Chinese state.

“I think it’s fine to download it and ask it about the performance of Liverpool football club or chat about the history of the Roman empire, but would I recommend putting anything sensitive or personal or private on them? Absolutely not […] Because you don’t know where the data goes,” he said.

Dame Wendy Hall, a member of the United Nations’ high-level advisory body on AI, said: “You can’t get away from the fact that if you are a Chinese tech company dealing with information, you are subject to the Chinese government’s rules on what you can and cannot say”.

Ross Burley, co-founder of the Centre for Information Resilience (CIR), said: “We should be alarmed”.

He warned that, if left unchecked, the AI chatbot could “feed disinformation campaigns, erode public trust and entrench authoritarian narratives within our democracies”.

UK technology secretary Peter Kyle told the News Agents podcast on Tuesday that people need to make their own choices about DeepSeek for now because “we haven’t had the time to fully understand it […] this is a Chinese model that […] has censorship built into it”.

“So, it doesn’t have the kind of freedoms you would expect from other models at the moment. But of course, people are going to be curious about this,” he added.

Some users and testers of DeepSeek have found that it refuses to answer questions on sensitive topics.

“The biggest problem with generative AI is misinformation,” Hall said. 

“It depends on the data in a model, the bias in that data and how it is used. You can see that problem with the DeepSeek chatbot,” she added. 
