The tech and financial world has been aflutter this week over DeepSeek, a new AI service which seems to have some pretty amazing reasoning/problem solving abilities. So this week’s Dose will be all singing, all dancing, all DeepSeek. OK, just DeepSeek. No one wants to see me sing and dance. So here’s the poop on DeepSeek as I see it:
DeepSeek seems to be pretty, pretty good at some tasks - its reasoning abilities appear to mimic human thought patterns in a lot of ways. It looks like it could be genuinely useful, and its costs are more than 90% lower than its competitors'.
DeepSeek is a Chinese company and all your data (email address, questions you ask the service, and telemetry from your installation and usage of the app) is stored on servers in China. This means that the Chinese government has the legal right to access this data if they so please.
DeepSeek made a pretty big security booboo this week, leaving a database with lots of information open to the Internet. This database included user information and records of the questions being asked of the service. They seem to have fixed the problem and to be fair, many other companies have made similar missteps.
DeepSeek’s AI was trained to be a good Chinese citizen and thus will answer some questions with the Chinese Communist Party line rather than straight facts. For example, it does not talk about Tiananmen Square and parrots Chinese government policy on issues such as Xinjiang and Taiwan. We don’t know what other biases have been programmed into the model, but there are bound to be more.
So, is it dangerous to use DeepSeek? It depends…
If you work for the government or are involved in work which is commercially or politically sensitive, I would definitely not recommend using DeepSeek, at least not the version hosted in China. Many US companies have blocked access to DeepSeek from corporate networks just to be on the safe side, and I have to agree with them on this.
If you want unbiased answers to questions on which China’s Communist Party might have an opinion, again, DeepSeek is probably not the right tool to use.
If you as an individual want to play with AI, get help writing code or do other tasks which don’t involve sensitive data, using DeepSeek is probably not much riskier than most of the other apps you use. If the Chinese government wants your data, they probably already have it as part of past breaches or simply by purchasing it from one of many unregulated data brokers. No offense, but you (and I) are probably not that interesting to the Commies.
The real risks posed by DeepSeek seem to be:
Propaganda/soft power risks - all the students using DeepSeek to do their research and write their papers will be getting the Chinese Communist Party’s view of the topics they are working on. That is what really concerns me here over the long term. If the free/low-priced service continues, it is likely that this soft-power payoff is at least part of the reason for the service’s existence.
Commercial risks - DeepSeek’s claims that they were able to build their models without the levels of tech investment made by the big AI players in the US have shaken financial markets and called a number of large companies’ business plans and investments into question. I, for one, think that a lot more money, government assistance, and (probably stolen) tech went into DeepSeek than its makers are letting on. Could this be part of an evil plot to destabilize the West? Maybe. Or maybe western techies have just gotten lazy, preferring to throw more hardware and energy at AI rather than squeezing efficiency out of their code.
Hidden use risks - Because DeepSeek offers their services using the same API (think language) as OpenAI at 90% less cost, many services (especially smaller ones) may elect to quietly embed DeepSeek inside their products. You might end up using DeepSeek whether you want to or not. We will need to be informed consumers, asking which AI models the products we use are relying on.
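That API compatibility is what makes the silent substitution so easy: code written against OpenAI’s chat completions format can point at a different base URL and model name and leave everything else unchanged. Here’s a minimal sketch of the idea - the URLs and model names are illustrative, not verified endpoints:

```python
import json

# Because the two providers speak the same chat completions format,
# switching between them is just a matter of swapping the base URL
# and model name - the request shape stays identical.
def build_chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat completions request (no network call)."""
    return {
        "url": f"{base_url}/chat/completions",
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# The same code path can target either provider:
openai_req = build_chat_request("https://api.openai.com/v1", "gpt-4o", "Hello")
deepseek_req = build_chat_request("https://api.deepseek.com/v1", "deepseek-chat", "Hello")

print(openai_req["url"])
print(deepseek_req["url"])
```

A service that does this swap behind the scenes looks identical to its users, which is exactly why asking vendors which models they rely on matters.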
Response risks - Let’s be honest here - with the level of chaos, greed, corruption, and stupidity in DC for the next 4 years at least, the chances of any kind of a reasonable legal, regulatory or consolidated technical response led or augmented by the US government to whatever risks emerge from DeepSeek or any technology are somewhere south of zero. Buckle up folks.
But DeepSeek has some promise, too - their models can be run locally, which means the tech can be used to bring AI in-house for many companies, resolving the security concerns over sending potentially sensitive data to AI providers and reducing costs. Efforts are underway to create open-source versions of the DeepSeek R1 model, which would enable use of the tech without the concerns over Chinese Communist Party bias/access.
So, at the end of the day, the risk/reward picture around DeepSeek is mixed, like it is for most things.
But look at the bright side - there’s a 1 in 83 chance we won’t need to worry about AI after 2032.