DeepSeek's effect on the enterprise is bigger than you think

DEEP DIVE

The Hidden Risks of DeepSeek and Shadow AI in the Enterprise

The rise of “shadow AI” (or “BYO AI”), where employees use unapproved AI tools for work, presents major risks. DeepSeek, a high-performing Chinese-developed AI model, has gained rapid adoption (it’s one of the most downloaded apps worldwide), but it raises unique concerns for enterprises.

In this article, we’ll explore the key risks of DeepSeek and similar unvetted AI tools, compare them to established AI platforms like ChatGPT, Gemini, Copilot, and Claude, and provide strategies for business and marketing leaders to mitigate risks while fostering innovation.

 

A Balanced Perspective on AI Governance

It’s important to note that I am not against DeepSeek or AI innovation in general. My goal is to propose a balanced, thoughtful approach that weighs both risks and rewards while highlighting why AI governance matters.

This is a rapidly evolving space, and I encourage you to do your own research. Many new articles are emerging on this topic, such as “DeepSeek hit with ‘large-scale’ cyber-attack after AI chatbot tops app store” from The Guardian.

 

7 Key Risks of DeepSeek and Shadow AI

 

1. AI Training Risks: Is Your Data Being Used?

AI providers improve their models using user interactions. Unless an AI tool explicitly states otherwise, assume that any data you input may be used for training. For example, OpenAI’s ChatGPT originally trained on user data but later provided opt-outs for business users. DeepSeek has not publicly disclosed its data retention and training policies to the same extent as OpenAI, Google, and others, which raises concerns for enterprise users.

 

2. Data Security and Confidentiality Risks

AI tools process and store user inputs, meaning employees might unknowingly expose sensitive business data. For example, Samsung engineers accidentally uploaded proprietary source code to ChatGPT, leading to concerns about unintended leaks. Many major banks and enterprises have banned generative AI over similar fears.

DeepSeek’s security risks stem not only from where it stores data but also from the lack of clear enterprise safeguards against accidental data sharing. Without strict policies, employees may input confidential information into AI tools, which could expose proprietary data.
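One practical safeguard against accidental sharing is a lightweight pre-submission check that scans text before it leaves the company for any external AI tool. The sketch below is illustrative only, not a real DLP product: the pattern names and formats (a hypothetical API-key shape, U.S. Social Security numbers, an internal “CONFIDENTIAL” marker) are assumptions for demonstration, and a real policy would be far broader.

```python
import re

# Illustrative pre-submission filter: scan text an employee is about to
# paste into an external AI tool for patterns suggesting confidential data.
# The patterns below are assumptions for demonstration, not a real policy.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of sensitive patterns found in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
```

A filter like this can run in a browser extension or proxy and warn the employee before anything is submitted; it complements, rather than replaces, a clear AI usage policy.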

 

3. Security Vulnerabilities and Cyber Risks

All AI platforms can be exploited by prompt injections (tricking the AI into revealing restricted info) or malicious outputs (e.g., insecure code, phishing attempts). However, DeepSeek faces additional security scrutiny because it has already been targeted by large-scale cyberattacks.
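To make prompt injection concrete: untrusted content such as a web page or a pasted document can embed instructions aimed at the model itself. The naive screen below, using phrasings I have assumed for illustration, flags a few common injection patterns; it is a teaching sketch that is trivially bypassed, not a real defense.

```python
import re

# Naive, illustrative check for common prompt-injection phrasing in
# untrusted text. Pattern matching like this is easily bypassed; real
# mitigations live in the model and the surrounding application.
INJECTION_PATTERNS = [
    r"ignore (?:all )?(?:previous|prior) instructions",
    r"disregard (?:the |your )?(?:rules|guidelines|instructions)",
    r"reveal (?:the |your )?(?:system prompt|hidden instructions)",
]

def looks_like_injection(untrusted_text: str) -> bool:
    """Return True if the text matches any known injection phrasing."""
    lowered = untrusted_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```
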

Governments are taking note: Italy and Taiwan have raised security concerns about DeepSeek, while U.S. agencies, including the National Security Council, are reviewing the risks of foreign AI models. Some organizations have already restricted unvetted AI tools over these concerns.

 

4. Data Residency and Global Compliance Risks

Many industries like finance, healthcare, and legal must comply with strict data residency laws (e.g., GDPR in the EU, HIPAA in the U.S.). DeepSeek currently does not provide an EU-based data hosting option, which may raise GDPR compliance concerns for businesses handling regulated data.

Comparison with other AI tools: Unlike ChatGPT, Gemini, and Claude, which offer enterprise users some degree of regional data controls, DeepSeek stores all data in China. Under China’s cybersecurity laws, data stored in the country is subject to government access, posing additional compliance risks for enterprises operating in the EU and U.S.

 

5. Hallucination and Misinformation Risks

All AI models hallucinate, meaning they generate false information that sounds plausible. Some benchmarks report that DeepSeek’s R1 model has a hallucination rate of 14.3%, though rates vary depending on tasks and datasets. Without verification, employees risk spreading misleading data in reports, marketing materials, or customer communications.

 

6. Ethical Bias and Censorship Risks

AI tools inherit biases from their training data, potentially leading to unethical or misleading outputs. DeepSeek adheres to Chinese content regulations, restricting responses on topics like Tiananmen Square and China’s political system. While other AI models also have content restrictions, DeepSeek’s censorship aligns with government policies, raising concerns about bias and completeness.

Business impact: Similar to other AI tools, if employees rely on DeepSeek for work-related activities, they may inadvertently produce biased or incomplete content, harming brand reputation.

 

7. Regulatory and Compliance Risks

Using unapproved AI tools may expose businesses to compliance violations. In regulated industries, using an AI like DeepSeek without governance controls can lead to breach of customer privacy, violations of data-sharing agreements, and regulatory fines and penalties.

Italy, France, and South Korea have expressed concerns about AI data privacy risks, and U.S. regulators (FTC, SEC) are warning businesses about improper AI use. While DeepSeek has not been directly named in major enforcement actions, regulatory scrutiny around AI data handling is increasing.

Additionally, DeepSeek’s terms of service pose legal uncertainties: dispute resolution is required in China, meaning international businesses have little recourse if legal issues arise.

 

Basic Recommendations for Business and Marketing Leaders

  • Establish an AI Usage Policy: Build a cross-functional team to provide guidelines and educate employees on approved AI tools, data privacy risks, and compliance rules.

  • Use Enterprise-Grade AI Tools: Provide vetted enterprise-grade solutions like ChatGPT Enterprise, Google Gemini, or Microsoft Copilot.

  • Protect Proprietary Data from AI Training: Use AI tools that allow opt-outs from training and ensure employees don’t paste confidential data into AI tools.

  • Partner with IT, Legal, and HR: Proactively discuss overall guidance and actions related to new AI tools.

Balancing Innovation and Risk in AI Adoption

The rise of new AI tools like DeepSeek brings both opportunities and risks. While AI can enhance efficiency, creativity, and automation, business and marketing leaders must ensure that AI adoption is secure, ethical, and compliant.

By proactively managing AI risks, companies can innovate responsibly without exposing themselves to data breaches, compliance violations, or reputational harm.