Inside Microsoft’s Secret Ban: The Controversial App They Don’t Want You to Use

Microsoft employees are prohibited from using the DeepSeek app due to concerns about data security and possible exposure to propaganda, according to Brad Smith, Microsoft’s president and vice chairman, who testified at a recent Senate hearing.

Smith told lawmakers that Microsoft’s internal policies explicitly forbid employees from using DeepSeek’s desktop and mobile applications, citing concerns about privacy and content accuracy. He also revealed that the company declined to list DeepSeek’s app in its own app store for the same reasons.

The restriction, Smith explained, stems from two concerns: DeepSeek’s data storage practices in China and the potential for answers from its AI service to be shaped by Chinese propaganda narratives. DeepSeek acknowledges in its privacy policy that it stores user data on servers located in China, which places that data under Chinese domestic law, including regulations that require cooperation with the country’s intelligence authorities.

Compounding Microsoft’s concerns is DeepSeek’s reputation for heavily censoring topics the Chinese government deems sensitive, a practice that raises broader questions about freedom of information for international users.

Interestingly, despite these criticisms, Microsoft previously chose to host DeepSeek’s R1 AI model on its own Azure cloud platform. At the time, Microsoft emphasized that the model had undergone rigorous internal testing, “red teaming,” and safety checks before becoming available to Azure customers. Smith also mentioned during his Senate appearance that Microsoft had been able to closely examine DeepSeek’s model and modify it to reduce harmful outputs, though he did not explain how that modification was carried out or what changes were made.

It’s important to note, however, that hosting DeepSeek’s open-source R1 model on Azure differs substantially from offering DeepSeek’s own cloud-connected chatbot app. Because R1 is released openly, developers can download, host, and run the model on their own infrastructure without transmitting user data back to China, mitigating one of the primary security concerns.
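To illustrate that distinction, the sketch below shows one common way a developer might self-host an open-weight model locally with the Hugging Face transformers library, so that prompts and responses never leave their own hardware. The model identifier and generation settings are assumptions chosen for the example, not anything DeepSeek or Microsoft prescribes.

```python
# Minimal sketch (an assumption-laden example, not Microsoft's or DeepSeek's
# setup) of self-hosting an open-weight R1 distillation with Hugging Face
# transformers. Everything runs on local hardware, so prompts and outputs are
# never sent to a remote service.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID for a small distilled R1 variant; any locally stored
# open-weight checkpoint would work the same way.
MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Briefly explain why self-hosting a model keeps user data on-premises."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation happens entirely in-process; no user data leaves this machine.
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Self-hosting of this kind addresses only the data-residency concern; as noted below, it does not touch the risks tied to the model’s outputs themselves.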

Nevertheless, Microsoft remains cautious: eliminating the data-transfer issue does not rule out other problems, such as the dissemination of propaganda through the model’s answers or the generation of insecure code.

There is also a competitive dimension. DeepSeek’s standalone app competes directly with Microsoft’s own Copilot AI assistant, but Smith argued that competition alone does not dictate Microsoft’s app store policies. Other rival chat apps, such as Perplexity, are currently available in the Windows app store, though Google services such as Chrome and its AI chatbot Gemini notably are not.

Smith’s comments mark the first time Microsoft has publicly confirmed its internal ban on DeepSeek, aligning with a broader international pushback against AI products seen as too closely tied to Chinese government influence.
