Welcome to the April edition of Proton Business Insights.
This month, we’re looking at digital sovereignty. A key change in Microsoft Copilot data processing has sparked concerns that European businesses don't have enough control over their own data and infrastructure.
Our deep dive explores the rise of shadow AI and how a quick productivity boost can inadvertently expose your company’s internal knowledge.
Plus, as always, we break down the latest security fails and share some stories that caught our attention over the past month.
📣 Introducing Proton Workspace: It's easier than ever to break away from Big Tech platforms. In the past month we launched Proton Meet, appointment scheduling in Proton Calendar, and a new plan that combines our entire suite into a single subscription: Proton Workspace. Businesses can protect all their data inside a single encrypted platform. See the details.
Digital sovereignty
Microsoft’s flex routing and the illusion of control
Starting today, a new policy from Microsoft goes into effect for European business customers using Copilot. Instead of being processed on EU servers as promised, AI queries can sometimes be sent to the US, Canada, or Australia. Microsoft is turning this “flex routing” feature on by default.
The promise of an “EU data boundary” turns out to be looser than customers might have expected when they signed up. This sudden policy change highlights something very important: Digital sovereignty is fake unless you hold the keys to your data.
If you run a business in Europe, here are three things to ask yourself about your tech stack:
Is my IT infrastructure a cost center or an investment? If it’s a cost center, maybe you go with the most convenient choice. If it’s an investment, you might prioritize security, sovereignty, and values.
Is it real digital sovereignty or “digital sovereignty washing”? Big Tech knows the European market wants control, so that's what they say in their sales pitch. But the product falls short: US-owned infrastructure will never give EU companies real sovereignty.
Are there viable European alternatives? Increasingly the answer is yes. EU governments are reducing their dependence on US tech by switching to European alternatives.
Everything you need to know about age verification: Several countries and US states are considering age verification regulations for certain categories of websites. The topic is important for consumers and businesses, but few people actually understand the various concepts and methods. We break it down here.
What small businesses still get wrong about password managers: In a recent study, over half of SMBs told us they use password managers, yet unsafe credential sharing persists at surprisingly high rates. We suspect people are using individual password managers in their browsers. But that’s not going to give you admin control over your business. See the details.
You can now add Groups in Proton Pass for Business: Instead of managing access one person at a time, you can create groups and set permissions at the group level. Access controls are now easier to apply consistently as your team grows.
Lessons learned
Security fails of the month
What recent breaches reveal about the consequences of everyday security decisions:
European Commission’s cloud leak: The EU’s executive arm confirmed that attackers breached its cloud infrastructure. The group ShinyHunters claimed the hack and said they exfiltrated over 350 GB of data, including internal mail servers, confidential contracts, and private employee information.
The lesson: Your public cloud apps are often the weakest link in your security. If they aren't properly isolated, a single breach can expose your internal emails, contracts, and sensitive data.
A very human hospital breach: A staff member at Seoul National University Hospital accidentally sent an internal message to the wrong email address, exposing the records of 16,000 patients. The leak included sensitive data on mothers and newborns, including medical test results and patient IDs.
The lesson: Human error is inevitable. When sensitive data is exposed, the scale of the damage depends on how easily that data can be accessed and read.
Lloyds Bank’s million-pound glitch: A technical vulnerability briefly allowed Lloyds Bank customers to see the transaction details of other users. One customer saw over £1 million in unrecognized payments, while others could view sensitive National Insurance numbers and car registrations.
The lesson: Data exposure doesn't always require a hacker. Sometimes, a flaw in your own code can be more damaging than an outside attack.
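The Lloyds incident reads like a classic broken access control bug, where an endpoint trusts a client-supplied account ID without checking who owns it. Here’s a minimal hypothetical sketch of that failure mode and its fix — every name here is illustrative, not taken from the actual incident:

```python
# Hypothetical sketch of a broken-access-control (IDOR) bug and its fix.
# All account names and data are made up for illustration.

TRANSACTIONS = {
    "acct-1": [{"amount": 42.50, "payee": "Grocer"}],
    "acct-2": [{"amount": 1_000_000.00, "payee": "Unknown"}],
}
OWNERS = {"acct-1": "alice", "acct-2": "bob"}


def get_transactions_vulnerable(logged_in_user, account_id):
    # BUG: returns data for whatever account_id the client supplies,
    # without checking that the caller actually owns that account.
    return TRANSACTIONS[account_id]


def get_transactions_fixed(logged_in_user, account_id):
    # FIX: verify ownership server-side before returning anything.
    if OWNERS.get(account_id) != logged_in_user:
        raise PermissionError("not your account")
    return TRANSACTIONS[account_id]
```

The fix is one line of server-side ownership checking; the point is that authorization has to be enforced on every data access, not assumed from the login session.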
Deep dive
When your company’s intelligence becomes a public resource
You know that the people on your team are already using AI. They might be using it to summarize notes or draft an email. But that casual use could be doing more damage than you think.
This phenomenon is called shadow AI. Each time your company’s information is pasted into a large language model (LLM), you lose visibility over how it’s processed, stored, and used.
Why shadow AI is a concern
Many AI assistants train on the data you feed them. Your information is woven into the model’s logic and becomes part of its collective “intelligence.” And because AI is optimized to provide the most relevant answer possible, it’s essentially programmed to share what it knows whenever someone happens to trigger the right prompt.
If a developer pastes proprietary code in for debugging, or a strategist summarizes a new roadmap, that information could resurface as advice for anyone who asks a similar question months or years down the line. Unlike a file on a server, you can’t delete it. It’s out there, and your company’s hard-won proprietary knowledge is a permanent, undeletable part of a product owned by Big Tech.
What can you do?
Banning AI tools is difficult to enforce and possibly even counterproductive. A more effective approach is to ensure usage happens within systems you control, rather than pushing it into external platforms you can’t see or manage. There are privacy-first AI tools that enable control and compliance. But you need to know what to look for.
Here are the hallmarks of AI platforms that are safe for businesses:
No data training: If the AI trains models with your data, it’s a no.
Open source models and apps: Most Big Tech providers use proprietary models and code, so you don’t know how they were trained or what the apps do with your data.
Strong encryption and no logs: Big Tech companies tend to collect your prompts and other data, where they can be leaked in a data breach or shared with third parties. Look for evidence that the platform doesn’t do this.
In brief
Cybersecurity stories that caught our eye this month
The danger of security that only looks good on paper: This Deep Delver article argues that many automated tools create a “veneer of security” that satisfies auditors but fails to actually protect data. It's a critical look at why checking a box isn't the same as reducing risk.
Are your Zoom meetings now AI podcasts?: 404 Media reveals a company called WebinarTV has been secretly joining private Zoom calls, recording them, and turning them into AI-generated podcasts — for profit. With over 200,000 stolen meetings already hosted, it’s a timely reminder to use unique links and check who can access your meetings.
When a single slip becomes a $10 billion mistake: Watch how Anthropic accidentally leaked the full source code for Claude Code via a public registry. It’s a sobering reminder that even the world’s leading AI labs can be tripped up by a single missing line in a configuration file.
That’s all from us this month! It was a big one, but we’re not done yet. We’ll see you back here in May with more business security news and analysis. In the meantime, you can always get in touch with our team if there’s any way we can help you.
Stay secure,
The Proton Team
Connect with the Proton community
Proton AG, Route de la Galaise 32, Plan-les-Ouates, Geneva, 1228, Switzerland