
The Internet Report

The New Realities of Data Sovereignty

By Mike Hicks & Barry Collins
30 min read

Summary

Digital information is constantly in motion, crossing borders and jurisdictions. Learn about the current dynamics around data sovereignty and the importance of understanding the path your data takes.


This is The Internet Report, where we analyze outages and trends across the Internet through the lens of Cisco ThousandEyes Internet and Cloud Intelligence. I’ll be here every other week, sharing the latest outage numbers and highlighting a few interesting outages. This week, we’re taking a break from our usual programming for a conversation about data sovereignty. As always, you can read more below or tune in to the podcast for firsthand commentary.

Data Sovereignty in Motion: Visibility, Compliance, and Control in 2026

In an increasingly interconnected digital landscape, the concept of data sovereignty has evolved far beyond simple storage requirements. While many organizations focus on "data at rest," the reality of modern networking means that digital information is constantly in motion, crossing international borders and jurisdictions through complex, dynamic paths. As regulations tighten globally, it’s becoming as critical to understand the journey your data takes as it is to know where that data eventually resides.

This episode of The Internet Report explores the emerging frontier of data sovereignty in flight. From unexpected BGP routing decisions to the hidden risks of disaster recovery failover paths, we break down why visibility into the entire end-to-end service delivery chain is now a business imperative. As a follow-on to our 2026 outlook, we examine how the rise of agentic AI and autonomous decision-making adds a new layer of complexity to global compliance.

Illuminating the intersection of routing, security, and governance, this discussion provides a roadmap for organizations looking to safeguard their data in a "harvest now, decrypt later" world. You’ll learn:

  • The critical distinction between data at rest and data in flight, and why encryption alone is no longer a sufficient safeguard.

  • How dynamic routing and peering arrangements can cause sensitive data to traverse unexpected jurisdictions.

  • The impact of agentic AI on data sovereignty as autonomous systems make real-time decisions about data destinations.

  • Best practices for aligning disaster recovery planning with sovereignty requirements to avoid compliance gaps during failover events.

To learn more, listen now and follow along with the full transcript below.

A Conversation on Data Sovereignty

BARRY COLLINS: I'm sure much of our audience is familiar with the concept of data sovereignty, but just to level set, give us a quick overview of what it is.

MIKE HICKS: So think of it like physical goods crossing borders. Different rules apply in different places, and digital information is therefore subject to the laws and regulations of the country where it's actually stored and processed.

The point is, there are already laws in place for paper or physical records; just because information is digital doesn't mean the same rules don't apply to it. Organizations then have to strike a balance between agility and innovation on one side and enhanced control over their digital infrastructure and data on the other. They want to use best-in-class services and AI capabilities so they can access new features, but what they can't do is sacrifice compliance or expose themselves to legal risk by having their data in the wrong places.

Governments themselves can exercise legal control over data within their borders. This includes requirements for where certain types of data must be kept, how it can be accessed, and by whom (law enforcement, regulators, or even parties in civil disputes), and under what conditions they can access it.

Depending on what that data is, there are different sensitivities applied to it. Highly sensitive data typically must remain within the country itself: things like health records, financial transactions, and government citizen data such as tax records and biometric information. Then you have medium-sensitivity data, which may have conditional requirements. Employee personnel information might be able to transfer between countries as long as it has adequate protections around it, and customer contact details may require consent, so people opt in and then you're allowed to transfer that data. At the other end of the scale, you have the lowest-sensitivity data, which can include things like aggregated analytics and anonymized data.

So the same piece of data could be treated in different ways depending on the context: how it's actually passed across, which in turn affects how that data is stored and how, and by whom, it can be accessed. It really isn't one size fits all. It depends on the type of data and what type of access is actually required.
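As a rough illustration, here's a minimal Python sketch of how an organization might encode sensitivity tiers like these as handling rules before any transfer is attempted. The category names, the policies, and the transfer_allowed helper are all hypothetical, not a prescribed scheme.

```python
from dataclasses import dataclass

# Hypothetical sensitivity tiers, loosely mirroring the discussion above.
@dataclass(frozen=True)
class HandlingPolicy:
    may_leave_country: bool    # can the data legally cross the border at all?
    requires_consent: bool     # does transfer depend on user opt-in?
    requires_safeguards: bool  # e.g., contractual/technical protections in transit

POLICIES = {
    "health_record":         HandlingPolicy(False, False, True),
    "financial_transaction": HandlingPolicy(False, False, True),
    "personnel_record":      HandlingPolicy(True,  False, True),
    "customer_contact":      HandlingPolicy(True,  True,  True),
    "aggregated_analytics":  HandlingPolicy(True,  False, False),
}

def transfer_allowed(category: str, consent_given: bool = False) -> bool:
    """Return True if a cross-border transfer is permissible under this toy policy."""
    policy = POLICIES[category]
    if not policy.may_leave_country:
        return False
    if policy.requires_consent and not consent_given:
        return False
    return True

print(transfer_allowed("health_record"))                         # False: must stay in country
print(transfer_allowed("customer_contact", consent_given=True))  # True: consent-based transfer
```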

Now there's also a critical shift we need to talk about here, which is that data sovereignty isn't just about data at rest. All the legislation and all the requirements typically revolve around data at rest. So we then have to talk about: How do we get the data from point A to point B? How do people access it? It is also about what we call “data sovereignty in flight,” and this is an emerging frontier of the data sovereignty area. We need to be able to understand specifically where the data is actually going, and that's really what we want to unpack as we go through it today.

BARRY COLLINS: We've often talked on this podcast about the importance of visibility over your entire end-to-end service delivery chain. Explain why companies’ data may take routes that they don't expect.

MIKE HICKS: So, let's dig into that data in flight. Why does data take those unexpected routes? As we said, most regulations focus on data at rest, where it's actually stored: Is my data in a German data center or a US one? And on encryption at rest, physical security, and access controls. But data is constantly moving between systems, locations, and users. We have highly distributed environments, from edge collection points to central processing, and between different microservices, distributed applications, and dependencies, data needs to be requested and accessed from different geographic locations.

Then you have to remember that the network itself is dynamic. To build in resilience, we're talking about dynamic routing decisions: BGP routing decisions, peering arrangements between the carriers, and real-time conditions such as congestion and route leaks, the types of things that require a route to change dynamically to maintain business continuity. All of this together means your traffic won't always take the same path. There are all these conditions that come into play, and effectively the Internet is going to work out which path you actually take through the network.

Now because of this, the data can traverse multiple jurisdictions between the source and destinations.

So just because you're connecting, say, Sydney to Melbourne, the traffic might go via Singapore, or it could even go via the US, depending on who your ISP is, who they're actually peering with, and the routes. And also things like CDN architectures come into play: Where are we actually going to? Where's the local point of presence you connect to?

Now without actually understanding those paths, you don't know where your data is going to go. And to a degree, that's fine. We're looking at that information from the edge: you press return on your keyboard and expect a response within a certain time, and the network is effectively taking care of that. But because of this, you need to be able to understand exactly where your data is going.
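As a rough illustration of that visibility gap, the sketch below (Python) runs a plain traceroute and maps each responding hop to a country using MaxMind's GeoLite2 database via the geoip2 library. The database path and target host are placeholder assumptions, and a one-off traceroute only captures a single moment in time; continuous, agent-based path measurement is what you'd actually want in practice.

```python
import re
import subprocess

import geoip2.database  # pip install geoip2; requires a GeoLite2-Country.mmdb file
import geoip2.errors

GEOIP_DB = "GeoLite2-Country.mmdb"  # assumed local path to the MaxMind database
TARGET = "example.com"              # placeholder destination

def hop_countries(target: str) -> list[tuple[str, str]]:
    """Traceroute to `target` and return (hop_ip, country_code) pairs for responding hops."""
    out = subprocess.run(
        ["traceroute", "-n", target],
        capture_output=True, text=True, timeout=120,
    ).stdout
    # Pull the first IPv4 address on each numbered hop line; silent hops ("* * *") are skipped.
    hops = re.findall(r"^\s*\d+\s+(\d{1,3}(?:\.\d{1,3}){3})", out, flags=re.MULTILINE)
    reader = geoip2.database.Reader(GEOIP_DB)
    results = []
    try:
        for ip in hops:
            try:
                country = reader.country(ip).country.iso_code or "??"
            except geoip2.errors.AddressNotFoundError:
                country = "??"  # private or unlisted address space
            results.append((ip, country))
    finally:
        reader.close()
    return results

if __name__ == "__main__":
    for ip, cc in hop_countries(TARGET):
        print(f"{ip:15s} {cc}")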

The other part of this is an emerging concern: “Harvest Now, Decrypt Later.” This is about adversaries capturing encrypted traffic today and storing it so they can decrypt it later. So if your data crosses borders you didn't intend it to, there's a long-term risk that the data can be captured now and decrypted later on.

BARRY COLLINS: To put this into context, tell us about the real-life example you discovered where a country's census data was leaving that country.

MIKE HICKS: So, census data is probably about as sensitive as it gets. This is personal information on an entire population. The expectation, therefore, is that only domestic infrastructure is used. In this particular example, that appeared to be the case: somebody was sending the information from within the country, and all the data was being stored within that same country.

But when I actually looked at what was happening with this network connection, the data at rest was indeed staying in country. When I looked at the path, though, the traffic was leaving the country, being backhauled through the provider's network in a different country, and then coming back into the country before it reached the data at rest.

In this particular case, there wasn't actually anything nefarious going on. It was just the way the provider's network was structured to take the optimal path. But the reality was, this data was leaving the country to get back to a server located within the same country.

BARRY COLLINS: What are the different factors you must consider when choosing the path you want your data to take?

MIKE HICKS: So, let's be honest about what choosing actually means here. In most cases, you're not going to just draw a line on the map and say, route my traffic here, go through this one there. The Internet's going to use BGP to determine where the path actually goes.

And then you have the peering arrangements within that: what do they look like from a load perspective and from a cost perspective between the providers themselves? So, you don't choose a specific path; what you do is choose a provider, their architecture, and the constraints they have. Beyond that, you don't necessarily have a choice. But that choice of who your first provider is really starts to inform how the peering relationships work downstream: how they're peered and how they make the connections to get to that end destination.

So having said that, what we need to start with is classifying what data we're actually moving. What data are we talking about? Are we talking about health records? Financial transactions? Marketing analytics?

Different data is going to have different regulatory requirements. Some legally can't leave the country, full stop. Other data can be transmitted through jurisdictions with adequate protections. So the data classification is going to determine what constraints you need to impose. And then coming back to the provider: this selection is your primary control point. It's the access point where you make your connection; you're not controlling the routing directly, but your ISP or cloud provider determines those downstream peering relationships.

BARRY COLLINS: Is there anything more you can do once you’ve chosen a provider?

MIKE HICKS: Once you've chosen your provider, you still have to consider performance. Again, this comes back to the type of data we're actually moving. Some latency-sensitive applications, such as a trading system or real-time collaboration, are going to limit the geographic options for where we want to keep that data and how we want it to get there.

If you require your data to stay in country but need low latency globally, those constraints may conflict. What you might need to consider in those cases is splitting the data up: looking at how to satisfy both needs so that you're not putting all the data together in one single place.

Even if you can't control the exact paths, what you can evaluate is the likely routes. You can understand, for your peering relationships, where those partners go and which countries they typically route through for those services. Then understand the legal frameworks within those jurisdictions. If we know the likely route for this traffic goes through country A or country B, we should understand the legal regimes in those countries, because that always comes into play.

As we said, different countries have different legal requirements about what you can access and how you can access that data. By understanding which countries you're passing through, you can understand sort of what access people can actually have to that data.
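Building on the path-discovery idea above, a simple compliance check might compare the jurisdictions a path actually traverses against an allow list derived from your legal review. The allow list, the sample path, and the alerting behavior in this sketch are all illustrative assumptions.

```python
# Minimal sketch: flag any observed transit country that isn't on the allow list.
# `observed_path` would come from path measurements such as the traceroute sketch earlier.
ALLOWED_JURISDICTIONS = {"AU", "NZ"}  # hypothetical allow list from legal review

observed_path = [("203.0.113.1", "AU"), ("198.51.100.7", "SG"), ("192.0.2.9", "AU")]

violations = [
    (ip, cc) for ip, cc in observed_path
    if cc not in ALLOWED_JURISDICTIONS and cc != "??"
]

if violations:
    for ip, cc in violations:
        print(f"ALERT: hop {ip} appears to be in {cc}, outside approved jurisdictions")
else:
    print("All observed hops are within approved jurisdictions")
```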

The other thing about this is that we're talking about a dynamic world. You really need to understand that everything can change at any time; paths are constantly moving. So ultimately, when we think about this, we really need to understand where our paths are going at any given time.

And sometimes this might mean changing your architecture. If data sovereignty is critical for a specific part of the application, it may mean you need to move that data closer, or understand exactly what is happening in there, or consider things like a sovereign cloud, where everything is guaranteed to be in jurisdiction by design. It might be a little bit less convenient, but it's going to allow you to comply with the regulations for storing your data.

BARRY COLLINS: It's not only the primary route that must be considered, but the route the data should take if the primary route fails, right? Because that's often when surprises occur.

MIKE HICKS: Yeah, the primary paths are usually well planned and documented. You know where you're going out to, and you've probably done due diligence on that primary connectivity and which path it's going to take. But then you have these situations, to maintain business continuity, where you're going to fail over to another route. These are often triggered by some automatic process; it could be during an outage or due to congestion.

So it's not that these failover paths are specifically not tested, but you may not be looking at or understanding the path at that moment in time. Something has occurred to trigger the rollover to the failover, and that same event may have changed conditions elsewhere. So the failover path is active, but now we're going through a different path, and we don't necessarily understand where it's going.

The backup routes may not actually respect the same sovereignty policies. The backup could be potentially through whatever's available. Therefore, we're going through different carriers. They have different peering relationships. And the geographic diversity is going to mean jurisdictional diversity as well.

So effectively, this could expose the data to different jurisdictions or less secure infrastructure. We might take a longer path because we need to maintain connectivity, and therefore travel through more countries. Or, to maintain speed, we may have dropped to a different circuit with a less secure protocol or lower encryption standards just to make sure the traffic gets through.

When we're thinking about disaster recovery planning, you've got to intersect that with the data sovereignty. The disaster recovery plan needs to account for where the failover is going to send the traffic to. The mitigation strategies: Can I shift to a specific backup ISP and still maintain that sovereignty?

You might need to accept that you're going to have degraded performance in order to maintain compliance. It's coming to this tradeoff between availability, performance, and sovereignty to sort of understand that overall system.
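One way to reason about that tradeoff is to filter candidate failover paths for sovereignty first, and only then optimize for performance among the compliant options. The sketch below is illustrative only; the candidate paths, transit countries, and latency figures are invented.

```python
from dataclasses import dataclass

@dataclass
class CandidatePath:
    name: str
    transit_countries: set[str]
    latency_ms: float

ALLOWED = {"AU"}  # hypothetical: traffic must not transit outside Australia

candidates = [
    CandidatePath("primary via ISP-A", {"AU"},       12.0),
    CandidatePath("backup via ISP-B",  {"AU", "SG"},  9.0),  # faster, but leaves jurisdiction
    CandidatePath("backup via ISP-C",  {"AU"},       28.0),  # slower, but compliant
]

# Sovereignty first, then performance among whatever remains.
compliant = [p for p in candidates if p.transit_countries <= ALLOWED]
if compliant:
    chosen = min(compliant, key=lambda p: p.latency_ms)
    print(f"Fail over to: {chosen.name} ({chosen.latency_ms} ms)")
else:
    print("No compliant failover path available; escalate rather than fail over silently")
```

The point of ordering the checks this way is that degraded performance on a compliant path is usually an acceptable outcome, while a fast path through the wrong jurisdiction may not be.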

BARRY COLLINS: You could arguably once have assumed that if your data was end-to-end encrypted, it didn't really matter which path your data took from a security standpoint. Explain why that's no longer the case.

MIKE HICKS: End-to-end encryption protects the payload content. We have the certificates, the VPNs, application-layer encryption; they're all critical and they're all there. If someone intercepts your traffic, they can't read the payload, but that's not everything that's involved. The metadata is still visible, even with encryption. We can see things like the source and destination IP addresses, the volume of data, the timing patterns of the traffic, and the duration and frequency of the communications. You can put those together to understand specific patterns.
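To make that concrete, here's a rough sketch of the kind of flow record an on-path observer could assemble from packet headers and timing alone, with no payload decryption. The field names and values are illustrative, not drawn from any particular tool.

```python
from dataclasses import dataclass
from datetime import datetime

# Everything below is observable from headers and timing alone;
# the payload itself stays encrypted.
@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_sent: int
    packet_count: int
    first_seen: datetime
    last_seen: datetime

flow = FlowRecord(
    src_ip="203.0.113.10",
    dst_ip="198.51.100.20",
    dst_port=443,            # TLS, yet the endpoint, volume, and timing are still visible
    bytes_sent=1_482_344,
    packet_count=1_031,
    first_seen=datetime(2026, 1, 5, 9, 0, 12),
    last_seen=datetime(2026, 1, 5, 9, 4, 47),
)

duration = (flow.last_seen - flow.first_seen).total_seconds()
print(f"{flow.src_ip} -> {flow.dst_ip}:{flow.dst_port}, "
      f"{flow.bytes_sent} bytes over {duration:.0f}s")
```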

But then we also have the fact that certain jurisdictions have lawful intercept requirements. Even for encrypted traffic, some countries require the capability to intercept. This may involve key escrow or back doors, but the point is that just because the payload is encrypted doesn't mean it's not subject to legal access within those countries.

Then we have the aspect of quantum computing, and this really changes the threat model. Current encryption assumes that factoring large numbers is computationally infeasible. What quantum computing introduces is the prospect of breaking that assumption. So this is where we have “Harvest Now, Decrypt Later”: data captured today could be readable in a few years' time.

So the bottom line to this is that encryption is necessary, but it's not on its own sufficient. You know, it's a foundational control, but it's not the complete solution. You need to combine the encryption with routing controls, visibility, and governance. So, this defense-in-depth encryption plus sovereignty-aware routing is really what you need to consider.

BARRY COLLINS: AI adds another layer of complexity to this discussion. What new questions should we be asking about data sovereignty in the age of AI?

MIKE HICKS: AI adds an entirely new dimension to data sovereignty. The traditional concerns still apply, such as which paths we take, but AI creates fundamentally new data flows. Where is our inference happening? Is it on the device? Is it at the edge? Is it in a centralized cloud? Where's the training data stored? Where's it going to be processed? And potentially, where's that data going to be retained?

But critically, where is the AI directing data to go? Which path are we going to take to the tools we need to call? Where are those tools located? Which countries are they in? We might be crossing different boundaries. Agentic systems are going to make autonomous routing decisions at machine speed. And when I say they're making routing decisions, what I mean is they're selecting the destination: the inference or reasoning engine is making a choice about the endpoint it needs to complete the task it's been asked to do.

From there, it might route to wherever that inference cluster has capacity. Again, this could be in a different country. It could span multiple providers and different jurisdictions, all without explicit user choice. You've asked it to do a task, and now it's gone off to complete that task and will do whatever it needs to do. So, what we need in these situations is visibility into those autonomous choices, to make sure they're aligned with your sovereignty requirements, or your actual requirements.
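One possible guardrail is to check a tool's hosting jurisdiction against the sovereignty policy for the data involved before the agent is allowed to call it. The tool registry, data classes, and policy in this sketch are hypothetical; it's a pattern illustration, not a reference to any specific agent framework.

```python
# Hypothetical tool registry and data policy for an agentic workflow.
TOOL_REGISTRY = {
    "summarize_docs": {"endpoint": "https://tools.example/summarize", "jurisdiction": "AU"},
    "translate_text": {"endpoint": "https://tools.example/translate", "jurisdiction": "US"},
}

DATA_POLICY = {
    "citizen_records": {"allowed_jurisdictions": {"AU"}},
    "marketing_copy":  {"allowed_jurisdictions": {"AU", "US", "EU"}},
}

def select_tool(tool_name: str, data_class: str) -> str:
    """Return the tool endpoint only if its jurisdiction is permitted for this data class."""
    tool = TOOL_REGISTRY[tool_name]
    allowed = DATA_POLICY[data_class]["allowed_jurisdictions"]
    if tool["jurisdiction"] not in allowed:
        raise PermissionError(
            f"{tool_name} is hosted in {tool['jurisdiction']}, "
            f"outside the allowed jurisdictions for {data_class}"
        )
    return tool["endpoint"]

print(select_tool("summarize_docs", "citizen_records"))  # OK: stays in AU
# select_tool("translate_text", "citizen_records")       # raises PermissionError
```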

The other thing to consider is data sharing between AI providers and infrastructure partners. Your AI vendor might use another company's infrastructure. Again, we're going to have other systems involved: sub-processors, third-party model providers. These complex business relationships are going to determine where the data actually goes. You need to understand the full chain, not just your direct vendor.

So we were talking about the ISP, where we said, okay, we can look at and understand the peering relationships we have. All of a sudden, we have these dynamic chains coming into place, with services built on top of services. So we really need to understand everything that's going through there. You need to understand how your agentic systems are functioning and whether they're operating within intended parameters. And those parameters need to include specifically: Where are we getting those tools from? Are we selecting those tools? Where is that data actually stored? And where are our training model and our data going to be stored and retained?

BARRY COLLINS: Finally, just wrap up the factors we need to be considering when thinking about data sovereignty as we head deeper into 2026.

MIKE HICKS: The key takeaway is that data sovereignty isn't just about storage anymore. It's about the entire journey. Data at rest was effectively chapter one. Now data in flight is chapter two, and most organizations probably haven't opened that chapter yet. So from collection through processing to storage and deletion, you need to consider the end-to-end data lifecycle and the governance that goes with it.

The other thing you need to consider is that regulations are continuing to evolve globally. What's compliant today may not be tomorrow, and different countries' jurisdictional laws may change. We also have AI regulation starting to grow: there are a few AI-specific regulations coming in, such as the EU AI Act, and others will follow close behind. So you need to be able to understand these as they arrive and understand what's happening around those systems.

If I was focusing on three pillars, I'd say compliance, security, and control. Compliance to make sure you're meeting the regulatory requirements; security so you're aware of and protected against threats, including potentially future ones, like zero-day threats; and control, ultimately knowing where the data is and having the ability to enforce policies.

Also consider evaluating sovereign cloud options. What is a sovereign cloud? It's cloud infrastructure where everything stays within jurisdiction. So it's not just the service, but also the operations, the administration, and the support staff; they're all in one country. Data never leaves the borders, even for backup or disaster recovery. Think of things like government cloud environments with air-gapped infrastructure, or financial services using domestic-only cloud regions with local operations teams.

Really start to understand the routing policies and align those with the data governance. Don't leave the routing to chance. Specify the requirements for primary and backup paths. Use BGP communities and traffic engineering to influence the routing wherever you can.

My key takeaway for data sovereignty: It isn't just about storage anymore. We said data at rest is necessary, but it's insufficient. The way that applications, distributed systems operate today means that data is constantly in flight. So, you need visibility and control over the entire journey. From the collection, through the processing, the storage and deletion, it's really about that end-to-end lifecycle governance.

BARRY COLLINS: That’s our show. Please give us a follow and leave us a review on your favorite podcast platform. We really appreciate it and not only does this help ensure you're in the know when a new episode’s published but also helps us to shape the show for you. You can follow us on LinkedIn or X @ThousandEyes or send questions or feedback to internetreport@thousandeyes.com. Until next time, goodbye!

