This is The Internet Report, where we analyze outages and trends across the Internet through the lens of ThousandEyes Internet and Cloud Intelligence. I’ll be here every other week, sharing the latest outage numbers and highlighting a few interesting outages. This week, we’re taking a break from our usual programming for a conversation on the evolution of Internet architecture with Geoff Huston AM, Chief Scientist at the Asia Pacific Network Information Centre (APNIC), the Regional Internet Registry for the Asia Pacific region. As always, you can read more below or tune in to the podcast for firsthand commentary.
The Internet: Past, Present, and Future
From its humble beginnings as a telephone-inspired network to its current state as a complex, name-driven ecosystem, the Internet has transformed dramatically over the past four decades, scaling to meet rapidly growing demand. And the challenge to “scale still more” persists to this day as the networking community evolves infrastructure to support emerging technologies like artificial intelligence (AI).
In this episode of The Internet Report podcast, APNIC’s Chief Scientist Geoff Huston takes us on a journey through the history of the Internet and explores what the future might hold. We’ll discuss:
- The Challenge of Scale: Over the past 40 years, the Internet has faced and overcome numerous scaling challenges, from the limited 32-bit IP address space to the explosive growth of connected devices. This need for constant expansion remains a central theme in its evolution.
- Moore's Law and Networking: Advances in chip technology, driven by Moore’s Law, have been pivotal in enabling the replication of massive amounts of data across global networks, making content delivery more efficient and accessible.
- The Shift to Asymmetry: The move from symmetric to asymmetric networking has allowed for more efficient use of resources, supporting the proliferation of devices wanting to connect to the Internet without a proportional increase in server infrastructure.
- The Rise of CDNs: The rise of content distribution networks (CDNs) has transformed how data is delivered, reducing the distance packets need to travel and alleviating pressure on the network.
- Name-driven Architecture: The Internet has become name-centric instead of number-centric, with the domain name system (DNS) playing a crucial role in maintaining scalability and security.
- AI's Impact on Network Architecture: AI is pushing the limits of current Internet architecture, requiring massive processing power and specialized data centers. The stage is set for future network architecture innovation to meet AI’s demands—and realize its full potential.
Listen now and follow along with the full transcript below.
A Conversation With Geoff Huston: The Evolution of Network Architecture
BARRY COLLINS: Hi, everyone. Welcome back to The Internet Report where we uncover what's working and what's breaking on the Internet—and why. I'm Barry Collins, and I'll be hosting today with the amazing Mike Hicks, Principal Solutions Analyst at Cisco ThousandEyes.
This week, we're taking a break from our usual programming for a special conversation with Geoff Huston, a Member of the Order of Australia.
Geoff is also Chief Scientist at APNIC, the regional Internet address registry for the Asia Pacific region. His research covers infrastructure, IP technologies, and address distribution policies, among other topics. He's going to be talking with us about evolutions in Internet architecture and how it's changing to support efforts to optimize service delivery.
As always, we included chapters in the episode description below so you can skip ahead to the sections that are most interesting to you. And if you haven't already, we'd love you to take a moment to give us a follow over at Spotify, Apple Podcasts, or wherever you like to listen.
So, Geoff, can you tell us a little more about what you do at APNIC and what you're focused on right now?
GEOFF HUSTON: Yeah. I'm the grandiosely titled Chief Scientist at APNIC.
It's kind of an interesting position. You see, this body, APNIC, actually deals out IP addresses in the Asia Pacific region.
And the real question is, what are the rules? Who gets them? What are the conditions?
And so interestingly enough, in the grand experimental form of the Internet, this is actually sort of an industry self-regulatory body. And so the rules are actually made up by the folk who participate in the process, come to the meetings, and help form it. But there's this interesting kind of question.
Are they good rules? Are they effective rules? Do they work? And the issue is, where's the feedback loop? How do we know if we've done the right thing? Who's looking? Who's measuring?
That's me.
That's my job. I look and I measure and I try to understand, you know, the interaction between the way we hand out elements of infrastructure, IP addresses in this case, and the makeup of the Internet. How does it route? What is the connectivity?
How is it glued together? And do the policies and the operations kind of work together? And if there's friction, where is this friction and why? So that's kind of the general remit, but I interpret this very loosely. I got involved in congestion control algorithms, some work on Starlink performance, routing security. And so in some ways, you just chase down interesting stuff and sort of pull the strings a bit and see what comes out.
BARRY: Let's talk about the evolution of Internet architecture. How have naming systems, content delivery, and routing services changed over the years?
GEOFF: Oh, you see, the Internet was not a grand thought out of nothing.
It really was modeled on the old telephone system. A few of your listeners might actually remember what a telephone is. They're increasingly scarce these days. But it was this funny box on the desk, and everyone had a unique number: sort of an international code, an area code, and the rest of it. And every telephone had a number.
And the theory was, although I'm not sure it was really implemented that well, that any telephone could make a connection with any other telephone by simply dialing the number string. And we built the Internet almost the same way. We used digital addresses rather than straight numbers, but oddly enough, they look the same. And in essence, we strapped up circuits between computers.
And by numbering every computer uniquely, if you sent out a packet into this network whose destination address was the number of the machine you wanted to get that packet, the network would go, not a problem, and pass the packet through its internal switching systems, and it would pop out at the computer at the other end.
It really was quite unimaginative in terms of it was the telephone network for computers.
You sit there and go, well, how has it changed? And the answer is it's changed massively.
And part of the issue is the telephone network was really only good for one thing: people talking.
You had to actually be on the phone at the same time in the same sort of space, if you will, to actually make it work. And if you wanted to do anything else with a telephone network, it was kind of, no, we didn't build it for that. We built the entire thing around human voice, the dynamics, the frequency range, etc.
Whereas if you think about computers, it's kind of, well, what's a computer network good for?
Well, what's a computer good for?
And the answer these days is, what isn't it good for? Teleconferencing, television, streaming systems, databases, you name it. We can bend computers to do that. And in the same way, we've bent computer networks to do much the same.
And then you ask, well, how have things changed?
Well, the other thing, which is a bit of an ugly secret, is that we're actually not very good at this. Software is buggy. Things don't work very well, and having every computer exposed to everyone else is perhaps unwise.
I'm sure the digital watch on my wrist is a great digital watch, but I'm really not sure if that needs to be exposed to the entire Internet. And I would rather that it could make connections, but not be subject to other people trying to make connections at it. And so we changed over the years to sort of move away from this symmetric model of a telephone—I can make calls, I can receive calls—into this new model of asymmetry, where there are things called servers whose job it is to deliver, and there are things called clients, including my watch, whose job it is to access these servers, and that's about it.
And that kind of asymmetry actually had a number of hidden benefits. Not only did it not expose computers that really couldn't defend themselves to the rest of the big, bad Internet, and that's a huge plus, it also meant that you could grow the client base as big as you liked without necessarily growing the server base. And, you know, the part of the story of the last forty years, the telephone network at its peak had something like 600 million subscribers.
We went past that years ago.
No one's sure how many things are connected to the Internet. If you said 30 billion, you could be right. If you said 50 billion, I couldn't argue with you. You know, we actually don't know, but we do know it's a really big number.
And so those changes that have come through, they have come through as sort of an evolution of understanding that computers aren't humans. They don't just do a one trick thing of speaking. They do lots of things, and we want to push that down through networks to make them do more with each other. And that kind of desire got reflected in the architecture of the network itself.
MIKE HICKS: It's interesting about that build on a telephone system. So, you know, we had sort of numbers. We're obviously going from one. And if you think about going way back, I'm very old.
So, you know, even the switchboard operators plugging these things in to match the calls. So, magically, it happened end to end. So, essentially, we've architected on top of that because routing protocols work on a hop-by-hop basis. When I start off my journey, I don't know how I'm gonna get to the other end.
I just know my next hop. So we've designed on top of that. So is the only reason for us to stay at that, is it a case of we've done too much now? We can't rip that all up, or is it we can actually build on top and circumvent some of the restrictions that that imposed upon us?
GEOFF: Most of the Internet's history, and, you know, it's a bit weird to say forty years is history, but most of that forty years has actually been in trying to build on top of what we've got. It's remarkably hard to rip up stuff and start again. Once you've got thirty billion things out there, trying to sort of raise the flag and say to everybody, "Hey, stop, trying to start all over again," is logistically, practically, economically infeasible.
So everything is kind of a gradual tweak, an evolution, if you will, into the basic model. So we actually haven't changed routing. We haven't changed the mechanisms of pushing packets through the network. We still have this 1980s destination-based, hop-by-hop forwarding algorithm, where in essence, every single switching element knows the relative location of every possible IP address.
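To make that forwarding model concrete, here's a minimal sketch, with invented prefixes and next-hop names, of the longest-prefix-match lookup that destination-based forwarding effectively performs on every packet:

```python
# A minimal sketch of destination-based, hop-by-hop forwarding.
# The prefixes and next hops below are invented purely for illustration.
import ipaddress

# A tiny forwarding table: (destination prefix, next hop)
forwarding_table = [
    (ipaddress.ip_network("0.0.0.0/0"), "upstream-isp"),     # default route
    (ipaddress.ip_network("192.0.2.0/24"), "router-a"),
    (ipaddress.ip_network("192.0.2.128/25"), "router-b"),    # more specific prefix
]

def next_hop(destination: str) -> str:
    """Pick the most specific (longest) prefix that contains the destination."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in forwarding_table if addr in net]
    best_net, best_hop = max(matches, key=lambda m: m[0].prefixlen)
    return best_hop

print(next_hop("192.0.2.200"))   # router-b: the longest matching prefix wins
print(next_hop("198.51.100.7"))  # upstream-isp: falls through to the default route
```

Every router along the path repeats this lookup independently, which is why, as Geoff notes, each switching element has to know where every reachable prefix lives.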
And you sort of think to yourself, well, maybe that does mean that every machine needs its own unique address.
But there was an elegant refinement.
Because what we quickly realized was that you never start a conversation towards a client. It's the client that starts the conversation out. So when a client isn't talking to anyone, it doesn't need an address.
So why don't we have this kind of pool of address numbers, of real numbers, at the edge of every network? And each time a client starts a conversation with a server (servers have addresses), as the packet passes by this bucket of addresses, it gets given a real address, and that conversation has an address. When the conversation stops, that address is returned back to the pool. So clients share addresses, and the addresses are effectively translated as the packet leaves the local network, your home, your office, something, and goes out into the larger public space.
It's those kinds of refinements, which I suppose in retrospect are easy enough to describe. But at the time, there was a lot of thinking about will this all work? Will applications understand the fact that in a client, there's no such thing as a permanently visible external address? It's all kind of shifting sands.
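The address borrowing Geoff describes here is the essence of network address translation (NAT). Below is a deliberately simplified sketch of a pool of public addresses being lent out to client conversations and handed back afterward; the addresses are illustrative, and a real NAT also rewrites ports and times out idle mappings:

```python
# A simplified sketch of the address-sharing idea behind NAT.
# Addresses are illustrative; real NATs also rewrite ports and age out mappings.

class AddressPool:
    def __init__(self, public_addresses):
        self.free = list(public_addresses)   # public addresses not currently lent out
        self.active = {}                     # private address -> borrowed public address

    def start_conversation(self, private_addr):
        """Borrow a public address for the duration of a conversation."""
        if private_addr not in self.active:
            self.active[private_addr] = self.free.pop()
        return self.active[private_addr]

    def end_conversation(self, private_addr):
        """Return the borrowed address to the pool when the conversation ends."""
        self.free.append(self.active.pop(private_addr))

pool = AddressPool(["203.0.113.10", "203.0.113.11"])
print(pool.start_conversation("192.168.1.20"))  # first client borrows a public address
print(pool.start_conversation("192.168.1.21"))  # second client borrows the other one
pool.end_conversation("192.168.1.20")           # the address goes back into the pool
```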
We've done this also with the domain name system and the way we use names. Again, this is this constant refinement because the last thing you want to do on the Internet, it never stands a chance, is introduce a technology where everyone has to stop, restart, wipe it clean, and get it over again. We can't make it work like that. So it's all just continuous itsy bitsy little refinements, sometimes big refinements, that try and be backward compatible.
BARRY: You talked about the addressing system and assigning numbers. A few years ago, the big fear was that we were going to run out of those numbers. How do we get around that problem?
GEOFF: Well, that was kind of really interesting. As I said, we have a forty-year-old history. Literally, the Internet jumped out of the lab in about 1987, in the United States in the academic community there, and it very quickly kind of touched a nerve. We were just doing home personal computers. We were just getting into the fact that computers weren't hulking great million dollar mainframes. They were also things that people had in their homes. And very quickly, it sort of became evident that the addressing space, the range of numbers in the protocol we were using, which was 32 bits long, wasn't quite enough.
Now two to the power 32 is four billion. Four billion!
And even then, in 1989, a lot of universities, companies, you know, government agencies had million dollar machines.
And if you did a sort of a count of how many, you'd be lucky to get past a million of them. As I recall, because I programmed one, there was the dear old VAX-11 from Digital Equipment. Wonderful machine. Most people of my vintage cut their teeth on that machine.
But the issue was four billion? Surely that's enough. But by 1989, when we started to think about what's going on with personal computers, you know, the Apple was just scratching at the door and so on. It seemed pretty obvious that that wasn't going to be enough. We couldn't number every machine if there were more than four billion machines out there.
BARRY: So what was the workaround?
GEOFF: The telephone network got over this by adding more numbers. You know, they just slid them in at the end. And the reason why was that each telephone had no idea of its own number or anyone else's. You just kept on dialing numbers until you ran out of patience or something at the other end rang. All of the intelligence was inside the network.
But the weird thing about the Internet, and part of the reason why it worked, is that it swapped the intelligence from in the network to the edge. That made networking really cheap because these were just datagram switches. They didn't really have a huge amount of intelligence, but we crammed all that into every single end system. So if you wanted to change the addresses and add a few more bits to the address length, you really needed to reprogram, stop, and start every single attached device.
Now you kind of go, oh, 1989, there was only a few hundred thousand or so. Even then, we're sitting there going, can't do that. Just really can't do that. So when it was a case of we're running out of addresses and we can see that this isn't going to work for much longer, it was a big thing, and we really did spend three years, I think, in concentrated meetings all over the planet going, what do we do next? How do we reconcile this?
BARRY: And so you effectively avoided the shortage of IP addresses by moving to names or other numbers?
GEOFF: Well, that's what happened, but the path to it was by no means, by no means so direct. The first thought was the Internet is just this American experiment. Look, we've run out of addresses. It's all over. Let's shut down the shop and say to the telephone companies who are doing their own digital networking, "Look, you won, you won. Just remember what we learned so that when you build the real one, the adults come in, apply the lessons, please."
But by 1990, '91, the Internet had gathered its own kind of hubris, its own kind of, "we think we're right."
And part of the issue was this was not a network-centric architecture. It was an edge-centric architecture. And this made networks so much simpler, so much cheaper.
Even universities could afford them. And that kind of revolution in dropping the price was not something that anyone wanted to walk away from. And so when we looked at, well, should we give up or should we press on? The answer really was, we think we're onto something that is actually quite unique. It's moved away from network centrism and moved towards edge-based computing, and maybe we can push this a little and see how we do it. There was a bit of a beauty parade, three years of various proposals to put more computers into the Internet, and some of them were pretty wacky, some of them less so. But, you know, in the end, after consulting a whole bunch of folks, including, you know, the power utility industry, the sort of emerging dreams of what we know today as the Internet of Things, we came up with, well, if what we've done is so good and the real problem is just we haven't got enough addresses to number everything, why don't we just do the same thing with vastly bigger numbers?
And so the first kind of proposal that came out of the blocks was to expand that 32-bit address space, possible four billion machines or so, to 128 bits. Now that's a big number, two to the power 128. You can start collecting grains of sand. You will easily pass a planet the size of the Earth and you still need more grains of sand to get to the two to the 128. You know, you could build quite a few planets on the way. These are massive numbers. And we kind of thought, "This time for sure we've got this one sussed. We're never, ever going to run out."
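To put the two address spaces Geoff mentions side by side, here's a quick back-of-the-envelope calculation; the only outside figure assumed is the Earth's surface area of roughly 5.1 x 10^14 square meters:

```python
# Rough comparison of the IPv4 and IPv6 address spaces.
ipv4_space = 2 ** 32    # about 4.3 billion addresses
ipv6_space = 2 ** 128   # about 3.4 x 10^38 addresses

earth_surface_m2 = 5.1e14  # approximate surface area of the Earth in square meters

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: {ipv6_space:.2e}")
print(f"IPv6 addresses per square meter of Earth: {ipv6_space / earth_surface_m2:.2e}")
```

That works out to roughly 6.7 x 10^23 IPv6 addresses for every square meter of the planet, which is the scale jump behind the "never, ever going to run out" confidence.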
BARRY: So to be clear, what you're talking about here is IPv6 addressing. Why wasn't that the instant solution you perhaps thought it was?
GEOFF: When you want to persuade someone to reprogram, to stop and start, to embrace a different technology, typically it's got to be better or faster or cheaper. And if the only thing you're offering is future risk, then guess what the rest of us say to that? Well, let my future self worry about that future risk because it's not in my face this week. I'm going to do nothing.
And so oddly enough, we blissfully kind of did IPv6, and the first specs were out in 1994 and then did nothing. Just absolutely nothing. A bit of polishing, a bit of refining, but generally didn't worry about it because it didn't do anything differently. Anyone who tried to use it didn't find that they were cheaper than their competitors.
And indeed, because it wasn't backward compatible, if you were the first kid on the block to have v6, guess who you could talk to? Nobody.
MIKE: The other v6 people.
GEOFF: Yeah. All the other v6 people on your block. No one. And so networks become a kind of tyranny of the masses. It's everyone else that determines what you do. You have very little say in it. And so trying to kickstart v6 really didn't get very far.
Interestingly, asymmetric networking got a long way, and the reason was the dial-up modems.
You see, when you had a dial-up modem for your house, you got one address for your house. As soon as you bought the second computer for your house, what were we going to do? Buy another phone line, another dial-up modem? That's a bit crazy. We didn't do that. We actually started playing around with network address translation by making that modem, the one that ran the external connection, a little smart with addresses and sharing that address across the machines in your house.
And indeed, all of the 90s and the early part of the 2000s were spent building this technology of address sharing, and no one was doing v6, nobody.
And then the promise of truly big happened. It was the Apple iPhone.
Because as soon as that happened, instead of talking hundreds of millions, the real word was billions. They were making billions of devices. They were selling them. You know, everyone was under intense pressure to grow their networks, just grow and grow and grow. And the problem is that iPhones didn't connect to your house, they connected back to the carrier.
And a number of carriers, and I remember there were a few in my part of the world in Asia Pacific, sort of came to the local address registry, APNIC, us, and said, “We'd like forty million addresses, please.”
"Forty million?! You're joking!"
"Well, they've sold a lot of these devices. What are we going to do?"
And a few folk really did try to do massive address allocations. But very quickly, it became apparent that the way around this was actually using network address translation. That you give all these phones private addresses when they want to talk out there. They borrow an address from the pool, hand it back when they're done, and that was kind of working fine.
But that growth pressure just kept on going on and on and on.
And interestingly, by 2011, that had only been around for about seven years. That great pool of four billion addresses, we could see the other end. It was running down. And it's kind of, "Hey, industry, seriously, we've got to do something."
Now, I don't think there was a single iPhone on the planet that was dual stacked at the time or a single device. And indeed, they had done their business model such that if you actually wanted to support two protocols to a mobile device, the poor old mobile operator had to pay the equipment vendor twice as much in leasing costs because it was actually charged by connection minutes per protocol. Multiple protocols, more money to the vendor.
You know, the operators kind of go, I'm not gonna do this. And so 2011 rolled around, and we started looking at, oh, what are we going to do?
BARRY: What did you do?
GEOFF: The first issue was, well, why don't we try and cope with two things at once? We haven't got enough v4 addresses, that's true. We'd desperately like you to run v6, but you can't get rid of your v4 because until everyone's v6 capable, you still need v4.
So we started address sharing, port sharing. There were 35 different transition mechanisms developed by the Internet Engineering Task Force at the time. Thirty five! You know, if it's a Thursday and you're facing west, you know, use transition mechanism number two. That's the one for you.
And, you know, there were just so many out there, and it was incredibly confusing.
It really wasn't working very well. We were kind of making a hash of it for everybody. And into all this mess came, I think, the factor from left field, the sort of surprise thing that none of us had really thought about much.
But the real issue is, what are people using the network for?
And of course, this is the age where YouTube, where Facebook, where a whole bunch of these services that realistically concentrated on video in the long run, because video is what captivated everyone, started to appear.
If you create the world's most popular video set and run it out of Kansas City and get a few billion users all over the planet trying to draw their content from Kansas City, digitally, Kansas is going to melt. You can't do that. The only way you can scale with that kind of stuff where the content is known in advance is to replicate it.
Well, isn't that expensive?
And the answer is, yes, but not as expensive as it was yesterday.
Because you see, what was driving this industry, what drove mobile phones, what drove all of these innovations, was actually this thing called Moore's Law. Gordon Moore was an executive at the chip maker Intel through the 60s and 70s, and I think the 80s. Integrated circuits appeared in 1957. They are a refinement of the original transistor invented by Bell Research in '47. Integrated circuits sort of placed two, four, eight, sixteen transistors on a single silicon substrate so that instead of just making a single gate, you could make a few million of them and build an entire computer on one chip. And Gordon Moore found out that we managed to continuously refine that process. And about every two years, the number of discrete transistors on a chip was doubling.
That's pretty cool. At the same time, the clock speed was also doubling. Oh my God, twice as many gates going twice as fast. And the cost was halving. Wow! This is kind of so prodigious.
Imagine if the car industry had done that. The car would not even fit on my finger and cost about two cents. You know, the computing industry was pulling this out of thin air and just continuously making difficult problems easy. How do you make a mobile phone? Wait for two years and it'll be easy. Everything with Moore's Law kind of makes things possible.
So when you say, "Can I replicate all of these videos, all these megabytes, petabytes of data in, say, 500 places around the globe?" The answer is yeah. How much will it cost? Well, a lot less than yesterday, and it'll cost even less tomorrow because Moore's Law makes this stuff cheaper.
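To put numbers on that "wait two years and it gets easier" effect, here's the compounding arithmetic Geoff is describing; the starting transistor count (roughly that of an early-1970s microprocessor) and the strict two-year doubling cadence are simplifying assumptions:

```python
# Compound doubling: the heart of the Moore's Law effect Geoff describes.
# The starting figure and the strict two-year cadence are simplifying assumptions.

transistors = 2_300    # roughly the transistor count of an early-1970s microprocessor
years = 50
doublings = years / 2  # one doubling about every two years

final = transistors * 2 ** doublings
print(f"After {years} years of doubling every two years: {final:.3e} transistors")
# About 25 doublings turns a few thousand transistors into tens of billions,
# which is the kind of growth that made replicated content and CDNs affordable.
```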
BARRY: What impact did that have on the evolution of the Internet?
GEOFF: And so not only had we gone into client-server, but there was this new business model, which was replicating the content so that instead of talking about servers, we talked about content distribution networks.
There were some early players like Akamai, but, you know, very quickly the big folk, the Microsofts of this world, Google started doing it very early on, and a number of others, sort of replicating service all over the world. All of a sudden, your packets didn't go around the world. Your packets went to the closest data center.
Initially, there might have been five in Europe. Later on, there's one in every city that has more than a few million people. They're everywhere, and they're replicating all of the content that those folk are actually wanting. So now the packets don't go very far. It's a different world. It's a different kind of network now because all I need to do is contact the closest data center and I'm done.
How do I know what's the closest data center? How do I do that? Ah, the DNS, the naming system. Because if there's a thing called www.geoffsfavoritewebsite.com, then if I load it up into one of these content distribution networks, when you type that in your browser, the DNS will point you, will translate that, to an address of a data center near you. When I do it, I'll get a different address. I'll get a data center near me.
My content, my service, actually doesn't have a unique address anymore. It has a unique name, not a unique address.
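You can watch this name-driven steering from any machine. The sketch below simply asks the local resolver for the addresses behind a hostname; www.geoffsfavoritewebsite.com is Geoff's made-up example, so substitute any real CDN-hosted name to try it. Run it from two different networks and a CDN-hosted name will typically resolve to different, nearby addresses:

```python
# Resolve a name and print the addresses the local resolver hands back.
# For a CDN-hosted name, different networks typically get different, nearby answers.
import socket

hostname = "www.geoffsfavoritewebsite.com"  # placeholder from the conversation; use a real CDN-hosted name

try:
    results = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    addresses = sorted({info[4][0] for info in results})
    for addr in addresses:
        print(f"{hostname} -> {addr}")
except socket.gaierror as err:
    print(f"Could not resolve {hostname}: {err}")
```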
Interestingly, this actually meant we could scale and scale and scale like never before. Some of the big content distribution networks now host literally millions, probably hundreds of millions of discrete sites. They host an enormous amount of different people's content. But they can do it all from a handful of IP addresses. I don't need to number every service. I just need to number the data centers.
And so this took a huge amount of pressure off the data centers. It also took a huge amount of pressure off the endpoints because they were borrowing addresses from each other and translating.
And so for a while there again, the pressure on, well, let's adopt 128-bit addressing in v6, again, kind of dissipated. The pressure to move sort of dropped off. It wasn't faster. It wasn't cheaper. It wasn't better. It just solved a problem that, oddly enough, we were also solving in many other ways at higher levels in the protocol stack. That's a really critical observation because in the war of the Internet versus telephony, the real issue was stripping out cost and function from the network and moving that money across into the edge systems.
We've now done another move, and that's moving it from the network up to the application level. And it's actually a world populated by name. Names are now driving the network.
MIKE: So then you said we've moved from this telephony. We moved from this hop-by-hop. We moved this up the application stack, as it were, to the application itself. We're driven by these names across there. We've also moved the compute resources, or the intelligence, let's say, to the edge. What we've then also introduced, I believe, is that we've effectively moved away from this serial projection, client-network-server, to this mesh of environments where we have protocols and applications. You know, you talked about DNS being that fundamental block in terms of resolving the name, and then we go forward. If that fails, we can't resolve the name. But we're now also relying on all these other disparate protocols and applications that need to work seamlessly together for me as a user to access.
GEOFF: Well, you know, that's certainly true, but that's the whole reason why we have astonishingly powerful computers with huge amounts of memory running an enormous amount of software to make all this happen. It was once said that, I think, the 5ESS telephone switch that AT&T used to run had a whopping great 100 million lines of code.
I look at any decently sized mobile phone and you're up to a few billion lines or more. This stuff is highly complex, but it's also well exercised. And part of the reason, I think, that, you know, the Internet works is we're constantly going through most code paths most of the time to make all this happen.
The beauty about the naming system is that unlike fixed-length numbers, this stuff just grows as we grow. And the whole issue about trying to make the Internet work these days is trying to keep up with the pace of silicon and innovation and people. We need to keep being able to scale this. It's twice the size, ten times the size. And as we move on with this and find weirder and weirder ways of displacing other things, like human drivers of cars, replacing them with programs, algorithms, and network control, then there are more and more demands being placed on trying to make this stuff scale even further.
And part of the issue is we've actually made the fundamental building block of today's networks now based around names and not IP addresses.
When you go to a site you trust, how do you know it's that site? How do you know it's a bank, my bank? And it's a really good question because it's not a very nice world out there, and a lot of people are trying to hoodwink you. Oddly enough, the way I'm going to prove that I'm your bank is based around proving that the domain name I have control of is the domain name, the DNS name, that you are going to in your URL. That's the match. So I can prove that I have that domain name. I'm the real thing. This is why you trust me. That fundamental trust mechanism on the Internet is not based on IP addresses. It's actually based on DNS names, and everything else follows from that.
That, in computing terms, is actually, again, revolutionary. We didn't aim to do it. It was just a bunch of people, each of them reacting to today's problems and incrementally just tweaking at this. And we've come up with systems that have left the old hop-by-hop, destination-based addressing as just the basic way of moving packets around the network. But all that service environment, the reason why the network works, is actually all name based these days. And that's a very, very big shift.
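That name-anchored trust is exactly what a TLS client enforces on every secure connection: the certificate the server presents has to be valid for the DNS name the user asked for, not for any particular IP address. A minimal sketch of that check, using example.com purely as a convenient public host:

```python
# Minimal sketch of name-based trust in TLS: the client checks that the
# server's certificate is valid for the DNS name it intended to reach.
import socket
import ssl

hostname = "example.com"  # any HTTPS site works; the trust anchor is the name, not the address
context = ssl.create_default_context()  # loads the system's trusted certificate authorities

with socket.create_connection((hostname, 443), timeout=10) as raw_sock:
    # server_hostname drives both SNI and the certificate name check
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        cert = tls_sock.getpeercert()
        names = [value for key, value in cert.get("subjectAltName", ()) if key == "DNS"]
        print(f"Connected to {hostname}; certificate is valid for: {names}")
```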
BARRY: You said that back in the 80s when IPv4 was created, they had billions of addresses and nobody thought we'd ever run out. Do you think now we're at the point where we've got effectively infinite capacity, or do you think in 20 years' time, someone's going to be sitting on a podcast like this saying, "Hey, those guys back in the 2020s, they thought they had this problem solved, and look where we are now."
GEOFF: Look. We've always had the conversation of, well, that's enough for anybody. 14.4 kilobit dialup modems. No one needs to go faster. Oh, I've got a 56 kilobit modem. Oh, gee, it works a bit quicker, doesn't it? I'm going to go DSL and start loading. We've always managed. And these days in the world of fiber rollout, the folk who jumped first and pushed 10 megabits to the home are now looking a bit silly when the area around optics and passive optical networking is now using coherent light, and the total capacity of those systems is pushing towards terabits.
You kind of go, “Well, how can you do that? Are you just getting cleverer?” And the answer is, well, not really. If you get better chips, then as you come down in track width from five nanometers to three to two, you can actually do a whole bunch more computation and you can cram more signal into that fiber and detect it at the other end. And so part of the reason why we can make these fiber systems work so much better is, again, Moore's Law. As long as Moore's Law keeps on delivering, yes, we can push this a whole lot further.
There are a few kinks in that road coming up, and they're going to be a little bit rocky. But the principle is if we can keep on solving these problems as they arise and make ever denser and, oddly enough, ever cheaper and more powerful chipsets, we can do so much more. And as we do that, it will supplant more and more activities, and it will become a self-fulfilling demand.
MIKE: Well, you mentioned this concept then of, you know, sort of Moore's Law building on top of it, doing more computations, and we're solving problems as we go. Do you then think that, you know, if you think about TCP/IP, it was fundamentally built to cope with these lower-quality networks. But now we then have, you know, these improvements in the optics. We obviously can't improve on the speed of light, or maybe we can, maybe Moore's Law helps us with that. But we're starting to improve the underlying quality. So therefore, that allows us to do more. So we think about things like BBR, for example, from a congestion control algorithm perspective. So we can effectively sort of push more data down from a throughput perspective because we're not expecting loss.
I guess the question in all that is, is the improvement in the underlying infrastructure actually helping us advance more?
GEOFF: It's kind of a contradictory answer to that, Mike, because as we increase the fiber capacity, we can put more power into the laser, we can, you know, put more light intensity through. The next sort of aim is, can I make the road wider? Can I use the more marginal parts of the spectrum and try and extract signal from it? And so oddly enough, when you're in those margins, the signal quality is really quite disgusting.
And you're actually using the digital signal processing and the signal encoding techniques to actually pull out a signal where 10 years ago you never had a hope.
And if you think about it, the best expression of that I can give these days is actually the engineering that goes on behind Starlink. Here is one side of the conversation, the spacecraft, zooming overhead at 27,000 kilometers an hour. Wow. And you're trying to do a video call. You're trying to sort of do a smooth connection using TCP/IP. The fact that it works at all is a miracle. The fact that it works so well isn't that it was easy. It actually isn't. It was really hard, but it's possible with today's chipsets.
And so oddly enough, as we kind of improve our processing capability, we can move towards more marginal areas of the band and exploit that as well. We can gather more bandwidth out. And so there's always a need to actually stay on that leading edge to extract the most we can out of the systems that we have. A piece of fiber does fine at 10 megabits a second, but these days you want to get 1.6 terabits out of it. That's not easy. You know, that's quite the opposite. But we're doing it. You know, the latest submarine cable systems are now phenomenally thick in terms of their capacity.
So, yes, we exploit these benefits that come out of greater chip density constantly and just constantly, you know, up the level of how much data we can move, how quickly, and reducing the cost of doing that at the same time.
BARRY: Talking of things that are going to push the capabilities of our networks, we have to talk about the impact of AI. How do you think that's going to affect the processing power required for today's Internet architecture?
GEOFF: When AI was first a subject of academic interest, and I'm talking late 50s, early 60s, there were two schools of thought, inductive and deductive.
One is I observe something, and I observe all of its variance, and I start to observe patterns. Then all of a sudden, if I observe enough of these patterns, I can start to predict. So if I stop a sentence… midway, it's highly likely that the next word is midway and so on. And so you get these kind of systems that are inductive by being imitative of everything they've seen before and creating probabilistic patterns of the future. It's a cheap trick. It's a really impressive cheap trick, but it's a cheap trick.
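As a toy illustration of that inductive, pattern-matching idea, and nothing like a real model in scale, here's next-word prediction built from nothing more than counts of which word follows which; the tiny corpus is invented:

```python
# Toy illustration of inductive next-word prediction: count which word follows
# which, then predict the most frequently observed continuation.
from collections import Counter, defaultdict

corpus = (
    "if i stop a sentence midway you can guess the next word "
    "if i stop a sentence midway the pattern gives it away"
).split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    """Return the most frequently observed next word, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("sentence"))  # 'midway', the continuation seen most often after 'sentence'
```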
But the other system that's also being developed, and it comes as part of this neural net system, is the elements of deduction. If A, then B. If B, then C. If C, then D. And it was kind of work from, geez, a century and a half ago, Bertrand Russell, Principia Mathematica. Maths is simply a formal system that you can build with five basic axioms, and all the rest of the towers of Babel are logical. And the whole idea is if I can teach a computer to be deductive, it will know everything because that's what we do with humans. And it's those two pressures that are driving AI.
This becomes really quite, quite an interesting issue because the extent to which we can take neural systems and produce densities of connectivity that are orders of magnitude bigger than a human brain, and at the same time sort of give it the flexibility to behave like neural networks do in a human brain, if they're capable of deductive reasoning, it's almost impossible to contemplate what comes out the other end.
A whole lot of money is being thrown into a whole lot of data centers really quickly in today's world to figure out what's going on here. And I think most of this is exploratory. No one knows. It is the big unknown. But like I said, there's this kind of suspicion. What if they're right? And if they are right, oh my God, this is a different world for humanity.
And so, yes, you touch on a really, really big issue. It's only lately with the graphics chips, the silicon processors, the cramming of a trillion, a trillion gates onto a single piece of silicon fabric. It's only those latest things that have made this feasible. Yes, it's stretching every bit of engineering to its absolute limit. Two hundred kilowatts of heat coming out of every single rack, liquid immersion for every single processing unit.
Everything about that world is pushing it to its absolute limit. The theory is that in five years' time, Moore's Law will make that commonplace, assuming Moore's Law delivers. If we've reached the end of Moore's Law, that's a different future. If it can't deliver, we're back into, well, it's really expensive. It costs five bucks to answer your question. Somebody's got to pay that money. It's not going to go very far. But if Moore's Law keeps on delivering, you know, it'll all get cheaper and faster.
MIKE: What about the follow-on effect? You know, we talked about the heat, the load. We've seen examples of data centers where they've had generative AI workloads in there which have tripped the chillers out and affect other stuff in other racks there as well. Can we reduce the heat? Do we distribute the models wider? What do we do in that case?
GEOFF: We've got to the point where you don't share. An AI data center is not a data center. It is a custom-built AI data center. It's actually a completely different arrangement, and it's, I think, almost as radical as IP networking was to the old telephone network.
Now the data center is, if you will, the memory bus of doing a whole bunch of storage and a whole bunch of processes and trying to make any processor talk to any part of that memory at terabit speeds. And it's kind of a fascinatingly difficult engineering challenge. It only works in a building, and it's a very big building, and it does take a huge amount of heat, huge amounts of power. It's its own piece of engineering.
As I said, the theory is we're doing this because it'll be cheaper tomorrow. We're not doing this because we expect those ones to hang around for 20 years. But relying on Moore's Law is getting to be a faith, not a science.
We're down to two by two nanometer tracks. You know, these are tiny. Electrons tunnel through those tracks. We can't control the process anymore. We can't go thinner. And so now we're faced with the conundrum of trying to build 3D lattice structures of semiconductor material down at nanometer accuracy using technologies that, quite frankly, no one's figured out yet.
I'm not even sure if there are kids who are going to do this in their professional career or whether they've even been born. The challenges of making Moore's Law deliver over the next 10 to 20 years are, I think, every bit as awesomely large as the original challenge of the first integrated circuit. It's not going to be an easy ask. And so I suspect the pace that's got us to 2025 and, you know, the last 15 years with mobile phones, data centers, content distribution networks, etc., I think that's going to slacken off. And the next few years are going to be a little gentler because we just can't innovate at the chip level to the extent that we'd like to as quickly as we were doing before.
We've pushed all the planar technologies to their limit. If anyone's in this area, they'd be aware of FinFETs, gate-all-around silicon, GAA chips, and so on, which are those early elements of three-dimensional work on what was a flat chip. But we now need to go to the next generation of lattice work, and that's all just completely new territory for all of us. Fascinating, though. Absolutely exciting work.
MIKE: Absolutely. It is, again, almost like we've got this evolution. We've come through. We're going to an even more distributed architecture to sort of circumvent it. We've hit a roadblock, as it were, at that point, and then the innovation effectively will come horizontally. From a networking perspective, let's just call it east to west, from a distributed architecture perspective.
GEOFF: There are a number of challenging points that we're going to start working with. The size of storage is getting really, really amazingly small. And you could actually argue that most of the Internet problems that we used to have, go and find your data, it's somewhere on the other side of the planet, have been changed into, well, the data you need is in the data center just down the road.
Now why is it just down the road? Why isn't it in my house? And you go, oh, it's a bit big. I need a bit more chilling than you've got. You haven't got enough power, dude. You haven't got enough racks. But I might in 10 years. You know, at some point this gets small enough and the systems are ubiquitous enough that it's not an outlandish idea. It's just all a case of scale.
And, you know, if there's one story about the Internet over 40 years, it's the challenge of scale.
And so far, we've managed in an erratic and unplanned way, but we've managed to respond to that challenge of scale. And as long as we're able to keep on doing that, in some ways, someone's gonna figure this out. I've no idea how, and I hope that they make a lot of money out of it too because, you know, there has to be rewards there somewhere. But folk will be working on it. You can count on that.
BARRY: And that's our show. Huge thanks to Geoff Huston for joining us.
Please give us a follow and leave us a review on your favorite podcast platform. We really appreciate it. And not only does this ensure you're in the know when a new episode is published, it also helps us to shape the show for you. You can follow us on LinkedIn or X @thousandeyes or send questions and feedback to internetreport@thousandeyes.com. Until next time, goodbye.
By the Numbers
Let’s close by taking our usual look at some of the global trends ThousandEyes observed across ISPs, cloud service provider networks, collaboration app networks, and edge networks over two recent weeks (April 21 - May 4):
- In a reversal of the downtrend observed in the previous two weeks (April 7-20), global outages increased from April 21 - May 4. During the first week in the fortnight (April 21-27), ThousandEyes recorded a 13% rise in outages, which increased from 309 to 348. This upward trend continued into the next week (April 28 - May 4), when outages rose from 348 to 444, marking a 28% increase compared to the previous week.
- The United States experienced a similar pattern. Although outages in the first week (April 21-27) remained stable at 69, there was an increase in the subsequent week. During the week of April 28 - May 4, outages rose from 69 to 95, representing a 38% increase.
- From April 21 to May 4, an average of 21% of all network outages occurred in the United States, down from the 32% observed in the previous period (April 7 - April 20). This 21% marks the second consecutive period in which U.S.-based outages accounted for less than 40% of all recorded outages.
- In April, a total of 1,804 outages were recorded globally, representing a 15% decrease from the 2,110 outages noted in March. In the United States, outages decreased significantly, dropping from 901 in March to 531 in April—a 41% reduction. This shift in trend is notable because total outages, both globally and in the U.S., have typically risen from March to April in previous years. This change may be partially related to the timing of the Easter holiday, which occurred in late March or early April in 2023 and 2024 but fell later in April in 2025. Good Friday and/or Easter Monday are public holidays in many countries around the world, leading to a potential drop in maintenance work or other updates that might cause outages.
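For readers who want to reproduce the week-over-week and month-over-month percentages above, the arithmetic is straightforward; the raw counts below are taken from the bullets in this section:

```python
# Recompute the percentage changes quoted above from the raw outage counts.
changes = [
    ("Global, Apr 21-27 vs. prior week", 309, 348),
    ("Global, Apr 28-May 4 vs. Apr 21-27", 348, 444),
    ("U.S., Apr 28-May 4 vs. Apr 21-27", 69, 95),
    ("Global, April vs. March", 2110, 1804),
    ("U.S., April vs. March", 901, 531),
]

for label, before, after in changes:
    pct = (after - before) / before * 100
    print(f"{label}: {before} -> {after} ({pct:+.0f}%)")
```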
