
The Internet Report

Diving Into the Red Sea Cable Cuts & More Outage News

By Mike Hicks | 18 min read

Summary

Gain insights on the Red Sea subsea cable cuts, as well as other recent service disruptions that impacted the Great Firewall of China, Mailchimp, Google, and Verizon.


This is The Internet Report, where we analyze outages and trends across the Internet through the lens of ThousandEyes Internet and Cloud Intelligence. I’ll be here every other week, sharing the latest outage numbers and highlighting a few interesting outages. As always, you can read the full analysis below or listen to the podcast for firsthand commentary.


Internet Outages & Trends

When submarine cables carrying significant Internet traffic are severed, you might expect widespread service failures. However, the September 6 Red Sea cable cuts revealed something more nuanced: While services experienced increased latency and some degradation, the Internet's redundant paths kept traffic flowing.
The automatic rerouting wasn't perfect or universal, but for many monitored routes, packet loss remained negligible even as traffic took longer, more circuitous paths to reach destinations.

Read on to learn more about the network rerouting following the Red Sea cable cuts, plus analysis of other recent service disruptions including potential Great Firewall issues that appeared to impact HTTPS traffic, backend problems at Mailchimp, fiber path disruptions in Google's Bulgarian infrastructure, and a software-induced outage at Verizon.


Red Sea Subsea Cable Infrastructure Disruption

On September 6, multiple submarine cable systems in the Red Sea experienced damage, forcing widespread traffic rerouting. The affected systems—SEA-ME-WE-4, IMEWE, FALCON GCX, and Europe India Gateway—normally carry traffic between Europe, the Middle East, and Asia. The International Cable Protection Committee's early analysis points to commercial shipping activity as the probable cause—likely a vessel dragging its anchor across the cables, a common occurrence that accounts for roughly 30% of global cable faults annually.

Microsoft confirmed the disruptions, noting that network traffic travelling through the Middle East might see elevated latency due to undersea fiber cuts in the Red Sea. The disruptions started at 5:45 AM (UTC) on September 6, affecting Microsoft Azure cloud computing services.

Figure 1. Microsoft’s statement about the disruption, posted on the Azure status page

Pakistan Telecommunication Company Limited (PTCL) also reported decreased capacity on impacted cables. In the United Arab Emirates, users of the country's state-owned Du and Etisalat networks reported slower Internet speeds.

What made this incident instructive wasn't the cable damage itself—submarine cables break regularly—but understanding the varying impacts. When primary paths through the Red Sea became unavailable, traffic automatically shifted to alternative routes, often through terrestrial networks across Asia or alternative submarine systems.

Figure 2. Traffic rerouting through alternative paths following the cable damage, via different geographic regions

Overall, the rerouting worked, but with important distinctions:

  • International transit traffic (like AWS to European destinations) successfully rerouted with 100-200ms additional latency but negligible packet loss.

  • Regional Middle East traffic, especially routes through Jeddah where the damage occurred, experienced both increased latency and packet loss—explaining why customers in Dubai reported connectivity issues to India via Jeddah.

  • Traffic from inland locations like Riyadh showed packet loss when attempting to reach international destinations, as these routes typically transit through Jeddah to reach submarine cables.

Figure 3. Packet loss observed from Riyadh following the cable damage, consistent with routes that depend on Jeddah cable landing infrastructure

This strategy of rerouting traffic through different paths is a common tactic providers use to minimize the impact on users while the cables are being repaired. For example, in response to the September 6 cable damage, Microsoft said that it had rerouted network traffic through other paths to keep its services running, and PTCL also reported using alternative bandwidth channels to reduce service degradation.

As alluded to above, while this type of rerouting helps maintain connectivity, sending traffic through longer and possibly more congested paths can also cause slower performance. This likely explains the increased latency ThousandEyes observed on connections like Mumbai-Frankfurt, as traffic that normally uses optimized Red Sea transit routes was redirected through alternative and perhaps less efficient pathways.

Figure 4. After the undersea cable cuts, ThousandEyes observed increased latency on regional connections, including between Mumbai and Frankfurt

The incident demonstrated how physical infrastructure failures can trigger automatic rerouting, with varying degrees of success, depending on geographic location and proximity to the damage. Alternative carriers successfully absorbed rerouted international traffic without significant degradation—traffic between distant endpoints like AWS and European destinations primarily experienced latency increases. However, regional traffic near the cable landing points faced more severe impacts, including both latency and packet loss, highlighting how the quality of redundancy varies based on available alternative paths and distance from the failure point.
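As a rough illustration of how such latency shifts surface in monitoring data, here is a minimal sketch (not ThousandEyes tooling; the hostnames, baselines, and thresholds are all placeholders) that times TCP connections to a few destinations and compares the medians against recorded baselines, flagging sustained increases on the order of the 100-200 ms added by rerouting.

```python
import socket
import statistics
import time

# Illustrative destinations and per-host baseline RTTs (ms); real values
# would come from your own measurement history, not from this post.
BASELINES_MS = {
    "frankfurt-target.example.com": 110.0,
    "mumbai-target.example.com": 140.0,
}
SAMPLES = 5            # connection attempts per host
SHIFT_THRESHOLD = 75   # flag sustained increases of roughly 75 ms or more

def tcp_connect_ms(host: str, port: int = 443, timeout: float = 3.0) -> float | None:
    """Time a single TCP handshake; return the RTT in ms, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None  # loss or blocking shows up as failed attempts, not latency

for host, baseline in BASELINES_MS.items():
    rtts = [r for r in (tcp_connect_ms(host) for _ in range(SAMPLES)) if r is not None]
    if not rtts:
        print(f"{host}: all {SAMPLES} attempts failed (possible packet loss)")
        continue
    median = statistics.median(rtts)
    delta = median - baseline
    status = "reroute-like shift" if delta >= SHIFT_THRESHOLD else "within baseline"
    print(f"{host}: median {median:.0f} ms (baseline {baseline:.0f} ms, delta {delta:+.0f} ms) -> {status}")
```

A real workflow would pair this with path data (for example, traceroute comparisons), since a latency shift alone does not confirm that traffic moved onto a longer route.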


To dive deeper into the world of subsea cables, check out this blog post and podcast.

Great Firewall Port 443 Disruption

On August 19, another connectivity issue arose, this time appearing to involve the Great Firewall of China (GFW). Connections to TCP port 443 from within China to external destinations exhibited connection resets, disrupting web communications and services that rely on port 443. This port is the standard for HTTPS traffic, which encrypts web communications to safeguard the data in transit. Nearly all modern websites, web applications, and secure communications utilize port 443 for their encrypted connections. Furthermore, many cloud services, APIs, software updates, and enterprise applications heavily depend on this port for secure communication. The incident occurred between 16:34 and 17:48 UTC, effectively cutting off China from most encrypted web services and external websites.

ThousandEyes data revealed failures consistent with active connection termination. Attempts to connect to Microsoft Azure and AWS platforms resulted in immediate connection errors, with response times in single-digit milliseconds—suggesting the connections were being terminated at the network layer rather than experiencing routing or DNS issues.

Figure 5. Connection attempts showing immediate termination during the disruption

The blocking mechanism exhibited characteristics consistent with Great Firewall behavior: the system injected forged TCP RST+ACK packets to terminate connections. However, the pattern differed from previously documented GFW behavior, with the system showing incrementing values in certain packet fields rather than the identical values typically seen in GFW reset packets.

Figure 6. During the disruption, ThousandEyes observed "connection refused" messages, suggesting active connection termination

Figure 7. AWS console unreachable during the incident
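The forged reset behavior described above can, in principle, be observed from an affected vantage point with a packet capture. Below is a purely illustrative Scapy sketch (requiring capture privileges, and not drawn from the original analysis) that logs the flags, IP ID, and TTL of any resets seen on port 443, the sort of per-packet fields whose values can help distinguish injected resets from genuine server responses.

```python
from scapy.all import sniff, IP, TCP  # requires root/administrator privileges

def log_reset(pkt):
    """Print per-packet details of TCP resets seen on port 443."""
    if IP in pkt and TCP in pkt and pkt[TCP].flags.R:
        print(
            f"{pkt[IP].src}:{pkt[TCP].sport} -> {pkt[IP].dst}:{pkt[TCP].dport} "
            f"flags={pkt[TCP].flags} ip_id={pkt[IP].id} ttl={pkt[IP].ttl} seq={pkt[TCP].seq}"
        )

# Capture for 60 seconds while reproducing a failing HTTPS connection.
# The BPF filter matches TCP packets on port 443 with the RST bit set;
# comparing the logged IP ID and TTL values against packets known to come
# from the real server is one way to spot injected resets.
sniff(filter="tcp port 443 and tcp[13] & 4 != 0", prn=log_reset, timeout=60)
```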

The trigger mechanism was notably asymmetric:

  • Outbound from China: Both the initial SYN packet and the server's SYN+ACK response triggered multiple reset packet injections.

  • Inbound to China: Only the domestic server's SYN+ACK response triggered the blocking—foreign SYN packets were not blocked.

This asymmetry and the specific targeting of only port 443 (while leaving ports like 22, 80, and 8443 unaffected) represented unusual behavior compared to previously documented patterns. The 74-minute duration followed by restoration to normal operations was notable, though the underlying cause remains unclear.

The incident's port-specific blocking pattern offers a useful case study in network diagnostics. When port 443 failed while ports 22 and 80 remained functional, and connection attempts returned single-digit millisecond RST responses, this indicated termination at the network layer rather than application or routing failures. The diagnosis required multiple data points in context—knowing other ports worked ruled out general connectivity issues, while the RST timing pattern distinguished active termination from passive failures. No single indicator would have been sufficient.
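As a rough illustration of that diagnostic reasoning, the sketch below (illustrative only, with a placeholder hostname) probes several ports on the same host and classifies each attempt by how it failed: an immediate error in single-digit milliseconds is consistent with an injected reset, a hang until the timeout points to silent drops, and success on ports 22, 80, and 8443 alongside failure on 443 narrows the problem to port-specific filtering.

```python
import socket
import time

HOST = "server.example.com"   # placeholder; substitute the destination under test
PORTS = [22, 80, 443, 8443]
TIMEOUT = 5.0

for port in PORTS:
    start = time.monotonic()
    try:
        with socket.create_connection((HOST, port), timeout=TIMEOUT):
            elapsed_ms = (time.monotonic() - start) * 1000.0
            print(f"port {port}: connected in {elapsed_ms:.0f} ms")
    except socket.timeout:
        print(f"port {port}: no response within {TIMEOUT:.0f} s (silent drop or routing issue)")
    except (ConnectionRefusedError, ConnectionResetError):
        elapsed_ms = (time.monotonic() - start) * 1000.0
        # A reset arriving in single-digit milliseconds is far faster than the
        # round trip to a distant server, pointing to in-path termination.
        print(f"port {port}: reset after {elapsed_ms:.1f} ms (active termination?)")
    except OSError as exc:
        print(f"port {port}: failed ({exc})")
```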

Mailchimp Login Disruption

On September 3, Mailchimp users around the world encountered issues logging into the email marketing platform for about 35 minutes.

Starting around 2:25 AM (UTC), ThousandEyes observations pointed to backend issues affecting the platform. ThousandEyes detected HTTP 500 internal server errors, which appeared simultaneously across various regions, suggesting a centralized backend failure rather than regional infrastructure issues. Additionally, the fact that HTTP responses were being received—rather than connection timeouts—indicated that frontend infrastructure including load balancers and web servers remained operational. All network-level operations—DNS resolution, TCP connection establishment, SSL handshake, request transmission, and response reception—stayed functional.


Explore the Mailchimp incident further in the ThousandEyes platform (no login required).

Figure 8. The login disruption impacted Mailchimp users around the globe

The HTTP 500 response codes showed that requests were successfully reaching Mailchimp's application servers, but the backend services were unable to process them. The failure pattern—widespread HTTP 500 errors with intact network connectivity across multiple data centers simultaneously—suggests a single point of failure in Mailchimp's core application infrastructure, likely involving authentication services or underlying database systems.

ThousandEyes observed recovery beginning around 2:55 AM (UTC), with service appearing almost fully restored by 3 AM (UTC). Mailchimp acknowledged the incident, describing it as connection issues experienced when trying to log into Mailchimp.

This incident exemplified a common pattern in modern cloud services: backend service failures that leave the network and frontend infrastructure fully operational. For network operations teams, this highlights the importance of application-layer monitoring beyond traditional network metrics—connection success doesn't guarantee service availability when backend systems fail.
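As a minimal sketch of that idea, and separate from anything in the ThousandEyes platform, the standard-library Python below walks the same layers in order against a placeholder endpoint: if DNS, TCP, and TLS all succeed but the response carries a 5xx status, the evidence points at the backend rather than the network.

```python
import http.client
import socket
import ssl

HOST = "login.example.com"  # placeholder for the monitored endpoint
PATH = "/"

# 1. DNS resolution
addr = socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)[0][4][0]
print(f"DNS ok: {HOST} -> {addr}")

# 2-3. TCP connection and TLS handshake
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        print(f"TCP/TLS ok: {tls.version()}")

# 4-5. HTTP request and response (opens a fresh connection for the check)
conn = http.client.HTTPSConnection(HOST, timeout=5)
conn.request("GET", PATH)
resp = conn.getresponse()
print(f"HTTP status: {resp.status}")
if 500 <= resp.status < 600:
    print("Network path healthy but backend returned a 5xx: investigate the application tier.")
```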

Google Services Disruption

On September 4, various Google services experienced connectivity issues that impacted users in southeastern Europe, including Turkey, Greece, and Bulgaria. The disruption lasted about an hour, from 7:08 AM to 8:12 AM (UTC).

The incident affected multiple Google services including YouTube, Maps, Gmail, Drive, and Search. Issues reportedly included 5xx errors, trouble loading YouTube videos, Google Maps not loading map data or calculating routes, issues completing searches on Google Search, difficulties sending and receiving emails, and problems accessing Google Drive documents and files.

Figure 9. Google services only appeared disrupted in southeastern Europe, seeming to function normally in the rest of the world

The service disruptions were limited to southeastern Europe, and the same services remained fully operational elsewhere in the world. This regional pattern, coupled with the fact that services requiring user authentication—Gmail, Drive, Calendar, and personalized YouTube features—were affected, pointed to infrastructure constraints in the region rather than application-level failures.
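That style of reasoning can be approximated with a simple aggregation. The sketch below uses entirely hypothetical vantage points and error rates to show how grouping per-location results by region makes a concentrated regional failure stand out from a global one.

```python
from collections import defaultdict

# Hypothetical per-vantage-point results: (location, region, error rate %).
results = [
    ("Sofia", "Southeastern Europe", 62.0),
    ("Istanbul", "Southeastern Europe", 55.0),
    ("Athens", "Southeastern Europe", 48.0),
    ("Frankfurt", "Western Europe", 1.0),
    ("London", "Western Europe", 0.5),
    ("New York", "North America", 0.8),
]

by_region = defaultdict(list)
for _, region, error_rate in results:
    by_region[region].append(error_rate)

for region, rates in by_region.items():
    avg = sum(rates) / len(rates)
    flag = "regional impact" if avg > 10 else "nominal"
    print(f"{region}: average error rate {avg:.1f}% -> {flag}")
```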

Google confirmed the outage was due to fiber path disruptions within its production backbone infrastructure in Bulgaria, which reduced available network bandwidth for the point of presence (PoP) location in Sofia. The company stated that the remaining capacity could not support the traffic load for connections between the Sofia location and the rest of the Google backbone network.

The company resolved the issue by rerouting traffic to adjacent PoP locations. Google engineers manually redirected traffic flows to restore connectivity, indicating that automated failover mechanisms were insufficient to handle this specific capacity reduction scenario.

This incident illustrated how different types of infrastructure face distinct redundancy challenges. While submarine cables can leverage multiple geographic paths, regional PoP infrastructure operates under different constraints—when the Sofia location experienced fiber disruptions, the specific characteristics of the situation led Google engineers to manually optimize traffic distribution to adjacent facilities to maintain service quality.

Verizon Wireless Service Disruption

On August 30, Verizon experienced a nationwide wireless service outage affecting cellular connectivity across the United States. The disruption lasted approximately seven hours and was attributed to a software-related failure.

The outage began between 4:30 and 5:00 PM (UTC), with reports indicating widespread service disruptions. Verizon confirmed the incident, stating that a software issue was affecting wireless service for some customers.

The failure manifested primarily as loss of cellular connectivity, with affected devices displaying "SOS" or emergency-only mode. The outage pattern was described as completely random, with some accounts where only one person out of 8 or 9 was experiencing the issue, indicating inconsistent impact across device types and account configurations within the same geographic areas.

Service restoration efforts continued through the evening, with Verizon noting that engineers were working on the service disruption.

Verizon’s reports that the failure was software-related are consistent with the rapid resolution timeline and the selective impact pattern across devices. Unlike infrastructure damage or equipment failures that typically require physical repairs, software issues can often be resolved through configuration changes or system restarts, explaining the relatively quick restoration compared to the longer recovery timeline often seen for hardware-related outages.


By the Numbers

Let’s close by taking a look at some of the global trends ThousandEyes observed over recent weeks (August 25 - September 7) across ISPs, cloud service provider networks, collaboration app networks, and edge networks.

Global Outages

  • From August 25 to 31, ThousandEyes observed 260 global outages, which mirrored the total seen the prior week (August 18 to 24).

  • This stability shifted during the week of September 1 to 7, when outages climbed to 308, representing an 18% increase from the prior period. This brought global outage levels back to match the peak recorded earlier in August (August 4-10), when 308 outages were also observed.

United States Outages

  • The United States experienced a steady increase in outages throughout the two weeks from August 25 - September 7. During the first week (August 25 to 31), U.S. outages rose to 136, representing an 11% increase from the previous week's 123. This upward momentum continued during the week of September 1 to 7, with U.S. outages climbing further to 166, representing a 22% increase.

  • The 166 outages recorded during September 1-7 represented the highest weekly total for U.S. network disruptions seen in the eight-week period from July 14 - September 7.

  • Additionally, over the period from August 25 to September 7, the United States accounted for 49% of all observed network outages, representing nearly half of global network disruptions during this timeframe.

Month-over-month Trends

  • Global network outages increased significantly from July to August 2025, rising 46% from 767 incidents to 1,117. This represents an addition of 350 outages month-over-month.

  • The United States followed a similar trajectory, with outages increasing from 398 in July to 517 in August, representing a 30% increase and an additional 119 outages. This means U.S. outages accounted for 34% of the month-over-month global increase.
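For anyone who wants to sanity-check the month-over-month figures above, the arithmetic works out as follows (a small Python snippet using only the totals reported in this section):

```python
# Month-over-month outage totals reported above.
global_jul, global_aug = 767, 1117
us_jul, us_aug = 398, 517

global_delta = global_aug - global_jul   # 350 additional outages
us_delta = us_aug - us_jul               # 119 additional outages

print(f"Global increase: {global_delta} ({global_delta / global_jul:.0%})")  # ~46%
print(f"U.S. increase: {us_delta} ({us_delta / us_jul:.0%})")                # ~30%
print(f"U.S. share of the global increase: {us_delta / global_delta:.0%}")   # ~34%
```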

Figure 10. Global and U.S. network outage trends over eight recent weeks

