Resolved -
The routing fix has been stable since it was applied earlier today, and we now consider the issue resolved.
We apologize for the outage and will perform a post-mortem analysis with our upstream provider.
Sep 11, 12:32 CEST
Monitoring -
Our uplink provider identified a routing issue in our transfer networks and has switched the affected routes to a new transfer network. Our monitoring shows that the situation has stabilized, but we are continuing to watch it closely for now.
Sep 11, 06:51 CEST
Update -
We've improved the situation (the routing loops are gone), but we're still seeing some loss of incoming traffic for some networks. The issue is intermittent because traffic is distributed over multiple paths and only some of them are affected. Unfortunately, we cannot simply disable the affected paths, as that would cause further issues. We're still in touch with the upstream provider to get this resolved.
The current adverse effects manifest mostly as DNS resolver failures and intermittent traffic loss when accessing external resources.
Sep 11, 05:54 CEST
Identified -
We're seeing routing loops and partial routing errors from our upstream data center. We're getting in touch with their NOC.
Sep 11, 04:40 CEST
Investigating -
We're seeing a number of DNS resolution errors in our primary data center and are investigating.
Sep 11, 04:30 CEST