Flying Circus
All Systems Operational
VM servers   Operational
VM storage cluster   Operational
Network and Internet uplink   Operational
Central services   Operational
Related external services Operational
Bitbucket Git via HTTPS   Operational
Bitbucket Mercurial via HTTPS   Operational
Bitbucket SSH   Operational
GitHub   Operational
pypi.python.org   Operational
Fastly Europe (FRA)   Operational
Fastly Europe (AMS)   Operational
Past Incidents
Sep 25, 2017

No incidents reported today.

Sep 24, 2017

No incidents reported.

Sep 23, 2017

No incidents reported.

Sep 22, 2017
Completed - The scheduled maintenance has been completed.
Sep 22, 00:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Sep 21, 21:00 CEST
Scheduled - Release 2017_020 is ready and will be rolled out during the specified timeframe.

See http://flyingcircus.io/doc/reference/changes/2017/r020.html for information about the specific changes.
Sep 20, 14:45 CEST
Sep 20, 2017
Resolved - The .IO registry appears to have fixed the name server issues. Some of the root nameservers delivered (cacheable) NXDOMAIN answers, convincing some clients that .IO domains do not exist.

Our monitoring has not reported this issue for the last few hours, and we have seen reports from others on the internet that things are back to normal.
Sep 20, 20:41 CEST
Monitoring - We're seeing DNS resolution errors for .io domains (including our www.flyingcircus.io and my.flyingcircus.io) that appear to be caused by an issue in the global DNS infrastructure. Pingdom has also acknowledged that a large number of customers are reporting suspicious DNS resolution errors. Our DNS servers are working fine, and your services are not directly affected. Resolution of our domains within our data centers is not affected, but recursive resolution of third-party domains is likely to be affected as well.
Sep 20, 16:31 CEST
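
For anyone wanting to reproduce the symptom from a client's perspective, a quick spot check along these lines (a minimal Python sketch; the domain list is just an example) distinguishes "no such name" answers, which resolvers may cache negatively, from other lookup failures:

    import socket

    # Hypothetical spot check: try to resolve a few .io names and report
    # how each lookup fails, if it does.
    for name in ("www.flyingcircus.io", "my.flyingcircus.io"):
        try:
            addrs = {info[4][0] for info in socket.getaddrinfo(name, 443)}
            print(name, "->", ", ".join(sorted(addrs)))
        except socket.gaierror as err:
            # EAI_NONAME is the NXDOMAIN-style "name does not exist" case;
            # resolvers may cache that negative answer for its TTL.
            print(name, "failed:", err)
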
Sep 19, 2017
Completed - The maintenance went fine. We experienced one initial slowdown of about 2 minutes and two more slowdowns of about 30 seconds each.

The cluster is still recovering with our new throttling parameters applied; we expect this to take another 2-3 hours without needing our attention.
Sep 19, 22:57 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Sep 19, 22:00 CEST
Update - Our scheduled maintenance was intended to take place tomorrow night, and we have adjusted the schedule accordingly.

Unfortunately, the Status Page maintenance calendar starts its week on Sunday, so we accidentally picked today instead of tomorrow. We're sorry for the confusion.
Sep 18, 21:04 CEST
Scheduled - We need to reboot our storage servers to adjust BIOS settings for improved stability and to perform preventative filesystem checks. We will take down one storage server at a time and let it recover to minimize impact.

We have discussed the performance impact of the recovery traffic with Ceph developers and have determined new settings that look promising to dramatically reduce slow requests and hanging IO during recovery. Our lab setup has shown these to be stable, and we will apply them to the cluster during this maintenance. We cannot promise they are perfect yet, so we expect multiple windows of 1-2 minutes of increased IO latency.
Sep 18, 14:07 CEST
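
The announcement does not name the new throttling parameters. As a rough illustration only, Ceph exposes recovery and backfill throttles along these lines, shown here injected at runtime from a small Python sketch (all values are hypothetical placeholders, not the cluster's actual tuning):

    import subprocess

    # Illustrative sketch: the report above does not disclose the actual
    # settings. These are standard Ceph recovery/backfill throttles; the
    # values are hypothetical, not Flying Circus's configuration.
    throttles = {
        "osd_max_backfills": "1",        # concurrent backfills per OSD
        "osd_recovery_max_active": "1",  # concurrent recovery ops per OSD
        "osd_recovery_op_priority": "1", # deprioritize recovery vs. client IO
        "osd_recovery_sleep": "0.1",     # pause between recovery ops (seconds)
    }
    for key, value in throttles.items():
        # Inject at runtime on all OSDs; persisting them would go through
        # ceph.conf or the cluster's configuration management separately.
        subprocess.run(["ceph", "tell", "osd.*", "injectargs",
                        f"--{key} {value}"], check=True)
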
Sep 18, 2017

No incidents reported.

Sep 17, 2017

No incidents reported.

Sep 16, 2017

No incidents reported.

Sep 15, 2017

No incidents reported.

Sep 14, 2017

No incidents reported.

Sep 13, 2017

No incidents reported.

Sep 12, 2017

No incidents reported.

Sep 11, 2017

No incidents reported.