
Incident log archive

All times are shown in UTC

December 2019

3rd December 2019 04:30:00 PM

Minor transient disruption to channel lifecycle webhooks over the next day or two

Customers using channel lifecycle webhooks may experience some brief transient disruption (which in some cases may very briefly include duplicate or lost channel lifecycle webhooks) at some point over the next day or two, while we transition channel lifecycle webhooks over to a new architecture (message rules on the channel lifecycle metachannel). The result will be more dependable channel lifecycle webhooks, as they will now get the reliability benefits of running on top of Ably's robust, globally distributed channels, rather than (as previously) all lifecycle events for an app being funnelled through a single point.
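As a rough illustration of the new architecture, the sketch below consumes lifecycle events directly from the [meta]channel.lifecycle metachannel using the ably-js realtime client. The API key is a placeholder and must be granted a capability covering [meta]* channels; the event names in the comment are illustrative.

    import * as Ably from "ably";

    // Sketch only: subscribe to the channel lifecycle metachannel that the new
    // webhook architecture is built on. The key is a placeholder and needs a
    // capability covering [meta]* channels.
    const realtime = new Ably.Realtime({ key: "<API_KEY_WITH_META_CAPABILITY>" });
    const lifecycle = realtime.channels.get("[meta]channel.lifecycle");

    lifecycle.subscribe((message) => {
      // message.name carries the lifecycle event (e.g. channel.opened or
      // channel.closed) and message.data describes the channel it relates to.
      console.log(message.name, message.data);
    });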

Reported

September 2019

30th September 2019 05:40:00 AM

Capacity issues in ap-southeast-1 (Singapore) region

Since 05:40 UTC today, the cluster in the ap-southeast-1 region has been unable to obtain sufficient capacity to meet demand. As a result, connections in the region are experiencing slightly higher latencies.

Until more capacity is available, we are diverting traffic to ap-southeast-2 (Sydney).

30th Sep 03:46 PM

AWS capacity has now come online in the Singapore region (ap-southeast-1). All traffic is now being routed back to this region.

Resolved

in about 10 hours
25th September 2019 11:54:00 AM

Elevated rate of 5xx errors in us-east-1

We had a higher than normal level of 5xx errors from our routing layer in us-east-1 between 11:54 and 13:17 UTC. We believe we have identified the issue, have instituted a workaround, and are working on a fix. Service should be generally unaffected as rejected requests will have been rerouted to other regions by our client library fallback functionality.
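For context, the sketch below shows the client-side fallback behaviour referred to above, assuming the promise-based ably-js REST client. Fallbacks are applied by default when the primary endpoint is rejecting requests; the explicit fallbackHosts option and the API key are shown only as placeholders to make the mechanism visible.

    import * as Ably from "ably";

    // Sketch only: if the primary endpoint (rest.ably.io) returns 5xx errors or
    // is unreachable, the library retries the request against fallback hosts
    // before surfacing an error. Omit fallbackHosts to use the built-in list.
    const rest = new Ably.Rest({
      key: "<API_KEY>",
      fallbackHosts: ["a.ably-realtime.com", "b.ably-realtime.com"],
    });

    // This request is rerouted to a fallback host if the primary is rejecting
    // requests, so the caller is generally unaffected.
    const serverTime = await rest.time();
    console.log(new Date(serverTime).toISOString());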

Resolved

in about 1 hour
25th September 2019 10:45:20 AM

EU performance issues

In both EU West and EU Central there was a sharp rise in load at 10:45 UTC, which subsided at 10:49 UTC (4 minutes later).

We have manually intervened to accelerate capacity provision, and our monitoring systems indicate traffic is being routed to other regions as expected whilst the capacity issue remains.

Resolved

in 4 minutes

July 2019

25th July 2019 10:00:00 AM

Issues in ap-southeast-2 (Sydney) due to data center connectivity issues

From 10:00 to 10:05 UTC, our ap-southeast-2 (Sydney) data center experienced connectivity issues with other datacenters. Full connectivity was restored after five minutes. Other datacenters were unaffected.

Resolved

in 5 minutes
24th July 2019 01:13:22 AM

Our automated health check system has reported an issue with realtime cluster health in ap-southeast-1-a

This incident was created automatically by our health check system, which has identified a fault. We are now looking into this issue.

24th Jul 01:14 AM

Our health check system has reported this issue as resolved.
We will continue to investigate the issue and will update this incident shortly.

24th Jul 10:14 AM

Due to a load spike, message transit latencies through the Asia Singapore datacenter may have been higher than normal for a period of around 10 minutes. The issue resolved itself automatically through autoscaling.

Resolved

in 5 minutes
2nd July 2019 01:50:05 PM

Cloudflare issues affecting fallback hosts, CDN, and website

Fallback realtime hosts (*.ably-realtime.com), the Ably website and CDN (affecting website assets and library access) are having availability problems due to Cloudflare issues: https://www.cloudflarestatus.com/incidents/tx4pgxs6zxdr.

The primary realtime hosts (rest.ably.io, realtime.ably.io) do not use Cloudflare and are still working fine, so the service is still up.

We are in the process of bypassing Cloudflare on selected high-priority hosts (the website, status site, and CDN).

Update 14:05 UTC: Cloudflare has been bypassed for the website, status site, and CDN. Fallback hosts are still going through Cloudflare, but as primary hosts are all up (and have been the whole time), this should have no effect on service status.
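For anyone wanting to verify which hosts are still fronted by Cloudflare, the sketch below checks for the cf-ray response header that Cloudflare-proxied responses carry. It assumes Node 18+ (for the global fetch), and the host list is illustrative.

    // Sketch only: Cloudflare-proxied responses include a cf-ray header, so its
    // absence suggests the host has been bypassed. The hosts listed are examples.
    const hosts = [
      "https://www.ably.io",
      "https://rest.ably.io",
      "https://realtime.ably.io",
    ];

    for (const url of hosts) {
      const response = await fetch(url, { method: "HEAD" });
      const viaCloudflare = response.headers.has("cf-ray");
      console.log(`${url}: ${viaCloudflare ? "via Cloudflare" : "bypassing Cloudflare"}`);
    }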

2nd Jul 02:25 PM

Cloudflare is back up, so fallback hosts are now responding as normal.

Resolved

in 36 minutes

June 2019

24th June 2019 11:35:30 AM

Ably Website and CDN availability issues

The Ably website and CDN (affecting website assets and library access) are having availability problems due to the global Cloudflare outage. We are redirecting away from Cloudflare and service should resume shortly.

24th Jun 11:46 AM

All Cloudflare-mediated endpoints have been moved away from Cloudflare.

Resolved

in 11 minutes
24th June 2019 10:42:20 AM

Fallback endpoints unavailable globally: Cloudflare issue

We are re-routing fallback endpoints at the moment.

More information as we have it.

24th Jun 11:08 AM

Fallback endpoints are now restored, circumventing Cloudflare.

Resolved

in about 1 hour

April 2019

20th April 2019 08:21:35 AM

Network timeouts in us-west-1 datacenter

We are seeing a high number of timeouts in the us-west-1 datacenter at present.

20th Apr 08:57 AM

We are investigating the root cause of the issue. If this issue is not resolved soon, we will temporarily redirect traffic away from the us-west-1 datacenter until the underlying issues are resolved.

20th Apr 09:19 AM

All intermittent timeouts in the us-west-1 region have stopped since 09:21 UTC.

We believe the root cause was a networking issue in the underlying AWS datacenter, but have not been able to confirm that. However, the datacenter has appeared healthy for the last hour.

Clients closest to us-west-1 experiencing timeouts would have been automatically reconnected to an alternative datacenter with our automatic fallback capability. See https://support.ably.io/a/solutions/articles/3000044636 for more details.
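As a rough sketch of what this looks like from an application's point of view, using the ably-js realtime client (the API key is a placeholder): the library reconnects, falling back to another host where necessary, and the listeners below only make that behaviour visible.

    import * as Ably from "ably";

    // Sketch only: the library retries and falls back automatically; these
    // listeners simply surface the connection state to the application.
    const realtime = new Ably.Realtime({ key: "<API_KEY>" });

    realtime.connection.on("disconnected", (stateChange) => {
      console.warn("Disconnected:", stateChange.reason?.message);
    });

    realtime.connection.on("connected", () => {
      console.log("Connected (possibly via a fallback host)");
    });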

Resolved

in about 1 hour
14th April 2019 10:30:00 AM

Error rates climbing in us-east-1

We have routed traffic away temporarily from our us-east-1 datacenter whilst we investigate the cause of the increased errors in our us-east-1 datacenter. All traffic is being routed automatically to the nearest datacenters.

14th Apr 11:49 AM

The us-east-1 (North Virginia) datacenter is healthy and active again. Traffic is now being routed to this datacenter as normal.

Resolved

in about 1 hour

February 2019

26th February 2019 12:00:00 AM

Occasional timeouts when querying channel history in some circumstances

Over the last three days (since 26th February), a small proportion of channels experienced timeouts when querying channel history, in cases where a message had been published in the same region as the query within 16 seconds of the query being made. We are rolling out a fix now and looking into why this was not caught by our test suites. We apologise for the inconvenience.
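For context, the sketch below shows the kind of history query affected, using the promise-based ably-js REST client. The channel name and API key are placeholders; a request that hit this issue would fail with a timeout error rather than hang indefinitely.

    import * as Ably from "ably";

    // Sketch only: query the most recent messages on a channel's history.
    const rest = new Ably.Rest({ key: "<API_KEY>" });
    const channel = rest.channels.get("example-channel");

    try {
      const page = await channel.history({ limit: 25, direction: "backwards" });
      for (const message of page.items) {
        console.log(message.id, message.data);
      }
    } catch (err) {
      // A query affected by this incident would surface here as a timeout error.
      console.error("History query failed:", err);
    }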

Resolved

in 4 days

January 2019

29th January 2019 05:45:00 PM

Elevated error rates in ap-southeast-1

Users connected to the ap-southeast-1 region (Asia Singapore) may have experienced elevated latencies and/or errors in the past half hour.

Other data centers were unaffected, so client libraries should have transparently redirected traffic to another datacenter through normal fallback functionality.

As a precaution, we have shut down that data center; users who normally connect to ap-southeast-1 will now likely connect to either ap-southeast-2 (Australia), us-west-1 (California), or eu-central-1 (Frankfurt).

29th Jan 09:59 PM

The ap-southeast-1 region is now fully operational once again.
We're continuing to investigate the underlying issue.

Resolved

in 34 minutes
25th January 2019 08:20:00 AM

Cassandra (storage) issue in us-east-1

One of our Cassandra nodes in us-east-1 is down due to an underlying hardware fault. This transiently caused some errors on a percentage of our realtime service nodes connected to that faulty node. The server has now been isolated.

25th Jan 10:59 AM

The faulty node has now been fully removed from the cluster, and all data has been successfully replicated to a new healthy node.

Resolved

in 28 minutes
3rd January 2019 03:20:00 PM

Elevated latencies and error rates globally

We have been alerted to higher than normal latencies due to a capacity issue in us-east-1, which we are working to resolve.

Update 15:51 UTC: We have severe capacity issues worldwide due to a sudden inability to bootstrap new instances. We are working to fix this as soon as possible.

3rd Jan 04:29 PM

We have identified the underlying issue (a dependency on a third-party system that, by design, should not have impacted our ability to add capacity, but did due to an internal bug). We have applied hot fixes to all environments and rolled this out globally. Error rates are dropping rapidly and latencies are reducing; however, there are still some residual issues that we are resolving manually.

3rd Jan 05:33 PM

We are still experiencing issues in us-east-1, which are causing higher than normal error rates in that region. We believe the issue is caused by an unstable gossip node reporting inconsistent ring states to the cluster.

3rd Jan 06:36 PM

Having identified that the issue relates to gossip and ring state inconsistencies, we are rolling out new gossip nodes across every region, which is rapidly resolving the issues.
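To illustrate what inconsistent ring state means here, the generic consistent-hashing sketch below (not Ably's actual implementation) shows how two nodes gossiping different membership lists can place the same channel on different owners, which is the kind of disagreement that produces channel errors.

    import { createHash } from "node:crypto";

    // Generic illustration only, not Ably's implementation: each node sits at a
    // point on a hash ring, and a channel is owned by the first node clockwise
    // from the channel's own hash.
    function hashOf(value: string): number {
      return parseInt(createHash("sha1").update(value).digest("hex").slice(0, 8), 16);
    }

    function ownerOf(channel: string, ringMembers: string[]): string {
      const points = ringMembers
        .map((node) => ({ node, point: hashOf(node) }))
        .sort((a, b) => a.point - b.point);
      const h = hashOf(channel);
      return (points.find((p) => p.point >= h) ?? points[0]).node;
    }

    // Nodes with inconsistent views of ring membership can disagree on
    // ownership, so requests for the same channel land on different nodes.
    console.log(ownerOf("orders", ["node-a", "node-b", "node-c"]));
    console.log(ownerOf("orders", ["node-a", "node-b"]));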

3rd Jan 06:47 PM

We have stabilised the gossip and ring state globally now, and error rates have reduced dramatically. A few nodes are still emitting channel errors, which we are investigating.

3rd Jan 08:45 PM

Latencies and error rates are back to normal in all regions.

We sincerely apologise to customers who were affected by the incident, and will be posting a post-mortem once the investigation has completed.

9th Jan 12:21 AM

We have completed the investigation of this incident and have written up a full post mortem at https://gist.github.com/paddybyers/3e215c0aa0aa143288e4dece6ec16285.

Any customers who have any questions or would like to discuss this incident should get in touch with the support and sales team at https://www.ably.io/support.

Once again, we sincerely apologise to customers who were affected by this incident. We are doing everything we can to learn from this incident and ensure that the service remains fully operational moving forwards.

Resolved

in about 5 hours