We constantly run end-to-end tests on all our systems to check that they are working.
The current state of the Pusher system is:
WebHooks were not sent for 30 minutes - resolved
3rd May 2013, 12:03 PM UTC
11:23 UTC WebHooks stopped being sent
11:57 UTC WebHooks recovered
Unfortunately, the missed WebHooks have been lost and cannot be re-delivered; if you rely on WebHooks to synchronize state, you will need to use the appropriate API calls to fetch the current state.
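For instance, the current occupancy of channels can be re-fetched from the HTTP API rather than reconstructed from missed WebHooks. The sketch below shows only the request-signing step for such a call, assuming the standard Pusher REST authentication scheme (HMAC-SHA256 over the method, path, and sorted query string); the app id, key, and secret are placeholder values.

```python
# Sketch: signing a Pusher HTTP API request (e.g. GET /apps/{app_id}/channels)
# to re-fetch channel state after missed WebHooks. Credentials are placeholders.
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_request(method, path, params, secret):
    """Return the query string with an auth_signature appended.

    The string to sign is the HTTP method, the request path, and the
    sorted query string, joined by newlines, then HMAC-SHA256'd with
    the app secret (hex digest).
    """
    query = urlencode(sorted(params.items()))
    string_to_sign = "\n".join([method.upper(), path, query])
    signature = hmac.new(
        secret.encode("utf-8"),
        string_to_sign.encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    return query + "&auth_signature=" + signature

# Hypothetical credentials for illustration only.
params = {
    "auth_key": "my_key",
    "auth_timestamp": str(int(time.time())),
    "auth_version": "1.0",
}
qs = sign_request("GET", "/apps/3/channels", params, "my_secret")
# The occupied channels could then be fetched with:
#   GET http://api.pusherapp.com/apps/3/channels?<qs>
```

This reconstructs state from the API as the source of truth, which is safer than replaying inferred WebHook events.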
Presence channel state is inconsistent for some channels - resolved
25th March 2013, 02:03 PM UTC
13:00 UTC Internal configuration errors have resulted in presence and WebHook-related data becoming inconsistent for some channels. We're currently working on a fix. ETA 14:30 UTC.
- 25th March 2013, 03:10 PM UTC
The system is returning to normal; some data is still inconsistent, and we're working on it.
- 25th March 2013, 04:21 PM UTC
Presence channels are now working correctly
- 26th March 2013, 12:13 PM UTC
Some channels received duplicate channel-occupation WebHooks for a period this morning. This issue has been fixed and the system is now operating normally. Apologies for any inconvenience these issues have caused.
Webhook issues - resolved
24th December 2012, 03:14 PM UTC
We're currently experiencing some issues with delivering webhooks. We'll update this when we know more.
500 Errors in the API - resolved
29th November 2012, 04:44 PM UTC
We're seeing elevated rates of HTTP 500 errors for some apps at the moment, and we're working hard on this issue.
Elevated latency - resolved
8th November 2012, 02:38 PM UTC
We're looking into an issue with increased latency at the moment.
Increased latency and failure of messaging - resolved
17th October 2012, 03:05 AM UTC
02:30 UTC Issues are affecting the service. We are currently investigating.
03:00 UTC Service is back to normal.
Issue accepting new connections and increased latency - resolved
12th October 2012, 02:44 AM UTC
We are currently facing issues opening new connections. We are investigating.
03:29 UTC Systems are back to normal.
03:55 UTC Some stats data is currently missing from dashboards between approximately 02:30 and 03:00 UTC. This data will be recovered over the next 24 hours.
Temporary webhook failure - resolved
7th October 2012, 10:47 PM UTC
We're currently experiencing issues delivering webhooks. We're aware of the problem and are working on a fix.
Dashboard inaccessible - resolved
31st August 2012, 01:03 AM UTC
We're currently resolving some problems with our dashboard. All other systems are unaffected.
- 31st August 2012, 01:55 AM UTC
Dashboard is accessible again. Statistics are being updated.
Pusher issues - resolved
30th August 2012, 10:01 AM UTC
We're currently experiencing issues with messaging, further information will follow.
- 30th August 2012, 10:27 AM UTC
This was an issue in our load balancers, which should now be fixed.
Connectivity issues - resolved
29th August 2012, 12:22 PM UTC
Pusher is currently having WebSocket connectivity issues. We apologize for the inconvenience.
- 29th August 2012, 12:31 PM UTC
This is now fixed.
Elevated latency and sporadic 504s on REST API - resolved
27th August 2012, 06:54 PM UTC
We're currently looking into reports of increased latency, and are taking steps that should resolve this soon.
- 28th August 2012, 04:01 PM UTC
We have identified the cause of the problem, and we're working on changes that will fix it. Apologies for the inconvenience.
Webhooks problems - resolved
16th August 2012, 02:57 AM UTC
The Pusher WebHooks notification service exhibited problems for approximately 30 minutes due to a faulty server. We apologize for any inconvenience caused.
Small service disruption - resolved
5th August 2012, 03:41 PM UTC
One of our components temporarily stopped responding, and service was disrupted for around 10 minutes. The issue has been resolved, and we don't expect further recurrences.
Authentication errors - resolved
6th July 2012, 02:11 AM UTC
Last night one of our system components failed over. While the service continued running, some users' authentication keys took longer than expected to replicate. We're sorry for any inconvenience to those affected, and we will look into why this happened.
Connectivity issues - resolved
29th June 2012, 08:27 PM UTC
We are currently experiencing issues with connectivity in AWS. We are investigating.
07:47 UTC We are mostly back up and running. Core real-time services are working, except for posting to the API via SSL (due to having to switch away from Amazon ELBs, which are still causing issues). The management site will recover when the RDS it relies upon recovers.
11:10 UTC Site is now available again
11:15 UTC SSL requests are now supported again for the API. All systems are now functioning correctly.
Load balancer failure - resolved
12th May 2012, 04:50 AM UTC
We had some load balancer failures between 11:30 and 11:40 UTC which caused WebSocket connections to fail. This is now resolved.
Increased latency in the system - resolved
9th May 2012, 12:25 PM UTC
On a number of occasions today, latency has increased beyond acceptable levels. We are looking into the cause, but suspect a faulty node that we are working to replace. Sorry for the inconvenience.
Network connectivity lost - resolved
15th March 2012, 02:51 AM UTC
09:30 UTC Lost network connectivity due to an AWS issue; awaiting the root cause.
09:45 UTC Service is operating normally.
We are experiencing problems with message delivery - resolved
6th March 2012, 09:02 AM UTC
During an upgrade to our infrastructure to increase capacity, we began having problems delivering messages. We are working to fix this.
We will keep you informed.
- 6th March 2012, 09:19 AM UTC
Invalid API access credentials have been cached by the system. It will take a few minutes for us to flush these caches.
- 6th March 2012, 09:54 AM UTC
Pusher service is back to normal.
We are experiencing some message delivery failures - resolved
5th March 2012, 01:38 PM UTC
We are investigating and will keep you updated.
We have resolved this issue. Message delivery is back to normal.
Client connection via SSL is currently unavailable - resolved
5th March 2012, 12:01 PM UTC
We are working to fix this.
Non-SSL connections work fine.
SSL connections, and the rest of the Pusher service, are now operating normally.
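While SSL is unavailable, a client can fall back to the plain WebSocket endpoint. A minimal sketch of building the two connection URLs, assuming the documented ws.pusherapp.com endpoint format (protocol version 7); the app key is a placeholder:

```python
# Sketch: building the Pusher WebSocket connection URL, with a fallback
# to the plain (non-SSL) endpoint when wss connections fail. The app key
# below is a placeholder; the endpoint format follows the Pusher
# WebSocket protocol (version 7).
def connection_url(app_key, use_ssl=True):
    scheme, port = ("wss", 443) if use_ssl else ("ws", 80)
    return f"{scheme}://ws.pusherapp.com:{port}/app/{app_key}?protocol=7"

url = connection_url("my_app_key", use_ssl=False)
# -> "ws://ws.pusherapp.com:80/app/my_app_key?protocol=7"
```

Note that falling back to non-SSL transmits traffic unencrypted, so it is only appropriate for non-sensitive data during such an outage.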
Load balancer down - resolved
5th March 2012, 11:31 AM UTC
One of our load balancers is down. We are working to bring it back up. Until we fix the problem, it will take longer than usual for clients to connect.
The load balancer that was down is now back up.
Connectivity issues - resolved
28th February 2012, 09:58 AM UTC
We're currently experiencing an issue which may result in connections being closed. We're working to resolve this ASAP.
- 29th June 2012, 11:33 PM UTC
This was related to an incident in AWS. Some connectivity has now been restored, but we are working through some remaining issues. SSL is currently unavailable on our REST API.
Issues establishing connections - resolved
10th February 2012, 09:23 AM UTC
We've been experiencing some intermittent connectivity issues in the last hour. We're looking into it.
Message connectivity issue - resolved
23rd November 2011, 10:36 AM UTC
We are looking into this problem at the moment, and will update the status when we know more.
- 23rd November 2011, 11:23 AM UTC
Things have returned to normal. We will now look into the causes behind this.
Load balancer failure - resolved
20th September 2011, 05:50 PM UTC
We have suffered the loss of one of our WebSocket load balancers.
This load balancer has been removed from DNS, but you may experience extra connection attempts until the change propagates.
- 22nd September 2011, 09:54 AM UTC
We had another temporary failure of the same load balancer. We have now decided to provision a new load balancer, to avoid any further issues with this EC2 instance.
Brief connectivity loss due to AWS - resolved
9th August 2011, 03:46 AM UTC
Last night at around 7:40 PM PDT, an Amazon availability zone lost connectivity. This affected us for about 15 minutes. Connectivity was restored, and we are now operating normally. Sorry for any inconvenience.
API not accepting events - resolved
7th May 2011, 04:07 AM UTC
We're currently investigating - update coming soon.
11:11 UTC We seem to be experiencing an issue with Amazon's Elastic Load Balancer.
11:24 UTC We have switched to a new Elastic Load Balancer; however, DNS will take some time to propagate to api.pusherapp.com. The API can temporarily be accessed from
Slow API performance - resolved
11th April 2011, 08:17 AM UTC
We experienced some issues with Amazon's Elastic Load Balancer this morning which caused much higher than usual API request times. Mean latency approached 1s between 9:30 UTC and 12:30 UTC and was highly variable. All requests were being handled, but the increased latency probably resulted in some requests timing out.
We solved this issue by provisioning a new ELB instance, and are addressing the root cause with Amazon.
Sorry for the inconvenience.
Timeouts in API and on socket connections - resolved
18th March 2011, 10:42 AM UTC
We saw several issues with API timeouts and socket connection timeouts between 2 AM and 5 AM this morning. We're still working out the cause of these issues. Pusher is currently operating normally.
Disruption to socket servers - resolved
1st December 2010, 09:05 AM UTC
For approximately 15 minutes the socket processes were not running. No messages were delivered during this period (around 4pm GMT).
This was caused by all socket processes failing, and then a failure of our process monitoring to restart them. We're continuing to investigate this to ensure that it cannot happen again. Our apologies for this disruption to the service.
UPDATE This issue has resurfaced (11:30 PM GMT). We have restarted the affected processes, but we are still working on a permanent solution.
Message relay failure - resolved
25th October 2010, 12:40 PM UTC
As of approximately 20:00 GMT, our service has stopped relaying messages from the API to the connected clients. This seems to be a problem with our message bus that we are trying to resolve. We will update this status when it has been resolved.
Sorry for any inconvenience.
We have resolved the issue, but are continuing to look into why it happened and how to prevent it from happening in the future.
Increased latency yesterday - resolved
22nd October 2010, 06:59 AM UTC
For approximately 5 minutes yesterday the mean latency of Pusher exceeded 500ms which is unacceptable to us. This was caused by a sharp spike which saw us handle approximately 10 times more traffic during an hour than our previous maximum. We quickly started another instance and brought mean latencies back down below 100ms. We apologise for any inconvenience.
Stability issues over the weekend - resolved
27th September 2010, 09:53 AM UTC
Over the weekend we experienced failures on several of our socket servers. As a result, some clients were unable to connect, and in some cases messages were incorrectly routed.
This was due to a bug in our Redis client library, which we have now upgraded. The problem should not happen again.