On Tuesday 23rd March at approximately 6:30pm UTC we observed delays in processing automated messages (behavioural, transactional and workflow messages).
We identified an issue with our round-robin system, and debugged and fixed the root cause by approximately 9:30pm UTC. Between 6:30pm and 9:30pm UTC, automated messages were operating at severely reduced throughput, resulting in a backlog of automated messages. This backlog was fully processed, with messages running in realtime again, by 11pm UTC. Newsletters sent between 9:30pm and 12:30am UTC (Wednesday) were sent at a reduced rate whilst the backlog was processed, resulting in some delays.
We've been posting updates for the affected system components on this status page over the last 12 hours as these delays occurred.
Since 12:30am UTC we have identified a second issue, this time with some of the underlying hardware provided by our cloud provider. **It appears this is a hardware issue and is not related to the earlier incident**. Messages **are** currently sending in realtime, but we are operating at reduced overall capacity, with some delays in aggregating conversion metrics and sending webhooks.
We're working with our cloud provider to resolve this issue; once it is resolved, we'll be able to scale back to full capacity. In the interim we're monitoring and will post updates if the situation changes or message send speeds are affected.
If you have any questions, please email us at firstname.lastname@example.org.