Message Labs
I've had reports from a couple of suppliers who have delayed messages; they use Symantec for their mail gateway. I suspect the impact is far-reaching.
Symantec.cloud is in the middle of rebuilding its portal this lunchtime following a prolonged outage spanning more than 24 hours. The snafu stemmed from a database crash. Problems first surfaced at 1000 UTC (1100 BST) on Monday and dragged on until lunchtime on Tuesday, as detailed in a series of updates to Symantec.cloud’s …
As usual, with all cloud services: if continuous service is critical, then you should have two completely independent suppliers of the same service.
In the case of spam filtering this is easy: just have two MX records pointing to two different cloud filtering suppliers.
Of course, this is going to cost you double. Hence you have to decide whether the extra cost is worth it to you or not.
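To make the two-supplier idea concrete, here is a sketch of what the zone records might look like. The provider hostnames are illustrative, not real services:

```
; Two MX records pointing at independent filtering suppliers
; (hostnames below are hypothetical examples)
example.com.  IN  MX  10  cluster1.filter-provider-a.example.
example.com.  IN  MX  10  cluster2.filter-provider-b.example.
```

With equal preference values (10/10) inbound mail is spread across both suppliers; giving the second record a higher value (e.g. 20) instead makes it a pure fallback used only when the first supplier is unreachable. Either way, both suppliers need to be configured to deliver filtered mail on to your own servers.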
Not sure they really know what an impact this is having on customers, but I'm not impressed from this end.
I've had to put bypasses in place to get to the Internet. I'm beginning to worry that they don't actually know what they're doing.
We were told the following this morning:
>As a way of an update, there are two issues which arose from the DC power-down over the weekend. First and most pressing is the corruption of the ClientNet DB servers. Secondly, the Spam Manager 1 and 5 server rack did not power back up correctly (believed to be a power issue).
>The first issue resulted in the customer portal ClientNet, as well as Insight, being unavailable. The parent case for this portal incident INC-348312 is 10364778, and currently we have 58 cases associated with this incident. We have also informed the Customer Care team that inbound calls from clients may spike because of the incident on the cloud side.
>As of 06:50 UTC, the War Room is still ongoing. After restoring the necessary databases and while running checks, Tier 2 engineers found errors due to inconsistencies in a few tables, and they are working to identify possible options to fix that. The database development team has been brought in to suggest the best options to proceed.
>Client Support enabled the Message of the Day on the phones and updated the splash page to reflect that we are aware of the incident and are working with top priority to resolve it. Support is also focusing on inbound calls and making proactive outbound calls to specific clients (incl. those who reached the Support box by email).
>Further updates will follow.
>As of 06:50 UTC, the War Room is still ongoing. After restoring the necessary databases and while running checks, Tier 2 engineers found errors due to inconsistencies in a few tables, and they are working to identify possible options to fix that. The database development team has been brought in to suggest the best options to proceed.
That's priceless. The above report strongly suggests they didn't implement their backup regime very well. Super ironic, as until recently Symantec owned NetBackup, the high-end enterprise-wide backup suite.
Normally I wouldn't laugh at someone's misfortune... but Symantec are a piece of work, so laughing now. :)
>The above report strongly suggests they didn't implement their backup regime very well.
Also, it points out that the service isn't a cloud.
A cloud would have multiple replicated instances spread across various data centres, service health monitoring and dynamic DNS and IP load-balancing across known good servers.
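The health-monitoring half of that description is easy to sketch. Below is a minimal, illustrative Python version of the idea: probe each replica, then hand the DNS/load-balancing layer only the replicas that pass. The hostnames and the `healthy_targets` helper are hypothetical, not any vendor's API:

```python
import socket

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_targets(replicas, probe=is_reachable):
    """Filter (host, port) replicas down to the ones the probe reports healthy.

    A real setup would feed this list into dynamic DNS or a load balancer's
    backend pool; here we just return it.
    """
    return [(host, port) for host, port in replicas if probe(host, port)]
```

The point is the shape of the loop, not the probe itself: as long as at least one replica in another data centre passes the check, traffic keeps flowing, which is exactly what didn't happen here.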
Well, at least they and their customers have saved money by outsourcing the function to a specialist company with in-depth expertise.
>Also, it points out that the service isn't a cloud.
>A cloud would have multiple replicated instances spread across various data centres, service health monitoring and dynamic DNS and IP load-balancing across known good servers.
Only if done properly. This is Symantec we're talking about here. They prioritise hiring sales staff, and de-emphasise quality engineering.
Some of their guys are good, but some have less than stellar attention to detail. (A case in point probably caused this outage, and/or the unusable backups.)
Symantec reaps what it has sown: screw over your staff, threaten them all the time, then ceremonially make them redundant.
That means the people who had the clues are all sitting down the pub laughing their arses off that Symantec's cloud is down.
Frankly I was surprised to hear that Symantec was still a company AND that intelligent people bought things from them.
I suppose I should have known that five years of closing my eyes and wishing wouldn't make them go away but heck I even included "and please protect all IT managers from buying Symantec products" in my nightly prayers.
Oh, you mean that awesome computer performance enhancement tool? We used it company-wide in our Windows XP days to revitalize our network and get an extra year out of the workstations so we could afford our Windows 7 roll-out. Good to know it has other world improving uses!