IBM's Auckland datacentre issues resolved

Eagle Technology says 25 of its clients affected

Problems at IBM’s Highbrook datacentre in Auckland appear to have been resolved by 10am today.

The Virtual Server Services environment had been down since 3am on Monday.

An IBM spokesperson in Australia says in a bare-bones statement: “The New Zealand Virtual Server Services environment has been restored. We are actively working with all impacted clients to ensure they are fully operational.

“All other service offerings delivered from our Highbrook data centre continue to be fully operational.”

There is no indication of what caused the outage at the $80 million data centre, which opened in May 2011 to provide cloud services to major IBM clients.

Eagle Technology has been badly affected by the outage. Computerworld spoke to CEO Gary Langford at 11am this morning.

“It’s caused us quite a bit of sleeplessness,” says Langford. “Twenty-five of our clients were at Highbrook.”

“The system only came up in the past hour, and we’re now checking everything and working through it.

“Once the dust has settled, we’re going to have a need to understand what went wrong.”

He says the outage was ironic, given that Eagle was last year named IBM’s Cloud Partner of the Year.


Cloudy Computing


Can't be IBM fault then. Must have been some rodents chewing on those new cables.



So with what, 50+ hours of downtime (possibly more). That should mean getting their reliability back to 99.9, or dare I say 99.99, percent uptime (under an hour of downtime per year) will take a thousand years or so (provided it doesn't happen again). But then I'm sure they have the best spin doctors in the business working on the problem of how to complete RFPs in the future which ask about the reliability of the service.
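For reference, the arithmetic behind the "nines" in the comment above is straightforward. A minimal sketch in Python (the figures are illustrative availability maths, not IBM's published SLA terms):

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_hours(availability_pct, period_hours=HOURS_PER_YEAR):
    """Downtime budget implied by an availability percentage over a period."""
    return period_hours * (1 - availability_pct / 100)

# "Three nines" (99.9%) allows roughly 8.8 hours of downtime per year;
# "five nines" (99.999%), not 99.99%, is the level that allows ~5 minutes.
print(allowed_downtime_hours(99.9))          # ~8.76 hours/year
print(allowed_downtime_hours(99.999) * 60)   # ~5.26 minutes/year

# A single 50-hour outage, measured over a full year:
print(100 * (1 - 50 / HOURS_PER_YEAR))       # ~99.43% uptime
```

So a 50-hour outage on its own drags a year's measured availability to roughly 99.4 percent, well short of even three nines for that year.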



seriously? why?



> Be Jennifer Moxon
> working working working making money from Clouds
> MFW IBM Cloud fails
> ಠ_ಠ
> MFW IBM Australia won't let me talk to my customers
> ಠ_ಠ



Where was IBM New Zealand MD Jennifer Moxon during this incident? All the comms seem to be coming out of IBM Australia.

[This comment has been moderated.]



"An IBM spokesperson in Australia"

There we go, we're being treated like Fijians again by our Aussie "brothers".



"All other service offerings delivered from our Highbrook data centre continue to be fully operational." I'm sure if you're a VSS client, that's very important to you.



So how do they know they have "resolved" it if they don't know the cause? If that is true, what we have is a workaround, a band-aid, and not a permanent fix. Or are they just trying to hide it while talking to their PR guys about what to really say?



How do you take a whole data centre down?
Did the pipe in or out get cut by the gardener digging weeds?
Perhaps MegaIBM was targeted by the FBI?
Presumably this data centre is a cheap, money-making, frail design.
I am staying out of the cloud for now!!!!



IBM NZ lacks quality engineers to maintain the VSS services infrastructure. Somewhere, at some point, someone (or more than one person) committed an act of engineering negligence on the VSS front-end and back-end components, which torpedoed in various directions and brought the whole house of cards down, and there may have been no real way of undoing the negligent act. Or they may have tried to undo it, which then probably made it worse. An $80 million data centre running into issues suddenly, when it is supposedly world-class standard: don't they have monitoring, proactive checks and detection, prevention mechanisms in place, any regular health checks? Go figure.



IBM haven't won any big accounts for years in NZ, and what they do have they seem more inclined to screw over.
As for monitoring, they don't even use their own toolsets, as they don't trust them, and reinvent the wheel each time!

IBM are only interested in taking money, not providing proper support to their customers.
I'm surprised people still use them, but I guess it's all in a name huh?!



At some stage Computerworld may wish to either increase moderation, or discontinue anonymous postings. In this case, it seems that there are a whole heap of spiteful hindsight experts who want to kick IBM (their turn this time - others have been equally treated) without actually even being able to draw the distinction between a Datacentre (which did not go out) and a Cloud Service running in a Datacentre (which did).

I do not work for IBM, but I am not inclined to post my name, given it will help perpetuate further spite and ill-informed judgement.



Do people really think that cloud means automatic DR? It seems this was a VSS environment outage, which any company worth its salt would have a DR plan for. Not sure what the circumstances were, but it sounds like a few IT managers may have their heads in the clouds if they think a single infrastructure can support a business.



The quality of the comments on this forum is really staggering.

Seriously, people: how about we stop bashing IBM because they are big blue Yanks and look at the facts. Not that IBM should be smelling of roses; there is some serious rot. But let's at least be rational about what the issues really are.

As previously noted in other comments, the data centre did not go down, a single virtual server hosting offering went down. The reporting on this across the NZ media has been disgraceful and fear mongering.

The comment titled "IBM NZ lack quality engineers" is just an example of NZ's tall poppy syndrome and the poster obviously lacks understanding of the complexity and opportunity for failure that exists in such a complex system.

I don't see anyone claiming that Amazon Web Services "don't have the engineers capable of running a cloud", despite the fact that in the last six months of 2012 they suffered three major outages, one of which continued for more than 70 hours and resulted in customer data loss. And it's not just the AWS and IBM NZ clouds that have suffered issues. Last year Tumblr, GoDaddy, Salesforce, Google Talk, Dropbox, Google Apps, Office 365 and Azure all suffered major customer-impacting outages.

People need to understand that architecting business IT using cloud platforms requires you to think differently to how it was done when we all owned all our own hardware.

Secondly, IBM don't comment locally because they are not allowed to. That's a corporate policy and has nothing to do with the calibre of the people in country.

Thirdly, the availability advertised is three nines, not five nines, and many of the customers are on older contracts which were sold with an SLA of 98.5%. Furthermore, for cloud services the uptime is typically calculated on a month-by-month basis, not an annual basis.

Finally, people who are running critical business services need to understand their DR posture and the resulting risk. If they choose to put all their services into a single site with no DR plan, they will almost certainly at some point suffer an outage. If your services are that critical, plan for disaster and don't put all your services into a single-site offering. It's not rocket surgery.

The real problem at IBM NZ is that management (from the very top in the US, all the way to exec level in AU/NZ) have run the delivery organization into the ground, driving down morale and cutting staff levels to the bone, forcing overworked, smart technical engineers to focus on mundane compliance tasks that could or should be automated but have not been, due to complex, cumbersome and frequently changing processes.
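The "Thirdly" point above matters more than it might appear: measuring an SLA per calendar month rather than per year changes what a single long outage means. A hypothetical sketch (the 99.9% and 98.5% figures come from the comment; the month length is an assumption for illustration):

```python
def sla_met(downtime_hours, sla_pct, period_hours):
    """True if uptime over the measurement period meets the SLA percentage."""
    uptime_pct = 100 * (1 - downtime_hours / period_hours)
    return uptime_pct >= sla_pct

MONTH_HOURS = 30 * 24   # 720-hour month, assumed for illustration
YEAR_HOURS = 365 * 24   # 8760

# A 50-hour outage blows a 99.9% monthly SLA outright
# (the monthly budget is only ~43 minutes)...
print(sla_met(50, 99.9, MONTH_HOURS))   # False
# ...and even the older 98.5% SLA (monthly budget ~10.8 hours).
print(sla_met(50, 98.5, MONTH_HOURS))   # False
# Measured annually against 98.5% (budget ~131 hours), the same
# outage would pass.
print(sla_met(50, 98.5, YEAR_HOURS))    # True
```

In other words, monthly measurement is stricter on a single long outage than the annualised figures quoted elsewhere in this thread suggest.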



Everyone can have something go wrong; the secret is not to make it worse with bad communication... I wonder if IBM NZ and AU have learnt their lesson yet. Probably not.

