Sandy Razes Mediaocean Too, Delays Crucial Media Buys

As if buying and selling ads isn’t chaotic enough during the political season, Hurricane Sandy upped the pandemonium quotient by knocking out some global systems used by agency clients of Mediaocean, the largest ad transaction service provider in the industry.

But according to the company, the main effect was a delay in service, not a loss of data. Many clients had service restored -- at least at some level -- within roughly 24 hours. And by Friday service was “100% restored,” a company rep said.

On Tuesday the company implemented what it called a “disaster recovery plan,” which had many staffers working around the clock to restore the systems. The main effort involved shifting services linked to a battered processing facility in New York City to the company’s London operation.

By Wednesday morning, European clients were up and running again, Mediaocean CEO Bill Wise reported in a company-wide memo to employees. “U.S. clients are starting to get access,” he wrote. “System performance will undoubtedly be hurt, but considering this hurricane is the largest storm-related outage in U.S. history, clients should understand.”

Wise also reported that the company had retained an IBM consultant in London “to help restore and improve performance.”

Lesson learned, per Wise: “This hurricane has made obvious we need better processes in place -- from infrastructure to systems to communication to alerting users to outreach to our senior agency partners.”

1 comment about "Sandy Razes Mediaocean Too, Delays Crucial Media Buys".
  1. Henry Blaufox from Dragon360, November 5, 2012 at 10:06 a.m.

    If I recall correctly, MediaOcean, or at least its predecessor DDS, used to claim full failover capability between New York and Europe, since it had dual processing facilities. This is nothing out of the ordinary. Why did it take more than three days to switch over completely? Isn't there a disaster recovery plan the firm could use to run a drill (just in case), put staff in place at both sites to make the switch, and then execute the switchover as it became clear one was likely to be needed? Or were there other logistical problems that hindered execution of the failover plan?
