Tuesday
Oct 15, 2013

Datacenter Migrations – Colocation Processes and Procedures

Clients get excited when starting a colocation project because they anticipate relief from managing their own datacenter.  With that “dirty work” behind them, they turn their attention to the latest and greatest whiz-bang technology the last vendor paraded in front of them.

What they forget is that colocation facilities have rules and processes designed to protect all tenants.  Depending on your own internal procedures, these can appear onerous and limiting.

Mastering these processes prior to a migration is of the utmost importance.  We have witnessed Project Managers who learn these processes become the ones to receive and deliver shipments, authorize entry into the datacenter, arrange parking, coordinate vendor repair personnel, and so on.  Certainly not a great use of their time, and something that will eventually need to be transitioned to support staff.

We find all colocation vendors eager to train and coach new tenants.  They want tenants to have positive first impressions of their facility and the processes supporting the facility.  In many respects, after the “glow” of the datacenter tour wears off, it is the processes and procedures that impress customers.

Critical processes for support teams to master include:

  1. Where to ship equipment
  2. Where to find the equipment once it has been received by the colocation facility
  3. How to get equipment delivered to your cage/cabinets
  4. Where to stage equipment
  5. Where to dispose of cardboard/rubbish
  6. Where to find the server lift and ladders
  7. How to order a telecommunications cross-connect cable
  8. How to get vendors access to the facility (one-time and recurring)
  9. How to order “remote hands” services
  10. How to access the facility during off hours
  11. How to use the “man-trap” (where appropriate)
  12. Location of the loading dock
  13. How to get a key to your cage or how to get escorted to the cage

Many colocation customers wait until they have gone live and something breaks to learn these processes. Taking the time early on to acquire these skills makes for a smoother migration and reduces migration risk and future downtime.  Learning how to integrate these processes with those of your own organization will create fluid solutions for all parties.

Sunday
Sep 29, 2013

Datacenter Migrations – Monitoring During Migrations

While helping a client with a Managed Services and Monitoring RFP, I visited a company that performed monitoring and managed services for datacenters and the servers, storage, and networks making up the datacenter.  These folks are “monitoring Ninjas” and obsess about what and how they monitor. 

When I asked them to talk more about their managed services, the President stood up and said, “You don’t get us.”  He explained that his company is more profitable the fewer unplanned incidents or outages his customers incur.  His mission was to make sure everyone in his organization was focused on catching problems before they produce business impact.  He lived this every day.

Monitoring tends to be more of an “after the fact” activity.  Engineers tend to be more concerned with virtualization, IOPS, converged infrastructures, and the like than with the tools that help us manage environments.

We see monitoring as a migration tool helping to deliver the following:

  • An indication of environment health
  • A measure of the progress of migrating systems (auto-discovery finds new systems, and simple pings tell you whether they are live or in progress).  The migration is over when “everything is green”
  • Capacity measurement
  • Defined escalation for issues, interfaces to ticketing systems, and well-defined processes

Organizations implement monitoring in different ways, with some doing simple “up/down pings” and others having the capability to monitor the performance of individual processes. All types of monitoring can be of great help to the people performing datacenter migrations.
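To make the “up/down ping” idea concrete, here is a minimal sketch of a migration progress check, in Python.  It is illustrative only: the host names and ports are hypothetical, and a TCP connection attempt stands in for an ICMP ping so the script needs no special privileges or monitoring product.

    #!/usr/bin/env python3
    """Minimal up/down check for a list of systems being migrated (a sketch, not a product)."""
    import socket

    # Hypothetical systems landing in the new datacenter, with a port each should answer on.
    MIGRATION_HOSTS = {
        "db01.newdc.example.com": 1433,   # database tier
        "app01.newdc.example.com": 443,   # web/application tier
        "file01.newdc.example.com": 445,  # file services
    }

    def is_up(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection succeeds -- our stand-in for an up/down ping."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        results = {host: is_up(host, port) for host, port in MIGRATION_HOSTS.items()}
        for host, up in sorted(results.items()):
            print(f"{'GREEN' if up else 'RED':5} {host}")
        print(f"{sum(results.values())}/{len(results)} systems responding; done when everything is green")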

One thing to be careful about when monitoring during a datacenter migration is to suppress alerts for the systems actively being migrated.  Having dozens of IT staff receive text messages about down systems during a migration can create many issues for the project manager.
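Most monitoring products have a maintenance-window or scheduled-downtime feature for exactly this purpose.  The sketch below, with hypothetical hosts and windows, only illustrates the underlying idea of filtering pages for systems inside their migration window.

    #!/usr/bin/env python3
    """Sketch of suppressing alerts for systems in an active migration window (illustrative only)."""
    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Hypothetical migration windows: host -> (start, end), in UTC.
    MIGRATION_WINDOWS = {
        "db01.newdc.example.com": (
            datetime(2013, 10, 19, 0, 0, tzinfo=timezone.utc),
            datetime(2013, 10, 19, 8, 0, tzinfo=timezone.utc),
        ),
    }

    @dataclass
    class Alert:
        host: str
        message: str
        raised_at: datetime

    def should_page(alert: Alert) -> bool:
        """Page the on-call for everything except hosts inside their migration window."""
        window = MIGRATION_WINDOWS.get(alert.host)
        if window is None:
            return True
        start, end = window
        return not (start <= alert.raised_at <= end)

    # A "host down" alert raised mid-window is logged, but nobody gets a 3:30 AM text.
    alert = Alert("db01.newdc.example.com", "host down",
                  datetime(2013, 10, 19, 3, 30, tzinfo=timezone.utc))
    print("page on-call" if should_page(alert) else "suppressed (migration window)")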

Implement monitoring before starting your datacenter migration and you will find the secret to turning art into science!

Sunday
Sep 8, 2013

Datacenter Migrations – Turnover to Production Support Organization

You have just spun up a new datacenter.  Doing it right, you engage vendors, VARs, consultants, your staff, and anyone else who can bring experience and expertise so you reduce risk.   Equipment is installed, telecommunications are working, everything is cabled, and green lights are flashing all around.

When do you turn over the datacenter to your production support team?

This is a critical question, defining both the speed of adoption and the level of distraction for production support teams.

Handing over a newly built datacenter, yet to be tested, can create a lot of support work for a production support team.  Assuming the team is supporting other systems in production, this subtracts energy and mindshare from the people keeping everything running.

Turnover of a fully tested datacenter creates feelings of someone else’s problem being “thrown over the wall.”  The production support organization becomes disenfranchised from the outcome and might sit in judgment of the engineering team and vendors tasked to build the datacenter.

We believe the optimal solution is as follows:

  1. Have the production support team participate in vendor/VAR selection.  After all, they will be the ones dealing with the vendors long after you have built the datacenter.
  2. Engage the production support team during the design process.  A datacenter should be designed for ease of maintenance, problem resolution, and growth.  Let the production support organization take ownership of defining things like monitoring, location of equipment, procedures for replacement of faulty equipment, and the location and method of cabling.
  3. Production Support “Ownership” of the datacenter should start with testing.  It is the job of those doing the testing to prove to those supporting it that it is ready for production.  Testing needs to include the following:
    a. Use of existing or new runbooks for support of the new environment
    b. Testing of the processes and procedures used in the support of the datacenter (e.g., a disk drive fails, the array autodials the vendor, and the vendor needs to be able to get on-site to replace and dispose of the disk)
    c. Monitoring, alerting, and escalation procedures
    d. Delivery and disposal of equipment
    e. Reservation and tracking of “Remote Hands” activities (if procured through a colocation or cloud vendor)
    f. Confirmation that vendor response times for maintenance activities are clear and work in practice
  4. Ask the production support team to provide daily updates on the health of the new datacenter.

Entering a datacenter buildout project with the understanding that the new datacenter is not in production until your production support team has full ownership and responsibility will save time and aggravation during a migration. It also creates a more inclusive environment.

 

 

Monday
Aug 26, 2013

Datacenter Migrations – Avoiding Pitfalls

It’s 5:48 AM Saturday during a 45-day, 7x24, colocation datacenter migration.

A failed disk drive combined with failover testing causes a storage array “head” to throw an error and need to be taken off-line for analysis.

DBAs in India migrating databases suddenly notice performance is half what it was the day before and copy jobs are failing.

The environment is in transition, so the customer’s IT organization hasn’t taken over support from the vendor.  A bunch of engineers are debating, over email, why this has happened.

The Datacenter Migration Project Manager gets in his car and drives to the colocation site, monitoring the engineers’ discussion along the way.

Having never replaced a disk drive in a storage array, the Project Manager calls the customer’s Technical Manager for instructions on how to replace the drive.  Support personnel in India see the drive come back on-line, followed by the redundant storage array “head.”

The engineers do the “naked hula dance,” figuring the problem has been solved.  Everything is back on-line, no errors are reported, and the hardware shows all “green lights.”  The migration continues.

I’ll give you a second to guess all the things wrong with this scenario…

Successful datacenter buildouts and migrations should be precision activities involving architects, engineers, project managers, vendors, staff, and management.  Processes need to be documented and followed.  Let’s see what went wrong here.

  1. Diagnostics, engineering, and analysis were occurring while semi-production activities continued.  Testing windows were delineated, yet other activities were proceeding at the same time.
  2. The database performance problem was noticed through failing copy jobs.  Where was the monitoring?
  3. The vendor who did the installation is now two weeks beyond the delivery date and no handoff has occurred to the internal support organization.  Who is supporting the environment?
  4. Procedures were not set up with the colocation provider to allow vendors to show up and replace the disk drive.  Why was a Project Manager replacing the disk drive?
  5. Once everything came back on-line, work proceeded.  Why wasn’t root cause analysis performed?
  6. DBAs, support personnel, engineers, vendors, project managers, and technical managers were all doing their own thing.  Why wasn’t a problem or incident management process followed?

In this multi-part series, we examine some important activities and processes one should have in place before beginning a datacenter migration.

Friday
Aug 16, 2013

When a Company (like Patch) lays off People

I used to blog weekly for the Westborough Patch, and apparently the blog was popular…one of the more popular ones in MetroWest Boston.

The editor and I became very friendly.  In addition to my blog, I would run out and take pictures of current events…fires, a bank robbery, even a drowning.  She moved on, and I had the opportunity to meet some of the other people at Patch.  They are all good people, interested in bringing journalism to the hyperlocal market in the face of traditional media’s decline.

Alas, the company struggled.  Time will let analysts determine why Patch struggled.  Confusing editorial direction?  Unnavigable site improvements?  A bloated cost structure?

Nonetheless, lots of people lost their jobs today.  By some accounts, half the company was cut. The CEO did a classic emotion-based firing.  Patch became brittle.

Today was when the bell tolled.  The “Go Forward Team” was led to one conference room, while those declared unnecessary were led to another room to get their walking papers and a rumored one week of severance.

Frankly, I’ve lived through this many times.  The first layoff I experienced was in high school, when I was laid off from my summer job so the marina I worked at could show a profit in its last month.  No big deal; I had a job three days later.

My next personal experience was with Dennison Stationery Products Company, a division of Dennison Manufacturing Company.  Layoffs were like using drugs for the first time.  The first one was painful, with every name getting reviewed by the executive team.  The next one simply had a review of the number of people per department.  On the last, the President came to my office and, in pain, said he needed two names and my name couldn’t be on the list.  I saw how a management team developed calluses to deal with ongoing layoffs.  Nobody was impervious to the pain.

Similarly, at Fidelity Investments there was a period of regular “reductions in force” occurring around October 1.  Fidelity hired some of the best and the brightest, so everyone knew what to expect.  And come October 1, the organization was frozen as managers had brief meetings with individuals and immediately sent them off to human resources for a package.

In a word, this “sucks.”  I can deal with terminating someone because they stole from the company.  Letting someone go because they just are not needed any more, through no fault of their own, is miserable.

So today Patch.

There were meetings at certain times, all planned with surgical precision, I’m sure.  Two groups of people are coming out of today…the “go forward team” and those let go.  It’s my view both groups are in pain.  The ones staying feel guilty and sad for their friends.  The ones let go feel anger and are scared about finding a job in this economy.

My advice for the two groups follows:

Go forward team

You need to let the dust settle for the weekend.  Next week, you need to assess what you have left and move forward.  Those deciding to complain incessantly about AOL (Patch’s owner) need to move on.  This is a time where the expression, “All we have is lemons, we need to make lemonade,” fits.  While being sensitive that a “mass execution” of coworkers just took place, the remaining people need to lead their way to success. 

As I said to one editor today, you need to deal with the emotions of those remaining, and then go home and puke.  It is not fun.  The pain will pass.  The sooner people re-engage around the work the better it will be for the people and the company.

You need to be positive and upbeat while you feel let down.

For the departed

This sucks.  It is a kick in the gut. There are no words to make you feel better.

Now, get off the couch and get to work.

Your job is now finding a job.  Pure and simple. 

“Taking time off” is bullshit unless you are independently wealthy and don’t need to work.

 

  • Task one – apply for unemployment.  Yes, you are unemployed.  Those programs are there for people like you.  Take advantage of them.  Consider food stamps, too.  Why not?  You are out of work, and you don’t know how long.
  • Task two – update your resume.  I’ll give you a day for this.  You’re a journalist for crying out loud.  You work with a canvas of paper and a brush of words.  If your resume is not up to date, update it.
  • Task three – change your LinkedIn status, or somehow notify your connections you need work.  This particular reduction is monumental…everyone even tangentially in the industry knows about it.  You have a few hundred other people in the same boat.  There is nothing to be ashamed of…so make it known.  By the way, most people you meet with have been laid off once or more in their career, so get over your embarrassment.  You are not alone.
  • Task four – network your ass off.  Jobs rarely come to you; you need to find them.  You need to make everyone you know understand you are immediately available.  You have a great background now…journalism, new media, etc.  Play off it.
  • Task five – and this is controversial – don’t be too picky or overthink your search strategy.  One of the issues for people leaving Fidelity was that it is highly unlikely they’ll get a job at the same pay level.  So…take something at a level you can afford.  Another thing I’ve seen time and again is people hesitant to send out resumes because they are waiting to find out how that last interview went.  The heck with that strategy; you need to paper the countryside and keep at it until you land.  No breaks.

 

There’s nothing magic here.  It is based solely on my personal experiences.  I’m sure others have experiences they can add to amplify the message.

To both groups: you are not a failure.  You have challenges ahead of you.  Take this weekend to recoup and screw your head on straight.  Come Monday, hit the ground with renewed vigor.  Good luck!

Sunday
Jul 28, 2013

Paralysis of Analysis

As consultants, we are adept at quickly identifying patterns and acting on them.

Sometimes the patterns are obvious from our experience, and we do data analysis to confirm the pattern (and to make sure we are not ignoring other patterns).  Sometimes we must analyze large volumes of data to discover the pattern (for example, the server capacity needs of one life insurance company tracked the NASDAQ index, with an unbelievable correlation of 1!).
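As a minimal sketch of that kind of confirmation step, the snippet below computes a simple correlation between two monthly series.  The numbers are fabricated for illustration (the actual client data is not reproduced here), and the statistics.correlation call requires Python 3.10 or later.

    """Confirming a suspected pattern with a quick correlation check (illustrative data)."""
    from statistics import correlation  # Python 3.10+

    # Hypothetical monthly series: server capacity demand vs. an equity index level.
    capacity = [110, 118, 121, 130, 128, 140, 145, 151]
    index_level = [2980, 3110, 3165, 3320, 3290, 3485, 3560, 3690]

    r = correlation(capacity, index_level)
    print(f"Pearson correlation: {r:.2f}")  # a value near 1 would confirm the suspected pattern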

Sometimes the answers are obvious and simply need confirmation.  We once worked with a diversified manufacturing company whose distribution included direct-to-retail and reseller channels.  The mass market channel, representing 15% of total company sales, wanted the company to include an additional label on each pallet indicating its contents.  This was to speed receiving.  Ironically, this company also made the printer and the software for producing the label, so internal costs were negligible.  The director of the customer operations division wanted a detailed analysis of the costs of producing this label so “the company could decide whether to apply the label or not.”  While we fully support understanding costs, treating this as a “decision point” introduced consternation.  The warehouse was fully prepared to apply the label, and Sales was demanding the customer request be met.  It was hardly worth the meetings to determine whether an additional cost of “about a buck” a pallet was worth the effort.

We see this time after time.

One client wanted to understand systems configurations and interdependencies in report format.  Certainly the information is needed, and it represents many of the foundational elements of a Configuration Management Database (CMDB).  The client acknowledged this would be best kept in a CMDB, but nonetheless wanted reports of the data.  Frankly, we question whether anyone printed or looked at a 400+ page report.  This is a case where a database was the perfect solution, and it would have increased the speed of the project (by having the most current data at the team’s fingertips).

Another client wanted to do forecasting for certain datacenter capacities.  The client executive wanted great specificity. Some of the project data for future years was simply unknown: the budgets were not complete, and system designs did not yet exist.  This is a case where using reasonable and thoughtful estimates is most appropriate.  Instead, a great deal of time was expended querying technology teams for data they simply did not have.

In all of these cases, the argument can be made that we were ineffective in making our positions known.  I do believe reasonable people, when presented with the facts, will often come to the same answer.

What each of these cases shares is an engaged manager who insists on and demands more data regardless of the facts or data availability.  These managers, in an attempt to get the most accurate information possible, actually end up expending more time and resources than the analysis warrants (in our opinion).  In each case, the manager was a very bright and well-intentioned person with great passion for their work.  We would submit they need to step back from an issue and look at it objectively, determining whether the extra investment is warranted.

Some might call this common sense or street smarts, and while I have sympathy with that perspective, I don’t want to discount the wisdom of the executive.  In the end, a more rounded conversation is warranted between the consultant and the executive, making sure the executive understands the cost of the analysis relative to the return.  If the executive still chooses to make the investment, we are obligated to continue the work.  Ultimately, we are a cost to the executive, and they need to understand the value of the analysis.

Do you have a story of analysis paralysis?  Please share!

Tuesday
Jun 4, 2013

It’s Time to Make Internet Identification a Reality

The first rental property I ever purchased was bought in 2007…right before the US real estate bust.  The six-family building had everything my new business partners had to have…a nice entrance, separate utilities, and the ability to convert into condos.  We purchased the property to the delight of the former owner, who at the closing proceeded to tell us how the neighborhood was in decline as his attorney pushed him out the door.  Six years later, with one real estate bust behind us, we are now positioned to sell the six-unit condominium. Suffice it to say this property isn’t the envy of real estate mogul Donald Trump.

This morning, our attorney sent over eight documents critical to the closing.  I’d love to tell you the HUD statement was in the package; it wasn’t.  The documents include multiple deeds, a resignation from the (holding) company, and so on.

What struck me as odd was how these documents had to be handled:

  • I printed the package of eight documents
  • I went to the bank, where notary services are offered
  • Having greeted the assistant branch manager/notary, who acknowledged me by name, I then had to prove who I was by showing her my driver’s license
  • The titles of the documents being notarized were recorded in a large, bound paper ledger
  • As the documents were signed, the notary signed her name, embossed the paper, and stamped her identification onto each form
  • I then took the package of eight documents and overnighted them from the post office to the other owners so they could follow a similar process

Why can’t this be done electronically?  Why did I have to take an hour and incur substantive out-of-pocket costs to effect this?

I can go on Amazon.com and order virtually anything and have it shipped anywhere more readily.  All I have with Amazon is an ID and password.  Since I adore one-click ordering, they have my credit card on file.  Is this secure?  No.  Is this convenient?  Yes.

IDs and passwords trace their roots to MIT in the 1960s and the need to make sure the right people got charged for computer time.  Passwords are often shared, causing a major ancillary issue.

Multifactor authentication is a step up.  The question is, how hard is it to find out someone’s mother’s maiden name?  There’s an argument for never answering those questions truthfully!

Tokens or fobs (from companies like RSA) are a way to increase security.  Pricey to acquire and administer, these systems are another step up.  One bank uses them for its cash management system, causing folks like me to carry multiple fobs (inconvenient).  Fingerprints, retinal scans, and the like are biometric methods of addressing the same problem.

Some of the best and brightest are addressing this through industry groups like the Global Identity Foundation and Fast IDentity Online (FIDO).  With these issues solved, perhaps standard real estate transactions can be accomplished online.

What else could we then do?  My small town has a government structure called town meeting, where everyone in town goes to the high school to vote on every line item.  These meetings can go on for days…and are often poorly attended.  This is an area where automation could help.

The same holds true for other transactions.  Going to the “Registry of Motor Vehicles” is another crowd pleaser.  My laptop has a camera…why not let me take the picture on it?

As technologists, we can see the developed world is ripe for doing more transactions securely online.

Let’s get it done.

Friday
May 31, 2013

"Management will Derail us Quickly"

During a conference call yesterday discussing a data center consolidation/migration, the client made the following observation:

“Management will derail us quickly.”

This is a topic often discussed in hushed tones for fear of insulting “management.”  The point was simply, “During this time of transition, we need to let our people do their jobs and not get called into meetings.”

The same holds true during a large “problem management” event (in ITIL-speak; sometimes called incident management outside ITIL).  The technical people need to do their roles, with light-handed management oversight.

The way we do this is to separate the technical team and the management team, with someone going back and forth between the two groups (often a manager).  During an outage or a migration, management is thinking longer term (what is the impact, who do we need to notify, how are our teams holding up) while the technical team is thinking about what they need to do in the next five minutes.  The planning horizons are different.

When the two groups collide, disruption occurs.  Management is often frustrated because the technical people don’t speak their language; the technical team is frustrated because they can’t get work finished.

There are legitimate needs for the groups to overlap, such as when there is a directional change or an unanticipated risk appears.

The onus is on the management team to keep communications at the right level.  Hyper-engaged managers often find this a challenge, and one they need to think about.  Often the best approach is for a hyper-engaged manager to take some time off…literally.

The best manager I ever had always scheduled time away during a major planned event.  He had confidence in us to do our jobs, and the knowledge that we would reach out if things went awry.  This doesn’t mean he was disengaged.  It means he had someone providing him with updates “on the side” and monitoring progress.  The staff appreciated the approach as they could focus on their jobs, and yet everyone knew who was accountable.

In another example, during a recent migration a cabling subcontractor dropped the ball, introducing a schedule lag.  The bottom line was that the timeline was going to be extended, although still within the window.  The manager needed to take some time out of the office…and grab lunch.  Hovering would do nothing useful.

As project managers, it’s important to understand these dynamics and help the managers get what they need while not disrupting the teams. 

Thursday
May 23, 2013

MIT CIO Symposium - 2013 Synopsis

MIT Sloan has held a “CIO Symposium” for 10 years.  It is an opportunity for thought leaders in industry and academia to compare notes and contemplate the future.  This year’s theme: “Architecting the Enterprise of the Future.”

My synopsis of the one-day session? IT is DEAD.  The CIO title, often read as Chief Information Officer (not the Chief Investment Officer of financial services firms), really means Career Is Over.

Many things in IT have died over my career.  The mainframe was declared a dinosaur in the 70s and 80s as “distributed computing” took over.  Companies like Prime, Data General, and Digital Equipment Corporation were poised to dethrone International Business Machines (IBM) as the leader of computing.

Who remembers Prime, Data General, and Digital Equipment Corporation (DEC)?  Each of these once-proud Massachusetts computer companies is gone, while IBM thrives.

I left the session with the distinct feeling of coming from a wake, where memories of the dearly departed were shared.

Sage advice like “IT needs to align with the business” has been a mantra for decades.  “IT needs to speak in business terms and not IT terms” is an evergreen comment.  I was live tweeting from the audience, and at one point was challenged by a friend:

 

 

“The Use of Power and Influence During the Process of Innovation” was a theme of an early session. 

My tweeting shows some good soundbites, and perhaps nothing new:

 

 

 

“Big data” got a bit more of a reaction:

 

 

 

 

And some reaction was bittersweet:

 

As an IT professional, do I think IT is dead?  No (although I did tell my son not to get into the business, and he did anyway).  IT needs to evolve or perish.  (This isn’t new either.)

IT is seeing a repeat of the mainframe death knell because so many business lines can now “do their own thing.”  Salesforce.com has revolutionized the way sales teams work, and business lines feel empowered.  Are there issues around privacy and “islands of automation”?  You bet.  And many organizations are balancing agility with regulations and being successful.

I’m going to submit “islands of automation” was one of the cries in the era of Prime, Data General, and Digital Equipment Corporation.  It’s déjà vu all over again. 

What drives business lines to do these projects is that IT is viewed as an obstacle, not an enabler.

Georgia Papathomas, VP and CIO of J&J Pharmaceuticals, had some great insights in a session, “Driving Innovation and Managing Expectations:”

  • “Everything in IT takes too long.”
  • “There is only a business strategy, no IT strategy.” It’s an IT roadmap at J&J.  This is a subtle but powerful shift.
  • We are using technology to evolve from a product company to a healthcare company.  And that doesn’t mean it’s IT leading the charge.

The session “Strategic Agility through IT: Harnessing the Convergence of Data, Analytics and the Cloud” offered some jewels like:

  • Agile methodology is not just for IT; it’s for the business units as well (somehow that doesn’t sound like business speak).
  • “It used to be work had the best technology, and not home. It’s switched. And the people managing tech need to understand,” offered Michael Relich, EVP, CIO and Strategic Planning for Guess
  • Sharing data between private clouds (like fraud data) is a large opportunity for companies

The award for the best stream of soundbites in the session, “The Evolving Cloud Agenda” went to Scott Blanchette, SVP of Information and Technology Services at Vanguard Health Systems:

  • Security and Privacy regulations most significant barrier to cloud
  • We put the appropriate (security) wrappers around data in the cloud.
  • Buying equipment is cheap today; even EMC. (Crowd knowingly laughs)
  • “Don’t ask your barber if you need a haircut” yielded “Don’t ask a hardware vendor if you should go cloud” from @ValaAfshar
  • Processing speed is accelerating faster than collection of health care data
  • We put governance in place for people who come up with the best ideas.  “The best ideas will come from someplace that we don’t expect.”

My view of the best session, and the one giving me the most reason for hope, was put on by Andrew McAfee, Principal Research Scientist at the MIT Sloan School of Management.  Sadly, he was on at the very end.

He posits that the steam engine propelled humankind through the first machine age, and we are on the precipice of the second machine age.  He sees the following CODE:

  • Cyborg - new man/machine combinations will propel us, combining human/digital contribution
  • Open - organizations will be more open. The smartest people work somewhere else. With enough eyes, the bugs are small
  • Data-Driven - a rigorous analytical approach is driving the economy (think of a restaurant owner being able to identify theft)
  • Evolving - how quickly innovation and disruption are happening

 

Whether Information Technology stays in the “C” suite or not, it is clear IT can study the past and use this knowledge as a way to leapfrog (and not simply repeat).

 

Gary L Kelley’s twitter feed from the MIT CIO Symposium and other extraneous topics can be viewed at https://twitter.com/glkelley

Wednesday
May 8, 2013

The Importance of Staff and Shifts

In the course of our business, we see many datacenter/application migrations and high-severity issues.  One observation we always share with our clients is to plan for staff rotation.  As you might expect, some listen and others do not. Here’s why it’s important.

Migrations often happen overnight…when the business sleeps or operates at a lower activity level.  Organizations without satisfactory disaster recovery plans often incur an outage to do a migration.  People are resilient only for so many hours, and then they crash.

What often happens in migrations is everyone wants to be at the starting line, and the adrenaline keeps them engaged.  If shifts are not “forced,” then there is often nobody left with “gas in their tank” to troubleshoot issues.  People simply have to disengage to be fresh.

We saw this at a large customer where the team had persevered, declared success, and then dragged themselves home.  There was an issue, and the on-call engineer was unwilling to make changes because he didn’t understand the changes that had taken place (a change management issue).  NOBODY involved was responding to calls.  As it turned out, the group’s manager lived in my town, and I got to knock on his door at 10:00 AM on a Sunday.  His wife wasn’t happy (he had been up all night) but did indeed get him up.  While he resolved the issue, a few months later he resigned and went to work at a different company.

In this case, the team was not structured to focus on a multi-day issue…and the response was poor.

In another case, a new virus definition in a client’s antivirus system determined operating system files were bad and quarantined them.  The client had a policy of deleting quarantined files, so with the speed of automation, thousands of operating systems were deleted.

The senior manager quickly determined this would require a sustained 24/7 response, and teams were “nominated” to cover 12-hour shifts.  We were asked to help on a sustained basis, providing process oversight and helping make shift turnovers crisp.

To the credit of the senior manager, this approach allowed a sustained response as systems were recovered from (gasp!) tape.

Large IT shops often run multiple shifts, so a technical response is more organic.  Smaller shops tend to have a 24x7 operational capability but may lack the detailed technical response.

When planning for or reacting to major events, think in terms of how to rotate your staff over a sustained period.