Symantec Releases DR Practices Survey

Friday, July 10, 2009

Symantec Corp. has released the results of its fifth annual Global IT Disaster Recovery survey.  According to the report, 93% of organizations have had to execute their disaster recovery plans, and the average cost of implementing DR plans for each downtime incident is US$287,000. … The average budget for disaster recovery initiatives worldwide is US$50 million.  Responses within Canada reflected the worldwide results, but percentages differed noticeably in terms of virtualization backup practices: only 10% of Canadian respondents do not back up data on virtualized systems, compared to 36% worldwide.  “The more stringent requirements in general were in North America,” said Dan Lamorena, senior manager of high availability and disaster recovery solutions at Symantec.  Overall recovery times are faster and the cost of downtime is higher in Canada and the U.S. than in the other countries surveyed, he noted.  The average time it takes to “achieve skeleton operations after an outage” is three hours; to be fully “up and running after an outage,” the average is four hours, states the report. …

Executive-level involvement in DR plans is rising.  In 2007, 55% of respondents reported that DR committees involved the CIO, CTO or IT director; this dropped to 33% in 2008, then rose to 67% in 2009, according to the report.  Symantec attributes the rise to DR “becoming a competitive differentiator,” along with other factors such as the size of DR budgets and the impact on customers.  The increased level of executive involvement is significant, noted Stahl of Info-Tech.  When executives are not actively involved in DR planning and business impact analysis (BIA), the IT group will often build an over-engineered plan, he said.  “You get this sort of notion from the business that everything’s critical … because they’re not going to assume that something is not critical.  They’re not going to second-guess that maybe off-the-cuff comment from the executive,” said Stahl.  Info-Tech sees costs trend downward when executives get involved in the BIA and see how those costs line up, he pointed out.  “The more structured that conversation takes place, the more a detailed methodology is followed, the likelihood that they’re going to achieve an optimal state of alignment and costs,” he said.  Average recovery time objectives (RTOs) fell from five hours in 2008 to four hours in 2009.  “In 2009, 75% of tests were successful, more than doubling the 30% of tests that met RTO objectives in 2008.  While this rate also parallels executive involvement, they may or may not be correlated,” states the report.

One in four DR tests fails.  This figure marks an improvement, however, over previous years: 50% of DR tests failed in 2007, dropping to 30% in 2008 and 25% in 2009, according to the report.  “Only 15% say that tests have never failed,” states Symantec.  “Although this is good news, one test failure in four is still alarmingly high.”  But the number doesn’t alarm Stahl.  “Tests are meant to fail … it’s not alarming unless I’m getting to the point where customers are actually trying to recover and failing.  That means they’re not testing, doing that remediation cycle through their DR,” he said.  “DR is a living thing.  The infrastructure is continually changing and morphing and it would be unreasonable to expect enterprises to be 100% on the test year after year.  If they are, that means they’re probably not doing anything else in the infrastructure of the business,” he said.  Reasons cited for test failures included staff errors (47%), technology failure (40%), inappropriate processes (37%) and out-of-date plans (35%), states the report.  Insufficient technology, which ranked third on the list of reasons for test failure in 2008, dropped to fifth place this year, notes Symantec.

While 96% of IT organizations have tested their DR plans at least once, roughly 35% of organizations test only once a year or less, according to the report.  “This is 12% lower (and an improvement) from the 47% that reported minimal testing in 2008.  However, Symantec and most IT experts believe that every organization should be testing more frequently than once a year,” states the report.  While full end-to-end tests used to be the norm, according to Stahl, the trend is shifting to targeted unit tests, as sketched below.  “What happens now is they target tests (to) applications or services where they’ve made significant changes because they just can’t sustain a full test.  It’s too big, it’s too much, it’s too complex,” he said.  Organizations aren’t performing more tests because of a lack of resources in terms of people’s time (48%), disruption to employees (44%), budget (44%) and disruption to customers (40%), states the report. … More than one-quarter (27%) of respondents do not test their virtual servers as part of their DR plans, and more than one-third (36%) do not perform regular backups of data on virtualized systems, states the report.  The lack of storage management tools (53%), lack of backup storage capacity (52%) and lack of automated recovery tools (50%) were reported as the top challenges in “protecting mission-critical data and applications in virtual environments.”
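To make the targeted-test trend Stahl describes concrete, here is a minimal sketch of a DR “unit test” for a single service: trigger recovery at the standby site, then poll a health check until the service responds or the survey’s four-hour average RTO lapses.  The health URL, polling interval and recovery hook are illustrative assumptions, not details from the survey.

```python
import time
import urllib.request

# The four-hour RTO is the survey's average; everything else is hypothetical.
RTO_SECONDS = 4 * 60 * 60                                  # four-hour average RTO
HEALTH_URL = "http://dr-site.example.com/orders/health"    # hypothetical endpoint
POLL_SECONDS = 30

def service_is_healthy(url: str) -> bool:
    """True if the recovered service answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def targeted_dr_test(start_recovery, url: str = HEALTH_URL) -> bool:
    """Recover one service at the DR site and report whether it met the RTO."""
    started = time.monotonic()
    start_recovery()                 # e.g. kick off a restore-from-backup job
    while time.monotonic() - started < RTO_SECONDS:
        if service_is_healthy(url):
            minutes = (time.monotonic() - started) / 60
            print(f"Service recovered in {minutes:.1f} minutes (within RTO)")
            return True
        time.sleep(POLL_SECONDS)
    print("RTO missed: service never became healthy")
    return False
```

Exercising only the application that changed keeps each test small enough to repeat after every significant change, which is precisely the trade-off Stahl describes against full end-to-end tests.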

One of the most significant points raised in the survey, according to Lamorena, is the set of issues around virtualization.  “As people are becoming more familiar with the technology and they are moving more mission-critical applications to these environments, they are encountering some of the challenges and are starting to look at what solutions are really going to help deal with this more complex virtual environment,” he said.  Based on the survey findings, Symantec recommends that organizations curb the costs of downtime by implementing more automation tools that minimize human involvement, reduce the impact of testing on clients and revenue through non-disruptive testing methods, and include those responsible for virtualization in disaster recovery planning.  Many automation solutions are available for disaster recovery, including high-availability clustering, application health monitoring, automated startup of applications at the recovery site and server reprovisioning, Lamorena pointed out.  “In the reality of this 24/7 economy and increasing business requirements, we think people are going to look at more automated solutions … the biggest resource you struggle to find is the people.  In a real disaster, no one wants to be leaving their homes to make sure their data centre is up and running,” he said.  Configuration health checks, also known as aggregators, are one non-disruptive method recommended by Lamorena.  “This isn’t a testing tool, but it gives you a real good sense of the health of the environment,” he said.  “Virtual environments should be treated the same as a physical server, showing the need for organizations to adopt more cross-platform and cross-environment tools or standardizing on fewer platforms,” states Symantec.
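A minimal sketch of the kind of automation Lamorena describes might look like the following: monitor an application’s health and, after repeated failures, start its standby copy at the recovery site without human involvement.  The health URL, host name and service unit are hypothetical placeholders; a production failover controller would need fencing, alerting and manual override on top of this.

```python
import subprocess
import time
import urllib.request

# All names below are hypothetical placeholders, not products from the survey.
PRIMARY_HEALTH_URL = "http://primary.example.com/health"
FAILOVER_CMD = ["ssh", "dr-site.example.com", "systemctl", "start", "orders-app"]
FAILURES_BEFORE_FAILOVER = 3     # tolerate brief blips before acting
CHECK_INTERVAL_SECONDS = 60

def primary_is_up() -> bool:
    """True if the primary site answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor() -> None:
    """Watch the primary and start the DR-site copy after repeated failures."""
    failures = 0
    while True:
        failures = 0 if primary_is_up() else failures + 1
        if failures >= FAILURES_BEFORE_FAILOVER:
            # Automated startup at the recovery site: no one has to leave home.
            subprocess.run(FAILOVER_CMD, check=True)
            return
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    monitor()
```

Requiring several consecutive failures before acting is a simple guard against flapping on a transient network blip, the sort of staff-error and process pitfall the survey lists among the top reasons DR tests fail.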

Reference: http://www.cio.com/article/print/496643
