Six Tips for Protecting Critical Data
Tip One – People, Policies and Priorities First
Put the right people, policies and procedures in place before turning your attention to technology strategy. Designate one individual in the company as the data protection owner. This person is responsible for getting management buy-in, documenting the processes, investigating the options, and directing testing and training.
The data protection owner should form a group to determine what the most critical information to the business is. This small group should
include those individuals whose input will ensure that the most critical business information is protected. In a small business, this may be just
the owner or the executive staff. In a midsize business, a manager from each function is probably most appropriate. The data protection
owner should identify any relevant regulations that affect the company's data protection priorities. Next, the group should define the critical
applications. Given the limited resources in most small and midsize businesses, initially narrow your focus to the one or two core applications
where an inability to access key information can quickly start to cost you money, such as your e-commerce site, customer database or e-mail
system. By focusing on protecting just one or two critical applications, your data protection goals will be more attainable.
Tip Two – Get the Data out of the Building
It is extremely important to get your data out of the building and out of harm's way. The ideal offsite location is distant geographically so it
remains unaffected by large-scale disasters, such as earthquakes and hurricanes. Consider what the most likely threats are to your place of
business.
● Is it local power outages? How far away would you need to store the data to be on a different power grid?
● Is it earthquakes or hurricanes? Keep the backup data at least an area code away.
● Is it most likely to be server failures? Think about what could be done for more rapid recovery of the production machine.
Think creatively about how you can cost-effectively back up the data remotely. For example, if your office is in New York City and your IT administrator lives in New Jersey, you could simply set up a PC backup server in their home that is connected to the main server by DSL or cable.
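One inexpensive way to implement the scenario above is a scheduled rsync job over SSH from the office server to the offsite PC. This is only a sketch; the host name, user and paths below are placeholders for your own environment.

```shell
# Crontab entry on the office server: push changed data offsite at 2 a.m.
# "backup-nj", "backup" and both paths are placeholders.
# -a preserves permissions and timestamps, -z compresses for slow links,
# --delete mirrors removals so the offsite copy matches the source.
0 2 * * * rsync -az --delete /srv/company-data/ backup@backup-nj:/backups/company-data/
```

Because rsync transfers only the portions of files that changed since the last run, even a DSL or cable connection is usually enough for a nightly job.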
Tip Three – Calculate the Costs of Downtime
For your peers to appreciate the gravity of the problem, you may need to estimate the downtime costs for employees, suppliers and customers
if they can't access critical information. The following method provides a simple way to estimate the average cost per hour of downtime.
Cost per Occurrence = (To + Td) x (Hr + Lr)
● To = length of the outage, in hours
● Td = time since the last backup, in hours (how much work must be recreated?)
● Hr = hourly rate of personnel (monthly expenditure per department divided by the number of work hours)
● Lr = lost revenue per hour (applies if the department generates profit; a good rule of thumb is to take three months of profit and divide by the number of work hours in that period)
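The formula is easy to turn into a small calculator. Here is a minimal sketch in Python; the function and parameter names are illustrative, not part of any standard tool.

```python
def downtime_cost(outage_hours, hours_since_backup,
                  hourly_personnel_rate, lost_revenue_per_hour=0.0):
    """Cost per occurrence = (To + Td) x (Hr + Lr)."""
    return ((outage_hours + hours_since_backup)
            * (hourly_personnel_rate + lost_revenue_per_hour))

# Example: a 4-hour outage, last backup 8 hours ago, $50/hour in
# personnel costs and $200/hour in lost revenue.
print(downtime_cost(4, 8, 50, 200))  # -> 3000
```

Running the numbers for even a modest outage is often the fastest way to get management buy-in for a data protection budget.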
Next, define the recovery objectives for your applications. The best way to quantify your objectives is with a Recovery Time Objective (RTO) and
Recovery Point Objective (RPO) for each application. The RTO for an application is simply the goal for how quickly you need to have that
application's information restored. For example, perhaps 4 hours, 8 hours, or next business day is tolerable for e-mail systems. The RPO for an
application is the goal for how much data you can afford to lose since the last backup. Is it 2 minutes' worth, 20 minutes' or 2 hours'? Then
estimate the costs to achieve your RTO and RPO for each application.
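Once RTO and RPO are written down, checking a proposed solution against them is a simple comparison. A hypothetical helper, using the e-mail example above:

```python
def meets_objectives(recovery_minutes, data_loss_minutes,
                     rto_minutes, rpo_minutes):
    """True if measured recovery time and data loss stay within the
    application's RTO and RPO."""
    return recovery_minutes <= rto_minutes and data_loss_minutes <= rpo_minutes

# E-mail system with a 4-hour RTO and a 20-minute RPO: a 3-hour recovery
# that lost 15 minutes of data meets both objectives.
print(meets_objectives(recovery_minutes=180, data_loss_minutes=15,
                       rto_minutes=240, rpo_minutes=20))  # -> True
```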
Tip Four – Think Beyond Tape to Achieve Your Recovery Objectives
Once you have established how quickly you need to recover key applications (RTO), how much data you can afford to lose (RPO) and your
budget, you can select the appropriate technology solution. Like many SMBs, you are likely to discover that traditional tape backup won't be
enough to achieve your RTO and RPO goals for critical applications.
For SMBs whose critical applications run at multiple remote locations, the quality and consistency of on-site tape backup is also an issue. Few
companies of any size have technical experts in branch locations who can clean and maintain tapes, ensure that they are properly backing up,
and execute a recovery when needed.
Small and midsize businesses face a conundrum: tape backup systems are inexpensive and fairly reliable, but they offer poor RPO and RTO for
critical applications, and they are usually ineffective for remote locations. Hardware mirroring technology, which uses remote copy technology
to provide synchronous mirroring between two sites, offers excellent RPO and RTO but it is prohibitively expensive for a small or midsize
business to buy and manage. Plus, it is less than ideal for backing up remote locations which often have low-bandwidth connections.
New solutions based on asynchronous software-based replication can achieve the acceptable RTO and RPO objectives for critical applications
without the cost and complexity of the synchronous replication approach. With software-based replication, only the bytes that change are
replicated. When compared with synchronous replication solutions, this approach offers lower load on the production servers, faster updates
and the ability to send replication updates across low-bandwidth Internet networks.
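The idea of replicating only the bytes that change can be illustrated with a toy block-comparison sketch in Python. This is not how any particular product works internally; it only shows why sending deltas uses far less bandwidth than re-sending whole files.

```python
BLOCK = 4096  # compare files in fixed-size blocks

def compute_delta(old: bytes, new: bytes, block: int = BLOCK):
    """Return only the changed regions as (offset, data) pairs."""
    delta = []
    for off in range(0, max(len(old), len(new)), block):
        if old[off:off + block] != new[off:off + block]:
            delta.append((off, new[off:off + block]))
    return delta

def apply_delta(old: bytes, delta, new_len: int) -> bytes:
    """Rebuild the new version on the replica from the old copy plus the delta."""
    buf = bytearray(old.ljust(new_len, b"\0")[:new_len])
    for off, data in delta:
        buf[off:off + len(data)] = data
    return bytes(buf)
```

For a large file where only one block changed, the delta is a few kilobytes, which is what makes continuous replication practical over DSL-class links.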
Double-Take – The Solution for SMBs
Double-Take is the most effective way for small and midsize businesses to experience the data protection benefits of asynchronous software replication.
Today, Double-Take is the most relied-upon solution for real-time replication of critical data and automated failover for application availability. Double-Take is Microsoft Windows 2000 and 2003 certified at all levels, one of the few replication products to have achieved this level of certification. It delivers protection that is comparable to, or better than, many hardware-based solutions, but costs tens of thousands of dollars less.
Double-Take replicates changes to files at the byte-level from any Windows Server to any other Windows Server across any IP-based network.
It installs on each server and monitors file changes in real time, then replicates those changes to another server and
applies them to a secondary replica of the data. All changes are sent and applied in the exact logical order that they occurred on the
production system, guaranteeing a crash-consistent copy of data on the secondary system.
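Applying changes in their original order is what keeps the replica crash-consistent. A toy journal in Python shows the principle; the operation names are invented for illustration and have nothing to do with Double-Take's internals.

```python
from collections import deque

journal = deque()  # operations in the order they happened on production

def record(op, key, value=None):
    """Production side: append each change to the journal as it occurs."""
    journal.append((op, key, value))

def drain(replica: dict) -> dict:
    """Replica side: apply journaled changes strictly in order, so the
    replica always represents a real point in time on the source."""
    while journal:
        op, key, value = journal.popleft()
        if op == "put":
            replica[key] = value
        elif op == "delete":
            replica.pop(key, None)
    return replica
```

If "put a=1" and then "put a=2" arrive and are applied in order, the replica ends with a=2, exactly as production did; applying them out of order could leave a stale value behind.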
Beyond minimal protection
Double-Take Small Business Server Edition from Double-Take Software goes beyond the minimal protection of periodic backup by providing disk-based continuous data replication, ensuring minimal data loss and enabling fast recovery from any disaster or outage, all priced with the small business in mind.
Double-Take continuously captures byte-level changes as they happen and replicates those changes, locally or to a recovery site miles away.
Because changes are captured in near-real time, in the event of a disk crash, power failure, human error or natural disaster, you may lose only seconds of data instead of hours or entire days. Unlike other costly solutions that limit your geographic options or require special network connections, Double-Take Small Business Server Edition works over any distance using your existing IP networks, even the Internet. You can station your target server as far away as you like to ensure maximum protection against disasters.
Because Double-Take replicates only the bytes that change, it uses the minimum bandwidth required to back up your data. Features like Flexible Bandwidth Scheduling and Intelligent Compression allow you to control when replication occurs and how much bandwidth Double-Take is allowed to use.
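Bandwidth scheduling boils down to mapping the time of day to an allowed transfer rate. A hypothetical schedule table in Python (the windows and caps are examples, not product settings):

```python
from datetime import time

# None means unlimited; numbers are bytes per second.
WINDOWS = [
    (time(0, 0),  time(8, 0),  None),     # overnight: full speed
    (time(8, 0),  time(18, 0), 128_000),  # business hours: ~128 KB/s cap
    (time(18, 0), time(23, 59), None),    # evening: full speed
]

def bandwidth_cap(now: time):
    """Return the replication bandwidth cap in effect at a given time."""
    for start, end, cap in WINDOWS:
        if start <= now < end:
            return cap
    return None  # outside all windows: no cap
```

Throttling replication during business hours keeps it from competing with users for the office connection, while overnight runs catch up at full speed.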
When disaster strikes, you can use the secondary disk-based copy of your data to restore the production server within moments. Double-Take
is server, storage, network and application independent and works with the services you have today.
Full-Server Failover: Protect more than just your data
But there's more to protecting your systems than just protecting the data. Double-Take's Full-Server Failover feature ensures everything is protected, including the operating system (OS), applications and data. The Full-Server Failover feature combines cutting-edge system state
protection and recovery capabilities with the real-time replication of Double-Take, ensuring applications are available when they're needed
without introducing paralyzing levels of complexity for the small business owner. A single click of a button is all that it takes to completely
recover your systems. Because the Full-Server Failover feature is protecting the system state of the production server, there's no need to
separately manage service packs, application updates or hotfixes on the standby server, further reducing the complexity of maintaining
application uptime.
Tip Five - Make it Easy for Users to Restore Themselves
Most SMBs don't have the IT resources to respond to individual requests to restore files. Fortunately, solutions like Microsoft's Windows
Storage Server 2003 make it easy for users to restore files themselves. Windows Storage Server 2003 can be configured to take a snapshot of
the data on a server twice a day, for example. Should a user delete or make unwanted permanent changes to a document, they can restore the file from any snapshot by right-clicking the file, selecting "Properties", viewing all available versions of the file and choosing the one they want.
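The snapshot-and-restore workflow can be sketched in a few lines. This toy Python version keeps whole copies rather than the space-efficient snapshots Windows Storage Server actually uses, but the user-facing idea, pick a version and restore it, is the same.

```python
import copy

snapshots = []  # list of (label, {filename: contents}) taken over time

def take_snapshot(label, share):
    """Record a point-in-time copy of the file share."""
    snapshots.append((label, copy.deepcopy(share)))

def previous_versions(filename):
    """All saved versions of one file, oldest first."""
    return [(label, s[filename]) for label, s in snapshots if filename in s]
```

A user who clobbers report.doc at noon can pull the morning snapshot's copy back without opening a ticket with IT.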
Tip Six - Make Sure You Really Can Restore In Different Situations
It's important to make sure you have thought through how to restore your critical applications quickly - either locally or at a different location.
Do you have fast access to all of the components you need to recover? What are the specific steps needed to restore a failed server? What
would you do if you had to move the company's operations and employees to another location?
Double-Take can help you recover at another location or recover a branch location quickly, and because it replicates only the data that has changed, it works well over long distances, even across low-bandwidth connections.