Tuesday, April 13, 2010

Cooling the datacenter: "hippie engineering" meets modern IT

Tourists visiting Newcastle upon Tyne are more likely to pack a warm wool sweater than a beach blanket. In northern England, swimsuit season brings summer rains and chilly temperatures. Yet according to Palo Alto, Calif.-based Hewlett-Packard, you won't find a better spot for a data center.




"The cool location is very attractive. We will probably only run the auxiliary cooling devices three days a year," says Ed Kettler, a fellow at Hewlett-Packard (HP).



In February the company opened a facility that pulls sea air through seven-foot intake fans. The first story of the building is used to channel air. "We built a second story for the data center, and put intake fans on the first level," says Kettler. "We basically have a twelve-foot raised floor."



Typical data centers look nothing like this. Most use a three-foot raised floor to circulate cold air. Beneath perforated floor tiles, an air conditioner often runs day and night. Even with air-side economization—a cooling system that brings in outside air—chillers are usually needed to cool the air after intake. Inefficient compressors force the air upwards through tiny holes in the floor tiles.



"Our system is more efficient because it uses fans," says Kettler. If industry standards change, allowing servers to run a degree or two hotter, Kettler says he will never have to run mechanical chillers.



As much as half the energy used by a data center goes towards cooling. While total power use is beginning to drop thanks to equipment efficiency, data centers still consume more energy than all color televisions in the US combined. Data centers consumed 61 billion kilowatt-hours (kWh) in 2006, representing 1.5 percent of national electricity use.
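The ratio behind that "half the energy" figure is what the industry now calls Power Usage Effectiveness (PUE), a term the article itself doesn't use; a minimal sketch of the calculation, with illustrative numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    A PUE of 2.0 means that for every watt reaching the servers, another
    watt goes to cooling, fans, and power-distribution overhead.
    """
    return total_facility_kw / it_equipment_kw

# If cooling and overhead equal the IT load ("as much as half the energy
# goes towards cooling"), PUE is 2.0; a free-cooled site can do far better.
print(pue(2000, 1000))  # -> 2.0
```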



Back to the future

This is why a handful of high tech companies are going retro, and revamping historic building designs that pre-date punch cards and magnetic strips (not to mention blade servers). The designs borrow from old bungalows, Northern California communes, and even pre-Columbian pueblos and ancient Roman villages. In the spirit of permaculture—a design philosophy that uses wind and sunlight for free cooling and light—electricity use is on the decline.



"A cool, breezy location, and a smart building design will make a big difference," says Chris Page, Director of Climate and Energy Strategy for Yahoo!.



Yahoo is scheduled to open a Lockport, New York data center in September. Called the "Yahoo Compute Coop," the patent-pending design harnesses prevailing winds for cooling. As in a chicken coop, thermal convection moves air through the building, and coolers will only be run a handful of days each year.



The design is surprisingly simple. While wind power requires gusty hilltops and canyons, not to mention windmills, IT permaculture—to coin a term where one is needed—doesn't aim to make electricity from the elements. The goal is to simply use the natural features present on a piece of property to help reduce reliance on the grid.



"Unlike wind power, our designs don't require strong prevailing winds," says Page.



Similar design efforts are under way at Stanford University in Palo Alto, CA, and at the National Renewable Energy Lab in Golden, CO, which plans to open a new data center in June.



"Nowadays we live in sealed little boxes," says Paul Torcellini, an engineer at the National Renewable Energy Lab (NREL). "But before 1930, people had to design buildings to work with their environment—to make the most of natural air conditioning and lighting."



This is why old bungalows have large, covered porches and breezeways. Deciduous trees and vines can be planted along south facing walls, blocking sun in the summer, and allowing light to pass through windows in the winter. In some designs, south facing windows channel light to heat-absorbent rock walls and floors. Thanks to similar tactics, Torcellini's new $63 million building will run entirely on solar. "Ten years ago, no one thought a data center could do this—the average solar array would barely make a dent in typical energy consumption," says Torcellini.



While solar has improved since then, the number of panels needed for a traditional data center still exceeds most budgets and space restrictions. This is why cutting energy use is so important, says Torcellini. "By reducing demand with climate friendly designs, we can power the entire data center and office with the solar energy we make on site," he says.



Designed to be one of the most efficient offices in the world, the building uses half the energy of a typical facility, despite housing a 3,000 square foot data center. The building will soon have Platinum-level LEED certification—a rating offered by the Washington, DC-based U.S. Green Building Council.



High Tech Hippies

Trellised over a south-facing deck, leafy kiwi plants are used for passive cooling and heating at the Occidental Arts and Ecology Center—an intentional community and permaculture design school based in Sonoma, CA. The leaves block sun in the summer, and fall to the ground in the winter, allowing light and warmth through the sliding door.



Just down the coast in Bolinas, James Stark evaluates a home he built at the Regenerative Design Institute. "We planted citrus on the south side of our office to help block the sun, and also because citrus likes the heat from the walls," says Stark, who teaches permaculture. The house uses earthen walls to retain heat—at least on the south face. Light straw clay is used on the north side of the building to provide insulation.



"It's not hard to make similar good design features work in the context of a data center," says Page. "There is a tendency to plop down data centers without looking at the climate and energy factors, but you save a lot of money by working with the local environment."



Absorbing and blocking light is a classic permaculture approach. This is why the National Renewable Energy Lab built their data center into a slope on the north side of the building. On the opposite side of the building, offices are entirely above ground, and lighted with sun. Near the top of the building, a row of glass channels light to a reflective, white ceiling.



"You want this level of windows as high as possible so that they get the most sun to the ceiling," says Torcellini. Because the sun hangs lower in the sky during the winter, tiny louvers collect and channel the rays. In the summer, the light is scattered to prevent heating.



Pre-Columbian Anasazi villages harnessed similar strategies. Built into the upper sides of high desert cliffs, the rock dwellings were warmed directly by the sun in the winter. In the summer, the rays angled towards the bottom of the rock face, missing the village abodes. Roman and Greek communities were similarly heated with passive solar, and viewed a failure to do so as uncivilized. The playwright Aeschylus suggested only barbarians "lacked knowledge of houses turned to face the winter sun."



Aeschylus didn't use the word "permaculture"—and neither do Page, Torcellini, and Kettler, for that matter. Yet their buildings are reminiscent of the "hippie engineering" fad that borrows from ancient and historic designs. Popularized in Australia and Northern California during the '70s, the movement now awards permaculture certifications at green design institutes throughout the San Francisco Bay Area. Permaculture refrigerators harness cold air from shallow holes dug beneath the kitchen floor. A rectangular pipe connects the earth to a ceiling vent, where a fan pulls the constant cold air supply upwards. After installing shelves, screens, and a refrigerator door, food can be kept at 40 degrees.



Bringing permaculture to the datacenter

Torcellini and his colleagues used a similar strategy when designing their data center—though at a much larger scale. Air for the building's air conditioning system is pulled from a cool basement maze of concrete and stone. "We call it the labyrinth," says Torcellini. "Cement walls trap the natural cold in the earth, and air for the data center is filtered through."



In the 1920s, similar "earth tunnels" were installed in Washington DC. As in a cave, the air in the earth is constantly cold—even during the summer. "It's the same principle as with the fridge," says Torcellini. "The air just needs a long path to travel to pick up the cool."



Drawing from a design in Utah's Zion National Park, evaporative cooling also helps Torcellini cool his data center. At the Visitor's center, hot desert air is funneled through chimneys that contain moist water membranes. As air passes through the wet layer, it cools, becomes dense, and falls. The momentum circulates a breeze through the building—all without the use of electricity.



"It's basically like a swamp cooler that uses no energy," says Torcellini, who helped design the Zion towers. "Because of the chimney height, the flow of cool air is quite strong." In the scorched desert of Southern Utah, summertime temperatures reach 110 degrees. Yet the towers push 5,000 cubic feet per minute of cool air into the building.
To integrate this model into the data center, Torcellini swapped tall chimneys for solar powered fans, and installed evaporative pads into the air ducts. "You could design chimneys like those in Zion for use in a data center, although it wouldn't work in humid areas. It's not the heat that makes the system work, it's the dry air," says Torcellini. By relying on fans, the data center's duct system works all year, regardless of changes in humidity.
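Torcellini's point that dry air, not heat, makes the system work falls out of the standard relation for a direct evaporative cooler: the outlet temperature approaches the wet-bulb temperature, so the drier the air (the larger the wet-bulb depression), the more cooling you get. A minimal sketch, where the 0.85 pad effectiveness is an assumed, typical value rather than a figure from the Zion or NREL designs:

```python
def evaporative_outlet_temp(dry_bulb_f: float, wet_bulb_f: float,
                            effectiveness: float = 0.85) -> float:
    """Outlet temperature of a direct evaporative cooler, in Fahrenheit.

    effectiveness is the fraction of the wet-bulb depression the wet pad
    actually captures (0.85 is a typical assumed value for a good pad).
    """
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# A 110 F desert afternoon with a 65 F wet-bulb (very dry air) cools sharply,
print(evaporative_outlet_temp(110, 65))
# while humid air (wet-bulb close to dry-bulb) barely cools at all.
print(evaporative_outlet_temp(95, 90))
```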




"The point is to work with Mother Nature rather than against her," says Kettler.



HP's facility combines a host of additional green features. It stores rainwater from the rooftop in underground cisterns, and then uses it in the data center's humidification system. Warm air from server racks is used to heat the building, and a reflective roof lowers inside temperatures. The company replaced parking lots with native trees and grass, and cracked concrete was removed for wetlands restoration. "We tried to return the area to its natural state, so the water seeps into the ground through grass and plants, like it would have before the building was constructed," says Kettler.



Individually, these systems may only solve part of the cooling problem, but combined, they have a big impact. Permaculture data centers say they will only use coolers three days a year—some may never run auxiliary chillers at all. Rainwater harvesting and natural lighting will also have a big impact on utility bills.



"You can build the initial building a lot cheaper and run it using less electricity," says Page. "Green design is often cost effective design."



It's how you use what you've got



To help IT companies locate the best climate for their data center, the Beaverton, Oregon-based nonprofit Green Grid launched a series of interactive cooling maps last year.



Engineers pick a location, and then modify temperature and humidity conditions to see how many annual hours of "free cooling" are offered in the area. "You can also type in your zip code, and search for surrounding areas," says Roger Tipley, a senior engineering strategist at HP and a board member at Green Grid.
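A back-of-the-envelope version of what such a map computes, tallying the hours in a year when outside air is cool and dry enough to use directly, might look like this (the temperature and humidity thresholds and the sample readings are illustrative assumptions, not the Green Grid's actual criteria):

```python
def free_cooling_hours(hourly_readings, max_temp_f=80.0, max_rh_pct=60.0):
    """Count the hours in which outside air could cool the servers directly.

    hourly_readings is a sequence of (dry-bulb temperature in F,
    relative humidity in %) pairs, one per hour of the year.
    """
    return sum(
        1
        for temp_f, rh_pct in hourly_readings
        if temp_f <= max_temp_f and rh_pct <= max_rh_pct
    )

# Three sample hours: only the first is both cool AND dry enough.
sample = [(68.0, 45.0), (92.0, 30.0), (70.0, 80.0)]
print(free_cooling_hours(sample))  # -> 1
```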



Released last week, a "Power Configuration Efficiency Estimator" helps engineers evaluate power distribution topology choices and assemble their data centers efficiently. By reducing power and cooling needs, data centers can rely on outdoor air for more hours each year.



For companies stuck in Las Vegas, Nevada, or Houston, Texas, Torcellini says there is no reason to lose hope. "You can work with the conditions in any area, even if the designs are different," he says. Denver hits 100 degrees during the summer, yet the National Renewable Energy Lab harnesses cool west winds from the flatlands for much of the year.



Engineers working on a new Stanford data center are currently choosing between wind capture and heat harvesting technologies, and they say every design has its flaws and benefits. Hills near campus pick up cool coastal breezes, and Stanford identified a spot where the winds blow in consistent patterns. "One option is to structure the data center like a chimney, and pull the air through to the servers from the hill," says Joyce Dickerson, director of sustainable IT at Stanford. Hot air would be released to the sky through air vents above racks, and a spine-like conduit would pull cool air through the center of the building.



"We think it's a great design," says Dickerson. "But we may have an even more efficient option." Locating the new data center closer to campus will lower the transportation footprint, and make the facility more accessible to researchers.



Heat may also be directed from the back of the servers to the central campus utility. In this case, water coils would run through the back of server racks; on contact with the coils, heat would transfer, preventing room temperatures from rising. "The plan is to ultimately provide all heating and cooling on campus with water, so it would be easy to integrate this system into the data center," says Dickerson.



These efforts are applauded by sustainability consultants. Both heat harvesting and wind-based designs represent an unprecedented level of integration. "Facilitating temperature flow at this level is novel," says Mark Bramfitt, a San Francisco-based consultant who specializes in energy efficient retrofits for data centers. "The problem in most data centers is that the hot and cold air mix, and this wastes a lot of energy."



Wind harvesting designs get around this with clever ducting and venting systems. Heat from servers is channeled away from machines. Cool air is brought right to the servers. Some designs even use wireless temperature sensors to direct air towards the hottest racks.



Old school data centers and permaculture

Applying permaculture techniques doesn't require wind harvesting and passive solar. Even data centers that opt against wind harvesting can benefit from better heat and air circulation, says Bramfitt. "You can get nice cool air into the data center any number of ways, but this doesn't mean it will be blown around efficiently," he says.



Because most data centers push air through perforated floor tiles, the room is cooled from the ground up. Racks are often fifteen degrees hotter at the top than the bottom, and air is wasted cooling the room itself rather than the machines. This is why the average data center spends 15 percent of its energy blowing air around, says Bramfitt.



Thermal convection can be integrated into almost any cooling model, and ducting systems can be used to separate hot and cool air. Stanford has recently introduced tiles that contain scoops. "It gets the cold right to the machines, and pushes air to the top of the racks more quickly," says Dickerson. The center also installed the same ceiling tiles used in clean rooms. Strip curtains are another way to direct air, and white racks can be used to reflect light. Delivering cold air right to machines can be as simple as moving perforated tiles to the right place. "You don't have to harness sea winds to manage your air flow better," says Bramfitt.



These simple sustainability efforts are perhaps more realistic—at least for the average data center. "Old buildings weren't designed to let air in, and the payback time on retrofits is often three to five years," says Bramfitt. Air-side economization—though a much smaller-scale effort than wind harvesting—is still too expensive for the data center dinosaurs. The original concept was to isolate computers as far away from the elements as possible. Tucked away in the heart of the building, behind thick walls and layers of protection, it's hard to bring outside air into the most retro of retrofits.



When it comes to integrating IT-permaculture techniques, it's also difficult to predict the temperature changes a company can expect. "Measuring convection and air flow is hard," says Torcellini, "and it's hard to say we lost x degrees because of cooling from the basement labyrinth or the wind from the flatlands."



The biggest hurdle is measuring airflow. "It's like trying to figure out how much wind comes through your window," says Torcellini. "Just putting your hand up to feel the breeze blocks some of the air flow." Measurements in the Zion chimneys were complicated by instrumentation that blocked breezes, and while tracer gasses are sometimes used to visualize the flow of air, these don't always work in open systems.



Like the communes of Northern California and the villages of ancient times, the proof is more in the pudding than the metric. "There is no formula for permaculture—you just design buildings to work with the land, and modify until you have something that works," says Stark, who teaches this philosophy to his permaculture students.



This is one way that the National Renewable Energy Lab's new data center is perhaps more high tech than hippie. Despite the difficulty of measuring integrated systems, the lab is making every effort to try. Meters have been installed on evaporative pads and fans, and each cooling component will be measured as the seasons change. By the end of the year, researchers should know how these systems work together to ease energy demand.



The bottom line is that companies need to be more proactive about their data center footprint, says Torcellini. "A lot of companies outsource to server farms, believing that if the data center is no longer part of their utility bill, it's not their problem," he says. "We designed our building to show that the highest energy efficiency standards can be met, even with an in-house data center."

Can a machine tickle?

It has been observed at least since the time of Aristotle that people cannot tickle themselves, but the reason remains elusive. Two sorts of explanations have been suggested. The interpersonal explanation suggests that tickling is fundamentally interpersonal and thus requires another person as the source of the touch. The reflex explanation suggests that tickle simply requires an element of unpredictability or uncontrollability and is more like a reflex or some other stereotyped motor pattern. To test these explanations, we manipulated the perceived source of tickling. Thirty-five subjects were tickled twice--once by the experimenter, and once, they believed, by an automated machine. The reflex view predicts that our "tickle machine" should be as effective as a person in producing laughter, whereas the interpersonal view predicts significantly attenuated responses. Supporting the reflex view, subjects smiled, laughed, and wiggled just as often in response to the machine as to the experimenter. Self-reports of ticklishness were also virtually identical in the two conditions. Ticklish laughter evidently does not require that the stimulation be attributed to another person, as interpersonal accounts imply.

Tuesday, February 16, 2010

Microsoft Unveils Windows Phone 7 Series


The months of waiting are over as Ballmer and company reveal Microsoft's response to the iPhone and Android. But will it be enough to stave off competitors with a long head start?




Windows Phone 7. Source: Microsoft

Microsoft unveiled its long-awaited Windows Phone 7 Series on Monday, finally giving the world a glimpse of its answer to mobile competitors' touchscreen phones as well as an introduction to the smartphone operating system's new name.

At the press conference, held at this week's Mobile World Congress (MWC) in Barcelona, Spain, Microsoft (NASDAQ: MSFT) CEO Steve Ballmer said that the "7 Series" phones will be available at retail in time for the 2010 holiday season.

Officials also named a list of partners, including mobile operators as well as handset manufacturers that will build the 7 Series. On the list of operators that will be partnering with Microsoft to provide phones to customers as well as software and services are AT&T, Deutsche Telekom AG, Orange, SFR, Sprint, Telecom Italia, Telefonica, Telstra, T-Mobile USA, Verizon Wireless and Vodafone.

Andy Lees, senior vice president of Microsoft's mobile communications business, also announced that the company is investing in joint projects with two of the mobile operators -- AT&T and Orange -- to bring "a full Windows Phone 7 Series experience to the market across a range of phones."

He did not elaborate on what the efforts might entail, however.

It's no accident that Microsoft singled out AT&T, since it is so far the exclusive partner for Apple's (NASDAQ: AAPL) iPhone in the U.S. It was also the first to deliver smartphones based on Windows Mobile -- the current version of its smartphone software -- into the U.S. market in 2003. Orange, meanwhile, was the first operator to offer Windows Mobile smartphones in 2002, Lees added.

Although the rumor mill had been rife with speculation that Microsoft would sell handsets under its own brand, such an announcement was not forthcoming -- at least not at MWC.


Instead, Microsoft trotted out a list of handset manufacturers that will make the 7 Series, including Dell, Garmin-Asus, HTC, HP, LG, Samsung, Sony Ericsson, Toshiba and Qualcomm.

"One of the things we've kept constant is our belief in the partner model," Lees said, though he also admitted that Microsoft has at least considered making its own branded phone.

Ballmer said Microsoft will describe opportunities for developers at the company's upcoming MIX developers conference in Las Vegas next month.

"About a year and a half or two years ago, we had to step back to recast and reform our strategy [and] I think we're well on our way to something that can be pretty exciting," Ballmer said.
Windows Phone 7 features

Among the features coming in 7 Series phones will be a user interface that is closely related to Microsoft's Zune HD media player, along with built-in Bing search and support for Xbox Live.

(L & R) Microsoft CEO Steve Ballmer and Joe Belfiore, corporate vice president for Windows Phone program management, discuss Windows Phone 7.

The 7 Series devices will also have three dedicated hardware buttons that are controlled by the operating system -- home, back, and Bing search.

Additionally, Microsoft will provide a 7 Series version of Office and the Outlook mail client, and a large portion of Microsoft's emphasis is on consistency of the UI in order to provide users with an integrated experience, officials said.

"Internet Explorer [on 7 Series] is based on the desktop IE code [and Outlook] works just like Outlook on the desktop," Joe Belfiore, corporate vice president for Windows Phone program management, told the gathered press.

As for hardware specifications, 7 Series devices will support a four-point multitouch UI.

Perhaps one disappointment for some potential customers is that 7 Series devices will lack support for Adobe Flash.

"We have no objection to Adobe Flash support, but in [version 1] there will be no support," Ballmer told a questioner at the end of the press conference.

As for the future, Microsoft officials said that the company will gradually reveal more about 7 Series over the coming months.

"We hope Windows Phone 7 Series is our lucky number," Ballmer added.

Six Tips for Protecting Critical Data

Tip One – People, Policies and Priorities First
Consider having the right people, policies and procedures in place before turning attention to technology strategy. Designate one individual in the company as the data protection owner. This person is responsible for getting management buy-in, documenting the processes, investigating the options, and directing testing and training.

The data protection owner should form a group to determine what the most critical information to the business is. This small group should include those individuals whose input will ensure that the most critical business information is protected. In a small business, this may be just the owner or the executive staff. In a midsize business, a manager from each function is probably most appropriate. The data protection owner should identify any relevant regulations that affect the company's data protection priorities. Next, the group should define the critical applications. Given the limited resources in most small and midsize businesses, initially narrow your focus to the one or two core applications where an inability to access key information can quickly start to cost you money, such as your e-commerce site, customer database or e-mail system. By focusing on protecting just one or two critical applications, your data protection goals will be more attainable.


Tip Two – Get the Data out of the Building
It is extremely important to get your data out of the building and out of harm's way. The ideal offsite location is distant geographically, so it remains unaffected by large-scale disasters, such as earthquakes and hurricanes. Consider what the most likely threats are to your place of business.
● Is it local power outages? How far away would you need to store the data to be on a different power grid?
● Is it earthquakes or hurricanes? Keep the backup data at least an area code away.
● Is it most likely to be server failures? Think about what could be done for more rapid recovery of the production machine.
Think creatively about how you can cost-effectively back up the data remotely. For example, if your office is in New York City and your IT administrator lives in New Jersey, you could simply set up a PC backup server in their home that is connected to the main server by DSL or cable.


Tip Three – Calculate the Costs of Downtime
For your peers to appreciate the gravity of the problem, you may need to estimate the downtime costs for employees, suppliers and customers if they can't access critical information. The following method provides a simple way to estimate the average cost per occurrence of downtime:

Cost Per Occurrence = (To + Td) x (Hr + Lr)

To = Time / Length of Outage
Td = Time Delta to Data Backup (How long since the last backup?)
Hr = Hourly Rate of Personnel (Calculate by dividing monthly expenditure per department by the number of work hours.)
Lr = Lost Revenue per Hour (Applies if the department generates profit. A good rule is to look at profitability over three months and divide by the number of work hours.)

Next, define the recovery objectives for your applications. The best way to quantify your objectives is with a Recovery Time Objective (RTO) and a Recovery Point Objective (RPO) for each application. The RTO for an application is simply the goal for how quickly you need to have that application's information restored. For example, perhaps 4 hours, 8 hours, or next business day is tolerable for e-mail systems. The RPO for an application is the goal for how much data you can afford to lose since the last backup. Is it 2 minutes' worth, 20 minutes or 2 hours? Then estimate the costs to achieve your RTO and RPO for each application.
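The estimate above translates directly into a few lines of code. The figures plugged in below are purely illustrative:

```python
def downtime_cost(outage_hours, hours_since_backup, hourly_rate,
                  lost_revenue_per_hour=0.0):
    """Cost Per Occurrence = (To + Td) x (Hr + Lr), per the method above."""
    return (outage_hours + hours_since_backup) * (hourly_rate + lost_revenue_per_hour)

# A 4-hour outage with an 8-hour-old backup, for a department that costs
# $600/hour in personnel and generates $1,500/hour in revenue (made-up numbers):
print(downtime_cost(outage_hours=4, hours_since_backup=8,
                    hourly_rate=600, lost_revenue_per_hour=1500))  # -> 25200
```

Running the numbers this way also makes the RPO trade-off concrete: shrinking the backup interval (Td) cuts the cost of every occurrence, which is the argument the next tip builds on.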


Tip Four – Think Beyond Tape to Achieve Your Recovery Objectives
Once you have established how quickly you need to recover key applications (RTO), how much data you can afford to lose (RPO), and your budget, you can select the appropriate technology solution. Like many SMBs, you are likely to discover that traditional tape backup won't be enough to achieve your RTO and RPO goals for critical applications.

For SMBs whose critical applications run at multiple remote locations, the quality and consistency of on-site tape backup is also an issue. Few companies of any size have technical experts in branch locations who can clean and maintain tapes, ensure that they are properly backing up, and execute a recovery when needed.

Small and midsize businesses face a conundrum: tape backup systems are inexpensive and fairly reliable, but they offer poor RPO and RTO for critical applications, and they are usually ineffective for remote locations. Hardware mirroring technology, which uses remote copy technology to provide synchronous mirroring between two sites, offers excellent RPO and RTO, but it is prohibitively expensive for a small or midsize business to buy and manage. Plus, it is less than ideal for backing up remote locations, which often have low-bandwidth connections.

New solutions based on asynchronous software-based replication can achieve acceptable RTO and RPO objectives for critical applications without the cost and complexity of the synchronous replication approach. With software-based replication, only the bytes that change are replicated. Compared with synchronous replication solutions, this approach offers lower load on the production servers, faster updates, and the ability to send replication updates across low-bandwidth Internet networks.
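The core idea of byte-level asynchronous replication can be illustrated with a toy sketch: compute the changed byte ranges, queue them, and apply them to the replica in order. This is an illustration of the general technique only, not Double-Take's actual protocol (and it skips real-world concerns such as truncation handling and journaling):

```python
def diff_bytes(old: bytes, new: bytes):
    """Return a list of (offset, changed_bytes) runs where new differs from old."""
    changes, start = [], None
    for i in range(max(len(old), len(new))):
        o = old[i] if i < len(old) else None
        n = new[i] if i < len(new) else None
        if o != n:
            if start is None:
                start = i          # a new run of changed bytes begins
        else:
            if start is not None:
                changes.append((start, new[start:i]))
                start = None       # the run ended
    if start is not None:
        changes.append((start, new[start:]))
    return changes

def apply_changes(replica: bytearray, changes):
    """Apply queued changes in the order they occurred on the source."""
    for offset, data in changes:
        end = offset + len(data)
        if len(replica) < end:
            replica.extend(b"\x00" * (end - len(replica)))
        replica[offset:end] = data
    return replica

source_v1 = b"hello world, this is a customer record"
source_v2 = b"hello WORLD, this is a customer RECORD"
changes = diff_bytes(source_v1, source_v2)
replica = apply_changes(bytearray(source_v1), changes)
assert bytes(replica) == source_v2
print(len(changes))  # -> 2 changed ranges sent, instead of the whole file
```

Because only the changed ranges cross the wire, and they are applied in source order, the replica is always a crash-consistent point-in-time copy, which is the property the whitepaper emphasizes.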
Double-Take – The Solution for SMBs
Double-Take is the most effective way for small and midsize businesses to experience the data protection benefits of asynchronous software replication. Today, Double-Take is the most relied-upon solution for real-time replication of critical data and automated failover for application availability. Double-Take is Microsoft Windows 2000 and 2003 certified at all levels, one of the few replication products to have achieved this level of certification. It delivers protection that is better than or comparable to many hardware-based solutions, but costs tens of thousands less.

Double-Take replicates changes to files at the byte level from any Windows Server to any other Windows Server across any IP-based network. It installs on each server, monitors the real-time changes that are occurring to files, replicates those changes to another server, and applies them to a secondary replica of the data. All changes are sent and applied in the exact logical order that they occurred on the production system, guaranteeing a crash-consistent copy of data on the secondary system.
Beyond minimal protection
Double-Take Small Business Server Edition from Double-Take Software goes beyond the minimal protection of periodic backup by providing disk-based continuous data replication, ensuring minimal data loss, and enabling fast recovery from any disaster or outage - priced with the small business in mind.

Double-Take continuously captures byte-level changes as they happen and replicates those changes, locally or to a recovery site miles away. Because changes are captured in near-real time, in the event of a disk crash, power failure, human error or natural disaster, you may only lose seconds of data - instead of hours or entire days. Unlike other costly solutions that limit your geographic options or require special network connections, Double-Take Small Business Server Edition works over any distance using your existing IP networks - even the Internet. You can station your target server as far away as you would like to ensure maximum protection against disasters.

Because Double-Take replicates only the bytes that change, it uses the minimum bandwidth required to back up your data. Features like Flexible Bandwidth Scheduling and Intelligent Compression allow you to control when replication occurs and how much bandwidth Double-Take is allowed to use.

When disaster strikes, you can use the secondary disk-based copy of your data to restore the production server within moments. Double-Take is server, storage, network and application independent, and works with the services you have today.

Full-Server Failover: Protect more than just your data

But there's more to protecting your systems than just protecting the data. Double-Take's Full-Server Failover feature ensures everything is protected - including the operating system (OS), applications and data. The Full-Server Failover feature combines cutting-edge system state protection and recovery capabilities with the real-time replication of Double-Take, ensuring applications are available when they're needed without introducing paralyzing levels of complexity for the small business owner. A single click of a button is all it takes to completely recover your systems. Because the Full-Server Failover feature is protecting the system state of the production server, there's no need to separately manage service packs, application updates or hotfixes on the standby server, further reducing the complexity of maintaining application uptime.

Tip Five - Make it Easy for Users to Restore Themselves
Most SMBs don't have the IT resources to respond to individual requests to restore files. Fortunately, solutions like Microsoft's Windows Storage Server 2003 make it easy for users to restore files themselves. Windows Storage Server 2003 can be configured to take a snapshot of the data on a server twice a day, for example. Should a user delete or make unwanted permanent changes to a document, they can simply select the file from any snapshot by right-clicking on the file, selecting "Properties," viewing all the versions of the file, and selecting the one they want.


Tip Six - Make Sure You Really Can Restore In Different Situations
It's important to make sure you have thought through how to restore your critical applications quickly - either locally or at a different location. Do you have fast access to all of the components you need to recover? What are the specific steps needed to restore a failed server? What would you do if you had to move the company's operations and employees to another location?

Double-Take can help you recover at another location or recover a branch location quickly, and because it only replicates the data that's changed, it works well over long distances, even with low-bandwidth connections.