
Data Virtualization

Data virtualization, presenting a virtual representation of data rather than a physical one, is perhaps the most interesting way for a company to increase the efficiency of its IT operations. Ken Hess wrote a good article on the subject on the popular IT blog ZDNet, for those interested. Yet many users forget that data is the very reason the company's IT infrastructure exists, and that addressing inefficiencies at the data layer often produces the best return on investment. Deduplication, thin provisioning and snapshots are techniques that dramatically reduce or eliminate the physical problems associated with data while providing full, "virtual" access to it.
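To make the "virtual rather than physical" idea concrete, here is a minimal, purely illustrative Python sketch of the copy-on-write idea behind snapshots. The class, block names and data are hypothetical; they stand in for what a real storage array does internally.

```python
# Minimal snapshot sketch (illustrative only): a snapshot copies the block
# *map* (references), not the block data, so it consumes almost no extra
# space until blocks are overwritten.

class Volume:
    def __init__(self):
        self.blocks = {}          # block_id -> data
        self.snapshots = []       # frozen block maps (references only)

    def write(self, block_id, data):
        self.blocks[block_id] = data

    def snapshot(self):
        # Copy the mapping; the block data itself is shared, not duplicated.
        self.snapshots.append(dict(self.blocks))

    def restore(self, index):
        # Roll the live volume back to a previous point in time.
        self.blocks = dict(self.snapshots[index])


vol = Volume()
vol.write("b1", "quarterly report v1")
vol.snapshot()                    # point-in-time image, ~zero extra data
vol.write("b1", "quarterly report v2")
vol.restore(0)                    # back to v1 without a full restore job
```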

Take the example of data deduplication. In a storage infrastructure, the same data is duplicated over and over again (straining the capacity limits of the entire infrastructure and its business processes), often for perfectly valid reasons. This massive duplication, however, has a compounding negative impact on backup, data recovery and disaster recovery (DR) processes. By default, most backup tools regularly create complete images of a data set, which often results in dozens of copies. The benefits of eliminating these duplicates can be felt throughout the organization.
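As an illustration of the principle (not of any particular appliance), the sketch below shows content-addressed deduplication with fixed-size chunks: each unique chunk is stored once, and every "backup" merely references it. The chunk size and the choice of hash are assumptions made for the example.

```python
# Minimal content-addressed deduplication sketch (illustrative only).
import hashlib

CHUNK_SIZE = 4096     # fixed-size chunking; real systems often use variable-size chunks
chunk_store = {}      # sha256 digest -> chunk bytes (each unique chunk stored once)

def store(data: bytes) -> list[str]:
    """Store a blob, returning the list of chunk hashes that reference it."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # duplicate chunks are not stored again
        refs.append(digest)
    return refs

def restore(refs: list[str]) -> bytes:
    """Rebuild the original blob from its chunk references."""
    return b"".join(chunk_store[d] for d in refs)

# Ten "full backups" of the same 1 MB payload occupy roughly the space of one.
payload = b"x" * 1_000_000
backups = [store(payload) for _ in range(10)]
physical = sum(len(c) for c in chunk_store.values())
print(f"logical: {10 * len(payload)} bytes, physical: {physical} bytes")
```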

For example, by backing up to a disk-based deduplication appliance, IT sees immediate performance gains due to the very nature of disk. Once deduplicated, data also occupies far less space, both at rest and in transit: data deduplicated at point A requires much less infrastructure (bandwidth, capacity, etc.) to be stored and transferred to point B, and so on. Because typical backup deduplication rates can reach 20:1 or more, retaining data on virtual disk can greatly improve the efficiency of backup operations, which improves quality of service and benefits the business. This efficiency is compounded by the time it frees up in IT operations, one of the most precious resources in today's datacenter.
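A quick back-of-the-envelope calculation shows why this matters. The data-set size, the 20:1 ratio and the link speed below are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope savings at a 20:1 deduplication ratio (hypothetical figures).
logical_tb = 100            # protected data set, in TB
dedup_ratio = 20            # 20:1, as cited for typical backup workloads
wan_mbps = 1000             # replication link to the DR site, in Mbit/s

physical_tb = logical_tb / dedup_ratio
saved_tb = logical_tb - physical_tb

def transfer_hours(tb, mbps):
    """Hours needed to push `tb` terabytes over an `mbps` link."""
    return tb * 8 * 1_000_000 / mbps / 3600

print(f"capacity at rest : {physical_tb:.1f} TB instead of {logical_tb} TB "
      f"({saved_tb:.0f} TB saved)")
print(f"replication time : {transfer_hours(physical_tb, wan_mbps):.1f} h "
      f"instead of {transfer_hours(logical_tb, wan_mbps):.1f} h")
```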

Regulatory constraints on the retention of email data are easy enough to understand; determining the most cost-effective way to comply with them is harder. The simplest way to comply is to apply a blanket rule to the data: store it in one place and never move it. Well, I suppose the easiest way would be to have it all managed by IT support services in London. Either way, you can always find the data when you need it, but this practice is often at odds with storage optimization objectives. An email is a representation of data, and early in its life it may require a level of availability that simply will not be necessary later on, for example once it becomes a fixed (persistent) archive that never changes. It then makes sense to move that email to the most economical infrastructure platform, which will probably not be the original one. Retention and archiving constraints are no reason to condemn an object to inefficient treatment forever.

The same logic should be applied to data without retention constraints, and the benefits will be similar. Every data object, whatever its form, eventually becomes a persistent, immutable asset that is only rarely consulted. This type of data represents the overwhelming majority of the data managed by a company and has radically different requirements from active or dynamic data. Generally, the same logic can be applied as soon as data ceases to be active and frequently accessed, whether it is stored in an archive or simply on an economical storage class. By deduplicating this data, it can be protected, accessed, secured and stored more easily and effectively, without requiring superhuman effort from IT staff.
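The lifecycle logic described above can be reduced to a very simple policy. The sketch below assumes a hypothetical 90-day inactivity threshold and invented tier names, just to show the shape of such a rule.

```python
# Minimal lifecycle-tiering sketch (illustrative only): once an object has been
# inactive long enough, it is moved to a cheaper storage class.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataObject:
    name: str
    last_accessed: datetime
    tier: str = "primary"            # e.g. fast block storage

ARCHIVE_AFTER = timedelta(days=90)   # hypothetical policy threshold

def apply_tiering(objects, now=None):
    now = now or datetime.utcnow()
    for obj in objects:
        if obj.tier == "primary" and now - obj.last_accessed > ARCHIVE_AFTER:
            obj.tier = "archive"     # cheaper, deduplicated, rarely accessed
    return objects

inbox = [
    DataObject("invoice-2023.eml", datetime.utcnow() - timedelta(days=400)),
    DataObject("project-kickoff.eml", datetime.utcnow() - timedelta(days=5)),
]
for obj in apply_tiering(inbox):
    print(obj.name, "->", obj.tier)
```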

Having less "live" data consuming precious infrastructure resources makes it possible to run more virtual instances of that data and derive greater business value from it. It is a virtuous cycle that confirms a simple principle: we can do more with less. Again, if you are struggling with the subject, do not stress about it; you can always outsource the job to professionals who provide IT support services in Soho.


PCA and PRA Part 1

PCA and PRA: the corporate weapons against IT disasters. Few companies today can do without them, to the point that in many cases the inability of IT to cope with a serious incident can be fatal. According to a study by the American consulting firm Eagle Rock, 40% of companies that suffer a 72-hour interruption of their IT and telecommunications resources do not survive the disaster. That is why more and more companies of all sizes are striving to implement a disaster recovery plan (PRA) or an IT business continuity plan (PCA).

Over time, the meaning of these two terms has evolved. Historically, the business continuity plan focused on analyzing the potential impact of a disaster or failure on the company's business and on defining the means and procedures to be implemented to limit the consequences. The recovery plan, for its part, dealt with the IT aspects of the PCA.

For IT professionals, the terminology has continued to evolve: BCP increasingly describes all the means of ensuring the business continuity of applications, that is, of guaranteeing their high availability (which implies that these applications cannot stop even in the event of a disaster at a site). The PRA, for its part, describes all the means and procedures designed to ensure a rapid and orderly resumption of production after an unexpected shutdown (for example following a technical or power failure, a human error or a natural disaster). The difference between the two approaches tends to come down to a difference in acceptable infrastructure downtime and application recovery time.

PCA: ensuring the high availability of applications

As part of a PCA, the company defines the architectures, means and procedures necessary to ensure the high availability of the infrastructure (data center, servers, network, storage) underpinning its enterprise applications. The objective is to ensure that, whatever the situation, the infrastructure put in place delivers uninterrupted service to users.

In general, implementing a PCA requires deploying redundant facilities across several data centers operating jointly, so that if a component fails at the primary site, the secondary site automatically takes over.

Typically, such an architecture implies putting in place a scheme that guarantees consistency between the storage arrays of the primary and secondary sites. This is what a solution such as EMC VPLEX allows, as does the latest generation of Hitachi VSP G1000 arrays; NetApp and HP (3PAR) geo-clustering technologies are other options. These technologies provide transparent data replication between two or more sites and allow simultaneous write access to the data at all sites. Coupled with orchestration and virtualization solutions, or with software failover technologies (Oracle RAC, SQL Server Failover Cluster, etc.), they allow automated switchover of applications from one data center to the other in the event of a failure at the primary site.
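Stripped of vendor specifics, the orchestration layer boils down to a health probe plus an automated switchover. The sketch below is a deliberately simplified illustration: probe and promote are placeholder callables supplied by the caller, not the API of VPLEX, RAC or any other product named above, and the site names are invented.

```python
# Deliberately simplified failover-orchestration sketch (illustrative only).
import time

def watch(probe, promote, primary, secondary,
          failure_threshold=3, interval_s=10):
    """Promote the secondary site after several consecutive failed health probes."""
    failures = 0
    while True:
        failures = 0 if probe(primary) else failures + 1
        if failures >= failure_threshold:
            promote(secondary)        # automated switchover to the surviving site
            return
        time.sleep(interval_s)

# Illustrative dry run: the "primary" never answers, so the secondary is promoted.
watch(probe=lambda site: False,
      promote=lambda site: print(f"promoting {site}"),
      primary="dc-paris", secondary="dc-lyon",
      failure_threshold=3, interval_s=0)
```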

Note that not all enterprise applications are necessarily concerned by the implementation of a BCP, simply because some are not deemed critical and can tolerate an outage, or even a loss of data. This criticality is defined in collaboration with the business in order to determine the scope of the PCA and which applications will be covered by a "simple PRA". The infrastructure must also be sized properly so that a failover to the secondary site does not affect performance too much. With an active/active architecture, production is effectively split between the company's two data centers, so a disaster at one of them mechanically halves the available processing capacity, and thus potentially degrades the performance of the surviving infrastructure.
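That sizing point can be checked with simple arithmetic; the workload and per-site capacity figures below are hypothetical.

```python
# Sizing check for an active/active pair (illustrative figures): if one data
# center is lost, the surviving site must absorb the whole workload, so each
# site has to be sized for ~100% of production, not 50%.
workload = 12_000            # e.g. transactions/s handled by the whole platform
site_capacity = 9_000        # capacity of ONE data center

normal_util = workload / 2 / site_capacity        # load split across two sites
degraded_util = workload / site_capacity          # everything on the survivor

print(f"both sites up : {normal_util:.0%} utilisation per site")
print(f"one site lost : {degraded_util:.0%} utilisation on the surviving site")
if degraded_util > 1:
    print("-> undersized: expect degraded performance after a site failure")
```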