Servers process data that moves over networks and resides on data storage systems. Because organizations are retaining more data for longer periods, storage must now be considered part of a converged data center environment.
Virtual and converged environments need shared storage, which IT departments can provide using shared serial-attached SCSI (SAS) for in-cabinet and high-density blade servers. Other options include Ethernet-based iSCSI, NAS (NFS and CIFS), Fibre Channel over Ethernet (FCoE) and traditional Fibre Channel. Multiprotocol and unified storage solutions that support a mix of block (SAS, iSCSI, Fibre Channel and FCoE) and file (NFS and CIFS) operations have become popular for converged environments.
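The practical difference between the block and file protocols above shows up in how a host attaches to the storage. As a rough sketch on a Linux host (the address 192.0.2.10, the export path and the device name are placeholders, not a recommended configuration):

```shell
# File access (NFS): the storage system owns the file system;
# the host simply mounts the exported share.
mount -t nfs 192.0.2.10:/export/shared /mnt/nfs

# Block access (iSCSI): discover and log in to the target, then
# format and mount the raw LUN the array presents to the host.
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node --login
mkfs.ext4 /dev/sdb          # placeholder device name for the new LUN
mount /dev/sdb /mnt/block
```

With block protocols the server formats and manages the file system itself; with file protocols that work stays on the storage system, which is part of what makes NAS attractive for sharing among many hosts.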
Multiprotocol storage systems combine traditional block-based and file-based storage into a unified solution, reducing cost and complexity while adding flexibility, resiliency and scalability. In a converged environment, shared storage is important because it lets different physical servers access the storage resources that support the applications they host. Just as server virtualization removes the affinity of a given application to a specific physical server, shared storage frees VMs, their applications and their data from confinement to a dedicated system or server.
As IT organizations know, dependencies on information continue to grow; there is no such thing as a data recession. Addressing data growth, the associated infrastructure resource management (IRM) and data center infrastructure management (DCIM) tasks, and related data protection costs can be as simple as preventing certain data from being stored at all. Data footprint reduction (DFR) comprises a set of techniques, technologies and best practices that drive efficiencies so that more can be accomplished with available resources.
Reducing an organization’s data footprint has many benefits: it lowers demand on IT infrastructure resources such as power and cooling, storage capacity and network bandwidth, while at the same time enhancing application service delivery in the form of timely backups, business continuity/disaster recovery, performance and availability.
If an organization does not already have a DFR strategy, now is the time to develop and implement one across its environment. There are many different DFR technologies for addressing various storage-capacity optimization needs, some of which are time/performance-centric while others are space/capacity-focused. Different approaches use different metrics to gauge efficiency and effectiveness.
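The time/performance-versus-space/capacity trade-off can be seen with an ordinary compression library. A minimal Python sketch, with zlib levels and sample data chosen purely for illustration:

```python
import zlib

# Repetitive sample data standing in for a backup or log stream.
data = b"customer record 0042; status=active; " * 20000

fast = zlib.compress(data, level=1)   # speed-oriented setting
tight = zlib.compress(data, level=9)  # capacity-oriented setting

for label, blob in (("level 1", fast), ("level 9", tight)):
    print(f"{label}: {len(data)} -> {len(blob)} bytes "
          f"({len(data) / len(blob):.1f}:1)")

# Whichever setting is chosen, the data must round-trip losslessly.
assert zlib.decompress(fast) == data
assert zlib.decompress(tight) == data
```

The faster setting trades some reduction ratio for lower CPU cost per write, which is exactly the kind of metric that distinguishes a time/performance-centric DFR approach from a space/capacity-focused one.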
Which DFR technique is the best? That depends on what IT is trying to accomplish in terms of business and IT objectives. For example, is the goal to achieve maximum storage capacity at the lowest cost without concern for performance? Or does the enterprise need a mix of performance and capacity optimization? Is the IT staff looking to apply DFR to primary, online, active data and applications, or to secondary, near-line, inactive or offline data? Some forms of storage optimization reduce the amount of data and/or maximize available storage capacity. Other forms of storage optimization focus on boosting performance or increasing productivity.
In short, data footprint reduction is expanding beyond just deduplication for backup and other early-deployment scenarios. For some applications, reduction ratios are an important focus, so organizations need tools and techniques that deliver those results. Likewise, for applications that require performance first with some data-reduction benefit, there are tools that are optimized to meet those priorities.
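To see why backup workloads were an early deduplication success and why reduction ratios vary by workload, consider a minimal fixed-block deduplication sketch in Python. The 4 KiB chunk size and the synthetic "weekly full backup" data are illustrative assumptions, not any vendor's implementation (production systems typically use variable-length chunking):

```python
import hashlib

CHUNK = 4096  # fixed block size; real systems often chunk variably

def dedupe_ratio(data: bytes) -> float:
    """Return the logical-to-physical reduction ratio for fixed-size chunks."""
    unique = {hashlib.sha256(data[i:i + CHUNK]).digest()
              for i in range(0, len(data), CHUNK)}
    total_chunks = -(-len(data) // CHUNK)  # ceiling division
    return total_chunks / len(unique)

# 256 KiB of varied data standing in for one night's full backup.
monday = b"".join(hashlib.sha256(str(i).encode()).digest()
                  for i in range(8192))
week = monday * 7  # seven identical nightly fulls

print(f"reduction ratio: {dedupe_ratio(week):.1f}:1")  # prints: reduction ratio: 7.0:1
```

Seven identical full backups deduplicate to roughly the size of one, a 7:1 ratio; primary data with little repetition would yield a ratio near 1:1, which is why performance-first tools for active data lean on other techniques such as compression.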
Vendors have begun to expand their capabilities and techniques to meet changing needs and criteria. Those that offer multiple DFR tools will meet an organization's needs better than those that offer only a single, narrowly focused DFR function.