Automated tiering improves storage ROI, particularly for firms deploying solid-state drives in high-performance environments. Auto-tiering is a key enabling technology that lets storage administrators improve workload performance and the user experience by better understanding the value of their data.
Storage administrators have generally over-provisioned storage for each new application they rolled out, particularly if it was business-critical. To assure performance, this meant assigning the most expensive, Tier 1, storage to the application.
The purchase of Tier 1 storage for all of these applications has not only escalated storage costs, which now account for approximately 20% of the total IT budget, but has also been the main cause of low capacity utilization. In fact, a recent InformationWeek survey showed that storage managers are still using only 54% of their installed storage capacity.
Storage tiering assigns data based on its value as well as its access and retention needs to cut enterprise data storage costs and address storage capacity issues. Traditionally, tiering was external to storage subsystems, with different media (memory, disk or tape) representing different tiers. Tiers were defined by the performance, capacity and cost basis of the storage, and were labeled Tier 1, Tier 2, Tier 3, and so on.
Figure 1. External Storage Tiering: Data Moves Between Tiers
Source: Tech-Tonics
Storage tiering is an ongoing and complicated process. Traditionally, administrators had to assign a cost basis to their storage and then define tiers accordingly by manually classifying the data. Tiers would then be applied to applications and the data that was part of those applications. Even though data migration tools helped ease the process, assigning data to the right tiers remained complex.
Efficiencies through automation
Automated storage tiering has evolved as a solution, particularly for virtualized and cloud-based environments where performance demands are high. Auto-tiering software helps storage administrators flexibly optimize utilization while reducing latency. It automates the process of classifying data at the application layer as well as the movement of data to the optimum tier of storage to maximize utilization.
Administrators still set the policies as they pertain to performance, cost and space requirements. The software helps define the level of activity that determines whether data is "hot" or not. In most organizations, an estimated 60% of data consists of user files. These files see a burst of frequent activity within the first 30 days after they are created, but after 60 days, 80% of them are seldom used.
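To make the policy idea concrete, the sketch below shows how the age and activity pattern described above might be expressed as simple classification rules. It is an illustration only, not any vendor's implementation; the thresholds, tier labels and FileRecord structure are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical thresholds reflecting the pattern described above:
# user files are busiest in their first 30 days and mostly idle after 60.
HOT_AGE = timedelta(days=30)
COLD_AGE = timedelta(days=60)

@dataclass
class FileRecord:
    path: str
    created: datetime
    last_accessed: datetime

def classify(record: FileRecord, now: datetime) -> str:
    """Assign a tier label using simple age/activity policy rules."""
    age = now - record.created
    idle = now - record.last_accessed
    if age <= HOT_AGE or idle <= HOT_AGE:
        return "tier1"   # hot: keep on the fastest storage
    if age <= COLD_AGE:
        return "tier2"   # cooling: mid-range disk
    return "tier3"       # cold: capacity or archive tier

# Example: a file created 90 days ago and untouched for 70 lands on tier3.
now = datetime.now()
rec = FileRecord("/share/report.docx", now - timedelta(days=90), now - timedelta(days=70))
print(classify(rec, now))
```

In a real auto-tiering product these rules are applied continuously by the array or tiering software rather than by a standalone script, but the policy inputs are the same: data age, recent activity and the administrator's cost and performance targets.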
IOPS (input/output operations per second), which measures how many read and write operations the storage can service, and throughput, or the bandwidth requirements of each application, are the best measures for gauging user demand. Monitoring and analytics let the administrator set a time period over which the software observes the IOPS, or demand patterns, for specific applications and data. The software then controls when to move data to lower tiers as usage "cools." Newer predictive and prescriptive capabilities can recommend data movements based on past observations so that the mix remains optimized.
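The following minimal sketch illustrates that monitoring loop: average the IOPS readings over an administrator-chosen window, demote when demand has cooled, and make a naive trend-based forecast in the spirit of the predictive features mentioned above. The window length, the cooling threshold and the forecasting rule are all assumptions for illustration, not parameters of any particular product.

```python
from statistics import mean

# Hypothetical policy values: monitor IOPS over a rolling window and
# demote a volume once its average demand falls below the threshold.
WINDOW = 7            # observation period, in daily samples
COOL_IOPS = 500       # below this average, the data is considered "cool"

def should_demote(iops_samples: list[float]) -> bool:
    """True when the most recent window of IOPS readings has cooled."""
    if len(iops_samples) < WINDOW:
        return False                      # not enough history yet
    recent = iops_samples[-WINDOW:]
    return mean(recent) < COOL_IOPS

def predict_next(iops_samples: list[float]) -> float:
    """Naive one-step forecast: extend the recent linear trend."""
    recent = iops_samples[-WINDOW:]
    slope = (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
    return recent[-1] + slope

# Example: a workload whose demand has steadily tailed off.
samples = [2400, 2100, 1800, 1200, 800, 450, 300, 250]
if should_demote(samples):
    print("demote to lower tier; forecast IOPS:", round(predict_next(samples)))
```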
Solid-state drives and NAND flash as external devices became known as Tier 0, owing to the temporary nature of the data residing there, especially when used simply as a cache. The appearance of mixed-drive arrays combining solid-state drives (SSDs) and hard disk drives (HDDs) has given administrators more tiers of storage to choose from, while extending the definition of tiering to internal (within the array) as well as external.
Figure 2. Definition of Storage Tiers
Source: Tech-Tonics
As data matures through its lifecycle, whether it begins as hot data or not, tiering moves it to lower, less expensive storage media, better matching workload requirements with actual storage needs. This improves utilization while reducing overall storage costs through fewer new purchases and lower energy consumption (every 15K rpm disk drive uses 7-15 watts of electricity). Automated tiering, which is now a core function in most storage arrays, handles the data movement regardless of whether the tiering is internal or external.
Various surveys suggest that approximately 60% of IT shops have implemented storage tiering in their environments to improve capacity utilization. With nearly the same percentage having deployed solid-state drives (SSDs), half of these shops use automated storage tiering tools. Auto-tiering is a must-have for SSDs and for highly virtualized and cloud-based storage environments because of the need to keep active data in the fastest tier in as close to real time as possible.
In conclusion, automated tiering should be the centerpiece of an evolving storage strategy. The technology can improve ROI and reduce TCO by focusing on the value of data while enhancing data management and governance, risk and compliance (GRC) initiatives.