The IT managers’ TCO diet
Michael Cunningham, director of Hitachi Data Systems’ Business Consulting Practice, looks at some of the obstacles to optimising TCO in the data centre and suggests a strategy for meeting the challenge.
Across a range of industries, at all stages of the supply chain, everyone is talking about total cost of ownership (TCO). It is unquestionably an important concept to bear in mind, with direct relevance to the bottom line, wherever it applies.
While many IT planners assume that purchasing low-cost disk solutions will drive down the total cost of storage ownership, Hitachi Data Systems has observed that price alone does not equate to lowering operating expenses or reducing the total cost of disk ownership. Properly designing and implementing multi-tiered storage concepts can significantly and positively alter current and long-term costs.
The IT department, perhaps more than most, needs to be aware of the importance of TCO. However, keeping on top of the concept in a typically heterogeneous environment filled with a wide variety of systems and networks can be tricky. Perhaps a good starting point would be to clarify exactly what we mean when we talk about TCO.
Total cost of ownership is exactly what the name implies: the total purchase and operating cost of an asset (such as storage). In the case of planned storage growth or expansion (reactive in nature relative to demand), TCO can be effective for calculating the total lifecycle costs of competing or comparable solutions. To determine TCO, several costs that would be incurred over some number of years are accumulated (a rough sketch of this accumulation follows the list below):
• Purchased elements from the vendor (hardware, software, installation, verification, migration)
• IT departmental costs for installation (new training, new room preparation)
• Write-off costs if the systems being replaced are not fully depreciated, or are not at the end of the lease life
• Year-on-year environmental costs, such as electricity, air conditioning and floor space
• Maintenance costs (after the warranty period) for hardware and software
• Ongoing labour costs, training, and vendor fees not otherwise covered.
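As a rough, purely illustrative sketch of that accumulation, the sums can be expressed in a few lines of Python. Every figure and the five-year horizon below are placeholder assumptions chosen for the example, not Hitachi Data Systems benchmarks or survey data:

# Illustrative five-year TCO accumulation; all figures are placeholder assumptions.
YEARS = 5

one_time_costs = {
    "hardware_software_purchase": 400_000,   # purchased elements from the vendor
    "installation_and_migration": 50_000,    # IT departmental costs, training, room preparation
    "write_off_of_replaced_kit": 30_000,     # undepreciated assets or early lease exit
}

annual_costs = {
    "power_cooling_floor_space": 20_000,     # year-on-year environmental costs
    "post_warranty_maintenance": 35_000,     # hardware and software maintenance
    "labour_training_vendor_fees": 90_000,   # ongoing labour, training and vendor fees
}

tco = sum(one_time_costs.values()) + YEARS * sum(annual_costs.values())
print(f"Five-year TCO: {tco:,}")             # 1,205,000 in this example

The point of laying the numbers out this way is that the recurring items, multiplied over the life of the asset, quickly outweigh the initial purchase price.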
TCO is most effective when performing a head-to-head comparison of two or more storage solutions (either vendor options or topology/architecture options). Best practice is to include TCO requirements in all competitive bid situations, with the IT department itself establishing the parameters of the TCO cost models.
Since many people look at capital expenditures (CAPEX) as a one-time cost, they are fooled into thinking that the lowest purchase price is also the lowest TCO. This is not the case in most storage and data centre environments. Storage purchase decision-makers should be looking at the TCO of competitive solutions, and not just at the lowest price per megabyte presented by vendors in the final negotiations.
The TCO of an organisation’s storage environment therefore must take into account much more than simply the purchase and maintenance costs of the hardware and software in use. There is a wide range of other aspects that the IT manager needs to build into the equation; a small selection of these is discussed here.
Storage administration
Storage administration is simply the person hours and effort spent managing the storage infrastructure, and can be measured in the number of terabytes of data that can be managed by one full-time equivalent (FTE) employee. As a rule, tiered and pooled storage architectures can achieve higher managed terabyte (TB) to administrator ratios than standalone islands.
The cost of labour can reach up to 45 percent of the TCO of a storage infrastructure, and is therefore often one of the first targets when budgets are squeezed. Although this can be effective, there is nearly always a trade-off. It is important to ensure that the workforce remains large and skilled enough to carry out the tasks in hand. However, there are a few tried and true means of driving labour costs down:
Systems management effectiveness: enabled by an advanced storage architecture and strategy, this is key to managing more with less. A five- to seven-fold improvement is commonly observed once a centralisation activity is complete.
Different tiers of storage: differing levels of management effort can be applied, commensurate with the value and use of the data on each tier. In small-to-medium environments (less than 100TB) the reduction in labour cost may be negligible; for larger environments, different service and operating levels can be applied with different levels of labour content. Some of the labour activities that can be directly impacted in a multi-tiered storage architecture include:
• Storage provisioning time
• Management and labour effort
• Back-up support, including recovery time and snap copy management
• Operational control and monitoring
In a single pooled environment, the data value is hard to segregate, so everything is managed in a uniform manner (relative to labour effort). Provisioning is about the same, and problem resolution and event handling are uniform, since an error may affect critical and non-critical applications alike. In a multi-tiered architecture, different levels of support, proactive management time and the corresponding cost can be properly allocated.
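To make the labour lever concrete, here is a small, hypothetical calculation of how the managed-terabytes-per-administrator ratio feeds through to annual labour cost. The capacity, salary and ratios are illustrative assumptions only, chosen simply to show an improvement of the five- to seven-fold kind described above:

# Hypothetical effect of improving the managed-TB-per-FTE ratio on annual labour cost.
# The salary figure and ratios are assumptions for illustration, not measured values.
managed_capacity_tb = 500
fully_loaded_salary = 100_000                # assumed annual cost per storage administrator FTE

def annual_labour_cost(tb_per_fte: float) -> float:
    ftes_required = managed_capacity_tb / tb_per_fte
    return ftes_required * fully_loaded_salary

before = annual_labour_cost(25)              # standalone islands of storage
after = annual_labour_cost(150)              # centralised, tiered and pooled storage
print(f"Before: {before:,.0f}  After: {after:,.0f}  Saving: {before - after:,.0f}")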
Utilisation
Software for thin provisioning, such as Hitachi Dynamic Provisioning™, gives users the ability to allocate virtual disk storage according to anticipated future storage needs, without the need to put aside large amounts of physical disk storage. If and when the need for further physical resources does arise, these can be purchased at a lower cost, with implementation occurring smoothly and without disruption to essential applications. This feature exploits Hitachi Data Systems’ existing back-end virtualisation and storage management services, bringing virtualisation to the front end and giving organisations new ways of viewing their storage resources.
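The underlying idea can be sketched in a few lines; this is a conceptual illustration only, not the Hitachi Dynamic Provisioning implementation or its interface:

# Conceptual model of thin provisioning: hosts are allocated large virtual volumes,
# but physical capacity is only consumed as data is actually written.
class ThinPool:
    def __init__(self, physical_tb: float):
        self.physical_tb = physical_tb        # disk actually installed in the pool
        self.written_tb = 0.0                 # physical capacity consumed so far
        self.virtual_allocated_tb = 0.0       # capacity the hosts believe they own

    def provision_volume(self, size_tb: float) -> None:
        # Allocation is virtual; no physical disk is set aside yet.
        self.virtual_allocated_tb += size_tb

    def write_data(self, size_tb: float) -> None:
        if self.written_tb + size_tb > self.physical_tb:
            raise RuntimeError("Pool exhausted: buy and add physical disk when actually needed")
        self.written_tb += size_tb

pool = ThinPool(physical_tb=50)
pool.provision_volume(200)                    # hosts see 200TB allocated up front
pool.write_data(30)                           # only 30TB of physical disk is consumed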
The potential for a corresponding improvement in efficiency is strong. CIO consultancy IT Centrix recently estimated that a similarly configured best-of-breed solution from a competitive vendor will consume 60 percent more IT budget over three years than Hitachi Data Systems’ Universal Storage Platform™ V with Dynamic Provisioning. This illustrates a marked reduction in the cost of storage configuration, administration and data movement.
Storage performance
Poor storage performance can affect the business as surely as running out of disk space. In businesses that rely on online interaction to drive business, a slow response can mean customers look elsewhere. At the same time, internal productivity can suffer as staff are unable to access the information they need. At worst, the business itself may be in breach of legal regulations if data is unavailable for prolonged periods, and its bottom line can be seriously affected. Meanwhile, IT staff work overtime to address and resolve issues before any of these consequences are reached.
Precisely quantifying the cost of performance can be difficult, so it is perhaps easier to think of it in terms of impact on revenue. If storage performance is low, then overall transaction throughput from disk to user will be compromised, and this in turn brings difficulties for revenue-generating business operations.
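One simple way to frame that revenue view, using entirely hypothetical numbers:

# Hypothetical revenue framing of degraded storage performance.
# Transaction volume, value and loss rate are assumptions for illustration only.
transactions_per_day = 50_000
revenue_per_transaction = 20.0
lost_to_slow_response = 0.02                  # assume 2% of transactions abandoned when response is slow

daily_revenue_at_risk = transactions_per_day * revenue_per_transaction * lost_to_slow_response
print(f"Revenue at risk per day: {daily_revenue_at_risk:,.0f}")   # 20,000 in this example

Even a crude model of this kind helps put a storage performance problem in front of the business in terms it recognises.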
Planning is essential to ensure that the storage network performs to an acceptable standard, and organisations should work with suppliers to identify where improvements can be made.
The Hitachi Universal Storage Platform™ V, via Hitachi Dynamic Provisioning, can create large logical storage pools. Each pool can be configured with the RAID level required and with the desired number of disks to support target IOPS rates. The pools can be created according to application needs, or parts thereof: for example, database logs and temp space to one pool type and database table spaces to another, each having different performance characteristics. This creates significant performance improvements for applications and dispenses with the need for host-based volume managers to create volumes over a large number of disk drives.
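A simplified view of how such pools might be laid out by application need is sketched below; the pool names, RAID levels and IOPS targets are hypothetical, and this is not USP V or Dynamic Provisioning configuration syntax:

# Hypothetical mapping of application data types to storage pools, each sized and
# configured for a target workload. Not actual USP V configuration syntax.
storage_pools = {
    "db_logs_and_temp": {"raid_level": "RAID-10", "disks": 32, "target_iops": 20_000},
    "db_table_spaces":  {"raid_level": "RAID-5",  "disks": 64, "target_iops": 12_000},
    "file_and_archive": {"raid_level": "RAID-6",  "disks": 48, "target_iops": 3_000},
}

def pool_for(data_type: str) -> str:
    # Route each class of data to the pool whose performance profile matches it.
    routing = {"redo_log": "db_logs_and_temp", "table_space": "db_table_spaces"}
    return routing.get(data_type, "file_and_archive")

print(pool_for("redo_log"))                   # db_logs_and_temp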
Storage waste
Wasted storage is the capacity that has been purchased, and potentially allocated to a host, but is not being used. Maintaining 100 percent utilisation is unrealistic, so some white space is inevitable and even desirable, but many IT managers strive to keep it to a minimum. Optimising utilisation also means that costs associated with storage, such as software, power and cooling, are not wasted, creating better cost efficiency all around. Improvements from 25 to 60 percent total disk capacity utilisation are not uncommon, allowing reductions in future procurements.
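As a rough, hypothetical illustration of what moving from 25 to 60 percent utilisation means for procurement:

# Hypothetical procurement impact of lifting utilisation from 25% to 60%.
data_to_store_tb = 100                        # assumed amount of data that must be stored

capacity_needed_at_25 = data_to_store_tb / 0.25   # 400TB would have to be purchased
capacity_needed_at_60 = data_to_store_tb / 0.60   # roughly 167TB would have to be purchased

print(f"Capacity avoided: {capacity_needed_at_25 - capacity_needed_at_60:.0f} TB")   # about 233TB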
There are a number of ways in which this issue can be tackled, including data de-duplication, tiered storage to move wasted space to a lower tier, thin provisioning and storage virtualisation. While these approaches can all bring great value to the business and positively affect TCO, their introduction needs to be planned with the help of storage suppliers to ensure organisations do not end up spending more on the administration of the solution than they did on the problem.
Data centre environments
The less space a storage infrastructure takes up, the less it costs to run. This is due to the costs implicit in keeping a data centre operational: physical building space, power and air conditioning, for example.
By virtualising the storage environment, IT departments can reduce the number of physical servers needed, thereby minimising floor space taken up and reducing power and cooling requirements.
The Hitachi Universal Storage Platform™ V is currently the only large-scale heterogeneous virtualisation solution in the industry, capable of supporting vast amounts of external storage capacity. It now supports 247 petabytes, which is 670 percent more than the original version of the USP and 24,600 percent more than any other storage system. This can help speed and simplify storage virtualisation decisions, which can in turn mean a more immediate and noticeable impact on the IT department’s cost efficiency.
Security and data protection
Data security is governed in various ways. Some requirements dictate that it should just be secured while at rest (i.e. sitting in the storage stack), while others demand that data is protected ‘in flight’ as well. Costs for meeting these requirements can vary depending on the age, access patterns and recent movement of the data in question.
It is also important to ensure an effective data protection strategy is in place in the event of a serious disruption to the system, such as a catastrophic event. Information must be kept secure, ideally at multiple sites, and a fast recovery plan must be in place.
Unlike its competitors, Hitachi has engineered data security into the core of its storage controllers. The Hitachi USP V has the unique feature of being able to isolate and ‘fence off’ high security applications at any point in the storage hierarchy – from channel, to cache, to storage device. Additional security features include:
• Controller-based data shredding
• LUN Security for LUN-level access control to worldwide names (WWN)
• Write Once Read Many (WORM) software for tamperproof long-term data retention
• Role-based access
• An Audit Log file, which stores a history of all user access operations performed on the system
• Fibre Channel Secure Protocol Authorisation A (FC-SP Auth A) for the authentication of Fibre Channel entities
• Support for encryption appliances such as NeoScale and Decru.
Hitachi is also the only supplier in the market to offer a 100 percent data availability guarantee on its high-end products. This all combines to deliver a significant reduction in the time and effort required for backup and remote replication processes.
Biting off what you can chew
A helpful way of approaching this is to think of improving storage TCO as like going on a diet. Simply identifying the aim is not enough to get you there: you need a plan – be it diet, exercise, cutting out chocolate. You also cannot try to do everything at once, or you risk being overwhelmed by the task.
The secret is to identify key areas where improvements are needed or can be made, and then understand the best way to meet your goal. This might be by introducing thin provisioning or virtualisation, say, or by defining better management processes to make the best of the architecture you already have. A great way to determine the kind of diet you want (or need) is to undertake a Storage Economics Strategy service from Hitachi Data Systems. This vanguard approach to TCO reduction (with a tangible ROI) uses a detailed methodology to identify and realise significant cost-saving opportunities within your storage environment. Covering people, process and technology, storage economics delivers “bankable” capital and operational cost savings without compromising service delivery.
Best practices and lessons learned
After completing hundreds of storage assessments and economic justifications over the past several years, Hitachi Data Systems has identified and documented best practices and lessons learned in storage economics. They are summarised below:
• Any storage economic analysis needs to be driven by a joint effort between the finance department and the IT group responsible for storage decisions.
• ROI savings have to be defendable and believable within the organisation.
• A balanced team is essential for conducting an unbiased economic analysis. Best-in-class teams include technologists, operations, finance, end-customer advocates (who can provide a meaningful storage economic model) and vendor stakeholders.
• Separating hard savings from soft savings gives credibility to storage justifications. Hard savings are typically those that generate tangible financial variances that can be counted or taken out of a particular department’s budget. Soft savings are those that are recognised and appreciated, but may not be defendable to the point that real money is ever saved.
• Every business considers different hard and soft savings in TCO and ROI analysis, and conditions are never equal. When the IT and finance departments can agree on a wide range of cost categories to model and evaluate, the economic analysis is less biased towards any one functional area.
• Company finance officers (CFO, COO) need to drive the economic targets, IRR, payback goals and so forth. These should not be driven by the IT department.
• Ask vendors to articulate and define economic value in real currency terms that can be applied to a particular business. The IT department should set the criteria and categories that are meaningful (hard savings), and not leave this to vendors.
• Economic factors need to be part of a long-term storage strategy. Storage economics should work in partnership with other storage qualities, such as availability, scalability, performance, manageability and the like.
• Storage economic techniques are applicable in both good and bad economic times.
• Economic positioning needs to start early in strategy and architecture definitions, and not be left to tactical purchasing events alone.
• Review, revise and modify ROI and TCO calculations after the fact. Periodically review cost-reduction assumptions in order to provide consistency in storage and IT economic decisions. As time permits, validate theoretical ROI and TCO parameters with real, measured cost components.
• Identifying the economic hero(es) is essential to finding and reclaiming storage-related OPEX savings. The absence of an economic hero leads to dead-end results that are never implemented, since there is no sponsor for the changes. If a cost saving does not have a stakeholder, it may not be perceived as real and will often be disallowed.
• Do not confuse ROI and TCO tools with the real intent of the economic messaging; any tool is a means to the end.
• Purchase price per megabyte is the wrong single metric to use in economic decisions for storage; the same is true of any other part of the storage ecosystem (HBAs, tape, networks and so forth).
• If costs cannot be characterised, and owners for those costs identified, then an ROI activity will be a waste of time.
• In detailed ROI analysis, data collection on the current environment (capacity, costs, performance) is the most problematic element of the work.
• More than one economic perspective is often necessary. IT planners need to be flexible in CAPEX and OPEX categories and not limit the savings to one or two categories alone.
• Even with superior economic savings potential, if the IT department is unable or unwilling to change operating parameters, processes, procedures or vendors as necessary, then economic motivators alone will be insufficient to enable change.
• Do not allow vendors alone to define and characterise TCO and ROI variables and categories.