
Making heads and tails of the NetApp Flash Strategy

Flash technology in enterprise arrays has become the new battleground for array vendors responding to customers, analysts and flash startups, all of whom are crying out for better performance than conventional disk-based (aka spinning disk or spindle drive) solutions can provide.
 
IDC estimates that the solid state array (SSA) market is likely to grow at a CAGR of 34% through 2015 to become a US$1.2 billion market. That opportunity is driving storage startups and traditional spinning-disk vendors to rush to market with their interpretation of what customers want.
 
The result is a smorgasbord of offerings that appear (on paper) to be designed to meet every application’s performance requirement today. An ongoing challenge for everyone – vendors and users alike – is that business applications are a moving target. Apps are moving to the cloud in some respects; even the most contentious area – data security and the management thereof – is moving to the cloud. Most vendor solutions today are playing catch-up to new business applications springing up in response to the anticipated needs of customers.
 
A challenge for enterprises looking for that better performance is to understand whether a particular implementation of flash technology is the right solution for them. Data&StorageAsean (DSA) recently caught up with Karthik Ramarao, Chief Technology Officer and Director of Strategy and Technology for NetApp Asia Pacific, to better understand NetApp’s flash implementation strategy, which is currently available in three flavours: EF series; All Flash FAS; and FlashRay*.


 

With so many different types of solutions from a single vendor, you have to wonder what NetApp is trying to do here. A common message from NetApp spokespersons is the recognition that no one flash solution fits all. At the same time, enterprise IT is looking for a platform (or multiples of a platform) that is easy to operate and manage.
 
DSA: How does NetApp implement a hybrid storage solution? Do you mean using hybrid HDDs?
 
Ramarao: Businesses today face two contradictory requirements - the need to increase the performance of their storage subsystem while at the same time reducing the cost and resources needed to operate and manage it.
 
That means that customers are looking for the most efficient way to spend their limited storage budgets. Hybrid storage is an increasingly attractive way to do that.
 
NetApp, with its broad portfolio of flash- and spindle-based solutions, can architect solutions that meet the customer’s price-performance profile while delivering enterprise-class reliability and ease of use through a common, mature and time-tested operating system that is rich in features.
 
NetApp’s hybrid arrays are engineered to tailor-make a data management platform that uses IOPS- and latency-efficient flash (Flash Cache and SSDs) to provide fast access to hot data, while using a variety of spindle disks to provide cost-effective capacity.
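To make the hot/cold split concrete, the tiering idea described above can be sketched as a toy read-frequency policy. This is an illustrative sketch only, not NetApp's actual placement algorithm; the `HybridTier` class and its threshold are our own assumptions.

```python
# Minimal sketch of hot/cold data tiering: frequently-read blocks are
# served from flash, everything else stays on cheaper spinning disk.
# Illustrative only; the threshold and structures are assumptions.
from collections import Counter

class HybridTier:
    """Route frequently-read blocks to flash, the rest to HDD."""

    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold  # reads before a block counts as "hot"
        self.reads = Counter()              # per-block read counts

    def record_read(self, block_id):
        self.reads[block_id] += 1

    def placement(self, block_id):
        # Hot data lands on flash (cache/SSD); cold data stays on HDD capacity.
        return "flash" if self.reads[block_id] >= self.hot_threshold else "hdd"

tier = HybridTier()
for _ in range(3):
    tier.record_read("blk-7")   # read often: becomes hot
tier.record_read("blk-9")       # read once: stays cold
print(tier.placement("blk-7"))  # flash
print(tier.placement("blk-9"))  # hdd
```

Real hybrid arrays track access patterns at much finer granularity and migrate data automatically, but the cost logic is the same: pay for flash only where the IOPS are.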
 
NetApp’s hybrid storage platforms present the following benefits to our customers:
 

  • Enterprise-class reliability
  • Better price/performance with a choice of platforms and combinations
  • Superior scalability
  • Broader efficiency toolset (efficiency features depend on the platform choice)

 
We do not use hybrid HDDs: we deliver better price/performance by consolidating hot data on flash and using conventional disks that are cheaper than hybrid HDDs for capacity.
 
DSA: Does NetApp use SSDs in its all flash arrays or memory channel storage (nand chips) on memory slots on a backplane or motherboard? Why?
 
Ramarao: We use SSDs in our flash arrays. This provides our customers with a cost-effective solution that delivers scalability while satisfying most performance needs. The majority of applications today benefit more from this architecture, which offers an optimum combination of price, performance and capacity. Along with the operating systems that run these flash arrays, customers also benefit from a host of time-tested storage efficiency and performance features. As mentioned earlier, we provide a range of choices here so that we can meet the customer’s price-performance profile.
 
Some of the key features addressing distinct requirements on NetApp’s suite of flash arrays:
 
EF series (OS: NetApp SANtricity®), optimized for price/performance:

  • a focus on low-latency performance
  • lightweight data management with application integration as plug-ins
  • a small footprint and low power usage
  • a scale-up design

 
All-Flash FAS (AFF) (OS: Clustered Data ONTAP®), optimized for rich functionality:

  • a balance of performance and features
  • robust management with built-in efficiencies and protection
  • a larger footprint and higher power usage
  • a scale-out design

** Editor’s note: At the time of this interview, NetApp had yet to announce its FlashRay offering. As of writing, little is known about it beyond a blog post by NetApp.
 
DSA: I have read that for flash to really deliver, the array itself – all the way down to its architecture, meaning the operating system – has to be designed from the ground up around flash. ONTAP clustering is based on HDD storage. So how do you validate your performance claims if your base architecture is built on legacy HDD technology, aka spinning disks?
 
Ramarao: Most customers we speak to about flash solutions are asking for high performance along with reliability, scalability/capacity and cost effectiveness. For today’s NAND technologies such as SLC, MLC and TLC, we are able to deliver leading performance along with the other metrics mentioned above using our existing operating systems. Importantly, these operating systems are very mature and rich in features and functionality. Customers therefore get not only the performance but also features like non-disruptive operations, superior clustered scalability, storage efficiency features, tiered usage and so on.
 

DSA: Flash is still rated for its short write-cycle life. When computing the TCO benefits, how many years are you calculating for the life of the NetApp AFA? By the same token, how many years are you calculating for the life of a comparable HDD-based solution? (I am assuming that the TCO calculations are for NetApp products?)
 
Ramarao: Taking mean time between failures (MTBF) into account, an SSD is rated at two million hours. This is equivalent to FC/SAS drives. On that basis, NetApp provides an unconditional five-year warranty (three years standard, two years extended) for all available SSD models. The five-year service life guarantee on each SSD is based on a non-stop worst-case workload of 100% random writes over a five-year period.
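As a back-of-the-envelope check of what a two-million-hour MTBF implies, the rating can be converted to an annualized failure rate. The MTBF figure comes from the interview; the constant-failure-rate (exponential) model and the 24-drive shelf are our own assumptions for illustration.

```python
# Back-of-the-envelope: what a 2,000,000-hour MTBF implies per year.
# MTBF figure is from the interview; the exponential-failure model
# and the example shelf size are illustrative assumptions.
import math

MTBF_HOURS = 2_000_000      # SSD rating quoted above
HOURS_PER_YEAR = 24 * 365   # 8,760

# Annualized failure rate under a constant (exponential) failure model.
afr = 1 - math.exp(-HOURS_PER_YEAR / MTBF_HOURS)
print(f"Annualized failure rate: {afr:.3%}")   # roughly 0.44%

# Expected failures over the five-year warranty for a hypothetical 24-drive shelf.
drives, years = 24, 5
expected_failures = drives * years * afr
print(f"Expected failures over {years} years: {expected_failures:.2f}")
```

In other words, a two-million-hour MTBF corresponds to well under half a percent of drives failing per year, which is why such a rating can support a five-year warranty.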
 
 
DSA: Can you provide third party benchmarks of your all flash arrays?
 
Ramarao: As there are so many possible combinations of how flash may be deployed, we believe customers are best served when we size based on the application and other metrics of a particular customer. We provide a plethora of tools that can quite accurately simulate and size for the requirement so that there are no surprises for the customer. Therefore, I believe generally published benchmarks may not be the best yardstick. There are, however, white papers we publish along with our partners for specific use cases that may provide better and deeper insight.

 
* Editor’s note: It is interesting to note that NetApp markets the new FlashRay as built from the ground up. Does this mean that its other flash offerings are no more than extensions of traditional spindle-based technology?  

 
For a bird’s-eye view of NetApp’s all-flash storage strategy, watch NetApp VP of All-Flash Arrays, Ty McConney, discuss the company’s approach to flash.
