
Flash 2.0 - the future of memory

Consumer appetite for Flash memory is insatiable, and enterprise IT is also taking note. However, to leverage Flash’s full potential, IT leaders need to rethink the way they apply and share the technology.
By Darren McCullum · 6 Aug 2013

Over the last couple of years, we’ve witnessed a near-insatiable consumer appetite for Flash memory, rapidly creating a billion-dollar industry. From iPhones and tablets to Ultrabooks, consumer devices have accounted for 85 per cent of Flash sales. The remaining 15 per cent has garnered the attention and fascination of enterprise IT, where Flash is poised to become the next “rock star” technology in the datacentre.

While early adopters – primarily those powering the web – have embraced Flash for its blazing speed, many enterprise IT leaders still treat the technology as “Flash 1.0”, using the first generation for tactical, one-dimensional deployments: adding Flash as a quick fix directly into an over-burdened server or storage array for a single purpose.

As Flash matures and is assimilated into larger and more complex computing environments, it’s beginning to experience growing pains. New processors and power-hungry applications are demanding greater density and more power. Datacentres are requiring greater flexibility and lower prices to facilitate broad adoption across servers and storage in complex environments like hybrid cloud computing. And so begins the rite of passage into a grown-up technology: Flash 2.0.

In its 1.0 version, Flash has kept pace with Moore’s Law by doubling in density to match the increasing core count of contemporary Intel processors – facilitating more simultaneous I/O requests, enabling more VMs, supporting larger data sets and meeting demand for greater throughput. The industry response has been greater density, faster speeds and lower prices, which, as history has shown, is a recipe for commoditisation.

In many ways, Flash is Flash: produced in huge volumes with very little differentiation and only minor variations in price and performance. Meanwhile, greater density paired with smaller form factors has forced component vendors to adopt costly and complex technology to counter the higher error rates and data volatility that accompany the trend. We can expect greater volatility, more frequent device failures and data corruption.

For Flash to truly mature into its 2.0 version and settle into the datacentre, the technology must evolve beyond the limits of a one-dimensional commodity component into a multi-dimensional, system-level architecture and implementation. To achieve this, the industry must advance software-centric, system-wide solutions that improve performance, ensure data integrity and deliver the promised efficiencies.

To leverage Flash for its fullest competitive advantage in the enterprise, IT leaders need to rethink the way they apply and share the technology across their physical and virtual environments. How much Flash do they need, and when should they scale it to meet heavy compute and analytical demands? Where should the technology be placed first? Which applications require the fastest access? How will Flash operate across complex and diverse datacentres?

A software-defined Flash storage approach can help answer these questions, enabling datacentres to realise the full potential of Flash as a tiered medium in a large-scale storage system. This software-defined framework for Flash management helps close the gap between datacentre demands and hardware limitations with a software layer that manages service levels, performance needs, costs and availability.
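To make the tiering idea concrete, here is a minimal sketch of how such a software layer might decide block placement by access frequency. It is illustrative only: the two-tier model, the capacity figure and names like read_block are assumptions for this example, not any vendor’s product or API.

    from collections import Counter

    FLASH_CAPACITY = 2          # blocks our hypothetical Flash tier can hold
    access_counts = Counter()   # read count per logical block
    flash_tier = set()          # blocks currently resident on Flash

    def read_block(block_id):
        # Record the access, then re-evaluate placement: the hottest blocks
        # earn Flash residency; the coldest resident is demoted to disk.
        access_counts[block_id] += 1
        hottest = {blk for blk, _ in access_counts.most_common(FLASH_CAPACITY)}
        if block_id in hottest and block_id not in flash_tier:
            if len(flash_tier) >= FLASH_CAPACITY:
                coldest = min(flash_tier, key=lambda blk: access_counts[blk])
                flash_tier.discard(coldest)
            flash_tier.add(block_id)
        tier = "flash" if block_id in flash_tier else "disk"
        print(f"read {block_id}: served from {tier}")

    # Repeated reads of "c" eventually evict the colder "b" from Flash.
    for blk in ["a", "b", "a", "c", "c", "c"]:
        read_block(blk)

Real tiering engines add write handling, ageing of counters and migration costs, but the core decision is the same: place the hottest data on the fastest medium.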

With this approach, Flash is positioned to address traditional and next-generation datacentre demands for scaling, availability and performance in multi-platform environments. To achieve scale, a software approach should manage both physical and virtual deployments of Flash across server and storage networks, providing greater performance for demanding big-data analytics and transactional processing.

Expectations of greater scalability tied to Flash also create demand for advanced data services. The framework can meet this with host-based technologies such as deduplication, smarter caching with array awareness, and improved pooling and clustering.
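As a simple illustration of one such data service, the sketch below shows content-hash deduplication: blocks with identical content are stored once and shared by reference. The structures and names here are hypothetical, chosen for clarity rather than drawn from any shipping product.

    import hashlib

    store = {}        # content hash -> block data, written once
    block_map = {}    # logical address -> content hash (a reference)

    def write_block(address, data):
        # Hash the content; only genuinely new content consumes capacity.
        digest = hashlib.sha256(data).hexdigest()
        if digest not in store:
            store[digest] = data
        block_map[address] = digest

    for addr, payload in [(0, b"backup"), (1, b"report"), (2, b"backup")]:
        write_block(addr, payload)

    print(f"{len(block_map)} logical blocks held as {len(store)} physical blocks")
    # -> 3 logical blocks held as 2 physical blocks

Production deduplication adds reference counting, collision safeguards and persistence, but the saving works the same way: a duplicate write costs a map entry, not Flash capacity.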

No doubt Flash in standalone, one-dimensional scenarios will continue to make our notebooks and devices faster. But to be widely deployed in the enterprise, the technology has to be tightly coupled with advances in software management. That is where we’ll see the real growth over the next several years.

Darren McCullum is Regional Manager of XtremIO for EMC ANZ.
