
Accelerating Storage Innovation in the Next Data Decade – Dan Inbar shares key trends

Before joining Dell Technologies, Dan Inbar served as CEO of N-trig until its acquisition by Microsoft and held a number of key positions at SanDisk, M-Systems, e-Mobilis, Top Image Systems Ltd and NICE Systems Ltd, bringing to his role over two decades of experience in storage technologies.

Today, he’s sharing his insight on how technology has transformed almost every imaginable business into an IT-driven business, with data storage, along with its access, implementation and application, becoming ever more crucial. Here are Dan’s thoughts on Accelerating Storage Innovation in the Next Data Decade.


Accelerating Storage Innovation in the Next Data Decade

Dan Inbar, president and general manager, Storage, Dell Technologies

Over the previous decade, technology transformed nearly every business into an IT-driven business. From farming to pharmaceuticals, these information technology developments have led organizations to reimagine how they operate, compete, and serve customers. Data is at the heart of these changes and will continue its transformative trajectory as organizations navigate the waves of technological progress in the next “Data Decade.”

In data storage – which touches every IT-driven business – the pace of innovation is accelerating, yet most enterprises continue to struggle with data’s explosive growth and velocity. Getting the highest use and value from their data is becoming ever more critical for organizations, especially for those with data stores reaching exabyte scale.

To deliver strategic value in the enterprise, storage innovation must cross the capabilities chasm from simply storing and moving bits around to holistic data management.

In 2019, our Dell Technologies Storage CTO Council studied more than 90 key technologies and ranked which ones have the innovation potential to help storage cross that capabilities chasm in the next 5-10 years. This year, there are three key areas we believe will be difference-makers for organizations that are pushing the limits of current storage and IT approaches.

Let’s take a closer look.

Trend #1: Machine learning and CPU performance unlock new storage and data management approaches

This year, we will see new approaches that solve streaming data challenges, including the use of container-based architectures and software-defined storage. Customers in industries such as manufacturing, cybersecurity, autonomous vehicles, public safety and healthcare want to build applications that treat data as streams instead of breaking it up into separate files or objects.

Ingesting and processing streaming data presents unique challenges that strain traditional IT and storage systems. Because streaming workloads often change throughout the day, storage capacity and compute power must be elastic enough to accommodate them. This requires intelligence within the storage that can provide autoscaling on the fly.
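
As a rough illustration of that elasticity, the sketch below makes a scale-out decision for a hypothetical stream-ingest tier based on the observed data rate. The class, function and threshold names are illustrative assumptions, not any particular product’s API.

```python
# Minimal sketch of elastic scaling for a stream-ingest tier.
# All names and thresholds are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass
class IngestStats:
    mb_per_sec: float      # current ingest rate observed by the storage layer
    segment_count: int     # parallel stream segments currently open


def desired_segments(stats: IngestStats,
                     mb_per_sec_per_segment: float = 50.0,
                     min_segments: int = 1,
                     max_segments: int = 64) -> int:
    """Pick how many parallel segments (and hence how much capacity and
    compute) the ingest tier should run for the observed data rate."""
    needed = int(stats.mb_per_sec / mb_per_sec_per_segment) + 1
    return max(min_segments, min(max_segments, needed))


if __name__ == "__main__":
    # Daytime spike: the workload changes, so the tier scales out...
    print(desired_segments(IngestStats(mb_per_sec=900.0, segment_count=4)))   # -> 19
    # ...and scales back in overnight.
    print(desired_segments(IngestStats(mb_per_sec=40.0, segment_count=19)))   # -> 1
```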

When everything is treated as a data stream, event data can be replayed in the same way we watch a live sporting event on a DVR-enabled TV, where the program can be paused, rewound and replayed instantly. Until now, application developers have been limited in their ability to address use cases that leverage data as streams for capture, playback and archive. Enabling these capabilities will make it easier to build applications that support use cases never thought of previously.
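
To make the DVR analogy concrete, here is a toy in-memory stream that supports append and replay from an arbitrary position. It is a purely illustrative sketch, not the API of any real streaming storage system.

```python
# Toy append-only stream with DVR-style replay. Purely illustrative;
# real streaming storage systems expose far richer (and durable) APIs.

from typing import Iterator, List, Tuple


class EventStream:
    def __init__(self) -> None:
        self._events: List[Tuple[int, bytes]] = []   # (offset, payload)
        self._next_offset = 0

    def append(self, payload: bytes) -> int:
        """Write an event and return its offset (its position on the 'tape')."""
        offset = self._next_offset
        self._events.append((offset, payload))
        self._next_offset += 1
        return offset

    def replay(self, from_offset: int = 0) -> Iterator[Tuple[int, bytes]]:
        """'Rewind' to an earlier offset and re-read everything from there."""
        for offset, payload in self._events:
            if offset >= from_offset:
                yield offset, payload


if __name__ == "__main__":
    stream = EventStream()
    for reading in (b"temp=20", b"temp=21", b"temp=35", b"temp=22"):
        stream.append(reading)

    # Replay from offset 2 onward, as if rewinding a live broadcast.
    for offset, payload in stream.replay(from_offset=2):
        print(offset, payload)
```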

Dataset Management helps solve the data lifecycle problem

In the realm of data management, 2020 will usher in new approaches for organizations wishing to better manage the data that is distributed across many silos of on-prem and cloud data stores. Data growth has been outstripping the growth of IT budgets for years, making it difficult for organizations not only to keep and store all their data, but also to manage, monetize, secure and make it useful for end users.

Enter Dataset Management – an evolving discipline using various approaches and technologies to help organizations better use and manage data throughout its lifecycle. At its core, it is about the ability to store data transparently and make it easily discoverable. Our industry has been very good at storing block, file and object data, sometimes unifying these data types in a data lake. Dataset Management is the evolution of the data lake, providing customers with the ability to instantly find the data they want and make it actionable in the proper context across on-prem and cloud-based data stores.
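
As a rough sketch of what "easily discoverable" might look like, the example below models a minimal metadata catalog that spans on-prem and cloud stores and answers tag-based queries. The field names and query shape are assumptions for illustration, not a description of any Dell Technologies product.

```python
# Minimal sketch of a dataset catalog spanning on-prem and cloud stores.
# Field names and query semantics are illustrative assumptions only.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DatasetRecord:
    name: str
    location: str                   # e.g. "onprem-nas-07" or "s3://bucket/prefix"
    size_tb: float
    tags: Dict[str, str] = field(default_factory=dict)


class DatasetCatalog:
    def __init__(self) -> None:
        self._records: List[DatasetRecord] = []

    def register(self, record: DatasetRecord) -> None:
        self._records.append(record)

    def find(self, **wanted_tags: str) -> List[DatasetRecord]:
        """Return every dataset whose tags match all requested values,
        regardless of which store it physically lives in."""
        return [r for r in self._records
                if all(r.tags.get(k) == v for k, v in wanted_tags.items())]


if __name__ == "__main__":
    catalog = DatasetCatalog()
    catalog.register(DatasetRecord("scan-batch-113", "onprem-nas-07", 2.4,
                                   {"project": "trial-9", "stage": "raw"}))
    catalog.register(DatasetRecord("scan-batch-113-derived", "s3://acme-research/derived",
                                   0.3, {"project": "trial-9", "stage": "derived"}))
    for r in catalog.find(project="trial-9"):
        print(r.name, "->", r.location)
```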

Dataset Management will be especially useful for industries (e.g., media & entertainment, healthcare, insurance) that frequently have data stored across different storage systems and platforms (e.g., raw data generated by devices and instruments, derivative data at the project level, etc.). Customers want the ability to search across these data stores to do things such as create custom workflows. For instance, many of our largest media & entertainment customers are using Dataset Management to connect with asset management databases to tag datasets, which can then be moved to the correct datacenters for things such as special effects work or digital postprocessing, then to distribution and finally to archives.
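
A hypothetical sketch of that kind of tag-driven media workflow: a dataset’s stage tag determines the next site it should move to (effects, postprocessing, distribution, archive). The stages, site names and routing table are invented for illustration.

```python
# Hypothetical tag-driven routing of media datasets through a pipeline.
# Stages, site names and the routing table are invented for illustration.

NEXT_STAGE = {
    "ingest": ("vfx", "datacenter-la"),          # special effects work
    "vfx": ("post", "datacenter-london"),        # digital postprocessing
    "post": ("distribution", "cdn-origin"),
    "distribution": ("archive", "cold-archive"),
}


def route(dataset_tags: dict) -> dict:
    """Given a dataset's current stage tag, return where it should move next."""
    stage = dataset_tags.get("stage", "ingest")
    if stage not in NEXT_STAGE:
        return {"action": "hold", "reason": f"unknown stage {stage!r}"}
    next_stage, destination = NEXT_STAGE[stage]
    return {"action": "move", "next_stage": next_stage, "destination": destination}


if __name__ == "__main__":
    print(route({"title": "feature-042", "stage": "vfx"}))
    # -> {'action': 'move', 'next_stage': 'post', 'destination': 'datacenter-london'}
```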

Traditional methods for managing unstructured data only take you so far. Thanks to technological advancements like machine learning and higher CPU performance, we see Dataset Management growing further in prominence in 2020, as it offers organizations a bridge from the old world of directories and files to the new world of data and metadata.

Trend #2: Storage will be architected and consumed as Software-defined

We can expect to see new storage designs in 2020 that will further blur the line between storage and compute.

Some of our customers tell us they are looking for more flexibility in their traditional SANs, wishing to have compute as close to storage as possible to support data-centric workloads and to reduce operational complexity.

With deeper integration of virtualization technologies on the storage array, apps can run directly on the same system and be managed with standard tools. This could suit data-centric applications with heavily storage- and data-intensive operations (e.g., analytics and demanding database apps), as well as workloads that require low transactional latency and access to large amounts of data.

This isn’t HCI in the classic sense, but rather about leveraging and interoperating with existing infrastructure and processes while also giving a greater degree of deployment flexibility to suit the customer’s specific environment and/or application. It could open up new use cases (e.g., AI/ML and analytics at edge locations and/or in a private cloud, workload domains, etc.); it could also lead to lower cost of ownership and simplification for IT teams and application owners, who don’t always have to rely on a storage admin to provision or manage the underlying storage.

Software-defined Infrastructure no longer just for hyper-scalers

Software-defined infrastructure (SDI) is also becoming a greater consideration in enterprise data centers to augment traditional SANs and HCI deployments. Long the realm of hyper-scalers, SDI is now ready for adoption by traditional enterprises to redeploy certain workloads whose capacity and compute requirements differ from what traditional three-layer SANs can provide.

These are customers architecting for agility at scale who want the flexibility to rapidly scale storage and compute independently of each other, and who need to consolidate multiple high-performance (e.g., database) or general workloads. As enterprises consider consolidation strategies, they will bump up against the limits of traditional SANs as well as the unpredictable performance, costs and lock-in of cloud services. This is where SDI becomes a very viable alternative to traditional SANs and HCI for certain workloads.

Trend #3: High-performance Object storage enters the mainstream

As Object moves from cheap-and-deep cold storage and archive to a modern cloud-native storage platform, performance is on many people’s minds.

One of the reasons we see this trending upward this year is demand from application developers. Analytics is also driving a lot of demand, and we expect to see companies in different verticals moving in this direction.

In turn, the added performance of flash and NVMe is creating tremendous opportunity for Object-based platforms to support workloads that require speed and near-limitless scale (e.g., analytics, Advanced Driver Assistance Systems (ADAS), IoT, cloud-native app development, etc.). Side note: historically, Object storage hasn’t been fast enough for ADAS workloads, but all-flash is changing that conversation.

Flash-based Object storage with automated tiering to disk offers a cost-effective solution, particularly when a customer is talking about hundreds of petabytes or exabyte-scale. It allows you to move the data you need up to the flash tier to run your analytics and high-performance applications and then move the data off to a cold or archive tier when you’re done with it.
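
To illustrate that tiering pattern, the sketch below promotes objects needed by an active analytics job to a flash tier and demotes them to archive once they go cold. Tier names, thresholds and the policy itself are assumptions, not any specific product’s behavior.

```python
# Illustrative flash/disk/archive tiering policy for object storage.
# Tier names, thresholds and rules are assumptions, not a product's behavior.

import time
from dataclasses import dataclass

FLASH, DISK, ARCHIVE = "flash", "disk", "archive"
DEMOTE_AFTER_SECONDS = 7 * 24 * 3600   # demote a week after last access


@dataclass
class ObjectInfo:
    key: str
    tier: str
    last_access: float          # epoch seconds
    needed_by_active_job: bool


def next_tier(obj: ObjectInfo, now: float) -> str:
    """Decide which tier an object should sit on right now."""
    if obj.needed_by_active_job:
        return FLASH                                   # promote for hot analytics
    if now - obj.last_access > DEMOTE_AFTER_SECONDS:
        return ARCHIVE                                 # cold: push to archive tier
    return DISK                                        # warm: keep on capacity disk


if __name__ == "__main__":
    now = time.time()
    hot = ObjectInfo("adas/run-17/frame-000123.bin", DISK, now - 60, True)
    cold = ObjectInfo("adas/run-02/frame-000001.bin", FLASH, now - 30 * 24 * 3600, False)
    print(next_tier(hot, now))    # -> flash
    print(next_tier(cold, now))   # -> archive
```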

As Object becomes tuned for flash and NVMe, we expect a higher level of interest in Object for data that has traditionally been stored on file-based NAS, such as images, log data and machine-generated data.

As the pace of technology innovation accelerates, so too will the possibilities in storage and data management. We are standing with our customers at the dawn of the “Data Decade.”

If the last ten years brought some of the most dramatic changes in tech, just imagine what’s next.


[Article courtesy of Dan Inbar, president and general manager, Storage, Dell Technologies]