Industry News

LOS ALTOS, Calif. – March 7, 2017 – Primary Data today announced powerful new features that transform its DataSphere software platform into a metadata engine that automates the flow of data to help enterprises meet evolving application demands at petabyte scale. Released today, DataSphere 1.2 optimizes scale-out NAS for performance without bottlenecks, integrates easily with the cloud, automates data management, and provides client and file visibility across billions of files hosted on different storage systems.

“With DataSphere, customers are able to cut overprovisioning costs by up to 50 percent, and those savings easily run into the millions when you are managing petabytes of data,” said Lance Smith, Primary Data CEO. “In today’s 1.2 release, DataSphere has taken another step forward, automating the management of over a billion files as an enterprise metadata engine. With DataSphere, enterprises can significantly accelerate performance, overcome vendor lock-in, and easily leverage the cloud to maximize both savings and efficiency.”

The updates in DataSphere 1.2 evolve the platform from its early focus on test and development environments to meeting demanding enterprise production requirements, with the scalability, broad platform support, and reliability, availability and serviceability needed to automate the flow of data to the right storage at the right time. Customer-driven features and capabilities enable DataSphere to serve diverse enterprise environments as a resilient, enterprise-ready platform with robust data services. New features in DataSphere 1.2 include:

  • Enterprise scalability with automated management of billions of files enables companies to serve and manage data at petabyte scale
  • Supercharged scale-out NAS performance for unstructured data and other workloads, with vendor-agnostic support for Dell EMC Isilon and NetApp ONTAP solutions
  • Accelerated cloud access with direct interfaces for Amazon S3 and compatible cloud platforms, scaling cloud uploads and downloads linearly while preserving the namespace for applications
  • Support for diverse enterprise environments, with expanded support for Linux, macOS and Windows, including SMB support for Windows Server 2008/Windows 7 and later
  • Enterprise reliability, availability and serviceability, including non-disruptive HA failover and volume retrieval that ensure rapid recovery without impact to ongoing I/O processing
  • Enhanced data services, including cloning offloaded directly from clients, which preserves application performance and optimizes capacity usage
  • Visibility into file and client performance, including hot file identification and real-time performance graphs across different storage resources on the user dashboard
  • Faster performance through advanced metadata algorithms and improved resource usage, maintaining client I/O even while data is in flight

“Automating the flow of enterprise data with a metadata engine like DataSphere is essential now that we create so much data every day,” said Steve Wozniak, Primary Data Chief Scientist. “Some data is hot and valuable, while other data quickly gets cold and needs to be kept somewhere that will keep costs low. DataSphere automatically makes sure the storage you have is always serving the right data, and makes it easy to add new resources like the cloud, even when you are managing billions of files. Simplicity like this isn’t easy to achieve, but once you find a better way, you wonder what you used to do without it.”

DataSphere enables enterprises to place the right data on the right storage at the right time across enterprise infrastructure and the cloud to automatically meet evolving application demands without interruption. The DataSphere software platform virtualizes data by splitting the control path from the data path, separating data from the underlying storage so that it can be managed independently of hardware.

By creating a global namespace that spans cloud, shared and local storage, DataSphere helps enterprises overcome performance bottlenecks without buying new hardware. DataSphere’s powerful policy engine flows data to the ideal storage resource to automatically meet performance, price and protection requirements. In addition, DataSphere can monitor and move cold data to lower cost tiers like the cloud while maintaining accessibility. Moving files to fast flash resources accelerates performance and optimizes existing storage investments without disrupting applications.

“Primary Data’s updates to DataSphere 1.2 provide a metadata engine with the scale and resilience to automate insight across heterogeneous storage environments with billions of files,” said Jeff Kato, Senior Analyst, The Taneja Group. “The ability to harmonize and modernize capabilities across diverse types of traditional storage while still improving overall performance and efficiency is a unique approach in the enterprise market today.”

DataSphere is now available from Primary Data and its reseller partners. To learn more, contact us at [email protected].

About Primary Data

Primary Data automates the flow of data to ensure the right data is in the right place at the right time across enterprise infrastructure and the cloud to meet evolving application demands with its DataSphere platform. The storage and vendor-agnostic DataSphere architecture is based on a metadata engine that automatically moves data to the most appropriate resource to meet data requirements without application interruption. DataSphere helps enterprises overcome performance bottlenecks, integrate with the cloud for savings and active archival, and easily adopt new resources from any vendor. DataSphere enables customers to reduce overprovisioning by up to 50 percent, generating savings that easily run into the millions for enterprises operating at petabyte scale. To learn more, visit us at www.primarydata.com or follow us on Twitter at @Primary_Data.