Quantum ATFS Provides High-Performance Data Ingestion

Quantum's All-Terrain File System improves data ingestion by instantly tagging and classifying data, enabling users to quickly find and retrieve files and gain real-time insights.

Karen D. Schwartz, Contributor

November 13, 2020


Continuing its expansion into data management, Quantum this week introduced a new software platform that manages data across its lifecycle.

The goal, said Noemi Greyzdorf, Quantum’s director of product marketing, is to help organizations better cope with fast-growing amounts of unstructured data.

“Companies today are performing ‘unnatural acts’ in attempts to manage their data – guessing at capacity and where data lives, searching file systems for days, uncertain about what they can delete. The results are silos of data and a loss of control and visibility,” she said. “Quantum is setting out to meet this challenge by enabling users to classify and visualize data for real-time insights, and leverage those insights to optimize storage resource allocation and utilization, to place data where and when it is needed by an application.”

The new software platform, called Quantum All-Terrain File System (ATFS), provides high-performance data ingestion by instantly tagging and classifying data as it is ingested. It also includes real-time search and analytics to quickly find and retrieve files. The system can scale to billions of files and delivers up to 12GB/sec of throughput. Results of scripts and queries are returned in seconds, and all system and data analytics are available in real time.

For many organizations, the ability to instantly classify and tag data is attractive. At the time of data ingestion, the file metadata that the system encounters is captured and stored in a database on a Non-Volatile Memory Express (NVMe) tier. Users can also set up rules for applying tags to files based on a number of variables, including IP address, owner and source of the data. These tags are also stored in the database.
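The article doesn't describe ATFS's actual rule syntax, but the idea of rule-based tagging at ingest can be sketched as follows. This is a hypothetical illustration (the `FileMeta` and `TagRule` names and fields are assumptions, not Quantum's API): each rule pairs a tag with a predicate over file metadata, and every matching rule's tag is applied.

```python
# Hypothetical sketch of rule-based tagging at ingest time (not Quantum's API).
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class FileMeta:
    path: str
    owner: str
    source_ip: str
    tags: set = field(default_factory=set)

@dataclass
class TagRule:
    tag: str
    predicate: Callable[[FileMeta], bool]

def apply_rules(meta: FileMeta, rules: List[TagRule]) -> FileMeta:
    """Apply every matching rule's tag to the file's metadata record."""
    for rule in rules:
        if rule.predicate(meta):
            meta.tags.add(rule.tag)
    return meta

# Example rules keyed on source IP and owner, as the article describes.
rules = [
    TagRule("camera-footage", lambda m: m.source_ip.startswith("10.1.")),
    TagRule("finance", lambda m: m.owner == "finance-svc"),
]

f = apply_rules(FileMeta("/ingest/clip01.mov", "media-svc", "10.1.4.7"), rules)
# f.tags now contains "camera-footage" but not "finance"
```

In a real system the resulting tag set would be written to the metadata database on the NVMe tier alongside the captured file metadata.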

In addition, changes to a file’s metadata are captured proactively. ATFS has a policy engine that uses the insights gained from metadata-based classification and tags to execute purposeful data placement. “Since we have insights into application workflows via API and tags, ATFS can just-in-time place data into a flash tier for application consumption and remove it when the operation has completed to make room for other files and operations,” Greyzdorf said.
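The just-in-time placement Greyzdorf describes can be reduced to a simple tiering decision. The sketch below is purely illustrative (the tier names and the `active-workflow` tag are assumptions): a file tagged as part of an active workflow is promoted to flash if it fits, and otherwise stays on the capacity tier.

```python
# Hypothetical sketch of tag-driven, just-in-time tier placement.
def placement_tier(tags: set, flash_free_bytes: int, size_bytes: int) -> str:
    """Return the target tier for a file an application is about to consume."""
    if "active-workflow" in tags and size_bytes <= flash_free_bytes:
        return "nvme-flash"   # promote for low-latency application access
    return "disk"             # remain on the capacity tier

placement_tier({"active-workflow"}, flash_free_bytes=10**12, size_bytes=10**9)
# A real policy engine would also evict the file from flash after the
# operation completes, freeing room for other files and operations.
```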

ATFS can also use tags to inform data retention or how data is presented and viewed. Data can be moved into an S3 bucket on-premises or in the cloud in native format, allowing cloud-native applications to access the data over S3 without going through ATFS. The platform maintains metadata and tag information in both on-premises and cloud environments.
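Because the data lands in S3 in native format, the carry-your-tags-along idea maps directly onto standard S3 object tagging. The helper below uses the public S3 API shape (tags are URL-encoded into a `Tagging` string); the bucket, key and tag names are hypothetical, and ATFS's own replication mechanism is internal to the product.

```python
# Illustrative only: writing an object to an S3 bucket with its tags attached,
# the way any S3-compatible endpoint (cloud or on-premises) would accept it.
from urllib.parse import urlencode

def s3_put_args(bucket: str, key: str, tags: dict) -> dict:
    """Build put_object keyword arguments; S3 expects tags URL-encoded."""
    return {"Bucket": bucket, "Key": key, "Tagging": urlencode(tags)}

args = s3_put_args("media-archive", "clip01.mov", {"class": "camera-footage"})

# With boto3 installed, the upload itself would look like:
#   import boto3
#   s3 = boto3.client("s3")
#   with open("clip01.mov", "rb") as body:
#       s3.put_object(Body=body, **args)
```

A cloud-native application could then read the object and its tags over S3 directly, without going through ATFS.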

Automating Data Placement

Quantum also announced upgrades to several of its other products. The newest version of the Quantum StorNext file system adds new ways to automate data placement on NVMe for high-throughput, low-latency workloads. Administrators can define pools of NVMe, solid-state drives (SSDs) and hard disk drives (HDDs) within their file system, then move data between pools based on policy. StorNext 7 also expands the web services APIs, offering new ways to query metadata, automate data movement, and configure and manage the file system.
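Policy-based movement between pools is essentially a demotion schedule down the storage hierarchy. The sketch below is a simplified illustration, not StorNext's actual policy language (the thresholds and pool names are assumptions): data cools with age since last access and is demoted from NVMe to SSD to HDD accordingly.

```python
# Hypothetical sketch of a pool-placement policy across NVMe, SSD and HDD.
def target_pool(days_since_access: int) -> str:
    """Demote data down the pool hierarchy as it goes cold."""
    if days_since_access < 1:
        return "nvme"   # hot: high-throughput, low-latency workloads
    if days_since_access < 30:
        return "ssd"    # warm: recently used, likely to be read again
    return "hdd"        # cold: capacity tier

target_pool(0)    # "nvme"
target_pool(45)   # "hdd"
```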

In addition, Quantum expanded its ActiveScale object storage line with three new products and a new feature to better protect critical data. Object Lock provides immutability, can protect data against ransomware and can serve as a repository for compliance data, Greyzdorf said. A new small-file aggregation feature also improves capacity efficiency when storing many small files.
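Object Lock is a standard S3 API feature, so the immutability Greyzdorf describes can be illustrated with the parameters any S3-compatible client would send. The helper below builds the arguments for a COMPLIANCE-mode put (objects cannot be overwritten or deleted until the retention date); the bucket and key names are hypothetical, and whether ActiveScale's implementation matches this surface exactly is an assumption.

```python
# Illustrative only: S3 Object Lock parameters for an immutable write.
from datetime import datetime, timedelta, timezone

def object_lock_args(bucket: str, key: str, retain_days: int) -> dict:
    """Arguments for a COMPLIANCE-mode put, per the S3 Object Lock API."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate":
            datetime.now(timezone.utc) + timedelta(days=retain_days),
    }

args = object_lock_args("compliance-archive", "audit-2020.log", 365)

# With boto3 installed, the write itself would look like:
#   import boto3
#   s3 = boto3.client("s3")
#   with open("audit-2020.log", "rb") as body:
#       s3.put_object(Body=body, **args)
```

Because a COMPLIANCE-mode object cannot be altered or deleted before its retention date even by an administrator, this is what makes the feature useful against ransomware and as a compliance repository.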

Taken together, these announcements position Quantum well, said Ashish Nadkarni, group vice president for infrastructure systems, platforms and technologies at IDC.

“What companies need is a single strategy that encompasses their disparate unstructured data sets, rather than creating islands and islands that can’t talk to each other,” he said. “They want to shift to a common data management layer based on a set of platforms that can talk to each other.”

About the Author

Karen D. Schwartz

Contributor

Karen D. Schwartz is a technology and business writer with more than 20 years of experience. She has written on a broad range of technology topics for publications including CIO, InformationWeek, GCN, FCW, FedTech, BizTech, eWeek and Government Executive.

https://www.linkedin.com/in/karen-d-schwartz-64628a4/
