July 25, 2017
By Stefan Bernbo, Founder and CEO, Compuverde
Google chief economist Hal Varian puts the astronomical explosion of data in perspective: “Between the dawn of civilization and 2003, we only created five exabytes [of data]; now we’re creating that amount every two days. By 2020, that figure is predicted to sit at 53 zettabytes (53 trillion gigabytes) – an increase of 50 times.”
This exponential growth has pushed traditional data-center storage architectures past the breaking point. Your customers are in search of modern storage solutions that don’t require the huge costs or time investments associated with linearly scaling legacy storage appliances. Even with all the time and expense involved, vertical storage architectures often contain bottlenecks that slow performance to an unacceptable level. This creates an opportunity to steward customers toward a new purchase and serve them with your knowledge of what the industry has to offer. Just make sure that you aren’t swayed by industry hype and misleading marketing.
Software-defined storage (SDS) decouples the programming that controls storage-related tasks from the physical storage hardware and can dramatically reduce the costs associated with that hardware. Fewer, less-expensive servers can be used to improve both capacity and performance. Administration is simplified and made more flexible and efficient. SDS enables users to allocate and share storage assets across all workloads.
For these reasons, SDS has become a big hit. By 2020, anywhere from 70 to 80 percent of unstructured data will be stored and managed on lower-cost hardware supported by software-defined storage, according to a recent Gartner report.
Eighty percent of today’s zettabytes of data is unstructured. It is widely understood that unstructured data is best managed with a file system, which is why storage solutions that offer file systems currently represent 80 percent of the market. Curiously, though, many SDS offerings focus solely on block or object storage; few offer file systems, or do them well. Without a file system overlaying it, that data becomes very difficult to manage.
Each type of storage exists because it focuses on a specialty:
Block storage is used for storing virtual machines or databases.
Object storage is newer and used for machine-to-machine/IoT transactions and other applications that require extreme scalability. However, it isn’t much better than block when it comes to managing data.
File systems, though not as widely touted as the other two types of storage, are the best at handling unstructured data.
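The practical difference between the object and file models can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s API: the object store is modeled as a flat key-value mapping, while the file system groups the same data under a directory hierarchy that can be queried as a group.

```python
# Hypothetical illustration: flat object store vs. hierarchical file system.
# All keys, paths and contents below are made up for the example.

# An object store addresses data by opaque key in a flat namespace.
object_store = {
    "9f8e7d6c": b"quarterly report",
    "1a2b3c4d": b"sensor reading",
}

# A file system organizes the same data under paths, so related
# files can be found, listed, and managed together.
file_system = {
    "/finance/2017/q2-report.docx": b"quarterly report",
    "/iot/device-42/reading.json": b"sensor reading",
}

# Listing everything under /finance is trivial with paths...
finance_files = [p for p in file_system if p.startswith("/finance/")]
print(finance_files)  # ['/finance/2017/q2-report.docx']

# ...but with opaque keys there is no hierarchy to query; answering
# the same question would require a separate metadata index.
```

This is why unstructured data, which humans tend to organize and retrieve by folder and name, maps naturally onto a file system rather than a flat key space.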
Now, some SDS providers claim to provide file systems with their offerings. However, these file systems are usually based on Samba, a free, open-source implementation of the SMB/CIFS networking protocol that lets end users access and use files on a company’s intranet or network. Many in need of a file system have turned to this option, but providing file services through Samba may mean going without some features that Windows users are accustomed to.
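As a rough illustration of how Samba exposes file services, a share is declared in its smb.conf configuration file; the share name and path below are placeholder values for the sketch, not taken from the article:

```ini
; Hypothetical minimal smb.conf share definition.
[shared-docs]
   path = /srv/storage/shared-docs
   read only = no
   browseable = yes
```

Basic shares like this work fine with Windows clients, but some Windows-native behaviors may require extra configuration or lack full parity, which is the gap the article alludes to.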
And it’s not only a file system that organizations need in order to deal with unstructured data; file-related features are also necessary. These include:
Retention rules automatically create a single folder or a hierarchy of folders on file servers that are then deleted according to assigned policies.
A snapshot is a read-only copy of the contents of a file system or independent file set taken at a single point in time. When a snapshot of an independent file set is taken, all files and nested dependent file sets will be included in the snapshot.
Tiering uses a policy that enables IT to define where a certain file should go, as well as if and when a file will be migrated between file system pools. IT can define both file placement and migration policies; a policy acts as a filter that assigns a specific file type to a particular tier. Tiered storage is more efficient and boosts performance.
Quotas help monitor the amount of storage being used. IT can set a soft-limit quota that raises a warning when part of a file system is close to its storage limit but still allows data to be saved. With a hard-limit quota, no new data can be saved once the limit is reached.
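Two of the mechanisms above can be sketched in code. The following is a minimal, hypothetical model rather than any product’s API: a tiering filter that assigns file types to pools, and a quota check that distinguishes soft and hard limits. Pool names and thresholds are made up for the example.

```python
# Hypothetical sketch of tiering and quota policies; names and
# thresholds are illustrative, not from any specific product.

TIER_POLICY = {".log": "archive-pool", ".vmdk": "ssd-pool"}

def placement_tier(filename: str, default: str = "hdd-pool") -> str:
    """Tiering: route a file to a storage pool based on its type."""
    for suffix, pool in TIER_POLICY.items():
        if filename.endswith(suffix):
            return pool
    return default

def check_quota(used: int, write: int, soft: int, hard: int) -> str:
    """Quota: warn past the soft limit, reject past the hard limit."""
    if used + write > hard:
        return "rejected"   # hard limit: no new data can be saved
    if used + write > soft:
        return "warning"    # soft limit: save the data, but warn
    return "ok"

print(placement_tier("web.log"))                   # archive-pool
print(check_quota(900, 50, soft=800, hard=1000))   # warning
print(check_quota(980, 50, soft=800, hard=1000))   # rejected
```

Real SDS products express these as declarative policy rules evaluated by the storage layer, but the decision logic they encode follows this shape.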
With the ongoing exponential growth of data, customers need storage architectures that not only meet current capacity needs without breaking the bank, but that can scale easily and quickly. To manage all that unstructured data, they need a full-featured approach that is able to handle object, block and file system types of storage. Make sure that you have thoroughly vetted solutions so that you know you are offering your customers the one that will best serve them.
Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions that are cost-effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements.