RainStor Solves the “Big Data Problem”
Infrastructure software company RainStor has launched version 4 of its namesake data repository and retrieval product, aimed squarely at compressing and querying large volumes of historical data quickly and in compliance with retention rules. Here’s some background.
RainStor, whose storage infrastructure can be deployed on-premises or in the cloud, has built the version 4 release around compliance with federal, corporate, and healthcare industry guidelines, says RainStor VP of Product Management Ramon Chen. After all, it’s those guidelines that dictate just how much data needs to be retained.
The trouble is that the longer a business or agency has been in operation, the more data moves from “active” to “historical” — and machine-generated data like satellite imagery or equipment logs is historical by definition. All of that needs to go into a repository, but its size can quickly inflate out of control, especially when stringent compliance rules are in effect.
That’s where RainStor comes in. Rather than use what Chen calls “brute force” compression, RainStor applies proprietary compression methods that keep the data’s structure intact while dramatically shrinking its footprint, boosting ingestion speeds by as much as 50%. Chen says it’s a major improvement over tape backup, since the data can still be queried straight from storage.
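RainStor’s actual algorithms are proprietary, but a rough sketch can show the general idea of structure-preserving compression that stays queryable: dictionary encoding, a common technique that stores each distinct value once and answers queries against the compressed codes without decompressing every record. All names below are illustrative, not RainStor’s API.

```python
# Hypothetical illustration only -- generic dictionary encoding, not
# RainStor's proprietary method. The key property shown: predicates can
# run against the compressed representation directly.

def compress(column):
    """Dictionary-encode a column: keep each distinct value once,
    plus a list of small integer codes referencing it per row."""
    dictionary = []   # distinct values, stored once
    codes = []        # per-row references into the dictionary
    index = {}        # value -> code lookup during ingest
    for value in column:
        if value not in index:
            index[value] = len(dictionary)
            dictionary.append(value)
        codes.append(index[value])
    return dictionary, codes

def query_equals(dictionary, codes, target):
    """Find matching row positions straight from the compressed form:
    translate the predicate once, then scan only the integer codes."""
    try:
        code = dictionary.index(target)
    except ValueError:
        return []
    return [row for row, c in enumerate(codes) if c == code]

statuses = ["OK", "OK", "FAIL", "OK", "FAIL", "OK"]
dct, codes = compress(statuses)
print(dct)                               # ['OK', 'FAIL'] -- 2 values kept, not 6
print(query_equals(dct, codes, "FAIL"))  # [2, 4]
```

On repetitive historical data — status codes, device IDs, log fields — this kind of encoding is why compressed archives can shrink sharply yet still serve queries without a full restore, which is the contrast Chen draws with tape backup.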
“We’re making the big data problem smaller,” Chen says.
And Chen claims that, like most SaaS systems, RainStor carries a low enough TCO that MSPs reselling version 4 can raise their job bids to boost margins and still undercut the competition.