StorReduce Blog

StorReduce Named a 2018 Cool Vendor in Storage Technologies by Gartner

Vendors selected for the Cool Vendor report are innovative, impactful and intriguing. StorReduce, the ground-breaking scale-out cloud and object storage deduplication software company, today announced that it has been named a Cool Vendor in the 2018 “Cool Vendors in Storage Technologies” report by Gartner, Inc. The report evaluates interesting, new and innovative vendors, products and services in the storage technology market.

Cloud Native Scale-out Deduplication: Remove Purpose Built Backup Appliances, Store to Object Storage and Save Up to 70%

StorReduce Scale-Out Deduplication now enables primary backups of on-premises environments straight to the AWS cloud, and the removal of purpose-built backup appliances. StorReduce helps enterprises storing unstructured data to Amazon Simple Storage Service (Amazon S3) or Amazon S3 Infrequent Access to speed up data transfer and reduce the amount and cost of storage by as much as 98 percent. StorReduce’s deduplication engine is software-defined, deploying on-premises or in the cloud as a virtual machine, in a Docker container or on physical hardware.

StorReduce/Cloudian Webinar: Deduplication for S3 Streams - A Cloud First Model

Douglas Soletz, of Cloudian, and I spent a very enjoyable hour talking about StorReduce’s Cloud First approach to deduplication.

Watch the video below …

StorReduce is proud to be a Google Cloud Storage Coldline Launch Partner

New Google Cloud Storage Coldline: low cost, live cloud storage. An ideal place for secondary backups, long-term archival storage, Hadoop backups and more… The new Google Cloud Storage Coldline’s extremely low cost combined with StorReduce’s deduplication enables enterprises to save up to 45% by moving their long-term backup data off tape to Coldline. Work out how much you will save by moving your data off tape to Coldline with our TCO calculator here.

Watch Deepak Verma from HDS talk about StorReduce at Veritas Vision 2016 (8min)

StorReduce is excited to be the official deduplication solution selected by HDS for their Hitachi Content Platform. Thank you Deepak Verma for presenting on our joint solution at Veritas Vision 2016.

Watch Deepak’s talk below (8 min) …

Watch Isaiah Weiner from AWS talk about StorReduce at the AWS Chicago Summit 2016 (2:45min)

Thanks to Isaiah Weiner, a storage expert at Amazon, for recommending StorReduce as the best software for deduplicating and moving backups to Amazon S3 at AWS’s North American summits this year. Watch him present at the AWS Chicago Summit 2016 below, and don’t miss his session at AWS re:Invent in November 2016. Watch the video below (2:45min)…

StorReduce launches support for replication

The companies we’re working with have a diverse set of requirements and workloads, but one common need we often see is the desire to store multiple copies of the same data. Typically this is for redundancy, but often it’s for big data workloads, where the same data needs to be quickly accessible in multiple regions. The downside of storing multiple copies is, of course, the time and cost involved.

Clustering brings High Availability to StorReduce

Update (June 2016): StorReduce now supports advanced scale-out clusters that, in addition to high availability, provide active-active load balancing and enable a single global deduplication namespace to span hundreds of petabytes of data and to operate at tens of gigabytes per second of throughput. StorReduce forms a critical part of your cloud storage infrastructure. You need it to be reliable and resilient in the face of unexpected outages and failures - wherever they might occur.

StorReduce Supports AWS S3 Standard - Infrequent Access.

To celebrate the release of AWS S3 Standard - Infrequent Access (S3-IA), StorReduce is offering the first 10 customers a FREE StorReduce Virtual Appliance to migrate up to 1 PB of on-premises backup and archival data to AWS S3-IA. Conditions Apply*. Given that an estimated 90+% of enterprises and much of the mid-market still have most of their data on-premises, it is a great move for AWS to bring in a platform optimized to attract the large on-premises backup, archive and disaster recovery market to cloud.

StorReduce is now integrated with Veritas (Symantec) NetBackup 7.7!

With a mere configuration change, you can now deduplicate inline and then migrate your on-premises backups (TBs to PBs) to cloud up to 30x faster, storing around 97% less data in the cloud. Frank Slootman was right - tape sucks! So why are enterprises still forced to save their periodic weekly/monthly backups to tape when highly durable and ultra-affordable public cloud is available? It turns out that there are huge barriers to migrating large volumes of backups from on-premises to cloud over existing internet connections...

How much Hash is Enough?

StorReduce performs deduplication by breaking incoming data into blocks and storing only the blocks it hasn’t seen before. We tell which blocks are unique by comparing the hash of each new block with the hashes of all previously stored blocks, so it’s critical that we never encounter a hash collision. A collision could cause us to throw away data that was actually unique, and it goes without saying that we never want this to happen for any workload.
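The idea can be sketched in a few lines. This is a deliberately simplified toy, not StorReduce’s actual engine: it uses fixed-size blocks (real deduplication engines typically use variable, content-defined chunking) and an in-memory dictionary standing in for the object store and hash index. The `DedupStore` class and its methods are hypothetical names for illustration.

```python
import hashlib

class DedupStore:
    """Toy inline deduplicating block store (illustrative only)."""

    BLOCK_SIZE = 4096  # fixed-size blocks for simplicity

    def __init__(self):
        # hash -> block data; stands in for the backing object store
        self.blocks = {}

    def put(self, data: bytes) -> list:
        """Split data into blocks and store only unseen blocks.
        Returns the list of block hashes - the 'recipe' for the object."""
        recipe = []
        for i in range(0, len(data), self.BLOCK_SIZE):
            block = data[i:i + self.BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:  # unique block: store it once
                self.blocks[digest] = block
            recipe.append(digest)
        return recipe

    def get(self, recipe: list) -> bytes:
        """Reassemble an object from its block hashes."""
        return b"".join(self.blocks[d] for d in recipe)

store = DedupStore()
payload = b"A" * 8192 + b"B" * 4096  # two identical 'A' blocks + one 'B' block
recipe = store.put(payload)
assert store.get(recipe) == payload
print(len(recipe), len(store.blocks))  # 3 blocks referenced, only 2 stored
```

Note that correctness here rests entirely on the hash never colliding: if two distinct blocks produced the same SHA-256 digest, the second would be silently discarded. The birthday bound puts the chance of any collision among n blocks hashed with a b-bit function at roughly n²/2^(b+1), which for a 256-bit hash is vanishingly small even at exabyte scale.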

How to Saturate 10Gig Ethernet with a Single StorReduce Server

One of StorReduce’s claims to fame is its ability to operate at sustained throughput rates of up to 1,100 megabytes per second for both reads and writes using a single StorReduce server. We occasionally get raised eyebrows when we tell people this; it’s an impressive number and it does break new ground. It’s also significant in a business sense. It means that we cut a large on-premises-to-cloud migration (petabytes) from years to weeks, that we enable much larger volumes of data to be migrated to clouds than ever before, and that StorReduce’s inline deduplication is fast enough to be used by big data companies wanting to use Hadoop or search in real time.