StorReduce supports replication of deduplicated data between cloud regions, between public clouds, or between public and private clouds. By replicating only the unique data, StorReduce can significantly reduce the cost of outgoing bandwidth and replica storage.
A second StorReduce server can run in the replica region or cloud to provide immediate read-only access to the replicated data. This is well suited to adding a layer of redundancy and to disaster recovery in the cloud.
Enabling replication (replication source)
This section assumes you already have a fully configured and functioning StorReduce server and that you would like to add a replica.
If you do not already have a configured StorReduce server, please see the getting started guide.
Step 1: Create an S3 Bucket to hold the replica data
In your Amazon S3 Management Console, click "Create Bucket".
Give the bucket a suitable name, e.g. ‘storreduce-replica’
Select the region in which you intend to access the replicated data. This will generally be a different region from the one that holds the primary data:
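If you prefer to script this step rather than use the console, the bucket can also be created with the AWS SDK. Below is a minimal sketch using boto3; the bucket name and region are examples only and should be replaced with your own.

# Sketch: create the replica bucket with boto3.
# Bucket name and region below are examples only.
import boto3

replica_region = "us-east-1"           # region where you will access the replicated data
replica_bucket = "storreduce-replica"  # example name from this guide

s3 = boto3.client("s3", region_name=replica_region)
if replica_region == "us-east-1":
    # us-east-1 buckets must be created without a LocationConstraint
    s3.create_bucket(Bucket=replica_bucket)
else:
    s3.create_bucket(
        Bucket=replica_bucket,
        CreateBucketConfiguration={"LocationConstraint": replica_region},
    )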
Step 2: Configure replication
If replication is licensed, the Replication entry will be visible in the header of the StorReduce dashboard. If you cannot see the Replication menu entry and would like to request a trial of replication, please contact us to arrange a license.
Click “Create Replica”:
Enter the details of the bucket you created in step 1. If using IAM roles, the access key and secret key fields can be left blank. Please note that replicas inherit the same storage class and encryption settings as the primary bucket:
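If you are using IAM roles, the role attached to the StorReduce instance needs access to the replica bucket. The following is a minimal sketch of attaching an inline policy with boto3; the role name, policy name, and bucket name are examples only, and the exact set of actions your deployment requires may differ.

# Sketch: grant the StorReduce instance role access to the replica bucket.
# Role name, policy name and bucket name below are examples only.
import json
import boto3

replica_bucket = "storreduce-replica"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": f"arn:aws:s3:::{replica_bucket}",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{replica_bucket}/*",
        },
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="storreduce-primary-role",      # example role name
    PolicyName="storreduce-replica-access",  # example policy name
    PolicyDocument=json.dumps(policy),
)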
After the server restarts the replication page should show the newly configured replica:
Step 3: Verify replication is working
To verify replication is working you may check the following places:
In the Amazon S3 Management Console you may check the contents of the primary and replica buckets. The replica bucket should closely match the contents of the primary, although during periods of heavy write activity it may lag behind briefly:
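One way to compare the two buckets without clicking through the console is a short boto3 script; the bucket names and regions below match the examples in this guide and should be replaced with your own.

# Sketch: compare object counts in the primary and replica buckets.
# Bucket names and regions below are the examples used in this guide.
import boto3

def count_objects(bucket, region):
    s3 = boto3.client("s3", region_name=region)
    paginator = s3.get_paginator("list_objects_v2")
    return sum(page.get("KeyCount", 0) for page in paginator.paginate(Bucket=bucket))

primary = count_objects("storreduce-primary", "us-west-2")
replica = count_objects("storreduce-replica", "us-east-1")
print(f"primary: {primary} objects, replica: {replica} objects")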
During start-up the primary server log (/var/log/storreduce.log) should indicate that replication is enabled:
FileNumberTracker 2015/11/09 22:13:02.861040 INFO NextFileNo: 0
Storer 2015/11/09 22:13:02.861349 INFO Starting 64 Storers
ShardReplicator 2015/11/09 22:13:02.861392 INFO Replicating backend files from us-west-2/storreduce-primary to us-east-1/storreduce-replica
Security 2015/11/09 22:13:02.861497 INFO Stub started
AWSAuthorizer 2015/11/09 22:13:02.861509 INFO AWSAuthorizer started
Adding a read-only server (replication target)
After completing the first section, your data is safely replicated. This section shows how to configure a read-only StorReduce server to read the data from the replica. You may choose to do this on demand in a disaster recovery scenario, or to maintain permanent read-only access to your replica if your workload requires it.
This section assumes you know how to deploy a StorReduce server; if you do not, please see the getting started guide.
Step 1: Deploy the read-only server
Deploy a new StorReduce server in the same region as the replica bucket; a quick way to confirm that region is sketched after the settings list below.
Configure the StorReduce server with the following settings:
- Under the Server section check “Read-only Server”
- Leave the Storage Class section empty; this is inherited from the primary.
- Fill in the Storage Encryption section identically to the primary
- Under the Storage section enter the details of the replica bucket
- Under the Storage Credentials section configure appropriate keys, if not using IAM roles.
- Under the Network and SSL/TLS sections configure details appropriate for the replica e.g. a custom DNS name and appropriate SSL certs (or use the defaults).
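If you want to confirm which region the replica bucket lives in before deploying, the following sketch uses boto3; the bucket name is an example only.

# Sketch: confirm the replica bucket's region before deploying the
# read-only server there. Bucket name below is an example only.
import boto3

s3 = boto3.client("s3")
location = s3.get_bucket_location(Bucket="storreduce-replica")
# Buckets in us-east-1 are reported with a LocationConstraint of None
region = location.get("LocationConstraint") or "us-east-1"
print(f"Replica bucket region: {region}")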
Step 2: Verify the read-only server is working
Shortly after files are uploaded to the primary server they should be visible in the file browser on the replica server.
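You can also check from a client over the read-only server's S3-compatible endpoint. The sketch below assumes the endpoint URL, credentials, and bucket name shown (all examples only), and that the read-only server rejects write requests.

# Sketch: check the read-only server over its S3-compatible endpoint.
# Endpoint URL, credentials and bucket name below are examples only.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://storreduce-replica.example.com",  # example read-only server address
    aws_access_key_id="REPLICA_ACCESS_KEY",
    aws_secret_access_key="REPLICA_SECRET_KEY",
)

# Reads should succeed: list a bucket that exists on the primary.
for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"])

# Writes should be rejected, since this server is read-only.
try:
    s3.put_object(Bucket="my-bucket", Key="write-test", Body=b"should fail")
except ClientError as err:
    print("Write rejected as expected:", err.response["Error"]["Code"])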
The replica server log (/var/log/storreduce.log) should indicate that it has processed the files contained in the replica bucket and that it is now following changes:
Compressor 2015/11/09 22:43:03.335919 INFO Starting 2 Compresssors, Algorithm: Flate
Hasher 2015/11/09 22:43:03.335934 INFO Starting 2 Hashers
Server 2015/11/09 22:43:03.335993 INFO Creating metadata pipeline
LookUp 2015/11/09 22:43:03.336004 INFO Starting 32 Lookup threads
Server 2015/11/09 22:43:03.336081 INFO Starting Recovery
Storer 2015/11/09 22:43:03.337499 DEBUG Recovering file 0 (1/49)
Storer 2015/11/09 22:43:04.085253 DEBUG Recovering file 1 (2/49)
Server 2015/11/09 22:43:04.088472 INFO Completed recovery for file 0
Storer 2015/11/09 22:43:05.376016 DEBUG Recovering file 2 (3/49)
*extraneous lines removed*
Storer 2015/11/09 22:44:07.738018 DEBUG Recovering file 48 (49/49)
Server 2015/11/09 22:44:07.738900 INFO Completed recovery for file 47
Server 2015/11/09 22:44:08.472123 INFO Completed recovery for file 48
Server 2015/11/09 22:44:08.472140 INFO Recovery complete
ShardFollower 2015/11/09 22:44:09.146168 INFO Following changes to backend storage, poll interval is 2s
Security 2015/11/09 22:44:09.146459 INFO Stub started
AWSAuthorizer 2015/11/09 22:44:09.146608 INFO AWSAuthorizer started