As hybrid cloud has become a standard for enterprise IT infrastructure, enterprises are considering public cloud storage as a long-term archiving solution. As a result, most backup applications and storage appliances are now ready to integrate with the Azure and AWS storage APIs.
I thought I would share some Day-2 challenges in deploying, integrating, and managing backup applications with cloud storage options.
Commvault is one of the leaders in enterprise backup tooling, so a couple of scenarios will be tested in this series of posts using Commvault with AWS S3 and Glacier. The picture below depicts the lab architecture. The topics to be covered:
1) Cloud storage integration support
2) Where cloud storage fits in a 3-2-1 backup strategy
3) Deduplication and micro pruning options
4) Encryption
5) Object locking and ransomware protection
6) Cloud lifecycle policy support
7) Disaster recovery within the cloud
Commvault appears to natively support most cloud storage APIs without additional license requirements.
Linking a cloud storage bucket requires a programmatic access key and secret. Traditionally, these can be stored in the Commvault Credential Manager (encrypted within the appliance; registering the backup infrastructure as a resource and associating an IAM role will be covered in later posts).
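Before registering the key pair in Credential Manager, it helps to confirm the key can actually reach the bucket. A minimal boto3 sketch along these lines can do that; the bucket name and region below are hypothetical placeholders for this lab:

```python
# Sanity-check the programmatic access key/secret before registering
# them in Commvault Credential Manager. Bucket name and region are
# hypothetical placeholders for the lab setup.
import boto3
from botocore.exceptions import ClientError

BUCKET = "commvault-s3-lib-lab"   # hypothetical lab bucket
REGION = "us-east-1"              # assumed region

session = boto3.Session(
    aws_access_key_id="AKIA...",       # programmatic access key
    aws_secret_access_key="...",       # secret access key
    region_name=REGION,
)
s3 = session.client("s3")

try:
    # Confirms the credentials are valid and the bucket is visible.
    s3.head_bucket(Bucket=BUCKET)
    print(f"Credentials OK, bucket '{BUCKET}' is reachable")
except ClientError as err:
    print(f"Bucket check failed: {err.response['Error']['Code']}")
```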
A sample backup was examined, with deduplication enabled on-premises and no deduplication in the cloud. Since this is the first backup, we got only a compression benefit, not deduplication, at the Commvault disk library level: a 780 MB test payload occupied around 680 MB in the disk library, and the same amount of storage was consumed in the S3 library (without dedup).
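As a quick cross-check of the cloud library footprint against the job summary, a short sketch like the following can sum the object sizes under the library's mount path (the bucket name and prefix are hypothetical):

```python
# Sum object sizes in the S3 cloud library to cross-check the
# footprint reported by the Commvault job. Bucket and prefix are
# hypothetical; Commvault writes its chunks under the mount path
# configured for the cloud library.
import boto3

BUCKET = "commvault-s3-lib-lab"   # hypothetical lab bucket
PREFIX = ""                       # mount-path prefix, if any

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

total_bytes = 0
object_count = 0
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]
        object_count += 1

print(f"{object_count} objects, {total_bytes / (1024 ** 2):.0f} MB total")
```

For the sample job above, this total should land near the ~680 MB reported for the disk library.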
Job on-premises:
Local disk library chunks:
Cloud disk library chunks in the activity log:
The S3 cloud disk library created each chunk as a 32 MB object with the default configuration (further fine-tuning needs to be explored).
Takeaway: when calculating storage IO cost, we need to account for the 32 MB object size plus some retry and metadata IO. So per GB it is not just 32 requests; it may exceed that, depending on configuration and tuning parameters. A rough sketch of the math follows.
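A back-of-the-envelope sketch of the request math (the overhead factor is an assumed figure, not a measured value, and the PUT price shown is the us-east-1 S3 Standard rate, which varies by region):

```python
# Rough request-count and cost estimate for the cloud library.
# The 32 MB chunk size comes from the default observed above; the
# overhead factor for retries and metadata IO is an assumption to
# be tuned per environment.
CHUNK_SIZE_MB = 32
PAYLOAD_GB = 1
OVERHEAD_FACTOR = 1.25            # assumed retry + metadata overhead
PUT_COST_PER_1000 = 0.005         # S3 Standard PUT price (USD), region-dependent

base_puts = (PAYLOAD_GB * 1024) / CHUNK_SIZE_MB   # 32 PUTs per GB at 32 MB chunks
effective_puts = base_puts * OVERHEAD_FACTOR

print(f"Base PUTs/GB: {base_puts:.0f}")
print(f"Estimated PUTs/GB with overhead: {effective_puts:.0f}")
print(f"Approx request cost/GB: ${effective_puts * PUT_COST_PER_1000 / 1000:.6f}")
```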