
Posts

Cloud Storage - Backup and Archiving (Commvault) #1

As hybrid cloud became the standard for enterprise IT infrastructure, enterprises began considering public cloud storage as a long-term archiving solution. As a result, most backup applications and storage appliances are now ready to integrate with the Azure and AWS storage APIs. I thought I would share some Day-2 challenges in deploying, integrating, and managing backup applications with cloud storage options. Commvault is one of the leaders in enterprise backup tools, so a couple of scenarios will be tested in this series of posts using Commvault with AWS S3 and Glacier. The picture below depicts the lab architecture.

1) Cloud storage integration support
2) Where cloud storage fits in a 3-2-1 backup strategy
3) Deduplication and micro-pruning options
4) Encryption
5) Object locking and ransomware protection
6) Cloud lifecycle policy support
7) Disaster recovery within the cloud

Commvault seems to natively support most of the cloud storage APIs without additional license requirements. Integrating library…
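As a reference for points 5 and 6 above, here is a minimal AWS CLI sketch of how the bucket side can be prepared; the bucket name, region, and retention values are hypothetical, and the Commvault cloud library itself is configured from the CommCell console, which is not shown here.

# Object Lock must be enabled when the bucket is created (hypothetical name and region)
aws s3api create-bucket --bucket cv-backup-demo --region us-east-1 --object-lock-enabled-for-bucket
# Default 30-day compliance-mode retention as a ransomware guard (illustrative values)
aws s3api put-object-lock-configuration --bucket cv-backup-demo --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'
# Lifecycle rule moving objects older than 90 days to Glacier (illustrative values)
aws s3api put-bucket-lifecycle-configuration --bucket cv-backup-demo --lifecycle-configuration '{"Rules":[{"ID":"to-glacier","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":90,"StorageClass":"GLACIER"}]}]}'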
Recent posts

AWS DataSync overview

AWS DataSync is an online data transfer service that simplifies, automates, and accelerates copying large amounts of data between on-premises storage systems and AWS storage services, and between AWS storage services themselves. For example, DataSync can copy data between Network File System (NFS) and Server Message Block (SMB) file servers, self-managed object storage, S3 buckets, EFS file systems, and Amazon FSx. The diagram above depicts the typical architecture of the AWS DataSync service.

How it works:
1) DataSync service: the service in the AWS cloud that manages and tracks data sync tasks and schedules.
2) DataSync agent: a virtual appliance with the computing power to run scheduled copies, upload data, and maintain metadata (for full and incremental data transfers), deployed on-premises or in the cloud.

Advantages:
a) Cost-effective solution for data sync tasks (the service charges only per GB of data transferred in)
b) Best suited for rapid deployment with no changes to existing infrastructure
c) Secure transfer…
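To make the two components concrete, below is a minimal AWS CLI sketch of a DataSync setup; the hostnames, ARNs, and role names are placeholders, and the agent is assumed to be already deployed and activated.

# Source: an on-premises NFS export, reached through the activated agent (placeholder values)
aws datasync create-location-nfs --server-hostname nfs.onprem.example --subdirectory /exports/data --on-prem-config AgentArns=arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE
# Destination: an S3 bucket, with an IAM role DataSync can assume (placeholder values)
aws datasync create-location-s3 --s3-bucket-arn arn:aws:s3:::datasync-demo-bucket --s3-config BucketAccessRoleArn=arn:aws:iam::111122223333:role/DataSyncS3Role
# Create the task from the two location ARNs returned above, then run it
aws datasync create-task --source-location-arn <nfs-location-arn> --destination-location-arn <s3-location-arn> --name nfs-to-s3-demo
aws datasync start-task-execution --task-arn <task-arn>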

AWS File Storage Gateway insights #2

S3 is object storage emulated as NFS by the AWS File Storage Gateway, so we need to understand S3 object operations and the associated charges. Putting frequently changing files on the AWS File Storage Gateway may cause a surge in cost. Below is the mapping of AWS file operations to their S3 object impact. Interestingly, in the lab I observed that even accessing the S3 console through the AWS console for administrative purposes makes List API calls to fetch the object list. With the help of FUSE and s3fs, files exported over NFS on-premises were accessible from cloud EC2 instances. This is very useful in case you have systems that need hybrid file access.

[root@ip-172-31-13-8 s3fs-fuse]# s3fs rmanbackupdemo -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3mount
[root@ip-172-31-13-8 s3fs-fuse]# cd /mys3mount/
[root@ip-172-31-13-8 mys3mount]# ls
awstest
[root@ip-172-31-13-8 mys3mount]#
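To survive a reboot, the same mount can be persisted; below is a sketch assuming the same bucket and options as the session above, with placeholder credentials.

# s3fs reads credentials from /etc/passwd-s3fs in bucket:ACCESS_KEY:SECRET_KEY format (placeholders)
echo "rmanbackupdemo:AKIAEXAMPLE:wJalrEXAMPLEKEY" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
# /etc/fstab entry mounting the bucket at boot with the options used above
rmanbackupdemo /mys3mount fuse.s3fs _netdev,use_cache=/tmp,allow_other,uid=1001,mp_umask=002,multireq_max=5 0 0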

Expanding Swap Memory online in Oracle Linux.

Steps:
1) Turn off the current swap device
2) Create the swap file with fallocate and fill it with zeros
3) Change the file permissions so only root can access it
4) Run swapon on the newly created file

Example:-
login as: root
root@10.0.0.65's password:
Last login: Mon Oct  4 02:29:55 2021 from mmd-12713385543
[root@ol7-19 ~]# swapoff -v /dev/mapper/ol-swap
swapoff /dev/mapper/ol-swap
[root@ol7-19 ~]# fallocate -l 4G /swape
[root@ol7-19 ~]# dd if=/dev/zero of=/swape bs=1024 count=4194304
4194304+0 records in
4194304+0 records out
4294967296 bytes (4.3 GB) copied, 16.5165 s, 260 MB/s
[root@ol7-19 ~]# ls -ltr /
total 8388576
drwxr-xr-x.   2 root   root              6 Apr 10  2018 srv
drwxr-xr-x.   2 root   root              6 Apr 10  2018 opt
drwxr-xr-x.   2 root   root              6 Apr 10  2018 mnt
…
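The excerpt cuts off before steps 3 and 4; here is a sketch of the remaining commands, including the mkswap step that the session implies but the list above does not spell out.

mkswap /swape          # write the swap signature to the file
chmod 600 /swape       # step 3: restrict access to root only
swapon /swape          # step 4: enable the new swap file
swapon --show          # verify the new swap device is active

Note that the dd pass is what makes the file safe for swap: on some filesystems, swapon can reject a file preallocated with fallocate alone, so filling it with zeros ensures real blocks are written.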

AWS File Storage Gateway insights #1

AWS File Storage Gateway insights (for NFS) #1

Best features:
a) NFS versions 3.0 through 4.2 are supported
b) The local cache reduces I/O transactions to S3
c) NFS supports the typical NAS storage export options: root squash, no root squash, and all squash
d) Unix file permissions are propagated to S3 objects and honoured by cloud instances
e) End-to-end encryption
f) Depending on the gateway's compute and flash storage capacity, we get the best performance for writes and reads
g) Best suited for archiving and local backups
h) Jumbo frames are supported

Below are the aspects where we don't have control:
a) Compression and deduplication on local storage
b) Multi-VLAN support
c) Onsite snapshots
d) No local monitoring; only AWS monitoring tools
e) No local console access for troubleshooting

Some snippets to get more insight into billing, I/O, and other patterns:
a) The appliance supports jumbo frames (which is important for NAS storage devices).
b) Data cached locally…
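For the NFS export options in point (c) above, a typical client-side mount looks like the sketch below; the gateway IP, share path, and file-share ARN are placeholders, and the refresh-cache call is only needed when objects are written to the bucket directly, bypassing the gateway.

# Mount the file share exported by the gateway (AWS recommends the nolock,hard options)
sudo mount -t nfs -o nolock,hard 10.0.0.50:/rmanbackupdemo /mnt/fgw
# Refresh the gateway's cached listing if objects were added to S3 directly (placeholder ARN)
aws storagegateway refresh-cache --file-share-arn arn:aws:storagegateway:us-east-1:111122223333:share/share-EXAMPLE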

Change Block Tracking (CBT) and VM Backups in VMware

As I started my IT career as a backup engineer before becoming whatever I am today, I thought my first post on VMware should be about VM backup strategy, especially the Changed Block Tracking (CBT) mechanism. CBT allows backup applications to take incremental backups of a VM rapidly. CBT is consumed through the vSphere Storage APIs for Data Protection (VADP; for more info see https://kb.vmware.com/s/article/1021175). Below are the steps involved in enabling CBT on a VM.

1) We define the CBT setting by adding the VM configuration parameter ctkEnabled = "TRUE" (Options > Advanced/General > Configuration Parameters).
2) The above setting enables CBT at the VM level, meaning every disk can be tracked: a snapshot of the VMDK file is used to track the changes for backup. However, independent and multi-writer disks do not support this setting, and enabling it for them is not recommended.
3) We can ensure that the required disk is on…
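For reference, these are the two configuration keys the steps above set, along with a sketch of applying them from the shell with the govc CLI; govc is not used in the post, so treat it as an assumption, and the VM should be powered off and snapshot-free when CBT is toggled.

# VMX keys set by steps 1 and 2 (scsi0:0 is an example disk):
#   ctkEnabled = "TRUE"
#   scsi0:0.ctkEnabled = "TRUE"
govc vm.change -vm MyVM -e "ctkEnabled=true"
govc vm.change -vm MyVM -e "scsi0:0.ctkEnabled=true"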

Welcome to the New World

I have a blog on backup and storage that I have been running for the past nine years, but for the last couple of years I have been busy with my profession and unable to post my new learnings on VMware and cloud infrastructure. As I am going through the VCAP certification, I thought this is the best time to share some of my software-defined data centre experiences with VMware and other cloud integration platforms. Wish you all a happy and prosperous new year!