
AWS File Storage Gateway insights (for NFS) #1

Best features:

 a) NFS versions 3.0 through 4.2 are supported
 b) The local cache reduces IO transactions to S3
 c) NFS supports the typical NAS export options: root squash, no root squash, and all squash
 d) Unix file permissions are propagated into the S3 objects and remain accessible from cloud instances
 e) End-to-end encryption
 f) Read and write performance scales with the compute and storage (flash) resources given to the storage gateway appliance
 g) Best suited for archiving and local backups
 h) Jumbo frames are supported
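For example, a file gateway share is consumed like any other NFS export. A minimal sketch (the gateway IP 10.0.0.5 and share path /my-s3-bucket are placeholders; substitute the values shown for your share in the AWS console):

```shell
# Mount the file gateway's NFS share on a local directory (sketch --
# the gateway IP and share path below are placeholders).
sudo mkdir -p /backup
sudo mount -t nfs -o nolock,hard 10.0.0.5:/my-s3-bucket /backup
```

The nolock and hard options keep client-side locking out of the way and make the client retry indefinitely if the gateway is briefly unreachable, which is usually what you want for backup targets.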

 Below are the aspects where we do not have control:

 a) Compression and deduplication at the local storage
 b) Multi-VLAN support
 c) On-site snapshots
 d) No local monitoring; only AWS monitoring tools are available
 e) No local console access for troubleshooting
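Because there is no local monitoring, gateway health has to be pulled from CloudWatch. A sketch with the AWS CLI (the gateway ID sgw-12345678 is a placeholder; CacheHitPercent is one of the AWS/StorageGateway metrics):

```shell
# Query the gateway's cache hit ratio over the last hour (sketch --
# substitute your own gateway ID and adjust the time window as needed).
aws cloudwatch get-metric-statistics \
  --namespace AWS/StorageGateway \
  --metric-name CacheHitPercent \
  --dimensions Name=GatewayId,Value=sgw-12345678 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Average
```

Other useful metrics in the same namespace include CachePercentDirty (data cached but not yet uploaded) and CloudBytesUploaded.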

 Below are some snippets to give more insight into billing, IO, and other patterns.

 a) The appliance supports jumbo frames (important for NAS storage devices).
 b) Data is cached locally and uploaded to S3, which reduces S3 IO (PUT/GET requests) and improves performance:
    i) A 1 GB upload took around 120+ PUT requests.
    ii) Over a 1 Gb LAN, we saw around 30 MB/s read speed per stream; in real workloads the aggregate throughput may be higher.
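The ~120 PUT requests per GB can be sanity-checked: it works out to roughly 8.5 MiB per request, consistent with the gateway uploading cached data in multipart chunks. A rough cost sketch (the $0.005 per 1,000 PUT requests figure is an assumed S3 Standard price; check current pricing for your region):

```shell
# ~120 PUT requests per GiB implies roughly this many MiB per request:
awk 'BEGIN { printf "Approx part size: %.1f MiB\n", 1024 / 120 }'

# Extrapolated PUT charges for uploading 1 TiB at the same ratio,
# assuming $0.005 per 1,000 PUT requests:
awk 'BEGIN { puts = 120 * 1024; printf "~%d PUTs/TiB -> ~$%.2f\n", puts, puts / 1000 * 0.005 }'
```

The point of the exercise: request charges on a file gateway are driven by how often data changes, not just by how much data you store.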

 root@crazycomputing-Virtual-Machine:/backup# fio -numjobs=1 -iodepth=128 -direct=1 -ioengine=libaio -sync=1 -rw=randread -bs=4K -size=1G -time_based -runtime=60 -name=Fio -directory=/backup

 Fio: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
 fio-3.16
 Starting 1 process
 Fio: Laying out IO file (1 file / 1024MiB)
 Jobs: 1 (f=1): [r(1)][100.0%][r=30.1MiB/s][r=7695 IOPS][eta 00m:00s]
 Fio: (groupid=0, jobs=1): err= 0: pid=8661: Mon Oct 4 14:59:36 2021
   read: IOPS=7941, BW=31.0MiB/s (32.5MB/s)(1862MiB/60016msec)
     slat (nsec): min=1700, max=6325.7k, avg=10884.20, stdev=26945.46
     clat (usec): min=7166, max=67675, avg=16103.03, stdev=2399.66
      lat (usec): min=7180, max=67679, avg=16114.26, stdev=2399.42
     clat percentiles (usec):
      |  1.00th=[12780],  5.00th=[13829], 10.00th=[14222], 20.00th=[14746],
      | 30.00th=[15139], 40.00th=[15533], 50.00th=[15795], 60.00th=[16188],
      | 70.00th=[16581], 80.00th=[16909], 90.00th=[17695], 95.00th=[18744],
      | 99.00th=[23987], 99.50th=[29492], 99.90th=[45351], 99.95th=[52691],
      | 99.99th=[63177]
    bw (  KiB/s): min=24824, max=34960, per=99.98%, avg=31760.60, stdev=1431.68, samples=120
    iops        : min= 6206, max= 8740, avg=7940.12, stdev=357.92, samples=120
   lat (msec)   : 10=0.02%, 20=97.17%, 50=2.74%, 100=0.07%
   cpu          : usr=3.40%, sys=6.48%, ctx=740658, majf=0, minf=137
   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
      issued rwts: total=476639,0,0,0 short=0,0,0,0 dropped=0,0,0,0
      latency   : target=0, window=0, percentile=100.00%, depth=128

 Run status group 0 (all jobs):
    READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=1862MiB (1952MB), run=60016-60016msec

 root@crazycomputing-Virtual-Machine:/backup# ls -ltr
 total 1048577
 -rw-r--r-- 1 nobody nogroup          9 Oct  3 17:08 testfile
 -rw-r--r-- 1 nobody nogroup 1073741824 Oct  4 14:58 Fio.0.0
 root@crazycomputing-Virtual-Machine:/backup#
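The run above exercises random reads from the local cache. The write path, which is what eventually turns into S3 PUT requests, can be measured the same way; a sketch of the matching random-write run (not captured in this test, so no numbers are claimed for it):

```shell
# Random-write counterpart of the read test above -- writes land in the
# local cache first and are uploaded to S3 asynchronously by the gateway.
fio --numjobs=1 --iodepth=128 --direct=1 --ioengine=libaio --sync=1 \
    --rw=randwrite --bs=4K --size=1G --time_based --runtime=60 \
    --name=FioWrite --directory=/backup
```

When comparing results, remember that write numbers mostly reflect the appliance's flash cache, while sustained throughput is eventually bounded by the upload bandwidth to S3.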
