In Ryan’s blog post, we saw the benefits SmartIO provides in today’s datacenters, where SSDs are becoming increasingly prevalent. Sumit, in his blog, described the SmartAssist tool, which helps administrators decide on an optimal SSD size for their applications. In the majority of scenarios, the default SmartIO settings, combined with the SSD size suggested by SmartAssist, work well. But some administrators might want more granular control over SmartIO. In this post, we will talk about some of those controls. The examples in this blog are written from a file system angle, but most of them apply to block-level caching too.
Let us consider a scenario where administrators run multiple applications on different file systems, or a single application spanning multiple file systems, on the same host. By default, all of these file systems will use the SmartIO cache area. But certain file systems may have higher IO performance requirements than others, and administrators may prefer to enable SSD caching only for those. There are a couple of ways SmartIO can be configured to meet this requirement.
Administrators can create a cache area as “noauto”. By default, all cache areas are created as “auto”. The following command creates a cache area with the noauto attribute:
# sfcache create --noauto ssd0_0
If a cache area is created as noauto, then by default no file system will use it. A file system can be enabled to use a noauto cache area with the “smartiomode” option of the mount command:
# mount -t vxfs -o smartiomode=read /dev/vx/dsk/testdg/vol1 /mnt1
# mount -t vxfs -o smartiomode=writeback /dev/vx/dsk/testdg/vol2 /mnt2
In this fashion, administrators can enable caching only for the file systems with higher IO performance requirements. Keep in mind that “smartiomode” is a mount option, so whenever a file system is unmounted and mounted again, or after the host restarts, the option needs to be specified again. If necessary, administrators can change a cache area from “noauto” to “auto” and vice versa using the following command:
# sfcache set [--auto|--noauto] <cachearea>
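Because “smartiomode” does not persist across remounts, one way to avoid respecifying it is to add it to the file system’s /etc/fstab entry. The following is a minimal sketch, reusing the device names and mount points from the examples above; field ordering follows the usual fstab conventions on Linux:

```
# /etc/fstab (sketch; devices and mount points are the illustrative ones used earlier)
/dev/vx/dsk/testdg/vol1  /mnt1  vxfs  smartiomode=read       0 0
/dev/vx/dsk/testdg/vol2  /mnt2  vxfs  smartiomode=writeback  0 0
```

With entries like these, the caching mode is applied automatically each time the file system is mounted at boot.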
Administrators can also enable or disable caching for individual file systems. By default, caching is enabled for all file systems, provided the cache area is “auto”. Administrators can disable caching for a file system at mount time using the “smartiomode=nocache” option:
# mount -t vxfs -o smartiomode=nocache /dev/vx/dsk/testdg/vol1 /mnt1
With this option, enabling caching for the file system later requires remounting it, either with the “smartiomode=read” or “smartiomode=writeback” option, or without any “smartiomode” option at all.
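For example, re-enabling read caching for a file system that was mounted with “smartiomode=nocache” could look like the following sketch (device and mount point names follow the earlier examples):

```
# umount /mnt1
# mount -t vxfs -o smartiomode=read /dev/vx/dsk/testdg/vol1 /mnt1
```

The remount is what picks up the new caching mode; simply changing the option elsewhere has no effect on an already-mounted file system.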
Alternatively, administrators can disable caching for a file system (for which caching was enabled by default) using:
# sfcache disable /mnt1
Later on, caching can be re-enabled for the file system using:
# sfcache enable /mnt1
On a side note, administrators can selectively enable a file system for read or writeback caching. A file system with read caching enabled is expected to benefit read performance only, whereas a file system with writeback caching enabled is expected to benefit both read and write performance, at the cost of consuming more SSD resources. By default, a file system is enabled for read caching only. Depending on application requirements, administrators can enable writeback caching with the “smartiomode=writeback” mount option.
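To check which file systems are currently using a cache area, and to observe cache activity, the sfcache listing and statistics commands can be used. A sketch (exact output format varies by release; ssd0_0 is the cache area name from the earlier example):

```
# sfcache list
# sfcache stat ssd0_0
```

Inspecting these before and after enabling writeback caching is a quick way to confirm that the intended file systems, and only those, are consuming SSD resources.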
In this blog post, we have learned how SmartIO caching can be enabled or disabled at the file system level. In the next post in this series, we will see how it can be enabled or disabled at the file or directory level within a file system.
For more details on SmartIO commands and use cases, refer to the administrator’s guide.