Historically, NFS has been a widely used network file system in virtualization deployments. While this was discussed during Phase 1 of the ETSI NFV discussions, it is not clear whether Operators still have a need for it.
One thing NFS does provide us is scale-out capability with v4.1 (aka pNFS). This is an IETF standard (RFC 5661), and both server and client support are in the kernel and in all of our target distributions. The main caveat with pNFS remains the lack of erasure coding support in the layout drivers; with that said, I believe we can get that addressed. Currently, the layout drivers support simple NFS, striping, and mirroring. Here is what each of those means:
Simple NFS means that the metadata server hands the client a handle describing where a file lives, but the file I/O itself then happens directly between the client and the I/O server. The metadata server does not remain a proxy for all of the I/O. This is different from legacy NFS, where metadata and I/O were served by the same node, and it is consistent with the separation of control and data we practice elsewhere in the NFVI.
Striping aggregates I/O performance across the various nodes. The downside is that without parity, the loss of a single node takes down the whole file system. In some cases this might be acceptable, and we can also deploy HA or redundant nodes, much like we do for Lustre deployments in the HPC arena.
Mirroring replicates data across the various I/O nodes, much like the default case for a Swift deployment. This has the benefit of providing redundancy and availability of data, but does not give us any aggregation of performance.
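For reference, this is roughly what the client side looks like when pNFS is in play. This is only a hedged sketch: the host name mds.example.com, the export path, and the mount point are made-up placeholders, and the server-side setup depends on which layout type the metadata server exports.

```shell
# Request NFS v4.1 explicitly so the pNFS layout operations are available;
# the data path then goes straight to the I/O nodes the layout names.
mount -t nfs -o vers=4.1 mds.example.com:/export/data /mnt/data

# Confirm the negotiated version (look for vers=4.1 in the options).
mount | grep /mnt/data

# Whether layouts are actually being handed out shows up in the per-op
# counters of the client's mount statistics (e.g. LAYOUTGET).
grep -i layout /proc/self/mountstats
```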
These days, we have quite a few N+M erasure coding algorithms and implementations to choose from. If there is demand for NFS, we can investigate integrating one or more of these into a layout driver. The layout driver is what tells the client how to fetch and reassemble the data from the I/O nodes.
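To make the N+M idea concrete, here is a minimal sketch (my own illustration, not tied to any particular layout driver) of the simplest possible case: N+1 XOR parity, which can reconstruct any single lost stripe unit.

```python
from functools import reduce

def encode(units: list[bytes]) -> bytes:
    """Compute one XOR parity unit over N equal-sized units (N+1 coding)."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), units))

def reconstruct(surviving_units: list[bytes]) -> bytes:
    """Rebuild the single missing unit by XOR-ing the N survivors (data or parity)."""
    return encode(surviving_units)

# Example: a stripe of three data units spread across three I/O nodes.
stripe = [b"AAAA", b"BBBB", b"CCCC"]
parity = encode(stripe)  # stored on a fourth node

# Suppose the node holding stripe[1] is lost:
recovered = reconstruct([stripe[0], stripe[2], parity])
assert recovered == b"BBBB"
```

A real driver would of course use something like Reed-Solomon for M > 1, but the client-side job is the same either way: the layout tells the client which units live on which nodes and how to recombine them.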
One last thing to note about pNFS I/O nodes: NFS v4.1 doesn't really care whether the underlying I/O nodes speak NFS at all. They can be block-based (iSCSI) or object-based (OSD) too, which means a lot of legacy infrastructure can be repurposed.
Also, SMB is still relevant in Enterprise deployments, but it's not clear whether our Community sees a need here. If there is a desire to support any type of Hyper-V, the lack of SMB could become an issue.
This topic will also be discussed on the tech-discuss mailing list, and any feedback will be brought back to this forum thread.