Virtio-Fs - A Shared File System For Virtual Machines
What is virtio-fs?
Desired semantics:
● POSIX file system plus modern extensions
● Concurrent access from multiple guests
● Local file system semantics (coherency) where possible
Use case: Lightweight VMs and container VMs
Micro VMs, Kata Containers, Function as a Service (FaaS)
Requirements:
● Fast boot time - Avoid copying file contents into the guest during boot
● Low memory overhead - Share read-only file contents between all guests
● Access to files from host - Both read and write access
[Figure: a guest runs a container built from a container image plus dynamic configuration shared in from the host]
Requirements:
● No guest network access - Isolate the guest from the storage network for security
● Hide storage details - Change storage technology without affecting guests
[Figure: guest → virtio-fs → host → storage backend (e.g. Ceph)]
Use case: Traditional file sharing
Share a host directory with the guest (guest mount sketch below)
Requirements:
● No manual setup - Easy to implement as a management tool command
● Add/remove directories at will - Hotplug support
[Figure: files shared from the host to the guest via virtio-fs]
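As a sketch of how little guest-side setup the share needs once the device is configured: the guest mounts it by tag with a single mount(2) call. The tag "myfs" and the mount point are hypothetical placeholders.

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* "myfs" is the (hypothetical) tag given to the device on the
           host side; the guest uses it as the mount source. */
        if (mount("myfs", "/mnt", "virtiofs", 0, NULL) == -1) {
            perror("mount virtiofs");
            return 1;
        }
        return 0;
    }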
Why virtio-fs?
Architecture
[Figure: architecture overview; files on the host are exposed through virtio-fs to the guest]
Virtiofsd
Vhost-user backend consisting of:
Subset of the libfuse library
● Modified (not ABI compatible)
libvhost-user
● Provides the basis for the transport
passthrough_ll
● Loopback FUSE file system (handler sketch below)
Thread per queue + thread pool for servicing requests
[Figure: virtiofsd links a modified libfuse (passthrough_ll) with libvhost-user and connects to QEMU]
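To make "loopback" concrete, here is a minimal sketch of a passthrough_ll-style request handler using the libfuse low-level API. The inode-to-host-fd mapping (lo_fd) is reduced to a root-only stub; the real daemon keeps a table of host fds.

    #define _GNU_SOURCE
    #define FUSE_USE_VERSION 31
    #include <fuse_lowlevel.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <errno.h>

    /* Hypothetical inode-to-fd mapping; only the root is handled here. */
    static int root_fd = -1;

    static int lo_fd(fuse_ino_t ino)
    {
        return ino == FUSE_ROOT_ID ? root_fd : -1;
    }

    /* GETATTR is forwarded straight to the host file system. */
    static void lo_getattr(fuse_req_t req, fuse_ino_t ino,
                           struct fuse_file_info *fi)
    {
        struct stat st;
        (void)fi;

        if (fstatat(lo_fd(ino), "", &st,
                    AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW) == -1) {
            fuse_reply_err(req, errno);
            return;
        }
        fuse_reply_attr(req, &st, 1.0 /* attribute timeout, seconds */);
    }

    static const struct fuse_lowlevel_ops lo_ops = {
        .getattr = lo_getattr,
    };

A queue thread picks each request off its virtqueue and hands it to the thread pool, which dispatches to handlers like this one.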
Potential daemons
● Other filesystems
○ Instead of POSIX, could access a network FS directly (e.g. Gluster/Ceph/NFS) rather than through the kernel
○ Or block storage via userspace (see next talk!)
● Other implementations
○ Rust implementation being considered (crosvm, but not vhost-user)
DAX
● Guest driver requests file fragment (un)mapping by special FUSE messages
● Mappings appear in a PCI BAR at a guest-specified offset
● BAR appears almost like a DAX device in the guest
○ But is only a window into the fs, not the whole fs
● Virtiofsd opens files, QEMU mmap’s them (sketch below)
[Figure: the guest FUSE client (virtiofs.ko) sees file fragments through the BAR window; on the host, virtiofsd and QEMU mmap fragments of the backing files into that window]
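A sketch of what such a mapping request looks like and how the host side could satisfy it. The field layout follows struct fuse_setupmapping_in as later merged into the Linux FUSE uapi headers, so treat it as an assumption about the design described here; map_into_window() is a hypothetical stand-in for the QEMU side.

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/mman.h>

    /* Mapping request sent by the guest FUSE client. */
    struct fuse_setupmapping_in {
        uint64_t fh;      /* already-open file handle */
        uint64_t foffset; /* offset into the file */
        uint64_t len;     /* length of the mapping */
        uint64_t flags;   /* read/write flags */
        uint64_t moffset; /* offset into the DAX window (the BAR) */
    };

    /* Host side: virtiofsd has opened the file as fd; QEMU mmap's the
       requested fragment into the memory region backing the BAR. */
    static void *map_into_window(uint8_t *window, int fd,
                                 const struct fuse_setupmapping_in *msg)
    {
        return mmap(window + msg->moffset, msg->len,
                    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED,
                    fd, (off_t)msg->foffset);
    }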
Differences from normal FUSE
Virtio-fs vs virtio-9p Benchmark
Source: https://lore.kernel.org/linux-fsdevel/20190821173742.24574-1-vgoyal@redhat.com/
Virtio-fs vs virtio-9p Benchmark (contd.)
Source: https://lore.kernel.org/linux-fsdevel/20190821173742.24574-1-vgoyal@redhat.com/
Caches
● Cache latency much less than roundtrip between host and guest
● Filesystem caches:
○ Data: can be shared between host and guest (DAX)
○ Metadata, pathname lookup: can’t be shared
● If not shared, then need to invalidate on “remote” change
○ Synchronous invalidate → strong coherency
○ Asynchronous invalidate or timeout → weak coherency
● Guest cache invalidate should not block (Denial of Service)
Shared memory version table
● Multiple guests ↔ single table
● One possible implementation of synchronous, non-blocking invalidation
● Fast validation of cache entry
○ Compare the value of two memory locations (sketch below)
● Not (yet) working for host filesystem changes
[Figure: FUSE clients in multiple guests and virtiofsd share one table of per-file versions, e.g. foo → 13, bar → 28, dir → 43]
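A minimal sketch of that two-location comparison, assuming a hypothetical layout in which each cache entry remembers the version it was filled at and points at the file's slot in the shared table:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* One slot per file in the shared table; virtiofsd bumps the
       version whenever the file changes. */
    struct version_slot {
        _Atomic uint64_t version;
    };

    /* Guest-side cache entry. */
    struct cache_entry {
        uint64_t cached_version;   /* version when the cache was filled */
        struct version_slot *slot; /* the file's slot in the shared table */
    };

    /* Valid iff the two memory locations still agree; no host/guest
       round trip, and the check never blocks. */
    static bool cache_entry_valid(const struct cache_entry *ce)
    {
        return atomic_load(&ce->slot->version) == ce->cached_version;
    }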
Current Status
Linux guest driver - Core merged in 5.4, DAX not yet posted
Thank you
Cache modes
Users can choose a coherency vs performance trade-off:
● Coherency may require more communication, hence lower performance
Available modes (example invocation below):
● none - metadata, data, and pathname lookups are not cached in the guest; always fetched from the host
● auto - caches expire after a timeout
● always - caches never expire in the guest
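For reference, a sketch of how the C virtiofsd shipped with QEMU selects a mode on its command line (the socket path and shared directory are placeholders):

    virtiofsd --socket-path=/tmp/vhostqemu -o source=/srv/share -o cache=auto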
Security model
Guest has full control over file uid, gid, and permissions
● Access checks performed inside guest
● Guests sharing a file system must trust each other
● Design choice in current implementation, not inherent in VIRTIO spec
Benchmark configuration
Host:
● Fedora 28 host with 32 GB RAM and 24 logical CPUs
● 2 sockets x 6 cores per socket x 2 threads per core
● ramfs as the storage
Guest:
● 8 GB RAM and 16 vCPUs
● 8 GB DAX Window
● 4 x 2 GB fio data files