Try to run a Node.js application without a real filesystem and watch how quickly it falls apart. require() reads files. fs.readFile() reads files. Template engines read files. Config loaders read files. Logging libraries write files. The entire Node.js ecosystem assumes that a POSIX-like filesystem exists, is writable, and is the primary way to access code and data.
This was fine in 2010 when Node.js ran on servers with local disks. It's increasingly problematic now that we want to run JavaScript in places where the filesystem doesn't exist, is read-only, or shouldn't be trusted: edge runtimes, WebAssembly sandboxes, serverless functions, and browser-based development environments.
Where the Filesystem Assumption Breaks
Edge runtimes. Cloudflare Workers, Deno Deploy, and Vercel Edge Functions run JavaScript at the edge — on machines close to the user, with minimal infrastructure. These environments often don't provide a writable filesystem, or they provide a limited in-memory filesystem that doesn't persist between requests. Node.js modules that call fs.writeFileSync crash. Modules that use fs.existsSync to check for config files behave unpredictably.
WebAssembly. Running Node.js inside a WebAssembly sandbox requires mapping filesystem operations to the Wasm runtime's virtual filesystem (typically WASI). This mapping is imperfect — file permissions, symlinks, and platform-specific paths don't translate cleanly. A VFS abstraction would make this mapping explicit rather than ad hoc.
Testing. Unit testing code that reads configuration files, loads templates, or writes logs requires either mocking the fs module (fragile, because different libraries use different fs APIs) or creating temporary directories with test fixtures (slow, flaky, leaves garbage on disk). A virtual filesystem would let tests run entirely in memory, deterministically, without touching the real filesystem.
Security sandboxing. When running untrusted code — plugins, user scripts, CI/CD build steps — you want to control filesystem access. Currently, this requires OS-level sandboxing (containers, namespaces) or monkey-patching the fs module. A VFS would allow fine-grained filesystem policies: this code can read /app but not /etc, can write to /tmp but not /app.
What a VFS Would Look Like
A virtual file system isn't a new concept. Operating systems have used VFS layers since the 1980s — Linux's VFS allows ext4, NFS, and procfs to coexist behind the same API. The idea for Node.js is the same: decouple the filesystem interface (fs.readFile, require(), etc.) from the filesystem implementation.
```js
// Hypothetical VFS API
import { createVFS, MemoryFS, ReadOnlyFS, OverlayFS } from 'node:vfs';

// In-memory filesystem for testing
const testFS = new MemoryFS({
  '/app/config.json': '{"port": 3000}',
  '/app/templates/index.html': '<h1>Hello</h1>',
});

// Read-only view of the real filesystem
const readOnly = new ReadOnlyFS('/');

// Overlay: reads fall through to the real FS, writes go to memory
const sandbox = new OverlayFS(readOnly, new MemoryFS());

// Run code with a specific filesystem
const vfs = createVFS(sandbox);
vfs.run(() => {
  // Inside this context:
  //   fs.readFileSync('/etc/hosts')    → reads the real file
  //   fs.writeFileSync('/tmp/log.txt') → writes to the memory overlay
  //   require('./module')              → resolves against the VFS
  const app = require('./app');
  app.start();
});
```
The key insight is that the VFS intercepts all filesystem access — not just explicit fs calls but also require(), import(), __dirname, and process.cwd(). This is what makes it useful: you can redirect an entire application's filesystem access without modifying the application.
The Module Resolution Problem
The deepest entanglement between Node.js and the filesystem is module resolution. When you write require('express'), Node.js walks up the directory tree looking for node_modules/express directories, reading package.json files, resolving symlinks, and following the main or exports field. This process makes dozens of filesystem calls for a single require.
Yarn PnP (Plug'n'Play) partially solved this by replacing node_modules with a manifest file that maps module names to zip archives. It's faster (no directory traversal) and more deterministic (no hoisting surprises), but it required monkey-patching Node's module resolution, which broke tools that made assumptions about node_modules existing.
A proper VFS would make Yarn PnP's approach a first-class feature. Module resolution would go through the VFS, which could implement any mapping — zip archives, in-memory modules, remote URLs, or the traditional node_modules directory walk. Different strategies for different environments, same API for application code.
Prior Art
Other runtimes have already solved variations of this problem.
- Deno loads modules from URLs by default and caches them in a managed directory. There's no `node_modules` and no filesystem-based module resolution. The runtime controls where and how modules are stored.
- Bun uses a global module cache with hardlinks, reducing filesystem overhead. Its module resolution is heavily optimized compared to Node.js's.
- Go's `io/fs` interface (added in Go 1.16) defines a filesystem interface that's implemented by the OS filesystem, embedded files (`embed.FS`), zip archives, and in-memory filesystems. Standard library functions accept the `fs.FS` interface, making them testable without real files.
- Java's NIO `FileSystem` abstraction supports pluggable filesystem providers, including in-memory implementations for testing and zip-based filesystems.
- .NET's `IFileSystem` pattern is widely used in .NET applications for testability, though it's a community convention rather than a runtime feature.
The Go approach is particularly instructive. By defining a small interface (Open, Read, Stat) and using it throughout the standard library, Go made filesystem abstraction trivially easy without breaking existing code. Node.js could do the same by defining a VFS interface and gradually migrating the standard library and module loader to use it.
The Challenge of Compatibility
The biggest obstacle to a Node.js VFS isn't the implementation — it's the ecosystem. npm has over two million packages, and a significant fraction of them access the filesystem in ways that assume POSIX semantics, real paths, and writable directories.
A VFS that breaks existing packages is dead on arrival. It needs to be opt-in and backwards-compatible: if you don't explicitly create a VFS, everything works exactly as it does today. When you do create a VFS, it should transparently intercept filesystem calls so that well-behaved packages work without modification.
'Well-behaved' is the key qualifier. Packages that shell out to cp or rm, use native addons that call libc's open() directly, or rely on /proc or other OS-specific filesystems won't work with a VFS. This is a hard boundary — any abstraction leaks when code reaches past it to the underlying system.
What's Actually Being Proposed
The Node.js VFS discussion isn't theoretical — there are concrete proposals in the Node.js issue tracker. The current direction involves a few key ideas.
First, a FileSystemProvider interface that the fs module delegates to. The default provider is the real filesystem. Custom providers implement the same interface for in-memory, read-only, overlay, or remote filesystems.
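To make the delegation concrete, here is a minimal sketch of what a custom provider might look like. The `FileSystemProvider` name comes from the proposal; the method names and signatures below are assumptions, not the proposed API.

```javascript
// Sketch of an in-memory provider implementing a hypothetical
// FileSystemProvider shape. Method names are illustrative assumptions.
class MemoryProvider {
  constructor(files = {}) {
    this.files = new Map(Object.entries(files));
  }
  readFile(p) {
    if (!this.files.has(p)) {
      throw Object.assign(new Error(`ENOENT: no such file ${p}`), { code: 'ENOENT' });
    }
    return this.files.get(p);
  }
  writeFile(p, data) {
    this.files.set(p, data);
  }
  exists(p) {
    return this.files.has(p);
  }
}

// Under the proposal, the fs module would delegate to the active provider:
const provider = new MemoryProvider({ '/app/config.json': '{"port": 3000}' });
console.log(provider.readFile('/app/config.json')); // '{"port": 3000}'
console.log(provider.exists('/etc/hosts'));         // false
```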
Second, integration with the module loader. require() and import() would resolve modules through the VFS, enabling module resolution strategies that don't depend on node_modules directories.
Third, a policy layer. The VFS could enforce access controls — preventing code from reading or writing paths outside a defined scope. This would give Node.js built-in sandboxing capability similar to Deno's --allow-read and --allow-write flags.
Whether this ships in Node.js 24, 26, or ever depends on the Node.js team's bandwidth and the community's appetite for a breaking change in how filesystem access works. But the pressure is real: edge runtimes are growing, WebAssembly deployment targets are multiplying, and the gap between 'JavaScript everywhere' and 'Node.js specifically on servers with real filesystems' is becoming a competitive liability.
What You Can Do Today
You don't need to wait for a runtime VFS to benefit from filesystem abstraction. For new code, wrap filesystem access behind an interface. Instead of calling fs.readFile directly, create a FileStore interface that your code depends on. Implement it with the real filesystem for production and an in-memory map for tests. This is standard dependency inversion — unglamorous, effective, and compatible with any runtime.
For existing code, libraries like memfs and unionfs provide in-memory and overlay filesystem implementations that patch Node's fs module. They're not perfect — native addons and child processes bypass them — but they work for pure-JavaScript code and make testing dramatically easier.
The filesystem is the last major piece of Node.js infrastructure that doesn't have a clean abstraction boundary. Streams have interfaces. HTTP has interfaces. Even the module loader has hooks. When the filesystem gets the same treatment, Node.js will be a genuinely portable runtime rather than a server-specific one. That's a change worth pushing for.