AWS just made S3 buckets work like actual file systems. The new S3 Files feature, now generally available in 34 regions, gives any AWS compute resource direct file system access to data sitting in S3. Built on Amazon EFS, it handles the translation between file operations and S3's object model while keeping your data in place. No more copying data between object storage and file systems. No more sync pipelines.
For AI agents and ML teams, the feature solves a real pain point. Agents that need to persist memory or share state across pipelines can now do it directly against S3. Data prep workflows don't require staging files elsewhere first. The system caches hot data for low latency, and AWS claims aggregate read throughput of multiple terabytes per second, so storage shouldn't bottleneck performance. It works with existing S3 data too; no migration required.
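To make that concrete, here's a minimal sketch of agent memory persistence as plain file I/O rather than SDK calls. The mount path and file layout are hypothetical (a temp directory stands in for the S3-backed mount), but the point is that any code speaking POSIX file operations would work unchanged:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical mount point where S3 Files would expose a bucket as a
# file system; a temp directory stands in for it in this sketch.
MOUNT = Path(tempfile.mkdtemp())

def save_memory(agent_id: str, memory: dict) -> None:
    """Persist agent state with an ordinary file write -- no S3 SDK involved."""
    (MOUNT / f"{agent_id}.json").write_text(json.dumps(memory))

def load_memory(agent_id: str) -> dict:
    """Another process or pipeline stage reads the same state back."""
    path = MOUNT / f"{agent_id}.json"
    return json.loads(path.read_text()) if path.exists() else {}

save_memory("agent-42", {"step": 3, "notes": ["tried plan A"]})
restored = load_memory("agent-42")
```

Nothing here knows it's talking to object storage; that translation is the feature's job.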
Bad news for companies selling file-system-in-front-of-S3 solutions. NetApp's Cloud Volumes ONTAP, Dell's PowerScale, Weka, VAST Data, and Alluxio all built businesses around bridging exactly this gap. AWS just commoditized what they do. These vendors will need to shift toward capabilities AWS doesn't offer, like multi-cloud portability or specialized compliance features. Making S3 feel like a file system is now a checkbox in the AWS console.