When containers started out, they were meant to be ephemeral – stateless, disposable and data-light. But that has all changed. As Gartner notes, use cases for containers have evolved to include analytics and artificial intelligence (AI) processing, and by 2028, it predicts, 15% of on-premises production workloads will run in containers. That’s a 300% increase since 2022.

Now, while containers themselves retain all the benefits of ephemerality – spinning up rapidly to absorb workload spikes, then dying back just as quickly – the storage attached to them cannot live by the same rules.

As enterprises move from proofs of concept to running a substantial share of production workloads in containers, the storage layer has become a pivotal decision. While the early days focused on simple web scaling, containers have since moved into the realm of mission-critical databases, massive data science pipelines, and the power-hungry world of generative AI (GenAI).

The challenge lies in navigating key choices: file versus block versus object storage, the Container Storage Interface (CSI) versus container-native storage, and whether to adopt a dedicated container storage platform.
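To make the CSI route concrete, the sketch below shows how persistent storage is typically requested in Kubernetes: a StorageClass points at a vendor's CSI driver, and a PersistentVolumeClaim asks for capacity from it. The driver name (`csi.example.com`) and its parameters are illustrative placeholders, not any specific vendor's values.

```yaml
# Hypothetical example: a StorageClass backed by a CSI driver,
# plus a claim that a database container could mount.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: csi.example.com   # vendor's CSI driver name (placeholder)
parameters:
  type: ssd                    # driver-specific parameter (placeholder)
reclaimPolicy: Retain          # keep the data after the claim is deleted
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce            # block volumes typically mount to one node
  storageClassName: fast-block
  resources:
    requests:
      storage: 100Gi
```

The point of the abstraction is that the claim outlives any individual container: pods come and go, but the volume – and the data – persists until the claim itself is deleted.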