Ship it!
My 44th release of Ceph since we started shipping enterprise-class products is now on the wire, and a little celebration is in order.
This June, we have a special release of Ceph. As Josh and I first announced a year ago at the OpenInfra Summit’s State of the Cephalopod, and as was then introduced in Tech Preview with 7.0 in December, we have a major addition to the Ceph protocol family with the arrival of the newfangled NVMe-oF Gateway.
Building Blocks #
Adding NVMe over TCP to our supported protocol suite gives us a way to provision Ceph block storage where the librbd and krbd (or NBD) native drivers do not yet reach, notably VMware. With the addition of a vSphere plugin we borrowed from the IBM FlashSystem team, NVMe-oF is well integrated with the world’s favorite virtualization platform, and aims to be the rising challenger among software-defined storage solutions for virtualization.
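Under the hood, the gateway exposes RBD images as NVMe namespaces, so provisioning still starts with an ordinary RBD image. As a minimal sketch, assuming a hypothetical "vmware-pool" pool and "datastore01" image name (the gateway and vSphere side are configured separately), creating the backing image with the Python librbd bindings looks like this:

```python
import rados
import rbd

# Connect using the local cluster configuration (path is an assumption).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vmware-pool')  # hypothetical pool name
    try:
        # Create a 1 TiB thin-provisioned RBD image; the NVMe-oF gateway
        # can then export it over TCP to hosts without native drivers.
        rbd.RBD().create(ioctx, 'datastore01', 1024 ** 4)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```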
Ceph Platform 7.1 is not only the basis of our IBM Storage Ceph product, it also enables the delivery of OpenShift Data Foundation (our Kubernetes storage product) and Red Hat Ceph Storage (our OpenStack product). RHCS 7.1, released two days ago, opens the way to this summer’s release of OpenStack Platform 18, for which it is the storage of choice.
This has been a remarkable tour de force for the new Ceph NVMe-oF team (of course), but also for the rest of the band, from Build to Test and Management: everyone had to pull together to make this happen in a year, and we hit our plan with impressive precision.
And we are not done yet: the Block interface will see significant advances in IBM Cloud and the OpenShift Data Foundation products in the second half of the year, as well as a refresh of the Ready Nodes appliance format, popular with those looking for a complete solution from the hardware up.
Object of Attention #
The Object team has not been resting on its laurels as the world’s market-leading object storage solution for multi-petabyte clusters, and introduced formal WORM certification of Object Lock functionality with IBM Ceph 7.x. IBM estimates there are now at least 6.5 exabytes of Ceph out in the world, and the Object Storage Gateway (RGW) takes the lion’s share of that capacity.
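Because RGW implements the standard S3 Object Lock API, existing S3 tooling works with it unchanged. Here is a minimal sketch using boto3, where the endpoint, credentials, bucket name, and retention period are all illustrative assumptions:

```python
import boto3

# Hypothetical RGW endpoint and credentials; substitute your own.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket='compliance-logs', ObjectLockEnabledForBucket=True)

# Default retention: every new object version is immutable (WORM) for
# 365 days in COMPLIANCE mode, which no user can shorten or remove.
s3.put_object_lock_configuration(
    Bucket='compliance-logs',
    ObjectLockConfiguration={
        'ObjectLockEnabled': 'Enabled',
        'Rule': {'DefaultRetention': {'Mode': 'COMPLIANCE', 'Days': 365}},
    },
)
```

In COMPLIANCE mode, locked object versions cannot be deleted or overwritten by anyone before the retention period expires, which is precisely the property a formal WORM certification attests to.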
The Intel team’s long history as the second-biggest contributor to the Ceph codebase is in the spotlight once more, with their contribution of support for Intel’s QuickAssist Technology (QAT). By offloading compression to the coprocessors found in Intel’s Eagle Stream platform, RGW is now capable of ingesting object data into a Ceph cluster at line speed and compressing it with no performance penalty relative to storing it uncompressed. This has the potential to significantly improve the TCO of any Ceph object storage data lake wherever the data being stored is compressible.
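To make the TCO point concrete, here is a back-of-the-envelope sketch; the dataset size, compression ratio, and replication factor are all illustrative assumptions, not benchmark results:

```python
# With transparent QAT-offloaded compression, the same logical dataset
# occupies less raw capacity while ingest speed stays at line rate.
logical_tb = 1000        # logical data to store, in TB (assumption)
compression_ratio = 2.0  # 2:1, plausible for logs and text (assumption)
replication_factor = 3   # classic 3x replication (assumption)

raw_plain = logical_tb * replication_factor
raw_qat = logical_tb / compression_ratio * replication_factor
print(f"raw capacity: {raw_plain:.0f} TB -> {raw_qat:.0f} TB")
# raw capacity: 3000 TB -> 1500 TB
```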
But while Red Hat Ceph is a compute storage product aimed at OpenStack and OpenShift requirements, IBM Ceph has more global ambitions as a unified storage product: the Linux of Storage, as it were. In that context, we re-introduced NFS v4 support with 7.0, and 7.1 brings NFS v3 compatibility for those of you running Windows Server and AIX NFS clients.
But that’s not all: IBM’s watsonx.data product comprises open source query engines, data serialization libraries, and table formats, all open. It now also includes an entitlement to 768 TB of raw IBM Storage Ceph capacity. An open source enterprise storage solution for watsonx.data just rhymes; even more so, backing it with a capable object store just makes sense.
One More Thing #
Ceph Squid is about to launch upstream, and we are intent on releasing our enterprise version of it before the end of the year, keeping pace with upstream development, as we re-committed (and delivered) last year.
NVMe-oF, WORM Certification, NFS, Appliances, OpenStack, Kubernetes, and more: Ceph 7.1 is the culmination of our massive Platform 7 effort, and it is firing on all cylinders of Unified Storage, delivering on Block, Object, and File.
And we are not done yet: we still have half a year to go. Welcome to the Linux of Storage.
Comments? Discuss on Hacker News.