The Hub

Software is made at the intersection of Technology and Management.

Read this first

Insert Next Disk

Making Ceph failed disk replacement seamless

with Sebastian Wagner (Red Hat), Paul Cuzner (Red Hat), and Ernesto Puerta Treceno (Red Hat)

Red Hat Ceph Storage 5 introduces cephadm, a new integrated control plane that is part of the storage system itself and has a complete understanding of the cluster's current state, something external tools could never quite achieve because of their external nature. Among its many advantages, cephadm's unified control of the cluster state significantly simplifies operations.

Replacing failed drives made easy

For example, the older process of replacing drives with ceph-ansible required multiple steps and ran playbooks that enforced configuration across all nodes, even when only a single node needed updating. Working around drive encryption could add further complexity.
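By contrast, under the new cephadm control plane the same task collapses to a handful of orchestrator commands. A rough sketch follows; the OSD id, hostname, and device are placeholders, and exact flags may vary between releases:

```shell
# Mark OSD 7 for replacement: drain it and remove it, but keep its id
# reserved so the replacement drive takes the same slot.
ceph orch osd rm 7 --replace --zap

# Watch the drain/removal progress.
ceph orch osd rm status

# After the failed drive is physically swapped, confirm the new device
# is visible and available to the orchestrator.
ceph orch device ls

# If a matching OSD service spec covers this host, cephadm redeploys the
# OSD automatically; otherwise it can be triggered explicitly.
ceph orch daemon add osd host01:/dev/sdX
```

Note how nothing here touches any node other than the one holding the failed drive.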

New ways: replacing a

...

Continue reading →


Ceph & Rook: Data Security and Storage Hardening

This October, Ana McTaggart, Michael Hackett, and I presented a fast-paced security overview of all things Ceph and Rook at the newfangled CNCF Security Conference.


This was a mixed virtual and in-person event. We elected to go virtual, but we saw from pictures the in-person session was well attended, despite travel restrictions. Yay vaccines!


Due to the virtual format, this talk was pre-recorded, and I presented at a brisk pace; that is what inevitably happens when it is late in the evening and I have had too much caffeine. Have some coffee yourself, then listen to the recording:

Our slides are available as a PDF and can be viewed inline below — we are very interested in your feedback and comments or any unanswered questions you may have. Find me on Twitter and share your thoughts!

Comments? Discuss on Hacker News.

Continue reading →


A Look at Ceph Storage 5

Introducing the Pacific codebase of Red Hat’s flagship storage product

with Marcel Hergaarden (Red Hat)

Red Hat Ceph Storage 5 is now generally available. This release includes support for NFSv4, additional disaster recovery capabilities for CephFS and RBD, as well as new security features and performance improvements.

New features

Red Hat Ceph Storage 5 introduces significant new functionality in several key areas:

  • The new, integrated Cephadm control plane and newly-stable management API lay a foundation for customized automation. With these improvements we avoid requiring mastery of a DevOps configuration system to deploy and manage Ceph clusters.
  • CephFS now supports the NFS 4 protocol, enabling AI/ML workloads requiring a filesystem interface alongside object storage. CephFS is also adding geo-replication capabilities for disaster-recovery (DR) multi-cluster configurations...
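To give a flavor of the NFS support: an NFS-Ganesha service and a CephFS-backed export can be managed directly from the Ceph CLI. The cluster, filesystem, and export names below are hypothetical, and the `ceph nfs` syntax has shifted between releases, so treat this as a sketch rather than a recipe:

```shell
# Create an orchestrator-managed NFS-Ganesha cluster on two hosts.
ceph nfs cluster create mynfs "host01,host02"

# Export a CephFS filesystem under a pseudo-path for NFS clients.
ceph nfs export create cephfs mycephfs mynfs /exported

# A client can then mount the export over NFSv4.
mount -t nfs -o nfsvers=4.1 host01:/exported /mnt
```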

Continue reading →


Ceph Storage — Enterprise Meets Community

with Sage Weil (Red Hat)

Sage and I renewed the now-annual tradition of delivering a recorded roadmap session for the Red Hat Summit virtual audience. This year our session was selected for the launch of Red Hat TV, which happens later this summer, so its release may be delayed and become available at a time separate from the virtual Red Hat Summit event itself.


I covered the upcoming Red Hat Ceph Storage 5, based on upstream’s Pacific release, and Sage illustrated the Community plans for the Quincy release coming next year. We tried to compensate for the lack of interaction by packing a lot of material for the attendees of the session, so brew a cup of tea and block your calendar, because you will need a full hour to go through all that we are throwing at you!

Our slides are available as a PDF and can be viewed inline below — we are not including our backup slides...

Continue reading →


Red Hat Ceph Storage 5: Livin’ La Vida Loca

The Red Hat Ceph Storage life cycle: upgrade scenarios and long-lived deployments

with Sean Murphy (Red Hat)

Different industries have varying requirements for the software systems on which their respective businesses rely. Some operators choose to quickly embrace the latest and greatest release when facing change and integration updates. Others defer upgrades for as long as possible, trying to continue on a tried-and-true combination of software components until end-of-support (or security patching) forces a change.

The distinction is somewhat artificial, as most operators really adopt a combination of the two strategies for different parts of their infrastructure. The Red Hat Ceph Storage life cycle aims to address both faster and slower movers. In this post, we’ll share how we’re helping customers stay current while also providing longer life cycle options where needed.

Software

...

Continue reading →


RHCS 5: Introducing Cephadm

Highlights of Alpha 4 release include the new integrated installer

with Daniel Pivonka (Red Hat) and Paul Cuzner (Red Hat)

We’re delighted to announce availability of the new Alpha 4 release of Red Hat Ceph Storage 5, built on the upstream project’s Pacific release cycle. This post is the first of a series that will walk you through the enhancements coming with the next major upgrade of Ceph Storage, well ahead of their production release, and give the details needed to facilitate testing with early-access releases.

Today’s post centers on the new Cephadm interface to the orchestration API, which is intended to become the preferred bare-metal installation and management method for Ceph across the broader vendor community. You can find download details for early access releases at the end of this blog. Now, without further ado, on to what is new.
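To give a quick taste of Cephadm, a new cluster grows out of a single node. A minimal sketch, with the IP address and hostname as placeholders:

```shell
# Bootstrap a one-node cluster on the first host; this starts a
# containerized monitor and manager and prints dashboard credentials.
cephadm bootstrap --mon-ip 192.168.1.10

# Add further hosts to the orchestrator's inventory...
ceph orch host add host02

# ...and let cephadm consume all available, unused devices as OSDs.
ceph orch apply osd --all-available-devices
```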


A short history

In the recent...

Continue reading →


Building a Ceph-powered Cloud

Deploying a containerized Red Hat Ceph Storage 4 cluster for Red Hat Open Stack Platform 16

with John Fulton (Red Hat) and Gregory Charot (Red Hat)

Ceph is the most popular storage backend for OpenStack by a wide margin, as has been reported by the OpenStack Foundation’s survey every year since its inception. In the latest survey, conducted during the Summer of 2019, Ceph outclassed other options by an even greater margin than it did in the past, with a 75% adoption rate.


Red Hat’s default method for installing OpenStack is with a tool called director. This tool can configure OpenStack to connect with an existing Ceph cluster, or it can deploy a new Ceph cluster as part of the process to create a new OpenStack private cloud. Because director integrates ceph-ansible, the same deployment options described in our previous post dedicated to it remain available from director. This post...

Continue reading →


Ceph at Red Hat Summit 2020


Sage, Uday, and I put our best efforts into making sure that the new virtual venue for the Red Hat Summit would not diminish customer access and visibility into our future plans for Ceph. We delivered an unprecedented 18-month roadmap for the downstream, enterprise-class supported product, showcasing the “secret deck” that is usually reserved for internal planning use within our team. I usually make a roadmap statement to customers only when we have a 70%+ confidence estimate and delivery is scheduled to occur within a year, but this time I chose to relax the rules a little and give you a view stretching two major releases and 18 months into the future. We wanted to reward your virtual attention with a little more food for thought on what is coming with Red Hat Ceph Storage 5 and 6 this Summer and the next.


Not to be outdone, Sage walked us through...

Continue reading →


Ceph Block Performance Monitoring

Putting noisy neighbors in their place with “RBD Top” and QoS

with Jason Dillaman (Red Hat)

Prior to Red Hat Ceph Storage 4, Ceph storage administrators did not have access to built-in RBD performance monitoring and metrics-gathering tools. While a storage administrator could monitor high-level cluster or OSD I/O metrics, these were often too coarse-grained to determine the source of noisy-neighbor workloads running on top of RBD images. The best available workaround, assuming the storage administrator had access to all client nodes, was to poll metrics from the client nodes with some kind of homegrown external tooling.

Ceph Storage 4 now incorporates a generic metrics gathering framework within the OSDs and MGRs to provide built-in monitoring, and new RBD performance monitoring tools are built on top of this framework to translate individual RADOS object metrics into...
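The new tooling is surfaced through the `rbd` CLI. A brief sketch of the monitoring and QoS side; the pool and image names are placeholders:

```shell
# Enable the MGR module that aggregates per-image RADOS metrics,
# in case it is not already on.
ceph mgr module enable rbd_support

# Live, top-like view of the busiest RBD images in a pool.
rbd perf image iotop --pool rbdpool

# One-shot, iostat-style listing of per-image throughput and IOPS.
rbd perf image iostat --pool rbdpool

# Once a noisy neighbor is identified, cap it with a per-image
# IOPS limit.
rbd config image set rbdpool/vm-disk-1 rbd_qos_iops_limit 500
```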

Continue reading →


The Power User’s Path to Ceph

Deploying a containerized Ceph Storage 4 cluster using ceph-ansible

with Guillaume Abrioux (Red Hat) and Paul Cuzner (Red Hat)

Introduction

The landscape of modern IT infrastructure is dominated by software-defined networking, public cloud, hybrid cloud, and software-defined storage. The shift from legacy, hardware-centric architectures to software-defined infrastructure requires a more mature orchestration “engine” to manage changes across distributed systems. For many enterprises, Ansible has fulfilled this requirement, and this in turn has led the upstream Ceph community to base its next-generation management toolchain on Ansible, in the form of Sébastien Han’s ceph-ansible.

Ceph Storage was the first Red Hat product to incorporate Ansible technology after our October 2015 acquisition of Ansible’s corporate sponsor. Red Hat Ceph Storage has been shipping ceph-ansible...
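For a taste of the ceph-ansible workflow, a containerized deployment is driven by an Ansible inventory plus a handful of group variables. A minimal, illustrative `group_vars/all.yml` might look like the following; the values are placeholders, and the available variables differ between ceph-ansible releases:

```yaml
# group_vars/all.yml -- minimal settings for a containerized cluster
ceph_origin: repository
ceph_repository: rhcs
containerized_deployment: true
monitor_interface: eth0
public_network: 192.168.1.0/24
cluster_network: 192.168.2.0/24
dashboard_enabled: true
```

With the inventory and variables in place, the cluster is deployed by running the project's site playbook (`site-container.yml` for containerized installs) against that inventory.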

Continue reading →