Oracle OpenStack 4.0.1: Use Ceph Storage for Glance (Image Service) Backend
(Doc ID 2399229.1)
Last updated on MAY 22, 2020
Applies to: Oracle Cloud Infrastructure - Version N/A and later
Oracle OpenStack for Oracle Linux - Version OpenStack 4.0.1 and later
Ceph Storage requires at least three storage nodes for High Availability.
It is recommended to have at least three control nodes for High Availability.
The Ceph MON container (ceph_mon) handles all communications with external applications and clients. It checks the state and the consistency of the data, and tracks active and failed nodes in the Ceph storage cluster. It is deployed to control nodes.
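Because the MON tracks cluster state and node membership, it is the natural place to check cluster health after deployment. The following is a hedged sketch: the `ceph_mon` container name follows the Kolla-based layout described above, and the `ceph` CLI is assumed to be available inside that container; verify both against your release before relying on it.

```shell
# Hedged sketch: query cluster state from a control node through the
# ceph_mon container (container name assumed from the Kolla-based layout).
docker exec ceph_mon ceph -s          # overall cluster status and health
docker exec ceph_mon ceph osd tree    # active/down OSDs per storage node
```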
The Ceph OSD (Object Storage Device) container (ceph_osd) is deployed to storage nodes, using pre-attached, labelled disks or LUNs on those nodes.
Oracle OpenStack deploys its Ceph storage services as Docker containers; it does not connect to an existing Ceph storage cluster.
There are three typical partition layouts for the journal and data of an OSD, described in detail in the Oracle OpenStack documentation.
The journal I/O on each storage node is vital to overall Ceph performance, so it is recommended to put the journal on a Solid State Drive (SSD); this is referred to as an external journal.
In this sample, no Solid State Drive (SSD) is installed on the storage nodes, so partitions for co-located journals (journal and data on different partitions of the same disk or LUN) are used for demonstration purposes.
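Preparing a disk for a co-located journal amounts to labelling it so the OSD bootstrap can claim it. A hedged sketch follows: the device path `/dev/sdb` is an example, and the `KOLLA_CEPH_OSD_BOOTSTRAP` partition name is the convention of the Kolla project that Oracle OpenStack is based on; confirm the exact label required by your release before running this against a real disk.

```shell
# Hedged sketch: label an empty disk (example device /dev/sdb) for an OSD
# with a co-located journal. The KOLLA_CEPH_OSD_BOOTSTRAP partition name is
# the Kolla convention; the bootstrap process splits the partition into
# journal and data itself. Destructive -- check the device path first.
parted /dev/sdb -s -- mklabel gpt
parted /dev/sdb -s -- mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1MiB -1
parted /dev/sdb -s -- print   # verify the partition label before deploying
```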
The size of the disk (or LUN) must be at least 11 GB; 6 GB for the journal and 5 GB for the data.
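The minimum disk size above is simply the sum of the two parts; a small shell check makes the arithmetic explicit (the 6 GB journal and 5 GB data figures are taken from the text above):

```shell
# Minimum disk/LUN size for one OSD with a co-located journal,
# using the sizes stated above.
JOURNAL_GB=6
DATA_GB=5
MIN_DISK_GB=$((JOURNAL_GB + DATA_GB))
echo "Minimum disk/LUN size: ${MIN_DISK_GB} GB"
```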
Setup in this sample: