Concept
- OSD: the daemon that manages a storage disk; one OSD per hard disk
- MON: maintains the cluster state; the more critical component, and one can run on each of several nodes
- MGR: monitors cluster status
- RGW(optional): provides object storage API
- MDS(optional): provides CephFS
Ways to use Ceph for storage.
- librados: library
- radosgw: Object Storage HTTP API
- rbd: block storage
- cephfs: file system
Authentication
Ceph client authentication requires a username and a key. By default, the username is client.admin and the key path is /etc/ceph/ceph.username.keyring. Running ceph --user abc accesses the cluster as user client.abc.
A user’s permissions are granted per service type. You can use ceph auth ls to show all users and their permissions.
In that output, osd.0 has all permissions for OSD and only osd-related permissions on mgr and mon, while client.admin has all permissions. A profile can be thought of as a predefined collection of permissions.
Create a new user and grant permissions.
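For example, something like this (the pool name xxx is just a placeholder):

```shell
# Create client.abc with read access to MON and read/write access to pool xxx
ceph auth get-or-create client.abc mon 'allow r' osd 'allow rw pool=xxx'
```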
Modify permission.
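For instance, to overwrite the capabilities of an existing user:

```shell
# Replace the capabilities of client.abc (caps are overwritten, not merged)
ceph auth caps client.abc mon 'allow r' osd 'allow rw pool=xxx'
```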
Get permission.
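For example:

```shell
# Show the key and capabilities of client.abc
ceph auth get client.abc
```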
Delete User.
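Something like:

```shell
# Remove the user and its key from the cluster
ceph auth del client.abc
```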
OSD
Managing OSDs is actually managing the hard drives that store your data.
Check the status.
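One way to do this:

```shell
# Summary of OSDs: how many exist, how many are up and in
ceph osd stat
```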
Shows how many online and offline OSDs there are.
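For example:

```shell
# Show OSDs arranged in the CRUSH hierarchy
ceph osd tree
```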
Shows the storage hierarchy, where non-negative IDs are actual OSDs and negative IDs are the other levels of the hierarchy (CRUSH buckets), such as roots, racks, and hosts.
Pool
A Pool is a storage pool; the RBD and CephFS features described later need a storage pool to work on.
Create a storage pool.
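A sketch, with xxx as the pool name and 128 as an example PG count:

```shell
# Create a replicated pool (the default) with 128 placement groups
ceph osd pool create xxx 128

# Or create an erasure-coded pool instead
ceph osd pool create xxx 128 128 erasure
```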
For performance reasons, you can set the number of PGs (Placement Groups). By default a replicated pool is created, which stores multiple copies of the data, similar to RAID 1. A pool can also be of the erasure type, similar to RAID 5.
The data in each placement group is stored on the same set of OSDs; objects are distributed across PGs by hashing.
List all storage pools.
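For example:

```shell
# List pool IDs and names
ceph osd lspools
```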
View storage pool usage.
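For instance:

```shell
# Cluster-wide and per-pool usage
ceph df
```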
IO state of the storage pool.
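For example:

```shell
# Client I/O rates per pool
ceph osd pool stats
```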
Take a snapshot of the storage pool.
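For example (the snapshot name snap1 is a placeholder):

```shell
# Take a snapshot named snap1 of pool xxx
ceph osd pool mksnap xxx snap1
```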
RBD
RBD exposes Ceph as a block device.
Create
Initialize Pool for RBD.
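For example:

```shell
# Prepare pool xxx for use by RBD
rbd pool init xxx
```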
For security reasons, a separate user is usually created for RBD clients.
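For example, granting only the built-in rbd profiles on pool xxx:

```shell
# Create client.abc limited to RBD operations on pool xxx
ceph auth get-or-create client.abc mon 'profile rbd' osd 'profile rbd pool=xxx'
```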
Create an RBD image.
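Something like:

```shell
# Create image yyy of 1024 MB in pool xxx
rbd create --size 1024 xxx/yyy
```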
This creates an image named yyy with a size of 1024 MB in pool xxx.
Status
Lists the images in the pool.
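For instance:

```shell
# List images in pool xxx
rbd ls xxx
```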
The default pool name is rbd.
View image information.
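For example:

```shell
# Show size, object count, features, etc. of image yyy in pool xxx
rbd info xxx/yyy
```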
Expand capacity
Modify the capacity of the image.
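For example (2048 MB is just an example size):

```shell
# Grow image yyy in pool xxx to 2048 MB
rbd resize --size 2048 xxx/yyy
```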
Mount
When mounting RBD on another machine, first adjust the configuration under /etc/ceph to make sure the user, key, and MON addresses are present.
Then map the device with rbd.
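For example:

```shell
# Map image yyy from pool xxx as user client.abc
rbd map xxx/yyy --id abc
```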
This maps the yyy image in pool xxx as user abc.
You can then see the device files under /dev/rbd* or /dev/rbd/.
List the mapped devices.
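For example:

```shell
# Show which images are mapped to which /dev/rbd devices
rbd device list
```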
CephFS
Create
If the orchestrator is configured, you can directly use the following command.
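For example:

```shell
# Create a CephFS volume; the pools and MDS are set up automatically
ceph fs volume create xxx
```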
This creates a CephFS named xxx.
It can also be created manually.
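A sketch of the manual steps; the pool names xxx_metadata and xxx_data are placeholders:

```shell
# Create one pool for metadata and one for file data,
# then create the filesystem from them
ceph osd pool create xxx_metadata
ceph osd pool create xxx_data
ceph fs new xxx xxx_metadata xxx_data
```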
This creates two pools, one for metadata and one for file data, and then builds the filesystem from them. A CephFS requires one metadata pool and can use one or more data pools.
Once CephFS is created, the corresponding MDS is started.
Status
View the MDS status.
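For instance:

```shell
# One-line summary of MDS daemons and their states
ceph mds stat
```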
Client Configuration
Before mounting CephFS, first configure the client.
Run ceph config generate-minimal-conf on the cluster; it generates a minimal configuration file. Copy its contents to /etc/ceph/ceph.conf on the client so that the client can find the cluster's MON addresses and FSID.
Next, we create a user on the cluster for the client.
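For example:

```shell
# Create client.abc with read/write access to the root of CephFS xxx
ceph fs authorize xxx client.abc / rw
```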
This creates a user, abc, with read and write access to CephFS xxx. Save the output to /etc/ceph/ceph.client.abc.keyring on the client.
Mount
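A sketch using the newer mount.ceph device syntax, with &lt;fsid&gt; standing for the cluster ID:

```shell
# Mount the root of CephFS xxx at MOUNTPOINT as client.abc
mount -t ceph abc@<fsid>.xxx=/ MOUNTPOINT
```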
This logs in as user client.abc and mounts the / directory of CephFS xxx at MOUNTPOINT. The mount helper reads the configuration under /etc/ceph, so anything already written in ceph.conf can be left off the command line.
Note that fsid refers not to the CephFS ID but to the cluster ID, as shown by ceph fsid.
Quotas
CephFS can place limits on directories.
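Quotas are set as extended attributes on a directory; a sketch, with DIR standing for the directory path:

```shell
# Limit the total size (in bytes) and the number of files under DIR;
# a value of 0 removes the limit
setfattr -n ceph.quota.max_bytes -v LIMIT DIR
setfattr -n ceph.quota.max_files -v LIMIT DIR
```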
This limits the directory size and the number of files; setting LIMIT to 0 removes the limit.
NFS
You can share out CephFS or RGW by way of NFS.
Start the NFS service.
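For example (HOST is a placeholder for the node that should run the NFS daemon):

```shell
# Create an NFS cluster named xxx with its daemon placed on HOST
ceph nfs cluster create xxx "HOST"
```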
This runs an NFS server on the given host, with xxx as the name of the NFS cluster.
View NFS cluster information.
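For example:

```shell
# Show the address and backend details of NFS cluster xxx
ceph nfs cluster info xxx
```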
List all NFS clusters.
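For instance:

```shell
# List all NFS clusters
ceph nfs cluster ls
```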
NFS Export CephFS.
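A sketch; /some/dir, the filesystem name yyy, and the client address are illustrative, and the exact flag spellings vary somewhat between Ceph releases:

```shell
# Export /some/dir of CephFS yyy through NFS cluster xxx under the
# pseudo path /a/b/c, restricted to the given client network
ceph nfs export create cephfs --cluster-id xxx --fsname yyy \
    --pseudo-path /a/b/c --path /some/dir --client_addr 192.168.0.0/24
```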
This exports a directory within CephFS; clients access it over NFS under the pseudo path /a/b/c. Access can also be restricted to specific client IPs.
This allows you to mount on the client side.
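For example, on the client (SERVER stands for the address reported by ceph nfs cluster info):

```shell
mount -t nfs -o nfsvers=4.1 SERVER:/a/b/c MOUNTPOINT
```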
RadosGW
RGW provides S3 or OpenStack Swift-compatible object storage APIs.
TODO
orchestrator
Since Ceph needs to run multiple daemons, all in different containers, a system-level orchestrator is typically run to add and manage these containers.
View the current orchestrator.
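For example:

```shell
# Show which orchestrator backend is in use and whether it is available
ceph orch status
```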
The most common one is cephadm; if the cluster was installed with cephadm, then cephadm is also the orchestrator.
The services being orchestrated.
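For example:

```shell
# List orchestrated services and how many of their daemons are running
ceph orch ls
```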
The containers being orchestrated.
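For instance:

```shell
# List the individual daemons (containers) and the hosts they run on
ceph orch ps
```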
The orchestrated hosts.
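For example:

```shell
# List hosts known to the orchestrator
ceph orch host ls
```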
Update
Use the container orchestrator to upgrade.
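A sketch; &lt;version&gt; is a placeholder for the target release:

```shell
# Upgrade all daemons to a specific release
ceph orch upgrade start --ceph-version <version>

# Or specify the container image explicitly
ceph orch upgrade start --image quay.io/ceph/ceph:<version>
```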
If you can't find the image on Docker Hub, pull it from quay.io.
Check the status of the upgrade.
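For example:

```shell
# Show progress of the running upgrade
ceph orch upgrade status
```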
View cephadm logs.
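For example:

```shell
# Show recent cephadm entries from the cluster log
ceph log last cephadm
```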