Architecture:
Ceph OSD – data storage
Ceph Monitor
# Ceph OSD (Object Storage Daemon) stores data as objects, manages data replication, recovery, and rebalancing, and reports state information to the Ceph Monitors. It is recommended to use one OSD per physical disk.
# Ceph MON (Monitor) maintains the overall health of the cluster by keeping the cluster map state, including the Monitor map, OSD map, Placement Group (PG) map, and CRUSH map. Monitors receive state information from the other components to maintain these maps and distribute them to the other Monitor and OSD nodes.
# Ceph RGW (Object Gateway / RADOS Gateway) provides a RESTful API compatible with Amazon S3 and OpenStack Swift.
# Ceph RBD (RADOS Block Device) provides block storage to VMs and bare-metal hosts as well as regular clients; supported by OpenStack and CloudStack. Includes enterprise features such as snapshots, thin provisioning, and compression.
# CephFS (Ceph File System) – distributed, POSIX-compliant NAS storage.
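The components above can be inspected from any admin node with the standard ceph/rbd CLI tools; a minimal sketch (real command names, but the output depends entirely on your cluster, and `rbd ls` assumes an RBD pool already exists):

```shell
# Overall cluster health as reported by the Monitors
ceph -s

# Monitor quorum state
ceph mon stat

# OSD layout: the CRUSH hierarchy of hosts and disks
ceph osd tree

# Block images served by RBD (in the default pool)
rbd ls
```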
Two pools (e.g. one for HDDs and another for SSDs):
CEPH: SATA and SSD pools on the same server without editing crushmap
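The linked article describes an older manual approach; assuming Ceph Luminous or newer, where device classes (hdd/ssd) are detected automatically, the same split can be sketched without touching the CRUSH map by hand. Rule names, pool names, and the pg_num of 128 below are example values, not prescriptions:

```shell
# Replicated CRUSH rules restricted to one device class each
# (root "default", failure domain "host")
ceph osd crush rule create-replicated hdd_rule default host hdd
ceph osd crush rule create-replicated ssd_rule default host ssd

# Pools pinned to those rules; choose pg_num to suit the cluster
ceph osd pool create hdd_pool 128 128 replicated hdd_rule
ceph osd pool create ssd_pool 128 128 replicated ssd_rule
```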
Links:
http://docs.ceph.com/docs/master/start/hardware-recommendations/
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
https://pve.proxmox.com/wiki/Ceph_Server#Recommended_hardware