Manual: https://docs.ceph.com/docs/luminous/start/intro/
Install Ceph on every node
Ceph Manager
Manages the cluster
Ceph Monitors
Cluster monitoring; clients contact them to access data
Usable space calculation (erasure coded)
nOSD * k / (k+m) * OSD size
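Worked example with assumed numbers: 6 OSDs of 4 TB each and the k=4, m=2 profile used later in these notes gives 6 * 4/(4+2) * 4 TB = 16 TB usable.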
Recommended PG count
https://docs.ceph.com/docs/mimic/rados/operations/placement-groups/
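A commonly cited rule of thumb (verify against the linked page or the pgcalc tool for your version): total PGs ≈ (number of OSDs * 100) / replica count, rounded to a nearby power of two. For example, 9 OSDs with 3-way replication: 9 * 100 / 3 = 300, so pg_num 256 or 512.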
INSTALL CEPH (QUICK)
https://ceph.com/install/
Check cluster status
ceph status
List OSDs
ceph osd tree
=================================
Remove an OSD
Destroy it first: ceph osd destroy osd.4 --force
ceph osd purge osd.4 --force
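A fuller removal sequence looks like this (a sketch for osd.4; confirm the flags against your Ceph release):
ceph osd out osd.4
systemctl stop ceph-osd@4
ceph osd purge osd.4 --yes-i-really-mean-it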
The CRUSH map defines where devices are located.
The hierarchy can be split into e.g. row=a, rack=a2, chassis=a2a, host=a2a1.
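Buckets can be created and hosts moved into them from the CLI; a sketch using the names above (a and a2 are illustrative, pve2 is the host from these notes):
ceph osd crush add-bucket a row
ceph osd crush add-bucket a2 rack
ceph osd crush move a2 row=a
ceph osd crush move pve2 rack=a2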
Using pools
https://docs.ceph.com/docs/jewel/rados/operations/pools/
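A few commonly used pool commands (testpool is just an example name):
ceph osd pool ls detail
ceph osd pool create testpool 64
ceph osd pool set testpool size 3
ceph osd pool get testpool pg_num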
Remove CephFS
Stop all ceph-mds daemons first
systemctl stop ceph-mds.target
killall ceph-mds
or stop them from the web UI
Check CephFS info
root@pve2:~# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
Remove CephFS
ceph fs rm {fs-name} --yes-i-really-mean-it
ceph fs rm cephfs --yes-i-really-mean-it
Remove the pools (as a safety check, the pool name has to be typed twice)
ceph osd pool delete {pool-name} {pool-name} --yes-i-really-really-mean-it
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
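Note: the monitors refuse pool deletion unless it is explicitly allowed. Depending on the Ceph release, one of these enables it temporarily:
ceph config set mon mon_allow_pool_delete true
ceph tell mon.* injectargs '--mon-allow-pool-delete=true'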
=================================
CRUSH MAP
GET A CRUSH MAP
ceph osd getcrushmap -o {compiled-crushmap-filename}
ceph osd getcrushmap -o map
DECOMPILE A CRUSH MAP
crushtool --decompile {compiled-crushmap-filename} -o {decompiled-crushmap-filename}
crushtool --decompile map -o map.txt
COMPILE A CRUSH MAP
crushtool --compile {decompiled-crushmap-filename} -o {compiled-crushmap-filename}
crushtool --compile map.txt -o map
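The compiled map can be sanity-checked before injecting it (a sketch; --rule 1 and --num-rep 6 match the erasure_rule further down in these notes):
crushtool -i map --test --show-mappings --rule 1 --num-rep 6 --min-x 0 --max-x 9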
ceph osd setcrushmap -i map
=================================
ERASURE CODE PROFILES
Query
ceph osd erasure-code-profile get default
Set
ceph osd erasure-code-profile set myprofile k=4 m=2
k = data chunks
m = coding (parity) chunks
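For k=4, m=2 the raw-to-usable overhead is (k+m)/k = 6/4 = 1.5x, and any 2 of the 6 chunks can be lost without losing data.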
GET A CRUSH MAP
ceph osd getcrushmap -o {compiled-crushmap-filename}
DECOMPILE A CRUSH MAP
crushtool -d {compiled-crushmap-filename} -o {decompiled-crushmap-filename}
Edit the decompiled map and add a rule
rule erasure_rule {
    id 1
    type erasure
    # always split into 6 chunks (k+m = 6)
    min_size 6
    max_size 6
    # store across 3 hosts, 2 chunks per host
    step take default
    step choose indep 3 type host
    step choose indep 2 type osd
    step emit
}
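Quick check for this rule with the k=4, m=2 profile: chunks per host = (k+m) / hosts = 6 / 3 = 2 = m, so the pool survives the loss of one whole host, but not a host plus one more OSD.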
https://docs.ceph.com/docs/mimic/rados/operations/erasure-code/
=================================
CREATE CEPHFS
Create two pools
$ ceph osd pool create cephfs_data <pg_num>
$ ceph osd pool create cephfs_metadata <pg_num>
ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure \
    [erasure-code-profile] [crush-ruleset-name]
ceph osd pool create ec42 64 erasure myprofile erasure_rule
ceph osd pool set ec42 allow_ec_overwrites true
ceph osd pool application enable ec42 rbd
ceph osd pool create rbd_ec 64
ceph osd pool application enable rbd_ec rbd
rbd create rbd_ec/test_ec --size 100G --data-pool ec42
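To confirm the image really puts its data in the EC pool (field names may vary slightly by version):
rbd info rbd_ec/test_ec    # look for data_pool: ec42 in the output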
Append the storage definition to /etc/pve/storage.cfg (property lines must be indented):
echo "rbd: rbd_ec" >> /etc/pve/storage.cfg
echo "        monhost 192.168.43.21;192.168.43.22;192.168.43.23" >> /etc/pve/storage.cfg
echo "        content images,rootdir" >> /etc/pve/storage.cfg
echo "        krbd 1" >> /etc/pve/storage.cfg
echo "        pool rbd_ec" >> /etc/pve/storage.cfg
echo "        username admin" >> /etc/pve/storage.cfg
rbd: rbd_ec
        monhost 192.168.43.21;192.168.43.22;192.168.43.23
        content images,rootdir
        krbd 1
        pool rbd_ec
        username admin
cp /etc/pve/priv/ceph.client.admin.keyring /etc/pve/priv/ceph/rbd_ec.keyring;
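With the keyring in place, the storage can be checked from the Proxmox side (a sketch; pvesm is the standard PVE storage CLI):
pvesm status
pvesm list rbd_ec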
rbd ls ec42 -l
From the Proxmox discussion linked below (asked whether the PVE GUI supports erasure-coded pools):
"For now no, but we will evaluate an addition. But try this setting, put it in the ceph.conf. It should use the specified pool by default for rbd data."
[client]
rbd default data pool =
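With the pools used in these notes, that ceph.conf setting would be:
[client]
rbd default data pool = ec42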
https://k2r2bai.com/2015/11/21/ceph/cephfs/
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_file_system_guide/deploying-ceph-file-systems
https://bugzilla.proxmox.com/show_bug.cgi?id=1816
https://forum.proxmox.com/threads/created-an-erasure-code-pool-in-ceph-but-cannot-work-with-it-in-proxmox.45099/
https://ceph.com/planet/erasure-code-on-small-clusters/