Set up a Ceph cluster manually:
To be run on the first ceph-mon host
Usage: 001_ceph_mon_prepare.sh <cluster_name> <cluster_network> <mons_hostnames> <mons_ips> Example: 001_ceph_mon_prepare.sh ceph-test 10.0.0.0/8 server01,server02 10.0.0.1,10.0.0.2
- cluster-name
- network subnet of cluster
- $(hostname -s) of all monitor hosts - only for initial setup!
- ip-addresses of initial monitor hosts
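001_ceph_mon_prepare.sh is not reproduced here; as a rough sketch (assuming the standard manual bootstrap flow, with the placeholder values from the example above), the first monitor is created along these lines:

  # generate an fsid and the initial monitor keyring; the fsid and the 10.0.0.0/8 subnet
  # would end up in the generated /etc/ceph/ceph-test.conf
  uuidgen
  ceph-authtool --create-keyring /tmp/ceph-test.mon.keyring --gen-key -n mon. --cap mon 'allow *'
  # build the initial monmap from the hostname and ip lists
  monmaptool --create --fsid <fsid> --add server01 10.0.0.1 --add server02 10.0.0.2 /tmp/monmap
  # initialise and start the first monitor
  ceph-mon --cluster ceph-test --mkfs -i server01 --monmap /tmp/monmap --keyring /tmp/ceph-test.mon.keyring
  systemctl start ceph-mon@server01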
To be run on every additional ceph-mon host (including ones added later)
Usage: 002_ceph_mon_add.sh <cluster_name> Example: 002_ceph_mon_add.sh ceph-test
- only the cluster-name is needed
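002_ceph_mon_add.sh is likewise not shown here; a minimal sketch, assuming the usual flow for joining a monitor to an existing cluster (cluster name ceph-test as in the example):

  # fetch the current monmap and the mon. keyring from the running cluster
  ceph --cluster ceph-test mon getmap -o /tmp/monmap
  ceph --cluster ceph-test auth get mon. -o /tmp/mon.keyring
  # initialise the new monitor on this host and start it
  ceph-mon --cluster ceph-test --mkfs -i $(hostname -s) --monmap /tmp/monmap --keyring /tmp/mon.keyring
  systemctl start ceph-mon@$(hostname -s)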
To be run ONCE on every ceph-osd host
Usage: 003_ceph_osd_add_to_bucket.sh <cluster_name> Example: 003_ceph_osd_add_to_bucket.sh ceph-test
- only the cluster-name is needed
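003_ceph_osd_add_to_bucket.sh presumably registers the host in the CRUSH map; roughly (placing it under the default root is an assumption):

  # create a host bucket for this node and hang it under the default root
  ceph --cluster ceph-test osd crush add-bucket $(hostname -s) host
  ceph --cluster ceph-test osd crush move $(hostname -s) root=default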
To be run on every ceph-osd host for every journal-hdd
Usage: 004_ceph_journal_prepare.sh <cluster_name> <hdd_for_ceph_journal> <mount_point_of_journal_hdd> Example: 004_ceph_journal_prepare.sh ceph-test sdc /mnt/sdc
- cluster-name
- which hdd is meant for the journal
- mount point of the journal hdd
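004_ceph_journal_prepare.sh boils down to a partition/format/mount sequence; a sketch with the example values (GPT and xfs are assumptions, not requirements):

  # partition and format the journal disk, then mount it
  parted -s /dev/sdc mklabel gpt
  parted -s /dev/sdc mkpart primary 0% 100%
  mkfs.xfs /dev/sdc1
  mkdir -p /mnt/sdc
  mount /dev/sdc1 /mnt/sdc
  # add a matching /etc/fstab entry so the journal mount survives a reboot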
To be run on every ceph-osd host for every hdd that is supposed to store data
Usage: 005_ceph_osd_add.sh <cluster_name> <hdd_for_ceph_data> <mountpoint_for_ceph_journal> Example: 005_ceph_osd_add.sh ceph-test sdb /mnt/sdc
- cluster-name
- hdd for data storage
- mount point of the journal-hdd (see 004)
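005_ceph_osd_add.sh follows the manual filestore OSD creation steps with an external journal on the mount point from 004; a sketch in which /dev/sdb as the data disk, the crush weight of 1.0 and the journal file name are all placeholders:

  # allocate an osd id and prepare the data disk
  OSD_ID=$(ceph --cluster ceph-test osd create)
  parted -s /dev/sdb mklabel gpt
  parted -s /dev/sdb mkpart primary 0% 100%
  mkfs.xfs /dev/sdb1
  mkdir -p /var/lib/ceph/osd/ceph-test-$OSD_ID
  mount /dev/sdb1 /var/lib/ceph/osd/ceph-test-$OSD_ID
  # initialise the osd with its journal on the mount point prepared in 004
  ceph-osd --cluster ceph-test -i $OSD_ID --mkfs --mkkey --osd-journal /mnt/sdc/journal-$OSD_ID
  ceph --cluster ceph-test auth add osd.$OSD_ID osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-test-$OSD_ID/keyring
  # place the osd under this host's bucket (see 003) and start it
  ceph --cluster ceph-test osd crush add osd.$OSD_ID 1.0 host=$(hostname -s)
  systemctl start ceph-osd@$OSD_ID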
OTHER SCRIPTS
Get the crushmap and decompile it
Usage: $0 <cluster_name> <name_of_crushmap_file> Example: $0 ceph-test crush_map_file
- cluster-name
- name of crushmap_file
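A sketch of what the export step does (the .bin suffix for the compiled map is just a placeholder):

  # export the binary crushmap and decompile it into an editable text file
  ceph --cluster ceph-test osd getcrushmap -o crush_map_file.bin
  crushtool -d crush_map_file.bin -o crush_map_file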
Set crushmap after adjustments
Usage: $0 <cluster_name> <name_of_crushmap_file> Example: $0 ceph-test crush_map_file
- cluster-name
- name of crushmap_file to be applied
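And the reverse direction, again only as a sketch:

  # recompile the edited text file and inject it back into the cluster
  crushtool -c crush_map_file -o crush_map_file.bin
  ceph --cluster ceph-test osd setcrushmap -i crush_map_file.bin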
To be run on the ceph-osd host containing the osd which should be removed
Usage: ceph_remove_osd.sh <cluster_name> <osd_id> Example: ceph_remove_osd.sh ceph-test 1
- cluster-name
- osd number
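ceph_remove_osd.sh most likely wraps the standard removal sequence; a sketch for osd 1 from the example:

  # drain the osd, stop the daemon, then remove it from crush, auth and the osd map
  ceph --cluster ceph-test osd out 1
  systemctl stop ceph-osd@1
  ceph --cluster ceph-test osd crush remove osd.1
  ceph --cluster ceph-test auth del osd.1
  ceph --cluster ceph-test osd rm 1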