In this guide, we deploy Ceph together with OpenStack. Why Ceph storage? Because Ceph is an open-source storage platform that is scale-out, unified, and cost-effective.
Prerequisites
- 6 nodes (3 controller+network, 3 compute+storage)
- 6 subnets (public API, internal API, self-service, provider, Ceph public, Ceph cluster)
- Internet access on each network
- Each node runs Ubuntu 18.04
- Ceph Octopus release
- OpenStack Ussuri release
Host List
# nano /etc/hosts
# vip
10.10.26.100 public.itugasmu.com
10.10.25.100 internal.itugasmu.com
# internal
10.10.25.11 mb-os-controller-01 mb-os-controller-01.internal.itugasmu.com
10.10.25.12 mb-os-controller-02 mb-os-controller-02.internal.itugasmu.com
10.10.25.13 mb-os-controller-03 mb-os-controller-03.internal.itugasmu.com
10.10.25.14 mb-os-compute-01 mb-os-compute-01.internal.itugasmu.com
10.10.25.15 mb-os-compute-02 mb-os-compute-02.internal.itugasmu.com
10.10.25.16 mb-os-compute-03 mb-os-compute-03.internal.itugasmu.com
# public
10.10.26.11 mb-os-controller-01.public.itugasmu.com
10.10.26.12 mb-os-controller-02.public.itugasmu.com
10.10.26.13 mb-os-controller-03.public.itugasmu.com
10.10.26.14 mb-os-compute-01.public.itugasmu.com
10.10.26.15 mb-os-compute-02.public.itugasmu.com
10.10.26.16 mb-os-compute-03.public.itugasmu.com
Update & Upgrade
On all nodes:
# apt update -y; apt upgrade -y
Host Key Checking
On all nodes, automatically accept new host keys to avoid the yes/no prompt:
# vim ~/.ssh/config
Add the following:
Host *
StrictHostKeyChecking accept-new
Generate SSH Keypair
On all nodes:
# ssh-keygen -t rsa
Then append the public key to every other node:
# cat ~/.ssh/id_rsa.pub | ssh user@other-node 'cat >> ~/.ssh/authorized_keys'
OR
# ssh-copy-id user@other-node
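Rather than repeating the copy step by hand for every node, the commands can be generated from the node list in one pass. A minimal sketch (node names taken from the host list above; the root user is an assumption, substitute your own), which prints each command for review before you run it:

```shell
# Build one ssh-copy-id command per node from the host list in this guide.
# Printing them first lets you review (or pipe the output to "sh") before execution.
NODES="mb-os-controller-01 mb-os-controller-02 mb-os-controller-03 \
mb-os-compute-01 mb-os-compute-02 mb-os-compute-03"
CMDS=""
for node in $NODES; do
    CMDS="${CMDS}ssh-copy-id root@${node}
"
done
printf '%s' "$CMDS"
```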
Setup Ceph
1. Download cephadm and make it executable
# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
# chmod +x cephadm
2. Add repository
# echo deb https://download.ceph.com/debian-octopus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
# wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
# apt update
3. Install cephadm
# ./cephadm install
4. Bootstrap the Ceph cluster
# mkdir -p /etc/ceph
# cephadm bootstrap --mon-ip 10.10.29.11 --initial-dashboard-user admin --initial-dashboard-password XXXXX
5. Install ceph-common
# apt install ceph-common
6. Distribute the Ceph SSH key to all hosts
# NodeCeph="mb-os-controller-01 mb-os-controller-02 mb-os-controller-03 mb-os-compute-01 mb-os-compute-02 mb-os-compute-03"
# for node in $NodeCeph; do ssh-copy-id -f -i /etc/ceph/ceph.pub root@${node};done
7. Add the nodes to the orchestrator
# ceph orch host add mb-os-controller-01 10.10.29.11
# ceph orch host add mb-os-controller-02 10.10.29.12
# ceph orch host add mb-os-controller-03 10.10.29.13
# ceph orch host add mb-os-compute-01 10.10.29.14
# ceph orch host add mb-os-compute-02 10.10.29.15
# ceph orch host add mb-os-compute-03 10.10.29.16
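The six add commands above are a fixed host/IP mapping, so they can also be derived from a small table. A sketch (hostnames and IPs as used in this guide) that prints the commands for review before running them on the bootstrap node:

```shell
# Derive the "ceph orch host add" commands from a host/IP table
# instead of typing each one; values are the ones used in this guide.
OUT=""
while read -r host ip; do
    OUT="${OUT}ceph orch host add ${host} ${ip}
"
done <<'EOF'
mb-os-controller-01 10.10.29.11
mb-os-controller-02 10.10.29.12
mb-os-controller-03 10.10.29.13
mb-os-compute-01 10.10.29.14
mb-os-compute-02 10.10.29.15
mb-os-compute-03 10.10.29.16
EOF
printf '%s' "$OUT"
```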
8. Check the orchestrator host list
# ceph orch host ls
9. Add the mon and mgr labels to the controller nodes and the osd label to the compute nodes
# ceph orch host label add mb-os-controller-01 mon
# ceph orch host label add mb-os-controller-01 mgr
# ceph orch host label add mb-os-controller-02 mon
# ceph orch host label add mb-os-controller-02 mgr
# ceph orch host label add mb-os-controller-03 mon
# ceph orch host label add mb-os-controller-03 mgr
# ceph orch host label add mb-os-compute-01 osd
# ceph orch host label add mb-os-compute-02 osd
# ceph orch host label add mb-os-compute-03 osd
10. Check the orchestrator host list again
# ceph orch host ls
11. Create the Ceph cluster service specification
# vim ceph_cluster_conf.yaml
Add the following:
service_type: mon
placement:
  count: 3
  label: mon
---
service_type: mgr
placement:
  count: 3
  label: mgr
---
service_type: osd
service_id: osd_spec_default
placement:
  label: osd
data_devices:
  size: '50G'
12. Apply Configuration
# ceph orch apply -i ceph_cluster_conf.yaml
Then watch the cluster status until it reports HEALTH_OK and all OSDs are up:
# watch ceph -s
13. Copy ceph.conf from the cephadm shell to /etc/ceph/ceph.conf on the host
# cephadm shell cat /etc/ceph/ceph.conf > /etc/ceph/ceph.conf
14. Edit ceph.conf and remove any leading spaces/tabs so each line starts at column one, like this:
# vi /etc/ceph/ceph.conf
<no space/tab in here>[global]
<no space/tab in here>fsid = <fsid ceph>
<no space/tab in here>mon_host = [v2:10.X0.X0.11:3300/0,v1:10.X0.X0.11:6789/0] [v2:10.X0.X0.12:3300/0,v1:10.X0.X0.12:6789/0] [XXXXX]
15. Create the pools
# ceph osd pool create volumes
# ceph osd pool create images
# ceph osd pool create backups
# ceph osd pool create vms
16. Initialize the pools for use by RBD
# rbd pool init volumes
# rbd pool init images
# rbd pool init backups
# rbd pool init vms
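Steps 15 and 16 pair a pool creation with an RBD init for the same four pools, so the paired commands can be produced with one loop. A sketch (pool names from this guide) that prints each pair for review before running on a Ceph node:

```shell
# Each OpenStack pool is created and then initialized for RBD.
# This prints the paired commands for the four pools used in this guide.
POOLS="volumes images backups vms"
PLAN=""
for pool in $POOLS; do
    PLAN="${PLAN}ceph osd pool create ${pool} && rbd pool init ${pool}
"
done
printf '%s' "$PLAN"
```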
17. Create the keyrings
# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.glance.keyring
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=images' -o /etc/ceph/ceph.client.cinder.keyring
# ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rx pool=images' -o /etc/ceph/ceph.client.nova.keyring
# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' -o /etc/ceph/ceph.client.cinder-backup.keyring
Setup OpenStack
1. Install requirements
# apt -y install python3 python3-pip python3-setuptools python3-dev libffi-dev gcc python3-venv
2. Setup Virtual Environment
# python3 -m venv kolla-venv
# source kolla-venv/bin/activate
3. Inside the venv, install the requirements
# pip3 install ansible==2.9.17
# pip3 install kolla-ansible==9.3.1
4. Create the Kolla configuration directory
# mkdir -p /etc/kolla
# chown -R $USER:$USER /etc/kolla
5. Copy the inventory files into your working directory, then copy the example configuration to the Kolla directory
# cp /usr/local/share/kolla-ansible/ansible/inventory/* .
# cp -r /usr/local/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
6. Create the Ansible configuration:
# mkdir /etc/ansible
# nano /etc/ansible/ansible.cfg
Add the following:
[defaults]
host_key_checking=False
pipelining=True
forks=100
interpreter_python=/usr/bin/python3
7. Edit the multinode inventory configuration
[control]
mb-os-controller-0[1:3]
[network]
mb-os-controller-0[1:3]
[compute]
mb-os-compute-0[1:3]
[monitoring]
mb-os-controller-0[1:3]
[storage]
mb-os-controller-0[1:3]
[deployment]
localhost ansible_connection=local
8. Verify the Ansible connection
# ansible -i multinode all -m ping
9. Generate passwords for the OpenStack services
# kolla-genpwd
10. Create or adjust the global configuration.
# nano /etc/kolla/globals.yml
Add the following:
---
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
openstack_release: "ussuri"
kolla_internal_vip_address: "10.10.25.100"
kolla_external_vip_address: "10.10.26.100"
kolla_internal_fqdn: "internal.itugasmu.com"
kolla_external_fqdn: "public.itugasmu.com"
kolla_external_vip_interface: "ens4"
api_interface: "ens3"
tunnel_interface: "ens5"
neutron_external_interface: "ens6"
enable_haproxy: "yes"
neutron_plugin_agent: "ovn"
neutron_ovn_distributed_fip: "yes"
enable_neutron_provider_networks: "yes"
storage_interface: "ens3"
network_interface: "ens3"
## TLS (if not using TLS, comment out the lines below)
kolla_enable_tls_internal: "yes"
kolla_enable_tls_external: "yes"
kolla_copy_ca_into_containers: "yes"
kolla_enable_tls_backend: "yes"
openstack_cacert: "/etc/ssl/certs/ca-certificates.crt"
## openstack service
enable_openstack_core: "yes"
enable_cinder: "yes"
enable_fluentd: "no"
nova_compute_virt_type: "kvm"
## ceph
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
Note: if a step fails with a missing Python module (for example the docker package), reinstall it on the affected node:
apt-get install --reinstall python-docker -y
11. Generate certificates if using TLS; they are written to /etc/kolla/certificates
# kolla-ansible -i multinode certificates
12. Create the Kolla service configuration directories
# mkdir /etc/kolla/config
# mkdir /etc/kolla/config/nova
# mkdir /etc/kolla/config/glance
# mkdir -p /etc/kolla/config/cinder/cinder-volume
# mkdir /etc/kolla/config/cinder/cinder-backup
13. Copy ceph.conf and the Ceph keyrings into the Kolla configuration directories
# cp /etc/ceph/ceph.conf /etc/kolla/config/cinder/
# cp /etc/ceph/ceph.conf /etc/kolla/config/nova/
# cp /etc/ceph/ceph.conf /etc/kolla/config/glance/
# cp /etc/ceph/ceph.client.glance.keyring /etc/kolla/config/glance/
# cp /etc/ceph/ceph.client.nova.keyring /etc/kolla/config/nova/
# cp /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/nova/
# cp /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-volume/
# cp /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-backup/
# cp /etc/ceph/ceph.client.cinder-backup.keyring /etc/kolla/config/cinder/cinder-backup/
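The copy step above is a fixed source-to-destination mapping, so it is easy to miss one keyring. A sketch that rebuilds the cp commands from a table (file names and paths as in this guide) and prints them for review:

```shell
# Print one cp command per (file, target dir) pair used for Kolla's
# external-Ceph integration, as listed in this guide.
PLAN=""
while read -r src dst; do
    PLAN="${PLAN}cp /etc/ceph/${src} /etc/kolla/config/${dst}
"
done <<'EOF'
ceph.conf cinder/
ceph.conf nova/
ceph.conf glance/
ceph.client.glance.keyring glance/
ceph.client.nova.keyring nova/
ceph.client.cinder.keyring nova/
ceph.client.cinder.keyring cinder/cinder-volume/
ceph.client.cinder.keyring cinder/cinder-backup/
ceph.client.cinder-backup.keyring cinder/cinder-backup/
EOF
printf '%s' "$PLAN"
```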
14. Deploy OpenStack
# kolla-ansible -i ./multinode bootstrap-servers
# kolla-ansible -i ./multinode prechecks
# kolla-ansible -i ./multinode deploy
# kolla-ansible -i ./multinode post-deploy
15. Add the CA path to the rc file and append the root CA to ca-certificates
# echo "export OS_CACERT=/etc/ssl/certs/ca-certificates.crt" | tee -a /etc/kolla/admin-openrc.sh
# cat /etc/kolla/certificates/ca/root.crt | sudo tee -a /etc/ssl/certs/ca-certificates.crt
16. Install the OpenStack client and verify that OpenStack is operational
# pip3 install python-openstackclient
# source /etc/kolla/admin-openrc.sh
# openstack service list
# openstack compute service list
# openstack volume service list
# openstack network agent list