[Ceph] Ceph Installation - jewel (latest 10.2.11) - I



Hello.

This is 꿈꾸는여행자.

This time I would like to cover Ceph.

I will first put the existing Ceph material together and then verify it against more recent Ceph releases.

> Next

 

1. Preflight

1.1. CEPH DEPLOY SETUP

1.2. CEPH NODE SETUP

                        

 

Table of Contents

 

1. Preflight

1.1. CEPH DEPLOY SETUP

1.1.1. Checking Tested Ceph Versions

1.1.2. RED HAT PACKAGE MANAGER (RPM)

1.2. CEPH NODE SETUP

1.2.1. INSTALL NTP

1.2.2. INSTALL SSH SERVER

1.2.3. CREATE A CEPH DEPLOY USER

1.2.3.1. Ceph Deploy User Overview

1.2.3.2. Create Deploy User

1.2.4. ENABLE PASSWORD-LESS SSH

1.2.5. ENABLE NETWORKING ON BOOTUP

1.2.6. OPEN REQUIRED PORTS

1.2.6.1. Start firewalld Service

1.2.6.2. Open ports on Calamari node [not applicable here]

1.2.6.3. Open ports on ALL Ceph monitor nodes

1.2.6.4. Open ports for Ceph osd nodes

1.2.7. SELINUX

1.2.8. PRIORITIES/PREFERENCES [Options]

1.2.9. Adjust PID Count

1.2.10. Adjust Netfilter conntrack Limits 

 

 

The details are as follows.

 

Thank you.

 

> Below

 


1. Preflight



http://docs.ceph.com/docs/master/start/quick-start-preflight/



* PREFLIGHT CHECKLIST

   * Thank you for trying Ceph! 

   * We recommend setting up a ceph-deploy admin node and a 3-node Ceph Storage Cluster to explore the basics of Ceph. 

      * A typical test setup is one admin (deploy) node plus three Ceph nodes.

   * This Preflight Checklist will help you prepare a ceph-deploy admin node and three Ceph Nodes (or virtual machines) that will host your Ceph Storage Cluster. Before proceeding any further, see OS Recommendations to verify that you have a supported distribution and version of Linux. When you use a single Linux distribution and version across the cluster, it will make it easier for you to troubleshoot issues that arise in production.

   * In the descriptions below, Node refers to a single machine.



  




1.1. CEPH DEPLOY SETUP



http://docs.ceph.com/docs/master/start/quick-start-preflight/



[Run on the admin-node]



Add Ceph repositories to the ceph-deploy admin node. Then, install ceph-deploy.

Register the repository and install ceph-deploy.



1.1.1. Checking Tested Ceph Versions



http://docs.ceph.com/docs/master/start/os-recommendations/



INFERNALIS (9.2.Z) AND JEWEL (10.2.Z)

Distro  | Release | Code Name   | Kernel       | Notes | Testing
--------|---------|-------------|--------------|-------|--------
CentOS  | 7       | N/A         | linux-3.10.0 |       | B, I, C
Debian  | 8.0     | Jessie      | linux-3.16.0 | 1, 2  | B, I
Fedora  | 22      | N/A         | linux-3.14.0 |       | B, I
RHEL    | 7       | Maipo       | linux-3.10.0 |       | B, I
Ubuntu  | 14.04   | Trusty Tahr | linux-3.13.0 |       | B, I, C


HAMMER (0.94)

Distro  | Release | Code Name        | Kernel       | Notes | Testing
--------|---------|------------------|--------------|-------|--------
CentOS  | 6       | N/A              | linux-2.6.32 | 1, 2  |
CentOS  | 7       | N/A              | linux-3.10.0 |       | B, I, C
Debian  | 7.0     | Wheezy           | linux-3.2.0  | 1, 2  |
Ubuntu  | 12.04   | Precise Pangolin | linux-3.2.0  | 1, 2  |
Ubuntu  | 14.04   | Trusty Tahr      | linux-3.13.0 |       | B, I, C




* NOTES

   * 1: The default kernel has an older version of btrfs that we do not recommend for ceph-osd storage nodes. Upgrade to a recommended kernel or use XFS.

   * 2: The default kernel has an old Ceph client that we do not recommend for kernel client (kernel RBD or the Ceph file system). Upgrade to a recommended kernel.

* TESTING

   * B: We build release packages for this platform. For some of these platforms, we may also continuously build all ceph branches and exercise basic unit tests.

   * I: We do basic installation and functionality tests of releases on this platform.

   * C: We run a comprehensive functional, regression, and stress test suite on this platform on a continuous basis. This includes development branches, pre-release, and released code.

* Version

   * Red Hat Ceph Storage v1.3

      * Based on Ceph Hammer

   * Red Hat Ceph Storage v2.0

      * Based on Ceph Jewel





1.1.2. RED HAT PACKAGE MANAGER (RPM)



[For admin-node]



For CentOS 7, perform the following steps:



1. Install and enable the Extra Packages for Enterprise Linux (EPEL) repository. Please see the EPEL wiki page for more information.

2. On CentOS, you can execute the following command chain:

   1. The command chain below installs the EPEL repository and the required utilities (a sanity check follows the command).

# sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
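As an optional sanity check (my addition, not part of the official steps), confirm that the EPEL repository is now enabled:

# yum repolist enabled | grep -i epel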

3. Add the package to your repository. Open a text editor and create a Yellowdog Updater, Modified (YUM) entry. Use the file path /etc/yum.repos.d/ceph.repo. For example:

   1. Register the Ceph repository.

# sudo vi /etc/yum.repos.d/ceph-noarch.repo

Template (replace the placeholders):

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

File actually used here (jewel on CentOS 7, el7):

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

   2. Description
      1. Paste the example code above. Replace {ceph-release} with the recent major release of Ceph (e.g., jewel). Replace {distro} with your Linux distribution (e.g., el7 for CentOS 7). Finally, save the contents to the /etc/yum.repos.d/ceph.repo file.
      2. When registering the ceph.repo entry, check the Linux distribution version and the ceph-release version before writing it.
         1. http://download.ceph.com/

4. Update your repository and install ceph-deploy:

# sudo yum update
# sudo yum install -y ceph-deploy
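As an optional check (my addition), confirm that ceph-deploy is installed and print its version:

# ceph-deploy --version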


1.2. CEPH NODE SETUP



The admin node must have password-less SSH access to Ceph nodes. When ceph-deploy logs in to a Ceph node as a user, that particular user must have passwordless sudo privileges.



        The admin node must be able to reach each deploy target node without a password.



* Set hostname

   * Change the hostname on each node for convenience (a non-interactive alternative is sketched below).

   * # nmtui

10.0.0.76       admin-node
10.0.0.70       node1
10.0.0.71       node2
10.0.0.72       node3
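A minimal non-interactive alternative to nmtui (my addition, assuming the node names and addresses above); run the matching command on each node:

# hostnamectl set-hostname admin-node    # on 10.0.0.76
# hostnamectl set-hostname node1         # on 10.0.0.70
# hostnamectl set-hostname node2         # on 10.0.0.71
# hostnamectl set-hostname node3         # on 10.0.0.72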


1.2.1. INSTALL NTP



[For ALL Ceph Nodes]



We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift. See Clock for details.



On CentOS / RHEL, execute:



# sudo yum install -y ntp ntpdate ntp-doc


Ensure that you enable the NTP service. Ensure that each Ceph Node uses the same NTP time server. See NTP for details.



        All Ceph nodes must point to the same NTP server.
        Edit the NTP server settings accordingly (a sample snippet follows below).
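A minimal sketch of the relevant /etc/ntp.conf lines (my addition, assuming the default CentOS pool servers; if you have an internal NTP server, use it instead and keep the entries identical on every node):

# sudo vi /etc/ntp.conf

server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst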



1. Install NTP

# yum install -y ntp

2. Make sure NTP starts on reboot.

# systemctl enable ntpd.service

3. Start the NTP service and ensure it's running.

# systemctl start ntpd

4. Then, check its status.

# systemctl status ntpd
# systemctl list-unit-files | grep ntp

   Note: take care that ntpdate.service is not used together with ntpd.service.

5. Ensure that NTP is synchronizing Ceph monitor node clocks properly.

# ntpq -p




1.2.2. INSTALL SSH SERVER



For ALL Ceph Nodes perform the following steps:

1. Install an SSH server (if necessary) on each Ceph Node:

# sudo yum install -y openssh-server

2. Ensure the SSH server is running on ALL Ceph Nodes.

# systemctl status sshd.service


1.2.3. CREATE A CEPH DEPLOY USER



For ALL Ceph Nodes

1.2.3.1. Ceph Deploy User Overview



The ceph-deploy utility must login to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.

        ceph-deploy must be able to log in as a user with sudo privileges without being prompted for a password.



Recent versions of ceph-deploy support a --username option so you can specify any user that has password-less sudo (including root, although this is NOT recommended). To use ceph-deploy --username {username}, the user you specify must have password-less SSH access to the Ceph node, as ceph-deploy will not prompt you for a password.

        The ceph-deploy command supports password-less SSH through its --username option (see the example below).
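For illustration only (my addition, assuming the cephdeploy user and the node1 host used later in this post), the option is passed like this:

$ ceph-deploy --username cephdeploy new node1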



We recommend creating a specific user for ceph-deploy on ALL Ceph nodes in the cluster. Please do NOT use “ceph” as the user name. A uniform user name across the cluster may improve ease of use (not required), but you should avoid obvious user names, because hackers typically use them with brute force hacks (e.g., root, admin,{productname}). The following procedure, substituting {username} for the user name you define, describes how to create a user with passwordless sudo.

        Use a dedicated user name other than root, admin, or ceph, and run with passwordless sudo.





Note 

Starting with the Infernalis release the “ceph” user name is reserved for the Ceph daemons. If the “ceph” user already exists on the Ceph nodes, removing the user must be done before attempting an upgrade.



1.2.3.2. Create Deploy User



On CentOS and RHEL, you may receive an error while trying to execute ceph-deploy commands. If requiretty is set by default on your Ceph nodes, disable it by executing sudo visudo and locate the Defaults requiretty setting. Change it to Defaults:ceph!requiretty or comment it out to ensure that ceph-deploy can connect using the user you created with Create a Ceph Deploy User.

        When registering the user under /etc/sudoers.d, include the requiretty setting as well.



1. Create a new user on each Ceph Node.

   1. Create the new user on every Ceph node. The {username} lines are the template; the cephdeploy lines are what was actually run here.

$ ssh root@ceph-server

$ sudo useradd -d /home/{username} -m {username}
$ sudo passwd {username}

$ sudo useradd -d /home/cephdeploy -m cephdeploy
$ sudo passwd cephdeploy

   2. Node list

      1. admin-node

      2. node1

      3. node2

      4. node3

2. For the new user you added to each Ceph node, ensure that the user has sudo privileges.

   1. The block below with {username} is the template; the cephdeploy block underneath is what was actually run. Run these as root, since writing into /etc/sudoers.d requires root privileges.

$ cat << EOF >/etc/sudoers.d/{username}

{username} ALL = (root) NOPASSWD:ALL

Defaults:{username} !requiretty

EOF



$ sudo chmod 0440 /etc/sudoers.d/{username}

$ cat << EOF >/etc/sudoers.d/cephdeploy

cephdeploy ALL = (root) NOPASSWD:ALL

Defaults:cephdeploy !requiretty

EOF



$ sudo chmod 0440 /etc/sudoers.d/cephdeploy

3. Edit the /etc/hosts file
   1. Edit the hosts file so it matches the IP addresses assigned to the admin-node and the other nodes.

# sudo vi /etc/hosts

10.0.0.76       admin-node
10.0.0.70       node1
10.0.0.71       node2
10.0.0.72       node3

   2. Distribute the /etc/hosts file to the other nodes.

[root@admin-node ~]# scp /etc/hosts root@node1:/etc/hosts
[root@admin-node ~]# scp /etc/hosts root@node2:/etc/hosts
[root@admin-node ~]# scp /etc/hosts root@node3:/etc/hosts

   3. Node list
      1. admin-node
      2. node1
      3. node2
      4. node3

1.2.4. ENABLE PASSWORD-LESS SSH



For admin-node



Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.

        Generate SSH keys so that no password prompt is needed.



1. Generate the SSH keys, but do not use sudo or the root user. Leave the passphrase empty:

   1. Do not use sudo or the root user.
   2. Generate the key with ssh-keygen as the cephdeploy user on the admin node.

[root@admin-node ~]# su - cephdeploy

[cephdeploy@admin-node ~]$ ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/cephdeploy/.ssh/id_rsa): [Enter]

Created directory '/home/cephdeploy/.ssh'.

Enter passphrase (empty for no passphrase): [Enter]

Enter same passphrase again: [Enter]

Your identification has been saved in /home/cephdeploy/.ssh/id_rsa.

Your public key has been saved in /home/cephdeploy/.ssh/id_rsa.pub.

The key fingerprint is:

2. Copy the key to each Ceph Node, replacing {username} with the user name you created with Create a Ceph Deploy User.
   1. Distribute the SSH key, substituting the user name you created.
   2. Run as the cephdeploy user on the admin-node.

$ ssh-copy-id {username}@node1

$ ssh-copy-id {username}@node2

$ ssh-copy-id {username}@node3

[cephdeploy@admin-node ~]$ ssh-copy-id cephdeploy@admin-node

[cephdeploy@admin-node ~]$ ssh-copy-id cephdeploy@node1

[cephdeploy@admin-node ~]$ ssh-copy-id cephdeploy@node2

[cephdeploy@admin-node ~]$ ssh-copy-id cephdeploy@node3

   3. Login test

[cephdeploy@admin-node ~]$ ssh -l cephdeploy node1

[cephdeploy@node1 ~]$ 

   4. Description
      1. Confirm that you can log in without being prompted for a password.

3. (Recommended) Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username {username} each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace {username} with the user name you created:

   1. Edit ~/.ssh/config on the admin-node
      1. Configure this so that a specific user name does not have to be given each time ceph-deploy is run.
      2. Log in as the cephdeploy user on the admin-node, then run the following.

   2. Check the current user and edit the file. The first set of Host blocks is the template with {username}; the second set is the file actually used here (cephdeploy).

[cephdeploy@admin-node ~]$ whoami
cephdeploy

[cephdeploy@admin-node ~]$ vi ~/.ssh/config

Host node1
  Hostname node1
  User {username}
Host node2
  Hostname node2
  User {username}
Host node3
  Hostname node3
  User {username}

Host node1
  Hostname node1
  User cephdeploy
Host node2
  Hostname node2
  User cephdeploy
Host node3
  Hostname node3
  User cephdeploy

   3. After editing the ~/.ssh/config file on the admin node, execute the following to ensure the permissions are correct:

[cephdeploy@admin-node ~]$ chmod 600 ~/.ssh/config


1.2.5. ENABLE NETWORKING ON BOOTUP



For ALL Ceph Nodes



Ceph OSDs peer with each other and report to Ceph Monitors over the network. If networking is off by default, the Ceph cluster cannot come online during bootup until you enable networking.

The default configuration on some distributions (e.g., CentOS) has the networking interface(s) off by default. Ensure that, during boot up, your network interface(s) turn(s) on so that your Ceph daemons can communicate over the network. For example, on Red Hat and CentOS, navigate to /etc/sysconfig/network-scripts and ensure that the ifcfg-{iface} file has ONBOOT set to yes.

        Configure the network interfaces to start at boot (ONBOOT=yes).



Check onboot



# grep "ONBOOT"  /etc/sysconfig/network-scripts/ifcfg-*

/etc/sysconfig/network-scripts/ifcfg-enp0s3:ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-enp0s8:ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-enp0s9:ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-lo:ONBOOT=yes
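If any interface still shows ONBOOT=no, a minimal sketch to fix it (my addition, assuming the interface name enp0s3; adjust the file name to match your interface):

# sudo sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-enp0s3
# sudo systemctl restart network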


1.2.6. OPEN REQUIRED PORTS



Open the required ports, including the firewalld configuration.



Ceph Monitors communicate using port 6789 by default. Ceph OSDs communicate in a port range of 6800:7300 by default. See the Network Configuration Reference for details. Ceph OSDs can use multiple network connections to communicate with clients, monitors, other OSDs for replication, and other OSDs for heartbeats.

        Default Ceph Monitor port: 6789
        Ceph OSD port range: 6800:7300



On some distributions (e.g., RHEL), the default firewall configuration is fairly strict. You may need to adjust your firewall settings to allow inbound requests so that clients in your network can communicate with daemons on your Ceph nodes.

        Firewall configuration is required.



For firewalld on RHEL 7, add port 6789 for Ceph Monitor nodes and ports 6800:7300 for Ceph OSDs to the public zone and ensure that you make the setting permanent so that it is enabled on reboot. For example:



1.2.6.1. Start firewalld Service

[For all Nodes]



Start firewalld and ensure it is running.



# systemctl start firewalld 

# systemctl enable firewalld

# systemctl list-unit-files | grep -i firewalld

# systemctl status firewalld.service


1.2.6.2. Open ports on Calamari node [not applicable here]



[On Calamari node]



You MUST open ports 80, 2003, and 4505-4506 on your Calamari node. First, open the port to ensure it opens immediately at runtime. Then, rerun the command with --permanent to ensure that the port opens on reboot.



# firewall-cmd --zone=public --add-port=80/tcp

# firewall-cmd --zone=public --add-port=80/tcp --permanent

# firewall-cmd --zone=public --add-port=2003/tcp

# firewall-cmd --zone=public --add-port=2003/tcp --permanent

# firewall-cmd --zone=public --add-port=4505-4506/tcp

# firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent


1.2.6.3. Open ports on ALL Ceph monitor nodes



[On Ceph monitor nodes]



You MUST open port 6789 on your public network on ALL Ceph monitor nodes



# firewall-cmd --zone=public --add-port=6789/tcp

# firewall-cmd --zone=public --add-port=6789/tcp --permanent


1.2.6.4. Open ports for Ceph osd nodes



[On Ceph osd nodes]



Finally, you MUST also open ports for OSD traffic (6800-7300). Each OSD on each Ceph node needs a few ports: one for talking to clients and monitors (public network); one for sending data to other OSDs (cluster network, if available; otherwise, public network); and one for heartbeating (cluster network, if available; otherwise, public network). To get started quickly, open up the default port range. For example:



# firewall-cmd --zone=public --add-port=6800-7300/tcp

# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
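An optional verification step (my addition) to reload firewalld and list the ports now open in the public zone:

# firewall-cmd --reload
# firewall-cmd --zone=public --list-ports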


For additional details on firewalld, see Using Firewalls.



1.2.7. SELINUX



For ALL Ceph Nodes



On CentOS and RHEL, SELinux is set to Enforcing by default. To streamline your installation, we recommend setting SELinux to Permissive or disabling it entirely and ensuring that your installation and cluster are working properly before hardening your configuration. To set SELinux to Permissive, execute the following:



To configure SELinux persistently (recommended if SELinux is an issue), modify the configuration file at /etc/selinux/config. Note that setenforce 0 only switches the running system to Permissive, while the sed command below disables SELinux entirely from the next boot.



# sudo setenforce 0

# sudo sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
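To confirm the current SELinux mode after the change (optional check, my addition):

# getenforce
# sestatus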


1.2.8. PRIORITIES/PREFERENCES [Options]



For ALL Ceph Nodes



Whether this modification is required still needs to be confirmed (Red Hat).



Ensure that your package manager has priority/preferences packages installed and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to enable optional repositories.



# sudo yum install -y yum-plugin-priorities
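A quick check (my addition, assuming the default plugin configuration path on CentOS 7) that the priorities plugin is installed and enabled; it should report enabled = 1:

# cat /etc/yum/pluginconf.d/priorities.conf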


1.2.9. Adjust PID Count



[On Ceph osd nodes]


Hosts with high numbers of OSDs (more than 12) may spawn a lot of threads, especially during recovery and re-balancing. The standard RHEL 7 kernel defaults to a relatively small maximum number of threads (32768). Check your default settings to see if they are suitable.



# cat /proc/sys/kernel/pid_max


Consider setting kernel.pid_max to a higher number of threads. The theoretical maximum is 4,194,303 threads. For example, you could add the following to the /etc/sysctl.conf file to set it to the maximum:

        Add the following entry to the configuration file.

# vi /etc/sysctl.conf

kernel.pid_max = 4194303


To see the changes you made without a reboot, execute:



# sysctl -p


To verify the changes, execute:



# sysctl -a | grep kernel.pid_max

kernel.pid_max = 4194303


1.2.10. Adjust Netfilter conntrack Limits



[On OSD and Monitor nodes]



https://access.redhat.com/documentation/en/red-hat-ceph-storage/version-1.3/installation-guide-for-red-hat-enterprise-linux/#adjust_netfilter_conntrack_limits



* Note

   * The method for calculating the nf_conntrack values is still being checked.



When using a firewall and running several OSDs on a single host, busy clusters might create a lot of network connections and overflow the kernel nf_conntrack table on the OSD and monitor hosts.



To find the current values, execute the following commands:



# cat /proc/sys/net/netfilter/nf_conntrack_buckets



128000



# cat /proc/sys/net/netfilter/nf_conntrack_max


The nf_conntrack_max value defaults to the nf_conntrack_buckets value multiplied by 8.

        nf_conntrack_max / nf_conntrack_buckets ratio; on some kernels the multiplier may be 4 instead of 8 (a quick check is sketched below).
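A small sketch (my addition) to compute the multiplier actually in effect on a node:

# echo $(( $(cat /proc/sys/net/netfilter/nf_conntrack_max) / $(cat /proc/sys/net/netfilter/nf_conntrack_buckets) ))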



Consider setting nf_conntrack_buckets to a higher number on the OSD and monitor hosts. To do so, create a new /etc/modprobe.d/ceph.conf file with the following content:



Where <size> specifies the new size of the nf_conntrack_buckets value. For example:

        Enter the nf_conntrack_buckets value as <size>.



# sudo vi /etc/modprobe.d/ceph.conf

options nf_conntrack hashsize=<size>

For example:

options nf_conntrack hashsize=128000


Having this option specified loads the nf_conntrack module with a maximum table size of 1024000 (128000 * 8).



To see the changes you made without a reboot, execute the following commands as root:



# systemctl stop firewalld

# modprobe -rv nf_conntrack

# systemctl start firewalld


* Note

   * If the firewall is not actually in use, the following message appears.
      * This still needs to be verified.

# modprobe -rv nf_conntrack

modprobe: FATAL: Module nf_conntrack is in use.


To verify the changes, execute the following commands as root:



# sysctl -a | grep conntrack_buckets

# sysctl -a | grep conntrack_max


 

