[Ceph] Ceph Installation - jewel (latest 10.2.11) - IV


Hello.

This is 꿈꾸는여행자.

This time, I would like to cover Ceph.

I will first build on the existing Ceph material and then verify it against more recent Ceph content.

> Next

 

3. BLOCK DEVICE QUICK START

3.1. INSTALL CEPH

3.2. CONFIGURE A BLOCK DEVICE

                        

Table of Contents

 

3. BLOCK DEVICE QUICK START [Test]

3.0. Prerequisite

3.0.1. Edit /etc/hosts

3.0.2. Set firewall

3.1. INSTALL CEPH

3.1.1. Create Deploy User

3.1.2. Install Ceph

3.1.3. Check ceph

3.2. CONFIGURE A BLOCK DEVICE

3.2.1. Case Ceph Base Docu

3.2.2. Case Redhat Quick start

 

 

The details are as follows.

 

Thank you.

 

> Below

 

 

 

 

 



________________



3. BLOCK DEVICE QUICK START [Test]



Example cases: MariaDB and MongoDB



http://docs.ceph.com/docs/master/start/quick-rbd/



To use this guide, you must have executed the procedures in the Storage Cluster Quick Start guide first. Ensure your Ceph Storage Cluster is in an active + clean state before working with the Ceph Block Device.

* Note 

   * The Ceph Block Device is also known as RBD or RADOS Block Device.
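
A quick way to confirm the active + clean prerequisite is to query the cluster from any node that holds the admin keyring. A minimal check, assuming the cluster from the Storage Cluster Quick Start is already running:

$ ceph health          # expect: HEALTH_OK
$ ceph status          # the pgmap line should report all PGs as active+clean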



  




You may use a virtual machine for your ceph-client node, but do not execute the following procedures on the same physical node as your Ceph Storage Cluster nodes (unless you use a VM). See FAQ for details.



        To access the block device, install ceph-client on the target client node first, then access the cluster from that client.



3.0. Prerequisite

3.0.1. Edit /etc/hosts

* Node list

   * admin-node

   * node1(mon)

   * ceph-client



# vi /etc/hosts

10.0.0.76       admin-node

10.0.0.70       node1

10.0.0.71       node2

10.0.0.72       node3

10.0.0.75       ceph-client
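
A quick sanity check that the new entries resolve, using the example hostnames above:

$ getent hosts node1          # should print: 10.0.0.70 node1
$ ping -c 1 ceph-client       # confirms the client answers under its hostname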


3.0.2. Set firewall





[Ceph monitor port]



You MUST open port 6789 on your public network on ALL Ceph monitor nodes.



# firewall-cmd --zone=public --add-port=6789/tcp

# firewall-cmd --zone=public --add-port=6789/tcp --permanent




[Ceph OSD ports]



Each OSD daemon on a Ceph node needs a few ports within the 6800-7300 range, including one for talking to clients and monitors over the public network.



# firewall-cmd --zone=public --add-port=6800-7300/tcp

# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent


For additional details on firewalld, see Using Firewalls.
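
After opening the ports, the resulting zone configuration can be checked. A small sketch, assuming firewalld is the firewall in use on every node:

# firewall-cmd --zone=public --list-ports      # should include 6789/tcp on monitors and 6800-7300/tcp on OSD nodes
# firewall-cmd --reload                        # reloads the permanent rules into the running firewall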



3.1. INSTALL CEPH

3.1.1. Create Deploy User

* Create a new user on each Ceph Node.

   * Register a new user on every Ceph node.

$ ssh root@ceph-server

$ sudo useradd -d /home/{username} -m {username}

$ sudo passwd {username}

$ ssh root@ceph-server

$ sudo useradd -d /home/cephdeploy -m cephdeploy

$ sudo passwd cephdeploy

   * Node list

      1. ceph-client

* For the new user you added to each Ceph node, ensure that the user has sudo privileges.

   * The generic {username} block is the example from the documentation; the cephdeploy block is the actual run.

$ cat << EOF >/etc/sudoers.d/{username}

{username} ALL = (root) NOPASSWD:ALL

Defaults:{username} !requiretty

EOF



$ sudo chmod 0440 /etc/sudoers.d/{username}

$ cat << EOF >/etc/sudoers.d/cephdeploy

cephdeploy ALL = (root) NOPASSWD:ALL

Defaults:cephdeploy !requiretty

EOF



$ sudo chmod 0440 /etc/sudoers.d/cephdeploy
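
To confirm that the sudoers drop-in parses and that the new user can escalate without a password, the following checks can be run. A minimal sketch, assuming the cephdeploy user created above:

# visudo -cf /etc/sudoers.d/cephdeploy     # syntax check; should report the file parses OK
# su - cephdeploy -c 'sudo whoami'         # should print "root" without asking for a password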



3.1.2. Install Ceph

1. Note

   1. A user account for deployment must be added before installation (see 3.1.1).

2. Verify that you have an appropriate version of the Linux kernel. See OS Recommendations for details.

   1. lsb_release -a

uname -r

3. On the admin node, use ceph-deploy to install Ceph on your ceph-client node.

   1. ceph-deploy install ceph-client

# su - cephdeploy

$ cd my-cluster

$ ceph-deploy install ceph-client

4. On the admin node, use ceph-deploy to copy the Ceph configuration file and the ceph.client.admin.keyring to the ceph-client.

   1. ceph-deploy admin ceph-client

# su - cephdeploy

$ cd my-cluster

$ ceph-deploy admin ceph-client

5. The ceph-deploy utility copies the keyring to the /etc/ceph directory. Ensure that the keyring file has appropriate read permissions (e.g., sudo chmod +r /etc/ceph/ceph.client.admin.keyring). A short verification sketch follows this list.

   1. Run on ceph-client

$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
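
A short verification on the ceph-client node, assuming ceph-deploy completed without errors:

$ ceph --version         # should report a jewel (10.2.x) release
$ ls -l /etc/ceph/       # ceph.conf and ceph.client.admin.keyring should exist and be readable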

3.1.3. Check ceph

Run on the ceph-client node:

# ceph status

    cluster 88135ade-3647-4083-bcc7-b4d7a64d2849

     health HEALTH_OK

     monmap e1: 1 mons at {node1=10.0.0.70:6789/0}

            election epoch 6, quorum 0 node1

     osdmap e42: 3 osds: 2 up, 2 in

            flags sortbitwise

      pgmap v262: 64 pgs, 1 pools, 0 bytes data, 0 objects

            23369 MB used, 78980 MB / 102350 MB avail

                  64 active+clean
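
Beyond ceph status, a few other read-only commands are useful for confirming the cluster view from the client. All of these are standard ceph subcommands:

# ceph health            # one-line summary: HEALTH_OK / HEALTH_WARN / HEALTH_ERR
# ceph osd tree          # per-host OSD layout with up/down and in/out state
# ceph df                # cluster-wide and per-pool capacity usage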


3.2. CONFIGURE A BLOCK DEVICE

3.2.1. Case Ceph Base Docu



1. On the ceph-client node, create a block device image.

   1. rbd create foo --size 4096 [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]

$ rbd create foo --size 4096

2. On the ceph-client node, map the image to a block device.

   1. sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]

$ sudo rbd map foo --name client.admin

3. Use the block device by creating a file system on the ceph-client node.

   1. sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo


This may take a few moments.

4. Mount the file system on the ceph-client node.

   1. sudo mkdir /mnt/ceph-block-device

sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device

cd /mnt/ceph-block-device

See block devices for additional details. A sketch of the MariaDB example case mentioned at the top follows below.
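
As a follow-up to the MariaDB example case, the mounted RBD can back the database data directory. This is only a sketch: the service name, paths, and the datadir setting are assumptions, and ownership or SELinux contexts may need extra adjustment on a real system.

$ sudo systemctl stop mariadb                        # stop MariaDB before relocating its files
$ sudo cp -a /var/lib/mysql /mnt/ceph-block-device/  # copy the existing data directory onto the RBD-backed mount
$ sudo vi /etc/my.cnf                                # set datadir=/mnt/ceph-block-device/mysql (assumed config path)
$ sudo systemctl start mariadb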



3.2.2. Case Redhat Quick start



https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/paged/installation-guide-for-red-hat-enterprise-linux/chapter-3-client-quick-start



 

 

 
