[Ceph] Ceph Installation - jewel (latest 10.2.11) - III


Hello.

This is 꿈꾸는여행자.

This time, I would like to cover Ceph.

After building up the existing Ceph material, I will go on to verify it against more recent Ceph releases.

> Next

 

2. STORAGE CLUSTER QUICK START

2.2. CREATE A CLUSTER

2.3. OPERATING YOUR CLUSTER

2.4. EXPANDING YOUR CLUSTER [Fail: fails after rebooting]

2.5. STORING/RETRIEVING OBJECT DATA [OPTIONS]

                        

 

Table of Contents

 


2.2.8.3. ADD OSD HOSTS/CHASSIS TO THE CRUSH HIERARCHY

2.2.8.4. CHECK CRUSH HIERARCHY

2.2.9. Copy ceph-deploy key


2.3. OPERATING YOUR CLUSTER

2.3.2. MONITORING A CLUSTER

2.3.2.1. INTERACTIVE MODE

2.4. EXPANDING YOUR CLUSTER [Fail: fails after rebooting]

2.4.1. ADDING AN OSD

2.4.1.1. Clean partition

2.4.1.2. Prepare the OSD

2.4.1.3. Activate the OSDs

2.4.1.4. Edit crush map

2.4.1.5. Check the cluster status

2.4.2. ADD A METADATA SERVER

2.4.3. ADD AN RGW INSTANCE [OPTIONS]

2.4.4. ADDING MONITORS

2.5. STORING/RETRIEVING OBJECT DATA [OPTIONS]

2.5.1. Finding an Object's Location

2.5.2. Exercise: Locate an Object

 

The details are as follows.

Thank you.

> Below

 

 



2.2.8.3. ADD OSD HOSTS/CHASSIS TO THE CRUSH HIERARCHY



Once you have added OSDs and created a CRUSH hierarchy, add the OSD hosts/chassis to the CRUSH hierarchy so that CRUSH can distribute objects across failure domains. 



ceph osd crush set {id-or-name} {weight} root={pool-name}  [{bucket-type}={bucket-name} ...]




For example:



ceph osd crush set osd.0 1.0 root=default datacenter=dc1 room=room1 row=row1 rack=rack1 host=node2

ceph osd crush set osd.1 1.0 root=default datacenter=dc1 room=room1 row=row1 rack=rack2 host=node3

ceph osd crush set osd.2 1.0 root=default datacenter=dc1 room=room1 row=row1 rack=rack3 host=node4


The commands actually run against this cluster (matching the osd tree shown later) were:

ceph osd crush set osd.1 1.0 root=default datacenter=dc1 room=room1 row=row1 rack=rack1 host=node2

ceph osd crush set osd.2 1.0 root=default datacenter=dc1 room=room1 row=row1 rack=rack2 host=node3


The foregoing example uses three different racks for the exemplary hosts (assuming that is how they are physically configured). Since the exemplary Ceph configuration file specified "rack" as the largest failure domain by setting osd_crush_chooseleaf_type = 3, CRUSH can write each object replica to an OSD residing in a different rack. Assuming osd_pool_default_min_size = 2, this means (assuming sufficient storage capacity) that the Ceph cluster can continue operating if an entire rack were to fail (e.g., failure of a power distribution unit or rack router).
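For reference, a minimal ceph.conf excerpt carrying the two settings referenced above could look like the following. This is only a sketch built from the values quoted in this section, not the full configuration file used in this series.

[global]
# choose CRUSH leaves across racks (bucket type 3 is "rack" in the default CRUSH map)
osd_crush_chooseleaf_type = 3
# keep serving client I/O as long as at least two replicas of a placement group are available
osd_pool_default_min_size = 2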



2.2.8.4. CHECK CRUSH HIERARCHY



Check your work to ensure that the CRUSH hierarchy is accurate.



ceph osd tree


If you are not satisfied with the results of your CRUSH hierarchy, you may move any component of your hierarchy with the move command.



ceph osd crush move <bucket-to-move> <bucket-type>=<parent-bucket>


If you want to remove a bucket (node) or OSD (leaf) from the CRUSH hierarchy, use the remove command:



ceph osd crush remove <bucket-name>
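For illustration only (the bucket and OSD names below are hypothetical, not taken from this cluster), moving a host into another rack and then dropping a stale OSD entry would look like this:

# move the host bucket "node4" under the "rack2" bucket
ceph osd crush move node4 rack=rack2

# remove the leaf "osd.9" from the CRUSH map
ceph osd crush remove osd.9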




2.2.9. Copy ceph-deploy key

1. Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

   $ ceph-deploy admin {admin-node} {ceph-node}

   For example:

   $ ceph-deploy admin admin-node node1 node2 node3

2. When ceph-deploy is talking to the local admin host (admin-node), it must be reachable by its hostname. If necessary, modify /etc/hosts to add the name of the admin host.

   All of the hosts must be registered in /etc/hosts.

3. Ensure that you have the correct permissions for the ceph.client.admin.keyring.

   $ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

4. Check your cluster's health.

   $ ceph health

   Your cluster should return an active + clean state when it has finished peering.
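As a sketch of what this /etc/hosts requirement means in this walkthrough: the monitor addresses 10.0.0.70-72 appear later in the monmap output, while the admin-node address below is only an assumption.

# /etc/hosts (kept identical on admin-node, node1, node2 and node3)
10.0.0.60   admin-node     # assumed address for the ceph-deploy host
10.0.0.70   node1
10.0.0.71   node2
10.0.0.72   node3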




________________



2.3. OPERATING YOUR CLUSTER



Deploying a Ceph cluster with ceph-deploy automatically starts the cluster. To operate the cluster daemons with Debian/Ubuntu distributions, see Running Ceph with Upstart. To operate the cluster daemons with CentOS, Red Hat, Fedora, and SLES distributions, see Running Ceph with sysvinit.

        The cluster is started automatically by ceph-deploy.
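As a rough sketch (not verified on this cluster), checking or restarting daemons on a CentOS node deployed this way might look like the following; systemd-based hosts such as CentOS 7 generally manage the same daemons through the ceph.target unit instead.

# sysvinit-style, per "Running Ceph with sysvinit"
sudo /etc/init.d/ceph status            # status of all Ceph daemons on this node
sudo /etc/init.d/ceph restart osd.0     # restart a single daemon

# systemd-based hosts
sudo systemctl status ceph.target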



To learn more about peering and cluster health, see Monitoring a Cluster. To learn more about Ceph OSD Daemon and placement group health, see Monitoring OSDs and PGs. To learn more about managing users, see User Management.

        For how to monitor the cluster and check OSDs, refer to the linked documentation.



Once you deploy a Ceph cluster, you can try out some of the administration functionality, the rados object store command line, and then proceed to Quick Start guides for Ceph Block Device, Ceph Filesystem, and the Ceph Object Gateway.





2.3.2. MONITORING A CLUSTER



http://docs.ceph.com/docs/master/rados/operations/monitoring/



2.3.2.1. INTERACTIVE MODE



To run the ceph tool in interactive mode, type ceph at the command line with no arguments. For example:



[root@node1 ~]# su - cephdeploy



[cephdeploy@node1 ceph]$ chmod 644 /etc/ceph/ceph.client.admin.keyring 



[cephdeploy@node1 ~]$ ceph

ceph> health

ceph> status

ceph> quorum_status

ceph> mon_status

ceph> health

HEALTH_OK



ceph> status

    cluster 88135ade-3647-4083-bcc7-b4d7a64d2849

     health HEALTH_OK

     monmap e1: 1 mons at {node1=10.0.0.70:6789/0}

            election epoch 5, quorum 0 node1

     osdmap e35: 3 osds: 2 up, 2 in

            flags sortbitwise

      pgmap v168: 64 pgs, 1 pools, 0 bytes data, 0 objects

            23368 MB used, 78981 MB / 102350 MB avail

                  64 active+clean



ceph> quorum_status

{"election_epoch":5,"quorum":[0],"quorum_names":["node1"],"quorum_leader_name":"node1","monmap":{"epoch":1,"fsid":"88135ade-3647-4083-bcc7-b4d7a64d2849","modified":"2016-08-05 10:59:37.410321","created":"2016-08-05 10:59:37.410321","mons":[{"rank":0,"name":"node1","addr":"10.0.0.70:6789\/0"}]}}



ceph> mon_status

{"name":"node1","rank":0,"state":"leader","election_epoch":5,"quorum":[0],"outside_quorum":[],"extra_probe_peers":[],"sync_provider":[],"monmap":{"epoch":1,"fsid":"88135ade-3647-4083-bcc7-b4d7a64d2849","modified":"2016-08-05 10:59:37.410321","created":"2016-08-05 10:59:37.410321","mons":[{"rank":0,"name":"node1","addr":"10.0.0.70:6789\/0"}]}}








________________



2.4. EXPANDING YOUR CLUSTER [Fail: fails after rebooting]



* Cautions

   * Before starting, open the required ports in firewalld.

   * You MUST open port 6789 on your public network on ALL Ceph monitor nodes:

      # firewall-cmd --zone=public --add-port=6789/tcp
      # firewall-cmd --zone=public --add-port=6789/tcp --permanent

   * Each OSD on each Ceph node needs a few ports:

      # firewall-cmd --zone=public --add-port=6800-7300/tcp
      # firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
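To confirm the rules are in place, the open ports in the zone can be listed afterwards (assuming the public zone as above):

# firewall-cmd --zone=public --list-ports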


Once you have a basic cluster up and running, the next step is to expand the cluster. Add a Ceph OSD Daemon and a Ceph Metadata Server to node1. Then add a Ceph Monitor to node2 and node3 to establish a quorum of Ceph Monitors.



  


2.4.1. ADDING AN OSD

* ADDING/REMOVING OSDS

   * http://docs.ceph.com/docs/hammer/rados/operations/add-or-rm-osds/

   * Remove OSDs that are no longer needed with the ceph osd rm osd.0 command.

2.4.1.1. Clean partition



Since you are running a 3-node cluster for demonstration purposes, add the OSD to the monitor node.

        clean partition tables

$ ceph-deploy disk zap node1:/dev/sdb
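Before zapping, it can help to double-check which devices the node actually has. A sketch (ceph-deploy of this era provides a disk list subcommand, but verify against your version):

$ ceph-deploy disk list node1

$ ssh node1 lsblk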


2.4.1.2. Prepare the OSD

Then, from your ceph-deploy node, prepare the OSD.



When using a disk:

$ ceph-deploy osd prepare node1:/dev/sdb


When using a directory:

[root@admin-node ~]# su - cephdeploy

[cephdeploy@admin-node ~]$ ssh node1

[cephdeploy@node1 ~]$ sudo mkdir /var/local/osd2

[cephdeploy@node1 ~]$ sudo chown -R ceph:ceph /var/local/osd2

[cephdeploy@node1 ~]$ exit

[cephdeploy@admin-node ~]$ cd my-cluster

[cephdeploy@admin-node my-cluster]$ ceph-deploy osd prepare node1:/var/local/osd2


2.4.1.3. Activate the OSDs



Finally, activate the OSDs.



When using a disk:

$ ceph-deploy osd activate node1:/dev/sdb1


When using a directory:

[cephdeploy@admin-node my-cluster]$ ceph-deploy osd activate node1:/var/local/osd2


2.4.1.4. Edit crush map



Place the buckets into a hierarchy:



[cephdeploy@admin-node my-cluster]$ ceph osd crush move node1 rack=rack3

moved item id -10 name 'node1' to location {rack=rack3} in crush map


Check osd tree



[cephdeploy@admin-node my-cluster]$ ceph osd tree

ID  WEIGHT  TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY 

 -1 2.04880 root default                                                     

 -4 2.04880     datacenter dc1                                               

 -5 2.04880         room room1                                               

 -6 2.04880             row row1                                             

 -7 1.00000                 rack rack1                                       

 -2 1.00000                     host node2                                   

  1 1.00000                         osd.1       up  1.00000          1.00000 

 -8 1.00000                 rack rack2                                       

 -3 1.00000                     host node3                                   

  2 1.00000                         osd.2       up  1.00000          1.00000 

 -9 0.04880                 rack rack3                                       

-10 0.04880                     host node1                                   

  3 0.04880                         osd.3       up  1.00000          1.00000 

  0       0 osd.0                             down        0          1.00000 


Add the OSD hosts/chassis to the CRUSH hierarchy



[cephdeploy@admin-node my-cluster]$ ceph osd crush set osd.3 1.0 root=default datacenter=dc1 room=room1 row=row1 rack=rack3 host=node1

set item id 3 name 'osd.3' weight 1 at location {datacenter=dc1,host=node1,rack=rack3,room=room1,root=default,row=row1} to crush map




2.4.1.5. Check the cluster status

Once you have added your new OSD, Ceph will begin rebalancing the cluster by migrating placement groups to your new OSD. You can observe this process with the ceph CLI.

        Watch the rebalancing from the CLI.



$ ceph -w


You should see the placement group states change from active+clean to active with some degraded objects, and finally active+clean when migration completes. (Control-c to exit.)



2.4.2. ADD A METADATA SERVER



* To use CephFS, you need at least one metadata server. Execute the following to create a metadata server:

   * $ ceph-deploy mds create {ceph-node}

   * For example (here the MDS is registered on the monitor node; a quick status check follows after this note):

      * $ ceph-deploy mds create node1

* Note

   * Currently Ceph runs in production with one metadata server only. You may use more, but there is currently no commercial support for a cluster with multiple metadata servers.
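To confirm that the metadata server registered with the cluster, a quick status check such as the following can be used. This is only a sketch; CephFS itself still needs data and metadata pools plus a ceph fs new call before the MDS leaves standby.

$ ceph mds stat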

2.4.3. ADD AN RGW INSTANCE [OPTIONS]



Test of adding an RGW (RADOS Gateway) instance



To use the Ceph Object Gateway component of Ceph, you must deploy an instance of RGW. Execute the following to create a new instance of RGW:



$ ceph-deploy rgw create {gateway-node}


For example:



$ ceph-deploy rgw create node1


* Note 

   * This functionality is new with the Hammer release, and also with ceph-deploy v1.5.23.



By default, the RGW instance will listen on port 7480. This can be changed by editing ceph.conf on the node running the RGW as follows:



[client]

rgw frontends = civetweb port=80


To use an IPv6 address, use:



[client]

rgw frontends = civetweb port=[::]:80
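Assuming the default port is kept, reachability can be checked from any host with access to the gateway node; an anonymous request to RGW normally returns an S3-style ListAllMyBucketsResult XML document. A sketch:

$ curl http://node1:7480/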


2.4.4. ADDING MONITORS



A Ceph Storage Cluster requires at least one Ceph Monitor to run. For high availability, Ceph Storage Clusters typically run multiple Ceph Monitors so that the failure of a single Ceph Monitor will not bring down the Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority of monitors (i.e., 1 of 1, 2 of 3, 3 of 4, 3 of 5, 4 of 6, etc.) to form a quorum.



Add two Ceph Monitors to your cluster.



ceph-deploy mon add {ceph-node}


For example:

        Add monitors on node2 and node3.

[cephdeploy@admin-node my-cluster]$ ceph-deploy mon add node2

[cephdeploy@admin-node my-cluster]$ ceph-deploy mon add node3


Once you have added your new Ceph Monitors, Ceph will begin synchronizing the monitors and form a quorum. You can check the quorum status by executing the following:

        Check the quorum status.

$ ceph quorum_status --format json-pretty

{

    "election_epoch": 10,

    "quorum": [

        0,

        1,

        2

    ],

    "quorum_names": [

        "node1",

        "node2",

        "node3"

    ],

    "quorum_leader_name": "node1",

    "monmap": {

        "epoch": 3,

        "fsid": "88135ade-3647-4083-bcc7-b4d7a64d2849",

        "modified": "2016-08-10 14:39:36.572393",

        "created": "2016-08-05 10:59:37.410321",

        "mons": [

            {

                "rank": 0,

                "name": "node1",

                "addr": "10.0.0.70:6789\/0"

            },

            {

                "rank": 1,

                "name": "node2",

                "addr": "10.0.0.71:6789\/0"

            },

            {

                "rank": 2,

                "name": "node3",

                "addr": "10.0.0.72:6789\/0"

            }

        ]

    }

}


* Tip 

   * When you run Ceph with multiple monitors, you SHOULD install and configure NTP on each monitor host. Ensure that the monitors are NTP peers.
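A quick way to verify time synchronization on each monitor host (a sketch; the right command depends on whether ntpd or chronyd is in use):

$ ntpq -p            # when running ntpd
$ chronyc sources    # when running chronyd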

2.5. STORING/RETRIEVING OBJECT DATA [OPTIONS]



Test of storing and retrieving object data

Run the test from a host where the Ceph client is installed.



To store object data in the Ceph Storage Cluster, a Ceph client must:

1. Set an object name

2. Specify a pool



2.5.1. Finding an Object's Location

The Ceph Client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to a Ceph OSD Daemon dynamically. To find the object location, all you need is the object name and the pool name. For example:



ceph osd map {poolname} {object-name}


2.5.2. Exercise: Locate an Object



Example of finding an object's location



As an exercise, let's create an object. Specify an object name, a path to a test file containing some object data and a pool name using the rados put command on the command line. For example:

echo {Test-data} > testfile.txt

rados put {object-name} {file-path} --pool=data

rados put test-object-1 testfile.txt --pool=data
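Note: on jewel the data pool may not exist by default, so the put can fail with a missing-pool error; in that case the pool can be created first (a sketch, with an arbitrary placement-group count):

ceph osd pool create data 64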


To verify that the Ceph Storage Cluster stored the object, execute the following:

rados -p data ls


Now, identify the object location:

ceph osd map {pool-name} {object-name}

ceph osd map data test-object-1


Ceph should output the object’s location. For example:

osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]


To remove the test object, simply delete it using the rados rm command. For example:

rados rm test-object-1 --pool=data


As the cluster evolves, the object location may change dynamically. One benefit of Ceph’s dynamic rebalancing is that Ceph relieves you from having to perform the migration manually.


