Cinder with multiple Ceph storage backends

Posted 2020-12-26 15:00:04

Environment

- The current OpenStack deployment is in normal service.
- The backing Ceph cluster is more than 85% full. Expanding it in place is undesirable, because that would trigger a large amount of data migration.
- A new, independent Ceph cluster has therefore been built, to be attached to the existing OpenStack environment as an additional Ceph backend.
- The old Ceph cluster is referred to as ceph-A; the pool in use is volumes.
- The new Ceph cluster is referred to as ceph-B; the pool in use is new_volumes.

Goal

Connect OpenStack to both Ceph backends at the same time. On the cinder server this takes two parts:

1. Ceph connection configuration
2. Cinder configuration

Ceph connection configuration

1. Copy the configuration files from both Ceph clusters into /etc/ceph on the cinder server, giving each a distinct name:

[root@hh-yun-db-129041 ceph]# tree `pwd`
/etc/ceph
├── ceph.client.admin-develop.keyring      <- admin key for ceph-B
├── ceph.client.admin-volumes.keyring      <- admin key for ceph-A
├── ceph.client.developcinder.keyring      <- key for user developcinder in ceph-B
├── ceph.client.cinder.keyring             <- key for user cinder in ceph-A
├── ceph.client.mon-develop.keyring        <- mon key for ceph-B
├── ceph.client.mon-volumes.keyring        <- mon key for ceph-A
├── ceph-develop.conf                      <- ceph-B cluster config file (mon addresses and other cluster info)
└── ceph-volumes.conf                      <- ceph-A cluster config file (mon addresses and other cluster info)

Note: the keyring files must be named ceph.client.(username).keyring, where (username) matches the Ceph user used for the connection; otherwise the cinder server cannot obtain the proper permissions.
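As a quick sanity check of the naming, you can connect explicitly as that user. A minimal sketch, assuming the cinder user has at least mon read capability and using the renamed files from the listing above:

# connect to ceph-A as client.cinder with the renamed conf and keyring
ceph -c /etc/ceph/ceph-volumes.conf -n client.cinder -k /etc/ceph/ceph.client.cinder.keyring -s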

2. From the command line, test the connection to each Ceph backend.

ceph-A connection test

[root@hh-yun-db-129041 ceph]# ceph -c ceph-volumes.conf -k ceph.client.admin-volumes.keyring -s
    cluster xxx-xxx-xxxx-xxxx-xxxx
     health HEALTH_OK
     monmap e3: 5 mons at {hh-yun-ceph-cinder015-128055=240.30.128.55:6789/0,hh-yun-ceph-cinder017-128057=240.30.128.57:6789/0,hh-yun-ceph-cinder024-128074=240.30.128.74:6789/0,hh-yun-ceph-cinder025-128075=240.30.128.75:6789/0,hh-yun-ceph-cinder026-128076=240.30.128.76:6789/0}
            election epoch 452, quorum 0,1,2,3,4 hh-yun-ceph-cinder015-128055,hh-yun-ceph-cinder017-128057,hh-yun-ceph-cinder024-128074,hh-yun-ceph-cinder025-128075,hh-yun-ceph-cinder026-128076
     osdmap e170088: 226 osds: 226 up, 226 in
      pgmap v50751302: 20544 pgs, 2 pools, 157 TB data, 40687 kobjects
            474 TB used, 376 TB / 850 TB avail
            20537 active+clean
                7 active+clean+scrubbing+deep
  client io 19972 kB/s rd, 73591 kB/s wr, 3250 op/s

ceph-B connection test

[root@hh-yun-db-129041 ceph]# ceph -c ceph-develop.conf -k ceph.client.admin-develop.keyring -s
    cluster 4bf07d3e-a289-456d-9bd9-5a89832b413b
     health HEALTH_OK
     monmap e1: 5 mons at {240.30.128.214=240.30.128.214:6789/0,240.30.128.215=240.30.128.215:6789/0,240.30.128.39=240.30.128.39:6789/0,240.30.128.40=240.30.128.40:6789/0,240.30.128.58=240.30.128.58:6789/0}
            election epoch 6, quorum 0,1,2,3,4 240.30.128.39,240.30.128.40,240.30.128.58,240.30.128.214,240.30.128.215
     osdmap e559: 264 osds: 264 up, 264 in
            flags sortbitwise
      pgmap v116751: 12400 pgs, 9 pools, 1636 bytes data, 171 objects
            25091 MB used, 1440 TB / 1440 TB avail
            12400 active+clean
Cinder configuration

Configure the cinder server side:

/etc/cinder/cinder.conf


enabled_backends=CEPH_SATA,CEPH-new_volumes

[CEPH_SATA]
glance_api_version=2
volume_backend_name=ceph_sata
rbd_ceph_conf=/etc/ceph/ceph-volumes.conf
rbd_user=cinder
rbd_flatten_volume_from_snapshot=False
rados_connect_timeout=-1
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_store_chunk_size=4
rbd_secret_uuid=dc4f91c1-8792-4948-b68f-2fcea75f53b4
rbd_pool=volumes
host=cinder.vclound.com

[CEPH-new_volumes]
glance_api_version=2
volume_backend_name=ceph-new_volumes
rbd_ceph_conf=/etc/ceph/ceph-new_volumes.conf
rbd_user=cinder
rbd_flatten_volume_from_snapshot=False
rados_connect_timeout=-1
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_store_chunk_size=4
rbd_secret_uuid=4bf07d3e-a289-456d-9bd9-5a89832b413b
rbd_pool=new_volumes
host=cinder.vclound.com

Note that each name in enabled_backends must match a section header exactly, and rbd_ceph_conf must point at the file actually placed under /etc/ceph (the listing above named the second cluster's file ceph-develop.conf, while this section uses ceph-new_volumes.conf; pick one name and use it consistently).
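Once both backends are enabled, restart cinder-volume and map each backend to a volume type, so a new volume can be steered to either cluster. A minimal sketch (the type names here are illustrative, not from the original setup):

systemctl restart openstack-cinder-volume.service
# one volume type per backend, bound via volume_backend_name
cinder type-create ceph-sata
cinder type-key ceph-sata set volume_backend_name=ceph_sata
cinder type-create ceph-new
cinder type-key ceph-new set volume_backend_name=ceph-new_volumes
# volumes of type ceph-new should now be scheduled to the new cluster
cinder create --volume-type ceph-new --name test-vol 1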
Reply by the OP, posted 2021-1-14 23:28:30:
Run the following on the Ceph monitor node:

CINDER_PASSWD='cinder1234!'
controllerHost='controller'
RABBIT_PASSWD='0penstackRMQ'

1. Create the pool

Create a pool for the cinder-volume service (this setup has only one OSD node, so the replica count is set to 1):

ceph osd pool create cinder-volumes 32
ceph osd pool set cinder-volumes size 1
ceph osd pool application enable cinder-volumes rbd
ceph osd lspools
2. Check pool usage

ceph df

3. Create an account

ceph auth get-or-create client.cinder-volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-volumes, allow rwx pool=glance-images' -o /etc/ceph/ceph.client.cinder-volumes.keyring
# verify
ceph auth ls | grep -EA3 'client.(cinder-volumes)'
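A quick way to confirm the new account can reach the cluster (a sketch; it relies on the mon 'allow r' capability granted above):

ceph -n client.cinder-volumes --keyring /etc/ceph/ceph.client.cinder-volumes.keyring -s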
4. Update ceph.conf and push it to all monitor nodes (do not skip this step)

su - cephd
cd ~/ceph-cluster/
cat <<EOF>> ceph.conf
[client.cinder-volumes]
keyring = /etc/ceph/ceph.client.cinder-volumes.keyring
EOF
ceph-deploy --overwrite-conf admin ceph-mon01
exit
5. Install the cinder-volume component and the Ceph client (skip this step if the Ceph monitor runs on the controller node)

yum -y install openstack-cinder python-keystone ceph-common
6. Generate a UUID with uuidgen (cinder and libvirt must use the same UUID)

uuidgen

Running uuidgen returns a value such as:

086037e4-ad59-4c61-82c9-86edc31b0bc0
7. Configure the cinder-volume service to talk to the cinder-api service

openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:${RABBIT_PASSWD}@${controllerHost}:5672/
openstack-config --set /etc/cinder/cinder.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/cinder/cinder.conf cache enabled true
openstack-config --set /etc/cinder/cinder.conf cache memcache_servers ${controllerHost}:11211
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://${controllerHost}:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://${controllerHost}:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password ${CINDER_PASSWD}
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

8. Set ceph as the backend for the cinder-volume service

openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph

" J$ E) [2 G0 Y7 m0 y& ?! C* u; s, P9.配置cinder-volume服务驱动ceph
, S. H& U% M- {5 M% C* wopenstack-config --set /etc/cinder/cinder.conf  ceph volume_driver  cinder.volume.drivers.rbd.RBDDriver
9 g. {; e: V# v! Oopenstack-config --set /etc/cinder/cinder.conf  ceph rbd_pool  cinder-volumes+ z7 W. h3 ?$ P. Z1 R
openstack-config --set /etc/cinder/cinder.conf  ceph rbd_user cinder-volumes; e2 a. Y' L! Z; I7 z6 {
openstack-config --set /etc/cinder/cinder.conf  ceph rbd_ceph_conf  /etc/ceph/ceph.conf
, N7 E2 E: s1 K2 vopenstack-config --set /etc/cinder/cinder.conf  ceph rbd_flatten_volume_from_snapshot  false
! a9 K8 ]! ^" ]. Iopenstack-config --set /etc/cinder/cinder.conf  ceph bd_max_clone_depth  5
" e3 v2 I0 Q$ i9 m0 dopenstack-config --set /etc/cinder/cinder.conf  ceph rbd_store_chunk_size  4
! Z5 u; ]2 N9 S5 Bopenstack-config --set /etc/cinder/cinder.conf  ceph rados_connect_timeout  -1
+ W3 G/ J, I& h" u) q9 fopenstack-config --set /etc/cinder/cinder.conf  ceph glance_api_version 2" |; d8 P( H# @# `! I
openstack-config --set /etc/cinder/cinder.conf  ceph rbd_secret_uuid  086037e4-ad59-4c61-82c9-86edc31b0bc0# y: P. N" a3 W. z; c9 b* L
# e" f# G  I( W( S4 Z2 O
10. Start the cinder-volume service

systemctl enable openstack-cinder-volume.service
systemctl start openstack-cinder-volume.service
systemctl status openstack-cinder-volume.service
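To confirm the backend actually came up, you can check the volume service list from the controller (run with admin credentials loaded; the cinder-volume entry should report state "up"):

openstack volume service list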

Run on every compute node that needs to attach Ceph volumes

1. Create the secret file (the UUID must match the one used in cinder)

cat << EOF > ~/secret.xml
<secret ephemeral='no' private='no'>
     <uuid>086037e4-ad59-4c61-82c9-86edc31b0bc0</uuid>
     <usage type='ceph'>
         <name>client.cinder-volumes secret</name>
     </usage>
</secret>
EOF

2. Fetch the key for the cinder-volumes account from the Ceph monitor

ceph auth get-key client.cinder-volumes

which returns something like:

AQCxfDFdgp2qKRAAUY/vep29N39Qv7xWKYqMUw==
3. Register the UUID with libvirt

virsh secret-define --file ~/secret.xml

4. Bind the UUID to the cinder-volumes key in libvirt

virsh secret-set-value --secret 086037e4-ad59-4c61-82c9-86edc31b0bc0 --base64 AQCxfDFdgp2qKRAAUY/vep29N39Qv7xWKYqMUw==
5. List the UUIDs registered in libvirt

virsh secret-list
6. Restart libvirt

systemctl restart libvirtd.service
systemctl status libvirtd.service
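At this point an end-to-end check is possible; a sketch (the volume name is illustrative):

# on the controller: create a 1 GB test volume
openstack volume create --size 1 test-ceph-vol
# on the ceph monitor: the volume should appear as an rbd image in the pool
rbd ls cinder-volumes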
Rollback plan if something goes wrong

1. Delete the pool

First enable pool deletion on all monitor nodes; only then can the pool be removed. When deleting a pool, Ceph requires the pool name to be entered twice, together with the --yes-i-really-really-mean-it option.

echo '
mon_allow_pool_delete = true
[mon]
mon allow pool delete = true
' >> /etc/ceph/ceph.conf
systemctl restart ceph-mon.target
ceph osd pool delete cinder-volumes cinder-volumes --yes-i-really-really-mean-it
2. Delete the account

ceph auth del client.cinder-volumes
3. Remove the UUID and cinder-volumes key registered in libvirt

List them:

virsh secret-list

Delete (virsh secret-undefine followed by the UUID):

virsh secret-undefine 086037e4-ad59-4c61-82c9-86edc31b0bc0