易陆发现 Internet Technology Forum
Manually Installing Monasca on an OpenStack Controller

[复制链接]
发表于 2022-12-20 10:00:02 | 显示全部楼层 |阅读模式

马上注册,结交更多好友,享用更多功能,让你轻松玩转社区。

您需要 登录 才可以下载或查看,没有账号?开始注册

x
All Monasca components can be installed on a single node, for example the OpenStack controller node, or they can be deployed across multiple nodes. In this article, monasca-api is installed in a new VM created in my OpenStack cluster; the VM has an associated floating IP. The monasca-agent is installed on the controller node and publishes metrics to the API node through the floating IP. Both are on the same subnet.
  • Install the packages and tools we need
    apt-get install -y git
    apt-get install openjdk-7-jre-headless python-pip python-dev
  • Install the MySQL database. If you are installing monasca-api on the OpenStack controller node, you can skip this step and reuse the MySQL instance already installed for the OpenStack services.
    apt-get install -y mysql-server
    Create the Monasca database schema. Download mon.sql from https://raw.githubusercontent.com/stackforge/cookbook-monasca-schema/master/files/default/mysql/mon.sql (saved locally as mon_mysql.sql) and load it:
    mysql -uroot -ppassword < mon_mysql.sql
  • Install ZooKeeper and restart it. I use the localhost interface and run only one ZooKeeper, so the default configuration file needs no changes.
    apt-get install -y zookeeper zookeeperd zookeeper-bin
    service zookeeper restart
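Before moving on, it can be worth checking that ZooKeeper is actually answering. A minimal sketch, relying on ZooKeeper's "ruok" four-letter command (a healthy server replies "imok"); the host and port defaults are assumptions matching the single-node setup above, and the function only prints the probe command (a dry run) so it stays side-effect free:

```shell
# Emit a ZooKeeper health-probe command ("ruok" four-letter word; a healthy
# server replies "imok"). Printed as a dry run; pipe the output to sh to run it.
zk_probe_cmd() {
  local host="${1:-localhost}" port="${2:-2181}"
  printf 'echo ruok | nc -w 2 %s %s\n' "$host" "$port"
}

zk_probe_cmd                 # probe the local single-node ZooKeeper
zk_probe_cmd 192.168.1.143   # or a remote node
```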
  • Install and configure Kafka
    wget http://apache.mirrors.tds.net/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz
    mv kafka_2.9.2-0.8.1.1.tgz /opt
    cd /opt
    tar zxf kafka_2.9.2-0.8.1.1.tgz
    ln -s /opt/kafka_2.9.2-0.8.1.1/ /opt/kafka
    ln -s /opt/kafka/config /etc/kafka
    Create a kafka system user; the Kafka service will run as this user.
    useradd kafka -U -r
    Create the Kafka startup script: copy the following into /etc/init/kafka.conf and save it.
    ) g6 ~7 e1 v) B0 Z( xdescription "Kafka"
    , V; }) N: M8 S, i6 x( h0 N2 g; Q+ P: M  h5 c* y
    start on runlevel [2345]
    2 ^3 @8 p& ^% J8 J4 D3 ?stop on runlevel [!2345]' Y4 D. w! ?0 O* O4 W6 p
    " T3 _4 W+ i/ U# z
    respawn8 B, y1 ^# F: o$ E+ K
    $ c$ k1 C. \; |
    limit nofile 32768 327683 M6 m( p' Q+ s7 j3 W/ l& g  Y7 e

    + o# q( t7 O+ v8 Z& _! _" ?% n# If zookeeper is running on this box also give it time to start up properly
    * ~& F0 n" s7 k& L6 t2 x' c1 Hpre-start script, B6 \9 u6 r8 a4 q- G
    if [ -e /etc/init.d/zookeeper ]; then1 _3 e) H% v5 {) d( W! h' p( {
    /etc/init.d/zookeeper restart
    & r3 l3 E; T& X0 \, vfi4 i) q& J8 \4 c1 m! e
    end script) l' b' K1 n( t5 s
    % q3 v! C. o2 B  _# I9 n& }, ^4 G
    # Rather than using setuid/setgid sudo is used because the pre-start task must run as root/ w# a2 B( i( Z
    exec sudo -Hu kafka -g kafka KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" JMX_PORT=9997 /opt/kafka/bin/kafka-server-start.sh /etc/kafka/server.properties
    Configure Kafka: vim /etc/kafka/server.properties and make sure the following are set:
    host.name=localhost
    advertised.host.name=localhost
    log.dirs=/var/kafka
    Create the Kafka log directories:
    mkdir /var/kafka
    mkdir /var/log/kafka
    chown -R kafka. /var/kafka/
    chown -R kafka. /var/log/kafka/
    Start the Kafka service:
    service kafka start
    The next step is to create the Kafka topics:
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 64 --topic metrics
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic events
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic raw-events
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic transformed-events
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic stream-definitions
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic transform-definitions
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic alarm-state-transitions
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic alarm-notifications
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic stream-notifications
    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic retry-notifications
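The near-identical kafka-topics.sh invocations above can also be collapsed into a loop. A sketch under the same paths and ZooKeeper address as above; it is a dry run ("echo" only prints each command), so drop the echo to execute for real:

```shell
# Create the Monasca Kafka topics in a loop instead of one command per topic.
# Dry run: "echo" prints each full command; remove it to actually create topics.
KAFKA_TOPICS=/opt/kafka/bin/kafka-topics.sh
ZK=localhost:2181

create_topic() {  # $1 = topic name, $2 = partition count
  echo "$KAFKA_TOPICS" --create --zookeeper "$ZK" --replication-factor 1 \
       --partitions "$2" --topic "$1"
}

create_topic metrics 64
for t in events raw-events transformed-events stream-definitions \
         transform-definitions alarm-state-transitions \
         alarm-notifications stream-notifications; do
  create_topic "$t" 12
done
create_topic retry-notifications 3
```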
  • Install and configure InfluxDB
    curl -sL https://repos.influxdata.com/influxdb.key | apt-key add -
    echo "deb https://repos.influxdata.com/ubuntu trusty stable" > /etc/apt/sources.list.d/influxdb.list
    apt-get update
    apt-get install -y apt-transport-https
    apt-get install -y influxdb

    service influxdb start
    Create the InfluxDB database, user, password, and retention policy (change the password to your own):
    influx
    CREATE DATABASE mon
    CREATE USER monasca WITH PASSWORD 'tyun'
    CREATE RETENTION POLICY persister_all ON mon DURATION 90d REPLICATION 1 DEFAULT
    exit
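The same statements can be applied non-interactively by piping them into the influx client, which is handy for scripted installs. A sketch; the database name, user, and password are the example values used above, so substitute your own:

```shell
# Emit the InfluxDB setup statements used above; apply them non-interactively with:
#   influx_setup | influx
influx_setup() {
  cat <<'EOF'
CREATE DATABASE mon
CREATE USER monasca WITH PASSWORD 'tyun'
CREATE RETENTION POLICY persister_all ON mon DURATION 90d REPLICATION 1 DEFAULT
EOF
}

influx_setup
```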
  • Install and configure Storm
    wget http://apache.mirrors.tds.net/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz
    mkdir /opt/storm
    cp apache-storm-0.9.6.tar.gz /opt/storm/
    cd /opt/storm/
    tar xzf apache-storm-0.9.6.tar.gz
    ln -s /opt/storm/apache-storm-0.9.6 /opt/storm/current

    useradd storm -U -r
    mkdir /var/storm
    mkdir /var/log/storm
    chown -R storm. /var/storm/
    chown -R storm. /var/log/storm/
    Edit storm.yaml: vim /opt/storm/current/conf/storm.yaml
    ### base
    java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"
    storm.local.dir: "/var/storm"

    ### zookeeper.*
    storm.zookeeper.servers:
    - "localhost"
    storm.zookeeper.port: 2181
    storm.zookeeper.retry.interval: 5000
    storm.zookeeper.retry.times: 29
    storm.zookeeper.root: "/storm"
    storm.zookeeper.session.timeout: 30000

    ### supervisor.* configs are for node supervisors
    supervisor.slots.ports:
    - 6701
    - 6702
    - 6703
    - 6704
    supervisor.childopts: "-Xmx1024m"

    ### worker.* configs are for task workers
    worker.childopts: "-Xmx1280m -XX:+UseConcMarkSweepGC -Dcom.sun.management.jmxremote"

    ### nimbus.* configs are for the master
    nimbus.host: "localhost"
    nimbus.thrift.port: 6627
    nimbus.childopts: "-Xmx1024m"

    ### ui.* configs are for the master
    ui.host: 127.0.0.1
    ui.port: 8078
    ui.childopts: "-Xmx768m"

    ### drpc.* configs

    ### transactional.* configs
    transactional.zookeeper.servers:
    - "localhost"
    transactional.zookeeper.port: 2181
    transactional.zookeeper.root: "/storm-transactional"

    ### topology.* configs are for specific executing storms
    topology.acker.executors: 1
    topology.debug: false

    logviewer.port: 8077
    logviewer.childopts: "-Xmx128m"
    Create the Storm supervisor startup script: vim /etc/init/storm-supervisor.conf
    # Startup script for Storm Supervisor

    description "Storm Supervisor daemon"
    start on runlevel [2345]

    console log
    respawn

    kill timeout 240
    respawn limit 25 5

    setgid storm
    setuid storm
    chdir /opt/storm/current
    exec /opt/storm/current/bin/storm supervisor
    Create the Storm nimbus startup script: vim /etc/init/storm-nimbus.conf
    # Startup script for Storm Nimbus

    description "Storm Nimbus daemon"
    start on runlevel [2345]

    console log
    respawn

    kill timeout 240
    respawn limit 25 5

    setgid storm
    setuid storm
    chdir /opt/storm/current
    exec /opt/storm/current/bin/storm nimbus
    Start the supervisor and nimbus:
    service storm-supervisor start
    service storm-nimbus start
  • Install the monasca-api Python packages
    Some Monasca components ship both Python and Java implementations; here I chose the Python implementation for deployment.
    pip install monasca-common
    pip install gunicorn
    pip install greenlet # Required for both
    pip install eventlet # For eventlet workers
    pip install gevent # For gevent workers
    pip install monasca-api
    pip install influxdb
    vim /etc/monasca/api-config.ini and change the host to your IP address:
    [DEFAULT]
    name = monasca_api

    [pipeline:main]
    # Add validator in the pipeline so the metrics messages can be validated.
    pipeline = auth keystonecontext api

    [app:api]
    paste.app_factory = monasca_api.api.server:launch

    [filter:auth]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory

    [filter:keystonecontext]
    paste.filter_factory = monasca_api.middleware.keystone_context_filter:filter_factory

    [server:main]
    use = egg:gunicorn#main
    host = 192.168.2.23
    port = 8082
    workers = 1
    proc_name = monasca_api
    vim /etc/monasca/api-config.conf and change the following:
    [DEFAULT]
    # logging, make sure that the user under whom the server runs has permission
    # to write to the directory.
    log_file = monasca-api.log
    log_dir = /var/log/monasca/api/
    debug=False
    region = RegionOne

    [security]
    # The roles that are allowed full access to the API.
    default_authorized_roles = admin, user, domainuser, domainadmin, monasca-user

    # The roles that are allowed to only POST metrics to the API. This role would be used by the Monasca Agent.
    agent_authorized_roles = admin

    # The roles that are allowed to only GET metrics from the API.
    read_only_authorized_roles = admin

    # The roles that are allowed to access the API on behalf of another tenant.
    # For example, a service can POST metrics to another tenant if they are a member of the "delegate" role.
    delegate_authorized_roles = admin

    [kafka]
    # The endpoint to the kafka server
    uri = localhost:9092

    [influxdb]
    # Only needed if Influxdb database is used for backend.
    # The IP address of the InfluxDB service.
    ip_address = localhost

    # The port number that the InfluxDB service is listening on.
    port = 8086

    # The username to authenticate with.
    user = monasca

    # The password to authenticate with.
    password = tyun

    # The name of the InfluxDB database to use.
    database_name = mon

    [database]
    url = "mysql+pymysql://monasca:tyun@127.0.0.1/mon"

    [keystone_authtoken]
    identity_uri = http://192.168.1.11:35357
    auth_uri = http://192.168.1.11:5000
    admin_password = tyun
    admin_user = monasca
    admin_tenant_name = service
    cafile =
    certfile =
    keyfile =
    insecure = false
    Comment out the [mysql] section and leave the other sections at their defaults.
    Create the monasca system user and log directories:
    useradd monasca -U -r
    mkdir /var/log/monasca
    mkdir /var/log/monasca/api
    chown -R monasca. /var/log/monasca/
    On the OpenStack controller node, create the monasca user with a password, and assign the admin role to the monasca user in the service tenant:
    openstack user create --domain default --password tyun monasca
    openstack role add --project service --user monasca admin

    openstack service create --name monasca --description "Monasca monitoring service" monitoring

    Create the endpoints:
    openstack endpoint create --region RegionOne monasca public http://192.168.1.143:8082/v2.0
    openstack endpoint create --region RegionOne monasca internal http://192.168.1.143:8082/v2.0
    openstack endpoint create --region RegionOne monasca admin http://192.168.1.143:8082/v2.0
    192.168.1.143 is the floating IP of my API VM; change it to your own IP.
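Since all three endpoints point at the same URL, they can also be created in a short loop. A sketch using the floating IP from above; it is a dry run (echo prints the commands), so drop the echo to run them for real:

```shell
# Create the public/internal/admin Monasca endpoints in a loop.
# Dry run: "echo" prints each openstack command; remove it to execute.
MONASCA_URL=http://192.168.1.143:8082/v2.0

create_monasca_endpoints() {
  local iface
  for iface in public internal admin; do
    echo openstack endpoint create --region RegionOne monasca "$iface" "$MONASCA_URL"
  done
}

create_monasca_endpoints
```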
    Create the monasca-api startup script: vim /etc/init/monasca-api.conf
    # Startup script for the Monasca API

    description "Monasca API Python app"
    start on runlevel [2345]

    console log
    respawn

    setgid monasca
    setuid monasca
    exec /usr/local/bin/gunicorn -n monasca-api -k eventlet --worker-connections=2000 --backlog=1000 --paste /etc/monasca/api-config.ini
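Once the service is started, a quick smoke test against the base URL confirms gunicorn is listening. A sketch; the default host and port are the values from api-config.ini above (adjust to your own), and the curl line is left commented so the snippet itself has no side effects:

```shell
# Build the monasca-api base URL from the host/port set in api-config.ini,
# then smoke-test it with curl (run the commented line on a node that can reach it).
api_url() { echo "http://${1:-192.168.2.23}:${2:-8082}"; }

api_url
# curl -si "$(api_url)/" | head -1   # expect an HTTP status line if gunicorn is up
```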
  • Install monasca-persister
    Create the monasca-persister startup script:
    vim /etc/init/monasca-persister.conf
    # Startup script for the Monasca Persister

    description "Monasca Persister Python app"
    start on runlevel [2345]

    console log
    respawn

    setgid monasca
    setuid monasca
    exec /usr/bin/java -Dfile.encoding=UTF-8 -cp /opt/monasca/monasca-persister.jar monasca.persister.PersisterApplication server /etc/monasca/persister-config.yml
    Start monasca-persister:
    service monasca-persister start
  • Install monasca-notification
    pip install --upgrade monasca-notification
    apt-get install sendmail
    Copy notification.yaml to /etc/monasca/, then create the startup script: vim /etc/init/monasca-notification.conf
    # Startup script for the monasca_notification

    description "Monasca Notification daemon"
    start on runlevel [2345]

    console log
    respawn

    setgid monasca
    setuid monasca
    exec /usr/bin/python /usr/local/bin/monasca-notification
    Start the notification service:
    service monasca-notification start
  • Install monasca-thresh. Copy monasca-thresh to /etc/init.d/, copy monasca-thresh.jar to /opt/monasca-thresh/, and copy thresh-config.yml to /etc/monasca/, updating the host and database settings in it. Then start monasca-thresh:
    service monasca-thresh start
  • Install monasca-agent
    Install monasca-agent on the OpenStack controller node so that it can monitor the OpenStack service processes.
    sudo pip install --upgrade monasca-agent
    Set up monasca-agent, changing the user domain ID and project domain ID to your own values:
    monasca-setup -u monasca -p tyun --user_domain_id e25e0413a70c41449d2ccc2578deb1e4 --project_domain_id e25e0413a70c41449d2ccc2578deb1e4 --user monasca \
    --project_name service -s monitoring --keystone_url http://192.168.1.11:35357/v3 --monasca_url http://192.168.1.143:8082/v2.0 --config_dir /etc/monasca/agent --log_dir /var/log/monasca/agent --overwrite
    Source the credentials script admin-rc.sh, then run monasca metric-list to verify.
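The long monasca-setup invocation above is easier to reuse if parameterized. A sketch that assembles and prints the command (dry run); the domain ID, URLs, and password are the example values from this article, so replace them with your own:

```shell
# Assemble the monasca-setup command from a few variables and print it (dry run);
# pipe the output to sh, or drop the echo, to actually run it.
DOMAIN_ID=e25e0413a70c41449d2ccc2578deb1e4
KEYSTONE_URL=http://192.168.1.11:35357/v3
MONASCA_URL=http://192.168.1.143:8082/v2.0

monasca_setup_cmd() {
  echo monasca-setup -u monasca -p tyun \
       --user_domain_id "$DOMAIN_ID" --project_domain_id "$DOMAIN_ID" \
       --user monasca --project_name service -s monitoring \
       --keystone_url "$KEYSTONE_URL" --monasca_url "$MONASCA_URL" \
       --config_dir /etc/monasca/agent --log_dir /var/log/monasca/agent --overwrite
}

monasca_setup_cmd
```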
DevStack Installation
Running Monasca in DevStack requires a host with at least 10 GB of RAM.
Instructions for installing and running DevStack can be found here:
https://docs.openstack.org/devstack/latest/
To run Monasca in DevStack, follow these three steps.
  • Clone the DevStack repository.

git clone https://git.openstack.org/openstack-dev/devstack
  • Add the following to the DevStack local.conf file in the root of the devstack directory. You may need to create local.conf if it does not already exist.
# BEGIN DEVSTACK LOCAL.CONF CONTENTS

[[local|localrc]]

DATABASE_PASSWORD=secretdatabase
RABBIT_PASSWORD=secretrabbit
ADMIN_PASSWORD=secretadmin
SERVICE_PASSWORD=secretservice
SERVICE_TOKEN=111222333444

LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs
LOG_COLOR=False

# The following two variables allow switching between Java and Python for the implementations
# of the Monasca API and the Monasca Persister. If these variables are not set, then the
# default is to install the Python implementations of both the Monasca API and the Monasca Persister.

# Uncomment one of the following two lines to choose Java or Python for the Monasca API.
MONASCA_API_IMPLEMENTATION_LANG=${MONASCA_API_IMPLEMENTATION_LANG:-java}
# MONASCA_API_IMPLEMENTATION_LANG=${MONASCA_API_IMPLEMENTATION_LANG:-python}

# Uncomment one of the following two lines to choose Java or Python for the Monasca Persister.
MONASCA_PERSISTER_IMPLEMENTATION_LANG=${MONASCA_PERSISTER_IMPLEMENTATION_LANG:-java}
# MONASCA_PERSISTER_IMPLEMENTATION_LANG=${MONASCA_PERSISTER_IMPLEMENTATION_LANG:-python}

# Uncomment one of the following two lines to choose either InfluxDB or Vertica.
# default "influxdb" is selected as metric DB
MONASCA_METRICS_DB=${MONASCA_METRICS_DB:-influxdb}
# MONASCA_METRICS_DB=${MONASCA_METRICS_DB:-vertica}

# This line will enable all of Monasca.
enable_plugin monasca-api https://git.openstack.org/openstack/monasca-api

# END DEVSTACK LOCAL.CONF CONTENTS
  • Run "./stack.sh" from the root of the devstack directory.

To run Monasca with the minimum set of OpenStack components, add the following two lines to the local.conf file:
disable_all_services
enable_service rabbit mysql key

If you also want the Tempest tests to be installed, add tempest:
enable_service rabbit mysql key tempest
To enable Horizon and the Monasca UI, add horizon:
enable_service rabbit mysql key horizon tempest
Using Vagrant
Vagrant can be used with the supplied Vagrantfile to deploy a VM running DevStack and Monasca. Once Vagrant is installed, simply run the vagrant up command in the ../monasca-api/devstack directory.
To use a local repository in the DevStack install, commit your changes to the master branch of the local repository, then modify the variable in the configuration file that corresponds to the local repository you want to use to file://my/local/repo/location. To use a local instance of the monasca-api repo, change enable_plugin monasca-api https://git.openstack.org/openstack/monasca-api to enable_plugin monasca-api file://my/repo/is/here. Both settings take effect only when the DevStack VM is rebuilt.
  • Enabling Vertica as the Metrics DB using Vagrant
    Monasca supports both InfluxDB and Vertica for storing metrics and alarm state history. By default, InfluxDB is enabled in the DevStack environment.
    Vertica is a commercial database from Hewlett Packard Enterprise. A free Community Edition (CE) installer can be downloaded. To enable Vertica, do the following:
  • Register and download the Vertica Debian installer from https://my.vertica.com/download/vertica/community-edition/ and put it in your home directory. Unfortunately, there is no URL the DevStack installer can pull it from automatically, so it must be downloaded separately and placed where the installer can find it when it runs; the installer assumes this location is your home directory. When using Vagrant, your home directory is normally mounted inside the VM as "/vagrant_home".
  • Modify the MONASCA_METRICS_DB variable in local.conf to enable Vertica support, as follows:
    MONASCA_METRICS_DB=${MONASCA_METRICS_DB:-vertica}

  • Using PostgreSQL or MySQL
    Monasca supports both PostgreSQL and MySQL, and so does this DevStack plugin: enable either postgresql or mysql.
    To set up the environment with MySQL, use:
    enable_service mysql
    Alternatively, for PostgreSQL, use:
    enable_service postgresql
  • Using ORM support
    ORM support can be controlled with the MONASCA_DATABASE_USE_ORM variable. However, ORM support is forced if PostgreSQL is enabled as the database backend:
    enable_service postgresql
  • Overriding the Apache mirror
    If APACHE_MIRROR is not working for some reason, a mirror can be forced as follows:
    APACHE_MIRROR=http://www-us.apache.org/dist/
  • Using WSGI
    monasca-api can be deployed with Apache using uwsgi or gunicorn. By default, monasca-api runs under uwsgi. If you want to use gunicorn, make sure devstack/local.conf contains:
    MONASCA_API_USE_MOD_WSGI=False
Using the Monasca Dashboard
Once the Monasca dashboard plugin is installed, you can view and manage the corresponding monitoring and alarms through the web console.
In the "Monitoring" section of the operations console, click "Launch Monitoring Dashboard"; this opens a dedicated OpenStack Horizon portal running on the management node.
In this dashboard you can:
  • Click an OpenStack service name to view that service's alarms.
  • Click a server name to view alarms for the corresponding device.

Monitoring data is stored in two databases (Vertica/InfluxDB and MySQL). When monitoring data is backed up, both databases are backed up. Note:
  • Monitoring metrics are stored in Vertica for 7 days.
  • Configuration settings are stored in MySQL.
  • If the services on the monitoring node stop under high load (for example, 15 control networks and 200 compute nodes), the message queue starts to clear in about 6 hours.