Controller nodes
Each controller node runs the Open vSwitch (OVS) service (including dependent services such as ovsdb-server) and ovn-northd. Only a single instance of the ovsdb-server and ovn-northd services can operate in a deployment. However, deployment tools can implement active/passive high-availability using a management tool that monitors service health and automatically starts these services on another node after failure of the primary node. See the Frequently Asked Questions for more information.

Install the ovn-central and openvswitch packages (RHEL/Fedora).

Install the ovn-central and openvswitch-common packages (Ubuntu/Debian).

Start the OVS service. The central OVS service starts the ovsdb-server service that manages OVN databases.

Using the systemd unit:

systemctl start openvswitch (RHEL/Fedora)
systemctl start openvswitch-switch (Ubuntu/Debian)
Configure the ovsdb-server component. By default, the ovsdb-server service only permits local access to databases via Unix socket. However, OVN services on compute nodes require access to these databases.

Permit remote database access.

ovn-nbctl set-connection ptcp:6641:0.0.0.0 -- \
    set connection . inactivity_probe=60000
ovn-sbctl set-connection ptcp:6642:0.0.0.0 -- \
    set connection . inactivity_probe=60000

If using the VTEP functionality:

ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:0.0.0.0

Replace 0.0.0.0 with the IP address of the management network interface on the controller node to avoid listening on all interfaces.
Note

Permit remote access to the following TCP ports: 6640 (OVS) to VTEP gateways, if you use them; 6641 (NBDB) to hosts running neutron-server; 6642 (SBDB) to hosts running neutron-server, to gateway nodes that run ovn-controller, and to compute node services such as ovn-controller and ovn-metadata-agent.
Start the ovn-northd service.

Using the systemd unit:

systemctl start ovn-northd
Configure the Networking server component. The Networking service implements OVN as an ML2 driver. Edit the /etc/neutron/neutron.conf file:

Enable the ML2 core plug-in.

[DEFAULT]
...
core_plugin = ml2

Enable the OVN layer-3 service.

[DEFAULT]
...
service_plugins = ovn-router
Configure the ML2 plug-in. Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:

Configure the OVN mechanism driver, network type drivers, self-service (tenant) network types, and enable the port security extension.

[ml2]
...
mechanism_drivers = ovn
type_drivers = local,flat,vlan,geneve
tenant_network_types = geneve
extension_drivers = port_security
overlay_ip_version = 4
Note

To enable VLAN self-service networks, make sure that OVN version 2.11 (or higher) is used, then add vlan to the tenant_network_types option. The first network type in the list becomes the default self-service network type.

To use IPv6 for all overlay (tunnel) network endpoints, set the overlay_ip_version option to 6.
Configure the Geneve ID range and maximum header size. The IP version overhead (20 bytes for IPv4 (default) or 40 bytes for IPv6) is added to the maximum header size based on the ML2 overlay_ip_version option.

[ml2_type_geneve]
...
vni_ranges = 1:65536
max_header_size = 38
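The overhead arithmetic above can be sketched in shell (the 1500-byte physical MTU is an assumed example value, not from this document's configuration):

```shell
# Sketch of the tunnel-overhead math, assuming a 1500-byte physical MTU.
# The instance-visible MTU is the physical MTU minus max_header_size and
# the IP version overhead chosen by overlay_ip_version.
phys_mtu=1500
max_header_size=38   # the value configured above
ip_overhead=20       # 20 bytes for IPv4 (the default); 40 for IPv6
echo $(( phys_mtu - max_header_size - ip_overhead ))   # prints 1442
```

This matches the mtu 1442 visible inside the CirrOS instance later in this walkthrough.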
Note

The Networking service uses the vni_ranges option to allocate network segments. However, OVN ignores the actual values. Thus, the ID range only determines the quantity of Geneve networks in the environment. For example, a range of 5001:6000 defines a maximum of 1000 Geneve networks. On the other hand, these values are still relevant in the Neutron context, so 1:1000 and 5001:6000 are not simply interchangeable.
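The quantity arithmetic can be illustrated with a small sketch (count_vni is a hypothetical helper, not part of Neutron or OVN tooling):

```shell
# Hypothetical helper: the number of Geneve networks a vni_ranges value
# permits is simply the size of its MIN:MAX interval.
count_vni() {
  local min=${1%%:*} max=${1##*:}
  echo $(( max - min + 1 ))
}

count_vni 5001:6000   # prints 1000
count_vni 1:65536     # prints 65536
```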
Warning

The default for max_header_size, 30, is too low for OVN. OVN requires at least 38.
Optionally, enable support for VXLAN type networks. Because of the limited space in the VXLAN VNI to carry the information OVN needs to identify a packet, the field that contains the segmentation ID is reduced to 12 bits, which allows a maximum of 4096 networks. The same limitation applies to the number of ports in each network, which are also identified by a 12-bit chunk of the header, limiting each network to 4096 ports. Please check [1] for more information.

[ml2]
...
type_drivers = geneve,vxlan

[ml2_type_vxlan]
vni_ranges = 1001:1100
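The 4096 limit quoted above follows directly from the 12-bit field width:

```shell
# 12 bits are available for the network segment ID and, separately, for the
# port ID, so each is capped at 2^12 values.
echo $(( 1 << 12 ))   # prints 4096
```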
Optionally, enable support for VLAN provider and self-service networks on one or more physical networks. If you specify only the physical network, only administrative (privileged) users can manage VLAN networks. Additionally specifying a VLAN ID range for a physical network enables regular (non-privileged) users to manage VLAN networks. The Networking service allocates the VLAN ID for each self-service network using the VLAN ID range for the physical network.

[ml2_type_vlan]
...
network_vlan_ranges = PHYSICAL_NETWORK:MIN_VLAN_ID:MAX_VLAN_ID

Replace PHYSICAL_NETWORK with the physical network name and optionally define the minimum and maximum VLAN IDs. Use a comma to separate each physical network.

For example, to enable support for administrative VLAN networks on the physnet1 network and self-service VLAN networks on the physnet2 network using VLAN IDs 1001 to 2000:

network_vlan_ranges = physnet1,physnet2:1001:2000
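As a sketch of how each comma-separated entry is interpreted (vlan_kind is a hypothetical helper written for illustration, not a Neutron tool):

```shell
# Hypothetical helper mirroring how Neutron reads a network_vlan_ranges
# entry: a bare name allows only admin-managed VLANs; NAME:MIN:MAX also
# enables self-service allocation from that VLAN ID range.
vlan_kind() {
  case $1 in
    *:*) echo "self-service VLANs ${1#*:} on ${1%%:*}" ;;
    *)   echo "provider VLANs only on $1" ;;
  esac
}

vlan_kind physnet1             # prints: provider VLANs only on physnet1
vlan_kind physnet2:1001:2000   # prints: self-service VLANs 1001:2000 on physnet2
```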
Enable security groups.

[securitygroup]
...
enable_security_group = true

Note

The firewall_driver option under [securitygroup] is ignored since the OVN ML2 driver itself handles security groups.
Configure OVS database access and the L3 scheduler.

[ovn]
...
ovn_nb_connection = tcp:IP_ADDRESS:6641
ovn_sb_connection = tcp:IP_ADDRESS:6642
ovn_l3_scheduler = OVN_L3_SCHEDULER

Note

Replace IP_ADDRESS with the IP address of the controller node that runs the ovsdb-server service. Replace OVN_L3_SCHEDULER with leastloaded if you want the scheduler to select the compute node with the least number of gateway ports, or chance if you want the scheduler to randomly select a compute node from the available list.

Set ovn-cms-options with enable-chassis-as-gw in the Open_vSwitch table's external_ids column. If this chassis has proper bridge mappings, it will then be selected for scheduling gateway routers.

ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw
Start, or restart, the neutron-server service.

Using the systemd unit:

systemctl start neutron-server
Network nodes
Deployments using OVN native layer-3 and DHCP services do not require conventional network nodes because connectivity to external networks (including VTEP gateways) and routing occurs on compute nodes.
Compute nodes
Each compute node runs the OVS and ovn-controller services. The ovn-controller service replaces the conventional OVS layer-2 agent.

Install the ovn-host, openvswitch and neutron-ovn-metadata-agent packages (RHEL/Fedora).

Install the ovn-host, openvswitch-switch and neutron-ovn-metadata-agent packages (Ubuntu/Debian).

Start the OVS service.

Using the systemd unit:

systemctl start openvswitch (RHEL/Fedora)
systemctl start openvswitch-switch (Ubuntu/Debian)
Configure the OVS service.

Use the OVS databases on the controller node.

ovs-vsctl set open . external-ids:ovn-remote=tcp:IP_ADDRESS:6642

Replace IP_ADDRESS with the IP address of the controller node that runs the ovsdb-server service.

Enable one or more overlay network protocols. At a minimum, OVN requires enabling the geneve protocol. Deployments using VTEP gateways should also enable the vxlan protocol.

ovs-vsctl set open . external-ids:ovn-encap-type=geneve,vxlan

Note

Deployments without VTEP gateways can safely enable both protocols.

Configure the overlay network local endpoint IP address.

ovs-vsctl set open . external-ids:ovn-encap-ip=IP_ADDRESS

Replace IP_ADDRESS with the IP address of the overlay network interface on the compute node.
Start the ovn-controller and neutron-ovn-metadata-agent services.

Using the systemd unit:

systemctl start ovn-controller neutron-ovn-metadata-agent

Verify operation

Each compute node should contain an ovn-controller instance.

ovn-sbctl show
<output>
Deployment steps

Download the quickstart.sh script with curl:

curl -O https://raw.githubusercontent.co ... aster/quickstart.sh

Install the necessary dependencies by running:

bash quickstart.sh --install-deps
Clone the tripleo-quickstart and neutron repositories:
git clone https://opendev.org/openstack/tripleo-quickstart
git clone https://opendev.org/openstack/neutron

Once you're done, run quickstart as follows (3 controller HA + 1 compute):

Exporting the tags is a workaround until the bug https://bugs.launchpad.net/tripleo/+bug/1737602 is resolved.

export ansible_tags="untagged,provision,environment,libvirt,\
undercloud-scripts,undercloud-inventory,overcloud-scripts,\
undercloud-setup,undercloud-install,undercloud-post-install,\
overcloud-prep-config"
bash ./quickstart.sh --tags $ansible_tags --teardown all \
    --release master-tripleo-ci \
    --nodes tripleo-quickstart/config/nodes/3ctlr_1comp.yml \
    --config neutron/tools/tripleo/ovn.yml \
    $VIRTHOST
Note

When deploying directly on localhost, use the loopback address 127.0.0.2 as your $VIRTHOST. The loopback address 127.0.0.1 is reserved by Ansible. Also make sure that 127.0.0.2 is accessible via public keys:

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Note

You can adjust RAM/VCPUs if you want by editing config/nodes/3ctlr_1comp.yml before running the above command. If you have enough memory, stick to the defaults; we recommend 8GB of RAM for the controller nodes.

When quickstart has finished, you will have 5 VMs ready to be used: 1 for the undercloud (the TripleO node from which you deploy your OpenStack), 3 VMs for controller nodes, and 1 VM for the compute node.
Log in to the undercloud:

ssh -F ~/.quickstart/ssh.config.ansible undercloud

Prepare overcloud container images:

./overcloud-prep-containers.sh

Run inside the undercloud:

./overcloud-deploy.sh

Grab a coffee; this may take around 1 hour depending on your hardware.

If anything goes wrong, go to IRC on OFTC and ask in #oooq.
Description of the environment

Once deployed, two files are present inside the undercloud root directory: stackrc and overcloudrc. These let you connect to the APIs of the undercloud (which manages the OpenStack nodes) and of the overcloud (where your instances live).

We can list the existing controller and compute nodes this way:

source stackrc
openstack server list -c Name -c Networks -c Flavor
+-------------------------+------------------------+--------------+
| Name                    | Networks               | Flavor       |
+-------------------------+------------------------+--------------+
| overcloud-controller-1  | ctlplane=192.168.24.16 | oooq_control |
| overcloud-controller-0  | ctlplane=192.168.24.14 | oooq_control |
| overcloud-controller-2  | ctlplane=192.168.24.12 | oooq_control |
| overcloud-novacompute-0 | ctlplane=192.168.24.13 | oooq_compute |
+-------------------------+------------------------+--------------+
Network architecture of the environment

TripleO Quickstart single NIC with vlans

Connecting to one of the nodes via ssh

We can connect to any of the IP addresses shown in the openstack server list output above.

ssh heat-admin@192.168.24.16
Last login: Wed Feb 21 14:11:40 2018 from 192.168.24.1

ps fax | grep ovn-controller
20422 ?  S<s  30:40 ovn-controller unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --no-chdir --log-file=/var/log/openvswitch/ovn-controller.log --pidfile=/var/run/openvswitch/ovn-controller.pid --detach
8 h* z" Z. O* [: i* B4 W9 V: j# e
. L/ A/ Q7 r+ T2 ]sudo ovs-vsctl show
" x, I& R, m$ Z7 _: d* {# d1 ?% Xbb413f44-b74f-4678-8d68-a2c6de725c73
: f0 B8 w5 F; a6 O, s; y8 S/ Z' J& }Bridge br-ex6 k" t* m7 g% G$ k& k
fail_mode: standalone
+ ^. k) y- P o' M* h8 `8 n, H ...2 v5 }# {8 x- b; ^4 z- u o2 _' u% H
Port "patch-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6-to-br-int"
( I$ i5 v3 y% v* g4 e( N. v7 H Interface "patch-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6-to-br-int"% ^- @8 Q4 M' I/ G* n6 o, t
type: patch! V! G0 p& [; Q6 h* N* ?; j8 g
options: {peer="patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"} R5 V2 ^( ^: }
Port "eth0"
+ h% T$ E0 w' z' n Interface "eth0"
; x# W) D* W6 q* S& x- d. L ...2 a; Q) G! n# s- U$ p
Bridge br-int
- |" h( O$ m( e) G3 w8 q0 P/ w$ J( Q: z fail_mode: secure
5 \9 U9 n* D5 h3 {) J Port "ovn-c8b85a-0"
# G7 q: M! ~. Y: Q- d' I Interface "ovn-c8b85a-0"
6 w( p( K* S* E$ }- t* h type: geneve* G. _7 v, b5 g8 B
options: {csum="true", key=flow, remote_ip="172.16.0.17"}
1 z- V; @' x" J+ Q8 S, k- D$ u6 r Port "ovn-b5643d-0"
3 t, i$ n" Z7 Y Q4 p- V' s5 C Interface "ovn-b5643d-0": a- W0 {/ m2 g' E3 g
type: geneve8 K7 S; r. i% t6 z
options: {csum="true", key=flow, remote_ip="172.16.0.14"}* c. J# F1 M: @1 i! i
Port "ovn-14d60a-0"! W3 S+ P, Z( g, T
Interface "ovn-14d60a-0"
: P' n5 N& x( O( s7 U' Y, v type: geneve
) i& o% n5 t! x0 F6 b6 |* y options: {csum="true", key=flow, remote_ip="172.16.0.12"}$ C& f0 U9 r) k/ E6 n8 e
Port "patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"3 S6 B5 a( n8 T5 |' v. Y$ j5 ^
Interface "patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"6 q# H5 ]1 K5 R6 ?8 U
type: patch- d7 y! B! C' O7 d$ m' w
options: {peer="patch-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6-to-br-int"}
. Z' e d) W# r: O- @ Port br-int
) T6 }8 R# X, w3 Q0 g' h1 z Interface br-int0 F7 x9 p( r* y8 ^
type: internal2 G9 A) D' K& p5 e; ~
Initial resource creation

Well, now you have a virtual cloud with 3 controllers in HA and one compute node, but no instances or routers running. We can give it a try and create a few resources:

Initial resources we can create

You can use the following script to create the resources.

ssh -F ~/.quickstart/ssh.config.ansible undercloud

source ~/overcloudrc
curl http://download.cirros-cloud.net ... 5.1-x86_64-disk.img \
    > cirros-0.5.1-x86_64-disk.img

openstack image create "cirros" --file cirros-0.5.1-x86_64-disk.img \
    --disk-format qcow2 --container-format bare --public

openstack network create public --provider-physical-network datacentre \
    --provider-network-type vlan \
    --provider-segment 10 \
    --external --share

openstack subnet create --network public public --subnet-range 10.0.0.0/24 \
    --allocation-pool start=10.0.0.20,end=10.0.0.250 \
    --dns-nameserver 8.8.8.8 --gateway 10.0.0.1 \
    --no-dhcp

openstack network create private
openstack subnet create --network private private \
    --subnet-range 192.168.99.0/24

openstack router create router1
openstack router set --external-gateway public router1
openstack router add subnet router1 private

openstack security group create test
openstack security group rule create --ingress --protocol tcp \
    --dst-port 22 test
openstack security group rule create --ingress --protocol icmp test
openstack security group rule create --egress test

openstack flavor create m1.tiny --disk 1 --vcpus 1 --ram 64

PRIV_NET=$(openstack network show private -c id -f value)

openstack server create --flavor m1.tiny --image cirros \
    --nic net-id=$PRIV_NET --security-group test \
    --wait cirros

openstack floating ip create --floating-ip-address 10.0.0.130 public
openstack server add floating ip cirros 10.0.0.130
Note

You can now log in to the instance if you want. In a CirrOS >0.4.0 image, the login account is cirros and the password is gocubsgo.

ssh cirros@10.0.0.130
cirros@10.0.0.130's password:

ip a | grep eth0 -A 10
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:85:b4:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.5/24 brd 192.168.99.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe85:b466/64 scope link
       valid_lft forever preferred_lft forever
ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: seq=0 ttl=63 time=2.145 ms
64 bytes from 10.0.0.1: seq=1 ttl=63 time=1.025 ms
64 bytes from 10.0.0.1: seq=2 ttl=63 time=0.836 ms
^C
--- 10.0.0.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.836/1.335/2.145 ms

ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=52 time=3.943 ms
64 bytes from 8.8.8.8: seq=1 ttl=52 time=4.519 ms
64 bytes from 8.8.8.8: seq=2 ttl=52 time=3.778 ms

curl http://169.254.169.254/2009-04-04/meta-data/instance-id
i-00000002