Configuring the SDN

Posted: 2018-12-20 01:42:08
Overview
Available SDN Providers
Configuring the Pod Network with Ansible
Configuring the Pod Network on Masters
Configuring the Pod Network on Nodes
Migrating Between SDN Plug-ins
External Access to the Cluster Network
Using Flannel
Overview
The OpenShift SDN enables communication between pods across the OpenShift Container Platform cluster, establishing a pod network. Two SDN plug-ins are currently available (ovs-subnet and ovs-multitenant), which provide different methods for configuring the pod network. A third (ovs-networkpolicy) is currently in Tech Preview.

Available SDN Providers
The upstream Kubernetes project does not come with a default network solution. Instead, Kubernetes has developed a Container Network Interface (CNI) that allows network providers to integrate their own SDN solutions.

There are several OpenShift SDN plug-ins available out of the box from Red Hat, as well as third-party plug-ins.

Red Hat has worked with a number of SDN providers to certify their SDN network solution on OpenShift Container Platform via the Kubernetes CNI interface, including a support process for their SDN plug-in through their product's entitlement process. Should you open a support case with OpenShift, Red Hat can facilitate an exchange process so that both companies are involved in meeting your needs.

The following SDN solutions are validated and supported on OpenShift Container Platform directly by the third-party vendor:

Cisco Contiv (™)

Juniper Contrail (™)

Nokia Nuage (™)

Tigera Calico (™)
Configuring the Pod Network with Ansible
For initial advanced installations, the ovs-subnet plug-in is installed and configured by default, though it can be overridden during installation using the os_sdn_network_plugin_name parameter, which is configurable in the Ansible inventory file.

Example 1. Example SDN Configuration with Ansible
# Configure the multi-tenant SDN plugin (default is 'redhat/openshift-ovs-subnet')
# os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'

# Disable the OpenShift SDN plugin
# openshift_use_openshift_sdn=False

# Configure SDN cluster network CIDR block. This network block should
# be a private block and should not conflict with existing network
# blocks in your infrastructure that pods may require access to.
# Can not be changed after deployment.
#osm_cluster_network_cidr=10.1.0.0/16

# default subdomain to use for exposed routes
#openshift_master_default_subdomain=apps.test.example.com

# Configure SDN cluster network and kubernetes service CIDR blocks. These
# network blocks should be private and should not conflict with network blocks
# in your infrastructure that pods may require access to. Can not be changed
# after deployment.
#osm_cluster_network_cidr=10.1.0.0/16
#openshift_portal_net=172.30.0.0/16

# Configure number of bits to allocate to each host's subnet e.g. 8
# would mean a /24 network on the host.
#osm_host_subnet_length=8

# This variable specifies the service proxy implementation to use:
# either iptables for the pure-iptables version (the default),
# or userspace for the userspace proxy.
#openshift_node_proxy_mode=iptables
For initial quick installations, the ovs-subnet plug-in is installed and configured by default as well, and can be reconfigured post-installation using the networkConfig stanza of the master-config.yaml file.
Configuring the Pod Network on Masters
Cluster administrators can control pod network settings on masters by modifying parameters in the networkConfig section of the master configuration file (located at /etc/origin/master/master-config.yaml by default):
networkConfig:
  clusterNetworkCIDR: 10.128.0.0/14
  hostSubnetLength: 9
  networkPluginName: "redhat/openshift-ovs-subnet"
  serviceNetworkCIDR: 172.30.0.0/16

clusterNetworkCIDR: cluster network for node IP allocation
hostSubnetLength: number of bits for pod IP allocation within a node
networkPluginName: set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in or redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in
serviceNetworkCIDR: service IP allocation for the cluster
The serviceNetworkCIDR and hostSubnetLength values cannot be changed after the cluster is first created, and clusterNetworkCIDR can only be changed to be a larger network that still contains the original network. For example, given the default value of 10.128.0.0/14, you could change clusterNetworkCIDR to 10.128.0.0/9 (i.e., the entire upper half of net 10) but not to 10.64.0.0/16, because that does not overlap the original value.
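As a concrete check of these values, the defaults above imply each node gets a /23 subnet (512 pod addresses) and the /14 cluster network holds 512 such node subnets. A minimal shell sketch of the arithmetic (the variable names are illustrative, not part of the configuration):

```shell
# Subnet arithmetic behind the default master networkConfig values.
cluster_prefix=14      # from clusterNetworkCIDR: 10.128.0.0/14
host_subnet_length=9   # from hostSubnetLength: 9

node_prefix=$(( 32 - host_subnet_length ))                   # each node gets a /23
addrs_per_node=$(( 1 << host_subnet_length ))                # 512 pod addresses per node
max_node_subnets=$(( 1 << (node_prefix - cluster_prefix) ))  # 512 node subnets fit in the /14

echo "/$node_prefix per node, $addrs_per_node addresses each, $max_node_subnets nodes max"
```

Rerunning the same arithmetic with your own inventory values is a quick way to confirm a planned clusterNetworkCIDR and hostSubnetLength leave enough headroom for cluster growth before the first deployment locks them in.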
Configuring the Pod Network on Nodes
Cluster administrators can control pod network settings on nodes by modifying parameters in the networkConfig section of the node configuration file (located at /etc/origin/node/node-config.yaml by default):
networkConfig:
  mtu: 1450
  networkPluginName: "redhat/openshift-ovs-subnet"

mtu: maximum transmission unit (MTU) for the pod overlay network
networkPluginName: set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in or redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in
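The mtu value above is not arbitrary: the OpenShift SDN carries pod traffic between nodes over VXLAN, which adds 50 bytes of encapsulation overhead, so the overlay MTU should be the physical interface MTU minus 50. A quick sketch, assuming a standard 1500-byte Ethernet MTU:

```shell
# Overlay MTU = physical interface MTU minus VXLAN encapsulation overhead.
phys_mtu=1500        # typical Ethernet MTU; adjust for jumbo frames
vxlan_overhead=50    # VXLAN encapsulation overhead used by the OpenShift SDN
overlay_mtu=$(( phys_mtu - vxlan_overhead ))
echo "$overlay_mtu"
```

If your nodes use jumbo frames (for example a 9000-byte physical MTU), raise mtu accordingly; a mismatch between the overlay MTU and the physical MTU typically shows up as hangs on large transfers between pods on different nodes.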
Migrating Between SDN Plug-ins
If you are already using one SDN plug-in and want to switch to another:
Change the networkPluginName parameter on all masters and nodes in their configuration files.

Restart the atomic-openshift-master service on masters and the atomic-openshift-node service on nodes.
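The two steps above can be sketched as follows. The sed expression is demonstrated here on a scratch copy of a configuration file so it can be verified safely; on real hosts you would run the same edit against /etc/origin/master/master-config.yaml (masters) and /etc/origin/node/node-config.yaml (nodes), then restart the services, as noted in the comments:

```shell
# Sketch: switch networkPluginName from ovs-subnet to ovs-multitenant.
# Demonstrated on a temporary copy of the config; paths and service
# restarts for real hosts are shown as comments.
NEW_PLUGIN='redhat/openshift-ovs-multitenant'
cfg=$(mktemp)
printf 'networkConfig:\n  networkPluginName: "redhat/openshift-ovs-subnet"\n' > "$cfg"

# Rewrite the plug-in name in place, preserving the line's indentation:
sed -i "s|networkPluginName:.*|networkPluginName: \"$NEW_PLUGIN\"|" "$cfg"
grep networkPluginName "$cfg"

# On real hosts, run the same sed against:
#   /etc/origin/master/master-config.yaml  then: systemctl restart atomic-openshift-master
#   /etc/origin/node/node-config.yaml      then: systemctl restart atomic-openshift-node
```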
If you are switching from an OpenShift SDN plug-in to a third-party plug-in, then clean up OpenShift SDN-specific artifacts:
$ oc delete clusternetwork --all
$ oc delete hostsubnets --all
$ oc delete netnamespaces --all
When switching from the ovs-subnet to the ovs-multitenant OpenShift SDN plug-in, all the existing projects in the cluster will be fully isolated (assigned unique VNIDs). Cluster administrators can choose to modify the project networks using the administrator CLI.

Check VNIDs by running:

$ oc get netnamespace
External Access to the Cluster Network
If a host that is external to OpenShift Container Platform requires access to the cluster network, you have two options:

Configure the host as an OpenShift Container Platform node but mark it unschedulable so that the master does not schedule containers on it.

Create a tunnel between your host and a host that is on the cluster network.

Both options are presented as part of a practical use case in the documentation for configuring routing from an edge load balancer to containers within OpenShift SDN.

Using Flannel

As an alternative to the default SDN, OpenShift Container Platform also provides Ansible playbooks for installing flannel-based networking. This is useful if running OpenShift Container Platform within a cloud provider platform that also relies on SDN, such as Red Hat OpenStack Platform, and you want to avoid encapsulating packets twice through both platforms.
Flannel uses a single IP network space for all of the containers, allocating a contiguous subset of the space to each instance. Consequently, nothing prevents a container from attempting to contact any IP address in the same network space. This hinders multi-tenancy because the network cannot be used to isolate containers in one application from another.

Depending on whether you prefer multi-tenancy isolation or performance, determine the appropriate choice between OpenShift SDN (multi-tenancy) and flannel (performance) for internal networks.
Flannel is only supported for OpenShift Container Platform on Red Hat OpenStack Platform.
The current version of Neutron enforces port security on ports by default. This prevents the port from sending or receiving packets with a MAC address different from that on the port itself. Flannel creates virtual MACs and IP addresses and must send and receive packets on the port, so port security must be disabled on the ports that carry flannel traffic.
To enable flannel within your OpenShift Container Platform cluster:
Neutron port security controls must be configured to be compatible with Flannel. The default configuration of Red Hat OpenStack Platform disables user control of port_security. Configure Neutron to allow users to control the port_security setting on individual ports.
On the Neutron servers, add the following to the /etc/neutron/plugins/ml2/ml2_conf.ini file:

[ml2]
...
extension_drivers = port_security
Then, restart the Neutron services:
service neutron-dhcp-agent restart
service neutron-ovs-cleanup restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
service neutron-plugin-openvswitch-agent restart
service neutron-vpn-agent restart
service neutron-server restart
When creating the OpenShift Container Platform instances on Red Hat OpenStack Platform, disable both port security and security groups on the ports where the flannel container network interface will be:

neutron port-update $port --no-security-groups --port-security-enabled=False
Flannel gathers information from etcd to configure and assign the subnets to the nodes. Therefore, the security group attached to the etcd hosts should allow access from the nodes to port 2379/tcp, and the nodes' security group should allow egress communication to that port on the etcd hosts.
Set the following variables in your Ansible inventory file before running the installation:

openshift_use_openshift_sdn=false
openshift_use_flannel=true
flannel_interface=eth0
Set openshift_use_openshift_sdn to false to disable the default SDN.
Set openshift_use_flannel to true to enable flannel in its place.

Optionally, you can specify the interface to use for inter-host communication using the flannel_interface variable. Without this variable, the OpenShift Container Platform installation uses the default interface.

Custom networking CIDR for pods and services using flannel will be supported in a future release. BZ#1473858
6 E$ K3 ]) B# h$ _' BAfter the OpenShift Container Platform installation, add a set of iptables rules on every OpenShift Container Platform node:* U) ]; ?8 s5 o+ d+ ^+ r" X
1 a# U: h) m8 r- e+ o
iptables -A DOCKER -p all -j ACCEPT
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
To persist those changes in /etc/sysconfig/iptables, use the following command on every node:

cp /etc/sysconfig/iptables{,.orig}
sh -c "tac /etc/sysconfig/iptables.orig | sed -e '0,/:DOCKER -/ s/:DOCKER -/:DOCKER ACCEPT/' | awk '"\!"p && /POSTROUTING/{print \"-A POSTROUTING -o eth1 -j MASQUERADE\"; p=1} 1' | tac > /etc/sysconfig/iptables"
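The one-liner above is dense: it flips the DOCKER chain policy to ACCEPT and appends the MASQUERADE rule after the last POSTROUTING entry. A sketch of the same pipeline, run against a minimal made-up save file in temporary paths (not the real /etc/sysconfig/iptables) so its effect can be inspected:

```shell
# Demonstrate the persistence pipeline on a scratch iptables save file.
src=$(mktemp); dst=$(mktemp)
cat > "$src" <<'EOF'
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.1.0.0/16 -j MASQUERADE
COMMIT
*filter
:DOCKER - [0:0]
COMMIT
EOF

# Reverse the file so "first match" means "last occurrence", rewrite the
# DOCKER chain policy to ACCEPT, insert the MASQUERADE rule before the
# first POSTROUTING line seen (i.e. after the last one), reverse back:
tac "$src" \
  | sed -e '0,/:DOCKER -/ s/:DOCKER -/:DOCKER ACCEPT/' \
  | awk '!p && /POSTROUTING/{print "-A POSTROUTING -o eth1 -j MASQUERADE"; p=1} 1' \
  | tac > "$dst"

cat "$dst"
```

In the result, the DOCKER chain policy reads `:DOCKER ACCEPT [0:0]` and the `-A POSTROUTING -o eth1 -j MASQUERADE` rule appears after the existing POSTROUTING rule, which is exactly what the two `iptables` commands above add to the live rule set.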
The iptables-save command saves all the current in-memory iptables rules. However, because Docker, Kubernetes, and OpenShift Container Platform create a high number of iptables rules (services, and so on) that are not designed to be persisted, saving these rules can become problematic.
To isolate container traffic from the rest of the OpenShift Container Platform traffic, Red Hat recommends creating an isolated tenant network and attaching all the nodes to it. If you are using a different network interface (eth1), remember to configure the interface to start at boot time through the /etc/sysconfig/network-scripts/ifcfg-eth1 file:

DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=dhcp
ONBOOT=yes
DEFROUTE=no
PEERDNS=no