易陆发现 Internet Technology Forum

How Nova live-migrates a VM under the hood (live-migration)

Posted on 2019-4-13 19:12:19
OP | Posted on 2021-9-10 16:46:39
Nova-API Exploration Flow
Let’s first examine the nova-client instruction:
    nova live-migration [--block-migrate] <server> [<host>]
I submitted a request for a live-migration without --block-migrate and without a host. The XenServer host does not have shared storage available. The request enters the osapi_compute API service, following the path for live-migration. This service name is defined within nova.conf as enabled_apis and then picked up by nova.cmd.api.main, as shown in the following sample.
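As a sketch of what that looks like in configuration (illustrative values, not taken from the deployment described here), the relevant nova.conf entry is:

```ini
[DEFAULT]
# Each name listed here becomes a WSGIService started by nova.cmd.api.main;
# osapi_compute is the one that serves the compute REST API.
enabled_apis = osapi_compute,metadata
```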
EXPLORATION 2
(figure: Nova-API Exploration Flow)
The osapi_compute service is defined in nova.openstack.compute.wsgi.py with a simple init_application method. This tells us that nova.openstack.compute is the home of the API methods. The operation then looks up the live_migration call in the built-in list of routes within the nova.openstack.compute.wsgi service. These routes tell us where to look for the API method that handles this action.
nova.cmd.api.main ->

    def main():
        ...
        for api in CONF.enabled_apis:
            should_use_ssl = api in CONF.enabled_ssl_apis
            try:
                server = service.WSGIService(api, use_ssl=should_use_ssl)
                launcher.launch_service(server, workers=server.workers or 1)
        ...
EXPLORATION 3
(figure: Nova-API Exploration Flow)
The route list is defined to tell our request where to go. A migration request is a server action, so it hits the action route, which sends us into server_controller, which resolves to migrate_server in nova.api.openstack.compute. The live-migration request carries the action key os-migrateLive to identify the correct method to use, and then continues.
nova.api.openstack.compute.routes ->

    def routes():
        ...
        from nova.api.openstack.compute import migrate_server
        ...
        server_controller = functools.partial(_create_controller,
            ...,
            migrate_server.MigrateServerController,
            ...)
        ...
        ROUTE_LIST = (
            ...
            ('/servers/{id}/action', {'POST': [server_controller, 'action']}),
            ...
        )
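The routing idea can be sketched as a plain table lookup: the URL pattern selects the controller, and the top-level key of the POST body selects the decorated method. The names below are illustrative stand-ins, not Nova’s actual internals:

```python
# Map a wsgi.action-style key to a handler, the way the server
# 'action' endpoint dispatches on the body's top-level key.
ACTION_HANDLERS = {}

def action(name):
    """Register a handler under an action name (toy @wsgi.action)."""
    def register(fn):
        ACTION_HANDLERS[name] = fn
        return fn
    return register

@action('os-migrateLive')
def migrate_live(body):
    # Pull out the parameters that matter for a live migration.
    return ('live-migrate', body['os-migrateLive'].get('host'))

def dispatch(body):
    # The first (and only) key of the body names the action to run.
    key = next(iter(body))
    return ACTION_HANDLERS[key](body)

print(dispatch({'os-migrateLive': {'host': None,
                                   'block_migration': 'auto'}}))
# → ('live-migrate', None)
```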

EXPLORATION 4
(figure: Nova-API Exploration Flow)
Here the API sets a couple of important variables. It first checks whether the --block-migrate flag was included. By default, block_migration contains the 'auto' key unless it is overridden by the --block-migrate parameter. If 'auto' is still present, block_migration is set to None; otherwise the value is parsed strictly into a Boolean. We then fetch the instance information, because it is vital for migrating an instance, and pass the instance, context, and block_migration over to the Compute API, which resolves to nova.compute.API.live_migrate. Finally, you can see that we actually reach this code via the @wsgi.action('os-migrateLive') decorator.
nova.api.openstack.compute.migrate_server.MigrateServerController ->

    class MigrateServerController():
        ...
        from nova import compute
        ...
        self.compute_api = compute.API()

        @wsgi.action('os-migrateLive')
        def _migrate_live(self, req, id, body):
            ...
            block_migration = body["os-migrateLive"]["block_migration"]
            ...
            if api_version_request.is_supported(req, min_version='2.25'):
                if block_migration == 'auto':
                    block_migration = None
                else:
                    block_migration = strutils.bool_from_string(block_migration,
                                                                strict=True)
            ...
            instance = common.get_instance(self.compute_api, context, id)
            try:
                self.compute_api.live_migrate(context, instance, block_migration,
                                              disk_over_commit, host, force, async)
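The 'auto' handling can be mimicked in plain Python. Note that strutils.bool_from_string comes from oslo.utils; the helper below is a stand-in for it, so this is a sketch of the logic rather than the Nova code itself:

```python
def normalize_block_migration(value):
    """Mimic the microversion >= 2.25 handling of block_migration.

    'auto' (what the client sends when --block-migrate is omitted)
    becomes None, which tells later code to decide for itself;
    anything else is parsed strictly as a boolean.
    """
    if value == 'auto':
        return None
    # Stand-in for oslo.utils strutils.bool_from_string(strict=True).
    v = str(value).strip().lower()
    if v in ('true', '1', 'yes', 'on'):
        return True
    if v in ('false', '0', 'no', 'off'):
        return False
    raise ValueError("Unrecognized boolean value: %r" % value)

print(normalize_block_migration('auto'))   # None: let Nova decide
print(normalize_block_migration('true'))   # True: force block migration
```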
EXPLORATION 5
(figure: Nova-API Exploration Flow)
We have made the rounds through the basic Nova API service and kicked off the asynchronous request to start the live migration. I issued this request, which sent an os-migrateLive action by using the nova client. We traced this through the confusing WSGI process, which eventually set the block_migration variable and the instance variable, passing them both to the self.compute_api.live_migrate method. We know (by looking at the imports at the beginning of the previous pseudocode sample) that we can resolve this to be nova.compute.api.live_migrate.
These are the first steps for firing off this live migration. I set the initial task_state of the server to MIGRATING and pulled the RequestSpec. This RequestSpec comes from the nova.objects.RequestSpec.get_by_instance_uuid method and is passed to the scheduler a little later on. The RequestSpec contains details about the instance that the scheduler needs in order to determine whether enough room is present to complete the process. This includes NUMA node, vGPU/CPU, memory, and disk information. If a host had been specified, additional code in the method would come into play, but this is not important for our purposes - we’re allowing the scheduler to go wild. Next, I passed the RequestSpec, a None host_name, and the block_migration/disk_over_commit/async values (generated in Nova API) to the compute Task_API. Again, track this down by examining the imports, and we are now heading into the Conductor API (nova.conductor.ComputeTaskAPI.live_migrate_instance). Keep in mind: while we are indeed using the codebase for Conductor, this code is still being run on the Nova API nodes.
nova.compute.api.live_migrate ->

    def live_migrate():
        from nova import conductor
        self.compute_task_api = conductor.ComputeTaskAPI()
        ...
        instance.task_state = task_states.MIGRATING
        ...
        request_spec = objects.RequestSpec.get_by_instance_uuid(context, instance.uuid)
        ...
        try:
            self.compute_task_api.live_migrate_instance(context, instance,
                host_name, block_migration=block_migration,
                disk_over_commit=disk_over_commit,
                request_spec=request_spec, async=async)
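To make the RequestSpec’s role concrete, here is a toy sketch of the kind of resource data it carries and the check the scheduler performs with it. The field names are simplified stand-ins, not the real nova.objects.RequestSpec attributes:

```python
from dataclasses import dataclass

@dataclass
class RequestSpecSketch:
    """Simplified stand-in for the instance details the scheduler sees."""
    vcpus: int
    memory_mb: int
    disk_gb: int

@dataclass
class HostState:
    """Free capacity reported by a candidate destination host."""
    free_vcpus: int
    free_memory_mb: int
    free_disk_gb: int

def host_fits(spec, host):
    # The scheduler's core question: is there enough room on this host?
    return (host.free_vcpus >= spec.vcpus
            and host.free_memory_mb >= spec.memory_mb
            and host.free_disk_gb >= spec.disk_gb)

spec = RequestSpecSketch(vcpus=4, memory_mb=8192, disk_gb=40)
print(host_fits(spec, HostState(8, 16384, 100)))  # True: enough room
print(host_fits(spec, HostState(2, 16384, 100)))  # False: not enough vCPUs
```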
EXPLORATION 6

(figure: Nova-API Exploration Flow)
This fairly simple chunk of code sets up a new dictionary with the scheduled host name, but, because I didn’t specify one, it’s unimportant for our needs. We see from this code that the process calls one of two locations, depending on whether the async setting is included. The code also shows that the async path is the one that maps directly onto a live migration: when async is set, it calls self.conductor_compute_rpcapi.live_migrate_instance; otherwise it falls back to migrate_server. I kept the imports for this operation to show where it leads.
nova.conductor.api.ComputeTaskAPI.live_migrate_instance ->

    ...
    from nova.conductor import rpcapi
    self.conductor_compute_rpcapi = rpcapi.ComputeTaskAPI()
    ...
    def live_migrate_instance(self, context, instance, host_name, block_migration,
                              disk_over_commit, request_spec=None, async=False):
        scheduler_hint = {'host': host_name}
        if async:
            self.conductor_compute_rpcapi.live_migrate_instance(
                context, instance, scheduler_hint, block_migration,
                disk_over_commit, request_spec)
        else:
            self.conductor_compute_rpcapi.migrate_server(
                context, instance, scheduler_hint, True, False, None,
                block_migration, disk_over_commit, None,
                request_spec=request_spec)

EXPLORATION 7
(figure: Nova-API Exploration Flow)
At this point, we run into the problem mentioned previously regarding the RPC service, which enters a land of extreme abstraction. While my knowledge of how these RPC services work is limited, essentially it defines a topic and sends a message to that topic through a chosen messaging service. Thus far, we have remained in the Nova API services, but now we are passing a message for Conductor nodes themselves to pick up and begin to work on within their own managers. The RPC call for this looks like the following, with the kw variable being the payload passed through the messaging layer.
nova.conductor.rpcapi.ComputeTaskAPI.live_migrate_instance ->

    RPC_TOPIC = 'conductor'
    ...
    def live_migrate_instance():
        ...
        kw = {'instance': instance, 'scheduler_hint': scheduler_hint,
              'block_migration': block_migration,
              'disk_over_commit': disk_over_commit,
              'request_spec': request_spec,
              }
        version = '1.15'
        cctxt = self.client.prepare(version=version)
        cctxt.cast(context, 'live_migrate_instance', **kw)
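The cast itself is fire-and-forget: the payload dict lands on the 'conductor' topic for whichever conductor node picks it up, and no reply is awaited. A toy in-memory version of that pattern (this is not oslo.messaging, just an illustration of topic-based cast-and-dispatch) looks like:

```python
import queue

class ToyRpcClient:
    """Fire-and-forget cast onto a named topic queue."""
    def __init__(self):
        self.topics = {}

    def cast(self, topic, method, **kw):
        # No reply is awaited; the message just lands on the topic.
        self.topics.setdefault(topic, queue.Queue()).put((method, kw))

class ToyConductorManager:
    """Stand-in for the manager that consumes the topic."""
    def live_migrate_instance(self, instance, scheduler_hint, **kw):
        return "migrating %s toward %s" % (instance, scheduler_hint['host'])

client = ToyRpcClient()
client.cast('conductor', 'live_migrate_instance',
            instance='vm-1', scheduler_hint={'host': None})

# A conductor worker drains its topic and dispatches by method name.
manager = ToyConductorManager()
method, kw = client.topics['conductor'].get()
result = getattr(manager, method)(**kw)
print(result)  # → migrating vm-1 toward None
```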
The main takeaway at this point in the process is that we will now officially be running on Conductor nodes, and from here the Conductor Manager runs the code that is called (live_migrate_instance). This code lives in nova.conductor.manager, with the ComputeTaskManager carrying an @profiler decorator for direction. From here, Conductor conducts and coordinates with various other services to complete the migration process. This is an example of Conductor handling a larger amount of work than just communicating between the database and compute nodes. Since so much work is necessary during a live migration, it is nice to have a centralized location (or “command center”) where information is passed back and forth at the beginning of this process.

OP | Posted on 2023-11-28 14:22:14
2023-11-28 14:29:18.480 7214 INFO nova.compute.resource_tracker [req-0c72b186-8bf1-4121-867a-992b278b9792 5cb9ad243e8a47cba223f287f1c449b8 ed7bb7f0687a4e39bdcaf0893b268727 - - -] Final resource view: name=computen05 phys_ram=515395MB used_ram=94208MB phys_disk=98GB used_disk=300GB total_vcpus=68 used_vcpus=116 pci_stats=<nova.pci.stats.PciDeviceStats object at 0x553db90>
2023-11-28 14:29:18.499 7214 INFO nova.scheduler.client.report [req-0c72b186-8bf1-4121-867a-992b278b9792 5cb9ad243e8a47cba223f287f1c449b8 ed7bb7f0687a4e39bdcaf0893b268727 - - -] Compute_service record updated for ('computen05', 'computen05')
2023-11-28 14:29:18.500 7214 INFO nova.compute.resource_tracker [req-0c72b186-8bf1-4121-867a-992b278b9792 5cb9ad243e8a47cba223f287f1c449b8 ed7bb7f0687a4e39bdcaf0893b268727 - - -] Compute_service record updated for computen05:computen05
2023-11-28 14:29:39.921 7214 INFO nova.compute.manager [req-2246c1bb-04b0-4986-b784-2c03ffcd7fbb - - - - -] [instance: 8e1d8050-c042-4182-8213-fe5ed176b022] VM Started (Lifecycle Event)
2023-11-28 14:29:40.003 7214 INFO nova.compute.manager [req-2246c1bb-04b0-4986-b784-2c03ffcd7fbb - - - - -] [instance: 8e1d8050-c042-4182-8213-fe5ed176b022] During the sync_power process the instance has moved from host computen01 to host computen05
2023-11-28 14:30:07.393 7214 INFO nova.compute.manager [req-2246c1bb-04b0-4986-b784-2c03ffcd7fbb - - - - -] [instance: 8e1d8050-c042-4182-8213-fe5ed176b022] VM Resumed (Lifecycle Event)
2023-11-28 14:30:07.470 7214 INFO nova.compute.manager [req-2246c1bb-04b0-4986-b784-2c03ffcd7fbb - - - - -] [instance: 8e1d8050-c042-4182-8213-fe5ed176b022] During the sync_power process the instance has moved from host computen01 to host computen05
2023-11-28 14:30:07.470 7214 INFO nova.compute.manager [req-2246c1bb-04b0-4986-b784-2c03ffcd7fbb - - - - -] [instance: 8e1d8050-c042-4182-8213-fe5ed176b022] VM Resumed (Lifecycle Event)
2023-11-28 14:30:07.552 7214 INFO nova.compute.manager [req-2246c1bb-04b0-4986-b784-2c03ffcd7fbb - - - - -] [instance: 8e1d8050-c042-4182-8213-fe5ed176b022] During the sync_power process the instance has moved from host computen01 to host computen05
2023-11-28 14:30:08.641 7214 INFO nova.compute.manager [req-b60105aa-e8b4-4e4f-81e8-a8596d1c29af 1e1454d784f945a69d29bef4c246a28d ddaa149332864b669fb166a375c58cac - - -] [instance: 8e1d8050-c042-4182-8213-fe5ed176b022] Post operation of migration started
2023-11-28 14:30:09.063 7214 INFO nova.virt.libvirt.config [req-b60105aa-e8b4-4e4f-81e8-a8596d1c29af 1e1454d784f945a69d29bef4c246a28d ddaa149332864b669fb166a375c58cac - - -] cpu_type: x86.
2023-11-28 14:30:18.766 7214 INFO nova.compute.resource_tracker [req-0c72b186-8bf1-4121-867a-992b278b9792 5cb9ad243e8a47cba223f287f1c449b8 ed7bb7f0687a4e39bdcaf0893b268727 - - -] Auditing locally available compute resources for node computen05
2023-11-28 14:30:19.478 7214 WARNING nova.compute.resource_tracker [req-0c72b186-8bf1-4121-867a-992b278b9792 5cb9ad243e8a47cba223f287f1c449b8 ed7bb7f0687a4e39bdcaf0893b268727 - - -] [instance: f6ca6233-fc8e-4e07-8c61-6a7962e7a3af] Instance not resizing, skipping migration.
2023-11-28 14:30:19.516 7214 WARNING nova.compute.resource_tracker [req-0c72b186-8bf1-4121-867a-992b278b9792 5cb9ad243e8a47cba223f287f1c449b8 ed7bb7f0687a4e39bdcaf0893b268727 - - -] [instance: 09f3551f-8c46-4a22-a534-de169ff8ff36] Instance not resizing, skipping migration.
2023-11-28 14:30:19.516 7214 INFO nova.compute.resource_tracker [req-0c72b186-8bf1-4121-867a-992b278b9792 5cb9ad243e8a47cba223f287f1c449b8 ed7bb7f0687a4e39bdcaf0893b268727 - - -] Total usable vcpus: 68, total allocated vcpus: 120
2023-11-28 14:30:19.517 7214 INFO nova.compute.resource_tracker [req-0c72b186-8bf1-4121-867a-992b278b9792 5cb9ad243e8a47cba223f287f1c449b8 ed7bb7f0687a4e39bdcaf0893b268727 - - -] Final resource view: name=computen05 phys_ram=515395MB used_ram=102400MB phys_disk=98GB used_disk=350GB total_vcpus=68 used_vcpus=120 pci_stats=<nova.pci.stats.PciDeviceStats object at 0x553db90>
2023-11-28 14:30:19.533 7214 INFO nova.scheduler.client.report [req-0c72b186-8bf1-4121-867a-992b278b9792 5cb9ad243e8a47cba223f287f1c449b8 ed7bb7f0687a4e39bdcaf0893b268727 - - -] Compute_service record updated for ('computen05', 'computen05')
2023-11-28 14:30:19.533 7214 INFO nova.compute.resource_tracker [req-0c72b186-8bf1-4121-867a-992b278b9792 5cb9ad243e8a47cba223f287f1c449b8 ed7bb7f0687a4e39bdcaf0893b268727 - - -] Compute_service record updated for computen05:computen05