Monday, 2020-08-31

*** sapd1 has quit IRC00:07
*** zhanglong has joined #openstack-nova01:04
*** swp20 has joined #openstack-nova01:04
*** suryasingh has joined #openstack-nova01:14
*** Liang__ has joined #openstack-nova01:16
openstackgerritKeigo Noha proposed openstack/nova stable/ussuri: Change default num_retries for glance to 3  https://review.opendev.org/74893601:18
*** k_mouza has joined #openstack-nova01:19
*** k_mouza has quit IRC01:23
*** tony_su has joined #openstack-nova01:26
*** songwenping_ has joined #openstack-nova01:34
*** swp20 has quit IRC01:38
*** zhanglong has quit IRC01:46
*** sapd1 has joined #openstack-nova02:06
*** zhanglong has joined #openstack-nova02:06
openstackgerritTony Su proposed openstack/nova master: Provider Config File: Coding style and test cases improvment  https://review.opendev.org/74893902:20
openstackgerritTony Su proposed openstack/nova master: Provider Config File: Coding style and test cases improvement  https://review.opendev.org/74893902:22
*** rcernin has quit IRC02:58
*** rcernin has joined #openstack-nova03:19
*** ratailor has joined #openstack-nova03:48
*** zhanglong has quit IRC03:58
*** rcernin has quit IRC04:12
*** rcernin has joined #openstack-nova04:14
*** Liang__ has quit IRC04:20
*** evrardjp has quit IRC04:33
*** evrardjp has joined #openstack-nova04:33
*** vishalmanchanda has joined #openstack-nova04:35
*** sapd1 has quit IRC04:55
*** stephenfin has quit IRC05:35
*** sapd1 has joined #openstack-nova05:37
*** stephenfin has joined #openstack-nova05:42
*** stephenfin has quit IRC05:47
*** rcernin has quit IRC05:48
*** sapd1 has quit IRC05:54
*** stephenfin has joined #openstack-nova05:56
*** zhanglong has joined #openstack-nova05:59
*** stephenfin has quit IRC06:01
*** stephenfin has joined #openstack-nova06:02
*** rcernin has joined #openstack-nova06:03
*** stephenfin has quit IRC06:09
*** PrinzElvis has joined #openstack-nova06:10
*** sapd1 has joined #openstack-nova06:15
*** stephenfin has joined #openstack-nova06:16
*** jsuchome has joined #openstack-nova06:25
*** stephenfin has quit IRC06:27
*** sean-k-mooney1 has joined #openstack-nova06:27
*** stephenfin has joined #openstack-nova06:29
*** sean-k-mooney has quit IRC06:30
*** links has joined #openstack-nova06:44
*** PrinzElvis has quit IRC06:46
*** PrinzElvis has joined #openstack-nova06:46
*** stephenfin has quit IRC06:50
*** stephenfin has joined #openstack-nova06:57
*** ralonsoh has joined #openstack-nova07:01
*** PrinzElvis has quit IRC07:03
*** PrinzElvis has joined #openstack-nova07:03
*** stephenfin has quit IRC07:04
*** PrinzElvis has quit IRC07:07
*** PrinzElvis has joined #openstack-nova07:08
*** PrinzElvis has quit IRC07:10
*** PrinzElvis has joined #openstack-nova07:11
*** songwenping_ has quit IRC07:13
*** tesseract has joined #openstack-nova07:17
*** trident has quit IRC07:18
*** swp20 has joined #openstack-nova07:20
swp20gibi: hi gibi, morning.07:21
swp20I have a question: since we don't delete allocations when we evacuate, this case `nova.tests.functional.wsgi.test_services.TestServicesAPI.test_evacuate_then_delete_compute_service` is not reasonable, right?07:23
*** sapd1 has quit IRC07:24
*** PrinzElvis has quit IRC07:30
*** PrinzElvis has joined #openstack-nova07:31
gibiswp20: the source host allocation is not deleted during evacuation. It is left for the source host to delete it when it is recovered. If the source host is never recovered but deleted instead then we might need to delete the source-side allocations of successful evacuations07:35
*** tosky has joined #openstack-nova07:39
gibiswp20: alternatively we can reject the service delete if there are allocations against the RP due to finished evacuations07:40
gibiand ask the admin to clean that up manually before deleting the service07:41
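A minimal sketch of the manual check gibi describes, using the placement API's GET /resource_providers/{uuid}/allocations; the endpoint, token and RP UUID values are placeholders and this is not nova code:

    import requests

    PLACEMENT = "https://placement.example.com/placement"   # assumed endpoint
    TOKEN = "<admin-token>"
    RP_UUID = "<compute-node-uuid>"

    # list allocations still held against the compute node's resource
    # provider (e.g. leftovers of finished evacuations) before deleting
    # the compute service record
    resp = requests.get(
        "%s/resource_providers/%s/allocations" % (PLACEMENT, RP_UUID),
        headers={"X-Auth-Token": TOKEN,
                 "OpenStack-API-Version": "placement 1.30"})
    resp.raise_for_status()
    allocations = resp.json().get("allocations", {})
    for consumer, alloc in allocations.items():
        print(consumer, alloc["resources"])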
swp20so how should we handle the exception when deleting the RP fails? we cannot raise the exception.07:41
swp20please see the patch: https://review.opendev.org/#/c/748339/07:42
bauzasgood morning07:42
gibiswp20: I will check07:43
gibibauzas: good morning07:43
swp20gibi: thanks07:43
*** zigo has joined #openstack-nova07:45
*** brinzhang has joined #openstack-nova07:49
*** rcernin has quit IRC07:51
suryasingh@gibi hi... Sorry to address you directly. Do you know if graceful shutdown of nova-compute is supported on the nova side (for example, SIGTERM sent to nova-compute to stop it in the middle of an instance boot operation)?07:54
*** belmoreira has joined #openstack-nova07:57
*** PrinzElvis is now known as Prinz-Elvis07:58
gibisuryasingh: hi! good question. I'm not sure.07:59
*** tosky has quit IRC08:00
*** jangutter has quit IRC08:00
*** purplerbot has quit IRC08:00
gibisuryasingh: looking at the nova.service.Service impl, I don't see a specific implementation08:01
gibinova.service.Service.stop just stops the RPC service and then calls manager.clean_up08:02
*** irclogbot_0 has quit IRC08:02
*** ralonsoh_ has joined #openstack-nova08:02
gibiwe do cancel ongoing live migration though08:03
gibivia nova.compute.manager.ComputeManager._cleanup_live_migrations_in_pool08:03
gibibut I don't see any more gracefulness08:03
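As an aside, a plain-Python sketch of the kind of gracefulness being discussed here: on SIGTERM stop taking new work, let in-flight work (e.g. an ongoing boot) drain, then exit. This only illustrates the pattern and is not nova's or oslo.service's implementation:

    import concurrent.futures
    import signal
    import time

    stopping = False

    def handle_sigterm(signum, frame):
        global stopping
        stopping = True                    # stop accepting new work

    signal.signal(signal.SIGTERM, handle_sigterm)

    def boot_instance(n):
        time.sleep(2)                      # stand-in for a long-running boot
        return n

    pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)
    futures = []
    submitted = 0
    while not stopping and submitted < 8:
        futures.append(pool.submit(boot_instance, submitted))
        submitted += 1
        time.sleep(0.5)

    pool.shutdown(wait=True)               # graceful: drain in-flight work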
*** ralonsoh has quit IRC08:03
*** ralonsoh_ has quit IRC08:03
*** Prinz-Elvis is now known as PrinzElvis08:04
*** tosky has joined #openstack-nova08:05
*** jangutter has joined #openstack-nova08:05
*** purplerbot has joined #openstack-nova08:05
*** PrinzElvis is now known as Prinz_Elvis08:05
suryasinghI see gibi, Thanks for response though.08:05
*** irclogbot_0 has joined #openstack-nova08:07
*** zhanglong has quit IRC08:08
*** zhanglong has joined #openstack-nova08:09
gibisuryasingh: if you need more gracefulness then I suggest describing your use case in a mail to the mailing list or in a nova specification08:09
*** avolkov has joined #openstack-nova08:09
suryasinghgibi: thanks for the info. I will do that once I feel the need to mail.08:11
bauzasgibi: suryasingh: we just hook the SIGHUP signal08:12
bauzashook up*08:12
*** Prinz_Elvis is now known as PrinzElvis08:12
bauzaswe don't indeed have any other signal for a graceful restart08:12
bauzasbut operators just disable the service before they restart08:13
bauzasso they don't need to wait08:13
bauzasgibi: IIRC, oslo.service also waits gracefully08:23
brinzhanggibi, stephenfin: Fixed stephenfin's comments inline in Cyborg evacuate patch, hope you can review again https://review.opendev.org/#/c/715326/08:24
bauzashttps://docs.openstack.org/oslo.service/ocata/history.html#id1708:24
bauzasgibi: ^08:24
bauzassuryasingh: ^08:24
fricklerbauzas: I have two questions regarding https://bugs.launchpad.net/neutron/+bug/1861401: a) is "hostname is immutable" documented somewhere? the nearest thing I found is https://bugs.launchpad.net/nova/+bug/106815408:25
openstackLaunchpad bug 1861401 in OpenStack Compute (nova) "Renaming instance brokes DNS integration" [Low,Opinion]08:25
openstackLaunchpad bug 1068154 in OpenStack Compute (nova) "Renaming Instance Name doesn't change hostname on a Rebuild" [Wishlist,Opinion]08:25
fricklerand b: is it possible to retrieve the original hostname via the API?08:25
bauzassuryasingh: gibi: https://docs.openstack.org/oslo.service/latest/reference/service.html#oslo_service.service.Service.stop08:26
bauzasfrickler: looking08:26
suryasinghbauzas: thanks for the heads up, sorry I couldn't follow completely. How does disabling help with a graceful shutdown? >> so they don't need to wait?08:27
bauzassuryasingh: by disabling the service, you avoid having new instances go to it08:28
suryasinghbauzas: yes that's correct08:28
bauzassuryasingh: so, then, when restarting the compute service, the instances are not restarted08:28
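A minimal sketch of that workflow via the compute API (microversion 2.53), with placeholder endpoint, token and service UUID; not taken from this discussion:

    import requests

    COMPUTE = "https://nova.example.com/v2.1"     # assumed endpoint
    TOKEN = "<admin-token>"
    SERVICE_ID = "<nova-compute-service-uuid>"

    # disable the service so the scheduler stops picking this host,
    # then restart nova-compute whenever convenient
    resp = requests.put(
        "%s/os-services/%s" % (COMPUTE, SERVICE_ID),
        headers={"X-Auth-Token": TOKEN,
                 "OpenStack-API-Version": "compute 2.53"},
        json={"status": "disabled", "disabled_reason": "planned restart"})
    resp.raise_for_status()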
fricklerbauzas: ah, for b) I found OS-EXT-SRV-ATTR:hostname, though it's admin-only by default, which kind of restricts its usefulness08:29
bauzasfrickler: corrrect, but you can change the policy08:30
bauzasfrickler: I mean, it's a cloud API, right ? :)08:30
bauzasso that's why it's defaulted to No08:30
suryasinghbauzas: what about an ongoing instance boot? suppose I disabled nova-compute even before it gets the net-id, or the requested volume or object, from other openstack services (neutron, cinder, swift)08:31
bauzasfrickler: for the first question, it's because we have persisted objects that use the name to know which compute they are related to08:31
bauzasso, changing it would mean you would also have to change those objects too08:32
*** derekh has joined #openstack-nova08:32
bauzasfrickler: it's possible I think, and maybe some operators have some tools for it08:33
fricklerbauzas: I don't question why it is immutable, I'd just like to see that clearly documented, so I can point people to it08:33
fricklerbauzas: for the API I do question why the default is admin-only, though, because that data seems to be readily available via metadata08:34
bauzasfrickler: ahah good point08:34
gibibauzas: would that service stop wait for every eventlet thread to finish too? that would mean a sort-of graceful shutdown then08:34
bauzasfrickler: you get the hostname thru the metadata API, really ?08:34
gibibauzas: I guess suryasingh or I could try in devstack to see what happens with on-the-fly spawns08:35
bauzasgibi: I'm not an oslo.service expert, but I'd think it does08:35
bauzasgibi: that's what I'd expect at least08:35
bauzas(the service wait)08:35
fricklerbauzas: http://169.254.169.254/latest/meta-data/hostname gives me the original hostname, not the changed (display_)name, yes08:36
bauzaseeek08:38
frickleroh, wait, is that an even different hostname?08:38
* bauzas goes looking at the metadata API ref08:38
bauzasfrickler: wait08:40
bauzasfrickler: hostname will give you the display_name of the instance VM I'd say08:40
bauzasthat's what I'd expect08:40
bauzasat least it's what EC2 Metadata format will give you https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html08:41
bauzashttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-categories.html =>08:42
bauzashostname: The private IPv4 DNS hostname of the instance. In cases where multiple network interfaces are present, this refers to the eth0 device (the device for which the device number is 0).08:42
bauzas08:42
fricklerbauzas: nope, those data seem immutable in my test, too (rocky). the openstack meta_data.json gives "hostname": "oldname.novalocal", "name": "newname"08:44
fricklerwhich is consistent with how neutron seems to handle dns at least08:45
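For reference, a small sketch of the check frickler is doing from inside a guest, comparing the immutable "hostname" with the mutable "name" in the metadata service:

    import json
    import urllib.request

    url = "http://169.254.169.254/openstack/latest/meta_data.json"
    with urllib.request.urlopen(url, timeout=5) as resp:
        md = json.load(resp)
    print("hostname:", md.get("hostname"))   # e.g. oldname.novalocal
    print("name:    ", md.get("name"))       # e.g. newname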
bauzasfrickler: lemme rephrase, this hostname value matches the original instance name, not the compute service hostname08:52
openstackgerritMamduh proposed openstack/os-vif master: Refactor code of linux_net to more cleaner and increase performace  https://review.opendev.org/74667308:55
*** sapd1 has joined #openstack-nova08:59
fricklerbauzas: not sure what you mean by "compute service hostname". the metadata matches the original instance name and OS-EXT-SRV-ATTR:hostname, not the new instance name09:06
*** trident has joined #openstack-nova09:08
openstackgerritAkhil Gudise proposed openstack/nova master: Moved all calls from _ENFORCER.authorize to a separate _authorize method  https://review.opendev.org/73946009:08
bauzasfrickler: okay, my bad, I see the confusion09:08
gibiswp20: left comment in https://review.opendev.org/#/c/74833909:09
bauzasfrickler: I thought you were asking whether the compute service hostame was immutable09:09
bauzasI'm tired09:09
bauzasyou asked about the instance hostname09:09
bauzasand yeah, this is immutable, users can only change the display_name field09:10
bauzasfrickler: I apologize09:10
bauzaswhich is consistent with what you get from the metadata API09:10
bauzasnow, back to your original question, where it is documented, I'm doublechecking things09:10
*** dtantsur|afk is now known as dtantsur09:11
*** sapd1 has quit IRC09:11
bauzasfrickler: first, the API fields are documented here https://docs.openstack.org/api-ref/compute/?expanded=list-servers-detailed-detail#id2109:12
bauzasand the "name" field is actually the display_name fiedl09:12
*** brinzhang has quit IRC09:16
-openstackstatus- NOTICE: due to a new release of setuptools (50.0.0), a lot of jobs are currently broken, please do not recheck blindly. see http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016905.html09:17
*** brinzhang has joined #openstack-nova09:19
fricklerbauzas: "The hostname set on the instance when it is booted." if one interprets "booted" as "created", that would almost do it, I guess09:28
swp20gibi: thanks for review.09:29
bauzasfrickler: agreed, fancy fixing the api ref ?09:29
fricklerbauzas: related strangeness: while the API seems to allow to filter the server list based on instance hostname, "openstack server list --host $hostname" actually seems to filter on hypervisor_hostname09:29
bauzasfrickler: hence my confusion09:29
bauzasin nova, host generally refers to the compute service hostname09:29
fricklerbauzas: ah, I see. I can do a patch for the api-ref, yes09:30
bauzasfrickler: and fwiw, we have OS-EXT-SRV-ATTR:host which gives you the compute service hostname09:31
bauzassee the problem ? :)09:31
fricklerbauzas: yeah, nice semantic overload, "name of the host" vs. "(DNS) hostname of the instance"09:32
bauzasthat's what happens when you're bugged on a Monday morning :)09:32
*** brinzhang_ has joined #openstack-nova09:37
*** brinzhang has quit IRC09:40
*** swp20 has quit IRC09:43
*** swp20 has joined #openstack-nova09:43
*** ralonsoh has joined #openstack-nova09:43
*** brinzhang_ has quit IRC09:52
bauzassean-k-mooney1: gibi: you're more experts than me on the network side, but I'm facing a small issue with the routed networks implementation10:00
bauzassean-k-mooney1: gibi: I only get the list of physnets from the requestspec when I'm in a prefilter, so instead of querying the list of segments from the network id or the port id, I was about to query neutron to give me the list of segments related to the physnets I got10:01
bauzassean-k-mooney1: gibi: do you think it's valid ? IMHO, it is.10:01
bauzassean-k-mooney1: gibi: this would even allow us to not care whether we were passed a port or a network like mriedem implemented in his WIP https://review.opendev.org/#/c/656885/7/nova/scheduler/utils.py@137910:03
*** jangutter has quit IRC10:04
*** jangutter has joined #openstack-nova10:05
*** ralonsoh has quit IRC10:10
*** ralonsoh has joined #openstack-nova10:10
*** tosky has quit IRC10:13
*** tosky has joined #openstack-nova10:13
*** stephenfin has joined #openstack-nova10:20
*** zhanglong has quit IRC10:22
*** Luzi has joined #openstack-nova10:35
* bauzas goes off for lunch10:37
*** jangutter_ has joined #openstack-nova10:41
sean-k-mooney1bauzas: one sec10:42
sean-k-mooney1bauzas: need to read that a couple of times :)10:42
sean-k-mooney1still drinking morning coffee10:43
*** brinzhang has joined #openstack-nova10:43
sean-k-mooney1not sure the request spec is correct10:43
*** jangutter has quit IRC10:44
sean-k-mooney1i need to look at where the request spec gets its physnet info10:44
sean-k-mooney1if its getting it from the nova vif objects then that is not correct10:45
sean-k-mooney1the nova vif object just uses the first physnet from a network, not the physnet corresponding to the segment10:45
brinzhangstephenfin: https://review.opendev.org/#/c/715326/27/api-guide/source/accelerator-support.rst@5610:46
*** sean-k-mooney1 is now known as sean-k-mooney10:47
brinzhangstephenfin: In https://releases.openstack.org/victoria/ the nova version for Victoria is not updated yet, do we need to keep using the ussuri version 21.1.0?10:47
stephenfinbrinzhang: You mean there's no 22.0.0 release yet?10:48
stephenfinThat makes sense. It hasn't been released. You're writing these docs for when it *is* released10:48
brinzhangstephenfin: yes10:48
brinzhangaha, I think yes, it makes sense10:49
sean-k-mooneybrinzhang: we only ever use the major version in the release notes10:49
stephenfinYeah, when writing things like the 'versionchanged' directive, you give the first version the change is *included* in10:50
stephenfinLeaving aside the major/minor thing, 21.1.0 has been released and it clearly wasn't included there :)10:50
sean-k-mooneyif we were fixing a bug caused by a backport i guess that could be an excpetion10:50
brinzhangsean-k-mooney: yeah, ack. We marked it this way, and it is reasonable in the docs: when users use this feature, it belongs to 22.0.010:51
sean-k-mooneybut i dont think we have used a minor version in the past10:51
stephenfinsean-k-mooney: yeah, sure. I don't think we have used it though10:51
stephenfinyeah10:51
brinzhangsean-k-mooney, stephenfin: do we need to backport this to Ussuri?10:52
stephenfinyou can't10:53
stephenfinthere's a service version bump10:53
stephenfinnot backportable10:53
brinzhangack ^10:53
brinzhangstephenfin, gibi, sean-k-mooney: will update this patch later, thanks for your review, it's getting closer and closer to landing10:55
stephenfinyup, we'll have this landed by end of the week, for sure10:55
gibi^^ +110:56
sean-k-mooneystephenfin: oslo lib freeze was on thursday; it's one week before the non-client lib freeze, which is this thursday10:57
stephenfingibi, bauzas: We discussed dropping support for the untested libvirt hypervisors at the PTG. Any chance you could review these patches at some point this week since I don't think we can merge after M3? https://review.opendev.org/#/q/topic:bp/remove-deprecated-libvirt-virt-types+status:open10:58
sean-k-mooneyi try to have a potential "release candidate" ready by the oslo freeze if i can but we haven't this time10:58
sean-k-mooneyso we still have till thrusday for os-vif10:58
stephenfinsean-k-mooney: okay, good to hear10:58
gibistephenfin: add those to my queue now10:58
stephenfin \o/ Great, thanks :)10:59
sean-k-mooneyi normally like to have that week to see if neutron or nova shake out something and still have time to fix it before the non-client lib freeze10:59
sean-k-mooneyanyway updating the os-vif patch now10:59
*** eharney has quit IRC10:59
*** jangutter has joined #openstack-nova11:00
openstackgerritStephen Finucane proposed openstack/nova master: doc: Update references to image properties  https://review.opendev.org/74419811:02
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Drop support for UML  https://review.opendev.org/74323011:02
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Drop support for Xen  https://review.opendev.org/74323111:02
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Remove 'hypervisor_version' from 'libvirt_info'  https://review.opendev.org/74419911:02
sean-k-mooneystephenfin: i do remember i was meant to create a ci for libvirt lxc but have not :(11:02
sean-k-mooneyoh you're dropping uml and xen11:02
sean-k-mooneyim ok with that11:02
stephenfinYeah, I've left dropping that one for another cycle at least11:03
*** jangutter_ has quit IRC11:03
sean-k-mooneyill see if i can create the ci between m3 and rc111:03
sean-k-mooneyif not ill submit a deprecation patch for it.11:04
sean-k-mooneyoh speaking of which11:04
sean-k-mooneyif i fix https://review.opendev.org/#/c/745605/ today/tomorrow to deprecate the compute and az filters, can we still merge that this release?11:05
stephenfinI don't see why not11:05
sean-k-mooneyok it would be nice to be able to remove those next cycle or at least have the option too11:06
*** eharney has joined #openstack-nova11:12
*** jangutter has quit IRC11:15
*** jangutter has joined #openstack-nova11:16
*** elod has quit IRC11:27
openstackgerritStephen Finucane proposed openstack/nova master: tests: Further usage of new server helpers  https://review.opendev.org/74320411:32
openstackgerritStephen Finucane proposed openstack/nova stable/ussuri: Add a lock to prevent race during detach/attach of interface  https://review.opendev.org/74903311:32
openstackgerrityatin proposed openstack/nova master: Revert "Rebase qcow2 images when unshelving an instance"  https://review.opendev.org/74903511:36
gibibauzas: based on https://review.opendev.org/#/c/656885/7/nova/scheduler/utils.py@1322 I think your observation is valid. We know the requested network (or port but then that is translated back to network) and then we can query neutron for a list of segments in that network11:37
sean-k-mooneygibi: right we have to query neutron11:37
sean-k-mooneygibi: i think bauzas was suggesting we would have physnets in the request spec11:37
sean-k-mooneybut those would be incorrect if present11:38
sean-k-mooneyif you have multiple physnets associated with a network nova only stores the first in the vif object11:38
gibiyeah, I don't think we have physnets there. But somewhere we have physnets11:38
gibiyes, I remember now that we only check for the first physnet when walking the segments11:39
sean-k-mooneyright but they are only correct if the network has only one physnet11:39
sean-k-mooneygibi: yep its a hack that we never fixed11:39
sean-k-mooneyfixing it is non trivial for the sriov case11:40
sean-k-mooneythe pci tracker cant currently take a list of physnets11:40
gibiyes, agreee11:40
sean-k-mooneyso we cant say find a vf with any of these physnets currently11:40
sean-k-mooneyi think the same would apply for bandwidth requests11:41
sean-k-mooneyso i think looping over the requested networks is valid11:41
gibiagree too11:42
sean-k-mooneybut we still need to ask neutron for the list of segments11:42
sean-k-mooneywe could maybe cache that if we considered it to be expensive with a time based cache11:42
sean-k-mooneye.g. cache it with a hard time out of say 5 minutes11:43
sean-k-mooneyits something we expect to change very seldom11:43
sean-k-mooneybut that can be done later if needed11:44
gibiyeah, that could be an optimization. Still this is a single http request per network.11:45
gibiso should not be that expensive11:45
swp20stephenfin: hi, what do you mean by https://review.opendev.org/#/c/715326/22..27/nova/compute/manager.py@328111:45
sean-k-mooneywell not quite11:45
sean-k-mooneywe have to get the list of segments from the network11:45
*** links has quit IRC11:45
sean-k-mooneythen we need to get the physnets from the segments in a different part of the code11:45
sean-k-mooneyso its 1 call for the segments and a second per segment for the segment details11:46
gibiyeah you are right there are two different places where we query segments11:47
gibithere the cache make more sense11:47
sean-k-mooneythe segment details are what i think could be cached11:47
sean-k-mooneythe segments per network im not sure needs to be11:48
brinzhangstephenfin: hi, what do you mean by https://review.opendev.org/#/c/715326/27/nova/compute/manager.py@328111:48
sean-k-mooneygibi: anyway we can cross that bridge later11:48
gibiyepp, totally agree11:49
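A tiny sketch of the time-based cache suggested above for the per-segment detail lookups, with a hard timeout of five minutes; the names are illustrative and this is not nova code:

    import time

    class TTLCache:
        def __init__(self, ttl=300):
            self.ttl = ttl
            self._data = {}                  # key -> (expires_at, value)

        def get(self, key, loader):
            now = time.monotonic()
            entry = self._data.get(key)
            if entry and entry[0] > now:
                return entry[1]
            value = loader(key)              # e.g. GET /v2.0/segments/{id}
            self._data[key] = (now + self.ttl, value)
            return value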
*** elod has joined #openstack-nova11:52
bauzassean-k-mooney: gibi: I'm just back, looking above12:01
bauzassean-k-mooney: gibi: okay, lemme provide the implementation today, you'll see my question12:02
gibisure12:03
* bauzas just wonders how to use a Neutron API instance in the scheduler request filter method12:03
bauzas(I mean the client)12:03
gibibauzas: I guess the previous patch did not try to call neutron from the request filter, but did the data collection earlier and stored the requested aggregate info in request_spec.request_level_params.member_of.append12:06
gibineutron was queried in conductor https://review.opendev.org/#/c/656885/7/nova/conductor/manager.py12:07
bauzasgibi: yep, but you'll see I don't use it12:07
bauzasinstead, I use a new request filter method and I use the requested destination object for passing the aggregates12:08
bauzasbut meh, you'll see12:08
gibiOK, I will check12:10
*** raildo has joined #openstack-nova12:12
openstackgerritMerged openstack/nova master: doc: Update references to image properties  https://review.opendev.org/74419812:15
gibistephenfin: do we actively reject img_hv_type=uml after https://review.opendev.org/#/c/743230/4 ?12:28
*** lbragstad_ has joined #openstack-nova12:32
sean-k-mooneygibi: stephenfin did not update the nova object12:33
sean-k-mooneyso the image property wont reject it12:33
*** redrobot has joined #openstack-nova12:34
gibisean-k-mooney: but then what will happen? NoValidHost?12:34
sean-k-mooneyits still here https://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/objects/fields.py#L40512:35
sean-k-mooneywell you cant set the uml virt type anymore since the config is updated12:36
sean-k-mooneyit looks like this was only used in https://opendev.org/openstack/nova/src/branch/master/nova/scheduler/filters/image_props_filter.py#L5512:38
sean-k-mooneyso since there are no hosts with that hypervior type12:39
gibias this is an image property UML still can be requested from the image. But then I guess it will simply result in a NoValidHost12:39
*** lbragstad_ has quit IRC12:39
sean-k-mooneythen ya12:39
gibiOK cool12:39
sean-k-mooneyyou will get a novalid host12:39
sean-k-mooneywe probably should also update the field definition to make that invalid12:39
sean-k-mooneywhich requires a slightly different approach where we raise an exception in obj_make_compatible instead of just dropping it12:41
sean-k-mooneylike this https://opendev.org/openstack/nova/src/branch/master/nova/objects/image_meta.py#L191-L19712:41
gibiyeah, that could be done as a follow up cleanup I guess12:41
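A rough sketch of that follow-up, modelled on the obj_make_compatible pattern linked above; the version number and exact check are illustrative, not actual nova code:

    from oslo_utils import versionutils

    from nova import exception

    # inside ImageMetaProps (sketch only)
    def obj_make_compatible(self, primitive, target_version):
        super(ImageMetaProps, self).obj_make_compatible(
            primitive, target_version)
        target = versionutils.convert_version_to_tuple(target_version)
        if target < (1, 31) and primitive.get('img_hv_type') == 'uml':
            # refuse to produce a primitive carrying a removed hypervisor
            # type instead of silently passing it through
            raise exception.ObjectActionError(
                action='obj_make_compatible',
                reason='img_hv_type=uml is no longer supported')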
sean-k-mooneyya its not strictly required i guess. but the values won't be valid anymore12:42
sean-k-mooneyit might make sense to have only one object bump for both12:42
sean-k-mooneyalthough i can see a counter argument to be made that you might continue to support uml with an out of tree driver or something12:43
gibiagree on a single bump for multiple removals12:43
sean-k-mooneyuml is unlikly to do that12:43
sean-k-mooneyxen might be more likely12:43
sean-k-mooneyalthough we are keeping libvirt-xen right12:43
sean-k-mooneyand jut removing xex server12:43
sean-k-mooney*xen-server12:44
sean-k-mooneythe standalone xen driver or are we removing both12:44
gibiwe are removing libvirt+xen not the standalone xenserver12:44
*** mgariepy has quit IRC12:45
sean-k-mooneyisnt the standalone xenapi the one we wanted to remove12:45
sean-k-mooneylibvirt xen probably still works12:45
sean-k-mooneyxenapi was the one with issues12:45
sean-k-mooneygiven it still needed pyton 2.6 like a year ago12:45
sean-k-mooneyoh the xen server side12:46
sean-k-mooneyoh we are doing both12:47
gibisean-k-mooney: you are right. that is a big -1 for stephenfin  :)12:47
sean-k-mooneyhttps://etherpad.opendev.org/p/nova-victoria-ptg12:47
sean-k-mooneylines 377 is the libvirt support12:47
sean-k-mooneyand 386 is xenapi12:47
sean-k-mooneyso we are meant to be deleting xenapi12:47
sean-k-mooneyand deprecating xen and uml12:48
*** mgariepy has joined #openstack-nova12:48
sean-k-mooneynot deleting them12:48
sean-k-mooneygibi: so stephen is jumping the gun with the deletions12:48
sean-k-mooneyvictoria is deprecation12:49
sean-k-mooneyfor the libvirt backends and deletion for xenapi12:49
sean-k-mooneystephenfin: ^12:49
*** ratailor has quit IRC12:49
gibiI added the necessary -1s now. Thanks for catching this Sean12:50
openstackgerritBrin Zhang proposed openstack/nova master: Cyborg evacuate support  https://review.opendev.org/71532612:51
*** raildo has quit IRC12:51
sean-k-mooneyso did i :)12:52
*** lpetrut has joined #openstack-nova12:52
sean-k-mooneygibi: on the plus side since stephenfin has written the patches those are easy to merge in early wallaby12:53
gibiyeah, but first we have to merge some deprecation patches for these12:53
sean-k-mooneyyep12:53
sean-k-mooneywhile i have your attention there is a revert of one of aarents patches proposed https://review.opendev.org/#/c/749035/112:54
sean-k-mooneyi dont see how we can be getting None; also the ci really does not like the revert for some reason12:54
*** lbragstad has joined #openstack-nova12:55
sean-k-mooneybut apparently this is failing in rdo12:55
sean-k-mooneyim not sure if it was a one-off failure or if its blocking their gate12:55
sean-k-mooneyhttps://bugs.launchpad.net/tripleo/+bug/189361812:55
openstackLaunchpad bug 1893618 in tripleo "periodic-tripleo-ci-centos-8-scenario000-multinode-oooq-container-updates-ussuri tempest test_shelve_unshelve_server failing in component-pipeline " [Critical,Triaged]12:55
gibilooking...12:55
*** maciejjozefczyk has joined #openstack-nova12:55
sean-k-mooneyfor some reason instance.system_metadata.get('image_base_image_ref') is returning None12:57
sean-k-mooneybut we set that in exactly one place and i dont really see how it can be None as that implies the instance has no image12:57
sean-k-mooneywhich makes no sense for a qcow backed vm12:57
*** links has joined #openstack-nova12:58
sean-k-mooneyactually from the pre-shelve xml i can see <nova:root type="image" uuid="5cc451d5-7abe-478a-9f4f-1a804f49a3f3"/>13:00
*** raildo has joined #openstack-nova13:00
sean-k-mooneyso its implying that the system_metadata table is populated incorrectly or instance.system_metadata does not have all the info at this point13:01
sean-k-mooneyits really strange because i would have expected this to work13:01
stephenfingibi, sean-k-mooney: I think those notes are wrong. Have we not already in-effect deprecated them? https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L606-L61413:02
sean-k-mooney no13:03
sean-k-mooneytesting is differnt13:03
sean-k-mooneythey are not deprecated yet13:03
sean-k-mooneyunless you can point to a release note13:03
sean-k-mooneyfor what its worth i did not think we planned on removing support for any libvirt backends this cycle, just deprecations; the only removals were going to be xenapi and vmware13:05
sean-k-mooneyvmware have now fixed the ci13:05
*** maciejjozefczyk has quit IRC13:05
sean-k-mooneyso they have been undeprecated?13:06
sean-k-mooneyi know gibi has a patch for that at least13:06
sean-k-mooneystephenfin: if we just use that warning as a deprecation warning then all of libvirt arm and power support would be deprecated, and it's not13:07
*** bnemec has joined #openstack-nova13:07
gibithat warning was added 7 years ago https://review.opendev.org/#/c/69919/13:08
gibiI agree with sean-k-mooney here to state the deprecation explicitly for these first13:09
stephenfinOkay, fair. I'll do that now and -W what's there for another few months13:09
stephenfinThanks for the reviews :)13:09
sean-k-mooney:)13:09
gibisean-k-mooney: vmware undeprecation is still open https://review.opendev.org/#/c/742407/ but I think we have a good chance to merge that13:10
gibidansmith: could you check back to the vmware undeprecation patch ^^ ?13:12
openstackgerritBrin Zhang proposed openstack/nova master: Refactor check and exception  https://review.opendev.org/74905213:13
openstackgerritJiri Suchomel proposed openstack/nova master: Add ability to download Glance images into the libvirt image cache via RBD  https://review.opendev.org/57430113:16
*** artom has joined #openstack-nova13:17
gibisean-k-mooney: I have no idea how image_base_image_ref can be None for a qcow2 instance13:19
gibisean-k-mooney: we should make a reproduction of the tripleo test case13:19
*** jangutter has quit IRC13:20
*** jangutter has joined #openstack-nova13:20
openstackgerritBrin Zhang proposed openstack/nova master: Refactor check and exception  https://review.opendev.org/74905213:22
sean-k-mooneygibi: it failed on one of your patches too13:24
sean-k-mooneyhttps://review.opendev.org/#/c/739246/13:24
sean-k-mooneyhttp://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22_finalize_unshelve_qcow2_image%5C%22&from=30d13:24
sean-k-mooneythere are only 2 hits in logstach so far in the last month13:25
openstackgerritWenping Song proposed openstack/nova master: Reject resize operation for accelerator  https://review.opendev.org/74856013:25
*** jangutter has quit IRC13:26
*** jangutter has joined #openstack-nova13:26
*** jangutter has quit IRC13:27
*** jangutter has joined #openstack-nova13:28
sean-k-mooneygibi: so apparently the field is not set in the system metadata13:29
*** jangutter has quit IRC13:29
*** jangutter has joined #openstack-nova13:29
gibiso we have a tempest test that can reproduce the problem but it only does it really infrequently13:32
*** lemko has quit IRC13:33
gibisean-k-mooney: nope, the error that was in my patch has a different stack trace https://zuul.opendev.org/t/openstack/build/de80ec8f00204a92b5e667bdf7a72cee/log/controller/logs/screen-n-cpu.txt#2411613:37
*** lemko has joined #openstack-nova13:38
gibialso the another nova hit in kibana is different too https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_990/739246/3/check/tempest-integrated-compute/990dace/controller/logs/screen-n-cpu.txt13:38
sean-k-mooneygibi: kibana found that error too however13:39
sean-k-mooneyon patchset 3 and 413:39
gibithe two error I see in kibana on review 739246  have different stack traces13:40
sean-k-mooneyoh they do13:40
gibiit does hit  _finalize_unshelve_qcow2_image but different way13:41
sean-k-mooneyjust the same function13:41
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Drop support for UML  https://review.opendev.org/74323013:41
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Drop support for Xen  https://review.opendev.org/74323113:41
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Remove 'hypervisor_version' from 'libvirt_info'  https://review.opendev.org/74419913:41
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Deprecate support for non-QEMU/KVM backends  https://review.opendev.org/74905513:41
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: Remove '[vnc] keymap', '[spice] keymap' options  https://review.opendev.org/74905613:41
gibione of the errors I even recognize as a rebase issue from my patch13:41
stephenfinsean-k-mooney: gibi: There you go. I'll mark the actual removal patch as -W for six months or so, heh13:41
gibistephenfin: thanks13:42
gibistephenfin: if you are itching for a real removal then the xenserver driver can be removed I think :)13:43
stephenfinSure, if you're happy to review it, I'm happy to do it :)13:43
dansmithgibi: the ciwatch page shows more red than green for vmware lately, but I'm not sure how to figure out what is failing because the log dump isn't complete13:44
gibistephenfin: sure, why not13:44
dansmithgibi: oh actually, I guess I'm missing a bunch of green to the right13:44
gibidansmith: looking..13:45
dansmithbut I'm not sure what to make of the fails that have no logs, maybe those are aborts due to a new patch rev going up or something?13:45
dansmithokay, I found a fail that looks like a real fail, so I dunno what those other ones are, if not aborts13:46
dansmithI just wanted to find something that looks like a real failure to contrast it to a passing run, and also to make sure it's actually reporting failures13:47
gibiyeah, me neither13:47
gibiI'm trying to figure out if Yingji Sun or  jhui@vmware.com is on IRC or not13:48
gibiso we can ask them13:48
*** Luzi has quit IRC13:51
*** ratailor has joined #openstack-nova13:52
sean-k-mooneydansmith: they might be hitting the same keystone error that is breaking the first party ci13:53
dansmithsean-k-mooney: and reporting empty logs? seems weird, but okay13:54
sean-k-mooneywell no that is likely another issue13:55
sean-k-mooneyi just noticed they started failing when the upstream jobs also started failng13:55
*** jangutter has quit IRC13:56
*** nweinber has joined #openstack-nova13:56
*** jangutter has joined #openstack-nova13:56
*** Liang__ has joined #openstack-nova13:57
*** kevinz has quit IRC13:59
sean-k-mooneyhum the hyperv ci also seems to be broken14:00
sean-k-mooneyits been red for at least the last 7 days14:00
*** brinzhang has quit IRC14:05
*** eharney has quit IRC14:14
*** sapd1 has joined #openstack-nova14:24
*** eharney has joined #openstack-nova14:27
*** belmoreira has quit IRC14:30
*** Liang__ has quit IRC14:31
openstackgerritSylvain Bauza proposed openstack/nova master: WIP: Add a routed networks scheduler pre-filter  https://review.opendev.org/74906814:33
bauzasgibi: sean-k-mooney: there it is ^14:33
*** dklyle has joined #openstack-nova14:35
*** jangutter_ has joined #openstack-nova14:35
bauzasgibi: sean-k-mooney: I now use a pre-filter that looks at the existing physnets that were provided by https://github.com/openstack/nova/blob/master/nova/network/neutron.py#L208214:36
*** openstackgerrit has quit IRC14:37
*** gibi has quit IRC14:37
*** gibi has joined #openstack-nova14:37
*** jangutter has quit IRC14:39
bauzasI mean the network metadata that isn't persisted in the spec object but reused by every move operation14:39
*** ratailor has quit IRC14:40
*** dklyle has quit IRC14:45
*** links has quit IRC14:50
*** spatel has joined #openstack-nova14:53
*** jangutter_ has quit IRC15:02
*** jangutter has joined #openstack-nova15:02
*** mriedem has joined #openstack-nova15:04
gibibauzas: then I think you have to resolve the TODO in the codepath that gathers the physnets https://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/network/neutron.py#L202015:08
gibias today it only gets one physnet per network, even if there are multiple15:09
bauzasholy shit.15:09
*** lpetrut has quit IRC15:11
bauzasgibi: in a meeting15:14
*** belmoreira has joined #openstack-nova15:14
*** lpetrut has joined #openstack-nova15:20
bauzasgibi: actually, looking at vladik's comment, I'm not expert, but do we really support multiple networks per vlans?15:25
bauzasmultiple physnets*15:25
gibias far as I know in neutron nothing prevents you from creating two vlan segments with different physnets in the same network15:27
*** spatel has quit IRC15:29
bauzasgibi: okay, then I don't feel enough expert to fix this TODO15:47
bauzasgibi: the other way would be to find a way to pass requested networks down in the request spedc15:48
bauzasspec*15:48
bauzaswhich is something we don't do15:48
gibithe way the pervious patches did it was to pass the aggregates via  request_spec.request_level_params.member_of15:49
gibiand do the network segment -> host aggregate translation in an upper layer (conductor)15:50
bauzasgibi: I know15:50
bauzasbut a pre-filter is better, right?15:50
bauzasand we don't need to pass the aggregates by a new field15:50
bauzasthe destination object already contains what we need15:51
gibithis is still a pre-filter as the aggregate filtering happens in the a_c query15:51
gibialso I don't think request_spec.request_level_params.member_of is a new field at all15:51
bauzasgibi: matt added it15:52
gibithe member_of parts?15:52
gibithat could be15:52
gibibut the request_level_params object should be on master I think15:52
bauzasgibi: we already have pre-filters that add specific aggregates to verify15:52
bauzasso, the problem is not how to ask placement15:52
bauzasbut rather, how to pass the networks to the pre-filter15:52
*** belmoreira has quit IRC15:53
gibiyou have to pass in new information so you have to extend an passed object or pass a new object15:53
bauzasgibi: see, https://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/scheduler/request_filter.py#L12115:53
bauzasgibi: this way, from the pre-filter, you can pass the list of aggregates to require15:54
bauzasso this is a solved problem15:54
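A rough sketch of the prefilter shape being described, mirroring how the existing request filters call require_aggregates(); the helpers for getting the requested networks and mapping a segment to an aggregate are assumptions (they are exactly the open question below), not existing nova code:

    from nova import objects

    def routed_networks_filter(ctxt, request_spec):
        destination = request_spec.requested_destination
        if destination is None:
            destination = objects.Destination()
            request_spec.requested_destination = destination
        for network_id in get_requested_network_ids(request_spec):   # assumed helper
            aggregates = [segment_to_aggregate_uuid(ctxt, seg)        # assumed helper
                          for seg in list_segments(network_id=network_id)]
            if aggregates:
                # the host must belong to an aggregate of at least one
                # segment of each requested network
                destination.require_aggregates(aggregates)
        return True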
bauzasbut since we only have the request spec object that is passed down to the pre-filter, we need to get the networks from it15:54
bauzasmy proposal was to use the network metadata info15:54
bauzasbut we only provide a list of physnets15:55
gibiso you can extend the NetworkMetadata if you wish, or you can extend the RequestLevelParams of the RequestSpec15:55
bauzasor I could get the info like this https://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/objects/request_spec.py#L54715:56
bauzas(from the vif)15:56
*** efried has quit IRC15:56
gibido we have the instance info cache populated before reaching the compute?15:56
bauzasno, that's only for a move op15:57
bauzasAFAIK15:57
bauzasanyway, I need to think about the problem more15:57
gibiyeah, for the move you can reach into the info cache15:57
bauzasgibi: I need to balance all the options15:58
bauzasI don't want a pre-filter that would behave different between operations15:58
bauzasso I literrally need to find a way to provide the required information into the request spec object directly15:59
bauzaswhich would prevent us from backporting, which is a bit sad unfortunately for my company :/15:59
gibiohh, so the motivation is backportability16:00
bauzasgibi: not really, I'd say the easier be the better16:03
bauzasI was thinking the physnets be a clean way to do this, but given this TODO, I'm stuck16:04
bauzasso, I'll just provide the networks the best way, and good bye backportability16:04
gibifixing that TODO would need also an object change I guess, so you would not gain much16:04
sean-k-mooney hi so i was not following due to a downstream call16:05
sean-k-mooneybauzas: we cant trust the physnets in the networking info cache if the network has multiple physnets16:05
sean-k-mooneyunless we fix how we currently do the physnet lookup16:06
sean-k-mooneyso that it is segment aware16:06
bauzassean-k-mooney: that was gibi's point16:06
bauzas(17:08:49) gibi: bauzas: then I think you have to resolve TODO in the codepath that gather the physnets https://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/network/neutron.py#L202016:06
sean-k-mooneybauzas: the physnets in the nova vif object are only correct when you have 1 physnet and transitively 1 segment on the network16:07
bauzassean-k-mooney: I was just looking up the network metadata16:07
sean-k-mooneybauzas: right so if we resolve that todo we also need to consider the impact on sriov16:07
sean-k-mooneyby the way this is the issue i pointed out before: actually supporting the multi_provider_physnet extension16:08
bauzassean-k-mooney: https://review.opendev.org/#/c/749068/1/nova/scheduler/request_filter.py@28616:08
bauzassean-k-mooney: but I used the physnets because that was the quickest path16:08
bauzassince we don't pass the list of networks16:08
bauzasin the request spec16:08
sean-k-mooneyya so that set of physnets is not correct for routed networks if you have more than 1 segment16:08
bauzassure16:09
bauzasso16:09
bauzaswhat I want is just a way to get a list of networks et voila16:09
bauzaswhich means I have to augment the RequestSpec object or one of its nested children16:09
sean-k-mooneythe physnets are in the segments, not in the networks, so you need to list the network then query neutron for the segments to get the physnet16:09
bauzasagain, I don't care of the physnets16:10
*** hamalq has joined #openstack-nova16:10
*** priteau has joined #openstack-nova16:10
bauzasI want to get the segments16:10
*** artom has quit IRC16:10
sean-k-mooneyyes but neutron uses physnets to map host to segments16:10
sean-k-mooneybut yes you want the segment16:10
bauzassean-k-mooney: we have a neutron API for getting the segments that are related to either a network or a physnet16:11
bauzasthe original proposal from matt was to get the networks and ask neutron to give the segments16:11
sean-k-mooneyyes16:11
bauzasthis was easy since he wrote that in the conductor16:11
bauzasand then we directly have the requested networks (or ports)16:11
bauzasbut here, we want to make it more generic in a pre-filter16:12
bauzasand in my case, I only have the requestspec object this is passed as argument16:12
bauzasgood bye networks and ports16:12
*** sapd1 has quit IRC16:12
bauzassean-k-mooney: see my problem and why I went using the physnets ?16:12
sean-k-mooneybauzas: yep i know but just to be clear this will only work for existing vms16:13
sean-k-mooneyyou cant use physnets for booting new vms16:13
bauzassean-k-mooney: well, I can see us populating the network metadata object when creating the instance16:13
bauzashttps://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/network/neutron.py#L208216:14
bauzasbut either way16:14
sean-k-mooneywhen we create the port with ip allocation policy deferred it wont be bound to a segment or physnet yet16:14
bauzasI need a way to pass down the requested networks16:14
sean-k-mooney yes16:15
sean-k-mooneyso you either need to add the requested networks to the request spec or to the network_metadata, or just add the instance object, but i know we have said no to the instance in the past16:16
sean-k-mooneyor just pass the instance to the prefilter16:16
sean-k-mooneywe dont actually have the instance object where this is called https://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/scheduler/manager.py#L15016:18
bauzassean-k-mooney: right, because that's in the scheduler16:19
bauzassean-k-mooney: a simple approach would be to mention the requested networks on the main request spec object16:19
sean-k-mooneysure but the fact we are in schduling means the object exists16:19
sean-k-mooneybauzas: yes16:19
bauzassean-k-mooney: do you think we would persist those ?16:19
bauzasor should we guess them for a move operation ?16:20
sean-k-mooneythe requested networks16:20
bauzasyes16:20
bauzasthis is risky16:20
sean-k-mooneyif we add them to the request spec and use them we need to update them when we add or remove interfaces16:20
sean-k-mooneyoh16:21
bauzasonly when you want a scheduling decision16:21
bauzasso, yeah we need to recalculate them16:21
sean-k-mooneyis the network info cache populated yet16:21
sean-k-mooneywe have the instance uuid right16:21
bauzasno, I don't think16:21
sean-k-mooneydamb16:21
bauzassean-k-mooney: only for the first instance16:21
bauzassean-k-mooney: but I'd say, just populate this field at boot time based on the base options16:22
sean-k-mooneywell select destinations has instance_uuids16:22
bauzassean-k-mooney: and for a move, just try to get them from the info cache16:22
bauzassean-k-mooney: I don't want to add a new arg to the filter16:23
bauzassean-k-mooney: all of this needs to be in the request spec16:23
bauzasbut I get your point16:23
sean-k-mooneywell we can update the request_spec.instance_uuid field16:23
bauzaswe could get the requested networks from the info cache in the scheduler, add them on the fly before calling the pre-filter and wipe them after16:23
bauzasoh please no16:23
sean-k-mooneyi think we already do that in one case, for multicreate16:24
sean-k-mooneywe had to fix something related to this16:24
sean-k-mooneyi think it was the numa topology16:24
sean-k-mooneybauzas: yes16:25
sean-k-mooneyso if you were to do that16:25
sean-k-mooneyi would not add a new field16:25
sean-k-mooneyjust set the info on the object directly16:26
sean-k-mooneythat way it wont persist16:26
sean-k-mooneywe do that in some places today16:26
sean-k-mooneye.g. request_spec.temp_var = my thing16:26
sean-k-mooneyim not sure its the request spec object we do that for but its why we dont error if you set a field on an ovo that does not exist16:27
sean-k-mooneybecause nova occasionally stores info in them that is never serialised16:27
gibiyou can add a proper ovo field that is not persisted16:28
gibithat would be a bit more readable16:28
sean-k-mooneycan you? i did not know that16:28
bauzassurely you can16:28
sean-k-mooneywell i was not aware we were already doing that16:28
bauzasthe network metadata field, for example :D16:28
bauzasthis one is lazy loaded16:29
sean-k-mooneyis that controlled by https://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/objects/request_spec.py#L36?16:30
gibirequested_resources https://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/objects/request_spec.py#L566-L57316:30
*** belmoreira has joined #openstack-nova16:30
bauzassean-k-mooney: not really controlled, just we know what to look up from DB with this list16:30
sean-k-mooneyhttps://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/objects/request_spec.py#L538-L55616:31
sean-k-mooneyso that is built from the info cache16:31
bauzashttps://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/objects/request_spec.py#L13716:31
bauzassean-k-mooney: only for all ops but create16:31
sean-k-mooney this is where its filtered out16:33
sean-k-mooneyhttps://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/objects/request_spec.py#L655-L65716:33
sean-k-mooneywhich we call in create before we create it in the db16:34
sean-k-mooneythen we load back from the db object16:34
bauzassean-k-mooney: that's the standard way of providing a ovo field that's not persisted16:35
sean-k-mooney"standard" i was expecting to have a flag in the field deffintion16:36
sean-k-mooneylike nullable=false16:36
sean-k-mooneybut persit=false instead16:36
bauzasie. you explicitly provide a lazy-loading mechanism for getting it, but you also delete its primitive before persisting to the DB16:36
bauzassean-k-mooney: oh no, that's made thru code patterns :)16:36
bauzasso, I could technically add a new field16:37
bauzasmake it lazy-loadable16:37
sean-k-mooneyya this is probably less readable than just doing object.whatever=something16:37
sean-k-mooneyits certainly more complicated and easier to mess up16:37
bauzasand make sure we populate it from the info cache when we can16:37
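A hedged sketch of that pattern with an illustrative field name (not the real RequestSpec code): declare a proper ovo field, lazy-load it on access, and strip it from the primitive written to the database, the same way requested_resources is handled today:

    # inside objects.RequestSpec (sketch only)
    fields = {
        # ... existing fields ...
        'requested_networks': fields.ObjectField('NetworkRequestList',
                                                 nullable=True),
    }

    def obj_load_attr(self, attrname):
        if attrname == 'requested_networks':
            # recomputed on demand (e.g. from the instance info cache for
            # move operations); never read back from the database
            self.requested_networks = None
            return
        super(RequestSpec, self).obj_load_attr(attrname)

    def _get_update_primitives(self):
        # drop the transient field from the copy that gets serialized into
        # the request_specs.spec column, mirroring requested_resources
        spec = self.obj_clone()
        if 'requested_networks' in spec:
            spec.requested_networks = None
        ...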
bauzassean-k-mooney: well, I know our objects before they were called o.vo :)16:38
bauzasthat certainly helps :)16:38
sean-k-mooneyyep but that does not mean we should keep doing it this way16:38
*** belmoreira has quit IRC16:38
sean-k-mooneyits rather arcane16:38
sean-k-mooneyi can follow it but its not obvious and its not a clean approach16:38
sean-k-mooneyit certainly works16:38
sean-k-mooneybut this is definitely tech debt16:39
sean-k-mooneyadding a property to the class and using it would be cleaner16:39
sean-k-mooneyor adding it as a flag on the field and doing it in ovo16:39
sean-k-mooneyi guess its not really ovo but the db code16:40
bauzassean-k-mooney: I'll make a proposal tomorrow morning16:48
sean-k-mooneywe should put the segment info in the nova.network vif object16:50
sean-k-mooneythen get those from the cache16:50
sean-k-mooneyhttps://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/network/model.py#L380-L40416:50
sean-k-mooneywell nova.network.model.Vif16:50
sean-k-mooneyor nova.network.model.VIF i guess is the corerct capitalisation16:51
*** efried has joined #openstack-nova16:52
sean-k-mooneywe can add a new segment field to that object16:52
sean-k-mooneythese are not ovos16:52
*** ralonsoh has quit IRC16:53
sean-k-mooneybut those are what is stored in the info cache16:53
sean-k-mooneythey are constucted here https://github.com/openstack/nova/blob/b5d48043466b53fbdfe7b93c2e4efd449904e593/nova/network/neutron.py#L3026-L307616:54
*** stephenfin has quit IRC16:56
sean-k-mooneythis is called via  allocate_for_instance which is also what populates the info cache16:58
sean-k-mooneyoh...17:02
sean-k-mooneythat is called first on the compute host. because we only auto allocate networks on compute host if we are passed network ids17:02
bauzassean-k-mooney: again, I won't modify the vif object17:09
bauzasI think I have everything I need17:09
bauzaswe have the requested networks at boot time, so we can directly pass them thru the spec object17:09
bauzasfor the move operations, we can recalculate the requested networks from the vifs17:10
bauzasthat's it17:10
bauzashopefully, tomorrow I should be able to provide a new re17:10
sean-k-mooneyyes although im suggesting we should have the segments in the vif object anyway17:10
bauzasrev17:10
sean-k-mooneyi.e. we should add them17:10
bauzassean-k-mooney: I'm more in favor of what gibi suggested in the spec review, ie. getting the aggregates directly from neutron as an a-c query17:11
bauzaslike we do for bw-aware instances17:11
sean-k-mooneybauzas: yes that is what i originally suggested17:11
bauzasbut that's next release17:11
sean-k-mooneybut that not going to happen this release17:11
bauzasyup17:11
sean-k-mooneybut even if we get them form neutron17:11
sean-k-mooneywe will need to store them in the vif object17:12
sean-k-mooneythat is why i was suggestign adding them there17:12
bauzaseither way, kids go back to school tomorrow morning, stopping now17:12
sean-k-mooneyanyway ill take a look at your patch tomorrow17:12
bauzassean-k-mooney: bye and thanks for all the fish17:13
*** dtantsur is now known as dtantsur|afk17:13
*** tosky has quit IRC17:14
*** derekh has quit IRC17:20
sean-k-mooneyyum fish.... i was meant to get sushi yesterday but the nice restaurant was not delivering. i think they are still offline today but someday this week i will order some and it will be awesome. for now im just going to go make dinner o/17:27
* sean-k-mooney has beer battered cod in the freezer but its not the same17:28
*** priteau has quit IRC17:29
*** gyee has joined #openstack-nova17:43
*** tesseract has quit IRC17:43
*** JamesBenson has joined #openstack-nova17:43
*** jamesden_ has quit IRC17:47
*** artom has joined #openstack-nova17:49
*** jamesdenton has joined #openstack-nova17:49
*** jsuchome has quit IRC17:56
sean-k-mooneygibi: ill try to test your sriov series tomorrow but feel free to remind me if i forget.18:05
*** lpetrut has quit IRC18:20
sean-k-mooneyartom: added a few more details to https://review.opendev.org/#/c/747451/4 but im +1 on the patch18:27
sean-k-mooneybasically i just noted the binding point for the singel port binding workflow to show that that is also valid18:27
sean-k-mooneyand responded to lee's question regarding asserting the host18:28
sean-k-mooneymelwitt: ^ if you want me to expand on anything in particalar let me know but it looks correct.18:28
artomsean-k-mooney, so when I tested this I didn't have any of the binding stuff in the vifs18:31
artomThough in retrospect maybe I was running without the extension?18:31
sean-k-mooneyhow do you mean18:32
sean-k-mooneywhen you tested this with devstack or unit/functional tests18:33
artomsean-k-mooney, devstack18:33
sean-k-mooneywhich vifs did you check18:33
artomsean-k-mooney, all of them :P18:33
artomsean-k-mooney, even the network_info from the Neutron API didn't have them18:34
sean-k-mooneywhich binding info are you looking for18:34
*** zzzeek has quit IRC18:34
sean-k-mooneythe host id is  not in the nova.network.model.VIF object18:34
sean-k-mooneywhich is what is in the network info18:34
artomsean-k-mooney, I guess it converts18:35
artomErr, dad taxi time18:35
artomBack in a bit18:35
sean-k-mooneydo you mean the migrating_to fields were not in the binding profile18:35
sean-k-mooneysure18:35
artomsean-k-mooney, that was there18:35
artomhttp://paste.openstack.org/18:35
artomErr18:35
artomhttp://paste.openstack.org/show/797304/18:36
artomI actually saved it at the time18:36
*** zzzeek has joined #openstack-nova18:36
artomTo compare the nw_info from the API vs the one from the cache18:36
artomThat link is the one from the API18:36
artomIt has migrating_to18:36
artomBut that's it18:36
artomThe one in the cache didn't have it18:36
sean-k-mooneyah ok18:36
sean-k-mooneythen ya i guess you could assert that18:36
artomhttp://paste.openstack.org/show/797305/ is from the cache18:37
sean-k-mooneythe one in the cache has not been update at all then18:37
artomOK, really have to bounce18:37
sean-k-mooneyartom: cool but just so you know you just showed there is something you can test in a follow up patch :P18:37
sean-k-mooneyyou also showed that its doint the right thing18:38
artomsean-k-mooney, checking that "migrating_to" is *not* in the VIF?18:38
sean-k-mooneysince the cache does not have any of the chagne done during migration18:38
sean-k-mooneyyep18:38
artomAh, yeah true :)18:38
* artom is looking for his shades18:38
sean-k-mooneygo do dad taxi :)18:39
*** k_mouza has joined #openstack-nova18:39
*** k_mouza has quit IRC18:40
*** k_mouza has joined #openstack-nova18:41
*** k_mouza has quit IRC18:44
*** belmoreira has joined #openstack-nova18:47
*** k_mouza has joined #openstack-nova18:51
*** zzzeek has quit IRC18:54
*** zzzeek has joined #openstack-nova18:56
*** belmoreira has quit IRC19:20
*** k_mouza has quit IRC19:34
*** tbachman has joined #openstack-nova20:05
*** tosky has joined #openstack-nova20:15
*** nweinber has quit IRC20:17
mnaseris there a 'pre-release' checklist for nova20:27
mnaserit'd be nice if some migrations that hit the same table were combined, s=>t includes adding vpmem and resources to instance_extra20:28
mnaserso going through that painful migration once might be nicer/easier20:28
mriedemmnaser: closest is probably here now https://docs.openstack.org/nova/latest/contributor/ptl-guide.html20:33
mriedemhttps://wiki.openstack.org/wiki/Nova/ReleaseChecklist is older20:33
mriedemwith what your asking for there it's probably hard to do when you're working on separate features, db schema migrations aren't rolled together at the end of the release, they happen as features land20:34
mriedem*you're20:35
mnasermriedem: yeah i guess going on the 'every commit is releasable' makes this a little impossible20:35
mriedemthose didn't actually migrate data did they? just alter table to add the column on a big table?20:36
mnasermriedem: yeah alter table on a big table indeed20:36
*** andrewbogott has left #openstack-nova20:37
mnaserin this case two migrations on instance_extra20:38
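A hedged sketch of that idea, not an actual nova migration: fold both instance_extra additions into one ALTER so the big table is only rewritten once (column names, types and the MySQL syntax are assumptions based on the discussion):

    def upgrade(migrate_engine):
        # one table rewrite instead of two back-to-back migrations
        migrate_engine.execute(
            "ALTER TABLE instance_extra "
            "ADD COLUMN vpmems TEXT, "
            "ADD COLUMN resources TEXT")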
*** vishalmanchanda has quit IRC21:15
*** raildo has quit IRC21:41
*** mriedem has left #openstack-nova21:57
*** slaweq has quit IRC22:15
*** artom has quit IRC22:30
*** artom has joined #openstack-nova22:31
*** READ10 has joined #openstack-nova22:35
*** stephenfin has joined #openstack-nova22:37
*** rcernin has joined #openstack-nova22:51
*** tosky has quit IRC23:03
*** avolkov has quit IRC23:29
*** hamalq has quit IRC23:33
*** READ10 has quit IRC23:49
