*** k_mouza has quit IRC | 00:01 | |
*** k_mouza has joined #openstack-glance | 01:07 | |
*** k_mouza has quit IRC | 01:11 | |
*** hamalq has quit IRC | 01:30 | |
*** k_mouza has joined #openstack-glance | 01:39 | |
*** k_mouza has quit IRC | 01:43 | |
*** gyee has quit IRC | 02:06 | |
*** rcernin has quit IRC | 03:03 | |
*** rcernin has joined #openstack-glance | 03:34 | |
*** rcernin has quit IRC | 03:38 | |
*** rcernin has joined #openstack-glance | 04:08 | |
*** ratailor has joined #openstack-glance | 04:33 | |
*** ajitha has joined #openstack-glance | 04:50 | |
*** ralonsoh has joined #openstack-glance | 05:37 | |
*** hoonetorg has quit IRC | 06:41 | |
*** belmoreira has joined #openstack-glance | 06:58 | |
*** ricolin has quit IRC | 07:18 | |
*** pgaxatte has joined #openstack-glance | 07:48 | |
pgaxatte | hello | 07:48 |
pgaxatte | does anyone know if there is a way to gracefully restart glance-api by waiting for the end of all image uploads? | 07:49 |
pgaxatte | I can prevent a glance node from receiving new requests but how do I wait for the current ones to end? | 07:50 |
*** rcernin has quit IRC | 07:56 | |
jokke | pgaxatte: hey, yes there is | 09:14 |
jokke | pgaxatte: just looking it up for you. | 09:18 |
jokke | pgaxatte: so you can reload the config and gracefully restart the API by sending it SIGHUP. That will signal all the child processes to die once they are done processing whatever is in flight, and once the old processes are all killed the service will reload the config and bring the workers back up | 09:24 |
pgaxatte | awesome! | 09:25 |
pgaxatte | so the systemd and init script could implement the reload | 09:26 |
jokke | If you want to properly restart the whole thing rather than just reload the workers, you'll need to monitor it yourself and then stop the process. I'd say SIGHUP for reload and once the reload is done, then just kill and start | 09:26 |
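A minimal sketch of the workflow described above, assuming glance-api runs as a multi-worker WSGI server and you know the master process PID; the PID-file path is purely illustrative:

```python
import os
import signal
import time

# Illustrative PID-file path -- adjust to however your deployment tracks
# the glance-api master process.
with open("/var/run/glance/glance-api.pid") as f:
    master_pid = int(f.read().strip())

# Graceful reload: the master re-reads its config, and each worker exits
# only after finishing the requests it already has in flight.
os.kill(master_pid, signal.SIGHUP)

# Full restart (e.g. an upgrade): there is no built-in "wait then exit",
# so you watch the service yourself and then stop/start it.
time.sleep(60)  # naive stand-in for real monitoring of in-flight requests
os.kill(master_pid, signal.SIGTERM)
# ...then start glance-api again via systemd / your init script.
```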
*** k_mouza has joined #openstack-glance | 09:27 | |
pgaxatte | jokke: OK so there is a way to gracefully reload but not gracefully stop | 09:28 |
jokke | and the healthcheck middleware is handy to bleed the process from in-flight requests if you're using that for loadbalancer as you can just touch the file it monitors and it will signal that the service is not accepting new requests | 09:28 |
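If the healthcheck middleware is enabled with its disable-by-file plugin, draining a node from the load balancer can be as simple as creating the file it watches. A hedged sketch; the file path, URL, and use of `requests` are assumptions about the local setup:

```python
import pathlib
import time

import requests  # assumed to be available

# Path configured as disable_by_file_path for the healthcheck middleware
# (an assumption -- check your api-paste.ini / glance-api.conf).
disable_file = pathlib.Path("/etc/glance/healthcheck_disable")

# Creating the file makes /healthcheck start returning 503, so the load
# balancer stops handing this node new requests while existing ones drain.
disable_file.touch()

while requests.get("http://localhost:9292/healthcheck").status_code != 503:
    time.sleep(1)
print("node is out of rotation; wait for in-flight uploads to finish")
```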
jokke | pgaxatte: I'm looking at the signal handling and by the looks of it, we do not have that wait on shutdown, which likely would not be a bad idea to do and should be pretty simple to implement as we basically have all the logic for it there | 09:30 |
pgaxatte | jokke: one issue is that it can take ages to wait for uploads to end | 09:33 |
jokke | yup | 09:34 |
pgaxatte | I found the signal handling in common/wsgi.py, I'll take a look to further understand the process | 09:35 |
jokke | and that's why the disable-by-file is there in the healthcheck, but it looks like all our child-waiting code is actually written around the reload | 09:36 |
pgaxatte | also I'll check the healthcheck middleware | 09:36 |
jokke | yeah, that's what I'm looking at in common/wsgi.py | 09:36 |
jokke | so the functions waiting for the child processes to finish will respawn them after | 09:36 |
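Not glance's actual common/wsgi.py, just an illustration of the split being described: the parent already waits on its children, it just always respawns them, so a graceful shutdown mostly means skipping that respawn:

```python
import os
import signal


class ParentSketch:
    """Toy model of a pre-forking WSGI parent (illustrative only)."""

    def __init__(self):
        self.children = set()
        self.shutting_down = False

    def handle_graceful_shutdown(self, signum, frame):
        # Hypothetical handler: stop accepting work, let workers finish.
        self.shutting_down = True
        for pid in self.children:
            os.kill(pid, signal.SIGTERM)

    def wait_on_children(self):
        # Today's logic waits for a child and immediately respawns it,
        # which is what makes SIGHUP a reload rather than a shutdown.
        while self.children:
            pid, _status = os.wait()
            self.children.discard(pid)
            if not self.shutting_down:
                self.children.add(self.spawn_worker())

    def spawn_worker(self):
        raise NotImplementedError("fork a real WSGI worker here")
```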
jokke | it's super handy for making store config changes etc., but not exactly optimal for, say, upgrading the service, as you would still need to monitor it yourself once all the connections are closed | 09:38 |
pgaxatte | yeah and the only way i found to monitor this is by looking at the established tcp sockets on glance's port | 09:39 |
jokke | YUP! ;) | 09:39 |
pgaxatte | which, in some contexts like K8s, is not always possible/easy | 09:39 |
jokke | Heh, there is pretty much nothing in the process handling written with containerization in mind | 09:40 |
pgaxatte | no, it relies heavily on the process itself reporting whether it's ready/healthy | 09:41 |
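A sketch of the socket-counting approach mentioned above, usable where you can see the host's network namespace; psutil and the default 9292 API port are assumptions:

```python
import time

import psutil  # assumed to be installed

GLANCE_API_PORT = 9292  # glance-api default; adjust if overridden


def inflight_connections(port=GLANCE_API_PORT):
    """Count ESTABLISHED TCP connections terminating on the API port."""
    return sum(
        1
        for conn in psutil.net_connections(kind="tcp")
        if conn.status == psutil.CONN_ESTABLISHED and conn.laddr.port == port
    )


# Poll until the uploads drain, then it is safe to stop the service.
while inflight_connections():
    time.sleep(5)
```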
pgaxatte | anyway a graceful shutdown would be nice when big data transfers are occurring (looking at you too Cinder :D) | 09:42 |
jokke | If you have time and interest to poke around, I'll happily review it. Like said, pretty much all the logic is in there, would just need to be split out to prevent the immediate respawn | 09:44 |
jokke | Or at least propose a light spec for it, cause I can promise you, I will have forgotten this discussion by Monday at the rate I'm looking at things currently :P | 09:45 |
pgaxatte | jokke: sure, i'd love to submit code on that but I'm not sure I can find enough time to do something useful | 09:46 |
pgaxatte | haha no problem :) | 09:46 |
jokke | that'd be great low hanging fruit for anyone wanting to get involved | 09:47 |
jokke | it's pretty well isolated and simple enough once you understand what those few functions do there in the current code base | 09:47 |
pgaxatte | I'll start by registering a new blueprint and we'll see from there :) | 09:49 |
jokke | sounds great | 09:49 |
pgaxatte | there you go: https://blueprints.launchpad.net/glance/+spec/graceful-api-shutdown | 09:54 |
jokke | :) | 10:11 |
*** k_mouza has quit IRC | 10:15 | |
*** k_mouza has joined #openstack-glance | 10:19 | |
*** dasp has quit IRC | 10:31 | |
*** PrinzElvis has quit IRC | 11:01 | |
*** dasp has joined #openstack-glance | 11:57 | |
*** openstackgerrit has joined #openstack-glance | 12:05 | |
openstackgerrit | Mike proposed openstack/glance_store master: vmware: Use cookiejar from oslo.vmware client directly https://review.opendev.org/c/openstack/glance_store/+/787715 | 12:05 |
openstackgerrit | Mike proposed openstack/glance_store master: vmware: Use cookiejar from oslo.vmware client directly https://review.opendev.org/c/openstack/glance_store/+/787715 | 12:09 |
*** ratailor has quit IRC | 12:35 | |
*** whoami-rajat has joined #openstack-glance | 14:04 | |
*** k_mouza has quit IRC | 14:14 | |
*** k_mouza has joined #openstack-glance | 14:28 | |
*** pgaxatte has quit IRC | 14:33 | |
dansmith | whoami-rajat: around? do you have a DNM against glance to test your store change in the glance-cinder job? | 14:38 |
whoami-rajat | dansmith: ah, i thought my legacy tests update patch depends on the store changes, let me propose one quickly | 14:41 |
dansmith | whoami-rajat: I'm about to, just a sec | 14:41 |
openstackgerrit | Dan Smith proposed openstack/glance master: DNM Test glance_store cinder attach changes https://review.opendev.org/c/openstack/glance/+/788963 | 14:42 |
whoami-rajat | dansmith: thanks! | 14:44 |
dansmith | np ;) | 14:44 |
*** rcernin has joined #openstack-glance | 14:48 | |
*** rcernin has quit IRC | 14:52 | |
*** k_mouza has quit IRC | 14:52 | |
*** hoonetorg has joined #openstack-glance | 15:10 | |
*** rosmaita has quit IRC | 15:10 | |
*** k_mouza has joined #openstack-glance | 15:12 | |
*** belmoreira has quit IRC | 15:21 | |
*** rosmaita has joined #openstack-glance | 15:24 | |
*** k_mouza has quit IRC | 15:26 | |
*** k_mouza has joined #openstack-glance | 15:26 | |
dansmith | whoami-rajat: https://zuul.opendev.org/t/openstack/build/3d76072f4d62420f91a43810847cde73/log/controller/logs/screen-g-api.txt#1105 | 15:26 |
openstackgerrit | Dan Smith proposed openstack/glance_store master: Add cinder's new attachment support https://review.opendev.org/c/openstack/glance_store/+/782200 | 15:29 |
dansmith | whoami-rajat: I added the missing unit test for you ^ | 15:29 |
dansmith | other than the job fail, I think that's looking good now. thanks for your patience with my questions :) | 15:29 |
*** gyee has joined #openstack-glance | 15:32 | |
whoami-rajat | dansmith: thanks but you can just leave a comment and i will do the changes :) I remember changing the delete method to test the retry is called but don't know why i didn't write a test for it :P anyway thanks! | 15:32 |
whoami-rajat | dansmith: the job fail is while configuring tempest during devstack which looks related to glance too... | 15:33 |
dansmith | whoami-rajat: it's because it fails to upload the cirros image | 15:33 |
dansmith | and there's a cinder client error in the task, and looks like there is some missing argument to the client | 15:34 |
dansmith | whoami-rajat: I know you can do the changes, but I don't want to wear you out with small comments and helping with some of the things I ask for makes me feel better :) | 15:34 |
whoami-rajat | dansmith: strange, doesn't the job consider depends on patches also? | 15:37 |
whoami-rajat | the missing argument is 'instance_id' which I've already made optional here https://review.opendev.org/c/openstack/python-cinderclient/+/783628/1/cinderclient/v3/attachments.py | 15:37 |
whoami-rajat | and the glance_store patch depends on this | 15:38 |
dansmith | ah, so we need to make the job build cinderclient from source? | 15:38 |
openstackgerrit | Dan Smith proposed openstack/glance master: DNM Test glance_store cinder attach changes https://review.opendev.org/c/openstack/glance/+/788963 | 15:39 |
dansmith | there ^ | 15:39 |
whoami-rajat | I'm really not an expert on gate things, anything that would include my cinderclient patch :D | 15:39 |
whoami-rajat | great! | 15:39 |
whoami-rajat | dansmith: but there's also a cinder change that the python-cinderclient patch depends on :P | 15:39 |
whoami-rajat | https://review.opendev.org/c/openstack/cinder/+/783389 | 15:40 |
dansmith | is it not merged yet? | 15:40 |
whoami-rajat | no, only one +2 | 15:41 |
dansmith | okay, well, I think if something in the chain depends on it you should get it | 15:44 |
dansmith | we can probably see from the devstack log | 15:44 |
dansmith | HEAD is now at 0d7399b30 Merge commit 'refs/changes/89/783389/5' of ssh://review.opendev.org:29418/openstack/cinder into HEAD | 15:45 |
dansmith | whoami-rajat: ^ so I think cinder should be set properly | 15:46 |
whoami-rajat | dansmith: cool, any idea why we didn't get cinderclient then? | 15:47 |
dansmith | because it's a library | 15:47 |
dansmith | so we have to specifically ask for those to come from git instead of release if we want them | 15:47 |
dansmith | like glance_store itself | 15:48 |
whoami-rajat | hmm, I understand it, but I'm not certain why libraries are taken from the last release and projects from the master branch | 15:54 |
dansmith | because most of the time we want to make sure that what we're testing works against the latest release of the library | 15:55 |
dansmith | but it's just a convention | 15:55 |
whoami-rajat | ok got it, thanks | 15:56 |
dansmith | whoami-rajat: looks like that job just passed the cirros image upload | 16:08 |
whoami-rajat | after 1 failure, it sounds good now | 16:09 |
*** k_mouza has quit IRC | 16:44 | |
*** hamalq has joined #openstack-glance | 17:19 | |
*** hamalq has quit IRC | 17:20 | |
*** hamalq has joined #openstack-glance | 17:21 | |
*** rajivmucheli has joined #openstack-glance | 17:24 | |
dansmith | whoami-rajat: I'm sure you're gone for the day already, but maybe later... any idea what is up with this? https://zuul.opendev.org/t/openstack/build/74c3d110d12140c9ac39c2524b7a2668/log/controller/logs/screen-g-api.txt#14152 | 17:26 |
*** stephenfin is now known as stephenfin|PTOin | 17:27 | |
*** stephenfin|PTOin is now known as stephenfin|PTO | 17:28 | |
*** rajivmucheli has quit IRC | 17:32 | |
*** lbragstad_ has quit IRC | 18:04 | |
*** priteau has quit IRC | 18:04 | |
*** stand has quit IRC | 18:04 | |
*** lbragstad_ has joined #openstack-glance | 18:07 | |
*** priteau has joined #openstack-glance | 18:07 | |
*** stand has joined #openstack-glance | 18:07 | |
*** hoonetorg has quit IRC | 18:44 | |
*** rcernin has joined #openstack-glance | 18:48 | |
*** hoonetorg has joined #openstack-glance | 18:51 | |
*** rcernin has quit IRC | 18:53 | |
*** ralonsoh has quit IRC | 20:37 | |
*** jv has quit IRC | 20:44 | |
*** rcernin has joined #openstack-glance | 20:48 | |
*** rcernin has quit IRC | 20:53 | |
*** jv has joined #openstack-glance | 20:59 | |
*** ajitha has quit IRC | 20:59 | |
*** hoonetorg has quit IRC | 21:34 | |
*** hoonetorg has joined #openstack-glance | 21:44 | |
*** zzzeek has quit IRC | 21:55 | |
*** zzzeek has joined #openstack-glance | 21:58 | |
*** k_mouza has joined #openstack-glance | 22:44 | |
*** zzzeek has quit IRC | 22:44 | |
*** zzzeek has joined #openstack-glance | 22:47 | |
*** k_mouza has quit IRC | 22:49 |