*** hemna2 is now known as hemna | 07:37 | |
*** priteau_ is now known as priteau | 11:16 | |
*** dasm|off is now known as dasm | 13:09 | |
abhishekk | #startmeeting glance | 14:00 |
opendevmeet | Meeting started Thu Jan 20 14:00:01 2022 UTC and is due to finish in 60 minutes. The chair is abhishekk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:00 |
opendevmeet | The meeting name has been set to 'glance' | 14:00 |
abhishekk | #topic roll call | 14:00 |
abhishekk | #link https://etherpad.openstack.org/p/glance-team-meeting-agenda | 14:00 |
abhishekk | o/ | 14:00 |
abhishekk | Waiting for others to show | 14:02 |
rosmaita | o/ | 14:03 |
rajiv | Hey | 14:03 |
abhishekk | cool, let's start, | 14:04 |
abhishekk | maybe others will show up in between | 14:04 |
abhishekk | #topic release/periodic jobs update | 14:04 |
abhishekk | Milestone 3 is 6 weeks from now | 14:04 |
abhishekk | Possible targets for M3 | 14:04 |
abhishekk | Cache API | 14:05 |
abhishekk | Stores detail API | 14:05 |
abhishekk | Unified limits usage API | 14:05 |
abhishekk | Append existing metadef tags | 14:05 |
abhishekk | So these are some of the important work items we are targeting for M3 | 14:05 |
abhishekk | Will ping for reviews as and when they are up | 14:05 |
abhishekk | Non-Client library release - 5 weeks | 14:05 |
abhishekk | We do need to release glance-store by next week with the V2 clone fix | 14:06 |
abhishekk | Periodic jobs all green | 14:06 |
abhishekk | #topic Cache API | 14:06 |
abhishekk | Cache API base patch is up for review; there are a couple of suggestions from dansmith, I will fix them | 14:07 |
abhishekk | Tempest coverage is in progress | 14:07 |
abhishekk | #link https://review.opendev.org/c/openstack/glance/+/825115 | 14:07 |
abhishekk | I am planning to cover more cache APIs and scenarios; it will be open for reviews before the next meeting | 14:08 |
abhishekk | #topic Devstack CephAdmin plugin | 14:08 |
abhishekk | #link http://lists.openstack.org/pipermail/openstack-discuss/2022-January/026778.html | 14:08 |
abhishekk | There will be efforts to create a new cephadm devstack plugin | 14:09 |
abhishekk | I will sync with Victoria for more information | 14:09 |
abhishekk | from the glance perspective, we need to make sure that this plugin can deploy ceph with both single-store and multi-store configurations | 14:10 |
abhishekk | that's it from me for today | 14:10 |
abhishekk | rosmaita, do you have any inputs to add about cephadm plugin? | 14:11 |
rosmaita | no, i think sean mooney's response to vkmc's initial email is basically correct | 14:11 |
rosmaita | that is, do the work in the current devstack-plugin-ceph, don't make a new one | 14:12 |
abhishekk | yes, I went through it | 14:13 |
abhishekk | let's see how it goes | 14:13 |
abhishekk | #topic Open discussion | 14:14 |
abhishekk | I don't have anything to add | 14:14 |
jokke_ | I guess it's just a matter of changing devstack to deploy with the new tooling Ceph introduced | 14:14 |
jokke_ | not sure if there's anything else really to it for now | 14:14 |
abhishekk | likely | 14:14 |
abhishekk | anything else to discuss or we should wrap this up? | 14:16 |
jokke_ | abhishekk: I saw you had revived the cache management API patch but didn't see any of the negative tests you held it back from merging for last cycle ... are we still expecting a new PS for that? | 14:16 |
abhishekk | jokke_, yes, I am working on those | 14:16 |
jokke_ | I still have no idea what you meant by that, so I can't tell if I just missed them, but there was nothing added | 14:17 |
jokke_ | kk | 14:17 |
abhishekk | Nope, I haven't pushed those yet as I am facing some issues | 14:17 |
abhishekk | Take one scenario, for example | 14:18 |
abhishekk | create an image without any data (queued status) | 14:18 |
abhishekk | add that image to the queue for caching, and it gets added to the queue | 14:19 |
abhishekk | So I am wondering whether we should add some validation there (e.g., non-active images should not be added to the queue) | 14:19 |
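A minimal sketch of the validation abhishekk is proposing here. The function name and wiring are made up for illustration; only the status check is the point:

```python
def queue_image_for_caching(image, cache):
    # A 'queued' image has no data uploaded yet, so there is nothing
    # to cache; only active images make sense in the cache queue.
    if image.status != 'active':
        raise ValueError(
            "Image %s is in status '%s'; only active images can be "
            "queued for caching" % (image.image_id, image.status))
    # glance's image cache exposes queue_image(); treated here as a
    # given dependency.
    cache.queue_image(image.image_id)
```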
jokke_ | up to you ... I tried to get the API entry point moved last cycle and was very clear that I had no interest in changing the actual logic that happens in the caching module ... IMHO those things should be bugfixes and changed in their own patches | 14:21 |
jokke_ | but you do as you wish with them | 14:21 |
abhishekk | ack | 14:21 |
abhishekk | sounds good | 14:21 |
abhishekk | anything else to add ? | 14:22 |
abhishekk | croelandt, ^ | 14:22 |
jokke_ | it makes sense to fix issues like that and the bug I filed asap for the new API endpoints so we're not breaking them right after release ;) | 14:22 |
jokke_ | but IMO they are not related to moving the endpoints from the middleware to the actual API | 14:22 |
croelandt | abhishekk: nope :D | 14:23 |
abhishekk | yes, they are not, but I am thinking of doing it at this point anyway | 14:24 |
abhishekk | croelandt, ack | 14:24 |
dansmith | o/ | 14:24 |
abhishekk | hey | 14:24 |
abhishekk | we are done for today | 14:25 |
dansmith | sweet :) | 14:25 |
abhishekk | dansmith, do you have anything to add ? | 14:25 |
rajiv | hi, i would like to follow up on this bug : https://bugs.launchpad.net/python-swiftclient/+bug/1899495 | 14:25 |
dansmith | nope | 14:25 |
abhishekk | I have the cache tempest base work up; if you have time, please have a look | 14:25 |
rosmaita | i must say, it is nice to see all this tempest work for glance happening | 14:26 |
dansmith | I saw yesterday yep | 14:26 |
dansmith | rosmaita: ++ | 14:26 |
abhishekk | rajiv, unfortunately I didn't get time to go through it much | 14:27 |
jokke_ | rajiv: I just read Tim's last comment on it | 14:27 |
jokke_ | rajiv: have you actually confirmed the scenario, that it happens when there are other images in the container? | 14:28 |
abhishekk | I just need input on whether we wait for the default cache periodic time (5 minutes) or set it to a shorter time in zuul.yaml | 14:28 |
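For context on that trade-off: rather than sleeping for the full periodic interval, a tempest test can poll until the cache reflects the image, bounded by a timeout. A rough sketch; `cache_client.list_cache()` is a placeholder for whichever tempest client method wraps GET /v2/cache, and the response shape is assumed:

```python
import time

def wait_for_image_cached(cache_client, image_id, timeout=330, interval=5):
    # Poll the cache-list API instead of sleeping for the full
    # 5-minute periodic interval; return as soon as the image is seen.
    start = time.time()
    while time.time() - start < timeout:
        body = cache_client.list_cache()  # placeholder client call
        cached = body.get('cached_images', [])
        if any(entry['image_id'] == image_id for entry in cached):
            return
        time.sleep(interval)
    raise TimeoutError('image %s was not cached within %s seconds'
                       % (image_id, timeout))
```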
rajiv | jokke_: yes, i replied to the comment, we have already implemented it but it didn't help | 14:28 |
jokke_ | rajiv: ok, so the 500 is coming from swift, not from Glance? | 14:28 |
rajiv | since i have nginx in the middle in my containerised setup, i am unable to validate the source | 14:30 |
jokke_ | kk, I'll try to give it another look and see if I can come up with something that could work based on Tim's comment | 14:31 |
rosmaita | rajiv: looking at your last comment in the bug, i think it's always possible to get a 5xx response even though we didn't list them in the api-ref | 14:32 |
rajiv | 409 for sure comes from swift/client.py but 500 from glance | 14:33 |
jokke_ | Ok, that's what I was asking, so the 500 is coming from glance, swift correctly returns 409 | 14:33 |
rajiv | 2022-01-20 02:02:01,536.536 23 INFO eventlet.wsgi.server [req-7cd63508-bed1-4c5f-b2cc-7f0e93907813 60d12fe738fe73aeea4219a0b3b9e55c8435b55455e7c9f144eece379d88f252 a2caa84313704823b7321b3fb0fc1763 - ec213443e8834473b579f7bea9e8c194 ec213443e8834473b579f7bea9e8c194] 10.236.203.62,100.65.1.96 - - [20/Jan/2022 02:02:01] "DELETE /v2/images/5f3c87fd-9a0e-4d61-88f9-301e3f01309d HTTP/1.1" 500 430 28.849376 | 14:33 |
abhishekk | rajiv, any stack trace ? | 14:34 |
rajiv | abhishekk: not more than this :( | 14:34 |
abhishekk | ack | 14:34 |
rajiv | 2022-01-20 02:02:01,469.469 23 ERROR glance.common.wsgi [req-7cd63508-bed1-4c5f-b2cc-7f0e93907813 60d12fe738fe73aeea4219a0b3b9e55c8435b55455e7c9f144eece379d88f252 a2caa84313704823b7321b3fb0fc1763 - ec213443e8834473b579f7bea9e8c194 ec213443e8834473b579f7bea9e8c194] Caught error: Container DELETE failed: https://objectstore-3.eu-de-1.cloud.sap:443/v1/AUTH_a2caa84313704823b7321b3fb0fc1763/glance_5f3c87fd-9a0e-4d61-88f9-301e3f01309d 409 Conflict [ | 14:35 |
jokke_ | so we do always expect to whack the container. I'm wondering if we really do store one image per container and it doesn't get properly deleted, or if there is a chance of having multiple images in that one container and it's really just cleanup we fail to catch | 14:35 |
rajiv | it's 1 container per image | 14:35 |
rajiv | and 200 MB segments inside the container | 14:36 |
jokke_ | I thought it should | 14:36 |
jokke_ | so it's really a problem of the segments not getting deleted | 14:36 |
rajiv | yes, our custom code retries deletion 5 times in case of a conflict | 14:36 |
rajiv | and the wait time was increased from 1 to 5 seconds, but we had no luck | 14:37 |
rajiv | code : https://github.com/sapcc/glance_store/blob/stable/xena-m3/glance_store/_drivers/swift/store.py#L1617-L1639 | 14:37 |
jokke_ | I wonder what would happen if, instead of trying to delete the object and then the container, we just asked swiftclient to delete the container recursively | 14:38 |
jokke_ | and let it deal with it, would the result be the same | 14:39 |
rajiv | yes, i tried this as well but had the same results | 14:39 |
jokke_ | ok, thanks | 14:39 |
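What jokke_ is describing maps onto python-swiftclient's service API, which can delete a container together with everything in it. A minimal sketch, assuming auth is already configured via the usual OS_* environment variables and with a made-up container name:

```python
from swiftclient.service import SwiftService

# Passing only a container (no object list) to delete() removes all
# objects in it -- manifest and segments alike -- and then the
# container itself.
with SwiftService() as swift:
    for result in swift.delete(container='glance_<image-id>'):
        if not result['success']:
            print('delete failed: %s' % result)
```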
rajiv | does the code need to be time.sleep(self.container_delete_timeout)? https://github.com/sapcc/glance_store/blob/stable/xena-m3/glance_store/_drivers/swift/store.py#L1637 | 14:39 |
abhishekk | no | 14:39 |
abhishekk | https://github.com/sapcc/glance_store/blob/2cb722c22a085ee9cdf77d39e37d2955f48811c3/glance_store/_drivers/swift/store.py#L37 | 14:40 |
rajiv | i see a similar spec in cinder, hence i asked : https://github.com/sapcc/glance_store/blob/stable/xena-m3/glance_store/_drivers/cinder.py#L659 | 14:40 |
jokke_ | let's try to get on the next swift weekly and see if they have any better ideas why this happens and how to get around it, now that we know that it's for sure a 1:1 relation and it's really swift not deleting the segments | 14:40 |
rajiv | abhishekk: ack | 14:41 |
abhishekk | wait | 14:41 |
abhishekk | this is bad coding practice, but it will work | 14:42 |
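For illustration, here is one way the wait interval could be made properly configurable with oslo.config instead of hardcoding a module-level constant or an instance attribute; the option name, group, and default are hypothetical:

```python
from oslo_config import cfg

CONF = cfg.CONF

_retry_opts = [
    # Hypothetical option; the real glance_store option name and
    # default may differ.
    cfg.IntOpt('container_delete_interval',
               default=1,
               help='Seconds to wait between retries when a container '
                    'DELETE returns 409 Conflict.'),
]
CONF.register_opts(_retry_opts, group='glance_store')

# Inside the store driver, the sleep then becomes:
#     time.sleep(CONF.glance_store.container_delete_interval)
```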
abhishekk | Let's move this to the glance IRC channel | 14:42 |
rajiv | sure | 14:43 |
abhishekk | thank you all | 14:43 |
abhishekk | #endmeeting | 14:43 |
opendevmeet | Meeting ended Thu Jan 20 14:43:50 2022 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 14:43 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/glance/2022/glance.2022-01-20-14.00.html | 14:43 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/glance/2022/glance.2022-01-20-14.00.txt | 14:43 |
opendevmeet | Log: https://meetings.opendev.org/meetings/glance/2022/glance.2022-01-20-14.00.log.html | 14:43 |
*** dasm is now known as dasm|off | 23:13 |