Thursday, 2020-07-23

00:03 *** gyee has quit IRC
01:00 *** k_mouza has joined #openstack-glance
01:05 *** k_mouza has quit IRC
02:07 *** k_mouza has joined #openstack-glance
02:11 *** k_mouza has quit IRC
02:17 *** mvkr has quit IRC
02:17 *** baojg has quit IRC
02:18 *** baojg has joined #openstack-glance
02:29 *** rosmaita has left #openstack-glance
02:30 *** mvkr has joined #openstack-glance
03:01 <openstackgerrit> Cyril Roelandt proposed openstack/glance-specs master: Spec for Glance cache API  https://review.opendev.org/665258
03:08 *** k_mouza has joined #openstack-glance
03:13 *** k_mouza has quit IRC
03:28 *** rcernin has quit IRC
03:43 *** rcernin has joined #openstack-glance
03:47 *** rcernin has quit IRC
03:49 *** rcernin has joined #openstack-glance
03:49 *** k_mouza has joined #openstack-glance
03:53 *** k_mouza has quit IRC
04:43 *** udesale has joined #openstack-glance
05:11 *** eandersson has quit IRC
05:12 *** eandersson has joined #openstack-glance
05:35 *** m75abrams has joined #openstack-glance
06:08 *** nikparasyr has joined #openstack-glance
06:51 <openstackgerrit> Abhishek Kekane proposed openstack/glance-specs master: Introspect import plugin to calculate virtual size of image  https://review.opendev.org/741121
06:57 *** ralonsoh has joined #openstack-glance
07:17 *** amoralej|off is now known as amoralej
07:19 <openstackgerrit> Abhishek Kekane proposed openstack/glance master: Fix broken glance-cache-manage utility  https://review.opendev.org/742115
07:34 *** brinzhang_ has joined #openstack-glance
07:34 <brinzhang_> hi, I was updating my devstack and ran unstack.sh && stack.sh again, but it hits a 503 error when running curl -g -k --noproxy '*' -s -o /dev/null -w '%{http_code}' http://192.168.222.12/image, and it cannot start the g-api service
07:34 <brinzhang_> more details in http://paste.openstack.org/show/796236/
07:35 <brinzhang_> Are there any important changes in glance recently?
07:35 <brinzhang_> about devstack
08:06 *** jmlowe has quit IRC
08:08 *** jmlowe has joined #openstack-glance
08:31 *** k_mouza has joined #openstack-glance
08:56 *** rcernin has quit IRC
09:05 *** k_mouza has quit IRC
09:10 <openstackgerrit> Merged openstack/glance stable/train: Fix reloading under PY3  https://review.opendev.org/713720
09:13 *** k_mouza has joined #openstack-glance
09:30 * abhishekk will be back at 1400 UTC
09:43 *** k_mouza has quit IRC
09:44 *** k_mouza has joined #openstack-glance
09:46 *** k_mouza has quit IRC
10:06 *** tristan888 has joined #openstack-glance
10:07 *** k_mouza has joined #openstack-glance
11:03 *** udesale_ has joined #openstack-glance
11:06 *** udesale has quit IRC
11:54 *** bhagyashris is now known as bhagyashris|afk
12:07 *** tristan888 has quit IRC
12:09 *** rosmaita has joined #openstack-glance
12:22 *** bhagyashris|afk is now known as bhagyashris
12:23 *** k_mouza has quit IRC
12:25 *** amoralej is now known as amoralej|lunch
12:42 *** k_mouza has joined #openstack-glance
12:53 *** baojg has quit IRC
12:54 *** baojg has joined #openstack-glance
13:26 *** amoralej|lunch is now known as amoralej
13:30 <dansmith> brinzhang_: enable_service tls-proxy
13:30 <dansmith> brinzhang_: that will make your devstack deploy the way CI does it
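
For reference, a minimal local.conf fragment matching dansmith's suggestion; the HOST_IP value is taken from brinzhang_'s own curl check above and is only illustrative:

    [[local|localrc]]
    HOST_IP=192.168.222.12        # use your own host IP
    enable_service tls-proxy      # front glance-api with the TLS proxy, as the CI jobs do

After re-stacking, the same health-check curl against /image should no longer return 503 (note the scheme may become https once tls-proxy is in front).
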
13:35 *** baojg has quit IRC
13:37 *** baojg has joined #openstack-glance
13:58 <abhishekk> rosmaita, smcginnis, jokke: glance weekly meeting in 5 mins at #openstack-meeting
13:58 <rosmaita> abhishekk: ty
13:59 <smcginnis> Conflict as usual for me, but will try to lurk and catch up as I can.
13:59 <rosmaita> same here
13:59 <abhishekk> ack
14:04 *** m75abrams has quit IRC
14:08 *** alistarle has joined #openstack-glance
14:11 *** alistarle has quit IRC
14:11 *** alistarle has joined #openstack-glance
14:17 *** alistarle has quit IRC
14:24 <gmann> abhishekk: regarding this https://review.opendev.org/#/c/742546/6/tempest/api/image/v2/test_images.py@160
14:25 <gmann> abhishekk: is this os_glance_importing_to_stores added as a property in the API also?
14:25 <abhishekk> gmann, in meeting atm
14:29 *** alistarle has joined #openstack-glance
14:31 *** alistarle has quit IRC
14:32 *** k_mouza has quit IRC
14:32 *** k_mouza has joined #openstack-glance
14:35 *** alistarle has joined #openstack-glance
14:42 *** alistarle has quit IRC
14:44 *** alistarle has joined #openstack-glance
14:55 <abhishekk> gmann, it will be available in the GET response, if that is what you are asking
14:57 <gmann> abhishekk: and if inject_image_metadata is enabled, right?
14:57 <abhishekk> gmann, it has nothing to do with inject_image_metadata
14:58 <gmann> abhishekk: but I cannot see that in the GET response always
14:58 <abhishekk> gmann, it will be added only after the import call is initialized
14:59 <gmann> abhishekk: I see. I am just adding a sleep for testing and later I can add a wait for this property.
14:59 <abhishekk> gmann, sounds good
15:00 <abhishekk> let me know if you need anything
15:00 * abhishekk needs to attend another meeting
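
A rough sketch of the waiter gmann mentions, replacing the temporary sleep: poll the image until os_glance_importing_to_stores has emptied out. The client object and its show_image call stand in for whatever tempest image client the test actually uses; adjust to the real API.

    import time

    def wait_for_import(client, image_id, timeout=300, interval=5):
        """Poll until the image has finished importing to all requested stores.

        Assumes ``client.show_image(image_id)`` returns the image as a dict,
        as the tempest v2 image client does; adapt names as needed.
        """
        start = time.time()
        while time.time() - start < timeout:
            image = client.show_image(image_id)
            # os_glance_importing_to_stores shrinks as each store finishes;
            # os_glance_failed_import collects the stores that failed.
            if not image.get('os_glance_importing_to_stores'):
                return image
            time.sleep(interval)
        raise TimeoutError('image %s still importing after %ds' % (image_id, timeout))
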
15:03 <dansmith> rosmaita: are you coming to the meeting?
15:03 <rosmaita> omw
15:04 <dansmith> sweet
15:04 *** k_mouza has quit IRC
15:05 *** alistarle has quit IRC
15:06 <dansmith> jokke: ?
15:31 <dansmith> rosmaita: sorry, I really didn't mean to jump on you like that
15:31 <dansmith> that was my bad
15:32 <rosmaita> np, as long as you smile when you say that
15:34 <abhishekk> rosmaita, also we are a team here :D
15:42 *** udesale_ has quit IRC
15:48 *** baojg has quit IRC
15:49 *** baojg has joined #openstack-glance
15:58 * abhishekk will be back in an hour
16:01 *** k_mouza has joined #openstack-glance
16:06 <dansmith> rosmaita: here's the current wsgi threading stack, fwiw: https://review.opendev.org/#/q/status:open+project:openstack/glance+branch:master+topic:async-native-threads
16:07 *** brinzhang0 has joined #openstack-glance
16:07 <dansmith> I actually got it working with two lines of code change locally, but that first patch is the proper abstraction
16:07 <rosmaita> nice
16:10 *** brinzhang_ has quit IRC
16:10 *** nikparasyr has left #openstack-glance
16:13 *** k_mouza has quit IRC
16:17 *** gyee has joined #openstack-glance
16:25 *** amoralej is now known as amoralej|off
16:31 <openstackgerrit> Erno Kuvaja proposed openstack/glance stable/train: Return the mod_proxy_uwsgi paragraph  https://review.opendev.org/742704
16:50 <abhishekk> dansmith, around?
16:50 <dansmith> abhishekk: yep
16:50 <abhishekk> we are planning to have a meeting on Monday to discuss the race condition issue
16:51 <abhishekk> what time would be suitable for you?
16:51 <dansmith> hang on, let me do the timezone math
16:52 <dansmith> abhishekk: Monday I can do 1700 UTC (i.e. eight minutes from now), or any time after that
16:52 <dansmith> tuesday I could do earlier
16:52 <dansmith> actually, wait,
16:52 <dansmith> I could skip something and do 1400-1600 as well
16:53 <dansmith> just not 1600-1700
16:53 <abhishekk> Cool, I will schedule it for 1400 to 1500
16:53 <abhishekk> thank you
16:54 <dansmith> ack
16:55 <abhishekk> thank you, I will send the mail to the ML shortly
16:57 <dansmith> cool, thanks, definitely looking forward to getting that resolved
16:57 <abhishekk> ++
17:00 <dansmith> abhishekk: can you explain the scrubber to me real quick to save me having to read/search?
17:00 <abhishekk> the scrubber is based on delaying image deletion
17:01 <abhishekk> there is one config option in glance which marks an image as soft deleted when enabled
17:01 <abhishekk> and later the scrubber can delete those images
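
For context, this is the delayed-delete setup abhishekk is describing; a minimal glance-api.conf fragment, with the scrub_time value purely illustrative:

    [DEFAULT]
    # keep image data around after "delete"; images move to pending_delete instead
    delayed_delete = True
    # seconds an image must sit in pending_delete before it may be scrubbed
    scrub_time = 86400

The actual removal is then done by the separately run glance-scrubber tool, which is what dansmith and abhishekk establish next.
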
17:02 <dansmith> gotcha, and where does the scrubber run? separate daemon run by the operator?
17:02 <abhishekk> separate daemon run by the operator
17:02 <dansmith> okay so I don't need to make that code work without eventlet, it sounds like
17:03 <abhishekk> I think so
17:05 <dansmith> cool
17:10 <openstackgerrit> Dan Smith proposed openstack/glance master: Make glance-api able to do async tasks in WSGI mode  https://review.opendev.org/742065
17:10 <openstackgerrit> Dan Smith proposed openstack/glance master: Make image conversion use a proper python interpreter for prlimit  https://review.opendev.org/742314
17:10 <openstackgerrit> Dan Smith proposed openstack/glance master: Make wsgi_app support graceful shutdown  https://review.opendev.org/742493
17:10 <dansmith> abhishekk: ^ added a knob for native thread pool size, which we can default=1 for the backport
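
The general shape of that knob, as a rough sketch rather than the actual patch above: a process-wide native thread pool whose size comes from configuration, used to run async/import tasks when glance-api is deployed as a pure WSGI app. The option name and helper names here are illustrative, not necessarily the ones in the review.

    from concurrent import futures

    # size would come from a config option (e.g. something like [wsgi]/task_pool_threads);
    # defaulting it to 1 keeps a backport behaviorally close to today.
    _POOL_SIZE = 1
    _pool = None

    def get_thread_pool():
        """Return a process-wide native thread pool for async tasks."""
        global _pool
        if _pool is None:
            _pool = futures.ThreadPoolExecutor(max_workers=_POOL_SIZE)
        return _pool

    def spawn_task(func, *args, **kwargs):
        # In eventlet mode glance would spawn a greenthread instead; under
        # uwsgi a real thread keeps the import from blocking the worker.
        return get_thread_pool().submit(func, *args, **kwargs)
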
17:10 <abhishekk> dansmith, ack
17:11 <abhishekk> dansmith, I think we should also report a bug for the same, so if decided we can backport it as well
17:11 <dansmith> abhishekk: already did: https://bugs.launchpad.net/glance/+bug/1888713
17:11 <openstack> Launchpad bug 1888713 in Glance "Async tasks, image import not supported in pure-WSGI mode" [Undecided,In progress]
17:12 <abhishekk> cool
17:12 <dansmith> the final patch in there presumptively Closes-Bug that bug, because I don't know of any other wsgi shortcomings at this point
17:13 <abhishekk> there is one; it's not a shortcoming but something missing, as we haven't added it for uwsgi
17:13 <abhishekk> periodic job to prefetch image into cache
17:13 <dansmith> okay, but that's a testing thing, not something the bug covers, I think
17:14 <dansmith> does prefetch use an async task?
17:14 <abhishekk> Nope
17:15 <dansmith> okay, the only prefetch I know of is with the manage command, which shouldn't have anything to do with how the api is deployed.. can you point me at something?
17:16 <dansmith> oh, does the manage command call the api? I didn't think it did, but I see the cache doc makes references to urls
17:16 <abhishekk> glance-cache-prefetcher was a utility similar to the scrubber
17:17 <abhishekk> but it had a dependency on the registry, so we moved it under the api and now it runs like a periodic job
17:17 <dansmith> ah, so you request via the API and then that daemon does the thing in the background, separate from the API itself?
17:18 <abhishekk> earlier it was like that
17:18 <abhishekk> now it's part of the API service
17:19 <abhishekk> it just looks in the cache db for any image queued for caching and then downloads that image into the cache via the API
17:20 <dansmith> synchronously?
17:23 <abhishekk> it just spawns a thread and runs it as a daemon
17:23 <jokke> dansmith: glance-cache-manage lands the request through the API into the local sqlite db that is used for caching, and when the service runs, it just periodically spawns a thread that will take care of the fetching
17:23 <jokke> dansmith: and that thread is not part of the wsgi app
17:24 <abhishekk> https://github.com/openstack/glance/blob/master/glance/common/wsgi.py#L453
17:24 <jokke> oh it is :P
17:24 <dansmith> ah, okay, I see
17:24 <jokke> it might actually work without any issues if it's spawned from there
17:24 <abhishekk> :D it is not part of wsgi_app
17:24 <dansmith> jokke: we don't call any of that code
17:24 <jokke> ohh
17:24 <dansmith> from pure wsgi
17:24 <jokke> so yeah, in that case it's broken
17:24 <dansmith> so, just to be clear, the user can still run the standalone daemon-y thing, right?
17:25 <dansmith> because uwsgi will start a bunch of worker processes, each of which shouldn't be spawning their own prefetcher
17:26 <dansmith> if we did that, we'd just need to make sure only one was running at a time, but that's more complex than it needs to be,
17:26 <dansmith> and probably not what people want anyway
17:26 <jokke> nope, I think that's removed; as abhishekk mentioned, it relied on glance-registry, which has been removed
17:26 <dansmith> it's still in the tree
17:26 <dansmith> if it depends on g-reg it should probably be deleted :)
17:26 <jokke> so even if the entry point still exists, either it just blows up or is a noop
17:27 <jokke> yeah
17:27 <abhishekk> dansmith, that is there but it is broken
17:27 <dansmith> :/
17:27 <dansmith> well, we can just do the naive thing and make it a daemon off the wsgi worker, but.. that's not ideal.. how hard can it be to just make that work with the local sqlite db?
17:27 <abhishekk> I think it has been broken since queens or rocky
17:28 <dansmith> mnaser: presumably you're not using this image prefetcher thing, right?
17:29 <abhishekk> dansmith, I will try to run this tomorrow on the uwsgi setup I have
17:29 <dansmith> abhishekk: run the standalone command, you mean?
17:29 <abhishekk> yes
17:30 <dansmith> okay
17:30 <abhishekk> I will run glance-cache-prefetcher and will note down the issues
17:30 <dansmith> I would normally say that we need not backport fixes to that since it was never working and it's just background stuff,
17:30 <dansmith> but if we're installing something into bin/ that can't possibly work.... I think backporting fixes if they're not too crazy is probably prudent
17:31 <jokke> dansmith: abhishekk: you should have a bj invitation; I'll let abhishekk announce it to the wider public but I knew you two are going to be there
17:31 <dansmith> yup, got it
17:31 <abhishekk> jokke, thanks
17:32 <jokke> dansmith: I doubt vexxhost is using the caching in g-api, as they are running with ceph where nova has direct access
17:33 <dansmith> I'm not sure they're all ceph everywhere, but yeah, makes sense
17:33 <jokke> dansmith: the caching is really beneficial in big environments when you stream the data through glance
17:33 <jokke> makes a massive difference f.e. with swift
17:33 <dansmith> sure, I understand the goal
17:35 <dansmith> glance-cache-prefetcher fails to import on startup for me
17:35 <dansmith> which doesn't even make sense, AFAICT
17:35 <jokke> abhishekk: you have been poking the cache code lately more than I have. How about the invalidation and purging code? Is that also happening outside of the wsgi app/our middleware, or should that work?
17:36 <dansmith> so, another question,
17:36 <abhishekk> jokke, as far as I know the pruner is also broken
17:37 <dansmith> I thought you guys didn't have any jobs configuring devstack in WSGI_MODE=!uwsgi, but since the default was uwsgi until recently, surely any job that tests that has to be configuring devstack that way
17:37 <dansmith> or did I misunderstand what periodic job you're talking about?
17:40 <jokke> dansmith: there is no tempest testing for that, as the API has existed only for a couple of releases; until then it was 100% node-local tooling
17:40 <dansmith> jokke: okay, abhishekk mentioned some periodic test, which I thought was a zuul periodic job that needed updating
17:40 <dansmith> maybe the periodic
17:41 <dansmith> just referred to the periodicness of the cache stuff in general?
17:41 <jokke> dansmith: like mentioned, 90% of the features implemented in the past 3-4 years have only unit/functional/integration coverage within the repo
17:41 <dansmith> yeah, understand, just was confused by what abhishekk was saying
17:42 <jokke> trying to find that reference
17:42 <abhishekk> dansmith, sorry for the confusion
17:42 <dansmith> jokke: this, and earlier: [10:13:10] <abhishekk> periodic job to prefetch image into cache
17:42 <dansmith> to me, "periodic job" relating to testing, which was the context, would be a zuul periodic job
17:42 <dansmith> so I'm sure it was just my misunderstanding
17:43 <jokke> yes, not a zuul job, that is a reference to the periodically spun thread within g-api
17:43 <dansmith> gotcha
17:43 <abhishekk> ^^
17:43 <abhishekk> I thought nova calls such functions periodic jobs
17:43 <dansmith> it calls them periodic tasks
17:44 <dansmith> but since the context was "what else do we need to test" I just assumed you meant you had a job running somewhere that should be cloned to test wsgi mode as well
17:44 <jokke> ahh ... we try to avoid mixing with the taskflow-controlled stuff, so there is a big difference when we talk about jobs and tasks
17:44 <dansmith> yep, understandable, clearly my mistake :)
17:44 <jokke> np
17:45 <jokke> good that you asked and didn't just assume in silence
17:46 <dansmith> abhishekk: I can definitely write up the change to make it spawn these as threads in each worker, and just use a lockfile to make sure only one runs at a time, but I *think* it'd be better to make the standalone daemon work for that case, if it's not too hard
17:46 <dansmith> it would be nice to know what someone like mnaser, who deploys as wsgi now, would want if he _did_ need the prefetcher
17:46 <dansmith> and we can also do both if you just prefer 100% parity
17:47 <dansmith> but regardless, I think installing things into bin/ that don't run is pretty bad :)
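
Very roughly, the lockfile approach dansmith sketches above: each uwsgi worker tries to grab a host-local lock before starting its prefetch loop, so only one of them actually prefetches. The lock path, interval, and prefetch_once callable are all illustrative.

    import fcntl
    import threading
    import time

    LOCK_PATH = '/var/run/glance-prefetcher.lock'  # illustrative path
    _lock_handle = None  # keep the fd referenced so the flock stays held

    def run_prefetch_loop(prefetch_once, interval=300):
        """Start the prefetch loop in at most one worker process per host."""
        global _lock_handle
        try:
            fd = open(LOCK_PATH, 'w')
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:
            return  # another worker on this host already owns the prefetcher
        _lock_handle = fd

        def _loop():
            while True:
                prefetch_once()  # drain whatever is queued in the cache sqlite db
                time.sleep(interval)

        threading.Thread(target=_loop, daemon=True).start()
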
17:47 <abhishekk> dansmith, I will try to find out the issues with that tomorrow
17:47 <dansmith> okay
17:47 <abhishekk> I guess I will be able to make it work with minimal changes
17:47 <jokke> abhishekk: did we have any locking for the prefetcher, or are they just "coordinating" at the sqlite level to not race? We discussed the possible race when you implemented it
17:48 <dansmith> jokke: from a quick glance it looked like we'd only spawn one of those per server... are you thinking each child might have one?
17:48 <abhishekk> jokke, they just coordinate at the sql level to not race
17:49 <jokke> I thought that's the thing, yeah; we were worried about the next periodic picking up the same very slow prefetch, and you addressed that so it won't be a problem if we have multiples of them running, I just can't remember how
17:50 <dansmith> is that coordination per-image or per-host?
17:50 <dansmith> because if you have multiples, you wouldn't want two images being cached in parallel, I would think (or 16 in parallel for that matter)
17:50 <jokke> Currently anything caching related is within the scope of the host
17:51 <dansmith> okay
17:51 <dansmith> so only one thread per host could possibly be making progress anyway?
17:51 <jokke> so the sqlite db is on the local disk of that host and all the actions impact only the caching on that host
17:52 <dansmith> oh right, no, I get that,
17:52 <dansmith> I'm wondering what the locking domain is for exclusion,
17:52 <dansmith> whether one thread says in the db "yo, I am going to cache this image",
17:52 <jokke> I think multiple can make progress, so if there are 3 images in the prefetch Q and one process fetching, it will go through them in a serial manner, but if another process gets spun up it will just pick the next one that has not been processed yet
17:52 <dansmith> or one thread says "I will cache all these requests"
17:53 <dansmith> right, okay, that's what we don't want I think
17:53 <dansmith> if you're running 32 wsgi workers for your 32 cores, you do *not* want 32 images being cached in parallel :)
17:53 <jokke> Might be the case indeed ;)
17:54 <jokke> so how it works currently is kind of autobalancing. If there is too much to do, we will spin up a new thread every X min until the precache Q is empty
17:54 <dansmith> so just to be clear, you and abhishekk said that locking was done at the sql level, but that just means to make sure two threads don't compete for the *same* image, not that they won't compete for the whole queue, is that right?
17:54 <jokke> so it will gradually ramp up the parallelism ;D
17:54 <jokke> dansmith: correct
17:54 <dansmith> okay, that seems like a storm waiting to happen to me
17:55 <jokke> And the threads will just die away once the Q is empty
17:56 <dansmith> after you've totally DDoS'd the disk and network though :)
17:56 <jokke> but yeah, if you spin 48 of them up at once, that queue will get processed very quickly :P
17:56 <jokke> And the admin might get surprised by the load and traffic
17:59 <dansmith> ah, the import problem on the standalone cacher is a circular dependency
18:00 <jokke> dansmith: right ... and that's the same thing with the number of workers. As soon as you're streaming the images through glance-api you want to be careful how many workers you let go wild. Thus we changed the default worker count a few cycles ago, as these 48-core servers with 96 threads got that amount of workers spun up. And 96 glance workers running in one box will bring pretty much any modern hardware to its knees in a busy cloud
18:00 <abhishekk> smcginnis, could you please have a quick look here: https://review.opendev.org/742115
18:00 <abhishekk> I guess I missed lots of discussion, scrolling back
18:01 <dansmith> jokke: yeah, for sure, but generally for a prefetch type thing, you want that to be a slow burn in the background and not overwhelm the machine with caching activity.. if you have a lot of *actual* work to do, then you should spend all the machine's resources trying to keep up, but not so for caching
18:01 <jokke> dansmith: exactly
18:01 <jokke> so yes, you don't want every worker spinning up those threads
18:02 <dansmith> so... shouldn't there be a tunable knob limiting how wide the prefetcher will scale, even for the eventlet case?
18:02 <dansmith> because right now, as you said, every X mins it will get worse
18:02 <dansmith> like the opposite of exponential backoff :)
18:03 <dansmith> abhishekk: that same import thing occurs in the cache prefetcher cmd
18:03 <jokke> dansmith: IIRC we decided to take the approach of doing it if it starts breaking things, instead of cluttering our already mile-long config file
18:04 <jokke> dansmith: It probably wouldn't hurt to have even a hardcoded pool that limits it, but that wouldn't help the uwsgi case and, like I said, no complaints as of yet
18:05 <dansmith> yeah, understand, it'll just be one of those saturday escalations where the machines are melting down .. I assume not many people have really tried it, but maybe I'm wrong
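
One way to cap that ramp-up, roughly in the spirit of the "hardcoded pool" jokke mentions: gate each new prefetch thread on a bounded semaphore so the periodic tick can never grow the worker count past a fixed limit. The limit of 2 and the drain_queue callable are illustrative.

    import threading

    _MAX_PREFETCH_THREADS = 2  # illustrative; could be hardcoded or a config option
    _slots = threading.BoundedSemaphore(_MAX_PREFETCH_THREADS)

    def maybe_spawn_prefetch_worker(drain_queue):
        """Called from the periodic tick; refuses to grow past the cap."""
        if not _slots.acquire(blocking=False):
            return  # cap reached; let the existing workers drain the queue

        def _worker():
            try:
                drain_queue()  # process queued images until the queue is empty
            finally:
                _slots.release()

        threading.Thread(target=_worker, daemon=True).start()
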
18:05 <smcginnis> abhishekk: I'm not seeing the relationship between those test changes and the import change yet. Looking closer, but is there a quick explanation?
18:05 <abhishekk> dansmith, just found out glance-cache-manage itself is broken with uwsgi
18:05 <jokke> I think prefetching (as it's a manual admin task) is extremely rare anyway, and might happen in cases where a new golden image is introduced to the environment. So quite likely no-one, even among the deployments that use the caching, is using the precaching
18:05 <dansmith> abhishekk: we should have some, you know, tests for that :D
18:06 *** ralonsoh has quit IRC
18:06 <dansmith> jokke: yep
18:06 <abhishekk> smcginnis, there were no tests at all for the glance-cache-manage tool
18:06 <smcginnis> abhishekk: OK, so removing settings from those functional tests because they were not actually needed?
18:06 <smcginnis> needed/used
18:07 <jokke> dansmith: that said, the edge usecases have had lots of requests towards that. So we indeed might start seeing more usage for it
18:07 <abhishekk> smcginnis, yes
18:07 <smcginnis> abhishekk: Got it, that makes sense then.
18:07 <abhishekk> smcginnis, ack
18:07 <dansmith> jokke: right, and edge cases are likely to have smaller WAN pipes and smaller controllers with slower disks... :)
18:08 <jokke> dansmith: also a very limited number of images to be actually predeployed. Yet I think you are right, we might get very valid requests to do something about it :P
18:08 <jokke> dansmith: you gotta prioritize things and pick your fights, like they say
18:09 <dansmith> jokke: depends on the edge cloud.. if it's one of these "tesla rents space on the verizon cloud for self-driving-car workloads" then there are looooots of vendor images
18:09 <jokke> tru
18:09 <dansmith> and they probably pay for pre-caching so handoff happens as the cars move around
18:09 <dansmith> anyway, I shan't get into that rabbit hole,
18:09 <jokke> and in that case we get lots of visibility, and that escalation might bring a couple of contributors who have a heavy interest in making sure it doesn't melt :P
18:10 <dansmith> but I think it's a legit reason not to replicate the exact same thing for the wsgi case..
18:11 <jokke> I've been saying for a few years now, one of the biggest problems Glance has community-wise is that it's way too solid and just works. If we had massive escalations on a weekly basis due to the world burning, I'm pretty sure we'd have more hands and eyes working on this code
18:12 <jokke> Yet the people left are like Brian, Abhishek and myself, whose nature just hasn't allowed us to slip there
18:13 <jokke> smcginnis is great at keeping us on our toes whenever we happen to need him to have eyes on something
18:13 <abhishekk> and lately dansmith :D
18:14 <abhishekk> dansmith, now there is a catch with glance-cache-manage with uwsgi
18:14 <abhishekk> glance-cache-manage tries to contact glance on localhost:9292 (the default port)
18:15 <abhishekk> in the case of uwsgi the port is different
18:15 <abhishekk> I found it in glance-uwsgi.ini (http-socket = 127.0.0.1:60999)
18:16 <abhishekk> So if you run glance-cache-manage like
18:16 <abhishekk> glance-cache-manage --port 60999 queue-image
18:16 <abhishekk> it will work
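
Put together, the workaround abhishekk is describing might look like this; the port comes from this discussion, the config-file location and image id are illustrative, and the subcommands are the usual glance-cache-manage ones:

    # find the port uwsgi bound glance-api to (wherever devstack wrote glance-uwsgi.ini)
    grep http-socket /etc/glance/glance-uwsgi.ini    # e.g. http-socket = 127.0.0.1:60999

    # point the cache tool at that host/port instead of the default localhost:9292
    glance-cache-manage --host 127.0.0.1 --port 60999 queue-image <image-id>
    glance-cache-manage --host 127.0.0.1 --port 60999 list-queued
    glance-cache-manage --host 127.0.0.1 --port 60999 list-cached
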
18:16 *** brinzhang_ has joined #openstack-glance
18:16 <dansmith> abhishekk: but you can also do http://localhost/image
18:17 <abhishekk> dansmith, that needs changes in code :D
18:17 <dansmith> WAT
18:17 <dansmith> you can't point it at a url?
18:17 <jokke> dansmith: using a uri is not a thing there
18:17 <abhishekk> nope
18:17 <jokke> you may present it host and port
18:17 <dansmith> so all deployers *have* to run on port 9292?
18:17 <abhishekk> we are using a local client which contacts glance using host and port
18:17 <dansmith> ah, host and port, okay
18:18 <abhishekk> no, you can specify --host and --port
18:18 <dansmith> taking a uri should definitely be a goal I think
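
A tiny sketch of what "taking a uri" could mean for the tool: accept either the existing --host/--port pair or a full URL, and reduce both to the host/port/prefix the local client needs. The --url option itself is hypothetical, not something glance-cache-manage has today.

    from urllib.parse import urlparse

    def endpoint_from_args(host='localhost', port=9292, url=None):
        """Return (host, port, path_prefix) for the cache client to use."""
        if url:
            parsed = urlparse(url)            # e.g. http://localhost/image
            default = 443 if parsed.scheme == 'https' else 80
            return (parsed.hostname,
                    parsed.port or default,
                    parsed.path.rstrip('/'))   # keep any proxy prefix like /image
        return (host, port, '')
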
18:18 <abhishekk> Now when you run glance-cache-prefetcher after that, it says "No sql_connection parameter is established"
18:19 <abhishekk> So I will continue to spend some time on it tomorrow
18:19 *** brinzhang0 has quit IRC
18:20 <dansmith> abhishekk: cool, sounds workable though
18:20 <dansmith> at least, so far
18:20 <abhishekk> yes
18:21 <dansmith> abhishekk: so seriously,
18:22 <dansmith> what's the plan to get this tested for real? should we poke this from the tempest side? should we do a post-run playbook like I do elsewhere so we're really using the glance binaries?
18:22 <jokke> dansmith: abhishekk: I think the thing to learn from this is: because so many components in the Glance toolchain have no tests outside of our unit/functional/integration, which all run the api on eventlet, just the service starting on uwsgi and tempest passing really does not tell anything about the health of that deployment
18:22 <abhishekk> dansmith, the prefetcher is using eventlet
18:22 <dansmith> abhishekk: maybe more realistic functional tests are appropriate for things like the import issue
18:22 <jokke> dansmith: the latter, probably. IIUC tempest is very clearly biased to API only
18:23 <abhishekk> dansmith, yes
18:23 <dansmith> jokke: sure, but tempest can queue, and then check status to see that it's working
18:23 <dansmith> devstack should be starting the prefetcher if standalone, and configuring it if bundled with the API, so tempest can poke it
18:24 <jokke> dansmith: yeah, that definitely should not hurt
18:24 <abhishekk> dansmith, right
18:25 <dansmith> jokke: minimal tests in tempest mean that just starting it and running a nova boot does not prove the health, indeed, but I think there is *plenty* of room to expand the scope of integration testing and massively narrow the window that I see right now
18:26 <dansmith> maybe I'll redeploy my devstack with eventlet so I can see about a test there
18:26 <jokke> dansmith: ohh yeah
18:26 <dansmith> we need to at least have devstack turn the prefetcher on in standalone mode
18:27 <dansmith> abhishekk: is that patch you linked to smcginnis above the only thing I need to make it work in eventlet mode right now?
18:27 <abhishekk> yes
18:27 <jokke> I think it was more to point out the general tone we had for example today. When people argue "we run it under wsgi in the gate so sure it works": no, we run it under wsgi in the scope of tempest, nothing else is tested under uwsgi
18:28 <jokke> +3 'u's
18:28 <abhishekk> I am dropping out, almost midnight here
18:29 <abhishekk> dansmith, WIP for the policy doc changes
18:29 <abhishekk> will be submitting it by tomorrow EOD
18:29 <jokke> gn abhishekk, talk to you tomorrow
18:29 <abhishekk> jokke, good night
18:29 <abhishekk> o/~
18:31 <mnaser> dansmith: no prefetch usage here bc we use ceph and nova does cow images
18:31 <mnaser> so our glance is really a directory mostly
18:31 <mnaser> and an upload mechanism
18:32 <dansmith> mnaser: gotcha.. let's say you were to use this,
18:33 <dansmith> would you want your API workers to double-duty as background image cachers, with some limit on how many will run at once, or would you rather run an image prefetcher daemon so you can limit the resources for that separately?
18:33 <jokke> mnaser: that's what I thought ... and in that scope I truly believe that it works just fine. And it is actually one of the rare cases where I'd argue it makes sense at any level to have the overhead of httpd and <nameyourwsgistackhere>
18:34 <mnaser> dansmith: one hundred percent a separate daemon
18:35 <mnaser> cause I'd want the api to be as stateless as possible
18:35 <dansmith> mnaser: ack, thought so :)
18:35 <jokke> mnaser: even if that means that you need to have eventlet deployed to run that daemon?
18:35 <mnaser> yep, all 'agents' are using eventlet in openstack now anyways
18:35 <mnaser> like nova-compute and neutron-openvswitch-agent
18:35 <mnaser> etc
18:36 <dansmith> jokke: I'm happy to make the daemon run with real threads if it needs it, it's just that it needs to run standalone by itself I think
18:36 <jokke> fair enough
18:36 <dansmith> "needs to be ABLE to run standalone" I mean
18:36 <jokke> dansmith: I think you don't, given how it's tied to eventlet already; my point was do we need to decouple it or not
18:37 <jokke> so effectively, if you need the prefetcher for caching, you just run one g-api process on the host that listens only on localhost and you're done for now
18:38 <dansmith> okay wait, maybe I'm missing something,
18:38 <mnaser> does the prefetcher actually host an api?
18:39 <dansmith> the cmd/cache_prefetcher.py is supposed to run standalone and not require an API, right?
18:39 <dansmith> you can still use the wsgi api to load up the cache,
18:39 <dansmith> but the actual prefetching would happen in that daemon, fed from the DB, no need to run an eventlet API server, right?
18:39 <jokke> the prefetcher needs all the db connections and the image modelling iirc. so the easiest way to have it as a daemon is to run g-api with one worker and just let no-one talk to its endpoint :P
18:39 <dansmith> mnaser: today they're one and the same, but..
18:40 <dansmith> jokke: except you'd have to have another API exposed, even if localhost, which is super uncool
18:40 <jokke> 1 firewall rule :P
18:40 <dansmith> the prefetcher looking at the db, even sqlite, should be fine
18:40 <dansmith> ever been through a security audit? :)
18:40 <jokke> I'm not trying to go over the fence where it's at its lowest, I'm using the damn gate :D
18:42 <jokke> so yes, probably an entry point for it as a separate command, without loading all the paste crap and actually exposing the api endpoint, is fairly doable if someone wants to spend the time and do it
18:42 <dansmith> well, like I say, either someone needs to do that, or remove the binary that we're installing, which indicates it should work but is broken now
18:43 <jokke> dansmith: ah, you're referring to that ... either way, I have a patch ready to throw towards review.opendev.org for the latter. But if you think someone has cycles for it and it helps the uwsgi cause, I'm all for reviewing such a patch
18:45 <dansmith> I definitely don't think we should be removing it, I'm just lamenting that it's known broken and still there.. but yes, let's fix it for uwsgi please
18:45 <dansmith> sounds like abhi will at least start it, but I will do it if need be
18:51 <jokke> dansmith: btw we have quite a bit of technical debt like that in the repo, where we have fixed something since the reg removal, for example, and the broken/unused bits are still lingering around waiting for cleanup
18:53 <jokke> Majority of those things _should_ be documented correctly, just bits of dead code around
18:56 <jokke> There was for a very long time the "MVP" approach preached in the Glance community, where it was encouraged to do minimal-impact changes to achieve the initial step and leave the housekeeping for follow-ups, which later just unfortunately never happened in almost every such case.
18:57 <jokke> And we're still fighting to get out of that attitude towards a more complete "If you do something, do it well so no-one else needs to touch it"
19:09 *** k_mouza has joined #openstack-glance
19:13 <jokke> kk, I'm off ... until tomorrow o/
19:17 *** brinzhang0 has joined #openstack-glance
19:20 *** brinzhang_ has quit IRC
19:22 *** k_mouza has quit IRC
19:33 *** jmlowe has quit IRC
19:34 *** jmlowe has joined #openstack-glance
19:37 *** brinzhang_ has joined #openstack-glance
19:41 *** brinzhang0 has quit IRC
19:57 *** jmlowe has quit IRC
20:00 *** jmlowe has joined #openstack-glance
20:03 <openstackgerrit> Merged openstack/glance master: Fix broken glance-cache-manage utility  https://review.opendev.org/742115
20:35 <openstackgerrit> Abhishek Kekane proposed openstack/glance stable/ussuri: Fix broken glance-cache-manage utility  https://review.opendev.org/742742
20:57 *** jmlowe has quit IRC
21:00 *** jmlowe has joined #openstack-glance
21:56 *** jmlowe has quit IRC
22:00 *** jmlowe has joined #openstack-glance
22:24 <dansmith> smcginnis: another softball for ya: https://review.opendev.org/#/c/742309/
22:30 *** k_mouza has joined #openstack-glance
22:34 *** k_mouza has quit IRC
22:58 *** jmlowe has quit IRC
23:00 *** jmlowe has joined #openstack-glance
23:56 *** jmlowe has quit IRC

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!