Wednesday, 2022-08-31

tobberydbergHello folks, time for Public Cloud SIG meeting o/08:01
tobberydbergAnyone around? 08:01
gtemayay, thanks a lot for the reminder yesterday ;-)08:01
tobberydbergThat was kind of the only action point from last meeting, so it would have been bad if I hadn't remembered to do that ;-) 08:02
gtemalol08:02
gtemado we have agenda for today?08:03
tobberydbergI hope to see a few more people joining in, but let's start the meeting so we get some logs for it 08:03
gtemacool08:03
tobberydberg#startmeeting publiccloud_sig08:04
opendevmeetMeeting started Wed Aug 31 08:04:01 2022 UTC and is due to finish in 60 minutes.  The chair is tobberydberg. Information about MeetBot at http://wiki.debian.org/MeetBot.08:04
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.08:04
opendevmeetThe meeting name has been set to 'publiccloud_sig'08:04
tobberydbergAgenda and meeting notes here: https://etherpad.opendev.org/p/publiccloud-sig-meeting08:04
tobberydberg#topic 1. Last meeting log/notes review08:05
gtemaI was thinking a bit further about the point of using "alias" metadata or a property for having "standardized" names08:06
gtemaand I still think this is the only thing we could ever reach among different clouds08:06
tobberydbergI agree, that is probably the way to get somewhere08:07
Puck__Or at least have a mapping from the standardised to the local names.08:07
Puck__Aliases would be nice.08:08
gtemawell, this kind of mapping would need to be stored somewhere08:08
gtemaif we ever reach ".well-known" - maybe08:08
gtemabut that would mean all tools need to know that08:08
tobberydbergI had another thought though: to have a "non-enforcing" naming convention as a guideline 08:09
Puck__Start with having it in docs.08:09
gtemahaving some "guideline" to agree on is of course a must. Otherwise we will never proceed :)08:10
Puck__os_distro is an example of recommended names. Dunno how well adopted they are.08:10
tobberydbergSo, the basis for being able to have some kind of mapping is a "non-enforcing naming convention", right?08:10
Puck__Yeah, perhaps a survey of what is currently in use would be good. Does refstack collect that?08:11
gtemaI think what is important is to agree on the attributes which are important and must be discoverable in some form in every cloud08:11
Puck__(Sorry, using a phone)08:11
gtemaos_distro - surely yes, but something like hw_disk_bus is maybe not so important08:11
tobberydberg+108:12
gtemaPuck__ - refstack doesn't care at all about anything; that is why we think a public-cloud alternative needs to be developed08:12
gtemait is sad that so few people are around, as if this is once again not needed by anybody08:13
Puck__Ack. Was hoping there was already some central information.08:13
tobberydbergyes exactly08:13
tobberydbergAnd for me it would be an addition to refstack, and a check that can run from a central location towards all public clouds in the program. 08:13
Puck__That'd be good. I don't know when we last ran refstack08:14
gtemaPuck__ - haven't you recently received a reminder to upload fresh results?08:14
tobberydbergExactly, and it can easily be faked by setting up an isolated env to run the checks, which makes the results not worth much08:15
Puck__ Erm, dunno. I'd have to go and dig. I remember it being discussed a while ago...08:15
gtematobberydberg: I propose we start declaring properties of the things we think should be standardized and hold a vote to figure out which ones the community thinks are mandatory and which not08:15
NilsMagnus[m]I still wonder why using filters on the metadata shouldn't be sufficient? That in turn would require some conventions for the metadata itself, for example the operating system.08:15
gtemaNils Magnus: because first there must be agreement on what comes into metadata at all08:15
gtemaand with which name08:16
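For illustration, the metadata-driven filtering NilsMagnus[m] describes could look roughly like this with openstacksdk - a hedged sketch, assuming a clouds.yaml entry named "mycloud" and illustrative property values; the filtering is done client-side precisely because the property names are what is not yet standardized:

    import openstack

    # Sketch: discover images by standardized metadata; "mycloud" and the
    # property values are assumptions for illustration.
    conn = openstack.connect(cloud="mycloud")

    for image in conn.image.images():
        props = image.properties or {}
        distro = props.get("os_distro") or getattr(image, "os_distro", None)
        version = props.get("os_version") or getattr(image, "os_version", None)
        # This only works across clouds if everyone agrees on the same
        # property names and values - which is the point of this discussion.
        if distro == "ubuntu" and version == "22.04":
            print(image.id, image.name)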
Puck__The benefits would definitely be easier sharing of terraform etc.08:16
tobberydbergThat is probably a good starting point. For the central run of the full-powered program we will need some collaboration with the Interop SIG later on08:16
Puck__So, perhaps we need to work out what metadata needs to be standardised?08:16
gtemathat is exactly what my proposal is about08:16
NilsMagnus[m]Maybe we could relate to LSB (Linux standard base)?08:17
NilsMagnus[m]For naming conventions08:17
Puck__Refstack (or whatever) should also be run from a central location against the public facing API for each cloud.08:17
tobberydbergPuck__ +108:17
gtemaagree with Nils, LSB is a good start. Not 100% sure how e.g. Windows images will fit into it, but definitely worth checking08:18
tobberydbergBut as gtema says, we will need to define what to check and which attributes etc before that08:18
Puck__Yup. Should we have people put their ideas on the etherpad to be discussed next time? (List them, put +1 against the ones you agree with )08:19
gtemaI started exactly this last time08:20
Puck__Oh, awesome!08:20
gtemaand tried to get some of the props from SovereignCloudStack proposals08:20
gtema(https://etherpad.opendev.org/p/publiccloud-sig-kickoff)08:20
Puck__Okay, I see the empty list now! :)08:20
gtemaunder "resources to be standardized"08:21
tobberydberg#link  https://etherpad.opendev.org/p/publiccloud-sig-kickoff08:21
tobberydbergYou can find it there08:21
tobberydberghaha08:21
tobberydbergSo, as starting point we will go with flavors and images08:21
gtemaI doubt there is any interest in having something more than those 208:22
Puck__os_distro, sw_release for metadata08:22
Puck__Volume name08:22
Puck__Object storage policy08:23
Puck__Volume type, not name (i.e. NVMe and IOPS)08:23
gtemawhy should volume name matter?08:23
gtemafor swift policies I also do not really think it is worth any effort - this is too variable08:24
Puck__(The text input box on the webclient on my phone is crazy small)08:24
Puck__Happy to be voted down, just throwing out some ideas08:25
gtemalooking at lsb and ansible I see it returns: codename=Core, description="CentOS Linux...", id=CentOS, major_release=7, release=7.5.xxxx08:25
tobberydbergI agree that there could probably be more. But I wonder if it will be easier to solve the flavor and image situation first, as the 2 most obvious ones, before identifying more?08:25
Puck__Ack, so re08:25
Puck__Sure08:25
tobberydbergos_distro mapping to "id", and os_version mapping to release or major_release08:26
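As a sketch of that mapping, using the lsb_release-style fields gtema quoted above (the lower-casing and the choice of major_release are assumptions, not an agreed convention):

    # Sketch: derive glance image properties from LSB-style fields;
    # the normalization shown here is illustrative only.
    lsb = {"id": "CentOS", "release": "7.5.xxxx", "major_release": "7"}

    image_properties = {
        "os_distro": lsb["id"].lower(),      # -> "centos"
        "os_version": lsb["major_release"],  # or lsb["release"] for full detail
    }
    print(image_properties)  # {'os_distro': 'centos', 'os_version': '7'}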
Puck__Recommended os_distro names are already in the official docs.08:27
gtemawhich docs are you referring to?08:27
Puck__Erm, hang on08:28
tobberydberghttps://docs.openstack.org/glance/latest/admin/useful-image-properties.html08:29
tobberydberg??08:29
Puck__Yes, that is it08:30
Puck__Just found it. :)08:30
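For reference, applying the properties documented there to an image with openstacksdk might look like this - a sketch, with the cloud name, image name, and values all assumed:

    import openstack

    conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry

    image = conn.image.find_image("Ubuntu-22.04", ignore_missing=False)
    # Set properties recommended in the glance docs linked above;
    # the values are illustrative.
    conn.image.update_image(image, os_distro="ubuntu", os_version="22.04")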
gtemahmm, maybe then just list all of them and open voting on which should become part of the "standard"08:31
gtema?08:31
gtemafor me 80% of them are not really helpful08:31
tobberydbergYea, because the naming will be impossible to align on08:31
gtemanot really, I can't imagine why e.g. hw_video_model would matter for me08:32
Puck__I reckon that a minimum set should be defined and the others are optional.08:33
tobberydbergNo, a selected list of attributes is needed indeed08:33
Puck__We'd also like to see another property which defines the key software installed for app-specific images.08:34
gtemawell, and this is where it gets funny - something you find important is not what others think is important08:35
Puck__We need that for some licensing requirements. Currently we're overloading os_distro for that, which we hate.08:35
gtemabut you can add any other property as you want. Here we need to find a common base of what really matters for the end user08:36
tobberydbergcould add other properties yes for licensed stuff indeed, that would be really useful08:37
tobberydbergWe have a shitty solution for that as well08:37
gtemaok, time is ticking. I suggest everyone interested adds missing properties for flavors and images (and votes on the defined ones)08:41
gtemaand maybe we can "touch" the point of doing useful tests as part of the public cloud certification, instead of what is currently used for certification?08:41
tobberydbergYea - maybe move them over to the new etherpad first08:41
Puck__Sorry, my battery is almost flat. I'll probably drop off soon.08:41
gtemafrom an SDK/cli pov I really struggle with the different behavior of clouds in relatively simple cases08:42
tobberydbergYea, that is in fact an issue08:42
gtemaI was actually also asking for the possibility to run sdk/cli verification on various clouds, which should help improve the user experience08:43
Puck__So, a basic set of tests that should pass.08:43
Puck__Agreed. We run various tempest tests against ours on a regular basis.08:43
tobberydberg(moved over the notes to the new etherpad - voting could take place there)08:44
gtemawe could build up a set of useful tests with the sdk and use this as part of "certification", or at least as initial information on what is going wrong08:44
Puck__Ideally clouds could flag what services they run (discovered via the keystone catalogue?) and then test against those.08:44
gtemaPuck__ I am very unhappy with tempest as such, since it mostly verifies the cloud from a developer perspective. It barely touches the "user" side08:45
tobberydbergI like that and think that is a really good start for this08:45
tobberydbergPuck__ That is also a really good idea08:45
gtemasdk is already taking care of service discovery and runs only tests for available services08:45
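A sketch of that discovery step, assuming a clouds.yaml entry named "mycloud" (the probed service types are illustrative):

    import openstack

    conn = openstack.connect(cloud="mycloud")

    # has_service() consults the service catalog, so a test runner can
    # skip suites for services a given public cloud does not offer.
    for service in ("compute", "object-store", "orchestration", "dns"):
        available = conn.has_service(service)
        print(f"{service}: {'run tests' if available else 'skip'}")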
Puck__We made more tempest tests to test usage.08:45
gtemaon the other side we test our cloud with ansible, where we built "user scenarios" and run them in a loop08:46
Puck__Sorry, I need to head off.08:46
gtemathe advantage here is that "anybody" can understand what is going on and can easily extend it08:46
gtemathis does not require tempest knowledge, which is itself not very user-friendly08:47
tobberydbergI thought about the "alias" idea we discussed earlier as well .... worth getting that custom attribute in there to align with a "naming convention"?08:47
gtemaon the other side, together with Sovereign Cloud Stack we are now integrating this approach into their "certification"08:47
tobberydbergThat is something I will have to look into, that sounds like a really good thing08:48
gtemaan additional advantage out of the box - with that we also test the performance of the cloud for every single API call being made08:48
tobberydbergThat sounds really good as well. Even deeper, like disk, network and cpu performance?08:50
gtemayes08:50
tobberydberginteresting08:50
gtemaits quite extensible08:50
tobberydbergthat might be interesting to get in here in the future as well08:50
gtemaand for sure we also have the possibility to monitor down to "ns lookup performance" inside the cloud08:50
tobberydbergSo, time is almost out here.. Should I send an email about the list in etherpad and ask people to go in and vote before next meeting?08:53
tobberydbergI mean, 2 of us here right now only...08:53
gtema:)08:53
gtemawould be good I guess08:53
tobberydbergDo you have any personal thoughts about the "alias" thing?08:53
gtemanot really. If we agree on specific metadata being present (and really verified) we can solve the problem this way08:54
gtemaand it is easier than having cryptic names inside of aliases, which anyway need to be parsed and processed08:54
Vladi[m]so the naming convention for the common base of properties should be in some special form, for example starting with __, just to avoid potential conflicts with existing properties on different clouds?08:55
tobberydbergYea ... alias needs to be processed to give anything useful08:56
gtemaso for me an alias would contain all of the agreed properties anyway, and would require complex parsing to get down to things that should already be there and make "filtering" easier08:56
gtemaI much prefer standardizing properties and their names over aliases08:57
gtemathis fits natively into the current concept without changes08:57
tobberydbergYup, agree on that, properties will resolve that situation as long as all clients can do the filtering based on them08:57
tobberydbergVladi[m] I guess that is definitely something to have in mind 08:58
gtemaand this is much easier compared to parsing names, and especially to extending a name with "new" additions08:58
tobberydberg+108:58
tobberydbergOk, time is up for this week. I will send out an email asking people to go in and vote, add suggestions as well if they want to.08:59
tobberydbergAnother short thing before we close this09:00
gtemacool, thks tobberydberg09:00
tobberydbergWe missed the deadline for the PTG ... worth trying to get us in late?09:00
gtemasimilar to the SDK issue - do you think a lot of people are going to join, since it is virtual anyway (exactly like this meeting)?09:01
fricklermaybe somehow attach to the operators sessions?09:01
gtemaI would rather think of arranging an ad-hoc video meeting if desired09:01
tobberydbergYea, I agree09:02
fricklergetting some visibility on the PTG timetable might still be useful and attract some people09:02
gtemahm, maybe09:02
tobberydbergGood point. We can definitely check if we can tag along with the operators SIG if not our own... 09:03
tobberydbergOk, I'll check. Then we really need to avoid collisions with a few other teams for it to be worth it...09:04
gtemaok09:04
tobberydbergThanks for today and see you in 2 weeks :-) 09:05
tobberydberg#endmeeting09:05
opendevmeetMeeting ended Wed Aug 31 09:05:20 2022 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)09:05
opendevmeetMinutes:        https://meetings.opendev.org/meetings/publiccloud_sig/2022/publiccloud_sig.2022-08-31-08.04.html09:05
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/publiccloud_sig/2022/publiccloud_sig.2022-08-31-08.04.txt09:05
opendevmeetLog:            https://meetings.opendev.org/meetings/publiccloud_sig/2022/publiccloud_sig.2022-08-31-08.04.log.html09:05
gtemasee ya09:05
ttx#startmeeting large_scale_sig15:01
opendevmeetMeeting started Wed Aug 31 15:01:00 2022 UTC and is due to finish in 60 minutes.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.15:01
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:01
opendevmeetThe meeting name has been set to 'large_scale_sig'15:01
ttx#topic Rollcall15:01
mdelavergneHi!15:01
felixhuettner[m]Hi15:01
ihti[m]Hi!15:01
ttxWelcome back!15:01
ttxWho else is here for the Large Scale SIG meeting?15:01
ttxpinging amorin 15:02
felixhuettner[m]that looks like a small round15:02
ttxlet me see if Belmiro is anywhere close15:02
ttxHmm, looks like he is not15:03
ttxThat is indeed a short crew, but let's roll it through anyway or we will never get back to a rhythm :)15:04
ttxihti[m]: is it your first Large Scale SIG meeting?15:04
ihti[m]Yes, I am a colleague of Felix :)15:04
ttxOh right!15:04
ttxWell, welcome and thanks for joining15:05
ttxOur agenda for today is at:15:05
ttx#link https://etherpad.openstack.org/p/large-scale-sig-meeting15:05
ttx#topic OpenInfra Live September 29 episode - Deep dive into Schwarz Gruppe15:05
ttxFirst topic is the OpenInfra Live episode we'll be running on Sept 2915:05
felixhuettner[m]we already collected some fun stories to share there15:06
felixhuettner[m]especially about the rabbit you all know and love :)15:06
ttxThat's great! Usually Belmiro starts a thread to discuss content as we get closer to the event15:06
ttxI'll try to make sure he starts one15:06
felixhuettner[m]ok, sounds great15:07
ttxfelixhuettner[m]: ihti[m] : have you viewed a former Deep Dive episode to see what to expect? It's a pretty loose format15:07
felixhuettner[m]yep, two colleagues of ours also joined one in the past15:07
ttxWe try to have recurring hosts (mnaser, amorin and belmiro)15:07
* frickler sneaks in late15:07
ttxright, but the deep dive is a specific format, centered on one company in particular15:08
ttxfrickler: hi!15:08
ttxWe did one on OVHcloud and one on Yahoo so far15:08
ihti[m]Yes, have seen a couple of the deep dives in the past so quite familiar with the format15:08
felixhuettner[m]ok, then we'll definitely take a look15:08
ttxjust go to https://openinfra.dev/live/ and search for "Deep Dive"15:09
ihti[m]Ah okay, you've only had 2 till now, so I didn't miss any :)15:09
ttxIt's more like an open discussion between ops, only live on te Internet15:09
ttxusually pretty popular episodes15:09
ttxanyway, I planned to use this meeting to confirm the hosts but none of them are around15:10
ttxso I'll follow up on the email thread we started15:10
felixhuettner[m]ok, thanks15:10
ttx#action ttx to confirm hosts and ask belmiro to start a content thread about Sept29 episode15:11
ttxfelixhuettner[m]: ihti[m]: do you have questions about this show, before we move on to another topic?15:11
felixhuettner[m]not from me15:11
ihti[m]Nope15:11
ttxWe usually join 30min in advance to have time to walk around the platform15:12
ttxok!15:12
ttx#topic Status on docs website transition15:12
ttxI pushed a few changes to clean up the generated docs website at https://docs.openstack.org/large-scale/15:12
ttxhttps://review.opendev.org/c/openstack/large-scale/+/854419 is the last one15:13
felixhuettner[m]they look a lot nicer now15:13
frickleroh, I missed that one, will review later15:13
ttxOnce that is approved and merged I will replace the old wiki pages with a redirect message15:13
ttxpointing people to the new location15:13
frickler+115:13
ttx#action ttx to replace all Large Scale SIG wiki pages with redirects to the docs (once 854419 merges)15:14
ttxAny question or comment on that topic?15:14
ttxalright then15:15
ttx#topic RabbitMQ questions15:15
felixhuettner[m]thanks, i collected a few over the last 2 months15:15
ttxI'm not sure, with amorin and mnaser and Belmiro away, we will have that many answers right now, but it can't hurt to ask15:16
felixhuettner[m]one thing is the `rabbit_transient_queues_ttl` setting which is recommended in https://docs.openstack.org/large-scale/other/rabbitmq.html#rabbit-transient-queues-ttl15:16
felixhuettner[m]and for me that feels more like something we need to fix in oslo.messaging15:16
ttxfelixhuettner[m]: yeah that definitely feels like a workaround15:17
felixhuettner[m]so what happens is that when an agent shuts down, the fanout queues it created are not deleted15:17
felixhuettner[m]and they fill up until they are deleted15:17
felixhuettner[m]the short look i had at the oslo.messaging code suggested it should actually remove them on shutdown15:17
felixhuettner[m]but that does not seem to work reliably15:17
felixhuettner[m]if there is no specific reason for it then i would open a bug; maybe we can then get rid of this recommendation15:18
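To make the symptom concrete: oslo.messaging names its fanout queues along the lines of "<topic>_fanout_<uuid>", and the rabbit_transient_queues_ttl option (in the [oslo_messaging_rabbit] section) is the workaround that eventually expires them. One way to spot leftovers is the RabbitMQ management API - a hedged sketch, assuming the management plugin on localhost:15672 with guest/guest credentials:

    import requests

    # Sketch: list consumer-less fanout queues via the RabbitMQ management
    # API; host and credentials are assumptions for illustration.
    resp = requests.get(
        "http://localhost:15672/api/queues",
        auth=("guest", "guest"),
        timeout=10,
    )
    resp.raise_for_status()

    for queue in resp.json():
        # fanout queue names typically contain "_fanout_"
        if "_fanout_" in queue["name"] and queue.get("consumers", 0) == 0:
            print(queue["name"], "messages:", queue.get("messages", 0))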
ttxyeah, that sounds like a good path to follow... It might be a tricky one to fix though15:19
ttxbut filing a bug sounds like a good start15:19
frickleriiuc the recommendation also helps in case of unscheduled shutdown, like hardware failure15:19
fricklerbut creating and possibly fixing a bug seems useful anyway15:19
felixhuettner[m]thats a really valid point15:20
felixhuettner[m]i did not think about that15:20
felixhuettner[m]but then i'll go ahead and create a bug15:20
felixhuettner[m]#action felix.huetter to create a bug regarding oslo.messaging not deleting fanout queues when neutron agents stop15:20
ttxyeah not sure fixing the bug would make the recommendation invalid15:20
ttxbut that's a good open discussion to have15:20
felixhuettner[m]aaah, can't even write my name :)15:21
ttxwe got it15:21
felixhuettner[m]ok :)15:21
felixhuettner[m]the other thing is the ha policies we recommend15:21
ttxwhat about the other issue?15:21
felixhuettner[m]at the moment only the normal incoming queues are made durable and HA15:21
felixhuettner[m]however we do not do the same thing for reply queues15:21
felixhuettner[m]but since they are also tied to the lifetime of the service using them i don't see why we should treat them differently15:22
felixhuettner[m](and also oslo.messaging treats them differently)15:22
ttxthat's a good question, would be good to have others opinion on it15:23
ttxthe "policy" might have been from the one contributor to that doc, and not that much shared15:23
felixhuettner[m]i think it fits to what oslo.messaging does15:24
ttxso reopening the case is interesting15:24
felixhuettner[m]as long as it does not set the durable flag on these queues we cannot make them HA anyway15:24
ttxhah15:24
ttxThis one could be worth a ML thread then, if it's oslo.messaging behavior15:24
ttxit's technically not a bug since it works as designed... just the design is questionable15:25
felixhuettner[m]yep15:25
felixhuettner[m]ok, then i'll send something to the ML15:25
ttxyeah, so my recommendation would be to open a ML thread and see who comes out of the woodwork to defend the current behavior15:25
felixhuettner[m]#action felix.huettner to raise a question on the mailinglist why reply queues are not created as durable15:25
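For context, such an HA policy is just a pattern match on queue names, so extending it to the reply_* queues oslo.messaging creates could look like this against the management API - a hedged sketch with assumed host and credentials; whether mirroring non-durable reply queues actually helps is exactly the open question above:

    import requests

    # Sketch: apply a classic-mirroring policy matching reply queues;
    # vhost "/" is URL-encoded as %2F, credentials are assumptions.
    policy = {
        "pattern": "^reply_",
        "definition": {"ha-mode": "all"},
        "apply-to": "queues",
        "priority": 1,
    }
    resp = requests.put(
        "http://localhost:15672/api/policies/%2F/ha-reply-queues",
        auth=("guest", "guest"),
        json=policy,
        timeout=10,
    )
    resp.raise_for_status()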
ttxAlright, anything else on that topic?15:26
felixhuettner[m]nothing from me15:26
ttx#topic Next meetings15:27
ttxNext meeting is September 14, but I'll be in Dublin for Open Source Summit EU so I won't be able to chair it15:27
ttxWho is available to run the meeting? We definitely need one as there might be last minute details to discuss for the Deep dive on Sept 2915:27
ttxi could ask belmiro but if we have a volunteer present that might be a stronger bet15:28
felixhuettner[m]i'll probably be there (unless something breaks :) )15:28
fricklerI could do it, but I don't mind if you ask belmiro or amorin first15:28
ttxOK how about I ask them (we need them on that one anyway) and if they can't I'll take one of your generous offers to help15:29
ttx#info September 14 - IRC meeting (ttx to ask belmiro/amorin if they can chair, or felixhuettner/frickler if they can't)15:30
felixhuettner[m]sounds good15:30
ttx#action ttx to ask belmiro/amorin if they can chair next meeting15:31
ttx#info September 29 - will be our OpenInfra Live episode15:31
ttx#topic Open discussion15:31
ttxAnything else, anyone?15:31
felixhuettner[m]not from me15:31
mdelavergnenot from my part15:31
fricklerI have an ad for the publiccloud_sig15:31
ihti[m]nothing from my side15:31
fricklermight be interesting for some people here, too15:32
ttxfrickler: promote away!15:32
ttxdefinitely big overlap15:32
fricklerhttps://meetings.opendev.org/meetings/publiccloud_sig/2022/publiccloud_sig.2022-08-31-08.04.html is the meeting that happened earlier today15:32
fricklerhappening every other wed at 08 UTC15:32
fricklerso EU friendly as well as NZ hopefully15:33
fricklercurrent hot topic is discussing how to provide standard cloud images15:33
ttx#info Public Cloud SIG meetings every other Wednesday at 8UTC15:33
fricklerlike tagged via metadata or standardized names15:34
RamonaBeermann[m]is it in this channel?15:34
frickleryes15:34
ttxyes15:34
RamonaBeermann[m]thx15:34
ttxfrickler: anything else?15:35
fricklerI think that's it from me15:35
ttxAlright, then thanks everyone for attending15:35
ttx#endmeeting15:35
opendevmeetMeeting ended Wed Aug 31 15:35:40 2022 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:35
opendevmeetMinutes:        https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-08-31-15.01.html15:35
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-08-31-15.01.txt15:35
opendevmeetLog:            https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-08-31-15.01.log.html15:35
mdelavergnethanks! see you!15:35
felixhuettner[m]thank you15:35
ihti[m]Thanks!15:35
