Wednesday, 2023-02-15

08:00 * puck waves
08:00 <tobberydberg> o/
08:00 <fkr> o/
08:02 <fkr> how is everyone?
08:02 <gtema> o/ - looks like a pattern
08:02 <fkr> i killed the pattern
08:03 <tobberydberg> ;-)
08:03 <tobberydberg> All good here! Hope you are all well as well!
08:03 <tobberydberg> #startmeeting publiccloud_sig
08:03 <opendevmeet> Meeting started Wed Feb 15 08:03:51 2023 UTC and is due to finish in 60 minutes.  The chair is tobberydberg. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:03 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:03 <opendevmeet> The meeting name has been set to 'publiccloud_sig'
08:04 <tobberydberg> #link https://etherpad.opendev.org/p/publiccloud-sig-meeting
08:04 <tobberydberg> As usual, please put your name in there
08:05 <tobberydberg> Awesome that somebody stepped up and created the agenda - puck?
08:05 <puck> Yeah, guilty as charged.
08:05 <tobberydberg> Thank you :-)
08:05 <puck> np
08:05 <puck> I also started putting some content into the spec.
08:06 <puck> (but only a little bit)
08:06 <tobberydberg> #topic 1. How to continue the "standard set of properties" work
08:06 <tobberydberg> I saw that, great!
08:07 <puck> I'm thinking that the spec probably needs detail on what each of the properties is about.
08:07 <tobberydberg> Looking at that, it seems to have the decided additions and looks correct
08:07 <tobberydberg> +1
08:11 <tobberydberg> I guess we will have to structure it and outline it towards a template...
08:12 <puck> Agreed
08:13 <fkr> aye
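(Editor's note, for context: the properties under discussion are metadata set on Glance images. A minimal sketch of how a provider might apply such a set is below; `os_distro`, `os_version`, and `architecture` are existing Glance metadata conventions, but the SIG's final agreed property list is still being drafted, and the image name is a placeholder.)

```shell
# Hedged sketch: applying a standard property set to an existing image.
# Property keys follow existing Glance metadata conventions; they are
# NOT the SIG's final agreed list. Image name is a placeholder.
openstack image set \
  --property os_distro=ubuntu \
  --property os_version=22.04 \
  --property architecture=x86_64 \
  "Ubuntu 22.04 LTS"
```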
08:13 <tobberydberg> Not really sure if there exists a template somewhere to look at ... was trying to find that...
08:16 <puck> I can't even think of where one would be.
08:16 <puck> Within OpenStack, that is.
08:16 <tobberydberg> I mean, we can look at specs from other teams etc
08:16 <tobberydberg> https://specs.openstack.org/openstack/cinder-specs/specs/2023.1/extend-volume-completion-action.html
08:17 <tobberydberg> for example
08:17 <fkr> seems legit to follow that (from quickly eyeballing it)
08:18 <puck> Yeah, fair enough.
08:19 <tobberydberg> Have been scrolling through a bunch of specs from various teams.... This is a more "top level spec" that will (hopefully) result in specs in various teams...
08:20 <fkr> I wanted to suggest ADR style in the beginning
08:20 <fkr> (however I lack in-depth knowledge of how this is usually done within openstack)
08:21 <fkr> however, ADR style would offer a 'high-level spec' from which in-depth specs for the teams can come
08:21 <gtema> ADRs are too "new" for OpenStack to start adopting
08:21 <gtema> but I would agree it is a pretty good use case
08:24 <tobberydberg> ok. I guess dependencies are needed in there as well
08:25 <tobberydberg> To be successful we have dependencies towards nova, glance, sdk at least, right?
08:26 <tobberydberg> Implementation specifics seem pretty far off for us to go into here ...
08:27 <gtema> dependency on SDK is totally different to nova/glance, but yes
08:27 <gtema> same way, ansible-collections-openstack would also be one
08:28 <puck> yup, object storage (although it should be agnostic between swift and radosgw)
08:28 <tobberydberg> Yea, indeed
08:32 <tobberydberg> So I guess the next step is drafting some text here as well. I will try to find some time to give it a stab before the next meeting.
08:34 <puck> Cool
08:35 <tobberydberg> Should we spend a few minutes on the rest of the topics on the agenda?
08:36 <gtema> +1
08:37 <fkr> +1
08:37 <tobberydberg> #topic 2. A number of distros publish images directly to the big cloud providers, can we facilitate this for OpenStack public clouds? (puck)
08:38 <tobberydberg> I leave the word a little bit to you here, puck :-)
08:38 <gtema> I do not think this will/can ever happen. At least on our side we prepare all images to include supplementary HW drivers and do some other "security" related changes
08:38 <gtema> therefore those bare images are not really working properly in our cloud (only on a few basic flavors)
08:39 <puck> Interesting, we just publish the vendor images. I'd like us to customise some, but it hasn't happened yet.
08:39 <puck> Especially Ubuntu; since we aren't paying them the license fees, we can't modify them.
08:40 <gtema> this is not our case, and we are also obliged (towards our customers) to do additional security hardening
08:40 <tobberydberg> But you all allow users to "bring your own image", right?
08:40 <gtema> yes
08:40 <puck> Yes, we allow customers to bring their own image.
08:41 <puck> It is a bit annoying to see the distros uploading their images for the big three and officially publishing them. I was just wondering if there is anything we can do to help get the smaller players recognised.
08:41 <tobberydberg> So, I like the idea of having a central location for "openstack ready images" ... but I agree that it will be hard to get all public clouds to actually use them
08:41 <puck> Even finding those images for some distros is hard!
08:41 <tobberydberg> exactly that, puck, I agree
08:41 <gtema> I can't even imagine e.g. Fedora pushing their build to 100 other OpenStack based clouds
08:42 <tobberydberg> Not sure how to address the issue though
08:42 <gtema> also, from a security pov, giving somebody from outside write permissions for public images is definitely not going to work on our side
08:43 <puck> Unfortunately I have no idea, which is why I wanted to table it. ;) Perhaps a central location within OpenStack that points to where distro images are available from?
08:43 <puck> Public cloud providers could indicate which images they make available and whether they're vanilla or modified?
08:43 <tobberydberg> Well, we could potentially have one central "for OpenStack"
08:44 <gtema> I can imagine building some sort of portal for the OS based clouds from where they can do something like: "import latest Fedora/Ubuntu image into my cloud"
08:44 <gtema> but the biggest question for me - why
08:44 <gtema> what should this actually address
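(Editor's note: in its simplest manual form, the "import the latest distro image into my cloud" step gtema imagines could look like the sketch below. The download URL is Ubuntu's public cloud-image archive; the image name and properties are illustrative, not an agreed standard.)

```shell
# Hedged sketch: manually pulling a distro cloud image and registering
# it in Glance. The URL is Ubuntu's public cloud-image archive; image
# name and properties are placeholders.
curl -LO https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
openstack image create \
  --disk-format qcow2 --container-format bare \
  --file jammy-server-cloudimg-amd64.img \
  --property os_distro=ubuntu --property os_version=22.04 \
  "Ubuntu 22.04 (jammy)"
```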
08:44 <fkr> how is the security aspect handled with the "big clouds"? I mean, I suspect that they will analyze the images that are pushed by the distros before releasing them to their customers
08:44 <puck> I don't think that happens for the Debian images.
08:45 <fkr> interesting. since the notion feels different between "pulling the image from the distro and offering it to customers" and "having the distro directly push it", even though there is not really a difference ;)
08:46 <frickler> for the SCS project we have https://github.com/osism/openstack-image-manager which tries to keep track of the various upstream sources
08:46 <fkr> good point frickler
08:46 <gtema> cool, I meant exactly something like that
08:46 <puck> Subtle difference: we smoke test the images before we make them public. (Spin up an instance, make sure we can ssh in and ping out.) That process is automated.
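(Editor's note: the automated check puck describes — boot, ssh in, ping out — could be sketched roughly as below. All names are placeholders, not puck's actual tooling, and the login user depends on the image.)

```shell
#!/bin/sh
# Hedged sketch of the smoke test described above: boot the candidate
# image, verify ssh login and outbound connectivity, then clean up.
# IMAGE/FLAVOR/NETWORK/KEY and the "public" pool are placeholders.
set -e
openstack server create --wait \
  --image "$IMAGE" --flavor "$FLAVOR" \
  --network "$NETWORK" --key-name "$KEY" smoke-test-vm
FIP=$(openstack floating ip create -f value -c floating_ip_address public)
openstack server add floating ip smoke-test-vm "$FIP"
# ssh in and ping out; either command failing fails the whole check
ssh -o StrictHostKeyChecking=no "ubuntu@$FIP" ping -c 3 8.8.8.8
openstack server delete --wait smoke-test-vm
openstack floating ip delete "$FIP"
```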
08:46 <frickler> we could try to move that into a more general upstream location
08:47 <tobberydberg> That is what I meant as well :-)
08:47 <tobberydberg> That would make a lot of sense imo frickler
08:48 <frickler> so would this fit as a repo owned by this sig? or do you see a different entity?
08:49 <frickler> (pending discussion within the scs community of course)
08:50 <tobberydberg> That is a good question... I don't see that it doesn't fit, but maybe there is an even better suited location?
08:52 <tobberydberg> Some tests etc can be done towards multiple clouds for each image ... potentially each cloud is able to sign up for verification of an image in their cloud...
08:52 <frickler> maybe we could then get vendors to contribute by updating information about their images in that repo
08:52 <fkr> frickler: +1
08:52 <frickler> the verification could maybe be combined with what gtema is building for SDK/OSC
08:53 <tobberydberg> "Image Ubuntu XX.XX is proven to work fine on these clouds" kind of thing
08:53 <frickler> in terms of access to clouds being needed
08:53 <fkr> frickler: i can take the discussion into team iaas @ scs for 'upstreaming' the image manager, since we have the discussion on long-term maintenance anyways
08:53 <frickler> but I'm not sure if the sdk project would be a good home for this. otoh maybe why not
08:54 <gtema> yes, this sounds feasible. Some sort of sdk/osc driven verification for that is possible
08:54 <gtema> most likely not the SDK itself, but something like a new redesigned "certification" platform
08:55 <tobberydberg> Yea ... like the one for "external tempest testing"
08:55 <frickler> I was just talking in terms of openstack governance, where the openstack-image-manager might be homed
08:56 <tobberydberg> So, this feels like something we can continue to talk about, and I think a Forum session around this might be suitable as well
08:57 <gtema> +1
08:57 <tobberydberg> #link https://etherpad.opendev.org/p/publiccloud-sig-vancouver-2023-forum-sessions
08:58 <tobberydberg> If you feel like it, puck, please put it in there as a suggestion
08:58 <puck> Okay, sure
08:59 * puck considers attending the possibly contentious topic of "do tested clouds need to be OpenInfra financial members". :)
08:59 <tobberydberg> We are running out of time here ... we have one more item on the agenda before other matters :-)
08:59 <tobberydberg> Shall we push that one until next meeting?
09:00 <gtema> yeah
09:00 <puck> ack
09:00 <tobberydberg> I guess that question has multiple answers
09:01 <tobberydberg> Running the tests is one thing; being presented as "certified" most probably does, yeah
09:02 <puck> Yup, but we can park that for next time.
09:02 <puck> And in fact, we're out of time.
09:02 <tobberydberg> yea we are
09:02 <tobberydberg> One quick last thing ... should we try to have a session during the PTG?
09:04 <tobberydberg> Could it be worth starting to present our ideas around standard properties, certifications etc there?
09:04 <puck> Seems sensible.
09:04 <gtema> yes, why not
09:05 <tobberydberg> I'll make sure to sign up for that then!
09:05 <tobberydberg> I know I'm doing some travelling around those dates, but not the full week
09:05 <tobberydberg> Thanks a lot for today folks! Talk to you soon!
09:06 <puck> Cheers, hope you all have a good day!
09:07 <tobberydberg> #endmeeting
09:07 <opendevmeet> Meeting ended Wed Feb 15 09:07:49 2023 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
09:07 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/publiccloud_sig/2023/publiccloud_sig.2023-02-15-08.03.html
09:07 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/publiccloud_sig/2023/publiccloud_sig.2023-02-15-08.03.txt
09:07 <opendevmeet> Log:            https://meetings.opendev.org/meetings/publiccloud_sig/2023/publiccloud_sig.2023-02-15-08.03.log.html
09:08 <gtema> thanks, have a nice day
14:11 <andrewbogott_> since upgrading keystone to Zed I'm seeing quite a few "MySQL server has gone away" log messages from sqlalchemy. Only from keystone, though, not in the other services. Anyone else seeing that?
15:03 *** Guest3941 is now known as diablo_rojo
17:09 <felixhuettner[m]> is the connection_recycle_time in openstack lower than the idle timeout of mysql (or a proxy in between)? Otherwise this could cause that message
23:37 <andrewbogott_> felixhuettner[m]: (much later) my keystone connection_recycle_time is 300, my haproxy server timeout is 90m, my mysql wait_timeout is 3600
23:38 <andrewbogott_> I keep looking for other sneaky timeouts that are interposing, but those values seem to me like they should work! I tried setting everything to an across-the-board 3600s and got many many more 'has gone away' messages.
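(Editor's note: the rule felixhuettner[m] states is that oslo.db's `connection_recycle_time` must be shorter than every idle timeout between the service and MySQL, so the pool retires connections before anything in the path silently drops them. A hedged sketch of mutually consistent settings — the specific numbers are illustrative, not a recommendation:)

```ini
# keystone.conf — oslo.db recycles pooled connections older than this,
# so it must be LOWER than any intermediary's idle timeout.
[database]
connection_recycle_time = 280

# haproxy.cfg (fragment) — idle timeout towards the MySQL backend,
# higher than connection_recycle_time above:
#   timeout server 300s

# MySQL — server-side idle timeout, the outermost limit:
#   SET GLOBAL wait_timeout = 3600;
```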

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!