Wednesday, 2023-11-01

jakeyiphi all, meeting in around 20 mins, if anyone is around08:42
jakeyipdalees / mnasiadka around? 08:47
jakeyip#startmeeting magnum09:01
opendevmeetMeeting started Wed Nov  1 09:01:24 2023 UTC and is due to finish in 60 minutes.  The chair is jakeyip. Information about MeetBot at http://wiki.debian.org/MeetBot.09:01
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.09:01
opendevmeetThe meeting name has been set to 'magnum'09:01
jakeyipAgenda:09:01
jakeyip#link https://etherpad.opendev.org/p/magnum-weekly-meeting09:01
jakeyip#topic Roll Call09:01
lpetruto/09:05
jakeyiphi lpetrut09:08
lpetruthi09:08
jakeyip looks like there aren't others here today :) is there anything you want to talk about?09:08
lpetrutI work for Cloudbase Solutions and we've been trying out the CAPI drivers09:09
lpetrutthere's something that I wanted to bring up09:09
lpetrutthe need for a management cluster09:09
jakeyipcool, let's start then09:09
jakeyip#topic clusterapi09:09
jakeyipgo on09:10
lpetrutwe had a few concerns about the management cluster, not sure if it was already discussed09:10
jakeyipwhat's the concern?09:10
lpetrutfor example, the need of keeping it around for the lifetime of the workload cluster09:10
lpetrutand having to provide an existing cluster can be inconvenient in multi-tenant environments09:11
jakeyipjust to clarify, by 'workload cluster' do you mean the magnum clusters created by capi? 09:13
lpetrutyes09:13
jakeyipcan you go more in detail how having a cluster in multi-tenant env is an issue?09:14
lpetrutother projects tried a different approach: spinning up a cluster from scratch using kubeadm and then deploying CAPI and have it manage itself, without the need of a separate management cluster. that's something that we'd like to experiment and I was wondering if it was already considered09:14
jakeyipby other projects you mean? 09:14
lpetrutthis one specifically wasn't public but I find their approach interesting09:15
johnthetubaguylpetrut: I would like to understand your worry more on that09:16
jakeyiphi johnthetubaguy 09:16
lpetrutabout the multi-tenant env, we weren't sure if it's safe for multiple tenants to use the same management cluster, would probably need one for each tenant, which would then have to be managed for the lifetime of the magnum clusters09:16
johnthetubaguydo you mean you want each tenant cluster to have a separate management cluster?09:16
johnthetubaguylpetrut: I think that is where magnum's API and quota come in, that gives you a bunch of protection09:17
jakeyiplpetrut: just to confirm you are testing the StackHPC's contributed driver that's in review ?09:18
johnthetubaguyeach cluster gets their own app creds, so there is little crossover, except calling openstack APIs09:18
johnthetubaguyjakeyip: both drivers do the same thing, AFAIK09:18
lpetrutyes, I'm working with Stefan Chivu (schivu), who tried out both CAPI drivers and proposed the Flatcar patches09:18
johnthetubaguyjakeyip: sorry, my calendar dealt with the time change perfectly, my head totally didn't :)09:18
lpetrutand we had a few ideas about the management clusters, wanted to get some feedback09:20
lpetrutone of those ideas was the one that I mentioned: completely avoiding a management cluster by having CAPI manage itself. I know people have already tried this, was wondering if it's something safe and worth considering09:20
lpetrutif that's not feasible, another idea was to use Magnum (e.g. a different, possibly simplified driver) to deploy the management cluster09:21
johnthetubaguylpetrut: but then magnum has to reach into every cluster directly to manage it? That seems worse (although it does have the creds for that)09:21
lpetrutyes09:21
johnthetubaguylpetrut: FWIW, we use helm wrapped by ansible to deploy the management cluster, using the same helm we use from inside the magnum driver09:22
johnthetubaguylpetrut: its interesting I hadn't really considered that approach before now09:22
lpetrutjust curious, why would it be a bad idea for magnum to reach the managed cluster directly?09:23
johnthetubaguyI think you could do that with the helm charts still, and "just" change the kubectl09:23
johnthetubaguylpetrut: I like the idea of magnum not getting broken by what users do within their clusters, and the management is separately managed outside, but its a gut reaction, needs more thought.09:24
lpetrutI see09:24
jakeyipI'm afraid I don't understand how CAPI works without a management cluster09:25
johnthetubaguyit's a trade-off of course, there is something nice about only bootstrapping from the central cluster, and the long running management is inside each cluster09:25
jakeyipmight need some links if you have them handy?09:25
lpetrutright, so the idea was to deploy CAPI directly against the managed cluster and have it manage itself09:25
johnthetubaguyjakeyip: it's really about the CAPI controllers being moved inside the workload cluster after the initial bootstrap, at least we have spoken about that for the management cluster itself09:25
johnthetubaguylpetrut: you still need a central management cluster to do the initial bootstrap, but then it has less responsibility longer term09:26
johnthetubaguy(in my head at least, which is probably not the same thing as reality)09:26
lpetrutright, we'd no longer have to keep the management cluster around09:27
johnthetubaguywell that isn't quite true right09:27
johnthetubaguyah, wait a second...09:28
johnthetubaguyah, you mean a transient cluster for each bootstrap09:28
lpetrutyes09:28
jakeyipjohnthetubaguy: do you mean, 1. initial management cluster 2. create a workload cluster (not created in Magnum) 3. move it into this workload cluster 4. point Magnum to this cluster ?09:28
lpetrutexactly09:28
johnthetubaguyhonestly that bit sounds like an operational headache, debugging wise, I prefer a persistent management cluster for the bootstrap, but transfer control into the cluster once it's up09:29
johnthetubaguy... but this goes back to what problem we are trying to solve I guess09:29
lpetrutand I think we might be able to take this even further, avoiding the initial management altogether, using userdata scripts to deploy a minimal cluster using kubeadm, then deploy CAPI09:29
johnthetubaguyI guess you want magnum to manage all k8s clusters?09:29
johnthetubaguylpetrut: I mean you can use k3s for that, which I think we do for our "seed" cluster today: https://github.com/stackhpc/ansible-collection-azimuth-ops/blob/main/playbooks/provision_capi_mgmt.yml09:30
lpetrutyeah, we were hoping to avoid the need of an external management cluster09:30
johnthetubaguy... yeah, there is something nice about that for sure.09:31
lpetrutright now, we were hoping to get some feedback, see if it make sense and if there's anyone interested, then we might prepare a POC09:31
johnthetubaguylpetrut: in my head I would love to see helm being used to manage all the resources, to keep things consistent, and keep the manifests out of the magnum code base, so it's not linked to your openstack upgrade cycle so strongly (but I would say that!)09:32
lpetrutone approach would be to extend one of the CAPI drivers and customize the bootstrap phase09:33
johnthetubaguyI guess my main worry is that is a lot more complicated in magnum, but hopefully the POC would prove me wrong on that09:33
jakeyipI think it's an interesting concept 09:33
johnthetubaguyjakeyip: +109:34
johnthetubaguylpetrut: can you describe more what that would look like please?09:34
lpetrutsure, so the idea would be to deploy a Nova instance, spin up a cluster using kubeadm or k3s, deploy CAPI on top so that it can manage itself and from then on we could use the standard CAPI driver workflow09:35
johnthetubaguyat the moment the capi-helm driver "just" does a helm install, after having injected app creds and certs into the management cluster, I think in your case you would first wait to create a bootstrap cluster, then do all that injecting, then bring the cluster up, then wait for that to finish, then migrate into the deployed clusters, including injecting all the secrets into that, etc.09:35
lpetrutexactly09:36
johnthetubaguylpetrut: FWIW, that could "wrap" the existing capi-helm driver, I think, with the correct set of util functions, there is a lot of shared code09:36
lpetrutexactly, I'd just inherit it09:36
johnthetubaguynow supporting both I like, let me describe...09:36
johnthetubaguyif we get the standalone management cluster in first, from a magnum point of view that is simpler09:37
johnthetubaguysecond, I could see replacement of the shared management cluster, with a VM with k3s on for each cluster09:37
johnthetubaguythen third, you move from VM into the main cluster, after the cluster is up, then tear down the VM09:38
johnthetubaguythen we get feedback from operators on which proves nicer in production, possibly its both, possibly we pick a winner and deprecate the others09:38
johnthetubaguy... you can see a path to migrate between those09:38
johnthetubaguylpetrut: is that sort of what you are thinking?09:39
lpetrutsounds good09:39
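
(A minimal sketch of the bootstrap-then-pivot flow being discussed, using only standard clusterctl commands; it illustrates the idea and is not Magnum or driver code. The kubeconfig paths and the "openstack" infrastructure provider name are assumptions.)

    # Illustration of the "transient bootstrap cluster, then self-managed" idea:
    # install the Cluster API controllers into the new workload cluster, then
    # move the CAPI objects across so the cluster manages itself.
    import subprocess

    def pivot_capi(bootstrap_kubeconfig: str, workload_kubeconfig: str) -> None:
        # Install the Cluster API controllers into the freshly created
        # workload cluster so it can manage itself.
        subprocess.run(
            ["clusterctl", "init", "--infrastructure", "openstack",
             "--kubeconfig", workload_kubeconfig],
            check=True,
        )
        # Move the Cluster API objects (Cluster, Machines, related secrets)
        # from the bootstrap cluster into the workload cluster; afterwards the
        # bootstrap cluster (VM/k3s/kind) can be torn down.
        subprocess.run(
            ["clusterctl", "move",
             "--kubeconfig", bootstrap_kubeconfig,
             "--to-kubeconfig", workload_kubeconfig],
            check=True,
        )
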
johnthetubaguyone of the things that was said at the PTG is relevant here09:39
johnthetubaguyI think it was jonathon from the BBC/openstack-ansible09:39
lpetrutyes, although I was wondering if we could avoid the initial bootstrap vm altogether09:39
johnthetubaguymagnum can help openstack people not have to understand so much of k8s09:39
johnthetubaguylpetrut: your idea here sure helps with that09:40
johnthetubaguylpetrut: sounds like magic, but I would love to see it, although I am keen we make things possible with vanilla/mainstream cluster api approaches09:40
lpetrutbefore moving further, I'd like to check with the CAPI maintainers to see if there's anything wrong with CAPI managing the cluster that it runs on09:40
johnthetubaguylpetrut: I believe that is a supported use case09:41
lpetrutthat would be great09:41
jakeyiplpetrut: I think StackHPC implementation is just _one_ CAPI implementation. Magnum can and should support multiple drivers09:41
jakeyipas long as we can get maintainers haha09:41
lpetrutthanks a lot for feedback, we'll probably come back in a few weeks with a PoC :)09:42
johnthetubaguyyou can transfer from k3s into your created ha cluster, so it manages itself... we have a plan to do that for our shared management cluster, but have not got around to it yet (too busy doing magnum code)09:42
jakeyipI think important from my POV is that all the drivers people want to implement not clash with one another.09:42
johnthetubaguyjakeyip: that is my main concern, diluting an already shrinking community09:42
jakeyipabout that I need to chat with you johnthetubaguy 09:42
johnthetubaguyjakeyip: sure thing09:42
jakeyipabout the hardest problem in computer science - naming :D 09:43
johnthetubaguylol09:43
johnthetubaguyfoobar?09:43
jakeyipeveryone loves foobar :) 09:43
johnthetubaguywhich name are you thinking about?09:43
jakeyipjohnthetubaguy: mainly the use of 'os' tag and config section 09:44
jakeyipif we can have 1 name for all of them, different drivers won't clash09:44
johnthetubaguyso about the os tag, the ones magnum use don't work with nova anyways09:44
johnthetubaguyi.e. "ubuntu" isn't a valid os_distro tag, if my memory is correct on that09:44
johnthetubaguyin config you can always turn off any in tree "clashing" driver anyways, but granted its probably better not to clash out of the box09:45
jakeyipyeah, is it possible to change them all to 'k8s_capi_helm_v1' ? so driver name, config section, os_distro tag is the same09:45
johnthetubaguyjakeyip: I think I went for capi_helm in the config?09:45
jakeyipyeah I want to set rulesss 09:45
johnthetubaguyjakeyip: I thought I did that all already?09:46
johnthetubaguyI don't 100% remember though, let me check09:46
jakeyipnow is driver=k8s_capi_helm_v1, config=capi_helm, os_distro=capi-kubeadm-cloudinit09:46
johnthetubaguyah, right09:47
johnthetubaguyso capi-kubeadm-cloudinit was chosen so it matches what is in the image09:47
johnthetubaguyand flatcar will be different (its not cloudinit)09:48
jakeyipjust thinking if lpetrut wants to develop something they can choose a name for driver and use that for config section and os_distro and it won't clash09:48
johnthetubaguyit could well be configuration options in a single driver, to start with09:48
schivuhi, I will submit the flatcar patch on your github repo soon and for the moment I used capi-kubeadm-ignition09:49
johnthetubaguyschivu: sounds good, I think dalees was looking at flatcar too09:49
jakeyipyeah I wasn't sure how flatcar will work with this proposal09:49
johnthetubaguya different image will trigger a different boostrap driver being selected in the helm chart09:50
johnthetubaguyat least that is the bit I know about :) there might be more?09:50
schivuyep, mainly with CAPI the OS itself is irrelevant, what matters is which bootstrapping format the image uses09:50
johnthetubaguyschivu: +109:51
johnthetubaguyI was trying to capture that in the os-distro value I chose, and operator config can turn the in tree implementation off if they want a different out of tree one?09:51
johnthetubaguy(i.e. that config already exists today, I believe)09:51
johnthetubaguyFWIW, different drivers can probably use the same image, so it seems correct they share the same flags09:52
johnthetubaguy(I wish we didn't use os_distro though!)09:53
johnthetubaguyjakeyip: I am not sure if that helped?09:54
johnthetubaguywhat were you thinking for the config and the driver, I was trying to copy the pattern with the heat driver and the [heat] config09:54
johnthetubaguyto be honest, I am happy with whatever on the naming of the driver and the config, happy to go with what seems normal for Magnum09:55
jakeyiphm, ok am I right that for stackhpc, the os_distro tag in glance will be e.g. for  ubuntu=capi-kubeadm-cloudinit and flatcar=capi-kubeadm-ignition (as schivu said)09:55
johnthetubaguyI am open to ideas, that is what we seem to be going for right now09:56
johnthetubaguyit seems semantically useful like that09:56
johnthetubaguy(we also look for a k8s version property)09:56
johnthetubaguyhttps://github.com/stackhpc/magnum-capi-helm/blob/6726c7c46d3cac44990bc66bbad7b3dd44f72c2b/magnum_capi_helm/driver.py#L49209:57
johnthetubaguykube_version in the image properties is what we currently look for09:58
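
(For reference, a minimal sketch of the glance image properties being described here, per the driver.py link above: os_distro selects the bootstrap format / driver, kube_version records the Kubernetes version in the image. The version string is an illustrative assumption.)

    # Image metadata the capi-helm driver looks for, per the discussion above.
    expected_image_properties = {
        "os_distro": "capi-kubeadm-cloudinit",
        "kube_version": "1.27.4",  # example value, an assumption
    }
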
johnthetubaguyjakeyip: what was your preference for os_distro?10:00
jakeyipI was under the impression the glance os_distro tag needs to fit 'os' part of driver tuple10:00
johnthetubaguyso as I mentioned "ubuntu" is badly formatted for that tag anyways10:00
johnthetubaguyI would rather not use os_distro at all10:01
johnthetubaguy"ubuntu22.04" would be the correct value, for the nova spec: https://github.com/openstack/nova/blob/master/nova/virt/osinfo.py10:01
jakeyipsee https://opendev.org/openstack/magnum/src/branch/master/magnum/api/controllers/v1/cluster_template.py#L428 10:02
jakeyipwhich gets used by https://opendev.org/openstack/magnum/src/branch/master/magnum/drivers/common/driver.py#L14210:02
johnthetubaguyyep, understood10:03
johnthetubaguyI am tempted to register the driver as None, which might work10:04
jakeyipwhen your driver declares `{"os": "capi-kubeadm-cloudinit"}`, it will only be invoked if glance os_distro tag is `capi-kubeadm-cloudinit` ? it won't load for flatcar `capi-kubeadm-ignition` ?10:04
johnthetubaguyyeah, agreed10:05
jakeyipI thought the decision was based on values passed to your driver10:05
johnthetubaguyI think there will be an extra driver entry added for flatcar, that just tweaks the helm values, but I haven't seen a patch for that yet10:05
jakeyipthat's what I gathered from https://github.com/stackhpc/capi-helm-charts/blob/main/charts/openstack-cluster/values.yaml#L12410:07
jrosserjohnthetubaguy: regarding the earlier discussion about deploying the capi management k8s cluster - for openstack-ansible i have a POC doing that using an ansible collection, so one management cluster for * workload clusters10:08
johnthetubaguycapi_bootstrap="cloudinit|ignition" would probably be better, but yeah, I was just trying hard not to clash with the out of tree driver10:08
johnthetubaguyjrosser: cool, that is what we have done in here too I guess, reusing the helm charts we use inside the driver: https://github.com/stackhpc/ansible-collection-azimuth-ops/blob/main/playbooks/provision_capi_mgmt.yml and https://github.com/stackhpc/azimuth-config/tree/main/environments/capi-mgmt10:10
jakeyipwill the flatcar patch be something that reads CT label and sets osDistro=ubuntu / flatcar? is this a question for schivu ? 10:10
johnthetubaguyjrosser: the interesting thing about lpetrut's idea is that magnum could manage the management cluster(s) too, which would be a neat trick10:10
jrosserjohnthetubaguy: i used this https://github.com/vexxhost/ansible-collection-kubernetes10:11
johnthetubaguyjrosser: ah, cool, part of the atmosphere stuff, makes good sense. I haven't looked at atmosphere (yet).10:12
jrosseryeah, though it doesn't need any atmosphere stuff to use the collection standalone, I've used the roles directly in OSA10:13
jakeyipjrosser: curious how do you maintain the lifecycle of the cluster deployed with ansible ?10:14
johnthetubaguyjrosser: I don't know what kolla-ansible/kayobe are planning yet, right now we just add in the kubeconfig and kept the CD pipelines separate10:14
jrosseri guess i would be worried about making deployment of the management cluster using magnum itself much much better than the heat driver10:14
schivujakeyip: the flatcar patch adds a new driver entry; the ignition driver inherits the cloudinit one and provides "capi-kubeadm-ignition" as the os-distro value within the tuple10:15
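
(A rough sketch of the driver-entry pattern schivu describes, mirroring the in-tree Magnum convention of a `provides` list of (server_type, os, coe) dicts; the class names are placeholders, not the actual magnum-capi-helm code.)

    # Placeholder classes showing how a flatcar/ignition entry could inherit the
    # cloudinit driver and differ only in the 'os' value that Magnum matches
    # against the glance os_distro tag. Not the real driver code.
    class CloudinitDriver:
        @property
        def provides(self):
            return [{"server_type": "vm",
                     "os": "capi-kubeadm-cloudinit",
                     "coe": "kubernetes"}]

    class IgnitionDriver(CloudinitDriver):
        @property
        def provides(self):
            return [{"server_type": "vm",
                     "os": "capi-kubeadm-ignition",  # matches flatcar images
                     "coe": "kubernetes"}]
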
jrosserjakeyip: i would have to get some input from mnaser about that10:15
johnthetubaguyI wish we were working on this together at the PTG to design a common approach, that was my hope for this effort, but it hasn't worked out that way I guess :(10:15
jakeyipschivu: thanks. in that case can you use the driver name for os_distro? which is the question I asked johnthetubaguy initially 10:17
johnthetubaguyjrosser: the key difference with the heat driver is most of the work is in cluster API, with all of these approaches. In the helm world, we try to keep the manifests in a single place, helm charts, so the test suite for the helm charts helps across all the different ways we stamp out k8s, be it via magnum, or ansible, etc.10:17
johnthetubaguyjakeyip: sorry, I misunderstood / misread your question10:17
johnthetubaguyjakeyip: we need the image properties to tell us cloudinit vs ignition, however that happens is mostly fine with me10:19
johnthetubaguyhaving this conversation in gerrit would be my preference10:19
jakeyipsure I will also reply to it there10:21
johnthetubaguyI was meaning more the flatcar one I guess, it's easier when we see what the code looks like I think10:22
johnthetubaguythere are a few ways we could do it10:22
johnthetubaguyjakeyip: I need to discuss how much time I have left now to push this upstream, I am happy for people to run with the patches and update them, I don't want us to be a blocker for what the community wants to do here.10:24
jakeyipyeah I guess what I wanted to do was quickly check if using os_distro this way is a "possible" or a "hard no" 10:24
jakeyipas I make sure drivers don't clash10:24
johnthetubaguywell I think the current proposed code doesn't clash right? and you can configure any drivers to be disabled as needed if any out of tree driver changes to match?10:25
jakeyipjohnthetubaguy: I am happy to take over your patches too, I have it running now in my dev10:25
johnthetubaguyopen to change the value to something that feels better10:25
jakeyipcool, great to have that understanding sorted10:26
johnthetubaguyjakeyip: that would be cool, although granted that means its harder for you to +2 them, so swings and roundabouts there :)10:26
jakeyipjohnthetubaguy: well... one step at a time :) 10:27
* johnthetubaguy nods10:27
jakeyipI need to officially end this cos it's over time, but feel free to continue if people have questions10:28
johnthetubaguyjakeyip: are you aware of the tooling we have for the management cluster, that reuses the helm charts?10:28
johnthetubaguyjakeyip: +110:28
jakeyip#endmeeting10:28
opendevmeetMeeting ended Wed Nov  1 10:28:31 2023 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)10:28
opendevmeetMinutes:        https://meetings.opendev.org/meetings/magnum/2023/magnum.2023-11-01-09.01.html10:28
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/magnum/2023/magnum.2023-11-01-09.01.txt10:28
opendevmeetLog:            https://meetings.opendev.org/meetings/magnum/2023/magnum.2023-11-01-09.01.log.html10:28
jakeyipmaybe not, link? 10:28
jakeyipI basically build mine from devstack's function you contributed10:29
johnthetubaguythat doesn't give you an HA cluster though10:29
johnthetubaguygreat for dev though10:29
jakeyipyeah I plan to use it to create a cluster in our undercloud and move to it10:30
johnthetubaguyjakeyip: trying to find a good set of breadcrumbs, it's ansible based10:30
johnthetubaguyyeah, we have CD pipelines to maintain that using the same helm charts we use from magnum10:30
johnthetubaguyI guess this is the pipelines: https://stackhpc.github.io/azimuth-config/deployment/automation/10:31
johnthetubaguythis is an example config for the ansible: https://github.com/stackhpc/azimuth-config/tree/main/environments/capi-mgmt-example10:32
johnthetubaguyand the ansible lives in here I think: https://github.com/stackhpc/ansible-collection-azimuth-ops/blob/main/playbooks/provision_capi_mgmt.yml10:32
johnthetubaguyMatt is at a conference today, he would probably describe that better than me10:32
johnthetubaguyessentially you give it a clouds.yaml for a project you want to run the management cluster in, and it will get that running using the helm charts10:33
johnthetubaguy(ish)10:33
johnthetubaguyish = there are bits on top of the helm charts, like installation of the management cluster controllers, etc.10:34
jakeyipdoes the ansible code also do the final move from the k3s to the final management cluster?10:35
johnthetubaguyno, right now we keep both separate10:36
jakeyip ok10:36
johnthetubaguyso you get a seed VM to control the HA management cluster10:36
jakeyipwe are approaching this part with pure capi, not using your helm chart. is there something we will be missing with pure capi ?10:36
johnthetubaguywe haven't had time to implement the next step10:36
johnthetubaguyjakeyip: it depends if you want to manage all those manifests multiple times or not10:37
jakeyipit's a lot of yaml, but it's a one time effort hopefully10:37
johnthetubaguywell what about upgrade?10:37
johnthetubaguywhen you need to update all the addons, like the CNI, update k8s, etc10:37
johnthetubaguythat is what the helm chart is trying to help with10:37
jakeyipyeah haven't thought of that :P 10:38
jakeyipgood point, wondering how we should approach it from user docs10:39
johnthetubaguyso the idea is you sync in the latest version of the config, then redeploy, and the upgrade to the new helm chart and updated k8s images happens via a github pipeline, with the option of staging first then production, if wanted10:39
jakeyipor... create new cluster and move to it? :D 10:40
johnthetubaguy(we have been running k8s clusters in production for almost two years with these helm charts and ansible, and getting the pipelines and monitoring alerts better, and testing, etc, has taken some time!)10:40
johnthetubaguyjakeyip: you could do backup and restore with Velero, but cluster api does a rolling update for you quite nicely10:40
jakeyipok10:41
jakeyipthanks for the tips, we are barely getting started.10:42
johnthetubaguy(we are getting the Velero ansible scripts added, but the cluster api stuff works nicely with Velero!)10:42
jakeyippreviously I didn't have much time to do this in prod, just got a breather that's why10:43
jakeyipdidn't have much time the last 6 months10:43
johnthetubaguyjakeyip: it would be good to share more of this tooling as a community, obviously we are building assuming kolla-ansible / kayobe managing openstack, but so far the ansible to do the management cluster is quite separate10:43
jakeyipyeah getting it to work is one thing but Day 2 ops is another big thing10:44
johnthetubaguyyeah, there is lots of stuff out there to help at least10:44
johnthetubaguywe are looking at bringing ArgoCD into the mix here, although interestingly that is probably the opposite direction to some of the discussions here, so that probably needs discussion at some point10:45
johnthetubaguyit being optional, is probably the way forward anyways, for a bunch of reasons10:45
johnthetubaguy... OK, I need to run, it was good to catch up10:45
jakeyipthere may be a place for it, but the current concern is getting it streamlined so we can land something10:46
jakeyipthanks, I need to go too. it's almost bedtime10:46
jakeyipthanks everyone for coming I will stay till end of the hour if anyone has questions10:46
johnthetubaguyjakeyip: yep, its certainly working, keen it is useful for more people too!10:51
lpetrutyou were mentioning using ansible to deploy the management cluster. would that be another magnum driver? if so, what's nice about it is that it would show up as a regular magnum cluster, which magnum could manage.10:53
jakeyiplpetrut: who is this q for ? may need to ping their nick as they may not be watching here since meeting has ended10:58
lpetrutI think I saw johnthetubaguy and jrosser mention it10:59
opendevreviewJake Yip proposed openstack/magnum master: Add beta property to magnum-driver-manage details  https://review.opendev.org/c/openstack/magnum/+/89272910:59
jrosserlpetrut: personally i would like the management cluster to be part of my openstack control plane11:04
jrosserso i am really not sure what i feel about magnum deploying that11:04
lpetrutgot it11:04
jrosseri expect lots of deployments will have lots of opinions about how it should be, structurally11:05
jakeyipjrosser: hm, did you mention you need airgap at the PTG?11:09
jrosseryes11:09
jrosseryesterday i built a docker container which has a registry with enough stuff in it to deploy the control plane k8s with no internet11:10
jakeyipnice! 11:11
jakeyipwhich driver will you be testing? 11:12
jrosserjakeyip: currently i only have experience with the vexxhost driver11:56
jakeyipjrosser: ok 11:57
opendevreviewJake Yip proposed openstack/magnum master: Add feature flag for beta drivers  https://review.opendev.org/c/openstack/magnum/+/89953013:53
opendevreviewJake Yip proposed openstack/magnum master: Add beta property to magnum-driver-manage details  https://review.opendev.org/c/openstack/magnum/+/89272913:53
opendevreviewJake Yip proposed openstack/magnum master: Fix magnum-driver-manage for drivers without template path.  https://review.opendev.org/c/openstack/magnum/+/89272813:53
opendevreviewDale Smith proposed openstack/magnum master: Add beta property to magnum-driver-manage details  https://review.opendev.org/c/openstack/magnum/+/89272920:19
