Tuesday, 2024-01-16

01:22 <opendevreview> wu.chunyang proposed openstack/kolla-ansible master: Implement automatic deploy of trove  https://review.opendev.org/c/openstack/kolla-ansible/+/863321
06:48 <opendevreview> Michal Arbet proposed openstack/kolla-ansible master: Make designate-sink service optional  https://review.opendev.org/c/openstack/kolla-ansible/+/905502
06:48 <opendevreview> Michal Arbet proposed openstack/kolla-ansible master: [DNM] Test designate  https://review.opendev.org/c/openstack/kolla-ansible/+/905644
07:55 <kevko> SvenKieske: here?
08:28 *** darmach4 is now known as darmach
09:18 <SvenKieske> kevko: yes, but currently in a meeting
09:22 <opendevreview> Will Szumski proposed openstack/kolla master: Support CAP_DAC_READ_SEARCH capability  https://review.opendev.org/c/openstack/kolla/+/905579
09:55 <kevko> SvenKieske: just check my reply on the designate patch ...
09:56 <kevko> SvenKieske: I was also wondering if we shouldn't have a scenario for designate
10:01 <SvenKieske> you mean a dedicated test scenario in zuul CI? I'm always for more testing :) but we should also keep an eye on pipeline times, as these are already somewhat long :)
10:09 <kevko> SvenKieske: yep, but as you can see in my reply on the designate patch ... you don't need sink ... and you definitely don't need audit from nova, as it is only for ceilometer
10:09 <SvenKieske> yeah, I was confused at first when I saw that, and the original patch didn't make me any wiser as to why this was added
10:10 <kevko> SvenKieske: so I wanted to create a dedicated test scenario for designate, for the neutron dns_publish_fixed_ip plugin we provide in kolla by default
10:10 <SvenKieske> +1 from me for that :)
10:11 <kevko> SvenKieske: and also sink
10:11 <kevko> let me write something
10:14 <SvenKieske> kevko: you answered someone on the ML regarding a custom vault haproxy backend
10:15 <SvenKieske> they seem to do exactly what you told them to do? the paste has the code in /etc/kolla/haproxy/services.d/vault.cfg :)
10:22 <kevko> SvenKieske: nope - he said "/etc/kolla/config/<openstack_service>/<openstack_service.conf> works, the same approach doesn't seem to work for haproxy."
10:22 <kevko> SvenKieske: I replied "Just place it into /etc/kolla/config/haproxy/services.d/vault.cfg" ... which is a different path ;-)
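The drop-in kevko suggests could look roughly like this (a minimal sketch; the listen name, VIP address, port, and backend hosts are placeholder assumptions, not taken from the thread or the paste):

```
# /etc/kolla/config/haproxy/services.d/vault.cfg
# Hypothetical custom HAProxy service definition for Vault.
listen vault_external
    mode tcp
    bind 192.0.2.10:8200
    balance roundrobin
    server vault1 192.0.2.21:8200 check
    server vault2 192.0.2.22:8200 check
```

The key point from the exchange is the directory: the file has to live under /etc/kolla/config/haproxy/services.d/, not /etc/kolla/haproxy/services.d/.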
10:23 <kevko> SvenKieske: did you get it :) ?
10:29 <SvenKieske> kevko: no, reread the mail and especially follow the link to the paste, that's not what they did.
10:29 <SvenKieske> I had to read it twice myself :)
10:35 <kevko> SvenKieske: :D aaaa ... maybe because he used the tag haproxy, which doesn't exist?
10:35 <kevko> SvenKieske: no ... it exists
10:51 <SvenKieske> did he state which release he uses? wasn't this tag in the past something like "loadbalancer"? maybe that's also just a special case in our downstream, but I doubt it.
10:52 <SvenKieske> just recently I did use the tag loadbalancer for haproxy, but that was on a yoga-based installation
10:52 <SvenKieske> did that change at some point?
10:54 <SvenKieske> https://review.opendev.org/c/openstack/kolla-ansible/+/877507
11:00 <kevko> SvenKieske: it includes both haproxy and loadbalancer ... I was the one who rewrote haproxy :D :D
11:00 <kevko> SvenKieske: maybe he doesn't have a good inventory
11:05 <SvenKieske> the first question I guess would be which version they run, maybe it's really really old
11:17 <kevko> SvenKieske: could you please check my bug report https://bugs.launchpad.net/kolla-ansible/+bug/2049503 <<<
11:19 <SvenKieske> I'll check it out :)
11:20 <kevko> SvenKieske: please, can you check it now? I tried to describe it very well ... because then I can start the implementation and (unfortunately) the docs
11:25 <SvenKieske> ok
11:26 <kevko> SvenKieske: thank you
11:30 <SvenKieske> I added a comment. I guess we already discussed that stuff yesterday or last week, I don't remember exactly. the main reason for this mess is using a single variable where you need two, imho (at least two, I don't know if two suffices)
11:31 <SvenKieske> maybe there are also other solutions, but in my eyes this is a bug
11:37 <kevko> SvenKieske: replied
13:31 <SvenKieske> kevko: maybe you also have an opinion on this unrelated topic regarding the switch to OVN by default? https://review.opendev.org/c/openstack/kolla-ansible/+/904959/comment/42e2f26f_e196f1a6/ I found it a little weird
13:38 <kevko> SvenKieske: haha, sometimes I really don't understand why we switch big stuff by default ... and sometimes there is a problem with backward compatibility (for example fl1nt's patch for designate) :D :D :D
13:38 <kevko> even if it is reasonable and the patch is small :D
13:38 <kevko> SvenKieske: do you know why? ... btw, I am going to check
13:41 <SvenKieske> well I guess it has something to do with other software having some advantages in some cases. I also wish different software would implement the same stuff, feature-wise, so migrations would be more painless, but most devs don't seem to care about this
13:42 <SvenKieske> I agree we are not really neutral when judging different changes. imho we should try to judge everything the same way. e.g. many changes don't really explain why they are done, because some core maintainer "knows" why, but everyone else is left in the dark.
13:42 <SvenKieske> but when a new contributor comes in, they have to justify everything they do; feels a little sad imho. :/
13:43 <SvenKieske> to be clear: of course changes should always explain why they are done, but not only for certain contributions — for all of them, imho.
13:43 <SvenKieske> and large complex changes should all be reviewed with high scrutiny, and not be pushed through because someone needs them downstream.
13:43 <kevko> SvenKieske: haha, I've already learned how to contribute (or so I hope ...): bug report, discussion ... etc etc :D
13:44 <kevko> SvenKieske: and also, don't switch anything ... just prepare it so the user can switch, and leave the defaults as before
13:44 <SvenKieske> what I'm observing in the OVN case is that more and more people are deploying OVN by default instead of plain OVS. I hoped the OVN gaps would be closed before I have to migrate myself, but well..
13:45 <SvenKieske> kevko: agreed. but on the other hand, when some time (releases) has gone by, we also need to switch defaults to the "new shiny thing(TM)", since users expect it, I guess.
13:46 <SvenKieske> and I can understand it from the user perspective. I also like stuff doing what I want "out of the box" without manually switching 100 config knobs. :D
13:48 <SvenKieske> but first and foremost stuff should work, I guess, so I welcome the designate change. that one was really not thought out well: "extending" a string to a list and then not checking all the locations where a string is expected.
13:48 <SvenKieske> I guess this is a side effect of python not being statically typed by default. maybe we should add static type checking via mypy or similar tooling.
13:49 <SvenKieske> that would - in theory, when used properly - catch this stuff during static type checking in CI
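As a toy illustration of that failure mode (hypothetical names, not the actual designate code): a parameter widened from str to list[str] without updating every caller raises no runtime error — a stale caller just silently iterates characters — which is exactly the class of bug mypy's static checking flags.

```python
# Hypothetical sketch of the str-vs-list bug class discussed above;
# not the real kolla-ansible/designate code.

def strip_trailing_dots(ns_records: "list[str]") -> "list[str]":
    # New signature: expects a list of NS records.
    return [record.rstrip(".") for record in ns_records]

# Updated caller, passing the new list type:
print(strip_trailing_dots(["ns1.example.org.", "ns2.example.org."]))
# -> ['ns1.example.org', 'ns2.example.org']

# Stale caller still passing the old str type: no runtime error, just
# garbage (a str iterates per character). mypy would report something like:
#   error: Argument 1 has incompatible type "str"; expected "list[str]"
print(strip_trailing_dots("ns1."))
# -> ['n', 's', '1', '']
```

Running `mypy` on this file in CI rejects the second call before it ever ships.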
13:49 <kevko> SvenKieske: or rewrite openstack in something sexier? :D GoLang :D
13:49 <kevko> SvenKieske: or, let's just use k8s :D :D
13:50 <kevko> SvenKieske: or drop ansible and start using something better (just a joke :D)
13:50 <SvenKieske> kevko: well I'm a secret member of the "rust evangelism strike force", if you've maybe heard that term online :P but I think it might be feasible to at least use types in python, I mean it's been in the language for quite a few versions now?
13:50 <kevko> SvenKieske: but nowadays I am sometimes confused when trying several combinations and running reconfigure for minutes :(
13:51 <SvenKieske> kevko: jetporch (an ansible rewrite in rust by the original ansible author, Michael DeHaan) was unfortunately dropped upstream due to lack of interest
13:51 <kevko> SvenKieske: if I remember things correctly ... I've already seen some new openstack code using this feature
13:52 <SvenKieske> kevko: you mean python types? I have seen it in some projects afaik as well, yes. well, most code is fairly old, so I can understand the history.
13:55 <kevko> SvenKieske: I was trying to say that sometimes I am reconfiguring again and again, and most of the time it's not ansible that I hate ... but the implementation of ansible ... every task is just: open ssh -> copy everything including the python module file -> run it -> disconnect ... and again and again
13:57 <SvenKieske> kevko: well yeah, the advantage of that model is that it is very "easy" to do, programming-wise. but imho it's not "simple" when you want to debug stuff :D I know what you mean. a "kubectl apply -f my.yml" seems to work more often; at least it does not produce dozens of ssh timeouts on its way ;)
13:58 <SvenKieske> the advantage of ansible is that you can train everyone to understand the basic operational model "code execution via ssh". that's dead simple. the devil is in the details then.
13:59 <SvenKieske> also we are basically, at least today, writing our own poor man's orchestrator in kolla-ansible: we need to manage the container lifecycle of complex and diverse services, and even do clustering of containers and high availability.
13:59 <SvenKieske> basically a poor man's kubernetes :)
14:01 <kevko> SvenKieske: the next task I would like to work on is magnum in kolla
14:01 <SvenKieske> on the other hand, kubernetes brings many complexities you maybe don't want or need, and is, as a technology, understood by fewer people. many people can "use" k8s, but when it doesn't do what they want, they are unable to debug it, because they don't understand any of: DNS, containers/namespaces, complex networking, certificates, etcd clustering
14:02 <SvenKieske> magnum seems to be gaining some traction again. do you mean working on the deployment of magnum via kolla, or do you want to use magnum for kolla itself? :D
14:03 <kevko> SvenKieske: well, I am not a fan of k8s :D, and truth be told my knowledge of openstack is better than of k8s ... but k8s just works when I test it from time to time
14:04 <kevko> SvenKieske: well, magnum finally just works after years :D ... but actually everything is done in a k8s management cluster
14:05 <kevko> I would like to provide some HA solution to deploy magnum together with the management cluster at once
14:05 <kevko> so you don't need to build it separately
14:06 <SvenKieske> kevko: did you have a look at this then? https://github.com/vexxhost/magnum-cluster-api
14:06 <kevko> SvenKieske: that is exactly what I am talking about
14:08 <SvenKieske> yeah, I find this a rather nice solution. my k8s knowledge is also limited (I ran, debugged and deployed clusters in the past though), but I think cluster api is a rather nice approach. and from what I hear it seems to work quite well.
14:08 <kevko> SvenKieske: yes, it works ... but everything is done in the k8s cluster ...
14:09 <kevko> basically you have a k8s cluster on the same network as the openstack services ... and when some tenant wants to create a k8s cluster ... magnum just proxies the request to that management cluster, and that cluster spins up the tenant's cluster :) ... and of course it can then be updated, autoscaled, destroyed ... etc etc
14:11 <kevko> SvenKieske: but for production - you probably want your management cluster to be HA, easily updatable, etc etc ...
14:46 <opendevreview> Will Szumski proposed openstack/kolla master: Support CAP_DAC_READ_SEARCH capability  https://review.opendev.org/c/openstack/kolla/+/905579
14:48 <SvenKieske> yeah, I agree
15:11 <opendevreview> Piotr Parczewski proposed openstack/kolla-ansible master: Adjust Ceph metrics scrape interval in Prometheus  https://review.opendev.org/c/openstack/kolla-ansible/+/902129
15:48 *** Continuity__ is now known as Continuity
16:08 <kevko> mnasiadka: we have probably broken the kolla CI build
16:08 <kevko> Get "https://primary:4000/v2/": http: server gave HTTP response to HTTPS client
16:10 <kevko> probably by your patch :D
16:11 <SvenKieske> I'm fairly certain https did work at some point in time? did something else break that maybe?
16:12 <kevko> SvenKieske: I am 90 percent sure that it's broken by https://review.opendev.org/c/openstack/kolla-ansible/+/904067 :)
16:21 <bbezak> kevko: I'll look into that
16:22 <bbezak> kevko: I'm 100% sure that it was caused by it :)
16:23 <kevko> bbezak: https://365c4224c221ec730c2d-019bc8f0795daf4dab730f80e83974fa.ssl.cf1.rackcdn.com/905116/2/check/kolla-ansible-rocky9-upgrade/58f0bf7/primary/logs/system_configs/docker/daemon.json << OLD daemon.json, working
16:24 <kevko> bbezak: new one https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d07/904576/6/check/kolla-ansible-rocky9-upgrade/d0722d6/primary/logs/system_configs/docker/daemon.json
16:24 <kevko> not working
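For context, the key the working daemon.json carried is the insecure-registries entry, which tells dockerd to talk plain HTTP to the CI registry at primary:4000 instead of demanding TLS (a sketch showing only the relevant key; the real files linked above carry more settings):

```json
{
    "insecure-registries": ["primary:4000"]
}
```

Without that entry, docker defaults to HTTPS and fails with exactly the "server gave HTTP response to HTTPS client" error quoted above.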
16:25 <bbezak> it is only seen when running k-a upgrade jobs in kolla.
16:26 <bbezak> yeah, previously the last if statement was overridden by the global docker config:
16:26 <bbezak> {% if need_build_image and is_previous_release %}
16:26 <bbezak>   insecure-registries:
16:26 <bbezak>     - primary:4000
16:26 <kevko> bbezak: well, so we have broken all upgrade jobs, haven't we? :D
16:27 <kevko> bbezak: so actually we can't merge anything :D
16:27 <bbezak> and now we're falling into https://github.com/openstack/kolla-ansible/blob/master/tests/templates/globals-default.j2#L76
16:27 <kevko> bbezak: I can fix it, I think ... but I don't know what the purpose of that patch was ... I mean ... why was it changed?
16:31 <bbezak> apart from fetching more logs and adding debug for podman, it looks like a reorg of vars to be more specific and to use native vars for both podman and docker insecure registries
16:36 <SvenKieske> maybe the specific vars differ slightly in behaviour? just a guess though
16:36 <kevko> bbezak, SvenKieske: hmm, I've checked the code by eye and maybe I know where the problem is ... let me try it :)
16:36 <bbezak> exactly SvenKieske: insecure-registries: is a native ansible-collection-kolla var which overrides previous settings
16:37 <bbezak> and now we don't have it
16:37 <bbezak> actually docker_custom_config
16:37 <bbezak> which is now gone
16:37 <bbezak> so either we add it back, or rework the logic
16:38 <bbezak> the thing is that we're not running the image-building job in kolla-ansible, that's why we didn't catch it
16:39 <SvenKieske> mhm, my bad, I actually didn't review this enough. I had a slight suspicion when reviewing that part that there might be something odd in there, because I reported a bug in the past about the insecure_registry stuff.
16:41 <SvenKieske> that was an unrelated bug, but I knew the code from that incident. I guess it was too long ago and I didn't quite remember why that code looked a little odd to me, so I just went with the majority vote there.
16:47 <bbezak> (mostly afk now, can look into it tomorrow)
17:39 <kevko> okay, there is some mess with tests/templates/globals-default.j2, or a mess in version and is_previous_release

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!