Tuesday, 2021-04-13

*** jralbert has quit IRC00:23
*** spatel has joined #openstack-ansible02:10
*** evrardjp has quit IRC02:33
*** evrardjp has joined #openstack-ansible02:33
*** miloa has joined #openstack-ansible04:12
*** miloa has quit IRC04:18
*** d34dh0r53 has quit IRC04:30
*** akahat has quit IRC05:02
*** spatel has quit IRC05:05
*** akahat has joined #openstack-ansible05:14
*** SiavashSardari has joined #openstack-ansible05:25
*** dpawlik has joined #openstack-ansible06:33
*** dpawlik has quit IRC06:37
*** dpawlik has joined #openstack-ansible06:40
*** dpawlik has quit IRC06:42
*** dpawlik has joined #openstack-ansible06:54
*** dpawlik has quit IRC06:55
*** pcaruana has quit IRC07:11
*** dpawlik has joined #openstack-ansible07:13
*** dpawlik has quit IRC07:13
*** dpawlik has joined #openstack-ansible07:14
*** andrewbonney has joined #openstack-ansible07:14
*** pcaruana has joined #openstack-ansible07:40
*** luksky has joined #openstack-ansible07:42
*** rpittau|afk is now known as rpittau07:43
*** tosky has joined #openstack-ansible07:49
*** miloa has joined #openstack-ansible08:42
*** miloa has quit IRC08:53
*** snapdeal has joined #openstack-ansible09:07
admin0morning09:22
*** dpawlik has quit IRC09:28
*** akahat has quit IRC09:28
*** snapdeal has quit IRC09:43
*** dpawlik has joined #openstack-ansible09:52
*** akahat has joined #openstack-ansible09:53
*** rpittau is now known as rpittau|bbl09:54
*** dpawlik has quit IRC10:45
*** dpawlik9 has joined #openstack-ansible10:53
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Map dbaas and lbaas with role defaults  https://review.opendev.org/c/openstack/openstack-ansible/+/78411311:07
openstackgerritMerged openstack/openstack-ansible-os_nova master: Set default qemu settings for RBD  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/78382911:08
openstackgerritMerged openstack/openstack-ansible-openstack_hosts master: Replace import with include  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77422411:18
openstackgerritMerged openstack/openstack-ansible-os_trove master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/78073211:26
SiavashSardariHi everyone11:44
SiavashSardariI have a problem with oslomsg mq notify setup11:45
SiavashSardariI think some logic here is not quite right11:45
SiavashSardarihttps://opendev.org/openstack/openstack-ansible-tests/src/branch/master/sync/tasks/mq_setup.yml#L69-L7111:45
SiavashSardariand also in lines 82 and 9711:46
SiavashSardariI wanted to upload a patch but I thought it would be great if I could get your input first11:48
jrosserSiavashSardari: can you explain what you think is wrong?11:52
SiavashSardarijrosser Oops, I always forget to explain '=D . I want to add another rabbitmq cluster for notifications. I added the proper skel info to the env.d folder and installed a cluster with the rabbit role. Then I added oslomsg_notify_* variables and ran the playbooks. I noticed that there are no users and vhosts on the new notify rabbit cluster. I did a little digging11:57
SiavashSardariand got to the link above.11:57
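The setup SiavashSardari describes could be sketched as a `user_variables.yml` fragment. This is a hedged sketch only: the variable names follow the `oslomsg_notify_*` pattern mentioned above, but the exact set of variables and the group name `notify_rabbitmq_all` are assumptions that must be checked against the role defaults of your OSA release.

```yaml
# user_variables.yml (sketch) — point oslo.messaging notifications at a
# separate rabbitmq cluster. Names below are illustrative assumptions.
oslomsg_notify_host_group: notify_rabbitmq_all   # the env.d skel group added above
oslomsg_notify_userid: notify-oslomsg
oslomsg_notify_password: "{{ notify_oslomsg_password }}"
oslomsg_notify_port: 5672
oslomsg_notify_use_ssl: false
```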
jrosserthis may address it? https://review.opendev.org/c/openstack/openstack-ansible-tests/+/78522411:58
SiavashSardariyep that's the case. Thanks12:00
jrosserif you could test that and leave a comment on the patch it would be great12:00
SiavashSardarijust out of curiosity, I can't see how (_oslomsg_rpc_vhost is undefined) can happen?12:01
SiavashSardariyep actually I did the exact same thing on my setup and it worked. but let me check again just to be sure12:02
SiavashSardarijrosser qq I was testing just on the cinder role, and noonedeadpunk's patch worked. how can I use the openstack-ansible-tests repo to update all my repos?12:09
jrosserSiavashSardari: unfortunately you can't, that is automation for when you submit a patch to gerrit12:10
jrosseronce it merges in openstack-ansible-tests there is scripting which creates the same patch in all the relevant repositories automatically12:11
SiavashSardarioh. what about backports?12:11
jrossergenerally only bugfixes are backported, and the merged patch from openstack-ansible-tests would be cherry-picked to a stable branch and then the whole auto-generation of patches would happen again12:12
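As a stopgap while waiting for that auto-generation, the cherry-pick jrosser describes can also be done by hand in a local clone. This is a sketch only; the branch name and commit SHA are placeholders.

```shell
# Manually carry a merged master fix onto a stable branch (sketch).
git clone https://opendev.org/openstack/openstack-ansible-tests
cd openstack-ansible-tests
git checkout -b mq-setup-backport origin/stable/victoria
git cherry-pick <merged-commit-sha>   # resolve any conflicts, then test
```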
SiavashSardarijrosser Thanks for the explanation. so I should fork everything on our gitlab, commit on every repo and use them as role_requirements? Please tell me there is another way =$ X-P12:18
*** rpittau|bbl is now known as rpittau12:31
noonedeadpunkwell, technically we could backport it somewhere, but it will produce another pile of work there as some roles' CI might be broken, especially later for V...12:38
*** spatel_ has joined #openstack-ansible12:39
*** spatel_ is now known as spatel12:39
*** d34dh0r53 has joined #openstack-ansible12:53
SiavashSardarinoonedeadpunk can we at least backport to V? there are some changes in V compared to U, but I think at some point I will upgrade to U.12:59
noonedeadpunkYou can easily jump from T to V directly just in case13:00
SiavashSardarinoonedeadpunk OSA upgrades always worked fine for me, but last time I upgraded from T to U, OSA was fine but the ceph part caused us data loss, and now I have a very hard job convincing management of any upgrade.13:04
noonedeadpunkYou can easily leave ceph as is actually13:04
noonedeadpunkIt's not a hard requirement to match ceph-ansible (or ceph) with a specific osa version. It's more like what we're providing by default, and that can be overridden to match current settings13:05
SiavashSardariThat is my plan but management sees upgrades as risks now '=(13:05
noonedeadpunkgotcha13:06
SiavashSardarianyways can we backport your change to V?13:07
noonedeadpunklet's cross the bridge when we come to it. We first need to land it for master anyway13:08
SiavashSardariOK then (y)13:09
noonedeadpunkbtw, can I get another vote for https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/784145/6 to be able to rebase upper patches just on master and resolve conflicts?13:10
mgariepynoonedeadpunk, done.13:12
noonedeadpunkawesome, thanks!13:12
*** tosky has quit IRC13:19
SiavashSardariThe other day I caught a strange behavior. I created a VM with an external network, ran tcpdump inside the VM, and noticed there were some SYN packets with destinations other than my IP address (but in my IP pool; I have two external subnets on my external net). I did a little digging on the issue and noticed that the13:21
SiavashSardariSYN packets' destinations were IPs which are in my pool but not assigned to any port or fip, and when I assign one of those IPs to a port the SYN packets vanish. sg has default rules with ssh and ping rules. It's not a security issue but still a very strange thing. Maybe I have some misconfiguration?13:21
*** tosky has joined #openstack-ansible13:25
*** tosky has quit IRC13:29
*** tosky has joined #openstack-ansible13:29
spatelSiavashSardari check bridge ageing time13:57
spatelbrctl showstp <bridgename>13:57
spatelIn the past I had that issue; it was a neutron bug setting the ageing time to zero, generating a flood to all ports13:58
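The check spatel suggests can be sketched like this; `brqXXXX` stands in for the actual bridge name on the compute host, and the sysfs path is an alternative when bridge-utils is not installed.

```shell
# Hedged sketch of the linuxbridge ageing check; an ageing time of 0
# makes the bridge flood every frame to all ports like a hub.
brctl showstp brqXXXX | grep -i ageing
# equivalent without bridge-utils (value is typically in centiseconds):
cat /sys/class/net/brqXXXX/bridge/ageing_time
```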
*** luksky has quit IRC14:05
*** luksky has joined #openstack-ansible14:06
SiavashSardarispatel Thanks, I'm using OVS and I don't know how to check aging time. but thank you for the hint14:06
spatelThat issue is more over LinuxBridge side..14:06
SiavashSardarispatel I found some snooping aging time on ovs docs. maybe I can find something14:10
SiavashSardarispatel just to be sure, your issue was like mine? I mean just unassigned IPs were getting broadcasts?14:11
spatelIn my case the bridge was flooding every single packet like unicast on each port, so when I ran tcpdump I could see my neighbor VMs' traffic14:13
SiavashSardarioh that is not my case. that would be a very big security issue. my problem is that I just get SYN packets with destinations of unassigned IP addrs14:15
SiavashSardariin other words, SYN packets which do not match any MAC addrs and sg rules get broadcasted14:16
SiavashSardariand when I assign that IP to a port, which gets a MAC addr, the broadcasting stops.14:17
*** macz_ has joined #openstack-ansible14:17
spatellook into the openflow rules, maybe something is there or maybe it's normal behavior. I am not running OVS so no idea14:28
*** SiavashSardari has quit IRC14:38
openstackgerritMerged openstack/openstack-ansible-os_trove master: Change default pool subnet  https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/78414514:46
openstackgerritMerged openstack/openstack-ansible-os_designate master: Generate designate_pool_uuid dynamically  https://review.opendev.org/c/openstack/openstack-ansible-os_designate/+/77184114:47
noonedeadpunk#startmeeting openstack_ansible_meeting15:04
openstackMeeting started Tue Apr 13 15:04:14 2021 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.15:04
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:04
*** openstack changes topic to " (Meeting topic: openstack_ansible_meeting)"15:04
openstackThe meeting name has been set to 'openstack_ansible_meeting'15:04
noonedeadpunk#topic rollcall15:04
*** openstack changes topic to "rollcall (Meeting topic: openstack_ansible_meeting)"15:04
noonedeadpunko/15:04
noonedeadpunk#topic bug triage15:11
*** openstack changes topic to "bug triage (Meeting topic: openstack_ansible_meeting)"15:11
noonedeadpunkhttps://bugs.launchpad.net/openstack-ansible/+bug/192318315:12
openstackLaunchpad bug 1923183 in openstack-ansible "Spice html5 client does not get installed in nova api" [Undecided,New]15:12
noonedeadpunkSo, it seems that it is distro install15:14
noonedeadpunkwhich fails for centos 8...15:14
jrosserhello15:17
noonedeadpunkhey! sorry, decided to start the meeting at the new time today15:18
noonedeadpunkprobably worth switching from next week....15:18
jrosseryeah, just saw my email!15:19
noonedeadpunkWhat do you think about https://bugs.launchpad.net/openstack-ansible/+bug/1923184 ?15:21
openstackLaunchpad bug 1923184 in openstack-ansible "Allow disable console_agent when using spice console" [Undecided,New]15:21
noonedeadpunksounds pretty fair to me...15:23
*** gshippey has joined #openstack-ansible15:23
jrossera config override can disable that?15:24
noonedeadpunkI think it can....15:24
noonedeadpunkbut probably it really makes sense to change the default behaviour we have15:24
jrosseri'm struggling to find docs for this15:25
noonedeadpunkhttps://docs.openstack.org/nova/victoria/configuration/config.html#spice.agent_enabled15:26
noonedeadpunkI think we just misused nova_console_agent_enabled...15:27
jrosseryeah that is pretty odd use of it15:28
jrosserthere is the same oddness in the vnc conditional block too15:29
noonedeadpunkEventually, I'd probably just drop nova_console_agent_enabled entirely...15:29
noonedeadpunkand make use of overrides in case agent_enabled should be adjusted15:30
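The override being discussed might look roughly like the fragment below. `nova_nova_conf_overrides` is the standard OSA config-override pattern, and `spice.agent_enabled` is the nova option from the config reference linked above; treat this as a sketch rather than a tested recipe.

```yaml
# user_variables.yml (sketch) — disable the spice guest agent via the
# generic nova.conf override mechanism instead of a dedicated variable.
nova_nova_conf_overrides:
  spice:
    agent_enabled: false
```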
jrosserfeels like this also mis-uses that variable https://github.com/openstack/openstack-ansible-os_nova/blob/master/defaults/main.yml#L21215:31
jrosserthat'll be disabling [spice|vnc] on aarch6415:31
jrosserwhich is serial only15:31
noonedeadpunkhm, yes... I think that we can just set `nova_console_type` to false or smth15:32
noonedeadpunkwait...15:32
*** gyee has joined #openstack-ansible15:32
noonedeadpunkhttps://github.com/openstack/openstack-ansible-os_nova/blob/master/defaults/main.yml#L26615:32
noonedeadpunkwe're not using it in `[serial_console]`15:33
noonedeadpunkSuper weird really15:33
noonedeadpunkSo, we can either fix behaviour, then it would be limited to spice only I guess?15:33
noonedeadpunkOr we can just drop the variable :)15:33
jrosserthis code is nuts15:36
jrosserseems like a legitimate use case to want to disable it for windows15:37
noonedeadpunkso you suggest fixing it?15:38
noonedeadpunkor we can fix, backport and drop afterwards :p15:38
jrosserfor spice i think fix15:38
jrosserthen i have no idea what the point of this is https://github.com/openstack/openstack-ansible-os_nova/blob/master/defaults/main.yml#L22415:39
jrosserso drop that, and clean up all the aarch64 logic15:39
noonedeadpunkI'm more wondering about that https://github.com/openstack/openstack-ansible-os_nova/blob/master/defaults/main.yml#L21215:40
jrosserthat looks bogus, and should be just True15:40
noonedeadpunkIt's just not used anywhere15:41
noonedeadpunkhttps://codesearch.opendev.org/?q=nova_spice_console_agent_enabled&i=nope&files=&excludeFiles=&repos=15:41
jrosserboth spice and vnc should already be disabled for aarch64 from here https://github.com/openstack/openstack-ansible-os_nova/blob/master/defaults/main.yml#L26615:41
noonedeadpunkand they are I'm pretty sure, yes15:41
jrosseroh well thats actually the bug then, this line https://github.com/openstack/openstack-ansible-os_nova/blob/master/templates/nova.conf.j2#L7915:42
noonedeadpunkWhat frustrates me more is that we have nova_spice_console_agent_enabled, nova_novncproxy_agent_enabled and nova_console_agent_enabled15:42
jrossershould be nova_spice_console_agent_enabled15:42
jrosserthere is no need for nova_console_agent_enabled or nova_novncproxy_agent_enabled that i can see15:42
jrosserit's a specific setting for spice?15:43
noonedeadpunkyeah15:43
noonedeadpunkWell, from what I see from nova config reference15:43
jrosserwhat a mess /o\ my head hurts :)15:44
noonedeadpunkwell, I'd actually vote for removing all these options and making people use overrides. But probably nova_spice_console_agent_enabled is a really pretty widespread usecase, so maybe worth leaving this single var indeed...15:45
jrosseri think thats the best simplification given it's an already existing var15:45
noonedeadpunkbut yeah, that's a mess and great it has been raised15:45
openstackgerritMerged openstack/openstack-ansible master: Return PyMySQL installation for distro installs  https://review.opendev.org/c/openstack/openstack-ansible/+/78495715:45
noonedeadpunkok, greed then15:46
noonedeadpunk*agreed15:46
noonedeadpunk#topic office hours15:46
*** openstack changes topic to "office hours (Meeting topic: openstack_ansible_meeting)"15:46
noonedeadpunkThe most problematic things at the moment are not being able to merge the bump and the mariadb version upgrade15:46
jrosseryeah, very sad that theres been no further movement on the mariadb jira ticket15:47
noonedeadpunkbump fails on rabbitmq ssl, while SSL should not be actually used according to the config...15:47
*** luksky has quit IRC15:47
jrosseri wonder if it's worth *downgrading* galera :(15:47
noonedeadpunkand I think we'd want to merge it really asap to be able to do freeze for wallaby, and not go forward with master15:47
jrosserwe'd have to go back to 10.5.7 or something15:48
noonedeadpunkand can we technically downgrade to a version prior to what it was in V?15:48
jrosserperhaps, we pin the exact version but the package manager might not like it15:48
noonedeadpunkbut I mean that the mysql_upgrade script might not like it either...15:49
jrosseroh right sure yes, it's just not good really at all15:49
noonedeadpunkso I think our best option is kind of to move to admin user somehow, and then we won't care about root missing privileges?15:50
*** luksky has joined #openstack-ansible15:50
jrosseri was thinking should do something really minimal in the PKI role just to get the rabbitmq stuff sorted out15:50
noonedeadpunkand I think patch for admin has already merged?15:50
jrossergshippey did a lot of investigation on this15:51
noonedeadpunkI will try to sort out bump upgrade tomorrow and also check out on maria after that.15:52
jrosserthe root user gets broken at upgrade, thats ultimately the issue15:52
jrosserit's possible to create a file with some SQL in it which is executed at startup15:52
noonedeadpunkBtw for trove, I kind of got stuck on internet access from VMs inside the aio, as it tries to pull a docker image...15:52
jrosserthat was a possible solution in the mariadb jira ticket, to fix the root user permissions at startup with extra params pointing to statements to execute immediately15:53
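The `--init-file` idea from the ticket might look roughly like the fragment below, started with something like `mariadbd --init-file=/etc/mysql/fix-grants.sql`. This is a hedged sketch: the file path and account name are illustrative, and the exact accounts to repair depend on the deployment.

```sql
-- fix-grants.sql (sketch): executed once at server startup to restore
-- the root grants broken by the mariadb upgrade discussed above.
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```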
noonedeadpunkbut we won't need root user if we create admin somewhere before running setup-infrastructure15:53
noonedeadpunkand updating my.cnf15:54
jrosserah, in the upgrade scripts15:54
noonedeadpunk(I specifically changed the order of tasks lately to prevent my.cnf from being updated before the maria upgrade)15:54
noonedeadpunkyep15:54
noonedeadpunkI think that might work actually15:55
noonedeadpunkand root auth with socket I think still working in 10.5.9?15:55
jrossernot after the upgrade iirc15:56
jrosseras the root user perms get broken15:56
noonedeadpunkhm, I thought it affects only password auth?15:56
noonedeadpunkbut yeah, might be...15:56
jrosserthe ALL grant goes away15:57
noonedeadpunkfrom all root users or only `root`@`%` or smth?15:57
noonedeadpunkas how then non-upgrade tests pass....15:58
noonedeadpunkah, it's broken during running upgrade....15:58
noonedeadpunk*mysql upgrade15:58
noonedeadpunkdamn it....15:58
noonedeadpunkmaria becomes so buggy lately...15:58
jrosserthe grants are a bitfield15:59
jrosserand the interpretation of the bitfield changes 10.5.8 -> 10.5.916:00
noonedeadpunkoh...........16:00
jrosserso what previously meant 'ALL' no longer means 'ALL'16:00
noonedeadpunkI read the bug report the wrong way then....16:00
jrosseroh maybe me too :)16:00
noonedeadpunknah, I really just did a quick look-through16:00
noonedeadpunkSo I was under the impression that things are not so bad16:01
jrosserit's this massive number "access":18446744073709551615 <- what the bits mean inside that16:01
jrosser`And with the addition of the new privilege SLAVE MONITOR, the value has become insufficient for "GRANT ALL":`16:02
jrosserdoh16:02
noonedeadpunkyeah, so then indeed the only option is a startup script that would fix permissions and get removed afterwards16:02
noonedeadpunkbut that's so....16:03
jrosseri dug around in the code a bit before 8-O https://github.com/MariaDB/server/blob/10.6/mysql-test/main/upgrade_MDEV-19650.test#L20-L5316:05
noonedeadpunkI was probably confused with ` IF(JSON_VALUE(Priv, '$.plugin') IN ('mysql_native_password', 'mysql_old_password')`16:08
jrossersuper fragile here too https://github.com/MariaDB/server/blob/10.6/sql/privilege.h#L72-L7816:08
noonedeadpunkdoh, indeed... Well, that's C, so everything is fragile...16:08
noonedeadpunkI think that's one of the reasons why Rust becomes more and more popular16:09
jrosseranyway :)16:09
noonedeadpunk#endmeeting16:09
*** openstack changes topic to "Launchpad: https://launchpad.net/openstack-ansible || Weekly Meetings: https://wiki.openstack.org/wiki/Meetings/openstack-ansible || Review Dashboard: http://bit.ly/osa-review-board-v3"16:09
openstackMeeting ended Tue Apr 13 16:09:19 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:09
openstackMinutes:        http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-04-13-15.04.html16:09
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-04-13-15.04.txt16:09
openstackLog:            http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-04-13-15.04.log.html16:09
jrosserwe do kind of need to work on the merging backlog16:11
jrossertheres a lot of stuff outstanding16:11
noonedeadpunkagree16:12
noonedeadpunkoh, btw16:13
noonedeadpunkI wanted to discuss haproxy changes from andrewbonney....16:13
noonedeadpunkI'm pretty afraid of https://review.opendev.org/c/openstack/openstack-ansible/+/782373/116:15
jrosseroh yeah, so maybe something for tomorrow to talk to andrew16:17
jrosserbut we've added loads of monitoring on haproxy/keepalived and some really odd things are happening16:17
noonedeadpunksounds good16:17
jrosserthat might be one also to ask evrardjp about too16:18
jrosseras i don't know the history of that16:18
noonedeadpunkyeah, I'm just trying to be cautious here, as I really feel we're not doing our best there, but we need to change delicately, as it kind of worked somehow that way for years...16:18
jrosserso we find things like, if you take down the external things that keepalived is using as healthcheck16:19
jrosserthen all of a sudden the haproxies are restarted16:19
jrosserand thats kind of surprising16:20
noonedeadpunkyeah, I was facing that as well and second part of the work is good (except changing behaviour)16:20
noonedeadpunkbut I'm not sure about keepalived_haproxy_notifications.sh script16:21
jrosserthis also led to chasing a bunch of galera/haproxy related stuff16:21
jrosseri think we are hitting 1500 max connections on only modest deployments16:21
jrosser~35 computes or so + HA control plane16:21
jrosserand did you ever see this? https://www.percona.com/blog/2013/05/02/galera-flow-control-in-percona-xtradb-cluster-for-mysql/16:22
noonedeadpunkumm... nope... But this could probably change in modern galera?16:23
noonedeadpunkbut not dramatically I think...16:23
jrosserperhaps, though i think we are seeing flow control kick in when we run tempest against our test lab16:23
*** rpittau is now known as rpittau|afk16:24
noonedeadpunkwell, I think that's because haproxy can't really properly pick up real master, comparing to maxscale?16:24
jrosserit's an indication that the slaves cannot keep up with the replication rate16:25
noonedeadpunkbut haproxy might pick up a slave as "master"? And then it will "write" to the slave, which will forward traffic to the master and then receive the same stuff back?16:26
jrosserhmm16:26
noonedeadpunkah, sorry, I think xinetd prevents this from happening....16:27
jrosserwe're going to put some mysql_exporter on it to see whats going on16:29
noonedeadpunkhmm... I haven't really looked much into our galera setup as it was actually never a problem...16:29
noonedeadpunkbut we have `balance leastconn` and from what I see in my cluster, all backends are L7 passed.16:30
noonedeadpunkwhich means that xinetd is not really working. And it's super bad that we write to all galera backends, as it's the thing that might cause issues for you16:30
jrosserwe were also thinking that the xinet scripts are not smart enough16:31
noonedeadpunkwe should really write only on master, as otherwise there's extra load on slaves16:31
noonedeadpunkThe only proper, battle-tested galera LB I know of was maxscale16:31
jrosserit's possible that there are cluster states that we dont handle properly during times when flow control is active16:32
jrosserand thats confusing the healthcheck and making it fail over to another node16:32
noonedeadpunkwell, according to https://opendev.org/openstack/openstack-ansible-galera_server/src/branch/master/templates/clustercheck.j2#L74 we should see all except one backend as failed16:32
noonedeadpunkah... it's not read only....16:33
noonedeadpunkbut, it's proxying all write requests to master anyway16:34
noonedeadpunktbh, I'd be glad to replace this weird haproxy balancing with maxscale... As it's able to get the current status and pass write requests only to the master and read ones to all other hosts16:35
noonedeadpunkand I don't think we will be able to do that in haproxy ever16:35
mgariepyfor keepalived i've been setting the ping address to localhost for a couple of years now. i had stability issues with external ping in the past.16:38
jrosserdoes maxscale handle it's own VIP?16:42
openstackgerritMerged openstack/ansible-hardening master: Added pam_auth_password to nullok check  https://review.opendev.org/c/openstack/ansible-hardening/+/78598416:45
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_nova master: Remove nova console variables  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/78607716:48
noonedeadpunkjrosser: no, but it's keepalived who handles it?16:49
jrosserthey have many words of docs but i miss a picture :)16:49
jrosserso it's like one maxscale instance per current "haproxy" node in OSA and VIP with keepalived16:50
noonedeadpunkyep16:50
jrossermakes sense16:50
noonedeadpunkand in maxscale you configure backends https://mariadb.com/kb/en/mariadb-maxscale-25-configuring-servers/ and monitor https://mariadb.com/kb/en/mariadb-maxscale-25-configuring-the-galera-monitor/16:51
noonedeadpunkand that was kind of it iirc. Considering that we already point all clients to the vip, the transition should be pretty flawless16:52
jrosseryes, it would be fairly OK upgrade path16:52
noonedeadpunkoh, and you would need a router ofc as well https://mariadb.com/kb/en/mariadb-maxscale-25-read-write-splitting-with-mariadb-maxscale/16:53
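Pulling the three linked MaxScale 2.5 docs together, a minimal configuration might look like the sketch below. Section names, addresses, and credentials are illustrative assumptions; the option names follow the linked server/galeramon/readwritesplit documentation and should be verified against it.

```ini
# /etc/maxscale.cnf (sketch) — one galera backend plus monitor and
# read/write-splitting router, per the MaxScale 2.5 docs linked above.
[galera1]
type=server
address=172.29.236.101
port=3306
protocol=MariaDBBackend

[Galera-Monitor]
type=monitor
module=galeramon
servers=galera1
user=maxscale_monitor
password=CHANGE_ME

[RW-Split-Router]
type=service
router=readwritesplit
servers=galera1
user=maxscale
password=CHANGE_ME

[RW-Split-Listener]
type=listener
service=RW-Split-Router
protocol=MariaDBClient
port=3306
```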
*** sshnaidm has joined #openstack-ansible17:28
*** sshnaidm is now known as sshnaidm|pto17:28
evrardjpI missed the ping, is everything ok?17:38
evrardjpOn the first OSA clusters I built and maintained, I used my own keepalived/haproxy roles, because my haproxy requirements were quite different than what we had in OSA17:40
evrardjpso I might help on keepalived and haproxy, but I might not be fully aware of all things of our HAproxy role17:40
evrardjpIf you have any question, shoot :)17:40
jrosserevrardjp: it was a query on this https://review.opendev.org/c/openstack/openstack-ansible/+/782373/117:57
jrosserhaving haproxy restart when keepalived goes to FAULT seems unusual, is there some good reason to do that which we miss?17:59
*** larsks has joined #openstack-ansible18:08
*** larsks has left #openstack-ansible18:08
mgariepyanyone here have any contact on the horizon core team ?18:40
mgariepyi have a patch that has been sitting for weeks..18:40
*** andrewbonney has quit IRC18:54
*** spatel has quit IRC19:12
*** spatel_ has joined #openstack-ansible19:20
*** spatel_ is now known as spatel19:20
*** lvdombrkr has joined #openstack-ansible19:23
lvdombrkrhello all19:23
lvdombrkris there a possibility to set up a non-containerized openstack-ansible?19:24
*** spatel has quit IRC19:36
jrosserlvdombrkr: yes, we support a fully "metal" deployment with no containers19:42
jrosseryou can test that in the AIO by setting the environment variable "SCENARIO=aio_metal" before running bootstrap-aio.sh19:42
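The metal AIO jrosser mentions can be bootstrapped roughly as follows, per the OSA quickstart, on a disposable VM with enough resources; this is a sketch and should be checked against the quickstart for your branch.

```shell
# Build an all-in-one "metal" (no LXC) deployment; run as root.
git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
export SCENARIO=aio_metal
scripts/bootstrap-ansible.sh
scripts/bootstrap-aio.sh
cd playbooks
openstack-ansible setup-hosts.yml setup-infrastructure.yml setup-openstack.yml
```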
*** spatel_ has joined #openstack-ansible19:46
*** spatel_ is now known as spatel19:46
*** spatel has quit IRC20:03
admin0lvdombrkr, though non-containerized works very well, you might come into a situation where you install something (say for monitoring or some internal package etc) and it affects python or adds/removes something that might affect the APIs .. so based on experience, i would recommend using the containerized setup20:07
admin0as it gives you a lot of flexibility and keeps your base-os and its packages separate from the api containers20:07
lvdombrkradmin0 jrosser: we have an existing "manually" non-containerized deployed openstack (Ocata, Ubuntu 16.04) in prod but now we want to deploy some dev platform, and it would be great if we can deploy something as similar as possible20:13
admin0lvdombrkr, they are exactly the same .. except they run in containers20:13
admin0just because you used to do non-containers does not mean you cannot give a containerized system a try :)20:13
admin0you can scale, move, delete, shut down, and do stuff freely without worrying about its effect on other containers or the base os20:14
jrosserlvdombrkr: is it clear that we are using lxc machine containers that work just like hosts, not docker-style app containers?20:16
lvdombrkradmin0: yes but for example we want to implement horizon/keystone multi-auth. can i customize/build container images?20:18
lvdombrkrjrosser: ^20:19
jrosserthere are no container images20:19
jrosserlike I say it’s not docker style20:19
jrosseryou “boot” something that’s like an Ubuntu (or whatever OS) vm via LXC20:20
jrosserand install / configure everything with regular ansible playbooks/roles20:20
jrosserone of the strong points of OSA is the service configuration possibilities are extremely good20:21
jrosserlvdombrkr: if you've not use openstack-ansible before i would highly recommend building an all-in-one test setup in a single VM, instructions are here https://docs.openstack.org/openstack-ansible/victoria/user/aio/quickstart.html20:24
jrosserthat's a completely self-building deployment; it's also what we use in our CI setup so it gets very heavy testing20:26
jrosserit's reasonably likely that you would be able to test/prototype your keystone auth setup with an AIO20:28
jrossermy team have used the AIO as a testbed for SSO and openid-connect for example20:29
lvdombrkrjrosser: perfect, let me test.. if i will have questions i will ping you here20:30
jrossersure, there's usually folk around here in the EU timezone20:31
lvdombrkrjrosser: PS maybe you know someone who already implemented some multi-auth?20:41
jrosserdo you have an example of what you want to do?20:41
jrosseri'm slightly unsure if you mean multiple auth providers or mutli-factor auth20:42
admin0lvdombrkr, the only thing container does is provide a separate env for your apis to run ..    rest  = all the same20:43
lvdombrkrjrosser: authentification like this : https://openstackdocs.safeswisscloud.ch/en/quickguide/index.html20:46
lvdombrkradmin0: good, i will test it20:46
jrosserlvdombrkr: oh cool! keycloak :)20:47
jrosserso you've got keycloak deployed somewhere separate from your openstack as an identity provider?20:48
*** spatel_ has joined #openstack-ansible20:52
*** spatel_ is now known as spatel20:52
lvdombrkrjrosser: not yet, we are just brainstorming right now ))20:55
jrosserok :) so i have two regions with OSA deployed clouds using keycloak as an IdP20:56
jrosserOSA won't do anything to deploy keycloak for you, but integrating keystone/horizon as federated identity with OIDC is all working20:57
lvdombrkrjrosser: you implemented it yourself? are a lot of customizations required?21:05
jrosserlvdombrkr: there are quite a few settings to make for keystone, but it's all a set of yml that the ansible roles read21:08
jrossermy team contributed a lot of the current OIDC support in openstack-ansible21:09
lvdombrkrjrosser: there are no customizations in horizon? just in keystone?21:10
jrosserwell it's a mixture tbh21:10
jrosserdepends exactly what you want to achieve21:10
lvdombrkrjrosser: ok i see.. im off for today, will be back tomorrow )) thanks for clues/assist21:11
jrosserfor keystone there is an example here https://github.com/openstack/openstack-ansible-os_keystone/blob/master/defaults/main.yml#L451-L48121:11
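The linked example could be adapted into something like the fragment below for a keycloak IdP. Heavy caveat: the `keystone_sp` key names here are written from memory of that defaults example and are assumptions; the linked defaults/main.yml is authoritative, and all URLs and secrets are placeholders.

```yaml
# user_variables.yml (rough sketch) — federate keystone with an external
# keycloak IdP over OIDC; verify key names against the linked example.
keystone_sp:
  apache_mod: mod_auth_openidc
  trusted_idp_list:
    - name: keycloak
      oidc_provider_metadata_url: "https://keycloak.example.com/auth/realms/osa/.well-known/openid-configuration"
      oidc_client_id: openstack
      oidc_client_secret: "{{ keystone_oidc_client_secret }}"
      protocols:
        - name: openid
          mapping:
            name: keycloak-openid-mapping
```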
jrosserlvdombrkr: no problem21:12
*** lvdombrkr has quit IRC21:23
*** spatel has quit IRC21:25
*** luksky has quit IRC22:07
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/77765022:35
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Gather minimum facts  https://review.opendev.org/c/openstack/openstack-ansible/+/77917422:36
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Disable fact variables  https://review.opendev.org/c/openstack/openstack-ansible/+/77839622:36
*** macz_ has quit IRC23:26
*** tosky has quit IRC23:34

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!