opendevreview | Mark Goddard proposed openstack/kayobe-config-dev stable/yoga: Avoid rebooting after disabling SELinux https://review.opendev.org/c/openstack/kayobe-config-dev/+/845394 | 08:15 |
opendevreview | Mark Goddard proposed openstack/kayobe-config-dev stable/xena: Avoid rebooting after disabling SELinux https://review.opendev.org/c/openstack/kayobe-config-dev/+/845395 | 08:15 |
opendevreview | Mark Goddard proposed openstack/kayobe-config-dev stable/wallaby: Avoid rebooting after disabling SELinux https://review.opendev.org/c/openstack/kayobe-config-dev/+/845396 | 08:15 |
Fl1nt | Hi everyone! | 08:17 |
Fl1nt | I'm facing a weird situation with Ironic. We enable pxe_filter = dnsmasq (the default?), so the inspector whitelists the host port MAC address tagged with pxe_enabled, but once it finishes inspecting the node, | 08:19 |
Fl1nt | it adds ",ignore" back to the end of the MAC, | 08:19 |
Fl1nt | which prevents the deploy step from working correctly, | 08:19 |
Fl1nt | as the node is rebooted but the image can't be loaded since the DHCP requests are filtered. | 08:20 |
Fl1nt | is there a way to tell ironic-conductor to whitelist this node on dnsmasq? | 08:21 |
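[Editor's note: for context, a minimal sketch of the mechanism Fl1nt describes — ironic-inspector's dnsmasq pxe_filter manages per-MAC files in a directory that dnsmasq watches; paths are assumed from a typical install:]

```
# dnsmasq.conf points at a directory managed by the inspector's filter:
dhcp-hostsdir=/var/lib/ironic-inspector/dhcp-hostsdir

# While introspection is pending, the per-MAC file in that directory
# allows DHCP for the port:
#   aa:bb:cc:dd:ee:ff
# Once inspection finishes, the filter rewrites it to deny DHCP --
# the trailing ",ignore" Fl1nt is seeing:
#   aa:bb:cc:dd:ee:ff,ignore
```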
opendevreview | Mark Goddard proposed openstack/kayobe master: ironic: revert to ironic's default drivers & interfaces https://review.opendev.org/c/openstack/kayobe/+/836999 | 08:54 |
Fl1nt | mgoddard, have you ever faced this situation using Ironic? I've just re-tested: each time the inspector finishes inspecting a node, it then blocks it from DHCP requests, which prevents you from deploying the host. | 09:10 |
kevko | \o/ | 09:58 |
Fl1nt | o/ | 09:58 |
kevko | how do you feel about doing a review for me :p ? Fl1nt ? :D | 09:59 |
Fl1nt | Fine, send me your link :D | 09:59 |
kevko | ;D | 09:59 |
kevko | Fl1nt: https://review.opendev.org/q/hashtag:proxysql+(status:open%20OR%20status:merged) < yeah, proxysql :D | 10:00 |
Fl1nt | Let's go :D | 10:01 |
Fl1nt | 7 of them? | 10:01 |
Fl1nt | or a specific one? | 10:01 |
kevko | well, "Add proxysql support" is the main one | 10:04 |
Fl1nt | oki dooki | 10:04 |
kevko | Fl1nt: Edit services roles to support database sharding << this one just changes defaults so proxysql-config can work .. and the others are just CI changes | 10:05 |
Fl1nt | looking for the meta one 'Add proxysql' | 10:05 |
kevko | yeah, thanks, I am again preparing downstream branches ... and I think this is ready to merge .. so I am just trying to finally find reviewers so I can backport the merged patch :P | 10:06 |
Fl1nt | From what I'm seeing it seems to be ready; I've just got a couple of comments, but they're not blocking from my POV. | 10:07 |
kevko | ask please | 10:09 |
Fl1nt | Listing them. | 10:09 |
Fl1nt | why do we set the monitor user to haproxy if proxysql isn't enabled? Shouldn't it rather be none or null/empty/whatever? | 10:10 |
Fl1nt | How do you handle shards when users deploy using an all-in-one setup? | 10:12 |
Fl1nt | hum... adding a keepalived check for HAProxy while HAProxy is waiting for the keepalived VIP will end up in a race condition at some point (I already hit this issue two years ago). | 10:13 |
kevko | Fl1nt: nope, the haproxy user - or let's say the monitor user - in the mariadb cluster has been used for several kolla-ansible releases - it's also used in the mariadb_clustercheck container to check the wsrep state | 10:13 |
Fl1nt | ok, got it | 10:14 |
kevko | Fl1nt: the user is just renamed to something better - monitor - and given a password | 10:14 |
Fl1nt | ok, sounds good to me for user :D | 10:15 |
kevko | Fl1nt: I am not sure I understand the situation you're describing | 10:15 |
Fl1nt | which one? All in one or Race condition? | 10:16 |
kevko | Fl1nt: in the keepalived container there is only one check, which runs checks for the services that need the VIP .. proxysql, haproxy, etc... | 10:16 |
kevko | Fl1nt: if one of those checks fails .. the VIP goes away | 10:16 |
Fl1nt | hum... we don't set any monitor password for the haproxy user? | 10:16 |
kevko | Fl1nt: nope .. that is the current situation :D | 10:17 |
Fl1nt | kevko, yes, but at bootstrap time your haproxy will wait for the VIP to be ready before answering keepalived's check for backend/frontend generation; if keepalived then waits for haproxy to be available before declaring the VIP available, you're in a dependency cycle. | 10:18 |
Fl1nt | I'll comment on the missing password for the default haproxy monitor user, as it's kind of a security issue, but it's not a blocking point for your review IMHO | 10:19 |
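[Editor's note: a sketch of the circular dependency Fl1nt describes, not kolla-ansible's exact configuration — script, names, interface, and addresses are illustrative:]

```
# keepalived.conf: the VIP is only held while the haproxy check passes.
vrrp_script check_haproxy {
    script "/usr/bin/pgrep haproxy"   # assumed check command
    interval 2
    fall 2
}
vrrp_instance internal_vip {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10
    }
    track_script {
        check_haproxy
    }
}
# If haproxy in turn only starts once 192.0.2.10 exists (e.g. its frontend
# binds to the VIP), neither side can come up first: the race condition.
```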
hrw | kevko: are there packages for proxysql for ubuntu 22.04 and centos stream 9? | 10:20 |
kevko | Fl1nt: this is a historical situation because of haproxy's "option mysql-check user haproxy post-41" <<< this implementation of the mysql check in haproxy only worked without a password | 10:21 |
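[Editor's note: the directive kevko quotes, shown in context — backend name and addresses are illustrative. HAProxy's mysql-check logs in as the given user without a password ("post-41" selects the post-MySQL-4.1 handshake), which is why the monitor user historically had to be passwordless:]

```
# haproxy.cfg
backend mariadb
    mode tcp
    option mysql-check user haproxy post-41
    server node1 192.0.2.11:3306 check
    server node2 192.0.2.12:3306 check
```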
Fl1nt | ah! argh :( | 10:21 |
Fl1nt | hum... this is really a security issue indeed, accessing the DB freely isn't really something you ever want ^^ | 10:22 |
Fl1nt | even if the monitor user isn't supposed to have broad rights. | 10:22 |
kevko | Fl1nt: yeah, I think so; on the other side .. the haproxy user doesn't have real rights .. | 10:22 |
Fl1nt | We can figure this point out later, as it's out of scope for your contribution. | 10:23 |
kevko | Fl1nt: well, my patch is fixing this :) | 10:23 |
kevko | hrw: https://repo.proxysql.com/ProxySQL/proxysql-2.4.x/jammy/ << 22.04 | 10:23 |
hrw | kevko: https://repo.proxysql.com/ProxySQL/proxysql-2.4.x/centos/ no 9 yet ;( | 10:24 |
kevko | hrw: yeah, I will open an issue on github .. the author is very responsive ... | 10:24 |
hrw | thanks | 10:25 |
Fl1nt | ok, LGTM | 10:29 |
Fl1nt | BUT | 10:29 |
Fl1nt | will need someone else to look at it too as I may have missed something. | 10:30 |
kevko | Fl1nt: I still don't get the race condition above ... because for now I think it works the same way for haproxy | 10:30 |
kevko | Fl1nt: yeah, thanks | 10:30 |
Fl1nt | yes, and it actually is causing a race condition :D | 10:30 |
Fl1nt | but it's a really rare situation. | 10:30 |
Fl1nt | the timeout window on both being large enough, it can still happen on slow servers or in specific situations. | 10:31 |
mnasiadka | hrw: https://review.opendev.org/c/openstack/kolla/+/844904 - missing +w for a reason? ;-) | 10:34 |
Fl1nt | be careful, openstacksdk<0.99.0 <-- this release has issues with swift (missing headers). | 10:42 |
hrw | mnasiadka: k-a part lacks second +2 | 10:45 |
kevko | anyone else for review guys :) ? | 10:50 |
Fl1nt | kevko, regarding the command within config.json, your comment makes sense, even if I have an issue with the fact that we'd have to rebuild the whole container to add a specific flag. | 10:51 |
kevko | Fl1nt: specific flag ? | 10:52 |
Fl1nt | Example: you've got a power outage and you need to restart your mariadb cluster, which didn't correctly mark the most advanced node; you'd just need to add the --wsrep-new-cluster flag, reboot node by node, and then remove the flag again. | 10:52 |
Fl1nt | with this example I would need to: | 10:53 |
Fl1nt | build a specific image -> deploy -> redeploy the normal image. | 10:53 |
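[Editor's note: a sketch of the manual recovery Fl1nt is describing, using Galera's standard files and flags; after a hard power cut grastate.dat often carries no usable position:]

```sh
# /var/lib/mysql/grastate.dat after an unclean shutdown typically shows:
#   seqno:   -1
#   safe_to_bootstrap: 0
mysqld --wsrep-recover       # logs "Recovered position: <uuid>:<seqno>"
mysqld --wsrep-new-cluster   # bootstrap the most advanced node, then
                             # remove the flag again for normal restarts
```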
kevko | Fl1nt: I would argue that you should use the mariadb_recovery kolla-ansible playbook | 10:54 |
kevko | Fl1nt: then everything is done automatically | 10:54 |
Fl1nt | it doesn't work with a power outage that prevents your cluster from writing a correct gvwstate.dat/grastate.dat. | 10:54 |
Fl1nt | oh ok, my bad, didn't see that it was updated | 10:56 |
Fl1nt | ok, I'm fine with that. | 10:57 |
kevko | +1 | 10:57 |
Fl1nt | Do you have any equivalent process to recover a proxysql cluster? | 10:57 |
kevko | just manually ... but I was satisfied with the kolla-ansible approach | 11:00 |
kevko | Fl1nt: i don't remember the reason ..but once i had a problem with something ...let me check | 11:00 |
kevko | Fl1nt: I had a patch for mariadb-arbitrator | 11:02 |
kevko | https://review.opendev.org/c/openstack/kolla-ansible/+/780811/45/ansible/roles/mariadb/tasks/recover_cluster.yml - line 79 - in some situations the host that was reported as having the largest seqno was not correct .. so I sorted numerically and took the first | 11:04 |
kevko | Fl1nt: ^ | 11:04 |
kevko | but I really don't remember what the case was .. an empty seqno .. or -1 or something like that .. I don't know | 11:05 |
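[Editor's note: a sketch of the sorting problem kevko describes — the input file and layout are hypothetical. Given "host seqno" pairs, a lexical sort ranks "9" above "123", and unknown positions (-1 or empty) must be skipped before picking the most advanced node:]

```sh
# keep only rows with a plain numeric seqno, sort numerically descending,
# take the top row: the host to bootstrap from
awk '$2 ~ /^[0-9]+$/' seqnos.txt | sort -k2,2nr | head -1
```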
opendevreview | Merged openstack/kolla-ansible master: Add keystone_authtoken.service_type https://review.opendev.org/c/openstack/kolla-ansible/+/834035 | 11:51 |
kevko | hrw: what about you, would you like to review my patches also ? | 12:46 |
hrw | kevko: that's k-a and deployment. a bit outside of my knowledge | 12:47 |
kevko | common , it's very simple :P | 12:47 |
kevko | but ok :) | 12:47 |
kevko | come-on :P | 12:48 |
Fl1nt | Anyone able to help me with this ironic issue? I've disabled the pxe_filter feature and it's now working fine. BTW, I discovered that the current ironic-inspector.conf.j2 template isn't shaped to let you disable this feature. | 12:50 |
Fl1nt | Can I patch that? I mean, at least add a way to disable the feature completely? | 12:50 |
Fl1nt | never mind, Victoria allows the noop driver for pxe_filter. | 12:53 |
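[Editor's note: the setting Fl1nt ends up using — ironic-inspector's PXE filter can be disabled outright via the noop driver:]

```ini
# ironic-inspector.conf
[pxe_filter]
driver = noop
```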
opendevreview | Michal Arbet proposed openstack/kolla master: Change kolla_version LABEL to git sha-1 https://review.opendev.org/c/openstack/kolla/+/818727 | 13:02 |
opendevreview | Dr. Jens Harbott proposed openstack/kolla-ansible master: CI: Switch upgrades xena->yoga to yoga->master https://review.opendev.org/c/openstack/kolla-ansible/+/844905 | 13:05 |
opendevreview | Michal Arbet proposed openstack/kolla master: Change kolla_version LABEL to git sha-1 https://review.opendev.org/c/openstack/kolla/+/818727 | 13:06 |
opendevreview | Merged openstack/kolla-ansible master: Fix typo in endpoint influxdb_internal_endpoint variable https://review.opendev.org/c/openstack/kolla-ansible/+/844925 | 13:14 |
kevko | mnasiadka: Why is this not merged? It looks like it's important to have cinder active-active properly configured -> https://review.opendev.org/c/openstack/kolla-ansible/+/763011 | 14:05 |
mnasiadka | kevko: feel free to work on it, it's listed as a priority on the whiteboard | 14:06 |
kevko | mnasiadka: I can, but where is the problem? (probably not clear what cinder-manage will do?) | 14:06 |
mnasiadka | kevko: Kolla whiteboard, L151 - everything is there | 14:07 |
Fl1nt | oooh c'mon... another K/V store to install on CPlane just for cinder active/active coordination??? msg: "Please enable redis or etcd when using Cinder Ceph backend" ?? seriously? | 14:07 |
kevko | mnasiadka: link please ? i promise that i am going to bookmark it :) | 14:08 |
mnasiadka | https://etherpad.opendev.org/p/KollaWhiteBoard | 14:08 |
Fl1nt | Could we make the cinder active/active mode optional then? | 14:08 |
mnasiadka | make a feature out of a bug? | 14:10 |
mnasiadka | I think those questions are rather meant for weekly meeting, feel free to propose a topic | 14:11 |
Fl1nt | I mean, I'm running our platforms over Ceph RBD with cinder, but we're definitely not willing to introduce etcd into the stack, and probably not redis either, for licensing issues. | 14:11 |
mnasiadka | what's wrong with bsd 3 clause license? ;-) | 14:12 |
Fl1nt | And I'm still having a hard time understanding what cinder calls active/active. Dynamic storage controller assignment for volumes in case a cinder-volume is down? | 14:12 |
mnasiadka | Yes, currently if your cinder-volume host goes down - all volumes served by it are unavailable for API operations (resize, etc) | 14:12 |
Fl1nt | Ok, so not really a big deal as you can live migrate any volumes from this host to another random host easily. | 14:13 |
Fl1nt | mnasiadka, license management at work is out of my hands, but I have to file a request for review each time we add previously unknown software to the stack, because even with BSD-3 there are commercial issues. | 14:15 |
mnasiadka | Fl1nt: sounds like my IBM times | 14:15 |
Fl1nt | yeah pretty much ^^ | 14:15 |
Fl1nt | but anyway, if cinder needs a k/v store for coordination, why didn't they simply use memcached? It's exactly that. | 14:16 |
mnasiadka | it uses tooz - if that supports memcached, then why not | 14:16 |
mnasiadka | https://docs.openstack.org/tooz/latest/user/drivers.html#memcached | 14:17 |
Fl1nt | seems so: https://docs.openstack.org/tooz/latest/_modules/tooz/drivers/memcached.html | 14:18 |
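[Editor's note: a sketch of what pointing cinder's coordination at memcached would look like via tooz — host and port are illustrative:]

```ini
# cinder.conf
[coordination]
backend_url = memcached://192.0.2.20:11211
```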
Fl1nt | zookeeper aaarrghhh I almost pu**d on my mouth reading that... for god sake, those tools were created by a sick mind... | 14:21 |
Fl1nt | zkServer.sh lol, ok I'm done with this service ^^ | 14:21 |
Fl1nt | tooz is a really welcome library tbh | 14:24 |
kevko | hmm, if we are using a coordination url and don't have host or backend_host or cluster defined .. we are already active-active, right? | 15:07 |
kevko | we don't need to define cluster = something .. or do we? | 15:07 |
opendevreview | Pierre Riteau proposed openstack/kolla master: Bump prometheus-openstack-exporter version to 1.6.0 https://review.opendev.org/c/openstack/kolla/+/845601 | 15:34 |
Fl1nt | kevko, Coordination url? you mean etcd endpoint? | 15:35 |
kevko | redis | 15:35 |
Fl1nt | cluster is to specify which cluster to use in the multi-backend case, I think. | 15:36 |
kevko | Fl1nt: # Name of this cluster. Used to group volume hosts that share the same backend | 15:36 |
kevko | # configurations to work in HA Active-Active mode. (string value) | 15:36 |
kevko | #cluster = <None> | 15:36 |
kevko | so, if we have only ceph ... and this is not set .. are we active-active ? | 15:37 |
opendevreview | Pierre Riteau proposed openstack/kolla master: Bump prometheus-openstack-exporter version to 1.6.0 https://review.opendev.org/c/openstack/kolla/+/845601 | 15:40 |
Fl1nt | kevko, nope, you're not ^^ | 16:08 |
kevko | how is that possible? | 16:08 |
Fl1nt | but honestly, it's not that much of a burden, as this is just a matter of ops actions, not data flow. | 16:08 |
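[Editor's note: the missing piece in kevko's question — a coordination backend alone is not enough; cinder-volume hosts only work active-active once they are grouped under a common cluster name. A sketch, value illustrative:]

```ini
# cinder.conf
[DEFAULT]
cluster = rbd-cluster
```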
Fl1nt | priteau, I'm using 1.6.0 and unfortunately I'm no longer able to retrieve metrics in grafana because of a weird HTTP error at the exporter level. | 16:11 |
priteau | I've used 1.5.0 without problem. I will test it anyway. | 16:19 |
opendevreview | Michal Arbet proposed openstack/kolla-ansible master: Add proxysql support for database https://review.opendev.org/c/openstack/kolla-ansible/+/770215 | 17:22 |
opendevreview | Michal Arbet proposed openstack/kolla-ansible master: Edit services roles to support database sharding https://review.opendev.org/c/openstack/kolla-ansible/+/770216 | 17:22 |
opendevreview | Michal Arbet proposed openstack/kolla-ansible master: [CI] Test ProxySQL with shards in the nova cells scenario https://review.opendev.org/c/openstack/kolla-ansible/+/770621 | 17:22 |
opendevreview | Michal Arbet proposed openstack/kolla-ansible master: [DNM] Trigger cells job https://review.opendev.org/c/openstack/kolla-ansible/+/838916 | 17:22 |
opendevreview | Michal Arbet proposed openstack/kolla-ansible master: Use Docker healthchecks for mariadb-server service https://review.opendev.org/c/openstack/kolla-ansible/+/805616 | 17:22 |
opendevreview | Michal Arbet proposed openstack/kolla-ansible master: Use native fluent-logger instead of tail https://review.opendev.org/c/openstack/kolla-ansible/+/755775 | 17:37 |
opendevreview | James Kirsch proposed openstack/kolla-ansible master: Add support for LetsEncrypt-managed certs https://review.opendev.org/c/openstack/kolla-ansible/+/741340 | 18:04 |
opendevreview | Dr. Jens Harbott proposed openstack/kolla-ansible master: Further Keystone-related cleanups https://review.opendev.org/c/openstack/kolla-ansible/+/843748 | 18:31 |
opendevreview | Piotr Parczewski proposed openstack/kolla-ansible master: Add support for configuring Elasticsearch replication https://review.opendev.org/c/openstack/kolla-ansible/+/805954 | 18:44 |
opendevreview | Piotr Parczewski proposed openstack/kolla-ansible master: Add support for configuring Elasticsearch replication https://review.opendev.org/c/openstack/kolla-ansible/+/805954 | 18:50 |