Monday, 2023-10-16

06:54 *** gthiemon1e is now known as gthiemonge
08:31 <opendevreview> Merged openstack/octavia stable/2023.2: Fix timeout duration in start_vrrp_service during failovers  https://review.opendev.org/c/openstack/octavia/+/897783
08:31 <opendevreview> Merged openstack/octavia stable/2023.2: Reduce duration of failovers with amphora in ERROR  https://review.opendev.org/c/openstack/octavia/+/897784
08:31 <opendevreview> Merged openstack/octavia stable/2023.1: Fix timeout duration in start_vrrp_service during failovers  https://review.opendev.org/c/openstack/octavia/+/897789
08:41 <opendevreview> Merged openstack/octavia stable/2023.1: Reduce duration of failovers with amphora in ERROR  https://review.opendev.org/c/openstack/octavia/+/897790
08:42 <danfai> Hi, we are running octavia zed now. At the beginning we were running taskflow with the in-memory backend, but we switched to zookeeper with mysql for persistence. With both we experienced some issues, probably mostly related to our setup/local patches. Given those issues, we were wondering: what is the recommended way of operating octavia in terms of taskflow?
09:06 <gthiemonge> danfai: hi, so far only the redis backend is used in the CI, we've planned to add zookeeper (https://review.opendev.org/c/openstack/octavia/+/862671), we haven't noticed specific issues when jobboard is enabled.
09:07 <gthiemonge> that said, I've fixed a few bugs in taskflow with jobboard (especially when errors occur), they are still under review https://review.opendev.org/q/project:openstack/taskflow+is:open
09:08 <gthiemonge> danfai: what kind of issues do you have?
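
[Context: the jobboard discussed above is configured in the [task_flow] section of octavia.conf. A minimal sketch of a Redis-backed setup like the one used in CI, assuming Zed-era option names; the hosts and credentials below are illustrative placeholders:]

    [task_flow]
    jobboard_enabled = True
    # Running flows are tracked in Redis so that another controller
    # can resume a job if the worker that owned it dies.
    jobboard_backend_driver = redis_taskflow_driver
    jobboard_backend_hosts = 192.0.2.10
    jobboard_backend_port = 6379
    jobboard_backend_password = <redis-password>
    jobboard_backend_namespace = octavia_jobboard
    # Task/flow state is persisted to a database (danfai's "mysql for
    # persistence"); this is separate from the main octavia database.
    persistence_connection = mysql+pymysql://octavia:<db-password>@192.0.2.11:3306/octavia_persistence
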
09:16 <opendevreview> Merged openstack/octavia stable/zed: Fix timeout duration in start_vrrp_service during failovers  https://review.opendev.org/c/openstack/octavia/+/897786
09:16 <opendevreview> Merged openstack/octavia stable/zed: Reduce duration of failovers with amphora in ERROR  https://review.opendev.org/c/openstack/octavia/+/897787
09:16 <opendevreview> Merged openstack/octavia stable/yoga: Fix timeout duration in start_vrrp_service during failovers  https://review.opendev.org/c/openstack/octavia/+/898101
09:16 <opendevreview> Merged openstack/octavia stable/yoga: Reduce duration of failovers with amphora in ERROR  https://review.opendev.org/c/openstack/octavia/+/898102
09:16 <opendevreview> Merged openstack/octavia stable/xena: Fix timeout duration in start_vrrp_service during failovers  https://review.opendev.org/c/openstack/octavia/+/898104
09:16 <opendevreview> Merged openstack/octavia stable/xena: Reduce duration of failovers with amphora in ERROR  https://review.opendev.org/c/openstack/octavia/+/898105
09:16 <opendevreview> Merged openstack/octavia stable/wallaby: Fix timeout duration in start_vrrp_service during failovers  https://review.opendev.org/c/openstack/octavia/+/898112
09:16 <opendevreview> Merged openstack/octavia stable/wallaby: Reduce duration of failovers with amphora in ERROR  https://review.opendev.org/c/openstack/octavia/+/898113
09:16 <opendevreview> Merged openstack/octavia stable/2023.2: Fix amphorae in ERROR during the failover  https://review.opendev.org/c/openstack/octavia/+/897785
09:17 <opendevreview> Merged openstack/octavia stable/2023.1: Fix amphorae in ERROR during the failover  https://review.opendev.org/c/openstack/octavia/+/897791
09:17 <opendevreview> Merged openstack/octavia stable/zed: Fix amphorae in ERROR during the failover  https://review.opendev.org/c/openstack/octavia/+/897788
09:17 <opendevreview> Merged openstack/octavia stable/yoga: Fix amphorae in ERROR during the failover  https://review.opendev.org/c/openstack/octavia/+/898103
09:22 <opendevreview> Merged openstack/octavia stable/xena: Fix amphorae in ERROR during the failover  https://review.opendev.org/c/openstack/octavia/+/898106
09:22 <opendevreview> Merged openstack/octavia stable/wallaby: Fix amphorae in ERROR during the failover  https://review.opendev.org/c/openstack/octavia/+/898114
09:26 <danfai> gthiemonge: for the zookeeper one, we had issues with the connections, which was probably a configuration issue on our side, and afterwards we had a job that was run twice, but that might also be an error on our side, since we migrated LBs from an old system to octavia and had custom steps for that. Sometimes we had exceptions triggering in the worker that left some weird state,
09:26 <danfai> that I still try to understand. For the memory backend, I think we would have been fine if we had increased the time a worker has to properly stop.
09:28 <danfai> gthiemonge: all in all these are only small (likely home-made) issues, but if we knew which mode is the 'reference' one, we would probably try to explore it at least a bit.
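
[Context: on the "job that was run twice" symptom: with taskflow's ZooKeeper jobboard, a claim is held by an ephemeral lock, so if a worker's ZooKeeper session expires mid-flow the job becomes claimable again and another worker may re-run it. A minimal consumer sketch of the jobboard API; this is not Octavia code, and the connection details and worker name are illustrative:]

    import contextlib

    from taskflow import exceptions as exc
    from taskflow.jobs import backends as job_backends

    conf = {
        'board': 'zookeeper',
        'hosts': ['192.0.2.12:2181'],  # illustrative ensemble member
        'path': '/taskflow/octavia_jobboard',
    }

    with contextlib.closing(job_backends.fetch('octavia_jobboard', conf)) as board:
        board.connect()
        for job in board.iterjobs(only_unclaimed=True):
            try:
                # The claim is an ephemeral znode: it disappears if this
                # worker's ZooKeeper session expires, letting another
                # worker re-claim (and re-run) the same job.
                board.claim(job, who='worker-1')
            except exc.UnclaimableJob:
                continue  # another worker claimed it first
            try:
                pass  # run the flow attached to the job here
            except Exception:
                board.abandon(job, who='worker-1')  # release for retry
            else:
                board.consume(job, who='worker-1')  # done, remove from board
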
09:43 <opendevreview> Gregory Thiemonge proposed openstack/octavia master: Add zookeeper backend for jobboard in devstack  https://review.opendev.org/c/openstack/octavia/+/862671
09:44 <gthiemonge> maybe we need to get feedback from the other operators, I know that OVH uses jobboard, but I don't know which backend
09:44 <gthiemonge> ^^ this commit enables zookeeper in our CI
09:45 <gthiemonge> but yeah, we don't test corner cases like outages, upgrades, etc
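
[Context: with the patch above, the same [task_flow] options point at ZooKeeper instead of Redis. A sketch, assuming the zookeeper_taskflow_driver driver name; hosts are illustrative:]

    [task_flow]
    jobboard_enabled = True
    jobboard_backend_driver = zookeeper_taskflow_driver
    # Comma-separated ensemble members; 2181 is ZooKeeper's default client port
    jobboard_backend_hosts = 192.0.2.21,192.0.2.22,192.0.2.23
    jobboard_backend_port = 2181
    jobboard_backend_namespace = octavia_jobboard
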
10:05 <danfai> thank you, I'll also try to understand our problems more in depth
13:01 <pyjou> Hello everyone. I'm still waiting for a review on the resize topic. I have the RFE https://review.opendev.org/c/openstack/octavia/+/885490 and the implementation https://review.opendev.org/c/openstack/octavia/+/890215. Can you review them?
13:27 <gthiemonge> pyjou: Hi, I will review them
