Monday, 2022-09-19

04:37 *** akahat is now known as akahat|ruck
04:49 *** ysandeep|away is now known as ysandeep
07:25 <anskiy> spatel: hello! When you have a minute, could you please check https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/855829? :)
07:39 <noonedeadpunk> anskiy: Um, I tend to not agree here. Well, partially. As for "{{ venv_build_host != inventory_hostname }}", the important part is how venv_build_host is generated. Before Yoga it was set to groups['repo_all'][-1]. So for the first controller venv_wheel_build_enable was rendered as true. For the last it was false, but it didn't matter, as the wheels were already in place
07:39 <noonedeadpunk> and now it's the first host. So the condition is conflicting now
07:40 <noonedeadpunk> I mean this one https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/tasks/main.yml#L68-L69
07:41 <noonedeadpunk> and it has changed as well, with the logic of how we get this https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/vars/main.yml#L44-L57
07:41 <noonedeadpunk> ah, we did not change that, ok, yes, you're right
07:42 <noonedeadpunk> Indeed, I've changed just the condition for when to build wheels
07:43 <noonedeadpunk> hm, so maybe there's an easier way to fix that issue...
07:48 <noonedeadpunk> so basically we could reverse ansible_play_hosts as well to fix that
07:48 <noonedeadpunk> not sure that makes sense though
07:49 <noonedeadpunk> as in the case of serial you do need the first host
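The behavior change described above can be sketched roughly as follows. This is a hedged reconstruction from the conversation, not the python_venv_build role's exact code:

```yaml
# Sketch of the condition under discussion (see the linked tasks/main.yml).
# venv_wheel_build_enable is rendered per host from venv_build_host:
venv_wheel_build_enable: "{{ venv_build_host != inventory_hostname }}"

# Before Yoga the build host was the *last* repo host, so the condition above
# rendered true on the first controller, which therefore built the wheels:
# venv_build_host: "{{ groups['repo_all'][-1] }}"

# Now the build host is the *first* host, so on metal deploys the first
# controller skips the wheel build and later hosts find nothing to sync:
# venv_build_host: "{{ ansible_play_hosts | first }}"
```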
07:55 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_placement master: Install git into placement containers  https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/858258
07:55 <noonedeadpunk> But yes, the line you've mentioned is indeed incorrect. Thanks!
07:59 <opendevreview> Dmitriy Rabotyagov proposed openstack/ansible-role-python_venv_build master: Change default value for venv_wheel_build_enable  https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/857897
08:05 <anskiy> noonedeadpunk: I wasn't able to reproduce this issue without your patch...
08:05 <anskiy> (just to be clear)
08:05 <noonedeadpunk> It's reproducible only for metal deploys (without lxc containers at all)
08:06 <anskiy> yes, I bootstrapped a cluster with two control-plane nodes yesterday
08:07 <anskiy> and it was all is_metal (never tried LXC, honestly :) )
08:07 <noonedeadpunk> ah, so you tried to reproduce with metal
08:07 <noonedeadpunk> huh
08:07 <noonedeadpunk> that is interesting
08:08 <anskiy> like I said, maybe gluster is faster than lsyncd was, and you can see the missing files from jamesdenton's trace on the node where the play looks for them
08:09 <anskiy> `"file not found: /var/www/repo/os-releases/25.1.0.dev68/ubuntu-20.04-x86_64/requirements/utility-25.1.0.dev68-constraints.txt"}` this one, for example
08:19 <opendevreview> Jiri Podivin proposed openstack/openstack-ansible-os_tempest master: DNM: Testing change for CIX  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/858260
08:22 *** ysandeep is now known as ysandeep|lunch
09:00 *** ysandeep|lunch is now known as ysandeep
09:05 <anskiy> setting up neutron AZs with OVN and OSA is a little bit clunky :(
09:10 <noonedeadpunk> well, I'm dealing with an AZ deployment now and will hopefully push some docs soon
09:10 <noonedeadpunk> But in short: use a custom env.d to define AZ groups
09:11 <noonedeadpunk> you can have az1-neutron_server and az2-neutron_server, which will both be children of the neutron_server group
09:11 <anskiy> you bind a node to an AZ via ovs-vsctl :)
09:11 <noonedeadpunk> and then you can set group_vars
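The env.d idea above might look something like the sketch below. This is hypothetical: the group names and the exact env.d schema should be checked against the OSA inventory documentation before use.

```yaml
# Hypothetical /etc/openstack_deploy/env.d/az.yml sketch: per-AZ groups that
# are both children of neutron_server, so group_vars can differ per AZ.
component_skel:
  az1_neutron_server:
    belongs_to:
      - neutron_server
  az2_neutron_server:
    belongs_to:
      - neutron_server
```

With groups like these in place, per-AZ settings could live in e.g. `group_vars/az1_neutron_server.yml`.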
09:11 <noonedeadpunk> but I don't know about OVN, as I'm working with OVS
09:12 <anskiy> with OVS you do this in neutron.conf
09:12 <noonedeadpunk> I bet for OVN we just miss the definition of AZs
09:12 <noonedeadpunk> yup
09:12 <anskiy> with OVN you do it here: https://opendev.org/openstack/openstack-ansible-os_neutron/src/branch/master/tasks/providers/setup_ovs_ovn.yml#L25
09:13 <anskiy> in ovn-cms-options
09:15 <anskiy> with OVS, you define them via custom config overrides, right, so they would land in the appropriate sections of /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini?
09:16 <anskiy> as I don't see anything AZ-related in os_neutron at all...
09:17 <noonedeadpunk> I think it will also affect neutron.conf, but not 100% sure
09:17 <noonedeadpunk> so yes, it's just a configuration setting
09:26 <opendevreview> Danila Balagansky proposed openstack/openstack-ansible-os_neutron master: WIP: neutron OVN plugin AZ support  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/858271
09:27 <anskiy> I think I need something like this ^. Gonna need to test it a bit though
09:32 <noonedeadpunk> anskiy: I used these overrides for OVS: https://paste.openstack.org/show/bYsk27AB9DqT4iVdy2tE/
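The content of that paste isn't preserved, but overrides of this kind might look like the sketch below. The `*_overrides` variable names assume the os_neutron role's usual config-override convention, and `az1` is a placeholder zone name:

```yaml
# Hypothetical user_variables.yml overrides for setting neutron AZs with OVS.
neutron_neutron_conf_overrides:
  DEFAULT:
    default_availability_zones: az1
neutron_l3_agent_ini_overrides:
  agent:
    availability_zone: az1
neutron_dhcp_agent_ini_overrides:
  agent:
    availability_zone: az1
```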
09:32 <noonedeadpunk> but IIRC default_availability_zones is a tricky thing and there might be cases where you want to avoid defining it.
09:33 <noonedeadpunk> unless I'm mixing things up with nova and cinder
09:35 <noonedeadpunk> anskiy: I wonder, maybe for this use case it makes sense to add an availability-zone key to neutron_provider_networks...
09:40 <anskiy> this is a good idea, as right now I think I'd have to add a `neutron_ovn_controller` group to openstack_user_config and add `neutron_availability_zones` to `host_vars` there, or something like that
09:42 <anskiy> noonedeadpunk: I do believe that for nova you always have the "default" AZ `nova`, and for cinder there are these things in variables: https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/defaults/main.yml#L53-L54
09:50 <anskiy> the problem with adding it to neutron_provider_networks is that for OVN (and it looks like for OVS too) the AZ is a chassis/node-level option, not a network one.
10:20 <noonedeadpunk> anskiy: I do recall some trickiness when you want the nova scheduler to pick the AZ on its own, but you use bfv and default_az is set in cinder.
10:21 <noonedeadpunk> `cinder.cross_az_attach is False, default_schedule_zone is None, the server is created without an explicit zone but with pre-existing volume block device mappings. In that case the server will be created in the same zone as the volume(s) if the volume zone is not the same as default_availability_zone.`
10:21 <noonedeadpunk> https://docs.openstack.org/nova/latest/admin/availability-zones.html#implications-for-moving-servers
10:22 <noonedeadpunk> anskiy: ok, yes, then let's add an az variable. but we would need to cover the ovs/lxb options as well in the same patch
10:26 *** anbanerj is now known as frenzyfriday
10:34 *** ysandeep is now known as ysandeep|afk
10:39 <anskiy> noonedeadpunk: AZs are per-service properties, except when they are not :(
10:43 <anskiy> noonedeadpunk: do you want `default_availability_zones` for neutron to be added too, or only the per-host one?
10:44 <noonedeadpunk> if we add a variable for default_availability_zones, I think it should be undefined (or empty) by default.
10:49 <noonedeadpunk> though I don't have any strict opinion on whether to add it or not: it's quite a simple override on the one side, but we do have such variables for nova and cinder on the other
10:49 <anskiy> this would probably break scheduling (creating networks) if, at the same time, `neutron_availability_zones` (which would go into the agent's configuration) were set to a different value.
10:50 <anskiy> except for the case when it is set to an empty value too, but I'm not sure if that would be valid for all the drivers
10:52 <noonedeadpunk> oh yes, and it also requires the scheduler to be changed. So we can actually also add the variable as an undefined one (commented out in defaults for documentation purposes) and configure neutron conditionally based on whether it's defined or not
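The "undefined by default, rendered only when set" pattern described above can be sketched as follows. The variable name and template fragment are illustrative, not the patch that was eventually proposed:

```yaml
# Hypothetical defaults/main.yml: document the variable but leave it unset.
# neutron_default_availability_zones:
#   - az1
#   - az2

# Hypothetical neutron.conf.j2 fragment: only emit the option when defined.
# {% if neutron_default_availability_zones is defined %}
# default_availability_zones = {{ neutron_default_availability_zones | join(',') }}
# {% endif %}
```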
10:57 <anskiy> okay, I've got your point, thank you. We'll see if I can come up with something sane :)
10:57 <anskiy> hopefully this week
11:20 <anskiy> strange thing: the dhcp_agent (https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#agent) and l3_agent (https://docs.openstack.org/neutron/latest/configuration/l3-agent.html#agent) docs say that their respective `availability_zone` defaults to `nova`.
11:21 <anskiy> and `default_availability_zones` is empty, but the docs say that if the hints and the default zone are empty, AZs are still considered: https://docs.openstack.org/neutron/latest/configuration/neutron.html#DEFAULT.default_availability_zones
11:21 <anskiy> noonedeadpunk: looks like with an OVS setup you should see AZs in `openstack availability zone list --network`, even if nothing is configured yet
11:35 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Include install_method variables for openrc  https://review.opendev.org/c/openstack/openstack-ansible/+/858303
11:35 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Bump ansible-core version to 2.13.4  https://review.opendev.org/c/openstack/openstack-ansible/+/857506
12:03 *** ysandeep|afk is now known as ysandeep
12:04 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/yoga: Fix dynamic-address-fact gathering with tags  https://review.opendev.org/c/openstack/openstack-ansible/+/858329
12:05 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/wallaby: Fix dynamic-address-fact gathering with tags  https://review.opendev.org/c/openstack/openstack-ansible/+/858330
12:05 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/xena: Fix dynamic-address-fact gathering with tags  https://review.opendev.org/c/openstack/openstack-ansible/+/858331
12:31 <opendevreview> Merged openstack/openstack-ansible master: Imported Translations from Zanata  https://review.opendev.org/c/openstack/openstack-ansible/+/856910
12:39 <jamesdenton> hello all
12:39 <damiandabrowski> hi!
12:39 <jamesdenton> *high five*
13:08 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_nova master: Add new line after proxyclient_address  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/858375
13:26 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_keystone master: Revert "Check the service status during bootstrap against the internal VIP"  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/858335
13:29 *** prometheanfire is now known as Guest931
13:32 <anskiy> jamesdenton: hey! Do you have the repo on gluster when seeing this error: https://bugs.launchpad.net/openstack-ansible/+bug/1989506?
13:33 <jamesdenton> IIRC the gluster repo is configured, but wheels are skipped on the first host, so there's nothing to sync
13:34 <jamesdenton> I believe I tested noonedeadpunk's patch successfully, but need to test to make sure it behaves on metal, lxc, single/multi, etc
13:35 <jamesdenton> looking into some keepalived issues at the moment, stuff be broken there, too
13:36 <anskiy> ahh, I see: I have `venv_wheel_build_enable: true` in my user_variables :) That's why I wasn't able to reproduce that thing...
13:37 <jamesdenton> that'll do it
13:37 -opendevstatus- NOTICE: As of the weekend, Zuul only supports queue declarations at the project level; if expected jobs aren't running, see this announcement: https://lists.opendev.org/pipermail/service-announce/2022-September/000044.html
13:38 <anskiy> that means I have to retest this patch with a two-node metal control plane
13:38 <noonedeadpunk> yeah, venv_wheel_build_enable: true is a simple thing to make this work
13:38 <noonedeadpunk> maybe we should indeed just hardcode it...
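The workaround anskiy mentions is a one-line deployer override; per the conversation it forces wheel builds regardless of the rendered per-host condition:

```yaml
# /etc/openstack_deploy/user_variables.yml
venv_wheel_build_enable: true
```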
13:41 <anskiy> in W, when set to true, it would successfully run on the first node and fail on the second one, at least that's what my comment says
13:46 <anskiy> spatel: hello!
13:57 <spatel> anskiy hey
14:08 <anskiy> spatel: could you please check https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/855829 if you have some time?
14:09 <spatel> sure!
14:09 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_keystone master: Bootstrap when running against last backend  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/858385
14:10 <jamesdenton> ^^ it looks like I happened to create a similar bug: https://bugs.launchpad.net/openstack-ansible/+bug/1990008
14:10 <noonedeadpunk> jamesdenton: ^ this should fix the issue you saw when bootstrapping keystone
14:10 <jamesdenton> good deal, thanks
14:10 <noonedeadpunk> aha
14:11 <noonedeadpunk> Ok, I missed it
14:11 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_keystone master: Bootstrap when running against last backend  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/858385
14:12 <jamesdenton> an unrelated question... have you seen any differences between using with_dict vs a loop w/ dict2items?
14:12 <noonedeadpunk> I've hardly used loop | dict2items, so I can't say
14:13 <noonedeadpunk> eventually I find it more and more handy to use a list of mappings rather than just a dict. Or a simple list
14:15 <noonedeadpunk> what I'm not sure about with this patch is whether it will work nicely with an IDP setup
14:15 <noonedeadpunk> but andrewbonney seems not to be around :(
14:16 <jamesdenton> I guess the BBC is pretty busy right now
14:16 <noonedeadpunk> oh, well
14:16 <noonedeadpunk> true
14:17 <jamesdenton> So, this changed from with_dict to loop recently, and the conditional is no longer effective: https://github.com/evrardjp/ansible-keepalived/blob/master/tasks/main.yml#L164-L165
14:17 <jamesdenton> thus, our keepalived implementation is broken at the moment
14:18 <jamesdenton> and this was also implemented, https://github.com/evrardjp/ansible-keepalived/blob/master/tasks/main.yml#L133, which causes a failure since the tracking scripts haven't been dropped yet
14:18 <jamesdenton> I'm shuffling stuff around to get it working, but that loop w/ dict2items seems to behave differently
14:18 <spatel> anskiy agreed, we just need openvswitch but not started/enabled.
14:20 <noonedeadpunk> jamesdenton: huh, interesting
14:20 <jamesdenton> there is a typo here, too: https://github.com/evrardjp/ansible-keepalived/blob/master/tasks/main.yml#L164. that should be keepalived_scripts
14:21 <jamesdenton> but the main issue is that "item.value.src_check_script is defined" evaluates to false with the loop vs with_dict
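For reference, the two loop styles being compared look like the sketch below (not the keepalived role's actual tasks). Both forms expose `item.key` and `item.value` per entry, since `dict2items` turns a dict into a list of `{key, value}` mappings, so a conditional behaving differently usually points at the dict variable being looped over (e.g. the `keepalived.scripts` vs `keepalived_scripts` typo mentioned above) rather than at `dict2items` itself:

```yaml
# Sketch: iterating a dict of script definitions two ways.
- name: with_dict form
  ansible.builtin.debug:
    msg: "{{ item.key }}: {{ item.value.src_check_script | default('unset') }}"
  with_dict: "{{ keepalived_scripts | default({}) }}"

- name: loop + dict2items form (should yield the same item.key / item.value)
  ansible.builtin.debug:
    msg: "{{ item.key }}: {{ item.value.src_check_script | default('unset') }}"
  loop: "{{ keepalived_scripts | default({}) | dict2items }}"
```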
14:21 <anskiy> spatel: should you leave some kind of LGTM comment, if that's appropriate?.. I'm not sure if I'm breaking the review process with this move
14:21 <spatel> I did, anskiy
14:22 <anskiy> thank you!
14:23 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Do not start/enable Open vSwitch on ovn-northd nodes  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/855829
14:24 <noonedeadpunk> rebased just to trigger a recheck for the gates
14:24 <noonedeadpunk> as it seems the rdo repo for centos9 was in a weird state during the previous one
14:32 <noonedeadpunk> I wonder though... Doesn't ovs get started/enabled on its own upon installation on Ubuntu/Debian?
14:33 <noonedeadpunk> As I bet when ovs is being upgraded, it restarts on its own with a postinstall hook or smth like that
14:33 <noonedeadpunk> anskiy: jamesdenton ^
14:34 <noonedeadpunk> and spatel :)
14:34 <spatel> noonedeadpunk it does restart itself on Ubuntu during an upgrade
14:34 <jamesdenton> yes, IIRC openvswitch would be started automatically during service install. In this case, it seems openvswitch-switch is no longer a dependency of ovn-northd, so it's really not needed on those nodes, best I can tell
14:35 <spatel> jamesdenton still it requires the ovs binary on ovn-northd
14:36 <spatel> because ovn needs the ovs db
14:36 <spatel> but you can keep the service down
14:38 <jamesdenton> not according to this: https://packages.ubuntu.com/focal/ovn-central
14:39 <jamesdenton> vs https://packages.ubuntu.com/bionic/ovn-central
14:39 <jamesdenton> I went a few tiers deep and didn't see openvswitch-switch. maybe common, but not switch
14:39 <jamesdenton> but I did test that patch on an MNAIO and found that while openvswitch-switch was not deployed on the controllers, OVN functioned as expected
14:40 <jamesdenton> northd was there
14:41 <noonedeadpunk> ah, ok, true. It's spatel's comment that confused me then
14:41 <noonedeadpunk> with `agreed we just need openvswitch but not started/enable.`
14:42 <spatel> Yes, ovn-central has that dependency to install ovs components for the DB function only.
14:43 <anskiy> ovsdb-tool, which is needed on control-plane nodes, is part of the openvswitch-common package
14:44 <anskiy> on which ovn-central depends
14:44 <spatel> I thought when you install openvswitch-common it would auto-install the other ovs components too, like openvswitch-switch etc.
14:47 <anskiy> as jamesdenton said some time ago, it was like this on 18.04, but now it's: Depends: openssl, python3-six, python3:any, libc6 (>= 2.29), libcap-ng0 (>= 0.7.9), libssl1.1 (>= 1.1.0), libunbound8 (>= 1.8.0)
14:47 <jamesdenton> seems like switch relies on common, but not the other way around.
14:48 <anskiy> and I don't see `openvswitch-switch` being installed with this patch on a freshly bootstrapped test environment
14:49 <spatel> great!
14:49 <spatel> we only need openvswitch-common, which contains the ovs db tooling: https://packages.ubuntu.com/focal/amd64/openvswitch-common/filelist
14:58 *** ysandeep is now known as ysandeep|dinner
15:01 <noonedeadpunk> yeah, that makes way more sense to me now)
15:44 <jamesdenton> noonedeadpunk I'll ping JP next time I see him, but here's what I've put together so far: https://github.com/evrardjp/ansible-keepalived/pull/240
15:48 <noonedeadpunk> oh, I bet I told him one day that checking with `in` is better
15:49 <jamesdenton> :D
15:50 <noonedeadpunk> jamesdenton: there are more of the same issues then
15:50 <noonedeadpunk> or not
15:50 <jamesdenton> in that playbook? probably.
15:50 <noonedeadpunk> L177
15:51 <noonedeadpunk> eventually also regarding the notify script
15:52 <jamesdenton> oh yes, definitely. I wasn't using keepalived_sync_groups, so I didn't want to jump the gun
15:52 <jamesdenton> I can fix them all
15:53 <noonedeadpunk> eventually there's quite a list
15:54 <jamesdenton> thx
15:54 <noonedeadpunk> makes sense to fix them all at once
16:02 *** ysandeep|dinner is now known as ysandeep
16:03 *** ysandeep is now known as ysandeep|out
16:13 <jamesdenton> got 'em, I think. off to lunch with the kiddo
21:41 *** admin16 is now known as admin1

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!