Sunday, 2019-03-31

00:09 *** luksky has quit IRC
00:24 *** DanyC has quit IRC
00:37 <kplant> 23/quit
00:37 *** kplant has quit IRC
00:57 <jamesdenton> djhankb: You can use this as a reference: https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-openvswitch.html#openstack-ansible-user-variables
00:58 <jamesdenton> The scenario here is for OVS, but it would apply to LXB as well. You would basically define 'neutron_provider_networks' with your desired bridge mappings and interface
00:58 <djhankb> jamesdenton: Thank you, I was able to find that document and get it partially configured.
00:59 <jamesdenton> Is there something in particular you're hoping to achieve?
00:59 <djhankb> In my lab deployment here, I was able to get Neutron running well by defining both physical_interface_mappings and bridge_mappings under [linux_bridge].
01:00 <djhankb> I used the aforementioned documentation and am now getting Ansible to configure physical_interface_mappings. The "bridge_mappings" part may or may not be needed; I am still learning how all of this stuff works!
01:05 <djhankb> I had recently added my own SSL certificates to HAProxy as well as RabbitMQ, and upon re-running the playbooks I broke my neutron L3 agent. After adding "physical_interface_mappings" via neutron_provider_networks in user_variables.yml I was still having issues, until I restarted RabbitMQ (stop_app / start_app), and now I think everything is working...
01:06 <jamesdenton> I believe bridge_mappings is meant for OVS, while physical_interface_mappings is for LXB (as LXB will create a bridge for each network and tag off the mapped interface). Whereas with OVS, you're mapping the provider name to an OVS bridge, like br-ex or br-provider. Not sure I trust the config file example TBH
01:11 <djhankb> Right, right. Either way I was ultimately trying to find out how to add "bridge_mappings" via user_variables.yml, and I *think* I can interpret this documentation as yml:network_mappings == [linux_bridge]:physical_interface_mappings. I can only assume that yml:network_interface_mappings == [linux_bridge]:bridge_mappings...
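[Editor's note: a sketch of the user_variables.yml override being discussed, mirroring the keys from the OVS appendix linked above. The physnet label, VLAN range, and device names are illustrative placeholders, and how each key maps onto LXB's physical_interface_mappings versus OVS's bridge_mappings is exactly what is being debated here.]

```yaml
# user_variables.yml -- sketch only; labels, ranges, and device names
# are placeholders, not values taken from this conversation.
neutron_provider_networks:
  network_types: "vlan"
  network_vlan_ranges: "physnet1:100:200"
  # Maps the provider label to a bridge (for OVS) or, per the
  # interpretation above, feeds the LXB interface mappings.
  network_mappings: "physnet1:br-provider"
  network_interface_mappings: "br-provider:eth1"
```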
01:12 <djhankb> I'm just trying to do *everything* using Ansible as I build out this lab, so that I can keep my config in Git and reproduce this once I am comfortable enough to build OpenStack in production
01:29 <jamesdenton> Well, if you define the 'provider' network in openstack_user_config.yml, then most everything should be handled for you. No need to specify overrides at all
01:30 <jamesdenton> So for example: https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.example#L278-L286
01:31 <jamesdenton> Actually, let me enhance that. One sec
01:32 <jamesdenton> https://pasted.tech/pastes/655df585faaf48610db58372cbfb9fc1f8ed365d
01:32 <jamesdenton> What you'll end up with is physical_interface_mappings = physnet1:eth1
01:33 <jamesdenton> which means any time you create a neutron provider network using the 'physnet1' label, the agent will use eth1.X in the respective bridges it creates.
01:33 <jamesdenton> For this conversation, you can ignore container_interface. Leave it as-is
01:34 <jamesdenton> And for the 'range', you would want to use the VLANs you've allocated to tenant networks (if any)
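[Editor's note: the paste link above is ephemeral, so here is roughly what such a provider-network stanza looks like in openstack_user_config.yml, following the example file linked earlier. Interface names and the VLAN range are illustrative, not the actual pasted values.]

```yaml
# openstack_user_config.yml -- provider network sketch (illustrative values)
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"   # ignore for this conversation; leave as-is
        host_bind_override: "eth1"     # host interface the agent actually uses
        type: "vlan"
        range: "101:200"               # VLANs allocated to tenant networks
        net_name: "physnet1"
        group_binds:
          - neutron_linuxbridge_agent
```

With a stanza like this, the agent config ends up with physical_interface_mappings = physnet1:eth1, as described above.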
01:42 <djhankb> That is a good point. I do have provider networks defined, and this one I remember adding later, and manually...
01:44 <djhankb> It looks like the piece I was missing was the "host_bind_override" you pointed out
01:45 <djhankb> In mine I had the same value for both container_interface and host_bind_override... so that is likely my problem, which I was trying to fix the wrong way.
03:56 *** djhankb has quit IRC
04:39 *** ansmith_ has joined #openstack-ansible
04:40 *** ansmith has quit IRC
07:15 *** eglute has quit IRC
07:16 *** hamzaachi has joined #openstack-ansible
07:45 <openstackgerrit> Dmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-lxc_hosts stable/rocky: Fix image fetching process for leap 15  https://review.openstack.org/648854
08:15 *** DanyC has joined #openstack-ansible
08:29 *** DanyC has quit IRC
08:51 *** cshen has joined #openstack-ansible
09:05 *** hamzaachi_ has joined #openstack-ansible
09:08 *** cshen has quit IRC
09:08 *** hamzaachi has quit IRC
09:10 *** hamzaachi__ has joined #openstack-ansible
09:13 *** hamzaachi_ has quit IRC
09:17 *** hamzaachi__ has quit IRC
09:17 *** hamzaachi has joined #openstack-ansible
09:19 *** hamzaachi_ has joined #openstack-ansible
09:22 *** hamzaachi has quit IRC
09:45 <openstackgerrit> Merged openstack/openstack-ansible-os_horizon master: Stop installing extra packages for Horizon  https://review.openstack.org/648842
11:04 *** cshen has joined #openstack-ansible
11:08 *** luksky has joined #openstack-ansible
11:08 *** cshen has quit IRC
11:36 *** ianychoi has quit IRC
13:04 *** cshen has joined #openstack-ansible
13:08 *** cshen has quit IRC
15:01 *** shyamb has joined #openstack-ansible
15:01 *** cshen has joined #openstack-ansible
15:13 *** shyamb has quit IRC
15:33 *** cshen has quit IRC
16:25 <openstackgerrit> Merged openstack/ansible-role-python_venv_build master: Revert "speedup: move when block to outside include"  https://review.openstack.org/648840
16:34 <mnaser> Hmm, looks like I just caught a bug in python_venv_build
16:35 <mnaser> odyssey4me: it seems it's possible that the container python_venv_build uses to build its wheels *is not* the same as the one running lsyncd, which results in a 404. I have no idea how we end up in this state, because it seems to happen non-deterministically, and I also haven't had much luck understanding the selection logic
16:35 <mnaser> and how a wheel builder host is picked
16:36 <mnaser> I guess we don't see this in CI because there is only one repo_server container
16:56 <mnaser> ceph_client is oddly broken on CentOS. I'm checking... https://www.irccloud.com/pastebin/lg5P3YRn/
16:56 <mnaser> that's the "ceph_client : Add ceph repo" task
16:57 <mnaser> aw crap
16:57 <mnaser> I think it's the `ceph_gpg_keys` reference
16:57 <mnaser> yup, it is
17:00 *** hamzaachi_ has quit IRC
17:01 <openstackgerrit> Mohammed Naser proposed openstack/openstack-ansible-ceph_client master: Remove unnecessary GPG keys setting for YUM repos.  https://review.openstack.org/648877
17:02 <mnaser> cores: please have a look at https://review.openstack.org/#/q/topic:stein-deploy -- I'll continue to push things that I see are upgrade-breaking / regressions
17:03 *** hamzaachi_ has joined #openstack-ansible
17:30 *** cshen has joined #openstack-ansible
17:34 *** cshen has quit IRC
17:42 *** hamzaachi_ has quit IRC
17:45 *** cshen has joined #openstack-ansible
17:49 *** cshen has quit IRC
17:50 <openstackgerrit> Mohammed Naser proposed openstack/openstack-ansible master: glance: stop using healthcheck endpoint  https://review.openstack.org/648878
17:59 *** DanyC has joined #openstack-ansible
18:17 *** DanyC has quit IRC
19:32 *** pcaruana has joined #openstack-ansible
19:34 *** tosky has joined #openstack-ansible
19:40 *** pcaruana has quit IRC
19:45 *** cshen has joined #openstack-ansible
19:49 *** cshen has quit IRC
20:21 *** ArchiFleKs has joined #openstack-ansible
21:11 *** ivve has joined #openstack-ansible
21:16 *** DanyC has joined #openstack-ansible
21:45 *** cshen has joined #openstack-ansible
21:49 *** cshen has quit IRC
22:06 *** tosky has quit IRC
22:24 *** DanyC has quit IRC
22:28 *** ianychoi has joined #openstack-ansible
22:34 *** luksky has quit IRC
22:47 <openstackgerrit> Merged openstack/openstack-ansible master: glance: stop using healthcheck endpoint  https://review.openstack.org/648878
22:54 *** ivve has quit IRC
22:57 *** nurdie has joined #openstack-ansible
22:58 <nurdie> I don't have a dedicated network host in my OSA environment. In Pike, neutron agents were running in containers on 3 controllers. After the Queens upgrade, they are gone and I have no neutron agents
22:58 <nurdie> What did I miss? o_0
23:00 <nurdie> That's a lie. I have neutron agents. I have no neutron-dhcp-agents, though
23:12 <nurdie> How do I get OSA to install DHCP agents?
23:38 <guilhermesp> nurdie: since Queens, neutron agents run on bare metal.
23:39 <nurdie> guilhermesp: Yep, I get that, except mine isn't installing them lol
23:39 <nurdie> I have no L3 agents, no DHCP agents, etc. The only neutron agents I have are the linuxbridge-agents running on compute nodes
23:43 <guilhermesp> Hum, no neutron process at all on the controller nodes? How did you run the upgrade?
23:45 <nurdie> run-upgrade.sh for half. We encountered a Galera bug, fixed it, and proceeded using https://docs.openstack.org/openstack-ansible/queens/admin/upgrades/major-upgrades.html
23:45 <nurdie> guilhermesp: When I re-run setup-hosts --limit "neutron_agent", it still doesn't install them
23:46 <nurdie> I could install the venvs and systemd unit files manually, but it would be cleaner to figure out why OSA isn't seeing that it should
23:46 <nurdie> And faster lol
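[Editor's note: one hedged observation, not confirmed in channel: setup-hosts.yml only prepares hosts and containers and does not install services, so re-installing the agents would mean re-running the neutron service playbook itself. A sketch, assuming a standard OSA deployment layout:]

```shell
# Sketch: re-run the neutron role against all neutron hosts/containers.
# Path and limit group assume a standard openstack-ansible checkout.
cd /opt/openstack-ansible/playbooks
openstack-ansible os-neutron-install.yml --limit neutron_all
```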
23:53 <guilhermesp> All right, so you ran the upgrade using the run-upgrade.sh script. It failed, and then you re-ran it. Did it finish successfully?

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!