Monday, 2022-03-28

-opendevstatus- NOTICE: zuul isn't executing check jobs at the moment, investigation is ongoing, please be patient07:17
gokhanihi folks, I need your recommendations about networking. I have only one IP address block to connect to the internet, so I will use it for both the host and the virtual machines. How can I configure my netplan file for a neutron provider network? When I map it to br-ext, I lose my connection. Can I run it with veth pairs or another way? my netplan file is https://paste.openstack.org/show/bHOqFt09:13
gokhanihttps://paste.openstack.org/show/bHOqFtXDTbSv5hUpVngi/09:14
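(For context, the netplan layout in question is roughly the shape sketched below. The device names and addresses are illustrative placeholders, not taken from gokhani's paste, and as the rest of the discussion shows a plain bridge like this is not enough on its own, since neutron also needs an interface of its own to manage.)

    network:
      version: 2
      ethernets:
        eno1: {}
        eno2: {}
      bonds:
        bond0:
          interfaces: [eno1, eno2]
          parameters: {mode: 802.3ad}
      vlans:
        bond0.1:
          id: 1
          link: bond0
      bridges:
        br-ext:
          interfaces: [bond0.1]
          addresses: [10.160.143.10/24]   # placeholder host address
          gateway4: 10.160.143.1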
noonedeadpunkmornings09:28
noonedeadpunkgokhani: do you own a network equipment or renting one?09:28
noonedeadpunkas the network block could be split into several smaller ones and served as different vlans09:29
gokhaninoonedeadpunk: mornings :) We have only a single uplink to the ISP, and a single public IP address reserved for us. And we are trying to manage network traffic through a supermicro switch, which is not capable of inter-vlan routing. 09:33
gokhaninoonedeadpunk: do you think this architecture can work ? https://imgur.com/a/xN49lnL09:42
noonedeadpunktbh I'm not sure it will.... so if you want to have vms that connect directly to the public net, you need to have an interface that neutron can manage. So it can't be the same interface as you use to access the compute nodes, for example09:50
noonedeadpunkif you say that vms will have only a private network and reach the internet through the net nodes and floating IPs - then you will likely have the same issue on the net node09:50
anskiyi think it should be possible with OVS. That is, I have a current non-openstack configuration like this: a public IP on the compute node from the same subnet as the VMs10:00
noonedeadpunkfor some reason I thought that ovs is kind of the same, except the interface is managed by ovs?10:02
anskiyif I understand the configuration correctly, it actually manages br-int (not the bridge from openstack_user_config), which connects to it via patch10:04
gokhanianskiy: I am using lxb10:05
gokhaninoonedeadpunk: maybe I need to break the bond, and use one interface for the host and another for neutron with the same IP address block 10:09
anskiygokhani: sorry, I don't have experience with that. But I've just checked what I've been talking about: I have a VLAN on the compute node (which is its default interface, i.e. the interface with the default route) and an openstack network made with the same VLAN. Now, when I attach a VM to this network it has the same connectivity as the compute node.10:15
gokhanianskiy: I can explain my config like this: I have a bridge, br-ext, connected to vlan1. br-ext has the default route (default via 10.160.143.1 dev br-ext proto static). There is also an IP for the haproxy VIP on br-ext. The problem is that when I map br-ext to neutron, neutron creates a brqxxx bridge on the infra nodes, the IPs on br-ext move to the brqxxx bridge, and so I lose my connection to the hosts. 10:28
gokhaniit seems that the interface used for the host connection and the interface neutron manages cannot be the same10:31
gokhaniI am also using infra nodes as network nodes.  10:32
noonedeadpunkanskiy: yes, with VLAN it's easy peasy :D10:59
jrossergokhani: maybe also look at how the openstack-ansible AIO creates a new interface (eth12) off the host interface to give to neutron11:15
jrossersomething like this https://github.com/openstack/openstack-ansible/blob/master/etc/network/interfaces.d/aio_interfaces.cfg#L53-L5611:16
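(The idea in that AIO snippet, adapted here to the br-ext naming used in this conversation, is to hang a veth pair off the host bridge and hand the other end to neutron. This is a sketch of the approach in Debian/Ubuntu ifupdown style, not a copy of the linked file; the addresses are placeholders.)

    auto br-ext
    iface br-ext inet static
        address 10.160.143.10
        netmask 255.255.255.0
        gateway 10.160.143.1
        # create a veth pair before the bridge comes up; eth12 is the end
        # that gets handed to neutron as the provider interface
        pre-up ip link add br-ext-veth type veth peer name eth12 || true
        pre-up ip link set br-ext-veth up
        pre-up ip link set eth12 up
        post-down ip link del br-ext-veth || true
        bridge_ports bond0.1 br-ext-veth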
anskiygokhani: so it looks like you need to create a vlan interface on brqxxx with all that IP configuration (which seems dirty), or maybe tell neutron to use br-ext as a pre-existing bridge (if that's possible)11:17
noonedeadpunkand create smth like mac-vlan?11:18
noonedeadpunkand put it to fake bridge?11:18
anskiynoonedeadpunk: it should be the same with untagged11:18
noonedeadpunkyep, that can work actually11:18
anskiynoonedeadpunk: mac-vlan to bridge two bridges? :)11:19
noonedeadpunkok, veth pair then)11:19
anskiyso it's just a replica of what OVN/OVS does with patch port11:31
gokhanijrosser: thanks, I think it can help, I am trying it11:31
masterpe[m]When I run ceph-install.yml I somehow get the following error:... (full message at https://matrix.org/_matrix/media/r0/download/matrix.org/QArJNrpqZlLlZbJuVjZlISvm)11:35
noonedeadpunktbh I wouldn't trust putting the only interface you can reach the server on into ovs...11:35
masterpe[m]Any idea what the problem is?11:35
noonedeadpunkmasterpe[m]: do you run it with some tags?11:35
noonedeadpunkand is it xena?11:35
anskiynoonedeadpunk: why?11:36
masterpe[m]noonedeadpunk: No I don't use any tags, we are using OSA 21.2.611:36
masterpe[m]This is Ussuri.11:37
noonedeadpunkanskiy: it's like putting all your eggs in the same basket. ovs depends on glibc heavily and was quite buggy because of that on bionic, for example.11:39
noonedeadpunka service restart causes network downtime as well11:39
noonedeadpunkand when you don't have another way to reach the server I'd call it a poor design choice... I might be opinionated about it though11:39
anskiynoonedeadpunk: it's super stable on CentOS 7, we've been using this exact configuration for quite a long time. IPMI? :)11:41
noonedeadpunkthere's a switch that doesn't even support vlans, and you think there's IPMI? :p11:42
noonedeadpunkAnd is CentOS 7 really supported? It's under EM I believe?11:42
anskiynoonedeadpunk: the absence of inter-vlan routing just states that this particular switch is, indeed, a switch, not a router :)11:45
anskiynoonedeadpunk: yeah, part of the reason I'm deploying openstack to focal now11:46
noonedeadpunkmhm, like DES-105?11:46
noonedeadpunk:D11:47
noonedeadpunkbtw you could also use Rocky if want to - it's ready for master at least11:47
noonedeadpunkmasterpe[m]: well, that's interesting. then you should already have the patch that includes a potential fix for that11:48
masterpe[m]do you know what patch?11:49
noonedeadpunkmasterpe[m]: so basically that should have worked https://review.opendev.org/c/openstack/openstack-ansible/+/737804/1/playbooks/ceph-install.yml11:49
noonedeadpunkbut it's still for running with tag11:50
noonedeadpunkmasterpe[m]: unless you have `osa_gather_facts: false` somewhere11:51
*** dviroel|pto is now known as dviroel11:52
jrossermasterpe[m]: can you maybe paste the whole output of the playbook to the point it fails?11:53
anskiynoonedeadpunk: that's "dumb switch" :) There is a spot, just between this one and some router. Inability to create random L3 interfaces doesn't mean it lacks management interface.11:53
noonedeadpunkyou don't need to "route" an l3 interface to understand tagged traffic and re-tag it accordingly11:55
anskiynoonedeadpunk: nah, there is no requirement to stay on CentOS 7; besides, operating-system-specific configuration is such a small part of moving an existing system to openstack...11:55
noonedeadpunkbut well11:55
noonedeadpunkyeah, switching between platforms is always interesting:)11:56
masterpe[m]noonedeadpunk jrosser https://gist.github.com/mpiscaer/072b30c935ac3d3c1e2bb3eb6ee70dcd11:56
noonedeadpunkmasterpe[m]: and you limit to ceph-osd hosts?11:57
noonedeadpunkor well11:57
masterpe[m]I limit on storko111:58
noonedeadpunkok, so then you won't get facts gathered for the mon hosts, which seem to be required by ceph-ansible...11:59
anskiynoonedeadpunk: ah, changing tags? It could be achieved by connecting two ports on the same switch :)11:59
noonedeadpunkanskiy: if it's capable of creating port groups? 12:00
jrossermasterpe[m]: i think you have two choices, to expand the --limit to include the hosts that facts are needed from12:03
jrosserthough i can see why you might not want to do that12:03
noonedeadpunkbut tbh, even some really poor dlink I had to manage at university in 2006 was capable of handling vlans... in a weird way, in that you couldn't specify the vlan id when creating one as it only assigned IDs auto-incrementally, but still12:03
jrossermasterpe[m]: so the alternative is to use an ansible ad-hoc command something like "ansible <ceph-mon-group-name> -m setup" to gather the facts before you then run the ceph-install playbook12:04
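(Roughly, assuming the mon hosts live in an inventory group called ceph-mon - adjust the group name to whatever the deployment actually uses:)

    # gather facts for the mon hosts without running any other tasks
    ansible ceph-mon -m setup
    # then re-run the ceph playbook with the original limit
    openstack-ansible ceph-install.yml --limit storko1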
anskiynoonedeadpunk: nope, you just egress one VLAN removing the tag, and accept it with another: voila, it changed its ID :P12:06
masterpe[m]jrosser: using the ansible with the -m setup worked.12:08
noonedeadpunkanskiy: ok, yes, this is another way around12:09
noonedeadpunkjust never had to work with switches that are managed but don't support vlans :)12:10
noonedeadpunkthey're always either stupid or smart enough to have that support12:10
jrossersome very early vxlan stuff needed ports looping on the front of switches12:22
gokhanijrosser: it didn't work. I created eth12 from br-ext and neutron created eth12.1, but it cannot connect to the internet. 12:32
jrossergokhani: i would be trying simple debugging like pinging the gateway using these various interfaces as the source12:33
jrosserand then the same from your neutron L3 agent namespace12:34
anskiynoonedeadpunk: I've only seen VLAN recolouring a couple of times, most of them were like: this other ISP sends VLAN with ID which doesn't comply with our guides, so we change it.12:36
gokhanijrosser: firstly, from the vm I cannot ping the gateway, and also not the tap interface or the bridges12:37
gokhanijrosser: I created vm provider network 12:37
jrossernoonedeadpunk: uh-oh https://zuul.opendev.org/t/openstack/build/6af29c3f0571417cb810bf4307690cd012:41
jrosseri think we have new setuptools trouble there12:41
*** dviroel is now known as dviroel|brb12:43
gokhanijrosser: after creating the veth pair, I think on the neutron side I need to set the network as flat, not vlan? If I am not right, please correct me :)13:22
jrosserthat entirely depends on what you have on the actual upstream network13:22
jrossermy preference is to always do these things as vlan and then fix up whatever you need in the switches13:23
jrosserflat is only ever one network, and if you ever need to change it / add another it's a big job rather than a simple one13:23
*** dviroel|brb is now known as dviroel13:27
gokhanijrosser: in this config I depend on only one network for both hosts and vms. I would also prefer vlan. I have br-ext tagged with vlan x. I created a veth pair from br-ext (ip link add br-ext-veth type veth peer name eth12 || true && ip link set br-ext-veth up && ip link set eth12 up && brctl addif br-ext br-ext-veth) and used eth12 for neutron. I mean, on the neutron side do I need to define the provider13:34
gokhaninetwork as vlan or flat13:34
gokhanijrosser: yes it is flat and it worked :) thank you very much :)13:38
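(For the record, creating a flat provider network on the neutron side looks roughly like this; the physical network label and the address ranges are illustrative and must match the flat_networks / physical interface mapping that points at eth12 in the linuxbridge agent config.)

    openstack network create --external --share \
      --provider-network-type flat \
      --provider-physical-network physnet1 \
      provider-net
    openstack subnet create --network provider-net \
      --subnet-range 10.160.143.0/24 \
      --gateway 10.160.143.1 \
      --allocation-pool start=10.160.143.100,end=10.160.143.150 \
      provider-subnet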
gokhaninoonedeadpunk: anskiy thanks for your help, I solved my problem with vethpairs :)13:42
noonedeadpunkgreat!13:45
jrossergokhani: are your VM contents trusted?13:49
gokhanijrosser: yes indeed; after the applications are deployed to this environment it will be offline. 13:52
NeilHanlonsome problem with tox or a dependency thereof is causing trouble merging https://review.opendev.org/c/openstack/openstack-ansible-tests/+/835219 (see: https://github.com/pypa/setuptools/issues/3197) -- any thoughts noonedeadpunk?14:41
NeilHanlonit appears some other openstack projects (TripleO, others) have just done this: https://github.com/openstack/tripleo-ansible/commit/dab104315f5352800ec56f163f6ae12a4b8c968514:44
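(The usual shape of that workaround - a sketch of the generic mitigation from the setuptools issue, not necessarily the exact diff in the tripleo commit or in the reviews below - is to declare an explicit, empty py_modules so setuptools >= 61 skips its new auto-discovery:)

    # setup.py
    import setuptools

    setuptools.setup(
        setup_requires=['pbr>=2.0.0'],
        pbr=True,
        # explicitly declare no top-level modules so setuptools 61's
        # auto-discovery does not try to guess packages in the repo root
        py_modules=[],
    )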
opendevreviewNeil Hanlon proposed openstack/openstack-ansible-tests master: Disable setuptools auto discovery  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/83546814:47
opendevreviewNeil Hanlon proposed openstack/openstack-ansible-tests master: Update tests for Rocky Linux  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/83521914:48
jrosserNeilHanlon: yes something is broken with new setuptools14:53
jrosserNeilHanlon: looks like your patch has helped there15:39
jrosserthough we have $lots of repos that this would need patching into15:40
jrosserlike ~50 or so15:40
jrosseri see now that there is a setuptools 61.2.0 which claims to fix https://github.com/pypa/setuptools/issues/319715:40
NeilHanlonyeah changing setup.py doesn't seem overly scalable15:41
noonedeadpunkit's easier to update the requirements repo to bump the setuptools version I guess?15:53
noonedeadpunkbecause they're in u-c15:53
jrosserunfortunately not15:53
jrosserthis is openstack-tox-docs job which takes the setuptools bundled with virtualenv15:54
jrosserand virtualenv released on the 25th contains setuptools 61.0.015:54
jrosseri wonder if we can put something in the `deps` section of tox.ini15:56
jrosseri'll poke at it in a VM15:57
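(The sort of thing being floated here would look roughly like this in tox.ini - a sketch assuming the docs env can pull in its own setuptools; as discussed just below, u-c gets in the way of this in practice.)

    [testenv:docs]
    deps =
        # pull in a setuptools release that contains the auto-discovery fix
        setuptools>=61.2.0
        -r{toxinidir}/doc/requirements.txt
    commands =
        sphinx-build -W -b html doc/source doc/build/html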
*** dviroel is now known as dviroel|lunch16:13
jrosserso this is ugly - openstack-tox-jobs uses the bundled setuptools16:39
jrosserthen everything we do afterwards in the tox environment is subject to u-c16:40
jrosserand the new pip resolver then won't let you move setuptools to anything later than 61.0.016:41
jrosseras that's contrary to u-c16:41
noonedeadpunkdamn it16:50
noonedeadpunkso bumping u-c to 61.2.0 should still work?:)16:50
opendevreviewMerged openstack/openstack-ansible master: Add mysql directory for logging  https://review.opendev.org/c/openstack/openstack-ansible/+/83509116:54
*** dviroel|lunch is now known as dviroel16:55
jrossernoonedeadpunk: lol https://review.opendev.org/c/openstack/requirements/+/83532917:11
jrosserit somehow gets removed totally there17:12
jrosserone option is to make the docs jobs n-v and just wait for a virtualenv release17:13
lowercaseshould i open an issue regarding the ceilometer and gnocchi playbooks not having a user-defined pip package section? the xena version of the playbook is failing because the openstacksdk pip package is not installed, and i cannot add it via the playbook17:32
jrosserlowercase: if you think there's a bug please do - we should usually see something like that in our CI though?17:33
lowercaseTASK [os_gnocchi : Add service project]17:34
lowercasefatal: [ceilometer2_gnocchi_container-e7bc0399 -> localhost]: FAILED! => {"attempts": 5, "changed": false, "msg": "openstacksdk is required for this module"}17:34
lowercaseaccording to: https://docs.openstack.org/openstack-ansible-os_ceilometer/xena/ , openstacksdk is indeed not specified as a dependency17:35
jrosserthat delegates to localhost?17:35
jrosserthat looks incorrect17:35
lowercaseohhh17:36
jrosseruh-oh https://github.com/openstack/openstack-ansible-os_gnocchi/blob/stable/xena/tasks/service_setup.yml#L3017:36
lowercaseyeah, my service_setup_host is a bastion/jumpbox server. I'll install openstacksdk there17:37
lowercasei just upgraded the os on that server; it makes sense that the pip package list changed with the python version upgrade17:38
jrosserno wait, that's ok https://github.com/openstack/openstack-ansible-os_gnocchi/blob/stable/xena/tasks/main.yml#L10417:38
jrosserlowercase: no, it's not right to use your bastion like that17:39
lowercasewhy is that?17:39
jrosseryou want something like this17:39
jrosseropenstack_service_setup_host: "{{ groups['utility_all'][0] }}"17:39
jrosseropenstack_service_setup_host_python_interpreter: "/openstack/venvs/utility-{{ openstack_release }}/bin/python"17:39
jrosserbecause we already go to a lot of trouble to create the correct environment for all this on the utility containers17:40
jrosserthe service setup host needs the galera client setup, the openstack python modules and so on17:40
jrossermuch much simpler to use the existing utility container rather than replicate all that17:41
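(In OSA terms that is just the two overrides quoted above, dropped into the deploy host's user variables; /etc/openstack_deploy/user_variables.yml is the conventional place for them.)

    # /etc/openstack_deploy/user_variables.yml
    openstack_service_setup_host: "{{ groups['utility_all'][0] }}"
    openstack_service_setup_host_python_interpreter: "/openstack/venvs/utility-{{ openstack_release }}/bin/python"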
lowercasei'm investigating, one moment17:42
jrossertbh if you have got as far as gnocchi/ceilometer already then this must have worked for the other services already17:43
lowercaseoh yes, we have this config working on all of our openstack environments this way.17:43
lowercaseand i have no utility containers in this environment, let me look at the others.17:43
lowercasemy other dev environment has utility containers but they are not functioning. one of my prod environments does not have utility either.17:45
lowercaseokay, so if that's the right way to do things, I can test bringing back the utility containers.17:46
jrosserthat would be most aligned with how things are designed17:48
jrosserthey have a MySQL admin client17:48
jrosseran openrc file for the cloud admin user17:48
jrosserand the openstack cli set up in a venv and symlinked to /usr/local/bin17:49
jrosserthat’s a fair amount of admin creds on them, which isn’t ideal, but it’s no worse than the config files on all the other hosts17:50
lowercasethat's not a high bar to accomplish on a bastion host. I pinged my coworker who was in charge of these environments before me and asked about the reason we do not have utility containers; we may choose to pursue this.18:01
jrosserit would probably be possible to fiddle with the inventory to make the utility playbook run against the bastion18:07
jrosserwe support bare metal deployment so that might be hackable18:08
jrosserit could be worth grepping around your user variables and host/group_vars on the deploy host, as it’s possible to define this service setup host on a per-service basis, or globally with the vars I mentioned before18:10
lowercaseMy coworker stated the reason we don't use the utility containers is a requirement that our mysql and rabbitmq containers are on different metal nodes than the infra-designated metal nodes. Using the utility containers seemed to require all containers to be hosted on the infra nodes.18:33
*** dviroel is now known as dviroel|out20:30
