Thursday, 2019-04-11

*** tbarron_ has quit IRC00:32
*** zerick has quit IRC00:32
*** kukacz_ has quit IRC00:32
*** timburke has quit IRC00:32
*** kukacz has joined #openstack-ansible00:33
*** zerick has joined #openstack-ansible00:33
*** timburke has joined #openstack-ansible00:34
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_zun master: debian: add support  https://review.openstack.org/65100300:37
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_ceilometer master: debian: add support  https://review.openstack.org/65104300:37
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_gnocchi master: debian: add support  https://review.openstack.org/65103900:37
*** dxiri has quit IRC00:38
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_panko master: debian: add support  https://review.openstack.org/65102200:38
*** partlycloudy has joined #openstack-ansible00:53
jsquareAny idea why the containers might fail to get their network up and running? We had our Rocky deployment process very streamlined until it stopped working because of that.00:55
mnaserjsquare: what part of the container networking is not working?00:57
jsquaremnaser: we've been fine-tuning the deployment over the last week, we were deploying the whole thing, then tearing it down, cleaning up, and repeating over and over01:00
jsquareat some point today, it started failing, no obvious reason01:00
jsquarebasically, the infra containers don't get their network configured01:00
jsquarewe are missing something...01:01
*** cshen has joined #openstack-ansible01:05
*** cshen has quit IRC01:09
kplantit appears to me that on centos 7 both stable/stein and master are trying to install a version of keystone that doesn't exist: https://pastebin.com/DHyw6tMU01:10
*** gyee has quit IRC01:10
openstackgerritMerged openstack/openstack-ansible master: Set telemetry debian gate job to non vonting  https://review.openstack.org/65164301:16
jsquaremnaser: therefore, task "lxc_container_create : Execute first script" always failing in the setup-hosts playbook01:26
*** hwoarang has quit IRC01:38
*** hwoarang has joined #openstack-ansible01:39
*** dave-mccowan has quit IRC02:01
*** kmadac has quit IRC02:05
*** zhongjun2_ has joined #openstack-ansible02:05
*** kmadac has joined #openstack-ansible02:07
*** zhongjun2_ has quit IRC02:07
*** zhongjun2_ has joined #openstack-ansible02:07
*** zhongjun2_ is now known as zhongjun202:11
*** nurdie has joined #openstack-ansible02:30
*** markvoelker has joined #openstack-ansible02:31
cloudnulljsquare I would check to see if the lxc-dnsmasq process is running02:32
cloudnullit also may need a restart02:33
cloudnullcontainers will fail to get networking if the first interface is blocking on trying to get DHCP02:33
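cloudnull's checks above can be sketched as a tiny script. A minimal diagnostic sketch, assuming a Linux host with `pgrep`; the service names come from the discussion and the restart hint just echoes cloudnull's suggestion:

```shell
# Minimal diagnostic sketch for the advice above: report whether
# lxc-dnsmasq and systemd-networkd appear in the process list.
check() {
    if pgrep -f "$1" >/dev/null 2>&1; then
        echo "$1: running"
    else
        echo "$1: NOT running (try: systemctl restart $1)"
    fi
}
check lxc-dnsmasq
check systemd-networkd
```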
openstackgerritMerged openstack/openstack-ansible-os_aodh master: debian: add support  https://review.openstack.org/65104702:33
*** partlycloudy has left #openstack-ansible02:38
*** nurdie has quit IRC02:40
*** nurdie has joined #openstack-ansible02:41
*** nurdie has quit IRC02:45
jsquarecloudnull: yes, all looks fine, actually nothing has changed, apparently02:53
jsquareactually, the containers only show the loopback interface02:59
*** nicolasbock has quit IRC03:03
*** markvoelker has quit IRC03:05
*** cshen has joined #openstack-ansible03:05
*** cshen has quit IRC03:09
*** KeithMnemonic has quit IRC03:12
*** nurdie has joined #openstack-ansible03:22
*** nurdie has quit IRC03:30
*** hwoarang has quit IRC03:35
*** hwoarang has joined #openstack-ansible03:36
mnaserhmm maybe the systemd_network role didn't run?03:40
mnaserkplant: is this a multinode job?03:41
jsquarehmmm why do you think it would not?03:41
mnaserjsquare: I think that's the part that configures the containers03:45
mnaserI believe if you rerun the container create role it .. should do it?03:45
mnaserif you get into any containers and run systemctl status systemd-networkd that might give us info03:45
mnasertho that area isn't really my expertise03:45
*** cmart has joined #openstack-ansible03:46
openstackgerritMerged openstack/openstack-ansible-os_zun master: debian: add support  https://review.openstack.org/65100303:46
*** nurdie has joined #openstack-ansible03:49
openstackgerritMerged openstack/openstack-ansible-os_ceilometer master: debian: add support  https://review.openstack.org/65104303:51
jsquaremnaser: systemd-networkd is down inside the containers03:52
mnaserstatus show anything useful jsquare ?03:52
jsquare   Status: "Shutting down..."03:52
jsquarecan't find any error anywhere03:54
jsquarethis issue never happened before, we've deployed the whole thing more than 15 times03:56
jsquarei'm at a loss03:56
mnasersystemctl restart systemd-networkd inside a container.. does that bring it back?03:59
openstackgerritMerged openstack/openstack-ansible-os_gnocchi master: debian: add support  https://review.openstack.org/65103904:00
openstackgerritMerged openstack/openstack-ansible-os_panko master: debian: add support  https://review.openstack.org/65102204:00
*** markvoelker has joined #openstack-ansible04:02
jsquareyeah, and stays up for a few seconds with status = processing requests, and then exits successfully04:03
jsquaredo you know what playbook wires up the host bridge to the containers?04:08
jsquare*bridges04:09
*** cshen has joined #openstack-ansible04:20
*** cshen has quit IRC04:27
*** markvoelker has quit IRC04:36
*** chhagarw has joined #openstack-ansible04:38
*** goldenfri has quit IRC04:44
mnaserjsquare: I think that’s lxc container create one04:46
mnaserIf you’re using lxc04:46
*** cshen has joined #openstack-ansible04:48
*** cshen has quit IRC04:52
jsquareyes, lxc, container setup fails, for some reason the containers don't get attached to lxcbr005:00
*** hwoarang has quit IRC05:00
*** hwoarang has joined #openstack-ansible05:02
openstackgerritDmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible master: Set telemetry debian gate job to vonting again  https://review.openstack.org/65169205:05
openstackgerritDmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible master: Set telemetry debian gate job to vonting again  https://review.openstack.org/65169205:06
*** rambo_li has joined #openstack-ansible05:13
rambo_liHi, I'm having some troubles deploying openstack with the rbd driver. The playbook fails on TASK [Perform online data migrations] and looking at the logs from the cinder-api container I get the following error: "Error attempting to run shared_targets_online_data_migration: MessagingTimeout: Timed out waiting for a reply to message ID". I've probably missed something but can't find it05:15
*** goldenfri has joined #openstack-ansible05:16
rambo_lihas anyone met this problem?05:19
*** goldenfri has quit IRC05:32
*** markvoelker has joined #openstack-ansible05:33
*** hwoarang has quit IRC05:40
*** hwoarang has joined #openstack-ansible05:41
*** cmart has quit IRC05:43
*** nurdie has quit IRC05:59
*** udesale has joined #openstack-ansible06:03
*** markvoelker has quit IRC06:06
*** kopecmartin|off is now known as kopecmartin06:09
*** yetiszaf has quit IRC06:13
*** chhagarw has quit IRC06:13
*** chhagarw has joined #openstack-ansible06:14
*** ahuffman has quit IRC06:36
*** chhagarw has quit IRC06:39
*** chhagarw has joined #openstack-ansible06:39
*** phasespace has quit IRC06:40
*** goldenfri has joined #openstack-ansible06:48
*** cshen has joined #openstack-ansible06:49
*** ahuffman has joined #openstack-ansible06:51
*** ivve has joined #openstack-ansible06:52
fnpanicgood morninh06:53
fnpanic's/h/g/g'06:54
*** markvoelker has joined #openstack-ansible07:03
*** luksky has joined #openstack-ansible07:07
*** mbuil has joined #openstack-ansible07:10
*** goldenfri has quit IRC07:26
*** CeeMac has joined #openstack-ansible07:29
*** markvoelker has quit IRC07:36
*** tosky has joined #openstack-ansible07:39
*** vnogin has joined #openstack-ansible07:42
noonedeadpunkguilhermesp: no problems:)07:49
*** phasespace has joined #openstack-ansible07:55
openstackgerritChandan Kumar (raukadah) proposed openstack/openstack-ansible-os_tempest master: Switch to import_task in os_tempest  https://review.openstack.org/65005408:01
openstackgerritChandan Kumar (raukadah) proposed openstack/openstack-ansible-os_tempest master: Switch to import_task in os_tempest  https://review.openstack.org/65005408:02
*** hamzaachi has joined #openstack-ansible08:03
openstackgerritDmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible master: Set telemetry jobs to vouting again  https://review.openstack.org/65171808:17
*** priteau has joined #openstack-ansible08:22
openstackgerritDmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ceilometer master: Noop change to test gate  https://review.openstack.org/65172008:24
*** ygk_12345 has joined #openstack-ansible08:29
*** yolanda_ has joined #openstack-ansible08:29
ygk_12345odyssey4me: Hi :)08:29
noonedeadpunkguilhermesp: I was not sure whether you were online so I decided to push the patch08:32
*** markvoelker has joined #openstack-ansible08:34
*** cshen has quit IRC08:51
*** cshen has joined #openstack-ansible09:03
*** markvoelker has quit IRC09:07
*** tbarron has joined #openstack-ansible09:21
*** rambo_li has quit IRC09:35
*** sum12 has quit IRC09:36
*** cshen has quit IRC09:38
*** electrofelix has joined #openstack-ansible09:40
*** luksky has quit IRC09:42
*** sum12 has joined #openstack-ansible09:46
*** ygk_12345 has quit IRC10:00
*** cshen has joined #openstack-ansible10:04
*** markvoelker has joined #openstack-ansible10:04
*** ygk_12345 has joined #openstack-ansible10:06
*** cshen has quit IRC10:09
*** luksky has joined #openstack-ansible10:18
*** markvoelker has quit IRC10:36
*** yolanda_ has quit IRC10:40
*** Kurlee has joined #openstack-ansible10:43
*** cshen has joined #openstack-ansible10:44
*** nicolasbock has joined #openstack-ansible10:46
*** yolanda_ has joined #openstack-ansible10:52
ygk_12345guilhermesp: hi :) , r u there ?10:55
*** udesale has quit IRC10:57
*** priteau has quit IRC10:59
*** ansmith has quit IRC11:22
*** dave-mccowan has joined #openstack-ansible11:38
*** ygk_12345 has quit IRC11:42
kplantmnaser: yeah, it is a multinode job11:46
*** flaviosr_ has quit IRC11:52
*** nicolasbock has quit IRC11:53
*** vnogin has quit IRC11:57
*** nicolasbock has joined #openstack-ansible12:00
*** flaviosr_ has joined #openstack-ansible12:00
phasespaceQuestion about how rabbitmq is configured. I see you set a HA policy (ha-mode: all) only for the / vhost. That means the queues in other vhosts, e.g. /nova, are not mirrored, no?12:09
*** ygk_12345 has joined #openstack-ansible12:11
phasespaceThere is no queue master locator either. Doesn't all this mean that rabbitmq is configured not to be HA, and all queues will end up having the same master?12:11
noonedeadpunkSeems that you're right. I was thinking about the same thing a while ago actually12:14
noonedeadpunklooks like something that should be investigated and fixed: policies are vhost-scoped, so according to that logic the policy for / shouldn't be applied to other vhosts12:16
*** starborn has joined #openstack-ansible12:20
noonedeadpunkphasespace I think you may define policies with https://github.com/openstack/openstack-ansible-rabbitmq_server/blob/master/defaults/main.yml#L15812:20
noonedeadpunkoh, it seems to be missing vhost parameter12:21
phasespaceyes, and i'm on rocky: https://github.com/openstack/openstack-ansible-rabbitmq_server/blob/stable/rocky/defaults/main.yml12:21
guilhermespnoonedeadpunk: I appreciate the help! let me vote :)12:22
guilhermespwe are so close to debian support <312:23
guilhermespwe just have 3 roles to fix https://review.openstack.org/#/q/topic:osa/debian-support+is:open12:23
guilhermespactually I haven't done anything with the nspawn roles yet... they are missing the prerequisites for debian support12:24
noonedeadpunkphasespace: seems that it's really not implemented right now. So you may offer a patch for this, or we may work on it when we have some free time12:24
noonedeadpunkit seems that we'll need to patch almost every role for their policies support...12:29
*** ansmith has joined #openstack-ansible12:31
*** rambo_li has joined #openstack-ansible12:34
phasespaceyes, it would seem so12:35
* noonedeadpunk writes this down to the list of ToDo things12:36
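Given the vhost-scoped behaviour discussed above, a fix would need one HA policy per service vhost. A dry-run sketch that only prints the `rabbitmqctl` commands; the vhost list and the queue pattern are illustrative assumptions, not OSA's actual defaults:

```shell
# Print (not run) a per-vhost HA policy command for each service vhost.
# Vhost names and the '^(?!amq\.).*' pattern are illustrative assumptions.
for vhost in /nova /cinder /neutron; do
    echo "rabbitmqctl set_policy -p $vhost HA '^(?!amq\\.).*' '{\"ha-mode\":\"all\"}'"
done
```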
*** udesale has joined #openstack-ansible13:00
*** sum12 has quit IRC13:01
*** sum12 has joined #openstack-ansible13:01
*** rambo_li has quit IRC13:08
guilhermespmnaser: I think we can revert the "Drop sphinxmark" https://github.com/openstack/requirements/commit/8d5a0e657612fece0173a79889ad1057b44544c713:14
serverascodehi, what's the purpose of the cidr_networks tunnel definition in the user config file? I'm working on deploying in a pure layer 3 environment, so br-vxlan interfaces will not be in the same network...any thoughts on how I might get around that?13:17
serverascodeis it just some kind of validation check to see if all vxlan interfaces are in the same network? or something else?13:17
*** pcaruana has quit IRC13:20
mnaserguilhermesp: we don't really care about sphinx mark anymore, its been replaced by openstackdocstheme13:22
noonedeadpunkserverascode: actually it's for vxlans. Neutron selects the interface to build vxlans on based on the IP; that IP is taken from the tunnel segment and placed into the neutron plugin config.13:22
mnaserserverascode: in that case I wouldn't even create br-vxlan ?13:23
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_designate master: Fix designate venv build constraints  https://review.openstack.org/65178413:23
jrosserguilhermesp: something like that i think ^ ?13:24
guilhermespuooow let me see jrosser13:24
serverascodemnaser if I didn't have a br-vxlan where would vxlan be tunneled for tenants in openstack? br-mgmt?13:25
serverascodeie. can we just use br-mgmt for tunnels?13:25
noonedeadpunkSo I don't have br-vxlan - I just use simple interface for this13:25
serverascodeok, cool, what is a simple interface?13:26
mnaserserverascode: you can just use an actual interface as noonedeadpunk mentioned13:26
mnaserjrosser: I added a -1, one more thing missing..13:26
mnaserI'll take the blame for that one :)13:26
mnaserbrb reboot13:26
jrosserhahah barbican :)13:27
*** goldenfri has joined #openstack-ansible13:27
noonedeadpunkYeh, I missed that as well ;(13:27
serverascodethanks all, I'm just not sure what an actual or simple interface is and how it would be setup in the user config?13:28
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_designate master: Fix designate venv build constraints  https://review.openstack.org/65178413:30
noonedeadpunkserverascode: http://paste.openstack.org/show/749184/13:31
ygk_12345can someone help me with my issue please  https://storyboard.openstack.org/#!/story/200543113:31
noonedeadpunkas long as you place neutron agents on baremetal (it's the default) this should work13:31
serverascodenoonedeadpunk: ok let me take a look at that, I think I was finding I had to enter the cidr_network for tunnel and that is not a single network for all of those interfaces13:32
*** marst has joined #openstack-ansible13:33
ygk_12345mnaser: hi :). can you help me with my issue please13:33
noonedeadpunkthe main point is that the IP from the tunnel network should be placed on ib1 (in my case), as ib1 may be any network interface (without a bridge)13:33
noonedeadpunkI meant not only bridge:)13:34
ygk_12345waiting for someone to pitch in13:34
serverascodeok yeah, but cidr_network -> tunnel will be defined as a single network, and my IPs that are on the vxlan interface (not bridge) will be on different networks on each node13:34
serverascodeand then I get errors when running13:35
mnaserygk_12345: I have seen that issue before with `--wait` .. I believe it's something in our load balancer stuff but I'm not sure tbh13:35
mnaserI would check the haproxy logs13:35
serverascodehow can I run this without cidr_network defined for tunnel or for that definition to have multiple networks in it13:35
ygk_12345mnaser: it is happening intermittently at the clients end13:36
noonedeadpunkserverascode: You'll need some common network... You may create some vlan for this on your equipment13:36
serverascodethen it wouldn't be pure layer 3 :)13:36
jrosserserverascode: what error do you get?13:36
*** phasespace has quit IRC13:37
noonedeadpunkserverascode: SR-IOV?:)13:37
ygk_12345mnaser: i have taken the tcpdump as well13:38
serverascodeno it's on packet.com which is by default pure layer 313:38
serverascodewe can do vlans if needed but it's not the default and I'm trying to avoid it b/c it shouldn't be necessary unless required by OSA13:38
serverascodeor I can set up underlying vxlans for infrastructure but that would be a little weird :)13:39
jrosserserverascode: what error do you get ? :)13:39
jrosserfwiw i am running L3 between TOR switches so am in a similar position13:39
serverascodejrosser I'll run again to grab the error13:40
serverascodejust need a bit13:40
serverascodebut that is great to know that you are doing l313:40
jrosserin the "old days" when the neutron agents were containerised the inventory would have needed to assign IP for the neutron containers13:40
jrosserbut that is no longer the case13:40
jrosserso it could well be that there is spurious logic around from that which no longer matters13:41
ygk_12345mnaser: any idea ?13:42
*** pcaruana has joined #openstack-ansible13:42
mnaserygk_12345: I don't know, I don't have time to dig into this in particular, but if you have any tweaks to the haproxy service, I can help land those changes.13:42
noonedeadpunkBut if you don't have the tunnel network on all compute nodes and the neutron agent node you'll possibly hit an error, as the neutron role won't be able to fill in the linuxbridge_agent plugin template13:42
serverascodeyeah I can understand that when OSA is effectively scheduling IPs having a single network would be easiest, but vxlan is not scheduled here. I'll go find the error msg13:43
jrossernoonedeadpunk: it just needs to be able to figure out the IP for the VTEP on each node, and if it knows the interface it can find that13:43
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_designate master: debian: add support  https://review.openstack.org/65104013:43
*** Kurlee has quit IRC13:44
jrosseri have this in neutron_linuxbridge_agent_ini_overrides13:45
jrosserlocal_ip: "{{ hostvars[inventory_hostname]['ansible_' + 'bond1.1941']['ipv4']['address'] }}"13:45
noonedeadpunkoh, in this case it might work of course13:45
jrosserbut that can be done a *whole* lot nicer now since jamesdenton added some extra bits to drive that from user config13:45
jrosseri've just not got round to migrating to that yet13:46
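jrosser's override above would normally live in user_variables.yml. A sketch that writes it to a local file for illustration; the `vxlan` section name and the `bond1.1941` interface are assumptions carried over from the paste, and a real deployment would use /etc/openstack_deploy/user_variables.yml instead:

```shell
# Write the linuxbridge local_ip override to a local file (illustrative;
# section name and interface are assumptions, verify for your deployment).
cat > user_variables_local_ip.yml <<'EOF'
neutron_linuxbridge_agent_ini_overrides:
  vxlan:
    local_ip: "{{ hostvars[inventory_hostname]['ansible_' + 'bond1.1941']['ipv4']['address'] }}"
EOF
cat user_variables_local_ip.yml
```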
*** cshen has quit IRC13:50
*** marst has quit IRC13:52
*** ygk_12345 has left #openstack-ansible13:52
goldenfrihello all, I started a deloy over, destroyed all the containers and deleted the inventory and now it fails because cinder-manage can't complete the online_data_migrations the logs say its timing out waiting for a reply, any idea where to start troubleshooting this? Should I have deleted more things when I stated over?13:55
goldenfriopps delay = deploy13:55
noonedeadpunkgoldenfri it's a known problem to be honest, and it seems that it's related more to cinder14:01
goldenfrioh hrm, is there a workaround?14:01
noonedeadpunkhttps://bugs.launchpad.net/cinder/+bug/180615614:01
openstackLaunchpad bug 1806156 in Cinder "shared_targets_online_data_migration fails when cinder-volume service not running" [Undecided,Confirmed]14:01
*** gkadam has joined #openstack-ansible14:02
noonedeadpunkso you may try to drop the cinder db and try to deploy it again14:02
*** gkadam has quit IRC14:03
goldenfriok thanks, I'm not sure why its even trying to do a data migration since I thought I was starting from scratch14:05
jamesdentonmornin14:06
noonedeadpunkgoldenfri migration is kinda required during setup : https://docs.openstack.org/cinder/rocky/install/cinder-controller-install-ubuntu.html#install-and-configure-components14:08
*** admin0 has joined #openstack-ansible14:09
noonedeadpunkgoldenfri: oh, probably you're kinda right... it's more about population than online migration14:10
goldenfriyea14:11
goldenfrimaybe I'll just try and skip that14:11
noonedeadpunktbh I commented this step out for now14:12
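The "drop the cinder db and redeploy" workaround noonedeadpunk points at might look like the commands below. Since it is destructive, this sketch only prints them; the database name and playbook name are the usual OSA ones but should be verified against your deployment:

```shell
# Only print the reset commands; do not execute them blindly.
reset_cmds='mysql -e "DROP DATABASE IF EXISTS cinder;"
openstack-ansible os-cinder-install.yml'
printf '%s\n' "$reset_cmds"
```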
*** ianychoi has quit IRC14:13
*** ianychoi has joined #openstack-ansible14:14
*** marst has joined #openstack-ansible14:15
*** SimAloo has joined #openstack-ansible14:15
*** b1tsh1ft3r has joined #openstack-ansible14:19
b1tsh1ft3rDoes anyone have any guides or information on integrating grafana with openstack for monitoring/graphing etc.? I'm having a difficult time finding information on how to integrate data sources with what is already available from openstack in ceilometer/gnocchi14:21
noonedeadpunkb1tsh1ft3r you may integrate grafana with gnocchi14:21
noonedeadpunkBut you'll need configured ceilometer for data collection14:22
noonedeadpunkhttps://grafana.com/plugins/gnocchixyz-gnocchi-datasource14:22
noonedeadpunkb1tsh1ft3r: or you may setup elk:) https://github.com/openstack/openstack-ansible-ops/tree/master/elk_metrics_6x14:32
b1tsh1ft3rhmm.. Ok, ill have to check into the ceilometer config i have now. I've defined the metering-infra-hosts and metrics_hosts in the openstack_user_config when deploying a queens environment, but beyond this i don't believe i have anything that actually stores the collected data. I'm assuming that a backend must be present that this data is stored into14:35
noonedeadpunkCeilometer stores data in gnocchi14:35
noonedeadpunkBut in queens there were some alternatives..14:36
*** nurdie has joined #openstack-ansible14:36
*** udesale has quit IRC14:38
openstackgerritAntony Messerli proposed openstack/openstack-ansible-lxc_hosts master: Use pkill for lxc-dnsmasq systemd unit file  https://review.openstack.org/65161714:46
admin0noonedeadpunk, i have tried elk on ubuntu 18.04 .. does not work14:48
admin0elk metrics14:48
noonedeadpunkhey, cloudnull, heard this? :) ^14:49
noonedeadpunkelk metrics or elk metrics 6?14:49
admin0he knows :D14:50
admin0but i am still stuck :(14:50
noonedeadpunkthan I may sleep calmly :p14:50
noonedeadpunkunfortunately I can't help you - I'm using zabbix instead. And Ceilometer+Gnocchi+Grafana to monitor usage of specific instances (and for some other internal stuff)14:51
admin0my / is always 100% . and there are no files :(14:51
*** partlycloudy has joined #openstack-ansible14:53
b1tsh1ft3rnoonedeadpunk i've setup the gnocchi data source with a token (temporary) and setup the dash in grafana. Looks like i can see a list of instances from the drop down, but no metrics come through. the default panels look to have "request error"14:59
b1tsh1ft3rupon checking out the request error im seeing the response as "400 Bad request Your browser sent an invalid request."15:00
noonedeadpunkI guess smth is wrong with configuration. It was pretty tricky for me tbh15:03
noonedeadpunkI've configured CORS (allowed grafana server) for gnocchi and keystone and using direct access from browser15:04
noonedeadpunkI may kinda share my dashboard, but it's still a work in progress, and I'm collecting SNMP data from nodes and using them in it. But you may probably catch the idea15:06
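The CORS setup noonedeadpunk describes uses the standard oslo.middleware `[cors]` section. A sketch of what might go into gnocchi.conf (and similarly keystone.conf); the grafana origin URL is a placeholder:

```ini
# gnocchi.conf (illustrative; repeat for keystone.conf)
[cors]
# Allow the grafana server's origin to call the API from the browser
allowed_origin = https://grafana.example.com
```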
guilhermespnoonedeadpunk: remember the openstacksdk issue with trove? Take a look at this https://github.com/openstack/python-troveclient/blob/3713149ba61e2ed0dab0f03a470002355591628a/lower-constraints.txt#L4115:09
guilhermespand http://logs.openstack.org/11/651011/5/check/openstack-ansible-deploy-aio_metal_trove-debian-stable/4e668bc/logs/ara-report/result/56efc826-a0fe-4296-837e-694c1d194057/15:10
guilhermespI'm just wondering why only debian is complaining about that15:10
noonedeadpunklol15:10
*** dhellmann has joined #openstack-ansible15:10
noonedeadpunkguilhermesp: probably we don't need python-troveclient then?15:11
noonedeadpunkas since 0.11.2 it's fully integrated with openstacksdk?15:11
noonedeadpunkit's just a guess though15:12
guilhermespyeah I suspect too... we are using openstack modules to create the resources http://logs.openstack.org/11/651011/5/check/openstack-ansible-deploy-aio_metal_trove-debian-stable/4e668bc/logs/ara-report/file/a150ec7e-6adf-46af-b009-0628651cf0e6/#line-3115:12
*** gyee has joined #openstack-ansible15:12
guilhermespnot sure if the python client is needed15:12
guilhermespI can drop it and see the results15:13
noonedeadpunkthink it's worth a shot15:13
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_trove master: debian: add support  https://review.openstack.org/65101115:14
guilhermesphere we go noonedeadpunk15:14
noonedeadpunkbtw can you take a look at my backport https://review.openstack.org/#/c/651178/ (unrelated)?15:15
* noonedeadpunk opens zuul at takes some cookies15:16
guilhermespdone15:16
guilhermesplet's do some trade then15:16
guilhermespcan you suggest something for this scenario15:16
guilhermespwait15:16
guilhermesphttps://review.openstack.org/#/c/651050/ qpid is failing on debian because we are defining keys from ubuntu16:17
guilhermespsomething kind of similar to the issue I was having with zun, but zun was only a silly thing. This qdrouterd has 3 variables defining the keyserver16:17
guilhermesphttps://github.com/openstack/ansible-role-qdrouterd/blob/82213e344a01a5ef5359a3c524013700d89c7632/vars/ubuntu.yml#L2916:18
guilhermespI think we need to add vars for debian, or do some tweak to make the role deal with different keyservers16:18
noonedeadpunkYeah, I think that we should use different var for debian and ubuntu here...15:19
noonedeadpunkAnd seems that qpid is placed in default debian repo15:20
noonedeadpunkhttps://packages.debian.org/stretch/python-qpid15:20
noonedeadpunkSo no need in this ppa for debian15:20
*** b1tsh1ft3r has quit IRC15:25
*** ivve has quit IRC15:36
*** luksky has quit IRC15:36
guilhermespI think we will need to change here too https://github.com/openstack/ansible-role-qdrouterd/blob/82213e344a01a5ef5359a3c524013700d89c7632/tasks/qdrouterd_install_apt.yml#L16 adding conditionals to ansible_distribution and creating new tasks for debian15:37
*** dxiri has joined #openstack-ansible15:39
noonedeadpunkI'd probably move L16-L32 under a ubuntu-specific block15:42
*** pcaruana has quit IRC15:42
*** cmart has joined #openstack-ansible15:42
noonedeadpunkBut what new task for debian do we need? L33-39 should suit debian15:42
*** cshen has joined #openstack-ansible15:45
*** hamzy has quit IRC15:47
*** cshen has quit IRC15:49
*** chandankumar is now known as raukadah16:01
openstackgerritAntony Messerli proposed openstack/openstack-ansible-lxc_hosts master: Use pkill for lxc-dnsmasq systemd unit file  https://review.openstack.org/65161716:05
guilhermespnoonedeadpunk: btw dropping python-troveclient was not enough http://logs.openstack.org/11/651011/6/check/openstack-ansible-deploy-aio_metal_trove-debian-stable/4e2fae0/logs/ara-report/result/9358f954-8f0f-4afe-b88f-0847f05cf876/16:09
*** vnogin has joined #openstack-ansible16:13
*** vnogin has quit IRC16:14
*** vnogin has joined #openstack-ansible16:14
guilhermespand yeah noonedeadpunk you're right, no need for task for debian. Let me try it out16:15
jrosserguilhermesp: we should vendor the keys like was done for ceph_client, apt_key is broken with proxies16:16
openstackgerritLogan V proposed openstack/openstack-ansible master: Add Calico networking AIO scenario  https://review.openstack.org/64583116:19
jrosserguilhermesp: this sort of thing https://review.openstack.org/#/c/636711/16:19
*** ianychoi has quit IRC16:20
guilhermesphum jrosser that means we need to refactor the existing role, this could serve both ubuntu/debian ?16:21
jrosserwell, it didnt look like there were actually qdrouterd packages for debian from anywhere but the distro anyway16:22
jrosserperhaps that needs investigating first16:22
jrosserbut the apt_key thing will be an issue at some point in the future anyway if we ever start using that role as standard16:23
*** chhagarw has quit IRC16:25
raukadahwith respect to debian support, are we dropping ubuntu?16:26
noonedeadpunkI don't think so... are we?16:28
guilhermespraukadah: no. We're just adding debian support, don't worry :)16:28
noonedeadpunkI think we're just extending the list of supported distributions, as debian is a very similar one and might be added without really big changes. And it might be pretty easily supported16:29
raukadahcool, then , thanks!16:29
noonedeadpunkjrosser didn't know about apt_key...16:29
*** cshen has joined #openstack-ansible16:30
jrossernoonedeadpunk: did doesnt hit me because although i have proxies everything is mirrored inside those16:30
jrosser*it doesnt16:30
jrosserbut others who do everything through proxies have come unstuck with all our uses of apt_key16:31
noonedeadpunkI see16:31
*** vnogin has quit IRC16:35
*** dave-mccowan has quit IRC16:39
*** cshen has quit IRC16:55
*** ivve has joined #openstack-ansible16:56
*** hamzy has joined #openstack-ansible17:00
*** pcaruana has joined #openstack-ansible17:05
openstackgerritMerged openstack/openstack-ansible master: Set telemetry debian gate job to vonting again  https://review.openstack.org/65169217:09
*** ivve has quit IRC17:15
openstackgerritMerged openstack/openstack-ansible-os_designate master: Fix designate venv build constraints  https://review.openstack.org/65178417:26
*** gyee has quit IRC17:29
partlycloudyHello folks, I have a question about setting up L3 routing (Clos).17:36
partlycloudyMy edge router is on a dedicate leaf. OSPF is used between leaf and spine.17:36
partlycloudyGiven this, what is the recommended solution to bring the provider networks across leaf racks?17:36
*** kopecmartin is now known as kopecmartin|off17:40
*** luksky has joined #openstack-ansible17:41
*** cmart has quit IRC17:43
jsquarestill trying to fix the issue we have with the containers being created without network, we don't see any ifcfg-* inside them, does anybody have a clue? can't find anything in the logs17:50
jrosserpartlycloudy: segmented provider networks are a thing, may be helpful17:50
jrosserpartlycloudy: but that’s if you actually need the external net on all the leaf racks17:52
*** hamzaachi has quit IRC17:55
*** ahuffman has quit IRC17:56
*** cmart has joined #openstack-ansible17:57
partlycloudyjrosser: i hope to give provider network access to all leaf racks and use OVN to send north-south traffic directly from compute nodes.17:58
partlycloudyis Clos appropriate or overkill for a cluster with a total of ~200 compute nodes?18:00
jrosserDon’t you have fundamentally two choices then, a pure L3 calico style approach or an overlay?18:01
partlycloudyjrosser: My current gear doesn't have BGP support, only OSPF. Does that rule me out of calico or overlay (like EVPN/VXLAN)?18:05
openstackgerritLogan V proposed openstack/openstack-ansible-os_tempest master: Do not ping router when public net is local  https://review.openstack.org/65189618:05
openstackgerritLogan V proposed openstack/openstack-ansible master: Add Calico networking AIO scenario  https://review.openstack.org/64583118:05
openstackgerritMerged openstack/ansible-hardening master: Fix conditional cast to bool  https://review.openstack.org/64369418:06
kplantas someone who is going through a huge evpn roll out, do yourself a favor and make sure all of your POPs and RRs support it.. even the things you forgot to think about18:09
kplantif you choose to go that route18:09
jrosserpartlycloudy: I’m not sure I understand really, distributing your provider network leaf switches with ospf pretty much implies assigning subnets to each leaf, that’s kind of inherent in a routed approach. OVN or not that would be your underlying topology.18:09
jrosserkplant: I’m just bringing up bgp-evpn on nxos.... it’s been an “adventure”18:10
kplanti've been dealing with it on mainly junos18:11
kplantalmost all of our stuff was 12.x and evpn support was added in 14.x18:12
kplantbut not really functional until 16.x18:12
logan-jrosser: in my experience everything on nxos is an adventure, and never the fun kind :/18:12
kplantwe also had some RRs not accepting type 10 LSAs which made our ospf take a dive18:13
kplantall in all, a great experience18:13
kplantlogan-: have you had to suffer through the poison that is ios-xe?18:13
* mnaser just wants to replace everything with frrouting boxes18:14
* kplant likes junos :-(18:15
kplanti'm kind of a juniper zealot sometimes18:15
logan-nope, have not worked with ios-xe. after experiencing early nexus 9k, new deployments are generally junos or arista/eos18:15
logan-++ junos has its downsides at times but generally its super predictable and things just make sense18:16
kplanttransactional changes are a big one for me18:16
logan-yup18:17
kplanti know ios-xr eventually ripped it off18:17
kplantbut it took way too long for others to catch on18:17
kplanti've also found that juniper products are stupid cheap compared to the competition18:18
kplantespecially cisco18:18
kplantwho cares about things like hsrp and eigrp anyway18:18
jrosserHmm I find the opposite - depends on your discount18:19
jrosserWhite box is more expensive than Cisco for me18:19
*** electrofelix has quit IRC18:19
* jrosser likes a bit of network geek-out18:21
kplanthave you gotten into any of the newer cisco platforms with virtualization like NCS?18:23
kplantone chassis nsr is pretty neat18:24
guilhermespI think the reason trove is complaining about the openstacksdk version is because debian stretch has 0.9.5-2 while the ansible openstack modules require >=0.1218:24
guilhermesphttps://packages.debian.org/search?searchon=names&keywords=python-openstacksdk18:24
guilhermespthat's why only the debian job is failing18:24
guilhermesphttp://logs.openstack.org/11/651011/6/check/openstack-ansible-deploy-aio_metal_trove-debian-stable/4e2fae0/logs/ara-report/result/9358f954-8f0f-4afe-b88f-0847f05cf876/18:24
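The mismatch guilhermesp describes boils down to a failed version-floor check: the distro package predates the minimum the modules declare. A stdlib-only sketch of that comparison (function names invented for illustration):

```python
# Minimal sketch of the version mismatch described above: Debian stretch
# ships python-openstacksdk 0.9.5-2, while the Ansible OpenStack modules
# require openstacksdk >= 0.12, so the floor check fails.

def parse_version(version: str) -> tuple:
    """Turn '0.9.5-2' into (0, 9, 5), dropping any Debian revision suffix."""
    upstream = version.split("-")[0]
    return tuple(int(part) for part in upstream.split("."))

def satisfies_floor(installed: str, floor: str) -> bool:
    """True when the installed version meets the minimum the modules need."""
    return parse_version(installed) >= parse_version(floor)

print(satisfies_floor("0.9.5-2", "0.12"))  # stretch package -> False
print(satisfies_floor("0.12.0", "0.12"))   # minimum acceptable -> True
```

Tuple comparison handles the multi-digit component correctly ((0, 9, 5) < (0, 12)), which a naive string comparison of "0.9.5" vs "0.12" would get wrong — which is exactly why only the debian job trips over this.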
*** tosky has quit IRC18:26
openstackgerritNicolas Bock proposed openstack/openstack-ansible-lxc_hosts stable/rocky: Switch OBS mirror to branch for stabilization  https://review.openstack.org/65162318:38
partlycloudyjrosser: sorry for any confusion. i am still trying to get hold of this clos thing and sorry if my questions sound dumb :-p18:39
mnaserjrosser: I figure you probably have a much bigger purchasing contract than I do..18:40
jrossermnaser: across the whole org I expect so yes, I’m sure I’m leveraging that even though my qty is quite modest18:41
mnaserthis just in: jrosser using his discount to help us all get cheap network equipment18:42
kplanthot off the press18:42
jrosserpartlycloudy: don’t worry - nice thing here is everyone has built different things to meet different requirements so there’s a lot of real life stuff to share18:43
* jrosser hides18:43
*** cmart has quit IRC18:45
admin0how to change novalocal in ansible18:46
admin0to null/blank domain name18:46
admin0i cannot find the variable18:47
admin0is it nova_dhcp_domain and neutron_dhcp_domain overrides ?18:47
admin0i see openstack.domain and neutron_dns_domain --18:47
admin0but no values that is novalocal18:48
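admin0's question maps to nova's `dhcp_domain` option (whose upstream default is "novalocal") and neutron's `dns_domain`; neither appears as a dedicated OSA variable, but both are reachable through OSA's generic config-override mechanism. A hypothetical user_variables.yml sketch — the override variables are standard OSA, but the option placement should be verified against the deployed release:

```yaml
# /etc/openstack_deploy/user_variables.yml
# Hypothetical sketch: clear the "novalocal" instance-name suffix via
# OSA's *_conf_overrides mechanism. Verify option sections per release.
nova_nova_conf_overrides:
  DEFAULT:
    dhcp_domain: ""
neutron_neutron_conf_overrides:
  DEFAULT:
    dns_domain: ""
```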
*** cshen has joined #openstack-ansible18:52
openstackgerritMerged openstack/openstack-ansible-os_designate master: debian: add support  https://review.openstack.org/65104018:52
jrosserkplant: we looked at NCS but found the split of features between NCS and ASR9K bad, particularly when a Cat 6500 used to do everything. Juniper MX looked a better bet, depending what features you need really18:53
*** cmart has joined #openstack-ansible18:54
*** hwoarang has quit IRC18:54
*** cshen has quit IRC18:56
kplantwhich series? mx204? or the 10Ks18:56
*** hwoarang has joined #openstack-ansible18:56
openstackgerritMerged openstack/openstack-ansible stable/stein: Add masakari-monitors to openstack_services  https://review.openstack.org/65117818:56
kplantwe have almost 2000 240s 480s and 960s18:59
kplantand we have the 10Ks in the lab right now18:59
kplanti wanted to grab some 204s to be lightweight PERs but haven't gotten a chance yet19:00
*** pcaruana has quit IRC19:02
jrosserI know folks who are having success with mx1000319:03
kplantnice. i haven't gotten a chance to play with one yet19:18
*** christ0 has quit IRC19:21
serverascodejrosser here's a gist https://gist.github.com/ccollicutt/1d15970f0db20a8b569eeca85d4472d0 of the vxlan interfaces, but they are separate networks so I don't know what to put in cidr_networks -> tunnel19:24
serverascodeI added an error message I get from trying to run setup-hosts19:27
*** cshen has joined #openstack-ansible19:28
serverascodedo I need ip_from_q for a network definition?19:28
serverascodeactually nevermind, I see at least one issue there, my mistake19:34
serverascodeapologies for spam19:34
*** cshen has quit IRC19:34
*** vakuznet has joined #openstack-ansible19:47
jrosserserverascode: do you have all your infra nodes in the same L2 segment?19:50
serverascodethere's just the one node19:52
serverascoderight now anyways19:52
serverascodeand will only be one for this particular deployment19:52
jrosserand your container_network, is that similarly L3 routed to your computes?19:54
jrossersorry cidr_networks -> container19:54
serverascodeyeah but it doesn't seem to matter b/c there is just the one node19:54
serverascodebut with br-vxlan it needs to be on infra and compute19:54
jrosserbut each compute needs an ip on the mgmt network19:54
serverascodeoh really, ok19:55
serverascodewhat is the ip_from_q?19:56
jrosseri have one per set of leaves19:56
jrosserthe config file gets quite large19:56
jrossersame goes for the storage network and so on19:57
serverascodeok19:57
openstackgerritNicolas Bock proposed openstack/openstack-ansible-lxc_hosts stable/rocky: Switch OBS mirror to branch for stabilization  https://review.openstack.org/65162319:57
jrosserserverascode: have a close read of this before going much further https://docs.openstack.org/openstack-ansible/rocky/user/l3pods/example.html19:59
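In the l3pods example jrosser links, `ip_from_q` answers serverascode's earlier question: it names a pool under `cidr_networks` from which container and tunnel-endpoint addresses are drawn. A trimmed, illustrative openstack_user_config.yml fragment (addresses and VNI ranges invented):

```yaml
# Illustrative fragment only -- see the l3pods example doc for real values.
cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22      # one routed subnet per pod in a Clos layout

global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"    # draw VXLAN endpoint IPs from cidr_networks.tunnel
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
```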
serverascodejrosser thanks, that is a good call, thanks kindly for the help :)20:00
jrosserno problem :)20:00
kplantis centos 7 support still experimental in osa?20:01
*** starborn has quit IRC20:04
vakuznetis there plan to add distro install method to os_octavia?20:05
*** vakuznet has quit IRC20:10
*** dxiri has quit IRC20:22
*** hamzaachi has joined #openstack-ansible20:31
*** ansmith has quit IRC20:31
*** hamzy has quit IRC20:46
*** ansmith has joined #openstack-ansible21:04
*** partlycloudy has quit IRC21:11
*** partlycl_ has joined #openstack-ansible21:14
*** partlycl_ has left #openstack-ansible21:15
*** partlycloudy has joined #openstack-ansible21:15
*** nurdie has quit IRC21:19
openstackgerritMerged openstack/openstack-ansible-lxc_hosts master: Use pkill for lxc-dnsmasq systemd unit file  https://review.openstack.org/65161721:21
*** hamzaachi has quit IRC21:24
mnaserdo we wanna check out https://review.openstack.org/#/c/650561/21:24
*** tosky has joined #openstack-ansible21:29
*** cshen has joined #openstack-ansible21:31
*** cshen has quit IRC21:35
*** marst has quit IRC21:40
*** gyee has joined #openstack-ansible21:49
*** nurdie has joined #openstack-ansible22:01
*** nurdie has quit IRC22:06
*** luksky has quit IRC22:08
*** SimAloo has quit IRC22:11
*** tosky has quit IRC22:15
openstackgerritMerged openstack/openstack-ansible-haproxy_server master: handlers: reload instead of restart  https://review.openstack.org/65056122:20
*** chhagarw has joined #openstack-ansible22:21
*** nurdie has joined #openstack-ansible22:22
openstackgerritMerged openstack/openstack-ansible master: Set telemetry jobs to vouting again  https://review.openstack.org/65171822:25
*** nurdie has quit IRC22:26
*** chhagarw has quit IRC22:35
*** jra has joined #openstack-ansible22:47
jraI'm running a moderately large OSA-deployed OpenStack on Queens, and we're hitting grievous performance issues moving traffic through the containerized neutron agents on control plane nodes. iperf from metal to metal gives consistent 8.5-9.0Gbps, but from metal into the agents container runs anywhere from 100Mbps down to 10kbps, which seems crazy.22:50
jraSo question one, has anybody encountered this before?22:50
jraFollowing on from that, our team decided to look at moving the neutron agents out of containers onto bare metal in the control plane. I'm doing this process in dev, but the provider network config just profoundly does not work (missing interfaces, trying to enslave a bridge to another bridge, etc).22:52
jraI've been up and down the documentation looking for examples of a bare-metal neutron agent provider network config, and have found nothing that looks any different from what we've been doing; but I know the default config is for bare-metal agents in recent releases22:53
jrahow is this supposed to work?22:53
mnaserjra: what release is this?23:12
mnaseroh23:13
mnaserqueens23:13
mnaserjra: in rocky we moved the agents to baremetal23:13
jraso I read23:15
jrabut the docs haven't changed, and the openstack_user_config.yml examples don't seem to show any changes23:15
logan-pretty sure they moved in queens23:16
jraso I'm not sure how I'm supposed to change things23:16
logan-pike was the last version with containerized agents23:16
mnaserokay so in that case jra should already have bare metal agents?23:16
logan-jra: https://docs.openstack.org/openstack-ansible/queens/admin/upgrades/major-upgrades.html#implement-inventory-to-deploy-neutron-agents-on-network-hosts23:16
jrawe installed from an inventory that had built our pike install, and it had an env.d that specified containerized agents23:16
jra@logan- the problem's not getting OSA to install the agents on metal, I've got that done; it's that I don't know how to update our config to actually work with metal-deployed agents23:17
jraour config, like those in the example docs, names veth interfaces that don't exist on metal23:18
mnaserthey shouldn't exist on metal afaik23:19
jraagreed - but then, what do we specify?23:21
jraso I thought, simple, I'll just name the bridges directly23:21
*** kmadac has quit IRC23:21
*** tbarron has quit IRC23:21
*** kukacz has quit IRC23:21
*** aspiers has quit IRC23:21
*** jillr has quit IRC23:21
*** antonym has quit IRC23:21
jrabut no! Then you end up with "physical_device_mappings = vlan:br-vlan,ex:br-ex" in your linuxbridge_agent.conf23:22
jraand "can't slave a bridge device to a bridge device" in your logs23:22
logan-jra: I had a hell of a time getting it working on metal. this is what I ended up with (on a flat networking, single interface cloud):23:23
logan-https://github.com/openstack/limestone-ci-cloud/blob/master/examples/interfaces23:24
*** admin0 has quit IRC23:24
logan-basically dangling veths off of the bridge23:24
logan-and then telling neutron to attach to those veths here: https://github.com/openstack/limestone-ci-cloud/blob/3886dbc40de036e7d1e3bb61917793d7067a89b2/openstack_deploy/openstack_user_config.yml#L41-L5023:24
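The dangling-veth trick logan- links to can be sketched with Debian ifupdown hooks; this is an illustrative reconstruction (interface names invented), not the actual file from the linked repo:

```
# Hypothetical /etc/network/interfaces sketch: create a dangling veth
# pair, enslave one end to the host bridge, and leave the free end
# (veth-vlan) for neutron to bind -- lets containers and neutron share
# a single bond without neutron trying to enslave a bridge to a bridge.
auto br-vlan
iface br-vlan inet manual
    bridge_ports bond0
    pre-up ip link add veth-vlan type veth peer name veth-vlan-br || true
    pre-up ip link set veth-vlan-br master br-vlan
    up ip link set veth-vlan up
    up ip link set veth-vlan-br up
```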
jraI really didn't want to do veths23:25
jrathanks for sharing those configs, btw23:26
jraOur main issue is the grievous performance issue, and virtualized networking is implicated in that mess23:26
jraso how is this supposed to work for brand-new green field installations?23:27
*** tbarron has joined #openstack-ansible23:27
*** kmadac has joined #openstack-ansible23:27
*** kukacz has joined #openstack-ansible23:27
*** jillr has joined #openstack-ansible23:27
*** aspiers has joined #openstack-ansible23:27
*** antonym has joined #openstack-ansible23:27
logan-i can't speak too much to how others do it, because the only linuxbridge cloud I have is that one (my larger clouds use calico so no bridges or network nodes), but I think most folks hand neutron physical interfaces directly to build its bridges on23:29
logan-iirc my workaround there was necessary because i had a single bond to work with and wanted the containers and neutron to share it23:29
*** cshen has joined #openstack-ansible23:31
logan-(which you can do using that host_bind_override)23:32
snadgei feel like such a noob for asking much simpler questions.. but I have an RDO 1 controller, 3 node setup deployed with ansible onto vsphere vms.. obviously the default is to use vlans, but my lack of vsphere knowledge (and access to peek at and create switches) is hampering me figuring out the best way to get them all talking to each other23:33
jra I've got dedicated hardware just for neutron agents, with dual 10G interfaces bonded down to a single interface with a bunch of VLAN devices hanging off of it, and the bridges hanging off of them. I had kinda thought I could just hand OSA those bridges; are you suggesting I should hand it the VLAN devices instead?23:33
snadgeits just a test environment and there's actually no huge requirement to have the networking actually work.. its more about testing ansible deployment scripts etc rather than actually running instances and doing things with them.. but I guess I just want to learn a bit more about neutron and possibly even vsphere networking as well23:35
*** cshen has quit IRC23:36
logan-jra: right, neutron will build its own bridges off of the physical interface: https://github.com/openstack/openstack-ansible/blob/e4940799c7dfaeda1f33094bb3e1bc143c4c0880/etc/openstack_deploy/openstack_user_config.yml.singlebond.example#L68-L7023:36
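The singlebond example logan- links shows the more common pattern: `host_bind_override` hands neutron the raw interface on metal hosts, and neutron builds its own bridges on top of it — so the config never names a pre-built bridge as a physical device. An illustrative provider_networks entry (interface names and VLAN ranges invented):

```yaml
# Illustrative fragment: on metal, neutron binds the bond (or a VLAN
# device) directly via host_bind_override and creates its own bridges,
# avoiding the "can't enslave a bridge to a bridge" failure.
- network:
    container_bridge: "br-vlan"
    container_type: "veth"
    container_interface: "eth11"
    host_bind_override: "bond0"   # what the agent binds on metal hosts
    type: "vlan"
    range: "101:200"
    net_name: "vlan"
    group_binds:
      - neutron_linuxbridge_agent
```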
snadgeeg.. can a vsphere network switch trunk vlans similar to a physical switch.. or am i better off forgetting about that and defining a flat network setup?23:37
snadgegiven that its only for testing anyway23:37
jra@logan- interesting! I guess I didn't know that23:37
jraThanks so much @logan- and @mnaser, you guys have helped me a ton23:37
kplantsnadge: you could do multiple interfaces per vm and bind each interface to a different portgroup23:40
kplanti'm not sure how well vmware deals with encapsulation/QinQ23:40
snadgeyeah.. each VM appears to have about 5 interfaces.. with one of them dedicated to instances (apparently)23:41
snadgebut from what I can see.. the previous person who set it all up, gave up on that side of it.. we have staging and prod environments that run on metal, that don't share the same problems23:41
kplantlooks like you can create portgroups that accept a range of vlan tags23:49
kplantis that what you're looking for?23:49
snadgequite possibly.. when i look for information about vmware specific to openstack networking.. i seem to find references to full vsphere integration.. im not sure I really want to do that though, since thats very far away from production.. and this is just a test environment23:51
*** cmart has quit IRC23:52
snadgeif i can find a way to accept ranges of vlans in vmware.. ie.. get the "default" vlan networking to work.. then perhaps that's what I should do23:52
snadgeie.. like trunking23:52
snadgebut i'm thinking it might be easier just to use flat networking23:52
kplantit sounds like to me you're only using vmware as a means to virtualize your dev environment23:55
snadgeyeah.. i think its literally just a sandbox.. i don't think there is any expectation of using it for anything productive23:57
snadgeim just new to this job.. and to openstack, and in some ways vsphere as well.. but not virtualisation (kvm), linux, networking etc23:58
snadgethe first thing they got me to do was install devstack and play around with that.. which I have.. and this is basically moving on from that to something a little bit more like what is used in staging (dev) and prod.. they seem to have skipped the test environment23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!