Friday, 2018-12-14

*** thuydang has quit IRC00:00
*** thuydang has joined #openstack-ansible00:00
*** cshen has joined #openstack-ansible00:52
*** cshen has quit IRC00:57
*** gyee has quit IRC00:58
*** jcosmao has quit IRC01:04
*** rodolof has quit IRC01:40
*** macza has quit IRC01:44
*** vnogin has joined #openstack-ansible02:03
*** vnogin has quit IRC02:03
*** dave-mccowan has joined #openstack-ansible02:38
*** maharg101 has quit IRC02:48
*** admin0 has quit IRC03:41
*** lbragstad has joined #openstack-ansible03:50
*** lbragstad has quit IRC03:51
*** dave-mccowan has quit IRC04:03
*** udesale has joined #openstack-ansible04:10
*** lbragstad has joined #openstack-ansible04:13
*** asettle has joined #openstack-ansible04:23
*** markvoelker has joined #openstack-ansible05:04
*** asettle has quit IRC05:06
*** cshen has joined #openstack-ansible05:49
*** cshen has quit IRC05:54
*** hwoarang has quit IRC06:20
*** hwoarang has joined #openstack-ansible06:21
*** ThiagoCMC has quit IRC06:37
*** mkuf has quit IRC06:40
*** mkuf has joined #openstack-ansible06:41
*** mkuf has quit IRC06:51
*** mkuf has joined #openstack-ansible06:52
*** pcaruana has joined #openstack-ansible07:12
*** vnogin has joined #openstack-ansible07:13
*** DanyC has joined #openstack-ansible07:14
*** vnogin has quit IRC07:17
*** DanyC has quit IRC07:18
*** nurdie has joined #openstack-ansible07:27
*** fnpanic_ has joined #openstack-ansible07:28
fnpanic_hi07:28
fnpanic_good morning07:29
*** cshen has joined #openstack-ansible07:29
*** nurdie has quit IRC07:31
*** markvoelker has quit IRC07:45
sum12#openstack-nova07:49
*** DanyC has joined #openstack-ansible07:49
fnpanic_i am still at the AIO with ubuntu 18.0407:51
fnpanic_now everything works flawlessly until setup-hosts07:52
fnpanic_fails at task: [Ensure that the LXC cache has been prepared]07:52
fnpanic_http://paste.openstack.org/show/737282/ is the log from /var/log/lxc-cache-prep-commands.log07:57
fnpanic_maybe it is me again but this was a fresh ubuntu 18.0408:03
fnpanic_locale also correct08:03
*** maharg101 has joined #openstack-ansible08:05
*** DanyC has quit IRC08:08
*** aludwar has quit IRC08:09
fnpanic_i need to double-check the sources.list08:09
fnpanic_maybe something is wrong here08:10
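A quick way to sanity-check the host apt configuration before re-running the playbook (a minimal sketch; the log path is the one fnpanic_ pasted above, the rest are stock apt and OSA commands):

    # Show the active apt sources the LXC cache build will copy into the image
    grep -vE '^\s*(#|$)' /etc/apt/sources.list
    # Confirm every source actually resolves and downloads indexes
    apt-get update
    # Inspect the tail of the cache-prep log for the real failure
    tail -n 50 /var/log/lxc-cache-prep-commands.log
    # Re-run the failing play once the sources look sane
    openstack-ansible setup-hosts.yml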
*** aludwar has joined #openstack-ansible08:10
*** markvoelker has joined #openstack-ansible08:16
*** gillesMo has joined #openstack-ansible08:22
*** priteau has joined #openstack-ansible08:38
*** kopecmartin|off is now known as kopecmartin08:45
*** tosky has joined #openstack-ansible08:53
*** shardy has joined #openstack-ansible09:01
fnpanic_i re-installed the host because there was a problem with sources.list being messed up09:02
fnpanic_now it looks fairly good :-)09:02
*** Emine has joined #openstack-ansible09:05
*** cshen has quit IRC09:21
*** rodolof has joined #openstack-ansible09:48
*** dcdamien has joined #openstack-ansible09:50
jrosserfnpanic_: you might consider using a VM of some sort for building AIOs09:51
jrosserbecause it is so quick to destroy/recreate if something goes wrong. i use vagrant/virtualbox but almost any approach you like will be fine09:51
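A minimal sketch of that disposable-VM workflow, assuming vagrant with the virtualbox provider and an Ubuntu 18.04 box (the sizing values are illustrative, not a recommendation from the channel):

    mkdir -p ~/osa-aio && cd ~/osa-aio
    cat > Vagrantfile <<'EOF'
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/bionic64"   # Ubuntu 18.04, matching the AIO above
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 8192                  # an AIO is memory-hungry; adjust to taste
        vb.cpus   = 4
      end
    end
    EOF
    vagrant up           # build the VM
    vagrant destroy -f   # throw it away and start clean when something breaks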
*** cshen has joined #openstack-ansible09:53
*** rodolof has quit IRC09:56
*** rodolof has joined #openstack-ansible09:56
*** cshen has quit IRC09:58
*** DanyC has joined #openstack-ansible10:03
*** gisak has joined #openstack-ansible10:21
odyssey4meredkrieg hmm, which series/tag/release is that for? unfortunately federation is not heavily used, and it's not tested, so it's a best effort thing and sometimes stuff breaks - is that rocky?10:21
odyssey4meredkrieg if you could report a bug for the two issues, we can get them fixed up10:22
odyssey4mejrosser I think we need to go ahead with https://review.openstack.org/#/c/625070/ to prevent any more merges which are actually broken10:23
*** cshen has joined #openstack-ansible10:24
*** DanyC has quit IRC10:24
*** DanyC has joined #openstack-ansible10:25
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-rabbitmq_server stable/rocky: upgrade: start service before applying policies  https://review.openstack.org/62520010:26
jrosserodyssey4me: do you know why it always returns 0?10:26
odyssey4mejrosser last I heard mnaser was following up with mtrenish, but I guess with season and all it's going to be hard to get answers right now10:27
odyssey4meI've got a star on it to follow it up again in the new year once arxcruz|next_yr is back.10:28
*** hamzaachi has joined #openstack-ansible10:28
jrosserso we need that patch to merge? i don't really understand whats going on myself though10:28
odyssey4mejrosser so, right now our tempest test runs always succeed - even if the tests run by tempest fail10:29
odyssey4mefor some reason there's always a 0 return code from tempest10:29
*** cshen has quit IRC10:29
jrosseryeah i saw the discussion yesterday10:29
odyssey4meso yeah, given that patch reverts the patch which was the most recent change and is likely the culprit - better to revert and regroup10:30
jrosserok. there is a lot broken generally with the tempest changes10:31
jrosserthere are a handful of roles that are totally broken10:32
odyssey4mejrosser yeah, I know about barbican and such - but those are the distro jobs broken because of the changes relating to that10:34
odyssey4meI'll revisit those - they either need a distro plugin package added, or need to be forced to use a source build10:34
arxcruz|next_yrodyssey4me: hey, what u need from me ? :)10:35
odyssey4mearxcruz|next_yr nothing that can't wait - have a good holiday!10:35
arxcruz|next_yrodyssey4me: it's freaking cold outside lol :)10:35
odyssey4mearxcruz|next_yr lol, totally agreed :10:36
*** markvoelker has quit IRC10:36
*** markvoelker has joined #openstack-ansible10:37
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_server stable/rocky: Increase Galera self-signed SSL CA expiration  https://review.openstack.org/62520110:38
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_server stable/queens: Increase Galera self-signed SSL CA expiration  https://review.openstack.org/62520210:38
*** markvoelker has quit IRC10:41
*** hamzaachi has quit IRC10:44
*** hamzaachi has joined #openstack-ansible10:44
odyssey4mejrosser I suspect that somehow with https://review.openstack.org/#/c/607814/5 - https://github.com/openstack/openstack-ansible-os_cinder/commit/2c3aea81b55e4057fb610ec274222af2ddad5adf is no longer in effect, causing cinder-volume to fail10:46
odyssey4meI can't see why - I'll fire up a VM now to test with.10:47
jrosseris this something slipped through a bad test that returned good?10:47
odyssey4meI think so, although it's a bit curious that the test passed earlier the same day - which is 2 days after the os_tempest merge.10:48
*** electrofelix has joined #openstack-ansible10:49
*** cshen has joined #openstack-ansible10:50
jrosserodyssey4me: is it jinja variable scoping?10:57
jrosserhttps://review.openstack.org/#/c/607814/5/tasks/cinder_install.yml line 61 vs 69 for example10:57
jrossermodifying service inside the for loop then using it outside10:58
openstackgerritDmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ceilometer stable/rocky: gnocchi_resources override fixed  https://review.openstack.org/62521310:58
jrosseroh no, it's all inside the loop11:00
jrosserbut i think thats where i'd start11:00
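For context, the scoping pitfall jrosser is describing: a {% set %} inside a Jinja2 for loop does not leak out of the loop, and namespace() is the usual escape hatch. A sketch with hypothetical names, not the actual os_cinder template:

    cat > scope-demo.j2 <<'EOF'
    {% set service = 'unset' %}
    {% for s in ['api', 'volume'] %}
    {%   set service = s %}          {# this rebinding is local to the loop #}
    {% endfor %}
    outside the loop: {{ service }}  {# still 'unset' #}

    {% set ns = namespace(service='unset') %}
    {% for s in ['api', 'volume'] %}
    {%   set ns.service = s %}       {# namespace attributes survive the loop #}
    {% endfor %}
    with namespace(): {{ ns.service }}  {# 'volume' #}
    EOF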
openstackgerritDmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ceilometer stable/queens: gnocchi_resources override fixed  https://review.openstack.org/62521511:07
*** markvoelker has joined #openstack-ansible11:16
odyssey4meI've got to go afk for a few hours - bbiab.11:17
*** cshen has quit IRC11:27
openstackgerritDmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ceilometer stable/rocky: gnocchi_resources override fixed  https://review.openstack.org/62521311:28
*** admin0 has joined #openstack-ansible11:30
admin0\o11:30
openstackgerritDmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ceilometer stable/rocky: gnocchi_resources override fixed  https://review.openstack.org/62521311:31
*** CeeMac has joined #openstack-ansible11:34
*** gary_perkins has quit IRC11:37
gisakhi guys11:45
gisaknever met this before: http://paste.openstack.org/show/737295/11:46
gisakits asking for /etc/ansible/hosts, but i never had such a file and wasnt asked for it before, now ansible wants it11:47
*** gary_perkins has joined #openstack-ansible11:50
admin0gisak, what command are you using ?11:54
*** cshen has joined #openstack-ansible11:58
gisakopenstack-ansible setup-hosts.yml12:02
*** cshen has quit IRC12:02
admin0could be an error (logical) in the openstack_user_config file as well .. like indentation etc12:07
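Two quick checks that usually catch this class of error (standard deployer paths; the dynamic inventory script location can vary slightly by branch, so treat it as an assumption):

    # Does the config even parse as YAML?
    python -c 'import yaml; yaml.safe_load(open("/etc/openstack_deploy/openstack_user_config.yml"))' \
        && echo "openstack_user_config.yml parses"
    # Does the OSA dynamic inventory generate from it?
    /opt/openstack-ansible/inventory/dynamic_inventory.py > /dev/null && echo "inventory generates OK"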
*** udesale has quit IRC12:11
gisakyeah, u're right, thank you very much )12:13
*** cshen has joined #openstack-ansible12:19
*** pcaruana has quit IRC12:21
*** pcaruana has joined #openstack-ansible12:22
*** pcaruana is now known as pcaruana|intw|12:25
*** hamzaachi has quit IRC12:33
*** rodolof has quit IRC12:40
*** rodolof has joined #openstack-ansible12:41
jamesdentonmornin12:59
*** ansmith has joined #openstack-ansible13:00
*** fnpanic_ has quit IRC13:07
*** dcdamien has quit IRC13:18
*** cshen has quit IRC13:32
*** pcaruana|intw| has quit IRC14:05
*** Emine has quit IRC14:16
openstackgerritkourosh vivan proposed openstack/openstack-ansible-os_tempest master: Add user and password for secure image download (optional)  https://review.openstack.org/62526614:17
*** udesale has joined #openstack-ansible14:27
mgariepyioni, are you around ?14:28
ionimgariepy, sure, what's up14:28
mgariepyso, i had to disable transparent_hugepage on 4.15 kernel14:28
mgariepywhen using pci passthrough the memory is allocated and pinned for the guest memory.14:29
mgariepyhttps://bugs.launchpad.net/ubuntu/+source/linux/+bug/180841214:30
openstackLaunchpad bug 1808412 in linux (Ubuntu) "4.15.0 memory allocation issue" [Undecided,Confirmed]14:30
ionicool14:31
*** dave-mccowan has joined #openstack-ansible14:31
*** pcaruana has joined #openstack-ansible14:31
mgariepywas fun haha :D14:31
mgariepyso unless you are doing stuff that requires qemu to alloc and pin all guest mem, you probably won't see the issue.14:32
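The runtime knob mgariepy is referring to (the sysfs paths are standard; to make it persistent across reboots, add transparent_hugepage=never to the kernel command line):

    # Disable transparent hugepages until the next reboot
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
    cat /sys/kernel/mm/transparent_hugepage/enabled   # shows [never] when disabled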
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_cinder master: Combine, rather than replace init overrides  https://review.openstack.org/62526714:33
odyssey4memnaser jrosser That fixes the cinder-volume service on centos again. :) https://review.openstack.org/62526714:33
odyssey4mecores, please review asap to unblock the master integrated build14:33
mgariepycan i have some review for this simple doc patch :) https://review.openstack.org/#/c/622978/14:36
odyssey4memgariepy done14:37
mgariepywonderful :D14:38
*** dave-mccowan has quit IRC14:43
jrosserodyssey4me: did you see the same patch in nova is slightly different, nothing is combined.... is that an oversight?14:46
gisakhey guys, what about nova_oslomsg_notify_host_group is undefined error during setup-openstack.yml ?14:47
gisakI have updated the /etc/ansible/roles/os_ceilometer/ folder from https://git.openstack.org/cgit/openstack/openstack-ansible-os_ceilometer/commit/?id=ec29ffad366ae899a93c8d6cab01ac64ffa6905914:48
*** ostackz has joined #openstack-ansible14:48
gisakbut still get the same error when running setup-openstack.yml playbook14:48
*** thuydang has quit IRC14:48
openstackgerritMerged openstack/openstack-ansible-os_neutron master: Add app-ovn.rst to index in documentation  https://review.openstack.org/62297814:48
fnpanichi14:50
fnpanicso aio works till the setup-infrastructure playbooks14:51
fnpanic-x14:51
*** Adri2000 has quit IRC14:51
fnpanicsitting behind a proxy i set the http_proxy and https_proxy14:52
fnpanici also set the pip_validate_certs: false galera_package_download_validate_certs: false14:52
fnpanicit is failiing at getting the keys for galera from keyserver14:53
fnpanicTASK [galera_client : Add keys (primary keyserver)]14:53
fnpanicand the alternate task14:53
fnpanicwhat do i need to do to fix this?14:54
fnpanicapart from getting rid of the proxy ;-)14:54
odyssey4mefnpanic fnpanic this is ubuntu, right?14:54
fnpanicyeah14:55
fnpanic18.0414:55
*** Adri2000 has joined #openstack-ansible14:55
odyssey4methat's these tasks: https://github.com/openstack/openstack-ansible-galera_client/blob/b13dba202174a098cfaf3a86e5ea5713acab70df/tasks/galera_client_install_apt.yml#L29-L6014:55
odyssey4meI guess something is required to make those work with a proxy - maybe jrosser can advise?14:55
odyssey4mefnpanic that said - did you have the proxy env vars set when you did bootstrap-aio ?14:56
fnpanicexactly this tasks14:56
fnpanicyes!14:56
odyssey4meok, so you should have a file /etc/openstack_deploy/user_variables_proxy.yml14:56
fnpanicthe http_proxy= and https_proxy= are set on boot14:57
odyssey4mefnpanic so is https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/files/user_variables_proxy.yml present as /etc/openstack_deploy/user_variables_proxy.yml ?14:57
fnpanicyes14:57
fnpanicis there14:58
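For anyone following along, roughly what that file contains (an abridged sketch; the canonical version is the bootstrap-host file odyssey4me linked above, and the proxy URL here is a placeholder):

    cat > /etc/openstack_deploy/user_variables_proxy.yml <<'EOF'
    ---
    proxy_env_url: "http://proxy.example.com:3128"
    no_proxy_env: "localhost,127.0.0.1,{{ internal_lb_vip_address }},{{ external_lb_vip_address }}"
    global_environment_variables:
      HTTP_PROXY: "{{ proxy_env_url }}"
      HTTPS_PROXY: "{{ proxy_env_url }}"
      NO_PROXY: "{{ no_proxy_env }}"
      http_proxy: "{{ proxy_env_url }}"
      https_proxy: "{{ proxy_env_url }}"
      no_proxy: "{{ no_proxy_env }}"
    EOF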
*** vnogin has joined #openstack-ansible14:58
odyssey4meok, and if you look at your hosts/containers - can you see that content in /etc/environment ?14:58
jrosserI can help14:58
fnpanicaio host yes14:58
jrosserBut later/next week sadly14:58
fnpanic?14:59
fnpanicso the proxy settings are correct14:59
odyssey4mefnpanic can you see /etc/environment has the proxy vars set in the galera_server container?14:59
*** weezS has joined #openstack-ansible14:59
jrosserfnpanic: if you get properly stuck give me a shout and I’ll try to replicate it14:59
fnpanicone moment15:00
fnpanicenviroment file in the container is there and looks correct15:01
fnpanicproxy is reachable from container and host15:01
fnpanicnow it is in the retry loop of this task TASK [galera_client : Add keys (primary keyserver)]  3 retires left15:02
odyssey4melooks like this may be a known bug: https://github.com/ansible/ansible/issues/3169115:04
jrosserodyssey4me: I mirror so wouldn’t see that15:04
*** ivve has joined #openstack-ansible15:05
ostackzHi, struggling to understand why my VMs do not receive IP. Seems that my vxlan does not work.15:05
ostackzDocs say: br-vxlan should contain veth-pair ends from required LXC containers and a physical interface or tagged-subinterface. My output is https://pastebin.com/raw/d7BxBHEP15:05
ostackz"brctl show br-vxlan" - I see only physical interface in bridge, does anyone see in working infra node that neutron container is also bridged on br-vxlan?15:05
*** markvoelker has quit IRC15:06
fnpanicdamn15:06
odyssey4mefnpanic that's ok - you can apply an override to work around it - something like this15:07
fnpanictell me and i will give it a try :-)15:09
odyssey4meyep, just putting it together15:09
fnpanicwhat is also strange is that the retires seem to take forever to timeout15:09
fnpanicdelay says 2 but this is way longer than two15:10
fnpanicodyssey4me: Thanks!15:10
fnpanicneed coffee brb15:10
ostackzcan anyone share "brctl show br-vxlan" from a working infra node? Trying to understand what the bridge members should be. At least, how many bridge members are there? >1?15:13
gisakguys any hints regarding "nova_oslomsg_notify_host_group is undefined" error ?15:14
fnpanicostackz: mom15:14
odyssey4mefnpanic I think I'm just going to push up a patch to vendor that key in - this is a common issue... hold a few minutes while I get that prepped15:14
fnpanichttp://paste.openstack.org/show/737311/15:15
fnpanicthanks!15:16
fnpanicostackz: look at the past15:17
odyssey4megisak I dunno if noonedeadpunk is around, but he's probably the guy to help you.15:17
ostackzfnpanic: thanks, now I see that I am lacking the neutron interface in the bridge. And in fact that container does not even have eth10 at all.15:17
redkriegodyssey4me: it's a stable/rocky checkout from a couple weeks back.  I'll submit a bug15:17
noonedeadpunkgisak: It's in ceilometer?15:17
gisakyes15:18
fnpanicmaybe you have a mistake in openstack_user_config.yml15:18
fnpanici guess provider_networks: section ;-)15:18
fnpanicostackz: have you looked at the production examples?15:19
noonedeadpunkgisak: I think, that in 18.1.0 it should be already fixed.15:19
noonedeadpunkWhat version of OSA are you running?15:19
ostackzfnpanic I have redeployed my openstack from same config files as before, but now vxlan does not work. It did before15:20
gisak2.5.1015:20
fnpanicostackz: ok this is strange15:21
noonedeadpunkgisak: just make sure, that you have the following in /etc/ansible/roles/os_ceilometer/defaults/main.yml : https://github.com/openstack/openstack-ansible-os_ceilometer/blob/stable/rocky/defaults/main.yml#L90-L9715:21
fnpanicnothing changed in the infra? Why have you redeployed it?15:21
ostackzit was upgraded several times, then I tried Rocky before time and afterwards went back to queens15:22
fnpanicah, have you reinstalled the base os?15:23
ostackzpike-queens upgrade did not remove unneeded containers as it was supposed to, but that is old story :)15:23
fnpanichave you reinstalled the deployment host?15:23
ostackzI did reinstall OS15:23
gisakthanks, indeed #Nova notification was missing15:23
ostackzok, thanks for bridge member sharing, need to go now, will dig into vxlan later.15:24
*** dcapone2004_ has joined #openstack-ansible15:25
*** dcapone2004_ has quit IRC15:26
*** nurdie has joined #openstack-ansible15:28
fnpanickk15:30
*** hamzaachi has joined #openstack-ansible15:30
redkriegodyssey4me: here's my bug report, please let me know if you need any additional info: https://bugs.launchpad.net/openstack-ansible/+bug/180854315:31
openstackLaunchpad bug 1808543 in openstack-ansible "Keystone Federation cannot complete SP node setup on stable/rocky" [Undecided,New]15:31
noonedeadpunkgisak: probably, not only nova is missing, so you probably should check it before runnning role again15:31
*** hamzaachi has quit IRC15:32
*** hamzaachi has joined #openstack-ansible15:32
*** kopecmartin is now known as kopecmartin|off15:32
*** ivve has quit IRC15:34
*** hamzaachi has quit IRC15:48
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys  https://review.openstack.org/62529115:48
*** hamzaachi has joined #openstack-ansible15:48
odyssey4mefnpanic I dunno if that will pick cleanly to rocky - but try that15:48
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys  https://review.openstack.org/62529115:50
fnpanicodyssey4me: so looking at this i guess you just prep downloaded the GPG keys for galera in this patch right?15:58
fnpanici will patch it in manually into rocky and give it a try15:58
odyssey4mefnpanic you should be able to cherry-pick it into /etc/ansible/roles/galera_client15:58
odyssey4mefrom that url, use the 'download' drop-down on the top right, choose the 'copy to clipboard' icon next to the cherry-pick option, then in your VM change into /etc/ansible/roles/galera_client and paste that command15:59
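The copied command ends up looking something like this (the refs/changes path and patchset number are illustrative; take the exact string from the review page's download drop-down):

    cd /etc/ansible/roles/galera_client
    git fetch https://review.openstack.org/openstack/openstack-ansible-galera_client \
        refs/changes/91/625291/2
    git cherry-pick FETCH_HEAD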
openstackgerritAndy Smith proposed openstack/openstack-ansible master: Add qdrouterd role for rpc messaging backend deployment  https://review.openstack.org/62418416:00
odyssey4meif that doesn't work you can grab the patch file as an archive/zip and extract it there and apply it16:00
odyssey4meeffectively that removes the use of the proxy at all, because the files are in the git tree, copied over and imported16:03
fnpanicodyssey4me: cherry-picked and i will try it now and report back16:07
odyssey4mefnpanic excellent, thanks16:09
fnpanicodyssey4me: nothing to thank here, i need to thank you :-)16:10
odyssey4mefnpanic feedback on work done is just as important as the work being put together16:10
fnpanicbut proxies suck anyway16:10
fnpanicif you get anything from me it is feedback ;-)16:11
odyssey4methey make life a little more complicated sometimes :)16:11
fnpanicso far it looks farily good :-)16:11
fnpanicbut lets wait till the playbook finishes16:11
odyssey4mefnpanic that role is only used in two places - the utility container, and the galera_server container - so if you're past those, then it's all good16:12
fnpanicit is already installing galera in the containers16:13
odyssey4meif that patch is working for you, please submit your +1 on the patch with a comment that you tested it and it works for you :)16:13
fnpanicok16:13
fnpanicwill it make it into rocky or not worth backporting?16:13
odyssey4meit helps to have that for anyone else who looks at the patch to vote on whether it should merge or not16:14
odyssey4meoh yes, I'll port it back once it merges to master16:14
odyssey4meit's a bit odd that this was done for galera_server and not galera_client, but I'm guessing mnaser forgot about that one :p16:14
fnpanicnow it is here....16:16
fnpanicTASK [rabbitmq_server : Add rabbitmq apt-keys]16:16
fnpanici guess this will have the same issue16:16
fnpanicno retry yet but i gues this will happen shortly16:16
fnpanici will comment the galera client fix16:17
*** gisak has quit IRC16:18
odyssey4meok, but we have a precedent for that now - so I can implement the same thing for the rabbitmq_server role to get that sorted out easily :)16:19
fnpanicgreat16:19
jrosserodyssey4me: check this out https://review.openstack.org/#/c/625269/16:19
*** pcaruana has quit IRC16:20
fnpanici am happy to test16:20
odyssey4mejrosser interesting, although I think we use systemd mounts now and don't bother with losetup16:20
odyssey4meI wonder if something similar is possible there.16:20
odyssey4mefnpanic ok, let me put a patch together for that too :)16:21
FrankZhangodyssey4me: hey man, I was working on os_barbican recently and found out the policy file is pretty old and purely statically templated. There's no substitution in it. Is it still worth keeping? We tried working without the policy, and nothing changed. The default policy is enough so far. https://github.com/openstack/openstack-ansible-os_barbican/blob/master/templates/policy.json.j216:21
openstackgerritMerged openstack/openstack-ansible-os_cinder master: Combine, rather than replace init overrides  https://review.openstack.org/62526716:21
odyssey4mefnpanic apologies for this, thanks for your patience16:22
*** spatel has joined #openstack-ansible16:22
odyssey4meFrankZhang I guess that if policy-in-code is done for barbican, then that policy should be removed and something like the implementation in os_keystone should be done16:22
*** ivve has joined #openstack-ansible16:24
spateljamesdenton: ^^16:24
jamesdenton?16:25
spatelI am seeing these error mesg frequently16:25
spatelostack-compute-sriov-01 nova-compute:2018-12-14 11:22:22.161 40288 WARNING nova.pci.utils [req-0d87b5e4-6ece-4beb-880c-51c7c5835a66 - - - - -] No net device was found for VF 0000:03:09.0: PciDeviceNotFoundById: PCI device 0000:03:09.0 not found16:25
spateleverything working fine so far 2 SR-IOV instance also running on this compute node..16:25
spatelany idea what is this WARNING for?16:25
jamesdentoni think i've seen this before, and it was just cosmetic. But i don't recall the details16:26
FrankZhangodyssey4me: we tested queens and rocky os_barbican, and all of them failed releasing a secret for another service. The scenario we worked on is an Octavia TLS-terminated loadbalancer which asked barbican for a secret. This was verified on devstack and other forms of openstack, but not OSA. I'm guessing os_barbican has some problem in its wrap or configs. Does anyone have good knowledge of os_barbican?16:26
spateljamesdenton: i wonder if its related to PciPassthroughFilter16:27
odyssey4meFrankZhang I don't have the first clue about how it works. It'd be nice if it could get some attention from people who do.16:27
spateli am seeing this error popping up every minute16:27
fnpanicodyssey4me: i will be afk for some time and will look at another computer, can you send the review to me? name: panic!16:28
fnpanicthanks16:28
odyssey4meoh bother, https://github.com/openstack/neutron/commit/7bb0b841511ead6fc58bdfe2a378801576c68f85 merged - so now our neutron role needs fixing16:28
odyssey4meI guess it's time for policy-in-code changes to be implemented across the board.16:29
*** electrofelix has quit IRC16:29
*** gyee has joined #openstack-ansible16:37
*** rodolof has quit IRC16:38
*** rodolof has joined #openstack-ansible16:39
*** hamzaachi has quit IRC16:41
*** hamzaachi has joined #openstack-ansible16:41
*** rodolof has quit IRC16:45
*** rodolof has joined #openstack-ansible16:45
openstackgerritMichael Johnson proposed openstack/openstack-ansible stable/rocky: Update Octavia to latest stable/rocky SHA  https://review.openstack.org/62530616:47
openstackgerritMichael Johnson proposed openstack/openstack-ansible stable/queens: Update Octavia to latest stable/queens SHA  https://review.openstack.org/62530716:50
*** vnogin has quit IRC16:52
openstackgerritMichael Johnson proposed openstack/openstack-ansible stable/pike: Update Octavia to latest stable/pike SHA  https://review.openstack.org/62530916:52
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-rabbitmq_server master: Use in-repo GPG keys  https://review.openstack.org/62531216:53
*** tosky has quit IRC16:56
*** shardy is now known as shardy_mtg16:58
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-rabbitmq_server master: Use in-repo GPG keys  https://review.openstack.org/62531216:58
odyssey4meif any cores are around, it'd be good to get https://review.openstack.org/625291 in so that the client and server mechanisms match and it works online/offline and through a proxy17:01
*** udesale has quit IRC17:07
*** markvoelker has joined #openstack-ansible17:31
jrosserodyssey4me: left you a comment there17:32
*** markvoelker has quit IRC17:35
*** Emine has joined #openstack-ansible17:36
*** macza has joined #openstack-ansible17:45
*** macza has quit IRC17:45
odyssey4mejrosser I would think so, except the galera_server patch did not have a reno or care about the previous implementation... but yeah, I guess I can fix them and reno them both17:56
spatelfolks if i delete flavor does that delete or impact any running instance on that flavor?17:57
jrosserif it respected the apt_key fields that were used previously it wouldnt need a reno17:57
odyssey4meit's probably actually simpler then to do what logan- suggested - just have a dict and pass it in, leaving the greatest flexibility17:57
spateli don't think but just want to confirm17:57
jrosserand existing overrides would just carry on working like before17:57
jrosseri think logan- and I are meaning the same thing17:57
odyssey4mewell, sort of17:58
odyssey4mebut yeah, let me just make it backwards compatible - and fix galera_server17:59
*** gillesMo has quit IRC18:00
odyssey4meactually, I don't think logan's mechanism will work because we have extra options in there - and I think that will flunk out18:05
*** shardy_mtg has quit IRC18:08
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys  https://review.openstack.org/62529118:20
*** rodolof has quit IRC18:27
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys  https://review.openstack.org/62529118:30
odyssey4melogan- jrosser I think that https://review.openstack.org/625291 is the best way forward, assuming that it works. :)18:30
*** aludwar has quit IRC18:32
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys  https://review.openstack.org/62529118:32
openstackgerritAndy Smith proposed openstack/openstack-ansible master: Add qdrouterd role for rpc messaging backend deployment  https://review.openstack.org/62418418:32
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys  https://review.openstack.org/62529118:34
*** aludwar has joined #openstack-ansible18:36
*** chandan_kumar has quit IRC18:46
*** spatel has quit IRC18:53
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/queens: Add automated migration of neutron agents to bare metal  https://review.openstack.org/62533118:59
*** fnpanic_ has joined #openstack-ansible19:03
fnpanic_hi19:05
fnpanic_i have seen the changes on the apt_key for galera_client19:06
fnpanic_anything i can test for rabbitmq yet?19:06
odyssey4mefnpanic_ I've pushed up a patch which should be usable for testing, although it may change like the galera_client one: https://review.openstack.org/62531219:07
nurdieodyssey4me: Update on my issue with trying to upgrade an OSA CentOS controller cluster with YUM. We couldn't get the OSA scripts for 16.0.23 to work. So many dependency errors and pip failures, etc. I ended up figuring out that the repo container had the 16.0.23 services tarballs, and I created bash scripts to go through all of the containers, pull their respective tarballs, unzip, and edit the systemd unit files to use the new venvs19:22
nurdieWe have a working OS cluster again!19:22
odyssey4menurdie ah, awesome - now the road to upgrades :)19:23
fnpanic_odyssey4me: thanks! can you send me the link19:23
nurdieodyssey4me: Yes, that's Pike. So I went with your recommendation and continued with the upgrade from Ocata to Pike with those tarballs. Thanks for all of your input the other night. It helped a lot19:24
odyssey4mefnpanic_ I did. ;)19:25
odyssey4menurdie excellent - time to get up to queens, then rocky :)19:25
fnpanic_yeah19:25
fnpanic_sorry ;-)19:25
nurdieodyssey4me: Yep! We are already preparing for that. Considering moving to an all-in-one setup with controllers on metal so that we have to rely less on OSA19:26
*** Emine has quit IRC19:26
odyssey4menurdie if your env is small and simple, and you like editing files by hand - sure :)19:26
nurdieodyssey4me: It is pretty small. Only 3 controllers and a few compute nodes, and I ended up editing a bunch of configs by hand anyways because we couldn't figure out some of the OSA pip/python venvs errors19:28
*** nurdie has quit IRC19:30
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys  https://review.openstack.org/62529119:36
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys  https://review.openstack.org/62529119:36
*** nurdie has joined #openstack-ansible19:37
fnpanic_i have cherry-picked the patch and will give it a try now19:38
*** nurdie has quit IRC19:41
fnpanic_odyssey4me: looks very good so far :-)19:47
fnpanic_odyssey4me: in your patchset the files are in files/gpg19:49
fnpanic_the problem is that the path looks only in gpg19:49
fnpanic_i changed this and it works :-)19:49
odyssey4mefnpanic_ hmm, that's odd - it should prefix it with 'files/' automatically19:50
odyssey4mecan you add that as a comment in-line in the review please?19:50
fnpanic_will do19:53
fnpanic_so setup-infra works now flawlessly behind a procx19:53
fnpanic_proxy19:53
fnpanic_let's see what setup-openstack does19:54
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/queens: Add automated migration of neutron agents to bare metal  https://review.openstack.org/62533119:54
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/queens: Add automated migration of neutron agents to bare metal  https://review.openstack.org/62533119:55
fnpanic_mhhh TASK [os_keystone : Ensure newest key is used for credential in Keystone]19:58
fnpanic_fails19:58
fnpanic_http://paste.openstack.org/show/737335/19:58
fnpanic_the error message says nothing....19:58
odyssey4meI'm out for the night - time to go offline. I may pop on tomorrow again.19:59
jamesdentonsee ya odyssey4me19:59
fnpanic_see you20:00
fnpanic_any idea or is keystone broken in aio/rocky?20:01
fnpanic_this does not look like a proxy problem right?20:01
*** macza has joined #openstack-ansible20:01
jrosserit is possibly more like a no_proxy problem20:03
fnpanic_i was looking into this - you are reading my mind :-)20:04
fnpanic_proxies absolutely suck20:04
fnpanic_sorry for being offensive20:04
jrosserdoes no_proxy look sensible?20:04
fnpanic_yes it does20:04
fnpanic_checking the container now20:05
fnpanic_host is as it should20:05
fnpanic_no_proxy="localhost,127.0.0.1,172.29.236.100,10.0.243.190,172.29.238.62,172.29.237.51,172.29.236.253,172.29.238.29,172.29.239.67,172.29.239.182,172.29.236.238,172.29.239.228,172.29.236.143,172.29.239.15,172.29.238.110,172.29.238.167"20:05
fnpanic_same in the container20:05
fnpanic_:-(20:06
logan-there should be a log file in /var/log/keystone/keystone-manage.log that has more detail iirc20:07
fnpanic_that is the lan ip 10.0.243.19020:07
fnpanic_ok20:07
fnpanic_mhhh20:08
fnpanic_no such logfile, not in the keystone container, not in the rsyslog container, not on the host20:09
jrosserdoes your no_proxy env var actually have those quotes?20:10
jrosserenv | grep no_proxy       <- does that show it like you showed above20:10
fnpanic_it came from export20:10
jrosserah ok20:10
fnpanic_:-)20:10
jrosserso, what i'd do next is run the same command that the playbook did by hand20:11
fnpanic_makes totally sense20:11
jrosserbut strace <command> and then you'll see in vast detail what happened20:11
jrosserand buried in there will be the url it tried20:11
jrosseror you can perhaps try something simpler first20:12
jrosserwhich would be to use curl/wget to test the keystone endpoint on your loadbalancer20:12
jrosserjust see that it returns anything from the POV of the container20:12
jrosserfnpanic: actually that test of the LB is really important - please try that first20:15
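Two quick checks from inside the keystone container; the VIP below is the AIO default (172.29.236.100 is the first internal address in fnpanic_'s no_proxy list above), and the scheme/port are assumptions - flip to https if your internal VIP is TLS-terminated:

    curl -skI http://172.29.236.100:5000/v3 | head -n1   # keystone via haproxy on the internal VIP
    env | grep -i _proxy                                 # confirm the VIP is covered by no_proxy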
*** noonedeadpunk has quit IRC20:15
fnpanic_ok20:16
fnpanic_hatop says that keystone is down for service and admin20:19
fnpanic_infra services are fine like galera20:19
fnpanic_galera, rabbit and repo are healthy20:20
admin0quick quesiton .. is there a proxy or something that will allow me to cache the bootstrap files so that its faster in my network20:20
fnpanic_all openstack services are not yet ready because not setup :-)20:20
fnpanic_admin0: we use squid for all traffic and it caches the files20:21
fnpanic_but it introduces other problems :-(20:22
jrosseri mirror all the requried apt repos20:22
jrosserwith debmirror  - there are lots of choices here20:22
admin0i think squid way is more transparent :)20:22
jrosserand it's all behind squid too, just for added fun20:22
admin0:)20:22
fnpanic_;-)20:23
fnpanic_jrosser: so what to check next?20:23
jrosserperhaps try the keystone-manage command by hand from the container, with the verbose flag20:27
fnpanic_how can i find out which command was executed easily?20:28
jrosserhttps://github.com/openstack/openstack-ansible-os_keystone/blob/master/tasks/keystone_credential_create.yml#L85-L8920:29
jrosseryou can re-run just the keystone playbook with -vvv to see more debug, that would probably show you20:29
fnpanic_ok20:30
jrossertake a quick look in playbooks/setup-openstack.yml to see how it's all organised20:30
fnpanic_ok20:31
fnpanic_d20:37
*** priteau has quit IRC20:37
fnpanic_# /openstack/venvs/keystone-18.1.1/bin/keystone-manage -d credential_migrate --keystone-user "keystone" --keystone-group "keystone"20:37
fnpanic_this one does no output but exit code is 120:38
jrosseradd --logfile /tmp/log.txt20:38
jrosseralso on your keystone container can you try 'wget <internal_vip>:8181' and see if that works20:41
fnpanic_i added it, but got: keystone-manage: error: unrecognized arguments: --logfile /tmp/log.txt20:42
fnpanic_the docs of keystone-manage say this is correct....20:42
fnpanic_strange20:42
jrosserit probably wants to be the first option20:43
fnpanic_wget works flawlessly20:44
fnpanic_you are right20:45
fnpanic_first option works....20:45
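For the record, the ordering that works: keystone-manage takes its oslo.log options before the subcommand. A sketch of the corrected invocation (--log-file is the canonical spelling of the flag; --logfile is an accepted alias):

    /openstack/venvs/keystone-18.1.1/bin/keystone-manage \
        --debug --log-file /tmp/log.txt \
        credential_migrate --keystone-user keystone --keystone-group keystone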
jrosseri'm really hoping there is something useful in the log, otherwise i'm running out of ideas20:46
fnpanic_http://paste.openstack.org/show/737338/20:47
fnpanic_maybe you have an idea20:47
fnpanic_access denied....20:47
fnpanic_looks like it cannot connect to the db20:48
admin0is there any option in squid to make it cache downloads from https:// as well ?20:55
admin0or care to share some config20:55
admin0found it :)20:56
admin0ignore me :)20:56
fnpanic_;-)20:56
*** weezS has quit IRC20:58
openstackgerritAndy Smith proposed openstack/openstack-ansible master: Add qdrouterd role for rpc messaging backend deployment  https://review.openstack.org/62418420:59
fnpanic_i re-ran the setup-infra and in hatop the db is up21:01
jrossermaybe something to do with the earlier errors with apt keys21:02
fnpanic_mhhh but  everything went as expected21:03
fnpanic_so i think it looks fairly good from what i can see, db and rabbit are online21:04
*** chandan_kumar has joined #openstack-ansible21:13
fnpanic_so i guess no one ever installed OSA with a proxy :-)21:13
jrossertheres a few of us21:16
fnpanic_that gives me hope21:17
fnpanic_so what am i doing wrong :-)21:17
jrosseri have all sorts, squid proxy, deb mirrors, pip mirror, ssh bastion between deploy host and cloud and so on21:17
jrosserbut that doesnt all happen on day 1 with no effort21:18
fnpanic_we have "just" a squid here21:18
fnpanic_which does http and https21:18
fnpanic_that's it21:18
jrosserhowever, i put the patch in to add user_variables_proxy.yml and it *should* work21:18
jrosserif it doesnt, something needs fixing21:19
fnpanic_which patch?21:19
fnpanic_can i cherry-pick it?21:19
jrosserno - the code that picked up your env vars for proxies and auto-configured that in the AIO setup21:20
fnpanic_ah ok21:20
fnpanic_but this looks ok for me21:20
jrosserfrom what you are saying things are working now?21:21
fnpanic_no same error on keystone install21:21
fnpanic_http://paste.openstack.org/show/737338/21:21
*** DanyC has quit IRC21:22
fnpanic_btw does it make sense to split user_variables.yml into user_variables_nova.yml for example or does this pose problems?21:29
jrosserthat works21:30
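That works because the dynamic inventory picks up every file in /etc/openstack_deploy matching user_*.yml (loaded in alphabetical order, later files overriding earlier ones), so a per-service split is just more files:

    ls /etc/openstack_deploy/user_*.yml
    # e.g. user_secrets.yml  user_variables.yml  user_variables_nova.yml  user_variables_proxy.yml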
fnpanic_:-)21:30
jrosseri'd think about at what point you start from scratch again21:32
fnpanic_?21:33
jrosserit's really useful to be able to treat AIO as disposable so you can be sure that theres no left over cruft from stuff that went wrong early on21:33
jrosserso working in a VM is a good approach21:33
fnpanic_it is a kvm vm on synnefo/ganeti21:33
admin0fnpanic_, care to share your squid conf ?21:34
admin0i am trying to go down this route as my friday night project21:34
fnpanic_need to look at this; it is a squid cluster, haproxy and 4 squids at the office21:35
fnpanic_i guess the conf is in our puppet git21:35
admin0 are you making the servers ignore ssl certs ?21:35
fnpanic_no21:35
fnpanic_i will try a vagrant vm with a local proxy....21:39
fnpanic_will the apt_key changes be in master or also rocky?21:40
jrosseri would expect that to get backported quite a long way - ansible seems to have had trouble with this for a long time21:42
jrosserso certainly master & rocky21:42
fnpanic_that sounds great21:43
fnpanic_how can i know when it is in rocky?21:43
jrosserif you could verify the latest version that went up that would be great21:44
fnpanic_ok i will do21:44
jrosseronce it is reviewed and merges into master a new review is created by cherry picking the patch onto the stable/rocky branch21:44
fnpanic_got it21:45
fnpanic_you are talking about 62531221:45
fnpanic_right?21:45
jrosseryes21:45
fnpanic_k21:46
jrosseroh hold on - there are two arent there21:46
jrosserthere is a new version of https://review.openstack.org/#/c/625291/821:47
fnpanic_on 625312 i added a comment21:47
fnpanic_the files prefix is missing in the vars/ubuntu.yml21:48
fnpanic_the other one i need to test21:48
jrosserif you think it doesnt work as it stands please put a -121:49
fnpanic_ok21:50
jrosserright i'm done for today - i will try an AIO behind proxy on monday21:51
*** dcapone2004_ has joined #openstack-ansible21:51
dcapone2004_is anyone around to answer a couple newbie questions regarding openstack ansible?21:52
fnpanic_ok thanks21:52
fnpanic_have a great weekend21:52
fnpanic_dcapone2004_:21:53
fnpanic_hi21:53
fnpanic_maybe i can21:53
fnpanic_maybe21:53
dcapone2004_cool21:53
openstackgerritNicolas Bock proposed openstack/openstack-ansible master: Increase CentOS test coverage  https://review.openstack.org/61031121:53
*** rodolof has joined #openstack-ansible21:54
dcapone2004_basically I am trying to deploy a small test environment and I am having difficulty determining what interfaces to map to what bridges as I cannot determine from the documentation what interface is designed to supply external IP addresses / floating ip addresses21:54
fnpanic_on a usual setup you have a few bridges21:55
dcapone2004_it is also possible that my network cabling method might not work with openstack-ansible, but essentially, I have 4 physical hosts: a deployment host, an infrastructure host, a compute host, and a storage host21:55
fnpanic_br-vlan will be the provider network21:55
fnpanic_and there you will have floating ips21:56
fnpanic_this sounds ok21:56
dcapone2004_all hosts have a GigE connection to external network switch and are assigned public IPs, I have bridged this to br-mgmt21:56
fnpanic_have you looked at the production config example and walked trough the deployment guide?21:56
fnpanic_ok21:56
dcapone2004_compute host has a direct 10G connection to the storage host (no switch)21:56
fnpanic_this sounds special21:57
dcapone2004_compute host also has a 1GB direct connection to Infra host as I thought this link was needed for Neutron to function, but I'm thinking this is completely unnecessary21:57
fnpanic_not sure if this works21:57
fnpanic_cinder also needs to play nice in here21:57
fnpanic_first of all take a look here21:59
fnpanic_https://docs.openstack.org/project-deploy-guide/openstack-ansible/rocky/targethosts.html21:59
fnpanic_configure the network section till the end of the page21:59
fnpanic_this makes networking more clear21:59
fnpanic_and21:59
fnpanic_this one22:00
fnpanic_https://docs.openstack.org/openstack-ansible/rocky/user/prod/example.html22:00
dcapone2004_yeah i read that a few times, I think what confuses me is where the external IPs come in and how I can assign the same physical interface to both br-mgmt and br-vlan (because I understood, and you seem to have confirmed, that the floating ips come from the flat/vlan networks)22:01
fnpanic_there you can see under Host network configuration and Environment layout which nodes need to talk to each other22:01
fnpanic_yes22:02
dcapone2004_yeah that is what I used as a rference for my cabling, compute needs to talk to everything (hence, the external switch connection, a direct connection to infra, and a direct connect to storage)22:02
fnpanic_br-mgmt is internal communication and managment22:02
dcapone2004_storage only needs to talk to compute and mgmt, hence the direct connect to compute and the external switch connection to get to mgmt22:03
fnpanic_computes can also have provider bridges, so if you put an instance on a provider network without a floating ip it needs this one22:03
fnpanic_if you use a router it needs to connect to the network node22:03
dcapone2004_yeah which in this small test deployment is the same as the infra node from what i can tell in the documentation22:04
*** priteau has joined #openstack-ansible22:05
*** priteau has quit IRC22:05
dcapone2004_so what it seems like is I am an interface/subnet short, basically unlike an aio deploy where the mgmt and floating networks are (or at least can be since I have done it) the same, with this type of deployment, I should add a separate vlan, use a small subnet to use for management and then use what I have currently assigned to mgmt and assign it to br-vlan22:07
dcapone2004_that is probably the quickest way to get me to a deployment, but I feel like if an aio can share the same subnet for mgmt and floating ips, there must be a way to configure it for this deployment to be the same22:08
jrosserdcapone2004_: on the examples https://docs.openstack.org/openstack-ansible/rocky/user/prod/example.html the external network (for floating ip etc) is assumed to be a "flat" network on br-vlan. this means it is the untagged/native traffic on that bridge22:13
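A sketch of the matching provider_networks entry (written to a scratch file here so you can merge it by hand under the provider_networks: list in openstack_user_config.yml; the values mirror the linked example docs, and interface names are assumptions):

    cat > /tmp/flat-provider-net.yml <<'EOF'
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    EOF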
fnpanic_i guess so22:13
fnpanic_i am not aware how22:13
dcapone2004_jrosser, yes that is the plan, but for the moment, this environment is deployed offsite, so my mgmt network is public IPs, and presently the mgmt network i am using is essentially the same "flat" network22:14
dcapone2004_that is where I am running into the issue where i think my addressing is not supported22:15
admin0dcapone2004_  .. how do you plan to use the floating ips/network  ? is it on vlan or flat ?22:15
admin0i see .. vlan ..22:16
admin0so its easy ..22:16
jrosserhaving the mgmt network on public IP with no firewall or anything is not a great plan22:16
jrosserit's assumed that it is a private network22:17
fnpanic_:-)22:18
fnpanic_btw in the production example it can be a flat or tagged vlan right?22:18
dcapone2004_I am aware of that issue long-term and for production22:18
fnpanic_that depends on what you create in neutron22:18
*** rodolof has quit IRC22:18
dcapone2004_for now I have just blocked the management IPs used on the hosts to not allow any traffic except from our office22:19
dcapone2004_so I can essentially remotely manage it22:19
admin0dcapone2004_, you already have enough infra to not do an aio but do a proper install22:19
admin0dcapone2004_, maybe this will help.. https://www.openstackfaq.com/openstack-dev-server-setup-ubuntu/  --22:20
admin0so basically you can have even a single interface or multiple, it all works via vlans22:20
dcapone2004_yep, but I am essentially trying to learn openstack-ansible for deployment, I have used packstack in the past and I am trying to graduate to a better deployment tool22:20
dcapone2004_our production plan is for a ceph cluster for storage, 3 infra nodes, and 2 compute nodes, but need to take some baby steps to understand openstack-ansible much better first22:21
fnpanic_then go for the production deployment guide22:21
jrosserhave you got control of they switch & creation of vlans etc?22:22
dcapone2004_I also get the vlan thing, I think I just need to use a different subnet/vlan for the mgmt and call it a day because that solves my problem, it just wasn't necessarily my plan22:23
dcapone2004_yes22:23
jrossermake the public ip just for ssh into your boxes22:23
jrosserput mgmt net on another vlan and it's all then just like the prod example22:23
dcapone2004_essentially, I was looking to minimize subnet usage, so I was looking to assign a simple /24 for mgmt and "vlan", use the first 5-6 IPs of the subnet for mgmt, block those IPs at the firewall level to all traffic except our office IPs for administration22:24
jrosseryou need dozens of IPs on the mgmt net22:24
jrosserbecause each container on each host needs one22:24
admin0dcapone2004_, what you can do is this.     add 1 ip to the router .. and then NAT it to your external VIP .. that way, you can access it via 1 IP .. then the real IPs goes only into your floating ip range22:25
admin0rest = all private22:25
dcapone2004_here is where I am having the issue....how do I remotely reach the physical hosts for management if the mgmt network is private IPs? Is the design goal/expectation for it to be managed via VPN?22:27
fnpanic_quick question, is it sufficient if only the deployment host has internet access?22:27
admin0dcapone2004_, you need 1 server on public22:27
admin0then use that as a jumphost22:27
dcapone2004_yep, so that can be the deployment server which makes the most sense22:27
admin0yes22:28
fnpanic_or do all hosts and the containers need internet access? do they not use the repo host for downloading packages and so on?22:28
admin0dcapone2004_, that is normally how its done as well22:29
*** ivve has quit IRC22:29
dcapone2004_and I am guessing the suggestion would be that the public IP used for that external access be on a different subnet than the intended floating ip range?22:30
admin0not necessarily .. :)22:30
admin0the floating IP you get is based on vlans and the dhcp range you specify22:30
dcapone2004_fnpanic, I think they all need internet, but the intent would be that your mgmt network would have NAT/PAT going on for the servers to access the internet, but not allow access in22:31
admin0so its possible that you have the same range IP in jumphost, and you also use parts of it as floating, on the same vlan22:31
admin0thus having one subnet only, but effectively restricting those IPs based on how you add the range22:31
admin0you can say i have /24, and .1 as router .. so you reserve say 32 ips in the front for future use .. and then in openstack, add the same network and gateway, but in dhcp, give only the .33-.250 range22:32
dcapone2004_ok, that makes sense and I knew that actually, I just forgot I would now have 2 VLANs, so I wouldn't have an issue mapping the 2 different subnets on the deployment host22:33
admin0dcapone2004_, how many network cards are there in each server ? and how many VLANs do you  have, and what is the vlanID of the public range ?22:33
fnpanic_dcapone2004_: yeah this is how i set it up22:33
fnpanic_then i need to get the proxy to work22:33
admin0my squid is not saving files :(22:34
dcapone2004_my brain cramp was 2 interfaces/bridges using the same subnet because I was "merging" the mgmt and floating ip subnets, not realizing the IP demands that the mgmt network had22:34
admin0dcapone2004_, how many interfaces do you have ?22:34
admin0you will have 4 bridges .. but how many physical interface ?22:34
admin02 or 122:34
admin0in each of the server22:34
dcapone2004_I have plenty of NICs, so I can tehcnically have up to 6 if I ever needed the bandwidth, but right now, I have it connected like this:22:35
dcapone2004_all hosts have a GigE connection to external network switch and are assigned public IPs, I have bridged this to br-mgmt22:35
dcapone2004_compute host has a direct 10G connection to the storage host (no switch)22:35
dcapone2004_compute host also has a 1GB direct connection to Infra host22:36
admin0so eth0 of all = public/management ip ..22:37
dcapone2004_basically, I need to trunk the port going to the external switch on the deployment host, 1 vlan for mgmt, 1 for external/floating range for remote access to the environment22:37
admin0if you already have a public IP, why do you need a remote access range?22:37
dcapone2004_well because as mentioned in my comment, that is where my issue/confusion has come in, because that public IP is what I used on the mgmt bridges22:38
dcapone2004_basically I put all 4 physical hosts on that public subnet, and bridged to the mgmt interface22:38
dcapone2004_which is where I was stuck not understand how to bridge it to br-vlan as well for floating IPs22:39
dcapone2004_I basically need to remove those public IPs from the mgmt bridge, use a private subnet there, trunk/Vlan the deployment server so that I can have a br-mgmt on the private subnet I use and a second vlan with a public IP attached to it22:40
dcapone2004_to manage the environment remotely22:41
dcapone2004_and trunk/vlan the infra physical host in the same way to provide external connectivity to the openstack VMs, which should route all traffic through that system via neutron22:42
admin0eth0 = say private range (can be dhcp or static) (in ALL) .. in deploy make eth0.100 and add your public Ip say under vlan 100 for remote management and ssh22:42
admin0eth1 = connect this of all servers to the switch, make trunk and allow vlan tag say  10, 20 and 3022:42
admin0eth1.10 = add this to br-mgmt and give an ip of 172.29.236.x   in ALL22:42
admin0eth1.20 = add this to br-vxlan and give an ip of 172.29.240.x  in compute and infra22:42
admin0eth2  = add this to br-storage and give an ip of 172.29.244.x on compute and storage22:42
admin0eth3 = add this to br-vlan22:43
admin0now assuming your floating ip range is say vlan 200,  on eth3, allow trunk and add vlan 20022:43
admin0when u add ip later, neutron will add eth3.200 and send it tagged22:43
admin0and when u configure, make external VIP as  172.29.236.2 (for example ) then you can  also add eth0.100:10 another public IP and SNAT/DNAT/DMZ to .2 so that whenever that public IP is hit, it opens horizon and public access to your openstack22:44
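admin0's layout above, sketched as ifupdown config for one host (Ubuntu /etc/network/interfaces style, assuming the default "source /etc/network/interfaces.d/*" include; addresses and VLAN IDs are the example values, and br-vxlan/br-storage follow the same pattern as br-mgmt):

    cat > /etc/network/interfaces.d/osa-bridges.cfg <<'EOF'
    auto eth1.10
    iface eth1.10 inet manual
        vlan-raw-device eth1

    auto br-mgmt
    iface br-mgmt inet static
        bridge_ports eth1.10
        bridge_stp off
        address 172.29.236.11
        netmask 255.255.252.0

    # br-vlan carries the provider traffic; no IP needed here, neutron
    # creates the tagged sub-interfaces (e.g. eth3.200) itself
    auto br-vlan
    iface br-vlan inet manual
        bridge_ports eth3
        bridge_stp off
    EOF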
dcapone2004_ok, to try to morph your example to what I have currently cabled (to try to avoid a trip to recable for the moment and to also test my understanding of everything), could I do the following:22:46
dcapone2004_eth0.100 on deploy, private IP subnet - add to br-mgmt, eth0.200, public ip for remote management and ssh, no other connection into deploy host22:48
admin0all you need on deploy is the br-mgmt ip range and  public22:48
*** hamzaachi has quit IRC22:49
dcapone2004_yup, that is what i was trying to convey, 1 physical connection, 2 vlans, the br-mgmt vlan and the public IP vlan22:49
admin0yep22:49
admin0for simplicity, br-public and br-mgmt :)22:49
*** hamzaachi has joined #openstack-ansible22:49
admin0one has your public IP for remote connection and one has 172.29.236.222:49
dcapone2004_on the infra physical host, eth0.100 private ip assigned in mgmt subnet, add to br-mgmt, eth0.200 - add to br-vlan for floating IP range, eth1 (no vlan tagging required as it is a direct connect to compute) add to br-vxlan with 172.29.240.x22:51
dcapone2004_on the compute host, eth0.100 private ip assigned in mgmt subnet, add to br-mgmt, eth1 (no vlan tagging required as it is a direct connect to infra) add to br-vxlan with 172.29.240.x, eth2 (also no VLAN on the direct connect to storage) add to br-storage, assign 172.29.244.x22:51
dcapone2004_on the storage host, eth0.100 private ip assigned in mgmt subnet, eth2 (or eth1, whichever port is directly connected to compute, also with no VLAN) add to br-storage, assign 172.29.244.x22:52
admin0right22:53
admin0but there is a catch22:54
admin0you cannot add a vlan directly like that on br-vlan22:54
admin0as neutron creates the vlan tag later on22:54
admin0what you can do is  add eth0 to br-vlan .. .. and then add eth0.100 to br-mgmt22:54
admin0so that this eth0.200 is created later automatically22:54
dcapone2004_got it, that makes sense22:55
admin0dcapone2004_, its a old article that i am redoing for rocky, but go to this page: https://www.openstackfaq.com/openstack-liberty-private-cloud-howto/22:55
admin0search for click here to see single network card22:55
admin0and there you will see this exactly22:56
*** macza has quit IRC22:56
admin0+click here to see single network card network configuration of c11 .. c25 nodes22:56
admin0so there, eth0 has a public ip, and is also part of br-vlan .. because neutron adds and tags the eth0.200 later, you can even have an ip and use it22:57
dcapone2004_got it22:57
*** cshen has joined #openstack-ansible22:58
admin0so if you have a dhcp, you can use it and give direct ip to eth0 for management ..  and not tag .100 specifically22:58
admin0because in that case, 172.29.236.x is on vlan122:58
admin0or have a diff ip there, and have mgmt under a new vlan tag as in that example22:59
admin0upto you22:59
dcapone2004_yep, I got that, was using the specific vlan numbers to make it easier to illustrate22:59
admin0will this cloud needs to be accessible from outside ?22:59
dcapone2004_I didn't read that page in depth, but it brought up a quick question that might be answered if I read the whole page: is the VXLAN network used by os-ansible a different network than the tenant network(s) inside openstack?23:00
dcapone2004_yes, it would, glad you brought that up, because I am missing that public ip configuration somewhere23:00
admin0br-vlan is the outermost network which runs on a vlan23:00
admin0where all vxlan runs23:01
admin0br-vxlan  is the trunk for your internal networks23:01
dcapone2004_internal networks "inside" openstack or "internal" to openstack ansible deployment (that was what my question was targeting)23:01
admin0tenant network :)23:02
admin0you did the eth0.100 -- that takes care of openstack deployment/management/api internal traffic23:02
dcapone2004_ok that is what I understood it to be23:02
admin0that is on br-mgmt23:02
admin0br-vxlan = network on top of which all east-west traffic flows23:02
admin0does your cloud need to be accessible via public ?23:03
dcapone2004_yep, at least horizon would need to be23:03
admin0if yes, then you need to do this ( before setup )23:03
dcapone2004_and the API endpoints23:03
admin0so what you will do is assuming your 4 nodes have .11, .12 , .13 and .14 ip on 172.29.236.x range,23:04
admin0what you do is now add eth0.200:10 on public and add a 2nd public IP  a.b.c.d  -- this for your stack/cloud23:04
admin0and in your openstack_user_config, on external_lb_vip_address: cloud.yourdomain.com .. and in your internal DNS, point cloud.yourdomain.com to   172.29.236.10 ( virtual IP for example) and in user_variables, do haproxy_keepalived_external_vip_cidr: "172.29.236.9/22" haproxy_keepalived_external_interface: "eth0.100"23:06
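The same settings as a user_variables.yml fragment (a sketch mirroring admin0's example values; the variable names are the standard OSA haproxy/keepalived ones, the addresses are his illustrations):

    cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
    # external_lb_vip_address: cloud.yourdomain.com goes in openstack_user_config.yml
    haproxy_keepalived_external_vip_cidr: "172.29.236.9/22"
    haproxy_keepalived_external_interface: "eth0.100"
    haproxy_keepalived_internal_vip_cidr: "172.29.236.10/22"
    haproxy_keepalived_internal_interface: "br-mgmt"
    EOF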
dcapone2004_that should only be needed on the infra host where horizon/keystone are installed correct?23:06
admin0that way, your haproxy ( if you decide it to be on infra) will have 2 ips ..23:06
admin0when haproxy is up, it will also have .9 .. which is NAT/DMZ from the public IP we added .. and then your endpoints and horizon etc will be on cloud.example.com23:07
dcapone2004_and I don't think it would need to be added as a subinterface right? just eth0.200 because eth0.100 is bridged to br-mgmt and there is no assignment for eth0.200 at all in the "stack"23:07
*** lbragstad has quit IRC23:08
admin0if external, i normally make it br-public and if internal/management/ssh i do it as br-ssh23:08
admin0keeps it sane23:08
admin0but you get the idea23:08
admin0the public name/api endpoint is set via  external_lb_vip_address  which is mapped via haproxy_keepalived_external_vip_cidr and  haproxy_keepalived_external_interface23:09
admin0so if you have say  10.11.12.x via eth4 .. you can have  that IP given as well23:09
dcapone2004_ok, I guess the question is, does this second public IP need to be in the same range as the br-mgmt and NATed, or can it be a separate public IP from the same range as used on the deploy host for public access?23:10
admin0anything23:10
admin0its how you want your cloud to be accessed23:10
admin0it can be on a completely new set of ips on new interface as well23:10
admin0just that if infra is the haproxy host, it will get added to haproxy23:10
dcapone2004_got it, I thought so, but you have been so very helpful, figured I'd pick your brain while I have the chance23:10
admin0its midnight for me .. so tomorrow :)23:11
admin0success :)23:11
dcapone2004_I meant with that question....I'm done with you for a while I hope....thanks a lot23:12
fnpanic_good night23:12
admin0no problem ..  if you have questions, just ask23:12
*** lbragstad has joined #openstack-ansible23:16
*** lbragstad has quit IRC23:22
