Wednesday, 2019-08-21

*** markvoelker has joined #openstack-ansible00:11
*** markvoelker has quit IRC00:15
*** zirpu has joined #openstack-ansible00:24
*** markvoelker has joined #openstack-ansible00:26
poopcatA little late, but @allenb, can you log in to the Galera container, check the '/etc/xinetd.d/mysqlchk' config, and validate the allowed IPs? Perhaps adjust 'only_from' to 'only_from = 0.0.0.0/0' .... then restart xinetd...00:26
poopcatIf you run 'hatop -s /var/run/haproxy.stat' from the haproxy node and all the Galera VIPs are down, this is likely your problem00:27
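For reference, a hedged sketch of what a temporarily wide-open /etc/xinetd.d/mysqlchk could look like. Everything except the 'only_from' line is an assumption and should be checked against the file the galera_server role actually ships on your node; opening it to 0.0.0.0/0 is for debugging only.

```
service mysqlchk
{
    disable        = no
    flags          = REUSE
    socket_type    = stream
    port           = 9200
    wait           = no
    user           = nobody
    server         = /usr/local/bin/clustercheck
    log_on_failure += USERID
    only_from      = 0.0.0.0/0    # debugging only; restrict again afterwards
    per_source     = UNLIMITED
}
```

Once haproxy's checks pass again, set 'galera_monitoring_allowed_source' (mentioned later in this log) so the restriction is managed persistently instead of hand-edited.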
zirpucool. thanks.00:27
poopcatwhoops @allanb00:27
*** gyee has quit IRC00:41
*** ianychoi has quit IRC00:57
*** ianychoi has joined #openstack-ansible00:59
*** spsurya has joined #openstack-ansible00:59
*** ThiagoCMC has joined #openstack-ansible02:54
ThiagoCMCGuys! The command `openstack-ansible setup-hosts.yml` is failing on Ubuntu 18.04 with HWE (Linux 5.0), error: "modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/5.0.0-25-generic"02:54
ThiagoCMCAny idea?02:54
ThiagoCMCThe documented step `apt install linux-image-extra-$(uname -r)` also fails on newer Ubuntu systems!  O_O02:55
poopcatI'm not sure if the 5.0 kernel is supported at the moment02:56
ThiagoCMC:-O02:56
ThiagoCMCThe command `find /lib/modules* -iname "*nf_conntrack_ipv4*"` only returns: "/lib/modules/4.15.0-58-generic/kernel/net/ipv4/netfilter/nf_conntrack_ipv4.ko"02:56
poopcathttps://docs.openstack.org/project-deploy-guide/openstack-ansible/stein/overview.html02:57
ThiagoCMC"Linux kernel version 3.13.0-34-generic or later is required."02:57
poopcatsure, I understand02:58
poopcat5.0 kernel is pretty fresh on Ubuntu 18.0402:58
poopcattry rolling back and re-running the setup-hosts02:58
ThiagoCMCSure, but 4.18 isn't that fresh, and it doesn't have the module either.02:58
ThiagoCMCI'll rollback the kernel and try again, no problem...02:59
poopcatI've never tested with 4.18. Most of the clusters I'm on are 4.402:59
ThiagoCMCI was hoping to find a workaround on that Ansible task...02:59
ThiagoCMCThe "openstack_hosts : Load kernel module(s)"03:00
poopcatyou could work around it by commenting out the necessary checks... but that's probably not the way you'd want to go03:00
ThiagoCMCSure, I understand that...   =P03:01
ThiagoCMCJust curious... Looks like "nf_conntrack" does the job on 5.0?03:02
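That is very likely the case: newer kernels (around 4.19) merged nf_conntrack_ipv4 and nf_conntrack_ipv6 into plain nf_conntrack, which is why the module is missing on 5.0. One possible workaround (untested here) is overriding the role's module list; the exact variable name and entry format must be checked against the openstack_hosts role's vars/ubuntu-18.04.yml before use.

```yaml
# user_variables.yml -- hedged workaround sketch, not a tested fix.
# Verify the variable name against the openstack_hosts role; when overriding,
# copy across the rest of the distro's default module list as well.
openstack_host_kernel_modules:
  - name: nf_conntrack   # instead of nf_conntrack_ipv4 / nf_conntrack_ipv6
```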
*** ThiagoCMC has quit IRC03:07
*** BjoernT has joined #openstack-ansible03:12
*** BjoernT has quit IRC03:17
*** BjoernT has joined #openstack-ansible03:17
*** rohit02 has joined #openstack-ansible03:41
*** gkadam has joined #openstack-ansible03:54
*** gkadam has quit IRC03:54
*** BjoernT has quit IRC05:06
*** shyamb has joined #openstack-ansible05:06
*** kopecmartin|off is now known as kopecmartin05:08
*** udesale has joined #openstack-ansible05:10
*** BjoernT has joined #openstack-ansible05:12
*** udesale has quit IRC05:14
*** raukadah is now known as chkumar|rover05:17
*** dave-mccowan has quit IRC05:21
*** mathlin has joined #openstack-ansible05:29
*** BjoernT has quit IRC05:29
*** mathlin has left #openstack-ansible05:30
*** miloa has joined #openstack-ansible06:01
miloaMorning06:02
jrossermorning06:02
CeeMacMorning06:23
*** ianychoi has quit IRC06:34
*** ianychoi has joined #openstack-ansible06:34
miloaSorry to bother you with this: https://review.opendev.org/#/c/671762/ . If someone can give me some hints to make progress on it, many thanks (the Zuul build fails on openstack-ansible-deploy-aio_ceph-ubuntu-bionic)..06:39
*** arbrandes has quit IRC07:00
*** ianychoi has quit IRC07:00
*** shyamb has quit IRC07:01
*** arbrandes has joined #openstack-ansible07:01
*** ianychoi has joined #openstack-ansible07:01
jrossermiloa: here is the error https://logs.opendev.org/62/671762/5/check/openstack-ansible-deploy-aio_ceph-ubuntu-bionic/77b48c0/logs/ara-report/result/1d6a3693-949d-428b-8840-dc5fd8a2a27d/07:07
jrosserThis has been fixed as far as I know07:08
*** trident has quit IRC07:10
miloajrosser: Thanks :) so if i do a recheck, is that enough? or do i have to do something more?07:15
jrosserTry a recheck07:15
miloajrosser: ok thanks :)07:15
jrosserfwiw I’ve not voted on the patch because it’s quite complex, so it is a difficult review07:15
openstackgerritErik Berg proposed openstack/openstack-ansible-os_neutron master: l3 agent on network_hosts do dvr_snat, anywhere else dvr  https://review.opendev.org/67527407:17
jrosserThe two code paths for keys provided either by the user or retrieved from the cluster are quite intermingled with overall complex logic07:17
*** trident has joined #openstack-ansible07:17
*** shyamb has joined #openstack-ansible07:20
miloaI tried not to change a lot of code :) and to provide, for the nova key for example, an output that the following task can use without change.07:22
*** udesale has joined #openstack-ansible07:24
jrossermiloa: you have a couple of other constructs available: either a block: or including a new task file for the conditional “do this for cluster keys, or this for user keys”, and so long as both eventually do the same set_fact for the thing you need....07:30
*** ivve has joined #openstack-ansible07:39
ioninoonedeadpunk,07:39
ioninoonedeadpunk, i found the issue related to ceph!07:39
ionirbd create --size 1 --user openstack vms/test07:40
ioni2019-08-21 10:37:26.384 7fac65535700  0 -- 172.29.100.200:0/3784294304 >> 172.29.100.200:6822/1328282 conn(0x56371bf1a690 :-1 s=STATE_CONNECTING_WAIT_CONNECT_REPLY_AUTH pgs=0 cs=0 l=1).handle_connect_reply connect got BADAUTHORIZER07:40
ioninoonedeadpunk, it's due to https://github.com/ceph/ceph-ansible/commit/a4f4dd75352411eae8509d32d5b73d1c1ff9b71807:40
*** rpittau|afk is now known as rpittau07:40
ionilet me find the review and add a comment :D07:40
*** shyamb has quit IRC07:54
*** zbr is now known as zbr|ooo07:56
*** shyamb has joined #openstack-ansible08:32
*** ianychoi has quit IRC08:32
*** ianychoi has joined #openstack-ansible08:33
*** shyam89 has joined #openstack-ansible08:45
jrosserioni: interesting, drop a link to the review when you've done that :)08:46
ionijrosser, done that08:46
*** shyamb has quit IRC08:46
ionijrosser, here: https://review.opendev.org/#/c/675785/108:46
*** rohit02 has quit IRC08:46
ionijrosser, i think ceph guys messed up08:47
ionihttps://github.com/ceph/ceph-ansible/issues/229608:47
jrosserdo you understand what they tried to do?08:47
ioni# ceph auth caps client.<ID> mon 'allow r, allow command "osd blacklist"' osd '<existing OSD caps for user>'08:47
ionihttps://docs.ceph.com/docs/master/releases/luminous/#upgrade-from-jewel-or-kraken08:47
ionithis08:47
jrosserthey leave an unhelpful commit message saying what they did (which is obvious from the code!) but not why :/07:47
ioniit seems that they implemented a staging for upgrade08:48
ionii think08:48
ionibut they added the permission to osd instead of mon08:48
ionithat's what i'm getting08:48
ionimaybe i'm understading poorly08:48
ionii'm new to ceph08:48
jrosserso it would be possible to override those vars in OSA08:50
ionijrosser, i lost 6 days of my life trying to figure out why the hell it doesn't work08:50
ionijrosser, i did that :)08:50
jrosseri can believe it :/08:51
jrosserso is this breaking our gates anywhere?08:51
jrosserthere are broken ceph jobs i think currently08:51
ionithe sha bump, i believe, uses ceph for cinder volumes08:51
ioniand cinder fails to run migrations because cinder-volume is down08:52
ioniand it's down due to failing to connect to ceph08:52
ionibut the logs don't tell you anything08:52
ionii had to redefine the rados timeout to 10 seconds instead of -1 (never times out)08:52
ionito see timeouts08:52
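The timeout trick above corresponds to the cinder RBD driver option `rados_connect_timeout` (default -1, i.e. wait forever). A sketch of a backend section with it lowered so connection failures surface in the logs; the section name and pool are assumptions from this conversation.

```ini
# cinder.conf (rbd backend section) -- debugging sketch
[rbd_volumes]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_user = cinder
rbd_pool = volumes
rados_connect_timeout = 10   # fail fast instead of hanging on bad ceph auth
```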
ionithen i tried to create the volumes manually08:53
ioniadmin worked08:53
ionicinder didn't work08:53
ionifun times08:53
ionii'm starting to hate ceph08:53
jrosserso if i've understood correctly we could apply the fix you have already done to here https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/templates/user_variables_ceph.yml.j208:54
jrosserand that would unblock our ceph jobs?08:54
*** gkadam has joined #openstack-ansible08:54
ionihttps://paste.xinu.at/Mch/08:54
ionithis08:54
ionijrosser, works pretty ok08:56
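A sketch of the shape of that override (the paste contents aren't reproduced here): redefining ceph-ansible's `openstack_keys` in user_variables so the osd caps no longer carry the "osd blacklist" grant. The key name, pools, and entry format are assumptions to be checked against ceph-ansible's own defaults for your branch.

```yaml
# user_variables_ceph.yml -- hedged sketch of the workaround, not a merged fix
openstack_keys:
  - name: client.cinder
    caps:
      mon: "profile rbd"
      osd: "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images"
    mode: "0600"
```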
jrossergreat work digging into this btw08:56
jrosserwould you like to make a patch for OSA? or I can take a look?08:57
*** gkadam is now known as gkadam-brb08:57
ionijrosser, if it's ok with you, can you do it?08:57
jrosseris this affecting master too? interestingly the ceph jobs appear to pass there08:58
jrosserthe same patch is in stable-4.0 https://github.com/ceph/ceph-ansible/commit/642851fa5d4199a15cca019419afacb73541556e09:00
*** gkadam-brb is now known as gkadam09:00
ionimaybe ceph 14 handles this differently?09:01
ionino idea09:01
openstackgerritMaksim Malchuk proposed openstack/openstack-ansible-os_nova master: Fix cell_v2 discover_hosts when Ironic enabled  https://review.opendev.org/67768109:05
*** jawad_axd has joined #openstack-ansible09:20
jrosserioni: so i'm a bit confused - here https://paste.xinu.at/Mch/ you didn't add the blacklist part to the mon section?09:28
ionijrosser, i removed it09:28
ionijust to work09:28
ionii removed it from osd and that's all09:28
jrosserright - but the upgrade notes say to add it to the mon section09:28
ionii didn't add anything else09:28
ionii just wanted to get over it and let them fix it09:29
jrosserhmm ok then i think i'll hold off on a patch - i can't test it here09:29
ionijrosser, i'll test it, hold on09:29
ioniclient.cinder09:34
ionikey: AQCo/FxdAAAAABAAu23GYtVFWPfb3NJtdo88ug==09:34
ionicaps: [mon] profile rbd, allow r, allow command "osd blacklist"09:34
ionicaps: [osd] profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images09:34
ionii think this?09:34
ionino idea :D09:35
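The caps ioni pasted line up with the Luminous upgrade note linked earlier. A sketch that only builds and prints the corresponding `ceph auth caps` command for review; the user and pool names come from this conversation and are assumptions, and the printed command should be run on a mon node only after checking it against your own pools.

```shell
# Build (and print, rather than execute) the "osd blacklist" caps command
# from the Luminous upgrade notes, applied to the cinder user.
user="client.cinder"
mon_caps='profile rbd, allow r, allow command "osd blacklist"'
osd_caps='profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'
printf "ceph auth caps %s mon '%s' osd '%s'\n" "$user" "$mon_caps" "$osd_caps"
```

Printing instead of executing keeps the sketch safe to run anywhere; paste the output into a mon-node shell once verified.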
*** Namrata has joined #openstack-ansible09:42
NamrataHi Folks, I am trying to install the openstack-ansible Rocky release on Ubuntu 18.04 Bionic Beaver, which uses netplan. What do you suggest: moving back to ifupdown, or netplan only?09:46
admin0does anyone have a  tenant cleanup script or something equivalent ?10:03
*** Namrata has quit IRC10:05
*** shyam89 has quit IRC10:05
*** Namrata has joined #openstack-ansible10:08
miloajrosser: this time i've got Zuul RETRY_LIMIT Error on openstack-ansible-deploy-aio_ceph-ubuntu-bionic10:11
*** rpittau is now known as rpittau|bbl10:14
noonedeadpunkjrosser: ioni btw, I have a PR to ceph-ansible on a related topic https://github.com/ceph/ceph-ansible/pull/432910:20
noonedeadpunkSo I guess we may ask folks why this patch was done, and probably roll it back there10:20
noonedeadpunkadmin0: do you mean smth like https://github.com/openstack/ospurge ?10:26
noonedeadpunkNamrata: I'd suggest running either netplan or ifupdown, not both10:27
Namratanoonedeadpunk so any of them would work fine10:28
NamrataThanks10:28
noonedeadpunkyep, there shouldn't be problems, since OSA normally does not configure networking (unless you're running the script for CI)10:29
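For reference, a minimal netplan sketch of the kind of br-mgmt bridge OSA expects to find already configured; the interface name and addressing are placeholders, not values from this conversation.

```yaml
# /etc/netplan/50-osa.yaml -- example sketch; eno1 and the subnet are placeholders
network:
  version: 2
  ethernets:
    eno1: {}
  bridges:
    br-mgmt:
      interfaces: [eno1]
      addresses: [172.29.236.11/22]
      parameters:
        stp: false
        forward-delay: 0
```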
*** shyamb has joined #openstack-ansible10:30
Namratanoonedeadpunk got it Thanks10:34
jrossernoonedeadpunk: oh goodness feels like theres a lot of mess right now in the ceph ansible stuff10:39
noonedeadpunk++10:39
jawad_axdjrosser: how to use ceph rbd namespaces along with pool names in OSA ? Is it supported yet or not? I tried with poolname/namespace but no success.10:52
jrossermiloa: looks like you hit a CI bug https://review.opendev.org/#/c/677669/10:55
jrosseri've rechecked your job now that is merged10:55
miloajrosser: Many thanks :)10:56
*** owalsh is now known as owalsh|away11:08
ioninoonedeadpunk, so it seems that you were on the right path regarding cinder volume and ceph11:19
ioninoonedeadpunk, but the osd blacklist is the problem here11:19
*** shyamb has quit IRC11:20
noonedeadpunkthere's just more than one problem in these 4 lines :)11:20
noonedeadpunkioni: As I understood from your conversation, we're waiting for jrosser to test things out?11:21
jrosserwell i'm running ceph nautilus here11:21
jrosserand OSA stein11:21
ioninoonedeadpunk, he asked me if i added osd blacklist into mon and I said, no11:21
jrosserthe gate breaks with stein/mimic so i can't reproduce11:21
ionii just removed it completly11:21
jrosserand the gate works with master/nautilus, so i'm a bit confused right now about what is going on11:22
noonedeadpunkioni: and you have mimic on your deployment?11:23
jrossernoonedeadpunk: i was thinking we could temporarily add a override into our user_variables_ceph to unblock the gate11:23
ioninoonedeadpunk, yes11:23
jrosserbut i'm just not really sure what "right" looks like there in light of ioni bug and your PR11:23
noonedeadpunkjrosser: yeah, since it affects only mimic I think it would be faster and easier11:23
ioninoonedeadpunk, i just used whatever is default and stein11:23
ioninoonedeadpunk, and nautilus seems too new for me11:24
*** udesale has quit IRC11:26
*** udesale has joined #openstack-ansible11:27
*** shyamb has joined #openstack-ansible11:42
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Temporary override of ceph openstack_keys  https://review.opendev.org/67770711:47
openstackgerritJonathan Rosser proposed openstack/openstack-ansible stable/stein: Temporary override of ceph openstack_keys  https://review.opendev.org/67770911:47
jrosserioni: noonedeadpunk ^ let's see how they go. Gross, templating a template :(11:48
noonedeadpunkjrosser: do we really need this for master? Can we just place a patch for stein only?11:50
* jrosser shrugs, the same errors are in ceph-ansible for both branches11:51
noonedeadpunkbut we have broken gates only for stein...?11:52
jrosserCorrect, but would you also like to validate the pool permission changes in master?11:52
noonedeadpunkI hoped to get PR to ceph-ansible merged:))11:53
noonedeadpunkbut yeah, lets leave it as for now:)11:53
jrosserperhaps the commit message should be clearer11:55
jrosseri've combined the osd blacklist and your PR changes into one there11:55
*** udesale has quit IRC12:02
admin0noonedeadpunk, thanks. i will check ospurge out12:05
*** rpittau|bbl is now known as rpittau12:10
openstackgerritMerged openstack/openstack-ansible-apt_package_pinning master: Update invalid link for README  https://review.opendev.org/67741912:14
openstackgerritMerged openstack/openstack-ansible-plugins master: Update invalid link for README  https://review.opendev.org/67741612:16
openstackgerritMerged openstack/openstack-ansible-galera_client master: Update invalid link for README  https://review.opendev.org/67742112:26
*** udesale has joined #openstack-ansible12:32
openstackgerritMaksim Malchuk proposed openstack/openstack-ansible-os_nova stable/stein: Fix cell_v2 discover_hosts when Ironic enabled  https://review.opendev.org/67771412:35
openstackgerritMaksim Malchuk proposed openstack/openstack-ansible-os_nova stable/rocky: Fix cell_v2 discover_hosts when Ironic enabled  https://review.opendev.org/67771612:37
*** shyam89 has joined #openstack-ansible12:37
*** shyamb has quit IRC12:37
*** dr_feelgood has joined #openstack-ansible12:38
openstackgerritMaksim Malchuk proposed openstack/openstack-ansible-os_nova stable/queens: Fix cell_v2 discover_hosts when Ironic enabled  https://review.opendev.org/67771712:40
*** gkadam has quit IRC12:42
*** shyam89 has quit IRC12:55
openstackgerritMerged openstack/openstack-ansible-nspawn_hosts master: Update openSUSE job  https://review.opendev.org/67716513:04
*** dr_feelgood has quit IRC13:12
*** ThiagoCMC has joined #openstack-ansible13:13
ThiagoCMCGood morning guys!13:15
ThiagoCMCI'm trying to deploy Stein on Ubuntu 18.04 with Linux 4.15 (since 4.18 and 5.0 are failing) and now 3 containers don't start, error: "networkd-dispatcher[1514]: ERROR:Failed to get interface "a3730f1a_eth0" status: Command '['/bin/networkctl', 'status', '--no-pager', '--no-legend', '--', 'a3730f1a_eth0']' returned non-zero exit status 1."13:16
ThiagoCMCAny idea?13:16
admin0ThiagoCMC, ovs, lb ?13:33
ThiagoCMClb13:33
admin0with or without netplan ?13:34
ThiagoCMCOnly Glance, Cinder API and Vol are failing, all the other containers are up and running.13:34
ThiagoCMCNetplan13:34
ThiagoCMCCinder vol is a container because I'm not using LVM/iSCSI, it'll be Ceph-based.13:34
ThiagoCMCMy openstack_deploy is basically this one: https://github.com/tmartinx/openstack_deploy13:35
openstackgerritMerged openstack/openstack-ansible-os_nova master: Fix cell_v2 discover_hosts when Ironic enabled  https://review.opendev.org/67768113:38
openstackgerritMerged openstack/openstack-ansible-os_nova master: Update invalid link for README  https://review.opendev.org/67713813:38
*** zbr|ooo is now known as zbr13:47
*** udesale has quit IRC13:47
*** jawad_axd has quit IRC13:48
*** jawad_axd has joined #openstack-ansible13:49
*** jawad_axd has quit IRC13:49
*** BjoernT_ has joined #openstack-ansible13:50
*** jawad_axd has joined #openstack-ansible13:50
*** jawad_ax_ has joined #openstack-ansible13:52
*** jawad_axd has quit IRC13:55
*** jawad_ax_ has quit IRC13:56
noonedeadpunkmnaser: seems that I've missed something else, since patches to uwsgi are not handled by the openstackgerrit IRC bot14:00
*** pvradu has joined #openstack-ansible14:00
*** pvradu has quit IRC14:05
mnasernoonedeadpunk: yeah that's another thing14:07
mnasernoonedeadpunk: https://github.com/openstack/project-config/blob/master/gerritbot/channels.yaml#L315-L33314:07
noonedeadpunkmnaser: great, thanks14:08
*** gregwork has quit IRC14:14
*** gundalow has quit IRC14:15
*** tiffanie has quit IRC14:15
*** strattao has quit IRC14:16
*** mgagne has quit IRC14:16
*** strattao has joined #openstack-ansible14:16
*** logan- has quit IRC14:16
*** irclogbot_1 has quit IRC14:17
*** irclogbot_1 has joined #openstack-ansible14:17
*** logan_ has joined #openstack-ansible14:17
*** mgagne has joined #openstack-ansible14:17
*** ebbex has quit IRC14:18
*** ebbex has joined #openstack-ansible14:18
*** logan_ is now known as logan-14:18
*** tiffanie has joined #openstack-ansible14:18
*** gregwork has joined #openstack-ansible14:18
*** mhayden has quit IRC14:19
*** Jeffrey4l_ has quit IRC14:19
*** gundalow has joined #openstack-ansible14:19
*** Jeffrey4l has joined #openstack-ansible14:21
*** mhayden has joined #openstack-ansible14:26
*** pvradu has joined #openstack-ansible14:36
*** Namrata has quit IRC14:37
*** chkumar|rover is now known as raukadah14:40
noonedeadpunkargh, erlang broke our gates...14:47
*** pvradu has quit IRC14:51
admin0noonedeadpunk, the best option i thought of was to install it in one of the utility containers (as it has the admin rc) .. i created a venv .. but cannot install it because the packages are not in the osa repo14:58
noonedeadpunkadmin0: are you running on rocky?14:59
admin0yes14:59
noonedeadpunkJust add --isolated flag to pip. It's not the best workaround, but the easiest one15:00
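A sketch of that suggestion: `--isolated` makes pip ignore environment variables and user configuration, which on an OSA deployment typically point pip at the internal repo server. `ospurge` and the venv path are just this example; the install itself needs outbound network access, so the sketch only prints those two commands for review.

```shell
# Confirm the flag exists in the local pip, then print the intended commands.
python3 -m pip install --help | grep -- --isolated
echo "python3 -m venv /opt/ospurge"
echo "/opt/ospurge/bin/pip install --isolated ospurge"
```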
allanbthanks @poopcat that worked.  next bug involves the cinder migrations. searching for the logs.15:18
*** ivve has quit IRC15:21
*** gyee has joined #openstack-ansible15:22
*** pvradu has joined #openstack-ansible15:25
poopcatmake sure you add the override 'galera_monitoring_allowed_source' to make it persistent15:26
poopcat@allanb https://docs.openstack.org/openstack-ansible-galera_server/latest/15:26
allanbcool. was just looking for that.15:26
poopcatThiagoCMC: Unrelated to what you're trying to solve, but I was looking for this for you last night and you had logged off by the time I got it: https://github.com/openstack/openstack-ansible-openstack_hosts/blob/master/vars/ubuntu-16.04.yml15:31
poopcaterrr https://github.com/openstack/openstack-ansible-openstack_hosts/blob/master/vars/ubuntu-18.04.yml15:31
ThiagoCMCCool! I downgraded to 4.15 and it's all good now!15:31
ThiagoCMCI guess that for Linux 5.0, we could just remove those modules, since they don't exist anymore.15:32
ThiagoCMCAlso, I'm planning to go OpenvSwitch with DPDK, so, no need for the conntrack modules at all...15:32
allanbwhere would i find the cinder migration logs? that's my latest error.15:44
allanb"some migrations failed unexpectedly.  see log."   container /var/log has nothing.15:44
noonedeadpunkallanb: ioni was the last to debug this problem, since it's pretty repeatable15:48
allanbthanks.  it's weird because it says there are none to do.  this is a fresh install.15:49
ionithe migration failing on cinder is due to cinder volume being down15:49
ioniopenstack volume service list15:49
allanbah.15:49
ioniand it was down due to the cinder (ceph) user not being able to connect to the volumes pool15:50
ionithat was my case, and the case of the gate tests15:50
ionii have to go now15:50
allanbthanks. cheers.15:51
*** mgagne has quit IRC15:54
*** mgagne has joined #openstack-ansible15:55
*** pvradu has quit IRC15:58
*** pvradu has joined #openstack-ansible16:02
*** KeithMnemonic1 has joined #openstack-ansible16:02
*** KeithMnemonic has quit IRC16:03
*** rpittau is now known as rpittau|afk16:05
*** pvradu has quit IRC16:05
*** markvoelker has quit IRC16:08
allanbseems to be a rabbitmq issue.  progress 1 step at a time. :-)16:08
ThiagoCMCIs there any docs about how to deploy OSA with Manila and Ceph?16:12
ThiagoCMCI have the `openstack_deploy/conf.d/manila.yml` but, I don't know what the `manila-data_hosts` means...  lol16:13
noonedeadpunkThiagoCMC: manila-data_hosts it's kinda storage hosts iirc16:14
ThiagoCMCI already deployed OSA Rocky with Ceph but, only for Glance and Cinder.16:14
ThiagoCMCnoonedeadpunk, right...  =)16:14
ThiagoCMCIs it required even when planning to use Ceph only?16:14
noonedeadpunkunfortunately there are no docs yet, and the manila role is still not really solid, since we're just working on its CI16:14
ThiagoCMCThat's okay, I would love to give it a try and give some feedback16:15
noonedeadpunkThiagoCMC: yep, since it's the hosts where manila-share will be installed16:17
noonedeadpunkhttps://opendev.org/openstack/openstack-ansible-os_manila/src/branch/master/defaults/main.yml#L24816:18
ThiagoCMCGot it... Is it a container as well?16:18
ThiagoCMCI'm thinking about just deploying it on the shared controllers too16:19
noonedeadpunkBy default yes, but you may set is_metal: true as usual16:20
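The is_metal toggle is usually done per component in env.d. A hedged sketch for the manila data service; the container_skel key name here is an assumption mirrored from conf.d/manila.yml and must be verified against the env.d file that ships with OSA's manila support.

```yaml
# /etc/openstack_deploy/env.d/manila.yml -- sketch; verify the skel key name
container_skel:
  manila_data_container:
    properties:
      is_metal: true
```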
ThiagoCMCAwesome, thanks!16:20
ThiagoCMCI'm facing another problem now...  lol16:20
ThiagoCMCThe task: "python_venv_build : Upgrade pip/setuptools/wheel to the versions we want" is failing. Playbook: utility-install.yml16:21
ThiagoCMCPart of the error is: "NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f343e340690>: Failed to establish a new connection: [Errno 113] No route to host"16:21
ThiagoCMCWeird, I can ssh into the host and `ping google.com` normally...16:21
ThiagoCMC"FAILED - RETRYING: Upgrade pip/setuptools/wheel to the versions we want (1 retries left)."16:22
ThiagoCMC:-(16:22
*** markvoelker has joined #openstack-ansible16:22
ThiagoCMCI think that I found the problem... It's trying to reach my internal IP: "http://10.224.241.250:8181/os-releases/19.0.2.dev22/ubuntu-18.04-x86_64" but it might not be ready yet (should be the haproxy, I guess)16:22
ThiagoCMCBut haproxy-install.yml worked16:23
ThiagoCMC:-/16:23
ThiagoCMCHaproxy is only listening on "haproxy_keepalived_external_vip_cidr", not the "internal" one16:26
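For anyone hitting the same symptom: the repo server is reached via the internal VIP, so both keepalived VIP variables need to be set (and typo-free) in user_variables. A sketch; the internal address is the one from the log above, everything else is a placeholder, and the VIPs must match internal/external_lb_vip_address in openstack_user_config's global_overrides.

```yaml
# user_variables.yml -- sketch; adjust addresses and interfaces to your site
haproxy_keepalived_external_vip_cidr: "203.0.113.10/32"
haproxy_keepalived_internal_vip_cidr: "10.224.241.250/32"
haproxy_keepalived_external_interface: bond1
haproxy_keepalived_internal_interface: br-mgmt
```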
guilhermesparxcruz, raukadah: what would be the reason for tempest init not creating content in the workspace dir? http://paste.openstack.org/show/761099/16:26
arxcruzchecking16:27
ThiagoCMCTypo...  lol16:27
guilhermespI have a hold on the bionic distro job that is failing after aug 5 https://review.opendev.org/#/c/672948/16:27
guilhermespdigging into the logs16:28
guilhermespi see that it breaks with rc 3 in the tempest workspace init task16:28
guilhermespinside the hold, that paste above provides me this output16:28
guilhermespwhile in a local vm, running the gate-check-commit script it executes normally and populates /root/workspace16:29
arxcruzguilhermesp: which version of tempest ? from git ?16:29
guilhermespnot sure if I can find the tempest git version as it is a distro job16:32
guilhermesphttp://paste.openstack.org/show/761101/16:33
guilhermespthat's the version of the packages16:33
noonedeadpunkguilhermesp: I'm wondering if https://review.opendev.org/#/c/675353/ can be related to the mentioned problem16:34
guilhermespI was taking a look at this noonedeadpunk earlier16:34
guilhermesptbh I'm not sure, but it is the only related change merged after the failures16:34
noonedeadpunkSince it added extra tempest checks, and I'm worried that it could interfere with the tests built into the ubuntu packages16:35
guilhermespmaybe we can revert that change and see what happens16:37
noonedeadpunkI guess rebasing on the SHA before the merge should do the same16:38
raukadahguilhermesp: can you try tempest init -d16:38
raukadah?16:38
guilhermespsure raukadah16:38
raukadahtempest workspace list also?16:38
guilhermespis it a valid parameter for that version? http://paste.openstack.org/show/761102/16:40
guilhermespwill try to list the workspace16:40
guilhermesp"tempest workspace list" doesn't give me any output16:40
raukadahguilhermesp: can you try tempest init --debug16:40
guilhermespbut in fact a tempest workspace is listed in my local tests but not in the hold16:41
guilhermespwill try raukadah16:41
raukadahsorry it was --debug not -d16:41
guilhermespno output16:41
*** spsurya has quit IRC16:43
raukadahguilhermesp: http://paste.openstack.org/show/761103/ can you add me to that box16:43
raukadah?16:43
guilhermespsure16:43
guilhermespa sec16:43
guilhermespraukadah:  ssh root@104.130.127.20316:44
raukadahguilhermesp: http://paste.openstack.org/show/761104/16:46
raukadahcheck foobar and tempest.log16:47
raukadahsomething wrong with plugin registration16:48
raukadahthat's why workspace dir is not getting created16:48
noonedeadpunkand this feels like something related to the mixed setup...16:48
guilhermespI've seen that shortly  earlier... yeah maybe revert noonedeadpunk 's patch to see what happens16:48
raukadahyes might be16:49
raukadahlet me try to reproduce locally with tempest and heat-tempest-plugin16:49
*** Garyx has joined #openstack-ansible16:49
raukadahnoonedeadpunk: when you say mixed setup, do you mean mixing deb pkgs and git for tempest and its plugins?16:51
noonedeadpunkyep. so tempest is installed from repo packages, but all plugins come from git16:51
*** stuartgr has joined #openstack-ansible16:52
noonedeadpunkSo I guess we might be just missing heat check before16:52
raukadahnoonedeadpunk: please never do that16:52
noonedeadpunkguilhermesp: actually, we have an empty whitelist... https://storage.gra1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/logs_48/672948/3/check/openstack-ansible-deploy-aio_distro_metal-ubuntu-bionic/df07761/logs/etc/host/tempest/whitelist.yaml.txt.gz16:52
raukadahnoonedeadpunk: I have tried to solve the same puppet side i think this year16:53
raukadahinstall all stuff from git16:53
noonedeadpunkraukadah: so I'd say there's just no other way to get tests for ubuntu16:53
raukadahnoonedeadpunk: ok, will look into that on friday may be, I am on pto tomorrow16:54
noonedeadpunkfor example barbican was failing due to no tests https://review.opendev.org/#/c/673491/16:54
noonedeadpunkso for some services this seems necessary thing...16:55
*** ivve has joined #openstack-ansible16:56
noonedeadpunkBut probably we need to have a separate list of plugins to be installed for mixed setups (or do excludes from the existing one)16:56
noonedeadpunkBut I'm actually worried about the empty whitelist as well, since it should have some tests in it16:57
raukadahmay be we need to think what we want to test there and then install?16:58
noonedeadpunkI guess that's exactly what happens...16:59
noonedeadpunkLike https://opendev.org/openstack/openstack-ansible-os_tempest/src/branch/master/vars/main.yml#L79-L8316:59
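The conditional plugin selection linked above can be overridden per deployment when testing a mixed (distro tempest + git plugins) setup. A hedged sketch; the entry format of `tempest_plugins` should be checked against the os_tempest role defaults for your branch.

```yaml
# user_variables.yml -- sketch: pin the plugin set for a distro job
tempest_plugins:
  - name: heat-tempest-plugin
    repo: https://opendev.org/openstack/heat-tempest-plugin
    branch: master
```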
ThiagoCMCWhat can cause connectivity issues between containers on the same host?17:00
ThiagoCMCI mean, I have the "br-mgmt" at the host, IP X, the utility container has its eth1 on the same subnet as br-mgmt, utility can ping the host, but utility can't ping other containers...17:01
ThiagoCMC...On same bridge.17:01
jrosserThiagoCMC: we saw something maybe similar on centos with forwarding not enabled on the bridge17:02
jrosserbut never on ubuntu17:03
noonedeadpunkI had similar things with infiniband, but I don't think you've tried to build the obsolete eIPoIB :)17:03
ThiagoCMCjrosser, right, have you fixed it on centos?17:03
allanbi'm having a bridge issue, but between the controller and storage nodes.  i can ping the controller from storage on its bridge ip, but not connect to the container (rabbitmq) on the controller from the storage.17:03
guilhermespso noonedeadpunk raukadah the action now is to hold it? Maybe we dont need the hold from infra anymore as the issue was spotted right?17:03
allanbalso: ubuntu1804, stable/stein.17:03
ThiagoCMCallanb, same versions here, Linux 4.1517:04
jrosserThiagoCMC: we did fix it - you'd have to look back through the history i'm afraid as it was a while ago17:04
jrosserare you using netplan?17:04
ThiagoCMCyes, netplan17:04
jrosserhmmm well my strictly personal opinion is that i stuck with ifupdown on bionic17:05
ThiagoCMCDamn...  lol17:05
ThiagoCMCok17:05
noonedeadpunkifupdown works on bionic for me pretty well:)))17:05
allanb+1 ifupdown. keep it simple. :-)17:05
ThiagoCMCI have the same Netplan for my KVM hosts, they can talk normally17:05
noonedeadpunkallanb: if only ifupdown on ubuntu could restart networking.....17:05
allanbyeah. there is that.17:06
raukadahguilhermesp: yes17:07
raukadahnoonedeadpunk: guilhermesp tempest + the heat plugin works fine when i install from git in a local env17:09
guilhermesphow did you test it raukadah ? I have a local vm which I ran the gate-check-commit and seems to work fine17:10
guilhermespabout the workspace creation17:10
raukadahnope17:10
noonedeadpunkguilhermesp: so you're ok locally with  distro scenario?17:11
raukadahjust git clone tempest and the heat tempest plugin, then install them in a venv, then tempest list-plugins and tempest init --debug17:11
noonedeadpunkMaybe we should try to rebase patch on master?17:11
guilhermespnoonedeadpunk: that's what I ran: scripts/gate-check-commit.sh aio_distro_metal_heat deploy distro17:11
guilhermespin a local bionic server17:11
guilhermesprebase could be an option, let me try17:11
jrosserallanb: there is a helper script to check that all the veth are connected - someone will have to fire you the name I’m on a mobile so searching for it is tricky17:12
noonedeadpunklooks good17:12
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_heat master: Fix keystone endpoint for heat servers  https://review.opendev.org/67294817:13
noonedeadpunkmnaser: jrosser guilhermesp - I've got uwsgi passing tests https://review.opendev.org/#/c/676410/ - they do not run role itself, since nothing use it yet... but for initial commit I guess it's ok17:13
allanb@jrosser thanks.17:13
noonedeadpunkat least linters and docs are ok:)17:14
allanbi think my problem is that the br-mgmt on the storage01 node isn't passing traffic to the controller01 node. however, the ips for the br-mgmt interfaces ping between them. it just doesn't pass packets to the rabbitmq container.17:14
allanbip forwarding is turned on.17:14
ThiagoCMCSame here...17:15
ThiagoCMCFixed!17:15
ThiagoCMCFor some reason, I had to `apt purge docker.io ; apt autoremove -y ; reboot`17:16
ThiagoCMCLooks like it was iptables rules at the container host.17:16
ThiagoCMCWhew17:16
ThiagoCMCNo idea why docker.io is installed.17:16
noonedeadpunkprobably you've got magnum installed on the wrong host?17:18
jrosserMagnum itself doesn’t need that?17:20
noonedeadpunkyeah, you're right....17:21
noonedeadpunkthen I also have no idea why docker.io was there :p17:21
ThiagoCMCI don't have magnum enabled...17:23
allanbso walking through my bridge problem, hopefully someone sees something i'm missing.  controller01 br-mgmt has 192.168.100.13, storage01 has br-mgmt 192.168.100.15.  both can ping each other at those ips.17:25
allanbrabbitmq container has 192.168.100.173, pingable from controller01, not from storage01.  'ip r get 192.168.100.173' on storage01 shows it's routing through the br-mgmt i/f, but no icmp packets show up on controller01.17:26
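[Editor's note: the checks allanb describes, as a sketch. The addresses are the hypothetical ones quoted in this discussion; commands are guarded so the snippet is harmless to run elsewhere.]

```shell
# Run on storage01. Addresses are the ones from the discussion above.
RABBIT_IP=192.168.100.173
# Which interface would the kernel use to reach the container IP?
ip route get "$RABBIT_IP" || true     # expect "... dev br-mgmt ..." here
# Probe it; -W 1 keeps the failure fast if the underlay eats the packets:
ping -c 3 -W 1 "$RABBIT_IP" || true
# Meanwhile on controller01, confirm whether the ICMP ever arrives:
#   tcpdump -ni br-mgmt icmp and host 192.168.100.15
```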
jrosserallanb: you’ve not got an address collision somewhere have you?17:27
ThiagoCMCallanb, check if you have docker.io installed on your host17:27
allanbi'm tempted to manually add a route for that 1 ip on storage01 to route through controller01.  no docker.io, already checked. :)17:27
jrosserCheck that your used_ips is correct17:27
ThiagoCMCallanb, what about your host's iptables rules?17:28
jrosserAnd then that the ips allocated in the inventory haven't accidentally also been used on a host or some other piece of your hardware17:28
allanbiptables seem correct.  btw, this is running on vagrant VMs.  so it could be something weird there, but everything else is working up to this point in the last playbook, setup-openstack.yml17:28
*** errr has quit IRC17:29
jrosserallanb: well, people do struggle doing multinode things with VMware in a similar way17:30
allanbi'm really not understanding why the storage01 node isn't seeming to pass packets over the br-mgmt i/f for the rabbitmq ip.17:30
*** errr has joined #openstack-ansible17:30
jrosserWhatever magical networking under all that has to be set up to allow ip/mac that don’t belong to the vagrant box to pass17:30
allanbused_ips for each of the bridge networks is set up for 10,15.17:31
allanb10 thru 15.  rest are free.17:31
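[Editor's note: jrosser's used_ips check refers to the reservation block in openstack_user_config.yml. A minimal sketch matching the range allanb describes; these values come from this thread, not from any default.]

```yaml
# /etc/openstack_deploy/openstack_user_config.yml (fragment)
# Reserve addresses already taken by hosts and other infrastructure so the
# dynamic inventory never assigns them to containers.
used_ips:
  - "192.168.100.10,192.168.100.15"   # hosts .10 through .15, per the discussion
```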
*** miloa has quit IRC17:31
jrosserCan you ping any containers between vm?17:32
allanbno. but i'll verify.17:32
jrosserIf you can’t get any cross vm ping between containers I’d pretty much say this was whatever is providing the inter-vm network getting in the way17:33
allanbyeah. that definitely doesn't work.17:33
allanbthanks.  i'll continue digging there.17:34
*** macz has joined #openstack-ansible17:36
*** allanb has quit IRC17:56
*** CeeMac has quit IRC17:59
*** allanb has joined #openstack-ansible18:04
ThiagoCMCnoonedeadpunk, hey buddy, just curious, the Manila with Ceph uses CephFS, which depends on the Ceph metadata server, right? If yes, does OSA create "ceph-meta" containers as well?18:08
*** markvoelker has quit IRC18:16
*** markvoelker has joined #openstack-ansible18:22
NobodyCamGood Morning OSA folks, Are there entries in openstack_user_config.yml for ceilometer?18:25
NobodyCamdoh... I found it in conf.d18:28
*** andyfriar has joined #openstack-ansible18:30
*** markvoelker has quit IRC18:30
*** kopecmartin is now known as kopecmartin|off18:33
*** andyfriar has quit IRC18:50
jrosserThiagoCMC: nearly https://review.opendev.org/#/c/673369/19:02
ThiagoCMCNice!19:03
jrossergetting manila to see / use that might be another thing entirely, but that's a start19:04
ThiagoCMCDo you think that it'll be ready for OpenStack Train, maybe in about 4 months?19:04
jrossersomeone needs to put effort into finishing it19:04
ThiagoCMCSure! I understand that this is new and I'm planning to dive into it and help.19:04
jrosserpatches welcome :)19:04
ThiagoCMC:-D19:04
ThiagoCMCI'm not a developer but, I can find bugs.  lol19:05
guilhermespnoonedeadpunk raukadah seems that the rebase didn't work and now the suse job is failing https://review.opendev.org/#/c/672948/19:30
*** markvoelker has joined #openstack-ansible19:44
*** markvoelker has quit IRC19:49
BjoernT_so it seems the systemd_python==234 package now needs the libsystemd-dev and pkg-config packages during keystone install in stein, am I the only one running into this?19:50
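[Editor's note: this is consistent with how systemd-python builds: it compiles a C extension against libsystemd, so the wheel build needs the headers and pkg-config on the build host. A guarded sketch of the fix, using Ubuntu/Debian package names:]

```shell
# systemd-python (pinned as systemd_python==234 in the discussion) builds a
# C extension against libsystemd, so the build host needs the headers and
# pkg-config. Guarded so the snippet is harmless outside a Debian-family host:
apt-get install -y libsystemd-dev pkg-config >/dev/null 2>&1 || true
pip install "systemd-python==234" >/dev/null 2>&1 || true
step=attempted
```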
*** BjoernT_ is now known as BjoernT19:50
noonedeadpunkguilhermesp: suse failed earlier but yes:(20:02
guilhermespyeah noonedeadpunk ..20:06
jrosserBjoernT: if it blows up for you but not in CI then i'd be wondering why?20:10
BjoernTyeah I'm still debugging why this is a problem, the venv built on one of the nodes but not others20:11
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Temporary override of ceph openstack_keys  https://review.opendev.org/67770720:17
openstackgerritJonathan Rosser proposed openstack/openstack-ansible stable/stein: Temporary override of ceph openstack_keys  https://review.opendev.org/67770920:19
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_murano master: Initial commit to os_murano role  https://review.opendev.org/67783020:58
NobodyCamAny tricks to adding a package to an existing repo container without rebuilding all? want to add ceilometer20:59
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_murano master: Initial commit to os_murano role  https://review.opendev.org/67783021:08
BjoernT@jrosser never mind, found it. lsyncd was not able to synchronize other repo hosts21:11
*** altlogbot_2 has quit IRC21:16
*** BjoernT has quit IRC21:22
noonedeadpunkguilhermesp so I'll try to take a closer look on tempest tomorrow21:25
guilhermespright noonedeadpunk keep me updated too and let me know if I can help.21:28
guilhermespi asked infra to rm the hold btw21:28
guilhermespbtw, am I missing something here https://review.opendev.org/#/c/677830/?21:28
noonedeadpunkLooking21:30
goldenfriHi, I need to add a compute host to rocky running ubuntu 16.04, but I only have the venvs for 18.04. Is there a simple way to add the 16.04 ones?21:30
goldenfri(in my repo)21:30
jrosseri'm not sure what state the mixed distro support is right now21:31
jrosser"in the past" you needed one infra host (or at least on repo container) of each distro21:31
goldenfriawww thats not good21:31
goldenfricrap, I'm having a major issue with hosts with 2080TIs taking forever to spawn and don't have that issue using 16.04 with queens21:32
jrosserthis may be less of an issue now the way that the venvs are built, but i don't know of anyone who's tinkered with this recently21:33
jrosserthing is the wheels are built on the repo container so thats a ton of C compiling which will be distro specific21:34
*** ThiagoCMC has quit IRC21:35
goldenfri:-/ thanks, I should have known this wouldn't be easy21:35
jrosserfwiw i don't see that with a T4 but obv. thats not the same kind of beast21:36
*** altlogbot_1 has joined #openstack-ansible21:36
noonedeadpunkguilhermesp: so I guess you'll need run_tests.sh just to basically check linters21:36
*** altlogbot_1 has quit IRC21:38
jrossernoonedeadpunk: \o/ the ceph job on stein gets past cinder http://zuul.openstack.org/stream/8c95d6c4173b447fa10c3a46a1fd5039?logfile=console.log21:40
*** altlogbot_2 has joined #openstack-ansible21:40
*** altlogbot_2 has quit IRC21:42
noonedeadpunkjrosser: wow, great news21:44
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_murano master: Initial commit to os_murano role  https://review.opendev.org/67783021:45
noonedeadpunkjrosser: regarding your comment for tox.ini - we still have these functional local tests everywhere, and I can imagine that people may still test things that way21:48
jrosseryes i did wonder about that, what you'd do for a local test21:49
noonedeadpunkbut you're absolutely right that they are not needed21:49
jrosserits not an issue really - so long as we know why its there21:50
jrosserthe in-role functional tests are now basically abandonware21:50
noonedeadpunkAnd I feel like we didn't even update docs regarding the new way of testing - https://docs.openstack.org/openstack-ansible/latest/contributor/testing.html#roles-functional-or-scenario-testing21:51
jrossergood chance that some of those just don't work any more21:51
jrosseranyway, late here time to !computer, but thanks for the pointer to your ceph-ansible PR that looks to have really helped unlock things21:52
noonedeadpunkSo I'd vote for updating the docs first and doing the cleanup afterwards21:53
noonedeadpunk(unfortunately I have pretty poor English so I don't consider myself a technical writer, at least in terms of grammar)21:54
noonedeadpunkyep, I also won't be online by the time zuul reports the result of testing, so I will vote for this patch in the morning21:55
*** ivve has quit IRC21:57
*** markvoelker has joined #openstack-ansible21:59
noonedeadpunkguilhermesp: from what I see from tempest logs I'd say we don't test heat with tempest at all for source ubuntu https://storage.gra1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/logs_48/672948/4/check/openstack-ansible-deploy-aio_metal-ubuntu-bionic/048c685/logs/openstack/aio1-utility/test_list.txt.txt.gz22:16
*** devsum has joined #openstack-ansible22:20
noonedeadpunkok, that's time I had some rest:)22:20
*** markvoelker has quit IRC22:24
*** threestrands has joined #openstack-ansible22:34
*** markvoelker has joined #openstack-ansible22:35
*** markvoelker has quit IRC22:40
*** macz has quit IRC23:00
*** ThiagoCMC has joined #openstack-ansible23:36
ThiagoCMCHey guys, I just installed Stein! However, at the Horizon login, the "Domain" field is empty and I can't type anything there... What could it be?  =P23:36
ThiagoCMCI have "horizon_keystone_multidomain_support" and "horizon_keystone_multidomain_dropdown" set to true on user_vars...23:37
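[Editor's note: the two overrides ThiagoCMC mentions live in user_variables.yml; a minimal fragment for reference. Whether the dropdown actually renders also depends on keystone v3 being in use and on which domains exist.]

```yaml
# /etc/openstack_deploy/user_variables.yml (fragment)
horizon_keystone_multidomain_support: true
# Renders the domain field as a dropdown instead of a free-text box; only
# useful with keystone v3 and at least one non-default domain.
horizon_keystone_multidomain_dropdown: true
```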
poopcatThiagoCMC: https://osaguide.shutdowndashr.com:8443/w/index.php/Horizon_/_Keystone_multi-domain_login23:38
ThiagoCMCThanks! I'll give it a try.23:39
ThiagoCMCStill empty...  :-(23:46
poopcatDarn --- the last I tried that on was Pike23:49
poopcatUsing the Openstack client do you have multiple domains 'openstack domain list'23:50
ThiagoCMCI have the following domains: magnum, Default and heat.23:52
ThiagoCMCSince this is just a lab, I'll try to keep the defaults and comment out multidomain...23:52
ThiagoCMCBut I'll keep your tip there, for next lab deployment.23:53
ThiagoCMC^_^23:53
ThiagoCMCThanks anyway!23:53
poopcatThiagoCMC: https://osaguide.info ... It's not very pretty but it's just some overrides mostly from Kilo>Pike ... a lot of them apply through to Stein but have really only been tested on those23:56
poopcatIt's been a while since I updated it23:57
ThiagoCMCNice!23:57
ThiagoCMCIt's working now! Without the dropdown.23:57
ThiagoCMCJust some feedback: Horizon prints an error in the corner: "Error: Unable to retrieve limits information."23:59
ThiagoCMCBut I'll check this later...23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!