Friday, 2018-08-31

*** rlandy has quit IRC01:34
*** ykarel|away has joined #oooq01:35
*** apetrich has quit IRC02:14
*** skramaja has joined #oooq03:32
*** udesale has joined #oooq03:59
*** hamzy has joined #oooq04:15
*** gkadam has joined #oooq05:04
*** ykarel_ has joined #oooq05:29
*** ykarel|away has quit IRC05:32
*** sanjayu__ has joined #oooq05:32
*** ykarel_ is now known as ykarel05:53
*** ykarel_ has joined #oooq05:55
*** ykarel_ is now known as ykarel|afk05:56
*** ykarel has quit IRC05:58
*** ykarel|afk has quit IRC06:01
*** jbadiapa has joined #oooq06:01
*** holser_ has joined #oooq06:20
*** apetrich has joined #oooq06:24
*** holser_ has quit IRC06:29
*** ccamacho has joined #oooq06:43
*** ccamacho has quit IRC06:44
*** ccamacho has joined #oooq06:52
*** ccamacho has quit IRC06:57
*** ccamacho has joined #oooq06:58
*** sanjayu__ is now known as saneax06:59
*** jfrancoa has joined #oooq07:12
jfrancoaGood morning folks! hey, have you also seen that most of the RDO cloud legacy jobs are failing due to some dependency issue? it's really similar to this LP reported in May https://bugs.launchpad.net/tripleo/+bug/177393607:13
openstackLaunchpad bug 1773936 in tripleo "ci failing on cmd2 pip" [Critical,Fix released]07:13
*** holser_ has joined #oooq07:29
*** holser_ has quit IRC07:30
*** holser_ has joined #oooq07:30
ssbarneajfrancoa: last update on this bug was 3 months ago.07:50
jfrancoassbarnea: yes, but the logs which are appearing are very similar to the ones stated in that LP: https://logs.rdoproject.org/74/590774/1/openstack-check/legacy-tripleo-ci-centos-7-containers-multinode-upgrades-pike-branch/e3044a8/job-output.txt.gz#_2018-08-30_18_35_47_05939507:53
jfrancoassbarnea: I'm wondering if the ansible version increase might have something to do with it. This log could give some clue https://logs.rdoproject.org/74/590774/1/openstack-check/legacy-tripleo-ci-centos-7-containers-multinode-upgrades-pike-branch/e3044a8/job-output.txt.gz#_2018-08-30_18_35_39_49366507:53
ssbarneajfrancoa: wrong, this was due to a wrong version of ansible, which was found and fixed last night; the fix was merged 3h ago.07:54
ssbarneaUnexpected Exception: No module named manager - is caused by ansible 2.3, when we need 2.5 to run07:54
jfrancoassbarnea: ah, ok.  Good then, could you please send me the link to the patch that fixed it?07:55
ssbarneasee https://review.openstack.org/#/c/591527/07:55
jfrancoassbarnea: thanks a lot07:55
ssbarneai suspect that we will continue to see ansible issues as zuul, ara and tripleo are all defining their own ansible versions. small variations are ok-ish, but when one is downgrading ansible from 2.5.x to 2.3.x, you can bet that everything will go wrong.07:57
*** dbecker has quit IRC08:04
*** dbecker has joined #oooq08:05
ssbarneapanda|rover: Hi! can you help me with something? like explaining the first POST_FAILURE on https://review.openstack.org/#/c/560445/ -- for some reason the log collection was killed after 30min even though it ran with a 40min timeout, and it clearly was not zuul as the total duration was <2h08:10
ssbarneai suspect some timing issues, which could be the cause of a serious number of such errors we have seen in the last weeks.08:11
ssbarneai am asking because I am not sure it is a bug; if it is, I will raise it on LP.08:13
panda|roverssbarnea: latest patches, containerized upgrade job ?08:26
ssbarneayes, this one http://logs.openstack.org/45/560445/128/check/tripleo-ci-centos-7-containerized-undercloud-upgrades/ba31b85/08:28
panda|roverssbarnea: collect logs shouldn't take that mich08:28
ssbarneayep, but i suspect "du" can cause this. i remember NFS issues causing it to get stuck or be very slow.08:29
panda|roverit could take du hast, du hast mich though.08:29
panda|roverwe don't have nfs anywhere AFAIK08:29
ssbarneayep, but still it may be the disk and we don't want this to break our build.08:30
panda|roverdu has completed correctly08:30
panda|roverit's not the problem08:30
ssbarneai see two options: removing it, or running it in the background as soon as possible and collecting the result, even if the result may be incomplete.08:31
panda|roverthe write pipe error happens always08:31
ssbarnearunning out of temp disk or memory?08:32
ssbarneastill, the culprit line is the du one, i have almost always seen the timeout happening on it.08:33
panda|roverI think it's happening *after* it08:33
ssbarneastill, there is something weird because I see the "Broken pipe" but the kill happens 15 minutes later.08:34
ssbarneamaybe we should not trust the timings; it could be related to buffering in zuul, maybe?08:34
panda|roverthe broken pipe is "normal"08:34
ssbarneawhy?08:35
panda|roverzuul timing has always been correct AFAIR08:35
panda|roverssbarnea: don't know why, but it happens always08:36
panda|roverit seems here ansible is not starting the next task in the post.yaml playbook08:36
ssbarneai would use async on this command08:36
ssbarneai really don't want to fail a build due to this command, regardless of whether we succeed in collecting its result or not.08:37
panda|roverthe task calls the collect logs script in its entirety, not just that command08:37
ssbarneaif it takes more than 5min, i don't care about its result, i would only print a warning and continue08:38
ssbarneawell, i know how to do it in bash, no need for ansible.08:38
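A minimal bash sketch of the approach ssbarnea describes here — give the du collection a hard time limit, warn and keep going if it blows past it. The function name, the 5-minute limit and the paths are illustrative assumptions, not the actual tripleo-ci code:

    collect_log_sizes() {
        # cap the du at 5 minutes instead of letting it eat the 40 min post timeout
        timeout 300 bash -c 'du -L -ch /var/log /home 2>/dev/null | sort -rh' \
            > /tmp/log-sizes.txt \
            || echo "WARNING: log size collection timed out or failed, continuing" >&2
        # always return success so a slow disk cannot fail the whole build
        return 0
    }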
panda|roverssbarnea:  http://logs.openstack.org/45/560445/127/check/tripleo-ci-centos-7-containerized-undercloud-upgrades/9aeb137/job-output.txt.gz#_2018-08-30_09_55_04_73813408:41
*** chem has joined #oooq08:41
ssbarneapanda|rover: searching for "du -L -ch" found code in 3 places in tripleo-ci -- which one needs to be updated?08:41
*** ykarel has joined #oooq08:43
ssbarneai will do some tests, i think i remember what causes this with sort. still, which line needs fixing? (2nd question is why we need 3 copies of something that looks like the same code logic)08:43
panda|roverssbarnea: it's not consistent, it will be hard to understand if the patch is really solving anything.08:43
panda|roverssbarnea: anyway the du here is the one in playbooks/tripleo-ci/templates/oooq_common_functions.sh.j208:44
panda|roverssbarnea: and the answer to your second question is "legacy, legacy, legacy stuff"08:44
ssbarneathanks. the comment about "tail -n +1" preventing the broken pipe message is funny, because it clearly doesn't work.08:46
panda|rovermaybe it worked for a while08:49
panda|roverthe mystery thickens08:49
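For context, a hedged reproduction of where that broken-pipe message comes from — this is plain pipeline behaviour, not the literal code in oooq_common_functions.sh.j2:

    # hypothetical shape of the size-report pipeline
    du -L -ch /var/log/* 2>/dev/null | sort -rh | head -n 20
    # once head has printed its 20 lines it exits and closes the pipe; sort's
    # next write then fails and, depending on SIGPIPE handling, either dies
    # silently (exit 141) or logs "write failed: ... Broken pipe".
    # without 'set -o pipefail' the pipeline still returns head's exit code (0),
    # which is why the message is usually noise rather than the real failure.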
panda|roverssbarnea: updated promoter config on the server directly, let's see if we can get a promotion for phase1 now08:55
panda|roverssbarnea: how's the master phase1 looking ?08:55
ssbarneapanda|rover: didn't run for a long time. http://cistatus.tripleo.org/phase1/08:58
ssbarneapanda|rover: most of the builds failed because of the ansible bug; its fix was merged only ~3-4h ago.09:01
*** dtantsur|afk is now known as dtantsur09:07
*** cgoncalves has joined #oooq09:09
*** dbecker has quit IRC09:14
*** dbecker has joined #oooq09:17
ykarelssbarnea, panda|rover somehow http://cistatus.tripleo.org/phase1/ is not updated, as phase1 ran yesterday: https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo_trunk-promote-master-current-tripleo/384/09:20
ykarelssbarnea, panda|rover and it has 1 failure, fixing efforts for ^^ going on here:- https://review.openstack.org/#/c/598095/ and https://review.rdoproject.org/r/#/c/16044/09:22
panda|roverykarel: yep, I know about master and virtualbmc, I was also looking at rocky, that seemed to pass09:24
ykarelpanda|rover, yes rocky was good, i was replying to "didn't run for a long time" http://cistatus.tripleo.org/phase1/09:25
panda|roverrocky phase1 is promoting09:28
panda|rover\o/09:28
panda|roverpromoting containers right now09:28
cgoncalvesI started running into vbmc errors yesterday: http://paste.openstack.org/show/729215/ ideas?09:29
panda|rovercgoncalves: known bug09:30
cgoncalvespanda|rover, thanks. is there a bug# I could follow?09:31
*** ykarel is now known as ykarel|lunch09:33
*** hamzy_ has joined #oooq09:37
panda|rovercgoncalves: hhmm apparently, no. This is the work in progress to fix it though https://review.openstack.org/59809509:37
*** hamzy has quit IRC09:37
panda|roverssbarnea: maybe we should open a bug on the virtualbmc, I thought there was one09:37
cgoncalvespanda|rover, what's better than a bug report? a bug fix! thanks :)09:37
panda|roveryes, but only if it fixes the bug09:37
ssbarneapanda|rover: doing it now, the bug09:39
ssbarneawhat is the behaviour of that virtualbmc bug? a link to the build error would help me.09:45
ssbarneacgoncalves: please help me fix the missing bits on the bug https://bugs.launchpad.net/tripleo/+bug/1790109 -- we need bugs before fixes, especially if they are not obvious.09:50
openstackLaunchpad bug 1790109 in tripleo "virtualbmc>=1.4 is not supported" [Undecided,New]09:50
cgoncalvesssbarnea, I can paste what I have in http://paste.openstack.org/show/729215/ if that helps09:51
cgoncalvesalthough not much meaty info is available there09:51
panda|roverssbarnea: it's the bug hitting the phase1 master promotion right now09:51
panda|roverso it's a promotion-blocker09:52
ssbarneabrb 15min09:52
panda|rovermaybe it's already there09:52
cgoncalvescherry-picked the proposed fix. testing09:52
cgoncalvesnooo! :( http://paste.openstack.org/show/729217/09:53
-openstackstatus- NOTICE: Jobs using devstack-gate (legacy devstack jobs) have been failing due to an ara update. We use now a newer ansible version, it's safe to recheck if you see "ImportError: No module named manager" in the logs.09:55
*** ykarel|lunch is now known as ykarel09:58
ykarelcgoncalves, it also needs the package changes:- https://review.rdoproject.org/r/#/c/16044/, cgoncalves can u try https://logs.rdoproject.org/44/16044/2/check/legacy-DLRN-rpmbuild/869b1f4/buildset/centos/current/python2-virtualbmc-1.4.1-0.20180831065059.fa04f7b.el7.noarch.rpm09:59
cgoncalvesykarel, oooq newbie here. how could I test with that RPM if every time I run oooq it rebuilds the undercloud node?10:02
cgoncalvesone possible way I see is changing roles/virtbmc/tasks/configure-vbmc.yml to point to that URL10:03
ykarelcgoncalves, i think adding it in undercloud_rpm_dependencies should work10:04
cgoncalvesok, should do it. it will run sudo yum install -y {{ undercloud_rpm_dependencies }}10:06
ykarelcgoncalves, mmm that would not work as virtbmc is run before undercloud install10:10
*** skramaja has quit IRC10:18
panda|roverphase1 rocky successfully promoted10:26
ykarelcool10:32
cgoncalvesykarel, managed to get python2-virtualbmc-1.4.1-0.20180831065059.fa04f7b installed by instructing roles/virtbmc/tasks/configure-vbmc.yml to install from the url you provided10:37
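For reference, the shell-level equivalent of what cgoncalves did — installing the candidate package straight from the DLRN build URL ykarel posted. The real change was made in roles/virtbmc/tasks/configure-vbmc.yml; this only shows the effect it has on the node:

    # install the candidate virtualbmc build directly from the DLRN log server
    sudo yum install -y \
        https://logs.rdoproject.org/44/16044/2/check/legacy-DLRN-rpmbuild/869b1f4/buildset/centos/current/python2-virtualbmc-1.4.1-0.20180831065059.fa04f7b.el7.noarch.rpm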
ykarelcgoncalves, okk cool, did the virtbmc role finish successfully with the tq patch?10:38
cgoncalvesnew error: http://paste.openstack.org/show/729220/10:38
cgoncalves[stack@undercloud ~]$ rpm -qa | grep virtual10:39
cgoncalvespython2-virtualbmc-1.4.1-0.20180831065059.fa04f7b.el7.noarch10:39
ykarelcgoncalves, hmm permission issues10:40
cgoncalvesyep10:40
*** dsneddon has quit IRC10:41
ykarelcgoncalves, looks like become: true is not needed10:42
cgoncalvesok. testing10:44
ykareli pinged etingof on #rdo, can u try without become:true10:44
cgoncalvesin-progress10:46
*** cgoncalves has quit IRC10:48
*** cgoncalves has joined #oooq10:50
*** ykarel_ has joined #oooq10:52
*** ykarel has quit IRC10:53
*** udesale has quit IRC11:12
*** ykarel_ is now known as ykarel|afk11:12
panda|roverykarel|afk: did you open the bug for the metadata in the end ?11:15
*** ykarel|afk has quit IRC11:17
ssbarneapanda|rover: i am looking at the gate-check at https://review.openstack.org/#/c/560445/ and there is only one more job failing, but it is failing with a timeout while running tempest for ~10mins. i suspect the reason is something before that which took too much time.11:29
ssbarneascenario00111:29
*** dbecker has quit IRC11:38
*** ykarel has joined #oooq11:40
panda|roverykarel: did you open the bug for the metadata in the end ?11:40
panda|roverykarel: it's blocking all ovb jobs11:40
ykarelpanda|rover, i haven't11:41
ykarelwas i supposed to create it?11:41
ykareli thought u were going to associate it with the rdo cloud networking issue11:41
*** saneax has quit IRC11:42
* ykarel also thought u were working on it11:42
ssbarneapanda|rover: ykarel : filed the metadata error as https://bugs.launchpad.net/tripleo/+bug/179012711:45
openstackLaunchpad bug 1790127 in tripleo "curl fails to get http://169.254.169.254/openstack/2015-10-15/meta_data.json" [Undecided,New]11:45
ykarelssbarnea, cool, also add promotion_blocker11:46
ykarelpanda|rover, did u find the issue with metadata? maybe we need to reach rhos-ops?11:47
panda|roverykarel: I already asked there11:49
ykarelpanda|rover, cool,11:49
*** gkadam has quit IRC12:00
*** dsneddon has joined #oooq12:06
*** trown|outtypewww is now known as trown12:07
*** rlandy has joined #oooq12:40
*** abishop has joined #oooq12:41
abishophi, I could use some help troubleshooting ci check failures on stable/queens patches12:42
abishopafter https://review.openstack.org/597141 merged, scenarios 2,3 pass but 1,4 seem to consistently fail12:43
abishopsee https://review.openstack.org/595357 and failure http://logs.openstack.org/57/595357/1/check/tripleo-ci-centos-7-scenario001-multinode-oooq-container/a3f7856/logs/undercloud/var/log/mistral/executor.log.txt.gz#_2018-08-30_00_40_15_95012:43
abishopany debug tips are appreciated!12:43
abishopssbarnea: you assisted last time, so any thoughts on ^^ :D12:47
ssbarneaabishop:  i can only confirm the timeouts issue, i am affected by it too and it looks pretty bad http://dashboard-ci.tripleo.org/d/cEEjGFFmz/cockpit?orgId=112:51
ssbarneaand what is worse is that this time it is not slow infra that causes the timeouts, it is normal performance. i don't really know what to do.12:52
ssbarneai am working on investigating scenario001 timeout causes but feel free to do the same.12:53
abishopssbarnea: yikes! thx for the info. I was concerned the problem was specific to the scenario 1 and 4 jobs12:53
ssbarneabased on http://cistatus.tripleo.org/ I would say the problem is only on scenario001 without containers, all the ones with containers passed.12:56
*** tosky has joined #oooq13:02
abishopssbarnea: they pass on master and rocky, but I'm not seeing any recent passes on queens13:06
ssbarneaabishop: indeed, not sure what fails http://logs.openstack.org/45/594145/2/check/tripleo-ci-centos-7-scenario001-multinode-oooq-container/036bb98/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz13:15
ssbarneai do see resources.WorkflowTasks_Step2_Execution: ERROR but it is not clear to me what is causing it.13:16
ykarelssbarnea, abishop can check mistral log for ^^13:16
ykarelssbarnea, abishop http://logs.openstack.org/57/595357/1/check/tripleo-ci-centos-7-scenario001-multinode-oooq-container/a3f7856/logs/undercloud/var/log/mistral/executor.log.txt.gz#_2018-08-30_00_40_15_95013:17
ssbarnea"timed out waiting for ping module test success13:20
panda|roverjfrancoa: https://review.openstack.org/597450 are these changes needed for master  and rocky too ?13:21
rlandyhttps://github.com/openstack/tripleo-heat-templates/blob/master/ci/common/net-config-multinode-os-net-config.yaml#L13613:21
rlandyand it keeps going13:21
panda|roveroh dear13:22
jfrancoapanda|rover: do you mean because only queens is included in the conditional? the same question was asked to weshay|pto in the patch I based this on and he said that rocky/master make use of other config_download_args https://review.openstack.org/#/c/597141/1/config/general_config/featureset016.yml@10313:23
jfrancoapanda|rover: so I guess not13:23
panda|roverrlandy: I have no idea how to fix that13:23
panda|roverrlandy: other than creating that file anyway13:23
rlandypanda|rover: well I am wondering if that is the right template to be used13:23
rlandywe moved the relevant ci templates to tripleo-ci13:23
rlandyas in13:23
rlandyhttps://github.com/openstack-infra/tripleo-ci/blob/master/heat-templates/net-config-multinode.yaml13:24
panda|roverjfrancoa: ok then13:24
rlandythis one still resides in tht13:24
rlandyhttps://github.com/openstack/tripleo-heat-templates/blob/master/ci/common/net-config-multinode-os-net-config.yaml13:24
rlandyeither way, we have no equivalent in tripleo-ci13:25
rlandyso this is a PROBLEM13:26
jfrancoapanda|rover: anyway, I still need to rework that patch, because some jobs are failing as they can't find /home/zuul/config-download.yaml http://logs.openstack.org/42/597142/3/check/tripleo-ci-centos-7-scenario008-multinode-oooq-container/050f23b/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz#_2018-08-29_17_29_4213:26
jfrancoapanda|rover: I'll -1W until I have it working13:26
rlandypanda|rover: on the upside, /etc/nodepool is only in this file13:27
panda|roverjfrancoa: oh13:27
rlandywrt tht13:27
panda|roverjfrancoa: it's blocking promotion, we need to raise priority on this patch13:27
panda|roverrlandy: at least ....13:27
jfrancoapanda|rover: ok then, I'll try to give it an eye after I finish one thing13:28
panda|roverrlandy: but we have no ansible interaction with these templates, so even the new subnodes variable will not help here. Either we move it somewhere that takes ansible variables, or we need to create that file anyway13:28
*** dtrainor has quit IRC13:28
panda|roverjfrancoa: thanks13:28
panda|roverrascasoft: I need rascahard right now. The card has reached the prod chain board13:29
rascasoftpanda|rover, I saw it13:30
panda|roverrascasoft: ok so do you need a transformation, hulk style?13:31
rascasoftpanda|rover, gimme 5 mins to regulate the gamma radiation levels13:32
*** myoung has joined #oooq13:33
panda|roverand when you're ready for the bug, I'll shout "Rasca, SMASH!"13:35
rascasoftpanda|rover, I'm ready now, but even if Hulk is one of my favs, my dark side is Raoh, king of Hokuto13:36
*** rascasoft is now known as raoh13:36
raohNow I'm ready13:36
rlandypanda|rover: Emilien added those lines, maybe he can advise - will ask13:36
raohpanda|rover, first of all the DFG here is TripleoCI, for sure, isn't it?13:37
panda|roverraoh: bluejeans ?13:37
raohpanda|rover, sure13:37
panda|roverrlandy: that template is referenced pretty much everywhere in the jobs, part of the multinode environment13:38
raohpanda|rover, https://bluejeans.com/957911389013:38
rlandypanda|rover: yep - asking on #tripleo13:38
ssbarneaadded https://bugs.launchpad.net/tripleo/+bug/1790144 - panda|rover please check (if you are not also overprovisioned already)13:50
openstackLaunchpad bug 1790144 in tripleo "queens: overcloud-deploy.sh fail with mistra timed out waiting for ping module test success: SSH Error: data could not be sent to remote host " [Critical,Triaged]13:50
*** raoh is now known as rascasoft13:52
panda|roverssbarnea: is it multinode in openstack ci ?13:53
ssbarneapanda|rover: job name is tripleo-ci-centos-7-scenario001-multinode-oooq-container13:54
ssbarneawas reported by ykarel half an hour ago, I looked at it but that is all I was able to find.13:56
ykarelmmm, i think abishop reported ^^13:58
panda|roverIs there something that is not failing today :(13:58
ykarelare gate jobs master/rocky also failing13:59
abishopykarel, panda|rover: I hadn't filed a bug, just chatted w/ ssbarnea about it. failures are on stable/queens14:01
panda|roverabishop: is this something related with the config download parameters ?14:02
panda|roverssbarnea: ^14:02
abishoppanda|rover: don't think so, in fact https://review.openstack.org/597141 got two jobs working (scenarios 2,3). 1 and 4 still consistently fail14:03
rlandypls can I get another core vote on https://review.openstack.org/#/c/58148814:06
rlandysshnaidm|off already approved.14:06
rlandypanda|rover: you are the only one around - pls14:06
ykarelpanda|rover, abishop then possibly related with ceph,14:07
ykarelslagle might have idea on it14:07
panda|roverrlandy: approved14:12
rlandythank you, sir14:12
ykarelalso it doesn't look like config-download is working there14:13
panda|roverssbarnea: https://review.rdoproject.org/zuul/stream.html?uuid=b99085d01cff43149aa8b4c682571fb3&logfile=console.log14:19
panda|roverssbarnea: metadata bug is fixed14:19
*** saneax has joined #oooq14:20
*** sanjayu_ has joined #oooq14:23
*** hamzy_ is now known as hamzy14:26
*** ykarel is now known as ykarel|away14:26
*** saneax has quit IRC14:26
*** sanjayu_ has quit IRC14:36
*** dtrainor has joined #oooq14:37
rascasoftpanda|rover, wow that was fast, I moved the card to done14:37
*** dtrainor has quit IRC14:37
*** dtrainor has joined #oooq14:38
*** dsneddon has quit IRC14:39
rlandypanda|rover: I am sort of out of ideas here - waiting on more direction from alex/emilien14:40
rlandyand to think14:40
rlandyall I ever wanted to do was add a browbeat job14:40
rlandyit's become a whole career14:41
*** jaosorior has quit IRC14:43
rascasoftmyoung, hey man you around?14:46
panda|roverrlandy: you and your acts of kindness :)14:46
rlandyno good deed goes unpunished14:47
rascasoftmyoung, so I just realized that I can't use rocky nic-configs with queens, so I'm separating all the nic configs per release inside a review, we'll need it to be merged quickly to make the jobs green again... So in short: incoming review :D14:47
*** dsneddon has joined #oooq14:49
myoungrascasoft, aye, gotcha15:01
myoungrascasoft, ongoing VPN issues, I'm just now as of a few mins ago able to get in via phx2, rdu still down/flaky15:01
rascasoftmyoung, yeah everybody hurts today15:02
myoungrascasoft, last night also i noticed all the slaves in rdo-manager-64 on jenkins are offline with ssh connectivity issues, trying to get in now to check...and those are needed to launch the jobs15:02
myoung(well for some of them) - I have a patch or two coming for rdo2:rocky as well, tweaks from the patch you merged a day or so ago15:04
*** rook has quit IRC15:04
*** myoung is now known as myoung_15:04
rascasoftmyoung_, ack, put me as a reviewer15:05
*** myoung has joined #oooq15:06
*** ykarel|away has quit IRC15:08
*** myoung_ has quit IRC15:10
*** ccamacho has quit IRC15:12
rascasoftmyoung, https://code.engineering.redhat.com/gerrit/148891 testing it here https://rhos-dev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/oooq-queens-rdo_trunk-bmu-ha-lab-cygnus-float_nic_with_vlans/7/console if the pip cache doesn't make us mad as usual15:32
rascasoft:D15:32
*** jfrancoa has quit IRC15:45
myoungrascasoft: methinks we should just make all those jobs obliterate the pip cache each time.  I still think because we're not rev'ing the pip module version per commit we're always going to have these issues.  that or we just stop using it entirely15:56
myoungrascasoft: would rather just pay the 4 min tax each job than spend hours debugging what ends up being "pip doing the right thing" when we have stale ansible bits in the cache15:57
myoung(playbooks, ymls, etc)15:57
*** dsneddon has quit IRC16:03
*** dsneddon has joined #oooq16:04
rascasoftmyoung, agreed all along the line here16:05
*** holser_ has quit IRC16:07
*** abishop has quit IRC16:21
*** abishop has joined #oooq16:22
myoungrascasoft: re 148891, makes sense what you're doing.  The only other route is to put conditionals inside the yamls, but I think that's worse and not in line with eventually merging this with upstream/zuul design16:24
rascasoftmyoung, exactly, I thought the same16:24
myoung(e.g. per release)16:24
*** trown is now known as trown|lunch16:34
*** ssbarnea|ruck has joined #oooq16:36
ssbarnea|ruckit seems that the matrix server died again about an hour ago, so I may have missed a few messages.16:37
rlandyrascasoft: so do you ever use ci-rhos?16:57
ssbarnea|ruckpanda|rover still around here? can you kick https://review.openstack.org/#/c/571176/ ?16:58
*** trown|lunch is now known as trown17:34
*** sshnaidm|off has quit IRC17:37
*** sshnaidm|off has joined #oooq17:38
*** sshnaidm|off has quit IRC17:41
*** sshnaidm|off has joined #oooq17:41
*** dsneddon has quit IRC17:58
*** dtantsur is now known as dtantsur|afk17:59
*** dsneddon has joined #oooq18:00
ssbarnea|ruckfound something weird, this reports exit 1 even if I see the stack was created successfully and no reason to fail, http://logs.rdoproject.org/50/597450/3/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035-master/fcafc96/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz18:12
*** dsneddon has quit IRC18:35
ssbarnea|ruckI raised it as https://bugs.launchpad.net/tripleo/+bug/179019918:35
openstackLaunchpad bug 1790199 in tripleo "fs035: openstack overcloud deploy returns error code without any signs or error" [Critical,Triaged]18:35
*** dsneddon has joined #oooq18:38
*** dsneddon has quit IRC18:43
*** tosky has quit IRC18:52
*** dsneddon has joined #oooq18:55
*** openstackstatus has quit IRC18:58
rlandyssbarnea|ruck: any progress on the vxlan networking error?19:05
rlandyhttps://logs.rdoproject.org/22/596422/32/openstack-check/legacy-tripleo-ci-centos-7-containers-multinode-upgrades-pike-branch/79a26b5/logs/undercloud/home/zuul/vxlan_networking.sh.log.txt.gz#_2018-08-31_13_05_4219:06
*** openstackstatus has joined #oooq19:36
*** ChanServ sets mode: +v openstackstatus19:36
*** openstackstatus has quit IRC19:55
*** openstackstatus has joined #oooq19:57
*** ChanServ sets mode: +v openstackstatus19:57
*** openstackstatus has quit IRC20:11
*** openstackstatus has joined #oooq20:11
*** ChanServ sets mode: +v openstackstatus20:11
*** holser_ has joined #oooq20:23
*** trown is now known as trown|outtypewww20:36
*** openstackstatus has quit IRC20:36
*** openstackstatus has joined #oooq20:36
*** ChanServ sets mode: +v openstackstatus20:36
*** myoung has quit IRC20:44
*** holser_ has quit IRC20:58
*** dtrainor has quit IRC22:01
*** dtrainor has joined #oooq22:02
*** tosky has joined #oooq22:09
*** rlandy has quit IRC22:16
*** jrist has quit IRC22:58
*** tosky has quit IRC23:14

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!