Wednesday, 2018-09-19

00:05 *** hamzy has joined #oooq
01:46 <hubbot1> FAILING CHECK JOBS on stable/pike: tripleo-ci-centos-7-scenario004-multinode-oooq-container @ https://review.openstack.org/602248, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-3nodes-multinode, legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-queens, legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035-queens, tripleo-ci-centos-7-scenario004 (2 more messages)
01:52 *** ykarel|away has joined #oooq
02:20 *** saneax has joined #oooq
02:43 *** ykarel|away has quit IRC
02:45 *** rnoriega has quit IRC
02:45 *** rascasoft has quit IRC
02:46 *** rnoriega has joined #oooq
02:51 *** apetrich has quit IRC
03:16 *** cgoncalves|pto has quit IRC
03:17 *** cgoncalves has joined #oooq
03:44 *** saneax has quit IRC
03:46 *** saneax has joined #oooq
03:52 *** udesale has joined #oooq
03:53 *** ykarel|away has joined #oooq
04:08 *** rfolco has quit IRC
04:13 *** jaosorior_ is now known as jaosorior
04:19 *** skramaja has joined #oooq
05:01 *** hamzy_ has joined #oooq
05:02 *** hamzy has quit IRC
05:04 *** ykarel|away has quit IRC
05:09 *** hamzy has joined #oooq
05:10 *** sshnaidm has joined #oooq
05:11 *** hamzy_ has quit IRC
05:11 *** sshnaidm has quit IRC
05:17 *** hamzy has quit IRC
05:18 *** hamzy has joined #oooq
05:23 *** hamzy has quit IRC
05:24 *** hamzy has joined #oooq
05:31 *** ratailor has joined #oooq
05:33 *** quique|rover|off is now known as quiquell|rover
05:47 *** ykarel|away has joined #oooq
06:00 *** jtomasek has quit IRC
06:01 *** jtomasek has joined #oooq
06:02 *** saneax has quit IRC
06:06 *** jtomasek has quit IRC
06:08 *** jfrancoa has joined #oooq
06:12 *** holser_ has joined #oooq
06:13 *** abishop has quit IRC
06:16 *** abishop has joined #oooq
06:18 *** holser_ has quit IRC
06:21 *** ykarel|away is now known as ykarel
06:21 *** holser_ has joined #oooq
06:38 *** jtomasek has joined #oooq
06:40 *** saneax has joined #oooq
06:43 *** matbu has quit IRC
06:46 *** matbu has joined #oooq
06:53 *** chkumar|off is now known as chkumar|ruck
06:53 <chkumar|ruck> quiquell|rover: \o/
06:57 *** apetrich has joined #oooq
07:07 <quiquell|rover> chkumar|ruck: Good morning sir
07:19 <quiquell|rover> chkumar|ruck: the fix for fs019 master/rocky/queens is still in the gate: https://review.openstack.org/#/c/603323/
07:25 *** ykarel is now known as ykarel|lunch
07:36 *** gkadam has joined #oooq
07:37 *** tosky has joined #oooq
07:37 *** jtomasek has quit IRC
07:37 *** kopecmartin has joined #oooq
07:39 *** jtomasek has joined #oooq
07:51 <marios> quiquell|rover: let's merge https://review.openstack.org/#/c/603322/3 ?
07:51 <marios> sec, tripleo
07:56 <quiquell|rover> marios: ack
07:57 *** gkadam has quit IRC
08:02 *** bogdando has joined #oooq
08:41 *** ykarel|lunch is now known as ykarel
08:55 *** chem has joined #oooq
08:57 *** dtantsur|afk is now known as dtantsur
11:00 *** panda has joined #oooq
11:13 *** dtrainor has quit IRC
11:13 *** dtrainor has joined #oooq
11:16 <chkumar|ruck> quiquell|rover: in the program call, we only need to give an update on the rocky status, right?
11:16 <chkumar|ruck> quiquell|rover: I am getting the status onto the etherpad
11:17 <quiquell|rover> chkumar|ruck: I think Wes sent an e-mail
11:17 <quiquell|rover> chkumar|ruck: He said that he will handle the program call
11:17 <chkumar|ruck> quiquell|rover: yes, we will try to be there also if needed
11:18 <quiquell|rover> chkumar|ruck: But it's not bad to have the status there
11:18 <quiquell|rover> chkumar|ruck: rocky status is: the "Connection is already closed" issue, the tempest error you found, and fs019 with the ceph issue (pending merge)
11:18 <quiquell|rover> chkumar|ruck: I think this is all
11:20 <chkumar|ruck> cross verifying
11:22 <chkumar|ruck> quiquell|rover: yes, only fs019 and fs020 are failing for rocky
11:23 <quiquell|rover> chkumar|ruck: What was the bug regarding the ironic sudo errors on rootwrapper?
11:24 <quiquell|rover> chkumar|ruck: was it only master?
11:24 <chkumar|ruck> quiquell|rover: should I add the tempest failure for fs020 to the same bug? the way tempest failed, it appears to me as a timeout issue
11:25 <chkumar|ruck> quiquell|rover: https://bugs.launchpad.net/tripleo/+bug/1793073 the ironic one is for queens only
11:25 <openstack> Launchpad bug 1793073 in tripleo "[queens] fs01 noop job failed with Stderr: u'/usr/bin/ironic-inspector-rootwrap: Unauthorized command: systemctl start openstack-ironic-inspector-dnsmasq.service (no filter matched)" [Critical,Triaged] - Assigned to Quique Llorente (quiquell)
11:28 <chkumar|ruck> quiquell|rover:
11:28 <chkumar|ruck> Rocky promotion status
11:28 <chkumar|ruck> * fs019 failing https://bugs.launchpad.net/tripleo/+bug/1792296
11:28 <openstack> Launchpad bug 1792296 in tripleo "Overcloude deploy error:Timed out waiting for messages from Execution" [Critical,In progress] - Assigned to Quique Llorente (quiquell)
11:28 <chkumar|ruck> * fs020 failing
11:28 <chkumar|ruck> Master promotion status
11:28 <chkumar|ruck> * FS016, FS017 and FS020 failed due to the telemetry issue https://bugs.launchpad.net/tripleo/+bug/1792862
11:28 <openstack> Launchpad bug 1792862 in tripleo "[master] Telemetry Tempest integration tests failed giving Unable to complete operation on subnet b13c76ec-c851-48d0-91a9-245a2fdcad9b: One or more ports have an IP allocation from this subnet." [Critical,Fix committed] - Assigned to Mehdi Abaakouk (sileht)
11:28 <chkumar|ruck> * FS019 ceph-ansible issue
11:31 <quiquell|rover> chkumar|ruck: So we can ignore the tempest timeout on rocky fs020?
11:31 <chkumar|ruck> quiquell|rover: I think so
11:32 <quiquell|rover> Pufff, we need promotions; we are having a lot of timeouts, for sure because of this
11:32 <quiquell|rover> the gate timed out for the ceph fix :-(
11:33 <chkumar|ruck> quiquell|rover: for the fs020 rocky case, the timeout is happening at a different place
11:33 <chkumar|ruck> quiquell|rover: fs020 rocky is running here https://review.rdoproject.org/zuul/stream.html?uuid=ab5191734b88476f86fecbdfbf82bd92&logfile=console.log
11:33 <quiquell|rover> chkumar|ruck: Has to be the docker update stuff
11:33 <chkumar|ruck> let's see what the output is this time
11:33 <quiquell|rover> chkumar|ruck: ok
11:34 <quiquell|rover> chkumar|ruck: my reproducer is still installing the overcloud
11:36 <quiquell|rover> chkumar|ruck: add to the etherpad that the rocky ironic issues are gone after the sack cleanup
11:37 <chkumar|ruck> quiquell|rover: done
11:39 *** ssbarnea has quit IRC
11:41 <quiquell|rover> chkumar|ruck: my reproducer deployed the overcloud without issues
11:41 <chkumar|ruck> quiquell|rover: great, let's see what tempest has to say
11:41 <quiquell|rover> chkumar|ruck: ack
11:56 *** ssbarnea has joined #oooq
11:56 *** udesale has quit IRC
12:04 <quiquell|rover> chkumar|ruck: I am going to update the program doc
12:16 *** rfolco has joined #oooq
12:23 <chkumar|ruck> quiquell|rover: ack
12:28 <quiquell|rover> chkumar|ruck: I don't see weshay_pto around for the program meeting, maybe we have to cover it?
12:29 *** toure|gone is now known as toure
12:30 <chkumar|ruck> quiquell|rover: yes
12:30 <quiquell|rover> Will you take it?
12:31 <quiquell|rover> I have put all the points in the doc
12:31 <quiquell|rover> related to rocky
12:31 <chkumar|ruck> quiquell|rover: sure
12:32 <quiquell|rover> chkumar|ruck: ok, that's it :-)
12:33 <quiquell|rover> short and sweet
12:33 <chkumar|ruck> quiquell|rover: yup
12:33 <quiquell|rover> Going for lunch now
12:33 *** quiquell|rover is now known as quique|rover|lch
12:35 <weshay_pto> quique|rover|lch, chkumar|ruck sorry guys
12:35 <weshay_pto> my alarm did not triggr
12:35 <chkumar|ruck> quique|rover|lch: ack
12:35 <weshay_pto> *trigger
12:37 <weshay_pto> quique|rover|lch, is there a patch to fix fs19?
12:37 <chkumar|ruck> weshay_pto: we got it covered
12:38 <weshay_pto> cool
12:38 <weshay_pto> thanks
12:46 *** jfrancoa has quit IRC
12:55 *** trown|outtypewww is now known as trown|brb
12:56 *** trown|brb is now known as trown
12:58 <agopi> weshay_pto, rlandy the restart doesn't seem to have helped yet, maybe we need to do a jjb push as well?
13:03 *** nodoz has joined #oooq
13:03 *** jfrancoa has joined #oooq
13:03 *** ratailor has quit IRC
13:04 <marios> scrum?
13:04 * marios foreveralone
13:04 *** holser_ has quit IRC
13:04 *** holser__ has joined #oooq
13:07 *** jfrancoa has quit IRC
13:11 *** jfrancoa has joined #oooq
13:14 <chkumar|ruck> quique|rover|lch: the timeout has returned
13:19 <weshay_pto> chkumar|ruck, quique|rover|lch this may help https://review.openstack.org/#/c/597222/
13:21 *** quique|rover|lch is now known as quiquell|rover
13:22 <ssbarnea> the fc28 image brings so many joys.... FFS, why is there no "python" executable? is it so hard to make a symlink on install of python3?
13:22 *** skramaja has quit IRC
13:22 <ssbarnea> this translates to: ansible broken by default
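The usual workaround for the missing /usr/bin/python that ssbarnea is describing, and which comes up again later in this log as the "ansible_python_interpreter hack", is to point Ansible at python3 explicitly. A minimal sketch, assuming a Fedora 28 host; the host name "fc28-node" and the playbook name are illustrative placeholders, not taken from the CI jobs:

    # Point Ansible at python3 on a host that ships no /usr/bin/python.
    # "fc28-node" and site.yml are illustrative placeholders.
    ansible-playbook -i 'fc28-node,' site.yml \
      -e ansible_python_interpreter=/usr/bin/python3

    # Alternatively, set it per host in the inventory instead of per run:
    #   fc28-node ansible_python_interpreter=/usr/bin/python3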
13:23 <quiquell|rover> weshay_pto: the current timeouts are related to not having promotions
13:24 <chkumar|ruck> and for promotion we need the telemetry patches passing
13:24 <chkumar|ruck> it is going to be a kind of chicken-and-egg problem
13:25 <quiquell|rover> chkumar|ruck: Like all the current ruck/rovering
13:25 <quiquell|rover> chkumar|ruck: for rocky we are not affected by telemetry?
13:25 <chkumar|ruck> quiquell|rover: nope
13:26 <chkumar|ruck> quiquell|rover: for master, as of 2 hours back, 4-5 jobs timed out
13:26 <weshay_pto> chkumar|ruck, Emilien will help you get the patches you need through the gate
13:30 <weshay_pto> missing successful jobs: [u'periodic-multinode-1ctlr-featureset016', u'periodic-multinode-1ctlr-featureset017', u'periodic-ovb-1ctlr_1comp-featureset020', u'periodic-singlenode-featureset050']
13:36 <chkumar|ruck> quiquell|rover: is there some problem with rdo cloud? the connection is getting closed on rdo third-party jobs: https://logs.rdoproject.org/27/603627/1/openstack-check/legacy-tripleo-ci-centos-7-containers-multinode-upgrades-pike-branch/e5c7d00/logs/undercloud/home/zuul/vxlan_networking.sh.log.txt.gz#_2018-09-19_11_12_02
13:38 *** udesale has joined #oooq
13:38 <quiquell|rover> weshay_pto: is this master?
13:38 <quiquell|rover> chkumar|ruck: I think jfrancoa has fixes for it
13:39 <chkumar|ruck> quiquell|rover: https://review.openstack.org/#/c/602247/ the rocky noop change is not getting auto-triggered?
13:40 <quiquell|rover> chkumar|ruck: it was doing it; what is missing is the RDO jobs
13:40 <jfrancoa> chkumar|ruck: hey, yes, this is the never-ending vxlan networking issue I've been trying to solve for weeks. The thing is that mixed_upgrades jobs do not update repos in subnodes, so an old version of ovs is installed and the script fails
13:40 <quiquell|rover> chkumar|ruck: Timeout at my reproducer :-(
13:41 <jfrancoa> chkumar|ruck: my last try to see if it gets fixed was https://review.openstack.org/#/c/603709/ but I don't have much faith
13:41 <chkumar|ruck> quiquell|rover: the rdo noop job reached tempest execution
13:41 <chkumar|ruck> for rocky fs020
13:41 <quiquell|rover> chkumar|ruck: did they time out?
13:42 <chkumar|ruck> quiquell|rover: still running
13:42 <quiquell|rover> chkumar|ruck: yep, the connection issue looks very infra; the python client is just a guard against it
13:42 <quiquell|rover> Let's talk in #tripleo to make some noise
13:51 *** rascasoft has joined #oooq
13:52 <jfrancoa> chkumar|ruck: about the patch I mentioned above, I'm getting my faith back. At least it's installing the undercloud and it didn't fail in vxlan_networking
14:04 *** ykarel is now known as ykarel|away
14:04 <chkumar|ruck> jfrancoa: cool
14:08 *** jtomasek has quit IRC
14:08 <ykarel|away> jfrancoa, but that looks wrong
14:08 <ykarel|away> looks like repo_setup will install the same repo on both the undercloud and the overcloud
14:09 <ykarel|away> which I think is wrong in the case of a mixed upgrade, no?
14:11 <jfrancoa> ykarel|away: yes, I just wanted to confirm the issue is with that. Now the best way to solve it would be to spawn the subnode with the latest repos. I'm not really sure where that is done atm, whether in node-setup or somewhere else
14:12 <ykarel|away> jfrancoa, so do you know what caused this issue?
14:15 <jfrancoa> ykarel|away: yes, the issue is with the openvswitch version installed in the overcloud VM, it's too old: https://logs.rdoproject.org/27/603627/1/openstack-check/legacy-tripleo-ci-centos-7-containers-multinode-upgrades-pike-branch/e5c7d00/logs/undercloud/home/zuul/vxlan_networking.sh.log.txt.gz#_2018-09-19_11_12_01
14:16 <ykarel|away> jfrancoa, that I know; what caused that?
14:17 <ykarel|away> jfrancoa, if not, I think marios can help with how to fix that, as I think this happened with https://review.openstack.org/#/c/583195
14:17 <jfrancoa> ykarel|away: about that I'm not sure, I don't know where we configure the default repositories for the spawned vms
14:18 <marios> ykarel|away: no, didn't hit that one. jfrancoa is this a new bug??
14:19 <jfrancoa> marios: no, it's this vxlan_networking thing that has been happening for some weeks already
14:19 <ykarel|away> marios, so you tested mixed upgrade earlier?
14:20 <jfrancoa> marios: all mixed_upgrades jobs are failing in the vxlan_networking script
14:20 <marios> ah I think that may be related to what rlandy is working on
14:20 <jfrancoa> ykarel|away: the curious thing is that the RDO upgrades job passes in the patch you referenced
14:22 <quiquell|rover> jfrancoa: It merged !!! :-)
14:22 <quiquell|rover> Oops
14:22 <quiquell|rover> Meant to say that to rfolco :-)
14:23 <ykarel|away> jfrancoa, hmm that passed, so maybe something else is breaking then; no logs there, so I can't say much
14:27 <jfrancoa> ykarel|away: so I managed to narrow down when this started happening, it was on August 24th. I'll try to dig in deeper (the problem is that logs from that day were already erased)
14:28 <ykarel|away> jfrancoa, cool, yeah it would be easier to solve if the root cause is found
14:35 <ykarel|away> jfrancoa, https://review.openstack.org/#/c/587012/
14:35 <ykarel|away> jfrancoa, and see the job result in the revert: https://review.openstack.org/#/c/597238/
14:35 <ykarel|away> marios, -1 ^^
14:35 *** gouthamr_ is now known as gouthamr
14:36 <jfrancoa> ykarel|away: aha, right! so it's that
14:36 <jfrancoa> ykarel|away: and the date matches
14:37 *** udesale has quit IRC
14:39 <ykarel|away> jfrancoa, yes, and your recheck there can confirm that
14:39 <rfolco> quiquell|rover, :)
14:40 <jfrancoa> ykarel|away: thanks a lot for the help, I was going crazy hitting blindly at different spots
14:40 <ykarel|away> jfrancoa, no issue, it's best to find the cause first of all, before trying to solve it
14:41 <ykarel|away> and if that gets confirmed, people can help you with how to fix it
14:43 *** quiquell|rover is now known as quique|rover|off
14:43 <jfrancoa> marios: so if --bootstrap-subnodes is not used anymore in upstream ci, what's the set of tasks that replaces it? so I could link them in the base zuul template job in rdo-jobs
14:44 *** jtomasek has joined #oooq
14:48 <marios> jfrancoa: so tripleo.sh --bootstrap-subnodes was basically doing 2 main things: repo setup with some package installs/removals, and setting up the ceph loop device
14:49 <marios> jfrancoa: the repo stuff is already being handled by the repo-setup role
14:49 <marios> jfrancoa: so we only needed the ceph thing
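For context on that remaining piece: the "ceph loop device" boils down to exposing a plain file as a block device that ceph can treat as an OSD disk. A rough sketch under assumed values; the path, size, and device handling below are illustrative, not the exact commands from tripleo.sh:

    # Create a sparse backing file and attach it to a free loop device.
    sudo dd if=/dev/zero of=/var/lib/ceph-osd.img bs=1M count=0 seek=7168   # ~7 GiB sparse file
    loopdev=$(sudo losetup -f --show /var/lib/ceph-osd.img)                 # e.g. /dev/loop3
    echo "ceph loop device available at ${loopdev}"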
14:49 <jfrancoa> marios: that's right for most of the jobs, but not in the mixed_upgrades ones: https://review.openstack.org/#/c/603709/1/playbooks/multinode-undercloud.yml
14:50 <jfrancoa> marios: so when it tries to run the vxlan_networking script in the subnode, no repo setup has been done
14:50 <jfrancoa> marios: and the job fails because of the old ovs version
14:51 <marios> jfrancoa: sounds right; did you check for repos on that job you pointed at?
14:51 * marios looks
14:52 <jfrancoa> marios: yes, there's epel and a little bit more. Compare that to the undercloud, which contains all the quickstart-* and delorean* repos
14:53 <marios> jfrancoa: https://logs.rdoproject.org/27/603627/1/openstack-check/legacy-tripleo-ci-centos-7-containers-multinode-upgrades-pike-branch/e5c7d00/logs/undercloud/etc/yum.repos.d/
14:53 <jfrancoa> marios: and we didn't observe this issue in the OpenStack infra jobs because we disable the vxlan_networking script run there for all of them
14:53 <marios> jfrancoa: ack, but do you really propose to revert rather than fix?
14:54 <marios> jfrancoa: (I saw you rechecked the revert that weshay posted; the issue it was posted for has since been fixed, as I linked there)
14:54 <jfrancoa> marios: not really, I was wondering if there would be a way to run the repo-setup for the legacy-dsvm-base-multinode zuul job template in rdo-jobs
14:55 <jfrancoa> marios: it was just to confirm that the issue is related to that. I don't think reverting it is the solution, it would be better to solve it in the rdo cloud jobs, which are the ones that still need this vxlan setup
14:56 <marios> jfrancoa: ack, fyi there was this https://review.rdoproject.org/r/#/c/15000 related to that change for rdo-jobs (via the commit message)
14:56 <marios> jfrancoa: maybe filing a bug would be excellent please thank you
14:57 <jfrancoa> marios: there is https://launchpad.net/bugs/1791115 already opened. Will check the patch, thanks a lot
14:57 <openstack> Launchpad bug 1791115 in tripleo "[ERROR] /opt/stack/new/devstack/functions-common:Detected fatal at RDO jobs" [High,In progress] - Assigned to Jose Luis Franco (jfrancoa)
14:58 <marios> thank you jfrancoa
15:01 *** nodoz has quit IRC
15:07 *** ykarel|away is now known as ykarel
15:08 <chkumar|ruck> quique|rover|off: the noop job timed out
15:13 <jfrancoa> marios: ykarel: hey, do you know if there is a way to tell repo-setup to set up a specific release's repos? From what I can see, it takes the values from the release file passed in, release/tripleo-ci/<release>.yaml
15:16 <marios> jfrancoa: you can see the repo-setup right in the release files, like https://github.com/openstack/tripleo-quickstart/blob/2b597361c13249501fc5d63dfb0c2567afa75499/config/release/tripleo-ci/pike.yml#L47
15:17 *** udesale has joined #oooq
15:17 <marios> jfrancoa: so it looks like the baseurl is the way to specify 'pike'
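Put differently, the release a node ends up on is driven by which DLRN baseurl its repo file points at. A hedged illustration; the repo id and URL below are illustrative, not copied from the release file marios links:

    # Whatever writes this file effectively selects the release: swap the
    # baseurl and the node tracks a different DLRN content stream.
    printf '%s\n' \
      '[delorean]' \
      'name=delorean-pike' \
      'baseurl=https://trunk.rdoproject.org/centos7-pike/current/' \
      'gpgcheck=0' \
      'enabled=1' | sudo tee /etc/yum.repos.d/delorean.repo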
15:28 *** dtantsur is now known as dtantsur|brb
15:32 *** abishop has quit IRC
15:34 *** chkumar|ruck is now known as chkumar|off
15:38 <ykarel> jfrancoa, yes, correct, it takes the values from the release file, and since the release file contains repos for a single release, it seems it needs to be adjusted somehow, as the repo-setup role works based on those values
15:38 <ykarel> earlier, tripleo.sh took care of that based on the UPGRADE_RELEASE var
15:38 <jfrancoa> ykarel: yes.. it's getting harder
15:39 <ykarel> jfrancoa, it seems solvable though :)
15:39 <jfrancoa> ykarel: exactly... and I realized now that the same is happening in the scenario000 non-voting upgrades job http://logs.openstack.org/60/598560/2/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades/73d1a9e/logs/subnode-2/etc/yum.repos.d/
15:42 <ykarel> mmm, looks like the vxlan script didn't run there, so it didn't fail
15:42 <jfrancoa> ykarel: I'm thinking about doing some full refactoring and getting rid of the upgrade release files (the <release>-undercloud-<release-1>-overcloud.yaml files), but that might be too ambitious, although now could be a good moment... anyway, the jobs are already broken
15:42 *** panda has quit IRC
15:43 <ykarel> jfrancoa, I think there is some work already going on, but not sure; maybe quique|rover|off knows
15:43 <jfrancoa> ykarel: exactly, because in all the openstack infra jobs it's disabled by default http://logs.openstack.org/60/598560/2/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades/73d1a9e/job-output.txt.gz#_2018-09-19_08_17_44_234509
15:43 <ykarel> if not, it's good to simplify stuff
15:43 <jfrancoa> ykarel: I'll ask him tomorrow and see if we can work together on it
15:43 <ykarel> jfrancoa, cool
15:43 <jfrancoa> ykarel: I think I'll leave it here and try tomorrow to implement it with a fresh mind
15:44 <jfrancoa> ykarel: thanks for the help
15:44 * ykarel also needs to leave
15:46 *** dtantsur|brb is now known as dtantsur
15:54 *** jfrancoa has quit IRC
15:56 *** ykarel is now known as ykarel|away
15:56 *** kopecmartin has quit IRC
16:01 *** ykarel|away has quit IRC
16:04 *** abishop has joined #oooq
16:06 <ssbarnea> apparently there is some hope that a future version of ansible will no longer need the ansible_python_interpreter hack, a good reason to upgrade. https://github.com/ansible/ansible/issues/45852
16:12 *** agopi has quit IRC
16:23 *** dtantsur is now known as dtantsur|afk
16:28 *** udesale has quit IRC
16:36 *** holser__ has quit IRC
16:36 *** ykarel has joined #oooq
16:38 *** panda has joined #oooq
16:50 *** agopi has joined #oooq
17:08 *** dsneddon has quit IRC
17:08 *** dsneddon has joined #oooq
17:09 *** trown is now known as trown|lunch
17:16 *** vinaykns has joined #oooq
17:22 <ssbarnea> panda: if you can +W it would be great https://review.openstack.org/#/c/588587/ thanks (python3-first)
17:30 <panda> ssbarnea: any way to check that flake8 ran with python3?
17:30 <panda> ssbarnea: http://logs.openstack.org/87/588587/3/check/openstack-tox-linters/b85fdc9/job-output.txt.gz#_2018-09-19_10_12_20_752425
17:30 <panda> ssbarnea: I see this
17:35 <ssbarnea> panda: only experience will tell, but if you want, I do know a trick that makes it more obvious: python3 -m flake8
17:36 <ssbarnea> panda: tox never failed to use the correct tools, if installed. The only case where I was tricked was when flake8 was not installed as a dependency and it ended up using the system one.
17:36 <ssbarnea> I think the current approach is more than safe.
17:37 <ssbarnea> btw, yup, I can see the flake8 path is inside the venv, so it is using the py3 one.
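A short sketch of the checks being discussed here; the tox env name "linters" is an assumption based on the openstack-tox-linters job, the rest is standard tox/flake8 usage:

    # Run flake8 through python3 explicitly, as ssbarnea suggests.
    python3 -m flake8 --version

    # Or build the tox env without running its commands and inspect the interpreter.
    tox -e linters --notest
    .tox/linters/bin/python --version
    .tox/linters/bin/flake8 --version   # the venv's flake8, not the system one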
17:40 *** vinaykns has quit IRC
17:52 *** vinaykns has joined #oooq
17:54 *** bogdando has quit IRC
17:59 <ssbarnea> weshay_pto and anyone else interested in fc28: you may want to follow this ticket https://github.com/ansible/ansible/issues/45852 -- likely ansible 2.8 (maybe a backport) will no longer need the ansible_python_interpreter hack.
18:08 *** trown|lunch is now known as trown
18:26 *** ykarel has quit IRC
18:49 *** vkmc is now known as vkmc|afk
20:00 *** abishop has quit IRC
20:29 *** jtomasek has quit IRC
21:04 *** trown is now known as trown|outtypewww
21:28 *** agopi has quit IRC
21:44 *** dsneddon has quit IRC
21:45 *** dsneddon has joined #oooq
22:30 *** agopi has joined #oooq
22:40 *** vinaykns has quit IRC
