Friday, 2019-12-06

*** rfolco has joined #oooq00:01
*** matbu has quit IRC00:13
*** matbu has joined #oooq00:18
rlandy|afkok - rdocloud should be clean now00:26
rlandy|afksshnaidm|afk: ^^00:26
*** rfolco has quit IRC00:44
*** rlandy|afk has quit IRC01:03
*** rfolco has joined #oooq01:13
*** rfolco has quit IRC01:33
*** rfolco has joined #oooq01:33
*** rfolco has quit IRC01:38
*** surpatil has joined #oooq03:15
*** surpatil is now known as surpatil||127_0_03:28
*** surpatil||127_0_ is now known as surpatil03:29
*** surpatil is now known as surpatil|127_0_003:29
*** surpatil|127_0_0 is now known as surpatil03:29
*** surpatil is now known as surpatil|wrf03:32
*** surpatil|wrf is now known as surpatil|wfr03:32
*** surpatil|wfr is now known as surpatil|wfh03:33
*** openstackstatus has joined #oooq03:41
*** ChanServ sets mode: +v openstackstatus03:41
*** udesale has joined #oooq04:00
*** ykarel|away has joined #oooq04:01
*** bhagyashris has joined #oooq04:03
*** ykarel|away is now known as ykarel04:21
*** bhagyashris has quit IRC04:43
*** bhagyashris has joined #oooq04:49
*** soniya29 has joined #oooq05:03
*** raukadah is now known as chkumar|ruck05:20
*** skramaja has joined #oooq05:31
*** surpatil|wfh has quit IRC05:45
*** epoojad1 has joined #oooq05:52
*** jtomasek has joined #oooq06:01
*** jtomasek has quit IRC06:06
*** jtomasek has joined #oooq06:12
*** marios|rover has joined #oooq06:29
*** soniya29 has quit IRC06:31
*** soniya29 has joined #oooq06:39
*** dsneddon has quit IRC06:39
*** d0ugal has quit IRC06:48
*** d0ugal has joined #oooq06:51
*** dsneddon has joined #oooq07:04
*** epoojad1 has quit IRC07:06
*** epoojad1 has joined #oooq07:07
*** ykarel is now known as ykarel|lunch07:46
pandamarios|rover: enqueue https://review.rdoproject.org/r/23931 08:21
marios|roverneeds votes and +A if the zuul is ready please https://review.opendev.org/#/c/696874/ https://review.opendev.org/#/c/696871/ https://review.opendev.org/#/c/696870/ https://review.opendev.org/#/c/696872/08:24
marios|roverthanks and happy friday!08:25
marios|roverpanda: ack08:25
*** saneax has joined #oooq08:31
*** tesseract has joined #oooq08:31
marios|roveranyone feels like merging that please thanks https://review.opendev.org/#/c/695878/08:48
chkumar|ruckmarios|rover: I can but no rights08:51
marios|roverchkumar|ruck: o/ i thought you are travelling today08:52
chkumar|ruckmarios|rover: yes in another 30 mins08:52
chkumar|ruckmarios|rover: regarding current master promotion, rdo master dlrn current is not consistent due to failure caught in rdo spec file08:56
*** ykarel|lunch is now known as ykarel08:57
marios|roverchkumar|ruck: i saw the pipelines are a mess08:57
marios|roverchkumar|ruck: i am ignoring it for now hoping it will go away08:57
marios|roverchkumar|ruck: also rhel8 promotion seems fcucked... like i see08:58
marios|rover        * http://logs.rdoproject.org/19/23919/2/check/periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-master/c21429e/logs/undercloud/home/zuul/undercloud_install.log.txt.gz 08:58
marios|rover        * #TODO bad promotion? we didn't have one!08:58
marios|rover         the test for https://bugs.launchpad.net/tripleo/+bug/1853652 debug patch @ http://logs.rdoproject.org/19/23919/2/check/periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-master/c21429e/logs/undercloud/home/zuul/undercloud_install.log.txt.gz * 2019-12-05 12:50:34 | tripleo_common.image.exception.ImageNotFoundException: Not found image:08:58
openstackLaunchpad bug 1853652 in tripleo "openstack overcloud node provide --all-manageable timing out and failing periodic rhel-8-ovb-3ctlr_1comp-featureset001-master" [Critical,Triaged]08:58
marios|roverdocker://trunk.registry.rdoproject.org/tripleomaster/rhel-binary-cron:36a84820e51bad57c6bbb92429f3afb3d9da29c2_6e3b098e 08:58
marios|roverchkumar|ruck: anyway ack & safe travels i will review the pipelines in due course09:07
chkumar|ruckmarios|rover: let me fix that for you09:09
*** dsneddon has quit IRC09:11
chkumar|ruckmarios|rover: maybe updating this https://review.rdoproject.org/r/#/c/23919/2/.zuul.yaml with a container build dependency will help it get the container09:13
chkumar|ruckmarios|rover: something like this https://review.rdoproject.org/r/#/c/23825/10/zuul.d/projects.yaml and also include a depends-on for https://review.opendev.org/#/c/697236/ (container build)09:14
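(A rough sketch of the kind of zuul project stanza chkumar|ruck is describing: make the ovb job depend on a container build job in the same pipeline. The job names below are illustrative placeholders rather than the content of the linked reviews, and the Depends-On: https://review.opendev.org/#/c/697236/ footer would go in the testproject commit message, not in the yaml.)

    - project:
        check:
          jobs:
            # hypothetical job names, shown only to illustrate the dependency wiring
            - periodic-tripleo-ci-rhel-8-containers-build-master
            - periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-master:
                dependencies:
                  - periodic-tripleo-ci-rhel-8-containers-build-master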
marios|roverthanks chkumar|ruck09:18
*** derekh has joined #oooq09:32
*** dsneddon has joined #oooq09:47
*** dsneddon has quit IRC09:53
pandamarios|rover: I gave all I can in those reviews, I'm not allowed (nor do I feel comfortable) giving +210:07
*** dsneddon has joined #oooq10:09
marios|roverpanda: ack thanks10:10
marios|roverpanda: not allowed? you mean cos other repos? yeah but if you're referring to the 'make undercloud upgrade voting' then surely you are!10:11
marios|roverpanda: but thanks will check them again in a bit10:11
pandamarios|rover: I am ?10:12
pandamarios|rover: I'll +W every single mf there.10:12
*** derekh has quit IRC10:12
marios|roverpanda: not sure what you're referring to but since it is about jobs.. then we own those so...10:12
*** dsneddon has quit IRC10:13
*** dsneddon has joined #oooq10:14
*** derekh has joined #oooq10:18
*** dsneddon has quit IRC10:19
*** d0ugal has quit IRC10:24
*** d0ugal has joined #oooq10:27
*** dtantsur|afk is now known as dtantsur10:28
*** rfolco has joined #oooq10:28
pandarfolco: enqueue https://review.rdoproject.org/r/23931 10:30
*** d0ugal has quit IRC10:31
rfolcopanda, ack, will do asap10:31
*** holser has joined #oooq10:47
*** dsneddon has joined #oooq10:52
*** derekh has quit IRC10:53
*** derekh has joined #oooq10:55
*** dsneddon has quit IRC10:56
*** derekh has quit IRC10:59
*** derekh has joined #oooq10:59
zbrpanda: marios|rover : easy review: https://review.rdoproject.org/r/#/c/23661/11:03
marios|roverzbr: make me11:05
*** dsneddon has joined #oooq11:05
zbrthanks!11:05
*** dsneddon has quit IRC11:09
*** dsneddon has joined #oooq11:10
*** dsneddon has quit IRC11:15
*** dsneddon has joined #oooq11:19
*** dsneddon has quit IRC11:24
*** sshnaidm|afk is now known as sshnaidm|off11:28
*** zbr has quit IRC11:47
*** zbr has joined #oooq11:47
*** epoojad1 has quit IRC11:47
marios|roverneeds merge please https://review.opendev.org/#/c/697413/11:50
*** tosky has joined #oooq11:50
*** dsneddon has joined #oooq11:52
*** udesale has quit IRC12:00
*** dsneddon has quit IRC12:02
*** rfolco is now known as rfolco|bbl12:03
*** dsneddon has joined #oooq12:34
*** saneax has quit IRC12:43
*** rlandy has joined #oooq13:03
rlandymarios|rover: hi - how are we doing ruck/rover wise?13:11
rlandydid I kill anything yesterday?13:11
rlandymarios|rover: also there are two reviews from stevebaker13:11
marios|roverrlandy: o/ i updated https://etherpad.openstack.org/p/ruckroversprint19 as usual... ongoing promotion blocker master/train are the headline i guess13:12
marios|roverrlandy: not aware of any new fires or fallout from your merges13:12
rlandymarios|rover: ok - then corrected them all yesterday13:12
rlandymarios|rover: pls see https://review.opendev.org/#/c/680571 and https://review.opendev.org/#/c/680573/13:12
marios|roverrlandy: ack in a few finish current thing13:13
rlandyhttps://review.rdoproject.org/zuul/builds?pipeline=openstack-component-compute 13:17
rlandyrfolco|bbl: ^^ look what we have13:17
marios|roverrlandy: looks like we may have issues with rdo cloud13:18
marios|roverrlandy: currently the train pipeline is running and already some jobs hit RETRY_LIMIT13:18
rlandymarios|rover: ack - spent some time on that yesterday13:18
rlandymarios|rover: we had growing stacks13:18
rlandynot being deleted13:19
marios|roverrlandy: i already saw that in today's stein run https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-24hr lots of jobs RETRY_LIMIT & no logs13:19
marios|roverrlandy: ah ack didn't know it was ongoing from yesterday13:19
rlandymarios|rover: we need to catch one job with that before it bails and see13:19
rlandylet me look at the tenant13:20
marios|roverrlandy: few jobs still running in that pipeline13:21
rlandyhttps://review.opendev.org/gitweb?p=openstack/tripleo-ci.git;a=commitdiff;h=None in the component pipeline13:21
marios|roverrlandy: https://review.rdoproject.org/zuul/stream/58eef101a4e740ac8f14076b2ed84d76?logfile=console.log fs 20 13:21
marios|roverack13:21
rlandymarios|rover: ^^ what is this console - do we expect a retry limit here?13:22
marios|roverrlandy: well i don't know ... just that some of the jobs there hit that13:23
marios|roverrlandy: and i just picked one of the ongoing ones. i don't know of a way to predict which one will hit that :D13:23
rlandy2019-12-06 13:17:58.967060 | primary |   "msg": "Data could not be sent to remote host \"38.145.32.110\". Make sure this host can be reached over ssh: ssh: connect to host 38.145.32.110 port 22: No route to host\r\n",13:23
rlandy2019-12-06 13:17:58.967353 | primary |   "unreachable": true13:23
rlandy2019-12-06 13:17:58.955826 | [Zuul] Log Stream did not terminate13:23
rlandymarios|rover: http://logs.rdoproject.org/openstack-periodic-latest-released/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-7-standalone-train/e0cc63a/ - is marked retry-limit13:25
rlandybut has logs13:25
rlandywith errors posted above13:25
marios|roverrlandy: ack good thanks noting13:26
rlandymarios|rover: looks like kforde isn't around13:26
rlandylooking through https://review.rdoproject.org/zuul/builds?result=RETRY_LIMIT 13:27
rlandy2019-12-06 12:04:35.850539 | [Zuul] Log Stream did not terminate13:28
rlandy2019-12-06 12:04:35.850890 | primary | ERROR13:28
rlandy2019-12-06 12:04:35.851050 | primary | {13:28
rlandy2019-12-06 12:04:35.851113 | primary |   "msg": "Data could not be sent to remote host \"38.145.34.71\". Make sure this host can be reached over ssh: ssh: connect to host 38.145.34.71 port 22: No route to host\r\n",13:28
rlandy2019-12-06 12:04:35.851167 | primary |   "unreachable": true13:28
rlandy^^ that's a retry problem13:28
*** skramaja has quit IRC13:36
rlandyissue with collect logs connection13:43
rlandymarios|rover: ^^13:45
marios|roverrlandy: looks like chkumar|ruck noted that on the etherpad before he left this morning "    https://review.rdoproject.org/zuul/builds?result=RETRY_LIMIT it again arrived"13:45
rlandyyeah but the job ran fine13:45
rlandyand fails collecting logs13:45
rlandyyou'll note that there are no logs13:46
rlandymarios|rover: when was the first time you saw the RETRY_LIMIT error?13:51
rlandyI think I may know the reason13:51
marios|roverrlandy: me today13:53
rlandymarios|rover: may be the change to the clean up script13:54
rlandyI will try reverse it13:54
marios|roverrlandy: via http://dashboard-ci.tripleo.org/d/YRJtmtNWk/cockpit?orgId=1&fullscreen&panelId=231 it seems to have peaked already today13:55
marios|roverrlandy: maybe it should be better... but still of the order of 40 undeleted stacks13:55
marios|roverrlandy: makes sense then 15:54 < rlandy> marios|rover: may be the change to the clean up script13:55
*** soniya29 has quit IRC13:55
rlandymarios|rover: pls see #sf-ops13:57
marios|roverack rlandy13:58
*** bhagyashris has quit IRC14:07
rlandysshnaidm|off: wrt cleanup scripts ... why move port deletion to the end? https://github.com/rdo-infra/ci-config/commit/55e4095c3a4d9a5d8d474482d7423557d68b632c14:12
rlandystacks fail deletion due to ports14:12
rlandyif we get rid of the ports first, the stacks delete14:12
rlandyotherwise they become DELETE_FAILED14:13
sshnaidm|offrlandy, heat actually should delete all its resources, including ports14:15
sshnaidm|offrlandy, are you sure it's because of ports? and which ports exactly?14:15
rlandysshnaidm|off: heat should but when heat doesn't the clean up does14:16
rlandyyes - I'm sure14:16
sshnaidm|offrlandy, the only port that can't be deleted is port on undercloud that is connected to provision_ net of heat14:16
sshnaidm|offrlandy, we run stack delete before and heat should clean it up14:16
rlandysshnaidm|off: correct but a lot of stacks get stuck due to ports that can't be deleted14:17
sshnaidm|offrlandy, mmm.. but maybe these undercloud ports don't allow it to delete, that makes sense14:17
rlandysshnaidm|off: so the order was important14:17
sshnaidm|offrlandy, yeah, need to separate these ports from others.. I think we can revert it for now, will work on that later14:17
rlandysshnaidm|off: I'd be ok with it, if and only if we reran the stack deletion after the port deletion14:18
rlandysshnaidm|off: we are debugging a sporadic retry failure14:18
rlandywhere at the point we reach collect logs, the node can;t be reached14:18
sshnaidm|offrlandy, just the previous version didn't work well either - deleting ports, which was mostly unnecessary, took a long time and stacks weren't deleted14:18
rlandysshnaidm|off: agreed that was not perfect14:19
rlandysshnaidm|off: the better way to do it is to isolate the ports14:19
rlandyor14:19
rlandyrun stack deletion again after port deletion14:19
rlandyso delete stacks - those that can delete will14:19
rlandythose that fail, will wait for port deletion14:20
sshnaidm|offwell, stacks can't be deleted if these ports are up14:20
rlandyand then get a second chance14:20
rlandyheat can take care of it in some cases14:20
rlandysome not14:20
sshnaidm|offrlandy, wait, why not if undercloud host is gone actually14:20
sshnaidm|offwell, it's still not a part of heat stack..14:21
rlandywe have a script to hit servers as well - a different one14:21
sshnaidm|offneed to look for these ports exactly, delete them first, delete stacks, and then all the rest of ports14:21
rlandyyeah - I'd agree with looking more closely for the ports14:22
rlandyI suspect that the retry getting to the nodes in collect logs has something to do with deleting ports14:23
rlandywe should only delete donw ports14:23
rlandydown14:23
rlandythough14:23
rlandybut maybe if the old nodes remain, there is the ip reuse14:23
rlandyfloating ip as we have seen before14:24
rlandysshnaidm|off: so yeah - let's revert the change, rethink how to define the ports better14:24
sshnaidm|offwe do delete only down ports14:24
sshnaidm|offneed to look at ports devices or networks14:25
sshnaidm|offrlandy, ack to revert14:25
rlandywe delete subnets and networks14:27
rlandyafterwards14:27
rlandyso we can be more selective14:27
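(A minimal ansible-style sketch of the ordering being argued for above: delete the DOWN ports first so stack deletion doesn't wedge into DELETE_FAILED, then delete the stacks. This is an illustration only, not the actual ci-config cleanup script; stacks_to_delete is a placeholder for the list of expired stacks gathered elsewhere, e.g. anything older than a few hours.)

    - hosts: localhost
      gather_facts: false
      tasks:
        - name: list all ports as json so we can filter on status
          command: openstack port list -f json
          register: port_list
          changed_when: false

        - name: delete only the DOWN ports, which are what block stack deletion
          command: openstack port delete {{ item.ID }}
          loop: "{{ port_list.stdout | from_json | selectattr('Status', 'equalto', 'DOWN') | list }}"
          ignore_errors: true

        - name: delete the expired stacks - with the blocking ports gone they should not end up DELETE_FAILED
          command: openstack stack delete -y {{ item }}
          loop: "{{ stacks_to_delete }}"  # placeholder: expired stack names/ids gathered elsewhere
          ignore_errors: true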
*** fmount has quit IRC14:28
rlandysshnaidm|off: k - proposed revert but needs rebase - fixing14:29
rlandywe can put back the 6 hours14:30
*** fmount has joined #oooq14:31
*** Goneri has joined #oooq14:38
rlandymarios|rover: sshnaidm|off: https://review.rdoproject.org/r/#/c/23994/14:48
*** ykarel is now known as ykarel|afk14:50
marios|roverrlandy: ack checking14:55
rlandysorry - I need to put back the 360 14:56
rlandydone14:57
chkumar|ruckmarios|rover: I think dlrn builds for master are now consistent https://review.rdoproject.org/r/#/q/topic:pin-py2+(status:open+OR+status:merged) got merged 4 hours ago14:58
marios|roverchkumar|ruck: ack cool ... you arrived ... somewhere?14:59
chkumar|ruckmarios|rover: yes15:00
chkumar|ruckin my hotel15:00
marios|roverrlandy: ah rlandy i should have checked irc first i commented on that just now but +2 anyway15:00
marios|roverchkumar|ruck: \o/15:00
marios|roverchkumar|ruck: ok it's all yours15:00
* marios|rover runs15:00
marios|rover(joke)15:01
chkumar|ruckhehe15:01
chkumar|ruckrlandy: is here , so no need to worry15:01
*** chkumar|ruck is now known as raukadah15:01
*** bhagyashris has joined #oooq15:02
rlandyraukadah: it's fine - relax in the hotel ... I agreed to this15:02
rlandymarios|rover: so fs020 is running longer than 5 hours15:03
rlandyso could get deleted15:03
marios|roverrlandy: train?15:03
*** bhagyashris has quit IRC15:04
rlandytrain and master15:04
rlandychecking retry limits15:04
rlandy2019-12-06 14:37:08.326548 | primary |   "msg": "Data could not be sent to remote host \"38.145.32.108\". Make sure this host can be reached over ssh: ssh: connect to host 38.145.32.108 port 22: No route to host\r\n",15:04
rlandy2019-12-06 14:37:08.326620 | primary |   "unreachable": true15:04
rlandy2019-12-06 14:37:08.326710 | primary | }15:04
rlandysame error15:04
rlandymarios|rover: ok fine - let's do a clean revert15:05
mjturekanyone seen an error like this before? https://centos.logs.rdoproject.org/tripleo-upstream-containers-build-master-ppc64le/1811/logs/logs/build.log at 12:25:50 15:05
mjturekI'm assuming it's the container asking the host for the delorean repo?15:05
rlandymarios|rover: sorry - last time15:08
rlandyclean revert15:08
marios|rovermjturek: not seen that - you sure it isn't a network issue, like it couldn't talk to the 172 delorean.repo?15:11
marios|roverrlandy: ack looking15:11
*** TrevorV has joined #oooq15:15
marios|roverif someone has couple mins that needs merge please https://review.opendev.org/#/c/697413/ thanks15:20
marios|roverand that one https://review.opendev.org/#/c/695878/15:22
*** dsneddon has quit IRC15:24
rlandydone15:24
mjturekmarios|rover sure seems like it :-\15:26
mjturekis 172.0.10.0 the docker bridge or something?15:27
mjturekwhere does it get set up?15:27
marios|rovermjturek: i don't know i would have to go dig but it would be somewhere in tripleo-common15:30
mjturekcool thanks15:31
marios|rovermjturek: sorry i am not getting any luck with grep on 'delorean.repo' under tripleo_common/image/* :/15:33
*** dsneddon has joined #oooq15:35
mjturekmarios|rover yeaaah I'm not seeing the IP anywhere in tripleo-common15:35
mjturek grep -rnE "172\.17\.0\.1"15:35
mjturekreturns nothing15:35
marios|rovermjturek: so on the repo setup part... at least i can point you to the upstream job ... we setup repos in pre15:35
marios|rovermjturek: https://github.com/openstack/tripleo-ci/blob/7679b71817aa385ee35003ef7ca569f91bf5fe6f/playbooks/tripleo-buildcontainers/pre.yaml#L6-L715:36
marios|rovermjturek: but that seems to happen during the build in your case?15:36
mjturekmarios|rover I mean we run the pre tasks as well15:37
rlandymarios|rover: ok - so the cleanup should be back - the down ports are minimal now - no failed stacks15:37
rlandyrerunning locally15:37
rlandyhopefully we should be done with retry_limit15:37
rlandylet's see15:37
marios|roverrlandy: k going to recheck that in a bit https://review.rdoproject.org/r/23986 it didn't report yet but some of them hit retry limit15:38
marios|roverrlandy: i posted it cos stein hit retry limits15:38
marios|roverrlandy: well may as well recheck it now? will that work? maybe i should click rebase instead15:38
*** ykarel|afk is now known as ykarel|away15:38
rlandymarios|rover: k - either15:39
rlandyrebase it15:39
rlandyand rerun15:39
marios|roverrlandy: cant rebase testproject15:39
rlandyright15:39
marios|roverrlandy: just hit recheck15:39
marios|rovertyped it anyway15:39
*** dsneddon has quit IRC15:39
rlandychange the commit message if it won't rerun immediately15:39
marios|roverrlandy: abandon restore :D15:40
marios|roverrlandy: that kills the queue15:40
rlandyhack hack15:40
marios|rovercool they are already queued again15:41
marios|roverrlandy: posting the missing train jobs then, gimme a few15:42
marios|roverrlandy: i mean the ones that hit retry - maybe we can get train today too, that would be very nice on a friday15:42
rlandyrechecking https://review.opendev.org/#/c/697236 15:42
rlandysure15:42
rlandygood idea15:42
mjturekmarios|rover: https://centos.logs.rdoproject.org/tripleo-upstream-containers-build-master-ppc64le/1811/logs/consoleText.txt if you want to see the pre-tasks run15:42
marios|roverrlandy: ack that is rhel8 17:42 < rlandy> rechecking https://review.opendev.org/#/c/697236 15:43
rlandycorrect15:43
*** dsneddon has joined #oooq15:44
*** ykarel|away has quit IRC15:49
marios|roverrlandy: https://review.rdoproject.org/r/23995 current train criteria like http://paste.openstack.org/raw/787253/15:54
rlandymarios|rover: ack -ok15:55
rlandymarios|rover: and we are not expecting a master promotion15:55
marios|roverrlandy: not until we get a consistent build, see comment #7 https://bugs.launchpad.net/tripleo/+bug/1855063 15:56
openstackLaunchpad bug 1855063 in tripleo "Master standalone deploy failed with Table 'ovn_revision_numbers' already exists while performing neutron sync" [Critical,Confirmed]15:56
marios|roverrlandy: and related ping from ykarel in rdo earlier 15:16 < ykarel> marios|rover, fyi periodic job started but repo is still not consistent, still some packages pending to build https://trunk.rdoproject.org/centos7-master/queue.html, next run should be good15:56
marios|roverrlandy: 'earlier' like 3 hours ago15:56
rlandymarios|rover: ok so next master run maybe - only centos though15:57
rlandystill waiting on review for rhel15:57
marios|roverrlandy: yes. rhel is blocked on melanx15:57
rlandymarios|rover: rhel train>15:57
marios|roverrlandy: well waiting for either chandan fix or dropping the container but not for us to make that call15:57
marios|roverrlandy: yes15:57
marios|roverrlandy: no15:57
marios|roverrlandy: checking but i thought master15:57
rlandyyes, no  - to which?15:58
marios|rover17:57 < rlandy> marios|rover: rhel train>15:58
marios|roverhttps://bugs.launchpad.net/tripleo/+bug/1855050 references master afaics15:58
openstackLaunchpad bug 1855050 in tripleo "RHEL 8 container build failed while building neutron-mlnx-agent due to missing libvirt-python python-ethtool python-networking-mlnx" [Critical,Confirmed]15:58
marios|roverrlandy: panda: owns train promotions on rhel for now15:58
marios|roverrlandy: via the 'new' promoter fyi15:58
rlandymarios|rover: amazing - not our problem15:58
marios|rover(as an aside since you brought up rhel train)15:58
marios|roverrlandy: see 'promotion pipelines status' in etherpad - i have it updated on all branches with pointers15:59
*** dsneddon has quit IRC15:59
*** fmount has quit IRC16:10
mjturekmarios|rover just a heads up the next ppc64le run is in about 2 hours. Planning to jump on the node and investigate a bit more16:11
*** fmount has joined #oooq16:12
*** rfolco|bbl is now known as rfolco16:12
mjturekif you have any advice on what to look for, let me know!16:12
rfolcorlandy, have you found the root cause of retry_limit ?16:12
rlandymaybe16:13
*** fmount has quit IRC16:13
*** apetrich has quit IRC16:15
*** fmount has joined #oooq16:18
rfolcorlandy, did you run it again in testproject to confirm?16:20
rlandyrfolco: run what against testproject?16:20
rfolcostandlone retry_limit job16:20
rfolcorlandy, what is preventing us to merge https://review.rdoproject.org/r/#/c/23875/16:21
rlandyrfolco: nothing - pls ask another core to vote16:22
rlandymarios|rover: ^^ pls16:22
rlandythere is duplication16:22
rlandyother than that ok16:22
*** dtantsur is now known as dtantsur|afk16:22
marios|roverrlandy: rfolco: don't think i'll do that review justice - i will have a look on monday if it's still around, sorry brain is done16:26
* marios|rover gets ready to go16:26
marios|rovermjturek: o/ it is my eod very soon now... not sure what to suggest for that. rlandy fyi: mjturek had a container build fail ... error trying to fetch delorean.repo16:26
rlandypanda?16:26
*** dsneddon has joined #oooq16:28
rfolcomjturek, link ?16:33
rfolcomjturek, for the error fetching delorean.repo16:33
*** dsneddon has quit IRC16:33
rfolcogot it16:34
rfolcohttps://centos.logs.rdoproject.org/tripleo-upstream-containers-build-master-ppc64le/1811/logs/logs/build.log16:34
marios|roverrfolco: 17:05 < mjturek> anyone seen an error like this before? https://centos.logs.rdoproject.org/tripleo-upstream-containers-build-master-ppc64le/1811/logs/logs/build.log at 12:25:50 16:34
*** marios|rover is now known as marios|out16:34
*** d0ugal has joined #oooq16:35
*** dsneddon has joined #oooq16:36
*** dsneddon has quit IRC16:40
*** marios|out has quit IRC16:42
*** rlandy is now known as rlandy|ruck16:44
rlandy|ruckrfolco: ^^ covering for ruck/rover now16:44
rlandy|ruckrfolco: I am ok with merging that patch16:45
rfolcorlandy|ruck, ok, if no other cores to review, we'll do it on monday16:45
rlandy|ruckrfolco: cores or no cores - monday we merge16:45
rfolcok16:45
rlandy|ruckrfolco: pretty minimal impact16:46
rlandy|ruckpanda: ping ^^ could you take a look at rfolco patch?16:46
rlandy|ruckwe would like another core to look at16:47
rfolcomjturek, looks like the containers cannot access the host for some reason -- http://172.17.0.1/delorean.repo16:49
*** dsneddon has joined #oooq16:57
*** dsneddon has quit IRC17:02
*** derekh has quit IRC17:03
rlandy|ruckbaseurl=https://trunk-staging.rdoproject.org/centos7/component/compute/22/30/2230ec836ba41337e1fa870eeece971649e8bbf7_c9bfa013 17:05
*** ykarel|away has joined #oooq17:16
*** apetrich has joined #oooq17:18
*** rlandy|ruck is now known as rlandy|ruck|brb17:29
*** dsneddon has joined #oooq17:30
*** d0ugal has quit IRC17:32
*** tesseract has quit IRC17:35
*** d0ugal has joined #oooq17:36
zbrrlandy|ruck|brb or rfolco: https://review.rdoproject.org/r/#/c/23984/ (easy)17:48
*** rlandy|ruck|brb is now known as rlandy|ruck17:54
rlandy|ruckzbr: ^^ in return, pls review https://review.rdoproject.org/r/#/c/23875/ for rfolco17:58
zbrrlandy|ruck: sure but i see it WIP with lots of pending changes.17:59
zbri guess i should wait until these are addressed and review it then.18:00
*** fmount has quit IRC18:01
*** holser has quit IRC18:01
rlandy|ruckzbr: ok - panda looked at it - we're good18:03
zbrrfolco: added a few comments, but i still have a few questions, like why not use environment instead of defining vars inside shell snippets.18:09
rfolcozbr, nice thank you18:23
zbrrfolco: did you have something preventing you from using ansible environment to declare these vars?18:24
rfolcozbr, I am following the pattern, not sure how I should pass the env vars18:24
zbrhttps://hackmd.io/7Aj-hWlvSeyqYCUaxqnqoA 18:31
zbri am not asking you to do it like this, just to consider it.18:32
rfolcozbr, thanks for sharing18:32
zbrif the environment way helps us reduce the number of lines and eases maintenance, we should use it.18:32
zbrotherwise we can stick to bash mode.18:33
rfolcozbr, one more avise18:33
rfolcoadvise18:33
mjturekrfolco yeah just some stuff in the cico node during the run and I'm stumped18:35
rfolcohow would you search for some words in output from dlrn, for example? I run dlrnapi and it returns stdout. Then in a follow-up task, I grep a few items with a with_items loop over stdout_lines18:35
rfolcozbr, ^18:35
mjturekthe docker bridge is there, is pingable, and I can curl the delorean file18:35
rfolcomjturek, selinux ?18:35
zbrwhat format is the dlrnapi result?18:35
zbrjson?18:35
mjturekrfolco: but that's all tried locally within the cico node18:35
rfolcoyes it returns { }  json format18:36
rfolcozbr, ^18:36
mjturekrfolco: that's a theory, gonna try locally and see if I get the same result18:36
zbrrfolco: i would use URI if possible because it can load JSON into variable, bypassing bash/grep.18:37
rfolcozbr, I convert stdout w/ toniceyaml function?18:38
zbranother option is to use from_json, like advertised on https://stackoverflow.com/a/40844916/99834 -- but likely will force you to do it in two tasks.18:38
rfolcointeresting... let me try something18:39
zbreither talk with dlrn api directly using uri module (probably is easy), or if you want to still call its cli, load the json in ansible18:39
zbrbut always play locally, save result to a file and play with a local playbook to get it right18:39
zbrwith ansible you almost never get it right from first attempt, at least me.18:39
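(A minimal two-task sketch of the from_json route zbr mentions: run the cli, then load its stdout into a structured variable instead of grepping stdout_lines. The dlrnapi arguments mirror the invocation rfolco pastes further down; commit_hash and distro_hash are placeholder variables, and the exact shape of the returned json is an assumption.)

    - name: query repo status via the dlrnapi cli (placeholder hashes)
      command: >
        dlrnapi --url https://trunk-staging.rdoproject.org/api-centos-master-uc
        repo-status --commit-hash {{ commit_hash }} --distro-hash {{ distro_hash }}
      register: repo_status_raw
      changed_when: false

    - name: parse the cli output as json so it can be filtered with jinja instead of grep
      set_fact:
        repo_status: "{{ repo_status_raw.stdout | from_json }}"

Once repo_status is structured data, filters like selectattr or json_query replace the grep-over-stdout_lines pattern.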
rfolcooh retrieve from dlrnapi with uri18:40
rfolconow I got the idea18:41
rfolcozbr, thx18:41
rfolcowe always did it with the client using a bash script18:41
zbrif i remember its API is REST based and simple enough, sometimes this means that you could save time bypassing it. but first check, i may be wrong.18:42
rfolcozbr, maybe it's a silly question, but how do I get something from https://trunk-staging.rdoproject.org/api-centos-master-uc/18:56
rfolcodlrnapi --url https://staging.rdoproject.org/api-centos-master-uc repo-status --commit-hash bf577e5a999f7db4cb9b790664ad596e1926d9a0 --distro-hash 67a09fe97aa40ef05a73a3a7681700d2c25a58dd --success true18:56
rfolcozbr, how do I transform the args into a url18:57
rfolcozbr, I think I'll stick with cli :)18:58
zbrlook at examples from https://docs.ansible.com/ansible/latest/modules/uri_module.html18:58
zbrin fact you can even do body: "{{ some_dict | to_json }}" to send them.18:58
zbrthe key is to use POST, and not GET to avoid having to encode the URL18:59
zbris also safer18:59
zbrlook at example: Login to a form based webpage18:59
rfolcohmm cool18:59
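(And a sketch of the uri-module route zbr describes just above, POSTing a json body instead of url-encoding the args. The endpoint path and payload keys are assumptions for illustration only - check the real DLRN API before relying on them.)

    - name: ask the dlrn api directly instead of shelling out (hypothetical endpoint and fields)
      uri:
        url: https://trunk-staging.rdoproject.org/api-centos-master-uc/api/repo_status
        method: POST
        body_format: json
        body: "{{ {'commit_hash': commit_hash, 'distro_hash': distro_hash} | to_json }}"
        return_content: true
      register: repo_status_api

    - name: show the parsed response
      debug:
        var: repo_status_api.json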
zbrin fact, i would not be surprised to see someone already doing this with dlrn, use codesearch19:00
zbrmaybe you can find an example and save few minutes19:00
rfolcozbr, panda gave me this dlrn module as example...19:01
rfolcohttps://github.com/softwarefactory-project/dlrnapi_client/blob/master/dlrnapi_client/ansible/example_playbook.yaml19:01
zbrsure, use the dlrn_api module!19:02
zbri did not know about it19:02
rfolcozbr, would have to install it from source I suppose... https://github.com/softwarefactory-project/dlrnapi_client19:04
*** dsneddon has quit IRC19:05
zbrrfolco: not really, we should be able to define it as an ansible dependency19:08
zbrnot sure if it is already published on galaxy as a collection but this can be done easily; even without that we can create a requirements.yml file for it.19:09
zbri can help you next week on that.19:09
mjturekrfolco the plot thickens, locally the containers are building fine and selinux is enabled19:17
*** tosky has quit IRC19:18
mjturekany other ideas?19:20
*** jtomasek has quit IRC19:28
*** dsneddon has joined #oooq19:34
rfolcomjturek, maybe your job is missing a pre playbook? did you compare?19:39
mjturekrfolco we run build-containers pre.yaml19:50
mjturekany other ones you can think of?19:50
mjturekI mean the only difference between the local env and the remote one is that we don't enable  push locally?19:53
rfolcoI've seen this error before but can't find the bug19:56
mjturekdang :(19:57
rfolcomjturek, as a last resort, bring this to the next community call, where everyone is around and hopefully someone will remember19:58
mjturekyeaaaah will do19:59
rfolcomjturek, searching in lp...19:59
mjturekrfolco: do you have the etherpad link for the community call?20:05
rfolcohttps://hackmd.io/IhMCTNMBSF6xtqiEd9Z0Kw 20:05
rfolcomjturek, add your item at the top in the appropriate section20:06
rfolcomjturek, then this is copied over to the agenda before the mtg starts20:06
mjturekwill do!20:06
mjturekalright I have the agenda item in20:10
*** jtomasek has joined #oooq20:13
rfolcomjturek, thanks. Sorry man, no luck in finding the bug. I cannot spot anything from the logs. If selinux and iptables are good... ran out of ideas.20:20
rfolcomjturek, but you had this working before - when did this start?20:20
*** TrevorV has quit IRC20:27
rlandy|ruckcleanup script20:34
mjturekonce we switched to docker20:51
mjturekrfolco ^20:51
mjturekbut I wonder if the silent failure and this are related20:51
*** dsneddon has quit IRC20:53
*** dsneddon has joined #oooq20:53
*** dsneddon has quit IRC20:58
*** dsneddon has joined #oooq20:59
*** dsneddon has quit IRC21:03
*** rlandy|ruck has quit IRC21:11
*** dsneddon has joined #oooq21:31
*** rfolco has quit IRC21:33
*** irclogbot_1 has quit IRC21:45
*** d0ugal has quit IRC21:51
*** ykarel|away has quit IRC22:11
*** jtomasek has quit IRC22:12
*** tosky has joined #oooq22:35
*** jbadiapa has quit IRC23:51
