Tuesday, 2019-02-19

*** vinaykns has quit IRC00:16
*** chandankumar has quit IRC00:28
*** chandankumar has joined #oooq00:29
*** tosky has quit IRC00:57
*** rlandy has quit IRC01:18
*** apetrich has quit IRC03:14
*** rfolco has quit IRC03:36
*** ykarel|away has joined #oooq03:36
*** ykarel|away is now known as ykarel03:37
*** udesale has joined #oooq03:54
*** skramaja has joined #oooq04:01
*** agopi has quit IRC04:06
*** skramaja_ has joined #oooq04:06
*** skramaja has quit IRC04:06
ykarelchandankumar, quiquell|off seems https://review.openstack.org/#/c/637162/ is blocked on gate04:07
*** agopi has joined #oooq04:07
chandankumarykarel: quiquell|off yes, ianw is working on that04:08
ykarelchandankumar, what about adding temporary patch in RDO to unblock promotions04:08
ykareltill gate is clear and it's merged04:08
chandankumarykarel: I am ok with it04:08
ykarelchandankumar, the thing we need to consider is that any voting job should not build diskimage-builder04:09
chandankumarykarel: you mean cherry picking this commit https://review.openstack.org/#/c/637162/ in disk image builder ?04:09
chandankumaron rdo side?04:10
ykarelchandankumar, yes you can apply temporary patch in -distgit04:10
chandankumarykarel: proposing04:10
ykarelchandankumar, ack04:10
ykarelchandankumar, /me looking tripleo jobs in diskimage-builder gate04:10
ykarelchandankumar, so two tripleo jobs run, one voting and the other non-voting (fs001), i think we can ignore failures on the non-voting one after we apply the temporary patch04:12
ykareland for the voting one it seems the diskimage-builder package is not built there04:12
ykarelso we should be fine04:12
*** hamzy has quit IRC04:13
ykareland assuming that it will not be indirectly built via depends-on, given the small number of patches04:13
*** chandankumar is now known as chkumar|ruck04:32
*** radez has quit IRC04:33
*** irclogbot_0 has quit IRC04:55
*** dsneddon has quit IRC04:58
*** jrist has quit IRC05:06
*** ykarel has quit IRC05:07
*** dalvarez has quit IRC05:08
*** ratailor has joined #oooq05:09
*** jrist has joined #oooq05:10
*** ykarel has joined #oooq05:23
*** udesale has quit IRC05:33
*** dsneddon has joined #oooq05:33
*** dsneddon has quit IRC05:39
*** dsneddon has joined #oooq05:42
*** udesale has joined #oooq05:43
*** dsneddon has quit IRC05:47
*** dsneddon has joined #oooq06:07
*** dsneddon has quit IRC06:16
*** dsneddon has joined #oooq06:21
*** dsneddon has quit IRC06:27
*** quiquell|off is now known as quiquell|rover06:29
quiquell|roverchkumar|ruck, ykarel: o/06:30
ykarelo/06:30
quiquell|roverLooks like the bump on puppet-rabbitmq broke us06:30
ykarelwhere?06:30
chkumar|ruckquiquell|rover: \o/06:32
quiquell|roverRocky06:32
*** sanjayu__ has joined #oooq06:33
quiquell|roverhttps://bugs.launchpad.net/tripleo/+bug/181647706:33
openstackLaunchpad bug 1816477 in tripleo "stable/rocky undercloud install fails on duplicate declaration of systemctl-daemon-reload" [Critical,In progress] - Assigned to Alex Schultz (alex-schultz)06:33
quiquell|roverThey have fixed it06:33
quiquell|roverBut I don't remember why we bumped it06:33
quiquell|roverykarel: do we have CI at Rocky rdoinfo to test that?06:33
quiquell|roverIt's an undercloud job06:34
quiquell|roverIs doable06:34
*** ccamacho has quit IRC06:35
ykarelquiquell|rover, for reasoning why bumped some info can be found at https://review.rdoproject.org/r/#/c/18879/06:36
ykarelquiquell|rover, and for jobs i am adding some jobs to test tags update in https://review.rdoproject.org/r/#/c/18887/06:36
*** apetrich has joined #oooq06:38
*** jtomasek has joined #oooq06:38
*** sanjayu__ has quit IRC06:38
quiquell|roverLet's put some CI on rebump and show it to herald06:44
quiquell|roverOr we fix it06:44
quiquell|roverykarel: thanks! BTW standalone is going to cover it?06:47
ykarelquiquell|rover, didn't checked the bump06:47
ykarels/bump/bug06:47
quiquell|roverIt was undercloud job so possibly standalone covers it too06:47
quiquell|roverLet me check06:48
quiquell|roverykarel: https://review.openstack.org/#/c/636873/06:52
quiquell|roverStandalone passing have to be undercloud job06:53
ykarelquiquell|rover, ^^ passing with new puppet-rabbitmq, you checked that?06:54
quiquell|roverIt was not failing?06:54
ykarelquiquell|rover, and which jobs are broken? which scenarios06:54
quiquell|roverFs02106:55
quiquell|roverUndercloud containers06:56
*** saneax has joined #oooq06:59
chkumar|ruckquiquell|rover: fs021 are made to fail07:01
chkumar|ruckfor comparing tempest tests results07:02
*** kopecmartin|off is now known as kopecmartin07:02
*** dsneddon has joined #oooq07:03
quiquell|roverchkumar|ruck: the gate jobs from here https://review.openstack.org/#/c/636873/07:04
quiquell|roverchkumar|ruck: were the ones failing with puppet-rabbitmq bump at rocky07:04
quiquell|roverykarel: ^07:04
quiquell|rovermake sense to add any of them to rdoinfo CI or it's too much ?07:04
chkumar|ruckquiquell|rover: +1 for adding ci07:07
quiquell|roverykarel: is doable ? or those jobs are too much ?07:08
*** dsneddon has quit IRC07:08
quiquell|roverchkumar|ruck: stupid me is fs00307:09
ykarelquiquell|rover, sure, we can start with standalone and minimal set of multinode jobs07:12
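A purely illustrative sketch of what wiring such jobs into a check pipeline could look like; the project stanza and job names below are assumptions for illustration, not the contents of the rdoinfo reviews linked above:

    # hypothetical zuul project stanza -- job names are examples only,
    # not what the rdoinfo review actually adds
    - project:
        check:
          jobs:
            - tripleo-ci-centos-7-standalone            # cheapest end-to-end signal
            - tripleo-ci-centos-7-containers-multinode  # minimal multinode coverage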
ykarelquiquell|rover, for now what's the plan to unblock rocky? revert rdoinfo?07:12
quiquell|roverykarel: Alex and Emilien already merged the revert07:12
quiquell|roverykarel: I would say to put in place CI and bump to see if CI fails so we are all good07:13
quiquell|roverykarel: revert https://review.rdoproject.org/r/#/c/18899/07:13
ykarelquiquell|rover, yup just saw07:13
ykarelquiquell|rover, so ci passing now07:14
ykarelor it's still failing07:14
quiquell|roverykarel: puff, it's failing but in a different place, it's going to be a hard day today07:14
quiquell|roverchkumar|ruck, ykarel: heading to the office, will be read-only for a few07:14
chkumar|ruckquiquell|rover: ack07:14
ykarelquiquell|rover, i see good puppet-rabbitmq in dlrn rocky repo, so rocky ci should be good at least now07:15
*** dsneddon has joined #oooq07:15
quiquell|roverykarel: I see some rocky failures at gates not related to that, but maybe this is transitory07:15
ykarelquiquell|rover, good to confirm what's the issue07:15
chkumar|ruckquiquell|rover: Details: {u'message': u'Only volume-backed servers are allowed for flavors with zero disk.', u'code': 403} if you see this error07:16
chkumar|ruckquiquell|rover: in scenrio 3 or 407:16
chkumar|ruckquiquell|rover: we have a fix in progress https://review.openstack.org/63767907:16
quiquell|roverchkumar|ruck: first pass07:17
quiquell|roverchkumar|ruck: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')",07:17
quiquell|roverchkumar|ruck: But it's says it's a warning07:18
quiquell|roverchkumar|ruck: http://logs.openstack.org/94/634194/1/gate/tripleo-ci-centos-7-standalone/5ec1813/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz#_2019-02-19_02_54_2807:18
*** dsneddon has quit IRC07:20
chkumar|ruckquiquell|rover: let me take a look at keystone container07:20
quiquell|roverchkumar|ruck: also  "stderr: PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory",07:21
quiquell|roverchkumar|ruck: going to check passing one07:21
chkumar|ruckquiquell|rover: regarding last one I think I have seen somewhere07:21
chkumar|ruckquiquell|rover: https://bugs.launchpad.net/tripleo/+bug/181322407:23
openstackLaunchpad bug 1813224 in tripleo "fedora28 standalone failing on tempest" [Critical,Triaged] - Assigned to Arx Cruz (arxcruz)07:23
chkumar|ruckquiquell|rover: check the last comment07:23
quiquell|roverchkumar|ruck: so red herring the no such file07:26
quiquell|roverchkumar|ruck: it's pretty new I think07:26
*** quiquell|rover is now known as quique|rover|r--07:26
chkumar|ruckquique|rover|r--: I think arxcruz was looking into that last time need to check07:28
quique|rover|r--chkumar|ruck: ack07:32
quique|rover|r--chkumar|ruck: let's wait for next noop rebase on rocky07:33
quique|rover|r--To check results07:33
chkumar|ruckok07:34
zbrmorning07:37
zbrchkumar|ruck: hi! please check comment on https://review.openstack.org/#/c/634438/1/accessbot/channels.yaml@13307:38
zbrsshnaidm or panda|off: can you please help with https://review.openstack.org/#/c/637608/ ? already tested fix via https://review.openstack.org/#/c/636160/07:39
chkumar|ruckzbr: done07:40
zbri also need a wf on https://review.openstack.org/#/c/629679/ - also confirmed07:44
marioszbr: panda|off quique|rover|r-- please revote at https://review.openstack.org/637543 updated07:46
zbrmarios: the files part is a problem. can't you just zero the file section by defining an empty one in child?07:47
zbrwhat is the zuul bug related to that?07:47
*** ccamacho has joined #oooq07:47
marioszbr: can you please add comment on the specific line in the review you are objecting to and tell me what you want me to change.07:48
marioszbr: the whole point was to have a base without files:07:48
marioszbr: this should have merged yesterday it is holding up https://review.rdoproject.org/r/#/c/1889607:48
marioszbr: no defining empty one in child does not work07:50
marioszbr: we tested that yesterday. we spent 2 hours on the phone discussing it07:50
zbrmarios: i will add comments now. please help me wf the two changes I mentioned above.07:50
marioszbr: -1 needs update https://review.openstack.org/#/c/636160/07:50
marioszbr: will check the other07:50
marioszbr: i commented in https://review.openstack.org/#/c/636160/07:51
zbrmarios: i am more concerned about the two depends-on for the moment, to get those merged. while they merge we can work to improve the one you -1, i bet is because use of files.07:51
*** jfrancoa has joined #oooq07:51
marioszbr: k will look at those too, but https://review.openstack.org/#/c/636160/ is missing layout.yaml and also needs to rebase onto mine so it doesn't delay it further07:52
quique|rover|r--marios: this is not going to work07:55
zbri am not sure if "delay my change" is a good enough reason for delaying another change but i am not in a mood for debates so early ;07:55
quique|rover|r--You still have a files section07:55
mariosquique|rover|r--: where?07:55
marioszbr: k07:55
marioszbr: actually you may not need a change to layout.yaml since you/rfolco updated the template so that part is ok07:56
quique|rover|r--marios: at tripleo-build-containers-centos707:57
mariosquique|rover|r--: yeah but i am going to parent on base07:57
mariosquique|rover|r--: in the periodic07:57
quique|rover|r--Ahh ok this is not the one07:57
quique|rover|r--Ack07:57
mariosquique|rover|r--: like https://review.rdoproject.org/r/#/c/18896/2/zuul.d/tripleo-rdo-base.yaml07:57
quique|rover|r--Yep is rdo sorry07:57
mariosquique|rover|r--: k thanks07:57
*** udesale has quit IRC07:58
marioszbr: :D08:00
quique|rover|r--marios: +2 also pasted the link to the zuul bug08:01
mariosquique|rover|r--: thanks08:01
quique|rover|r--Well looks like is more of a documentstion issue08:01
quique|rover|r--But it can be implemented08:02
mariosquique|rover|r--: ill update for zbr comment08:02
quique|rover|r--Zb08:02
quique|rover|r--zbr: ^ added link to bug so you can remove -108:02
zbrmarios: thanks. i hope my request makes sense. i wonder if we will end up doubling the jobs with foo-base / foo-base-with-files08:03
*** udesale has joined #oooq08:03
marioszbr: sure i was actually looking for the bug/was going to file one08:03
marioszbr: but quique|rover|r--++08:03
zbrto me it looks like a design flaw, but it may be only my lack of knowledge.08:03
mariosmaybe needs 2 more + to cancel out the - like08:03
mariosquique|rover|r--++++08:03
*** dsneddon has joined #oooq08:04
marioszbr: yeah we don't know why yet. might be to do with the timer but quique|rover|r--'s setup let us test it yesterday08:04
zbri asked because my next step is to ask on the #zuul channel, i want to learn why because this is a key feature we would need to master.08:05
*** ykarel is now known as ykarel|lunch08:06
marioszbr: updated08:07
*** udesale has quit IRC08:07
*** udesale has joined #oooq08:08
marioszbr: why rebase?08:09
*** dsneddon has quit IRC08:10
*** rascasoft has joined #oooq08:10
zbrmarios: because when you update a job you should always be sure you rebase.08:11
zbryou do not want to end up failing at the gate and lose precious time.08:11
marioszbr: ok. IMHO it is nicer/better etiquette to ask the person who owns the review to rebase with your comments08:11
marioszbr: i mean if that person is away or not responseive fine08:12
zbrmarios: i will ask you next time, never bothered to do this on fresh updates, i never do rebase stuff that was for long in queue, but with 2m, it was the right time to do it.08:13
marioszbr: ok thanks08:14
quique|rover|r--zbr: already asked at #zuul08:15
zbrmarios: i plan to add another job named tripleo-build-containers-base-wf (from "with files") for use in jobs that need file pattern. ok?08:18
zbris "-wf" too cryptic?08:18
marioszbr: you mean in a different review though right :D08:18
marioszbr: but sure, post it and then we can comment specifically on the name in the review08:18
zbrat least is not -wtf ;) --- sure as part of my job based on yours.08:19
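A rough sketch of the workaround zbr and marios converge on here (job names come from the discussion above; everything else, including the file patterns, is invented): because the inherited files: matcher could not be cleared from a child job, the base job carries no files: section at all, and a "-wf" ("with files") variant re-adds the matcher for jobs that want the filter.

    # sketch only -- not the real zuul layout; file patterns are made up
    - job:
        name: tripleo-build-containers-base
        description: Base job, deliberately defined without a files matcher.
        # nodeset / pre-run / run / required-projects omitted in this sketch

    - job:
        name: tripleo-build-containers-base-wf
        parent: tripleo-build-containers-base
        description: Variant that only triggers when relevant files change.
        files:
          - ^playbooks/.*$   # assumed pattern
          - ^zuul.d/.*$      # assumed pattern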
quique|rover|r--marios: +2 I will workflow it after ci08:22
*** saneax has quit IRC08:23
*** saneax has joined #oooq08:23
mariosquique|rover|r--: thanks08:25
*** ratailor has quit IRC08:35
*** amoralej|off is now known as amoralej08:35
*** jtomasek has quit IRC08:36
*** tosky has joined #oooq08:42
*** chem has joined #oooq08:46
*** quique|rover|r-- is now known as quiquell|rover08:46
*** ykarel|lunch is now known as ykarel08:54
*** jpena|off is now known as jpena08:59
marioszbr: do we have a bug for https://review.openstack.org/#/c/637608/2 and should i -1 for that09:00
marioszbr: (asking for a friend)09:00
marios;)09:00
* marios runs09:00
zbrmarios: ouch.09:01
zbrunicode09:01
zbrmarios: it's just a commit msg typo. i would fix it if someone else were around to regain the votes after the fix. at least CI on this repo is very quick.09:03
marioszbr: oh i see you'd rather merge it and not delay any further for that right?09:07
mariosgot it thanks09:07
* marios runs09:07
marios11:07 < marios> got it thanks09:07
zbrmarios: yeah.09:07
mariossorry i meant i'll tell my friend09:07
*** panda|off is now known as panda09:11
*** bogdando has joined #oooq09:15
mariospanda: please re-add vote when you have a sec thanks https://review.openstack.org/#/c/637543/09:16
*** dalvarez has joined #oooq09:21
pandagigio mouse.09:22
*** ratailor has joined #oooq09:28
*** jfrancoa has quit IRC09:28
pandanot feeling very well today09:29
*** ccamacho has quit IRC09:31
*** dtantsur|afk is now known as dtantsur09:34
arxcruzquiquell|rover: chkumar|ruck hey, so regarding the fedora, i am not working right now on that, tbh i wasn't able to reproduce fedora locally to debug09:41
arxcruz:/09:41
*** derekh has joined #oooq09:42
*** holser_ has joined #oooq09:44
*** jfrancoa has joined #oooq09:45
zbrmarios: wf yours: https://review.openstack.org/#/c/637543/09:47
marioszbr: ack09:48
marioszbr: once zuul reports09:48
chkumar|ruckarxcruz: os_tempest job is also broken09:49
arxcruzchkumar|ruck: there's a bug upstream, need to change something09:49
chkumar|ruckarxcruz: http://logs.openstack.org/95/630695/11/check/tripleo-ci-centos-7-standalone-os-tempest/337804b/job-output.txt.gz#_2019-02-19_09_09_22_90738109:49
chkumar|ruckarxcruz: the error we were seeing yesterday got fixed with this review https://review.openstack.org/63767909:50
chkumar|ruckarxcruz: do we need a bug for the same?09:57
*** jfrancoa has quit IRC10:04
*** jfrancoa has joined #oooq10:05
arxcruzpanda: around? on https://review.rdoproject.org/r/#/c/18795/2/zuul.d/ovb-jobs.yaml@42 override-checkout is $release or stable/$release ?10:06
*** dsneddon has joined #oooq10:07
pandaarxcruz: stable/release10:41
mariosfolks can you please check this too when you have some time https://review.rdoproject.org/r/#/c/18896/ its the rdo side reparenting for (and depends-on) https://review.openstack.org/#/c/637543/10:43
mariosthanks10:43
arxcruzpanda: done10:54
chkumar|ruckarxcruz: bug logged https://bugs.launchpad.net/tripleo/+bug/181655211:02
openstackLaunchpad bug 1816552 in tripleo "[check][os_tempest] os_tempest standalone job is broken with etwork_type value 'vxlan' not supported." [Critical,Confirmed]11:02
mariosquiquell|rover: o/ wdyt about my comment #12 at https://bugs.launchpad.net/tripleo/+bug/1813911 hitting the manila issue afaics in https://review.openstack.org/#/c/636563/ cc arxcruz11:09
openstackLaunchpad bug 1813911 in tripleo "Manila tests are failing in featureset019 and scenario004" [Critical,Fix released]11:09
zbrgoing out for an hour11:09
quiquell|rovermarios: manila fix merged11:09
quiquell|rovermarios: and it's at tripleo-common so no promotion is needed11:09
quiquell|rovermarios: let me check11:10
*** udesale has quit IRC11:10
mariosquiquell|rover: right but did it merge like yesterday? cos i hit that then ... going to rebase as well so we'll have a rerun but thought i'd ask11:10
mariosquiquell|rover: well lets see with the rebase anyway doing11:10
quiquell|rovermarios: yep yesterday11:10
mariosquiquell|rover: ah cool11:11
quiquell|rovermarios: rebase should fix it11:11
marios(i didn't see it in the bug sorry i missed it)11:11
mariosquiquell|rover: thanks11:11
quiquell|rovermarios: np11:11
*** holser_ is now known as holser|lunch11:21
*** jtomasek has joined #oooq11:33
*** ratailor has quit IRC12:06
chkumar|rucksshnaidm: added the issue related to ara-server https://tree.taiga.io/project/tripleo-ci-board/issue/748 , feel free to take a look, thanks :-)12:10
sshnaidmchkumar|ruck, thanks12:10
*** amoralej is now known as amoralej|lunch12:16
*** chem has quit IRC12:18
chkumar|ruckquiquell|rover: please triage this card https://trello.com/c/YfkH3lPB/877-cixbz1671861ospregressionosp14upgradesdocker-hangs-after-restart12:28
chkumar|ruckquiquell|rover: apevec has asked on the internal channel12:28
chkumar|ruckthanks :-)12:28
*** skramaja has joined #oooq12:30
*** jpena is now known as jpena|lunch12:30
*** skramaja_ has quit IRC12:30
*** rfolco has joined #oooq12:33
rfolcozbr, hi, missing kolla patches ?12:35
rfolcozbr, patchset 32... http://logs.openstack.org/60/636160/32/check/tripleo-build-containers-fedora-28/d763eb3/logs/build-err.log.txt.gz12:36
chkumar|ruckno more ip error on rdo check job https://logs.rdoproject.org/91/637791/1/openstack-check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset053/128c304/job-output.txt.gz#_2019-02-19_12_32_38_66760312:38
chkumar|ruckquiquell|rover: on rdo cloud we have 10 problematic stack and 27 servers in error state12:40
chkumar|ruckquiquell|rover: is cleanup not running there?12:40
*** holser|lunch is now known as holser_12:41
rfolcozbr, need to sync12:49
rfolcozbr, ps 33 --> zuul failed to merge a depends-on.... and ps 32 --> some kolla patch is missing... where is the change that tripleo-common depends-on? simply removed it?12:51
*** chem has joined #oooq12:54
marioserrand biab12:55
*** udesale has joined #oooq12:55
*** weshay_PTO is now known as weshay12:58
weshaymorning12:58
marioso/13:01
mariosafternoon13:01
zbrmarios: weshay : please wf https://review.openstack.org/#/c/637543/13:05
quiquell|roverchkumar|ruck: I am back was at lunch13:05
chkumar|ruckquiquell|rover: retry error on only 2 jobs no need to worry on that13:06
quiquell|roverchkumar|ruck: we have a bz for the docker thing ?13:06
marioszbr: yes lets13:07
zbrmarios: weshay also https://review.openstack.org/#/c/637608/13:07
mariospanda: rfolco quiquell|rover +A this please? if you have 2 mins thanks looks like zuul is ok https://review.openstack.org/#/c/637543/13:07
quiquell|rovermarios: I am going to workflow it ok ?13:08
mariostripleo-build-containers-centos-7 SUCCESS in 46m 10s (non-voting)13:08
mariosquiquell|rover: yes thanks please13:08
quiquell|roverdone13:08
chkumar|ruckquiquell|rover: sure13:08
chkumar|ruckarxcruz: I think I got the solution https://bugs.launchpad.net/tripleo/+bug/181655213:08
openstackLaunchpad bug 1816552 in tripleo "[check][os_tempest] os_tempest standalone job is broken with etwork_type value 'vxlan' not supported." [Critical,Confirmed]13:08
arxcruzchkumar|ruck: u da man13:08
mariosquiquell|rover: btw looks like a different issue, tom replied at https://bugs.launchpad.net/tripleo/+bug/1813911 and https://review.openstack.org/#/c/636563/6 so nonpcs scen4 needs more work it looks like13:09
openstackLaunchpad bug 1813911 in tripleo "Manila tests are failing in featureset019 and scenario004" [Critical,Fix released]13:09
mariosnonpcs not really supported in manila apparently13:09
chkumar|ruckarxcruz: proposing a patch13:09
marios(probably doesn't make sense)13:09
rfolcozbr, this was the change that failed to merge ? https://review.openstack.org/#/c/637543/13:09
*** trown|outtypewww is now known as trown13:10
mariossshnaidm: zbr going to workflow https://review.openstack.org/#/c/637608/413:10
chkumar|ruckquiquell|rover: https://redhat.bluejeans.com/1571313919/6145/?src=meet_now13:10
zbrrfolco: it will now.13:10
weshaychkumar|ruck++13:13
weshayquiquell|rover++13:13
weshaywell done guys..  /me reading through bugs13:13
weshayetc13:13
quiquell|roverweshay: is hell man what a day13:14
zbrone of the issues with sprints: near the end of a sprint, in the last 48h you get too many things trying to be merged at the same time, conflicts, rebases, high load on CI, people focused on trying to get their own changes in (even delaying giving the green light to other changes in order to get theirs in). Having a unified sprint approach can even scale this issue up. Stuff to think about for the retro...13:14
rfolcozbr, did you fix http://logs.openstack.org/60/636160/32/check/tripleo-build-containers-fedora-28/d763eb3/logs/build-err.log.txt.gz13:14
rfolco??13:14
weshayquiquell|rover ya.. m3 is hard because everyone is merging features13:14
zbrrfolco: yep, long time ago.13:14
sshnaidmquiquell|rover, chkumar|ruck seems like a lot of retry-limits on rdo cloud, ovb stacks are not being created13:14
weshayquiquell|rover this is why we need projects to run standalone13:14
quiquell|roverweshay: we have been hit by mistral stuff, and mistral is not run in standalone13:15
mariosweshay: **and** quiquell|rover also found time to help us with https://review.openstack.org/#/c/637543/ (-->https://storyboard.openstack.org/#!/story/2005040 ) quiquell++13:15
quiquell|rover:-/13:15
chkumar|rucksshnaidm: https://review.rdoproject.org/zuul/builds?result=RETRY_LIMIT only two today13:15
weshayquiquell|rover so we can add jobs to mistral13:15
mariosweshay: if it wasn't for quiquell|rover we would not find the problem or debug it so quickly for a workaround13:15
quiquell|roverweshay: we have to, if they allow us13:15
weshaythey have been pretty  happy w/ running tripleo in their gate13:15
zbrrfolco: i think. anyway even if f28 job is partial we should not prevent from merging it. we need to be focused on incremental improvement and be sure we do not introduce regressions.13:15
sshnaidmchkumar|ruck, no, there is much more13:15
weshaymarios aye..  thanks13:16
chkumar|rucksshnaidm: yes yesterday it was much more13:16
sshnaidmchkumar|ruck, I'm talking about today13:16
chkumar|rucksshnaidm: is tenant cleaning not working there?13:16
marioszbr: damn it rfolco panda quiquell|rover merge conflict now? https://review.openstack.org/#/c/637543/13:16
quiquell|roverchkumar|ruck: btw the mistral bug is closed I think13:16
zbrmarios: fixing now. i was expecting this.13:16
quiquell|roverchkumar|ruck: Can we close the one related to DIB ?13:16
marioszbr: i will fix it thanks13:16
sshnaidmchkumar|ruck, need to check, but a lot of errors when creating stacks, and I see errors also when deleting them13:16
chkumar|ruckquiquell|rover: sshnaidm I think rdo cloud upgrade is going on also13:17
* chkumar|ruck not sure13:17
chkumar|ruckquiquell|rover: closing the bugs13:17
sshnaidmchkumar|ruck, this chart might be incorrect, right now in zuul I see at least 30-40 retry limits13:17
weshaytripleo-ci-centos-7-scenario003-standalone SUCCESS in 1h 51m 57s13:17
weshaytripleo-ci-centos-7-undercloud-containers SUCCESS in 1h 43m 05s13:17
quiquell|roverchkumar|ruck: thanks mate in hell13:17
weshayquiquell|rover mistral is running two tripleo jobs13:17
quiquell|roverweshay: but none of them exercise the problematic stuff13:18
weshayyou are saying we need a multinode job?13:18
quiquell|roverweshay: it's overcloud I think13:18
quiquell|roverweshay: multinode overcloud13:18
weshayok..13:18
weshayapetrich can we propose a new job on mistral?13:18
weshayquiquell|rover mistral containers are running http://logs.openstack.org/87/594187/25/check/tripleo-ci-centos-7-scenario003-standalone/9748c84/logs/undercloud/var/log/extra/docker/docker_allinfo.log.txt.gz13:19
* weshay looks at more logs13:19
weshayquiquell|rover http://logs.openstack.org/87/594187/25/check/tripleo-ci-centos-7-scenario003-standalone/9748c84/logs/undercloud/var/log/containers/mistral/engine.log.txt.gz13:20
weshayquiquell|rover when you have time tomorrow let me know what you think we're missing13:21
weshayquiquell|rover and which bug are you referring to?13:21
quiquell|roverweshay: the bug https://bugs.launchpad.net/tripleo/+bug/181602613:21
openstackLaunchpad bug 1816026 in tripleo "multinode promotion jobs timing out at overcloud deploy" [Critical,Fix released] - Assigned to Quique Llorente (quiquell)13:21
weshayah k13:22
* weshay reads13:22
quiquell|roverweshay: we have to figure it what job to put at mistral to cover that13:22
marioszbr: i caused it my recheck this morning caused it to merge https://review.openstack.org/#/c/634060/13:22
weshayinteresting13:22
marioszbr: (the conflict)13:22
zbrmarios: you did nothing wrong. now wf https://review.openstack.org/#/c/629679/13:24
zbrmarios: in case it was not clear, this was my way of saying that you did very well doing the recheck this morning ;)13:24
weshayquiquell|rover well technically we don't have to run that job in promotion man13:25
zbri will be out for 1/2hours, dropping something to the tip13:25
quiquell|roverwes it was affecting all of them13:26
quiquell|roverfs37, 30, 19, 18, 17, 16 in all of them got timeout.13:26
quiquell|roverread first comment from chkumar|ruck13:26
weshayquiquell|rover just multinode jobs ya?13:27
weshayI see13:27
mariospanda: quiquell|rover rfolco zbr sigh... merge conflict please revote https://review.openstack.org/#/c/637543/ when you have a minute please thanks13:27
weshayquiquell|rover how confident are we it's not the infra?13:27
weshayquiquell|rover when you run w/ the reproducer locally on libvirt13:28
weshaydoes it repro?13:28
quiquell|roverweshay: all have the same mistral issue13:28
sshnaidmquiquell|rover, chkumar|ruck https://snapshot.raintank.io/dashboard/snapshot/1ALYKFAvcKq81Ak2CxGim9dsBNkDwUy213:28
quiquell|roverweshay: we found the fix so we didn't reproduce13:28
quiquell|roverweshay: it was not intermittent13:28
chkumar|rucksshnaidm: 68513:28
mariossshnaidm: can you please vote when you have time in https://review.rdoproject.org/r/#/c/18896/ its the rdo side of the base reparenting (i.e. depends on https://review.openstack.org/#/c/637543/ ) for https://storyboard.openstack.org/#!/story/200504013:29
chkumar|rucksshnaidm: it is not in the main dashboard view?13:29
weshayquiquell|rover k k13:29
weshaythe len13:29
chkumar|rucksshnaidm: do we we need to re-run thescript?13:29
sshnaidmchkumar|ruck, now added it13:29
quiquell|roverweshay: Will look tomorrow but I think it was some mistral workflow that executes only in full multinode jobs13:29
quiquell|roverweshay: So I don't know if we can put that kind of job in the mistral gates13:29
sshnaidmchkumar|ruck, yeah, better to run it again if it doesn't run now13:30
weshayquiquell|rover k13:30
weshayquiquell|rover so I'll chat w/ apetrich and dougal today re: another job on mistral13:31
weshaysshnaidm you ready?13:31
sshnaidmweshay, yeah13:31
weshayquiquell|rover thanks for working through that13:31
weshayI see the standalone-scen003 passed in the review that broke us13:31
quiquell|roverweshay: No problem, mistral guys rocks13:31
weshayaye.. and they are not even mistral guys anymore :)13:32
quiquell|roverweshay: well that's another story, does it make sense to have standalone at mistral?13:32
quiquell|roverweshay: we are not using mistral at standalone13:32
weshayquiquell|rover I need to hear more about that13:32
weshayquiquell|rover will chat in a bit13:32
quiquell|roverweshay: standalone just dumps ansible playbooks and bypasses mistral13:32
quiquell|roverI think13:32
quiquell|roverheat -> ansible13:32
*** agopi has quit IRC13:33
*** rlandy has joined #oooq13:34
*** jpena|lunch is now known as jpena13:35
mariosrlandy: good morning. once you've had caffeine can you please check https://review.rdoproject.org/r/#/c/18896/ (and its depends-on in https://review.openstack.org/#/c/637543/ ) thanks! welcome to tuesday!13:35
rlandysure13:35
mariosrlandy: thank you :D been trying to sell /#/c/637543/ all day and almost made it but self inflicted merge conflict once it was +2Ad... rebased it. really need votes on the https://review.rdoproject.org/r/#/c/18896/ rdo side from you and sshnaidm13:36
mariosthanks quiquell|rover ;)13:37
rlandylooking13:38
*** zul has joined #oooq13:40
rlandymarios: ok - w'ed https://review.rdoproject.org/r/#/c/18896 but will have to wait until depends-on merges13:42
mariosrlandy: thank you13:43
mariosvery much13:43
*** hamzy has joined #oooq13:44
chkumar|ruckweshay: quiquell|rover I am logging out now13:45
*** chkumar|ruck is now known as chandankumar13:45
chandankumarweshay: quiquell|rover this will unblock os_tempest job https://review.openstack.org/#/c/637838/13:45
quiquell|roverchandankumar: sure, long day, read you tomorrow13:45
weshaychandankumar quiquell|rover thanks fellas13:46
quiquell|roverchandankumar: will workflow it13:46
rlandypanda: or marios: pls see mandre's comment on https://review.openstack.org/#/c/635464/ ... would those services not be added with https://review.openstack.org/#/c/635208/18/ci/environments/scenario009-standalone.yaml?13:51
rlandypanda: we would need https://review.openstack.org/#/c/637591/ - which is to fix multinode scenario00913:52
pandarlandy: you added the mistral services in the end on the scenario file ?13:54
mariosrlandy: replied to him13:54
mariosrlandy: is that what you meant?13:55
rlandypanda: I did - just to see if that was what was stopping openshift deploying13:55
*** fultonj has joined #oooq13:55
*** fultonj has quit IRC13:55
mariosrlandy: added pointer to docs https://review.openstack.org/#/c/635464/9/zuul.d/standalone-jobs.yaml13:56
*** fultonj has joined #oooq13:56
rlandypanda: so - if openshift isn't even deploying, how is this scenario passing? we don't check for basic services to be there?13:56
pandarlandy: no, and tempest is disabled13:58
pandarlandy: technically it's deploying successfully.13:59
rlandypanda: really we need the multinode job to pass first I think https://review.openstack.org/#/c/637591/13:59
rlandytechnically :)13:59
rlandytechnically we can deploy a blank cloud and have perfect ci13:59
pandarlandy: technically we can have a job with a single shell task with exit 0 in it, and it will succeed.14:00
pandamarios: rlandy not only does scenario009 contain the openshift services, environments/openshift.yaml is also passed, and it contains the services14:01
mariospanda: ack14:01
rlandypanda: ack  ... hence confused by comment14:01
pandarlandy: did adding mistral change anything? Trying to figure that out14:02
*** ykarel is now known as ykarel|away14:02
rlandypanda: tbh ... until the puppet mistral stuff is fixed in https://review.openstack.org/#/c/637591/ I don't expect much to work14:02
rlandymultinode is not yet passing14:02
rlandypanda: adding mistral did nothing14:02
rlandyhttp://logs.openstack.org/64/635464/9/check/tripleo-ci-centos-7-scenario009-standalone/88a5d78/logs/undercloud/var/lib/14:03
rlandypanda: ^^ didn't even show in the logs14:03
rlandyI could rebase on 637591 and see if that starts to deploy something14:03
rlandybut we are not even sure that is working yet14:03
rlandymarios: do we need to discuss pacemaker and standalone in the meeting or is that a done deal?14:04
*** ccamacho has joined #oooq14:05
weshayrlandy rfolco I saw a note about openstacksdk missing in chat.. that should be installed in pre.yaml14:06
weshaywas it required prior to pre.yaml running?14:07
*** rfolco has quit IRC14:07
rlandyweshay: rfolco was running the bash script14:08
*** rfolco has joined #oooq14:08
rlandyso pre had to run14:08
weshayya.. pre.yaml should run and get it installed14:08
weshayit could be I missed it though14:08
weshayI didn't think we made any openstack calls prior to pre.yaml14:08
mariosrlandy: i'll bring it up later14:09
rlandymarios++14:09
mariosrlandy: yeah i was already intending to do so14:09
*** vinaykns has joined #oooq14:12
*** ykarel|away has quit IRC14:13
*** jtomasek has quit IRC14:17
*** ccamacho has quit IRC14:18
weshaymarios ping in #tripleo14:19
mariosthanks weshay14:21
*** ccamacho has joined #oooq14:22
*** zul has quit IRC14:25
*** zul has joined #oooq14:26
rfolcoCI community meeting (office hours) starts now at https://bluejeans.com/4113567798 - agenda: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting14:29
rfolcomarios, quiquell, sshnaidm, weshay, panda, rlandy, arxcruz, mwhahaha, rfolco, chkumar, ssbarnea, kopecmartin ^14:29
apetrichquiquell|rover, do you have time to talk about the standalone after the squad meeting?14:33
quiquell|roverapetrich: yep14:33
*** amoralej|lunch is now known as amoralej14:34
*** radez has joined #oooq14:35
pandaOMG quiquell|rover from an office.14:36
quiquell|roverpanda: look at the landscape14:36
quiquell|roverhehe14:36
*** saneax has quit IRC14:42
*** ykarel|away has joined #oooq14:49
*** skramaja has quit IRC14:49
*** ykarel|away is now known as ykarel14:52
*** udesale has quit IRC14:53
rlandyweshay: leaving info dump on standalone scenario009 on pvt15:03
weshayack.. don't see it yet. .but ack.. not sure if I love my new irc client15:03
rfolcomarios, panda weshay me and zbr on https://bluejeans.com/u/rfolco just syncing on f2815:05
rfolcowelcome to join if you like15:05
mariosrfolco: gimme sec will join (you got number for me please)15:06
rfolcohttps://bluejeans.com/5878458097 marios15:06
mariosrfolco: tx15:07
kopecmartinchandankumar, arxcruz https://review.openstack.org/#/c/637832/15:26
arxcruzkopecmartin: done15:40
kopecmartinarxcruz, thanks15:40
pandaquiquell|rover: we have irrelevant-files in our bases, and promotion is running fine15:42
pandaquiquell|rover: (following the thread on zuul a bit)15:43
quiquell|roverpanda: i suppose they match15:43
quiquell|roverpanda: I mean they don't15:43
quiquell|roverpanda: what it does is check files in the change, and periodic has a fake change15:43
quiquell|roverpanda: so no file is matched by irrelevant-files so it's run15:44
quiquell|roverpanda: just made a test of it pass at zuul with a fix so it's confirmed15:46
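A small illustration of the behaviour quiquell describes (job names and patterns are invented): in a periodic pipeline the synthetic change touches no files, so a files: matcher never fires, while an irrelevant-files matcher has nothing to exclude and the job still runs.

    # illustration only -- patterns are made up
    - job:
        name: example-needs-files
        files:
          - ^container-images/.*$   # requires a matching changed file: never runs on the periodic fake change

    - job:
        name: example-ignores-docs
        irrelevant-files:
          - ^doc/.*$                # only skips when *all* changed files match: still runs on the periodic fake change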
quiquell|rovermarios: ping16:11
mariosquiquell|rover: o/16:11
quiquell|rovermarios: do you have handy the RDO review that tests the periodic-not-running issue?16:12
quiquell|rovermarios: I am explaining it to zuul guys16:12
mariosquiquell|rover: sec16:12
quiquell|rovermarios: well or it's the current job there ?16:13
quiquell|roverbefore we merge upstream :-)16:13
mariosquiquell|rover: https://review.rdoproject.org/r/#/c/1872916:13
mariosquiquell|rover: that you mean/16:13
mariosquiquell|rover: https://tree.taiga.io/project/tripleo-ci-board/task/712 here too the reviews and i point to the story you filed for the files: bug16:14
quiquell|rovermarios: Thanks16:15
mariosthanks to you quiquell|rover16:16
marioswdyt folks let's merge now? http://logs.openstack.org/81/635881/4/check/openstack-tox-docs/1499ec6/html/ci/standalone_scenario_jobs.html at https://review.openstack.org/#/c/635881/ thanks!16:18
quiquell|rover+216:18
mariosweshay: panda rfolco rlandy sshnaidm zbr arxcruz chandankumar ^16:19
marios(docs low priority if you have reviews time thanks)16:19
rlandymarios; what's the deal with kolla and standalone?16:19
rlandyhttps://github.com/openstack/tripleo-heat-templates/blob/master/environments/standalone/standalone-tripleo.yaml#L11016:19
rlandyabsent from here16:19
mariosrlandy: not sure what you mean with kolla and standalone. there is an  issue for the f28 containers build16:20
mariosrlandy: ack on the mistral api there16:20
rlandymarios: mistral is there but kolla is not16:20
mariosrlandy: maybe it's in defaults/another env?16:20
mariosrlandy: in fact we aren't starting kolla as a service anywhere afaics16:22
mariosrlandy: its more like we pass kolla_config in all the services16:22
*** jfrancoa has quit IRC16:22
rlandyack - I see kolla_config in places16:22
mariosrlandy: yeah its the config that goes into the given container service but otherwise there isn't an expected os::tripleo::kolla type thing16:23
weshayjust noting to everyone..  rdo is looking shaky16:24
rlandy2019-02-19 00:52:46 | 2019-02-19 00:52:34Z [overcloud.ComputeServiceChain.KollaConfig]: CREATE_COMPLETE  state changed16:24
rlandy^^ need that16:24
mariosrlandy: so it would be the kolla_config passed as part of the compute role services16:25
mariosrlandy: that makes that happen16:25
weshayhrm.. maybe not16:25
mariosrlandy: ah but no compute. so there should be an equivalent16:26
marioskolla config in the standalone service chain16:26
rlandy2019-02-18 05:51:46 | 2019-02-18 05:51:34Z [standalone.StandaloneServiceChain.KollaConfig]: CREATE_IN_PROGRESS  state changed- guess it is there though16:26
mariosrlandy: right16:26
pandamarios: I had few notes inline but it was merged before16:28
mariospanda: sorry! where please?16:29
pandamarios: https://review.openstack.org/63588116:29
mariospanda: i think you forgot to post them?16:29
* marios refresh 16:29
mariosi see your comment thou16:29
mariospanda: is it ok if we iterate or we can stop the merge now and update16:30
mariospanda: but i don't see your notes16:30
mariospanda: even after refresh16:30
pandamarios: no no merge. I commented on the wrong patchset16:30
mariospanda: sorry for that thanks for taking the time. if you post the comments i can update it anyway on another review so its not wasted16:30
weshaypanda let''s roll :)16:30
mariosbut i still cant see them panda16:30
rfolcoping panda mtg16:31
*** ykarel is now known as ykarel|away16:37
*** quiquell|rover is now known as quiquell|off16:38
*** dtantsur is now known as dtantsur|afk16:48
*** hamzy has quit IRC16:50
rlandymarios: sorry - one more question on standalone ... here is my attempt at rocky ... http://logs.openstack.org/54/637454/1/check/tripleo-ci-centos-7-scenario009-standalone-rocky/efa07d1/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz#_2019-02-18_05_30_58 - note the path to tripleo-heat-installer-templates17:01
rlandydid you make any correction for that in your rocky test jobs17:01
mariosrlandy: lemme check17:02
mariosrlandy: not as far as i can see in https://tree.taiga.io/project/tripleo-ci-board/task/625?kanban-status=1447275 https://review.openstack.org/#/q/topic:scenario-standalone-rocky http://logs.openstack.org/92/631492/8/check/tripleo-ci-centos-7-scenario001-standalone/47d4a3c/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz17:05
rlandy rendering j2 template to file: /home/zuul/tripleo-heat-installer-templates/17:05
rlandymarios: k - looks the same in scenario00117:05
rlandymust be the job just not picking up the right file17:05
mariosrlandy: there was a path issue but its different (ceph at https://review.openstack.org/#/c/633724/17:05
mariosrlandy: ack 3 is green there fyi  http://logs.openstack.org/92/631492/8/check/tripleo-ci-centos-7-scenario003-standalone/a351b9f/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz17:07
mariosrlandy: k gotta go17:07
*** chandankumar is now known as kmrchdn17:12
*** bogdando has quit IRC17:13
*** trown is now known as trown|lunch17:36
*** ykarel|away has quit IRC17:37
*** agopi has joined #oooq17:41
*** ccamacho has quit IRC17:46
pandadammit scenario003 timing out on my patch, more delay again17:50
*** derekh has quit IRC17:53
*** kopecmartin is now known as kopecmartin|off17:53
pandarfolco: weshay 2nd round ?18:02
rfolcooops18:02
*** hamzy has joined #oooq18:17
*** jpena is now known as jpena|off18:22
weshayrfolco panda sorry..18:24
weshaynot sure if you are still avail.. I had a phone call I could not miss18:24
rfolcoweshay, come to our party18:26
pandaweshay: we are closing, closing, closing, closing18:26
*** kmrchdn is now known as chandankumar18:26
rfolcoweshay, bring your own beer18:26
weshaywhere?18:26
weshayrfolco?18:26
rfolcoyes18:26
pandachez rfolco18:26
weshayooooo la la18:27
weshayhttps://tree.taiga.io/project/tripleo-ci-board/epic/55818:40
*** trown|lunch is now known as trown18:42
*** ccamacho has joined #oooq18:42
chandankumarrfolco: hello18:44
*** jtomasek has joined #oooq18:44
rfolcochandankumar, o/18:44
chandankumarrfolco: since you were working on the new reproducer please have a look to make sure it is still an issue https://bugs.launchpad.net/tripleo/+bug/180049518:45
openstackLaunchpad bug 1800495 in tripleo "reproducer script installs wrong puppet-tripleo package from delorian-current instead of the gating repo built locally" [High,Triaged]18:45
chandankumar?18:45
rfolcochandankumar, will look after my meeting18:46
chandankumarrfolco: sure take your time :-) thanks!18:46
*** amoralej is now known as amoralej|off18:47
*** holser_ has quit IRC19:04
*** panda is now known as panda|off19:17
*** dsneddon has quit IRC19:18
rfolcochandankumar, regarding the bug on the reproducer, we may not want to spend time on it as the new reproducer is on the way19:39
*** dsneddon has joined #oooq19:54
*** dsneddon has quit IRC20:00
rlandyweshay: any word on scenario009?20:15
weshayrlandy I have to get in touch w/ Steve hardy20:18
rlandyok20:18
weshaychandankumar w/ regards to https://bugs.launchpad.net/tripleo/+bug/1800495  I'll give that a go now20:19
openstackLaunchpad bug 1800495 in tripleo "reproducer script installs wrong puppet-tripleo package from delorian-current instead of the gating repo built locally" [High,Triaged]20:19
*** dsneddon has joined #oooq20:23
weshayrlandy do you have a sec?20:26
rlandyweshay: sure20:27
weshaymeh.. might as well chat on blue20:27
rlandyok20:27
weshayhttp://logs.openstack.org/52/632052/2/check/openstack-tox-docs/1bd4abc/html/install/advanced_deployment/deploy_openshift.html20:27
rlandyjoining your blue20:28
weshayrlandy http://logs.openstack.org/52/632052/2/check/openstack-tox-docs/1bd4abc/html/install/advanced_deployment/deploy_openshift.html20:29
weshayrlandy recreated..21:08
weshayTASK [ansible-role-tripleo-ci-reproducer : Open needed ports at default security group] *****************************************************************************************************************************21:08
weshaytask path: /var/tmp/RECREATE/roles/ansible-role-tripleo-ci-reproducer/tasks/openstack/main.yaml:221:08
weshayfailed: [localhost] (item=19885) => {"changed": false, "item": 19885, "msg": "openstacksdk is required for this module"}21:08
weshayfailed: [localhost] (item=22) => {"changed": false, "item": 22, "msg": "openstacksdk is required for this module"}21:08
weshaywill try to fix21:09
rlandyugh21:09
rlandyhow???21:09
weshayprobably because the virtenv is not resourced21:09
weshaynot sure21:09
weshayyet21:09
weshayhrm.. w/ --user that should not be the case21:11
weshay"Successfully installed ansible-2.7.7 appdirs-1.4.3 cached-property-1.5.1 certifi-2018.11.29 chardet-2.2.1 deprecation-2.0.6 docker-3.6.0 docker-compose-1.23.2 docker-pycreds-0.4.0 dockerpty-0.4.1 docopt-0.6.2 dogpile.cache-0.7.1 functools32-3.2.3.post2 futures-3.2.0 iso8601-0.1.12 jsonpatch-1.23 jsonpointer-2.0 jsonschema-2.6.0 keystoneauth1-3.11.21:12
weshay2 munch-2.3.2 netifaces-0.10.9 openstacksdk-0.13.0 os-service-types-1.5.0 packaging-19.0 pyparsing-2.3.1 requests-2.21.0 requestsexceptions-1.4.0 stevedore-1.30.0 texttable-0.9.1 urllib3-1.24.1 websocket-client-0.54.0"], "version": null, "virtualenv": null}21:12
weshay python21:13
weshayPython 2.7.5 (default, Oct 30 2018, 23:45:53)21:13
weshay[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux221:13
weshayType "help", "copyright", "credits" or "license" for more information.21:13
weshay>>> import openstacksdk21:13
weshayTraceback (most recent call last):21:13
weshay  File "<stdin>", line 1, in <module>21:13
weshayImportError: No module named openstacksdk21:13
rfolcoweshay, yeah exactly the same error I got21:30
rfolcozbr, https://review.openstack.org/636160 updated, kolla patch was not being applied, fixed now21:31
rfolcoand removed depends-on, merged21:31
weshayrfolco rlandy it's because pip was not updated21:31
weshayand ci probably has it pre-installed21:32
rlandyweshay; we should update pip in script?21:32
weshayrlandy ya.. sec /me putting up a patch21:32
rlandythanks21:32
weshayhttps://review.openstack.org/#/c/638016/21:35
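For reference, a guess at the general shape of such a fix, not the actual content of the review above: make sure pip is recent and openstacksdk is installed for the user running the reproducer before the role calls the os_* modules (task names and placement are assumptions).

    # hedged sketch (ansible tasks) -- assumed names, not the real patch
    - name: Upgrade pip for the current user
      pip:
        name: pip
        state: latest
        extra_args: --user

    - name: Install openstacksdk so the os_* modules can import it
      pip:
        name: openstacksdk
        extra_args: --user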
vinayknshello channel, I'm trying to deploy TLS everywhere but I'm encountering http://pastebin.test.redhat.com/717896 any workaround would be useful.21:35
weshayvinaykns that is currently getting worked on by sshnaidm21:40
weshayit's not yet supported in ci21:40
weshayrlandy rfolco ok.. confirmed fixed :)21:40
*** hamzy has quit IRC21:41
rlandyweshay++21:41
vinayknsweshay: so as for now we can't deploy tls everywhere using oooq quickstart..?21:42
weshayw/ an ipa server?21:42
vinayknsyes, with ipa server21:43
weshayand qs deploying the ipa server?21:43
weshayw/ oooq deploying the ipa server no..21:43
vinayknsno, qs uses some external ipa server21:43
vinayknsI already tried that21:43
weshaythat "should" work but is not in CI21:43
weshayvinaykns ping sshnaidm tomorrow21:43
vinayknsyeah..that's what I thought.!21:44
weshayhe's the best guy atm to help you21:44
weshaythere may be lots of bugs, but w/o CI we can't keep it clean21:44
vinayknsyeah..I'll contact him...thank you21:44
weshayvinaykns cool, sorry man21:44
vinayknsweshay: no worries.!21:45
weshayrfolco as you have time to retry.. https://review.openstack.org/#/c/638016/ fyi21:45
*** jtomasek has quit IRC21:57
*** d0ugal has quit IRC22:03
weshayrfolco chandankumar moved https://bugs.launchpad.net/tripleo/+bug/1800495 to fixed_released22:03
openstackLaunchpad bug 1800495 in tripleo "reproducer script installs wrong puppet-tripleo package from delorian-current instead of the gating repo built locally" [High,Fix released] - Assigned to wes hayutin (weshayutin)22:03
*** rlandy is now known as rlandy|brb22:18
*** d0ugal has joined #oooq22:20
*** rlandy|brb is now known as rlandy22:27
*** agopi has quit IRC22:32
*** rascasoft has quit IRC23:07
*** ccamacho has quit IRC23:13
*** ccamacho has joined #oooq23:14
*** tosky has quit IRC23:58
