Tuesday, 2018-06-12

hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722400:39
*** ccamacho has quit IRC00:43
weshayrlandy|rover|bbl, ok.. when I see green on that job, I'll start on converting the other env.. and you can look at the jjb prototype01:26
*** rlandy|rover|bbl is now known as rlandy|rover02:08
*** ccamacho has joined #oooq02:17
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722402:39
*** rlandy|rover has quit IRC02:49
*** ccamacho has quit IRC03:13
*** skramaja has joined #oooq03:18
*** agopi has joined #oooq03:30
*** agopi has quit IRC03:40
*** udesale has joined #oooq03:52
*** agopi has joined #oooq03:57
*** agopi has quit IRC04:08
*** ykarel|away has joined #oooq04:09
*** ykarel_ has joined #oooq04:21
*** ykarel|away has quit IRC04:23
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario002-multinode-oooq-container, tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722404:39
*** pgadiya has joined #oooq04:48
*** pgadiya has quit IRC04:48
*** pliu has quit IRC05:10
*** pliu has joined #oooq05:11
*** rasca has quit IRC05:11
*** jschlueter has quit IRC05:11
*** myoung|off has quit IRC05:11
*** faceman has quit IRC05:11
*** lucasagomes has quit IRC05:11
*** rnoriega has quit IRC05:11
*** lhinds has quit IRC05:11
*** lhinds has joined #oooq05:12
*** faceman has joined #oooq05:13
*** rnoriega has joined #oooq05:14
*** rasca has joined #oooq05:14
*** lucasagomes has joined #oooq05:16
*** jschlueter has joined #oooq05:17
*** myoung has joined #oooq05:17
*** quiquell|off is now known as quiquell05:32
*** ratailor has joined #oooq05:36
*** tcw has quit IRC06:03
*** jtomasek has joined #oooq06:11
*** jtomasek has quit IRC06:11
*** tcw has joined #oooq06:12
*** jtomasek_ has joined #oooq06:13
*** jtomasek_ has quit IRC06:16
*** udesale has quit IRC06:18
*** udesale has joined #oooq06:18
*** saneax has joined #oooq06:29
*** florianf has joined #oooq06:34
*** jtomasek has joined #oooq06:38
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario002-multinode-oooq-container, tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722406:39
*** bogdando has joined #oooq06:43
quiquellsshnaidm: What do we need to run this at an RDO job ?06:55
quiquellhttps://review.rdoproject.org/r/#/c/14084/106:55
*** bogdando has quit IRC07:06
*** tosky has joined #oooq07:07
*** amoralej|off is now known as amoralej07:11
*** bogdando has joined #oooq07:12
*** kopecmartin has joined #oooq07:12
*** ykarel_ is now known as ykarel07:15
*** gkadam has joined #oooq07:15
*** florianf has quit IRC07:18
*** florianf has joined #oooq07:18
*** ccamacho has joined #oooq07:25
*** holser__ has joined #oooq07:37
*** holser__ has quit IRC08:00
*** jbadiapa_ is now known as jbadiapa08:00
*** holser__ has joined #oooq08:00
quiquellsshnaidm: You there ?08:18
*** ykarel_ has joined #oooq08:21
*** ykarel has quit IRC08:23
*** ykarel_ is now known as ykarel08:28
arxcruz|ruckchandankumar: https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-master/50abb54/undercloud/home/jenkins/tempest.log.gz08:30
hubbotFAILING CHECK JOBS on master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario002-multinode-oooq-container, tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722408:39
*** moguimar has quit IRC08:51
chandankumararxcruz|ruck: yup08:55
chandankumararxcruz|ruck: is it popping only in periodic jobs?08:56
arxcruz|ruckchandankumar: so far yes, i'm trying to reproduce on rdocloud08:56
pandaquiquell: how's going ?08:57
pandaquiquell: something left to review ?08:57
pandaquiquell: should I take the card in backlog ?08:57
quiquellpanda: Was starting with that card08:58
chandankumararxcruz|ruck: I have reproducer ready just now08:58
chandankumararxcruz|ruck: from fs3008:58
arxcruz|ruckchandankumar: master ?08:59
arxcruz|ruckchandankumar: can you check ?08:59
quiquellpanda: Have upload small fix to emit_release_script, prevent check job from working08:59
chandankumararxcruz|ruck: sure08:59
quiquellpanda: https://review.openstack.org/#/c/572420/08:59
quiquellpanda: Let me upload the new job as a DNM and we can play with it08:59
quiquellpanda: The new job https://review.openstack.org/#/c/574665/09:01
quiquellpanda: It's not integrated with the emit release, just print the toci variable09:02
quiquellpanda: Hmm, the job doesn't appear in zuul, I am missing something09:04
quiquellpanda: Na forget about it09:04
quiquellpanda: Testing patch https://review.openstack.org/57467109:09
quiquellpanda: We can try to merge emit_releases_script changes first https://review.openstack.org/#/c/572420/09:12
quiquellpanda: About the promoter your change has been merged, we can close https://bugs.launchpad.net/tripleo/+bug/176809009:15
openstackLaunchpad bug 1768090 in tripleo "promoter script is not comparing timestamps correctly when folding hashes" [High,In progress] - Assigned to Gabriele Cerami (gcerami)09:15
chandankumararxcruz|ruck: http://paste.openstack.org/show/723283/09:25
chandankumararxcruz|ruck: https://github.com/openstack/python-tempestconf/blob/master/config_tempest/services/identity.py#L6509:25
chandankumarit is returning 50009:25
chandankumararxcruz|ruck: we need to take a look at the keystone logs to see why it is doing 50009:25
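The 500 above is what later surfaces as the "KeyError: 'resources'" in the bug title: the json-home response gets indexed without checking the status first. A minimal sketch of that failure mode (function name and flow are paraphrased, not the actual python-tempestconf code):

```python
# Minimal sketch, not the actual python-tempestconf code: fetch the json-home
# document for the identity service and read its 'resources' section.
import json

import requests


def get_identity_v3_extensions(service_url):
    headers = {'Accept': 'application/json-home'}
    resp = requests.get(service_url, headers=headers, verify=False)
    # When keystone answers 500, the body is a JSON error document with no
    # 'resources' key, so the lookup below raises KeyError: 'resources'.
    body = json.loads(resp.content)
    return body['resources']
```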
arxcruz|ruckchandankumar: i'm opening a bug anyway09:31
chandankumararxcruz|ruck: the bug is already there09:33
sshnaidmquiquell, need to add a job here: https://github.com/rdo-infra/review.rdoproject.org-config/blob/8f5408dc753ff072b03961b35304da9bd50b4c64/zuul/projects.yaml#L385309:33
ssbarneaout of curiosity, did anyone had any success (or attempt) of using ansible-review?09:33
sshnaidmquiquell, and configure it there https://github.com/rdo-infra/review.rdoproject.org-config/blob/8f5408dc753ff072b03961b35304da9bd50b4c64/jobs/rdoinfra.yaml#L209:33
chandankumararxcruz|ruck: https://bugs.launchpad.net/tripleo/+bug/177630109:33
openstackLaunchpad bug 1776301 in tripleo "[master promotion] Tempest is failing with " KeyError: 'resources' "errors - Connection refused" [Critical,Triaged]09:33
*** moguimar has joined #oooq09:33
*** dtantsur|afk is now known as dtantsur09:35
arxcruz|ruckchandankumar: meanwhile, could we add a try catch ?09:36
arxcruz|rucknah, silly me09:36
quiquellsshnaidm: Ok, will try to add tox-py2709:37
chandankumararxcruz|ruck: there is, but we need to make some modifications to the lower portion of the code09:37
arxcruz|rucksshnaidm: what do we need to do to have this merged https://review.openstack.org/#/c/574270/ ?09:49
arxcruz|ruckrecheck ?09:49
arxcruz|rucksshnaidm: nevermind, it's on the gates already09:50
chandankumararxcruz|ruck: http://paste.openstack.org/show/723289/ does something like this work?09:55
arxcruz|ruckchandankumar: i'm thinking here, adding a try / catch is just hiding the problem...09:56
arxcruz|rucki'm trying to understand the flask stuff on keystone09:56
quiquellpanda: You there ?10:02
ykarelarxcruz|ruck, chandankumar i think /v3 needs to be appended in service_url,10:03
ykarelwith ^^ i am not seeing the issue can you try10:04
arxcruz|ruckykarel: https://github.com/openstack/keystone/blob/master/keystone/version/controllers.py#L4310:05
ykarelarxcruz|ruck, yup, but this seems to be not working10:05
pandaquiquell: sorry, got some problems10:05
pandaquiquell: here now10:05
chandankumarykarel: one min trying10:05
quiquellpanda: Ok np, what do you want first reviews, new job or promoter ?10:06
arxcruz|ruckchandankumar: can you please give me access to your env ?10:06
arxcruz|ruckgithub.com/arxcruz.keys10:06
pandaquiquell: new job, so it can roll while we do other things10:06
chandankumararxcruz|ruck: it worked10:07
arxcruz|ruckhmmmm10:09
chandankumaradding v3 at service_url10:09
arxcruz|ruckwell i'm waiting your patch on python-tempestconf10:09
arxcruz|ruckchandankumar: can you verify at your keystone if this code is there https://github.com/openstack/keystone/blob/master/keystone/version/controllers.py#L43 ?10:09
ykarelarxcruz|ruck, chandankumar so what i can say definitely there is an issue in keystone with unversioned url(/v3 not appended), adding /v3 in tempestconf can workaround that10:10
ykarelarxcruz|ruck, what i noticed is that when we append /v3 in url https://github.com/openstack/keystone/blob/master/keystone/version/controllers.py#L43 is not hit10:11
ykarelso we don't see issue in this case10:11
pandaquiquell: but you already tested fs50 n -> n + 1 ?10:12
pandaquiquell: because I don't see much left to do for this card ..10:13
*** zoli is now known as zoli|lunch10:15
arxcruz|ruckykarel: yeah, i think i know the problem...10:16
arxcruz|ruckchandankumar: how was the auth_url ?10:16
arxcruz|ruckwas it ending with / ?10:16
arxcruz|ruckbefore the error in keystone log there's this 2018-06-11 20:16:29.821 216 INFO keystone.common.wsgi [req-591e2ecd-8088-4d2e-a5ae-c23a1624187d - - - - -] GET http://192.168.24.9:5000//10:17
arxcruz|ruckwith // at the end10:17
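One way a trailing slash on the configured auth_url could produce that '//' request is plain string concatenation somewhere along the way; this is only a guess at the mechanism, sketched below (arxcruz rules out urljoin a few lines further down):

```python
# Toy illustration of the guess above, not traced from real code.
from urllib.parse import urljoin

base = 'http://192.168.24.9:5000/'   # an auth_url already ending in '/'

print(base + '/')                    # http://192.168.24.9:5000//  <- the log line
print(urljoin(base, '/'))            # http://192.168.24.9:5000/   <- urljoin is fine
print(base.rstrip('/') + '/')        # http://192.168.24.9:5000/
```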
quiquellpanda: The WIP of the new job https://review.openstack.org/#/c/574665/10:23
quiquellpanda: It just prints the toci variable, I have to integrate it with the script10:23
*** holser__ has quit IRC10:23
*** holser__ has joined #oooq10:24
arxcruz|ruckchandankumar: is possible to give me access ?10:26
chandankumararxcruz|ruck: zuul@38.145.34.9210:27
chandankumararxcruz|ruck: run sh tempest-setup.sh10:28
chandankumaryou will be directly taken into containers10:28
quiquellpanda: We also need some reviewing on the patches10:29
chandankumararxcruz|ruck: able to ssh?10:30
arxcruz|ruckchandankumar: yes10:30
arxcruz|ruckchandankumar: may i add a print on keystone just to check what's going on ?10:31
chandankumararxcruz|ruck: feel free to do anything,10:31
chandankumararxcruz|ruck: I am preparing a patch10:31
ykarelarxcruz|ruck, may be // is the problem, but i have no idea yet10:33
pandaquiquell: uploaded new patchset to test fs5010:33
pandaquiquell: not liking too much how I changed stuff :/10:33
pandaquiquell: which patches ?10:34
pandaquiquell: all the dependencies ?10:34
arxcruz|ruckykarel: from urljoin isn't the problem10:34
quiquellpanda: What do you mean ?10:34
arxcruz|ruck>>> req = webob.Request.blank(10:35
arxcruz|ruck...         '/v3', headers={'Accept': 'application/json-home'})10:35
arxcruz|ruck>>> req10:35
arxcruz|ruck<Request at 0x10affcc10 GET http://localhost/v3>10:35
arxcruz|ruckchandankumar: i've added some log.info in keystone, now restarting the docker10:36
arxcruz|ruckand it seems it isn't working D:10:36
*** bogdando has quit IRC10:37
pandaquiquell: uhm I was about to ruin the rebases ..10:37
quiquellpanda: This new job is going to run only at queens ?10:38
quiquellpanda: Because n - n + 1 doesn't make sense in master10:38
pandaquiquell: we can always run n -> n + 1 from master to the FUTURE10:39
hubbotFAILING CHECK JOBS on master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario002-multinode-oooq-container, tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722410:39
quiquellpanda: Sounds very DLRN10:40
quiquellpanda: Maybe we can start to merge trown's work, it's working fine10:40
quiquellpanda: First this https://review.openstack.org/#/c/57241910:41
quiquellpanda: Then this https://review.openstack.org/#/c/57242010:41
*** quiquell has quit IRC10:48
chandankumararxcruz|ruck: i think keystone died10:58
arxcruz|ruckchandankumar: checking10:59
*** quiquell_phone has joined #oooq10:59
arxcruz|ruckchandankumar: i'm restarting the docker11:00
quiquell_phonePanda: they have switched off the lights at my building11:00
quiquell_phoneGoing for lunch, lets talk later11:01
quiquell_phoneQuiquell|lunch11:01
*** quiquell_phone is now known as quiquell|lunch11:01
*** quiquell|tmp has joined #oooq11:06
*** quiquell|lunch has quit IRC11:07
quiquell|tmppanda: Ok I am back, have some minutes11:07
pandaquiquell|tmp: lunch already done ?11:09
pandaquiquell|tmp: need to rebase all the changes11:09
quiquell|tmppanda: Nope, will do the lunch later, I am connecting with my mobile phone11:11
quiquell|tmpsshnaidm, marios: reviews for trown's changes https://review.openstack.org/#/c/572419/17397811:12
pandaquiquell|tmp: can you check the new rebase under 57466511:15
panda?11:15
arxcruz|ruckykarel: so it seems there's nothing to do, the problem is the /v3 because i'm running tempestconf on chandankumar env, and i get the error in python-tempestconf, but nothing in the logs from keystone as you pointed here11:17
arxcruz|ruckhttps://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-multinode-1ctlr-featureset030-master/fbc106b/subnode-2/var/log/containers/keystone/keystone.log.txt.gz?level=ERROR#_2018-06-11_20_16_29_82411:17
mariosquiquell|tmp: ack will check11:17
ykarelarxcruz|ruck, i can see the logs in keystone11:18
arxcruz|ruckykarel: i'm running the code, and not getting the error, i believe it's related to whatever else, at least in chandankumar environment11:19
ykarelarxcruz|ruck, python -c "import requests,json;r=requests.get('http://192.168.24.1:5000',verify=False,headers={'Accept': 'application/json-home'});print(r.content)"11:20
ykarelwhen i run ^^, i see same logs as CI in keystone.log11:20
ykarelhere:- /var/log/containers/keystone/keystone.log11:20
ykareli am testing against undercloud11:21
arxcruz|ruckykarel: you're right11:21
arxcruz|ruckweird..11:21
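Putting ykarel's one-liner together with the /v3 observation, a small side-by-side reproducer; the IP and the expected status codes come from the discussion above and are not re-verified here:

```python
# Expanded form of ykarel's one-liner: the same json-home request 500s against
# the unversioned root but succeeds against /v3 (behaviour as reported in the
# channel, not re-verified).
import requests

HEADERS = {'Accept': 'application/json-home'}

for url in ('http://192.168.24.1:5000', 'http://192.168.24.1:5000/v3'):
    r = requests.get(url, headers=HEADERS, verify=False)
    print(url, r.status_code)   # expected: 500 for the first, 200 for the second
```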
quiquell|tmppanda: Have rebased 574665 now it's ok, let's recheck the test again11:23
quiquell|tmppanda: And now I go for lunch11:23
*** quiquell|tmp is now known as quiquell|lunch11:24
*** quiquell|lunch has quit IRC11:28
*** quiquell has joined #oooq11:31
*** quiquell is now known as quiquell|lunch11:31
arxcruz|ruckykarel: it seems the problem is the headers11:32
*** zoli|lunch is now known as zoli11:32
arxcruz|ruckif i remove the  headers 'Accept': 'application/json-home' it pass11:32
ykarelarxcruz|ruck, but without this you will not get extensions11:32
arxcruz|ruckykarel: yes11:33
arxcruz|ruckand i think our code was just wrong all the time11:33
arxcruz|ruckthat's the right behavior11:33
arxcruz|ruckbecause11:33
arxcruz|ruckif you do a curl on the :50011:33
arxcruz|ruck:500011:33
arxcruz|ruckit will return the expected11:33
arxcruz|ruckthe list of endpoints11:33
arxcruz|ruckthen if you do :5000/v311:33
arxcruz|ruckit will return the info for the v311:34
arxcruz|rucki believe this is the right behavior11:34
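A sketch of the behaviour arxcruz describes: query the unversioned endpoint without the json-home Accept header to get the version list, pick the v3 link from it, and only then ask for the json-home document. The response shapes assumed here are the standard keystone discovery format, not output captured from this environment:

```python
# Sketch under the assumptions above; the IP is reused from the discussion.
import requests

root = requests.get('http://192.168.24.1:5000', verify=False).json()
v3 = next(v for v in root['versions']['values'] if v['id'].startswith('v3'))
v3_url = v3['links'][0]['href']                # e.g. http://192.168.24.1:5000/v3/

home = requests.get(v3_url, verify=False,
                    headers={'Accept': 'application/json-home'}).json()
print(sorted(home.get('resources', {})))       # the resources/extensions listing
```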
*** quiquell|lunch has quit IRC11:36
*** amoralej is now known as amoralej|lunch11:48
weshaypanda, ready?12:01
pandaweshay: already there12:02
weshaypanda, can't hear you12:02
*** rlandy has joined #oooq12:12
*** rlandy is now known as rlandy|rover12:12
rlandy|roverarxcruz|ruck: hello!12:14
arxcruz|ruckrlandy|rover: hello, i'm already aware of the failures :)12:15
arxcruz|ruckrlandy|rover: chandankumar is working on a patch12:15
rlandy|roverarxcruz|ruck: lol - not chasing you :) - I think I messed up the bug I logged yesterday12:15
arxcruz|ruckrlandy|rover: lol, no, didn't meant that, just to let you know :)12:16
rlandy|roverarxcruz|ruck: I may have put two problems in the same bug :(12:17
rlandy|roverhttps://bugs.launchpad.net/tripleo/+bug/177630112:17
openstackLaunchpad bug 1776301 in tripleo "[master promotion] Tempest is failing with " KeyError: 'resources' "errors - Connection refused" [Critical,Triaged]12:17
rlandy|roverThere was a tempest error and and tempestmail error12:18
rlandy|roveridk if they are the same problem12:18
arxcruz|ruckrlandy|rover: it's the same, problem is, keystone change (again) the url12:18
rlandy|roverarxcruz|ruck; ok - just checking - I know very little about these things12:18
arxcruz|ruckso, python-tempestconf was trying in an endpoint that no longer works12:18
rlandy|roverwhich is why I ruck/rover with you :)12:18
*** ratailor has quit IRC12:19
*** ykarel_ has joined #oooq12:21
*** trown|outtypewww is now known as trown12:21
*** ykarel has quit IRC12:24
*** ykarel_ is now known as ykarel12:26
ykarelarxcruz|ruck, rlandy|rover rdo zuul queue is too large currently: 27 hours12:28
ykarelas we are aware of the current issues, should we kill the current run until we get the tempestconf fix12:29
ykarelif wanted, queens can be run as the queens fix has already landed12:29
rlandy|roverykarel: we can kill the containers build on master12:32
rlandy|roverweshay: arxcruz|ruck: ^^ ok?12:32
arxcruz|ruckrlandy|rover: fine by me, this run will fail anyway12:33
arxcruz|ruckonly queens for now, we still working on the patch for master12:33
rlandy|roverarxcruz|ruck: ykarel: hmmm - I don't have access to kill it - do you?12:33
arxcruz|ruckturns out we need to test both / and /v3 because packstack still use / to get the extensions and also we need to care about previous releases as well12:33
rlandy|roverlooking at jenkins12:34
rlandy|roverotherwise will ask on rdo12:34
arxcruz|ruckweshay: let me know when you'r done with panda :)12:34
weshayarxcruz|ruck, 3min12:34
ykarelrlandy|rover, i don't  have access, amoralej|lunch jpena can do that12:34
ykarelbut i think both master/queens killed together12:34
arxcruz|ruckykarel: and i'm still not sure if the fix will solve the problem, perhaps we need to set the env to skip ssl verify12:34
ykarelarxcruz|ruck, in tempest-conf that's already set,12:35
arxcruz|ruckykarel: i meant in the get-overcloud-nodes script12:35
arxcruz|ruckthat runs before the tempest-conf12:36
ykarelarxcruz|ruck, you mean the version check that just merged12:36
weshayarxcruz|ruck, ready12:37
arxcruz|ruckykarel: yeah12:37
ykarelarxcruz|ruck, let's see how it goes in queens, have you tried any reproducer with that patch?12:37
ykareli think it will work12:37
arxcruz|ruckstill working12:37
rlandy|roverchecking the tenant12:38
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal @ https://review.openstack.org/56044512:39
rlandy|roverykarel: arxcruz|ruck: we got some stacks in create failed - cleaning up12:40
ykarelrlandy|rover, ack12:41
rlandy|roverykarel: is there some other issue on rdocloud today? odd tests within long waiting job queues are not being scheduled12:47
rlandy|roverthe tenant is not oversubscribed12:47
ykarelrlandy|rover, am not aware of any issue for today on rdocloud12:47
rlandy|roverI'll ask on rdo - just in case - more the scheduling than the cloud itself12:48
ykarelrlandy|rover, i think they are just waiting for the vms12:48
ykarelas upperlimit is set to 80 or 8512:48
ykarelyup jpena can give more insight on it12:48
rlandy|roverwe are not short of them - maybe we should up the limit12:48
rlandy|roverI think we can oversubscribe more - will ask12:49
ykarelack also i remember there were talks of 4 new compute nodes,12:49
ykarelbut don't know what those were for or what the progress is there12:50
ykarelweshay, any idea ^^?12:50
*** quiquell has joined #oooq12:58
quiquellweshay: Are you there ?13:00
*** tcw has quit IRC13:00
weshayykarel, quiquell sec.. coming out of a 1-113:00
ykarelack13:00
quiquellweshay: np13:01
*** amoralej|lunch is now known as amoralej13:02
amoralejykarel, what do you need?13:02
amoralejto cancel some job?13:02
ykarelamoralej, jpena cleared up13:02
amoralejok13:03
*** tcw has joined #oooq13:03
trownquiquell: thanks for fixing up the backwards store_true thing... we probably need some basic functional test that actually passes arguments to script and checks results... since our unit tests cant catch that13:05
rlandy|roveramoralej: we're chatting on rhos-ops regarding the queues13:06
rlandy|roverquiquell: hey - did you get the message re: your reprovisioned box?13:06
quiquelltrown: yw, would be nice to mock ansible-playbook to do integration testing of TOCI13:07
quiquellrlandy|rover: Yep, thanks ! :-)13:07
quiquelltrown: Your changes are good to merge, maybe we can squash the two commits13:07
trownquiquell: that is more than what I am thinking ... more like a test that just does `python emit-releases.py ...` and checks the output matches13:08
trownquiquell: ya maybe it is better... not even sure why I split those to begin with...13:08
quiquelltrown: in that case, passing ARGV and still running pytest is possible, so we don't have to run the script13:08
quiquelltrown: we just test the function main()13:08
quiquelltrown: and mock the calling functions13:09
trownquiquell: oh, ya adding a test or 2 like that would be good13:09
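A minimal sketch of the test quiquell and trown are talking about, assuming pytest: patch sys.argv, call main() directly, and assert on what gets written. The module name, the main() entry point, the compose_releases_dictionary helper and the command-line flags are illustrative assumptions, not the real emit-releases interface:

```python
# Sketch only -- names and flags below are assumptions, not the real script API.
import sys
from unittest import mock

import emit_releases_file  # hypothetical: the script imported as a module


def test_main_writes_release_pair(tmpdir, monkeypatch):
    out = tmpdir.join('releases.sh')
    monkeypatch.setattr(sys, 'argv', [
        'emit-releases.py',
        '--stable-release', 'queens',
        '--featureset-file', 'featureset050.yml',
        '--output-file', str(out),
    ])
    fake = {'undercloud_install_release': 'queens',
            'overcloud_target_release': 'master'}
    # Mock the "calling functions" so only main()'s wiring is exercised.
    with mock.patch.object(emit_releases_file, 'compose_releases_dictionary',
                           return_value=fake):
        emit_releases_file.main()
    assert 'overcloud_target_release=master' in out.read()
```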
quiquelltrown: If you do a squash, do it over the last patch, all the Depends-On are pointing to it13:10
weshayykarel, I think the new hardware was added, we're going to get 5 new nodes over the next 3 quarters I think13:10
weshayykarel, rlandy|rover we're also going to get the working nodes from rh113:10
trownquiquell: k13:10
ykarelweshay, ack13:11
weshayquiquell, what's up13:11
quiquellweshay: two things I misunderstood13:12
quiquellweshay: for fs037 (undercloud updates) tripleo-upgrades doesn't make sense ?13:13
rlandy|rovernice13:13
*** skramaja has quit IRC13:13
quiquellweshay: meaning adding jobs for fs037 at tripleo-upgrade project13:13
weshayquiquell, 37 is the update workflow13:14
weshayit needs to be check/gate on tq/tqe/tci tripleo-heat-templates, python-triploclient, tripleo-common on master/queens13:14
quiquellweshay: Ok brain fart will abandon the change13:15
quiquellweshay: And the other, if fs050 is master only how re we going to use it to test n -> n + 1 ?13:15
weshayquiquell, additionally I'm trying to help get fs51 the upgrade workflow13:15
weshayin as check, non-voting13:16
weshayon queens/master13:16
quiquellweshay: Yep, but that's not part of the fs037 sprint14 task, I was confused yesterday13:16
quiquellweshay: Sorry about the confusion13:16
weshayfs037 on queens was a sprint task I think13:18
weshayquiquell, https://trello.com/c/flI683EI/774-ci-job-create-job-37-work-on-queens-and-calls-tripleo-upgrade-updates-workflow13:18
rlandy|roverrfolco: hey - you're a zuul/SF expert these days, right :) ...  how come other check jobs get scheduled before the one job left in a change set waiting 28 hr 24 min?13:18
weshaythat should just read.. updates workflow.. not upgrade13:18
* rlandy|rover does not understand zuul's logic here13:18
rlandy|rovershould be fifo, iiuc13:19
weshaynot sure who wrote that card13:19
*** kopecmartin has quit IRC13:19
quiquellweshay: K, and about the other point fs050 and n -> n + 1 ?13:19
quiquellweshay: if fs050 is master only, we cannot do a n -> n + 1 with it13:19
rfolcorlandy|rover, parsing what you said...13:20
rlandy|roverrfolco: pls see https://review.rdoproject.org/zuul/13:20
rlandy|roverwe have jobs waiting over 20 hours13:20
rlandy|roverwith one missing scheduled job13:20
weshayquiquell, jump on my blue13:20
*** kopecmartin has joined #oooq13:20
rlandy|roverand then other, new jobs running13:20
weshayquiquell, that is what emit_release_file.py should handle13:21
weshayquiquell, https://bluejeans.com/u/whayutin/13:21
quiquellOk13:21
rlandy|roverrfolco: gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035-master on the 17 hr job got scheduled13:22
rlandy|roverand our 24+ job did not13:22
rlandy|roverneither did the 28+ hr job13:22
rfolcorlandy|rover, for those with more than 20hr, I see at least one voting job that failed... and there is one job queued. I am not 100% sure this is the case, i.e. whether zuul is smart enough to favor new green jobs and leave failed ones to the end (queued)13:26
rlandy|roverrfolco: if that is the case we should NOT merge any more failing jobs13:26
rlandy|roverweshay: ^^13:27
rfolcorlandy|rover, it should run yet, I am not saying zuul won't run those13:27
mariosfolks is this a known thing "Error in build_rpm_wrapper for openstack-tripleo-common" e.g. at http://logs.openstack.org/86/571186/3/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades/0e1c43b/job-output.txt.gz#_2018-06-12_12_11_48_54336113:28
rlandy|roverrfolco: interestingly the 24 hr periodic also has 1 queued job13:28
rlandy|rovernow those have lower priority13:28
rlandy|roverbut still13:28
rlandy|roverone job?13:28
rlandy|roverone little job?13:28
*** Goneri has joined #oooq13:29
rlandy|roverthose jobs are all green13:29
rlandy|roverthere is some zuul logic in here we're not optimizing13:29
rfolcohmmm13:29
rfolcotrue13:29
rfolcothere is one in periodic with 33 hr 29 min13:30
rfolcoall green13:30
rfolcoand one queued13:30
rlandy|roverI know right ....13:30
rfolcorlandy|rover, gimme a sec, let me get some thoughts from sf-dfg13:31
rlandy|roverrfolco: cool13:31
rlandy|roverthanks13:31
rfolcoyw13:31
rlandy|roveroh dear - the upstream gate is also sitting at 17 hours???13:36
*** bogdando has joined #oooq13:41
*** atoth has joined #oooq13:43
rlandy|rovermarios: checked the build logs on that job ... error: File not found: /builddir/build/BUILDROOT/openstack-tripleo-common-9.1.1-0.20180612121053.7adbe75.el7.x86_64/usr/bin/container-check13:44
rlandy|roverdid we package container-check?13:44
rlandy|roverhttp://logs.openstack.org/86/571186/3/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades/0e1c43b/logs/delorean_logs/7a/db/7adbe755a74692a346124440c179bcf169c94d79_dev/build.log.txt.gz13:45
rlandy|rovermarios: as a side note, you can check sova for job results comparison: http://cistatus.tripleo.org/13:45
rlandy|roverah - not bad on thr rhos-13 job - got to container-prep13:47
*** bogdando has quit IRC13:48
*** kopecmartin has quit IRC13:50
mariosrlandy|rover: thanks gonna check that build log13:51
mariosrlandy|rover: is that a new thing from today (container-check file not found) i think so/haven't seen it before13:51
rfolcorlandy|rover, fyi I am on #sf-dfg listening to paul... we'll have to fix precedence there looks like. I can work on that13:52
quiquellpanda, myoung: need another eyes in this error http://logs.openstack.org/71/574671/2/check/tripleo-ci-centos-7-containerized-undercloud-from-release-upgrades/e336ec3/job-output.txt.gz13:53
rlandy|roverrfolco++13:54
hubbotrlandy|rover: rfolco's karma is now 113:54
quiquellpanda, myoung: ok, found the issue13:55
rlandy|rovermarios: I don't know the latest status on container-check but we were pip installing it - expecting one day it would get packaged - maybe today should have been that day. It's Ian Main's work13:56
mariosrlandy|rover: ack thanks13:56
myoungquiquell: just looking at it now... http://logs.openstack.org/71/574671/2/check/tripleo-ci-centos-7-containerized-undercloud-from-release-upgrades/e336ec3/job-output.txt.gz#_2018-06-12_11_44_07_99946813:56
myoungline 34: syntax error: unexpected end of file13:57
rlandy|roverrfolco: so you want a LP bug for  the zuul scheduling issue?13:57
rlandy|roverdo you13:57
rfolcorlandy|rover, I will throw the question back to you... do we need it ?13:58
rlandy|roverrfolco; idk how the sf dfg works13:59
rlandy|roverwhat their bug tracking system is13:59
rfolcorlandy|rover, alan is struggling to delete slaves on nodepool13:59
rlandy|roverrfolco: also depends how soon it can get fixed13:59
rfolcomany on delete state13:59
rlandy|roverI know13:59
rlandy|roverbeen watching them struggle with that all morning14:00
rfolcorlandy|rover, on me... relax and enjoy your rovering14:00
rlandy|roverstacks are clean though14:00
rfolcoI can take care of this issue for you14:00
*** bogdando has joined #oooq14:00
rlandy|roverrfolco: man, happy to have you on board!14:00
rlandy|roverone less thing to worry about14:00
rfolcoI promise work, I did not promise fixing14:01
rfolcolol14:01
rfolcoI will report back to you asap14:01
*** kopecmartin has joined #oooq14:02
myoungarxcruz|ruck: please update status matrix https://etherpad.openstack.org/p/tripleo-ci-squad-meeting @ L3314:06
myoungarxcruz|ruck: for weekly #tripleo squad status14:06
myoungarxcruz|ruck: minimal the 4x4 matrix of jobs/days from rhos-release dashboard, augment with anything else notable for the #tripleo wider team14:06
myoungarxcruz|ruck: please :)14:06
arxcruz|ruckmyoung: ack14:07
*** kopecmartin has quit IRC14:07
*** kopecmartin has joined #oooq14:07
*** kopecmartin has quit IRC14:07
*** kopecmartin has joined #oooq14:08
*** ccamacho has quit IRC14:20
*** ccamacho has joined #oooq14:20
*** quiquell is now known as quiquell|off14:22
*** arxcruz|ruck is now known as arxcruz|brb14:25
rfolcohttps://review.rdoproject.org/r/14195 Set precedence to normal for openstack-periodic14:27
ykarelrfolco, is ^^ temporary?14:28
ykarelas we need to promote packages from promotion jobs14:28
rfolcono, it should be definite14:28
rfolcoand it should be low not normal14:28
myoungmarios, sshnaidm, and I gave the community meeting +10 mins and no one showed, if folks want/need to chat about $allThingsCI, the room is open, happy to return, else cancelled.14:28
ykarelrfolco, not sure how it would go as we might skip most of the cron runs for this14:29
rfolcomyoung, I think its hard to join right after community meeting when you are busy with something.... it would be much easier to attend in a specific time14:30
rlandy|roverweshay: panda: ^^ pls weigh in here14:30
rlandy|roverI am not sure how we want to solve this issue14:30
myoungrfolco: I concur re: a scheduled time, vs. "after the #tripleo meeting ends" - which is a variable start time and harder to plan for.  The counterpoint is that as #tripleo assembles for their weekly meeting and it's scheduled for a full hour (but rarely is) - it's 'open time'.  I'll bring this up in retrospective tomorrow (or marios will, he has some thoughts on the topic as well).14:31
myoungI'll pop in now as well in case folks see the internal calendar invite start time of now14:32
weshayrfolco, did you attend the #tripleo mtg?14:33
pandacommunity meeting overlaps with UA sync if tripleo meeting ends early14:33
weshayrlandy|rover, /me reads14:33
weshayhow far back?14:33
rfolcoweshay, no, sorry, was discussing rdo nodepool issue and working on a fix.14:34
weshayrfolco, k.. reminder.. attending the #tripleo mtg is mandatory14:35
weshayfor this group14:35
rfolcoweshay, ack14:36
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal @ https://review.openstack.org/56044514:40
weshayrfolco, rlandy|rover before we lower precedence we should consider other options because that has a large negative effect on pipelines14:40
rlandy|roverweshay: rfolco; yep - that is why I wanted you to weigh in14:42
rlandy|roverwe have a throughput issue14:42
rlandy|roverand the suggestion was just to let the queues empty out14:42
rlandy|roverbut we have the 24hr pipeline job and two checks jobs14:42
rlandy|roverthat just are never getting scheduled14:43
rlandy|roverthe pipeline job worries me more14:43
rlandy|roverbecause the next 24 hr job will get queued behind it14:43
rlandy|roverif it does not run before then14:43
rlandy|roverweshay: I asked if we could just get rid of the jobs that are as yet unscheduled - ...14:56
rlandy|rover<pabelanger> no, you need to stop zuul to clear the queue14:57
rlandy|rover<pabelanger> I'd just leave it to eventually catch up14:57
rlandy|roverso we're sitting with jobs that are just going to be longer- and longer running14:58
*** rfolco has quit IRC14:58
*** rfolco has joined #oooq14:58
* rlandy|rover proposed to give all of openstack dev two days off and clear the queues14:58
weshayrlandy|rover, rfolco sorry.. bluejeans for 4min?14:59
weshayhttps://bluejeans.com/u/whayutin/14:59
rlandy|roverjoined15:00
weshaysorry.. I thought I was in15:00
*** trown is now known as trown|lunch15:02
mariosrlandy|rover: fyi that issue with container check i found this https://review.rdoproject.org/r/#/c/14143/ and https://review.openstack.org/#/c/573699/15:10
mariosrlandy|rover: (so indeed looks like it is being packaged. maybe a new run on that failing https://review.openstack.org/#/c/571186/3 will pass15:10
rlandy|rovermarios; in meeting - will look in a bit15:12
*** ccamacho has quit IRC15:14
mariosrlandy|rover: ack was just fyi as you were wondering if we packaged container-check)15:17
mariosrlandy|rover: thanks15:17
ykarelmarios, it's removed from package15:20
ykareli mean reverted: https://review.rdoproject.org/r/#/c/1419015:20
ykarelthe updated tripleo-common-containers package is in repo, so recheck should work15:21
mariosykarel: thank you!15:21
*** ykarel is now known as ykarel|away15:22
*** saneax has quit IRC15:36
rlandy|roverykarel|away: thanks for answering as usual15:36
rlandy|rovermarios: all set? let me know if there is still something to look into15:37
*** jtomasek is now known as jtomasek|bbl15:37
mariosrlandy|rover: thanks15:42
*** bogdando has quit IRC15:43
weshayquiquell|off, thanks https://review.openstack.org/#/c/574417/15:45
*** hamzy has quit IRC15:48
*** kopecmartin has quit IRC16:02
weshaysshnaidm, you still around?16:08
weshaysshnaidm, I need the trend on tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades16:08
weshayin sova16:09
*** zoli is now known as zoli|gone16:14
*** zoli|gone is now known as zoli16:15
*** trown|lunch is now known as trown16:15
*** agopi has joined #oooq16:20
*** ykarel_ has joined #oooq16:22
*** ykarel|away has quit IRC16:24
weshaysshnaidm, you have 2 +2's on https://review.openstack.org/#/c/572798/116:33
weshaytrown, where did we end up w/ 3node support on the libvirt repro?16:36
weshaynot supported I thought16:36
weshaybut wanted to confirm16:36
trownweshay: ya not supported16:38
hubbotFAILING CHECK JOBS on master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722416:40
trownweshay: hardcoded subnode-{0,1}: https://github.com/openstack/tripleo-quickstart/blob/master/roles/libvirt/setup/overcloud/tasks/libvirt_nodepool.yml#L149 so not at all trivial to add support16:40
weshaytrown, thanks16:41
*** dtantsur is now known as dtantsur|afk16:41
myoungweshay, trown, during the libvirt reproducer sprint I did this, which is some of the building blocks to move from "hard coded 2 nodes" to an arbitrary # of nodes --> https://github.com/halcyondude/ansible-role-virtvars16:50
myoungI also have/had patches to use it but we decided it was out of scope and punted it16:51
myoungif it's a goal I have a good starting point already done...16:51
myoungrelated goal was to use guest agent, which would fit nicely into existing prototype16:51
* weshay wishes we had CI on the libvirt repro16:54
rlandy|rovermyoung: weshay: who is our rhos-13 contact? missing overcloud_container_image_prepare.yaml    on rhos-1316:54
weshayrlandy|rover, we are missing a tht?16:55
weshayfile16:55
rlandy|roverhttp://download-node-02.eng.bos.redhat.com/rcm-guest/puddles/OpenStack/13.0-RHEL-7/2018-06-11.2/16:55
*** udesale has quit IRC16:55
ykarel_rlandy|rover, jschlueter16:55
weshayOH16:55
rlandy|roverthanks - asking16:56
*** ykarel_ is now known as ykarel|away16:56
myoungrlandy|rover: jschlueter, but jjoyce also can help.  likely #rhos-delivery is a good place to start as well16:56
jschlueterrlandy|rover: bad puddle don't use16:56
rlandy|roverjschlueter; ok - last known good puddle?16:56
rlandy|rovernvm16:57
rlandy|roverthis looks better16:57
rlandy|roverhttp://download-node-02.eng.bos.redhat.com/rcm-guest/puddles/OpenStack/13.0-RHEL-7/2018-06-12.1/16:57
rlandy|roverthanks16:57
rlandy|roverwill rerun gate16:57
weshayrlandy|rover, wait16:57
jschlueterrlandy|rover: either use 2018-06-12.1 or 2018-06-11.1 ... last passed ci was 2018-06-08.316:57
weshayrlandy|rover, are you picking a random puddle?16:57
rlandy|rover2018-06-12.1 is ok16:57
weshayrlandy|rover, http://download-node-02.eng.bos.redhat.com/rcm-guest/puddles/OpenStack/13.0-RHEL-7/passed_phase1/16:57
myoungwe should be picking up passed_phase_116:57
rlandy|roverweshay: latest afaict ...16:57
rlandy|rovergetting16:57
rlandy|roverhttp://git.app.eng.bos.redhat.com/git/tripleo-environments.git/tree/config/release/rhos-13.yml#n6516:58
jschlueterrlandy|rover: 11.2 puddle never passed phase 1 ci16:58
myoungahh those jobs aren't using the trigger-getbuild.sh script16:58
rlandy|roveralso ...16:58
weshayrlandy|rover, you must must must must must must16:58
rlandy|roverhttp://git.app.eng.bos.redhat.com/git/tripleo-environments.git/tree/config/release/rhos-13.yml#n7216:58
weshayONLY16:58
weshayONLY16:58
weshayONLY16:58
weshayONLY16:58
weshayuse passed_phase_116:59
rlandy|roverdon't think we need that line16:59
rlandy|roverabove all the ONLYs16:59
weshayk16:59
myounghttp://git.app.eng.bos.redhat.com/git/tripleo-environments.git/tree/config/release/rhos-13.yml#n7 should be getting current_build, which if we're using the script that the rest uses should always be passed_phase_116:59
rlandy|roverlibguestfs_kernel_override: 3.10.0-693.5.2.el7.x86_64 - can we kill that?17:00
weshayrlandy|rover, http://git.app.eng.bos.redhat.com/git/tripleo-environments.git/tree/config/release/rhos-13.yml#n3517:00
weshayGDM17:00
weshayall these configs should default to passed_phase_117:00
myounghttp://git.app.eng.bos.redhat.com/git/tripleo-environments.git/tree/jenkins/jobs/tripleo-quickstart/scripts/trigger-getbuild.sh#n10817:00
weshaymyoung, any implications to making that the default in the release configs17:00
weshayvs.. using a workaround script?17:00
rlandy|rover{{ rhos_release_args }}17:00
myoungweshay: so if current_build is defined, it'll call rhos-release with a puddle id17:01
jschlueterrlandy|rover, weshay: also need to be using rhceph-3-rhel7:latest for OSP 1317:01
weshaythat script should be the exception17:01
myoungif it's not defined, it'll silently take the latest puddle.  I'm not crazy about this but it's what we have.  guessing those specific jobs are not passing the variable that the rest are passing.  rlandy|rover is this the TQ gate job?17:01
weshayimho not the rule17:01
weshaymyoung, rlandy|rover we can default to passed_phase_1 in the release config17:02
weshayand not bother defining it w/ the script17:02
weshayothers using that script are welcome to keep it in17:02
myoungjschlueter: does rhos-release have the ability to chase a symlink?17:02
myounge.g. get passed_phase_1 passed vs. a puddle id?17:02
weshayI think it does17:03
myoungyeah then we can bake it into release configs.17:03
jschluetermyoung: yes it does just fine ... provide -P -p <symlink|puddle_id>17:03
myoungthe goal for all these when designed was to have any/all jobs take a parameter of what puddle/hash to look at, and be fed it17:03
myoungif we want to bake in phase1 to yml's also fine17:04
myoungcan just change the default to use that17:04
rlandy|roverrhceph-3-rhel7:latest - where would we define that17:04
rlandy|roverjschlueter: ^^?17:04
jschlueterrlandy|rover: http://git.app.eng.bos.redhat.com/git/tripleo-environments.git/tree/config/release/rhos-13.yml#n6917:05
*** ykarel|away has quit IRC17:05
jschlueterdocker_ceph_container_config and docker_ceph_deploy_params17:05
myoungrlandy|rover: patch to change default incoming in 1m17:06
myoung(note: it's already done for the other jobs afaict)17:06
myoungjust via current_build param17:06
myoungtoo much confusion over this.  agree with weshay that we should just default it in the configs as well17:06
*** holser__ has quit IRC17:06
*** hamzy has joined #oooq17:09
myoungrlandy|rover, weshay, jschlueter: https://code.engineering.redhat.com/gerrit/141358 Default all release configs to passed_phase117:11
rlandy|roverlooking17:11
myoungrlandy|rover, jschlueter, weshay, arxcruz|brb (fyi/history: ospphase0 jobs that trigger on puddles automagically on internal jobs were updated 17-may to trigger off phase0 (https://code.engineering.redhat.com/gerrit/gitweb?p=tripleo-environments.git;a=commit;h=381abbed975b86c1b3f0bfd907702f8c6a13eb45)17:19
myoungthe release configs updated by patch above (https://code.engineering.redhat.com/gerrit/141358) are afaict only used by TQ/TQE gating jobs.17:19
* myoung mumbles "sorry TMI" and goes back to work17:20
jschlueter:-)17:21
rlandy|roverthanks for the info17:22
*** amoralej is now known as amoralej|off17:22
rlandy|rovermyoung: weshay: would like to merge this https://code.engineering.redhat.com/gerrit/#/c/141358/ so I can put another patch in to update rhceph-3-rhel7:latest17:24
* myoung nods affirmatively at rlandy|rover17:25
rlandy|roverok - let's go with it17:25
myoungrlandy|rover: i should have updated that last month17:25
rlandy|roversee what happens17:25
rlandy|rovernp17:25
myoungw/ changed script defaults17:25
rlandy|rovernext patch coming up ...17:26
rlandy|roverand here we go again on rdocloud queens/master promotion17:26
rlandy|roverall queued17:26
myoungrlandy|rover, weshay: (probably more TMI) confirmed that the ovb gate jobs don't use the script to fetch the current IP.  the script predates the dlrnapi when multijobs were the success criteria vs. promoter.  We had the very real problem of jobs picking up different puddle ID's in each multijob run.  They should all get passed_phase1 now.17:28
rlandy|rovermyoung: good to know - thanks17:34
*** holser__ has joined #oooq17:34
*** holser___ has joined #oooq17:36
*** holser__ has quit IRC17:40
weshaymyoung, rlandy|rover <sigh>17:43
weshayoh wait.. that will be ok17:43
* weshay just going through reviews17:43
weshayrlandy|rover, so just checking you are not passing a build to the jobs though right?17:44
rlandy|rover https://code.engineering.redhat.com/gerrit/14136217:44
rlandy|roverweshay; ^^ one more to check17:45
rlandy|roverweshay: we should not - just take latest known good17:45
rlandy|roverie: passed_phase_117:45
weshayrlandy|rover, +2, you can merge17:45
myoungweshay: per your feedback, https://code.engineering.redhat.com/gerrit/#/c/141358 slams the default to passed_phase1 in the configs.  It was already using that for all the jobs except OVB gates, which don't use the script as part of a "get build" multijob phase17:46
rlandy|roverok - let's try the rhos-13 gate again17:46
rlandy|roverthen back to bm17:46
*** marios has quit IRC17:49
*** marios has joined #oooq17:49
rlandy|rover%gatestatus18:04
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722418:04
*** agopi has quit IRC18:15
weshaytrown, https://review.openstack.org/#/c/574417/18:32
weshayrlandy|rover, fyi https://review.openstack.org/#/c/574794/ fixes the above hubbot issue18:32
weshayrlandy|rover, although I'm hoping emit_releases_file.py does as well18:32
rlandy|roverlooking18:33
*** holser__ has joined #oooq18:33
rlandy|roveri understand that queens needs to kick but how does that stop p->q?18:34
rlandy|roverweshay: ^^18:35
rlandy|rovernot understanding the commit message explanation18:35
weshayrlandy|rover, that's removing queens branches18:36
weshayfrom kicking the upgrade job18:36
rlandy|roverWith the new emit_releases_file.py in play when queens is triggered we should see queens -> master kick vs.. pike to queens for this job.18:36
weshayso only master kicks.. and queens -> master is executed18:36
rlandy|roverI'm +2 on the change18:36
*** holser___ has quit IRC18:36
weshayrlandy|rover, ya.. I'll retest queens kicking by reverting that after fs51 is added to the exception list18:37
*** jtomasek|bbl is now known as jtomasek18:37
weshayrlandy|rover, w/ https://review.openstack.org/#/c/574417/18:37
rlandy|roveranyways +2'ed the change18:37
weshaythanks18:37
rlandy|roverI get the basic idea18:37
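A toy illustration of the pairing weshay describes (purely illustrative, not the actual emit_releases_file.py logic): for an N -> N+1 job the triggering release fixes both ends, so a queens trigger means queens -> master rather than pike -> queens.

```python
# Toy sketch only -- not the real emit_releases_file.py code.
RELEASES = ['newton', 'ocata', 'pike', 'queens', 'master']


def upgrade_pair(trigger_release):
    """Return (install_release, target_release) for an N -> N+1 upgrade job."""
    idx = RELEASES.index(trigger_release)
    return RELEASES[idx], RELEASES[idx + 1]


print(upgrade_pair('queens'))   # ('queens', 'master'), not ('pike', 'queens')
```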
rlandy|roverweshay: nit pick comment on https://review.openstack.org/#/c/574417 - if we need to merge this, I'll +2 now18:39
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722418:40
rlandy|roverweshay; in other news - there are two pocs running now downstream - envE with fs001 and rhos-13 gate - of we get a pass on either, will make changes to jjb18:40
rlandy|roverif18:40
*** agopi has joined #oooq18:53
*** atoth has quit IRC19:21
rlandy|roverweshay: you around? pls see conversation on #sf-dfg19:32
weshaymyoung, can you please ensure all cards have QE19:35
weshayhttps://trello.com/c/ZPNYHG3F/775-ci-job-make-job-50-gate e.g.19:35
weshayrlandy|rover, /me looks19:42
rlandy|roverweshay: on the upside we're down to 14 hr 50 min on check jobs in rdocloud - which is practically speedy compared with the 28 hrs earlier19:46
weshayrfolco, qe on https://trello.com/c/flI683EI/774-ci-job-create-job-37-work-on-queens-and-calls-tripleo-upgrade-updates-workflow19:48
weshay?19:48
weshayrfolco, can you update the check boxes if you are all set19:49
rfolcoweshay, yes, I assume I can move to complete as well19:49
weshayrfolco, ya.. if you are signing off on it19:49
weshaymyoung, 0% complete on test https://trello.com/c/6tcD7ilr/778-injecting-zuul-changes-at-various-points-in-the-job-workflow19:50
*** rfolco is now known as rfolco_doctor20:05
rlandy|rover%gatestatus20:14
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722420:14
weshayrlandy|rover, the upstream gate is fairly long, atm due to a packaging bug20:24
weshayrlandy|rover, I've checked it out.. nothing more we can do, I -1'd the appropriate review20:25
weshayrlandy|rover, tempest errors on fs1820:27
rlandy|roverweshay: k - thanks20:29
sshnaidmweshay, sova is fixed20:29
rlandy|roverchecking out the fs035 failures20:29
weshaysshnaidm, thank god20:30
weshaysshnaidm, we sure we have scen 5,6,7,8,9,10?20:30
weshayI know it will only show up if a job runs20:30
sshnaidmweshay, I see 7 8 10 were running20:31
sshnaidmweshay, yeah, it will show up if it will run20:31
sshnaidmweshay, every time rdo cloud has problems with networking it causes problems to dockers on sova host..20:32
sshnaidmweshay, also ruck rover dashboard won't work in this case.. maybe need to find more reliable place20:33
weshaythat's nice of it20:33
weshaysshnaidm, got to work w/ what we have20:33
*** brault has quit IRC20:39
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722420:40
*** gkadam has quit IRC20:42
*** hamzy has quit IRC20:45
*** bandini has quit IRC20:52
*** bandini has joined #oooq20:54
weshayrlandy|rover, does arxcruz|brb have a fix for the tempest config issue?20:57
weshaychandankumar, arxcruz|brb https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-multinode-1ctlr-featureset016-master/9558a2d/undercloud/home/jenkins/tempest.log.txt.gz#_2018-06-12_20_32_2520:57
weshay?20:57
*** trown is now known as trown|outtypewww20:57
rlandy|roverweshay: afaik, chandankumar did20:59
rlandy|roverat least the bug is assigned to him20:59
weshayhttps://review.openstack.org/#/c/574691/21:00
weshayrlandy|rover++21:01
hubbotweshay: rlandy|rover's karma is now 621:01
weshayrlandy|rover, nice job on envE21:01
rlandy|rovergetting there21:01
rlandy|roverweshay; don't worry about the other configs, I'm on it21:01
weshayrlandy|rover, I don't mind if you want to start on the jjb21:02
rlandy|roverit's all pretty much related21:02
rlandy|roverthe jjb is going to be one quickstart call21:03
rlandy|roveralso holding out hope for rhos-1321:04
rlandy|roverweshay: OMG ... 24 pipeline last job - scheduled!!!!21:04
rlandy|roverwe're saved21:04
rlandy|roveroh happiness is21:05
weshayHA21:05
weshayrlandy|rover, that job better pass ;)21:05
rlandy|rover41 hr 5 min21:05
rlandy|roverweshay; even if it doesn't, the next one will kick and not get delayed behind this one21:06
myoungweshay, sshnaidm: I've updated the logstash file list review (https://review.openstack.org/#/c/570896) per your feedback re: adding update files.  I also made it depend on a new review to add timestamps to the update logfiles as afaik this is required --> https://review.openstack.org/#/c/57488621:08
weshayrlandy|rover, https://review.rdoproject.org/r/#/c/13706/21:08
rlandy|rovervoted21:11
myoungweshay: long lines...ok21:14
chandankumarweshay: tosky can +2 on this one https://review.openstack.org/#/c/574691/21:18
arxcruz|brbchandankumar: i did the +2+w already21:19
toskyweshay, arxcruz|brb, weshay : I disagree with that patch21:20
toskyit's different from what it was discussed21:21
toskyI don't see the results from the RDO CI21:21
myoungweshay: what's the right way to continue a line in yaml, in a shell block using ansible's "|" operator ?21:22
arxcruz|brbtosky: sorry about that, i'll provide another patch with what we agree right after we have this fixed, sounds good for you?21:22
arxcruz|brbtomorrow morning, first thing i'll do, because it's pretty late for both of us today21:23
myoungI can use "shell: >" but now every multiline command (which is all of them) needs to be updated to use "&&"21:23
toskyarxcruz|brb: no, it's not good for me21:24
toskyI hate when people say they agree on one thing and then things are changed completely21:24
myoungweshay: I didn't think 80 col line lengths for this stuff was an issue as most of the rest of TU and our stuff doesn't abide by 80 col...21:24
toskyit's pretty late and it should not have happened, because this patch was discussed several hours ago21:24
toskyand agreed upon21:24
arxcruz|brbtosky: ok, just -w and i'll submit the new one, please bear with me21:25
toskyyou can -w, no need for me21:25
toskyand yes, it's late21:25
*** tosky has quit IRC21:25
sshnaidmmyoung, you can use regular "\"21:32
sshnaidmmyoung, btw, we added logs to master branch only, need to do it for queens, newton, ocata, pike...21:33
myoungsshnaidm: aye commented in the card to that effect, we need the backport patches21:36
rlandy|roverweshay: fyi ... this is all we will be left with per env in hardware https://code.engineering.redhat.com/gerrit/#/c/141378 - note nodes settings are moving to https://review.openstack.org/#/c/574894/21:45
rlandy|rovertesting this piece out on env E while adding other env settings21:47
*** agopi has quit IRC21:52
rlandy|roverand rhos-13 going for overcloud deploy21:52
*** jbadiapa has quit IRC21:55
*** Goneri has quit IRC21:56
*** agopi has joined #oooq22:02
*** myoung is now known as myoung|bbl22:07
*** holser__ has quit IRC22:11
*** holser__ has joined #oooq22:31
*** florianf has quit IRC22:36
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722422:40
*** dtrainor has quit IRC22:52
*** holser__ has quit IRC22:54
*** hamzy has joined #oooq23:11
rlandy|roverclose but not yet on rhos-1323:39
rlandy|roverdeploy failed23:39
rlandy|rover| Controller                                | cadd461d-2cc4-482c-8787-05aafaed3d70                      | OS::Heat::ResourceGroup                          | CREATE_FAILED   | 2018-06-12T22:00:00Z |23:42
*** dougbtv_ has joined #oooq23:45
*** dougbtv_ has quit IRC23:48
