Tuesday, 2019-01-08

hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-containerized-undercloud-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-containerized-undercloud-upgrades @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000 (1 more message)00:17
rlandy|roveryeah - finally - good to have a reproducer env00:22
*** rlandy|rover is now known as rlandy|rover|bbl00:23
*** rascasoft has joined #oooq00:24
*** rascasoft has quit IRC00:31
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-containerized-undercloud-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-containerized-undercloud-upgrades @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000 (1 more message)02:17
*** rlandy|rover|bbl is now known as rlandy|rover02:33
*** ykarel has joined #oooq02:57
*** ykarel has quit IRC03:04
*** ykarel has joined #oooq03:05
*** apetrich has quit IRC03:16
*** udesale has joined #oooq04:12
*** skramaja has joined #oooq04:13
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-containerized-undercloud-upgrades @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @  (1 more message)04:17
*** rfolco has quit IRC04:34
*** rfolco has joined #oooq04:34
*** rf0lc0 has joined #oooq04:37
*** rfolco has quit IRC04:39
*** rf0lc0 has quit IRC04:40
*** rfolco has joined #oooq04:41
*** ratailor has joined #oooq05:15
*** ykarel has quit IRC05:24
*** ykarel has joined #oooq05:51
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-containerized-undercloud-upgrades @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @  (1 more message)06:17
*** agopi__ has joined #oooq06:23
*** agopi__ has quit IRC06:33
*** agopi__ has joined #oooq06:34
*** agopi__ has quit IRC06:36
*** sshnaidm has quit IRC06:52
*** jfrancoa has joined #oooq07:06
*** quiquell|off is now known as quiquell07:09
marios09:29 < marios> needs another +2 please thanks https://review.openstack.org/#/c/628266/07:29
*** saneax has joined #oooq07:30
*** jbadiapa has joined #oooq07:43
*** quiquell is now known as quiquell|brb07:44
*** dtrainor has quit IRC07:45
*** dtrainor has joined #oooq07:46
*** gkadam has joined #oooq07:48
*** gkadam is now known as gkadam-afk07:50
*** ykarel is now known as ykarel|lunch07:51
*** saneax has quit IRC07:53
*** saneax has joined #oooq07:54
*** kopecmartin|off is now known as kopecmartin08:00
*** ccamacho has joined #oooq08:09
*** quiquell|brb is now known as quiquell08:11
*** ssbarnea has joined #oooq08:14
quiquellmarios: Scenario001 is all good now to get merged into projects ?08:16
*** ssbarnea|bkp2 has quit IRC08:16
mariosquiquell: almost i think there is one left like panko/aodh or some08:17
quiquellmarios: going to workflow puppet-tripleo08:17
quiquellmarios: looks like Alex is ok with it08:17
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-containerized-undercloud-upgrades @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @  (1 more message)08:17
mariosquiquell: https://review.openstack.org/#/c/620304/ https://review.openstack.org/#/c/620301/08:17
mariosthanks quiquell08:17
quiquellmarios: workflowed08:18
*** bogdando has joined #oooq08:23
quiquellmarios: About periodics, maybe it's better to reuse the upstream jobs and override the periodic part08:24
quiquellmarios: wdyt ?08:24
quiquellmarios: I see a lot of redundancy here https://review.rdoproject.org/r/#/c/1809308:24
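One way to read quiquell's suggestion in Zuul terms is plain job inheritance: the periodic variant declares the existing upstream job as its parent and only overrides what differs. A minimal sketch under that assumption; the variable shown (promote_source) is purely illustrative, not an existing knob from the review.

    # Hypothetical periodic variant that inherits the upstream check job
    # instead of redefining it, per the suggestion above.
    - job:
        name: periodic-tripleo-ci-centos-7-scenario001-standalone
        parent: tripleo-ci-centos-7-scenario001-standalone  # reuse the upstream job as-is
        vars:
          promote_source: tripleo-ci-testing                # illustrative periodic-only override

Anything not overridden (nodeset, playbooks, required projects) would come from the parent, which is what removes the redundancy being pointed at.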
*** gkadam-afk is now known as gkadam08:24
bogdandorlandy|rover: hi. How was it going for https://bugs.launchpad.net/bugs/1810690 ?08:26
openstackLaunchpad bug 1810690 in tripleo "periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-master - which includes idempotency check - is timing out during introspection" [Critical,Triaged]08:26
*** sshnaidm has joined #oooq08:31
quiquellsshnaidm: o/08:31
sshnaidmquiquell, hi08:31
quiquellsshnaidm: I saw some conversation with kforde about images, what was the conclusion ?08:31
sshnaidmquiquell, yeah, we talked about image sharing between projects, I'll send a needed flow08:32
sshnaidmquiquell, but it is still not exactly what we need08:33
quiquellsshnaidm: yep, I suppose this is not the final solution, it involves a lot of manual steps08:33
quiquellsshnaidm: just good enough for team08:33
sshnaidmquiquell, there are problems with images having different names each time, their locking, etc, etc08:33
sshnaidmquiquell, we can make a script that will download the latest image, rename it, upload it back and then share it with the same name like "upstream-fedora-28" or something of the kind08:34
sshnaidmquiquell, but need to check if it works that way08:35
quiquellsshnaidm: We also need to check if we can ssh as zuul into shared images08:36
*** tosky has joined #oooq08:37
sshnaidmquiquell, when you boot from image you can set keys08:37
quiquellsshnaidm: Not always, only if it works with cloud-init08:37
quiquellsshnaidm: upstream images do not work with "key-name"08:38
quiquellsshnaidm: didn't check on "RDO" upstream image08:38
sshnaidmquiquell, that's good point08:38
quiquellsshnaidm: that's what I want to test08:38
quiquellsshnaidm: If it does not work, sharing is not enough08:39
sshnaidmquiquell, but if we do it as above, we could insert the keys there I think08:39
quiquellsshnaidm: I think we can put in place the mechanism you said, even if sharing works, so we can modify whatever we need08:39
quiquellsshnaidm: we can create a periodic zuul job that does it08:40
sshnaidmyeah08:40
quiquellsshnaidm: https://paste.fedoraproject.org/paste/rahI-VnCqUYizKw3HzE7Zw08:41
sshnaidmquiquell, and seems like we can use it even if it's not accepted, so less things to do manually08:41
mariosssbarnea: o/08:41
ssbarneamarios: o/08:42
quiquellsshnaidm: Ahh that's cool08:42
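The flow sshnaidm sketches at 08:34 (download the latest image, re-upload it under a stable name, share it), combined with quiquell's idea at 08:40 of wrapping it in a periodic zuul job, could look roughly like the Ansible tasks below. This is only a sketch: the stable image name, the target project name and the created_at sorting are assumptions, and credentials for the source cloud are assumed to be set up already.

    # Rough sketch of the periodic "image sync" job discussed above.
    - hosts: localhost
      tasks:
        - name: Grab the newest upstream image
          shell: |
            latest=$(openstack image list --sort created_at:desc -f value -c Name | head -1)
            openstack image save --file /tmp/upstream.qcow2 "$latest"

        - name: Re-upload it under a stable, predictable name
          shell: >
            openstack image create --disk-format qcow2 --container-format bare
            --file /tmp/upstream.qcow2 upstream-fedora-28

        - name: Share it with the consuming tenant (project name is hypothetical)
          shell: openstack image add project upstream-fedora-28 other-tenant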
ssbarneacan anyone recommend me a solution for SNMP monitoring? preferably one that also acts as a SMTP trap server?08:43
ssbarnear/SMTP/SNMP/08:44
sshnaidmquiquell, tried to do ci on centos, but it fails in post every time, no idea what the problem is: https://review.rdoproject.org/r/#/c/18087/08:44
mariosssbarnea: planning on having a go at 2/3 in a bit as discussed yesterday but wanted to sync with you as we said we would?08:44
mariosssbarnea: is ther something you want/need for the qe stuff ?08:44
mariosssbarnea: otherwise we can just sync later08:44
ssbarneamarios: lets sync in ~30mins, just want to read email, do the morning rechecks :D08:45
mariosssbarnea: k08:45
quiquellsshnaidm: Humm, we are using the host as the nodepool provider, maybe it screws up the ssh keys so we cannot access it at post08:46
*** sshnaidm has quit IRC08:47
quiquellsshnaidm: do you know if secrets can be shared between trusted projects ?08:47
*** jpena|off is now known as jpena08:50
mariosreviews please thanks https://review.openstack.org/#/c/627965/09:02
*** holser_ has joined #oooq09:07
quiquellmarios: +2, they look good now, only scenario001 is missing stuff ?09:11
*** dtantsur|afk is now known as dtantsur09:21
*** ykarel|lunch is now known as ykarel09:22
*** apetrich has joined #oooq09:26
mariosquiquell: thanks man yeah well 4 had some extra09:27
mariosquiquell: see https://tree.taiga.io/project/tripleo-ci-board/task/541?kanban-status=1447276 and https://tree.taiga.io/project/tripleo-ci-board/task/448?kanban-status=144727609:27
mariosquiquell: we didn't check 2/3 yet09:28
quiquellmarios: ack09:28
quiquellmarios: what is the process to check this, just look at the containers?09:28
mariosquiquell: well the process i used is in Test Req (1 compare containers 2 compare tempest) as i wrote there it is exactly what ykarel pointed out09:29
mariosquiquell: and see attached .png (scroll down)09:29
*** derekh has joined #oooq09:33
quiquellmarios: ack09:34
quiquellmarios: We only have multinode-scenario002 for stable branches, not master :-(09:46
mariosquiquell: right09:48
mariosquiquell: "Since there is no more scenario001 multinode containers job on master, we have to use the latest which is ~ 14/15th December [1][2]."09:48
mariosfrom https://tree.taiga.io/project/tripleo-ci-board/task/448?kanban-status=144727609:48
mariosquiquell: so me, folco and ssbarnea had a call about this yesterday. i was going to help out folco on 2/3 today (after pushing 1/4 a bit as they are more ready/the promotions bit is the crucial part, would be nice to finish)09:49
quiquellmarios: Ahh we have scenario002 job logs too09:49
quiquellmarios: we have09:49
quiquellon master09:49
mariosquiquell: sounds like you are looking there too? can we sync?09:49
mariosquiquell: sync/rather i mean lets not duplicate09:49
quiquellmarios: Ahh ok, didn't know09:50
mariosso we can take one each?09:50
mariosquiquell: yeah ssbarnea is qe on all of these09:50
quiquellmarios: just trying to help, but if you are on it I will continue with other stuff09:50
mariosquiquell: but i was going to push the comparison/any fixes for 2309:50
quiquellmarios: ack,09:50
quiquellmarios: btw looks like there are logs for the multinode version for the master branch09:50
mariosquiquell: well didn't start yet. so you can take one if you want.09:50
mariosquiquell: so lets say you do 2 i do 3? cc ssbarnea09:51
mariosquiquell: works for you?09:51
mariosquiquell: or if you have other things i can do them both09:51
mariosquiquell: just preventing duplicate work thats all09:52
quiquellmarios: I will continue with the reproducer stuff, so we have good base for next sprint09:52
mariosquiquell: ack sounds good ok thanks (sorry i only brought it up as you were asking questions/sounded like you were gearing up to push on 2/3)09:52
quiquellmarios: Too much christmas disconnections, here in spain is bloody long arggg09:52
chandankumarmarios: quiquell why do we need to keep data files in an ansible role, something like that https://github.com/openstack/ansible-role-tripleo-modify-image/blob/master/setup.cfg#L22 ?09:53
marioschandankumar: don't have context on that sorry (don't know)09:53
*** sshnaidm has joined #oooq09:55
quiquellchandankumar: They are part of the role, without them the role does not work09:58
chandankumarquiquell: marios ack10:04
chandankumarquiquell: do I need to put a ansible.cfg file also if I move the files under data_files?10:04
mariosssbarnea: so i just finished a round of harassing folks (email and joined irc chans) so the topic branches should be more/less merging today (I mean like this https://review.openstack.org/#/q/topic:replace-scen4 1/2/3/4)10:06
mariosssbarnea: i was planning on attacking 2/3 in a bit after i check the promotions stuff on 1/4.10:06
quiquellchandankumar: some context would help10:06
mariosssbarnea: are you OK/clear on the qe tasks?10:06
chandankumarquiquell: https://review.openstack.org/#/c/629127/10:06
mariosssbarnea: otherwise let me know what/how i can help? if you are happy to do the qe and other stuff let me know if you want one of 2/310:06
chandankumarquiquell: I was checking this role https://github.com/openstack/ansible-role-tripleo-modify-image/blob/master/ansible.cfg10:07
chandankumarquiquell: so asked10:07
quiquellchandankumar: Not sure about ansible.cfg, I suspect you don't have to10:08
chandankumarquiquell: ok, I am putting all the pieces together and let's see how it goes10:08
chandankumarquiquell: one more question I have this playbook https://review.openstack.org/#/c/628415/4/playbooks/os-tempest.yml once standalone finishes I want to run it on the same job for that what to do?10:10
quiquellchandankumar: well, right now we run tempest as part of the deploy10:12
quiquellchandankumar: instead of a playbook I suppose you have to replace the current validate-tempest role call with os_tempest10:13
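A minimal sketch of what quiquell is suggesting here, assuming the existing deploy runs a validation play that pulls in validate-tempest as a role; the play shown is illustrative, not the actual tripleo-quickstart playbook.

    # Swap the role at the point where tempest already runs, instead of
    # adding a second standalone playbook.
    - name: Validate the deployment with tempest
      hosts: undercloud
      roles:
        # - validate-tempest   # current role call
        - os_tempest           # proposed replacement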
*** holser_ has quit IRC10:14
*** holser_ has joined #oooq10:15
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-containerized-undercloud-upgrades @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @  (1 more message)10:17
mariosssbarnea: ?10:25
ssbarneamarios: done *3, only two more wf to receive on them.10:26
mariosssbarnea: ack thanks i was just mid rebase but you posted v310:26
marios(tripleo-common)10:26
ssbarnea:D10:26
ssbarneamarios: nobody asked the reasons behind including the node config in the scenario name? if we had not had the multinode suffix we could have repurposed it without so many changes.10:28
mariosssbarnea: well standalone is an important part of the name no?10:32
mariosssbarnea: i mean we clearly want to distinguish from multinode10:33
ssbarneamarios: that aspect can be included in scenario definitions, anyway too late now but i raised the question for the future.10:33
ssbarneamarios: you know how the downstream jobs are named,.... right? :D10:34
*** kopecmartin is now known as kopecmartin|afk10:35
mariosssbarnea: where do you mean by scenario definitions10:36
mariosssbarnea: i would still prefer to keep in the name10:36
mariosssbarnea: because standalone is a distinct animal10:37
mariosssbarnea: ok just harrassed openstack/telemetry too10:37
marioslets see10:37
mariosforgot barbican going10:39
ssbarneagate queue should go up, but for a good reason at least.10:42
chandankumarquiquell: something like this https://review.openstack.org/#/c/628415/10:48
mariosquiquell: workflowed this https://review.openstack.org/#/c/627965/ fyi (fix scen1)10:50
quiquellmarios: cool, I suppose we can workflow those projects if Alex is ok10:51
mariosquiquell: yeah he is +2 am pushing as much as possible on these today/tomorrow end of sprint whatever we don't finish we carry10:52
quiquellchandankumar: remove playbook variables from the task call10:52
quiquellchandankumar: hosts, gather_facts, etc...10:52
mariosquiquell: but why don't you want to workflow ? what do you mean by if alex is ok to workflow those projects?10:53
mariosoh "ci core"10:53
marios?10:53
mariosk10:53
mariosnp10:53
quiquellchandankumar: also tasks, this is a very bad copy paste dude!10:53
mariosforgot10:53
chandankumarquiquell: ok formatting it10:53
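What quiquell is asking chandankumar for, roughly: the copied file should carry only tasks, because hosts, gather_facts and the other play-level keys belong to the play that includes it. A sketch with names kept generic; the role name just follows the os_tempest discussion above.

    # Before (a full play copy-pasted into the new file):
    #   - hosts: undercloud
    #     gather_facts: true
    #     tasks:
    #       - name: Run tempest
    #         include_role:
    #           name: os_tempest
    #
    # After (a bare task file the existing play can include):
    - name: Run tempest
      include_role:
        name: os_tempest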
mariosquiquell: still would have been fine if you +A imo anyway10:53
mariosquiquell: if anyone asks me i'll just say "what i have no idea what came over him he just decided to +a it )10:54
quiquellmarios: I am a very new core at tripleo, for non-ci repos I don't want to break stuff10:54
ssbarneadoes anyone have a script for removing all local gerrit branches that are merged?10:54
quiquellmarios: ack, just some caution on my part, but it's a CI change so yep, I can workflow it10:55
mariosquiquell: anyway done now next one10:55
quiquellmarios: Can we workflow anything else ?10:57
*** udesale has quit IRC10:57
quiquellmarios: or we depend on non tripleo projects ?10:57
mariosanyone know if there is some way to see this running/reporting without merging it? https://review.rdoproject.org/r/#/c/18079/ which is wired up in https://review.rdoproject.org/r/1809110:58
mariosquiquell: maybe scen2/3 will have fixes but otherwise the topic branches should be good now for the replacement for 1-4, been harassing people by email/irc/fax since this morning10:59
mariosso maybe we just need to merge these ? https://review.rdoproject.org/r/#/c/18091/11:01
quiquellmarios: We have a way to run periodic jobs at check11:01
quiquellmarios: but the report step is going to be executed and fail since the DLRN secret is not there11:02
quiquellmarios: That would be enough ?11:02
mariosquiquell: well i guess it would help us to sanity check it isn't completely broken for some reason there11:02
mariosquiquell: like deploy pass etc11:02
quiquellmarios: This is what we did for the porting of periodic jobs to zuulv311:03
quiquellmarios: https://review.rdoproject.org/r/#/c/16806/11:03
mariosquiquell: cool thanks11:03
quiquellmarios: Important part is the Depends-On11:03
*** ccamacho has quit IRC11:03
quiquellmarios: Let me know if it works11:04
quiquellmarios: You can even try the new reproducer too11:04
mariosquiquell: ack11:15
quiquellmarios: Going to send an email about the f28 standalone job to get it voting11:24
ssbarneamarios: 10 more replace- changes to merge.11:26
mariosquiquell: i am going to restore & use this (well about to see how much pain rebase is otherwise will post new and abandon again ) fyi https://review.openstack.org/#/c/608166/11:27
mariosssbarnea: you mean there are new ones?11:27
mariosssbarnea: 13:26 < ssbarnea> marios: 10 more replace- changes to merge.11:27
ssbarneamarios: nope, i counted the not merged yet ones.11:27
mariosssbarnea: ack11:28
quiquellmarios: ack, do whatever you need with it11:28
mariosssbarnea: they all should be on their way anyway i joined the openstack/chans too after that email11:28
marios(those emails)11:28
*** ccamacho has joined #oooq11:33
*** skramaja has quit IRC11:40
*** holser_ is now known as holser|lunch11:53
*** holser|lunch has quit IRC11:55
pandarlandy|rover: can you +2 https://review.rdoproject.org/api/18122 again ? fixed couple of things, should be more ok now11:57
*** ratailor has quit IRC11:57
mariosquiquell: do i need the containers build dependency @ https://review.rdoproject.org/r/#/c/18079/4/zuul.d/projects.yaml (gives me an error) also do i need to add ALL the jobs here or can i just have the one i want11:59
mariosquiquell: (comparing with https://review.rdoproject.org/r/#/c/16806/4/zuul.d/projects.yaml )11:59
*** d0ugal has quit IRC12:01
quiquellmarios: Only the one you want, the former review has all of them because we were testing all of them there12:01
mariosquiquell: ok thanks, going afk for a bit, will try again when back12:03
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-standalone-upgrade @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224,  (1 more message)12:17
*** ccamacho has quit IRC12:20
*** d0ugal has joined #oooq12:22
ssbarnearfolco: marios : gate failed due to one tempest test failure (basic networking) https://review.openstack.org/#/c/628277/12:24
ssbarneai think the problem is persistent based on http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22within%20the%20required%20time%20(500%20s).%20Current%20status:%20BUILD.%20Current%20task%20state:%20scheduling%5C%2212:27
ssbarneabut new12:27
ssbarneahappened 8 times, but only today12:27
ssbarneaand worse, only with tripleo-ci-centos-7-scenario003-standalone12:27
ssbarneawhich makes me believe this is a bug12:28
*** gkadam has quit IRC12:29
*** jpena is now known as jpena|lunch12:37
chandankumarquiquell: I need some help in this playbook https://review.openstack.org/#/c/628415/12:43
chandankumarquiquell: http://logs.openstack.org/00/627500/26/check/tripleo-ci-centos-7-standalone-os-tempest/09d589b/job-output.txt.gz#_2019-01-08_12_06_34_64376012:44
*** bogdando_ has joined #oooq12:48
*** holser_ has joined #oooq12:49
*** bogdando has quit IRC12:51
*** bogdando_ is now known as bogdando12:51
*** ccamacho has joined #oooq12:53
*** ccamacho has quit IRC12:54
*** ccamacho has joined #oooq12:54
*** holser__ has joined #oooq12:54
*** holser_ has quit IRC12:54
ssbarneamarios: rfolco that means that no scenario 3 change will merge until we fix this.13:06
rfolcossbarnea, could be related to missing services ? marios ?13:07
mariosrfolco: o/13:10
mariosrfolco: just back from afk ssbarnea what happened?13:10
mariosah tempest?13:10
mariosrfolco: was selling all the things this morning and looking at the promotion jobs13:11
mariosrfolco: planning on doing the 2/3 sanity checks this afternoon13:11
mariosrfolco: K?13:11
rfolcomarios, hands tied here with f28 containers13:11
rfolcomarios, sounds good13:12
rfolcossbarnea, ^13:12
mariosrfolco: ssbarnea http://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-scenario003-standalone13:13
mariosrfolco: ack13:13
*** trown|outtypewww is now known as trown13:17
*** rascasoft has joined #oooq13:19
*** udesale has joined #oooq13:20
rfolcopanda, can you tmate ral quick to unblock me ?13:20
rlandy|roverpanda: https://review.rdoproject.org/r/#/c/18122 minor comment - of you want to fix it - if not, I'll w+13:21
*** zul has joined #oooq13:21
rlandy|roverbogdando - hi, wrt https://bugs.launchpad.net/bugs/1810690 - I have a reproducer env - left a note in the bug13:22
openstackLaunchpad bug 1810690 in tripleo "periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-master - which includes idempotency check - is timing out during introspection" [Critical,Triaged]13:22
rlandy|roverlooks like some containers are 'exited'13:23
*** agopi has joined #oooq13:24
pandarlandy|rover: I'd fix it, but I'd hate to lose the other +2, are you going to +w after or do you want to wait again for the other +213:25
panda?13:25
rlandy|roverpanda: np - will w+ - not that big a deal13:25
pandarfolco: sure, what's the command ?13:26
pandarfolco: unblock()13:26
rfolcothank you13:26
rlandy|roverpanda: w+'ed13:26
rfolcothat really helped13:26
pandarfolco: kill -9 pidof block13:26
rfolco:)13:26
mariosssbarnea: can you/did you file a bug for that tempest issue yet?13:26
pandarlandy|rover: thanks13:26
weshay|ruckrlandy|rover,  \0/ https://logs.rdoproject.org/openstack-periodic/git.openstack.org/openstack-infra/tripleo-ci/master/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-master/b9871f8/logs/undercloud/home/zuul/tempest.log.txt.gz#_2019-01-08_10_18_0213:26
ssbarneamarios: not yet, i was hoping ruck/rover to have a look at it first.13:27
rlandy|roverpanda: here's hoping we didn't mess up13:27
weshay|ruckif we need to we can bug that and put in the skip list13:27
weshay|ruckthat is pretty good13:27
chandankumarpanda: sshnaidm is this change correct https://review.openstack.org/#/c/629127/ for installing the role as TQE extras in TQ?13:27
pandarlandy|rover: we didn't mess up *again*13:27
mariosssbarnea: d0ugal just replied on the email and it looks like  it was indeed the tempest there (email re mistral patches http://logs.openstack.org/74/628274/3/check/tripleo-ci-centos-7-scenario003-standalone/9cb0cf0/logs/undercloud/home/zuul/tempest.log.txt.gz13:27
rlandy|roverweshay|ruck: well we are further along13:27
rlandy|roverweshay|ruck: I know what that problem is ...13:27
rlandy|roverI added back overcloud_ssl13:28
rlandy|roverwe can disable it13:28
rlandy|roversee the fs change13:28
weshay|ruckrlandy|rover, fs030 passed too13:28
weshay|ruckbut I'm sure no work has been done13:28
d0ugalmarios: right - so I should recheck?13:29
rlandy|roverweshay|ruck: also - while we are changing the fs - disabled idempotency check for all releases, we could add rocky back - up to you13:29
rlandy|roverweshay|ruck: yeah - don't think that bug moved13:29
weshay|ruckrlandy|rover, it's ok .. we have an open bug.. we're ok13:29
mariosd0ugal: not yet we didn't get to a fix13:29
mariosd0ugal: thanks man13:29
mariosd0ugal: but whatever it is its new if you see on same review earlier it was green13:29
rlandy|roverweshay|ruck: putting in a patch to remove overcloud_ssl13:30
d0ugalah okay, I'll wait a bit :)13:30
rlandy|roverwe never had it before13:30
weshay|rucksshnaidm, you ready13:30
sshnaidmweshay|ruck, yep, on your bluej13:30
quiquellrlandy|rover: o/ do you have the link with the assigned hardware we have ?13:32
rlandy|roverquiquell, https://docs.google.com/spreadsheets/d/17WOeF-wiNZfB8SZrWBhu4SjWkq78XzeEG1wASBQaJTM/edit#gid=013:33
quiquellrlandy|rover: thanks!13:34
*** jpena|lunch is now known as jpena13:34
*** ykarel is now known as ykarel|away13:35
mariosquiquell: do you remember why you had the dependency on container image build in https://review.rdoproject.org/r/#/c/16806/4/zuul.d/projects.yaml13:45
rfolcoweshay|ruck, argument --base: Invalid String(choices=['centos', 'rhel', 'ubuntu', 'oraclelinux', 'debian']) value: fedora",13:46
quiquellmarios: to test exactly how we want it13:46
rfolcoweshay|ruck, will try to hack this but not sure we'll get any far13:46
mariosquiquell: i removed it in https://review.rdoproject.org/r/#/c/18079/6/zuul.d/projects.yaml cos otherwise was complaining. waiting to see if it runs now https://review.rdoproject.org/zuul/status (queued)13:46
quiquellmarios: we need the dependant job to finish13:46
mariosquiquell: so we need it here13:46
quiquellmarios: if you add the dependency you need to also put it in the list of jobs13:46
mariosquiquell: Unable to freeze job graph: Job periodic-tripleo-ci-centos-7-scenario001-standalone depends on periodic-tripleo-centos-7-master-containers-build which was not run.13:47
mariosquiquell: i get this when i added the dependency13:47
*** agopi has quit IRC13:48
mariosquiquell: thanks i see it now13:49
quiquellmarios: the freeze thing is usually related to secrets, it means it cannot decrypt the secrets13:49
mariosquiquell: you have to define it13:49
mariosquiquell: thanks13:49
mariosupdating13:49
quiquellmarios: maybe you just want to test without the dependency to run the job, if a container is there it will run13:50
mariosquiquell: ok i'll wait for the job run then first13:50
quiquellmarios: It's not going to break anything since you cannot push containers from those jobs at check13:50
quiquellmarios: ack13:50
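The "Unable to freeze job graph" error marios pastes at 13:47, and quiquell's fix at 13:46, amount to this: if a job in a project pipeline declares a dependency, the dependency itself must be listed in that same pipeline. A hedged sketch of what the zuul.d/projects.yaml entry under test could look like; job names follow the ones discussed, the surrounding project stanza is trimmed.

    - project:
        check:
          jobs:
            - periodic-tripleo-centos-7-master-containers-build
            - periodic-tripleo-ci-centos-7-scenario001-standalone:
                dependencies:
                  - periodic-tripleo-centos-7-master-containers-build

Dropping the dependencies block, as marios did in patchset 6, is the other way out: the standalone job then runs against whatever containers already exist, which quiquell notes is fine for a check-only exercise.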
quiquellrlandy|rover: do you have a free brain cycle to question on nodepool-setup role ?13:51
rlandy|roverquiquell: ack - what's up?13:52
quiquellrlandy|rover: It has a part that it changes zuul_inventory.yaml13:52
quiquellrlandy|rover: but since I am not reproducing already run jobs I am just running jobs I don't have it13:53
quiquellrlandy|rover: does it make sense to continue execution if it does not exist?13:53
quiquellrlandy|rover: talking about https://review.openstack.org/#/c/617855/13:55
rlandy|roverquiquell: I used the old nodepool-setup role13:55
rlandy|roverand removed some stuff13:55
rlandy|rovernot the new one13:55
rlandy|roverin fact I should abandon all those reviews13:55
quiquellrlandy|rover: I don't have to use https://review.openstack.org/#/c/617855/ to run zuul repro with libvirt ?13:56
rlandy|roverquiquell: no13:56
rlandy|roverquiquell: because all you need it for is  to patch the images13:57
rlandy|roverthat is all13:57
rlandy|roverI did make a change to the role13:57
quiquellrlandy|rover: ack, so just the libvirt roles are enough ?13:57
rlandy|roveryeah13:57
quiquellrlandy|rover: cool cool :-), was just following the doc13:57
rlandy|roverbecause the rest you don't need - zuul provides it13:57
rlandy|roverquiquell: what did I say in the doc??? I should read that back13:58
quiquellrlandy|rover: no worries and ansibilizing it :-)13:58
ykarel|awayquiquell, why will containers not be pushed when running in check? asking about <quiquell> marios: It's not going to break anything since you cannot push containers from those jobs at chec13:58
quiquellrlandy|rover: and this other one is needed ? https://review.openstack.org/#/c/623294/13:58
ykarel|awayquiquell, afaik there is no such check for pushing, can u share where this is checked13:58
rlandy|roverquiquell: yes -13:59
quiquellykarel|away: yep that's what I mean, no problem running them at check13:59
rlandy|roveryou only get one ip13:59
quiquellrlandy|rover: ack thanks !13:59
ykarel|awayquiquell, okk so containers will be pushed, ur message confused me14:00
ykarel|awayit's good to not run consistent-to-tripleo-ci-testing in check14:00
ykarel|awayas that will update the symlink and may have some impact on jobs running in periodic14:01
rlandy|roverquiquell: I think I made some minor modifications to the old nodepool-setup role because the steps were not needed - ping if you hit another issue14:01
quiquellrlandy|rover: but we don't need nodepool-setup for zuul repro with libvirt, isn't that correct ?14:02
*** d0ugal has quit IRC14:02
rlandy|roverquiquell: it's a matter of the images used14:02
rlandy|roverquiquell: for libvirt, you use stack OS images14:02
rlandy|roverso they miss stuff the nodepool images have14:02
rlandy|roverthat is what that role is meant to do14:02
rlandy|roverthe old one at least14:03
ykarel|awayquiquell, is there some doc to use zuul reproducer, i want to try that14:03
rlandy|roveruse that one as is and see how you go14:03
ykarel|awayrlandy|rover, ^^14:03
quiquellykarel|away: Not yet14:03
*** zul has quit IRC14:03
quiquellrlandy|rover: so we need to call the nodepool-setup thing from the libvirt zuul repro to prepare14:04
ykarel|awayquiquell, ack will wait :), the old reproducer is broken for periodic jobs14:04
*** d0ugal has joined #oooq14:04
quiquellrlandy|rover: but not from this review https://review.openstack.org/#/c/617855/14:04
rlandy|roverafter you provision nodes14:04
rlandy|roverno14:04
quiquellrlandy|rover: just the one at tqe master14:05
*** kopecmartin|afk is now known as kopecmartin14:06
quiquellykarel|away: I think it was not working pushing containers from hacked jobs at check...14:08
ykarel|awayquiquell, what's not working14:09
ykarel|awayany link?14:09
quiquellykarel|away: marios was just trying to exercise periodic jobs at a review hacking ci14:10
ykarel|awaylink of patch14:10
*** zul has joined #oooq14:11
ykarel|awayhmm found https://review.rdoproject.org/r/#/c/18093/6/zuul.d/projects.yaml14:11
mariosykarel|away: here https://review.rdoproject.org/r/#/c/18093/ and https://review.rdoproject.org/r/#/c/18079/14:12
ykarel|awayperiodic-tripleo-centos-7-master-containers-build not mentioned14:12
ykarel|awaymarios, u can just run the standalone job14:12
mariosquiquell: on scen 4 i tried with the dependencies ^14:12
mariosykarel|away: on the second link that is what i did14:12
ykarel|awaylooking, but u need a depends on14:12
*** d0ugal has quit IRC14:12
mariosykarel|away: it has depends on14:12
mariosykarel|away: but for https://review.rdoproject.org/r/#/c/18079/6/zuul.d/projects.yaml the job is still in queued in zuul status https://review.rdoproject.org/zuul/status14:14
mariosso not sure14:14
rlandy|rovermarios: I have periodic jobs running with testproject if that helps you14:14
ykarel|awaymarios, yes that will run when nodes will be available14:14
rlandy|roverhttps://review.rdoproject.org/r/#/c/18058/14:14
ykarel|awayperiodic-tripleo-ci-centos-7-scenario001-standalone is queud14:14
mariosykarel|away: thanks14:16
mariosvery much14:16
* ykarel|away out14:16
ykarel|awaynp :)14:16
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-standalone-upgrade @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224,  (1 more message)14:17
*** ykarel|away has quit IRC14:21
ssbarnearfolco: http://zuul.openstack.org/builds?job_name=barbican-dogtag-devstack-functional-fedora-latest job seem broken for 3+ weeks, and is voting, so we cannot merge our change to barbican.14:22
sshnaidmssbarnea, any news from Ben Nemec about moving repo?14:22
ssbarneasshnaidm: yes, he replied on taiga. wait till Thursday when we will make the move.14:25
sshnaidmok14:26
ssbarneasshnaidm: mainly he wanted to give people one week to change branch (one week since he blogged + emailed about it)14:26
sshnaidmI see14:26
weshay|ruckthanks arxcruz14:28
weshay|ruckquiquell, so do I need access to the infra to debug http://dashboard-ci.tripleo.org/d/cEEjGFFmz/cockpit?orgId=1&panelId=154&fullscreen14:32
weshay|ruckquiquell, also wondering if you saw https://review.rdoproject.org/r/#/c/18103/14:33
quiquellweshay|ruck: haven't checked RDO reviews yet14:34
quiquellweshay|ruck: the alert bugs thing is weird, with your review merged it is working fine locally14:34
quiquellweshay|ruck: maybe we have to dig in infra to see what's happening14:34
weshay|ruckya.. I see alert bugs locally, but not promotion14:35
quiquellweshay|ruck: it is very unusual to need to do that14:35
quiquellweshay|ruck: not promotion ? what do you mean ?14:35
weshay|ruckpromotion-blocker14:35
weshay|ruckthe other tag14:35
quiquellweshay|ruck: going to check your review about the layout, give me a sec14:35
quiquellweshay|ruck: maybe you can put in the review a snapshot from dashboard14:36
quiquellweshay|ruck: grafana has a tool to paste snapshots (they are just links)14:36
weshay|rucknot sure what you mean14:37
weshay|ruckyou are not referring to the json export?14:37
quiquellweshay|ruck: in this review https://review.rdoproject.org/r/#/c/18103/314:37
weshay|ruckaye14:37
quiquellweshay|ruck: Since it's related to layout, it's good to share a screenshot in the git comment14:37
quiquellweshay|ruck: to do so you have a "share" option in grafana that generates a link14:38
weshay|ruckoh.. fair point14:38
quiquellweshay|ruck: so people can review without running it14:38
weshay|ruckso I'll describe it14:38
quiquelljust generate the share link and paste it in the review14:38
weshay|ruckI removed the box "upstream check jobs (noop)14:39
weshay|ruckI removed the box ( RDO periodic jobs )14:39
weshay|ruckso the top rows have 3 boxes14:39
rfolco reminder >> CI community meeting (office hours) starts now at https://bluejeans.com/411356779814:39
weshay|ruckand moved alert and promotion-blocker bugs up to the top two rows14:39
weshay|ruckquiquell, I can resetup and screenshot if you like,14:40
rfolcoweshay|ruck, quiquell ssbarnea sshnaidm panda chandankumar arxcruz marios kopecmartin14:40
rfolco reminder >> CI community meeting (office hours) starts now at https://bluejeans.com/411356779814:40
quiquellweshay|ruck: running the review to see it14:41
*** zul has quit IRC14:44
mariosrfolco: joined sorry forgot14:44
marios:)14:44
weshay|ruckhttps://tree.taiga.io/project/tripleo-ci-board/epic/32714:44
quiquellweshay|ruck: added snapshot https://review.rdoproject.org/r/#/c/18103/14:46
quiquelllooks good to me14:46
*** zul has joined #oooq14:47
bogdandorlandy|rover: https://bugs.launchpad.net/tripleo/+bug/1810690/comments/1214:50
openstackLaunchpad bug 1810690 in tripleo "periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-master - which includes idempotency check - is timing out during introspection" [Critical,Triaged]14:50
bogdandowe should add that paunch logs to CI artifacts14:50
bogdandooh wait, not really, it is duplicated in install-undercloud.log14:51
rlandy|roverbogdando: adding logs is simple enough if we need them14:53
bogdandorlandy|rover: yeah, that's just a minor thing14:54
bogdandothe issues are as I listed them14:54
rlandy|roverbogdando: reading - thanks14:55
weshay|ruckhttp://logs.openstack.org/88/615988/6/check/tripleo-ci-split-controlplane-standalone/f29bf53/logs/quickstart_files/ssh.config.ansible.txt.gz14:55
*** ykarel|away has joined #oooq15:04
*** ykarel|away is now known as ykarel15:09
*** ykarel is now known as ykarel|away15:21
*** saneax has quit IRC15:22
ssbarneasshnaidm: can you please tell me if I missed to address any of your comments from Jan 3rd on https://review.openstack.org/#/c/613672/ ?15:31
sshnaidmssbarnea, will look tomorrow15:31
ssbarneasshnaidm: thanks.15:32
ssbarneasshnaidm: btw, i fixed molecule master to work with rawhide, merged today. once is officially included in a release, I will also enable testing with rawhide.15:33
sshnaidmssbarnea, is it released?15:34
ssbarneasshnaidm: merged to master, not released yet.15:34
sshnaidmssbarnea, ok, will try it15:34
*** chem has quit IRC15:37
weshay|ruckrlandy|rover, well.. so I guess pushing those through will be our top priority15:41
sshnaidmssbarnea, btw, tried to provision vms in molecule, works pretty nice.. maybe we can test our libvirt stuff with it15:41
mariosrfolco: you didn't propose something to make 2/3 voting yet right?15:41
mariosrfolco: (just checking but looks like not on master anyway )15:41
rfolcomarios, I did :(15:42
mariosrfolco: oh yeah15:42
mariosrfolco: no worries15:42
mariosthanks15:42
rfolcomarios, you're fixing the mess for all?15:42
mariosrfolco: yes15:42
rfolcoafazekas, thx15:42
rfolcooops15:42
mariosrfolco: easier to do it in one set of reviews15:42
rfolcomarios, ^15:42
rfolcosorry afazekas ignore15:43
rfolcomarios, yup15:43
quiquellmarios: ready to +2+w15:43
ssbarneasshnaidm: we could test a *lot* with it. But I propose to adopt its use in small steps.15:43
weshay|rucksshnaidm, rlandy|rover thoughts on a new layout for cockpit https://snapshot.raintank.io/dashboard/snapshot/d3chF5rI89uSCTBuj0urA3UFh8KKb7Y0?orgId=215:43
sshnaidmssbarnea, a lot, except docker, hehe15:43
weshay|ruckhttps://review.rdoproject.org/r/#/c/18103/15:44
chandankumarssbarnea: I have one query: for ansible we can use molecule for testing stuff, but for other config stuff can we use test-infra for that?15:44
sshnaidmweshay|ruck, mm.. I think we lost two important singles like upstream noop jobs and periodic ones15:45
* rlandy|rover looks15:46
sshnaidmweshay|ruck, I definitely would like to see upstream noop jobs failure there, because it can show you broken jobs15:46
weshay|rucksshnaidm, ya.. for master only?15:46
weshay|ruckon the first or second row15:47
weshay|ruckthen the others below it15:47
sshnaidmweshay|ruck, yeah15:47
rlandy|roverweshay|ruck: I would agree with sshnaidm - we're missing periodic15:47
weshay|ruckso master no-op is just below Upstream failing gate jobs FYI..15:47
weshay|ruckbut you can't tell from the screenshot15:47
rlandy|roverwhich always had the most failures15:47
weshay|ruckperiodic and no-op are two diff things rlandy|rover15:48
sshnaidmweshay|ruck, it's in the table, but better to have "single" too15:48
weshay|ruckrlandy|rover, guys.. that screenshot is just the first 2.5 rows15:48
weshay|rucksshnaidm, rlandy|rover scroll down on https://snapshot.raintank.io/dashboard/snapshot/d3chF5rI89uSCTBuj0urA3UFh8KKb7Y0?orgId=215:49
weshay|ruckmaster no-op and master periodic are right there below upstream gate jobs15:49
rlandy|roverI see15:49
weshay|ruckthat's pretty high up there imho15:49
sshnaidmweshay|ruck, do you talk about tables?15:50
weshay|rucksorry, not 100% sure what you mean sshnaidm15:50
sshnaidmweshay|ruck, in grafana you have tables - with rows, also "singles" - squares with one data type (as zuul time, upstream failures count)15:51
weshay|rucksshnaidm, k.. yes15:51
weshay|ruckunderstood15:51
sshnaidmweshay|ruck, so I see upstream no-op jobs only in the table, but I don't see a single like we have now15:52
weshay|ruckoh.. you would like to raise the visibility of the master no-op jobs via a square15:53
weshay|ruckok.. I think that is a diff patch, but ya I can see that .. not sure how you would capture that exactly in a square15:53
weshay|rucknoting we also have the alert in this channel that everyone ignores :)15:53
weshay|rucklolz15:53
sshnaidmweshay|ruck, yeah, I'm trying to keep the most important info in the top and short15:54
weshay|rucksshnaidm, ya..  I can try and do that and run it by you.. I need more experience here15:54
weshay|ruckfor now I'd like to merge the current change15:54
sshnaidmweshay|ruck, for alerts we'll have something different soon..15:54
weshay|ruckok.. I need the bug list to start working15:55
sshnaidmweshay|ruck, not sure I understand your change.. maybe we'll discuss it with quiquell later too15:55
quiquellsshnaidm: ack15:55
weshay|ruckit's just getting rid of the squares at the top that are meaningless15:55
weshay|rucklike rdo periodic jobs15:56
weshay|ruckand upstream check jobs fails15:56
weshay|ruckand replacing those two squares w/ alert and promotion-blocker squares15:56
sshnaidmweshay|ruck, why are they meaningless??15:56
weshay|rucksshnaidm, is there any action we take on failing upstream check jobs?15:56
sshnaidmweshay|ruck, it's not "check" jobs15:57
weshay|ruckor the total count of failing periodic / promotion jobs15:57
sshnaidmweshay|ruck, it's no-op jobs that should always pass15:57
weshay|ruckah.. no-op15:57
weshay|ruckhrm..15:57
weshay|ruckI see ur point15:57
weshay|ruckok...15:58
sshnaidmweshay|ruck, total count of periodic maybe is not so important, yeah, as we have them fail a lot :D15:58
weshay|ruckagree15:58
weshay|ruckbut needs some adjustment15:58
weshay|rucksshnaidm, I'll put it back15:58
weshay|ruckshould be red though, with one failure15:58
weshay|ruckright?15:58
sshnaidmweshay|ruck, it's red, but color of number itself doesn't work for me, no idea why :(15:58
weshay|rucklolz k15:59
weshay|rucksshnaidm, I'll update the review15:59
weshay|ruckand yet another mtg16:00
*** chem has joined #oooq16:00
weshay|ruck:)16:00
sshnaidmoh yeah16:00
sshnaidmquiquell, moderator, please join :)16:00
quiquellsshnaidm, weshay|ruck sorry about that16:00
*** udesale has quit IRC16:02
rlandy|roverpanda: can we close this out? https://bugs.launchpad.net/tripleo/+bug/181087916:03
openstackLaunchpad bug 1810879 in tripleo "fedora tripleo-ci-testing link creation fails" [Critical,Triaged]16:03
pandarlandy|rover: no16:06
pandarlandy|rover: the module is failing even with my fixes16:06
*** bogdando has quit IRC16:06
pandarlandy|rover: jpena is helping me, but it's going to be tough16:06
pandarlandy|rover: the error doesn't make any sense16:07
pandarlandy|rover: and I cannot reproduce it locally16:07
pandarlandy|rover: so I can try to run the periodic as checks, or we are going to see results of any fix only every 4 hours, and in any case, we need someone on the rdo side to tell us what error zuul is outputting, because we are blind16:10
rlandy|roverpanda: what is the problem with running periodic as check to test?16:11
pandarlandy|rover: it would help, but we need eyes on the zuul logs as well16:14
pandarlandy|rover: because the only thing we see is "MODULE FAILURE"16:14
rlandy|roverin meeting16:15
rlandy|roverwill look in a bit16:15
pandarlandy|rover: and for the record, currently the error is "'An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ValueError: No closing quotation'"16:15
*** ykarel|away has quit IRC16:17
*** ykarel|away has joined #oooq16:17
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-standalone-upgrade @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224,  (1 more message)16:17
mariosquiquell: thanks it voted \o/ https://review.rdoproject.org/r/#/c/18093/16:29
marios&& https://review.rdoproject.org/r/#/c/18079/16:30
marioshmm sucess 58 mins tho i should probably look into it a bit...16:30
*** ccamacho has quit IRC16:38
*** kopecmartin is now known as kopecmartin|off16:40
ssbarneaout of curiosity, why do we not cache the already resized overcloud image instead of the original? we could save some precious time.16:53
quiquellweshay|ruck: Have to drop, do you create another meeting for the repro ?17:07
mariosrfolco: scen2 fix at https://review.openstack.org/629264 (updated taiga with the sanity check missing some services & extra horizon notes at https://tree.taiga.io/project/tripleo-ci-board/task/54317:11
mariosrfolco: i'll check 3 tomorrow17:13
* marios out time17:13
rfolcomarios, thanks man17:14
mariosweshay|ruck: incase you missed it & from my status : * mwhahaha: Y U NO GATE standalone jobs?!?! (+ chat in community call) - first make 2/3 non voting post https://review.openstack.org/#/c/629246/ (not finished checking & 3 broken for https://bugs.launchpad.net/tripleo/+bug/1810970) & post fixup for 1/4 @ https://review.openstack.org/#/q/topic:add-standalone-gates (https://review.openstack.org/620304  /629252 /628266 /629251)17:14
openstackLaunchpad bug 1810970 in tripleo "scenario003: fails with TestNetworkBasicOps:test_network_basic_ops on tempest" [High,Triaged]17:14
mariosrfolco: have a good 117:15
rfolcou have good 1 217:15
*** quiquell is now known as quiquell|off17:19
quiquell|offrlandy|rover: Putting here what we need from nodepool-setup for the repro https://review.openstack.org/#/c/629247/17:21
rlandy|roverk- thanks17:24
*** fultonj has joined #oooq17:33
*** holser__ is now known as holser|eod17:35
rlandy|roverrfolco: how do you move a taiga task from one user story to another?17:39
rfolcorlandy|rover, let me refresh my memory...17:40
pandain the sprint board, drag and drop17:40
fultonji was able to make a change to have an artifact collected from my undercloud, now I want to make a similar change to have it collected from my overcloud.17:41
fultonjhttps://review.openstack.org/#/c/615988/6/toci-quickstart/config/collect-logs.yml@25517:41
rlandy|roverthe other user story is not on the same board17:42
fultonj^ was the change that made my file get picked up on my undercloud, should the same change apply to subnode2 in a multinode job?17:42
rfolcorlandy|rover, as long they are in the same board/sprint17:42
rfolcorlandy|rover, drag and drop17:42
rfolcorlandy|rover, for stories in different sprints, you need to go to backlog, move the story to the target sprint where the new task should be, and then drag from story to story inside the sprint board17:43
rfolcopanda, is this an indication that I need undercloud installed and not just the repos configured ?17:46
rfolcoINFO:kolla.common.utils.aodh-base:chmod: cannot access '/var/www/cgi-bin/aodh': No such file or directory17:46
rfolcoand many containers fail to build due to missing files...17:48
rfolcochmod: cannot operate on dangling symlink '/openstack/healthcheck'17:48
rfolcolike this ^17:48
*** trown is now known as trown|lunch17:48
pandarfolco: you're still building with rdo-kolla role ?17:51
rfolcopanda, yes, should I stop ?17:51
rfolcopanda, I've got dozens of containers built in f28 using that modified script + hacking kolla to accept fedora as base17:53
rfolcopanda, weshay|ruck fyi https://tree.taiga.io/project/tripleo-ci-board/task/567?kanban-status=144727517:54
rfolcoI just want to have an idea on why the rest fails, if its because I need undercloud or not...17:55
pandaI don't think /var/www/cgi-bin/aodh would be in the undercloud17:56
pandaand I don't know what /openstack/healthcheck is17:56
rfolcoso apparently having the undercloud installed would not make this any better17:57
rlandy|roverrfolco: let's try a diff method - can I make a task into a user story?17:57
rfolcorlandy|rover, if you are talking about the new tripleo-buildcontainers role, I've started it too to compare17:58
fultonjpanda: in the past you helped me make this change to have my job pick up this file from the undercloud: https://review.openstack.org/#/c/615988/6/toci-quickstart/config/collect-logs.yml17:58
*** ykarel|away has quit IRC17:58
rlandy|roverrfolco: no I need to move https://tree.taiga.io/project/tripleo-ci-board/task/46517:59
rlandy|roverto17:59
fultonji am also using subnode-2 in this job and making the same file but it's not getting picked up17:59
rlandy|roverhttps://tree.taiga.io/project/tripleo-ci-board/epic/517:59
fultonjpanda: would you expect the same change to result in it getting picked up?17:59
*** derekh has quit IRC18:00
rfolcorlandy|rover, ah pff crossed chat :)18:00
pandarlandy|rover: you want to move a task into an epic ?18:00
rfolcotask into a story I guess?18:00
pandafultonj: I think we use a different config file for collect logs in upstream18:01
rlandy|roverwhatever - taiga and I do not get on18:01
rfolcopanda, story-to-epic is a possible promotion18:01
rfolcobut not task-to-story18:01
rfolcocorrect ?18:01
pandarfolco: nope18:01
pandafultonj: nevermind, that's the file18:02
pandafultonj: how big is this tarball ?18:02
rfolcorlandy|rover, I know we can promote, just trying to remember how18:03
pandarfolco: promote what ?18:03
pandarfolco: to what ?18:03
fultonjpanda: 11k18:03
rfolcopromote task-to-story18:03
fultonjpanda: i see it picked up on undercloud18:03
fultonjhttp://logs.openstack.org/88/615988/6/check/tripleo-ci-split-controlplane-standalone/f29bf53/logs/undercloud/home/zuul/18:03
pandafultonj: aaaaaaaahhhh, disk space murderer!!!!18:03
fultonjmy playbook then moves it to the subnode-218:04
fultonjbut i don't see it on subnode-218:04
fultonjhttp://logs.openstack.org/88/615988/6/check/tripleo-ci-split-controlplane-standalone/f29bf53/logs/subnode-2/home/zuul/18:05
pandafultonj: I see the collect list is the same in both nodes18:08
pandafultonj: are you certain you're copying it to /home/zuul ?18:08
rfolcopanda, rlandy|rover: https://github.com/taigaio/taiga-front/issues/144118:09
pandafultonj: that list is then passed to a rsync command, so if it's there, it should be gathered18:10
fultonjpanda: right18:11
pandafultonj: is there a link to this copy ?18:11
fultonjhttps://review.openstack.org/#/c/617368/11/playbooks/split-controlplane-standalone.yml18:12
fultonjpanda: i have evidence that this playbook succeeded at creating export_control_plane.tar.gz as i see it on undercloud /home/zuul/18:13
weshay|ruckrfolco, can I see the kolla config?18:13
fultonjbut i don't have evidence that line 53 worked18:13
fultonjhttps://review.openstack.org/#/c/617368/11/playbooks/split-controlplane-standalone.yml@5318:13
fultonjas i don't see it on subnode-218:13
fultonjhowever, it's good you confirmed that it should be picked up18:14
pandafultonj: it didn't18:14
pandafultonj: http://logs.openstack.org/88/615988/6/check/tripleo-ci-split-controlplane-standalone/f29bf53/job-output.txt.gz#_2019-01-04_00_59_23_47440218:14
weshay|ruckrfolco, clearly you don't need an undercloud18:14
pandafultonj: that part was skipped18:14
*** rf0lc0 has joined #oooq18:14
*** rfolco has quit IRC18:15
weshay|ruckrfolco, can you try a manual build as well with the openstack container build command?18:15
pandafultonj: have you passed the proper tags ? I see you require tags: extract18:15
pandarf0lc0: weshay|ruck | rfolco, clearly you don't need an undercloud18:16
fultonjthank you panda!18:16
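For the record, the mechanics panda points to at 18:15: a task guarded by tags only runs when ansible-playbook is invoked with those tags (or the tag is removed). A toy version of the copy step follows; the transfer method, the subnode-2 ssh alias from the generated ssh config, and the paths are only illustrative of what fultonj's playbook does.

    - name: Ship the control plane export to subnode-2
      shell: scp /home/zuul/export_control_plane.tar.gz subnode-2:/home/zuul/  # assumes the multinode ssh config provides a subnode-2 alias
      tags:
        - extract   # skipped unless run with: ansible-playbook playbooks/split-controlplane-standalone.yml --tags extract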
*** rf0lc0 is now known as rfolco18:16
pandarf0lc0: weshay|ruck | rfolco, can you try a manual build as well with the openstack container build command?18:16
pandafultonj: nothing. Need to drop. See you tomorrow.18:16
weshay|ruckpanda, ?18:17
*** panda is now known as panda|off18:17
rfolcoyes yes that's what I was going to try before the power outage here due to a rainstorm18:17
panda|offweshay|ruck: I was forwarding to rfolco messages that he probably missed18:17
rfolcoweshay|ruck, panda|off will make it a new task18:17
weshay|ruckk18:17
rfolcopanda|off, I got it thanks :)18:17
panda|offrlandy|rover: updated https://bugs.launchpad.net/tripleo/+bug/181087918:17
openstackLaunchpad bug 1810879 in tripleo "fedora tripleo-ci-testing link creation fails" [Critical,Triaged]18:17
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-standalone-upgrade @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224,  (1 more message)18:17
panda|offrlandy|rover: it should work I hope18:18
panda|offrlandy|rover:  https://review.rdoproject.org/r/18128 should have fixed that error, as crazy as it sounds18:20
panda|offI hope nothing else is broken in that patch ...18:20
panda|offoh wait18:20
weshay|ruckpanda|off, it failed18:20
weshay|ruck2019-01-08 14:47:26.345615 | TASK [Promote consistent to tripleo-ci-testing for fedora repo if an equivalent exists]18:20
weshay|ruck2019-01-08 14:47:26.866107 | rdo-centos-7 | MODULE FAILURE18:20
weshay|ruckhttps://logs.rdoproject.org/openstack-periodic/git.openstack.org/openstack-infra/tripleo-ci/master/periodic-tripleo-centos-7-master-promote-consistent-to-tripleo-ci-testing/0535278/job-output.txt.gz#_2019-01-08_14_47_26_34561518:20
weshay|ruckor will that be fixed next run?18:20
panda|offweshay|ruck: yes, this run was before the fix18:20
weshay|ruckpanda|off, k.. thanks :)18:21
panda|offbut if I added any other apostrophes on any other comments in other tasks, they could fail too18:21
panda|offno, there are none18:22
* panda|off crosses fingers18:22
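The "No closing quotation" error panda quoted at 16:15 is what Python's shlex raises when the command module splits a free-form argument containing an unbalanced apostrophe, and zuul only surfaces it as MODULE FAILURE, as in the log weshay pasted at 20:20. A toy reproduction, not the real promotion task:

    - name: Promote consistent to tripleo-ci-testing (toy example)
      command: echo promoting what's consistent   # the unmatched apostrophe makes shlex fail before the task runs

    - name: Same task with the apostrophe quoted
      command: echo promoting "what's" consistent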
weshay|ruckrfolco, nice.. thank you! https://tree.taiga.io/project/tripleo-ci-board/task/57318:25
rfolcoworking on it, no results yet18:25
rfolcoupdated the results in the other one if you haven't seen yet... https://tree.taiga.io/project/tripleo-ci-board/task/567?kanban-status=144727518:26
rfolcoweshay|ruck, ^18:26
weshay|ruckrfolco, aye.. I saw it thank you!18:26
weshay|ruckrfolco, it's too bad that keystone doesn't build18:27
rlandy|roverpanda|off: thank you18:28
rfolcoweshay|ruck, who cares ?18:28
* rfolco hides 18:29
rlandy|roverweshay|ruck: did I understand correctly that https://tree.taiga.io/project/tripleo-ci-board/task/465 should move to https://tree.taiga.io/project/tripleo-ci-board/epic/518:29
*** bogdando has joined #oooq18:29
rlandy|rovertask -> user story18:29
*** jpena is now known as jpena|off18:30
*** dtantsur is now known as dtantsur|afk18:45
rlandy|roverdone now18:45
weshay|ruckrfolco, update the new user stories to "doing" in f28 epic please18:49
weshay|ruckrlandy|rover, /me looking18:49
rfolcoweshay|ruck, ok18:50
*** holser|eod has quit IRC18:54
*** trown|lunch is now known as trown18:55
rlandy|roverweshay|ruck: at which step is your setup failing with the new reproducer?19:05
weshay|ruckrlandy|rover, the zuul and zuul scheduler containers fail to start due to a "key" error19:06
weshay|ruckrlandy|rover, have you seen the CI on the repro project?19:10
rlandy|roverweshay|ruck: not really19:10
rlandy|roverweshay|ruck: I did notice the start up changed19:10
rlandy|roverin the doc instructions19:11
rlandy|roverwe used to run19:11
rlandy|roverpipenv run ansible-playbook playbooks/start-ci-reproducer.yaml -e ansible_python_interpreter="/usr/bin/python3" -e upstream_gerrit_user=<>19:11
rlandy|roverweshay|ruck: ^^ I assume that is where your error happens19:12
weshay|ruckrlandy|rover, https://logs.rdoproject.org/46/18046/39/gate/tripleo-ci-reproducer-fedora-28/7a65e89/19:13
weshay|ruckrlandy|rover, https://github.com/rdo-infra/ansible-role-tripleo-ci-reproducer/tree/master/playbooks/tripleo-ci-reproducer19:13
weshay|ruckrlandy|rover, imho the goal would be to ensure the user setup is as close to the CI as possible19:14
weshay|ruckso that's what I was doing, although w/ slightly diff variables passed.. e.g. for the directory where the project is initially checked out etc19:14
weshay|ruckthat way we ensure the user experience19:14
weshay|ruckrlandy|rover, wdyt?19:15
rlandy|roverweshay|ruck: in theory, I totally agree19:16
weshay|ruckrlandy|rover, he's working on libvirt I guess too https://review.rdoproject.org/r/#/c/18121/19:16
rlandy|roverlibvirt won't help you if you can't get the first steps to run19:17
*** bogdando has quit IRC19:23
rlandy|roveryay - rhos-14 passing again :)19:32
rlandy|roverarxcruz: still around?19:43
rlandy|roverstill working on the rhos-13 failure19:43
rlandy|roverhttps://github.com/openstack/tripleo-quickstart-extras/commit/6e3c506408887c88e0df38a1fa0614b6279816d619:43
rlandy|rover^^ have impacted?19:43
rlandy|rover2018-12-31 04:12:06 | 2018-12-31 04:12:06.702 27430 INFO config_tempest.main [-] Adding options from deployer-input file '/home/stack/tempest-deployer-input.conf'19:46
rlandy|rover^^ no longer runs19:46
weshay|ruckrlandy|rover, nice19:47
*** ccamacho has joined #oooq19:54
*** ccamacho has quit IRC19:58
*** jfrancoa has quit IRC20:03
rlandy|roverarxcruz: /home/stack/tempest/tempest-deployer-input.conf20:05
rlandy|rovervs20:05
rlandy|roverarxcruz: --deployer-input /home/stack/tempest-deployer-input.conf20:05
rlandy|rover^^ intended change?20:06
rlandy|roverthere is no such file ... /home/stack/tempest/tempest-deployer-input.conf20:06
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-standalone-upgrade @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224,  (1 more message)20:18
weshay|ruckrlandy|rover, fyi.. rdo-cloud went down20:24
weshay|ruckrlandy|rover, when it comes back, we'll probably have to clean up and hit up kieren20:24
rlandy|roverweshay|ruck: ack - watching rhos-ops20:25
rlandy|roverweshay|ruck: I should not have said things were fine at yesterday's meeting20:25
weshay|rucklolz20:25
weshay|ruckapparently they are done20:25
* weshay|ruck checks stacks20:25
weshay|ruckrlandy|rover, hrm.. looks clean20:26
rlandy|roverweshay|ruck: yep - doesn't look bad20:26
rlandy|roverweshay|ruck: I'll clean up in a few hours if the last stacks don't go away20:27
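For reference, the clean-up being discussed is just removing leftover heat stacks in the CI tenant; a minimal sketch with the plain OpenStack CLI, assuming the tenant credentials are already sourced:

    # Show the tenant's stacks; long-dead CREATE_COMPLETE stacks and anything
    # stuck in *_FAILED after the outage are the usual clean-up candidates.
    openstack stack list

    # Remove a leftover stack (repeat per stack, or loop over the list above).
    openstack stack delete --yes --wait <stack-name-or-id>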
* rlandy|rover got lost in tempest conf20:27
weshay|ruckrlandy|rover, if you are interested in seeing how to setup the ruck/rover cockpit locally20:28
weshay|ruckI'm quite good at it now20:28
rlandy|roverweshay|ruck: yes - would be good to see20:28
weshay|ruckrlandy|rover, ok.. about to do it again...20:29
weshay|ruckblue?20:29
rlandy|roverok20:29
weshay|ruckrfolco, ^20:30
rfolcoweshay|ruck, bj?20:30
weshay|ruckrfolco, if you want to see how to setup cockpit20:30
weshay|ruckrfolco, in my blue20:32
weshay|ruck<hubbot1> FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-standalone-upgrade @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224,  (1 more message)20:39
weshay|ruckrlandy|rover, rfolco ^20:39
weshay|ruckhttps://review.rdoproject.org/r/#/c/18102/20:44
*** tosky has quit IRC20:49
*** jbadiapa has quit IRC21:10
rlandy|roverchandankumar: you around?21:14
rlandy|roverweshay|ruck: did you clean up?21:15
weshay|ruckrlandy|rover, no21:16
weshay|rucklast I looked though it was all create_complete21:16
rlandy|roverok - then it cleaned up all on its own :)21:16
rlandy|roverok - then things are ok21:16
weshay|rucksure enough21:16
weshay|ruckrlandy|rover, do you know what the fix is for standalone scenario03?21:27
weshay|ruckfak21:30
weshay|ruckhttps://bugs.launchpad.net/tripleo/+bug/181089821:30
openstackLaunchpad bug 1810898 in tripleo "tempest.scenario.test_minimum_basic.TestMinimumBasicScenario failure in CI" [Critical,Triaged]21:30
weshay|ruckrfolco, do you know the fix for scenario03?21:31
weshay|ruckssbarnea, ^21:31
rfolcoweshay|ruck, I suspect it's the missing containers/services but not sure21:32
weshay|ruckmarios, mentioned the issue this morning21:32
weshay|ruckrfolco, if it was missing services, why is it just failing now?21:32
weshay|ruckrfolco, can you please put in a patch to make it non-voting21:33
rfolcomarios did21:33
rfolcoweshay|ruck, https://review.openstack.org/#/c/629246/21:36
rfolcoweshay|ruck, need to go, resume tomorrow21:36
weshay|ruckit's about to merge21:37
weshay|ruckgood21:37
rlandy|roverweshay|ruck: is that the fix or just the avoidance?21:41
weshay|ruckavoidance21:42
rlandy|roverright - ok - should I work on this, or do marios and team have it?21:43
rlandy|roverweshay|ruck: is this still a bug? - passing now in gates21:46
weshay|ruckrlandy|rover, what is passing in the gates?21:48
weshay|ruckscen003?21:48
rlandy|roverhttp://logs.openstack.org/13/619713/5/gate/tripleo-ci-centos-7-standalone/2db9fb6/logs/undercloud/home/zuul/tempest.log.txt.gz21:49
rlandy|roverhttps://bugs.launchpad.net/tripleo/+bug/181089821:49
openstackLaunchpad bug 1810898 in tripleo "tempest.scenario.test_minimum_basic.TestMinimumBasicScenario failure in CI" [Critical,Triaged]21:49
weshay|ruckrlandy|rover, ya.. I only see it now on scen00321:50
rlandy|roverweshay|ruck: so either this becomes a scen003 issue or we close it21:50
rlandy|rovernot assigned to anyone currently21:51
weshay|ruckhttp://cistatus.tripleo.org/21:51
weshay|ruckrlandy|rover, this started to fail at 1/08/ 12:3421:51
weshay|ruck2421:51
rlandy|roverI must be looking at the wrong link21:53
rlandy|roverlooks green to me21:53
rlandy|rovertripleo-ci-centos-7-standalone21:53
rlandy|roverin both check and gate21:53
weshay|ruck2019-01-08 14:04:19 |         "stderr: Error: unable to find resource 'galera-bundle'",21:53
weshay|ruckrlandy|rover, ugh21:54
weshay|ruckstandalone-scenario00321:54
weshay|ruckrlandy|rover, yes.. ur looking at the wrong link.. the original bug was opened on standalone21:54
weshay|ruckbut that does look ok now21:54
rlandy|roverweshay|ruck: k- that is the one I am asking about21:55
rlandy|roverbecause a scen03 bug is diff, no?21:55
rlandy|roveror we need to edit this21:55
weshay|ruckrlandy|rover, multiple diff failures here21:55
rlandy|roveragreed scen03 is failing21:56
weshay|ruckugh.. it's messed up, about to go non-voting though21:57
weshay|ruckrlandy|rover, let's leave it for rfolco and marios https://review.openstack.org/#/c/629246/1/zuul.d/standalone-jobs.yaml21:58
rlandy|roverweshay|ruck: ok - sorry to be a pain about this, but we can close https://bugs.launchpad.net/tripleo/+bug/1810898?21:58
openstackLaunchpad bug 1810898 in tripleo "tempest.scenario.test_minimum_basic.TestMinimumBasicScenario failure in CI" [Critical,Triaged]21:58
weshay|ruckrlandy|rover, we do need to push these through https://review.openstack.org/#/q/topic:add-standalone-gates+(status:open+OR+status:merged)21:58
weshay|ruckrlandy|rover, no.. I see the issue in scen00321:59
weshay|ruckit's still a bug21:59
rlandy|roverbut the scen03 fail has a diff trace22:00
weshay|ruckrfolco, patch order or depends-on matters22:00
rlandy|roverI don't mind opening a diff bug22:00
weshay|ruckrlandy|rover, k22:00
weshay|ruckrlandy|rover, do what you think is right22:00
weshay|ruckI'm rechecking these patches22:00
* rlandy|rover looks at patches to push22:00
rlandy|roverhttps://review.openstack.org/#/c/628266/ was just rechecked22:02
rlandy|roverthey are all w+22:02
rlandy|roverso ok22:02
*** trown is now known as trown|outtypewww22:07
rlandy|roverweshay|ruck: https://bugs.launchpad.net/tripleo/+bug/1811004 - more targeted bug for scen00322:15
openstackLaunchpad bug 1811004 in tripleo "tripleo-ci-centos-7-scenario003-standalone fails tempest.scenario.test_minimum_basic.TestMinimumBasicScenario" [Undecided,New]22:16
weshay|ruckrlandy|rover, k.. I saw another issue w/ the deployment too22:17
hubbot1FAILING CHECK JOBS on master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-standalone-upgrade @ https://review.openstack.org/604298, master: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades, tripleo-ci-centos-7-standalone-upgrade @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @  (1 more message)22:18
rlandy|roverweshay|ruck: pls link or file an lp and I will investigate22:18
weshay|ruckk k22:20
weshay|ruckrlandy|rover, well. the last 9 failures are basic_ops :022:22
* weshay|ruck just must have randomly found one22:22
weshay|rucklast 10 failures22:23
rlandy|roverok22:23
weshay|ruckrlandy|rover, /me marked your bug.. triaged, critical and ml222:23
rlandy|roverty22:23
weshay|ruckrlandy|rover, and you closed mandre's bug ya?22:24
rlandy|roverweshay|ruck: yes - marked it fix released with copied traces of the last successes22:24
weshay|ruckthanks22:24
weshay|ruckperfecto22:25
weshay|ruckrlandy|rover, https://review.openstack.org/#/c/628070/22:46
rlandy|roverweshay|ruck: there are two comments in there - which I agree with22:47
weshay|ruckk..22:47
weshay|ruckfull links22:47
weshay|ruckhope it doesn't fail lint22:49
rlandy|roverif it does - put back the old one and I'll +222:50
weshay|ruckrlandy|rover, have a good night.. congrats on a good ruck/rover sprint :)22:53
rlandy|roverweshay|ruck: you too - good night22:53
*** d0ugal has joined #oooq22:56
*** tosky has joined #oooq23:01
*** rascasoft has quit IRC23:16
*** rlandy|rover is now known as rlandy|rover|bbl23:16
*** rascasoft has joined #oooq23:28
