Monday, 2020-03-16

*** jmasud has quit IRC00:04
*** jmasud has joined #oooq00:06
*** jmasud has quit IRC00:10
*** tosky has quit IRC00:14
*** holser has joined #oooq00:14
*** jmasud has joined #oooq00:18
*** holser has quit IRC00:40
*** jmasud has quit IRC00:43
*** jmasud has joined #oooq00:45
*** jmasud has quit IRC01:03
*** jmasud has joined #oooq01:04
*** jmasud has quit IRC01:08
*** jmasud has joined #oooq01:10
*** jmasud has quit IRC01:25
*** sshnaidm is now known as sshnaidm|afk01:31
*** jmasud has joined #oooq01:43
*** jmasud has quit IRC01:50
*** jmasud has joined #oooq01:53
*** jmasud has quit IRC01:57
*** jmasud has joined #oooq02:08
*** jmasud has quit IRC02:10
*** apetrich has quit IRC03:09
*** ysandeep|afk is now known as ysandeep03:14
*** ykarel|away is now known as ykarel03:37
*** jmasud has joined #oooq03:44
*** udesale has joined #oooq04:25
*** ratailor has joined #oooq04:40
*** jmasud has quit IRC04:54
*** jmasud has joined #oooq04:55
*** skramaja has joined #oooq05:19
*** pojadhav|afk is now known as pojadhav05:26
*** sanjayu_ has joined #oooq05:42
*** holser has joined #oooq06:09
*** irclogbot_0 has quit IRC06:29
*** holser__ has joined #oooq06:31
*** holser has quit IRC06:33
*** dpawlik has joined #oooq06:58
*** jfrancoa has joined #oooq07:01
*** marios has joined #oooq07:05
*** ratailor has quit IRC07:05
*** ratailor has joined #oooq07:08
chandankumarmarios, morning, fs035, fs020, fs01 failed with a pcs issue, rechecked via https://review.rdoproject.org/r/25909, and the baremetal fs01 cirros failure was rechecked via https://review.rdoproject.org/r/#/c/25865/; the rest is under control07:15
marioschandankumar: o/ looking07:23
marioschandankumar: ack so we don't have a confirmed issue for fs035/1/20 yet, i.e. waiting on the test run07:24
marioschandankumar: but if all 3 failed in the same way then it's likely a legit issue07:24
*** irclogbot_0 has joined #oooq07:29
*** bogdando has joined #oooq07:39
*** bogdando_ has joined #oooq07:42
*** bogdando has quit IRC07:44
*** bogdando_ is now known as bogdando07:44
*** holser__ has quit IRC07:54
*** holser has joined #oooq07:54
*** matbu has joined #oooq08:01
*** dpawlik has quit IRC08:07
*** dpawlik has joined #oooq08:07
*** tesseract has joined #oooq08:12
*** dpawlik has quit IRC08:18
*** amoralej|off is now known as amoralej08:20
mariospanda: rebased the https://review.rdoproject.org/r/#/c/24774/ but i saw the molecule jobs were still red on those; is that a thing or maybe cleared now ?08:23
mariosrebasing mine ontop again08:24
mariosare we gonna merge this today then08:24
mariospanda: ?08:24
*** dpawlik has joined #oooq08:28
*** apetrich has joined #oooq08:31
chandankumarmarios, I am adding the layout for scenario 12 in periodic pipeline08:33
*** rascasoft has joined #oooq08:35
marioschandankumar: ack the comment was more about if the centos7 version was currently being used in periodics but it isn't thanks08:38
*** ykarel is now known as ykarel|lunch08:39
*** jpena|off is now known as jpena08:50
*** jmasud has quit IRC08:54
*** jmasud has joined #oooq08:56
*** ccamacho has joined #oooq08:57
chandankumarmarios, s12 entry https://review.rdoproject.org/r/#/q/topic:centos-8-fs12+(status:open+OR+status:merged)09:00
*** jaosorior has joined #oooq09:01
*** tosky has joined #oooq09:07
dpawliksshnaidm|afk, marios could you confirm that all of your jobs are using at least fedora 30?09:08
mariosdpawlik: yes and if they aren't (e.g. f28 we *used* to use) I don't think they even matter any more... don't think we have any f28 jobs left... checking09:09
dpawlikif this one will be merged, https://review.opendev.org/#/c/713177 jobs will fail if they are running on f29/f2809:09
mariosdpawlik: chandankumar: https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-fedora-28-ovb-1ctlr_1comp-featureset002-master-upload  (2019-03-11)09:10
mariosdpawlik: perhaps also send an email to pcci/rdo about that?09:11
mariosweshay|ruck: fyi 11:09 < dpawlik> if this one will be merged, https://review.opendev.org/#/c/713177 jobs will fail if they are running on f29/f2809:12
mariosthanks dpawlik don't worry about the mail i thought that was your review ^09:12
mariosanyone have link to the current ruck|rover pad/hackmd09:13
chandankumarmarios, https://hackmd.io/7MBqFHurTA2e5H8kYRwgag09:13
marioschandankumar: thanks, they didn't update grafana link09:13
dpawlikmarios, I will send email after wesh confirm that those jobs will be updated soon, ok?09:14
mariosdpawlik: ack added to the  https://hackmd.io/7MBqFHurTA2e5H8kYRwgag for now09:15
dpawlikmarios, but wait. Those jobs are failing09:15
panda>09:16
marios<09:16
pandamarios: always the pessimist.09:17
marios;) avoids disappointment09:17
*** sanjayu_ has quit IRC09:22
pandamarios: we got 6 promotions in the weekend, I think we are very close to merge09:30
mariospanda: nice09:31
*** arxcruz has joined #oooq09:32
arxcruzjesus, my bouncer was crazy :/09:32
chandankumararxcruz, fyi https://bugs.launchpad.net/tripleo/+bug/186759909:32
openstackLaunchpad bug 1867599 in tripleo "overcloud deploy failing on fs030 and fs016 while pulling mariadb container from undercloud registry" [Critical,Confirmed]09:32
arxcruzI knew something was wrong when I saw nobody pinging me this morning :D09:33
*** arxcruz is now known as arxcruz|rover09:34
zbrwho can help me with reviews on some upstream zuul-jobs changes? (even if you are not core)09:35
zbrif i get reviews before i ping infra, they merge much faster.09:35
arxcruz|roverakahat: hey, is it working the mistral tests? I'm checking https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-undercloud-containers but can't find a mistral run can you double check so i can close the bug ?09:35
arxcruz|roverzbr: sure09:36
zbrarxcruz|rover: https://review.opendev.org/#/c/708642/09:37
chandankumararxcruz|rover, one more https://bugs.launchpad.net/tripleo/+bug/186760209:59
openstackLaunchpad bug 1867602 in tripleo "overcloud deploy failed with Systemd start for pcsd failed " [Critical,Confirmed]09:59
*** ykarel|lunch is now known as ykarel09:59
arxcruz|roverchandankumar: ack09:59
*** saneax has joined #oooq10:01
marioschandankumar: so did they all fail like .URLError: <urlopen error [Errno 101] Network is unreachable>10:03
marioschandankumar: was looking at fs1 now10:03
marios        * https://logserver.rdoproject.org/openstack-component-baremetal/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001-baremetal-master/61ab8a3/job-output.txt10:04
chandankumarmarios, it was a network issue,10:04
chandankumarmarios, reran it here https://review.rdoproject.org/r/#/c/25865/ sorry, cirros website issue10:05
chandankumarand it passed10:05
marioschandankumar: ack cool10:06
zbrmarios: did you see my email re https://review.rdoproject.org/r/#/c/25904/ ?10:07
dpawlikmarios, but the last execution of the periodic jobs was a very long time ago10:08
dpawlikprobably sshnaidm|afk has changed that, or?10:08
*** sshnaidm|afk is now known as sshnaidm10:08
sshnaidmdpawlik, sorry, what are we talking about?10:09
dpawliksshnaidm|afk, http://paste.openstack.org/show/790728/10:09
dpawliksshnaidm, tl;dr are all of the tripleo jobs running on at least f30?10:10
sshnaidmdpawlik, afaik yes10:10
dpawliksshnaidm, because if it will be merged https://review.opendev.org/#/c/713177 jobs with f29 will fail10:10
sshnaidmdpawlik, there is some trash here: https://github.com/rdo-infra/rdo-jobs/blob/master/zuul.d/deprecated-jobs.yaml#L10-L2910:11
sshnaidmdpawlik, ok, if it will fail we'll know that we still have such jobs :)10:11
dpawliksshnaidm, ah! Thats why marios found info about that job10:11
dpawliklol10:11
*** ratailor has quit IRC10:11
dpawlikso lets vote https://review.opendev.org/#/c/71317710:12
*** ratailor has joined #oooq10:13
dpawlikand sshnaidm pls vote here https://review.opendev.org/#/c/713169/. Thanks!10:15
marios11:10 < marios> dpawlik: chandankumar:10:15
marioshttps://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-fedora-28-ovb-1ctlr_1comp-featureset002-master-upload  (2019-03-11)10:15
mariosdpawlik: ack yeah i said they ran long time ago ^^10:15
chandankumarmarios, https://review.rdoproject.org/r/#/c/25915/10:20
chandankumarplease +w it10:20
mariosalso change the upstream version? chandankumar10:21
marioschandankumar: (ack done)10:21
chandankumarmarios, changing it10:23
chandankumarmarios, https://review.opendev.org/71318410:26
ykarelchandankumar, but in upstream it was already running10:28
ykarelso likely those parameters can just be dropped in periodic10:29
ykarelinstead of changing upstream also10:29
mariosykarel: was it? i thought it was only 10-ovn that ran tempest10:30
chandankumarykarel, marios i got the issue https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/featureset062.yml#L3510:30
chandankumarit is set to true no need to enable there, /me abandon upstream patch10:31
marioschandankumar: ack on fs6210:31
ykarelchandankumar, ack10:33
*** ykarel is now known as ykarel|afk10:40
*** jbadiapa has joined #oooq10:55
marioschandankumar: did you see that tempest one         * https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-compute-master/afd840d/logs/undercloud/var/log/tempest/stestr_results.html.gz10:58
marios        * test_stamp_pattern[compute,id-10fd234a-515c-41e5-b092-8323060598c510:58
marioschandankumar: ran saturday looks like10:58
weshay|ruckysandeep, setting up.. few min10:59
ysandeepweshay|ruck, sure11:00
*** dtantsur has joined #oooq11:02
weshay|ruckarxcruz|rover, 011:03
weshay|ruck0.11:03
weshay|rucko/11:03
arxcruz|roverweshay|ruck: o/11:03
weshay|ruckhow about that.. arxcruz|rover need anything?11:04
arxcruz|rovertoo early for you, no?11:04
arxcruz|roverweshay|ruck: we have two blockers11:04
arxcruz|roverweshay|ruck: https://hackmd.io/7MBqFHurTA2e5H8kYRwgag?both#TripleO-ISSUES11:04
arxcruz|roverchandankumar had opened the bugs11:04
weshay|rucklooking11:04
weshay|ruckk.. think I saw them11:04
arxcruz|roverbrb 10 min11:05
weshay|ruckarxcruz|rover, hey.. afaict the baremetal component promoted today.. 3/16.. it should not have, http://dashboard-ci.tripleo.org/d/UDA4H3aZk/component-pipeline?orgId=1&from=now-7d&to=now11:06
weshay|ruckhttps://logserver.rdoproject.org/openstack-promote-component/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-centos-8-master-component-baremetal-promote-to-promoted-components/2feb388/job-output.txt11:06
weshay|ruckcan you please check that out11:06
*** ratailor_ has joined #oooq11:07
chandankumarweshay|ruck, baremetal fs01 failure was a transient issue11:09
*** ratailor has quit IRC11:10
weshay|ruckchandankumar, ya. but it promoted11:12
weshay|ruckysandeep, join #openstack-infra11:14
chandankumarmarios, are you working on adding fs030 to the periodic pipeline? if not, I'll take it over then?11:15
chandankumarweshay|ruck, with this patch https://review.opendev.org/712940 in, fs012 is working without tempest11:28
chandankumar*sc1211:28
weshay|ruckNICE11:29
chandankumarweshay|ruck, please https://review.opendev.org/#/c/712294/ +w it11:29
*** ykarel|afk is now known as ykarel11:29
marioschandankumar: no didn't dig there yet go ahead11:29
*** rlandy has joined #oooq11:29
chandankumarmarios, ok then11:30
arxcruz|rovershould not have what ?11:31
chandankumarrlandy, morning11:39
rlandychandankumar: hey11:39
chandankumarrlandy, I opened a bug related to the fs030 registry issue https://bugs.launchpad.net/tripleo/+bug/1867599 - are you working on that?11:40
openstackLaunchpad bug 1867599 in tripleo "overcloud deploy failing on fs030 and fs016 while pulling mariadb container from undercloud registry" [Critical,Confirmed]11:40
chandankumarI have added it to cix, it is also blocking slawq work11:40
rlandychandankumar: ok11:40
chandankumarrlandy, https://review.rdoproject.org/r/#/q/topic:centos-8-fs12+(status:open+OR+status:merged) good to go11:41
rlandychandankumar: https://hackmd.io/HrQd03c9SxOMtFPFrq50tg?both is out of date11:42
rlandywe should pull out what we still need there11:42
chandankumarrlandy, yes sure11:42
chandankumarrlandy, may be we can sit after scrum, get it cleaned11:43
sshnaidmchandankumar, does scenario012 pass?11:43
rlandychandankumar: looks like we have some tempest success http://tripleo-cockpit.usersys.redhat.com/d/2tivP9BWz/component-pipeline11:43
chandankumarsshnaidm, except tempest, all is working11:43
chandankumarsshnaidm, https://review.opendev.org/#/c/712940/ will fix current issue11:43
rlandyeither way - we might as well move scenario01211:43
rlandyno point in keeping it centos7 passing or not11:44
rlandychandankumar: who would investigate this? https://bugs.launchpad.net/tripleo/+bug/186759911:45
openstackLaunchpad bug 1867599 in tripleo "overcloud deploy failing on fs030 and fs016 while pulling mariadb container from undercloud registry" [Critical,Confirmed]11:45
rlandyis it CIX'ed for CI team or for a DF?11:46
chandankumarrlandy, insecure registry missing on subnode111:46
chandankumarhttps://logserver.rdoproject.org/50/25550/8/check/periodic-tripleo-ci-centos-8-multinode-1ctlr-featureset030-master/f75e2e4/logs/subnode-1/etc/containers/registries.conf.txt.gz11:46
chandankumarrlandy, may be DF one11:46
rlandychandankumar: yep I know the failure - ok I see you have put it with PCCI atm11:47
chandankumarrlandy, ok11:48
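[Editor's note: the failure chandankumar points at above is the undercloud registry missing from the insecure-registry list on subnode-1. A minimal sketch of the kind of check a job could run, assuming the older v1 registries.conf layout with a [registries.insecure] table; the registry address is a hypothetical placeholder, not taken from the job logs.]

```python
# Sketch: verify a registry is listed as insecure in /etc/containers/registries.conf.
# Assumes the v1 TOML layout ([registries.insecure] -> registries = [...]); requires
# the 'toml' package. The registry address below is a placeholder.
import toml

REGISTRY = "192.168.24.1:8787"  # hypothetical undercloud registry address

def is_insecure(path="/etc/containers/registries.conf", registry=REGISTRY):
    conf = toml.load(path)
    insecure = conf.get("registries", {}).get("insecure", {}).get("registries", [])
    return registry in insecure

if __name__ == "__main__":
    print("insecure registry configured:", is_insecure())
```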
rlandyweshay|ruck: chandankumar: looking at escalation board - are we still keeping all the OVB cards alive?11:49
rlandyalso https://trello.com/c/6agowoQH/1388-cixlp1867177tripleociproa-cannot-download-noarch-python3-cssselect-092-13el8noarchrpm11:50
chandankumarrlandy, weshay|ruck since fs01 is passing and fs020 is known, i think we can move it to done?11:50
rlandycan we close this out?11:50
rlandychandankumar: ^^ python3-cssselect-092-13el8noarchrpm - looks ok for some time?11:51
chandankumarrlandy, yes, I have not seen this issue from last week till now11:51
rlandygates seem to be moving.11:51
rlandyarxcruz|rover: ^^?11:51
arxcruz|roverrlandy: yup11:52
weshay|ruckarxcruz|rover, hey.. may have a blocking red job in stein fyi11:53
rlandyhttps://trello.com/c/mpFqJeuO/1377-cixlp1866202tripleociproa-ovb-on-centos8-fails-because-of-networking-failures11:53
weshay|rucklooking at the cockpit.. pass rate is 53%11:53
arxcruz|roverweshay|ruck: checking11:53
arxcruz|roverweshay|ruck: which one? there are scen000 scen009 and container-update11:55
arxcruz|roverupgrades is failing with this:11:55
arxcruz|rover2020-03-15 00:26:58 |    "msg": "Error: Package: ceph-ansible-4.0.14-1.el7.noarch (quickstart-centos-ceph-nautilus)\n           Requires: ansible >= 2.8\n           Installed: ansible-2.6.19-1.el7.ans.noarch (@delorean-rocky-deps)\n               ansible = 2.6.19-1.el7.ans\n",11:55
marios rlandy: thanks for review can you please check my comment @ https://review.opendev.org/#/c/711507/3/zuul.d/standalone-jobs.yaml when you next have a minute thanks... just wrt the duplication not sure it's necessary11:56
rlandymarios: ack11:57
arxcruz|roverykarel: upgrade from rocky to stein is failing because ansible package... need your help here :)11:57
rlandyhow did baremetal promote when it failed OVB?11:58
chandankumarrlandy, rerun the job11:58
rlandychandankumar: ah11:58
chandankumar*rerunned11:58
rlandychandankumar: ok - so we're not entirely out of the woods with baremetal jobs11:58
chandankumarrlandy, yes11:59
chandankumarthings seems to be fine11:59
ykarelarxcruz|rover, but ansible is not changed for long in rocky11:59
ykarelarxcruz|rover, more context please11:59
rlandychandankumar: ok - so I'll leave the networking card open for a one more meeting11:59
chandankumarrlandy, aye11:59
rlandylooks like we only started to see success today11:59
chandankumarrlandy, brb for an hour11:59
rlandychandankumar: ack11:59
ykarelin rocky we have 2.6.912:00
ykarelansible12:00
ykarelso likely ceph-ansible updated recently?12:00
arxcruz|roverykarel: https://b8ad378ded78e3ab6d47-59bc25a858d3cf4494b5a72aa2369a4f.ssl.cf5.rackcdn.com/713096/2/check/tripleo-ci-centos-7-containerized-undercloud-upgrades/6bc39ab/logs/undercloud/home/zuul/undercloud_upgrade.log12:00
ykarellooking12:01
ykarelarxcruz|rover, looks like last success for that job is in june 2019 http://zuul.openstack.org/builds?job_name=tripleo-ci-centos-7-containerized-undercloud-upgrades&branch=stable%2Fstein&result=SUCCESS12:03
ykarelin stein12:03
arxcruz|rover:O12:03
arxcruz|roverweshay|ruck: what's the blocking red job in stein ?12:03
weshay|ruckarxcruz|rover, there are only like 2-3 patches in stein over the last 2 days..12:04
weshay|ruckso.. may not be something actionable yet12:05
arxcruz|roverok12:05
ykarelarxcruz|rover, so it's happening due to https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/deployment/undercloud/undercloud-upgrade.yaml#L197-L20212:09
ykareleven if ansible 2.8 exists in stein12:09
*** ysandeep is now known as ysandeep|afk12:11
weshay|ruckpanda, morning.. how'd the promotions over the weekend go?12:12
mariosweshay|ruck: fyi 11:30 < panda> marios: we got 6 promotions in the weekend, I think we are very close to merge12:14
weshay|ruckmarios, /me going through the logs12:17
rlandymarios: responded in https://review.opendev.org/#/c/711507/12:22
rlandywe can discuss further at the meeting12:22
rlandyimho, it makes it cleaner going forward to keep the main var set on the centos-8 job12:22
rlandyand override the centos-7 job12:23
rlandybut in any case ...12:23
rlandywe made the decision while discussing parenting to duplicate12:23
pandalunch12:25
mariosrlandy: ok replied again for clarification but will duplicate them anyway12:25
rlandymarios: we can clarify at meeting12:25
rlandyit should be clear to everyone12:25
*** udesale_ has joined #oooq12:26
rlandythis is a pretty fundamental decision we made12:26
mariosrlandy: ack12:26
*** ratailor__ has joined #oooq12:27
*** udesale has quit IRC12:28
*** jmasud has quit IRC12:28
*** ratailor_ has quit IRC12:30
*** ratailor__ has quit IRC12:39
zbrweshay|ruck: should tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades  become 8?12:45
zbrAFAIK, upgrades are incompatible with os switching12:45
zbrbecause there is no such thing as upgrading centos-7 to 8.12:45
weshay|ruckzbr, look at the hackmd12:46
weshay|ruckfor centos-812:46
rlandyarxcruz|rover: want to join me on the escalation meeting?12:46
rlandynv - we're done :)12:47
zbryep, not supported but this does not explain what we do with existing jobs12:47
zbror this means that  tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades must be removed12:47
arxcruz|roverrlandy: sorry, I don't have this cix on my calendar, is it a new meeting, it's not in rhos team meetings calendar12:47
zbrafaik, all jobs with centos-7 in their name are supposed to be replaced by centos-8, or removed.12:48
rlandyarxcruz|rover: no worries - it was mainly centos-8 so I didn't stumble too much:)12:48
weshay|ruckugh12:48
weshay|ruckarxcruz|rover, working w/ jbuchta to fix that.. and get it added back to rhos-team-meetings12:51
arxcruz|roverweshay|ruck: ack12:51
arxcruz|roverweshay|ruck: I got invitation, for today, but it should be on rhos-team-meetings12:52
rlandyhttps://hackmd.io/HrQd03c9SxOMtFPFrq50tg is getting out of date12:52
weshay|ruckarxcruz|rover, they are working that out now12:54
weshay|ruckrlandy, ya.. I did not rescrub that12:54
weshay|ruckI can do that while I listen to this stuff12:54
rlandyweshay|ruck: worth cleaning it up at today's scrum?12:55
rlandywe can take care of it12:55
rlandyweshay|ruck: right now, zuul and cockpit are better trackers12:55
weshay|ruckrlandy, so.. if you guys have time..   def.. dive into the promotion server12:56
rlandyweshay|ruck: I think the rest is fairly sorted12:56
rlandya couple of failing test to cover12:56
weshay|ruckmarios, panda I don't see a full successful run on centos-8 panda-rulz etc.. but I may be out of context.. I see containers work.. but it stops at overcloud-images.. which may be intentional12:56
mariosweshay|ruck: ack call in 3 mins12:57
weshay|ruckrlandy, I would have thought that the centos-7 upgrade jobs would have started to fail by now in master, but they are not.. but they can be killed12:57
rlandyweshay|ruck: ok - will get some reviews up on that12:58
rlandyafter scrum12:58
*** jpena is now known as jpena|lunch12:59
rlandychandankumar: scrum13:01
marios        * https://logserver.rdoproject.org/openstack-component-compute/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-compute-master/afd840d/logs/undercloud/var/log/tempest/stestr_results.html.gz13:04
mariosrlandy: ^13:04
*** amoralej is now known as amoralej|lunch13:07
*** apetrich has quit IRC13:25
weshay|ruckchandankumar, fyi https://review.rdoproject.org/zuul/builds?job_name=tripleo-podman-integration-rhel-8-standalone13:28
weshay|ruckwe have centos-8 going there yet?13:28
chandankumarweshay|ruck, yes, https://review.rdoproject.org/r/#/c/25916/13:28
weshay|ruckchandankumar, thanks..13:28
weshay|ruckykarel, fyi https://review.rdoproject.org/r/#/c/25916/13:28
*** dpawlik has quit IRC13:29
*** apetrich has joined #oooq13:30
ykarelweshay|ruck, ack noted13:30
ykarelwe will ignore rhel8 master failures13:30
chandankumarweshay|ruck, for ceph-ansible is it ok to drop rhel-8 jobs and move to centos-8?13:36
weshay|ruckchandankumar, yes.. all rhel8 goes bye bye13:40
marioshttps://review.rdoproject.org/r/#/c/25782/13:40
mariosrlandy: ^13:40
chandankumarbhagyashris, ^^13:41
weshay|ruckpanda++13:41
*** ChanServ sets mode: +o panda13:43
*** panda changes topic to ""Docs: https://docs.openstack.org/tripleo-quickstart/latest/ || 16th March promoter patch to review https://review.rdoproject.org/r/24774 || next promoter patch in the chain https://review.rdoproject.org/r/25782"13:43
*** panda sets mode: -o panda13:43
*** skramaja has quit IRC13:43
*** skramaja has joined #oooq13:44
*** ysandeep|afk is now known as ysandeep13:45
*** TrevorV has joined #oooq13:46
rlandymarios: sorry - we didn't discuss the duplication of vars13:49
mariosrlandy: nm i updated anyway13:50
rlandywant to chat about that?13:50
mariosrlandy: discussion in gerrit if any more is needed np13:50
rlandymarios: it's what we did above13:50
rlandymarios: ack13:50
*** jpena|lunch is now known as jpena13:50
chandankumarrlandy, marios please +w it https://review.opendev.org/#/c/712294/13:50
chandankumarhttps://review.opendev.org/#/c/712013/ needs that13:51
sshnaidmpanda, is the promoter code py3 ready?13:52
sshnaidmpanda, I wonder if we can move promoted to centos8 in some point13:52
mariossshnaidm: yes see py36 unit tests13:52
sshnaidmmarios, ack13:52
sshnaidmmaybe worth a try13:53
marioschandankumar: rlandy: beat me to it13:53
bhagyashrischandankumar, weshay|ruck yup working on it13:53
mariossshnaidm: yeah good idea... for our 'main' promoter server once we move to 'new'13:54
mariospanda: did you already consider this? ^13:54
mariospanda: 'this' use centos8 for promoter server13:55
pandamarios: sshnaidm we already discussed this with wes, the move to centos8 will be a task in the next sprint. The code is tested in both py27 and py36, so it can be ported without problems. The only part that still needs changes is the provision playbook.13:57
mariospanda: cool story bro13:57
sshnaidmmarios, :D13:58
marios;)13:58
rlandychandankumar: marios: https://hackmd.io/7MBqFHurTA2e5H8kYRwgag - added section on Reviews still in play to add/move jobs14:00
mariosrlandy: ack14:00
mariosrlandy: will do in  bit14:00
rlandyykarel: hi ... wrt the email on md5 and mirrors, "using md5sum command to detect aggregate hash which is wrong as repo files can be modified in jobs"14:03
rlandywe only do this on the container build jobs14:03
rlandyand it was bu design14:03
rlandyby14:03
ykarelrlandy, but checksum changes after modificatin of repo files14:04
ykareland that's wrong imo to use md5sum command on modified repo files14:04
rlandyykarel: the repo files should not change in container build jobs14:04
ykarelrlandy, but those are changing14:04
ykareldue to mirrors14:05
rlandyif we had not done that, we would never have known they were changing14:05
ykarelyes issue exist for long but was not noticed14:05
rlandyit's an easy fix to pull the md514:05
ykarelrlandy, yes i filed bug https://bugs.launchpad.net/tripleo/+bug/1867580 and proposed patch too14:06
openstackLaunchpad bug 1867580 in tripleo "tripleo-buildcontainers jobs build containers with wrong tag" [High,In progress] - Assigned to yatin (yatinkarel)14:06
rlandyykarel: ok - so if we do go with this change, we will lose our check14:09
rlandyie: the mirrors are out of date14:09
*** Goneri has joined #oooq14:09
rlandythe mirror we reference updates about 1.5 hours after the job is run14:10
rlandyand we build images with out of date repos14:10
ykarelrlandy, u saying mirror updates after 1.5 hours?14:11
ykarelsorry didn't get that part14:11
rlandyykarel: yep - the mirrors lag14:11
ykarel:( i doubt 1.5 is too much14:12
ykarelit should sync in seconds or minutes14:12
rlandyeven if it is minutes, it's too late for the pipeline14:12
rlandyso I am not saying your patch is wrong, but using mirror is not good either14:13
ykareli don't have numbers, but i think jpena can put some light on how much it takes for mirrors to sync14:14
ykareli think here we mainly looking at promotion jobs, so rdo mirror, right?14:14
*** amoralej|lunch is now known as amoralej14:14
rlandyykarel: ack14:15
jpenarlandy, ykarel: is that the AFS mirrors? I can check the expiration time, give me some mins14:16
*** Goneri has quit IRC14:16
rlandyjpena: thanks - maybe we just hit a particular lag on friday14:16
ykareljpena, http://mirror.regionone.rdo-cloud-tripleo.rdoproject.org:8080/rdo one14:16
ykarelack take ur time14:16
*** chem has quit IRC14:17
*** Goneri has joined #oooq14:19
*** chem has joined #oooq14:19
*** udesale_ has quit IRC14:23
jpenaykarel, rlandy: the default cache expiry time is 1 hour14:26
rlandyjpena: so iiuc, the mirrors update once per hour14:29
jpenarlandy: they expire content every hour the latest. They can expire content before that, but it depends on the web server sending certain data (I can't remember off the top of my head, there is some HTTP expiry time header iirc)14:30
ykareljpena, so isn't cache expiry and mirror sync different?14:30
jpenaykarel: they are two different things. In the AFS mirrors, we have two separate entities:14:31
ykarelhmm that's what i understood, so the question here was about mirror sync14:31
jpena1- the mirror side, which is synced periodically (that's the centos, fedora, debian, etc mirrors)14:31
jpena2- the caching proxy side (used for RDO Trunk data iirc)14:31
rlandywe need to reference the RDO Trunk data14:32
rlandyeither way, "periodically" could mean a miss for the pipelines14:33
rlandywhich need to run immediately after the tripleo-ci-testing pin is updated14:33
rlandyhttps://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-master14:33
jpenaright, so if you need to reference the delorean.repo file, I would advise to get it from trunk.rdo14:36
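[Editor's note: jpena's point above is that the caching proxy honours HTTP expiry headers, so a mirror URL can serve stale content for up to an hour while trunk.rdo is always fresh. A rough sketch for inspecting those headers; the mirror host is the one named earlier in the log and is only reachable from CI nodes, and its path layout mirroring trunk.rdo is an assumption.]

```python
# Sketch: compare caching headers on trunk.rdo vs the RDO cloud mirror proxy.
import urllib.request

URLS = [
    "https://trunk.rdoproject.org/centos8-master/tripleo-ci-testing/delorean.repo",
    # assumption: the proxy exposes the same path layout under /rdo
    "http://mirror.regionone.rdo-cloud-tripleo.rdoproject.org:8080/rdo/centos8-master/tripleo-ci-testing/delorean.repo",
]

for url in URLS:
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(url)
            for header in ("Last-Modified", "Expires", "Cache-Control", "Age"):
                print(f"  {header}: {resp.headers.get(header)}")
    except OSError as exc:
        print(f"{url} -> unreachable from here ({exc})")
```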
rlandywhich is why I think we should avoid mirrors for all pipeline jobs14:36
rlandyykarel: ^^ we can keep your patch or not14:36
ykarelrlandy, patch is using trunk.rdo14:37
rlandyI'm ok with the change if we are now sure we are getting trunk.rdo14:37
rlandythe whole point in doing the md5 check before was to catch that14:37
ykarelthat patch is more on topic for using delorean.repo.md5 or md5sum command14:37
rlandyykarel: so would you agree that we skip the mirror-info-fork for all jobs running in a periodic pipeline?14:38
ykarelif md5sum was kept to check consistency, then i think it should be handled in the job itself14:39
rlandyykarel: I'm ok with your change14:39
ykarelrlandy, mirror-info fork runs already in deployment jobs14:39
ykarelit was missing in other jobs14:39
rlandyhttps://review.rdoproject.org/r/#/c/25900/3/playbooks/base/pre.yaml14:40
rlandy^^ that pre runs on non-deployment jobs14:40
chandankumarrlandy, marios all reviews added here https://hackmd.io/7MBqFHurTA2e5H8kYRwgag?view#Reviews-still-in-play-to-addmove-jobs from my side14:46
rlandychandankumar: thanks ... pls see discussion above14:47
rlandyjpena: if we made all periodic jobs hit trunk.rdo directly ( and not the mirrors) could that cause any issues?14:48
jpenarlandy: we've never tested the load... My suggestion would be to use trunk.rdo to fetch the .repos, then locally modify them to use the mirrors if needed14:49
rlandyok - that's what the release files do anyways - so it's just the jobs that don't use release files (container and image build) that need to adjust14:51
chandankumarrlandy, so basically we need to make sure the delorean.md5 is the same as on the nodepool proxy?15:00
rlandychandankumar: that's half the problem15:02
rlandythe code to recalculate the md5 sum was intentional15:02
rlandyto check this15:02
rlandyykarel has a patch to change the code to download the md5 rather than recalculate it15:03
rlandywhich is ok15:03
rlandybut we lose our check15:03
rlandyin such a case, we need to make sure that mirrors are not used for the image and container builds15:03
rlandyiiuc - for any releases15:03
rlandyessentially the proxy can be out of date so we need to hit trunk.rdo directly in the pipeline15:04
rlandyiow ... https://review.rdoproject.org/r/#/c/25900/3/playbooks/base/pre.yaml for all jobs where we do not deploy15:05
chandankumarrlandy, https://review.rdoproject.org/r/#/c/25900/3/playbooks/base/pre.yaml but it is not going to handle image build na?15:07
rlandychandankumar: correct - see the email suggestion15:07
rlandychandankumar: but I think it needs to be done not only for centos815:08
rlandyputting up patch - sec15:08
chandankumarrlandy, ok15:08
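[Editor's note: the md5 discussion above boils down to comparing the aggregate hash DLRN publishes in delorean.repo.md5 with an md5 recomputed over the delorean.repo a job actually used; they only match if the repo file was fetched from trunk.rdo and not rewritten to point at a mirror. A rough sketch of that consistency check, using the centos8-master tripleo-ci-testing URL that appears later in the log.]

```python
# Sketch: compare DLRN's published aggregate hash with the md5 of the fetched repo file.
# A job that rewrites delorean.repo (e.g. to use a mirror) and then recomputes md5sum
# will no longer match the published hash - the situation described above.
import hashlib
import urllib.request

BASE = "https://trunk.rdoproject.org/centos8-master/tripleo-ci-testing"

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read()

published = fetch(f"{BASE}/delorean.repo.md5").decode().strip()
computed = hashlib.md5(fetch(f"{BASE}/delorean.repo")).hexdigest()

print("published:", published)
print("computed :", computed)
print("match    :", published == computed)
```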
chandankumararxcruz|rover, weshay|ruck rlandy https://lists.rdoproject.org/pipermail/dev/2020-March/009333.html15:17
rlandychandankumar: https://review.rdoproject.org/r/2592615:17
rlandy^^ so I am not sure how wide spread this should be15:17
chandankumarrlandy, build_override_repos  will work I think15:19
rlandychandankumar: what about kolla_base_tag?15:20
rlandywill that only get periodic15:20
rlandyI think 8 or 7 doesn't matter15:20
rlandyI removed the original 8 only15:20
arxcruz|rovercurl https://corona-stats.online/USA15:21
arxcruz|rovernow that's cool15:21
chandankumarrlandy, kolla_base_tag is used only for rhel-8 or c-8 not seeing for c715:21
rlandychandankumar: so I think rhel 8 should also be included15:22
rlandyarxcruz|rover: /o\ ... this whole city is a ghost town15:22
*** Goneri has quit IRC15:22
rlandyother than the grocery store - which is a mob scene15:23
rlandyykarel: pls review https://review.rdoproject.org/r/#/c/25926/ - to check if we've got the right conditions15:24
chandankumararxcruz|rover, https://www.covidindia.com/15:24
ykarelrlandy, looking15:24
arxcruz|roverchandankumar: the curl command says 113, the website says 4315:26
chandankumararxcruz|rover, https://covidindia.org/15:26
chandankumararxcruz|rover, people are crazy they made so many sites15:26
*** sshnaidm is now known as sshnaidm|afk15:28
arxcruz|roverrlandy: I went to the market today, nothing there, but I notice people are walking on the streets normally, although, the kindergarten and schools are now officially closed15:29
arxcruz|roverand yes, I was able to buy toilet paper15:29
arxcruz|roverlol15:29
*** Goneri has joined #oooq15:29
*** dpawlik has joined #oooq15:30
pandazbr: what can I do to make rdo-tox-molecule work with pytest-html ?15:30
zbrpanda: probably to put a <2.0 condition on it15:31
pandazbr: trying15:31
rlandyarxcruz|rover: this whole social distancing thing is stupid - let's all stay away from each other - except in the grocery store - where we can all tackle each other for the last box of clorox15:31
arxcruz|roverrlandy: well, at least here, people were actually keep away from each other on the grocery store15:32
arxcruz|roverin some very small groceries they are allowing only a few people to enter15:33
*** dsneddon_ has joined #oooq15:56
pandazbr: it is already pinned ...15:58
zbrpaste link to error15:58
pandazbr: https://logserver.rdoproject.org/74/24774/73/check/rdo-tox-molecule/5556a82/job-output.txt15:58
pandazbr: apparently the pinning doesn't work 2020-03-16 09:41:42.370839 | cloud-centos-8 | plugins: metadata-1.8.0, html-2.1.0, cov-2.8.1, molecule-1.2.415:58
pandazbr: molecule-delegated inherits from testenv:molecule15:59
pandazbr: and testenv:molecule has the version constraing15:59
pandaconstraint*15:59
pandazbr: I'm stupid16:00
zbr:D16:00
pandazbr: no I'm not16:00
zbrpanda: is not you, is pip.16:00
rlandychandankumar: https://review.opendev.org/713277 pls review16:01
zbrbut "html-2.1.0" is the reason16:01
zbrbe sure this does not get used.16:01
zbrthey will do a patch soon, but until that, you we need to avoid it.16:01
mariosdamn.. anyone know how i can test that (config repo) https://review.rdoproject.org/r/25932 ... don't think i can test it with testproject depends on https://review.rdoproject.org/r/#/c/25796/16:03
pandazbr: I see, there are cross dependencies to correct all over the place16:03
*** saneax has quit IRC16:04
*** jmasud has joined #oooq16:06
mariossshnaidm|afk: rlandy: when you next have a chance can you check https://review.rdoproject.org/r/25932 I think that is what causes "dnf: command not found" @ https://logserver.rdoproject.org/96/25796/4/check/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp_1supp-featureset039-master/0522040/job-output.txt16:10
pandamarios: zbr https://review.rdoproject.org/r/25933 let's see if this works16:13
*** skramaja has quit IRC16:14
pandamarios: zbr definitely not :)16:14
*** derekh has joined #oooq16:17
chandankumarrlandy, sorry in pyconindia call deciding the fate of conference16:28
chandankumarwill look into that16:29
rlandyno rush16:29
*** tesseract has quit IRC16:33
*** bogdando has quit IRC16:35
weshay|ruckrlandy, 0/ out of meetings :)16:37
rlandyweshay|ruck: how was it?16:38
weshay|ruckmeh16:39
rlandyok - you didn't miss much here ...16:39
weshay|ruckmarios, is the plan merge the promoter change and try to promote asap after merge?16:39
pandazbr: I put version constraints everywhere, I have no idea what's installing the wrong version of pytest-html16:40
mariosweshay|ruck: panda: was going to run it 'properly' since over the weekend only containers were running.16:40
pandazbr: https://logserver.rdoproject.org/33/25933/2/check/rdo-tox-molecule/db7eacb/job-output.txt16:40
mariosweshay|ruck: panda: and then maybe tomorrow we merge that patch16:40
pandaweshay|ruck: promoter had a full run this time, with containers overcloud and dlrn16:40
weshay|ruckmarios, ok.. so let a real one fly . .then merge16:40
pandamarios: ^16:40
mariosweshay|ruck: to be clear we need to merge that https://review.rdoproject.org/r/25782 ontop of panda change it has another minor fix we need16:40
pandamarios: weshay|ruck but we have to fix rdo-tox-molecule to merge16:41
marios panda: cool story bro16:41
mariosack so 'it works' (tm)16:41
rlandyweshay|ruck: I'd like to add fs001 back into criteria for the integration pipeline16:41
mariospanda: nice one panda16:41
* marios gets ready to run away 16:41
pandamarios: I'm trying to fix the jobs here  https://review.rdoproject.org/r/2593316:41
weshay|ruckpanda, marios should we not merge first?16:41
weshay|ruckjust curious..16:41
mariosweshay|ruck: we can't until the molecule stuff is fixed (red jobs)16:41
* marios checks review if it was fixd16:41
pandaweshay|ruck: at this point we can merge without problem ... after the jobs are fixed16:42
pandamarios: no, it was not, but I'm trying16:42
mariosweshay|ruck: https://review.rdoproject.org/r/#/c/24774/ rdo-tox-moleculeFAILURE in 21m 09s rdo-tox-molecule-delegated-centos-8FAILURE in 17m 16s16:42
weshay|ruckpanda, ya.. so unless you strongly are against it.. I would prefer we merge, then go live16:42
mariosweshay|ruck: we can't merge until those jobs ^^^16:42
pandaweshay|ruck: ah, yeah agreed.16:42
mariosweshay|ruck: but otherwise yes (I am already +2 there)16:42
weshay|ruckis there a review to fix molecule?16:43
mariosweshay|ruck: 18:41 < panda> marios: I'm trying to fix the jobs here  https://review.rdoproject.org/r/2593316:43
pandaweshay|ruck: marios there's still something in the requirements that is installing the wrong version of pytest-html, I'm trying to find it.16:44
weshay|ruckzbr, does that tie into the centos-8 switch ur trying to get through ^16:44
weshay|ruck?16:44
*** dsneddon_ has quit IRC16:48
zbrweshay|ruck: nope, is deps issue.16:49
zbri will look at it after unblocking the rdo base node change16:49
*** ykarel is now known as ykarel|away16:50
pandazbr: https://opendev.org/openstack/requirements/raw/branch/master/upper-constraints.txt16:52
pandazbr: it's in the tox:molecule dependencies16:52
pandazbr: and that seems to have been fixed16:52
pandazbr: [testenv:molecule]16:53
pandadeps =16:53
panda    -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}16:53
pandazbr: and there pytest-html is pinned ==2.1016:53
panda2.1.016:53
*** jmasud has quit IRC16:54
*** jmasud has joined #oooq16:55
chandankumarrlandy, minute comment https://review.opendev.org/#/c/713277/2 rest is ok16:59
rlandylooking16:59
zbri have no control over what the reqs team is doing there; two options: investigate the bug and fix pytest-molecule if the bug is real, or drop the use of constraints.16:59
rlandychandankumar: imagebuild repos look ok now in integration pipeline17:00
chandankumarrlandy, sweet17:00
* chandankumar is thinking how to go back to my native home town17:00
zbrpanda: read https://github.com/pytest-dev/pytest-html/issues/282 5days old already17:02
*** marios has quit IRC17:02
rlandychandankumar: less corona virus there?17:03
pandazbr: anything we can do for the upper constraints ?  you had a patch there17:03
*** holser has quit IRC17:06
pandazbr: found it.17:11
chandankumarrlandy, yes17:12
chandankumarrlandy, in pune it is sky rocketting each day new 2-3 cases17:12
chandankumarall public places closed except local shops17:12
chandankumarzbr, https://github.blog/2020-03-16-npm-is-joining-github/17:14
pandazbr: weshay|ruck https://review.opendev.org/713293 should fix the jobs17:17
pandazbr: weshay|ruck but I have no idea if it will be merged.17:18
weshay|ruckk17:18
weshay|ruckpanda, probably not..17:19
weshay|ruckpanda, zbr can we not create an override molecule job that runs on centos-8?17:19
pandaweshay|ruck: thn we have 2 options, wait for the next pytest-html release, or remove the upper constraints from our requirements17:19
chandankumarzbr, https://review.opendev.org/#/c/713277/ good to go17:19
pandazbr: ideas ? ^17:19
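[Editor's note: the molecule breakage being discussed above comes from the OpenStack upper-constraints file (quoted in the tox deps earlier) pinning pytest-html to the 2.1.0 release affected by the pytest-html issue zbr links, which wins over any attempt to keep it below 2.0. A hedged sketch that checks what the constraints file currently pins and whether that pin would satisfy a "<2.0" bound; at the time of this log the pin was 2.1.0.]

```python
# Sketch: read the OpenStack upper-constraints file and test whether its pytest-html
# pin satisfies the "<2.0" bound the molecule env wants. Requires the 'packaging' lib.
import urllib.request
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet

URL = "https://releases.openstack.org/constraints/upper/master"

with urllib.request.urlopen(URL) as resp:
    lines = resp.read().decode().splitlines()

for line in lines:
    if line.startswith("pytest-html"):
        req = Requirement(line.split(";")[0])        # e.g. pytest-html===2.1.0
        pinned = next(iter(req.specifier)).version   # pinned version string
        ok = pinned in SpecifierSet("<2.0")
        print(f"{line} -> satisfies '<2.0': {ok}")
```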
zbrhaha, tbh, i seen it comming, not surprised npm joins github.17:20
zbri wonder if they are really aware of what pile of mess they bough17:21
weshay|ruckpanda, why is using centos-8 not an option for molecule?17:21
zbron the other hand, npm did few things right compared with pypi.17:21
weshay|ruckpanda, zbr https://meet.google.com/esb-ikfb-tfq?authuser=117:21
chandankumarzbr, github is doing one thing very well: capturing developer mindshare, providing a one-stop shop for all17:22
chandankumarbased on demands17:23
*** rascasoft has quit IRC17:29
chandankumarrlandy, weshay|ruck anything to keep an eye tomorrow?17:39
chandankumarsee ya people stay safe!17:39
*** chandankumar is now known as raukadah17:39
pandaweshay|ruck: zbr based on what I heard on the meeting, I'm renaming all the jobs in ci-config that are pinned to a distro, to include that distro17:43
pandazbr: weshay|ruck we have 5 right now to be renamed17:43
weshay|ruckk.. thank you17:43
weshay|ruckimho.. it would make sense to also include what they run against17:44
weshay|ruckbut it's just my opinion17:44
pandaweshay|ruck: you mean something additional to distro  ?17:45
weshay|ruckrlandy, do you care if we don't have the job that pins consistent -> component-ci-testing in the results in the cockpit query?17:45
weshay|ruckpanda, ya.. so the molecule job that runs against the promoter is much different than what runs against collect logs17:45
weshay|ruckand one has no way to see that w/o digging in17:45
*** dsneddon_ has joined #oooq17:46
weshay|ruckI guess the same job can run against multiple different things in our ci-config17:46
weshay|rucknot something we need to solve atm17:46
weshay|ruckbut it's kind of fuzzy17:46
zbrpanda: put a comment/review on https://github.com/pytest-dev/pytest-html/pull/28317:49
pandazbr: saying what ? How do I check if it solved the problem ?17:52
zbrtrust me, tested it, fixes the problem.17:52
zbri installed the patch and pytest-molecule is back happy.17:52
rlandyraukadah: should be fine tomorrow17:53
zbryou can say something like: "having a hotfix with this would really be handy as we were forced to pin down pytest-html due to it.17:53
rlandyweshay|ruck: that's fine17:53
rlandyie: no  consistent -> component-ci-testing in the results in the cockpit17:53
zbrdo you want me to write a reasoning-generator? it could prove handy :D17:54
rlandyoh wow - something hit the pipeline17:54
weshay|ruckrlandy, aye.. that should make it more visible when people use testproject to get something through17:54
weshay|ruckpanda, +2 https://review.rdoproject.org/r/#/c/25933/17:55
weshay|ruckci came back17:55
rlandy2020-03-16 17:27:51 |   msg: 'Container(s) with bad ExitCode: [''container-puppet-neutron''], check logs in /var/log/containers/stdouts/'17:56
rlandyweshay|ruck: how did that hit the integration pipeline w/o impacting component ^^?17:56
panda\o/17:57
pandamerging17:57
rlandyhttps://logserver.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-master/e67d899/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz17:57
*** dtantsur is now known as dtantsur|afk17:57
*** jpena is now known as jpena|off17:58
weshay|ruckrlandy, that error?17:58
rlandyweshay|ruck: ack17:58
rlandyshouldn't we have caught that17:59
rlandyin the component pipeline17:59
weshay|ruckrlandy, some times containers fail to start17:59
weshay|rucknot sure if that's this issue.. but I've seen that upstream quite a bit17:59
* weshay|ruck looks at this issue17:59
rlandyit's taking the whole pipeline down18:00
*** derekh has quit IRC18:00
* rlandy wonders if we shouldn't build containers in component pipeline18:00
weshay|ruckrlandy, ?18:01
weshay|ruckoh I see18:01
weshay|ruckrlandy, loooks more like a container build issue18:05
rlandyweshay|ruck: have you seen this happen before?18:05
rlandywe rebuild the container? or kolla issue?18:06
weshay|ruck2020-03-16T17:27:38.870604079+00:00 stderr F <13>Mar 16 17:27:38 puppet-user: No such file or directory @ rb_sysopen - /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini18:06
weshay|ruckwhich afaik is on the container18:07
rlandyweshay|ruck: hmmm ... maybe building containers in the pipeline is required? container update didn't hit this18:08
weshay|ruckrlandy, sorry.. it's here undercloud/var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-ovn18:08
amoralejweshay|ruck, i think https://review.opendev.org/#/c/712762/18:09
weshay|ruckrlandy, https://logserver.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-master/e67d899/logs/undercloud/var/lib/config-data/puppet-generated/18:09
* weshay|ruck looks18:09
amoralejnow we are forced to unpin neutron18:09
rlandy2020-03-16T17:33:39.356319204+00:00 stderr F <13>Mar 16 17:33:39 puppet-user: Error: Could not set 'present' on ensure: No such file or directory @ rb_sysopen - /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini (file: /etc/puppet/modules/neutron/manifests/agents/ovn_metadata.pp, line: 150)18:09
weshay|ruckamoralej, unpin it all :)18:09
weshay|ruckwe're good18:09
rlandyamoralej: neutron was not upinned?18:09
weshay|ruckthanks for the pointer18:09
amoralejwe need some fixes18:10
rlandywhat did I miss?18:10
amoralejnop, neutron was pinned to make transition to ovn-in-repo smooth :)18:10
amoralejit needs several fixes here and there18:10
rlandyI figure container update should have caught this18:10
rlandyie: in the component pipeline18:10
weshay|ruckrlandy, it's a kolla change18:11
amoralejweshay|ruck, rlandy https://review.rdoproject.org/r/#/c/24462/18:11
amoralejthat still has an issue, as it will break octavia with ovn18:12
amoralejalthough at this point this is probably a smaller issue18:12
rlandythat is less breakage than what we have now18:13
amoralejrlandy, that's only affecting periodic, right?18:13
weshay|ruckamoralej, ya18:13
amoraleji'm fine with unpinning if that passes ci18:14
rlandyyeah - nothing hits tripleo-current yet18:14
weshay|ruckamoralej, the rdo-info ci right?18:14
amoralejand ask neutron team to create new package for https://github.com/openstack/ovn-octavia-provider18:14
amoralejweshay|ruck, first, ci for https://review.rdoproject.org/r/#/c/24462/ itself18:15
*** sshnaidm|afk is now known as sshnaidm18:15
amoraleji rechecked some time ago18:15
rlandyamoralej: is anything else still pinned?18:15
weshay|ruckpanda, we'll be looking to promote a master c8 job run from earlier today18:15
amoralejmistral is also being unpinned today18:16
pandaweshay|ruck: will promote to panda-man18:16
pandaweshay|ruck: what's the aggregate-hash ? I see some hashes here that failed to meet the criteria18:17
weshay|ruckpanda, /me gets18:17
*** dsneddon_ is now known as dsneddon18:17
pandaweshay|ruck: and criteria is still reduced18:17
amoralejrlandy, https://review.rdoproject.org/r/#/c/25919/18:18
rlandyamoralej: thanks LP bug coming up18:18
weshay|ruckpanda, the last run .. had https://logserver.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-scenario010-standalone-master/4350e26/logs/undercloud/etc/yum.repos.d/delorean.repo.txt.gz18:19
weshay|ruck9e360f046d92bc8cc27921a1e491248c  delorean.repo18:20
weshay|ruckpanda, I'll patch the criteria w/ notes18:20
pandaweshay|ruck: ok it's not on the list that doesn't meet the criteria18:21
weshay|ruckpanda, ok.. no changes needed.. everything in criteria for centos-8 master passed in that run18:22
weshay|ruckpanda, I would expect to be blocked on the ovn issue for at least several days18:24
amoralejrlandy, weshay|ruck jobs are failing as that change modified config file location18:26
weshay|ruckaye18:26
weshay|ruckpanda, so we should see centos-8 kick soon and pick up that hash w/ panda_man?18:26
rlandyhttps://review.rdoproject.org/r/#/c/25919/1/tags/ussuri-uc.yml - neutron is still pinned here18:26
weshay|ruckpanda-man18:27
rlandyhttps://bugs.launchpad.net/tripleo/+bug/186766418:28
openstackLaunchpad bug 1867664 in tripleo "Master periodic jobs are failing overcloud deploy with ''Container(s) with bad ExitCode: [''container-puppet-neutron''], check logs in /var/log/containers/stdouts/'" [Critical,Triaged]18:28
rlandyamoralej: weshay|ruck: ^^18:29
rlandypromotion-blocker so it should hit CIX18:29
pandaweshay|ruck: yes18:29
amoralejone option would be to restore the packages to the containers via tripleo-common overrides18:33
amoralejis it worth it?18:33
amoralejweshay|ruck, rlandy ^18:33
weshay|ruckamoralej, how long do you think it will take to fix neutron.. moving ovn is not small..18:34
amoralejneutron unpin may introduce other issues18:34
weshay|ruckamoralej, to buy time.. it's probably worth it18:35
amoralejpackaging wise, we can fix it and unpin in some hours18:35
amoralejbut18:35
amoralejnot sure if config options, etc... are the same18:35
rlandyidk - hacking more may hurt us18:36
amoralejtomorrow we can probably have support from someone from ovn18:36
weshay|ruckpanda, it just ran18:36
rlandymaybe give neutron team a day?18:36
weshay|rucknon candidate found?18:36
rlandyamoralej: ack - let's give ovn a chance to respond18:36
rlandyif they say it's complicated/long, we can consider overrides18:37
rlandyand deal with the consequences18:37
amoralejrlandy, weshay|ruck is it fine for you to keep this failing until tomorrow?18:38
weshay|ruckok w/ me18:38
pandaweshay|ruck: https://trunk.rdoproject.org/centos8-master/tripleo-ci-testing/delorean.repo.md5 I don't see 9e360f046d92bc8cc27921a1e491248c in tripleo-ci-testing18:38
weshay|ruckpanda, /me checks container tag18:39
weshay|ruckfor that run18:39
weshay|ruckperhaps unzipping / renaming18:39
rlandyack - no emergency18:39
weshay|ruckpanda, container build says the md5 is 46b71a6620e3372c998db7a694112fd218:40
weshay|ruckpanda, which *is* in the logs here18:40
weshay|ruck2020-03-16 10:27:02.276029 | primary | {18:41
weshay|ruck2020-03-16 10:27:02.276261 | primary |   "aggregate_hash": "46b71a6620e3372c998db7a694112fd2",18:41
weshay|ruck2020-03-16 10:27:02.276281 | primary |   "commit_hash": null,18:41
weshay|ruck2020-03-16 10:27:02.276316 | primary |   "component": null,18:41
weshay|ruck2020-03-16 10:27:02.276330 | primary |   "distro_hash": null,18:41
weshay|ruck2020-03-16 10:27:02.276337 | primary |   "in_progress": false,18:41
weshay|ruck2020-03-16 10:27:02.276355 | primary |   "job_id": "periodic-tripleo-ci-centos-8-standalone-master",18:41
weshay|ruck2020-03-16 10:27:02.276376 | primary |   "notes": "",18:42
weshay|ruck2020-03-16 10:27:02.276398 | primary |   "success": true,18:42
weshay|ruck2020-03-16 10:27:02.276411 | primary |   "timestamp": 1584354405,18:42
weshay|ruck2020-03-16 10:27:02.276471 | primary |   "url": "https://logserver.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-master/b057a46"18:42
weshay|ruckpanda, you may have already promoted that one18:42
pandaweshay|ruck: yep, that's ancient18:43
weshay|ruckhrm18:43
pandaweshay|ruck: most recent was promoted 2:30 hours ago18:43
pandaweshay|ruck: but with reduced crteria18:43
weshay|ruckpanda, can't be ancied18:44
weshay|ruckancient18:44
rlandyok - so back to original question - how come container update never hit this issue?18:44
weshay|ruckhopefully18:44
weshay|ruckrlandy, container builds18:44
pandaweshay|ruck: 46b71a6620e3372c998db7a694112fd2 is from 10 hours ago18:44
weshay|ruckit was a change in kolla18:44
weshay|ruckpanda, ok.. that's not ancient18:44
weshay|ruck:)18:44
pandaweshay|ruck: AGES18:44
pandaweshay|ruck: EONS18:44
weshay|ruckdepends how closely you watch I guess18:44
weshay|ruckpanda, so that one looks safe to promote all the way through18:44
weshay|ruckbut we can hold that until tomorrow18:45
weshay|ruckyou check how many old attempts?18:45
weshay|ruck10?18:45
weshay|ruckrlandy, ideally kolla doesn't change that much :)18:45
pandaweshay|ruck: 10. not sure what you mean with "that" can be promoted all the way through18:45
weshay|rucklet's chat18:45
pandaweshay|ruck: we already had e03f7a0753e60c3263a20e4e6abf8512 promoted all the way through18:46
weshay|ruckhttps://meet.google.com/poc-dags-wfh?authuser=1 rlandy too18:46
weshay|ruckthat's more recent?18:46
pandayes18:46
weshay|ruckI just checked the last build18:46
weshay|ruckhuh /me looks again18:46
pandabut with reduced criteria18:46
weshay|ruckya..18:46
weshay|ruckpanda, I'd have to see what's passed all the jobs18:47
rlandyk18:47
weshay|ruckperiodic-tripleo-centos-8-master-containers-build-push  openstack/tripleo-ci  master  openstack-periodic-master  master  3423  2020-03-16T08:15:23  SUCCESS18:50
weshay|ruckrlandy, panda ^18:50
pandarlandy: https://trunk.rdoproject.org/api-centos8-master-uc/api/civotes_agg_detail.html?ref_hash=e03f7a0753e60c3263a20e4e6abf851218:52
pandahttps://trunk.rdoproject.org/centos8-master/panda-man/delorean.repo.md518:56
pandaweshay|ruck: rlandy ^18:56
rlandyweshay|ruck: https://review.rdoproject.org/r/#/c/25919/1/tags/ussuri-uc.yml19:09
rlandyweshay|ruck: https://review.opendev.org/#/c/713220/19:14
*** jbadiapa has quit IRC19:18
pandaweshay|ruck: https://review.rdoproject.org/r/2578219:24
amoralejrlandy, weshay|ruck i've sent a new ps for the networking-ovn-metadata-agent.ini, i'm testing it in https://review.rdoproject.org/r/#/c/24462 and https://review.rdoproject.org/r/2593719:29
rlandyamoralej: thanks19:29
amoraleji suspect it may miss something, i'll work with neutron guys tomorrow morning19:29
weshay|ruckthanks19:30
*** dpawlik has quit IRC19:32
*** dpawlik has joined #oooq19:32
*** amoralej is now known as amoralej|off19:40
weshay|ruckhrm.. rlandy, it still doesn't pick up whatever was run as testproject http://dashboard-ci.tripleo.org/d/UDA4H3aZk/component-pipeline?orgId=1&from=now-7d&to=now&fullscreen&panelId=42219:40
*** dpawlik has quit IRC19:41
weshay|ruckno passes on fs001 on 3/16 but we promoted19:41
weshay|ruck:(19:41
rlandyweird19:48
rlandylooking19:48
rlandychecking zuul ...19:49
rlandyweshay|ruck ...19:49
rlandy<rlandy> how did baremetal promote when it failed OVB?19:49
rlandy<chandankumar> rlandy, rerun the job19:49
rlandy<rlandy> chandankumar: ah19:49
rlandy<chandankumar> *rerunned19:49
*** ccamacho has quit IRC19:50
rlandyhttps://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001-baremetal-master19:50
rlandy^^ there is the pass19:50
weshay|ruckrlandy, hrm.. /me looks19:51
rlandy2020-03-16T07:07:4819:52
weshay|ruckah maybe because it's check19:52
* weshay|ruck queries again19:52
rlandyweshay|ruck: not such a big deal if cockpit doesn't pick it up19:52
rlandythe general cockpit tracking is the same19:52
rlandyyou have to search the hash19:52
rlandynot a problem19:53
weshay|ruckwell.. it's just another not very transparent thing19:53
weshay|ruckwe have soo many of those things19:53
rlandyweshay|ruck: ack - if we can but we are not below the current standard19:53
weshay|rucklolz19:54
weshay|ruckhrm.. https://trunk.rdoproject.org/api-centos8-master-uc/api/civotes_detail.html?commit_hash=123c6fc147cfdd8607a529799bfa8e800395bd02&distro_hash=40a480aaa83fac115ad82b8917f8280f88ad5cbe20:12
weshay|ruckmeh20:14
panda46b7 promoted.20:20
panda2020-03-16 20:17:03,054 14822 INFO     promoter Candidate hash 'aggregate: 46b71a6620e3372c998db7a694112fd2, commit: ee36da77f5d639ed985421d1a615dc497bb0ec7d, distro: b61bb6e4c4777dc0c28ab7df9b5e99fcfeb2b8fa, component: ui, timestamp: 1584346454': SUCCESSFUL promotion to current-tripleo20:21
panda2020-03-16 20:17:03,056 14822 INFO     promoter Summary: Promoted 1 hashes this round20:21
panda2020-03-16 20:17:03,056 14822 INFO     promoter ------- -------- Promoter terminated normally20:21
*** panda is now known as panda|off20:25
rlandynice20:32
weshay|ruckpromotion worked.. and I shut it down rlandy panda|off20:34
weshay|ruckrlandy, do I cry now or later https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo.md520:35
weshay|ruckpanda|off, def.. have a bug20:37
rlandywrong hash, right?20:39
weshay|ruckyup20:39
weshay|ruckall jobs will start to fail soon20:40
rlandyweshay|ruck: retag to panda_X20:40
rlandyand repromote, we can do that right?20:40
rlandyoh dear ...20:40
weshay|ruckrlandy, if you want to chat... we can start a meet20:41
weshay|ruckto fix this myself.. I have to pull all the containers20:41
rlandyweshay|ruck: k- let's chat - and see what's reasonable to do20:41
rlandycant we just tag an old hash as current-tripleo?20:42
rlandyweshay|ruck: https://meet.google.com/ggm-qzyn-jun20:43
weshay|ruckpanda|off, retag https://trunk.rdoproject.org/centos8-master/current-tripleo/delorean.repo.md5 e03f7a0753e60c3263a20e4e6abf851220:47
rlandy 46b71a6620e3372c998db7a694112fd220:48
rlandynot20:48
rlandy e03f7a0753e60c3263a20e4e6abf851220:48
weshay|ruckretag the containers tagged w/ 46b71a6620e3372c998db7a694112fd220:49
weshay|rucke03f7a0753e60c3263a20e4e6abf851220:49
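[Editor's note: the recovery being coordinated above is re-tagging the already-pushed containers from one aggregate hash to the other; the chat itself is still settling which hash should end up as current-tripleo. A rough sketch of the mechanism with podman; the registry, namespace and container list are hypothetical placeholders, not the promoter's actual configuration, and the direction shown simply follows the last two lines above.]

```python
# Sketch: re-tag container images from one promotion hash to another using podman.
# REGISTRY and CONTAINERS are placeholders; the real promoter drives this from its
# own config and the full container list.
import subprocess

SRC_HASH = "46b71a6620e3372c998db7a694112fd2"   # hash the containers are tagged with (from the log)
DST_HASH = "e03f7a0753e60c3263a20e4e6abf8512"   # hash to re-tag them to (from the log)
REGISTRY = "docker.io/tripleomaster"            # placeholder registry/namespace
CONTAINERS = ["centos-binary-base", "centos-binary-neutron-server"]  # placeholder subset

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for name in CONTAINERS:
    src = f"{REGISTRY}/{name}:{SRC_HASH}"
    dst = f"{REGISTRY}/{name}:{DST_HASH}"
    run("podman", "pull", src)
    run("podman", "tag", src, dst)
    run("podman", "push", dst)
```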
*** jmasud has quit IRC20:53
*** jmasud has joined #oooq20:56
*** sshnaidm has quit IRC21:37
*** jfrancoa has quit IRC21:40
*** sshnaidm has joined #oooq21:44
rlandyweshay|ruck: rerun ui component test - think it hit the current-tripleo promotion22:04
*** TrevorV has quit IRC22:04
rlandyreran22:04
weshay|ruckrlandy, oh.. meaning the ui component is using the most recent current-tripleo + ui component?22:05
rlandyweshay|ruck: ack22:05
*** ChanServ sets mode: +o panda|off22:46
*** panda|off changes topic to "Docs: https://docs.openstack.org/tripleo-quickstart/latest/ || 17th march promoter critical bugfix https://review.rdoproject.org/r/25938"22:47
*** panda|off sets mode: -o panda|off22:47
*** holser has joined #oooq23:45
*** jmasud has quit IRC23:53
*** jmasud has joined #oooq23:53
*** jmasud has quit IRC23:58
*** holser has quit IRC23:58
