Tuesday, 2021-03-23

*** tosky has quit IRC  00:09
*** sshnaidm is now known as sshnaidm|off  00:12
*** jmasud has joined #oooq  00:17
*** jmasud has quit IRC  01:23
*** jmasud has joined #oooq  01:26
*** jmasud has quit IRC  01:38
*** jmasud has joined #oooq  01:54
*** jmasud has quit IRC  02:05
*** jmasud has joined #oooq  03:02
*** apetrich has quit IRC  03:09
*** jmasud has quit IRC  03:17
*** skramaja has joined #oooq  03:58
*** ykarel has joined #oooq  04:28
*** jmasud has joined #oooq  04:33
*** jmasud has quit IRC  05:05
*** jmasud has joined #oooq  05:20
*** ykarel has quit IRC  05:23
*** ykarel has joined #oooq  05:25
*** udesale has joined #oooq  05:26
*** ysandeep|away is now known as ysandeep  05:52
<chkumar|ruck> ykarel: https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/782362 this one  05:59
<chkumar|ruck> to fix the quay dependency  05:59
<ykarel> chkumar|ruck, yes, I was looking at that one; I was trying to understand why it's not using the images list  06:00
<ykarel> and why it's failing with the trunk.rdo registry  06:00
<chkumar|ruck> ykarel: actually it contains the address of the quay one from tripleo-common  06:01
<chkumar|ruck> which does not exist locally, and that's why it is failing  06:01
<chkumar|ruck> we are re-constructing the non_tripleo_containers list from the failover  06:02
<chkumar|ruck> pull  06:02
<ykarel> why are we not fixing that, then?  06:02
<chkumar|ruck> then we need to do set_facts each time  06:04
<chkumar|ruck> ykarel: we can simply retrieve the image list  06:04
<chkumar|ruck> we have the non_tripleo containers list; we get the container name, get its full url from the image list, and push it  06:04
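
A minimal sketch of the approach chkumar|ruck describes above: look each non-tripleo container up in the full image list and push it to the content-provider registry, instead of hard-coding the quay.io address from tripleo-common. The file name, container names, and registry host below are illustrative assumptions, not the job's real variables:

    #!/bin/bash
    # Assumed inputs: built_images.txt holds one full image URL per line,
    # and the content-provider registry listens on localhost:5001.
    IMAGE_LIST="built_images.txt"
    PROVIDER_REGISTRY="localhost:5001"
    NON_TRIPLEO_CONTAINERS="ceph-daemon alertmanager prometheus"

    for name in $NON_TRIPLEO_CONTAINERS; do
        # Recover the full source URL for this container from the image list
        src=$(grep "/${name}:" "$IMAGE_LIST" | head -n1)
        if [ -z "$src" ]; then
            echo "WARNING: ${name} not in ${IMAGE_LIST}, skipping" >&2
            continue
        fi
        # skopeo copies registry-to-registry without a local pull/push cycle
        skopeo copy --dest-tls-verify=false \
            "docker://${src}" "docker://${PROVIDER_REGISTRY}/${src#*/}"
    done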
<ykarel> maybe i am looking at a different issue, re-checking  06:05
<chkumar|ruck> ykarel: https://ef9619c1056c463d9738-f79c70347d10b2f62e687e2ad6cae828.ssl.cf2.rackcdn.com/781815/1/check/tripleo-ci-centos-8-content-provider/9b5d9b2/job-output.txt  06:05
<chkumar|ruck> ykarel: [container-build : Pull non-tripleo containers (ceph, alertmanager, prometheus) to the content provider registry] ***  06:05
<chkumar|ruck> and the tasks above it  06:05
<chkumar|ruck> better, let me open a separate bug and link it  06:07
<ykarel> chkumar|ruck, i was looking at https://447f476af5555fa473a2-ba0bbef8fa5bd9d33ddbd8694210833c.ssl.cf5.rackcdn.com/781622/2/check/tripleo-ci-centos-8-content-provider/e902500/job-output.txt  06:08
<ykarel> which is different  06:08
<ykarel> but related to the fallback only  06:08
<ykarel> okay, that image doesn't exist in the trunk registry; is that handled manually and periodically?  06:10
<chkumar|ruck> ykarel: the image exists in the rdo registry, i think  06:10
<ykarel> i mean trunk.registry.rdoproject.org/ceph/grafana:6.7.4  06:10
<chkumar|ruck> we pushed it there  06:11
<ykarel> in master a different tag is used  06:11
<ykarel> so likely the new images are not pushed yet  06:11
<chkumar|ruck> oh no  06:12
<ykarel> and will this handle the issues in ovb jobs too?  06:12
<ykarel> likely not, as the fix is for provider jobs and ovb jobs don't enable provider jobs  06:12
<chkumar|ruck> ykarel: for ovb I need to check whether it is pulling from the rdo registry or not  06:13
<ykarel> it failed fetching from quay.io  06:13
<ykarel> https://review.rdoproject.org/zuul/builds?job_name=tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001  06:13
*** jpodivin has quit IRC  06:16
*** jpodivin has joined #oooq  06:19
<chkumar|ruck> ykarel: as a short-term solution, I am going to mirror the grafana image into our quay namespace  06:29
<ykarel> chkumar|ruck, is the issue in quay.io namespace-related?  06:29
<ykarel> i thought the issue was across quay.io  06:30
<chkumar|ruck> ykarel: they removed this image: quay.io/app-sre/grafana:6.7.4  06:31
<ykarel> chkumar|ruck, ack, and what's the reason for the removal?  06:33
<chkumar|ruck> ykarel: need to find out from the app-sre team within Red Hat  06:35
<ykarel> chkumar|ruck, ack, that would be good to know, to handle such cases in the future  06:36
<chkumar|ruck> ykarel: https://hub.docker.com/r/grafana/grafana there is no 6.7.4 tag there  06:38
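
Assuming skopeo is available, the tag check and the short-term mirror chkumar|ruck describes can be sketched as below; the source copy (the one pushed earlier to the RDO trunk registry, per the discussion above) and the destination namespace are assumptions for illustration:

    # list-tags shows what a registry still publishes; 6.7.4 is gone from both
    skopeo list-tags docker://quay.io/app-sre/grafana
    skopeo list-tags docker://docker.io/grafana/grafana

    # Mirror a surviving copy into a namespace we control
    DEST="quay.io/example-namespace/grafana:6.7.4"   # assumed namespace
    skopeo copy \
        docker://trunk.registry.rdoproject.org/ceph/grafana:6.7.4 \
        "docker://${DEST}"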
*** ratailor has joined #oooq  06:47
<ysandeep> folks o/ need reviews on https://review.rdoproject.org/r/c/rdo-jobs/+/32629  06:49
*** amoralej|off is now known as amoralej  07:01
<chkumar|ruck> arxcruz: hello, when around, ping me back  07:05
*** slaweq_ has joined #oooq  07:15
*** marios has joined #oooq  07:25
<chkumar|ruck> marios: Good morning, the content provider for all releases is busted, as the grafana image was removed from quay: https://launchpad.net/bugs/1920873 series of patches: https://review.opendev.org/q/e0b0d859efb179c6868020b3fc10c2155ed96820  07:32
<openstack> Launchpad bug 1920873 in tripleo "quay.io/app-sre/grafana:6.7.4 went missing from quay leading to fail all content provider jobs" [Critical,Triaged]  07:32
*** jmasud has quit IRC  07:32
<marios> ack chkumar|ruck  07:34
<marios> chkumar|ruck: do the tests need an update?  07:36
<chkumar|ruck> marios: please have a look at patchset 2  07:38
*** jmasud has joined #oooq  07:41
<marios> chkumar|ruck: oh, i see you already updated it and we are waiting for zuul to report on v2, thanks https://zuul.openstack.org/status#782366  07:43
*** ysandeep is now known as ysandeep|lunch  07:46
*** slaweq_ is now known as slaweq  07:53
*** apetrich has joined #oooq  08:14
*** jmasud has quit IRC  08:25
<chkumar|ruck> marios: against the ussuri and victoria tripleo-common patches, we still run the train and ussuri content providers due to the upgrade jobs. Do we want to tighten that up, or remove the old-release content provider job there?  08:33
<chkumar|ruck> the grafana patch for the stable release is not going to land  08:33
*** sshnaidm|off has quit IRC  08:34
<marios> chkumar|ruck: we shouldn't be; let me check what's going on there, thanks  08:36
<marios> chkumar|ruck: why "10:33 < chkumar|ruck> grafana patch for stable release is not going land"?  08:36
<marios> chkumar|ruck: in the victoria patch i only see the ussuri content provider? https://zuul.openstack.org/status#782367  08:37
<marios> chkumar|ruck: /me wipes eyes  08:37
<marios> am i missing it?  08:37
<marios> chkumar|ruck: and in ussuri we have the train content provider https://zuul.openstack.org/status#782127  08:38
* marios widens eyes, stares at the screen  08:38
<zbr|rover> chkumar|ruck: marios: low-hanging https://review.opendev.org/c/openstack/tripleo-heat-templates/+/780854  08:40
<marios> zbr|rover: what kind of fruit is it? i prefer passion fruit in the morning  08:40
<zbr|rover> durian  08:40
<marios> i am guessing, without the google search, that's the stinky one, right?  08:41
<zbr|rover> yeah  08:41
<marios> thanks  08:41
<marios> :(  08:41
<zbr|rover> and believe me, it stinks. In Asia you can see signs in hotels and other areas with "Durian not allowed", near the no-smoking ones.  08:42
<zbr|rover> but it is also tasty, if you can get past the stink...  08:42
<zbr|rover> what really happened overnight with quay? i've seen the emails; how can an image vanish?  08:43
<chkumar|ruck> marios: yes, in the victoria patch there are victoria + ussuri content providers, and the ussuri patch has ussuri + train content providers  08:48
<marios> chkumar|ruck: so what is the problem?  08:48
<marios> chkumar|ruck: this is fine. before, we had all the content providers in each patch  08:48
<chkumar|ruck> in the victoria patch, the ussuri content provider is going to fail due to grafana  08:48
<marios> chkumar|ruck: these are the required ones. we deploy n-1 and upgrade to n  08:48
<marios> chkumar|ruck: "10:33 < chkumar|ruck> marios: against ussuri and victoria tripleo common patches, we are still run train and ussuri content provider due to upgrade jobs, Do we want to tighten that up? or remove the content old release provider job there"  08:49
<marios> chkumar|ruck: referring to that? ^ what can we tighten up? it is ok  08:49
<chkumar|ruck> marios: ok, but my question is how to land https://review.opendev.org/c/openstack/tripleo-common/+/782367/ when the ussuri content provider will fail  08:50
<marios> chkumar|ruck: so we need that back to train  08:51
<marios> chkumar|ruck: i think we can merge the train one because there is no upgrade job there, only update  08:51
<marios> chkumar|ruck: then use that to merge ussuri  08:51
<marios> chkumar|ruck: then victoria  08:51
<marios> chkumar|ruck: maybe ... thinking  08:51
<chkumar|ruck> marios: ok, that strategy will work  08:51
<chkumar|ruck> marios: let me propose the train one  08:51
<marios> chkumar|ruck: ack  08:51
<zbr|rover> marios: regarding the lack of details in reviews, i already got some remarks from infra regarding the use of internal issue tracking... so I stopped putting links.  08:53
<zbr|rover> weshay|ruck: add the goal to prepare tripleo repos for py39; this means doing whatever it takes to run at least one tox job using py39 on each of the repos that have python code.  08:54
<marios> zbr|rover: "please add more than 7 words" != "please add link to issue tracker" :)  08:54
<zbr|rover> yeah, i am cheap on words :D  08:55
<marios> zbr|rover: wrt issue trackers, yes, i always call that out; we can't do that until it becomes publicly available  08:55
<zbr|rover> there is one aspect we need to discuss: sometimes we need to combine unrelated changes, like https://review.opendev.org/c/openstack/tripleo-heat-templates/+/780854/1/setup.cfg  08:56
<zbr|rover> because our testing pipelines are so slow, it would be impossible to update all repos in purely "atomic" changes (not enough people and hw resources).  08:57
*** ysandeep|lunch is now known as ysandeep  08:57
*** jpenag is now known as jpena  08:57
*** ykarel is now known as ykarel|lunch  08:57
<zbr|rover> so I hope the team will be ok with bundling a *few* low-risk changes, like fixing the warning about using underscores in setup.cfg  08:58
<zbr|rover> setuptools deprecated dashes and announced they will be removed in the future; for now it is a warning, but at any moment it may become an error.  08:58
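
The deprecation zbr|rover mentions is setuptools warning about dashed keys in setup.cfg metadata; the fix is mechanical, dash to underscore. A sketch of finding and rewriting the usual offenders, assuming the common key names (each repo's setup.cfg may have others):

    # Show dashed keys that setuptools will warn about (left side of "=")
    grep -nE '^[a-z]+(-[a-z]+)+\s*=' setup.cfg

    # Rewrite in place, e.g. description-file -> description_file
    sed -i -e 's/^description-file/description_file/' \
           -e 's/^author-email/author_email/' \
           -e 's/^home-page/home_page/' setup.cfg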
<marios> zbr|rover: if only there were some space provided in the review where you could add such context!  08:59
<marios> zbr|rover: we should put in a feature request  08:59
<zbr|rover> point taken, i will update the commit message  08:59
<chkumar|ruck> marios: https://review.opendev.org/c/openstack/tripleo-common/+/782379 the train one  09:00
<zbr|rover> it would be so cool to be able to edit the message without having to rebuild  09:00
<marios> chkumar|ruck: ack, thanks  09:02
<marios> zbr|rover: thanks :) ... well, it would have to detect that there is no depends-on, i guess, to allow skipping the rebuild  09:02
*** derekh has joined #oooq  09:02
*** tosky has joined #oooq  09:04
*** saneax has quit IRC  09:04
<zbr|rover> marios: regarding openstack-python3-xena-jobs -- I do not really know. The bad part is that xena has py39 only in check as non-voting, clearly less than what we want. My impression was that we want it voting.  09:08
<zbr|rover> on the other hand, I know for sure that we do *NOT* want py35  09:09
<zbr|rover> It would make sense to replace openstack-python3-victoria-jobs with openstack-python3-xena-jobs  09:11
<marios> zbr|rover: ack, ok. i wonder if they will update the -wallaby one to include py39 anyway  09:11
<zbr|rover> i will try to switch to xena as you asked; it seems a reasonable approach, with the potential of saving on maintenance in the longer term  09:13
*** holser has joined #oooq  09:14
<chkumar|ruck> kopecmartin: currently the content provider jobs are busted  09:16
<zbr|rover> marios: does the new version of the patch look ok? i see it did start the expected list of jobs.  09:17
<marios> zbr|rover: ack, will check but not immediately; thanks for the update  09:19
*** dtantsur has joined #oooq  09:42
*** jmasud has joined #oooq  09:49
<ysandeep> chkumar|ruck, ykarel hey o/ how do we create https://images.rdoproject.org/centos8/wallaby ?  09:50
*** jmasud has quit IRC  09:51
*** saneax has joined #oooq  09:57
*** ykarel|lunch is now known as ykarel  10:01
*** sanjayu_ has joined #oooq  10:03
*** saneax has quit IRC  10:05
<ykarel> ysandeep, iirc that used to be created manually, jpena ^?  10:10
<jpena> hm, I'm not sure. I can create it right away  10:10
<jpena> done  10:12
*** sshnaidm has joined #oooq  10:13
<ysandeep> ykarel++ jpena++ thanks  10:13
*** sshnaidm is now known as sshnaidm|off  10:13
*** tosky has quit IRC  10:13
*** tosky has joined #oooq  10:14
<ykarel> i checked the upload script and it seems it expects the directory to be present  10:15
<ykarel> config/ci-scripts/tripleo-upstream/upload-cloud-images.sh is the one  10:15
<ysandeep> ykarel, after the directory creation, what calls that script to upload the images? is it in the image build job definitions?  10:20
<ykarel> yes  10:20
<ysandeep> ykarel, ack.. i had a green run on the image build jobs.. i will trigger them  10:21
<ysandeep> periodic-tripleo-centos-8-buildimage-ironic-python-agent-wallaby https://review.rdoproject.org/zuul/build/e4852e0bd4b24128b258a7816fab6845 : SUCCESS in 23m 42s  10:21
<ykarel> okk, rerun: rsync: mkdir "/var/www/html/images/centos8/wallaby/rdo_trunk" failed: No such file or directory (2)  10:22
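
The rsync failure above is just the missing target tree on the image server. A sketch of the manual fix, assuming shell access to the images.rdoproject.org host; the path comes from the rsync error, and copying ownership from a sibling release directory is an assumption about the layout:

    # On the image server; path taken from the rsync error message
    sudo mkdir -p /var/www/html/images/centos8/wallaby/rdo_trunk
    # Match ownership of an existing release directory (assumed to exist)
    sudo chown -R --reference=/var/www/html/images/centos8/victoria \
        /var/www/html/images/centos8/wallaby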
<ysandeep> ack o/  10:27
<ysandeep> chkumar|ruck, I am hitting this failure in wallaby; have you noticed a similar issue in master/any other branch recently?  11:21
<ysandeep> https://logserver.rdoproject.org/58/28458/56/check/periodic-tripleo-ci-centos-8-standalone-wallaby/187f01b/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz  11:21
<ysandeep> error={"ansible_loop_var": "item", "changed": true, "cmd": "podman exec -u root swift_object_expirer /usr/local/bin/kolla_set_configs", "delta": "0:00:00.082147", "end": "2021-03-23 09:43:21.439881", "failed_when_result": true, "item": "swift_object_expirer", "msg": "non-zero return code", "rc": 126, "start": "2021-03-23 09:43:21.357734", "stderr": "Error: cannot exec into container that is not running: container state improper",  11:22
<ysandeep> "stderr_lines": ["Error: cannot exec into container that is not running: container state improper"], "stdout": "", "stdout_lines": []}  11:22
<ysandeep> hmm, /usr/local/bin/kolla_start: line 18: /usr/bin/swift-object-expirer: No such file or directory  11:29
<ysandeep> https://logserver.rdoproject.org/58/28458/57/check/periodic-tripleo-ci-centos-8-standalone-wallaby/986ecac/logs/undercloud/var/log/extra/podman/containers/swift_object_expirer/stdout.log.txt.gz ^^  11:29
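
The rc 126 / "container state improper" part is only a symptom: the container died at startup, so the later podman exec has nothing to exec into; the kolla_start line above is the real error. A sketch of confirming that on the host, assuming only the container name from the log:

    # The container should show up as exited, not running
    sudo podman ps -a --filter name=swift_object_expirer

    # Its startup error lands in the container output
    sudo podman logs swift_object_expirer | tail -n 20

    # Check whether the binary kolla_start wants exists in the image at all
    img=$(sudo podman ps -a --filter name=swift_object_expirer --format '{{.Image}}')
    sudo podman run --rm --entrypoint /bin/ls "$img" -l /usr/bin/swift-object-expirer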
<chkumar|ruck> ysandeep: check the dist-git of swift  11:29
* ysandeep looking  11:30
<chkumar|ruck> ysandeep: https://github.com/rdo-packages/swift-distgit/blob/rpm-master/openstack-swift.spec  11:31
<chkumar|ruck> the man dir exists but the binary is missing there  11:31
<chkumar|ruck> ysandeep: https://github.com/rdo-packages/swift-distgit/commit/0014419a4958e39c652a74d6cff8f46464bce8ad  11:32
<chkumar|ruck> ysandeep: can you check whether the swift component promoted or not?  11:33
<ysandeep> chkumar|ruck, https://trunk.rdoproject.org/centos8-wallaby/component/swift/ tripleo-ci-testing and current/consistent are the same  11:34
*** rlandy has joined #oooq  11:35
<chkumar|ruck> ysandeep: the binary is also there https://github.com/rdo-packages/swift-distgit/blob/rpm-master/openstack-swift.spec#L554  11:40
<chkumar|ruck> can you pull the package version which is getting installed, or check the swift component repo used there?  11:40
<ysandeep> chkumar|ruck, ack  11:41
<rlandy> chkumar|ruck: took a shot at the internal providers - we kind of need a multiple-inheritance situation  12:11
<rlandy> need to rework the parenting to avoid copying the playbooks  12:11
<chkumar|ruck> marios: rlandy: weshay|ruck please do not approve any upstream patches until this lands: https://review.opendev.org/c/openstack/tripleo-common/+/782366  12:12
<chkumar|ruck> rlandy: sure, I did not get a chance to go over that patch; quay ruined my day  12:13
<rlandy> chkumar|ruck: no worries - I'll ping you when I have another solution  12:18
<chkumar|ruck> rlandy: marios arxcruz I have moved tripleo-repos to tomorrow, as sshnaidm|off has a public holiday there  12:21
<rlandy> chkumar|ruck: ack - elections there  12:21
<chkumar|ruck> akahat: rlandy arxcruz regarding the promoter, we can meet early, just after the community call, if free.  12:21
<rlandy> chkumar|ruck: ok by me  12:22
<arxcruz> ok  12:22
<akahat> chkumar|ruck, np.  12:22
<rlandy> ysandeep: need reviews on any wallaby patches?  12:26
<ysandeep> rlandy, https://review.rdoproject.org/r/c/rdo-jobs/+/32629  12:26
<rlandy> ysandeep: ok to merge ^^?  12:28
<ysandeep> rlandy, yes  12:28
<marios> ack chkumar|ruck  12:29
<ysandeep> bhagyashris, rlandy, marios: wallaby branching mtg  12:30
<marios> ysandeep: joining  12:31
<pojadhav> ysandeep, bhagyashris is on sick leave  12:31
*** jpena is now known as jpena|lunch  12:31
*** amoralej is now known as amoralej|lunch  12:32
<weshay|ruck> in mtgs  12:33
*** ykarel has quit IRC  12:38
*** ykarel has joined #oooq  12:38
<chkumar|ruck> zbr|rover: please have a look at this review: https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/782362  12:41
<marios> sshnaidm|off: weshay|ruck: rlandy: *** when you next have time, grateful for eyes/opinions @ https://review.opendev.org/c/openstack/tripleo-ci/+/779992 ( context https://review.opendev.org/c/openstack/tripleo-ci/+/779992/1/zuul.d/multinode-jobs.yaml#125 ) thank you  12:43
*** ykarel_ has joined #oooq  12:44
*** ykarel has quit IRC  12:47
<rlandy> marios: ysandeep: frenzy_friday: pojadhav: akahat: chkumar|ruck: weshay|ruck: arxcruz: zbr|rover: community call in ~30 mins ... pls add agenda items .. https://hackmd.io/MMg4WDbYSqOQUhU2Kj8zNg#2021-03-23-Community-call  12:52
<marios> thanks rlandy  12:53
<ysandeep> marios frenzy_friday fyi.., rlandy - your guess was right.. that error was hidden in the master component line (swift)  13:06
<ysandeep> chkumar|ruck, it's hitting in master as well.. i think it's worth writing a bug for master https://logserver.rdoproject.org/openstack-component-swift/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-scenario002-standalone-swift-master/08abce7/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz  13:07
<ysandeep> ^^ swift component line  13:07
<chkumar|ruck> ysandeep: on it  13:08
<rlandy> ysandeep: at least there is an explanation :)  13:09
<rlandy> ysandeep: seen this error before in downstream? http://pastebin.test.redhat.com/949709  13:15
<ysandeep> rlandy, nope  13:16
<marios> ack ysandeep  13:16
<rlandy> ysandeep: no worries - seems to be working now .. latest run ok  13:17
*** ykarel_ is now known as ykarel  13:17
<ysandeep> rlandy, fyi.. the patch to switch the periodic version of check jobs to the non-periodic version is almost complete.. it's blocked because of the 17 deps issue.. the container build is failing there  13:18
<chkumar|ruck> ysandeep: https://bugs.launchpad.net/tripleo/+bug/1920924  13:19
<openstack> Launchpad bug 1920924 in tripleo "/usr/bin/swift-object-expirer: No such file or directory while deploying standalone" [High,Triaged]  13:19
<rlandy> ysandeep: k - i have something of a duplicate patch to get provider jobs in  13:19
<ysandeep> chkumar|ruck, ack o/  13:19
<rlandy> https://code.engineering.redhat.com/gerrit/#/c/232088/2/zuul.d/standalone-jobs.yaml  13:19
<rlandy> we can sync up  13:19
<weshay|ruck> chkumar|ruck, zbr|rover do you folks have a handle on the c8-train failures? Looks like 1 or 2 tempest tests are holding us back  13:20
*** ykarel_ has joined #oooq  13:22
<ysandeep> rlandy, ack.. mine was https://code.engineering.redhat.com/gerrit/#/c/231711/  13:22
<rlandy> ysandeep: k - np - can rebase  13:22
<ysandeep> rlandy, you go ahead with your patch.. i will just create the check version of the image build job.. i will remove the container build job  13:23
<ysandeep> from my patch  13:23
<chkumar|ruck> weshay|ruck: links please, and add the tasks; will look into that  13:24
<weshay|ruck> k  13:24
*** ykarel has quit IRC  13:24
*** ykarel_ has quit IRC  13:26
<chkumar|ruck> ysandeep: where do we look up the swift rpm version in the standalone job?  13:26
<ysandeep> chkumar|ruck, for wallaby, I looked in the container build job, not standalone  13:28
<chkumar|ruck> ysandeep: ok  13:28
<ysandeep> in the component line i think we are updating containers.. but not capturing the rpm package info from inside the containers  13:29
<ysandeep> weshay|ruck, our 1:1 conflicts with the tripleo-ci community call  13:30
<zbr|rover> what was the podman replacement for docker-compose?  13:30
*** amoralej|lunch is now known as amoralej  13:33
*** jpena|lunch is now known as jpena  13:34
*** ratailor has quit IRC  13:35
<zbr|rover> nevermind, found it, and sorted: https://github.com/containers/podman-compose/issues/151  13:39
<zbr|rover> as long as the project does not have a release on pypi, i would not bother.  13:39
<zbr|rover> i tried the devel version and discovered that it does not support remote connections.  13:40
<weshay|ruck> soniya29, are you hitting a particular error?  13:40
<soniya29> weshay|ruck, yeah  13:40
<weshay|ruck> soniya29, can you please pastebin it  13:41
<soniya29> weshay|ruck, i want to spawn a local vm but each time i try, the installation fails  13:41
<soniya29> weshay|ruck, i don't think we can copy within a vm  13:41
<weshay|ruck> soniya29, k.. that's odd.. you actually don't need to install the os  13:42
<weshay|ruck> if you import a qcow2 file into your kvm images path  13:42
<soniya29> weshay|ruck, i tried with qcow2 as well  13:42
<soniya29> weshay|ruck, no luck  13:43
<chkumar|ruck> weshay|ruck: do you know where the logs of the package update within containers are collected in standalone, done via the container prep parameter?  13:44
<weshay|ruck> soniya29, on your laptop, ya?  13:44
<chkumar|ruck> for example, this one https://logserver.rdoproject.org/openstack-component-swift/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-scenario002-standalone-swift-master/08abce7/logs/undercloud/home/zuul/  13:44
<soniya29> weshay|ruck, yeah  13:45
<weshay|ruck> soniya29, same issue on your test box? rdo-ci-fx2-03-s2-drac.mgmt.rdoci.lab.eng.rdu2.redhat.com ?  13:45
<weshay|ruck> chkumar|ruck, so the update is no longer done in child jobs  13:46
<weshay|ruck> chkumar|ruck, we build the containers initially w/ the change  13:46
<weshay|ruck> chkumar|ruck, so the containers are NOT updated if they are using a content provider  13:46
<chkumar|ruck> weshay|ruck: the job is running in the component pipeline  13:46
<weshay|ruck> ah..  13:47
<weshay|ruck> k  13:47
<weshay|ruck> so.. it comes from  13:47
<weshay|ruck>   modify_role: tripleo-modify-image  13:47
<weshay|ruck>   modify_vars:  13:47
<weshay|ruck>     tasks_from: yum_update.yml  13:47
<weshay|ruck>     update_repo: 'delorean-current,swift '  13:47
<weshay|ruck>     yum_repos_dir_path: /etc/yum.repos.d  13:47
<chkumar|ruck> weshay|ruck: https://logserver.rdoproject.org/openstack-component-swift/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-scenario002-standalone-swift-master/08abce7/logs/undercloud/home/zuul/containers-prepare-parameters.yaml.txt.gz  13:47
* weshay|ruck looks for the log  13:47
<weshay|ruck> chkumar|ruck, https://logserver.rdoproject.org/openstack-component-swift/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-scenario002-standalone-swift-master/08abce7/logs/undercloud/var/log/tripleo-container-image-prepare.log.txt.gz  13:49
<weshay|ruck> look for yum_update.yml  13:49
<weshay|ruck> chkumar|ruck, you should find one yum transaction in that log  13:50
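
A quick way to run that check against the downloaded log; the file name matches the link above, and the "swift" repo id comes from the component repo shown just below. The exact transaction markers in the log are an assumption:

    # Locate where the modify role ran its update step
    grep -n 'yum_update' tripleo-container-image-prepare.log

    # Then look for the actual transaction near those hits: upgrade/install
    # markers and packages from the component repo
    grep -nE 'Upgraded|Installed|openstack-swift' tripleo-container-image-prepare.log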
<weshay|ruck> chkumar|ruck, this is a concern  13:51
<weshay|ruck>     update_repo: 'delorean-current,swift '  13:51
<weshay|ruck> it should have one more repo  13:51
<weshay|ruck> chkumar|ruck, right.. we should see https://logserver.rdoproject.org/openstack-component-swift/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-scenario002-standalone-swift-master/08abce7/logs/undercloud/etc/yum.repos.d/swift-component.repo.txt.gz  13:52
<chkumar|ruck> weshay|ruck: I went through that log  13:52
<weshay|ruck> no transactions  13:52
<chkumar|ruck> weshay|ruck: I am looking for the specific container logs  13:52
<weshay|ruck> the component update repo is not listed in the yaml  13:53
<weshay|ruck> rlandy, that's how it should work, right?  13:53
* rlandy reads back  13:53
<rlandy> it should be added in all component jobs  13:54
<weshay|ruck> yes  13:54
<weshay|ruck> but I don't see it  13:54
<rlandy> weshay|ruck: ^^ the component repo should be in the repo update  13:54
* rlandy looks  13:54
<weshay|ruck> rlandy, sorry  13:55
<weshay|ruck> it's called swift  13:55
<rlandy> definitely a problem  13:55
<weshay|ruck> it's there  13:55
<weshay|ruck> false alarm  13:55
<weshay|ruck> I can't read  13:55
<rlandy> breathe, breathe  13:56
<weshay|ruck> rlandy, :)  13:56
<weshay|ruck> sorry sorry  13:56
<rlandy> weshay|ruck: np :)  13:56
<weshay|ruck> chkumar|ruck, https://logserver.rdoproject.org/openstack-component-swift/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-scenario002-standalone-swift-train/0138736/logs/undercloud/etc/yum.repos.d/swift-component.repo.txt.gz  13:56
<weshay|ruck> [swift]  13:56
*** sanjayu_ has quit IRC  13:58
<weshay|ruck> chkumar|ruck, afaict..  13:59
<weshay|ruck> https://trunk.rdoproject.org/centos8-master/component/swift/component-ci-testing/  13:59
<weshay|ruck> https://trunk.rdoproject.org/centos8-master/component/swift/current-tripleo/  13:59
<weshay|ruck> we should see an update  13:59
<weshay|ruck> chkumar|ruck, looking at the container logs  14:00
<weshay|ruck> https://logserver.rdoproject.org/openstack-component-swift/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-scenario002-standalone-swift-master/08abce7/logs/undercloud/var/log/extra/podman/containers/swift_container_server/podman_info.log.txt.gz  14:00
<weshay|ruck> openstack-swift-container.noarch               2.27.1-0.20210322142351.7e27829.el8      @swift  14:00
<weshay|ruck> chkumar|ruck, the swift containers were updated  14:00
<weshay|ruck> chkumar|ruck, so perhaps the containers-update playbook in tripleo-common just does a shitty job of logging  14:01
<weshay|ruck> chkumar|ruck, see that?  14:01
<chkumar|ruck> yes  14:02
<chkumar|ruck> arxcruz: akahat rlandy want to start the promoter meeting now?  14:03
<chkumar|ruck> weshay|ruck: thanks!  14:04
<akahat> chkumar|ruck, i'm okay  14:04
<rlandy> ok  14:04
<chkumar|ruck> akahat: rlandy arxcruz weshay|ruck https://meet.google.com/pqq-cxwo-wym?authuser=0 promoter meeting  14:05
<chkumar|ruck> akahat: arxcruz joining?  14:06
<chkumar|ruck> arxcruz: we are waiting for you  14:08
*** ysandeep is now known as ysandeep|dinner  14:20
*** gchamoul has quit IRC  14:32
*** gchamoul has joined #oooq  14:44
*** jlarriba has joined #oooq  15:02
<jlarriba> weshay|ruck: tripleo-quickstart is failing in CI due to an authentication error on quay.io, have you seen it?  15:04
<jlarriba> akahat: ^  15:06
<akahat> jlarriba, o/ can you please point to some logs?  15:07
<akahat> chkumar|ruck, zbr|rover ^^  15:07
<weshay|ruck> jlarriba, we've had a number of issues w/ quay.. I emailed the list and there are some launchpad bugs on it  15:08
<weshay|ruck> jlarriba, can you paste the log?  15:09
<jlarriba> akahat, the error is: Container image prepare | undercloud | error={"changed": false, "error": "401 Client Error: UNAUTHORIZED for url: https://quay.io/v2/app-sre/grafana/manifests/6.7.4", "msg": "Error running container image prepare: 401 Client Error: UNAUTHORIZED for url: https://quay.io/v2/app-sre/grafana/manifests/6.7.4", "params": {}, "success": false}  15:09
<jlarriba> weshay: ^  15:09
<weshay|ruck> jlarriba, paste the log url please  15:09
<jlarriba> yeah, one sec  15:10
<chkumar|ruck> jlarriba: https://bugs.launchpad.net/tripleo/+bug/1920873  15:15
<openstack> Launchpad bug 1920873 in tripleo "quay.io/app-sre/grafana:6.7.4 went missing from quay leading to fail all content provider jobs" [Critical,Triaged]  15:15
<jlarriba> chkumar|ruck, thanks, that is the problem  15:16
<jlarriba> weshay|ruck: https://zuul.opendev.org/t/openstack/build/2cca4dbc36c94da0a16fb98b12d8fbfd  15:16
<jlarriba> akahat: ^  15:17
<jlarriba> and the more detailed errors are in https://17c296bb94b1fb2ecf41-ea9f828f8eee3ae0801a665a4594e601.ssl.cf1.rackcdn.com/772893/7/check/tripleo-ci-centos-8-content-provider/2cca4db/job-output.txt  15:17
<weshay|ruck> jlarriba, thanks  15:18
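
One confusing bit in this log: quay answered 401 UNAUTHORIZED rather than 404 for the deleted image, so a missing manifest reads like an auth failure in the prepare output. A sketch of confirming the image is simply gone, using the name from the error:

    # skopeo surfaces the registry's reply directly
    skopeo inspect docker://quay.io/app-sre/grafana:6.7.4

    # Or hit the manifest URL from the error message; the status here does
    # not depend on credentials for a removed public image
    curl -sI https://quay.io/v2/app-sre/grafana/manifests/6.7.4 | head -n1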
*** ysandeep|dinner is now known as ysandeep|away  15:21
<zbr|rover> i am back, was in the e-r meeting with frenzy_friday  15:27
*** jmasud has joined #oooq  15:35
*** owalsh has quit IRC  15:35
*** owalsh has joined #oooq  15:36
*** apetrich has quit IRC  15:36
*** marios has quit IRC  15:37
*** irclogbot_0 has quit IRC  15:37
*** marios has joined #oooq  15:37
*** irclogbot_1 has joined #oooq  15:38
*** irclogbot_1 has quit IRC  15:49
*** irclogbot_0 has joined #oooq  15:52
*** amoralej is now known as amoralej|off  16:04
*** skramaja has quit IRC  16:08
*** jmasud has quit IRC  16:49
<rlandy> weshay|ruck: chkumar|ruck: ok if I try to reboot the dns server in vexxhost?  16:50
<rlandy> no response on port 53, according to ade  16:50
<weshay|ruck> rlandy, sure  16:50
<weshay|ruck> rlandy, try and get on the console first  16:50
<weshay|ruck> then reboot  16:50
<rlandy> weshay|ruck: good idea - seeing if I can  16:50
<weshay|ruck> or just try to restart the service as well  16:50
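
A sketch of that triage order (check, restart, and only then reboot); the daemon name "named" is an assumption, since the log never says which DNS server the vexxhost box runs, and the IP comes from the dig output further down:

    # On the DNS server, via console or SSH:
    sudo ss -lun 'sport = :53'        # anything listening on 53/udp?
    sudo systemctl status named       # assumed service name
    sudo systemctl restart named

    # Verify from outside before resorting to a full reboot
    dig @38.102.83.187 docker.io +time=3 +tries=1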
*** marios is now known as marios|out  16:51
<chkumar|ruck> rlandy: ack  16:52
<rlandy> ha - on the node  16:53
*** marios|out has quit IRC  16:56
<rlandy> Active: active (running) since Thu 2021-02-11 20:21:59 UTC; 1 months 9 days ago  16:58
<rlandy> the service is running  16:58
<rlandy> will try a restart  16:58
<chkumar|ruck> weshay|ruck: please mark the quay.io bug as invalid  17:00
*** jmasud has joined #oooq  17:01
<rlandy> ; <<>> DiG 9.11.20-RedHat-9.11.20-5.el8_3.1 <<>> @38.102.83.187 docker.io  17:01
<rlandy> ; (1 server found)  17:01
<rlandy> ;; global options: +cmd  17:01
<rlandy> ;; connection timed out; no servers could be reached  17:01
<rlandy> weshay|ruck: chkumar|ruck: ^^ no luck restarting the service - trying a node reboot now  17:02
<rlandy> hmmm ... need to edit the /etc/resolv.conf on that box - pinging infra  17:09
*** jmasud has quit IRC  17:20
*** jpodivin has quit IRC  17:26
*** udesale has quit IRC  17:30
*** derekh has quit IRC  18:03
*** jpena is now known as jpena|off  18:05
<rlandy> weshay|ruck: can I tear down the two candidate instances on vexx?  18:10
*** frenzy_friday is now known as frenzyfriday|bbl  18:15
<weshay|ruck> rlandy, ya  18:39
*** dtantsur is now known as dtantsur|afk  18:55
*** dtantsur|afk is now known as dtantsur|afk|afk  18:55
*** dtantsur|afk|afk is now known as dtantsur|afk  18:55
<rlandy> weshay|ruck: hey - ovb test with the newer centos 8 stream image ...  19:09
<rlandy> https://logserver.rdoproject.org/53/18953/43/check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/079e4c0/logs/undercloud/home/zuul/build.log.txt.gz  19:09
<rlandy> image built  19:09
<rlandy> deploy failure  19:09
<rlandy> but that may be a master issue  19:09
* weshay|ruck looks  19:09
<rlandy> FATAL | Container image prepare | undercloud | error={"changed": false, "error": "401 Client Error: UNAUTHORIZED for url: https://quay.io/v2/app-sre/grafana/manifests/6.7.4", "msg": "Error running container image prepare: 401 Client Error: UNAUTHORIZED for url: https://quay.io/v2/app-sre/grafana/manifests/6.7.4", "params": {}, "su  19:10
<rlandy> https://logserver.rdoproject.org/53/18953/43/check/tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001/079e4c0/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz  19:10
<weshay|ruck> rlandy, fs001 is back to green https://review.rdoproject.org/zuul/builds?job_name=tripleo-ci-centos-8-ovb-1ctlr_1comp-featureset001&job_name=tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001  19:10
<weshay|ruck> oh..  19:11
<weshay|ruck> 2021-03-23 00:03:23 | 2021-03-23 00:03:23.603981 | fa163eef-5567-e0e9-27e9-000000003242 |      FATAL | Container image prepare | undercloud | error={"changed": false, "error": "401 Client Error: UNAUTHORIZED for url: https://quay.io/v2/app-sre/grafana/manifests/6.7.4", "msg": "Error running container image prepare: 401 Client Error: UNAUTHORIZED for url: https://quay.io/v2/app-sre/grafana/manifests/6.7.4", "params": {}, "success": false}  19:11
<weshay|ruck> ya..  19:11
<weshay|ruck> sorry.. looked at the wrong log  19:11
<weshay|ruck> rlandy, kick it again.. it should work  19:11
<weshay|ruck> or at least get past that  19:11
<rlandy> weshay|ruck: ack - rekicked  19:14
*** frenzyfriday|bbl is now known as frenzy_friday  19:46
*** jmasud has joined #oooq  20:17
<rlandy> weshay|ruck: do you remember some talk about multiple inheritance in zuul?  20:56
<rlandy> maybe I'm dreaming  20:56
<rlandy> provider jobs downstream could benefit from it  20:57
<weshay|ruck> not really  20:57
<rlandy> k - qualifying 17 on 8.4  21:00
*** jmasud has quit IRC  21:02
*** jfrancoa has joined #oooq  21:17
*** jmasud has joined #oooq  21:38
*** jbadiapa has quit IRC  21:45
*** slaweq has quit IRC  21:52
*** slaweq has joined #oooq  21:54
*** jfrancoa has quit IRC  22:09
*** jfrancoa has joined #oooq  22:11
*** jfrancoa has quit IRC  22:28
*** jmasud has quit IRC  22:55
*** slaweq has quit IRC  23:05
*** slaweq has joined #oooq  23:06
<rlandy> weshay|ruck: provider jobs never report to dlrn, right?  23:08
<rlandy> weshay|ruck: ysandeep|away: chkumar|ruck: fyi ... https://code.engineering.redhat.com/gerrit/232278 Add base provider internal job  23:16
<rlandy> it's a config job  23:16
<rlandy> it will need to merge to test it out  23:16
<rlandy> have to parent from upstream to avoid dup'ing the playbooks called  23:16
*** jmasud has joined #oooq  23:21
*** rlandy has quit IRC  23:53
