Monday, 2019-12-02

*** tosky has quit IRC00:26
*** ysandeep has joined #oooq01:24
*** apetrich has quit IRC03:08
*** ysandeep has quit IRC03:22
*** ysandeep has joined #oooq03:34
*** bhagyashris has joined #oooq04:04
*** skramaja has joined #oooq04:15
*** ykarel has joined #oooq04:20
*** udesale has joined #oooq04:37
*** surpatil has joined #oooq05:12
*** soniya29 has joined #oooq05:17
*** epoojad1 has joined #oooq05:22
*** raukadah is now known as chkumar|rover05:54
*** jtomasek has joined #oooq06:26
*** marios has joined #oooq06:45
*** marios is now known as marios|rover06:51
*** chkumar|rover is now known as chkumar|ruck06:51
chkumar|ruckmarios|rover: Good morning06:52
chkumar|ruckmarios|rover: needs +2 and +w on this https://review.opendev.org/#/c/696811/06:52
*** jfrancoa has joined #oooq06:52
chkumar|ruckthanks! when free06:52
marios|rovero/ morning and happy monday \o/ :/06:53
marios|roverchkumar|ruck: sure will do in bit06:53
chkumar|ruckzbr_: Hello, I have fixed all the podman related issues,06:54
*** ksambor has joined #oooq07:09
*** skramaja_ has joined #oooq07:13
*** skramaja has quit IRC07:14
*** surpatil has quit IRC07:29
chkumar|ruckdtantsur|afk: marios|rover Anything we can do to get this review https://review.opendev.org/#/c/696532/ merged?07:32
*** akahat has joined #oooq07:33
*** surpatil has joined #oooq07:40
marios|roverchkumar|ruck: i don't have core there but just replied to harald and dmitry07:40
chkumar|ruckmarios|rover: thanks :-)07:41
*** ykarel is now known as ykarel|lunch07:43
*** jbadiapa has joined #oooq07:47
*** apetrich has joined #oooq07:52
*** matbu has joined #oooq08:09
*** tesseract has joined #oooq08:14
arxcruzjpena|off: ping when you're available :)08:15
*** ykarel|lunch is now known as ykarel08:20
*** epoojad1 has quit IRC08:22
*** amoralej|off is now known as amoralej08:29
apetricharxcruz, almost had a heart attack this morning. Went to get my bike at the garage and it wasn't there. It slept locked to the fences in front of my building. I totally forgot to bring it in08:30
arxcruzapetrich: you're playing with fire!08:31
apetricharxcruz, because I'm stupid08:31
arxcruzapetrich: btw, the insurance covered mine, and gives me 50% of the money back IF i buy a new one08:31
*** tosky has joined #oooq08:31
apetrichI didn't want to play with fire08:31
apetrichNice!08:32
arxcruzI got the same model as before, but I'll keep it on my balcony with a bike cover08:32
apetrichcrap08:33
apetrichgood but crap08:33
apetrichbtw what chair do you have? I have a crappy one but I bought new rollers for it and I'm amazed by it08:33
arxcruzapetrich: well, since according to German law, the building has nothing to do with robbery inside the building...08:33
*** ccamacho has joined #oooq08:34
arxcruzapetrich: i got one, not the one i wanted but a good one08:34
arxcruzlet me check the model08:34
apetrichSure. why would it? Pff german laws08:34
arxcruzhttps://www.amazon.de/gp/product/B079BSRV7P/ref=ppx_yo_dt_b_asin_title_o08_s00?ie=UTF8&psc=108:35
arxcruzapetrich: after a very long research, this was the best for the price08:35
arxcruzit's fully adjustable08:35
arxcruzarmrests goes up and down, front and back08:35
apetrichNice08:36
arxcruzand I like the mesh fabric because I sweat a lot08:36
apetrichSo do I08:36
arxcruzapetrich: btw, confirmed on dec 808:37
arxcruzI forgot to tell you lol08:37
apetricharxcruz, got these https://www.amazon.de/dp/B07DXVSJWH08:37
arxcruzwhat liam likes as toys?08:37
apetricharxcruz, oh nice! I was about to ask you about that08:38
arxcruzcool, i also got one floor protection from ikea08:38
arxcruzpretty good08:38
apetricharxcruz, I think a safe bet is a small playmobil set. We are starting to build a collection. matchbox cars are also a good bet, especially the ones with firetrucks or something very different08:40
apetrichwhat else?08:40
arxcruznespresso08:40
arxcruzplaymobil it will be08:40
apetrich:)08:40
arxcruzwhen i was kid, i had a lot of playmobil08:41
arxcruzthen i discovered lego08:41
arxcruzthen i discovered lego is too expensive08:41
apetrichMe too. but lego was way way expensive08:41
apetrichlol08:41
apetrichyeah08:41
apetrichthat's probably the reason why he has too much lego :)08:42
apetrichI never did08:42
arxcruzapetrich: same with me08:43
arxcruzunfortunately valentina likes more dolls than lego08:44
arxcruzalthough when I buy some for myself, she likes to help me assemble them08:44
*** jpena|off is now known as jpena08:48
jpenaarxcruz: I'm in. How can I help you?08:49
arxcruzjpena: https://github.com/ceph/ceph-ansible/pull/479108:49
arxcruzif i do a check-rdo, the master and train are triggered08:49
arxcruzbut the tripleo job not08:50
jpenaaha, let me check the job definition08:50
jpenathe master job is expected, since we did not specify any branch restriction iirc08:50
arxcruzjpena: https://review.rdoproject.org/r/#/c/23853/ and https://review.rdoproject.org/r/#/c/23852/ got merged08:50
arxcruzjpena: i think for the master, i should add branches stable-508:51
*** dtantsur|afk is now known as dtantsur08:52
jpenaarxcruz: in https://review.rdoproject.org/r/#/c/23852/13/zuul.d/projects.yaml, why are you specifying "branches: master"08:52
jpena?08:52
arxcruzjpena: that was indeed a mistake08:52
arxcruzlol08:52
*** chem has joined #oooq08:53
arxcruzjpena: remove or specify train ?08:53
jpenayou're already specifying branches: ^stable-4.0$ in https://review.rdoproject.org/r/#/c/23852/13/zuul.d/ceph-ansible.yaml, I think you can just remove those lines08:54
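For reference, the shape jpena describes here, as a minimal sketch: the branch matcher stays on the job definition, so the project entry only needs to list the job. The ^stable-4.0$ pattern and file names are from the links in the conversation; the job and project names below are illustrative.

    # zuul.d/ceph-ansible.yaml (sketch) -- branch restriction lives on the job
    - job:
        name: tripleo-ceph-integration-stable-4.0    # illustrative name
        branches: ^stable-4.0$

    # zuul.d/projects.yaml (sketch) -- the project entry just lists the job,
    # no duplicate "branches:" line needed here
    - project:
        name: ceph/ceph-ansible                      # illustrative name
        check-rdo:
          jobs:
            - tripleo-ceph-integration-stable-4.0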
arxcruzjpena: anything else ?08:55
jpenaarxcruz: I think that should be it08:55
*** Tengu has quit IRC09:01
*** Tengu has joined #oooq09:02
*** SurajPatil has joined #oooq09:07
arxcruzjpena: didn't work, still the master is getting triggered and the tripleo job isn't09:07
chkumar|ruckmarios|rover: Do we want to close this https://bugs.launchpad.net/tripleo/+bug/1853856 and https://bugs.launchpad.net/tripleo/+bug/1854517 ?09:08
openstackLaunchpad bug 1853856 in tripleo "Tempest basic ops test failed with Timed out waiting for 192.168.24.107 to become reachable in ovs deployment" [Critical,Triaged]09:08
openstackLaunchpad bug 1854517 in tripleo "RDO jobs failing with POST_FAILURE" [Critical,Invalid]09:08
arxcruzjpena: can you check the logs ?09:09
jpenaarxcruz: sure, what's the review?09:09
jpenanot the PR, I mean the change in review.rdo09:09
arxcruzjpena: https://review.rdoproject.org/r/#/c/23896/09:09
*** surpatil has quit IRC09:09
marios|roverchkumar|ruck: ack am going through the list and updating; i already moved 517 to invalid a little while ago09:10
chkumar|ruckack thanks!09:12
*** epoojad1 has joined #oooq09:13
*** bogdando has joined #oooq09:14
*** matbu has quit IRC09:20
*** matbu has joined #oooq09:20
jpenaarxcruz: I found the issue. The new jobs have tripleo-ci-rhel-8-standalone-rdo as their parent, and that specifies master as the only branch, see https://github.com/rdo-infra/rdo-jobs/blob/ec62458a975937b173924ed3a85894958a4774bf/zuul.d/standalone-jobs.yaml#L49009:26
jpenawe had a similar issue in the past, remember? We need to remove that from the parent job, and let child jobs limit the branches09:26
arxcruzjpena: yeah... okay...09:26
arxcruzlet's do it09:27
arxcruzjpena: what about the master job running ?09:27
jpenaarxcruz: you're not limiting branches in the definition of tripleo-ceph-integration-master, so it would run on every PR09:28
jpenahttps://github.com/rdo-infra/rdo-jobs/blob/master/zuul.d/ceph-ansible.yaml#L2509:28
arxcruzjpena: so branches: ^master$ fix the problem09:28
jpenayes09:28
arxcruzjpena: actually, stable-5.009:29
chkumar|ruckmarios|rover: I think we have some real issue on fs0109:29
chkumar|ruckit is coming on rdocloud and vexxhost09:29
chkumar|ruckno valid host found during overcloud deploy09:29
chkumar|ruckfiling the bug soon09:31
arxcruzjpena: ^(stable-5.0|master).*$ right?09:31
jpenalgtm09:31
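A sketch of the fix agreed on above, assuming the rdo-jobs layout jpena links to: the shared parent loses its master-only branch matcher so it no longer silently restricts children, and each child job carries its own branch regex (the ^(stable-5.0|master).*$ pattern is the one from the conversation; other keys are omitted).

    # rdo-jobs zuul.d/standalone-jobs.yaml (sketch) -- no "branches:" on the parent
    - job:
        name: tripleo-ci-rhel-8-standalone-rdo
        # branches: master   <- removed; child jobs decide where they run

    # rdo-jobs zuul.d/ceph-ansible.yaml (sketch) -- restriction moves to the child
    - job:
        name: tripleo-ceph-integration-master
        parent: tripleo-ci-rhel-8-standalone-rdo
        branches: ^(stable-5.0|master).*$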
marios|roverchkumar|ruck: so09:31
marios|roverchkumar|ruck: i've seen that a few times it is intermittent09:31
marios|rover* http://logs.rdoproject.org/05/695805/6/openstack-check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001/eb60e5d/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz09:32
marios|roverchkumar|ruck: today there ^09:32
chkumar|ruckmarios|rover: yes09:32
marios|roverchkumar|ruck: but hadn't filed a bug for it; makes sense to do it09:32
chkumar|ruckmarios|rover: but seen the same on vexxhost09:32
chkumar|ruckmarios|rover: http://logs.rdoproject.org/80/23880/3/check/tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-train-vexxhost-nhw/5ddd3bc/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz09:32
marios|roverchkumar|ruck: but maybe 1/2 day or less sometimes09:32
marios|roverchkumar|ruck: ack09:32
chkumar|ruckmarios|rover: need to check train fs01 also09:32
*** derekh has joined #oooq09:35
ykarelchkumar|ruck, marios|rover is fs020-stein failure known? https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-7-ovb-1ctlr_2comp-featureset020-stein ?09:44
ykarellooks like other jobs in stein also have issues from last couple of days09:51
ykarelex. 02709:52
arxcruzjpena: didn't work09:53
ykarelhmm 027 should be fixed with https://review.opendev.org/#/c/696811/09:53
arxcruzjpena: the master worked... but the tripleo job isn't running though09:53
marios|roverykarel: it's on my todo list to check the stein promotion as it is a few days old now; there might be some issue but i'm not aware of a bug yet09:57
jpenaarxcruz: ok, this time the error is different -> https://softwarefactory-project.io/paste/show/1675/10:01
jpenawe're configuring the job to only trigger when certain files are changed (see https://review.rdoproject.org/r/#/c/23896/3/zuul.d/projects.yaml), and none of them match10:02
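For context, the "files:" matcher jpena points at means the job only runs when the change touches at least one matching path; the PR under test only changed README.rst, so nothing matched. A rough sketch of such an entry; the path patterns below are purely illustrative, not the real list from projects.yaml.

    # zuul.d/projects.yaml (sketch) -- job only triggered by matching file changes
    - project:
        check-rdo:
          jobs:
            - tripleo-ceph-integration-master:
                files:
                  - ^roles/.*$      # illustrative pattern
                  - ^library/.*$    # illustrative pattern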
arxcruzjesus... okay, removing for this job10:03
arxcruzjpena: okay, what about now ? what's the error?10:05
arxcruzis there a way i can see the zuul logs, so i won't keep bothering you?10:06
marios|roverchkumar|ruck: we have a bug for train image build? 2019-12-01 21:24:03.148 | > error: Status code: 403 for https://rhui-cds/pulp/repos/content/dist/rhel8/rhui/8/x86_64/codeready-builder/os/repodata/repomd.xml (https://rhui-cds/pulp/repos/content/dist/rhel8/rhui/8/x86_64/codeready-builder/os/repodata/repomd.xml).10:09
marios|roverchkumar|ruck: http://logs.rdoproject.org/openstack-periodic-latest-released/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-rhel-8-buildimage-overcloud-full-train/e342ee2/build.log10:09
chkumar|ruckmarios|rover: yes10:09
chkumar|ruckmarios|rover: https://bugs.launchpad.net/tripleo/+bug/185468510:09
marios|roverchkumar|ruck: cool which one please thanks10:09
marios|roverthx10:09
openstackLaunchpad bug 1854685 in tripleo "RHEL-8 master Image build job failed with Failed to synchronize cache for repo 'rhui-codeready-builder-for-rhel-8-x86_64-rhui-rpms'" [Critical,Confirmed]10:09
chkumar|ruckmarios|rover: add train also10:09
marios|roverchkumar|ruck: yah its from train i saw it is on etherpad?10:09
chkumar|ruckmarios|rover: apevec, jpena and me looking into that10:09
marios|roverchkumar|ruck: thanks10:10
chkumar|ruckmarios|rover: stein fs027 tempest issue got fixed10:10
marios|roverrhui issue looks like10:10
chkumar|ruckmarios|rover: yes it is10:10
marios|roverchkumar|ruck: ykarel: ^^10:10
marios|roverykarel: 12:10 < chkumar|ruck> marios|rover: stein fs027 tempest issue got fixed10:10
chkumar|ruckah it is fs020 one10:10
bogdandohi. PTAL https://review.opendev.org/#/c/696306/ https://review.opendev.org/#/c/696292/ https://review.opendev.org/#/c/696091/10:11
jpenaarxcruz: you need ssh access into the zuul infra to check the logs (they can contain passwords and other sensitive info). Let me check what's going on now10:11
bogdandoand https://review.opendev.org/#/c/696056/10:11
ykarelmarios|rover, hmm i got that <ykarel> hmm 027 should be fixed with https://review.opendev.org/#/c/696811/10:11
*** skramaja_ is now known as skramaja10:12
*** ysandeep has quit IRC10:12
*** ysandeep_ has joined #oooq10:12
chkumar|ruckbrb10:16
*** ysandeep_ has quit IRC10:20
*** ksambor has quit IRC10:21
*** ksambor has joined #oooq10:25
*** tosky_ has joined #oooq10:26
*** tosky has quit IRC10:28
*** gchamoul has joined #oooq10:30
jpenaarxcruz: hm, this is strange. It is still complaining about the files (https://softwarefactory-project.io/paste/show/1676/), but I can't understand why10:35
arxcruzjpena: maybe branches: ^(stable(/train|-4.0)).*$ ?10:36
arxcruzjpena: or maybe remove the job from the projects.yaml file ?10:36
* jpena facepalms10:37
arxcruz:(10:38
jpenaarxcruz: we're defining the job for project rdo-jobs10:38
jpenanot for ceph-ansible10:38
*** tosky_ is now known as tosky10:38
arxcruzjpena: so i can remove those right ?10:38
arxcruzbut the check-rdo should work...10:38
arxcruzwhy is not?10:39
arxcruzjpena: https://review.rdoproject.org/r/#/c/23853/4/zuul.d/projects.yaml got merged, and these jobs should run10:39
arxcruzcorrect?10:39
jpenaI think it should, yes10:40
arxcruzthen why isn't it?10:41
jpenathat's what I'm trying to understand10:41
bogdandoas there is dead silence at #zuul, I'll ask here as well:10:45
bogdando11:35:15 - bogdando: is there a way to live-test a change in zuul-jobs with the local zuul itself? e.g. https://review.opendev.org/#/c/696337/10:45
bogdando11:35:35 - bogdando: This change never seems to get applied by the zuul executor running my CI job?10:45
bogdandoperchance anyone tried?10:45
chkumar|ruckmarios|rover: https://bugs.launchpad.net/tripleo/+bug/185472110:46
openstackLaunchpad bug 1854721 in tripleo "FS 01 master periodic job failed during overcloud deploy with Exceeded maximum number of retries and 503 status code" [Critical,Confirmed]10:46
chkumar|ruckmarios|rover: other vexxhost jobs hit the same error https://review.rdoproject.org/r/#/c/23880/10:49
jpenaarxcruz: could it be that one of the parent jobs in the chain does not trigger when we are changing the README.rst file?10:50
arxcruzjpena: maybe, let me try to change something else10:50
jpenaarxcruz: yes, that's it10:51
jpenahttps://opendev.org/openstack/tripleo-ci/src/branch/master/zuul.d/base.yaml#L5310:51
jpenathe tripleo-ci-base file is in the parent chain, and it ignores any changes to .rst files in the root dir10:51
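As jpena explains, a file matcher on a job in the parent chain also filters every descendant job, so a change touching only an ignored path never triggers the child no matter what the project entry says. A sketch of the kind of entry in openstack/tripleo-ci's base job; the pattern is illustrative of ".rst files in the root dir", the real list is longer.

    # openstack/tripleo-ci zuul.d/base.yaml (sketch)
    - job:
        name: tripleo-ci-base
        irrelevant-files:
          - ^[^/]*\.rst$    # illustrative: .rst files in the repo root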
jpenaarxcruz: aha, now we're getting a different output. We need to make the master integration jobs run only on master (just like the rpm building job)10:53
arxcruzjpena: now i got this on github side:10:53
arxcruzUnable to freeze job graph: Job tripleo-ceph-integration-rhel-8-scenario001-standalone depends on tripleo-ceph-integration-master which was not run.10:53
jpenayes, that's it10:53
arxcruzoh, yeah10:53
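What the frozen-graph error means, roughly: the rhel-8 scenario job declares the master integration job as a dependency, so zuul refuses to build a job graph for a change where that dependency is filtered out (here, by the new branch restriction). The sketch below is illustrative only; the fix is to keep the pair on compatible branch filters, or drop the dependency where the master job does not run.

    # sketch -- a job dependency must run in the same buildset
    - job:
        name: tripleo-ceph-integration-rhel-8-scenario001-standalone
        dependencies:
          - tripleo-ceph-integration-master
        branches: ^master$   # illustrative: align with the dependency's filter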
arxcruz:)10:53
arxcruzYES10:55
arxcruzjpena: it worked!10:55
arxcruzjpena++10:55
jpenafinally :)10:55
*** epoojad1 is now known as epoojad1|afk10:56
arxcruzlet's see if it will work :D10:58
chkumar|ruckmarios|rover: Now poking at fs020 stein failure10:59
arxcruzchkumar|ruck: rhel-8 is supposed to support train right?11:07
chkumar|ruckarxcruz: yes11:07
chkumar|ruckarxcruz: rhel-8 train and master11:08
arxcruzchkumar|ruck: any chance you know why this image isn't found? http://logs.rdoproject.org/42/23642/7/check/tripleo-ceph-integration-rhel-8-scenario001-standalone/c87b7a6/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz11:08
arxcruzis it because it's docker://11:08
chkumar|ruckarxcruz: we are waiting for the promotion11:08
chkumar|ruckthere is no docker images11:08
chkumar|ruckcan you try with vars: force_periodic: true in the job currently11:08
arxcruzchkumar|ruck: i see... thanks11:09
chkumar|ruckarxcruz: https://bugs.launchpad.net/tripleo/+bug/185406211:09
openstackLaunchpad bug 1854062 in tripleo "rhel8 jobs failing for missing containers " Exception: Not found image: docker://..."" [Critical,Triaged]11:09
arxcruzchkumar|ruck: cool, so, it's a real failure :)11:11
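The force_periodic suggestion above is a way to exercise the job before any rhel-8 promotion exists: the check run is pushed down the periodic code path so it does not look for already-promoted container images. A sketch of how that variable would be set on the job; the pipeline name and the exact effect of the flag are assumptions here, only "vars: force_periodic: true" is from the conversation.

    # sketch -- forcing the periodic code path on a check-triggered job
    - project:
        check-rdo:
          jobs:
            - tripleo-ceph-integration-rhel-8-scenario001-standalone:
                vars:
                  force_periodic: true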
marios|roverchkumar|ruck: ack thanks11:12
marios|roverarxcruz: last rhel promotion was bad so check jobs broken11:13
marios|rovermatbu: ccamacho: jfrancoa: o/ hello can you please check comment #28 @ https://bugs.launchpad.net/tripleo/+bug/1853812 can you add more info if you have it are we at green job there? where? thanks :D11:29
openstackLaunchpad bug 1853812 in tripleo "train containerized-undercloud-upgrades failing with "error creating container the container name is already in use "" [Critical,Fix released] - Assigned to Bogdan Dobrelya (bogdando)11:29
*** surpatil has joined #oooq11:29
*** SurajPatil has quit IRC11:31
bogdandomarios|rover: hi, https://review.opendev.org/#/c/695242/ has tripleo-ci-centos-7-containerized-undercloud-upgrades green, so we need all its deps merged11:32
bogdandoupdated the bug, ccamacho11:32
marios|roverthanks bogdando exactly what i was looking for but why doesn't it have the bug on it11:33
marios|roverccamacho: please add the bug there; i left a comment (a very soft -1) but given the number of depends-on i think you'll likely update at least once :D11:34
soniya29Hi everyone, I want to export the cockpit data to json. While doing it, I am getting https://paste.fedoraproject.org/paste/w5qHOtzTRMy~Bwit-ejjzw. How can I export it properly?11:36
ccamachomarios|rover Ill update the patch11:36
ccamachodone ^11:38
marios|roverccamacho: thanks revoted11:40
soniya29marios|rover, sshnaidm, arxcruz, kopecmartin, Can anyone of you help me over this:-https://paste.fedoraproject.org/paste/w5qHOtzTRMy~Bwit-ejjzw?11:47
marios|roversoniya29: hmm i recall hitting that a few times; did you maybe remove your key? alternatively try removing and clearing that, but ask sshnaidm, he will likely give better info11:49
marios|roversoniya29: "removing and clearing " i mean re-generate but i vaguely recall sshnaidm saying not to do that :D11:50
sshnaidmsoniya29, did you generate a key with create-api-key.py ?11:51
soniya29marios|rover, I never removed my key actually. Before trying to export the data with ./export-grafana.py, I have created a key using ./create-api-key.py11:52
soniya29sshnaidm, yes11:52
sshnaidmsoniya29, is it saved in grafana.key file?11:52
sshnaidmsoniya29, https://github.com/rdo-infra/ci-config/blob/master/ci-scripts/infra-setup/roles/rrcockpit/files/grafana/export-grafana.py#L20:L2011:54
soniya29sshnaidm: Nope. It says, failed to add API key11:54
*** dtantsur is now known as dtantsur|afk11:54
sshnaidmsoniya29, either pass it as argument or save it in file11:54
sshnaidmsoniya29, if you don't have it, need to generate a new one11:55
soniya29sshnaidm, I tried to generate, but as I said, It failed to save it. I will once again try to re-generate11:57
sshnaidmsoniya29, try new "--key-name" when generating11:58
soniya29sshnaidm, yeah11:58
soniya29sshnaidm, It worked. Thanks a lot11:59
sshnaidmnp12:00
*** holser has joined #oooq12:01
*** rfolco has joined #oooq12:09
chkumar|ruckI need help in debugging overcloud image build failure12:13
chkumar|ruckhttps://bugs.launchpad.net/tripleo/+bug/185468512:14
openstackLaunchpad bug 1854685 in tripleo "RHEL-8 master Image build job failed with Failed to synchronize cache for repo 'rhui-codeready-builder-for-rhel-8-x86_64-rhui-rpms'" [Critical,Confirmed]12:14
chkumar|ruckwe have the node hold12:14
soniya29weshay, arxcruz, I have updated the patch along with skiplist_dashboard.json file. Can you review the patch?12:16
*** jpena is now known as jpena|lunch12:26
*** udesale has quit IRC12:36
*** udesale has joined #oooq12:37
*** SurajPatil has joined #oooq12:46
*** surpatil has quit IRC12:48
*** soniya29 has quit IRC12:52
*** rlandy has joined #oooq12:57
rfolcoscrum time13:00
rfolcosshnaidm, zbr_13:01
*** amoralej is now known as amoralej|lunch13:05
*** ccamacho has quit IRC13:12
*** ccamacho has joined #oooq13:12
*** ccamacho has quit IRC13:13
*** ccamacho has joined #oooq13:13
*** epoojad1|afk is now known as epoojad113:15
sshnaidmmarios|rover, ping me after a mtg, I suspect you hit same problems we hit for rhel8 containers13:17
*** jpena|lunch is now known as jpena13:24
*** skramaja has quit IRC13:24
*** surpatil has joined #oooq13:26
*** SurajPatil has quit IRC13:28
*** SurajPatil has joined #oooq13:30
marios|roverchkumar|ruck: you on the other call? (reminder)13:30
*** surpatil has quit IRC13:33
*** soniya29 has joined #oooq13:45
weshayhttps://review.rdoproject.org/r/#/c/23880/13:49
weshayhttps://bugs.launchpad.net/tripleo/+bug/185472113:49
openstackLaunchpad bug 1854721 in tripleo "FS 01 master periodic job failed during overcloud deploy with Exceeded maximum number of retries and 503 status code" [Critical,Triaged]13:49
weshayhttps://review.rdoproject.org/r/2287013:52
*** soniya29 has quit IRC13:53
*** surpatil has joined #oooq13:56
*** SurajPatil has quit IRC13:56
*** SurajPatil has joined #oooq13:58
*** amoralej|lunch is now known as amoralej13:58
rfolcoweshay, rlandy what about ... NOW?13:58
rfolco:)13:58
rlandyrfolco: Looks like weshay has 0 mins13:58
rlandy3013:58
rlandyup to him - I'm free all day13:59
rfolcohis next hour shows as free to me13:59
weshayok.. I'm free atm14:00
rfolcorejoin same channel ?14:00
rfolcorlandy, weshay ^14:00
rlandyfine14:00
*** surpatil has quit IRC14:01
pandaI don't like being laughed at.14:01
weshaymarios|rover, we're discussing pipeline stuff14:02
weshaymarios|rover, mtg is over14:02
weshaymarios|rover, need anything / help ?14:03
*** surpatil has joined #oooq14:03
marios|roverack weshay thanks will drop then14:03
marios|roverpanda: we have a call then? or want to skip for now?14:03
chkumar|ruckmarios|rover: rlandy https://review.opendev.org/#/c/696885/14:04
pandamarios|rover: I need to move the test on production forward, with or without ruck/rover.14:05
pandamarios|rover: so whatever. You may skip.14:05
pandano one else is joining.14:05
*** SurajPatil has quit IRC14:05
marios|roverpanda: ack i can join if you need to chat or just ping us if you do. maybe cancel the meet?14:05
marios|roverpanda: i saw no-one there are you there?14:05
pandaI'll cancel also the other days.14:05
chkumar|ruckmarios|rover: rlandy will fix the RHEL image build issue14:05
marios|roverrlandy++14:06
marios|roverthanks chkumar|ruck14:06
pandaI'll try to move forward until someone else better than me at testing in production takes over.14:06
chkumar|ruckthanks to jpena and apevec for helping14:06
marios|roverchkumar|ruck: ah i thought you meant rlandy fixed it 16:05 < chkumar|ruck> marios|rover: rlandy will fix the RHEL image build issue14:06
*** holser has quit IRC14:06
marios|roverthen chkumar|ruck++ ;)14:06
chkumar|ruckweshay: regarding this bug https://bugs.launchpad.net/tripleo/+bug/1854721, I have seen this issue in most of the vexxhost jobs also14:07
openstackLaunchpad bug 1854721 in tripleo "FS 01 master periodic job failed during overcloud deploy with Exceeded maximum number of retries and 503 status code" [Critical,Triaged]14:07
chkumar|ruckwhich failed14:08
weshaychkumar|ruck, aye.. sshnaidm and rlandy can be a resource if needed14:09
chkumar|ruckrlandy: sshnaidm more context here https://bugs.launchpad.net/tripleo/+bug/185472114:10
openstackLaunchpad bug 1854721 in tripleo "FS 01 master periodic job failed during overcloud deploy with Exceeded maximum number of retries and 503 status code" [Critical,Triaged]14:10
sshnaidmchkumar|ruck, is it on vexxhost?14:11
chkumar|rucksshnaidm: it is coming on rdocloud as well as vexxhost14:11
chkumar|ruckboth14:11
sshnaidmchkumar|ruck, so it's common for ovb14:11
chkumar|ruckyup14:11
chkumar|ruckmarios|rover: heading home, will back in an hr or half.14:12
marios|roverchkumar|ruck: ack14:13
*** TrevorV has joined #oooq14:14
sshnaidmchkumar|ruck, look in /var/log/extra/errors.txt for ironic and neutron errors14:14
rlandymarios|rover: chkumar|ruck: in meeting14:14
*** rlandy is now known as rlandy|mtg14:14
*** ykarel is now known as ykarel|afk14:17
*** SurajPatil has joined #oooq14:18
*** akahat has quit IRC14:18
weshaychkumar|ruck, show me where you see it in rdo14:18
*** bhagyashris has quit IRC14:20
*** surpatil has quit IRC14:21
weshayrlandy|mtg, rfolco https://tree.taiga.io/project/tripleo-ci-board/task/1420?kanban-status=144727414:24
*** ysandeep has joined #oooq14:27
marios|roverhttps://review.opendev.org/#/q/topic:containerized-undercloud-upgrades-voting votes please14:36
marios|roverthank you14:36
*** Goneri has joined #oooq14:41
*** holser has joined #oooq14:42
chkumar|ruckweshay: this time it failed at a different place http://logs.rdoproject.org/02/23902/1/check/periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/2449e84/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz14:50
chkumar|ruckweshay: http://logs.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/b216f86/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz is from periodic job14:51
*** holser has quit IRC15:06
weshaychkumar|ruck,  "Message: No valid host was found. , Code: 500" can happen for just about any reason w/ nodes15:07
weshaychkumar|ruck, I don't think you've provided enough evidence we're seeing the same things in rdo and vexx yet15:08
weshaychkumar|ruck, better to go after the port_security setting mnaser found15:08
chkumar|ruckweshay: I need to learn how to debug those issues15:12
weshaychkumar|ruck, check the ironic logs15:12
rlandy|mtgwhat rhel build issue am I fixing?15:13
weshaychkumar|ruck, /me notes.. blueprint to fix that horrible error https://blueprints.launchpad.net/tripleo/+spec/nova-less-deploy15:14
rlandy|mtgchkumar|ruck:  Went to status ERROR due to "Message: No valid host was found. , Code: 500" is a resource issue15:16
rlandy|mtgis it reproducible?15:16
rlandy|mtgcould be caused by not enough resources on cloud at the time15:16
rlandy|mtgunless the flavor requirements changed, probably not reproducible15:17
rlandy|mtgtry and see if it is15:17
chkumar|ruckrlandy|mtg: in the last run it was not reproducible15:17
*** rlandy|mtg is now known as rlandy15:18
rlandychkumar|ruck: yeah - then no use in debugging15:18
rlandycloud just sometimes overruns its resources15:18
marios|roverrlandy: looking15:19
marios|roverrlandy: (sorry, that was about your ping in sf-ops; seen, thanks)15:19
rlandychkumar|ruck: marios|rover: commented on https://bugs.launchpad.net/tripleo/+bug/185472115:23
openstackLaunchpad bug 1854721 in tripleo "FS 01 master periodic job failed during overcloud deploy with Exceeded maximum number of retries and 503 status code" [Critical,Triaged]15:23
marios|roverthanks rlandy15:23
chkumar|ruckrlandy: thanks@15:23
*** SurajPatil has quit IRC15:24
rlandychkumar|ruck: marios|rover: is there another ovb error you guys need help with or are we done here?15:24
chkumar|ruckcurrently nope!15:25
rlandyawesome15:25
marios|roverchkumar|ruck: seen that before? timeout on build-images centos7 train fs1 http://logs.rdoproject.org/openstack-periodic-latest-released/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-train/696e7d4/job-output.txt15:29
marios|roverchkumar|ruck: "RUN END RESULT_TIMED_OUT"15:30
marios|roverchkumar|ruck: am looking at failed things in criteria to push train from https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-latest-released15:30
marios|roverchkumar|ruck: but a bit concerned that i don't see any 'missing successful jobs:' lists in http://38.145.34.55/centos7_train.log it's all 2019-12-02 14:32:38,748 29024 DEBUG    promoter No remaining hashes after removing ones older than the currently promoted15:31
chkumar|ruckmarios|rover: nope seeing for first time15:31
marios|roverchkumar|ruck: hmm gonna post a testproject in a sec but i want to look at the other jobs failed there too and include them gimme few15:31
chkumar|ruckok15:32
chkumar|ruckmarios|rover: https://review.rdoproject.org/zuul/builds?result=POST_FAILURE more post_failure without logs15:36
chkumar|ruckpoked internally15:36
marios|roverchkumar|ruck: same there fyi http://logs.rdoproject.org/openstack-periodic-latest-released/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-train-upload/018a177/job-output.txt but different issue (undercloud install with no logs? ) at15:36
marios|roverhttp://logs.rdoproject.org/openstack-periodic-latest-released/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-7-singlenode-featureset027-train/7565764/job-output.txt15:36
marios|roverchkumar|ruck: and yeah fs27 has no logs?15:36
marios|roverchkumar|ruck: even though it failed in the undercloud install task15:36
marios|roverchkumar|ruck: posting testproject with those 3 jobs now15:37
chkumar|ruckmarios|rover: https://review.rdoproject.org/zuul/builds?result=POST_FAILURE after ci escalation call it happened15:37
chkumar|ruckthat time it was 4 only15:37
marios|roverchkumar|ruck: i checked current train criteria https://paste.fedoraproject.org/paste/zyueU8VcOor58OKw0Ilofg/raw fwiw i think just those 3 missing from last run15:37
marios|roverchkumar|ruck: for centos 7 train15:37
marios|roverchkumar|ruck: ack thx 17:37 < chkumar|ruck> marios|rover: https://review.rdoproject.org/zuul/builds?result=POST_FAILURE after ci escalation call it happened15:37
chkumar|ruckyes15:38
*** holser has joined #oooq15:39
*** ysandeep has quit IRC15:40
chkumar|ruckmarios|rover: looks like network failure https://softwarefactory-project.io/paste/raw/1679/15:44
chkumar|ruckfrom sf-ops15:44
marios|roverchkumar|ruck: thanks, i noted in transient for now15:45
marios|roverhttps://etherpad.openstack.org/p/ruckroversprint1915:45
marios|roverchkumar|ruck: you know maybe we want to move to hackmd ?15:45
chkumar|ruckmarios|rover: yes15:46
*** aakarsh has joined #oooq15:46
chkumar|ruckmarios|rover: let's do that tomorrow15:46
marios|roverchkumar|ruck: ack15:46
*** ykarel|afk is now known as ykarel|away15:54
*** bhagyashris has joined #oooq15:57
*** holser has quit IRC15:59
*** dtrainor has quit IRC16:00
chkumar|ruckmarios|rover: it is an rdo cloud issue due to ceph disk replacement16:01
chkumar|ruckit is still in progress16:02
*** ykarel|away has quit IRC16:02
marios|roverchkumar|ruck: ack thanks16:09
marios|roverchkumar|ruck: maybe we get lucky on the testproject run16:09
chkumar|ruckmarios|rover: let's do that tomorrow morning, I can get it done when i wake up16:10
*** dtrainor has joined #oooq16:10
marios|roverchkumar|ruck: i posted it, sec16:10
marios|roverchkumar|ruck: (its on the etherpad)16:10
chkumar|ruckok16:10
marios|roverhttps://review.rdoproject.org/r/2390416:10
chkumar|ruckthanks!16:10
marios|roverlinks notes@ https://paste.fedoraproject.org/paste/bGdqqjsH-8WLBB9iT37sqg/raw posted testproject16:10
chkumar|ruckzbr_: https://github.com/containers/libpod/pull/4599 need help on getting this merged16:12
* chkumar|ruck needs to start learning go now16:12
*** holser has joined #oooq16:14
chkumar|rucktime to rest now16:17
chkumar|ruckmarios|rover: see ya tomorrow16:17
*** chkumar|ruck is now known as raukadah16:17
marios|roverchkumar|ruck: have a good one16:17
marios|roverraukadah: o/16:17
marios|roverraukadah: are you here to take over from chkumar|ruck?16:17
marios|rover;)16:17
raukadahmarios|rover: rlandy and weshay  can help :-)16:18
marios|roverbai go learn go16:18
weshayraukadah, marios|rover what's up?16:19
marios|roverweshay: nothing was just messing around16:19
*** ykarel|away has joined #oooq16:35
*** d0ugal has joined #oooq16:37
*** d0ugal has joined #oooq16:39
*** udesale has quit IRC16:41
* marios|rover almost hometime16:41
marios|roverweshay: need something before you lose ruck|rover for a few hours ?16:41
marios|roverweshay: latest things at https://etherpad.openstack.org/p/ruckroversprint19 - chasing train/stein promotions atm16:42
weshaymarios|rover, nope.. thanks.. just note.. rdo is backed up on current -> consistent.. which sucks for us a bit16:43
marios|roverweshay: ack... for stein should be ok cos 24hr16:44
*** bhagyashris has quit IRC16:46
*** bhagyashris has joined #oooq16:50
*** marios|rover is now known as marios|out16:52
*** marios|out has quit IRC17:01
*** sshnaidm is now known as sshnaidm|afk17:13
*** bogdando has quit IRC17:31
*** ykarel has joined #oooq17:32
*** ykarel|away has quit IRC17:34
*** Goneri has quit IRC17:40
*** Goneri has joined #oooq17:54
*** bhagyashris has quit IRC17:55
*** jpena is now known as jpena|off17:59
*** derekh has quit IRC18:00
rfolcorlandy, can you sync ?18:48
rlandyrfolco: I don't have results to show yet18:48
rlandymy run is on generating tempest18:48
rlandyexperimented with a few options18:49
rfolcorlandy, I am stuck coz I don't know which tasks you are working on18:49
rlandyrfolco: I ma not touching any promotion work18:49
rfolcorlandy, what about add_repo patch you said you would be trying18:49
rlandyI am looking into using job.add_repos to include component18:49
rlandythat is the one that is running tempest right now18:50
rlandyI need about 30 mins to discuss results18:50
rlandyand see whether it is a good solution18:50
rlandyrfolco: what do you want to work on?18:50
rfolcorlandy, I don't know what to do because don't know what is in your patch, what is the solution, if job is defined, if release config file patch is needed, what else.18:51
*** holser has quit IRC18:51
rfolcorlandy, should I pretend your patch does not cover my tasks ?18:52
rfolcorlandy, I am sure this is not true18:52
rfolco#1408 and #140918:53
rfolcorelease file and standalone job18:53
rlandyrfolco: what was in the standalone job other than adding one var to the job definition?18:55
rlandyrfolco: wrt release file, idk - depends if we decide to use the release file to pass the component repo set_up values18:56
rlandyresults of my current test run are uploading18:56
rlandygive me few to see if this worked or not18:57
rfolcorlandy, ok let me propose the patches again and then we decide. I take the risk of throwing it away.18:57
rlandyrfolco: hang on - just a few more minutes18:58
rlandythen we can sync18:58
rlandythe release file may be an option, idk yet18:58
rlandyproblem is the var precedence18:58
rlandythe release file could solve that18:59
rfolcorlandy, ok18:59
*** Tengu has quit IRC19:00
rlandyI'll explain in a few19:00
*** Tengu has joined #oooq19:01
rlandyugh collect logs takes forever19:02
rlandyhttp://logs.rdoproject.org/41/23841/12/check/periodic-tripleo-ci-centos-7-standalone-compute-master/1c08b7d/logs/undercloud/etc/yum.repos.d/component.repo.txt.gz19:03
rlandyokie dokie19:03
rlandywe have a solution to talk about19:03
rlandyweshay: rfolco: want to sync briefly now?19:04
rlandywe need to confirm one direction and go from there now19:04
weshayok19:04
rfolcook19:05
rlandyhttps://meet.google.com/wyp-znwe-gvs19:05
rlandyweshay: rfolco: ^^19:05
*** ykarel has quit IRC19:07
*** amoralej is now known as amoralej|off19:42
*** jfrancoa has quit IRC19:49
*** tesseract has quit IRC19:49
*** d0ugal has quit IRC20:02
*** brault has quit IRC20:02
rfolcoweshay, who can I talk to about dlrnapi password ? jpena|off only?20:07
rfolcoweshay, component one20:07
weshayinternal?20:07
weshaystaging upstream?20:07
rfolcostaging20:07
weshayrfolco, so.. for upstream.. I think we're going to assume we're using the perm dlrn server20:08
weshayI think jpena just has some patches to land20:08
rfolcok20:08
weshayrfolco, usually we have a weekly to sync on this kind of stuff20:08
weshaybut I'm pto this friday20:08
rfolcoweshay, thats ok, I am going to talk to jpena tomorrow morning20:09
rfolcowill work on the promotion job now, the one that checks results and promotes repo20:09
weshayhttps://review.rdoproject.org/r/#/c/23277/20:10
weshayhttps://review.rdoproject.org/r/#/c/22394/20:11
weshayrfolco, so.. I guess our calculus is this.. if he is not planning on merging the code for components for quite some time.. we'll need staging fully setup and passwds etc.. to meet your requirements20:13
rfolcok20:13
rlandyrfolco: weshay: 4 reviews in play now ...20:18
rlandyhttps://review.opendev.org/#/c/696951/ Revert "Component promotion jobs should read a custom release cfg"20:18
rlandyhttps://review.opendev.org/#/c/696933/ Include job.add_repos to repo_setup20:18
rlandyhttps://review.rdoproject.org/r/#/c/23841/ Add standalone jobs for component promotion20:19
rlandyhttps://review.rdoproject.org/r/#/c/23906/ Test component standalone20:19
rlandynow updating pipeline20:22
rlandyrfolco: weshay:  https://review.rdoproject.org/r/23712 Add openstack-component-compute pipeline and layout20:33
rlandy.... and it's snowing20:35
*** holser has joined #oooq21:06
rlandyhttps://review.rdoproject.org/r/#/c/23712/10/ci-scripts/tripleo-upstream/get-hash.sh21:07
rlandy^^ that could use a design review pls21:07
*** jtomasek has quit IRC21:15
*** TrevorV has quit IRC21:29
*** aakarsh has quit IRC21:41
rlandyrfolco: weshay: how do we know we have installed the right rpm using the component21:47
rlandyhttp://logs.rdoproject.org/06/23906/1/check/periodic-tripleo-ci-centos-7-standalone-compute-master/3947799/logs/undercloud/var/log/extra/rpm-list.txt.gz21:47
rlandy^^ looking there21:48
rlandypython2-novaclient.noarch   1:16.0.0-0.20191106091533.380fc08.el721:49
rlandy                                                         @delorean21:49
rlandyno openstack-nova21:50
rlandyDec 01 00:34:37 Installed: 1:openstack-nova-common-20.1.0-0.20191128234359.f138265.el7.noarch21:52
rlandyDec 01 00:42:47 Installed: 1:openstack-nova-api-20.1.0-0.20191128234359.f138265.el7.noarch21:52
rlandyusing https://trunk-staging.rdoproject.org/centos7/component/compute/f1/38/21:53
rlandyit should have highest priority21:54
rlandywe have a problem21:54
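To make the mechanics concrete: the intent is that the component repo (the trunk-staging compute URL above) wins over the regular delorean repo for the packages it carries. With yum-plugin-priorities a lower number means higher priority, so the entry passed in through job.add_repos needs a numerically lower priority than delorean's. A rough sketch of such an entry; the key names follow the usual tripleo-quickstart repo-setup style but are assumptions here, not copied from the actual patch.

    # hypothetical add_repos entry for the compute component repo
    add_repos:
      - type: generic
        reponame: compute-component
        filename: component.repo
        baseurl: https://trunk-staging.rdoproject.org/centos7/component/compute/f1/38/
        priority: 1    # must be lower (stronger) than the delorean repo's priority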
*** jbadiapa has quit IRC22:14
*** aakarsh has joined #oooq22:26
rlandyanyone?22:52
rlandyhttps://code.engineering.redhat.com/gerrit/186764 Add first rhos-17 release file23:08
rlandymerged that  - whatever - need it for pipeline23:09
rlandyhttps://code.engineering.redhat.com/gerrit/186766 WIP: Add component standalone jobs23:36
*** holser has quit IRC23:38
*** tosky has quit IRC23:41
rlandytesting downstream23:43
