Friday, 2022-07-29

*** ysandeep|out is now known as ysandeep01:54
ysandeepdasm|off, o/ still slow for me: http://pastebin.test.redhat.com/1068548  02:48
ysandeepreplied here: https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/43161/17#message-af254741d5431408ec72565fb39f27932cbb04e3  02:59
*** ysandeep is now known as ysandeep|afk03:24
dasm|offysandeep|afk: it takes 2s per log to download. hm...03:24
dasm|offysandeep|afk: i reworked patches. can you check other ones? i'm gonna see how to solve this issue.03:25
dasm|offi'm not sure if it can be done more efficiently.03:26
dasm|offysandeep|afk: how long does it take for you to open any logserver?03:26
dasm|offysandeep|afk: i recently had issues when tried to use ipv6 over vpn. it took a long time.03:26
dasm|offysandeep|afk: could you try again w/o vpn?03:27
*** ysandeep|afk is now known as ysandeep03:38
ysandeepdasm|off, took 1m 59s without vpn.. so looks like vpn is not a factor here03:39
dasm|offack03:39
ysandeepdasm|off, please check my comment here: https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/43161/17#message-af254741d5431408ec72565fb39f27932cbb04e3 , download task is not taking long but query_zuul_job_details  03:39
ysandeepdasm|off, yes I am reviewing your other patches.. left a comment here as well: https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/42149/6#message-4d63cf5908f8944723baa187f6dfa7a0418b4a74  03:41
dasm|offysandeep: so query_zuul takes long time?03:41
ysandeepyeah it took ~29 s 03:41
dasm|offso it's not fetching failure_reason?03:41
dasm|offi thought it was failure reason before03:42
ysandeepyes in this case - downloading didn't take time because there was nothing to download :) that file itself was not present03:42
dasm|offmhm03:42
dasm|offysandeep: and what happens when you're opening links to logserver and review? does it take so long too?03:43
dasm|offi had issues with ipv603:45
ysandeepdasm|off, I don't feel any slowness in general when using webui03:45
dasm|offysandeep: can you disable ipv6 on your network and try again via console?03:46
ysandeepnet.ipv6.conf.all.disable_ipv6 = 1 ?03:48
dasm|offi changed via gui in network manager. i'm not sure about cli option03:48
* dasm|off checking03:48
dasm|off> sudo sysctl -w net.ipv6.conf.eth0.disable_ipv6=1  03:49
dasm|offysandeep: yes, you're right03:49
* ysandeep doing some tests03:53
ysandeepdasm|off, http://pastebin.test.redhat.com/1068554 1 m 14s 03:54
dasm|offhmm, faster, but no cake03:54
* dasm|off can't check pastebin -- i'm on my private pc now03:54
ysandeeplet me share in openstack pastebin.. no secrets here :D03:55
ysandeephttps://paste.openstack.org/show/bRgyMBrGlMFZXEK3fdjJ/ 1m 20 s in latest test03:56
dasm|offchecking03:56
dasm|off4s for promotion info03:57
dasm|offjob details = 22s03:57
dasm|offso 'query_zuul_job_details' takes a long time03:58
ysandeepyes.. fetching things from zuul takes time.. 03:58
dasm|offand it's about 45s total, which is over half of the overall execution time03:59
dasm|offchecking the same on my config03:59
dasm|off4s for fetching jobs details04:00
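The per-phase numbers being traded here (4s promotion info, 22s job details) can be captured inside the script itself with a small timing helper. A sketch, not the real ruck_rover.py code; the phase names in the usage comment are illustrative:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # print wall-clock time of a phase, to see where ruck_rover.py spends it
    start = time.monotonic()
    try:
        yield
    finally:
        print(f"{label}: {time.monotonic() - start:.1f}s")

# usage sketch: wrap each suspected-slow phase
# with timed("query_zuul_job_details"):
#     details = query_zuul_job_details(...)
```

Wrapping each phase this way gives the same breakdown both sides are reading off their pastebins, without any guessing.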
ysandeepso now I am skeptical about removing compare_upstream flag here.. https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/43307/14#message-b4f8280c0ebd45b4e106ccad4eca145ab6693706  :D04:00
ysandeepthat will increase the time even more :D04:00
ysandeepimho.. we should keep separate function for influx and for print04:00
ysandeep+1 to unification but if that decreases performance and we don't use that info during print.. 04:01
dasm|offwe're sharing too many variables between those two to keep separate functions04:01
dasm|offcurrently, when i want to fetch info about last 5 promotion candidates, it's difficult04:02
ysandeeptrue.. but currently we are only doing what we have to for print.. no extra things04:03
dasm|offi would need to rerun entire script 5 times to get the same info. even though, we should be able to fetch all of that at once, maybe twice04:03
dasm|offyes, that's currently the case.04:04
dasm|offbut the unification is not the source of problem here04:04
dasm|offfor me, running on master branch takes the same time as running with changes.04:05
dasm|offhence i tried to understand if the issue might be caused by network04:05
ysandeepI think you are closer to zuul server and logserver04:06
dasm|offi know what i can try. i can try connecting through some vpn over in Asia04:06
dasm|offysandeep: that would be the case, if you would see the same slowness when connecting through webui04:06
dasm|offif you're experiencing that only through the console, i'm not so sure anymore04:07
ysandeeplets both try to download a file from rdo logserver and see the difference in timing04:08
dasm|offysandeep: can you add "logging.debug(response.url)" to query_zuul_job_details? it should give you the link. you might try opening that in the browser and see how long does it take04:08
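dasm|off's suggestion is a one-line addition inside the query function. A minimal sketch; only the logging line is the actual suggestion, the helper name is made up, and `response` is assumed to be a requests.Response-like object whose `.url` carries the fully resolved query string:

```python
import logging

log = logging.getLogger("ruck_rover")

def log_query_url(response):
    # log the resolved URL (including query parameters) so the exact same
    # request can be pasted into a browser and timed there
    log.debug("zuul query: %s", response.url)
    return response
```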
dasm|offysandeep: ok04:09
dasm|offysandeep: time curl https://review.rdoproject.org/zuul/api/builds?job_name=periodic-tripleo-ci-centos-9-scenario004-standalone-common-master 1>/dev/null 2>&1  04:11
dasm|offtry this04:11
dasm|off> real 0m1.225s  04:11
dasm|offit takes for me about 1.6s top04:11
ysandeeptaking ~4s here04:12
dasm|offso 4x O/04:12
ysandeepreal 0m3.820s  04:12
dasm|off:/04:12
dasm|offcan you open the same link in the browser and see how long does it take04:12
dasm|offin the browser it took me 962ms04:14
ysandeepdasm|off, how are you calculating that?04:14
dasm|offfx or chrome have Developer Tools04:14
dasm|offfor fx it's called "Inspect"04:15
dasm|offthere is a Network tab with info about the time04:15
ysandeepthanks, let me take a look04:16
ysandeepdasm|off, yeah around same time04:17
dasm|off> Finish: 2.39 s04:17
dasm|offthis is what i got right now04:17
dasm|offysandeep: the same, you mean to the console?04:17
ysandeepyes with the console - ~ 4 - 5 secs04:18
dasm|offack04:18
dasm|offysandeep: so, your hypothesis about server proximity is the right one04:18
dasm|offysandeep: can you see how long does it take for master cs9 now?04:19
ysandeepyeah looks like latency is a factor here04:19
ysandeepdasm|off, integration line?04:19
dasm|offyes04:20
ysandeepdasm|off, are you not late for sleep? :D 04:21
dasm|offysandeep: watching "Logan"04:21
dasm|offit's 21:21 here04:21
ysandeepdasm|off, took real 1m9.727s with your reduce parameters patch  04:21
dasm|off> real 0m23.288s  04:22
dasm|off3$ time CURL_CA_BUNDLE="" ./ruck_rover.py 1>/dev/null 2>&1  04:22
dasm|offreal 0m21.220s  04:23
ysandeepreal 0m32.295s with master branch  04:23
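The run times above line up with simple latency arithmetic: if the script issues its zuul builds queries sequentially, wall time grows as request count times per-request latency. A rough model; the per-request figures are the curl timings from earlier in the log, and the request count of 6 is a guess, not measured:

```python
def total_query_time(n_requests, per_request_s):
    # sequential HTTP calls: total wall time is roughly n * per-request latency,
    # which is why the same script runs ~3x slower far from the zuul server
    return n_requests * per_request_s

near = total_query_time(6, 1.2)  # ~1.2s/request close to the server
far = total_query_time(6, 3.8)   # ~3.8s/request observed over the long haul
```

The obvious fix the numbers point at is fewer round trips (batching) or parallel requests, not a faster single call.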
dasm|offysandeep: i'm gonna revisit what i can do with that.04:23
ysandeepdasm|off, I know its late for you.. have a good rest04:23
dasm|offysandeep: so, you're saying -- UX >> simplicity? :)04:24
ysandeepyeah.. imho.. user experience >> simplicity of code04:24
ysandeepi don't mind devs writing complex code as far as the end product is simple to use and UX is great04:25
dasm|offysandeep: would you be able to accept temporary (couple weeks) slowdown of the script, to improve the code, and later allow to reduce its complexity?04:27
dasm|offysandeep: kinda like 2-step approach. one -- make it easier, and later decouple it into smaller functions?04:27
dasm|offlike i mentioned -- right now there is a lot of interdependencies, which i'm trying to untangle04:28
ysandeepabsolutely that works for me04:28
ysandeepwhat do you think of https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/43161/17 as a temporary workaround?04:29
dasm|offysandeep: keeping "influx"? but you mentioned that bigger issue is with "job details"04:30
ysandeepcan we not toggle "job details" based on "influx"04:31
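ysandeep's proposal, paying for the slow zuul query only when influx output is requested, could look roughly like this. Names and structure are assumptions for illustration, not the real ruck_rover.py code:

```python
def collect_jobs_info(job_names, query_job_details, influx=False):
    """Gather per-job info; run the expensive zuul query only for influx."""
    results = []
    for name in job_names:
        info = {"job": name}
        if influx:
            # query_job_details is the slow per-job zuul API call;
            # plain print output does not need its extra fields
            info.update(query_job_details(name))
        results.append(info)
    return results
```

With this shape the print path skips the per-job API calls entirely, which is the bulk of the 22s measured above.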
dasm|offi see04:31
dasm|offysandeep: i'll prepare the change (tomorrow)04:32
ysandeepdasm|off, o/ thanks 04:32
ysandeepgood night 04:32
dasm|offgood day!04:32
*** ysandeep is now known as ysandeep|break04:32
bhagyashrisakahat|ruck, arxcruz|rover let me know once you login for rr sync05:19
*** ysandeep|break is now known as ysandeep05:36
*** bhagyashris is now known as bhagyashris|ruck06:05
jm1happy friday :)06:15
ysandeepjm1: good morning and happy friday o/06:25
soniya29jm1, ysandeep happy friday :)06:32
ysandeepsoniya29: o/ you too06:32
jm1soniya29, ysandeep: o/ 🍻 😄06:37
akahat|ruckbhagyashris|ruck, o/06:42
akahat|ruckbhagyashris|ruck, do you want to sync now?06:43
bhagyashris|ruckakahat|ruck, may be after 30 mins 06:47
bhagyashris|ruckgoing for lunch06:47
*** ysandeep is now known as ysandeep|lunch07:14
arxcruz|roverbhagyashris|ruck i need 10-20 min 07:15
bhagyashris|ruckarxcruz|rover, akahat|ruck i am back07:15
bhagyashris|rucklet me know when we meet?07:21
arxcruz|roverbhagyashris|ruck akahat|ruck https://meet.google.com/hgt-rfdb-qgt?authuser=107:30
bhagyashris|ruckarxcruz|rover, hey i can't hear you...07:32
arxcruz|roverakahat|ruck around?07:38
arxcruz|roverakahat|ruck around?07:58
mariosmay disconnect for a bit ... isp is upgrading my connection hopefully back soon ;)08:15
*** jpena|off is now known as jpena08:45
*** akahat|ruck is now known as akahat09:12
arxcruz|roverbhagyashris|ruck periodic-tripleo-ci-centos-9-scenario010-kvm-internal-standalone-wallaby passed, one less to go 09:17
bhagyashris|ruckarxcruz|rover, yoo thanks!09:17
arxcruz|roverfs39 fails... :(09:17
arxcruz|roverif rlandy|out agrees, we can skip fs39 and promote wallaby c909:18
arxcruz|roveri'll wait fs001 finishes then recheck anyway 09:19
bhagyashris|ruckarxcruz|rover, ack thanks 09:20
mariosthanks ysandeep|lunch for early recheck on my non-voting patch it got through 09:30
*** ysandeep|lunch is now known as ysandeep10:21
ysandeepmarios :D10:23
*** rlandy|out is now known as rlandy10:35
rlandybhagyashris|ruck: arxcruz|rover: want to sync?10:38
rlandybhagyashris|ruck: is dviroel|afk covering today?10:38
rlandyananya still out?10:38
rlandybhagyashris|ruck: looking at failing rhel-9 components10:57
rlandyalso pls fill in promotion status on rr hackmd10:58
rlandyyes - you can skip fs03910:58
rlandyis it related to bug on cix ?10:58
rlandybhagyashris|ruck: https://code.engineering.redhat.com/gerrit/c/tripleo-environments/+/421973 Fix parameter format for registry proxy in 17 and 17.111:03
mariosamoralej: (when you have time please) can you check https://review.opendev.org/c/openstack/python-tripleoclient/+/851366 do you think it would cause any problems for rdo package builds (removed version specific classifiers) 11:12
bhagyashris|ruckrlandy, give me min11:12
amoralejlemme check11:12
amoralejmarios, i think it's not a problem from rdo packaging point of view11:13
amoralejbut i'm not sure if it's a good practice11:14
amoralejlemme take a look11:14
rlandybhagyashris|ruck: fixed the failing jobs I think 11:14
rlandyrekicked what was missing in therhle-9 line11:15
amoralejbtw, you also publish it in pypi, i'm not sure if it can be a problem for pip install it11:15
rlandyneed to join review time11:15
rlandywill sync with you afterwards11:15
*** dviroel|afk is now known as dviroel11:15
mariosthanks amoralej 11:15
amoralejmarios, btw, that will not fix https://bugs.launchpad.net/tripleo/+bug/198300411:16
rlandyreview time11:16
amoralejthat's related to mixing centos8 and centos9 builds11:17
dviroelrlandy: bhagyashris|ruck - yes i will cover ananya today11:18
*** dviroel is now known as dviroel|rover11:18
rlandydviroel|rover: bhagyashris|ruck: on review time11:18
bhagyashris|ruckdviroel|rover, ack11:18
rlandyneed to get ananya's reviews in11:18
reviewbotDo you want me to add your patch to the Review list? Please type something like add to review list <your_patch> so that I can understand. Thanks.11:18
rlandylet's sync in a bit11:19
bhagyashris|rucksure11:19
rlandybhagyashris|ruck: in the mean time, pls update the rr hackmd11:19
rlandywith promotion status etc.11:19
mariosamoralej: thanks i am in call but ... grateful if you have some info add into the bug please? 11:24
mariosamoralej: re (that will not fix )11:24
mariosamoralej: thanks for checking11:24
amoralejmarios, i replied in the review.11:24
amoralejbut i'll also update the bz11:25
amoralejlemme see what those jobs are doing ...11:25
mariosthank you amoralej i can check the review 11:25
mariosamoralej: thanks but it is c9 there replied https://review.opendev.org/c/openstack/python-tripleoclient/+/851366/4#message-7a357ceeef32a16e3b0f215059367c5ba31ba154 11:27
mariosamoralej: like the job has 3.9 installed 11:27
amoralejmarios, exactly, the host is centos9 and the package .el811:28
amoralejso for cs811:28
mariosoh.. .didn't notice that amoralej 11:29
marios:)11:29
amoraleji mean https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_2bc/851240/1/check/tripleo-ci-centos-8-9-multinode-mixed-os/2bcce46/logs/undercloud/home/zuul/install_packages.sh.log11:29
mariospython3-tripleoclient-16.4.1-0.20220728031741.bce8cb7.el8.noarch11:29
amoraleji left a comment in the bug also11:29
amoralejyep, that11:29
mariosi didn't notice it was pulling an el8 package 11:29
mariosthat is the problem then 11:29
amoralejright11:29
mariosthank you amoralej :) i will dig at that then 11:29
amoralejsome issue with repos definition i guess ...11:29
amoralejwhat are those jobs trying to do, btw? :)11:29
amoralejcompute nodes in cs8 and controllers in cs9 i guess?11:30
mariosamoralej: yeah c8 for compute only11:33
mariosamoralej: but undercloud delorean-current is correct https://7e2d13ca2835f2277cf7-527af0e460b15f09b47abe14cd6cea3c.ssl.cf5.rackcdn.com/851366/3/check/tripleo-ci-centos-8-9-multinode-mixed-os/a63148b/logs/undercloud/etc/yum.repos.d/delorean-current.repo11:34
mariosand it has the el9 repo11:34
mariosamoralej: will dig after this call 11:34
mariosamoralej: not sure why it is trying to pull el8 (or even which repo )11:34
marios(and only on tripleoclient :/)11:34
amoralejmmm maybe it's built by dlrn in the same job?11:36
amoralejit seems it doesn't11:39
amoralejweird11:39
rlandybhagyashris|ruck: dviroel|rover: rr sync https://meet.google.com/xow-evuj-adf11:49
*** soniya is now known as soniya|ruck11:52
*** soniya|ruck is now known as soniya|afk11:52
ysandeeprlandy, fyi.. As discussed left a comment for dasm|off here: https://review.rdoproject.org/r/c/rdo-jobs/+/44153/4#message-ea7b7002218ad1c1106e05a8d0644ed664daaae1 11:52
*** frenzy_friday is now known as frenzyfriday|rover11:54
frenzyfriday|roverhey bhagyashris|ruck , I am back11:59
frenzyfriday|roveranything I should look at first?12:00
bhagyashris|ruckmay be downstream 12:00
bhagyashris|rucki just checked the failing jobs and rekicked the jobs12:00
bhagyashris|ruckhaven't got a chance to look deeply12:00
bhagyashris|ruckfrenzyfriday|rover, ^12:01
bhagyashris|ruckfrenzyfriday|rover, here is hackmd https://hackmd.io/H9CSoXvlTm6nTZ4bsJkeRg12:01
dviroel|roverfrenzyfriday|rover: hey, I just started :)12:01
bhagyashris|ruckupdated 12:01
frenzyfriday|rovercool :)12:02
frenzyfriday|roverdviroel|rover, 0/ no worries, I am back now12:02
dviroel|roverit is a rover party \o/  arxcruz|rover frenzyfriday|rover 12:02
frenzyfriday|roverXD !!12:03
akahatbhagyashris|ruck, https://bugzilla.redhat.com/show_bug.cgi?id=211234412:25
rlandyysandeep: thank you12:34
rlandydviroel|rover: frenzyfriday|rover: you guys tag-teaming?12:35
rlandybhagyashris|ruck: frenzyfriday|rover: just did a sync with dviroel|rover 12:35
rlandychandankumar: hey ... still want to meet?12:36
rlandychandankumar: have 20 mins now12:37
rlandylet's try spring you free12:37
chandankumarrlandy: yes 12:37
frenzyfriday|roverrlandy, yep, dviroel|rover was covering me. I am back now12:38
dviroel|roveryeah, will stay rover for a bit12:38
rlandychandankumar: https://meet.google.com/jju-xyba-jkz?pli=1&authuser=012:39
akahatdviroel|rover, o/12:41
akahatdviroel|rover, https://sf.hosted.upshift.rdu2.redhat.com/logs/71/421971/1/check/periodic-tripleo-ci-rhel-9-scenario010-standalone-octavia-rhos-17/3504f78/logs/undercloud/home/zuul/containers-prepare-parameters.yaml12:42
akahatdviroel|rover, may be this caused it: https://opendev.org/openstack/tripleo-quickstart-extras/commit/df634354a6cd79c93d1cb0dd984ae766c687834f12:42
bhagyashris|ruckrlandy, ack12:42
akahatall rhel-9 jobs are affected by this12:42
dviroel|roverakahat: let me check12:42
akahatthis is invalid yaml: "ceph_namespace: docker_ceph_namespace: registry-proxy.engineering.redhat.com/rh-osbs"12:43
ysandeepakahat, rlandy merged a patch to fix ^^12:45
ysandeepakahat, https://code.engineering.redhat.com/gerrit/c/tripleo-environments/+/42197312:45
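The broken line akahat pasted is two mappings jammed into one: in YAML, a plain scalar value cannot itself contain ": ", so a template variable name leaking into the rendered value produces exactly that symptom. A tiny heuristic check, a sketch only (the real fix is the tripleo-environments patch ysandeep linked, and URLs with ": " sequences would false-positive):

```python
def value_looks_like_nested_mapping(line):
    # flags rendered lines such as:
    #   ceph_namespace: docker_ceph_namespace: registry-proxy.engineering.redhat.com/rh-osbs
    # where the template emitted a second "key: " inside the value
    key, sep, value = line.strip().partition(": ")
    return bool(sep) and ": " in value
```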
akahatoh.. 12:46
akahatysandeep, thanks12:47
dviroel|roverysandeep: akahat: yeah, looks like this was the issue, lets wait12:48
akahatdviroel|rover, yes... i was searching upstream.12:48
ysandeeprlandy, dasm|off found something interesting.. https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/environments/enable-secure-rbac.yaml#L2 "EnforceSecureRbac: false" when talking with Pranali (she reached out to me - she wanted to enable some rbac related tempest tests)12:49
ysandeepAre we really enabling srbac completely in fs020-rbac job?12:49
rlandyysandeep: so we have an overall switch12:49
rlandyto enable rbac per job12:49
rlandyper tempest test12:49
*** amoralej is now known as amoralej|lunch12:50
rlandythere may be an special switch12:50
rlandyas some tempest tests are rbac-specific12:50
rlandyor will totally fail 12:50
rlandyto answer your question - dasm|off would know more updated info - as long as the test won't fail, we can def enable it12:51
rlandyshe can testproject check that12:51
ysandeeprlandy, she already did - deployment failed on cinder trying to enable rbac with "EnforceSecureRbac: true"12:51
ysandeephttps://logserver.rdoproject.org/04/40004/13/check/periodic-tripleo-ci-centos-9-ovb-1ctlr_2comp-featureset020-rbac-master/e16c411/logs/undercloud/home/zuul/overcloud_deploy.log.txt.gz12:52
rlandydid she try wallaby?12:52
rlandyysandeep: let's wait for dasm|off to come on line12:52
rlandythen discuss12:52
ysandeepshe tried on master12:52
ysandeepfyi.. this was the test project: https://review.rdoproject.org/r/c/rdo-jobs/+/40004/13//COMMIT_MSG 12:52
ysandeepanyway I have requested to rerun the job once more to confirm its legit bug(looks like it is from traceback) and raise the failure in SRBAC mtg  12:54
ysandeepwe will need security/cinder folks help12:54
rlandyok12:54
ysandeepdasm|off, we need to discuss with security why "EnforceSecureRbac" is still false here: https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/environments/enable-secure-rbac.yaml#L212:55
ysandeepthis switch ^^ toggle rbac for different services12:55
ysandeepto me looks like today we are just adding some custom policies via enable-secure-rbac.yaml for various services but not actually enabling srbac completely.12:56
rlandybhagyashris|ruck: frenzyfriday|rover: when I did a sync with dviroel|rover, we discussed some RDO components that were lagging ... pls will one of you look at master, train, wallaby c8 and c9 components and see what's failing/debug/rerun?12:58
rlandybhagyashris|ruck: also - maybe real issue on 16.2 for common - can you look?12:59
dviroel|roverrlandy: i am looking at rdo, frenzyfriday|rover is looking downstream12:59
rlandygreat12:59
rlandythank you12:59
*** ysandeep is now known as ysandeep|out13:02
rlandychandankumar: can you revote: https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/44261/2/ci-scripts/infra-setup/tenant_vars/common.yml13:05
bhagyashris|ruckrlandy, will check - i am rerunning failing component jobs downstream13:05
dviroel|roverbhagyashris|ruck: https://review.rdoproject.org/zuul/buildset/72dc549e3ea347d680c105e6be50637f - common component in master and wallaby13:12
dviroel|roverbhagyashris|ruck: frenzyfriday|rover: rlandy: need to take the kid to the doc in 10 minutes, he is not feeling well since yesterday13:19
rlandydviroel|rover: np - frenzyfriday|rover is here now13:19
rlandyhope he feels better13:19
rlandyyou can roll off rr13:19
frenzyfriday|roverdviroel|rover, yeah no problem13:19
*** dviroel|rover is now known as dviroel13:20
*** dviroel is now known as dviroel|afk13:25
bhagyashris|ruckdviroel|afk, ack13:26
* bhagyashris|ruck brb13:26
chandankumarrlandy: done, 13:27
rlandybhagyashris|ruck: if you get a chance, pls see comments on https://code.engineering.redhat.com/gerrit/c/openstack/tripleo-ci-internal-jobs/+/40210113:28
rlandyalso needs rebase13:28
arxcruz|roverbhagyashris|ruck rlandy sorry, i went to take the 4th vaccine and it took more time than expected13:41
arxcruz|roverbut i'm back13:41
rlandyarxcruz|rover: np - how are you feeling??13:42
rlandyarxcruz|rover: you can come off rover :)13:43
rlandyyou're done, sir13:43
arxcruz|roverrlandy so far i'm good13:43
arxcruz|roverrlandy i was just rovering to support bhagyashris|ruck since she is alone 13:43
rlandyarxcruz|rover: ok now frenzyfriday|rover is back13:43
arxcruz|roverok13:43
*** arxcruz|rover is now known as arxcruz13:43
rlandythanks for covering13:43
rlandydviroel|afk rekicked wallaby fs03913:44
rlandywill skip promo if no luck there13:44
rlandyand fs001 on c813:44
rlandythose are rekicked13:44
chandankumarmarios: hello13:44
marioschandankumar: o/13:44
chandankumarmarios: https://review.opendev.org/c/openstack/tripleo-ansible/+/850562/2#message-6c74081af5176ffc8e26cc41bf300e05db57489d13:45
chandankumarmarios: I think we need to make tripleo-ci-centos-8-9-multinode-mixed-os https://zuul.opendev.org/t/openstack/build/32a55e3ebeeb4b4da8b273a4aaa2c66a : FAILURE in 1h 38m 23s non voting in tripleo-ci 13:45
chandankumar?13:45
marioschandankumar: looking13:46
chandankumarstderr": "/bin/sh: os-net-config: command not found\n", "stderr_lines": ["/bin/sh: os-net-config: command not found"], "stdout": "", "stdout_lines": []}13:46
chandankumarnot sure it is a real issue13:46
marioschandankumar: but it was green there in tripleo-ansible i mean same review see https://review.opendev.org/c/openstack/tripleo-ansible/+/850562/2#message-610956498d685b80cf7485ed5067dcaf6af9808313:46
marioschandankumar: i know there is a problem with client (and it is non voting there)13:47
marioschandankumar: but it was green everywhere else at least so far13:47
chandankumarmarios: sorry, didnot checked the lastr log13:47
chandankumarlet me recheck it13:47
marioschandankumar: k .. that os-net-config thing is weird though 13:48
marioschandankumar: well so far only once there though see https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ci-centos-8-9-multinode-mixed-os&project=openstack%2Ftripleo-ansible&skip=0 13:49
marioschandankumar: i mean 'there' tripleo-ansible13:49
marios1 fail today 13:49
mariosso if it is a bug its a new one 13:49
marioschandankumar: ^^13:49
chandankumarmarios: thanks for looking into that, my bad, I will let you know the next run result13:49
marioschandankumar: ack thanks13:50
mariosbut the os-net-config this is worrying anyway might be a new bug chandankumar 13:50
marios:/13:50
marioslets see13:50
frenzyfriday|roverbhagyashris|ruck, do you know if we have a bug for Parse container params failure? I can see this in tripleo component dwnstream. 13:51
frenzyfriday|rovercontainers-prepare-parameters.yaml seems to be missing stuff when i compare it to passing jobs13:51
*** dasm|off is now known as dasm13:59
rlandyare we not doing happy friday anymore?14:02
rlandydasm: hello14:02
*** amoralej|lunch is now known as amoralej14:03
rlandydasm: ysandeep|out left some comments on the OVB patch14:04
rlandyconsensus was as we discussed on review time yesterday14:04
rlandyif wallaby works with master of OVB14:04
rlandylet's just do ahead14:04
rlandyand make it a c8, c9 split14:04
rlandyall c9 will go with master OVB14:04
rlandybranchful and branchless14:04
dasmrlandy: ysandeep|out  sweet! i'm gonna go over his comments 14:05
rlandydasm: so you'll have to change the branchful ones to use master as well14:05
rlandyand test 14:05
rlandyand then let us know if that works14:06
dasmack14:06
*** pojadhav is now known as pojadhav|out14:08
rlandyfrenzyfriday|rover: you'll need to add the 17.1 scenario004 jobs to the skiplist14:08
rlandyhttps://opendev.org/openstack/openstack-tempest-skiplist/src/branch/master/roles/validate-tempest/vars/tempest_skip.yml#L129414:10
rlandyfrenzyfriday|rover: ^^ currently failing14:10
rlandyand then rerun with that skip14:10
frenzyfriday|roverrlandy, ack, adding. I am creating a bug for multinode ipa 17.0 on 914:10
frenzyfriday|roverrlandy, bhagyashris|ruck could you pls check if https://bugzilla.redhat.com/show_bug.cgi?id=2112374 looks okay? I havent filed many downstream bugs14:16
bhagyashris|ruckfrenzyfriday|rover, severity i think you can mention 14:27
frenzyfriday|roverack, updated14:29
bhagyashris|ruckfrenzyfriday|rover, rlandy leaving for the day ...14:38
frenzyfriday|roverbhagyashris|ruck, ack - anything I should keep an eye on?14:38
rlandybhagyashris|ruck: thanks14:39
rlandypls leave notes what we should watch14:39
bhagyashris|ruckmaybe downstream, especially 17 on rhel914:39
frenzyfriday|roverbhagyashris|ruck, ack, I'll check the downstream14:40
rlandyfrenzyfriday|rover: checking bug14:40
frenzyfriday|roverrlandy, https://code.engineering.redhat.com/gerrit/c/testproject/+/422009 - rerun sc004 17.1 with skiplist14:40
rlandyfrenzyfriday|rover: skiplist looks good - merging - ty14:40
chandankumarsee ya!14:41
rlandyfrenzyfriday|rover: wrt periodic-tripleo-ci-rhel-9-standalone-on-multinode-ipa-tripleo-rhos-1714:42
rlandyfrenzyfriday|rover: I think that should be fixed14:42
rlandyit was my mistake in the envs file14:43
frenzyfriday|roverrlandy, oh, cool. It should be green after your testproj finishes then. Closing the bug14:43
rlandyfrenzyfriday|rover: you should be able to testproject recheck it now14:44
rlandyI merged the tripleo-environments change14:44
rlandyI am putting the bug on distribution component while you investigate14:44
frenzyfriday|roverrlandy, ack https://sf.hosted.upshift.rdu2.redhat.com/zuul/t/tripleo-ci-internal/builds?job_name=periodic-tripleo-ci-rhel-8-standalone-on-multinode-ipa-tripleo-rhos-17&skip=0 is green again14:44
rlandyfrenzyfriday|rover: you logged the bug on rhel-914:45
rlandyfrenzyfriday|rover: also - I can't edit it14:46
rlandycan you move the component to distribution?14:46
rlandyfrenzyfriday|rover: ok - nvm - you closed it14:47
rlandyall good14:47
rlandyfrenzyfriday|rover: sorry - that was my fault  - I messed up one of the settings values14:47
frenzyfriday|roverrlandy, yep, closed it np. the 9 version of it should also be green in the next run14:48
rlandyyep - hope so14:48
* frenzyfriday|rover checking the rest of the components14:48
rlandyfrenzyfriday|rover: I think the real investigation is common on 16.214:48
rlandythose failures may be legit14:48
rlandycan you check there?14:48
frenzyfriday|roveraye , checking14:50
* jm1 bbl14:50
frenzyfriday|roveralso periodic-tripleo-rhel-8-rhos-17-component-compute-promote-to-promoted-components and some other promote jobs are red14:51
frenzyfriday|roveroh ok, those are criteria failures, sorry14:52
rlandyfrenzyfriday|rover: sooo ... bhagyashris|ruck and you are both rr this week14:56
rlandywhen should I book the CRE team meeting14:56
rlandymaybe thursday when you roll off?14:56
frenzyfriday|roverrlandy, thursday or friday?14:57
rlandyarie doesn't work on friday14:57
frenzyfriday|roverthursday it might be late for Bhagyashri14:57
rlandylet me see if anything is available on thursday14:57
rlandyhmm ... we have retro and planning14:58
* rlandy will need to ask bhagyashri if we do retro wed or thursday14:58
rlandymarios: ^^ do you remember what we chose?14:58
mariosrlandy: yeah we said thursday14:58
mariosrlandy: cancel scrum on thursday and use the slot for retro14:59
mariosrlandy: then do planning immediately after it14:59
rlandyk - let me see if I can find a slot on wed14:59
rlandyfrenzyfriday|rover: ^^ just for kickoff15:00
frenzyfriday|roverrlandy, cool15:00
frenzyfriday|roverrlandy, what is pipeline_component-common-pcci-16.2_dlrn-rhel-8.4-virthost-3cont_2comp_3ceph-ip job? I dont see it in https://sf.hosted.upshift.rdu2.redhat.com/zuul/t/tripleo-ci-internal/jobs15:11
rlandyfrenzyfriday|rover: jenkins jobs15:12
rlandykicked by QE15:12
frenzyfriday|roveroh15:12
rlandythey report to dlrn15:12
rlandyif you need them rerun, I can show you how15:12
rlandywe discuss those on #openstack-pcci15:12
frenzyfriday|roverit hasnt passed in a while, checking what happened15:13
frenzyfriday|roverwhat is the downstream jenkins url?15:13
rlandychatting on internal15:14
mariosrlandy: chandankumar: ysandeep|out: dviroel|afk: o/ please add to your reviews when you have time https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/851427 thanks 15:26
reviewbotDo you want me to add your patch to the Review list? Please type something like add to review list <your_patch> so that I can understand. Thanks.15:26
frenzyfriday|roversc001 and 2 16.2 common has a common standalone deploy failure: Cannot find config file: /etc/puppet/hiera.yaml 15:36
rlandyfrenzyfriday|rover: looks legit15:36
rlandyare you seeing the same error in train common component?15:37
rlandyfrenzyfriday|rover: all the downstream lines have/will promote today15:52
*** jpena is now known as jpena|off15:52
rlandywaiting to see current run on fs001 in wallaby c815:53
rlandythen we can decide if worth it to promo15:53
frenzyfriday|roverrlandy, cool! I updated the rr hackmd with the failures I found for 16.2 common15:53
rlandycool15:53
frenzyfriday|rovercreating bugs in some time15:53
rlandyfrenzyfriday|rover: if you don't see those in train, we can downstream bug it15:53
rlandythe CIX15:54
rlandythen15:54
rlandylunch - brb15:57
rlandyfrenzyfriday|rover: ^^ 15:57
*** marios is now known as marios|out16:02
*** dviroel|afk is now known as dviroel16:14
*** amoralej is now known as amoralej|off16:21
rlandyback16:26
rlandydasm: need anything from me re: ovb?16:27
rlandyrcastillo: you all set with virt?16:27
dasmrlandy: i'm good. thanks16:28
rcastillorlandy: testing here16:32
rcastilloI couldn't figure out how to get my keys in there yesterday. Is there a specific role I need?16:32
rcastilloforgot link https://code.engineering.redhat.com/gerrit/c/testproject/+/42186616:32
rlandyrcastillo: sent you example on pvt16:41
rlandyif keys are not there16:41
rcastillorlandy: thanks16:42
frenzyfriday|roverrlandy, I am leaving for the day. I'll check the components and create bugs for common if they still fail on monday17:24
rlandyfrenzyfriday|rover: ok - thanks17:24
rlandyfrenzyfriday|rover: may skip promo wallaby c817:24
rlandyand c917:24
rlandythey are both lagging17:24
rlandyfrenzyfriday|rover: wallaby c9 is 4 days out17:25
rlandywe need to promote that17:25
rlandyfrenzyfriday|rover: dviroel: https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/44307 Temp remove critera to promo wallaby c8 and c917:36
rlandyfs039 for c9 and fs001 for c817:37
frenzyfriday|roveraye17:37
rlandypls vote17:37
rlandywill revert17:38
frenzyfriday|roverdone17:38
rlandythanks17:39
dviroelrlandy: rdo openstack-component-common seems to be a real issue18:24
dviroelrlandy: i can open a LP bug 18:25
dviroel"stdout": "package openvswitch is not installed"  18:26
dviroelwhy only common?18:26
dviroelaffects master and wallaby (8 and 9)18:27
rlandydviroel: you checking components in RDO?18:27
rlandyananya was looking at 16.2 common18:27
rlandyinteresting 18:28
rlandylooks like both18:28
rlandyif it's common only18:28
rlandywe should compare packages in component-ci-testing and tripleo-ci-testing18:28
dviroelrlandy: i saw this error earlier, and triggered common again, to test a new commit hash18:29
dviroelrlandy: the issue remains18:30
rlandyyep18:30
rlandyok - let's look at the diff18:30
rlandycurrent-tripleo | component-ci-testing  18:32
rlandydiskimage-builder-3.22.1-0.20220712164406.ba88a12.el9 | diskimage-builder-3.22.1-0.20220725044337.50390d0.el9  18:32
rlandyopenstack-selinux-0.8.34-0.20220711202850.a241718.el9 | openstack-selinux-0.8.34-0.20220727173345.c3061f5.el9  18:32
rlandyopenstack-tobiko-0.6.1-0.20220721215903.648f963.el9 | openstack-tobiko-0.6.1-0.20220727131843.b303fa4.el9  18:32
rlandydviroel: best guess is selinux18:32
rlandyTengu mentioned we may be in for problems there18:32
rlandychecking if master has the same18:33
dviroelok18:33
rlandy│ openstack-selinux-0.8.34-0.20220711202841.a241718.el9 │ openstack-selinux-0.8.34-0.20220727174553.c3061f5.el9 │18:33
rlandy│ openstack-tobiko-0.6.1-0.20220721215850.648f963.el9   │ openstack-tobiko-0.6.1-0.20220727131836.b303fa4.el9   │18:33
rlandy└───────────────────────────────────────────────────────┴───────────────────────────────────────────────────────┘18:33
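A package diff like the one pasted above can be reproduced by comparing sorted package lists from the two promotion lines. A minimal sketch, under the assumption that the lists have already been dumped to local files (here seeded with the el9 package names from the paste; in practice they would come from `rpm -qa | sort` on each node or from the repo metadata for the two hashes):

```shell
# Hypothetical local dumps of the two package lists
cat > current-tripleo.txt <<'EOF'
diskimage-builder-3.22.1-0.20220712164406.ba88a12.el9
openstack-selinux-0.8.34-0.20220711202850.a241718.el9
openstack-tobiko-0.6.1-0.20220721215903.648f963.el9
EOF
cat > component-ci-testing.txt <<'EOF'
diskimage-builder-3.22.1-0.20220725044337.50390d0.el9
openstack-selinux-0.8.34-0.20220727173345.c3061f5.el9
openstack-tobiko-0.6.1-0.20220727131843.b303fa4.el9
EOF
# diff exits 1 when the files differ, so guard it for `set -e` shells
diff -u current-tripleo.txt component-ci-testing.txt || true
```

With real `rpm -qa` output, the `+`/`-` pairs point directly at the packages that changed between current-tripleo and component-ci-testing.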
rlandytobiko and selinux18:33
rlandyin both master and wallaby c918:33
rlandy│ openstack-selinux-0.8.34-2.20220712085402.a241718.el8osttrunk │ openstack-selinux-0.8.34-2.20220728085240.c3061f5.el8osttrunk 18:34
rlandysame in 16.218:34
Tenguapparently it's an issue related to a not up-to-date system18:34
Tengurlandy: -^18:34
rlandyTengu: Oh my gosh18:34
rlandyso late on a friday18:35
Tenguso if you update everything, it should be fine.18:35
Tengurlandy: and on PTO18:35
Tengu:)18:35
rlandygo, go18:35
rlandyI was just talking about you - not to you :)18:35
Tengu:D18:35
rlandydviroel: ^^ we probably need a bug with  this info18:36
Tengusorry, my 6th sense got that small tickling :)18:36
rlandynp18:36
Tengurlandy: there's one already18:36
rlandyidk what "not up-to-date system" means exactly18:36
rlandyoh18:36
rlandyok18:36
rlandycan you pass and we'll read the details18:36
Tengurlandy: https://bugs.launchpad.net/tripleo/+bug/1982744/18:36
* rlandy remembers you warning us18:36
rlandyTengu++ thank you18:37
Tengunp18:37
rlandynow pls PTO18:37
Tenguback on Tuesday!18:37
rlandysee you then :)18:37
TenguMonday is Swiss national day ;)18:37
Tengusee you around ;)18:37
rlandydviroel: adding to rr hackmd18:37
dviroelTengu: nice, thanks o/18:37
dviroelrlandy: ack 18:38
* rlandy reads bug18:39
rlandyah comment 2 addresses the openvswitch thing18:39
dviroellots of comments there18:43
dviroeland moved to won't fix18:44
rlandydviroel: there is a workaround 18:46
rlandywdyt?18:46
rlandytry it?18:46
rlandysudo dnf reinstall openstack-selinux container-selinux18:47
rlandysudo rpm -V openstack-selinux18:47
rlandyJust ensure your system is fully up2date before you install container-selinux and then move on to use standalone.18:47
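The three steps quoted above (update everything first, then reinstall the selinux packages, then verify) could be collected into a small script. This is only a sketch: `selinux-workaround.sh` is a hypothetical filename, and since the commands need root on the affected node, the script is written out here rather than executed:

```shell
# Sketch of the workaround quoted from the LP bug discussion;
# not run here (needs root on the affected node).
cat > selinux-workaround.sh <<'EOF'
#!/bin/bash
set -euo pipefail
# 1. Make sure the system is fully up to date first
dnf -y update
# 2. Reinstall the selinux policy packages so their modules re-apply
dnf -y reinstall openstack-selinux container-selinux
# 3. Verify openstack-selinux files against the rpm database
rpm -V openstack-selinux
EOF
chmod +x selinux-workaround.sh
```

Note `rpm -V` prints nothing and exits 0 when the installed files match the rpm database, which makes it usable as the final pass/fail check.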
dviroelhum, maybe18:59
dviroelseems to be an selinux issue, so pinning might work too19:02
jm1have a nice weekend #oooq :)19:17
* rlandy thinks about this19:18
rlandyreverted ci-config patch20:04
dviroelboth promoted :)20:05
rlandytrying to hold node and try the update20:24
* dasm => offline20:46
dasmtalk to you Monday!20:46
*** dasm is now known as dasm|off20:46
dviroelo/20:46
rlandyselinux seems updated21:06
rlandylet's see21:06
rlandyFailed:21:23
rlandy  openstack-selinux-0.8.34-0.20220727174553.c3061f5.el9.noarch 21:23
rlandyugh - I give up21:23
rlandydviroel: ^^ will pick this up another time21:23
rlandylogging out21:23
rlandyhave a great weekend21:23
rlandyand thanks for covering today21:23
dviroelrlandy: have a great weekend o/21:25
* dviroel too late21:25
rcastillohave a great weekend dviroel 21:29
* rcastillo out21:29
dviroelrcastillo: you too21:30
*** dviroel is now known as dviroel|out21:33

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!