Monday, 2022-05-16

05:08 *** ysandeep|out is now known as ysandeep|rover
05:16 <ysandeep|rover> good morning everyone o/
05:18 <marios> \o
05:24 <pojadhav> good morning \o
05:35 <bhagyashris> good morning 0/
05:46 * ysandeep|rover prays tempest failures please be consistent, I don't mind you failing at all but please be consistent.
05:50 <marios> ysandeep|rover: :/ i know this feeling
05:51 <ysandeep|rover> :D
06:27 <pojadhav> marios, hey how did you resolve this error? https://review.opendev.org/c/openstack/tripleo-ci/+/839518/8#message-23c7677c86b0457399ecc41e41ff97834a3f361b
06:28 <pojadhav> could you please guide me too
06:29 <marios> pojadhav: we removed the play since it was unused https://review.opendev.org/c/openstack/tripleo-ci/+/841223
06:29 <marios> pojadhav: where do you see that? rebase should fix
06:30 <pojadhav> marios, ahh okay. my patch was in merge conflict. I rebased it.
06:30 <pojadhav> marios, thanks!
06:30 <marios> pojadhav: ack np
08:02 *** ysandeep|rover is now known as ysandeep|rover|lunch
08:19 <jpodivin> this is awkward, does anyone know what the IRC nick of Jakob Meng is?
08:20 <jpodivin> Can't seem to find it in the lp
08:33 <Tengu> jpodivin: jm1 iirc
08:37 <marios> Tengu: jpodivin: yes correct
08:45 <jm1> ysandeep|rover|lunch: o/
08:45 <jm1> ysandeep|rover|lunch: sync after lunch? :)
08:47 <jm1> jpodivin: hey, saw your mail regarding ci in the ansible openstack collection. wanna talk about your ci issues tomorrow in the tripleo ci community call?
08:48 <jm1> chandankumar: o/ do you have bitwarden's cli tool installed?
08:51 <akahat> hey folks.. i'm not able to understand why the same job is defined in two different places: https://review.rdoproject.org/codesearch/?q=periodic-tripleo-ci-centos-9-scenario010-ovn-provider-standalone-master&i=nope&files=&repos=
08:52 <jpodivin> jm1: sure we can.
08:52 <jpodivin> jm1: although I would rather ask straight away if you don't mind.
08:53 <jm1> jpodivin: sure
09:20 <chandankumar> jm1: hello
09:20 <chandankumar> jm1: yes
09:28 <jm1> chandankumar: have you been able to export the vault using the cli?
09:32 <chandankumar> jm1: not yet, need to try that out
09:58 <jm1> chandankumar: "bw list items" etc. allows us to show all secrets in a collection, "bw get item PUT_ITEM_HERE" is for single items. So "bw list items" allows us to back up our vault, although it is apparently unencrypted without further action. "bw export" does not work, as we expected.
09:58 <chandankumar> jm1: yes, list items gives a lot of data
09:59 <chandankumar> jm1: bw export with the organization option also does not work?
10:01 *** ysandeep|rover|lunch is now known as ysandeep|rover
10:05 <jm1> chandankumar: for me it says "Resource not found."
10:05 <jm1> chandankumar: without the orgid it works, but will only return my own vault
10:06 <chandankumar> oh, okay, not useful then
10:06 <chandankumar> jm1: so the first solution, bw list items, sounds good
10:07 <chandankumar> jm1: thank you for looking into that :-) ++
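
[Editor's note: a minimal sketch of the vault backup approach jm1 and chandankumar settle on above, assuming the Bitwarden CLI ("bw") is installed and logged in; ORG_ID is a placeholder for the real organization id, not something from this log.]

    # unlock once and export the session token (bw unlock --raw prints only the key)
    export BW_SESSION="$(bw unlock --raw)"
    bw sync
    # dump all items of the organization as JSON - note this is plaintext!
    bw list items --organizationid "$ORG_ID" > vault-backup.json
    # as jm1 notes, the dump is unencrypted without further action, so encrypt it at rest
    gpg --symmetric vault-backup.json && rm vault-backup.json
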
10:08 <ysandeep|rover> jm1: I am available for sync now, but if you want we can meet once rlandy|out is in (~30 mins).
10:09 <ysandeep|rover> reviewbot: please add in review list: https://review.opendev.org/c/openstack/openstack-tempest-skiplist/+/841834
10:09 <reviewbot> I have added your review to the Review list
10:11 <jm1> ysandeep|rover: ack, then lets wait for rlandy|out ^^
10:15 *** rlandy|out is now known as rlandy
10:15 <rlandy> jm1: ysandeep|rover: I'm here
10:16 <rlandy> jm1: ysandeep|rover: want to meet?
10:16 <rlandy> jm1: pls nick
10:17 <ysandeep|rover> rlandy, jm1 meet.google.com/cse-yqza-tna
10:21 <ysandeep|rover> rlandy, https://review.opendev.org/c/openstack/openstack-tempest-skiplist/+/841834
10:24 <ysandeep|rover> jm1: hey you around?
10:53 <rlandy> 2022-05-16 10:07:05 |   - Curl error (7): Couldn't connect to server for https://buildlogs.centos.org/centos/9-stream/messaging/x86_64/rabbitmq-38/repodata/repomd.xml [Failed to connect to buildlogs.centos.org port 443: Connection refused]
10:53 <rlandy> 2022-05-16 10:07:05 | Error: Failed to download metadata for repo 'centos9-rabbitmq': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
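
[Editor's note: the paste above is a plain connectivity failure to buildlogs.centos.org; a quick way to check it outside the job is to fetch the same repomd.xml directly.]

    # HTTP headers on success; "Connection refused" reproduces the job error above
    curl -sSI https://buildlogs.centos.org/centos/9-stream/messaging/x86_64/rabbitmq-38/repodata/repomd.xml
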
10:54 <pojadhav> marios, hello I have a question regarding the c7 teardown..
10:54 *** jm1|ruck is now known as jm1|rover
10:55 <pojadhav> if an occurrence of a c7 job name is found, what should be done in that case? for example https://github.com/rdo-infra/ci-config/blob/0ce33c32ecf54f4df2834e6e147989129d380b15/ci-scripts/infra-setup/roles/rrcockpit/files/mariadb/skiplist.py#L36-L37
11:03 <marios> pojadhav: looking
11:04 <marios> pojadhav: i am guessing that is not used any more? (hard coded centos 7 fs 21?) generally speaking we want to remove all the things. for this one i'd leave it till last, i.e. target zuul.d/ files first in all the places, then start following more general cases like this one
11:05 <marios> pojadhav: so for this one, when you get to it, dig and see if that is used anywhere, and if you can't find it ask in the scrum/reviews call
11:05 <marios> pojadhav: if it is no longer used then remove the whole file
11:05 <marios> pojadhav: if it is used, then update that reference to something other than centos-7
11:06 <pojadhav> marios, ack thank you!
11:06 <chandankumar> the cs9 container does not have an /sbin/init command :-(
11:06 <chandankumar> the ubi9 beta image has a different weird issue with molecule
11:07 <chandankumar> so use these shiny containers at your own risk
11:07 <rlandy> chandankumar: need to chat with you after review time re: new initiatives
11:10 <chandankumar> rlandy: ok
11:13 <chandankumar> Has anyone built an init container on top of the centos stream 9 container?
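
[Editor's note: a minimal sketch of an init-enabled CentOS Stream 9 container, under the assumption that the missing /sbin/init simply means systemd is not installed in the base image; untested against the molecule issue chandankumar mentions.]

    # build a CS9 image whose command is systemd's init
    cat > Containerfile <<'EOF'
    FROM quay.io/centos/centos:stream9
    RUN dnf -y install systemd && dnf clean all
    CMD ["/sbin/init"]
    EOF
    podman build -t cs9-init .
    # podman enables its systemd integration automatically when the command is /sbin/init
    podman run -d --name cs9-test cs9-init
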
11:21 *** dviroel|out is now known as dviroel
11:29 <jm1> arxcruz, frenzy_friday, rcastillo: have to attend the cix call today, so will have to cancel our mtg for today
11:30 <arxcruz> jm1 ack, anyway, i'm updating the patches with latest comments
11:30 <arxcruz> also did a round of reviews
11:30 <reviewbot> Do you want me to add your patch to the Review list? Please type something like add to review list <your_patch> so that I can understand. Thanks.
11:30 <frenzy_friday> jm1, no problem for me. I have updated the done cards in miro, will update the will-do-next cards
11:32 <jm1> arxcruz: cool :)
11:33 <jm1> frenzy_friday: oh great, we can still (re)use today's board for our scrum mtg ^^
11:42 <ysandeep|rover> marios, https://code.engineering.redhat.com/gerrit/c/openstack/tripleo-ci-internal-jobs/+/399683/21/zuul.d/standalone-jobs.yaml#215
12:01 *** pojadhav is now known as pojadhav|afk
12:01 <chandankumar> rlandy: want to meet now?
12:01 <chandankumar> regarding new initiatives
12:01 <rlandy> chandankumar: yes ... https://meet.google.com/vwp-ujai-ysv?pli=1&authuser=0
12:07 *** jm1|rover is now known as jm1|ruck
12:07 *** jm1|ruck is now known as jm1|rover
12:10 <ysandeep|rover> rlandy, jm1|rover wohoo we promoted master
12:10 <jm1> ysandeep|rover: what was the path where you could check the neutron error in c8 wallaby?
12:11 <jm1> ysandeep|rover: not the tempest path but the other path
12:11 <rlandy> ysandeep|rover: jm1: yay - we will also promote rhos-17 on rhel-8 finally
12:12 <ysandeep|rover> jm1: nova logs on the compute node - logs/overcloud-novacompute-0/var/log/extra/errors.txt.gz
12:13 <ysandeep|rover> errors.txt has a grep of all the errors
12:13 <ysandeep|rover> jm1, You will see something like - "Instance failed to spawn: nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed"
12:13 <ysandeep|rover> jm1: fyi.. https://bugs.launchpad.net/tripleo/+bug/1964940/comments/4
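
[Editor's note: with a downloaded log tree, the error ysandeep|rover describes can be pulled out of the compute node's pre-grepped error file directly; the path matches the one quoted above.]

    # zgrep searches the gzipped file in place, -n shows line numbers
    zgrep -n "VirtualInterfaceCreateException" \
        logs/overcloud-novacompute-0/var/log/extra/errors.txt.gz
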
12:14 <jm1> ysandeep|rover: am i looking at the wrong pipeline? https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-integration-stable1-cs8&skip=0
12:15 <ysandeep|rover> jm1: that's right, c8 wallaby
12:16 <jm1> ysandeep|rover: because both fs show errors other than VirtualInterf..
12:17 <ysandeep|rover> jm1: that virtual interface one is transient - happens sometimes.. We need to check if the latest hash run is failing with another error in a consistent way.. /me also checking
12:23 <jm1> ysandeep|rover: let me check!
12:24 <ysandeep|rover> jm1: for example https://trunk.rdoproject.org/api-centos8-wallaby/api/civotes_agg_detail.html?ci_name=periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset035-wallaby - if we skip fs001 and fs035, the latest hash 5951e9703b6eae17d1d3cf9312ab1eeb will promote.
12:24 <ysandeep|rover> we only have 2 runs for the hash 5951e9703b6eae17d1d3cf9312ab1eeb, taking a look at fs035
12:25 <ysandeep|rover> here - a bunch of tempest tests failed: https://logserver.rdoproject.org/openstack-periodic-integration-stable1-cs8/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset035-wallaby/78cd2bc/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz
12:25 <ysandeep|rover> but the earlier run was better: https://logserver.rdoproject.org/openstack-periodic-integration-stable1-cs8/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset035-wallaby/df5d9f9/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz
12:26 <ysandeep|rover> jm1: you are right - we don't see the virtual interface creation traceback in https://logserver.rdoproject.org/openstack-periodic-integration-stable1-cs8/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset035-wallaby/df5d9f9/logs/overcloud-novacompute-0/var/log/extra/errors.txt.gz
12:29 <ysandeep|rover> jm1: lately we are seeing a lot of random tempest failures. As we only have 2 runs for the latest hash, I would rerun fs001/fs035 with hash 5951e9703b6eae17d1d3cf9312ab1eeb via testproject to see if those failures are consistent.
12:30 <jm1> ysandeep|rover: ack
12:30 <jm1> ysandeep|rover: will run testproject; if not consistent i will ask you to vote on my criteria patch
12:31 <ysandeep|rover> jm1++ sounds good
12:36 <bhagyashris> rlandy, https://code.engineering.redhat.com/gerrit/c/testproject/+/400570/5#message-ccc3c2bc86cdafc107f61e894ee849e42504889a testproject
12:40 <bhagyashris> https://code.engineering.redhat.com/gerrit/gitweb?p=openstack/tripleo-ci-internal-jobs.git;a=blob;f=zuul.d/standalone-jobs.yaml#l57
12:40 <ysandeep|rover> rlandy, jm1: 17/rhel-8 promoted as well after a long time... \o/ wohoo
12:40 <dviroel> ++
12:45 *** ysandeep|rover is now known as ysandeep|rover|afk
12:55 <jm1> ysandeep|rover++
12:56 <soniya29> chandankumar, rlandy, arxcruz, please review these patches whenever you have time:- https://review.opendev.org/q/topic:tempest_allow_list
13:06 *** pojadhav|afk is now known as pojadhav
13:11 *** ysandeep|rover|afk is now known as ysandeep|rover
13:14 <chandankumar> soniya29: sure
13:15 <chandankumar> soniya29: can you update this one https://review.opendev.org/c/openstack/tripleo-quickstart/+/839725 ?
13:16 <soniya29> chandankumar, sure
13:21 <frenzy_friday> hey dpawlik 0/ I added a patch to use the cirobot user instead of tripleocirobot. Pls add to your review list when you get some time, thanks! https://review.rdoproject.org/r/c/config/+/42891
13:21 <reviewbot> I have added your review to the Review list
13:22 <chandankumar> soniya29: pojadhav you might need to rebase https://review.opendev.org/c/openstack/openstack-tempest-skiplist/+/839588 on this, thanks!
13:22 <pojadhav> chandankumar, sure
13:23 <dpawlik> frenzy_friday++
13:33 <soniya29> chandankumar, okay
13:37 <rlandy> ysandeep|rover: jm1: ha - looks like victoria will promote
13:37 <ysandeep|rover> rlandy, yes testproject just passed
13:37 <rlandy> way to go today
13:37 <rlandy> jm1 must be the good luck charm :)
13:38 <rlandy> I'm rerunning the one failing job on rhos-17 on rhel-9, so that might promo also
13:51 <rlandy> ysandeep|rover: jm1: and 16.2 should clear promo as well today
13:51 <rlandy> we should have a party
13:52 <ysandeep|rover> good to see all these promotions after a fun-filled last week :D
13:53 <ysandeep|rover> rlandy, 16.2 is only waiting for the bm job - it's in rerun
13:53 <ysandeep|rover> oh that just passed as well :D
13:53 <ysandeep|rover> that means we promoted everything in the last 2 days - except c8 wallaby - which jm1 is tracking now
13:54 <jm1> ysandeep|rover: c8 wallaby integration is still running; the c8 wallaby tripleo and network components are failing
13:54 <rlandy> yep
13:55 <rlandy> ysandeep|rover: finally, right???
13:56 <ysandeep|rover> the random tempest failures are still a cause of concern - not all jobs are failing with the neutron bug - I will try to dig deeper to see if there is any service level fault.
13:57 <ysandeep|rover> :) finally yeah - I haven't seen this much green in a while
13:57 <ysandeep|rover> but don't want to jinx it :D fingers crossed
13:58 <jm1> ysandeep|rover: can't we retrigger the wallaby c8 component jobs? used the dlrn hash from the rr tool and it fails with "The DLRN hash or tag is not recognized. The hash or tag should not contain path slashes"
14:00 <ysandeep|rover> jm1, for components just run the job without the hash & force_periodic
14:00 <jm1> ysandeep|rover: okeeee ^^
14:01 <marios> rlandy: scrum time
14:01 <marios> o/
14:01 <marios> folks tripleo-ci
14:01 <marios> scrum time
14:01 <marios> now
14:01 <marios> join
14:01 <dasm|off> o/
14:01 <marios> please
14:01 *** dasm|off is now known as dasm
14:01 <marios> thanks
14:01 <marios> o/
14:02 <marios> \o
14:02 <dasm> i'm gonna join after srbac
14:02 <dasm> ~o~
14:12 <bhagyashris> https://code.engineering.redhat.com/gerrit/c/openstack/tripleo-ci-internal-jobs/+/399683/22/zuul.d/component-jobs.yaml#546
14:27 <dasm> marios: you're right. i should include depends-on for this: https://review.rdoproject.org/r/c/config/+/42619
14:28 <dasm> the dependent patch is already merged. can you revisit your -1, marios?
14:36 <dviroel> https://review.rdoproject.org/r/q/topic:ceph_promotion_pipeline
14:36 <jm1> akahat: thanks for the shellcheck patches! will have a look at the code when i work on my ever growing review backlog again after rr 🙈
14:37 <akahat> jm1, sure. np :)
14:41 <dasm> https://review.rdoproject.org/r/q/topic:pxe_uefi
14:42 <dasm> https://review.opendev.org/q/topic:revive-elastic-recheck
14:49 <ysandeep|rover> rlandy, jm1 fyi.. Victoria and 16.2 promoted as well
14:50 <rlandy> we're having a rocking day
14:54 <akahat> Folks, please review this, important for OVB job runs. https://review.rdoproject.org/r/c/config/+/42899
14:55 <ysandeep|rover> Tengu: I will miss checkpoint today - component promotions are up to date - updated the doc
15:00 <ysandeep|rover> jm1, rlandy: shutdown sequence.. need anything?
15:00 <rlandy> ysandeep|rover: should be fine ... what's the story with wallaby c8?
15:00 <jm1> rlandy: rerun still in progress
15:00 <akahat> chandankumar, rlandy marios ysandeep|rover please +w the above patch ^^
15:01 <ysandeep|rover> rlandy, the last failures were not on the neutron bug.. jm1 has rerun the jobs once again
15:01 <rlandy> ysandeep|rover: anything you want me to merge or watch?
15:01 <jm1> ysandeep|rover: thanks for doing all the work 🙈
15:01 <rlandy> ok - perfect
15:01 <rlandy> jm1: pls let me know the status when you EoD
15:01 <rlandy> and if I need to do something about that
15:01 <ysandeep|rover> rlandy: >> anything you want me to merge or watch? no, all good..
15:02 <ysandeep|rover> akahat, I can check first thing tomorrow o/
15:02 *** ysandeep|rover is now known as ysandeep|out
15:02 <ysandeep|out> jm1, see you tomorrow o/ have a good rest of your day
15:10 <jm1> rlandy: will have a break now while the rerun jobs are in progress but will check back later today
15:10 <jm1> ysandeep|out: have a nice evening :)
15:19 <marios> dasm: it needs a rebase
15:20 <dasm> marios: done.
15:21 <dasm> rebased it 1h ago. it needed one more rebase
15:25 *** dviroel is now known as dviroel|lunch
15:28 <frenzy_friday> dasm, rlandy, card for ER discussions: https://issues.redhat.com/secure/RapidBoard.jspa?rapidView=11751&view=detail&selectedIssue=TRIPLEOCI-1044 (hackmd link in desc)
15:28 <dasm> frenzy_friday: thx.
15:29 <dasm> frenzy_friday: i'll start adding tasks under this epic to show the progress
15:30 <frenzy_friday> dasm, yep sure. i added one for podman compose, feel free to add your face to it and update
15:30 <dasm> k
15:37 *** marios is now known as marios|out
15:44 <rlandy> jm1: ha - fs035 on wallaby c8 is passing in the current integration line run
15:45 <chandankumar> see ya!
15:49 <dasm> rlandy: it sounds like we need a better (automated?) way of checking whether the skiplist is healthy
15:56 <rlandy> dasm: indeed - asked bhagyashris to add an item to the sprint to address this
15:56 <rlandy> per our planning meeting
15:56 <dasm> ack
16:03 <rlandy> dasm: sorry
16:03 <rlandy> how's next week??
16:04 <rlandy> jm1: so for the latest run on wallaby c8 - fs001 is failing - you can check the tempest hit when it reports
16:05 <rlandy> lunch - brb
16:24 *** dviroel|lunch is now known as dviroel
16:24 <jm1> rlandy: fs001 passed on wallaby c8, but fs035 failed with "failed to reach ACTIVE status" https://review.rdoproject.org/r/c/testproject/+/42888
16:25 <rlandy> jm1: lol
16:25 <rlandy> different hashes?
16:25 <jm1> rlandy: so do we stick to our plan, merge the temp criteria patch and revert it once promoted?
16:25 <rlandy> jm1: one sec - checking the hash
16:26 <rlandy> https://trunk.rdoproject.org/api-centos8-wallaby/api/civotes_agg_detail.html?ref_hash=5951e9703b6eae17d1d3cf9312ab1eeb
16:26 <rlandy> jm1: ^^ it should promote on its own
16:27 <jm1> rlandy: wait, does this mean all individual jobs of this hash passed at least once?
16:27 <jm1> rlandy: but never at the same time?
16:27 <rlandy> jm1: it does indeed
16:27 <rlandy> so dlrn will promote
16:28 <jm1> rlandy: dlrn is happy as soon as each one has passed at least once; they don't all need to pass at the same time?
16:28 <jm1> rlandy: so we don't have to do anything?
16:28 <rlandy> jm1: you are free
16:29 <jm1> rlandy: will look at ci degraded next
16:29 <rlandy> jm1: http://promoter.rdoproject.org/promoter_logs/container-push/20220516-154104.log promotion in progress
16:30 <jm1> rlandy: ah okay, i was looking at https://trunk.rdoproject.org/centos8-wallaby/current/, apparently too early ^^
16:30 <rlandy> jm1: need help with anything?
16:30 <jm1> rlandy: will have to check the cards first, so nope :)
16:31 <rlandy> jm1: you can always check the promoter
16:31 <rlandy> and you'll be looking at current-tripleo
16:31 <rlandy> not current
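
[Editor's note: the distinction rlandy makes is that "current" points at the newest DLRN build while "current-tripleo" only moves when promotion criteria pass. Assuming the usual DLRN layout (a delorean.repo under each symlink), the two can be compared like this.]

    # each baseurl embeds the promoted hash, so differing output means "current" is ahead
    curl -s https://trunk.rdoproject.org/centos8-wallaby/current/delorean.repo | grep baseurl
    curl -s https://trunk.rdoproject.org/centos8-wallaby/current-tripleo/delorean.repo | grep baseurl
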
16:31 <rlandy> k - ping if needed
16:33 <jm1> rlandy: sure, thanks :)
16:43 <jm1> rlandy: c8 wallaby component network is failing in step "oooci-build-images : Remove fwupd-redfish.conf file from overcloud-hardened-uefi-full.qcow2", i think we saw this before a couple of times. but i cannot find a bug report. any idea?
16:47 <rlandy> jm1: we did - it cleared up
16:47 <rlandy> is that consistent?
16:49 <dviroel> https://opendev.org/openstack/tripleo-ci/commit/018f3ecf7e2100d873391a354086f17469514ecf
16:49 <dviroel> "Remove fwupd-redfish.conf file from overcloud images" ^
16:54 <jm1> dviroel: thanks! this was submitted 4 months ago 🤔
16:54 <jm1> rlandy: it failed with the same error message on 2022-05-10 and 2022-05-15, but in between it passed sometimes
16:55 <rlandy> jm1: yep - it's transient
16:56 <rlandy> is the network component the only one out?
17:03 <jm1> rlandy: nope, tripleo as well
17:03 <jm1> rlandy: since this error is coming up again and again (even i saw it, although this is only my second shift) i did this quick hack to reproduce it https://review.rdoproject.org/r/c/testproject/+/42902
17:04 <jm1> rlandy: will wait for the tripleo component rerun and then see if this is the same error
17:05 <jm1> rlandy: okay, tripleo component just passed 😂
17:05 <jm1> rlandy: wallaby c8 tripleo component
17:07 <rlandy> jm1: you've probably seen it often enough to track - I'd be interested to see if it shows up again after we promote this current hash
17:07 <rlandy> because we did not see it in the integration line
17:09 <jm1> rlandy: we don't get the output of the virt-customize run, so it's hard to know what is going on. my little hacky patch above adds debug flags, so hopefully we'll see some useful info
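
[Editor's note: a sketch of the kind of debug flags jm1 means, assuming the failing step wraps virt-customize as he says: -v turns on verbose messages and -x traces libguestfs calls. The image name comes from the failing step above; the file path inside the image is a guess, since the log names only the file.]

    virt-customize -v -x \
        -a overcloud-hardened-uefi-full.qcow2 \
        --delete /etc/fwupd/fwupd-redfish.conf  # path is an assumption
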
17:12 * jm1 brb
17:27 <rlandy> ok
18:14 *** rlandy is now known as rlandy|mtg
19:30 *** rlandy|mtg is now known as rlandy
19:37 <dviroel> rlandy: hey o/
19:37 <rlandy> dviroel: hi
19:37 <dviroel> rlandy: considering refactoring container-login to support multiple registries, wdyt?
19:37 <dviroel> this https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/playbooks/tripleo-rdo-base/container-login.yaml
19:39 <dviroel> the idea is to create another playbook that does the same thing but supports multiple registries, and validate it by switching one or two jobs over to test
19:39 <rlandy> dviroel: I have been editing the downstream duplicate of that playbook
19:39 <rlandy> sure
19:39 <rlandy> I have one
19:39 <rlandy> but I don't think my secret is working out
19:39 * rlandy gets
19:39 <rlandy> I need to start work again on that now
19:40 <dviroel> yep, the idea is to work with a loop, and call podman_login just once
19:40 <dviroel> so we don't need to duplicate things
19:40 <rlandy> dviroel: my hacked version: https://code.engineering.redhat.com/gerrit/gitweb?p=openstack/tripleo-ci-internal-config.git;a=blob;f=playbooks/tripleo-rdo-base/container-login-test.yaml;h=d02963f838b0b821858c9b13d16cef2ca3c2c1ad;hb=HEAD
19:41 <rlandy> open to better ideas
19:41 <rlandy> a role would be better
19:41 <rlandy> so we can reuse it
19:43 <dviroel> my idea is to loop over a list of registries and populate 'container_registry_logins', and after that call the podman_login role just once, since it will iterate over 'container_registry_logins'
19:43 <dviroel> tested on my side already ^
19:44 <dviroel> just need to create a smart way to provide the registry + secret info
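
[Editor's note: the rendered effect of the loop dviroel describes, written as plain shell for illustration; registry names and credentials below are made up. Piping the password via --password-stdin keeps secrets out of the process list.]

    # log in to each registry in a comma-separated registry,user,password list
    while IFS=, read -r registry user pass; do
        printf '%s' "$pass" | podman login --username "$user" --password-stdin "$registry"
    done <<'EOF'
    quay.example.com,svc|account1,secret1
    registry.example.com,account2,secret2
    EOF
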
19:46 <rlandy> dviroel: what do you mean by 'smart way'?
19:47 <rlandy> in your loop?
19:48 <jm1> rlandy: instead of debugging these sporadic failures, i will try to revert the initial patch that brought these failures: https://review.opendev.org/c/openstack/tripleo-ci/+/841992
19:48 <rlandy> if we don't need it anymore
19:48 <dviroel> rlandy: a way that isn't painful to provide in our jobs - we have some secrets data hardcoded in our playbook.
19:49 <rlandy> you mean because it's config?
19:49 <dviroel> like this: https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/playbooks/tripleo-rdo-base/container-login.yaml#L36
19:49 <dviroel> the user is hardcoded, and the secret info too
19:49 <jm1> rlandy: that is what i am trying to find out. according to the bz bug it was fixed 6 days ago
19:49 <rlandy> jm1: ok ... if we have the fix
19:50 <rlandy> dviroel: charming, and hard to test config
19:50 <rlandy> we'd have to duplicate a test playbook
19:50 <dviroel> rlandy: yes :)
19:50 <rlandy> and try there
19:50 <jm1> rlandy: the bz bug says it has been fixed in c8s for a month. no worries, i will find out
19:50 <rlandy> ok
19:51 <rlandy> dviroel: if you want you can play downstream
19:51 <rlandy> less chance of leaking info
19:51 <rlandy> and if you do, it's only internal
19:55 <dviroel> rlandy: also a good idea, i can validate with the 2 logins that you need
20:08 * jm1 started a couple of jobs, will sleep while they are running ;) out for today
20:16 <rlandy> dviroel: go4it
20:17 <rlandy> dviroel: I want to hold a node now to try to confirm the secret and the username
20:17 <rlandy> will let you know
20:20 <dviroel> rlandy: the username is still the one with | in it?
20:21 <rlandy> dviroel: ack - I pulled the username out of the secret
20:21 <rlandy> so I could pass it in a job
20:22 <rlandy> at the point of testing that
20:22 <rlandy> I can pass that test to you if you like
20:24 <dviroel> i have a guess
20:24 <dviroel> on https://code.engineering.redhat.com/gerrit/gitweb?p=openstack/tripleo-ci-internal-config.git;a=blob;f=playbooks/tripleo-rdo-base/container-login-test.yaml;h=d02963f838b0b821858c9b13d16cef2ca3c2c1ad;hb=HEAD#l117
20:24 <dviroel> you need extra quotes
20:24 <dviroel> like this: "{'{{ quay_login_secret_name.username }}': '{{ quay_login_secret_name.passwd }}'}"
20:25 <dviroel> on my local tests, it only works this way ^
20:26 <rlandy> dviroel: let me try
20:28 * rlandy digs up test
20:29 <dviroel> you may need another extra quote for the username too, since it has | in it, but you can do that in your job var
20:31 <rlandy> dviroel: I tried: https://code.engineering.redhat.com/gerrit/c/testproject/+/200295/257/.zuul.yaml
20:31 <rlandy> which didn't work
20:31 <rlandy> if I add more quotes, idk
20:32 <rlandy> https://code.engineering.redhat.com/gerrit/gitweb?p=openstack/tripleo-ci-internal-config.git;a=blob;f=playbooks/tripleo-rdo-base/container-login-test.yaml;h=d02963f838b0b821858c9b13d16cef2ca3c2c1ad;hb=HEAD#l117
20:32 <rlandy> so if I added more quotes there??
20:32 <rlandy> not sure
20:33 <rlandy> dviroel: https://code.engineering.redhat.com/gerrit/gitweb?p=openstack/tripleo-ci-internal-config.git;a=blob;f=playbooks/tripleo-rdo-base/container-login-test.yaml;h=d02963f838b0b821858c9b13d16cef2ca3c2c1ad;hb=HEAD#l22 does not have double quotes
20:33 <rlandy> thoughts?
20:34 <dviroel> rlandy: good question, tested here with a test playbook. The problem is passing an ansible var for the user. In this case ^ the user name is hardcoded, so that's why this one works
20:35 <rlandy> let me try one more change on the var in the job
20:36 <rlandy> long shot - but here goes
20:38 <rlandy> dviroel: if you want to take this example to play with, with pleasure :)
20:38 <dviroel> rlandy: yeah, i can work on this, np
20:39 <dviroel> rlandy: without double quotes, tripleo_podman does this: "podman login --username={{ dviroel_user }}             --password=fake_pass             registry2"
20:51 <dviroel> rlandy: will continue with this tomorrow, no worries, i can use downstream to test
20:51 * dviroel going out
20:51 *** dviroel is now known as dviroel|out
20:54 <rlandy> dviroel|out: thank you
20:54 <rlandy> have a good night
20:55 <dasm> dviroel|out: o/
21:12 * dasm goes offline
21:12 *** dasm is now known as dasm|off
21:48 * rcastillo leaving for today
23:05 *** rlandy is now known as rlandy|out
