Thursday, 2022-08-18

05:29 *** ysandeep|out is now known as ysandeep
06:01 *** ysandeep is now known as ysandeep|ruck
06:11 <ysandeep|ruck> hah, lots of failed Gate jobs due to the Ceph version.. ERROR: Container release pacific != cephadm release quincy; please use matching version of cephadm (pass --allow-mismatched-release to continue anyway)
06:11 <ysandeep|ruck> I think they should pass now with ceph-daemon (quincy)
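The gate failure quoted above is cephadm's release guard: the ceph container image must come from the same Ceph release as the cephadm binary, unless --allow-mismatched-release is passed. A rough sketch of that check (the function name and logic here are illustrative, not cephadm's actual code):

```python
class Error(Exception):
    """Stand-in for cephadm's fatal error type (hypothetical)."""
    pass

def check_container_release(container_release, cephadm_release,
                            allow_mismatched_release=False):
    """Refuse to proceed when the container's Ceph release differs from
    cephadm's own release, mirroring the error seen in the gate logs."""
    if container_release != cephadm_release and not allow_mismatched_release:
        raise Error(
            "ERROR: Container release %s != cephadm release %s; please use "
            "matching version of cephadm (pass --allow-mismatched-release "
            "to continue anyway)" % (container_release, cephadm_release))
```

So the jobs were failing because the promoted container was still pacific while cephadm was quincy; once a quincy ceph-daemon container is promoted, the two sides of the comparison match and the guard passes.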
06:16 *** amoralej|off is now known as amoralej
06:32 <marios> ysandeep|ruck: o/ dviroel|afk was going to fix that with the current-ceph promotion, did he leave notes?
06:34 <ysandeep|ruck> marios, I don't see anything in the rr hackmd..
06:34 <ysandeep|ruck> <dviroel> ysandeep|ruck: rcastillo|rover: after merging release file changes, it will break ceph jobs for some minutes, until i get ceph-daemon container promoted via testproject
06:34 <ysandeep|ruck> ^^ this is the only heads-up I got, /me not sure if they promoted via testproject or not
06:35 <ysandeep|ruck> let me try to find testproject patches from dviroel|afk
06:35 <marios> ysandeep|ruck: yeah https://review.rdoproject.org/r/q/project:testproject+status:open+owner:viroel%2540gmail.com
06:35 <marios> looking
06:35 <marios> only recent one is there https://review.rdoproject.org/r/c/testproject/+/36356 but not related i think
06:36 <marios> fpantano: do you know which testproject dviroel|afk was using for promotion to fix the jobs after yesterday's merge for quincy repos?
06:37 <ysandeep|ruck> marios, is this the correct job - periodic-tripleo-ci-promote-ceph-daemon-to-current-ceph-master-test ?
06:38 <ysandeep|ruck> I don't see any recent run here: https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-promote-ceph-daemon-to-current-ceph-master-test%09&skip=0
06:39 <marios> ysandeep|ruck: i guess that is just 'test' based on the name -test
06:39 <marios> ysandeep|ruck: so there must be another one, it is in periodic, lemme check
06:40 <ysandeep|ruck> yeah my bad
06:40 <ysandeep|ruck> https://review.rdoproject.org/zuul/buildset/b618843a4ba44d1fb157028c42dee3a6
06:40 <ysandeep|ruck> tripleo-ci-promote-ceph-daemon-tag-to-current-ceph-master
06:41 <marios> periodic-promote-ceph-daemon
06:41 <marios> ysandeep|ruck: i see that in rdo-jobs ^^ zuul.d/ceph-jobs
06:41 <ysandeep|ruck> marios, https://review.rdoproject.org/r/c/testproject/+/36727
06:41 <marios> ysandeep|ruck: ah cos it was abandoned it didn't show in our search...
06:42 <marios> but ... why
06:42 <ysandeep|ruck> maybe because he was waiting for https://review.opendev.org/c/openstack/tripleo-quickstart/+/852731 to merge first
06:43 <marios> ysandeep|ruck: i guess he didn't need zuul to report, just needed those jobs to run. perhaps they passed and then he abandoned it?
06:43 <marios> ah maybe ysandeep|ruck it didn't merge until 3 hours after the abandon?
06:43 <ysandeep|ruck> marios nah.. it didn't run, see https://review.rdoproject.org/zuul/builds?job_name=tripleo-ci-promote-ceph-daemon-tag-to-current-ceph-master&skip=0
06:43 <marios> but this is no good
06:43 <ysandeep|ruck> last run was aborted
06:43 <marios> it means scen1/4 is broken today
06:44 <ysandeep|ruck> yes. failing everywhere now
06:44 <marios> so we should revert https://review.opendev.org/c/openstack/tripleo-quickstart/+/852731 in this case :/ but hope it doesn't take all day to merge
06:45 <marios> ysandeep|ruck: posted it
06:45 <marios> ysandeep|ruck: may as well get zuul running on it in case we go with it
06:45 <ysandeep|ruck> thanks! let me check deeply if we can promote the new containers instead
08:34 *** ysandeep|ruck is now known as ysandeep|lunch
08:42 <ibernal> Hi, Ignacio here from the CRE team. I am trying to figure out why a particular job I am testing is failing on the Component Pipeline, but it's not displaying any logs on the build page in zuul.
08:42 <ibernal> https://sf.hosted.upshift.rdu2.redhat.com/zuul/t/tripleo-ci-internal/build/f5a54f984c6d40de99cad04a98dfec3f/artifacts
08:42 <ibernal> Any tips on this one?
08:47 <marios> ibernal: from a quick look it might be due to the parenting. previously the job called periodic-tripleo-ci-rhel-8-scenario012-standalone-rhos-16.2, which you are parenting your job from, had 2 definitions (so multiparenting in place for child jobs of that)
08:47 <marios> ibernal: but now you removed one of those 2 definitions for periodic-tripleo-ci-rhel-8-scenario012-standalone-rhos-16.2
08:48 <marios> ibernal: what do you need? is it just to run tripleo-ci-rhel-8-scenario012-standalone-rhos-16.2 ?
08:49 <ibernal> Hey Marios, basically yes
08:49 <marios> ibernal: k sec, fetching link for you (familiar with testproject? we have an internal one too)
08:50 <ibernal> Yes, I have used it before to trigger this job.
08:50 <ibernal> Ideally I would like to trigger my job with tripleo-ci-rhel-8-scenario012-standalone-rhos-16.2 as a parent
08:51 <ibernal> in case I want to customize it for my needs in the future
08:51 <marios> ibernal: ok, do you need me to give pointers for testproject?
08:51 <ibernal> No need for pointers, thanks!
08:52 <marios> ibernal: ok
08:55 <ibernal> thank you Marios
08:55 <ibernal> marios++
08:57 <marios> np ibernal
09:16 <marios> o/ tripleo-ci: reviews needed please https://review.rdoproject.org/r/c/rdo-jobs/+/44067 context/testing in the commit message, thank you
09:16 <reviewbot> Do you want me to add your patch to the Review list? Please type something like add to review list <your_patch> so that I can understand. Thanks.
09:17 * bhagyashris stepping out for hr
09:30 <marios> akahat: see pvt when you get a chance, thanks
10:05 *** ysandeep|lunch is now known as ysandeep
10:07 *** ysandeep is now known as ysandeep|ruck
10:13 <ysandeep|ruck> marios, ack checking
10:22 <ysandeep|ruck> w+'ed
10:23 <ysandeep|ruck> marios, ewww. https://review.rdoproject.org/r/c/rdo-jobs/+/44067/7#message-69efa41e2a16a9e69963c2f23ad325dae974f295
10:23 <ysandeep|ruck> this should go first.. https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/853167
10:24 <marios> ack yeah, looks like the depends-on blocked it, thanks for the review and w+
10:24 <marios> we can recheck once the depends-on merges
10:24 <marios> thank you
10:24 <marios> ysandeep|ruck: ^^
11:00 <marios> akahat: o/ also please see comments @ https://review.rdoproject.org/r/c/rdo-jobs/+/44415/2#message-37e348601bf18ce7f7bc2ec92590a36e09c3278c
11:00 <marios> akahat: we can discuss if we set up a call for tomorrow
11:04 <akahat> marios, o/ looking in
11:09 <ysandeep|ruck> Tengu, hey o/ with nftables sc001 finished within 1 hr 37 mins.. earlier it was ~2 hours
11:09 <ysandeep|ruck> https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-9-scenario001-standalone-master&result=SUCCESS&skip=0
11:09 <Tengu> ysandeep|ruck: \o/
11:09 <ysandeep|ruck> Tengu, what's the task name now - earlier it was tripleo_firewall : Manage firewall rules | standalone | 0:43:13.707101 | 2526.37s
11:09 <marios> ysandeep|ruck: Tengu: that sounds like a good improvement (should be more really, it was like 40 mins)
11:09 <ysandeep|ruck> here are the logs: https://logserver.rdoproject.org/54/31954/67/check/periodic-tripleo-ci-centos-9-scenario001-standalone-master/b3188ea/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz
11:09 <marios> right
11:10 <marios> nice work Tengu ysandeep|ruck
11:10 <ysandeep|ruck> this one - Manage rules via nftables | standalone | 0:01:07.336244 | 0.03s ?
11:15 <ysandeep|ruck> Tengu marios, https://paste.openstack.org/show/biFnu48y5rdxkKUJ8rxP/ :)
11:15 <Tengu> :)
11:15 <Tengu> told ya
11:15 <marios> ysandeep|ruck: please copy/paste that into the bug or add it as an attachment to a comment?
11:15 <Tengu> the way I implemented nftables makes it soooo much faster :)
11:16 <ysandeep|ruck> marios, yes I will..
11:16 <marios> Tengu: does this go back to the time you said 'consolidation'?
11:16 <marios> Tengu: and you had divine inspiration as well as jedi knighthood?
11:16 <ysandeep|ruck> Tengu, 2 sec is super duper fast, and that's ansible taking most of those 2 secs
11:17 <marios> ysandeep|ruck: yeah but how can we make it faster? /me pokerface
11:17 <marios> :D
11:18 <ysandeep|ruck> hahaha, I will not say get rid of ansible and bring in directord (don't want to bring that conv up again) :D
11:18 <Tengu> marios: :)
11:19 <Tengu> well, we probably can make it a bit faster by running ansible from localhost instead of relying on SSH...
11:19 <Tengu> and I say that with some seriousness....
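Tengu's half-serious suggestion would amount to switching the node to Ansible's local connection plugin, so the play skips the SSH transport entirely. A minimal sketch of what that could look like in an inventory (hostname, group, and interpreter path are illustrative, not the actual CI inventory):

```ini
# hypothetical inventory override for a standalone deploy
[standalone]
undercloud ansible_connection=local ansible_python_interpreter=/usr/bin/python3
```

`ansible_connection=local` runs tasks in-process on the controller host, which saves the per-task SSH connection setup when the controller and target are the same machine.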
11:21 <Tengu> though it actually takes more than 2s - there's the whole templating generation beforehand, if we really want to consider everything. Still, we're somewhere under 30 seconds to get *everything*.
11:21 <Tengu> without any risk of lockout.
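The "consolidation" marios asks about is the core of the speedup: with iptables, rules were applied one at a time, forking one iptables process per rule (which is where the old ~40 minutes went); with nftables the whole ruleset can be templated into a single file and loaded in one `nft -f` transaction. A rough sketch of such a generated file (table, chain, and rule contents are illustrative, not tripleo_firewall's actual output):

```
# /etc/nftables/tripleo-rules.nft -- hypothetical generated ruleset
table inet tripleo {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport { 22, 443 } accept
    icmp type echo-request accept
  }
}
```

Loading this with `nft -f` is atomic: either the whole file parses and replaces the previous rules, or nothing changes — which is also why Tengu can say there is no risk of lockout mid-apply.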
11:24 <marios> Tengu: sanity check, we already discussed this but to be clear, this is only ci right?
11:24 <marios> Tengu: default is not yet sorted until the upgrade from iptables to nftables
11:24 <Tengu> marios: default is still set to iptables, yes.
11:25 <marios> ysandeep|ruck: did we enable this for all jobs or only scen1?
11:25 <Tengu> we're still testing, and we must wait for the new OC image to be built with nftables - and we may want to ensure https://review.opendev.org/c/openstack/tripleo-ansible/+/853252 is in as well.
11:25 <Tengu> there are some depends-on that are pretty strict.
11:25 <Tengu> marios: https://review.opendev.org/c/openstack/tripleo-heat-templates/+/852808
11:26 <Tengu> and then there's also https://review.opendev.org/c/openstack/tripleo-ansible/+/853282 (depends-on the 852808)
11:26 <ysandeep|ruck> marios, tested with testproject for now.. Will be default once ^^ merges.. need more testing atm.
11:26 <Tengu> ysandeep|ruck: btw - what's the OC image status? is it available with nftables now?
11:26 <marios> ysandeep|ruck: Tengu: ok so not wired up for jobs yet
11:26 <marios> thanks
11:27 <Tengu> marios: "still compiling" :)
11:27 <Tengu> but we're on the right track.
11:27 <Tengu> really
11:27 <Tengu> the nftables thing is faster (we can see it in the tests), and also more secure (hence the many tests we're conducting)
11:27 <ysandeep|ruck> Tengu, yes, it should be available now - at least for the tripleo-ci-testing hash.. let me crosscheck quickly
11:27 <Tengu> ysandeep|ruck: thanks!
11:31 <Tengu> ysandeep|ruck: quick question: what's this? fatal: [subnode-1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: kex_exchange_identification: read: Connection reset by peer\r\nConnection reset by 10.209.128.68 port 22", "unreachable": true}
11:31 <Tengu> mainly: what's this IP? is it something running on the OC, or is it some CI internal service?
11:32 <Tengu> ah. that IP is used on the OC node. ook.
11:33 <Tengu> it's not the ctlplane though.
11:33 <Tengu> is it playing the public network IP maybe?
11:33 <ysandeep|ruck> we are randomly seeing that everywhere.. even in content provider
11:33 <ysandeep|ruck> Tengu, see this https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_649/852532/5/gate/tripleo-ci-centos-9-content-provider/649da27/job-output.txt
11:33 <ysandeep|ruck> 2022-08-18 01:14:06.982440 | primary | failed: [undercloud] (item=quay.io/prometheus/node-exporter:v1.3.1) => {"ansible_loop_var": "item", "item": "quay.io/prometheus/node-exporter:v1.3.1", "msg": "Failed to connect to the host via ssh: kex_exchange_identification: Connection closed by remote host\r\nConnection closed by 127.0.0.2 port 22", "unreachable": true}
11:34 <ysandeep|ruck> even with 127.0.0.2
11:34 <Tengu> ysandeep|ruck: oh, ok. so not really something related to the switch... or are all of them related to the nftables tests?
11:34 <Tengu> fun... that one shows iptables at play while it should be nftables: https://8e74bd9f786f5b8b89c6-371b85667fbec8e83bb96281e5edcc13.ssl.cf2.rackcdn.com/853252/3/check/tripleo-ci-centos-9-containers-multinode/c6e5b64/logs/subnode-1/var/log/extra/nftables.txt
11:35 <Tengu> apparently the switch isn't correctly done.
11:35 <Tengu> oh. dang. how wrong.
11:35 <Tengu> that's not the switch patch
11:35 <ysandeep|ruck> Tengu, I don't think they are related to nftables because we are seeing it on upstream check/gate as well - and even in content provider.. looks like some ssh/ansible issue
11:36 <Tengu> ysandeep|ruck: ok - and the job is just ensuring nftables IS installed (https://review.opendev.org/c/openstack/tripleo-ansible/+/853252) so yeah, nothing to worry about.
11:36 <ysandeep|ruck> Tengu, we have reported a bug, but because the issue is not consistent (very random) we could not make any progress yet: https://bugs.launchpad.net/tripleo/+bug/1986708
11:36 <Tengu> speeeaking of that, I'll have to change the default engine in tripleo-ansible as well.
11:36 <Tengu> ysandeep|ruck: 'k.
11:39 *** dviroel|afk is now known as dviroel
11:42 <ysandeep|ruck> Tengu, interesting - https://review.opendev.org/c/openstack/tripleo-puppet-elements/+/853224 merged 2 days ago but I can see nftables getting installed earlier as well.
11:42 <ysandeep|ruck> https://logserver.rdoproject.org/openstack-periodic-integration-main/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-centos-9-buildimage-overcloud-hardened-uefi-full-master/7a86e31/overcloud-hardened-uefi-full.log
11:42 <ysandeep|ruck> 2022-08-01 00:55:12.774 |  nftables  x86_64  1:1.0.4-2.el9  quickstart-centos-base  399 k
11:47 <Tengu> ysandeep|ruck: added a comment on the LP about the connections.
11:47 <Tengu> may have found something.
11:48 <Tengu> ysandeep|ruck: well, nftables wasn't installed on the OC cs9, that's for sure.... unless it became a dependency in the meanwhile, but I doubt it :/
11:50 <akahat> ysandeep|ruck, o/ is this a known issue: https://8b5ec10dd250ff86da53-b37c466c2c0a96beb3466c3b3d4e1c87.ssl.cf2.rackcdn.com/852733/2/gate/tripleo-ci-centos-9-undercloud-upgrade/3a63ea6/logs/undercloud/home/zuul/undercloud_upgrade.log ?
11:52 <Tengu> ah.... maybe it's related to puppetlabs-apache
11:52 <ysandeep|ruck> Tengu, >> nftables wasn't installed on the OC cs9 - based on the overcloud images it was getting installed as a dependency earlier
11:52 <ysandeep|ruck> probably nftables was not available on jobs which don't use overcloud images at all.
11:52 <Tengu> akahat: care to ping tkajinam on #tripleo?
11:52 <Tengu> just to be sure.
11:52 <ysandeep|ruck> like the container multinode job.
11:52 <Tengu> ysandeep|ruck: weird. well. anyway. now we're 100% sure it's present.
11:53 <Tengu> and with the tripleo-ansible/tripleo_bootstrap, that will lock the issue for good.
11:53 <ysandeep|ruck> yes, tripleo_bootstrap will fix the issue for the container multinode job :)
11:54 <Tengu> ysandeep|ruck: just a matter of getting the patch in -.-
11:55 <ysandeep|ruck> Tengu, let me test more jobs with your patches just to be doubly sure everything is fine, and we can chat about some of the dropped packets on OC - maybe tomorrow
11:56 <Tengu> ysandeep|ruck: sure
12:24 <ysandeep|ruck> Tengu, fyi.. this may fix that ssh kex_exchange_identification issue: https://review.opendev.org/c/openstack/project-config/+/853536
12:25 <ysandeep|ruck> ^^ merged, we are keeping an eye on it
12:26 <Tengu> aha.
12:26 <Tengu> ssh scan. maybe we can tweak a firewall rule for that.
12:26 <Tengu> :]
12:27 <Tengu> oh. yes. we can. of course!
12:27 <Tengu> we can allow SSH from known CI networks; then play with the burst setting to slow the scan attempts.
12:27 <Tengu> :D
12:28 <Tengu> that would be soooo much fun to do
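Tengu's idea, sketched as nftables rules (the CIDRs, chain name, and limits are placeholders, not an actual tripleo_firewall config): accept SSH unconditionally from the known CI networks, and rate-limit everyone else so a scanner quickly trips the limit:

```
table inet tripleo {
  chain ssh_guard {
    type filter hook input priority -10;
    tcp dport 22 ip saddr { 203.0.113.0/24, 198.51.100.0/24 } accept  # CI ranges (placeholders)
    tcp dport 22 limit rate 3/minute burst 5 packets accept           # everyone else, throttled
    tcp dport 22 drop                                                 # over the limit
  }
}
```

The `burst` allows a short run of legitimate connection attempts through before the steady `rate` applies, so normal Ansible reconnects pass while a kex-probing scan slows to a crawl.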
12:56 <pojadhav> scrum in 4 mins: arxcruz, rlandy, marios, ysandeep|ruck, bhagyashris, svyas, soniya29, pojadhav, akahat, chandankumar, frenzy_friday, anbanerj, dviroel, rcastillo, dasm, jm1
13:03 <pojadhav> reviews: https://review.opendev.org/c/openstack/tripleo-heat-templates/+/853336
13:40 <ysandeep|ruck> chandankumar, hey o/ could you please send an update on the vexx ticket to see if they found anything new on that retry issue?
13:40 <ysandeep|ruck> I don't have access to vexx atm
13:41 <chandankumar> ysandeep|ruck: can you pass me the bug link
13:42 <chandankumar> https://bugs.launchpad.net/tripleo/+bug/1983817 this one
13:51 <chandankumar> dviroel: do you have access to vexxhost tickets?
13:53 <dviroel> chandankumar: i don't think so
13:53 <chandankumar> ok
14:15 <ysandeep|ruck> rcastillo|rover, dviroel soniya29 fyi.. https://bugs.launchpad.net/tripleo/+bug/1986960 reported a new bug
14:15 <dviroel> ysandeep|ruck: thanks
14:17 <soniya29> ysandeep|ruck, thanks
14:17 <ysandeep|ruck> marios, chandankumar for your next review cycle, we need https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/853652 to solve gate blocker https://bugs.launchpad.net/tripleo/+bug/1986960
14:22 <ysandeep|ruck> reviewbot, please add in review list: https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/853652
14:22 <reviewbot> I have added your review to the Review list
14:31 <marios> ysandeep|ruck: ack, will check after call
14:31 <ysandeep|ruck> thanks!
14:32 <dviroel> ysandeep|ruck: i am planning to W+ today if this works
14:33 <dviroel> lets see
14:36 <ysandeep|ruck> lgtm to fix the immediate gate blocker - I have left our conversation as a comment in case we want to follow up with a better fix later
14:36 * dviroel rebooting
14:38 *** ysandeep|ruck is now known as ysandeep|afk
14:57 *** ysandeep|afk is now known as ysandeep
14:59 <ysandeep> rcastillo|rover, dviroel do you need anything before I leave for the day?
15:01 * ysandeep will be on PTO tomorrow - I want to rest for a day after a well fought rr week :D (just kidding - taking my comp off for the extra day last week.)
15:04 <dviroel> ysandeep: don't think so, thanks
15:05 <ysandeep> dviroel, thanks!
15:05 *** ysandeep is now known as ysandeep|PTO
15:05 <marios> ysandeep|PTO: have a good rest o/
15:06 <ysandeep|PTO> marios: Happy vacation! see you in 2 weeks :)
15:07 <marios> dviroel: maybe you can workflow that later? https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/853652/2#message-ed055f681aa1aec89fa7e1efe98b47254ee7fb71
15:07 <marios> ysandeep|PTO: ack on the review request ^^ thanks
15:08 <marios> thanks ysandeep|PTO o/ see you soon :)
15:08 <dviroel> marios: yep sure
15:08 <marios> dviroel: seems to be a wallaby blocker, info on the commit message, just waiting on zuul to report
15:09 <chandankumar> see ya!
15:09 <chem> marios: hey o/ ... I was wondering if there is a scenario with oc deployment which works with master besides 000 ... it seems that having ceph in the resource_registry just leads to an error
15:09 *** dasm|off is now known as dasm
15:09 <dasm> o/
15:10 <chem> marios: sorry for the distraction ... just wondering if you've bumped into that error already
15:10 <chem> namely that one -> "tripleoclient.exceptions.InvalidConfiguration: Ceph deployment is not available anymore during overcloud deploy."
15:15 <marios> there were some ceph related blockers today
15:16 <marios> perhaps you are hitting some version of https://bugs.launchpad.net/tripleo/+bug/1986960 ?
15:16 <marios> chem: not sure what you mean with 18:09 < chem> marios: hey o/ ... I was wondering if there is a scenario with oc deployment which work with master beside 000 ... it seems that having
15:16 <marios> chem: otherwise probably fpantano is the one to ask about the ceph errors (not something i've seen personally but as i said there were some scen1/4 ceph breakages/gate today)
15:18 <marios> chem: off in a few mins, but if you want/it's helpful we can talk again in our morning tomorrow
15:18 <chem> marios: ack, I'll try to check with fpantano and might open a ticket
15:18 <chem> marios have a good evening
15:20 <dviroel> chem: ack, fultonj on #tripleo can help you with that too
15:22 <fpantano> o/
15:22 <chem> fpantano: hey
15:22 <fpantano> chem: as per that message, ceph deployment is now happening before the overcloud deployment
15:23 <fpantano> and there's a tripleoclient patch that prevents you from deploying ceph (or adding any ceph stuff) if it's not deployed already
15:23 <fpantano> and quickstart/CI has been changed accordingly
15:23 <chem> fpantano: https://review.opendev.org/c/openstack/python-tripleoclient/+/839727 is the one. But I was wondering if there is a usable multinode container scenario that I can use for upstream testing?
15:23 <fpantano> yeah that one, mmmm not aware of multinode + ceph
15:24 <fpantano> ceph is deployed in scenario001 and scenario004, but they're standalone
15:24 <chem> fpantano: currently only 000 and 007 don't have it ... but they don't have enough other things ... so for instance scenario010 multinode containers is broken
15:25 <fpantano> because of ceph? ^ /me not familiar with that scenario, I have to check
15:25 * fpantano checking
15:25 <chem> fpantano: it has some ceph definitions in the resource registry so "bam"
15:26 <chem> fpantano: same for scenario001-multinode-containers
15:27 <fpantano> yeah, we should compare it w/ 001 or 004, or remove ceph from there if it's not relevant (not used in CI)
15:27 <chem> fpantano: well if one (like only me it seems :)) uses them, they get the error message. So might be worth a lp and a patch then?
15:27 *** amoralej is now known as amoralej|off
15:27 <marios> you too chem o/
15:28 <fpantano> yeah
15:28 <chem> fpantano: I might be able to do both if you kindly review :)
15:28 <fpantano> let's have a lp and some logs, I guess the problem is not having scenario010 aligned w/ 004 and 001
15:29 <chem> fpantano: yeah, I'll play the seven diff ... thanks for the pointer and I'll look into them (might be better than removing any occurrence of ceph :))
15:29 <fpantano> chem: of course
15:30 *** marios is now known as marios|out
15:31 <chem> fpantano: could it be that not having CephClient defined might be enough?
15:32 <chem> fpantano: oki, it seems I've got the critical one ... should be able to test something soon
15:32 <fpantano> mm I don't think so
15:33 <chandankumar> marios|out: ysandeep|PTO dviroel new password for dlrn user added to bitwarden
15:33 <chandankumar> akahat: ^^
15:33 <fpantano> the condition is on some resource_registry services definition + the deployed_ceph parameter
15:35 * dviroel lunch - brb
15:40 <chem> fpantano: oki, the lp is https://bugs.launchpad.net/tripleo/+bug/1986974
15:40 <chem> fpantano: code should follow
15:43 <fpantano> ++
15:43 <fpantano> thanks chem
15:45 <chem> fpantano: so DeployedCeph seems to be the important parameter
15:46 <fpantano> yes, it's part of the && condition
15:46 <fpantano> because when you run openstack ceph deploy .... -> it sets the parameters for you
15:46 <fpantano> and then you include those params in the heat stack
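The condition fpantano describes can be sketched as follows. This is an illustrative reimplementation, not python-tripleoclient's actual code: the service names and helper are assumptions, but it models the behaviour chem hit — ceph services in the resource_registry combined with an unset DeployedCeph parameter trigger the InvalidConfiguration error from the log above.

```python
class InvalidConfiguration(Exception):
    """Stand-in for tripleoclient.exceptions.InvalidConfiguration."""
    pass

# Illustrative subset of ceph-related service keys (hypothetical list).
CEPH_SERVICES = (
    "OS::TripleO::Services::CephMon",
    "OS::TripleO::Services::CephOSD",
    "OS::TripleO::Services::CephClient",
)

def check_deployed_ceph(resource_registry, parameters):
    """Reject overcloud deploys that pull in ceph services unless the ceph
    cluster was already deployed (DeployedCeph set, e.g. by the parameter
    file that `openstack overcloud ceph deploy` generates)."""
    wants_ceph = any(svc in resource_registry for svc in CEPH_SERVICES)
    if wants_ceph and not parameters.get("DeployedCeph", False):
        raise InvalidConfiguration(
            "Ceph deployment is not available anymore during overcloud deploy.")
```

This is why scenario010's multinode environment "bam"s: it still carries ceph entries in its resource_registry but never runs the separate ceph deploy step that would set DeployedCeph.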
15:46 <chem> fpantano: now I'm wondering if changes are needed in tripleo-ci (or somewhere) for explicitly running the ceph deployment ...
15:47 <fpantano> I think so
15:47 <fpantano> you can check how sc001 and sc004 are set
15:48 * fpantano not too familiar w/ that setup
15:48 <fpantano> but marios || dviroel can help
15:48 <chem> fpantano: do you have a featureset number?
15:48 <fpantano> it should be here https://github.com/openstack/tripleo-ci/blob/master/zuul.d/standalone-jobs.yaml
15:49 <fpantano> mmmm nope :/
15:51 <chem> featureset: '052'
15:51 <chem> standalone_ceph: true  looks nice :)
15:52 <chem> or not ... god
15:57 <chem> fpantano: hum ... doesn't look good, ceph-install.yaml in oooq-extras is only there for roles/standalone, no such thing for roles/multinodes ..
15:58 <chem> fpantano: might be a bit too much for a quick hack session ...
15:58 <fpantano> nice, we need more patches in quickstart then
15:59 <fpantano> we can add support, but I guess it's not so quick
15:59 <chem> fpantano: https://opendev.org/openstack/tripleo-quickstart-extras/src/branch/master/roles/standalone/tasks/ceph-install.yml needs to be ported to multinode I guess
16:00 <chem> fpantano: as it looks like I'm the only consumer of this, it might take a while :)
16:00 <chem> fpantano: currently trying with any ceph reference removed ... it seems to go further but ... well
16:42 *** jpena is now known as jpena|off
18:54 * dasm => going for a walk
19:16 *** rcastillo|rover_ is now known as rcastillo|rover
19:16 <rcastillo|rover> power went out, couldn't join the review meeting
19:16 <rcastillo|rover> big storm here, if I'm gone that's why
19:17 <dviroel> rcastillo|rover: ack, np
19:47 * dasm is back
20:05 <dviroel> rcastillo|rover: https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/853652 is in the gates, we need this one merged to fix wallaby gates
20:21 <rcastillo|rover> dviroel: ack, will track
20:32 *** dviroel is now known as dviroel|afk
22:08 * dasm => offline
22:08 *** dasm is now known as dasm|off
22:08 <dasm|off> o/
23:13 *** rlandy_ is now known as rlandy|out
23:13 *** rcastillo|rover is now known as rcastillo

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!