Friday, 2019-09-27

*** dsneddon has joined #oooq00:12
*** dsneddon has quit IRC01:16
*** dsneddon has joined #oooq01:17
*** dsneddon has quit IRC01:22
*** Goneri has joined #oooq01:30
*** Goneri has quit IRC01:39
*** dsneddon has joined #oooq01:55
*** Goneri has joined #oooq02:13
*** weshay has quit IRC02:17
*** dsneddon has quit IRC02:42
*** dsneddon has joined #oooq02:45
*** Goneri has quit IRC02:47
*** ykarel has joined #oooq02:55
*** rfolco has quit IRC03:13
*** dsneddon has quit IRC03:50
*** dsneddon has joined #oooq03:57
*** dsneddon has quit IRC04:02
*** ratailor has joined #oooq04:14
*** surpatil has joined #oooq04:19
*** rlandy|biab has quit IRC04:30
*** soniya29 has joined #oooq04:35
*** skramaja has joined #oooq04:36
*** dsneddon has joined #oooq04:38
*** ratailor has quit IRC04:42
*** ratailor has joined #oooq04:42
*** udesale has joined #oooq05:00
*** ykarel is now known as ykarel|afk05:40
*** ratailor has quit IRC05:43
*** ratailor has joined #oooq05:44
*** ykarel|afk has quit IRC05:45
*** ykarel|afk has joined #oooq06:00
*** ykarel|afk is now known as ykarel06:08
*** ratailor_ has joined #oooq06:09
*** ratailor has quit IRC06:11
*** aakarsh has quit IRC06:17
*** marios has joined #oooq06:28
*** ratailor_ has quit IRC07:05
*** matbu has joined #oooq07:06
*** bogdando has joined #oooq07:07
*** ratailor has joined #oooq07:08
*** amoralej|off is now known as amoralej07:18
*** tesseract has joined #oooq07:20
*** akahat has joined #oooq07:20
*** matbu has quit IRC07:22
*** matbu has joined #oooq07:24
*** holser has joined #oooq07:32
*** kopecmartin|off is now known as kopecmartin07:36
*** tosky has joined #oooq07:37
*** apetrich has quit IRC07:40
*** jpena|off is now known as jpena07:41
*** jfrancoa has joined #oooq07:47
*** jaosorior has quit IRC07:48
<chandankumar> marios:  :-)  07:51
<marios> chandankumar: sorry  07:52
<marios> :)  07:52
<chandankumar> marios: no problem!  07:52
<marios> chandankumar: i think if we do that it makes it much easier next time we have to copy/paste it  07:52
<chandankumar> marios: added more reviews to the queue in RDO side  07:52
*** soniya29 is now known as soniya29|lunch  07:52
<chandankumar> yup  07:52
<marios> chandankumar: you won't have to update all those 'train' things  07:52
*** rascasoft has joined #oooq  07:52
*** apetrich has joined #oooq  07:52
*** surpatil is now known as surpatil|lunch  07:53
<bogdando> hi folks, please need you opinion on https://review.opendev.org/#/c/685134  07:54
*** matbu has quit IRC07:59
*** holser has quit IRC08:01
<marios> bogdando: not sure i follow how the randomized fetches will affect the caching though though i don't know much about how the 'openstack infra mirrors' work  08:01
<marios> bogdando: like either it is cached or it isn't  08:01
<marios> bogdando: or you mean, within the cache itself  08:01
<bogdando> marios: AFAIK, there is docker.io mirrors with caching proxy  08:02
<bogdando> so docker.io doesn't trigger rate limiting  08:02
<bogdando> if randomized, cache "disappears" so docker.io might drop the download rates drastically  08:03
<marios> bogdando: k, i don't follow why the cache disappears though based on the order we're requesting containers  08:04
<marios> bogdando: oh common layers  08:04
<marios> bogdando: so like if you request all the nova things together they will have much of the common stuff in cache  08:04
<bogdando> marios: different layers of different images will less likely reside in that cache simultaneously  08:04
<marios> like that?  08:04
<bogdando> yes  08:04
<marios> bogdando: i see thanks. this particular point notwithstanding the question still stands 'how much improvement are we getting' from all the improvement things landed these days  08:05
<marios> bogdando: i guess they will compare the job run times next week or so  08:06
<marios> bogdando: but it is impossible to answer about the effect of any one of those things, like the one you're asking today  08:06
*** matbu has joined #oooq  08:06
<marios> bogdando: unless we asses it here before merging it. but then is it big enough of a difference by itself to be felt i don't know  08:06
<bogdando> right, only asking about confirming the concern  08:07
<marios> bogdando: i put the conversation on the patch for the others lets see what they and anyone else might comment later  08:09
<bogdando> thanks  08:09
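The caching concern above, sketched roughly in Python (image names and the pull helper are made up; this is not code from the patch under review). With a pull-through registry cache, pulling related images back-to-back keeps their shared layers warm, while a randomized order scatters them:

    import random

    images = ["nova-compute", "nova-api", "neutron-server",
              "neutron-dhcp-agent", "glance-api"]

    def pull(image):
        # Stand-in for a real pull through the caching proxy.
        print("pulling", image)

    # Randomized order: images sharing large layers (the nova-* pair here) can
    # end up far apart, so shared layers may be evicted before the second pull.
    randomized = random.sample(images, k=len(images))

    # Grouped order: sort so images with a common service prefix are adjacent
    # and the second pull reuses what the first one just cached.
    grouped = sorted(images, key=lambda i: i.split("-")[0])

    for image in grouped:
        pull(image)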
<jfrancoa> marios: hey, do you have a moment?  08:11
<marios> jfrancoa: i'm gonna go afk in a couple mins for errand but will be back in ~1hr  08:11
*** holser has joined #oooq  08:11
<jfrancoa> marios: ack, I'll ping you back later  08:12
<marios> jfrancoa: ack  08:12
*** matbu has quit IRC08:16
*** matbu has joined #oooq08:18
*** marios has quit IRC08:19
*** soniya29|lunch is now known as soniya2908:28
*** derekh has joined #oooq08:33
*** surpatil|lunch is now known as surpatil08:34
<arxcruz|rover> zbr|ruck: around? hey, i'll be a little bit off today, spend the whole night at the hospital with my daughter  08:41
<zbr|ruck> arxcruz|rover: sure, take care!  08:41
*** ykarel is now known as ykarel|lunch08:41
*** ratailor has quit IRC08:48
*** chem has joined #oooq08:48
*** ratailor has joined #oooq08:51
*** tosky has quit IRC08:51
<arxcruz|rover> ykarel|lunch: the get_endpoint still not working right? What can we do to get this merged asap ? :)  09:02
*** ykarel|lunch is now known as ykarel09:12
*** dtantsur|afk is now known as dtantsur09:18
*** marios has joined #oooq09:24
*** dtantsur is now known as dtantsur|lunch09:24
<arxcruz|rover> zbr|ruck: taking my daughter to hospital, brb  09:52
*** skramaja has quit IRC10:02
<zbr|ruck> arxcruz|rover: fyi, i added https://review.rdoproject.org/r/#/c/21860/ to validate pingus change.  10:06
*** ratailor has quit IRC10:11
<chandankumar> zbr|ruck: https://review.rdoproject.org/r/#/c/21860/ check the comment  10:16
<zbr|ruck> ^ for first one or both?  10:18
<chandankumar> zbr|ruck: only for periodic  10:18
<arxcruz|rover> zbr|ruck: turns out my wife was already comming back from the hospital  10:20
<arxcruz|rover> zbr|ruck: like your dnm message :P  10:20
*** ratailor has joined #oooq10:31
*** tesseract has quit IRC10:35
*** tesseract has joined #oooq10:35
*** sshnaidm|off is now known as sshnaidm|pto10:44
*** akahat has quit IRC10:45
*** ratailor_ has joined #oooq10:45
*** ratailor has quit IRC10:47
<chandankumar> kopecmartin: soniya29 surpatil I have added the agenda for tempest meeting, please update/edit here : https://etherpad.openstack.org/p/tripleo-tempest-squad-meeting  10:52
*** recheck has quit IRC11:00
*** recheck has joined #oooq11:01
*** dtantsur|lunch is now known as dtantsur11:03
*** bogdando has quit IRC11:16
*** jpena is now known as jpena|lunch11:17
*** bogdando has joined #oooq11:17
*** jaosorior has joined #oooq11:19
<chandankumar> zbr|ruck: Hello  11:20
<chandankumar> zbr|ruck: https://logs.rdoproject.org/60/21860/21/check/periodic-tripleo-ci-rhel-8-scenario004-standalone-master/e3faaaf/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz#_2019-09-27_06_49_52  11:20
<chandankumar> zbr|ruck: we are stilling using docker in podman jobs?  11:20
<chandankumar> s/podman/scenario004 job  11:21
<zbr|ruck> yes, change not merged  11:21
<zbr|ruck> chandankumar: do you have any idea why I got http://logs.rdoproject.org/60/21860/21/check/tripleo-ci-rhel-8-scenario004-standalone-rdo/dd24b62/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz ?  11:23
<zbr|ruck> image not found rhel-binary-cron  11:23
<zbr|ruck> i got other jobs failing with other images missing rhel-binary-aodh-api  11:25
<chandankumar> zbr|ruck: checking  11:28
*** matbu has quit IRC11:28
*** matbu has joined #oooq11:29
<chandankumar> zbr|ruck: that container does not exists on rdo registry that's why failing  11:32
<chandankumar> zbr|ruck: when was the last rdo promotion happened  11:32
<zbr|ruck> yep, i found that myself. maybe it was a broken promotion?  11:33
*** jaosorior has quit IRC11:33
<chandankumar> zbr|ruck: it is what I found in current tripleo https://trunk.rdoproject.org/rhel8-master/current-tripleo/delorean.repo  11:34
<chandankumar> zbr|ruck: but periodic failure is something we need to look  11:34
<chandankumar> why it is taking docker stuff  11:34
<arxcruz|rover> ykarel: https://review.opendev.org/#/c/684288/ merged, let's hope for a promotion :D  11:39
<arxcruz|rover> zbr|ruck: ^  11:39
<ykarel> arxcruz|rover, Great timing, periodic job will be running in some time, let me see if repo is consistent  11:39
<arxcruz|rover> panda: around? https://logs.rdoproject.org/60/21860/21/check/periodic-tripleo-ci-rhel-8-scenario004-standalone-master/e3faaaf/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz#_2019-09-27_06_49_49  11:40
<chandankumar> zbr|ruck: I found the issue https://github.com/rdo-infra/rdo-jobs/blob/master/zuul.d/standalone-jobs.yaml#L306  11:40
<chandankumar> parent from centos7 which has docker https://github.com/rdo-infra/rdo-jobs/blob/master/zuul.d/standalone-jobs.yaml#L126  11:40
<chandankumar> zbr|ruck: we need to flip the switch  11:40
<arxcruz|rover> panda: is that okay if i set container_registry_deploy_docker and container_registry_deploy_docker_distribution to false here ?  11:40
<arxcruz|rover> since rhel doesn't have docker  11:40
<ykarel> hmm repo is consistent, but it's not picked by DLRN yet, let's hope it get's picked up before periodic job starts  11:41
<chandankumar> from docker to podman  11:41
<arxcruz|rover> ykarel: do your black magic man! :P  11:44
<arxcruz|rover> chandankumar: you're talking about rhel error ?  11:45
<chandankumar> arxcruz|rover: yes periodic one where we are testing pyngus package  11:45
<chandankumar> arxcruz|rover: https://review.rdoproject.org/r/#/c/21860/  11:45
<arxcruz|rover> i blame marios  11:47
*** ratailor__ has joined #oooq  11:49
<chandankumar> arxcruz|rover: can we get a patch in rhel8 job part?  11:49
<arxcruz|rover> chandankumar: working on it  11:50
<chandankumar> arxcruz|rover: cool, thanks :-)  11:51
*** ratailor_ has quit IRC11:51
*** weshay has joined #oooq11:55
<weshay> arxcruz|rover, zbr|ruck ping me when you can chat  11:57
<ykarel> arxcruz|rover, it got picked up by dlrn :- https://trunk.rdoproject.org/centos7-master/report.html  11:58
<ykarel> so we are good with master  11:58
<arxcruz|rover> chandankumar: zbr|ruck https://review.rdoproject.org/r/#/c/22625/  11:58
<arxcruz|rover> weshay: just a sec  11:59
<chandankumar> arxcruz|rover: done, thanks!  11:59
<arxcruz|rover> weshay: i'm ready  12:01
<weshay> eek.. 2min brb  12:02
<weshay> ok..  12:04
<weshay> https://meet.google.com/igz-wfku-nka  12:05
*** dsneddon has quit IRC12:09
*** dsneddon has joined #oooq12:09
*** bogdando_ has joined #oooq12:09
*** bogdando has quit IRC12:10
*** bogdando_ is now known as bogdando12:10
*** jpena|lunch is now known as jpena12:10
*** jfrancoa has quit IRC12:26
*** ratailor__ has quit IRC12:30
*** rfolco has joined #oooq12:38
*** rlandy has joined #oooq12:40
<weshay> arxcruz|rover,  2019-09-27 12:37:29.114754 | primary | 'baseos_undercloud_image_url' is undefined  12:41
*** jfrancoa has joined #oooq12:42
*** dtrainor has quit IRC12:48
*** bogdando has quit IRC12:50
*** soniya29 has quit IRC12:51
*** dtrainor has joined #oooq12:53
*** matbu has quit IRC12:54
<chandankumar> rlandy: hello  12:55
<chandankumar> rlandy: I got the parenting solution  12:55
<chandankumar> rlandy: https://review.rdoproject.org/r/#/c/22624/  12:55
*** matbu has joined #oooq  12:56
<chandankumar> rlandy: I updated other patches also  12:56
*** udesale has quit IRC12:58
*** udesale has joined #oooq12:58
<marios> rfolco: rlandy panda sync in a minute  12:59
<chandankumar> kopecmartin: meeting time  13:00
<rlandy> chandankumar: thanks - will look after meeting  13:02
<ykarel> arxcruz|rover, master pipeline failed in 1 job consistent-to-tripleo-ci-testin  13:10
<ykarel> commit df96ffc5276ee316be925f9423612e9190055d8f / distro 60f2e6559ee5b128816fdf17e14b54759a915e27 combination not found in RHEL  13:10
*** amoralej is now known as amoralej|lunch13:10
<arxcruz|rover> ykarel: http://www.nooooooooooooooo.com/  13:11
<ykarel> i see that hash build in centos(at 11:56:47), and rhel(12:51:52), 55 minute gap  13:12
<ykarel> may be u can reschedule openstack periodic to run again?  13:13
<ykarel> now it looks better  13:13
<ykarel> nope still not better,  13:16
<ykarel> seems both DLRN builders get triggered at different time, so it leads to different queue for them  13:16
<ykarel> may be jpena can have a look at it ^^  13:16
<jpena> ykarel: I can get some improvements (right now it runs once every hour for rhel), but perfect synchronization will not be achievable  13:18
<ykarel> jpena, yup some improvement will help  13:19
<ykarel> jpena, currently that job waits for 40 minutes to timeout  13:19
<ykarel> so if possible they are sync in 40 minutes it would be good  13:19
<jpena> ykarel: ack, I've changed it  13:19
<ykarel> jpena, so what are current setting for rhel and centos one  13:20
<jpena> ykarel: run every 5 minutes  13:20
<ykarel> jpena, okk, good, that should improve it  13:21
<rlandy> arxcruz|rover: zbr|ruck: weshay: marios, panda and I would like to merge some patches for the promoter  13:21
<rlandy> we do not expect production errors  13:21
<rlandy> we can revert if need be  13:21
<rlandy> there will be at max three patches  13:22
<arxcruz|rover> merge code on production on friday?  13:22
<arxcruz|rover> :P  13:22
<panda> arxcruz|rover: production is not touched  13:22
<rlandy> panda objects  13:22
<marios> arxcruz|rover: shouldn't affect anything  13:22
<marios> arxcruz|rover: if it does just revert  13:22
<marios> arxcruz|rover: and sue panda  13:22
<rlandy> arxcruz|rover: we will revert if you are unhappy  13:22
<arxcruz|rover> sounds like a good plan  13:22
<rlandy> thank you  13:22
<weshay> arxcruz|rover, https://sf.hosted.upshift.rdu2.redhat.com/zuul/t/tripleo-ci-internal/stream/948414545dff4f299c38279bae563880?logfile=console.log  13:30
<weshay> running well  13:30
<arxcruz|rover> cool  13:31
<chandankumar> rlandy: want to sync?  13:33
<rlandy> chandankumar: in a few  13:33
<chandankumar> ok  13:33
<rlandy> chandankumar: out of meeting - let's sync  13:36
<chandankumar> rlandy: https://meet.google.com/apo-vxox-dzo?pli=1&authuser=0  13:37
<rlandy> joining  13:39
<ykarel> arxcruz|rover, zbr|ruck are u aware of any blocker for stein? i see it's 12 day and can't find a card related to it  13:45
* ykarel checks last run  13:45
<arxcruz|rover> ykarel: the ssh issue ?  13:46
<ykarel> arxcruz|rover, ssh issue was queens only  13:46
<arxcruz|rover> then is the get_endpoint issue  13:46
<arxcruz|rover> sorry, i need to verify  13:46
<ykarel> arxcruz|rover, that was master only  13:46
<weshay> arxcruz|rover, you can tmux a on the hp box and be on the undercloud  13:47
<arxcruz|rover> ykarel: then i need to check we were focused on master... :/  13:47
<weshay> ykarel, bringing up a real bm test for queens as well  13:47
<ykarel> weshay, ack but seems it will pass there :), but good if it reproduces  13:47
<weshay> ykarel,  on a real bm box?  13:48
<weshay> ya.. I think ur right  13:48
<weshay> ykarel,  it may be important to note.. we're fixing something caused by using nodepool  13:49
<weshay> that's what I want to double check  13:49
<ykarel> weshay, yes with the symptoms we have seen till now, it seems network related, so seems those usedns setting will move it ahead, but still we doubting on public connectivity  13:49
<ykarel> on overcloud node  13:50
<ykarel> nodes  13:50
<zbr|ruck> nope  13:50
<weshay> ykarel, aye.. agree.. having the bm env will help w/ that as well  13:50
<weshay> amoralej|lunch, ykarel  https://sf.hosted.upshift.rdu2.redhat.com/zuul/t/tripleo-ci-internal/stream/948414545dff4f299c38279bae563880?logfile=console.log  13:50
<ykarel> weshay, yes  13:50
<ykarel> weshay, also i am not able to reproduce with zuul reproducer on my personal tenant  13:51
<weshay> rlandy, fyi.. I just added a queens bm job in internal sf  13:51
<rlandy> weshay: sec on meeting  13:51
<ykarel> weshay, even i tried latest nodepool image  13:51
<ykarel> but still it didn't reproduced  13:51
<ykarel> :(  13:51
<ykarel> may be it specific to CI tenant  13:52
<weshay> ykarel, ya.. I also hit a bug in the new reproducer, so you were also correct.. I was using the older one..  both are valid for testing tripleo  13:52
<zbr|ruck> weshay: let me know if you have few minutes, re missing containers in builds.  13:52
<weshay> ykarel, where the newer one also makes sure you are operating in the same env as jobs  13:52
<ykarel> weshay, i used newer one only, and there overcloud deploy get's successful  13:53
<chandankumar> weshay: marios https://review.rdoproject.org/r/#/c/22549/ please merge it  13:53
<ykarel> even with newer nodepool image, c7.7 kernel  13:53
<weshay> ykarel, oh.. it passes for you in the new reproducer?  13:53
<ykarel> weshay, yes  13:53
<ykarel> i faced some issues but worked arounded those  13:54
<weshay> hrm.. perhaps we're fighting an issue w/ openstack-nodepool tenant  13:54
<weshay> ykarel, interesting..   issues w/ the zuul launching the job?  13:54
<weshay> that's where the new one failed for me  13:54
<ykarel> weshay, one issue was related to docker login, so i skipped that  13:54
<ykarel> and other was it retried multiple times if it fails, so set attempts: 1  13:55
*** surpatil has quit IRC  13:55
<weshay> ykarel, if the issues are really just the interaction of tripleo + rdo-cloud I'm inclinded to waive the tests for queens and send it through...  13:55
<weshay> ykarel, ah cool  13:56
<weshay> makes sense.. mine was failing in pre and that's where the login is  13:56
*** udesale has quit IRC  13:56
<ykarel> registry_login_enabled: false  13:56
<weshay> have to take my kid to school.. bb in a few  13:56
<ykarel> ack  13:56
<weshay> ykarel, rock.. good to know  13:56
<weshay> arxcruz|rover, let's patch the new reproducer w/ that ^  13:57
<weshay> ykarel++  13:57
*** ratailor has joined #oooq14:02
*** Vorrtex has joined #oooq14:07
<weshay> zbr|ruck, hey  14:10
<marios> panda: we need that one? (don't think so right? https://review.rdoproject.org/r/#/c/22464/5/playbooks/rdo-tox-molecule-pre.yaml  14:10
<weshay> zbr|ruck, can chat now  14:10
<marios> panda: gona remove the depends on  14:10
<panda> marios: kill it with fire  14:10
<rfolco> marios, quick question, why 2 containers do not exist for ppc64le ? I could not follow that...  14:12
<rfolco> msg: manifest-push.yml debug:ppc64le container not found for neutron-server  14:12
<zbr|ruck> weshay: is there way to generate permalinks for google meet?  14:12
*** amoralej|lunch is now known as amoralej  14:12
<weshay> zbr|ruck, haven't seen one if there is  14:12
<zbr|ruck> https://meet.google.com/zrs-bzjz-yvx  14:13
<zbr|ruck> meet makes me see how nice bj was :D  14:13
<panda> rfolco: it follows production, not all containers in x86 have an equivalent in ppc/ Some services are currently not supported.  14:14
<marios> rfolco: cos they don't  14:14
<rfolco> panda, I thought we were using mock ones, not real ones  14:15
<marios> rfolco: like for x86 they are all built but for some of those there is also pppc  14:15
<zbr|ruck> at least I was able to findout how to make it appear as an app instead of browser tab  14:15
<marios> rfolco: sec, there https://review.rdoproject.org/r/#/c/22445/25/ci-scripts/infra-setup/roles/promoter/molecule/container-push/playbook.yml@63  14:15
<marios> rfolco: thats the mock data ^^  14:15
<panda> rfolco: in this case you're testing the manifest push behaviour  14:15
<zbr|ruck> weshay: https://review.opendev.org/#/c/685302/  14:16
<panda> rfolco: the manifest-push needs to act accordingly and not confuse itself, when there are less containers in ppc than there are in x86, and handle the gaps correctly  14:16
<rfolco> the list is base openstack-base neutron-server nova-compute  14:17
<panda> rfolco: the input artifact in this case is created ad hoc to represent a possibly prolematic real-life situation  14:17
<rfolco> we get these from rdo ?  14:17
<panda> rfolco: no, they are all fake  14:17
<rfolco> aaaah  14:17
<rfolco> ok  14:17
<rfolco> this could be more explcit in the test IMO  14:18
<panda> rfolco: this is the nature of the tests in generale  14:18
<rfolco> 2 missing, 2 skipped in that debug task, so its not clear what the expected output is  14:19
<marios> panda: rfolco rlandy as discussed in call updated https://review.rdoproject.org/r/22002  14:19
<panda> rfolco: and we are really only touching the tip of the iceberg here  14:19
<panda> marios: ship it!  14:19
<rfolco> ok I should stop reviewing then  14:19
<marios> rfolco:  14:19
<marios> rfolco: no do not  14:19
<marios> panda: lets let folks review it its fine  14:19
<panda> rfolco: doooon't stop! reviewing it!  14:20
<rfolco> k  14:20
<marios> panda: i can iterate again if we need, if its somethign major, otherwise we can merge and fix it next week but lets wait for more comments  14:20
<marios> thanks rfolco panda rlandy  14:20
<rlandy> looking  14:20
<marios> no fire rlandy  14:20
<marios> can be later whenever you have time thanks  14:20
<weshay> rlandy++ for baremetal!!!  14:22
<rlandy> weshay: oops - hope that works - haven't checked it in a few weeks as we have not had promotions  14:23
* rlandy needs to add osp-16 as well  14:23
<rlandy> queens should be the most likely to run though  14:23
<rlandy> marios: any danger is this ...  14:24
<rlandy> environment:  14:24
<rlandy>     DOCKER_CLI_EXPERIMENTAL: enabled  14:24
<marios> rlandy: saves me from having to add export DOCKDER_CLI .. on all the "docker manifest" commands  14:25
<panda> AFAIK this just enables the docker manifest subcommand  14:25
<marios> rlandy: i.e. we need it for 'docker manifest'  14:25
<rlandy> fine  14:25
<rlandy> yep - just checking we do not enable something else with that  14:25
<panda> rlandy: this deosn't mean "allow docker to act irationally because it's in experimental mode"  14:26
<marios> rlandy: ack thanks for checking  14:26
<marios> panda: --trash-my-machine  14:26
<panda> marios: docker pull --implicity-pull-all-dockerhub.  14:26
<rlandy> because docker never acts irrationally even in its current mode??  14:27
<panda> rlandy: you have a point there.  14:27
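For reference on the snippet rlandy quoted: DOCKER_CLI_EXPERIMENTAL=enabled is what unlocks the `docker manifest` subcommand on the docker CLI of that era. A minimal sketch of scoping the variable to just those calls instead of exporting it everywhere (registry and image names below are invented):

    import os
    import subprocess

    def docker_manifest(*args):
        # Enable the experimental CLI only for this invocation.
        env = dict(os.environ, DOCKER_CLI_EXPERIMENTAL="enabled")
        return subprocess.run(["docker", "manifest", *args], env=env, check=True)

    # Build and push a multi-arch manifest list from per-arch tags.
    docker_manifest(
        "create", "registry.example.com/nova-compute:current-tripleo",
        "registry.example.com/nova-compute:current-tripleo_x86_64",
        "registry.example.com/nova-compute:current-tripleo_ppc64le",
    )
    docker_manifest("push", "registry.example.com/nova-compute:current-tripleo")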
<rlandy> I feel we should push all this code - by the time I get back, I assume the majority of the fallout will be sorted  14:28
<panda> rlandy: and if it doesn't you can always stay on PTO some days more.  14:28
<rlandy> if only I had more days to take on PTO  14:28
* rlandy is sort of living on borrowed PTO time  14:29
<weshay> rfolco,  ping me when you have a few min to show me around the container prep stuff  14:30
<rfolco> weshay, sure give me e a few to review marios patch before they merge it  14:30
<panda> rlandy: maybe we can trade, I have way more that I can use  14:30
<rlandy> panda: I  14:30
<rlandy> ll pay  14:30
<rlandy> marios: +2'ed - let's do this - we won't know for sure until we try it out  14:33
<rlandy> beside you've hit patch 70  14:33
<marios> rlandy: ack  14:33
<rlandy> impressive in an odd way  14:33
<marios> rlandy: yeah that was cos it kept changing (molecule/localhost/zuul job/then rebases on the new requirement /quay/role etc )  14:33
<rlandy> I hear you  14:34
*** jaosorior has joined #oooq  14:34
<rfolco> marios, panda done  14:35
<rfolco> marios, comments are just sanity checks w/ logs.... except L137 on manifest-push.yml, which I think it should be like the step before IMHO  14:36
<rfolco> weshay, ping  14:37
<weshay> rfolco, hey  14:38
<rfolco> weshay, good time now?  14:38
<weshay> https://meet.google.com/etf-xwxz-kwa?authuser=1  14:38
<marios> rfolco: sec looking  14:40
<marios> rfolco: i don't see any comments  14:41
<marios> rfolco: it can't be the same  14:41
<marios> rfolco: because then the loop index doesn't work for the conditional  14:41
<panda> marios: rfolco rlandy ok I'll push the big red button  14:50
<rlandy> run for cover!!!!  14:51
* rlandy fixing linters errors  14:51
<rlandy> test available for review shortly  14:51
*** ratailor has quit IRC14:52
*** matbu has quit IRC14:58
<zbr|ruck> weshay: deployment failure with unable to find yum_update.sh : https://storage.bhs1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_132/684309/3/check/tripleo-ci-centos-7-undercloud-containers/132c629/logs/undercloud/home/zuul/install-undercloud.log.txt.gz  15:04
*** matbu has joined #oooq  15:04
<zbr|ruck> i was also surprised to read ansible-playbook 2.6.14 --- my impression was that we were no longer using unsupported versions of ansible  15:05
*** ykarel is now known as ykarel|afk15:08
<weshay> zbr|ruck, will look in a sec  15:18
<marios> thanks panda rfolco rlandy  15:19
*** dtantsur is now known as dtantsur|afk15:20
*** arxcruz|rover is now known as arxcruz|zzz15:22
<rlandy> panda: how did you not get this error ... http://logs.rdoproject.org/48/22348/46/check/rdo-tox-molecule/7bccbbc/tox/reports.html  15:30
<rlandy> install it?  15:30
* marios out  15:30
<marios> have a good break those who are out next few  15:30
<zbr|ruck> rlandy: that is because pytest collection phase.  15:31
<rlandy> zbr|ruck: meaning?  15:32
*** marios has quit IRC  15:32
<mjturek> rfolco if you have some time later, can you look at the rdoregistry and see if things are looking like you expect?  15:32
<rlandy> panda has the same import in his tests  15:32
<zbr|ruck> can be bypassed in two ways: a) easy/non-optimal: add switch to pytest to ignore collection errors  b) changing test_promoter.py to bypass in case of missing module.  15:32
<zbr|ruck> documented at https://stackoverflow.com/questions/30086558/py-test-fails-due-to-missing-module  15:33
<zbr|ruck> let me know if you want me to make a CR to fix it.  15:33
<weshay> rlandy,  please review this https://review.opendev.org/#/c/685368/  15:35
<rlandy> weshay: to com when we do and don't want to build images ...  15:37
<rlandy> confirm  15:37
<rlandy> yes on bm and ovb with/without periodic?  15:38
<rlandy> in the current case it was skipping on check not periodic?  15:38
<panda> rlandy: it's a different environment, and rdo-tox-molecule does not test the dlrn_promoter itself. In tox.ini try adding dlrnapi_client to the deps = in section [testenv:molecule]  15:39
*** ykarel|afk is now known as ykarel  15:39
<panda> rlandy: wait  15:40
<weshay> rlandy, ?  15:41
<panda> rlandy: pytest is considering your file a py test file, and it shouldn't. It's better that you rename the file in promoter_test.py, otherwise pytest will always try to launch it whie it shouldn't  15:43
<rlandy> panda: rename it to what?  15:44
<panda> rlandy: pytst has a glob expression on the thole subtree. Every file that starts with a test_* for it it's a pytest file. Yours isn't. If you rename it to promoter_test.py or anything that doesn't start with test_ then rdo-tox-molecule will ignore it and stop complaining  15:45
<panda> whole*  15:46
<rlandy> ah  15:47
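For background on panda's suggestion: pytest walks the tree and collects anything matching its python_files globs, and in many versions that default covers both test_*.py and *_test.py, so a rename is usually paired with narrowing the config or ignoring the file explicitly. A sketch of the conftest.py route, not taken from the promoter repo (file name and path are hypothetical):

    # conftest.py -- minimal sketch
    # Keep a script that only *looks* like a test module out of collection, so
    # an optional import it needs (e.g. dlrnapi_client) cannot break the whole
    # collection phase of rdo-tox-molecule.
    collect_ignore = ["test_promoter.py"]

    # zbr's option (a) above likely maps to running pytest with
    # --continue-on-collection-errors instead of editing the repo.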
<weshay> rlandy,  commented on https://review.opendev.org/#/c/685368  15:49
<weshay> rlandy, you wrote that code, /me is not sure I'm fully in context w/ the what and why  15:50
<rlandy> weshay: honestly we'd need to check it  15:50
<rlandy> ages ago  15:50
<weshay> Image building was not running in stable/ branches  15:50
<weshay> due to wrong regex, this patch fixes it to build on  15:50
<weshay> branches containing / in them.  15:50
<weshay> k  15:50
<chandankumar> zbr|ruck: around?  15:50
<rlandy> you acn assign to me - I'll fix it as soon as I get ^^ in  15:50
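The commit message weshay quotes is about branch names like stable/train never matching a pattern that only allows single-word branches. The real pattern lives in the patch under review (685368); this is only an illustrative Python sketch of that class of bug:

    import re

    branches = ["master", "stable/train", "stable/stein"]

    # A pattern like this treats the branch as one word, so anything containing
    # a slash never matches and image building gets skipped:
    old = re.compile(r"^[\w-]+$")

    # Allowing "/" fixes the match for stable/* branches:
    new = re.compile(r"^[\w/-]+$")

    for branch in branches:
        print(branch, bool(old.match(branch)), bool(new.match(branch)))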
<chandankumar> zbr|ruck: RHEL-8 scenario 4 standalone, pacemaker requires docker, but in RHEL-8 there is no docker , it actually installs podman how we are going to handle that case?  15:51
<weshay> chandankumar, I think pacemaker requires docker on centos, not RHEL8  15:55
*** jbadiapa has joined #oooq  15:56
<weshay> chandankumar, I'm not sure if this is working in periodic https://review.rdoproject.org/r/#/c/22377/  15:56
<chandankumar> weshay: but my question is how we can make sure docker is installed in RHEL 8?  16:05
<weshay> chandankumar, docker should not be installed on RHEL 8  16:06
<weshay> chandankumar, this works in OSP 15 right? podman + pacemaker  16:06
<chandankumar> weshay: yes in downstream  16:06
<weshay> chandankumar, right.. it could be we need to find the patches that makes that work and move them upstream.. where they should be  16:07
<chandankumar> weshay: yes correct, that will help.  16:07
<weshay> chandankumar, we had to do that for a few other rhel8 + tripleo jobs  16:07
<chandankumar> weshay: https://logs.rdoproject.org/60/21860/21/check/periodic-tripleo-ci-rhel-8-scenario004-standalone-master/e3faaaf/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz#_2019-09-27_06_49_52  16:09
<chandankumar> weshay: based of this questions came up today  16:09
<weshay> chandankumar, ya.. that's the periodic.. that means I screwed up the patch I pointed at  16:10
<weshay> https://review.rdoproject.org/r/#/c/22377/  16:10
* weshay looks again  16:10
<weshay> chandankumar, it's working correctly in check  16:11
* weshay gets  16:11
<chandankumar> weshay: that needs internal modification  16:11
<chandankumar> scenario4 was blocked on pygunus package  16:11
<weshay> chandankumar, http://logs.rdoproject.org/53/682853/5/openstack-check/tripleo-ci-rhel-8-scenario004-standalone-rdo/b89c1e3/logs/undercloud/home/zuul/standalone_deploy.log.txt.gz  16:13
<weshay> chandankumar, scen003 was blocked on pyngunus I thought.. is scenario04 as well?  16:14
<chandankumar> let me pull the bug  16:14
<weshay> chandankumar, I don't think so  16:14
<weshay> chandankumar, zbr|ruck all the rhel 8 scenario jobs are failing atm.. as of the last two days  16:14
<zbr|ruck> yep, i know.  16:15
<chandankumar> it all came from this review https://review.rdoproject.org/r/#/c/21860/  16:15
<weshay> zbr|ruck, need booogs  16:15
<chandankumar> zbr|ruck: pygunus was for scenario3 or 4?  16:15
<weshay> chandankumar, scen003...  16:15
<zbr|ruck> it was 3  16:15
<weshay> change that review to scenario 3  16:15
<weshay> but even still.. I think zbr|ruck has a handle on it.. we need a kolla patch to merge  16:16
<weshay> for painindangus I mean pyngus  16:16
<chandankumar> thanks for the clarification , whole day got confused on this  16:16
<weshay> :)  16:16
<chandankumar> rlandy: and I were dicussing on podman vs docker  16:17
<zbr|ruck> chandankumar: you do not use depends-on on the same repository, it never works.  16:18
<zbr|ruck> you rebase on top of it instead  16:18
<chandankumar> zbr|ruck: ack  16:18
<chandankumar> zbr|ruck: weshay we need to tweak kolla for centos 8 also  16:19
<chandankumar> another mess to solve  16:19
<rlandy> chandankumar: pls leave me notes on the patch - will get to that after broken images builds  16:20
<chandankumar> rlandy: that one sorted out  16:20
*** jaosorior has quit IRC  16:21
<chandankumar> Happy weekend #oooq,  16:21
<chandankumar> Happy holidays rlandy :-)  16:21
*** chandankumar is now known as raukadah  16:21
<rlandy> thanks  16:21
<rlandy> oh how I LOVE linters  16:22
*** chem has quit IRC  16:23
<rlandy> I have no idea if my code works but every line has been edited to make linters happy  16:23
<panda> rlandy: autopep8 :)  16:25
<ykarel> :)  16:25
*** holser has quit IRC  16:26
* ykarel still smiling with the statement:- I have no idea if my code works but every line has been edited to make linters happy  16:26
<zbr|ruck> rlandy: that is because you did not embrace black, yet.  16:27
<zbr|ruck> black reformats your code so you dont have to (please the linters)  16:27
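A tiny before/after of what zbr is pointing at, since black is mentioned only in passing: black rewrites formatting so lint complaints about spacing and quoting go away on their own (the "after" shown is one plausible result; exact output can vary by black version):

    # before: valid Python, but the kind of spacing linters complain about
    def promote(commit_hash ,label) :
        return { 'hash':commit_hash,'label': label }

    # after running `black` on the file, roughly:
    def promote(commit_hash, label):
        return {"hash": commit_hash, "label": label}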
<zbr|ruck> weshay: are we allowed to get packages from epel or not? (centos)  16:30
<weshay> zbr|ruck, only if you want to be struck by lightning  16:30
<weshay> zbr|ruck, why?  16:30
<zbr|ruck> weshay: this means that our pyngus patch from https://review.opendev.org/#/c/685302/5/docker/openstack-base/Dockerfile.j2 is not correct.  16:31
<zbr|ruck> read last comment  16:31
*** jpena is now known as jpena|off  16:31
<zbr|ruck> i was trying to findout where it comes from because on that file it was only installed from pip  16:32
*** tesseract has quit IRC  16:32
<zbr|ruck> so the change is at best incomplete if not even wrong  16:32
<weshay> zbr|ruck, my understanding is that is now in dlrn_deps and epel  16:32
<weshay> zbr|ruck, I don't see the comment you are referring to  16:33
*** ykarel has quit IRC  16:34
<zbr|ruck> i updated the CR, now should be fine, if it passes but considering is friday, my hopes are low.  16:39
<weshay> zbr|ruck, I see no changes in ps 5 - 7  16:40
<weshay> ah.. now I do  16:41
<weshay> adding python2-pyngus  16:41
<zbr|ruck> also for debian  16:41
*** zbr|ruck is now known as zbr  16:41
<zbr> stepping down from ruck.  also good news martin already put a +2 to it, we may get it merged today.  16:42
<weshay> zbr, have a good weekend  16:43
*** ykarel has joined #oooq16:50
*** derekh has quit IRC17:01
*** weshay is now known as weshay|ruck17:08
*** amoralej is now known as amoralej|off17:10
<weshay|ruck> rlandy, https://sf.hosted.upshift.rdu2.redhat.com/zuul/t/tripleo-ci-internal/stream/5ec22e4764d24b39b0533b7d4e18769a?logfile=console.log  17:38
<zbr> weshay|ruck: guess what, centos7 failure to find python2-pyngus at https://fbddc0304e65cd8308f1-04499c28cd43dbd0610a4f6cd3b9b5ff.ssl.cf2.rackcdn.com/685302/7/check/tripleo-build-containers-centos-7/86b3f2b/logs/build.log.txt.gz  17:58
<weshay|ruck> zbr, ya.. it's called python-pyngus not python2  17:58
<weshay|ruck> python-pyngus and python3-pyngus  17:58
*** Vorrtex has quit IRC19:10
*** kopecmartin is now known as kopecmartin|off19:15
*** ykarel is now known as ykarel|away19:29
<weshay|ruck> sshnaidm|pto, you are famous https://medium.com/faun/docker-container-for-hp-servers-management-with-ilo-8947ff5da62b  19:45
*** jbadiapa has quit IRC19:50
*** ykarel|away has quit IRC20:16
*** dsneddon has quit IRC20:28
<rlandy> rfolco: pls vote on https://review.rdoproject.org/r/#/c/22348/  20:44
<rlandy> it will fail until promotion complete  20:44
<sshnaidm|pto> weshay|ruck, no claps though :(  20:46
<weshay|ruck> sshnaidm|pto, I gave you a clap, but couldn't get it to work today.. /me will look at it as I have time  20:47
<sshnaidm|pto> \o/  20:47
<rfolco> rlandy, will need mor etime to review that one  20:47
<rlandy> rfolco: no major rush  20:49
<rlandy> panda just wanted it in  20:49
<rlandy> panda: if you're still around, it passes tox now  20:49
<rlandy> weshay|ruck: k - what else do you want me to look at? otherwise will fix train jobs patches  20:51
<weshay|ruck> rlandy,  there were a couple old repro  20:52
* weshay|ruck gets  20:52
<weshay|ruck> rlandy, I don't think this is right to be honest https://review.opendev.org/#/c/685184/  20:52
<rlandy> weshay|ruck: going to piece the old reproducer patches together next week - I think I still have them all  20:52
<weshay|ruck> rlandy, my install got all the way through to tempest btw  20:53
<rlandy> nice  20:53
<rlandy> very nice  20:53
<rlandy> we need to bring that back  20:53
<weshay|ruck> rlandy,  we can merge this right? https://review.opendev.org/#/c/685346/  20:53
<rlandy> to_build should be correctly set  20:53
<rlandy> weshay|ruck: ack +2'ed that  20:54
*** jtomasek has quit IRC20:55
*** dsneddon has joined #oooq20:57
<panda> rlandy: thanks.  21:14
<weshay|ruck> panda, go enjoy your weekend :)  21:15
*** jfrancoa has quit IRC21:17
<rlandy> weshay|ruck: Ill check in on sunday if you something bm/queens done  21:19
<weshay|ruck> rlandy, k.. have a nice weekend  21:19
<rlandy> still around for a bit  21:20
*** rlandy has quit IRC21:39
*** rfolco has quit IRC21:42
*** jpena|off has quit IRC22:07
*** jpena|off has joined #oooq22:09
*** rfolco has joined #oooq22:10
*** dsneddon has quit IRC22:29
*** dsneddon has joined #oooq23:18
*** dsneddon has quit IRC23:23
