Wednesday, 2018-08-22

*** dmellado has quit IRC01:22
*** rlandy has quit IRC02:25
*** apetrich has quit IRC02:26
*** dmellado has joined #oooq02:38
*** sanjayu_ has joined #oooq03:20
*** sanjayu_ has quit IRC05:34
*** honza has joined #oooq05:43
*** ykarel has joined #oooq05:54
*** jfrancoa has joined #oooq06:18
*** ccamacho has joined #oooq06:20
*** ccamacho has quit IRC06:20
*** ccamacho has joined #oooq06:20
*** gkadam has joined #oooq06:31
*** saneax has joined #oooq06:31
*** marios|rover has joined #oooq06:40
*** amoralej|off is now known as amoralej07:01
*** dmellado has quit IRC07:04
*** dmellado has joined #oooq07:06
*** bogdando has joined #oooq07:10
*** dmellado has quit IRC07:12
*** tosky has joined #oooq07:13
*** sshnaidm|afk is now known as sshnaidm07:17
*** dmellado has joined #oooq07:19
*** dtantsur|afk is now known as dtantsur07:22
*** ccamacho has quit IRC07:33
*** jtomasek has joined #oooq07:41
*** tosky has quit IRC07:43
*** tosky has joined #oooq07:43
*** jaganathan has quit IRC07:52
*** amoralej is now known as amoralej|brb08:16
*** ccamacho has joined #oooq08:16
<ssbarnea|ruck> marios|rover: what to do about the timeouts? even our fix failed during post on one job, so it was not merged.  08:22
*** ccamacho has quit IRC08:25
*** ccamacho has joined #oooq08:26
<marios> ssbarnea|ruck: which fix do you mean for the updates job?  08:26
*** holser_ has joined #oooq08:26
<marios> ssbarnea|ruck: i didn't check there yet. I was about to look at that nova issue again since ykarel commented it didn't fix it (or we don't have new enough nova yet)  08:27
<ssbarnea|ruck> https://review.openstack.org/#/c/592577/  08:27
<marios> ssbarnea|ruck: ack  08:28
<marios> ssbarnea|ruck: maybe comment there on the review with pointers if you have more info yet  08:28
<ssbarnea|ruck> without this, the newer code from delorean-current would not be used, right?  08:28
<marios> ssbarnea|ruck: yeah the repo wouldn't be enabled  08:29
<ssbarnea|ruck> i only did a recheck a few minutes ago, so it would take 3h to get it passed, or ... not.  08:29
<marios> ssbarnea|ruck: watching the console in zuul is always fun ;) http://zuul.openstack.org/  08:29
<marios> autoscroll is particularly thrilling  08:30
*** amoralej|brb is now known as amoralej08:47
*** tosky has quit IRC08:59
*** tosky has joined #oooq09:00
*** chem has joined #oooq09:19
*** amoralej is now known as amoralej|brb09:24
*** ykarel_ has joined #oooq09:28
*** ykarel has quit IRC09:31
*** ccamacho has quit IRC09:54
*** ccamacho has joined #oooq09:55
*** amoralej|brb is now known as amoralej09:56
*** ykarel_ is now known as ykarel09:58
<ssbarnea|ruck> marios: where are our jjb files for jobs defined on jenkins?  10:01
<marios> ssbarnea|ruck: dono  10:02
*** jaosorior_ has quit IRC10:05
<ykarel> sshnaidm, ci-centos? if yes then they are in the ci-config repo  10:06
<sshnaidm> ssbarnea|ruck, ^^  10:09
<ssbarnea|ruck> sshnaidm: thanks, i didn't know the repo name.  10:14
*** ykarel is now known as ykarel|lunch10:29
*** ykarel|lunch is now known as ykarel|away10:34
<ssbarnea|ruck> marios: can you please have a look at https://ci.centos.org/job/tripleo-quickstart-gate-ocata-delorean-quick-basic/4956/console which seems to be affected by a bug that was fixed on Aug 11: https://bugs.launchpad.net/tripleo/+bug/1786106  10:35
<openstack> Launchpad bug 1786106 in tripleo "containers-prep fails w/ update_containers is undefined due to lack of default" [High,Fix released] - Assigned to wes hayutin (weshayutin)  10:35
<ssbarnea|ruck> clearly something is not working, should I reopen the bug and put an alert on it?  10:36
<ssbarnea|ruck> if it still affects us, it means the fix was incomplete, right?  10:36
<marios> ssbarnea|ruck: ack maybe add a comment with a pointer to the logs on the bug as a starter? agree if it doesn't fix the problem then we re-open the bug  10:37
<marios> ssbarnea|ruck: i'll also have a look in a sec  10:37
<ssbarnea|ruck> already did  10:37
<ssbarnea|ruck> my impression is that the fix should also be added to the extras-common role; initially it was only added to the container-prep role.  10:39
<marios> ack ssbarnea|ruck  10:39
<dalvarez> chandankumar: arxcruz o/ i have a doubt regarding tempestconf  10:40
<dalvarez> i see that in the gate we're doing:  10:40
<dalvarez>   --debug \  10:40
<dalvarez>       --remove network-feature-enabled.api_extensions=dvr \  10:40
<dalvarez> chandankumar: arxcruz how can i remove another extension only for the scenario007 job? i'd like to disable dhcp_agent_scheduler for the OVN case  10:41
<arxcruz> dalvarez: hey, so, you need to override tempest_conf_removal  10:47
<arxcruz> dalvarez: https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/validate-tempest/defaults/main.yml#L71  10:47
<arxcruz> in the featureset that scenario007 uses  10:47
<ssbarnea|ruck> marios: this is weird, extras-common/tasks/main.yml is empty in git... do we generate it?  10:47
<tosky> dalvarez, arxcruz: also, please note that, if I remember correctly, that setting is used only because the dvr extension was advertised by neutron even when not enabled  10:49
<dalvarez> arxcruz: oh nice nice! thanks a lot  10:49
<tosky> dalvarez, arxcruz: it would be better if you can disable dhcp_agent_scheduler at deployment time, so that neutron does not advertise it  10:49
<dalvarez> tosky: good point, indeed  10:49
<dalvarez> let me check that, this is perhaps the right way as you point out  10:50
<tosky> yep, that was a workaround for a neutron bug  10:50
<tosky> which was later fixed, so maybe at some point that override can be removed  10:51
<dalvarez> yeah now i remember  10:51
<dalvarez> good one  10:51
<marios> ssbarnea|ruck: added a comment  10:52
<marios> ssbarnea|ruck: i think it might be we need the fix in https://github.com/openstack/tripleo-quickstart/blob/master/config/release/tripleo-ci/ocata.yml#L61 too?  10:52
<dalvarez> arxcruz: still.. can the tempest validator be adjusted per featureset?  10:52
<arxcruz> dalvarez: what do you mean?  10:52
*** sanjayu_ has joined #oooq10:53
*** saneax has quit IRC10:53
<dalvarez> arxcruz: i mean that the validate-tempest role that you linked is the one setting the extension removal, can that be done per featureset/scenario?  10:54
<dalvarez> i thought it was just a global thing for all jobs  10:54
*** sanjayu__ has joined #oooq10:55
<arxcruz> dalvarez: yeah, it can be done per featureset, i don't see why not  10:55
<dalvarez> arxcruz: are we doing it somewhere already so that i can take it as an example?  10:56
<arxcruz> dalvarez: i don't think so  10:57
<arxcruz> dalvarez: let me check  10:57
<tosky> why not? It's just a value to pass to tempestconf  10:57
<tosky> I mean, we have examples  10:57
<arxcruz> tosky: in featuresets?  10:57
<tosky> oh, no, that's for overriding, not for removal  10:58
*** apetrich has joined #oooq10:58
*** sanjayu_ has quit IRC10:58
<tosky> talking about overriding options  10:58
<arxcruz> dalvarez: we can remove the default from validate-tempest, and override in all featuresets  10:58
<arxcruz> adding the dvr option for removal  10:58
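[editor's note] A sketch of the per-featureset override arxcruz describes, assuming the dict shape used in the linked validate-tempest defaults; the featureset filename and the extra dhcp_agent_scheduler value are illustrative, not an existing file:

```yaml
# Hypothetical override in the featureset file that scenario007 uses;
# validate-tempest renders each key/value pair of tempest_conf_removal
# as a `--remove key=value` argument to python-tempestconf, matching the
# gate command shown above.
tempest_conf_removal:
  network-feature-enabled.api_extensions: dvr,dhcp_agent_scheduler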
<tosky> sshnaidm: I'm not sure I get the comment in https://review.openstack.org/#/c/509554/ , does it require any action?  10:59
<dalvarez> arxcruz: got it, i'm just not familiar with this, i'll take a look  10:59
<dalvarez> thanks!  10:59
<sshnaidm> tosky, your changes to scenario008 are ignored, up to you  10:59
<sshnaidm> tosky, this job doesn't run tempest  11:00
<tosky> sshnaidm: featureset008, you mean?  11:01
<sshnaidm> tosky, yes  11:01
<tosky> so it changed  11:01
<tosky> I'm 100% sure it did  11:02
<tosky> sshnaidm: so the table is not updated? https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html  11:02
*** jaosorior has joined #oooq11:15
<sshnaidm> tosky, seems so  11:16
<sshnaidm> tosky, it did for branches after ocata, according to the code in the featureset  11:16
<sshnaidm> tosky, but because this job doesn't run in branches after ocata...  11:17
<tosky> sshnaidm: the job is not run, but the featureset is a featureset that deploys manila with tempest  11:17
<tosky> my point is that all the featuresets that deploy manila with tempest should run those tempest tests  11:18
*** ccamacho has quit IRC11:18
<tosky> if the featureset is not executed on the gates, it's not a problem on my side  11:18
<tosky> either it will continue to live there and be used by users from time to time, or disappear at some point  11:19
<tosky> but I want consistency  11:19
*** ykarel|away has quit IRC11:19
<sshnaidm> tosky, I see, just fyi that it will never run tempest  11:20
<tosky> sshnaidm: if used, that featureset runs tempest  11:21
<tosky> where do you see that it does not?  11:21
<sshnaidm> tosky, I don't remember, did we set containers as default in pike?  11:21
<sshnaidm> tosky, I explained above  11:21
<tosky> sshnaidm: how is that relevant?  11:21
<tosky> then I didn't get it  11:21
<sshnaidm> tosky, what's not clear exactly?  11:22
<tosky> sshnaidm: featureset007 has run_tempest: true from pike onwards  11:22
<tosky> if no job from pike uses that featureset, it's not a problem  11:23
<sshnaidm> tosky, it can have whatever, it doesn't mean anything if we don't support scenario004 in containers  11:23
*** ccamacho has joined #oooq11:25
<tosky> sshnaidm: again, either at some point you will run a job with containers with scenario004, or you will remove that featureset  11:26
<sshnaidm> tosky, I'm not against this patch, it's fine with me, I'd just like everyone to realize that it won't run tempest in CI, in case you expect it to  11:26
<tosky> sshnaidm: I don't expect that  11:26
<tosky> I already wrote it  11:26
<tosky> what I need to ensure is that ANY featureset which deploys manila AND tempest, IF used by anyone, will run those manila tests  11:26
<tosky> that's it  11:26
<sshnaidm> tosky, containers don't run with featureset008, they run only with featureset019  11:26
<tosky> see above  11:27
<tosky> so at some point I guess you will remove featureset017, if it should not be used because not supported  11:27
<tosky> sorry, featureset008  11:27
<tosky> then fine, but until that point, as long as any featureset which supports manila and tempest in any way is in the repository, their definition should be the same  11:28
<tosky> same definitions regarding the tempest tests executed  11:28
<sshnaidm> tosky, after ocata is EOLed it won't be used at all, right  11:28
<sshnaidm> tosky, ok, completely fine with it  11:29
<tosky> thanks  11:30
*** saneax has joined #oooq11:39
*** sanjayu__ has quit IRC11:42
*** saneax has quit IRC11:42
<weshay> ssbarnea|ruck, ur not on the program call  12:04
<weshay> or I don't see u  12:04
*** amoralej is now known as amoralej|lunch12:06
<weshay> ssbarnea|ruck, ?  12:10
*** ssbarnea|ruck has quit IRC12:13
*** trown|outtypewww is now known as trown12:21
*** agopi has quit IRC12:28
*** rlandy has joined #oooq12:32
*** rnoriega_ has joined #oooq12:38
<sshnaidm> weshay, marios is it known that OVB jobs don't build dependencies?  12:39
<weshay> sshnaidm, build-test-packages is not running?  12:39
<weshay> sshnaidm, on check ovb or periodic?  12:39
<sshnaidm> weshay, check OVB jobs, seems like it's running, but doesn't build..  12:40
<sshnaidm> weshay, will look later, just noticed  12:40
<weshay> panda|off, ready if you are  12:41
<panda|off> weshay: ready  12:44
<weshay> panda|off, https://etherpad.openstack.org/p/tripleo-python3-tripleoclient-issues  12:49
*** agopi has joined #oooq12:56
<weshay> panda|off, https://nb01.openstack.org/images/  12:57
<rlandy> weshay: when you have a moment, I have the logs from the overcloud image build using the diff centos node  12:58
<rlandy> rf0lc0: ^^  13:00
<rlandy> pls see https://review.openstack.org/#/c/594308 and test job https://review.openstack.org/#/c/594548  13:00
<rlandy> with logs http://logs.openstack.org/48/594548/1/check/tripleo-buildimage-overcloud-full-centos-7/ff19db5/  13:00
*** ssbarnea has joined #oooq13:03
<rf0lc0> looking good to me, rlandy  13:03
<rf0lc0> am I missing anything? the build is good  13:03
<ssbarnea> i just rejoined directly, after being kicked due to the |ruck suffix  13:04
<rlandy> rf0lc0, I am trying to compare logs  13:04
<rlandy> to see if the changed image made a diff  13:04
*** amoralej|lunch is now known as amoralej13:05
*** ssbarnea is now known as ssbarnea_13:07
<rf0lc0> ssbarnea, you need to group your nicks to the registered account, I think: https://freenode.net/kb/answer/registration  13:08
<marios> not something i know of sshnaidm  13:08
<rlandy> panda|off: sshnaidm: pls see https://review.openstack.org/#/c/594308/6/zuul.d/build-image.yaml and let me know if you have any concerns about changing the node here to single-centos--node  13:08
<rlandy> marios, ^^ you already +1'ed - I removed my w-1  13:09
<marios> rlandy: ack  13:10
* rlandy doesn't really know who uses this job  13:11
<marios> rlandy: well i revoted  13:11
<rlandy> but hopes the node change is ok  13:11
<marios> rlandy: i dont have objections but i might not be the best person to ask ;)  13:11
<marios> rlandy: i can revote when we are ready  13:11
<sshnaidm> rlandy, don't we need ha-utils and browbeat now, as they are in requirements?  13:11
<rlandy> sshnaidm: where are you pointing to?  13:13
*** ssbarnea_ has quit IRC13:13
<sshnaidm> rlandy, https://review.openstack.org/#/c/594308/6/zuul.d/base.yaml  13:13
*** ssbarnea_ has joined #oooq13:13
<rlandy> tripleo-ci-dsvm never touches either of those  13:13
<rlandy> they are included above  13:13
<rlandy> well, only browbeat  13:14
<rlandy> tripleo-ha-utils we have not added anything yet  13:14
<rlandy> as we use tripleo-ha-utils, I will include it  13:15
<rlandy> the job would not build if we had an issue  13:15
*** ssbarnea_ has quit IRC13:16
*** ssbarnea|ruck has joined #oooq13:16
*** ssbarnea|ruck has quit IRC13:18
*** ssbarnea|ruck has joined #oooq13:18
<rf0lc0> weshay, any ideas on how I can test a periodic legacy job with an upstream patch like this https://review.openstack.org/#/c/589448 ? Want to verify the variables did not break anything.  13:19
<weshay> rf0lc0, I think sshnaidm has a test job out there  13:21
<weshay> a test patch that you can use as an example  13:21
<weshay> rf0lc0, maybe like https://review.rdoproject.org/r/#/c/14773/  13:22
<weshay> rf0lc0, https://review.rdoproject.org/r/#/c/13943/8  13:23
<rf0lc0> weshay, hmmm, moving to check pipeline... duh  13:24
* rf0lc0 makes silly face  13:24
<rf0lc0> weshay, thank you master  13:24
<panda|off> sorry guys, I'll be off for the rest of the week.  13:25
<rf0lc0> panda|off, ack let me know if you need any help  13:27
<rlandy> sshnaidm, https://review.openstack.org/#/c/594308/ is what we needed to fix for the reparenting to go forward  13:27
<rlandy> test job here ... https://review.openstack.org/#/c/594548  13:28
<rlandy> apetrich: how's the tempest error debug going?  13:41
<rlandy> manage to reproduce it?  13:41
*** dtrainor has quit IRC13:44
<apetrich> rlandy, nope.  13:50
<apetrich> also rdo cloud kind of behaving badly does not help  13:52
<apetrich> but for sure on a non-container undercloud it does not happen, or does not happen often  13:52
<apetrich> on a container undercloud testing is a bit of a pain  13:53
<rlandy> apetrich: pls post the link to your bug again  13:54
<apetrich> rlandy https://bugs.launchpad.net/tripleo/+bug/1736950  13:55
<openstack> Launchpad bug 1736950 in tripleo "CI: mistral test mistral_tempest_tests.tests.api.v2.test_actions.ActionTestsV2.test_get_list_actions_not_in_list_filter fails in gate scenario003 containers" [Critical,Triaged] - Assigned to Adriano Petrich (apetrich)  13:55
<weshay> ssbarnea|ruck, please investigate the rocky promotion job failures and get bugs asap  13:56
<weshay> https://logs.rdoproject.org/openstack-periodic/git.openstack.org/openstack-infra/tripleo-ci/master/legacy-periodic-tripleo-ci-centos-7-multinode-1ctlr-featureset010-rocky/00dd61e/job-output.txt.gz  13:56
<weshay> ssbarnea|ruck, see https://review.rdoproject.org/zuul/status.html  13:56
<ssbarnea|ruck> weshay: ok  13:56
<weshay> sshnaidm, holla if you want/need help  13:56
<weshay> sshnaidm, sorry... meant ssbarnea|ruck  13:57
<rlandy> arxcruz: do you have some time today? I'd like to look at ^^ apetrich's bug and see what you would do about debugging that  13:57
<weshay> nick collision  13:57
<weshay> marios|rover, ^  13:57
<apetrich> arxcruz, or if you want tomorrow we can meet somewhere to look at that face to face.  13:57
<rlandy> apetrich: I am trying to learn how to debug tempest failures better  13:58
<apetrich> today I'm neck deep in meetings, then I have to leave  13:58
<rlandy> I'd like to use that as a teaching example if arxcruz has the time  13:58
<apetrich> rlandy, that makes us 2  13:58
<marios> weshay: ack, i looked into that nova one earlier, added a comment #12 @ https://bugs.launchpad.net/tripleo/+bug/1787910  13:59
<openstack> Launchpad bug 1787910 in tripleo "OVB overcloud deploy fails on nova placement errors" [Critical,Triaged] - Assigned to Marios Andreou (marios-b)  13:59
<marios> weshay: meant to ask you about that re the versions  13:59
<rlandy> apetrich: ok - cool - your morning though is my midnight :(  13:59
<marios> weshay: err check comment #11 instead, sorry  13:59
*** dmellado has quit IRC14:01
<arxcruz> i have the time  14:03
<arxcruz> rlandy: apetrich  14:03
<arxcruz> although i don't know what you're talking about  14:03
<arxcruz> apetrich: tomorrow they are coming here to mount the wardrobe  14:03
<rlandy> arxcruz: https://bugs.launchpad.net/tripleo/+bug/1736950  14:04
<openstack> Launchpad bug 1736950 in tripleo "CI: mistral test mistral_tempest_tests.tests.api.v2.test_actions.ActionTestsV2.test_get_list_actions_not_in_list_filter fails in gate scenario003 containers" [Critical,Triaged] - Assigned to Adriano Petrich (apetrich)  14:04
<apetrich> arxcruz, so there's a mistral undercloud tempest test that shows as a failure. we are skipping it here https://review.openstack.org/#/c/592591/1/roles/validate-tempest/vars/tempest_skip_master.yml  14:04
<rlandy> we want to use this example to set up a debug env  14:04
<rlandy> and learn how you would work through this  14:04
<apetrich> but I'm unable to reproduce on a non-container undercloud  14:04
<apetrich> and I can't find a way to run a tempest container and tempest in it  14:05
<rlandy> tomorrow afternoon your time (morning my time)?  14:05
<rf0lc0> anyone also experiencing problems with review.rdoproject.org ?  14:06
<rf0lc0> review.rdoproject.org took too long to respond  14:06
*** dtrainor has joined #oooq14:08
<arxcruz> rf0lc0: 1.1.1.1 ;)  14:09
<arxcruz> rlandy: apetrich sure, let's do it :)  14:10
<rlandy> arxcruz: I'm ready but apetrich has meetings  14:12
<rlandy> not sure if you want to do this twice over :)  14:12
<rlandy> up to you  14:12
<arxcruz> rlandy: do you have an env already up ?  14:13
<rlandy> arxcruz: no - that is what I want to learn to set up  14:14
<rlandy> you showed me your env once  14:14
<arxcruz> by env i mean the openstack installed just to run tempest  14:14
<arxcruz> my env i do with the reproducer script  14:14
<rlandy> oh no - but I can do that  14:15
<rlandy> depending on the responsiveness of rdocloud  14:15
<rlandy> arxcruz: let me set that up - will ping you when it is done  14:15
<arxcruz> ok  14:15
<arxcruz> i have an env up, but i'm testing tempestconf  14:16
<rlandy> trying the reproducer from this log ...  14:16
<rlandy> http://logs.openstack.org/16/592216/2/check/tripleo-ci-centos-7-scenario003-multinode-oooq-container/  14:17
<rlandy> hmmm ... rdocloud not responding :(  14:21
<rlandy> arxcruz: ^^ :(  14:21
<rlandy> we may have to try again another time - sorry  14:22
<rlandy> or I could try the libvirt reproducer  14:22
<rlandy> does it matter to you which one we use?  14:22
*** ssbarnea|ruck has quit IRC14:23
<rlandy> arxcruz: trying libvirt reproducer  14:28
*** jrist has joined #oooq14:45
<arxcruz> rlandy: yeah, even my env is down now  14:57
*** apetrich has quit IRC14:58
<rlandy> arxcruz: I am trying the libvirt reproducer but "Resize undercloud image" is failing  14:58
<marios> weshay: i've been chasing an issue (via grafana) for a bit and getting ready to file the bug, but worried it might be because of the rax outage? i was looking at tripleo-ci-centos-7-scenario003-multinode-oooq-container from https://review.openstack.org/#/c/560445/ via grafana. it times out on the overcloud deploy  14:59
<sshnaidm> weshay, marios rdo cloud is acting up again, fyi..  14:59
<marios> weshay: afaics it is an issue with containers http://paste.openstack.org/show/728607/ both getting the registry, but the error is a problem with container delete  15:00
<marios> sshnaidm: thanks, do you think this is related to rdo cloud? ^^^ or should i file a bug for that  15:00
<weshay> marios|rover, we need an alert bug stating rdo-cloud is down.. w/ alert and promotion blocker please  15:00
<weshay> marios, /me looks at your links  15:01
<sshnaidm> marios, no no, just fyi, not in this context  15:01
*** ssbarnea|ruck has joined #oooq15:02
<marios> weshay: https://bugs.launchpad.net/tripleo/+bug/1788426  15:02
<openstack> Launchpad bug 1788426 in tripleo "RDO Cloud is down" [Undecided,In progress] - Assigned to Marios Andreou (marios-b)  15:02
<marios> done lord vader  15:02
<marios> weshay: (hope thats what you meant)  15:02
<weshay> marios, lolz  15:03
<weshay> marios, that is perfect  15:03
<marios> ack sshnaidm thanks  15:04
<weshay> marios, so chat w/ me for a sec.. are you tracking down an issue w/ rax?  15:06
<marios> weshay: no i just looked at the first timeout in the check jobs from grafana  15:06
*** rnoriega_ is now known as rnoriega15:07
<weshay> marios, k.. hard to see anything w/ rdo cloud being down :)  15:07
<weshay> http://dashboard-ci.tripleo.org/d/cEEjGFFmz/cockpit?orgId=1  15:07
<marios> weshay: which is http://logs.openstack.org/45/560445/122/check/tripleo-ci-centos-7-scenario003-multinode-oooq-container/4c6b063/  15:07
<weshay> I guess openstack health  15:07
<marios> weshay: ok so i'll go ahead and file that bug then and see if any folks "from containers" know anything about that one  15:08
<marios> weshay: i want to see how often it happens but cistatus.tripleo.org is nope  15:08
<weshay> http://status.openstack.org/elastic-recheck/  15:08
<marios> (i guess rdo cloud)  15:08
<marios> weshay: well, assuming we have a query for it already?  15:09
<weshay> marios, timeouts should be tracked w/ the 143 error  15:09
<marios> weshay: or you mean propose one  15:09
<weshay> ya.. there is  15:09
<weshay> search that page for 143  15:09
<weshay> marios, er.. sorry, Bug 1686542 - Generic job timeout bug  15:09
<openstack> bug 1686542 in OpenStack-Gate "Generic job timeout bug" [Low,Confirmed] https://launchpad.net/bugs/1686542  15:09
<marios> weshay: you mean for this particular ... oh you mean there is a job timeout one  15:09
<marios> weshay: ack  15:09
<weshay> http://logstash.openstack.org/#dashboard/file/logstash.json?query=(message%3A%20%5C%22FAILED%20with%20status%3A%20137%5C%22%20OR%20message%3A%20%5C%22FAILED%20with%20status%3A%20143%5C%22)%20AND%20tags%3A%20%5C%22console%5C%22%20AND%20voting%3A1  15:10
<weshay> then you can add filters like tripleo  15:10
<marios> weshay: ok, not so frequent then  15:10
<marios> weshay: 1 fail in 24 hours  15:11
<marios> weshay: but why is our upstream ci status suffering (at 76.1 now)  15:11
<marios> all the things are super slow today  15:11
<marios> weshay: k gonna hold on the bug... i have notes. lets see if it happens more than once ;)  15:14
<weshay> k  15:14
<marios> weshay: did you check comment #11 https://bugs.launchpad.net/tripleo/+bug/1787910 about the nova versions... so how do we get that newer version  15:17
<openstack> Launchpad bug 1787910 in tripleo "OVB overcloud deploy fails on nova placement errors" [Critical,Triaged] - Assigned to Marios Andreou (marios-b)  15:17
<marios> sshnaidm: rlandy if you have time ^^  15:17
<rlandy> marios: is there a review to look at or do you want us to work on the bug?  15:18
<sshnaidm> marios, if we have time..?  15:18
<marios> sshnaidm: heh :) rlandy no i meant, does what i wrote make sense. we need a particular version of nova (from the 22nd, after the patch merged)  15:19
<rlandy> anyone else have the libvirt reproducer fail on virt-resize?  15:20
<marios> sshnaidm: weshay rlandy not clear to me how we get that. does it mean a promotion (if so we have a problem since the bug is itself a promotion blocker). or do we just wait?  15:20
<marios> (aka recheck)  15:20
<weshay> rlandy, run the command manually from your host  15:21
<weshay> and it will be more clear what the error is  15:21
<weshay> marios, your question is in reference to 1787910  15:22
<weshay> ?  15:22
<marios> weshay: yes, comment #11 specifically  15:22
<marios> weshay: ykarel said 'not fixed yet' and i agree, even though the patch merged. We aren't getting a new enough version of nova.  15:23
<weshay> marios, ya.. if the patch is not in the build.. then the bug remains open  15:24
<weshay> tracking down the git hash for the fix.. vs. the dlrn package id  15:24
<weshay> is the right way to figure that out  15:24
<weshay> so you did that it looks like  15:24
<weshay> marios, so the next question is .. is rocky consistent?  15:24
* weshay looks  15:24
<weshay> http://rhos-release.virt.bos.redhat.com:3030/rhosp  15:25
<weshay> ah.. we're not tracking rocky  15:25
<weshay> just master  15:25
<marios> weshay: damn i missed the ci escalation status call  15:25
<marios> :/  15:25
<marios> just realised  15:25
<weshay> marios, meh  15:25
<weshay> I was there  15:25
<weshay> so. this bug is on master  15:25
<rlandy> I hate this error ... Message: No valid host was found. There are not enough hosts available., Code: 500"  15:25
<weshay> rlandy, ya.. terrible  15:26
<rlandy> openstack just spews that when it is not sure what else to say  15:26
<weshay> marios, /me looks at the repos  15:26
<weshay> just to double check  15:26
<weshay> openstack-nova-18.0.0-0.20180822065816.17b6957.el7.noarch.rpm  15:27
<weshay> marios, that is the latest patch on master nova https://github.com/openstack/nova/commits/master  15:27
<marios> weshay: that one should do it  15:27
<marios> weshay: it's what's in current  15:27
<weshay> marios, so to get ahead of it.. "IF" we wanted to  15:28
<marios> [2] https://trunk.rdoproject.org/centos7-master/current/  15:28
<marios> weshay: ^  15:28
<weshay> meh.. nvrmind  15:28
<marios> has nova-18.0.0-0.20180822065816.17b6957.el7.noarch.rpm  15:28
<weshay> marios, ya.. we don't pick up current  15:28
<weshay> we pick up consistent  15:28
<weshay> but ya  15:28
<weshay> atm they are the same  15:28
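[editor's note] The "git hash for the fix vs. the dlrn package id" check above can be scripted: a dlrn-built NVR embeds the source git short-hash (the 17b6957 piece of the nova rpm quoted above). A minimal sketch, assuming the `<version>-0.<timestamp>.<shorthash>.el7` layout shown in the log; a real check would also walk git ancestry for builds made after the fix landed:

```python
# Sketch: does a dlrn-built rpm contain a given fix?
# dlrn embeds the source git short-hash in the release field, e.g.
#   openstack-nova-18.0.0-0.20180822065816.17b6957.el7.noarch.rpm
# so it can be compared against the commit that carries the fix.

def dlrn_short_hash(rpm_name):
    """Extract the git short-hash, assuming the
    <version>-0.<timestamp>.<shorthash>.el7 layout seen above."""
    release = rpm_name.split("-")[-1]  # "0.20180822065816.17b6957.el7.noarch.rpm"
    return release.split(".")[2]       # "17b6957"

def contains_fix(rpm_name, fix_commit):
    """True when the rpm was built from the fix commit itself; a real
    check would also accept descendants of that commit."""
    return fix_commit.startswith(dlrn_short_hash(rpm_name))

rpm = "openstack-nova-18.0.0-0.20180822065816.17b6957.el7.noarch.rpm"
print(dlrn_short_hash(rpm))  # -> 17b6957
```

If the extracted hash matches (or descends from) the fix's commit, the package has the patch and the bug can be closed; otherwise the bug stays open, as weshay says.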
<weshay> marios ya.. so ur right man re: the bug  15:31
<marios> weshay: i assure you, it wasn't intentional!  15:32
* marios brb  15:32
<weshay> can't do shit atm  15:33
<weshay> no logs  15:33
<marios> weshay: so my question still stands. do we need a promotion to get that newer version? or do we just wait?  15:34
<marios> weshay: if it is the former, we need promotion, then we have a problem  15:34
<weshay> the promotion jobs ALWAYS pull the latest consistent rpms  15:35
<weshay> from the whole of openstack  15:35
<weshay> I can't tell what yatin was looking at atm.. because rdo logs are down  15:35
<weshay> but if we did go to the job he pulled as an example  15:35
<marios> weshay: oh so we dont need a promotion to get the newer nova... it will just get pulled into the next promotion job  15:35
<weshay> and looked at rpm-qa.txt in the base  15:35
<weshay> we'd know  15:35
<weshay> marios, the promotion jobs always run w/ the latest openstack rpms  15:36
<weshay> marios, ssbarnea|ruck did you guys read through my doc?  15:36
<weshay> :))  15:36
<marios> weshay: yeah it was very helpful  15:36
<weshay> k..  15:36
<marios> weshay: 3 times :)  15:36
<weshay> lolz  15:36
<marios> weshay: i need another pass  15:36
<weshay> he he  15:36
<weshay> lolz  15:36
<weshay> marios, you sound like morazi  15:36
<marios> weshay: i worked for him for a long time :)  15:37
<ssbarnea|ruck> 2 times, i only managed 0.7 times.  15:37
<rlandy> hmmm ... Could not open '/tmp/reproduce-tmp.jymb5/undercloud-resized.qcow2': Permission denied  15:40
<rlandy> run as non-root-user  15:40
<rlandy> weird  15:42
<rlandy> has not changed in a while  15:42
*** dmellado has joined #oooq15:45
<weshay> rlandy, tmate?  15:46
<rlandy> sec, setting up  15:48
*** gkadam is now known as gkadam-afk15:52
<rlandy> weshay++  16:04
<rlandy> for scaring my libvirt into action  16:04
<weshay> heh  16:10
<rlandy> marios: are you all set? still want help with ovb fun?  16:13
<marios> rlandy: /me hometime  16:13
<marios> continue tomorrow  16:14
<rlandy> marios: ack ok  16:14
<marios> rlandy: thanks for checking  16:14
<rlandy> need to take another pass at your reviews  16:14
<rlandy> marios: ^^  16:14
<rlandy> will do so if rdocloud returns  16:14
<marios> rlandy: k have a good 1  16:15
<rlandy> sure  16:15
*** gkadam-afk is now known as gkadam16:49
*** ykarel has joined #oooq16:51
*** trown is now known as trown|lunch16:52
*** gkadam has quit IRC16:54
<ykarel> weshay, sshnaidm are we waiting for something to get https://review.openstack.org/#/c/591982/ +W?  17:04
<rf0lc0> weshay, do you have the power to merge this https://review.rdoproject.org/r/#/c/15825/1/resources/missing_resources_1489486510.yaml  17:07
<weshay> ykarel, it's been workflowed.. just also has been rebased  17:19
<weshay> ykarel, taking care of it  17:19
<weshay> rf0lc0, sec  17:19
<ykarel> weshay, okk Thanks  17:19
<rf0lc0> weshay, nicolas said r.r.o might not have fully recovered yet  17:20
<weshay> rf0lc0, I just have +1  17:20
<weshay> lolz  17:20
*** ykarel has quit IRC17:20
<rf0lc0> weshay, you are not helping !  17:21
<rf0lc0> :)  17:21
*** ykarel has joined #oooq17:21
*** vinaykns has joined #oooq17:21
*** vinaykns has left #oooq17:22
<rlandy> ovb still down :(  17:22
<rlandy> all my jobs are red  17:22
<rlandy> arxcruz: I have an env up on libvirt but it did not fail tempest  17:23
*** ykarel_ has joined #oooq17:24
*** ykarel_ has quit IRC17:24
*** ykarel_ has joined #oooq17:24
<rlandy> weshay: sorry to bug you about this ... https://review.openstack.org/#/c/594308/ - ok to merge? https://review.openstack.org/#/c/594548 shows the buildimage test job  17:25
<ykarel_> many nodes here are in deleting state: https://review.rdoproject.org/zuul/nodes.html, is someone taking care of those  17:25
*** rnoriega has quit IRC17:25
*** rf0lc0 has quit IRC17:25
*** panda|off has quit IRC17:25
*** marios has quit IRC17:25
*** jschlueter has quit IRC17:25
*** weshay has quit IRC17:25
<ykarel_> weshay, ^^  17:25
*** ykarel has quit IRC17:26
*** ykarel_ has quit IRC17:27
*** holser_ has quit IRC17:30
*** bogdando has quit IRC17:41
*** rodrigods has joined #oooq17:44
*** rfolco has joined #oooq17:50
*** weshay has joined #oooq17:50
<weshay> it's IDENTIFY NOT LOGIN  17:51
<weshay> :))  17:51
<rlandy> weshay: welcome back  17:51
<weshay> stupid wes  17:51
<rfolco> I am also struggling with identify on hexchat  17:51
<rfolco> connect commands: I set the identify... but this happens after I get kicked from channels  17:52
<rfolco> where am I noob'ing ?  17:52
<rlandy> what is the login method in hexchat?  17:52
<rfolco> custom command  17:52
<rlandy> hexchat -> network list -> look at the login command  17:53
<rfolco> this ==> msg NickServ IDENTIFY rf0lc0 %p  17:53
<rlandy> mine is SASL username and password  17:53
<rlandy> ^^ when you click on freenode and Edit  17:54
<rfolco> let me try  17:54
<rlandy> add the password  17:54
<rlandy> to the password field  17:54
<rlandy> make sure "use global user info" is set  17:55
<rfolco> ok. quit and restart  17:55
<rfolco> thx rlandy  17:56
<rlandy> rfolco: worked?  17:56
<rfolco> trying  17:56
*** rfolco has quit IRC17:56
<rlandy> I guess now since he did not come back  17:56
*** rfolco has joined #oooq18:05
<rlandy> rfolco: I guess that didn't help :(  18:09
<rlandy> you disappeared for too long  18:10
<rfolco> rlandy, :(  18:10
<rfolco> I can manually identify myself  18:10
<rfolco> after  18:10
<rfolco> but hexchat doesn't do that when I connect to freenode  18:10
<rfolco> :(  18:10
<rlandy> sorry - using the SASL login method and adding my password in the dialog works for me  18:12
*** ChanServ has quit IRC18:16
*** trown|lunch is now known as trown18:21
*** ChanServ has joined #oooq18:22
*** barjavel.freenode.net sets mode: +o ChanServ18:22
*** dmellado has quit IRC18:22
*** dtantsur is now known as dtantsur|afk18:43
*** tosky has quit IRC18:47
*** amoralej is now known as amoralej|off18:51
<rfolco> cores, can you please do a final pass on https://review.openstack.org/#/c/589448 ? sshnaidm marios|rover rlandy weshay  18:56
<rlandy> review.rdoproject still down :(  18:56
<rlandy> rfolco: are the results you posted from a review that ran with the new workflow  18:59
<rlandy> the tests in the actual review would not trigger  18:59
<rfolco> yes rlandy, I used different jobs for that patch  18:59
<rfolco> why?  19:00
<rlandy> only missing var to check is dryrun[4]. Should be good to merge.  19:00
<rlandy> ^^ not sure about that  19:00
* rlandy is so out of the loop on this change  19:02
<rlandy> arxcruz: you around?  19:02
<arxcruz> rlandy: kind of :)  19:02
<rfolco> rlandy, dryrun is an on-demand flag... not a risk  19:03
<arxcruz> rlandy: can we go through it tomorrow? it's 9pm here :)  19:03
<rlandy> arxcruz: ok - I have an env ready - tomorrow is fine  19:03
*** jfrancoa has quit IRC19:08
weshayrfolco, so question19:16
weshaywe're leaving playook19:16
weshaysec19:16
rlandywhat happened to our reproducer card?19:17
rlandyweshay: rfolco: can I move this card into progress ... would like to start putting notes in there wrt reproducer and zuul-emulator mode19:20
rlandyprobably won;t complete this sprint - but we can start here19:20
rfolcorlandy, agreed... we started to accumulate leftovers from previous sprints :(19:21
weshayrlandy, so that is tenatively working out?19:25
rlandyweshay: the zuul_changes thing is not so simple - I am experimenting with the build-test-packages role19:26
rlandyper adarazs's work a while back19:26
rlandynevertheless19:26
weshayrlandy, what about zuul cloner?19:27
rlandyI have done some experimentation and want to log notes somewhere19:27
rlandywell if zuul_cloner is deprecated, probably not the way to go19:27
rlandyrfolco: left a question on https://review.openstack.org/#/c/58944819:28
rlandyweshay: mostly I wanted somewhere to put the poc reviews and notes for tomorrow's meeting discussion19:29
rlandyweshay: we need a supported way to get the depends-on jobs from the gerrit change19:29
rlandyw/o relying on jenkins or zuul19:29
rlandybecause if we want a standalone reproducer-type script19:29
rlandyto replace quickstart.sh, we can't rely on either19:30
rlandybuild-test-packages uses one or the other19:30
rlandyand then there are all the centos assumptions in nodepool-setup19:31
weshayrlandy, we should chat w/ paul19:34
rlandyweshay: about what part?19:35
rlandythe way to pull depends-on changes w/o zuul?19:36
weshayya19:37
rlandyrfolco: will try to comment intelligently but I am way out of the loop on these var changes19:37
weshayrfolco, so I'm starting to get interested again in what we can tear out of these jobs19:45
rfolcoweshay, which jobs ?19:46
weshayrfolco, /me goes to review marios's patches19:47
weshaysec.. talking to paul19:47
weshayin rdo19:47
weshayrfolco, rlandy asking in #zuul19:48
rlandyack - following19:49
rlandyhoping #zuul will come up with a better answer, but it is possible to API-query a gerrit change and grep for Depends-On:19:51
rlandyugly though19:51
rlandyweshay: ^^19:51
weshayrlandy, totally19:51
weshayrlandy, there is an endless chain there19:51
rlandysomething like ...19:51
rlandyssh -p 29418 review.openstack.org gerrit query 577230 --dependencies | grep Depends-On19:52
rlandyDepends-On: I84db703ac3c5e6da260986539e74eda3d0e6198f19:52
rlandybut then you have to query that review for its repo19:52
rlandy***very ugly*** but possible19:52
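The "query-and-grep" approach rlandy sketches above can be expressed as a small script. A minimal sketch, assuming the standard Gerrit "Depends-On: <Change-Id>" commit-message footer convention; the helper name and the sample commit message are invented for illustration:

```python
import re

# Matches "Depends-On: <Change-Id>" footer lines in a Gerrit commit message.
DEPENDS_ON_RE = re.compile(r"^Depends-On:\s*(\S+)\s*$",
                           re.MULTILINE | re.IGNORECASE)

def extract_depends_on(commit_message):
    """Return all Change-Ids referenced by Depends-On footers."""
    return DEPENDS_ON_RE.findall(commit_message)

# Invented sample message; in practice this would come from
# `ssh -p 29418 review.openstack.org gerrit query <change> ...` output.
sample_message = """Add reproducer script

Some description of the change.

Depends-On: I84db703ac3c5e6da260986539e74eda3d0e6198f
Change-Id: I0000000000000000000000000000000000000000
"""

print(extract_depends_on(sample_message))
```

As rlandy notes, each extracted Change-Id then has to be queried again to resolve its project and refspec, and dependencies can chain, so a real script would recurse.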
rfolcorandom thought... can we just run our job on sf, have it connect to the jenkins host, and pass all the zuul vars we set in the job config, as we do upstream?19:53
rfolcowe run a fake node job that is only a wrapper to the jenkins job19:53
rlandyrfolco: we need a way that is independent of zuul and jenkins19:54
rfolcowhy19:54
rlandybut if we had internal sf, we could have more options19:54
rlandybecause we need to replace quickstart.sh19:54
rlandythat runs standalone19:54
*** panda has joined #oooq19:55
rfolcoI still believe the idea is doable, easier than parsing depends-on19:55
rfolcomust be missing something19:55
rlandyrfolco: which sf would a random dev use19:55
rlandyto test his local work?19:55
rfolcook, I was mixing jenkins internal jobs with reproducer case19:57
rlandyjenkins is def a problem as well19:59
rlandyright now build-test-packages uses the library jenkins_deps19:59
rlandywhich we could reuse ...19:59
* rlandy gets19:59
rlandyrfolco: https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/build-test-packages/library20:00
rlandyhttps://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/build-test-packages/library/jenkins_deps.py20:01
rlandyfollows the workflow described above20:01
rfolcoyou have a reproducer, which should run just like the one that ran in zuul. So we do have the zuul inventory available. We have ZUUL_CHANGES then..... or not?20:01
rfolcosorry I am not on the same page here20:05
rlandyif not zuul, not zuul_changes20:17
rfolcolike reproducer for phase2 ?20:25
rlandyansible localhost -m jenkins_deps -M /home/rlandy/workspace/tripleo-quickstart-extras/roles/build-test-packages/library -a 'host="review.openstack.org" change_id="577230"'20:27
rlandy^^ that works20:27
rlandywe just pull refspec out of that20:29
rlandyweshay: ^^20:29
rlandyworst case scenario20:29
rlandythank you adarazs20:29
* weshay reads20:32
weshayrlandy, oh that's cool20:33
weshayrlandy, what is the result of that?20:33
rlandy[rlandy@rlandy library]$ ansible localhost -m jenkins_deps -M /home/rlandy/workspace/tripleo-quickstart-extras/roles/build-test-packages/library -a 'host="review.openstack.org" change_id="577230"' | grep "refspec"20:34
rlandy [WARNING]: Could not match supplied host pattern, ignoring: all20:34
rlandy [WARNING]: provided hosts list is empty, only localhost is available20:34
rlandy                "refspec": "refs/changes/30/577230/16"20:34
rlandy                "refspec": "refs/changes/77/593077/2"20:34
rlandybut I can clean that up more20:34
rlandyto build the zuul_changes string20:34
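Building the zuul_changes string from the resolved refspecs could look roughly like this. A sketch: the `project:branch:refspec` triplets joined by `^` follow the Zuul v2 ZUUL_CHANGES convention, and the project/branch values below are invented to go with the refspecs seen in the jenkins_deps output above:

```python
def build_zuul_changes(changes):
    """Join (project, branch, refspec) triplets into a ZUUL_CHANGES string.

    Zuul v2 encodes the change list as project:branch:refspec entries
    separated by '^'; the triplets would come from jenkins_deps output.
    """
    return "^".join("%s:%s:%s" % (project, branch, refspec)
                    for project, branch, refspec in changes)

# Illustrative data: refspecs from the log, projects/branches invented.
changes = [
    ("openstack/tripleo-quickstart", "master",
     "refs/changes/30/577230/16"),
    ("openstack/tripleo-quickstart-extras", "master",
     "refs/changes/77/593077/2"),
]
print(build_zuul_changes(changes))
```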
rlandyit's an idea20:34
weshayrlandy, if you can drop the refspec output into a json file20:42
weshaythat would be pretty cool20:42
rlandyweshay: yep - playing with some json output now20:42
*** trown is now known as trown|outtypewww20:54
*** jtomasek has quit IRC20:57
rlandyweshay: where do you want to go with this? make an attempt to write a localized CLI zuul-executor or just hack something out from a library we already have?21:11
*** holser_ has joined #oooq21:18
weshayrlandy, something to pitch to the team21:52
weshayI think21:52
*** agopi has quit IRC21:52
weshayrlandy, if you can frame up the question and some possible solutions.. I think that would lead us in the right direction21:57
weshayrfolco, where are nodesets defined?22:16
weshayin project config?22:16
rfolcoweshay, for what job22:16
weshayrfolco, doesn't exist yet..22:16
weshayrfolco, I need a fedora node22:17
rfolcoweshay, rdo or upstream22:17
weshayrfolco, upstream22:18
rfolcohttps://github.com/openstack-infra/tripleo-ci/blob/master/zuul.d/nodesets.yaml22:18
weshayI think it's called fedora-2822:18
* weshay looks22:18
weshayrfolco, that is incorrect22:18
rfolconew workflow parents to tripleo-ci-base which uses single or multinode centos from there22:19
weshayrfolco, does that parent to something?22:19
weshayrfolco, /me looking for fedora28 nodes22:19
rfolcoweshay, openstack-zuul-jobs has more nodesets22:19
rfolcook must be there22:19
* weshay looks22:19
rfolcohttps://github.com/openstack-infra/openstack-zuul-jobs/blob/master/zuul.d/nodesets.yaml#L1422:20
rfolcoweshay, ^22:20
weshayfedora-latest22:20
weshay:)22:20
weshayrfolco, rock on.. thanks22:20
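For reference, a Zuul nodeset definition like the `fedora-latest` one found above looks roughly like this. A sketch only: the real definitions live in the openstack-zuul-jobs `zuul.d/nodesets.yaml` linked above, and the node/label names here are illustrative:

```yaml
# Sketch of a Zuul nodeset definition (node and label names illustrative;
# see openstack-zuul-jobs zuul.d/nodesets.yaml for the real entries).
- nodeset:
    name: fedora-latest
    nodes:
      - name: primary
        label: fedora-28
```

Jobs then reference it by name via the job's `nodeset` attribute, which is why weshay only needed the name, not a new definition.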
*** holser_ has quit IRC22:26
*** holser_ has joined #oooq22:34
*** ChanServ has quit IRC22:49
*** ChanServ has joined #oooq23:03
*** barjavel.freenode.net sets mode: +o ChanServ23:03
*** holser_ has quit IRC23:10
*** rlandy has quit IRC23:39

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!