Friday, 2018-06-29

00:52 <hubbot> FAILING CHECK JOBS on master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
02:52 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal @ https://review.openstack.org/560445
03:06 *** skramaja_ has quit IRC
03:08 *** skramaja has joined #oooq
03:34 *** skramaja has quit IRC
03:35 *** ykarel has joined #oooq
03:46 *** udesale has joined #oooq
04:18 *** skramaja has joined #oooq
04:33 *** ratailor has joined #oooq
04:35 *** ykarel has quit IRC
04:52 <hubbot> FAILING CHECK JOBS on master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal, tripleo-ci-centos-7-scenario003-multinode-oooq-container @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
05:02 *** ykarel has joined #oooq
05:31 *** quiquell|off is now known as quiquell|rover
05:44 *** skramaja_ has joined #oooq
05:47 *** skramaja has quit IRC
05:59 *** ccamacho has joined #oooq
06:04 *** jbadiapa has joined #oooq
06:07 *** holser_ has joined #oooq
06:08 *** holser_ has quit IRC
06:08 *** holser_ has joined #oooq
06:20 *** ccamacho has quit IRC
06:20 *** ccamacho has joined #oooq
06:31 *** quiquell|rover is now known as quique|rover|bbl
06:42 *** yolanda__ has joined #oooq
06:43 *** yolanda__ is now known as yolanda
06:52 <hubbot> FAILING CHECK JOBS on master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal, tripleo-ci-centos-7-scenario003-multinode-oooq-container @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
06:53 *** bogdando has joined #oooq
07:02 *** quique|rover|bbl is now known as quiquell|rover
07:07 *** amoralej|off is now known as amoralej
07:10 *** tesseract has joined #oooq
07:29 *** ratailor_ has joined #oooq
07:32 *** tosky has joined #oooq
07:32 *** ratailor has quit IRC
07:53 *** ratailor_ has quit IRC
07:55 *** ratailor has joined #oooq
07:57 <quiquell|rover> arxcruz, chandankumar: Do you know anything about this? https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-queens/3d190d9/undercloud/home/jenkins/tempest/tempest.html.gz
07:58 *** ratailor_ has joined #oooq
07:58 *** dtrainor has quit IRC
07:58 *** dtrainor has joined #oooq
07:59 *** ratailor_ has quit IRC
07:59 *** ratailor has quit IRC
08:02 *** gkadam has joined #oooq
08:03 *** d0ugal has joined #oooq
08:05 <ykarel> quiquell|rover, so only fs020 failed yesterday
08:05 <ykarel> https://trunk-primary.rdoproject.org/api-centos-queens/api/civotes_detail.html?commit_hash=927e8115f6605ea18c883251b05e76b2f448eeba&distro_hash=43b025ca6c0610b81fefa32f5c826ba5385c10fd
08:05 <ykarel> do you know why we have not considered promotion by skipping fs020?
08:06 <ykarel> queens is 8 days behind
08:06 *** ratailor has joined #oooq
08:06 <quiquell|rover> ykarel: Don't know, I don't know if I can take those decisions
08:07 <ykarel> hmm, but you can propose :)
08:07 <quiquell|rover> ykarel: I will, I will...
08:08 <ykarel> yup, as fixing tempest tests can go in parallel
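The promotion decision ykarel is proposing can be sketched minimally: promote a repo hash only when every job in the promotion criteria passes, with the option of temporarily dropping a known-broken job (here fs020) from the criteria. The function and job names below are illustrative assumptions, not the real promoter's API:

```python
# Hypothetical sketch of the promotion-criteria check discussed above.
# job_results maps job name -> final status; criteria lists required jobs;
# skip lists jobs temporarily removed from the criteria (e.g. fs020).
def can_promote(job_results, criteria, skip=()):
    required = set(criteria) - set(skip)
    return all(job_results.get(job) == "SUCCESS" for job in required)

results = {"fs018": "SUCCESS", "fs020": "FAILURE", "fs035": "SUCCESS"}
print(can_promote(results, ["fs018", "fs020", "fs035"]))                  # False
print(can_promote(results, ["fs018", "fs020", "fs035"], skip=["fs020"]))  # True
```

With fs020 skipped, the remaining green jobs are enough to promote, which is why fixing the tempest issue can proceed in parallel.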
08:11 <bogdando> o/ PTAL https://review.openstack.org/#/c/577391/
08:13 *** skramaja_ has quit IRC
08:15 *** skramaja_ has joined #oooq
08:16 *** ykarel is now known as ykarel|lunch
08:27 <quiquell|rover> sshnaidm: https://review.rdoproject.org/r/#/c/14507/, https://review.rdoproject.org/r/#/c/14473/ to merge latest from RR dashboard
08:30 <ykarel|lunch> quiquell|rover, isn't the tempest issue fixed in master?
08:31 <ykarel|lunch> https://logs.rdoproject.org/51/579051/1/openstack-experimental/gate-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-master/Z5c0b8cc927614b77a6537a4dced72a0e/undercloud/home/jenkins/tempest/tempest.html.gz
08:32 <ykarel|lunch> hmm, the tempestconf change is not promoted, local tempest run timed out
08:34 <ykarel|lunch> running tempest locally again
08:35 <quiquell|rover> Don't see this thing passing https://review.rdoproject.org/jenkins/job/gate-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-master/
08:35 <quiquell|rover> what do you mean?
08:37 <ykarel|lunch> quiquell|rover, ignore for now
08:37 <ykarel|lunch> checking again
08:43 *** gkadam_ has joined #oooq
08:43 *** gkadam_ has quit IRC
08:46 *** gkadam has quit IRC
08:52 <hubbot> FAILING CHECK JOBS on master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal, tripleo-ci-centos-7-scenario003-multinode-oooq-container @ https://review.openstack.org/560445, stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
08:54 *** panda|off is now known as panda
09:09 *** skramaja_ has quit IRC
09:12 <quiquell|rover> panda, sshnaidm: To have more info in the promoter logs: https://review.rdoproject.org/r/#/c/14523/
09:12 <quiquell|rover> just print all the hashes at "Skipping"
09:13 *** zoli is now known as zoli|lunch
09:16 *** skramaja has joined #oooq
10:02 *** sshnaidm is now known as sshnaidm|off
10:05 <ykarel|lunch> quiquell|rover, as python2-tripleo-common is included in tq, we are creating the package again
10:05 *** ykarel|lunch is now known as ykarel
10:05 <ykarel> quiquell|rover, https://review.rdoproject.org/r/#/c/14522/
10:05 <quiquell|rover> ykarel: You mean the change on the releases file
10:05 <ykarel> quiquell|rover, yes
10:06 <quiquell|rover> ykarel: ok, so you are reverting the revert
10:06 <ykarel> yes
10:06 <quiquell|rover> ykarel: Ok, have to work a little on the other one with the proper release file
10:06 <quiquell|rover> will have it in a few
10:06 <ykarel> but that will not affect https://review.rdoproject.org/r/#/c/14522/, right?
10:10 <quiquell|rover> ykarel: Don't expect so, still don't really know why we have to use those release files
10:11 <ykarel> ack
10:12 *** holser_ has quit IRC
10:22 *** holser_ has joined #oooq
10:25 <sshnaidm|off> quiquell|rover, left comments on https://review.rdoproject.org/r/#/c/14507/
10:26 <quiquell|rover> sshnaidm|off: Have to show you something cool
10:27 <sshnaidm|off> what?
10:28 <quiquell|rover> sshnaidm|off: http://38.145.34.131:3000/d/2kHMNHvik/exploration?orgId=1
10:28 <quiquell|rover> sshnaidm|off: Check out the filter part, grafana ad-hoc is super powerful
10:28 <quiquell|rover> sshnaidm|off: It's clear we have more latency at the limestone cloud provider for the container job
10:31 <sshnaidm|off> quiquell|rover, niiiice
10:31 <quiquell|rover> sshnaidm|off: You see the possibility of these grafana ad-hoc variables?
10:31 <quiquell|rover> sshnaidm|off: Now we can really explore stuff
10:31 <sshnaidm|off> quiquell|rover, but we need to remove all short jobs from there to avoid noise, like openstack-linters
10:31 <quiquell|rover> sshnaidm|off: I am changing the patch with your comments, agree with all of them
10:31 <sshnaidm|off> quiquell|rover, yep, variables are great, we use them in the tripleo-ci dashboard too
10:32 <quiquell|rover> sshnaidm|off: the ad-hoc variable type is the one, I think we are not using it there
10:32 *** holser_ has quit IRC
10:32 <sshnaidm|off> quiquell|rover, cool, will look in Sun more closely, gotta run
10:32 <quiquell|rover> sshnaidm|off: np, go go go
10:32 <sshnaidm|off> quiquell|rover, thanks for creating this!
10:33 <quiquell|rover> sshnaidm|off: Let's make it better with all of us
10:43 *** dtantsur|afk is now known as dtantsur
10:45 *** skramaja has quit IRC
10:52 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal, tripleo-ci-centos-7-scenario003-multinode-oooq-container @ https://review.openstack.org/560445
10:53 *** panda is now known as panda|ko
11:01 *** kopecmartin has joined #oooq
11:04 *** d0ugal_ has joined #oooq
11:05 *** d0ugal has quit IRC
11:10 *** zoli|lunch is now known as zoli
11:24 *** udesale has quit IRC
11:29 *** sshnaidm|off has quit IRC
11:32 *** ratailor has quit IRC
11:37 <ykarel> quiquell|rover, nodepool fix working for both master and queens
11:37 <ykarel> fs018 queens: tempest running,
11:37 <ykarel> and master passed beyond config-download
11:38 <quiquell|rover> ykarel: Let's see what we get about the fs017 timeout
11:38 <quiquell|rover> ykarel: \o/ !!!
11:38 <quiquell|rover> ykarel: We have to celebrate if we have promotions
11:39 <ykarel> we need the fs027 fix for master
11:39 <ykarel> we can expect queens promotions by skipping fs020
11:39 <quiquell|rover> ykarel: Let's see if the fix lands
11:39 <ykarel> queue reset again
11:39 <quiquell|rover> ykarel: :-((((
11:39 <ykarel> it's 16+ hrs now
11:39 <quiquell|rover> ykarel: Nah, we have to wait
11:49 *** trown|outtypewww is now known as trown
11:50 *** holser_ has joined #oooq
12:01 <trown> panda|ko: ya I think what you did with the empty list peers might work ... if not that, maybe we can have "- secondary" there... not sure if the zuul inventory creator will complain about secondary being undefined ... we could also put up a patch to the multi-node-bridge role to add a "when: groups['peers']" to those includes
12:02 <panda|ko> trown: in that way, the peers tasks will try to run on a non-existing node
12:02 <trown> panda|ko: actually the empty list peers already made it past pre, so that seems like it is working
12:02 <panda|ko> trown: crossing fingers
12:15 *** d0ugal_ has quit IRC
12:17 *** d0ugal_ has joined #oooq
12:18 <quiquell|rover> panda|ko, trown: Do you know if wes is PTO?
12:24 <panda|ko> quiquell|rover: I don't think he is, it's 6:30 in Denver, maybe sometimes he doesn't want to wake up before dawn
12:35 *** holser_ has quit IRC
12:38 *** ccamacho has quit IRC
12:39 *** holser_ has joined #oooq
12:40 *** quiquell|rover is now known as quique|rover|lch
12:42 *** rlandy has joined #oooq
12:46 *** ccamacho has joined #oooq
12:53 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal @ https://review.openstack.org/560445
13:04 *** holser_ has quit IRC
13:10 <ykarel> quique|rover|lch, all (except 002, 027 in master and 002 in queens) jobs in queens/master passed
13:10 <ykarel> 002 in queens/master still running, which should fail
13:12 *** d0ugal_ has quit IRC
13:13 *** amoralej is now known as amoralej|lunch
13:13 *** d0ugal has joined #oooq
13:13 *** d0ugal has quit IRC
13:13 *** d0ugal has joined #oooq
13:14 *** udesale has joined #oooq
13:17 <panda|ko> trown: I'm seeing undercloud jobs passing
13:17 <panda|ko> trown: I think we did it.
13:18 <trown> panda|ko: yep, +2'd, looking good
13:18 <trown> panda|ko: I will analyze times
13:21 *** tcw has quit IRC
13:23 *** quiquell|tmp has joined #oooq
13:29 *** tcw has joined #oooq
13:31 *** agopi has quit IRC
13:33 <quiquell|tmp> ykarel: queens is promoting!
13:33 <ykarel> quiquell|tmp, Great
13:34 <ykarel> quiquell|tmp, now we can think about what to do with pike
13:34 <ykarel> if it's only fs020, same issue as queens
13:34 <quiquell|tmp> It was the nodepool issue + fs020
13:34 <quiquell|tmp> We can try to run the periodic on pike to confirm that
13:39 <ykarel> nodepool issue should be fixed similar to master/queens
13:39 <ykarel> if it's only fs020 it should be promoted in tomorrow's run
13:39 <ykarel> if we skip it today
13:39 <ykarel> in promotion criteria
13:40 <ykarel> then on Monday we can see all green and move with zuul v3 :)
13:40 <quiquell|tmp> It was the multinode jobs and fs020
13:40 <ykarel> hoping to get the fs027 fix merged by then
13:40 <quiquell|tmp> ykarel: green summertime
13:41 <quiquell|tmp> ykarel: would be nice if reviews that close bugs had more priority in the queue
13:42 <quiquell|tmp> Like detecting Closes-Bug or similar
13:42 <ykarel> the ones marked with Critical
13:42 <ykarel> as most reviews have Closes-Bug in them
13:43 <ykarel> but zuul tests all reviews in the queue together to avoid issues
13:44 <quiquell|tmp> Would be nice
13:44 <quiquell|tmp> Sometimes those unblock the queue
13:44 <quiquell|tmp> Makes sense to merge them first
13:47 *** quiquell__ has joined #oooq
13:50 *** quiquell|tmp has quit IRC
13:56 *** bogdando has quit IRC
13:57 <rlandy> panda: just fyi ... for reproducer https://review.openstack.org/#/c/579154 ... https://review.openstack.org/#/c/579161 - still testing
13:57 <panda|ko> rlandy: thanks
14:02 *** agopi has joined #oooq
14:02 <panda|ko> rlandy: are you able to test those with an undercloud-only job in the multinode reproducer? I'm anticipating some hiccups during bridge creation if we use a static group definition for the host
14:02 <panda|ko> rlandy: adding a comment in the review
14:02 <rlandy> I am just checking that we actually attempt the bridge creation for now
14:03 <rlandy> and yes, I can switch fs
14:03 *** agopi_ has joined #oooq
14:03 <rlandy> but I'd like to see how far I can get
14:03 *** agopi_ has quit IRC
14:04 *** quiquell__ has quit IRC
14:05 *** agopi_ has joined #oooq
14:06 *** quique|rover|lch is now known as quiquell|rover
14:06 *** agopi has quit IRC
14:09 *** ccamacho has quit IRC
14:15 *** d0ugal has quit IRC
14:17 <rlandy> panda: is the plan still to use the pre playbook from https://github.com/openstack-infra/zuul-jobs?
14:17 <rlandy> or will we be rolling our own
14:17 *** d0ugal has joined #oooq
14:17 <rlandy> panda|ko: ^^?
14:17 <rlandy> looking at the reviews out there, looks like transition legacy stuff
14:19 <trown> rlandy: we will use the multinode pre playbook
14:20 * rlandy looks for that
14:21 <trown> rlandy: https://github.com/openstack-infra/zuul-jobs/blob/master/playbooks/multinode/pre.yaml
14:22 *** amoralej|lunch is now known as amoralej
14:22 <rlandy> trown: ok - so the same one
14:23 <trown> ya, exactly
14:27 *** ccamacho has joined #oooq
14:34 <panda|ko> rlandy: we can use the roles from zuul-jobs and openstack-zuul-jobs for free upstream, we just call them when needed. playbooks however need to be copied into our own repo
14:35 <panda|ko> trown: https://github.com/openstack-infra/zuul-jobs/blob/master/playbooks/multinode/pre.yaml is used by our base parent, so hierarchy comes to help, and we don't need to copy it
14:36 <panda|ko> I mean ^
14:36 <panda|ko> rlandy: ^
14:37 <panda|ko> quiquell|rover: I'm not sure if we'll migrate the jobs today, but you should know what is happening, ping me if you have 5 minutes to sync
14:38 <rlandy> panda|ko: that's fine - I was just not sure if we were still going to call anything directly from zuul-jobs
14:44 <panda|ko> rlandy: trown: we are running multi-node-firewall twice
14:45 <panda|ko> rlandy: trown: I think we can remove multinode-networking from the pre-run list in a next patch
14:45 <panda|ko> trown, rlandy: as the only thing it does is call multi-node-firewall, and that seems to be already done by our new parent
14:51 *** kopecmartin has quit IRC
14:53 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224, master: tripleo-quickstart-extras-gate-newton-delorean-full-minimal @ https://review.openstack.org/560445
14:55 *** d0ugal has quit IRC
14:56 <panda|ko> 3-nodes passed and is collecting logs
14:58 <rlandy> panda|ko: I was calling the playbook directly
14:59 <panda|ko> rlandy: which playbook and where?
15:00 <rlandy> https://github.com/openstack-infra/zuul-jobs/blob/master/playbooks/multinode/pre.yaml - there is no base job in the reproducer
15:00 <rlandy> but I don't want to confuse your workflow
15:02 *** d0ugal has joined #oooq
15:06 <panda|ko> rlandy: ah yeah, I understand now.
15:07 <panda|ko> both patches passed. We're ready to migrate when we get the reproducer feedback
15:22 *** openstack has quit IRC
15:23 *** openstack has joined #oooq
15:23 *** d0ugal has quit IRC
15:30 <rlandy> weshay: 1-on-1?
15:32 *** d0ugal_ has quit IRC
15:32 *** d0ugal__ has joined #oooq
15:33 <panda|ko> rlandy: we didn't see wes today ...
15:34 <rlandy> panda|ko: yeah - but weird, because he sent me an updated 1-on-1 invite for now
15:40 *** ykarel is now known as ykarel|afk
15:42 *** tesseract has quit IRC
15:42 *** d0ugal__ has quit IRC
15:44 *** ccamacho has quit IRC
15:46 *** zoli is now known as zoli|gone
15:46 *** zoli|gone is now known as zoli
15:55 *** sshnaidm|off has joined #oooq
15:56 <weshay> rlandy, aye..
15:56 <weshay> rlandy, sorry I'm late
15:57 <rlandy> weshay: np - we can skip if you don't have time
15:57 <weshay> I'm in
16:09 *** weshay has quit IRC
16:09 *** honza has quit IRC
16:09 *** gchamoul has quit IRC
16:09 *** zoli has quit IRC
16:09 *** dtantsur has quit IRC
16:14 *** weshay has joined #oooq
16:14 *** honza has joined #oooq
16:14 *** gchamoul has joined #oooq
16:14 *** zoli has joined #oooq
16:15 *** udesale has quit IRC
16:18 *** jtomasek has quit IRC
16:23 *** dtantsur has joined #oooq
16:25 *** dtantsur is now known as dtantsur|afk
16:31 *** trown is now known as trown|lunch
16:31 *** ykarel|afk is now known as ykarel|away
16:37 *** ykarel|away has quit IRC
16:43 *** rlandy is now known as rlandy|brb
16:50 <weshay> rlandy|brb, fyi https://engineering.redhat.com/rt/Ticket/Display.html?id=473869
16:51 <weshay> not sure which location is the right place for the ticket
16:51 <weshay> asking in irc as well
16:53 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
17:11 *** openstack has quit IRC
17:12 *** openstack has joined #oooq
17:16 *** rlandy|brb is now known as rlandy
17:17 <rlandy> weshay: thanks
17:19 *** jfrancoa has quit IRC
17:41 *** trown|lunch is now known as trown
17:52 <trown> hmm, singlenode jobs aren't tracked with graphite?
18:30 <rlandy> the reproducer got as far as the undercloud install
18:42 *** ykarel|away has joined #oooq
18:46 <panda|ko> rlandy: then stopped?
18:46 <rlandy> panda|ko: I see the problem ... br-ex is not available.
18:46 <rlandy> PLAY [Configure a multi node environment] ****************************************************************************************************************************************************
18:46 <rlandy> skipping: no hosts matched
18:46 <rlandy> [WARNING]: Could not match supplied host pattern, ignoring: switch
18:46 <rlandy> [WARNING]: Could not match supplied host pattern, ignoring: peers
18:47 <rlandy> ^^ my hosts must be wrongly configured
18:47 <rlandy> fixing
18:50 *** jaganathan has quit IRC
18:50 *** jaganathan has joined #oooq
18:50 <rlandy> "Create inventory suitable for zuul-jobs/multinode" - skipped that task
18:53 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
19:12 <rlandy> | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'49', u'cpus': u'4', u'capabilities': u'cpu_aes:true,cpu_hugepages:true,boot_option:local,cpu_vt:true,cpu_hugepages_1g:true,boot_mode:bios'}
19:12 <rlandy> interesting
19:19 *** yolanda_ has joined #oooq
19:19 *** d0ugal__ has joined #oooq
19:22 *** yolanda__ has joined #oooq
19:23 *** yolanda has quit IRC
19:24 *** yolanda_ has quit IRC
19:28 *** d0ugal__ has quit IRC
19:30 <rlandy> so it is accepting the attached disk
19:31 <rlandy> we can increase that
19:47 <rlandy> weshay: so, I have misjudged this rhos-13 error ... we are deploying with a volume and ironic is picking it up ...
19:47 <rlandy> | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'49', u'cpus': u'4', u'capabilities': u'cpu_aes:true,cpu_hugepages:true,boot_option:local,cpu_vt:true,cpu_hugepages_1g:true,boot_mode:bios'}
19:47 <rlandy> the current job is at overcloud deploy ...
19:47 <rlandy> https://rhos-dev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/tq-gate-rhos-13-ci-rhos-ovb-featureset001/83/console
19:47 <rlandy> but may or may not be stuck ...
19:48 <rlandy> 2018-06-29 15:27:57 | 2018-06-29 19:27:53Z [overcloud.Controller.0.UserData]: CREATE_IN_PROGRESS  state changed
19:48 <rlandy> ^^ was a while back
19:48 <rlandy> I removed the env delete - so we can debug
20:11 <rlandy> 2018-06-29 16:06:58 | 2018-06-29 20:05:55Z [overcloud.Compute.0.NovaCompute]: CREATE_FAILED  ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. , Code: 500"
20:19 *** ykarel|away has quit IRC
20:38 <weshay> rlandy, hrm.. so the "no hosts found" is due to another reason, not the disk space
20:38 <weshay> is that the sum?
20:53 <hubbot> FAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/567224
21:01 <rlandy> weshay: it may be - will probably have to ask from here - going through errors in logs
21:02 <rlandy> https://thirdparty.logs.rdoproject.org/jenkins-tq-gate-rhos-13-ci-rhos-ovb-featureset001-83/undercloud/var/log/ironic/ironic-conductor.log.txt.gz#_2018-06-29_16_02_39_883
21:02 <rlandy> 2018-06-29 14:29:46.895 25429 ERROR nova.compute.manager [req-0a7e6434-2fe7-4a94-994b-180e9eb26312 - - - - -] No compute node record for host undercloud.localdomain: ComputeHostNotFound_Remote: Compute host undercloud.localdomain could not be found.
21:03 <rlandy> https://thirdparty.logs.rdoproject.org/jenkins-tq-gate-rhos-13-ci-rhos-ovb-featureset001-83/undercloud/var/log/nova/nova-compute.log.txt.gz#_2018-06-29_14_29_46_895
21:03 <rlandy> weshay: ^^ seen that before?
21:03 * weshay looks
21:04 <rlandy> need to check for other successful rhos-13 deploys
21:05 *** trown is now known as trown|outtypewww
21:06 <weshay> rlandy, I've seen that error, just not always as something fatal
21:06 * weshay looks more
21:08 *** agopi_ is now known as agopi
21:10 <weshay> rlandy,  instance_uuid=instance.uuid, reason=e.format_message())\n', u'RescheduledException: Build of instance d75c3b43-da3f-4813-9972-f43bb0a62eeb was re-scheduled: Insufficient compute resources: Free memory 0.00 MB < requested 4096 MB; Free disk -31.00 GB < requested 40 GB.\n']
21:10 <weshay> https://thirdparty.logs.rdoproject.org/jenkins-tq-gate-rhos-13-ci-rhos-ovb-featureset001-83/undercloud/var/log/extra/errors.txt.gz#_2018-06-29_16_04_06_928
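The scheduler failure in that error log reduces to a simple resource check: a host is rejected when its free RAM or free disk falls below what the flavor requests, which is what surfaces as "No valid host was found". A rough sketch (the function name is an illustrative assumption, not nova's actual API), plugging in the numbers from the log above:

```python
# Simplified version of the resource check behind the "No valid host" error:
# a host only fits if both free memory and free disk cover the flavor request.
def host_fits(free_ram_mb, free_disk_gb, req_ram_mb, req_disk_gb):
    return free_ram_mb >= req_ram_mb and free_disk_gb >= req_disk_gb

# Values from the log: free 0.00 MB vs requested 4096 MB,
# free -31.00 GB vs requested 40 GB -- both constraints fail.
print(host_fits(0, -31.0, 4096, 40))  # False
```

The negative free-disk figure is why bumping the boot volume size alone was not enough; memory was exhausted too.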
21:11 <rlandy> so it was the memory
21:11 <rlandy> and disk space
21:11 <weshay> too bad you can't bond instances
21:11 <rlandy> we can do better on the disk space
21:11 <rlandy> can get a bigger volume
21:12 <rlandy> memory is interesting
21:12 <rlandy> look at the flavors
21:12 <rlandy> we deploy a large overcloud
21:12 <rlandy> and an extra large undercloud
21:13 *** dougbtv_ has quit IRC
21:14 <rlandy> | properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'49', u'cpus': u'4', u'capabilities': u'cpu_aes:true,cpu_hugepages:true,boot_option:local,cpu_vt:true,cpu_hugepages_1g:true,boot_mode:bios'}
21:14 <rlandy> u'local_gb': u'49' - that I accept is lower
21:14 <rlandy> but memory is the same as rdocloud
21:16 <rlandy> (undercloud) [stack@undercloud ~]$ sudo df -h
21:16 <rlandy> Filesystem      Size  Used Avail Use% Mounted on
21:22 <rlandy> weshay: I assume from the message it's complaining about the resources on the overcloud - not the undercloud
21:23 <weshay> rlandy, guess what
21:23 <rlandy> surprise me
21:23 <weshay> we're the only ones using OVB
21:23 <weshay> internal is not
21:23 <weshay> "looking into it"
21:24 <rlandy> wow - so why doesn't someone else complain?
21:24 <weshay> that is why
21:24 <weshay> no one is trying osp w/ ovb except for us
21:24 <agopi> dontcha worry rlandy, weshay, soon we'll be there with you to complain
21:25 <weshay> agopi, hey.. I just tested setup.py
21:25 <rlandy> agopi is going to run away from our OVB idea
21:25 <weshay> agopi, not seeing any problem w/ changes being sucked into the local working dir
21:25 <weshay> agopi, made changes to tqe, tripleo-env.. playbooks
21:25 <weshay> it's all good
21:25 <agopi> weshay, the update fixed all problems, i made changes to the gdoc immediately after, thought you checked it
21:25 <agopi> weshay++
21:25 <hubbot> agopi: weshay's karma is now 5
21:25 <weshay> OH
21:26 <weshay> oh this fixed it..
21:26 <weshay> https://bugs.launchpad.net/tripleo/+bug/1778748
21:26 <openstack> Launchpad bug 1778748 in tripleo "CI: parsing error while installing cliff" [High,Won't fix]
21:26 <weshay> ok
21:26 <weshay> k k
21:26 <weshay> ya.. we need to bail out in the right place w/ pip errors
21:26 <agopi> yes sir!
21:26 <weshay> cool
21:27 <agopi> im looking into the reproducer script
21:28 <weshay> k
21:30 <rlandy> https://github.com/cybertron/openstack-virtual-baremetal/blob/master/templates/undercloud-volume.yaml#L16
21:30 <rlandy> ^^ can change that setting
21:33 <rlandy> weshay: can I delete that deployment and try again with a bigger volume?
21:33 <weshay> rlandy, NUKE IT
21:33 <rlandy> at least I was not totally off about the resources problem :(
21:39 *** agopi is now known as agopi|off
21:44 *** agopi|off has quit IRC
22:06 *** dtantsur|afk has quit IRC
22:07 *** dtantsur has joined #oooq
22:17 *** matbu has quit IRC
22:18 *** agopi|off has joined #oooq
22:32 <weshay> bah, matt strikes again
22:32 <weshay> (pending—There are no nodes with the label ‘rdo-manager-64-74proto’)
22:33 <rlandy> baremetal_boot_from_volume_size: 50
22:33 <rlandy> undercloud_boot_from_volume_size: 50
22:34 <weshay> cool
22:34 <rlandy> ha - we can set this - if it will read it
22:34 <weshay> rlandy, ping me w/ a link when it kicks
22:36 <weshay> https://code.engineering.redhat.com/gerrit/#/c/142864/
22:37 <weshay> rlandy, any reasons why this shouldn't be removed?
22:37 <weshay> that you know of?
22:38 <weshay> we're not building images for pike, queens, master
22:38 <rlandy> weshay: none - but I don't know a lot about this stuff
22:39 <rlandy> weshay: watching https://rhos-dev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/tq-gate-rhos-13-ci-rhos-ovb-featureset001/84/console
22:39 <rlandy> has volumes set to 80
22:39 <rlandy> for undercloud and baremetal
22:39 <weshay> niiiice
22:40 <rlandy> let's see if it takes
22:40 <weshay> where does the volume get added?
22:40 <rlandy> -e baremetal_boot_from_volume_size=80 -e undercloud_boot_from_volume_size=80
22:40 <rlandy> I added it in the job for now
22:40 <weshay> ya.. but on the file system
22:41 <rlandy> https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/ovb-manage-stack/defaults/main.yml#L31
22:41 <rlandy> https://github.com/cybertron/openstack-virtual-baremetal/blob/master/templates/virtual-baremetal-servers-volume.yaml#L11
22:41 <rlandy> that sets the env in
22:41 <rlandy> https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/ovb-manage-stack/templates/env.yaml.j2#L42
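The override chain rlandy describes follows standard Ansible precedence: the ovb-manage-stack role defaults supply the boot-from-volume sizes, and `-e` extra vars passed on the command line win over them before the env.yaml template is rendered. A minimal sketch of that precedence (plain dicts standing in for Ansible's variable resolution):

```python
# Role defaults (ovb-manage-stack defaults/main.yml) supply the base values;
# -e extra vars override them, and the winning values feed the env.yaml.j2
# template. Dict merge order mimics Ansible precedence: later wins.
defaults = {
    "baremetal_boot_from_volume_size": 50,
    "undercloud_boot_from_volume_size": 50,
}
extra_vars = {  # what the job passed: -e baremetal_boot_from_volume_size=80 ...
    "baremetal_boot_from_volume_size": 80,
    "undercloud_boot_from_volume_size": 80,
}
effective = {**defaults, **extra_vars}
print(effective["baremetal_boot_from_volume_size"])  # 80
```

That is why simply adding the two `-e` flags to the job was enough to get 80 GB volumes without touching the role.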
rlandy6         "| stack_status          | CREATE_FAILED22:42
rlandywe probably overran our quota22:42
weshay:(22:49
weshayrlandy, if you need to shutdown the other internal ovb jobs.. you have my blessing22:49
rlandyweshay: nah - I think we are ok now22:50
rlandywe're create complete22:51
rlandyhttps://rhos-dev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/tq-gate-rhos-13-ci-rhos-ovb-featureset001/85/console22:51
rlandylet's see what this one does22:52
rlandythe memory shortage is a loss to me though. The disk we have changed22:52
hubbotFAILING CHECK JOBS on stable/queens: tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades @ https://review.openstack.org/56722422:53
*** rlandy has quit IRC23:41

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!