Thursday, 2018-08-30

01:37 *** rlandy|bbl is now known as rlandy
01:42 *** rlandy has quit IRC
02:16 *** apetrich has quit IRC
03:24 *** ykarel has joined #oooq
03:31 *** ykarel_ has joined #oooq
03:33 *** ykarel has quit IRC
03:49 *** skramaja has joined #oooq
03:52 *** gkadam has joined #oooq
03:58 *** ykarel__ has joined #oooq
03:58 *** ykarel_ has quit IRC
04:22 *** ykarel__ has quit IRC
04:50 *** kopecmartin has joined #oooq
04:58 *** ykarel__ has joined #oooq
05:00 *** ykarel__ is now known as ykarel
05:01 *** jbadiapa has quit IRC
05:38 *** ccamacho has joined #oooq
05:53 *** udesale has joined #oooq
06:07 *** udesale has quit IRC
06:08 *** sanjayu_ has joined #oooq
06:08 *** udesale has joined #oooq
06:09 *** sanjayu__ has joined #oooq
06:12 *** sanjayu_ has quit IRC
06:21 *** jfrancoa has joined #oooq
07:09 *** apetrich has joined #oooq
07:14 *** jbadiapa has joined #oooq
07:35 *** tesseract-1 has joined #oooq
07:36 *** tesseract-1 has quit IRC
07:38 *** tosky has joined #oooq
08:20 <ykarel> ssbarnea, there seems to be an issue with the promoter (rocky current-tripleo not promoted to current-tripleo-rdo), there is a traceback: http://38.145.34.55/rocky.log
08:30 *** ykarel is now known as ykarel|lunch
08:36 <ssbarnea> i am not sure how to deal with this but I will raise a bug for it
08:41 <ssbarnea> ykarel|lunch: https://bugs.launchpad.net/tripleo/+bug/1789852
08:41 <openstack> Launchpad bug 1789852 in tripleo "promoter failure: KeyError: u'current-tripleo-rdo-internal'" [Undecided,New]
08:42 <ssbarnea> it seems that I do not have permissions to edit more bug fields, maybe someone can add me to the tripleo project in LP.
08:49 *** dtantsur|afk is now known as dtantsur
08:51 <panda> mmmh
08:52 <panda> interesting
08:52 *** panda is now known as panda|rover
08:53 <panda|rover> ssbarnea: are you ruck right now?
08:54 <ssbarnea> panda|rover: i think so, nobody told me to stop.
08:54 <panda|rover> lol
08:54 <panda|rover> ssbarnea: how long have you been ruck?
08:55 <ssbarnea> this is my 2nd week.
08:55 <panda|rover> ssbarnea: since the start then?
08:58 <ssbarnea> panda|rover: yep, since Aug 20, and I was expecting to go till Sep 7, right? So far I have relied mostly on Wes as I do not know lots of things.
08:59 <panda|rover> ssbarnea: I'm your backup, sorry for yesterday, difficult time at home, wanna chat?
09:02 *** holser_ has joined #oooq
09:02 *** holser_ has quit IRC
09:03 *** holser_ has joined #oooq
09:07 <ssbarnea> panda|rover: https://bluejeans.com/2655417928
09:18 *** ykarel|lunch is now known as ykarel
09:26 *** udesale has quit IRC
09:37 <panda|rover> ssbarnea: http://rhos-release.virt.bos.redhat.com:3030/rhosp
09:39 *** ccamacho has quit IRC
09:39 *** ccamacho has joined #oooq
09:40 <ykarel> ssbarnea, Thanks, good to have a bug for the other issue as well, the virtualbmc one, as that's also blocking promotion
10:16 <jaosorior> Has anybody seen this error? http://paste.openstack.org/show/729121/
10:22 <rascasoft> panda|rover, sshnaidm hey folks, can you help me make these https://review.openstack.org/#/c/597432/ https://review.openstack.org/#/c/596789/ move on? It should be a matter of a workflow +1
10:25 *** jaosorior has quit IRC
10:27 *** jaosorior has joined #oooq
10:33 <panda|rover> ssbarnea: https://ci.centos.org/job/rdo_trunk-promote-rocky-current-tripleo/
10:33 *** udesale has joined #oooq
10:38 <ykarel> jaosorior, yes, we are seeing it in rdo phase 1 after virtualbmc 1.4.0, effort for fixing it here: https://review.openstack.org/#/c/598095/1
10:39 <jaosorior> ykarel: thanks
10:40 <ykarel> jaosorior, do u have a setup that has the issue? etingof might want to see if his patch is working
10:41 <ykarel> as in ci there is no libvirt job
10:41 <jaosorior> I'm redeploying right now
10:42 <ykarel> kk
10:42 <jaosorior> but yeah, I can check out the patch
10:42 <jaosorior> after this redeploy
10:42 <jaosorior> (and after lunch :D )
10:42 <ykarel> okk there is a job for ocata, checking logs
10:43 <ykarel> but we wanted master ones
10:43 <ykarel> as virtualbmc is promoted only there
10:44 *** udesale has quit IRC
10:44 *** udesale has joined #oooq
10:57 <ssbarnea> panda|rover: give this a kick, https://review.openstack.org/#/c/598089/
11:00 <panda|rover> ssbarnea: approved
11:00 <panda|rover> rascasoft: I'm putting up a fix for the promotion and then will look at your patches
11:10 <panda|rover> ssbarnea: https://review.rdoproject.org/r/16042
11:14 *** holser_ has quit IRC
11:15 *** holser_ has joined #oooq
11:19 <panda|rover> rascasoft: are you willing to give me some context?
11:19 <panda|rover> rascasoft: in upgrades, images need to be downloaded twice
11:19 <panda|rover> rascasoft: is there another place where we get the second image?
11:22 <rascasoft> panda|rover, oh I think this answers what sshnaidm and I were discussing about this patch
11:23 <panda|rover> yeah, because it needs to download the image for the other release
11:24 <rascasoft> panda|rover, and *just* the overcloud image? Not the ironic-python-agent one?
11:24 <panda|rover> rascasoft: we need to ask upgrades, the patch was added by matby
11:24 <panda|rover> matbu
11:24 <rascasoft> panda|rover, understood
11:24 <panda|rover> I think there may be a better way to do it, but they need to be involved
11:24 <rascasoft> panda|rover, I'll do it
11:25 <panda|rover> rascasoft: thanks
11:25 <rascasoft> panda|rover, by the way, the one I care most about is https://review.openstack.org/#/c/597432/
11:25 <ssbarnea> just discovered how ancient the gerrit version on rdo is... 2.11, it fails to even do a case-insensitive user lookup...
11:25 <rascasoft> panda|rover, which I just saw was workflowed. So I'm good, thanks!
11:26 <panda|rover> rascasoft: yep, and I think ocata already has the new cli anyway
11:28 *** sshnaidm is now known as sshnaidm|afk
11:50 *** dmellado has quit IRC
12:04 <jaosorior> ykarel: tried out the patch and had the following error: http://paste.openstack.org/show/729129/
12:06 *** trown|outtypewww is now known as trown
12:08 *** udesale has quit IRC
12:11 *** udesale has joined #oooq
12:37 *** rlandy has joined #oooq
12:38 *** gkadam has quit IRC
12:39 <rlandy> panda: we are not looking good :(
12:40 <rlandy> couldn't get rdocloud results yesterday
12:40 <rlandy> panda|rover: ^^
12:40 *** udesale has quit IRC
12:44 <panda|rover> rlandy: infra problems?
12:53 *** udesale has joined #oooq
12:56 <rlandy> yep
12:56 <rlandy> we have failures
12:56 <rlandy> but hard to tell what the cause is
12:56 <rlandy> ssbarnea here?
12:57 <ssbarnea> rlandy: yep, here. just raised https://bugs.launchpad.net/tripleo/+bug/1789898
12:57 <openstack> Launchpad bug 1789898 in tripleo "gate failing due to missing python-tripleoclient" [Undecided,Triaged] - Assigned to Gabriele Cerami (gcerami)
12:57 <rlandy> let's talk at the meeting
13:03 <panda|rover> rlandy: meeting?
13:03 *** myoung|pto is now known as moyung
13:03 *** moyung is now known as myoung
13:03 <myoung> o/
13:03 *** jbadiapa has quit IRC
13:22 *** jbadiapa has joined #oooq
13:47 <panda|rover> rlandy: rfolco I will put up the changes for the vxlan to be backwards compatible, is that ok?
13:48 <rlandy> panda|rover: fine by me
13:48 <rlandy> I'm starting to log the failures in the card so we can track them
13:48 <ssbarnea> btw, do we have a mongodb deployment up and running that I can use for a very small db? if not, where can I deploy an instance? (don't want to use my personal rdo-cloud account for multiple reasons, including stability)
13:49 <rlandy> ssbarnea: do you have a minidell?
13:49 *** udesale has quit IRC
13:49 <rlandy> otherwise we can assign you some hardware
13:50 <rlandy> also we have internal cloud space I can lend you for a bit
13:50 <rlandy> we have a test tenant
13:50 *** udesale has joined #oooq
13:51 <panda|rover> oh we have to refactor everything
13:51 <panda|rover> oh joy
13:51 <panda|rover> we have logic in the variables currently
13:52 <panda|rover> there's no way i can put a conditional os.stat("/etc/nodepool") in those variables
13:52 <ssbarnea> rlandy: nope. it's not that I don't have a place to install mongodb, i have plenty, but I want one that is production rated and does not depend on one individual.
13:53 <ssbarnea> downstream we had a team cloud account used for services, perfect for this use case, different from the tenant accounts used for testing.
13:54 <ssbarnea> in the end, even rdo cloud would be ok, but it must be a team tenant. i wipe my own account too often to trust putting a database on it :D
13:54 <rlandy> ssbarnea: we have a team account
13:54 <rlandy> but I'm using it
13:54 <rlandy> depending on how long you need it for
13:54 <rlandy> we can accommodate you
13:56 <ssbarnea> rlandy: likely long term, but 2GB RAM is more than enough.
13:59 <rlandy> ssbarnea: long term - no good there - too much chance it will get wiped
14:00 <rlandy> ssbarnea: when do you need this by? we are early adopters of upshift
14:00 *** ykarel is now known as ykarel|away
14:00 <rlandy> best I can offer atm is our stage tenant
14:00 <rlandy> which is also a team tenant
14:00 <rlandy> I am the only one who uses it
14:00 <rlandy> so pretty safe
14:01 <rlandy> if you can hang on, I'll pull you into our upshift adoption work
14:04 <ssbarnea> rlandy: well, it may work. mainly I want to use it to keep a BFA database for Jenkins, which stores one small json object for each build. the DB is for stats and even if it goes down it does not impact jenkins (it may miss some builds in the worst case), still I don't want to hear that the VM vanished by accident. In short, uptime is less important than the historical data recorded there. Yep, i can wait.
14:06 *** ykarel|away has quit IRC
14:07 <ssbarnea> rlandy: in fact I could take a different approach, as the free tier from https://www.mongodb.com/cloud/atlas/pricing would cover it for at least 6mo if not more than one year.
14:07 <rlandy> ssbarnea: k - ping me when this sf fun is over and we will work something out for you
14:08 <ssbarnea> rlandy: sure.
14:09 <ssbarnea> btw, as a team do we use any safe medium for sharing credentials? like an ansible-vaulted repo, 1password, lastpass, ...
14:09 <panda|rover> ssbarnea: IRC
14:09 <rlandy> ssbarnea: nah - we just trust each other
14:10 <rlandy> very safe
14:10 <ssbarnea> panda|rover: LOL, "irc - made with security in mind" :D
14:10 *** sshnaidm|afk has quit IRC
14:12 <panda|rover> ssbarnea: well, we usually discover security problems only every six months, right before the PTG, so I guess it's OK.
14:14 <panda|rover> myoung: do you know Terry Pratchett and the Unseen University?
14:15 <rlandy> panda|rover: when you are done rebasing the world, pls vote on https://review.openstack.org/#/c/581488
14:16 <panda|rover> rlandy: refactoring. I have to change everything.
14:16 <rlandy> happiness is ...
14:19 *** sshnaidm|afk has joined #oooq
14:20 *** sshnaidm|afk is now known as sshnaidm
14:23 <myoung> panda|rover: I know of the author, but have not read him yet
14:35 *** ykarel has joined #oooq
14:45 <rlandy> rascasoft: hi - do you ever see problems installing the undercloud on rhos-13?
14:46 <rascasoft> rlandy, I'd say no. Last time I tried everything went fine. I just have problems in rdocloud because of the well-known getcert issue
14:47 <rascasoft> rlandy, last time I tried -> on baremetal
14:49 <rlandy> rascasoft: you're deploying rhos-13 on rdocloud???
14:49 <rlandy> pls so no
14:49 <rlandy> say
14:49 <rascasoft> rlandy, LOL no no
14:49 <rlandy> ok - restart breathing
14:50 *** sanjayu__ has quit IRC
14:50 <rascasoft> rlandy, sorry if I caused confusion, I just cited the only problem I have today with the undercloud
14:50 <rlandy> rascasoft: running rhos-13 on ci-rhos, I upped the nova db sync timeout
14:50 <rlandy> many times it still fails
14:54 *** ykarel has quit IRC
14:55 <rlandy> rascasoft: what release file are you using for rhos-13? the one from tripleo-envs?
14:58 <rascasoft> rlandy, yes
14:58 <rlandy> rdocloud back in business?
14:58 <rascasoft> rlandy, do you have a link to where you are failing?
14:59 <rascasoft> rlandy, rdocloud is always in business, but I had to deal with so many failures these days, it's really hard
14:59 <rlandy> rascasoft: I got one pass in many tries ... log links are down now though
14:59 <rlandy> myoung: ^^ log server for internal?
14:59 <rlandy> goes to rdocloud
15:01 <myoung> rlandy: we can make a 1 line change and use internal logs, at least for rdo2 and osp0 jobs
15:01 * myoung reads scrollback
15:02 * myoung is multitasking, tempest squad scrum starts now
15:02 <arxcruz> myoung: meeting now?
15:02 *** sdoran has joined #oooq
15:03 *** ykarel has joined #oooq
15:04 *** udesale has quit IRC
15:05 <rlandy> myoung: not urgent
15:05 *** zul has quit IRC
15:09 *** zul has joined #oooq
15:16 *** kopecmartin has quit IRC
15:17 <rascasoft> rlandy, in the meantime I discovered what was making my overcloud deploys on master fail: https://bugs.launchpad.net/tripleo/+bug/1787910
15:17 <openstack> Launchpad bug 1787910 in tripleo "OVB overcloud deploy fails on nova placement errors" [Critical,Triaged] - Assigned to Marios Andreou (marios-b)
15:25 *** ykarel has quit IRC
15:29 *** ykarel has joined #oooq
15:35 <rlandy> http://cistatus.tripleo.org/ - down with rdocloud?
15:36 <rlandy> ugh - no job comparison logs
15:38 *** jtomasek has quit IRC
15:38 *** ykarel_ has joined #oooq
15:39 *** ykarel has quit IRC
15:44 <myoung> rlandy, rascasoft, want me to patch rdo2/osp0 to use internal logs until RDO cloud log server is back?
15:44 <myoung> it's a quickie
15:44 <rlandy> myoung: no - let's wait for rdocloud
15:44 <myoung> ok
15:44 <rlandy> I really need sova more than anything else right now :(
15:44 <rlandy> trying to compare failures
15:45 <myoung> sshnaidm: do we still have an instance running internally?
15:45 <rascasoft> myoung, what rlandy says :) but on the other side I just created this https://code.engineering.redhat.com/gerrit/#/c/148775/ that fixes a couple of things for osp10, if you have time
15:46 <myoung> re: logserver, if it does come to it, this is what we did last time this happened (dec 2016) http://git.app.eng.bos.redhat.com/git/tripleo-environments.git/commit/jenkins/jobs/tripleo-quickstart?id=c5a942fab8940b4b1877ca889a68961f3300c914
15:46 <sshnaidm> myoung, no, but I will set up one..
15:46 <myoung> rascasoft: sure, can have a look
15:52 <myoung> rascasoft: lgtm, +2, will merge when the test job you kicked off is completed
15:55 *** ykarel_ is now known as ykarel
15:55 *** trown is now known as trown|lunch
16:00 *** hamzy has quit IRC
16:00 *** ykarel has quit IRC
16:07 <rlandy> panda|rover: rfolco: I logged all the errors (with comments) on https://trello.com/c/TVsZ3Ut6/877-clean-up-rdo-sf-legacy-code-after-zuulv3-migration
16:08 <rlandy> we can compare results when we trigger again
16:14 *** ykarel has joined #oooq
16:17 *** ykarel has quit IRC
16:19 <rlandy> panda|rover: need to run out to a doctor's appt - back in about 1hr+
16:19 *** ccamacho has quit IRC
16:19 *** rlandy is now known as rlandy|afk
16:25 *** ykarel has joined #oooq
16:29 *** ykarel_ has joined #oooq
16:29 *** jfrancoa has quit IRC
16:31 *** ykarel has quit IRC
16:32 <ssbarnea> does anyone know how I can mention a storyboard ticket from inside the commit message? https://wiki.openstack.org/wiki/GitCommitMessages does not mention anything regarding storyboard.
16:33 <tosky> Story: nnnnn
16:33 <tosky> Task: mmmmm
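For context, the lines tosky quotes are commit message footers: they go at the very end of the Gerrit commit message, each on its own line, so the change gets linked to the StoryBoard story and task. A hypothetical example (subject and numbers made up for illustration):

    Fix promoter lookup for missing release key

    Guard the promotion logic against releases that have no
    current-tripleo-rdo-internal entry in the config.

    Story: 2003456
    Task: 24680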
16:35 *** jbadiapa has quit IRC
16:40 *** agopi has quit IRC
16:42 *** agopi has joined #oooq
16:49 <myoung> CI squad: http://lists.openstack.org/pipermail/openstack-dev/2018-August/134030.html
16:50 <panda|rover> rlandy|afk: rfolco updated dependencies, I added a compatibility layer that transforms the /etc/nodepool information into what we consume now, I hope it works
16:53 *** ykarel_ has quit IRC
16:55 *** ykarel has joined #oooq
16:55 *** ykarel has quit IRC
16:58 *** ccamacho has joined #oooq
17:01 *** ykarel has joined #oooq
17:03 *** stack has joined #oooq
17:04 *** dtantsur is now known as dtantsur|afk
17:06 *** ykarel_ has joined #oooq
17:08 *** ykarel has quit IRC
17:11 *** agopi has quit IRC
17:13 *** trown|lunch is now known as trown
17:15 *** gkadam has joined #oooq
17:22 <myoung> ssbarnea, panda|rover, marios|rover, looking at rdo2, it looks like most of the slaves are offline... is this known / tracked yet?
17:22 <myoung> ^^ https://rhos-dev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/computer/
17:23 *** skramaja has quit IRC
17:24 *** ccamacho has quit IRC
17:28 *** hamzy has joined #oooq
17:32 *** ykarel_ is now known as ykarel|away
17:35 *** holser_ has quit IRC
17:36 *** ykarel|away has quit IRC
17:44 *** dmellado has joined #oooq
17:48 *** agopi has joined #oooq
18:10 <rfolco> panda|rover, I don't understand why we need to read the legacy /etc/nodepool (backward compat) if we are removing the pre.yaml that populates it?
18:11 <rfolco> panda|rover, any job still populating /etc/nodepool ?
18:17 *** gkadam has quit IRC
19:05 *** rlandy|afk is now known as rlandy
19:06 <rlandy> panda|rover: awesome, thanks - will watch the jobs now
19:06 <rlandy> rfolco: we need the compatibility layer as we have to merge the depends-on changes first
19:06 <rlandy> which will break before we merge the major review
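A minimal sketch of the kind of compatibility shim being described here, assuming the legacy nodepool files are still the source of truth on old-style nodes; the exported variable names are hypothetical, not the ones used in the actual patch:

    #!/bin/bash
    # Hypothetical compatibility shim: on legacy (pre-migration) nodes the
    # nodepool files still exist, so translate them into the variables the
    # new playbooks consume; on migrated jobs those variables are expected
    # to come from Zuul directly, so there is nothing to do.
    if [ -f /etc/nodepool/primary_node_private ]; then
        PRIMARY_NODE_IP=$(cat /etc/nodepool/primary_node_private)
        SUBNODE_IPS=$(cat /etc/nodepool/sub_nodes_private 2>/dev/null || true)
        export PRIMARY_NODE_IP SUBNODE_IPS
    fi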
19:12 <rlandy> 2018-08-30 19:01:15.659341 | primary | to see the full traceback, use -vvv
19:12 <rlandy> 2018-08-30 19:01:15.725697 | primary | /home/zuul/workspace/devstack-gate/functions.sh: line 150: /home/zuul/workspace/logs/reproduce.sh: No such file or directory
19:14 <rlandy> ssbarnea, ^^ do you know about the above error?
19:14 <rlandy> panda|rover: ?
19:25 <rlandy> rfolco: ^^??
19:25 <rlandy> anyone home?
19:26 <rfolco> rlandy, devstack-gate... oh jeez
19:26 <rlandy> rfolco: do you know what broke this?
19:26 <rfolco> no
19:27 <rlandy> ok - will have to dig in here
19:27 <rlandy> something merged
19:27 <rfolco> legacy jobs still use it...
19:30 <rlandy> what deleted it?
19:35 <rlandy> https://github.com/openstack-infra/tripleo-ci/blob/master/scripts/tripleo.sh#L1358
19:35 <rlandy> nothing should have deleted that
19:54 *** hamzy has quit IRC
19:55 *** hamzy has joined #oooq
20:02 <rlandy> 2018-08-30 19:33:47.759662 | primary | ERROR! Unexpected Exception: No module named manager
20:02 <rlandy> rfolco: ^^ real problem I think
20:04 *** centos has joined #oooq
20:04 *** centos is now known as Guest58453
20:04 <rfolco> rlandy, fatal?
20:04 <rlandy> yes
20:04 <rlandy> https://github.com/openstack/tripleo-quickstart/commit/60f9a8fdf14f3f6f023a8616107bea7b448fddd6
20:06 <rfolco> revert?
20:07 <rlandy> idk
20:08 <rlandy> I don't see it on upstream
20:09 <rlandy> https://review.openstack.org/#/c/587371/
20:10 <rlandy> merged 30 hours ago
20:10 <rlandy> let's try a revert
20:20 <rlandy> rfolco: shoot - the revert didn't work: ERROR! Unexpected Exception: No module named manager
20:21 <rfolco> rlandy, please paste the link, I can try to help find the root cause
20:21 <rlandy> https://logs.rdoproject.org/39/598339/1/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/899cc92/job-output.txt.gz#_2018-08-30_20_19_35_675611
20:21 <rlandy> rfolco: I'm comparing to previous jobs
20:25 <rlandy> 2018-08-30 20:19:29.238903 | tripleo-ovb-centos-7 | ara 0.16.0 has requirement ansible>=2.4.5.0, but you'll have ansible 2.3.2.0 which is incompatible.
20:26 <rlandy> cat: /etc/nodepool/primary_node_private: No such file or directory - seems ok
20:27 <rlandy> rfolco: this is the real problem ...
20:27 <rlandy> https://logs.rdoproject.org/39/598339/1/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/899cc92/job-output.txt.gz#_2018-08-30_20_19_36_298102
20:32 <rlandy> panda|rover's patch is also broken
20:33 <rlandy> but will fix that later
20:36 <rfolco> from ansible.config.manager import find_ini_config_file
20:36 <rfolco> rlandy, but this is an ansible error
20:38 <rlandy> rfolco: ^^ what are you pointing to here?
20:38 <rlandy> ssbarnea: you around?
20:40 <ssbarnea> rlandy: on mobile, ready to go to sleep. anything urgent?
20:40 <rlandy> ssbarnea: rdocloud jobs are broken
20:40 <rlandy> https://logs.rdoproject.org/39/598339/2/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9efb7c/job-output.txt.gz#_2018-08-30_20_37_06_777029
20:40 <rlandy> we are looking at this error:
20:40 <rlandy> 2018-08-30 20:37:08.606139 | tripleo-ovb-centos-7 | ERROR! Unexpected Exception: No module named manager
20:41 <rlandy> I tried a revert of your patch
20:41 <rlandy> but it didn't help :(
20:42 <rlandy> ssbarnea: did you test on rdocloud?
20:42 <rlandy> the patch does not show those results
20:42 <ssbarnea> yep, it is ansible, but not the minor version bump, it is likely using an ansible smaller than 2.4.0
20:43 <rlandy> I am comparing with
20:43 <rlandy> http://logs.rdoproject.org/25/561725/10/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/172eceb/job-output.txt.gz
20:43 <ssbarnea> ansible moved the manager module at some point, i remember a fix in infrared for dealing with that.
20:43 <rlandy> ssbarnea: ^^ see the diff?
20:43 <rlandy> 2018-08-30 08:40:16.356658 | tripleo-ovb-centos-7 |  [WARNING]: No hosts matched, nothing to do
20:43 <rlandy> 2018-08-30 08:40:17.755555 | tripleo-ovb-centos-7 | localhost | SUCCESS => {\
20:43 <ssbarnea> let me go to the computer
20:43 <rlandy> k
20:48 <rfolco> rlandy, that's it... python -c "from ansible.config.manager import find_ini_config_file"
20:48 <rfolco> --> ansible 2.7 works
20:48 <rfolco> rlandy, same for 2.5 doesn't
20:48 <rfolco> just tested in a venv
20:48 <rfolco> brb
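The one-liner rfolco is running is a quick way to probe whichever ansible the job actually installed; a sketch of the same check wrapped so a job could fail early (the module moved somewhere around the 2.4/2.5 timeframe per the discussion below, and 2.3.x clearly lacks it):

    #!/bin/bash
    # Report the installed ansible version, then probe for ansible.config.manager;
    # ansible 2.3.2.0 (what the job ended up with) fails this import with
    # "No module named manager", matching the CI error above.
    python -c "import ansible; print(ansible.__version__)"
    if python -c "from ansible.config.manager import find_ini_config_file"; then
        echo "ansible.config.manager is available"
    else
        echo "ansible is too old for this code path" >&2
        exit 1
    fi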
20:49 <ssbarnea> rlandy: cannot blame my bump from 2.5.3 to 2.5.8 when this failure is caused by the fact that the installed version is ansible-2.3.2.0! which is clearly the source of the problem.
20:49 <rlandy> ssbarnea: yeah - I know the revert did not work for that
20:50 <ssbarnea> rlandy: i know the manager issue, as we encountered it downstream about 3-4 months ago.
20:50 <rlandy> what happened to the versions?
20:50 <rlandy> 2018-08-30 20:36:59.441068 | tripleo-ovb-centos-7 | ara 0.16.0 has requirement ansible>=2.4.5.0, but you'll have ansible 2.3.2.0 which is incompatible.
20:51 <rlandy> pip upgrade ansible
20:51 <ssbarnea> when we migrated from 2.4 to 2.5 (or 2.3 to 2.4, not sure). it is part of the breaking changes in the ansible release notes.
20:52 <rlandy> but this broke between yesterday and today
20:52 <ssbarnea> well, pip deps are known to cause problems, this is why I always do a "pip check" after running pip install, to stop the build if there are conflicts reported.
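A sketch of that guard, using the two packages from the conflict above (the package list is illustrative): pip check exits non-zero when installed packages have incompatible requirements, so the build stops right there instead of failing later with an obscure import error.

    #!/bin/bash
    set -euo pipefail
    # Install the tooling the job needs, then have pip verify the resulting
    # set of packages is mutually consistent. With ara 0.16.0 and ansible
    # 2.3.2.0 this reports the conflict and aborts the build immediately.
    pip install ara ansible
    pip check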
20:52 <rlandy> what upgrades it then?
20:52 <rlandy> was not quickstart
20:53 <ssbarnea> in fact I think it is what downgrades it, not what upgrades it.
20:54 <ssbarnea> i think the pip install command is part of this script https://logs.rdoproject.org/39/598339/2/openstack-check/legacy-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/c9efb7c/job-output.txt.gz#_2018-08-30_20_36_13_776888
20:56 <ssbarnea> rlandy: found it: https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate-wrap.sh#L31
20:56 <ssbarnea> if we define a different version we can avoid that issue.
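That line is the usual shell-default pattern, so the hardcoded ansible only applies when the caller has not already pinned one; a sketch of the mechanism and of the override ssbarnea is suggesting (the exact default value and install step in devstack-gate may differ):

    #!/bin/bash
    # devstack-gate style default: the fallback only kicks in when the job
    # has not exported ANSIBLE_VERSION itself (fallback value is illustrative).
    export ANSIBLE_VERSION=${ANSIBLE_VERSION:-"2.3.2.0"}
    pip install "ansible==${ANSIBLE_VERSION}"

    # A job that needs a newer ansible would define the variable first, e.g.:
    #   export ANSIBLE_VERSION="2.5.8"
    #   ./devstack-vm-gate-wrap.sh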
20:59 <rlandy> weird - this was a year ago
20:59 <ssbarnea> it is the first time i see this repo, but this hardcoded line makes me wonder who decides which ansible we need; if each repo has its own version we will never be able to make them work.
20:59 *** hamzy has quit IRC
21:00 <ssbarnea> 2.3 is VERY old
21:00 <rlandy> only if ANSIBLE_VERSION is not defined
21:00 <rlandy> did we remove that definition?
21:00 <rlandy> I am just trying to nail down what changed
21:03 *** trown is now known as trown|outtypewww
21:05 <ssbarnea> rlandy: sorted! just wait a little bit, others are working to fix it. See https://review.openstack.org/#/c/591527/
21:05 <rlandy> ssbarnea: cool
21:06 <rlandy> ok - now to fix panda|rover's patch
21:08 <ssbarnea> i will go to sleep, it is 10pm here and I woke at 6am. CC me on any reviews or even email me, i will take over early morning. ok?
21:09 <ssbarnea> even so, my ability to help is limited to +/-1 anyway ;)
21:10 <rlandy> ssbarnea: good night - we can handle it from here
21:11 <rlandy> sorry I accused your patch :(
21:11 <rlandy> it was my first guess
21:19 *** holser_ has joined #oooq
21:31 *** dmellado has quit IRC
21:31 *** dmellado has joined #oooq
21:37 *** holser_ has quit IRC
21:40 <rlandy> rfolco: getting there
21:40 *** agopi has quit IRC
21:41 <rlandy> fixed panda|rover's patch
21:41 <rlandy> waiting on the devstack-gate fix
21:42 *** holser_ has joined #oooq
21:47 *** myoung has quit IRC
22:23 *** tosky has quit IRC
22:30 *** holser_ has quit IRC
22:35 *** sshnaidm is now known as sshnaidm|off
22:37 *** dbecker has quit IRC
23:06 *** dbecker has joined #oooq
23:12 *** agopi has joined #oooq
