Monday, 2017-03-27

*** amotoki has quit IRC00:01
*** bbbzhao has quit IRC00:20
*** bzhao has joined #openstack-lbaas00:21
*** oomichi has quit IRC00:39
*** oomichi has joined #openstack-lbaas00:42
*** amotoki has joined #openstack-lbaas01:04
*** yuanying_ has joined #openstack-lbaas01:04
*** yuanying has quit IRC01:07
*** reedip has quit IRC01:10
*** amotoki has quit IRC01:27
*** sanfern has quit IRC01:49
*** reedip has joined #openstack-lbaas02:00
*** oomichi has quit IRC02:08
*** oomichi has joined #openstack-lbaas02:12
*** yamamoto has joined #openstack-lbaas02:25
*** yuanying has joined #openstack-lbaas02:30
*** yuanying_ has quit IRC02:33
*** cpuga has joined #openstack-lbaas02:35
*** amotoki has joined #openstack-lbaas02:42
*** yamamoto has quit IRC02:43
*** yamamoto has joined #openstack-lbaas02:43
*** amotoki has quit IRC02:53
*** sanfern has joined #openstack-lbaas02:54
*** reedip has quit IRC02:58
*** amotoki has joined #openstack-lbaas03:06
*** links has joined #openstack-lbaas03:09
*** yamamoto has quit IRC03:16
*** yamamoto has joined #openstack-lbaas03:18
*** reedip has joined #openstack-lbaas03:30
*** amotoki has quit IRC03:31
*** yamamoto has quit IRC03:35
*** gongysh has joined #openstack-lbaas03:45
*** cpuga has quit IRC03:49
*** amotoki has joined #openstack-lbaas03:52
*** gongysh has quit IRC04:03
*** cpuga has joined #openstack-lbaas04:03
*** cpuga has quit IRC04:08
*** yamamoto has joined #openstack-lbaas04:22
*** cpuga has joined #openstack-lbaas05:11
*** sshank has quit IRC05:14
*** sshank has joined #openstack-lbaas05:14
*** gcheresh_ has joined #openstack-lbaas05:17
*** gongysh has joined #openstack-lbaas05:19
*** cpuga_ has joined #openstack-lbaas05:26
*** rcernin has joined #openstack-lbaas05:26
*** cpuga has quit IRC05:30
*** cpuga_ has quit IRC05:30
*** aojea has joined #openstack-lbaas05:54
*** oomichi has quit IRC05:58
*** oomichi has joined #openstack-lbaas06:01
*** aojea has quit IRC06:04
*** aojea has joined #openstack-lbaas06:04
*** aojea has quit IRC06:09
*** reedip has quit IRC06:24
*** oomichi has quit IRC06:28
*** oomichi has joined #openstack-lbaas06:32
*** oomichi has quit IRC06:39
*** oomichi has joined #openstack-lbaas06:43
*** reedip has joined #openstack-lbaas06:56
*** oomichi has quit IRC06:58
*** kobis has joined #openstack-lbaas06:59
*** oomichi has joined #openstack-lbaas07:03
*** oomichi has quit IRC07:08
*** oomichi has joined #openstack-lbaas07:13
*** aojea has joined #openstack-lbaas07:17
*** pcaruana has joined #openstack-lbaas07:20
*** oomichi has quit IRC07:38
*** oomichi has joined #openstack-lbaas07:43
*** tesseract has joined #openstack-lbaas07:44
*** krypto has joined #openstack-lbaas08:07
*** reedip has quit IRC08:23
*** reedip has joined #openstack-lbaas08:27
*** openstackgerrit has quit IRC08:33
*** krypto has quit IRC09:31
*** krypto has joined #openstack-lbaas09:31
*** aojea_ has joined #openstack-lbaas09:34
*** aojea has quit IRC09:37
*** reedip has quit IRC09:54
*** krypto has quit IRC10:24
*** krypto has joined #openstack-lbaas10:25
*** amotoki has quit IRC10:30
*** krypto has quit IRC10:34
*** gongysh has quit IRC10:35
*** krypto has joined #openstack-lbaas10:35
*** pksingh has joined #openstack-lbaas10:38
*** pksingh has quit IRC10:53
*** reedip has joined #openstack-lbaas10:54
*** sanfern has quit IRC10:56
*** krypto has quit IRC11:11
*** krypto has joined #openstack-lbaas11:11
*** reedip has quit IRC11:15
*** amotoki has joined #openstack-lbaas11:46
*** sanfern has joined #openstack-lbaas11:57
*** catintheroof has joined #openstack-lbaas12:08
*** krypto has quit IRC12:31
*** krypto has joined #openstack-lbaas12:32
*** krypto has quit IRC12:32
*** krypto has joined #openstack-lbaas12:32
*** links has quit IRC12:40
*** reedip has joined #openstack-lbaas12:48
*** chlong has joined #openstack-lbaas12:51
*** gongysh has joined #openstack-lbaas13:26
*** krypto has quit IRC13:34
*** krypto has joined #openstack-lbaas13:35
*** openstackgerrit has joined #openstack-lbaas13:52
openstackgerritGerman Eichberger proposed openstack/neutron-lbaas master: Octavia Proxy Plugin  https://review.openstack.org/41853013:52
*** krypto has quit IRC13:56
*** krypto has joined #openstack-lbaas13:56
*** blogan has joined #openstack-lbaas14:07
*** bzhao has quit IRC14:07
*** bzhao has joined #openstack-lbaas14:08
rm_workxgerman: how is that going?14:09
rm_workit'd make my testing a lot easier14:09
xgermangood14:09
xgermantrying to get a devstack up…14:09
*** chlong has quit IRC14:09
rm_worki got the sorting/paging working this weekend I think...14:10
rm_workpreliminarily...14:10
rm_workneed to do filtering too though14:10
xgermank, I am looking at failover and the proxy14:11
rm_workbut my priority is on listener/pool/member v214:12
rm_worklistener is good to go14:12
rm_workjohnsom +2'd it14:12
rm_workI can't because i worked on it too much i think14:12
xgermanok, then I will look at that as well14:12
rm_workor else i'd +A it right now, lol14:12
xgermanlooking…14:13
xgermanok - +A14:19
*** pksingh has joined #openstack-lbaas14:26
rm_workerr so14:30
rm_workon a Pool CREATE with a Listener, does the Listener go to ERROR if we fail to create the Pool?14:30
rm_workjohnsom / xgerman14:30
rm_workright now the code is set up to explicitly set it to ERROR, but the *test* is set to check for *ACTIVE*, and I am personally not sure which makes more sense14:31
xgermanmmh, not sure14:31
xgermanwith pool being a high level object I would think pool -> error; listener ok14:32
rm_workwell, if it's a create, there IS no Pool14:32
rm_workI think?14:32
rm_workoh wait, maybe the pool does exist14:32
rm_worklet me see14:32
xgermanpool is high level so should exist14:32
*** krypto has quit IRC14:33
rm_workyes, it does14:33
rm_workbut Pools in neutron-lbaas are weird and don't have two statuses like everything else14:33
rm_workonly a single "status", not provisioning_ / operating_14:33
*** krypto has joined #openstack-lbaas14:34
*** krypto has joined #openstack-lbaas14:34
rm_workand it seems to use the statuses from operating_status14:34
xgerman:-(14:37
xgermanso we half-assed it14:37
rm_workyeah, so, uhh... hmm14:38
rm_workwe can't put the high-level object in ERROR14:38
rm_workso I guess we have to put the Listener in ERROR? except...14:38
rm_workthen it's immutable14:39
rm_workso you have to ... delete it?14:39
rm_workor wait, is ERROR status immutable or not?14:39
rm_workI think it is14:39
rm_workyep, immutable14:39
rm_workso if your Pool CREATE fails, you have to DELETE the Listener14:40
rm_workthat seems bad14:40
rm_workso I think we opt for *not* error?14:40
rm_workman, member updates are set to do that too14:43
rm_workugh i dunno14:43
rm_workmaybe I'll leave it for now14:43
rm_workOH, the pool CAN go to ERROR14:49
rm_workhuh.14:49
*** gcheresh_ has quit IRC14:55
*** fnaval has joined #openstack-lbaas14:55
rm_workwtfff15:07
johnsomo/\15:07
*** rcernin has quit IRC15:07
openstackgerritMerged openstack/octavia master: Octavia v2 API for listeners  https://review.openstack.org/42474415:07
rm_workjohnsom: weird pool issues15:07
johnsomWhat are you seeing?15:07
rm_workA) On a Pool CREATE failure, what should happen to the Listener operating_status?15:07
rm_workERROR, or ACTIVE?15:08
rm_workB) Somehow the pool's operating_status is being set to ERROR in the DB on a failure, but I can't find where/how15:08
johnsomoperating status is observed, so it really depends on the point in time it failed.  Both of those could be valid15:08
rm_workerr15:08
rm_workself.repositories.listener.update(15:09
rm_work                        session, listener_id, operating_status=constants.ERROR)15:09
rm_workwe do that15:09
rm_workdoesn't seem very observed15:09
johnsomWhere do you see that?15:09
rm_workcontrollers/pool.py15:09
rm_work_send_pool_to_handler()15:09
rm_workon a create failure15:09
rm_workwe do the same on other object types too15:10
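
A rough sketch of the create-failure path being described, for context: the constant, repository helpers, and function name below are taken from the conversation (controllers/pool.py, _send_pool_to_handler(), repositories.listener.update) and are illustrative only, not guaranteed to match the merged Octavia code.

    # Sketch only: names follow the discussion above, not necessarily the
    # real controllers/pool.py in Octavia.
    ERROR = 'ERROR'      # stand-in for constants.ERROR

    def _send_pool_to_handler(handler, repositories, session, pool, listener_id):
        """Hand the new pool to the backend handler, marking statuses on failure."""
        try:
            handler.create(pool)                  # asynchronous provisioning kick-off
        except Exception:
            # Current behaviour under discussion: a failed pool CREATE pushes the
            # *listener's* operating_status to ERROR, even though the health
            # manager normally owns operating_status, and objects in ERROR are
            # treated as immutable.
            repositories.listener.update(
                session, listener_id, operating_status=ERROR)
            # Alternative rm_work settles on later (15:23): leave the listener
            # ACTIVE and set only the pool to ERROR, e.g.
            #   repositories.pool.update(session, pool.id, operating_status=ERROR)
        return repositories.pool.get(session, id=pool.id)
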
*** kobis has quit IRC15:10
rm_workin Octavia v1 api, we expect Listener status to be ERROR on a pool create failure15:11
johnsomHmmm,15:11
johnsomI'm not 100% sure that makes sense to me.  The pool is going to roll back, so it won't really exist, same with the database changes for the listener.15:14
rm_workno15:14
rm_workthe pool does not roll back15:14
rm_workas xgerman points out, it is a high level object now15:14
rm_workso it's already created15:14
johnsompool.py:16715:14
johnsomAh, no I see now15:15
rm_workand we never set it to ERROR or anything15:15
rm_workBUT15:15
rm_workmy point B was, somehow it does get it15:15
rm_workand I can't figure out how15:15
rm_workso on a post failure, we get back the object15:15
rm_workand the status is "OFFLINE"15:15
johnsomWell, one problem I see is we aren't setting the provisioning status to ERROR on that failure, so the pool is in a strange state15:15
rm_workif you immediately do a GET, it is status ERROR15:16
johnsomStuck in pending I guess15:16
rm_workPool has only one status in neutron-lbaas15:16
rm_workand it is more closely a mirror of operating_status15:16
johnsomrm_work I am pretty sure that is health_manager setting it that way, as it is the "observe" engine and manages operating_status15:17
rm_workin our DB, we have both but we only set operating_status15:17
rm_workhmm15:17
rm_workbut this is in *functional tests*15:17
rm_worki'm observing this when in a functional test15:17
rm_workapi_pool = self.create_pool(...); api_pool['pool']['status'] == constants.ERROR15:19
rm_workerr sorry15:20
rm_workapi_pool = self.create_pool(...); api_pool['pool']['status'] == constants.OFFLINE15:20
rm_workapi_pool = self.get_pool(api_pool['pool']['id']); api_pool['pool']['status'] == constants.ERROR15:20
rm_workoh15:22
rm_worknevermind15:22
rm_worki am dumb and looking at the wrong test15:22
rm_workok so i know how to fix this then15:22
rm_workerr, kinda15:22
johnsomYeah, looking at pools, there are a number of bugs here with the status.  It's setting operating status and not provisioning.  That isn't good15:23
rm_workI think maybe we want Listener to stay ACTIVE but Pool to be ERROR?15:23
rm_workyeah i'm working on it15:23
rm_worki have a huge update incoming15:23
johnsomoperating status gets overwritten by the HM15:23
rm_workthe tests that exposed this were just ... being skipped15:23
rm_workyeah but we can initially set it to ERROR15:23
rm_workon Create15:23
rm_workbecause ... yes15:23
sshankrm_work, There was also an error related to the bad handler. I had to skip it until a solution could be found.15:24
rm_workyeah15:24
rm_workthat's the one15:24
*** chlong has joined #openstack-lbaas15:25
sshankI think these errors related to the bad handler might come up in members as well. nakul_d had something similar.15:25
rm_workjohnsom: not to mention that i just noticed we don't differentiate ERROR between operating_status and provisioning_status15:26
rm_workthere's two "ERROR = " lines in constants.py15:26
rm_work>_>15:26
johnsomJoy15:26
rm_worknot that it presents a "problem"15:27
rm_workbut15:27
johnsomOk, so listener IS setting provisioning status to ERROR, like it should15:27
rm_workerr15:28
rm_worki'll look15:28
rm_workyeah k15:28
johnsomBut, I just noticed we really don't have the right try blocks here, I think....  I need to get my coffee and read through a bit deeper15:28
rm_workso on Pool Create failure15:28
rm_workshould listener go to ... provisioning_status = ERROR ?15:28
johnsomI don't think so personally.15:29
rm_workk15:29
rm_workme either15:29
johnsomPart of the point of prov_status ERROR is to allow the user to delete that object and rebuild15:29
rm_workbut i think it depends on what Neutron-LBaaS does, not what I want15:29
johnsomYeah, worried about that too....  ugh15:30
rm_workso the question is ... what does the status tree look like in neutron-lbaas, when a Pool CREATE fails15:30
rm_workdigging through tests to see if i can find the answer15:30
rm_workbut we have no functional tests in neutron-lbaas15:32
rm_workaaaah15:33
rm_workmaybe in neutron-lbaas it depends on the plugin entirely?!15:33
xgermanpotentially15:34
johnsomPersonally I think we need to get this "right"15:34
rm_workyeah but aren't we handcuffed15:34
johnsomIt will haunt us if we don't.15:34
rm_workto "what neutron-lbaas did"15:34
rm_worksshank: yeah i think you're right, it's in members too... i tried to fix it there as well but i should have figured out this one first, i think15:35
xgermanmmh, why does v2 not return loadbalancers?15:36
rm_workxgerman: ?15:36
xgermanI am using my proxy15:36
xgermanhttps://www.irccloud.com/pastebin/hnLUnuQL/15:36
xgermangives me blank15:36
rm_work:/15:37
rm_workmight be related to the get_all fix15:37
xgermanhttps://www.irccloud.com/pastebin/rnlxrZCO/15:37
rm_workxgerman: try pulling in this patch:15:37
xgermanok15:38
rm_workhttps://review.openstack.org/#/c/449822/15:38
rm_workah hold on15:38
rm_workmerge conflict15:38
xgermanwell, at least creating an LB worked15:38
rm_workyeah i bet it's that fix15:39
rm_workone sec15:39
xgermanhttps://www.irccloud.com/pastebin/UcUjbc1M/15:39
xgermanbut that driver error thing worries me ;-)15:39
rm_workooooone sec15:39
*** belharar has joined #openstack-lbaas15:40
openstackgerritAdam Harwell proposed openstack/octavia master: Fix get_all method for v2 LB controller  https://review.openstack.org/44982215:44
rm_workdoh it was a noop rebase, lolk15:44
rm_worknot sure why it said merge failed15:45
rm_workxgerman: try with that patch15:45
xgermanok, also get <id> seems to be broken15:45
johnsomYeah, I think that is part of the filters work.  It's id as a url param right?15:46
rm_workxgerman: err15:46
rm_workno?15:46
rm_workit should be GET /listeners/<uuid>15:47
rm_worknot url-param15:47
rm_workit's get_one()15:47
rm_workbut it should be working15:47
xgermanhttps://www.irccloud.com/pastebin/lji7oVer/15:47
rm_workget_all would be broken though15:47
xgermannope - as id in the Url15:47
johnsomBut I think the client is doing strange things where it's doing a filtered get all15:47
rm_workummm15:47
rm_workwow15:47
rm_workthat'd be dumb15:47
rm_workxgerman: do --debug15:47
johnsomWell, that works....15:47
rm_workoh15:48
rm_workwait that's a curl15:48
xgermanyep15:48
rm_workuhh15:48
xgermanthe client has the filter error15:48
rm_workthat's a curl against actual octavia15:48
*** aojea has joined #openstack-lbaas15:48
xgermanyep15:48
rm_workwut15:48
xgermanat least post works :-)15:48
rm_workthat should not fail15:48
rm_worki see it working in functional tests15:48
xgermanok, might be me…15:49
*** aojea_ has quit IRC15:50
rm_workhmm15:51
rm_workfunctional tests COULD be bugged?15:52
rm_worknot testing properly?15:52
rm_workthe LB is there in the DB?15:52
xgermanyes15:53
xgermanit’s in the DB - the LB is there, that’s how I got the id15:53
johnsomnoauth or ???? is it a project_id mis-match?15:54
xgermanI was wondering about that15:55
xgermansame15:56
rm_workoh are you passing project_id?15:57
rm_worki don't see it in that curl?15:57
xgermanhttps://www.irccloud.com/pastebin/bJk3C8WZ/15:57
rm_workdid you enable auth?15:57
xgermannope15:57
rm_workin octavia.conf15:57
*** amotoki has quit IRC15:57
rm_workyeah do that :P15:57
rm_workmake the auth method = keystone15:58
johnsomIt should still work with noauth though15:58
xgermanwell, now I get 40115:58
rm_worklol15:58
rm_workah yeah15:59
rm_workthe token is wrong15:59
rm_workyou can't use the SHA thing for some reason15:59
rm_workneed to generate a token with "openstack token issue"15:59
xgermanok15:59
*** aojea has quit IRC15:59
xgermanstill16:00
*** aojea has joined #openstack-lbaas16:00
xgermanbut my new functionality can find it16:01
xgermanhttps://www.irccloud.com/pastebin/n4R2FMj5/16:01
xgermanmmh, maybe I broke it by adding that stuff16:01
xgermanlogs:16:02
xgerman017-03-27 12:00:14.181 13223 DEBUG wsme.api [req-0ecb55a1-289c-4d3d-936c-203e1083ab53 - a476a84e54644f1ab01864ed7fa6e258 - default default] Client-side error: Load Balancer c6s7254b5-f076-c67254b5-f076-45e9-bb88-bfe69bf3461d not found. format_exception /usr/local/lib/python2.7/dist-packages/wsme/api.py:22216:02
*** aojea has quit IRC16:04
*** sanfern has quit IRC16:06
*** sanfern has joined #openstack-lbaas16:07
rm_workhmmm16:09
rm_worki don't have a good stack currently, i don't think16:09
rm_worknm, i do16:10
rm_workand it works fine16:10
rm_workfor me16:10
rm_workcurl http://127.0.0.1:9876/v2.0/lbaas/loadbalancers/2479e788-e3a5-4806-b61c-021884ea9afc -H "Content-Type: application/json" -H "X-Auth-Token: $MYTOKEN" | jq16:10
rm_workreturns me the LB16:10
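
For reference, a Python equivalent of that curl, sketched with the requests library; the endpoint, port, UUID, and header names come from the log, and the token is assumed to be obtained separately via `openstack token issue` as mentioned above.

    # Sketch: same GET as the curl above, done from Python with requests.
    import requests

    OCTAVIA_ENDPOINT = 'http://127.0.0.1:9876'              # devstack default used in the log
    LB_ID = '2479e788-e3a5-4806-b61c-021884ea9afc'          # load balancer UUID from the log
    TOKEN = '<paste output of `openstack token issue`>'     # keystone token, obtained separately

    resp = requests.get(
        '{0}/v2.0/lbaas/loadbalancers/{1}'.format(OCTAVIA_ENDPOINT, LB_ID),
        headers={'Content-Type': 'application/json', 'X-Auth-Token': TOKEN})
    resp.raise_for_status()   # a 401 here usually means a stale token or an auth_strategy mismatch
    print(resp.json())
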
rm_worksomething might be wonky in your octavia setup :/16:11
rm_workok so johnsom i am just going to make the sensical choice here for the status issue16:11
rm_workand if it isn't what neutron-lbaas does...16:11
rm_worksomeone can file a bug16:11
rm_workand we can change it then16:11
johnsomgrin16:11
rm_workdoes that seem OK to you? >_>16:12
rm_workbecause guessing around how to make it do the wrong thing, seems dumb16:12
johnsomI guess it depends how far we go.  I would like to start with the "right" way16:12
rm_workand i agree with the intent, if you create a bad pool, you should be able to fix the listener16:12
rm_workso, fixing it16:12
johnsomThere are some bugs against LBaaS and status anyway, so...16:13
rm_workjohnsom: so the only issue i see16:14
rm_workis that since Pool only has one status16:14
rm_workand it is tracking "operating_status" really16:15
rm_workwe can set the provisioning_status to ERROR but the user will never see it16:15
rm_workbut if it isn't provisioned, there's no HM updating that, so we should be able to initially manually set operating_status to ERROR and it'll stick16:15
rm_workbut then, do we *unset* it as the listener's default pool?16:16
xgermanso our failover/replace logic kills the amphora BEFORE it readies the new one…16:16
rm_workand .... what, roll back to the old default_pool?16:16
rm_workxgerman: yep16:16
rm_workxgerman: that's what i assumed you'd be fixing16:16
xgermanI will16:16
xgermanI made it accessible with the API16:16
rm_work:)16:16
xgermandidn’t think that needed fixing16:17
johnsomrm_work I am not following you.  My nlbaas reference DB has both status16:17
rm_workerrrr16:17
rm_workbut the returned object16:17
rm_work??16:17
rm_workyou showed me the return from a pool create/get16:17
rm_worki thought16:17
rm_workand it only had "status"16:17
xgermanoh, in other news I got +2 on the openstack-ansible-os_octavia project16:17
rm_work\o/16:18
johnsomJust a second16:18
johnsomSo, show on pool doesn't show any status....16:18
johnsomhttps://www.irccloud.com/pastebin/JwvJbd7V/16:18
rm_workugh ok16:19
rm_workso i have been back and forth on this like 4 times16:19
rm_workSHOULD a pool display both provisioning_status and operating_status ?16:19
johnsomyes16:19
johnsomWe are adding those in the octavia v2 api16:19
rm_workbut with shared pools16:20
rm_workwhat do those MEAN16:20
*** pksingh has quit IRC16:20
rm_workcan a pool be both provisioned and not?16:20
rm_workor since it's on the same LB i guess maybe not?16:20
johnsoma pool is a pool16:21
rm_workk16:21
rm_workswitching it AGAIN16:21
rm_workfifth time16:21
johnsomSo confused right now...16:21
*** ducttape_ has joined #openstack-lbaas16:23
rm_workok16:24
*** pksingh has joined #openstack-lbaas16:24
*** belharar has quit IRC16:25
*** pksingh has quit IRC16:26
rm_workjohnsom: what about on a PUT? :P16:32
rm_workif we take a working pool and break on a put16:32
rm_workdoes everything ELSE go back to ACTIVE?16:33
johnsomYes16:33
rm_workand on a DELETE?16:33
rm_workdoes the pool go to ERROR and everything else to ACTIVE?16:33
johnsomCorrect16:33
*** rcernin has joined #openstack-lbaas16:37
*** bcafarel has quit IRC16:38
*** bcafarel has joined #openstack-lbaas16:42
*** ducttape_ has quit IRC16:50
*** krypto has quit IRC16:51
*** krypto has joined #openstack-lbaas16:51
*** amotoki has joined #openstack-lbaas16:58
*** krypto has quit IRC16:58
*** tesseract has quit IRC16:59
*** armax has joined #openstack-lbaas17:02
*** amotoki has quit IRC17:03
*** rcernin has quit IRC17:08
rm_workerrr, hmm17:09
rm_workthe v1 controller for Pool leaves the LB/Listener objects in PENDING_CREATE17:09
rm_workI wonder if that is ... only for functional?17:09
rm_worklike, something else is supposed to revert that if it was really running?17:10
rm_workbut not sure what...17:10
johnsomNo, I think that is a bug.  I think I commented on the patch about that17:10
rm_worki mean in v1?17:12
rm_workah17:12
rm_workyou commented17:12
rm_workbut i think just about v217:12
rm_workit also does this in v117:12
rm_workso we have a bug in v1?17:12
*** pcaruana has quit IRC17:12
johnsomYEs17:14
rm_workk17:15
rm_workok... will push up shortly17:16
johnsomOk17:16
*** harlowja has quit IRC17:18
*** chlong has quit IRC17:22
*** harlowja has joined #openstack-lbaas17:24
*** chlong has joined #openstack-lbaas17:38
*** armax_ has joined #openstack-lbaas17:39
*** armax has quit IRC17:39
*** armax_ is now known as armax17:39
*** ducttape_ has joined #openstack-lbaas17:47
johnsomDone digging through e-mail and such.  How is the patch update looking?17:47
rm_workmock issues17:48
rm_workagain17:48
rm_workthe mock isn't ... mocking right17:48
rm_workwhen i run the tests individually, it works great17:48
rm_workwhen i run them as a set17:48
rm_workthe mock doesn't fire17:48
rm_worki feel like i fixed this exact problem before but i can't figure out where it is17:49
rm_worki've been grepping through IRC logs17:49
xgermanwell, given that we are doing  API docs we could always re-write history17:53
rm_worklol17:53
johnsomSpeaking of...  https://review.openstack.org/#/c/438757/17:58
johnsomgrin17:58
johnsom(API docs, not necessarily rewriting history...)17:58
*** amotoki has joined #openstack-lbaas17:59
*** amotoki has quit IRC18:03
*** aojea has joined #openstack-lbaas18:05
rm_workwtf seriously i know i solved this but i can't find a trace of it18:05
*** aojea has quit IRC18:10
*** armax has quit IRC18:13
*** gcheresh_ has joined #openstack-lbaas18:14
*** catintheroof has quit IRC18:15
*** gongysh has quit IRC18:15
*** gcheresh_ has quit IRC18:19
*** armax has joined #openstack-lbaas18:22
*** aojea has joined #openstack-lbaas18:32
rm_workjohnsom: this mock is making me pull my hair out18:35
johnsomDo you want to push it up and let me take a look?18:35
rm_workdo you know why the tests would get a different version of a mock if run individually18:35
rm_workversus run as a group18:36
johnsomI know there are some issues about config loading when you run them individually.18:36
johnsomGroup issues I have seen are usually test order/leak issues18:37
*** aojea has quit IRC18:37
rm_workhmm18:38
rm_workyeah i might just push this up18:38
*** armax has quit IRC18:41
openstackgerritAdam Harwell proposed openstack/octavia master: Introduce Octavia v2 API for pools  https://review.openstack.org/40592218:42
rm_workok so18:42
rm_workyou'll see the three bad handler tests failing18:42
rm_workif you run them individually they work fine18:42
johnsomOk18:43
rm_workbut if you run the whole class of tests, the handler_mock we're using to set the exception side effect, is not the same as the one the controller is actually using18:43
*** cody-somerville has joined #openstack-lbaas18:45
*** cody-somerville has quit IRC18:45
*** cody-somerville has joined #openstack-lbaas18:45
rm_workthis wouldn't be as annoying if i didn't REMEMBER fixing this18:48
rm_workalready resolved. grrrr18:49
rm_workand it seems to work in v1?18:55
rm_workyeah in the v1 tests, self.handler_mock().pool from the test code matches the object_id of self.handler in the controller code18:59
rm_workbut in v2 it does not18:59
*** amotoki has joined #openstack-lbaas19:00
*** amotoki has quit IRC19:05
*** aojea has joined #openstack-lbaas19:09
*** ducttape_ has quit IRC19:09
*** gcheresh_ has joined #openstack-lbaas19:12
*** kobis has joined #openstack-lbaas19:20
rm_workso, v2 must do some import earlier than v1 that makes it load up the handler class before we mock it?19:25
johnsomIt's in the init for pool.py19:25
johnsomI was just looking at that19:25
rm_worki mean, same for v119:28
rm_worktechnically it's coming from the base19:28
rm_workwhich means whenever the app is initialized i thought19:28
rm_workwhich is clearly after19:28
johnsomI'm not sure v1 is doing anything19:28
rm_worki'm testing this in v119:30
rm_workdebugging19:30
rm_workin v1 the same tests run19:30
rm_work(create_with_bad_handler)19:30
rm_workbut in v1, the handler mock is the same ID19:30
rm_workin the controller and in the test19:30
rm_workjohnsom: ah19:34
rm_workso, in v2, the FIRST time we run setup of the base, it sets the mock, and that is the one it uses for all eternity19:35
rm_workthat first mock_handler object19:35
rm_workon subsequent runs, it makes a new one, but the app still uses the old one19:35
rm_worksomehow v1 doesn't work that way??19:35
rm_workin v1, it always gets the new mock19:36
rm_workwtf that makes no sense19:38
xgermanist +119:39
rm_work?19:39
xgermanagree with no sense19:43
xgermanI am paying attention to too many things so likely no attention at all19:44
*** kobis has quit IRC19:48
rm_workheh19:48
rm_workok so i found a dumb workaround that i don't understand19:48
rm_workIf i go fetch the correct (original) mock out of the test app... it kinda works19:50
rm_workok i see a difference19:53
rm_workbut i don't understand it19:54
rm_workok ... ummm20:08
rm_work"fixed"20:08
rm_workjust wait until it makes the app, then20:09
rm_workself.handler_mock = getattr(self.app.app.application.application.application.root, 'v2.0').handler20:09
rm_workoverwrite the newly created mock with the actual mock20:09
rm_worktada <_<20:09
rm_workof course then the test code needs to clean up changes it makes to the mock manually20:09
rm_workyeah so in v1, every time we create a test app, it gets a new mock correctly20:12
rm_workin v2, when we create a test app, it gets ... the same one20:12
rm_worksorry that's not true20:13
rm_workin both v1 and v2, every time setUp runs, getattr(self.app.app.application.application.application.root, 'v2.0').handler is the same object ID20:13
rm_workbut in v1, even though it's the same object ID, when the controller code actually runs it has a different ID each time (the same as self.handler_mock)20:14
rm_workin v2, when the controller code actually runs, it has the same ID every time (the original app's mock ID) which doesn't match self.handler_mock except the first run20:14
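
A minimal sketch of the workaround this leads to, assuming a pecan-style test app like the one in the Octavia functional tests; the attribute chain is the one pasted above at 20:09, while the helper name and the setUp() usage are illustrative only.

    # Sketch of the workaround described above.  The attribute chain comes from
    # the pasted line; the helper name and setUp() usage are illustrative, not
    # the actual Octavia test code.

    def fetch_v2_handler_mock(test_app):
        """Return the handler mock the running v2 pecan app is actually using.

        In v2 the controller keeps the handler created the first time the test
        app was built, so a fresh mock made in a later setUp() never fires.
        Re-fetching the app's own handler keeps the test and the controller
        pointing at the same object.
        """
        root = test_app.app.application.application.application.root
        return getattr(root, 'v2.0').handler

    # Illustrative setUp() usage:
    #     self.app = self._make_app()                      # hypothetical helper
    #     self.handler_mock = fetch_v2_handler_mock(self.app)
    #     # any side_effect set on this shared mock must be reset in cleanup
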
*** ducttape_ has joined #openstack-lbaas20:16
openstackgerritAdam Harwell proposed openstack/octavia master: Introduce Octavia v2 API for pools  https://review.openstack.org/40592220:18
rm_workjohnsom: in case you want to look at the egregious hack i had to do to get this working ^^20:19
johnsomOk20:19
rm_workjust look at the diff20:19
johnsomThis almost seems like a joke: self.app.app.application.application.application.root20:22
*** catintheroof has joined #openstack-lbaas20:23
rm_workyes20:25
rm_worki am actually laughing20:25
rm_workout loud20:25
rm_workbut in the sort of maniacal / scary way that means maybe i should be in a straight jacket20:26
johnsomHa20:26
rm_workthis falls directly into the category of *stupid*20:26
rm_worki've wasted 5 hours on this20:27
rm_workthis isn't the correct solution20:27
rm_worki'm about to not care and just leave it like this anyway20:27
rm_worki can't find any code difference that should be relevant20:28
*** ducttape_ has quit IRC20:35
*** ducttape_ has joined #openstack-lbaas20:38
*** ducttape_ has quit IRC20:38
*** catintheroof has quit IRC20:39
*** csomerville has joined #openstack-lbaas20:48
rm_workIf I had a table here, I would flip it20:50
rm_workI officially give up20:50
rm_workat least for today20:50
*** cody-somerville has quit IRC20:51
rm_workthere's no reason these should behave differently20:51
rm_work(v1 and v2)20:51
*** aojea has quit IRC20:57
*** gcheresh_ has quit IRC20:57
*** aojea has joined #openstack-lbaas20:57
*** aojea has quit IRC21:02
*** aojea has joined #openstack-lbaas21:02
*** csomerville has quit IRC21:39
*** ducttape_ has joined #openstack-lbaas21:43
*** ducttape_ has quit IRC21:43
rm_workjohnsom: fixed your issues. posting momentarily21:46
johnsomHa, ok.  I'm testing....21:46
rm_workafk 20 after posting, got so wrapped up in this stupid mock BS that I forgot to eat21:47
openstackgerritAdam Harwell proposed openstack/octavia master: Introduce Octavia v2 API for pools  https://review.openstack.org/40592221:47
*** ducttape_ has joined #openstack-lbaas21:51
*** armax has joined #openstack-lbaas21:52
*** amotoki has joined #openstack-lbaas22:02
*** chlong has quit IRC22:03
*** armax has quit IRC22:04
*** amotoki has quit IRC22:12
*** aojea has quit IRC22:13
*** aojea has joined #openstack-lbaas22:13
johnsomSigh22:14
johnsomhttps://www.irccloud.com/pastebin/4zDSsVnG/22:14
johnsomBombs out when you use a listener_id instead of a load balancer ID22:15
*** ducttape_ has quit IRC22:15
*** aojea has quit IRC22:18
*** ducttape_ has joined #openstack-lbaas22:19
rm_workerr22:20
rm_workthat might be my fault, one sec22:20
johnsomI posted some comments.  On post we are assuming a loadbalancer_id22:21
rm_workah no but simple22:21
rm_workyeah22:21
*** fnaval has quit IRC22:25
rm_workummm so, should a Pool GET return the pool including health monitors?22:37
rm_workright now it's set up to not return "children"22:37
rm_workso no members or health monitors22:38
rm_workbut i'd point out that those can be fixed later because they don't exist yet in the API :P22:38
rm_work(they're subsequent patches)22:38
johnsomI think they are just IDs that are returned22:38
rm_workhmm22:39
rm_workwell, the v2 api can't create HMs or members at this point22:39
rm_workbut i guess i can try to emulate it22:39
rm_workbut IMO it should be fixed in the next patches22:39
johnsomThat is fine, I'm just marking things that are coming back different.  In this case, when there is not content it's returning differently....  I think on listener we fixed it in the patch22:40
rm_workyeah so i fixed the listeners thing just now22:41
rm_workeasy peasy22:41
johnsomOk, I put up a few more.22:41
rm_workk22:41
johnsomOne more test and I'm done22:42
rm_work"The content returned from the PUT call has provisioning_status as "ACTIVE" instead of PENDING_UPDATE."22:43
rm_workDo you mean... it's supposed to be "PENDING_UPDATE" and it is returning "PENDING_CREATE"? :P22:43
johnsomI got prov status ACTIVE and old data in the response from a PUT call.22:44
rm_workhuh.22:44
rm_workfunctional test is showing "PENDING_CREATE" which is ... also wrong22:45
rm_workbut totally different, lol22:45
johnsomI would expect that to be PENDING_UPDATE as I changed the session persistence type which is a handler round trip22:45
rm_workOH NO i see22:45
rm_workok22:45
rm_workyeah22:45
*** armax has joined #openstack-lbaas22:46
rm_workwait hold up22:48
rm_workso wasn't there a thing22:48
rm_workwhere we don't update the DB on the API side22:48
rm_work?22:48
rm_workjohnsom: ^^22:49
johnsomYes, but we should be giving the user back a PENDING_UPDATE status22:49
rm_workit doesn't look like any of v1 or the prior v2 controllers do updates for the PUT22:49
rm_workah, err22:49
rm_workok22:49
rm_workso literally just override the one22:49
johnsomReally?  I am pretty sure it does, it just may not be returning that response22:49
rm_workI don't think Listener does it22:49
rm_workor LB22:49
johnsomIsn't pool.py:73 setting it?22:51
johnsomIt's just we are pulling from the DB in a different session....22:51
rm_workerr22:52
johnsomActually, it is the same session....  hmmm22:52
johnsomWe either need to lie to the user and give them the updated values, or return with PENDING_UPDATE for provisioning status...22:53
johnsomAh, that only sets the LB and listener into PENDING_UPDATE...  That is why22:54
johnsomWhich is our DB locking22:54
rm_workyeah22:54
johnsomSo, yeah, I think we need to hack the response and give back PENDING_UPDATE...22:54
rm_workI can do POOL too22:54
rm_workI think we can update the DB right after the lock22:56
rm_workshould be fine22:56
johnsomYes22:56
rm_worksince it IS locked22:56
johnsomI just checked that we DO set it back to ACTIVE at the end of the flow as well22:56
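
A minimal sketch of what that implies for the pool PUT path, assuming repository-style helpers like the ones discussed; the lock helper and constant below are stand-ins, not the exact Octavia methods.

    # Sketch of the agreed approach: after taking the DB lock, also flip the
    # pool to PENDING_UPDATE so the PUT response reflects the in-flight change.
    # Helper and constant names are stand-ins for the real repository calls.
    PENDING_UPDATE = 'PENDING_UPDATE'   # stand-in for constants.PENDING_UPDATE

    def put_pool(repositories, handler, session, pool_id, pool_updates):
        # Existing behaviour per the log (pool.py:73): the LB and listener are
        # flipped to PENDING_UPDATE as the locking step.
        repositories.lock_lb_and_listeners(session, pool_id, PENDING_UPDATE)  # assumed helper

        # The addition discussed: mark the pool itself right after the lock,
        # instead of returning stale ACTIVE data to the user.
        repositories.pool.update(
            session, pool_id, provisioning_status=PENDING_UPDATE)

        handler.update(pool_id, pool_updates)   # async; the flow sets ACTIVE again at the end
        return repositories.pool.get(session, id=pool_id)
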
johnsomOk, that is all I have on pools.  I've run through the tests I wanted to check22:58
rm_workk 1 sec22:59
rm_workhmm did a get_all work for you?23:10
rm_workit's misbehaving for me23:10
rm_workoh nm23:10
johnsomYes, it worked23:10
rm_workoh nm the nm, weird, still having problems i think23:11
*** ducttape_ has quit IRC23:15
johnsomTaking a break for a few, will be back in 3023:15
*** ducttape_ has joined #openstack-lbaas23:19
rm_workkk23:33
*** ducttape_ has quit IRC23:34
johnsomBack23:35
rm_workk23:50
rm_workalmost done23:51
rm_workhad some unexpected fallout from changing the delete status23:51
rm_workok good to go23:51
openstackgerritAdam Harwell proposed openstack/octavia master: Introduce Octavia v2 API for pools  https://review.openstack.org/40592223:51
rm_workformally requesting the session_persistence update issue be filed as a bug and fixed later :P23:51
johnsomOk, I will take a look23:52
rm_workotherwise i think it's solid23:52
rm_workbesides the ridiculous stupid mock issue23:52
