*** amotoki has quit IRC | 00:01 | |
*** bbbzhao has quit IRC | 00:20 | |
*** bzhao has joined #openstack-lbaas | 00:21 | |
*** oomichi has quit IRC | 00:39 | |
*** oomichi has joined #openstack-lbaas | 00:42 | |
*** amotoki has joined #openstack-lbaas | 01:04 | |
*** yuanying_ has joined #openstack-lbaas | 01:04 | |
*** yuanying has quit IRC | 01:07 | |
*** reedip has quit IRC | 01:10 | |
*** amotoki has quit IRC | 01:27 | |
*** sanfern has quit IRC | 01:49 | |
*** reedip has joined #openstack-lbaas | 02:00 | |
*** oomichi has quit IRC | 02:08 | |
*** oomichi has joined #openstack-lbaas | 02:12 | |
*** yamamoto has joined #openstack-lbaas | 02:25 | |
*** yuanying has joined #openstack-lbaas | 02:30 | |
*** yuanying_ has quit IRC | 02:33 | |
*** cpuga has joined #openstack-lbaas | 02:35 | |
*** amotoki has joined #openstack-lbaas | 02:42 | |
*** yamamoto has quit IRC | 02:43 | |
*** yamamoto has joined #openstack-lbaas | 02:43 | |
*** amotoki has quit IRC | 02:53 | |
*** sanfern has joined #openstack-lbaas | 02:54 | |
*** reedip has quit IRC | 02:58 | |
*** amotoki has joined #openstack-lbaas | 03:06 | |
*** links has joined #openstack-lbaas | 03:09 | |
*** yamamoto has quit IRC | 03:16 | |
*** yamamoto has joined #openstack-lbaas | 03:18 | |
*** reedip has joined #openstack-lbaas | 03:30 | |
*** amotoki has quit IRC | 03:31 | |
*** yamamoto has quit IRC | 03:35 | |
*** gongysh has joined #openstack-lbaas | 03:45 | |
*** cpuga has quit IRC | 03:49 | |
*** amotoki has joined #openstack-lbaas | 03:52 | |
*** gongysh has quit IRC | 04:03 | |
*** cpuga has joined #openstack-lbaas | 04:03 | |
*** cpuga has quit IRC | 04:08 | |
*** yamamoto has joined #openstack-lbaas | 04:22 | |
*** cpuga has joined #openstack-lbaas | 05:11 | |
*** sshank has quit IRC | 05:14 | |
*** sshank has joined #openstack-lbaas | 05:14 | |
*** gcheresh_ has joined #openstack-lbaas | 05:17 | |
*** gongysh has joined #openstack-lbaas | 05:19 | |
*** cpuga_ has joined #openstack-lbaas | 05:26 | |
*** rcernin has joined #openstack-lbaas | 05:26 | |
*** cpuga has quit IRC | 05:30 | |
*** cpuga_ has quit IRC | 05:30 | |
*** aojea has joined #openstack-lbaas | 05:54 | |
*** oomichi has quit IRC | 05:58 | |
*** oomichi has joined #openstack-lbaas | 06:01 | |
*** aojea has quit IRC | 06:04 | |
*** aojea has joined #openstack-lbaas | 06:04 | |
*** aojea has quit IRC | 06:09 | |
*** reedip has quit IRC | 06:24 | |
*** oomichi has quit IRC | 06:28 | |
*** oomichi has joined #openstack-lbaas | 06:32 | |
*** oomichi has quit IRC | 06:39 | |
*** oomichi has joined #openstack-lbaas | 06:43 | |
*** reedip has joined #openstack-lbaas | 06:56 | |
*** oomichi has quit IRC | 06:58 | |
*** kobis has joined #openstack-lbaas | 06:59 | |
*** oomichi has joined #openstack-lbaas | 07:03 | |
*** oomichi has quit IRC | 07:08 | |
*** oomichi has joined #openstack-lbaas | 07:13 | |
*** aojea has joined #openstack-lbaas | 07:17 | |
*** pcaruana has joined #openstack-lbaas | 07:20 | |
*** oomichi has quit IRC | 07:38 | |
*** oomichi has joined #openstack-lbaas | 07:43 | |
*** tesseract has joined #openstack-lbaas | 07:44 | |
*** krypto has joined #openstack-lbaas | 08:07 | |
*** reedip has quit IRC | 08:23 | |
*** reedip has joined #openstack-lbaas | 08:27 | |
*** openstackgerrit has quit IRC | 08:33 | |
*** krypto has quit IRC | 09:31 | |
*** krypto has joined #openstack-lbaas | 09:31 | |
*** aojea_ has joined #openstack-lbaas | 09:34 | |
*** aojea has quit IRC | 09:37 | |
*** reedip has quit IRC | 09:54 | |
*** krypto has quit IRC | 10:24 | |
*** krypto has joined #openstack-lbaas | 10:25 | |
*** amotoki has quit IRC | 10:30 | |
*** krypto has quit IRC | 10:34 | |
*** gongysh has quit IRC | 10:35 | |
*** krypto has joined #openstack-lbaas | 10:35 | |
*** pksingh has joined #openstack-lbaas | 10:38 | |
*** pksingh has quit IRC | 10:53 | |
*** reedip has joined #openstack-lbaas | 10:54 | |
*** sanfern has quit IRC | 10:56 | |
*** krypto has quit IRC | 11:11 | |
*** krypto has joined #openstack-lbaas | 11:11 | |
*** reedip has quit IRC | 11:15 | |
*** amotoki has joined #openstack-lbaas | 11:46 | |
*** sanfern has joined #openstack-lbaas | 11:57 | |
*** catintheroof has joined #openstack-lbaas | 12:08 | |
*** krypto has quit IRC | 12:31 | |
*** krypto has joined #openstack-lbaas | 12:32 | |
*** krypto has quit IRC | 12:32 | |
*** krypto has joined #openstack-lbaas | 12:32 | |
*** links has quit IRC | 12:40 | |
*** reedip has joined #openstack-lbaas | 12:48 | |
*** chlong has joined #openstack-lbaas | 12:51 | |
*** gongysh has joined #openstack-lbaas | 13:26 | |
*** krypto has quit IRC | 13:34 | |
*** krypto has joined #openstack-lbaas | 13:35 | |
*** openstackgerrit has joined #openstack-lbaas | 13:52 | |
openstackgerrit | German Eichberger proposed openstack/neutron-lbaas master: Octavia Proxy Plugin https://review.openstack.org/418530 | 13:52 |
*** krypto has quit IRC | 13:56 | |
*** krypto has joined #openstack-lbaas | 13:56 | |
*** blogan has joined #openstack-lbaas | 14:07 | |
*** bzhao has quit IRC | 14:07 | |
*** bzhao has joined #openstack-lbaas | 14:08 | |
rm_work | xgerman: how is that going? | 14:09 |
rm_work | it'd make my testing a lot easier | 14:09 |
xgerman | good | 14:09 |
xgerman | trying to get a devstack up… | 14:09 |
*** chlong has quit IRC | 14:09 | |
rm_work | i got the sorting/paging working this weekend I think... | 14:10 |
rm_work | preliminarily... | 14:10 |
rm_work | need to do filtering too though | 14:10 |
xgerman | k, I am looking at failover and the proxy | 14:11 |
rm_work | but my priority is on listener/pool/member v2 | 14:12 |
rm_work | listener is good to go | 14:12 |
rm_work | johnsom +2'd it | 14:12 |
rm_work | I can't because i worked on it too much i think | 14:12 |
xgerman | ok, then I will look at that as well | 14:12 |
rm_work | or else i'd +A it right now, lol | 14:12 |
xgerman | looking… | 14:13 |
xgerman | ok - +A | 14:19 |
*** pksingh has joined #openstack-lbaas | 14:26 | |
rm_work | err so | 14:30 |
rm_work | on a Pool CREATE with a Listener, does the Listener go to ERROR if we fail to create the Pool? | 14:30 |
rm_work | johnsom / xgerman | 14:30 |
rm_work | right now the code is set up to explicitly set it to ERROR, but the *test* is set to check for *ACTIVE*, and I am personally not sure which makes more sense | 14:31 |
xgerman | mmh, not sure | 14:31 |
xgerman | with pool being a high level object I would think pool -> error; listener ok | 14:32 |
rm_work | well, if it's a create, there IS no Pool | 14:32 |
rm_work | I think? | 14:32 |
rm_work | oh wait, maybe the pool does exist | 14:32 |
rm_work | let me see | 14:32 |
xgerman | pool is high level so should exist | 14:32 |
*** krypto has quit IRC | 14:33 | |
rm_work | yes, it does | 14:33 |
rm_work | but Pools in neutron-lbaas are weird and don't have two statuses like everything else | 14:33 |
rm_work | only a single "status", not provisioning_ / operating_ | 14:33 |
*** krypto has joined #openstack-lbaas | 14:34 | |
*** krypto has joined #openstack-lbaas | 14:34 | |
rm_work | and it seems to use the statuses from operating_status | 14:34 |
xgerman | :-( | 14:37 |
xgerman | so we half-assed it | 14:37 |
rm_work | yeah, so, uhh... hmm | 14:38 |
rm_work | we can't put the high-level object in ERROR | 14:38 |
rm_work | so I guess we have to put the Listener in ERROR? except... | 14:38 |
rm_work | then it's immutable | 14:39 |
rm_work | so you have to ... delete it? | 14:39 |
rm_work | or wait, is ERROR status immutable or not? | 14:39 |
rm_work | I think it is | 14:39 |
rm_work | yep, immutable | 14:39 |
rm_work | so if your Pool CREATE fails, you have to DELETE the Listener | 14:40 |
rm_work | that seems bad | 14:40 |
rm_work | so I think we opt for *not* error? | 14:40 |
rm_work | man, member updates are set to do that too | 14:43 |
rm_work | ugh i dunno | 14:43 |
rm_work | maybe I'll leave it for now | 14:43 |
rm_work | OH, the pool CAN go to ERROR | 14:49 |
rm_work | huh. | 14:49 |
*** gcheresh_ has quit IRC | 14:55 | |
*** fnaval has joined #openstack-lbaas | 14:55 | |
rm_work | wtfff | 15:07 |
johnsom | o/\ | 15:07 |
*** rcernin has quit IRC | 15:07 | |
openstackgerrit | Merged openstack/octavia master: Octavia v2 API for listeners https://review.openstack.org/424744 | 15:07 |
rm_work | johnsom: weird pool issues | 15:07 |
johnsom | What are you seeing? | 15:07 |
rm_work | A) On a Pool CREATE failure, what should happen to the Listener operating_status? | 15:07 |
rm_work | ERROR, or ACTIVE? | 15:08 |
rm_work | B) Somehow the pool's operating_status is being set to ERROR in the DB on a failure, but I can't find where/how | 15:08 |
johnsom | operating status is observed, so it really depends on the point in time it failed. Both of those could be valid | 15:08 |
rm_work | err | 15:08 |
rm_work | self.repositories.listener.update( | 15:09 |
rm_work | session, listener_id, operating_status=constants.ERROR) | 15:09 |
rm_work | we do that | 15:09 |
rm_work | doesn't seem very observed | 15:09 |
johnsom | Where do you see that? | 15:09 |
rm_work | controllers/pool.py | 15:09 |
rm_work | _send_pool_to_handler() | 15:09 |
rm_work | on a create failure | 15:09 |
rm_work | we do the same on other object types too | 15:10 |
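For context, a minimal sketch of the failure path being quoted above (illustrative names, not the actual Octavia controller code): if the handler call raises while creating the pool, the parent listener's operating_status is forced to ERROR, which is exactly the behavior rm_work is questioning.

```python
ERROR = 'ERROR'  # stands in for octavia.common.constants.ERROR


def send_pool_to_handler(handler, listener_repo, session, db_pool, listener_id):
    """Hand a freshly created pool off to the handler; sketch only."""
    try:
        handler.create(db_pool)
    except Exception:
        # The line quoted above: a failed pool create pushes the *listener*
        # into operating_status ERROR.
        listener_repo.update(session, listener_id, operating_status=ERROR)
        raise
```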
*** kobis has quit IRC | 15:10 | |
rm_work | in Octavia v1 api, we expect Listener status to be ERROR on a pool create failure | 15:11 |
johnsom | Hmmm, | 15:11 |
johnsom | I'm not 100% sure that makes sense to me. The pool is going to roll back, so it won't really exist, same with the database changes for the listener. | 15:14 |
rm_work | no | 15:14 |
rm_work | the pool does not roll back | 15:14 |
rm_work | as xgerman points out, it is a high level object now | 15:14 |
rm_work | so it's already created | 15:14 |
johnsom | pool.py:167 | 15:14 |
johnsom | Ah, no I see now | 15:15 |
rm_work | and we never set it to ERROR or anything | 15:15 |
rm_work | BUT | 15:15 |
rm_work | my point B was, somehow it does get it | 15:15 |
rm_work | and I can't figure out how | 15:15 |
rm_work | so on a post failure, we get back the object | 15:15 |
rm_work | and the status is "OFFLINE" | 15:15 |
johnsom | Well, one problem I see is we aren't setting the provisioning status to ERROR on that failure, so the pool is in a strange state | 15:15 |
rm_work | if you immediately do a GET, it is status ERROR | 15:16 |
johnsom | Stuck in pending I guess | 15:16 |
rm_work | Pool has only one status in neutron-lbaas | 15:16 |
rm_work | and it is more closely a mirror of operating_status | 15:16 |
johnsom | rm_work I am pretty sure that is health_manager setting it that way, as it is the "observe" engine and manages operating_status | 15:17 |
rm_work | in our DB, we have both but we only set operating_status | 15:17 |
rm_work | hmm | 15:17 |
rm_work | but this is in *functional tests* | 15:17 |
rm_work | i'm observing this when in a functional test | 15:17 |
rm_work | api_pool = self.create_pool(...); api_pool['pool']['status'] == constants.ERROR | 15:19 |
rm_work | err sorry | 15:20 |
rm_work | api_pool = self.create_pool(...); api_pool['pool']['status'] == constants.OFFLINE | 15:20 |
rm_work | api_pool = self.get_pool(api_pool['pool']['id']); api_pool['pool']['status'] == constants.ERROR | 15:20 |
rm_work | oh | 15:22 |
rm_work | nevermind | 15:22 |
rm_work | i am dumb and looking at the wrong test | 15:22 |
rm_work | ok so i know how to fix this then | 15:22 |
rm_work | err, kinda | 15:22 |
johnsom | Yeah, looking at pools, there are a number of bugs here with the status. It's setting operating status and not provisioning. That isn't good | 15:23 |
rm_work | I think maybe we want Listener to stay ACTIVE but Pool to be ERROR? | 15:23 |
rm_work | yeah i'm working on it | 15:23 |
rm_work | i have a huge update incoming | 15:23 |
johnsom | operating status gets overwritten by the HM | 15:23 |
rm_work | the tests that exposed this were just ... being skipped | 15:23 |
rm_work | yeah but we can initially set it to ERROR | 15:23 |
rm_work | on Create | 15:23 |
rm_work | because ... yes | 15:23 |
sshank | rm_work, There was also an error related to the bad handler. I had to skip it until a solution could be found. | 15:24 |
rm_work | yeah | 15:24 |
rm_work | that's the one | 15:24 |
*** chlong has joined #openstack-lbaas | 15:25 | |
sshank | I think this error related to the bad handler might come up in members as well. nakul_d had something similar. | 15:25 |
rm_work | johnsom: not to mention that i just noticed we don't differentiate ERROR between operating_status and provisioning_status | 15:26 |
rm_work | there's two "ERROR = " lines in constants.py | 15:26 |
rm_work | >_> | 15:26 |
johnsom | Joy | 15:26 |
rm_work | not that it presents a "problem" | 15:27 |
rm_work | but | 15:27 |
johnsom | Ok, so listener IS setting provisioning status to ERROR, like it should | 15:27 |
rm_work | err | 15:28 |
rm_work | i'll look | 15:28 |
rm_work | yeah k | 15:28 |
johnsom | But, I just noticed we really don't have the right try blocks here, I think.... I need to get my coffee and read through a bit deeper | 15:28 |
rm_work | so on Pool Create failure | 15:28 |
rm_work | should listener go to ... provisioning_status = ERROR ? | 15:28 |
johnsom | I don't think so personally. | 15:29 |
rm_work | k | 15:29 |
rm_work | me either | 15:29 |
johnsom | Part of the point of prov_status ERROR is to allow the user to delete that object and rebuild | 15:29 |
rm_work | but i think it depends on what Neutron-LBaaS does, not what I want | 15:29 |
johnsom | Yeah, worried about that too.... ugh | 15:30 |
rm_work | so the question is ... what does the status tree look like in neutron-lbaas, when a Pool CREATE fails | 15:30 |
rm_work | digging through tests to see if i can find the answer | 15:30 |
rm_work | but we have no functional tests in neutron-lbaas | 15:32 |
rm_work | aaaah | 15:33 |
rm_work | maybe in neutron-lbaas it depends on the plugin entirely?! | 15:33 |
xgerman | potentially | 15:34 |
johnsom | Personally I think we need to get this "right" | 15:34 |
rm_work | yeah but aren't we handcuffed | 15:34 |
johnsom | It will haunt us if we don't. | 15:34 |
rm_work | to "what neutron-lbaas did" | 15:34 |
rm_work | sshank: yeah i think you're right, it's in members too... i tried to fix it there as well but i should have figured out this one first, i think | 15:35 |
xgerman | mmh, why does v2 not return loadbalancers? | 15:36 |
rm_work | xgerman: ? | 15:36 |
xgerman | I am using my proxy | 15:36 |
xgerman | https://www.irccloud.com/pastebin/hnLUnuQL/ | 15:36 |
xgerman | gives me blank | 15:36 |
rm_work | :/ | 15:37 |
rm_work | might be related to the get_all fix | 15:37 |
xgerman | https://www.irccloud.com/pastebin/rnlxrZCO/ | 15:37 |
rm_work | xgerman: try pulling in this patch: | 15:37 |
xgerman | ok | 15:38 |
rm_work | https://review.openstack.org/#/c/449822/ | 15:38 |
rm_work | ah hold on | 15:38 |
rm_work | merge conflict | 15:38 |
xgerman | well, at least creating an LB worked | 15:38 |
rm_work | yeah i bet it's that fix | 15:39 |
rm_work | one sec | 15:39 |
xgerman | https://www.irccloud.com/pastebin/UcUjbc1M/ | 15:39 |
xgerman | but that driver error thing worries me ;-) | 15:39 |
rm_work | ooooone sec | 15:39 |
*** belharar has joined #openstack-lbaas | 15:40 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Fix get_all method for v2 LB controller https://review.openstack.org/449822 | 15:44 |
rm_work | doh it was a noop rebase, lolk | 15:44 |
rm_work | not sure why it said merge failed | 15:45 |
rm_work | xgerman: try with that patch | 15:45 |
xgerman | ok, also get <id> seems to be broken | 15:45 |
johnsom | Yeah, I think that is part of the filters work. It's id as a url param right? | 15:46 |
rm_work | xgerman: err | 15:46 |
rm_work | no? | 15:46 |
rm_work | it should be GET /listeners/<uuid> | 15:47 |
rm_work | not url-param | 15:47 |
rm_work | it's get_one() | 15:47 |
rm_work | but it should be working | 15:47 |
xgerman | https://www.irccloud.com/pastebin/lji7oVer/ | 15:47 |
rm_work | get_all would be broken though | 15:47 |
xgerman | nope - as id in the Url | 15:47 |
johnsom | But I think the client is doing strange things where it's doing a filtered get all | 15:47 |
rm_work | ummm | 15:47 |
rm_work | wow | 15:47 |
rm_work | that'd be dumb | 15:47 |
rm_work | xgerman: do --debug | 15:47 |
johnsom | Well, that works.... | 15:47 |
rm_work | oh | 15:48 |
rm_work | wait that's a curl | 15:48 |
xgerman | yep | 15:48 |
rm_work | uhh | 15:48 |
xgerman | the client has the filter error | 15:48 |
rm_work | that's a curl against actual octavia | 15:48 |
*** aojea has joined #openstack-lbaas | 15:48 | |
xgerman | yep | 15:48 |
rm_work | wut | 15:48 |
xgerman | at least post works :-) | 15:48 |
rm_work | that should not fail | 15:48 |
rm_work | i see it working in functional tests | 15:48 |
xgerman | ok, might be me… | 15:49 |
*** aojea_ has quit IRC | 15:50 | |
rm_work | hmm | 15:51 |
rm_work | functional tests COULD be bugged? | 15:52 |
rm_work | not testing properly? | 15:52 |
rm_work | the LB is there in the DB? | 15:52 |
xgerman | yes | 15:53 |
xgerman | it’s in the DB - that’s how I got the id | 15:53 |
johnsom | noauth or ???? is it a project_id mis-match? | 15:54 |
xgerman | I was wondering about that | 15:55 |
xgerman | same | 15:56 |
rm_work | oh are you passing project_id? | 15:57 |
rm_work | i don't see it in that curl? | 15:57 |
xgerman | https://www.irccloud.com/pastebin/bJk3C8WZ/ | 15:57 |
rm_work | did you enable auth? | 15:57 |
xgerman | nope | 15:57 |
rm_work | in octavia.conf | 15:57 |
*** amotoki has quit IRC | 15:57 | |
rm_work | yeah do that :P | 15:57 |
rm_work | make the auth method = keystone | 15:58 |
johnsom | It should still work with noauth though | 15:58 |
xgerman | well, now I get 401 | 15:58 |
rm_work | lol | 15:58 |
rm_work | ah yeah | 15:59 |
rm_work | the token is wrong | 15:59 |
rm_work | you can't use the SHA thing for some reason | 15:59 |
rm_work | need to generate a token with "openstack token issue" | 15:59 |
xgerman | ok | 15:59 |
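For anyone scripting this rather than using the CLI, a hedged sketch with keystoneauth1: mint a real token (the equivalent of "openstack token issue") and call the v2 loadbalancers endpoint the way the curls in this discussion do. The auth URL, credentials, and load balancer id are placeholders, not values from this log.

```python
import requests
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Placeholder credentials for a devstack-style cloud.
auth = v3.Password(auth_url='http://127.0.0.1/identity/v3',
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_name='Default',
                   project_domain_name='Default')
token = session.Session(auth=auth).get_token()

# Same shape of request as the curl examples in this conversation,
# with a freshly issued token instead of the SHA-style one.
resp = requests.get(
    'http://127.0.0.1:9876/v2.0/lbaas/loadbalancers/<lb-uuid>',
    headers={'Content-Type': 'application/json', 'X-Auth-Token': token})
print(resp.status_code, resp.json())
```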
*** aojea has quit IRC | 15:59 | |
xgerman | still | 16:00 |
*** aojea has joined #openstack-lbaas | 16:00 | |
xgerman | but my new functionality can find it | 16:01 |
xgerman | https://www.irccloud.com/pastebin/n4R2FMj5/ | 16:01 |
xgerman | mmh, maybe I broke it by adding that stuff | 16:01 |
xgerman | logs: | 16:02 |
xgerman | 2017-03-27 12:00:14.181 13223 DEBUG wsme.api [req-0ecb55a1-289c-4d3d-936c-203e1083ab53 - a476a84e54644f1ab01864ed7fa6e258 - default default] Client-side error: Load Balancer c6s7254b5-f076-c67254b5-f076-45e9-bb88-bfe69bf3461d not found. format_exception /usr/local/lib/python2.7/dist-packages/wsme/api.py:222 | 16:02 |
*** aojea has quit IRC | 16:04 | |
*** sanfern has quit IRC | 16:06 | |
*** sanfern has joined #openstack-lbaas | 16:07 | |
rm_work | hmmm | 16:09 |
rm_work | i don't have a good stack currently, i don't think | 16:09 |
rm_work | nm, i do | 16:10 |
rm_work | and it works fine | 16:10 |
rm_work | for me | 16:10 |
rm_work | curl http://127.0.0.1:9876/v2.0/lbaas/loadbalancers/2479e788-e3a5-4806-b61c-021884ea9afc -H "Content-Type: application/json" -H "X-Auth-Token: $MYTOKEN" | jq | 16:10 |
rm_work | returns me the LB | 16:10 |
rm_work | something might be wonky in your octavia setup :/ | 16:11 |
rm_work | ok so johnsom i am just going to make the sensical choice here for the status issue | 16:11 |
rm_work | and if it isn't what neutron-lbaas does... | 16:11 |
rm_work | someone can file a bug | 16:11 |
rm_work | and we can change it then | 16:11 |
johnsom | grin | 16:11 |
rm_work | does that seem OK to you? >_> | 16:12 |
rm_work | because guessing around how to make it do the wrong thing, seems dumb | 16:12 |
johnsom | I guess it depends how far we go. I would like to start with the "right" way | 16:12 |
rm_work | and i agree with the intent, if you create a bad pool, you should be able to fix the listener | 16:12 |
rm_work | so, fixing it | 16:12 |
johnsom | There are some bugs against LBaaS and status anyway, so... | 16:13 |
rm_work | johnsom: so the only issue i see | 16:14 |
rm_work | is that since Pool only has one status | 16:14 |
rm_work | and it is tracking "operating_status" really | 16:15 |
rm_work | we can set the provisioning_status to ERROR but the user will never see it | 16:15 |
rm_work | but if it isn't provisioned, there's no HM updating that, so we should be able to initially manually set operating_status to ERROR and it'll stick | 16:15 |
rm_work | but then, do we *unset* it as the listener's default pool? | 16:16 |
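A sketch of where this seems to be heading (illustrative repository names, and an assumption rather than settled neutron-lbaas behavior): mark the pool itself as failed on both statuses, since no health manager is observing an unprovisioned pool yet, and keep the listener ACTIVE so it stays mutable.

```python
ERROR = 'ERROR'
ACTIVE = 'ACTIVE'


def mark_pool_create_failed(pool_repo, listener_repo, session,
                            pool_id, listener_id):
    # The pool carries the failure on both statuses; nothing will overwrite
    # the operating_status because the pool was never actually provisioned.
    pool_repo.update(session, pool_id,
                     provisioning_status=ERROR,
                     operating_status=ERROR)
    # The listener goes back to ACTIVE instead of an immutable ERROR, so the
    # user can retry the pool create rather than delete the listener.
    listener_repo.update(session, listener_id, provisioning_status=ACTIVE)
```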
xgerman | so our failover/replace logic kills the amphora BEFORE it readies the new one… | 16:16 |
rm_work | and .... what, roll back to the old default_pool? | 16:16 |
rm_work | xgerman: yep | 16:16 |
rm_work | xgerman: that's what i assumed you'd be fixing | 16:16 |
xgerman | I will | 16:16 |
xgerman | I made it accessible with the API | 16:16 |
rm_work | :) | 16:16 |
xgerman | didn’t think that needed fixing | 16:17 |
johnsom | rm_work I am not following you. My nlbaas reference DB has both status | 16:17 |
rm_work | errrr | 16:17 |
rm_work | but the returned object | 16:17 |
rm_work | ?? | 16:17 |
rm_work | you showed me the return from a pool create/get | 16:17 |
rm_work | i thought | 16:17 |
rm_work | and it only had "status" | 16:17 |
xgerman | oh, in other news I got +2 on the openstack-ansible-os_octavia project | 16:17 |
rm_work | \o/ | 16:18 |
johnsom | Just a second | 16:18 |
johnsom | So, show on pool doesn't show any status.... | 16:18 |
johnsom | https://www.irccloud.com/pastebin/JwvJbd7V/ | 16:18 |
rm_work | ugh ok | 16:19 |
rm_work | so i have been back and forth on this like 4 times | 16:19 |
rm_work | SHOULD a pool display both provisioning_status and operating_status ? | 16:19 |
johnsom | yes | 16:19 |
johnsom | We are adding those in the octavia v2 api | 16:19 |
rm_work | but with shared pools | 16:20 |
rm_work | what do those MEAN | 16:20 |
*** pksingh has quit IRC | 16:20 | |
rm_work | can a pool be both provisioned and not? | 16:20 |
rm_work | or since it's on the same LB i guess maybe not? | 16:20 |
johnsom | a pool is a pool | 16:21 |
rm_work | k | 16:21 |
rm_work | switching it AGAIN | 16:21 |
rm_work | fifth time | 16:21 |
johnsom | So confused right now... | 16:21 |
*** ducttape_ has joined #openstack-lbaas | 16:23 | |
rm_work | ok | 16:24 |
*** pksingh has joined #openstack-lbaas | 16:24 | |
*** belharar has quit IRC | 16:25 | |
*** pksingh has quit IRC | 16:26 | |
rm_work | johnsom: what about on a PUT? :P | 16:32 |
rm_work | if we take a working pool and break on a put | 16:32 |
rm_work | does everything ELSE go back to ACTIVE? | 16:33 |
johnsom | Yes | 16:33 |
rm_work | and on a DELETE? | 16:33 |
rm_work | does the pool go to ERROR and everything else to ACTIVE? | 16:33 |
johnsom | Correct | 16:33 |
*** rcernin has joined #openstack-lbaas | 16:37 | |
*** bcafarel has quit IRC | 16:38 | |
*** bcafarel has joined #openstack-lbaas | 16:42 | |
*** ducttape_ has quit IRC | 16:50 | |
*** krypto has quit IRC | 16:51 | |
*** krypto has joined #openstack-lbaas | 16:51 | |
*** amotoki has joined #openstack-lbaas | 16:58 | |
*** krypto has quit IRC | 16:58 | |
*** tesseract has quit IRC | 16:59 | |
*** armax has joined #openstack-lbaas | 17:02 | |
*** amotoki has quit IRC | 17:03 | |
*** rcernin has quit IRC | 17:08 | |
rm_work | errr, hmm | 17:09 |
rm_work | the v1 controller for Pool leaves the LB/Listener objects in PENDING_CREATE | 17:09 |
rm_work | I wonder if that is ... only for functional? | 17:09 |
rm_work | like, something else is supposed to revert that if it was really running? | 17:10 |
rm_work | but not sure what... | 17:10 |
johnsom | No, I think that is a bug. I think I commented on the patch about that | 17:10 |
rm_work | i mean in v1? | 17:12 |
rm_work | ah | 17:12 |
rm_work | you commented | 17:12 |
rm_work | but i think just about v2 | 17:12 |
rm_work | it also does this in v1 | 17:12 |
rm_work | so we have a bug in v1? | 17:12 |
*** pcaruana has quit IRC | 17:12 | |
johnsom | Yes | 17:14 |
rm_work | k | 17:15 |
rm_work | ok... will push up shortly | 17:16 |
johnsom | Ok | 17:16 |
*** harlowja has quit IRC | 17:18 | |
*** chlong has quit IRC | 17:22 | |
*** harlowja has joined #openstack-lbaas | 17:24 | |
*** chlong has joined #openstack-lbaas | 17:38 | |
*** armax_ has joined #openstack-lbaas | 17:39 | |
*** armax has quit IRC | 17:39 | |
*** armax_ is now known as armax | 17:39 | |
*** ducttape_ has joined #openstack-lbaas | 17:47 | |
johnsom | Done digging through e-mail and such. How is the patch update looking? | 17:47 |
rm_work | mock issues | 17:48 |
rm_work | again | 17:48 |
rm_work | the mock isn't ... mocking right | 17:48 |
rm_work | when i run the tests individually, it works great | 17:48 |
rm_work | when i run them as a set | 17:48 |
rm_work | the mock doesn't fire | 17:48 |
rm_work | i feel like i fixed this exact problem before but i can't figure out where it is | 17:49 |
rm_work | i've been grepping through IRC logs | 17:49 |
xgerman | well, given that we are doing API docs we could always re-write history | 17:53 |
rm_work | lol | 17:53 |
johnsom | Speaking of... https://review.openstack.org/#/c/438757/ | 17:58 |
johnsom | grin | 17:58 |
johnsom | (API docs, not necessarily rewriting history...) | 17:58 |
*** amotoki has joined #openstack-lbaas | 17:59 | |
*** amotoki has quit IRC | 18:03 | |
*** aojea has joined #openstack-lbaas | 18:05 | |
rm_work | wtf seriously i know i solved this but i can't find a trace of it | 18:05 |
*** aojea has quit IRC | 18:10 | |
*** armax has quit IRC | 18:13 | |
*** gcheresh_ has joined #openstack-lbaas | 18:14 | |
*** catintheroof has quit IRC | 18:15 | |
*** gongysh has quit IRC | 18:15 | |
*** gcheresh_ has quit IRC | 18:19 | |
*** armax has joined #openstack-lbaas | 18:22 | |
*** aojea has joined #openstack-lbaas | 18:32 | |
rm_work | johnsom: this mock is making me pull my hair out | 18:35 |
johnsom | Do you want to push it up and let me take a look? | 18:35 |
rm_work | do you know why the tests would get a different version of a mock if run individually | 18:35 |
rm_work | versus run as a group | 18:36 |
johnsom | I know there are some issues about config loading when you run them individually. | 18:36 |
johnsom | Group issues I have seen are usually test order/leak issues | 18:37 |
*** aojea has quit IRC | 18:37 | |
rm_work | hmm | 18:38 |
rm_work | yeah i might just push this up | 18:38 |
*** armax has quit IRC | 18:41 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Introduce Octavia v2 API for pools https://review.openstack.org/405922 | 18:42 |
rm_work | ok so | 18:42 |
rm_work | you'll see the three bad handler tests failing | 18:42 |
rm_work | if you run them individually they work fine | 18:42 |
johnsom | Ok | 18:43 |
rm_work | but if you run the whole class of tests, the handler_mock we're using to set the exception side effect, is not the same as the one the controller is actually using | 18:43 |
*** cody-somerville has joined #openstack-lbaas | 18:45 | |
*** cody-somerville has quit IRC | 18:45 | |
*** cody-somerville has joined #openstack-lbaas | 18:45 | |
rm_work | this wouldn't be as annoying if i didn't REMEMBER fixing this | 18:48 |
rm_work | already resolved. grrrr | 18:49 |
rm_work | and it seems to work in v1? | 18:55 |
rm_work | yeah in the v1 tests, self.handler_mock().pool from the test code matches the object_id of self.handler in the controller code | 18:59 |
rm_work | but in v2 it does not | 18:59 |
*** amotoki has joined #openstack-lbaas | 19:00 | |
*** amotoki has quit IRC | 19:05 | |
*** aojea has joined #openstack-lbaas | 19:09 | |
*** ducttape_ has quit IRC | 19:09 | |
*** gcheresh_ has joined #openstack-lbaas | 19:12 | |
*** kobis has joined #openstack-lbaas | 19:20 | |
rm_work | so, v2 must do some import earlier than v1 that makes it load up the handler class before we mock it? | 19:25 |
johnsom | It's in the init for pool.py | 19:25 |
johnsom | I was just looking at that | 19:25 |
rm_work | i mean, same for v1 | 19:28 |
rm_work | technically it's coming from the base | 19:28 |
rm_work | which means whenever the app is initialized i thought | 19:28 |
rm_work | which is clearly after | 19:28 |
johnsom | I'm not sure v1 is doing anything | 19:28 |
rm_work | i'm testing this in v1 | 19:30 |
rm_work | debugging | 19:30 |
rm_work | in v1 the same tests run | 19:30 |
rm_work | (create_with_bad_handler) | 19:30 |
rm_work | but in v1, the handler mock is the same ID | 19:30 |
rm_work | in the controller and in the test | 19:30 |
rm_work | johnsom: ah | 19:34 |
rm_work | so, in v2, the FIRST time we run setup of the base, it sets the mock, and that is the one it uses for all eternity | 19:35 |
rm_work | that first mock_handler object | 19:35 |
rm_work | on subsequent runs, it makes a new one, but the app still uses the old one | 19:35 |
rm_work | somehow v1 doesn't work that way?? | 19:35 |
rm_work | in v1, it always gets the new mock | 19:36 |
rm_work | wtf that makes no sense | 19:38 |
xgerman | ist +1 | 19:39 |
rm_work | ? | 19:39 |
xgerman | agree with no sense | 19:43 |
xgerman | I am paying attention to too many things so likely no attention at all | 19:44 |
*** kobis has quit IRC | 19:48 | |
rm_work | heh | 19:48 |
rm_work | ok so i found a dumb workaround that i don't understand | 19:48 |
rm_work | If i go fetch the correct (original) mock out of the test app... it kinda works | 19:50 |
rm_work | ok i see a difference | 19:53 |
rm_work | but i don't understand it | 19:54 |
rm_work | ok ... ummm | 20:08 |
rm_work | "fixed" | 20:08 |
rm_work | just wait until it makes the app, then | 20:09 |
rm_work | self.handler_mock = getattr(self.app.app.application.application.application.root, 'v2.0').handler | 20:09 |
rm_work | overwrite the newly created mock with the actual mock | 20:09 |
rm_work | tada <_< | 20:09 |
rm_work | of course then the test code needs to clean up changes it makes to the mock manually | 20:09 |
rm_work | yeah so in v1, every time we create a test app, it gets a new mock correctly | 20:12 |
rm_work | in v2, when we create a test app, it gets ... the same one | 20:12 |
rm_work | sorry that's not true | 20:13 |
rm_work | in both v1 and v2, every time setUp runs, getattr(self.app.app.application.application.application.root, 'v2.0').handler is the same object ID | 20:13 |
rm_work | but in v1, even though it's the same object ID, when the controller code actually runs it has a different ID each time (the same as self.handler_mock) | 20:14 |
rm_work | in v2, when the controller code actually runs, it has the same ID every time (the original app's mock ID) which doesn't match self.handler_mock except the first run | 20:14 |
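A stripped-down illustration of the mismatch rm_work is describing (a toy reproduction of the symptom, not the Octavia test code): if the app caches the first handler instance it ever builds, re-patching in each setUp produces a brand-new MagicMock that the controller never sees, so side effects set on the new mock never fire.

```python
from unittest import mock


class FakeHandler(object):
    def create(self, obj):
        pass


_cached_handler = None


def build_app_handler():
    # Simulates the suspected v2 behavior: the first handler ever constructed
    # is cached and reused for every subsequently "built" test app.
    global _cached_handler
    if _cached_handler is None:
        _cached_handler = FakeHandler()
    return _cached_handler


with mock.patch(__name__ + '.FakeHandler') as handler_mock_1:
    first = build_app_handler()       # the app grabs handler_mock_1()

with mock.patch(__name__ + '.FakeHandler') as handler_mock_2:
    second = build_app_handler()      # cache wins: still the first mock
    assert second is first            # handler_mock_2 is never used
    assert not handler_mock_2.called  # so its side_effect would never fire
```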
*** ducttape_ has joined #openstack-lbaas | 20:16 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Introduce Octavia v2 API for pools https://review.openstack.org/405922 | 20:18 |
rm_work | johnsom: in case you want to look at the egregious hack i had to do to get this working ^^ | 20:19 |
johnsom | Ok | 20:19 |
rm_work | just look at the diff | 20:19 |
johnsom | This almost seems like a joke: self.app.app.application.application.application.root | 20:22 |
*** catintheroof has joined #openstack-lbaas | 20:23 | |
rm_work | yes | 20:25 |
rm_work | i am actually laughing | 20:25 |
rm_work | out loud | 20:25 |
rm_work | but in the sort of maniacal / scary way that means maybe i should be in a straitjacket | 20:26 |
johnsom | Ha | 20:26 |
rm_work | this falls directly into the category of *stupid* | 20:26 |
rm_work | i've wasted 5 hours on this | 20:27 |
rm_work | this isn't the correct solution | 20:27 |
rm_work | i'm about to not care and just leave it like this anyway | 20:27 |
rm_work | i can't find any code difference that should be relevant | 20:28 |
*** ducttape_ has quit IRC | 20:35 | |
*** ducttape_ has joined #openstack-lbaas | 20:38 | |
*** ducttape_ has quit IRC | 20:38 | |
*** catintheroof has quit IRC | 20:39 | |
*** csomerville has joined #openstack-lbaas | 20:48 | |
rm_work | If I had a table here, I would flip it | 20:50 |
rm_work | I officially give up | 20:50 |
rm_work | at least for today | 20:50 |
*** cody-somerville has quit IRC | 20:51 | |
rm_work | there's no reason these should behave differently | 20:51 |
rm_work | (v1 and v2) | 20:51 |
*** aojea has quit IRC | 20:57 | |
*** gcheresh_ has quit IRC | 20:57 | |
*** aojea has joined #openstack-lbaas | 20:57 | |
*** aojea has quit IRC | 21:02 | |
*** aojea has joined #openstack-lbaas | 21:02 | |
*** csomerville has quit IRC | 21:39 | |
*** ducttape_ has joined #openstack-lbaas | 21:43 | |
*** ducttape_ has quit IRC | 21:43 | |
rm_work | johnsom: fixed your issues. posting momentarily | 21:46 |
johnsom | Ha, ok. I'm testing.... | 21:46 |
rm_work | afk 20 after posting, got so wrapped up in this stupid mock BS that I forgot to eat | 21:47 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Introduce Octavia v2 API for pools https://review.openstack.org/405922 | 21:47 |
*** ducttape_ has joined #openstack-lbaas | 21:51 | |
*** armax has joined #openstack-lbaas | 21:52 | |
*** amotoki has joined #openstack-lbaas | 22:02 | |
*** chlong has quit IRC | 22:03 | |
*** armax has quit IRC | 22:04 | |
*** amotoki has quit IRC | 22:12 | |
*** aojea has quit IRC | 22:13 | |
*** aojea has joined #openstack-lbaas | 22:13 | |
johnsom | Sigh | 22:14 |
johnsom | https://www.irccloud.com/pastebin/4zDSsVnG/ | 22:14 |
johnsom | Bombs out when you use a listener_id instead of a load balancer ID | 22:15 |
*** ducttape_ has quit IRC | 22:15 | |
*** aojea has quit IRC | 22:18 | |
*** ducttape_ has joined #openstack-lbaas | 22:19 | |
rm_work | err | 22:20 |
rm_work | that might be my fault, one sec | 22:20 |
johnsom | I posted some comments. On post we are assuming a loadbalancer_id | 22:21 |
rm_work | ah no but simple | 22:21 |
rm_work | yeah | 22:21 |
*** fnaval has quit IRC | 22:25 | |
rm_work | ummm so, should a Pool GET return the pool including health monitors? | 22:37 |
rm_work | right now it's set up to not return "children" | 22:37 |
rm_work | so no members or health monitors | 22:38 |
rm_work | but i'd point out that those can be fixed later because they don't exist yet in the API :P | 22:38 |
rm_work | (they're subsequent patches) | 22:38 |
johnsom | I think they are just IDs that are returned | 22:38 |
rm_work | hmm | 22:39 |
rm_work | well, the v2 api can't create HMs or members at this point | 22:39 |
rm_work | but i guess i can try to emulate it | 22:39 |
rm_work | but IMO it should be fixed in the next patches | 22:39 |
johnsom | That is fine, I'm just marking things that are coming back different. In this case, when there is no content it's returning differently.... I think on listener we fixed it in the patch | 22:40 |
rm_work | yeah so i fixed the listeners thing just now | 22:41 |
rm_work | easy peasy | 22:41 |
johnsom | Ok, I put up a few more. | 22:41 |
rm_work | k | 22:41 |
johnsom | One more test and I'm done | 22:42 |
rm_work | "The content returned from the PUT call has provisioning_status as "ACTIVE" instead of PENDING_UPDATE." | 22:43 |
rm_work | Do you mean... it's supposed to be "PENDING_UPDATE" and it is returning "PENDING_CREATE"? :P | 22:43 |
johnsom | I got prov status ACTIVE and old data in the response from a PUT call. | 22:44 |
rm_work | huh. | 22:44 |
rm_work | functional test is showing "PENDING_CREATE" which is ... also wrong | 22:45 |
rm_work | but totally different, lol | 22:45 |
johnsom | I would expect that to be PENDING_UPDATE as I changed the session persistence type which is a handler round trip | 22:45 |
rm_work | OH NO i see | 22:45 |
rm_work | ok | 22:45 |
rm_work | yeah | 22:45 |
*** armax has joined #openstack-lbaas | 22:46 | |
rm_work | wait hold up | 22:48 |
rm_work | so wasn't there a thing | 22:48 |
rm_work | where we don't update the DB on the API side | 22:48 |
rm_work | ? | 22:48 |
rm_work | johnsom: ^^ | 22:49 |
johnsom | Yes, but we should be giving the user back a PENDING_UPDATE status | 22:49 |
rm_work | it doesn't look like any of v1 or the prior v2 controllers do updates for the PUT | 22:49 |
rm_work | ah, err | 22:49 |
rm_work | ok | 22:49 |
rm_work | so literally just override the one | 22:49 |
johnsom | Really? I am pretty sure it does, it just may not be returning that response | 22:49 |
rm_work | I don't think Listener does it | 22:49 |
rm_work | or LB | 22:49 |
johnsom | Isn't pool.py:73 setting it? | 22:51 |
johnsom | It's just we are pulling from the DB in a different session.... | 22:51 |
rm_work | err | 22:52 |
johnsom | Actually, it is the same session.... hmmm | 22:52 |
johnsom | We either need to lie to the user and give them the updated values, or return with PENDING_UPDATE for provisioning status... | 22:53 |
johnsom | Ah, that only sets the LB and listener into PENDING_UPDATE... That is why | 22:54 |
johnsom | Which is our DB locking | 22:54 |
rm_work | yeah | 22:54 |
johnsom | So, yeah, I think we need to hack the response and give back PENDING_UPDATE... | 22:54 |
rm_work | I can do POOL too | 22:54 |
rm_work | I think we can update the DB right after the lock | 22:56 |
rm_work | should be fine | 22:56 |
johnsom | Yes | 22:56 |
rm_work | since it IS locked | 22:56 |
johnsom | I just checked that we DO set it back to ACTIVE at the end of the flow as well | 22:56 |
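A sketch of the PUT fix being agreed on here (illustrative names, not the exact Octavia code): once the LB and listener rows are locked into PENDING_UPDATE, also flip the pool itself to PENDING_UPDATE before handing off to the handler, so the response reflects the in-flight change; the worker flow then returns everything to ACTIVE when it finishes.

```python
PENDING_UPDATE = 'PENDING_UPDATE'


def start_pool_update(pool_repo, session, pool_id, lock_parents):
    # lock_parents() stands in for the existing DB-locking step that puts the
    # LB and its listeners into PENDING_UPDATE.
    lock_parents(session)
    # With the parents locked, it is safe to mark the pool as in-flight too,
    # so the API response shows PENDING_UPDATE instead of stale ACTIVE data.
    pool_repo.update(session, pool_id, provisioning_status=PENDING_UPDATE)
    return pool_repo.get(session, id=pool_id)
```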
johnsom | Ok, that is all I have on pools. I've run through the tests I wanted to check | 22:58 |
rm_work | k 1 sec | 22:59 |
rm_work | hmm did a get_all work for you? | 23:10 |
rm_work | it's misbehaving for me | 23:10 |
rm_work | oh nm | 23:10 |
johnsom | Yes, it worked | 23:10 |
rm_work | oh nm the nm, weird, still having problems i think | 23:11 |
*** ducttape_ has quit IRC | 23:15 | |
johnsom | Taking a break for a few, will be back in 30 | 23:15 |
*** ducttape_ has joined #openstack-lbaas | 23:19 | |
rm_work | kk | 23:33 |
*** ducttape_ has quit IRC | 23:34 | |
johnsom | Back | 23:35 |
rm_work | k | 23:50 |
rm_work | almost done | 23:51 |
rm_work | had some unexpected fallout from changing the delete status | 23:51 |
rm_work | ok good to go | 23:51 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Introduce Octavia v2 API for pools https://review.openstack.org/405922 | 23:51 |
rm_work | formally requesting the session_persistence update issue be filed as a bug and fixed later :P | 23:51 |
johnsom | Ok, I will take a look | 23:52 |
rm_work | otherwise i think it's solid | 23:52 |
rm_work | besides the ridiculous stupid mock issue | 23:52 |