Monday, 2016-02-22

*** ducttape_ has joined #openstack-lbaas00:10
*** outofmemory is now known as reedip00:11
*** johnsom_ has quit IRC00:20
*** ducttape_ has quit IRC00:23
*** PK has quit IRC00:34
*** amotoki has joined #openstack-lbaas00:54
*** ducttape_ has joined #openstack-lbaas01:04
*** bochi-michael has joined #openstack-lbaas01:33
*** fnaval_ has joined #openstack-lbaas01:46
*** paco20151113 has joined #openstack-lbaas01:46
*** fnaval has quit IRC01:49
*** yamamoto has joined #openstack-lbaas01:52
*** ducttape_ has quit IRC01:56
*** ducttape_ has joined #openstack-lbaas01:57
*** ducttape_ has quit IRC02:05
*** yamamoto has quit IRC02:14
*** minwang2 has joined #openstack-lbaas02:22
*** bochi-michael has quit IRC02:37
*** bochi-michael has joined #openstack-lbaas02:38
*** minwang2 has quit IRC02:43
*** ducttape_ has joined #openstack-lbaas02:45
*** amotoki has quit IRC02:52
*** minwang2 has joined #openstack-lbaas02:53
*** yamamoto has joined #openstack-lbaas02:58
*** ducttape_ has quit IRC02:59
*** bochi-michael has quit IRC02:59
*** amotoki has joined #openstack-lbaas03:04
*** ducttape_ has joined #openstack-lbaas03:14
*** amotoki has quit IRC03:15
*** minwang2 has quit IRC03:18
*** yuanying has quit IRC03:20
<blogan> johnsom: regarding the egress security group stuff, pretty sure we didn't have vrrp implemented when I put that in  03:20
*** amotoki has joined #openstack-lbaas03:20
*** minwang2 has joined #openstack-lbaas03:25
*** amotoki_ has joined #openstack-lbaas03:25
*** amotoki has quit IRC03:29
*** amotoki_ has quit IRC03:35
*** kevo has joined #openstack-lbaas03:37
*** amotoki has joined #openstack-lbaas03:41
<openstackgerrit> Stephen Balukoff proposed openstack/neutron-lbaas: L7 capability extension implementation for lbaas v2  https://review.openstack.org/148232  03:42
*** amotoki has quit IRC03:44
*** ducttape_ has quit IRC03:45
*** Aish has joined #openstack-lbaas03:52
*** Aish has quit IRC03:52
*** minwang2 has quit IRC03:53
*** links has joined #openstack-lbaas03:57
*** amotoki has joined #openstack-lbaas04:00
*** allan_h has joined #openstack-lbaas04:03
*** yuanying has joined #openstack-lbaas04:10
*** PK has joined #openstack-lbaas04:13
<sbalukoff> By the way, with the above patch, and Evgeny's python-neutronclient patch, in theory we should be able to do end-to-end CLI testing of L7 at this point. (Evgeny's CLI patch still has a couple minor problems, but these shouldn't get in the way of testing.)  04:19
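For anyone following along later, the end-to-end CLI flow being described would look roughly like the sketch below, run against a devstack with the patches above applied. Resource names, the subnet, and the member address are made up for illustration, and the lbaas-l7policy-create flags come from the client patch that was still in review, so they may differ from what eventually merged.

    neutron lbaas-loadbalancer-create --name lb1 private-subnet
    neutron lbaas-listener-create --name listener1 --loadbalancer lb1 --protocol HTTP --protocol-port 80
    neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
    neutron lbaas-member-create --subnet private-subnet --address 10.0.0.3 --protocol-port 80 pool1
    # L7 command syntax taken from the in-review python-neutronclient patch; flags may differ
    neutron lbaas-l7policy-create --name policy1 --action REJECT --listener listener1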
*** alhu_ has joined #openstack-lbaas04:26
*** allan_h has quit IRC04:28
*** alhu__ has joined #openstack-lbaas04:39
*** alhu_ has quit IRC04:39
*** PK has quit IRC04:44
*** PK has joined #openstack-lbaas05:10
*** minwang2 has joined #openstack-lbaas05:24
<rm_work> awesome  05:33
<rm_work> sbalukoff: so i just need ....  05:34
<rm_work> https://review.openstack.org/148232 for neutron-lbaas and your full chain for octavia and ... which python-neutronclient patch?  05:34
<rm_work> ok building against those  05:36
<rm_work> need to figure out what python-neutronclient patch to install, but have to do that at the end manually anyway  05:37
<sbalukoff> rm_work: Let me get that patch for you...  05:50
<sbalukoff> https://review.openstack.org/#/c/217276/  05:50
<sbalukoff> I'm working on updating that one as well, though-- it's not presently dependent on my shared pools patch.  05:51
<sbalukoff> And it's been a few days since Evgeny touched it. I'll probably ask him whether he wants me to update it before I upload a new patch set.  05:51
<sbalukoff> (I'm also rebuilding a stack here with all that stuff.)  05:52
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add L7 documentation  https://review.openstack.org/278830  06:06
<sbalukoff> The above fixes a minor nit that johnsom pointed out.  06:07
<sbalukoff> (There are no code changes associated with it, so please carry on with your testing, eh. ;) )  06:07
*** minwang2 has quit IRC06:08
*** minwang2 has joined #openstack-lbaas06:10
<openstackgerrit> Stephen Balukoff proposed openstack/octavia: Add L7 documentation  https://review.openstack.org/278830  06:10
*** minwang2 has quit IRC06:16
*** minwang2 has joined #openstack-lbaas06:16
*** alhu__ has quit IRC06:19
*** bana_k has joined #openstack-lbaas06:19
*** Guest61736 is now known as mariusv06:27
*** mariusv has joined #openstack-lbaas06:27
*** PK has quit IRC06:46
*** minwang2 has quit IRC07:15
*** bana_k has quit IRC07:16
*** kobis has joined #openstack-lbaas07:21
*** chlong_ has quit IRC07:30
*** evgenyf has joined #openstack-lbaas07:47
*** nmagnezi has joined #openstack-lbaas07:50
*** numans has joined #openstack-lbaas07:50
*** kobis has quit IRC07:50
*** kobis has joined #openstack-lbaas07:53
<paco20151113> Hi, I installed octavia with devstack all-in-one, services all up and running. Also I can check the amphora REST API with curl  08:03
<paco20151113> but the o-cw log shows: WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.  08:03
<paco20151113> the lbaas-loadbalancer-list result is "PENDING_CREATE"  08:04
<paco20151113> curl -k --cert /etc/octavia/certs/client.pem https://192.168.0.8:9443/0.5/info  08:04
    {
      "api_version": "0.5",
      "haproxy_version": "1.5.14-1ubuntu0.15.10.1~ubuntu14.04.1",
      "hostname": "amphora-dab7be5b-5c82-429d-b6eb-f0e556404fd9"
    }
<paco20151113> anyone know how to solve this issue?  08:04
*** pcaruana has joined #openstack-lbaas08:05
<paco20151113> I successfully installed octavia with stable/liberty, which sets a static route with gw br-ex to access the lb-mgmt-network.  08:06
<paco20151113> I checked the commit log. It seems the devstack plugin changed with this patch: https://github.com/openstack/octavia/commit/8e242323719d8ed0016ff8296fe46a7feab0745c  08:07
<sbalukoff> paco20151113: It takes a while for the amphora to boot.  08:08
<sbalukoff> paco20151113: Can you check the status of the amphora in 'nova list'?  08:08
<paco20151113> don't know why the amphorae.drivers rest_api_driver cannot access the instance, even though I can access it with curl from the client.  08:08
<paco20151113> stack@48-28:~/neutron-lbaas$ nova list  08:09
    +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
    | ID                                   | Name                                         | Status | Task State | Power State | Networks                                                                         |
    +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
    | dce363f4-8d4f-44b3-bd38-d2223321f304 | amphora-2a945bf3-0481-4bfa-96d4-0fb993c340ac | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.7; private=fdf5:fc3f:1d33:0:f816:3eff:feca:cfd6, 10.0.0.7  |
    | 8ffa5f4e-1ce2-4b3f-aec3-9b0e57c50fb0 | amphora-3219faf4-1af7-4e68-9169-4408a9093621 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.9; private=fdf5:fc3f:1d33:0:f816:3eff:febb:8035, 10.0.0.10 |
    | b421a2d8-4e14-4262-ae6c-44ed79b9c282 | amphora-44b7450d-f0bd-41b0-9790-fbdc9ebe2a2a | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.5; private=fdf5:fc3f:1d33:0:f816:3eff:feea:7846, 10.0.0.4  |
    | 36e338c2-cbaa-47ca-a18f-3161b29fddc8 | amphora-dab7be5b-5c82-429d-b6eb-f0e556404fd9 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.8; private=fdf5:fc3f:1d33:0:f816:3eff:fe26:16f5, 10.0.0.11 |
    | a392af2f-7986-475e-b60a-9fe1ebfa8b67 | amphora-f360fce4-8b6d-4454-95f7-bf2d3c8da6e8 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.6; private=fdf5:fc3f:1d33:0:f816:3eff:fecc:9737, 10.0.0.8  |
    | e39e512d-3a02-4b1e-8aeb-994430bfd8fd | amphora-f7601f19-62ce-4568-9c95-f60ad90aaf94 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.4; private=fdf5:fc3f:1d33:0:f816:3eff:feb7:fa0, 10.0.0.5   |
    +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
<sbalukoff> Wow, you've got quite a few of them  08:09
<paco20151113> sorry, it's a little bit long. But I am sure the instances are all up and running well.  08:09
<paco20151113> since I can access the REST API by curl.  08:09
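The warning quoted above comes from the controller worker (o-cw), so the check that matters is whether that process can reach the amphora's management address, not whether some other shell can. A rough sanity check, reusing the address and curl command from the paste above (cert path assumed to match the devstack defaults):

    # run from the host where octavia-worker (o-cw) runs; in an all-in-one devstack that is the same box
    ping -c 3 192.168.0.8
    curl -k --cert /etc/octavia/certs/client.pem https://192.168.0.8:9443/0.5/info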
<sbalukoff> Yeah.  08:09
<sbalukoff> This is on stable/liberty?  08:10
<paco20151113> no, it's master  08:10
<sbalukoff> Oh, ok!  08:10
<sbalukoff> So, there is a bug (just filed by me this evening) where the load balancer can sit in the pending_create state for 5 minutes or more.  08:11
<sbalukoff> Even after the amphora is up  08:11
<sbalukoff> how long did you let the controller worker try to bring the amphora all the way up?  08:11
<sbalukoff> This is the bug I'm talking about:  https://bugs.launchpad.net/octavia/+bug/1548212  08:12
<openstack> Launchpad bug 1548212 in octavia "Load balancer sits in PENDING_CREATE state much longer than necessary" [Undecided,New]  08:12
<sbalukoff> If that sounds similar to what you're seeing, you might subscribe to that bug. I was going to try to tackle it this week if nobody else does.  08:12
<paco20151113> amp_active_wait_sec = 20  08:13
<paco20151113> amp_active_retries = 100  08:13
<sbalukoff> paco20151113: Yep, it should notice the amphora is up much quicker than it is. For some reason those values are not being respected here...  08:13
<sbalukoff> (That's the nature of this bug.)  08:13
<paco20151113> is there any workaround for this?  08:14
<sbalukoff> For not, wait for 5 minutes for the controller worker to finish bringing the amphora up.  08:14
<sbalukoff> For now.  08:14
<sbalukoff> That is, if it's the same bug you are experiencing.  08:15
<sbalukoff> If the load balancer never leaves the PENDING_CREATE state (even after 10 or 15 minutes), then you might have encountered a different bug.  08:16
<sbalukoff> Also--  once the amphora is up, the rest of the workflow (create listener, create pool, create member, etc.) should all go really quickly-- a second or two at most to execute.  08:18
<sbalukoff> Er... once the load balancer is ACTIVE.  08:18
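While waiting out the delay described above, a simple way to watch the load balancer move from PENDING_CREATE to ACTIVE is to poll the show command; the load balancer name here is assumed:

    watch -n 10 "neutron lbaas-loadbalancer-show lb1 | grep provisioning_status"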
<paco20151113> maybe I encountered another bug, because >20 minutes later it moved to "ERROR".  08:18
<sbalukoff> Ok.  08:18
<sbalukoff> If you can gather up your logs and submit a bug report, we can look into it.  08:19
<paco20151113> ok, sure, will do. thanks for the help  08:19
<sbalukoff> No problem, eh.  08:19
<sbalukoff> Ok, I'mma go to bed. Have a good night, y'all!  08:20
<paco20151113> bye.  08:20
<paco20151113> have a good night.  08:21
*** jschwarz has joined #openstack-lbaas08:53
<openstackgerrit> Chaozhe Chen(ccz) proposed openstack/octavia: Trivial: cleanup unused conf and log variables  https://review.openstack.org/283007  09:03
*** amotoki has quit IRC09:06
*** amotoki has joined #openstack-lbaas09:14
*** amotoki has quit IRC09:35
*** chlong_ has joined #openstack-lbaas10:06
*** paco20151113 has quit IRC10:10
*** yamamoto has quit IRC10:28
*** yamamoto has joined #openstack-lbaas11:10
*** yamamoto has quit IRC11:14
*** zigo has quit IRC11:40
*** zigo has joined #openstack-lbaas11:41
*** links has quit IRC11:53
*** links has joined #openstack-lbaas12:07
*** chlong_ has quit IRC12:17
*** yamamoto has joined #openstack-lbaas12:41
*** nmagnezi has quit IRC12:42
*** rtheis has joined #openstack-lbaas12:44
*** nmagnezi has joined #openstack-lbaas12:57
*** links has quit IRC13:08
*** amotoki has joined #openstack-lbaas13:08
*** yamamoto has quit IRC13:10
*** ducttape_ has joined #openstack-lbaas13:12
*** yamamoto has joined #openstack-lbaas13:16
*** amotoki has quit IRC13:18
*** links has joined #openstack-lbaas13:22
*** numans has quit IRC13:23
*** ducttape_ has quit IRC13:24
*** yamamoto has quit IRC13:33
*** links has quit IRC13:50
*** numans has joined #openstack-lbaas13:55
*** neelashah has joined #openstack-lbaas13:56
*** amotoki has joined #openstack-lbaas13:58
*** chlong_ has joined #openstack-lbaas14:06
*** localloop127 has joined #openstack-lbaas14:07
*** openstackgerrit has quit IRC14:17
*** openstackgerrit has joined #openstack-lbaas14:18
*** piet has joined #openstack-lbaas14:27
*** doug-fish has joined #openstack-lbaas14:39
*** nmagnezi has quit IRC14:53
*** ducttape_ has joined #openstack-lbaas15:00
*** ducttape_ has quit IRC15:00
*** ducttape_ has joined #openstack-lbaas15:00
*** nmagnezi has joined #openstack-lbaas15:03
*** woodster_ has joined #openstack-lbaas15:34
*** localloop127 has quit IRC15:41
*** PK has joined #openstack-lbaas15:52
*** numans has quit IRC15:58
*** ajmiller has joined #openstack-lbaas15:59
*** PK has quit IRC16:03
*** TrevorV|Home has joined #openstack-lbaas16:05
*** Oku_OS has joined #openstack-lbaas16:08
*** armax has joined #openstack-lbaas16:10
<johnsom> sbalukoff When you are on again, I would like to talk about your bug: https://bugs.launchpad.net/octavia/+bug/1548212  16:18
<openstack> Launchpad bug 1548212 in octavia "Load balancer sits in PENDING_CREATE state much longer than necessary" [High,New]  16:18
*** sbalukoff has quit IRC16:21
<xgerman> isn’t that configurable?  16:23
*** pcaruana has quit IRC16:27
*** nmagnezi has quit IRC16:29
*** fnaval_ has quit IRC16:30
*** PK has joined #openstack-lbaas16:34
<johnsom> Yes, and the defaults are pretty short, so I'm looking for more information.  16:39
<johnsom> FYI, the neutron client "IndexError" issue is fixed, pending release, and tracked here: https://bugs.launchpad.net/python-cliff/+bug/1539770  16:40
<openstack> Launchpad bug 1539770 in cliff "Empty set causing out of range error" [Undecided,Fix released] - Assigned to Doug Hellmann (doug-hellmann)  16:40
* johnsom wears his oslo liaison hat for a minute  16:40
*** localloop127 has joined #openstack-lbaas16:44
*** PK has quit IRC16:45
*** amotoki has quit IRC16:45
*** Purandar has joined #openstack-lbaas16:46
*** fnaval has joined #openstack-lbaas16:49
*** jwarendt has joined #openstack-lbaas16:54
*** TrevorV|Home has quit IRC16:54
*** jschwarz has quit IRC16:56
*** jwarendt has quit IRC16:58
*** kobis has quit IRC16:58
*** jwarendt has joined #openstack-lbaas17:01
*** jwarendt has quit IRC17:05
*** jwarendt has joined #openstack-lbaas17:06
*** minwang2 has joined #openstack-lbaas17:13
*** piet has quit IRC17:13
<openstackgerrit> Merged openstack/neutron-lbaas: Updated from global requirements  https://review.openstack.org/282752  17:23
*** alhu__ has joined #openstack-lbaas17:33
*** alhu__ has quit IRC17:33
*** kevo has quit IRC17:36
*** Purandar has quit IRC17:36
*** allan_h has joined #openstack-lbaas17:40
*** Aish has joined #openstack-lbaas17:50
<rm_work> sbalukoff why are you not here T_T  17:57
*** ducttape_ has quit IRC17:57
<rm_work> need the python-neutronclient patch rebased on top of whatever fix happened for the index out of range bug  17:57
<rm_work> or ...  17:58
<rm_work> maybe i can cherry-pick them  17:58
*** piet has joined #openstack-lbaas18:05
*** ducttape_ has joined #openstack-lbaas18:08
<rm_work> oh it's cliff <_<  18:14
<johnsom> Yeah  18:14
<johnsom> cliff 2.0 fixes it  18:15
<rm_work> eh whatever i guess it doesn't impact my testing  18:15
<johnsom> Or, go back in time to 1.15.0  18:15
<rm_work> ah yeah just did -U cliff  18:16
<rm_work> and it seems good  18:16
<rm_work> does requirements cap cliff before 2.0?  18:16
<rm_work> otherwise i thought i should have gotten this already  18:16
<johnsom> Yeah, it's still in the approval process to release  18:16
<rm_work> ah i stacked last night, if they released this morning  18:16
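In other words, until the cliff release containing the fix reaches the installed environment, the workaround is either to upgrade cliff by hand (as rm_work did) or to pin back to the last known-good release; the version numbers below are the ones mentioned in the conversation:

    pip install -U cliff            # pick up the release with the fix, once published
    # or roll back to the release from before the regression:
    pip install 'cliff==1.15.0'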
<rm_work> ok so per sbalukoff the l7 stuff in neutron-lbaas isn't quite ready so i can't just use the client yet for full testing?  18:17
<johnsom> I thought he said last night it was good enough for an end-to-end  18:17
<rm_work> backwards-compatible  18:17
<rm_work> but not L7 i think  18:17
<rm_work> something about the octavia driver  18:18
<johnsom> To quote: <sbalukoff> By the way, with the above patch, and Evgeny's python-neutronclient patch, in theory we should be able to do end-to-end CLI testing of L7 at this point. (Evgeny's CLI patch still has a couple minor problems, but these shouldn't get in the way of testing.)  18:18
<ajmiller> rm_work johnsom I just posted a -1 review for the shared pools patch.  It breaks the dashboard panels.  18:18
<johnsom> https://review.openstack.org/148232  18:18
<johnsom> ajmiller Ok, good to know!  18:19
<ajmiller> The default pool ID for the listener is coming back empty.  That is how the UI learns about pool/members/HMs  18:19
<rm_work> ah ok johnsom cool  18:20
<johnsom> I will load it up this morning after the meetings die down  18:20
<rm_work> i've got it now  18:21
<rm_work> going through my baseline  18:21
*** kevo has joined #openstack-lbaas18:24
*** Purandar has joined #openstack-lbaas18:33
<rm_work> BTW remember we still need this: https://review.openstack.org/#/c/276802/  18:36
*** piet has quit IRC18:36
<rm_work> because we aren't getting requirements updates/checks  18:36
<rm_work> verifying if we need to do anything else on our side to prepare for that  18:36
<rm_work> hmm, not sure how to create multiple pools for one LB with neutron-client  18:40
<rm_work> no option that I see for setting it on a LB not a listener, and doing a second pool on a listener comes back with an error about default_pool already set  18:40
*** Aish has quit IRC18:41
<rm_work> oh wait was sbalukoff going to be off today?  18:45
*** neelashah has quit IRC18:46
<xgerman> rm_work it seems like every now and then he shows up at 4am… maybe he moved to Australia?  18:53
*** Aish has joined #openstack-lbaas18:53
<ajmiller> hmmm johnsom, rm_work, I redid my UI test with the shared pools patch, now I'm seeing the default pool..  18:57
<rm_work> so ... it's working then? :P  18:57
<rm_work> that'd be good news :)  18:58
<ajmiller> It seems to be...  But I need to double-check what I've done.  18:58
*** neelashah has joined #openstack-lbaas19:07
*** evgenyf has quit IRC19:10
<neelashah> johnsom rm_work - has this bug been reported before? https://bugs.launchpad.net/neutron/+bug/1548386 we are running into this as part of the dashboard work for Mitaka  19:14
<openstack> Launchpad bug 1548386 in neutron "LBaaS v2: Unable to edit load balancer that has a listener" [Undecided,New]  19:14
*** sbalukoff has joined #openstack-lbaas19:15
<sbalukoff> Hey ajmiller: Are you around?  19:15
<ajmiller> sbalukoff: I'm around but in a call right now...  19:15
<ajmiller> I'll be slow to respond till that's over.  19:16
<johnsom> neelashah I have not seen that before.   However, I don't use the namespace driver much.  19:16
<sbalukoff> Ok, just looking to get more information on the bug you found, specifically which devstack services to enable in order to reproduce the GUI bug you found.  19:16
<rm_work> ah sbalukoff is back  19:16
<rm_work> [12:40:04] <rm_work> hmm, not sure how to create multiple pools for one LB with neutron-client  19:16
<rm_work> [12:40:30] <rm_work> no option that I see for setting it on a LB not a listener, and doing a second pool on a listener comes back with an error about default_pool already set  19:16
<rm_work> sbalukoff: ^^  19:16
<sbalukoff> (Since, from what I can tell, the API is still returning the data it should.)  19:16
<ajmiller> sbalukoff Well, I just set it up in what I thought was exactly the same way, and it seems to be working  19:16
<sbalukoff> rm_work: Right. You need L7 in order to do that  19:17
<rm_work> err  19:17
* ajmiller is trying to figure out just what he did.  19:17
<rm_work> but i thought you created pool2 first  19:17
<rm_work> then pointed l7 at it  19:17
<sbalukoff> rm_work: Oh!  19:17
<sbalukoff> rm_work: Are you using the CLI that has shared-pool updates?  19:17
<rm_work> yes  19:17
<rm_work> the patch you linked earlier  19:17
<rm_work> right?  19:17
<rm_work> i see l7 stuff in the tab-complete for commands under lbaas-  19:17
<sbalukoff> rm_work: I...  think so?  I dunno. Evgeny has a patch that adds L7 support but his CLI patch is not dependent on my CLI patch, so it's possible you don't have the shared pools CLI patch..  19:18
<sbalukoff> Let me see...  19:18
<sbalukoff> rm_work: This is the CLI shared pools patch: https://review.openstack.org/#/c/218563/  19:19
<sbalukoff> (I've asked Evgeny if I can make his patch dependent on mine, but I have not done that yet as I didn't want to step on his toes.)  19:19
<sbalukoff> rm_work: In any case, with that patch (or rather the two relevant files in it) you should be able to create a pool that's not associated with a listener by specifying the --loadbalancer on the pool create command.  19:20
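Concretely, the difference is just which parent object the pool is attached to. The commands below follow the standard lbaas v2 pool-create syntax plus the --loadbalancer option added by the shared-pools CLI patch under review; resource names are assumed:

    # pool attached to a listener (pre-shared-pools behaviour)
    neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --protocol HTTP --listener listener1
    # pool attached directly to the load balancer (needs the shared-pools CLI patch)
    neutron lbaas-pool-create --name pool2 --lb-algorithm ROUND_ROBIN --protocol HTTP --loadbalancer lb1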
<madhu_ak> fnaval: there is a response from Andrea about tempest-plugin ML  19:20
<rm_work> aaah that is what you meant  19:20
<rm_work> ok I think that's the issue, i need both  19:21
<sbalukoff> ajmiller: Thanks for looking into this more closely, eh!  19:21
<rm_work> awesome, grabbed that and now it works  19:22
<rm_work> thanks sbalukoff  19:22
<sbalukoff> Yay!  19:22
<rm_work> in fact i just hit up-arrow and re-ran my *guess* at the right command, and it worked :)  19:22
<sbalukoff> So it's intuitive to boot, eh? ;)  19:22
<rm_work> yep  19:23
<rm_work> hmmmm  19:25
*** neelashah1 has joined #openstack-lbaas19:25
<rm_work> neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool pool2 --listener listener1  19:25
<rm_work> "Redirect pool id is missing for L7 Policy with pool redirect action."  19:25
*** neelashah has quit IRC19:25
<rm_work> tried with the ID directly too  19:25
<rm_work> either the help text is wrong (it says to use "--redirect-pool") or else there's a bug  19:25
<rm_work> sbalukoff: ^^  19:25
<sbalukoff> rm_work: I suspect there's a bug. I have not delved too deeply into Evgeny's CLI patch, but I know there are a few problems with it.  19:26
<sbalukoff> Bah...  Ok, I'm going to work on patching it--  I hope he doesn't mind me stepping on his toes in the interests of getting it working in time for the deadline. :P  19:27
<rm_work> back to CLI testing  19:28
<rm_work> johnsom: how far have you gotten with L7 testing?  19:28
<rm_work> I've been hammering at it for 3 days or so now, but i still am not sure how much i've actually managed to test, manual testing is insane for this product T_T  19:28
<johnsom> rm_work I have finished octavia regression testing without neutron-lbaas/neutron client.  19:28
<rm_work> I'm really close to rage+2  19:28
<sbalukoff> rm_work: Do legacy things work with the CLI stuff you've tested thus far? (ie. stuff you could do prior to shared-pools and l7?)  19:29
<rm_work> sbalukoff: so far as i've tested, though with the CLI i haven't really done much besides basic ops  19:29
<johnsom> rm_work That looks good.  I'm going to do end-to-end today.  If that goes well, I would be in favor of merge tomorrow/late tonight  19:29
<sbalukoff> rm_work: I've been trying hard not to rebase on all the L7 stuff!  19:29
*** ducttape_ has quit IRC19:29
<rm_work> heh  19:29
<rm_work> johnsom: yeah sounds good to me  19:29
<sbalukoff> Yay!  19:29
<rm_work> I might do a final walkthrough of the code and then +2 as I go  19:30
<sbalukoff> I am so looking forward to entering bugfix mode instead of "keep the house of cards standing" mode.  19:30
<sbalukoff> :)  19:30
<rm_work> yes  19:30
<rm_work> I think that's better for everyone's sanity  19:30
<rm_work> even if something breaks... we'll find out and fix  19:30
<rm_work> and it'll be less of a nightmare than retesting everything constantly  19:30
<johnsom> Ok.  Would you all mind asking the other cores to hold +A until I get through tests today?  19:30
<sbalukoff> Well, at this point, I know I conflict with what minwang is doing in her big patch under consideration, and I'd like to not hold up anyone else's work.  19:31
<rm_work> johnsom: I doubt anyone has the balls to +A this without asking in channel first :P  19:31
<sbalukoff> That and Trevor  19:31
<johnsom> Ok, cool.  19:31
<rm_work> well trevor rebased on top of you  19:31
<rm_work> so he should be fine  19:31
<johnsom> I just want one good end-to-end test day, then I think we are good for merge and fix as necessary  19:31
<sbalukoff> johnsom: I'll get going immediately on fixing Evgeny's python-neutronclient L7 CLI patch.  19:32
<johnsom> We do have a bunch of patches in +2 state.  I'm flexible on order  19:32
<rm_work> yeah i'm going to throw a couple more test cases at this and go into full syntax-review mode  19:32
<rm_work> i think we focus on L7 and merge anything else that still works afterwards :P  19:33
<johnsom> Yeah, I think that works.  Heck, half of them are sbalukoff's anyway  19:33
<rm_work> someone's been a busy bee  19:34
<johnsom> +1  19:34
<johnsom> It's great  19:34
<sbalukoff> Haha!  19:34
<rm_work> actually we can start +A-ing from the top down  19:34
<johnsom> sbalukoff I wanted a bit more info on this: https://bugs.launchpad.net/octavia/+bug/1548212  19:34
<openstack> Launchpad bug 1548212 in octavia "Load balancer sits in PENDING_CREATE state much longer than necessary" [High,New]  19:34
<rm_work> like docs  19:34
<johnsom> rm_work Already ahead of you on that one  19:34
<sbalukoff> My intent is to push IBM as hard as I fscking can to start using Octavia big-time with Mitaka. So, I *need* to make sure that Octavia doesn't suck when I start the internal pressure.  19:35
<johnsom> sbalukoff Do you have time to chat about it now or later?  19:35
<sbalukoff> johnsom: Yes  19:35
<johnsom> Great.  Was it looping on "Could not connect to instance" or just hung after one?  19:35
<sbalukoff> johnsom: I have a lot of ideas there, and I suspect that the unreasonable REST timeouts have a lot to do with it:  I know you're working on that fix and figured if you were stuck you'd let people know.  19:36
<sbalukoff> johnsom: It did that a couple times then hung.  19:36
<rm_work> sbalukoff: i am a bit sad the sequence for l7 rules starts with 1 <_<  19:36
<sbalukoff> johnsom: which is why I suspect the REST timeout thing.  19:36
<sbalukoff> rm_work: Yeah, I fought with Evgeny a while over it.  19:37
<rm_work> was he for 1 or 0  19:37
<johnsom> sbalukoff Ok.  Yeah, if it didn't keep looping, that is bad.  If you had the debug log for that it would be handy on the bug.  19:37
<sbalukoff> rm_work: I start counting with zero as well-- but at that point it was simpler just for Octavia to mirror what Evgeny had written Neutron-LBaaS to do.  19:37
<sbalukoff> I was arguing for 0  19:37
<rm_work> i guarantee somewhere someone will have to write code that offsets for 1  19:37
<rm_work> probably a lot of people >_>  19:37
<rm_work> ah well  19:38
<sbalukoff> johnsom: I don't have a debug log, but could probably reproduce this problem within a few days. I've seen it often enough.  19:38
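For anyone wanting to attach that log to the bug, in a devstack all-in-one the worker output normally lands in a screen log; the path below is an assumption based on the default devstack log location, so adjust it to your setup:

    grep -i -C 2 "could not connect to instance" /opt/stack/logs/screen-o-cw.log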
<johnsom> sbalukoff Sadly, I assigned that to myself, but have been focusing on reviews/testing and haven't started.  If you have cycles, feel free to take it  19:38
<ajmiller> sbalukoff:  I isolated the broken LB creation workflow, and I don't think it is related to your code.  I think it is a problem in the dashboard panels when creating HTTPS LBs without valid certs.  19:38
<ajmiller> One more test and I'll know for sure.  19:38
<sbalukoff> ajmiller: Oh, ok!  19:39
<sbalukoff> johnsom: Ok--  I think I might be juggling some of this CLI update stuff today, but if you get to it before me, I won't be pissed about duplicating effort to get that one fixed.  19:40
<johnsom> Ok  19:40
<sbalukoff> Especially if it does lead to clearing up the intermittent scenario test failure bug.  19:40
*** ducttape_ has joined #openstack-lbaas19:44
*** yamamoto has joined #openstack-lbaas19:48
<sbalukoff> Ok, gonna grab lunch before I get too buried in this CLI code.  19:51
*** localloop127 has quit IRC19:54
*** localloop127 has joined #openstack-lbaas20:00
<johnsom> Yeah, just grabbed a sandwich myself.  20:07
<fnaval> madhu_ak: yep, I saw the response; mtreinish had replied to me via IRC on that  20:12
<fnaval> basically, use tempest_lib and tempest together; it's totally better than copying the code in tree  20:13
<fnaval> even though tempest might not be stable, it's still better  20:13
<madhu_ak> fnaval, sure. In that case, do we need to propose any files in tempest-lib, so it can be stable?  20:13
<fnaval> madhu_ak: so there is a QA midcycle meet up happening this week.  I've spoken to Rackspace folks who are going there about that.  Hopefully, they'll have a plan of attack.  If not, then we'll have to start attending their meetings and find out what they need.  20:14
<fnaval> Either way, I might just start attending their meetings and see where they're at and see what help they need.  20:15
<fnaval> I think that's far better than just throwing code out there.  20:15
<madhu_ak> sounds goo fnaval  20:16
<madhu_ak> good*  20:16
<fnaval> madhu_ak: if you're able to attend, here's some info: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting  20:16
<johnsom> sbalukoff I have the REST API timeout bug.  Give me ~30  20:17
<fnaval> madhu_ak: according to this etherpad, they have some migration work that's prioritized;  I suppose we can help with reviews https://etherpad.openstack.org/p/mitaka-qa-priorities  20:17
<madhu_ak> sure fnaval. I shall attend it. if you have an outlook invite handy, can you forward it?  20:17
<fnaval> yep  20:17
<sbalukoff> johnsom: Sounds good!  20:18
<fnaval> well, I'm not sure of the timings yet - they'll be figuring that out this week.  so the week after next might be the restart of it  20:18
<madhu_ak> sure, fnaval.  20:19
<johnsom> I'm thinking a 60 second timeout on the REST API calls.  Comments?  20:19
<fnaval> madhu_ak: added you to the cal  20:19
<johnsom> (by default of course)  20:19
<madhu_ak> awesome. Thanks  20:20
*** itsuugo has quit IRC20:20
*** bharathm has quit IRC20:20
*** mugsie has quit IRC20:20
*** reedip has quit IRC20:20
*** mestery has quit IRC20:20
*** _laco has quit IRC20:20
*** dnovosel has quit IRC20:20
*** mestery has joined #openstack-lbaas20:20
*** mugsie has joined #openstack-lbaas20:20
*** dnovosel has joined #openstack-lbaas20:22
<fnaval> madhu_ak: they have a weird alternating schedule - one during 1700 UTC and 0900 UTC.  I plan to just attend the 1700 UTC.  20:22
*** itsuugo has joined #openstack-lbaas20:22
<madhu_ak> exactly. it's 3am  20:22
*** bharathm has joined #openstack-lbaas20:22
*** reedip has joined #openstack-lbaas20:23
*** piet has joined #openstack-lbaas20:24
*** _laco has joined #openstack-lbaas20:34
*** prabampm has quit IRC20:37
*** kevo has quit IRC20:37
*** jwarendt has quit IRC20:37
*** prabampm has joined #openstack-lbaas20:38
*** jwarendt has joined #openstack-lbaas20:38
*** crc32 has joined #openstack-lbaas20:44
*** Purandar has quit IRC20:44
*** doug-fish has quit IRC20:45
<johnsom> Hmmm, given we retry these connections in the driver, I think I am going to drop the default timeout down to 10 seconds.  That is more than enough for a connect/read timeout IMHO.  20:47
*** openstackgerrit has quit IRC20:47
*** openstackgerrit has joined #openstack-lbaas20:48
*** doug-fish has joined #openstack-lbaas20:48
<sbalukoff> johnsom: I was thinking 3 seconds is enough--  if the REST server stops sending data for that long something is wrong. I mean: Are we doing any synchronous operations with it?  20:49
<sbalukoff> (stops sending data without closing the connection, I mean.)  20:49
<sbalukoff> Though I think we should retry if that happens, eh.  20:50
<sbalukoff> (I had a peek at the code: We retry on connection timeout, but not on read timeout--  we should probably retry at least a few times on read timeout as well.)  20:51
<johnsom> We do retry.  Here are the docs for the setting: http://www.python-requests.org/en/latest/user/advanced/#timeouts  20:51
<johnsom> Yeah, I'm covering both connect and read (using the float option)  20:52
<sbalukoff> Oh, ok!  20:52
*** doug-fish has quit IRC20:53
<johnsom> How about 5 to split the diff  20:53
<johnsom> It's in the config, so you can tune it as fits your deployment  20:54
*** kevo has joined #openstack-lbaas20:54
<xgerman> k, let’s keep it brief  20:54
<rm_work> johnsom: seems reasonable  20:54
<sbalukoff> johnsom: Sounds good!  20:56
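As a shell-level analogy for the two timeouts being discussed (connection setup vs. waiting on a response), curl exposes a similar split; this points at the amphora API endpoint quoted earlier in the log, and the values are only an example:

    # --connect-timeout caps TCP/TLS connection setup; --max-time caps the whole request
    curl -k --cert /etc/octavia/certs/client.pem \
         --connect-timeout 5 --max-time 60 \
         https://192.168.0.8:9443/0.5/info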
<openstackgerrit> Michael Johnson proposed openstack/octavia: Add a request timeout to the REST API driver  https://review.openstack.org/283255  20:59
<sbalukoff> Nice!  20:59
<johnsom> There we go  21:00
*** Kiall has quit IRC21:00
*** Kiall has joined #openstack-lbaas21:00
*** Kiall has quit IRC21:00
*** Kiall has joined #openstack-lbaas21:01
<rm_work> i'm working my way down the L7 reviews now  21:02
<sbalukoff> Great!  21:03
<rm_work> sbalukoff: you had a lot of TODOs in the last one i looked at... flows, i think  21:03
<rm_work> i assume that's stuff that won't be addressed immediately  21:03
<rm_work> around status tree updates  21:03
<sbalukoff> rm_work: Yes, they are similar to the other todos that johnsom has for reverts in other flows.  21:03
<rm_work> yeah, seemed that way  21:04
<sbalukoff> We do need to revisit the revert strategy on a lot of flows.  21:04
<johnsom> If that is the one I already +2'd it is consistent with the existing need to update the revert paths  21:04
<rm_work> yeah  21:05
<rm_work> man, we define _test_lb_and_listener_statuses like 80 times it seems  21:05
<rm_work> I guess they are all VERY slightly different?  21:06
<sbalukoff> Yeah.  21:06
<rm_work> or else there's just no good place to put that?  21:06
<rm_work> it seems almost like the difference is the string in the LOG message  21:06
<rm_work> >_>  21:06
<sbalukoff> I'm thinking that someone should eventually move that to a file in octavia/common or something, and abstract it out enough that it's usable from all the API controllers.  21:06
<rm_work> ah well  21:06
<rm_work> yeah  21:07
<rm_work> or the base controller  21:07
<sbalukoff> Oh yes! There!  21:07
<rm_work> can still override it if necessary  21:07
<rm_work> i'll look at doing that at some point when I'm up for 20 hours and my OCD kicks in and I can't focus on the things i'm ACTUALLY supposed to be doing  21:07
<sbalukoff> Heh! Ok.  21:07
<openstackgerrit> Merged openstack/neutron-lbaas: Shared pools support  https://review.openstack.org/218560  21:08
*** Kiall has quit IRC21:09
<sbalukoff> Woot!  21:09
<sbalukoff> There it is!  21:09
<johnsom> Well, I guess Al has the balls to +A  21:09
*** Kiall has joined #openstack-lbaas21:10
<sbalukoff> Indeed.  21:10
<rm_work> rofl  21:14
<rm_work> on the n-lbaas side  21:14
<rm_work> that's fine :P  21:14
<sbalukoff> Heh! the Octavia share-pools patch has been merged for a couple weeks now. :)  21:14
<sbalukoff> shared  21:14
<sbalukoff> Oh! I need to unmark the shared-pools CLI patch from being WIP.  21:15
<rm_work> sbalukoff: yeah we should merge that  21:16
<rm_work> seems to work  21:16
<sbalukoff> If you want to +1 it so it gets attention, here it is: https://review.openstack.org/#/c/218563/  21:17
*** doug-fish has joined #openstack-lbaas21:22
*** crc32 has quit IRC21:22
<sbalukoff> rm_work: It looks like Evgeny beat me to the punch on getting his L7 CLI updated. The tests you were doing with the CLI that didn't work:  Were those using the patchset that Evgeny uploaded this morning?  21:24
*** crc32 has joined #openstack-lbaas21:28
*** Purandar has joined #openstack-lbaas21:29
<rm_work> err  21:30
<rm_work> define "this morning"  21:30
<rm_work> probably newest as of like 4 hours ago  21:30
<johnsom> Ok, what stupid mistake am I making.  I did a checkout on https://review.openstack.org/#/c/148232/, then python setup.py install, ran the db migration.  21:30
<rm_work> but, I hadn't cherry-picked your shared-pools patch yet, once i did that everything seemed fine I think  21:30
<johnsom> But starting up q-svc gives me: ImportError: Plugin 'neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2' not found.  21:30
<rm_work> johnsom: *sudo* python setup.py install? :P  21:31
*** doug-fish has quit IRC21:31
<johnsom> yep  21:31
<rm_work> just guessing, i am usually running as stack user :P  21:31
<johnsom> yes  21:31
<rm_work> heh yeah dunno... you can try with bit.do/devstack  21:32
<rm_work> just put both relevant patches in there  21:32
<rm_work> i need to modify it to work with client patches too but not sure how yet  21:32
<rm_work> i mean, i could do it manually, but i assume there's an easy way with devstack  21:32
<johnsom> Yeah, I was just hoping to avoid a restack.  Ok, guess I don't get a choice as my usual tricks aren't working  21:33
<sbalukoff> rm_work: Let me know if you figure out an easy way to get the CLI stuff into devstack other than going to /opt/stack/python-neutronclient and checking out the patch you're testing. :P  21:34
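Lacking a neater devstack hook, the manual route mentioned here looks roughly like the following; the change number is the shared-pools CLI review discussed earlier, git-review is assumed to be installed, and -d always fetches the latest patchset:

    cd /opt/stack/python-neutronclient
    git review -d 218563      # fetch and check out the change under test
    sudo pip install -e .     # reinstall so the new CLI commands are picked up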
<sbalukoff> Anyway, good to know that getting both patches in seemed to fix things.  21:34
<sbalukoff> (I'll be trying to break things there myself in any case as I test Evgeny's CLI patch.)  21:35
<rm_work> sbalukoff: i might ask around in the qa channels, i think they work on devstack  21:36
<sbalukoff> Aah, ok.  21:36
<sbalukoff> bharathm: Could I please get you to re-visit this?  https://review.openstack.org/#/c/281603/  21:51
<bharathm> sbalukoff: sorry.. I should have done it much sooner.. :-)  21:54
<sbalukoff> bharathm: No worries, eh  21:55
*** manishg has joined #openstack-lbaas21:57
<johnsom> Hmmm  22:06
<johnsom> 2016-02-22 14:04:44.409 TRACE neutron.api.v2.resource AttributeError: 'OctaviaDriver' object has no attribute 'l7policy'  22:06
*** armax has quit IRC22:06
*** rtheis has quit IRC22:07
<rm_work> yeah you need two patches for neutronclient right?  22:08
<rm_work> err  22:08
<johnsom> sbalukoff should commit f82d24463a510d726be485fb89f86c3c1149069b have included that?  22:08
<rm_work> and also, did sbalukoff say there was something borked with l7?  22:08
<rm_work> in the octavia driver  22:08
<johnsom> Yeah, he mentioned there might be some bugs.  22:08
<johnsom> I did a: neutron lbaas-l7policy-create --action REJECT --name policy1 --listener listener1  22:09
*** Purandar has quit IRC22:09
<rm_work> i did all my actual L7 testing via REST to octavia :/  22:09
<rm_work> just used the neutron stuff to spin up the basics  22:10
<sbalukoff> johnsom: Looking...  22:10
<johnsom> Yeah, I'm trying to go end-to-end  22:10
<rm_work> since the client had a bug for me not allowing me to make policies correctly  22:10
<rm_work> johnsom: let me know if you run into:  22:10
<rm_work> [13:25:21] <rm_work> neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool pool2 --listener listener1  22:11
<rm_work> [13:25:27] <rm_work> "Redirect pool id is missing for L7 Policy with pool redirect action."  22:11
<johnsom> https://gist.github.com/anonymous/598110999506c18cb711 has the full traceback  22:11
<rm_work> i got there and gave up L7 testing on clientside  22:11
<sbalukoff> thanks johnsom: I think this is a bug in the Neutron-LBaaS Octavia driver that I added this weekend. Working on a fix now.  22:13
<johnsom> Cool  22:13
*** Purandar has joined #openstack-lbaas22:15
<johnsom> rm_work Confirmed: https://gist.github.com/johnsom/147f8dd2b599b69b4761  22:16
<rm_work> yeah i tried with the ID too  22:17
<rm_work> i think it isn't reading the option right  22:17
<sbalukoff> Haha! Ok, I found the problem, I think. Time to fix it...  22:18
<rm_work> on the LAST patchset  22:24
<rm_work> and then i'll have +2'd the whole chain  22:24
<rm_work> and by last, i mean first  22:24
<sbalukoff> Heh!  22:25
<sbalukoff> Nice!  22:25
*** neelashah1 has quit IRC22:26
<openstackgerrit> Stephen Balukoff proposed openstack/neutron-lbaas: L7 capability extension implementation for lbaas v2  https://review.openstack.org/148232  22:27
<sbalukoff> Ok, I'm going to restack to test this ^^^  But I think it *should* fix the problem you saw, johnsom.  22:27
<sbalukoff> I'm not sure yet if it fixes the problem you saw, rm_work.  22:28
*** piet has quit IRC22:29
*** localloop127 has quit IRC22:30
<rm_work> i think mine is a client bug  22:32
<sbalukoff> rm_work: I suspect you're right. Gonna poke at it once I finish restacking here.  22:33
*** piet has joined #openstack-lbaas22:34
<johnsom> I added it to the gerrit for the client  22:36
*** ducttape_ has quit IRC22:40
*** ducttape_ has joined #openstack-lbaas22:42
*** allan_h has quit IRC22:50
*** allan_h has joined #openstack-lbaas22:52
*** allan_h has quit IRC23:14
*** Purandar has quit IRC23:14
<johnsom> Hmmm, really strange results in the scenario gate with the requests timeout: http://logs.openstack.org/55/283255/1/check/gate-neutron-lbaasv2-dsvm-scenario/4cd4de5/logs/screen-o-cw.txt.gz  23:15
*** allan_h has joined #openstack-lbaas23:16
*** allan_h has quit IRC23:17
<johnsom> I think five seconds is too short  23:17
<johnsom> Or we should break out the connection timeout vs. reas  23:17
<johnsom> read  23:17
*** piet has quit IRC23:18
*** ducttape_ has quit IRC23:20
<sbalukoff> johnsom: Dang! It works so much better on my devstack. ;)  23:23
<barclaac> sbalukoff: it always works on devstack  23:32
<sbalukoff> Haha!  23:32
<rm_work> alright I just +2'd the last L7 review  23:34
<rm_work> I'm good with it merging as-is  23:34
*** armax has joined #openstack-lbaas23:34
<johnsom> I would like the commands to actually work  23:35
<rm_work> well  23:36
<rm_work> via API they seemed to do SOMETHING  23:36
<rm_work> (I just meant in Octavia)  23:36
<johnsom> Ah  23:36
*** allan_h has joined #openstack-lbaas23:40
<sbalukoff> Yeah, the neutron-lbaas CLI has at least one minor bug that johnsom found (I've confirmed it, testing the fix), and once I got the CLI creating policies... I discovered that although Evgeny asked that l7policy positions start numbering with 1, he didn't set up the DB that way... so I'mma have to fix that real quick.  23:47
<sbalukoff> But! the Octavia stuff should be golden! XD  23:48
<johnsom> Ok, ping me with patches I should be checking out after you have updated them.  23:48
<johnsom> It looks like the IRC bot isn't announcing them all  23:49
<openstackgerrit> Michael Johnson proposed openstack/octavia: Add a request timeout to the REST API driver  https://review.openstack.org/283255  23:49
<johnsom> Hmm, so Octavia patches announce, but neutron-lbaas patches don't.  23:50
<rm_work> johnsom: well, I am going to be off for a bit... but... I've +2'd everything in the relevant Octavia chain  23:51
<johnsom> Ok, that one splits connect timeout (default 10) from read timeout (default 60)  23:51
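If those options survive review unchanged, wiring the new defaults into a deployment would look something like the snippet below; both the section and option names are assumptions taken from the patch under discussion, so double-check the merged change:

    # assumed octavia.conf option names; verify against https://review.openstack.org/283255
    crudini --set /etc/octavia/octavia.conf haproxy_amphora rest_request_conn_timeout 10
    crudini --set /etc/octavia/octavia.conf haproxy_amphora rest_request_read_timeout 60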
<rm_work> so as far as I'm concerned, I could come back later and have it merged  23:51
*** manishg has quit IRC23:51
<johnsom> Ok, once things seem to be working I will hit them  23:51
<rm_work> did ... anyone else want to do a sanity check on these? :P  23:51
<rm_work> or just trust johnsom and I to review and sbalukoff to have written it right? :P  23:52
<johnsom> Good question  23:52
<rm_work> i'll be frank, there's so much testing to do that i've really only tested the happy-path for the most part  23:52
<rm_work> that said, our sad-path has never worked super well <_<  23:52
<rm_work> so I doubt this could make it much worse  23:53
<sbalukoff> rm_work: Yeah, we should fix that someday. :/  23:53
<rm_work> yes.  23:53
<rm_work> one of these days.  23:53
<rm_work> once happy-path actually does the stuff we want correctly :P  23:53
<johnsom> Yeah.  It's still better than some systems I have seen..  Not to name names  23:53
<rm_work> and once this all settles I'll actually probably end up in bugfix mode too  23:55
<johnsom> I think we meet our criteria of two companies reviewing the third, with rm_work and I.  I went through a bunch of the non-L7 octavia paths last week/weekend.  So I feel pretty good we didn't break existing functionality  23:55
<rm_work> unless I get pulled internally again  23:55
<sbalukoff> rm_work: My plan is to go nuts with the bugfixes as well.  23:55
<sbalukoff> Once this is all settled.  23:56
<rm_work> yes  23:56
<rm_work> also will go on a merge-spree for the smaller stuff that's already up  23:56
<sbalukoff> I'm actually fairly proud of what we've built with Octavia; I want to see people using it!  23:56
<rm_work> indeed  23:56
<rm_work> it's very close to something that I think could be deployable ;)  23:57
<rm_work> just a couple outstanding things  23:58
<rm_work> like the single-create API which trevorv is almost done with hopefully  23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!