Monday, 2015-12-14

johnsomNo need to duplicate.  The rebind is optional, you can take it out and make the param on the task name match.00:12
ptoohillI added the one piece i need (though i need to do more) to the create_lb flow and am getting errors. ill check this in00:14
ptoohillThe example can be found in the create_load_balancer flow, this isnt exactly what i want to do, but it shows the error and negates your sample00:15
ptoohillhttps://review.openstack.org/#/c/199954/00:15
ptoohillso i cant have another flow then is what it comes down to00:15
ptoohillunless its after the decider, and at that point its too late for me to inject stuff from a 'pre' flow00:16
johnsomHere is one without rebind: https://gist.github.com/johnsom/5ac6a9bd7d814743eab700:16
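The rebind mechanics johnsom is describing can be modeled outside TaskFlow in a few lines of stdlib Python (a toy sketch, not the real TaskFlow API; `resolve_args` and the task functions are invented for illustration — the real engine inspects `execute()` signatures against its storage in much this way):

```python
import inspect

# Toy model of how a TaskFlow engine maps stored results onto a task's
# parameters.  With a matching parameter name no rebind is needed;
# rebind={'param': 'stored_key'} lets a differently-named stored result
# satisfy the parameter instead.

def resolve_args(func, storage, rebind=None):
    """Look up each parameter of func in storage, honoring rebind."""
    rebind = rebind or {}
    args = {}
    for name in inspect.signature(func).parameters:
        key = rebind.get(name, name)  # rebind overrides the storage key
        if key not in storage:
            # Mirrors the NotFound error quoted later in the discussion
            raise LookupError(
                "Mapped argument %r <= %r was not produced by any "
                "accessible provider" % (name, key))
        args[name] = storage[key]
    return args

storage = {'network_ids': ['net-1', 'net-2']}

def task_matching(network_ids):   # param name matches the stored key
    return network_ids

def task_rebound(nets):           # param name differs, so rebind it
    return nets

print(resolve_args(task_matching, storage))
print(resolve_args(task_rebound, storage, rebind={'nets': 'network_ids'}))
```

Calling `resolve_args(task_rebound, storage)` without the rebind mapping raises the lookup error, which is the "take the rebind out and make the param name match" trade-off in miniature.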
ptoohillThe point where it breaks in my example is line 52 of load_balancer_flows00:16
ptoohilli get the rebind stuff, i was just poking at it, because it wouldnt solve what i need, though you made it sound like it would00:17
johnsomAh, ok, so note, that flow is unordered.  That means your task may not run before the others in the flow.00:18
ptoohillyea, so theres no way to reuse this stuff00:18
ptoohillthis was yet just another example to show what error it gives00:18
johnsomWell, I would put that line in the flow before get_create_load_balancer_flow00:18
ptoohillive tried to do it many ways00:19
ptoohilltried that00:19
ptoohillsame error00:19
johnsomThe unordered flow allows the active and standby to boot in parallel00:19
ptoohillgraph flow doesnt like what im trying to do00:19
ptoohillits not the unordered flow00:19
ptoohillthe data is there00:19
johnsomOk, let me check this out and take a look at the bigger picture.00:20
ptoohillNotFound: Mapped argument 'network_ids' <= 'network_ids' was not produced by any accessible provider (2 possible providers were scanned)00:20
johnsomWhat line #?00:20
ptoohilldoesnt give a line number, this is stack from rpc_errors00:20
ptoohillthe closest related thing is from the controller_worker call00:20
ptoohillit doesnt even compile00:20
johnsomAh, ok, give me a minute to have a look00:21
ptoohill File "/opt/stack/octavia/octavia/controller/queue/endpoint.py", line 45, in create_load_balancer00:21
ptoohill2015-12-13 18:12:39.106 22594 ERROR oslo_messaging.rpc.dispatcher     self.worker.create_load_balancer(load_balancer_id)00:21
ptoohillFile "/opt/stack/octavia/octavia/controller/worker/controller_worker.py", line 273, in create_load_balancer00:21
ptoohill2015-12-13 18:12:39.106 22594 ERROR oslo_messaging.rpc.dispatcher     create_lb_tf.run()00:21
ptoohillThen a bunch of rpc stuff, then that network_ids message00:22
johnsomOk, I'm not sure you can use defaults on task parameters.  I haven't seen it and that might throw the flow compile off.  Second we need to add your flow in an outer linear flow to make sure it runs.  Finally, I would double check that both of those tasks are returning something and not blowing the storage away.00:35
johnsomI'm running py27 now to see if I can get your error.  Then I will code up a new get_create_load_balancer_flow in gist that should help.00:35
ptoohillThat error is coming from graph flow code00:36
ptoohillThe example i linked was just quick to expose the error. Ive tried putting them in separate flows, adding them to linear flows etc00:36
ptoohilli appreciate you taking time on this for me00:36
johnsomNot a problem, we are all a team00:37
ptoohill:)00:37
ptoohillmy objective is to allow flows the ability to determine networking00:38
ptoohillin master, right now, some of the networking is decided outside, but the mgmt net and a few other checks are handled inside the create_amp task. I need to avoid that so we can reuse networks00:38
ptoohilland, i also need a different networking subflow, so my objective is to reuse the create_loadbalancer flow (because thatll work as is) but add things before and after00:39
ptoohillthe failover flow is similar, except you attach the networks, not supply them while creating00:40
ptoohillor at least the old stuff that i remembered00:40
johnsomYeah, I made some semi-major changes to the failover flow as I have been adding active/standby00:41
johnsomOk, take a look at this: https://gist.github.com/johnsom/b6cb5e1b630d2ccba9a900:41
johnsomThat "should" let it compile00:41
ptoohillinteresting00:41
ptoohillill add that in00:42
johnsomObviously it's hack-y code, but it should test the idea00:42
ptoohillIm not sure how to make it less hacky :/00:43
johnsomwell, constants and such.00:43
ptoohillgotcha00:43
ptoohilltesting now00:44
johnsomfingers crossed00:44
ptoohilloh dear, reached quota -.-00:44
johnsomUgh, yes, I hate that.  I have been hitting that on my failover testing.  Security groups00:45
ptoohill:P00:45
johnsomhave been my headache00:45
johnsomI finally went in neutron.conf, jacked the numbers, and restarted neutron.00:45
ptoohillOk, this seems to be working. and does help clear up misunderstanding regarding providers00:46
johnsomI've got two bugs in failover that have been giving me headaches.  I'll get them hammered out this week.00:46
ptoohillthe docs werent really giving a lot of hints about the providers that the error was mentioning00:46
ptoohillso i went down the path of attempting to link it in the graph, but as long as its one big flow and not multiple flows patched together itll work00:47
johnsomOk, cool.  Yeah, the unordered flow means all of those sub-flows and tasks, at that level, run in parallel, so the provides wasn't running in time for the other flows00:47
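The compile failure they were chasing comes down to that ordering rule, which a toy checker (stdlib only, not TaskFlow's actual compiler; the dict-based node layout here is invented) can reproduce: inside an unordered flow nothing is guaranteed to run before anything else, so a sibling's `provides` can never satisfy another sibling's `requires`, while an outer linear flow does give that guarantee.

```python
# Toy compile-time check of TaskFlow-style provider resolution.
# Tasks: {'kind': 'task', 'requires': [...], 'provides': [...]}
# Flows: {'kind': 'linear' | 'unordered', 'children': [...]}

def compile_flow(node, available=frozenset()):
    """Return the names node provides; raise if a requires isn't guaranteed."""
    if node['kind'] == 'task':
        missing = set(node.get('requires', ())) - set(available)
        if missing:
            raise ValueError('%s was not produced by any accessible provider'
                             % sorted(missing))
        return set(node.get('provides', ()))
    provided, avail = set(), set(available)
    for child in node['children']:
        got = compile_flow(child, avail)
        provided |= got
        if node['kind'] == 'linear':  # only linear flows order their children
            avail |= got
    return provided

provider = {'kind': 'task', 'provides': ['network_ids']}
consumer = {'kind': 'task', 'requires': ['network_ids']}

# Provider as a sibling inside the unordered flow: does not compile.
try:
    compile_flow({'kind': 'unordered', 'children': [provider, consumer]})
except ValueError as exc:
    print(exc)

# Provider first in an outer linear flow: compiles fine.
compile_flow({'kind': 'linear',
              'children': [provider,
                           {'kind': 'unordered', 'children': [consumer]}]})
print('compiled')
```

This matches the resolution in the log: one big outer linear flow with the networking tasks added before `get_create_load_balancer_flow`, rather than separate flows patched together.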
ptoohillwell i was doing it outside of the unordered also, this is still a graphflow issue, but is resolved by one big flow00:47
johnsomYeah, the docs and samples are horrible!  I have aspirations to help them with that.00:48
ptoohill:)00:48
ptoohillThanks again, sorry for being a pain00:48
*** chlong has joined #openstack-lbaas00:48
johnsomNP.  I hope that gets you going again.  If not ping me tomorrow.00:49
ptoohillI think this will work for me all the way, i just need to make sure i do similar to what you did with the outer flow00:49
johnsomI need to go put lights on the tree and crack open a brew00:49
ptoohillhope this will work with the 'post' things i need, well see00:49
ptoohillThat sounds good00:49
ptoohillenjoy! :)00:50
johnsomGood luck!00:50
*** rsanchez87 has quit IRC01:20
*** ljxiash has joined #openstack-lbaas01:24
*** mixos has joined #openstack-lbaas01:28
*** rsanchez87 has joined #openstack-lbaas01:37
*** ianbrown has quit IRC02:07
*** ianbrown has joined #openstack-lbaas02:08
*** bochi-michael has joined #openstack-lbaas02:17
rsanchez87Hi All, I realize it's not possible to use LBAAS (HAProxy) for multiple protocols on the same VIP, but how does one handle HTTP redirection to HTTPS for example?02:43
*** ianbrown has quit IRC03:02
*** ianbrown has joined #openstack-lbaas03:02
*** manishg has joined #openstack-lbaas03:19
*** ljxiash has quit IRC03:25
*** ianbrown has quit IRC03:25
*** ianbrown has joined #openstack-lbaas03:25
*** ianbrown has quit IRC03:36
*** ianbrown has joined #openstack-lbaas03:36
*** ianbrown has quit IRC03:41
*** ianbrown has joined #openstack-lbaas03:41
*** manishg has quit IRC03:49
*** yamamoto has joined #openstack-lbaas03:55
*** prabampm has joined #openstack-lbaas04:43
*** ig0r_ has quit IRC04:50
*** ig0r_ has joined #openstack-lbaas04:55
*** ianbrown has quit IRC05:17
*** ianbrown has joined #openstack-lbaas05:17
*** mixos has quit IRC05:56
*** gomarivera has quit IRC06:04
*** bochi-michael has quit IRC06:06
*** yuanying has joined #openstack-lbaas06:11
*** ig0r_ has quit IRC06:12
*** numans has joined #openstack-lbaas06:15
*** kobis has joined #openstack-lbaas06:28
*** amotoki has joined #openstack-lbaas06:31
*** rsanchez87 has quit IRC06:48
*** rcernin has joined #openstack-lbaas07:02
*** bochi-michael has joined #openstack-lbaas07:03
*** openstackgerrit has joined #openstack-lbaas07:10
*** kobis has quit IRC07:12
*** nmagnezi has joined #openstack-lbaas07:13
*** bana_k has joined #openstack-lbaas07:18
openstackgerritStephen Balukoff proposed openstack/octavia: Shared pools support  https://review.openstack.org/25636907:30
*** eezhova has quit IRC07:36
*** chlong has quit IRC07:46
*** kobis has joined #openstack-lbaas08:03
openstackgerritBrandon Logan proposed openstack/neutron-lbaas: WIP - Get Me A LB  https://review.openstack.org/25720108:05
sbalukoff"Get Me A LB"?08:36
*** kobis has quit IRC08:38
rm_work:P08:38
rm_worknice blogan, lol08:38
rm_workgood naming08:38
rm_worksbalukoff: I assume that's the single-call LB Create08:40
rm_workAKA "Get me a LB"08:40
sbalukoffThat's what I'm guessing, too. But, I went ahead and put a -1 on it anyway.08:40
rm_workgood call :P08:40
sbalukoffHaha08:40
sbalukoffAlso, I'm starting to feel good about the Octavia shared pools patch. Third patchset is up there now. As far as I know, all I have yet to do is to add tests for new / altered functionality, and tell y'all you're stupid when you point out things I'm doing wrong. :)08:41
sbalukoffI should have patch set 4 with those new tests by EOD tomorrow.08:42
sbalukoffAnd since then there will be only two days until I'm gone on vacation, I intend to spend that time reviewing others' work.08:42
rm_workyeah I'm technically off until next year <_<08:44
sbalukoffHaha!08:44
rm_workbut prolly going to try to finish up what I'm working on and do some reviews this week anyway08:44
sbalukoffYou're a glutton for punishment.08:44
rm_worki didn't want to take my vacation all right now but if i don't take it i lose it <_< bleh08:45
sbalukoffYeah, I know.08:49
sbalukoff:P08:49
*** eranra has joined #openstack-lbaas08:50
sbalukoffHey Eran!08:50
sbalukoffHow's it going?08:50
* rm_work sleeps08:51
sbalukoffYep, I'mma go sleep as well. Have a good day, rm_work and eranra!08:51
sbalukoff(Er... night, as the case may be.)08:51
openstackgerritPhillip Toohill proposed openstack/octavia: Updates for containers functionality  https://review.openstack.org/19995408:53
*** bana_k has quit IRC09:00
openstackgerritBo Chi proposed openstack/neutron-lbaas: Change status to INACTIVE if admin_state_up if false  https://review.openstack.org/25587509:04
*** rm_you has joined #openstack-lbaas09:17
*** bochi-michael has quit IRC09:21
*** bochi-michael has joined #openstack-lbaas09:27
*** admin0 has joined #openstack-lbaas09:30
*** amotoki has quit IRC09:31
*** ig0r_ has joined #openstack-lbaas09:39
*** bharathm has quit IRC09:53
*** bharathm has joined #openstack-lbaas09:54
*** bharathm has quit IRC10:10
*** rcernin has quit IRC10:25
*** admin0 has quit IRC11:37
*** chlong has joined #openstack-lbaas11:39
*** kobis has joined #openstack-lbaas12:04
*** admin0 has joined #openstack-lbaas12:07
*** doug-fish has joined #openstack-lbaas12:23
*** rtheis has joined #openstack-lbaas12:30
*** doug-fish has quit IRC12:34
*** doug-fish has joined #openstack-lbaas12:39
*** barclaac has quit IRC12:43
*** ducttape_ has joined #openstack-lbaas12:44
*** barclaac has joined #openstack-lbaas12:49
*** ducttape_ has quit IRC13:00
*** bharathm has joined #openstack-lbaas13:11
*** bharathm has quit IRC13:15
*** numans has quit IRC14:11
*** admin0 has quit IRC14:16
*** eranra_ has joined #openstack-lbaas14:18
*** ajmiller has joined #openstack-lbaas14:19
*** eranra has quit IRC14:21
*** eranra_ has quit IRC14:22
*** neelashah has joined #openstack-lbaas14:23
*** yamamoto has quit IRC14:26
*** neelashah1 has joined #openstack-lbaas14:30
*** neelashah has quit IRC14:30
*** TrevorV|Home has joined #openstack-lbaas14:38
*** ducttape_ has joined #openstack-lbaas14:55
*** ajmiller has quit IRC14:56
*** ig0r_ has quit IRC15:04
*** ljxiash has joined #openstack-lbaas15:07
*** TrevorV|Home has quit IRC15:09
*** yamamoto has joined #openstack-lbaas15:15
*** nmagnezi has quit IRC15:22
*** eranra has joined #openstack-lbaas15:22
*** eranra_ has joined #openstack-lbaas15:28
*** ljxiash has quit IRC15:30
*** eranra has quit IRC15:32
*** eranra_ has quit IRC15:41
*** admin0 has joined #openstack-lbaas15:43
*** manishg has joined #openstack-lbaas15:47
*** manishg has quit IRC15:53
*** yamamoto has quit IRC15:56
*** yamamoto has joined #openstack-lbaas15:58
*** admin0 has quit IRC16:19
*** sbalukoff has quit IRC16:25
*** rsanchez87 has joined #openstack-lbaas16:41
*** rsanchez87 has quit IRC16:45
*** armax has joined #openstack-lbaas16:47
*** rsanchez87 has joined #openstack-lbaas16:48
*** manishg has joined #openstack-lbaas16:49
*** yamamoto has quit IRC16:59
*** jonesbr has joined #openstack-lbaas17:13
*** madhu_ak has joined #openstack-lbaas17:17
*** madhu_ak has quit IRC17:22
*** ducttape_ has quit IRC17:24
*** yamamoto has joined #openstack-lbaas17:28
*** bochi-michael has quit IRC17:34
*** doug-fish has quit IRC17:35
*** doug-fish has joined #openstack-lbaas17:35
*** doug-fish has quit IRC17:40
*** yamamoto has quit IRC17:46
*** manishg has quit IRC17:56
*** manishg has joined #openstack-lbaas17:56
*** kobis has quit IRC18:04
*** Aish has joined #openstack-lbaas18:07
blogananyone notice our massive job failures over the weekend and continuing to today?18:09
bloganhttp://docs-draft.openstack.org/44/255944/1/check/gate-neutron-lbaas-docs/dca3405//doc/build/html/dashboards/check.dashboard.html18:09
johnsomNope18:10
bloganmight be related to the grenade gate failure18:12
johnsomYeah, I was just going to say that.18:13
rm_workbleh18:15
rm_worksomeone have it handled or should I look into it?18:15
johnsomThe grenade issue with oslo.middleware is fixed.  Not sure if that is related or not18:16
rm_workk18:16
blogani thought it was just devstack settings18:16
bloganaccording to the ML at least18:16
blogancould be a different grenade job18:16
johnsomYeah, could be we are thinking about two different issues18:17
*** kobis has joined #openstack-lbaas18:17
*** kobis has quit IRC18:18
rm_workanyone have the magic string to hide things with -W in the search filter for gerrit?18:19
rm_workso I can add it to project:openstack/octavia status:open18:19
rm_workin my bookmark18:20
*** Aish has quit IRC18:20
rm_workgot it18:22
rm_work"project:openstack/octavia status:open -label:Workflow-1"18:22
johnsomneat18:23
*** bana_k has joined #openstack-lbaas18:23
*** Aish has joined #openstack-lbaas18:24
blogancould be all the WIP patches we've pushed up too, the minimal job had a spike but has come back down, and that gets run a lot more than any other job18:25
rm_workso possibly legit failures? :P18:27
johnsomFYI I fixed up that api doc update.  I didn't see the experimental tag blogan mentioned.  At least in the rendered doc I saw it was stable18:27
*** neelashah1 has quit IRC18:27
bloganjohnsom: might have been in thte table of contents then18:29
bloganone sec18:29
blogani must be crazy18:29
bloganguess i shouldn't do all those drugs on the weekend18:30
blogancould have sworn i saw it18:30
bloganoh well, i feel better18:30
johnsomgrin18:30
*** ig0r_ has joined #openstack-lbaas18:39
*** bharathm has joined #openstack-lbaas18:42
ptoohillanyone having issues building lb today? im noticing some funky behavior and wanted to see if i was alone or not18:42
* ptoohill is all alone18:49
rm_worki'm still getting keystone errors trying to do TLS stuff <_<18:50
rm_workbefore it even gets close to Barbican18:50
ptoohillmaybe i can help with that, these other issues im seeing like not being able to connect to instance on master and building configs on the instance in my patch18:51
*** doug-fish has joined #openstack-lbaas18:55
ptoohillOctavia is Oprah18:57
johnsomWe are giving away cars now?18:58
ptoohills/cars/bugs18:58
*** madhu_ak has joined #openstack-lbaas18:59
ptoohillThough this is probably just another thing affecting just me somehow, but that doesnt explain issues in master19:00
*** doug-fish has quit IRC19:00
*** doug-fish has joined #openstack-lbaas19:01
*** doug-fis_ has joined #openstack-lbaas19:03
*** doug-fis_ has quit IRC19:04
*** doug-fis_ has joined #openstack-lbaas19:04
*** doug-fish has quit IRC19:05
*** sbalukoff has joined #openstack-lbaas19:15
*** woodster_ has joined #openstack-lbaas19:26
*** bharathm has quit IRC19:31
openstackgerritMerged openstack/octavia: Updated from global requirements  https://review.openstack.org/25649619:37
*** albertom has quit IRC19:45
*** ajmiller has joined #openstack-lbaas19:49
*** albertom has joined #openstack-lbaas19:52
*** Aish has quit IRC20:05
*** whydidyoustealmy has quit IRC20:07
*** shakamunyi has quit IRC20:07
*** bharathm has joined #openstack-lbaas20:22
*** madhu_ak has quit IRC20:28
*** jonesbr has quit IRC20:29
*** jonesbr has joined #openstack-lbaas20:32
*** neelashah has joined #openstack-lbaas20:32
*** ducttape_ has joined #openstack-lbaas20:38
*** harlowja has quit IRC21:04
*** madhu_ak has joined #openstack-lbaas21:05
*** Aish has joined #openstack-lbaas21:06
*** neelashah has quit IRC21:06
*** harlowja has joined #openstack-lbaas21:07
*** harlowja_ has joined #openstack-lbaas21:11
*** harlowja has quit IRC21:13
*** Aish has quit IRC21:18
openstackgerritMerged openstack/neutron-lbaas: Updated from global requirements  https://review.openstack.org/25649321:22
*** ducttape_ has quit IRC21:39
*** manishg_ has joined #openstack-lbaas21:41
*** manishg__ has joined #openstack-lbaas21:42
*** manishg has quit IRC21:42
*** manishg__ has quit IRC21:43
*** manishg has joined #openstack-lbaas21:44
*** manishg_ has quit IRC21:45
*** chlong has quit IRC21:51
*** Aish has joined #openstack-lbaas22:11
bloganyeah i can't even ping the amphorae anymore, something fishy is going on22:23
*** crc32 has joined #openstack-lbaas22:27
*** ducttape_ has joined #openstack-lbaas22:27
*** crc32 has quit IRC22:28
*** crc32 has joined #openstack-lbaas22:29
*** jonesbr has left #openstack-lbaas22:31
johnsomJoy.  Thanks for the warning to not restack22:34
bloganlol22:36
bloganwell its just got to be me bc the scenario jobs are running and passing, as far as i've seen22:39
ptoohillim having other issues with connection in the back in and some unexpected behavior22:45
ptoohillbackend*22:46
*** rtheis has quit IRC22:46
bloganwell i can't even ping a cirros image....22:48
bloganbleh22:48
ptoohilllol22:49
*** ducttape_ has quit IRC22:49
johnsomMy issue is after I failover an amp, I can't delete the load balancer.  "security group is in use"22:51
madhu_akI have seen the same issue "security group is in use" when running lbaas api tests (using haproxy), but I normally clear the db manually and rerun the tests, and it passed for me. (Well, recollecting the experience from when I wrote the lbaas api tests)22:55
johnsomYeah, I think it's due the fact we now have "DELETED" status amps hanging around.22:56
johnsomI just need to track it down.22:56
bloganjohnsom: we only have those in our db right?22:57
bloganjohnsom: i see what you mean, they actually didn't get deleted in nova22:58
bloganjohnsom: yeah that would be a problem, also when a nova instance is deleted, even after its gone from the list, it seems its take some unknown time for the security group to actually be released22:58
bloganjohnsom: which is why there's a security group delete retry22:58
bloganmight try upping that22:59
blogananyway, figured my problem out, network namespaces are not being created22:59
johnsomThis is a bug.  There is still a port with the security group on it.22:59
bloganahhh23:00
bloganwhat port?23:00
bloganthe vip port or a port on an instance?23:00
johnsomIt's one of the two vip neutron ports23:01
bloganthere should only be one vip port per load balancer23:01
openstackgerritCarlos Garza proposed openstack/octavia: Implementing EventStreamer  https://review.openstack.org/21873523:01
openstackgerritCarlos Garza proposed openstack/neutron-lbaas: Implementing EventStreamer reciever for octavia  https://review.openstack.org/24147423:01
johnsomI have one port "94" that has the allowed address pairs on it with "93", then I also have a port with just the "93" address on it23:03
johnsomneutron port-list23:03
johnsom"94" port has owner compute:none, "93" port is owner neutron:LOADBALANCERV223:04
bloganyeah the 93 is the vip port23:04
blogan94 is the instance port23:04
blogan93 should have an actual name23:05
bloganbut both of those still exist?23:05
*** chlong has joined #openstack-lbaas23:05
bloganafter a failover?23:05
bloganwait, actually, don't you just want to transfer those ports over to a new amphora instead of deleting them?23:05
rm_worksoooo23:06
johnsomyes, "93" has a name, "94" does not.  Somehow "94" still has the sec group on it23:06
rm_workdoes neutron-lbaas ACTUALLY read the file /etc/neutron/neutron_lbaas.conf23:06
rm_workseems like the answer is "no"?23:06
johnsomYeah, the scenario is create lb, failover lb, then delete lb23:07
johnsomdelete lb fails23:07
rm_workwhat is that file even *for*23:07
bloganrm_work: it should23:07
bloganrm_work: if you want to do lbaas specific things23:07
johnsomrm_work the service providers are in there23:07
johnsomi.e. Octavia driver23:08
rm_workwell, i basically had to copy all the config over from that to neutron.conf to get it to actually be set23:08
bloganrm_work: all about hwo you start neutron-server, i pass in neutron_lbaas.conf23:08
rm_workah so devstack does --config-file /etc/neutron/neutron.conf23:08
bloganrm_work: or you could have done neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/neutron_lbaas.conf23:08
rm_workby default, devstack does not include neutron_lbaas.conf on the commandline args for neutron-server23:08
bloganrm_work: yeah, which neutron used to automatically pull in neutron_lbaas.conf without specifying it, but now it doesn't23:08
rm_workloool23:09
rm_workok then our devstack plugin is borked23:09
johnsomblogan when did that change?23:09
bloganjohnsom: i don't remember23:10
bloganjohnsom: just found it out one day while testing the allocates_vip setting in neutron_lbaas23:10
bloganwondering why it wasn't being picked up23:10
rm_worklol23:10
rm_workso that cost me 3 days23:10
bloganit took you 3 days to decide to ask?23:11
johnsomHmm, ok.  I don't have to hack things to get devstack going.  I guess we have octavia as the default in config.py23:11
rm_workblogan: it took until I was pretty positive it wasn't being loaded to ask, yes23:11
bloganno devstack runs fine right now without being specified, its just if you want to change something from default thats in neutron_lbaas.conf23:11
rm_work"the config file for our project isn't being read" isn't something that comes to mind by default23:11
rm_workit's one of those *base assumptions*23:11
rm_workrofl23:12
rm_work"just if you want to change something from default"23:12
rm_workIE, if you want to actually set a value23:12
bloganwell i mean normally if you have two config files, you have to specify both, it reading the neutron_lbaas.conf magically was the exception23:12
johnsomhahaha23:12
johnsomWe make perfect choices for default, so, yeah it was redundant...  hahaha23:12
rm_worksooooo devstack happily updates the neutron_lbaas.conf file with the config necessary to run23:12
rm_workIE, using a non-default password or IP23:13
rm_workand then it just sits there and doesn't get read23:13
rm_workso nothing works right because it just uses defaults :P23:13
bloganit works right for defaults :)23:13
rm_work>_<23:13
rm_workthat isn't even close to being *actually working*23:13
johnsomAny color as long as it's black...23:14
rm_workYou can have any color you want, as long as it's black23:14
rm_workLOL23:14
rm_workyes johnsom thanks23:14
rm_workso i'm not insane23:14
bloganworks for black23:14
blogan!23:14
rm_work>_<23:14
blogani'm not saying its not a problem, just haven't had the time to go fix it myself, haven't really needed to, just got devstack running and then restarted the neutron-server with teh right command23:15
bloganthought i put in a bug for it though, but apparently not23:17
*** doug-fis_ has quit IRC23:18
*** doug-fish has joined #openstack-lbaas23:18
bloganhttps://bugs.launchpad.net/neutron/+bug/152609623:19
openstackLaunchpad bug 1526096 in neutron "lbaas devstack plugin should start neutron-server with neutron_lbaas.conf" [Undecided,New]23:19
bloganthere we go23:19
*** doug-fis_ has joined #openstack-lbaas23:20
*** doug-fis_ has quit IRC23:21
bloganmy devstack problem is solved23:22
johnsomblogan, since the security group gets applied to both of those ports, I think we have an issue.  When the first port is deallocate_vip'd it's going to fail to delete the security group because the second port still has it23:22
*** doug-fis_ has joined #openstack-lbaas23:22
*** sbalukoff has quit IRC23:22
*** doug-fish has quit IRC23:23
bloganjohnsom: what are the steps before deallocate_vip is executed?23:24
johnsomUnplugVIP, DeleteAmphoraeOnLoadBalancer, and MarkLBAmphoraeDeletedInDB23:26
*** ducttape_ has joined #openstack-lbaas23:26
bloganjohnsom: so you're getting totally new ips on the new amp from the failed one no?23:27
blogantotally new ports23:27
johnsomNot these IPs they are moving23:28
johnsomOh, ports, let me look23:28
johnsomNope, it looks like they just get moved.  Same ids on the two ports.23:29
bloganthe reason i ask is because if all you do is transfer the ports (except the mgmt port) then you don't need to unplug vip23:29
johnsomI think the only thing that changes is the management port23:29
johnsomthe failure I am looking at is when the user deletes the lb23:30
*** ducttape_ has quit IRC23:30
johnsomNot during failover23:30
bloganohhhh23:30
blogansorry23:30
johnsomNp, it's confusing23:30
bloganso is this always an lb with a failover that happened?23:30
johnsomIt seems to be that way.  I might be wrong though.  Just tracking it down23:31
johnsomNormally I can delete lbs, but that doesn't mean whatever I broke didn't break that too23:32
bloganwell then, delete amphorae on load balancer should just delete all the amphorae, and remove the ports on those amphorae23:32
bloganunless, the failover disassociated those ports from the amphora23:32
crc32https://review.openstack.org/#/c/241474/ <-- needs review23:32
blogando those ports have a device_id?23:32
crc32https://review.openstack.org/#/c/218735/ <-- needs more review23:32
crc32Feel free to rubber stamp a +223:33
johnsomblogan yes, before failover they both have the same device ID.  After failover it is a different ID at least on one23:34
bloganjohnsom: okay i think a good test would be to see if a port that is transferred from one instance to another is deleted when that new instance is deleted23:35
johnsomAt the point in deallocate_vip where it bombs, 93 still has a device_id, 94 does not23:35
bloganbc that seems like what happened23:35
blogan94 is device_owner:compute?23:35
* blogan reads scrollback23:35
bloganah yeah that might be the problem23:35
blogan94 is the instance port23:35
johnsom94 is compute:None, it does not go away when the instance is deleted via nova23:36
bloganso either the device_id doesn't get updated to the new amphora's id on failover, or the device_id is being cleared out at some point23:36
johnsomCool, thanks for helping to track this down!  I will check that.23:36
bloganjohnsom: try this, create another instance with that port, see if device_id gets populated, if it doesn't update that port's device_id with the amphora_id and then delete23:37
bloganthe instance23:37
bloganhell i can probably test this out myself real quick23:37
bloganbc im curious now23:38
bloganand i now have a working devstack23:38
johnsomI'm going to reset, run again and check that device_id after failover23:38
*** doug-fis_ has quit IRC23:43
johnsomOk, that is strange.  Just created an lb.  The two ports have two different device ids here.  About to fail over and see what happens23:45
bloganjohnsom: the vip port will have the lb id23:46
bloganjohnsom: bc its only for the lb, the instance port was created by nova, so it'll have the instance id23:46
johnsomthe compute:none port has the same device_id as the mgmt port23:47
bloganjohnsom: yep, that makes sense, they're both on the same amphora23:47
bloganjohnsom: yeah found the problem23:49
bloganjohnsom: so when you transfer the port, and boot the instance with that port, that port's device_id will get populated with the new amphora's id BUT23:49
bloganjohnsom: when that new amphora is deleted nova only clears out the device_id and device_owner, it doesn't delete the port23:50
bloganjohnsom: i assume its because nova has no way to know if that port was originally created by nova or some other service/user23:50
bloganso we need to explicitly delete that port from neutron23:50
johnsomOk.  I will make it so.23:52
bloganjohnsom: i'm thinking of how we will do that in the network driver, which i'm not sure we can, the proper place would be deallocate_vip i think23:52
bloganand that does get a vip object, so you'd have to go vip.loadbalancer.amphorae i guess, that should give you all the port_ids23:53
bloganwish there was a way to tell nova to also delete this port when that instance is deleted, but i guess thats all underneath23:56
johnsomOk, I follow you.  Yeah.  Can't hurt to delete it explicitly though (with a try block).23:57
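The cleanup johnsom agrees to could look roughly like this (a sketch only; `FakeNeutronClient` stands in for python-neutronclient, and this `deallocate_vip` is not the real Octavia network driver method): since nova only clears `device_id`/`device_owner` on ports it didn't create, the driver has to delete the leftover amphora ports explicitly, inside a try block so an already-gone port isn't fatal.

```python
class PortNotFound(Exception):
    """Stand-in for neutronclient's port-not-found exception."""

class FakeNeutronClient:
    """Minimal stand-in for a neutron client holding a set of port ids."""
    def __init__(self, ports):
        self.ports = set(ports)
    def delete_port(self, port_id):
        if port_id not in self.ports:
            raise PortNotFound(port_id)
        self.ports.remove(port_id)

def deallocate_vip(client, vip_port_id, amphora_port_ids):
    """Delete the VIP port plus any ports nova orphaned on the amphorae."""
    for port_id in [vip_port_id] + list(amphora_port_ids):
        try:
            client.delete_port(port_id)
        except PortNotFound:
            # Already gone (e.g. nova did delete it) -- nothing to do.
            pass

client = FakeNeutronClient({'93', '94'})
deallocate_vip(client, '93', ['94', 'already-gone'])
print(sorted(client.ports))
```

In the real driver the amphora port ids would come from `vip.load_balancer.amphorae`, as blogan suggests above; with both ports gone, deleting the security group no longer fails with "security group is in use".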

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!