Saturday, 2016-02-13

00:00 <blogan> if we keep things as is, what is the alternative to solving the current problems?
00:00 <johnsom> blogan: Yes, I think those are the options as I see them
00:00 <xgerman> yeah, and make sure the status indicates that things on the amp might be different than what is seen here
00:00 <johnsom> The current problem I know of is the member update issue.  I can fix that.
00:01 <sbalukoff> Again, you've not actually looked at my patch yet.
00:01 <xgerman> ok, I gotta run
00:01 <blogan> should we make a plan for N to implement a task-based API, where we store the intended state and the current state of all resources?
00:01 <johnsom> If we stay with how it is, there is some chunk of work to make sure we can do the model updates with the L7 stuff like we do with the other existing model objects.
00:02 <blogan> sbalukoff: I wish I could :(
00:03 <blogan> okay, so it has to be fixed for L7, so that rules out the task-based API anytime soon
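(For context, the task-based API blogan floats above would track both the intended and the observed state of every resource, with a reconciler driving one toward the other. A minimal sketch of that idea in Python; all names here are illustrative, not existing Octavia code:)

    from dataclasses import dataclass, field

    @dataclass
    class ResourceRecord:
        """Hypothetical record pairing what the user asked for with
        what is actually deployed on the amphorae."""
        resource_id: str
        intended: dict = field(default_factory=dict)  # desired configuration
        observed: dict = field(default_factory=dict)  # last-known amp state

        @property
        def in_sync(self) -> bool:
            return self.intended == self.observed

    # A background task would repeatedly drive observed toward intended,
    # and the API could expose both, so a GET is never ambiguous about
    # whether a change has landed on the amps yet.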
00:03 <johnsom> Those are the two issues I know of with the current implementation.  Others, sbalukoff?
00:03 <sbalukoff> If we stay with how it is, we're also duplicating code between the data model and repository.
00:03 <blogan> sbalukoff: mind rehashing what those duplication points are? I have missed it
00:04 <johnsom> Yeah, I can't identify those either
00:04 <sbalukoff> Unless we do something differently in the to_data_model code, we're going to be dead in the water. The code I implemented there in the shared pools patch is n^2 or worse in efficiency.
00:04 <blogan> did you do an n^n algorithm?
00:04 <johnsom> I thought you realized the reason you did that was really just a bad test
00:05 <blogan> because I couldn't even if I tried, I don't think
00:05 <sbalukoff> blogan: The biggest ones are around the L7 functionality. But there are others y'all are actually overlooking right now when it comes to relationships that SQLAlchemy keeps track of for us. Right now we're not doing anything particularly complex with those things, but that's likely to change as we add new features.
00:05 *** piet_ has joined #openstack-lbaas
00:06 <sbalukoff> blogan: I'm pretty sure I did an n^n.
00:06 <sbalukoff> yes.
00:06 <sbalukoff> Sorry.
00:06 <blogan> i'd love to see it lol
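(An aside for readers: the usual way a to_data_model-style graph conversion goes exponential is by re-walking shared SQLAlchemy relationships, e.g. pool -> members -> pool -> ..., with no record of what has already been converted. A hedged sketch of the memoized fix, assuming hypothetical __data_model__/__relationships__ helpers rather than Octavia's actual implementation:)

    def to_data_model(sa_obj, _seen=None):
        """Convert an ORM object graph to plain data models, visiting each
        object once by memoizing on identity (sketch, not Octavia code)."""
        if _seen is None:
            _seen = {}
        if id(sa_obj) in _seen:
            return _seen[id(sa_obj)]      # already converted: reuse it
        model = sa_obj.__data_model__()   # hypothetical shallow constructor
        _seen[id(sa_obj)] = model         # memoize before recursing (cycles!)
        for name in sa_obj.__relationships__:   # hypothetical name listing
            child = getattr(sa_obj, name)
            if isinstance(child, list):
                setattr(model, name, [to_data_model(c, _seen) for c in child])
            elif child is not None:
                setattr(model, name, to_data_model(child, _seen))
        return model

(With the memo, the walk is linear in objects plus relationships; without it, every pool/listener/member back-reference multiplies the work.)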
00:06 <sbalukoff> It's worth noting:
00:06 <sbalukoff> The patch I implemented this morning is almost exactly a roll-back to blogan's code prior to the shared pools patch.
00:07 <johnsom> Right, thus my confusion here
00:07 <sbalukoff> (It turns out blogan's code had an error in it that worked out in our favor regarding data model population...)
00:07 <blogan> yes! dumb luck to the rescue!
00:07 <sbalukoff> Haha!
00:07 <johnsom> The problem with that patch is it breaks the create calls at the moment
00:08 <sbalukoff> Well, it's not just the create calls...
00:08 <sbalukoff> Wait...
00:08 <sbalukoff> Actually...
00:08 <sbalukoff> Ok, the follow-up patch I posted (with the flow reorder) seems to work fine with create calls.
00:08 <blogan> grrr, I really wish Gerrit was up
00:08 <sbalukoff> At least, I've not been able to get it to fail.
00:09 <sbalukoff> But I don't think the follow-up patch actually touches any of the create calls...
00:09 <johnsom> sbalukoff: be sure to check the haproxy.cfg in the amp.  The flows don't fail, the config is just toast
00:09 <sbalukoff> So... I have no idea how they were breaking for you, johnsom.
00:09 <sbalukoff> Oh, I was watching the haproxy.cfg
00:09 <johnsom> Ok
00:10 <blogan> okay, I gotta go home, I'll be on later sometime
00:10 <blogan> hopefully Gerrit will be up
00:10 <sbalukoff> I've been trying to detect discrepancies between the Octavia database and what is actually deployed for the "success" scenario.
00:10 <blogan> I'll comment on the review
00:10 <sbalukoff> So yes, I've been looking very closely at the haproxy.cfg
00:12 *** yamamoto_ has joined #openstack-lbaas
00:16 <sbalukoff> So... it's entirely possible all the validation stuff I'm doing in the repo is actually the wrong place for it, and that it should go in the data model instead. Or some other yet-to-be-invented validation layer that's external to all of this.
00:16 <sbalukoff> I will say that some of the l7policy code in particular makes use of SQLAlchemy's orderinglist stuff (since policies have an order that needs to be managed), which I don't like the idea of having to reproduce in some other layer.
00:17 <sbalukoff> Especially since a new policy being inserted in the list will effectively reorder all the other policies in the list.
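(For reference, SQLAlchemy's orderinglist extension keeps a position column in step with a collection's list order, which is the renumbering-on-insert behavior sbalukoff is describing. A self-contained sketch of the pattern; the table and column names are illustrative, not Octavia's actual schema:)

    from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
    from sqlalchemy.ext.orderinglist import ordering_list
    from sqlalchemy.orm import Session, declarative_base, relationship

    Base = declarative_base()

    class Listener(Base):
        __tablename__ = 'listener'
        id = Column(Integer, primary_key=True)
        # ordering_list keeps each policy's 'position' column equal to
        # its index in this Python list.
        l7policies = relationship(
            'L7Policy',
            order_by='L7Policy.position',
            collection_class=ordering_list('position'))

    class L7Policy(Base):
        __tablename__ = 'l7policy'
        id = Column(Integer, primary_key=True)
        listener_id = Column(Integer, ForeignKey('listener.id'))
        position = Column(Integer)
        name = Column(String(50))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        listener = Listener(l7policies=[L7Policy(name='a'), L7Policy(name='b')])
        # Inserting at index 0 renumbers every other policy automatically.
        listener.l7policies.insert(0, L7Policy(name='c'))
        session.add(listener)
        session.commit()
        print([(p.name, p.position) for p in listener.l7policies])
        # [('c', 0), ('a', 1), ('b', 2)]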
00:18 <sbalukoff> I'm disturbed that the return data we give for updates / deletes is not the state of affairs after the transaction has occurred (which is how things work with creates).  Maybe we should have blocking calls when they do updates?
00:19 <sbalukoff> (Right now we just shove everything off to an asynchronous handler and hope for the best-- so what gets returned by the API is going to need to be taken with a grain of salt anyway.)
00:20 <johnsom> I'm assuming you mean via the API.  Yes, since we are async we return a 202 and what they asked us to change.
00:20 <johnsom> Going synchronous is yet another topic to rehash
00:21 <johnsom> The difference here is that under the current code, GET queries return what is actually configured on the amps, whereas the reorder would return what we want to be on the amps.  So, if you have 10,000 amps in an LB, a GET under the reorder code would indicate they are updated when they have not yet been updated.
00:22 <johnsom> Extreme crazy case, yes
00:23 <sbalukoff> Well... I would argue that a GET when you're in the middle of changing state is a kind of "anyone's guess" scenario.
00:23 <sbalukoff> We probably don't want to block the GET...
00:23 <johnsom> fair
00:24 <johnsom> It really comes down to whether we want that history or not.  If not, we can strip a heck of a lot of code out.
00:25 <sbalukoff> And... well, I guess it comes down to a matter of opinion on what is more worthwhile for the user to see when the state is PENDING_UPDATE: the state of things before the update, or the state things will be in once the update completes.
00:25 <sbalukoff> I could see arguments either way on that one.
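(To make the trade-off concrete: either way the update endpoint accepts the change, marks the resource PENDING_UPDATE, queues the work, and returns 202; the disagreement is only about when the DB row itself is rewritten, since that is what a later GET reads. A rough sketch of the pattern, with hypothetical repo/queue helpers rather than Octavia's real handler code:)

    # Sketch of the async-update shape under discussion (names hypothetical).
    def update_member(repo, queue, member_id, update_dict):
        repo.set_provisioning_status(member_id, 'PENDING_UPDATE')
        # Option A (current code): leave the row's data untouched until the
        # amphora confirms, so GET reflects what is actually configured.
        # Option B (flow reorder): write update_dict to the row now, so GET
        # reflects the intended end state before the amps catch up.
        queue.send({'member_id': member_id, 'update_dict': update_dict})
        # 202 Accepted, echoing back what the caller asked us to change.
        return 202, update_dict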
00:27 <sbalukoff> So the database state should always be the state of things on the amphora (assuming we eventually write roll-back code that occasionally succeeds. ;) )... and the theory is it's easier to revert to the previous state if we don't change the database until we know whether the transaction succeeded?  (I'm not convinced of this-- you're going to have to store state one way or another, either where you were or where you want to go. I suspect it's about equally difficult either way... but I don't know that for sure.)
00:29 *** piet_ has quit IRC
00:31 <johnsom> We are already storing that state
00:31 <sbalukoff> I agree that it seems like a good idea to be cautious about updating the database pre-emptively-- we're closer to actual database transactions that way.
00:31 <johnsom> Where we were is in the DB; where we want to go is stored in the flow storage (UPDATE_DICT)
00:32 <sbalukoff> Yeah, well you could just as easily do a ROLLBACK_DICT. ;)
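(Background for the UPDATE_DICT point: Octavia's flows run on TaskFlow, where a task's execute() consumes values placed in flow storage and its revert() runs if a later task fails. A rough sketch of the shape being debated, using a hypothetical repository object rather than Octavia's real task classes:)

    from taskflow import task

    class UpdateMemberInDB(task.Task):
        """Sketch: write UPDATE_DICT to the DB only after the amphora
        change succeeds, so a failed flow leaves the old row intact."""

        def __init__(self, repo, **kwargs):
            super().__init__(**kwargs)
            self.repo = repo  # hypothetical repository object

        def execute(self, member_id, update_dict):
            # member_id and update_dict are injected from flow storage.
            self.repo.update(member_id, **update_dict)

        def revert(self, member_id, update_dict, **kwargs):
            # With DB-write-last ordering there is no data to roll back;
            # a ROLLBACK_DICT design would restore saved values here instead.
            self.repo.set_provisioning_status(member_id, 'ERROR')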
00:32 <johnsom> Back to the GET argument
00:32 <johnsom> Yeah
00:32 <sbalukoff> Yeah.
00:33 <sbalukoff> I think we're cheating with create commands though:
00:33 <johnsom> In theory, UPDATE_DICT is easier on the queue
00:33 <sbalukoff> We're showing the user how things will be after the change.
00:34 <johnsom> I agree.  It's not too unfair though
00:34 <sbalukoff> So... I really have to run for now. (There are already 6 guests over for a party I'm throwing tonight and I'm being a terrible host...)  One thing I will say about this is that not having things work for the success state is causing me severe problems trying to test and troubleshoot the l7 code.
00:34 <johnsom> Unlike a user that updates to disable session persistence, sees the change in the API, but still sees stuck traffic on his nodes.
00:35 <sbalukoff> Because at this point it's really hard for me to tell where things went wrong if updates don't end up in the haproxy.cfg.
00:35 <sbalukoff> That ambiguity goes away when I do the DB updates first. :P
00:35 <johnsom> Ok, well, everything *except* member update works for me pre-shared pools.  Member is just a model update
00:35 <sbalukoff> The l7policy and l7rule stuff suffers the same problem as the member update stuff.
00:36 <sbalukoff> And again, right now this goes away with the two patches I suggested this morning.
00:37 <johnsom> easy != right necessarily.  Ok, go to your party.  Gerrit will be back Tuesday
00:37 <sbalukoff> Perhaps a compromise?  Maybe we could do the task-flow reorder for now to make sure things work for the "success" state, and make sure that roll-backs are on the docket for the near future?
00:37 <johnsom> Remove all that code just to put it back in?  Not buying it
00:38 <sbalukoff> I know that easy is not necessarily right. But having to take great pains is usually a sign of bad design. :/
00:38 <sbalukoff> No--
00:38 <sbalukoff> Leave the model code in place.
00:38 <sbalukoff> Have any of my patches tried to remove it?
00:38 <johnsom> No, but I wouldn't leave that much dead code around either
00:39 <sbalukoff> I'm working off the idea that "things need to work for normal 'success' flows in production," including stuff like member updates. :/
00:39 <sbalukoff> Right now... they don't.
00:39 <johnsom> Member updates are a pretty easy fix
00:39 <sbalukoff> And I suspect that was the case before shared pools.
00:39 <sbalukoff> Is it?
00:39 <johnsom> Yeah
00:40 <johnsom> I've been poking at it while we've been discussing
00:41 <sbalukoff> Hopefully Gerrit will be back soon.
00:44 <sbalukoff> Ok, I'm off to this party.
00:45 <sbalukoff> Have a good weekend in case I don't see y'all before Monday, eh!
00:48 *** piet has joined #openstack-lbaas
00:52 *** yamamoto_ has quit IRC
00:56 *** allan_h has joined #openstack-lbaas
00:57 *** yamamoto_ has joined #openstack-lbaas
00:57 *** ajmiller has quit IRC
00:59 <johnsom> FYI, three lines of code (maybe there is a more Pythonic way) fix the member update issue
01:05 *** _cjones_ has quit IRC
01:24 *** piet has quit IRC
01:28 *** yamamoto_ has quit IRC
01:31 <johnsom> This fixes the pre-shared-pools member update.  Member delete should be fixed too, as it was reordered in a recent patch
01:31 <johnsom> https://gist.github.com/anonymous/2811a238a4a42f1b0adb
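(The anonymous gist above is no longer retrievable, so the following is only an illustrative guess at the shape of a small model fix like the one johnsom describes (syncing the updated member's fields into the in-memory data model before the haproxy config is rendered), and not the gist's actual contents:)

    # Purely illustrative: refresh the one member that changed so the
    # rendered haproxy.cfg picks up its new values (hypothetical names).
    for pool_member in pool.members:
        if pool_member.id == updated_member.id:
            pool_member.update(member_update_dict)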
01:32 *** madhu_ak has quit IRC
01:33 <johnsom> When I get a chance I will try shared pools with the reverted to_model code again and see if I can repro the corrupt haproxy.cfg issue, and fix the member update there.  Given that create has always worked before, I think there is something wrong in shared pools
01:34 <johnsom> Right now I need to do my day job, which I put off for this discussion
01:47 *** openstackgerrit has quit IRC
01:48 *** openstackgerrit has joined #openstack-lbaas
01:48 <openstackgerrit> Jeffrey Longstaff proposed openstack/neutron-lbaas: Create plugin driver for F5 Networks appliances.  https://review.openstack.org/279836
02:06 *** yamamoto_ has joined #openstack-lbaas
02:21 *** yamamoto_ has quit IRC
02:22 *** yamamoto_ has joined #openstack-lbaas
02:22 *** yamamoto_ has quit IRC
02:22 *** yamamoto_ has joined #openstack-lbaas
02:32 *** bana_k has quit IRC
02:35 *** kevo has quit IRC
02:46 *** bana_k has joined #openstack-lbaas
02:47 *** bana_k has quit IRC
02:47 *** bana_k has joined #openstack-lbaas
02:48 *** bana_k has quit IRC
02:51 *** chlong has joined #openstack-lbaas
02:56 *** yamamoto_ has quit IRC
02:56 *** woodster_ has quit IRC
03:10 *** piet_ has joined #openstack-lbaas
03:12 *** HenryG has quit IRC
03:13 *** HenryG has joined #openstack-lbaas
03:37 *** amotoki has quit IRC
03:54 *** chlong has quit IRC
03:56 *** yamamoto has joined #openstack-lbaas
04:02 *** piet_ has quit IRC
04:04 *** yamamoto has quit IRC
04:10 *** piet has joined #openstack-lbaas
04:35 *** piet has quit IRC
05:06 *** kobis has joined #openstack-lbaas
05:22 *** piet has joined #openstack-lbaas
05:22 *** armax has quit IRC
05:25 *** fnaval has quit IRC
05:34 *** fnaval has joined #openstack-lbaas
05:41 *** piet has quit IRC
05:52 *** piet has joined #openstack-lbaas
06:07 *** piet has quit IRC
06:54 *** fnaval has quit IRC
06:57 *** piet has joined #openstack-lbaas
07:12 *** piet has quit IRC
07:30 *** piet has joined #openstack-lbaas
07:45 *** piet has quit IRC
08:14 *** hockeynut_afk has joined #openstack-lbaas
08:15 *** hockeynut has quit IRC
08:15 *** HenryG has quit IRC
08:17 *** HenryG has joined #openstack-lbaas
09:30 *** yamamoto has joined #openstack-lbaas
09:37 *** yamamoto has quit IRC
09:43 *** yamamoto has joined #openstack-lbaas
09:46 *** yamamoto has quit IRC
10:47 *** yamamoto has joined #openstack-lbaas
10:53 *** yamamoto has quit IRC
11:50 *** piet has joined #openstack-lbaas
12:13 *** piet has quit IRC
12:25 *** piet has joined #openstack-lbaas
12:55 *** doug-fish has quit IRC
12:57 *** doug-fish has joined #openstack-lbaas
12:57 *** doug-fish has quit IRC
14:01 *** ducttape_ has joined #openstack-lbaas
14:18 *** ducttape_ has quit IRC
14:30 *** ducttape_ has joined #openstack-lbaas
15:10 *** ducttape_ has quit IRC
16:29 *** ducttape_ has joined #openstack-lbaas
16:36 *** ducttape_ has quit IRC
16:47 *** ducttape_ has joined #openstack-lbaas
16:48 *** ducttape_ has quit IRC
17:16 *** armax has joined #openstack-lbaas
17:50 *** armax has quit IRC
18:00 *** madhu_ak has joined #openstack-lbaas
18:21 *** madhu_ak has quit IRC
19:58 *** kobis has quit IRC
19:59 *** kobis has joined #openstack-lbaas
20:39 *** bana_k has joined #openstack-lbaas
21:28 *** bana_k has quit IRC
22:56 *** ducttape_ has joined #openstack-lbaas
23:05 *** ducttape_ has quit IRC
