Monday, 2015-08-17

*** mestery has joined #openstack-lbaas00:59
*** mestery has quit IRC01:36
*** mestery has joined #openstack-lbaas01:36
*** mestery has quit IRC01:37
*** logan2 has quit IRC02:18
*** haigang has joined #openstack-lbaas02:39
*** enikanorov2_ has quit IRC02:41
*** haigang has quit IRC02:46
*** logan2 has joined #openstack-lbaas02:57
*** ganeshna has joined #openstack-lbaas03:03
openstackgerritBrandon Logan proposed openstack/neutron-lbaas: Octavia driver correctly reports statuses  https://review.openstack.org/21359703:40
rm_workjohnsom: what is the dependency that is broken03:46
rm_workjohnsom: for the UDP thing03:46
rm_workHM?03:46
rm_workfailover flows?03:47
rm_workor something upstream03:47
*** openstack has joined #openstack-lbaas04:18
*** diogogmt has quit IRC04:30
*** vivek-ebay has joined #openstack-lbaas04:43
*** vjay6 has joined #openstack-lbaas05:23
*** vivek-ebay has quit IRC05:25
openstackgerritAdam Harwell proposed openstack/octavia: Adding amphora failover flows  https://review.openstack.org/20233605:34
rm_workrebasing the stack i'm about to be working on top of05:34
rm_workman, gerrit is SLOOOOOW tonight05:37
rm_worktaking like 5-10 seconds to respond05:37
*** vjay6 has quit IRC05:37
*** enikanorov2 has joined #openstack-lbaas05:38
rm_workerr hmm05:42
rm_workmaybe i don't need to be on the end of that chain05:42
*** enikanorov2 has quit IRC05:43
rm_workoh, nope, i do05:43
*** bana_k has joined #openstack-lbaas05:46
*** vjay6 has joined #openstack-lbaas05:52
*** vjay6 has quit IRC06:05
*** vjay6 has joined #openstack-lbaas06:09
*** vjay6 has quit IRC06:15
*** vjay6 has joined #openstack-lbaas06:16
*** vjay6 has quit IRC06:21
rm_workxgerman: hmm, after looking at this for a bit, I am not sure if i think this needs to be done as a mixin06:28
rm_worklike... why is the code that handles DB updates for statuses a mixin? are there really multiple options for that?06:29
rm_workif a member goes down... the member needs to be updated in the DB to "down", period, right?06:29
rm_workand I am thinking maybe this new "event stream" thing doesn't really need to have a bunch of options either...06:31
rm_workoriginally i figured it would have at least "send to queue" and "just log"06:31
rm_workbut...06:31
rm_workoh, actually I see06:31
rm_workwas thinking of the wrong queue06:32
rm_workthis is driving me crazy though, I can't find ANYTHING that actually *uses* any of these mixins06:40
rm_workthe Health or Stats mixins06:40
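For context on the "picked via config" question raised here: the usual answer in OpenStack projects is that the handler is not inherited as a mixin at all but loaded dynamically from a dotted path named in the config file. A minimal sketch, with a made-up option value and class path (not Octavia's actual plumbing):

    import importlib

    def load_status_handler(dotted_path):
        """Import 'package.module.ClassName' named in config and instantiate it."""
        module_path, class_name = dotted_path.rsplit('.', 1)
        module = importlib.import_module(module_path)
        return getattr(module, class_name)()

    # e.g. octavia.conf could carry something like:
    #   status_update_driver = my_pkg.handlers.DbStatusHandler   (hypothetical)
    # handler = load_status_handler('my_pkg.handlers.DbStatusHandler')

With that pattern the "mixin" never needs to be mixed into anything; it is just a class the health manager instantiates and calls.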
bana_kdoes octavia support docker as of now?06:59
*** jschwarz has joined #openstack-lbaas07:16
*** ganeshna has quit IRC07:23
*** enikanorov2 has joined #openstack-lbaas07:34
*** iwi has joined #openstack-lbaas07:43
*** numan has joined #openstack-lbaas07:43
*** bana_k has quit IRC07:49
*** ganeshna has joined #openstack-lbaas07:50
rm_youbana_k: "support"?08:11
rm_youah, he's gone08:11
*** mixos has quit IRC08:25
rm_workerg, xgerman i think i am going to wait and discuss this in the morning rather than overwrite your CR with my current stuff08:38
*** iwi has quit IRC08:44
*** apuimedo has joined #openstack-lbaas08:55
*** vjay6 has joined #openstack-lbaas09:03
*** kbyrne has quit IRC09:12
*** kbyrne has joined #openstack-lbaas09:17
*** vjay7 has joined #openstack-lbaas09:54
*** vjay6 has quit IRC09:55
*** vjay8 has joined #openstack-lbaas10:04
*** vjay7 has quit IRC10:04
*** ganeshna has quit IRC10:50
*** vjay8 has quit IRC11:17
*** ganeshna has joined #openstack-lbaas11:33
*** vjay8 has joined #openstack-lbaas11:37
*** vjay8 has quit IRC11:42
*** ganeshna has quit IRC11:42
*** woodster_ has joined #openstack-lbaas11:46
*** chlong has quit IRC11:59
*** ebagdasa has joined #openstack-lbaas12:00
*** vjay8 has joined #openstack-lbaas12:04
openstackgerritOpenStack Proposal Bot proposed openstack/neutron-lbaas: Updated from global requirements  https://review.openstack.org/21226112:09
*** apuimedo has quit IRC13:15
*** diogogmt has joined #openstack-lbaas13:41
*** vjay8 has quit IRC13:48
*** diogogmt has quit IRC13:54
*** chlong has joined #openstack-lbaas14:03
*** numan has quit IRC14:37
*** diogogmt has joined #openstack-lbaas14:48
*** apuimedo has joined #openstack-lbaas14:50
*** Dave has joined #openstack-lbaas14:52
*** mlavalle has joined #openstack-lbaas14:56
*** mlavalle has quit IRC14:59
*** mlavalle has joined #openstack-lbaas14:59
johnsomrm_work rm_you It's the failover flows15:14
*** sbalukoff has quit IRC15:17
*** chlong has quit IRC15:18
openstackgerritArmando Migliaccio proposed openstack/neutron-lbaas: Remove fall-back logic to service provider registration  https://review.openstack.org/20622115:25
*** chlong has joined #openstack-lbaas15:32
*** diogogmt_ has joined #openstack-lbaas15:35
*** diogogmt has quit IRC15:37
*** diogogmt_ is now known as diogogmt15:37
*** chlong has quit IRC15:38
*** chlong has joined #openstack-lbaas15:40
*** TrevorV has joined #openstack-lbaas15:41
*** vjay8 has joined #openstack-lbaas15:47
*** vjay8 has quit IRC15:52
*** mestery has joined #openstack-lbaas15:53
*** enikanorov2 has quit IRC16:12
*** vivek-ebay has joined #openstack-lbaas16:14
*** vivek-ebay has quit IRC16:29
johnsomTrevorV Any chance you could get the tox tests passing on https://review.openstack.org/#/c/202336/ today?16:43
*** fnaval has joined #openstack-lbaas16:45
TrevorVjohnsom, that's the thing on my plate today, yessir :)16:47
johnsomExcellent.  I was going to volunteer if you didn't have the cycles.  It's hosing up my test runs for the health manager.16:48
bloganrm_work, xgerman, johnsom: https://review.openstack.org/#/c/213597/16:48
bloganpatch for the status updates in the driver16:48
bloganthe octavia driver i mean16:48
johnsomSaw that, cool16:49
*** vivek-ebay has joined #openstack-lbaas16:50
*** bharath has joined #openstack-lbaas16:52
TrevorVblogan, you blew through that this weekend?  Thanks.  I couldn't get to it this weekend...16:52
TrevorVSorry about that johnsom16:53
TrevorVBut getting tox passing won't necessarily complete the flow for failover, you know that right?16:53
johnsomTrevorV No problem16:53
TrevorVokay16:53
TrevorVJust making sure16:53
johnsomYeah, I'm fine that it is still WIP16:53
johnsomI'm just writing up the config stuff for the health manager and since it depends on the health_manager which depends on failover flows my tox runs are failing.16:54
*** jschwarz has quit IRC16:55
*** enikanorov2 has joined #openstack-lbaas17:00
openstackgerritTrevor Vardeman proposed openstack/octavia: Adding amphora failover flows  https://review.openstack.org/20233617:00
TrevorVjohnsom, there y'ar mate17:00
johnsomThank you Sir!17:01
TrevorVNow to get the flow working ha ha17:04
xgermanyep17:05
*** vivek-ebay has quit IRC17:10
*** vivek-ebay has joined #openstack-lbaas17:14
*** mestery has quit IRC17:15
*** KunalGandhi has joined #openstack-lbaas17:16
*** bana_k has joined #openstack-lbaas17:25
*** mestery has joined #openstack-lbaas17:26
openstackgerritMichael Johnson proposed openstack/octavia: Implement UDP heartbeat sender and receiver  https://review.openstack.org/20188217:31
johnsomWIP getting ready for rebase17:31
openstackgerritMichael Johnson proposed openstack/octavia: health manager service  https://review.openstack.org/16006117:32
*** enikanorov2 has quit IRC17:32
*** mestery has quit IRC17:34
openstackgerritMichael Johnson proposed openstack/octavia: Implement UDP heartbeat sender and receiver  https://review.openstack.org/20188217:42
johnsomWIP re-bassing, boom, boom17:45
*** minwang2 has joined #openstack-lbaas17:51
*** madhu_ak has joined #openstack-lbaas18:05
rm_workxgerman: so i am failing at figuring out where the statuses are currently updated in octavia18:09
rm_workxgerman: https://review.openstack.org/#/c/201882/31/octavia/controller/healthmanager/heartbeat_udp.py,cm line 209 updates the DB but i think that DB is not the main DB Table, but the separate "healthTable"18:10
rm_workright?18:10
rm_workwhich then something else is supposed to read, and then update the main DB18:10
rm_workxgerman: which i would assume this reads: https://review.openstack.org/#/c/160061/38/octavia/controller/healthmanager/health_manager.py,cm like on line 47 but then all it does is trigger a failover flow on line 5318:11
xgermanwe are having our standup — will get back in 1018:11
rm_worki don't see any code anywhere in octavia right now that ACTUALLY updates the statuses for stuff in the DB18:11
rm_workxgerman: kk18:11
johnsomrm_work Also in standup, but I agree with you.18:12
rm_workwhich i assume is what this is for: https://github.com/openstack/octavia/blob/master/octavia/controller/healthmanager/update_health_mixin.py18:13
rm_workbut no one actually uses it18:13
rm_workand i am honestly not sure exactly HOW they'd use it18:13
rm_workbecause i still don't quite understand how "picking a mixin via config" works, because you inherit from the mixin you want in the class definition, so how is this one selected?18:14
rm_workblogan: i think the queue solution is just as trivial to do, if we can actually fix the fact that octavia doesn't update its DB correctly at all yet for status changes ...18:16
rm_workblogan: would rather have just seen the neutron side of that implemented :/18:17
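The flow rm_work pieces together above (heartbeat_udp writes to the amphora health table; the health manager reads it and triggers failover) looks roughly like the sketch below. Function names, repo methods, and thresholds are illustrative, not Octavia's actual code:

    import time

    STALE_AFTER = 60  # seconds without a heartbeat before an amphora is considered dead

    def receiver_loop(sock, health_repo):
        """UDP receiver: decode a heartbeat and record when the amphora last reported."""
        while True:
            packet, _addr = sock.recvfrom(4096)
            msg = decode_and_verify(packet)          # hypothetical helper
            health_repo.touch(msg['amphora_id'], last_update=time.time())

    def health_check_loop(health_repo, trigger_failover):
        """Health manager: find amphorae that stopped reporting and fail them over."""
        while True:
            cutoff = time.time() - STALE_AFTER
            for amphora_id in health_repo.stale_since(cutoff):
                trigger_failover(amphora_id)         # e.g. the amphora failover flow
            time.sleep(10)

Note that nothing in this loop touches member or listener status, which is exactly the gap being discussed.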
*** mixos has joined #openstack-lbaas18:19
*** sbalukoff has joined #openstack-lbaas18:21
*** bharath has quit IRC18:24
xgermanrm_work the plan is for the UDP receiver to include the mixins - this is how we get around a driver-calling-another driver18:26
xgermanwe still need to integrate that with Carlo’s code18:26
rm_workhmm18:27
rm_workexcept i dont think so18:27
bloganrm_work: are you talkin about prov statuses or operating statuses?18:27
rm_workthe UDP receiver just writes to the separate amphorahealth table18:27
rm_workwhich is the one we were saying might be in redis or might not18:27
rm_workbut is separate nonetheless18:27
rm_workthe thing that actually updates all the statuses would need to read from that18:27
xgermanyea, and it also needs to integrate with the main tables to ship what blogan calls operational status18:28
rm_workwhich i think is min's HM review?18:28
xgermanso there are two table structures:18:28
bloganrm_work: and it might be as trivial, but we have this in case it's not, plus we're going to need to have some kind of check to determine when something has "timed out", as in something is in PENDING_CREATE but octavia never sends back a "CREATED" event18:28
xgerman1) To detect if heartbeats are missing — that's implemented18:28
xgerman2) Keep track of member up/down; listener up/down — that's missing18:29
rm_workyeah my concern is the lack of #2 even *within* octavia18:29
rm_workI was assuming that was at least handled already18:29
rm_workbut it is not :(18:29
blogannope18:29
johnsomThe health checker needs to decide if there is an issue and update the amphora/lb/etc tables18:29
xgermanwell, we have the mixins but they are not hooked up18:29
rm_workagain, i am not sure how you select which mixin to use18:30
xgermannope, the messages need to be processed and sent to the mixins18:30
rm_worki was hoping for an example but it's not in use yet18:30
bloganxgerman: i thought we decided to just have a health manager interface, instead of using the mixin, the health manager being pluggable just like the other drivers18:30
bloganxgerman: i could be getting the mixin mixed up here though18:30
rm_workyeah i am having a hard time wrapping my head around how "configuration selectable mixins" works18:30
rm_workwhere was the example you linked me last time?18:30
xgermanwell, we need a way to take the information from the UDP message and put it wherever it’s needed18:31
rm_workit scrolled off my history and i couldn't find it18:31
rm_workwell, it is ALWAYS needed in the DB18:31
rm_worki don't see a situation where we wouldn't want to update our DB statuses18:31
xgermanI am now trying to rethink that since we need to write to the Octavia DB and put it on the queue18:31
xgermanso we likely need a list of actions18:31
rm_workwell18:31
bloganthe idea was to allow a polling implementation and this event driven implementation18:32
rm_worki figure we need to get it writing to the DB *first*18:32
bloganbut we were going to do the event driven implementation18:32
bloganyes18:32
bloganwe do18:32
rm_workand then the eventstream thing i had envisioned would be able to hook in there18:32
xgermanwell, we have code for doing all of that — we just need to hook them up with the UDP receiver18:33
rm_workIMO we step back from the queue thing for a second, and seriously just get octavia's own DB up to date18:33
xgermanand I wanted that code to be a bit more mature before taking it as a dependency18:33
rm_workthen we can revisit the next step18:33
xgermanso basically we need to stabilize the UDP stuff first18:33
rm_workall of my plans relied on the DB being properly updated already18:33
xgermanrm_work we have the update code but we don't have a stable UDP receiver to insert that into18:34
xgermanwill get on that today and then everything should be more clear18:34
rm_workok18:35
rm_worki'll just work on the neutron-lbaas side then maybe18:35
xgermanyeah18:35
rm_worksince at least it's 100% clear what has to happen on that side18:35
rm_workneed to read from queue :P18:35
xgerman:-)18:35
bloganin the meantime we can get that review i pushed up merged and then switch the code whenever this is done18:35
bloganrm_work: i would like whatever event stream code you put into neutron-lbaas to be driver agnostic, but that can be done in the future18:36
rm_workwell, even your review isn't helpful yet, because octavia doesn't know what its own statuses are >_>18:36
rm_workblogan: that was the plan18:36
bloganrm_work: octavia updates its statuses correctly18:36
bloganrm_work: thats done outside of the health stuff18:37
bloganit doesn't update the operating statuses though18:37
bloganother than dumbly saying ONLINE vs OFFLINE18:37
xgermanyep, ops status18:37
xgermanis missing18:37
rm_workblogan: i mean member up/down, listener up/down18:37
bloganwell OFFLINE when being provisioned, ONLINE when being updated18:37
bloganrm_work: yeah but what I've written shouldn't be used to monitor operating statuses18:37
bloganthat's not scalable18:37
rm_workyeah18:37
rm_workthat is why we need an eventqueue system18:37
bloganwhat i've written can be used to just monitor provisioning status when a change happens, that is scalable18:38
rm_workhmm18:38
rm_workso my plan was:18:38
bloganbut if we can use the event queuing system for this that's fine, but we'll still need some process to monitor when something in octavia gets stuck in a provisioning status anyway18:38
xgermanyeah, I think we are good for provisioning status with your code18:40
xgermanwhat we now need is the operational status one18:40
bloganyeah18:40
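A minimal sketch of the missing operational-status piece being discussed here: take the listener/member health reported in a single heartbeat and write it into the main tables. The message layout and repository methods below are assumptions for illustration, not existing Octavia code:

    ONLINE, OFFLINE = 'ONLINE', 'OFFLINE'

    def update_operational_status(msg, listener_repo, member_repo):
        """Map statuses from one heartbeat message onto the main DB tables."""
        for listener_id, listener in msg.get('listeners', {}).items():
            listener_repo.update(
                listener_id,
                operating_status=ONLINE if listener['up'] else OFFLINE)
            for member_id, member in listener.get('members', {}).items():
                member_repo.update(
                    member_id,
                    operating_status=ONLINE if member['up'] else OFFLINE)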
rm_workgeneric interface "EventStream" with one method, "emit" which takes an event type (from constants.py, like EVENT_MEMBER_STATUS_CHANGE or EVENT_LB_PROVISION_COMPLETE or whatever) and there would be a QueueEventStream and a LoggingEventStream to start with, but everywhere statuses are updated (health monitor for operational, flows(?) for provisioning), that interface would be called with the correct event type and the data for it18:40
bloganso yall okay with getting that merged in now (after i write the tests), and then switch to use a better solution later?18:40
bloganrm_work: thats what i figured you had in mind, and that looks sound to me and will give us the ability to send more info to whomever wants to consume these events18:41
rm_workfor the QueueEventStream, the EVENT_TYPE would be the name that goes on the queue18:41
rm_workso, the only issues i have still are that I am not sure how to normalize the data for the events18:42
rm_workEventStream.emit(EVENT_MEMBER_STATUS_CHANGE, member_dict) ?18:42
rm_workand just pass the dict for the member?18:42
bloganyeah, not everything the member has but for starters member id and status18:43
rm_workright, so18:43
rm_workdo we have a small data structure that we define, like {"object_type", "object_id", "new_status"} ?18:43
rm_workor is that too simplistic for health messages18:44
bloganit's going to have to be turned into a dictionary anyway, but if you want to turn it into a class you'd have to define it18:44
rm_workright18:44
rm_workbut18:44
rm_workcan we easily do that?18:44
bloganor you can use the member one we have now and infer what the object_type is by the class and use the id and provisioning fields18:44
rm_workwill something like the dict i mentioned above appropriately capture the message in all cases?18:44
rm_workyeah18:45
rm_workbut for what we put on the queue18:45
bloganwe will need to include stats in these18:45
rm_workor log in the logs18:45
bloganlike bandwidth and connections18:45
rm_worksure, but i assume stats would be a different model18:45
bloganfor listeners18:45
*** bharath has joined #openstack-lbaas18:45
blogansame event stream though?18:45
rm_workye18:46
*** vivek-ebay has quit IRC18:46
rm_workwhat i had for example constants was18:46
rm_workEVENT_MEMBER_STATUS = 'MEMBER_STATUS_EVENT'18:46
rm_workEVENT_LISTENER_STATUS = 'LISTENER_STATUS_EVENT'18:46
rm_workEVENT_STATS_UPDATE = 'STATS_UPDATE_EVENT'18:46
rm_workwhere there are essentially two "MAJOR" types of events emitted18:46
rm_workSTATS events, which would have one model type18:46
rm_workand STATUS events18:46
rm_workwhich would have another18:46
bloganyeah that'd be fine i believe18:46
rm_workthose two are just not possible to combine generically18:46
rm_workmember and listener status changes can both be mapped to {"id", "type", "status"}18:47
rm_workso I have code started for this18:47
bloganthe api listening on the neutron lbaas side will have methods that expect the model, so the model itself will be an interface as well18:47
rm_workbut i basically scrapped xgerman's mixin stuff, which is why i didn't submit yet18:47
rm_workyeah18:47
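What rm_work describes a few lines up could look roughly like this. Class names and the constants follow the ones quoted in this conversation; the transport client is injected so the sketch stays agnostic about oslo.messaging versus anything else:

    import abc
    import logging

    class EventStream(abc.ABC):
        """One method: emit an event type (from constants.py) plus a payload."""

        @abc.abstractmethod
        def emit(self, event_type, payload):
            pass

    class LoggingEventStream(EventStream):
        """The 'just log' option."""

        def __init__(self, logger=None):
            self.log = logger or logging.getLogger(__name__)

        def emit(self, event_type, payload):
            self.log.info('%s: %s', event_type, payload)

    class QueueEventStream(EventStream):
        """The 'send to queue' option; the event type doubles as the name that
        goes on the queue.  `client` is assumed to expose a cast()-style method
        (for example an oslo.messaging RPCClient), but nothing here depends on
        a particular library."""

        def __init__(self, client):
            self.client = client

        def emit(self, event_type, payload):
            self.client.cast({}, event_type, payload=payload)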
xgermanrm_work you want those sent when things change or every time we get info, e.g. member1 up, next second member_1 up, ...18:48
rm_workwhen things change only18:48
bloganwell when the heartbeat says things change right?18:48
rm_worksince it's a queue, that should not pose any problems -- it should be durable (we shouldn't lose any data), and it should be ordered (so lots of quick changes will still be reflected correctly)18:48
xgermannope, the heartbeat tells you the current status18:48
*** minwang2 has quit IRC18:49
xgermanrm_work I am not doubting that part :-)18:49
rm_workxgerman: right, it says the current status, writes it to the *health* table, and then the HM interprets if there is any change18:49
xgermanwell, no18:49
rm_workwhich is where the emission to the eventstream would actually happen18:49
xgermanthat health_table only covers amphora health18:49
rm_workwhich includes member statuses, doesn't it?18:49
xgermanNO18:50
xgermanmember status is part of the members table18:50
rm_workerr but18:50
rm_workthat's where member status updates come from18:50
rm_workthe UDP packets18:50
rm_workhttps://review.openstack.org/#/c/201882/33/octavia/controller/healthmanager/heartbeat_udp.py,cm18:50
xgermanyes, and then we need to check in the DB if there is a change and then send it to you18:50
rm_workright18:50
rm_workbut18:50
xgermanso it's computationally expensive - just saying18:51
rm_workthe UDP Heartbeat receiver isn't responsible for that18:51
rm_workright?18:51
rm_workit needs to just "receive -> dump to db -> goto start"18:51
xgermanmaybe18:51
xgermanthere is some performance issue… if we write everything, or read first and only write if it changes, or...18:52
rm_workright now, if a member goes down, it is reported in the UDP heartbeat... and then what, thrown away?18:52
xgermanit is reported in each UDP heartbeat while it's down18:52
rm_workyeah18:52
xgermanand we will add the code to write it to the DB each heartbeat18:52
xgermanbut you want in your event stream a CHANGE18:52
rm_workbut ... the udp receiver doesn't store that?18:52
rm_workor does it18:52
rm_workwell18:53
xgermannot yet18:53
xgermanand I will fix that18:53
rm_workdo you really want the UDP Receiver to be writing to the member, listener, etc tables for every LB every time it gets a heartbeat?18:53
xgermanhow else would you do it?18:53
rm_workok, i think i do just need to wait18:53
rm_workand see what you have planned there18:53
xgermanwell, I will just write that to the DB and hope we don’t need to scale for the Tokyo demo18:54
xgermanin real life you probably need to redis that18:54
xgermanor at least have a memory cache so you only write when it changes18:54
rm_workyeah18:56
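The "memory cache so you only write when it changes" idea, sketched below with illustrative names. A per-process cache also misses changes written by other health-manager processes, which is part of the scalability worry in this exchange:

    _last_status = {}  # (object_type, object_id) -> last status seen by this process

    def report_status(object_type, object_id, status, repo, stream):
        """Hit the DB and the event stream only when the reported status changed."""
        key = (object_type, object_id)
        if _last_status.get(key) == status:
            return  # unchanged: skip the DB write and the event
        _last_status[key] = status
        repo.update(object_id, operating_status=status)
        stream.emit('%s_STATUS_EVENT' % object_type.upper(),
                    {'object_type': object_type, 'id': object_id,
                     'new_status': status})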
*** vivek-ebay has joined #openstack-lbaas18:56
*** vivek-ebay has quit IRC19:07
*** TrevorV has quit IRC19:07
*** vivek-ebay has joined #openstack-lbaas19:10
bloganyou don't have to write every time, you can just check if there is a difference19:11
bloganit's still a select request every time though19:11
xgermanyep19:12
xgermanhence I am worried about scalability but that is likely M work19:12
blogani think we all are worried about it, but i think at some point the difference will have to be calculated anyway19:15
bloganby our system or neutron-lbaas19:15
blogani mean by octavia or neutron-lbaas19:15
*** madhu_ak_ has joined #openstack-lbaas19:26
*** minwang2 has joined #openstack-lbaas19:28
*** madhu_ak has quit IRC19:30
*** madhu_ak_ is now known as madhu_ak19:30
openstackgerritGerman Eichberger proposed openstack/octavia: Implement UDP heartbeat sender and receiver  https://review.openstack.org/20188219:33
*** vivek-ebay has quit IRC20:01
*** abdelwas has joined #openstack-lbaas20:13
openstackgerritPhillip Toohill proposed openstack/neutron-lbaas: Registering consumers  https://review.openstack.org/18870320:14
*** crc32 has joined #openstack-lbaas20:15
openstackgerritPhillip Toohill proposed openstack/octavia: Updating ssh driver with root user check  https://review.openstack.org/20320320:16
crc32german and johnsom what are you pushing into my patchsets?20:18
rm_workptoohill: whelp, Castellan-Certs is dead, and I think we may just want to rip out the whole CertManager interface I went to all that effort to make, and just use Barbican directly :/20:18
crc32xgerman johnsom you both pushed a patchset each around the weekend into my code base. What is it you're trying to do?20:19
crc32https://review.openstack.org/20188220:19
rm_worki think those were rebases?20:19
johnsomcrc32 I hooked your code into Min's health_manager and made it a dependent patch.  I also started work on the config file stuff.20:19
rm_workah20:19
crc32xgerman took out my Queue reader and writer.20:20
johnsomcrc32 We talked about me doing the config stuff on Friday right?20:20
crc32yes we did. I didn't see your changes specifically yet but german ripped out my Queue sender and receiver.20:20
johnsomcrc32 I have a couple of bugs, but should have the config stuff done today20:22
crc32Also I was going to comment about your comments about sha256. At the midcycle we talked about using SHA256 and everyone including you seemed fine with it back at the midcycle. The computations are down to the microsecond and it is only 30% slower than sha1 and we have bigger bottlenecks that are 4 orders of magnitude slower than sha256.20:22
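For context, HMAC-SHA256 over a heartbeat payload is a few lines and microseconds per packet. The sketch below shows the shape of it, not the amphora agent's actual packet format; the shared key and framing are assumptions:

    import hashlib
    import hmac

    KEY = b'shared-heartbeat-secret'   # assumption: configured on both ends

    def sign(payload):
        """Append an HMAC-SHA256 digest (32 bytes) to the payload."""
        return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

    def verify(packet):
        """Split off the digest, recompute it, and reject tampered packets."""
        payload, digest = packet[:-32], packet[-32:]
        expected = hmac.new(KEY, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, digest):
            raise ValueError('bad heartbeat signature')
        return payload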
crc32I'm reverting german's changes. I don't know why he broke my Queue sender and Receiver20:23
ptoohillI seen that a sec ago rm_work :/20:23
johnsomOk, I just forgot.20:23
ptoohillrm_work: Do we have to rip out the interface20:23
ptoohillmaybe it's something we implement later or something else pops up20:23
crc32also in response to your pid reader. It seems the structure of the stats socket is "./uuid/uuid.pid and not ./uuid.haproxy.pid"20:24
crc32and I don't know why it's different but if you log into the amphora you will see the directory structure.20:24
johnsomOk20:25
openstackgerritPhillip Toohill proposed openstack/octavia: Updates for containers functionality  https://review.openstack.org/19995420:27
xgermancrc32 what is your vision of how to write the operational status to the DB?20:27
xgermanI think the queue approach is flawed and we really need to get that resolved so we can be part of Liberty20:28
crc32I read the UDP, send the stats down the Queue, then multiple workers can read from the Queue on the other side and write to the database.20:28
xgermanthat's the same thing my code does, just queuing it up in a threadpool20:29
xgermanand also hooking up the operational status reporting we need20:30
crc32I wouldn't mind you putting the thread pool on the other side of the Queue but leave one thread reading the UDP packets.20:31
xgermannot sure what the queue is adding?20:31
johnsomI think what xgerman added will work20:31
rm_workptoohill: yeah we can leave it for now but at some point it may just need to be ripped out20:32
crc32and I know what I did would work as well.20:32
xgermanwell, you need a thread pool in any case so the queue is superfluous20:32
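Both designs being argued about here reduce to the same shape: one thread owns the UDP socket and hands the DB work to a pool. A rough sketch, with the socket setup and the handler purely illustrative; note the executor's internal work queue is unbounded, which is the "can grow indefinitely" concern raised later in this log:

    import socket
    from concurrent.futures import ThreadPoolExecutor

    def run_receiver(handle_heartbeat, bind_addr=('0.0.0.0', 5555), workers=10):
        """Single reader thread; DB updates happen on the worker pool."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(bind_addr)
        pool = ThreadPoolExecutor(max_workers=workers)
        while True:
            packet, _addr = sock.recvfrom(4096)    # only this thread touches the socket
            pool.submit(handle_heartbeat, packet)  # handler parses + writes to the DB

A bounded queue.Queue with dedicated worker threads would cap memory at the cost of blocking (or dropping) when the workers fall behind.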
rm_workptoohill: CertGenerator can stay, it will still be useful, but CertManager is critically flawed, in that no end-user would ever be able to store certs without Barbican :/20:33
openstackgerritMadhusudhan Kandadai proposed openstack/neutron-lbaas: Add basic listener scenario tests  https://review.openstack.org/21385220:33
rm_workptoohill: which i think we ran into with the local impl20:33
rm_workand hand-waved to deal with accepting certs and storing them on-behalf of the user later20:33
rm_workbut i don't really think that's feasible to implement sanely20:33
rm_workat least, it's not feasible to sanely implement a *mixed* approach -- either we'd have to store on-behalf (more like CLB1) or else we'd have to let the user store, not a combination20:34
rm_workand I like the idea of letting the user store, because that enables the re-usability I was hoping for20:35
crc32I disagree but I guess it doesn't matter.20:37
ptoohillrm_work: I agree. I went through the reviews on that. Makes sense, sadly enough.20:40
rm_workyeah20:40
openstackgerritPhillip Toohill proposed openstack/octavia: Adding sni_containers to Octavia API  https://review.openstack.org/20968420:41
*** vivek-ebay has joined #openstack-lbaas21:00
*** amotoki has joined #openstack-lbaas21:09
*** mestery has joined #openstack-lbaas21:15
*** kev has joined #openstack-lbaas21:21
*** kev is now known as Guest9161121:21
Guest91611neutron-lbaasv2-agent is giving me this error AttributeError: 'module' object has no attribute 'PeriodicTasks' any ideas?21:22
Guest91611I'm trying to run kilo21:22
bloganGuest91611: module object requires neutron to be installed, do you have that?21:23
Guest91611neutron service is running21:25
bloganGuest91611: can you import it in a python interpreter in the same environment the agent is being run in?21:26
Guest91611ok21:26
bloganGuest91611: thats just a simple test i'd do to troubleshoot21:27
bloganGuest91611: plus how are you installing neutron and neutron-lbaas? devstack? a package?21:27
Guest91611devstack21:27
Guest91611I cloned the kilo devstack21:27
bloganGuest91611: are you doing the enable_plugin neutron-lbaas line in the localrc?21:28
Guest91611used following conf https://gist.github.com/anonymous/a79a047d93e95c41f62b21:29
johnsomcrc32 heads up, I am going to change the jinja template for the amphora agent.21:29
bloganGuest91611: ah, so thats pulling down the master branch21:29
bloganGuest91611: for neutron-lbaas21:29
bloganGuest91611: you need to specify the stable/kilo branch21:30
bloganso at the end of that enable_plugin line add stable/kilo21:30
crc32xgerman I'm not sure what this check() code is doing. My dorecv() method takes no parameters and you're passing in a db object. also check takes amphora ID as a parameter but you don't know what amphora_id will be until you get the udp packet.  What does "def check(self, amphora_id)" actually do?21:30
Guest91611oh ok. ll do it.21:30
Guest91611thanks a lot21:31
crc32johnsom: Ok i think thats potholes stuff21:31
bloganGuest91611: np21:31
crc32ptoohill's stuff21:31
johnsomcrc32 It's code I added this weekend.  Just didn't want you to be writing tests on the config format while I was changing it21:32
crc32I was already writing mock tests on friday so those are ruined already.21:33
Guest91611just FYI I followed this one http://docs.openstack.org/developer/devstack/guides/devstack-with-lbaas-v2.html21:33
openstackgerritSherif Abdelwahab proposed openstack/octavia: Amphora Flows and Service Drivers for Active Standby  https://review.openstack.org/20625221:33
crc32but still though. What is the "def check(self, amphora_id) code supposed to do?"21:33
bloganGuest91611: yeah but that's just doing everything from master, doing it from a release requires this difference because the enable_plugin directive doesn't understand branches yet (not sure it ever will)21:34
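Concretely, blogan's suggestion is to pin the plugin to the kilo branch in the devstack local config; something along these lines (the repo URL is shown as an example of the syntax, the key part is the trailing branch argument):

    # local.conf / localrc
    enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas stable/kilo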
crc32line 186 and up in heartbeat_udp.py21:34
ptoohilljohnsom: I'm curious about the template change. Though i can just view the change when it's up.21:34
ptoohilloh21:34
ptoohillthere's another jinja template somewhere isn't there, I think that's the one you speak of?21:34
johnsomptoohill It's not the haproxy template, it's one I wrote for the amphora-agent config file21:34
ptoohillyea, gotcha ;)21:35
Guest91611oh ok . thanks21:35
*** KunalGandhi has quit IRC21:36
openstackgerritPhillip Toohill proposed openstack/octavia: Adding sni_containers to Octavia API  https://review.openstack.org/20968421:40
rm_workxgerman: so were you actively working on figuring out the operational status updates?21:47
rm_workis that something you're planning to just propose some code for, or is it something we need to discuss more?21:47
xgermanI will get the database updates working and then once I see your queue spec I can put them on the queue21:48
*** jorgem has joined #openstack-lbaas21:48
rm_workhmm21:49
rm_workwell i wasn't working on the octavia side of the queue, i was looking at the neutron-lbaas side21:49
xgermanyeah, that sounds fine21:49
xgermanI can code the Octavia side up21:49
crc32xgerman I'm not sure what this check() code is doing. My dorecv() method takes no parameters and you're passing in a db object. also check takes amphora ID as a parameter but you don't know what amphora_id will be until you get the udp packet.  What does "def check(self, amphora_id)" actually do?21:49
xgermanbut as I said we need to agree on the message and code21:49
rm_workwell, i can do that too, but wanted to wait until i saw how the db updates were happening21:49
rm_workhmm yes21:50
xgermancrc32 it’s buggy and I need to fix/test21:50
xgermanI put it up so I am not trampling on everybody’s feet and I think I am21:50
crc32so do I just revert it back to the queue method for now until we get a working patch set?21:51
xgermanno, we need to have the DB update in there21:53
xgermanI will fix that and write unit tests just need to coordinate with johnsom so his changes don’t overwrite mine21:55
crc32Do you have time to discuss this?21:55
crc32right now it looks like your code will start dropping UDP packets if the thread pool gets full.21:56
johnsomNo, it won't drop UDP packets21:56
crc32check is calling dorecv()21:56
johnsomThe DB jobs just get queued up if all of the workers are busy21:57
crc32so the thread pool queue can grow indefinitely21:57
bana_kdeleting the loadbalancer should delete the corresponding amphora?21:59
bloganbana_k: yes22:00
bana_khmm ok. Not happening though as of now22:00
rm_workxgerman: so do you agree that a generic data-model for EVENT_*_STATUS_CHANGE could be: {"object_type", "id", "new_status"} ?22:01
*** KunalGandhi has joined #openstack-lbaas22:01
xgermansure22:01
rm_workok22:01
*** mestery has quit IRC22:01
bloganbana_k: oh i think there's a bug where if the port was created by neutron-lbaas the loadbalancer delete will fail in octavia22:01
xgermanbut we need to agree on those values22:01
rm_workwell22:01
rm_workI figured the "type" would be a constant22:01
xgermanand id's are not the same in octavia + lbaas necessarily22:01
xgerman(or are they - I forgot)22:02
bloganneutron-lbaas you mean?22:02
xgermanyep22:02
bloganxgerman: they are the same now, octavia allows the id to be specified22:02
xgermanawesome!!22:02
xgermanso yeah, rm_work give me the list of values and I will plug them in22:03
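As a usage example of the EventStream sketched earlier, the "list of values" is just the constants already quoted above plus the agreed payload keys; purely illustrative:

    EVENT_MEMBER_STATUS = 'MEMBER_STATUS_EVENT'
    EVENT_LISTENER_STATUS = 'LISTENER_STATUS_EVENT'
    EVENT_STATS_UPDATE = 'STATS_UPDATE_EVENT'

    # status-change payload shape agreed on above; ids are shared with neutron-lbaas
    example = {'object_type': 'member',
               'id': 'some-member-uuid',
               'new_status': 'OFFLINE'}
    # stream.emit(EVENT_MEMBER_STATUS, example)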
bana_kblogan:  should I see any errors in o-controller ?22:03
bloganbana_k: o-cw?22:03
bana_kblogan : yes22:03
bloganbana_k: yes22:03
bana_kblogan: didn't see any22:03
*** mestery has joined #openstack-lbaas22:04
bloganthat's something i've been meaning to fix anyway, let me try to get something pushed up real quick22:04
bloganbana_k: hmmm so it said the lb-delete-flow successfully completed?22:04
bana_ko-cw didn't log anything22:04
johnsomblogan bana_k I think there is a comment in the code that we should add the delete22:05
*** KunalGandhi has quit IRC22:05
bloganbana_k: you should changed your octavia.conf to have debug = True22:05
bloganjohnsom: add the delete to what?22:05
johnsomWe were going to have the housekeeping delete the amp later.  So I left a comment to pick it up later22:06
johnsomoctavia/controller/worker/flows/amphora_flows.py:15122:06
bana_kblogan : 2015-08-17 17:06:38.817 12940 INFO octavia.common.config [-] Logging enabled!22:06
bana_kI think for  some reason cw is not logging anything22:07
johnsomWe changed our collective mind and plan to just delete it.  We (or I) haven't got around to fix that22:07
bloganjohnsom: ohhh, then i must be wrong then, but i don't see why we don't just delete it then on the delete request, not a big deal though22:08
bloganjohnsom: oh okay, yes thats what i thought22:08
johnsomblogan Yep22:08
bloganjohnsom: well while i fix this port bug issue i can do that as well22:08
johnsomblogan Be my guest22:08
blogani need to get my coding fix on again, i enjoyed being able to write code this past weekend22:09
*** mestery has quit IRC22:09
rm_workxgerman: yeah I will define a model type I think22:09
rm_workxgerman: and an interface (probably not a mixin) to use22:09
openstackgerritOpenStack Proposal Bot proposed openstack/octavia: Updated from global requirements  https://review.openstack.org/21388922:12
*** amotoki has quit IRC22:14
*** KunalGandhi has joined #openstack-lbaas22:19
xgermanblogan, johnsom we definitely need some debug setting which keeps the amp around for post mortems22:20
johnsomYes22:20
johnsomI don't think we need house keeping to accomplish that however.22:20
xgermanyep22:21
xgermanjust wanted to make sure it doesn’t slip our collective mind :-)22:21
openstackgerritPengtao Huang proposed openstack/neutron-lbaas: change if to else  https://review.openstack.org/21256022:29
bloganxgerman: you think if a delete call comes in for the load balancer we'll want to post mortem that? usually that's a correctly functioning amphora22:33
*** chlong has quit IRC22:34
bloganxgerman: i figured we'd want that for just failing over amphorae, but i suppose there could be the case where a customer deletes it bc it wasn't behaving correctly22:34
xgermanyep22:35
openstackgerritOpenStack Proposal Bot proposed openstack/neutron-lbaas: Updated from global requirements  https://review.openstack.org/21226122:38
openstackgerritOpenStack Proposal Bot proposed openstack/octavia: Updated from global requirements  https://review.openstack.org/21388922:39
*** chadix has joined #openstack-lbaas22:51
*** vivek-ebay has quit IRC22:57
openstackgerritCarlos Garza proposed openstack/octavia: Implement UDP heartbeat sender and receiver  https://review.openstack.org/20188223:01
*** vivek-ebay has joined #openstack-lbaas23:07
*** jorgem has quit IRC23:08
*** mestery has joined #openstack-lbaas23:13
*** mixos has quit IRC23:15
openstackgerritBrandon Logan proposed openstack/neutron-lbaas: Octavia driver correctly reports statuses  https://review.openstack.org/21359723:19
*** tiny-hands has joined #openstack-lbaas23:43
openstackgerritBharath M proposed openstack/octavia: Add Housekeeping to manage spare amphora  https://review.openstack.org/20282923:43
openstackgerritBharath M proposed openstack/octavia: Add Housekeeping to manage spare amphora  https://review.openstack.org/20282923:50
*** crc32 has quit IRC23:51
*** mestery has quit IRC23:55
*** bharath has quit IRC23:58
*** amotoki has joined #openstack-lbaas23:59
