Friday, 2017-01-20

bana_khi, what does admin_state_up for LB mean?00:09
johnsomEnabled00:09
bana_kI have updated admin_state_up to False and I am still able to load balance the traffic00:09
johnsomadmin_state_up = False means it is disabled.  It is still configured, but will not pass traffic00:09
johnsomThat should not be the case.  Which version of the code?00:10
johnsomOlder releases had some bugs around that00:10
bana_klet me check00:10
bana_kDec 1600:10
rm_worksomehow doing devstack on my local machine in a VM is not a lot faster >_>00:10
bana_kcommit id 34873f99bc14ff69b54ea6960a90cdbc9b0c227500:10
johnsomHmm, so that is mitaka.  Looking00:12
bana_kok.00:12
johnsomOctavia driver?00:13
bana_kyes00:13
johnsomYeah, I don't think that fix made it back to Mitaka00:15
johnsomThe fix was here: https://review.openstack.org/#/c/364707/00:15
johnsomYou can search your git history for that patch00:15
bana_kthanks johnsom, ll look00:15
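
For context, the behaviour described above can be sketched roughly as follows. This is only an illustration of the intended admin_state_up semantics, not the actual Octavia/neutron-lbaas driver code, and the backend helper names are invented:

    # Illustrative sketch of the admin_state_up semantics described above.
    # Not the real Octavia driver; `backend` and its methods are hypothetical.
    def apply_load_balancer(lb, backend):
        if not lb.admin_state_up:
            # Disabled: the load balancer stays configured in the database,
            # but the frontend must be torn down so it stops passing traffic.
            # Skipping this teardown is the kind of issue the review linked
            # above addressed in newer releases.
            backend.disable_frontend(lb.id)
            return
        backend.configure_frontend(lb.id, lb.vip_address, lb.listeners)
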
rm_workyeah i pity people running on liberty :/00:16
johnsomYeah, liberty is EOL, so hopefully folks have moved on00:17
rm_workI wonder <_<00:18
*** ducttape_ has joined #openstack-lbaas00:20
rm_workso our py3x tests are broken because of nova, right?00:23
johnsomYeah, last time I looked through the logs00:23
rm_workoh maybe it isn't00:26
rm_worki assumed it was at first but00:26
rm_workit might be our bad00:27
rm_workit's actually here: http://logs.openstack.org/47/382147/16/check/gate-neutron-lbaasv2-dsvm-py3x-api-ubuntu-xenial-nv/40de556/logs/screen-o-cw.txt.gz#_2017-01-19_22_10_40_39100:27
rm_workwhich is on our side... gonna take a look00:28
*** ducttape_ has quit IRC00:31
rm_workyeah ok00:33
rm_workthis is borked00:36
*** gongysh has joined #openstack-lbaas00:37
rm_workjohnsom: opinion00:41
rm_workjohnsom: a file uses both load_balancer and loadbalancer00:41
rm_workwhich one should I normalize to00:41
rm_workI prefer loadbalancer00:41
rm_workbut I am not sure as a project00:41
johnsomYeah, we haven't exactly been careful about that.  I'm fine with loadbalancer00:42
rm_workk00:43
rm_workugh yeah our VIP datamodel uses load_balancer00:43
rm_workwhelp00:43
rm_workso i have to leave the keyword alone T_T00:43
rm_workanyway this might fix our py3x00:44
rm_workgonna push this up and grab dinner while i wait for zuul00:44
rm_workit's a bug in the noop driver <_<00:44
johnsomOk, yeah, that looks farther than it got last time I looked.00:46
johnsomI would like to get some of these off non-voting, but I think we need to fix that too long bug first.00:46
openstackgerritAdam Harwell proposed openstack/octavia: Cleanup noop network driver to fix py3x  https://review.openstack.org/42294200:48
*** fnaval has quit IRC00:48
*** yamamoto_ has joined #openstack-lbaas00:51
johnsomUgh, the anchor doc doesn't render well.  I'm going to fix that in the doc patch as well.00:51
rm_workok00:51
xgermannice00:51
*** gongysh has quit IRC00:52
*** bradjones has quit IRC00:55
rm_workjohnsom: ok so, "marker" is supposed to be a number... like ...  from the "Nth" entry01:06
rm_workthough ... thinking about it that's ...01:06
rm_workkinda odd01:06
rm_workis that how neutron does it???01:06
johnsomNo, marker is the id01:06
rm_workuhh01:06
rm_worktelling you quite assuredly01:06
rm_workfrom the code01:06
rm_workit isn't01:06
rm_workit's a number01:07
rm_workit slices the resultset01:07
johnsomI did in fact use the neutron client to get my test calls01:07
rm_workif that's not how neutron does it... then it was implemented wrong01:07
rm_workour implementation uses integer offset01:07
johnsomYeah, that isn't right01:07
rm_workso if that's not what is supposed to happen01:07
rm_workthis whole thing needs to be reworked01:08
rm_workwell #@$%01:08
rm_workalright01:08
rm_workI guess I can poke at this tomorrow <_<01:09
rm_workwish I could ask carlos why he did it this way01:11
rm_workbut i don't think he's been on IRC in a while01:11
johnsomhttp://developer.openstack.org/api-ref/networking/v2/?expanded=list-subnets-detail#pagination01:11
johnsom"The marker parameter is the ID of the last item in the previous list. "01:12
johnsomThat is what I was going on to test01:12
rm_workyeah this is not how he did it at all01:12
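
To make the contrast concrete: the Neutron/API-WG convention treats the marker as the ID of the last item on the previous page, not an integer offset into the result set. A minimal sketch of both approaches in SQLAlchemy style (model and session names are illustrative, not the actual Octavia repository code):

    # ID-marker pagination (what the Neutron API documents) vs. integer
    # offsets (what the current code effectively does).
    def get_page(session, model, marker=None, limit=100):
        query = session.query(model).order_by(model.id)
        if marker is not None:
            # Resume strictly after the row whose id was the marker; real
            # implementations page on the sort key(s) and use the id as a
            # tie-breaker, but the idea is the same.
            query = query.filter(model.id > marker)
        return query.limit(limit).all()

    def get_page_by_offset(session, model, offset=0, limit=100):
        # Offset-based slicing breaks if rows are inserted or deleted between
        # pages, which is why the guideline prefers markers.
        return (session.query(model).order_by(model.id)
                .offset(offset).limit(limit).all())
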
*** amotoki has joined #openstack-lbaas01:15
rm_workok so I *thought* carlos was just pulling in the sorting/paging code from neutron01:21
rm_worklike01:21
rm_workcopy/paste01:21
rm_workbut the neutron paging/sorting code looks like it's architected ENTIRELY differently <_<01:21
johnsomI actually pointed him at glance since it had cleaner code, but I don't think that is what happened.01:21
rm_workah k01:22
rm_worki'll look at glance01:22
johnsomglance implemented both the old way (neutron) and the new api-wg way for sorting01:22
rm_workhmm k01:22
rm_workmaybe it's the new way01:22
johnsomWell, pagination is um, yeah, openstack style: https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#pagination01:23
johnsomSo... I don't know that there is a new way for that01:23
rm_workPagination is just ... TODO01:24
rm_worklol01:24
rm_workyeah this looks ... different too?01:26
rm_worker01:26
rm_workok i'm going to figure it out tomorrow01:26
rm_workmaybe just start over01:26
rm_workyou're correct, sorting doesn't actually do anything01:27
rm_workand pagination is ... <_<01:27
johnsomWell, on sorting, lubosz pointed out that I was using the "old" sorting syntax and not the new one listed on that page I linked above.01:28
rm_workI still love:01:28
johnsomSo, it might be working ok01:28
rm_work2017-01-20 00:51:06.087254 | Setting up the workspace01:28
rm_work2017-01-20 00:51:06.087374 | ... this takes 3 - 5 minutes (logs at logs/devstack-gate-setup-workspace-new.txt.gz)01:28
rm_workso much lies01:28
rm_workI kinda want to submit a patch to devstack to change that to "30-45 minutes"01:28
rm_workOH yeah you are, lol01:28
rm_workdidn't even look, just copied your queries01:29
johnsomYeah, it is definitely 30 minutes.  Though, in fairness, 10-15 of that is probably our image building01:29
rm_workyep sorting works01:29
johnsomYeah, I just grabbed them from neutron client01:29
rm_workuhh but01:29
rm_workdoesn't that mean we need to support the old way too01:29
johnsomI.e. trying to verify compat01:29
rm_workto be non-breaking?01:29
rm_workright01:29
rm_work....01:29
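
For anyone skimming later, the two sorting syntaxes being discussed are, roughly, the older Neutron form of repeated sort_key/sort_dir query parameters and the API-WG guideline form of a single sort parameter with key:direction pairs. A small helper that accepts both might look like this (a sketch only, assuming query parameters arrive as a dict of lists such as urlparse.parse_qs output; not the actual Octavia code):

    # Accept both sorting syntaxes:
    #   old Neutron style:  ?sort_key=name&sort_dir=asc&sort_key=id&sort_dir=desc
    #   API-WG guideline:   ?sort=name:asc,id:desc
    # `params` is assumed to be a dict of lists (e.g. urlparse.parse_qs output).
    def parse_sort_params(params):
        if 'sort' in params:
            items = params['sort'][0].split(',')
            return [(key, direction or 'asc')
                    for key, _, direction in (item.partition(':') for item in items)]
        keys = params.get('sort_key', [])
        dirs = params.get('sort_dir', [])
        return list(zip(keys, dirs or ['asc'] * len(keys)))
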
*** bana_k has quit IRC01:35
*** kevo has quit IRC01:35
*** yamamoto_ has quit IRC01:38
*** woodster_ has quit IRC01:45
*** ducttape_ has joined #openstack-lbaas01:55
*** ducttape_ has quit IRC01:59
*** gongysh has joined #openstack-lbaas02:11
*** ducttape_ has joined #openstack-lbaas02:13
*** reedip_ has joined #openstack-lbaas02:19
openstackgerritNir Magnezi proposed openstack/octavia: WIP - Fix the amphora-agent support for RH based Linux flavors  https://review.openstack.org/33184102:21
*** yamamoto_ has joined #openstack-lbaas02:26
*** beardedeagle has joined #openstack-lbaas02:28
*** reedip_ has quit IRC02:32
*** links has joined #openstack-lbaas02:53
*** ducttape_ has quit IRC02:54
*** bana_k has joined #openstack-lbaas03:33
*** armax has joined #openstack-lbaas03:40
*** amotoki has quit IRC03:49
*** links has quit IRC04:16
*** links has joined #openstack-lbaas04:18
*** amotoki has joined #openstack-lbaas04:19
*** amotoki_ has joined #openstack-lbaas04:25
*** bana_k has quit IRC04:26
*** amotoki has quit IRC04:27
*** beardedeagle has quit IRC04:30
*** bana_k has joined #openstack-lbaas04:52
*** catinthe_ has quit IRC04:54
*** catintheroof has joined #openstack-lbaas05:02
*** jerrygb has quit IRC05:05
*** bana_k has quit IRC05:11
*** bana_k has joined #openstack-lbaas05:11
rm_workjohnsom: this is port merge? http://logs.openstack.org/42/422942/1/check/gate-octavia-v1-dsvm-scenario-ubuntu-xenial-nv/9aa9599/console.html#_2017-01-20_01_52_38_28702605:12
rm_workhttp://logs.openstack.org/42/422942/1/check/gate-octavia-v1-dsvm-scenario-ubuntu-xenial-nv/9aa9599/logs/testrepository.0.gz05:12
johnsomSweet, I will work on it tomorrow05:13
rm_worklol didn't expect you to be around05:13
rm_worki was gonna take a glance now05:13
*** catintheroof has quit IRC05:14
rm_workalso, i guess i only fixed one of the py3x tests05:14
rm_workso, one passes, but the rest don't (still nova i think)05:14
rm_workprogress I guess05:14
rm_workhttps://review.openstack.org/#/c/422942/05:14
rm_worki'll keep poking05:14
rm_workit seems the VM spun up but SSH doesn't get through right05:14
johnsomOk, let me know what you find and if I should pick it up in the morning05:15
rm_workkk05:15
rm_workjohnsom: it seems like just an insane amount of05:17
rm_work2017-01-20 01:49:11,322 3281 INFO     [octavia.tests.tempest.v1.scenario.base] Got IOError in check        connection: <urlopen error [Errno 111] Connection refused>05:17
rm_workwhich is basically the VMs taking forever to spin up right?05:17
rm_workwhich is ... arbitrary kinda, so it explains why it's just *sometimes*.... if it's particularly slow, it goes over the threshold05:17
rm_workah and it's trying the connection once every like .........05:18
rm_work0.001 seconds05:18
rm_workso every 0.001 seconds it spits a log line05:18
johnsomYeah, not sure.  I think I added that log message to track down some of the flask issues05:18
rm_work1000 lines per second05:18
rm_workthat's why it gets so huge05:18
johnsomHmm, that is odd05:18
rm_workthere's like 60,000 lines of this05:18
rm_workmore05:19
rm_workit's like 95% of the log05:19
johnsomI know we dropped that wait time, but umm05:19
rm_workfirst:05:19
johnsomWell, that should make it easier to fix05:19
rm_work2017-01-20 01:47:27,361 3281 INFO     [octavia.tests.tempest.v1.scenario.base] checking connection       to ip: 172.24.5.13 port: 8005:19
rm_worklast:05:19
rm_work2017-01-20 01:51:48,823 3279 INFO     [octavia.tests.tempest.v1.scenario.base] Got Error in sending        request: <urlopen error timed out>05:19
rm_workis from like line 700 to line 8100005:20
rm_workso. yeah.05:20
rm_workthat should be an easy fix05:20
rm_workwas there a problem trying once per second? lolol05:20
johnsomI think that was your change, to not wait during tests any longer than needed... grin05:21
rm_worklol down to 0.001?!?05:22
johnsomI don't know.  We may not need to log that either...05:22
rm_workgonna look for where it is05:22
rm_workfor example BTW: http://paste.openstack.org/show/595721/05:22
johnsomHmm, what I am thinking of was a mocked out sleep, so it would be odd that would impact the scenario tests05:23
rm_workah found it05:23
rm_workthere's no wait *at all*05:23
rm_workand since it bails instantly because it's an actual IOError not a timeout05:23
rm_workit's as fast as python can send connections05:23
rm_workhttps://github.com/openstack/octavia/blob/master/octavia/tests/tempest/v1/scenario/base.py#L615-L63005:24
rm_workoh wait, what calls this05:24
rm_worksorry it's here:05:24
rm_workerr05:25
rm_worklike 5 lines below05:25
rm_worksame thing, no sleep at all05:25
rm_workI might add a sleep(0.5)? maybe?05:25
rm_workis that kosher?05:25
johnsomYeah, go for it.  That is code I am working on for octavia v2 as well, so I will pull that in there as well.05:26
johnsomHeck, 1 second is probably just fine too05:27
rm_workor I can have it try as fast as it wants but only print the message every 1s or so05:27
rm_worki dunno05:27
rm_workeh just doing a 1s sleep is going to be fine05:27
* rm_work watches it explode 6 months later05:27
johnsomJust have it sleep, it is easy and polite to the other vms05:27
rm_workdid we have a bug filed?05:29
johnsomYes, I opened one05:29
rm_worklooking05:29
rm_worknothing matches subunit...05:30
*** cody-somerville has joined #openstack-lbaas05:31
johnsomHmm, I think it closed with my first archive patch05:31
rm_worklol ok whatever05:31
openstackgerritAdam Harwell proposed openstack/octavia: Fix connection timeout spam in subunit logs  https://review.openstack.org/42304105:31
rm_workI guess tomorrow I'll keep looking at py3x and also try to untangle this sorting/paging mess05:32
johnsomOk, nope.  I am too tired.  No bug05:32
rm_worknight :)05:33
*** csomerville has quit IRC05:33
johnsomI will nit on that patch that we should sleep after checking for the timeout, but it is a nit05:34
rm_workehhh05:34
rm_workso i debated05:34
rm_workit runs the connection attempt as the condition for the while05:35
rm_workso by the time you get to the sleep, it's done at least one try05:35
rm_workoh05:36
rm_workit should be in an else05:36
rm_worki read the second bit wrong05:36
rm_workthen we can technically go over05:36
rm_workbut05:36
rm_workit's kinda ... meh05:36
johnsomYeah05:36
rm_workunless we subtract the delay from the timeout check05:36
rm_workbut that's iffy too05:36
rm_workor in this case it'd be05:37
johnsomOver complicates it05:37
rm_workif (time.time() - start + delay) > timeout05:37
rm_workI mean... it'd work05:37
rm_workwould you rather be conservative in one direction or the other?05:37
rm_workah and technically it doesn't need to be an `else` I guess, since the `if` raises05:38
johnsomRight05:39
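
Put together, the pattern being settled on looks roughly like this. It is a sketch of the approach, not the exact content of the patch proposed below, and the probe callable stands in for the test's real connection check:

    import time

    def wait_for_connection(probe, timeout=300, delay=1):
        """Poll probe() until it succeeds or `timeout` seconds elapse.

        Sketch of the pattern discussed above: sleep between attempts so the
        test does not spam ~1000 log lines per second, count the upcoming
        sleep against the budget so it never overshoots, and let the raise
        end the loop so no `else` is needed.
        """
        start = time.time()
        while True:
            try:
                return probe()
            except IOError as exc:
                if (time.time() - start + delay) > timeout:
                    raise RuntimeError('timed out waiting for connection: %s' % exc)
                time.sleep(delay)
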
*** jerrygb has joined #openstack-lbaas05:39
rm_workk05:40
openstackgerritAdam Harwell proposed openstack/octavia: Fix connection timeout spam in subunit logs  https://review.openstack.org/42304105:40
rm_workshould be fine05:40
rm_workmade it 1s too05:40
rm_worki think you're right05:40
rm_workoh i'll hit the v2 as well05:40
*** anilvenkata has joined #openstack-lbaas05:40
openstackgerritAdam Harwell proposed openstack/octavia: Fix connection timeout spam in subunit logs  https://review.openstack.org/42304105:41
johnsomOk.  I am going to go back to reading the news.  Catch you tomorrow05:41
*** catintheroof has joined #openstack-lbaas05:46
*** saju_m has joined #openstack-lbaas06:09
*** sticker has quit IRC06:12
*** armax has quit IRC06:19
*** armax has joined #openstack-lbaas06:20
*** armax has quit IRC06:20
*** saju_m has quit IRC06:59
*** kevo has joined #openstack-lbaas07:35
*** pcaruana has joined #openstack-lbaas07:43
*** afranc has joined #openstack-lbaas07:44
*** Aju has quit IRC07:51
*** bana_k has quit IRC07:57
*** tesseract has joined #openstack-lbaas08:05
*** gcheresh_ has joined #openstack-lbaas08:10
*** eezhova has joined #openstack-lbaas08:17
*** cody-somerville has quit IRC08:20
*** cody-somerville has joined #openstack-lbaas08:21
*** cody-somerville has quit IRC08:21
*** cody-somerville has joined #openstack-lbaas08:21
*** openstackgerrit has quit IRC09:02
*** saju_m has joined #openstack-lbaas09:11
*** saju_m has quit IRC09:24
*** saju_m has joined #openstack-lbaas09:36
*** jerrygb_ has joined #openstack-lbaas09:41
*** jerrygb has quit IRC09:43
*** jerrygb has joined #openstack-lbaas09:46
*** jerrygb_ has quit IRC09:48
reedipanyone here?10:05
*** gcheresh_ has quit IRC10:06
*** jerrygb has quit IRC10:13
*** jerrygb has joined #openstack-lbaas10:14
*** gongysh has quit IRC10:22
*** gongysh has joined #openstack-lbaas10:28
*** amotoki_ has quit IRC10:28
*** gongysh has quit IRC10:37
*** amotoki has joined #openstack-lbaas10:38
*** kevo has quit IRC10:42
*** yamamoto_ has quit IRC10:50
*** ipsecguy_ has quit IRC10:51
*** ducttape_ has joined #openstack-lbaas11:29
*** ducttape_ has quit IRC11:31
*** ipsecguy has joined #openstack-lbaas11:32
*** yamamoto has joined #openstack-lbaas11:40
*** amotoki has quit IRC11:44
*** pcaruana has quit IRC11:59
*** yamamoto has quit IRC12:02
*** yamamoto has joined #openstack-lbaas12:03
*** pcaruana has joined #openstack-lbaas12:06
*** yamamoto has quit IRC12:08
*** yamamoto has joined #openstack-lbaas12:10
*** gcheresh_ has joined #openstack-lbaas12:15
*** catinthe_ has joined #openstack-lbaas12:32
*** catintheroof has quit IRC12:33
*** yamamoto has quit IRC12:44
*** yamamoto has joined #openstack-lbaas12:44
*** jerrygb_ has joined #openstack-lbaas12:46
*** yamamoto has quit IRC12:48
*** jerrygb has quit IRC12:49
*** yamamoto has joined #openstack-lbaas12:51
*** gcheresh_ has quit IRC12:52
*** gcheresh has joined #openstack-lbaas12:52
*** yamamoto has quit IRC12:52
*** links has quit IRC12:55
*** amotoki has joined #openstack-lbaas12:56
*** ducttape_ has joined #openstack-lbaas13:11
*** ducttape_ has quit IRC13:21
*** ducttape_ has joined #openstack-lbaas13:24
*** ducttape_ has quit IRC13:34
*** bradjones has joined #openstack-lbaas13:38
*** bradjones has quit IRC13:38
*** bradjones has joined #openstack-lbaas13:38
*** ipsecguy has quit IRC13:44
*** ipsecguy has joined #openstack-lbaas13:45
*** ipsecguy has quit IRC13:46
*** ipsecguy has joined #openstack-lbaas13:47
*** yamamoto has joined #openstack-lbaas13:52
*** yamamoto has quit IRC14:02
*** matt-borland has joined #openstack-lbaas14:06
*** beardedeagle has joined #openstack-lbaas14:16
*** woodster_ has joined #openstack-lbaas14:27
*** catintheroof has joined #openstack-lbaas14:33
*** catinthe_ has quit IRC14:36
*** yamamoto has joined #openstack-lbaas14:47
*** yamamoto has quit IRC14:53
*** ducttape_ has joined #openstack-lbaas14:53
johnsomreedip: Hi15:02
*** ducttape_ has quit IRC15:02
*** gcheresh has quit IRC15:17
*** fnaval has joined #openstack-lbaas15:24
*** amotoki has quit IRC15:24
*** ankur-gupta-f2 has joined #openstack-lbaas15:25
*** ducttape_ has joined #openstack-lbaas15:29
*** yamamoto has joined #openstack-lbaas15:54
*** anilvenkata has quit IRC15:55
*** pcaruana has quit IRC16:03
*** yamamoto has quit IRC16:03
*** amotoki has joined #openstack-lbaas16:05
*** amotoki has quit IRC16:06
*** armax has joined #openstack-lbaas16:14
*** openstackgerrit has joined #openstack-lbaas16:20
openstackgerritMichael Johnson proposed openstack/octavia: Fix the docs page title  https://review.openstack.org/42288416:20
*** _ducttape_ has joined #openstack-lbaas16:59
*** ducttape_ has quit IRC17:02
*** bana_k has joined #openstack-lbaas17:09
*** reedip_ has joined #openstack-lbaas17:14
*** csomerville has joined #openstack-lbaas17:31
*** cody-somerville has quit IRC17:34
*** _ducttape_ has quit IRC17:35
*** ducttape_ has joined #openstack-lbaas17:36
*** kevo has joined #openstack-lbaas17:37
*** bana_k has quit IRC17:39
*** eezhova has quit IRC17:45
*** eezhova has joined #openstack-lbaas17:47
openstackgerritNakul Dahiwade proposed openstack/octavia: Align Octavia v2 API for Members  https://review.openstack.org/42339317:51
*** eezhova has quit IRC18:12
*** bana_k has joined #openstack-lbaas18:26
*** reedip_ has quit IRC18:40
*** tesseract has quit IRC19:32
*** catinthe_ has joined #openstack-lbaas19:50
bloganjohnsom: the octavia v2 api is supposed to have the /lbaas prefix correct?19:52
bloganlike ip:port/v2.0/lbaas/loadbalancers?19:52
bloganand do we also need /v2.0? or /v2?19:52
*** catintheroof has quit IRC19:53
rm_workwait what20:11
rm_workhold on, what does neutron use right now20:11
rm_workis v2 BEFORE lbaas?20:11
*** gcheresh has joined #openstack-lbaas20:23
blogan2.020:27
bloganv2.020:27
rm_workhmm20:27
rm_worki kinda thought we weren't going to deal with that cruft, and would recommend deployers do l7 routing to invisibly make things go to the right place20:29
rm_workbut maybe not20:29
sindhublogan: johnsom : shouldn't octavia v2 have: /v2.0/loadbalancers , /v2.0/listeners ... etc?20:29
rm_work^^ so, this20:29
bloganwell if our goal is to have it just work by changing the keystone catalog, then we'd have to probably maintain the /v2.0/lbaas/ portions20:30
rm_workwe can't change the keystone catalog, can we?20:30
rm_worklbaas is part of neutron20:30
bloganbut l7 routing would work too, but not as seamless20:30
rm_workuntil we have our own keystone type20:30
rm_workisn't it?20:30
blogani mean we can always just have /v2.0/loadbalancers and /v2.0/lbaas/loadbalancers at the same time, that's just adding one line20:31
rm_workcatalog has to continue to point to neutron20:31
rm_workand the deployer has to l7 route /lbaas to us20:31
rm_workAFAIU20:31
rm_workthere's no "lbaas" keystone catalog entry20:31
bloganthere's not, but eventually there will be20:32
rm_workand at that point we DEFINITELY don't want /lbaas in our URL do we?20:32
blogani dont20:32
blogani also don't want v2.0, just v220:32
rm_workany change to the clients we'd need to make to use a different endpoint name, would also just include the endpoint URL changing too, right?20:33
rm_workhmm, that's the way keystone and nova went with v3 right?20:33
rm_worknot v3.020:33
bloganpretty sure20:33
rm_workso i'd say that's the future direction of things20:33
rm_workand while we're depending on l7 routing, we may as well just fix that too20:33
bloganwe'll have to see what glorious dear leader johnsom says20:34
rm_workso yes, i'd be behind ip:port/v2/loadbalancers ip:port/v2/pools etc20:34
rm_workhopefully it's clear that almost all of my "questions" above were rhetorical >_>20:35
*** catinthe_ has quit IRC21:02
*** catintheroof has joined #openstack-lbaas21:03
*** beardedeagle has quit IRC21:06
*** beardedeagle has joined #openstack-lbaas21:06
*** catintheroof has quit IRC21:07
*** beardedeagle has quit IRC21:18
*** beardedeagle has joined #openstack-lbaas21:18
johnsomJust got back from lunch, reading the scroll back...21:23
*** eezhova has joined #openstack-lbaas21:24
johnsomSo, a few comments here:21:24
johnsom1. octavia will be a new service in keystone (it's private I think at the moment)21:25
johnsom2. We need to maintain /v2.0/lbaas/loadbalancers as neutron-lbaas has it for some time.  This is so apps can just switch the service/endpoint and still enjoy the neutron-lbaas API21:26
johnsom3. We can also, optionally, alias /v2.0/lbaas/loadbalancers to /v2.0/loadbalancers and have that as a go-forward path.  Though, I think there is some value to having the lbaas in there for folks that want to do L7 games with their API servers, endpoints, or API security stuff (e.g. Apigee)21:27
*** eezhova has quit IRC21:28
johnsomrm_work, blogan, sindhu: thoughts, comments?21:30
bloganwe can easily have both with pecan21:31
blogan/lbaas/loadbalancers and /loadbalancers21:31
rm_workyeah that's fine21:32
blogani guess we can also have both v2 and v2.021:32
bloganbut that'd be a bit weird21:32
johnsomYeah, that was kind of my thought on the issue.  That doesn't break the API21:32
bloganwell if we did /v2/loadbalancers and /v2.0/lbaas/loadbalancers that'd be fine by me21:32
johnsomWe could have v2 and v2.0, but all I really care about is v2.021:32
bloganas long as we have a deprecation plan for the /v2.0/lbaas21:32
johnsomAren't all the projects v2.0 vs v2?21:33
rm_workonly because of timing21:33
johnsomActually it looks 50/5021:34
blogankeystone is v321:34
johnsomWith microversioning dropping it all together right?21:34
blogani think so21:34
sindhu  /v2/loadbalancers and /v2.0/lbaas/loadbalancers: +121:35
johnsomYeah, so, for compatibility, we need v2.0, v2 is optional21:35
johnsomSo, current discussion is:21:36
johnsom /v2/loadbalancers21:37
johnsom /v2/lbaas/loadbalancers21:38
johnsom /v2.0/loadbalancers21:38
johnsom */v2.0/lbaas/loadbalancers21:38
rm_workyes21:38
johnsomWhere * is required for compatibility21:38
johnsomSeems like a lot of paths, but I'm fine with it.21:38
*** gcheresh has quit IRC21:39
johnsomMVP is the * one, the others are optional/nice to have21:39
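
The aliasing really is cheap with Pecan's object dispatch, since one controller instance can be attached under several attribute names; the v2.0 segment just needs explicit routing because it is not a valid attribute name. A sketch of the idea (controller names are illustrative, not the actual Octavia v2 controller tree):

    import pecan
    from pecan import rest

    class LoadBalancersController(rest.RestController):
        @pecan.expose('json')
        def get_all(self):
            return {'loadbalancers': []}   # the real controller would hit the DB

    class LBaaSController(object):
        # /v2.0/lbaas/loadbalancers -- the neutron-lbaas compatible path (the MVP)
        loadbalancers = LoadBalancersController()

    class V2Controller(object):
        lbaas = LBaaSController()
        # one-line alias so /v2.0/loadbalancers hits the same controller
        loadbalancers = lbaas.loadbalancers

    class RootController(object):
        v2 = V2Controller()            # covers the optional /v2/... forms

        @pecan.expose()
        def _lookup(self, segment, *remainder):
            # "v2.0" cannot be a Python attribute name, so route it by hand
            if segment == 'v2.0':
                return self.v2, remainder
            pecan.abort(404)
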
*** jerrygb_ has quit IRC21:41
johnsomI should capture this somewhere.  Where would be most helpful for folks?  Edit the spec, add it to a bug, or create an etherpad?21:42
rm_workwhy is MVP always the worst one >_>21:42
johnsomBecause it's the old one we have to be compatible with....  Grin21:43
openstackgerritMichael Johnson proposed openstack/octavia: Fix the docs page title  https://review.openstack.org/42288421:43
sindhujohnsom: https://etherpad.openstack.org/p/octavia-ocata-merge-review-priority why not here?21:44
johnsomrm_work Your -1 nit....  Grin21:44
rm_workyeah sorry about that <_<21:44
rm_workit irked me21:44
johnsomsindhu Ok21:44
sshank_q21:45
*** _ducttape_ has joined #openstack-lbaas21:48
johnsomOk, added to the etherpad, let me know if you have questions21:49
*** _ducttape_ has quit IRC21:49
*** _ducttape_ has joined #openstack-lbaas21:50
sshank_So pools or others will be of the format: /v2.0/lbaas/pools ?21:50
*** ducttape_ has quit IRC21:51
*** ndahiwade has joined #openstack-lbaas21:55
ndahiwadesshank_: yes21:56
*** matt-borland has quit IRC22:23
*** armax has quit IRC22:26
*** jerrygb has joined #openstack-lbaas22:40
*** armax has joined #openstack-lbaas22:42
*** beardedeagle has quit IRC22:43
openstackgerritNakul Dahiwade proposed openstack/octavia: Align Octavia v2 API for Members  https://review.openstack.org/42339322:47
openstackgerritBrandon Logan proposed openstack/octavia: Add common base type for v1 and v2  https://review.openstack.org/42354222:52
openstackgerritBrandon Logan proposed openstack/octavia: Add v2 load balancer type and controllers  https://review.openstack.org/42354322:52
openstackgerritBrandon Logan proposed openstack/octavia: Add v2 load balancer type and controllers  https://review.openstack.org/42354322:54
bloganjohnsom, sindhu, sshank_, ndahiwade: ^^ thats a first pass at getting it working.  API requests work just like neutron-lbaas v2 as far as i tested.22:55
johnsomCool!  Thanks!22:55
blogani realized, that decorator isn't really needed as we don't need to change anything about the data models22:55
bloganwe just need to do it all with the types22:55
sshank_blogan, Thank you.22:55
blogani need to add tests to these reviews too and probably clean up some more, maybe split it out into one more review, not sure yet22:56
ndahiwadeblogan, thanks:)22:57
bloganif every controller and type is changed for this, it's not too bad of a work load22:57
sindhublogan: thank you :)22:57
rm_workblogan: want to merge https://review.openstack.org/#/c/423041/ ? :P22:58
rm_workstop getting annoying false-failures on our scenarios22:58
bloganrm_work: i usually charge johnsom one cookie for these requests, but for you...2 COOKIES!22:58
rm_workoh god22:59
rm_workthat's pretty greedy blogan22:59
rm_workbut if that's the price22:59
blogani'm a capitalist pig22:59
*** eezhova has joined #openstack-lbaas22:59
johnsomrm_work Just send him one of these like I did: Set-Cookie: RackSID=20170117111850; path=/; domain=.rackspace.com23:00
rm_worklol23:00
bloganmy internal validation has been updated to reject those cookies23:01
openstackgerritBrandon Logan proposed openstack/octavia: Add v2 load balancer type and controllers  https://review.openstack.org/42354323:02
rm_workah blogan this one too: https://review.openstack.org/#/c/422942/23:03
rm_workthat should be free though because it's fixing your fail :P23:03
bloganrm_work: is it a fail if i didnt know python 3 existed?23:04
rm_workit should fail even on py2, not sure why we don't see that23:04
rm_worki guess py2 somehow figures out how to hash our object?23:05
bloganrm_work: its a fail if  you add into that patch another change that renamed load_balancer to loadbalancer23:05
rm_workwhich doesn't really make sense23:05
rm_worklolol23:05
rm_workconsistency23:05
*** kevo has quit IRC23:05
rm_workI'm actually really curious how py2 passes using this23:06
rm_workdoes py2 like ... id() the LB object or something? >_>23:07
blogani actually thought we had to do that to make this work23:09
bloganactually i thought i had changed this to just use the id23:09
bloganlike23:09
bloganid attribute23:09
rm_workhmmm23:09
bloganoh well23:09
rm_workdid that ... never merge? :P23:09
rm_workis there another patch out there that already does this? lol23:09
bloganwho knows, it could have been another noop driver too23:09
bloganmaybe py2 uses the __repr__ method to hash it23:10
rm_workmayhaps23:12
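
For the record, the difference is a language rule rather than __repr__: in Python 2 a class that defines __eq__ but not __hash__ keeps the default identity-based hash, while Python 3 sets __hash__ to None in that case, making instances unusable as dict keys or set members. A quick illustration with a stand-in class (not the actual data model):

    class DataModel(object):
        """Stand-in for a data model that defines __eq__ but not __hash__."""
        def __init__(self, id):
            self.id = id

        def __eq__(self, other):
            return isinstance(other, DataModel) and other.id == self.id

    lb = DataModel('some-uuid')

    # Python 2: fine -- falls back to the default id()-based __hash__.
    # Python 3: TypeError: unhashable type: 'DataModel'
    seen = {lb: 'deployed'}

    # Fixes: key the dict on lb.id instead, or define __hash__ alongside
    # __eq__ (e.g. return hash(self.id)).
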
* rm_work is listening to josh convince people to use taskflow23:12
* rm_work is being swayed23:12
bloganrm_work: don't give into it23:12
rm_workif we used jobboard this would be pretty sweet23:13
bloganwell johnsom has always planned that23:13
rm_workyeah23:13
bloganand it is a good thing23:13
rm_workit is23:13
bloganmy issues with taskflow haven't been with its usefulness23:13
rm_workyeah it's just painful for me to track what's happening, and sometimes it's really counterintuitive what happens with the objects stored, it seems23:14
* johnsom Won't comment on that....23:14
bloganjohnsom will comment on that23:14
rm_workdidn't we have to have a task specifically just to mangle the dependency objects and refresh fields?23:14
johnsomblogan I was going to say something like your disposition for a certain coding style.....23:14
bloganjohnsom: refresh my memory23:15
bloganworkflow engines in general?23:15
johnsom"relaxed" coding style23:15
rm_worklol23:15
johnsomrm_work I think that was more about our data model situation than taskflow23:15
rm_workah, mayhaps23:15
rm_workmight be improperly blaming taskflow23:16
johnsomI.e. getting the data model objects updated from the DB at the right times23:16
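
For anyone who has not seen the pattern, the usual shape is a dedicated task that re-reads the object and provides it to the tasks downstream. A minimal TaskFlow sketch (task, flow, and field names are made up, not Octavia's actual flows):

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class ReloadLoadBalancer(task.Task):
        """Re-read the object so downstream tasks see fresh DB fields."""
        default_provides = 'loadbalancer'

        def execute(self, loadbalancer_id):
            # the real task would go through the repository/session here
            return {'id': loadbalancer_id, 'provisioning_status': 'PENDING_UPDATE'}

    class PushConfig(task.Task):
        def execute(self, loadbalancer):
            print('configuring %s' % loadbalancer['id'])

    flow = linear_flow.Flow('update-lb')
    flow.add(ReloadLoadBalancer(), PushConfig())
    engines.run(flow, store={'loadbalancer_id': 'some-uuid'})
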
blogani'm very relaxed when i'm coding :)23:16
johnsomIt's a polite way of saying "slapping it together"23:16
johnsomgrin23:16
bloganthat's the best way!23:16
rm_workgotta slap your troubles away23:16
bloganslap-a-da-bass23:17
johnsomSo rm_work, now you have Josh's ear, you guys can work together on implementing jobboard for us....23:17
rm_workmaybe :)23:17
rm_workhe'll have more time since he's not running for oslo PTL again :P23:17
johnsomExactly...23:17
bloganthat sounds like a smart move on his part23:17
blogani'd hate to be a poor sucker who runs for ptl23:18
bloganamiright johnsom?23:18
*** bana_k has quit IRC23:18
johnsomSigh, yeah..... https://review.openstack.org/#/c/423381/23:18
rm_work:P23:18
rm_workinteresting, we do this via gerrit?23:19
blogan-123:19
rm_work-1 for "continue to help this incredible team continue"23:19
rm_worknit23:19
johnsomYour nomination is via gerrit, votes are via the same old system...23:19
johnsomAll formal and stuff now that we are a grown up project23:19
rm_workand "neturon stadium"23:20
johnsomToo much politicking?23:20
rm_worktoo little spellchecking :P23:20
johnsomSigh, I even had someone read it over....23:20
rm_workyour mistake was not having ME read it over ^_^23:21
*** bana_k has joined #openstack-lbaas23:21
rm_workI'm a proud member of Cosa Nostra Grammatica :P23:22
*** _ducttape_ has quit IRC23:22
johnsomThere you go23:22
rm_workhttps://d3gqasl9vmjfd8.cloudfront.net/7371b292-687b-407f-8cdb-d6167eb020ac.png23:22
johnsomIt only had one +2 anyway23:22
rm_worknice23:23
rm_workhaha23:23
rm_workyeah surprised you cared enough to fix it23:23
johnsomThe reviewer was a journalism major too.  Though in all fairness I think she was trying to go to sleep....23:23
* johnsom Thinks it's good his spouse is not on IRC23:24
*** kevo has joined #openstack-lbaas23:24
johnsomHey, I had two -1 from constituents, so I had to fix it....23:25
rm_workheh23:26
johnsomOk, back to fixing our sonar CI records from still pointing to non-existent HPE e-mail addresses23:26
*** armax has quit IRC23:41
johnsomrm_work are you going to the PTG?23:42
rm_workjohnsom: lol i guess my email is the recovery email for octavia.sonar@gmail23:42
rm_workgetting spammed about stuff now23:42
rm_workno :(23:42
johnsomYeah, I just changed it back to my e-mail address.23:42
rm_workkk23:42
johnsomI also updated the third party CI record address to be a forwarder at octavia.cloud instead of an old hp alias23:43
johnsomIn case anyone needs to track us down23:43
rm_workcool23:44
johnsomI was just finishing up fixing the third party CI voting stuff after the spin out.  we now have octavia-ci which has the third party gates in it for voting like they had under neutron23:45
rm_workhow many people are going to the PTG from octavia I wonder?23:46
johnsomI know of 6-723:46
rm_workwe still have 6-7 people? ^_^23:46
johnsomHa, funny23:47
bloganif you count me as a person23:47
bloganim subhuman though23:47
johnsomWell, yes, I did, so 5-623:47
rm_workmaybe you're the "6th-7th"23:47
rm_workah23:47
rm_worklol23:47
johnsomI didn't say gentlemen...23:47
bloganalright i've had enough of your insults23:47
bloganim done for the day...i think23:47
openstackgerritMerged openstack/octavia: Fix connection timeout spam in subunit logs  https://review.openstack.org/42304123:47
blogani may work on some more of the octavia stuff23:48
bloganWHILE RELAXING23:48
*** eezhova has quit IRC23:48
johnsomSo, stackalytics says 50 people have done at least one review on "octavia group" projects23:48
johnsomblogan Enjoy!  Thanks again for your work on the API stuff23:48
rm_workhow many have commits? :/23:51
rm_workthough I guess reviews help too23:51
*** kevo has quit IRC23:53
*** fnaval has quit IRC23:55
johnsomPatch sets is also 50, but commits is hmm, also 50.  Interesting.  That includes octavia, the dashboard, and lbaas23:57
rm_workhmm23:58
