Wednesday, 2017-01-18

00:12 *** hongbin has quit IRC
00:44 *** mattmceuen has joined #openstack-kuryr
01:10 *** yedongcan1 has joined #openstack-kuryr
01:17 *** limao has joined #openstack-kuryr
01:22 *** salv-orlando has joined #openstack-kuryr
01:25 *** tonanhngo has quit IRC
01:27 *** salv-orlando has quit IRC
01:27 *** pmannidi has quit IRC
01:29 *** pmannidi has joined #openstack-kuryr
01:33 *** tonanhngo has joined #openstack-kuryr
01:34 *** mattmceuen has quit IRC
01:37 *** tonanhngo has quit IRC
02:47 <vikas_> ivc_, ping
02:54 *** hongbin has joined #openstack-kuryr
03:23 *** salv-orlando has joined #openstack-kuryr
03:28 *** salv-orlando has quit IRC
03:53 *** janki has joined #openstack-kuryr
05:01 <openstackgerrit> vikas choudhary proposed openstack/kuryr-kubernetes: Add support for nested pods with Vlan trunk port  https://review.openstack.org/410578
05:48 *** hongbin has quit IRC
06:15 <openstackgerrit> Merged openstack/kuryr-libnetwork: Updated from global requirements  https://review.openstack.org/419932
06:28 *** ivc_ has quit IRC
06:52 *** tonanhngo has joined #openstack-kuryr
07:00 *** saneax-_-|AFK is now known as saneax
07:12 *** salv-orlando has joined #openstack-kuryr
07:17 *** salv-orlando has quit IRC
07:45 *** salv-orlando has joined #openstack-kuryr
07:53 *** pmannidi has quit IRC
07:53 *** salv-orlando has quit IRC
07:58 <apuimedo> limao: congratulations on becoming a core ;-)
07:58 <limao> apuimedo: Thanks team!
07:59 <apuimedo> irenab: Do you agree with sending the virtual gathering etherpad to the mailing list already?
07:59 <irenab> apuimedo: give me a min to recheck, but generally yes
07:59 <apuimedo> ok
08:17 *** yamamoto has quit IRC
08:19 <openstackgerrit> vikas choudhary proposed openstack/kuryr-kubernetes: Add support for nested pods with Vlan trunk port  https://review.openstack.org/410578
08:20 <vikas_> apuimedo, irenab, ltomasbo: PTAL ^
08:20 <ltomasbo> vikas_, ok, I'll do asap
08:20 <vikas_> ltomasbo, thanks
08:26 <irenab> apuimedo: I think the agenda is good. I wonder whether there is really nothing for libnetwork or the lib
08:37 *** salv-orlando has joined #openstack-kuryr
08:37 <openstackgerrit> Zhang Ni proposed openstack/fuxi: Enable Fuxi to use Manila share  https://review.openstack.org/375452
08:42 <irenab> vikas_: ping
08:42 <vikas_> irenab, hi
08:43 <irenab> vikas_: I am checking the latest patchset and wonder about the vlan allocation logic
08:43 <vikas_> irenab, ok... which part is bothering you?
08:44 <irenab> I know it was discussed yesterday, but I'm still not sure why it's required to rebuild the allocated vlans map every time a subport is created
08:44 <vikas_> irenab, this will be needed in A/A HA
08:44 <irenab> vikas_: can you elaborate?
08:44 <vikas_> irenab, let's say there are two kuryr-controllers and both get a pod request at the same time
08:45 <irenab> vikas_: here you have the retries to help
08:45 <vikas_> irenab, and both have a seg_driver of their own; let's say they get allocated the same vlan_ids
08:45 <vikas_> irenab, yes
08:46 <vikas_> irenab, now one of the pods will succeed and the other will fail
08:46 <vikas_> irenab, in that case a retry will happen and the allocated vlan_ids will be retrieved from neutron
08:46 <vikas_> irenab, and this time seg_driver will not allocate those vlan_ids
08:47 <vikas_> irenab, does this make sense?
08:47 <irenab> vikas_: it's not very intuitive
08:47 <vikas_> irenab, which part should I try explaining again?
08:48 <irenab> vikas_: I got your explanation, but I mean it does not seem intuitive from the code
08:49 <irenab> I guess with ports preallocated in advance, we may get into similar issues once we consider A/A
08:51 <vikas_> irenab, got your point
08:51 <ltomasbo> vikas_, irenab: would it make sense to also change seg_driver so that the vlan_ids returned are not in order but somewhat random within the available set?
08:52 <ltomasbo> that way we may avoid a few collisions
08:52 <ltomasbo> such as in the example vikas_ described about A/A
08:52 <vikas_> ltomasbo, IMO, yes, that should also be done
08:52 <irenab> vikas_: for now we probably need a comment on this (better, a devref), but I think we should make it obvious in the code
08:53 <irenab> ltomasbo: vikas_: yes, random seems like an improvement
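[Editor's note: the allocation scheme discussed above can be sketched as below. This is a minimal illustration only; `VlanSegmentationDriver` and its method names are hypothetical, not the actual kuryr-kubernetes driver API.]

```python
import random

# Sketch of the idea from the discussion: on a Conflict retry, rebuild
# the allocation map from the ids neutron already knows about, and hand
# out VLAN ids at random rather than in order so that two active/active
# kuryr-controllers are less likely to pick the same id.

MIN_VLAN_ID = 1
MAX_VLAN_ID = 4094


class VlanSegmentationDriver(object):
    def __init__(self):
        self._allocated = set()

    def sync_from_neutron(self, neutron_vlan_ids):
        # On retry, the allocated vlan_ids are retrieved from neutron, so
        # ids grabbed by the other controller are never handed out again.
        self._allocated = set(neutron_vlan_ids)

    def allocate_segmentation_id(self):
        available = set(range(MIN_VLAN_ID, MAX_VLAN_ID + 1)) - self._allocated
        if not available:
            raise RuntimeError('no VLAN ids left for the trunk')
        # random.choice instead of min() reduces A/A collisions
        vlan_id = random.choice(tuple(available))
        self._allocated.add(vlan_id)
        return vlan_id
```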
08:53 <vikas_> irenab, should I add comments in the code as explanation?
08:54 <irenab> vikas_: maybe a docstring for add_subport, and better naming for the inner methods, following ltomasbo's comment
08:54 <apuimedo> irenab: waiting for somebody to propose libnetwork-related stuff :P
08:54 <vikas_> irenab, cool, going to update
08:54 <ltomasbo> by the way, I'm working on a patch to make neutron allocate the vlan_ids if they are not provided by, in our case, kuryr
08:55 <apuimedo> \o/
08:55 *** tonanhngo has quit IRC
08:55 <vikas_> ltomasbo, cool
08:55 <ltomasbo> which will remove (hopefully) all the mess from the kuryr level
08:55 <irenab> apuimedo: or maybe it's just in good shape, perfect as there is nothing to add :-)
08:56 <ltomasbo> but I'm getting some weird behavior, not sure yet if it's related to trunk ports or kuryr
08:56 <apuimedo> irenab: there's nothing to add, but there's stuff to remove
08:56 <apuimedo> ltomasbo: please describe it
08:56 <irenab> apuimedo: :-)
08:56 <ltomasbo> as even though the vlan driver and id are properly chosen at the neutron level (if not passed from kuryr)
08:56 <ltomasbo> sometimes the nested container gets connectivity, other times not
08:57 <irenab> ltomasbo: if we want to support Ocata, it will have to manage seg. ids
08:57 <ltomasbo> irenab, kuryr or neutron?
08:57 *** garyloug has joined #openstack-kuryr
08:57 <irenab> ltomasbo: kuryr
08:58 <vikas_> if kuryr wants to support the neutron Ocata release
08:58 <irenab> assuming neutron seg. id management lands in Pike
08:58 <apuimedo> irenab: Ocata is not frozen yet
08:58 <apuimedo> there's a slim chance
08:58 <ltomasbo> the patch I'm preparing removes the need for vlan_ids to be specified, but you can of course specify them if you want
08:58 <apuimedo> irenab: ltomasbo's patch is compliant with the spec, so it seems like it could be accepted
08:59 <ltomasbo> so, the logic is as follows: if you request a subport_add with seg_type and id, then it uses that one (it could fail if the id is already taken)
08:59 <irenab> apuimedo: ltomasbo: vikas_: I think we should make kuryr specifying seg_ids optional, toggled by a config setting
08:59 <vikas_> irenab, +1
08:59 <ltomasbo> and if you don't specify seg_type, it chooses vlan by default, and if you don't specify seg_id it chooses an available one for you
09:00 <irenab> ltomasbo: this makes sense. It's the same way provider networks work
09:00 <apuimedo> irenab: vikas_: I prefer it as exception handling
09:00 <apuimedo> the exception should allow us to differentiate whether none are available or we requested with 'wrong' parameters
09:01 <vikas_> apuimedo, makes sense
09:01 <irenab> apuimedo: it seems a less 'managed' approach, but I do not have a strong objection
09:02 <ltomasbo> apuimedo, I think irenab meant that we can make kuryr either handle its own seg_ids or let neutron manage them for us, with one or the other enabled in kuryr based on config, am I right irenab?
09:02 <irenab> ltomasbo: yes, exactly
09:03 <irenab> with reasonable defaults
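[Editor's note: the toggle irenab proposes could look roughly like this. The `manage_seg_ids` flag and the helper are hypothetical; the subport dict keys match the Neutron trunk API's add_subports payload.]

```python
# Sketch of the config toggle discussed above: when kuryr manages
# seg_ids, it allocates a VLAN id itself and passes it explicitly;
# otherwise it omits the segmentation details and relies on ltomasbo's
# neutron-side patch to pick a free id.

def build_subport(port_id, seg_driver, manage_seg_ids=True):
    if manage_seg_ids:
        # kuryr-managed: neutron may return Conflict if another
        # controller already took this id, triggering a retry
        return {'port_id': port_id,
                'segmentation_type': 'vlan',
                'segmentation_id': seg_driver.allocate_segmentation_id()}
    # neutron-managed: leave the segmentation details out entirely
    return {'port_id': port_id}
```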
09:07 *** yamamoto has joined #openstack-kuryr
09:21 *** salv-orlando has quit IRC
09:25 *** limao has quit IRC
09:26 *** limao has joined #openstack-kuryr
09:30 *** limao has quit IRC
09:31 <openstackgerrit> vikas choudhary proposed openstack/kuryr-kubernetes: Add support for nested pods with Vlan trunk port  https://review.openstack.org/410578
09:34 <vikas_> ltomasbo, irenab ^
09:34 *** yamamoto has quit IRC
09:36 *** yamamoto has joined #openstack-kuryr
09:37 <ltomasbo> vikas_, ok
09:38 *** devvesa has joined #openstack-kuryr
09:39 *** yamamoto has quit IRC
09:41 *** pcaruana has joined #openstack-kuryr
09:47 <irenab> vikas_: posted a few questions
10:04 *** neiljerram has joined #openstack-kuryr
10:06 <openstackgerrit> Antoni Segura Puimedon proposed openstack/kuryr-kubernetes: vif: avoid exceptions for non exceptional cases  https://review.openstack.org/421777
10:07 <apuimedo> irenab: vikas_: ltomasbo: small patch to improve handling of common cases ^^
10:07 <irenab> apuimedo: ack
10:07 <vikas_> apuimedo, sure
10:11 <openstackgerrit> Antoni Segura Puimedon proposed openstack/kuryr-kubernetes: vif: avoid exceptions for non exceptional cases  https://review.openstack.org/421777
10:11 <apuimedo> vikas_: irenab: I simplified it
10:11 <apuimedo> a bit more
10:17 *** garyloug has quit IRC
10:18 *** garyloug has joined #openstack-kuryr
10:19 *** garyloug has quit IRC
10:19 <openstackgerrit> vikas choudhary proposed openstack/kuryr-kubernetes: Add support for nested pods with Vlan trunk port  https://review.openstack.org/410578
10:19 *** garyloug has joined #openstack-kuryr
10:19 <vikas_> irenab, ltomasbo :)
10:19 <irenab> vikas_: ack
10:21 <vikas_> apuimedo, ^
10:22 <apuimedo> I'm wondering whether to do the same for _get_vif
10:22 <apuimedo> since it is pretty common not to have a vif annotation
10:24 *** devvesa has quit IRC
10:26 <irenab> apuimedo: pointer to the place you wonder about?
10:26 <irenab> vikas_: looks good, some more suggestions posted
10:26 <apuimedo> https://github.com/openstack/kuryr-kubernetes/blob/master/kuryr_kubernetes/controller/handlers/vif.py#L116-L120
10:27 <apuimedo> it's not such a clear-cut case as the one before, where the exception raising was gratuitous
10:27 <openstackgerrit> Dongcan Ye proposed openstack/kuryr-libnetwork: Optimize add subnetpool tag  https://review.openstack.org/420610
10:28 <irenab> apuimedo: returning None is expected either if there are no annotations or if the expected annotation is not there
10:28 <apuimedo> right
10:29 <apuimedo> it is almost asking for some annotation wrapping
10:29 <apuimedo> :P
10:29 <irenab> apuimedo: looks like you are in the mood to pythonize the code, go for it :-)
10:29 <apuimedo> in fact, there are TODOs from ivc_ about that
10:29 <apuimedo> I'm in the middle of using /run files for keeping vif data for deletion
10:29 <apuimedo> and this caught my eye
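[Editor's note: the non-exceptional lookup apuimedo and irenab discuss for `_get_vif` boils down to the sketch below. The annotation key mirrors the one kuryr-kubernetes uses; the helper name is illustrative.]

```python
# A pod without a vif annotation is a normal, expected situation, so
# look it up with dict.get() and return None instead of raising and
# catching KeyError. The pod dict layout follows the Kubernetes API.

K8S_ANNOTATION_VIF = 'openstack.org/kuryr-vif'


def get_vif_annotation(pod):
    # both 'annotations' and the vif key may legitimately be absent
    annotations = pod['metadata'].get('annotations', {})
    return annotations.get(K8S_ANNOTATION_VIF)
```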
10:30 *** devvesa has joined #openstack-kuryr
10:38 *** pcaruana has quit IRC
10:39 *** devvesa has quit IRC
10:55 *** yamamoto has joined #openstack-kuryr
10:55 *** tonanhngo has joined #openstack-kuryr
10:56 *** devvesa has joined #openstack-kuryr
10:57 *** tonanhngo has quit IRC
11:10 *** yamamoto has quit IRC
11:14 <mchiappero> sorry folks, do you know where the code for trunk ports is in neutron?
11:14 <mchiappero> ltomasbo?
11:14 <ltomasbo> yep
11:14 <ltomasbo> services/trunk
11:15 <mchiappero> it was easy actually...
11:15 <mchiappero> thanks
11:15 <ltomasbo> I think I found the issue with the connectivity, going to try to push the automatic vlan id allocation in a while
11:28 <openstackgerrit> vikas choudhary proposed openstack/kuryr-kubernetes: Add support for nested pods with Vlan trunk port  https://review.openstack.org/410578
11:29 <vikas_> irenab, ltomasbo ^
11:31 <ltomasbo> vikas_, you are working hard today :D
11:32 <vikas_> ltomasbo, :D
11:49 *** salv-orlando has joined #openstack-kuryr
11:55 <irenab> vikas_: thanks a lot for addressing all the comments
11:56 *** pcaruana has joined #openstack-kuryr
11:56 <vikas_> irenab, thanks a lot for the quick responses :)
11:56 <vikas_> ltomasbo, too
11:57 <ltomasbo> np, nice work!
12:07 <vikas_> apuimedo, https://review.openstack.org/410578
12:13 <apuimedo> vikas_: would the retry handler that wraps the handler already take care of the retry in https://review.openstack.org/#/c/410578/12..18/kuryr_kubernetes/controller/drivers/nested_vlan_vif.py ?
12:16 *** ivc_ has joined #openstack-kuryr
12:25 <vikas_> apuimedo, that retries on a different exception
12:26 <apuimedo> I wonder if it could be adapted
12:26 <apuimedo> wdyt ivc_ ^^
12:26 <vikas_> maybe we can iterate for improvement after this patch
12:27 <vikas_> and see if it's possible to adapt
12:28 <ivc_> apuimedo vikas_ sorry, I'm missing parts of the discussion, my irc bouncer died today :/
12:29 <vikas_> ivc_, apuimedo said: "vikas_: would the retry handler that wraps the handler already take care of the retry in https://review.openstack.org/#/c/410578/12..18/kuryr_kubernetes/controller/drivers/nested_vlan_vif.py"
12:30 <ivc_> yup, that's what the retry handler is for
12:31 <ivc_> but in some cases it does not solve the problem
12:32 <ivc_> I had to do the same thing in https://review.openstack.org/#/c/376045/12/kuryr_kubernetes/controller/drivers/lbaasv2.py@307
12:33 <irenab> ivc_: the method should raise ResourceNotReady to get the pipeline to call the event handler again, right?
12:33 <vikas_> we can write a wrapper on add_subport for retrying on the "Conflict" exception, but I would love to defer this to a follow-up patch
12:34 <ivc_> irenab yes, but the problem is that when some parts of the work are already done before the exception, it gets messy
12:34 <apuimedo> vikas_: I'm okay with just putting a REVISIT note
12:35 <ivc_> apuimedo in https://review.openstack.org/#/c/376045/12/kuryr_kubernetes/controller/drivers/lbaasv2.py@326
12:35 <irenab> ivc_: agree with you; looks like whoever writes the event handler code must be aware of the Retry being applied and structure the code to handle this properly.
12:36 <ivc_> irenab I'm actually thinking about getting rid of the retry _handler_ and replacing it with just a decorator
12:36 <irenab> ivc_: which requires some sort of state store
12:36 <vikas_> apuimedo, https://review.openstack.org/#/c/410578/18/kuryr_kubernetes/controller/drivers/nested_vlan_vif.py@131 already has a note added
12:37 <apuimedo> ivc_: it's probably a good idea to move it to a decorator and context manager
12:37 <apuimedo> for finer granularity
12:37 <apuimedo> vikas_: very well
12:37 <ivc_> apuimedo yup
12:39 <apuimedo> irenab: have you tried the patch?
12:39 <ivc_> apuimedo irenab vikas_ btw, while we are at it, wdyt: do we need to preserve the 'timeout per event' functionality with the ctxmgr/deco retry?
12:39 <apuimedo> ivc_: I'm not sure I got the question
12:39 <ivc_> or just keep it simple/generic and make it a per-call timeout
12:39 <apuimedo> oh
12:40 *** gsagie has joined #openstack-kuryr
12:40 <apuimedo> I think I get it now, you mean whether the context manager/decorator is event-timeout aware
12:40 <apuimedo> or not
12:40 <ivc_> apuimedo with the current retry handler you have a shared timeout for different operations, so it is predictable how much time is spent per event
12:40 <ivc_> yup
12:40 <apuimedo> it would be better to keep this guarantee
12:41 <apuimedo> but the code is going to look funny
12:41 <vikas_> I think individual timeouts per event will give better tuning capability
12:42 <ivc_> apuimedo we can keep the guarantee: keep the 'retry' handler and have it store the remaining time or the timestamp at which the event was received
12:42 *** yedongcan1 has left #openstack-kuryr
12:43 <irenab> apuimedo: deploying now
12:43 <ivc_> it won't 'look funny', there's a pretty clean/simple solution for that :)
12:43 <apuimedo> irenab: then I leave the w+1 to you
12:43 <irenab> apuimedo: ok
12:44 <apuimedo> ivc_: won't that couple the retry handler and the decorator/ctxtmgr?
12:44 <irenab> apuimedo: must get fullstack asap
12:44 * apuimedo knows
12:44 <apuimedo> the CI mechanism confuses me
12:45 <irenab> apuimedo: another topic for the virtual sprint if we don't sort it out before
12:45 <apuimedo> I have to ask yuvalb a couple of questions
12:45 <apuimedo> irenab: indeed
12:45 <ivc_> apuimedo it will in some way
12:47 <ivc_> apuimedo ok, now I think it would be best if we keep the current 'retry event' but also make a decorator/context manager that will share the same time limit. It covers all the cases
12:47 * apuimedo would like to avoid the coupling
12:48 <ivc_> it won't be coupling, they would be different APIs of the same thing
12:48 <apuimedo> ivc_: that's what I was thinking about
12:48 <apuimedo> but!
12:48 <apuimedo> how do you give the context manager a reference to the retry handler that is wrapping the current event?
12:49 <apuimedo> afaik we don't have it available
12:49 <ivc_> apuimedo thread-local var
12:49 <apuimedo> ivc_: is it there already, or are you thinking about adding it? (a green-thread-local var, I suppose)
12:50 <ivc_> eventlet has support for it
12:50 <ivc_> you just use the one from the threading module; eventlet has it patched already
12:52 <apuimedo> oh... that's so implicit... this monkeypatching gives me the creeps
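[Editor's note: the thread-local scheme ivc_ outlines could be sketched as below. All names are made up for illustration; the real retry handler lives in kuryr_kubernetes. Under eventlet monkey-patching, `threading.local` is green-thread-local, so each event being handled sees its own deadline.]

```python
import threading
import time

# Share one per-event time budget between the outer retry handler and a
# deeper retry helper via a thread-local variable, so all retries within
# one event draw from the same deadline.

_local = threading.local()


class ResourceNotReady(Exception):
    """Stand-in for the kuryr-kubernetes exception of the same name."""


def set_event_deadline(timeout):
    # called once by the outer retry handler when the event arrives
    _local.deadline = time.time() + timeout


def remaining_time():
    deadline = getattr(_local, 'deadline', None)
    if deadline is None:
        return None  # no outer handler: fall back to per-call behaviour
    return max(0.0, deadline - time.time())


def retry_call(func, interval=0.1):
    # inner helper: keep retrying until success or the shared budget ends
    while True:
        try:
            return func()
        except ResourceNotReady:
            left = remaining_time()
            if left is not None and left <= 0:
                raise  # budget exhausted: let the event fail/requeue
            time.sleep(interval if left is None else min(interval, left))
```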
12:56 <irenab> vikas_: I think you didn't update the local.conf options following the review comments ltomasbo posted a few spins ago
12:58 *** devvesa has quit IRC
13:14 <vikas_> irenab, I missed that, but the existing local.conf is also working. Are you facing any issue?
13:15 <irenab> vikas_: I had issues with the overcloud; it kept failing waiting for the keystone service to start
13:16 <irenab> vikas_: but not a problem, this can be fixed by the devstack support patch
13:16 <vikas_> irenab, thanks
13:23 <ltomasbo> yep. I was not able to deploy the overcloud with the current local.conf
13:31 *** gsagie has quit IRC
13:36 *** salv-orlando has quit IRC
13:37 <apuimedo> ltomasbo: maybe you can send a follow-up patch to fix it :P
14:21 *** salv-orlando has joined #openstack-kuryr
14:26 <ltomasbo> I haven't managed to get the patch working in my environment so far, but I will keep trying
14:27 <ltomasbo> that said, I think it should be changed to multi_host, as that makes more sense than having an under/over cloud (which has a slightly different meaning in my opinion), and it will fix the problem of keystone being a requirement for the overcloud
14:28 <ltomasbo> that said, I agree, polishing the local.conf could be done in a follow-up patch
14:33 <apuimedo> ltomasbo: I agree with you
14:38 <openstackgerrit> Zhang Ni proposed openstack/fuxi: Enable Fuxi to use Manila share  https://review.openstack.org/375452
14:47 *** gsagie has joined #openstack-kuryr
14:50 *** devvesa has joined #openstack-kuryr
14:52 *** mattmceuen has joined #openstack-kuryr
14:53 *** hongbin has joined #openstack-kuryr
15:06 *** mattmceuen has quit IRC
15:17 *** openstack has joined #openstack-kuryr
15:18 *** jerms has quit IRC
15:18 *** jermz has joined #openstack-kuryr
15:18 *** pcaruana is now known as pablo|500|
15:19 *** neiljerram has quit IRC
15:19 *** alraddarla has quit IRC
15:19 *** alraddarla has joined #openstack-kuryr
15:22 *** neiljerram has joined #openstack-kuryr
15:25 *** v1k0d3n has joined #openstack-kuryr
15:33 *** v1k0d3n has quit IRC
15:34 *** v1k0d3n has joined #openstack-kuryr
15:36 *** saneax is now known as saneax-_-|AFK
15:36 *** limao has joined #openstack-kuryr
15:39 *** limao_ has joined #openstack-kuryr
15:40 *** limao has quit IRC
15:41 *** v1k0d3n has quit IRC
15:47 *** david-lyle has joined #openstack-kuryr
15:49 *** v1k0d3n has joined #openstack-kuryr
16:03 *** janki has quit IRC
16:12 *** tonanhngo has joined #openstack-kuryr
16:24 *** salv-orl_ has joined #openstack-kuryr
16:27 *** salv-orlando has quit IRC
16:39 *** limao_ has quit IRC
16:46 *** gsagie has quit IRC
16:52 *** devvesa has quit IRC
16:58 *** v1k0d3n has quit IRC
16:58 *** salv-orl_ has quit IRC
16:58 *** v1k0d3n has joined #openstack-kuryr
17:21 *** v1k0d3n has quit IRC
17:35 *** v1k0d3n has joined #openstack-kuryr
17:37 *** david-lyle is now known as bailing-wire
17:40 *** dougbtv is now known as dougbtv|afk
17:46 *** bailing-wire has quit IRC
18:48 *** v1k0d3n has quit IRC
18:52 *** v1k0d3n has joined #openstack-kuryr
19:01 *** huikang has joined #openstack-kuryr
19:27 *** salv-orlando has joined #openstack-kuryr
19:43 *** huikang has quit IRC
19:44 *** huikang has joined #openstack-kuryr
19:47 *** huikang has quit IRC
19:47 *** huikang has joined #openstack-kuryr
20:00 *** bailing-wire has joined #openstack-kuryr
20:02 *** bailing-wire is now known as david-lyle
20:04 *** salv-orlando has quit IRC
20:21 *** huikang has quit IRC
20:23 *** huikang has joined #openstack-kuryr
20:46 *** huikang has quit IRC
20:56 *** v1k0d3n has quit IRC
21:23 *** v1k0d3n has joined #openstack-kuryr
21:47 *** huikang has joined #openstack-kuryr
21:47 *** severion has joined #openstack-kuryr
21:51 *** huikang has quit IRC
21:56 *** severion has quit IRC
21:56 *** severion has joined #openstack-kuryr
21:58 *** severion has quit IRC
22:00 *** severion has joined #openstack-kuryr
22:00 *** severion is now known as v1k0d3m
22:01 *** v1k0d3m has quit IRC
22:03 *** severion has joined #openstack-kuryr
22:24 *** v1k0d3n has quit IRC
22:24 *** severion has quit IRC
22:25 *** portdirect is now known as shipindirect
22:32 *** david-lyle has quit IRC
22:40 *** david-lyle has joined #openstack-kuryr
22:54 *** pmannidi has joined #openstack-kuryr
23:05 *** salv-orlando has joined #openstack-kuryr
23:09 *** salv-orlando has quit IRC
23:15 *** saneax-_-|AFK is now known as saneax
23:24 *** jermz has quit IRC
23:24 *** jermz has joined #openstack-kuryr
23:27 *** garyloug has quit IRC
23:41 *** tonanhngo has quit IRC
23:42 *** tonanhngo has joined #openstack-kuryr
23:43 *** shipindirect is now known as portdirect

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!