Friday, 2017-02-03

*** hongbin has quit IRC00:04
janonymousapuimedo: Hi00:30
apuimedojanonymous: hi00:31
janonymousapuimedo: Was wondering which thing to do first, swarm mode check or tls devstack..00:31
janonymousapuimedo: pls let me know00:31
apuimedojanonymous: http://www.quickmeme.com/img/c2/c26809e3a038ae40c48384056be3891a0adcbf3ea2d13053e5ca83d8deb93798.jpg00:32
apuimedoXD00:32
janonymous:D00:32
apuimedojanonymous: I'd say... Quick investigation on if Docker 1.13 swarm mode allows to specify other drivers like kuryr00:34
apuimedoand then, before jumping to code any support, finish the devstack tls and add a fullstack test for it00:34
apuimedojanonymous: what do you think?00:34
janonymousapuimedo: yeah, sure!00:35
janonymousapuimedo: I was checking tls devstack using USE_SSL=True flag for default services, seems like cinder fails which leads to devstack failure00:35
janonymousapuimedo: not sure if I should log a bug :D00:35
janonymousapuimedo: I will proceed as per your suggestions, thanks Toni00:36
apuimedojanonymous: asking around in openstack infra irc channel may save you the work of filing a bug00:36
apuimedo:P00:36
janonymousapuimedo:Haha00:37
janonymousyeah i can try that :)00:37
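For reference, the TLS setup janonymous is testing is driven by DevStack's local.conf; a minimal sketch, assuming the standard DevStack `USE_SSL` knob (check the DevStack docs for per-service TLS options):

```ini
[[local|localrc]]
# Serve the API endpoints over native TLS.
# Note: janonymous reports cinder fails to stack with this enabled.
USE_SSL=True
```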
apuimedojanonymous: let me know how it goes00:43
apuimedo(but later, now I'll go to bed :P )00:43
janonymous:) sure00:44
*** salv-orlando has joined #openstack-kuryr00:49
*** salv-orlando has quit IRC00:54
*** salv-orlando has joined #openstack-kuryr01:00
*** salv-orlando has quit IRC01:06
openstackgerritZhang Ni proposed openstack/fuxi master: Enable Fuxi to use Manila share  https://review.openstack.org/37545201:13
*** yedongcan has joined #openstack-kuryr01:27
openstackgerritZhang Ni proposed openstack/fuxi master: Enable Fuxi to use Manila share  https://review.openstack.org/37545201:28
*** portdirect is now known as portdirect_away01:30
*** hongbin has joined #openstack-kuryr01:35
*** yedongcan1 has joined #openstack-kuryr01:41
*** yedongcan has quit IRC01:43
*** salv-orlando has joined #openstack-kuryr02:02
*** salv-orlando has quit IRC02:06
*** mattmceuen has joined #openstack-kuryr02:12
*** hongbin has quit IRC02:19
*** hongbin has joined #openstack-kuryr02:21
*** yamamoto has quit IRC02:23
*** hongbin has quit IRC02:28
janonymousapuimedo: Seems kuryr works when switched to swarm mode; the only change I had to make was removing --cluster-store etcd://localhost:4001 from the daemon startup: http://paste.openstack.org/show/597474/02:38
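The change described above boils down to dropping the external KV store flag from the daemon and letting swarm mode's built-in store take over; a rough sketch (the network name is made up, the exact flags are in the paste):

```sh
# before: legacy multi-host networking needed an external KV store
#   dockerd --cluster-store etcd://localhost:4001 ...
# with Docker 1.13 swarm mode, start the daemon without it:
dockerd
docker swarm init
# kuryr remains usable as a remote network driver:
docker network create -d kuryr --ipam-driver kuryr test_net
```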
vikascapuimedo, pong02:53
*** pmannidi has quit IRC02:57
*** pmannidi has joined #openstack-kuryr02:58
*** salv-orlando has joined #openstack-kuryr03:03
*** salv-orlando has quit IRC03:07
*** yedongcan1 has quit IRC03:15
*** yedongcan has joined #openstack-kuryr03:16
*** hongbin has joined #openstack-kuryr03:21
*** yamamoto has joined #openstack-kuryr03:24
*** mattmceuen has quit IRC03:25
*** yamamoto has quit IRC03:30
*** hongbin has quit IRC03:31
*** yamamoto has joined #openstack-kuryr03:58
*** mattmceuen has joined #openstack-kuryr04:21
*** mattmceuen has quit IRC05:00
*** salv-orlando has joined #openstack-kuryr05:04
*** salv-orlando has quit IRC05:09
*** yamamoto has quit IRC05:10
*** yamamoto has joined #openstack-kuryr05:14
*** yamamoto has quit IRC05:14
*** jay-ahn has joined #openstack-kuryr05:46
*** jay-ahn has quit IRC05:47
*** hyunsun has joined #openstack-kuryr05:47
*** janki has joined #openstack-kuryr06:03
*** salv-orlando has joined #openstack-kuryr06:05
*** salv-orlando has quit IRC06:10
*** yamamoto has joined #openstack-kuryr06:15
*** saneax-_-|AFK is now known as saneax06:18
*** yamamoto has quit IRC06:19
*** yamamoto has joined #openstack-kuryr06:19
*** salv-orlando has joined #openstack-kuryr06:57
*** salv-orl_ has joined #openstack-kuryr07:49
*** salv-orlando has quit IRC07:51
apuimedojanonymous: sounds great!07:59
apuimedojanonymous: Did you verify that the communication is using the Neutron ports?07:59
apuimedovikasc: Morning!07:59
vikascapuimedo, Morning!08:00
vikascapuimedo, i missed your ping yesterday08:00
*** reedip has quit IRC08:01
*** reedip has joined #openstack-kuryr08:02
apuimedono worries08:02
*** salv-orl_ has quit IRC08:04
*** salv-orlando has joined #openstack-kuryr08:04
*** salv-orlando has quit IRC08:08
janonymousapuimedo: yeah, I checked that the Neutron logs were showing requests when creating a network08:10
apuimedojanonymous: cool!08:11
apuimedoI think you should definitely make a YouTube video or asciinema recording of it and tweet that kuryr-libnetwork works with Docker 1.13 swarm mode :-008:12
apuimedo:-)08:12
janonymousapuimedo: I prefer to remain anonymous ;)08:12
apuimedotrue :P08:13
apuimedoYou can make a twitter profile with https://pbs.twimg.com/profile_images/716059387735965696/_qYqY6BK.jpg as avatar08:13
apuimedoxd08:13
janonymousHehee :D08:14
janonymousapuimedo: just to add, I checked with a single-node swarm deployment08:16
apuimedook. We should check with multi node as well08:17
janonymoussure08:20
apuimedoI received the new OpenStack Foundation made Kuryr logo08:38
*** hyunsun has quit IRC08:39
*** yamamoto has quit IRC08:40
janonymousWe should order kuryr t-shirts :D08:40
apuimedohttps://drive.google.com/file/d/0B7nRaHXyizqSMXBiakh1dGFBYXdCZHFKUko3OU44d1FYTWJr/view?usp=sharing08:40
apuimedoI am biased since I made the previous logo08:40
apuimedobut I liked it better08:40
apuimedo:P08:40
apuimedoThis platypus doesn't have a cheeky smile08:41
janonymousHahaaa08:41
janonymousthis is the side view i guess08:42
janonymous:P08:42
apuimedoI suppose08:42
janonymousBut this is nowhere near what our previous logo was08:43
apuimedoThis one has a cheeky smile https://s-media-cache-ak0.pinimg.com/originals/1a/f3/37/1af33760a821e18bb6e7bd3457c55aa2.jpg08:43
apuimedojanonymous: well, tbf, the previous one was not a platypus08:43
janonymouswhat about http://docs.openstack.org/developer/kuryr-libnetwork/readme.html#getting-it-running-with-a-service-container08:44
apuimedohttp://www.environment.nsw.gov.au/images/nature/platypusLg.jpg would have been nice too08:44
apuimedojanonymous: you mean that we should fix the upstream container?08:45
janonymoussry the logo on this page08:46
apuimedothat's the one I made08:46
janonymousapuimedo: http://docs.openstack.org/developer/kuryr-libnetwork/readme.html#kuryr-libnetwork08:46
apuimedoI guess it will go to the trash :'(08:46
janonymousI saw some protests on the mailing list about a bear logo08:47
janonymous:D , all logos look the same :P08:47
apuimedowell, the final version of Ironic's bear is much better than the initial one08:49
apuimedobut I'm not sure which feedback to give on this one08:49
janonymousHaha08:50
apuimedoivc_: did you see ltomasbo's and my comments on the lbaas patch?08:53
*** garyloug has joined #openstack-kuryr08:59
*** yamamoto has joined #openstack-kuryr09:24
*** yamamoto has quit IRC09:40
*** salv-orlando has joined #openstack-kuryr10:06
*** salv-orlando has quit IRC10:11
ivc_apuimedo aye, will update it soon10:38
apuimedocool10:38
apuimedoivc_: now ltomasbo is also testing your resourceversion patch10:38
apuimedoand stumbled upon a case that annotation are not yet there10:38
apuimedoand it gets an exception10:38
ltomasboyep, going to comment on the patch what I found10:38
apuimedo(when initializing new_annotations)10:38
apuimedoltomasbo: maybe you can share the struggle with ivc_10:39
apuimedothree brains think better than two10:39
ltomasboivc_, why workflow -1 in the patch btw?10:39
ivc_well i was thinking about baking 'skip stale' functionality into it (that was the original plan, but it is more complicated than the current time-based 'skip-stale' patch)10:40
*** yamamoto has joined #openstack-kuryr10:41
apuimedoivc_: but I merged too fast10:41
ivc_but at the same time 'skip stale' got merged so i've not yet decided10:41
apuimedoxd10:41
ivc_^10:41
ivc_and it also has -1 from irenab anyway10:42
ivc_so it requires a bug on launchpad10:42
ivc_ltomasbo but anyway, what's the issue with exception?10:44
ltomasboivc_, ok, thanks10:44
ltomasboin L6210:45
ivc_KeyError?10:45
ltomasboI'm getting an error as resource['metadata'] has no annotations dict10:45
ltomasboyep10:45
ltomasboso, I modified it to resource['metadata'].get('annotations', {})10:46
ivc_yup10:46
ltomasboand then, to update the resource_version properly10:46
ltomasboI switched the for loop you have in L63-6510:46
ltomasboby:10:46
ltomasbofor k, v in new_annotations.items():10:46
ltomasbo    if v != annotations.get(k, v):10:47
ltomasbo        break10:47
ltomasboelse:10:47
ltomasbo...10:47
*** yamamoto has quit IRC10:47
ltomasboso that if there is no annotations, the else gets executed and in the next iteration the response is ok10:47
ltomasbonot sure though if that could break other use cases...10:47
ivc_i think i had a reason why it was 'annotations' driving the loop10:47
ltomasbothat is what I don't know10:48
ivc_i need to think about it, but anyway we can get rid of get(..., {}) and simply "if 'annotations' not in ...['metadata']: ... continue" early10:49
ltomasboyep10:50
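Ltomasbo's fix, pasted piecemeal above, comes together as something like this (a sketch; `resource` and `new_annotations` follow the shapes discussed for the patch under review, and the for/else relies on `else` running only when the loop was not broken):

```python
def annotations_up_to_date(resource, new_annotations):
    """True when no existing annotation conflicts with the new ones.

    Using .get('annotations', {}) avoids the KeyError on fresh
    resources whose metadata carries no annotations yet, and
    .get(k, v) makes a *missing* key count as a match, so only a
    genuinely different stored value forces a retry.
    """
    annotations = resource['metadata'].get('annotations', {})
    for k, v in new_annotations.items():
        if v != annotations.get(k, v):
            break  # conflicting annotation; caller should re-read
    else:
        return True
    return False
```

With an empty metadata dict this returns True, which is exactly the case that used to raise.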
ltomasboI just saw the problem was the time as apuimedo suggested10:50
ltomasboI just rebase my patch (without using the annotations one)10:50
ltomasboand now it works as expected10:50
ivc_you probably got the 'skip stale' patch now10:51
ivc_they are designed to work together10:51
ivc_as in 'resource_version' introduces the issue that 'skip stale' fixes10:51
ltomasboyep10:51
ltomasbobut with skip stale it is enough for me (rigth now)10:52
ltomasboanyway, I guess the .get('annotations', {})10:52
ltomasbocould be useful for your patch10:52
ivc_it depends if there was an actual reason why 'annotation' is driving the loop instead of 'new_annotation'. but that was more than a month ago and i don't quite remember, so i need to go through it once more10:54
ivc_and it prolly deserves some #NOTE10:54
ivc_ltomasbo also, you'll probably still see that issue in some rare cases even after 'skip stale'10:56
ltomasbocould be10:57
ltomasbobut it was happening to me all the time10:57
ltomasboeven debugging with pdb, where races are less probable10:57
ivc_its not the race10:57
ivc_you were guaranteed to get it :)10:58
ltomasbobut anyhow, thanks for the skip stale! it was driving me crazy the whole morning :D10:58
ivc_the flow is that K8s itself causes several events during creation of the resource. i think 'skip stale' patch has a good description about whats going on10:59
ivc_and what you were seeing is caused by a second event which still has no annotations but happens after kuryr processed the first one and added an annotation11:00
ivc_so it was literally guaranteed to happen11:00
ivc_and without skip_stale/resource_version patch it would cause unnecessary neutron create/delete operations for ports11:01
*** yedongcan has left #openstack-kuryr11:01
ivc_ltomasbo and by the way, if you actually want to test services, just take https://review.openstack.org/#/c/376045/ - it has everything already, i'm just adding tests and splitting it into smaller parts now11:07
*** salv-orlando has joined #openstack-kuryr11:08
ltomasboivc_, I've already tested that!11:09
ltomasbomy problem is that my patch was not removing ports with release_vif in the normal way11:10
ltomasboso I found some leftover ports11:10
ltomasbodue to not having the skip stale11:10
ivc_ah ok11:10
ltomasbobut now it works, although as you said, it still may end up in some race and leaving some ports11:11
*** salv-orlando has quit IRC11:12
ivc_ltomasbo in your pool implementation do you have any performance numbers already?11:15
ltomasboI'm going to submit a new patch in a few minutes11:15
ltomasboI tried in a local VM here11:16
ltomasbowith some containers that needs around 12-14 seconds to boot up11:16
ltomasboand I'm getting them (if the port is already there) in 7-10 seconds11:16
ltomasbobut I can execute a better benchmarking11:17
ltomasbobut it can save 2-3 seconds (in a slow environment)11:17
ivc_thing is i've measured port creation with ~20-50 concurrent pods with original patch11:17
ivc_and imo what you are optimising now is not the bottleneck11:18
ivc_neutron.port_create is quite fast11:18
ivc_so i don't see much gain in optimising it11:18
ivc_the bottleneck is the 'wait for status==ACTIVE' loop11:18
ltomasbowell, in my case, it was not that fast11:19
ltomasboplus, the wait status==active will be faster if the port is already created11:19
ivc_i've not added devref/spec/bp for that, but we discussed that 'pool' design before11:19
ivc_ltomasbo it would not11:20
ivc_not with ovs-agent at least11:20
ltomasboI'm using ovs-firewall11:20
ivc_in case of ovs-agent port_create is mostly about adding a record in database11:20
ivc_ovs-firewall is still ovs-agent11:21
ivc_right?11:21
ltomasboand I will also try to speed up the nested case, where there are more calls to the neutron (create port, attach, update)11:21
ltomasboyep11:21
ivc_well what i was proposing is a little different11:21
ivc_so instead of optimising port_create we optimise the cni side of things11:22
ltomasboI think I'm seeing a bigger speedup because I'm testing this in a devstack inside a VM11:22
ivc_so pools are per-node and interfaces/veths are also pooled11:22
ivc_ltomasbo my point is in a fair test i expect a very little gain from your current approach11:23
ltomasboseems one of the bottlenecks is the agent asking the server for the port details11:23
ivc_na11:24
ltomasboso, if we already have those, we speed up that part11:24
ivc_the bottleneck is ovs-agent11:24
ltomasboovs-agent doing what?11:24
ltomasboattaching the veth to the br-int?11:24
ivc_thats done by nova11:25
*** salv-orlando has joined #openstack-kuryr11:25
ivc_:)11:25
ltomasboor creating the linux bridge and applying the ip tables?11:25
ivc_s/nova/os-vif/11:25
ivc_linux bridge is also os-vif responsibility (and it is quite fast)11:25
ivc_you know how ovs-agent/nova/neutron-server interact with each other during vm startup?11:26
ltomasboyep11:27
ivc_so we are pretty much like nova here11:27
ltomasbobut here we remove the nova11:27
ltomasboyep, agree11:28
ivc_we plug veth same as nova and wait for ovs-agent to configure everything (i.e. flows/iptables)11:28
ivc_and thats where we waste all the time11:28
openstackgerritLuis Tomas Bolivar proposed openstack/kuryr-kubernetes master: [WIP] Adding pool of ports to speed up containers booting/deletion  https://review.openstack.org/42668711:29
ivc_if you profile port_create -> osvif.plug -> poll show_port you'll see that it is that show_port polling part that takes the majority of time11:29
ltomasboI'm using ovs-firewall, so no iptables here11:29
ivc_doesn't really matter, it just replaces it with flows11:30
ivc_(but its prolly faster than with iptables)11:30
ivc_so, just add logging around port_create on controller side and you'll see its <1 second11:31
ivc_tho it grows as you add concurrency11:32
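The "poll show_port until status==ACTIVE" step ivc_ is profiling looks roughly like this (a sketch; `client` is assumed to expose a python-neutronclient-style show_port returning {'port': {...}}, and the helper name is made up):

```python
import time

def wait_for_active(client, port_id, timeout=60, interval=1):
    """Block until the Neutron port reports status ACTIVE.

    This polling dominates the wall-clock time of the plug flow:
    port_create itself is fast, but ovs-agent wires the port up
    asynchronously, so the caller can only poll.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        port = client.show_port(port_id)['port']
        if port['status'] == 'ACTIVE':
            return port
        time.sleep(interval)
    raise RuntimeError('port %s never became ACTIVE' % port_id)
```

Pooling pre-created (and pre-wired) ports sidesteps this wait entirely, which is the motivation behind the pool patches being discussed.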
ltomasbo1 second is not something that small for containers11:32
ltomasboso, still saving11:32
ivc_it's not by itself. but it's 1 second out of 12 seconds11:33
ivc_and pooling veths you'll be saving all those 12 seconds11:33
ivc_and 12 seconds is quite good actually. when i added ~20 pods it was getting to about 1-2 minute intervals11:34
ivc_so my point is we better wait on pooling optimisation for now11:35
ltomasbo1-2 minutes to create 20 containers?11:35
ltomasbolet me try that with/without my patch11:35
ivc_1-2 minutes max per container11:36
ivc_just do 'kubectl run ... --replicas=20'11:36
ltomasboper container??11:40
ivc_yup11:40
ltomasbothat is a lot!11:40
ivc_aye11:40
ivc_with replicas=50 it didn't finish at all :)11:40
ivc_but that was also before skip_stale and had quite a bit of overhead11:41
ivc_ltomasbo but we are talking about devstack inside vm here, so these numbers should not be taken very seriously (on real hw with properly setup ost it would be much better)11:44
ltomasbo:D11:44
ivc_ltomasbo but my point is that even on devstack with veth-pooling (as opposed to just neutron port pooling) we can make it almost instant11:45
apuimedomeh11:46
ivc_and to get to veth-pooling we first need runfiles implemented (apuimedo is working on it afaik) and also split cni into daemon/exec11:46
apuimedoit seems I have quite a backlog to read11:46
ivc_xD11:46
ivc_apuimedo i think we need to make a todo/roadmap. maybe during VTG11:47
ivc_i mean what actionable items we have in a backlog and what are the dependencies/order of execution11:49
apuimedoivc_: what's the plan on avoiding k8s to delete the veth together with the namespace?11:54
*** pmannidi has quit IRC11:54
apuimedoor is remove from network called before the namespace is deleted?11:54
apuimedoIf it is, then that's the biggest optimization11:55
apuimedoto move the veth back into a namespace11:55
apuimedocalled kuryr_pool11:55
apuimedo(or, with the new logo, platypool11:55
apuimedo)11:55
ivc_apuimedo https://www.youtube.com/watch?v=hdcTmpvDO0I11:55
ivc_move it to 'recycle namespace'11:56
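The "recycle namespace" idea, i.e. parking the veth instead of letting it die with the pod's netns, could look something like this (a sketch with made-up names; it needs root and assumes remove-from-network runs before the pod namespace is torn down):

```sh
# one long-lived namespace to hold recycled interfaces
ip netns add kuryr_pool 2>/dev/null || true
# on pod teardown, move the veth out before the pod netns is deleted
ip netns exec "$POD_NETNS" ip link set "$VETH_NAME" netns kuryr_pool
# on pod creation, hand a pooled veth to the new namespace
ip netns exec kuryr_pool ip link set "$VETH_NAME" netns "$NEW_POD_NETNS"
```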
apuimedoivc_: I wanted to call a core meeting for Monday/Tuesday to finalize the VTG schedule and sessions11:56
apuimedolol11:56
apuimedomadagascar11:56
*** huats has quit IRC11:56
openstackgerritIlya Chukhnakov proposed openstack/kuryr-kubernetes master: K8s Services support: LBaaSSpecHandler  https://review.openstack.org/42744012:06
ivc_apuimedo ltomasbo ^^12:08
*** yamamoto has joined #openstack-kuryr12:12
*** dougbtv has quit IRC12:15
*** dougbtv has joined #openstack-kuryr12:19
*** dougbtv_ has joined #openstack-kuryr12:24
*** dougbtv has quit IRC12:27
*** garyloug has quit IRC12:44
apuimedoivc_: thanks. Will check after lunch13:04
*** salv-orlando has quit IRC13:24
*** yamamoto has quit IRC13:33
*** janki has quit IRC13:35
*** garyloug has joined #openstack-kuryr13:45
*** yamamoto has joined #openstack-kuryr13:56
*** yamamoto has quit IRC14:42
*** pcaruana has quit IRC15:11
*** pcaruana has joined #openstack-kuryr15:17
*** saneax is now known as saneax-_-|AFK15:19
*** yamamoto has joined #openstack-kuryr15:28
*** yamamoto has quit IRC15:33
*** salv-orlando has joined #openstack-kuryr15:44
*** hongbin has joined #openstack-kuryr16:06
openstackgerritLuis Tomas Bolivar proposed openstack/kuryr-kubernetes master: [WIP] Adding pool of ports to speed up containers booting/deletion  https://review.openstack.org/42668716:31
*** yamamoto has joined #openstack-kuryr16:32
hongbinapuimedo: hey antoni, just want to get your early feedback on this one: https://review.openstack.org/#/c/427923/ . i wanted to make sure it is on the right track before doing further work on this feature16:36
*** yamamoto has quit IRC16:38
openstackgerritLuis Tomas Bolivar proposed openstack/kuryr-kubernetes master: [WIP] Adding pool of ports to speed up containers booting/deletion  https://review.openstack.org/42668716:49
*** lakerzhou_ has joined #openstack-kuryr16:50
*** jamespage has joined #openstack-kuryr17:24
apuimedohongbin: I just put a comment to it. I like the approach17:26
apuimedo:-017:26
apuimedo:-)17:26
apuimedoThanks17:26
apuimedoivc_: I'm happy with https://review.openstack.org/#/c/427440/317:28
apuimedoI would probably have left the set endpoints annotation for the patch that introduces the endpoints handler17:28
apuimedobut I don't want to be so picky17:28
*** jgriffith has quit IRC17:56
*** mestery has quit IRC17:56
*** mestery has joined #openstack-kuryr17:57
*** jgriffith has joined #openstack-kuryr17:57
*** pc_m has quit IRC18:18
*** pc_m has joined #openstack-kuryr18:22
*** neiljerram has quit IRC18:34
hongbinapuimedo: thanks18:47
*** salv-orlando has quit IRC19:05
*** salv-orlando has joined #openstack-kuryr19:08
*** salv-orl_ has joined #openstack-kuryr19:49
*** salv-orlando has quit IRC19:52
*** garyloug has quit IRC19:54
*** salv-orl_ has quit IRC20:05
*** salv-orlando has joined #openstack-kuryr20:06
*** salv-orlando has quit IRC21:45
*** russellb has quit IRC21:58
*** vikasc has quit IRC22:19
*** reedip has quit IRC22:19
*** vikasc has joined #openstack-kuryr22:19
*** reedip has joined #openstack-kuryr22:21
*** neiljerram has joined #openstack-kuryr22:24
*** salv-orlando has joined #openstack-kuryr22:29
*** lakerzhou_ has quit IRC22:33
*** reedip has quit IRC22:50
*** reedip has joined #openstack-kuryr22:51
*** pmannidi has joined #openstack-kuryr23:20
*** pmannidi has quit IRC23:26
*** pmannidi has joined #openstack-kuryr23:29
*** saneax-_-|AFK is now known as saneax23:35
*** salv-orlando has quit IRC23:48
*** salv-orlando has joined #openstack-kuryr23:49
*** neiljerram has quit IRC23:54
*** salv-orlando has quit IRC23:54
*** neiljerram has joined #openstack-kuryr23:55
*** salv-orlando has joined #openstack-kuryr23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!