Thursday, 2017-12-14

00:25 *** yuanying_ has joined #openstack-kuryr
00:27 *** yuanying has quit IRC
00:42 *** gouthamr has joined #openstack-kuryr
01:00 *** yuanying_ has quit IRC
01:09 *** yuanying has joined #openstack-kuryr
01:14 *** caowei has joined #openstack-kuryr
01:19 *** kiennt26 has joined #openstack-kuryr
02:22 *** threestrands_ has joined #openstack-kuryr
02:22 *** threestrands_ has quit IRC
02:22 *** threestrands_ has joined #openstack-kuryr
02:22 *** threestrands has quit IRC
02:22 *** threestrands_ has quit IRC
02:23 *** threestrands_ has joined #openstack-kuryr
02:23 *** threestrands_ has quit IRC
02:23 *** threestrands_ has joined #openstack-kuryr
02:29 *** salv-orl_ has joined #openstack-kuryr
02:32 *** threestrands_ has quit IRC
02:32 *** caowei has quit IRC
02:32 *** yuanying has quit IRC
02:32 *** kzaitsev_pi has quit IRC
02:32 *** salv-orlando has quit IRC
02:32 *** s1061123 has quit IRC
02:32 *** dmellado has quit IRC
02:32 *** ChanServ has quit IRC
02:34 *** ChanServ has joined #openstack-kuryr
02:34 *** barjavel.freenode.net sets mode: +o ChanServ
02:36 *** threestrands_ has joined #openstack-kuryr
02:36 *** caowei has joined #openstack-kuryr
02:36 *** yuanying has joined #openstack-kuryr
02:36 *** kzaitsev_pi has joined #openstack-kuryr
02:36 *** s1061123 has joined #openstack-kuryr
02:36 *** dmellado has joined #openstack-kuryr
03:17 *** gouthamr has quit IRC
04:18 *** yamamoto has joined #openstack-kuryr
04:36 *** kiennt26 has quit IRC
04:37 *** kiennt26 has joined #openstack-kuryr
04:58 *** caowei has quit IRC
05:20 *** s1061123 has quit IRC
05:25 *** s1061123 has joined #openstack-kuryr
05:45 *** caowei has joined #openstack-kuryr
05:48 *** janki has joined #openstack-kuryr
06:47 *** openstackgerrit has quit IRC
06:53 *** threestrands_ has quit IRC
07:21 *** juriarte has joined #openstack-kuryr
07:24 *** kiennt26 has quit IRC
07:32 *** gcheresh has joined #openstack-kuryr
08:15 *** janki has quit IRC
08:15 *** reedip has quit IRC
08:18 *** reedip has joined #openstack-kuryr
08:24 *** celebdor has joined #openstack-kuryr
08:26 *** pmannidi has quit IRC
08:50 *** openstackgerrit has joined #openstack-kuryr
08:50 <openstackgerrit> Eyal Leshem proposed openstack/kuryr-kubernetes master: Translate k8s policy to SG  https://review.openstack.org/526916
08:55 *** huats has quit IRC
08:55 *** huats has joined #openstack-kuryr
09:36 *** janki has joined #openstack-kuryr
09:44 *** garyloug has joined #openstack-kuryr
09:50 *** dougbtv_ has joined #openstack-kuryr
09:51 *** dougbtv has quit IRC
09:51 *** dougbtv__ has joined #openstack-kuryr
09:55 *** dougbtv_ has quit IRC
10:05 *** caowei has quit IRC
10:14 *** dougbtv_ has joined #openstack-kuryr
10:17 *** dougbtv__ has quit IRC
10:33 *** dougbtv__ has joined #openstack-kuryr
10:36 *** dougbtv_ has quit IRC
10:44 *** reedip has quit IRC
10:59 *** reedip has joined #openstack-kuryr
11:02 *** dougbtv_ has joined #openstack-kuryr
11:05 *** dougbtv__ has quit IRC
11:11 *** maysamacedos has joined #openstack-kuryr
11:12 *** yamamoto has quit IRC
11:39 <openstackgerrit> Eyal Leshem proposed openstack/kuryr-kubernetes master: Kubernetes Network Policy support Spec  https://review.openstack.org/519239
11:46 *** janki has quit IRC
11:53 *** yamamoto has joined #openstack-kuryr
11:54 *** yamamoto has quit IRC
11:55 *** yamamoto has joined #openstack-kuryr
12:12 *** jerms has joined #openstack-kuryr
12:14 *** jerms has left #openstack-kuryr
12:19 *** yamamoto has quit IRC
12:54 *** reedip has quit IRC
13:03 *** kzaitsev_pi has quit IRC
13:04 *** kzaitsev_pi has joined #openstack-kuryr
13:07 *** yamamoto has joined #openstack-kuryr
13:08 *** reedip has joined #openstack-kuryr
13:13 <openstackgerrit> Eyal Leshem proposed openstack/kuryr-kubernetes master: Kubernetes Network Policy support Spec  https://review.openstack.org/519239
13:28 *** c00281451_ has quit IRC
13:29 *** c00281451 has joined #openstack-kuryr
13:34 *** maysamacedos has quit IRC
13:37 *** maysamacedos has joined #openstack-kuryr
13:41 *** yamamoto has quit IRC
13:50 *** yamamoto has joined #openstack-kuryr
13:54 *** yamamoto has quit IRC
13:55 *** kiennt26 has joined #openstack-kuryr
14:00 *** atoth has joined #openstack-kuryr
14:02 *** maysamacedos has quit IRC
14:14 *** yamamoto has joined #openstack-kuryr
14:19 *** yamamoto has quit IRC
14:25 *** gouthamr has joined #openstack-kuryr
15:26 *** kiennt26 has quit IRC
15:57 *** gcheresh has quit IRC
16:17 *** juriarte has quit IRC
16:17 *** maysamacedos has joined #openstack-kuryr
16:23 <dulek> ltomasbo: Hm, just a thought. Does your code have some protection from pools that are "gone"?
16:23 <ltomasbo> dulek, what do you mean by pools that are gone?
16:23 <dulek> ltomasbo: I mean a situation where e.g. a node gets removed from the cluster, so the pools that are related to it are not needed anymore.
16:24 <ltomasbo> if the node gets removed from the cluster, the kubernetes scheduler will not allocate ports on them
16:24 <ltomasbo> *pods
16:24 <ltomasbo> so, we will still have that pool, but we will not use it
16:24 <dulek> ltomasbo: Sure, but some ports will still be allocated there and those will just waste subnet addresses.
16:25 <ltomasbo> that's true
16:25 <ltomasbo> but that could be handled by a periodic cleanup task
16:25 <dulek> ltomasbo: In the case of nodes that would work pretty well.
16:26 <ltomasbo> but...?
16:26 <dulek> ltomasbo: It would be a bit more difficult if someone changed the SG id in the config?
16:26 <ltomasbo> :D
16:26 <ltomasbo> we talked some time ago about having the ports in the pool with some kind of TTL
16:26 <dulek> ltomasbo: I'm just not sure what should trigger pool removal in such a case.
16:27 <ltomasbo> so that they are removed eventually if not used
16:27 <ltomasbo> so, we can add some extra info into the pools
16:28 <dulek> ltomasbo: Hm, that makes sense. Aaaand… it's pretty easy to implement with the KuryrPort CRD, as I can just check the object's age.
16:28 <ltomasbo> and have a counter that increases the value every time there is a periodic check
16:28 <ltomasbo> and when the port is taken (and put back into the pool after usage), set that value to 0
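
A minimal sketch of the per-port TTL counter ltomasbo describes above, assuming a simple in-memory pool; the class and attribute names are illustrative, not actual kuryr-kubernetes code:

    class PooledPort(object):
        """A Neutron port sitting in the pool, with an idle-check counter."""
        def __init__(self, port_id):
            self.port_id = port_id
            self.idle_checks = 0  # bumped by every periodic check

    class PortPool(object):
        def __init__(self, max_idle_checks=10):
            self._ports = {}  # port_id -> PooledPort
            self._max_idle = max_idle_checks

        def put(self, port_id):
            # Port created for, or returned to, the pool: TTL starts at 0.
            self._ports[port_id] = PooledPort(port_id)

        def take(self):
            # Hand out any pooled port; put() resets its counter later.
            _, pooled = self._ports.popitem()
            return pooled.port_id

        def periodic_check(self):
            """Age every pooled port and evict the ones past the TTL."""
            expired = []
            for pooled in list(self._ports.values()):
                pooled.idle_checks += 1
                if pooled.idle_checks > self._max_idle:
                    del self._ports[pooled.port_id]
                    expired.append(pooled.port_id)
            return expired  # the caller would delete these ports in Neutron
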
16:29 <dulek> ltomasbo: What if your controller is restarted each hour? We don't really have a place to keep such info.
16:29 <dulek> I mean - in a persistent way.
16:29 <ltomasbo> why would the controller be restarted each hour?
16:30 <ltomasbo> still, when it is restarted, it will put the ports into the pools and set the TTL flag to 0
16:30 <ltomasbo> or to the max value
16:30 <ltomasbo> ahh, you mean to keep it in sync with the kuryr-cni view...
16:30 <dulek> ltomasbo: That's an exaggerated example, but it's normal practice to restart Python applications regularly, as Python's GC will not release memory on its own.
16:31 <dulek> ltomasbo: Not really, I don't have a solution now.
16:31 <ltomasbo> that should not be a problem unless you are restarting the controller more frequently than the time to live
16:31 <dulek> ltomasbo: Sure thing!
16:32 <dulek> I'm just not sure what TTL makes sense. It should balance holding unnecessary resources against higher latency when starting a pod.
16:32 <ltomasbo> perhaps we can have a ttl per pool?
16:33 <dulek> That problem sounds like something you could write a master's thesis about. :D
16:33 <ltomasbo> and if the pool is not requested at all in a few hours/days, then assume we can delete it?
16:33 <ltomasbo> and of course delete it if the node is not present anymore
16:33 <dulek> and no pods exist that would be in such a pool.
16:33 <ltomasbo> yep
16:34 <dulek> That would work IMO. :)
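
The deletion criteria the two just agreed on could be condensed roughly as follows; the function and its arguments are hypothetical helpers, not actual kuryr-kubernetes code:

    def pool_can_be_deleted(node_exists, pods_in_pool, idle_seconds,
                            max_idle_seconds):
        """Never reclaim a pool that still has pods; reclaim immediately
        if the node is gone, otherwise only after the pool has been idle
        longer than the configured threshold."""
        if pods_in_pool:
            return False
        if not node_exists:
            return True
        return idle_seconds > max_idle_seconds
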
16:34 <ltomasbo> also, does it matter if the kuryr-cni is aware of the pools that are removed?
16:34 <ltomasbo> dulek, ^^
16:34 <ltomasbo> it will never assign a pod to that pool anyway, and the resources are freed by the kuryr-controller
16:35 <dulek> Yeah, I don't think it matters.
16:35 <dulek> If a pod comes that matches the pool - it will get created and the CNI will see the port.
16:35 <ltomasbo> ok
16:35 <ltomasbo> dulek, what will happen in the other case
16:35 <dulek> ltomasbo: That is?
16:35 <ltomasbo> where the kuryr-controller is up all the time and a node (kuryr-cni) reboots?
16:36 <ltomasbo> will it read the kuryr CRD to recover the info about the pools at that node?
16:36 <dulek> ltomasbo: Not much. In current master it doesn't matter, the data is in the annotation.
16:36 <dulek> ltomasbo: And in the case of what I'm writing currently - it will just re-read the KuryrPort list from the k8s API.
16:37 <dulek> ltomasbo: I'm trying to make the API the source of truth, so it's fine.
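
For the restart case, re-reading the KuryrPort objects could be a single list call against the Kubernetes custom objects API. A sketch with the official Python client; the CRD group/version/plural and the spec layout are assumptions, since the CRD was still being written at this point:

    from kubernetes import client, config

    config.load_incluster_config()  # kuryr-cni would run inside the cluster
    crds = client.CustomObjectsApi()

    # Assumed CRD coordinates - illustrative only.
    kuryrports = crds.list_cluster_custom_object(
        group='openstack.org', version='v1', plural='kuryrports')

    for kp in kuryrports['items']:
        # A 'vif' spec field is an assumption about the CRD layout.
        print(kp['metadata']['name'], kp.get('spec', {}).get('vif'))
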
16:37 <ltomasbo> dulek, great!
16:37 <dulek> ltomasbo: Am I interrupting too much? One more question comes to mind.
16:38 <dulek> ltomasbo: Well, just ignore it if you're too busy. ;)
16:38 <ltomasbo> no! please ask! I'm fed up already with TripleO issues...
16:38 <dulek> :D
16:38 <ltomasbo> I prefer to switch context a bit!
16:38 <dulek> ltomasbo: What's the difference in Neutron port management when we're running in a nested configuration?
16:39 <ltomasbo> you mean interactions between kuryr-controller and neutron?
16:39 <dulek> ltomasbo: Because I see NestedVIFPoolDriver having a lot of additional code.
16:39 <dulek> ltomasbo: Yup. I guess some additional checks are needed when deleting the ports?
16:39 <dulek> Not sure about adding?
16:40 <ltomasbo> the difference is that when creating a pod on BM, you just need to create the port
16:40 <ltomasbo> but for the nested case
16:40 <ltomasbo> you need to create the port and attach it to the trunk port where the VM is
16:40 <ltomasbo> ahh, but you mean from the pool perspective?
16:40 <dulek> ltomasbo: Attach through the Neutron API?
16:41 <ltomasbo> yep
16:41 <ltomasbo> without pools, you need to create the port, select a vlan_id and attach the port to the trunk with that vlan_id
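
A rough sketch of that create-and-attach flow using python-neutronclient; the function, its arguments, and the port attributes chosen are placeholders rather than kuryr's actual driver code:

    # `neutron` is a neutronclient.v2_0.client.Client built from a
    # keystoneauth session elsewhere.
    def create_and_attach_subport(neutron, pod_network_id, trunk_id, vlan_id):
        """Create a pod port, then attach it to the VM's trunk as a subport."""
        port = neutron.create_port(
            {'port': {'network_id': pod_network_id,
                      'admin_state_up': True}})['port']
        neutron.trunk_add_subports(
            trunk_id,
            {'sub_ports': [{'port_id': port['id'],
                            'segmentation_type': 'vlan',
                            'segmentation_id': vlan_id}]})
        return port
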
16:42 <dulek> ltomasbo: Hm, okay. And I can get the vlan_id for the node from?
16:42 <ltomasbo> do you need the vlan id for anything?
16:43 <dulek> ltomasbo: Oh, sorry, I misunderstood.
16:43 <ltomasbo> and, regarding code length, note there are some extra functions on the nested pool to force repopulation and freeing of the pool
16:43 <dulek> ltomasbo: Anyway, how do I know the trunk port I should attach the pod port to?
16:43 <ltomasbo> and you need to get the trunk info in a different way to speed up the recovery of pre-created ports, avoiding neutron calls
16:44 <ltomasbo> from the kuryr-cni perspective, you are already inside the VM, right?
16:44 <ltomasbo> or do you mean the controller?
16:44 <dulek> ltomasbo: The controller.
16:45 <ltomasbo> if it is the controller, it is in the pod info, and then we get through neutron calls what trunk belongs to that host
16:45 <ltomasbo> there is a get_parent_port_by_host_ip function for that
16:46 *** yamamoto has joined #openstack-kuryr
16:46 <ltomasbo> at the vif driver
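
That lookup boils down to matching the node's IP against Neutron port fixed IPs and picking the port that carries trunk details. A sketch approximating it with python-neutronclient; the signature mirrors the function name mentioned above but is not the real implementation:

    def get_parent_port_by_host_ip(neutron, node_subnet_id, host_ip):
        """Find the trunk parent port whose fixed IP matches the node's IP."""
        fixed_ips = ['subnet_id=%s' % node_subnet_id,
                     'ip_address=%s' % host_ip]
        ports = neutron.list_ports(fixed_ips=fixed_ips)['ports']
        for port in ports:
            # Trunk parent ports expose their trunk in 'trunk_details'.
            if port.get('trunk_details'):
                return port
        raise RuntimeError('No trunk parent port found for %s' % host_ip)
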
16:46 <dulek> ltomasbo: Okay, so let's say I don't care about recovery in my PoolDriver.
16:47 <dulek> ltomasbo: Then I don't need the trunk info in there?
16:47 <dulek> ltomasbo: Note that I have all the os-vif objects saved into the K8s API.
16:48 <ltomasbo> kuryr-controller will be the one calling neutron to attach the port to that trunk, right?
16:48 <dulek> ltomasbo: Uhm… Yes, it will be creating the port.
16:49 <dulek> ltomasbo: But it will be going through the VIFDriver, right?
16:49 <ltomasbo> yep
16:49 <ltomasbo> we also use the trunk information when removing pods
16:50 <dulek> ltomasbo: Still, the VIFDriver will take care of that for me, won't it?
16:50 <ltomasbo> as that will put the port back into the pool or not, depending on the max size you set for the pool
16:50 <ltomasbo> not on the kuryr-cni
16:50 <ltomasbo> umm, wait
16:50 <ltomasbo> yep, if the kuryr-controller still does that (I need to review your patch)
16:52 *** garyloug has quit IRC
16:53 <celebdor> ltomasbo: dulek: I didn't read it all
16:53 <dulek> celebdor: Which one?
16:53 <celebdor> are you suggesting cleaning up pools when hosts are gone?
16:54 *** yamamoto has quit IRC
16:54 <dulek> celebdor: Ah. So I'm worried about dead pools that take up addresses.
16:54 <dulek> celebdor: Especially when running on VMs, as they're a bit more transient than BM.
16:54 <celebdor> dulek: this one http://s1.picswalls.com/wallpapers/2015/11/22/deadpool-photo_103702363_293.jpeg ?
16:54 <ltomasbo> we may need to delete pools once NetworkPolicy is there
16:55 <dulek> celebdor: Oh well. Then I'm even more worried.
16:55 <celebdor> well, whatever is being used to delete or create nodes
16:55 <celebdor> should clear the pools
16:55 <celebdor> otherwise the VM won't be destroyed xD
16:55 <celebdor> a trunk with subports can't be deleted
16:55 <celebdor> so in the openshift-ansible case, the scale down will have to handle it with shade
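
Such a scale-down would have to detach and delete the subports before the trunk (and the VM behind it) can go away. A sketch of that order of operations with python-neutronclient, since the exact shade calls aren't shown here:

    def teardown_trunk(neutron, trunk_id):
        """Detach and delete all subports so the trunk itself can be deleted."""
        trunk = neutron.show_trunk(trunk_id)['trunk']
        subports = trunk['sub_ports']
        if subports:
            neutron.trunk_remove_subports(
                trunk_id,
                {'sub_ports': [{'port_id': sp['port_id']}
                               for sp in subports]})
            for sp in subports:
                neutron.delete_port(sp['port_id'])
        neutron.delete_trunk(trunk_id)
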
16:56 <dulek> celebdor: Ehm… Doesn't this behavior suck?
16:57 <celebdor> dulek: which? the fact that deleting a trunk doesn't automatically remove its subports?
16:57 <celebdor> yes, that sucks
16:57 <celebdor> XD
16:57 <celebdor> big time
16:57 <celebdor> if you ask me
16:57 <dulek> celebdor: The fact that the admin will need to run this pool manager and free pools.
16:58 <dulek> celebdor: Was that the primary reason for the pool manager to exist?
16:58 <celebdor> dulek: no, no
16:59 <celebdor> the expectation is that if you want to add/remove worker nodes
16:59 <celebdor> you do it with a deployment tool
16:59 <celebdor> in openshift's case, that's openshift-ansible
16:59 <celebdor> dulek: if batch operations in Neutron were faster
17:00 <celebdor> I wouldn't mind deleting the ports and creating them automatically
17:00 <celebdor> but with how expensive and taxing to OpenStack it is
17:00 <celebdor> you kinda want the operator to know that's gonna happen
17:00 <celebdor> rather than have it happen by accident
17:02 <dulek> celebdor: Hm, okay, fine… Still, for the people running without a deployment tool… And in the BM case - dead vif pools are an issue.
17:02 <celebdor> dulek: I agree
17:03 <celebdor> and I would make it configurable
17:03 <celebdor> so that you can configure the controller to have a node watcher
17:03 <dulek> celebdor: The pool key is composite, it's not only nodes.
17:04 <celebdor> dulek: so?
17:04 <celebdor> when a node is deleted
17:04 <dulek> celebdor: So such dead pools will also get created when the SG is changed in the config.
17:04 <celebdor> you do pools.items()
17:04 <celebdor> oh, that...
17:04 <dulek> celebdor: Or, in the future, the subnet.
17:04 <celebdor> we don't have networkpolicy yet
17:04 <dulek> celebdor: project_id will be safe in that case. :D
17:05 <dulek> celebdor: Yeah, with NP it will be even worse.
17:05 <celebdor> in any case, let's take one thing at a time
17:05 <celebdor> if the node delete event comes
17:05 <dulek> celebdor: That's why ltomasbo came up with this idea of a TTL for pools that have no pods.
17:05 <celebdor> we can delete all the pools in which the host is in the key
17:06 <celebdor> I'd rather not
17:06 <celebdor> it defeats the purpose of pre-allocation
17:07 <dulek> celebdor: Yeah, the TTL-pool size ratio would be hard to set according to the deployment's usage characteristics.
17:15 <celebdor> exactly
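
celebdor's node-watcher idea amounts to filtering the pool dictionary by the deleted host. A sketch assuming composite pool keys of the form (host, project_id, security_groups), which matches the "composite key" dulek mentions but is only illustrative:

    def release_pools_for_node(pools, deleted_host):
        """Drop every pool whose composite key references the deleted node.

        `pools` maps (host, project_id, security_groups) -> list of port ids.
        Returns the port ids that should now be deleted in Neutron.
        """
        orphaned_ports = []
        for key in [k for k in pools if k[0] == deleted_host]:
            orphaned_ports.extend(pools.pop(key))
        return orphaned_ports
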
17:38 <openstackgerrit> Michał Dulko proposed openstack/kuryr-kubernetes master: WiP: Preview of CNI daemon side VIF choice  https://review.openstack.org/527243
17:42 *** maysamacedos has quit IRC
17:43 *** maysamacedos has joined #openstack-kuryr
18:05 *** yamamoto has joined #openstack-kuryr
18:24 *** gouthamr has quit IRC
18:29 *** reedip has quit IRC
18:37 *** gouthamr has joined #openstack-kuryr
18:41 *** reedip has joined #openstack-kuryr
19:15 *** celebdor has quit IRC
20:30 *** openstack has joined #openstack-kuryr
20:30 *** ChanServ sets mode: +o openstack
20:47 *** gouthamr has quit IRC
21:07 *** yamamoto has joined #openstack-kuryr
21:10 *** openstack has joined #openstack-kuryr
21:10 *** ChanServ sets mode: +o openstack
21:15 *** yamamoto has quit IRC
21:31 *** threestrands has joined #openstack-kuryr
21:45 *** gouthamr_ has joined #openstack-kuryr
22:04 *** gouthamr_ is now known as gouthamr
22:17 *** ChanServ has quit IRC
22:24 *** ChanServ has joined #openstack-kuryr
22:24 *** barjavel.freenode.net sets mode: +o ChanServ
22:28 *** gouthamr has quit IRC
22:28 *** ChanServ has quit IRC
22:31 *** ChanServ has joined #openstack-kuryr
22:31 *** barjavel.freenode.net sets mode: +o ChanServ
23:10 *** threestrands_ has joined #openstack-kuryr
23:13 *** threestrands has quit IRC
