Monday, 2017-01-23

00:05 <janonymous> limao: hey
00:47 *** limao has joined #openstack-kuryr
00:53 *** saneax is now known as saneax-_-|AFK
01:09 *** yedongcan has joined #openstack-kuryr
02:21 *** yedongcan has quit IRC
02:54 *** vikasc has joined #openstack-kuryr
03:11 *** yedongcan has joined #openstack-kuryr
03:40 *** limao has quit IRC
03:42 *** limao has joined #openstack-kuryr
03:47 *** limao has quit IRC
03:51 *** yedongcan1 has joined #openstack-kuryr
03:52 *** yedongcan has quit IRC
04:35 *** reedip has quit IRC
04:42 *** limao has joined #openstack-kuryr
04:43 <openstackgerrit> Ilya Chukhnakov proposed openstack/kuryr-kubernetes: Improve pipeline/Async logging  https://review.openstack.org/423903
04:44 *** v1k0d3n has quit IRC
04:44 *** v1k0d3n has joined #openstack-kuryr
04:47 *** limao has quit IRC
04:47 *** reedip has joined #openstack-kuryr
05:19 *** limao has joined #openstack-kuryr
05:23 *** limao has quit IRC
05:25 *** limao has joined #openstack-kuryr
05:28 *** hongbin has quit IRC
05:33 <openstackgerrit> Ilya Chukhnakov proposed openstack/kuryr-kubernetes: OVO model for K8s Services support  https://review.openstack.org/423908
05:56 *** v1k0d3n has quit IRC
05:58 *** reedip has quit IRC
06:02 *** v1k0d3n has joined #openstack-kuryr
06:10 *** reedip has joined #openstack-kuryr
06:19 *** saneax-_-|AFK is now known as saneax
06:26 *** janki has joined #openstack-kuryr
06:58 *** gsagie has joined #openstack-kuryr
08:10 *** yamamoto has quit IRC
08:17 *** pcaruana has joined #openstack-kuryr
08:23 <openstackgerrit> Jaivish Kothari(janonymous) proposed openstack/kuryr-libnetwork: Tls support configurations  https://review.openstack.org/410609
08:28 *** yedongcan1 has quit IRC
08:29 *** yedongcan has joined #openstack-kuryr
08:46 *** yamamoto has joined #openstack-kuryr
08:54 <openstackgerrit> Anh Tran proposed openstack/kuryr-libnetwork: Typo fix: happend => happened  https://review.openstack.org/423995
09:12 *** limao has quit IRC
09:16 *** limao has joined #openstack-kuryr
09:21 *** yamamoto has quit IRC
09:21 *** limao has quit IRC
09:25 <openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr: Add randomness to the returned vlan_ids  https://review.openstack.org/422641
09:40 *** yamamoto has joined #openstack-kuryr
09:58 *** devvesa has joined #openstack-kuryr
10:07 *** yedongcan has left #openstack-kuryr
10:13 *** apuimedo is now known as apuimedo|flu
10:14 *** irenab has left #openstack-kuryr
10:15 *** irenab has joined #openstack-kuryr
10:24 <ltomasbo> apuimedo|flu, irenab, vikasc, all: I've been discussing the device_owner usage with armax
10:24 <ltomasbo> https://review.openstack.org/#/c/419028/
10:24 <ltomasbo> he suggested we use tagging instead of the device_owner
10:24 <irenab> ltomasbo: has tagging on resources other than networks been merged?
10:25 <ltomasbo> as it seems there is no consensus about how this field should be used, and it could lead to problems (like the one we are hitting)
10:25 <ltomasbo> not sure, I don't think so: https://review.openstack.org/#/c/419028/
10:26 <ltomasbo> I checked and we only use device_owner once in kuryr-libnetwork
10:26 <irenab> just saw it, https://review.openstack.org/#/c/413662
10:26 <irenab> no tags to use yet, but it can be a way to go
10:26 <ltomasbo> just to filter out some devices before removing the port when deleting the container
10:28 <irenab> ltomasbo: the meaning of the device_owner field on a neutron port is really not clear. Tags will be better, since they are meant for external management usage
10:29 <ltomasbo> yep, I think it will be a safer choice
10:29 <ltomasbo> and I just checked, we only use device_owner once in kuryr-libnetwork
10:29 <ltomasbo> and not at all in kuryr-kubernetes
10:30 <ltomasbo> so, maybe, until tags are merged in neutron, I can propose a fix to kuryr-libnetwork, and we can also work in parallel to set tags instead of device_owner
10:30 <irenab> ltomasbo: this is good. I think we can set it, but definitely not count on it for some operations later. Seems there is no contract on keeping the provided value on create
10:31 *** neiljerram has joined #openstack-kuryr
10:31 <ltomasbo> you mean for tagging?
10:32 <irenab> ltomasbo: the device_owner field. Tags are meant to be set by an entity external to neutron, so the contract is that neutron won't change them; actually they are not even supposed to be passed to the back-ends
10:33 <ltomasbo> so, even more in favor of using tags instead of device_owner...
10:33 <irenab> yes
10:33 <ltomasbo> great! thanks!
10:33 <irenab> we just need to rely on the patch https://review.openstack.org/#/c/413662
10:36 <ltomasbo> yep, I will reply to armax, and work on that
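[editor's note] The tag-based filtering discussed above (replacing the device_owner check with Neutron resource tags, once the tags API from review 413662 lands) could look roughly like this sketch. The tag value and helper names are illustrative assumptions, not actual kuryr-libnetwork code:

```python
# Hypothetical sketch of tag-based port filtering. KURYR_TAG, tag_url and
# kuryr_ports are illustrative names, not real kuryr-libnetwork code.
KURYR_TAG = "kuryr.port"  # assumed tag value marking kuryr-managed ports

def tag_url(base_url, port_id, tag):
    """Build the Neutron tags-API URL; an empty-body PUT here adds the tag."""
    return "%s/v2.0/ports/%s/tags/%s" % (base_url, port_id, tag)

def kuryr_ports(ports):
    """Select only the ports carrying the kuryr tag, ignoring device_owner."""
    return [p for p in ports if KURYR_TAG in p.get("tags", [])]
```

Unlike device_owner, a tag set this way is opaque to Neutron and not passed to the back-ends, which is the contract irenab describes.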
10:43 *** jchhatbar has joined #openstack-kuryr
10:45 *** janki has quit IRC
10:59 *** portdirect_away is now known as portdirect
11:27 <openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-libnetwork: Ensure subports are deleted after container deletion  https://review.openstack.org/424067
11:34 *** yamamoto has quit IRC
11:48 <vikasc> irenab, ltomasbo, I want your opinion on handling one scenario
11:48 <irenab> vikasc: go ahead
11:49 <vikasc> in the nested-pods case, if the MTU of the VM interface is less than what the binding code has fetched from the neutron port and tries to set on the container interface, it fails
11:50 <vikasc> I was just playing around. My VM has a 1400 MTU and the neutron port has 1450.. so when the binding tries to set 1450 on the container port and do iface.up(), it fails
11:51 <vikasc> irenab, ltomasbo ^
11:51 <ltomasbo> umm
11:51 <vikasc> I verified that if the neutron_port MTU <= VM interface MTU, there is no problem
11:53 <ltomasbo> and what is the MTU of the server nic?
11:53 <irenab> vikasc: the trunk port mtu < subport mtu?
11:54 <ltomasbo> I guess the subport is using the same vNIC as the VM, so the MTU should be the same
11:54 <vikasc> ltomasbo, server means the vm? it's 1400
11:54 <vikasc> irenab, yes, this is the error case
11:55 <vikasc> ltomasbo, it should ideally be the same, but sometimes VMs are not able to work with the neutron-advertised MTU and the user may need to lower it
11:55 <irenab> vikasc: sounds like the subport MTU should be less than the trunk's
11:56 <vikasc> irenab, would it be reasonable to add a check in the binding to ensure this?
11:56 <irenab> vikasc: on the kuryr side?
11:56 <vikasc> irenab, yes
11:56 <ltomasbo> shouldn't this be ensured by the trunk port functionality?
11:56 <irenab> it maybe should go by the lowest
11:57 <irenab> ltomasbo: good question
11:57 <vikasc> irenab, yes.. this check..
11:57 <ltomasbo> maybe it is, unless you specify a different one...
11:57 <ltomasbo> going to check
11:57 <irenab> vikasc: maybe not even for the check's sake, but for getting the MTU set properly
11:58 <vikasc> ltomasbo, this problem will only occur if the user has manually lowered the VM interface MTU
11:59 <irenab> so this is something that cannot be checked on the API request handling
11:59 <vikasc> irenab, I do not think it can be checked. I gave you my example
11:59 <irenab> vikasc: the option of figuring out the MTU on the kuryr side looks reasonable to me
12:00 <ltomasbo> https://github.com/openstack/neutron/blob/master/neutron/services/trunk/rules.py#L172:L191
12:00 <vikasc> irenab, ltomasbo, I prepared a setup and neutron set 1450 as the MTU on the VM
12:01 <irenab> vikasc: so it retrieves the subport network MTU, correct?
12:01 <ltomasbo> are you using vxlan?
12:01 <ltomasbo> it is good to have 1450 to allow for the vxlan headers
12:01 <vikasc> irenab, yes
12:01 <vikasc> ltomasbo, yes
12:02 <irenab> vikasc: in the method below, there is a validation for the case we discuss
12:02 <irenab> https://github.com/openstack/neutron/blob/master/neutron/services/trunk/rules.py#L196
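[editor's note] The neutron trunk validation linked above boils down to rejecting subports whose network MTU exceeds the trunk port's network MTU. A simplified, hypothetical restatement (not the actual neutron code):

```python
def subport_mtu_ok(trunk_net_mtu, subport_net_mtu):
    """Accept a subport only if its network MTU does not exceed the trunk
    network's MTU; skip the check when either MTU is unknown."""
    if trunk_net_mtu is None or subport_net_mtu is None:
        return True
    return subport_net_mtu <= trunk_net_mtu
```

Note that this runs at the Neutron API layer, so it cannot account for an MTU the user lowered manually inside the VM.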
12:02 <vikasc> ltomasbo, but for my VM, 1450 was not working, I had to lower it
12:03 <vikasc> ltomasbo, irenab, an ssl connection to a docker repo was hanging
12:03 <ltomasbo> ummm, weird
12:03 <vikasc> ltomasbo, irenab, I struggled a lot to find out that if I lowered the MTU of the VM, it started working
12:04 <irenab> vikasc: interesting
12:04 <ltomasbo> but for the parent port?
12:04 <vikasc> ltomasbo, from inside the vm
12:04 <ltomasbo> that seems like a bug somewhere
12:04 <vikasc> only for the vm port I had to lower the mtu to get the connection to the docker registry working
12:04 <ltomasbo> I've never had problems with the MTU for the trunk ports
12:05 <ltomasbo> only with the docker registry?
12:05 <ltomasbo> the other connectivity was working well?
12:05 <vikasc> ltomasbo, irenab, not sure, but my understanding is that it depends on whether there is a network switch in the path that has a lower MTU
12:05 <vikasc> ltomasbo, yes
12:06 <vikasc> ltomasbo, so the problem may depend on the destination, from the same source (my vm)
12:06 <vikasc> ltomasbo, I can show you over bj :)
12:06 <vikasc> ltomasbo, right now, only if you are interested
12:07 <vikasc> irenab, I will add a check then to handle such a problem
12:07 <vikasc> irenab, I was struggling for the last 2-3 hours to figure out why the binding code was not working :)
12:09 <irenab> vikasc: I agree with your direction to make the binding code more robust
12:09 <vikasc> irenab, the error that pyroute2 was throwing said nothing about MTU.. it was saying "index out of range" :)
12:09 <irenab> vikasc: sounds like a bug for pyroute2 :-)
12:10 *** jchhatbar_ has joined #openstack-kuryr
12:10 *** jchhatbar_ is now known as janki
12:10 <vikasc> irenab, I will raise it on pyroute2 and add it in the commit message for reference
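[editor's note] The binding-side check vikasc describes amounts to clamping the MTU applied to the container interface to the lowest of the neutron port's MTU and the VM parent interface's (possibly manually lowered) MTU. A minimal sketch, assuming both values have already been fetched; the function name is hypothetical:

```python
def effective_mtu(neutron_port_mtu, vm_iface_mtu):
    """Never set a container-interface MTU larger than the VM interface
    can carry; go by the lower of the two values."""
    return min(neutron_port_mtu, vm_iface_mtu)
```

In the binding code this value would then be applied to the container interface (e.g. via pyroute2's link "set" call with an mtu argument) before bringing it up, avoiding the failure on iface.up().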
12:13 *** jchhatbar has quit IRC
12:14 <ltomasbo> vikasc, I cannot connect via bj right now, but I may ping you later or tomorrow
12:15 <ltomasbo> btw, I managed to create nested containers with your patch after moving to a slightly bigger VM
12:15 <vikasc> ltomasbo, sure.. anytime :)
12:15 <ltomasbo> in the end it was a cpu + timeout problem, not just memory
12:15 <vikasc> oh
12:16 <vikasc> and was it killing the handler?
12:16 <vikasc> or was the handler timing out, starved of cpu?
12:18 <ltomasbo> time out
12:19 <ltomasbo> it took more than 20 minutes to boot up the container the first time
12:19 <irenab> ltomasbo: vikasc: we probably need to add a recommended flavor and image in the readme section before we have fullstack tests
12:19 <ltomasbo> so I moved to another deployment with a bit more memory and then it worked
12:19 <irenab> ltomasbo: I had similar issues
12:19 <ltomasbo> irenab, yes, definitely agreed
12:20 <ltomasbo> the 'overcloud' should have at least 4 GB and 2 vcpus
12:20 <vikasc> true
12:21 <vikasc> ltomasbo, irenab, shall I add this detail to the readme?
12:21 <irenab> I wonder if we could have a small image just for the kuryr overcloud and not an entire fedora/ubuntu
12:21 <irenab> vikasc: yes, definitely
12:23 <vikasc> irenab, but then k8s is also needed
12:23 <vikasc> irenab, for an all-in-one setup
12:23 <ltomasbo> maybe it could be useful to have kuryr-controller in one VM
12:23 <ltomasbo> and kuryr-cni in the worker node
12:24 <ltomasbo> perhaps we can then reduce the VM requirements (although 2 will be needed, but that is closer to real deployments)
12:24 <vikasc> ltomasbo, but our readme instructions are for a single node only
12:24 <ltomasbo> vikasc, yep, you are right, let's keep it simple
12:25 <vikasc> ltomasbo, maybe we can add 4GB and 2 vcpus for the all-in-one kind
12:25 <vikasc> and then the user can estimate accordingly if he goes for multinode
12:25 <ltomasbo> that is what I needed for the overcloud VM
12:25 <vikasc> anyway, starvation will be less on multinode due to load distribution
12:26 <ltomasbo> yes vikasc, keep it simple, maybe just mention that it is for single-node, and that it could be split across different VMs if needed
12:26 <vikasc> ltomasbo, makes sense!!
12:27 <ltomasbo> irenab, vikasc: btw, can you look at these two: https://review.openstack.org/#/c/424067/ https://review.openstack.org/#/c/422641/
12:28 <vikasc> ltomasbo, asap
12:28 <ltomasbo> thanks!
12:34 *** yamamoto has joined #openstack-kuryr
12:48 <openstackgerrit> vikas choudhary proposed openstack/kuryr-kubernetes: Update ReadMe for nested-pods setup resource requirements  https://review.openstack.org/424093
12:53 *** saneax is now known as saneax-_-|AFK
13:07 *** dougbtv has joined #openstack-kuryr
13:14 *** yamamoto has quit IRC
13:15 <openstackgerrit> vikas choudhary proposed openstack/kuryr-kubernetes: Handle mtu on kuryr side  https://review.openstack.org/424104
13:15 <leifmadsen> o/
13:28 *** v1k0d3n has quit IRC
13:29 *** v1k0d3n has joined #openstack-kuryr
13:48 *** saneax-_-|AFK is now known as saneax
13:58 *** limao has joined #openstack-kuryr
13:59 *** garyloug has joined #openstack-kuryr
14:05 *** yedongcan_ has joined #openstack-kuryr
14:10 <openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr: Add randomness to the returned vlan_ids  https://review.openstack.org/422641
14:20 *** hongbin has joined #openstack-kuryr
14:22 *** yedongcan__ has joined #openstack-kuryr
14:23 *** v1k0d3n has quit IRC
14:23 *** yedongcan_ has quit IRC
14:24 *** v1k0d3n has joined #openstack-kuryr
14:24 *** yedongcan__ has quit IRC
14:33 *** limao_ has joined #openstack-kuryr
14:35 *** limao has quit IRC
14:37 *** gsagie has quit IRC
14:39 *** hongbin has quit IRC
14:40 *** hongbin has joined #openstack-kuryr
14:48 *** limao has joined #openstack-kuryr
14:49 *** limao_ has quit IRC
14:49 *** yedongcan__ has joined #openstack-kuryr
14:53 *** yedongcan__ has quit IRC
15:00 *** hongbin has quit IRC
15:00 *** hongbin_ has joined #openstack-kuryr
15:08 *** hongbin_ has quit IRC
15:12 *** saneax is now known as saneax-_-|AFK
15:32 <openstackgerrit> Darla Ahlert proposed openstack/kuryr-libnetwork: Add reno support to kuryr-libnetwork  https://review.openstack.org/424198
15:33 *** janki has quit IRC
15:35 <openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr: Add randomness to the returned vlan_ids  https://review.openstack.org/422641
15:56 *** hongbin has joined #openstack-kuryr
15:59 *** limao has quit IRC
16:07 *** reedip has quit IRC
16:13 *** pcaruana has quit IRC
16:21 *** reedip has joined #openstack-kuryr
16:34 <mchiappero> ltomasbo: have you run any benchmarks on nested containers with VLAN segmentation?
16:36 <ltomasbo> mchiappero, I just created some containers and tested connectivity
16:36 <ltomasbo> for both kuryr-libnetwork and kuryr-kubernetes
16:36 <mchiappero> no, ok, I was just curious :)
16:36 <mchiappero> do you plan to run some?
16:37 <ltomasbo> anyway, I just tested on some devstack VM, so it would not be a fair benchmark...
16:37 <ltomasbo> maybe we can do so, yes. Do you have any specific benchmark in mind?
16:38 <mchiappero> uhm no, actually, but I was interested in comparing VLAN, macvlan and ipvlan at some point
16:38 <mchiappero> maybe I'll consider allocating some time in the future if no one else will
16:39 <mchiappero> well, it's more curiosity than a real interest
16:39 <mchiappero> but it could be useful
16:40 <ltomasbo> were the ipvlan/macvlan port drivers merged?
16:41 <mchiappero> on libnetwork only, we plan to start on k-k8s this week
16:41 <ltomasbo> I meant for kubernetes
16:41 <ltomasbo> great!
16:41 <mchiappero> yeah, sorry, we've been stuck on other tasks and had little time for kuryr lately
16:42 <ltomasbo> no problem, just curiosity!
16:42 <mchiappero> same here :)
16:42 <ltomasbo> it would be really nice to try to use it with dpdk too
16:43 <mchiappero> yes, we are considering it
16:44 <ltomasbo> do you think it can work right away? making the VM running the containers use DPDK? Or will it be more complex than that?
16:46 <mchiappero> running DPDK applications in containers is not a major issue, although for production use we still need a few bits that will likely be available in the coming weeks
16:46 <mchiappero> integrating with COEs can be, though, as it's a very different concept of networking
16:47 <mchiappero> some orchestration engines only consider one (kernel-backed) interface per container
16:48 <mchiappero> I think I'll bring the topic up once ready
16:57 <ltomasbo> ok, thanks mchiappero
16:57 *** david-lyle_ has joined #openstack-kuryr
17:03 *** david-lyle_ has quit IRC
17:11 <mchiappero> ltomasbo: you're welcome!
17:28 *** devvesa has quit IRC
17:56 *** david-lyle_ has joined #openstack-kuryr
17:56 *** david-lyle_ has quit IRC
17:57 *** david-lyle_ has joined #openstack-kuryr
18:05 *** david-lyle_ is now known as david-lyle
18:08 *** gsagie has joined #openstack-kuryr
18:53 *** garyloug has quit IRC
19:28 *** gsagie has quit IRC
20:22 *** dougbtv_ has joined #openstack-kuryr
20:24 *** dougbtv has quit IRC
21:09 *** yuvalb has quit IRC
21:10 *** oanson has quit IRC
21:10 *** irenab has quit IRC
21:20 *** yamamoto has joined #openstack-kuryr
21:29 *** yuvalb has joined #openstack-kuryr
21:30 *** oanson has joined #openstack-kuryr
21:30 *** irenab has joined #openstack-kuryr
21:43 *** yamamoto has quit IRC
21:56 *** oanson has quit IRC
21:57 *** oanson has joined #openstack-kuryr
22:23 *** yamamoto has joined #openstack-kuryr
22:24 *** oanson has quit IRC
22:25 *** oanson has joined #openstack-kuryr
22:46 *** limao has joined #openstack-kuryr
22:52 *** saneax-_-|AFK is now known as saneax
22:52 *** limao has quit IRC
22:52 *** limao has joined #openstack-kuryr

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!