Wednesday, 2016-11-16

08:41 <apuimedo> mchiappero: hi
09:09 <apuimedo> vikasc: I think we can take https://review.openstack.org/#/c/397201/1 in
09:10 <vikasc> apuimedo, sure Toni, let me take a look.
09:11 <apuimedo> cool
09:12 <mchiappero> hi apuimedo!
09:12 <apuimedo> limao: mchiappero: it really looks like we need to cut a release of kuryr-lib
09:12 <apuimedo> :-)
09:12 <mchiappero> need to have a quick chat with you, but I have a meeting in 15 mins
09:12 <apuimedo> I'll diff 0.1.0..master to come up with the next version
09:12 <mchiappero> will you be around at 11?
09:13 <apuimedo> mchiappero: at 11:10 probably
09:13 <mchiappero> np
09:13 <mchiappero> well, if you are available now I can actually talk
09:14 <limao> hello, mchiappero, apuimedo
09:14 <mchiappero> hi limao
09:14 <apuimedo> better 11:10, I'm expecting a repairman right now
09:15 <mchiappero> better for me too anyway :)
09:28 <mchiappero> BTW, a guy from Docker, I guess, told me that they have been busy with the new release and that they will reply soon to my issue (https://github.com/docker/libnetwork/issues/1520)
10:02 <vikasc> apuimedo, WDYT on this https://review.openstack.org/#/c/397057/2
10:04 <vikasc> apuimedo, why is this line required in localrc for enabling q-qos: enable_plugin neutron https://git.openstack.org/openstack/neutron
10:05 <vikasc> apuimedo, I tried without it and it still seems to enable qos
10:23 <apuimedo> mchiappero: cool
10:23 <apuimedo> vikasc: not sure, I assumed they must have some newer code in the plugin, but if it works without it, -1 and get limao to verify
10:24 <vikasc> apuimedo, cool, let me confirm with limao then
10:25 <vikasc> apuimedo, I would love to have your opinion on https://review.openstack.org/#/c/361993/11 before I start working on unit tests
10:25 <apuimedo> ok. Will check, thanks
10:27 <mchiappero> one question: will the trunk approach have the same behaviour as the IPVLAN one for the nested case?
10:27 <apuimedo> mchiappero: in which sense?
10:28 <mchiappero> is the code path the same, with the same need for updated allowed address pairs?
10:28 <vikasc> apuimedo, and this one also please https://review.openstack.org/#/c/397057/2 :)
10:29 <mchiappero> I'm reasoning a bit about the driver model in kuryr-libnetwork, and I'm trying to understand whether there are commonalities across macvlan/ipvlan and trunk
10:29 <vikasc> mchiappero, nope
10:29 <mchiappero> ok
10:29 <vikasc> mchiappero, allowed-address-pairs won't be required in the trunk port case
10:29 <apuimedo> mchiappero: there are actually two different ways to do the trunk mode
10:29 <apuimedo> so two trunk modes
10:30 <apuimedo> a) one VLAN per network -> needs to touch allowed address pairs
10:30 <mchiappero> so, in the end the plugins should be bridge (bare metal), nested ipvlan/macvlan, nested trunk
10:30 <vikasc> mchiappero, I can see commonality in the binding part
10:30 <apuimedo> b) one VLAN per container -> no need to touch allowed address pairs
10:30 <mchiappero> oooh ok, got it
10:30 <vikasc> apuimedo, hmm
10:30 <apuimedo> veth, ipvlan/macvlan, trunknet, trunkcont
10:31 <vikasc> apuimedo, I was thinking of the 'b' approach
10:31 <mchiappero> so, the a) case is in theory more flexible than ipvlan
10:32 <mchiappero> but how do we want to handle the fact that ipvlan could work both on bare metal and nested?
10:32 <vikasc> apuimedo, how do you compare 'a' and 'b'?
10:33 <apuimedo> vikasc: in (a) you can have many more containers per VM
10:33 <apuimedo> and communication inside a network does not go all the way down to the host
10:33 <mchiappero> but a single tenant per VM, right?
10:33 <apuimedo> mchiappero: right
10:33 <apuimedo> brb
10:34 <mchiappero> ok
10:34 <mchiappero> but I think I'm still not too clear on the approach you had in mind, so let's continue whenever you can :)
10:35 <vikasc> mchiappero, can you say a bit more on "single tenant per VM"
10:35 <vikasc> mchiappero, please
10:36 <mchiappero> so, my understanding is that internally every container will be part of the same network, like in the IPVLAN/MACVLAN case
10:36 <vikasc> and ..
10:36 <mchiappero> so basically it's a single admin domain, a single tenant
10:37 <vikasc> are you talking about 'a'?
10:37 <mchiappero> while in the b) case the VM will transparently let the containers join separate networks
10:37 <mchiappero> yes, sorry
10:37 <mchiappero> let's wait for apuimedo to confirm though :)
10:38 <vikasc> mchiappero, sure :)
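A rough way to see apuimedo's point that mode (a) allows many more containers per VM: a single trunk only has 4094 usable 802.1Q VLAN IDs, and (a) consumes one subport per Neutron network while (b) consumes one subport per container. A hypothetical sketch (illustrative helper, not kuryr code):

```python
# Hypothetical helper contrasting the two trunk modes discussed above.
# Mode (a): one VLAN subport per Neutron network, shared by all containers
#           on that network (their IPs go into allowed-address-pairs).
# Mode (b): one VLAN subport per container; no allowed-address-pairs needed.

VLAN_ID_SPACE = 4094  # usable 802.1Q VLAN IDs on a single trunk

def subports_needed(containers, mode):
    """containers: list of (container_name, network_name) tuples."""
    if mode == "a":
        return len({net for _, net in containers})
    if mode == "b":
        return len(containers)
    raise ValueError("mode must be 'a' or 'b'")

# 6000 containers spread over just 3 networks:
containers = [("c%d" % i, "net%d" % (i % 3)) for i in range(6000)]
assert subports_needed(containers, "a") == 3
# Mode (b) would need one VLAN per container, exceeding the trunk's ID space:
assert subports_needed(containers, "b") > VLAN_ID_SPACE
```

The trade-off discussed below is that (a) buys this density at the cost of a single tenant per VM and the allowed-address-pairs bookkeeping.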
10:40 <mchiappero> I'm trying to understand whether there are commonalities, so perhaps a BM plugin, a nested one (ipvlan, macvlan, trunk case a?), and maybe another for special cases (e.g. trunk case b)
10:43 <vikasc> mchiappero, do we have an etherpad or shared doc for discussion on this?
10:44 <mchiappero> uhm... not sure, I guess so
10:45 <apuimedo> I do not think we do
10:45 <vikasc> mchiappero, would it be productive to start one?
10:45 <apuimedo> we usually use the mailing list for these things
10:45 <mchiappero> ok, even though it's faster to have a chat here, if that's ok with you
10:46 <mchiappero> what was the approach you had in mind, apuimedo?
10:46 <apuimedo> in the b) case IIRC, and ltomasbo can correct me if I'm wrong, unless the actions are performed by the admin, you can only create subports for trunk ports of your own tenant
10:46 <vikasc> apuimedo, I am still not getting why --allowed-address-pair is needed in (a).. because the trunk port will pass the tagged traffic through
10:46 <apuimedo> vikasc: are you sure about that?
10:46 <vikasc> apuimedo, not sure..
10:47 <apuimedo> because I thought that the untagging OVS bridge, with patch ports to br-int,
10:47 <apuimedo> just gets the traffic to br-int, where the anti-IP-spoofing rules could still apply
10:47 <ltomasbo> hi, let me read what the discussion is about
10:48 <apuimedo> ltomasbo: vikasc was wondering if it is necessary to touch allowed address pairs when you use one subport per container network
10:48 <ltomasbo> I think so, that would be pretty much the ipvlan case
10:48 <ltomasbo> as you need to create a subport for the trunk
10:49 <ltomasbo> and then a port (with an IP) which will be used for each container
10:49 <ltomasbo> i.e., one extra port per container
10:50 <ltomasbo> this is for the 1-VLAN-per-network case, with many containers sharing the same subport
10:50 <apuimedo> mchiappero: vikasc: implicit in that is that you get a VLAN device in the VM and then you need to put an ipvlan device from it into each container
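The Neutron-side steps ltomasbo describes for mode (a), one extra port per container plus an allowed-address-pairs entry on the shared subport, might look roughly like this. A hedged sketch that only assembles `openstack` CLI strings; all resource names are made up, and this is my reading of the discussion, not a confirmed kuryr-lib code path:

```python
# Hypothetical sketch of the per-container Neutron work in trunk mode (a):
# a port reserves the container's IP/MAC, and the container IP is added to
# the shared subport's allowed-address-pairs so anti-spoofing rules on
# br-int let its traffic through. Names below are illustrative only.

def mode_a_commands(subport, container_port, network, container_ip):
    return [
        # Port that reserves the container's IP in Neutron:
        "openstack port create --network {} --fixed-ip ip-address={} {}".format(
            network, container_ip, container_port),
        # Allow the shared subport to send/receive with that address:
        "openstack port set --allowed-address ip-address={} {}".format(
            container_ip, subport),
    ]

for cmd in mode_a_commands("subport-net1", "port-c1", "net1", "10.0.0.5"):
    print(cmd)
```

ltomasbo's check further down (connectivity breaks without the allowed-address entry) is what motivates the second command.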
10:51 <apuimedo> ltomasbo: did you already try to make ipvlan linked to a vlan device?
10:51 <ltomasbo> didn't spend that much time on it lately, but I tried manually on top of lmdaly's ipvlan implementation
10:52 <ltomasbo> just by changing the iface name from eth0 to eth0.101
10:52 <ltomasbo> and that already works
10:52 <ltomasbo> need to make that parametrizable though
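ltomasbo's manual experiment, hanging the ipvlan device off a VLAN subinterface (eth0.101) instead of eth0, could be parametrized roughly as below. The sketch only builds the iproute2 commands; the device and namespace names are assumed, and the ipvlan mode (l2 here) is a guess rather than anything stated in the channel:

```python
# Hypothetical parametrization of the eth0 -> eth0.101 change described
# above: create a VLAN subinterface matching the trunk subport's
# segmentation ID, then an ipvlan slave on top of it, and move the slave
# into the container's network namespace. All names are illustrative.

def vlan_ipvlan_commands(parent="eth0", vlan_id=101, container_if="ipvl0",
                         netns="container1"):
    vlan_dev = "{}.{}".format(parent, vlan_id)
    return [
        "ip link add link {} name {} type vlan id {}".format(
            parent, vlan_dev, vlan_id),
        "ip link add link {} name {} type ipvlan mode l2".format(
            vlan_dev, container_if),
        "ip link set {} netns {}".format(container_if, netns),
    ]

for cmd in vlan_ipvlan_commands():
    print(cmd)
```

Making `parent` and `vlan_id` arguments is exactly the "parametrizable" part: the same helper then covers plain nested ipvlan (`parent="eth0"`, no VLAN step) and the trunk-subport variant.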
10:52 <vikasc> apuimedo, because the network of the VM will be different from the network of the containers in case (a), anti-IP-spoofing rules will not impact container traffic and we might not require allowed address pairs.. that's how I was thinking
10:53 <vikasc> apuimedo, will give it a try
10:53 <apuimedo> vikasc: don't you think it will have an impact if there are containers on the same network in different VMs?
10:53 <ltomasbo> case a was 1 VLAN per network, right?
10:53 <vikasc> ltomasbo, yes
10:53 <ltomasbo> I can remove the allowed address pair and see if the VM keeps connectivity
10:54 <ltomasbo> I already have a devstack with that working
10:54 <vikasc> ltomasbo, great
10:54 <vikasc> apuimedo, thinking about your point
10:55 <mchiappero> mmm another question I have is: shouldn't kuryr-libnetwork be in charge of picking the right kuryr-lib driver?
10:56 <vikasc> apuimedo, I might be missing something. Will explore more at my end.
10:56 <mchiappero> I mean, reading the configuration and asking kuryr-lib for the right driver, by whatever means, e.g. a factory
10:56 <apuimedo> mchiappero: you mean so we don't need two configuration settings?
10:57 <apuimedo> my idea was that if some kuryr-libnetwork driver can only work with one kuryr-lib driver, it will just use it, without checking the kuryr-lib conf
10:58 <mchiappero> to avoid having a config option for the kuryr-libnetwork side and another for kuryr-lib, and to avoid any coherency issues
10:58 <mchiappero> I'm not sure I got what you mean
11:00 <apuimedo> let me rephrase
11:00 <mchiappero> so, say you want veths on BM; libnetwork will know from the config file and ask kuryr-lib to get such a driver, and will take the code path that skips the allowed-address-pairs and so on
11:00 <apuimedo> there is a kuryr-lib driver setting in kuryr.conf
11:00 <apuimedo> there is also a kuryr-libnetwork driver setting in kuryr.conf
11:01 <mchiappero> exactly, I'm wondering whether this is a good approach
11:01 <apuimedo> my thought was that some selections in kuryr-libnetwork make the kuryr-lib setting unnecessary
11:01 <mchiappero> or is it anyway the approach we want
11:01 <apuimedo> but for some cases, you may need a choice
11:01 <ltomasbo> it does not work without the allowed address pair
11:01 <apuimedo> ltomasbo: thanks for checking!
11:01 <vikasc> thanks ltomasbo and apuimedo
11:02 <ltomasbo> I need to catch a flight now, so leaving in 10 minutes
11:02 <mchiappero> sure, but what I'm asking is: shouldn't all the config options be read by k-libnetwork?
11:02 <mchiappero> ltomasbo: bye! :)
11:02 <ltomasbo> apuimedo: I found a workaround for the QoS + trunk ports issue that could be interesting for kuryr
11:02 <ltomasbo> https://bugs.launchpad.net/neutron/+bug/1639186
11:02 <openstack> Launchpad bug 1639186 in neutron "qos max bandwidth rules not working for neutron trunk ports" [Low,Confirmed] - Assigned to Luis Tomas Bolivar (ltomasbo)
11:02 <ltomasbo> it is not the right solution
11:03 <ltomasbo> but at least it may allow us to try QoS while waiting for a more decent solution (new OVS functionality)
11:04 <apuimedo> mchiappero: they will all be read by k-libnetwork, we only have a single kuryr.conf
11:04 <mchiappero> ok, if I'm not wrong, currently kuryr-lib parses its own options
11:05 <apuimedo> mchiappero: it does. I cheated a bit on the definition
11:05 <apuimedo> what I meant is that the kuryr-libnetwork process includes kuryr-lib
11:05 <apuimedo> and they all read their part of kuryr.conf
11:05 <mchiappero> yes I know
11:06 <mchiappero> but still, I don't understand how you plan to guarantee the driver-to-driver coherency
11:06 <mchiappero> :)
11:06 <apuimedo> well, I planned to have kuryr-libnetwork manage that
11:06 <apuimedo> by whitelisting, in the driver modules,
11:06 <apuimedo> which kuryr-lib drivers each can work with
11:06 <mchiappero> ok, so that's what I was thinking about, but kuryr-lib shouldn't create any driver by itself
11:07 <apuimedo> mchiappero: what do you mean by 'create'?
11:07 <mchiappero> apuimedo: how would you implement that?
11:07 <mchiappero> sorry, s/create/load/g
11:07 <mchiappero> probably this wording was confusing
11:08 <apuimedo> well
11:08 <apuimedo> I wanted to take a usability approach
11:08 <apuimedo> let's see if I can explain
11:08 <apuimedo> so each driver would have
11:08 <apuimedo> an ALLOWED_BINDING_DRIVERS tuple
11:08 <apuimedo> like for example
11:08 <apuimedo> ('ipvlan', 'veth')
11:09 <apuimedo> unless the kuryr-lib binding driver config option specifies a full path to a binding driver, only ipvlan and veth can be used in this example
11:09 <apuimedo> and the setting can be filled with 'ipvlan' or 'veth', no need for the whole path when you select an allowed one
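apuimedo's whitelist scheme, a per-driver ALLOWED_BINDING_DRIVERS tuple plus a kuryr.conf value that is either a short whitelisted alias or a full path to an external driver, could be sketched as below. The module paths and helper name are invented for illustration; this is not actual kuryr code:

```python
# Hypothetical sketch of the whitelist scheme described above: the
# kuryr-libnetwork driver declares which kuryr-lib binding drivers it can
# work with, and the config value is resolved against that list unless the
# operator supplies a full dotted path to an out-of-tree driver.

ALLOWED_BINDING_DRIVERS = ('ipvlan', 'veth')

# Illustrative alias -> module-path map; the real paths would differ.
_KNOWN_PATHS = {
    'ipvlan': 'kuryr.lib.binding.drivers.ipvlan',
    'veth': 'kuryr.lib.binding.drivers.veth',
}

def resolve_binding_driver(conf_value):
    """Turn the kuryr.conf binding-driver setting into a module path."""
    if '.' in conf_value:
        # Operator gave a full path to an external driver: trust it.
        return conf_value
    if conf_value not in ALLOWED_BINDING_DRIVERS:
        raise ValueError(
            "binding driver %r is not whitelisted by this driver"
            % conf_value)
    return _KNOWN_PATHS[conf_value]

assert resolve_binding_driver('ipvlan') == 'kuryr.lib.binding.drivers.ipvlan'
assert resolve_binding_driver('my.external.driver') == 'my.external.driver'
```

The resolved path would then be imported by kuryr-libnetwork (rather than by kuryr-lib itself), which is the coherency point mchiappero raises next.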
11:10 <mchiappero> I'm still not sure whether we are saying the same thing or not :D
11:10 <apuimedo> :O
11:10 <apuimedo> :P
11:10 <mchiappero> ok ok :)
11:10 <apuimedo> wrong emoji the first time
11:10 <apuimedo> I meant :P because my lack of sleep must have been impairing my explanation
11:10 <apuimedo> let me put it this way
11:11 <apuimedo> kuryr-libnetwork reads kuryr.conf to see which port handling driver it should use
11:11 <mchiappero> my idea was that a driver in kuryr-libnetwork will be allowed to request kuryr-lib to load drivers specific to it
11:11 <apuimedo> this loads a driver module in kuryr-libnetwork
11:11 <mchiappero> exactly
11:11 <apuimedo> this driver has a constant with the modules it whitelists
11:11 <mchiappero> it will also determine/validate the driver for kuryr-lib
11:12 <mchiappero> exactly what I'm saying
11:12 <mchiappero> but currently the drivers in kuryr-lib get loaded by kuryr-lib itself
11:12 <apuimedo> it will still read the kuryr.conf binding driver config option to check if the operator specified a full path to an external driver. If that is not the case, only the whitelisted ones can be selected
11:12 <apuimedo> mchiappero: we can fix that to be done/overridden by kuryr-libnetwork
11:13 <apuimedo> probably
11:13 <mchiappero> yeah ok, that's what I wanted to check with you :)
11:13 <apuimedo> kuryr-libnetwork and the CNI pieces choose the binding from their own information
11:13 <apuimedo> and from whatever was in kuryr.conf about the binding driver
11:14 <mchiappero> I would probably create a function to request the loading of a specific driver in kuryr-lib... kind of a factory method in OOP
11:15 <mchiappero> just to double-check we are on the same page
11:16 <apuimedo> mchiappero: as long as it is not called a factory and it doesn't smeel too much of OOP
11:16 * apuimedo has severe allergies :P
11:16 <apuimedo> s/smeel/smell/
11:17 <mchiappero> whatever, just to check we are talking about the same things :)
11:18 <mchiappero> I'll get unpopular, but I don't have any allergies :p
11:18 <apuimedo> lucky you... every spring the fscking pollen tries to kill me
11:20 <mchiappero> well, no, I do have the same problem, even though in Ireland it has reduced significantly
11:22 <mchiappero> but my background is much more in statically typed languages, much much less in python/ruby
11:27 <apuimedo> I exorcised the Java demons
11:29 <mchiappero> well, this conversation can be very dangerous :P
11:30 <mchiappero> but I've worked a lot with C/C++/Java, and all of them have things I don't like
11:30 <mchiappero> I have little experience with Python, but I would say the same
11:31 <mchiappero> when I write the perfect language I'll let you know though :P
11:39 <apuimedo> I'm in the same boat
11:39 <apuimedo> :P
11:39 <apuimedo> so far I like what I see in Rust
11:39 <apuimedo> but the worst thing I ever saw was embedded Java (Javacard)
11:40 <mchiappero> I've never tried Rust; I would still like to learn golang first, but sooner or later I'll have a look :)
11:42 <mchiappero> I guess RH is working on projects in pretty much every common language
11:50 <apuimedo> yes, pretty much
12:27 <openstackgerrit> Ilya Chukhnakov proposed openstack/kuryr-kubernetes: Pod subnet driver and os-vif utils  https://review.openstack.org/398324
12:36 <openstackgerrit> Ilya Chukhnakov proposed openstack/kuryr-kubernetes: Pod subnet driver and os-vif utils  https://review.openstack.org/398324

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!