Tuesday, 2016-11-22

*** jgriffith has joined #openstack-kuryr00:26
*** jgriffith has quit IRC00:30
*** jgriffith has joined #openstack-kuryr00:33
*** jgriffith has quit IRC00:36
*** hongbin has quit IRC00:43
*** diogogmt has quit IRC01:05
*** pmannidi_ has joined #openstack-kuryr01:09
*** pmannidi has quit IRC01:11
*** yedongcan has joined #openstack-kuryr01:35
*** diogogmt has joined #openstack-kuryr01:49
*** jgriffith has joined #openstack-kuryr01:49
*** hongbin has joined #openstack-kuryr02:09
*** jgriffith has quit IRC02:12
*** limao has joined #openstack-kuryr02:25
*** tonanhngo has quit IRC02:48
*** yamamoto_ has joined #openstack-kuryr02:48
*** tonanhngo has joined #openstack-kuryr02:48
*** tonanhngo has quit IRC02:50
*** yamamoto_ has quit IRC02:57
*** yamamoto_ has joined #openstack-kuryr03:00
*** tonanhngo has joined #openstack-kuryr03:04
*** tonanhngo has quit IRC03:06
*** jgriffith has joined #openstack-kuryr03:36
*** jgriffith has quit IRC03:36
*** hongbin has quit IRC03:51
*** hongbin has joined #openstack-kuryr03:55
*** tonanhngo has joined #openstack-kuryr04:04
*** tonanhngo has quit IRC04:06
janonymousMerge conflicts :(04:11
*** hongbin has quit IRC04:13
*** diogogmt has quit IRC04:15
*** yamamoto_ has quit IRC04:17
openstackgerritDongcan Ye proposed openstack/kuryr-libnetwork: Fullstack: Using the credentials from openrc config file  https://review.openstack.org/39958504:17
yedongcanjanonymous: :P, bad news.04:18
janonymousyedongcan: :P04:19
*** limao has quit IRC04:22
*** tonanhngo has joined #openstack-kuryr04:24
*** yamamoto_ has joined #openstack-kuryr04:25
*** tonanhngo has quit IRC04:25
*** yamamoto_ has quit IRC04:27
*** yamamoto_ has joined #openstack-kuryr04:32
janonymousivc_, vikasc: can i work on vagrant for kuryr-kubernetes once devstack is done?04:32
vikascjanonymous, that would be great04:32
janonymousvikasc: thanks04:33
vikascjanonymous, yw!!04:33
*** yamamoto_ has quit IRC04:34
*** pmannidi_ is now known as pmannidi04:37
*** tonanhngo has joined #openstack-kuryr04:45
*** tonanhngo has quit IRC04:48
*** yamamoto_ has joined #openstack-kuryr05:02
*** yamamoto_ has quit IRC05:03
vikasclinux bridge is not letting the arp reply pass through it. No iptables rule seems to be blocking; even tried after flushing iptables. Any suggestions?05:04
*** yamamoto_ has joined #openstack-kuryr05:09
janonymousvikasc: what about security group rules?05:17
vikascjanonymous, as i said, i flushed the iptables rules, and that's where security groups are applied, so ...05:18
vikascjanonymous, something else05:18
janonymousvikasc: have you checked on ovs ?05:21
*** yamamoto_ has quit IRC05:21
vikascjanonymous, yes, ovs is not blocking anything. I can see packet entering on one interface of bridge but not coming out of the expected interface05:22
*** limao has joined #openstack-kuryr05:22
*** janki has joined #openstack-kuryr05:23
janonymousvikasc: aah, out of suggestions now :P05:26
vikascjanonymous, np :)05:26
vikascjanonymous, thanks !!05:26
janonymousvikasc: :)05:26
*** limao has quit IRC05:27
*** limao has joined #openstack-kuryr05:28
reedipvikasc : You are getting the input packet on OVS and Linux Bridge, but not the output packets, right?05:33
vikascreedip:  limao: "into the vm" direction, linux bridge is not forwarding the arp reply onto the tap interface, which i can see entring into the linux bridge on 'qvb' interface from ovs05:44
limaoHello vikasc05:44
vikasclimao, hello Liping :)05:44
vikasclimao, i was trying out trunk port support in neutron manually05:45
limaoI just joined the channel, let me check the log05:45
*** yedongcan has quit IRC05:46
reedipvikasc : so you have vlan enabled in trunk mode?05:46
vikasclimao, created two vlan interfaces in the nova-vm, and plugged them into two namespaces05:46
*** irenab_ has joined #openstack-kuryr05:46
vikascreedip, yes05:46
vikascwhen i am trying to ping from one namespace to another (both in the same vm, but on a network different from the vm's), i can see vlan tagged arp packets on the linux bridge05:47
vikascand the strange part is, from within both namespaces i can ping the router ip. The arp reply from the router (which is also tagged) is not getting blocked on the linux bridge then05:50
limaoYou created two vlan interfaces and plugged them into different namespaces in the nested vm, but they can't ping each other, right?05:50
vikasclimao,  yes05:50
vikasclimao, on debugging, found that the tagged arp reply is not able to pass the linux bridge05:51
limaovikasc: oh, Let me have a try in my local.05:51
vikasclimao, sure, thanks05:52
*** yedongcan has joined #openstack-kuryr05:54
*** tonanhngo has joined #openstack-kuryr05:55
*** tonanhngo has quit IRC05:56
vikasclimao, i did launch vm with neutron trunk parent port and created vlan interfaces and child subports06:02
*** yamamoto_ has joined #openstack-kuryr06:02
limaovikasc: Can you list your neutron commands? so that I can get same env06:03
vikasclimao, i followed these commands https://wiki.openstack.org/wiki/Neutron/TrunkPort#API-CLI_mapping06:04
limaook, thanks06:04
*** tonanhngo has joined #openstack-kuryr06:15
limaohi vikasc, I'm downloading the image (following the guide), just a quick confirmation: did you set the default gw in the namespace?06:16
*** tonanhngo has quit IRC06:16
limaodid you create eth0.101 and eth0.102, then add them into different namespaces?06:16
vikascyes on last line06:17
vikascon default gw, yes i created a neutron router (implemented with a namespace)06:17
vikascon host06:17
limaoIn the NS of eth0.101, did you set up the default GW?06:18
limaoI mean default route06:18
vikasclimao, 50.0.0.0/24  i used as network for vlan interfaces inside ns06:18
limaocan you show me the output of ip route in the ns?06:19
vikasclimao, i did set router ip, 50.0.0.1 as default gw06:19
limaovikasc: ok, get it06:20
vikasclimao, 50.0.0.6 and 50.0.0.9 are vlan interface ips06:20
vikasclimao, 100.0.0.6 is the vm ip, inside which vlan interfaces are created06:20
limaooh, you created one subport, right? and both of the containers are in the same subnet06:21
limao(I thought the containers were in different subnets, something like 50.0.0.6, 60.0.0.9)06:22
vikasclimao, containers/ns are from same subnet, 50.0.0.0/2406:23
vikasclimao, two subports06:23
vikasclimao, one for each vlan interface06:23
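A minimal sketch of the VM-side setup vikasc describes above: two VLAN subinterfaces on the trunk parent's eth0, one per namespace, both in 50.0.0.0/24 with the router 50.0.0.1 as default gateway. The VLAN IDs and namespace names are assumptions, and per the wiki guide each subinterface would additionally be created with the child port's neutron-assigned MAC (address "$child_mac"):

    import subprocess

    def sh(cmd):
        subprocess.check_call(cmd.split())

    # one VLAN subinterface per namespace; IPs are the neutron child-port IPs
    for ns, vlan, ip in (('ns1', 101, '50.0.0.6'), ('ns2', 102, '50.0.0.9')):
        sh('ip link add link eth0 name eth0.%d type vlan id %d' % (vlan, vlan))
        sh('ip netns add %s' % ns)
        sh('ip link set eth0.%d netns %s' % (vlan, ns))
        sh('ip netns exec %s ip addr add %s/24 dev eth0.%d' % (ns, ip, vlan))
        sh('ip netns exec %s ip link set eth0.%d up' % (ns, vlan))
        sh('ip netns exec %s ip route add default via 50.0.0.1' % ns)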
limaovikasc: OK, let me see06:24
vikascthanks limao06:25
*** tonanhngo has joined #openstack-kuryr06:27
*** janki is now known as janki|lunch06:30
*** tonanhngo has quit IRC06:30
*** yedongcan has quit IRC06:38
openstackgerritvikas choudhary proposed openstack/kuryr: Fix container port ipaddress setting in ipvlan/macvlan drivers  https://review.openstack.org/39705706:48
*** oanson has joined #openstack-kuryr06:51
*** reedip has quit IRC07:06
limaoHi vikasc07:12
vikaschi limao07:12
limaoDid you add two subports which are in the same subnet, but with different vlans?07:15
vikasclimao, yes07:16
vikascvlan is for separating out their traffic within the vm07:16
limaoOK, get it... I'm not sure if this is supported, let me try07:18
*** reedip has joined #openstack-kuryr07:20
vikasclimao, what is the supported use-case in your understanding?07:20
limaovikasc: different vlan for different subnet(in my mind)07:21
limaovagrant@devstack:~/devstack$ openstack port create --network net1 --mac-address "$parent_mac" port307:23
limaoHttpException: Unable to complete operation for network c4f24371-3605-4c84-9471-63069574fb4b. The mac address fa:16:3e:75:20:26 is in use.07:23
limaoHi vikasc, I get this error, when I create second subport in the same network07:23
vikasclimao, one moment pls, let me check07:24
limaovikasc: it should not allow you to create two ports with the same mac in the same subnet (this makes sense)07:24
vikasclimao, macs will be different07:24
limaovikasc: Did you create the two subports with the same mac as the parent mac?07:24
*** tonanhngo has joined #openstack-kuryr07:25
vikasclimao,  no, with neutron provided macs07:25
limao#     # (a) either create child ports having the same MAC address as the parent port07:25
limao#     # (remember, they are on different networks),07:25
limao#     # NOTE This approach is affected by a bug of the openvswitch firewall driver:07:25
limao#     # https://bugs.launchpad.net/neutron/+bug/162601007:25
openstackLaunchpad bug 1626010 in neutron "OVS Firewall cannot handle non unique MACs" [High,In progress] - Assigned to Jakub Libosvar (libosvar)07:25
*** tonanhngo has quit IRC07:26
limaoDid you follow the guide with a) or b) to do this?07:26
limao#     # (a) either create child ports having the same MAC address as the parent port07:26
limao#     # (remember, they are on different networks),07:26
vikasclimao, 'b'07:26
limao#     # (b) or create the VLAN subinterfaces with MAC addresses as random-assigned by neutron.07:26
limaoOK07:26
limaoI'm using a) now, let me try b)07:26
limaoSo when you create the sub interface, you assign the mac in the VM, right?07:27
limao#            ssh vm0 sudo ip link add link eth0 name eth0.101 address "$child_mac" type vlan id 10107:27
limao#            # eth0 and eth0.101 have different MAC addresses07:27
vikasclimao, i am working on vlan-per-container approach,  vlan-per-subnet/network will be another approach07:29
vikasclimao, yes, with this neutron provided mac, vlan interface is created07:29
limaovikasc: thanks, get it. trying this07:32
vikasclimao, thanks !!07:32
limaovikasc: how do you configure the eth0.101 ip, with dhcp or manual assignment?07:44
*** janki|lunch is now known as janki07:44
*** tonanhngo has joined #openstack-kuryr07:45
vikasclimao, i had dhcp enabled on the neutron network 50.0.0.0/24, so whatever neutron dhcp allocated to the ports, i assigned manually to the vlan interfaces07:46
*** tonanhngo has quit IRC07:47
vikasclimao, 50.0.0.6 and 50.0.0.907:47
limaovikasc: OK, I find that when I add the first subport, I can get an ip from dhcp (dhclient eth0.203)07:47
limaowhen I add the second subport (which is in the same subnet), it can't get an ip from dhcp07:48
vikasclimao, sorry i could not understand07:48
limaoLet's say your case, for example, you use 101 for 50.0.0.6, and 102 for 50.0.0.907:49
vikasclimao, ok07:49
limaothen after ifup eth0.101, I can use dhclient eth0.101 to get ip 50.0.0.607:50
limaobut after ifup eth0.102, I can't use dhclient eth0.102 to get 50.0.0.907:50
vikasclimao, is it assigning the same ip which was allocated by neutron when the corresponding child port was created?07:50
limaonot same ip, it is allocated by neutron07:51
limaowhich is child port ip07:51
vikasclimao, yes , i meant that only... same ip as child port's ip07:51
vikasclimao, i just looked up child ports and assigned manually07:52
limaovikasc: ohh.. same with child port's ip07:52
limaovikasc: OK, I think I need some time to look into it, I need to double confirm whether it can work with two subports which are in the same subnet07:53
vikasclimao, sure07:53
vikasclimao, will look forward to your findings07:53
limao(Two subports in different subnets work in my env)07:53
vikasclimao, thats like ipvlan case07:54
vikasclimao, right?07:54
vikasclimao, or each container will have a separate subnet then?07:54
limao(or each container will have a separate subnet...)07:55
limaoeach container will have a separate subnet07:55
vikasclimao, how can a cluster of containers belonging to the same subnet be created in such a case?07:57
vikasclimao, if each container has a separate subnet07:57
limaovikasc: I have not thought it through yet, I just mean that it can work with the neutron trunk feature :-), I agree that each container should be in the same subnet in our vm-nested case07:58
vikasclimao, ok07:58
vikasclimao, where do you think the two-vlan-interfaces-on-the-same-subnet case will fail?07:59
vikasclimao, in my understanding there should not be a problem08:00
limaovikasc: I mean it may have some bug08:00
vikasclimao, ok..08:00
vikasclimao, yeah.. i will explore more08:00
*** tonanhngo has joined #openstack-kuryr08:02
*** dimak has joined #openstack-kuryr08:04
*** tonanhngo has quit IRC08:05
*** tonanhngo has joined #openstack-kuryr08:23
*** tonanhngo has quit IRC08:24
*** yedongcan has joined #openstack-kuryr08:26
*** tonanhngo has joined #openstack-kuryr08:44
*** tonanhngo has quit IRC08:45
*** gsagie has joined #openstack-kuryr08:45
openstackgerritMerged openstack/kuryr: [docs] Libnetwork remote driver missing a step  https://review.openstack.org/40032308:57
*** lmdaly has joined #openstack-kuryr09:00
*** tonanhngo has joined #openstack-kuryr09:04
*** tonanhngo has quit IRC09:06
openstackgerritIlya Chukhnakov proposed openstack/kuryr-kubernetes: Controller side of pods' port/VIF binding  https://review.openstack.org/37604409:08
ivc_irenab, vikasc, apuimedo, i've rebased VIFHandler patch https://review.openstack.org/#/c/376044/3 to use drivers09:11
ivc_also removed the lengthy docstring in favor of the devref that irenab is working on09:12
vikascthanks ivc_09:15
openstackgerritLouise Daly proposed openstack/kuryr-libnetwork: [WIP]Driver based model for kuryr-libnetwork  https://review.openstack.org/40036509:18
*** tonanhngo has joined #openstack-kuryr09:24
*** tonanhngo has quit IRC09:25
openstackgerritIlya Chukhnakov proposed openstack/kuryr-kubernetes: Generic VIF controller driver  https://review.openstack.org/40001009:27
vikascirenab, ping09:31
irenab_vikasc, pong09:32
vikascirenab, can you please reword your comment on https://review.openstack.org/#/c/361993/11/kuryr/lib/segmentation_type_drivers/vlan.py09:32
vikascirenab, I was expecting to see the class deriving from the SegmentationDriver to deal with VLAN type09:33
vikasc    def __init__(self):09:33
vikasc        self.available_local_vlans = set(moves.range(const.MIN_VLAN_TAG,09:33
vikasc                                                 const.MAX_VLAN_TAG))09:33
vikascirenab, what do i need to do?09:33
irenab_vikasc, sorry, what is your question?09:34
vikascirenab, " I was expecting to see the class deriving from the SegmentationDriver to deal with VLAN type"09:34
vikascirenab, sorry, i could not get this comment clearly09:34
vikascirenab, what you are expecting09:35
vikascirenab, you want a base class and then specific classes deriving from it?09:35
vikascirenab, or something else09:35
irenab_vikasc, let me explain. The current class is SegmentationDriver, but it deals with VLAN management09:35
vikascok09:36
irenab_so when the VXLAN driver is added, it will need different handling09:36
irenab_ it can either delegate alloc/release to Driver or just have VLanSegDriver09:37
vikascirenab, i was thinking that the vxlan driver will have vxlan.py; using the name of the seg_driver from config, i am loading the driver dynamically09:38
vikasclike this09:39
vikascsegmentation_driver = importutils.import_module(cfg.CONF.binding.driver)09:39
vikascdriver = segmentation_driver.SegmentationDriver()09:39
irenab_so you are using 'namespace' and not class to differentiate09:42
vikascirenab, yes09:42
vikascirenab, same name in different namespace09:42
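A sketch of the namespace-per-driver loading vikasc describes; the vxlan module is hypothetical at this point, but it would live alongside vlan.py and expose the same class name, so selection happens purely through the configured module path:

    from oslo_config import cfg
    from oslo_utils import importutils

    # kuryr/lib/segmentation_type_drivers/vlan.py   -> SegmentationDriver
    # kuryr/lib/segmentation_type_drivers/vxlan.py  -> SegmentationDriver (hypothetical)
    # cfg.CONF.binding.driver holds the module path, e.g.
    # 'kuryr.lib.segmentation_type_drivers.vlan'
    segmentation_driver = importutils.import_module(cfg.CONF.binding.driver)
    driver = segmentation_driver.SegmentationDriver()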
irenab_vikasc,  somehow missed this, then my comment is resolved09:43
vikascirenab, thanks09:43
vikascirenab, anyways going to update the patch to resolve other reviewers' comments09:43
*** tonanhngo has joined #openstack-kuryr09:43
vikascirenab, wanted to make sure i am not missing something09:43
irenab_vikasc, thank you for the patience09:44
*** irenab_ has quit IRC09:44
*** irenab_ has joined #openstack-kuryr09:44
*** tonanhngo has quit IRC09:44
vikascirenab, s/for/from :)09:45
*** garyloug has joined #openstack-kuryr09:47
openstackgerritMerged openstack/fuxi: Fix the .gitignore file  https://review.openstack.org/39928209:47
openstackgerritvikas choudhary proposed openstack/kuryr: [WIP] Nested-Containers: vlan driver  https://review.openstack.org/36199309:48
apuimedocool09:49
irenab_vikasc, is this still WIP?09:49
janonymousivc_,vikasc: one question.. currently the hyperkube binary is extracted and run for kuryr.... what is the problem with running against an existing kubernetes env and just restarting it with the cni driver?09:50
vikascirenab, test cases are pending09:50
apuimedojanonymous: ?09:50
irenab_vikasc, so cannot be merged, right?09:50
vikascirenab, right09:50
janonymousapuimedo: hey was going through kuryr-kubernetes09:51
vikascirenab, working on test cases09:51
janonymousapuimedo: is that question correct?09:51
apuimedojanonymous: could you rephrase it? I'm afraid I didn't get it09:52
*** lmdaly has quit IRC09:52
janonymousapuimedo:yeah09:52
vikascjanonymous, you mean y not individual binaries?09:53
janonymousapuimedo: to be specific, in the devstack installation i found that the hyperkube image is extracted: extract_hyperkube --> prepare_kubelet --> run_k8s_kubelet. So when a kubernetes cluster is running, wouldn't placing the cni driver in the appropriate location and restarting hyperkube work to pick up the changes?09:55
*** apuimedo has quit IRC09:55
*** apuimedo has joined #openstack-kuryr09:56
janonymousvikasc: existing binaries09:57
apuimedojanonymous: which existing binaries?09:58
janonymousapuimedo: like in an already running kubernetes cluster, with a running kubelet, api-server etc..09:58
vikascapuimedo, no need to rerun already running api-server10:00
vikascsoory10:00
vikascjanonymous, ^10:00
apuimedojanonymous: oh, you can reuse existing ones10:00
apuimedolook at the devstack settings10:00
apuimedoif you specify already running services, IIRC, it won't launch its own10:00
vikascapuimedo, +110:00
apuimedoif I don't remember correctly, I can fix it10:00
janonymousapuimedo: https://git.openstack.org/cgit/openstack/kuryr-kubernetes/tree/devstack/plugin.sh#n34010:01
vikascjanonymous, it is like running kubelet on this node only, on which devstack is being run10:01
vikascjanonymous, and then the kubelet will register itself with the apiserver10:02
vikascjanonymous, we should be placing the cni driver also at the expected path and run kubelet accordingly10:03
janonymousyes this last point i was referring to10:04
*** tonanhngo has joined #openstack-kuryr10:04
janonymousapuimedo: vikasc: but sry for not clearly framing the question10:04
apuimedommm... I'm not sure I got it yet10:05
apuimedoyou can decide not to run the kubelet10:05
*** tonanhngo has quit IRC10:05
apuimedoand run it manually10:05
apuimedothat's why I put it so that you have to explicitly enable the kubelet service10:06
vikascjanonymous, do you mean that you could not find where in the devstack plugin the code for placing the cni driver at the expected path is?10:06
janonymoussorry for creating confusion, let me rephrase it from beginning10:07
vikasc:) sure10:07
janonymousWe have https://git.openstack.org/cgit/openstack/kuryr-kubernetes/tree/devstack/plugin.sh#n340 which has 3 steps as mentioned in it.10:07
janonymousQuestion1) If i do not extract hyperkube and use an already installed hyperkube image with cni parameters, would it be okay?10:09
janonymousQuestion2) also, wouldn't placing the cni driver at the expected path and restarting the kubernetes services be enough to pick up the changes? just like we do in kuryr-libnetwork... place the driver at a path and tell docker to use it10:10
apuimedo1) janonymous: you would have to modify your hyperkube image to place our cni stuff inside10:13
apuimedo2) you need to start the kubelet when the plugin descriptors are already in place10:13
janonymous1) by modify you mean restart with different parameters? or to rebuild hyperkube binary?10:17
janonymous2) right10:17
limaoHi vikasc10:20
ivc_janonymous, what is the use-case that you have in mind? why exactly do we want an existing (and thus uncontrolled) kubernetes cluster?10:21
openstackgerritvikas choudhary proposed openstack/kuryr: Fix container port ipaddress setting in ipvlan/macvlan drivers  https://review.openstack.org/39705710:22
vikasclimao, hello10:22
limaovikasc: just share you what I find right now10:22
openstackgerritvikas choudhary proposed openstack/kuryr: [WIP] Nested-Containers: vlan driver  https://review.openstack.org/36199310:23
limaovikasc: when we add two subports in different vlans within the same subnet, mac flapping happens10:23
vikasclimao, mac flapping?10:24
janonymousivc_: not specific, trying to understand...as magnum might be deploying the coes...so was thinking10:24
*** tonanhngo has joined #openstack-kuryr10:24
limao09:55:27.377407 fa:16:3e:f6:d3:a0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 204, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.0.5.1 tell 10.0.5.7, length 2810:24
limao09:55:27.377449 fa:16:3e:f6:d3:a0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 203, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.0.5.1 tell 10.0.5.7, length 2810:24
limao09:55:27.377465 fa:16:3e:44:81:4b > fa:16:3e:f6:d3:a0, ethertype 802.1Q (0x8100), length 46: vlan 204, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Reply 10.0.5.1 is-at fa:16:3e:44:81:4b, length 2810:24
* vikasc looking10:24
apuimedoirenab: now https://review.openstack.org/#/c/397057/7 looks mergeable to me10:25
vikasclimao, which interface were these captures taken on?10:25
limaovikasc: I send out an arp request from vlan 204, but when I tcpdump on qbr, it shows two arp requests10:25
vikasclimao, thats expected10:25
*** tonanhngo has quit IRC10:26
limaovikasc: This is because there will be vlan flapping between tbr- and br-int10:26
limaovikasc: there will be spt-XXX with vlan 203, and spt-YYY with vlan 204 in tbr-10:27
vikasclimao, that i understand10:27
limaovikasc:  but in br-int, the spi-XXX and spi-YYY are in same local vlan10:27
limaovikasc: then it will happen that10:28
limaovikasc: This will impact the mac learning table on qbr-10:28
vikasclimao, let me explain my understanding.. step by step10:28
limaovikasc: that's why you can tcpdump the arp response on qbr-, but it does not get forwarded to the vm10:29
limaovikasc: sorry, I have to catch a bus very soon10:29
vikasclimao, i will catchup with you10:29
limaovikasc : brctl showmacs qbrXXX10:30
vikasclimao,  i will mail you10:30
limaovikasc: you can use showmacs to check the mac learning table of qbr10:30
limaovikasc: when it has the problem, you will find the mac of the subport is learned on the qvbYYY port; it should be on the tap port10:31
vikasclimao, i will check through yr points and get back10:32
vikasclimao, thanks a lot for the time :)10:32
limaovikasc: Thanks, and sorry to have to go now10:32
vikasclimao, np.. will catch up10:32
*** limao has quit IRC10:33
janonymousivc_,vikasc, apuimedo: Thanks for your responses, i will try from the viewpoint mentioned, maybe i am wrong at some point :)10:33
vikascjanonymous, yw!10:37
apuimedo;-)10:38
vikascapuimedo, do you have time to continue this discussion a bit?10:39
apuimedovikasc: which?10:39
vikascapuimedo, trunk port one10:39
*** garyloug has quit IRC10:40
apuimedooh10:41
apuimedosure10:41
apuimedolet's go10:41
apuimedowhat is the issue?10:41
vikascexplaining10:42
vikasci have a nova vm launched on parent trunk port 100.XXXXXX10:42
ivc_apuimedo, vikasc, can you point me at where in kuryr-lib do we handle network namespaces (i.e. create ns and move veth peer to it)?10:42
vikascfollowing this doc https://wiki.openstack.org/wiki/Neutron/TrunkPort10:42
apuimedopaging ltomasbo10:44
apuimedo:-)10:44
vikascivc_, https://github.com/openstack/kuryr/blob/master/usr/libexec/kuryr/ovs#L2510:44
vikascapuimedo, i already had discussion with him10:45
vikascapuimedo, since morning i am telling my story to everybody10:45
vikasc:D10:45
vikascivc_, is this what you were looking for?10:45
apuimedo:-)10:45
vikascapuimedo, he said he will try my scenario and get back10:45
ivc_vikasc, nope :) i'm looking for a container side of veth10:46
vikascapuimedo, let's forget that i asked you to discuss this :P . ltomasbo will get back10:46
vikascapuimedo, i will update you10:47
apuimedoivc_: IIRC it depends on the COE10:47
apuimedodocker moves it for you in libnetwork10:47
ivc_apuimedo, COE is what?10:47
apuimedoContainer Orchestration Engine10:47
apuimedoin this case, the Docker runtime10:47
apuimedolibcontainer+libnetwork10:47
apuimedoyou want to see how to do it in ipdb?10:48
ivc_na10:48
ivc_i was wondering if it is already in kuryr-lib but couldn't find it10:48
apuimedono, I do not think it is there10:49
openstackgerritvikas choudhary proposed openstack/kuryr: [WIP] Nested-Containers: vlan driver  https://review.openstack.org/36199310:50
apuimedoivc_: vikasc: what's the status on https://review.openstack.org/#/c/397853/510:57
apuimedoI'd like to shorten the outstanding queue10:57
ivc_apuimedo, i don't have any updates planned for the whole chain now (unless there are -1)10:59
apuimedoI'm asking about 'He is going to update VIFHandler patch with driver api calls.'10:59
apuimedoit seems you and vikasc agreed to some change, is that right?10:59
ivc_apuimedo, i've rebased VIFHandler already :)10:59
vikascapuimedo, ivc_ has done that10:59
apuimedovikasc: then +2 ;-)10:59
apuimedolet's get rid of patches!11:00
vikascapuimedo, sure.. allow me couple of minutes :)11:00
apuimedoor we'll only be rebasing till we grow white beards11:00
openstackgerritIlya Chukhnakov proposed openstack/kuryr-kubernetes: Controller side of pods' port/VIF binding  https://review.openstack.org/37604411:02
ivc_^ rebased as there was an update in the chain and the patch was not visible in 'related changes'11:05
apuimedook11:05
apuimedoivc_: are you using the current devstack code in your vagrant VM?11:06
apuimedoI've been getting some errors starting kubelet11:06
ivc_i'm not using vagrant11:06
ivc_aye you need to set the config options11:06
ivc_for default project_id/subnet etc11:06
ltomasbohi apuimedo, vikasc: i was in a meeting the whole morning, going to try it now and get back to you11:06
apuimedoltomasbo: so many meetings will make you a politician11:07
ltomasbo:D I hope not!11:07
vikascltomasbo, are you able to scroll back and see liping's comments?11:08
vikascltomasbo, he also tried same11:08
ivc_apuimedo, do you think we need to update devstack as part of https://review.openstack.org/376044 or are you ok if we do so in separate patch?11:08
* apuimedo checking11:09
ltomasboso, not working for him either?11:09
ivc_apuimedo, and while i'm not using vagrant, i do use devstack, just deploying it manually11:09
* apuimedo will check in a little while actually11:09
openstackgerritMerged openstack/kuryr-kubernetes: Controller driver base and pod project driver  https://review.openstack.org/39785311:09
vikascltomasbo, yes.. not working11:09
apuimedoivc_: and devstack launched the kubelet service fine for you at every stacking?11:09
ltomasboI understand that it could be tricky for br-int, but there should be no problem with the qbr11:09
apuimedocause for me there's some flakiness11:10
vikascltomasbo, thats what i was thinking11:10
ltomasbolet me try it anyway! :D11:10
vikascltomasbo, sure.. and then lets discuss11:10
apuimedoivc_: IntegrityError is rather broad in scope :P11:10
apuimedoI propose we subclass it python3 OSError style in the future11:11
ivc_apuimedo, IntegrityError is something that is not expected to happen11:11
apuimedofucksakes, I just proposed subclassing11:11
ivc_you need to set all options for [neutron_defaults] in kuryr.conf11:11
apuimedoy=ー( ゚д゚)・∵.11:12
apuimedoivc_: sure. I get that, I'm just saying that it would be nicer to have a NeutronConfError subclassing IntegrityError and subclass that11:12
ivc_apuimedo, in fact that should be an RequiredOption error11:13
apuimedoivc_: not necessarily11:13
ivc_can you post an exception stacktrace to paste?11:13
apuimedoRequiredOption could be a type of IntegrityError11:13
apuimedobecause I imagine IntegrityError can also be that the required option is present but leading to bad values/resources11:14
ivc_apuimedo, no, i mean i'm raising RequiredOption in case the option is missing11:14
ivc_like https://review.openstack.org/#/c/398324/10/kuryr_kubernetes/controller/drivers/default_subnet.py@5011:14
ivc_so if you got an IntegrityError, then i've probably missed something11:15
ivc_can you paste a stacktrace?11:15
apuimedoivc_: I'm just reviewing the code11:15
apuimedo;-)11:15
apuimedoI meant to say that I prefer that when we get stacktraces, I'd like us to be more specific in the kind that gets raised11:16
apuimedothat I'm fine catching IntegrityErrors, but for logging it would be nicer if we had subtypes11:16
ivc_there should be an error message11:17
apuimedoI know. The error message is nicely specific. Nothing to complain about in the message11:18
ivc_apuimedo, stop teasing me and give me the stacktrace xD11:18
apuimedoivc_: I will when I get there11:18
apuimedonetwork = next(net.obj_clone() for net in subnets.values()11:18
apuimedo                       if net.id == neutron_port.get('network_id'))11:18
apuimedothis code is a bit peculiar11:19
apuimedonaming wise11:19
apuimedodue to how we handle the subnets -> nets mapping11:19
apuimedoit gets confusing11:19
apuimedoI'll propose a name change11:19
openstackgerritDongcan Ye proposed openstack/kuryr-libnetwork: Fullstack: Using the credentials from openrc config file  https://review.openstack.org/39958511:20
ivc_apuimedo, ikr, but to fix that we need Subnet.id in os-vif11:21
apuimedoivc_: I just posted the comment11:21
apuimedohttps://review.openstack.org/#/c/399953/2/kuryr_kubernetes/os_vif_util.py11:22
apuimedoline 11311:22
ivc_aye11:23
ivc_well subnets is in the same format as returned by PodSubnetsDriver.get_subnets11:23
apuimedoivc_: sure. I'm just trying to get it readable for new onlookers and for my future self that tends to forget things11:24
ivc_imo, we can add a docstring11:24
apuimedoivc_: yes. docstring is a good solution11:24
ivc_apuimedo, that whole os_vif_util is like a can of worms because of that missing subnet.id11:25
apuimedoI agree on the assessment :-)11:25
ivc_i expect to refactor it once os-vif is updated11:25
apuimedoI'm really tempted to send the patch to os-vif11:26
ivc_that'd be cool11:26
ivc_just note that everything OVO-related requires a OVO version bump11:26
ivc_and there might be some resistance on their side11:26
apuimedoivc_: I expect all difficulty to be exactly on that11:28
ivc_apuimedo, can you pls post the stacktrace with the exact error message to paste.openstack.org?11:29
apuimedoivc_: I didn't get any stacktrace. I was just imagining ways in which it could fail11:30
apuimedosorry I wasn't clear enough in my explanation above11:30
ivc_oh11:31
ltomasbovikasc, I'm trying with two subports in the same network11:34
ltomasboand I see that dhclient only gets an IP for the first one, not the second one11:34
ivc_apuimedo, well those IntegrityErrors should never happen (i.e. i don't expect that code to be reachable at all), but its there so we have some context instead of that StopIteration (or something else)11:34
vikascltomasbo, the problem is because of the arp reply coming from ns2-->linux_br-->trunk_br-->br-int-->trunk_br--->linux_br--> ... so finally the bridge sees the arp reply coming from the other direction than the vm11:34
ltomasboand I can only ping from outside to the first one11:34
apuimedoivc_: ok11:34
ivc_in case i've missed something and/or something went completely wrong11:34
ltomasboI'm not even using namespaces11:35
*** tonanhngo has joined #openstack-kuryr11:35
ltomasbojust the eth0.101 and eth0.10211:35
vikascltomasbo, yes .. i also noticed that both namespaces can ping outside ip11:35
ltomasboboth can ping out, but only one can be pinged in11:35
vikascltomasbo, that's because in that case the learned port is correct11:35
*** tonanhngo has quit IRC11:36
vikascltomasbo, but when they try to ping each other, just check the linux_br mac table.. the mac of one of them is learned on the outside port 'qvo' of the bridge11:36
ltomasbothe problem will also be the route table11:37
apuimedoivc_: ok. Finished reviewing https://review.openstack.org/#/c/399953/2 . Once the docstring is there I can +211:37
ltomasbo10.0.5.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0.10111:38
ltomasbo10.0.5.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0.10211:38
ivc_apuimedo, cool, thnx11:38
vikascltomasbo, this route table is from vm?11:40
ltomasboyep, inside the VM11:41
vikascltomasbo, thats not the case in my vm :11:44
vikascroot@vm0 ~]# netstat -nr11:44
vikascKernel IP routing table11:44
vikascDestination     Gateway         Genmask         Flags   MSS Window  irtt Iface11:44
vikasc0.0.0.0         100.0.0.1       0.0.0.0         UG        0 0          0 eth011:44
vikasc100.0.0.0       0.0.0.0         255.255.255.0   U         0 0          0 eth011:44
vikasc169.254.169.254 100.0.0.1       255.255.255.255 UGH       0 0          0 eth011:44
vikascltomasbo, my interfaces are in namespaces, so not visible in vm's network namespace11:45
vikascltomasbo, i dont think this issue will be there if ovs is used instead of ovs+linux_br. WDYT?11:47
*** lmdaly has joined #openstack-kuryr11:49
ltomasboumm, I can try that too11:50
ltomasbowhat I see is the arp reply being on the wrong vlan11:50
ltomasbo22:23:36.873910 fa:16:3e:de:ac:83 > Broadcast, ethertype 802.1Q (0x8100), length 46: vlan 101, p 0, ethertype ARP, Request who-has 10.0.5.12 tell 10.0.5.1, length 2811:51
ltomasbo22:23:36.873931 fa:16:3e:de:ac:83 > Broadcast, ethertype 802.1Q (0x8100), length 46: vlan 102, p 0, ethertype ARP, Request who-has 10.0.5.12 tell 10.0.5.1, length 2811:51
ltomasbo22:23:36.875733 fa:16:3e:e2:1d:71 > fa:16:3e:de:ac:83, ethertype 802.1Q (0x8100), length 46: vlan 101, p 0, ethertype ARP, Reply 10.0.5.12 is-at fa:16:3e:e2:1d:71, length 2811:51
ltomasbowhile it should be 102 (and the mac of eth0.102)11:51
ltomasboI just tried removing the route to 10.0.5.0 from eth0.10111:54
ltomasboand then the eth0.102 works perfectly11:54
ltomasboboth in and out11:55
* vikasc reading11:55
ltomasbogoing to try if ovs-firewall can deal with this, and even if it does, if it is doing the right encapsulation11:55
*** tonanhngo has joined #openstack-kuryr11:55
ltomasboperhaps trunk ports are not intended to have several vlans on the same network, but rather one vlan per network11:55
*** tonanhngo has quit IRC11:56
vikascltomasbo, the above tcpdump is from the "tap" interface of "qbr", right?11:58
vikascltomasbo, if you do tcpdump on the other side interface, "qvo".. you will see one more arp reply packet with vlan 10211:59
vikascltomasbo, the above vlan 101 arp reply is coming from inside the vm and going to br-int via the trunk bridge, and from there it will come back12:00
ltomasboyup, tap inferface12:00
openstackgerritIlya Chukhnakov proposed openstack/kuryr-kubernetes: Port-to-VIF os-vif translator for hybrid OVS case  https://review.openstack.org/39995312:01
ltomasbovikasc, not sure I got your last comment12:01
ivc_apuimedo, added docstrings https://review.openstack.org/#/c/399953/2..3/kuryr_kubernetes/os_vif_util.py12:02
ltomasbothat was the reply to a ping from the qrouter namespace, not from one ns to another12:02
vikascltomasbo, this arp reply with vlan 10212:02
apuimedoivc_: ivc_ why should https://review.openstack.org/#/c/400010/3/kuryr_kubernetes/os_vif_util.py line 198 restrict how many subnets the backing network of a subnet has?12:02
apuimedoor am I misreading?12:02
vikascltomasbo, sorry vlan 10112:02
vikascltomasbo, it is coming from inside vm and going to br-int via trunk bridge12:02
ivc_apuimedo, its not restricting networks, but its because of the same mapping problem12:03
apuimedohow I read it, if let's say we have a network A with subnets X and Y12:03
ltomasboyep, via qbr and trunk bridge12:03
ivc_apuimedo, you will have {X.id: A[X], Y.id: A[Y]}12:04
apuimedoyou'd have a mapping {id(X): osvif(a), id(Y): osvif(a)}12:04
ivc_yup12:04
apuimedothe trick here is12:04
vikascltomasbo, from br-int this same packet will turn back and enter trunk-br again, and this time it will be tagged with vlan 102.. and you can see this arp reply with 102 on the other interface of the linux bridge12:04
ivc_apuimedo, but those two osvif(a) will have different subnets attribute12:04
apuimedothat's what I thought12:04
apuimedoI have to say, that's a bit weird12:05
ivc_its ugly, but i can't help it12:05
apuimedoand only result of how we craft the network osvif objects :P12:05
ivc_no, because we need subnet_id -> subnet mapping while that structure should also have network object12:05
vikascltomasbo, bluejeans?12:06
ltomasbook12:06
apuimedoivc_: but the osvif Network object that you get as values in the example above could have references to both X and Y12:07
apuimedoin both values12:07
*** yedongcan has left #openstack-kuryr12:07
ivc_apuimedo, erm no. those would be copies of the same Network object12:08
apuimedoyou could have both X and Y in the subnets SubnetList12:08
ivc_nope12:08
ivc_then i'll lose the subnet_id -> subnet mapping12:08
apuimedommm12:08
apuimedoI really misunderstood then. I thought the mapping is subnetid to osvif Network object12:09
ivc_well, you see, this thing should have been [NetworkA[SubnetA1, SubnetA2], NetworkB[SubnetB1]]12:09
ivc_that mapping is logically subnet_id -> subnet mapping, that also has the Network that contains the subnet12:10
ivc_apuimedo, so as soon as we get a subnet.id, the 'subnets' mapping will be converted into  [NetworkA[SubnetA1, SubnetA2], NetworkB[SubnetB1]]12:13
apuimedook12:14
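An illustrative sketch of the clone-per-subnet mapping ivc_ describes, using os-vif objects; the helper itself is hypothetical, not code from the patch:

    from os_vif.objects import subnet as osv_subnet

    def index_by_subnet(network, subnets_by_id):
        # subnets_by_id is {neutron_subnet_id: os-vif Subnet}; the ids must
        # come from neutron, since the os-vif Subnet object has no id field
        mapping = {}
        for subnet_id, subnet in subnets_by_id.items():
            net = network.obj_clone()  # one Network clone per subnet
            net.subnets = osv_subnet.SubnetList(objects=[subnet])
            mapping[subnet_id] = net
        return mapping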
*** tonanhngo has joined #openstack-kuryr12:15
ivc_in fact it would probably be much cleaner if the os-vif network did not have a 'subnets' attribute at all and instead a Subnet had a network_id12:17
*** tonanhngo has quit IRC12:17
apuimedoivc_: your database background (foreign key) is showing :P12:20
ivc_haha12:20
ivc_but it would allow bi-directional relationship between Network <-> Subnet12:21
apuimedoand I have to say... I tend to agree. But I would probably have both for browsing and to have it lazy load the resource12:21
apuimedoORM style12:21
apuimedokilling performance since 2000s12:21
ivc_xD12:22
apuimedolmdaly: mchiappero: have you tried baremetal kuryr-libnetwork with dpdk?12:23
ivc_apuimedo, you'd then also have to solve circular dependency or that lazy loading would kill the server :)12:24
*** lmdaly has quit IRC12:24
apuimedoivc_: soft links, so as not to also kill the garbage collector12:24
ivc_apuimedo, during serialization i mean :)12:24
ivc_apuimedo, or add some sort of AI to guess if the user wants net as part of the subnet, or subnet as part of net.12:26
*** tonanhngo has joined #openstack-kuryr12:26
ivc_apuimedo, maybe thats how skynet was born12:27
*** tonanhngo has quit IRC12:27
apuimedo:-)12:28
*** garyloug has joined #openstack-kuryr12:37
irenab_ivc_, ping12:38
ivc_irenab_, pong12:39
irenab_ivc_, going through the Controller pod patch and have a few questions12:40
irenab_Are the drivers that are used by the VifHandler instantiated or referenced by the VifHandler?12:41
ivc_yes12:41
irenab_get_instance sort of implies a singleton12:42
ivc_it is12:42
irenab_So there may be the case that the same Driver will be used by another Handler entity, correct?12:42
ivc_it is possible, yes12:42
ivc_same instance12:43
irenab_ivc_, great. thanks12:43
ivc_irenab, np :)12:43
irenab_ivc_, I think I will drop the drivers hierarchy from the diagrams, it gets too crowded12:43
*** garyloug has quit IRC12:44
ivc_yep, just keep the interface ones12:44
ivc_i.e. PodVIFDriver12:44
*** tonanhngo has joined #openstack-kuryr12:45
irenab_for the derived ones, the question regarding naming12:45
irenab_there is GenericVIFDriver but the rest are DefaultXXXDriver12:46
irenab_ivc_, by design?12:46
ivc_irenab_, yes, Default not as in 'default driver', but 'default project' or 'default subnet'12:47
*** tonanhngo has quit IRC12:47
ivc_irenab_, also i'd prefer if we have base class names (like PodVIFDriver) on the diagram, because it will allow us to keep devref up-to-date even if we change the drivers used by default12:48
irenab_I think we'd better have and explain the default drivers too, since I was confused about whether it is the default driver or default subnet/project12:51
irenab_ivc_, I will complete going through the patch and share the modified devref12:52
ivc_we can do it in text. i'm just worried that diagrams/images are harder to maintain :)12:52
ivc_irenab_, cool, thnx12:52
*** tonanhngo has joined #openstack-kuryr13:04
*** tonanhngo has quit IRC13:05
*** limao has joined #openstack-kuryr13:10
irenab_ivc_, posted few questions on the patch13:12
mchiapperoapuimedo: no, we never tried, but I guess we will soon13:16
mchiapperoapuimedo: so, I'm afraid we are busy these days, I would suggest to go for the release and not wait for us13:17
mchiapperoalthough we will catch up soon13:17
mchiapperonow, two big problems we spotted13:17
limaohi vikasc,ltomasbo13:18
vikaschi limao13:18
mchiapperothe first one is that the binding drivers assign IP addresses to interfaces, while a docker guy claims the remote plugin should never do that13:19
limaovikasc: just back home and read through your irc log13:19
vikasclimao, ltomasbo is going to try with ovs-firewall, to see if this problem is with linux bridge only13:20
mchiapperothe second very big problem is that it seems that neutron won't let you assign the same mac address to two different neutron ports13:20
mchiapperobut we can't change the mac address on the ipvlan interface either as it's not supported (but could be, I could easily produce a patch)13:20
limaovikasc: OK, cool13:20
mchiapperoalso, I'm wondering whether it's possible to create a neutron port with just IPv4 addresses and no IPv6 ones13:21
irenab_ivc_, apuimedo : ping13:28
irenab_vikasc, ping13:28
ivc_pong13:30
irenab_ivc_, I posted questions on patchset 3 and then realised you rebased it13:31
ivc_irenab_, no problem13:31
*** yamamoto_ has quit IRC13:31
*** tonanhngo has joined #openstack-kuryr13:34
*** tonanhngo has quit IRC13:35
limaovikasc: one more question, is the latest vm-nested (trunk/subport) design in this spec: https://github.com/openstack/kuryr/blob/master/doc/source/specs/newton/nested_containers.rst13:35
vikasclimao, yes.. but it needs update13:38
ivc_irenab_, i've replied to your comments13:38
limaovikasc: OK, let me go through that again :-), will check with you if have questions.13:39
vikasclimao, sure :)13:40
irenab_ivc_, checking in a min13:48
openstackgerritOpenStack Proposal Bot proposed openstack/fuxi: Updated from global requirements  https://review.openstack.org/37374513:52
*** jgriffith has joined #openstack-kuryr13:53
openstackgerritOpenStack Proposal Bot proposed openstack/kuryr-libnetwork: Updated from global requirements  https://review.openstack.org/35197613:54
*** tonanhngo has joined #openstack-kuryr13:54
*** tonanhngo has quit IRC13:56
*** tonanhngo has joined #openstack-kuryr14:15
*** tonanhngo has quit IRC14:16
*** yamamoto has joined #openstack-kuryr14:16
ivc_irenab_, ping14:20
irenab_ivc_, pong14:20
ivc_irenab_, https://review.openstack.org/#/c/376044/3/kuryr_kubernetes/controller/handlers/vif.py@5514:20
ivc_i thought  you suggested that we add 'Filter' abstraction14:20
irenab_I did14:20
ivc_and then you ask me why we need it :)14:21
ivc_i'm confused14:21
irenab_I'm just not sure why ADD/DELETE may need different ones14:21
irenab_the Filter is for not handling the event14:21
apuimedoirenab_: pong14:21
*** lmdaly has joined #openstack-kuryr14:21
irenab_apuimedo, wanted to ask not to merge urgently after +2 for non trivial patches, didn't have time to review :-)14:22
ivc_irenab_, well for VIFHandler the filter would be 'if not self._is_host_network(pod)' and imo abstracting it into a Filter entity does not seem worth it14:22
irenab_I thought you suggested there can be cases where the Filter for ADD may differ from the Filter for DELETE14:23
ivc_and we'll still have the 'if not self._is_pending(pod)' for 'on_present'14:23
irenab_I was more on 'Filter if this k8s instance should be handled by kuryr'14:24
ivc_irenab_, what i mean is i don't see a reason to justify introducing that 'Filter' entity14:24
ivc_for now at least14:24
irenab_is_pending is logical check14:25
apuimedook14:25
ivc_irenab_, so is 'is_host_network'14:26
irenab_ivc_, I agree that Filter abstraction is not required right now, possibly we will have to add it in case there will be multiple CNI drivers14:26
ivc_irenab_, agreed14:26
ivc_irenab_, so to make it clear, your comment was for the 'REVISIT' part, right?14:27
irenab_ivc_, yes :-). I think we have enough abstraction/pluggability points for now14:27
ivc_:)14:27
irenab_ivc_, so there are only 2  questions left14:29
irenab_I am not sure I got your response on one time annotation update for the current code (I got your intention for the next step)14:29
ivc_irenab_, regarding the 'why else'14:29
irenab_yes14:30
ivc_do you suggest that we remove 'elif' and move the inner block outside?14:30
ivc_or replace 'elif' with 'else'?14:30
irenab_I was not sure what happens for a Pod that is updated14:31
irenab_seems nothing, but then maybe code can be simplified14:31
ivc_not sure if i understand what you mean14:31
ivc_there are 3 cases covered there14:32
irenab_I do not understand why Else at line 70 is required14:32
ivc_if there's no VIF -> we create one and annotate (the VIF may have 'active' either True or False - that depends on VIFDriver)14:32
ivc_if VIF.active == True -> do nothing14:33
ivc_if VIF.active == False -> activate it14:33
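The three cases paraphrased as a sketch; the helper names are illustrative, and the real logic lives in the VIFHandler patch under review:

    def on_present(self, pod):
        vif = self._get_vif(pod)                  # read from the pod annotation
        if vif is None:
            vif = self._drv_vif.request_vif(pod)
            self._set_vif(pod, vif)               # annotating fires a MODIFIED event
        elif not vif.active:
            self._drv_vif.activate_vif(pod, vif)  # may raise ResourceNotReady to retry
            self._set_vif(pod, vif)               # the CNI side unblocks on this update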
irenab_so if it is not active, when is it going to be activated?14:33
apuimedoirenab_: when you have a moment, merge https://review.openstack.org/#/c/394819/14:33
ivc_irenab_, if it is not active, it is up to 'activate_vif' to do something about it :)14:34
ivc_in case of neutron-ovs or neutron-linuxbridge 'activate_vif' will poll neutron port status14:34
ivc_(and raise ResourceNotReady to retry if necessary)14:35
irenab_ivc_, but the else prevents entering this block14:35
*** tonanhngo has joined #openstack-kuryr14:35
ivc_it does not14:35
irenab_ivc_, seems its time for me to take another cup of coffee :-)14:36
ivc_:)14:36
ivc_it's a tricky one :014:36
*** tonanhngo has quit IRC14:36
ivc_the thing is that if request_vif returned active==False, we've updated pod annotation14:36
ivc_it triggers another event which will get us to the 'elif' part14:37
irenab_ivc_, where this event is triggered?14:37
ivc_by k8s14:37
ivc_when you annotate pod it triggers another event14:37
irenab_update?14:37
ivc_'modified', but yes :)14:38
apuimedoirenab_: ivc_: thanks for going thoroughly over this ;-)14:38
irenab_ivc_, ok, got it :-). We need sequence diagrams for k8s-kuryr-neutron ...14:38
apuimedoirenab_: almost flow charts14:39
ivc_irenab_, the trick here is to get 'active=False' annotation to CNI driver so it can plug the VIF so neutron will update port status14:39
irenab_ivc_, sort of like it again how k8s does things14:39
irenab_apuimedo, ivc_ will try to add them tomorrow to the devref. Need to go now14:40
ivc_irenab_, and when neutron updates the status, activate_vif succeeds and updates annotation which is captured by CNI and causes it to unblock and to return to kubelet14:40
openstackgerritMerged openstack/kuryr: Replaces uuid.uuid4 with uuidutils.generate_uuid()  https://review.openstack.org/39481914:40
irenab_ivc_, so activate_vif blocks till timeout or status is ACTIVE?14:40
apuimedothanks irenab_ for the merge14:41
ivc_irenab_, i expect activate_vif to only 'show_port' once and just raise ResourceNotReady but it is up to the VIFDriver14:41
*** garyloug has joined #openstack-kuryr14:41
ivc_irenab_, https://review.openstack.org/#/c/400010/3/kuryr_kubernetes/controller/drivers/generic_vif.py@5914:42
*** hongbin has joined #openstack-kuryr14:42
mchiapperoapuimedo: ping14:42
irenab_ivc_,  got it, so pipeline will retry. Nice14:43
ivc_:)14:43
ivc_irenab_, so do you see the big picture now? :)14:44
*** irenab_ has quit IRC14:47
apuimedomchiappero: pong14:48
ivc_irenab_, i've replied to https://review.openstack.org/#/c/376044/3/kuryr_kubernetes/controller/handlers/vif.py@6214:51
apuimedojerms: ping14:53
jermsapuimedo: sup14:53
*** tonanhngo has joined #openstack-kuryr14:55
apuimedojerms: hongbin could use hearing what the storage usage of kubernetes is like, from openshift14:55
apuimedohe's working on fuxi (kuryr's storage backend)14:56
jermsi see14:56
*** tonanhngo has quit IRC14:57
*** gsagie has quit IRC14:57
* hongbin is new to the fuxi project14:57
hongbini just started to pick up fuxi, but let me know if there is anything i can help with14:58
mchiapperoapuimedo: I'm going to have meetings from now on, but I'm interested in an opinion on the above14:58
mchiapperothank you :)14:58
mchiapperoI'll check the chat later14:58
hongbinjerms: ^^14:58
limaohi vikasc14:58
jermsi must be missing something terribly obvious -- kube already talks to cinder14:58
limaovikasc: I have a workaround to let it work "# brctl setageing qbrXXX 0", after this, the containers can ping each other14:59
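For context, the workaround works because an ageing time of 0 disables MAC learning on a Linux bridge entirely, so qbr floods every frame like a hub instead of mislearning the subport MAC on the qvb side. A minimal reproduction, with the bridge name standing in for the real per-VM one:

    import subprocess

    # ageing 0 = no MAC learning: the bridge floods frames out of every port
    subprocess.check_call(['brctl', 'setageing', 'qbrXXXXXXXX', '0'])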
ivc_apuimedo, vikasc, shall we merge https://review.openstack.org/#/c/398324 ?15:01
*** hongbin has quit IRC15:10
apuimedo:P15:15
apuimedomchiappero: I'm afraid I miss "the above"15:15
apuimedojerms: it's about finding if there is any gap that needs to be covered. Currently you put annotations for volumes in the pod definitions. Then I suppose the K8s driver picks that up, right?15:17
jermspod only asks for  PV and size, and as of 1.4, something called a storageclass15:18
jermskube itself is configured with the storage backend info15:18
apuimedojerms: kubelet?15:18
apuimedoivc_: merged15:20
*** tonanhngo has joined #openstack-kuryr15:20
*** tonanhngo has quit IRC15:22
openstackgerritMerged openstack/kuryr-kubernetes: Default pod subnet driver and os-vif utils  https://review.openstack.org/39832415:24
openstackgerritMerged openstack/kuryr-kubernetes: Default pod security groups driver  https://review.openstack.org/39951815:26
*** yamamoto has quit IRC15:27
ivc_apuimedo, cool. thnx :)15:32
apuimedoanytime15:33
apuimedolmdaly: thanks for the new patch. Just posted the review15:35
apuimedomchiappero: I think it should be possible to have ipv4 and no ipv615:36
lmdalythanks apuimedo!15:36
apuimedoabout the binding assigning address, I think that is something that we should make libnetwork be able to disable on binding15:37
apuimedok8s uses it, docker bans it15:37
apuimedoso probably we should have libnetwork not pass ip address info to the binding15:37
apuimedoand the binding not attempting to set if none is passed15:37
*** tonanhngo has joined #openstack-kuryr15:38
*** tonanhngo has quit IRC15:40
*** yamamoto has joined #openstack-kuryr15:41
*** tonanhngo has joined #openstack-kuryr15:55
*** tonanhngo has quit IRC15:55
*** yamamoto has quit IRC15:59
*** kristianjf has joined #openstack-kuryr16:09
*** tonanhngo has joined #openstack-kuryr16:16
*** tonanhngo has quit IRC16:17
*** oanson has quit IRC16:17
*** yamamoto has joined #openstack-kuryr16:19
*** dimak has quit IRC16:23
*** diogogmt has joined #openstack-kuryr16:25
*** hongbin has joined #openstack-kuryr16:30
*** yamamoto has quit IRC16:31
*** yamamoto has joined #openstack-kuryr16:32
*** tonanhngo has joined #openstack-kuryr16:35
*** tonanhngo has quit IRC16:36
*** yamamoto_ has joined #openstack-kuryr16:43
*** yamamoto has quit IRC16:46
limaoapuimedo: vikasc: Hi,  ltomasbo and I checked with jlibosva in the openstack-neutron channel about the trunk port feature.  Here are some highlights: 1) Neutron only supports the trunk feature with ovs-fw (if we need sg for the subport)   2) We need to make sure the src mac is different for all the containers on the nested VM, because in the guest OS by default it's gonna have the mac of the parent port even though it goes out from the vlan interface. (For this16:48
limaobug: https://bugs.launchpad.net/neutron/+bug/1626010)16:48
openstackLaunchpad bug 1626010 in neutron "OVS Firewall cannot handle non unique MACs" [High,In progress] - Assigned to Jakub Libosvar (libosvar)16:48
*** yamamoto_ has quit IRC16:50
*** tonanhngo has joined #openstack-kuryr16:55
*** tonanhngo has quit IRC16:56
*** janki has quit IRC16:59
apuimedo:/17:01
apuimedolimao: mchiappero: lmdaly: this could be bad news for ipvlan17:02
limaoapuimedo: looks like only macvlan can work in my mind (if that bug not fixed)17:03
ltomasbowhy?17:03
apuimedoyes, sounds to me like that too17:04
apuimedoltomasbo: ipvlan always has the mac address of the link iface17:04
ltomasbowith ipvlan you can have different macs too17:04
apuimedoreally?17:04
limaoltomasbo: ohh17:04
apuimedoI thought it always reuses the mac of the parent device17:04
limaoI do not know this..17:04
ltomasboyes, we need to make the kuryr plugin in a way that the iface is dynamic17:04
*** huikang has joined #openstack-kuryr17:04
ltomasboI tried with ipvlan (lmdaly patch) over a different (hardcoded) iface, e.g., eth0.10117:05
ltomasboand it worked for me17:05
ltomasbothe only thing is to have that iface loaded dynamically, which needs some thinking17:05
ltomasbofor the ipvlan binding17:06
limaoltomasbo: the src mac will be different from eth0's in that case? (ipvlan on eth0.101)17:07
apuimedoltomasbo: what you mean is the one vlan per network17:07
ltomasboif you create the eth0.101 with a different mac, yes, it should be17:07
apuimedoeach vlan virtual device gets a different mac17:07
apuimedoand then, ipvlans on top17:07
apuimedoand it's not an issue then that the different ipvlans of the same vlan have the same mac, is that right?17:08
limaoI think we can have a try to see if it will get a different src mac in that case17:09
ltomasboif I understood the differences between ipvlan and macvlan17:09
ltomasboit is that with macvlan you differentiate by mac address, while with ipvlan you differentiate by ip, right?17:10
ltomasbobut does that mean the mac of the ipvlan needs to be the same?17:10
ltomasboI don't think so17:10
ltomasbodidn't try the complete approach, but what I tried with lmdaly ipvlan patch was to have a container attached to iface eth0.101 (with MAC 1)17:11
ltomasboand then change iface (manually) to eth0.102 (with MAC 2) and create another container17:12
ltomasboand they could reach their networks (I don't remember if I tried having them talk to each other)17:12
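A sketch of the layering ltomasbo tried: an ipvlan slave on top of a per-vlan subinterface, so the container traffic leaves with the subport's MAC. Interface and namespace names are assumptions:

    import subprocess

    def sh(cmd):
        subprocess.check_call(cmd.split())

    sh('ip link add link eth0 name eth0.101 type vlan id 101')      # subport VLAN, its own MAC
    sh('ip link add link eth0.101 name ipvl0 type ipvlan mode l2')  # slave shares eth0.101's MAC
    sh('ip link set ipvl0 netns container_ns')                      # container_ns must already exist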
*** huikang has quit IRC17:15
*** tonanhngo has joined #openstack-kuryr17:15
limaoltomasbo: then it looks like we need one local vlan (to get a different mac) per container in the ipvlan case17:16
ltomasboeach subport will have a different mac17:16
ltomasbowhen you create it17:16
ltomasbowe just need to use that one17:16
*** tonanhngo has quit IRC17:17
ltomasboI did a quick check with ovs-firewall:17:17
ltomasboping from namespace 1 to qrouter17:17
ltomasbo17:21:33.115816 fa:16:3e:06:af:7c > fa:16:3e:1d:4e:54, ethertype 802.1Q (0x8100), length 102: vlan 101, p 0, ethertype IPv4, 10.0.5.12 > 10.0.5.1: ICMP echo request, id 950, seq 2, length 6417:17
ltomasbo17:21:33.116105 fa:16:3e:1d:4e:54 > fa:16:3e:06:af:7c, ethertype 802.1Q (0x8100), length 102: vlan 101, p 0, ethertype IPv4, 10.0.5.1 > 10.0.5.12: ICMP echo reply, id 950, seq 2, length 6417:17
ltomasboso, vlan 101 and mac fa:16:3e:06:af:7c17:18
limaofa:16:3e:06:af:7c is your vlan 101 subport's mac, right?17:18
ltomasboand from namespace 2:17:18
limaook17:18
ltomasbo17:22:36.283519 fa:16:3e:a2:81:06 > fa:16:3e:1d:4e:54, ethertype 802.1Q (0x8100), length 102: vlan 102, p 0, ethertype IPv4, 10.0.5.10 > 10.0.5.1: ICMP echo request, id 956, seq 2, length 6417:18
ltomasbo17:22:36.284507 fa:16:3e:1d:4e:54 > fa:16:3e:a2:81:06, ethertype 802.1Q (0x8100), length 102: vlan 102, p 0, ethertype IPv4, 10.0.5.1 > 10.0.5.10: ICMP echo reply, id 956, seq 2, length 6417:18
ltomasboso, vlan 102, and mac  fa:16:3e:a2:81:0617:18
limaoget you17:19
ltomasboboth of them different from the vm mac fa:16:3e:90:03:1217:19
limaoin the vm-nested case, then why don't we just move the eth0.101 into the container?17:20
limao(since it will be one vlan for one container)17:20
ltomasboI guess so, for the 1 vlan per container should be ok17:21
limaothen maybe there is no need to create macvlan or ipvlan on top of the vlan interface in this case, in my mind17:21
ltomasbothe problem could be for the 1 vlan per network if ipvlan is used on top (I guess)17:22
limaoThen it would be a little bit similar to our allowed-address-pair case17:23
ltomasboyep, allowed-address-pair should work, true17:23
limaogreat, thanks ltomasbo! I'll go to sleep now. Zzzz... ;-)17:25
ltomasboI need to leave now too17:25
ltomasboI'll keep digging tomorrow!17:25
limaoltomasbo: see you and thanks for the help!17:25
ltomasboyou're welcome!17:25
*** limao has quit IRC17:30
*** tonanhngo has joined #openstack-kuryr17:42
mchiapperoI'll catch up with the chat later but meanwhile I'll try to shed some light on IPVLAN17:50
mchiapperobasically, functionality-wise IPVLAN and MACVLAN are the same, with the only difference that IPVLAN uses a single MAC address all the time, to comply with arp/spoofing settings on many setups17:52
mchiapperothe implementation is different though, so performance might be different too17:52
mchiapperonow, IPVLAN uses the master interface for egressing packets (if I'm not wrong the L2 header in the skb buffer is actually replaced before leaving the master interface)17:54
mchiapperothe slave selection for incoming packets is obviously made by looking at the IP addresses (by means of a hashtable)17:55
mchiapperoso, the mac address from the slave interfaces is never actually used, and by default is equal to the one belonging to the master (that's actually the only one you can ever see)17:56
*** diogogmt has quit IRC17:56
mchiapperothe problem: 1) neutron doesn't allow having multiple neutron ports with the same mac address 2) there is no support for changing the mac address in the kernel IPVLAN driver17:57
mchiapperohowever (2) can be solved17:58
mchiapperonot sure about (1) (I guess no)17:58
mchiapperois anyone still tuned? :)17:58
*** diogogmt has joined #openstack-kuryr17:59
*** lmdaly has quit IRC17:59
mchiapperoof course we can ignore the mac address in the "dangling" neutron port, but it's far from being a good solution18:00
mchiapperotechnically speaking, adding support for changing the mac address for IPVLAN slave devices is just a few lines of code; it could be accepted upstream, or it could not. But I see someone might point out that it could turn out to be confusing18:03
mchiapperoeven though by default it could be the same as the master, and if you are changing it, it is because you know what you are doing18:04
mchiapperoso...18:04
mchiapperoregarding kuryr-lib assigning IP addresses, yes, I think we can provide no IPs (good), a flag (less good), or an additional code path to be invoked by k-k8s only18:06
*** huikang has joined #openstack-kuryr18:16
*** huikang has quit IRC18:20
*** garyloug has quit IRC18:25
*** huikang has joined #openstack-kuryr18:27
*** oanson has joined #openstack-kuryr18:28
openstackgerritIlya Chukhnakov proposed openstack/kuryr-kubernetes: Generic VIF controller driver  https://review.openstack.org/40001018:34
openstackgerritIlya Chukhnakov proposed openstack/kuryr-kubernetes: Controller side of pods' port/VIF binding  https://review.openstack.org/37604418:34
*** diogogmt has quit IRC18:43
*** diogogmt has joined #openstack-kuryr18:45
*** oanson has quit IRC18:47
*** jgriffith has quit IRC18:51
*** diogogmt has quit IRC18:55
*** diogogmt has joined #openstack-kuryr19:00
*** huikang has quit IRC19:13
*** huikang has joined #openstack-kuryr19:25
*** huikang has quit IRC19:36
*** oanson has joined #openstack-kuryr19:57
*** diogogmt has quit IRC20:00
*** huikang has joined #openstack-kuryr20:08
*** ltomasbo has quit IRC20:19
*** huikang has quit IRC20:22
*** ajo has quit IRC20:25
*** ltomasbo has joined #openstack-kuryr20:25
*** ajo has joined #openstack-kuryr20:25
*** huikang has joined #openstack-kuryr20:31
*** huikang has quit IRC20:32
*** huikang has joined #openstack-kuryr20:33
*** diogogmt has joined #openstack-kuryr20:37
*** oanson has quit IRC20:53
*** oanson has joined #openstack-kuryr20:54
*** jgriffith has joined #openstack-kuryr21:13
*** jgriffith has quit IRC21:17
*** jgriffith has joined #openstack-kuryr21:20
*** oanson has quit IRC21:24
*** jgriffith has quit IRC21:29
*** huikang has quit IRC21:36
*** huikang has joined #openstack-kuryr21:37
*** huikang has quit IRC21:40
*** jgriffith has joined #openstack-kuryr21:44
*** jgriffith has quit IRC22:06
*** jgriffith_ has joined #openstack-kuryr22:07
*** jgriffith_ is now known as jgriffith22:13
openstackgerritHongbin Lu proposed openstack/fuxi: Fix the installation guide  https://review.openstack.org/39929622:48
*** hongbin has quit IRC23:51

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!