*** banix has quit IRC | 00:05 | |
*** fawadkhaliq has quit IRC | 00:07 | |
*** fawadkhaliq has joined #openstack-kuryr | 00:08 | |
*** fawadkhaliq has quit IRC | 00:28 | |
*** fawadkhaliq has joined #openstack-kuryr | 00:29 | |
*** fawadkhaliq has quit IRC | 00:40 | |
*** shashank_hegde has quit IRC | 01:12 | |
*** salv-orlando has joined #openstack-kuryr | 01:58 | |
*** yamamot__ has joined #openstack-kuryr | 02:04 | |
*** salv-orlando has quit IRC | 02:07 | |
*** yuanying has quit IRC | 02:12 | |
*** yuanying has joined #openstack-kuryr | 02:12 | |
*** wanghua has joined #openstack-kuryr | 02:13 | |
*** shashank_hegde has joined #openstack-kuryr | 02:48 | |
*** yuanying has quit IRC | 02:52 | |
*** banix has joined #openstack-kuryr | 03:03 | |
*** yamamot__ has quit IRC | 03:10 | |
*** banix has quit IRC | 03:36 | |
*** salv-orlando has joined #openstack-kuryr | 03:38 | |
*** yuanying has joined #openstack-kuryr | 03:50 | |
*** salv-orlando has quit IRC | 03:51 | |
*** yamamot__ has joined #openstack-kuryr | 04:04 | |
*** banix has joined #openstack-kuryr | 04:14 | |
*** oanson has joined #openstack-kuryr | 04:15 | |
*** shashank_hegde has quit IRC | 04:30 | |
*** shashank_hegde has joined #openstack-kuryr | 04:42 | |
*** shashank_hegde has quit IRC | 04:49 | |
*** salv-orlando has joined #openstack-kuryr | 04:51 | |
*** banix has quit IRC | 04:56 | |
*** salv-orlando has quit IRC | 05:03 | |
*** shashank_hegde has joined #openstack-kuryr | 05:31 | |
*** mestery has quit IRC | 05:32 | |
*** mestery has joined #openstack-kuryr | 05:38 | |
*** shashank_hegde has quit IRC | 05:38 | |
*** mestery has quit IRC | 05:40 | |
*** irenab has joined #openstack-kuryr | 05:41 | |
*** mestery has joined #openstack-kuryr | 05:43 | |
*** tfukushima has joined #openstack-kuryr | 05:45 | |
*** fawadkhaliq has joined #openstack-kuryr | 05:49 | |
*** salv-orlando has joined #openstack-kuryr | 05:56 | |
*** oanson has quit IRC | 06:01 | |
*** shashank_hegde has joined #openstack-kuryr | 06:12 | |
*** fawadkhaliq has quit IRC | 06:26 | |
*** fawadkhaliq has joined #openstack-kuryr | 06:29 | |
*** salv-orlando has quit IRC | 06:54 | |
*** salv-orlando has joined #openstack-kuryr | 07:15 | |
*** shashank_hegde has quit IRC | 07:25 | |
*** shashank_hegde has joined #openstack-kuryr | 07:27 | |
*** salv-orl_ has joined #openstack-kuryr | 07:34 | |
*** salv-orlando has quit IRC | 07:37 | |
*** tfukushima has quit IRC | 07:43 | |
*** shashank_hegde has quit IRC | 07:46 | |
*** tfukushima has joined #openstack-kuryr | 07:50 | |
*** fawadkhaliq has quit IRC | 08:01 | |
openstackgerrit | Taku Fukushima proposed openstack/kuryr: Make logging level configurable https://review.openstack.org/256249 | 08:16 |
*** dingboopt has quit IRC | 08:27 | |
openstackgerrit | Taku Fukushima proposed openstack/kuryr: Make logging level configurable https://review.openstack.org/256249 | 08:44 |
*** openstackgerrit has quit IRC | 09:17 | |
*** openstackgerrit has joined #openstack-kuryr | 09:17 | |
*** oanson has joined #openstack-kuryr | 09:56 | |
*** openstack has joined #openstack-kuryr | 09:58 | |
*** salv-orl_ has quit IRC | 10:06 | |
*** salv-orlando has joined #openstack-kuryr | 10:15 | |
*** yamamot__ has quit IRC | 10:25 | |
*** tfukushima has quit IRC | 10:43 | |
*** salv-orlando has quit IRC | 10:48 | |
*** salv-orlando has joined #openstack-kuryr | 10:48 | |
*** salv-orlando has quit IRC | 10:53 | |
*** banix has joined #openstack-kuryr | 11:02 | |
*** banix has quit IRC | 11:10 | |
*** yamamoto has joined #openstack-kuryr | 11:19 | |
*** yamamoto has quit IRC | 11:31 | |
*** yamamoto has joined #openstack-kuryr | 11:34 | |
*** banix has joined #openstack-kuryr | 11:36 | |
*** yamamoto has quit IRC | 11:38 | |
*** apuimedo has joined #openstack-kuryr | 11:42 | |
openstackgerrit | Merged openstack/kuryr: Make logging level configurable https://review.openstack.org/256249 | 11:43 |
*** banix has quit IRC | 11:44 | |
*** yamamoto has joined #openstack-kuryr | 11:47 | |
*** yamamoto has quit IRC | 11:48 | |
*** openstack has quit IRC | 12:04 | |
*** openstack has joined #openstack-kuryr | 12:05 | |
*** yamamoto has joined #openstack-kuryr | 12:15 | |
*** yamamoto has quit IRC | 12:36 | |
*** yamamoto has joined #openstack-kuryr | 12:38 | |
*** salv-orlando has joined #openstack-kuryr | 12:39 | |
*** huats has joined #openstack-kuryr | 12:50 | |
huats | Hi | 12:50 |
huats | I've been playing (or at least trying to play) with Kuryr a bit (inside a Mitaka devstack so far). So far I haven't been able to get a container and an instance to communicate when both are launched on the same network (created by Docker and properly visible in Neutron) | 12:53 |
huats | And I have noticed that a router (in the OpenStack sense) connected to that Neutron network cannot ping the container | 12:54 |
huats | Do you have any idea why ? | 12:54 |
huats | (the security group has been modified to allow ping) | 12:54 |
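For context, a minimal reproduction along these lines might look as follows; this is a sketch only, assuming the kuryr libnetwork driver and IPAM driver are both registered under the name "kuryr", and the subnet/gateway/image/flavor values are illustrative placeholders (only the network name "foo" and the 10.10.1.x range come from this session):

    # Create a Docker network backed by Neutron through kuryr
    docker network create --driver kuryr --ipam-driver kuryr \
        --subnet 10.10.1.0/24 --gateway 10.10.1.1 foo

    # Start a container on it
    docker run -it --net foo --name c1 busybox sh

    # Boot a VM on the Neutron network kuryr created for "foo"
    # (kuryr names it kuryr-net-<id prefix>)
    NET_ID=$(neutron net-list | awk '/kuryr-net/ {print $2; exit}')
    nova boot --image cirros --flavor m1.tiny --nic net-id="$NET_ID" vm1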
*** irenab has quit IRC | 13:04 | |
apuimedo | huats: which kuryr version are you using? Is it all from devstack tip of the branch? | 13:19 |
huats | apuimedo: yep. I have fetched it this morning | 13:20 |
apuimedo | huats: did you create the container on a pre-existing network, or did you first create the container network and then launch the VM on it? | 13:21 |
huats | I first created the container network, using the docker network command, then I launched instances and containers on it | 13:22 |
apuimedo | what's the output of neutron net-list? | 13:22 |
huats | It is correct: the container network is listed | 13:23 |
huats | (I can paste it but I am not sure it is interesting) | 13:23 |
apuimedo | huats: what's the name of the network? | 13:24 |
huats | foo | 13:24 |
huats | :) | 13:24 |
huats | no sorry | 13:25 |
huats | (it was the docker name...) | 13:25 |
huats | kuryr-net-6ce6cd73 | 13:25 |
huats | is the name of the network | 13:25 |
apuimedo | very well | 13:26 |
apuimedo | if you run two containers on that docker network, can they ping each other? | 13:26 |
huats | and 6ce6cd73f81f is the ID of the network in the Docker "space" | 13:26 |
huats | yes: 2 containers can ping each other, and 2 instances can ping each other | 13:27 |
huats | but there is no communication between a container and an instance | 13:27 |
apuimedo | are the instances and containers on the same machine? Are you using ovs? | 13:27 |
huats | I have a single node devstack yes | 13:27 |
huats | so single machine and ovs | 13:27 |
apuimedo | ok | 13:28 |
huats | when I look at the neutron ports associated with the containers, they are marked "status | DOWN" | 13:28 |
huats | I am not sure it is relevant | 13:28 |
apuimedo | I have to say I never tried the pod to vm communication in ovs, I usually just run it with MidoNet | 13:29 |
huats | I understand :) | 13:29 |
apuimedo | gsagie: some idea of what it could be? | 13:29 |
apuimedo | huats: if you launch another instance | 13:29 |
apuimedo | on a separate network | 13:29 |
apuimedo | and you add a router between the two networks | 13:29 |
apuimedo | would you be able to ping between the new instance and the containers? | 13:30 |
apuimedo | also, please, verify that the containers veth devices are properly added to the ovs integration bridge | 13:30 |
huats | I already have a router connected to that network and I cannot ping between the container and the router (I have put myself in the router namespace to do that) | 13:30 |
huats | but I'll try with a different network | 13:31 |
apuimedo | huats: to me it sounds like the containers may not have been picked up by the neutron agent | 13:31 |
huats | they get the correct port IP, that is what bothers me... | 13:31 |
apuimedo | huats: look for the veth device on the host | 13:32 |
apuimedo | if it is put in the ovs bridge | 13:32 |
apuimedo | cause the IP is a matter of what kuryr reports | 13:32 |
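A quick way to do that check (a sketch; nothing here is kuryr-specific beyond the external_ids it sets):

    # Is the kuryr-created veth actually a port on the integration bridge?
    sudo ovs-vsctl list-ports br-int | grep -- '-veth'

    # And does it carry the external_ids kuryr is supposed to set?
    sudo ovs-vsctl --columns=name,external_ids list Interface | grep -B1 owner=kuryr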
huats | same issue when I try the separated network | 13:34 |
huats | I am having a look at the bridge | 13:34 |
*** WANGFeng has joined #openstack-kuryr | 13:34 | |
huats | apuimedo: stupid question probably, but how can I know the name of the veth of the container? | 13:35 |
huats | I do have 2 -veth ports in the br-int bridge and since I have 2 containers I assume it is correct | 13:37 |
apuimedo | huats: are they correctly tagged with the neutron port id? | 13:38 |
apuimedo | huats: usually you can tell the right pair by the index of the link device | 13:38 |
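One generic way to match the pair by index (a sketch; "c1" stands for your container's name or ID, and the interface inside the container is assumed to be eth0):

    # Inside the container, eth0's iflink is the ifindex of its host-side peer
    docker exec c1 cat /sys/class/net/eth0/iflink
    # -> prints e.g. 43

    # On the host, the interface with that index is the other end of the veth,
    # i.e. the *-veth device that should be plugged into br-int
    ip -o link | grep '^43:'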
huats | I am not sure I understand what you mean | 13:40 |
huats | they don't have any tag | 13:41 |
huats | while the others have | 13:41 |
apuimedo | that's wrong then | 13:41 |
apuimedo | "external_ids:iface-id" should be set to the neutron port uuid | 13:42 |
apuimedo | it should also have external_ids:owner set to kuryr | 13:42 |
huats | where can I see the "external_ids:iface-id" that you are referring to? | 13:45 |
apuimedo | ovs-vsctl --columns=name,external-ids list Interface | 13:46 |
*** tfukushima has joined #openstack-kuryr | 13:46 | |
huats | ok so I was wrong | 13:47 |
huats | name : "e8252451-veth" | 13:47 |
huats | external_ids : {attached-mac="12:f8:3f:38:e3:b6", iface-id="f237a2fd-4876-4e66-a03f-c538bdb12296", iface-status=active, owner=kuryr, vm-uuid="e82524513f03064595c73d2ed65e7f7af2a0b286fe3aa0494c888c4c6a1fb58f"} | 13:47 |
apuimedo | is the iface-id correct? | 13:48 |
huats | the mac is not correct but the iface-id is correct | 13:49 |
apuimedo | the mac doesn't match the mac inside the container? | 13:51 |
huats | yep | 13:51 |
huats | while the mac on neutron and inside the container are the same, it is different from the one noted on ovs | 13:52 |
*** banix has joined #openstack-kuryr | 13:54 | |
apuimedo | that's fine | 13:57 |
apuimedo | do you see the interface being picked up by the neutron agent in the logs? | 13:57 |
huats | apuimedo: I am sorry but I don't know what I need to look for | 14:02 |
apuimedo | ajo: you around? | 14:03 |
huats | I do have that on my neutron-agent logs | 14:03 |
huats | DEBUG neutron.agent.linux.async_process [-] Output received from [ovsdb-client monitor Interface name,ofport,external_ids --format=json]: {"data":[["fae03880-f973-4928-8197-5c59958827a3","old",null,null,["map",[]]],["","new","e8252451-veth",10,["map",[["attached-mac","12:f8:3f:38:e3:b6"],["iface-id","f237a2fd-4876-4e66-a03f-c538bdb12296"],["iface-status","active"],["owner","kuryr"],["vm-uuid"," | 14:03 |
huats | e82524513f03064595c73d2ed65e7f7af2a0b286fe3aa0494c888c4c6a1fb58f"]]]]],"headings":["row","action","name","ofport","external_ids"]} _read_stdout /opt/stack/neutron/neutron/agent/linux/async_process.py:237 | 14:03 |
huats | which is the right interface | 14:03 |
ajo | apuimedo, pong, on neutron meeting #openstack-meeting | 14:09 |
apuimedo | huats: that looks like it got picked up | 14:11 |
apuimedo | ajo: we are having some issues | 14:11 |
apuimedo | with neutron agent / kuryr | 14:12 |
*** oanson has quit IRC | 14:13 | |
ajo | apuimedo, what happened? | 14:14 |
ajo | ovs-agent, lb-agent ? | 14:14 |
apuimedo | ovs | 14:14 |
huats | ovs-agent (since I think it is my case that we are talking about :)) | 14:14 |
apuimedo | ajo: for some reason, we set up the right tags in ovs-vsctl | 14:15 |
apuimedo | but huats env can't get the containers ping the VMs in the same network | 14:15 |
apuimedo | I wonder if there's been some recent change that is breaking it | 14:15 |
huats | I am using a devstack from today | 14:15 |
huats | I will be more than happy to help investigate :) | 14:17 |
apuimedo | huats: I'll try to reproduce tomorrow morning in my environment, today I have to work on some containers | 14:22 |
huats | ok | 14:22 |
huats | apuimedo: i won't be able to talk tomorrow but I will be connected | 14:22 |
apuimedo | ok | 14:22 |
ajo | apuimedo, hmm, there was a change to introduce anti mac spoofing protection in openflow | 14:28 |
ajo | apuimedo, are you using the right mac address? | 14:28 |
ajo | apuimedo, ovs-ofctl dump-flows br-int will show | 14:29 |
ajo | it seems that iptables/ebtables was unable to provide all the protection we needed | 14:29 |
apuimedo | ajo: that is most likely it then | 14:29 |
apuimedo | ajo: we are setting the mac address of the container side of the veth | 14:30 |
ajo | apuimedo, and you're using the neutron-db mac address? that may work | 14:30 |
apuimedo | I think we do update the mac address | 14:31 |
apuimedo | in neutron | 14:31 |
apuimedo | let me verify | 14:31 |
ajo | hmm | 14:31 |
ajo | may be that does not trigger a port update, and subsequent change of mac address in the openflow tables ? | 14:32 |
*** irenab has joined #openstack-kuryr | 14:32 | |
apuimedo | ajo: we put it in the create_port in the field mac_address | 14:33 |
huats | ajo: apuimedo if I can help ... | 14:33 |
ajo | hmm, at create port | 14:33 |
ajo | that may work | 14:33 |
ajo | if it's at create... | 14:33 |
ajo | apuimedo, : https://review.openstack.org/#/c/299022/ | 14:33 |
ajo | can you do a ovs-ofctl dump-flows br-int | 14:33 |
apuimedo | but we do not pass it when we do update_port | 14:33 |
apuimedo | huats: ^^ | 14:33 |
ajo | and check if the rules for arp spoofing are good? | 14:33 |
huats | ajo: I have no idea how to check those rules | 14:34 |
ajo | mac spoofing sorry | 14:34 |
apuimedo | ajo: thanks for the hint about anti spoofing | 14:34 |
ajo | huats, paste me dump-flows of br-int | 14:34 |
ajo | and I can tell you which ones | 14:34 |
ajo | apuimedo, np, I'm glad to be of any help | 14:35 |
huats | http://pastebin.com/Fa8dMJHQ | 14:35 |
apuimedo | :-) | 14:35 |
huats | ajo: ^ | 14:35 |
ajo | huats, you don't have such rules | 14:36 |
ajo | your neutron git probably is not updated with that yet | 14:36 |
ajo | you would see some dl_src= | 14:36 |
ajo | filters otherwise | 14:36 |
ajo | sorry | 14:36 |
ajo | you have some, let me check | 14:36 |
* ajo scratches head | 14:37 | |
ajo | right at the end :D | 14:37 |
ajo | table=25 | 14:37 |
irenab | ajo: you are everywhere :-) | 14:37 |
ajo | cookie=0x9b4e8002b1c6084d, duration=11704.644s, table=25, n_packets=420, n_bytes=46586, idle_age=2778, priority=2,in_port=7,dl_src=fa:16:3e:b0:fe:f7 actions=NORMAL | 14:37 |
ajo | cookie=0x9b4e8002b1c6084d, duration=8836.155s, table=25, n_packets=25, n_bytes=2402, idle_age=8653, priority=2,in_port=11,dl_src=fa:16:3e:33:d8:fb actions=NORMAL | 14:37 |
ajo | cookie=0x9b4e8002b1c6084d, duration=3639.312s, table=25, n_packets=49, n_bytes=4931, idle_age=3584, priority=2,in_port=15,dl_src=fa:16:3e:43:51:32 actions=NORMAL | 14:37 |
ajo | irenab, lol | 14:37 |
huats | I have just launched the devstack this morning so it is weird that neutron is not updated :( | 14:37 |
ajo | huats, it is | 14:38 |
ajo | huats, I didn't see it | 14:38 |
huats | sorry I typed before I saw the end of your message | 14:38 |
ajo | you can relate to the port by the ip address on the lines above (table=24) | 14:38 |
ajo | with arp_spa=10.10.1.x | 14:38 |
ajo | check the ports mac address with neutron port-list or neutron port-show | 14:38 |
ajo | and see if they match with those | 14:38 |
ajo | dl_src=fa:16:3e:43:51:32 for 10.10.2.3 | 14:39 |
ajo | dl_src=fa:16:3e:33:d8:fb for 10.10.1.7 | 14:39 |
ajo | dl_src=fa:16:3e:b0:fe:f7 for 10.10.1.4 | 14:39 |
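A quick cross-check along those lines (a sketch; table 25 is where the allowed-MAC entries show up in this paste):

    # What MAC does Neutron have for each port?
    neutron port-list -c id -c mac_address -c fixed_ips

    # What MACs do the spoofing rules actually allow?
    sudo ovs-ofctl dump-flows br-int table=25 | grep dl_src

    # Every port with port_security enabled should have a dl_src entry
    # matching its Neutron MAC for its in_port.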
huats | actually those are the mac addresses for the instances | 14:39 |
huats | (I have 3 of them) | 14:39 |
huats | there is no mention of the one for the containers | 14:40 |
ajo | I see packets that matched (n_packets and n_bytes counters are not 0) | 14:40 |
ajo | huats, what do you mean? | 14:40 |
ajo | you have nested ports? | 14:40 |
huats | I have currently 3 VM instances and 2 containers on the same network (made by docker with kuryr) | 14:41 |
ajo | huats, apuimedo I mean, containers in instances? | 14:41 |
huats | ajo: no, the containers are run with docker run directly | 14:41 |
apuimedo | ajo: no, containers and instances side-by-side in the same network | 14:41 |
ajo | ahm | 14:41 |
ajo | those entries will not happen if port_security for those ports is disabled | 14:41 |
huats | port_security is enabled for the containers ports | 14:42 |
huats | http://pastebin.com/bn1XsuxK | 14:42 |
huats | (here is an example of port for a container) | 14:42 |
ajo | and is it plugged ? | 14:43 |
ajo | or maybe the agent is not properly detecting the port? | 14:43 |
ajo | when you plug it to ovs and set the info ? | 14:44 |
huats | I haven't done anything particular :) | 14:44 |
huats | I am not sure what you mean by plug it to ovs | 14:44 |
apuimedo | maybe it is not marked as plugged? | 14:44 |
ajo | this is how I did it the last time I tried manually: | 14:45 |
ajo | sudo ovs-vsctl -- --may-exist add-port br-int $MGMT_PORT_DEV -- set Interface $MGMT_PORT_DEV type=internal -- set Interface $MGMT_PORT_DEV external-ids:iface-status=active -- set Interface $MGMT_PORT_DEV external-ids:attached-mac=$MGMT_PORT_MAC -- set Interface $MGMT_PORT_DEV external-ids:iface-id=$MGMT_PORT_ID -- set Interface $MGMT_PORT_DEV mac=\"$MGMT_PORT_MAC\" | 14:45 |
apuimedo | the agent has log entries for the port, right huats (I think you pasted it above) | 14:45 |
ajo | hmm, I can't recall any other important change, but there should be something that broke you | 14:46 |
huats | apuimedo: I think it has some entry about it | 14:46 |
huats | DEBUG neutron.agent.linux.async_process [-] Output received from [ovsdb-client monitor Interface name,ofport,external_ids --format=json]: {"data":[["fae03880-f973-4928-8197-5c59958827a3","old",null,null,["map",[]]],["","new","e8252451-veth",10,["map",[["attached-mac","12:f8:3f:38:e3:b6"],["iface-id","f237a2fd-4876-4e66-a03f-c538bdb12296"],["iface-status","active"],["owner","kuryr"],["vm-uuid"," | 14:47 |
huats | e82524513f03064595c73d2ed65e7f7af2a0b286fe3aa0494c888c4c6a1fb58f"]]]]],"headings":["row","action","name","ofport","external_ids"]} _read_stdout /opt/stack/neutron/neutron/agent/linux/async_process.py:237 | 14:47 |
huats | It is what I have in my logs | 14:47 |
huats | Is it what you mean ? | 14:47 |
ajo | it's being set correctly | 14:47 |
ajo | and does attached-mac match the port mac? | 14:48 |
ajo | and the neutron port-show ? | 14:48 |
apuimedo | huats: when you said above that you saw "status DOWN" where did you see that? | 14:48 |
ajo | huats, if you can, please paste me a bigger ovs-agent log to check | 14:48 |
huats | apuimedo: it was on the neutron port-show for the status DOWN | 14:48 |
huats | ajo : while the mac on neutron and inside the container are the same, it is different from the one noted on ovs | 14:49 |
apuimedo | ajo: huats: so probably the issue is this "DOWN" state | 14:49 |
ajo | huats, the "attached-mac" is wrong? | 14:49 |
huats | ajo yes | 14:49 |
ajo | huats, ok, you probably need to make sure the "vif driver" sets the right attached-mac | 14:50 |
apuimedo | on ovs the mac should be the one inside the VM/container, not the tap mac address, right ajo ? | 14:50 |
ajo | the same one you want to use in the instance | 14:50 |
apuimedo | huats: and that's not the one you see? | 14:50 |
ajo | I thought the tap mac, and the one inside the container are always the same | 14:50 |
huats | I have : | 14:50 |
huats | name : "e8252451-veth" | 14:50 |
huats | external_ids : {attached-mac="12:f8:3f:38:e3:b6", iface-id="f237a2fd-4876-4e66-a03f-c538bdb12296", iface-status=active, owner=kuryr, vm-uuid="e82524513f03064595c73d2ed65e7f7af2a0b286fe3aa0494c888c4c6a1fb58f"} | 14:50 |
apuimedo | ajo: they slightly differ IIRC | 14:50 |
huats | for the output of ovs-vsctl --columns=name,external-ids list Interface | 14:51 |
huats | but when I do an ifconfig I got fa:16:3e:94:f2:bd from inside the container | 14:52 |
apuimedo | ajo: """To deal with this, libvirt will set all guest TAP devices so that they | 14:53 |
apuimedo | have a MAC address with 0xFE as the first byte. The real physical NIC | 14:53 |
apuimedo | added to the bridge is thus guaranteed to have a smaller MAC address, | 14:53 |
apuimedo | and so the bridge will permanently use the MAC address of the physical | 14:53 |
apuimedo | NIC, which is what we want""" | 14:53 |
ajo | what? | 14:53 |
ajo | :) | 14:53 |
huats | which is the same that I got from the neutron port-show (| mac_address | fa:16:3e:94:f2:bd ) | 14:53 |
apuimedo | in libvirt the mac address of the tap starts with 0xFE | 14:53 |
apuimedo | (If they did not change it) | 14:54 |
ajo | I think they change it to have the same | 14:54 |
ajo | but with veths... | 14:54 |
ajo | it's a tap in that case | 14:54 |
ajo | but for veths, both sides have different macs? | 14:54 |
apuimedo | huats: try please to update the external_ids attached-mac to be the one inside the container | 14:54 |
apuimedo | ajo: yes, in veths the macs are different | 14:54 |
ajo | hmm | 14:54 |
ajo | in that case | 14:54 |
ajo | the "attached-mac" should be whatever it is on the neutron database | 14:55 |
ajo | and inside the container, whatever it is in the neutron database | 14:55 |
ajo | and I suspect, the other end of the veth (br-int side), that does not matter | 14:55 |
huats | apuimedo: how can I change the external_ids attached-mac ? | 14:55 |
ajo | try | 14:56 |
ajo | ovs-vsctl set Interface br-int-side-device external-ids:attached-mac=$your_mac | 14:56 |
ajo | and then restart the ovs-agent | 14:56 |
ajo | to make sure the port is picked up | 14:56 |
huats | ajo: I have to change the br-int-side-device to e8252451-veth right ? | 14:58 |
ajo | huats, whatever you see on ovs-vsctl | 14:59 |
huats | ajo yeah :) | 14:59 |
huats | I just wanted to be sure :) | 14:59 |
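With the values pasted earlier in this session, that comes down to something like the following (the device name and MAC are the ones huats reported; adjust them to your own environment):

    # Use the MAC of the Neutron port / container side, not the host-side veth MAC
    sudo ovs-vsctl set Interface e8252451-veth \
        external-ids:attached-mac='fa:16:3e:94:f2:bd'

    # Then restart the neutron OVS agent (in a Mitaka devstack that typically
    # means restarting it in the q-agt screen session) so the port is re-processed.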
huats | ajo : apuimedo : it works ! | 15:04 |
apuimedo | :-D | 15:05 |
apuimedo | huats: so the problem was that the attached-mac we had in ovs was the one for the host side of the veth pair instead of the container side, is that right? | 15:05 |
apuimedo | ajo: thanks a lot for giving a hand! | 15:06 |
huats | looks like it ! | 15:06 |
apuimedo | very well. I guess we'll have to patch that :-) | 15:07 |
apuimedo | huats: do you mind making a bug report? | 15:07 |
huats | apuimedo: I'll be more than happy to do it! | 15:09 |
ajo | greaat :) | 15:09 |
ajo | apuimedo, you're welcome :D | 15:10 |
apuimedo | thanks huats for finding this and helping with the troubleshooting | 15:10 |
apuimedo | it is much appreciated | 15:10 |
huats | apuimedo: and I would be happy to try to patch it by myself (if I have some guidance...) | 15:10 |
huats | apuimedo: of course ! | 15:10 |
huats | thank you apuimedo and ajo for the troubleshooting | 15:11 |
huats | apuimedo: I'll be in Austin so it might help :) | 15:11 |
apuimedo | huats: looking forward to meeting you then | 15:12 |
apuimedo | I'll be in all the kuryr sessions :-) | 15:12 |
huats | apuimedo: I report it against kuryr right ? | 15:12 |
apuimedo | huats: indeed | 15:12 |
*** openstackgerrit has quit IRC | 15:18 | |
*** openstackgerrit has joined #openstack-kuryr | 15:18 | |
*** yamamoto has quit IRC | 15:21 | |
huats | apuimedo: is there anything you want added: https://bugs.launchpad.net/kuryr/+bug/1569412 | 15:21 |
openstack | Launchpad bug 1569412 in kuryr "Wrong mac affected to the external-ids:attached-mac" [Undecided,New] | 15:21 |
huats | :) | 15:21 |
*** yamamoto has joined #openstack-kuryr | 15:22 | |
*** salv-orlando has quit IRC | 15:22 | |
huats | what do I need to relaunch if I want changes to /usr/libexec/kuryr/ovs to be taken into consideration? | 15:28 |
apuimedo | huats: ? | 15:29 |
huats | I'll try to do some patch to fix it | 15:29 |
apuimedo | huats: you don't need to relaunch, new containers should get the changes in /usr/libexec/kuryr/ovs immediately | 15:30 |
huats | ok | 15:30 |
huats | :) | 15:30 |
huats | thanks | 15:30 |
apuimedo | since they are invoked via a process execution | 15:30 |
apuimedo | huats: wow! Thanks even more, if you send a patch :-) | 15:30 |
huats | apuimedo: can you give me a little more information about the execution context of /usr/libexec/kuryr/ovs? | 15:45 |
apuimedo | sure | 15:46 |
huats | so that I know what I can request... | 15:46 |
apuimedo | did you see kuryr/kuryr/binding.py | 15:46 |
apuimedo | ? | 15:46 |
huats | yep | 15:47 |
*** salv-orlando has joined #openstack-kuryr | 15:48 | |
apuimedo | in port_bind processutils.execute | 15:49 |
apuimedo | you will see what is passed to the binding script | 15:49 |
huats | ok | 15:50 |
*** yamamoto has quit IRC | 15:51 | |
huats | the thing is that we need to be able to get the mac from /usr/libexec/kuryr/ovs, and I am not sure (yet) how to get it by requesting neutron | 15:51 |
huats | the thing that is currently used is broken since it is not the mac from the container | 15:52 |
huats | (in my understanding) | 15:52 |
apuimedo | huats: well, what I can see here is that supposedly, the mac address that is passed is the one from the neutron port | 15:52 |
apuimedo | which is exactly what it should be, though not what happened on your env | 15:53 |
huats | nope | 15:53 |
apuimedo | huats: oh, I see! | 15:53 |
huats | I have just logged what is passed to that script | 15:53 |
huats | plugging veth 06c5d65f-veth (Neutron port c1d20f27-a02c-41a0-8fc2-8b30f339ef01)... | 15:53 |
apuimedo | in usr/libexec/kuryr/ovs | 15:53 |
huats | there is no mac passed :) | 15:54 |
apuimedo | try replacing $mac with $4 | 15:54 |
*** tfukushima has quit IRC | 15:54 | |
apuimedo | and launch another container | 15:54 |
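So the change being tried is, roughly, in /usr/libexec/kuryr/ovs: set attached-mac from the MAC kuryr passes in (the fourth positional argument, per the discussion above) instead of an unset $mac. A minimal sketch, where $VETH stands in for whatever variable the script actually uses for the host-side veth name:

    # $4 is the Neutron port's MAC address as passed to the binding script
    MAC="$4"

    sudo ovs-vsctl set Interface "$VETH" external-ids:attached-mac="$MAC"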
*** fawadkhaliq has joined #openstack-kuryr | 15:55 | |
*** yamamoto has joined #openstack-kuryr | 16:02 | |
*** lezbar has joined #openstack-kuryr | 16:02 | |
*** WANGFeng has quit IRC | 16:03 | |
huats | apuimedo: ok | 16:09 |
huats | so replacing $mac with $4 fixes the issue of different macs | 16:09 |
huats | But | 16:09 |
huats | the container is not able to ping the vm | 16:09 |
huats | :( | 16:09 |
*** fawadkhaliq has quit IRC | 16:09 | |
huats | I am trying to figure out the differences | 16:10 |
huats | I will be really happy to work on that | 16:10 |
huats | I guess there is something more | 16:11 |
huats | since I just restarted the neutron-agent and the ping started to work immediately | 16:11 |
huats | or something that is not properly seen without that restart of the agent | 16:12 |
apuimedo | that's very odd | 16:13 |
*** fawadkhaliq has joined #openstack-kuryr | 16:13 | |
apuimedo | ajo: any idea about ^^ | 16:13 |
*** shashank_hegde has joined #openstack-kuryr | 16:16 | |
huats | apuimedo: ajo: and clearly changing /usr/libexec/kuryr/ovs is needed, otherwise even after restarting the neutron-agent the communication is not working | 16:20 |
huats | so at least we have one step :) | 16:20 |
huats | now we need the second one :) | 16:20 |
huats | and to figure out why the neutron-agent needs to be restarted | 16:21 |
apuimedo | huats: yeah... That's the part I'm the most concerned about | 16:23 |
apuimedo | why on earth it is not picked up properly by the ovs agent on the first shot | 16:23 |
huats | I have the feeling that something is done at startup | 16:28 |
apuimedo | must be | 16:28 |
huats | here is the output that gives me that impression | 16:28 |
huats | neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-cfde7d25-fa44-4c03-8ba8-1f2db5889847 - -] Port 406fdbc7-07e1-40c8-a57e-a9936a35e6fe updated. Details: {u'profile': {}, u'network_qos_policy_id': None, u'qos_policy_id': None, u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': u'fe8026c9-eed7-4bb1-8700-53b6799f6b9f', u'segmentation_id': 1005, u'device_owner': u'kuryr:container', u'physical_network': None, | 16:28 |
huats | u'mac_address': u'fa:16:3e:ea:fe:83', u'device': u'406fdbc7-07e1-40c8-a57e-a9936a35e6fe', u'port_security_enabled': True, u'port_id': u'406fdbc7-07e1-40c8-a57e-a9936a35e6fe', u'fixed_ips': [{u'subnet_id': u'0b90c1c1-8b8e-4b1f-b332-5d06fa2b7b76', u'ip_address': u'10.10.1.13'}], u'network_type': u'vxlan', u'security_groups': [u'19a04be3-1bc9-415f-b176-67bbe0895d20']} | 16:28 |
huats | ajo: if you have any idea :) | 16:29 |
*** shashank_hegde has quit IRC | 16:36 | |
*** fawadkhaliq has quit IRC | 16:50 | |
*** fawadkhaliq has joined #openstack-kuryr | 16:53 | |
*** salv-orlando has quit IRC | 16:58 | |
*** salv-orlando has joined #openstack-kuryr | 17:24 | |
*** salv-orl_ has joined #openstack-kuryr | 17:37 | |
*** shashank_hegde has joined #openstack-kuryr | 17:38 | |
*** salv-orlando has quit IRC | 17:39 | |
*** banix has quit IRC | 17:58 | |
*** oanson has joined #openstack-kuryr | 18:06 | |
*** oanson has quit IRC | 18:22 | |
*** salv-orl_ has quit IRC | 18:26 | |
*** salv-orlando has joined #openstack-kuryr | 18:27 | |
*** salv-orlando has quit IRC | 18:31 | |
*** fawadkhaliq has quit IRC | 18:38 | |
*** fawadkhaliq has joined #openstack-kuryr | 18:38 | |
*** fawadkhaliq has quit IRC | 18:43 | |
*** fawadkhaliq has joined #openstack-kuryr | 18:43 | |
*** oanson has joined #openstack-kuryr | 18:52 | |
*** fawadkhaliq has quit IRC | 18:52 | |
*** banix has joined #openstack-kuryr | 18:54 | |
*** fawadkhaliq has joined #openstack-kuryr | 18:54 | |
*** fawadkhaliq has quit IRC | 18:58 | |
*** fawadkhaliq has joined #openstack-kuryr | 18:59 | |
*** fawadkhaliq has quit IRC | 19:00 | |
*** fawadkhaliq has joined #openstack-kuryr | 19:01 | |
*** asti_ has joined #openstack-kuryr | 19:02 | |
*** yuanying has quit IRC | 19:04 | |
*** fawadkhaliq has quit IRC | 19:20 | |
*** fawadkhaliq has joined #openstack-kuryr | 19:20 | |
*** oanson has quit IRC | 19:41 | |
*** asti_ has quit IRC | 19:46 | |
*** asti_ has joined #openstack-kuryr | 19:51 | |
*** asti_ has quit IRC | 19:54 | |
*** salv-orlando has joined #openstack-kuryr | 19:55 | |
*** salv-orlando has quit IRC | 19:59 | |
*** fawadkhaliq has quit IRC | 20:11 | |
*** fawadkhaliq has joined #openstack-kuryr | 20:12 | |
*** banix has quit IRC | 20:24 | |
*** asti_ has joined #openstack-kuryr | 20:33 | |
*** salv-orlando has joined #openstack-kuryr | 20:41 | |
*** salv-orlando has quit IRC | 20:51 | |
*** salv-orlando has joined #openstack-kuryr | 20:52 | |
*** apuimedo has quit IRC | 21:00 | |
*** fawadkhaliq has quit IRC | 21:03 | |
*** fawadkhaliq has joined #openstack-kuryr | 21:03 | |
*** fawadkhaliq has quit IRC | 21:18 | |
*** fawadkhaliq has joined #openstack-kuryr | 21:18 | |
*** asti_ has quit IRC | 21:23 | |
*** banix has joined #openstack-kuryr | 21:57 | |
*** salv-orlando has quit IRC | 22:00 | |
*** salv-orlando has joined #openstack-kuryr | 22:01 | |
*** fawadkhaliq has quit IRC | 22:16 | |
*** fawadkhaliq has joined #openstack-kuryr | 22:16 | |
*** salv-orlando has quit IRC | 23:08 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/kuryr: Updated from global requirements https://review.openstack.org/304896 | 23:09 |
*** fawadkhaliq has quit IRC | 23:13 | |
*** fawadkhaliq has joined #openstack-kuryr | 23:14 | |
*** yuanying has joined #openstack-kuryr | 23:18 | |
*** asti_ has joined #openstack-kuryr | 23:29 | |
*** yuanying has quit IRC | 23:36 |