Wednesday, 2019-03-06

00:03 *** dmellado has quit IRC
00:04 *** dmellado has joined #openstack-kuryr
00:04 *** celebdor has joined #openstack-kuryr
00:10 *** celebdor has quit IRC
00:11 *** mrostecki has quit IRC
01:13 *** oanson has quit IRC
02:26 *** hongbin has joined #openstack-kuryr
03:49 *** janki has joined #openstack-kuryr
03:50 *** janki has quit IRC
03:50 *** janki has joined #openstack-kuryr
04:42 *** hongbin has quit IRC
05:29 *** yboaron has joined #openstack-kuryr
05:32 *** gcheresh_ has joined #openstack-kuryr
05:37 *** gcheresh_ has quit IRC
06:07 *** gcheresh_ has joined #openstack-kuryr
06:13 *** gkadam has quit IRC
06:59 *** ccamposr has joined #openstack-kuryr
08:12 *** maysams has joined #openstack-kuryr
08:14 *** gkadam has joined #openstack-kuryr
08:27 *** alisanhaji has joined #openstack-kuryr
08:29 *** pcaruana has joined #openstack-kuryr
08:32 *** yboaron_ has joined #openstack-kuryr
08:34 *** yboaron has quit IRC
09:06 <dulek> ltomasbo: Any idea why Octavia keeps breaking my default route?
09:06 <dulek> "default via 192.168.0.1 dev o-hm0"
09:06 <ltomasbo> yes!
09:06 <ltomasbo> it is broken with ovn
09:06 *** yboaron_ has quit IRC
09:06 <dulek> ltomasbo: :)
09:06 <ltomasbo> I added this to the devstack/plugin
09:07 *** yboaron_ has joined #openstack-kuryr
09:07 <ltomasbo>      if  ! ps aux | grep -q [o]-hm0 && [ $OCTAVIA_NODE != 'api' ] ; then
09:07 <ltomasbo>          sudo dhclient -v o-hm0 -cf $OCTAVIA_DHCLIENT_CONF
09:07 <ltomasbo> +        sudo ip route replace default via 10.16.151.254
09:07 <ltomasbo> switching it to your default route...
09:07 <ltomasbo> that worked for me
09:08 <dulek> You've added that to Octavia's devstack/plugin, right?
09:08 <dulek> Just the last line.
09:08 <ltomasbo> yep
09:08 <ltomasbo> exactly
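Pieced together, the patch ltomasbo quotes amounts to the following in Octavia's devstack plugin (a sketch of a local workaround, not merged code; 10.16.151.254 is the gateway of his environment, so substitute your host's real default gateway):

    if ! ps aux | grep -q [o]-hm0 && [ $OCTAVIA_NODE != 'api' ] ; then
        sudo dhclient -v o-hm0 -cf $OCTAVIA_DHCLIENT_CONF
        # dhclient on o-hm0 (the Octavia health-manager interface) installs
        # a default route via the management subnet, clobbering the host's
        # one; put the original route back.
        sudo ip route replace default via 10.16.151.254
    fi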
09:10 <dulek> ltomasbo: Damn, but that doesn't get my pods access to the internet.
09:10 *** kmadac has joined #openstack-kuryr
09:13 <ltomasbo> umm
09:13 <ltomasbo> maybe you need to use masquerade...
09:13 <ltomasbo> running on RDO-cloud?
09:13 <ltomasbo> dulek, ^^
09:13 <ltomasbo> try this:
09:13 <ltomasbo> sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
09:13 <ltomasbo> sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
09:13 <ltomasbo> sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
09:13 <ltomasbo> sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
09:13 <ltomasbo> sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited
09:14 <dulek> ltomasbo: Nah, my local VM.
09:15 *** celebdor has joined #openstack-kuryr
09:16 <dulek> ltomasbo: Magic… xD
09:16 <dulek> ltomasbo: Can I expect that in the gates pods will have access to the outside?
09:18 <ltomasbo> dulek, they should
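For reference, ltomasbo's iptables recipe above, annotated; 172.24.4.0/24 is devstack's default floating IP range, so adjust it if your deployment uses a different one:

    # Allow the host to forward traffic to and from the floating IP range.
    sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
    sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
    # Masquerade traffic leaving that range behind the host's own address,
    # so replies from the outside world can find their way back.
    sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
    # Remove the catch-all REJECT rules (typical on Fedora/RHEL hosts)
    # that would otherwise shadow the ACCEPT rules above.
    sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
    sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited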
11:01 *** FlorianFa has quit IRC
11:02 *** FlorianFa has joined #openstack-kuryr
11:02 *** kmadac1 has joined #openstack-kuryr
11:04 *** pcaruana has quit IRC
11:05 *** kmadac has quit IRC
11:08 *** kmadac1 has quit IRC
11:12 *** kmadac2 has joined #openstack-kuryr
11:32 *** pcaruana has joined #openstack-kuryr
12:02 <dulek> celebdor: Can you help me decide how I should configure that coredns to make sense upstream?
12:04 <dulek> celebdor: Okay, nevermind, it's always the same: when I write on IRC I finally figure out the correct solution. :P
12:11 *** kmadac2 has quit IRC
12:13 *** kmadac2 has joined #openstack-kuryr
12:22 *** danil_ has joined #openstack-kuryr
12:26 *** danil has quit IRC
12:31 *** mrostecki has joined #openstack-kuryr
12:41 <dulek> yboaron_: Do I remember correctly that at the moment the networking-ovn provider doesn't support UDP?
12:42 <yboaron_> dulek, Hmm, I'm not 100% sure, I think you're right
12:43 <dulek> Okay, thanks.
12:51 *** mrostecki has quit IRC
12:53 *** mrostecki has joined #openstack-kuryr
13:46 <alisanhaji> Hi people of the world, I have a question about the pod network in kuryr. Can you have multiple neutron networks to put pods in, or can you only use the subnet specified by the pod_subnet parameter in kuryr.conf? Thanks
13:51 *** pcaruana has quit IRC
14:01 *** pcaruana has joined #openstack-kuryr
14:09 *** FlorianFa has quit IRC
14:10 <dulek> alisanhaji: You can specify a pod subnetpool if you expect to have a subnet per K8s namespace.
14:11 <dulek> ltomasbo: Ha, I've run out of security groups on my env.
14:11 <dulek> I assume the fact that I have 5 of them called "kube-system/coredns" indicates some resource leaking.
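A quick way to confirm such a leak, assuming the openstack CLI is pointed at the same cloud (a diagnostic sketch, not a kuryr tool):

    # Count the security groups created for the kube-system/coredns
    # Service; several identical entries suggest leaked resources.
    openstack security group list -f value -c ID -c Name | grep 'kube-system/coredns'
    # Leaked groups can then be removed by ID:
    # openstack security group delete <sg-id>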
14:16 *** gkadam_ has joined #openstack-kuryr
14:19 *** gkadam has quit IRC
14:26 *** alisanhaji has quit IRC
14:35 *** alisanhaji has joined #openstack-kuryr
14:48 <ltomasbo> yep, could be
14:53 <dulek> ltomasbo, maysams: I have a Service definition that lists two protocols using the same port. I assume that is not supported?
14:53 <dulek> Error message: {"debuginfo": null, "faultcode": "Client", "faultstring":
14:53 <dulek> "Another Listener on this Load Balancer is already using protocol_port 53"}
14:54 <ltomasbo> you can open the same port on different protocols (tcp and udp)
14:54 <ltomasbo> but not on the same one, I suppose
14:54 <dulek> ltomasbo: That's how I have it: 53/UDP,53/TCP.
14:55 <dulek> It's possible it doesn't work with OVN only, though.
14:56 <dulek> BTW - such a loadbalancer never gets deleted, because it constantly fails listener creation with that error…
14:56 <ltomasbo> maybe, not sure if ovn has support for udp
14:56 <ltomasbo> perhaps you can remove the listener then
14:56 <ltomasbo> the one already created
14:58 <dulek> ltomasbo: Deleting the loadbalancer helps.
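When a load balancer is wedged like this, deleting it together with everything under it in one call is the easy way out. A sketch using the Octavia CLI; the --cascade flag should be available in python-octaviaclient, but verify it exists in your release:

    # Remove the load balancer together with its listeners, pools and
    # members instead of unwinding the broken listener by hand.
    openstack loadbalancer delete --cascade <lb-name-or-id>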
15:00 *** gkadam__ has joined #openstack-kuryr
15:02 <dulek> yboaron_, ltomasbo: Just clarifying my findings: the networking-ovn provider *does* support UDP; using the same port for TCP and UDP was the issue.
15:03 *** gkadam_ has quit IRC
15:03 <ltomasbo> dulek, my last update on that was that the ovn-provider supports both UDP and TCP, but not on the same loadbalancer
15:03 <ltomasbo> you can have an lbaas with tcp or udp, but not with both (unless they fixed that already)
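The kind of Service that triggered the error maps the same port onto two listeners of a single Octavia load balancer. A minimal sketch of such a definition (the names are illustrative, not dulek's actual manifest):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: coredns
    spec:
      selector:
        app: coredns
      ports:
      - name: dns       # 53/UDP -> one Octavia listener
        port: 53
        protocol: UDP
      - name: dns-tcp   # 53/TCP -> a second listener on the same port
        port: 53
        protocol: TCP
    EOF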
15:05 <dulek> ltomasbo: Ah, makes sense, wonderful.
15:06 <alisanhaji> dulek: thanks!
15:09 <maysams> dulek: sry, I was afk. But it seems you guys figured it out
15:10 <dulek> maysams: Yup.
15:10 <maysams> good!
15:10 <dulek> alisanhaji: What's your use case? We might be able to help more if we understand it.
15:13 <alisanhaji> well I want to have kuryr-kubernetes in magnum so that containers and pods are in the same networks. In that case I can configure a separate network for each magnum k8s cluster, but I am also thinking about having Ironic and a separate network for each tenant
15:16 <ltomasbo> alisanhaji, by tenant do you mean a kubernetes user? or an openstack project?
15:16 <ltomasbo> alisanhaji, I imagine you are installing kuryr in a given OpenStack tenant, right?
15:17 <alisanhaji> I was wondering if it was also possible to have Kuryr use the same Neutron that was used to spawn the VM that hosts kubernetes with Kuryr. I think as long as I have a separate network for the VMs that host magnum and another network that runs inside these VMs, it should be fine (with tons of encapsulation)
15:17 <alisanhaji> ltomasbo: as an openstack project
15:18 <alisanhaji> ltomasbo: magnum runs the k8s clusters for
15:18 <ltomasbo> alisanhaji, ahh, sure, you can set in kuryr.conf the network for the containers (if no namespace/np is used) and then point it to the VM network you like
15:18 <alisanhaji> inside a project
15:19 <alisanhaji> ltomasbo: thanks! I think the trick is to make magnum create a new pod subnet or subnet pool for each new cluster and configure kuryr-kubernetes to use that one
15:21 <ltomasbo> alisanhaji, ok, another option is to have a subnetpool
15:21 <ltomasbo> then, if you enable namespace isolation, or network policies
15:21 <ltomasbo> a new subnet and network will be created from that subnetpool for each kubernetes namespace
15:21 <ltomasbo> all connected to the same router, of course
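Roughly, the kuryr.conf knobs being discussed look like the sketch below. The option names match the Stein-era namespace subnet driver but are an assumption to verify against your release's documentation:

    [kubernetes]
    # Have kuryr-controller create a Neutron subnet per K8s namespace.
    pod_subnets_driver = namespace

    [namespace_subnet]
    # Namespace subnets are carved out of this pool...
    pod_subnet_pool = <subnetpool-uuid>
    # ...and all plugged into this router.
    pod_router = <router-uuid>

    [neutron_defaults]
    # Without the namespace driver, every pod lands on this single subnet.
    pod_subnet = <subnet-uuid>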
15:22 <alisanhaji> true, that would be even more convenient
15:23 *** mrostecki has quit IRC
15:25 <alisanhaji> ltomasbo and dulek: thanks for the help, I will do some tests and see with the Magnum team about putting Kuryr among the network drivers
15:27 <dulek> alisanhaji: Hey, hey, the trick is having an instance of kuryr-kubernetes per K8s cluster! :)
15:27 <dulek> alisanhaji: That's how it's meant to work.
15:28 <dulek> alisanhaji: We allow running Kuryr in pods on the K8s cluster, just like you would run Calico.
15:29 <dulek> alisanhaji: Can you share a K8s cluster between OpenStack tenants in Magnum?
15:29 <alisanhaji> dulek: so each cluster that magnum spawns will come with its own configured kuryr-kubernetes
15:30 <dulek> alisanhaji: Yes, I don't think it would even work to have a single kuryr-controller serve multiple K8s clusters.
15:30 <dulek> If it does, it's by pure luck, we never assumed that.
15:31 <alisanhaji> dulek: I don't think so, each cluster is isolated for a specific project
15:31 *** mrostecki has joined #openstack-kuryr
15:31 <alisanhaji> I don't know if there is a "public" parameter
15:34 <dulek> alisanhaji: Okay, so basically the way to go is to have one kuryr-controller (and kuryr-daemons on the nodes) for each K8s cluster Magnum deploys.
15:35 <dulek> And the easiest way to achieve that would be to just run the K8s yamls that you can generate using the script tools/generate_k8s_resource_definitions.sh.
15:36 <dulek> (It might not fit your needs, maybe you need your own, or you want to contribute fixes to our script, we're totally welcoming here)
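Deploying it that way is roughly a two-step affair. A sketch, assuming the script takes an output directory as its argument; check the script header for the exact interface in your checkout:

    # Generate the kuryr-controller Deployment, kuryr-daemon DaemonSet and
    # the supporting ConfigMap/ServiceAccount definitions...
    ./tools/generate_k8s_resource_definitions.sh /tmp/kuryr_k8s
    # ...then apply them on the target cluster.
    kubectl apply -f /tmp/kuryr_k8s/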
15:36 <dulek> ltomasbo: Do you want me to update the upstream container images? I have them built, but I didn't want to interrupt your debugging session.
15:37 <ltomasbo> dulek, sure! please do!
15:37 <ltomasbo> I'm just compiling kuryr myself on the testbed so as not to rely on that!
15:37 <ltomasbo> but updating the master images for both would be great!
15:38 <alisanhaji> dulek: ok thanks, I'll check this out, run some tests locally and see how I can automate it with Magnum
15:38 <dulek> alisanhaji: Sure, feel free to bug us with anything.
15:39 <dulek> alisanhaji: And don't do `git blame` on that script, it's me. ;)
15:41 <alisanhaji> dulek: haha, they should have a command 'git peace' to know who to thank for a commit, to balance things out :D
15:42 * dulek sets up a git alias. :D
15:43 <alisanhaji> haha!
15:53 *** pcaruana has quit IRC
15:55 *** janki has quit IRC
16:04 *** yboaron_ has quit IRC
16:06 *** pcaruana has joined #openstack-kuryr
16:42 *** gkadam__ has quit IRC
16:55 *** pcaruana has quit IRC
17:24 <dulek> Ha, I've just successfully run the first upstream K8s NP test! :D
17:25 <dulek> Thing is, that one was testing the connection without any policy; let's see what happens with the next one. :)
17:25 <ltomasbo> dulek, great! \o/
17:25 <maysams> dulek: uhuu.. That's great, dulek :)
17:32 *** gcheresh_ has quit IRC
17:39 *** maysams has quit IRC
18:02 *** celebdor has quit IRC
18:07 <dulek> http://paste.openstack.org/show/747373/
18:07 <dulek> Okay, 3 out of 8 failed, it's not awful.
18:08 <dulek> Worst part is that those 8 tests took 36 minutes…
18:35 *** ccamposr has quit IRC
18:36 *** openstackgerrit has joined #openstack-kuryr
18:36 <openstackgerrit> Michał Dulko proposed openstack/kuryr-kubernetes master: Switch Octavia API calls to openstacksdk  https://review.openstack.org/638258
18:36 <openstackgerrit> Michał Dulko proposed openstack/kuryr-kubernetes master: Add option to tag Octavia resources created by us  https://review.openstack.org/638483
19:07 *** kaiokmo has quit IRC
19:13 *** mrostecki has quit IRC
19:38 *** kaiokmo has joined #openstack-kuryr
20:34 *** mrostecki has joined #openstack-kuryr
20:37 *** alisanhaji has quit IRC
21:05 *** celebdor has joined #openstack-kuryr
21:09 *** irclogbot_1 has joined #openstack-kuryr
21:28 *** irclogbot_1 has quit IRC
21:50 *** celebdor has quit IRC
21:57 *** celebdor has joined #openstack-kuryr
22:02 *** celebdor has quit IRC
23:43 *** rh-jelabarre has quit IRC
23:46 *** celebdor has joined #openstack-kuryr
