*** yamamoto has quit IRC | 00:01 | |
*** yamamoto has joined #openstack-kuryr | 00:05 | |
*** https_GK1wmSU has joined #openstack-kuryr | 00:08 | |
*** https_GK1wmSU has left #openstack-kuryr | 00:10 | |
*** caowei has joined #openstack-kuryr | 00:22 | |
*** pmannidi has joined #openstack-kuryr | 00:31 | |
*** kiennt has joined #openstack-kuryr | 00:32 | |
*** kiennt has quit IRC | 00:47 | |
*** kiennt has joined #openstack-kuryr | 00:51 | |
*** limao has joined #openstack-kuryr | 00:52 | |
*** yedongcan has joined #openstack-kuryr | 00:53 | |
*** dougbtv_ has quit IRC | 00:58 | |
*** yedongcan1 has joined #openstack-kuryr | 01:17 | |
*** yedongcan has quit IRC | 01:18 | |
*** aojea has joined #openstack-kuryr | 01:22 | |
*** premsankar has quit IRC | 01:23 | |
*** yamamoto has quit IRC | 01:25 | |
*** aojea has quit IRC | 01:26 | |
*** yamamoto has joined #openstack-kuryr | 01:29 | |
*** yamamoto has quit IRC | 01:51 | |
*** yamamoto has joined #openstack-kuryr | 01:56 | |
*** aojea has joined #openstack-kuryr | 02:07 | |
*** yamamoto has quit IRC | 02:07 | |
*** aojea has quit IRC | 02:12 | |
*** yamamoto has joined #openstack-kuryr | 02:19 | |
*** hongbin has joined #openstack-kuryr | 02:21 | |
*** https_GK1wmSU has joined #openstack-kuryr | 02:33 | |
*** https_GK1wmSU has left #openstack-kuryr | 02:35 | |
*** kiennt has quit IRC | 02:41 | |
*** kiennt has joined #openstack-kuryr | 02:42 | |
*** limao has quit IRC | 02:44 | |
*** limao has joined #openstack-kuryr | 02:46 | |
*** hongbin_ has joined #openstack-kuryr | 02:55 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/kuryr-libnetwork master: Updated from global requirements https://review.openstack.org/484555 | 02:56 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/kuryr-kubernetes master: Updated from global requirements https://review.openstack.org/488010 | 02:56 |
*** hongbin has quit IRC | 02:57 | |
*** hongbin has joined #openstack-kuryr | 02:59 | |
*** hongbin_ has quit IRC | 03:00 | |
*** yamamoto has quit IRC | 03:01 | |
*** yamamoto has joined #openstack-kuryr | 03:06 | |
*** yamamoto has quit IRC | 03:21 | |
*** yamamoto has joined #openstack-kuryr | 03:24 | |
*** gouthamr has quit IRC | 03:28 | |
*** yamamoto has quit IRC | 03:36 | |
*** yamamoto has joined #openstack-kuryr | 03:44 | |
*** yamamoto has quit IRC | 04:04 | |
*** hongbin has quit IRC | 04:05 | |
*** yamamoto has joined #openstack-kuryr | 04:09 | |
*** yboaron_ has joined #openstack-kuryr | 04:21 | |
*** caowei has quit IRC | 04:33 | |
*** janki has joined #openstack-kuryr | 05:06 | |
openstackgerrit | Berezovsky Irena proposed openstack/kuryr-kubernetes master: Add devstack support for multi-node deployment https://review.openstack.org/489098 | 05:29 |
*** caowei has joined #openstack-kuryr | 05:42 | |
apuimedo | irenab: did you see my answer to your comment? | 05:49 |
irenab | yes, somehow missed it | 05:49 |
irenab | I am running deployment now, once done will +2 | 05:49 |
irenab | I updated the multi node patch to include some option for overcloud, but in general local.conf should be good for nested too | 05:50 |
irenab | please take a look | 05:50 |
apuimedo | thanks irenab | 05:54 |
apuimedo | irenab: the multi node you sent was for overcloud already, wasn't it? | 05:55 |
irenab | yes, was missing one line | 05:55 |
apuimedo | ah, ok | 05:55 |
apuimedo | I'm having some connectivity problems to my cloud, so I couldn't try it yet | 05:56 |
irenab | sure, take your time | 05:56 |
*** kiennt has quit IRC | 05:57 | |
*** kiennt has joined #openstack-kuryr | 06:01 | |
irenab | apuimedo, ping | 06:06 |
apuimedo | irenab: pong | 06:07 |
irenab | https://pastebin.com/HXiEVzAc | 06:08 |
apuimedo | irenab: that's odd | 06:10 |
apuimedo | is there a /opt/stack/octavia? | 06:10 |
irenab | trying to restack | 06:10 |
irenab | now seems to be cloning octavia, maybe glitches of the devstack | 06:12 |
apuimedo | weird | 06:25 |
irenab | it looks better now, still deploying | 06:27 |
apuimedo | ok | 06:36 |
apuimedo | irenab: reclone on or off? | 06:37 |
ltomasbo | toni, I'm getting the same error | 06:38 |
ltomasbo | apuimedo, ^^ | 06:39 |
apuimedo | ltomasbo: are you sure it's the latest patchset? | 06:39 |
ltomasbo | Took it from yesterday, but I'll pull the new ones and re-deploy (just in case) | 06:39 |
apuimedo | thanks ltomasbo | 06:42 |
ltomasbo | apuimedo, I got the same error | 06:59 |
ltomasbo | seems OCTAVIA_CONF_DIR is not set | 06:59 |
ltomasbo | I see there is no octavia dir being cloned either... | 07:00 |
irenab | ltomasbo, check the local.conf that octavia is set to true | 07:01 |
ltomasbo | is not set to true by default? | 07:01 |
ltomasbo | ok, I don't see it there | 07:02 |
irenab | for me it seems to be in the loop waiting for the default/kubernetes to become active | 07:02 |
ltomasbo | umm, I see the local.conf was not updated, going to make the pull again! | 07:04 |
*** pcaruana has joined #openstack-kuryr | 07:06 | |
irenab | ltomasbo, apuimedo devstack completed properly. Will try to run some app | 07:12 |
ltomasbo | irenab, great! | 07:26 |
ltomasbo | mine is still ongoing | 07:26 |
ltomasbo | but now I see the octavia dir, so I assume it will work | 07:26 |
irenab | it takes a lot of time to get the service to active | 07:26 |
irenab | apuimedo, any idea how to update openrc to source it for k8s project? | 07:27 |
kzaitsev_ws | my deployment ran out of space yesterday =( | 07:29 |
kzaitsev_ws | so I'm re-deploying the octavia patch now too =) | 07:29 |
irenab | ltomasbo, kzaitsev_pi please try the multi node one as well | 07:29 |
ltomasbo | irenab, I use the --project tenant-id | 07:30 |
ltomasbo | and use the admin | 07:30 |
irenab | ltomasbo, run openstack commands with --project? | 07:32 |
ltomasbo | irenab, I'll try the multi-node one as soon as I finish other stuff and get some quota to create a couple of VMs, I hope I don't have problems with MTU for that one | 07:33 |
ltomasbo | yep, I run the commands like that | 07:33 |
ltomasbo | just the ones to list the ports, loadbalancers, ... | 07:33 |
irenab | got it | 07:33 |
ltomasbo | it does not work to create stuff (I think) | 07:33 |
irenab | apuimedo, I used the app you shared a few days ago (frontend + redis master and slave). There is ping from FE to Redis pods, but no ping from FE to Redis service | 07:35 |
*** garyloug has joined #openstack-kuryr | 07:52 | |
*** egonzalez has joined #openstack-kuryr | 07:56 | |
*** garyloug has quit IRC | 07:58 | |
*** yboaron_ has quit IRC | 07:58 | |
ltomasbo | apuimedo, do you know if all the octavia stuff works with ovs-firewall (instead of the hybrid one)? | 08:02 |
ltomasbo | apuimedo, it works for me | 08:09 |
ltomasbo | I see the loadbalancer port status as down | 08:09 |
ltomasbo | but it is actually working | 08:09 |
ltomasbo | apuimedo, I can curl the exposed service from the qrouter namespace, but not from one pod to the other (which I guess could be the intended behavior) | 08:11 |
*** kural has joined #openstack-kuryr | 08:12 | |
apuimedo | I'm back | 08:14 |
*** garyloug has joined #openstack-kuryr | 08:14 | |
apuimedo | ltomasbo: from one pod you can't curl the service? | 08:14 |
ltomasbo | that's right | 08:15 |
irenab | apuimedo, ping should work? | 08:15 |
apuimedo | irenab: no, ping to the service shouldn't work | 08:15 |
apuimedo | only curl | 08:15 |
apuimedo | well, if you ssh into octavia haproxy netns | 08:15 |
apuimedo | you should also be able to curl to the members | 08:15 |
apuimedo | irenab: the long time waiting for default/k8s is normal | 08:15 |
apuimedo | the first octavia LB takes a loooooooooooong time | 08:16 |
apuimedo | hence why I increase the timeout | 08:16 |
irenab | just for the sake of the verification, how do you check the services? | 08:16 |
apuimedo | "ltomasbo | apuimedo, it works for me" What does? | 08:16 |
apuimedo | irenab: I started an unrelated pod | 08:17 |
apuimedo | and curled the service | 08:17 |
irenab | FE? | 08:18 |
irenab | devstack completed properly | 08:18 |
ltomasbo | apuimedo, I meant I can curl the service (and the pods) from the qrouter namespace | 08:18 |
ltomasbo | but not from inside the pod | 08:18 |
ltomasbo | well, from inside the pods I can curl the other pods | 08:18 |
ltomasbo | but not through the service exposed, directly on the pods IPs | 08:19 |
apuimedo | irenab: what's FE? | 08:20 |
irenab | front end | 08:20 |
apuimedo | ltomasbo: which SG is on the clusterip port and which on the other port of the service? | 08:20 |
apuimedo | irenab: yes, from FE you should be able to curl BE service | 08:21 |
ltomasbo | apuimedo, I did not modify any sg, so the ones created by default with your local.conf | 08:26 |
ltomasbo | let me check | 08:26 |
apuimedo | ltomasbo: please do. And if I can get access it will also be helpful | 08:27 |
apuimedo | irenab: you tried with ovs or with df? | 08:28 |
irenab | just your patch, ovs | 08:28 |
ltomasbo | apuimedo, with clusterip you meant the lbaas port? | 08:28 |
irenab | the local.conf you provided | 08:28 |
apuimedo | ltomasbo: the vip port | 08:28 |
apuimedo | the one that is down (is handled by keepalived) | 08:28 |
apuimedo | and also the one that is actually up in the service network | 08:28 |
apuimedo | and the one in the pod subnet | 08:29 |
irenab | from the front end pod, trying curl http://redis-service-ip does not work | 08:29 |
ltomasbo | ahh the octavia-lb-vrrp... | 08:29 |
apuimedo | right | 08:29 |
ltomasbo | and sure, give me your ssh-key and I'll give you access | 08:29 |
apuimedo | irenab: ltomasbo: maybe I missed some SG | 08:29 |
apuimedo | ltomasbo: sent | 08:30 |
ltomasbo | apuimedo, both lbaas and octavia-lb-vrrp have the same sg | 08:30 |
ltomasbo | but the pods have 2 different sg, and they are different from the others | 08:30 |
apuimedo | alright then | 08:30 |
*** aojea has joined #openstack-kuryr | 08:30 | |
apuimedo | I remember now | 08:31 |
apuimedo | :-) | 08:31 |
apuimedo | I manually changed the SG of the VIP port | 08:31 |
apuimedo | and then was investigating the rest | 08:31 |
apuimedo | and forgot this step in the patch | 08:31 |
apuimedo | :/ | 08:31 |
apuimedo | ltomasbo: what do you mean the 2 pods have different SG? | 08:31 |
apuimedo | All the ports have the same SG | 08:31 |
apuimedo | but the octavia ports in the pod subnet have the 'default' of the admin project | 08:32 |
apuimedo | and the ports of the pods the 'default' of the 'k8s' project | 08:32 |
ltomasbo | yep | 08:32 |
ltomasbo | I have 2 pods running | 08:32 |
ltomasbo | and they have 2 sg each (which are the same, I guess the default and the k8s) | 08:32 |
apuimedo | ltomasbo: right, this is correct | 08:33 |
ltomasbo | yep, I checked | 08:33 |
ltomasbo | they are the default and the pod_pod_access | 08:33 |
ltomasbo | while the vip has the lb-XXX security group | 08:34 |
apuimedo | irenab: ltomasbo: did you notice how nice that we don't need the neutron cli anymore? | 08:34 |
apuimedo | the devstack includes 'openstack loadbalancer *' commands | 08:34 |
irenab | cool | 08:34 |
irenab | less dependencies, less confusion | 08:35 |
ltomasbo | ohh, I did not realize about that! | 08:35 |
apuimedo | ltomasbo: irenab: from what I see, the lb has a security group allowing the listener port | 08:36 |
apuimedo | from any origin | 08:36 |
irenab | apuimedo, how did you verify? Maybe I am missing something | 08:37 |
apuimedo | irenab: I searched the port with the VIP | 08:37 |
apuimedo | and then did port show of it | 08:37 |
apuimedo | to see the SG | 08:37 |
apuimedo | and looked at the rules | 08:37 |
ltomasbo | apuimedo, I see it allows tcp on port 80 | 08:37 |
irenab | I mean that k8s is working | 08:37 |
ltomasbo | as ingress, right? | 08:37 |
irenab | services | 08:38 |
ltomasbo | perhaps it needs to be http instead? | 08:38 |
apuimedo | let me check a moment | 08:38 |
irenab | apuimedo, what I see is that if I curl to 127.0.0.1 on the FE pod it works, but with service_ip, it doesn't | 08:44 |
*** Guest14_ has joined #openstack-kuryr | 08:44 | |
apuimedo | irenab: right | 08:44 |
*** Guest14_ is now known as ajo | 08:45 | |
apuimedo | now I'm debugging in ltomasbo's machine | 08:45 |
apuimedo | I see that the vip is well configured in the haproxy-amphora netns | 08:45 |
apuimedo | irenab: ltomasbo: ok, at least something | 08:47 |
apuimedo | the communication between the amphora netns and the lb members works | 08:47 |
apuimedo | now I have to remember how I solved the pod -> LB | 08:47 |
apuimedo | sorry for the troubles | 08:47 |
apuimedo | I forgot I had fixed this also when making the patch | 08:48 |
ltomasbo | :D | 08:48 |
ltomasbo | no problem! | 08:48 |
irenab | its ok | 08:50 |
irenab | integrating OpenStack projects is not expected to work the first time | 08:51 |
ltomasbo | that's very true! | 08:52 |
apuimedo | ;-) | 08:52 |
apuimedo | thanks to you both | 08:52 |
apuimedo | ltomasbo: meh... I can't see why | 09:02 |
ltomasbo | :D | 09:02 |
ltomasbo | too many security groups! | 09:02 |
apuimedo | ltomasbo: but do you see that the communication reached? In the curl | 09:02 |
ltomasbo | no, didn't check with tcpdump | 09:03 |
apuimedo | with incorrect checksum | 09:03 |
apuimedo | look at the tmux | 09:03 |
ltomasbo | ohh | 09:03 |
ltomasbo | with incorrect cksum?? | 09:03 |
apuimedo | yeah | 09:03 |
apuimedo | this happened the other day | 09:03 |
ltomasbo | in the way back from the amphora? | 09:03 |
apuimedo | and I think the fix I did was to change the SG | 09:03 |
apuimedo | I don't see any communication on the way back | 09:04 |
ltomasbo | could it be due to the http? | 09:05 |
ltomasbo | I see in the tmux the first one has the right cksum | 09:05 |
ltomasbo | but not the following ones | 09:05 |
apuimedo | ltomasbo: ? | 09:06 |
ltomasbo | maybe it is a bug somewhere in security group actually | 09:06 |
ltomasbo | or in neutron routing not properly changing some of the headers leading to a wrong cksum | 09:06 |
apuimedo | ltomasbo: did you change the local.conf in any way? | 09:07 |
apuimedo | maybe to add the ovs firewall? | 09:07 |
ltomasbo | nop: cp /opt/stack/kuryr-kubernetes/devstack/local.conf.sample local.conf | 09:07 |
apuimedo | ok | 09:07 |
ltomasbo | should I try with ovs-firewall instead? | 09:07 |
apuimedo | no, no | 09:07 |
apuimedo | it should work with hybrid | 09:08 |
apuimedo | I got it working on Friday | 09:08 |
ltomasbo | ahh, ok | 09:08 |
ltomasbo | wasn't there a different security group rule for ingress traffic from outside than from a different sec-group? | 09:09 |
ltomasbo | perhaps we need an extra one for that | 09:09 |
ltomasbo | apuimedo, should we add 'remote security group' from pods too? | 09:10 |
ltomasbo | at least to double check if that is the problem | 09:10 |
apuimedo | ltomasbo: the problem is most likely security groups | 09:10 |
ltomasbo | yep, that is my bet too | 09:11 |
apuimedo | but we can't add the lb sgs to the pods, because they happen afterwards | 09:11 |
ltomasbo | no | 09:11 |
apuimedo | ltomasbo: here's the thing | 09:11 |
ltomasbo | I mean to add an extra rule to the lb sg | 09:11 |
apuimedo | the lb SGs as we saw, allow communication | 09:11 |
ltomasbo | so that it accepts ingress traffic from pods | 09:11 |
ltomasbo | not only from outside (that seems to be working) | 09:12 |
apuimedo | on the IP for the port and they don't specify remote SG | 09:12 |
apuimedo | ltomasbo: you mean because of the router namespace test? | 09:12 |
ltomasbo | yep | 09:12 |
apuimedo | ltomasbo: the router port doesn't have security though | 09:13 |
apuimedo | so it's cheating | 09:13 |
ltomasbo | ahhh, that's true | 09:13 |
ltomasbo | can I assign a floating ip to the vip and check | 09:13 |
apuimedo | ltomasbo: why is the rule list output so shitty | 09:13 |
apuimedo | ? | 09:13 |
apuimedo | it doesn't even show direction | 09:14 |
apuimedo | it pisses me off big time | 09:14 |
ltomasbo | apuimedo, perhaps you should use rule show | 09:14 |
apuimedo | ltomasbo: yes, for every rule | 09:14 |
apuimedo | not very convenient at all | 09:14 |
ltomasbo | or use --long | 09:15 |
*** yboaron_ has joined #openstack-kuryr | 09:15 | |
ltomasbo | apuimedo, so, with --long it shows address and direction | 09:16 |
apuimedo | irenab: ltomasbo: I wonder if the fact that the vrrp port is in the admin project | 09:19 |
apuimedo | but I would think not | 09:19 |
apuimedo | meh | 09:21 |
apuimedo | I added a rule that allows everything, but it didn't help | 09:21 |
ltomasbo | ummm | 09:22 |
*** limao has quit IRC | 09:24 | |
apuimedo | ltomasbo: I forgot... How do I see the tcpdump output as it happens as opposed to just when I abort it? | 09:24 |
ltomasbo | umm, that was supposed to be the default, right? | 09:25 |
apuimedo | ltomasbo: not in the ubuntu amphorae | 09:25 |
apuimedo | xD | 09:25 |
ltomasbo | umm | 09:27 |
ltomasbo | there is an immediate-mode, but not working either... | 09:27 |
ltomasbo | let's try on the tap... | 09:27 |
*** yamamoto has quit IRC | 09:37 | |
apuimedo | ltomasbo: if you use the mac addresses to match it you'll save guesswork | 09:39 |
ltomasbo | ahh, I'm getting lost... :D | 09:40 |
ltomasbo | the amphora VMs have 2 nics, right? | 09:40 |
ltomasbo | they have 3 taps | 09:40 |
ltomasbo | why? | 09:40 |
apuimedo | one for management network (used for ssh too) | 09:41 |
apuimedo | one for the service network | 09:41 |
apuimedo | one for l2 connectivity to pods | 09:41 |
apuimedo | ltomasbo: that's the one | 09:43 |
ltomasbo | yep | 09:43 |
ltomasbo | and the problem is with the arp reply? | 09:44 |
ltomasbo | oui Unknown... | 09:44 |
ltomasbo | apuimedo, do you see that? | 09:48 |
apuimedo | what, sorry? | 09:48 |
ltomasbo | the arp reply arrives to the pods port with a different IP | 09:49 |
ltomasbo | 10.0.0.126 instead of 10.0.0.148 | 09:49 |
ltomasbo | could that be the problem? | 09:49 |
ltomasbo | that is even a different subnet, right? | 09:49 |
apuimedo | wtf | 09:51 |
ltomasbo | ahh, no wait, it is asking for 126 directly | 09:51 |
ltomasbo | not sure why though | 09:51 |
apuimedo | that's the router port, isn't it? | 09:51 |
ltomasbo | umm, don't know, let's see | 09:51 |
ltomasbo | yes, it is | 09:52 |
apuimedo | then it is normal, I'd say | 09:52 |
ltomasbo | yep yep. I got confused by the ip range | 09:53 |
ltomasbo | the pods are in .64 | 09:53 |
ltomasbo | so, it is asking for the right one, and the service is in the .128 | 09:53 |
apuimedo | brrr | 09:55 |
apuimedo | wth is going on | 09:55 |
apuimedo | ltomasbo: I'll make a 'hack' | 09:55 |
ltomasbo | :D | 09:56 |
ltomasbo | go ahead | 09:56 |
ltomasbo | it is a 'disposable' deployment, so, if it breaks, we re-stack! | 09:57 |
*** yamamoto has joined #openstack-kuryr | 09:57 | |
apuimedo | ok | 09:59 |
apuimedo | thanks | 09:59 |
*** caowei has quit IRC | 10:05 | |
openstackgerrit | Jaivish Kothari(janonymous) proposed openstack/kuryr-kubernetes master: Kubernetes client usage from upstream K8s repo https://review.openstack.org/454555 | 10:08 |
*** yamamoto has quit IRC | 10:08 | |
*** kiennt has quit IRC | 10:12 | |
apuimedo | ltomasbo: pfffff | 10:13 |
apuimedo | can't find it | 10:13 |
ltomasbo | :D | 10:13 |
ltomasbo | I don't know octavia that much, so I can't figure out why it is happening either... | 10:14 |
ltomasbo | question is, is it happening just for me? irenab, is yours working? | 10:14 |
apuimedo | I'd imagine it is the same | 10:15 |
apuimedo | ltomasbo: lol | 10:21 |
apuimedo | it doesn't even reply from the amphora netns | 10:22 |
apuimedo | when you do curl 10.0.0.148 | 10:22 |
ltomasbo | umm | 10:25 |
*** yamamoto has joined #openstack-kuryr | 10:25 | |
ltomasbo | then it is not security groups... | 10:26 |
apuimedo | no, most likely not | 10:31 |
apuimedo | ok. I'll try one thing more | 10:32 |
*** pmannidi has quit IRC | 10:38 | |
apuimedo | ltomasbo: irenab: fucking hell | 10:38 |
apuimedo | I manually changed to l3 mode and it worked | 10:38 |
apuimedo | xD | 10:38 |
ltomasbo | what? | 10:39 |
apuimedo | ltomasbo: I removed the members | 10:41 |
apuimedo | and I added them without specifying subnet | 10:41 |
apuimedo | so that no lb port is created in the port subnet | 10:41 |
ltomasbo | wasn't the local.conf forcing l2 instead of l3 mode? | 10:42 |
apuimedo | ltomasbo: right, because our handler code does not support l3 mode yet | 10:42 |
apuimedo | so I did manual l3 mode by manually creating the members | 10:42 |
ltomasbo | ahh, ok | 10:42 |
ltomasbo | now I get it | 10:42 |
apuimedo | so at least I know that the L3 part of my devstack is correct | 10:43 |
apuimedo | for l2 there must be something missing | 10:43 |
apuimedo | I wonder why | 10:43 |
ltomasbo | so, then we get back to the security group issue? | 10:43 |
ltomasbo | or even some bug with ovs | 10:43 |
apuimedo | ltomasbo: yes. I'll create L2 one now | 10:43 |
apuimedo | and see what happens | 10:43 |
ltomasbo | great! | 10:43 |
ltomasbo | good luck! :D | 10:43 |
apuimedo | may the source be with me | 10:44 |
*** yedongcan1 has left #openstack-kuryr | 10:49 | |
apuimedo | ltomasbo: L2 LB is up | 10:51 |
apuimedo | but no success | 10:51 |
apuimedo | same behavior | 10:51 |
apuimedo | LB can curl pods | 10:51 |
apuimedo | but nobody can curl LB | 10:51 |
ltomasbo | umm | 10:52 |
ltomasbo | strange | 10:52 |
ltomasbo | can you try adding ping, and check if that works? | 10:53 |
openstackgerrit | Jaivish Kothari(janonymous) proposed openstack/kuryr-kubernetes master: Kubernetes client usage from upstream K8s repo https://review.openstack.org/454555 | 10:58 |
apuimedo | ltomasbo: ping to what? to the LB? | 10:59 |
ltomasbo | yep | 11:00 |
ltomasbo | although the traffic is getting to the namespace, right? | 11:00 |
ltomasbo | it is just that it is not coming back, right? | 11:00 |
ltomasbo | s/namespace/amphora-proxy namespace inside the amphora vm | 11:01 |
irenab | apuimedo, maybe worth to push l3 then, and keep l2 for the follow-up | 11:06 |
irenab | ? | 11:06 |
*** yamamoto has quit IRC | 11:17 | |
*** yamamoto has joined #openstack-kuryr | 11:32 | |
janonymous | zengchen: kzaitsev_pi updated https://review.openstack.org/#/c/454555/ | 11:32 |
*** zengchen has quit IRC | 11:35 | |
*** zengchen has joined #openstack-kuryr | 11:36 | |
*** atoth has joined #openstack-kuryr | 11:47 | |
*** yamamoto has quit IRC | 11:49 | |
apuimedo | ltomasbo: I don't see it going from LB to members | 12:08 |
apuimedo | irenab: yes. I'm starting to think that way | 12:08 |
apuimedo | to make l3 and file a blueprint for L2 | 12:08 |
apuimedo | I'm also tempted to make it octavia only, but that would probably be too much | 12:09 |
irenab | it may be too heavy sometimes | 12:10 |
apuimedo | it just pisses me off a bit to have to pass the service subnet when with octavia it is not necessary | 12:10 |
apuimedo | irenab: you are right, of course | 12:10 |
apuimedo | irenab: ltomasbo: so how do we go about it? I add L3 support on a follow-up patch? Or should I add it here in the same? | 12:11 |
irenab | The current one seems to be not working, so I guess L3 should come first | 12:12 |
apuimedo | irenab: alright. So I'll add it to the current patch, which already added the l3 pieces in devstack | 12:12 |
apuimedo | what a bother | 12:12 |
apuimedo | :/ | 12:12 |
ltomasbo | :/ | 12:13 |
ltomasbo | I agree with irenab, if this is not working now, we should go for l3 first | 12:13 |
ltomasbo | that will mean some modifications to ivc's work on lbaas, right? | 12:14 |
apuimedo | ltomasbo: yes | 12:15 |
apuimedo | irenab: ltomasbo: There's two ways to do this | 12:15 |
apuimedo | a) I change the code to always pass the service subnet as only L3 works | 12:15 |
apuimedo | b) I add a config option under [kubernetes] named service_to_pod_connectivity defaulting to 'layer3' with 'layer2' being the other value and make if/else in the lbaas handler to pass either pod or service subnet when creating the member | 12:17 |
irenab | less if/else, less options to fail | 12:18 |
irenab | I would go for passing always and not using for l3 | 12:19 |
irenab | but up to you | 12:19 |
apuimedo | irenab: what do you mean with 'passing always'? | 12:19 |
apuimedo | not making it configurable for now and use the service subnet always? | 12:19 |
ltomasbo | do we have any lbaas section? | 12:19 |
irenab | ltomasbo, I guess we will add one | 12:20 |
ltomasbo | if that's so, I would include that inside lbaas, rather than kubernetes | 12:20 |
ltomasbo | irenab, xD | 12:20 |
irenab | apuimedo, I meant (1) | 12:20 |
irenab | a | 12:20 |
irenab | (a) in your list if I got it correctly | 12:21 |
apuimedo | ltomasbo: no, we don't | 12:21 |
apuimedo | irenab: ltomasbo: okay then. I go with (a) and we'll make it configurable when and if we add L2 mode | 12:21 |
apuimedo | does that sound right to you? | 12:21 |
irenab | +1 | 12:21 |
apuimedo | kzaitsev_ws: vikasc: your opinion would also be helpful | 12:21 |
ltomasbo | sound good to me | 12:23 |
vikasc | +1 | 12:24 |
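[editor's note] The option (a) agreed above — always pass the service subnet when creating pool members, so the amphora reaches pod IPs over L3 via its router instead of needing an L2 port on the pod subnet — can be sketched roughly as below. The function and field names are illustrative only, not the actual kuryr-kubernetes lbaas handler API:

```python
# Illustrative sketch of option (a): the handler registers every member
# with the service subnet, so no load-balancer port is created in the
# pod subnet (the manual "l3 mode" fix described earlier in the log).
# All names here are hypothetical, not the real kuryr-kubernetes code.

def build_member_args(pod_ip, port, service_subnet_id):
    """Return the arguments for one load-balancer member in L3 mode."""
    return {
        "address": pod_ip,
        "protocol_port": port,
        # Option (a): always the service subnet, never the pod subnet.
        "subnet_id": service_subnet_id,
    }

members = [build_member_args(ip, 80, "svc-subnet")
           for ip in ("10.0.0.65", "10.0.0.66")]
```

If L2 mode is ever added, this is the single spot where a config option would switch `subnet_id` to the pod subnet, which is why deferring the option keeps the handler simple for now.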
apuimedo | thanks | 12:31 |
*** alraddarla has joined #openstack-kuryr | 12:33 | |
irenab | apuimedo, I am trying to deploy multi node app, and also see some issues with reaching service, this is before your patch | 12:36 |
apuimedo | irenab: with haproxy lbaasv2? | 12:36 |
irenab | yes | 12:36 |
irenab | but I have DF for SDN, so many potential points where it can go wrong | 12:36 |
apuimedo | I wonder if it is not from some devstack change changing projects | 12:36 |
apuimedo | we moved from the demo to the k8s one | 12:37 |
apuimedo | there may be unresolved crap in that | 12:37 |
irenab | apuimedo, maybe, not sure we have proper testing to make sure it worked after the change | 12:37 |
apuimedo | yeah | 12:37 |
irenab | in case of the overcloud heat, should loadbalancer have the heat created SG or the default one? | 12:43 |
kzaitsev_ws | apuimedo: always pass service subnet where? | 12:45 |
kzaitsev_ws | looks ok'ish to pass smth to the handler and not always use it. we have that kind of pattern all over the code =))) | 12:47 |
apuimedo | kzaitsev_ws: to the member creation | 12:48 |
*** yamamoto has joined #openstack-kuryr | 12:49 | |
kzaitsev_ws | hm. if I got it right l2 doesn't work yet anyway, right? | 12:50 |
apuimedo | kzaitsev_ws: right | 12:51 |
apuimedo | irenab: in order to try it, maybe you can change devstack/settings to specify demo project again | 12:52 |
* kzaitsev_ws imagining that I know nothing about the topic | 12:54 | |
kzaitsev_ws | if we have two options and only one works at the moment — feels logical to add a config variable after/when we make the 2nd work too | 12:55 |
*** yamamoto has quit IRC | 12:59 | |
openstackgerrit | Antoni Segura Puimedon proposed openstack/kuryr-kubernetes master: octavia: Make Octavia ready devstack https://review.openstack.org/489157 | 13:01 |
apuimedo | kzaitsev_ws: irenab: ltomasbo: here it goes | 13:01 |
apuimedo | this will probably work for l3 | 13:01 |
apuimedo | I'll try it now | 13:02 |
apuimedo | let's see if it works | 13:02 |
*** gouthamr has joined #openstack-kuryr | 13:03 | |
ltomasbo | apuimedo, thanks! I'll try it later too | 13:03 |
apuimedo | thanks | 13:04 |
*** alraddarla has left #openstack-kuryr | 13:05 | |
irenab | apuimedo, I will try tomorrow, need to go now | 13:08 |
apuimedo | thanks irenab | 13:09 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/fuxi master: Updated from global requirements https://review.openstack.org/483522 | 13:27 |
*** aojea has quit IRC | 13:29 | |
*** janki has quit IRC | 13:29 | |
*** aojea has joined #openstack-kuryr | 13:29 | |
*** hongbin has joined #openstack-kuryr | 13:38 | |
openstackgerrit | Antoni Segura Puimedon proposed openstack/kuryr-kubernetes master: octavia: Make Octavia ready devstack https://review.openstack.org/489157 | 13:58 |
*** janki has joined #openstack-kuryr | 14:11 | |
apuimedo | ltomasbo: kzaitsev_ws: ^^ works | 14:44 |
apuimedo | I just finished testing it | 14:44 |
ltomasbo | great! | 14:45 |
openstackgerrit | Antoni Segura Puimedon proposed openstack/kuryr-kubernetes master: devstack: fix ovs_bind device to be on the pod net https://review.openstack.org/489629 | 14:53 |
apuimedo | yboaron_: I think I know how to modify the lbaasspec for loadbalancerip | 14:57 |
apuimedo | https://github.com/openstack/kuryr-kubernetes/blob/master/kuryr_kubernetes/controller/handlers/lbaas.py#L90 | 14:57 |
apuimedo | yboaron_: I'd add here something like | 14:57 |
apuimedo | ext_ip = self._drv_ext_ip.get_ext_ip(service, project_id) | 14:58 |
apuimedo | then, if that returns None | 14:58 |
apuimedo | you don't put anything | 14:58 |
apuimedo | in the LBaaSServiceSpec | 14:59 |
apuimedo | the _drv_ext_ip sees the service spec and decides what to do about user specified vs "allocate any" | 14:59 |
apuimedo | then, if it is not None | 14:59 |
apuimedo | you put a ext_ip field to the lbaasservicespec | 15:00 |
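[editor's note] The flow apuimedo describes — ask an ext_ip driver while building the service spec, and only record the field when an IP was actually returned — might look roughly like this. The driver interface and spec fields below are stand-ins for illustration, not the real classes at the lbaas.py link above:

```python
# Rough sketch of the proposed flow: a pluggable ext_ip driver decides
# between a user-specified floating IP, "allocate any", or nothing, and
# the handler only sets ext_ip on the spec when the driver returned one.
# PoolExtIPDriver and build_service_spec are hypothetical names.

class PoolExtIPDriver:
    """Hypothetical driver that hands out FIPs from a maintained pool."""
    def __init__(self, pool):
        self._pool = list(pool)

    def get_ext_ip(self, service, project_id):
        if service.get("type") != "LoadBalancer":
            return None              # service does not want an external IP
        requested = service.get("loadBalancerIP")
        if requested:
            return requested         # user-specified address wins
        return self._pool.pop(0) if self._pool else None  # "allocate any"

def build_service_spec(service, project_id, drv_ext_ip):
    spec = {"project_id": project_id}
    ext_ip = drv_ext_ip.get_ext_ip(service, project_id)
    if ext_ip is not None:           # only annotate when one was allocated
        spec["ext_ip"] = ext_ip
    return spec
```

Swapping the driver (one that creates and removes FIPs via neutron vs. one that maintains a pool, as apuimedo suggests) changes the behaviour without touching the handler, which is the decoupling being proposed.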
yboaron_ | apuimedo, Can drivers access (I mean in terms of SW design) k8s resource fields? I thought that drivers should only use information stored in annotations | 15:17 |
*** pcaruana has quit IRC | 15:22 | |
kzaitsev_ws | yboaron_: what's the big difference between accessing annotations and resources? | 15:22 |
yboaron_ | kzaitsev_ws, no difference, I thought that from an SW design and coupling perspective a kuryr driver should use information stored in annotations | 15:24 |
apuimedo | yboaron_: they can access the fields they get passed | 15:24 |
apuimedo | in this case, it gets passed the service spec | 15:24 |
apuimedo | yboaron_: to make the annotation, you can use a driver | 15:25 |
apuimedo | so that using different drivers gives you a different ext_ip in this case | 15:25 |
apuimedo | for example one driver could create and remove fips, the other one maintain a pool | 15:26 |
yboaron_ | apuimedo, OK got it | 15:26 |
apuimedo | ;-) | 15:26 |
yboaron_ | apuimedo , kzaitsev_ws :10x | 15:29 |
kzaitsev_ws | 10x? | 15:31 |
yboaron_ | kzaitsev_ws, 10c for commenting :-) | 15:32 |
yboaron_ | kzaitsev_ws, I need to adjust to community ... | 15:33 |
kzaitsev_ws | oh. got it. 1st time I see '10x' used this way ) | 15:33 |
yboaron_ | kzaitsev_ws, thought that first time you saw thanks in kuryr channel ... | 15:34 |
yboaron_ | :-) | 15:34 |
kzaitsev_ws | <offtopic> New album by Arcadi Fire is pretty good | 15:34 |
kzaitsev_ws | *Arcade | 15:35 |
apuimedo | Arcadi Fire? | 15:35 |
apuimedo | never heard of it | 15:35 |
apuimedo | kzaitsev_ws: I usually work more to things like this https://www.youtube.com/watch?v=HIThQKypY4I | 15:36 |
kzaitsev_ws | OMG, seriously? =) | 15:37 |
kzaitsev_ws | that's way too cool for me | 15:38 |
kzaitsev_ws | I have nothing to top that =) | 15:39 |
*** pcaruana has joined #openstack-kuryr | 15:40 | |
apuimedo | kzaitsev_ws: well, sometimes I play other things | 15:40 |
apuimedo | when I need a lot from my poor brain I listen to die zauberflote | 15:40 |
apuimedo | and when devstack works well... | 15:40 |
apuimedo | https://www.youtube.com/watch?v=0DFsF_0tfiM | 15:41 |
kzaitsev_ws | what next? The Rite of Spring? | 15:41 |
apuimedo | kzaitsev_ws: polovtsian dances from Borodin maybe | 15:42 |
apuimedo | or bach's cello suites by Casals or Rostropovich | 15:43 |
kzaitsev_ws | I can't really just listen to classical music =( makes me really sleepy. I love opera and ballet. One autumn my wife and I went to a number of performances at the Bolshoi | 15:43 |
apuimedo | I'd like to go to the Bolshoi as well | 15:43 |
kzaitsev_ws | but then we decided to go to the Main Stage (to see it after reconstruction). | 15:44 |
apuimedo | kzaitsev_ws: this one gets played a lot while I work https://www.youtube.com/watch?v=76R0N2GN6Jo | 15:44 |
apuimedo | kzaitsev_ws: For Opera I usually stick to Puccini | 15:44 |
apuimedo | with a bit of Mozart and Bizet :P | 15:44 |
apuimedo | but you can't work to Carmen | 15:44 |
kzaitsev_ws | and the prices were rather high, but there was a cheap enough concert with just the orchestra playing different pieces. So we thought — why not? a great opportunity to see the big stage. | 15:45 |
kzaitsev_ws | I almost slept through it %) | 15:45 |
yboaron_ | apuimedo, so in the case of 'allocate any', the external/floating IP will be stored in LBaaSServiceSpec only after we've received the reply from Neutron, right? | 15:45 |
kzaitsev_ws | yboaron_: you had to close </offtopic> first! | 15:46 |
kzaitsev_ws | =) | 15:46 |
openstackgerrit | Antoni Segura Puimedon proposed openstack/kuryr-kubernetes master: devstack: Don't assume the router ip https://review.openstack.org/489642 | 15:47 |
apuimedo | yboaron_: that's right | 15:47 |
apuimedo | we block on neutron giving us a floating IP | 15:47 |
apuimedo | then, once we have the lb vip, if the lbaasspec has an ext_ip | 15:48 |
apuimedo | we ask the ext_ip driver to associate it | 15:48 |
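The flow apuimedo describes above (block on Neutron handing out the floating IP, then have the ext_ip driver associate it with the LB VIP) could be sketched roughly as follows; `ExtIPDriver`, `ensure_external_ip` and the spec fields are illustrative stand-ins, not the actual kuryr-kubernetes API:

```python
# Rough sketch of the flow described above. ExtIPDriver and the spec
# fields are hypothetical stand-ins, not the real kuryr-kubernetes API.

class ExtIPDriver:
    """Toy external-IP driver: allocation blocks until 'Neutron' replies."""

    def allocate_any(self):
        # In the real driver this call would block on the Neutron API;
        # here we just return a canned floating IP.
        return "172.24.4.10"

    def associate(self, ext_ip, vip_port_id):
        # Associate the floating IP with the load balancer's VIP port.
        return (ext_ip, vip_port_id)


def ensure_external_ip(spec, driver, vip_port_id):
    # The external IP is stored in the service spec only after the
    # (blocking) allocation returns, matching the discussion above.
    if spec.get("ext_ip") is None:
        spec["ext_ip"] = driver.allocate_any()
    # Once the spec carries an ext_ip and we have the LB VIP, ask the
    # ext_ip driver to associate the two.
    return driver.associate(spec["ext_ip"], vip_port_id)
```

Because the result is written into the spec before association, a second call with the same spec reuses the stored IP rather than allocating again, which is one way to sidestep the double-allocation concern raised below.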
yboaron_ | we should maintain a state/in_progress field in the ext_ip driver to avoid a race condition, in case the service resource changes in k8s while we are waiting for Neutron | 15:50 |
apuimedo | yboaron_: which race condition? | 15:51 |
apuimedo | the clusterIP can't change | 15:51 |
apuimedo | you'd have to delete the service and create another | 15:51 |
apuimedo | and in that case, the ext ip will be freed | 15:51 |
apuimedo | and allocated again | 15:51 |
yboaron_ | until we get the reply from Neutron, LBaaSServiceSpec doesn't include ext_ip_address, right? | 15:52 |
yboaron_ | i'm talking about the allocate any case | 15:52 |
yboaron_ | what if the user updates/changes a field in the k8s service at this time? | 15:53 |
apuimedo | kzaitsev_ws: https://bugs.launchpad.net/kuryr-kubernetes/+bug/1681338 | 15:55 |
openstack | Launchpad bug 1681338 in kuryr-kubernetes "kuryr-k8s may not work properly in a multi-cni cluster" [High,Triaged] | 15:55 |
kzaitsev_ws | ah, I'm not even sure it's a bug at the moment | 15:56 |
kzaitsev_ws | ah | 15:56 |
apuimedo | yboaron_: I don't think the user can change the spec | 15:56 |
apuimedo | you can try it | 15:56 |
yboaron_ | apuimedo, OK , smart user | 15:56 |
kzaitsev_ws | right, we would allocate an IP even if the node is not handled by our CNI | 15:56 |
apuimedo | for pod you can't, the k8s api blocks you. For service I can't say. Haven't tried | 15:56 |
apuimedo | kzaitsev_ws: look at the comment I posted | 15:57 |
yboaron_ | apuimedo, I will | 15:57 |
kzaitsev_ws | k8s definitely knows if a node has cni configured or not | 15:59 |
kzaitsev_ws | it would be great if it exposed which CNI is present in the node object.. | 15:59 |
kzaitsev_ws | I'm not really sure it knows itself though at the moment | 16:00 |
openstackgerrit | Antoni Segura Puimedon proposed openstack/kuryr-kubernetes master: octavia: Make Octavia ready devstack https://review.openstack.org/489157 | 16:01 |
kzaitsev_ws | apuimedo: actually. if I were to architect a multi-cni cluster I would use taints | 16:01 |
apuimedo | kzaitsev_ws: like? | 16:02 |
kzaitsev_ws | apuimedo: I'd mark kuryr-nodes with kuryr-taint and other-cni nodes with other-cni-taint. And would expect pods to have tolerations for one of the networks (or both) | 16:04 |
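The taint-based scheme kzaitsev_ws sketches above can be illustrated with a minimal matching check. The taint keys and values below are invented for the sketch; they just follow the general shape Kubernetes uses for node taints and pod tolerations (`key`/`value`/`effect`, `Equal` operator):

```python
# Illustration of the idea above: kuryr nodes carry a kuryr taint,
# other-CNI nodes carry their own, and pods declare tolerations for
# whichever network(s) they accept. Key/values are made up.

kuryr_taint = {"key": "cni", "value": "kuryr", "effect": "NoSchedule"}
other_cni_taint = {"key": "cni", "value": "other-cni", "effect": "NoSchedule"}

# A pod that tolerates only kuryr-backed nodes.
pod_tolerations = [
    {"key": "cni", "operator": "Equal", "value": "kuryr",
     "effect": "NoSchedule"},
]


def tolerates(taint, tolerations):
    # A pod may schedule onto a tainted node only if some toleration
    # matches the taint's key, value and effect (Equal-operator case).
    return any(
        t.get("operator", "Equal") == "Equal"
        and t["key"] == taint["key"]
        and t["value"] == taint["value"]
        and t["effect"] == taint["effect"]
        for t in tolerations
    )
```

With this shape, a pod tolerating only the kuryr taint lands only on kuryr nodes, while a pod carrying tolerations for both taints can land on either, which is the "one network or both" case mentioned above.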
* kzaitsev_ws having second thoughts | 16:04 | |
*** egonzalez has quit IRC | 16:04 | |
apuimedo | kzaitsev_ws: I'd allow multi cni nodes | 16:04 |
apuimedo | once there is multinetwork that's easy to do | 16:05 |
apuimedo | gotta go pay a visit to Babička | 16:05 |
apuimedo | talk to you later guys | 16:05 |
kzaitsev_ws | in any case. I think it's safe to downgrade the bug to medium or even low. I'm not sure there's anyone who really needs that functionality ) | 16:05 |
apuimedo | ;-) | 16:05 |
kzaitsev_ws | at the moment at least | 16:05 |
kzaitsev_ws | apuimedo: later =) | 16:06 |
*** janki has quit IRC | 16:10 | |
ltomasbo | apuimedo, my stack also completed and it works too | 16:19 |
*** alraddarla_ has joined #openstack-kuryr | 16:32 | |
ltomasbo | bye guys! I'll be on PTO the next 20 days! Have a great summer! | 16:33 |
yboaron_ | ltomasbo, have FUN! | 16:33 |
ltomasbo | yboaron_, thanks! | 16:34 |
*** dougbtv_ has joined #openstack-kuryr | 16:40 | |
*** yboaron_ has quit IRC | 16:43 | |
*** tonanhngo has joined #openstack-kuryr | 16:50 | |
*** pcaruana has quit IRC | 16:58 | |
*** kural has quit IRC | 17:56 | |
*** garyloug has quit IRC | 18:04 | |
*** dougbtv_ has quit IRC | 18:09 | |
*** egonzalez has joined #openstack-kuryr | 18:30 | |
*** kzaitsev_ws has quit IRC | 18:43 | |
*** aojea has quit IRC | 20:39 | |
apuimedo | ltomasbo: man, you and dmellado are slackers! | 20:44 |
apuimedo | xD | 20:44 |
openstackgerrit | Antoni Segura Puimedon proposed openstack/kuryr-kubernetes master: octavia: Make Octavia ready devstack https://review.openstack.org/489157 | 20:46 |
*** aojea has joined #openstack-kuryr | 20:54 | |
*** yamamoto has joined #openstack-kuryr | 21:17 | |
*** yamamoto has quit IRC | 21:21 | |
*** yamamoto has joined #openstack-kuryr | 21:21 | |
*** yamamoto has quit IRC | 21:26 | |
*** egonzalez has quit IRC | 21:37 | |
apuimedo | irenab: Great work with the multinode devstack worker | 22:21 |
apuimedo | it will be very useful! | 22:21 |
*** yamamoto has joined #openstack-kuryr | 22:23 | |
*** yamamoto has quit IRC | 22:25 | |
*** yamamoto has joined #openstack-kuryr | 22:25 | |
*** kural has joined #openstack-kuryr | 22:46 | |
*** hongbin has quit IRC | 23:30 | |
*** aojea has quit IRC | 23:33 | |
*** aojea has joined #openstack-kuryr | 23:34 | |
*** aojea has quit IRC | 23:38 | |
*** gouthamr has quit IRC | 23:43 | |
*** kural has quit IRC | 23:50 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!