Tuesday, 2017-08-01

*** yamamoto has quit IRC00:01
*** yamamoto has joined #openstack-kuryr00:05
*** https_GK1wmSU has joined #openstack-kuryr00:08
*** https_GK1wmSU has left #openstack-kuryr00:10
*** caowei has joined #openstack-kuryr00:22
*** pmannidi has joined #openstack-kuryr00:31
*** kiennt has joined #openstack-kuryr00:32
*** kiennt has quit IRC00:47
*** kiennt has joined #openstack-kuryr00:51
*** limao has joined #openstack-kuryr00:52
*** yedongcan has joined #openstack-kuryr00:53
*** dougbtv_ has quit IRC00:58
*** yedongcan1 has joined #openstack-kuryr01:17
*** yedongcan has quit IRC01:18
*** aojea has joined #openstack-kuryr01:22
*** premsankar has quit IRC01:23
*** yamamoto has quit IRC01:25
*** aojea has quit IRC01:26
*** yamamoto has joined #openstack-kuryr01:29
*** yamamoto has quit IRC01:51
*** yamamoto has joined #openstack-kuryr01:56
*** aojea has joined #openstack-kuryr02:07
*** yamamoto has quit IRC02:07
*** aojea has quit IRC02:12
*** yamamoto has joined #openstack-kuryr02:19
*** hongbin has joined #openstack-kuryr02:21
*** https_GK1wmSU has joined #openstack-kuryr02:33
*** https_GK1wmSU has left #openstack-kuryr02:35
*** kiennt has quit IRC02:41
*** kiennt has joined #openstack-kuryr02:42
*** limao has quit IRC02:44
*** limao has joined #openstack-kuryr02:46
*** hongbin_ has joined #openstack-kuryr02:55
openstackgerritOpenStack Proposal Bot proposed openstack/kuryr-libnetwork master: Updated from global requirements  https://review.openstack.org/48455502:56
openstackgerritOpenStack Proposal Bot proposed openstack/kuryr-kubernetes master: Updated from global requirements  https://review.openstack.org/48801002:56
*** hongbin has quit IRC02:57
*** hongbin has joined #openstack-kuryr02:59
*** hongbin_ has quit IRC03:00
*** yamamoto has quit IRC03:01
*** yamamoto has joined #openstack-kuryr03:06
*** yamamoto has quit IRC03:21
*** yamamoto has joined #openstack-kuryr03:24
*** gouthamr has quit IRC03:28
*** yamamoto has quit IRC03:36
*** yamamoto has joined #openstack-kuryr03:44
*** yamamoto has quit IRC04:04
*** hongbin has quit IRC04:05
*** yamamoto has joined #openstack-kuryr04:09
*** yboaron_ has joined #openstack-kuryr04:21
*** caowei has quit IRC04:33
*** janki has joined #openstack-kuryr05:06
openstackgerritBerezovsky Irena proposed openstack/kuryr-kubernetes master: Add devstack support for multi-node deployment  https://review.openstack.org/48909805:29
*** caowei has joined #openstack-kuryr05:42
apuimedoirenab: did you see my answer to your comment?05:49
irenabyes, somehow missed it05:49
irenabI am running deployment now, once done will +205:49
irenabI updated the multi node patch to include some option for overcloud, but in general local.conf should be good for nested too05:50
irenabplease take a look05:50
apuimedothanks irenab05:54
apuimedoirenab: the multi node you sent was for overcloud already, wasn't it?05:55
irenabyes, was missing one line05:55
apuimedoah, ok05:55
apuimedoI'm having some connectivity problems to my cloud, so I couldn't try it yet05:56
irenabsure, take your time05:56
*** kiennt has quit IRC05:57
*** kiennt has joined #openstack-kuryr06:01
irenabapuimedo, ping06:06
apuimedoirenab: pong06:07
irenabhttps://pastebin.com/HXiEVzAc06:08
apuimedoirenab: that's odd06:10
apuimedois there a /opt/stack/octavia?06:10
irenabtrying to restack06:10
irenabnow seems to be cloning octavia, maybe glitches of the devstack06:12
apuimedoweird06:25
irenabit looks better now, still deploying06:27
apuimedook06:36
apuimedoirenab: reclone on or off?06:37
ltomasbotoni, I'm getting the same error06:38
ltomasboapuimedo, ^^06:39
apuimedoltomasbo: are you sure it's the latest patchset?06:39
ltomasboTook it from yesterday, but I'll pull the new ones and re-deploy (just in case)06:39
apuimedothanks ltomasbo06:42
ltomasboapuimedo, I got the same error06:59
ltomasboseems OCTAVIA_CONF_DIR is not set06:59
ltomasboI see there is no octavia dir being cloned either...07:00
irenabltomasbo, check the local.conf that octavia is set to true07:01
ltomasbois not set to true by default?07:01
ltomasbook, I don't see it there07:02
irenabfor me it seems to be in the loop waiting for the default/kubernetes to become active07:02
ltomasboumm, I see the local.conf was not updated, going to make the pull again!07:04
*** pcaruana has joined #openstack-kuryr07:06
irenabltomasbo, apuimedo devstack completed properly. Will try to run some app07:12
ltomasboirenab, great!07:26
ltomasbomine is still ongoing07:26
ltomasbobut now I see the octavia dir, so I assume it will work07:26
irenabit takes a lot of time to get service to active07:26
irenabapuimedo, any idea how to update openrc to source it for k8s project?07:27
kzaitsev_wsmy deployment ran out of space yesterday =(07:29
kzaitsev_wsso I'm re-deploying the octavia patch now too =)07:29
irenabltomasbo, kzaitsev_pi please try the multi node one as well07:29
ltomasboirenab, I use the --project tenant-id07:30
ltomasboand use the admin07:30
irenabltomasbo, run openstack commands with --project?07:32
ltomasboirenab, I'll try the multi-node one as soon as I finish other stuff and get some quota to create a couple of VMs, I hope I don't have problems with MTU for that one07:33
ltomasboyep, I run the commands like that07:33
ltomasbojust the ones to list the ports, loadbalancers, ...07:33
irenabgot it07:33
ltomasboit does not work to create stuff (I think)07:33
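ltomasbo's `--project` workflow above, written out as an illustrative command sketch; the project name and the resources listed are assumptions, and these commands need a live cloud with admin credentials, so they are not runnable as-is:

```shell
# Illustrative only: admin-scoped listings against the k8s project.
openstack port list --project k8s
openstack loadbalancer list --project k8s
```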
irenabapuimedo, I used the app you shared a few days ago (frontend + redis master and slave). There is ping from FE to the Redis pods, but no ping from FE to the Redis service07:35
*** garyloug has joined #openstack-kuryr07:52
*** egonzalez has joined #openstack-kuryr07:56
*** garyloug has quit IRC07:58
*** yboaron_ has quit IRC07:58
ltomasboapuimedo, do you know if all the octavia stuff works with ovs-firewall (instead of the hybrid one)?08:02
ltomasboapuimedo, it works for me08:09
ltomasboI see the loadbalancer port status as down08:09
ltomasbobut it is actually working08:09
ltomasboapuimedo, I can curl the exposed service from the qrouter namespace, but not from one pod to the other (which I guess could be the intended behavior)08:11
*** kural has joined #openstack-kuryr08:12
apuimedoI'm back08:14
*** garyloug has joined #openstack-kuryr08:14
apuimedoltomasbo: from one pod you can't curl the service?08:14
ltomasbothat's right08:15
irenabapuimedo, ping should work?08:15
apuimedoirenab: no, ping to the service shouldn't work08:15
apuimedoonly curl08:15
apuimedowell, if you ssh into octavia haproxy netns08:15
apuimedoyou should also be able to curl to the members08:15
apuimedoirenab: the long time waiting for default/k8s is normal08:15
apuimedothe first octavia LB takes a loooooooooooong time08:16
apuimedohence why I increase the timeout08:16
irenabjust for the sake of verification, how do you check the services?08:16
apuimedo"ltomasbo | apuimedo, it works for me" What does?08:16
apuimedoirenab: I started an unrelated pod08:17
apuimedoand curled the service08:17
irenabFE?08:18
irenabdevstack completed properly08:18
ltomasboapuimedo, I meant I can curl the service (and the pods) from the qrouter namespace08:18
ltomasbobut not from inside the pod08:18
ltomasbowell, from inside the pods I can curl the other pods08:18
ltomasbobut not through the exposed service, only directly on the pods' IPs08:19
apuimedoirenab: what's FE?08:20
irenabfront end08:20
apuimedoltomasbo: which SG is on the clusterip port and which on the other port of the service?08:20
apuimedoirenab: yes, from FE you should be able to curl BE service08:21
ltomasboapuimedo, I did not modify any sg, so the ones created by default with your local.conf08:26
ltomasbolet me check08:26
apuimedoltomasbo: please do. And if I can get access it will also be helpful08:27
apuimedoirenab: you tried with ovs or with df?08:28
irenabjust your patch, ovs08:28
ltomasboapuimedo, with clusterip you meant the lbaaspoirt?08:28
irenabthe local.conf you provided08:28
apuimedoltomasbo: the vip port08:28
apuimedothe one that is down (is handled by keepalived)08:28
apuimedoand also the one that is actually up in the service network08:28
apuimedoand the one in the pod subnet08:29
irenabfrom the front end pod, trying curl http://redis-service-ip. does not work08:29
ltomasboahh the octavia-lb-vrrp...08:29
apuimedoright08:29
ltomasboand sure, give me your ssh-key and I'll give you access08:29
apuimedoirenab: ltomasbo: maybe I missed some SG08:29
apuimedoltomasbo: sent08:30
ltomasboapuimedo, both lbaas and octavia-lb-vrrp have the same sg08:30
ltomasbobut the pods have 2 different sg, and they are different from the others08:30
apuimedoalright then08:30
*** aojea has joined #openstack-kuryr08:30
apuimedoI remember now08:31
apuimedo:-)08:31
apuimedoI manually changed the SG of the VIP port08:31
apuimedoand then was investigating the rest08:31
apuimedoand forgot this step in the patch08:31
apuimedo:/08:31
apuimedoltomasbo: what do you mean the 2 pods have different SG?08:31
apuimedoAll the ports have the same SG08:31
apuimedobut the octavia ports in the pod subnet have the 'default' of the admin project08:32
apuimedoand the ports of the pods the 'default' of the 'k8s' project08:32
ltomasboyep08:32
ltomasboI have 2 pods running08:32
ltomasboand they have 2 sg each (which are the same, I guess the default and the k8s)08:32
apuimedoltomasbo: right, this is correct08:33
ltomasboyep, I checked08:33
ltomasbothey are the default and the pod_pod_access08:33
ltomasbowhile the vip has the lb-XXX security group08:34
apuimedoirenab: ltomasbo: did you notice how nice that we don't need the neutron cli anymore?08:34
apuimedothe devstack includes 'openstack loadbalancer *' commands08:34
irenabcool08:34
irenabless dependencies, less confusion08:35
ltomasboohh, I did not realize that!08:35
apuimedoltomasbo: irenab: from what I see, the lb has a security group allowing the listener port08:36
apuimedofrom any origin08:36
irenabapuimedo, how did you verify? Maybe I am missing something08:37
apuimedoirenab: I searched the port with the VIP08:37
apuimedoand then did port show of it08:37
apuimedoto see the SG08:37
apuimedoand looked at the rules08:37
ltomasboapuimedo, I see it allows tcp on port 8008:37
irenabI mean that k8s is working08:37
ltomasboas ingress, right?08:37
irenabservices08:38
ltomasboperhaps it needs to be http instead?08:38
apuimedolet me check a moment08:38
irenabapuimedo, what I see is that if I curl 127.0.0.1 on the FE pod it works, but with the service_ip, it doesn't08:44
*** Guest14_ has joined #openstack-kuryr08:44
apuimedoirenab: right08:44
*** Guest14_ is now known as ajo08:45
apuimedonow I'm debugging in ltomasbo's machine08:45
apuimedoI see that the vip is well configured in the haproxy-amphora netns08:45
apuimedoirenab: ltomasbo: ok, at least something08:47
apuimedothe communication between the amphora netns and the lb members works08:47
apuimedonow I have to remember how I solved the pod -> LB08:47
apuimedosorry for the troubles08:47
apuimedoI forgot I had fixed this also when making the patch08:48
ltomasbo:D08:48
ltomasbono problem!08:48
irenabits ok08:50
irenabintegrating OpenStack projects is not expected to work the first time08:51
ltomasbothat's very true!08:52
apuimedo;-)08:52
apuimedothanks to you both08:52
apuimedoltomasbo: meh... I can't see why09:02
ltomasbo:D09:02
ltomasbotoo many security groups!09:02
apuimedoltomasbo: but do you see that the communication reached? In the curl09:02
ltomasbono, didn't check with tcpdump09:03
apuimedowith incorrect checksum09:03
apuimedolook at the tmux09:03
ltomasboohh09:03
ltomasbowith incorrect cksum??09:03
apuimedoyeah09:03
apuimedothis happened the other day09:03
ltomasboin the way back from the amphora?09:03
apuimedoand I think the fix I did was to change the SG09:03
apuimedoI don't see any communication on the way back09:04
ltomasbocould it be due to the http?09:05
ltomasboI see in the tmux the first one has the right cksum09:05
ltomasbobut not the following ones09:05
apuimedoltomasbo: ?09:06
ltomasbomaybe it is a bug somewhere in security group actually09:06
ltomasboor in neutron routing not properly changing some of the headers leading to a wrong cksum09:06
apuimedoltomasbo: did you change the local.conf in any way?09:07
apuimedomaybe to add the ovs firewall?09:07
ltomasbonop: cp /opt/stack/kuryr-kubernetes/devstack/local.conf.sample local.conf09:07
apuimedook09:07
ltomasboshould I try with ovs-firewall instead?09:07
apuimedono, no09:07
apuimedoit should work with hybrid09:08
apuimedoI got it working on Friday09:08
ltomasboahh, ok09:08
ltomasboweren't there different security group rules for ingress traffic from outside than from a different sec-group?09:09
ltomasboperhaps we need an extra one for that09:09
ltomasboapuimedo, should we add 'remote security group' from pods too?09:10
ltomasboat least to double check if that is the problem09:10
apuimedoltomasbo: the problem is most likely security groups09:10
ltomasboyep, that is my bet too09:11
apuimedobut we can't add the lb sgs to the pods, because they happen afterwards09:11
ltomasbono09:11
apuimedoltomasbo: here's the thing09:11
ltomasboI mean to add an extra rule to the lb sg09:11
apuimedothe lb SGs as we saw, allow communication09:11
ltomasboso that it accepts ingress traffic from pods09:11
ltomasbonot only from outside (that seems to be working)09:12
apuimedoon the IP for the port and they don't specify remote SG09:12
apuimedoltomasbo: you mean because of the router namespace test?09:12
ltomasboyep09:12
apuimedoltomasbo: the router port doesn't have security though09:13
apuimedoso it's cheating09:13
ltomasboahhh, that's true09:13
ltomasbocan I assign a floating ip to the vip and check09:13
apuimedoltomasbo: why is the rule list output so shitty09:13
apuimedo?09:13
apuimedoit doesn't even show direction09:14
apuimedoit pisses me off big time09:14
ltomasboapuimedo, perhaps you should use rule show09:14
apuimedoltomasbo: yes, for every rule09:14
apuimedonot very convenient at all09:14
ltomasboor use --long09:15
*** yboaron_ has joined #openstack-kuryr09:15
ltomasboapuimedo, so, with --long it shows address and direction09:16
apuimedoirenab: ltomasbo: I wonder if the fact that the vrrp port is in the admin project09:19
apuimedobut I would think not09:19
apuimedomeh09:21
apuimedoI added a rule that allows everything, but it didn't help09:21
ltomasboummm09:22
*** limao has quit IRC09:24
apuimedoltomasbo: I forgot... How do I see the tcpdump output as it happens as opposed to just when I abort it?09:24
ltomasboumm, that was supposed to be the default, right?09:25
apuimedoltomasbo: not in the ubuntu amphorae09:25
apuimedoxD09:25
ltomasboumm09:27
ltomasbothere is an immediate-mode, but not working either...09:27
ltomasbolet's try on the tap...09:27
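A hedged sketch for the buffered-output problem above: the interface name and filter are examples only, and `--immediate-mode` needs a newer libpcap than some amphora images may ship, so these are illustrative rather than runnable here.

```shell
# -l line-buffers stdout so matches appear as they happen when piped;
# --immediate-mode asks libpcap to deliver packets without batching.
tcpdump -l -n -i tap0 'tcp port 80'
tcpdump --immediate-mode -n -i tap0 'tcp port 80'
```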
*** yamamoto has quit IRC09:37
apuimedoltomasbo: if you use the mac addresses to match it you'll save guesswork09:39
ltomasboahh, I'm getting lost... :D09:40
ltomasbothe amphora VMs have 2 nics, right?09:40
ltomasbothey have 3 taps09:40
ltomasbowhy?09:40
apuimedoone for management network (used for ssh too)09:41
apuimedoone for service network09:41
apuimedoone for l2 connectivity to pods09:41
apuimedoltomasbo: that's the one09:43
ltomasboyep09:43
ltomasboand the problem is with the arp reply?09:44
ltomasbooui Unknown...09:44
ltomasboapuimedo, do you see that?09:48
apuimedowhat, sorry?09:48
ltomasbothe arp reply arrives to the pods port with a different IP09:49
ltomasbo10.0.0.126 instead of 10.0.0.14809:49
ltomasbocould that be the problem?09:49
ltomasbothat is even a different subnet, right?09:49
apuimedowtf09:51
ltomasboahh, no wait, it is asking for 126 directly09:51
ltomasbonot sure why though09:51
apuimedothat's the router port, isn't it?09:51
ltomasboumm, don't know, let's see09:51
ltomasboyes, it is09:52
apuimedothen it is normal, I'd say09:52
ltomasboyep yep. I got confused by the ip range09:53
ltomasbothe pods are in .6409:53
ltomasboso, it is asking for the right one, and the service is in the .12809:53
apuimedobrrr09:55
apuimedowth is going on09:55
apuimedoltomasbo: I'll make a 'hack'09:55
ltomasbo:D09:56
ltomasbogo ahead09:56
ltomasboit is a 'disposable' deployment, so, if it breaks, we re-stack!09:57
*** yamamoto has joined #openstack-kuryr09:57
apuimedook09:59
apuimedothanks09:59
*** caowei has quit IRC10:05
openstackgerritJaivish Kothari(janonymous) proposed openstack/kuryr-kubernetes master: Kubernetes client usage from upstream K8s repo  https://review.openstack.org/45455510:08
*** yamamoto has quit IRC10:08
*** kiennt has quit IRC10:12
apuimedoltomasbo: pfffff10:13
apuimedocan't find it10:13
ltomasbo:D10:13
ltomasboI don't know octavia that much, so I can't figure out why it is happening either...10:14
ltomasboquestion is, is it happening just for me? irenab, is yours working?10:14
apuimedoI'd imagine it is the same10:15
apuimedoltomasbo: lol10:21
apuimedoit doesn't even reply from the amphora netns10:22
apuimedowhen you do curl 10.0.0.14810:22
ltomasboumm10:25
*** yamamoto has joined #openstack-kuryr10:25
ltomasbothen it is not security groups...10:26
apuimedono, most likely not10:31
apuimedook. I'll try one thing more10:32
*** pmannidi has quit IRC10:38
apuimedoltomasbo: irenab: fucking hell10:38
apuimedoI manually changed to l3 mode and it worked10:38
apuimedoxD10:38
ltomasbowhat?10:39
apuimedoltomasbo: I removed the members10:41
apuimedoand I added them without specifying subnet10:41
apuimedoso that no lb port is created in the port subnet10:41
ltomasbowas the local.conf not forcing l2 instead of l3 mode?10:42
apuimedoltomasbo: right, because our handler code does not support l3 mode yet10:42
apuimedoso I did manual l3 mode by manually creating the members10:42
ltomasboahh, ok10:42
ltomasbonow I get it10:42
apuimedoso at least I know that the L3 part of my devstack is correct10:43
apuimedofor l2 there must be something missing10:43
apuimedoI wonder why10:43
ltomasboso, then we get back to the security group issue?10:43
ltomasboor even some bug with ovs10:43
apuimedoltomasbo: yes. I'll create L2 one now10:43
apuimedoand see what happens10:43
ltomasbogreat!10:43
ltomasbogood luck! :D10:43
apuimedomay the source be with me10:44
*** yedongcan1 has left #openstack-kuryr10:49
apuimedoltomasbo: L2 LB is up10:51
apuimedobut no success10:51
apuimedosame behavior10:51
apuimedoLB can curl pods10:51
apuimedobut nobody can curl LB10:51
ltomasboumm10:52
ltomasbostrange10:52
ltomasbocan you try adding ping, and check if that works?10:53
openstackgerritJaivish Kothari(janonymous) proposed openstack/kuryr-kubernetes master: Kubernetes client usage from upstream K8s repo  https://review.openstack.org/45455510:58
apuimedoltomasbo: ping to what? to the LB?10:59
ltomasboyep11:00
ltomasboalthough the traffic is getting to the namespace, right?11:00
ltomasboit is just that it is not coming back, right?11:00
ltomasbos/namespace/amphora-proxy namespace inside the amphora vm11:01
irenabapuimedo, maybe worth to push l3 then, and keep l2 for the follow-up11:06
irenab?11:06
*** yamamoto has quit IRC11:17
*** yamamoto has joined #openstack-kuryr11:32
janonymouszengchen: kzaitsev_pi  updated  https://review.openstack.org/#/c/454555/11:32
*** zengchen has quit IRC11:35
*** zengchen has joined #openstack-kuryr11:36
*** atoth has joined #openstack-kuryr11:47
*** yamamoto has quit IRC11:49
apuimedoltomasbo: I don't see it going from LB to members12:08
apuimedoirenab: yes. I'm starting to think that way12:08
apuimedoto make l3 and file a blueprint for L212:08
apuimedoI'm also tempted to make it octavia only, but that would probably be too much12:09
irenabit may be too heavy sometimes12:10
apuimedoit just pisses me off a bit to have to pass the service subnet when with octavia it is not necessary12:10
apuimedoirenab: you are right, of course12:10
apuimedoirenab: ltomasbo: so how do we go about it? Do I add L3 support in a follow-up patch, or should I add it here in the same one?12:11
irenabThe current one seems to be not working, so I guess L3 should come first12:12
apuimedoirenab: alright. So I'll add it to the current patch, which already added the l3 pieces in devstack12:12
apuimedowhat a bother12:12
apuimedo:/12:12
ltomasbo:/12:13
ltomasboI agree with irenab, if this is not working now, we should go for l3 first12:13
ltomasbothat will mean some modifications to ivc work on lbaas, right?12:14
apuimedoltomasbo: yes12:15
apuimedoirenab: ltomasbo: There's two ways to do this12:15
apuimedoa) I change the code to always pass the service subnet as only L3 works12:15
apuimedob) I add a config option under [kubernetes] named service_to_pod_connectivity defaulting to 'layer3' with 'layer2' being the other value and make if/else in the lbaas handler to pass either pod or service subnet when creating the member12:17
irenabless if/else, less options to fail12:18
irenabI would go for passing always and not using for l312:19
irenabbut up to you12:19
apuimedoirenab: what do you mean with 'passing always'?12:19
apuimedonot making it configurable for now and use the service subnet always?12:19
ltomasbodo we have any lbaas section?12:19
irenabltomasbo, I guess we will add one12:20
ltomasboif that so, I would include that inside lbaas, rather than kuberentes12:20
ltomasboirenab, xD12:20
irenabapuimedo, I meant (1)12:20
irenaba12:20
irenab(a) in your list if I got it correctly12:21
apuimedoltomasbo: no, we don't12:21
apuimedoirenab: ltomasbo: okay then. I go with (a) and we'll make it configurable when and if we add L2 mode12:21
apuimedodoes that sound right to you?12:21
irenab+112:21
apuimedokzaitsev_ws: vikasc: your opinion would also be helpful12:21
ltomasbosound good to me12:23
vikasc+112:24
apuimedothanks12:31
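The two options weighed above can be sketched as a tiny helper. This is hypothetical: the `layer3`/`layer2` values come from apuimedo's proposed `service_to_pod_connectivity` option, but the function and argument names are illustrative, not kuryr's real lbaas handler API. Option (a), which the channel agreed on, corresponds to taking the `layer3` branch unconditionally.

```python
# Hypothetical sketch of option (b) from the discussion: a config value
# choosing which subnet the LBaaS member is created on. Option (a) is
# the "layer3" branch only.
def member_subnet(service_subnet_id, pod_subnet_id,
                  connectivity="layer3"):
    if connectivity == "layer3":
        # L3 mode: members are reached via routing, so the member is
        # created against the service subnet and no LB port appears in
        # the pod subnet.
        return service_subnet_id
    if connectivity == "layer2":
        # L2 mode: the amphora gets a port directly in the pod subnet.
        return pod_subnet_id
    raise ValueError("unknown connectivity mode: %s" % connectivity)
```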
*** alraddarla has joined #openstack-kuryr12:33
irenabapuimedo, I am trying to deploy a multi node app, and also see some issues with reaching the service, this is before your patch12:36
apuimedoirenab: with haproxy lbaasv2?12:36
irenabyes12:36
irenabbut I have DF for SDN, so many potential points where it can go wrong12:36
apuimedoI wonder if it is not from some devstack change changing projects12:36
apuimedowe moved from the demo to the k8s one12:37
apuimedothere may be unresolved crap in that12:37
irenabapuimedo, maybe, not sure we have proper testing to make sure it worked after the change12:37
apuimedoyeah12:37
irenabin case of the overcloud heat, should the loadbalancer have the heat-created SG or the default one?12:43
kzaitsev_wsapuimedo: always pass service subnet where?12:45
kzaitsev_wslooks ok'ish to pass smth to the handler and not always use it. we have that kind of pattern all over the code =)))12:47
apuimedokzaitsev_ws: to the member creation12:48
*** yamamoto has joined #openstack-kuryr12:49
kzaitsev_wshm. if I got it right l2 doesn't work yet anyway, right?12:50
apuimedokzaitsev_ws: right12:51
apuimedoirenab: in order to try it, maybe you can change devstack/settings to specify demo project again12:52
* kzaitsev_ws imagining that I know nothing about the topic12:54
kzaitsev_wsif we have two options and only one works at the moment — feels logical to add a config variable after/when we make the 2nd one work too12:55
*** yamamoto has quit IRC12:59
openstackgerritAntoni Segura Puimedon proposed openstack/kuryr-kubernetes master: octavia: Make Octavia ready devstack  https://review.openstack.org/48915713:01
apuimedokzaitsev_ws: irenab: ltomasbo: here it goes13:01
apuimedothis will probably work for l313:01
apuimedoI'll try it now13:02
apuimedolet's see if it works13:02
*** gouthamr has joined #openstack-kuryr13:03
ltomasboapuimedo, thanks! I'll try it later too13:03
apuimedothanks13:04
*** alraddarla has left #openstack-kuryr13:05
irenabapuimedo, I will try tomorrow, need to go now13:08
apuimedothanks irenab13:09
openstackgerritOpenStack Proposal Bot proposed openstack/fuxi master: Updated from global requirements  https://review.openstack.org/48352213:27
*** aojea has quit IRC13:29
*** janki has quit IRC13:29
*** aojea has joined #openstack-kuryr13:29
*** hongbin has joined #openstack-kuryr13:38
openstackgerritAntoni Segura Puimedon proposed openstack/kuryr-kubernetes master: octavia: Make Octavia ready devstack  https://review.openstack.org/48915713:58
*** janki has joined #openstack-kuryr14:11
apuimedoltomasbo: kzaitsev_ws: ^^ works14:44
apuimedoI just finished testing it14:44
ltomasbogreat!14:45
openstackgerritAntoni Segura Puimedon proposed openstack/kuryr-kubernetes master: devstack: fix ovs_bind device to be on the pod net  https://review.openstack.org/48962914:53
apuimedoyboaron_: I think I know how to modify the lbaasspec for loadbalancerip14:57
apuimedohttps://github.com/openstack/kuryr-kubernetes/blob/master/kuryr_kubernetes/controller/handlers/lbaas.py#L9014:57
apuimedoyboaron_: I'd add here something like14:57
apuimedoext_ip = self._drv_ext_ip.get_ext_ip(service, project_id)14:58
apuimedothen, if that returns None14:58
apuimedoyou don't put anything14:58
apuimedoin the LBaaSServiceSpec14:59
apuimedothe _drv_ext_ip sees the service spec and decides what to do about user specified vs "allocate any"14:59
apuimedothen, if it is not None14:59
apuimedoyou put a ext_ip field to the lbaasservicespec15:00
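The flow apuimedo sketches above, pulled together as a minimal illustration. Everything here is an assumption drawn from the chat: `get_ext_ip`, the pool-based driver, and the `ext_ip` field name are stand-ins, not kuryr's actual `_drv_ext_ip` interface or LBaaSServiceSpec.

```python
# Minimal sketch of the external-IP flow described above: the driver
# looks at the service spec and decides between a user-specified IP
# and "allocate any"; returning None means no external IP goes into
# the spec. All names are hypothetical.
class PoolExtIPDriver:
    def __init__(self, pool):
        self.pool = list(pool)

    def get_ext_ip(self, service, project_id):
        wanted = service.get("loadBalancerIP")
        if wanted:
            return wanted            # user-specified floating IP
        if self.pool:
            return self.pool.pop(0)  # "allocate any": take from a pool
        return None                  # nothing available: leave unset


def build_spec(service, driver, project_id="k8s"):
    spec = {}
    ext_ip = driver.get_ext_ip(service, project_id)
    if ext_ip is not None:
        spec["ext_ip"] = ext_ip
    return spec
```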
yboaron_apuimedo, Can drivers access (I mean in terms of SW design) k8s resource fields? I thought that drivers should only use information stored in annotations15:17
*** pcaruana has quit IRC15:22
kzaitsev_wsyboaron_: what's the big difference between accessing annotations and resources?15:22
yboaron_kzaitsev_ws, no difference, I thought that from a SW design and coupling standpoint a kuryr driver should use information stored in annotations15:24
apuimedoyboaron_: they can access the fields they get passed15:24
apuimedoin this case, it gets passed the service spec15:24
apuimedoyboaron_: to make the annotation, you can use a driver15:25
apuimedoso that using different drivers gives you a different ext_ip in this case15:25
apuimedofor example one driver could create and remove fips, the other one maintain a pool15:26
yboaron_apuimedo, OK got it15:26
apuimedo;-)15:26
yboaron_apuimedo , kzaitsev_ws :10x15:29
kzaitsev_ws10x?15:31
yboaron_kzaitsev_ws, 10c for commenting  :-)15:32
yboaron_kzaitsev_ws, I need to adjust to community ...15:33
kzaitsev_wsoh. got it. 1st time I see '10x' used this way )15:33
yboaron_kzaitsev_ws, thought that was the first time you saw thanks in the kuryr channel ...15:34
yboaron_:-)15:34
kzaitsev_ws<offtopic> New album by Arcadi Fire is pretty good15:34
kzaitsev_ws*Arcade15:35
apuimedoArcadi Fire?15:35
apuimedonever heard of it15:35
apuimedokzaitsev_ws: I usually work more to things like this https://www.youtube.com/watch?v=HIThQKypY4I15:36
kzaitsev_wsOMG, seriously? =)15:37
kzaitsev_wsthat's way too cool for me15:38
kzaitsev_wsI have nothing to top that =)15:39
*** pcaruana has joined #openstack-kuryr15:40
apuimedokzaitsev_ws: well, sometimes I play other things15:40
apuimedowhen I need a lot from my poor brain I listen to Die Zauberflöte15:40
apuimedoand when devstack works well...15:40
apuimedohttps://www.youtube.com/watch?v=0DFsF_0tfiM15:41
kzaitsev_wswhat next? The Rite of Spring?15:41
apuimedokzaitsev_ws: polovtsian dances from Borodin maybe15:42
apuimedoor bach's cello suites by Casals or Rostropovich15:43
kzaitsev_wsI can't really just listen to classical music =( makes me really sleepy. I love opera and ballet. One autumn my wife and I went to a number of performances at the Bolshoi15:43
apuimedoI'd like to go to the Bolshoi as well15:43
kzaitsev_wsbut then we decided to go to the Main Stage (to see it after reconstruction).15:44
apuimedokzaitsev_ws: this one gets played a lot while I work https://www.youtube.com/watch?v=76R0N2GN6Jo15:44
apuimedokzaitsev_ws: For Opera I usually stick to Puccini15:44
apuimedowith a bit of Mozart and Bizet :P15:44
apuimedobut you can't work to Carmen15:44
kzaitsev_wsand the prices were rather high, but there was a cheap enough concert with just the orchestra playing different pieces. So we thought — why not? a great opportunity to see the big stage.15:45
kzaitsev_wsI almost slept through it %)15:45
yboaron_apuimedo, so in the case of 'allocate any', the external/floating ip will be stored in LBaaSServiceSpec only after we receive the reply from neutron, right?15:45
kzaitsev_wsyboaron_: you had to close </offtopic> first!15:46
kzaitsev_ws=)15:46
openstackgerritAntoni Segura Puimedon proposed openstack/kuryr-kubernetes master: devstack: Don't assume the router ip  https://review.openstack.org/48964215:47
apuimedoyboaron_: that's right15:47
apuimedowe block on neutron giving us a floating IP15:47
apuimedothen, once we have the lb vip, if the lbaasspec has an ext_ip15:48
apuimedowe ask the ext_ip driver to associate it15:48
yboaron_we should maintain a state/in_progress field in the ext_ip driver to avoid a race condition, in case the service resource changes in k8s while we are waiting for neutron15:50
apuimedoyboaron_: which race condition?15:51
apuimedothe clusterIP can't change15:51
apuimedoyou'd have to delete the service and create another15:51
apuimedoand in that case, the ext ip will be freed15:51
apuimedoand allocated again15:51
yboaron_until we get a reply from neutron, LBaaSServiceSpec doesn't include ext_ip_address, right?15:52
yboaron_i'm talking about the allocate any case15:52
yboaron_what if the user updates/changes a field in the k8s service at this time?15:53
apuimedokzaitsev_ws: https://bugs.launchpad.net/kuryr-kubernetes/+bug/168133815:55
openstackLaunchpad bug 1681338 in kuryr-kubernetes "kuryr-k8s may not work properly in a multi-cni cluster" [High,Triaged]15:55
kzaitsev_wsah, I'm not even sure it's a bug at the moment15:56
kzaitsev_wsah15:56
apuimedoyboaron_: I don't think the user can change the spec15:56
apuimedoyou can try it15:56
yboaron_apuimedo, OK , smart user15:56
kzaitsev_wsright, we would allocate an ip even if the node is not handled by our cni15:56
apuimedofor pod you can't, the k8s api blocks you. For service I can't say. Haven't tried15:56
apuimedokzaitsev_ws: look at the comment I posted15:57
yboaron_apuimedo, I will15:57
kzaitsev_wsk8s definitely knows if a node has cni configured or not15:59
kzaitsev_wswould be great if it would expose information on which cni is there in the node object..15:59
kzaitsev_wsI'm not really sure it knows itself though at the moment16:00
openstackgerritAntoni Segura Puimedon proposed openstack/kuryr-kubernetes master: octavia: Make Octavia ready devstack  https://review.openstack.org/48915716:01
kzaitsev_wsapuimedo: actually. if I were to architect a multi-cni cluster I would use taints16:01
apuimedokzaitsev_ws: like?16:02
kzaitsev_wsapuimedo: I'd mark kuryr-nodes with kuryr-taint and other-cni nodes with other-cni-taint. And would expect pods to have tolerations for one of the networks (or both)16:04
* kzaitsev_ws having second thoughts16:04
*** egonzalez has quit IRC16:04
apuimedokzaitsev_ws: I'd allow multi cni nodes16:04
apuimedoonce there is multinetwork that's easy to do16:05
apuimedogotta go pay a visit to Babička16:05
apuimedotalk to you later guys16:05
kzaitsev_wsin any case. I think it's safe to downgrade the bug to medium or even low. I'm not sure there's anyone who really needs that functionality )16:05
apuimedo;-)16:05
kzaitsev_wsat the moment at least16:05
kzaitsev_wsapuimedo: later =)16:06
*** janki has quit IRC16:10
ltomasboapuimedo, my stack also completed and it works too16:19
*** alraddarla_ has joined #openstack-kuryr16:32
ltomasbobye guys! I'll be on PTO the next 20 days! Have a great summer!16:33
yboaron_ltomasbo, have FUN!16:33
ltomasboyboaron_, thanks!16:34
*** dougbtv_ has joined #openstack-kuryr16:40
*** yboaron_ has quit IRC16:43
*** tonanhngo has joined #openstack-kuryr16:50
*** pcaruana has quit IRC16:58
*** kural has quit IRC17:56
*** garyloug has quit IRC18:04
*** dougbtv_ has quit IRC18:09
*** egonzalez has joined #openstack-kuryr18:30
*** kzaitsev_ws has quit IRC18:43
*** aojea has quit IRC20:39
apuimedoltomasbo: man, you and dmellado are slackers!20:44
apuimedoxD20:44
openstackgerritAntoni Segura Puimedon proposed openstack/kuryr-kubernetes master: octavia: Make Octavia ready devstack  https://review.openstack.org/48915720:46
*** aojea has joined #openstack-kuryr20:54
*** yamamoto has joined #openstack-kuryr21:17
*** yamamoto has quit IRC21:21
*** yamamoto has joined #openstack-kuryr21:21
*** yamamoto has quit IRC21:26
*** egonzalez has quit IRC21:37
apuimedoirenab: Great work with the multinode devstack worker22:21
apuimedoit will be very useful!22:21
*** yamamoto has joined #openstack-kuryr22:23
*** yamamoto has quit IRC22:25
*** yamamoto has joined #openstack-kuryr22:25
*** kural has joined #openstack-kuryr22:46
*** hongbin has quit IRC23:30
*** aojea has quit IRC23:33
*** aojea has joined #openstack-kuryr23:34
*** aojea has quit IRC23:38
*** gouthamr has quit IRC23:43
*** kural has quit IRC23:50

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!