Tuesday, 2017-09-19

*** lakerzhou has quit IRC  [00:04]
*** oanson has quit IRC  [00:08]
*** oanson has joined #openstack-kuryr  [00:09]
*** caowei has joined #openstack-kuryr  [00:53]
*** lakerzhou has joined #openstack-kuryr  [00:58]
*** lakerzhou1 has joined #openstack-kuryr  [01:01]
*** lakerzhou has quit IRC  [01:04]
*** kiennt has joined #openstack-kuryr  [01:08]
*** hongbin has joined #openstack-kuryr  [01:56]
*** wangbo has joined #openstack-kuryr  [02:07]
*** hongbin has quit IRC  [02:43]
*** hongbin has joined #openstack-kuryr  [02:49]
*** robust has quit IRC  [03:10]
*** lakerzhou1 has quit IRC  [03:33]
*** hongbin has quit IRC  [03:51]
*** kiennt has quit IRC  [03:53]
*** janki has joined #openstack-kuryr  [04:32]
*** wangbo has quit IRC  [04:35]
*** caowei has quit IRC  [04:44]
*** aojea has joined #openstack-kuryr  [04:48]
*** aojea has quit IRC  [04:53]
<openstackgerrit> Jaivish Kothari(janonymous) proposed openstack/kuryr-kubernetes master: CNI Daemon template  https://review.openstack.org/480028  [04:54]
*** yamamoto has quit IRC  [04:57]
*** wangbo has joined #openstack-kuryr  [05:04]
<janonymous> apuimedo: `/msg Chanserv OP #openstack-kuryr apuimedo`  [05:05]
<janonymous> apuimedo: `/topic https://etherpad.openstack.org/p/kuryr-queens-vPTG`  [05:05]
*** yamamoto has joined #openstack-kuryr  [05:07]
*** caowei has joined #openstack-kuryr  [05:10]
*** wangbo has quit IRC  [05:30]
*** wangbo has joined #openstack-kuryr  [05:39]
*** aojea has joined #openstack-kuryr  [05:46]
*** aojea has quit IRC  [05:52]
*** wangbo has quit IRC  [05:54]
*** aojea has joined #openstack-kuryr  [05:56]
*** aojea has quit IRC  [06:05]
*** apuimedo has quit IRC  [06:05]
*** huats_ has quit IRC  [06:05]
*** apuimedo has joined #openstack-kuryr  [06:05]
*** wangbo has joined #openstack-kuryr  [06:08]
*** deepika08 has quit IRC  [06:08]
*** aojea has joined #openstack-kuryr  [06:10]
*** huats_ has joined #openstack-kuryr  [06:10]
*** c00281451_ has quit IRC  [06:16]
*** jchhatbar has joined #openstack-kuryr  [06:22]
*** jchhatbar has quit IRC  [06:23]
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Avoid port update neutron call during pods boot up  https://review.openstack.org/504915  [06:23]
*** jchhatbar has joined #openstack-kuryr  [06:23]
*** janki has quit IRC  [06:24]
*** janki has joined #openstack-kuryr  [06:28]
*** jchhatbar has quit IRC  [06:31]
*** wangbo has quit IRC  [06:37]
*** wangbo has joined #openstack-kuryr  [06:39]
*** kiennt has joined #openstack-kuryr  [06:48]
*** wangbo has quit IRC  [06:57]
<livelace2> ltomasbo, hello, BTW, is there any way to set the port name the way openshift does for services (<svc>.<namespace>.cluster.local), like <namespace>-<pod>?  [06:58]
*** wangbo has joined #openstack-kuryr  [06:58]
<ltomasbo> hi livelace2  [06:59]
<livelace2> it would be more convenient for recognizing the ports  [06:59]
<ltomasbo> we have the pod name, and I guess we could add the namespace, yes, but there is no option to enable that at the moment (at least none that I know of)  [06:59]
<livelace2> ok  [07:00]
<ltomasbo> however, in the patch that I proposed yesterday I enabled the option to not set any name at all  [07:01]
<ltomasbo> as nova does when booting VMs  [07:01]
<ltomasbo> in order to speed up the booting process  [07:01]
<livelace2> It's cool too  [07:04]
<livelace2> ltomasbo, could you answer the question I asked apuimedo: "Do pods work well in your environment on one node?"  [07:09]
<ltomasbo> livelace2, what do you mean by one node?  [07:10]
<ltomasbo> in a deployment with just one server/VM?  [07:10]
<ltomasbo> yep, what problem do you have?  [07:10]
<ltomasbo> nested/baremetal?  [07:10]
<livelace2> I meant that there is one working node where two Pods run; do they have any network connectivity problems?  [07:11]
<ltomasbo> not that I'm aware of  [07:12]
<ltomasbo> are you using ml2/ovs?  [07:12]
<livelace2> yes  [07:12]
<ltomasbo> and the pods are nested (inside a VM) or baremetal?  [07:13]
<ltomasbo> dulek, I'm testing your patches for the containerized deployment and I'm getting this: http://paste.openstack.org/show/621390/  [07:13]
<ltomasbo> dulek, does it ring a bell?  [07:14]
<ltomasbo> livelace2, and by network connectivity problem you mean that they cannot reach each other, or that they cannot reach the internet?  [07:15]
<dulek> ltomasbo: It does. Are you using containers that you've built yourself?  [07:15]
<ltomasbo> dulek, you mean for the cni and controller?  [07:16]
<ltomasbo> I just used devstack to deploy them  [07:16]
*** deepika08 has joined #openstack-kuryr  [07:16]
<ltomasbo> with KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True  [07:17]
<dulek> ltomasbo: Okay, so they should be built. I might have made a mistake that didn't show up on my system, because it wasn't fresh. Let me check.  [07:17]
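
For reference, the switch ltomasbo quotes is a DevStack local.conf setting. A minimal sketch of the relevant localrc lines, assuming the usual kuryr-kubernetes DevStack plugin setup (the enable_plugin line is an assumption; only the KURYR_K8S_CONTAINERIZED_DEPLOYMENT flag appears in the log):

    [[local|localrc]]
    # assumed: the standard kuryr-kubernetes DevStack plugin hook
    enable_plugin kuryr-kubernetes https://git.openstack.org/openstack/kuryr-kubernetes
    # the flag quoted above: run kuryr-controller and kuryr-cni as containers
    KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True
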
<ltomasbo> dulek, I see this:  [07:17]
<ltomasbo> ls /opt/stack/cni/bin/  [07:17]
<ltomasbo> loopback  [07:17]
<ltomasbo> should there be only loopback there?  [07:18]
<livelace2> ltomasbo, the network completely  [07:18]
<dulek> ltomasbo: Nope, looks like the CNI installation didn't put it in there.  [07:18]
<ltomasbo> livelace2, what if you create a VM? does it have connectivity?  [07:19]
<dulek> ltomasbo: and "ls /etc/cni/net.d"?  [07:19]
<ltomasbo> dulek, 10-kuryr.conf  99-loopback.conf  [07:19]
<dulek> ltomasbo: Curious. Can you check whether the kuryr-cni pod in the kube-system namespace is failing?  [07:20]
<dulek> ltomasbo: And if so - take a look at its logs.  [07:20]
<ltomasbo> dulek, it says 0 restarts  [07:20]
<ltomasbo> and describing the pod just says it started normally  [07:21]
<dulek> ltomasbo: "ls /opt/cni/bin"?  [07:22]
<ltomasbo> kuryr-cni  kuryr-cni-bin  [07:22]
<dulek> ltomasbo: There they are!  [07:22]
<ltomasbo> xD  [07:22]
*** pcaruana has joined #openstack-kuryr  [07:22]
<ltomasbo> where should they be? in /opt/stack/cni?  [07:22]
<dulek> ltomasbo: Actually I wonder why your CNI binaries path is /opt/stack/cni. I thought /opt/cni/bin was the standard.  [07:23]
<dulek> ltomasbo: Setting those should help you: https://github.com/openstack/kuryr-kubernetes/blob/master/cni.Dockerfile#L10-L11  [07:24]
<ltomasbo> it seems it looks into /opt/kuryr-cni/bin instead of /opt/cni/bin  [07:25]
<ltomasbo> dulek, ok, and then rebuild the cni, right?  [07:25]
<dulek> ltomasbo: Yes.  [07:26]
<ltomasbo> although those are already in the cni.Dockerfile...  [07:26]
<dulek> ltomasbo: Rather, it looks into /opt/stack/cni instead of /opt/cni/bin. Is /opt/stack/cni what's used in DevStack? If so we need to make DevStack build the containers with the changed option.  [07:26]
* dulek needs to check.  [07:27]
<ltomasbo> dulek, yep, probably it does  [07:27]
<ltomasbo> dulek, thanks! I'll check too whether it is just a matter of placing them in the right place!  [07:28]
<ltomasbo> now I can play at rebuilding them and learn a bit more than if it had worked on the first attempt!  [07:28]
<ltomasbo> xD  [07:28]
<dulek> ltomasbo: Awesome. I'll analyze the DevStack plugin to determine the correct path. The fix should be trivial.  [07:28]
<ltomasbo> yep, I guess so!  [07:29]
*** janki is now known as janki|lunch  [07:45]
<ltomasbo> dulek, umm, I tried the following without success:  [07:49]
<ltomasbo> ./tools/build_cni_daemonset_image  [07:49]
<ltomasbo> kill the cni pod  [07:49]
<ltomasbo> and create a new pod after that  [07:49]
<ltomasbo> same error, looking at the wrong location  [07:49]
<dulek> ltomasbo: You've changed the ENVs in the Dockerfile, right?  [07:49]
<ltomasbo> and now I've copied kuryr-cni and kuryr-cni-bin to /opt/stack/cni/bin  [07:49]
<ltomasbo> and I'm getting this:   23s   8s      3       kubelet, masterovn-aaa51502522f47e483ea0192acc5004722d2f306vm-0         Warning FailedSync      Error syncing pod, skipping: failed to "KillPodSandbox" for "d8b1ef91-9d0e-11e7-ba79-fa163ef67899" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"demo-2293951457-dlfk2_default\" network: netplugin failed but error parsing its diagnostic  [07:50]
<ltomasbo> message \"\": unexpected end of JSON input"  [07:50]
<ltomasbo> dulek, no, I did not change them  [07:50]
<ltomasbo> you mean change them to /opt/stack/cni??  [07:50]
<dulek> ltomasbo: You could, or add build-args to the build command, or add envs in the kuryr-cni daemonset k8s manifest. :)  [07:51]
<dulek> Anyway, copying should help.  [07:51]
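
dulek lists three options here: edit the ENVs in cni.Dockerfile, pass build args at build time, or override the envs in the kuryr-cni daemonset manifest. A sketch of the build-time route, assuming cni.Dockerfile exposes the two directories as build arguments (the names below are inferred from the bin_dir_path reference to cni_builder later in the log and may not match the file exactly):

    # rebuild the CNI image with DevStack's non-standard paths baked in
    # (variable names are an assumption; check cni.Dockerfile for the exact ARG/ENV names)
    docker build -t kuryr/cni -f cni.Dockerfile \
        --build-arg CNI_BIN_DIR_PATH=/opt/stack/cni/bin \
        --build-arg CNI_CONFIG_DIR_PATH=/etc/cni/net.d .
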
<dulek> And this JSON error is something new to me. Have you tried looking at the kubelet logs?  [07:52]
<apuimedo> livelace2: ltomasbo: it was this setup, right? https://ibb.co/fYC2B5  [07:52]
*** egonzalez has joined #openstack-kuryr  [07:52]
<ltomasbo> yep, I wonder why it is not working with the copy  [07:52]
<ltomasbo> maybe I copied after the cni was there, let me try again  [07:53]
<livelace2> apuimedo, Yes  [07:55]
<ltomasbo> dulek, kubelet journal: http://paste.openstack.org/show/621393/  [07:56]
<apuimedo> livelace2: ltomasbo: so a21/ns and a22/ns can't talk to either 172.16.2.8 or 172.16.2.9?  [07:57]
<dulek> ltomasbo: Interestingly enough, this isn't happening in the gate: http://logs.openstack.org/33/503733/7/check/gate-tempest-dsvm-lbaasv2-kuryr-kubernetes-ubuntu-xenial-nv/9f14dd1/logs/screen-kubelet.txt.gz  [07:58]
<livelace2> apuimedo, No, 2.9 is reachable, but 2.8 is not.  [07:58]
<dulek> ltomasbo: But if the configs were put in the wrong place, the gate might never have been configured to use the containerized CNI. Whoops.  [07:58]
<apuimedo> livelace2: 2.7 and 2.5 are the addresses in the same subnet as 2.9 and 2.8?  [07:59]
<ltomasbo> dulek, I don't really know, but in my case, even though the config files were not in the proper place, the controller and cni were running as containers  [08:00]
<ltomasbo> so I guess the gate should have failed too  [08:00]
<ltomasbo> perhaps I did something wrong when creating my devstack...  [08:00]
<dulek> ltomasbo: Ah sure, but the CNI container is just doing "sleep infinity". Until we have the daemon, its only purpose is to put the binaries and configs in the correct place.  [08:00]
<ltomasbo> ahhh, true  [08:01]
<dulek> ltomasbo: So if Kuryr was installed on the system and no CNI config was provided, then k8s might just use the standard CNI. The controller definitely was running containerized.  [08:01]
<livelace2> apuimedo, Yes  [08:02]
<dulek> I'm trying to dig through the gate runs, but we don't have many configs copied there.  [08:02]
<dulek> So I'll need to run this myself.  [08:02]
<ltomasbo> dulek, I can give you access to my 'non-working' devstack deployment if that is of any help  [08:03]
<dulek> ltomasbo: Let me see first if my patch sets the CNI_BIN_DIR and CNI_CONF_DIR correctly…  [08:04]
<ltomasbo> ok, at least in my case the files were in those dirs  [08:05]
<ltomasbo> but the cni was not looking for them there...  [08:05]
<apuimedo> livelace2: what's the address of the VM that has the two namespaces?  [08:05]
<dulek> ltomasbo: Yeah, I've written a patch that feeds the correct envvars into docker build; I just need to test it.  [08:06]
<ltomasbo> dulek, ok great!  [08:06]
<livelace2> apuimedo, That VM is placed in another network, 172.16.1.0/24  [08:07]
<ltomasbo> dulek, https://review.openstack.org/#/c/490377/6/cni_builder  [08:07]
<ltomasbo> could it be that bin_dir_path is not defined...?  [08:08]
<apuimedo> livelace2: cool  [08:08]
<apuimedo> can you verify that the trunk ovs switch is in the host for the central VM?  [08:09]
*** janki|lunch is now known as janki  [08:09]
<dulek> ltomasbo: Yes! Those are two different Docker images…  [08:10]
<dulek> ltomasbo: cni.Dockerfile, where the definitions were put, isn't using cni_builder. So that's causing problems as well.  [08:11]
<dulek> ltomasbo: Oh, this might cause the problem with JSON. Thank you!  [08:13]
<dulek> ltomasbo: Can you do "cat /opt/cni/bin/kuryr-cni"?  [08:14]
<ltomasbo> no problem!  [08:14]
<ltomasbo> sure  [08:14]
<ltomasbo> #!/bin/bash  [08:14]
<ltomasbo> export PBR_VERSION='0.2.1.dev25'  [08:14]
<ltomasbo> \/kuryr-cni-bin  [08:15]
<ltomasbo> without the \, but I cannot type it otherwise... :D  [08:15]
<dulek> ltomasbo: Okay, so you can try putting the correct path in there, right? I'm working on a fix.  [08:16]
<ltomasbo> great! thanks dulek  [08:17]
<livelace2> apuimedo, absolutely, do you need the output of the commands?  [08:21]
<ltomasbo> dulek, even removing that and pointing to the right place: \/opt/cni/bin/kuryr-cni-bin  [08:22]
<ltomasbo> I get the same error  [08:22]
<ltomasbo> failed to find plugin \"kuryr-cni\" in path [/opt/stack/cni/bin /opt/kuryr-cni/bin]"  [08:22]
<ltomasbo> it seems it tries to search not in /opt/cni/bin, but in /opt/kuryr-cni/bin  [08:23]
*** kiennt has quit IRC  [08:23]
<dulek> ltomasbo: Oh right, you need to copy them between directories as well.  [08:23]
<dulek> ltomasbo: There are two bugs in one. :(  [08:23]
<ltomasbo> dulek, xD  [08:23]
<apuimedo> livelace2: yeah  [08:24]
<ltomasbo> so, the solution is to have them in kuryr-cni?  [08:24]
<ltomasbo> or make the cni point to /opt/cni?  [08:24]
<dulek> ltomasbo: You need to place kuryr-cni and kuryr-cni-bin in the correct place, which seems to be "/opt/stack/cni/bin". Then you need to edit kuryr-cni to point to /opt/stack/cni/bin/kuryr-cni-bin.  [08:26]
<dulek> ltomasbo: This should be enough.  [08:26]
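
Taken together, dulek's manual workaround amounts to the following (paths exactly as they appear in this exchange; the sed pattern is illustrative and assumes the wrapper's last line is the absolute path to kuryr-cni-bin, as the cat output above shows):

    # copy the binaries to where this DevStack's kubelet actually looks
    sudo cp /opt/cni/bin/kuryr-cni /opt/cni/bin/kuryr-cni-bin /opt/stack/cni/bin/
    # repoint the wrapper script at the relocated binary
    sudo sed -i 's|^/.*kuryr-cni-bin$|/opt/stack/cni/bin/kuryr-cni-bin|' /opt/stack/cni/bin/kuryr-cni
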
<ltomasbo> dulek, ok!  [08:27]
<ltomasbo> dulek, demo2-1575152709-6z9q0   1/1       Running  [08:29]
<ltomasbo> yep, that was it  [08:29]
<dulek> ltomasbo: Thanks for testing and debugging! I'll come up with a fix soon.  [08:30]
<ltomasbo> dulek, no problem! thanks for the help! it is really cool to have this running as containers!  [08:30]
<apuimedo> dulek: please remember to file a bug in Launchpad  [08:31]
<apuimedo> and reference it in the fix  [08:31]
<dulek> apuimedo: Sure.  [08:34]
<apuimedo> thanks dulek!  [08:34]
<livelace2> apuimedo, https://paste.fedoraproject.org/paste/75IZq-kZuyepojg5NsY8eA  [08:43]
<ltomasbo> livelace2, note you are using ovs-hybrid instead of the ovs firewall  [08:44]
<ltomasbo> and trunk ports with ovs-hybrid and security groups do not work well together  [08:44]
<ltomasbo> so either you move to the ovs firewall, or you disable security groups on those ports  [08:45]
<livelace2> ltomasbo, https://paste.fedoraproject.org/paste/A2HA41jgw9zyASzNGXihCQ  [08:46]
<livelace2> the sg permits all  [08:46]
<apuimedo> livelace2: I'd still try with the native ovs firewall  [08:47]
<ltomasbo> did you restart the agent?  [08:47]
<apuimedo> you need to reattach the ports  [08:47]
<apuimedo> restarting the agent is not enough  [08:47]
<ltomasbo> as in your previous pastebin the port show output says it is ovs-hybrid  [08:47]
<ltomasbo> ahh, ofc  [08:47]
<ltomasbo> if the ports are already attached, you need to re-attach them  [08:48]
<ltomasbo> in fact, if you already have the vm running on that trunk port  [08:48]
<ltomasbo> you will probably need to hard-reboot the vm  [08:48]
<apuimedo> ltomasbo: no, you can hot-unplug and hot-plug, IIRC  [08:50]
<ltomasbo> really?  [08:51]
<ltomasbo> there will be a qbr connecting the VM to the br-int; that should be dropped, right?  [08:52]
<apuimedo> ltomasbo: I think that's how I did it  [08:54]
<apuimedo> but I can't tell you for sure  [08:54]
<ltomasbo> neither can I :(  [08:54]
<livelace2> Created new ports: https://paste.fedoraproject.org/paste/jawbSZP6dZBC5CabO3SmTA  [08:58]
<livelace2> Reattaching doesn't help, only recreating  [08:58]
<apuimedo> ok  [08:59]
<livelace2> The parent port needs the same, I suppose :(  [08:59]
<apuimedo> livelace2: yup  [09:00]
<apuimedo> create a new port  [09:00]
<apuimedo> hot-plug it  [09:00]
<apuimedo> and move the subports there  [09:00]
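
The hybrid-to-native switch being discussed is a Neutron OVS agent setting. A sketch of flipping it, assuming an ml2/ovs node whose agent reads openvswitch_agent.ini (file, section, and service names vary by deployment, so treat them as illustrative):

    # set the native OVS firewall driver (the option lives in the [securitygroup] section)
    sudo sed -i 's/^firewall_driver.*/firewall_driver = openvswitch/' \
        /etc/neutron/plugins/ml2/openvswitch_agent.ini
    sudo systemctl restart neutron-openvswitch-agent
    # restarting the agent is not enough: ports wired under the hybrid driver keep their
    # qbr plumbing, so re-create the parent port and subports (or hard-reboot the VM)
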
*** yamamoto has quit IRC  [09:07]
<openstackgerrit> Michał Dulko proposed openstack/kuryr-kubernetes master: Clean up ENV vars mistmatches in Dockerfiles  https://review.openstack.org/505125  [09:07]
<dulek> ltomasbo: ^ It would be great if you could try this fix out (with removal of /opt/stack/cni before starting).  [09:09]
<ltomasbo> dulek, sure, let me throw away my devstack deployment and generate one from scratch. I'll let you know as soon as it is up and running  [09:10]
<dulek> ltomasbo: Awesome, thank you!  [09:10]
<ltomasbo> dulek, np!  [09:10]
<livelace2> It seems to be... working now :)  [09:13]
<ltomasbo> livelace2, so, what was it?  [09:13]
<livelace2> https://paste.fedoraproject.org/paste/1uyc4Scr6yKRK5U5X2OYLQ  [09:13]
<ltomasbo> the ovs firewall?  [09:13]
<ltomasbo> I mean, the ovs firewall not being active?  [09:14]
<apuimedo> great!  [09:15]
<livelace2> It was the fact that I didn't recreate the ports after changing the firewall (hybrid vs native). With native, all namespaces on one node can ping other hosts in the network :)  [09:15]
<livelace2> I saw that the iptables rules on the hypervisor host had disappeared, and I thought that was enough for the firewall change  [09:17]
<apuimedo> aha  [09:17]
<livelace2> Anyway, now I can concentrate on kuryr itself. Thanks guys for your help :)  [09:17]
<apuimedo> livelace2: thanks to you for the patient troubleshooting!  [09:17]
*** c00281451 has joined #openstack-kuryr  [09:17]
*** c00281451 is now known as zengchen  [09:18]
<zengchen> ping apuimedo  [09:19]
<apuimedo> zengchen: pong  [09:20]
<zengchen> apuimedo: is the vPTG on 2-4 October?  [09:21]
<openstackgerrit> Danil Golov proposed openstack/kuryr-kubernetes master: Allow passing multiple VIFs to CNI  https://review.openstack.org/471012  [09:21]
<openstackgerrit> Danil Golov proposed openstack/kuryr-kubernetes master: Add SR-IOV capabilities to VIF handler  https://review.openstack.org/462455  [09:21]
<openstackgerrit> Danil Golov proposed openstack/kuryr-kubernetes master: Add SR-IOV binding driver to CNI  https://review.openstack.org/462456  [09:21]
<openstackgerrit> Danil Golov proposed openstack/kuryr-kubernetes master: Allow requesting additional subnets via annotation  https://review.openstack.org/482544  [09:21]
<openstackgerrit> Danil Golov proposed openstack/kuryr-kubernetes master: Add SR-IOV documentation  https://review.openstack.org/478458  [09:21]
<openstackgerrit> Danil Golov proposed openstack/kuryr-kubernetes master: [WIP] Allow setting specific ports for SRIOV handler  https://review.openstack.org/478494  [09:21]
*** robust has joined #openstack-kuryr  [09:25]
<zengchen> apuimedo: It does not give a specific time for the vPTG in the etherpad https://etherpad.openstack.org/p/kuryr-queens-vPTG  [09:26]
<dulek> apuimedo: Good point, what timezone are we in?  [09:29]
<dulek> During the vPTG.  [09:29]
*** yamamoto has joined #openstack-kuryr  [09:32]
*** robust has quit IRC  [09:32]
<apuimedo> I was planning on doing 13-15 UTC each day  [09:45]
<janonymous> I am good with it!  [09:57]
*** livelace2 has quit IRC  [10:11]
*** reedip has joined #openstack-kuryr  [10:13]
*** alishaaneja has joined #openstack-kuryr  [10:17]
*** wangbo has quit IRC  [10:27]
*** yamamoto has quit IRC  [10:34]
*** yamamoto has joined #openstack-kuryr  [10:38]
<ltomasbo> hi again dulek: I'm getting this when running with octavia: http://paste.openstack.org/show/621407/  [10:47]
<ltomasbo> dulek, it seems it does not quite like the `if` that you included at the end of the local.conf :)  [10:47]
*** yamamoto has quit IRC  [10:47]
<dulek> ltomasbo: Uh-oh, right.  [10:48]
<dulek> ltomasbo: I wonder how you do that then…  [10:49]
<ltomasbo> yep, I don't know either; I'll look for other examples, there may be some out there...  [10:50]
*** caowei has quit IRC  [10:50]
<ltomasbo> dulek, and if there is no way, perhaps we need 2 separate files, one for octavia, one for lbaas  [10:54]
<dulek> ltomasbo: Okay, I think it can be done more easily.  [10:58]
<ltomasbo> probably...  [10:59]
<dulek> ltomasbo: http://paste.openstack.org/show/621410/ - $OCTAVIA_CONF will be empty ($OCTAVIA_CONF_DIR/octavia.conf wasn't), so it will get ignored.  [10:59]
<dulek> At least that's what I've found in the DevStack docs. Let's see.  [11:00]
<ltomasbo> :D  [11:00]
<ltomasbo> sounds good  [11:01]
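
The trick in dulek's paste works because DevStack ignores [[post-config|$VAR]] meta-sections whose variable is undefined: keying octavia-only settings on $OCTAVIA_CONF (set only when octavia is enabled) lets a single local.conf.sample serve both backends. A minimal sketch; the option shown is a plausible octavia tweak, not the actual content of the paste:

    [[post-config|$OCTAVIA_CONF]]
    # applied only when octavia is enabled, i.e. when $OCTAVIA_CONF is defined
    [controller_worker]
    amp_active_retries = 9999
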
*** wangbo has joined #openstack-kuryr  [11:06]
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Add Pool Manager to handle subports  https://review.openstack.org/498698  [11:07]
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Add list and show pool commands to Pool Manager  https://review.openstack.org/504410  [11:07]
*** yamamoto has joined #openstack-kuryr  [11:23]
*** robust has joined #openstack-kuryr  [11:30]
*** wangbo has quit IRC  [11:44]
<openstackgerrit> Michał Dulko proposed openstack/kuryr-kubernetes master: Fix local.conf.sample once again  https://review.openstack.org/505187  [11:44]
<dulek> ltomasbo: ^ I've tried it and it seems to be working.  [11:46]
<dulek> Am I the only one not receiving emails from Gerrit?  [11:46]
*** wangbo has joined #openstack-kuryr  [11:46]
*** robust has quit IRC  [11:46]
<ltomasbo> dulek, I'm testing it too, and yes, it seems to work  [11:46]
<ltomasbo> dulek, there was a maintenance process yesterday, perhaps there are still some side effects  [11:47]
<apuimedo> dulek: are you sure you don't have the gerrit email still sending to Intel?  [11:49]
<alishaaneja> I am trying to set up kuryr-kubernetes with Devstack. I populated the contents of Devstack/local.conf with Kuryr-kubernetes/local.conf.sample and then ran ./stack.sh for Devstack. It gives me this error: https://gist.github.com/alisha17/282e31d4061e99d88057802884a30a5a . Can someone guide me on how to resolve this?  [11:49]
<apuimedo> alishaaneja: hey  [11:50]
<apuimedo> alishaaneja: are you running this on a VM?  [11:50]
<alishaaneja> Nope. Should I set up devstack on, say, Vagrant?  [11:51]
<apuimedo> alishaaneja: using VMs is considered safer  [11:52]
<apuimedo> :-)  [11:52]
<dulek> apuimedo: 100% sure, I was receiving emails before yesterday's outage.  [12:02]
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Avoid port update neutron call during pods boot up  https://review.openstack.org/504915  [12:02]
<apuimedo> weird  [12:03]
<apuimedo> go to #openstack-infra then  [12:03]
*** robust has joined #openstack-kuryr  [12:19]
<deepika08> Hey apuimedo, dulek, team-kuryr! :)  [12:24]
<deepika08> I am facing a similar issue with Ubuntu 16.04 desktop in a VM  [12:24]
<deepika08> I tried finding some solutions: running as the stack user, I even tried conjure-up, but no luck. ask.openstack.org shows it can't be reached  [12:24]
*** atoth has joined #openstack-kuryr  [12:25]
<dulek> deepika08: Yup, ask.openstack.org seems down. Can you paste the error into paste.openstack.org and put the link here?  [12:36]
*** robust has quit IRC  [12:54]
<deepika08> dulek: I lost the pastebin link produced earlier; I need to install afresh, which may take some time. I had mainly these two errors:  [12:57]
<deepika08> generate-subunit 1505810377 253 fail  [12:57]
<deepika08> World dumping.../stack.sh:exit_trap:522 exit 100  [12:57]
<deepika08> I will put the link asap!  [12:58]
<dulek> deepika08: The real error should start somewhere before that. Those are only post-failure messages.  [12:58]
*** lakerzhou has joined #openstack-kuryr  [13:00]
<deepika08> oh! there was a failure of the Rabbit server before that  [13:00]
<deepika08> dulek ^^  [13:00]
*** robust has joined #openstack-kuryr  [13:06]
<dulek> deepika08: Hm. That's unusual. The exact log could help; maybe there's a networking problem and something is unable to connect to RabbitMQ.  [13:06]
*** gouthamr has joined #openstack-kuryr  [13:13]
*** wangbo has quit IRC  [13:20]
<deepika08> dulek: https://paste.ubuntu.com/25572437/  [13:29]
<dulek> deepika08: journalctl -u rabbitmq-server  [13:30]
<dulek> deepika08: This should show the full rabbitmq log, which might be helpful.  [13:30]
<dulek> deepika08: You probably need to run it with sudo.  [13:34]
<dulek> janonymous, apuimedo: I have no luck running the CNI daemon (https://review.openstack.org/#/c/480028), pods are hanging in ContainerCreating state.  [13:35]
<dulek> janonymous, apuimedo: Do you want me to fix this and work on finishing the patch? Judging from the TODO listed in the commit message there's still a lot of work to do.  [13:36]
<apuimedo> dulek: try your hand at it and update janonymous and me on the findings  [13:38]
<dulek> apuimedo: Okay. I'm guessing the message is just lost somewhere between the CNI and the daemon; it shouldn't be too difficult to find where.  [13:39]
<apuimedo> dulek: does the CNI send the request to the socket file?  [13:39]
<dulek> apuimedo: I think so, I need to check to be sure. First I'll try to unclog logging in the daemon, I'll need it anyway.  [13:42]
<apuimedo> cool  [13:42]
*** hongbin has joined #openstack-kuryr  [13:51]
<apuimedo> ltomasbo: the -1 is for visibility https://review.openstack.org/#/c/504915/3  [13:57]
<apuimedo> look at the question  [13:57]
<ltomasbo> apuimedo, oohh true  [13:59]
<ltomasbo> I forgot about that...  [13:59]
<apuimedo> :-)  [14:00]
<ltomasbo> I'll fix it asap  [14:00]
<ltomasbo> other than that, is it ok?  [14:00]
*** robust has quit IRC  [14:02]
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Avoid port update neutron call during pods boot up  https://review.openstack.org/504915  [14:07]
*** wangbo has joined #openstack-kuryr  [14:08]
<apuimedo> ltomasbo: it is  [14:19]
<apuimedo> +2  [14:19]
<ltomasbo> great!  [14:19]
<ltomasbo> thanks!  [14:19]
<ltomasbo> trying to fix the octavia l2 issue; in the end it has to do with the git version...  [14:20]
*** yamamoto has quit IRC  [14:20]
<apuimedo> xD  [14:20]
*** janki has quit IRC  [14:36]
<ltomasbo> dulek, did your stacking work in the end? (for the containerized version)  [14:43]
<ltomasbo> in my case I see that the kubernetes-scheduler and kubernetes-controller-manager pods died during the process  [14:44]
<dulek> ltomasbo: You mean https://review.openstack.org/#/c/505125/ ? I used the env to test the local.conf.sample patch and forgot about it. :P  [14:45]
<dulek> ltomasbo: It happens to me sometimes, mostly due to etcd leader election failures.  [14:45]
<ltomasbo> yep, I meant that one  [14:45]
<dulek> ltomasbo: Normally a restart of etcd and devstack@kube* helps.  [14:45]
*** wangbo has quit IRC  [14:46]
<ltomasbo> ok, though it crashed during stacking; not sure if something else was missing  [14:46]
*** wangbo has joined #openstack-kuryr  [14:46]
<dulek> ltomasbo: I'll run it once the other DevStack run finishes.  [14:49]
<ltomasbo> ok  [14:49]
<ltomasbo> I restarted those services  [14:50]
<ltomasbo> and started the kubernetes controller and scheduler  [14:50]
<ltomasbo> but kuryr-cni still crashes  [14:50]
<dulek> ltomasbo: Same errors?  [14:51]
<ltomasbo> no no, now the kuryr-cni pod is in error or crash status  [14:52]
<ltomasbo> CrashLoopBackOff  [14:52]
<ltomasbo> but probably the stacking crashed before finishing  [14:53]
*** wangbo has quit IRC  [14:55]
*** wangbo has joined #openstack-kuryr  [14:56]
*** wangbo has quit IRC  [14:58]
*** alishaaneja has quit IRC  [15:06]
<apuimedo> ltomasbo: have you tried the no-port-update patch already?  [15:16]
<apuimedo> I'm considering trying it on my 2000-port scale lab  [15:16]
<ltomasbo> yes, I tested it  [15:17]
<ltomasbo> (with odl as backend)  [15:17]
<ltomasbo> it is faster even in my small nested environment  [15:17]
<ltomasbo> like 40% faster  [15:17]
<apuimedo> ltomasbo: how did you check when the pools are loaded?  [15:17]
<apuimedo> meh... I would have expected it to be 300% faster  [15:17]
<ltomasbo> apuimedo, https://review.openstack.org/#/c/504410/  [15:17]
<apuimedo> but I will take 40%  [15:17]
<apuimedo> xD  [15:17]
<ltomasbo> I can check the status of the pools with that one  [15:17]
<apuimedo> right  [15:18]
<ltomasbo> curl --unix-socket /etc/kuryr/kuryr_manage.sock http://localhost/listPools -H "Content-Type: application/json" -X GET -d '{}'  [15:18]
<ltomasbo> that tells you all the pools, and the number of ports they have  [15:18]
<ltomasbo> and with showPool, you get the information for a given pool  [15:18]
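
The listPools call above answers over the manager's unix socket with JSON; piping it through a formatter makes the pool and port counts easier to scan (python -m json.tool is stock Python, not part of kuryr):

    curl --unix-socket /etc/kuryr/kuryr_manage.sock http://localhost/listPools \
         -H "Content-Type: application/json" -X GET -d '{}' | python -m json.tool
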
<ltomasbo> other than that, perhaps it is good to include a LOG message when the process finishes  [15:19]
<ltomasbo> if you give me 5 minutes, I'll update https://review.openstack.org/#/c/504915/  [15:19]
<ltomasbo> apuimedo, what do you think?  [15:19]
<apuimedo> ltomasbo: a log message would be a good thing  [15:20]
<ltomasbo> ok, I'll add it  [15:20]
*** yamamoto has joined #openstack-kuryr  [15:20]
*** yamamoto has quit IRC  [15:25]
*** dimak has quit IRC  [15:25]
*** dimak has joined #openstack-kuryr  [15:26]
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Avoid port update neutron call during pods boot up  https://review.openstack.org/504915  [15:27]
<ltomasbo> apuimedo, ^^  [15:27]
<ltomasbo> apuimedo, now there will be a message in kuryr-kubernetes when the ports are loaded into the pools  [15:28]
<apuimedo> perfect  [15:33]
<apuimedo> thanks ltomasbo  [15:33]
<ltomasbo> np!  [15:33]
<apuimedo> ltomasbo: somehow I'm not seeing the POOL updated log message  [15:41]
<apuimedo> and I started it like 3 minutes ago  [15:41]
<ltomasbo> umm  [15:42]
*** wangbo has joined #openstack-kuryr  [15:42]
<ltomasbo> apuimedo, how many ports?  [15:42]
<apuimedo> 2k  [15:42]
<ltomasbo> ok  [15:42]
<apuimedo> but considering there are no pods  [15:42]
<apuimedo> it shouldn't be this slow  [15:42]
<apuimedo> should it?  [15:42]
<ltomasbo> actually it will be faster if there are pods  [15:42]
<ltomasbo> as you only need to get the information about the ones that are available  [15:43]
<apuimedo> but this should only be a port list, shouldn't it?  [15:43]
<apuimedo> or do we do it per trunk?  [15:43]
<ltomasbo> you need to get the information regarding the port  [15:43]
<apuimedo> What I mean is  [15:43]
<apuimedo> a single port list  [15:43]
<apuimedo> contains the info about the parent port already  [15:43]
<apuimedo> so we should be able to reconstruct the pools with that  [15:44]
<apuimedo> shouldn't we?  [15:44]
<ltomasbo> umm, let me see  [15:44]
<ltomasbo> though that is already 2k calls to neutron  [15:44]
<ltomasbo> the port list does not show the parent  [15:44]
<ltomasbo> the port show does  [15:44]
<ltomasbo> umm, no, but you are right, I just do the get_ports by attr and then the trunk list  [15:46]
<apuimedo> ltomasbo: that is not 2k calls to neutron  [15:46]
<apuimedo> it shouldn't be  [15:46]
<ltomasbo> yep, sorry, I'm looking at the code now  [15:46]
<apuimedo> ok  [15:46]
<ltomasbo> just 2 neutron calls, the get_ports_by_attrs and the list_trunks  [15:47]
<ltomasbo> and one more per trunk  [15:48]
<ltomasbo> to get the trunk parent port IP  [15:48]
<ltomasbo> still, it should be faster than that, I guess  [15:49]
<ltomasbo> apuimedo, perhaps you can include a print for each trunk  [15:49]
<ltomasbo> so that you know if it is stuck or still working  [15:49]
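
ltomasbo's call accounting, restated as rough CLI equivalents (illustration only: the pool-recovery code calls neutron's API directly rather than shelling out to these commands):

    openstack port list             # one call: fetch the candidate kuryr subports
    openstack network trunk list    # one call: every trunk with its subport list
    # plus one extra port lookup per trunk to resolve the parent port's IP --
    # that per-trunk step is why recovery time grows with the number of trunks
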
<apuimedo> ltomasbo: now I don't want to stop it xD  [15:51]
<ltomasbo> apuimedo, I understand why...  [15:51]
<apuimedo> but it still didn't give the log message  [15:52]
<ltomasbo> can you check if the process is actually doing anything?  [15:52]
*** yamamoto has joined #openstack-kuryr  [15:52]
<apuimedo> ltomasbo: 11-15% cpu  [15:52]
<ltomasbo> ok, at least it's doing something  [15:53]
<apuimedo> ltomasbo: it just finished  [15:53]
<apuimedo> damn!  [15:54]
<ltomasbo> \o/  [15:54]
<apuimedo> It took 14 minutes  [15:54]
<ltomasbo> how long did it take?  [15:54]
<apuimedo> that's terrible  [15:54]
<apuimedo> :/  [15:54]
<ltomasbo> yep, not good  [15:54]
<apuimedo> 14min14s  [15:54]
<apuimedo> I'll optimize this later  [15:54]
<ltomasbo> should we optimize by reading from a file, or something like that?  [15:54]
<apuimedo> first I'll see how fast we can get it without that  [15:55]
<ltomasbo> nice!  [15:56]
<ltomasbo> I'm on the tmux...  [15:56]
<apuimedo> wow  [15:56]
<apuimedo> this was fast  [15:56]
<ltomasbo> that is a good ratio!  [15:56]
<ltomasbo> yep  [15:56]
<apuimedo> let's count it  [15:56]
<ltomasbo> it has to be near 50 p/s?  [15:56]
<apuimedo> weird  [15:57]
<apuimedo> I'm dumb  [15:57]
<apuimedo> I forgot to pass the params  [15:57]
<ltomasbo> what happened?  [15:58]
<apuimedo> xD  [15:58]
<ltomasbo> ahh, ok hehehe  [15:58]
<ltomasbo> the namespace...  [15:58]
*** yamamoto has quit IRC  [16:03]
<apuimedo> good lord  [16:03]
<apuimedo> that is fast  [16:03]
*** egonzalez has quit IRC  [16:03]
<ltomasbo> final number?  [16:06]
<ltomasbo> (or plot)  [16:06]
<ltomasbo> xD  [16:06]
<apuimedo> 21.95 pods/s  [16:06]
<apuimedo> so you were right, about 40% faster than with the port update  [16:07]
<ltomasbo> :D  [16:07]
<apuimedo> I honestly expected it to be much faster  [16:07]
<ltomasbo> it would have been better to have the 400% you were thinking about  [16:08]
<apuimedo> but I guess I can't complain  [16:08]
<ltomasbo> :D  [16:08]
<ltomasbo> I guess they are good enough  [16:08]
<apuimedo> mmm  [16:09]
<ltomasbo> but still room for improvement!  [16:09]
<apuimedo> well, the target is about 112 pods/s  [16:09]
<apuimedo> but it depends on the scheduler as well  [16:09]
<apuimedo> so 1 pod/s/node  [16:09]
<ltomasbo> yep, with the distributed approach we started discussing the other day, I guess we can beat that  [16:10]
<apuimedo> yeah, of course  [16:16]
<apuimedo> ok, going to the doctor with my son  [16:16]
<apuimedo> tty tomorrow  [16:16]
<ltomasbo> bye  [16:29]
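
Working the session's numbers out: 21.95 pods/s at "about 40% faster" puts the old with-port-update rate near 21.95 / 1.4 ≈ 15.7 pods/s; the pool restore earlier covered 2000 ports in 14 min 14 s (854 s), roughly 2.3 ports/s; and the 112 pods/s target, at 1 pod/s/node, would correspond to a cluster on the order of 112 nodes (an inference from the figures above, not something stated in the log).
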
<dulek> janonymous: Ha, found it! I think config initialization was missing in the case of the daemonized cni (not the daemon itself).  [16:37]
*** pcaruana has quit IRC  [16:38]
*** robust has joined #openstack-kuryr  [16:50]
*** gouthamr has quit IRC  [17:01]
*** gouthamr has joined #openstack-kuryr  [17:02]
*** ltomasbo has quit IRC  [17:06]
<dulek> janonymous, apuimedo: Not that the daemon works yet, but at least I can see the communication. :) Will continue tomorrow.  [17:10]
*** wangbo has quit IRC  [17:15]
*** ltomasbo has joined #openstack-kuryr  [17:30]
<apuimedo> dulek: cool  [18:24]
<dulek> apuimedo: In case you want to follow this, I'm noting the stuff that doesn't work on the review itself.  [18:34]
<apuimedo> perfect  [18:34]
<apuimedo> thanks for that  [18:34]
*** jerms has quit IRC  [18:50]
*** jerms has joined #openstack-kuryr  [19:01]
*** neex has joined #openstack-kuryr  [19:16]
*** neex has quit IRC  [19:18]
*** egonzalez has joined #openstack-kuryr  [20:01]
*** gouthamr has quit IRC  [20:03]
*** atoth has quit IRC  [21:22]
*** lakerzhou has quit IRC  [21:27]
*** egonzalez has quit IRC  [21:28]
*** gouthamr has joined #openstack-kuryr  [21:35]
*** dougbtv__ has joined #openstack-kuryr  [21:36]
*** aojea has quit IRC  [22:16]
*** yamamoto_ has joined #openstack-kuryr  [22:19]
*** gouthamr has quit IRC  [22:21]
*** robust has quit IRC  [22:27]
*** openstackgerrit has quit IRC  [22:47]
*** gouthamr has joined #openstack-kuryr  [23:01]
*** gouthamr has quit IRC  [23:40]
*** gouthamr has joined #openstack-kuryr  [23:55]
*** lakerzhou has joined #openstack-kuryr  [23:56]
*** openstackgerrit has joined #openstack-kuryr  [23:57]
<openstackgerrit> Hongbin Lu proposed openstack/kuryr-libnetwork master: [WIP] Supprt searching existing port with macaddress  https://review.openstack.org/505443  [23:57]
*** lakerzhou1 has joined #openstack-kuryr  [23:58]

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!