Wednesday, 2019-07-31

*** kiseok7 has joined #openstack-kuryr  01:05
<openstackgerrit> pengyuesheng proposed openstack/kuryr-kubernetes master: Bump the openstackdocstheme extension to 1.20  https://review.opendev.org/672875  01:37
*** maysams has quit IRC  02:08
*** ndesh has joined #openstack-kuryr  02:37
*** gkadam has joined #openstack-kuryr  03:40
*** gcheresh has joined #openstack-kuryr  05:14
*** threestrands has joined #openstack-kuryr  05:44
*** brault has joined #openstack-kuryr  06:25
*** brault has quit IRC  06:39
*** takamatsu has quit IRC  07:17
*** brault has joined #openstack-kuryr  07:19
*** brault has quit IRC  07:23
*** brault has joined #openstack-kuryr  07:23
*** maysams has joined #openstack-kuryr  07:27
*** pcaruana has quit IRC  07:35
*** pcaruana has joined #openstack-kuryr  08:13
*** takamatsu has joined #openstack-kuryr  08:32
*** takamatsu_ has joined #openstack-kuryr  08:48
*** threestrands has quit IRC  08:48
*** takamatsu has quit IRC  08:49
*** takamatsu_ has quit IRC  09:30
*** takamatsu has joined #openstack-kuryr  09:37
*** ndesh has quit IRC  09:57
*** takamatsu has quit IRC  10:08
*** takamatsu has joined #openstack-kuryr  10:24
*** pcaruana has quit IRC  10:45
<aperevalov> we decided to implement direct RPC before any other performance improvements. I haven't created a separate blueprint yet.  11:13
<dulek> aperevalov: Let me try to get apuimedo here…  11:14
<aperevalov> but I see a few different approaches here: 1) use oslo.messaging. This approach relies on a 3rd-party messaging server like RabbitMQ, but not the one from OpenStack's controller, because that could be unreachable from the k8s cluster.  11:15
<aperevalov> 2) use our own listening loop, implemented for example with iohttp or a similar framework; in this approach there is no messaging bus.  11:17
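
A minimal sketch of what approach 2) could look like, assuming kuryr-daemon runs an aiohttp listener ("iohttp" presumably refers to the aiohttp library) and kuryr-controller POSTs the VIF data directly to the daemon's host IP; the /vifs route, port, payload shape and registry name are illustrative assumptions, not existing kuryr-kubernetes code:

    # Hypothetical push endpoint inside kuryr-daemon (approach 2, no message bus).
    # The controller would POST the VIF data here instead of (or in addition to)
    # writing it to the pod's K8s annotations.
    from aiohttp import web

    # In-memory registry of pod -> vifs, normally shared with the CNI request handler.
    vif_registry = {}

    async def receive_vifs(request):
        payload = await request.json()
        # payload shape is an assumption: {"pod_uid": "...", "vifs": {...}}
        vif_registry[payload['pod_uid']] = payload['vifs']
        return web.json_response({'status': 'ok'})

    def main():
        app = web.Application()
        app.add_routes([web.post('/vifs', receive_vifs)])
        # hostNetwork pod, so the controller can reach the node IP directly.
        web.run_app(app, host='0.0.0.0', port=5555)

    if __name__ == '__main__':
        main()

The controller side would then just issue an HTTP POST to the node's IP whenever it has the VIFs ready, keeping the annotation write as the fallback path aperevalov mentions below.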
<dulek> aperevalov: What have you already used?  11:17
<dulek> aperevalov: To test this.  11:18
<aperevalov> each approach has its drawbacks as well as its advantages.  11:18
<dulek> ltomasbo: ^  11:18
<aperevalov> I tried iohttp, but haven't integrated it into kuryr-controller yet.  11:18
<aperevalov> I think we need to figure out the key requirements for this feature.  11:19
<aperevalov> Is it performance or robustness?  11:19
<dulek> aperevalov: So you don't know if direct communication improves anything?  11:19
<aperevalov> yes, I haven't measured it yet, but I did measure different parts of the launch time process.  11:20
*** celebdor has joined #openstack-kuryr  11:21
<celebdor> Hi  11:21
<dulek> celebdor: o/  11:21
<aperevalov> and in an artificial test I measured the time "kubectl --watch" takes to react to pod modifications. It's not that much time.  11:21
<celebdor> sorry, for some reason my irc client loses auth and I get kicked out all the damn time  11:22
<dulek> celebdor: So the summary is that aperevalov was looking at implementing direct communication between kuryr-controller and kuryr-daemon.  11:22
<dulek> Instead of waiting for pods to get annotated with vifs.  11:22
<celebdor> that's the same thing I saw at the Sydney summit, some team in China was doing it  11:23
<aperevalov> yes, but I want to keep saving the annotation into k8s as a fallback, e.g. for when kuryr-controller is restarting.  11:23
<celebdor> how big is the advantage?  11:23
<dulek> I think we don't know yet.  11:25
<aperevalov> it's hard to say right now; in my measurements I saw that cni-daemon obtains the annotation after one or two seconds.  11:25
<celebdor> you mean one or two seconds after the controller POSTs the annotation?  11:26
<aperevalov> and compared to "kubectl in --watch mode" it was always slower.  11:26
<aperevalov> yes  11:26
<celebdor> so kubectl sees the annotation way earlier?  11:26
<celebdor> how much earlier approximately?  11:26
<dulek> celebdor: kubectl does not go through the Octavia LB.  11:27
<dulek> celebdor: It shouldn't really matter that much, but hey - it's Octavia.  11:27
<ltomasbo> lol  11:27
<aperevalov> time was synchronized. It would have been better to prepare numbers. No, it wasn't Octavia in that setup, nor any other load balancer.  11:28
<aperevalov> it would be better for me to prepare some documented results, but I don't have them now.  11:29
<celebdor> dulek: the connection to an API in kube controller would also be via clusterip...  11:29
<aperevalov> celebdor, no, that's not necessary (for a test environment) if they are both in the same subnet.  11:31
<dulek> celebdor: The idea is not to use HTTP and services, just talk to the pod's IP.  11:31
<aperevalov> so my traffic goes directly, w/o proxying/routing...  11:31
<dulek> That's one of my issues with the idea - it breaks the K8s paradigm that pods communicate through services.  11:31
<celebdor> talk in which protocol?  11:32
<dulek> I see RabbitMQ was tossed into the discussion.  11:33
<dulek> Well, depending on another service is not really a good idea.  11:33
<aperevalov> as I remember, you are also using hostNetwork for kuryr-controller and kuryr-daemon. So neither of these services is obligated to use a k8s Service's endpoint address.  11:34
<dulek> iohttp - I'm not really familiar with that.  11:34
<dulek> Right, it's hostNetworking.  11:34
<aperevalov> it is a single-process/thread async library for rpc.  11:35
<dulek> aperevalov, celebdor: Another problem I have with that approach is that the same issue is solved by the daemon VIF choice idea.  11:35
<dulek> As we make kuryr-controller only manage the pools, kuryr-daemon just does everything on its own.  11:36
<aperevalov> Also, I heard a suggestion from my co-workers to just write directly to etcd.  11:36
<aperevalov> what is the "daemon VIF choice idea"? is it the enhancement for pools we discussed before?  11:37
<dulek> aperevalov: Yup.  11:38
<celebdor> etcd would have to be a separate etcd  11:38
<celebdor> it is not usually allowed to touch the k8s etcd  11:38
<celebdor> dulek: daemon vif choice requires the controller handling the SG after the port assignment, doesn't it?  11:39
<dulek> celebdor: Yes, we need that to support SGs.  11:39
<dulek> s/SGs/NPs  11:40
<celebdor> in general, accessing rabbit is problematic from the vm network  11:40
<celebdor> that's why I asked the Neutron community a fuck ton of times to expose a watch endpoint for port events  11:40
<celebdor> we even had an intern write a service that reads the bus and exposes it as a watch endpoint  11:40
<aperevalov> celebdor, but neutron rpc is some kind of watching API for ports.  11:41
<celebdor> which, as I said, is often not reachable from the VMs in the "kubernetes deployed on VMs" case  11:41
<aperevalov> I forgot, it goes through the mq  11:42
<dulek> aperevalov: Yes, so we see nested deployments as the main use case for Kuryr.  11:42
<dulek> aperevalov: And in those, allowing VMs (which run K8s masters and nodes, so Kuryr) to access Neutron's RabbitMQ is… crazy.  11:43
<dulek> It wouldn't be allowed in any sane env where you don't 100% trust the code.  11:43
<dulek> aperevalov: So maybe we should take a step back and ask about your use case? What are you building?  11:43
<aperevalov> yes, that's true,  11:44
<celebdor> dulek: your golang CNI does not create the vlan/veth interfaces, does it?  11:44
<aperevalov> we have both bm and vm use cases on the same installation  11:44
<celebdor> it only asks the cni daemon to do it, IIRC  11:44
<aperevalov> so we don't have as general a case as a public cloud  11:45
<celebdor> right  11:45
<dulek> celebdor: It does not, just data processing and forwarding is done there.  11:45
<celebdor> we did demo bm and vm together last year  11:46
<celebdor> https://github.com/openshift/kuryr-kubernetes/blob/master/kuryr_cni/main.go#L41-L44 confused me  11:47
<aperevalov> our master runs in a vm, but it's not a nova instance. So every minion and master is in the same control plane network.  11:47
<celebdor> aperevalov so on the BM machines I suppose you run the neutron agents and so on, right?  11:47
<dulek> celebdor: Ha, good point, I actually copied that from the skeleton but it shouldn't really be necessary.  11:47
<aperevalov> celebdor, you're right  11:47
<celebdor> :D  11:48
<celebdor> aperevalov good  11:48
<aperevalov> so kuryr-controller's hostNetwork IP is reachable from the bm nodes  11:48
<celebdor> right  11:49
<aperevalov> and the idea was: 1) connect from kuryr-daemon directly to kuryr-controller  11:49
<aperevalov> 2) or to a separate MQ on the master for only this purpose (our own oslo.messaging)  11:49
<celebdor> In general I'm not opposed to cni daemons connecting to a controller  11:49
<celebdor> as long as it's not reusing the OpenStack rabbitmq  11:50
<aperevalov> yes, it should be our own/separate MQ  11:50
<aperevalov> or even, as I wrote in 1), w/o an MQ  11:50
<aperevalov> an MQ such as rabbitmq has useful features such as guaranteed delivery, even if kuryr-daemon restarts or fails.  11:52
<aperevalov> but an MQ can increase the notification time  11:53
<dulek> aperevalov: And it's a dependency.  11:53
<dulek> I don't really love dependencies.  11:54
<celebdor> does kubectl see the notifications soon enough for your purpose?  11:54
<aperevalov> I think yes, it shows a stable time when it prints results.  11:55
<dulek> aperevalov: So maybe there's some issue with how the Python networking code is looking for those notifications?  11:56
<aperevalov> I also that it was due to k8s watching API handling in kuryr-controller/daemon  11:56
<dulek> Maybe it's just a matter of tweaking some polling time?  11:56
<aperevalov> is it a polling mode in kuryr? I thought it was a blocking read.  11:58
<celebdor> can you explain "I also that it was due to k8s watching API handling in kuryr-controller/daemon" some more?  11:58
<aperevalov> I also thought it was due to problems in handling the watch API  11:59
<dulek> aperevalov: Yes, but we use the requests library for watching the API. Maybe it does some buffering and only flushes every 1 second or something like this?  11:59
<dulek> aperevalov: It's worth checking first IMO.  12:00
<aperevalov> dulek, ok  12:00
<celebdor> dulek: aperevalov: I think that's the first thing to check out. watch for the pod annotations  12:01
*** rh-jelabarre has joined #openstack-kuryr  12:01
<celebdor> using kubectl and record how late we see those  12:01
<celebdor> if that's good enough, I'm not opposed to having the cni daemon changed to use the go client instead  12:02
<celebdor> for example  12:02
<dulek> celebdor, aperevalov: It would be best to just write a client in Python and golang and measure the difference.  12:02
<celebdor> yup  12:02
<celebdor> agreed  12:02
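
A minimal sketch of the kind of Python measurement client dulek is suggesting, assuming the API server is reachable without auth (e.g. via a local `kubectl proxy` on 127.0.0.1:8001); the annotation key is an illustrative stand-in for the kuryr vif annotation, not necessarily the exact name:

    # Hypothetical latency probe: stream a K8s pod watch with requests and print
    # how long after a pod is first seen the (kuryr) annotation shows up.
    import json
    import time

    import requests

    API = 'http://127.0.0.1:8001'            # e.g. via `kubectl proxy`
    ANNOTATION = 'openstack.org/kuryr-vif'   # illustrative annotation key

    def watch_pod_annotations(namespace='default'):
        url = '%s/api/v1/namespaces/%s/pods?watch=true' % (API, namespace)
        first_seen = {}
        # stream=True keeps the HTTP connection open; iter_lines() blocks on the
        # socket, so any extra delay comes from buffering in requests/urllib3
        # rather than from a poll loop.
        with requests.get(url, stream=True) as resp:
            for line in resp.iter_lines(chunk_size=1):
                if not line:
                    continue
                event = json.loads(line)
                pod = event['object']
                name = pod['metadata']['name']
                first_seen.setdefault(name, time.monotonic())
                annotations = pod['metadata'].get('annotations') or {}
                if ANNOTATION in annotations:
                    delay = time.monotonic() - first_seen[name]
                    print('%s: annotation seen after %.3f s' % (name, delay))

    if __name__ == '__main__':
        watch_pod_annotations()

Running the same experiment with `kubectl get pods --watch -o yaml` side by side would show whether the gap is in the Python client or elsewhere.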
<dulek> aperevalov: Then if that doesn't work, I think I'd prefer to use some protocol other than RabbitMQ - grpc maybe?  12:03
<celebdor> grpc is not a message broker though  12:04
<aperevalov> ok, as I see it, K8sClient's watch just posts a request periodically, rather than listening on a socket. Ok, I'll strace golang's client.  12:04
<aperevalov> where should I write updates on it?  12:05
<aperevalov> in the blueprint I created?  12:05
<celebdor> for example  12:06
<celebdor> sounds like a good place  12:06
<dulek> aperevalov: Just a thought here. Are you aware that we're waiting for pod annotations in a loop that has a period of 1 second?  12:06
<dulek> aperevalov: https://github.com/openstack/kuryr-kubernetes/blob/0c2d88d8d39420f80e333c51cd87c286f4c7c4c8/kuryr_kubernetes/cni/plugins/k8s_cni_registry.py#L114-L118  12:06
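
A rough illustration of the pattern dulek is pointing at (not a copy of the linked code): the daemon re-checks its registry for the pod's VIFs on a fixed period, so the period itself puts a floor on how quickly a freshly written annotation is acted on. The function and registry names here are illustrative.

    # Simplified polling-wait pattern, roughly what the CNI registry wait does:
    # re-check a shared registry every second until the VIFs for the pod appear
    # or a timeout is hit.
    import time

    class ResourceNotReady(Exception):
        pass

    def wait_for_vifs(registry, pod_name, timeout=60, period=1):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            entry = registry.get(pod_name)
            if entry and entry.get('vifs'):
                return entry['vifs']
            # The 1 s period bounds the reaction time; lowering it, or replacing
            # the sleep with a threading.Condition notified by the watcher
            # thread, trims that latency without any protocol change.
            time.sleep(period)
        raise ResourceNotReady(pod_name)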
<dulek> celebdor: Do we really need a message broker? If waiting for the message times out, just fall back to reading the annotation.  12:07
<celebdor> I'm not saying we need it  12:07
<celebdor> I'm just saying that it's a very different thing  12:08
<celebdor> gotta step out for a while to feed the kids  12:09
<celebdor> ttyl  12:09
<aperevalov> dulek, thanks, now I know.  12:09
*** takamatsu has quit IRC  12:10
*** pcaruana has joined #openstack-kuryr  12:10
<dulek> aperevalov: And just in case - the same happens when waiting for it to become active: https://github.com/openstack/kuryr-kubernetes/blob/0c2d88d8d39420f80e333c51cd87c286f4c7c4c8/kuryr_kubernetes/cni/plugins/k8s_cni_registry.py#L69-L78  12:11
<dulek> Maybe, just maybe, you only need to tweak those?  12:11
<aperevalov> As I remember from discussing this with ltomasbo, he suggested not waiting for all VIFs, just for the main VIF. Probably the first VIF.  12:16
*** takamatsu has joined #openstack-kuryr  12:31
<aperevalov> I did an strace of kubectl. It looks like it's not possible to attach a file to launchpad, so I put it into an etherpad, please see (https://etherpad.openstack.org/p/strace_of_kubectl)  12:51
<aperevalov> so we can see: it does epoll on socket 3 (the 4th fd is epoll's fd). There are no repeated requests to get updates. It just wakes up when new bytes appear on fd 3. So it follows a different approach. What do you think about this approach and implementing it in k8s_client? I know python is not fully multithreaded.  12:53
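
A hedged sketch of what that epoll-on-the-socket pattern could look like in Python, assuming a local `kubectl proxy` on 127.0.0.1:8001 so plain HTTP can be used; it only demonstrates the wakeup behaviour aperevalov observed in the strace, and is not a drop-in replacement for kuryr's K8sClient.watch():

    # Hypothetical epoll-based watch reader: open one long-lived HTTP connection,
    # then sleep on the socket's fd in epoll (Linux-only) and wake only when the
    # API server pushes new bytes, instead of re-issuing requests on a timer.
    import select
    import socket

    PROXY_HOST, PROXY_PORT = '127.0.0.1', 8001   # assumed `kubectl proxy` endpoint

    def watch_raw(namespace='default'):
        sock = socket.create_connection((PROXY_HOST, PROXY_PORT))
        request = (
            'GET /api/v1/namespaces/%s/pods?watch=true HTTP/1.1\r\n'
            'Host: %s\r\n'
            'Accept: application/json\r\n'
            '\r\n' % (namespace, PROXY_HOST)
        )
        sock.sendall(request.encode())
        sock.setblocking(False)

        poller = select.epoll()
        poller.register(sock.fileno(), select.EPOLLIN)
        while True:
            poller.poll()                # sleeps in the kernel until data arrives
            chunk = sock.recv(65536)     # one wakeup per burst of watch events
            if not chunk:
                break                    # server closed the connection
            print(chunk.decode(errors='replace'), end='')
        poller.unregister(sock.fileno())
        sock.close()

    if __name__ == '__main__':
        watch_raw()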
<celebdor> worth a try  13:03
*** gkadam has quit IRC  13:28
<openstackgerrit> Michał Dulko proposed openstack/kuryr-kubernetes master: Unset --admission-control when starting K8s API  https://review.opendev.org/673819  13:54
*** takamatsu has quit IRC  14:28
<aperevalov> celebdor: what was the "worth a try" about? reducing the number of VIFs to wait for, or the epoll approach from golang?  14:50
<celebdor> "implementing it in k8s_client"  14:52
<celebdor> in your message from an hour ago  14:52
<aperevalov> yes )  14:58
*** gdwornicki has joined #openstack-kuryr  15:16
*** gcheresh has quit IRC  15:17
*** takamatsu has joined #openstack-kuryr  16:47
*** takamatsu has quit IRC  16:59
*** aperevalov has quit IRC  17:00
*** gdwornicki has quit IRC  17:36
*** gdwornicki has joined #openstack-kuryr  17:39
*** gkadam has joined #openstack-kuryr  17:43
*** gdwornicki has quit IRC  18:04
*** gdwornicki has joined #openstack-kuryr  18:08
*** altlogbot_2 has quit IRC  18:19
*** altlogbot_0 has joined #openstack-kuryr  18:20
*** gkadam has quit IRC  18:23
*** takamatsu has joined #openstack-kuryr  18:43
*** gdwornicki has quit IRC  19:07
*** gdwornicki has joined #openstack-kuryr  19:22
*** takamatsu has quit IRC  19:47
*** takamatsu has joined #openstack-kuryr  21:40
*** takamatsu has quit IRC  23:59
