Tuesday, 2017-12-12

openstackgerritMaysa de Macedo Souza proposed openstack/kuryr-kubernetes master: Update cni_main.patch file to match the current version of main.py  https://review.openstack.org/52727300:03
*** portdirect has joined #openstack-kuryr00:29
*** salv-orlando has joined #openstack-kuryr00:56
*** salv-orlando has quit IRC01:00
*** caowei has joined #openstack-kuryr01:24
*** yamamoto has joined #openstack-kuryr01:41
*** gouthamr has joined #openstack-kuryr01:53
*** salv-orlando has joined #openstack-kuryr01:57
*** salv-orlando has quit IRC02:02
*** reedip has quit IRC02:06
*** reedip has joined #openstack-kuryr02:17
*** dmellado has quit IRC02:42
*** dmellado has joined #openstack-kuryr02:48
*** salv-orlando has joined #openstack-kuryr03:27
*** salv-orlando has quit IRC03:32
sapd_Hi everyone. I think kuryr-libnetwork has a bug.03:36
sapd_on this line: https://github.com/openstack/kuryr-libnetwork/blob/master/kuryr_libnetwork/controllers.py#L168803:36
sapd_if req_mac_address == '', Neutron returns all ports; we expect only one or none.03:36
*** dmellado has quit IRC03:37
*** dmellado has joined #openstack-kuryr03:38
sapd_if this is a Neutron feature, we have to add a condition on filtered_ports03:39
sapd_if filtered_ports > 2, we have to create a new port03:40
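A minimal sketch of the guard sapd_ is proposing, with illustrative names rather than the exact ones from kuryr-libnetwork's controllers.py: only pass the MAC filter to Neutron when it is non-empty, since (per sapd_'s report) an empty mac_address filter makes Neutron return every port.

    filters = {'network_id': neutron_network_id}
    if req_mac_address:
        # Only filter by MAC when one was actually requested; an empty
        # value would make the Neutron query match all ports.
        filters['mac_address'] = req_mac_address
    filtered_ports = app.neutron.list_ports(**filters).get('ports', [])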
*** salv-orlando has joined #openstack-kuryr04:28
*** salv-orlando has quit IRC04:33
*** caowei has quit IRC04:46
*** gouthamr has quit IRC04:53
*** janki has joined #openstack-kuryr04:57
*** caowei has joined #openstack-kuryr05:27
*** salv-orlando has joined #openstack-kuryr05:29
*** salv-orlando has quit IRC05:34
*** yboaron has joined #openstack-kuryr05:50
*** salv-orlando has joined #openstack-kuryr06:30
*** salv-orlando has quit IRC06:34
*** juriarte has joined #openstack-kuryr07:10
*** salv-orlando has joined #openstack-kuryr07:31
*** salv-orlando has quit IRC07:35
*** salv-orlando has joined #openstack-kuryr07:44
*** yboaron has quit IRC08:04
*** salv-orlando has quit IRC08:35
openstackgerritGenadi Chereshnya proposed openstack/kuryr-tempest-plugin master: Testing pod to pod connectivity  https://review.openstack.org/52562608:42
dmelladosapd_: feel free to send a patch and we'll review it ;)08:58
sapd_dmellado, I think contributing to OpenStack is hard,08:59
sapd_I do not understand how to contribute :D08:59
*** janonymous has joined #openstack-kuryr09:00
*** salv-orlando has joined #openstack-kuryr09:01
sapd_dmellado https://github.com/greatbn/kuryr-libnetwork/commit/1ec83f4a38b235696edd84abb39565fef99f55f0, here is how I patched it.09:02
*** caowei has quit IRC09:04
dmelladosapd_: it's not that hard at all! xD09:06
dmelladohttps://docs.openstack.org/infra/manual/developers.html#getting-started09:06
dmelladosapd_:  ^^ ;)09:06
sapd_dmellado, thanks I will read it. :D09:07
dmelladoplease do take a look and submit a patch, as we won't be accepting PRs ;)09:07
*** jappleii__ has quit IRC09:10
*** dmellado has quit IRC09:17
openstackgerritMichał Dulko proposed openstack/kuryr-kubernetes master: Use K8s 1.8 with Hyperkube  https://review.openstack.org/52550209:18
openstackgerritMichał Dulko proposed openstack/kuryr-kubernetes master: WiP: Preview of CNI daemon side VIF choice  https://review.openstack.org/52724309:18
*** garyloug has joined #openstack-kuryr09:25
*** yboaron has joined #openstack-kuryr09:28
openstackgerritMichał Dulko proposed openstack/kuryr-kubernetes master: Use K8s 1.8 with Hyperkube  https://review.openstack.org/52550209:35
openstackgerritMichał Dulko proposed openstack/kuryr-kubernetes master: WiP: Preview of CNI daemon side VIF choice  https://review.openstack.org/52724309:35
*** dmellado has joined #openstack-kuryr09:48
*** janki has quit IRC10:07
*** dmellado has quit IRC10:07
*** kiennt26 has quit IRC10:09
openstackgerritMichał Dulko proposed openstack/kuryr-kubernetes master: Make some Tempest gates voting  https://review.openstack.org/52736010:13
*** dmellado has joined #openstack-kuryr10:20
*** caowei has joined #openstack-kuryr10:20
*** caowei has quit IRC10:24
*** yamamoto has quit IRC10:35
dulekyboaron: Hi. So it's impossible to run OpenShift+Octavia right now?10:39
dmelladodue to 403?10:43
*** yamamoto has joined #openstack-kuryr10:48
*** dmellado has quit IRC10:50
*** dmellado has joined #openstack-kuryr10:59
dulekdmellado: Dunno, yboaron commented on https://review.openstack.org/#/c/527360/.11:07
*** dmellado has quit IRC11:16
*** dmellado has joined #openstack-kuryr11:18
*** dmellado has quit IRC11:20
*** dmellado has joined #openstack-kuryr11:24
yboarondulek, Hi, sorry for the late response (lunch). Yes, it's impossible to run OpenShift with Octavia (with firewall=OVS).11:32
dulekyboaron: But the job that is set to voting isn't enabling OpenShift.11:32
dulekyboaron: So I'm not sure what's the issue here.11:32
*** dmellado has quit IRC11:32
*** dmellado has joined #openstack-kuryr11:33
yboarondulek, can you elaborate on what the octavia job is testing?11:33
dulekyboaron: It installs Kubernetes (Hyperkube) + OpenStack with Octavia and configures Kuryr to wire K8s pods.11:34
dulekyboaron: Then we have like 1 test that creates a VM, a pod and tries to ping the pod from the VM.11:34
yboarondulek, OK, if that's the case then it's fine11:34
yboarondulek, min11:35
dulekyboaron: In case we'll have some OpenShift-specific tempest tests, we'll need to make sure those tests will be skipped when running without OpenShift.11:35
dulekyboaron: And it's pretty simple to do so with Tempest.11:35
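A rough sketch of such a config-driven skip; the openshift_enabled option, the config group, and the base class shown here are assumptions for illustration, not necessarily what the kuryr-tempest-plugin actually defines:

    import testtools

    from tempest import config

    from kuryr_tempest_plugin.tests.scenario import base

    CONF = config.CONF


    class OpenShiftRouteTest(base.BaseKuryrScenarioTest):

        @testtools.skipUnless(CONF.kuryr_kubernetes.openshift_enabled,
                              'OpenShift is not deployed in this job')
        def test_route_connectivity(self):
            # OpenShift-specific checks would go here; the decorator
            # skips them on plain Kubernetes jobs.
            pass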
dulekyboaron: Now the fun thing is we have the "tempest-kuryr-kubernetes-octavia-openshift" job, which, as you've said, will not work (or at least load balancing will now work).11:36
dulekdmellado: ^11:36
dulekyboaron: s/now/no11:36
dmelladodulek: having a flag is pretty simple11:43
dmelladoso no worries on that11:44
dmelladowhat's going on with the lbaas on openshift?11:44
dulekdmellado: yboaron explains that Octavia will not work with firewall=OVS, which is required for OpenShift to work.11:45
dulekOr at least that's what I understood.11:45
dmelladohmmm it seems that that was working before at least11:47
dmelladoso we could ping the octavia folks to get that back working11:47
yboarondulek, yep, there's a bug with the octavia management network when firewall=OVS, so the bottom line is that it's impossible to create a loadbalancer. To be more specific, in the kuryr-k8s case devstack will be stuck at: https://github.com/openstack/kuryr-kubernetes/blob/master/devstack/plugin.sh#L19811:48
dmelladoyboaron: did you file that bug on octavia, or know if it's already filled?11:49
yboarondmellado, No, I didn't file a bug, just communicated with bcafarel via email.11:50
yboarondmellado, I'll check it11:51
dmelladoyboaron: ack, thanks!11:51
dmelladoif there's a bug already opened I'll try to have it addressed or even take care of it11:51
yboarondmellado, NP , I hope I'll have time to take care of it today11:52
*** yamamoto has quit IRC11:54
*** yamamoto has joined #openstack-kuryr11:58
dmelladothanks yboaron12:00
dmelladolet me know if you get stuck and I can help in any way12:00
yboarondmellado,  you're welcome12:02
*** yamamoto has quit IRC12:03
openstackgerritEyal Leshem proposed openstack/kuryr-kubernetes master: [WIP] Translate k8s policy to SG  https://review.openstack.org/52691612:12
dulekltomasbo: Ping.12:18
*** yamamoto has joined #openstack-kuryr12:35
ltomasbodulek, pong13:19
*** janonymous has quit IRC13:20
*** salv-orlando has quit IRC13:20
*** atoth has joined #openstack-kuryr13:22
dulekltomasbo: Can we talk after the scrum?13:30
ltomasbosure!13:30
*** dougbtv_ has joined #openstack-kuryr13:38
*** dougbtv has quit IRC13:41
*** salv-orlando has joined #openstack-kuryr13:45
*** kiennt26 has joined #openstack-kuryr13:48
*** dougbtv_ has quit IRC13:51
*** dougbtv_ has joined #openstack-kuryr13:52
openstackgerritMichał Dulko proposed openstack/kuryr-kubernetes master: Make some Tempest gates voting  https://review.openstack.org/52736013:54
dulekdmellado: Looks like zuul will be playing games again - http://zuulv3.openstack.org/ is inaccessible.13:58
dmelladoouch, looks like it :\13:58
dulekdmellado: "<odyssey4me> it looks like the zuul dashboard has timed out several times today, and it seems to correlate to jobs ending up timing out if they're running through the dashboard timeout"13:58
dmelladojust *awesome*13:58
dmelladolet's check #openstack-infra for updates13:58
dulekdmellado: Infra uploaded a new version of this dashboard yesterday, so I guess they'll need to revert it.13:58
dmelladoprobably13:59
dmelladodulek: looks like zuulv3 is maxed on ram13:59
dmelladoso I'd say we'll see some odd things for the time being...13:59
dulekdmellado: Yay, fun stuff!14:00
*** maysamacedos has joined #openstack-kuryr14:09
*** salv-orlando has quit IRC14:21
*** yamamoto has quit IRC14:24
*** yamamoto has joined #openstack-kuryr14:24
*** gcheresh has joined #openstack-kuryr14:26
*** salv-orlando has joined #openstack-kuryr14:33
*** maysamacedos has quit IRC14:34
*** maysamacedos has joined #openstack-kuryr14:35
*** maysamacedos has quit IRC14:40
*** maysamacedos has joined #openstack-kuryr14:41
openstackgerritYossi Boaron proposed openstack/kuryr-kubernetes master: Add L7 routing support for openshift context.  https://review.openstack.org/52390014:43
*** maysamacedos has quit IRC14:44
leyaldulek, the 1.8 patch worked perfectly for me, thanks :)14:48
dulekleyal: I'm glad to hear that. :)14:49
dulekltomasbo: So ping again. :)14:49
*** salv-orl_ has joined #openstack-kuryr14:56
*** maysamacedos has joined #openstack-kuryr14:57
*** gouthamr has joined #openstack-kuryr14:59
*** salv-orlando has quit IRC14:59
*** kiennt26 has quit IRC15:23
*** maysamacedos has quit IRC15:24
*** garyloug has quit IRC15:26
ltomasbohi dulek15:30
ltomasbotell me!15:30
dulekltomasbo: Why did you decide to leave the subnet out of the pool key?15:30
ltomasboI was thinking of including it, actually15:32
ltomasbodulek, but I have that on my to-do list15:32
ltomasboas at that point all the pods were on the same subnet...15:32
ltomasbobut we definitely need to add it15:32
dulekltomasbo: Same goes for project_id? :P15:32
ltomasboproject id is there, right?15:33
dulekltomasbo: Yes, but it also doesn't change.15:33
dulekltomasbo: Anyway the issue I'm having is as such:15:34
ltomasboahh, yep, but it was provided on the request_vif already15:34
ltomasbounlike the subnet (I think)15:34
dulekltomasbo: Oh, okay, makes sense.15:34
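As a tiny illustration of what adding the subnet to the pool key could look like; the field set here is a guess, not necessarily the exact key kuryr-kubernetes uses:

    def get_pool_key(host, project_id, security_groups, subnet_id):
        # Ports are only reusable within one subnet, so including
        # subnet_id keeps pods on different subnets from sharing a pool.
        return (host, project_id, tuple(sorted(security_groups)), subnet_id)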
dulekI've moved the VIF reservation to the daemon.15:35
dulekSo kuryr-controller is only responsible for creating ports and adding them as CRDs into K8s API, so daemon will notice them.15:35
dulekEasy.15:35
dulekBut - how do I extend pools.15:35
dulekPool creation when it doesn't exist is simple - when I get pod notification I check if pool exists and create it.15:36
dulekBut with delayed port reservation it's difficult to increase sizes of the pools.15:36
dulekI could do it as a periodic task, the problem is I don't have neither subnet and pod info in the periodic task.15:37
dulekltomasbo: What do you think?15:38
* ltomasbo reading15:38
ltomasboumm, maybe due to the context change (from TripleO) I'm missing something15:40
ltomasbothe pool population is still handled by the kuryr-controller, right?15:40
*** yboaron has quit IRC15:41
ltomasbodulek, ^^15:41
dulekltomasbo: Yes. Currently I'm triggering both initial population and repopulation when new pod is detected by controller.15:41
dulekltomasbo: Fine thing with this is that I have all the required info - pod, pool_key, subnet.15:42
ltomasbodulek, then, kuryr controller can still populate the pool when the number of ports for a specific pool is below X, right?15:42
dulekltomasbo: But if I create more pods at once than the number of vifs added by a single repopulation, some pods will be missing their vifs, as just one repopulation will be triggered.15:43
dulekltomasbo: More or less, but reservation will happen on daemon side.15:43
dulekltomasbo: So controller doesn't know when the port is taken from the pool15:44
ltomasbodulek, but will the cni retry as it does today the controller?15:44
*** gcheresh has quit IRC15:45
ltomasbobasically, what we have now is that if there are not enough ports, a repopulation is triggered15:45
dulekltomasbo: Well, CNI doesn't have a way to communicate with Controller, so noticing it on CNI side doesn't help us.15:45
ltomasboand the pod request_vif will fail with resource not ready and retry later15:45
dulekltomasbo: I could probably count running pods for each pool and available ports for each pool… But I wonder if this will be accurate.15:46
ltomasbodulek, I think I'm missing something15:47
duleks/available/all15:47
ltomasbothe pools will be on the cni, right?15:47
ltomasbobut the info there is just the subnet, port_id, pool_key...15:47
dulekltomasbo: Hm, yes. But pool creation happens in the controller.15:47
ltomasbothe actual call to neutron still happens at kuryr-controller, right?15:47
dulekltomasbo: Yes, that's the assumption.15:48
*** mrostecki has joined #openstack-kuryr15:48
ltomasbook, and request_vif (from the handler) will still call the controller, right?15:48
dulekltomasbo: Which request_vif?15:48
ltomasbowhen you create a pod, the vif handler calls request_vif to get a vif for the pod15:49
ltomasboand then the controller checks the pool15:49
ltomasboif there is a port it returns it15:49
ltomasboand if there is not, it fails with ResourceNotReady15:49
ltomasboand it will be retried again15:49
dulekltomasbo: Yeah, that's still in there, but I'm now returning None.15:50
ltomasbohow will be the flow now with the cni side?15:50
ltomasbois it the cni returning the vif? or just performing the annotation (and removing that from the controller)?15:50
dulekltomasbo: So controller checks the pool and repopulates if needed. That's it, no annotating done on controller side.15:51
dulekltomasbo: Now repopulation creates KuryrPort CRD that kuryr-daemon is watching for.15:51
dulek(or CRDs)15:51
dulekltomasbo: Now kuryr-daemon keeps a registry of KuryrPorts.15:52
dulekltomasbo: So when the CNI request comes, CNI just takes a free KuryrPort and annotates it.15:52
dulekltomasbo: It's done in a safe manner using compare-and-swap technique.15:52
dulekltomasbo: And basically that's it.15:54
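A simplified sketch of the compare-and-swap dulek describes, leaning on the Kubernetes API rejecting writes whose resourceVersion is stale; the CRD path, annotation key, and helper name are assumptions, and namespace handling is omitted:

    import copy

    import requests


    def claim_kuryrport(k8s_api, kuryrport, pod_name):
        """Try to claim a free KuryrPort for a pod; True on success."""
        body = copy.deepcopy(kuryrport)
        body['metadata'].setdefault('annotations', {})[
            'openstack.org/kuryr-pod'] = pod_name
        # The resourceVersion carried over from the read is the
        # "compare" part: the API server answers 409 Conflict if
        # another thread modified the object in the meantime, and the
        # caller simply moves on to the next free KuryrPort.
        resp = requests.put(
            '%s/apis/openstack.org/v1/kuryrports/%s'
            % (k8s_api, body['metadata']['name']),
            json=body)
        return resp.status_code == 200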
ltomasbook15:57
ltomasbodulek, so, we still know at the kuryr-controller the amount of ports left in the pools, right15:57
ltomasboand we can still trigger the repopulation actions from there15:58
ltomasbothe difference will be when CNI does not have a port from the CRD to take15:58
dulekltomasbo: So the source of truth is KuryrPort API.15:58
dulekltomasbo: So kuryr-controller doesn't know *immediately* if a port is missing for a pod.15:58
ltomasbothen, how do we detect that and make it retry, right?15:58
*** mrostecki has left #openstack-kuryr15:59
ltomasbodulek, why will it not know it?15:59
dulekltomasbo: Hm.15:59
ltomasbothe kuryrport crd is created by the kuryr-controller, right?16:00
ltomasbowhen the request_vif comes16:00
dulekltomasbo: https://review.openstack.org/#/c/527243/5/kuryr_kubernetes/controller/drivers/vif_pool.py@63916:00
dulekltomasbo: Yes, but it's not assigned to a pod.16:00
ltomasbothe difference (if I got it right) is that when a repopulation is needed, it will create kuryrPort CRDs, right?16:00
ltomasbothe difference will be that we don't care about port IDs anymore at the kuryr-controller, but about the number of pod requests16:01
ltomasboso that we just keep track of the remaining number of ports, right?16:01
dulekltomasbo: Oh…16:02
ltomasbodulek, can we keep track of the pools at the controller in a similar way, without relying on the kubernetes API?16:02
ltomasboand just make the cni side rely on the API?16:03
dulekltomasbo: Not really. Controller will not know if a port was taken by a pod until CNI annotates it.16:03
ltomasboumm16:03
dulekltomasbo: And it cannot assume that every pod will take a port, e.g. if a pod gets removed before the CNI request is sent.16:03
ltomasboso, kuryr-controller will not be listening for pod creation anymore?16:04
dulekltomasbo: It still listens for that. I've mentioned a bit earlier that I could probably count pods and trigger repopulation when the *total* number of pods in a pool is higher than the number of KuryrPorts in the pool.16:05
dulekltomasbo: How does this ResourceNotReady retry work?16:05
ltomasbosomething like that but with some margin (to ensure a minimum size of the pool)16:06
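That trigger condition could look roughly like this; the names and the margin value are purely illustrative:

    def needs_repopulation(pods_in_pool, kuryrports_in_pool, margin=5):
        # Repopulate when the pods scheduled to a pool get close to the
        # number of KuryrPorts created for it, keeping `margin` spare.
        return kuryrports_in_pool - pods_in_pool < margin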
ltomasbothat was in the controller side16:06
dulekltomasbo: Suuure, sure.16:06
ltomasboso, whenever there is no port available for a pod, request_vif will trigger a repopulation action (in another thread) and raise ResourceNotReady16:06
ltomasboso that the vif_handler will retry later for that pod16:07
ltomasboand next time it will get the port created by the repopulation16:07
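Roughly, the existing controller-side flow ltomasbo describes looks like this; the attribute and method names approximate the kuryr-kubernetes pool driver rather than quoting it:

    import eventlet

    from kuryr_kubernetes import exceptions


    def request_vif_from_pool(self, pool_key, pod):
        try:
            # Hand out a pre-created port if the pool has one available.
            return self._available_ports[pool_key].pop()
        except (KeyError, IndexError):
            # Pool is empty: repopulate in another thread and raise
            # ResourceNotReady so the handler pipeline retries this pod.
            eventlet.spawn(self._populate_pool, pool_key, pod)
            raise exceptions.ResourceNotReady(pod)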
ltomasbothere are a few handlers for logging, retries, ... in the handlers folder16:07
ltomasbonot sure if they are applicable to the cni side...16:07
ltomasbobut something like waiting on the KuryrPort CRDs to be populated may work, I guess16:08
dulekltomasbo: VIFHandler doesn't seem to be inheriting from retry?16:08
ltomasbowell, it does retry, as the pod gets the vif after the ResourceNotReady...16:09
ltomasbolet me see16:09
dulekltomasbo: Okay, ControllerPipeline wraps it in such handler automatically.16:09
dulekltomasbo: I'll try to wrap my head around what we've discussed.16:11
ltomasboyep, it is in the pipeline.py16:14
ltomasbobut just for ResourceNotReady16:14
ltomasboso, I guess something similar may help...16:14
dulekltomasbo: Looking from another angle - I wonder if a periodic job shouldn't repopulate the pools.16:16
dulekltomasbo: That way it would be free from race conditions.16:16
dulekltomasbo: I wonder… When creating 10 new ports in repopulation action - why is pod passed to request_vifs?16:17
dulekltomasbo: https://github.com/openstack/kuryr-kubernetes/blob/f53188a2b80851403b2e89c3410da692cffba0af/kuryr_kubernetes/controller/drivers/neutron_vif.py#L10116:18
dulekltomasbo: Does this setting matter at all? Because it will be the same for 10 ports created for the pool.16:18
ltomasbodulek, I'm not sure if that was needed16:34
ltomasbobut now I think we use it on the ports to recover them for the non-nested case16:35
dulekltomasbo: Yeaaaah, now I've noticed that it's fine, it's just getting node hostname from pod.16:35
* dulek needs coffee, but it's too late now. ;)16:35
ltomasbo:D16:35
ltomasboalso, the problem with time-based repopulations is that you will still need to handle retries16:36
ltomasboin case of burst pod creation16:36
ltomasboin between updates16:36
dulekltomasbo: Not really I think.16:36
dulekltomasbo: I mean - new pool creation will need to be handled.16:36
dulekltomasbo: But repopulation of existing pools can happen in periodic tasks and this should be safe.16:37
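A hedged sketch of such a periodic task using oslo.service's FixedIntervalLoopingCall, assuming that library is available; the pool driver API shown is invented for illustration:

    from oslo_service import loopingcall


    def start_periodic_repopulation(pool_driver, interval=60, minimum=5):
        def _repopulate_all():
            # Top up every existing pool that fell below the minimum;
            # brand new pools are still created on pod events.
            for pool_key, ports in pool_driver.existing_pools().items():
                if len(ports) < minimum:
                    pool_driver.populate_pool(pool_key)

        timer = loopingcall.FixedIntervalLoopingCall(_repopulate_all)
        timer.start(interval=interval)
        return timer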
ltomasboif you have 10 ports in a given pool, and you repopulate it, let's say, every minute16:37
ltomasboand you have 20 pods arriving between two repopulation actions16:37
ltomasbowhat would happen?16:37
dulekltomasbo: Not much, CNI will wait until controller catches up.16:38
dulekltomasbo: And will grab ports once available.16:38
ltomasbobut then you can have latency added to pod creation based on that interval16:38
ltomasboinstead of just repopulating it when you see it's needed16:39
dulekltomasbo: Yes, but is it different currently?16:39
*** juriarte has quit IRC16:39
dulekltomasbo: When there's a high number of pods created at once, is the pool still updated 10-by-10?16:40
ltomasboummm16:40
ltomasboyep, we have an update lock to not trigger many repopulations16:41
ltomasbodulek, so, perhaps in the end it's not that different anyway16:42
dulekltomasbo: Oh well. That's the worst part - I'm starting to wonder if the design I'm working on is actually any better. :P16:42
ltomasboit will just save some time when a burst happens in between updates, if not many of them are triggered together16:43
dulekltomasbo: BTW - why are you doing an interval-based repopulation instead of just doing a failing lock so 2 repopulations will never run in parallel?16:44
ltomasbodulek, that is a good question...16:47
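What dulek suggests could look roughly like this: a non-blocking, per-pool lock so only one repopulation runs at a time and concurrent requests are simply skipped (all names are illustrative):

    import threading

    _repopulation_locks = {}


    def maybe_repopulate(pool_key, populate_fn, num_ports=10):
        lock = _repopulation_locks.setdefault(pool_key, threading.Lock())
        if not lock.acquire(False):
            # A repopulation for this pool is already in flight; skip.
            return False
        try:
            populate_fn(pool_key, num_ports)
            return True
        finally:
            lock.release()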
dulekltomasbo: Okay, that's enough for today, I'll need to sleep and rethink this with a fresh mind.16:51
dulekThanks!16:51
ltomasboxD16:52
ltomasboyou're welcome!16:52
*** maysamacedos has joined #openstack-kuryr17:17
*** salv-orl_ has quit IRC18:23
*** gcheresh has joined #openstack-kuryr18:33
*** salv-orlando has joined #openstack-kuryr18:42
*** maysamacedos has quit IRC19:40
*** maysamacedos has joined #openstack-kuryr19:42
*** salv-orlando has quit IRC19:46
*** leyal has quit IRC19:48
*** leyal has joined #openstack-kuryr19:49
*** salv-orlando has joined #openstack-kuryr20:05
*** maysamacedos has quit IRC20:28
*** c00281451_ has joined #openstack-kuryr20:32
*** dmellado has quit IRC20:32
*** salv-orl_ has joined #openstack-kuryr20:32
*** openstackgerrit has quit IRC20:34
*** dmellado has joined #openstack-kuryr20:34
*** salv-orlando has quit IRC20:35
*** zengchen has quit IRC20:35
*** s1061123 has quit IRC20:35
*** lihi has quit IRC20:37
*** maysamacedos has joined #openstack-kuryr20:39
*** lihi has joined #openstack-kuryr20:40
*** s1061123 has joined #openstack-kuryr20:40
*** maysamacedos has quit IRC20:42
*** jistr has quit IRC20:45
*** jistr has joined #openstack-kuryr20:45
*** jappleii__ has joined #openstack-kuryr21:23
*** jappleii__ has quit IRC21:24
*** jappleii__ has joined #openstack-kuryr21:25
*** jappleii__ has quit IRC21:26
*** jappleii__ has joined #openstack-kuryr21:27
*** jappleii__ has quit IRC21:27
*** jappleii__ has joined #openstack-kuryr21:28
*** dougbtv has joined #openstack-kuryr21:34
*** gcheresh has quit IRC21:41
*** maysamacedos has joined #openstack-kuryr21:41
*** maysamacedos has quit IRC21:43
*** gouthamr has quit IRC22:18
*** maysamacedos has joined #openstack-kuryr23:10
*** maysamacedos has quit IRC23:31
*** maysamacedos has joined #openstack-kuryr23:31
*** atoth has quit IRC23:35
*** pmannidi has joined #openstack-kuryr23:36

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!