Thursday, 2021-06-24

opendevreviewRoman Dobosz proposed openstack/kuryr-kubernetes master: Fixes for latest changes on Neutron devstack.  https://review.opendev.org/c/openstack/kuryr-kubernetes/+/79420009:13
opendevreviewMerged openstack/kuryr-kubernetes master: Update the Service Creation Process  https://review.opendev.org/c/openstack/kuryr-kubernetes/+/79536309:57
opendevreviewSunday Mgbogu proposed openstack/kuryr-kubernetes master: Kuryr Kubernetes Loadbalancers Reconciliation Design  https://review.opendev.org/c/openstack/kuryr-kubernetes/+/79623410:17
opendevreviewRoman Dobosz proposed openstack/kuryr-kubernetes master: Minor formatting corrections.  https://review.opendev.org/c/openstack/kuryr-kubernetes/+/79786910:17
opendevreviewMichał Dulko proposed openstack/kuryr-kubernetes master: Add original exception to log abou dead component  https://review.opendev.org/c/openstack/kuryr-kubernetes/+/79787611:05
opendevreviewMichał Dulko proposed openstack/kuryr-kubernetes master: Add original exception to log about dead component  https://review.opendev.org/c/openstack/kuryr-kubernetes/+/79787611:07
opendevreviewRoman Dobosz proposed openstack/kuryr-kubernetes master: Fixes for latest changes on Neutron devstack.  https://review.opendev.org/c/openstack/kuryr-kubernetes/+/79420011:07
digitalsimbojaHello!12:34
digitalsimboja@maysams, @ltomasbo: I am reworking the reconciliation flow and would like to share my line of thought around it12:34
opendevreviewSunday Mgbogu proposed openstack/kuryr-kubernetes master: Kuryr Kubernetes Loadbalancers Reconciliation Design  https://review.opendev.org/c/openstack/kuryr-kubernetes/+/79623412:57
digitalsimbojaPlease take a look above12:58
digitalsimbojaThanks12:58
digitalsimbojaHere is how I view it:12:58
digitalsimbojaIf an OpenStack resource is deleted or modified, KuryrLoadBalancer, which occasionally 'polls' the OpenStack resources, notices12:59
digitalsimbojaAnd sends a patch to the k8s lb CRD with an empty status13:00
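The "patch with an empty status" trigger described above can be sketched roughly like this; `build_reconcile_patch` is a hypothetical helper and the CRD layout is only illustrative, not the actual kuryr-kubernetes API:

```python
# Sketch of the reconciliation trigger discussed above: clearing the status
# section of a KuryrLoadBalancer CRD so the controller treats the
# loadbalancer as not yet provisioned and reruns the creation flow.
# build_reconcile_patch is a hypothetical name, not kuryr-kubernetes code.

def build_reconcile_patch(klb_crd):
    """Return a copy of the CRD with its status cleared."""
    patched = dict(klb_crd)  # shallow copy; the original CRD is untouched
    patched['status'] = {}
    return patched
```

The returned body would then be sent to the Kubernetes API as a patch on the KLB CRD, which is the second step ltomasbo mentions below.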
ltomasbodigitalsimboja, checking13:02
ltomasbodigitalsimboja, some inputs there as I see it13:02
ltomasbo1) add Handler to the KuryrLoadBalancer box13:03
ltomasbo2) perhaps "LB deleted/modified/klb ev()" needs to be replaced with something more meaningful13:03
digitalsimbojanoting...13:04
ltomasboI assume that means the KLB Handler is doing some "get loadbalancers" and then it detects if there are some missing13:04
digitalsimbojasure13:04
ltomasboalso, the ServiceHandler/EndpointHandler rows there look wrong to me13:05
ltomasbothe spec is not removed/lost, so they do not need to recreate them13:05
digitalsimbojaThat's my confusion honestly13:05
ltomasbothey are only involved when the service/endpoints gets created/deleted/modified, i.e., when k8s actions take place, not when OpenStack actions take place (deletion of a loadbalancer, as in your case here)13:05
digitalsimbojaThey are already holding the spec13:06
ltomasbodigitalsimboja, perhaps you can try to write that sequence diagram for the service creation first and share it with us to ensure you got it right, and then we go for updating this one for the reconciliation13:07
ltomasboit would be a great addition to the documentation to have such a figure for the service/endpoints creation13:07
digitalsimbojaThat would make sense13:07
ltomasboand I think it will help you with the flow too13:07
digitalsimbojaSure! Because we are more or less doing something like a reverse of the k8s event in this case?13:08
ltomasbonot really13:09
ltomasbowe just need to check that the OpenStack resource is missing and trigger the recreation (same flow as when creating the service)13:09
ltomasboand as you already have there, the recreation is triggered by removing the status section13:10
ltomasboso, you are missing how/where to detect the OpenStack resource is missing13:10
ltomasbothen the second step, how to trigger the recreation you already know about it (removing the status on the related KLB CRD)13:10
digitalsimbojaMy thoughts around detecting a missing OpenStack resource: I believe this can be handled by the KuryrLoadBalancerHandler, since it already has a way of patching the k8s CRD with an empty status13:13
digitalsimbojaSo I am wondering whether the reconcile mechanism should be implemented right there13:13
digitalsimbojaSo I am thinking of doing something like this:13:14
digitalsimbojaon the KuryrLoadBalancerHandler13:14
digitalsimbojaget_loadbalancer ev()13:14
digitalsimbojaThen compare with loadbalancer_crd 13:15
digitalsimbojaand then trigger the recreation if anything is missing13:15
ltomasbothere are some problems with that approach13:15
digitalsimbojaBut the get_loadbalancer ev() should call Octavia API after some time13:15
ltomasbo1) the handler is executed for specific KLB CRD (i.e., for specific services)13:16
ltomasboso, it is not something that runs in the background regularly13:16
ltomasbo2) the OpenStack calls are usually on the drivers... so I'm more inclined to add that on the lbaasv2 driver instead (but we will need to circle back to removing the CRD status as you mentioned, so not completely sure about this)13:17
ltomasbowhat is get_loadbalancer_ev()?13:17
ltomasboev stands for events?13:17
digitalsimbojaclients.get_loadbalancer_clients sort of13:18
ltomasbothere are no events in the OpenStack Octavia API you can subscribe to... the call will be kind of "give me all the loadbalancers"13:18
ltomasboand then you will need to process all the KLB CRDs' statuses, and check if the loadbalancer ID in each CRD status is in the retrieved information about all those loadbalancers on the OpenStack side13:18
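The check just described — one "give me all the loadbalancers" call, then a walk over the KLB CRD statuses looking for IDs that no longer exist — could look roughly like this. The data shapes and the function name are assumptions drawn from the discussion, not the real driver code:

```python
# Sketch of the detection step: compare the loadbalancer IDs recorded in the
# KuryrLoadBalancer CRD statuses against the loadbalancers Octavia actually
# reports. find_missing_loadbalancers is a hypothetical name.

def find_missing_loadbalancers(klb_crds, octavia_loadbalancers):
    """Return the KLB CRDs whose recorded loadbalancer no longer exists."""
    # A single listing call feeds this set, so the per-CRD check is a set
    # lookup rather than one Octavia API call per service.
    existing_ids = {lb['id'] for lb in octavia_loadbalancers}
    missing = []
    for crd in klb_crds:
        lb_id = crd.get('status', {}).get('loadbalancer', {}).get('id')
        if lb_id is not None and lb_id not in existing_ids:
            missing.append(crd)
    return missing
```

Each CRD returned would then get its status cleared to trigger the normal creation flow, as discussed earlier in the channel.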
digitalsimbojasomething like this would get me all the loadbalancers, I think:13:19
digitalsimboja    def get_octavia_version(self):13:19
digitalsimboja        lbaas = clients.get_loadbalancer_client()13:19
digitalsimbojalbaas should get me the loadbalancer to talk to Octavia?13:19
digitalsimbojathe client sorry13:19
digitalsimbojaIf that is the case then I need to figure out how to regularly make the call to fetch the loadbalancers from OpenStack using the client?13:20
ltomasboyep, there are some background threads in other drivers doing something similar13:21
ltomasboperhaps you can take a look at them, and add that on the lbaasv2 driver13:21
digitalsimbojaperfect13:21
ltomasbobut would be nice if you also get someone else inputs on this13:22
digitalsimbojasure!13:22
digitalsimbojaSo for now, I would need to modify the flow to reflect that lbaasv2 driver should call Octavia API to fetch all the loadbalancers13:24
digitalsimbojathen13:24
ltomasbofor now I would try to write a similar image to the one you have for the service creation flow13:26
ltomasboto help you understand the creation flow (and to later improve documentation with that)13:26
ltomasbothen, we can re-take the existing one, and have kind of a mix between the creation flow and the one you have13:27
ltomasboas the difference will be the triggering mechanism, but the second part should be exactly the same13:27
digitalsimbojaPerfect13:27
digitalsimbojaFor now let me check other drivers and see how they regularly call OpenStack?13:32
ltomasbovif_pool does some of those13:35
ltomasbobefore, the lbaas also had some code related to that, let me try to find it in an old repo13:35
ltomasboactually, we had it on the handler init https://github.com/openshift/kuryr-kubernetes/blob/release-3.11/kuryr_kubernetes/controller/handlers/lbaas.py#L17513:37
ltomasbowhich means, perhaps you are right and the best place is the KuryrLoadBalancerHandler13:37
ltomasboactually, that function was to clean up loadbalancers, and you can take ideas from there, as it was using the OpenStack API to get the OpenStack loadbalancers...13:39
ltomasbodigitalsimboja, ^13:39
ltomasbodigitalsimboja, I wrote some information that could be useful for you, not sure if you saw it13:42
ltomasboas you disconnected right after13:42
digitalsimbojaNo I did not13:43
ltomasbo<ltomasbo> vif_pool does some of those13:43
ltomasbo<ltomasbo> before, the lbaas also had some code related to that, let me try to find it in an old repo13:43
ltomasbo<ltomasbo> actually, we had it on the handler init https://github.com/openshift/kuryr-kubernetes/blob/release-3.11/kuryr_kubernetes/controller/handlers/lbaas.py#L17513:43
ltomasbo<ltomasbo> which means, perhaps you are right and the best place is the KuryrLoadBalancerHandler13:43
ltomasbo<ltomasbo> actually, that function was to clean up loadbalancers, and you can take ideas from there, as it was using the OpenStack API to get the OpenStack loadbalancers...13:43
ltomasbodigitalsimboja, ^13:43
digitalsimbojaThanks13:43
maysamsreading...13:46
maysamsltomasbo: regarding adding the extra thread, in the example you shared it would only run upon restart of the controller, meaning it would only get the Services and lbs at that specific moment13:57
maysamsit might happen that some services are created after the controller is up and not taken into account?13:57
ltomasbounless the thread is running every X mins and redoing the actions, right?13:58
maysamsI just wondered whether it would make sense to rely on the reconciliation mechanism we have in Kuryr, but this might increase the number of calls to Octavia13:58
ltomasbothe example I pasted was just to be executed at the beginning (kuryr-controller start), but we can have another one that never finishes... and rerun the same loop every 10 mins or so13:59
maysamsthis one is done just once13:59
ltomasboright, that will mean for each existing KLB CRD there will be a call to Octavia API, right?14:00
maysamsmy pasting is somehow not working here... hm14:00
ltomasbothat won't scale really well I'm afraid14:00
maysamsyeah14:01
maysamsdigitalsimboja: as Luis mentioned, the service and endpoints handlers do not have an impact during reconciliation. In Kuryr, the communication would be between the kuryrloadbalancer CRD and lbaasv214:05
digitalsimbojaUnderstood!14:06
digitalsimbojaThey are only involved when an event occurs on K8s, so I have to scrap those14:06
maysamsyes14:06
digitalsimbojaNow coming back to the real problem, I think what we need is sort of a task scheduler that runs every x mins?14:07
digitalsimbojaThat is, it makes a call to OpenStack every x mins14:07
maysamsyes, have you checked the Link Luis provided?14:07
digitalsimbojaI am looking at it yeah14:08
maysamsthat is a good example of how to fetch data every x time in a separate thread14:08
digitalsimbojaAlso, the vif_pool driver uses the eventlet.spawn method14:17
digitalsimbojaTaking a look14:17
opendevreviewMichał Dulko proposed openstack/kuryr-kubernetes master: Use correct logger when setting pyroute2 log level  https://review.opendev.org/c/openstack/kuryr-kubernetes/+/79795615:04

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!