Tuesday, 2018-06-05

*** gianpietro has quit IRC  00:16
*** gianpietro has joined #openstack-kuryr  00:21
*** gianpietro has quit IRC  00:26
*** gianpietro has joined #openstack-kuryr  00:27
*** maysamacedos has joined #openstack-kuryr  00:30
*** gianpietro has quit IRC  00:31
*** gianpietro has joined #openstack-kuryr  00:38
*** gianpietro has quit IRC  00:43
*** hongbin has joined #openstack-kuryr  00:57
*** kiennt26 has joined #openstack-kuryr  01:10
*** gianpietro has joined #openstack-kuryr  01:49
*** yamamoto has joined #openstack-kuryr  02:15
*** kiennt26 has quit IRC  02:28
*** lihi has quit IRC  02:36
*** lihi has joined #openstack-kuryr  02:37
*** maysamacedos has quit IRC  02:39
*** kiennt26 has joined #openstack-kuryr  02:43
*** rh-jelabarre has quit IRC  03:27
*** rh-jelabarre has joined #openstack-kuryr  03:36
*** rh-jelabarre has quit IRC  03:59
*** kiennt26 has quit IRC  04:29
*** yboaron has joined #openstack-kuryr  04:42
*** CrayZee has joined #openstack-kuryr  04:57
*** gcheresh_ has joined #openstack-kuryr  05:05
*** hongbin has quit IRC  05:06
*** gcheresh_ has quit IRC  05:10
*** gcheresh has joined #openstack-kuryr  05:11
*** salv-orlando has joined #openstack-kuryr  05:17
*** salv-orlando has quit IRC  05:17
*** ispp has joined #openstack-kuryr  06:12
*** isssp has quit IRC  06:14
*** salv-orlando has joined #openstack-kuryr  06:32
*** salv-orlando has quit IRC  06:32
*** salv-orlando has joined #openstack-kuryr  06:33
*** salv-orlando has quit IRC  06:52
*** pcaruana has joined #openstack-kuryr  06:54
*** yamamoto_ has joined #openstack-kuryr  06:57
*** yamamoto has quit IRC  07:00
*** pcaruana is now known as pcaruana|worksho  07:03
*** gianpietro has quit IRC  07:04
*** gianpietro has joined #openstack-kuryr  07:05
*** gianpietro has quit IRC  07:10
*** janki has joined #openstack-kuryr  07:15
*** celebdor1 has joined #openstack-kuryr  07:26
*** gianpietro has joined #openstack-kuryr  07:29
*** pcaruana|worksho is now known as pcaruana  07:50
*** garyloug has joined #openstack-kuryr  08:04
*** yboaron has quit IRC  08:20
*** salv-orlando has joined #openstack-kuryr  08:49
*** salv-orlando has quit IRC  08:55
*** gianpietro has quit IRC  09:06
*** lxkong has quit IRC  09:09
*** phuoc_ has joined #openstack-kuryr  09:11
*** phuoc has quit IRC  09:14
*** lxkong has joined #openstack-kuryr  09:32
*** salv-orlando has joined #openstack-kuryr  09:51
*** salv-orlando has quit IRC  09:55
*** garyloug has quit IRC  10:22
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Namespace deletion functionality for namespace_subnet driver  https://review.openstack.org/562249  10:23
*** garyloug has joined #openstack-kuryr  10:26
<celebdor1> ltomasbo: ping  10:34
<ltomasbo> celebdor1, pong!  10:35
<celebdor1> ltomasbo: IIRC when you do a network delete for a network that has ports, it fails, right?  10:35
<ltomasbo> celebdor1, yes  10:36
<dulek> ltomasbo: Yay, it's merging!  10:37
<celebdor1> ok  10:37
<ltomasbo> celebdor1, why do you ask?  10:38
<celebdor1> ltomasbo: because in your patch I don't see port removal  10:40
<celebdor1> and considering that the namespace deletion is a cascading op in k8s  10:40
<celebdor1> that makes namespace deletion a bit racy  10:41
<celebdor1> doesn't it?  10:41
<ltomasbo> celebdor1, that was why I handled the case for pools in a follow-up patch  10:41
<celebdor1> you rely on the pod deletion events to be processed before the namespace deletion event  10:41
<celebdor1> I am not talking about the pool case  10:41
<celebdor1> I'm talking about event races  10:41
<ltomasbo> celebdor1, you mean the port deletions are triggered but not yet finished when we trigger the network deletion, right?  10:42
<celebdor1> let's see if I get this right  10:42
<celebdor1> I have a namespace with 1000 pods  10:42
<celebdor1> when I send a DELETE to the namespace resource  10:43
<celebdor1> K8s will start doing DELETE events for each of those 1000 pods  10:43
<celebdor1> and then will send the delete event of the namespace  10:43
<celebdor1> I hope this is right  10:43
<ltomasbo> yep, and kuryr (without pools) will start deleting the ports  10:43
<celebdor1> in case it is, what happens if we are slow deleting those pods and another green thread gets the namespace deletion?  10:44
<celebdor1> it will fail to delete the network  10:44
<celebdor1> and fail to delete the CRD  10:44
<celebdor1> right?  10:44
<ltomasbo> just to be clear, this is without pools --> https://review.openstack.org/#/c/562249/  10:44
<ltomasbo> and if delete_network fails, the CRD will not be deleted, yes  10:45
<ltomasbo> and on_delete will raise a NeutronClientException  10:45
<ltomasbo> so, I guess there will be a retry, right?  10:46
<ltomasbo> ohh, but perhaps I should split the remove_interface_router and delete_network into 2 different try/except blocks  10:46
<celebdor1> ltomasbo: my point is that we could probably raise a ResourceNotReady  10:47
<celebdor1> so it is automatically retried  10:47
<celebdor1> and yes, in that case, the Neutron deletion actions should have different error handling  10:47
<ltomasbo> makes sense  10:47
<ltomasbo> celebdor1, can you remove the +W?  10:47
<celebdor1> I still merged the patch. I'm just asking for higher reliability  10:47
<celebdor1> in a follow-up patch  10:47
<celebdor1> I don't need a longer patch series nor more complexity  10:48
<celebdor1> that deserves its own patch  10:48
<ltomasbo> ok, then I'll add a follow-up patch to address that  10:48
<celebdor1> thanks ltomasbo!  10:50
<ltomasbo> thank you!  10:50
<celebdor1> ;-)  10:50
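
[Editor's note: a minimal sketch of the error-handling split discussed above, assuming python-neutronclient and the kuryr-kubernetes clients/exceptions helpers; the function name, its arguments, and the exact exceptions caught are illustrative, not the code that landed in the namespace drivers.]

    from neutronclient.common import exceptions as n_exc

    from kuryr_kubernetes import clients
    from kuryr_kubernetes import exceptions as k_exc


    def _delete_namespace_network(net_id, router_id, subnet_id):
        neutron = clients.get_neutron_client()

        # Detaching the subnet from the router and deleting the network are
        # independent failure modes, so each gets its own error handling.
        try:
            neutron.remove_interface_router(router_id,
                                            {'subnet_id': subnet_id})
        except n_exc.NotFound:
            # Already detached on a previous, partially failed attempt.
            pass

        try:
            neutron.delete_network(net_id)
        except n_exc.NetworkInUseClient:
            # Pod DELETE events may still be in flight, so ports remain on
            # the network. Raising ResourceNotReady tells the controller to
            # retry the namespace deletion event later instead of failing it.
            raise k_exc.ResourceNotReady(net_id)

With this shape, a network delete that races with pod cleanup surfaces as a retry rather than an unhandled NeutronClientException, matching the retry behaviour ltomasbo asks about above.
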
*** pliu has quit IRC  10:50
*** pliu has joined #openstack-kuryr  10:51
*** atoth has joined #openstack-kuryr  11:16
<celebdor1> ltomasbo: regarding https://review.openstack.org/#/c/572118/1/kuryr_kubernetes/controller/drivers/vif_pool.py  11:16
<celebdor1> aren't the keys device names and the values vif annotations?  11:17
<celebdor1> or are they the port ovos?  11:17
<celebdor1> with the name you put, it suggests the latter, but I thought it was the former  11:17
* ltomasbo looking  11:18
<ltomasbo> celebdor1, yes, keys are device names (like 'eth0') and the values are the annotations  11:18
<ltomasbo> and I need the port id present in the annotation, that is why I use annotations.values()  11:19
<celebdor1> ltomasbo: that part is clear  11:19
<ltomasbo> well, it is more like --> {'eth0': vif-obj}  11:19
<celebdor1> I only meant that it should be  11:19
<celebdor1> for vif in annotations.values()  11:19
<celebdor1> instead of for port  11:20
<celebdor1> the port var name there is misleading  11:20
<ltomasbo> ahh, yes, you are right!  11:20
<ltomasbo> I'll change it right away  11:20
<celebdor1> thanks  11:21
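
[Editor's note: a tiny sketch of the shape being discussed, not the actual vif_pool.py code. The annotation maps interface names to VIF objects, so iterating .values() yields VIFs, which is why 'vif' is the accurate loop-variable name; the helper name is hypothetical, and it assumes each VIF's id field is the backing Neutron port UUID.]

    def _vif_port_ids(vifs_by_ifname):
        """vifs_by_ifname: e.g. {'eth0': vif_obj} -- device name -> VIF o.vo."""
        # .values() yields VIF objects, not Neutron ports, hence 'vif' here;
        # each VIF's id is the id of the Neutron port backing it.
        return [vif.id for vif in vifs_by_ifname.values()]
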
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Add ports pool clean up support to namespace deletion  https://review.openstack.org/564148  11:23
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Ensure namespace delete is retry to avoid race  https://review.openstack.org/572344  11:23
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Fix precreated ports recovery after pod annotations change  https://review.openstack.org/572118  11:28
*** yamamoto_ has quit IRC  11:29
<ltomasbo> celebdor1, ^^ done!  11:30
<celebdor1> great  11:31
<celebdor1> irenab, dulek: please review https://review.openstack.org/#/c/572118/2  11:31
<dulek> celebdor1: Hm, I have a thought now… We probably need a conversion utility or compatibility code in case the annotation is in the old format.  11:35
<dulek> In case of a Kuryr upgrade from Queens.  11:35
*** yamamoto has joined #openstack-kuryr  11:35
<openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: Retry namespace deletion to mitigate cascading race  https://review.openstack.org/572344  11:36
<celebdor1> dulek: of course we do  11:38
<celebdor1> I'll think about it  11:39
<celebdor1> thanks for bringing it up!  11:39
<dulek> celebdor1: Quick thought #2: since we don't support anyone running trunk yet, we should be able to undo some changes while it's still Rocky.  11:45
<dulek> celebdor1: So I think the best way to approach this compatibility code would be to switch from putting a blank dict to a dictionary o.vo.  11:45
<dulek> celebdor1: That way you only fetch the annotation, check its type and if it's the old one, you pack it.  11:46
*** snapiri- has joined #openstack-kuryr  11:48
*** CrayZee has quit IRC  11:50
<openstackgerrit> Merged openstack/kuryr-libnetwork master: modify the kuryr workflow document  https://review.openstack.org/572085  11:52
<openstackgerrit> Merged openstack/kuryr-libnetwork master: [ci] Use zuul v3 native job for Rally  https://review.openstack.org/568855  11:52
*** snapiri- is now known as CrayZee  11:53
*** CrayZee is now known as Guest91145  11:53
*** Guest91145 has quit IRC  11:53
*** snapiri- has joined #openstack-kuryr  11:54
*** snapiri- has quit IRC  11:54
*** snapiri- has joined #openstack-kuryr  11:55
*** snapiri- has quit IRC  11:55
*** yamamoto has quit IRC  11:58
*** rh-jelabarre has joined #openstack-kuryr  11:59
<celebdor1> dulek: that makes sense  12:07
<celebdor1> a lot of it  12:07
<celebdor1> I was also wondering if it won't converge with your kuryrport work  12:07
<celebdor1> and then the annotation would just be a reference to a CRD  12:07
<dulek> celebdor1: This could definitely help, but I think we should fix it in master first.  12:09
<celebdor1> agreed  12:17
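
[Editor's note: a minimal sketch of the compatibility idea dulek describes, assuming the Queens-era annotation serialized a single VIF o.vo while the new format keys VIFs by interface name. The helper name, the default 'eth0' key, and the plain-dict new format are assumptions for illustration, not the upgrade code that was eventually written.]

    import json


    def _upgrade_vif_annotation(raw_annotation, default_ifname='eth0'):
        """Return the annotation data in the new {ifname: vif_primitive} shape."""
        data = json.loads(raw_annotation)
        # oslo.versionedobjects primitives carry 'versioned_object.*' keys at
        # the top level, so their presence marks the old single-VIF format;
        # in that case, pack the old payload under a default interface name.
        if 'versioned_object.name' in data:
            data = {default_ifname: data}
        return data
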
*** maysamacedos has joined #openstack-kuryr  12:19
*** yamamoto has joined #openstack-kuryr  12:44
*** rh-jelabarre has quit IRC  12:52
<openstackgerrit> Merged openstack/kuryr-tempest-plugin master: Testing curl to the service of type LoadBalancer  https://review.openstack.org/570734  12:54
*** rh-jelabarre has joined #openstack-kuryr  12:58
*** CrayZee has joined #openstack-kuryr  13:15
*** celebdor1 has quit IRC  13:27
*** celebdor1 has joined #openstack-kuryr  13:30
*** yamamoto has quit IRC  13:44
*** gcheresh has quit IRC  14:19
*** hongbin has joined #openstack-kuryr  14:21
*** maysamacedos has quit IRC  14:29
*** maysamacedos has joined #openstack-kuryr  14:30
*** yamamoto has joined #openstack-kuryr  14:45
*** yamamoto has quit IRC  14:51
*** yamamoto has joined #openstack-kuryr  14:56
*** yamamoto has quit IRC  15:01
*** celebdor1 has quit IRC  15:01
*** pcaruana has quit IRC  15:05
*** janki has quit IRC  15:37
*** celebdor1 has joined #openstack-kuryr  15:44
*** celebdor1 has quit IRC  15:50
*** celebdor1 has joined #openstack-kuryr  15:52
*** yamamoto has joined #openstack-kuryr  15:58
*** yamamoto has quit IRC  16:03
*** maysamacedos has quit IRC  16:23
*** yboaron has joined #openstack-kuryr  16:33
*** yamamoto has joined #openstack-kuryr  16:50
*** yamamoto has quit IRC  17:00
<openstackgerrit> Merged openstack/kuryr-kubernetes master: Fix precreated ports recovery after pod annotations change  https://review.openstack.org/572118  17:01
*** celebdor1 has quit IRC  17:03
*** maysamacedos has joined #openstack-kuryr  17:10
*** isssp has joined #openstack-kuryr  17:35
*** ispp has quit IRC  17:40
*** yamamoto has joined #openstack-kuryr  17:56
*** yamamoto has quit IRC  18:02
*** CrayZee has quit IRC  18:15
*** yamamoto has joined #openstack-kuryr  18:58
*** yamamoto has quit IRC  19:03
*** kzaitsev_pi has quit IRC  19:29
*** kzaitsev_pi has joined #openstack-kuryr  19:30
*** janki has joined #openstack-kuryr  19:32
*** yamamoto has joined #openstack-kuryr  19:59
*** yamamoto has quit IRC  20:05
*** yboaron has quit IRC  20:05
*** maysamacedos has quit IRC  20:36
*** jerms has left #openstack-kuryr  20:43
*** yamamoto has joined #openstack-kuryr  21:01
*** yamamoto has quit IRC  21:05
*** celebdor1 has joined #openstack-kuryr  21:06
*** rh-jelabarre has quit IRC  21:12
*** janki has quit IRC  21:23
*** celebdor1 has quit IRC  21:43
*** yamamoto has joined #openstack-kuryr  22:02
*** atoth has quit IRC  22:03
*** yamamoto has quit IRC  22:06
*** yamamoto has joined #openstack-kuryr  23:03
*** garyloug has quit IRC  23:05
*** yamamoto has quit IRC  23:07
*** threestrands has joined #openstack-kuryr  23:08
*** hongbin has quit IRC  23:27
