Monday, 2020-01-13

00:38 *** threestrands has joined #openstack-glance
00:58 *** Liang__ has joined #openstack-glance
01:15 *** mvkr has joined #openstack-glance
01:43 *** threestrands has quit IRC
02:45 *** awalende has joined #openstack-glance
02:49 *** awalende has quit IRC
03:10 *** threestrands has joined #openstack-glance
03:11 *** threestrands has quit IRC
03:11 *** threestrands has joined #openstack-glance
04:03 *** davee__ has joined #openstack-glance
04:05 *** davee_ has quit IRC
04:05 *** davee___ has quit IRC
04:06 *** davee_ has joined #openstack-glance
04:11 *** bhagyashris has joined #openstack-glance
04:42 *** bhagyashris has quit IRC
05:04 *** bhagyashris has joined #openstack-glance
05:18 *** awalende has joined #openstack-glance
05:22 *** awalende has quit IRC
05:33 *** evrardjp has quit IRC
05:34 *** evrardjp has joined #openstack-glance
05:38 *** threestrands has quit IRC
05:58 *** ratailor has joined #openstack-glance
06:09 *** udesale has joined #openstack-glance
06:28 *** udesale is now known as udesale_
06:29 *** udesale_ is now known as udesale
06:35 *** Luzi has joined #openstack-glance
06:41 *** ratailor_ has joined #openstack-glance
06:42 *** ratailor has quit IRC
06:46 *** ratailor__ has joined #openstack-glance
06:49 *** ratailor_ has quit IRC
07:01 *** awalende has joined #openstack-glance
07:02 *** jawad-axd has joined #openstack-glance
07:03 *** awalende_ has joined #openstack-glance
07:05 *** awalende has quit IRC
07:06 *** jawad-axd has quit IRC
07:07 *** jawad-axd has joined #openstack-glance
07:10 *** jawad-ax_ has joined #openstack-glance
07:12 *** jawad-axd has quit IRC
07:13 *** rcernin has quit IRC
07:24 *** brinzhang has joined #openstack-glance
07:36 *** brinzhang_ has joined #openstack-glance
07:39 *** brinzhang has quit IRC
07:48 *** brinzhang has joined #openstack-glance
07:51 *** brinzhang_ has quit IRC
07:58 *** bhagyashris has quit IRC
08:02 *** trident has joined #openstack-glance
08:10 *** brinzhang_ has joined #openstack-glance
08:13 *** brinzhang has quit IRC
08:21 *** tesseract has joined #openstack-glance
08:22 *** tosky has joined #openstack-glance
09:05 *** bhagyashris has joined #openstack-glance
09:32 *** brinzhang_ has quit IRC
09:33 *** Liang__ is now known as Liangfang
09:33 *** Liangfang is now known as LiangFang
09:33 *** brinzhang_ has joined #openstack-glance
09:41 *** brinzhang has joined #openstack-glance
09:45 *** brinzhang_ has quit IRC
09:49 *** pcaruana has joined #openstack-glance
09:51 *** brinzhang_ has joined #openstack-glance
09:55 *** brinzhang has quit IRC
09:56 *** pcaruana has quit IRC
09:59 *** ratailor has joined #openstack-glance
09:59 *** LiangFang has quit IRC
10:01 *** ratailor__ has quit IRC
10:02 *** brinzhang has joined #openstack-glance
10:06 *** brinzhang_ has quit IRC
10:07 *** jawad-ax_ has quit IRC
10:17 *** ratailor_ has joined #openstack-glance
10:19 *** ratailor has quit IRC
10:21 *** brinzhang_ has joined #openstack-glance
10:24 *** brinzhang has quit IRC
10:30 *** brinzhang has joined #openstack-glance
10:33 *** brinzhang_ has quit IRC
10:40 *** brinzhang_ has joined #openstack-glance
10:43 *** brinzhang has quit IRC
10:49 *** jawad-axd has joined #openstack-glance
10:53 *** jawad-axd has quit IRC
11:11 *** rcernin has joined #openstack-glance
11:14 *** jawad-axd has joined #openstack-glance
11:30 *** brinzhang has joined #openstack-glance
11:34 *** brinzhang_ has quit IRC
11:39 *** brinzhang_ has joined #openstack-glance
11:42 *** brinzhang has quit IRC
11:50 *** bhagyashris has quit IRC
11:52 *** brinzhang_ has quit IRC
11:52 *** brinzhang_ has joined #openstack-glance
11:54 *** brinzhang has joined #openstack-glance
11:58 *** brinzhang_ has quit IRC
12:02 *** bhagyashris has joined #openstack-glance
12:06 *** brinzhang has quit IRC
12:07 *** brinzhang has joined #openstack-glance
12:16 *** zzzeek has quit IRC
12:17 *** zzzeek has joined #openstack-glance
12:23 *** udesale has quit IRC
12:23 *** abhishekk has quit IRC
12:24 *** abhishekk has joined #openstack-glance
12:38 *** brinzhang_ has joined #openstack-glance
12:40 *** brinzhang_ has quit IRC
12:41 *** brinzhang_ has joined #openstack-glance
12:42 *** brinzhang has quit IRC
12:42 *** brinzhang_ has quit IRC
12:43 *** brinzhang_ has joined #openstack-glance
12:44 *** brinzhang_ has quit IRC
12:45 *** brinzhang has joined #openstack-glance
12:46 *** brinzhang has quit IRC
12:47 *** brinzhang has joined #openstack-glance
13:20 *** brinzhang_ has joined #openstack-glance
13:21 *** brinzhang_ has quit IRC
13:21 *** brinzhang_ has joined #openstack-glance
13:22 *** brinzhang_ has quit IRC
13:23 *** brinzhang_ has joined #openstack-glance
13:23 *** brinzhang has quit IRC
13:24 *** brinzhang_ has quit IRC
13:24 *** brinzhang_ has joined #openstack-glance
13:35 *** ratailor_ has quit IRC
13:37 *** brinzhang has joined #openstack-glance
13:40 *** brinzhang_ has quit IRC
13:42 *** brinzhang_ has joined #openstack-glance
13:45 *** brinzhang has quit IRC
13:53 *** rcernin has quit IRC
14:01 *** brinzhang has joined #openstack-glance
14:03 *** brinzhang has quit IRC
14:03 *** brinzhang has joined #openstack-glance
14:04 *** brinzhang_ has quit IRC
14:12 *** Luzi has quit IRC
14:22 *** zaneb has joined #openstack-glance
14:31 *** brinzhang_ has joined #openstack-glance
14:34 *** brinzhang has quit IRC
14:34 *** pcaruana has joined #openstack-glance
14:37 *** brinzhang has joined #openstack-glance
14:40 *** brinzhang_ has quit IRC
14:51 *** bhagyashris has quit IRC
15:12 *** lpetrut has joined #openstack-glance
15:22 *** awalende_ has quit IRC
15:23 *** awalende has joined #openstack-glance
15:27 *** awalende_ has joined #openstack-glance
15:27 *** jawad-axd has quit IRC
15:27 *** awalende has quit IRC
15:31 *** awalende_ has quit IRC
16:12 *** pcaruana has quit IRC
16:15 *** gyee has joined #openstack-glance
16:51 *** jmlowe has quit IRC
16:52 *** jmlowe has joined #openstack-glance
17:13 *** pcaruana has joined #openstack-glance
17:20 *** nicolasbock has joined #openstack-glance
17:33 *** jawad-axd has joined #openstack-glance
17:33 *** evrardjp has quit IRC
17:34 *** evrardjp has joined #openstack-glance
17:54 *** lpetrut has quit IRC
18:04 *** jawad-axd has quit IRC
18:17 *** lpetrut has joined #openstack-glance
18:28 *** lpetrut has quit IRC
18:45 *** jawad-axd has joined #openstack-glance
18:53 *** tesseract has quit IRC
19:00 *** jawad-axd has quit IRC
19:23 *** awalende has joined #openstack-glance
19:28 *** awalende has quit IRC
21:42 <jokke_> rosmaita: around?
21:42 <rosmaita> hello
21:44 <jokke_> rosmaita: hey, thanks for the review. I was writing a response for it and started going around in circles; I realized I still don't understand what/why you are trying to achieve.
21:45 <jokke_> I had one mistake there, so there isn't even an HTTPInvalid; the Invalid is translated a few lines down into a BadRequest (which indeed probably isn't the best response for that)
21:46 <rosmaita> ok, we should all take a look at the response codes again on that
21:47 <jokke_> which is irrelevant as long as we're not in agreement on how we address the horrible behaviour underneath it
21:47 <rosmaita> well, i think there are 2 things we want to accomplish
21:48 <rosmaita> (1) avoid the race condition on our side that would re-insert the location that is being deleted
21:48 <rosmaita> (which your patch does)
21:48 <rosmaita> and
21:48 <rosmaita> (2) have the API behave well
21:49 <rosmaita> which i think your patch is not doing, because a user could get a 204 for two separate delete requests
21:50 <rosmaita> catching the glance_store.NotFound in location.pop() just feels icky to me, but I guess i could go along with it if everyone else agrees it's ok
21:50 <rosmaita> as long as we can address (2) also
21:51 <jokke_> I think what you were proposing is not the greatest API behaviour either, where we claim the asset doesn't exist but a subsequent request might claim it does again
21:53 <rosmaita> yeah, but i think that's a bigger issue -- we're going to have that if someone does DELETE v2/stores/sid/iid and GET v2/images/iid almost at the same time
21:53 <jokke_> we could address that by passing the image save function to the location code and calling it (saving the database state before trying to delete the resource from the store), and if the store fails we could put it back there like the location failure case does now. This would just be a horrible mess and really difficult logic to follow
21:54 <rosmaita> yeah, so i think we should not do that
21:55 <jokke_> and all that just because of our horrible onion proxy system that is causing us not to keep our db up to date
21:57 <rosmaita> right, so i don't think we can solve that problem for all affected calls ATM ... but i do think it is weird behavior for the API to return 204 for multiple calls of the same DELETE request
21:57 <rosmaita> that's all i'm saying
22:00 <jokke_> one thing we could do is to raise store.exception.NotFound from the location (without re-adding the location back to the list) and return 410 Gone from the API call (after saving the deletion to the image metadata)
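
A minimal sketch of the behaviour being proposed here, assuming hypothetical names (StoreLocations, delete_from_backend) rather than the actual glance/location.py proxy code: when the backend reports the data as already missing, the location stays removed and the exception propagates so the API layer can answer 410, while any other store failure puts the location back.

    import glance_store.exceptions as store_exc

    def delete_from_backend(url):
        # stand-in for the real store delete call; the real one may raise
        # store_exc.NotFound when the data is already missing
        pass

    class StoreLocations(object):
        """Toy stand-in for the image locations proxy being discussed."""

        def __init__(self, image, locations):
            self.image = image
            self.value = list(locations)

        def pop(self):
            location = self.value.pop()
            try:
                delete_from_backend(location['url'])
            except store_exc.NotFound:
                # data already gone: keep the location removed and re-raise
                # so the caller can save the image and return 410 Gone
                raise
            except Exception:
                # any other store failure: restore the location so the image
                # record stays consistent with the backend
                self.value.append(location)
                raise
            return location

Only the NotFound branch leaves the location removed; everything else rolls back, which is what closes the window where a half-deleted location could be re-inserted.
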
22:00 *** vesper has joined #openstack-glance
22:01 <jokke_> I just think it feels even weirder to return a 4xx and yet modify the resource
22:01 *** vesper11 has quit IRC
22:01 <rosmaita> well, we wouldn't be modifying the resource in the thread that's returning the 4xx
22:02 <rosmaita> that would be handled by the "other" thread
22:02 <jokke_> we would
22:02 <jokke_> no, we would still be removing the location metadata
22:03 <rosmaita> no, the location.pop() puts it back, but as long as we don't call image.save(), we're not modifying anything
22:03 <jokke_> so someone issuing a DELETE to the stores endpoint where the data is not in the backend would get rid of the location record but receive a 410 instead of a 204, since the resource was not in a normal state
22:04 <jokke_> but that doesn't change the fact that we can never clean up the stores if the data is missing. We _must_ fix the location code for that
22:04 <jokke_> and all the GET calls would still be returning the store in the image's stores property
22:05 <jokke_> what I'm proposing is that we do save the image, but for your peace of mind we could return 410 instead of 204 in the case where the data is not there
22:06 <jokke_> and 410 would be a perfectly reasonable response for that, looking at the description on w3.org
22:06 <rosmaita> yes, i agree about that
22:06 <rosmaita> although
22:06 <jokke_> 10.4.11 410 Gone
22:07 <jokke_> "The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent."
22:07 <rosmaita> my concern is that we are absolutely positive that for all drivers, a glance_store.NotFound will only happen when the data doesn't exist -- couldn't it be an ephemeral response during cluster rebalancing or something?
22:07 <rosmaita> i mean "are we absolutely positive"
22:07 <jokke_> and I don't think there is anything in the HTTP spec that states there must be no changes to the resource when a 4xx is returned
22:09 <rosmaita> cdent was my go-to guy for that kind of question, but i believe he has burned out on openstack and is doing something else now
22:09 <rosmaita> (if you can imagine that)
22:09 <jokke_> naah, not possible ;(
22:11 <jokke_> It might be worth checking (and fixing), but there is one corner case that comes to mind now that you mention it (and if we do that, it's really really shitty of us)
22:12 <jokke_> do we return NotFound from the store if we can't reach the store endpoint, or do we return something sane?
22:12 <jokke_> say, if the swift or ceph is offline
22:13 <rosmaita> i think 404 is ok, but def not 410
22:13 <rosmaita> or we can return some kind of 500
22:13 <rosmaita> since it's a server-side problem
22:13 <jokke_> no, I mean currently
22:14 <jokke_> not what we would do if we were writing this now, and I mean glance_store.exception.NotFound, not HTTPNotFound
22:14 <rosmaita> gotcha
22:14 <rosmaita> i do not know, but hope it is a 5xx
22:14 <jokke_> glance_store does not return http exceptions at all
22:14 <jokke_> we just have overloaded some of the names
22:15 <jokke_> but my point is I hope we raise something sane to distinguish it, rather than the same error we would raise when the data is just missing
22:15 <jokke_> back in 5min
22:15 <rosmaita> right, and that's why i am worried about whether all drivers return g_s.NotFound under the same circumstances
22:16 <rosmaita> ok, i will be here for 45 more min
22:22 <rosmaita> jokke_: from looking at glance_store.exceptions, we should get something like this when the store is not available:
22:22 <rosmaita> class RemoteServiceUnavailable(GlanceStoreException):
22:22 <rosmaita>     message = _("Remote server where the image is present is unavailable.")
22:25 <rosmaita> though if i am grepping correctly, it's only raised in one place: _drivers/http.py:220
22:30 <jokke_> ok, so that's probably something we should look into addressing then :(
22:31 <jokke_> the code calling it can obviously handle that exception, so we just need to make sure we return that or something sane rather than exception.NotFound when the store is not reachable
22:31 <rosmaita> right
22:32 <jokke_> I knew I was missing something from your thoughts that just didn't translate from the review :D
22:32 <jokke_> thanks for taking the time with me to figure this out
22:34 <rosmaita> yeah, sorry about that
22:34 <rosmaita> looks like your code would translate a glance_store.RemoteServiceUnavailable into an HTTPInternalServerError, so that's cool
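
To illustrate the distinction jokke_ is asking for, a toy driver-level sketch (the client object and the two backend exception classes are made up; each real driver has its own error types): only a genuine "object does not exist" error from the backend should surface as glance_store's NotFound, while a connectivity failure should surface as RemoteServiceUnavailable so the API can answer with a 5xx instead of mistaking it for missing data.

    import glance_store.exceptions as store_exc

    class BackendUnreachable(Exception):
        """Made-up stand-in for a driver's connection error."""

    class ObjectMissing(Exception):
        """Made-up stand-in for a driver's 'no such object' error."""

    def delete_data(client, location):
        """Toy driver delete showing which backend errors should map to
        which glance_store exceptions (all names here are illustrative)."""
        try:
            client.delete(location)
        except ObjectMissing:
            # only a genuine "the data is not there" becomes NotFound
            raise store_exc.NotFound(image=location)
        except BackendUnreachable:
            # swift/ceph endpoint offline: report unavailability, not NotFound
            raise store_exc.RemoteServiceUnavailable()
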
22:35 <jokke_> so if I make sure our stores don't return exception.NotFound for backends we can't reach, would that 410 Gone approach sound reasonable to you?
22:35 *** brinzhang_ has joined #openstack-glance
22:37 <jokke_> that way we would be eliminating the race condition and would be able to recover from failed deletes or store corruption without a huge amount of refactoring or admin intervention
22:37 <rosmaita> give me a min to scroll back to that
22:39 *** brinzhang has quit IRC
22:40 <jokke_> let me re-phrase that for you:
22:40 <rosmaita> yes, please
22:40 <rosmaita> :)
22:41 <jokke_> if the store returns NotFound (that being either a second delete call while the first one is still doing the actual delete, or the longer-term situation where we don't have the data in the backend), we keep the location code as it's proposed in the current patch
22:42 <jokke_> but instead of returning 204 we save the image and return 410 Gone
22:43 <jokke_> and I'd fix the BadRequest that we currently return when the location is not in the image metadata
22:44 <jokke_> that would keep us from ever returning 204 more than once for the DELETE calls, and from having conflicting responses at the latest after the second DELETE
22:45 <jokke_> and it would allow the user to clear the locations by issuing the DELETE call if the data is not available due to corruption, an earlier crash during delete, etc.
22:48 <rosmaita> but if we keep the location code the same as proposed in the current patch, where it eats the NotFound, we can't tell in images.py that we should raise a 410 ?
22:49 *** tkajinam has joined #openstack-glance
22:49 <jokke_> oh sorry, yes: instead of returning the location, we will re-raise the NotFound, just not inject the location back into the list
22:49 <jokke_> so we keep the special case in location.py for the NotFound
22:50 <jokke_> and we revert on any other exception coming from the store
22:50 <rosmaita> gotcha
22:51 <jokke_> and if we get NotFound from location.pop(), we save the image and return 410 Gone
22:52 <jokke_> that will mean we only ever get one 204 and only one 410
22:52 <jokke_> and after that it behaves as if the location does not exist
22:53 <jokke_> even if the delete call is still in flight, it will eventually return the 204
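
Putting the agreed behaviour together, a hedged sketch of what the API-side handling might look like (webob-style; the handler name, the attribute names and the simplified pop() call are illustrative, not the actual glance/api/v2 code):

    import glance_store.exceptions as store_exc
    import webob.exc

    def delete_image_from_store(image, store_id):
        """Illustrative handler for DELETE /v2/stores/<store_id>/<image_id>."""
        try:
            # the locations proxy deletes the data and drops the location;
            # per the plan above, on store NotFound it keeps the location
            # dropped and re-raises instead of putting it back
            image.locations.pop()
        except store_exc.NotFound:
            # data already gone from the backend: persist the removal and
            # tell the client the resource is permanently gone
            image.save()
            raise webob.exc.HTTPGone(explanation="Image data not found in store")
        except store_exc.RemoteServiceUnavailable:
            # backend unreachable is not the same as the data being missing
            raise webob.exc.HTTPInternalServerError(
                explanation="Backend store is currently unreachable")
        image.save()
        # normal case: data deleted and metadata updated -> 204 No Content
        # (a DELETE for a location that is no longer in the metadata at all
        # would be rejected earlier, per the discussion, with a 404)
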
22:53 <rosmaita> ok, and it also addresses your concern that we need a way to delete-from-store when the image shows that data exists in a store, but the data has actually been deleted from the backend
22:53 <rosmaita> i think that sounds ok
22:53 <jokke_> exactly
22:54 <rosmaita> i believe i can live with that
22:54 <rosmaita> yeah, it sounds good
22:54 <jokke_> so timeline-wise this would be possible: 1) DELETE 2) DELETE 3) 410 Gone 4) GET (returns without the store in the list) 5) 204 from the first delete
22:55 <jokke_> or the same thing without 1) and 5)
22:55 <jokke_> in the data-missing case
22:57 <jokke_> but 1) DELETE 2) DELETE 3) DELETE, where 1) takes a long time, would return 4) 410 Gone 5) 404 NotFound 6) 204
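
Purely as an illustration of the data-missing timeline above, a client-side sketch using python-requests against a hypothetical deployment (the endpoint, the token handling, and the sid/iid placeholders and path shape are all assumptions taken from the log, not a real test):

    import requests

    GLANCE = "http://glance.example.com:9292/v2"   # made-up endpoint
    HEADERS = {"X-Auth-Token": "<token>"}          # auth details assumed
    url = GLANCE + "/stores/sid/iid"               # path shape as used in the log

    r1 = requests.delete(url, headers=HEADERS)                  # data already gone -> 410 Gone
    r2 = requests.get(GLANCE + "/images/iid", headers=HEADERS)  # 'sid' no longer in the stores property
    r3 = requests.delete(url, headers=HEADERS)                  # location no longer in metadata -> 404
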
22:57 *** rcernin has joined #openstack-glance
22:58 <jokke_> great, I'll get a new rev up tomorrow, and we kept within the 45 min you had :D
22:59 <rosmaita> you da man!
22:59 <rosmaita> thanks for working this out
22:59 <jokke_> thank you for pointing out the underlying problems and bearing with me :D
23:00 <jokke_> now, dinner time! TTYL
23:06 *** tosky has quit IRC
23:19 *** awalende has joined #openstack-glance
23:23 *** awalende has quit IRC
23:25 *** Liang__ has joined #openstack-glance
23:59 *** Liang__ has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!