Thursday, 2016-05-05

*** gb21 has quit IRC  00:55
*** buzztroll has quit IRC  00:56
*** buzztroll has joined #openstack-glance  01:03
*** mtanino has quit IRC  01:05
*** btully has joined #openstack-glance  01:05
*** buzztroll has quit IRC  01:05
*** gb21 has joined #openstack-glance  01:07
*** mine0901 has quit IRC  01:10
*** ozialien10 has quit IRC  01:12
*** sdake has joined #openstack-glance  01:22
*** gb21 has quit IRC  01:28
*** dims has quit IRC  01:30
*** dims has joined #openstack-glance  01:35
*** gb21 has joined #openstack-glance  01:43
*** julim has joined #openstack-glance  01:45
*** ducttape_ has quit IRC  02:05
*** sdake_ has joined #openstack-glance  02:08
*** sdake has quit IRC  02:11
*** ducttape_ has joined #openstack-glance  02:17
*** buzztroll has joined #openstack-glance  02:18
*** sdake_ has quit IRC  02:27
*** pushkaru has quit IRC  02:36
*** pushkaru has joined #openstack-glance  02:37
*** pushkaru has quit IRC  02:39
*** pumarani__ has joined #openstack-glance  02:39
*** pumarani__ has quit IRC  02:44
<openstackgerrit> wangxiyuan proposed openstack/glance: Remove DB downgrade  https://review.openstack.org/229220  02:45
*** pushkaru has joined #openstack-glance  02:46
*** buzztroll has quit IRC  02:47
*** ducttape_ has quit IRC  02:50
*** gb21 has quit IRC  02:52
*** pushkaru has quit IRC  02:53
*** cdelatte has quit IRC  03:03
*** gb21 has joined #openstack-glance  03:04
<nikhil> flwang: hello sir. around by any chance?  03:31
<flwang> nikhil: yes, boss  03:31
<nikhil> flwang: ah, so glad 😃  03:32
<flwang> nikhil: how can i help?  03:32
<nikhil> flwang: wanted to discuss the location metadata weight stuff if you've time now  03:32
<flwang> nikhil: oh, yes  03:32
<flwang> nikhil: i'm really happy you can shed some light on this corner  03:33
<nikhil> flwang: LOL  03:34
<nikhil> flwang: sorry about the delay, summit stuff has been crazy  03:34
<flwang> nikhil: i see. and i'm sorry i missed this one  03:35
<nikhil> and it's hard to find you online at the same time & when 5 people aren't pinging me at the same time 😃  03:35
<flwang> and we missed having a f2f again  03:35
<nikhil> flwang: indeed. I was thinking the same thing!  03:35
<nikhil> flwang: take care of your little one, that's more important now  03:36
*** pt_15 has quit IRC  03:36
<flwang> nikhil: sure thanks  03:36
<nikhil> to-be little one*  03:36
<flwang> nikhil: so what's the question for the location weight?  03:36
<nikhil> flwang: first, I'm trying to find the patch. is it just the python-glanceclient or is there a server side change as well?  03:37
<flwang> https://review.openstack.org/#/c/268865  03:38
<flwang> here you are  03:38
<flwang> no change for the client side  03:38
<nikhil> darn, I swear I had it open  03:39
<nikhil> but couldn't find it under your owner list :(  03:40
<nikhil> flwang: I see, so this was the old style lite-spec. I wasn't aware of that. I just glanced at the code last night when you linked it here..  03:41
<flwang> nikhil: yep, it's the old lite-spec  03:41
<flwang> anything else i need to do?  03:42
<nikhil> flwang: ah, it's Jake's so it doesn't appear in your filters. ok.  03:43
<flwang> based on the new style, seems we still need to submit a spec for this, right?  03:43
<flwang> nikhil: yes  03:43
<nikhil> flwang: no, we agreed to not ask for new specs for the ones that were already approved  03:43
<flwang> ok, cool  03:43
<nikhil> I can move that stuff if needed, but nothing from your end  03:43
<flwang> awesome  03:44
<nikhil> flwang: mind me asking if you are using locations like that? the backend stores are being passively hidden behind locations?  03:44
<flwang> nikhil: for now, we're not using cells  03:46
<flwang> and we're only using ceph for now, no swift in our env  03:46
<nikhil> flwang: gotcha. so you're mostly proxying stuff for Jake?  03:47
<nikhil> flwang: the reason I ask is I'm very skeptical when people talk about multiple locations and copy-on-write being done w/o glance's knowledge  03:48
<flwang> nikhil: yep, i think it's a good use case  03:48
<flwang> and given the community is moving to cells in a big way  03:48
<nikhil> flwang: and a bit scared about the way it's done too. there's no guarantee of data consistency and it makes the data ephemeral -- if you see what I'm saying  03:48
<flwang> so i'm pretty sure this could be a very useful feature for those deployments  03:49
<nikhil> ok, useful for sure. the 'how' is what's important here.. anyway I'm reading the bug closely again  03:49
<flwang> nikhil: it can improve the data transfer performance  03:50
<flwang> what do you mean by data consistency?  03:50
<nikhil> flwang: on the data transfer performance -- I've a book to write 😃  03:53
<flwang> nikhil: btw, except Glare, what's your focus in Newton for glance?  03:53
<nikhil> I mean it  03:53
<flwang> you can answer my above question later :)  03:54
<nikhil> what is _that_?  03:54
<flwang> s/that/what  03:54
<flwang> sorry  03:54
<nikhil> I am not supposed to have a focus so I don't have a focus  03:54
<flwang> wahaha  03:55
<nikhil> I am supposed to enable people to have a focus  03:55
<flwang> ok, then enable me first :)  03:55
<nikhil> flwang: I searched for it and "wahaha" is some song 😃  03:55
<flwang> i will translate 'wahaha' as 'lol'  03:56
<nikhil> flwang: are you aware of the location discussions we have been having?  03:56
<flwang> nikhil: not really, but i suppose it's related to security, which mike and flaper87 and i discussed a lot  03:57
<flwang> nikhil: is there an etherpad/link?  03:57
<nikhil> flwang: there's no link  03:59
<flwang> O:-)  04:00
<nikhil> flwang: can you help me understand the "ceph clone abilities" and "each instance booting will re-upload image to ceph"?  04:00
<nikhil> I am asking as people told me that you guys use ceph  04:00
<nikhil> feel free to say no  04:00
*** buzztroll has joined #openstack-glance  04:02
<flwang> i think the 'ceph clone abilities' Jake was talking about is the CoW of ceph  04:03
<flwang> to be more accurate  04:04
<flwang> ceph will create a copy of the image (in ceph it's a volume); the copy/clone is more like a link  04:04
<flwang> that said, ceph is not really creating a new copy, but a link  04:04
<flwang> so it can be very fast  04:05
*** buzztroll has quit IRC  04:05
<nikhil> I see, I still need to wrap my head around what exactly is happening. is this during snapshot, or is this CoW done live while the instance is running, or is the image directly used as a volume by the instance while it's in ceph.. 😃  04:07
<flwang> http://docs.ceph.com/docs/master/rbd/rbd-snapshot/#layering  04:08
<nikhil> flwang: thanks for the link 😃 it also means I take one more day to internalize everything!  04:09
<flwang> based on my understanding, it's a 'clone' based on a snapshot  04:09
<flwang> nikhil: sorry, i think a doc link can be clearer since i'm not really confident enough to be a storage expert :)  04:10
<nikhil> flwang: makes sense. It would be nice to get these answers from Jake too, unless you know him personally and are proxying 😃  04:10
<nikhil> flwang: np sire! the way I figured at the summit was a bunch of people are hacking around glance, as ceph+glance can be a bad combination for such redundant uploads  04:11
<nikhil> flwang: so, let me ask the same questions of Jake so that he can answer what they are doing in their deployment precisely and we both are not guessing it. but thanks for the link and I will read it.  04:11
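[Editor: the clone-from-snapshot behavior flwang describes is documented in the ceph rbd layering page linked above. A toy in-memory Python model — purely illustrative, not ceph itself — shows why a clone is "mostly like a link": it only references the parent snapshot until a block is written.]

```python
# Toy model of RBD-style copy-on-write layering (illustrative only;
# real ceph clones are made with `rbd snap create/protect` + `rbd clone`).

class Snapshot:
    """An immutable, protected view of an image's blocks."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block index -> bytes

class Clone:
    """A CoW child: reads fall through to the parent snapshot
    until a block is overwritten locally."""
    def __init__(self, parent):
        self.parent = parent
        self.local = {}  # holds only blocks written after cloning

    def read(self, idx):
        return self.local.get(idx, self.parent.blocks.get(idx))

    def write(self, idx, data):
        self.local[idx] = data  # "copy-up" happens only on write

base = Snapshot({0: b"kernel", 1: b"rootfs"})
vm_disk = Clone(base)                # "instant" clone: no data copied
assert vm_disk.read(1) == b"rootfs"  # served from the parent snapshot
vm_disk.write(1, b"rootfs-v2")       # only this block diverges
assert vm_disk.read(1) == b"rootfs-v2"
assert base.blocks[1] == b"rootfs"   # parent snapshot is untouched
```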
<nikhil> flwang: btw, I was curious why this can't be done outside of the server?  04:12
<flwang> outside glance?  04:12
<nikhil> flwang: to be more precise, you add a config option here (line 29) https://review.openstack.org/#/c/268865/6/glance/common/location_strategy/metadata_weight.py  04:13
<nikhil> flwang: I am curious if that is meant to standardize the metadata key for locations?  04:13
<nikhil> flwang: why can't each deployer specify their metadata keys as they wish  04:14
<nikhil> flwang: and the computation of the location weights be done on the client side  04:14
<nikhil> if the locations are known then we can safely assume the operator has exposed them already  04:15
<flwang> nikhil: then you have to define 'client'  04:16
<nikhil> flwang: for now, I am assuming the virt driver is the client  04:17
<flwang> generally, a user just wants to use horizon or the nova cli to create a new instance  04:17
<flwang> instead of a private python script talking to the nova client to do it. i think that's the normal case  04:17
*** sdake has joined #openstack-glance  04:18
<nikhil> flwang: right, but the proposal is for the instance to know the location to boot from. so they are just asking for cell-level computes to know this information, and the locations are opaque from the user's perspective.  04:19
<nikhil> here -> "If this is made possible, each cell can return the closest location for each glance image, and also do copy-on-write for local stores like RBD."  04:19
<nikhil> I am getting more and more convinced that this should live in the RBD (libvirt) driver if they want it  04:20
<flwang> yep, so you mean this could be done in nova?  04:20
<nikhil> flwang: exactly  04:20
<flwang> or at a lower level?  04:20
<flwang> it's not only related to RBD in this case  04:21
<flwang> if you have several different backends  04:21
<nikhil> flwang: because all the assumptions are that they want to use ceph, or they want to use swift+ceph, etc. It looks like a corner case scenario for the specific virt drivers they are using - if that made sense.  04:21
<flwang> this feature can help get the one you want  04:21
<nikhil> sure, but how many store types do you think you would want to optimize this way  04:22
<nikhil> there's one more complexity to this use case  04:22
<flwang> nikhil: at least 2 i think  04:22
<nikhil> how are these images getting synced  04:22
<nikhil> flwang: which ones?  04:22
<flwang> nikhil: that's the question for all the multi-location scenarios  04:22
<nikhil> correct  04:23
<flwang> nikhil: not only for this  04:23
<nikhil> and it'd be left to the operator to determine judiciously if they need to use multiple locations  04:23
<nikhil> So, the way I look at it is:  04:24
<nikhil> multiple locations are an advanced configuration scenario  04:24
<nikhil> more so designed to optimize the data pipe in specific cases  04:24
<nikhil> like disaster recovery  04:24
<nikhil> (includes data replication, data preservation, etc)  04:24
*** buzztroll has joined #openstack-glance  04:25
<nikhil> and for operator-defined local storages  04:25
<nikhil> for example  04:25
*** btully has quit IRC  04:26
<nikhil> for each compute node in a cell, one dedicated glance-api is deployed for faster boot times  04:26
<nikhil> what stores are being used then?  04:26
<nikhil> swift, ceph, filesystem?  04:26
<nikhil> is the glance cache going to be the faster way to boot  04:26
<nikhil> or is the operator going to run some out-of-band job to pull data from safer storage like swift to a local filesystem store  04:27
<nikhil> or in Jake's case they use ceph  04:27
<nikhil> in each of these cases, operators are going to great lengths to optimize their data transfer and know what virt drivers they want to use  04:27
<flwang> Jake is the glance operator in this case ;)  04:28
<nikhil> so, I want to safely assume that they will know where the location weight is going to be used to pull from the closest location  04:28
<nikhil> where == which part of nova  04:28
<flwang> nikhil: hmm...  04:28
<flwang> nikhil: i won't say it's not correct  04:29
<flwang> since both glance and nova are part of the booting process  04:29
<flwang> technically, you do try to get the best image on either side  04:30
*** btully has joined #openstack-glance  04:30
<flwang> but personally, i think the location strategy is meant for this kind of scenario  04:30
<flwang> that said, i can't see why we can't do this in glance  04:30
<flwang> does that make any sense?  04:30
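[Editor: the strategy under review amounts to sorting an image's locations by a numeric weight stored in each location's metadata. A hedged sketch follows — the function and key names (`order_by_metadata_weight`, `'weight'`) are assumptions for illustration, not the reviewed patch verbatim.]

```python
# Hypothetical sketch of a metadata-weight location strategy, loosely
# modeled on glance's pluggable location_strategy interface.

DEFAULT_WEIGHT = 0
WEIGHT_KEY = 'weight'  # assumed operator-configurable metadata key

def order_by_metadata_weight(locations):
    """Return locations sorted by descending metadata weight, so the
    'closest' (highest-weight) location is tried first."""
    def weight_of(location):
        meta = location.get('metadata') or {}
        try:
            return int(meta.get(WEIGHT_KEY, DEFAULT_WEIGHT))
        except (TypeError, ValueError):
            return DEFAULT_WEIGHT  # unparseable weights sort with the default

    return sorted(locations, key=weight_of, reverse=True)

locs = [
    {'url': 'swift://remote/img', 'metadata': {'weight': 10}},
    {'url': 'rbd://local-cell/img', 'metadata': {'weight': 100}},
    {'url': 'http://mirror/img', 'metadata': {}},
]
ordered = order_by_metadata_weight(locs)
assert ordered[0]['url'] == 'rbd://local-cell/img'
```

The point of contention above is only *where* this sort runs: in glance-api when it returns locations, or on the consumer (virt driver) side.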
*** buzztroll has quit IRC  04:31
<nikhil> flwang: to be fair, multiple locations were introduced in glance for faster boot times to help _operators_ (as per the original use case)  04:31
<nikhil> flwang: but the management of the locations for that wasn't supposed to be part of glance  04:32
<nikhil> flwang: my problem is that we do not want to assume all locations are correct  04:32
<nikhil> flwang: because they can be out of sync  04:32
<nikhil> flwang: and without glance's knowledge  04:32
<nikhil> flwang: so any assumption that leads glance to believe all locations are equal in terms of data consistency is false  04:33
<nikhil> and I proposed that to the team during the summit and got no push back (yet)  04:33
<flwang> sorry? what do you mean by 'got no push back'?  04:34
<nikhil> moreover, any location that the user/operator sets without the data going through glance may have a different checksum  04:34
<nikhil> flwang: sorry, actually scratch that. I just meant the point was clear and people seemed to agree and we ran out of time.  04:35
<nikhil> flwang: a bit irrelevant to our discussion but I was trying to be explicit about it  04:35
<nikhil> I hope my points are clear.  04:36
<flwang> nikhil: i see. but as i said above, the image sync issue is not related to this feature, IMHO  04:37
<flwang> since it's a general issue for multiple locations  04:37
*** buzztroll has joined #openstack-glance  04:37
<flwang> which flaper87 and i have discussed in a security bug  04:37
<flwang> and yes, i agree it would be nice if glance could check the checksum for each location  04:38
<flwang> but it's hard to do that for some locations, like http  04:38
<flwang> you have to download it :(  04:38
<nikhil> 😃  04:38
<nikhil> flwang: out of sync is related to the bug. how can you determine the weight of a location if the different locations are not guaranteed to be giving the same data from the point of view of glance.  04:40
<nikhil> flwang: whereas the locations are being updated by nova/virt drivers, so that information is local to them. they are in a much better position to say what's correct.  04:41
<flwang> nikhil: no, my point is that out of sync is a general issue; even without this feature, we still have the issue, right?  04:41
<flwang> nikhil: i can work on the multi-location sync issue if that's your concern :)  04:42
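[Editor: verifying that every location still serves the image's recorded checksum would, as flwang notes, require pulling the bytes. A hedged sketch of such a streaming check follows; the reader callback shape is illustrative, not glance's actual store API.]

```python
import hashlib
import io

def location_matches_checksum(read_chunk, expected_md5, chunk_size=64 * 1024):
    """Stream a location's bytes through md5 (glance's historical image
    checksum algorithm) and compare against the recorded value. For an
    http location this is effectively a full download of the image."""
    digest = hashlib.md5()
    while True:
        chunk = read_chunk(chunk_size)
        if not chunk:
            break
        digest.update(chunk)
    return digest.hexdigest() == expected_md5

# Usage with an in-memory stand-in for a remote location:
payload = b"image-bytes" * 1000
reader = io.BytesIO(payload).read
assert location_matches_checksum(reader, hashlib.md5(payload).hexdigest())
```

This is why a per-location consistency check was considered expensive: the cost is proportional to the image size for every location verified.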
<flwang> seems you're trying to figure out a focus for me in newton ;D  04:42
<nikhil> flwang: haha, indeed 😃  04:44
<nikhil> flwang: but we've a ton of work ahead of us man!  04:44
<nikhil> flwang: we need your help with the quota stuff out there in the delimiter library.  04:45
<flwang> nikhil: so what's the dirty job i can help with to make you happy?  04:45
<nikhil> LOL  04:45
<flwang> nikhil: ah, i see. i'm happy to be counted in  04:45
<nikhil> dirty job, why would you want a dirty job :P  04:45
<nikhil> flwang: LOL. you're always in. you just disappeared after a bit  04:45
<flwang> nikhil: because a dirty job can always make the boss happy :)  04:46
<nikhil> flwang: nina's case was super important so we had to figure out the right way to do things!  04:46
<flwang> nikhil: i just need to get some exciting job, TBH  04:46
<nikhil> flwang: LOL, who's the boss here! we're all serving OpenStack 😃  04:46
<flwang> nikhil: so what's the conclusion for glance quota?  04:47
<flwang> do nested quota as the first step?  04:47
*** sdake has quit IRC  04:47
<flwang> or go for delimiter and then support it in glance?  04:47
*** sdake has joined #openstack-glance  04:48
<nikhil> flwang: no, what we need is a way for all the quota types to exist from step #1 so that we can avoid migrations  04:48
<nikhil> I was able to identify 4 types of quotas in openstack as of today  04:48
<flwang> so nested quota from day 1?  04:49
*** buzztroll has quit IRC  04:49
<nikhil> not necessarily  04:49
<nikhil> if we adopt delimiter (when ready), it will allow the operator to choose simple or nested or floating etc. quotas for their tables  04:50
<nikhil> so if we adopt the library, the operator can choose which type they want to have in their deployment  04:50
<flwang> i see, but the question is when it will be ready  04:51
<flwang> and what should we do now? especially in newton for glance  04:51
<nikhil> and for that we all need to pitch in 😃  04:51
<nikhil> ok, for newton in glance we still have a ton of work  04:51
<nikhil> depends on what you are looking for  04:51
<nikhil> 1. if you are interested in quota, delimiter is the most exciting path. small team, lots to do, get to know different cases.  04:52
<nikhil> 2. there's import that stuart is working on  04:52
<nikhil> 3. mike is working on nova  04:52
<nikhil> 4. flavio is working on microversions  04:52
<nikhil> 5. brian is working on docs and rolling upgrades  04:53
<nikhil> 6. hemanth is working on config improvements  04:53
<nikhil> and then we've glare!  04:53
<flwang> nikhil: cool, seems quota is the one i can take  04:54
<nikhil> oh yes, there's community/inherited images support too  04:54
<nikhil> #8 😃  04:54
<flwang> Community image visibility      Su Zhang  04:55
<nikhil> we need reviews on all of these, help (depending on how you can collaborate) on all of these!  04:55
<nikhil> timothy from symantec has taken over that spec  04:55
<flwang> ok, cool  04:55
<nikhil> we are going to discuss that tomorrow during the meeting  04:55
<flwang> nikhil: so the meeting time is still 14:00 UTC, right?  04:56
<nikhil> so, there are a ton of things to do and people won't be able to get them done themselves in one cycle!  04:56
<nikhil> flwang: yes  04:56
<flwang> :(  04:56
<flwang> 2:00am for NZ  04:56
<nikhil> there is some metadefs stuff that we need to fix  04:56
<flwang> so as for quota, we have decided to embrace delimiter, so that means until it's ready, we basically can't do anything in glance, right?  04:57
*** tshefi has joined #openstack-glance  04:57
<nikhil> flwang: what does your work timing look like?  04:57
<nikhil> flwang: yeah :/  04:57
<flwang> nikhil: ok, i see  04:57
<nikhil> flwang: are you against working on non-incubated projects?  04:57
<flwang> no  04:58
<nikhil> ok  04:58
<flwang> i just want to figure out anything i can do in glance  04:58
<nikhil> flwang: just curious about your interest in that library  04:58
<nikhil> flwang: my feeling is that we need some expertise in that lib for us to adopt it in glance  04:58
<flwang> nikhil: i'm interested in anything that can make glance better :)  04:58
<nikhil> I can do that if no one else is interested  04:59
<nikhil> but then I'm not supposed to do that, and instead let people focus on stuff! :)  04:59
<nikhil> flwang: haha, that's a great spirit 😃  04:59
<flwang> ok, feel free to take it if you want  04:59
<nikhil> LOL  04:59
<flwang> then i will find another part i can grab  04:59
<flwang> sorry, man, i have to run to pick up my boy  05:00
<nikhil> flwang: I was saying that you'd take it, else I will have to 😃  05:00
<flwang> i will catch up with you later as for the focus  05:00
<nikhil> flwang: ok, np. take care. I just meant feel free to pick up what you want/need. we can figure out the logistics later.  05:00
*** pdeore has joined #openstack-glance  05:01
<nikhil> ciao  05:01
<flwang> quota is on my plate now, see you later  05:01
*** gb21 has quit IRC  05:01
*** janonymous has joined #openstack-glance  05:38
*** buzztroll has joined #openstack-glance  05:48
<janonymous> the ChangeLog in https://github.com/openstack/glance/blob/master/MANIFEST.in#L1 and https://github.com/openstack/glance/blob/master/MANIFEST.in#L8 mean the same thing, right?  05:48
*** mosulica has joined #openstack-glance  06:00
*** openstackgerrit has quit IRC  06:03
*** openstackgerrit has joined #openstack-glance  06:03
*** mingdang1 has joined #openstack-glance  06:10
*** e0ne has joined #openstack-glance  06:32
*** sdake has quit IRC  06:32
*** e0ne has quit IRC  06:43
*** tesseract has joined #openstack-glance  06:44
*** tesseract is now known as Guest21288  06:45
*** mingdang1 has quit IRC  07:01
*** btully has quit IRC  07:34
*** mvk_ has quit IRC  07:44
*** dmk0202 has joined #openstack-glance  07:57
*** btully has joined #openstack-glance  07:58
*** btully has quit IRC  08:03
*** dmk0202 has quit IRC  08:03
*** e0ne has joined #openstack-glance  08:06
*** dmk0202 has joined #openstack-glance  08:11
*** dshakhray has joined #openstack-glance  08:15
*** jistr has joined #openstack-glance  08:34
*** mvk_ has joined #openstack-glance  08:39
*** buzztroll has quit IRC  08:45
*** e0ne has quit IRC  08:46
*** e0ne has joined #openstack-glance  08:49
*** sdake has joined #openstack-glance  09:24
*** buzztroll has joined #openstack-glance  09:41
*** buzztroll has quit IRC  09:46
*** e0ne has quit IRC  09:48
*** e0ne has joined #openstack-glance  09:49
*** e0ne has quit IRC  10:05
*** groen692 has joined #openstack-glance  10:14
*** sdake has quit IRC  10:29
*** MattMan has quit IRC  10:39
*** MattMan has joined #openstack-glance  10:40
*** groen692 has quit IRC  10:41
*** e0ne has joined #openstack-glance  10:51
*** links has joined #openstack-glance  11:05
*** buzztroll has joined #openstack-glance  11:30
*** buzztroll has quit IRC  11:34
*** pdeore has quit IRC  11:40
*** smatzek has joined #openstack-glance  11:45
*** someara2 has joined #openstack-glance  11:53
*** links has quit IRC  11:53
*** someara2_ has joined #openstack-glance  11:57
*** someara2 has quit IRC  12:01
*** ekarlso has quit IRC  12:06
*** ekarlso has joined #openstack-glance  12:06
*** MVenesio has joined #openstack-glance  12:06
*** ducttape_ has joined #openstack-glance  12:07
<openstackgerrit> Bhagyashri Shewale proposed openstack/python-glanceclient: WIP:Fix 'UnicodeEncodeError' for unicode values in url  https://review.openstack.org/312915  12:08
*** btully has joined #openstack-glance  12:10
*** mingdang1 has joined #openstack-glance  12:20
*** ducttape_ has quit IRC  12:24
*** burgerk has joined #openstack-glance  12:33
*** someara2_ has left #openstack-glance  12:33
*** ninag has joined #openstack-glance  12:37
*** bapalm has joined #openstack-glance  12:54
*** tsymanczyk has joined #openstack-glance  13:02
*** ducttape_ has joined #openstack-glance  13:11
*** buzztroll has joined #openstack-glance  13:18
*** buzztroll has quit IRC  13:22
*** ducttape_ has quit IRC  13:36
*** mtanino has joined #openstack-glance  13:41
*** ducttape_ has joined #openstack-glance  13:44
*** sigmavirus24_awa is now known as sigmavirus24  13:45
*** openstackgerrit has quit IRC  13:47
*** openstackgerrit has joined #openstack-glance  13:47
*** tshefi has quit IRC  13:50
*** ametts has joined #openstack-glance  13:55
<openstackgerrit> Stuart McLaren proposed openstack/glance: [WIP] Add image import schema / schema validation  https://review.openstack.org/312972  13:58
<nikhil> Courtesy meeting reminder: ativelkov, cpallares, flaper87, flwang1, hemanthm, jokke_, kragniz, lakshmiS, mclaren, mfedosin, nikhil_k, Nikolay_St, Olena, pennerc, rosmaita, sigmavirus24, sabari, TravT, ajayaa, GB21, bpoulos, harshs, abhishek, bunting, dshakhray, wxy, dhellmann, kairat  13:59
*** MVenesio has quit IRC  14:00
*** pushkaru has joined #openstack-glance  14:03
*** links has joined #openstack-glance  14:07
*** burgerk has quit IRC  14:12
*** links has quit IRC  14:16
*** MVenesio has joined #openstack-glance  14:17
*** jaypipes has joined #openstack-glance  14:25
*** mingdang1 has quit IRC  14:26
*** krotscheck has quit IRC  14:27
*** jistr has quit IRC  14:29
*** krotscheck has joined #openstack-glance  14:33
*** djkonro has joined #openstack-glance  14:44
*** cdelatte has joined #openstack-glance  14:48
*** sdake has joined #openstack-glance  14:50
*** jistr has joined #openstack-glance  14:51
*** ayoung has quit IRC  14:59
*** mclaren has joined #openstack-glance  15:01
<nikhil> rosmaita: did you have more input on the location strategy stuff?  15:01
<nikhil> sorry we ran out of time there  15:01
<rosmaita> i'll have to look at that patch  15:01
<nikhil> ok  15:01
<rosmaita> i thought zhi had designed it so that you could plug in your own strategy  15:01
<nikhil> mclaren: o/  15:02
<rosmaita> "you" == deployer  15:02
*** burgerk has joined #openstack-glance  15:02
<mclaren> hey  15:03
<nikhil> we're just chatting on locations  15:03
<nikhil> rosmaita: yeah, you're right! it's in glance/common/location_strategy  15:04
<nikhil> I've never had a chance to touch that code  15:04
<nikhil> who uses it!?  15:04
<rosmaita> i think there are 2 options by default  15:04
<rosmaita> zhi had a use case when he was at ibm  15:04
<rosmaita> not by default  15:05
<rosmaita> i mean 2 strategies exist that can be applied  15:05
<rosmaita> (but that's from memory, i don't have the code in front of me)  15:05
*** buzztroll has joined #openstack-glance  15:06
<rosmaita> nikhil: if you want, i'll put a minus on the spec saying please explain why the current stuff won't work for you  15:06
<nikhil> rosmaita: yes please!  15:06
<rosmaita> nikhil: do you have the link handy?  15:06
<nikhil> rosmaita: I read through it and the memory is coming back. they wanted to implement os-brick-like stuff for glance  15:07
<nikhil> rosmaita: that makes me uncomfortable. I do not know if we've tests around these!  15:07
<rosmaita> TIL there is an os-brick project  15:07
<nikhil> rosmaita: here's the link https://github.com/openstack/glance/tree/master/glance/common/location_strategy  15:07
<rosmaita> nikhil: thanks  15:08
<rosmaita> nikhil: i meant to the bug, but i see it on the meeting agenda  15:08
<nikhil> cool  15:09
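[Editor: the pluggable design rosmaita recalls — glance/common/location_strategy ships two selectable strategies, location_order and store_type — can be sketched roughly as below. This is a simplified model of the dispatch, not the module's exact code; real glance discovers strategies via stevedore entry points and the `location_strategy` config option.]

```python
# Simplified model of glance's pluggable location-strategy dispatch.

def _location_order(locations):
    # Preserve the order in which locations were registered.
    return list(locations)

def _store_type(locations, preference=('rbd', 'swift')):
    # Order locations by preferred store scheme; the preference tuple
    # stands in for the store_type_preference config option.
    def rank(loc):
        scheme = loc['url'].split('://', 1)[0]
        return preference.index(scheme) if scheme in preference else len(preference)
    return sorted(locations, key=rank)

STRATEGIES = {'location_order': _location_order, 'store_type': _store_type}

def get_ordered_locations(locations, strategy='location_order'):
    """Dispatch to the deployer-chosen strategy."""
    return STRATEGIES[strategy](locations)

locs = [{'url': 'http://mirror/i'}, {'url': 'rbd://pool/i'}]
assert get_ordered_locations(locs)[0]['url'] == 'http://mirror/i'
assert get_ordered_locations(locs, 'store_type')[0]['url'] == 'rbd://pool/i'
```

A deployer-specific strategy (like the metadata-weight proposal) would be one more entry in this table rather than a change to the dispatch itself.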
<nikhil> mclaren: did you want to discuss import stuff now or just prefer it on the review?  15:09
<mclaren> on the review is good I think  15:10
<nikhil> cool  15:10
*** buzztroll has quit IRC  15:10
*** TravT has quit IRC  15:19
*** TravT has joined #openstack-glance  15:20
*** julim has quit IRC  15:22
*** julim has joined #openstack-glance  15:24
*** ayoung has joined #openstack-glance  15:32
*** ducttape_ has quit IRC  15:37
*** dmk0202 has quit IRC  15:38
*** djkonro has quit IRC  15:45
<openstackgerrit> Niall Bunting proposed openstack/glance: Fix glance-cache-prefetcher  https://review.openstack.org/275322  15:45
*** ayoung has quit IRC  15:46
*** Guest21288 has quit IRC  15:46
*** ducttape_ has joined #openstack-glance  15:48
*** buzztroll has joined #openstack-glance  15:55
*** davideagnello has joined #openstack-glance  16:05
*** ducttape_ has quit IRC  16:06
*** ducttape_ has joined #openstack-glance  16:11
*** ducttape_ has quit IRC  16:12
*** sdake_ has joined #openstack-glance  16:29
*** sdake has quit IRC  16:31
<openstackgerrit> Ian Cordasco proposed openstack/glance: Remove unnecessary executable privilge of unit test file  https://review.openstack.org/310991  16:33
*** djkonro has joined #openstack-glance  16:35
<bunting> If anyone has any free time to review, this is a patch for glance_store swift functional tests. https://review.openstack.org/#/c/302374/  16:40
*** ducttape_ has joined #openstack-glance  16:46
*** djkonro has quit IRC  16:58
*** mosulica has quit IRC  16:58
*** sdake_ has quit IRC  17:00
<nikhil> bunting: thanks, will take a look in a bit  17:04
*** mine0901 has joined #openstack-glance  17:11
*** jistr has quit IRC  17:17
*** cebreidian has quit IRC  17:24
*** nikhil has quit IRC  17:31
*** nikhil has joined #openstack-glance  17:32
*** ayoung has joined #openstack-glance  17:33
<rosmaita> nikhil: left a comment on bug 1528453  17:44
<openstack> bug 1528453 in Glance "Provide a ranking mechanism for glance-api to order locations" [Wishlist,Triaged] https://launchpad.net/bugs/1528453 - Assigned to Jake Yip (waipengyip)  17:44
*** ducttape_ has quit IRC  17:45
<nikhil> rosmaita: ack, thanks. looking.  17:46
*** tsymanczyk has quit IRC  17:46
*** dshakhray has quit IRC  17:50
*** TravT has quit IRC  17:50
*** tsymanczyk has joined #openstack-glance  17:51
*** tsymanczyk is now known as Guest29252  17:51
*** itlinux has joined #openstack-glance  17:52
*** pushkaru has quit IRC  18:10
*** burgerk_ has joined #openstack-glance  18:18
*** burgerk has quit IRC  18:22
*** ducttape_ has joined #openstack-glance  18:27
*** ninag has quit IRC  18:30
*** ninag has joined #openstack-glance  18:30
*** e0ne has quit IRC  18:32
*** ninag_ has joined #openstack-glance  18:32
*** ninag_ has quit IRC  18:32
*** ninag_ has joined #openstack-glance  18:33
*** ninag has quit IRC  18:35
*** ninag_ has quit IRC  18:38
*** Guest29252 is now known as tsymanczyk  18:42
<nikhil> thanks rosmaita for the excellent research!  18:45
<nikhil> I added an additional comment relating to my comment on the bug.  18:45
*** sdake has joined #openstack-glance  18:48
*** btully has quit IRC  18:49
*** ninag has joined #openstack-glance  18:53
*** mvk_ has quit IRC  18:54
*** pushkaru has joined #openstack-glance  18:54
*** sdake_ has joined #openstack-glance  18:55
*** ninag has quit IRC  18:56
*** sdake has quit IRC  18:57
<openstackgerrit> Brian Rosmaita proposed openstack/glance-specs: Update image import refactor spec  https://review.openstack.org/311871  18:58
<tsymanczyk> i just noticed the "core reviewers" section in the community image spec was copy-pasted from when kragniz owned it. it currently lists brian-rosmaita and stuart-mclaren. should i change this?  19:07
<kragniz> if they're not the people going to review it, probably  19:09
<kragniz> I'm sure rosmaita will remain  19:09
<tsymanczyk> he seems interested, yeah. i guess remove stuart for now and wait to be told who to add?  19:10
<kragniz> sure, do that  19:10
<nikhil> tsymanczyk: you can add my lp id "nikhil-komawar"  19:11
<tsymanczyk> cool, done.  19:12
*** smatzek has quit IRC  19:15
*** pushkaru has quit IRC  19:15
-openstackstatus- NOTICE: Gerrit is restarting to address performance issues related to a suspected memory leak  19:21
<nikhil> bunting: around by chance?  19:27
<nikhil> I wanted to discuss https://review.openstack.org/#/c/275322/  19:27
*** e0ne has joined #openstack-glance  19:27
<nikhil> you say in the commit message that creds are not getting passed, and in the bug that it works for keystone v1 (something that, per the openstack community, doesn't exist)  19:28
*** ninag has joined #openstack-glance  19:29
*** mvk_ has joined #openstack-glance  19:32
<nikhil> I added a comment to the review, let's discuss there  19:33
*** ninag has quit IRC  19:36
*** ninag has joined #openstack-glance  19:37
*** btully has joined #openstack-glance  19:40
*** avarner has joined #openstack-glance  19:44
<bunting> nikhil: I am  19:44
*** avarner is now known as MistySkies  19:45
*** MistySkies has left #openstack-glance  19:45
*** flwang1 has joined #openstack-glance  19:45
flwang1rosmaita: ping re member id19:45
nikhilbunting: o/19:46
flwang1nikhil: can you remind me why we don't want to verify the member id by talking with keystone?19:46
rosmaitaflwang1: just because it takes time19:46
rosmaitaat least that's my memory19:46
nikhilverify as in verify if it exists?19:46
flwang1rosmaita: i think the case is only about image share, right?19:47
nikhilI think the user who wants to share that image should be careful about that19:47
rosmaitaflwang1: probably, i can't think of anything else atm19:47
flwang1rosmaita: and given image share is not a very frequent action, i can't be convinced to worry about the performance19:47
rosmaitanikhil: the problem is if you make a typo, you get no feedback, your sharing just doesn't work19:47
rosmaitaflwang1: there was a summit session about keystone adding a call for this19:48
rosmaitasome other project needs it, too19:48
nikhilrosmaita: well :) we'd check if member is of type uuid at max19:48
rosmaitanikhil: not guaranteed to be a uuid19:48
rosmaitatenant_id is just a string19:48
nikhilah19:49
rosmaitamaybe only alphanumeric, not sure about that, though19:49
nikhilI think the DB should take care of that19:49
nikhil"the column type"19:49
flwang1rosmaita: nikhil: i would like propose verify member id with keystone19:50
nikhilisn't it insecure to make a call to keystone to see if a member id exists?19:50
rosmaitastill, if you're off by a digit, the person you shared with can't use the image, and you're left scratching your head19:50
nikhilrosmaita: why would someone want to expose that info19:50
flwang1would you mind me submitting  a spec for this?19:50
nikhilrosmaita: for example, we return a "404" for an image that doesn't belong to a tenant19:51
tsymanczykinsecure in what regard?19:51
rosmaitaflwang1: someone has a spec up about making image member_id alphanumeric only19:51
nikhilwe do not return 40319:51
nikhilflwang1: was that directed to me?19:51
flwang1rosmaita: but that's not the solution19:51
flwang1to verify if it's a 'valid' tenant id19:51
rosmaitano, but we should not do both19:52
buntingnikhil: It seems that part of the commit message refers to ps119:52
nikhilI don't think so we need to validate with keystone19:52
nikhillook at my 404 vs 403 analogy above19:52
nikhilto put it other way: I'm not convinced that we'd19:53
rosmaitaflwang1: gimme a sec to find the info about htat summut session19:53
nikhilthe problem of one off by digit exists even in case of GET image/id19:53
rosmaitanikhil: suppose i share with tenant_id "1234l434"19:53
flwang1rosmaita: cool,thx19:54
rosmaitait looks to me like i shared with 1234143419:54
rosmaitabut that guy contacts me to say, where is the image?19:54
rosmaitathen i have to fix it19:54
rosmaitaor, i tell him i shared it, he still can't see it, and gives up19:54
nikhilbunting: gotcha, because the code did not seem to pass any cred info. I'm assuming you are going to update stuff ?19:54
*** e0ne has quit IRC19:55
nikhilrosmaita: sorry, but that looks like a edge case :)19:55
rosmaitanikhil: maybe19:56
nikhilwe're adding validation logic for each call for an edge case?19:56
rosmaitahey, don't blame me, it's flwang1 's idea!19:56
nikhilthis is similar to the string problem with my keyboard on my phone19:56
nikhilIt might be worth thinking of a validation list on top of glance19:57
nikhillike a spell corrector for keyboard19:57
nikhilrather than annoying auto correct?19:57
nikhilin this case the auto correct is super expensive19:57
buntingnikhil: Doing it now19:58
rosmaitayeah, because even if you know the thing is a valid tenant_id, you still *don't* know if it's the right one19:58
nikhilyeah19:58
nikhilwhat if both the tenants exist above?19:59
openstackgerritNiall Bunting proposed openstack/glance: Fix glance-cache-prefetcher  https://review.openstack.org/27532219:59
nikhil12341434 and 1234l43419:59
nikhil?19:59
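[editor's note: the two IDs above differ only in a lookalike character ('l' vs '1'). A minimal sketch, assuming Keystone's common 32-character hex project IDs (an assumption; deployments can differ), shows that a hex-format check would catch the 'l', while the alphanumeric-only rule from the spec rosmaita mentions would not. The helper name is invented for illustration.]

```python
import re

# Keystone commonly issues uuid4().hex project IDs: 32 lowercase hex chars.
# This is an assumption about the deployment, not something Glance enforces.
HEX_ID = re.compile(r"^[0-9a-f]{32}$")

def looks_like_project_id(member_id: str) -> bool:
    """Cheap local format check; it cannot prove the tenant exists."""
    return bool(HEX_ID.match(member_id))

good = "0123456789abcdef0123456789abcdef"  # plausible 32-char hex ID
typo = good[:4] + "l" + good[5:]           # lookalike 'l' slipped in

print(looks_like_project_id(good))  # True
print(looks_like_project_id(typo))  # False: 'l' is not a hex digit
print(typo.isalnum())               # True: alphanumeric-only accepts the typo
```

Note the limit nikhil raises: if both `12341434` and `1234l434` are real tenants, no format check (and not even a Keystone existence lookup) can tell which one the sharer meant.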
rosmaitaok, you have  convinced me that this isn't so bad20:00
rosmaitaflwang1: Project ID validation with Keystone - https://etherpad.openstack.org/p/newton-nova-keystone20:00
flwang1rosmaita: thanks20:00
flwang1nikhil: if both exist, user may accept a wrong image20:01
*** smatzek has joined #openstack-glance20:01
rosmaitaflwang1: yes, but even if we validate the tenant_id, that will still happen20:02
rosmaitabecause you're not proposing to return "You are proposing to share with tenant 123345 whose name is xxxxx and who lives at xxxxxx and whose credit card # is xxxxxxx"20:02
rosmaita:)20:02
flwang1correct20:02
flwang1ok, i'm going to withdraw my member verification idea again  :D20:03
rosmaitano, stick with it!20:03
flwang1rosmaita: haha20:04
rosmaitai guess we need to figure out how much of an edge case this is20:04
flwang1rosmaita: agree20:04
nikhilagree20:04
flwang1that's why i said maybe we need a spec20:04
rosmaitabecause it's possible to put usernames in there now and wonder why sharing isn't working20:04
flwang1so that we have a place to track and get more feedback from others20:05
rosmaita(that actually happened during testing in our cloud)20:05
flwang1rosmaita: oh, yes, tenant name and tenant id20:05
rosmaitai agree that it's worth discussing20:05
rosmaitaand i think you have a pretty good idea of the concerns people will have!20:06
flwang1rosmaita: nikhil: so does it worth a spec or just propose it to our weekly meeting?20:06
nikhilmakes sense on the spec20:06
nikhilflwang1: may be let's start with a lite-spec20:06
nikhildo propose your problem statement though20:07
rosmaita+1 on lite-spec, in a meeting we'll have the same conversation we just had, only the names will be different20:07
nikhilotherwise people will get confused like I did just now20:07
nikhilrosmaita is on a roll today20:07
nikhilnames will be different was hilarious20:08
flwang1nikhil: rosmaita: cool, thanks for the feedback20:09
flwang1i can't remember how many times i've proposed verifying the member id :D20:09
nikhilflwang1: sorry to hear that!20:10
*** ayoung has quit IRC20:10
nikhilflwang1: it'd be easier to propose things20:10
*** TravT has joined #openstack-glance20:10
flwang1nikhil: cool20:12
flwang1running to office now20:12
flwang1ttyl20:12
nikhilliterally?20:12
nikhilok ttyl20:12
nikhil:)20:12
rosmaitaflwang1: have a good run20:12
flwang1only 12km20:12
flwang1basically no traffic20:12
flwang1hopefully :D20:12
nikhilwow20:12
nikhil7.5 miles20:13
rosmaitathat's a serious run20:14
nikhilbunting: I optimistically gave a +2 https://review.openstack.org/#/c/275322/ as you just changed the commit message20:16
*** buzztroll has quit IRC20:16
*** buzztroll has joined #openstack-glance20:16
nikhilthis is something that hemanthm and rosmaita might be using so will defer the other +2 to them20:16
openstackgerritTimothy Symanczyk proposed openstack/glance-specs: Community image visibility BP  https://review.openstack.org/27101920:16
openstackgerritHemanth Makkapati proposed openstack/glance: Get rid of redundant config opts in scrubber  https://review.openstack.org/31314420:17
*** flwang1 has quit IRC20:17
*** smatzek has quit IRC20:20
*** pushkaru has joined #openstack-glance20:24
-openstackstatus- NOTICE: Gerrit is restarting to revert incorrect changes to test result displays20:29
*** dmk0202 has joined #openstack-glance20:29
openstackgerritTimothy Symanczyk proposed openstack/glance-specs: Community image visibility BP  https://review.openstack.org/27101920:31
*** cdelatte has quit IRC20:34
openstackgerritHemanth Makkapati proposed openstack/glance: Improve help text of scrubber opts  https://review.openstack.org/31272020:35
*** sdake has joined #openstack-glance20:35
*** sdake_ has quit IRC20:36
openstackgerritMerged openstack/glance: Remove unnecessary executable privilge of unit test file  https://review.openstack.org/31099120:50
buntingnikhil: Sound20:56
*** rcernin has joined #openstack-glance21:05
*** smatzek has joined #openstack-glance21:13
*** MVenesio has quit IRC21:16
*** cdelatte has joined #openstack-glance21:24
*** mingdang1 has joined #openstack-glance21:26
*** pushkaru has quit IRC21:26
*** flwang1 has joined #openstack-glance21:31
*** ametts has quit IRC21:34
*** MVenesio has joined #openstack-glance21:42
*** MVenesio has quit IRC21:44
*** sdake has quit IRC21:50
*** sdake has joined #openstack-glance21:51
*** sdake has quit IRC21:56
*** ametts has joined #openstack-glance21:56
*** ayoung has joined #openstack-glance22:03
*** ninag has quit IRC22:05
*** ninag has joined #openstack-glance22:06
*** ducttape_ has quit IRC22:06
*** ninag has quit IRC22:10
*** sigmavirus24 is now known as sigmavirus24_awa22:14
*** cdelatte has quit IRC22:22
*** rcernin has quit IRC22:29
*** ninag has joined #openstack-glance22:35
*** ametts has quit IRC22:35
*** ninag has quit IRC22:39
*** TravT has quit IRC22:51
*** cdelatte has joined #openstack-glance22:51
*** david-lyle has quit IRC22:51
*** david-lyle has joined #openstack-glance22:53
*** ducttape_ has joined #openstack-glance22:54
openstackgerritTimothy Symanczyk proposed openstack/glance-specs: Community image visibility BP  https://review.openstack.org/27101922:55
*** david-lyle has quit IRC22:58
*** davideag_ has joined #openstack-glance23:04
*** ayoung has quit IRC23:07
*** davideagnello has quit IRC23:07
*** GB21 has joined #openstack-glance23:10
*** jaypipes has quit IRC23:16
*** sgotliv has quit IRC23:21
*** mingdang1 has quit IRC23:27
*** GB21 has quit IRC23:28
*** mtanino has quit IRC23:30
*** mtanino has joined #openstack-glance23:30
*** krotscheck has quit IRC23:31
*** krotscheck has joined #openstack-glance23:31
*** GB21 has joined #openstack-glance23:32
*** cdelatte has quit IRC23:34
*** dmk0202 has quit IRC23:42
*** MVenesio has joined #openstack-glance23:45
*** davideag_ has quit IRC23:45
*** MVenesio has quit IRC23:50
*** ayoung has joined #openstack-glance23:53

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!