Tuesday, 2018-11-13

*** gyee has quit IRC  00:39
*** Liang__ has joined #openstack-glance  00:40
*** jokke_ has quit IRC  00:47
*** Liang__ is now known as LiangFang  01:21
*** imacdonn has joined #openstack-glance  01:38
*** imacdonn_ has quit IRC  01:38
*** zhanglong has joined #openstack-glance  01:58
*** udesale has joined #openstack-glance  03:43
*** pdeore has joined #openstack-glance  03:50
*** poojaj has joined #openstack-glance  04:45
*** pdeore has quit IRC  04:55
*** threestrands has joined #openstack-glance  05:02
*** zul has quit IRC  06:04
*** udesale has quit IRC  06:16
*** udesale has joined #openstack-glance  06:16
*** zhanglong has quit IRC  07:18
*** mosulica has joined #openstack-glance  07:37
*** mosulica has quit IRC  07:58
*** lbragstad has joined #openstack-glance  08:03
*** threestrands has quit IRC  08:04
*** lbragstad has quit IRC  08:06
*** mosulica has joined #openstack-glance  08:07
*** lbragstad has joined #openstack-glance  08:12
*** mosulica has joined #openstack-glance  08:21
*** priteau has joined #openstack-glance  08:35
*** itlinux has joined #openstack-glance  08:49
*** itlinux has quit IRC  08:50
*** lbragstad has quit IRC  09:12
*** lbragstad has joined #openstack-glance  09:24
*** mvkr has quit IRC  09:29
*** brinzhang has joined #openstack-glance  09:32
*** lbragstad has quit IRC  09:39
*** priteau has quit IRC  09:52
*** lbragstad has joined #openstack-glance  09:54
*** mvkr has joined #openstack-glance  09:55
*** mosulica has quit IRC  09:56
*** pcaruana has joined #openstack-glance  09:56
*** mosulica has joined #openstack-glance  09:57
*** lbragstad has quit IRC  10:04
*** mvkr has quit IRC  10:38
*** mvkr has joined #openstack-glance  10:38
*** LiangFang has quit IRC  10:42
*** lbragstad has joined #openstack-glance  10:44
*** poojaj is now known as poojajadhav  11:09
*** poojajadhav is now known as poojaj  11:10
*** poojaj has quit IRC  11:26
*** lbragstad has quit IRC  11:32
*** lbragstad has joined #openstack-glance  11:40
*** lbragstad has quit IRC  11:52
*** lbragstad has joined #openstack-glance  11:53
*** lbragstad has quit IRC  12:09
*** lbragstad has joined #openstack-glance  12:12
*** lbragstad has quit IRC  12:12
*** brinzhang has quit IRC  13:43
*** Liang__ has joined #openstack-glance  13:50
*** Liang__ is now known as LiangFang  13:51
*** zul has joined #openstack-glance  14:03
*** MattMan_ has quit IRC  14:04
*** MattMan_ has joined #openstack-glance  14:04
*** mosulica has quit IRC  14:07
*** udesale has quit IRC  14:16
*** Dinesh_Bhor has joined #openstack-glance  14:22
*** udesale has joined #openstack-glance  14:33
*** pvradu has joined #openstack-glance  14:48
*** Dinesh_Bhor has quit IRC  14:48
*** jistr is now known as jistr|mtg  14:57
*** sapd1 has joined #openstack-glance  15:19
<openstackgerrit> zhouxinyong proposed openstack/glance master: Update the HTTP links to HTTPS in metadefs-index.rst.  https://review.openstack.org/617688  15:23
<openstackgerrit> zhouxinyong proposed openstack/glance-specs master: Applying the HTTPS protocal in add-protected-filter.rst.To keep the website in this file be more robust,we'd better update the links to HTTPS type  https://review.openstack.org/617690  15:26
*** LiangFang has quit IRC  15:29
*** Dinesh_Bhor has joined #openstack-glance  15:29
<openstackgerrit> zhouxinyong proposed openstack/glance_store master: Replacing the HTTP protocal with HTTPS in remove-s3-driver-f432afa1f53ecdf8.yaml.  https://review.openstack.org/617692  15:31
*** trident has quit IRC  15:55
*** trident has joined #openstack-glance  15:56
*** jistr|mtg is now known as jistr  15:59
*** jmlowe has joined #openstack-glance  15:59
*** gyee has joined #openstack-glance  16:00
*** Luzi has joined #openstack-glance  16:04
*** itlinux has joined #openstack-glance  16:17
*** jmlowe has quit IRC  16:22
*** Dinesh_Bhor has quit IRC  16:23
*** itlinux has quit IRC  16:32
*** jmlowe has joined #openstack-glance  16:34
*** mvkr has quit IRC  16:39
*** irclogbot_1 has joined #openstack-glance  16:39
*** irclogbot_1 has quit IRC  16:43
*** itlinux has joined #openstack-glance  16:43
*** Luzi has quit IRC  16:47
*** _alastor_ has joined #openstack-glance  16:51
*** itlinux has quit IRC  17:21
*** jmlowe has quit IRC  17:25
*** udesale has quit IRC  17:25
*** jmlowe has joined #openstack-glance  17:28
*** jmlowe has quit IRC  17:36
*** pvradu has quit IRC  17:56
*** pvradu has joined #openstack-glance  17:56
*** pvradu has quit IRC  18:00
*** vishwanathj has joined #openstack-glance  18:08
*** sapd1 has quit IRC  18:11
<imacdonn> rosmaita: are you around this week? not sure if you got to go to Berlin  18:20
<rosmaita> i am sitting at home sulking  18:20
<imacdonn> awww  18:20
<rosmaita> what's up?  18:20
<imacdonn> https://bugs.launchpad.net/glance/+bug/1802587  18:20
<openstack> Launchpad bug 1802587 in Glance "With multiple backends enabled, adding a location does not default to default store" [Undecided,New]  18:20
<imacdonn> still trying to understand how this has anything to do with the validation_data stuff .. other than that they both involve setting locations via patch  18:21
<rosmaita> well, when i have multiple backends set up (assuming i did it correctly), and i use your glanceclient patch without setting a specific backend, the call fails  18:22
<imacdonn> so my question (https://review.openstack.org/602794) ... does it work (adding a location without specifying a backend) WITHOUT my patch?  18:23
<rosmaita> depends on whether you have multiple backends enabled or not  18:23
<vishwanathj> hi, looking for links that explain how glance copies large images (>4 GB) over to compute nodes .... we have an ansible kolla openstack installation with bare metal compute nodes; noticing that the first time a VM is created using the image, it takes a long time to create the VM, or the VM fails to create due to timeout  18:23
<imacdonn> right  18:23
<imacdonn> my point is that it has nothing to do with my patch .. the problem already existed, and is completely separate ... I think?  18:24
<vishwanathj> or any links that explain how to handle my scenario would be appreciated as well  18:24
<rosmaita> imacdonn: yes, you have not broken anything, just revealed a problem  18:25
<rosmaita> vishwanathj: glance doesn't do the copy ... nova requests the image data from glance via http  18:26
<imacdonn> rosmaita: yeah, so I'm thinking that this shouldn't block the addition of validation_data to glanceclient ... unless you think we need to do something radically different  18:26
<vishwanathj> rosmaita oh I see  18:26
<rosmaita> imacdonn: well, i'm not sure, just don't want to introduce new functionality that's going to fail a lot ... we need abhishek and erno's opinions, they've been driving the multiple backends  18:27
<rosmaita> imacdonn: abhishek should be back tomorrow, he knows about the bug  18:27
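For context, the "setting locations via patch" above refers to the Images v2 JSON-patch call; a minimal sketch of such a request follows (the image ID and URL are illustrative, and the optional validation_data object added by the glanceclient patch under discussion is omitted):

    PATCH /v2/images/{image_id}
    Content-Type: application/openstack-images-v2.1-json-patch

    [
        {
            "op": "add",
            "path": "/locations/-",
            "value": {"url": "http://example.com/images/cirros.qcow2",
                      "metadata": {}}
        }
    ]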
<rosmaita> vishwanathj: what backend are you using for glance?  18:28
<vishwanathj> we are using IBM Swift as the glance backend ... we have openstack installed in IBM Cloud  18:29
<rosmaita> imacdonn: so if your question is, "what changes do i need to make to my patch to get it approved?", the answer from me is, "i don't know"  18:29
<imacdonn> rosmaita: FWIW, I tried to enable multi-backend in Rocky, prompted by a deprecation warning, and was not able to get it to work; then I read something that said it's experimental and not recommended yet ... IMO, deprecation is being abused a bit these days, but that's a separate issue too  18:30
<rosmaita> imacdonn: i didn't try it in stable/rocky, but did get it working in S-1  18:31
<imacdonn> rosmaita: I need to try it again, as I don't remember what failed  18:32
<rosmaita> imacdonn: i have a paste of my glance-api.conf for multiple backends, gimme a minute to find it  18:33
<vishwanathj> rosmaita using IBM Swift as the glance backend, openstack is installed in IBM Cloud  18:33
<rosmaita> imacdonn: http://paste.openstack.org/show/734694/  18:33
<imacdonn> rosmaita: I think I was attempting to configure it for multi, but with only a single backend ... i.e. I don't actually need multiple (I only ever use http), but using the new config options  18:33
<rosmaita> imacdonn: that's also why i want a second opinion about the bug, not sure i have everything configured properly  18:34
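For reference, a multi-store glance-api.conf generally has this shape (a minimal sketch based on the documented enabled_backends options; the store names and paths are illustrative, and the contents of the paste above may differ):

    [DEFAULT]
    enabled_backends = fast:file, swift-store:swift

    [glance_store]
    default_backend = fast

    [fast]
    filesystem_store_datadir = /var/lib/glance/images/

    [swift-store]
    swift_store_container = glance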
<rosmaita> vishwanathj: i don't have any suggestions  18:35
<vishwanathj> rosmaita ok, thanks  18:35
<rosmaita> vishwanathj: when nova requests the image from glance, glance will get it from swift and cache it as it streams it to nova; the second request for the same image (after the caching is complete) should be much faster  18:36
<rosmaita> vishwanathj: that doesn't speed up the first download, though  18:36
<imacdonn> there may be some timeout that can be tweaked on the nova side? I don't recall offhand  18:37
<vishwanathj> rosmaita oh ok, so it is expected that the first download would be slow and subsequent ones would be faster because the image would be cached ... looks like every new compute node added will have this issue, right?  18:37
<imacdonn> every compute node would have to cache the image the first time it's needed there  18:38
*** gyee has quit IRC  18:38
<rosmaita> vishwanathj: yes, and like imacdonn points out, the compute also has a cache, but that's usually limited to only base images  18:38
<imacdonn> a possible alternative approach is to create a volume from the image, then snapshot that volume .. then when you create a new VM, clone the snapshot to make a boot volume  18:39
<imacdonn> all of that should happen on the Ceph side, and "be fast"  18:40
<imacdonn> oh wait, you said Swift, not Ceph  18:40
<imacdonn> nevermind ;)  18:40
<vishwanathj> imacdonn so the workaround exists for the ceph backend but not Swift?  18:41
<rosmaita> vishwanathj: the image cache is middleware; to see if it's enabled, check the paste_deploy.flavor config option in glance-api.conf  18:41
<rosmaita> https://github.com/openstack/glance/blob/master/etc/glance-api.conf#L4413  18:41
<imacdonn> ... but if you do have a cinder service with a backend that supports cloning, it is a way to make booting from large images much more expedient  18:42
<vishwanathj> I see  18:43
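The volume-based approach imacdonn describes can be sketched with the openstack CLI roughly as follows (the image, volume, and flavor names and sizes are illustrative; it only pays off when the cinder backend supports efficient cloning, e.g. Ceph):

    # create a bootable volume from the large image once
    openstack volume create --image big-image --size 50 base-vol
    # snapshot it
    openstack volume snapshot create --volume base-vol base-snap
    # for each new VM, clone the snapshot into a boot volume and boot from it
    openstack volume create --snapshot base-snap --size 50 vm1-boot
    openstack server create --volume vm1-boot --flavor m1.large vm1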
<imacdonn> rosmaita: I enabled multi again, and it seems to start up OK .. the more I think about it, it seems likely I ran into the same issue you did  18:58
<rosmaita> could be ... i get an "invalid location" error, which isn't very helpful  18:59
<imacdonn> rosmaita: this reminds me of another issue ... the show_multiple_locations config option is supposedly deprecated since Newton, but.... https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L452-L455  18:59
<rosmaita> yeah, it turns out to be remarkably complicated to get rid of that option  19:00
<rosmaita> imacdonn: https://docs.openstack.org/releasenotes/glance/ocata.html#relnotes-14-0-0-origin-stable-ocata-other-notes  19:02
<imacdonn> yeah  19:04
<imacdonn> the deprecation message still kind of lies, though  19:04
<imacdonn> it says "the same functionality can be achieved with greater granularity by using policies.", which is not quite true  19:05
<rosmaita> well, it's more an aspiration than a statement of fact, it turns out  19:06
<imacdonn> heh  19:07
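The policies that deprecation message refers to are the location-related rules in glance's policy file; a minimal sketch of restricting them to admins might look like this (the rule names come from glance's default policies, the restriction shown is illustrative):

    {
        "get_image_location": "role:admin",
        "set_image_location": "role:admin",
        "delete_image_location": "role:admin"
    }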
<imacdonn> question .... is there any backend other than http that actually supports setting locations?  19:08
<imacdonn> trying to think of any other case where it'd actually make sense  19:08
<rosmaita> well, locations are a glance thing, really, not a backend thing  19:08
*** irclogbot_1 has joined #openstack-glance  19:09
<rosmaita> i think when they were introduced, the idea was you could have an image stored in multiple places  19:09
<rosmaita> you would use a location_strategy module to pick the location that was used for a particular request  19:09
<imacdonn> I should have said "store", not "backend", perhaps  19:10
<rosmaita> but i think you can configure nova to use locations to do the fast-snapshotting thing for images with the ceph backend  19:11
<rosmaita> there's also the possibility to pre-deploy image data directly to the backend and then set the location  19:12
<rosmaita> basically, what you're doing  19:13
<rosmaita> except with the other stores  19:13
<imacdonn> ok, hmmm  19:13
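The location_strategy rosmaita mentions is a glance-api.conf option; a minimal sketch (the two strategy names are the ones shipped with glance, and the store-type preference list is illustrative):

    [DEFAULT]
    # pick locations in the order they were added, or prefer particular store types
    location_strategy = store_type

    [store_type_location_strategy]
    store_type_preference = rbd, swift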
<vishwanathj> rosmaita imacdonn what do you think about following the instructions at https://docs.openstack.org/glance/latest/admin/cache.html ... would it help alleviate a bit? I know it talks about caching on the server where glance-api is running and not on the compute node  19:13
<imacdonn> vishwanathj: it might ... it probably depends on where your bottleneck is  19:14
<rosmaita> vishwanathj: if you use the prefetcher, it should speed things up  19:15
<rosmaita> but imacdonn is right  19:15
<rosmaita> you need to make sure a cached image is actually faster  19:15
<vishwanathj> ok, I am using kolla-ansible to deploy my openstack controller, and all the services run inside docker containers; do you know whether the cache is supported in docker containers?  19:17
<rosmaita> vishwanathj: i really don't know.  i think the software will work, but there may be resource constraints on how big your cache can be  19:19
<vishwanathj> ok  19:20
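For reference, enabling the image cache discussed above is a glance-api.conf change along these lines (a minimal sketch; the directory and size are illustrative):

    [paste_deploy]
    # "keystone+caching" enables the cache only;
    # "keystone+cachemanagement" also enables the cache-manage endpoints
    flavor = keystone+cachemanagement

    [DEFAULT]
    image_cache_dir = /var/lib/glance/image-cache/
    image_cache_max_size = 10737418240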
<_alastor_> rosmaita: Hey, I'm trying to write some compatibility code in a driver for the Store.add function, which changed signature in Dec 2017  19:58
<_alastor_> rosmaita: The issue I'm hitting is with the capabilities.check decorator; it's not correctly passing the store as the first argument  19:58
<_alastor_> Example: http://paste.openstack.org/show/734785/  19:58
<rosmaita> _alastor_: looking  20:00
<_alastor_> rosmaita: This is the exception: http://paste.openstack.org/show/734786/  20:01
<_alastor_> rosmaita: It occurs because the first argument passed to 'add' is the image_id instead of the store object  20:01
<_alastor_> rosmaita: I'm running on Pike, but it's pretty much the same on master  20:02
*** irclogbot_1 has quit IRC  20:09
*** irclogbot_1 has joined #openstack-glance  20:11
*** vishwanathj has quit IRC  20:19
*** jmlowe has joined #openstack-glance  20:20
<rosmaita> _alastor_: i think i need more context for http://paste.openstack.org/show/734785/  20:30
<_alastor_> rosmaita: I'm doing a simple import check to see if driver.back_compat_add exists.  If it doesn't, then I need to overwrite the Store.add function with the older version  20:32
<_alastor_> rosmaita: Essentially I'm creating an adapter that will change the old function signature to the new one, so I only have to maintain one version  20:32
<rosmaita> _alastor_: not sure i understand ... why don't you just write the new function sig and use driver.back_compat_add to handle an "old style" call?  20:35
<_alastor_> rosmaita: Because that import won't work in a stable/pike environment, where back_compat_add doesn't exist  20:36
<rosmaita> oh, ok  20:36
<rosmaita> do you have the capabilities.check decorator also on your definition of the store.add() function?  20:46
<_alastor_> rosmaita: Yes, that works just fine.  I add a passthrough back_compat_add function to the driver module when I detect it isn't present.  The issue I'm running into is trying to correctly overwrite the 'add' function in that scenario  20:50
<_alastor_> When I use types.MethodType to overwrite it, I get an error like this: http://paste.openstack.org/show/734787/  20:50
<_alastor_> Example using types.MethodType: http://paste.openstack.org/show/734788/  20:52
<rosmaita> thanks  20:52
*** mvkr has joined #openstack-glance  21:02
*** pvradu has joined #openstack-glance  21:04
<rosmaita> _alastor_: i am not sure what's going on, but if you are already doing the capabilities check on add() and _add() is calling add(), then i wouldn't think you need to decorate _add()?  what happens if you just leave the decorator off?  21:09
<_alastor_> rosmaita: When I take the decorator off I get this: http://paste.openstack.org/show/734789/  21:25
<rosmaita> looking  21:25
<_alastor_> rosmaita: I think it means the wrong number of arguments is still getting passed to the function  21:26
<rosmaita> that looks just like the error when you used types.MethodType  21:27
<rosmaita> not sure what that means, though  21:28
<_alastor_> rosmaita: This is where everything goes wrong.  I've poked around with pdb and I can't figure out why the call fails: http://paste.openstack.org/show/734790/  21:33
<_alastor_> rosmaita: You can see the argspec of the function should work with the arguments provided  21:34
<rosmaita> if you are still in the debugger, what does it give you for the argspec of store._add?  21:36
<_alastor_> rosmaita: can't call inspect on it because it's a nested method.  I'll try promoting it and see if that helps  21:39
<rosmaita> _alastor_: i have to go afk for a few hours, good luck  21:43
*** jmlowe has quit IRC  21:43
<_alastor_> rosmaita: alrighty, thanks  21:44
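The pattern under discussion (falling back to a pass-through back_compat_add when running against an older glance_store, and decorating add() at class-definition time rather than monkey-patching it onto instances) might be sketched as follows. This is a hypothetical sketch only, not the code from the pastes above; the exact glance_store signatures and decorator order should be checked against the release in use:

    # Hypothetical compatibility shim for an out-of-tree glance_store driver (sketch only).
    from glance_store import capabilities
    from glance_store import driver

    try:
        # newer glance_store releases ship a decorator that adapts old-style add() calls
        from glance_store.driver import back_compat_add
    except ImportError:
        # older glance_store (e.g. stable/pike): nothing to adapt, so use a no-op pass-through
        def back_compat_add(store_add_fun):
            return store_add_fun


    class MyStore(driver.Store):
        # Decorating here, at class-definition time, keeps add() an ordinary method,
        # so `self` (the store) is bound correctly on every call; replacing `add` on
        # an instance afterwards is what shifts the arguments so that image_id shows
        # up where the store object was expected.
        @back_compat_add
        @capabilities.check
        def add(self, image_id, image_file, image_size, *args, **kwargs):
            # ... write the image bits, then return the location/size/checksum metadata tuple
            raise NotImplementedError("sketch only")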
*** pcaruana has quit IRC  21:56
*** pvradu has quit IRC  21:59
*** Sravan has joined #openstack-glance  22:06
*** vishwanathj has joined #openstack-glance  22:07
*** jiaopengju has quit IRC  22:08
*** jiaopengju has joined #openstack-glance  22:08
*** hoonetorg has quit IRC  22:09
*** kukacz has quit IRC  22:09
*** hoonetorg has joined #openstack-glance  22:10
*** kukacz has joined #openstack-glance  22:10
*** Sravan has quit IRC  22:52
*** gyee has joined #openstack-glance  23:01
*** pvradu has joined #openstack-glance  23:14
*** pvradu has quit IRC  23:19
*** Sravan has joined #openstack-glance  23:27
*** Sravan has quit IRC  23:32
*** Sravan has joined #openstack-glance  23:38

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!