clayg | oh whatever pep8 | 00:06 |
---|---|---|
clayg | lol! it's that guy | 00:07 |
clayg | we need to attach his twitter account to launchpad! | 00:07 |
notmyname | heh, yeah :-) | 00:08 |
*** tonanhngo has joined #openstack-swift | 00:08 | |
clayg | at least we fixed his other bug? https://bugs.launchpad.net/glance-store/+bug/1213179 | 00:09 |
openstack | Launchpad bug 1213179 in python-swiftclient "swiftclient logs all ClientException (404) at ERROR (with traceback)" [Critical,Fix released] - Assigned to Bartek Żurawski (bartekzurawski1) | 00:09 |
*** tonanhngo has quit IRC | 00:13 | |
*** klamath has quit IRC | 00:13 | |
*** chsc has quit IRC | 00:25 | |
clayg | timburke: patch 442342 | 00:25 |
patchbot | https://review.openstack.org/#/c/442342/ - python-swiftclient - Fix logging of the gzipped body | 00:25 |
clayg | what is up with "needs_items" | 00:25 |
clayg | what does the *real* return!? | 00:25 |
clayg | oh... maybe the test is just misappropriating MockHtpResponse? | 00:29 |
timburke | in requests-land, there's a headers attr that's a CaseInsensitiveDict. in httplib-land, there's a getheaders method that returns a list of tuples. and in swiftclient-test-land, MockHttpResponse is terrible | 00:30 |
timburke | the more i look at things, i think that fake getheaders is just plain wrong. it should always have been item-ized | 00:31 |
*** NM has joined #openstack-swift | 00:31 | |
*** catintheroof has quit IRC | 00:31 | |
*** jamielennox is now known as jamielennox|away | 00:41 | |
*** jamielennox|away is now known as jamielennox | 00:44 | |
*** tonanhngo has joined #openstack-swift | 00:45 | |
*** tonanhngo has quit IRC | 00:50 | |
*** NM has quit IRC | 00:58 | |
openstackgerrit | Clay Gerrard proposed openstack/python-swiftclient master: Cleanup fake in test_swiftclient https://review.openstack.org/442881 | 01:07 |
kota_ | good morning | 01:11 |
clayg | rledisez: oh, it could even just be "if _get_any_lock(fds): break" | 01:12 |
*** dja has quit IRC | 01:20 | |
mattoliverau | kota_: morning | 01:23 |
kota_ | mattoliverau: o/ | 01:23 |
mattoliverau | kota_: regarding ring zipper, if we in version 1 at least are forcing different regions per ring, could we sort ring input by region, that way ring order is important? (Just a thought while I'm sitting here making and eating lunch) | 01:25 |
mattoliverau | Ring order isn't important I meant | 01:26 |
kota_ | mattoliverau: ah, it may be good idea to sort by region | 01:27 |
kota_ | mattoliverau: i've been thinking of how we could go that better in the last night, too. | 01:27 |
kota_ | mattoliverau: my idea in the last night, a couple of different way from yours exist | 01:28 |
kota_ | 1. add index format like 0:ring_file, 1:ring_file and sort via the index | 01:29 |
kota_ | but it's still painful if the operator lost the index | 01:29 |
kota_ | 2. make it the ring zipper as 2-phase like *ringbuilder* | 01:29 |
kota_ | that means the new syntax will be like `swift-ring-zipper add zipper.builder ring_file` for some times, and then `swift-ring-zipper zip zipper.builder` | 01:30 |
kota_ | the rings are sorted by id in the zipper.builder | 01:31 |
notmyname | zipper zip zip zippy doo | 01:31 |
kota_ | it would be safer but it could be more complex rather than current simple implementatin | 01:31 |
kota_ | notmyname: sounds like disney song :) | 01:32 |
notmyname | lol | 01:32 |
*** squid has quit IRC | 01:33 | |
kota_ | mattoliverau: and then, thinking of what stuation will be affected for the ring order, it's IIF EC and each ring is part of ec_k + ec_m | 01:33 |
kota_ | replica case should be safe | 01:34 |
kota_ | ec_duplication and each ring keeps ec_k + ec_m (i.e. # of rings == ec_duplication_factor) is also safe | 01:35 |
clayg | I thought we were going to have .json on disk? | 01:41 |
clayg | so it was swift-ring-builder zip object-1.json -> object-1.ring.gz | 01:42 |
clayg | then you can just have the json be [region-0.builder, region-1.builder] | 01:42 |
kota_ | clayg: yeah, that was, but IIRC, the format has been not discussed yet? | 01:42 |
clayg | or the idea is that *in* the zipped ring we want to "fix" the 'region' attribute of the node/device entries if the used the same region integer in both? | 01:43 |
clayg | oh | 01:43 |
kota_ | clayg: ah, you want to feed the *builder* file to zipper? | 01:43 |
clayg | sorry I saw a notice of patch - but didn't look at it yet | 01:43 |
clayg | idk, i'm with you - haven't really discussed or thought through it yet | 01:43 |
kota_ | clayg: on the region perspective, I found an issue when the zipper resolve the region namespace | 01:44 |
clayg | it's not *unreasonable* that the swift-ring-builder could handle running rebalance on a list of rings, getting the ring data out and stiching them together | 01:44 |
kota_ | clayg: if the zipper makes new region name from each rings | 01:44 |
clayg | the goal is region1.builder + region2.builder -> ~magic~ -> composite.ring.gz | 01:45 |
clayg | the ~magic~ can say "you need to rebalance region1.builder to make region1.ring.gz" or not | 01:45 |
clayg | depends on the use-case | 01:45 |
kota_ | e.g. [ring-0.ring.gz, ring-1.ring.gz], each ring has region 0, 1. and zipper handles them as different like... maybe region 0-0, 0-1, 1-0, 1-1 | 01:45 |
kota_ | how does the operator know and how we could set the write/read affinity in the proxy-server.conf | 01:46 |
clayg | yeah... I think maybe the zipper should reject things when it concats the devices list | 01:47 |
clayg | oh... maybe not | 01:47 |
clayg | depends on the intent and the use-case | 01:47 |
kota_ | clayg: yes, definately | 01:47 |
clayg | for now as I understand it being used if a zipper said "screw you I'm not zipping these two device lists together because 'thing I expect to be unique between them is not unique'" I would be ok with that | 01:48 |
kota_ | clayg: that's my current implementation at patch 441921 | 01:49 |
patchbot | https://review.openstack.org/#/c/441921/ - swift - Composite Ring Zipper Implementation | 01:49 |
*** tonanhngo has joined #openstack-swift | 01:49 | |
clayg | I could see it bailing out if {d['region'] for d in devices1} & {d['region'] for d in devices2} or even if d['ip']!? | 01:49 |
kota_ | for my usecase, it's ok (becuase it's been my idea) to set unique region by the operator and the zipper will reject if same region found in. | 01:51 |
kota_ | Am i thinking too hard? | 01:53 |
*** tonanhngo has quit IRC | 01:53 | |
clayg | kota_: idk, I like this! https://review.openstack.org/#/c/441921/2/swift/cli/ringzipper.py | 01:57 |
patchbot | patch 441921 - swift - Composite Ring Zipper Implementation | 01:57 |
kota_ | clayg: :) | 01:57 |
clayg | i'm trying to decide if the ip stuff goes far enough | 01:58 |
clayg | I know we don't expect/want the same node to be in both rings! | 01:58 |
kota_ | clayg: probably, https://review.openstack.org/#/c/441921/2/swift/cli/ringzipper.py@122 is what you're looking for | 01:59 |
patchbot | patch 441921 - swift - Composite Ring Zipper Implementation | 01:59 |
clayg | I agree trying to magic around when an operator makes a *mistake* (two different region rings with the same region number) instead of encoruging them to do the right/expected thing (use unique region number in each ring) by using fail early and loud error messages is the best thing we can do | 02:00 |
clayg | or ... i'm not sure I said that right | 02:00 |
kota_ | to be simple, I added it as a validation *AFTER* zipped | 02:00 |
clayg | if it's simple not only does it have less bugs - it's easier for an operator to understand (and maybe even patch!) | 02:00 |
*** sams-gleb has joined #openstack-swift | 02:01 | |
clayg | kota_: so that check only says the same device can't be in the list twice... that's a good check... but that would allow for half a nodes devices to be in one ring and the other half in the other ring | 02:01 |
clayg | ... and that node could be in different region/zones and come from different rings | 02:02 |
clayg | i guess maybe that's not so different from what we have today | 02:02 |
clayg | I mean you can definately have the same node in different zones/regions | 02:02 |
kota_ | clayg: ah, that's correct | 02:02 |
clayg | ... I'm not sure if it's possible to get the same (ip, port, device) combo into a ring with different id's | 02:03 |
kota_ | clayg: that check exists in the *ring builder* | 02:03 |
kota_ | 1 sec | 02:04 |
kota_ | https://github.com/openstack/swift/blob/master/swift/cli/ringbuilder.py#L653-L660 | 02:04 |
clayg | *lame* | 02:05 |
kota_ | the ring-builder rejects the device when the operator try to add the device and the ip/port/dev in the builder file | 02:05 |
clayg | that's only the cli | 02:05 |
clayg | that's lame | 02:05 |
clayg | I mean... i guess having it is better than not? | 02:05 |
clayg | ... but it should probably be in add_dev ;) | 02:05 |
clayg | definatley a good reason to duplicate it in the zipper | 02:05 |
*** sams-gleb has quit IRC | 02:06 | |
clayg | ok, so if you `zipper region1.ring.gz region2.ring.gz` then `zipper region2.ring.gz region1.ring.gz` next time - is that bad? | 02:06 |
clayg | I feel pretty strongly we should just go ahead and bite the bullet and define a data exchagne format for synthetic rings which is esentially a list of dicts, and the dict has one key 'name' | 02:07 |
kota_ | to say correctly, the bad case exists but my use case and almost works well | 02:07 |
kota_ | the bad case is in *normal* ec, I don't know if someone wants to use the zipped ring for *normal* ec though | 02:09 |
clayg | so but I think if you fat finger it - it's bad `zipper r1-8.ring.gz r2-8.ring.gz r3-8.ring.gz r4-8.ring.gz r5-8.ring.gz r6-8.ring.gz r7-8.ring.gz r9-8.ring.gz` ... is that what I typed last time? did I forget one this time? | 02:09 |
clayg | right, gotcha - if node_index matters the order matters | 02:10 |
*** vint_bra has joined #openstack-swift | 02:10 | |
kota_ | clayg: yes | 02:10 |
clayg | so, I was hoping we wouldn't grow a new command line tool - I was hoping it would just go in swift-ring-builder and I was hoping we would have a thing - that looks a lot like a builder for most of the cli commands - but instead of .builder - it's a .json - and in the json is a list of pointers to some other builder/rings | 02:12 |
clayg | i don't now why i was thinking that was a good idea | 02:12 |
*** vint_bra has left #openstack-swift | 02:12 | |
clayg | but `swift-ring-builder region1.builder remove d1; swift-ring-builder object-1.json rebalance` sounds *awesome* | 02:12 |
clayg | maybe it's too much for now | 02:13 |
clayg | I wonder what is the surfance area of the RingBuilder api that swift-ring-builder acctually uses? | 02:14 |
clayg | CompositeRingBuilder takes json - wraps a bunch of instances of RingBuilder and exposes a similar interface - or blows up with MethodNotSupported | 02:15 |
clayg | if we have a swift-ring-zipper I think we'll keep it around for a long time - even if we later define the data format and try to improve the ux ... | 02:16 |
clayg | idk!? | 02:16 |
clayg | I'm not going to get to use swift-ring-builder *or* swift-ring-zipper - all of my ring manipulation is programatic | 02:16 |
clayg | so if there was a *class* that somehow abstracted this for me that would be a more interesting thing to consume from upstream | 02:17 |
clayg | idk! | 02:18 |
*** tonanhngo has joined #openstack-swift | 02:19 | |
kota_ | for the input format and where (builder or zipper) we should implement, IMO, the patch 441921 is good starting point to discuss | 02:22 |
patchbot | https://review.openstack.org/#/c/441921/ - swift - Composite Ring Zipper Implementation | 02:22 |
kota_ | the patch is made as *at least* (small enough) features to get composite zipped ring so I am able to adjust some ways which would be more useful. | 02:24 |
kota_ | like json | 02:24 |
kota_ | or builder like interface | 02:24 |
*** tonanhngo has quit IRC | 02:24 | |
kota_ | or abstracted classes for the programmer | 02:24 |
*** dja has joined #openstack-swift | 02:28 | |
*** JimCheung has quit IRC | 02:31 | |
*** dmorita has quit IRC | 02:35 | |
*** dmorita has joined #openstack-swift | 02:58 | |
*** dmorita has quit IRC | 03:02 | |
*** JimCheung has joined #openstack-swift | 03:10 | |
*** chlong_ has joined #openstack-swift | 03:15 | |
*** JimCheung has quit IRC | 03:15 | |
*** chlong has quit IRC | 03:15 | |
*** winggundamth has joined #openstack-swift | 03:20 | |
*** winggundamth has quit IRC | 03:28 | |
*** winggundamth has joined #openstack-swift | 03:31 | |
*** dmorita has joined #openstack-swift | 03:38 | |
*** dmorita has quit IRC | 03:42 | |
openstackgerrit | Merged openstack/python-swiftclient master: Fix logging of the gzipped body https://review.openstack.org/442342 | 04:01 |
*** psachin has joined #openstack-swift | 04:18 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/python-swiftclient master: Updated from global requirements https://review.openstack.org/89250 | 04:25 |
clayg | I always get bit by py35 json.dumps returns a string instead bytes | 04:30 |
clayg | it's like ... I *started* with internal representation of a data object - I *want* to exchange it to another system (wire, filesyste, process) - which is *why* I **serialized** it!? Why you giving me back another non-serializable internal object!? | 04:31 |
clayg | I need bytes, baby! bytes! | 04:32 |
mattoliverau | clayg: it does tell you in the function name 'dumps' s for string:P | 04:32 |
mattoliverau | we need a dumpb | 04:32 |
clayg | idk, does pickle.dumps return a unicode string? | 04:32 |
mattoliverau | no idea | 04:32 |
clayg | >>> pickle.dumps(None) | 04:33 |
clayg | b'\x80\x03N.' | 04:33 |
clayg | py35 | 04:33 |
clayg | >>> json.dumps(None) | 04:33 |
clayg | 'null' | 04:33 |
mattoliverau | sigh | 04:33 |
clayg | I think I'm on to something here! py35 json bites me all the time because *its* stupid! | 04:33 |
clayg | it's not me this time! it's not me this time! it's not me this time! | 04:34 |
mattoliverau | i think it might be just that :) | 04:34 |
* clayg fully admits timburke will come back with some argument how it's all very internally consistent and in fact... | 04:34 | |
clayg | it is me | 04:34 |
openstackgerrit | Clay Gerrard proposed openstack/python-swiftclient master: Cleanup fake in test_swiftclient https://review.openstack.org/442881 | 04:35 |
*** dmorita has joined #openstack-swift | 04:40 | |
*** links has joined #openstack-swift | 04:41 | |
clayg | OH BS -> https://docs.python.org/3/library/json.html#json.loads py3.6 let's you give loads a bytearray! but dumps still produces str even tho https://docs.python.org/3/library/json.html#character-encodings clearly states JSON should be UTF-8 - not flipping internal unicode string object sthat you can't jz;lskjdf fliping sending to other ocmputers! | 04:41 |
clayg | there is a but for this somewhere with the word "unfortuante" in there somewhere | 04:42 |
*** dmorita has quit IRC | 04:44 | |
clayg | i knew it! https://bugs.python.org/issue19837 | 04:46 |
clayg | Nick feels me | 04:47 |
patchbot | Error: You don't have the admin capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified. | 04:47 |
clayg | shut up patchbot not in the mood | 04:47 |
mattoliverau | clayg: are you sure your pen name isn't Nick.. that was your arguement, and looks like it's still in discussion. We might have to throw and encode everywhere we dumps or wrap up our own. 'from swift.common.utils import json' :( | 04:55 |
*** dja has quit IRC | 05:16 | |
*** dmorita has joined #openstack-swift | 05:20 | |
*** dmorita has quit IRC | 05:25 | |
*** adriant has quit IRC | 05:39 | |
*** SJ has joined #openstack-swift | 05:58 | |
*** SJ has quit IRC | 06:00 | |
*** sams-gleb has joined #openstack-swift | 06:08 | |
*** sams-gleb has quit IRC | 06:12 | |
*** pcaruana has joined #openstack-swift | 06:43 | |
clayg | mattoliverau: maybe six will do it for us? https://github.com/benjaminp/six/issues/185 | 06:53 |
*** ChubYann has quit IRC | 06:54 | |
*** dja has joined #openstack-swift | 06:55 | |
*** furlongm has quit IRC | 07:29 | |
*** tanee_away is now known as tanee | 07:29 | |
*** sams-gleb has joined #openstack-swift | 07:31 | |
*** furlongm has joined #openstack-swift | 07:33 | |
*** rcernin has joined #openstack-swift | 07:40 | |
*** tesseract has joined #openstack-swift | 07:43 | |
*** jlvillal has quit IRC | 07:51 | |
openstackgerrit | Merged openstack/swift master: Global EC Under Development Documentation https://review.openstack.org/432513 | 08:12 |
*** jlvillal has joined #openstack-swift | 08:12 | |
*** chlong_ has quit IRC | 08:18 | |
*** sanchitmalhotra has quit IRC | 08:27 | |
*** sanchitmalhotra has joined #openstack-swift | 08:28 | |
*** amoralej|off is now known as amoralej | 08:31 | |
*** geaaru has joined #openstack-swift | 08:41 | |
*** kei_yama has quit IRC | 08:44 | |
*** openstackgerrit has quit IRC | 09:03 | |
*** hseipp has joined #openstack-swift | 09:10 | |
*** openstackgerrit has joined #openstack-swift | 09:15 | |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift master: Optimize ec duplication and its md5 hashing https://review.openstack.org/421673 | 09:15 |
*** cbartz has joined #openstack-swift | 09:16 | |
clayg | kota_: didn't you find the bug with mutliple ec polices that made the reconstructor stats errors? | 09:30 |
clayg | https://github.com/openstack/swift/blob/b7e0494be295d604aeb2e97e8a0708503c80bbb5/swift/obj/reconstructor.py#L648 | 09:31 |
clayg | ^ that worries me a lot now that collect_parts shuffles all of the policies together | 09:31 |
clayg | doing it right: https://github.com/openstack/swift/blob/b7e0494be295d604aeb2e97e8a0708503c80bbb5/swift/obj/reconstructor.py#L272 | 09:32 |
kota_ | clayg: i had, that was from the stats calculation causes zero division | 09:32 |
clayg | doing it wrong: https://github.com/openstack/swift/blob/b7e0494be295d604aeb2e97e8a0708503c80bbb5/swift/obj/reconstructor.py#L534 | 09:32 |
clayg | not sure it'd be so trivial to make a unittest - but I think the bug is there | 09:33 |
kota_ | looking | 09:33 |
clayg | i'm not gunna fix it just this moment - but I'd like to get it on launchpad now that I've seen it :\ | 09:33 |
*** murugesh has joined #openstack-swift | 09:34 | |
murugesh | Hi There | 09:34 |
murugesh | This time i get 503 service unavailable error | 09:35 |
clayg | I just hate filing bugs "I looked at code and saw a bug" | 09:35 |
murugesh | when i run below command on keystone node | 09:35 |
murugesh | swift stat --debug | 09:35 |
kota_ | clayg: let me make sure, now I've been filled up the node_index/frag_index semantics and need to try to re-install the reconstruction | 09:36 |
clayg | murugesh: answer is in the logs | 09:36 |
murugesh | http://paste.openstack.org/show/601892/ | 09:36 |
*** mvpnitesh has joined #openstack-swift | 09:36 | |
kota_ | so you're trying to shuffle the jobs accross the multiple ec policies and then remote_hashes uses self.headers which can be updated in process_job | 09:37 |
kota_ | right? clayg | 09:37 |
kota_ | sounds buggy if it was updated by different policy, when running jobs in parallel. | 09:38 |
murugesh | @Clayg, | 09:38 |
murugesh | here is the log | 09:38 |
murugesh | proxy-server: Unable to validate token: Identity server rejected authorization necessary to fetch token data | 09:38 |
murugesh | What wrong i am doing here?? | 09:39 |
kota_ | murugesh: sounds like you may look at keystone setting in your proxy-server.conf | 09:40 |
kota_ | may need to | 09:40 |
murugesh | Sure +kota, I am using openstack-mitaka version. can you refer me how proxy-server.conf should be?? | 09:42 |
mvpnitesh | hi , i'm trying to do a devstack setup of swift and i'm getting the following error " Unable to locate config for container-reconciler , Starting account-server...(/etc/swift/account-server/1.conf), liberasurecode[8512]: liberasurecode_backend_open: dynamic linking error libJerasure.so.2: cannot open shared object file: No such file or directory" | 09:43 |
mvpnitesh | can any one help me in resolving this issue | 09:43 |
kota_ | murugesh: depends on your keystone api version (v2 or v3) but... | 09:44 |
kota_ | murugesh: current default setting is https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L325-L340 | 09:45 |
clayg | mvpnitesh: the liberasurecode errors are benign | 09:45 |
clayg | murugesh: my keystone settings look like this -> https://gist.github.com/clayg/c414b01f6e1202a2050ea8c0eb96efa0 | 09:47 |
*** raymond__ has joined #openstack-swift | 09:47 | |
murugesh | +kota_ thanks for your reference | 09:48 |
murugesh | and my keystone version is 'python-keystoneclient.', DeprecationWarning) 2.3.1 | 09:48 |
murugesh | keystone --version is the right command to check right?? | 09:49 |
raymond__ | Hi, are questions about cluster issues okay to discuss here or should help requests be directed to another channel? | 09:49 |
murugesh | Thanks clayg will take look on provided link | 09:49 |
clayg | kota_: re: self.headers and remote_hashes - sure looks like multiple policies could result in some cross contamination? | 09:50 |
clayg | kota_: do we just file the bug "two people agree code looked wrong"? | 09:50 |
clayg | raymond__: this is the spot for everything swift! You can talk about apple, taylor, w/e - as long as it's not sparrows or martins (touchy subject for notmyname) - swift's only! | 09:51 |
openstackgerrit | Mahati Chamarthy proposed openstack/swift master: Limit number of revert tombstone SSYNC requests https://review.openstack.org/439572 | 09:51 |
kota_ | could file but I'd like to add the status incomplete, not sure right now. | 09:51 |
kota_ | clayg:^^ | 09:51 |
clayg | mahatic_: !!!! i'm rarely online when you are | 09:51 |
clayg | 2am PST is where it's at - I should stay up late more | 09:51 |
murugesh | +kota&clayg, I am struggling to fix this issue from past three days | 09:52 |
murugesh | Kindly help me out | 09:52 |
*** cshastri has joined #openstack-swift | 09:52 | |
clayg | murugesh: sorry ment to say "keystone --version" is *not* the version of keystone; it's the version of python-keystone client | 09:52 |
kota_ | clayg: it looks 3 p.m. in Indea | 09:53 |
murugesh | #clayg, how can i check the keystone api version?? | 09:53 |
clayg | which... AFAIK is deprecated and they mostly use openstackclient for the commandline: https://github.com/openstack/python-keystoneclient#python-bindings-to-the-openstack-identity-api-keystone | 09:54 |
mvpnitesh | clayg: the error is " openstack object store account set --property Temp-URL-Key=password | 09:54 |
patchbot | Error: No closing quotation | 09:54 |
mvpnitesh | 2017-03-08 07:52:51.268 | Internal Server Error (HTTP 500) (Request-ID: tx6c67c03620e14568ae680-0058bfb817)" | 09:54 |
clayg | murugesh: i have no idea!? But i'm willing to learn! | 09:54 |
raymond__ | So we're having an issue with object deletion. We rely mainly on the object expirer, which is running as expected but no objects are ever actually deleted | 09:54 |
kota_ | murugesh: i also not have clear answer how to detect your keystone api version | 09:55 |
kota_ | murugesh: but | 09:55 |
clayg | murugesh: so on my machine `curl http://192.168.8.8/identity | python -m json.tool` show's me v2.0 is deprecated and apparently v3.8 api version is where it's at! | 09:55 |
raymond__ | On a DELETE of an expired object we get the 404 as expected, but the object server just states "ERROR Container update failed (saving for async update later): 404 response from <node>" | 09:55 |
kota_ | murugesh: the request to retrieve the token from your client seems to work, so checking os envrion could help you | 09:56 |
raymond__ | That aync update never seems to happen as the object is never removed from the container listing | 09:56 |
clayg | murugesh: the api version is maybe more useful for our context than the service version? | 09:56 |
murugesh | Thanks +kota | 09:56 |
kota_ | OS_IDENTITY_API_VERSION | 09:56 |
murugesh | #clayg my keystone API version is v3 | 09:57 |
mahatic_ | clayg: :D actually you're often online when I am ;) | 09:57 |
mahatic_ | it's another thing that I'm not so active on IRC in the first half | 09:57 |
clayg | mahatic_: yeah - you should just make more noise | 09:57 |
clayg | raymond__: that's interesting! is your object-expirer daemon logging anywhere? | 09:57 |
mahatic_ | okay, noted ;) | 09:57 |
clayg | mvpnitesh: do you only get 500's when you try to set account metadata? does *other* requests work? like object upload, or stat'ing he account? | 09:58 |
clayg | murugesh: how do you know your keystone api version is v3? My proxy-server.conf doesn't say v3 anywhere? I think the keystone client like auto negotiates the version or something insane? | 09:59 |
raymond__ | Yes, " object-expirer: Pass so far 78487s; 224319 objects expired (txn: tx73ee4a1f64cf4e3abbf10-0058bfd945)" | 10:00 |
raymond__ | It just keeps increasing in the number of objects expired every run | 10:00 |
raymond__ | It accounts for a large amount of our IOPs and we've had to throttle it to serial | 10:00 |
murugesh | #clayg, I just confirmed with my colleague | 10:01 |
raymond__ | Doing a manual DELETE of one of the expired objects doesn't ever remove it from the container (and I'm suspecting not from the disk) | 10:01 |
mvpnitesh | clayg: i'm also getting s-object failed to start , >/opt/stack/status/stack/s-container.pid; fg || echo "s-container failed to start" | | 10:01 |
clayg | mvpnitesh: ok that's probably related - if the container-server or object-server isn't running stuff should be pretty broken - can you start them!? | 10:03 |
clayg | raymond__: 200K objects to expire doesn't seem that terrible? | 10:05 |
mvpnitesh | <clayg>: i'm getting this error when i'm trying to do the devstack setup , i've tried to restart those services, it is saying they are not found | 10:07 |
clayg | who is saying who is not found? systemctl says a service named swift-object is not found? bash says a command named swift-object-server is not found? | 10:09 |
raymond__ | clayg: It does for our cluster size and it just constantly increases, we have objects with a 60 day TTL that are still in the container three months after expiry | 10:10 |
kota_ | murugesh: hmm... https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L191-L198 keystonemiddleware config at master looks a bit diffferent | 10:10 |
kota_ | from swift master | 10:11 |
raymond__ | It hasn't _ever_ decreased for the extent of our log collection | 10:11 |
clayg | raymond__: so I found this laying around https://gist.github.com/clayg/5ab6001c13a733ae23b0fdf905af2a60 | 10:12 |
clayg | if you don't have a better way to go poking at dot accounts - InternalClient is pretty good | 10:12 |
clayg | it needs/expects rings and stuff | 10:12 |
clayg | if you have some sort of super-admin-reseller-super-user token sort of thing baked into your auth system you may be able to get that to authorize you | 10:13 |
kota_ | no idea for now, on the keystone setting but definitely keystone rejected the token authentication from the log... | 10:13 |
murugesh | @+kota_, could you pls let me know what you want me to check?? | 10:13 |
*** mvpnitesh has quit IRC | 10:14 | |
clayg | raymond__: were you saying you tried to delete an object "manually" and that didn't remove it from the container listing? | 10:14 |
*** mvpnitesh has joined #openstack-swift | 10:14 | |
kota_ | the auth_token setting to request the token validation to *correct keystone server* which issued your token. | 10:15 |
raymond__ | clayg, yes and by "manually" I mean a DELETE verb HTTP request | 10:15 |
clayg | is that the case for *any* object - or just expired ones? When you say "manually" you just mean through a normal DELETE api request -> https://developer.openstack.org/api-ref/object-storage/?expanded=delete-object-detail#delete-object | 10:15 |
clayg | raymond__: is it only some containers? How many objects are in the container? > 100M? | 10:16 |
murugesh | +kota_,shall i paste my current [filter:authtoken] settings?? | 10:17 |
murugesh | which was specified in proxy-server.conf file | 10:17 |
kota_ | murugesh: it's ok but I'm not sure I can help you with the paste | 10:18 |
clayg | murugesh: you said your keystone was like mitaka or something right? I'm not sure who's going to have a mitaka keystone laying around. and historically that stuff has changed every release | 10:18 |
clayg | it's hard to keep up :\ | 10:18 |
murugesh | Just take a look on my authtoken settings in proxy-server.conf file | 10:19 |
raymond__ | clayg, some objects seem to work (non-expired) so I'm unsure if it's just some containers. Object per container is pretty consistent in the thousands range | 10:19 |
murugesh | http://paste.openstack.org/show/601898/ | 10:20 |
clayg | raymond__: and aside from the status line - the expirer isn't logging any *errors*? | 10:20 |
raymond__ | clayg, in this instance we're using swift as a content addressable store so our key distribution between containers is pretty consistent as the container and key is based on the SHA1 of the content | 10:20 |
clayg | raymond__: and what did you say the client response looked like? I would expect it 404 - but I would also expect it to drop a tombstone and update the row in the container. | 10:21 |
murugesh | clayg, Here we are using keystone service as httpd WSGI | 10:21 |
raymond__ | clayg: 204 in the case of a non-expired object, 404 in the case of the expired one | 10:21 |
clayg | murugesh: so that doesn't look anything like my authtoken config section that works in ocata/pike -> https://gist.github.com/clayg/c414b01f6e1202a2050ea8c0eb96efa0 | 10:22 |
raymond__ | clayg: tombstones (.ts) files are dropped as expected, though our auditor seems to be having issues with some of them as we're getting "object-server: ERROR Trying to audit /srv/node/sdb1/objects/372/b4a/5d1425792dd17a525056282e8abb6b4a: #012Traceback (most recent call last):#012 File "/usr/lib/python2.7/dist-packages/swift/obj/auditor.py", line 223, in failsafe_object_audit#012 self.object_audit(location)#012 File "/ | 10:22 |
patchbot | Error: No closing quotation | 10:22 |
clayg | murugesh: esp the auth_uri - i'm not sure when the /identity stuff got added; I think keystone listening on 5000 is deprecated or something? who knows | 10:23 |
raymond__ | clayg: on a number of our nodes (Swift-2.10), I noticed the 2.11 changelog is suppressing that error so we have an update planned | 10:23 |
clayg | raymond__: the interesting bit is at the end of the log message - and your irc message was truncated? | 10:23 |
murugesh | clayg, thanks then will change like you mentioned above and get back to you | 10:23 |
raymond__ | clayg: "line 271, in object_audit#012 ts = df._ondisk_info['ts_info']['timestamp']#012KeyError: 'ts_info" <-- last section of the stack trace | 10:24 |
clayg | oh, yeah - that was annoying | 10:24 |
clayg | ^ mahatic_ ;) | 10:24 |
kota_ | murugesh: or you may check the project_domain_id, user_domain_id, project_name, username, password setting with your collegue | 10:24 |
clayg | murugesh: maybe the auth_url!? | 10:25 |
kota_ | murugesh: Swift config example say *they must match the Keystone credentials for the swift service* | 10:25 |
clayg | raymond__: do you monitor async_pendings? didn't you say async_pendings? | 10:26 |
*** sams-gle_ has joined #openstack-swift | 10:27 | |
*** sams-gle_ has quit IRC | 10:28 | |
clayg | raymond__: so in my case a DELETE to an expired object returned 404 (as expected) - but it also overwrote the expired .data with a .ts and immediately updated the container | 10:29 |
clayg | so... that's how it supposed to work | 10:29 |
clayg | maybe we could try to figure out which part exactly isn't working in your system? | 10:29 |
*** sams-gleb has quit IRC | 10:30 | |
raymond__ | clayg, we seem to get the .ts but the container update seems to get "Container update failed (saving for async update later)" and never occurs | 10:30 |
clayg | does the .data get replaced with a .ts - can you lookup an expired object with swift-get-nodes and try to find the .data on disk (verify with swift-object info /path/to/timestamp.data) | 10:30 |
raymond__ | clayg, yeah I'll follow a transaction through | 10:30 |
clayg | ah! cool - or crazy - probably bad | 10:30 |
clayg | so... do you have a bunch of async_pendings? it's a tld next to accounts containers objects | 10:31 |
openstackgerrit | Alistair Coles proposed openstack/swift master: Add assertions to test_reconstructor test_get_response https://review.openstack.org/442485 | 10:31 |
clayg | you can check on them with recon I think? there's a recon cron thing that counts them up in the background I think | 10:31 |
acoles | clayg: I rolled your suggestions into patch 442485, thanks! | 10:32 |
patchbot | https://review.openstack.org/#/c/442485/ - swift - Add assertions to test_reconstructor test_get_resp... | 10:32 |
raymond__ | clayg, I appears recon wasn't ever set up. I'll have our IT do that and take a look at the async pending | 10:32 |
clayg | so when I break a container node and do a object update I can see a file like this: /srv/node1/sdb1/async_pending/3bb/83656d270fc11826b7aabf06bc5d93bb-1488968887.64846 | 10:33 |
clayg | if there's a bunch of files like that it normally means.... well it means the object server isn't succesffully getting updates to the container servers | 10:33 |
clayg | ... there's different reasons that may be the case | 10:34 |
clayg | it's the job of the object-updater to keep trying to send the updates to the container servers | 10:34 |
clayg | raymond__: you might see if the object-updater is logging anything useful (assuming you have some async pending container updates?) | 10:34 |
clayg | acoles: did you add in any [[]] * 10? that's my new thing that I do now | 10:35 |
clayg | just random | 10:35 |
clayg | throw it in every few lines | 10:35 |
acoles | hehe | 10:35 |
clayg | yeah, i don't test it anything - just throw that in the change and push over peoples patches | 10:36 |
acoles | https://github.com/openstack/swift/blob/master/REVIEW_GUIDELINES.rst#run-it | 10:37 |
acoles | clayg: ^^ :P | 10:37 |
acoles | I am ashamed that I got a pep8 error on another patch overnight - I am *sure* I always run pep8 before pushing! | 10:38 |
clayg | acoles: gd! so this is the follow up to the follow up to the change "inrecase test coverage for reconstructor" | 10:39 |
acoles | yep. everything is a follow up, nothing is new :) | 10:40 |
clayg | rofl | 10:40 |
acoles | We should have another tag: "Precursor-To:" then we can anticipate follow ups:) | 10:41 |
clayg | ZOMG | 10:41 |
acoles | "Causes-Bug:" | 10:42 |
*** cbartz has quit IRC | 10:44 | |
tone_zrt | Hi all, nice to join Swift family! | 10:45 |
tone_zrt | I'm new to Swift project. Hope to contribute more. | 10:45 |
tone_zrt | In the past several days, I met one problem when I tried to upload object to one container. | 10:46 |
tone_zrt | Please refer to https://ask.openstack.org/en/question/103741/upload-object-error-permission-denied-on-debian/ | 10:46 |
tone_zrt | I have no idea on the problem. Could anybody help me? | 10:46 |
tone_zrt | Thanks! | 10:46 |
*** dmorita has joined #openstack-swift | 10:47 | |
* clayg wishes he spent more time on ask :'( | 10:49 | |
clayg | stupid tpool_reraise | 10:51 |
*** dmorita has quit IRC | 10:52 | |
tone_zrt | clayg, thanks a lot! I did some debug, and foud the error is because the exception from xattr.setxattr() and xattr.getattr() | 10:53 |
clayg | tone_zrt: this is going to sound stupid, but honestly I think you'll save a lot of your time if you just pop the _finalize_put call out the tpool to preserve your traceback real quick: https://gist.github.com/clayg/ab205aefd407689acc4dfdf7f216765a | 10:54 |
clayg | ... in write_metadata? | 10:54 |
tone_zrt | yes | 10:54 |
tone_zrt | I will do the change and check the result | 10:55 |
clayg | maybe it's a selinux thing? | 10:56 |
tone_zrt | No, I don't enable selinux in Debian | 10:57 |
clayg | tone_zrt: so in my devstack config - my object server is configured to drop_privledges to ubuntu and my ring a device named 'sdb1' which is a directory in the object server's configured "devices" directory | 11:01 |
*** geaaru has quit IRC | 11:01 | |
clayg | the sdb1 directory (and all directories under that) are 755 ubuntu:ubuntu | 11:02 |
tone_zrt | 0755 should be the dafault setting. | 11:03 |
clayg | the ask question doesn't show `ls -alhF /opt/stack/data/swift/drives/sdb1/1` | 11:03 |
kota_ | oh, acoles is back to online while I was fighting frag_index world | 11:04 |
tone_zrt | because i debug the problem, i changed to 0777 | 11:04 |
clayg | did you create a user 'swift'? did you fix the storage server configs's "user = swift" setting? | 11:04 |
tone_zrt | I used "stack" | 11:04 |
clayg | 777 should fix most permission deneid errors :D | 11:05 |
acoles | kota_: hi! what's up in frag_index world? or do you mean you renamed all our chunk_indexes?? | 11:05 |
clayg | acoles: do you mean node_index? | 11:05 |
kota_ | wait a sec, i'll push my idea soon, it's still not sure I'm doing right thing though. | 11:05 |
acoles | backend_index?? | 11:05 |
clayg | backend_chunk_node_or_fragment_id | 11:05 |
acoles | kota_: oh i thought you had pushed a new version | 11:06 |
clayg | he had! it's great! let's ship that one | 11:06 |
kota_ | acoes: i pushed a new version and now trying more improvement | 11:06 |
clayg | ok, so I think mvpnitesh got lost murugesh is playing with keystone configs raymond__ is going to track async pendings tone_zrt is going chmod 777 -r / and .... | 11:07 |
acoles | kota_: I'm sure it's great...go home, sleep :) | 11:07 |
clayg | kota_ & acoles will fix the rest? mahatic_ will +A | 11:07 |
clayg | sounds like ya'll got it covered | 11:08 |
clayg | g'night | 11:08 |
acoles | clayg: good night | 11:08 |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift master: Eliminate node_index in Putter https://review.openstack.org/443072 | 11:08 |
kota_ | acoles: that one | 11:08 |
acoles | "Eliminate" yay! | 11:08 |
kota_ | clayg: good night! | 11:08 |
murugesh | Hi clayg | 11:14 |
murugesh | You there?? | 11:14 |
*** NM has joined #openstack-swift | 11:14 | |
murugesh | +kota_ | 11:14 |
murugesh | This time i get 500 internal server error | 11:15 |
*** hseipp has quit IRC | 11:15 | |
murugesh | when i run swift stat --debug on my keystone node | 11:15 |
acoles | tone_zrt: crazy question, but does the object you are uploading have content or is it zero-sized? It's just curious that the permission error comes when setting the xattrs, which is after any object data is written to the file | 11:15 |
kota_ | murugesh: that sounds progressed? | 11:16 |
tone_zrt | The size of the file is neat 1.6MB | 11:16 |
murugesh | and my swift log says the following | 11:16 |
acoles | tone_zrt: ok. | 11:16 |
tone_zrt | I did the complete thing in Ubuntu, everything is OK | 11:16 |
tone_zrt | On Debian, it is failed. :( | 11:17 |
acoles | yeah, I read the report on ask :/ | 11:17 |
murugesh | +kotla_, the log is | 11:17 |
murugesh | http://paste.openstack.org/show/601905/ | 11:17 |
murugesh | and have changed the authtoken settings in proxy-server.conf as clayg mentioned above | 11:18 |
acoles | murugesh: is that log from proxy-server? | 11:19 |
murugesh | Yeah +acoles from /var/log/swift/swift.log | 11:20 |
*** catintheroof has joined #openstack-swift | 11:20 | |
kota_ | acoles: i think so, that log is from auth_token | 11:20 |
murugesh | and my current authtoken setting is follows | 11:20 |
murugesh | http://paste.openstack.org/show/601907/ | 11:20 |
kota_ | murugesh: i'm surplised at the memcach module is not in requirements.txt!?!? | 11:21 |
kota_ | ah, it should be to acoles | 11:21 |
kota_ | murugesh: that log looks to say you need to install memcache | 11:21 |
kota_ | maybe, `sudo pip install python-memcached`? | 11:22 |
acoles | or check you have cache = swift.cache in authtoken config section | 11:23 |
acoles | see https://docs.openstack.org/developer/swift/overview_auth.html#configuring-swift-to-use-keystone | 11:23 |
kota_ | acoles: yes, we need it but it looks to fail at *import*? | 11:23 |
murugesh | Yes already memcached service is up and running | 11:24 |
murugesh | http://paste.openstack.org/show/601908/ | 11:24 |
kota_ | murugesh: i think the lack is python-memcached *client* to connect the memcahced service from your python environ | 11:25 |
kota_ | to say correctly, from your swift (running in python) | 11:25 |
murugesh | Is there any remedy steps for this?? | 11:26 |
*** mvpnitesh has quit IRC | 11:26 | |
*** mvpnitesh has joined #openstack-swift | 11:26 | |
kota_ | murugesh: did you try to install python-memcahced (client) for your swift? | 11:28 |
acoles | swift uses its own memcached client, you need the 'cache' middleware in proxy pipeline and also that config option ^^ in authtoken middleware section. | 11:28 |
acoles | https://github.com/openstack/swift/blob/4ee20dba482e8f8e4b9ea574428c72c5728a10a1/etc/proxy-server.conf-sample#L99-L99 | 11:28 |
kota_ | acoles: oh, that's correct | 11:28 |
*** psachin has quit IRC | 11:28 | |
kota_ | acoles: nice follow, i was missing that. | 11:29 |
*** sams-gleb has joined #openstack-swift | 11:29 | |
acoles | otherwise, if you don't set cache=swift.cache, then yes I guess you'll need python memcache installed since authtoken will, I presume, default to use that. But I have never run authtoken that way | 11:30 |
kota_ | and auth_token attempts to use another memcached client if no cache setting in the swift pipeline | 11:30 |
kota_ | am i right? | 11:30 |
acoles | kota_: I guess so? the traceback suggests that :) | 11:30 |
acoles | murugesh: can you confirm you have cache=swift.cache in authtoken config section? | 11:30 |
* acoles needs coffee, bbiab | 11:32 | |
murugesh | Yes +acoles i have mentioned this | 11:32 |
murugesh | cache=swift.cache in proxy-server.conf | 11:32 |
*** JimCheung has joined #openstack-swift | 11:33 | |
*** sams-gleb has quit IRC | 11:34 | |
*** psachin has joined #openstack-swift | 11:36 | |
murugesh | +acoles, when i remove authtoken and keystoneauth from pipeline i am getting proper response | 11:36 |
*** JimCheung has quit IRC | 11:37 | |
*** cbartz has joined #openstack-swift | 11:38 | |
murugesh | Here is the successful output | 11:40 |
murugesh | http://paste.openstack.org/show/601909/ | 11:40 |
*** NM has quit IRC | 11:44 | |
*** NM has joined #openstack-swift | 11:51 | |
*** tdasilva has quit IRC | 11:52 | |
acoles | murugesh: and do you have the cache middleware in the proxy pipeline? to the left of authtoken? | 11:53 |
murugesh | Yes +acoles | 11:53 |
acoles | could you paste your proxy-server conf to pastebin | 11:56 |
*** vint_bra has joined #openstack-swift | 11:58 | |
*** vint_bra has quit IRC | 11:59 | |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift master: Eliminate node_index in Putter https://review.openstack.org/443072 | 12:05 |
* kota_ is going to back home | 12:06 | |
kota_ | hope, murugesh's problem resolved | 12:06 |
murugesh | 2mins +acoles | 12:07 |
murugesh | Nope +kota_ | 12:07 |
kota_ | sorry, it's close to the time I have to go home. | 12:07 |
murugesh | Its ok +kota thank you so much for your help with this | 12:09 |
murugesh | +acoles, here is my proxy-server.conf file pastebin link | 12:10 |
murugesh | http://paste.openstack.org/show/601912/ | 12:10 |
*** hseipp has joined #openstack-swift | 12:11 | |
acoles | murugesh: in authtoken section you have memcached_servers = 127.0.0.1:11211 - the swift sample conf does not show memcached_servers option being set. IDK but perhaps that option is overriding cache=swift.cache and causing authtoken to try to use a memcache client??? try removing that. | 12:17 |
acoles | murugesh: TBH I am guessing - otherwise the config looks ok | 12:17 |
acoles | you have memcache_servers set in the cache section, which is correct | 12:17 |
acoles | authtoken is maintained by another project (keystone) so I am not hugely familiar with it | 12:18 |
openstackgerrit | Christopher Bartz proposed openstack/python-swiftclient master: ISO 8601 timestamps for tempurl https://review.openstack.org/423377 | 12:18 |
murugesh | yes +acoles, i have removed it and now im getting following error when i run swift stat --debug in keystone node | 12:20 |
murugesh | http://paste.openstack.org/show/601915/ | 12:21 |
murugesh | sorry above link is incorrect ^^ | 12:23 |
acoles | murugesh: that is failing to get a token form keystone | 12:23 |
acoles | ah, NM | 12:23 |
murugesh | I get error 503 service unavailable | 12:23 |
acoles | before it was 500, correct? so did you see a different error in proxy log? | 12:24 |
acoles | murugesh: sorry this is proving so difficult for you BTW | 12:24 |
murugesh | I see this error in /var/log/swift/swift.log | 12:25 |
murugesh | http://paste.openstack.org/show/601916/ | 12:25 |
*** gkadam has quit IRC | 12:25 | |
murugesh | Yes +acoles, its eating up my head from past three days | 12:25 |
acoles | murugesh: ok, some progress, authtoken middleware is now making request to keystone to validate your token, but that request is being denied because the authtoken middleware credentials aren't being accepted | 12:28 |
acoles | so you see Identity server rejected authorization in the logs | 12:28 |
murugesh | Yeah that's correct | 12:29 |
acoles | you need to double check that the options you have in authtoken are correct - auth_url, password, somains etc | 12:29 |
acoles | domains | 12:29 |
*** catinthe_ has joined #openstack-swift | 12:29 | |
acoles | in particular there is often confusion around the domain id | 12:29 |
*** catintheroof has quit IRC | 12:29 | |
*** david-lyle has quit IRC | 12:30 | |
*** catintheroof has joined #openstack-swift | 12:30 | |
*** catinthe_ has quit IRC | 12:31 | |
acoles | it's along story sadly, see https://bugs.launchpad.net/swift/+bug/1604674, but double check the id and/or name of your default domain | 12:31 |
openstack | Launchpad bug 1604674 in OpenStack Object Storage (swift) "Doc error in Auth Overview for specifying keystone domain " [Undecided,In progress] - Assigned to Alistair Coles (alistair-coles) | 12:31 |
openstackgerrit | Juan Antonio Osorio Robles proposed openstack/python-swiftclient master: Use generic keystone client instead of versioned one https://review.openstack.org/443104 | 12:32 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift master: Updated from global requirements https://review.openstack.org/88736 | 12:33 |
*** david-lyle has joined #openstack-swift | 12:33 | |
acoles | murugesh: one way to troubleshoot the authtoken credentials is to verify that you can use keystone client to reach keystone with the same credentials | 12:33 |
acoles | murugesh: but good news is we seem to have fixed the first problem :) | 12:34 |
murugesh | Yeah +acoles | 12:35 |
murugesh | so shall i change the domain ID and domain name values?? | 12:35 |
*** gkadam has joined #openstack-swift | 12:36 | |
*** oshritf has joined #openstack-swift | 12:36 | |
acoles | you should check what they should be first :) or i guess you could take a guess - the most common issue is that the domain *name* is default rather then its *id* being default, so changing the conf to 'user_domain_name = default' and 'project_domain_name = default' *may* help but that is a total guess | 12:37 |
acoles | you should be able to list your domains from keystone using openstack client | 12:38 |
acoles | something like "openstack domains list" but I don't remember for sure | 12:38 |
acoles | unfortunately my keystone service is down at the moment so I can't reproduce it | 12:39 |
murugesh | yes i got output on running the openstack domains list" in my keystone node | 12:41 |
acoles | murugesh: also you have /identity at end of auth_url which I have not seen before. | 12:42 |
acoles | murugesh: ok, so do you have a domain named 'default', if so what is its id? | 12:42 |
openstackgerrit | kavitha h r proposed openstack/python-swiftclient master: Python 3.4 support is removed https://review.openstack.org/443110 | 12:42 |
murugesh | i removed /identity in auth_uri and url | 12:42 |
openstackgerrit | Juan Antonio Osorio Robles proposed openstack/python-swiftclient master: Use generic keystone client instead of versioned one https://review.openstack.org/443104 | 12:42 |
murugesh | [root@cfvsenctrl121 ~]# openstack domain list +----------------------------------+---------+---------+----------------+ | ID | Name | Enabled | Description | +----------------------------------+---------+---------+----------------+ | 55035e2cb3db412db67bdc81b9fd3d6b | default | True | Default Domain | +----------------------------------+---------+---------+----------------+ | 12:43 |
*** gkadam has quit IRC | 12:43 | |
acoles | murugesh: great. so I think you need this change in authtoken section: 'user_domain_name = default' and 'project_domain_name = default' | 12:44 |
acoles | i.e. change _id for _name | 12:45 |
openstackgerrit | Alistair Coles proposed openstack/swift master: Support EC policy for in process functional tests https://review.openstack.org/442749 | 12:46 |
*** mvpnitesh has quit IRC | 12:48 | |
murugesh | Already it is _id in authtoken section | 12:49 |
murugesh | http://paste.openstack.org/show/601922/ | 12:49 |
acoles | murugesh: yes, but make it _name=default, not _id | 12:52 |
murugesh | Sure | 12:53 |
acoles | the domain list is telling you that your "Default domain" has name default, id some random UUID | 12:53 |
acoles | it used to be that the id was default. | 12:53 |
acoles | :/ it's so confusing | 12:53 |
murugesh | +acoles, no luck | 12:58 |
murugesh | since i changed _id as _name | 12:59 |
acoles | :( same message in proxy log? 503? | 13:00 |
murugesh | Yes same | 13:01 |
murugesh | Unable to validate token: Identity server rejected authorization necessary to fetch token data | 13:01 |
murugesh | proxy-server: Identity server rejected authorization | 13:01 |
murugesh | proxy-server: Identity response: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}} | 13:01 |
acoles | murugesh: hmmm. so my best suggestion now is to use openstack cli to verify that you can talk to keystone using the exact same credentials as you have in the authtoken conf. i.e. using username swift, project service, the domain names, and your password etc | 13:03 |
acoles | for some reason authtoken is being denied when it tries to validate user token | 13:04 |
acoles | murugesh: I have to go for a while. you may also get help in #openstack-keystone for the authtoken/keystone interactions | 13:05 |
murugesh | Sure +acoles thank you very much for your help | 13:06 |
murugesh | I really appreciate your patience | 13:06 |
*** geaaru has joined #openstack-swift | 13:06 | |
acoles | murugesh: NP, I hope it is now just a case of getting the correct credentials/options for authtoken, and having swift service correctly setup in keystone | 13:07 |
murugesh | yeah will try to get help from #openstack-keystone | 13:08 |
*** psachin has quit IRC | 13:13 | |
*** amoralej is now known as amoralej|lunch | 13:24 | |
*** links has quit IRC | 13:25 | |
*** cshastri has quit IRC | 13:28 | |
*** klamath has joined #openstack-swift | 13:29 | |
*** klamath has quit IRC | 13:29 | |
*** oshritf has quit IRC | 13:30 | |
*** klamath has joined #openstack-swift | 13:30 | |
*** oshritf has joined #openstack-swift | 13:30 | |
*** psachin has joined #openstack-swift | 13:30 | |
*** sams-gleb has joined #openstack-swift | 13:31 | |
*** dmorita has joined #openstack-swift | 13:33 | |
*** sams-gleb has quit IRC | 13:35 | |
*** oshritf has quit IRC | 13:36 | |
*** dmorita has quit IRC | 13:37 | |
*** mvk has quit IRC | 13:40 | |
*** tongli has joined #openstack-swift | 13:41 | |
openstackgerrit | Juan Antonio Osorio Robles proposed openstack/python-swiftclient master: Use generic keystone client instead of versioned one https://review.openstack.org/443104 | 13:44 |
*** psachin has quit IRC | 13:46 | |
*** psachin has joined #openstack-swift | 13:46 | |
*** sileht has quit IRC | 13:51 | |
openstackgerrit | Juan Antonio Osorio Robles proposed openstack/python-swiftclient master: Use generic keystone client instead of versioned one https://review.openstack.org/443104 | 13:52 |
*** sileht has joined #openstack-swift | 13:55 | |
*** thiago_ has joined #openstack-swift | 13:57 | |
*** thiago_ is now known as Guest84307 | 13:57 | |
*** Guest84307 is now known as tdasilva | 13:57 | |
*** ChanServ sets mode: +v tdasilva | 13:58 | |
*** dja has quit IRC | 13:58 | |
*** sileht has quit IRC | 13:59 | |
*** tdasilva has quit IRC | 14:03 | |
*** tdasilva has joined #openstack-swift | 14:04 | |
*** sileht has joined #openstack-swift | 14:04 | |
*** sileht has quit IRC | 14:04 | |
*** murugesh has quit IRC | 14:05 | |
*** sileht has joined #openstack-swift | 14:08 | |
*** sileht has quit IRC | 14:08 | |
*** sileht has joined #openstack-swift | 14:10 | |
*** sileht has quit IRC | 14:11 | |
*** sileht has joined #openstack-swift | 14:26 | |
*** amoralej|lunch is now known as amoralej | 14:26 | |
*** tongli has quit IRC | 14:30 | |
*** dja has joined #openstack-swift | 14:39 | |
*** sileht has quit IRC | 14:47 | |
*** chlong_ has joined #openstack-swift | 14:48 | |
*** dewanee has joined #openstack-swift | 14:49 | |
dewanee | hello everyone | 14:49 |
dewanee | some time ago we uploaded by mistake an old version of the rings to the swift cluster and eventually the node got rebooted | 14:51 |
dewanee | anyone cares to chip in as: what now? | 14:51 |
dewanee | the main differences between the two versions are the weights of one node using erasure coding | 14:52 |
*** sileht has joined #openstack-swift | 14:54 | |
*** sileht has quit IRC | 14:54 | |
*** dja has quit IRC | 14:55 | |
openstackgerrit | Alistair Coles proposed openstack/swift master: Document SAIO rsync service setup for ubuntu 16 https://review.openstack.org/443162 | 14:57 |
*** mvk has joined #openstack-swift | 15:03 | |
openstackgerrit | Alistair Coles proposed openstack/swift master: Document SAIO rsync service setup for ubuntu 16 https://review.openstack.org/443162 | 15:07 |
*** tdasilva- has joined #openstack-swift | 15:08 | |
*** _JZ_ has joined #openstack-swift | 15:09 | |
*** jaosorior has joined #openstack-swift | 15:11 | |
*** sileht has joined #openstack-swift | 15:13 | |
*** tdasilva- has quit IRC | 15:13 | |
*** tdasilva- has joined #openstack-swift | 15:28 | |
*** sams-gleb has joined #openstack-swift | 15:33 | |
*** links has joined #openstack-swift | 15:36 | |
*** sams-gleb has quit IRC | 15:37 | |
*** tanee is now known as tanee_away | 15:40 | |
openstackgerrit | Monty Taylor proposed openstack/swift master: Replace references to swift.openstack.org https://review.openstack.org/443190 | 15:48 |
openstackgerrit | Andreas Jaeger proposed openstack/python-swiftclient master: Change swift.o.o URL https://review.openstack.org/443198 | 15:54 |
jaosorior | cschwede: could you check this out if you get a chance https://review.openstack.org/#/c/443104/ ? | 15:55 |
patchbot | patch 443104 - python-swiftclient - Use generic keystone client instead of versioned one | 15:55 |
*** rcernin has quit IRC | 15:58 | |
*** Jeffrey4l_ is now known as Jeffrey4l | 15:59 | |
*** joeljwright has joined #openstack-swift | 16:00 | |
*** ChanServ sets mode: +v joeljwright | 16:00 | |
*** sams-gleb has joined #openstack-swift | 16:08 | |
cschwede | jaosorior: sure, i'll have a look at that today | 16:14 |
jaosorior | cschwede: thanks | 16:14 |
*** Jeffrey4l has quit IRC | 16:15 | |
*** Jeffrey4l_ has joined #openstack-swift | 16:15 | |
*** jaosorior has quit IRC | 16:24 | |
openstackgerrit | Mahati Chamarthy proposed openstack/swift master: Limit number of revert tombstone SSYNC requests https://review.openstack.org/439572 | 16:35 |
*** chsc has joined #openstack-swift | 16:39 | |
*** chsc has joined #openstack-swift | 16:39 | |
mahatic_ | clayg: dropped off early today. didn't notice gate failure! | 16:39 |
notmyname | good morning | 16:43 |
timburke | clayg: json.dumps always gives you native strings because it's fundamentally a textual format. pickle uses a binary format, so it needs binary data. makes about as much sense as anything to do with strings in the py2-py3 transition... | 16:45 |
* notmyname is a little sad that it seems infra want's to remove swift.openstack.org | 16:49 | |
openstackgerrit | Alistair Coles proposed openstack/swift master: Test EC chunk_transformer with larger input chunks https://review.openstack.org/442108 | 16:51 |
mahatic_ | clayg: re: "line 271, in object_audit#012 ts = df._ondisk_info['ts_info']['timestamp']#012KeyError: 'ts_info" oops? But don't notice any change suppressing it on master. I still see that bit | 16:53 |
* mahatic_ is calling it a night | 16:54 | |
*** links has quit IRC | 16:55 | |
clayg | mahatic_: nah, you fixed that? something with DiskFileExpired or Deleted or something? | 16:56 |
clayg | dewanee: that should be fine? I mean some rebalance traffic probably started - some disks might be more full than others - that's not great - but just push out new/better rings - it'll probably start to work itself out | 16:57 |
*** winggundamth has quit IRC | 16:57 | |
mahatic_ | clayg: right DiskFileDeleted. I thought, that also started leaving some trace back that I wasn't aware of :D what a relief! | 17:00 |
mahatic_ | clayg: you're back online in 6hrs? :O | 17:02 |
clayg | kauphy is superior alternative to zzz | 17:03 |
clayg | notmyname: don't be sad! they want to remove ci.openstack.org too! What they *need* id a DNSaaS project that's widely deployed and maintained so that they can automate and scale self-service horizontal and vertical per-project vanity domain provisioning and maintenance | 17:05 |
clayg | notmyname: you should write spec and work on it! that sort of cross project work is what makes OpenStack *one* project | 17:06 |
notmyname | nah, I was thinking of just being sad about how things aren't the way they used to be | 17:08 |
clayg | oh... yeah that makes sense too | 17:09 |
dewanee | clayg, I am not sure I understood completely how rings works when rebalancing is involved | 17:09 |
dewanee | I'll clarify: 2 rings one the "parent" of the other | 17:10 |
notmyname | magic | 17:10 |
dewanee | the parent got pushed by mistake after all the data movement triggered by the rebalanced "child" was complete | 17:10 |
* notmyname is not being helpful this morning in any irc channel | 17:10 | |
clayg | notmyname: rings are not magic pixy dust that transport data ;) | 17:11 |
dewanee | now assuming I have the same ring running on both server (really small deployment) | 17:11 |
notmyname | clayg: that's tsync, right? | 17:12 |
clayg | dewanee: do you still have the child ring? | 17:12 |
dewanee | yes | 17:12 |
clayg | notmyname: tsync is not magic pixy dust that transport data ;) | 17:12 |
notmyname | lol | 17:12 |
clayg | no magic pixy dust! | 17:12 |
dewanee | should I rebalance that one if I need to change weights? or the current runnign one? | 17:12 |
clayg | dewanee: I mean I hate to sound flippant - but just deploy the child rings out - it'll be fine | 17:12 |
notmyname | dewanee: when you say parent and child, is the parent the first one and the child is after a rebalance? | 17:13 |
clayg | oh... do you *have* weights that need to change? | 17:13 |
dewanee | child is after weight change + rebalance | 17:13 |
notmyname | kk thanks | 17:13 |
dewanee | we were taking all the erasure coded stuff out of a node | 17:13 |
clayg | dewanee: the problem is that it takes a long time for the EC reconstructor to get the on-disk state to match the in-ring state | 17:14 |
clayg | like... a *long* time :\ | 17:14 |
dewanee | ok so let's ask the million dollar question: | 17:14 |
dewanee | can data be lost messing up with rings as we unwillingly did | 17:15 |
dewanee | ? | 17:15 |
clayg | dewanee: no, not really | 17:15 |
dewanee | considering the modification regarded the weight change of the object ring in erasure coding | 17:16 |
clayg | I mean I'm trying to think how that might happen - I can't think of anything - even unlikely | 17:16 |
clayg | dewanee: the thing with rings is they're just a replica2part2dev table - all the replicas of all the parts have to have a device_id filled in the table - *which* device_id can change depending on lots of builder options | 17:16 |
dewanee | so the node that had all the segments and find itself having too many of them according to the ring won't simply delete them | 17:16 |
clayg | but all the replicas of all the parts have to be assigned | 17:16 |
clayg | when a node picks up a ring ... if it seems it has a part-replica that isn't assigned to it - it tries to move it to where it goes | 17:17 |
clayg | simple | 17:17 |
dewanee | clayg, you are being really helpful so let me ask one more thing | 17:17 |
clayg | dewanee: no, only after it moves it (successfull) to the new location will it delete it | 17:17 |
dewanee | ok | 17:18 |
clayg | it's acctually *really* common to *over replicate* during a rebalance | 17:18 |
clayg | like you have to have the data in 4 places before you can have it only 3 again | 17:18 |
*** tesseract has quit IRC | 17:18 | |
clayg | or in EC you'll have two copies of a fragment that moved before you have only one copy of each fragment | 17:19 |
dewanee | it is my understading that swift won't consider ring that have a timestamp prior to the current loaded ring file | 17:19 |
dewanee | but I assume that when the node gets rebooted/service restarted it will use what finds on disk. Is that correct? | 17:19 |
*** JimCheung has joined #openstack-swift | 17:20 | |
clayg | rings have timestamps? you mean like mtime? does that get preserved if you idk, rsync your ring to your servers maybe? | 17:20 |
dewanee | we use a deployment system that in principle preserve those | 17:20 |
dewanee | I meant mtime , yes | 17:21 |
clayg | dewanee: yeah I guess that's true - i've never really consider the implications there - definately on restart the reload machineary is just going to use whatever ring it finds | 17:21 |
*** psachin has quit IRC | 17:22 | |
clayg | that's kind of a weird artificat/side-effect - it wasn't really by design - it's just checking a files mtime with stat is cheaper than .... idk doing some sort of checksum to see "has the ring changed" | 17:22 |
dewanee | ok so sorry for not having pointed out earlier but the reason I am investigating all this is because we have a 80k replication failures on nodes that appears perfectly sane | 17:22 |
dewanee | I thought that might had something to do with this ring mess we did\ | 17:23 |
clayg | the idea is just that running services will pick up "the new" ring after ring_check_interval - no one was trying to prevent an older mtime from being reloaded - I sorta thing that is maybe a surprising behavior in practice that could probably be considerd a bug :\ | 17:23 |
clayg | dewanee: maybe it's trying to push data to a node that's full? | 17:23 |
dewanee | I wish | 17:23 |
*** vint_bra has joined #openstack-swift | 17:24 | |
clayg | dewanee: it could also just be getting rejected for concurrency a lot | 17:24 |
dewanee | ok I'll look into those. It would surprise me as the service has really few users/objects | 17:25 |
dewanee | thanks meanwhile! time for me to go. | 17:25 |
notmyname | nah, the replication concurrency. | 17:25 |
notmyname | not client | 17:26 |
dewanee | mmh that could very well be now that you mention it. we had problem with the reconstructor eating up memory and taking an unbelievable amount of time to reconstruct 120 disks | 17:27 |
dewanee | so we spawned a few with a more limited scope | 17:27 |
dewanee | let's say 12 disks | 17:27 |
notmyname | 120 disks. on one server? or in the cluster total? | 17:27 |
dewanee | on one unfortunately | 17:27 |
clayg | please be in one server! | 17:27 |
clayg | yes yes yes! | 17:28 |
notmyname | lol | 17:28 |
clayg | lol | 17:28 |
dewanee | ahah | 17:28 |
dewanee | not following | 17:28 |
notmyname | dewanee: congrats! you win the award of where swift reconstructor is most terrible! | 17:28 |
dewanee | we just have 2 nodes right now so not that many disks | 17:28 |
notmyname | dewanee: clayg has been ranting about this in the office for quite a while and is currently working on this exact problem :-) | 17:29 |
*** hseipp has quit IRC | 17:29 | |
dewanee | eheh we have been thinking of splitting the enclosure between two nodes for some time now | 17:29 |
dewanee | so 60 disks would be more manageable | 17:29 |
notmyname | 10 would be better ;-) | 17:29 |
dewanee | really? | 17:29 |
clayg | dewanee: get a newer swift that supports reconstructor handoffs_*only* and try this https://gist.github.com/clayg/01e063655c250d6808e2b341ab65594c | 17:30 |
dewanee | than it would completely defy the usecase we bought the server for.. | 17:30 |
clayg | notmyname: stop telling people to use dinky storage nodes - not all software problems are "better" addressed with hardware solutions | 17:30 |
clayg | notmyname: see ^ | 17:30 |
notmyname | clayg: sorry. sorry, dewanee | 17:30 |
dewanee | ahaha okok | 17:30 |
notmyname | dewanee: do what clayg says | 17:31 |
dewanee | so thanks and talk to you again for sure! | 17:31 |
dewanee | going for real this time | 17:31 |
notmyname | good night | 17:31 |
notmyname | clayg: it's like good cop, bad cop | 17:32 |
clayg | lol | 17:32 |
clayg | i guess I better typey typey | 17:32 |
*** cbartz has left #openstack-swift | 17:39 | |
openstackgerrit | Romain LE DISEZ proposed openstack/swift master: Use real mocking in unit tests test_cname_lookup https://review.openstack.org/443248 | 17:40 |
openstackgerrit | Romain LE DISEZ proposed openstack/swift master: Allow to configure the nameservers in cname_lookup https://review.openstack.org/435768 | 17:40 |
*** dmorita has joined #openstack-swift | 17:45 | |
clayg | kota_: I went ahead and filed lp bug #1671180 | 17:46 |
openstack | Launchpad bug 1671180 in OpenStack Object Storage (swift) "Multiple EC policies can cause reconstructor to get wrong remote suffixes" [Undecided,New] https://launchpad.net/bugs/1671180 | 17:46 |
*** dmorita has quit IRC | 17:49 | |
*** dmorita_ has joined #openstack-swift | 17:49 | |
*** dmorita_ has quit IRC | 18:00 | |
*** dmorita has joined #openstack-swift | 18:17 | |
clayg | timburke: FML, so it was *never* a good fake for *anything* | 18:29 |
timburke | clayg: yup :'( | 18:31 |
*** disaster has joined #openstack-swift | 18:33 | |
disaster | hi i try to put on tempurl but it don't work. public or private container same way... is there a thing to do on acl or something? | 18:34 |
disaster | i'm on ovh object storage if that can help | 18:35 |
clayg | timburke: looks like we just dropped the ball here -> https://review.openstack.org/#/c/51901/3 | 18:36 |
patchbot | patch 51901 - python-swiftclient - enhance swiftclient logging (MERGED) | 18:36 |
*** chlong_ has quit IRC | 18:37 | |
clayg | timburke: except... it *is* a dict? DEBUG:swiftclient:RESP HEADERS: {u'X-Account-Storage-Policy-Ec-Bytes-Used': u'0', u'Content-Length': u'0', ... | 18:37 |
openstackgerrit | Merged openstack/python-swiftclient master: Change swift.o.o URL https://review.openstack.org/443198 | 18:37 |
clayg | oh... that's just scrub_headers being robust to bad mocks - gd! | 18:38 |
timburke | clayg: it's turtles all the way down... only instead of turtles, sadness | 18:39 |
clayg | oh, nope scrub_headers is use on request and response headers | 18:40 |
clayg | it *has* to be robust to both types | 18:40 |
clayg | ok, good - so we're back to Fake's getresponse should return tuples and the world makes sense | 18:40 |
clayg | err... getheaders w/e | 18:41 |
*** rcernin has joined #openstack-swift | 18:42 | |
clayg | who did you say failed when you changed the fakes to return lists of tuples? | 18:42 |
rledisez | disaster: it should work if you set the key on the account. did you try to send a mail to cloud@ml.ovh.net? somebody from the team will look at it tomorrow | 18:43 |
timburke | clayg: nobody fails with lists of tuples. two fail without the method at all, so it might make sense to rewrite them similar to how you've rewritten that most-recent test | 18:44 |
disaster | rledisez: i set a key and i call ovh like 15 times for nothing... "I do a internal request for you" | 18:45 |
disaster | rledisez: more than a week | 18:45 |
disaster | rledisez: i don't know if french enterprise hate french people but my experience with this technical support is so so bad | 18:48 |
*** mvk has quit IRC | 18:48 | |
*** pxwang has joined #openstack-swift | 18:48 | |
notmyname | disaster: ouch. let's keep product/company support out of this channel, but if we can help you with the API calls or understanding how it works, we'd be happy to help in here | 18:49 |
openstackgerrit | Clay Gerrard proposed openstack/python-swiftclient master: Fix MockHttpResponse to be more like the Real https://review.openstack.org/442881 | 18:52 |
disaster | notmyname: sorry, i say that cause rledisez tell me to join ovh support and i'm quite mad about it | 18:52 |
*** chlong_ has joined #openstack-swift | 18:53 | |
notmyname | disaster: I completely understand. and it sounds really frustrating | 18:53 |
clayg | disaster: on behalf of rledisez I'm sure I can say that he's sorry to hear that - everyone here wants you to have great experience using swift and we want to help in anyway we can | 18:54 |
notmyname | yes! | 18:54 |
rledisez | sure. we discussed in private. i’ll make sure everything is handled tomorrow :) | 18:54 |
clayg | disaster: rledisez is a good guy and friend and I know he works for ovh and shares our sentiment | 18:55 |
disaster | thanks i know you do your best! | 18:55 |
disaster | clayg: the way you give me for my purpose with tempurl was really good so the madness to can't do it is high | 18:56 |
clayg | I can sympathise it's a hard thing - one person can't always move a mountain - everyone appreciates a little patience and forgiveness - and i don't think anyone here minds some constrcutive criticism | 18:56 |
disaster | sorry for my english i'm not really sure if he is good or not but you seems to undestand what i say ^^ | 18:57 |
clayg | I can't definately relate to getting mad as heck when it's not easy to do the right thing! Don't give up | 18:57 |
*** joeljwright has quit IRC | 18:58 | |
openstackgerrit | Merged openstack/swift master: Replace references to swift.openstack.org https://review.openstack.org/443190 | 19:03 |
disaster | thank you i won't complain more today ^^ goodbye | 19:07 |
*** disaster has quit IRC | 19:07 | |
*** pcaruana has quit IRC | 19:13 | |
*** bkopilov_ has joined #openstack-swift | 19:14 | |
*** bkopilov has quit IRC | 19:14 | |
*** mvk has joined #openstack-swift | 19:24 | |
*** geaaru has quit IRC | 19:27 | |
jrichli | clayg: if CHANGELOG doesn't need to adhere to 80 lines, then that is fine. I see there is already a link that exceeds that number. sorry I didn't realize that. | 19:28 |
clayg | jrichli: oh i have no idea - seems like a reasonable goal in practice - but line wrapping long links is always a stickler | 19:29 |
notmyname | jrichli: yeah, your comment is good. and the patch makes the file uglier. but wrapping links is hard :-/ | 19:30 |
clayg | jrichli: your comment led me (incorrectly?) assume you were mostly warm on the change but had one suggestion for improvement - I agreed with that sentiment but tipped toward let's just merge it since monty/infra had already done the leg work to get it that far and it's more us a burden for them than us | 19:31 |
jrichli | clayg: iz coo | 19:32 |
clayg | jrichli: you know me tho - i'm super hypocritical and inconssitent - one minute I'm like "merge it doen't matter how ugly it looks - it fixes the thing" - then the next I"m like "this trash isn't getting in on *my* watch; it doesn't matter if this is good; it could be *better*" | 19:33 |
*** amoralej is now known as amoralej|off | 19:39 | |
clayg | timburke: maybe the lesson is that we don't need shouldn't have MockHTTPResponse and StubResponse? swift has at least half a dozen fake response things | 19:39 |
*** pxwang has quit IRC | 20:03 | |
*** chlong_ has quit IRC | 20:16 | |
*** chlong_ has joined #openstack-swift | 20:28 | |
*** rcernin has quit IRC | 20:36 | |
*** adriant has joined #openstack-swift | 20:43 | |
acoles | :) I just installed a fresh keystone service for my SAIO, but noticed this bug https://bugs.launchpad.net/swift/+bug/1662473 :( | 20:51 |
openstack | Launchpad bug 1662473 in OpenStack Object Storage (swift) "get internal error when try authenticate by keystone. " [Undecided,Confirmed] | 20:51 |
jrichli | acoles: was this on Ubuntu 14? | 20:53 |
*** joeljwright has joined #openstack-swift | 20:57 | |
*** ChanServ sets mode: +v joeljwright | 20:57 | |
timburke | related: https://github.com/openstack/requirements/commit/08b589c | 20:58 |
notmyname | know what time it is? best time of the week! swift team meeting time in #openstack-meeting | 20:59 |
mattoliverau | morning | 20:59 |
*** spotz is now known as spotz_zzz | 20:59 | |
*** spotz_zzz is now known as spotz | 21:00 | |
kota_ | mattoliverau: mornin | 21:00 |
*** mathiasb_ is now known as mathiasb | 21:00 | |
notmyname | acoles: clayg: meeting time courtesy ping | 21:02 |
*** m_kazuhiro has joined #openstack-swift | 21:07 | |
*** mkaminski has joined #openstack-swift | 21:12 | |
*** mkaminski has quit IRC | 21:16 | |
*** dja has joined #openstack-swift | 21:26 | |
*** dja has quit IRC | 21:32 | |
*** mkaminski has joined #openstack-swift | 21:33 | |
*** dja has joined #openstack-swift | 21:33 | |
*** Jeffrey4l_ has quit IRC | 21:34 | |
*** Jeffrey4l_ has joined #openstack-swift | 21:35 | |
*** mkaminski has quit IRC | 21:37 | |
openstackgerrit | Romain LE DISEZ proposed openstack/swift master: Replace replication_one_per_device by custom count https://review.openstack.org/390781 | 21:39 |
*** NM has quit IRC | 21:41 | |
*** joeljwright has quit IRC | 21:47 | |
clayg | rledisez: how do you mean "starting multiple processes" - starting multiple swift-object-reconstructor's? like by hand? or you have supervisor script you threw together or something? | 21:48 |
clayg | I feel like recon stats aren't really going to be geared toward having multiple instances of swift-object-reconstructor running | 21:49 |
clayg | ... not sure about statsd metrics - but I think it's about the same problem | 21:49 |
rledisez | clayg: it’s an sys-V initscript that starts N process (N configurable in /etc/default/swift-object-reconstructor). it just needed a small patch in reconstructor to take the list of devices as parameter while running in forever mode | 21:49 |
rledisez | yeah, stats are a bit messy, but i’m ok with that as long as it works | 21:50 |
*** dja has quit IRC | 21:50 | |
*** dja has joined #openstack-swift | 21:51 | |
rledisez | clayg: i’m thinking, we also separate revert and sync jobs (so, N processes for revert, and N processes for sync) | 21:52 |
clayg | jesus, how do you do that? like a command line arg for handoffs_only? | 21:52 |
clayg | or sync_jobs_only? | 21:52 |
*** sams-gleb has quit IRC | 21:53 | |
rledisez | yes: -r -> revert, -s -> sync. -d sda,sdb,sdc… | 21:54 |
clayg | does the existing devices option only effect once mode or something? | 21:55 |
clayg | yeah i guess so - but that's trivial | 21:56 |
rledisez | yes, that’s part of the patch: use device option in forever mode | 21:56 |
clayg | then you had to plumb the revert/sync stuff - I feel like there's still a log of ... | 21:57 |
clayg | I mean kudos for getting something that works and leaning on it - but ... did you at least file an upstream bug or anything? | 21:58 |
rledisez | yeah, it’s a condition in collect_parts() | 21:58 |
clayg | is that patch up somewhere I can look at it more closely? | 21:58 |
rledisez | clayg: i should not answer your last two questions, you’ll be mad at me ;) | 21:59 |
clayg | it may be a bit out of context without the sysV script that controls it | 21:59 |
clayg | s'fine | 21:59 |
rledisez | clayg: clearly. it’s not a nice implementation. i prefer the way object-auditor works, but it’s a lot more work | 21:59 |
clayg | i mean I have my little collection of gross hacks too - i'm not always sure how to share them | 22:00 |
clayg | do you have a similar thing going for you swift-object-replicators? with the ssync? | 22:00 |
rledisez | no, it works fine for us, no patch there | 22:01 |
clayg | cool | 22:02 |
clayg | well crap https://gist.github.com/clayg/b3df9db01d799da645e597e2bd5f5da3 | 22:22 |
clayg | I'm not sure what I've done in the reconstructor that is more complex than that... tpool reraise or something I guess | 22:23 |
clayg | or some weird side effect of importing swift.common.utils like lp bug #1380815 | 22:23 |
openstack | Launchpad bug 1380815 in OpenStack Object Storage (swift) "swift.common.utils should not patch logging on import" [High,New] https://launchpad.net/bugs/1380815 | 22:23 |
*** chlong_ has quit IRC | 22:24 | |
*** vint_bra has quit IRC | 22:28 | |
*** dja has quit IRC | 22:35 | |
acoles | notmyname: this bug https://bugs.launchpad.net/swift/+bug/1662473 is fixed IMO but fixed in keystonemiddleware, do we mark it as Invalid or Fix committed? | 22:36 |
openstack | Launchpad bug 1662473 in OpenStack Object Storage (swift) "get internal error when try authenticate by keystone. " [Undecided,Confirmed] | 22:36 |
notmyname | acoles: good question. thinking | 22:39 |
*** catintheroof has quit IRC | 22:44 | |
notmyname | acoles: I think the hardest part is the words next to each state that LP provides | 22:46 |
notmyname | acoles: so I think we should mark it as invalid, but not because it wasn't a bug, but because it was resolved without pushing code to swift | 22:46 |
notmyname | i'll write some words to that effect | 22:46 |
acoles | yes, thought exactly that i.e. the subtest for Invalid is quite dismissive but as you say its probably the only sensible state to go to | 22:47 |
acoles | subtext* | 22:47 |
acoles | notmyname: you doing it or me? | 22:48 |
notmyname | i don't care either way. your call | 22:49 |
*** tdasilva has quit IRC | 22:51 | |
acoles | I will | 22:51 |
notmyname | ok, thanks | 22:51 |
*** dja has joined #openstack-swift | 22:53 | |
*** sams-gleb has joined #openstack-swift | 22:54 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/python-swiftclient master: Updated from global requirements https://review.openstack.org/89250 | 22:54 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift master: Updated from global requirements https://review.openstack.org/88736 | 22:55 |
*** m_kazuhiro has quit IRC | 22:55 | |
acoles | notmyname: done. good night | 22:56 |
notmyname | acoles: perfect. thanks. good night | 22:57 |
clayg | so i seemed to make some progress, apparently if you start a greenthread before you fork your mutliprocessing workers that bad... | 22:58 |
timburke | i could see that, yeah... | 22:59 |
*** sams-gleb has quit IRC | 22:59 | |
*** dja has quit IRC | 22:59 | |
clayg | timburke: it's like i guess the fork'd process doesn't understand it's internal hub state entirely | 22:59 |
clayg | I wish eventlet had more explicit control of the hub - as it is I think like the first time you spawn it starts the hub for you if it's not running? | 23:00 |
*** dja has joined #openstack-swift | 23:09 | |
*** dja has quit IRC | 23:16 | |
openstackgerrit | Clay Gerrard proposed openstack/swift master: Add multiprocessing worker to reconstructor https://review.openstack.org/443356 | 23:27 |
*** catintheroof has joined #openstack-swift | 23:28 | |
*** kei_yama has joined #openstack-swift | 23:30 | |
*** chsc has quit IRC | 23:32 | |
*** NM has joined #openstack-swift | 23:34 | |
*** klamath has quit IRC | 23:38 | |
openstackgerrit | Clay Gerrard proposed openstack/swift master: Add multiprocessing worker to reconstructor https://review.openstack.org/443356 | 23:44 |
*** pxwang has joined #openstack-swift | 23:44 | |
*** pxwang has left #openstack-swift | 23:45 | |
*** dja has joined #openstack-swift | 23:47 | |
*** NM has quit IRC | 23:49 | |
*** david-lyle has quit IRC | 23:56 | |
*** dja has quit IRC | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!