*** jrichli has joined #openstack-swift | 00:06 | |
*** garthb has quit IRC | 00:13 | |
*** kota_ has joined #openstack-swift | 00:16 | |
*** ChanServ sets mode: +v kota_ | 00:16 | |
kota_ | good morning | 00:16 |
jrichli | kota_ : good morning! | 00:19 |
*** zhill has quit IRC | 00:19 | |
kota_ | jrichli: morning :D | 00:19 |
jrichli | kota_ : I have gotten farther in my learning Japanese, but I feel that I don't really know how to say anything :-) | 00:20 |
jrichli | Learning a new language is very hard! | 00:20 |
kota_ | jrichli: exactly, I am still not good at English :/ | 00:21 |
jrichli | I was learning the difference between wa and ga. I think I will end up knowing some single words, but making sentences is another thing. | 00:21 |
jrichli | we seem to all understand you just fine :-) | 00:21 |
kota_ | thanks | 00:21 |
jrichli | I should say, understand you well. I see that sentence now, and I understand it might not translate well. | 00:22 |
jrichli | I guess this is really a good thing for me to do. | 00:22 |
kota_ | basically 'wa' and 'ga' have the same meaning. Japanese people can understand even if you mix up the use cases. | 00:23 |
jrichli | yes, but I read some examples where the difference can make the difference between a compliment and an insult! | 00:23 |
kota_ | In my opinion, we often use 'ha' for statement... maybe | 00:24 |
jrichli | oh, I haven't gotten to that one yet | 00:24 |
kota_ | be verb, am, are, was, like that | 00:24 |
kaliya | torgomatic, it can, it's 777 with swift:swift... | 00:25 |
kota_ | and 'ga' is used for action verb like do, play | 00:25 |
kota_ | or so | 00:25 |
jrichli | I haven't seen that yet either - I don't think. I am using Rosetta Stone, which is good, but I have to supplement by reading info on the internet. | 00:26 |
kota_ | e.g. if you want to say, 'I AM a worker'. you should use 'ha' in the statement. | 00:27 |
kota_ | oh, Rosetta Stone is popular in US, too? | 00:27 |
jrichli | ok, that makes sense. That type of statement hasn't been covered yet. | 00:27 |
jrichli | yep | 00:27 |
kota_ | I saw the commercial in Japan, too, I didn't try it yet tho | 00:28 |
kota_ | jrichli: if you have any question, feel free to ask me :P | 00:30 |
kota_ | about Japanese. | 00:30 |
*** flwang1 has quit IRC | 00:30 | |
*** m_kazuhiro has joined #openstack-swift | 00:30 | |
jrichli | Great, thanks! I am sure I will take you up on that at some point. | 00:31 |
*** gyee has quit IRC | 00:33 | |
*** rjaiswal has joined #openstack-swift | 00:39 | |
kaliya | Tokyo summit? :) I am trying with "Japanese for busy people" but I'm too busy | 00:45 |
kota_ | kaliya: lol | 00:46 |
kaliya | So far, I know how to say sumimasen | 00:48 |
jrichli | lol. Well, some words are nice because they sound very similar to the English equivalent. Like pen, computer, internet, dining room, to name a few. | 00:48 |
*** zacksh_ has quit IRC | 00:48 | |
*** darrenc has quit IRC | 00:48 | |
*** sudorandom has quit IRC | 00:48 | |
*** mattoliverau has quit IRC | 00:48 | |
*** darrenc has joined #openstack-swift | 00:49 | |
*** jamielennox has quit IRC | 00:49 | |
*** flwang has quit IRC | 00:49 | |
*** CJM has quit IRC | 00:49 | |
*** StevenK has quit IRC | 00:50 | |
*** donagh has quit IRC | 00:50 | |
*** jroll has quit IRC | 00:50 | |
*** kevinc___ has quit IRC | 00:50 | |
kota_ | jrichli: yes, they came from outside of Japan. We have a way to call them as they are. | 00:50 |
*** mattoliverau has joined #openstack-swift | 00:51 | |
*** StevenK has joined #openstack-swift | 00:51 | |
*** zacksh has joined #openstack-swift | 00:51 | |
jrichli | ah, i see. I even thought "Tishatsu" was pretty similar. Easy to remember anyway, and I thought applicable to the summit :-) | 00:51 |
*** ChanServ sets mode: +v mattoliverau | 00:52 | |
jrichli | more applicable than dining room anyway :-) | 00:53 |
*** jroll has joined #openstack-swift | 00:53 | |
*** donagh has joined #openstack-swift | 00:53 | |
kota_ | jrichli: yup, before that kind of "shirt" came to Japan, Japanese people wore "kimono", traditional clothes. | 00:54 |
*** jamielennox has joined #openstack-swift | 00:54 | |
*** sudorandom has joined #openstack-swift | 00:54 | |
kota_ | that's why we have the same pronunciation for "shirt" | 00:54 |
*** CrackerJackMack has joined #openstack-swift | 00:54 | |
jrichli | ah. this very interesting | 00:54 |
jrichli | I mean, that is very interesting | 00:55 |
kota_ | :) | 00:55 |
jrichli | but then, why is laptop so different? | 00:55 |
kota_ | I don't know :/ | 00:56 |
*** flwang has joined #openstack-swift | 00:56 | |
jrichli | :-) oh well, I knew it wouldn't be that easy | 00:56 |
kota_ | Japanese sometimes make a new word by combining foreign languages. | 00:56 |
kota_ | AFAIK, old Japanese thought "notebook + personal computer" for the laptop. | 00:57 |
kota_ | before we got the word "laptop" | 00:57 |
jrichli | oh, ic. | 00:57 |
kota_ | and it became de facto, like note-PC | 00:58 |
*** jkugel has joined #openstack-swift | 00:58 | |
kota_ | jrichli: sorry, I have a meeting, away from here. | 01:00 |
*** kota_ has quit IRC | 01:00 | |
jrichli | kota_: no problem :-) | 01:05 |
*** kei_yama has quit IRC | 01:14 | |
*** kei_yama has joined #openstack-swift | 01:15 | |
mattoliverau | morning all | 01:15 |
*** flwang1 has joined #openstack-swift | 01:15 | |
mattoliverau | had a busy morning | 01:15 |
jrichli | mattoliverau: morning! That is great to hear that you will be vacationing in Japan for 2 weeks! | 01:17 |
mattoliverau | yeah! gonna be awesome! | 01:18 |
mattoliverau | also looks like there is another swift dev in the making: http://postimg.org/image/63kefpeyp/ | 01:19 |
mattoliverau | ^ that's why I was late this morn.. I'm a little excited now :) | 01:19 |
jrichli | OMG! congratulations!!! | 01:20 |
mattoliverau | jrichli: thanks :) | 01:20 |
mattoliverau | just following in tdasilva's and ho's foot steps :) | 01:20 |
jrichli | so, that says he/she is 12 weeks and 6 days old? | 01:21 |
mattoliverau | yeah, it was the 12 week scan; by the length it looks closer to 12 weeks 6 days. Won't know the sex until 20 weeks | 01:22 |
mattoliverau | and now in the safety zone so can finally tell people :) | 01:22 |
jrichli | ic. I bet it was hard not to say anything :-) | 01:22 |
charz | mattoliverau: Congratulations!!! | 01:22 |
mattoliverau | yeah, my remote work mates and the swift community are first to know LOL! (other than family of course). | 01:24 |
jrichli | as it should be ;-) | 01:25 |
jrichli | so soon, you will need tiny swift shirts | 01:26 |
mattoliverau | lol | 01:27 |
hrou | mattoliverau, oh wow that's awesome, congratulations !!! | 01:30 |
mattoliverau | hrou: thanks man.. what are you still doing online, shouldn't you be sleeping :P | 01:30 |
hrou | mattoliverau, I recall chatting about this very subject during the hackathon ! It's funny, here at work I find kids come in batches, literally for a 6 month period, kids popping out everywhere ! Then nothing for a year. | 01:31 |
hrou | ha, no just 9:30 here. | 01:33 |
mattoliverau | lol, so who's next? | 01:33 |
hrou | I love the 'mattoliverau v2.0', that's a great pic. But wait, rcbau ? : ) should there be a 2.0 appended on to that as well | 01:34 |
*** haomaiwang has joined #openstack-swift | 01:37 | |
*** darrenc is now known as darrenc_afk | 01:42 | |
mattoliverau | hrou: rcbau is Rackspace Cloud Builders Australia, it's the meme I used to tell my work team :) | 01:47 |
mattoliverau | Cause memes are important | 01:47 |
hrou | mattoliverau, ah ! : ) Ok that's hilarious and now makes perfect sense | 01:48 |
*** jkugel has quit IRC | 01:54 | |
*** haomaiwang has quit IRC | 02:01 | |
*** haomaiwang has joined #openstack-swift | 02:01 | |
*** m_kazuhiro has quit IRC | 02:04 | |
tdasilva | mattoliverau: Congrats!! | 02:16 |
*** darrenc_afk is now known as darrenc | 02:17 | |
ho | mattoliverau: Congratulations!!! | 02:18 |
openstackgerrit | Zack M. Davis proposed openstack/swift: given Python 3, use http.client.parse_headers instead of rfc822.Message https://review.openstack.org/203304 | 02:23 |
*** kei_yama has quit IRC | 02:23 | |
openstackgerrit | Zack M. Davis proposed openstack/swift: given Python 3, use parse_headers instead of rfc822.Message https://review.openstack.org/203304 | 02:24 |
*** kei_yama has joined #openstack-swift | 02:24 | |
notmyname | mattoliverau: that's awesome! congrats! | 02:25 |
*** changbl has joined #openstack-swift | 02:28 | |
mattoliverau | Thanks y'all | 02:40 |
jrichli | notmyname: I was excited to see women sizes on the t-shirt survey :-). also, you said you have the design? Can it be revealed? | 02:44 |
*** eranrom has joined #openstack-swift | 02:53 | |
*** haomaiwang has quit IRC | 02:55 | |
*** albertom has quit IRC | 02:56 | |
*** haomaiwa_ has joined #openstack-swift | 02:57 | |
*** albertom has joined #openstack-swift | 02:59 | |
*** rjaiswal has quit IRC | 03:00 | |
*** haomaiwa_ has quit IRC | 03:01 | |
*** eranrom has quit IRC | 03:01 | |
*** haomaiwa_ has joined #openstack-swift | 03:01 | |
*** ppai has joined #openstack-swift | 03:28 | |
notmyname | jrichli: heh. not yet ;-) | 03:28 |
notmyname | jrichli: and yes, I wanted to make sure I had some women's sizes too | 03:28 |
jrichli | notmyname: mahatic will be happy too! The men's shirts were pretty big on her | 03:29 |
mattoliverau | Just an FYI, I may miss the meeting tomorrow, or at least a chunk of it because I need to fly to Sydney at the crack of dawn. | 03:55 |
*** sanchitmalhotra has joined #openstack-swift | 03:57 | |
*** haomaiwa_ has quit IRC | 04:01 | |
*** 7F1AAJQF9 has joined #openstack-swift | 04:01 | |
*** m_kazuhiro has joined #openstack-swift | 04:03 | |
*** hrou has quit IRC | 04:18 | |
*** jrichli has quit IRC | 04:23 | |
*** mahatic has joined #openstack-swift | 04:40 | |
mahatic | good morning | 04:43 |
*** wbhuber has joined #openstack-swift | 04:48 | |
*** baojg has joined #openstack-swift | 04:49 | |
*** wbhuber has quit IRC | 04:53 | |
*** kota_ has joined #openstack-swift | 04:58 | |
*** ChanServ sets mode: +v kota_ | 04:58 | |
kota_ | mattoliverau: congrats! | 04:59 |
mattoliverau | kota_: thanks man :) | 05:00 |
*** 7F1AAJQF9 has quit IRC | 05:01 | |
*** haomaiwa_ has joined #openstack-swift | 05:01 | |
kota_ | Just what I wanted to say...and now I have to go offline for a meeting, again :( | 05:02 |
*** mahatic has quit IRC | 05:06 | |
*** kota_ has quit IRC | 05:06 | |
*** mahatic has joined #openstack-swift | 05:06 | |
*** flwang1 has quit IRC | 05:06 | |
mahatic | mattoliverau: congratulations!! | 05:07 |
mahatic | :) | 05:07 |
mattoliverau | mahatic: thanks :) | 05:07 |
mahatic | and awesome that you're vacationing in Japan! | 05:07 |
mattoliverau | mahatic: are you coming to Japan? | 05:13 |
mahatic | mattoliverau: my flights are done, visa on the way! (Japanese visa is super complicated, I am actually getting docs shipped from Tokyo! :D) | 05:21 |
*** SkyRocknRoll has joined #openstack-swift | 05:21 | |
mattoliverau | wow, ok.. well travelled documents | 05:21 |
mahatic | lol yeah | 05:22 |
*** silor has joined #openstack-swift | 05:25 | |
*** silor has quit IRC | 05:26 | |
*** silor has joined #openstack-swift | 05:26 | |
*** changbl has quit IRC | 05:30 | |
*** silor1 has joined #openstack-swift | 05:30 | |
*** silor has quit IRC | 05:30 | |
*** silor1 is now known as silor | 05:30 | |
*** ekarlso- has joined #openstack-swift | 05:37 | |
*** ekarlso- has quit IRC | 05:50 | |
*** km has quit IRC | 05:52 | |
*** baojg has quit IRC | 05:52 | |
*** km has joined #openstack-swift | 05:52 | |
*** haomaiwa_ has quit IRC | 06:01 | |
*** haomaiwang has joined #openstack-swift | 06:01 | |
*** mahatic has quit IRC | 06:08 | |
*** mahatic has joined #openstack-swift | 06:14 | |
*** baojg has joined #openstack-swift | 06:20 | |
*** xnox has quit IRC | 06:34 | |
*** xnox has joined #openstack-swift | 06:35 | |
*** xnox has quit IRC | 06:44 | |
*** xnox has joined #openstack-swift | 06:46 | |
*** akle has joined #openstack-swift | 06:51 | |
*** haomaiwang has quit IRC | 07:01 | |
*** haomaiwang has joined #openstack-swift | 07:01 | |
openstackgerrit | Merged openstack/swift: Improving statistics sent to Graphite. https://review.openstack.org/202657 | 07:12 |
*** rledisez has joined #openstack-swift | 07:14 | |
*** pberis has joined #openstack-swift | 07:25 | |
*** baojg has quit IRC | 07:39 | |
*** geaaru has joined #openstack-swift | 07:40 | |
*** pberis has quit IRC | 07:53 | |
*** jordanP has joined #openstack-swift | 07:53 | |
*** hseipp has joined #openstack-swift | 07:54 | |
*** baojg has joined #openstack-swift | 07:57 | |
*** haomaiwang has quit IRC | 08:01 | |
*** haomaiwang has joined #openstack-swift | 08:01 | |
*** acoles_ is now known as acoles | 08:01 | |
*** jordanP has quit IRC | 08:02 | |
*** jordanP has joined #openstack-swift | 08:02 | |
*** cazino has joined #openstack-swift | 08:03 | |
*** cazino has left #openstack-swift | 08:03 | |
*** m_kazuhiro has quit IRC | 08:12 | |
*** jordanP has quit IRC | 08:17 | |
*** jistr has joined #openstack-swift | 08:32 | |
*** jordanP has joined #openstack-swift | 08:43 | |
*** itlinux has joined #openstack-swift | 08:43 | |
acoles | mattoliverau: congrats! sharding irl ;) | 08:45 |
*** flwang1 has joined #openstack-swift | 08:49 | |
*** SkyRocknRoll has quit IRC | 08:49 | |
*** T0m_ has joined #openstack-swift | 08:56 | |
*** haomaiwang has quit IRC | 09:01 | |
*** haomaiwang has joined #openstack-swift | 09:01 | |
*** baojg has quit IRC | 09:10 | |
*** kei_yama has quit IRC | 09:16 | |
*** Kennan_Vacation2 has quit IRC | 09:17 | |
*** trifon_ has quit IRC | 09:19 | |
*** SkyRocknRoll has joined #openstack-swift | 09:22 | |
*** Kennan_Vacation has joined #openstack-swift | 09:25 | |
*** m_kazuhiro has joined #openstack-swift | 09:29 | |
*** km has quit IRC | 09:33 | |
mattoliverau | Lol, thanks acoles | 09:35 |
*** mahatic has quit IRC | 09:36 | |
*** haomaiwang has quit IRC | 10:01 | |
*** haomaiwang has joined #openstack-swift | 10:02 | |
*** trifon_ has joined #openstack-swift | 10:03 | |
*** itlinux has quit IRC | 10:24 | |
*** logan2 has quit IRC | 10:24 | |
*** logan2 has joined #openstack-swift | 10:28 | |
*** T0m_ has left #openstack-swift | 10:43 | |
*** itlinux has joined #openstack-swift | 10:47 | |
*** hseipp has quit IRC | 10:53 | |
*** haomaiwang has quit IRC | 11:01 | |
*** haomaiwang has joined #openstack-swift | 11:01 | |
*** aix has quit IRC | 11:06 | |
*** T0m_ has joined #openstack-swift | 11:22 | |
T0m_ | Is there a way to query a swift object to see what partition it's part of? | 11:23 |
*** mahatic has joined #openstack-swift | 11:26 | |
acoles | T0m_: you can use swift-get-nodes command, like this: | 11:27 |
acoles | swift-get-nodes /etc/swift/object.ring.gz AUTH_acc/cont/obj | 11:27 |
acoles | that's if you have access to a node. there is no way to find partition via the swift API. | 11:29 |
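acoles' point can be illustrated offline: the partition is just a function of the ring's part power and an md5 of the object path, so it can be computed without the Swift API. A minimal Python sketch, assuming the default ring hashing; the `hash_suffix` value here is a placeholder (real clusters configure their own secret suffix/prefix in swift.conf):

```python
import hashlib

def get_partition(account, container, obj, part_power,
                  hash_suffix=b'changeme'):
    """Approximate swift's ring lookup: md5 the object path (plus the
    cluster's secret hash suffix) and keep the top part_power bits."""
    path = ('/%s/%s/%s' % (account, container, obj)).encode('utf-8')
    digest = hashlib.md5(path + hash_suffix).digest()
    # the ring reads the first 4 bytes as a big-endian int, then shifts
    part_shift = 32 - part_power
    return int.from_bytes(digest[:4], 'big') >> part_shift
```

swift-get-nodes performs this same computation against the real ring file, then additionally maps the partition onto the devices holding it.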
T0m_ | ok will give that a go - thanks | 11:30 |
mahatic | acoles: hello, I need to skip probe tests when we're not using encryption. Would I need to be creating a parameter of sorts in swiftclient, like we have for healthcheck, to turn it on/off? I'm not sure | 11:32 |
*** logan2 has quit IRC | 11:38 | |
*** logan2 has joined #openstack-swift | 11:40 | |
mahatic | I don't think we know if encryption is on/off by default. At least, I don't | 11:43 |
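One way to express the skip mahatic is after, sketched with unittest's skip machinery. The `encryption_enabled()` helper is hypothetical: a real probe test would detect the feature from the proxy pipeline or cluster info rather than hard-coding it.

```python
import unittest

def encryption_enabled():
    # Hypothetical detection hook: a real probe test might parse the
    # proxy-server pipeline or query the cluster's /info endpoint for
    # an encryption capability instead of returning a constant.
    return False

class TestEncryptionProbe(unittest.TestCase):
    @unittest.skipUnless(encryption_enabled(),
                         "encryption middleware not enabled")
    def test_object_encrypted_at_rest(self):
        # body only runs on clusters with encryption turned on
        self.assertTrue(encryption_enabled())
```

With this shape nothing in swiftclient needs a new parameter; the probe test just skips itself when the cluster lacks the feature.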
*** logan2 has quit IRC | 11:50 | |
*** resker has joined #openstack-swift | 11:54 | |
*** logan2 has joined #openstack-swift | 11:54 | |
*** lpabon has joined #openstack-swift | 11:57 | |
*** esker has quit IRC | 11:57 | |
*** thurloat is now known as thurloat_isgone | 11:59 | |
*** haomaiwang has quit IRC | 12:01 | |
*** haomaiwa_ has joined #openstack-swift | 12:01 | |
*** Protux has quit IRC | 12:03 | |
*** lpabon has quit IRC | 12:07 | |
*** ppai has quit IRC | 12:08 | |
*** logan2 has quit IRC | 12:17 | |
*** logan2 has joined #openstack-swift | 12:20 | |
*** ppai has joined #openstack-swift | 12:22 | |
*** marzif has quit IRC | 12:25 | |
*** marzif has joined #openstack-swift | 12:25 | |
*** changbl has joined #openstack-swift | 12:37 | |
*** logan2 has quit IRC | 12:41 | |
*** logan2 has joined #openstack-swift | 12:42 | |
*** haomaiwa_ has quit IRC | 12:50 | |
*** aix has joined #openstack-swift | 12:51 | |
*** m_kazuhiro has quit IRC | 12:54 | |
*** resker has quit IRC | 12:55 | |
*** ctrath has joined #openstack-swift | 12:57 | |
*** ppai has quit IRC | 13:02 | |
*** ctrath has quit IRC | 13:04 | |
*** dustins has joined #openstack-swift | 13:07 | |
*** hrou has joined #openstack-swift | 13:08 | |
*** nadeem has joined #openstack-swift | 13:15 | |
*** nadeem has quit IRC | 13:15 | |
*** nadeem has joined #openstack-swift | 13:16 | |
*** marcusvrn_ has joined #openstack-swift | 13:16 | |
*** jkugel has joined #openstack-swift | 13:19 | |
*** SkyRocknRoll has quit IRC | 13:21 | |
*** breitz has quit IRC | 13:23 | |
*** breitz has joined #openstack-swift | 13:23 | |
openstackgerrit | David Goetz proposed openstack/swift: go: add a couple timers to see about GetHashes vrs Sync time https://review.openstack.org/221753 | 13:24 |
*** haomaiwang has joined #openstack-swift | 13:28 | |
*** flwang1 has quit IRC | 13:31 | |
*** tongli has joined #openstack-swift | 13:33 | |
*** mahatic has quit IRC | 13:37 | |
*** flwang1 has joined #openstack-swift | 13:38 | |
*** ctrath has joined #openstack-swift | 13:43 | |
*** lcurtis has joined #openstack-swift | 13:48 | |
*** mahatic has joined #openstack-swift | 13:50 | |
*** flwang1 has quit IRC | 13:50 | |
*** wbhuber has joined #openstack-swift | 13:51 | |
*** SkyRocknRoll has joined #openstack-swift | 13:52 | |
*** nadeem has quit IRC | 13:58 | |
*** ho has quit IRC | 14:00 | |
*** haomaiwang has quit IRC | 14:01 | |
*** haomaiwang has joined #openstack-swift | 14:01 | |
*** wbhuber_ has joined #openstack-swift | 14:02 | |
*** wbhuber has quit IRC | 14:03 | |
*** mahatic has quit IRC | 14:03 | |
*** mahatic has joined #openstack-swift | 14:04 | |
*** mahatic has quit IRC | 14:04 | |
*** sanchitmalhotra has quit IRC | 14:05 | |
*** flwang1 has joined #openstack-swift | 14:06 | |
*** trifon_ has quit IRC | 14:15 | |
*** jlhinson has joined #openstack-swift | 14:15 | |
*** jrichli has joined #openstack-swift | 14:17 | |
*** flwang1 has quit IRC | 14:18 | |
tdasilva | notmyname: good morning, is there a meeting today? | 14:23 |
*** proteusguy__ has quit IRC | 14:32 | |
*** SkyRocknRoll has quit IRC | 14:34 | |
*** jlhinson has quit IRC | 14:34 | |
*** ujjain- has quit IRC | 14:37 | |
*** ujjain- has joined #openstack-swift | 14:37 | |
jrichli | mahatic: are you around? | 14:39 |
jrichli | mahatic: I dont see you on. I will send you email | 14:40 |
openstackgerrit | David Goetz proposed openstack/swift: go: assign startTime to Now at beginning of replication run https://review.openstack.org/221512 | 14:42 |
*** wbhuber_ is now known as wbhuber | 14:45 | |
*** jlhinson has joined #openstack-swift | 14:46 | |
openstackgerrit | Alistair Coles proposed openstack/swift: Trivial Key Master for encryption https://review.openstack.org/193749 | 14:47 |
openstackgerrit | David Goetz proposed openstack/swift: go: use recon diskusage so you can weed out unmounted drives https://review.openstack.org/221446 | 14:48 |
*** proteusguy__ has joined #openstack-swift | 14:50 | |
*** pberis1 has joined #openstack-swift | 14:55 | |
*** pberis1 is now known as pberis | 14:55 | |
*** mahatic has joined #openstack-swift | 14:58 | |
*** minwoob has joined #openstack-swift | 14:58 | |
*** esker has joined #openstack-swift | 14:59 | |
*** haomaiwang has quit IRC | 15:01 | |
*** haomaiwang has joined #openstack-swift | 15:01 | |
*** jistr is now known as jistr|call | 15:02 | |
*** esker has quit IRC | 15:04 | |
*** marzif has quit IRC | 15:10 | |
*** akle has quit IRC | 15:16 | |
notmyname | good morning | 15:24 |
jrichli | good morning | 15:25 |
*** haomaiwang has quit IRC | 15:26 | |
openstackgerrit | Alistair Coles proposed openstack/swift: Trivial Key Master for encryption https://review.openstack.org/193749 | 15:28 |
*** haomaiwang has joined #openstack-swift | 15:30 | |
*** flwang1 has joined #openstack-swift | 15:30 | |
*** SkyRocknRoll has joined #openstack-swift | 15:33 | |
*** esker has joined #openstack-swift | 15:33 | |
notmyname | tdasilva: yes, there's a meeting today | 15:33 |
tdasilva | notmyname: cool, thanks! | 15:33 |
*** bill_az has joined #openstack-swift | 15:36 | |
openstackgerrit | Alistair Coles proposed openstack/swift: Cryptography module to be used by middleware https://review.openstack.org/193826 | 15:38 |
*** garthb has joined #openstack-swift | 15:38 | |
*** trifon_ has joined #openstack-swift | 15:44 | |
*** marcusvrn_ has quit IRC | 15:45 | |
*** adutta has joined #openstack-swift | 15:48 | |
*** adutta has quit IRC | 15:48 | |
*** gyee has joined #openstack-swift | 15:49 | |
*** esker has quit IRC | 15:50 | |
*** tongli has quit IRC | 15:51 | |
*** tongli has joined #openstack-swift | 15:51 | |
*** oddsam91 has joined #openstack-swift | 15:53 | |
*** flwang1 has quit IRC | 15:54 | |
openstackgerrit | Merged openstack/swift: go: add a couple timers to see about GetHashes vrs Sync time https://review.openstack.org/221753 | 15:54 |
*** jistr|call is now known as jistr | 15:55 | |
*** tongli has quit IRC | 15:55 | |
*** haomaiwang has quit IRC | 16:01 | |
*** itlinux has quit IRC | 16:01 | |
*** haomaiwang has joined #openstack-swift | 16:01 | |
*** esker has joined #openstack-swift | 16:04 | |
notmyname | if you haven't taken the survey on tshirt sizes and colors, please do so https://www.surveymonkey.com/r/5WQ7Y9F | 16:05 |
*** esker has quit IRC | 16:09 | |
hrou | Hey timburke around ? | 16:13 |
*** flwang1 has joined #openstack-swift | 16:21 | |
*** oddsam91 has quit IRC | 16:31 | |
*** rledisez has quit IRC | 16:32 | |
*** sasik has joined #openstack-swift | 16:33 | |
*** jordanP has quit IRC | 16:34 | |
*** prnk28 has joined #openstack-swift | 16:37 | |
*** jith_ has joined #openstack-swift | 16:37 | |
jith_ | Hi all, how do I clear the contents of swift as a whole? | 16:39 |
jith_ | i should make all the storage nodes empty | 16:40 |
ctennis | turn off the swift services and reformat the drives | 16:40 |
*** oddsam91 has joined #openstack-swift | 16:41 | |
*** jistr has quit IRC | 16:42 | |
jith_ | ctennis: thanks.. then do i need to add the drives again in the ring? | 16:42 |
ctennis | no as long as you don't change or remove the ring files | 16:42 |
jith_ | initially i made one keystone authentication, and uploaded certain GBs of data.. Later i configured the same swift cluster with another keystone for authentication.. so how do i delete the previous cluster contents? | 16:43 |
ctennis | you will have to manually delete each object via the API you want to be gone, or just have to wipe all of the data | 16:43 |
ctennis | or you can delete the swift account you want explicitly and the reaper will take care of removing all of the object data for you | 16:44 |
jith_ | i lost the old authentication.. so i can't do anything with the api without the previous authentication, right? | 16:45 |
*** sasik has left #openstack-swift | 16:45 | |
jith_ | i just need to erase all the data | 16:46 |
jith_ | ctennis: is this the procedure swift-init all stop | 16:46 |
jith_ | mkfs.xfs /dev/sdb1 .. i am sorry if i am wrong | 16:46 |
ctennis | if you have a new user who is a "superadmin" it can delete other accounts | 16:47 |
ctennis | you just need to know the account name | 16:47 |
ctennis | otherwise yes that looks right | 16:47 |
jith_ | ctennis: then should i do all the mounting steps again? | 16:47 |
ctennis | you will need to unmount it before you can format it, but yes | 16:47 |
jith_ | i followed this http://docs.openstack.org/kilo/install-guide/install/apt/content/swift-install-storage-node.html | 16:48 |
jith_ | so i should complete the 3rd step in the previous link... is it so? | 16:49 |
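The procedure ctennis describes — stop services, unmount, reformat, remount, leave the rings alone — amounts to roughly the following per storage node. The device `/dev/sdb1` and the `/srv/node` mount point are assumptions carried over from the thread and the install guide; adapt them to your layout, and note that mkfs destroys everything on the disk.

```shell
# DESTRUCTIVE sketch: wipes all object data on this node; ring files stay untouched.
swift-init all stop                  # stop every swift daemon first
umount /srv/node/sdb1                # can't reformat a mounted filesystem
mkfs.xfs -f /dev/sdb1                # reformat (device name assumed from the chat)
mount /srv/node/sdb1                 # remount; fstab assumed to carry the XFS options
chown -R swift:swift /srv/node/sdb1  # the swift user must own the mount again
swift-init all start                 # rings unchanged, so no rebalance is needed
```

Since the ring files are untouched, the cluster comes back with the same partition-to-device mapping, just with empty disks.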
*** oddsam91 has left #openstack-swift | 16:49 | |
*** jistr has joined #openstack-swift | 16:55 | |
openstackgerrit | Merged openstack/python-swiftclient: Add links for release notes tool https://review.openstack.org/221329 | 16:56 |
*** marcusvrn_ has joined #openstack-swift | 16:57 | |
openstackgerrit | Merged openstack/swift: go: assign startTime to Now at beginning of replication run https://review.openstack.org/221512 | 16:57 |
*** tongli has joined #openstack-swift | 16:58 | |
*** haomaiwang has quit IRC | 17:01 | |
*** haomaiwang has joined #openstack-swift | 17:01 | |
*** luapsil has joined #openstack-swift | 17:02 | |
*** luapsil has quit IRC | 17:04 | |
*** esker has joined #openstack-swift | 17:10 | |
*** esker has quit IRC | 17:11 | |
*** zhill has joined #openstack-swift | 17:15 | |
*** mahatic has quit IRC | 17:17 | |
*** delattec has joined #openstack-swift | 17:18 | |
*** cdelatte has joined #openstack-swift | 17:18 | |
*** jistr has quit IRC | 17:28 | |
*** haomaiwang has quit IRC | 17:28 | |
*** devlaps has joined #openstack-swift | 17:32 | |
*** geaaru has quit IRC | 17:34 | |
clayg | acoles: you about? | 17:35 |
acoles | clayg: yep | 17:35 |
clayg | acoles: patch 217830 fixes the ugly ValueError when we int('None') | 17:36 |
patchbot | clayg: https://review.openstack.org/#/c/217830/ | 17:36 |
acoles | yes | 17:36 |
wbhuber | bil: do a "source openrc" | 17:37 |
acoles | clayg: 217830 is good, i was wondering if we needed a probe test to validate it, but then saw the probe test you change in patch 218023 | 17:38 |
patchbot | acoles: https://review.openstack.org/#/c/218023/ | 17:38 |
acoles | which seems to validate the change in 217830 | 17:38 |
*** esker has joined #openstack-swift | 17:38 | |
acoles | but i couldn't see *why* it did | 17:38 |
clayg | acoles: ok, yeah I need to rebase that change | 17:39 |
acoles | clayg: so did i understand correctly that we only saw the int('None') when a handoff with only tombstones reverts to another handoff, so there is no frag index nor node index? | 17:41 |
acoles | the patch is fine, it's me that's confused :) | 17:42 |
clayg | acoles: yes that's correct | 17:42 |
clayg | acoles: well - i'm confused too | 17:42 |
clayg | acoles: weekend in between when I wrote the patch | 17:42 |
acoles | clayg: sorry - i gotta go - maybe catch you later around meeting time | 17:43 |
clayg | i don't understand why my primary runs reconstructor on second handoff then first instead of first handoff then second - i suppose it shouldn't matter - but the test acts like it does | 17:44 |
clayg | i guess I'll try swapping them when I rebase the test and try to remember why/if it matters | 17:45 |
*** acoles is now known as acoles_ | 17:51 | |
*** T0m_ has left #openstack-swift | 17:53 | |
*** SkyRocknRoll has quit IRC | 18:20 | |
*** eranrom has joined #openstack-swift | 18:21 | |
wbhuber | clayg: acoles: not sure if there's an existing swift command that lists containers specifically for a policy. I'm in a situation where I have three policies running and each policy has several containers with numerous objects. So, I'd like to list the containers per policy to deduce some more info. | 18:25 |
*** aix has quit IRC | 18:29 | |
*** tongli has quit IRC | 18:35 | |
*** esker has quit IRC | 18:36 | |
openstackgerrit | Alan Erwin proposed openstack/swift-specs: Adding expiring objecs spec https://review.openstack.org/221914 | 18:55 |
tdasilva | is Alan Erwin around in the channel? | 19:01 |
tdasilva | aerwin3: ^^ maybe?? | 19:02 |
aerwin3 | I am here. :) | 19:02 |
tdasilva | hey! how's it going...I remember us talking about the expiring objects issue during the last hackathon | 19:03 |
aerwin3 | Yes sir. How is it going? | 19:03 |
tdasilva | I guess the part i'm a bit confused is how does having the auditor expire fix the "expire can't catch up issue" | 19:04 |
tdasilva | is the auditor just faster than the expirer? | 19:04 |
aerwin3 | I just put in a review for a spec on the fix for it. | 19:04 |
aerwin3 | https://review.openstack.org/221914 | 19:04 |
tdasilva | aerwin3: yeah..i was just reading it | 19:04 |
*** esker has joined #openstack-swift | 19:05 | |
*** esker has quit IRC | 19:05 | |
ahale | i guess auditors just walk the local disk and scale easily when you add more storage, compared to expirers which do network reqs and are more annoying to run lots of (we don't run it on every object server) ? | 19:08 |
*** esker has joined #openstack-swift | 19:08 | |
aerwin3 | ahale: that's right. Scaling the expirer is a serious pain. | 19:11 |
tdasilva | ahale: expirer is annoying to run :) took me a while to figure out the whole 'process' and 'processes' options | 19:11 |
ahale | i like that spec btw :) | 19:11 |
*** oddsam91 has joined #openstack-swift | 19:11 | |
aerwin3 | Thanks. Hopefully this will reduce the amount of moving parts needed for this feature. | 19:12 |
*** oddsam91 has left #openstack-swift | 19:13 | |
aerwin3 | tdasilva: One thing to note about this spec is that when an object's expire_at time hits, it doesn't mean the object is removed. The cluster will need to be able to handle having the object around until the auditor deletes it. | 19:14 |
tdasilva | aerwin3, ahale: an issue that it introduces is that projects using swift + third-party storage backends can use the expirer feature and they typically don't need the auditor (as their storage systems take care of data consistency) | 19:14 |
tdasilva | tdasilva: not that i'm a fan of the expirer, but i can use it | 19:14 |
tdasilva | aerwin3: ^ | 19:14 |
ahale | with expire info in the users container, it could be interesting to eventually have an account header showing how much unexpired expiring data an account has ? | 19:15 |
aerwin3 | ahale: I can see that. Maybe being able to see how much data will be expiring soon can help customers make decisions. | 19:17 |
*** silor has quit IRC | 19:18 | |
openstackgerrit | Eran Rom proposed openstack/swift: Container-Sync to iterate only over synced containers https://review.openstack.org/205803 | 19:19 |
aerwin3 | tdasilva: That is an interesting observation. | 19:19 |
tdasilva | aerwin3: so, just for example, with the Swift-on-File project we make use of the expirer daemon, but we don't use the auditor (we let gluster or gpfs take care of keeping the data replication consistent) | 19:19 |
tdasilva | aerwin3: in that case, it is nice to be able to get the info from the container listing | 19:20 |
*** esker has quit IRC | 19:20 | |
*** esker has joined #openstack-swift | 19:30 | |
openstackgerrit | Alan Erwin proposed openstack/swift-specs: Adding expiring objecs spec https://review.openstack.org/221914 | 19:31 |
*** aix has joined #openstack-swift | 19:33 | |
aerwin3 | tdasilva: so for this use case, checking to see what objects are left to expire is needed? Do I have that correct? | 19:34 |
tdasilva | aerwin3: not sure what you mean by "checking" | 19:35 |
tdasilva | do you mean a "checking" by doing a container listing? | 19:36 |
*** resker has joined #openstack-swift | 19:37 | |
aerwin3 | yes. So, currently if you want to know what objects need to expire, a container listing on the .expiring_objects account would be where you would start. | 19:37 |
tdasilva | aerwin3: correct | 19:37 |
aerwin3 | Is that the kind of information that is needed for the use case? | 19:37 |
tdasilva | aerwin3: right | 19:37 |
aerwin3 | Would being able to query an account to see what objects have/have not expired solve the need? | 19:38 |
tdasilva | aerwin3: so, your spec is actually trying to solve two (related) problems. One is that a container listing has "stale" data and by adding the expire-at column, you could filter that out | 19:40 |
*** esker has quit IRC | 19:40 | |
tdasilva | second, is the expirer daemon not being fast enough to expire objects | 19:40 |
aerwin3 | tdasilva: Correct. | 19:41 |
tdasilva | while I agree with both issues, the proposed solution for the second issue is what could be a problem for us | 19:41 |
aerwin3 | tdasilva: I can see where that would be, considering that the auditor isn't running in your use case. | 19:41 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/python-swiftclient: Updated from global requirements https://review.openstack.org/89250 | 19:42 |
tdasilva | yeah...I understand the expirer is a problem, but it has this nice characteristic of getting the info from the container as opposed to getting the info from the underlying file | 19:42 |
tdasilva | it would be nice to be able to keep that | 19:42 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 19:43 |
*** devlaps has quit IRC | 19:43 | |
aerwin3 | tdasilva: what is the 'info from the container' that would be nice to keep? | 19:47 |
clayg | i guess I'll try swapping them when I rebase the test and try to remember why/if it matters | 19:48 |
tdasilva | aerwin3: sorry...i meant to say that the expirer daemon today is doing a GET on the container (.expiring_objects/<expiring_container_name>) | 19:48 |
tdasilva | that's how it knows what objects to expire...as opposed to looking in the underlying filesystem and scanning the file (as the auditor would do) | 19:49 |
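The listing-driven approach tdasilva describes can be sketched like this. It is a toy model, not the actual expirer daemon: entries in the hidden account are assumed to be named "<delete-at-timestamp>-<account>/<container>/<object>", so deciding what to delete needs only a container GET, never a filesystem scan.

```python
# Sketch (not Swift's expirer code) of working purely from container
# listings: the delete-at deadline is encoded in each entry's name, so
# a plain listing tells the daemon which objects are due.

def find_due(listing, now):
    """Return (delete_at, target) pairs whose deadline has passed."""
    due = []
    for name in listing:
        ts, _, target = name.partition('-')
        delete_at = int(ts)
        if delete_at <= now:
            due.append((delete_at, target))
    return due

listing = ['1000-AUTH_a/c/o1', '2000-AUTH_a/c/o2']
print(find_due(listing, now=1500))  # only o1 is past its deadline
```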
mattoliverau | Morning all from an airport lounge. | 19:50 |
* mattoliverau needs coffee stat | 19:50 | |
tdasilva | mattoliverau: good morning! | 19:50 |
aerwin3 | tdasilva: Oh I see. The deletion of the object by the auditor is the issue. Because the container listing could be made to show objects that need to expire. | 19:53 |
aerwin3 | mattoliverau: morning! | 19:53 |
*** flwang1 has quit IRC | 19:53 | |
*** resker has quit IRC | 19:54 | |
tdasilva | aerwin3: correct | 19:54 |
tdasilva | aerwin3: i need to take off for a bit to drive home, but will be back in about 1 hour for the swift meeting | 19:55 |
*** esker has joined #openstack-swift | 19:56 | |
aerwin3 | tdasilva: Ok I will need to think on an approach to address your use case. | 19:56 |
openstackgerrit | David Goetz proposed openstack/swift: go: replicator log time spent getting remote hashes https://review.openstack.org/221942 | 19:58 |
mattoliverau | So I will be flying during the meeting, I apologize; make sure someone is sarcastic in my absence... I will be reading the meeting log when I land ;) | 19:58 |
*** delattec has quit IRC | 20:08 | |
*** cdelatte has quit IRC | 20:08 | |
*** mvandijk has joined #openstack-swift | 20:11 | |
*** eranrom has quit IRC | 20:28 | |
*** esker has quit IRC | 20:35 | |
openstackgerrit | Alan Erwin proposed openstack/swift-specs: Adding expiring objecs spec https://review.openstack.org/221914 | 20:40 |
*** prnk28 has quit IRC | 20:44 | |
openstackgerrit | Minwoo Bae proposed openstack/swift: Reconstructor logging to omit 404 warnings https://review.openstack.org/221956 | 20:49 |
*** esker has joined #openstack-swift | 20:52 | |
*** kota_ has joined #openstack-swift | 20:55 | |
*** ChanServ sets mode: +v kota_ | 20:55 | |
kota_ | good morning | 20:56 |
wbhuber | kota_: good morning | 20:56 |
notmyname | hello kota_ | 20:56 |
*** esker has quit IRC | 20:56 | |
aerwin3 | kota_: morning | 20:57 |
*** ho has joined #openstack-swift | 20:58 | |
timburke | good morning kota_! | 20:58 |
notmyname | meeting time | 20:59 |
*** acoles_ is now known as acoles | 21:00 | |
*** darrenc_ has joined #openstack-swift | 21:02 | |
*** darrenc has quit IRC | 21:03 | |
*** dustins has quit IRC | 21:08 | |
*** trifon_ has quit IRC | 21:09 | |
acoles | clayg: i think i figured out my confusion with that probe test while driving home | 21:14 |
*** pberis has quit IRC | 21:17 | |
acoles | clayg: i'd forgotten that a handoff will revert tombstones to all primaries PLUS another handoff if one of the primaries is down | 21:18 |
clayg | acoles: that's true | 21:22 |
acoles | clayg: yeah. so with the probe test change you made in patch 218023, 2nd handoff reverts its tombstone first while one primary is still down, so attempts to revert to first handoff, which would fail without the fix in patch 217830 | 21:25 |
patchbot | acoles: https://review.openstack.org/#/c/218023/ | 21:25 |
ho | kota_: how about 燕軍団? | 21:27 |
kota_ | just 燕 seems better. | 21:28 |
ho | kota_: difference is team swift or just swift. :-) | 21:29 |
*** flwang1 has joined #openstack-swift | 21:37 | |
kota_ | ho: yup | 21:45 |
*** ChanServ changes topic to "Review Dashboard: https://goo.gl/eqeGwE | Summary Dashboard: https://goo.gl/jL0byl | Summit planning: https://etherpad.openstack.org/p/tokyo-summit-swift | Logs: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/" | 21:46 | |
notmyname | the summit planning etherpad is now in the topic message | 21:46 |
kota_ | if you want to mean "a team", "仲間" or "家族" seems better? | 21:46 |
ho | kota_: my image is たけし軍団 (Takeshi Kitano's team) :-) | 21:47 |
ho | kota_: the meaning of 迅速 does not "directly" fit swift but the characters look cool to me. | 21:48 |
wbhuber | kota_: acoles: minwoob: notmyname: I've added my preliminary (or alloying) .02 cents on EC patch/bug prioritization in the following page: https://wiki.openstack.org/wiki/Swift/PriorityReviews | 21:51 |
notmyname | wbhuber: thanks | 21:51 |
kota_ | wbhuber: thanks! | 21:51 |
minwoob | wbhuber: thanks! | 21:52 |
*** hrou has quit IRC | 21:52 | |
notmyname | kota_: ho: what is あまつばめ | 21:53 |
*** jlhinson has quit IRC | 21:54 | |
kota_ | notmyname: scientific name | 21:54 |
kota_ | written in "Hiragana" | 21:54 |
acoles | kota_: ho: アマツバメ科 ? | 21:56 |
acoles | wbhuber: k. thanks | 21:56 |
notmyname | minwoob: I had a question on https://bugs.launchpad.net/swift/+bug/1491883 | 21:59 |
openstack | Launchpad bug 1491883 in OpenStack Object Storage (swift) "Reconstructor complains 404 is invalid responce when building object" [Undecided,In progress] - Assigned to Minwoo Bae (minwoob) | 21:59 |
notmyname | let me see if I can go find it again... | 21:59 |
minwoob | notmyname: Okay | 22:00 |
*** acoles is now known as acoles_ | 22:00 | |
notmyname | minwoob: ah. so on replication, we do have the possibility of logging the 404 in update() but not in update_deleted(). and the reconstructor will log the 404 | 22:01 |
notmyname | minwoob: so I'm trying to reconcile in my head what ctennis and you are saying on the bug report | 22:02 |
clayg | acoles_: ok it's important to run the second handoff first because it actually *can* delete its handoff copy after it verifies it's in sync with the alive primaries + the first handoff | 22:02 |
clayg | acoles_: if you ran the first handoff first - it would find the failed primary and then realize the tombstone on itself is about as good as it's going to get and keep it around | 22:03 |
kota_ | acoles: same meaning between あまつばめ and アマツバメ. | 22:03 |
clayg | I think the tombstone revert jobs could be dramatically more optimistic about when it's safe to clean themselves up | 22:03 |
clayg | but... different issue | 22:03 |
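The revert ordering clayg and acoles work through above can be modeled with a toy decision function. This is an illustration of the described behavior under stated assumptions, not reconstructor code: a handoff may purge its tombstone only once every node it syncs to (the live primaries plus, when a primary is down, an extra handoff) has confirmed it; a dead primary means the handoff keeps its copy.

```python
# Toy model of the tombstone-revert decision: purge only when every
# sync target has acknowledged the tombstone.

def can_purge_tombstone(sync_targets, in_sync):
    """True only if every target node confirmed it holds the tombstone."""
    return all(node in in_sync for node in sync_targets)

# second handoff syncs to the live primaries plus the first handoff,
# all of which respond -- so it may clean itself up
print(can_purge_tombstone(['p1', 'p2', 'h1'], {'p1', 'p2', 'h1'}))
# first handoff still needs the dead primary p3, so it keeps its copy
print(can_purge_tombstone(['p1', 'p2', 'p3'], {'p1', 'p2'}))
```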
*** jkugel has quit IRC | 22:03 | |
*** jkugel has joined #openstack-swift | 22:04 | |
mattoliverau | And landed.. Sorry I missed the meeting | 22:04 |
minwoob | notmyname: It seems that our consensus last time was to remove the reconstructor 404 logging since apparently the warnings given by the object-server were argued to be good enough, and this way we won't fill up the logs. | 22:06 |
kota_ | mattoliverau: np | 22:07 |
minwoob | notmyname: ctennis: I was originally for keeping the reconstructor 404 logs, but the above seemed convincing, especially since reconstructor 404s aren't considered "unusual". | 22:08 |
ctennis | notmyname: I'm trying to say that the "Invalid response 404" is unnecessary | 22:08 |
notmyname | ctennis: minwoob: yeah, that makes sense. the thing that doesn't make sense to me is that object replication logs 404s | 22:08 |
minwoob | notmyname: I'll verify whether we should log the 404 in update() and not in update_deleted(), as per your suggestion. | 22:09 |
minwoob | notmyname: For replication | 22:10 |
notmyname | minwoob: heh, I'm not suggesting anything. just trying to understand | 22:11 |
minwoob | Ah. | 22:11 |
*** jkugel has quit IRC | 22:11 | |
*** ctrath has quit IRC | 22:12 | |
ho | notmyname, acoles, kota_: I'm not sure but あまつばめ is a kind of swallow | 22:18 |
notmyname | wbhuber: I'm not sure that https://bugs.launchpad.net/swift/+bug/1491908 is just a logging thing | 22:21 |
openstack | Launchpad bug 1491908 in OpenStack Object Storage (swift) "proxy server reports 202 even though EC backend didn't accept object" [Undecided,New] | 22:21 |
wbhuber | notmyname: taking another look at it... | 22:22 |
minwoob | notmyname: wbhuber: That one has to do with the container sync dependency. | 22:22 |
wbhuber | notmyname: again, the list i posted is up for debate | 22:22 |
minwoob | Fixing the 202 to 409 will break current container sync'ed clusters. | 22:22 |
wbhuber | minwoob: don't you think this one will be handled pre-LIberty? | 22:23 |
minwoob | notmyname: wbhuber: So before that one can be fixed, it looks like the 100-continue + timestamp idea for container sync needs to be implemented first, and then decide how to handle the 409 within that context. | 22:24 |
torgomatic | it might be worth breaking container sync to fix EC though; if right now you can PUT an object and have the cluster say 202 but not actually store the object, that's bad | 22:25 |
torgomatic | besides, wouldn't it only break container sync for EC containers? or is that incorrect? | 22:26 |
ctennis | torgomatic: it's not EC dependent | 22:27 |
torgomatic | oh, boo | 22:27 |
torgomatic | never mind then | 22:27 |
wbhuber | notmyname: you're right. i'll take that out of the logging set - can set its priority to something other than low, but the timetable is not limited to Liberty at least. | 22:27 |
ctennis | it's just that in my case, with a 3 node EC cluster, 1 node had a majority of responses, so if the clock is skewed back you get a majority of 409s | 22:28 |
torgomatic | OTOH, it only happens when your time rolls backwards...? | 22:28 |
ctennis | yes | 22:28 |
torgomatic | okay, so not as big a deal as I thought it was then | 22:28 |
torgomatic | still important, of course, but not oh-shit-stop-the-world | 22:28 |
ctennis | right | 22:28 |
wbhuber | ctennis: i'd be interested to try to recreate it on my cluster... looks quirky | 22:28 |
ctennis | just upload an object, stop ntp on some nodes, set the clock back on one or more nodes and reupload a new object. | 22:29 |
wbhuber | ctennis: notmyname: minwoob: med-high? | 22:29 |
notmyname | set the clock back by how much? clocks get out of sync | 22:30 |
ctennis | I think it does highlight the importance that accurate and synchronized clocks are paramount for EC though | 22:30 |
ctennis | notmyname: well, any time <= the original timestamp triggers the issue | 22:30 |
ctennis | the container-sync thing relies on the fact that if the objects are synchronized, the timestamps are equal, and thus you get a 409 if you try to overwrite it | 22:31 |
ctennis | so it says "a 409 is okay, so let's treat that as a 202" | 22:31 |
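The interaction ctennis describes can be sketched with a toy status model. This is a simplified illustration, not the real proxy code: backend object servers answer 409 when an incoming PUT's timestamp is <= the stored one, and the proxy folds a quorum of 409s into a 202 so container sync (which legitimately replays equal timestamps) succeeds -- which is exactly why a skewed-back clock yields a 202 for a PUT that stored nothing.

```python
# Toy model of the 409-as-202 behavior under discussion; statuses and
# quorum handling are simplified for illustration.

def backend_put(stored_ts, incoming_ts):
    """An object server rejects PUTs at or before the stored timestamp."""
    return 409 if incoming_ts <= stored_ts else 201

def proxy_status(backend_statuses):
    """Fold a majority of 409s into 202 (the bug report's complaint)."""
    if backend_statuses.count(409) * 2 > len(backend_statuses):
        return 202
    return 201 if 201 in backend_statuses else 503

# clock skewed back: every node already has an equal-or-newer timestamp
statuses = [backend_put(2000, 1500) for _ in range(3)]
print(statuses, proxy_status(statuses))
```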
jrichli | torgomatic: results of my quick attempt to find out what a fish goal is : http://www.standard.co.uk/news/goal-fish-thierry-shows-his-finishing-power-6685791.html | 22:32 |
*** AndreiaKumpera has quit IRC | 22:32 | |
*** kota_ has quit IRC | 22:33 | |
torgomatic | jrichli: it's a shame the final score wasn't eeleven to twona | 22:35 |
*** lcurtis has quit IRC | 22:35 | |
jrichli | hee hee | 22:35 |
openstackgerrit | Clay Gerrard proposed openstack/swift: Fix purge for tombstone only REVERT job https://review.openstack.org/218023 | 22:38 |
notmyname | minwoob: where is the 100-continue timestamp patch/bug you referenced? | 22:38 |
clayg | notmyname: I think there was one that ctennis filed that was more like 409 shouldn't emit 202 | 22:39 |
notmyname | clayg: https://bugs.launchpad.net/swift/+bug/1491908 ? | 22:39 |
openstack | Launchpad bug 1491908 in OpenStack Object Storage (swift) "proxy server reports 202 even though EC backend didn't accept object" [Undecided,New] | 22:39 |
notmyname | clayg: minwoob was saying it might depend on a different one | 22:39 |
minwoob | notmyname: eranrom: It was the one for allowing the proper handling of multiple uploads in the container sync scenario. | 22:40 |
minwoob | notmyname: That if we encounter a 100-continue | 22:41 |
minwoob | Then switch the 409 to a 417 | 22:41 |
notmyname | changing to a different marker for detecting that the data has been synced | 22:43 |
minwoob | For 409 to 202, only do that if the request has a container sync auth header | 22:45 |
minwoob | As a workaround to support existing container sync clients. | 22:45 |
notmyname | so why is this an EC specific thing? wouldn't the same thing happen for replicated data? | 22:45 |
minwoob | That seems to be the case. | 22:46 |
minwoob | I think ctennis must have filed it as they were doing testing on their EC cluster. | 22:46 |
notmyname | sure | 22:46 |
minwoob | Along with the other bugs. | 22:46 |
openstackgerrit | Merged openstack/swift: go: replicator log time spent getting remote hashes https://review.openstack.org/221942 | 22:47 |
*** jrichli has quit IRC | 22:49 | |
*** darrenc_ is now known as darrenc | 22:51 | |
wbhuber | ctennis: regarding https://bugs.launchpad.net/swift/+bug/1491605, is there any assessment that randomizing reconstructor jobs would improve performance as opposed to serially processing jobs disk by disk? | 22:52 |
openstack | Launchpad bug 1491605 in OpenStack Object Storage (swift) "Reconstructor jobs are ordered by disk instead of randomized" [Undecided,New] | 22:52 |
ctennis | wbhuber: that's my hunch yeah | 22:52 |
ctennis | wbhuber: but I'm not sure if it has all of the jobs at first... it seems like it builds them per disk. | 22:53 |
notmyname | https://bugs.launchpad.net/swift/+bug/1484598 and https://review.openstack.org/#/c/213147/ seems to be a Big Deal | 22:53 |
openstack | Launchpad bug 1484598 in OpenStack Object Storage (swift) "Proxy server ignores additional fragments on primary nodes" [Undecided,In progress] - Assigned to paul luse (paul-e-luse) | 22:53 |
notmyname | clayg: right? ^ | 22:54 |
notmyname | ie should be very high priority | 22:54 |
wbhuber | notmyname: +1 | 22:54 |
*** rjaiswal has joined #openstack-swift | 22:55 | |
wbhuber | ctennis: i'd need to re-test the job handling logic first to see how exactly that works. so basically, it'd go through each of a single node's disks and process the jobs therein. it'd be troublesome if the node itself is hitting some latency. | 22:55 |
clayg | notmyname: I think it's a lower priority than optimistic GETs | 22:56 |
ctennis | wbhuber: yeah that sounds right | 22:56 |
notmyname | clayg: optimistic gets is https://review.openstack.org/#/c/215276/ ? | 22:57 |
clayg | ignoring multiple frag indexes for the same etag/timestamp, coupled with the better bucketing of Ib710a133ce1be278365067fd0d6610d80f1f7372, should be sufficient to "work around" the multiple frag issues during *normal* rebalances (where fewer than parity part-replicas need to move between a reconstructor cycle-time/ring-rebalance) | 22:58 |
notmyname | that one has landed | 22:59 |
clayg | yeah, etag buckets is in - but it's not throwing out potential duplicates | 22:59 |
clayg | yeah acoles patch 215276 is the beginning of the work on optimistic GETs | 23:00 |
patchbot | clayg: https://review.openstack.org/#/c/215276/ | 23:00 |
notmyname | ok | 23:00 |
clayg | timeouts on write resulting in missing .durables is a thing that we've all observed under load - that coupled with some disk failures or network timeouts on GETs is causing data that should be able to be served to come up 404 :'( | 23:00 |
notmyname | so that one depends on the patch that closes https://bugs.launchpad.net/swift/+bug/1484598, so I'm setting that bug to critical. it's important and is blocking other important stuff | 23:01 |
openstack | Launchpad bug 1484598 in OpenStack Object Storage (swift) "Proxy server ignores additional fragments on primary nodes" [Critical,In progress] - Assigned to paul luse (paul-e-luse) | 23:01 |
clayg | the rebalance multiple frag holding primary thing is not as well understood IMHO | 23:01 |
*** km has joined #openstack-swift | 23:01 | |
notmyname | also, acoles_ needs to rebase his patch ;-) | 23:01 |
clayg | ... but it definitely requires a rebalance, and if we *just* threw out duplicate frags (even without going back to re-read the second frag from a primary holding multiples) that may be enough | 23:02 |
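The etag-bucketing and duplicate-frag idea clayg refers to can be sketched as follows. This is an illustrative toy, not the proxy's actual bucketing code: responses are grouped by (etag, timestamp), at most one response per fragment index is kept within a bucket, and only distinct indexes count toward the ndata fragments needed to decode. Field names are invented for the sketch.

```python
# Sketch of "throw out duplicate frags": two copies of the same
# fragment index cannot both help reconstruct the object, so dedupe
# per bucket before counting.

def usable_frags(responses, ndata):
    buckets = {}
    for resp in responses:
        key = (resp['etag'], resp['timestamp'])
        # first response wins for each fragment index within a bucket
        buckets.setdefault(key, {}).setdefault(resp['frag_index'], resp)
    best = max(buckets.values(), key=len)
    return len(best) >= ndata, sorted(best)

responses = [
    {'etag': 'e1', 'timestamp': 1, 'frag_index': 0},
    {'etag': 'e1', 'timestamp': 1, 'frag_index': 0},  # duplicate index
    {'etag': 'e1', 'timestamp': 1, 'frag_index': 1},
]
print(usable_frags(responses, ndata=2))
```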
clayg | well - I think the chain is sorta out of whack - and the patch doesn't really take on the whole thing - I think it started with just making it possible to *ask* an object server to return a non-durable frag | 23:02 |
clayg | current thinking is we'd be better off most of the time if we just get the non-durable frags with a marker that says "hopefully someone else can vouch for the reconstructability of this - but here's the latest timestamp I have for the name" | 23:03 |
clayg | where "current" is circa Austin | 23:04 |
*** hrou has joined #openstack-swift | 23:04 | |
clayg | ... but I think we lost track of some of it during the reconstructor fallout coming out of the intel benchmarks | 23:04 |
notmyname | torgomatic: can you please look at https://bugs.launchpad.net/swift/+bug/1491669 and set the "importance" flag? | 23:05 |
openstack | Launchpad bug 1491669 in OpenStack Object Storage (swift) "Balance of EC ring seems odd compared to replica ring" [Undecided,New] | 23:05 |
clayg | I think those fixes are all up for review - and ctennis & paul have commented that they are critical to having successfully stressful benchmarks | 23:05 |
ctennis | notmyname: this one doesn't show up in my normal list but I think it's probably medium: https://bugs.launchpad.net/swift/+bug/1491676 | 23:06 |
openstack | Launchpad bug 1491676 in OpenStack Object Storage (swift) "Reconstructor has some troubles with tombstones on full drives" [Undecided,New] | 23:06 |
notmyname | ctennis: I was just looking at that one | 23:06 |
ctennis | ah ok | 23:06 |
notmyname | clayg: yeah, my next step is to go through associated patches and see what's proposed | 23:06 |
ctennis | when I look at the list of bugs I've reported, it doesn't show up for some reason | 23:06 |
ctennis | an nm there it is, it's not sorted in order | 23:06 |
torgomatic | notmyname: done | 23:07 |
notmyname | torgomatic: thanks | 23:07 |
clayg | torgomatic: you got there so much quicker than I did! | 23:08 |
clayg | notmyname: well did you find the bug for timeouts writing .durables? if you're going to make lp bug #1484598 a release blocker you have to do the same with that one too | 23:10 |
openstack | Launchpad bug 1484598 in OpenStack Object Storage (swift) "Proxy server ignores additional fragments on primary nodes" [Critical,In progress] https://launchpad.net/bugs/1484598 - Assigned to paul luse (paul-e-luse) | 23:10 |
*** david-lyle has quit IRC | 23:11 | |
notmyname | ctennis: https://bugs.launchpad.net/swift/+bug/1491676 looks like it might be the normal "full clusters are bad, m'kay?" | 23:11 |
openstack | Launchpad bug 1491676 in OpenStack Object Storage (swift) "Reconstructor has some troubles with tombstones on full drives" [Undecided,New] | 23:11 |
ctennis | notmyname: not even full cluster, just full disk. | 23:11 |
notmyname | ctennis: well, ok. same thing ;-) | 23:12 |
*** david-lyle has joined #openstack-swift | 23:12 | |
ctennis | perhaps, but it's for a different reason. and based on this, fixing a full EC cluster is going to be much more difficult. | 23:12 |
clayg | ctennis: :'( | 23:13 |
ctennis | clayg: don't frown, I'm guessing there's a fix :) | 23:14 |
notmyname | clayg: https://bugs.launchpad.net/swift/+bug/1469094 I hadn't looked at that yet since it was already ranked | 23:14 |
openstack | Launchpad bug 1469094 in OpenStack Object Storage (swift) "Timeout writing .durable can cause error on GET (under failure)" [High,Confirmed] | 23:14 |
torgomatic | not harder than replication, right? rsync can't push tombstones to a full disk either | 23:14 |
ctennis | but that wasn't the issue here. in a 24 disk replicated system, if 3 disks "fill" I would easily expect them to be able to clear themselves up on their own | 23:15 |
clayg | ctennis: do the full disks have any handoff partitions (partitions balanced *off* of them) still lying about? or is it just that the parts that belong on the disk won't fit? | 23:15 |
ctennis | the issue in replication is things can't unfill because other disks are full so they have nowhere to go | 23:15 |
ctennis | in this setup, the disks just couldn't unfill even though there were plenty of places for data to go | 23:16 |
ctennis | clayg: I've since wiped things | 23:16 |
ctennis | in other words, in a replicated cluster..if I was full and added new drives, things would start spreading out. here I had full disks, added new drives and they never unfilled. | 23:19 |
clayg | ctennis: k - I think there's some missing tracebacks | 23:19 |
ctennis | possible. I can recreate it again | 23:20 |
clayg | ctennis: it's not clear what was failing in the push-data-off-the-full-disks path | 23:20 |
ctennis | ok | 23:20 |
clayg | ctennis: maybe the re-hash? | 23:20 |
*** jkugel has joined #openstack-swift | 23:20 | |
*** darrenc is now known as darrenc_afk | 23:20 | |
*** amit213 has quit IRC | 23:22 | |
*** amit213 has joined #openstack-swift | 23:22 | |
ctennis | clayg: I still have the all.log from that day | 23:22 |
*** david-lyle has quit IRC | 23:22 | |
notmyname | ctennis: thanks. I marked it as incomplete | 23:22 |
*** david-lyle has joined #openstack-swift | 23:23 | |
clayg | ctennis: could be useful! upload it somewhere I can get at it? | 23:25 |
notmyname | ok, (except for that last one about full drives) all of the EC tagged bugs have been given a priority. next up is to review and set ones that should be release blockers | 23:25 |
notmyname | why does LP show me bugs that are Fix Committed by default? | 23:29 |
clayg | notmyname: because if it didn't you'd forget about them! | 23:30 |
notmyname | I'm ok with that! they're "done" ;-) | 23:30 |
clayg | :) | 23:30 |
notmyname | tests pass, deploy to prod! ;-) | 23:31 |
notmyname | torgomatic: for https://bugs.launchpad.net/swift/+bug/1452431, this is more likely in EC, but could happen in replicated storage too? and the requirement is that it will happen when the device count is < primary nodes? | 23:32 |
openstack | Launchpad bug 1452431 in OpenStack Object Storage (swift) "some parts replicas assigned to duplicate devices in the ring" [Medium,Confirmed] - Assigned to Samuel Merritt (torgomatic) | 23:32 |
torgomatic | notmyname: yes and no, respectively | 23:32 |
notmyname | oh, ok. so it's more likely than that? what's the trigger? | 23:33 |
torgomatic | it *will* happen if #devs < #replicas, but it *can* happen if things aren't well-balanced | 23:33 |
notmyname | like the "region less than 1 regionth of the cluster" situation? | 23:34 |
torgomatic | yep | 23:34 |
torgomatic | 3 zones of unequal weights might make it happen | 23:34 |
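The condition in the bug torgomatic describes can be checked mechanically. This sketch assumes a table shaped like the ring's replica-to-part-to-device mapping (one row per replica, one column per partition); the data is invented, and this is a standalone illustration rather than ring-builder code.

```python
# Find partitions whose replicas land on the same device -- the
# "duplicate devices" situation, guaranteed when #devs < #replicas
# but also possible with badly unbalanced zones.

def parts_with_duplicate_devs(replica2part2dev):
    nparts = len(replica2part2dev[0])
    bad = []
    for part in range(nparts):
        devs = [row[part] for row in replica2part2dev]
        if len(set(devs)) < len(devs):
            bad.append(part)
    return bad

# 3 replicas, 4 partitions; partition 2 puts two replicas on device 5
table = [
    [0, 1, 5, 3],
    [2, 3, 5, 0],
    [4, 0, 1, 2],
]
print(parts_with_duplicate_devs(table))
```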
clayg | "replicanth" | 23:35 |
notmyname | clayg: yes, a better word. more cromulent in every way | 23:36 |
notmyname | torgomatic: ok, I'll leave it at medium and remove the EC tag | 23:36 |
*** kei_yama has joined #openstack-swift | 23:36 | |
*** bill_az_ has joined #openstack-swift | 23:43 | |
*** minwoob has quit IRC | 23:50 | |
*** kota_ has joined #openstack-swift | 23:58 | |
*** ChanServ sets mode: +v kota_ | 23:58 | |
kota_ | I'm back here, sitting in my office. | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!