notmyname | pandemicsyn2: now there is https://pypi.python.org/pypi/gertty/1.0.0 instead of fgerrit | 00:02 |
---|---|---|
openstackgerrit | A change was merged to openstack/swift: Avoid usage of insecure mktemp() function https://review.openstack.org/109793 | 00:09 |
clayg | it's cute because tty - i get it | 00:10 |
*** jergerber has quit IRC | 00:10 | |
*** kyles_ne has joined #openstack-swift | 00:12 | |
*** tkay has quit IRC | 00:14 | |
mattoliverau | and gertty works offline, so you can review while flying ;) | 00:16 |
kyles_ne | is there anyone here who could answer a few questions about how glance is supposed to handle swift as a backend? | 00:19 |
kyles_ne | (I asked on glance channel, still waiting for a response there, figured I'd ask here too) | 00:20 |
*** kyles_ne has quit IRC | 00:21 | |
*** dmorita has joined #openstack-swift | 00:25 | |
clayg | depends on how specific the question is i 'spose | 00:31 |
clayg | did you ask *the question* over there - or just ask if you can ask? | 00:31 |
clayg | 'cause you should definitely ask here, dunno if we'll be able to help tho | 00:32 |
tab__ | when will erasure codes be available in swift? | 00:35 |
torgomatic | tab__: when they're done | 00:38 |
torgomatic | sounds flippant but I'm actually serious; I hate giving out dates because they're never right | 00:39 |
*** tab__ has quit IRC | 00:39 | |
torgomatic | either the code lands after the date and people get cranky because it's "late", or it lands before the date and I look like I was sandbagging | 00:39 |
*** judd7 has joined #openstack-swift | 00:41 | |
*** occupant has joined #openstack-swift | 00:44 | |
zaitcev | clayg: 9 to 1, this guy had ntpd die on him - https://bugs.launchpad.net/swift/+bug/1365330 | 00:54 |
zaitcev | clayg: sorry, it's the "cyclic replication" bug | 00:54 |
clayg | tab_: just ask notmyname, he's better at gauging the "tone" of torgomatic's "nope" when he asks if it's done yet | 00:56 |
*** addnull has joined #openstack-swift | 01:05 | |
*** shri has left #openstack-swift | 01:12 | |
*** mwstorer has quit IRC | 01:14 | |
openstackgerrit | A change was merged to openstack/swift: Spelling mistakes corrected in comments. https://review.openstack.org/118701 | 01:14 |
*** xrandr has left #openstack-swift | 01:31 | |
*** echevemaster has joined #openstack-swift | 01:41 | |
*** nosnos has joined #openstack-swift | 01:49 | |
*** elambert has joined #openstack-swift | 01:56 | |
openstackgerrit | Matthew Oliver proposed a change to openstack/swift: Add concurrent reads option to proxy https://review.openstack.org/117710 | 02:10 |
*** tkay has joined #openstack-swift | 02:12 | |
*** elambert has quit IRC | 02:16 | |
*** elambert has joined #openstack-swift | 02:23 | |
*** haomaiwang has joined #openstack-swift | 02:30 | |
*** elambert has quit IRC | 02:33 | |
*** elambert has joined #openstack-swift | 02:34 | |
*** haomaiwang has quit IRC | 02:35 | |
*** elambert has quit IRC | 02:38 | |
*** addnull has quit IRC | 02:56 | |
*** addnull has joined #openstack-swift | 02:57 | |
*** tkay has quit IRC | 03:12 | |
openstackgerrit | A change was merged to openstack/swift: Small Fix for FakeServerConnection https://review.openstack.org/117679 | 03:13 |
*** zaitcev has quit IRC | 03:34 | |
*** tab_ has quit IRC | 03:40 | |
*** judd7 has quit IRC | 04:15 | |
*** tkay has joined #openstack-swift | 04:20 | |
*** miqui has quit IRC | 04:23 | |
*** tkay has quit IRC | 04:26 | |
*** addnull has quit IRC | 05:01 | |
*** kopparam has joined #openstack-swift | 05:04 | |
*** pberis has quit IRC | 05:11 | |
*** echevemaster has quit IRC | 05:12 | |
*** ppai has joined #openstack-swift | 05:24 | |
*** pberis has joined #openstack-swift | 05:31 | |
*** ttrumm has joined #openstack-swift | 05:37 | |
*** ttrumm_ has joined #openstack-swift | 05:38 | |
*** ttrumm has quit IRC | 05:41 | |
openstackgerrit | Matthew Oliver proposed a change to openstack/swift: Treat 404s as 204 on object delete in proxy https://review.openstack.org/114120 | 05:52 |
openstackgerrit | Matthew Oliver proposed a change to openstack/swift: Treat 404s as 204 on object delete in proxy https://review.openstack.org/114120 | 06:22 |
*** k4n0 has joined #openstack-swift | 06:25 | |
*** gyee has quit IRC | 06:25 | |
*** kopparam has quit IRC | 06:30 | |
*** kopparam has joined #openstack-swift | 06:30 | |
*** kopparam has quit IRC | 06:35 | |
*** ttrumm has joined #openstack-swift | 06:35 | |
*** ttrumm_ has quit IRC | 06:38 | |
openstackgerrit | Jamie Lennox proposed a change to openstack/swift: Use identity_uri instead of auth fragments https://review.openstack.org/119300 | 06:51 |
*** homegrown has quit IRC | 06:57 | |
*** bvandenh has joined #openstack-swift | 06:58 | |
*** mrsnivvel has quit IRC | 06:59 | |
*** kopparam has joined #openstack-swift | 07:01 | |
*** mrsnivvel has joined #openstack-swift | 07:02 | |
mattoliverau | OK my brain is failing to review code, I think I've been staring too long. I'm going to call it a day, have a great weekend all. | 07:04 |
*** homegrown has joined #openstack-swift | 07:05 | |
*** homegrown has left #openstack-swift | 07:05 | |
*** homegrown has joined #openstack-swift | 07:06 | |
*** tsg has joined #openstack-swift | 07:08 | |
*** kopparam has quit IRC | 07:13 | |
*** kopparam has joined #openstack-swift | 07:14 | |
*** kopparam has quit IRC | 07:18 | |
*** tsg has quit IRC | 07:25 | |
*** ppai has quit IRC | 07:29 | |
*** tsg has joined #openstack-swift | 07:38 | |
*** tsg has quit IRC | 07:42 | |
*** ppai has joined #openstack-swift | 07:42 | |
*** swat30 has quit IRC | 07:46 | |
*** swat30 has joined #openstack-swift | 07:48 | |
*** mkollaro has joined #openstack-swift | 07:49 | |
openstackgerrit | Lin Yang proposed a change to openstack/swift: Change method _sort_key_for to static https://review.openstack.org/119312 | 07:58 |
*** kopparam has joined #openstack-swift | 08:04 | |
*** mandarine has joined #openstack-swift | 08:13 | |
mandarine | Good morning, there :) | 08:13 |
mandarine | I have a bit of a problem with a "Not Found" (404) swift container that still appears in the account :( | 08:15 |
mandarine | Like, if I curl http://swift.url/v1/AUTH_myaccount | grep "mycontainer", it appears | 08:19 |
mandarine | But when I curl http://swift.url/v1/AUTH_myaccount/mycontainer , I get a 404 from the container server :( | 08:19 |
mandarine | I feel like this is a corrupted account and that the file should not appear. But is there a way to check this ? :x | 08:20 |
ahale | mandarine: i would have a poke around the cluster with swift-get-nodes, see if all account replicas are consistent, check replication.. if it's been deleted, try to track down the delete line and work out what happened that way | 08:37 |
mandarine | all account replicas are the same, yup :( | 08:40 |
mandarine | I'll have a look at swift-get-nodes | 08:41 |
mandarine | Thank you very much :) | 08:45 |
mandarine | The problem is not solved at all but I can check more deeply | 08:45 |
*** ppai has quit IRC | 08:54 | |
*** ppai has joined #openstack-swift | 09:10 | |
*** dmorita has quit IRC | 09:15 | |
*** k4n0 has quit IRC | 09:16 | |
*** vr2 has joined #openstack-swift | 09:22 | |
*** ppai has quit IRC | 09:23 | |
vr2 | in your office | 09:32 |
*** ppai has joined #openstack-swift | 09:35 | |
*** JelleB is now known as a1|away | 10:22 | |
*** bvandenh has quit IRC | 10:28 | |
*** bvandenh has joined #openstack-swift | 10:29 | |
*** aix has joined #openstack-swift | 10:38 | |
*** kopparam has quit IRC | 10:39 | |
*** kopparam has joined #openstack-swift | 10:40 | |
*** ttrumm_ has joined #openstack-swift | 10:47 | |
*** ttrumm has quit IRC | 10:49 | |
*** kopparam has quit IRC | 10:55 | |
*** kopparam has joined #openstack-swift | 10:56 | |
*** kopparam has quit IRC | 11:01 | |
*** mkollaro has quit IRC | 11:02 | |
*** tongli has joined #openstack-swift | 11:30 | |
*** kopparam has joined #openstack-swift | 11:40 | |
*** mahatic has joined #openstack-swift | 11:45 | |
*** dmsimard_away is now known as dmsimard | 11:53 | |
*** k4n0 has joined #openstack-swift | 11:55 | |
*** openstackgerrit has quit IRC | 12:01 | |
*** openstackgerrit has joined #openstack-swift | 12:03 | |
*** zacksh has quit IRC | 12:09 | |
*** zacksh has joined #openstack-swift | 12:11 | |
*** tab_ has joined #openstack-swift | 12:18 | |
tab_ | on what basis does Swift decide that a disk has gone bad - is this only on the basis of the swift disk-audit script, or is there also some read/write response factor, where if it takes too long the disk is declared dead? | 12:19 |
*** nosnos has quit IRC | 12:19 | |
*** nosnos has joined #openstack-swift | 12:20 | |
*** mkollaro has joined #openstack-swift | 12:22 | |
*** nosnos has quit IRC | 12:24 | |
*** zacksh has quit IRC | 12:36 | |
*** zacksh has joined #openstack-swift | 12:38 | |
*** kopparam has quit IRC | 12:38 | |
*** kopparam has joined #openstack-swift | 12:39 | |
*** ppai has quit IRC | 12:41 | |
*** kopparam has quit IRC | 12:44 | |
*** zacksh has quit IRC | 12:44 | |
*** zacksh has joined #openstack-swift | 12:46 | |
*** miqui has joined #openstack-swift | 12:50 | |
btorch | tab_: swift-drive-audit | 12:53 |
btorch | tab_: although, we have recently encountered some scenarios where there is xfs corruption on some drives and the kernel doesn't specify the device block or shut down the filesystem | 12:54 |
*** morganfainberg has quit IRC | 12:54 | |
btorch | tab_: so you might want to have some other monitoring checks .. this is on ubuntu precise with dell perc cards btw | 12:55 |
btorch | tab_: https://github.com/pandemicsyn/stalker | 12:55 |
*** morganfainberg has joined #openstack-swift | 12:57 | |
tab_ | btorch: thx. yes, i think possible disk end-of-life can be considered much more in advance | 12:59 |
tab_ | for example, testing write speed under priority; if a controller goes bad, this would slow down over time | 12:59 |
tab_ | btorch: maybe also a question for you. what does swift do in case a disk drive, or the whole cluster, is full? | 13:00 |
tab_ | does it still enable reads when writes are not possible? how does swift respond to this situation? | 13:00 |
btorch | tab_: all drives or a couple of drives within the cluster ? | 13:00 |
tab_ | if we can cover both scenarios, i would be grateful | 13:01 |
tab_ | i guess when a couple of disks are full, data would be moved to empty ones? | 13:01 |
tab_ | what if moving data is not possible? | 13:01 |
*** zacksh has quit IRC | 13:04 | |
*** zacksh has joined #openstack-swift | 13:05 | |
btorch | tab_: brb have to look at an issue | 13:07 |
tab_ | ok thx | 13:08 |
*** tdasilva has joined #openstack-swift | 13:11 | |
btorch | tab_: ok so first, you don't want to have all drives full :) so consider a drive full at 75%-80%, so you have enough time (hopefully) to get new zones in or scale current ones | 13:12 |
tab_ | ok i understand, but shit happens :) | 13:12 |
btorch | tab_: when a drive is 100% full, either because of dark data, too many handoff objects .. etc, swift will just place an object that should have gone to that drive onto another one (handoff) | 13:14 |
tab_ | is the number of handoff nodes per zone configurable? | 13:14 |
*** bkopilov has quit IRC | 13:18 | |
tab_ | what if even the hand-off nodes are full? I just want to make sure that Swift still allows reading data from the cluster? | 13:19 |
btorch | tab_: no you can't configure that as far as I know | 13:20 |
btorch | tab_: I think on read it will try the primary nodes and 3 other handoff nodes and if it can't find the object you will get a 404 | 13:21 |
*** mrsnivvel has quit IRC | 13:21 | |
btorch | tab_: I think deletes only go to the primary nodes | 13:22 |
btorch | tab_: I think put/posts also just does up to 6 .. I could swear there was an option for that in the proxy though | 13:25 |
*** morganfainberg has quit IRC | 13:25 | |
tab_ | btorch: what do you mean up to 6? :) i don't think i follow you here | 13:26 |
wer | for each replica there is a corresponding handoff that gets hashed AFAIK. It's not "node" specific but rather object specific. But it would likely be another node, disk, failure domain. And each zone would have handoffs for each replica. | 13:26 |
btorch | tab_: reads should not be an issue, but obviously just because you as the end user are not trying to write to the system doesn't mean that the swift background jumps like replicators, updaters ...etc are not trying to | 13:26 |
wer | swift-get-nodes will compute the locations for you if you are curious. | 13:27 |
*** morganfainberg has joined #openstack-swift | 13:27 | |
btorch | background jobs :) | 13:27 |
tab_ | ok. i understand. thx | 13:27 |
tab_ | yes there is always someone writing to the wall :) | 13:27 |
wer | swift-get-nodes <ring> objects account would show you all the locations (6 if replication is x3: 3 primary locations and 3 handoff locations...) | 13:28 |
*** ttrumm_ has quit IRC | 13:29 | |
*** ttrumm has joined #openstack-swift | 13:29 | |
btorch | swift-get-nodes <account ring> account_hash :) if you need container then <container ring> account container .. and so on | 13:29 |
tab_ | btorch: even a "just" read operation also wants to be logged, which is followed by a write operation | 13:31 |
tab_ | so i guess in a full system weird things are going on? | 13:32 |
*** bvandenh has quit IRC | 13:33 | |
btorch | tab_: sorry not sure I follow you there | 13:33 |
btorch | tab_: yeah weird things happen :) you could say that | 13:34 |
tab_ | if someone calls a READ Swift API call to get an object, then this is logged; in case there is no space left on the devices... then possibly reads are disturbed too | 13:34 |
*** bkopilov has joined #openstack-swift | 13:35 | |
btorch | tab_: nah, never seen that. if you have 3 replicas it will read from the first primary and (if the object is not there for any reason) it will try another node where the object should be located | 13:37 |
ahale | btorch: isn't it just that get-nodes shows the first 3 handoffs by default, but really every possible other drive is a potential handoff | 13:39 |
*** vr2 has quit IRC | 13:39 | |
*** vr2 has joined #openstack-swift | 13:39 | |
btorch | ahale: yeah true .. remember the time get nodes showed all handoffs :) | 13:39 |
ahale | yeah :) | 13:40 |
*** ttrumm has quit IRC | 13:42 | |
*** zacksh has quit IRC | 13:44 | |
*** zacksh has joined #openstack-swift | 13:46 | |
wer | sorry if I was busting in on your conversation guys :) | 13:46 |
btorch | wer: nah the more the better :) | 13:47 |
wer | cool :) | 14:03 |
*** morganfainberg has quit IRC | 14:18 | |
*** morganfainberg has joined #openstack-swift | 14:27 | |
*** vr3 has joined #openstack-swift | 14:34 | |
*** vr2 has quit IRC | 14:34 | |
*** jergerber has joined #openstack-swift | 14:46 | |
*** ttrumm has joined #openstack-swift | 14:48 | |
*** ttrumm has quit IRC | 14:49 | |
*** tab__ has joined #openstack-swift | 14:53 | |
*** k4n0 has quit IRC | 14:57 | |
openstackgerrit | Lorcan Browne proposed a change to openstack/swift: Update swift-init restart to accommodate unsuccessful stops https://review.openstack.org/116944 | 14:59 |
*** tsg has joined #openstack-swift | 15:05 | |
*** Guest18621 is now known as annegentle | 15:16 | |
*** gyee has joined #openstack-swift | 15:49 | |
*** homegrown has left #openstack-swift | 15:56 | |
dfg | clayg: finally got around to cleaning up that slowdown middleware. its not much but here you go: https://github.com/dpgoetz/slowdown | 16:01 |
*** mahatic has quit IRC | 16:01 | |
dfg | clayg: its a little weird because the feeding out bytes thing but I had that there from when I was testing the ranged retry on failures for GETs. | 16:03 |
*** kyles_ne has joined #openstack-swift | 16:07 | |
*** elambert has joined #openstack-swift | 16:08 | |
openstackgerrit | Tushar Gohad proposed a change to openstack/swift: EC: Make quorum_size() specific to storage policy https://review.openstack.org/111067 | 16:18 |
*** mordred has quit IRC | 16:18 | |
*** mordred has joined #openstack-swift | 16:18 | |
*** tkay has joined #openstack-swift | 16:25 | |
*** vr3 has quit IRC | 16:26 | |
notmyname | good morning, world | 16:26 |
*** vr1 has joined #openstack-swift | 16:26 | |
peluse | guten morgen | 16:48 |
openstackgerrit | John Dickinson proposed a change to openstack/swift: make the bind_port config setting required https://review.openstack.org/118200 | 16:54 |
notmyname | pep8 fix and a better log message. | 16:55 |
*** mkollaro has quit IRC | 17:01 | |
*** vr1 has quit IRC | 17:02 | |
*** openstackgerrit has quit IRC | 17:04 | |
*** HenryG is now known as HenryThe8th | 17:12 | |
*** elambert has quit IRC | 17:22 | |
*** elambert has joined #openstack-swift | 17:23 | |
*** homegrow_ has joined #openstack-swift | 17:23 | |
*** IgnacioCorderi has joined #openstack-swift | 17:26 | |
*** homegrow_ has quit IRC | 17:28 | |
*** homegrow_ has joined #openstack-swift | 17:29 | |
*** IgnacioCorderi has quit IRC | 17:30 | |
*** kopparam has joined #openstack-swift | 17:32 | |
*** homegrow_ has quit IRC | 17:36 | |
*** homegrow_ has joined #openstack-swift | 17:37 | |
*** kopparam has quit IRC | 17:40 | |
*** kopparam has joined #openstack-swift | 17:40 | |
*** homegrow_ has quit IRC | 17:44 | |
*** kopparam has quit IRC | 17:45 | |
*** homegrown has joined #openstack-swift | 17:45 | |
clayg | dfg: that's awesome! | 17:58 |
*** mwstorer has joined #openstack-swift | 17:59 | |
dfg | clayg: glad you like it :) | 18:01 |
*** homegrown has quit IRC | 18:03 | |
*** shri has joined #openstack-swift | 18:03 | |
*** IgnacioCorderi has joined #openstack-swift | 18:03 | |
clayg | dfg: well i know how much you're into slo'ing things down | 18:03 |
dfg | ya. i'm good at making things slow and overly complicated. glange loves pointing that out :) | 18:05 |
*** shri has quit IRC | 18:08 | |
*** zaitcev has joined #openstack-swift | 18:09 | |
*** ChanServ sets mode: +v zaitcev | 18:09 | |
*** shri has joined #openstack-swift | 18:10 | |
*** aix has quit IRC | 18:10 | |
*** openstackgerrit has joined #openstack-swift | 18:12 | |
openstackgerrit | Thiago da Silva proposed a change to openstack/swift: moving object validation checks to top of PUT method https://review.openstack.org/115995 | 18:15 |
*** echevemaster has joined #openstack-swift | 18:19 | |
openstackgerrit | A change was merged to openstack/swift-bench: Work toward Python 3.4 support and testing https://review.openstack.org/118812 | 18:25 |
*** kenhui has joined #openstack-swift | 18:36 | |
*** tsg has quit IRC | 18:37 | |
*** shri has quit IRC | 18:37 | |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: Error limit the right node on object PUT https://review.openstack.org/119442 | 18:43 |
clayg | glange! I miss glange! | 18:47 |
clayg | dfg: do you guys have travel figured out for this fall? Who's going to the hackathon; whos going to the summit? | 18:48 |
glange | the paris summit? redbo might be going to that, I think | 18:48 |
dfg | clayg: i'm pretty sure i'll be going | 18:49 |
redbo | hurricanerix_ is going to the hackathon, not sure if anyone else is. dfg and I are going to the summit. | 18:49 |
peluse | I'm heading to both | 18:49 |
glange | where is the hackathon? | 18:50 |
hurricanerix_ | glange: Boston | 18:50 |
glange | lame :) | 18:50 |
glange | a bunch of openstack developers in Paris sounds pretty romantic | 18:51 |
peluse | we're going on a dinner cruise I think | 18:51 |
glange | somebody should warn the ladies of France :) | 18:51 |
notmyname | eating snails in the shadow of the eiffel tower | 18:51 |
clayg | you mean sheading bikes? | 18:52 |
clayg | wait... i don't think thats the verb form of that :\ | 18:52 |
clayg | glange: you're a language snob help me out here | 18:52 |
glange | well, English isn't a fully constructive language | 18:53 |
glange | so you have to look up the word to see if it is valid | 18:53 |
clayg | torgomatic: what do you call two crows sitting on a tree branch? | 18:53 |
*** geaaru has quit IRC | 18:53 | |
glange | if you were speaking in Esperanton, it and any other constructed word would be valid | 18:53 |
openstackgerrit | Thiago da Silva proposed a change to openstack/swift: moving object validation checks to top of PUT method https://review.openstack.org/115995 | 18:53 |
glange | Esperanto | 18:53 |
glange | http://www.newyorker.com/magazine/2012/12/24/utopian-for-beginners <-- clayg: that is an example of a constructed language that went too far | 18:55 |
clayg | glange: i hear i18n support is coming back into swift - we could work on that Esperanto translation | 18:55 |
redbo | I don't know if you can talk sensibly about conjugations of a noun you've just verbed. | 18:55 |
clayg | glange: oh I read that, freaky! | 18:55 |
glange | clayg: yeah, that is a strange story | 18:56 |
glange | I really think Esperanto (or something like it) is a good idea | 18:56 |
glange | you learn your native language plus one easy to learn international language and you are all set | 18:56 |
peluse | clayg, so what about the two crows? | 18:57 |
clayg | peluse: torgomatic: attempted murder | 18:58 |
*** openstackgerrit has quit IRC | 19:01 | |
clayg | awww man... nothing? | 19:02 |
notmyname | clayg: :-) | 19:03 |
*** openstackgerrit has joined #openstack-swift | 19:03 | |
tdasilva | clayg: I had to look up the explanation for that one: http://www.mirror.co.uk/news/weird-news/worlds-geekiest-jokes-explained-after-2051303 | 19:03 |
*** elambert has quit IRC | 19:18 | |
peluse | clayg, OK, I get it now. A little slow today - a murder of crows :) | 19:21 |
*** elambert has joined #openstack-swift | 19:21 | |
glange | http://www.thealmightyguru.com/Pointless/AnimalGroups.html <-- there is a lot of material to make similar jokes with :) | 19:24 |
glange | for example, what do you call one million midges? an overbite | 19:25 |
notmyname | a group of swifts can be called a "screaming frenzy" | 19:25 |
notmyname | (from http://identify.whatbird.com/obj/232/overview/Black_Swift.aspx) | 19:26 |
swifterdarrell | notmyname: sounds like a group of swift developers | 19:26 |
notmyname | indeed | 19:26 |
notmyname | I'm actually trying to work that in to the next swift tshirts I make :-) | 19:26 |
zaitcev | A recent inventory found that I already possess 3 Swift T-shirts. | 19:27 |
notmyname | zaitcev: then you need 4 more so you only have to do laundry once a week instead of every 3 days ;-) | 19:28 |
notmyname | zaitcev: do you have a Manila shirt yet? | 19:28 |
swifterdarrell | notmyname: really?? I just wear my 3 ten days each and do laundry monthly... | 19:28 |
notmyname | swifterdarrell: to each his own | 19:29 |
swifterdarrell | ;) | 19:29 |
zaitcev | notmyname: with patches like mine it may be too soon to think about a Manila shirt. Also, they would probably just use envelopes... | 19:29 |
notmyname | heh | 19:30 |
*** tsg has joined #openstack-swift | 19:31 | |
zaitcev | swifterdarrell: I use my Colorado shirt as a night gown. It's quite soft, for a swag tee. | 19:32 |
notmyname | http://lwn.net/SubscriberLink/610769/d7957916159f2fff/ <-- "Rethinking the OpenStack development process". it's a summary of recent mailing list threads, but something that will affect swift more if we move to the single release with milestones release plan | 19:34 |
torgomatic | so we build a nut dispenser that dispenses two nuts per core reviewer, and when you submit a patch you tie its number to a squirrel and release it near the nut dispenser... | 19:45 |
torgomatic | but then, the patch queue for Swift is nearing 100, so who am I to talk? | 19:46 |
*** zul has quit IRC | 19:49 | |
tdasilva | notmyname: i was checking the object migration patch and saw your earlier comment on using WSGIContext. I'm still not sure when to use it and when not to use it...do you have any pointers? | 20:05 |
notmyname | tdasilva: it has to do with when things are created. the middleware class is instantiated at startup and then repeatedly called. so if you have any state stored in the class, it will mess up under concurrency. also, if you need to reliably get the response codes from things to the right on the pipeline, then you have to collect that carefully (because of how generators work) | 20:08 |
notmyname | tdasilva: WSGIContext takes care of all that for you | 20:08 |
notmyname | tdasilva: the most common thing is when the middleware needs to take an action based on the response codes. the _app_call() in WSGIContext does the right thing | 20:10 |
* tdasilva nods... | 20:12 | |
notmyname | tdasilva: https://gist.github.com/notmyname/c8ee8ef2bba474cf1632 | 20:12 |
notmyname | tdasilva: it's a lot easier to explain on a whiteboard :-) | 20:12 |
notmyname | but the important part is that the x+=1 isn't executed until the .next() is called on the iterator | 20:13 |
notmyname | so in wsgi, the response is set to some default (maybe a 200), so if you ask for it before you start iterating over the response_iter, then you don't actually have the right value | 20:13 |
notmyname | so WSGIContext grabs the first part of the response iter to make sure that the response codes are set properly, then stores that for later when the response actually needs to be sent to the client (client starts reading it) | 20:14 |
notmyname | if your middleware doesn't do that, then you'll get whatever the default response code is because the whole stack of the pipeline generators isn't executed until the response starts to be read (basically) | 20:15 |
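The generator behavior notmyname describes can be sketched in plain Python (a standalone illustration of deferred execution, not swift's actual WSGIContext code):

```python
def middleware_call():
    # default status, like a WSGI response before the app has really run
    status = {"code": 200}

    def response_iter():
        # nothing below executes until next() is called on the iterator
        status["code"] = 404
        yield b"not found"

    return status, response_iter()

status, body = middleware_call()
print(status["code"])  # still 200: the generator body hasn't run yet
chunk = next(body)     # pulling the first chunk runs the generator
print(status["code"])  # now 404: the real status is visible
```

This mirrors the point above: grabbing the first part of the response iterator (as WSGIContext does) forces the status to be set before the middleware inspects it.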
*** IgnacioCorderi has quit IRC | 20:15 | |
notmyname | tdasilva: make sense? | 20:16 |
tdasilva | yes, I think so... :-) I'm looking at the WSGIContext code and the existing middlewares to see if I understand correctly | 20:18 |
tdasilva | I guess the difficulty was trying to see the difference in the existing code, where some middleware uses it and others don't | 20:18 |
tdasilva | I'm going by what you said, where it depends on whether the middleware takes action on the response or not | 20:19 |
notmyname | tdasilva: the biggest difference would be if the middleware is doing something to the response based on the response code. | 20:19 |
notmyname | right | 20:19 |
tdasilva | for example, name_check just does some checks and returns bad request, so no need for wsgicontext | 20:19 |
*** tsg has quit IRC | 20:20 | |
notmyname | tdasilva: right. ratelimit is similar. crossdomain doesn't need one because it passes through anything not on its configured path | 20:23 |
peluse | torgomatic, tsg needs your +2 again for https://review.openstack.org/#/c/111067/ as he has to rebase to pass jenkins (no code change though). I just reposted mine | 20:27 |
peluse | s/has/had | 20:27 |
*** tsg has joined #openstack-swift | 20:28 | |
*** zul has joined #openstack-swift | 20:29 | |
tdasilva | notmyname: thanks! | 20:30 |
tab__ | is the default auditor interval 30 min for all of the account/container and object auditors, or can you configure a less frequent interval for the object auditor? | 20:36 |
*** achhabra has joined #openstack-swift | 20:39 | |
*** kenhui has quit IRC | 20:40 | |
*** kenhui has joined #openstack-swift | 20:41 | |
*** achhabra_ has joined #openstack-swift | 20:42 | |
*** achhabra__ has joined #openstack-swift | 20:42 | |
*** miqui has quit IRC | 20:47 | |
*** achhabra_ has quit IRC | 20:48 | |
*** achhabra__ has quit IRC | 20:48 | |
*** echevemaster has quit IRC | 20:49 | |
*** tsg has quit IRC | 20:51 | |
torgomatic | peluse: on it | 20:58 |
*** kenhui has quit IRC | 21:17 | |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: Pay attention to all punctual nodes https://review.openstack.org/119484 | 21:23 |
*** jasondotstar has quit IRC | 21:24 | |
torgomatic | notmyname: if you wanna go poke https://review.openstack.org/#/c/117907/ again, that might be good | 21:28 |
torgomatic | you liked it before, but py2.6 didn't | 21:29 |
notmyname | torgomatic: days always have 86400 seconds, right? ;-) | 21:30 |
torgomatic | notmyname: according to the Python 2.7.7 source, yes :) | 21:30 |
notmyname | heh. not surprising :-) | 21:31 |
torgomatic | quoting: static PyObject *seconds_per_day = NULL; /* 3600*24 as Python int */ | 21:31 |
*** openstackgerrit has quit IRC | 21:31 | |
torgomatic | (it gets filled in later) | 21:31 |
torgomatic | seconds_per_day = PyInt_FromLong(24 * 3600); | 21:32 |
torgomatic | so what I have is at least as correct as Python 2.7 ;) | 21:32 |
notmyname | ya, it's just easier to point to python when some leap second causes writes to not be ordered | 21:33 |
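For what it's worth, Python's datetime arithmetic makes the same assumption (leap seconds are ignored):

```python
import datetime

# a timedelta day is always exactly 24 * 3600 seconds
one_day = datetime.timedelta(days=1)
print(one_day.total_seconds())  # 86400.0
```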
*** openstackgerrit has joined #openstack-swift | 21:33 | |
notmyname | DefCore community meetings are next week http://lists.openstack.org/pipermail/community/2014-September/000868.html | 21:54 |
tab__ | Does swift treat chunked data, which is written to the same container, as if each chunk were its own object, meaning it would write them to different disks? Some sort of striping of chunked data across disks... ? | 21:56 |
notmyname | tab__: what do you mean by chunked data? pieces of a large object manifest? or an object sent with transfer-encoding: chunked? | 21:56 |
tab__ | notmyname: in the sense of pieces of a larger object manifest ... | 21:58 |
notmyname | tab__: every "thing" referenced by a large object manifest (either directly in a static manifest or indirectly in a dynamic manifest) is treated as a separate object in the system that is individually placed and referenceable via the API | 22:00 |
*** judd7 has joined #openstack-swift | 22:02 | |
tab__ | would writing each chunk to a different container, i guess each has its own sqlite database, speed up writes? i am thinking theoretically... | 22:03 |
notmyname | tab__: in general, splaying the data across different containers (in the same or different swift account) will help improve speed. | 22:05 |
tab__ | notmyname: thx. is there any upper limit / recommended number of objects per container? | 22:06 |
notmyname | tab__: it depends :-) | 22:06 |
notmyname | tab__: suppose you have one container with 100 million objects in it. that container will be slower to write to because the container listing will take longer than a container with 100 objects in it | 22:07 |
tab__ | 1 million, 2 millions? :) | 22:07 |
notmyname | importantly this only has to do with write speed. read performance is NOT affected | 22:07 |
tab__ | aha great. good to know | 22:08 |
notmyname | tab__: if you are using flash drives for containers (you should), your "limit" will be a lot. you could have tens of millions of objects. or more. | 22:09 |
notmyname | tab__: if you are using spinning drives, your limit will be something like 1-2 million | 22:09 |
tab__ | ok. thx for info. | 22:09 |
notmyname | tab__: a lot of times I recommend to people to use a lot of containers for this. ie design your naming scheme to splay across a lot of different containers | 22:10 |
notmyname | tab__: one way is if you are using a hash to name the objects, then use the first few bytes of the hash in the container name. so if you use 3 hex digits, that will splay it across 4096 containers | 22:10 |
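A minimal sketch of that naming scheme (the helper and the name prefix are illustrative, not a swift API):

```python
import hashlib

def container_for(object_name, prefix_len=3):
    # use the first few hex digits of the object name's MD5 as the
    # container name, splaying writes across 16**prefix_len containers
    digest = hashlib.md5(object_name.encode("utf-8")).hexdigest()
    return digest[:prefix_len]

# 3 hex digits -> 16**3 == 4096 possible containers
print(container_for("photos/2014/cat.jpg"))
```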
tab__ | is there also some upper limit for the number of containers over which you splay sliced data - meaning there is, i guess, some number of containers beyond which you can not gain any more speed... | 22:13 |
notmyname | tab__: so you should do some testing on your own, of course, based around your specific workload. if you are doing a lot of concurrent object PUTs, then using more containers is good. if you have some static set of data that isn't updated many times per second, then you might not need to have a lot of containers | 22:13 |
tab__ | aha ok | 22:13 |
notmyname | tab__: probably. I'd guess it would be <number of spindles in the container ring> / <number of container replicas> (on average) | 22:13 |
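notmyname's back-of-the-envelope estimate from the line above, written out as arithmetic (the numbers plugged in are just examples):

```python
def useful_container_count(spindles, container_replicas):
    # rough average upper bound from the discussion: each container's
    # sqlite database lives on `container_replicas` spindles, so beyond
    # this many containers you stop gaining write parallelism across disks
    return spindles // container_replicas

# e.g. 48 disks in the container ring with 3 container replicas
print(useful_container_count(48, 3))  # -> 16
```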
notmyname | tab__: ya, and it's important to know that the "limits" I'm talking about here are more like "the point at which you can't do more than 10 writes per second to a container" | 22:14 |
notmyname | tab__: eg if your use case is only doing a hundred writes per minute, then you can have _really_ large containers before you see an impact | 22:15 |
notmyname | tab__: of course, large containers are harder to replicate, so splaying them helps replication too :-) | 22:16 |
torgomatic | also note that that's *average* speed; you can still write in fast bursts and the container updates will get saved to async-pending files on the object servers, then pushed to the containers eventually | 22:16 |
notmyname | torgomatic: ++ | 22:16 |
tab__ | great info | 22:19 |
tab__ | thx | 22:19 |
notmyname | tab__: may I ask what you're building your swift cluster for? I'm always curious about how and why people are using swift | 22:19 |
notmyname | also, fill out the user survey with your swift info! it will really help. http://openstack.org/user-survey/ | 22:20 |
tab__ | :) it should be used for telco purposes, where you could expect a lot of user generated data... and swift is under consideration | 22:22 |
notmyname | tab__: very cool. please let me know how I can help | 22:23 |
notmyname | tab__: I can also be reached at me@not.mn | 22:24 |
tab__ | ok. will let you know... | 22:24 |
*** achhabra has quit IRC | 22:27 | |
*** dmsimard is now known as dmsimard_away | 22:47 | |
*** dencaval has quit IRC | 22:52 | |
*** kyles_ne has quit IRC | 23:10 | |
*** physcx has quit IRC | 23:28 | |
*** otoolee- is now known as otoolee | 23:34 | |
*** DisneyRicky has quit IRC | 23:35 | |
*** DisneyRicky has joined #openstack-swift | 23:42 | |
*** elambert has quit IRC | 23:52 | |
*** gyee has quit IRC | 23:58 | |
*** dmsimard_away is now known as dmsimard | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!