*** mlauter_ has joined #openstack-swift | 00:00 | |
*** mlauter_ has quit IRC | 00:01 | |
*** mlauter has quit IRC | 00:02 | |
*** DisneyRicky has joined #openstack-swift | 00:03 | |
*** shakamunyi has joined #openstack-swift | 00:21 | |
*** dmorita has joined #openstack-swift | 00:25 | |
*** shakamunyi has quit IRC | 00:29 | |
*** shakamunyi has joined #openstack-swift | 00:29 | |
*** The_CFG has quit IRC | 00:30 | |
*** shakamunyi has quit IRC | 00:39 | |
*** addnull has joined #openstack-swift | 01:00 | |
*** shakamunyi has joined #openstack-swift | 01:04 | |
*** shakamunyi has quit IRC | 01:14 | |
*** shakamunyi has joined #openstack-swift | 01:39 | |
*** shakamunyi has quit IRC | 01:54 | |
*** DisneyRicky has quit IRC | 02:02 | |
*** shakamunyi has joined #openstack-swift | 02:20 | |
*** shakamunyi has quit IRC | 02:36 | |
*** mrsnivvel has quit IRC | 02:45 | |
*** shakamunyi has joined #openstack-swift | 03:02 | |
*** DisneyRicky has joined #openstack-swift | 03:05 | |
*** mrsnivvel has joined #openstack-swift | 03:10 | |
*** shakamunyi has quit IRC | 03:11 | |
*** echevemaster has quit IRC | 03:35 | |
*** shakamunyi has joined #openstack-swift | 04:07 | |
*** hhuang has joined #openstack-swift | 04:12 | |
*** nosnos has joined #openstack-swift | 04:18 | |
*** shakamunyi has quit IRC | 04:20 | |
*** fifieldt has joined #openstack-swift | 04:25 | |
*** X019 has quit IRC | 04:55 | |
*** shakamunyi has joined #openstack-swift | 05:07 | |
*** kopparam has joined #openstack-swift | 05:13 | |
*** shakamunyi has quit IRC | 05:24 | |
*** bkopilov has quit IRC | 05:40 | |
*** k4n0 has joined #openstack-swift | 05:44 | |
*** shakamunyi has joined #openstack-swift | 05:50 | |
*** bkopilov has joined #openstack-swift | 05:56 | |
*** shakamunyi has quit IRC | 05:57 | |
*** DisneyRicky has quit IRC | 06:02 | |
*** ttrumm has joined #openstack-swift | 06:07 | |
openstackgerrit | Matthew Oliver proposed a change to openstack/swift: Basic container sharding middleware https://review.openstack.org/125553 | 06:20 |
*** ttrumm_ has joined #openstack-swift | 06:25 | |
*** ttrumm has quit IRC | 06:28 | |
*** miqui has quit IRC | 06:38 | |
*** shakamunyi has joined #openstack-swift | 06:53 | |
*** jamiehannaford has joined #openstack-swift | 07:03 | |
*** kopparam has quit IRC | 07:04 | |
*** hhuang has quit IRC | 07:13 | |
*** shakamunyi has quit IRC | 07:19 | |
*** jistr has joined #openstack-swift | 07:23 | |
*** joeljwright has joined #openstack-swift | 07:35 | |
*** geaaru has joined #openstack-swift | 07:38 | |
*** shakamunyi has joined #openstack-swift | 07:39 | |
*** hhuang has joined #openstack-swift | 07:50 | |
*** shakamunyi has quit IRC | 07:55 | |
*** nellysmitt has joined #openstack-swift | 08:15 | |
*** Dafna has joined #openstack-swift | 08:22 | |
*** hhuang has quit IRC | 08:34 | |
*** kopparam has joined #openstack-swift | 08:34 | |
*** oomichi has joined #openstack-swift | 08:37 | |
*** kopparam has quit IRC | 08:45 | |
*** k4n0 has quit IRC | 08:46 | |
*** mkollaro has joined #openstack-swift | 08:55 | |
*** k4n0 has joined #openstack-swift | 09:00 | |
*** kopparam has joined #openstack-swift | 09:07 | |
*** aix has joined #openstack-swift | 09:07 | |
*** shakamunyi has joined #openstack-swift | 09:40 | |
*** k4n0 has quit IRC | 09:41 | |
*** joeljwright has quit IRC | 09:42 | |
*** shakamunyi has quit IRC | 09:45 | |
*** k4n0 has joined #openstack-swift | 09:56 | |
openstackgerrit | Christian Schwede proposed a change to openstack/swift: Fix minor typo https://review.openstack.org/126249 | 10:05 |
*** dmorita has quit IRC | 10:16 | |
*** Trixboxer has joined #openstack-swift | 10:45 | |
*** mkerrin has joined #openstack-swift | 10:46 | |
*** kopparam has quit IRC | 10:48 | |
*** kopparam has joined #openstack-swift | 10:49 | |
*** sandywalsh has joined #openstack-swift | 10:53 | |
*** dmsimard_away is now known as dmsimard | 10:58 | |
*** nellysmitt has quit IRC | 10:58 | |
*** shakamunyi has joined #openstack-swift | 11:03 | |
*** bkopilov has quit IRC | 11:06 | |
*** ttrumm_ has quit IRC | 11:06 | |
*** ttrumm has joined #openstack-swift | 11:07 | |
*** anna_ has joined #openstack-swift | 11:10 | |
*** shakamunyi has quit IRC | 11:12 | |
*** kopparam has quit IRC | 11:13 | |
*** anna_ is now known as X019 | 11:15 | |
*** bkopilov has joined #openstack-swift | 11:18 | |
*** mahatic has joined #openstack-swift | 11:24 | |
*** nellysmitt has joined #openstack-swift | 11:33 | |
*** nosnos has quit IRC | 11:34 | |
*** kopparam has joined #openstack-swift | 11:35 | |
*** sandywalsh has quit IRC | 11:45 | |
*** jamiehannaford has quit IRC | 11:45 | |
*** jamiehannaford has joined #openstack-swift | 11:46 | |
*** sandywalsh has joined #openstack-swift | 11:47 | |
*** kopparam has quit IRC | 12:03 | |
*** kopparam has joined #openstack-swift | 12:03 | |
*** kopparam has quit IRC | 12:08 | |
*** joeljwright has joined #openstack-swift | 12:10 | |
*** kopparam has joined #openstack-swift | 12:12 | |
*** addnull has quit IRC | 12:14 | |
*** miqui has joined #openstack-swift | 12:23 | |
*** shakamunyi has joined #openstack-swift | 12:25 | |
*** nitika2 has joined #openstack-swift | 12:32 | |
*** X019 has left #openstack-swift | 12:35 | |
*** mkollaro has quit IRC | 12:35 | |
openstackgerrit | Kota Tsuyuzaki proposed a change to openstack/swift: Efficient Replication for Distributed Regions https://review.openstack.org/99824 | 12:38 |
*** shakamunyi has quit IRC | 12:40 | |
*** kopparam has quit IRC | 12:42 | |
*** kopparam has joined #openstack-swift | 12:43 | |
*** kopparam has quit IRC | 12:47 | |
*** moosev2 has joined #openstack-swift | 12:49 | |
*** aix has quit IRC | 12:58 | |
*** aix has joined #openstack-swift | 12:59 | |
*** moosev2 has quit IRC | 13:04 | |
*** kbee has joined #openstack-swift | 13:05 | |
*** hhuang has joined #openstack-swift | 13:09 | |
*** JO99 has joined #openstack-swift | 13:10 | |
JO99 | Hello Guys, I am running swift 2.1, I would like to run the object auditor against a single drive but it still goes and audits all drives, this is the command I am using: swift-object-auditor object-server.conf --devices=/srv/node/sda1 -o -v | 13:12 |
*** kopparam has joined #openstack-swift | 13:13 | |
*** lcurtis has joined #openstack-swift | 13:21 | |
*** kbee has quit IRC | 13:23 | |
*** jamiehannaford has quit IRC | 13:25 | |
*** jamiehan_ has joined #openstack-swift | 13:25 | |
*** cdelatte has joined #openstack-swift | 13:27 | |
*** delattec has joined #openstack-swift | 13:27 | |
*** oomichi has quit IRC | 13:34 | |
*** shakamunyi has joined #openstack-swift | 13:37 | |
*** annegentle has joined #openstack-swift | 13:40 | |
*** judd7 has joined #openstack-swift | 13:44 | |
*** shakamunyi has quit IRC | 13:53 | |
*** tdasilva has joined #openstack-swift | 14:03 | |
*** lcurtis has quit IRC | 14:05 | |
*** ttrumm has quit IRC | 14:05 | |
*** shakamunyi has joined #openstack-swift | 14:07 | |
*** lcurtis has joined #openstack-swift | 14:19 | |
*** cschwede has joined #openstack-swift | 14:20 | |
*** shakamunyi has quit IRC | 14:21 | |
*** swift_fan has joined #openstack-swift | 14:22 | |
*** kopparam has quit IRC | 14:23 | |
*** mlauter has joined #openstack-swift | 14:24 | |
*** kopparam has joined #openstack-swift | 14:24 | |
swift_fan | What does it mean when the command "swift-recon -d" gives you the following output: | 14:28 |
swift_fan | =============================================================================== | 14:28 |
swift_fan | --> Starting reconnaissance on 3 hosts | 14:28 |
swift_fan | =============================================================================== | 14:28 |
swift_fan | [2014-10-06 14:23:07] Checking disk usage now | 14:28 |
swift_fan | Distribution Graph: | 14:28 |
swift_fan | 41% 1 ************* | 14:28 |
swift_fan | 42% 5 ********************************************************************* | 14:28 |
swift_fan | Disk usage: space used: 12758937247120 of 23986336481280 | 14:28 |
swift_fan | Disk usage: space free: 11138925674160 of 23986336481280 | 14:28 |
swift_fan | Disk usage: lowest: 41.75%, highest: 42.47%, avg: 42.1035417006% | 14:28 |
swift_fan | =============================================================================== | 14:28 |
*** jistr has quit IRC | 14:28 | |
swift_fan | Why is there a "1 *************" and a "5 *********************************************************************" ? | 14:29 |
swift_fan | originally, those were "3 ********************************************" and "3 ********************************************" | 14:29 |
swift_fan | And then later it became just "6 *********************************************************************" | 14:29 |
swift_fan | But now it's "1 *************" and a "5 *********************************************************************" | 14:30 |
*** jistr has joined #openstack-swift | 14:30 | |
swift_fan | What am I reading, here ? | 14:30 |
cschwede | swift_fan: it means that is one disk with 41% usage, and 5 disks with 42%. Before that you had 3 disks with 41% and 42% respectively | 14:31 |
cschwede | s/that is/that there is/ | 14:31 |
swift_fan | cschwede : Ah, I see. | 14:34 |
swift_fan | cschwede : So, does Swift try to keep disks as even as possible (in terms of space used) ? | 14:36 |
swift_fan | cschwede : And if so, why does one disk have only 41% usage, while the others have 42% ? | 14:36 |
swift_fan | cschwede : And how do I tell which one is the disk that only has 41% usage ? | 14:37 |
swift_fan | cschwede : (amongst the ones that have 42%). | 14:37 |
cschwede | swift_fan: yes, because whenever you store an object the location is calculated using a hash ring and this distributes objects across your cluster. As long as all disks have the same weight in the ring, the disk usage is nearly balanced. Of course this depends on the object sizes, thus you might see small differences between the disks. | 14:38 |
openstackgerrit | Daniel Wakefield proposed a change to openstack/python-swiftclient: Allow segment size to be specified in a human readable way. https://review.openstack.org/126310 | 14:39 |
*** ppai has joined #openstack-swift | 14:39 | |
JO99 | guys this command "swift-object-auditor object-server.conf --devices=/dev/sda1 -o -v" isn't checking just one drive but all of them, any idea? | 14:40 |
cschwede | swift_fan: you could use "swift-recon -d -v", this gives you much more information that you could parse. For example, usage for each disk | 14:40 |
swift_fan | cschwede : Ok, thank you very much! | 14:43 |
cschwede | swift_fan: you're welcome, glad that i could help! | 14:43 |
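For reference, the verbose variant cschwede suggests looks like this; the host list and numbers will of course vary by cluster:

    # Per-host, per-device mount status and usage, in addition to the
    # distribution graph shown above
    swift-recon -d -v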
*** aix has quit IRC | 14:48 | |
swift_fan | cschwede : When you said the location of an object is calculated by using a "hash ring", do you know how this hash ring works, and what it means to be a hash "ring" ? | 14:49 |
*** CaioBrentano has joined #openstack-swift | 14:53 | |
cschwede | swift_fan: yes, i know how it works :) have a look at this blog post to get an idea what happens in Swift: https://julien.danjou.info/blog/2012/openstack-swift-consistency-analysis | 14:53 |
*** kopparam has quit IRC | 14:57 | |
*** kopparam has joined #openstack-swift | 14:58 | |
*** aix has joined #openstack-swift | 15:00 | |
*** jistr has quit IRC | 15:02 | |
*** jistr has joined #openstack-swift | 15:02 | |
*** kopparam has quit IRC | 15:03 | |
portante_ | notmyname: did anybody take notes on what happened at the hackathon last week? | 15:07 |
cschwede | portante_: i did, at least some ;) need something special? | 15:09 |
ppai | swift_fan, also, check this out: https://swiftstack.com/blog/2012/11/21/how-the-ring-works-in-openstack-swift/ | 15:11 |
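A minimal sketch of the mapping those two posts describe, assuming a hypothetical part_power of 10 and leaving out the cluster's configured hash-path prefix/suffix (real deployments mix in swift_hash_path_suffix); the object path is made up:

    python -c 'import hashlib,struct; d=hashlib.md5(b"/AUTH_test/mycontainer/myobject").digest(); print(struct.unpack_from(">I",d)[0] >> (32-10))'

The printed number is the partition; the ring then maps that partition to the N devices that should hold the object.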
portante_ | cschwede: no, just looking for whatever write-ups were done, thanks | 15:11 |
*** echevemaster has joined #openstack-swift | 15:17 | |
*** lcurtis has quit IRC | 15:25 | |
*** kbee has joined #openstack-swift | 15:27 | |
notmyname | good morning | 15:28 |
notmyname | portante_: a review/debrief of the hackathon is on my todo list for today (among many other things) | 15:28 |
notmyname | acoles: any news on how or if the HP breakup will affect you and your team? I think you'll be in the new enterprise corp? | 15:29 |
portante_ | notmyname: great, thanks | 15:30 |
*** portante_ is now known as portante | 15:31 | |
*** ChanServ sets mode: +v portante | 15:31 | |
acoles | notmyname: i assume the same, enterprise | 15:31 |
notmyname | acoles: did you learn about it at the same time as the rest of us? ;-) | 15:31 |
acoles | notmyname: saw some chatter first thing today, then confirmation | 15:33 |
notmyname | seems NYT broke the story yesterday | 15:33 |
acoles | notmyname: maybe we'll change our mind like 2 years ago or whenever ;) | 15:33 |
portante | :) | 15:33 |
portante | its all been done before ... | 15:33 |
notmyname | portante: you wouldn't know anything about that, now would you? ;-) | 15:34 |
portante | :) | 15:34 |
notmyname | mahatic: nikhil: another thing on my todo list today is to look at OPW projects | 15:35 |
acoles | notmyname: portante: funny thing is we had a big message about everyone having 'HP' on summit badges (rather than hp, HP, Hp, hewlett-packard, hp labs, ...) | 15:36 |
mahatic | notmyname, great. Would you also be able to make a rough schedule, so it can be used for the application? | 15:36 |
mahatic | notmyname, rough schedule of the opw project | 15:36 |
notmyname | acoles: yes, make sure you take care of those critical issues like that :-) | 15:36 |
notmyname | mahatic: yes, I'll work on that :-) | 15:36 |
mahatic | notmyname, thank you! :) | 15:37 |
acoles | notmyname: but, naming is SO critical ;) | 15:37 |
notmyname | one of the classic Hard Problems | 15:37 |
* acoles just hopes that the enterprise will still feed him cake | 15:38 | |
cschwede | notmyname: acoles: is that coincidence? HP, Hard Problems? Sorry, couldn't resist ;) | 15:40 |
notmyname | :-) | 15:40 |
acoles | :) | 15:41 |
portante | acoles: branding is important, for sure, but how one spends time and money on ensuring 300,000 people are on the same page (if that is even possible) seems open to debate :) | 15:42 |
JO99 | I think I figured it out after looking at the code; it seems to assume the /srv/node folder. This is how it should be: swift-object-auditor object-server.conf --devices=sda1 -o -v | 15:46 |
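Spelled out, assuming --devices behaves as JO99 describes (taking device names relative to the devices root from object-server.conf rather than absolute paths):

    # Audit only sda1, once, with verbose output; "sda1" is relative to the
    # configured devices root (/srv/node by default), not a full path
    swift-object-auditor /etc/swift/object-server.conf --devices=sda1 --once --verbose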
openstackgerrit | Alistair Coles proposed a change to openstack/python-swiftclient: Fix cross account upload using --os-storage-url https://review.openstack.org/125759 | 15:52 |
*** cschwede has quit IRC | 15:59 | |
*** jamiehan_ has quit IRC | 16:00 | |
*** jistr has quit IRC | 16:02 | |
*** rmcall has joined #openstack-swift | 16:03 | |
*** kyles_ne has joined #openstack-swift | 16:07 | |
*** cpen has joined #openstack-swift | 16:17 | |
swift_fan | What does it mean when the logs say that "rsyslogd was HUPed" ? | 16:19 |
*** nellysmitt has quit IRC | 16:26 | |
*** nellysmitt has joined #openstack-swift | 16:28 | |
notmyname | kilo design summit schedule http://kilodesignsummit.sched.org/grid/ | 16:41 |
*** mlauter has quit IRC | 16:43 | |
*** elambert has joined #openstack-swift | 16:46 | |
*** kbee has quit IRC | 16:48 | |
*** aix has quit IRC | 16:49 | |
*** kopparam has joined #openstack-swift | 16:58 | |
*** mkollaro has joined #openstack-swift | 17:02 | |
*** kopparam has quit IRC | 17:03 | |
*** shri has joined #openstack-swift | 17:04 | |
hurricanerix | Thanks tdasilva and notmyname, I had a great time in Boston! | 17:05 |
notmyname | hurricanerix: me too. I'm glad you were able to come! | 17:05 |
elambert | +1 ^^ | 17:06 |
cpen | hey all, I’ve got a question about custom object metadata. it seems that a POST deletes any existing custom metadata that you added with a previous PUT or POST. why is this? I would have assumed that a POST mentioning only one particular piece of custom metadata would leave existing custom metadata intact. | 17:11 |
notmyname | cpen: for objects in Swift, the metadata you POST overwrites the existing set of metadata. you have to send the current set of metadata you want saved | 17:12 |
*** CaioBrentano1 has joined #openstack-swift | 17:12 | |
*** CaioBrentano has quit IRC | 17:12 | |
notmyname | cpen: (and for completeness) account and container metadata are updated as you described. yes this is a wart in the swift api | 17:12 |
cpen | notmyname: suppose you have two independent writers, each of which wants to update a different piece of custom metadata. then this design necessarily introduces a race condition into your application | 17:13 |
*** CaioBrentano1 is now known as CaioBrentano | 17:13 | |
cpen | because each will have to first HEAD, merge his updates, then POST. one of the two POSTs will win, and one will lose | 17:13 |
cpen | :[ | 17:13 |
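That read-modify-write dance, sketched with curl; $TOKEN, $STORAGE_URL, and the metadata keys here are placeholders:

    # 1) HEAD: read the current X-Object-Meta-* headers
    curl -I -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/mycontainer/myobject"
    # 2) POST the merged, complete set; any X-Object-Meta-* key you omit is lost
    curl -X POST -H "X-Auth-Token: $TOKEN" \
         -H "X-Object-Meta-Color: blue" \
         -H "X-Object-Meta-Owner: bob" \
         "$STORAGE_URL/mycontainer/myobject"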
notmyname | cpen: no different than the contents of the object itself with two writers | 17:13 |
cpen | that’s true. but as far as the server is concerned, metadata has structure, whereas the object itself does not | 17:14 |
cpen | is the current behavior a trade-off of some kind? some internal difficulty that’s unclear to me? | 17:15 |
cpen | or was this a conscious design decision with some big benefit? | 17:15 |
cpen | notmyname: you mention it’s a “wart” in the api. does that mean that there is a desire (or a plan) to revise this behavior in a future version of Swift? | 17:19 |
notmyname | the reason it's that way is mostly historical. swift was originally written to replace another storage engine, and that was the API it inherited | 17:20 |
notmyname | swift considers (and stores) data and metadata together | 17:21 |
notmyname | it's a wart exactly because of the different ways it's done in the system | 17:21 |
notmyname | and I think a lot of people would like to resolve it, but that will require doing something around versioning the API. which currently hasn't been written | 17:22 |
cpen | ah, interesting. what storage engine was Swift written to replace? | 17:24 |
notmyname | so the reason it is that way is because (1) that's the way it started and (2) adding versioned API support has never been seen as such a huge "problem" that those who pay devs to work on swift have prioritized it :-) | 17:24 |
*** JO99 has quit IRC | 17:25 | |
cpen | got it. I take it then as well that there’s not a timeline among the developers to introduce a new API version. is that correct? | 17:26 |
*** nellysmitt has quit IRC | 17:27 | |
notmyname | cpen: as you know, swift was originally written internally at rackspace as the engine for rackspace cloud files. the system it replaced was an internal storage engine | 17:28 |
tdasilva | hurricanerix: it was great to have you guys here. pretty amazing to see how much got accomplished | 17:28 |
cpen | notmyname: ah yes, that’s right | 17:28 |
notmyname | cpen: it's not so much "not enough time". more like what to prioritize | 17:29 |
notmyname | eg do we add erasure code support, which customers are asking for, or do we version the API? | 17:29 |
cpen | notmyname: sure. that makes sense. | 17:31 |
goodes | notmyname: can we take it as certain that there will be no swift design sessions on the Friday? | 17:32 |
notmyname | cpen: of course, if there were some set of new devs coming along and their employers had different priorities and did want to contribute versioned api support to swift, that would be fine *hint*hint* ;-) | 17:32 |
cpen | :] | 17:32 |
notmyname | goodes: no | 17:32 |
notmyname | goodes: did you see the schedule above? | 17:33 |
cpen | notmyname: I’m trying to find public documentation of the current set of big-ticket items that the dev team is going to work on | 17:33 |
notmyname | goodes: we'll have formally scheduled sessions (on thursday IIRC) and then a half day on friday | 17:33 |
cpen | notmyname: i.e., seeing a list somewhere where A appears above B, etc. | 17:33 |
goodes | notmyname: I did, but I missed the Friday sessions, did not realize that I had to scroll to the side | 17:34 |
notmyname | goodes: ah | 17:34 |
*** echevemaster has quit IRC | 17:35 | |
openstackgerrit | Alistair Coles proposed a change to openstack/python-swiftclient: Fix cross account upload using --os-storage-url https://review.openstack.org/125759 | 17:37 |
*** geaaru has quit IRC | 17:40 | |
cpen | in any case, notmyname, thanks for the explanation. quite helpful | 17:40 |
goodes | I think that this is kind of too small to publish a spec, but there are some cases, e.g. baseline performance testing, where you would want to have the proxy pipeline completely empty (especially no DLO), but that is impossible today with the proxy server required_filters - I'm thinking of proposing an option in proxy-server.conf to disable (or override) - does this | 17:44 |
goodes | sound like a reasonable idea? | 17:44 |
notmyname | goodes: hmmm...I'm not a huge fan of that, but I understand where you're coming from. weren't you talking about benchmarking each middleware or having some numbers there? | 17:46 |
notmyname | goodes: reason i'm not a huge fan is because of stuff like gatekeeper catch_errors and proxy_logging | 17:47 |
goodes | notmyname: yes, but it's hard to even get a baseline - right now I'm just editing the source, but it does not make sense that I should be doing that | 17:47 |
notmyname | cpen: having about 4 or 5 conversations at once now. timeslicing back to your question soon ;-) | 17:48 |
goodes | I agree, but what if you had middleware that, for example, encrypted container and object names so only the account owner with the correct key could read them - that would need to be after DLO in the pipeline | 17:49 |
goodes | there is no way to do that today | 17:49 |
cpen | thanks notmyname. :] I think my only remaining question is about finding a project priorities list. | 17:50 |
openstackgerrit | A change was merged to openstack/swift-specs: Add RSS feed https://review.openstack.org/120544 | 17:50 |
goodes | Matt's container sharding is another example of something that needs to come after DLO | 17:50 |
notmyname | goodes: I think alpha_ori had an interesting solution to that a while back (sortof inside joke) | 17:50 |
goodes | I agree about gatekeeper, but proxy_logging and dlo should not be forcing their way in | 17:51 |
notmyname | goodes: he had some code that would allow for dynamically placing middleware based essentially on a solver | 17:51 |
alpha_ori | notmyname: maybe one of these days I'll resubmit that. | 17:51 |
notmyname | cpen: the two places really to find that info are listed in the topic | 17:52 |
notmyname | cpen: the priority reviews page and the ideas page | 17:52 |
notmyname | cpen: the priority reviews are a short-term view of what needs to be reviewed now before other stuff, and the ideas page is a list of some smaller, independent things that people could work on | 17:52 |
goodes | notmyname: conclusion is that it is less simple than I thought, I will submit a spec | 17:53 |
cpen | notmyname: super helpful. thanks! | 17:53 |
notmyname | cpen: as far as big-picture stuff, that's mostly handled at in-person meetings like the summit and hackathons (and sometimes weekly meetings). that would be things like erasure codes, encryption, search/indexing, etc | 17:54 |
notmyname | (those are some current things) | 17:54 |
notmyname | cpen: there's also the swift-specs repo that has some good info on bigger things people are looking at | 17:54 |
cpen | ok, so 4 places to look/go. :] | 17:54 |
notmyname | cpen: we haven't been using specs as much as we could, but we will more in the future | 17:54 |
cpen | got it | 17:55 |
notmyname | cpen: heh. yeah. there's your answer :-) | 17:55 |
cpen | thanks again notmyname | 17:55 |
notmyname | cpen: I think (hope) that we'll basically have 2 lists eventually. one for small independent stuff and one for bigger things | 17:55 |
*** acoles is now known as acoles_away | 17:55 | |
notmyname | the bigger one would be specs, the smaller one would be the ideas page (or something similar) | 17:55 |
cpen | notmyname: why not something like a centralized backlog of tasks, including both big (investigatory) and small (more focused) items | 17:56 |
notmyname | cpen: yeah. within openstack there are the launchpad blueprints, but they have some problems that prevents them from being used much, at least from a dev perspective | 17:57 |
notmyname | cpen: what you're asking about is essentially the missing "project manager" role in openstack. that's mostly falling to the PTLs (ie me for swift) now, and that's why you get a "let's try this and see if it works" array of options | 17:58 |
*** NM has joined #openstack-swift | 17:59 | |
notmyname | goodes: so here's what I'd recommend. don't jump in to "how can I remove all the middleware". middleware is an implementation detail orthogonal to "this is a plugin and optional" | 18:10 |
notmyname | goodes: instead, focus on finding where the bottlenecks in the middleware are, and share your results, and let's focus on making it faster for everyone | 18:10 |
CaioBrentano | Hi guys… A question about versioning objects…. When I create an object in a versioned container and the container where the versions are stored doesn't exist, I get a 412 error. That's perfect. Why don't I get the same error when I set X-Versions-Location and the container doesn't exist yet? | 18:17 |
openstackgerrit | A change was merged to openstack/swift: Fix minor typo https://review.openstack.org/126249 | 18:24 |
*** kopparam has joined #openstack-swift | 18:26 | |
clayg | CaioBrentano: short answer is setting the metadata doesn't check that the location exists - maybe it's reasonable to allow you to create the version location container after you set the header? | 18:27 |
*** mkollaro has quit IRC | 18:27 | |
*** zaitcev has joined #openstack-swift | 18:28 | |
*** ChanServ sets mode: +v zaitcev | 18:28 | |
goodes | notmyname: my aversion to the current system is that if you remove dlo from the pipeline, the current logic sticks it between proxy_logging and proxy_server which makes no sense at all | 18:29 |
goodes | notmyname: perhaps this is a bug | 18:29 |
*** kopparam has quit IRC | 18:31 | |
goodes | notmyname: hold on, could be an issue with my system, don't hold me to that | 18:33 |
notmyname | :-) | 18:33 |
CaioBrentano | clayg: yeah, I think it's reasonable… but it's still a precondition fail, according to the concept of the 412 error, isn't it ? | 18:34 |
*** gyee has joined #openstack-swift | 18:37 | |
*** NM has quit IRC | 18:38 | |
goodes | notmyname: if your pipeline is pipeline = gatekeeper cache tempauth proxy-logging proxy-server | 18:38 |
goodes | notmyname: then you end up with -> proxy-server: Pipeline was modified. New pipeline is "catch_errors gatekeeper cache tempauth proxy-logging dlo proxy-server". | 18:39 |
notmyname | goodes: may be a bug when you don't have two proxy-logging instances | 18:39 |
goodes | notmyname: agree | 18:39 |
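For comparison, the conventional pipeline notmyname alludes to carries two proxy-logging instances, one early and one just before the app; a sketch of the proxy-server.conf pipeline line (filter sections omitted, middleware set trimmed for brevity):

    [pipeline:main]
    pipeline = catch_errors gatekeeper healthcheck proxy-logging cache tempauth proxy-logging proxy-server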
*** NM has joined #openstack-swift | 18:43 | |
notmyname | donation matching challenge to the Ada initiative has been given to the entire openstack community http://lists.openstack.org/pipermail/openstack-dev/2014-October/047892.html | 18:47 |
clayg | CaioBrentano: I think we actually cheat on PreCondition fail a bit; that's supposed to be used for X-If-Match sorta requests - it's a well understood http semantic, so it's probably pushing the envelope to steal it for application specific errors - but you know... living specification and all that; I always liked 422 or good 'ol 400 for that sort of thing | 18:48 |
clayg | CaioBrentano: is your question "should this request return an error" or "if this request *should* return an error what should it be?" | 18:49 |
CaioBrentano | clayg: my question is the first one… "should this request return an error"? | 18:52 |
CaioBrentano | thinking of avoiding a future 412, because that container doesn't exist | 18:52 |
openstackgerrit | anju Tiwari proposed a change to openstack/swift: Added a check for limit value https://review.openstack.org/118186 | 18:55 |
CaioBrentano | clayg: I mean this because, as you said, we are already "cheating" on PreCondition when creating the object :P | 18:55 |
clayg | CaioBrentano: I think the pre-existing behavior isn't evil enough to justify a potentially client breaking change. Create container with X-Versions-Location: xxx ; create container xxx is currently a valid workflow. | 18:55 |
clayg | CaioBrentano: well I think it's reasonable to return *a* client error in that case - the request was not successful. If that wasn't currently returning an error (and say just failing silently to store either the object or the version) - that'd probably be something we'd have to fix. | 18:56 |
CaioBrentano | clayg: makes sense! ;) | 18:57 |
CaioBrentano | clayg: thanks! | 18:58 |
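The workflow clayg describes, sketched with curl; $TOKEN, $STORAGE_URL, and the container names are placeholders:

    # Setting the header is accepted even though "versions" doesn't exist yet
    curl -X PUT -H "X-Auth-Token: $TOKEN" \
         -H "X-Versions-Location: versions" "$STORAGE_URL/mycontainer"
    # Creating the versions container afterwards completes the setup; per the
    # behavior described above, object PUTs into mycontainer 412 until it exists
    curl -X PUT -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/versions"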
*** cschwede has joined #openstack-swift | 18:58 | |
*** dc has joined #openstack-swift | 19:16 | |
dc | what is the best way to find out the total size of a swift cluster? | 19:17 |
swifterdarrell | dc: find an isomorphism between the cluster and a fish, then apply http://en.wikipedia.org/wiki/Fish_measurement | 19:20 |
swifterdarrell | dc: or you could walk the account DBs and perform some aggregation on the volumetric stats in there (object count, container count, and bytes stored) | 19:22 |
dc | swifterdarrell: lol..yes! | 19:22 |
clayg | swifterdarrell: I think that's your isomorphism | 19:23 |
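A sketch of the non-fish route: each account DB keeps rolled-up totals in its account_stat table, so on a storage node you can read them directly (the DB path here is a placeholder):

    # container count, object count, and bytes stored for one account replica
    sqlite3 /srv/node/sda1/accounts/<part>/<suffix>/<hash>/<hash>.db \
      "SELECT container_count, object_count, bytes_used FROM account_stat;"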
*** kopparam has joined #openstack-swift | 19:27 | |
*** mahatic has quit IRC | 19:28 | |
*** mkollaro has joined #openstack-swift | 19:29 | |
*** kopparam has quit IRC | 19:32 | |
dc | swift-recon -d | 19:36 |
*** kopparam has joined #openstack-swift | 19:37 | |
*** dc has quit IRC | 19:38 | |
*** nellysmitt has joined #openstack-swift | 19:40 | |
*** kopparam has quit IRC | 19:41 | |
*** mahatic has joined #openstack-swift | 19:54 | |
*** kyles_ne has quit IRC | 20:00 | |
*** kyles_ne has joined #openstack-swift | 20:00 | |
*** kyles_ne has quit IRC | 20:05 | |
*** mahatic has quit IRC | 20:09 | |
*** mkollaro has quit IRC | 20:10 | |
*** miqui has quit IRC | 20:19 | |
*** miqui has joined #openstack-swift | 20:20 | |
*** miqui has quit IRC | 20:21 | |
swift_fan | Hi -- how do you tell whether a particular object that you just recently uploaded, has been stored on a particular disk/node ? | 20:22 |
*** judd7 has quit IRC | 20:23 | |
swift_fan | I am able to download it back from my load balancer, but I want to know whether one of the servers sitting behind the load balancers has that object stored. | 20:23 |
notmyname | swift_fan: is your load balancer spooling stuff to local storage instead of passing it through to swift? | 20:28 |
swift_fan | notmyname : I don't believe so. | 20:29 |
swift_fan | notmyname : I'm not exactly sure what that means, but I believe it's just passing the objects right through. | 20:29 |
swift_fan | notmyname : I'm using HAProxy. | 20:29 |
notmyname | swift_fan: then you know that the object has been durably stored when you get a 2xx series response from the PUT request. and the fact you can get the data with a GET confirms it | 20:30 |
swift_fan | notmyname : Ok, you're talking about CURL, but does the same apply for the swiftclient, as well? | 20:31 |
notmyname | swift_fan: curl is simply an http client. as is the swiftclient CLI tool. Swift's API is based on HTTP, so it will always give HTTP response codes to requests | 20:32 |
swift_fan | notmyname : But is there still a way to manually check whether a particular node has that particular object ? | 20:32 |
swift_fan | I know that you're saying that once you're able to download an object, then it has been stored+replicated. | 20:33 |
swift_fan | (correct me if I'm wrong on that, please). | 20:33 |
zaitcev | All the way to hardware, too. With fsyncs. | 20:33 |
swift_fan | But how do you manually check whether a particular node of choice contains that object ? | 20:33 |
notmyname | swift_fan: yes, it's possible. you can use swift-get-nodes and then look on the different storage servers | 20:34 |
zaitcev | Users of the API have no such capability, but administrator can use swift-get-nodes to obtain the mapping. | 20:34 |
notmyname | swift_fan: but I'm a little worried that you might be getting too caught up in some implementation details that don't matter from the client perspective | 20:34 |
*** nellysmitt has quit IRC | 20:35 | |
*** kopparam has joined #openstack-swift | 20:37 | |
nitika2 | notmyname: hi, | 20:38 |
notmyname | nitika2: hi | 20:38 |
nitika2 | I'm looking forward to hear the reply from you. | 20:39 |
nitika2 | for the project discussion. | 20:39 |
*** kyles_ne has joined #openstack-swift | 20:39 | |
swift_fan | notmyname zaitcev : So does swift-get-nodes just return where Swift *plans* to place the particular object, | 20:42 |
*** kopparam has quit IRC | 20:42 | |
swift_fan | notmyname zaitcev : or does it return where Swift has *already* placed the object ? | 20:43 |
*** IRTermite has joined #openstack-swift | 20:43 | |
swifterdarrell | swift_fan: this is required reading at the level of abstraction at which you're apparently wanting to operate: http://docs.openstack.org/developer/swift/overview_ring.html | 20:44 |
swifterdarrell | swift_fan: any full account/container/object path gets deterministically mapped to a partition, then the ring defines the N devices on which that partition resides (N may be different than the replica count for fractional ring replica counts, but it will be an integer). | 20:47 |
swifterdarrell | swift_fan: for any given ring, the above mapping isn't so much a "plan" per se as a statement of fact. | 20:49 |
swifterdarrell | swift_fan: separate from that is whether there is a (correct) object at those N locations on those N devices | 20:49 |
swifterdarrell | swift_fan: the latter is of concern for the object-server answering questions on behalf of the proxy-server (and some client on the other side of the proxy), the swift-object-auditor, and the swift-object-replicator | 20:51 |
swift_fan | swifterdarrell : So what you're saying, then, is that swift-get-nodes isn't really a way to tell whether an object has been placed on a particular node ? | 21:02 |
*** annegentle has quit IRC | 21:02 | |
NM | swift_fan: It can tell you where the object should be. It doesn't check if the object is there. It's like a library index: it can tell you where the book should be but it doesn't say if the book is there. (I hope it isn't a bad analogy) | 21:05 |
*** fifieldt_ has joined #openstack-swift | 21:14 | |
swifterdarrell | swift_fan: I feel like there's some question you're trying to actually ask, but haven't yet | 21:17 |
swifterdarrell | swift_fan: what are you after, specifically? | 21:17 |
*** hhuang has quit IRC | 21:18 | |
*** fifieldt has quit IRC | 21:18 | |
*** Trixboxer has quit IRC | 21:19 | |
*** gyee has quit IRC | 21:22 | |
* zaitcev . o O ( If he ever learns about the Hadoop node-placement middleware, we're all dead. DEAD. ) | 21:26 | |
*** rmcall has quit IRC | 21:28 | |
*** rmcall has joined #openstack-swift | 21:28 | |
portante | zaitcev: okay, do tell ... | 21:29 |
zaitcev | ^_^ | 21:29 |
*** hhuang has joined #openstack-swift | 21:30 | |
ctennis | swifterdarrell: swift-get-nodes has output that shows a bunch of "curl" commands, which you can individually run to ask the end nodes whether the object data is there or not | 21:32 |
*** mccgeek has joined #openstack-swift | 21:32 | |
zaitcev | only if you have access to the nodes, which users should not have | 21:33 |
zaitcev | otherwise they defeat all the access controls | 21:33 |
*** mccgeek has left #openstack-swift | 21:34 | |
ctennis | correct, it doesn't work from the "outside" | 21:34 |
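A sketch of that workflow for an operator with access to the storage network; the ring path and the account/container/object names are placeholders:

    # Print the partition, primary (and handoff) nodes, and ready-made curl
    # commands for this object
    swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject
    # One of the printed commands, roughly this shape, HEADs the object-server
    # directly to see whether that node really has the data
    curl -I "http://<node-ip>:6000/<device>/<partition>/AUTH_test/mycontainer/myobject"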
mattoliverau | Morning all, it's nice to see chatter in the channel. I take it the hackathon went well :) notmyname, looking forward to reading your write up :) | 21:38 |
*** kopparam has joined #openstack-swift | 21:38 | |
* notmyname needs to do a writeup | 21:39 | |
mattoliverau | Lol, or anyone else's (who was there, cause my write up of the hackathon would be a bit of a lie) :p | 21:40 |
*** kopparam has quit IRC | 21:42 | |
*** tdasilva has quit IRC | 21:42 | |
swift_fan | zaitcev : What do you mean about the "Hadoop node-placement middleware" ? | 21:45 |
zaitcev | swift_fan: you know, if you follow http://www.bing.com/search?q=hadoop+swift+middleware then the first result is the Sahara doc | 21:48 |
zaitcev | swift_fan: but actually I was joking. Nobody is going to die if you install that. | 21:48 |
zaitcev | well, maybe your soul... just a little... | 21:49 |
*** fungi has joined #openstack-swift | 21:55 | |
fungi | redbo: it looks like you may be listed in https://review.openstack.org/#/admin/groups/24,members twice... can you remove whichever of those two gerrit accounts you're no longer using? (the one for "mike-launchpad" looks like it's probably not in use?) | 22:01 |
swift_fan | Ah, I see. | 22:02 |
swift_fan | Ok. | 22:02 |
swift_fan | Btw, does anyone know how to use tcpdump in order to see what other nodes your proxy-server(s) are communicating with ? | 22:02 |
zaitcev | it's a separate NIC most of the time, so just -i eth2 or something | 22:03 |
*** NM has quit IRC | 22:03 | |
zaitcev | it won't show everything in case of PACO, mind | 22:03 |
swift_fan | zaitcev : I want to see whether one of my proxy-servers is communicating with another one of my proxy-servers whenever I issue a swift-download command. | 22:04 |
zaitcev | But personally I did not resort to tcpdump in a long time. | 22:04 |
zaitcev | Longer than it had been since Ben Kenobi heard the name Obi-Wan. | 22:04 |
swift_fan | zaitcev : Because as an experiment I turned off replication. | 22:04 |
swift_fan | zaitcev : And thus I want to see how many times the object can be downloaded successfully (out of 3 attempts) when issuing a request to the load balancer. | 22:05 |
swift_fan | zaitcev : And after I've uploaded a simple object after turning off replication across the cluster. | 22:06 |
swift_fan | zaitcev : So I would like to see whether the proxy-server that received the download request from my load balancer will try to contact the other proxy servers, if it does not contain the requested object. | 22:06 |
swift_fan | (each node has a proxy-server, account-server, container-server, and object-server). | 22:07 |
swift_fan | zaitcev : If not tcpdump, what would be the recommended way to do this ? | 22:08 |
zaitcev | tail -f /var/log/swift/swift.log worked for me | 22:09 |
swift_fan | My logs are separated by service (proxy, account, container, object). | 22:10 |
swift_fan | Do you know which server would be logging this information? | 22:10 |
swift_fan | (I'm guessing proxy-server?) | 22:10 |
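One way to run zaitcev's suggestion for this experiment; the interface name, peer proxy IPs, and proxy port below are placeholders for swift_fan's cluster:

    # Watch this proxy talking to the other two proxies while a download runs
    tcpdump -i eth0 -nn 'tcp and (host 10.0.0.11 or 10.0.0.12) and port 8080'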
*** CaioBrentano has quit IRC | 22:11 | |
*** annegentle has joined #openstack-swift | 22:13 | |
*** joeljwright has quit IRC | 22:19 | |
*** annegentle has quit IRC | 22:22 | |
ppai | i'm exploring object versioning and observed the following behavior: http://fpaste.org/139733/ Is that expected ? Or am I being paranoid | 22:23 |
*** rmcall has quit IRC | 22:25 | |
*** elambert has quit IRC | 22:28 | |
*** gyee has joined #openstack-swift | 22:32 | |
*** dmsimard is now known as dmsimard_away | 22:34 | |
*** kopparam has joined #openstack-swift | 22:39 | |
*** cschwede has quit IRC | 22:40 | |
*** kopparam has quit IRC | 22:44 | |
*** ppai has quit IRC | 22:45 | |
*** Dafna has quit IRC | 22:49 | |
*** kopparam has joined #openstack-swift | 23:40 | |
*** kopparam has quit IRC | 23:45 | |
*** occup4nt has joined #openstack-swift | 23:58 |