*** abhirc has joined #openstack-swift | 00:00 | |
*** jasondotstar has quit IRC | 00:03 | |
*** ho has joined #openstack-swift | 00:09 | |
*** nellysmitt has quit IRC | 00:10 | |
openstackgerrit | Samuel Merritt proposed openstack/swift: Allow per-policy overrides in object replicator. https://review.openstack.org/149441 | 00:11 |
ho | good morning! | 00:14 |
*** oomichi has joined #openstack-swift | 00:22 | |
*** dmorita has joined #openstack-swift | 00:32 | |
*** tdasilva has quit IRC | 00:39 | |
*** jasondotstar has joined #openstack-swift | 00:40 | |
*** k69 has joined #openstack-swift | 00:44 | |
k69 | hi, are these values OK for the swift-ring-builder command? (http://paste.ubuntu.com/9827340/) is the metadata for account fine? | 00:53 |
k69 | account.builder* | 00:53 |
k69 | sry, i meant the "balance" and "meta" values | 00:54 |
*** jasondotstar has quit IRC | 00:58 | |
*** yuanz has joined #openstack-swift | 01:03 | |
*** yuan has quit IRC | 01:03 | |
*** peluse_ has joined #openstack-swift | 01:03 | |
*** tsg has quit IRC | 01:03 | |
*** ahonda has joined #openstack-swift | 01:04 | |
*** peluse has quit IRC | 01:06 | |
*** gyee has quit IRC | 01:07 | |
mattoliverau | k69: have you done a rebalance on account.builder? after adding sdc1, that is? because devices 2 and 3 have no partitions | 01:14 |
k69 | mattoliverau, actually i couldn't rebalance them, because i added them after rebalancing sdb1, so it gave me a "min_part_hours" error saying i needed to wait an hour. is it going to cause problems if i rebalance it later? | 01:18 |
k69 | mattoliverau, ty for the reply | 01:19 |
mattoliverau | k69: is this on a live cluster or a test one? | 01:21 |
k69 | test | 01:21 |
mattoliverau | k69: then you can run: swift-ring-builder account.builder pretend_min_part_hours_passed | 01:22 |
mattoliverau | k69: so you don't have to wait min-part-hours | 01:22 |
k69 | oh thanks | 01:23 |
mattoliverau | oh and morning ho :) Sorry been in and out today and missed you coming online :) | 01:24 |
k69 | thats great. one more problem i have faced is that when restarting services, "sudo service memcached restart" works fine but "sudo service swift-proxy restart" shows "stop: unknown instance" and "start: job failed to start" | 01:24 |
k69 | on all nodes | 01:25 |
mattoliverau | k69: you can replace the ring without restarting services. The services will automatically pick up an updated ring. Also, I find 'swift-init proxy restart' works better | 01:26 |
mattoliverau | k69: just dump the ring in /etc/swift on your storage nodes and that's it :) | 01:26 |
k69 | mattoliverau, oh, so you mean i don't need to restart them at all, as (http://docs.openstack.org/juno/install-guide/install/apt/content/swift-finalize-installation.html) says? | 01:28 |
mattoliverau | k69: if you are just updating the ring files, then no. If you modify the proxy configuration, then yes. You just rebalanced account.builder, building a new account ring, so I assumed that is the only thing that has changed | 01:30 |
k69 | mattoliverau, oh, ok, but i have also modified the proxy config, and i have run "sudo chown -R swift:swift /etc/swift", but on the controller and object nodes none of them can recognize the service | 01:32 |
k69 | mattoliverau, when i want to restart them | 01:32 |
mattoliverau | k69: does swift-init work? i.e: swft-init proxy restart | 01:34 |
mattoliverau | Sorry I mean swift-init proxy restart (you need to type correctly) | 01:35 |
k69 | mattoliverau, http://paste.ubuntu.com/9827768/ | 01:36 |
mattoliverau | k69: looks like you have a configuration issue in proxy-server.conf | 01:37 |
k69 | mattoliverau, oh yes it has error, thanks | 01:39 |
ho | mattoliverau: morning! i'm looking forward to meeting you online. i realized that hotels in San Francisco are expensive. :) | 01:40 |
mattoliverau | ho: yeah me too. are you going to be staying at the pickwick? the one suggested in eventbrite? | 01:41 |
ho | mattoliverau: not yet decided. | 01:42 |
ho | mattoliverau: I checked the cost of the hotel. | 01:42 |
ho | mattoliverau: now i'm checking the cost of hotels around pickwick. | 01:44 |
mattoliverau | ho: well that's where I'll be :) Work is sending me, so I didn't check, just put down the recommended hotel and they booked it :) | 01:44 |
*** dmsimard_away is now known as dmsimard | 01:46 | |
*** tellesnobrega_ has joined #openstack-swift | 01:47 | |
openstackgerrit | Daisuke Morita proposed openstack/swift: Output logs of policy index https://review.openstack.org/136995 | 01:48 |
ho | mattoliverau: i will let you know the info (beginning of next week). | 01:49 |
mattoliverau | ho: cool, glad you can come :) | 01:50 |
ho | mattoliverau: welcome :) | 01:51 |
*** dmsimard is now known as dmsimard_away | 01:55 | |
*** abhirc has quit IRC | 02:03 | |
*** abhirc has joined #openstack-swift | 02:04 | |
*** jasondotstar has joined #openstack-swift | 02:05 | |
*** jasondotstar has quit IRC | 02:05 | |
k69 | mattoliverau: umm - still can't restart it. swift.conf > ( http://paste.openstack.org/show/160567/ ) & proxy-server.conf > (http://paste.openstack.org/show/160569/) | 02:11 |
*** guest10101010 has joined #openstack-swift | 02:13 | |
*** haomaiwang has joined #openstack-swift | 02:14 | |
*** addnull has joined #openstack-swift | 02:21 | |
mattoliverau | k69: I just did a quick test, you need to remove the whitespace at the start of 'paste.filter_factory = keystonemiddleware.auth_token:filter_factory' in the [filter:authtoken] section | 02:22 |
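As background on why that stray whitespace breaks the proxy config: paste.deploy reads INI-style files with Python's ConfigParser, where an indented line is parsed as a continuation of the previous option's value rather than a new option. The short snippet below is a stand-in for the real proxy-server.conf (the option names are illustrative, not the full file):

```python
import configparser

# An indented line inside an INI section is folded into the previous
# option's value, so the indented paste.filter_factory line never
# becomes an option at all -- paste.deploy then can't load the filter.
broken = """\
[filter:authtoken]
auth_uri = http://controller:5000
 paste.filter_factory = keystonemiddleware.auth_token:filter_factory
"""

parser = configparser.ConfigParser()
parser.read_string(broken)
section = dict(parser["filter:authtoken"])

print(sorted(section))       # paste.filter_factory is missing as a key
print(section["auth_uri"])   # ...its text got appended to auth_uri instead
```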
k69 | <mattoliverau>, wow, thanks for the test. "swift-init proxy restart" works now, but "service swift-proxy restart" still doesn't | 02:26 |
*** tsg has joined #openstack-swift | 02:26 | |
k69 | oh no wait, after that it also worked | 02:27 |
k69 | thanks a lot, it is solved :) | 02:27 |
*** fandi has quit IRC | 02:27 | |
mattoliverau | k69: cool! well happy testing and enjoy swift :) | 02:28 |
k69 | <mattoliverau>, :-) | 02:29 |
k69 | sure ! | 02:29 |
*** guest10101010 has quit IRC | 02:33 | |
*** hugespoon has quit IRC | 02:41 | |
*** hugespoon has joined #openstack-swift | 02:41 | |
*** dmsimard_away is now known as dmsimard | 02:57 | |
*** tellesnobrega_ has quit IRC | 02:58 | |
*** tellesnobrega_ has joined #openstack-swift | 03:04 | |
*** tellesnobrega_ has quit IRC | 03:09 | |
*** tellesnobrega_ has joined #openstack-swift | 03:18 | |
*** addnull has quit IRC | 03:21 | |
*** fandi has joined #openstack-swift | 03:22 | |
*** bill_az has quit IRC | 03:25 | |
*** lpabon has quit IRC | 03:26 | |
k69 | hey guys, how can i rebuild a base file with the "swift-ring-builder account.builder create 10 3 1" command? i have made mistakes in the current account.builder, and the "swift-ring-builder account.builder" command shows wrong information | 03:28 |
*** hugespoon has quit IRC | 03:30 | |
*** dmsimard is now known as dmsimard_away | 03:30 | |
*** hugespoon has joined #openstack-swift | 03:44 | |
*** hugespoon has left #openstack-swift | 03:44 | |
openstackgerrit | Yuan Zhou proposed openstack/swift: Fix container deletion synchronizing with non_zero SP https://review.openstack.org/149469 | 03:53 |
mattoliverau | k69: what do you mean by rebuild a base file? if you want to recreate from scratch then mv the account.builder out of the way and run create again. If it is just the drives then you can use the add/remove | 03:56 |
k69 | mattoliverau, oo ok, can i delete account.builder and account.ring.gz, and then run the swift-ring-builder add commands? | 03:59 |
*** jrichli has joined #openstack-swift | 03:59 | |
openstackgerrit | Yuan Zhou proposed openstack/swift: Fix container deletion synchronizing with non_zero SP https://review.openstack.org/149469 | 04:01 |
k69 | mattoliverau, nvm IT WORKS WONDERS!!! thanks a lot, swift is fun :) (with your help) | 04:05 |
*** k69 has quit IRC | 04:05 | |
*** lcurtis has joined #openstack-swift | 04:06 | |
notmyname | print "hello, world" | 04:07 |
notmyname | what did I miss today? | 04:07 |
notmyname | (my flight home is delayed nearly 2 hours) | 04:07 |
mattoliverau | k69: yes.. but if it was a real cluster I wouldn't; you'd modify the existing builder file, as otherwise the partitions (depending on the seed used) could be distributed to other drives, causing the cluster to potentially move all objects (not good)... but seeing as you're testing, it's fine :) | 04:17 |
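A toy illustration of mattoliverau's warning (this is NOT Swift's actual ring algorithm): rebuilding the partition-to-device assignment from scratch with a different seed moves most partitions, which on a live cluster means re-replicating most of the stored data, whereas keeping the existing builder file lets a rebalance move only what is necessary.

```python
import random

# Assign each partition to a device purely by seeded choice -- a crude
# stand-in for "building a ring from scratch with a given seed".
def assign(num_parts, devices, seed):
    rng = random.Random(seed)
    return [rng.choice(devices) for _ in range(num_parts)]

devices = ["sdb1", "sdc1", "sdd1"]
old = assign(1024, devices, seed=1)
new = assign(1024, devices, seed=2)   # "deleted the builder, started over"

moved = sum(1 for a, b in zip(old, new) if a != b)
print("fraction of partitions moved: %.2f" % (moved / 1024))
```

With three devices roughly two thirds of the partitions land somewhere new, which is exactly the mass data movement the builder file exists to avoid.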
mattoliverau | notmyname: hey! how's seattle? | 04:18 |
mattoliverau | notmyname: I've written a distributed prefix tree. it can search for large subtrees, I can split them out, join them back together, and changes in data are timestamped.. now I just need to hook it into container sharding v2 and see how well it works ;) | 04:21 |
notmyname | mattoliverau: seattle is nice (as always). pretty cloudy today. kinda colder than I expected. and seems to have a problem with a certain plane to my home. but they still have great fish for dinner! | 04:21 |
notmyname | mattoliverau: nice! (wrt prefix trees) | 04:22 |
mattoliverau | oh and finish the update code. So far everything stays in order... #famous last words. | 04:22 |
mattoliverau | notmyname: nice :) Presentation went well then? | 04:23 |
notmyname | ya, I think so. unfortunately, due to some schedule changes, there weren't many people there | 04:23 |
notmyname | however, it was fun talking to people running Swift and hearing about what their experiences are | 04:24 |
mattoliverau | notmyname: bugger, yeah, so long as people (or even you) got something out of it, then it's a success :) | 04:24 |
notmyname | Moz specifically said that things work pretty well and they like the performance and the ease of operations (manageability). they're pretty happy with it and building a new, larger swift cluster | 04:25 |
mattoliverau | notmyname: this Monday is Australia Day, so it's a long weekend here.. probably won't be around Monday, just a heads up.. but seeing as it's your Sunday you probably won't notice ;P | 04:25 |
notmyname | and someone from oracle was there and talked a little about how they use swift (upstream swift!) internally for storage (backups, social media storage, etc) | 04:26 |
notmyname | mattoliverau: :-) | 04:26 |
mattoliverau | notmyname: awesome! | 04:26 |
mattoliverau | Go upstream swift! | 04:26 |
notmyname | it's sort of well-known that oracle's public cloud, which purports to offer a swift API, is a reimplementation that they did internally on their own stuff. but today I learned that they are using upstream swift internally | 04:27 |
*** panbalag has quit IRC | 04:28 | |
notmyname | I should wander over to my gate. if my guess is correct, boarding should start in about 15 minutes | 04:29 |
notmyname | talk to everyone tomorrow | 04:30 |
mattoliverau | notmyname: k, have a great flight! Get home safe | 04:31 |
*** jrichli has quit IRC | 04:37 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 04:38 |
openstackgerrit | Tushar Gohad proposed openstack/swift: Bump eventlet version to 0.16.1 https://review.openstack.org/145403 | 04:53 |
*** abhirc has quit IRC | 05:03 | |
*** abhirc has joined #openstack-swift | 05:05 | |
*** lcurtis has quit IRC | 05:08 | |
*** nshaikh has joined #openstack-swift | 05:11 | |
*** ppai has joined #openstack-swift | 05:22 | |
*** lpabon has joined #openstack-swift | 05:27 | |
*** lpabon has quit IRC | 05:29 | |
*** nshaikh has quit IRC | 05:35 | |
*** abhirc has quit IRC | 05:46 | |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift: Efficient Replication for Distributed Regions https://review.openstack.org/99824 | 05:51 |
*** chlong has quit IRC | 05:54 | |
*** chlong has joined #openstack-swift | 06:01 | |
*** addnull has joined #openstack-swift | 06:01 | |
ho | I was wondering how to run tox tests with an external package (only provided at git: https://github.com/rodrigods/oslo.policy). how do i configure this for testing? | 06:05 |
*** bpap has quit IRC | 06:16 | |
ho | i added an oslo.policy entry in test-requirements.txt but it doesn't work (there is no such module on pypi) | 06:21 |
*** fandi has quit IRC | 06:21 | |
*** jyoti-ranjan has joined #openstack-swift | 06:22 | |
ho | -e git://github.com/rodrigods/oslo.policy.git#egg=oslo_policy in test-requirements.txt works. unnn... | 06:27 |
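For reference, the pattern ho landed on, installing a package straight from its git repository when it is not yet published on PyPI, looks like this as a pip requirements entry (URL exactly as given above; whether tox picks it up also depends on your tox.ini install settings):

```
# test-requirements.txt
-e git://github.com/rodrigods/oslo.policy.git#egg=oslo_policy
```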
*** jyoti-ranjan has quit IRC | 06:31 | |
*** silor has joined #openstack-swift | 06:44 | |
*** fandi has joined #openstack-swift | 06:45 | |
*** echevemaster has quit IRC | 06:46 | |
*** wasmum has quit IRC | 06:51 | |
*** wasmum has joined #openstack-swift | 06:51 | |
*** tellesnobrega_ has quit IRC | 07:00 | |
*** fandi has quit IRC | 07:23 | |
*** fandi has joined #openstack-swift | 07:30 | |
*** fandi has quit IRC | 07:36 | |
*** jyoti-ranjan has joined #openstack-swift | 07:47 | |
*** chlong has quit IRC | 08:14 | |
*** fandi has joined #openstack-swift | 08:14 | |
*** rledisez has joined #openstack-swift | 08:18 | |
*** fandi has quit IRC | 08:20 | |
*** geaaru has joined #openstack-swift | 08:21 | |
*** fandi has joined #openstack-swift | 08:23 | |
*** jyoti-ranjan has quit IRC | 08:23 | |
*** jyoti-ranjan has joined #openstack-swift | 08:39 | |
*** addnull has quit IRC | 08:39 | |
*** acoles_away is now known as acoles | 08:43 | |
*** nellysmitt has joined #openstack-swift | 08:57 | |
*** jistr has joined #openstack-swift | 09:15 | |
*** jordanP has joined #openstack-swift | 09:39 | |
openstackgerrit | Merged openstack/swift: dlo: Update doc about manifest containing data https://review.openstack.org/146390 | 09:41 |
*** addnull has joined #openstack-swift | 09:50 | |
*** addnull has quit IRC | 09:52 | |
*** addnull has joined #openstack-swift | 09:52 | |
openstackgerrit | Merged openstack/swift: Make ThreadPools deallocatable. https://review.openstack.org/145647 | 09:57 |
*** ppai has quit IRC | 09:58 | |
*** tsg has quit IRC | 10:01 | |
*** ppai has joined #openstack-swift | 10:12 | |
*** jyoti-ranjan has quit IRC | 10:15 | |
*** ho has quit IRC | 10:23 | |
*** aix has joined #openstack-swift | 10:29 | |
*** dmorita has quit IRC | 10:31 | |
*** tellesnobrega_ has joined #openstack-swift | 10:36 | |
*** nellysmitt has quit IRC | 10:43 | |
*** jyoti-ranjan has joined #openstack-swift | 10:46 | |
*** addnull has quit IRC | 10:56 | |
*** ppai has quit IRC | 11:00 | |
*** addnull has joined #openstack-swift | 11:00 | |
*** addnull has quit IRC | 11:04 | |
*** haomaiwang has quit IRC | 11:05 | |
*** tellesnobrega_ has quit IRC | 11:05 | |
*** addnull has joined #openstack-swift | 11:09 | |
*** addnull has quit IRC | 11:13 | |
*** ppai has joined #openstack-swift | 11:13 | |
openstackgerrit | Takashi Kajinami proposed openstack/swift: Remove redundant container updating after rsync https://review.openstack.org/149308 | 11:21 |
*** jyoti-ranjan has quit IRC | 11:28 | |
*** addnull has joined #openstack-swift | 11:29 | |
*** jyoti-ranjan has joined #openstack-swift | 11:35 | |
*** mahatic has joined #openstack-swift | 11:37 | |
*** panbalag has joined #openstack-swift | 11:38 | |
*** fandi has quit IRC | 11:42 | |
openstackgerrit | Joel Wright proposed openstack/python-swiftclient: This patch fixes downloading files to stdout. https://review.openstack.org/144899 | 11:46 |
*** chlong has joined #openstack-swift | 11:50 | |
*** nellysmitt has joined #openstack-swift | 12:10 | |
*** tellesnobrega_ has joined #openstack-swift | 12:20 | |
*** tellesnobrega_ has quit IRC | 12:35 | |
*** gvernik has joined #openstack-swift | 12:42 | |
*** tellesnobrega_ has joined #openstack-swift | 12:43 | |
*** addnull has quit IRC | 12:50 | |
*** gvernik has quit IRC | 12:57 | |
*** jyoti-ranjan has quit IRC | 13:02 | |
*** bill_az has joined #openstack-swift | 13:26 | |
*** ppai has quit IRC | 13:49 | |
*** mtreinish has quit IRC | 14:08 | |
*** abhirc has joined #openstack-swift | 14:16 | |
*** lcurtis has joined #openstack-swift | 14:19 | |
*** joeljwright has joined #openstack-swift | 14:25 | |
*** tellesnobrega_ has quit IRC | 14:27 | |
*** tellesnobrega_ has joined #openstack-swift | 14:27 | |
*** tellesnobrega_ has quit IRC | 14:52 | |
*** chlong has quit IRC | 14:54 | |
*** abhirc has quit IRC | 15:01 | |
*** thebloggu has joined #openstack-swift | 15:06 | |
*** jasondotstar has joined #openstack-swift | 15:10 | |
*** abhirc has joined #openstack-swift | 15:14 | |
thebloggu | Is there any way for me to specify the swift.conf to use when starting a server in Icehouse? I'm hitting this issue (https://bugs.launchpad.net/swift/+bug/1091007) because I'm trying to run swift in a user directory for testing, and every time I try to run the proxy server I get an error saying I don't have the /etc/swift/swift.conf file | 15:17 |
portante | Have folks heard about Espresso - "LinkedIn's hot new distributed document store"? http://getprismatic.com/story/1421871866596 | 15:28 |
portante | They are positioning it between Oracle and Voldemort (http://www.project-voldemort.com/voldemort/, which is apparently an open-source version of Amazon's Dynamo) | 15:30 |
*** tsg has joined #openstack-swift | 15:31 | |
*** abhirc has quit IRC | 15:45 | |
*** jrichli has joined #openstack-swift | 15:45 | |
*** dmsimard_away is now known as dmsimard | 15:51 | |
*** abhirc has joined #openstack-swift | 15:55 | |
*** abhirc has quit IRC | 16:03 | |
*** booly-yam-9117 has joined #openstack-swift | 16:07 | |
*** tdasilva has joined #openstack-swift | 16:09 | |
*** jordanP has quit IRC | 16:19 | |
*** tellesnobrega_ has joined #openstack-swift | 16:28 | |
*** david-lyle_afk is now known as david-lyle | 16:35 | |
*** geaaru has quit IRC | 16:49 | |
*** Nadeem_ has joined #openstack-swift | 17:00 | |
*** Nadeem_ has quit IRC | 17:00 | |
*** abhirc has joined #openstack-swift | 17:02 | |
*** gvernik has joined #openstack-swift | 17:11 | |
*** gvernik has quit IRC | 17:16 | |
*** jistr has quit IRC | 17:16 | |
*** zul has quit IRC | 17:17 | |
notmyname | good morning, world | 17:18 |
*** zul has joined #openstack-swift | 17:19 | |
notmyname | portante: interesting | 17:20 |
notmyname | mahatic: congrats on having the OPTIONS patch land! | 17:20 |
mahatic | notmyname, thank you! that was quick :) | 17:21 |
mahatic | patch landing | 17:21 |
mahatic | notmyname, for the server type, won't Host header do? | 17:25 |
notmyname | tsg: hello. I've got a question about eventlet versions | 17:25 |
mahatic | http://tools.ietf.org/html/rfc4229#section-2.1.51 | 17:25 |
notmyname | mahatic: isn't the host header set on the request? is there a def....oh, thanks. /me reads | 17:25 |
tsg | notmyname: good morning | 17:26 |
tsg | notmyname: eventlet versions seems to be a hot topic on all channels (nova, infra, swift!) :) | 17:26 |
notmyname | tsg: is 0.16.1 the min version? or just the recommended one? | 17:27 |
tsg | notmyname: for us 0.16.0 is the minimum | 17:27 |
notmyname | tsg: ah? what else? is it the nova breakage thing? I talked to infra people last week about it | 17:27 |
notmyname | tsg: hmm...what does 0.16.1 specifically give us that 0.16.0 doesn't have (pure swift perspective)? | 17:27 |
tsg | notmyname: yes, clarkb seemed to indicate yesterday that the nova gate breakage wasn't an issue at this point | 17:27 |
tsg | notmyname: there were some inconsistencies in the files included in the eventlet 0.16.0 source tarball and pip blob | 17:28 |
notmyname | ya, the cause is that stable release testing didn't freeze requirements. so it was tested with new versions of the code (which removed deprecated functionality) and thus broke in the gate | 17:28 |
notmyname | tsg: that's the only difference in 16 and 16.1? | 17:29 |
tsg | so we suggested the author create a 0.16.1 point fix - so that's the only difference | 17:29 |
tsg | notmyname: correct | 17:29 |
tsg | there was an issue a couple of days ago where bandersnatch would not pick up 0.16.1 but that seems to have been resolved by clarkb | 17:29 |
notmyname | tsg: ok. so IMO then the only real consideration is whatever distros find easier to package. whatever they want is what we should have in the requirements | 17:30 |
notmyname | ie it's very likely that whatever we have in global requirements is what gets packaged in the next ubuntu/rhel/cent repos | 17:30 |
tsg | notmyname: yes, I have a pending review for global-requirements master to get 0.16.1 in (https://review.openstack.org/#/c/145816/) that clarkb said he would look at soon (next week) | 17:31 |
tsg | we can then match the same in swift requirements on feature/ec | 17:31 |
notmyname | ok. sounds like 16.1 is the way to go | 17:31 |
notmyname | tsg: what's it blocking for us wrt EC work? | 17:32 |
tsg | notmyname: yes .. the < 0.16 version freeze applied only to stable/juno | 17:32 |
*** cutforth has quit IRC | 17:33 | |
tsg | since jenkins is pulling 0.16.1 now, there is no blocker for us | 17:33 |
tsg | (it was a blocker earlier for the 2-phase commit stuff I have pending) | 17:33 |
notmyname | ok | 17:33 |
notmyname | tsg: thanks for working on it! | 17:33 |
tsg | notmyname: sure. I will post the change today after some cleanup | 17:33 |
notmyname | great | 17:34 |
notmyname | mahatic: I don't think Host sounds right. (http://tools.ietf.org/html/rfc7230#section-5.4) | 17:36 |
notmyname | mahatic: perhaps Server is better (http://tools.ietf.org/html/rfc7231#section-7.4.2) | 17:36 |
tsg | notmyname, peluse_: we just got a +2 for 0.16.1 on global-requirements. so that's another thing out of the way | 17:36 |
notmyname | yay | 17:36 |
*** gvernik has joined #openstack-swift | 17:40 | |
*** panbalag1 has joined #openstack-swift | 17:44 | |
mahatic | notmyname, oh, yes. My bad. Thanks. But er what should I be returning in the header? What all information? | 17:45 |
*** acoles is now known as acoles_away | 17:45 | |
notmyname | mahatic: what do you think? | 17:45 |
*** gvernik has quit IRC | 17:45 | |
*** booly-yam-9117_ has joined #openstack-swift | 17:46 | |
*** booly-yam-9117 has quit IRC | 17:47 | |
*** panbalag has quit IRC | 17:48 | |
*** jyoti-ranjan has joined #openstack-swift | 17:50 | |
peluse_ | tsg: nice job!!!! | 17:50 |
mahatic | notmyname, each server's details? | 17:51 |
mahatic | notmyname, ip, port, device? | 17:52 |
*** jasondotstar has quit IRC | 17:52 | |
tsg | peluse_: should have the rest of the change up there today after we chat | 17:52 |
notmyname | mahatic: we'd already know the IP and port (ie we just talked on that very ip and port to get this info). and there could be multiple devices | 17:52 |
mahatic | notmyname, ah, sorry, ring details | 17:56 |
notmyname | mahatic: I'd expect something like "%s (%s)" % (server_name, swift_version) | 17:56 |
mahatic | notmyname, oh that's it? | 17:56 |
*** tsg_ has joined #openstack-swift | 17:57 | |
*** tsg has quit IRC | 17:57 | |
mahatic | okay. I got the recon mixed up. I'm out of my mind. | 17:58 |
notmyname | mahatic: ya, I think so. then recon can ping the servers in the ring with OPTIONS and then aggregate and report on the results | 17:59 |
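A hypothetical sketch of the scheme discussed above: the "name (version)" header value follows notmyname's suggested format, but the helper functions and sample values here are made up for illustration, not Swift's actual API.

```python
from collections import Counter

# Build the value a server would report, per notmyname's suggestion.
def format_server_header(server_name, swift_version):
    return "%s (%s)" % (server_name, swift_version)

# Split "object-server (2.2.2)" back into its name and version parts.
def parse_server_header(value):
    name, _, rest = value.partition(" (")
    return name, rest.rstrip(")")

# recon-style aggregation over the values each server in the ring reports
responses = [
    format_server_header("object-server", "2.2.2"),
    format_server_header("object-server", "2.2.2"),
    format_server_header("object-server", "2.2.1"),
]
by_version = Counter(parse_server_header(v)[1] for v in responses)
print(by_version.most_common())
```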
*** rledisez has quit IRC | 18:00 | |
mahatic | notmyname, hmm, okay. I need to understand ring a bit more I think. | 18:03 |
*** joeljwright has left #openstack-swift | 18:03 | |
mahatic | notmyname, the aim is to only validate the server name and version? Or is it like, once we have them, more information on server will be followed? | 18:05 |
*** jasondotstar has joined #openstack-swift | 18:08 | |
*** jrichli has quit IRC | 18:16 | |
*** silor has quit IRC | 18:21 | |
notmyname | mahatic: I think to start with we only want to validate the name. having the version too offers some opportunity for new tools later. but for now, validating the name is sufficient I think | 18:28 |
*** kevinc_ has joined #openstack-swift | 18:29 | |
mahatic | notmyname, ah okay | 18:31 |
*** Nadeem has joined #openstack-swift | 18:35 | |
*** tsg_ has quit IRC | 18:39 | |
*** david-lyle has quit IRC | 18:46 | |
*** tdasilva has quit IRC | 18:46 | |
thebloggu | Is there any way for me to specify the swift.conf to use when starting a server in Icehouse? I'm hitting this issue (https://bugs.launchpad.net/swift/+bug/1091007) because I'm trying to run swift in a user directory for testing, and every time I try to run the proxy server I get an error saying I don't have the /etc/swift/swift.conf file | 18:46 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 18:50 |
*** panbalag1 has quit IRC | 18:52 | |
*** nshaikh has joined #openstack-swift | 19:00 | |
*** zaitcev has joined #openstack-swift | 19:01 | |
*** ChanServ sets mode: +v zaitcev | 19:01 | |
*** tdasilva has joined #openstack-swift | 19:01 | |
*** thebloggu has quit IRC | 19:05 | |
*** jyoti-ranjan has quit IRC | 19:06 | |
*** reed_ has joined #openstack-swift | 19:06 | |
*** jrichli has joined #openstack-swift | 19:10 | |
*** lcurtis has quit IRC | 19:10 | |
*** jasondotstar has quit IRC | 19:15 | |
*** jasondotstar has joined #openstack-swift | 19:18 | |
*** Nadeem has quit IRC | 19:22 | |
*** booly-yam-9117_ has quit IRC | 19:36 | |
*** tsg has joined #openstack-swift | 19:38 | |
*** mahatic has quit IRC | 19:41 | |
*** tellesnobrega_ has quit IRC | 19:44 | |
abhirc | newbie question : Do Swift and Ceph go well together | 20:06 |
notmyname | abhirc: define "together" ;-) | 20:06 |
*** bill_az has quit IRC | 20:07 | |
*** jdprax has joined #openstack-swift | 20:08 | |
*** reed_ is now known as reed | 20:09 | |
*** reed has joined #openstack-swift | 20:09 | |
jdprax | I've created a blueprint for something that I would love to see come to swift. Thoughts? https://blueprints.launchpad.net/swift/+spec/object-versioning-rsync | 20:10 |
abhirc | notmyname: I have access to an OpenStack cluster we have Ceph as the backend and it was deployed in a 3 node cluster with object storage devices co-located | 20:12 |
notmyname | abhirc: ah | 20:13 |
*** nshaikh has left #openstack-swift | 20:13 | |
notmyname | abhirc: I'd recommend giving swift and ceph separate drives (if not separate servers). mixing the two in a deployment would likely result in a lot of hardware contention | 20:14 |
abhirc | notmyname: When I access the OpenStack dashboard, the Containers tab under Object Storage gives me a Server Error. on further troubleshooting, it seems Swift has not been integrated. i was just curious whether there are complications in having them both together; the cluster is an experimental one. | 20:15 |
notmyname | abhirc: are you sure that "containers" tab in horizon isn't for the docker-like containers? ie instead of object storage things | 20:16 |
notmyname | abhirc: horizon does support reading and writing data with swift | 20:16 |
*** gvernik has joined #openstack-swift | 20:19 | |
*** kevinc_ has quit IRC | 20:21 | |
abhirc | notmyname: I have a Red Hat OpenStack flavor deployed, so I might be going in the right direction. my understanding is that if I create a Project and access it, I can see the Object Storage panel and under that the Containers tab. am I looking at the wrong tab? | 20:21 |
notmyname | abhirc: I'm not too familiar with horizon, actually. I'd ask david lyle, but I don't see him online right now. or maybe someone else in here has used horizon and can offer some guidance | 20:23 |
abhirc | notmyname: thanks so much , really appreciate it! | 20:29 |
*** gvernik has quit IRC | 20:30 | |
*** kevinc_ has joined #openstack-swift | 20:43 | |
*** bpap has joined #openstack-swift | 20:46 | |
*** jdprax has quit IRC | 20:52 | |
clayg | so what's the priority review for the day? I might be able to get one in. | 20:54 |
clayg | reuse-port? concurrent-requests? fix swift download container object -o -? | 20:54 |
clayg | maybe I should try to rewrite the set_overload patch to rip out the calculator silliness and just change the display and set operations to use %? | 20:55 |
notmyname | clayg: Joel was asking about https://review.openstack.org/#/c/130339/7 this morning | 20:56 |
notmyname | and it seems to be an internal blocker for him | 20:57 |
notmyname | clayg: and for swift, the global cluster replication improvements could take an extra eye | 20:57 |
zaitcev | Guys, do we know of any off-line or batch tools that do consistency checking of Swift? I tried to do that with swift-report, but there wasn't any particular urgency, so I never finished it beyond basic matching of accounts against Keystone. | 20:58 |
notmyname | zaitcev: consistency checking? like the dispersion report? or like auditor stuff? | 20:58 |
clayg | is that the shuffle thing? | 20:59 |
zaitcev | notmyname: I mean objects that do not belong to any container, and generally mismatches between container DBs and objects. | 20:59 |
notmyname | clayg: ya | 20:59 |
clayg | I thought I already tried to +2 that and then it got even more complicated with more scary edge cases that I don't really have a good way to functionally test :\ | 21:00 |
notmyname | zaitcev: ah. umm...nothing springs to mind, but I'm hoping that because I've not looked recently | 21:01 |
clayg | zaitcev: swifterdarrell had something like that once - you should try to weasel it out of him | 21:02 |
zaitcev | notmyname: okay, thought so, thanks | 21:02 |
*** bill_az has joined #openstack-swift | 21:02 | |
zaitcev | clayg: that is because of your argument, "if we don't do X, then an enhanced possibility to get dark data exists (per current code)". So I thought, why not do this: don't do X, but instead re-animate an effort for "swift fsck". Currently we don't even _know_ if dark data exists, or how much. | 21:04 |
zaitcev | What if RAX could save a million a year just by cleansing dark data | 21:04 |
clayg | when was I arguing that we need to do X to prevent dark data? | 21:04 |
clayg | the async_update behavior on 404's? That was the last time I remember talking about it... | 21:05 |
zaitcev | yes, https://review.openstack.org/99598 | 21:05 |
zaitcev | well, I'm going through review list from bottom up | 21:05 |
clayg | zaitcev: but yeah the last time we needed to fsck (I like that term) a swift cluster was cause it got >98% full | 21:05 |
swifterdarrell | zaitcev: clayg; I *do* have something like that, but it's ugly | 21:05 |
clayg | swifterdarrell: zaitcev doesn't care if it's ugly! | 21:06 |
zaitcev | swifterdarrell: it's okay, just lemme see even a draft | 21:06 |
zaitcev | swifterdarrell: more importantly, did you apply it to any live clusters and did you find any interesting dark data and other screw-ups? | 21:06 |
zaitcev | (that auditors cannot identify) | 21:07 |
clayg | zaitcev: oh, i don't know that i was arguing that Takashi's patch should get merged just that I still think it's a problem - briancline too. | 21:07 |
clayg | zaitcev: there was a bunch of hubbub about trying to test it, but no one did | 21:08 |
zaitcev | clayg: okay, so I keep -1 for now. It may not be a problem for anyone who knows Swift at all, but I'm inundated by reports of stuck updaters on clusters that random people install to test RDO. Usually it's "list does not show my objects I HATE YOUR SHITWARE" | 21:09 |
clayg | but ultimately the containers are where we list where the objects are, there's plenty of code that currently deals with replication re-animating a container (half-deleted container) and I think that is definitely preferable to losing an object update, but again, i don't recall the specifics of the implementation that Takashi offered | 21:10 |
zaitcev | so I'm touchy about updaters | 21:10 |
clayg | stuck updaters? wow that's curious... | 21:10 |
zaitcev | well usually just overloaded VMs | 21:10 |
clayg | hrmmm... overloaded like out of disk space, or just slow cpu? | 21:11 |
zaitcev | slow | 21:11 |
zaitcev | they even get servers timing out, so... But still... | 21:11 |
ahale | well we know if dark data exists from the difference between df and the account dbs - but yeah if RAX could save a million a year just by cleansing dark data I could buy a boat | 21:12 |
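ahale's point about inferring dark data from the gap between df and the account DBs could be sketched roughly as below. This is a hypothetical illustration, not an actual Swift tool; the replica scaling and the filesystem-overhead tolerance are parameters invented for the example, since fs metadata overhead means the two numbers never match exactly:

```python
# Hypothetical sketch, not a Swift tool: estimate an upper bound on dark
# data from the gap between raw on-disk usage (df) and the logical
# bytes_used totals in the account DBs. replica_count and fs_overhead
# are made-up parameters for illustration.

def dark_data_estimate(df_bytes_used, account_bytes_used,
                       replica_count=3, fs_overhead=0.02):
    """Return estimated dark-data bytes, or 0 if within fs overhead."""
    # Account DBs count each object once; on disk it is stored
    # replica_count times, so scale up before comparing.
    expected = account_bytes_used * replica_count
    delta = df_bytes_used - expected
    tolerance = expected * fs_overhead
    return max(0, delta - tolerance)

TiB = 1024 ** 4
# Made-up numbers: 100 TiB on disk vs 32 TiB logical at 3 replicas.
print(dark_data_estimate(100 * TiB, 32 * TiB))
```

Anything above the tolerance is only a candidate for dark data; as the discussion below notes, it still has to be checked against the listings before anyone deletes it.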
zaitcev | Re. specifics, he only removed the check for 404. It's a 1-liner. | 21:12 |
*** booly-yam-3388 has joined #openstack-swift | 21:13 | |
zaitcev | okay. I'll put fsck on todo list then | 21:13 |
notmyname | ahale: have you tracked any of that yet? I remember there being some delta just from fs-metadata overhead, so they won't match exactly | 21:14 |
ahale | yeah we have | 21:14 |
notmyname | ah. what delta are you seeing? | 21:14 |
ahale | well, a theory - and we found and deleted a bunch | 21:14 |
notmyname | ie what's acceptable | 21:14 |
ahale | oh i dont have numbers handy, it was a horrible long running thing I tried to have as little to do with as I could | 21:15 |
zaitcev | figures | 21:15 |
ahale | and only looking at files > 1000MB (not 1024, thanks glance) | 21:15 |
notmyname | ahale: sounds like a gholt project ;-) | 21:15 |
zaitcev | was it worthwhile though | 21:15 |
ahale | yeah it was/is | 21:15 |
notmyname | zaitcev: I wonder how you'd track async pendings (or generally the eventual consistency of listings). maybe checking objects that are over a certain age? | 21:19 |
zaitcev | notmyname: I suppose that works... There must be a certain churn at all times that fsck cannot keep up with. | 21:22 |
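The age-threshold idea notmyname mentions (only flag objects past a grace period, so in-flight async container updates don't show up as dark data) could be sketched like this. The helper name, inputs, and default grace period are all invented for illustration; nothing like this exists in Swift:

```python
import time

# Hypothetical helper, not part of Swift: given objects found on disk and
# the names from the container listing, flag only objects older than a
# grace period, so uploads whose async container updates simply haven't
# landed yet aren't false positives.

def dark_data_candidates(on_disk, listed, grace_seconds=7 * 24 * 3600,
                         now=None):
    """on_disk: {object name: creation epoch seconds}; listed: set of names."""
    now = time.time() if now is None else now
    return sorted(name for name, ts in on_disk.items()
                  if name not in listed and now - ts > grace_seconds)
```

As zaitcev says, there is always some churn such a check can't keep up with; the grace period is what keeps that churn out of the report.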
*** booly-yam-3388 has quit IRC | 21:27 | |
notmyname | to quote from a presentation given by some people running swift: "maintenance is easy. [swift is] made of unicorn farts" | 21:27 |
notmyname | that's from http://moz.com | 21:28 |
ahale | yah its unicorn herding thats the hard bit | 21:28 |
notmyname | :-) | 21:28 |
notmyname | ahale: ya, that was under the "short term maintenance" category | 21:29 |
notmyname | ahale: they acknowledged that long term maintenance (large containers, capacity planning, etc) is harder | 21:29 |
*** kevinc_ has quit IRC | 21:30 | |
notmyname | http://d.not.mn/swift_unicorn_farts.JPG | 21:32 |
*** geaaru has joined #openstack-swift | 21:34 | |
ahale | here's some idea of what we got on some drives we cleared some dark data from http://img.cfil.es/0a05bc52-c62c-4f43-b118-d77d0d333da7.jpg | 21:36 |
*** abhirc has quit IRC | 21:36 | |
*** abhirc has joined #openstack-swift | 21:36 | |
notmyname | ahale: half of what's interesting there is the domain name ;-) | 21:37 |
ahale | lol | 21:37 |
ahale | its just mine not a rax thing | 21:37 |
notmyname | ah. like my d.not.mn | 21:37 |
ahale | yup | 21:37 |
notmyname | a quarter is the number of graphed lines ;-) | 21:37 |
ahale | it goes to 105% cos we overclock our drives | 21:38 |
notmyname | and then the 15% drop in used capacity (if I'm reading it right) | 21:38 |
ahale | ~ yah | 21:38 |
notmyname | ahale: you must put racing stripes and a spoiler on the drives! | 21:38 |
*** jasondotstar has quit IRC | 21:38 | |
redbo | high flow cats | 21:38 |
*** kevinc_ has joined #openstack-swift | 21:39 | |
notmyname | "it helps airflow in the dc" | 21:39 |
ahale | also turns out if you flip half the drives the spin balances better | 21:39 |
notmyname | do you use these drives? http://www.damngeeky.com/2013/11/14/15498/f1-race-car-replica-assembled-entirely-wd-hard-drive-parts.html | 21:40 |
redbo | it was tough when we opened sydney, we had to find drives that spin counterclockwise | 21:41 |
notmyname | lol | 21:41 |
redbo | I have this plan to make a bloom filter of all the objects in all the containers in the world, then just delete any objects that aren't in the bloom filter. But for some reason that scares people. | 21:43 |
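For what it's worth, the property that makes redbo's plan sound in principle: a bloom filter has no false negatives, so an object that really was recorded from some container listing can never test negative and be flagged for deletion; a false positive only means a piece of dark data survives the pass. A minimal sketch, where the size and salted-hash scheme are arbitrary choices for illustration:

```python
import hashlib

# Minimal bloom filter sketch. The property that matters here: add()
# never produces a false negative, so anything actually recorded always
# tests as present; only false positives (spurious "present") can occur.

class BloomFilter:
    def __init__(self, size_bits=1 << 20, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes independent bit positions from salted SHA-256.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(b"%d:%s" % (i, item.encode())).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

So deleting only objects that test negative can never touch a listed object; the failure mode is conservative (some dark data survives), which is why "it should work cos its maths" even if the "just delete" part is still scary.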
*** abhirc_ has joined #openstack-swift | 21:44 | |
*** abhirc has quit IRC | 21:47 | |
ahale | i know it should work cos its maths, but its the just delete bit thats scary | 21:47 |
*** tsg has quit IRC | 21:49 | |
clayg | redbo: http://www.somethingsimilar.com/2012/05/21/the-opposite-of-a-bloom-filter/ | 21:49 |
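The linked "opposite of a bloom filter" inverts those guarantees: a direct-mapped cache whose "definitely seen before" answers are always correct, while collisions can evict entries and cause false negatives. A rough sketch under those assumptions (class and method names are invented):

```python
import hashlib

# Sketch of the "opposite of a bloom filter" idea from the linked post:
# a fixed-size, direct-mapped cache. "Seen before" answers are always
# correct; collisions evict entries, so "not seen" can be a false negative.

class InverseBloomFilter:
    def __init__(self, size=1 << 16):
        self.size = size
        self.slots = [None] * size

    def _index(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.size

    def observe(self, item):
        """Record item; return True only if it was definitely seen before."""
        idx = self._index(item)
        seen = self.slots[idx] == item
        self.slots[idx] = item
        return seen
```

Memory is bounded regardless of how many items pass through, which is the trade the post is making against an exact set.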
notmyname | swifterdarrell thinks bloom filters are cool. he'd be all over that | 21:49 |
*** tsg has joined #openstack-swift | 21:51 | |
notmyname | ahale: when you're cleaning up the data, how'd you know to delete it instead of add it to the listing? | 21:52 |
notmyname | was there another index of data somewhere? | 21:52 |
redbo | clayg: That's a neat and simple idea | 21:53 |
ahale | nope there wasn't, you just have to go on the listing | 21:53 |
notmyname | ahale: so why'd you delete data? ie why is the object there wrong instead of the listing being wrong? | 21:54 |
ahale | something to do with tombstones expiring faster than asyncs and/or something else | 21:56 |
*** tsg has quit IRC | 21:56 | |
ahale | that cluster had been at 100% as well btw | 21:57 |
notmyname | at least it wasn't 105%. then youd have to deal with full cluster problems | 21:58 |
ahale | hehe true | 21:58 |
*** lcurtis has joined #openstack-swift | 21:59 | |
redbo | I checked logs for about a zillion of those objects and they all had DELETEs issued last. Objects being uploaded without the containers being updated doesn't really seem to happen. | 21:59 |
notmyname | I don't doubt you handled it. just curious about how you did it | 22:00 |
redbo | but objects hiding on some handoff server until the tombstones get removed is totally a thing that happens | 22:00 |
ahale | ahh that was it | 22:00 |
notmyname | cleanup interval longer than the replication cycle? | 22:01 |
clayg | notmyname: it'd have to be on the handoff longer than reclaim age | 22:01 |
clayg | redbo: I'm pretty sure I've seen create container get pushed to 3 handoffs because of 507's - container existence still works - but the async gets deleted without updating the container because it doesn't look at handoffs | 22:02 |
*** kevinc_ has quit IRC | 22:03 | |
redbo | I don't think we've seen that, but we've had a problem where asyncs come around and create an object in the container after the tombstone entry in the container has been removed. | 22:06 |
clayg | yuk, good to know | 22:07 |
clayg | was the async just hanging around after the object tombstone had already cleaned up the .data too? | 22:07 |
*** gyee has joined #openstack-swift | 22:07 | |
clayg | this is very therapeutic | 22:08 |
redbo | yeah. Either someone put a drive back into commission after it was out for a week, or someone was really hammering a container and the asyncs couldn't clear for a week or whatever. | 22:09 |
ahale | if i had a dollar for every async i'd be a billionaire.. | 22:09 |
*** kevinc_ has joined #openstack-swift | 22:10 | |
clayg | maybe if we fixed the thing where tombstones don't get reaped from inactive suffixes we could run with a higher reclaim age... idk, maybe dynamic reclaim age with backpressure from how healthy things are (replication cycle time, async count) - not sure how to solve an old disk coming into the mix - mtime on the file > reclaim age brings great skepticism? | 22:11 |
clayg | any one doing reviews on swiftclient -> this one is super annoying for me: https://review.openstack.org/#/c/149108/ | 22:13 |
redbo | yeah, maybe. I've sat down to write a disk indexing/hashing thing that's better than hashes.pkl like 8 times now, but I haven't finished one yet. | 22:13 |
redbo | but part of that should be tracking tombstone files | 22:16 |
clayg | running tox on python-swiftclient is so freaking useless, I don't even understand, where is the summary of the failed tests - why is pypy in the test set by default - does anyone understand how testr reports failures? | 22:17 |
clayg | like what was wrong with nose? I liked the pretty list of dots... | 22:19 |
*** tdasilva has quit IRC | 22:23 | |
peluse_ | redbo: fyi as part of the EC work I have a WIP patch that will refactor the whole hashes.pkl handling, so it should more easily allow for replacing the whole scheme with really anything on a per-policy basis... | 22:23 |
peluse_ | redbo, it is here if you're curious what its looking like now: https://review.openstack.org/#/c/131872/ | 22:23 |
*** chlong has joined #openstack-swift | 22:24 | |
*** jrichli has quit IRC | 22:33 | |
*** peluse_ has quit IRC | 22:34 | |
*** peluse has joined #openstack-swift | 22:44 | |
redbo | peluse: The API still seems pretty geared toward invalidating a whole suffix hash at a time. | 22:45 |
redbo | peluse: I'd kind of like a class that's an abstract "partition index" or whatever, that you tell when something is added to or removed from the partition, so a fancy new hashing strategy isn't forced to walk the suffix directory to find out what's changed. | 22:45 |
*** ChanServ sets mode: +v peluse | 22:46 | |
peluse | redbo: yeah, that's true as the intent here is to enable a new format for what we need for EC... I'm hesitant of course to expand the scope any further but maybe it makes the next project easier - maybe not :) | 22:48 |
redbo | fair enough | 22:49 |
redbo | but I don't think it'd be able to do what I want (or not well) | 22:52 |
peluse | :( | 22:54 |
redbo | haha.. well, constantly invalidating a hash of a chunk of the filesystem and then walking it to rebuild wasn't the most inspired thing, when we generally have the information that something was added or removed and could just update a hash. | 23:00 |
redbo | or do something even cleverer than hashing with that information | 23:03 |
*** david-ly_ has joined #openstack-swift | 23:07 | |
torgomatic | redbo: you mean chexor? | 23:07 |
*** dmsimard is now known as dmsimard_away | 23:07 | |
*** abhirc_ has quit IRC | 23:08 | |
*** david-ly_ is now known as david-lyle | 23:11 | |
redbo | obviously I mean chexor | 23:12 |
*** bpap_r has joined #openstack-swift | 23:17 | |
*** bpap has quit IRC | 23:19 | |
clayg | peluse: the thing is that interface needs to be moved up a level for the object-server so it's just .delete_object(**useful_info) and .add_object(**useful_info); then the implementation can decide if that means invalidating the whole suffix or updating something smarter - the backend interfaces for the replicators matter less; if you didn't do per-suffix syncing the replicators would look a lot different anyway | 23:19 |
*** gyee has quit IRC | 23:32 | |
*** lcurtis has quit IRC | 23:51 | |
*** abhirc has joined #openstack-swift | 23:55 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!