*** itlinux has joined #openstack-swift | 00:11 | |
*** gyee has joined #openstack-swift | 00:14 | |
*** gyee has quit IRC | 00:15 | |
*** Sukhdev has joined #openstack-swift | 00:22 | |
timburke | torgomatic: so that was walking the entire index then? to give us an idea of how long it would take to assign pivots? | 00:36 |
*** itlinux has quit IRC | 00:51 | |
*** tovin07_ has joined #openstack-swift | 00:56 | |
*** lucasxu has joined #openstack-swift | 01:16 | |
*** lucasxu has quit IRC | 01:19 | |
*** Sukhdev has quit IRC | 01:21 | |
*** itlinux has joined #openstack-swift | 02:36 | |
itlinux | hello all.. I got TripleO installed but swift is consuming most of my CPU.. any tips? I could turn off swift and move the images out of it; any tips on that.. Thanks | 03:01 |
itlinux | I have flash storage in case | 03:01 |
*** links has joined #openstack-swift | 03:44 | |
*** kong has quit IRC | 03:48 | |
*** itlinux has quit IRC | 03:49 | |
*** Sukhdev has joined #openstack-swift | 04:29 | |
*** psachin has joined #openstack-swift | 05:11 | |
*** Sukhdev has quit IRC | 05:41 | |
*** skudlik has joined #openstack-swift | 05:52 | |
*** rcernin has joined #openstack-swift | 06:07 | |
*** SkyRocknRoll has joined #openstack-swift | 06:17 | |
*** SkyRocknRoll has quit IRC | 06:19 | |
*** klrmn has quit IRC | 06:31 | |
openstackgerrit | Tim Burke proposed openstack/swift master: Use check_drive consistently https://review.openstack.org/500158 | 06:35 |
openstackgerrit | Tim Burke proposed openstack/swift master: Differentiate between a drive that's not mounted vs. not a dir more https://review.openstack.org/504341 | 06:35 |
*** pcaruana has joined #openstack-swift | 06:45 | |
*** itlinux has joined #openstack-swift | 06:46 | |
openstackgerrit | Tim Burke proposed openstack/swift master: doc migration: update the doc link address[2/3] https://review.openstack.org/500776 | 06:47 |
openstackgerrit | Merged openstack/swift master: Always require device dir for containers https://review.openstack.org/458070 | 06:51 |
openstackgerrit | Tim Burke proposed openstack/swift master: Move listing formatting out to proxy middleware https://review.openstack.org/449394 | 06:55 |
*** geaaru has joined #openstack-swift | 06:57 | |
openstackgerrit | Tim Burke proposed openstack/swift master: Respond 400 Bad Request when Accept headers fail to parse https://review.openstack.org/502845 | 07:07 |
*** skudlik has left #openstack-swift | 07:07 | |
*** tesseract has joined #openstack-swift | 07:27 | |
*** itlinux has quit IRC | 08:15 | |
*** joeljwright has joined #openstack-swift | 08:27 | |
*** ChanServ sets mode: +v joeljwright | 08:27 | |
*** skudlik has joined #openstack-swift | 08:37 | |
*** skudlik has left #openstack-swift | 08:39 | |
*** psachin has quit IRC | 09:05 | |
*** psachin has joined #openstack-swift | 09:05 | |
*** xlucas has joined #openstack-swift | 09:19 | |
*** xlucas has left #openstack-swift | 09:21 | |
*** xlucas has joined #openstack-swift | 09:21 | |
*** xlucas has left #openstack-swift | 09:22 | |
*** silor has joined #openstack-swift | 09:34 | |
*** jlutran has joined #openstack-swift | 09:37 | |
*** ma9_1 has joined #openstack-swift | 09:40 | |
*** esnyder has quit IRC | 10:10 | |
*** tovin07_ has quit IRC | 10:14 | |
*** jlutran has left #openstack-swift | 10:17 | |
*** ma9_1 has left #openstack-swift | 10:30 | |
*** skudlik has joined #openstack-swift | 10:43 | |
*** skudlik has left #openstack-swift | 10:43 | |
*** d0ugal has quit IRC | 11:04 | |
*** silor has quit IRC | 11:05 | |
openstackgerrit | Merged openstack/swift master: doc migration: update the doc link address[2/3] https://review.openstack.org/500776 | 11:09 |
*** mat128 has joined #openstack-swift | 11:31 | |
*** mat128 has quit IRC | 11:33 | |
*** mat128 has joined #openstack-swift | 11:35 | |
*** psachin has quit IRC | 11:38 | |
*** links has quit IRC | 11:41 | |
*** mat128 has quit IRC | 11:46 | |
*** mat128 has joined #openstack-swift | 11:52 | |
*** links has joined #openstack-swift | 11:54 | |
*** d0ugal has joined #openstack-swift | 12:14 | |
*** catintheroof has joined #openstack-swift | 12:22 | |
*** pit has joined #openstack-swift | 12:28 | |
*** pit has quit IRC | 12:34 | |
*** links has quit IRC | 12:48 | |
*** Dinesh_Bhor has quit IRC | 12:58 | |
*** chlong has joined #openstack-swift | 13:04 | |
*** links has joined #openstack-swift | 13:07 | |
*** cbartz has quit IRC | 13:55 | |
*** mwheckmann has joined #openstack-swift | 14:05 | |
*** saint_ has joined #openstack-swift | 14:06 | |
*** links has quit IRC | 14:10 | |
*** gyee has joined #openstack-swift | 14:40 | |
*** klrmn has joined #openstack-swift | 14:58 | |
*** aagrawal has quit IRC | 15:10 | |
*** Sukhdev has joined #openstack-swift | 15:21 | |
*** pcaruana has quit IRC | 15:24 | |
torgomatic | timburke: yes, that was the whole thing | 15:35 |
timburke | good morning | 15:37 |
timburke | so, long enough that it'd be nice to be dropping stats (so we can see that it isn't wedged), but probably not so long that we really need to start diverting updates immediately... | 15:39 |
*** rcernin has quit IRC | 15:44 | |
*** EmilienM has quit IRC | 15:55 | |
*** EmilienM has joined #openstack-swift | 15:57 | |
*** itlinux has joined #openstack-swift | 16:02 | |
*** links has joined #openstack-swift | 16:09 | |
*** klrmn has quit IRC | 16:13 | |
*** m_kazuhiro has joined #openstack-swift | 16:16 | |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift master: Remove all post_as_copy related code and configes https://review.openstack.org/504069 | 16:17 |
*** joeljwright has quit IRC | 16:24 | |
*** SkyRocknRoll has joined #openstack-swift | 16:33 | |
timburke | easy merge! https://review.openstack.org/#/c/502893/ | 16:47 |
patchbot | patch 502893 - swift - Add assertion about last-modified to object post test | 16:47 |
openstackgerrit | Clay Gerrard proposed openstack/swift master: Accept a trade off of dispersion for balance https://review.openstack.org/503152 | 16:49 |
openstackgerrit | Clay Gerrard proposed openstack/swift master: Accept a trade off of dispersion for balance https://review.openstack.org/503152 | 16:50 |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift master: Remove all post_as_copy related code and configes https://review.openstack.org/504069 | 16:59 |
kota_ | I rolled patch 504069 back to patch set 2, which emits a warning log | 17:04 |
patchbot | https://review.openstack.org/#/c/504069/ - swift - Remove all post_as_copy related code and configes | 17:04 |
*** chsc has joined #openstack-swift | 17:05 | |
*** tesseract has quit IRC | 17:06 | |
*** klrmn has joined #openstack-swift | 17:09 | |
*** SkyRocknRoll has quit IRC | 17:12 | |
*** vint_bra has joined #openstack-swift | 17:15 | |
*** vint_bra has quit IRC | 17:19 | |
*** abhitechie has joined #openstack-swift | 17:20 | |
*** SkyRocknRoll has joined #openstack-swift | 17:25 | |
*** vint_bra has joined #openstack-swift | 17:29 | |
openstackgerrit | Samuel Merritt proposed openstack/swift master: Shorten typical proxy pipeline. https://review.openstack.org/504472 | 17:34 |
torgomatic | timburke: ^^ | 17:35 |
notmyname | rledisez: ^^ | 17:36 |
timburke | torgomatic: https://trello.com/b/z6oKKI4Q/container-sharding | 17:37 |
*** Sukhdev has quit IRC | 17:41 | |
notmyname | http://d.not.mn/swift_team_denver_ptg.jpeg | 17:43 |
*** links has quit IRC | 17:48 | |
openstackgerrit | Samuel Merritt proposed openstack/swift master: Shorten typical proxy pipeline. https://review.openstack.org/504472 | 17:53 |
kota_ | timburke: https://review.openstack.org/504479 | 17:59 |
patchbot | patch 504479 - swift3 - Change log updates for version 1.12 | 17:59 |
*** Sukhdev has joined #openstack-swift | 18:07 | |
*** Sukhdev has quit IRC | 18:07 | |
openstackgerrit | Kazuhiro MIYAHARA proposed openstack/swift master: WIP: Fix location header to be relative in 'leave_relative_location' environment https://review.openstack.org/504484 | 18:16 |
itlinux | hello all.. I have installed TripleO RDO but the controllers are getting hammered by the swift processes .. any tips? | 18:18 |
*** m_kazuhiro has quit IRC | 18:19 | |
timburke | torgomatic: huh... http://paste.openstack.org/show/621214/ | 18:20 |
*** m_kazuhiro has joined #openstack-swift | 18:26 | |
*** honga has joined #openstack-swift | 18:28 | |
clarkb | notmyname: https://review.openstack.org/#/c/471057/4 and https://review.openstack.org/296771 | 18:37 |
patchbot | patch 471057 - swift - Func test hacks to work under against apache2 | 18:37 |
patchbot | patch 296771 - openstack-infra/devstack-gate - Enable tlsproxy by default | 18:37 |
*** geaaru has quit IRC | 18:49 | |
notmyname | itlinux: oh hi. I saw your question earlier, but it seems we're all on and off at the wrong times | 19:08 |
itlinux | ahh.. | 19:08 |
itlinux | no worries.. | 19:08 |
itlinux | any tips? | 19:09 |
notmyname | itlinux: might be a good idea to talk with cschewede (who's in the room with me right now and promises that he'd love to talk to you next week) | 19:09 |
notmyname | he's got some experience with deploying swift+tripleo | 19:09 |
itlinux | sounds good! | 19:09 |
notmyname | or swift on tripleo | 19:09 |
itlinux | very good thanks.. | 19:09 |
*** hseipp has joined #openstack-swift | 19:10 | |
*** hseipp has quit IRC | 19:10 | |
*** hseipp has joined #openstack-swift | 19:11 | |
notmyname | itlinux: oh, first thing he said earlier was maybe there's a misconfiguration with gnocchi? there's a way it can be configured to store data in swift. and swift has ceilometer installed. which sends data to gnocchi. which gets put in swift ... | 19:11 |
notmyname | I don't know details, but it may be worth checking | 19:11 |
itlinux | ahh yes I do have gnocchi and ceilometer | 19:12 |
itlinux | if he can pass some options to check then I will be happy to do it.. otherwise we can talk next week. | 19:13 |
torgomatic | timburke: heh, that's hilarious. Fortunately, it seems like it's only gonna happen at process exit. | 19:13 |
notmyname | https://postfacto.io/retros/swift | 19:23 |
itlinux | ahh you are in Denver at the PTG meeting.. | 19:29 |
notmyname | tdasilva: http://doodle.com | 19:40 |
*** mat128 has quit IRC | 19:41 | |
*** SkyRocknRoll has quit IRC | 19:51 | |
timburke | torgomatic: hrm. also, http://paste.openstack.org/show/621220/ during unittests... | 19:53 |
timburke | (other one was during probe tests, which isn't too surprising) | 19:54 |
torgomatic | timburke: no idea about that second one; the stack doesn't have anything useful in it | 19:57 |
torgomatic | certainly suspicious though | 19:57 |
notmyname | "For all XFS users out there, start planning a kernel upgrade in the near future" http://seclists.org/oss-sec/2017/q3/436 | 20:05 |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift master: Make gate keeper to save relative location header path https://review.openstack.org/504507 | 20:11 |
*** itlinux has quit IRC | 20:12 | |
kota_ | m_kazuhiro, timburke: patch 504507 is an attempt to resolve a gate issue with dsvm for symlink | 20:18 |
patchbot | https://review.openstack.org/#/c/504507/ - swift - Make gate keeper to save relative location header ... | 20:18 |
m_kazuhiro | kota_: Thanks! I will check it. | 20:19 |
*** itlinux has joined #openstack-swift | 20:19 | |
*** catintheroof has quit IRC | 21:26 | |
tdasilva | m_kazuhiro: https://review.openstack.org/#/c/449394/ | 21:36 |
patchbot | patch 449394 - swift - Move listing formatting out to proxy middleware | 21:36 |
m_kazuhiro | tdasilva: Thanks! | 21:36 |
*** guimaluf has joined #openstack-swift | 21:50 | |
*** saint_ has quit IRC | 21:51 | |
guimaluf | hi guys, I have a swift cluster in production and one of my files is on 3 handoff servers, but I can't list, download, or do anything with it through the swift API; I keep getting 404 errors | 21:52 |
notmyname | guimaluf: that sounds unfortunate | 21:53 |
guimaluf | notmyname, what may be happening? | 21:53 |
notmyname | guimaluf: a 404 sounds like something I'd expect in that situation | 21:53 |
guimaluf | but the file is in swift storages | 21:53 |
notmyname | so there's a few things that may be going on | 21:53 |
guimaluf | please clarify me! :) | 21:53 |
guimaluf | notmyname, what may be happening? | 21:54 |
notmyname | ok, I was looking for some settings | 21:54 |
guimaluf | I thought that if a file is present on handoffs, swift should point to it, right? | 21:55 |
guimaluf | I could see the 3 copies in handoff servers | 21:55 |
notmyname | first, tell me about your cluster. replicas or ec? how many servers? how many drives per server? | 21:55 |
notmyname | what version of swift are you running? | 21:55 |
guimaluf | replica. 2 proxies, 8 storage nodes, 4 drives per server, two regions. | 21:56 |
notmyname | ok | 21:56 |
notmyname | how many replicas? | 21:57 |
guimaluf | kilo version | 21:57 |
guimaluf | 3 replicas | 21:57 |
notmyname | kilo? ie 2.3.0 released April 30, 2015? | 21:57 |
*** vint_bra has quit IRC | 21:57 | |
notmyname | ok | 21:57 |
guimaluf | swift 2.2.2-0ubuntu1.3~cloud1 | 21:58 |
guimaluf | 2.2 | 21:58 |
notmyname | 2.2.2 was released on feb 2 (2/2) ;-) | 21:58 |
notmyname | I liked it when that lined up | 21:58 |
guimaluf | hahahahhahaha | 21:58 |
guimaluf | good to know :P | 21:58 |
notmyname | FWIW https://wiki.openstack.org/wiki/Swift/version_map | 21:58 |
guimaluf | I hope the mystic forces of this alignment aren't the cause of this! | 21:59 |
notmyname | ok, so you're running a very old version of swift, so some things may be slightly different from current docs and config files. but let's try anyway | 21:59 |
notmyname | let me dig into git history to see when this config variable was introduced... | 21:59 |
guimaluf | notmyname, what kind of problem may be happning? | 22:00 |
notmyname | ok, it's there | 22:00 |
notmyname | in the proxy server, you've got a config variable called "request_node_count" | 22:00 |
notmyname | well, do you have it set? and if so, what is it set to? | 22:00 |
guimaluf | i'll check | 22:01 |
guimaluf | this setting is not present | 22:01 |
notmyname | ok | 22:01 |
notmyname | not a problem | 22:01 |
notmyname | basically, the proxy server looks up the object name in the request against the current ring to get drives. it then sends requests to the object servers that have those drives. if it gets all 404s, then it starts asking more object servers until it has asked a total of "request_node_count" servers | 22:02 |
notmyname | if it doesn't find an object, it returns 404 to the client | 22:02 |
notmyname | the default for request_node_count is "2 * replicas" so in your case that's 6 | 22:03 |
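The lookup behaviour notmyname describes can be sketched roughly like this (illustrative names only, not Swift's actual internals): all primaries from the ring are tried first, and on all-404s the proxy keeps walking the handoff list until `request_node_count` nodes in total have been asked.

```python
from itertools import chain, islice

REPLICAS = 3
REQUEST_NODE_COUNT = 2 * REPLICAS  # the default: "2 * replicas"

def nodes_to_query(primaries, handoffs, request_node_count):
    """Primaries first, then just enough handoffs to reach the cap."""
    extra = max(request_node_count - len(primaries), 0)
    return list(chain(primaries, islice(handoffs, extra)))

primaries = ["obj1", "obj2", "obj3"]                  # from the ring
handoffs = ["obj4", "obj5", "obj6", "obj7", "obj8"]   # everyone else, in ring order

print(nodes_to_query(primaries, handoffs, REQUEST_NODE_COUNT))
# with the default of 6, only the first 3 handoffs are ever consulted,
# so an object stranded on a later handoff still returns 404
```

This is why guimaluf's object, sitting "far beyond the 6th handoff", is invisible to the proxy even though the data exists on disk.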
notmyname | so there's 2 things I want to talk with you about: (1) how to get your cluster in a better shape so you can serve requests and (2) how to not let this happen again | 22:03 |
guimaluf | hahahah the object is far beyond the 6th handoff | 22:03 |
guimaluf | hahahaha | 22:03 |
guimaluf | notmyname, the immediate fix would be to set this to STORAGE_COUNT - REPLICAS, right? | 22:04 |
guimaluf | so it would search in all servers | 22:04 |
notmyname | honestly, I think that's a terrible idea | 22:04 |
guimaluf | why? | 22:05 |
notmyname | but it will work, and it may be the fastest way | 22:05 |
notmyname | it will make all requests that would otherwise 404 talk to every drive and that will be expensive | 22:06 |
notmyname | I'm assuming "STORAGE_COUNT" is the number of drives you have in the system? | 22:06 |
guimaluf | but that's better than getting a 404 when requesting a file | 22:06 |
guimaluf | no, storage count = server count | 22:06 |
notmyname | oh | 22:06 |
notmyname | so you'd set it to 8 (for your 8 servers) and then that means you'd look in the first 5 handoffs (after the 3 primaries) | 22:07 |
guimaluf | you're right... it should be the number of drives... because every drive can act as a handoff | 22:07 |
notmyname | which means you could still get 404s (I don't know how bad the cluster is) | 22:07 |
guimaluf | exactly | 22:08 |
guimaluf | so, "how to get your cluster in a better shape so you can serve requests" ? | 22:08 |
guimaluf | :D | 22:08 |
notmyname | but that simply papers over the problem and introduces a lot of work in the cluster | 22:08 |
notmyname | ie imagine if someone simply requests random uuids. they'd all 404, but it would cause the cluster a lot of work to talk to every disk | 22:08 |
*** vint_bra has joined #openstack-swift | 22:08 | |
notmyname | ok, so to get it healthy... | 22:09 |
guimaluf | I would like to force the object from the handoffs back to its actual servers | 22:09 |
notmyname | you need to make sure that replication is running and completing as quickly as possible | 22:09 |
notmyname | the replication process is what does that | 22:09 |
guimaluf | but why this object remain on handoff servers? | 22:09 |
notmyname | there should be an object-replicator process running on every server (potentially more than one, but I don't remember the details in 2.2.2--we've made a *lot* of improvements since then) | 22:10 |
notmyname | it won't remain on handoff servers if replication is running | 22:10 |
notmyname | I'm assuming you have the same object ring file on each of the servers | 22:10 |
guimaluf | notmyname, yes, same ring. | 22:11 |
notmyname | ok, good | 22:11 |
guimaluf | [2017-09-15 19:11:21] Checking ring md5sums | 22:11 |
guimaluf | 8/8 hosts matched, 0 error[s] while checking hosts. | 22:11 |
notmyname | what are you using to monitor the cluster | 22:11 |
notmyname | oh. that's swift-recon output :-) | 22:11 |
guimaluf | notmyname, tail -f on swift.log :( | 22:11 |
notmyname | what's your recent replication cycle time? | 22:11 |
guimaluf | [2017-09-15 19:11:56] Checking on replication | 22:12 |
guimaluf | [replication_time] low: 0, high: 81399, avg: 34650.6, total: 277204, Failed: 0.0%, no_result: 0, reported: 8 | 22:12 |
guimaluf | Oldest completion was 2017-01-17 15:52:49 (241 days ago) by 192.168.55.109:6000. | 22:12 |
guimaluf | Most recent completion was 2017-04-18 09:56:12 (150 days ago) by 192.168.55.102:6000. | 22:12 |
notmyname | soo... yeah, that doesn't look very good | 22:12 |
notmyname | is the replication process actually running now? | 22:13 |
guimaluf | wow | 22:13 |
guimaluf | now I realized those infos! | 22:13 |
*** vint_bra has quit IRC | 22:14 | |
guimaluf | notmyname, how can I check this? | 22:14 |
notmyname | ps -ef? top? | 22:14 |
notmyname | swift-init object-replicator status | 22:14 |
guimaluf | swift-*-replicator is running | 22:14 |
notmyname | swift-oldies -a0 | 22:15 |
notmyname | `swift-oldies -a 0` | 22:15 |
notmyname | I wonder if they're hung processes. could be interesting to strace them to see what's happening | 22:15 |
*** vint_bra has joined #openstack-swift | 22:15 | |
guimaluf | 5789 26528 object-replicator /etc/swift/object-server.conf | 22:16 |
notmyname | on one of the servers, kill it and restart it | 22:16 |
guimaluf | WOW | 22:18 |
guimaluf | omg! | 22:18 |
notmyname | is that good? did you just get fired? | 22:18 |
guimaluf | hahahahahhahahaha | 22:19 |
guimaluf | no, but I can see lots of things happening on logs | 22:19 |
acoles | good things I hope | 22:19 |
guimaluf | indeed the process was hung for some reason | 22:19 |
guimaluf | I'll repeat the process | 22:19 |
guimaluf | swift-init stop object-replicator isn't killing it | 22:20 |
notmyname | `kill -9`? | 22:20 |
notmyname | how full are your disks? | 22:20 |
guimaluf | 1% | 22:20 |
guimaluf | :) | 22:20 |
notmyname | you have 100TB drives??!? | 22:21 |
*** vint_bra has quit IRC | 22:21 | |
guimaluf | hahahahah no :) | 22:21 |
notmyname | ok, so replication running will get you into a better situation | 22:21 |
notmyname | since your drives aren't very full, it shouldn't take too long for that to settle down | 22:22 |
guimaluf | wow, that was awesome! I really have no words to thank you for your help, notmyname . You were really kind and helpful! :) | 22:22 |
notmyname | (assuming some things about your hardware) | 22:23 |
guimaluf | I'll cross my finger and let the replication happens | 22:23 |
notmyname | oh, wait. we aren't done | 22:23 |
guimaluf | no? | 22:23 |
guimaluf | omg! | 22:23 |
guimaluf | ahhahaha | 22:23 |
notmyname | I don't want you to be in this situation again | 22:23 |
guimaluf | me neither! | 22:24 |
notmyname | so, I know this is cliche, but you really should upgrade to the most recent version of swift, if at all possible | 22:24 |
notmyname | you can upgrade a live cluster with no downtime and do it all at once | 22:24 |
notmyname | a rough guide is at https://www.swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/ | 22:25 |
guimaluf | notmyname, yes, I know I should. I did once from havana, I think, but things are slow on this environment | 22:25 |
notmyname | ok, beyond that, you should make sure you have some good ops tools to monitor | 22:26 |
guimaluf | notmyname, yes. I know this. but I'm the only one taking care of this cluster | 22:27 |
notmyname | I was thinking of some other things (like making sure you don't push new rings before a replication cycle has finished) | 22:27 |
guimaluf | I would love to do all the things I should do :( but I just can't get time enough | 22:27 |
guimaluf | notmyname, I was not aware of this replication cycle | 22:27 |
guimaluf | incredible that all the files were available for such a long time | 22:28 |
notmyname | sure, but when you aren't monitoring, you end up in situations where replication isn't running for 6+ months ;-) | 22:28 |
guimaluf | 5 months! ;) | 22:29 |
guimaluf | hahaha | 22:29 |
notmyname | 241 days for the longest | 22:29 |
guimaluf | oh crap :( | 22:29 |
guimaluf | I have another issue. my rsync mask is messy... 644 for directories :/ | 22:30 |
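On the rsync mask issue: directories replicated with mode 644 lack the execute bit and can't be traversed. One hedged way to force sane permissions is the `incoming chmod` parameter in the rsyncd.conf module that Swift's object replicator pushes to; the module name, path, and mask below are assumptions, check them against your deployment.

```
[object]
path = /srv/node
read only = false
lock file = /var/lock/object.lock
# assumption: D0755 gives directories the execute bit they need to be
# traversable; F0644 keeps plain files readable but not group/world-writable
incoming chmod = D0755,F0644
```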
notmyname | swift-recon is available, and there's swift-recon-cron that can be run periodically. might not be a bad idea to spend a few minutes to set that up to run and email you every day | 22:30 |
guimaluf | I'll do it. I'll take care of this cluster more closely | 22:31 |
notmyname | (ie run `swift-recon --all` periodically. swift-recon-cron can populate what swift-recon reports on) | 22:31 |
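A minimal sketch of the cron setup notmyname suggests; the schedule, paths, users, and email address are illustrative assumptions, not from the conversation.

```
# on each storage node: refresh the recon cache periodically
*/5 * * * * swift /usr/bin/swift-recon-cron /etc/swift/object-server.conf
# on an admin host: mail a daily report so a stalled replicator gets noticed
0 6 * * * root /usr/bin/swift-recon --all 2>&1 | mail -s "daily swift recon" ops@example.com
```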
guimaluf | I'm happy that swift is really robust... even with all this it worked very well | 22:32 |
notmyname | yeah, that's great to hear :-) | 22:32 |
guimaluf | it is really good to have people like you in the community! | 22:33 |
guimaluf | I would never realize this by myself | 22:33 |
notmyname | I'm glad you stopped by to ask. there's a lot of friendly swift experts in here | 22:34 |
guimaluf | notmyname, I was neglecting this error for such a long time | 22:45 |
guimaluf | Error: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Service[swift-object-replicator]: Failed to call refresh: Could not stop Service[swift-object-replicator]: Execution of '/sbin/stop swift-object-replicator' returned 1: stop: Job failed while stopping | 22:45 |
guimaluf | Error: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Service[swift-object-replicator]: Could not stop Service[swift-object-replicator]: Execution of '/sbin/stop swift-object-replicator' returned 1: stop: Job failed while stopping | 22:45 |
guimaluf | :( | 22:45 |
notmyname | guimaluf: can you share anything about the kind of data you're storing in swift? | 22:58 |
notmyname | I'm always curious to hear how people are using it | 22:58 |
* notmyname may lose internet availability any time in the next 15 minutes | 22:59 | |
guimaluf | notmyname, I integrate swift with owncloud and offer a storage service for Brazilian researchers. :) | 23:01 |
notmyname | that's cool! | 23:01 |
guimaluf | in general it's personal files, like photos and documents, and in the near future web conference videos | 23:02 |
notmyname | is owncloud working well for you? I've considered using it at home | 23:04 |
notmyname | (owncloud + swift) | 23:04 |
notmyname | wifi is going down. good luck guimaluf | 23:08 |
guimaluf | notmyname, yes it is. actually just now our users are really using it | 23:08 |
guimaluf | :) | 23:08 |
guimaluf | notmyname, no words to thank you :) | 23:08 |
*** hseipp has quit IRC | 23:15 | |
*** m_kazuhiro has quit IRC | 23:28 | |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift master: Make gate keeper to save relative location header path https://review.openstack.org/504507 | 23:36 |
*** gyee has quit IRC | 23:42 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!