*** mrjk has quit IRC | 01:01 | |
*** tkajinam_ has joined #openstack-swift | 01:47 | |
*** tkajinam has quit IRC | 01:50 | |
openstackgerrit | Tim Burke proposed openstack/swift master: WIP: symlink-backed versioned_writes https://review.openstack.org/633857 | 01:55 |
timburke | clayg: ^^^ very much work in progress, but you seemed curious about where i was going with p 633094 | 01:56 |
patchbot | https://review.openstack.org/#/c/633094/ - swift - Allow "harder" symlinks - 2 patch sets | 01:56 |
timburke | i made it an option for now, mainly just to make it easier to test things when starting from the old behavior, but i'm pretty sure that will be strictly better (once we've actually beaten it into shape where we'd feel comfortable landing it) | 01:59 |
openstackgerrit | Merged openstack/swift master: Python3: Fix test/unit/common/test_container_sync_realms.py https://review.openstack.org/633644 | 02:17 |
*** mikecmpbll has quit IRC | 04:23 | |
mattoliverau | zaitcev: sorry, I am back, just attempting to catch back up with email and everything that I've missed over the last few weeks. I'll take a look at the patch tomorrow (running out of time today). | 05:43 |
*** zaitcev_ has joined #openstack-swift | 05:45 | |
*** ChanServ sets mode: +v zaitcev_ | 05:45 | |
zaitcev_ | mattoliverau: I know that py3 is not your thing, but 1. sharder is your code, and 2. that patch unfortunately touches py2. | 05:46 |
*** zaitcev has quit IRC | 05:48 | |
*** psachin has joined #openstack-swift | 05:55 | |
*** gkadam has joined #openstack-swift | 07:51 | |
*** ccamacho has joined #openstack-swift | 07:52 | |
*** e0ne has joined #openstack-swift | 07:56 | |
*** pcaruana has joined #openstack-swift | 08:11 | |
*** tkajinam_ has quit IRC | 08:15 | |
*** e0ne has quit IRC | 08:20 | |
*** mikecmpbll has joined #openstack-swift | 08:37 | |
*** gkadam has quit IRC | 08:42 | |
*** mikecmpbll has quit IRC | 09:06 | |
*** mikecmpbll has joined #openstack-swift | 09:16 | |
*** dr_gogeta86 has quit IRC | 10:15 | |
*** e0ne has joined #openstack-swift | 10:18 | |
*** dr_gogeta86 has joined #openstack-swift | 10:20 | |
openstackgerrit | Merged openstack/swift master: Quiet down a unittest https://review.openstack.org/633793 | 10:30 |
*** hseipp has joined #openstack-swift | 10:42 | |
*** mvkr has joined #openstack-swift | 11:29 | |
*** mark-mcardle has joined #openstack-swift | 11:30 | |
*** mahatic has joined #openstack-swift | 11:53 | |
*** ChanServ sets mode: +v mahatic | 11:53 | |
*** gkadam has joined #openstack-swift | 12:08 | |
*** gkadam is now known as gkadam-bmgr | 12:19 | |
*** e0ne has quit IRC | 12:48 | |
*** psachin has quit IRC | 13:44 | |
*** e0ne has joined #openstack-swift | 13:46 | |
*** pcaruana has quit IRC | 13:50 | |
*** psachin has joined #openstack-swift | 13:53 | |
*** pcaruana has joined #openstack-swift | 13:57 | |
*** psachin has quit IRC | 13:58 | |
*** psachin has joined #openstack-swift | 13:59 | |
clayg | on p 633671 - I think I don't really understand the key_path :'( | 15:18 |
patchbot | https://review.openstack.org/#/c/633671/ - swift - Fix decryption for broken objects - 2 patch sets | 15:18 |
*** zaitcev_ is now known as zaitcev | 15:22 | |
*** NM has joined #openstack-swift | 15:53 | |
*** gkadam-bmgr has quit IRC | 15:57 | |
*** pcaruana has quit IRC | 16:01 | |
clayg | Anyone know anyone that uses/hacks on joss? https://github.com/javaswift/joss/issues/120 | 16:14 |
*** ccamacho has quit IRC | 16:16 | |
*** pcaruana has joined #openstack-swift | 16:17 | |
clayg | Thank goodness for probe tests. I thought I was going to get to ignore the proxy's handling of fragment handoffs for now, but I guess not... https://github.com/openstack/swift/blob/master/swift/proxy/controllers/obj.py#L1616 | 16:29 |
zaitcev | I was lucky to be able to ignore the encryption up to now. | 16:31 |
*** e0ne has quit IRC | 16:31 | |
*** e0ne has joined #openstack-swift | 16:31 | |
clayg | would an "is_handoff" flag be too on the nose? "is_primary" maybe? OTOH I could just *fix* the proxy so that it tries to PUT handoffs where they go... but I'd still need to think harder about the case where you run a 4+2 with only 8 disks, like our default saio setup | 16:36 |
*** gyee has joined #openstack-swift | 16:42 | |
*** pcaruana has quit IRC | 16:45 | |
*** psachin has quit IRC | 16:48 | |
*** NM has quit IRC | 16:53 | |
*** ybunker has joined #openstack-swift | 17:28 | |
ybunker | hi all, quick question... i have to reinstall a data node (because of a kernel panic), and i would like to know if there's a way to 'keep' the data drives with the objects: reinstall the OS, configure the swift pkgs, rsync all the configuration files, mount the drives with the objects on the same path as before, and finally start the swift daemons.. could that work? or do i need to remove the node from the ring and then reassign it? | 17:30 |
*** hseipp has quit IRC | 17:31 | |
*** e0ne has quit IRC | 17:31 | |
DHE | if the server has the same IP address and the data is in the same directory, sure. however, if the system has been down for more than a week (can't remember the proper name of the setting), then you are probably better off just reformatting the drives. | 17:33 |
ybunker | the node keeps the same ip addresses, and the same directories | 17:34 |
ybunker | just two days | 17:34 |
DHE | I think it's the reclaim_age in the object server... the idea being that if a server is down longer than this amount of time, deleted objects in the cluster could become undeleted by reintroducing this node | 17:35 |
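DHE is describing Swift's reclaim_age setting. A minimal sketch of where it lives, assuming the stock sample config layout (the default of 604800 seconds is one week, which matches the "down for more than a week" rule of thumb above; exact section placement varies by release):

```ini
# /etc/swift/object-server.conf -- illustrative snippet
[object-replicator]
# Seconds to keep tombstones (deletion markers) before reclaiming them.
# A node that was offline longer than this can "resurrect" deleted
# objects when it rejoins, so it is safer to reformat its disks instead.
reclaim_age = 604800
```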
ybunker | the thing is that when i mount the drives (with the objects), the ownership on the directories was: dnsmasq lpadmin | 17:35 |
DHE | oh... oh dear... dynamically assigned system account uids | 17:36 |
*** mikecmpbll has quit IRC | 17:39 | |
ybunker | chown -R swift:swift and wait for a lifetime? :P | 17:39 |
ybunker | or better to remove node and add it again? | 17:40 |
DHE | I considered renumbering the uids, but that affects dnsmasq and lp I guess... | 17:44 |
DHE | chown is probably the better way, but yeah, it's going to take a while and all that... | 17:45 |
DHE | you can at least run a copy of chown per-disk and get some throughput going | 17:45 |
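A minimal sketch of running one chown per disk in parallel, assuming the disks are mounted under /srv/node (the default devices path):

```bash
# One chown per mounted device, in parallel; adjust /srv/node if your
# `devices` setting differs. `wait` blocks until every chown finishes.
for dev in /srv/node/*; do
    chown -R swift:swift "$dev" &
done
wait
```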
ybunker | also, i chowned the acct/cont disks (fast), but when i try to start the daemons (acct and cont), i'm getting 507... checked the disk space and it's ok... nothing else in the logs | 17:46 |
clayg | ybunker: chown -R is a reasonable option (yes, slow) | 17:51 |
clayg | ybunker: after the chown maybe restart the proxies to clear error limiting? | 17:52 |
clayg | i can't really think why the a/c servers would respond 507 if all the devices are mounted at the right paths... | 17:52 |
clayg | `devices = /srv/node` ? | 17:53 |
ybunker | http://pasted.co/fe66cc47 | 17:55 |
ybunker | http://pasted.co/0f54b67d | 17:55 |
clayg | ok, /srv/node is the default - so you have all your disks mounted at /srv/node/<device> where <device> is the name in the ring? | 17:56 |
ybunker | http://pasted.co/f6b2f989 | 17:57 |
ybunker | yes | 17:57 |
clayg | yeah, i can't really think of why the a/c nodes would respond 507 then... | 17:57 |
clayg | basically it's just self.root (which defaults to /srv/node) join <device> (from path of request) and utils.ismount - if that returns false 507 | 17:59 |
clayg | so... kind of the ONLY way you get a 507 is if the device in the URL of the request isn't a mount at /srv/node/<device> | 17:59 |
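A hedged Python sketch of the check being described (the real logic lives in Swift's account/container servers via swift.common.constraints.check_mount; names are simplified here):

```python
import os

from swift.common import utils


def check_mount(root, device):
    # Join the configured root (default /srv/node) with the device
    # from the request path; if it isn't a mount point, the server
    # responds 507 Insufficient Storage.
    return utils.ismount(os.path.join(root, device))

# In the account/container server, roughly:
#   if self.mount_check and not check_mount(self.root, device):
#       return HTTPInsufficientStorage(drive=device, request=req)
```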
clayg | you should have the device name in the log line of a 507 request | 18:00 |
clayg | can you find a 507 resp log line on a account/container node? | 18:00 |
ybunker | http://pasted.co/8fb9fdcd | 18:06 |
*** mikecmpbll has joined #openstack-swift | 18:07 | |
ybunker | clayg: here are some of the 507 errors: http://pasted.co/8fb9fdcd | 18:08 |
ybunker | clayg: ...got it.. it was a chmod 755 permissions thing :). now.. the problem is that it's not storing on the same node | 18:13 |
*** pcaruana has joined #openstack-swift | 18:17 | |
clayg | looks like the device names were just integers? | 18:20 |
clayg | well, the proxy could be writing to handoffs if it error limited the node you were working on | 18:21 |
clayg | I think 507 is cached for like 5m! | 18:21 |
clayg | oh, maybe it's just 60s | 18:23 |
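The duration clayg is recalling is the proxy's error-limiting window. A sketch of the relevant settings with their stock defaults (60 seconds matches the second guess):

```ini
# /etc/swift/proxy-server.conf -- illustrative; defaults shown.
[app:proxy-server]
# How long (seconds) an error-limited node is skipped.
error_suppression_interval = 60
# Errors within the interval before a node gets error-limited.
error_suppression_limit = 10
```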
clayg | swift-container-info /real/path/to/hash.db will check the ring and say what the expected locations are | 18:23 |
clayg | Is the node you're expecting the dbs to be stored on in that list? | 18:24 |
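A usage sketch for that check; the db path below is purely hypothetical (real paths look like /srv/node/&lt;device&gt;/containers/&lt;partition&gt;/&lt;suffix&gt;/&lt;hash&gt;/&lt;hash&gt;.db):

```bash
# Prints the db's metadata plus a "Ring locations" section listing the
# expected primary nodes for this container.
swift-container-info /srv/node/sdb1/containers/1234/abc/d41d8cd98f00b204e9800998ecf8427e/d41d8cd98f00b204e9800998ecf8427e.db
```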
ybunker | clayg: http://pasted.co/92864c53 | 18:36 |
clayg | timburke: I think leaving the use_symlinks option in, defaulting to true, and maybe deprecating it later is ok if it doesn't come across too crufty - but maybe I'm just nervous because it's something new | 18:48 |
clayg | oic, "2" is on .14 & .12 - ok... | 18:49 |
clayg | ybunker: but that looks fine i guess - I can't correlate the logs to those ips tho - I don't know which node you're on - do you still see the "not storing on the same node" problem? | 18:51 |
ybunker | the node with the problem is 192.21.100.12 | 18:52 |
*** e0ne has joined #openstack-swift | 19:06 | |
*** rchurch has joined #openstack-swift | 19:22 | |
timburke | clayg: on p 633094 -- i'd love more input on (1) whether there ought to *also* be an X-Symlink-Target-Size and whether it should require that you specify an ETag, and (2) whether 412 is the right error, or if it ought to be 409 (or maybe there's something better?) | 19:23 |
patchbot | https://review.openstack.org/#/c/633094/ - swift - Allow "harder" symlinks - 2 patch sets | 19:23 |
clayg | Could a PUT with an Etag do a HEAD? | 19:25 |
clayg | At least then we'd know it worked at one time... | 19:26 |
clayg | SLO does that and it mostly works? | 19:26 |
timburke | ick. currently there are no requirements on the target existing to create a symlink... | 19:27 |
timburke | and the *real* feature i want out of this tool will have *just written* the object -- it knows *exactly* what the etag and size should be! | 19:27 |
clayg | Right | 19:27 |
timburke | and it gets even messier when you start wanting to have symlinks pointing to symlinks pointing to data... | 19:28 |
clayg | Internal api could write directly into sysmeta - HEAD request only required for client feature where you wanna do fancy container listings... | 19:28 |
clayg | 412 is probably wrong, you can’t change the request and make it work. 409 makes sense to me. | 19:30 |
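For context, a sketch of what creating one of these etag-validating symlinks might look like under the proposal in p 633094. The X-Symlink-Target-Etag header and the 409 response were still being hashed out at this point, so everything below is an assumption drawn from the review, not landed API:

```bash
# Hypothetical per the patch under review: pin the target's ETag at
# link-creation time; the server would reject the PUT (409, per the
# discussion above) if the target's ETag doesn't match.
curl -X PUT "$STORAGE_URL/container/link-obj" \
     -H "X-Auth-Token: $TOKEN" \
     -H "X-Symlink-Target: container/target-obj" \
     -H "X-Symlink-Target-Etag: d41d8cd98f00b204e9800998ecf8427e" \
     --data-binary ''
```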
*** ybunker has quit IRC | 19:33 | |
timburke | ugh, properly capturing the listing info for a VW symlink to a client-created, etag-validating symlink is gonna be a pain... maybe i don't actually get to re-use as much of the symlink machinery as i thought i could... | 19:35 |
*** e0ne has quit IRC | 20:05 | |
zaitcev | What is a Volkswagen symlink? | 20:14 |
timburke | versioned_writes :-) | 20:15 |
timburke | i want to make VW stop copying data all over the place | 20:15 |
timburke | it's non-atomic and race-prone | 20:15 |
timburke | plus it just sucks for your IO budget | 20:16 |
*** e0ne has joined #openstack-swift | 20:17 | |
*** e0ne has quit IRC | 20:19 | |
mattoliverau | Seeing as notmyname is away, and it seems some people are en route to FOSDEM, I'll assume we're not having a meeting today | 20:38 |
*** e0ne has joined #openstack-swift | 20:48 | |
clayg | oh.. uh | 21:03 |
clayg | mattoliverau: 👍 two weeks off! SO awesome! | 21:04 |
clayg | it's like notmyname takes a vacation and we all get a break | 21:04 |
mattoliverau | \o/ | 21:04 |
*** e0ne has quit IRC | 21:18 | |
*** early has quit IRC | 21:25 | |
*** early has joined #openstack-swift | 21:26 | |
*** e0ne has joined #openstack-swift | 21:47 | |
*** openstackgerrit has quit IRC | 21:50 | |
*** e0ne has quit IRC | 21:58 | |
*** tkajinam has joined #openstack-swift | 23:01 |