openstackgerrit | Vu Cong Tuan proposed openstack/python-swiftclient master: Switch to stestr https://review.openstack.org/581610 | 02:48 |
notmyname | FYI tomorrow (my monday), I'll be traveling to the LA area and won't be online | 03:50 |
admin6 | Hi team, I plan to reduce the number of zones I have in my ring. I currently have 6 zones (only 1 server per zone), but I’d prefer to have only 4 zones, in order to add servers in groups of 4. Is it safe to gradually remove the disks in zones 5 and 6 and add them to the other zones until there are no more disks in zones 5 and 6? | 15:34 |
zaitcev | admin6: I don't see any show-stoppers, assuming replication of 3 + 1 handoff. | 17:01 |
admin6 | zaitcev: thx, in fact this ring is a 9+3 erasure coding ring, so I plan to have 3 fragments per zone eventually | 17:03 |
zaitcev | admin6: that sounds balanced enough, although then you wouldn't have any handoff for some reason? | 17:06 |
admin6 | zaitcev: do you mean there will be no handoff if I only have 4 zones in that case? I thought handoffs would be dispersed over zones 1 to 4, no? | 17:11 |
zaitcev | I'm not talking about the number of zones in this case. Suppose you have carried out your program and added 6 more servers, now you have 12 servers and 12 fragments (9+3). | 17:12 |
zaitcev | Well, anyway. You need Sam or Clay to answer this. | 17:13 |
admin6 | zaitcev: Ok. The idea behind the scenes is just to buy additional servers in groups of 4 instead of groups of 6, to correctly distribute data over the zones. I’ll see what Sam or Clay think about this :-) Thanks. | 17:17 |
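[Editor's note] The gradual drain admin6 describes is typically done with swift-ring-builder by stepping device weights down in the old zones (and up in their new homes) and rebalancing between steps, so only a bounded fraction of partitions moves at a time. A minimal sketch, assuming the stock swift-ring-builder CLI; the builder file, device IDs, weights and schedule below are hypothetical placeholders, not a verified procedure for this cluster:

```python
#!/usr/bin/env python
# Sketch: drain devices out of zones 5 and 6 in weight steps using the stock
# swift-ring-builder CLI. The builder file, device IDs and weight schedule
# are hypothetical placeholders.
import subprocess

BUILDER = "object.builder"               # hypothetical builder file
DRAIN_DEVICES = ["d10", "d11"]           # hypothetical IDs of the zone 5/6 disks
WEIGHT_STEPS = [3000, 2000, 1000, 0]     # example gradual ramp-down

def rb(*args):
    """Run one swift-ring-builder command, echoing it for the operator."""
    cmd = ["swift-ring-builder", BUILDER] + list(args)
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for weight in WEIGHT_STEPS:
    for dev in DRAIN_DEVICES:
        rb("set_weight", dev, str(weight))
    rb("rebalance")
    # In practice, wait for replication/reconstruction to settle (and for
    # min_part_hours to pass) before taking the next step.

# Once the drained devices hold no partitions, remove them and re-add the
# disks under zones 1-4, for example:
#   rb("remove", "d10")
#   rb("add", "r1z1-10.0.0.5:6200/sdb", "3000")   # hypothetical address/device
#   rb("rebalance")
```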
pdardeau | hi swifters! what are the current recommendations for ratios of PAC nodes to O nodes? | 17:34 |
pdardeau | or AC nodes to O nodes? | 17:34 |
timburke | pdardeau! long time no see :-) | 17:39 |
pdardeau | timburke: indeed! i've had my head down in ceph land | 17:39 |
timburke | on the question, i'm not sure... depends on object size, surely, and relative sizes of hard drives (since you'll want SSD for A/C, but will probably prefer cheaper HDD for O)... | 17:40 |
timburke | maybe others have stronger opinions than me though | 17:40 |
pdardeau | assuming O nodes with 16x 12TB HDD per node, how many of those could be served by a single AC node? | 17:43 |
pdardeau | with the single AC node having something like 4 or 6 SSD | 17:43 |
pdardeau | any rules of thumb like 1 AC node per rack? | 17:44 |
pdardeau | or at most 3 AC nodes per rack? | 17:44 |
DHE | there isn't really a set ratio. I mean, for cold storage you can probably do with 3 PAC nodes and unlimited O nodes... | 17:45 |
tdasilva | I once heard people talking in % terms, like AC takes a certain % of the overall storage of a cluster, but now i'm trying to remember what that number was | 17:46 |
tdasilva | so it wasn't necessarily number of nodes, but just the amount of storage dedicated to AC based on how much is for O | 17:46 |
pdardeau | tdasilva: oic | 17:47 |
tdasilva | pdardeau: https://ask.openstack.org/en/question/29811/what-is-the-appropriate-size-for-account-and-container-servers-in-swift/ | 17:48 |
DHE | but again that's a rather subjective thing. I have a tiny container (~11,000 objects) where each object takes around 256 bytes of space in the container disk, or 768 with 3-way container replication. that will probably grow as the database expands. | 17:48 |
pdardeau | DHE: yep, understood | 17:49 |
pdardeau | tdasilva: thanks for link! | 17:51 |
pdardeau | zaitcev said 1-2% of O for AC back in 2014. i wonder if he would still recommend the same today? | 17:53 |
DHE | I'm suspecting 2-3 kilobytes per object on the account+container servers is going to be fairly safe, assuming 3-way replication. Scale appropriately for other configurations. But now you need to know the number of objects, average object size, etc. to do the math. | 17:57 |
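[Editor's note] DHE's per-object figures lend themselves to a quick back-of-the-envelope calculation. A minimal sketch of that arithmetic, using the ~256 bytes per container-DB row observed above and the 2-3 KB per-object budget (both totals across 3 replicas); the node count and average object size are hypothetical inputs:

```python
# Back-of-the-envelope A/C sizing from the figures in this thread:
# ~256 B per container-DB row (x3 replicas = 768 B observed) versus the
# 2-3 KB per object budget DHE suggests. Inputs below are hypothetical.
RAW_OBJECT_CAPACITY_TB = 10 * 16 * 12      # e.g. 10 O nodes with 16x 12 TB HDD
AVG_OBJECT_SIZE_MB = 4                     # assumption; this dominates the math
REPLICAS = 3

objects = (RAW_OBJECT_CAPACITY_TB * 1e12) / (AVG_OBJECT_SIZE_MB * 1e6)
print(f"~{objects / 1e6:.0f} million objects if the object tier fills up")
for total_bytes_per_obj in (256 * REPLICAS, 2048, 3072):
    ac_tb = objects * total_bytes_per_obj / 1e12
    print(f"{total_bytes_per_obj:5d} B/object -> ~{ac_tb:.2f} TB of A/C (SSD) space")
```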
pdardeau | DHE: gotcha. tricky for me since folks are saying X PB total cluster size | 18:00 |
timburke | pdardeau: fwiw, i've got a couple clusters my company uses for ISV integrations/testing, and they seem to be provisioned around that 1-2% guideline. actual usage indicates that may have been high. the object disks are in the neighborhood of 25-33% full, while the a/c disks are barely at 2% | 18:19 |
timburke | might have to do with workload, though? maybe those ISVs skew toward larger objects? | 18:19 |
pdardeau | swiftstack has some guidelines here: https://www.swiftstack.com/docs/admin/hardware.html#proxy-account-container-pac-nodes | 18:20 |
timburke | yeah, 0.3% for O-to-A/C seems not crazy, based on what i'm seeing | 18:22 |
timburke | er, reverse that (A/C-to-O), but yeah | 18:23 |
pdardeau | timburke: thanks for the real world validation | 18:38 |
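[Editor's note] For the percentage rules of thumb (timburke's ~0.3% observation versus the older 1-2% guideline), the arithmetic for an "X PB" cluster looks like this; the 10 PB figure is just a placeholder:

```python
# Compare A/C-to-O capacity ratios for a hypothetical cluster size.
CLUSTER_O_CAPACITY_PB = 10                 # placeholder for "X PB"
for ratio in (0.003, 0.01, 0.02):          # ~0.3% observed vs. 1-2% guideline
    ac_tb = CLUSTER_O_CAPACITY_PB * 1000 * ratio
    print(f"{ratio:.1%} of {CLUSTER_O_CAPACITY_PB} PB -> {ac_tb:.0f} TB of A/C (SSD)")
```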
DHE | pdardeau: `swift stat` will, for the indicated user, show a count of how many objects are stored (or at least enough info to add it up yourself), so you can work it out, if only manually | 18:39 |
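[Editor's note] Since the thread keeps coming back to object counts, here is a minimal sketch of reading the account-level counters that `swift stat` reports, via python-swiftclient; the auth endpoint and credentials are placeholders:

```python
# Sketch: read the account-level counters that `swift stat` shows, using
# python-swiftclient. The auth URL and credentials are hypothetical.
from swiftclient.client import Connection

conn = Connection(
    authurl="http://swift.example.com/auth/v1.0",   # placeholder v1 auth endpoint
    user="account:user",
    key="secret",
)
headers = conn.head_account()                       # HEAD on the account
print("containers:", headers.get("x-account-container-count"))
print("objects:   ", headers.get("x-account-object-count"))
print("bytes used:", headers.get("x-account-bytes-used"))
```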
openstackgerrit | Tim Burke proposed openstack/swift master: py3: port account/container replicators https://review.openstack.org/614656 | 18:39 |
pdardeau | DHE: understood. i work for a hardware vendor and am coming at this from the angle of 'what kind of hardware would a customer need to run a swift cluster of X petabytes?' | 19:13 |
DHE | the main catch with A+C servers is that the unit of division is often large. If someone makes a container with 1 billion objects in it then "more disks of reasonable size" doesn't scale, you need a much bigger disk | 19:25 |
DHE | contrast objects, which have a 5 GB size limit: if you're at petabyte scale then you have a LOT of them, meaning adding hard drives redistributes them fairly well | 19:25 |
timburke | though hopefully we'll be able to automate sharding sooner rather than later and it won't be such an issue :-) | 19:26 |
DHE | (there's container splitting now to help resolve the first issue, but it's not automatic afaik) | 19:26 |
DHE | was just typing that | 19:26 |
DHE | I'm preparing for a container that might have 42 million files in it which is already pushing my comfort zone by quite a bit... | 19:30 |
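[Editor's note] Putting DHE's row-size figure against the container sizes mentioned in this exchange shows why very large containers are the pain point; a minimal sketch, reusing the ~256 bytes-per-row estimate from earlier (per replica):

```python
# Rough per-replica container-DB sizes at ~256 B per object row, for the
# container sizes mentioned above. Purely illustrative arithmetic.
BYTES_PER_ROW = 256
for label, objects in [("42 million", 42_000_000),
                       ("1 billion", 1_000_000_000)]:
    gb = objects * BYTES_PER_ROW / 1e9
    print(f"{label:>10} objects -> ~{gb:,.0f} GB per container-DB replica")
```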
pdardeau | DHE: timburke you guys bring up a lot of great points. would make good blogging material if that's your game. | 19:33 |
mattoliverau | It's pdardeau! Hey man, it's been a long time, still working on ceph at Dell? Or have you managed to convince them to let you come and do some Swift stuff? It'd be awesome to have you back again ;) | 19:52 |
pdardeau | mattoliverau: Hey! It has been a long while. Yep, still doing Ceph at Dell. But Swift appeared in my email out of nowhere, so here I am. | 19:54 |
pdardeau | mattoliverau: it's odd to see you online this time of day. or did you too participate in a time change? | 19:57 |
mattoliverau | \o/, if I knew that's all it took I'd be happy to continually email you :p | 19:57 |
mattoliverau | Nah, just young kids getting me out of bed :P | 19:58 |
openstackgerrit | Merged openstack/python-swiftclient master: Switch to stestr https://review.openstack.org/581610 | 20:58 |
mattoliverau | ok, now I'm really at work, so morning :) | 21:59 |
zaitcev | did you guys see a LinkedIn update that creight is now at HEB? | 23:58 |