*** irclogbot_2 has joined #openstack-swift | 00:23 | |
*** irclogbot_2 has quit IRC | 00:39 | |
*** irclogbot_3 has joined #openstack-swift | 01:27 | |
openstackgerrit | Merged openstack/swift master: Set swift_source in account_quotas middleware https://review.opendev.org/697578 | 01:44 |
*** irclogbot_3 has quit IRC | 01:47 | |
openstackgerrit | Merged openstack/swift master: Set swift_source more in s3api middleware https://review.opendev.org/697580 | 02:05 |
openstackgerrit | Pete Zaitcev proposed openstack/python-swiftclient master: Cleanup session on delete https://review.opendev.org/674320 | 02:11 |
*** irclogbot_2 has joined #openstack-swift | 02:47 | |
kota_ | morning | 04:47 |
openstackgerrit | Merged openstack/swift master: Set swift_source more in versioned_writes https://review.opendev.org/697581 | 05:13 |
mattoliverau | kota_: o/ | 05:23 |
kota_ | mattoliverau: o/ | 05:23 |
*** rcernin has quit IRC | 07:12 | |
*** tkajinam has quit IRC | 08:07 | |
*** tesseract has joined #openstack-swift | 08:30 | |
*** tesseract has quit IRC | 08:31 | |
*** tesseract has joined #openstack-swift | 08:31 | |
*** joeljwright has left #openstack-swift | 08:35 | |
*** rcernin has joined #openstack-swift | 09:11 | |
*** rpittau|afk is now known as rpittau | 09:15 | |
openstackgerrit | Thiago da Silva proposed openstack/swift master: New Object Versioning mode https://review.opendev.org/682382 | 09:36 |
tdasilva | clayg, timburke, mattoliverau: ^^^ applied changes from last reviews and fixed the issue with listing/delete when version-id=null. Have not yet tackled container-sync | 09:38 |
*** rcernin has quit IRC | 10:14 | |
*** csmart has quit IRC | 10:16 | |
*** baffle has quit IRC | 10:16 | |
*** baffle has joined #openstack-swift | 11:38 | |
*** csmart has joined #openstack-swift | 11:38 | |
*** pcaruana has joined #openstack-swift | 11:46 | |
viks___ | Hi, the object replication cycle takes around 7-8 hours in my cluster. Is that normal? My settings have `concurrency = 1` and `replicator_workers = 0`. Should I increase the workers and concurrency to bring the replication cycle down to around 1 hour? What is the ideal time range I should be targeting for the replication cycle? | 12:22 |
donnyd | so this morning I am getting a bunch of these showing up in the logs: "Unexpected response while deleting object" | 12:57 |
donnyd | Also how many servers should the expiration process be running on? | 12:59 |
*** rdejoux has joined #openstack-swift | 13:02 | |
*** new_student1411 has joined #openstack-swift | 13:27 | |
new_student1411 | I am trying to set an ACL on an object using the S3 API. I have set `x-amz-acl` to `public-read` and tried to access the object at the URL `https://s3.<domain>/<bucket-name>/file.txt`, but it gives 400 Bad Request. The `s3_acl` option is true, and I can see that the object ACL is set using a third-party tool. Is this how I should handle the ACL? | 13:35 |
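For reference, a minimal boto3 sketch of the flow new_student1411 describes; the endpoint, credentials, and bucket/key names are placeholders:

```python
# Untested sketch of setting a public-read ACL through swift's s3api.
# Endpoint URL, credentials, and bucket/key names are placeholders.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.example.com',   # the swift s3api endpoint
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

# set the ACL at upload time...
s3.put_object(Bucket='bucket-name', Key='file.txt',
              Body=b'hello', ACL='public-read')

# ...or on an existing object
s3.put_object_acl(Bucket='bucket-name', Key='file.txt', ACL='public-read')

# with s3_acl enabled and the ACL applied, an anonymous GET of
# https://s3.example.com/bucket-name/file.txt should then succeed
```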
*** new_student1411 has quit IRC | 14:32 | |
clayg | viks___: Not necessarily, the cycle time is highly dependent on the available I/O and activity in the cluster. However, if you're getting 7-8 hours in steady state, then when you do a rebalance (add nodes) it's probably going to be unsatisfyingly slow. | 15:27 |
clayg | viks___: it's good that you're monitoring your consistency engine - a lot of durability failure calculations I've seen give 24-hour windows to address faults (unmount failed disk & rebuild). You can definitely increase concurrency settings and monitor I/O. | 15:29 |
clayg | viks___: but the real test will come when there's a fault or you add nodes | 15:29 |
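The knobs viks___ mentions live in the [object-replicator] section of object-server.conf. A minimal, untested sketch of what bumping them might look like; the values are illustrative, not recommendations:

```ini
[object-replicator]
# greenthread concurrency within each replicator process (was 1)
concurrency = 4
# 0 means no forking: all disks are handled by a single process;
# raising this spreads the disks across multiple worker processes
replicator_workers = 4
```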
clayg | donnyd: what version of swift are you running? newer versions have better reporting in the expirer when handling different status codes (we could try and find the commits if you're fairly recent) | 15:31 |
clayg | donnyd: you can scale out the expirer to as many nodes as you need to keep up with your backlog, most SwiftStack clusters run the expirer on every object node | 15:33 |
clayg | donnyd: you should monitor your expirer queue -> https://gist.github.com/clayg/7f66eab2a61c77869e1e84ac4ed6f1df | 15:33 |
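For a sense of what a queue check like that gist does: the expirer queue lives in the hidden .expiring_objects account, with entries named by their delete-at timestamp. A hedged sketch using swift's InternalClient; the config path and the one-day "stale" cutoff are assumptions for illustration, not necessarily what the gist uses:

```python
# Sketch of an expirer-queue check; the internal-client.conf path and the
# one-day "stale" threshold are assumptions for illustration.
import time
from swift.common.internal_client import InternalClient

ic = InternalClient('/etc/swift/internal-client.conf',
                    'expirer-queue-check', 3)
now = time.time()
total = pending = stale = 0
for container in ic.iter_containers('.expiring_objects'):
    for obj in ic.iter_objects('.expiring_objects', container['name']):
        # queue entries are named "<delete-at-ts>-<acct>/<cont>/<obj>"
        delete_at = int(obj['name'].split('-', 1)[0])
        total += 1
        if delete_at > now:
            continue                 # not due yet
        if now - delete_at < 86400:
            pending += 1             # due, awaiting an expirer pass
        else:
            stale += 1               # more than a day overdue
print('Total entries: %d' % total)
print('Pending entries: %d' % pending)
print('Stale entries: %d' % stale)
```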
clayg | donnyd: if you get backed up, run more | 15:33 |
donnyd | clayg: I am on stein | 15:34 |
clayg | donnyd: ok, the stuff I was thinking of is old, like 2.17 | 15:37 |
clayg | donnyd: when it logs "unexpected status" does it say what the status *was*? | 15:37 |
donnyd | https://www.irccloud.com/pastebin/3gLJ8adX/ | 15:38 |
clayg | donnyd: yeah, you're way behind - you need to run more expirers and increase the concurrency/workers | 15:38 |
donnyd | well that is a super duper handy little tool | 15:38 |
donnyd | I can do that | 15:39 |
donnyd | but there is nothing coming in atm - FN has been turned off for logging because of the state of it | 15:39 |
donnyd | and because I suck at the swifts | 15:39 |
clayg | the expirer has two options to scale workers - you have "processes" and "process" - processes is the total number of processes you're running - then in each config you set process = 0-n to assign each worker an index | 15:40 |
donnyd | I will post what I have.. its probably all effed up | 15:41 |
clayg | then concurrency will probably hit a sweet spot anywhere between 4 and 30 | 15:41 |
donnyd | https://www.irccloud.com/pastebin/H3PUSrX0/ | 15:41 |
clayg | so that's fine, as long as you have 10 other configs with process = 1..9 | 15:41 |
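Putting clayg's explanation together, a sketch of one node's object-expirer.conf in a 10-node scheme; the concurrency value is illustrative:

```ini
# object-expirer.conf on node 0 of 10; every node sets the same
# "processes" total and a unique "process" index from 0 through 9
[object-expirer]
processes = 10
process = 0
# per-process greenthread concurrency (clayg suggests 4-30 above)
concurrency = 8
```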
clayg | so the neat thing about having 100M stale entries is you're not going to be done anytime soon regardless - so there's no rush 😎 | 15:43 |
donnyd | LMAO | 15:44 |
donnyd | Oh so I see how those two work together now | 15:44 |
clayg | yeah if you've been running with the above config for awhile (i.e. no one was processing rows % 1..9) that could explain the stale entries | 15:47 |
clayg | tdasilva: thanks for re-spinning versioning | 15:48 |
openstackgerrit | Romain LE DISEZ proposed openstack/swift master: relinker: Improve performance by limiting I/O https://review.opendev.org/695344 | 15:49 |
clayg | i thought about container-sync over the weekend... I think i'd prefer to find some way to gracefully abstain rather than trying to "one-off" syncing of the most recent version | 15:49 |
clayg | I could see someone trying to turn on versioning on the remote end and just being entirely disappointed ... we really need to up the game on container sync | 15:50 |
*** rpittau is now known as rpittau|afk | 16:46 | |
timburke | donnyd, clayg: the graph to watch will be https://grafana.fortnebula.com/d/9MMqh8HWk/openstack-utilization?orgId=2&refresh=30s&from=now-7d&to=now&fullscreen&panelId=28 -- as long as FN is still out of the log pool, that should only be coming down -- i'd expect it to settle around 5-6% if logs are being retained for 60 days, or around 1% for only 30 days (again, assuming no new ingest) | 17:14 |
*** gyee has joined #openstack-swift | 17:16 | |
donnyd | timburke: it has been slowly but surely headed downward | 17:23 |
donnyd | Total entries: 110908075 | 17:23 |
donnyd | Pending entries: 526144 | 17:23 |
donnyd | Stale entries: 103061941 | 17:23 |
donnyd | killed almost 1M in the last 1.5 hours... so only 100 or so hours to go | 17:24 |
donnyd | maybe 200 LOL | 17:24 |
*** rdejoux has quit IRC | 17:25 | |
*** pcaruana has quit IRC | 17:31 | |
*** diablo_rojo has joined #openstack-swift | 17:51 | |
*** pcaruana has joined #openstack-swift | 18:23 | |
tdasilva | clayg: skipping versioned containers until we have a better solution seems sane; the same should be done for static links, I guess? | 18:45 |
clayg | tdasilva: i'm struggling with how to make it obvious to the user what's going on... and with normal static links I guess there's always the "hope" that the target eventually gets written on the remote for some reason and next pass it'll work | 18:52 |
*** gmann is now known as gmann_afk | 19:17 | |
openstackgerrit | Tim Burke proposed openstack/swift master: WIP: Add proxy-server option to quote-wrap all ETags https://review.opendev.org/695131 | 19:19 |
timburke | so i'm looking at the request timings we log at the object-server: https://github.com/openstack/swift/blob/2.23.0/swift/obj/server.py#L1296-L1301 | 19:24 |
timburke | is there a reason we're doing (roughly) time to first byte instead of looking at the whole transfer? | 19:25 |
*** tesseract has quit IRC | 19:37 | |
DHE | time to first byte is often viewed as a responsiveness metric | 19:42 |
*** ab-a has left #openstack-swift | 20:09 | |
*** ab-a has joined #openstack-swift | 20:10 | |
*** ab-a has left #openstack-swift | 20:10 | |
clayg | @timburke i'm pretty sure the theory was you could monitor for anomalies watching TTFB - whereas the total transfer time would be too variable based on the size of the object | 20:52 |
*** ab-a has joined #openstack-swift | 20:52 | |
timburke | so the context was, i saw an object-server respond real quick saying "hey, yeah, i've got that!" but then serve the data out at ~1/6 the speed of other drives in the cluster (looking at the total transfer time reported in the proxy) | 20:55 |
clayg | @timburke it's difficult to say for sure if it was serving the data slowly or the proxy was reading the data slowly (N.B. the proxy's read is back-pressured against the client socket buffers) | 21:10 |
clayg | regardless it's true that the TTFB measurement isn't giving you insight into the read throughput or the bottlenecks | 21:10 |
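To make the distinction concrete, here's a toy sketch (not swift's actual instrumentation) of how TTFB and total transfer time are measured differently around the same read loop:

```python
# Toy illustration of TTFB vs. total transfer time; not swift's real code.
import time

def measure(resp_iter, sink):
    """Consume resp_iter into sink; return (ttfb, total) in seconds."""
    start = time.time()
    first_byte_at = None
    for chunk in resp_iter:
        if first_byte_at is None:
            first_byte_at = time.time()  # roughly what the object-server logs
        sink(chunk)
    total = time.time() - start
    # a slow disk (or a slow reader) inflates `total`, while TTFB can
    # still look perfectly healthy
    return first_byte_at - start, total
```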
openstackgerrit | Clay Gerrard proposed openstack/swift master: Fix container-sync objects with offset timestamp https://review.opendev.org/698092 | 21:27 |
*** pcaruana has quit IRC | 21:31 | |
*** gmann_afk is now known as gmann | 21:48 | |
donnyd | is there any way to get swift to expire things faster? | 22:09 |
donnyd | I worry that this isn't going to get the job done until sometime in 2025 | 22:09 |
*** rcernin has joined #openstack-swift | 22:16 | |
mattoliverau | morning | 22:23 |
mattoliverau | notmyname: is it time to upgrade your mini swift setup at home? https://www.cnx-software.com/2019/12/08/rock-pi-sata-hat-targets-rock-pi-4-raspberry-pi-4-nas/ | 22:24 |
timburke | heh "The SATA HAT Top Board (with fan) is supposed to be at the top of the NAS, so maybe they could consider some ventilation holes at the top as well." | 22:34 |
*** rcernin has quit IRC | 22:36 | |
tdasilva | i wonder what's the best bang for the buck in terms of DIY home nas nowadays, odroid also had some nice boards | 23:04 |
*** diablo_rojo has quit IRC | 23:12 | |
*** tkajinam has joined #openstack-swift | 23:12 | |
*** diablo_rojo has joined #openstack-swift | 23:15 | |
*** rcernin has joined #openstack-swift | 23:17 | |
DHE | now where were those container sharding instructions? | 23:27 |
openstackgerrit | Merged openstack/python-swiftclient master: Cleanup session on delete https://review.opendev.org/674320 | 23:39 |
timburke | DHE, https://docs.openstack.org/swift/latest/overview_container_sharding.html | 23:40 |