opendevreview | Merged openstack/swift stable/2024.1: CI: Move off CentOS 8 https://review.opendev.org/c/openstack/swift/+/922762 | 01:07 |
opendevreview | Merged openstack/swift stable/2024.1: [stable-only] CI: Remove some old rolling-upgrade jobs https://review.opendev.org/c/openstack/swift/+/922691 | 01:07 |
opendevreview | Merged openstack/swift master: Support tox4 https://review.opendev.org/c/openstack/swift/+/881642 | 01:32 |
opendevreview | Merged openstack/swift stable/2023.2: CI: Move off CentOS 8 https://review.opendev.org/c/openstack/swift/+/922774 | 03:59 |
*** rwmtinkywinky7 is now known as rwmtinkywinky | 10:52 |
opendevreview | Alistair Coles proposed openstack/swift master: proxy-server: log correct path when listing parsing fails https://review.opendev.org/c/openstack/swift/+/922715 | 11:04 |
opendevreview | Md Azmain Adib Pahlowan proposed openstack/swift master: Periodic reaper recon dump showing current progress https://review.opendev.org/c/openstack/swift/+/922831 | 14:35 |
opendevreview | Md Azmain Adib Pahlowan proposed openstack/swift master: Periodic reaper recon dump showing current progress https://review.opendev.org/c/openstack/swift/+/922833 | 14:41 |
opendevreview | Merged openstack/swift master: proxy-server: log correct path when listing parsing fails https://review.opendev.org/c/openstack/swift/+/922715 | 18:10 |
opendevreview | Shreeya Deshpande proposed openstack/swift master: class LoggerStatsdFacade https://review.opendev.org/c/openstack/swift/+/915483 | 19:13 |
opendevreview | Tim Burke proposed openstack/swift master: Use entry_points for swift-init https://review.opendev.org/c/openstack/swift/+/922870 | 19:22 |
opendevreview | Tim Burke proposed openstack/swift master: Pull swift-dispersion-populate to its own module https://review.opendev.org/c/openstack/swift/+/922871 | 19:22 |
opendevreview | Tim Burke proposed openstack/swift master: Pull swift-*-info scripts into swift.cli.info https://review.opendev.org/c/openstack/swift/+/922872 | 19:22 |
opendevreview | Shreeya Deshpande proposed openstack/swift master: statsd: deprecate log_ prefix for options https://review.opendev.org/c/openstack/swift/+/922518 | 19:38 |
opendevreview | Shreeya Deshpande proposed openstack/swift master: statsd: deprecate log_ prefix for options https://review.opendev.org/c/openstack/swift/+/922518 | 19:45 |
opendevreview | Tim Burke proposed openstack/swift stable/2023.1: [stable-only] CI: Remove some old rolling-upgrade jobs https://review.opendev.org/c/openstack/swift/+/922693 | 19:54 |
opendevreview | Tim Burke proposed openstack/swift stable/2023.1: CI: Move off CentOS 8 https://review.opendev.org/c/openstack/swift/+/922879 | 19:54 |
opendevreview | Merged openstack/swift stable/2023.2: [stable-only] CI: Remove some old rolling-upgrade jobs https://review.opendev.org/c/openstack/swift/+/922692 | 20:50 |
kota | good morning | 20:55 |
fulecorafa | Good evening | 20:55 |
timburke | o/ | 20:57 |
opendevreview | Yan Xiao proposed openstack/swift master: stats: API for native labeled metrics https://review.opendev.org/c/openstack/swift/+/909882 | 20:59 |
timburke | #startmeeting swift | 21:00 |
opendevmeet | Meeting started Wed Jun 26 21:00:01 2024 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot. | 21:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 21:00 |
opendevmeet | The meeting name has been set to 'swift' | 21:00 |
timburke | who's here for the swift meeting? | 21:00 |
fulecorafa | I am o/ | 21:00 |
kota | hi | 21:00 |
acoles | o/ | 21:01 |
timburke | sorry, i feel like it's been a while since i held the regular meeting | 21:02 |
timburke | but i did actually update the agenda this time! | 21:02 |
timburke | #link https://wiki.openstack.org/wiki/Meetings/Swift | 21:02 |
jianjian | o/ | 21:02 |
timburke | first up, a couple recently-reported bugs | 21:03 |
timburke | #topic account-reaper and sharded containers | 21:03 |
timburke | it looks like they don't work! | 21:03 |
timburke | #link https://bugs.launchpad.net/swift/+bug/2070397 | 21:03 |
patch-bot | Bug #2070397 - Account reaper do not reap sharded containers (New) | 21:03 |
timburke | since the reaper is relying on direct_client instead of internal_client, it gets no objects to delete, tries to delete the container, and gets back a 409 -- next cycle, same deal | 21:04 |
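The failure loop timburke describes can be sketched in miniature. This is a hypothetical model, not Swift code: the classes and names are invented for illustration, but the mechanics follow the bug report -- a direct listing of a sharded root returns no objects (they live in the shard containers), so the reaper's container DELETE comes back 409 Conflict, and the next cycle repeats.

```python
# Minimal sketch (hypothetical classes, not Swift internals) of why the
# account reaper loops forever on a sharded container, per bug 2070397.

class ShardedRoot:
    """Stand-in for a sharded root container DB."""
    def __init__(self, object_count):
        # object count aggregated from the shard ranges
        self.object_count = object_count

    def direct_listing(self):
        # direct_client talks straight to the root DB, which holds no
        # object rows once sharded -- unlike internal_client, which
        # would fan the GET out to the shard containers
        return []

    def delete(self):
        # the container server rejects DELETE while the aggregate
        # object count is still non-zero
        return 409 if self.object_count else 204


def reap_once(container):
    """One reaper pass: delete listed objects, then the container."""
    for _ in container.direct_listing():
        container.object_count -= 1  # never reached: listing is empty
    return container.delete()


root = ShardedRoot(object_count=100)
print(reap_once(root))  # 409 -- next cycle, same deal
```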
timburke | this was also brought up on the mailing list, as zaitcev helpfully pointed out to me yesterday | 21:05 |
timburke | #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/KOJS5G5W24PARPLNQS2D5A6M5F3JRM7V/ | 21:06 |
zaitcev | That's because I went through exactly this thing with the dark data watcher. | 21:06 |
zaitcev | So a simple thing is to copy the code from the watcher and be done. | 21:07 |
zaitcev | Maybe some useful refactoring could be done... Or switch both of them to another API... I dunno. | 21:07 |
zaitcev | It took so long to engage Alistair's attention on this that I would prefer to just clone the code and forget about it. | 21:08 |
timburke | yeah, i was wondering if it might make sense to switch to using internal_client... i'm trying to remember why we didn't go that route for the watcher | 21:08 |
timburke | i think there'd be some complication around avoiding the 410, but surely we could get around that by adding some x-backend-... header flag | 21:09 |
zaitcev | Because watcher runs in an object auditor, and internal_client is a part of the proxy. It needs a pipeline set up, for one. IIRC. | 21:09 |
timburke | hmm. true enough -- and having a pipeline leaves more room for misconfiguration | 21:10 |
timburke | zaitcev, had you taken a look at the linked patch yet? i haven't, i'll admit | 21:10 |
acoles | so is the root cause that the reaper uses direct client to list objects? | 21:11 |
zaitcev | Not really, sorry. It looked faithfully cargo-culted from the watcher. | 21:11 |
timburke | acoles, yeah, afaik | 21:12 |
acoles | I'm wondering, even if all the objects are deleted from all the shards, will the container delete succeed, until all the shards have been shrunk away?? | 21:12 |
zaitcev | uh-oh | 21:13 |
acoles | isn't there this weird property that a container with shards isn't "empty"? although IIRC we have several definitions of "empty" | 21:13 |
timburke | hrm... i thought we had some special handling when all shards report zero objects, but i don't remember for certain... | 21:14 |
timburke | a good first step might be for one of us to write a failing probe test, then try the linked patch, and iterate from there | 21:14 |
acoles | I just remember that we have recurring warnings in prod for containers that cannot be reclaimed because they have shards...but maybe there is a workaround for reaper 🤔 | 21:14 |
acoles | yeah a probe test is definitely a good idea | 21:15 |
timburke | that does sound familiar... | 21:15 |
timburke | can anyone volunteer to get a probe test together? | 21:15 |
zaitcev | The v-word scares me. I'm up to my chest in Cinder nowadays. | 21:16 |
timburke | hehe -- all right, i'll put it on my to-do list and hopefully i can get to it this week | 21:17 |
acoles | zaitcev: that still leaves your brain available ;-) | 21:17 |
timburke | next up | 21:17 |
timburke | #topic busted docker image | 21:17 |
timburke | #link https://bugs.launchpad.net/swift/+bug/2070029 | 21:17 |
patch-bot | Bug #2070029 - swift-docker: saio docker alpine uncorrect (New) | 21:17 |
timburke | sounds like our published docker image isn't actually working correctly | 21:18 |
timburke | i guess my first question, though, is how much we really want to commit to maintaining it? it seems like a nice-to-have, but we surely ought to have more validation of the image if we're properly maintaining it | 21:19 |
acoles | does any core dev use it? AFAIK vSAIO is the recommended dev environment | 21:20 |
zaitcev | I don't but it's because we ship our own images in RHOSP. I don't know how to build them, honestly. | 21:21 |
jianjian | I just got to know there is a "saio docker" env | 21:21 |
fulecorafa | I personally use it when developing here. I noticed the docker image was failing, but I thought it was some mistake in my config here | 21:21 |
timburke | i've got a little single-disk cluster i use it for, but i don't even remember the last time i updated it | 21:22 |
fulecorafa | I did try to rebuild it using an lxc environment by hand and it still didn't work | 21:23 |
timburke | thanks for the data point fulecorafa -- fwiw, it seems like swift must not have gotten installed correctly, judging by the bug report | 21:24 |
timburke | i'll try to dig into it some more -- looks like i was the last one to touch it, in p 853362 | 21:25 |
patch-bot | https://review.opendev.org/c/openstack/swift/+/853362 - swift - Fix docker image building (MERGED) - 1 patch set | 21:25 |
acoles | fulecorafa: you may want to try https://github.com/NVIDIA/vagrant-swift-all-in-one which many of us use daily for dev | 21:25 |
timburke | next up | 21:25 |
acoles | FWIW I did have a go at adding a docker target for vSAIO, almost got it working | 21:25 |
timburke | #topic log_ prefix for statsd client configs | 21:26 |
timburke | #link https://review.opendev.org/c/openstack/swift/+/922518 | 21:26 |
patch-bot | patch 922518 - swift - statsd: deprecate log_ prefix for options - 3 patch sets | 21:26 |
timburke | this was split out of https://review.opendev.org/c/openstack/swift/+/919444 and the general desire to separate logging and stats | 21:26 |
patch-bot | patch 919444 - swift - Add get_statsd_client function (MERGED) - 13 patch sets | 21:26 |
timburke | but when i got into proxy-logging, i was reminded of how we already have (at least?) two ways of spelling these configs, and i wanted to make sure that we had some consensus to make a recommendation for how the configs *should* be specified going forward | 21:28 |
timburke | we already have things like log_statsd_host and access_log_statsd_host, and while i like the idea of moving toward just statsd_host or access_statsd_host, i don't really want to add support for *both* | 21:29 |
timburke | i want to say that the access_ prefix got added because of confusion over how to do config overrides with PasteDeploy | 21:32 |
timburke | does anybody have input on what should be our preferred option name? or maybe this is all a giant yak-shave that doesn't really have any hope of getting off the ground until we can get rid of PasteDeploy and make config overrides sensible? | 21:34 |
acoles | can we think of the differentiating prefix being 'access_log_' rather than just 'access_' and then support 'access_log_statsd_host', 'statsd_host', 'log_statsd_host' in that order of preference? | 21:35 |
zaitcev | XKCD_14_standards.jpg | 21:35 |
acoles | meaning, if you want proxy logging to have different statsd host you should configure access_log_statsd_host | 21:35 |
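acoles's suggested order of preference could be sketched as a simple lookup. This is a hedged illustration of the proposal under discussion, not merged Swift behavior; the function name and the exact key list are assumptions:

```python
# Sketch of the proposed lookup order: prefer the most specific
# 'access_log_statsd_host', then the generic 'statsd_host', then the
# deprecated 'log_statsd_host'. Illustrative only; not Swift's code.

def resolve_statsd_host(conf):
    """Return the statsd host using the proposed order of preference."""
    for key in ('access_log_statsd_host', 'statsd_host', 'log_statsd_host'):
        if key in conf:
            return conf[key]
    return None

# the generic spelling wins over the deprecated log_ spelling
conf = {'log_statsd_host': 'old.example.com',
        'statsd_host': 'stats.example.com'}
print(resolve_statsd_host(conf))  # stats.example.com
```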
timburke | that could work... it feels a little funny to me, though, like it's "most-specific > most-generic > middle" when i'd expect middle to be in the middle... *shrug* | 21:38 |
timburke | well, we don't have to sort it out right now. please leave thoughts/comments/ideas on the review! | 21:40 |
timburke | speaking of statsd/logging separation... | 21:40 |
timburke | #topic LoggerStatsdFacade | 21:40 |
timburke | #link https://review.opendev.org/c/openstack/swift/+/915483 | 21:40 |
patch-bot | patch 915483 - swift - class LoggerStatsdFacade - 25 patch sets | 21:40 |
acoles | I thought of it as "most-specific > generic > deprecated-generic" but, yeah, it's not ideal | 21:41 |
timburke | nothing too much to report, just wanted to highlight that work continues and there might be some light at the end of that particular tunnel | 21:42 |
timburke | i think it's getting close | 21:42 |
timburke | the last two topics, you can catch up on from the agenda -- i don't think there's anything too much to do for them, but they've been hogging a decent bit of my time the last week or two | 21:44 |
timburke | the entry-points patch chain starting at https://review.opendev.org/c/openstack/swift/+/918365 could use some review if anyone has bandwidth | 21:45 |
patch-bot | patch 918365 - swift - Use entry_points for server executables - 4 patch sets | 21:45 |
timburke | but especially since we've got a new face here, i wanted to leave time for | 21:45 |
timburke | #topic open discussion | 21:45 |
timburke | anything else we should bring up this week? | 21:45 |
fulecorafa | I would like to bring up a topic, if I may | 21:45 |
timburke | by all means! | 21:46 |
fulecorafa | I've been studying how to implement multi-storage-policy containers in swift; it's a requirement for work. I was hoping you guys could share your ideas and/or views on a solution for this | 21:47 |
fulecorafa | multi-storage-policy as in I can have 2 objects in the same bucket but one would be in normal policy and another in a "glacier" one | 21:48 |
acoles | symlinks would enable this | 21:49 |
timburke | fulecorafa, there's definitely been various interest in it over the years -- how are you thinking about objects moving between policies? would you want it to happen automatically after some period of time, or based on user action? | 21:49 |
acoles | ^^^ I was about to continue that the interesting/challenging part is 'automating' policy placement and migration between policies | 21:50 |
fulecorafa | From what I've gathered, I think the solution for this would be to use the same basic logic as SLOs: to have a normal bucket and another marked bucket (i.e. bucket+glacier). When uploading a file, given the policy, it would store the actual object in cold | 21:51 |
fulecorafa | And keep a symlink/manifest of sorts in the original bucket | 21:51 |
timburke | makes sense enough -- similar to x-amz-storage-class for s3 | 21:52 |
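The layout fulecorafa sketches maps onto Swift's real symlink semantics: the object body goes into a container created with the cold storage policy (`X-Storage-Policy` on container PUT), and a zero-byte object carrying `X-Symlink-Target` in the user-visible container points at it. A minimal sketch of the request plan -- the `+glacier` suffix, policy name, and function are hypothetical illustrations, not an agreed design:

```python
# Hypothetical request plan for the symlink-based cold-storage layout
# discussed in the meeting. Header names are real Swift API; the
# '+glacier' container-naming scheme is invented for illustration.

def cold_upload_plan(account, container, obj):
    """Build the three PUTs: cold container, cold object, symlink."""
    cold = container + '+glacier'  # hypothetical marked-bucket suffix
    base = '/v1/' + account
    return [
        # the cold container is created with the cold storage policy
        # (X-Storage-Policy is only honored at container creation time)
        {'method': 'PUT', 'path': '%s/%s' % (base, cold),
         'headers': {'X-Storage-Policy': 'glacier'}},
        # the actual object body lands in the cold container
        {'method': 'PUT', 'path': '%s/%s/%s' % (base, cold, obj),
         'headers': {}},
        # a zero-byte symlink in the user-visible container points at it
        {'method': 'PUT', 'path': '%s/%s/%s' % (base, container, obj),
         'headers': {'X-Symlink-Target': '%s/%s' % (cold, obj),
                     'Content-Length': '0'}},
    ]


for req in cold_upload_plan('AUTH_test', 'mybucket', 'photo.jpg'):
    print(req['method'], req['path'])
```

With the hidden-container machinery from feature/mpu, the `+glacier` container would not need to be visible in the user's namespace at all.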
acoles | one thing we have learnt from SLOs and S3 mpus is that it's not a great idea to have 'extra' 'shadow' +segments buckets visible in the users namespace. | 21:52 |
acoles | but we have the capability to maintain 'hidden' buckets (which is the direction native-MPU is going on feature/mpu) | 21:53 |
timburke | yeah, we'd almost certainly want to use the null-namespace stuff introduced with the most-recent iteration on versioning | 21:53 |
fulecorafa | As for objects moving between policies, I know AWS's API has something more on moving objects, but I haven't quite gotten into it. In my specific use-case, we could just use a client to re-put the object? But I'll have to think more on that | 21:53 |
fulecorafa | acoles, I remember you commenting this on the orphan patch I submitted last month. | 21:54 |
acoles | right | 21:54 |
timburke | fulecorafa, yeah -- i'd expect to be able to do a server-side copy with a new destination policy | 21:54 |
fulecorafa | I reckon that if we evolve on feature/mpu, we could keep these other marked buckets hidden too, right? | 21:54 |
timburke | yup, i think that's acoles's plan too | 21:55 |
acoles | I think there might be a good deal of commonality with native MPU | 21:55 |
opendevreview | Yan Xiao proposed openstack/swift master: stats: API for native labeled metrics https://review.opendev.org/c/openstack/swift/+/909882 | 21:55 |
fulecorafa | Should be good to go forward with this plan! | 21:56 |
acoles | possibly even a degenerate use case "Single Part Upload" :) that uses the manifest/symlink/hidden container logic | 21:56 |
timburke | i'd only note that things may start to get pretty confusing when you've got, say, a versioned MPU uploaded to one of these mixed-policy containers... | 21:57 |
fulecorafa | Just one last question: For my specific use-case, we're just working with the S3 api in an older version. Even if we do this, the bucket should not be visible to s3 clients, correct? | 21:57 |
jianjian | fulecorafa, will it be enough for the application to move objects between a normal container and a cold-policy container? | 21:57 |
acoles | there's probably similar cleanup concerns - what happens to the target object in "cold" when the user object is overwritten? it needs to be cleaned up like orphan segments do | 21:58 |
acoles | timburke: yeah, whilst I see commonality I'm also wary of overloading the goals of native-MPUs. One step at a time.. | 21:59 |
fulecorafa | jianjian, I don't think so, but it would be a first step. Taking AWS S3 as a counterpart, there would be other behaviours we would need to take into account. For now, I think a simple 2-policy solution would be a step in the right direction | 21:59 |
acoles | MPUs + versioning is complex enough for my small brain! | 21:59 |
acoles | fulecorafa: just curious, if you are able to share the info, do you use object versioning? | 22:00 |
fulecorafa | acoles: As soon as I started coding I noted that :). I very much liked the way you did the Orphan collector on feature/mpu. I think it would make it very simple to then collect the cold parts | 22:01 |
acoles | we still have a way to go on finishing the orphan auditor but we're excited about that approach | 22:02 |
fulecorafa | acoles: yeah, so we do allow for versioning but just for simple, normal policy files. In the future it may be needed to get support for versioning in other types, but that's not allowed yet | 22:02 |
acoles | and do your clients use s3 API mostly? | 22:02 |
fulecorafa | mostly, yes. We're trying to make it easy to migrate | 22:03 |
fulecorafa | timburke: Yeah, I think it can become complicated quite fast, but I'm not sure there would be another way. Maybe tinkering with the underlying hashing system? (I mean, the one that actually sets where the file can be found on disk) | 22:04 |
timburke | do you expect to need separate disk pools for each policy, or would there likely be a high level of overlap? another similar-but-different idea we've had was to have something like an EC policy, but sufficiently small uploads would be stored replicated on the first three assignments | 22:07 |
acoles | sorry I need to drop. fulecorafa: thanks for coming along and sharing your ideas, hope we can make progress together. Ask for help here and we'll do our best :) | 22:08 |
timburke | so it'd all use the same policy index and the same disks, but have different storage overheads | 22:08 |
fulecorafa | acoles: thank you very much for your help and receptivity, alongside the community's! have a good one | 22:08 |
timburke | oh yeah, i suppose we're a decent bit over time :-) | 22:09 |
fulecorafa | timburke: I think one of the main uses would be to have different disk pools | 22:09 |
timburke | ack | 22:09 |
timburke | all right, i should wrap this up -- but i'll add mixed-policy containers to the agenda for next week, fulecorafa | 22:10 |
timburke | thank you all for coming, and thank you for working on swift! | 22:10 |
timburke | #endmeeting | 22:10 |
opendevmeet | Meeting ended Wed Jun 26 22:10:34 2024 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 22:10 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/swift/2024/swift.2024-06-26-21.00.html | 22:10 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/swift/2024/swift.2024-06-26-21.00.txt | 22:10 |
opendevmeet | Log: https://meetings.opendev.org/meetings/swift/2024/swift.2024-06-26-21.00.log.html | 22:10 |
fulecorafa | Thanks a lot timburke! I'll keep updating | 22:10 |
timburke | that'll be great, thanks | 22:10 |