mattoliverau | notmyname: lol, of course! We need a noymydictionary bot :P | 00:00 |
notmyname | getting the list of objects in a container. so I guess "object listings" is probably better | 00:00 |
notmyname | patchbot: what does notmyname really mean? | 00:00 |
clayg | standardize GET a/c as "object listing" - but use context clues when someone says it wrong | 00:00 |
clayg | notmyname: FWIW I realized I call them both "container listings" - I never say "account listings" :P | 00:01 |
notmyname | the code (so far) is _really_ short. and it's kinda cool because I can put a/c/o then kill the container servers then put /a/c/o2 and it works and is really fast. no 10+ sec timeout | 00:01 |
notmyname | clayg: listing a container vs list of containers. both are container listings. I think your way is best ;-) | 00:02 |
clayg | notmyname: ;) | 00:02 |
clayg | just parse it as "what I mean" - simple | 00:02 |
notmyname | :-) | 00:02 |
notmyname | I got the TODOs all written down in one place from the summit etherpads. tomorrow I'll work on getting those in some sort of organized form. If I'm incredibly lucky (probably not) I'll have something ready by the meeting tomorrow | 00:03 |
notmyname | and remember that the meeting is at 2100UTC | 00:03 |
notmyname | mattoliverau: ho: dmorita: kota_: ^ | 00:03 |
notmyname | acoles_away: ^ cschwede: ^ | 00:04 |
mattoliverau | notmyname: yup, looking forward to the sleep in :) | 00:04 |
*** annegentle has quit IRC | 00:04 | |
mattoliverau | thanks for the reminder, and soon the ical stuff in openstack will be managed via yaml | 00:04 |
notmyname | is there anything in openstack that can't be solved with a yaml file? | 00:05 |
notmyname | git@github.com:openstack-infra/irc-meetings.git <-- the new meeting stuff, I'm told | 00:05 |
notmyname | new openstack shirt idea: I replaced your ops team with a small yaml file | 00:05 |
notmyname | ok, I'm headed home | 00:06 |
notmyname | later | 00:06 |
mattoliverau | nope nothing, when in doubt, yaml it! | 00:07 |
mattoliverau | notmyname: night | 00:07 |
*** annegentle has joined #openstack-swift | 00:09 | |
*** david-lyle has quit IRC | 00:20 | |
*** annegentle has quit IRC | 00:21 | |
*** annegentle has joined #openstack-swift | 00:27 | |
*** annegentle has quit IRC | 00:36 | |
*** swdweeb has joined #openstack-swift | 00:40 | |
*** jrichli has joined #openstack-swift | 00:42 | |
*** swdweeb has left #openstack-swift | 00:50 | |
*** minwoob has quit IRC | 00:50 | |
*** zhill_ has quit IRC | 01:12 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/python-swiftclient: Updated from global requirements https://review.openstack.org/89250 | 01:13 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 01:14 |
*** jamielennox|away is now known as jamielennox | 01:23 | |
*** kota_ has joined #openstack-swift | 01:26 | |
*** annegentle has joined #openstack-swift | 01:27 | |
*** annegentle has quit IRC | 01:28 | |
openstackgerrit | Victor Stinner proposed openstack/swift: Use six to fix imports on Python 3 https://review.openstack.org/185453 | 01:29 |
*** setmason has quit IRC | 01:35 | |
*** david-lyle has joined #openstack-swift | 01:35 | |
openstackgerrit | Mauricio Lima proposed openstack/swift: Uncomment [filter: keystoneauth] and [filter: authtoken] sessions https://review.openstack.org/185755 | 01:43 |
*** annegentle has joined #openstack-swift | 01:49 | |
*** kota_ has quit IRC | 02:06 | |
*** kota_ has joined #openstack-swift | 02:08 | |
*** kota_ has quit IRC | 02:15 | |
*** david-lyle has quit IRC | 02:33 | |
*** setmason has joined #openstack-swift | 02:33 | |
*** gyee has quit IRC | 02:36 | |
*** annegentle has quit IRC | 02:38 | |
*** annegentle has joined #openstack-swift | 02:40 | |
*** jrichli has quit IRC | 03:00 | |
*** david-lyle has joined #openstack-swift | 03:11 | |
*** annegentle has quit IRC | 03:32 | |
hugokuo | few photos of Swift 5th anniversary party in Vancouver http://s217.photobucket.com/user/tonytkdk/library/OpenStack%20Swift%205th%20anniversary | 03:35 |
mattoliverau | hugokuo: nice :) | 03:37 |
*** annegentle has joined #openstack-swift | 03:37 | |
hugokuo | :) | 03:39 |
*** links has joined #openstack-swift | 04:15 | |
*** proteusguy has quit IRC | 04:16 | |
*** annegentle has quit IRC | 04:26 | |
*** kota_ has joined #openstack-swift | 04:27 | |
*** joeljwright has joined #openstack-swift | 04:37 | |
*** bkopilov is now known as bkopilov_wfh | 04:52 | |
*** leopoldj has joined #openstack-swift | 05:18 | |
*** ppai has joined #openstack-swift | 05:22 | |
*** cloudm2 has quit IRC | 05:32 | |
*** setmason has quit IRC | 05:34 | |
*** setmason has joined #openstack-swift | 05:35 | |
*** SkyRocknRoll has joined #openstack-swift | 05:40 | |
ho | hugokuo: great! | 05:41 |
hugokuo | ho: :) | 05:46 |
*** zaitcev has quit IRC | 05:55 | |
*** mmcardle has joined #openstack-swift | 05:56 | |
*** kota_ has quit IRC | 06:01 | |
*** setmason has quit IRC | 07:06 | |
*** setmason has joined #openstack-swift | 07:12 | |
*** jordanP has joined #openstack-swift | 07:14 | |
*** bkopilov_wfh has quit IRC | 07:20 | |
*** bkopilov has joined #openstack-swift | 07:25 | |
*** hseipp has joined #openstack-swift | 07:25 | |
*** hseipp has quit IRC | 07:26 | |
*** hseipp has joined #openstack-swift | 07:26 | |
*** annegentle has joined #openstack-swift | 07:27 | |
*** krykowski has joined #openstack-swift | 07:30 | |
*** annegentle has quit IRC | 07:32 | |
*** silor has joined #openstack-swift | 07:33 | |
*** silor has quit IRC | 07:34 | |
*** setmason has quit IRC | 07:41 | |
*** chlong has quit IRC | 07:45 | |
*** geaaru has joined #openstack-swift | 07:50 | |
*** jistr has joined #openstack-swift | 07:52 | |
*** kota_ has joined #openstack-swift | 08:39 | |
*** mariusv has quit IRC | 08:42 | |
*** mariusv has joined #openstack-swift | 08:47 | |
*** mariusv has quit IRC | 08:48 | |
*** mariusv has joined #openstack-swift | 08:49 | |
*** mariusv has quit IRC | 08:54 | |
*** kei_yama has quit IRC | 09:09 | |
*** km has quit IRC | 09:10 | |
eikke | is there any existing way to get policies for ObjectControllerRouter and DiskFileRouter registered from within an out-of-tree package? | 09:16 |
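For context on eikke's question: in the EC-era code both ObjectControllerRouter and DiskFileRouter keep a registry keyed by policy type, populated through a register() classmethod used as a decorator. The sketch below only shows that in-tree pattern applied to an invented policy type name; whether an out-of-tree package can simply import the routers and decorate its own classes, and how it would make sure that module gets imported early enough, is exactly the open question being asked here.

```python
# Sketch only: assumes the register() decorators used by the in-tree
# 'replication' and 'erasure_coding' policy types are importable and
# callable from an external package.
from swift.obj.diskfile import DiskFileRouter
from swift.proxy.controllers.obj import (ObjectControllerRouter,
                                          BaseObjectController)

MY_POLICY_TYPE = 'my_backend'  # invented policy type name


@DiskFileRouter.register(MY_POLICY_TYPE)
class MyDiskFileManager(object):
    """Out-of-tree DiskFileManager; real implementation elided."""


@ObjectControllerRouter.register(MY_POLICY_TYPE)
class MyObjectController(BaseObjectController):
    """Out-of-tree proxy-side object controller; real implementation elided."""
```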
*** ekarlso has quit IRC | 09:16 | |
*** ekarlso has joined #openstack-swift | 09:22 | |
*** haypo has joined #openstack-swift | 09:25 | |
*** acoles_away is now known as acoles | 09:31 | |
*** theanalyst has quit IRC | 09:34 | |
*** joeljwright has quit IRC | 09:36 | |
*** dosaboy_ has quit IRC | 09:36 | |
*** dosaboy has joined #openstack-swift | 09:36 | |
*** theanalyst has joined #openstack-swift | 09:37 | |
*** joeljwright has joined #openstack-swift | 09:39 | |
*** elmo has quit IRC | 09:54 | |
*** elmo has joined #openstack-swift | 09:57 | |
*** theanalyst has quit IRC | 10:02 | |
*** theanalyst has joined #openstack-swift | 10:04 | |
*** theanalyst has quit IRC | 10:08 | |
*** theanalyst has joined #openstack-swift | 10:12 | |
*** proteusguy has joined #openstack-swift | 10:15 | |
*** ho has quit IRC | 10:48 | |
*** bhanu has joined #openstack-swift | 10:58 | |
*** bhanu has quit IRC | 11:04 | |
*** hseipp has quit IRC | 11:07 | |
*** slavisa has joined #openstack-swift | 11:14 | |
*** slavisa has quit IRC | 11:19 | |
*** slavisa has joined #openstack-swift | 11:19 | |
*** hseipp has joined #openstack-swift | 11:24 | |
*** hseipp has quit IRC | 11:25 | |
*** hseipp has joined #openstack-swift | 11:25 | |
*** openstack has joined #openstack-swift | 11:38 | |
-cameron.freenode.net- [freenode-info] if you're at a conference and other people are having trouble connecting, please mention it to staff: http://freenode.net/faq.shtml#gettinghelp | 11:38 | |
*** aix has joined #openstack-swift | 11:47 | |
*** SkyRocknRoll has quit IRC | 11:49 | |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift: Fix FakeSwift to simulate SLO https://review.openstack.org/185940 | 11:50 |
*** SkyRocknRoll has joined #openstack-swift | 11:50 | |
*** ppai has quit IRC | 11:54 | |
*** hseipp has quit IRC | 12:06 | |
*** hseipp has joined #openstack-swift | 12:07 | |
*** bkopilov is now known as bkopilov_wfh | 12:08 | |
*** ppai has joined #openstack-swift | 12:08 | |
*** kota_ has quit IRC | 12:13 | |
*** acoles is now known as acoles_away | 12:18 | |
openstackgerrit | Alistair Coles proposed openstack/swift: Make SSYNC receiver return a reponse when initial checks fail https://review.openstack.org/177836 | 12:23 |
openstackgerrit | Alistair Coles proposed openstack/swift: Remove _ensure_flush() from SSYNC receiver https://review.openstack.org/177837 | 12:23 |
*** ppai has quit IRC | 12:36 | |
*** annegentle has joined #openstack-swift | 12:41 | |
*** ekarlso has quit IRC | 12:53 | |
*** ekarlso has joined #openstack-swift | 12:53 | |
cschwede | clayg: did you recently see errors running the vagrant saio? today i always get an "AttributeError: 'VersionInfo' object has no attribute 'semantic_version'", i think it is somehow related to pbr and/or pip :/ | 12:56 |
openstackgerrit | Merged openstack/swift: Exclude local_dev from sync partners on failure https://review.openstack.org/175076 | 12:59 |
*** annegentle has quit IRC | 13:06 | |
openstackgerrit | Merged openstack/swift: fixup!Patch of "parse_content_disposition" method to meet RFC2183 https://review.openstack.org/185389 | 13:15 |
*** jkugel has joined #openstack-swift | 13:17 | |
*** mwheckmann has joined #openstack-swift | 13:21 | |
*** acoles_away is now known as acoles | 13:25 | |
*** annegentle has joined #openstack-swift | 13:28 | |
*** robefran_ has joined #openstack-swift | 13:30 | |
cschwede | clayg: nevermind, i somehow got the vagrant saio manually working. it’s a testtools issue, needed some updates (with more dependencies) and then it worked again. going to submit a patch later | 13:30 |
*** wbhuber has joined #openstack-swift | 13:30 | |
*** annegentle has quit IRC | 13:30 | |
*** annegentle has joined #openstack-swift | 13:33 | |
*** mwheckmann has quit IRC | 13:33 | |
*** krykowski has quit IRC | 13:35 | |
*** krykowski_ has joined #openstack-swift | 13:35 | |
openstackgerrit | Christian Schwede proposed openstack/swift: Allow SLO PUTs to forgo per-segment integrity checks https://review.openstack.org/184479 | 13:38 |
*** links has quit IRC | 13:41 | |
cschwede | timburke: ^^ i only wanted to fix a minor typo and avoid nitpicking, but gerrit didn’t detect this trivial change. sorry for that | 13:42 |
*** cloudm2 has joined #openstack-swift | 13:42 | |
*** slavisa has quit IRC | 13:44 | |
*** bhakta has left #openstack-swift | 13:47 | |
*** annegentle has quit IRC | 13:52 | |
*** annegentle has joined #openstack-swift | 13:53 | |
*** gsilvis has quit IRC | 13:53 | |
*** gsilvis has joined #openstack-swift | 13:54 | |
openstackgerrit | Alistair Coles proposed openstack/swift: Filter Etag key from ssync replication-headers https://review.openstack.org/173973 | 13:54 |
*** mcnully has joined #openstack-swift | 14:01 | |
openstackgerrit | Alistair Coles proposed openstack/swift: Make SSYNC receiver return a reponse when initial checks fail https://review.openstack.org/177836 | 14:04 |
openstackgerrit | Alistair Coles proposed openstack/swift: Remove _ensure_flush() from SSYNC receiver https://review.openstack.org/177837 | 14:04 |
*** chlong has joined #openstack-swift | 14:04 | |
*** jrichli has joined #openstack-swift | 14:09 | |
acoles | ^^ how come clayg gets to land first and I get to fix merge conflicts :/ | 14:10 |
acoles | clayg zaitcev : https://review.openstack.org/177836 needs your +2's again please due to a trivial import conflict | 14:11 |
*** mcnully has quit IRC | 14:16 | |
*** annegentle has quit IRC | 14:17 | |
*** bsdkurt has quit IRC | 14:17 | |
*** leopoldj has quit IRC | 14:17 | |
*** breitz has quit IRC | 14:19 | |
*** breitz has joined #openstack-swift | 14:20 | |
mordred | who's a good person to poke with questions about swift at rackspace? specifically, Infra are uploading images to swift and we're getting ClientException: Object POST failed returned a 504 Gateway Time-out | 14:22 |
mordred | I'm wondering if there is something we're doing wrong on our side? or if there is an issue we need to report to someone | 14:22 |
*** mwheckmann has joined #openstack-swift | 14:24 | |
*** minwoob has joined #openstack-swift | 14:32 | |
*** annegentle has joined #openstack-swift | 14:32 | |
*** esker has joined #openstack-swift | 14:33 | |
*** krykowski_ has quit IRC | 14:33 | |
*** krykowski has joined #openstack-swift | 14:37 | |
*** proteusguy has quit IRC | 14:47 | |
MooingLemur | oh, great.. it looks like on my EC system, there are a bunch of objects that lost most of their fragments after I shuffled the devices around between machines | 14:50 |
*** acampbell has joined #openstack-swift | 14:57 | |
*** zaitcev has joined #openstack-swift | 14:58 | |
*** ChanServ sets mode: +v zaitcev | 14:58 | |
*** acampbell has quit IRC | 14:58 | |
*** acampbell has joined #openstack-swift | 14:59 | |
*** mwheckmann has quit IRC | 14:59 | |
*** proteusguy has joined #openstack-swift | 14:59 | |
*** acampbel11 has joined #openstack-swift | 14:59 | |
*** bhakta has joined #openstack-swift | 15:01 | |
*** bhakta has left #openstack-swift | 15:01 | |
*** ChanServ sets mode: +v cschwede | 15:02 | |
*** mwheckmann has joined #openstack-swift | 15:04 | |
*** krykowski has quit IRC | 15:08 | |
*** slavisa has joined #openstack-swift | 15:11 | |
*** SkyRocknRoll has quit IRC | 15:14 | |
*** acampbel11 has joined #openstack-swift | 15:16 | |
*** acampbel11 has quit IRC | 15:16 | |
*** slavisa has quit IRC | 15:17 | |
notmyname | good morning | 15:18 |
notmyname | mordred: if ahale (ops) or hurricanerix_ (dev) are around, they might be able to look. otherwise a support ticket is probably faster | 15:19 |
mordred | notmyname: cool. thanks! | 15:19 |
mordred | notmyname, ahale, hurricanerix_: we've found a weirdness on our side we're investigating for the moment, if I clear that up and still get 504's, I'll come pinging | 15:19 |
notmyname | eikke: I'd love to have those exposed via a python entry_point and settable via a config | 15:19 |
*** acampbell has quit IRC | 15:20 | |
*** silor has joined #openstack-swift | 15:21 | |
*** slavisa has joined #openstack-swift | 15:22 | |
*** rbrooker_ has joined #openstack-swift | 15:25 | |
*** nadeem has joined #openstack-swift | 15:25 | |
*** nadeem has quit IRC | 15:25 | |
tdasilva | notmyname, eikke: I was thinking of doing that here: https://review.openstack.org/#/c/159285/ | 15:27 |
tdasilva | even though this code is meant to live in the swift tree, it could serve as an example | 15:28 |
*** nadeem has joined #openstack-swift | 15:28 | |
*** setmason has joined #openstack-swift | 15:31 | |
acoles | notmyname: ack meeting time change, calendar updated! | 15:32 |
notmyname | acoles: thanks | 15:32 |
openstackgerrit | Victor Stinner proposed openstack/swift: Replace StringIO.StringIO with six.BytesIO https://review.openstack.org/186042 | 15:32 |
acoles | good news is swift meetings no longer conflict with champions league matches (when they restart...) | 15:32 |
notmyname | that is good news | 15:33 |
openstackgerrit | Victor Stinner proposed openstack/swift: Get StringIO and cStringIO from six.moves https://review.openstack.org/185457 | 15:34 |
tdasilva | acoles: haha, good point | 15:34 |
*** zhill has joined #openstack-swift | 15:36 | |
*** mwheckmann has quit IRC | 15:38 | |
*** bhakta has joined #openstack-swift | 15:38 | |
mordred | notmyname: hey - so ... checking in to my infra problem ... | 15:41 |
mordred | notmyname: I see that the create call worked just fine, but in trying to update two pieces of metadata, it's sitting there for _quite_ a while | 15:41 |
mordred | is updating metadata more expensive than I would have thought? | 15:41 |
mordred | yup | 15:42 |
notmyname | mordred: yes | 15:42 |
mordred | that's where I'm getting the 504 | 15:42 |
mordred | AHA | 15:42 |
mordred | awesome | 15:42 |
mordred | see, I'm learning things | 15:42 |
mordred | notmyname: should I just set the metadata as part of the create in the first place then? | 15:42 |
notmyname | mordred: it's much better to upload the metadata with the original PUT if at all possible | 15:42 |
notmyname | yes | 15:42 |
mordred | cool | 15:42 |
mordred | I will do that | 15:42 |
mordred | thanks! bug found and solved | 15:42 |
notmyname | rackspace, like most swift deployers, has post-as-copy turned on (for good reasons). it means that updating object metadata results in a whole server-side copy of the object | 15:43 |
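The upshot of the exchange above, as a minimal python-swiftclient sketch. The auth details, container/object names, and metadata keys are placeholders, and the keyword arguments are assumed to match the swiftclient Connection API of the time:

```python
from swiftclient.client import Connection

# Placeholder credentials and names, for illustration only.
conn = Connection(authurl='https://identity.example.com/v2.0',
                  user='tenant:user', key='secret', auth_version='2')

metadata = {
    'X-Object-Meta-Build': 'image-build-1234',
    'X-Object-Meta-Arch': 'x86_64',
}

with open('image.qcow2', 'rb') as f:
    # Send the metadata with the original PUT ...
    conn.put_object('images', 'image.qcow2', contents=f, headers=metadata)

# ... rather than in a later POST, which with post-as-copy enabled on the
# cluster triggers a full server-side copy of the object:
# conn.post_object('images', 'image.qcow2', headers=metadata)
```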
eikke | notmyname: cool, might work on that | 15:43 |
notmyname | eikke: looks like tdasilva has some ideas too (as I'd expect) | 15:43 |
eikke | tdasilva: assuming 159285 goes in, it won't need that, right | 15:43 |
*** mwheckmann has joined #openstack-swift | 15:44 | |
*** nadeem_ has joined #openstack-swift | 15:44 | |
*** nadeem has quit IRC | 15:44 | |
notmyname | patch 159285 | 15:45 |
patchbot | notmyname: https://review.openstack.org/#/c/159285/ | 15:45 |
tdasilva | eikke: well, that's just one storage policy (which I think would cover most third-party SP), but it won't cover all | 15:45 |
mordred | notmyname: the headers param for put_object should be the same form as the ones for post_object, yeah? | 15:45 |
notmyname | tdasilva: I'd like to see it independently of single process swift patches. ie just something that makes life easier for anyone writing/deploying alternate DiskFile implementations | 15:45 |
notmyname | mordred: correct | 15:45 |
mordred | notmyname: see... here you go having consistent parameters | 15:46 |
mordred | it's confusing | 15:46 |
notmyname | swift meeting time today is http://www.timeanddate.com/worldclock/fixedtime.html?hour=21&min=00&sec=0 | 15:46 |
mordred | are you sure I don't need to append like a unicode bunnyrabbit for one of them? | 15:46 |
eikke | tdasilva: agree. the reason I asked for this was to allow us to write an SP implementation similar to the single-process one even though the single-process one wouldn't be merged yet. might also allow to put some diskfile-specific config in the storage policy definition | 15:46 |
notmyname | mordred: sorry. I'll try to make sure we can update our API by 1.5 versions every release and change the whole sdk every time ;-) | 15:46 |
eikke | notmyname: for diskfiles one can already use a custom entrypoint due to paste | 15:47 |
notmyname | eikke: sort of. you have to have your own (basically copied) object server that instantiates its own DiskFile. I want to see you use the upstream object server directly and reference the DiskFile that is used | 15:47 |
*** annegentle has quit IRC | 15:47 | |
eikke | we only inherit from Swift ObjectController and indeed construct custom DiskFile stuff | 15:49 |
tdasilva | eikke, notmyname: yeah, SoF is basically the same | 15:51 |
notmyname | right. and I'd prefer to see you use the upstream object server code. makes it so that the DiskFile interface is what you have to implement instead of the object server interface | 15:53 |
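The pattern eikke and tdasilva describe (inherit from the stock object server, swap only the DiskFile construction) looks roughly like the sketch below. The get_diskfile() hook name and signature are assumptions based on the object server of that era, and all the 'my'-prefixed names are invented:

```python
from swift.obj import server


class MyDiskFile(object):
    """Placeholder for an out-of-tree DiskFile implementation."""

    def __init__(self, mgr, device, partition, account, container, obj,
                 policy=None, **kwargs):
        # Talk to the alternate storage backend here.
        self._backend_args = (device, partition, account, container, obj)


class MyObjectController(server.ObjectController):
    # Only DiskFile construction is overridden; WSGI handling, SSYNC,
    # auditing hooks, etc. stay upstream.
    def get_diskfile(self, device, partition, account, container, obj,
                     policy=None, **kwargs):
        return MyDiskFile(self, device, partition, account, container, obj,
                          policy=policy, **kwargs)


def app_factory(global_conf, **local_conf):
    """Paste entry point, so object-server.conf can point at
    'use = egg:mypackage#object' instead of egg:swift#object."""
    conf = dict(global_conf, **local_conf)
    return MyObjectController(conf)
```

What notmyname is asking for would remove the subclass entirely: the stock ObjectController would pick the DiskFile class from a config-selected entry point, so only the MyDiskFile part would need to exist out of tree.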
*** barra204 has quit IRC | 15:54 | |
*** slavisa has quit IRC | 16:06 | |
openstackgerrit | Merged openstack/swift: Allow SLO PUTs to forgo per-segment integrity checks https://review.openstack.org/184479 | 16:07 |
*** jistr has quit IRC | 16:07 | |
eikke | notmyname: then how do you handle different policies? | 16:09 |
eikke | (and more specifiically, their configuration) | 16:09 |
*** Fin1te has joined #openstack-swift | 16:09 | |
notmyname | eikke: good question, and I don't know yet :-) | 16:09 |
notmyname | eikke: one idea would be to have an entry point per policy per server. that might be simplified with the current single-object-server-per-drive that swifterdarrell has been looking at | 16:10 |
eikke | next to that, there's also the compatibility question... I really don't want to end up in a situation where our backend needs diverging codebases in order to support different swift versions (until now it's feasible with some minor hacks) | 16:10 |
notmyname | eikke: but also, the DiskFile, IMO, should actually know anything about policies (other than the index or some other unique identifier). the DiskFile needs to persist data to storage media (or in your case some other system). so as long as a policy results in a deterministic read/write, then it should work, right? | 16:12 |
notmyname | s/should/shouldn't/ | 16:12 |
notmyname | at least, that's the ideal in my mind :-) | 16:12 |
tdasilva | eikke, notmyname: just to clarify, that's two different problems right? one is the python-entry point for the DiskFile so the object-server can be reused. The other, is the ability to add more storage policy types and new ObjectControllers in the proxy | 16:12 |
notmyname | tdasilva: correct. those are 2 separate things, and I'm mostly interested in the entry point for disk file implementations on the object server | 16:13 |
eikke | notmyname: mostly, I guess (it's +- what we do, all policy management is done in the objectcontroller, which instantiates diskfiles which read/write data to some location passed in their constructor, don't care about policies themself) | 16:14 |
*** rbrooker_ has quit IRC | 16:17 | |
*** bsdkurt has joined #openstack-swift | 16:21 | |
openstackgerrit | Victor Stinner proposed openstack/swift: Get StringIO and cStringIO from six.moves https://review.openstack.org/185457 | 16:23 |
openstackgerrit | Victor Stinner proposed openstack/swift: Replace StringIO with BytesIO for WSGI input https://review.openstack.org/186071 | 16:23 |
openstackgerrit | Victor Stinner proposed openstack/swift: Replace StringIO with BytesIO for file https://review.openstack.org/186072 | 16:23 |
openstackgerrit | Victor Stinner proposed openstack/swift: Replace StringIO with BytesIO in ssync https://review.openstack.org/186073 | 16:23 |
*** cdelatte has joined #openstack-swift | 16:29 | |
*** haypo has quit IRC | 16:31 | |
*** bsdkurt has left #openstack-swift | 16:31 | |
MooingLemur | clayg: Just wanted to sound the alarm a bit with my experiences with EC so far, but I don't have an understanding of what happened yet. I decided to recreate my ring (with the same part power of course). And along with physically moving some drives around, I ended up with a lot of partitions that have only a small collection of fragments of objects and the partitions are not on other devices. The fragments are simply gone. | 16:37 |
notmyname | http://lists.openstack.org/pipermail/openstack-operators/2015-May/007132.html <-- torgomatic | 16:37 |
MooingLemur | clayg: I've been trying to understand the replication code path, and I really wish the reverts were somehow logged, since I think that's where the errant deletions happened. | 16:38 |
MooingLemur | s/replication/reconstruction/ | 16:38 |
clayg | MooingLemur: quarantine? | 16:39 |
MooingLemur | nope, no quarantine dirs on any devices in my cluster | 16:39 |
clayg | MooingLemur: which patches are you running? | 16:40 |
MooingLemur | clayg: just the one that forces the primary to eat its own fragment index | 16:40 |
clayg | MooingLemur: so same part power... so we expected all the parts to be on the wrong devices - but still totally usable to the new cluster/ring | 16:40 |
clayg | MooingLemur: and it only worked for *some* objects? Or it worked like not at all. | 16:41 |
acoles | cd | 16:41 |
acoles | oops! | 16:41 |
clayg | acoles:~/$ | 16:41 |
acoles | clayg: something like that yeah :D | 16:41 |
acoles | rm -rf | 16:42 |
acoles | oops ;) | 16:42 |
MooingLemur | clayg: Lemme see what I can gather. Good question. I've just been doing things like: for i in `seq -w 01 05`; do ssh swift-storage-$i ls -l /srv/node/\*/objects/100214/???/*/*\#*; done | 16:42 |
MooingLemur | but for lines that have appeared in my logs saying only 4/9 fragments were found, etc | 16:42 |
clayg | Permission denied | 16:43 |
acoles | phew | 16:43 |
acoles | clayg: thx for all your reviews btw. i'm all over the place with jet lag but catching up slowly | 16:43 |
clayg | MooingLemur: how many nodes do you have? you may have to go trolling around on all the disks - it could be that they're out there but the proxy can't find them and the reconstructor isn't making progress for some reason | 16:44 |
clayg | MooingLemur: I've seen pyeclib randomly segfault - there's a couple of patches about for some of those issues - but I don't think we've squared it all yet | 16:44 |
clayg | acoles: you must be jet lagged if you think I'm doing reviews :\ | 16:45 |
*** hseipp has left #openstack-swift | 16:45 | |
clayg | acoles: although i do technically have one of your ssync changes checked out at the moment | 16:45 |
acoles | heh | 16:45 |
MooingLemur | clayg: I have 5 nodes, with 4 devices each. My ssh should be able to find them. | 16:45 |
clayg | acoles: the *idea* is that will result in a score | 16:45 |
clayg | MooingLemur: good hunting! | 16:45 |
MooingLemur | also that ssh should be objects-1, not objects... but anyway, the idea's the same | 16:45 |
acoles | clayg: i think i am in timezone clay-4days | 16:46 |
MooingLemur | clayg: it's pretty clear many of the objects have all of their fragments... maybe I should just sample my container listing and count the 200s and the others. | 16:47 |
MooingLemur | clayg: when reconstruction reverts, does the .durable file tag along (or a new one created on the receiver)? | 16:48 |
*** annegentle has joined #openstack-swift | 16:48 | |
clayg | MooingLemur: like I said a non-200 is something to dig into, but we need to know if the fragments are "out there somewhere" | 16:48 |
clayg | MooingLemur: maybe depending on your scheme 2 * replica might actually already be hitting every disk for you - dunno | 16:49 |
clayg | MooingLemur: oh, could be duplicates as well - if some of the responding nodes have multiple frags the proxy can't really tease that out | 16:49 |
MooingLemur | clayg: https://bpaste.net/show/68b504bc7a62 | 16:49 |
clayg | MooingLemur: receiver will create a new one | 16:49 |
MooingLemur | I only have 5 storage nodes, and /srv/node/* should cover all the mounts | 16:50 |
MooingLemur | it's a 12-replica policy (9+3) | 16:50 |
*** nadeem_ has quit IRC | 16:50 | |
MooingLemur | that one just happens to be a small object | 16:50 |
clayg | MooingLemur: yeah looks sketchy - like maybe auditor or some other jackhole swept over the hashdir and used the wrong hash_cleanup_listdir :\ | 16:51 |
MooingLemur | clayg: https://bpaste.net/show/5d593cb6a950 | 16:51 |
MooingLemur | yeah, stuff got destroyed :P | 16:52 |
*** annegentle has quit IRC | 16:53 | |
notmyname | clayg: https://imgflip.com/i/m2l0l | 16:53 |
MooingLemur | clayg: it looks like it's way worse than an occasional lost race though | 16:53 |
*** nadeem has joined #openstack-swift | 16:54 | |
clayg | notmyname: good one! | 16:54 |
clayg | 12-frag on... what'd you say 20 devices | 16:55 |
MooingLemur | yeah | 16:55 |
clayg | MooingLemur: yeah I'm pretty sure the auditor ate them :\ | 16:58 |
MooingLemur | clayg: I found some instances in the logs where the same host kept removing the same partition on the same devices an hour after it did it the first time | 17:00 |
clayg | MooingLemur: that's *probably* nothing, partition reaping (especially with mixed fragments) can be a long term effort. | 17:01 |
MooingLemur | it's as if something was eating the suffixes, reconstructor was removing the empty partition dir, but then some other host kept putting it back | 17:01 |
clayg | MooingLemur: I'm pretty sure what happens is that you get a bunch of frags in hashdir, then the auditor spins up a replicated diskfile and calls hash_cleanup_listdir on it which eats all the frags but the last lexicographically sorted one | 17:02 |
MooingLemur | ohhh.. the non-EC semantics | 17:03 |
clayg | I'm guessing we didn't notice it because a) it ignores durables b) it only happens with multi-frag c) we didn't allow multi-frag d) my patch allows it - but wasn't tested | 17:03 |
clayg | well... *YOU* tested it - and it sucked for you. so boo-berries on me for that; kudos to you tho | 17:04 |
clayg | assuming I'm right you saved someone a ton of heart-ache | 17:04 |
MooingLemur | :D | 17:04 |
clayg | well... I mean there's a non-zero chance we would have caught it testing/reviewing - but this is sorta better almost | 17:05 |
MooingLemur | auditor doesn't seem to log how many of those "purge obsolete object" ops it did | 17:06 |
MooingLemur | a pretty common thing on most clusters and probably not generally useful, but it would have been in this case because my cluster has had very few object replacements | 17:07 |
*** mwheckmann has quit IRC | 17:07 | |
*** nadeem has quit IRC | 17:08 | |
*** nadeem has joined #openstack-swift | 17:08 | |
*** annegentle has joined #openstack-swift | 17:10 | |
MooingLemur | clayg: I think you're on to something though, because for the objects that are bereft of all their fragments, it always seems to have fragment 11. | 17:10 |
MooingLemur | but I'm surprised it's not 9. I thought lexically 11 comes between 1 and 2. | 17:11 |
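A quick illustration of the ordering being puzzled over here, assuming EC fragment archives are named roughly like TIMESTAMP#FRAGINDEX.data (the timestamp below is invented). A cleanup that keeps only the lexicographically last name would keep fragment #9, not #11, which matches MooingLemur's surprise and hints that simple lexicographic pruning may not be the whole story:

```python
# Invented timestamp; only the '#<frag index>' suffix matters for the sort.
names = ['1432681200.00000#%d.data' % i for i in (1, 2, 9, 11)]

print(sorted(names))
# ['1432681200.00000#1.data',
#  '1432681200.00000#11.data',   <- '11' sorts between '1' and '2'
#  '1432681200.00000#2.data',
#  '1432681200.00000#9.data']    <- and '9' sorts last

# A replicated-style cleanup that keeps only the last name would keep:
print(sorted(names)[-1])  # 1432681200.00000#9.data
```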
*** annegentle has quit IRC | 17:13 | |
*** petertr7 is now known as petertr7_away | 17:16 | |
*** kutija has joined #openstack-swift | 17:16 | |
clayg | MooingLemur: so in my situation the auditor did eat them - but they just got moved to /srv/node1/sdb5/quarantined/objects-1/5b22a3128fa5035650369eac48c28589 | 17:16 |
clayg | MooingLemur: fwiw i like the throw out the ring and create a new one approach to finding edge cases in revert handling - nice | 17:18 |
MooingLemur | I don't have /srv/node/*/quarantined on any host | 17:18 |
clayg | MooingLemur: because BALEETED! (?) | 17:19 |
MooingLemur | I guess so. Why did your test quarantine them I wonder | 17:20 |
clayg | MooingLemur: good question... if it was hash_cleanup_listdir it would have just dropped them... | 17:20 |
clayg | MooingLemur: it says "Exception reading metadata" | 17:21 |
MooingLemur | hmm, I didn't have any of those.. the xattrs should have always been good | 17:22 |
MooingLemur | I'm running on ext4, but I doubt that would make that particular difference | 17:22 |
clayg | MooingLemur: well I don't expect the issue was xattrs - the other disks audit just fine - it was only the one hashdir with the multiple frags | 17:23 |
*** mwheckmann has joined #openstack-swift | 17:23 | |
MooingLemur | clayg: btw, throwing out the ring was out of wanting to know whether the overloading would be improved by setting the weights of all the target devices directly rather than the incremental raising of the weights from flat. (The answer was not by much) :) | 17:30 |
MooingLemur | the cluster has a mixture of drive sizes, so I was finding the best weighting to use most of their drives proportionally to their capacity. I just had to underweight the single largest drive. | 17:32 |
*** aix has quit IRC | 17:35 | |
*** acoles is now known as acoles_away | 17:36 | |
*** acoles_away is now known as acoles | 17:43 | |
*** acoles is now known as acoles_away | 17:43 | |
*** jordanP has quit IRC | 17:46 | |
*** chlong has quit IRC | 17:53 | |
clayg | wow, so how did I not know that mv will preserve xattrs by default and cp will not | 17:57 |
*** mlima has joined #openstack-swift | 17:57 | |
clayg | did I like know it once and then forget it? do i normally just use mv and assume cp would work? do i type cp -a out of muscle memory? | 17:57 |
clayg | MooingLemur: so with xattrs copied over correctly the auditor seems indifferent to the ec hashdir with multiple fragments | 18:00 |
MooingLemur | clayg: indifferent? As in it ignores it and leaves the fragments there? | 18:03 |
clayg | MooingLemur: yeah | 18:03 |
mlima | I did the deploy of swift following the kilo manual and only managed to make it work after restarting the RabbitMQ service. Does that make sense? | 18:04 |
clayg | mlima: lol, no | 18:06 |
clayg | mlima: swift doesn't use rabbit - neither does keystone actually (that I know of) | 18:06 |
clayg | mlima: so basically rabbit shouldn't have affected anything swift related - ceilometer maybe? | 18:07 |
clayg | acoles_away: and this is how I end up not reviewing your patch :'( | 18:07 |
mlima | my deploy use only swift and keystone | 18:07 |
mlima | however, I had to restart the RabbitMQ service to make it work | 18:08 |
MooingLemur | clayg: so, as I understand the flow | 18:10 |
mlima | I think the problem is not the swift or the keystone, but communication between them. the RabbitMQ has some interaction with them? | 18:10 |
MooingLemur | oops, mishit enter | 18:10 |
mlima | +clayg: I think the problem is not the swift or the keystone, but communication between them. the RabbitMQ has some interaction with them? | 18:10 |
*** setmason has quit IRC | 18:11 | |
MooingLemur | clayg: so, as I understand the flow of things that would reach hash_cleanup_listdir, even reconstructor.py calling its own get_hashes would reach it. I don't think it's necessarily the auditor. Even reconstructor could trigger this cleanup. | 18:11 |
MooingLemur | on its own | 18:11 |
clayg | mlima: neither swift nor keystone communicate via rabbit - and definitely not to each other - maybe the metering stuff - does either project have anything ceilometer related in the pipeline? | 18:11 |
clayg | MooingLemur: yeah but in all of those cases it's policy aware and they route to the *correct* hash_cleanup_listdir - i thought maybe the auditor was just too stupid - but so far it doesn't seem to be causing problems | 18:12 |
*** setmason has joined #openstack-swift | 18:13 | |
clayg | notmyname: whats the magic git am syntax that lets me apply a patch file with errors like I was cleaning up a merge conflict? | 18:13 |
*** mmcardle1 has quit IRC | 18:13 | |
mlima | +clayg: I use it [pipeline:main] pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo proxy-logging proxy-server | 18:14 |
clayg | mlima: seems pretty reasonable - nothing in there gunna talk to amqp | 18:15 |
clayg | notmyname: git apply -3 <patch> worked just fine - why did I have to say the -3 ? there weren't any conflicts or anything... just maybe line numbers moved around? | 18:17 |
mlima | +clayg: I opened a bug and a fix for the manual (https://review.openstack.org/#/c/185783/), however this was not well accepted | 18:17 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 18:18 |
*** geaaru has quit IRC | 18:20 | |
*** setmason has quit IRC | 18:21 | |
clayg | mlima: yup pretty strange, my devstack setup doesn't have rabbit running, when I point my standalone dev setups at a standalone keystone, i don't have rabbit running - it's just not related | 18:21 |
clayg | mlima: so I'm basically in the same camp as everyone else on that bug :\ | 18:22 |
clayg | mlima: it may be trivial to reproduce what you're seeing by following the guide (or maybe not, I'm not familiar with those docs) - but it doesn't make sense - there must be something else going on - seems like folks would rather leave the possible issue in docs than cargo cult something that makes no sense | 18:23 |
clayg | mlima: maybe you can create a smaller reproducible example of how keystone and swift are somehow giving the appearance of interacting with rabbit? | 18:24 |
*** fthiagogv has joined #openstack-swift | 18:26 | |
tdasilva | clayg: do you guys use swift-bench at all? or mostly ssbench? | 18:30 |
clayg | tdasilva: i use swift-bench some - ssbench doesn't support direct-to-object tests (that I know of) | 18:31 |
*** setmason has joined #openstack-swift | 18:31 | |
clayg | MooingLemur: I tried doing the blow away and recreate rings trick | 18:32 |
clayg | MooingLemur: all of my frag parts were basically in the wrong place after that - but with my primaries-must-eat-primary-frags patch applied after a service restart to pick up the new code basically everything slammed back to where it belonged on the first pass | 18:33 |
*** setmason has quit IRC | 18:34 | |
*** setmason has joined #openstack-swift | 18:34 | |
*** acampbell has joined #openstack-swift | 18:35 | |
*** acampbel11 has joined #openstack-swift | 18:36 | |
*** Fin1te has quit IRC | 18:37 | |
clayg | MooingLemur: so I just ran the experiment again - after a complete ring rebuild, during the subsequent reconstruction revert I definitely observed a node holding multiple fragments - but then it pushed off the misplaced one shortly after - no harm no foul | 18:45 |
*** Fin1te has joined #openstack-swift | 18:47 | |
clayg | auditor, updater, replicator - nothing seems to produce the observation of fragment loss | 18:52 |
dmsimard | I'm seeing swift-object-replicator eating a ton and a half of CPU on my storage nodes. Any way to tell if it's relevant for it to be taking so much resources ? Should I tone down the amount of workers or something ? | 18:53 |
clayg | dmsimard: how many cores? | 18:54 |
dmsimard | clayg: 16 cores | 18:54 |
*** silor1 has joined #openstack-swift | 18:54 | |
clayg | dmsimard: and like *eight* of them are pegged at 100%? | 18:55 |
*** silor has quit IRC | 18:56 | |
clayg | dmsimard: yeah concurrency is the only tunable I see - maybe run_pause | 18:56 |
notmyname | reminder that the meeting is NOT in 4 minutes. it's in 124 minutes (~2hours) | 18:56 |
dmsimard | clayg: The current concurrency is set to 16, it's eating around 500% CPU total | 18:56 |
clayg | dmsimard: that makes very little sense :D | 18:57 |
clayg | dmsimard: aside from the subprocess calls to rsync - the replicator isn't even multiprocess - it should all be on one core greenthreaded | 18:58 |
swifterdarrell | dmsimard: clayg: how many partitions do you have per disk, on avg? maybe your part power's really high compared to your disk count? | 18:58 |
swifterdarrell | clayg: dmsimard: maybe the %CPU is including forked-off children (e.g. the rsyncs or something?) | 18:59 |
dmsimard | clayg: What would be a sane concurrency for replicator ? | 18:59 |
*** cutforth has joined #openstack-swift | 19:01 | |
clayg | dmsimard: whatever doesn't use all your cpu? | 19:03 |
dmsimard | swifterdarrell: I'm looking at 3800ish partitions on avg, the cluster is indeed rather small | 19:03 |
dmsimard | 200 disks or so | 19:03 |
dmsimard | clayg: Any downsides to reducing the amount of concurrency ? | 19:04 |
clayg | dmsimard: less partitions replicated at once means longer replication cycle time | 19:06 |
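For reference, the knobs being discussed live in the [object-replicator] section of the object server config; the values below just show the "halve it" experiment dmsimard describes a few lines later and are not a recommendation:

```ini
[object-replicator]
# greenthread concurrency for replication jobs; dmsimard started at 16
concurrency = 8
# seconds to sleep between replication passes (historical default: 30)
run_pause = 30
```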
*** mmcardle has joined #openstack-swift | 19:07 | |
clayg | dmsimard: that part count looks reasonable - you're at what part power 17-18? | 19:08 |
clayg | maybe - is some of that cpu like disk wait? | 19:09 |
dmsimard | clayg: yeah, 18 | 19:10 |
dmsimard | clayg: Not getting much i/o wait, less than 5% on avg | 19:11 |
dmsimard | I just halved the concurrency of replicator, I'll monitor the impacts and see what happens | 19:12 |
dmsimard | On another note, I saw some interesting stuff at the summit in the HP Talk - putting the container and account databases right on the proxy nodes is a good idea :) | 19:13 |
clayg | MooingLemur: well... I can't reproduce and my theory about the auditor turned out to be false | 19:13 |
*** gyee has joined #openstack-swift | 19:16 | |
dmsimard | The replication time returned by swift-recon, I'm assuming the unit is seconds ? | 19:25 |
*** theanalyst has quit IRC | 19:27 | |
openstackgerrit | John Dickinson proposed openstack/swift: drop Python 2.6 testing support https://review.openstack.org/186137 | 19:29 |
*** tab__ has joined #openstack-swift | 19:29 | |
*** theanalyst has joined #openstack-swift | 19:30 | |
openstackgerrit | Merged openstack/swift: go: log 499 on client early disconnect https://review.openstack.org/183577 | 19:32 |
MooingLemur | clayg: makes me think it's a rarer race than I thought. I'm gonna audit all my objects in the EC containers. | 19:35 |
clayg | dmsimard: i'm not so sure - looks like units might be... minutes? | 19:35 |
MooingLemur | this will take a while :) | 19:35 |
MooingLemur | 5TB or so | 19:35 |
clayg | MooingLemur: is there any possibility in the process of redoing the rings that services got started with a swift.conf that might have thought the ec datadir was a replicated type policy? | 19:37 |
MooingLemur | clayg: I don't think so. The swift.conf is the same on all nodes, same md5, all modified May 14, and all servers have been restarted multiple times | 19:39 |
MooingLemur | I just checked | 19:39 |
clayg | MooingLemur: sigh | 19:40 |
MooingLemur | (restarted multiple times before the whole reshuffle that happened starting last Saturday) | 19:41 |
clayg | MooingLemur: do you have any other patches applied to 2.3.0? | 19:41 |
*** lpabon has joined #openstack-swift | 19:41 | |
MooingLemur | clayg: no, just that one, and only on the storage nodes. It was applied by user_patches as part of gentoo portage, and the patch applied cleanly, otherwise the installation would have bailed out. | 19:43 |
MooingLemur | clayg: I'm certainly willing to try again and reshuffle the entire ring to see what happens (after I either re-upload or remove the orphaned object fragments) | 19:47 |
MooingLemur | clayg: especially if you have an idea where I could put in some debug logging | 19:47 |
clayg | MooingLemur: I don't really have any good ideas :\ | 19:49 |
MooingLemur | clayg: I mean, I'd like to log the revert itself | 19:50 |
MooingLemur | so perhaps we can get some information from what doesn't get logged, more than what is | 19:50 |
swifterdarrell | dmsimard: at least one value is dealt with raw in minutes but you'd have to check the code to see if it's normalized to seconds somewhere (and which one it is--I don't remember off the top of my head) | 19:51 |
clayg | swifterdarrell: dmsimard: yeah sorry meant to say that I think the replication time in swift recon is minutes | 19:54 |
clayg | swifterdarrell: thanks | 19:54 |
*** james_li has joined #openstack-swift | 20:03 | |
james_li | Hi All, a quick question: is delete_object a sync call, or async? | 20:04 |
clayg | james_li: from the HTTP api perspective - the majority of servers will have written the tombstone and unlinked the older .data file before the client gets a response | 20:06 |
james_li | clayg: ok thanks. is the delay related to the object size, i.e. will deleting a larger object take longer than a smaller one? | 20:08 |
clayg | james_li: meh, maybe... it's probably more like how full your disk is | 20:10 |
zaitcev | with the exception of a cluster with object versioning enabled, in which case deleting bigger objects definitely takes longer | 20:15 |
james_li | clayg: yeah. I am implementing a feature in Solum which includes deleting large images from swift. I wasn't sure if I could do delete_object in the Solum API layer because I don't want our API to get blocked for a long time. So from your explanation I can see it's probably fine to do the deletion in the Solum API. | 20:18 |
*** esker has quit IRC | 20:19 | |
openstackgerrit | Samuel Merritt proposed openstack/swift: Allow SAIO to answer is_local_device() better https://review.openstack.org/183395 | 20:19 |
openstackgerrit | Samuel Merritt proposed openstack/swift: Allow one object-server per disk deployment https://review.openstack.org/184189 | 20:19 |
*** silor1 has quit IRC | 20:23 | |
torgomatic | james_li: when you say "delete_object", to what function, exactly, are you referring? | 20:24 |
james_li | torgomatic: swiftclient.client.delete_object | 20:25 |
torgomatic | james_li: okay, good... that looks as though it simply issues a single HTTP DELETE request, which means everything everyone said holds true | 20:26 |
james_li | torgomatic: thanks for clarification :) | 20:28 |
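For completeness, the call being discussed is the module-level helper in python-swiftclient; the URL, token, and names below are placeholders. It blocks until the proxy answers the single HTTP DELETE (or raises ClientException), so per the discussion above the latency tracks the tombstone write rather than the object size:

```python
from swiftclient.client import delete_object

# Placeholder endpoint, token, container, and object name.
delete_object('https://swift.example.com/v1/AUTH_solum',
              token='AUTH_tk0123456789abcdef',
              container='images', name='large-image.qcow2')
```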
*** mlima has quit IRC | 20:32 | |
*** lpabon has quit IRC | 20:32 | |
*** zhill has quit IRC | 20:33 | |
*** zhill has joined #openstack-swift | 20:33 | |
*** robefran_ has quit IRC | 20:34 | |
*** barra204 has joined #openstack-swift | 20:35 | |
*** annegentle has joined #openstack-swift | 20:37 | |
*** acampbel11 has quit IRC | 20:37 | |
*** rdaly2 has joined #openstack-swift | 20:37 | |
*** acampbell has quit IRC | 20:37 | |
peluse | MooingLemur, I see a looong conversation with clayg wrt EC. Any chance you can post a quick summary of problem and discussions to date? | 20:43 |
MooingLemur | peluse: sure. | 20:43 |
*** mcnully has joined #openstack-swift | 20:45 | |
*** redbo has quit IRC | 20:45 | |
*** redbo has joined #openstack-swift | 20:45 | |
*** ChanServ sets mode: +v redbo | 20:45 | |
*** esker has joined #openstack-swift | 20:46 | |
MooingLemur | peluse: Last week, I had uploaded a large amount of data to a stable cluster (5 hosts, 20 devices) with a couple of EC policy (9 data, 3 parity) containers. I had adjusted weights and ran into issues with fragments unable to be pushed to primary nodes which was solved by clayg's patch that forced primaries to always accept their own fragments even if a different fragment of the same object existed there. Over the weekend, I ... | 20:47 |
MooingLemur | ... replaced some devices (that were zero-weight), and then physically moved others around between hosts. I also ended up recreating the ring with the same part power in an effort to determine whether the balance/dispersion would be better on a fresh ring than an organically grown/changed one. Things seemed to revert okay at first, but after a couple days (this morning) I noticed that it was still having trouble finding some ... | 20:47 |
MooingLemur | ... fragments. | 20:47 |
MooingLemur | peluse: I did an ls for the data on all devices in the cluster, and found that many fragments were missing on the objects that the logging was complaining about. Something ate the fragments of some of the objects. | 20:48 |
*** ho has joined #openstack-swift | 20:50 | |
*** barra204 has quit IRC | 20:50 | |
peluse | wow, OK that's a few steps allright :) | 20:50 |
ho | good morning! | 20:50 |
MooingLemur | peluse: clayg thought it may have been the auditor using non-EC semantics which ended up reaching hash_cleanup_listdir without a fragment index, and lexically pruning out everything but the highest, but he found auditor wasn't to blame on his tests, and he was unable to reproduce. | 20:51 |
peluse | MooingLemur, so it sounds like you applied clayg's fix after some failures had already occurred, is that the case? | 20:51 |
MooingLemur | peluse: I applied his patch after replication got stuck, on a test I was doing last week after moving some weights. Everything ended up happy after a day or so there. | 20:51 |
MooingLemur | s/replication/reconstruction/ | 20:52 |
*** mmcardle has quit IRC | 20:53 | |
MooingLemur | peluse: at this point, I've added logging to the delete after revert, and after I audit and re-upload the broken objects, I'm going to scramble the ring again and see if it happens again. | 20:53 |
*** kota_ has joined #openstack-swift | 20:53 | |
peluse | MooingLemur, yeah, I'm just trying to think about how to break it down into a test that we can repro step by step | 20:53 |
peluse | MooingLemur, that would be good if you can be 100% sure that everything is fine and then it sounds like you're saying a simple rebalance causes from frags to go missing? | 20:54 |
peluse | s/from/some | 20:55 |
MooingLemur | peluse: the simple rebalance didn't seem to make frags go missing. I think it was the ring re-creation | 20:55 |
MooingLemur | where perhaps none of the frags were in the right place | 20:55 |
peluse | maybe you can define 'scramble the ring' more for me in super clear terms? | 20:55 |
MooingLemur | I had organically changed the original ring at first, raising weights, adding devices, rebalancing, modifying weights. | 20:56 |
MooingLemur | but then I renamed that old ring, re-created it from scratch with the final weights where they were supposed to be, then rebalanced | 20:56 |
peluse | ahh, scrambling :) | 20:56 |
*** slavisa has joined #openstack-swift | 20:57 | |
MooingLemur | so I suspect it was a completely different layout, so basically all data would end up being moved | 20:57 |
*** rdaly2 has quit IRC | 20:57 | |
peluse | if you can narrow it down, like it sounds like you're working on, to a ring change and what exact actions were taken between the two states of the ring that would obviously be very good data | 20:58 |
peluse | but I didn't read the whole backlog so maybe you guys already came to that conclusion... | 20:58 |
mattoliverau | Morning | 20:58 |
openstackgerrit | Merged openstack/swift: drop Python 2.6 testing support https://review.openstack.org/186137 | 20:59 |
notmyname | swift meeting in 1 minute | 20:59 |
*** ryshah has joined #openstack-swift | 20:59 | |
peluse | yo mattoliverau | 20:59 |
notmyname | mattoliverau: not too early I hope ;-) | 20:59 |
mattoliverau | much better! | 20:59 |
peluse | MooingLemur, I have to run after the swift mtg to a sixth grade graduation but would love to help with this later if you'll be around early evening? | 20:59 |
MooingLemur | peluse: I think I'll be available. I'm in UTC-7, and other than commute, I'll be on | 21:00 |
MooingLemur | thanks :) | 21:01 |
*** acoles_away is now known as acoles | 21:01 | |
*** esker has quit IRC | 21:01 | |
slavisa | how do I participate in the swift meeting? is it only irc or is there another/additional way of communication? | 21:02 |
notmyname | slavisa: it's in irc in #openstack-meeting | 21:03 |
notmyname | weekly at 2100utc on wednesdays in that channel | 21:03 |
slavisa | thx | 21:03 |
*** slavisa has quit IRC | 21:04 | |
*** cutforth has quit IRC | 21:04 | |
*** slavisa has joined #openstack-swift | 21:04 | |
openstackgerrit | Samuel Merritt proposed openstack/swift: Remove simplejson from swift-recon https://review.openstack.org/186169 | 21:06 |
openstackgerrit | Samuel Merritt proposed openstack/swift: Remove simplejson from staticweb https://review.openstack.org/186170 | 21:06 |
*** bkopilov_wfh has quit IRC | 21:13 | |
*** Fin1te has quit IRC | 21:14 | |
*** tab__ has quit IRC | 21:18 | |
*** mandarine has quit IRC | 21:18 | |
*** bkopilov has joined #openstack-swift | 21:23 | |
*** bkopilov has quit IRC | 21:31 | |
*** ryshah has quit IRC | 21:33 | |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift: Fix FakeSwift to simulate SLO https://review.openstack.org/185940 | 21:39 |
*** fthiagogv has quit IRC | 21:43 | |
*** jkugel has left #openstack-swift | 21:47 | |
*** bkopilov has joined #openstack-swift | 21:49 | |
clayg | nice | 21:55 |
notmyname | as i get the wiki pages/LP updated, I'll share those | 21:55 |
openstackgerrit | Kota Tsuyuzaki proposed openstack/swift: Fix FakeSwift to simulate SLO https://review.openstack.org/185940 | 21:55 |
notmyname | acoles: jrichli: also, later today or tomorrow, I'll get the crypto branch onto the review dashboard | 21:55 |
notmyname | ok, gotta step out for a few minutes | 21:56 |
MooingLemur | I vaguely remember reading something to the effect that for EC policies, X wasn't yet implemented. But I cannot find nor remember what this X was. Maybe it was recovering from object node failures mid-download? | 21:56 |
*** kota_ has quit IRC | 21:57 | |
acoles | notmyname: thx | 21:57 |
jrichli | notmyname: thx! | 21:57 |
*** slavisa has left #openstack-swift | 21:59 | |
torgomatic | MooingLemur: multi-range GET requests | 22:01 |
*** acoles is now known as acoles_away | 22:01 | |
torgomatic | possibly also failures mid-download, but I think that one's in there already | 22:01 |
MooingLemur | torgomatic: aha, that's what it was.. | 22:03 |
*** joeljwright has left #openstack-swift | 22:03 | |
MooingLemur | multi-range | 22:03 |
MooingLemur | thanks :) | 22:04 |
torgomatic | MooingLemur: it's at https://review.openstack.org/#/c/173497/ if you feel like banging on the code a bit | 22:04 |
ho | torgomatic: is the patch #173497 successor of #166576? | 22:10 |
torgomatic | ho: yes, it's the same one but proposed to master instead of feature/ec | 22:10 |
torgomatic | probably also the odd update here and there, but nothing major | 22:10 |
ho | torgomatic: i see. I can continue my review for it :-) | 22:12 |
openstackgerrit | Merged openstack/swift: go: check error returns part 1 https://review.openstack.org/183605 | 22:16 |
*** annegentle has quit IRC | 22:20 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 22:22 |
*** annegentle has joined #openstack-swift | 22:24 | |
* notmyname back | 22:28 | |
notmyname | mattoliverau: clayg: torgomatic: acoles_away: cschwede: when you open the gerrit dashboard in your web browser, is that normally on a large screen or on your laptop screen? | 22:41 |
clayg | notmyname: normally on my laptop i guess | 22:42 |
notmyname | ok | 22:43 |
mattoliverau | About 50/50. If at my desk it's a large screen; if cafe hacking, on the laptop. Or maybe 60/40 big screen to laptop. | 22:43 |
torgomatic | notmyname: yep, same here | 22:43 |
torgomatic | my large screen is vertical anyway, so my laptop screen is the widest one I have | 22:43 |
notmyname | ok | 22:45 |
notmyname | I'm actually wondering about vertical space | 22:45 |
notmyname | for the review dashboards. ie what will you see and what will you not see because of needing to scroll | 22:45 |
*** james_li has quit IRC | 22:45 | |
*** annegentle has quit IRC | 22:46 | |
*** kutija has quit IRC | 22:49 | |
*** km has joined #openstack-swift | 22:50 | |
notmyname | tdasilva: sorry, I didn't get to the py3 patches in the meeting. I just realized that | 22:51 |
notmyname | I do want to bring it up next week | 22:51 |
* notmyname goes and adds it to the agenda | 22:51 | |
tdasilva | notmyname: no worries :) | 22:51 |
tdasilva | notmyname: wondering if it would be possible to add obj. versioning and copy middleware to priority reviews | 22:52 |
tdasilva | would like to get those done and out of the way | 22:52 |
notmyname | tdasilva: yes, that needs to be added back again. they dropped off with the ec work | 22:52 |
notmyname | tdasilva: ok, I starred them. can you verify? | 22:53 |
tdasilva | got it, thanks! | 22:54 |
*** torgomatic has quit IRC | 22:54 | |
tdasilva | noticed in the etherpad that some other features have a dependency on copy middleware, so that should help too :) | 22:54 |
tdasilva | summit etherpad | 22:54 |
*** jrichli has quit IRC | 23:03 | |
*** chlong has joined #openstack-swift | 23:05 | |
notmyname | ok, I've got some new dashboards. working on the shortlinks and I'll get the channel topic and wiki updated | 23:06 |
*** ChanServ changes topic to "New meeting time 2100UTC Wednesdays: https://wiki.openstack.org/wiki/Meetings/Swift | Review Dashboard: https://goo.gl/ktob5x | Project Overview: https://goo.gl/jTYWgo | Logs: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/" | 23:08 | |
notmyname | ok, channel topic and wiki page updated. new dashboard, including crypto work, is up. also slightly updated the "recently proposed" and "older open patches" sections. should include more now, I think | 23:13 |
*** wbhuber has quit IRC | 23:18 | |
*** kei_yama has joined #openstack-swift | 23:24 | |
notmyname | as always, please let me know what's broken about the review dashboards and what you'd like to see made better | 23:27 |
sweeper | notmyname: how about a newrelic plugin? :3 | 23:58 |
notmyname | I've never used newrelic. what would that give us? | 23:59 |
sweeper | I could mail you some homebrew apple cider? | 23:59 |