*** doxavore has joined #openstack-swift | 00:02 | |
*** dmorita has joined #openstack-swift | 00:33 | |
*** kota_ has joined #openstack-swift | 00:46 | |
kota_ | morning | 00:47 |
mattoliverau | kota_: morning! | 00:56 |
kota_ | mattoliverau: good morning! this week is the last one we can spend until OpenStack Summit Vancouver :P | 00:57 |
mattoliverau | kota_: yup, which is why I've been frantically working on sharding so we can discuss it.. I think I finally have a version of the SPEC that relates to the current status of the POC (minus what I've done on the weekend). | 00:58 |
mattoliverau | kota_: when are you flying out? Saturday? | 00:59 |
kota_ | mattoliverau: from vancouver? scheduled Saturday. | 01:00 |
kota_ | mattoliverau: you want the discussion | 01:01 |
kota_ | whoops | 01:01 |
kota_ | the weekend after summit? | 01:02 |
mattoliverau | kota_: lol, I see my english failed there, I meant fly in.. but fly out is just as interesting ;) | 01:04 |
kota_ | mattoliverau: lol, I'll arrive on Sunday morning :) | 01:06 |
mattoliverau | kota_: sharding discussions can happen at summit, I'm just frantically doing work on it, so there is something to discuss | 01:06 |
mattoliverau | kota_: lol, you'll be nicely jetlagged for Monday ;) | 01:06 |
mattoliverau | I arrive Sat night | 01:06 |
kota_ | mattoliverau: absolutely, I'm a bit worried whether I can talk well :( | 01:07 |
kota_ | mattoliverau: I guess you've been working hard on the sharding stuff, so I'm looking forward to discussing it at the summit ;) | 01:09 |
mattoliverau | kota_: actually you might be ok Monday, as you'll sleep all Sunday night due to exhaustion from flying all the way and forcing yourself to stay up all Sunday long.. it'll be Tuesday when the jetlag stops you sleeping ;P | 01:09 |
kota_ | mattoliverau: exactly! | 01:10 |
mattoliverau | kota_: we'll see, I have the sharding "working"; was hoping to get time to do some benchmarks.. but this week will be busy so I'll see how far I get with that :) | 01:10 |
kota_ | mattoliverau: :) | 01:15 |
*** jrichli_ has joined #openstack-swift | 01:20 | |
*** orzel has quit IRC | 01:21 | |
*** jrichli has quit IRC | 01:22 | |
*** jrichli__ has joined #openstack-swift | 01:58 | |
*** proteusguy has joined #openstack-swift | 01:59 | |
*** jrichli_ has quit IRC | 02:01 | |
*** ho has quit IRC | 02:04 | |
*** kota_ has quit IRC | 02:06 | |
*** ho has joined #openstack-swift | 02:06 | |
*** doxavore has quit IRC | 02:06 | |
*** proteusguy has quit IRC | 02:15 | |
*** jrichli__ has quit IRC | 02:32 | |
*** NM has joined #openstack-swift | 02:35 | |
*** NM has quit IRC | 02:43 | |
*** NM has joined #openstack-swift | 02:44 | |
*** kota_ has joined #openstack-swift | 02:57 | |
*** NM has quit IRC | 03:02 | |
*** tobe has joined #openstack-swift | 03:20 | |
*** NM has joined #openstack-swift | 03:23 | |
*** NM has quit IRC | 03:24 | |
*** ho_ has joined #openstack-swift | 03:26 | |
*** ho has quit IRC | 03:27 | |
*** ho_ has quit IRC | 03:28 | |
*** kota_ has quit IRC | 04:08 | |
*** ppai has joined #openstack-swift | 04:08 | |
*** vinsh has joined #openstack-swift | 04:10 | |
*** ho has joined #openstack-swift | 04:10 | |
*** vinsh has quit IRC | 04:21 | |
*** kota_ has joined #openstack-swift | 04:39 | |
*** tamizh_geek has joined #openstack-swift | 04:43 | |
*** kei_yama has quit IRC | 04:53 | |
*** kei_yama has joined #openstack-swift | 04:55 | |
*** tobe has quit IRC | 05:03 | |
*** chlong has quit IRC | 05:09 | |
*** silor has joined #openstack-swift | 05:27 | |
*** rmcall has quit IRC | 06:00 | |
*** kota_ has quit IRC | 06:09 | |
*** rmcall has joined #openstack-swift | 06:14 | |
openstackgerrit | Matthew Oliver proposed openstack/swift-specs: Large Containers - container sharding spec https://review.openstack.org/139921 | 06:17 |
*** dmorita has quit IRC | 06:23 | |
*** dmorita has joined #openstack-swift | 06:23 | |
*** ppai has quit IRC | 06:29 | |
*** jlmendezbonini has joined #openstack-swift | 06:55 | |
*** jlmendezbonini has left #openstack-swift | 06:58 | |
*** vinsh has joined #openstack-swift | 07:22 | |
*** vinsh has quit IRC | 07:26 | |
*** ppai has joined #openstack-swift | 07:31 | |
*** tobe has joined #openstack-swift | 07:42 | |
*** geaaru has joined #openstack-swift | 07:44 | |
*** rmcall has quit IRC | 07:48 | |
*** rmcall has joined #openstack-swift | 07:52 | |
*** fifieldt has joined #openstack-swift | 07:57 | |
*** fifieldt has quit IRC | 07:58 | |
*** jistr has joined #openstack-swift | 07:58 | |
*** jordanP has joined #openstack-swift | 08:11 | |
*** acoles_away is now known as acoles | 08:55 | |
*** geaaru has quit IRC | 09:07 | |
openstackgerrit | Emmanuel Cazenave proposed openstack/swift: X-Auth-Token should be a bytestring. https://review.openstack.org/181834 | 09:07 |
openstackgerrit | Emmanuel Cazenave proposed openstack/swift: X-Auth-Token should be a bytestring. https://review.openstack.org/181836 | 09:10 |
*** aix has joined #openstack-swift | 09:20 | |
*** knl has joined #openstack-swift | 09:21 | |
*** haomaiwa_ has quit IRC | 09:31 | |
*** xnox has quit IRC | 09:42 | |
*** xnox has joined #openstack-swift | 09:48 | |
*** geaaru has joined #openstack-swift | 10:19 | |
*** ho has quit IRC | 10:47 | |
*** dmorita has quit IRC | 10:51 | |
*** zhill has quit IRC | 11:07 | |
*** EmilienM|afk is now known as EmilienM | 11:28 | |
*** zul has quit IRC | 11:37 | |
*** zul has joined #openstack-swift | 11:38 | |
*** tobe has quit IRC | 11:53 | |
*** tab___ has joined #openstack-swift | 11:55 | |
tab___ | Following the installation manual for Swift and Keystone (http://docs.openstack.org/juno/install-guide/install/yum/content/ch_swift.html): are generated ssh keys needed for rsync in a multi-node deployment or not? The manual does not cover it within the Swift chapter, but it does mention it when creating the nova network, which I do not need. | 11:57 |
tab___ | calling the command for syncing a local and remote dir by hand, it asks for a password: rsync -a dir1 root@swift-paco2:/home/dir1 | 11:58 |
ctennis | where does it describe the ssh key setup? | 12:00 |
ctennis | ah I see | 12:01 |
ctennis | it doesn't mention it, and you're wondering if it's required | 12:01 |
ctennis | no, it's not required. typically folks use the rsync daemon, which listens on port 873 and doesn't use or require ssh | 12:02 |
*** ppai has quit IRC | 12:03 | |
*** ppai has joined #openstack-swift | 12:04 | |
tab___ | but if I call the command by hand it wants a password | 12:05 |
tab___ | so rsync by itself should work just fine without keys? | 12:05 |
tab___ | ok | 12:05 |
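For readers following along: what ctennis describes is rsync in daemon mode, which Swift's replicators use instead of ssh. A minimal sketch of an /etc/rsyncd.conf on a storage node, with illustrative paths and module name (the SAIO and multi-node install docs have the canonical version):

```
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid

[object]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/object.lock
```

With the daemon listening on port 873, replication uses rsync module syntax (e.g. `rsync -a dir1 swift-paco2::object/dir1`) rather than `user@host:` ssh syntax, so no keys or passwords are involved.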
*** ppai has quit IRC | 12:09 | |
*** esker has quit IRC | 12:14 | |
*** ppai has joined #openstack-swift | 12:24 | |
*** km has quit IRC | 12:27 | |
*** ppai has quit IRC | 12:32 | |
*** haomaiwa_ has joined #openstack-swift | 12:43 | |
*** early has quit IRC | 12:56 | |
*** early has joined #openstack-swift | 12:57 | |
*** fthiagogv has joined #openstack-swift | 12:59 | |
*** NM has joined #openstack-swift | 12:59 | |
*** haomaiwa_ has quit IRC | 13:04 | |
*** kei_yama has quit IRC | 13:07 | |
*** fifieldt has joined #openstack-swift | 13:17 | |
*** fifieldt has quit IRC | 13:17 | |
*** chlong has joined #openstack-swift | 13:22 | |
*** Gue______ has joined #openstack-swift | 13:30 | |
jordanP | Hi guys. We're running a third party CI, with a third party DiskFile, and we specifically test python 2.6 support. We'd appreciate support for merging these backports to kilo and juno: https://review.openstack.org/#/c/181836/ and https://review.openstack.org/#/c/181834/ Those cherry-picks are one-line changes. Thanks a lot ! | 13:32 |
*** esker has joined #openstack-swift | 13:32 | |
acoles | jordanP: added my +1, I don't seem to be able to +2 backports | 13:34 |
eikke | acoles: thanks! | 13:35 |
*** jkugel has joined #openstack-swift | 13:35 | |
acoles | eikke: np | 13:35 |
*** Gue______ has quit IRC | 13:41 | |
*** Gue______ has joined #openstack-swift | 13:53 | |
*** wbhuber has joined #openstack-swift | 14:03 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/python-swiftclient: Updated from global requirements https://review.openstack.org/89250 | 14:04 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 14:04 |
*** bkopilov has quit IRC | 14:05 | |
*** openstackgerrit has quit IRC | 14:06 | |
*** openstackgerrit has joined #openstack-swift | 14:06 | |
tdasilva | good morning | 14:17 |
*** openstack has joined #openstack-swift | 14:20 | |
*** minwoob has joined #openstack-swift | 14:24 | |
*** breitz has joined #openstack-swift | 14:28 | |
*** annegentle has joined #openstack-swift | 14:40 | |
*** vinsh has joined #openstack-swift | 14:53 | |
*** knl has quit IRC | 14:54 | |
*** Gue______ has quit IRC | 15:01 | |
wbhuber | notmyname: What are the hardware specs for the community QA cluster? | 15:03 |
wbhuber | (To test / analyze the performance for EC) | 15:04 |
acoles | tdasilva: you're back! hows things? | 15:07 |
tdasilva | acoles: hey! everything is going well :) getting used to the new life | 15:08 |
acoles | great! good to have you back around | 15:11 |
tdasilva | acoles: how are things going here? Seems like you guys have been very busy even after EC landed. How's preparation for vancouver? | 15:12 |
*** aix has quit IRC | 15:12 | |
acoles | tdasilva: looks like vancouver will be a busy and interesting summit for swift - lots on the agenda | 15:15 |
tdasilva | acoles: yes, there are many presentations and it seems like the design summit discussions are very interesting too | 15:16 |
tdasilva | acoles: been trying to follow the hummingbird emails discussions, would like to start looking at code soon...sounds very cool | 15:17 |
acoles | tdasilva: yup | 15:18 |
*** fifieldt has joined #openstack-swift | 15:22 | |
*** lpabon has joined #openstack-swift | 15:23 | |
*** fifieldt has quit IRC | 15:23 | |
*** lpabon has quit IRC | 15:24 | |
*** Gue______ has joined #openstack-swift | 15:26 | |
*** bkopilov has joined #openstack-swift | 15:27 | |
openstackgerrit | Louis Taylor proposed openstack/python-swiftclient: POC: Service token support https://review.openstack.org/181936 | 15:32 |
*** annegentle has quit IRC | 15:37 | |
*** annegentle has joined #openstack-swift | 15:38 | |
*** acoles is now known as acoles_away | 15:40 | |
*** gyee has joined #openstack-swift | 15:41 | |
*** dencaval has joined #openstack-swift | 15:48 | |
*** annegentle has quit IRC | 15:59 | |
*** annegentle has joined #openstack-swift | 16:00 | |
*** annegentle has quit IRC | 16:06 | |
*** Gue______ has quit IRC | 16:14 | |
*** vinsh_ has joined #openstack-swift | 16:15 | |
*** jistr has quit IRC | 16:17 | |
*** G________ has joined #openstack-swift | 16:17 | |
*** vinsh has quit IRC | 16:18 | |
*** bkopilov has quit IRC | 16:29 | |
*** rmcall has quit IRC | 16:31 | |
*** rmcall has joined #openstack-swift | 16:32 | |
*** G________ has quit IRC | 16:32 | |
notmyname | good morning | 16:38 |
notmyname | tdasilva: welcome back! | 16:38 |
*** Guest___ has joined #openstack-swift | 16:39 | |
notmyname | peluse: is tsg around today? | 16:40 |
*** bkopilov has joined #openstack-swift | 16:43 | |
*** esker has quit IRC | 16:46 | |
*** nadeem has joined #openstack-swift | 16:46 | |
*** Guest___ has quit IRC | 16:49 | |
*** annegentle has joined #openstack-swift | 17:05 | |
*** nadeem has quit IRC | 17:09 | |
peluse | checking... | 17:12 |
peluse | notmyname, he is in an offsite training class today. I can text him if you need something? | 17:12 |
tdasilva | notmyname: hi! thanks! good to be back | 17:13 |
notmyname | peluse: ah thanks. | 17:19 |
notmyname | peluse: nothing extraordinarily urgent. I've seen a lot of people have trouble getting pyeclib installed: the differences between the pypi version vs downloading/packaging the source | 17:20 |
notmyname | peluse: I was hoping he could write up a couple of paragraphs about it, being explicit in what needs to be done depending on the source, and add it to the EC overview doc | 17:21 |
notmyname | of course, he doesn't have to be the one to do that, but I was guessing he'd be the one who knows the most | 17:21 |
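As a hint at what such a write-up might cover, a heavily hedged sketch of the PyPI route; the distro package name below is an assumption and will differ per platform:

```sh
# pyeclib's C dependency (liberasurecode) must be present before
# installing from PyPI; building from source has its own steps.
sudo apt-get install liberasurecode-dev   # assumed Debian/Ubuntu package name
pip install pyeclib
```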
*** haomaiwa_ has joined #openstack-swift | 17:22 | |
*** tamizh_geek has quit IRC | 17:31 | |
peluse | notmyname, OK I'll shoot him an email | 17:31 |
*** tamizh_geek has joined #openstack-swift | 17:32 | |
notmyname | peluse: mind if I answer your EC perf email question in her? | 17:36 |
notmyname | *here | 17:36 |
peluse | no prob | 17:36 |
notmyname | the question was "are you hitting line rate, or are you cpu limited before you get there?" | 17:37 |
peluse | yes, at the storage node specifically | 17:37 |
notmyname | the QA cluster is all-in-one PACO nodes. I've got a graph of CPU/process group | 17:38 |
notmyname | so that helps, but it is fuzzed since they are PACO nodes | 17:38 |
peluse | sure | 17:39 |
notmyname | in general, I started with a pretty high test load and got a lot of errors. a couple of local patches later and after significantly reducing the concurrency, we're running with no errors | 17:39 |
peluse | we noticed that at line rate the CPU pinned, and tried shutting down all non-essential services and saw no drop in CPU, which seemed a little surprising and, of course, has us questioning the ability to do EC reconstruction under heavy load | 17:39 |
-openstackstatus- NOTICE: We have discovered post-upgrade issues with Gerrit affecting nova (and potentially other projects). Some changes will not appear and some actions, such as queries, may return an error. We are continuing to investigate. | 17:40 | |
*** ChanServ changes topic to "We have discovered post-upgrade issues with Gerrit affecting nova (and potentially other projects). Some changes will not appear and some actions, such as queries, may return an error. We are continuing to investigate." | 17:40 | |
notmyname | so I say that to make it clear that I'm only looking for relative performance over replication instead of "what's possible with EC performance" | 17:40 |
notmyname | but... | 17:40 |
notmyname | yes. CPU usage is high | 17:40 |
peluse | yeah, our test was with 3x | 17:40 |
peluse | looking to see how much CPU we would have left at the SN running at line rate | 17:40 |
notmyname | I'm not sure yet if it's CPU-limited, but it is high, and I suspect (but don't have numbers on) that it's the REPLICATE verb requests (i.e. socket connections and md5 of dir listings) | 17:41 |
notmyname | at least, that would be in line with what I've seen before | 17:41 |
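To make "md5 of dir listings" concrete, here is a toy Python sketch of the idea behind the REPLICATE check (not Swift's actual implementation, which hashes per-suffix hash dirs): each node reduces a directory to a digest so peers can compare state without shipping file contents.

```python
import hashlib
import os

def hash_listing(suffix_dir):
    # Hash the sorted entry names under a suffix directory. Two nodes
    # with identical listings get identical digests, so a full rsync is
    # only needed for suffixes whose digests differ.
    md5 = hashlib.md5()
    for name in sorted(os.listdir(suffix_dir)):
        md5.update(name.encode('utf-8'))
    return md5.hexdigest()
```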
notmyname | completely unfiltered raw results from a recent run: https://gist.githubusercontent.com/charz/77dd1c3f73b228bdd6a1/raw/393e72ce70b24f960f61bfce4608b578d1767136/gistfile1.txt | 17:42 |
peluse | I'd have to check w/Bill on our end, but I assume he turned off the replicator since I asked him to shut down all non-essential services | 17:42 |
peluse | in your output, what specifically does concurrency map to? | 17:43 |
peluse | and, also, do you have similar data for 3x repl? | 17:43 |
*** annegentle has quit IRC | 17:43 | |
notmyname | concurrency is from the ssbench config. | 17:44 |
notmyname | and yes, this is repl and ec. note the policy column | 17:44 |
peluse | is that basically number of connections? or, like, number of workers and each has some number of connections? | 17:44 |
notmyname | total connections, not per worker. IIRC there are 8 workers serving those | 17:44 |
peluse | oh, duh on the policy column, wasn't scrolling down far enough :) | 17:44 |
notmyname | so yes, it's super low. but that's ok for relative tests | 17:44 |
*** geaaru has quit IRC | 17:45 | |
peluse | do you have net/cpu/mem util that you can correlate with the ssbench output? | 17:45 |
notmyname | for the 60 drives in the cluster, there are 20 for EC, 20 for repl, and 20 that are for both (4 per server). that's what the "isolation" vs "shared" means | 17:45 |
notmyname | yeah, it's in the swiftstack controller data. I haven't actually extracted that for the run. but I also have all the logs and configs and rings for those | 17:46 |
*** fresh has joined #openstack-swift | 17:46 | |
notmyname | I'm going to talk to charz about getting the time-series data from the controller this week. and whatever we have will be available next week at the summit | 17:46 |
notmyname | you can see what the file sizes are (ie what the ssbench scenarios are) in this pull request: https://github.com/swiftstack/ssbench/pull/107/files | 17:47 |
peluse | that was going to be my next question :) | 17:48 |
notmyname | super tiny ones (a few bytes) to multi-GB ones | 17:48 |
notmyname | basically, it's a few brackets that, I hope, will show a point at which EC and replication become "interesting" | 17:50 |
notmyname | and this is a 6+4 scheme with ISA-L | 17:50 |
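For context, a 6+4 ISA-L policy is declared in swift.conf along these lines (the policy index, name, and segment size here are illustrative; see the EC overview doc for the supported ec_type values):

```
[storage-policy:1]
name = ec64
policy_type = erasure_coding
ec_type = isa_l_rs_vand
ec_num_data_fragments = 6
ec_num_parity_fragments = 4
ec_object_segment_size = 1048576
```

Storage overhead for 6+4 is (6+4)/6 ≈ 1.67x the object size, versus 3.0x for triple replication, which is part of what makes the relative-performance comparison interesting.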
peluse | yeah, when caleb comes out we'll have lots of comparison-analysis to do wrt test setup, methodology, results to date, settings, etc. Will be great to spend time on this at the summit to better prepare for that trip | 17:51 |
notmyname | right | 17:51 |
peluse | and hopefully get some more folks engaged as well, I know kota showed some interest | 17:51 |
peluse | and mark seger has emailed me as well but he won't be at the summit. he wants to get involved though once we have our feet under us wrt what we're testing, how, what we're seeing | 17:52 |
*** jordanP has quit IRC | 17:52 | |
notmyname | hmm | 17:53 |
notmyname | I'm just looking at those results for the first time myself | 17:53 |
notmyname | looks like the current inflection point is in the "small" tests. at that point, or right after, EC is faster than replication | 17:53 |
notmyname | small == 1MB -> 5MB objects | 17:54 |
*** esker has joined #openstack-swift | 17:56 | |
peluse | maybe throw those results in an etherpad or something so we can put markers in interesting places. not sure which rows/cols are grabbing your attention | 17:57 |
notmyname | looking at req/sec | 17:57 |
notmyname | I want to graph them and see what pops out | 17:57 |
notmyname | depending on the size, I want to bundle up some of the data and make a public link to it | 17:58 |
notmyname | probably not the logs, since those are multi-GB per run | 17:58 |
peluse | yeah, better idea | 17:58 |
notmyname | anyway, that's an interesting way to start monday, but I've got to move on to a couple of other things (defcore patches and summit talk prep) | 17:59 |
peluse | ditto | 17:59 |
*** vinsh has joined #openstack-swift | 18:08 | |
*** vinsh_ has quit IRC | 18:11 | |
*** annegentle has joined #openstack-swift | 18:12 | |
wbhuber | Sorry if I missed anything, but what is a miniscule test as opposed to the tiny & small tests? | 18:16 |
notmyname | wbhuber: check the pull request link above. mostly, miniscule < small | 18:16 |
notmyname | (I had already used "small" and needed something smaller) | 18:17 |
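For anyone unfamiliar with ssbench, those size brackets live in a scenario file; a hedged sketch of one (the bracket boundaries, counts, and crud mix here are invented, the real ones are in the pull request linked above):

```json
{
  "name": "illustrative scenario",
  "sizes": [
    {"name": "miniscule", "size_min": 10, "size_max": 1024},
    {"name": "small", "size_min": 1000000, "size_max": 5000000}
  ],
  "initial_files": {"miniscule": 100, "small": 100},
  "operation_count": 5000,
  "crud_profile": [3, 4, 2, 2],
  "user_count": 8
}
```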
clayg | did everyone have a good weekend? | 18:17 |
notmyname | clayg: yup. except when I had to take apart my washing machine. but actually that's kinda fun too, since you get to play with tools | 18:19 |
notmyname | wbhuber: also, to your earlier question, the community QA cluster is 5 servers, 12 drives each. drives are donated by HGST. there are 4 servers with 6TB drives and 1 server with 8TB drives. the CPUs are donated by Intel and they are Avoton chips (SoC with 8 cores) and 8GB of RAM | 18:19 |
notmyname | clayg: my oldest finished his spring soccer season this weekend with an undefeated record. so that was good :-) | 18:20 |
wbhuber | notmyname: got it and thanks. will prolly come up with cascading questions after digesting them. | 18:22 |
*** morganfainberg has quit IRC | 18:25 | |
*** clduser_ has quit IRC | 18:25 | |
*** swifterdarrell has quit IRC | 18:25 | |
*** torgomatic has quit IRC | 18:25 | |
*** rsFF has quit IRC | 18:25 | |
*** remix_auei has joined #openstack-swift | 18:25 | |
*** nadeem has joined #openstack-swift | 18:26 | |
clayg | notmyname: your oldest is like 7 - do they even keep score? | 18:26 |
*** chrisnelson has quit IRC | 18:26 | |
*** AbyssOne has quit IRC | 18:26 | |
*** chrisnelson_ has joined #openstack-swift | 18:26 | |
*** remix_tj has quit IRC | 18:26 | |
*** eikke has quit IRC | 18:26 | |
peluse | nice! | 18:26 |
*** eikke has joined #openstack-swift | 18:26 | |
peluse | wbhuber, at Intel we are also setting up a perf cluster for EC and working with notmyname's company to coordinate testing | 18:27 |
*** morganfainberg has joined #openstack-swift | 18:27 | |
*** clduser_ has joined #openstack-swift | 18:27 | |
*** swifterdarrell has joined #openstack-swift | 18:27 | |
*** torgomatic has joined #openstack-swift | 18:27 | |
*** rsFF has joined #openstack-swift | 18:27 | |
*** sendak.freenode.net sets mode: +vv swifterdarrell torgomatic | 18:27 | |
*** hub_cap has left #openstack-swift | 18:27 | |
*** silor has quit IRC | 18:27 | |
peluse | 2 proxies, 10 storage nodes with 12 disks each + an SSD each. Actually 2 clusters like this, one with Avoton (Atom-based) CPUs in the nodes and one with Xeon | 18:27 |
*** nadeem has quit IRC | 18:27 | |
wbhuber | peluse: good to know. how soon is the perf cluster ready for test? | 18:28 |
minwoob | peluse: For performance testing, is it possibly better to avoid using a PACO setup? | 18:28 |
*** annegentle has quit IRC | 18:29 | |
*** annegentle has joined #openstack-swift | 18:29 | |
minwoob | It seems that PACO may be meant more for functional testing than for stress tests. | 18:30 |
*** tamizh_geek has quit IRC | 18:30 | |
*** AbyssOne has joined #openstack-swift | 18:31 | |
peluse | wbhuber, we're baselining it now w/plans to have real data coming out of it in mid-June (after the summit and after some key learnings from the work notmyname just mentioned) | 18:32 |
notmyname | minwoob: yeah, I feel the same way. PACO for testing/validation or small clusters. | 18:32 |
peluse | ditto | 18:32 |
notmyname | clayg: he's playing a year up in U8 ;-). will be in U9 competitive in the fall | 18:32 |
*** NM1 has joined #openstack-swift | 18:38 | |
*** NM has quit IRC | 18:38 | |
*** nadeem has joined #openstack-swift | 18:38 | |
*** Gu_______ has joined #openstack-swift | 18:39 | |
*** annegentle has quit IRC | 18:40 | |
*** annegentle has joined #openstack-swift | 18:41 | |
peluse | notmyname, tsg said no problem on the additional pyeclib install docs. I assume you mean on the pyeclib page - we already have a link to it in the Swift overview docs but not in e.g. the SAIO instructions, so we could add it there too | 18:42 |
notmyname | peluse: I was thinking http://docs.openstack.org/developer/swift/overview_erasure_code.html but really just any place that's easily referencable is good | 18:43 |
wbhuber | peluse: do you have more specific HW specs for the perf cluster? CPU? Size of the disks? | 18:44 |
peluse | wbhuber, on the Intel side, sure. Will dig them up | 18:44 |
peluse | notmyname, so yeah I put the pyeclib section in that doc and purposely left off installation details so that the Swift docs wouldn't need to be updated if/when they change | 18:44 |
peluse | so there's already a link there but no good installation directions at the other end :( | 18:45 |
*** fthiagogv has quit IRC | 18:46 | |
peluse | once he updates the pyeclib site with installation details, I'll clarify the link in our docs to say "For isntallaation details, see.." but I'll spell installation correctly | 18:46 |
notmyname | :-) | 18:47 |
*** harlowja has quit IRC | 18:52 | |
*** harlowja has joined #openstack-swift | 18:52 | |
*** Gu_______ has quit IRC | 18:57 | |
*** Gu_______ has joined #openstack-swift | 18:59 | |
minwoob | peluse: Are you reserving x number of disks for certain storage policies, or are all being used during each performance run? | 19:04 |
minwoob | peluse: And are your proxies using SSDs for the account/container databases? | 19:14 |
peluse | ssd for databases only, yes | 19:20 |
peluse | we plan to share all policies on all disks but run EC tests separately from repl tests | 19:21 |
peluse | also note that we use cosbench as a workload generator (as opposed to ssbench) and zabbix on all nodes to gather mem/cpu/net stats | 19:21 |
peluse | minwoob, but the SSDs holding those are on the storage nodes (1 each) not in the proxies | 19:23 |
peluse | only OS boot drive on proxies... | 19:24 |
*** nadeem has quit IRC | 19:24 | |
*** lpabon has joined #openstack-swift | 19:43 | |
*** dencaval has quit IRC | 19:52 | |
*** Gu_______ has quit IRC | 19:53 | |
openstackgerrit | Merged openstack/swift: Remove workaround for old eventlet version https://review.openstack.org/181597 | 19:53 |
*** fresh has quit IRC | 19:55 | |
*** nadeem has joined #openstack-swift | 19:56 | |
*** nadeem has quit IRC | 20:14 | |
*** drwho has joined #openstack-swift | 20:17 | |
openstackgerrit | Michael Barton proposed openstack/swift: go: clean up and add to obj tests https://review.openstack.org/182065 | 20:18 |
openstackgerrit | Michael Barton proposed openstack/swift: go: new tmp dir layout https://review.openstack.org/182066 | 20:19 |
notmyname | redbo: any thoughts on doing that same tmp dir patch for python? | 20:20 |
*** nadeem has joined #openstack-swift | 20:22 | |
redbo | I don't know, the discussion kind of seemed to stall. | 20:25 |
redbo | I guess I could take it away from whoever was working on it | 20:32 |
notmyname | ah. was there already a patch for it? | 20:36 |
*** MVenesio has joined #openstack-swift | 20:38 | |
*** rmcall has quit IRC | 20:41 | |
*** openstackgerrit_ has joined #openstack-swift | 20:43 | |
*** zaitcev has joined #openstack-swift | 20:46 | |
*** ChanServ sets mode: +v zaitcev | 20:46 | |
*** Fin1te has joined #openstack-swift | 20:51 | |
redbo | notmyname: there was https://review.openstack.org/#/c/180883/ | 20:53 |
notmyname | redbo: oh yeah. thanks. | 20:56 |
notmyname | shri normally gets on later in our day, IIRC. might be good to chat with him to see what's the best way to get something landed | 20:57 |
*** haomai___ has joined #openstack-swift | 21:02 | |
*** wbhuber has quit IRC | 21:02 | |
*** haomaiwa_ has quit IRC | 21:05 | |
*** annegent_ has joined #openstack-swift | 21:10 | |
*** esker has quit IRC | 21:11 | |
*** annegentle has quit IRC | 21:15 | |
*** Fin1te has quit IRC | 21:29 | |
notmyname | fun fact found while working on summit presentations: current sloc for swift is roughly 109k. 25% is in swift/, i.e. only about 1/4 of our codebase is the actual code. the rest is tests (with a little for bin files) | 21:32 |
*** mwheckmann has joined #openstack-swift | 21:34 | |
mwheckmann | hello. Does anyone know the status of the container-to-container sync feature? That is to say syncing containers between remote distinct clusters? | 21:36 |
mwheckmann | as described here: http://docs.openstack.org/developer/swift/overview_container_sync.html | 21:36 |
mwheckmann | It works great at first but it seems to be fragile. | 21:37 |
mwheckmann | I can break synchronization by issuing successive deletes of objects (in a for loop for example). | 21:37 |
mwheckmann | This is with Juno. | 21:37 |
mwheckmann | Connectivity between my proxies and remote object nodes is fine | 21:38 |
clayg | mwheckmann: maybe this bug -> https://bugs.launchpad.net/swift/+bug/1413619 | 21:39 |
openstack | Launchpad bug 1413619 in OpenStack Object Storage (swift) "when container sync run, already deleted object is synced" [Undecided,New] - Assigned to Gil Vernik (gilv) | 21:39 |
mwheckmann | clayg: hmmm.. maybe. that's not the error I'm seeing, but I haven't turned on debug or anything like that. | 21:41 |
*** MVenesio has quit IRC | 21:41 | |
mwheckmann | I don't have a specific container-sync log. Didn't turn that on. | 21:42 |
mwheckmann | clayg: What I'm seeing in the logs is repetitive attempts to delete the objects but they're not actually getting deleted. | 21:44 |
clayg | well the daemon will log - even if it's just to /var/log/syslog prefixed with container-sync - you might turn on DEBUG - if you have a reproducible error it'd be wonderful if you could include details on a bug report on launchpad | 21:44 |
mwheckmann | will try | 21:44 |
mwheckmann | definitely 100% reproducible | 21:44 |
clayg | oh interesting, the except block may need to allow for 409 | 21:45 |
clayg | I think a conflict on PUT is translated to accepted; deleting an object with the exact same timestamp probably blows up and prevents the sync point from moving forward | 21:45 |
clayg | interesting | 21:45 |
mwheckmann | clayg: You're saying that I'm seeing this behaviour because I uploaded these objects at the same time *and* because I'm deleting them at the same time | 21:47 |
mwheckmann | ? | 21:47 |
clayg | mwheckmann: no I think it's just a bug for delete's | 21:48 |
clayg | the acceptable status for sync_point 2 delete needs to be expanded to allow for a 409 status | 21:48 |
mwheckmann | Well I'm happy to test a patch. This is a lab cluster. In the meantime, I'll try to get some more useful logs | 21:49 |
mwheckmann | (lab clusters in the plural) | 21:50 |
*** annegent_ has quit IRC | 21:52 | |
*** jrichli has quit IRC | 21:52 | |
*** annegentle has joined #openstack-swift | 21:53 | |
notmyname | yay Swift! https://github.com/bouncestorage/swiftproxy | 21:54 |
*** NM1 has quit IRC | 21:56 | |
*** annegentle has quit IRC | 22:00 | |
*** jkugel has quit IRC | 22:13 | |
mwheckmann | clayg: with debug on I'm now getting the same error as bug #1413619 | 22:14 |
openstack | bug 1413619 in OpenStack Object Storage (swift) "when container sync run, already deleted object is synced" [Undecided,New] https://launchpad.net/bugs/1413619 - Assigned to Gil Vernik (gilv) | 22:14 |
mwheckmann | 100% reproducible | 22:14 |
*** wbhuber has joined #openstack-swift | 22:18 | |
clayg | yeah, if it's what I think it is, it should be trivial to write a probe test for | 22:18 |
clayg | mwheckmann: can you confirm the sync target is responding with a 409 to the second DELETE request? | 22:18 |
mwheckmann | clayg: checking... | 22:21 |
minwoob | Quick question - what determines when an approved patch gets merged? | 22:25 |
minwoob | Specifically, for the master branch. | 22:25 |
mwheckmann | clayg: No 409's from the target proxy whatsoever | 22:26 |
*** drwho has quit IRC | 22:26 | |
mwheckmann | just a bunch of 204 responses | 22:26 |
mwheckmann | followed by 404's on subsequent attempts to delete the same object | 22:26 |
zaitcev | minwoob: One of the core developers approves or "lands" it. Not sure about other projects, but in Swift we have a convention that one core has to review a patch first, and the second one may land it. | 22:27 |
mwheckmann | clayg: there were 66 objects total in the container to be deleted. The target container proxy only 204'd 59 DELETE requests. | 22:29 |
minwoob | zaitcev: Ah, I see. Thank you. | 22:30 |
mwheckmann | clayg: some objects (about 9) were never even subjected to a DELETE | 22:30 |
mattoliverau | Morning | 22:32 |
clayg | mwheckmann: hrmm... I wasn't expecting the 404's - everything you said makes sense, including it not working - but only if the subsequent deletes 409 - if they return 404 all the way to the sync client then it should have made progress | 22:34 |
*** nadeem has quit IRC | 22:40 | |
mwheckmann | clayg: It looks like the 404's are subsequent DELETE attempts on objects that *were* successfully deleted. | 22:42 |
mwheckmann | The problem is that there are outstanding objects that we *never* end up seeing a remote DELETE request for. | 22:42 |
*** esker has joined #openstack-swift | 22:42 | |
clayg | mwheckmann: yeah, that would make sense if the sync_points aren't progressing because the sync_point2 sweep over the already-deleted objects blows up because of a 409 | 22:48 |
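A self-contained Python sketch of the shape of the fix clayg is describing; the function and argument names are assumptions for illustration, not the verbatim swift/container/sync.py source:

```python
# Stand-ins for swift.common.http constants and the client exception.
HTTP_NOT_FOUND = 404
HTTP_CONFLICT = 409

class ClientException(Exception):
    def __init__(self, msg, http_status=None):
        super(ClientException, self).__init__(msg)
        self.http_status = http_status

def sync_deleted_row(delete_object, sync_to, name, timestamp):
    """Replay one deleted row against the sync target.

    Tolerating 409 (a delete with the exact same timestamp was already
    applied remotely) the same way as 404 lets sync_point2 advance past
    already-deleted rows instead of re-raising and getting stuck.
    """
    try:
        delete_object(sync_to, name, headers={'x-timestamp': timestamp})
    except ClientException as err:
        if err.http_status not in (HTTP_NOT_FOUND, HTTP_CONFLICT):
            raise
```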
*** occupant has joined #openstack-swift | 22:53 | |
*** esker has quit IRC | 22:57 | |
mwheckmann | clayg: I created a new bug #1453993 as that other one wasn't clear | 23:09 |
openstack | bug 1453993 in OpenStack Object Storage (swift) "container sync gets stuck after deleting all objects" [Undecided,New] https://launchpad.net/bugs/1453993 | 23:09 |
*** bill_az has joined #openstack-swift | 23:19 | |
*** esker has joined #openstack-swift | 23:22 | |
*** km has joined #openstack-swift | 23:26 | |
*** esker has quit IRC | 23:37 | |
*** jrichli has joined #openstack-swift | 23:48 | |
*** ho has joined #openstack-swift | 23:49 | |
*** minwoob has quit IRC | 23:50 | |
ho | good morning! | 23:53 |
-openstackstatus- NOTICE: Gerrit is going offline while we perform an emergency downgrade to version 2.8. | 23:55 | |
*** ChanServ changes topic to "Gerrit is going offline while we perform an emergency downgrade to version 2.8." | 23:55 | |
mattoliverau | ho: morning | 23:55 |
ho | mattoliverau: morning! | 23:56 |