*** dmsimard has quit IRC | 00:03 | |
*** rpedde is now known as rpedde_away | 00:05 | |
*** hurrican_ has joined #openstack-swift | 00:11 | |
*** hurricanerix has quit IRC | 00:15 | |
*** hurrican_ has quit IRC | 00:16 | |
*** krtaylor has joined #openstack-swift | 00:26 | |
*** tgohad has quit IRC | 00:51 | |
*** dmorita has joined #openstack-swift | 01:00 | |
openstackgerrit | John Dickinson proposed a change to openstack/swift: Added docs about the swift_source log field https://review.openstack.org/71163 | 01:01 |
*** shri has left #openstack-swift | 01:31 | |
*** zul has joined #openstack-swift | 01:51 | |
*** nosnos has joined #openstack-swift | 02:00 | |
*** byeager has joined #openstack-swift | 02:01 | |
*** byeager has quit IRC | 02:09 | |
*** mkerrin1 has joined #openstack-swift | 02:10 | |
*** booi_ has joined #openstack-swift | 02:11 | |
openstackgerrit | A change was merged to openstack/swift: Remove duplicate doc entry for swob https://review.openstack.org/72922 | 02:11 |
*** haomaiwa_ has joined #openstack-swift | 02:11 | |
*** mtaylor has joined #openstack-swift | 02:13 | |
*** byeager has joined #openstack-swift | 02:13 | |
*** kris_ha has joined #openstack-swift | 02:14 | |
*** cschwede_ has joined #openstack-swift | 02:14 | |
*** dmorita has quit IRC | 02:14 | |
*** tristanC_ has joined #openstack-swift | 02:14 | |
*** mkerrin has quit IRC | 02:15 | |
*** haomai___ has quit IRC | 02:15 | |
*** dfg has quit IRC | 02:15 | |
*** mordred has quit IRC | 02:15 | |
*** booi has quit IRC | 02:15 | |
*** Trixboxer has quit IRC | 02:15 | |
*** nosnos_ has joined #openstack-swift | 02:15 | |
*** mlipchuk1 has joined #openstack-swift | 02:16 | |
*** dfg has joined #openstack-swift | 02:16 | |
*** mkerrin has joined #openstack-swift | 02:16 | |
*** haomai___ has joined #openstack-swift | 02:17 | |
*** csd has quit IRC | 02:19 | |
*** mlipchuk has quit IRC | 02:19 | |
*** kris_h has quit IRC | 02:19 | |
*** gyee has quit IRC | 02:19 | |
*** tristanC has quit IRC | 02:19 | |
*** saurabh_ has quit IRC | 02:19 | |
*** creiht has quit IRC | 02:19 | |
*** cschwede has quit IRC | 02:19 | |
*** gholt has quit IRC | 02:19 | |
*** bsdkurt1 has joined #openstack-swift | 02:19 | |
*** saurabh_ has joined #openstack-swift | 02:21 | |
*** haomaiwa_ has quit IRC | 02:22 | |
*** mkerrin1 has quit IRC | 02:22 | |
*** nosnos has quit IRC | 02:22 | |
*** zul has quit IRC | 02:22 | |
*** bsdkurt has quit IRC | 02:22 | |
*** Diddi has quit IRC | 02:22 | |
*** byeager has quit IRC | 02:23 | |
*** byeager has joined #openstack-swift | 02:26 | |
*** booi_ has quit IRC | 02:27 | |
*** bsdkurt has joined #openstack-swift | 02:28 | |
*** zul has joined #openstack-swift | 02:30 | |
*** Anju has quit IRC | 02:30 | |
*** saurabh_ has quit IRC | 02:31 | |
*** tristanC_ has quit IRC | 02:32 | |
*** nosnos has joined #openstack-swift | 02:32 | |
*** bsdkurt1 has quit IRC | 02:33 | |
*** haomai___ has quit IRC | 02:33 | |
*** dfg has quit IRC | 02:33 | |
*** nosnos_ has quit IRC | 02:33 | |
*** shadowing has quit IRC | 02:33 | |
*** shadowing has joined #openstack-swift | 02:38 | |
*** tristanC has joined #openstack-swift | 02:38 | |
*** Diddi has joined #openstack-swift | 02:39 | |
*** bada has joined #openstack-swift | 02:40 | |
*** saurabh_ has joined #openstack-swift | 02:40 | |
*** dmorita has joined #openstack-swift | 02:40 | |
*** Anju has joined #openstack-swift | 02:41 | |
*** torgomatic_ has joined #openstack-swift | 02:42 | |
*** bada_ has quit IRC | 02:43 | |
*** dfg_ has joined #openstack-swift | 02:43 | |
*** mkerrin1 has joined #openstack-swift | 02:43 | |
*** zackf1 has joined #openstack-swift | 02:43 | |
*** russellb_ has joined #openstack-swift | 02:45 | |
*** dfg_ has quit IRC | 02:47 | |
*** zul has quit IRC | 02:47 | |
*** creiht_ has joined #openstack-swift | 02:48 | |
*** mdonohoe has joined #openstack-swift | 02:48 | |
*** cschwede has joined #openstack-swift | 02:48 | |
*** nosnos has quit IRC | 02:49 | |
*** bsdkurt has quit IRC | 02:49 | |
*** mkerrin has quit IRC | 02:49 | |
*** mlipchuk1 has quit IRC | 02:49 | |
*** cschwede_ has quit IRC | 02:49 | |
*** kris_ha has quit IRC | 02:49 | |
*** krtaylor has quit IRC | 02:50 | |
*** zackf has quit IRC | 02:50 | |
*** torgomatic has quit IRC | 02:50 | |
*** markd has quit IRC | 02:50 | |
*** peluse has quit IRC | 02:50 | |
*** russellb has quit IRC | 02:50 | |
*** torgomatic_ is now known as torgomatic | 02:50 | |
*** russellb_ is now known as russellb | 02:50 | |
*** peluse has joined #openstack-swift | 02:50 | |
*** zul has joined #openstack-swift | 02:52 | |
*** dfg has joined #openstack-swift | 02:52 | |
*** nosnos has joined #openstack-swift | 02:56 | |
*** krtaylor has joined #openstack-swift | 03:00 | |
*** bsdkurt has joined #openstack-swift | 03:08 | |
*** peluse has quit IRC | 03:08 | |
*** kris_ha has joined #openstack-swift | 03:08 | |
*** peluse has joined #openstack-swift | 03:09 | |
*** mlipchuk has joined #openstack-swift | 03:11 | |
*** sphoorti has joined #openstack-swift | 03:11 | |
sphoorti | Hey, I am trying to run this command " swift -v -V 2.0 -A http://192.168.1.4:5000/v2.0/ -U admin -K devstack stat" , but i get the following error " Endpoint for object-store not found - have you specified a region? " | 03:13 |
sphoorti | What could possibly be going wrong ? | 03:13 |
*** creiht_ is now known as creiht | 03:16 | |
*** ChanServ sets mode: +v creiht | 03:19 | |
*** haomaiwang has joined #openstack-swift | 03:29 | |
*** gyee has joined #openstack-swift | 03:29 | |
*** gholt has joined #openstack-swift | 03:29 | |
*** ChanServ sets mode: +v gholt | 03:29 | |
*** mkerrin1 has quit IRC | 03:30 | |
*** mkerrin1 has joined #openstack-swift | 03:30 | |
*** peluse has quit IRC | 03:30 | |
*** peluse has joined #openstack-swift | 03:30 | |
*** zul has quit IRC | 03:31 | |
portante | notmyname: https://wiki.openstack.org/wiki/ObjectStorageBackends | 03:32 |
*** zul has joined #openstack-swift | 03:36 | |
* zaitcev pokes anticw_ again | 03:38 | |
zaitcev | portante: looks promising but there's no tour | 03:39 |
zaitcev | sphoorti: well, what does keystone endpoint-list say? And service-list? | 03:40 |
zaitcev | sphoorti: most common mistake is a typo. The "object-store" that swift prints in that error message is the actual keyword it searches for (well, keystoneclient searches actually). So, people who enter, say, "objects-store" get that error. Something like that. | 03:41 |
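zaitcev's point can be sketched in a few lines: the client scans the service catalog for an exact service-type match, and the region comparison is case-sensitive too. This is a minimal illustrative sketch assuming a v2.0-style catalog structure; `find_endpoint` is a made-up name, the real lookup lives in python-keystoneclient's ServiceCatalog.

```python
# Illustrative sketch of the endpoint lookup that produces the
# "Endpoint for object-store not found" error. Not the real
# keystoneclient code.

def find_endpoint(catalog, service_type="object-store", region=None):
    for service in catalog:
        # the type must match exactly: "objects-store" will never be found
        if service.get("type") != service_type:
            continue
        for ep in service.get("endpoints", []):
            # region comparison is case-sensitive: "regionOne" != "RegionOne"
            if region is None or ep.get("region") == region:
                return ep["publicURL"]
    raise LookupError("Endpoint for %s not found - "
                      "have you specified a region?" % service_type)
```

With a catalog entry of type `object-store` in region `RegionOne`, asking for `regionOne` raises the LookupError even though the service exists, which is the "typo" failure mode described above.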
sphoorti | zaitcev: keystone service-list shows keystone and swift | 03:41 |
zaitcev | sphoorti: no, don't give me that. Paste the actual console log somewhere to paste.openstack.org or such. | 03:42 |
sphoorti | Sorry. Sure i ll paste the output and show you | 03:43 |
sphoorti | zaitcev: http://paste.openstack.org/show/64874/ | 03:45 |
*** zul has quit IRC | 03:52 | |
*** chandan_kumar has joined #openstack-swift | 03:58 | |
sphoorti | Also zaitcev : http://paste.openstack.org/show/64883/ | 04:02 |
zaitcev | sphoorti: the second one is just firewall or something | 04:04 |
zaitcev | sphoorti: Strange, the region is actually case sensitive, but I forgot what the right capitalization is | 04:05 |
zaitcev | I am almost certain it's not "regionOne" | 04:05 |
zaitcev | Your "RegionOne" is correct, it appears. Hmm.... | 04:07 |
*** byeager has quit IRC | 04:08 | |
*** chandan_kumar has quit IRC | 04:09 | |
*** chandan_kumar has joined #openstack-swift | 04:10 | |
zaitcev | sphoorti: very odd. Your keystone configuration looks perfect (leaving the "cannot connect" aside). | 04:13 |
zaitcev | The "swift --debug -v -V 2.0 -A http://192.168.1.4:5000/v2.0/regionOne" is incorrect, I think. Should just be /v2.0. | 04:14 |
zaitcev | Oh, wait, I take it back. | 04:15 |
zaitcev | No, I don't. Really it can't be right. POST goes to /v2.0/tokens | 04:16 |
zaitcev | Maybe SSL got enabled somewhere by accident. | 04:16 |
*** basha has joined #openstack-swift | 04:17 | |
sphoorti | zaitcev: how do i check that? | 04:19 |
zaitcev | sphoorti: well, it's a Keystone question.... but I would check what's in [ssl] in /etc/keystone/keystone.conf. | 04:23 |
*** pberis has quit IRC | 04:25 | |
sphoorti | zaitcev: ssl is enable=true | 04:26 |
zaitcev | sphoorti: you can just use "swift -v -V 2.0 -A https://192.168.1.4:5000/v2.0 -U admin -K devstack stat" and test. | 04:27 |
sphoorti | zaitcev: i tried that command and it gives me :- Authorization Failure. Authorization Failed: Unable to establish connection to http://192.168.1.4:5000/v2.0/tokens | 04:28 |
zaitcev | sphoorti: I don't know how it turns https: into http: for you, sorry. It should work if you copy-paste it from IRC. | 04:29 |
sphoorti | zaitcev: i am getting the same error | 04:30 |
*** pberis has joined #openstack-swift | 04:30 | |
zaitcev | sphoorti: "same" in what way? You typed -A https:// and it reports http:// again? | 04:31 |
sphoorti | zaitcev: Authorization Failure. Authorization Failed: Unable to establish connection to https://192.168.1.4:5000/v2.0/tokens | 04:31 |
sphoorti | Initially i was getting:- Endpoint for object-store not found - have you specified a region?. But now the authorization is failing | 04:33 |
zaitcev | Maybe you crashed Keystone meanwhile | 04:33 |
sphoorti | should i restart the vm? | 04:33 |
zaitcev | Look, it's simple: just do whatever it takes for authorization to succeed. Don't worry about Swift, make sure you can post for tokens with curl first. | 04:34 |
sphoorti | the curl command is also not working | 04:34 |
sphoorti | zaitcev: i tried curl -i -H "X-Auth-Key: devstack123" -H "X-Auth-User: admin" http://192.168.1.4:5000/v2.0 | 04:34 |
sphoorti | curl: (7) Failed connect to 192.168.1.4:5000; Connection refused | 04:34 |
*** basha has quit IRC | 04:35 | |
zaitcev | sphoorti: well, duh. Make sure that Keystone is even running... Like, netstat -atn | grep 5000 | 04:36 |
sphoorti | from the native machine ? | 04:36 |
sphoorti | or the vm ? | 04:36 |
sphoorti | zaitcev: from the vm it says tcp and LISTEN | 04:37 |
zaitcev | sphoorti: the command you pasted wasn't going to work, since it was trying to use the v1 authentication protocol against a v2 endpoint, but that does not matter anyhow | 04:37 |
saurabh_ | zaitcev: When storing an object on the file-system, it uses the last 3 digits of the hash for calculating the path. Why the last 3 digits and not the first? Any specific reason behind this? | 04:37 |
*** basha has joined #openstack-swift | 04:37 | |
saurabh_ | notmyname: ^^^^ | 04:39 |
creiht | saurabh_: that's just to split things up a bit on the filesystem so you don't end up with one directory that has millions of files in it | 04:45 |
saurabh_ | creiht: yes, that's fine, but why the last 3 digits and not the first? As far as I can tell there's no logical reason behind it, but tell me if you know of one? | 04:48 |
saurabh_ | sphoorti: may be you can check authtoken and keystoneauth middleware configured correctly or not. | 04:51 |
sphoorti | saurabh_: how does one check that? basically i have a devstack running on the vm and I have changed the contents of my localrc to allow only swift and keystone | 04:52 |
saurabh_ | sphoorti: ok then they would be correct. so you can start from fresh environment | 04:53 |
saurabh_ | sphoorti: open new terminal | 04:53 |
sphoorti | on the vm ? | 04:54 |
saurabh_ | on your swift cluster from where you executed devstack/stack.sh | 04:54 |
creiht | saurabh_: gives you better distribution of files across directories | 04:54 |
sphoorti | saurabh_: did that | 04:55 |
saurabh_ | sphoorti: cd devstack | 04:55 |
sphoorti | done | 04:55 |
saurabh_ | sphoorti: execute source openrc | 04:55 |
saurabh_ | then swift list | 04:55 |
saurabh_ | swift stat | 04:56 |
sphoorti | swift list shows nothing | 04:56 |
saurabh_ | swift stat | 04:56 |
saurabh_ | ?? | 04:56 |
sphoorti | swift stat shows no containers created | 04:56 |
notmyname | saurabh_: I think that was my "fault" for doing the last three characters rather than the first 3. if the splaying were across incrementing numbers, the last 3 would give you better distribution. but since the splaying is based on a hash, then first 3 vs last 3 doesn't really matter. http://www.mpiweb.org/Libraries/Magazine/the_more_you_know.jpg | 04:56 |
saurabh_ | sphoorti: execute "swift upload con localrc" | 04:57 |
saurabh_ | sphoorti: then "swift list" | 04:58 |
sphoorti | i see con as the output | 04:58 |
sphoorti | saurabh_: | 04:58 |
saurabh_ | sphoorti: ok so does it resolve your problem? | 04:59 |
sphoorti | saurabh_: I am trying to connect to swift via other machines in the network | 04:59 |
sphoorti | that is not happening | 04:59 |
* notmyname goes back to being not online | 04:59 | |
saurabh_ | notmyname: in this case, what is the probability of data piling up in one directory? Here the directory is created using the last three characters of the hash value | 05:05 |
*** krtaylor has quit IRC | 05:06 | |
*** basha has quit IRC | 05:07 | |
openstackgerrit | A change was merged to openstack/swift: Ensure swift.source is set for DLO/SLO requests https://review.openstack.org/71415 | 05:09 |
*** zul has joined #openstack-swift | 05:09 | |
*** ppai has joined #openstack-swift | 05:11 | |
zaitcev | "LookupError: Entry point 'dlo' not found in egg 'swift' (dir: /q/zaitcev/hail/swift-tip; protocols: paste.filter_factory, paste.filter_app_factory; entry_points : )" -- oh great now what | 05:17 |
creiht | zaitcev: sudo python setup.py develop | 05:19 |
*** krtaylor has joined #openstack-swift | 05:19 | |
creiht | when new entrypoints are added, you have to run setup again | 05:20 |
zaitcev | creiht: Well, I spent some effort in order to stop doing that. | 05:20 |
*** basha has joined #openstack-swift | 05:26 | |
*** zackf1 is now known as zackf | 05:30 | |
*** basha has left #openstack-swift | 05:36 | |
*** gyee has quit IRC | 05:40 | |
*** ChanServ sets mode: +v torgomatic | 05:49 | |
*** chandankumar_ has joined #openstack-swift | 05:49 | |
*** chandankumar_ has quit IRC | 05:49 | |
*** chandankumar_ has joined #openstack-swift | 05:50 | |
*** fifieldt has joined #openstack-swift | 05:51 | |
*** chandankumar_ has quit IRC | 05:52 | |
*** chandan_kumar has quit IRC | 05:53 | |
*** chandan_kumar has joined #openstack-swift | 05:54 | |
torgomatic | also the partition comes from the top N bits of the hash, so if you pulled the *top* 3 chars for suffix dir and your part power was at least 12, it would do no good | 05:54 |
torgomatic | you'd get objects/$part/$first-12-bits-of-part/$hash, and it'd just be extra directories with no splaying | 05:55 |
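The layout creiht, notmyname, and torgomatic are describing can be sketched roughly as follows. This is a hedged sketch, not Swift's actual code (which lives in swift.common.ring and swift.obj.diskfile): the `HASH_PATH_SUFFIX` value and `PART_POWER` are illustrative assumptions, and the real ring computes the partition from the packed hash bytes rather than the hex string.

```python
import hashlib

# Assumed values: the real hash suffix comes from /etc/swift/swift.conf
# and the partition power from the ring; both are illustrative here.
HASH_PATH_SUFFIX = b"changeme"
PART_POWER = 18

def object_hash(account, container, obj):
    # Swift hashes the full /account/container/object path plus a
    # cluster-wide secret suffix.
    path = "/%s/%s/%s" % (account, container, obj)
    return hashlib.md5(path.encode("utf-8") + HASH_PATH_SUFFIX).hexdigest()

def partition(name_hash, part_power=PART_POWER):
    # The partition is the top part_power bits of the 128-bit hash.
    # This is torgomatic's point: the *first* hex chars are already
    # consumed by the partition, so reusing them as a suffix dir would
    # add directories without adding any splaying.
    return int(name_hash, 16) >> (128 - part_power)

def disk_path(device, account, container, obj):
    h = object_hash(account, container, obj)
    suffix = h[-3:]  # last 3 hex chars form the suffix directory
    return "/srv/node/%s/objects/%d/%s/%s" % (
        device, partition(h), suffix, h)
```

Since md5 output is uniformly distributed, any 3 hex chars not already used by the partition would splay equally well; the last 3 are simply the ones guaranteed not to overlap the partition bits.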
*** zul has quit IRC | 06:08 | |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: Small efficiency improvement for SLO range GET https://review.openstack.org/73161 | 06:17 |
openstackgerrit | Samuel Merritt proposed a change to openstack/swift: Support If-[None-]Match for object HEAD, SLO, and DLO https://review.openstack.org/73162 | 06:17 |
*** nshaikh has joined #openstack-swift | 06:26 | |
*** prometheanfire has joined #openstack-swift | 06:45 | |
prometheanfire | so, can we configure swift to use sha256/512 instead of md5? | 06:46 |
*** fifieldt_ has joined #openstack-swift | 06:53 | |
*** fifieldt has quit IRC | 06:53 | |
zackf | prometheanfire: don't do it | 06:56 |
prometheanfire | zackf: do what? | 06:56 |
prometheanfire | zackf: you know what I might be doing? | 06:56 |
*** zaitcev has quit IRC | 06:56 | |
zackf | yes | 06:57 |
prometheanfire | what's that? | 06:57 |
prometheanfire | anyway, talking to a friend, thinking about adding swift as an api to access objects in zfs (not on, in) | 07:06 |
prometheanfire | since zfs is all object based internally | 07:07 |
*** pberis has quit IRC | 07:11 | |
*** sphoorti has quit IRC | 07:12 | |
*** pberis has joined #openstack-swift | 07:12 | |
*** sphoorti has joined #openstack-swift | 07:13 | |
*** rongze has joined #openstack-swift | 07:17 | |
*** mlipchuk has quit IRC | 07:35 | |
prometheanfire | this would be like using ceph (another object based fs) as a backend for swift, but without the network | 07:35 |
prometheanfire | also | 07:35 |
prometheanfire | swift as a zfs backend as a new vdev for zfs | 07:35 |
prometheanfire | would be useful... | 07:36 |
zigo | chmouel: I love pbr, because with it, "python setup.py install" just works (tm) and does what it's supposed to do correctly, and with PBR, I don't need to worry about *many* things, plus it's a move away from python-setuptools-git which was horrible. Though if you provide a working setup.py that does things correctly, that's fine too. It's just that PBR *solved* issues that were there previously, but if there's no issue (like with swift | 07:41 |
zigo | currently), then maybe you don't need pbr... | 07:41 |
zigo | (gosh, my wording is highly inefficient, sorry guys...) | 07:42 |
*** foexle has joined #openstack-swift | 07:52 | |
*** xga has joined #openstack-swift | 08:05 | |
*** sphoorti has quit IRC | 08:06 | |
*** vanheerj has joined #openstack-swift | 08:09 | |
*** hugokuo has quit IRC | 08:11 | |
*** fifieldt_ has quit IRC | 08:13 | |
*** wkelly has quit IRC | 08:13 | |
*** hugokuo has joined #openstack-swift | 08:14 | |
*** fifieldt_ has joined #openstack-swift | 08:14 | |
*** amandap has quit IRC | 08:15 | |
*** creiht has quit IRC | 08:15 | |
*** creiht has joined #openstack-swift | 08:16 | |
*** ChanServ sets mode: +v creiht | 08:16 | |
*** openstackgerrit has quit IRC | 08:16 | |
*** amandap has joined #openstack-swift | 08:16 | |
*** wkelly has joined #openstack-swift | 08:18 | |
*** mlanner has quit IRC | 08:24 | |
*** xga has quit IRC | 08:24 | |
*** xga has joined #openstack-swift | 08:26 | |
*** mlanner has joined #openstack-swift | 08:26 | |
*** sphoorti has joined #openstack-swift | 08:26 | |
*** mlipchuk has joined #openstack-swift | 08:27 | |
*** kris_ha has quit IRC | 08:31 | |
*** mrsnivvel has joined #openstack-swift | 08:33 | |
vanheerj | hi. I am very new to OpenStack. I am trying to install a 2-node test environment using only swift. I don't want the whole OpenStack suite. I'm trying to compare swift storage with Cloudian. Is there a quick install for swift somewhere? | 08:41 |
omame- | vanheerj: you may want to look at http://docs.openstack.org/developer/swift/howto_installmultinode.html | 08:42 |
*** omame- is now known as omame | 08:42 | |
vanheerj | much appreciated. I will look at that. I'm using CentOS but I will modify the install as I go along | 08:43 |
*** saju_m has joined #openstack-swift | 08:50 | |
*** fifieldt_ has quit IRC | 08:57 | |
koolhead17 | zigo: :P | 08:59 |
*** nacim has joined #openstack-swift | 09:00 | |
*** xga has quit IRC | 09:04 | |
*** xga has joined #openstack-swift | 09:06 | |
*** mkerrin1 has quit IRC | 09:13 | |
*** Midnightmyth has joined #openstack-swift | 09:25 | |
*** nshaikh has quit IRC | 09:27 | |
*** nshaikh has joined #openstack-swift | 09:30 | |
*** kris_ha has joined #openstack-swift | 09:34 | |
*** mkollaro has joined #openstack-swift | 09:41 | |
madhuri | hi all | 09:52 |
madhuri | I am getting a database connection error | 09:53 |
madhuri | while uploading any object | 09:53 |
erlon | notmyname: welcome John, I really appreciate the discussion! | 10:03 |
*** xga_ has joined #openstack-swift | 10:06 | |
*** xga has quit IRC | 10:06 | |
*** peluse has quit IRC | 10:18 | |
*** mkollaro has quit IRC | 10:41 | |
*** mkerrin has joined #openstack-swift | 10:42 | |
*** mmcardle has joined #openstack-swift | 10:43 | |
*** ppai has quit IRC | 10:45 | |
*** mkollaro has joined #openstack-swift | 10:52 | |
*** sphoorti has quit IRC | 10:58 | |
vanheerj | Am I right when I say that openstack needs CentOS 6.4 minimum? | 10:58 |
*** sphoorti has joined #openstack-swift | 10:59 | |
*** ppai has joined #openstack-swift | 11:02 | |
*** xga_ has quit IRC | 11:08 | |
*** xga_ has joined #openstack-swift | 11:14 | |
chmouel | notmyname: so about https://review.openstack.org/#/c/69187/ i think this is ready for merge, feel free to merge it and do 2.0 release when you get the chance | 11:23 |
*** xga_ has quit IRC | 11:25 | |
*** notmyname has quit IRC | 11:29 | |
*** sphoorti has quit IRC | 11:32 | |
*** notmyname has joined #openstack-swift | 11:32 | |
*** ChanServ sets mode: +v notmyname | 11:32 | |
*** sileht has quit IRC | 11:32 | |
*** krtaylor has quit IRC | 11:33 | |
*** sileht has joined #openstack-swift | 11:33 | |
*** sphoorti has joined #openstack-swift | 11:34 | |
vanheerj | If I have a 3-node swift cluster and each node has a single disk that it stores objects on, and all 3 nodes are in a single zone, does that mean I have no replicas? | 11:36 |
vanheerj | In other words, does swift only replicate over zones or over nodes? | 11:36 |
*** krtaylor has joined #openstack-swift | 11:38 | |
*** bvandenh has joined #openstack-swift | 11:56 | |
*** xga has joined #openstack-swift | 12:06 | |
*** ppai has quit IRC | 12:10 | |
*** ppai has joined #openstack-swift | 12:12 | |
vanheerj | To answer my own question, it seems that replication happens as uniquely as possible across zones, but if you have only one zone it will replicate across nodes | 12:14 |
*** dmorita has quit IRC | 12:25 | |
*** dmsimard has joined #openstack-swift | 12:40 | |
*** sphoorti_ has joined #openstack-swift | 12:40 | |
*** sphoorti has quit IRC | 12:40 | |
*** mmcardle has quit IRC | 12:46 | |
*** SkyRocknRoll__ has quit IRC | 12:48 | |
*** SkyRocknRoll__ has joined #openstack-swift | 12:50 | |
*** acorwin has quit IRC | 12:50 | |
*** acorwin has joined #openstack-swift | 12:52 | |
*** xga has quit IRC | 13:06 | |
*** xga has joined #openstack-swift | 13:06 | |
*** nosnos has quit IRC | 13:10 | |
*** sphoorti_ has quit IRC | 13:11 | |
*** sphoorti_ has joined #openstack-swift | 13:12 | |
*** ppai has quit IRC | 13:14 | |
*** zul has joined #openstack-swift | 13:14 | |
*** hurricanerix has joined #openstack-swift | 13:29 | |
*** mmcardle has joined #openstack-swift | 13:30 | |
*** xga has quit IRC | 13:31 | |
*** sphoorti_ has quit IRC | 13:31 | |
*** xga has joined #openstack-swift | 13:31 | |
*** nshaikh has left #openstack-swift | 13:45 | |
*** seiflotfy__ has quit IRC | 13:49 | |
*** nshaikh has joined #openstack-swift | 14:04 | |
*** xga_ has joined #openstack-swift | 14:19 | |
*** xga has quit IRC | 14:20 | |
*** rongze has quit IRC | 14:21 | |
*** vanheerj has quit IRC | 14:25 | |
*** saju_m has quit IRC | 14:28 | |
*** rongze has joined #openstack-swift | 14:29 | |
*** fifieldt has joined #openstack-swift | 14:40 | |
notmyname | chmouel: ack. will look later this morning | 14:42 |
*** fifieldt has quit IRC | 14:43 | |
notmyname | zigo: I'd love some more detail from your experience. what are some of those issues that pbr solved for you? | 14:43 |
*** ChanServ sets mode: +v dfg | 15:00 | |
*** byeager has joined #openstack-swift | 15:04 | |
*** xga has joined #openstack-swift | 15:05 | |
*** xga_ has quit IRC | 15:05 | |
*** Trixboxer has joined #openstack-swift | 15:09 | |
*** rpedde_away is now known as rpedde | 15:17 | |
*** jergerber has joined #openstack-swift | 15:18 | |
*** mkerrin has quit IRC | 15:21 | |
*** mkollaro has quit IRC | 15:28 | |
*** mkollaro has joined #openstack-swift | 15:28 | |
*** chandankumar_ has joined #openstack-swift | 15:29 | |
*** chandan_kumar has quit IRC | 15:29 | |
*** tongli has joined #openstack-swift | 15:30 | |
*** mkerrin has joined #openstack-swift | 15:30 | |
*** mkollaro has quit IRC | 15:33 | |
*** xga_ has joined #openstack-swift | 15:35 | |
*** xga has quit IRC | 15:36 | |
*** lpabon has joined #openstack-swift | 15:41 | |
*** mkollaro has joined #openstack-swift | 15:43 | |
*** openstackgerrit has joined #openstack-swift | 15:47 | |
*** nshaikh has left #openstack-swift | 16:02 | |
*** mtaylor is now known as mordred | 16:07 | |
*** mordred has quit IRC | 16:07 | |
*** mordred has joined #openstack-swift | 16:07 | |
*** saju_m has joined #openstack-swift | 16:08 | |
*** kris_ha has quit IRC | 16:08 | |
*** bvandenh has quit IRC | 16:11 | |
*** chandankumar__ has joined #openstack-swift | 16:16 | |
*** chandankumar_ has quit IRC | 16:19 | |
*** tdasilva has joined #openstack-swift | 16:19 | |
*** kris_ha has joined #openstack-swift | 16:25 | |
cschwede | notmyname: i updated my patch https://review.openstack.org/#/c/73037/ - now it works just like the other files in bin/ that are only importing main() from other modules | 16:27 |
*** zackf has quit IRC | 16:33 | |
*** zackf has joined #openstack-swift | 16:33 | |
*** gyee has joined #openstack-swift | 16:37 | |
*** chandankumar__ has quit IRC | 16:38 | |
*** mlipchuk has quit IRC | 16:41 | |
*** chandan_kumar has joined #openstack-swift | 16:41 | |
*** xga_ has quit IRC | 16:43 | |
*** xga has joined #openstack-swift | 16:45 | |
notmyname | cschwede: nice. looks good IMO | 16:52 |
notmyname | (I guess I should put that in gerrit :-) | 16:52 |
*** vkdrao has joined #openstack-swift | 16:53 | |
*** rturk has joined #openstack-swift | 17:00 | |
*** rturk is now known as rturk-away | 17:01 | |
*** sphoorti has joined #openstack-swift | 17:03 | |
sphoorti | Hey Folks! I am trying to connect to the swift server, but I get the following error http://paste.openstack.org/show/65103/ . What possibly could be going wrong? | 17:05 |
creiht | sphoorti: looks like a problem with keystone running on 192.168.1.4 | 17:07 |
creiht | I would check the keystone logs to see if that helps narrow down what is going on | 17:08 |
sphoorti | my /etc/keystone.conf file shows ssl enable= true | 17:08 |
sphoorti | is it causing the problems? | 17:08 |
creiht | other than that, I don't have a lot of experience with keystone, sorry | 17:08 |
creiht | sphoorti: hopefully the logs will give you a better idea of what is or isn't working right | 17:09 |
*** xga has quit IRC | 17:10 | |
*** mmcardle has quit IRC | 17:11 | |
*** xga has joined #openstack-swift | 17:13 | |
*** bsdkurt has quit IRC | 17:14 | |
*** mmcardle has joined #openstack-swift | 17:16 | |
sphoorti | creiht: where would i find keystone logs? | 17:18 |
creiht | sphoorti: heh sorry I don't know for sure | 17:19 |
creiht | but I would start with /var/log/syslog or /var/log/messages | 17:19 |
*** rongze has quit IRC | 17:19 | |
*** rongze has joined #openstack-swift | 17:20 | |
creiht | If you need more help with keystone, I recommend asking in #openstack-dev | 17:20 |
sphoorti | Thanks creiht :). I shall ask there too | 17:23 |
*** chandan_kumar has quit IRC | 17:24 | |
*** rongze has quit IRC | 17:24 | |
*** kris_ha has quit IRC | 17:28 | |
*** nacim has quit IRC | 17:37 | |
*** xga has quit IRC | 17:37 | |
*** peluse has joined #openstack-swift | 17:47 | |
*** byeager has quit IRC | 17:50 | |
*** zaitcev has joined #openstack-swift | 17:54 | |
*** ChanServ sets mode: +v zaitcev | 17:54 | |
sphoorti | hey zaitcev, i tried running the swift server again and this time I get the following error :- http://paste.openstack.org/show/65103/ | 17:57 |
zaitcev | sphoorti: I dunno, maybe try --insecure | 17:58 |
sphoorti | the same SSL exception zaitcev | 17:58 |
*** mkollaro has quit IRC | 18:01 | |
zaitcev | sphoorti: good luck... I would verify with keystone's own client and/or curl and see how far that goes. | 18:02 |
*** mmcardle has quit IRC | 18:05 | |
*** bantone has joined #openstack-swift | 18:09 | |
sphoorti | zaitcev: the curl command worked http://paste.openstack.org/show/65124/ | 18:11 |
zaitcev | sphoorti: no it didn't, you used http:// there. Please pay a little attention! | 18:11 |
sphoorti | zaitcev: curl -i -H "X-Auth-Key: devstack123" -H "X-Auth-User: admin" https://192.168.1.4:5000/v2.0 | 18:12 |
sphoorti | curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol | 18:12 |
zaitcev | *facepalm* | 18:13 |
sphoorti | :( | 18:15 |
torgomatic | sphoorti: you're trying to speak HTTPS to an HTTP server; it's not gonna work. Either turn on SSL on 192.168.1.4:5000 (whatever it is) or tell swiftclient to use http. | 18:19 |
sphoorti | ssl is enabled in /etc/keystone.conf | 18:19 |
sphoorti | torgomatic: | 18:20 |
torgomatic | sphoorti: well, you spoke http to port 5000 and it worked, so ssl is not enabled in the listening process | 18:20 |
torgomatic | config file or no | 18:20 |
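torgomatic's diagnosis can be checked directly: attempt a TLS handshake against the port and see whether it completes. This is a purely illustrative probe (the `speaks_tls` helper is hypothetical, not part of any OpenStack tool); a plain-HTTP listener answers the ClientHello with HTTP bytes, which surfaces client-side as an SSL error like curl's "unknown protocol" above.

```python
import socket
import ssl

def speaks_tls(host, port, timeout=3.0):
    """Return True only if the listener completes a TLS handshake.

    A plain-HTTP listener (like the keystone being debugged here)
    answers the TLS ClientHello with HTTP bytes, which shows up
    client-side as an SSLError such as "unknown protocol".
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            # Certificate checks are disabled: we only care whether the
            # peer speaks TLS at all, not whether its cert is valid.
            ctx = ssl.create_default_context()
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except ssl.SSLError:
        return False   # something answered, but not with TLS
    except OSError:
        return False   # nothing listening, refused, or timed out
```

curl's `error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol` at 18:12 is exactly the SSLError branch: keystone was listening on port 5000, but speaking plain HTTP regardless of the config file.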
*** byeager has joined #openstack-swift | 18:20 | |
sphoorti | sorry that was commented. | 18:21 |
sphoorti | would i have to rerun stack.sh torgomatic ? | 18:21 |
peluse | torgomatic: think we oughta include # containers per policy in acct HEAD as well (have it working now but only including obj count and total bytes per policy)? | 18:22 |
torgomatic | sphoorti: I have no idea; I only run devstack when stuff breaks that I can't replicate on my SAIO | 18:22 |
sphoorti | torgomatic: if i try with http, then i get the following error :- | 18:23 |
sphoorti | Endpoint for object-store not found - have you specified a region? | 18:23 |
torgomatic | peluse: hm... maybe? | 18:23 |
*** csd has joined #openstack-swift | 18:23 | |
torgomatic | sphoorti: yeah, that's a keystone catalog thingy, and the words "keystone catalog thingy" represent the extreme outer envelope of my keystone knowledge | 18:24 |
torgomatic | you might try #openstack-dev for that | 18:24 |
torgomatic | peluse: so for objects, I have a use case in my head: billing differently for different policies | 18:24 |
sphoorti | i tried asking there :( | 18:24 |
torgomatic | sphoorti: well, try looking in the catalog (how? I dunno) and seeing if there's a thing for "object-store" in it | 18:25 |
*** foexle has quit IRC | 18:26 | |
*** markd has joined #openstack-swift | 18:28 | |
*** byeager has quit IRC | 18:29 | |
*** mdonohoe has quit IRC | 18:29 | |
peluse | torgomatic: yup, the obj billing is the clear use case. I don't see much need to report containers per policy but thought it might be nice just as informational. Would be easier to add it now than sometime later, which is why I ask. Not there now, let me know if you think I should add it... | 18:31 |
torgomatic | peluse: On the one hand, it provides a nice symmetry with objects, but on the other hand, it's extra bookkeeping and hence extra IO... I'm leaning towards not doing it | 18:32 |
torgomatic | notmyname: clayg: thoughts? ^^ | 18:32 |
sphoorti | torgomatic: when I run service keystone status, it says keystone: unrecognized service | 18:32 |
torgomatic | sphoorti: well, keystone is clearly running because you have something listening on port 5000, or at least I'm assuming that thing is keystone | 18:34 |
torgomatic | and what does checking the daemon state have to do with getting the catalog out? | 18:34 |
torgomatic | I mean, I guess one's a prerequisite, but you know it's running because you can talk to it | 18:34 |
*** mdonohoe has joined #openstack-swift | 18:35 | |
zaitcev | His catalog is fine, he pasted it yesterday. It's perfect. I think he manages to get a wrong URL somehow, because one of his pastes had some bizarre nonsense like posting to /v2.0/regionOne/token. | 18:35 |
sphoorti | keystone tenant-list and service-list do not work. | 18:36 |
zaitcev | worked here - http://paste.openstack.org/show/64874/ | 18:37 |
*** markd has quit IRC | 18:38 | |
torgomatic | zaitcev: weird. I don't think swiftclient's keystone support has changed at all in a long time, so I don't understand where the wrong URL is coming from | 18:39 |
torgomatic | hm, it looks like swiftclient imports keystoneclient.v2_0 to handle keystone auth, so maybe it's a bug in keystoneclient? | 18:40 |
torgomatic | no, that doesn't make sense, the devstack gates wouldn't work at all if that were true | 18:40 |
zaitcev | crazy | 18:41 |
sphoorti | I get an "Authorization Failure. Authorization Failed: Unable to establish connection to http://192.168.1.4:5000/v2.0/tokens " error | 18:42 |
*** ccorrigan has quit IRC | 18:45 | |
*** byeager has joined #openstack-swift | 18:57 | |
*** shri has joined #openstack-swift | 19:04 | |
*** vkdrao has quit IRC | 19:08 | |
*** markd has joined #openstack-swift | 19:14 | |
*** mdonohoe has quit IRC | 19:16 | |
*** tdasilva has quit IRC | 19:19 | |
*** mdonohoe has joined #openstack-swift | 19:25 | |
*** markd has quit IRC | 19:26 | |
creiht | what in the world does LOST mean on the jenkins runs? | 19:32 |
notmyname | I've not seen that before | 19:33 |
notmyname | clarkb: what's up with "LOST" in jenkins? | 19:33 |
clarkb | usually it means the gearman job went away catastrophically | 19:34 |
clarkb | notmyname: have a link to a specific instance? | 19:34 |
creiht | clarkb: https://review.openstack.org/#/c/73037/https://review.openstack.org/#/c/73037/ | 19:34 |
creiht | erm | 19:34 |
creiht | https://review.openstack.org/#/c/73037/ | 19:34 |
clarkb | give me a few moments to dig through logs | 19:35 |
notmyname | clarkb: thanks | 19:35 |
notmyname | clarkb: also seems it never picked up the approval to merge (after the LOST runs). I don't know if that's related | 19:36 |
*** bsdkurt has joined #openstack-swift | 19:36 | |
*** mdonohoe has quit IRC | 19:37 | |
*** markd has joined #openstack-swift | 19:38 | |
clarkb | notmyname: not sure what is going on there, but I think jeblair and sdague were tweaking zuul this morning, going to hop over to infra and ask there | 19:39 |
*** tdasilva has joined #openstack-swift | 19:39 | |
notmyname | clarkb: thanks for looking. I'll lurk in -infra | 19:39 |
*** bsdkurt has quit IRC | 19:40 | |
*** bsdkurt has joined #openstack-swift | 19:41 | |
notmyname | creiht: looks like you've got an interesting discussion going in gerrit with torgomatic about the auditor | 19:42 |
creiht | hah | 19:42 |
creiht | was just reading that | 19:42 |
creiht | torgomatic: so I get what you are saying | 19:43 |
creiht | I guess I need to think on it a bit | 19:43 |
*** gyee has quit IRC | 19:43 | |
creiht | my biggest worry is people thinking, oh heck yeah make that happen quick | 19:43 |
creiht | and wonder why their systems are behaving so badly | 19:43 |
notmyname | creiht: FWIW the defaults are still at concurrency=1 | 19:45 |
creiht | yeah I know | 19:45 |
creiht | notmyname: but part of the problem is that this isn't even a verified issue yet (at least from reading the commit description) | 19:45 |
creiht | I would like to verify that a.) This is really an issue and b.) this fix actually solves that problem | 19:46 |
*** fbo is now known as fbo_away | 19:47 | |
*** mdonohoe has joined #openstack-swift | 19:49 | |
prometheanfire | notmyname: do you know who was working on zfs as a swift backend (using the zfs internal object model thing) | 19:49 |
notmyname | prometheanfire: nexenta talked about it a _long_ time ago | 19:50 |
prometheanfire | notmyname: if you have email addrs of devs I'd be interested :D | 19:50 |
notmyname | prometheanfire: no, I don't. | 19:50 |
notmyname | prometheanfire: so what are you looking at? | 19:50 |
*** luisbg has joined #openstack-swift | 19:50 | |
prometheanfire | two things | 19:51 |
prometheanfire | first is using zfs as the backing store for swift | 19:51 |
prometheanfire | instead of storing objects on a fs you store objects in an object storage system | 19:51 |
prometheanfire | zfs uses objects natively (internally) | 19:51 |
prometheanfire | second | 19:51 |
*** markd has quit IRC | 19:52 | |
prometheanfire | using zfs as a presentation layer for swift (kinda like the swift fuse thing) | 19:52 |
prometheanfire | since zfs speaks objects natively | 19:52 |
notmyname | so to the first, swift is not an abstraction layer to an object storage system. swift is actually an object storage system itself, designed to abstract away storage volumes so that API clients don't have to think about the hard problems of scaling storage | 19:52 |
prometheanfire | right | 19:53 |
creiht | swift is an object storage system that has an abstraction layer that is going to get abused whether you like it or not ;) | 19:53 |
notmyname | and the work that's been happening with DiskFile is designed to allow you to make optimizations for particular storage volumes. it's not designed to put swift on another abstraction layer | 19:53 |
*** ryao has joined #openstack-swift | 19:53 | |
ryao | Hi. | 19:53 |
prometheanfire | ryao: hi | 19:53 |
prometheanfire | notmyname: are we talking about zfs as a presentation layer for swift or zfs as a backing store for swift? | 19:54 |
notmyname | prometheanfire: the latter | 19:54 |
prometheanfire | zfs as a backing store may be better because it speaks objects natively | 19:54 |
prometheanfire | instead of storing an object as a file (like you would in xfs) you store it as an object | 19:55 |
ryao | notmyname: Internally, ZFS is a transactional object store, so it could work well for something like this. | 19:55 |
prometheanfire | notmyname: ryao is a ZoL dev | 19:55 |
creiht | torgomatic: so back to the auditor stuff, if it is a real problem, I would revisit the idea of adding the parallelism | 19:55 |
creiht | of course it is just a -1, so if others think otherwise, they can certainly go ahead :0 | 19:55 |
creiht | meant :) | 19:55 |
notmyname | creiht: I'm guessing that he's on a train headed to the office | 19:55 |
ryao | Unfortunately, my time is very limited today, so I can't talk much right now. I am being paid to try to recover data lost because someone did `zfs destroy dataset` when there were no backups. | 19:56 |
notmyname | prometheanfire: I think it's a great idea to look into particular file systems and use a DiskFile (and other stuff not yet fully realized) to store data. eg XFS, and yes ZFS too | 19:56 |
ryao | I can talk for a few minutes, but an extended discussion would need to wait until next week. | 19:56 |
prometheanfire | ryao: I think I can talk about the basics, just wanted you in here in case you were interested in direct interaction :D | 19:57 |
prometheanfire | notmyname: ya, that's about all there is to say about ZoL as a backing store for swift | 19:57 |
prometheanfire | now, about zfs as a presentation layer | 19:57 |
prometheanfire | that would be using swift as a new type of vdev (a networked type) | 19:58 |
ryao | I am more interested in the latter. Swift + ZFS for easy offsite backup is cool. | 19:58 |
prometheanfire | and that's the summary :D | 19:58 |
notmyname | prometheanfire: seems fraught with peril :-) | 19:58 |
prometheanfire | notmyname: therefore fun :D | 19:58 |
notmyname | there are several gateway systems that are out there to provide a POSIX interface onto Swift. essentially, you're talking about the same thing | 19:59 |
prometheanfire | basically | 19:59 |
ryao | prometheanfire: After thinking about it some more, I don't think a new vdev type would be the best place to put this... we would have already had object to data virtual address translation done in the DVA, which means potential inefficiency due to internal fragmentation. Ideally, it would hook into the middle layer of ZFS. | 19:59 |
prometheanfire | ryao: that's what I thought you meant :P | 20:00 |
ryao | notmyname: Have you seen ZFS send/recv? | 20:00 |
ryao | notmyname: It is similar to rsync for backups with 1 key difference. It uses a unidirectional pipe. | 20:00 |
notmyname | ryao: seen it. not used it. I've got a ZFS/OpenIllumos file server at home | 20:00 |
ryao | notmyname: It is neat for off site backups. | 20:00 |
creiht | but send/recv only works with snapshots right? | 20:02 |
prometheanfire | true, iirc | 20:02 |
ryao | creiht: Yes. That is necessary to ensure consistency. | 20:02 |
creiht | and it can only send/recv the whole snapshot | 20:02 |
creiht | so not really like rsync :) | 20:03 |
prometheanfire | it can send diffs though | 20:03 |
ryao | creiht: Doing rsync on a live system suffers from a race condition. If a modification is made to the uncopied region that depends on a modification to the copied region after it is copied, bad things happen. | 20:03 |
ryao | creiht: It supports incremental send/recv. | 20:03 |
ryao | creiht: i.e. all changes between snapshot A and snapshot B. | 20:03 |
creiht | right between specific snapshots | 20:04 |
ryao | creiht: And it puts less IO load on the system because ZFS knows the list of changes. | 20:04 |
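The point creiht and ryao are circling can be shown with a toy model: treat a snapshot as an immutable map of block number to content. rsync has to scan every block of the live tree to discover differences (and races against concurrent writes), while an incremental send only ships the blocks ZFS already knows changed between two snapshots. This illustrates the idea only, not the actual ZFS on-disk format:

```python
def incremental_send(snap_a, snap_b):
    """Blocks that must travel for an incremental send from snap_a to snap_b.

    Snapshots are immutable, so the change set is exact and internally
    consistent; no live-modification race is possible, which is why
    send/recv requires snapshots rather than the live dataset.
    """
    return {blk: data for blk, data in snap_b.items()
            if snap_a.get(blk) != data}

snap_a = {0: 'aaa', 1: 'bbb', 2: 'ccc'}
snap_b = {0: 'aaa', 1: 'BBB', 2: 'ccc', 3: 'ddd'}  # block 1 rewritten, 3 added

delta = incremental_send(snap_a, snap_b)
# Only the two changed blocks travel; rsync would have read all four
# on both sides to reach the same conclusion.
```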
creiht | anyways, I'll let you go on | 20:04 |
ryao | creiht: One last thing. ZFS is not just a POSIX front-end. It also supports block storage. | 20:04 |
* creiht is fairly familiar with zfs | 20:04 | |
creiht | I'm also not notmyname :) | 20:04 |
ryao | creiht: My mistake. | 20:05 |
creiht | no worries | 20:05 |
*** mdonohoe has quit IRC | 20:05 | |
*** markd has joined #openstack-swift | 20:05 | |
creiht | so if we skip all the details, exactly what are you wanting to propose? | 20:05 |
ryao | creiht: Nothing right now. I was just chatting with prometheanfire last night about some guys using ZFS on Amazon S3 with a FUSE program called s3backer. I mentioned various deficiencies in S3 that make this less than ideal and prometheanfire told me that Swift has no such deficiencies. | 20:06 |
notmyname | heh | 20:07 |
creiht | lol | 20:07 |
creiht | well there are certainly deficiencies for trying to make any type of posix like file system on top of swift | 20:07 |
ryao | In particular, swift supports a variable object size and Rackspace Cloud Files has no request fees. | 20:07 |
prometheanfire | ryao: about that, I don't think s3 forces a block size | 20:07 |
ryao | prometheanfire: It forces an object size in the bucket. | 20:07 |
creiht | ryao: consistency is your biggest enemy | 20:07 |
notmyname | creiht: sounds like ryao is trying to make your life hard keeping cloud files up :-) | 20:08 |
creiht | lol | 20:08 |
ryao | notmyname: Why do you say that? | 20:08 |
prometheanfire | I still have my delete script that kills things :P | 20:08 |
notmyname | ryao: just when someone starts talking about "no request fees" as a perk (actually I think it's pretty awesome), ops guys start cringing thinking about what the cluster request rates are about to do :-) | 20:09 |
marcusvrn | portante: ping | 20:09 |
ryao | creiht: I am aware. Thankfully, ZFS is fully asynchronous internally, so I suspect this can be handled. I know that s3backer manages it (although I am not sure if I like how it does that). | 20:09 |
creiht | notmyname: nah, that's what we have rate limits for :) | 20:09 |
notmyname | creiht: if user_agent == 'ZFS': rate_limit *= .5 | 20:09 |
notmyname | ;-) | 20:09 |
ryao | creiht: I have a counter point to that, but I will keep quiet. | 20:09 |
notmyname | ryao: nah, just joking around, mostly | 20:10 |
ryao | Actually, I will say it. ZFS is not a single disk filesystem and it can do striping with no redundancy. ;) | 20:10 |
notmyname | ryao: but since we realize some zfs+swift bridge would be pretty chatty (probably), it's just a thing | 20:10 |
ryao | notmyname: Probably less chatty than the s3backer approach in production on Amazon's systems. | 20:10 |
ryao | notmyname: I have seen it firsthand and it works decently. | 20:11 |
ryao | notmyname: Without the deficiencies of S3, ZFS could work marvelously on an object storage backend. Anyway, this all hit me last night when chatting with prometheanfire in #gentoo-dev. | 20:12 |
notmyname | ryao: ya, it's cool I'd love to see something like that for swift, like there is for S3. this would be a great place to ask questions and see the best ways to make such a client for swift | 20:12 |
prometheanfire | speaking of, anyone going to scale? | 20:13 |
*** mdonohoe has joined #openstack-swift | 20:13 | |
koolhead17 | https://ask.openstack.org/en/question/11874/swift-middleware/ << looks like more of ceilometer related question rather than swift | 20:13 |
ryao | notmyname: Well, my hands are tied finishing work-in-progress ZFS stuff for the next 3 to 5 weeks. If this still looks sane to me afterward, I expect to try creating a prototype. | 20:13 |
notmyname | ryao: cool | 20:14 |
*** dmsimard1 has joined #openstack-swift | 20:14 | |
notmyname | koolhead17: I think we fixed that issue (secondary groups when swift drops privs) | 20:14 |
notmyname | koolhead17: ya merged jan 27 | 20:15 |
koolhead17 | notmyname: So might not have gone with distro yet :) | 20:15 |
notmyname | koolhead17: doesn't look like it was in 1.12 | 20:15 |
notmyname | koolhead17: so it hasn't been in an official swift release | 20:16 |
*** markd has quit IRC | 20:16 | |
koolhead17 | hmm. so what should i reply | 20:16 |
*** dmsimard has quit IRC | 20:17 | |
notmyname | koolhead17: known issue, already fixed. will be in the next release. https://bugs.launchpad.net/swift/+bug/1269473 | 20:17 |
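The bug koolhead17's question traces back to (1269473): when swift dropped root privileges it switched uid/gid without resetting the supplementary group list, so the worker kept root's secondary groups. The fix is essentially to reset supplementary groups via initgroups before setgid/setuid. A simplified sketch of the corrected ordering, not the actual swift.common.utils code; the OS calls are injectable so the ordering can be exercised without root:

```python
import os
import pwd

def drop_privileges(username, _os=os):
    """Switch to an unprivileged user, clearing inherited secondary groups.

    Order matters: supplementary groups and gid must be set while the
    process still has the privilege to change them, i.e. before setuid().
    """
    user = pwd.getpwnam(username)
    _os.initgroups(username, user.pw_gid)  # reset supplementary groups
    _os.setgid(user.pw_gid)
    _os.setuid(user.pw_uid)
    _os.environ['HOME'] = user.pw_dir
```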
koolhead17 | thanks | 20:18 |
*** markd has joined #openstack-swift | 20:18 | |
*** saju_m has quit IRC | 20:19 | |
openstackgerrit | A change was merged to openstack/python-swiftclient: Rename Openstack to OpenStack https://review.openstack.org/73200 | 20:20 |
openstackgerrit | A change was merged to openstack/python-swiftclient: Remove extraneous vim configuration comments https://review.openstack.org/73227 | 20:20 |
*** mdonohoe has quit IRC | 20:20 | |
*** byeager has quit IRC | 20:24 | |
*** byeager has joined #openstack-swift | 20:24 | |
*** sphoorti has quit IRC | 20:25 | |
*** mdonohoe has joined #openstack-swift | 20:31 | |
*** markd has quit IRC | 20:33 | |
*** markd has joined #openstack-swift | 20:35 | |
luisbg | for Swift development do you recommend I run the Swift All In One or devstack? | 20:35 |
notmyname | luisbg: for dev work, I'd recommend the swift all in one | 20:35 |
luisbg | notmyname, ok, cool. thanks | 20:36 |
*** tongli has quit IRC | 20:36 | |
luisbg | notmyname, do you run it in a vm or directly on the machine? | 20:36 |
*** mdonohoe has quit IRC | 20:36 | |
*** mdonohoe has joined #openstack-swift | 20:37 | |
notmyname | luisbg: my primary dev "box" is a VirtualBox image. but I also have an SAIO running at a cloud hosting provider too. either way works. a SAIO is not for performance testing. it's only for functionality | 20:38 |
*** markd has quit IRC | 20:39 | |
luisbg | notmyname, have to fix the networking here in the office, VirtualBox needs a router/switch that supports bridge mode | 20:39 |
notmyname | luisbg: in addition to using the SAIO docs at http://swift.openstack.org, you could look at https://github.com/swiftstack/vagrant-swift-all-in-one | 20:39 |
portante | marcusvrn: pong | 20:40 |
portante | sorry, been in meetings | 20:41 |
portante | creiht: what is the auditor gerrit id? | 20:41 |
portante | discussion thingy? | 20:41 |
*** markd has joined #openstack-swift | 20:41 | |
luisbg | notmyname, wow, vagrant looks nice. going to try it | 20:41 |
notmyname | portante: https://review.openstack.org/#/c/59778/ | 20:42 |
portante | ah, deep | 20:42 |
*** mdonohoe has quit IRC | 20:43 | |
*** mdonohoe has joined #openstack-swift | 20:55 | |
dmsimard1 | Would a bug in the ceilometer swift-proxy middleware be a swift bug or a ceilometer bug (or both!?) | 20:55 |
*** markd has quit IRC | 20:57 | |
creiht | dmsimard1: most likely a ceilometer bug, since the code resides in ceilometer right? | 20:59 |
creiht | (the middleware code) | 20:59 |
*** dmsimard1 is now known as dmsimard | 20:59 | |
*** kmartin has joined #openstack-swift | 20:59 | |
dmsimard | creiht: I'm thinking that too. Eh, I'll file something, see what happens. | 21:00 |
*** markd has joined #openstack-swift | 21:00 | |
*** shri has quit IRC | 21:00 | |
*** mdonohoe has quit IRC | 21:02 | |
*** mdonohoe has joined #openstack-swift | 21:03 | |
luisbg | notmyname, that vagrant setup was amazingly easy! | 21:04 |
luisbg | notmyname, do you know if there are more for a full openstack environment? like devstack | 21:05 |
*** bsdkurt has quit IRC | 21:05 | |
*** markd has quit IRC | 21:05 | |
*** markd has joined #openstack-swift | 21:07 | |
*** mdonohoe has quit IRC | 21:09 | |
notmyname | luisbg: I don't know. alpha_ori and clayg set up that swift one for us at SwiftStack | 21:09 |
luisbg | notmyname, nice :) | 21:09 |
luisbg | notmyname, https://github.com/bcwaldon/vagrant_devstack <-- going to try that | 21:11 |
*** mdonohoe has joined #openstack-swift | 21:11 | |
*** Diddi has quit IRC | 21:12 | |
*** shri has joined #openstack-swift | 21:12 | |
*** markd has quit IRC | 21:14 | |
torgomatic | creiht: for the auditor stuff, I actually have a customer who pays attention to audit times and wants them as low as possible (because reasons, I guess) | 21:19 |
*** csd has quit IRC | 21:20 | |
torgomatic | I think they're running about 45-day audit cycles now, but if they could drop that a bunch they'd be really happy | 21:20 |
torgomatic | making the auditor faster was on my list of stuff to do after storage policies, so I'm happy someone else did it for me :) | 21:20 |
dmsimard | Wow 45 days | 21:21 |
torgomatic | dmsimard: lots of disks, lots of files per disk, and they can't run too fast without destroying performance on the auditor's current target | 21:21 |
dmsimard | torgomatic: lots is probably an understatement :p | 21:22 |
dmsimard | Still, I'm impressed. | 21:22 |
creiht | the auditor was never meant to end quick :) | 21:23 |
creiht | I'm just worried that it will cause more issues than fix by parallelizing it | 21:23 |
torgomatic | creiht: well, no, but there's a big range between "quick" and "this is never going to finish, is it" :) | 21:23 |
*** markd has joined #openstack-swift | 21:24 | |
creiht | lol | 21:24 |
creiht | well when we wrote it, the expectation was that it could take a month or two to complete | 21:24 |
notmyname | mission accomplished! | 21:24 |
torgomatic | hehe | 21:24 |
creiht | lol | 21:24 |
swifterdarrell | hahaha | 21:24 |
portante | month or two? | 21:25 |
swifterdarrell | or three. whatever | 21:25 |
portante | gives you some degrees of freedom, huh? | 21:25 |
creiht | well disk io is expensive | 21:25 |
portante | disks are going away, no? | 21:25 |
portante | ;) | 21:25 |
*** mdonohoe has quit IRC | 21:25 | |
notmyname | the target would be to have the auditor run faster than the rate of failure in the media (bit rot). fs corruption is mostly taken care of with the ZBF (zero-byte file) auditor | 21:25 |
creiht | right | 21:26 |
notmyname | but instead of "do everything you can to check this one drive" over and over again, doesn't parallelizing it allow for smaller overall impact to the node? | 21:27 |
notmyname | to torgomatic's point | 21:27 |
swifterdarrell | creiht: say you limit the ALL auditor to 15 MB/s per disk, you have 24 disks to a chassis, and the disks are 50% full. That's an audit cycle time of ~1 month (google sez 0.957 months) | 21:27 |
swifterdarrell | creiht: the I/O cost per spindle is the same in that case as doing that 15 MB/s on every disk at once | 21:27 |
swifterdarrell | creiht: so... | 21:27 |
swifterdarrell | creiht: help me see the downside here | 21:27 |
creiht | when the auditor is running on that disk, you might as well count it as not available | 21:28 |
portante | swifterdarrell: wait, 15 MB per disk ? | 21:28 |
portante | 15 MB/s per disk | 21:28 |
creiht | if you run it in parallel, that whole system is not available | 21:28 |
portante | or do you mean 15 MB/s for all disk? | 21:28 |
portante | s | 21:28 |
creiht | on a whole cluster by default, the auditor currently basically takes out one drive at a time | 21:28 |
swifterdarrell | creiht: if that's true, then each disk is good for one streaming download from one proxy-server on behalf of one customer? really? | 21:29 |
portante | takes out? | 21:29 |
portante | wow | 21:29 |
portante | I hope that is not the design. :) | 21:29 |
creiht | I mean take out figuratively | 21:29 |
portante | yeah, but, it seems folks are not seeing figurative audits. :) | 21:29 |
creiht | the amount of disk io a single disk can do is substantially reduced when the auditor is running | 21:29 |
* creiht sighs | 21:30 | |
swifterdarrell | creiht: same setup as above, but 24 auditors work on a different disk each, and the whole audit sweep time is ~1.2 days (vs ~ 1mo) | 21:30 |
*** markd has quit IRC | 21:30 | |
portante | but how are all those drives connected? | 21:30 |
creiht | tubes | 21:30 |
zaitcev | torgomatic: since you're poking at auditor, did you ever try to do strace -p on a real one? I did and it looked as if the one that's supposed to look for 0-size files read files anyway. | 21:30 |
portante | ;) | 21:30 |
portante | yes! | 21:30 |
swifterdarrell | creiht: k, limit the auditors to 2 MB/s per spindle. That's 7.5 TIMES less load per spindle yet the system cycle time is now 9 days instead of around 30 | 21:31 |
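swifterdarrell's three figures work out if you assume roughly 3 TB drives, a plausible size for the era but not stated anywhere in this discussion. A quick back-of-the-envelope check:

```python
DISKS = 24
DISK_TB = 3.0        # assumed drive size; not stated in the discussion
FILL = 0.5           # disks 50% full
MB_PER_TB = 1_000_000

def cycle_days(rate_mb_s, parallel):
    """Full audit sweep time for one chassis, in days.

    Serial: a single auditor walks the disks one at a time at rate_mb_s.
    Parallel: one auditor per disk, each reading at rate_mb_s.
    """
    data_mb = DISK_TB * FILL * MB_PER_TB          # per-disk data to read
    per_disk_s = data_mb / rate_mb_s
    total_s = per_disk_s if parallel else per_disk_s * DISKS
    return total_s / 86400

serial = cycle_days(15, parallel=False)   # ~28 days: the "about a month"
fast   = cycle_days(15, parallel=True)    # ~1.2 days, same per-spindle load
gentle = cycle_days(2,  parallel=True)    # ~8.7 days at 7.5x less load
```

The per-spindle I/O cost in the `fast` case equals the serial case; only the wall-clock sweep time changes, which is the crux of the argument.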
portante | won't that parallelism stress the tubs? | 21:31 |
portante | tubes? | 21:31 |
* creiht sighs | 21:31 | |
torgomatic | zaitcev: no, I haven't tried it | 21:31 |
creiht | like I said, it is just a -1 | 21:31 |
creiht | if you guys feel it is that important, go ahead | 21:31 |
creiht | I still just fail to see the use | 21:32 |
portante | creiht: but don't give up so easily | 21:32 |
creiht | meh | 21:32 |
creiht | :) | 21:32 |
portante | I think there is something here with regard to where that parallelism shifts the behaviors | 21:32 |
swifterdarrell | creiht: sorry, i'm sort of coming into the middle of the convo... but is the argument against the patch that it's bad for anyone to be able to do? Or that adding the ability to parallelize hinders a one-processor auditor deployment (my understanding is that it does not)? | 21:32 |
creiht | swifterdarrell: I had two arguments | 21:32 |
creiht | 1.) is this really even a problem? | 21:33 |
zaitcev | The important questions I'm asking are 1) is Sam going to blow up existing clusters, 2) does the code become an unmaintainable bloated mess y/n? If it passes that, he can have all the parallelism he wants, I'm all for it. | 21:33 |
creiht | 2.) parallelizing it will just cause other problems in any system that enables it | 21:33 |
creiht | #1 I'm still not convinced of | 21:33 |
creiht | there might be a *perceived* problem, but it is working as designed | 21:34 |
zaitcev | re Chuck's 2 is why we add a regrettable knob to enable/disable (I prefer fewer knobs, but oh well) | 21:34 |
creiht | heh | 21:34 |
*** markd has joined #openstack-swift | 21:34 | |
creiht | yeah I know | 21:34 |
portante | fwiw, it is reasonable to show a problem being solved for #1, as adding code for parallelism does add the possibility that existing code is impacted | 21:34 |
creiht | if it does go in, can we make it so that the one check in the recon stuff isn't recursive? | 21:35 |
portante | For #2, though, is it fair to say "will cause other problems", or is it possible it might cause problems, but folks can experiment | 21:35 |
swifterdarrell | creiht: k, ya, sounds like some fundamental disagreement. But as long as it doesn't hurt ppl who don't want to attempt parallelization, I think the the Real World(tm) will tell us all the true answer to #2 soon enough | 21:35 |
creiht | I propose a compromise | 21:35 |
torgomatic | at least one customer of mine is rather unhappy with the long audit time, but they can't shrink it without demolishing performance (more) on a disk | 21:35 |
torgomatic | (the hot disk, that is) | 21:36 |
creiht | I noticed that an option was added to run the audit on a specific set of disks | 21:36 |
creiht | what if we just added that to start | 21:36 |
creiht | and you can then experiment with running multiple numbers of auditors | 21:36 |
creiht | in a real env | 21:36 |
creiht | if that doesn't cause perf issues then revisit the parallelization | 21:37 |
swifterdarrell | creiht: that seems reasonable | 21:37 |
portante | sounds good to me | 21:38 |
creiht | as I can see where it might be useful to audit a certain disk out of band anyways | 21:38 |
swifterdarrell | creiht: running N (count of disks) swift-object-auditor processes with N configs which tell them to each pay attention to a different one of the N should be close enough to infinite parallelism, and is just config files and proc mgmt | 21:38 |
swifterdarrell | creiht: I dig it | 21:38 |
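The compromise above: approximate full parallelism with N single-disk auditor processes, each with its own config pinning it to one disk. A sketch of generating those configs; the `disks` option name is hypothetical, since the actual name added by the patch under review isn't quoted in this log:

```python
import configparser
import io

def per_disk_configs(base_conf, disks):
    """Yield (disk, config_text) pairs, one auditor config per disk.

    base_conf is the text of the shared object-server config; each copy
    pins the auditor to a single disk so N processes audit N disks at once,
    with nothing but config files and process management.
    """
    for disk in disks:
        parser = configparser.ConfigParser()
        parser.read_string(base_conf)
        if not parser.has_section('object-auditor'):
            parser.add_section('object-auditor')
        parser.set('object-auditor', 'disks', disk)  # hypothetical option
        buf = io.StringIO()
        parser.write(buf)
        yield disk, buf.getvalue()

base = "[DEFAULT]\ndevices = /srv/node\n"
configs = dict(per_disk_configs(base, ['sdb1', 'sdc1', 'sdd1']))
```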
portante | creiht: got a nice sound today | 21:38 |
torgomatic | eh, I guess that could work | 21:39 |
creiht | lol | 21:39 |
torgomatic | I mean, more config files and more processes doesn't bother me since I have code to write those | 21:39 |
creiht | torgomatic: it would just be short term for experimentation | 21:39 |
torgomatic | creiht: alright, and then if it does actually help someone then we put the parallelism in and operators don't have to juggle extra configs? | 21:40 |
creiht | sure | 21:40 |
torgomatic | alright | 21:40 |
swifterdarrell | torgomatic: ya... w/real-world numbers that don't show it destroys things, it's a no-brainer | 21:40 |
creiht | I would prefer to make changes based on data rather than guesses :) | 21:41 |
*** csd has joined #openstack-swift | 21:41 | |
creiht | and it would still be nice for someone to audit the auditor to see if there is a way to make it less io intensive | 21:42 |
swifterdarrell | creiht: but who would audit the auditor auditors?! | 21:42 |
creiht | swifterdarrell: well the first step would be to create an openstack program | 21:42 |
portante | we have a way | 21:42 |
swifterdarrell | creiht: maybe just think REALLY hard about the file contents instead of reading them? | 21:42 |
swifterdarrell | hahahaha | 21:42 |
portante | kinda | 21:43 |
notmyname | swifterdarrell: "I'm not a computer scientologist, but that sounds easy" | 21:43 |
openstackgerrit | Luis de Bethencourt proposed a change to openstack/swift: small typo in cli/recon.py https://review.openstack.org/73425 | 21:49 |
luisbg | \o/ ^ | 21:50 |
zaitcev | heh | 21:55 |
notmyname | luisbg is a new contributor (to swift and openstack) and found a little typo to submit to see what the process is like | 21:56 |
notmyname | I guess we should have gone back and forth and ignored it for a few weeks first ;-) | 21:57 |
luisbg | notmyname, that would've helped my motivation to stick around :P | 21:57 |
notmyname | ya. it's actually somewhat of a problem we're currently trying to fix | 21:58 |
luisbg | notmyname, let me know if I can help :) | 21:58 |
luisbg | maybe new contributors that start getting the tropes, can mentor newer contributors | 21:58 |
luisbg | I enjoyed reading Julie's email about "attracting new contributors" a few days ago | 21:59 |
luisbg | side question: how many +1's does a review need before it gets merged with the main branch? | 21:59 |
notmyname | normally patches need 2 +2s before they are approved | 22:00 |
notmyname | so 2 core reviewers need to approve it first | 22:00 |
*** Trixboxer has quit IRC | 22:00 | |
notmyname | smaller patches that don't have any side effects (like yours) may get merged with just one reviewer. but it's rare | 22:00 |
luisbg | got it, +2 sounds good for all really | 22:02 |
luisbg | smaller patches will have an easier time finding reviewers | 22:02 |
notmyname | luisbg: the difference is that core reviewers can add a +2 and non-core reviewers can only add a +1 | 22:02 |
luisbg | notmyname, I thanked you, is the other reviewer here to thank him as well? :) | 22:04 |
luisbg | Samuel | 22:04 |
notmyname | luisbg: oh did he +2 it too? | 22:05 |
torgomatic | honestly, "+2" and "+1" are terrible names for the actions | 22:05 |
notmyname | luisbg: sam == torgomatic | 22:05 |
torgomatic | because "+1" and "+1" do not equal "+2" | 22:05 |
*** bsdkurt has joined #openstack-swift | 22:06 | |
torgomatic | and this confuses basically every single person that ever starts using Gerrit | 22:06 |
torgomatic | but then, naming things is one of the two hard problems of computer science :) | 22:06 |
notmyname | sw-approved? | 22:06 |
creiht | notmyname: man you approved without 2 +2's | 22:11 |
creiht | shame :0 | 22:11 |
notmyname | creiht: torgomatic saved me by giving his +2 also (after I approved it) | 22:12 |
luisbg | torgomatic, thanks! :) | 22:12 |
torgomatic | luisbg: you're welcome | 22:12 |
luisbg | wait, can someone explain the +2? | 22:12 |
luisbg | is +2 = the second +1? | 22:12 |
torgomatic | notmyname: yeah, you almost went to OpenStack jail ;) | 22:13 |
creiht | notmyname: and I did, just for good measure :) | 22:13 |
luisbg | or is it a more valuable +1? | 22:13 |
creiht | plus, to boost my stats :) | 22:13 |
notmyname | heh | 22:13 |
creiht | since they are gamifying the system, I'm going to start min/maxing | 22:13 |
notmyname | make a number and people will optimize for it | 22:13 |
creiht | :) | 22:13 |
notmyname | creiht: you have earned the "superfluous review" badge! | 22:13 |
creiht | we will have daily 1 character typo fixes, that everyone will train +2s on | 22:14 |
torgomatic | it'd be more interesting if we got badgers instead | 22:15 |
torgomatic | http://img1.etsystatic.com/000/0/5587345/il_fullxfull.310066583.jpg%3Fref%3Dl2 | 22:15 |
luisbg | creiht, I can keep sending them, I am currently reading the code to learn it, so will probably see more :P | 22:15 |
creiht | haha | 22:16 |
creiht | lol | 22:16 |
luisbg | this over-review is making me think the process is fast and easy! great community | 22:16 |
luisbg | 10/10 will commit again | 22:16 |
creiht | haha | 22:17 |
torgomatic | you've seen the demo; wait'll you get the real deal (I kid, I kid) | 22:17 |
luisbg | this is the demo tutorial, soon the real sandbox game starts and it is completely different | 22:18 |
luisbg | why does the commit/review not show in https://review.openstack.org/#/q/status:open,n,z ?? | 22:22 |
luisbg | curious | 22:22 |
torgomatic | luisbg: it does, it's just on page 2 right now | 22:24 |
torgomatic | try "status:open project:openstack/swift" | 22:24 |
luisbg | torgomatic, oh wow! things move quickly | 22:24 |
notmyname | https://review.openstack.org/#/q/-status:workinprogress+AND+status:open+AND+(project:openstack/swift-bench+OR+project:openstack/object-api+OR+project:openstack/swift+OR+project:openstack/python-swiftclient),n,z | 22:24 |
torgomatic | there's a lot of projects; if you don't filter to the ones you care about, you'll never find anything | 22:25 |
luisbg | noted | 22:25 |
luisbg | notmyname, bookmarked the link | 22:26 |
*** dmsimard has quit IRC | 22:30 | |
* zaitcev pokes anticw_ even harder | 22:43 | |
*** byeager has quit IRC | 22:49 | |
*** prometheanfire has left #openstack-swift | 22:50 | |
*** byeager has joined #openstack-swift | 22:55 | |
*** mkollaro has joined #openstack-swift | 23:05 | |
*** gyee has joined #openstack-swift | 23:07 | |
*** acoles has quit IRC | 23:17 | |
*** Midnightmyth has quit IRC | 23:18 | |
*** byeager has quit IRC | 23:19 | |
*** acoles has joined #openstack-swift | 23:23 | |
*** mkollaro has quit IRC | 23:28 | |
*** mkollaro has joined #openstack-swift | 23:29 | |
notmyname | swiftclient changes incoming | 23:40 |
* torgomatic ducks and covers | 23:40 | |
notmyname | clarkb: ok, so I approve something and it goes to the check queue? | 23:41 |
notmyname | when did that change and why? | 23:41 |
notmyname | jeblair: ^ | 23:41 |
notmyname | hmm...and it looks like python-swiftclient is running swift functional tests. that seems to be a 10 minute delay that doesn't do anything | 23:43 |
jeblair | notmyname: that's not quite right but it should be as soon as https://review.openstack.org/#/c/73418/ lands | 23:43 |
notmyname | jeblair: I just approved https://review.openstack.org/#/c/69187/ and it is in the check queue now and not the gate queue | 23:44 |
jeblair | notmyname: one of the changes is that before a change goes into the gate pipeline, it needs to have a +1 from jenkins within the last 24h. if it doesn't it will go get one automatically; that's what's happening there. | 23:45 |
jeblair | notmyname: then once that arrives, it should go into the gate queue automatically*. | 23:45 |
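jeblair's new rule can be stated as a simple predicate: an approved change is eligible for the gate pipeline only if its latest Jenkins +1 is less than 24 hours old; otherwise it is sent through the check pipeline first and enqueued into the gate once that +1 arrives. A toy version of the decision; the real logic lives in zuul's pipeline configuration, not in code like this:

```python
from datetime import datetime, timedelta

GATE_FRESHNESS = timedelta(hours=24)

def pipeline_for(change, now):
    """Route an approved change: straight to gate, or via check first."""
    verified_at = change.get('jenkins_plus_one_at')
    if verified_at is not None and now - verified_at < GATE_FRESHNESS:
        return 'gate'
    return 'check'  # re-verify first; zuul then gates it automatically

now = datetime(2014, 2, 13, 23, 45)
stale = {'jenkins_plus_one_at': now - timedelta(days=3)}
fresh = {'jenkins_plus_one_at': now - timedelta(hours=2)}
```

This is why an approval can appear to land a change in the check queue rather than the gate queue: the change's last verification had simply gone stale.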
notmyname | hmm | 23:45 |
jeblair | notmyname: * there's a bug with that we are working on right now, so it may not happen with this change | 23:45 |
jeblair | notmyname: the reasoning is that lots of people were putting changes into the gate queue that would never work; that slowed the whole thing down, especially for other projects. | 23:46 |
jeblair | notmyname: there's more on that subject in the email | 23:46 |
notmyname | jeblair: last thing I see on the ML from you is on 1/22 about the end-of-year gate issues | 23:47 |
jeblair | notmyname: sdague | 23:48 |
notmyname | gamification? | 23:48 |
notmyname | version discovery? | 23:48 |
jeblair | Subject: [openstack-dev] Changes coming in gate structure | 23:49 |
jeblair | To: openstack-dev@lists.openstack.org | 23:49 |
jeblair | Date: Wed, 22 Jan 2014 15:39:58 -0500 | 23:49 |
jeblair | Message-ID: <52E02C9E.501@dague.net> | 23:49 |
notmyname | ok. I had seen that and remembered slimming down the tests being run. hadn't come across (or remembered) the check/gate queue things | 23:54 |
jeblair | notmyname: the end result should not involve a process change (it should do the right thing automatically); if we don't find the auto-gate-enqueue bug soon i'll revert the config change until we find it. | 23:55 |
notmyname | jeblair: k. thanks for the explanation | 23:56 |
notmyname | jeblair: also, it looks like I need to take the swift functional tests out of the swiftclient tests. there's a config file to do that. /me goes looking | 23:56 |
*** mkollaro has quit IRC | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!