*** CR7 has joined #openstack-swift | 00:12 | |
*** trifon has quit IRC | 00:14 | |
*** kevinc_ has quit IRC | 00:22 | |
*** esker has quit IRC | 00:27 | |
*** m_kazuhiro has joined #openstack-swift | 00:30 | |
*** rjaiswal has quit IRC | 00:40 | |
*** asd112z has joined #openstack-swift | 00:41 | |
*** wshao has joined #openstack-swift | 00:45 | |
openstackgerrit | Victor Stinner proposed openstack/swift: py3: Update pbr and dnspython requirements https://review.openstack.org/217423 | 00:48 |
*** shri has left #openstack-swift | 01:07 | |
*** lpabon has joined #openstack-swift | 01:10 | |
*** flwang has quit IRC | 01:11 | |
*** m_kazuhiro has quit IRC | 01:12 | |
*** m_kazuhiro has joined #openstack-swift | 01:13 | |
*** barker has joined #openstack-swift | 01:32 | |
*** barker has quit IRC | 01:37 | |
*** jrichli has joined #openstack-swift | 01:40 | |
*** flwang has joined #openstack-swift | 01:40 | |
*** lpabon has quit IRC | 01:50 | |
*** lpabon has joined #openstack-swift | 01:58 | |
*** haomaiwa_ has joined #openstack-swift | 02:03 | |
*** jlk has left #openstack-swift | 02:04 | |
*** ccavanna_ has joined #openstack-swift | 02:07 | |
*** haomaiwa_ has quit IRC | 02:09 | |
*** haomaiwang has joined #openstack-swift | 02:10 | |
*** ccavanna has quit IRC | 02:10 | |
*** pberis has joined #openstack-swift | 02:21 | |
*** wshao has quit IRC | 02:30 | |
*** wshao_ has joined #openstack-swift | 02:31 | |
*** wshao_ has quit IRC | 02:31 | |
jrichli | clayg: around? | 02:32 |
*** baojg has joined #openstack-swift | 02:39 | |
*** asd112z has quit IRC | 02:50 | |
notmyname | good evening | 02:50 |
*** jlk has joined #openstack-swift | 02:50 | |
notmyname | didn't get a chance to get online earlier today | 02:50 |
jrichli | notmyname: good evening | 02:50 |
notmyname | there, there, clayg. it will be ok :-) | 02:51 |
jrichli | lol, I didn't know if the 'q' was a mistake, or a statement :-) | 02:54 |
*** haomaiwang has quit IRC | 03:01 | |
*** haomaiwang has joined #openstack-swift | 03:01 | |
mattoliverau | notmyname: evening | 03:03 |
mattoliverau | jrichli: hi | 03:03 |
mattoliverau | notmyname: how's openstack silicon valley? if that's where you are :) | 03:03 |
jrichli | mattoliverau: hello! | 03:04 |
*** lpabon has quit IRC | 03:07 | |
notmyname | mattoliverau: yeah, that's where I was today (back at home right now) | 03:11 |
notmyname | mattoliverau: it was ... ok | 03:11 |
notmyname | not too technical. good to talk to some people. I saw some companies on name badges that I assume are openstack users | 03:12 |
notmyname | I'll be going to the office tomorrow instead of back to day 2 of the event :-) | 03:12 |
notmyname | mattoliverau: oh! I got another arduino from one of the vendors, so that was nice ;-) | 03:13 |
notmyname | oh, wow. look at the gate. our CVE patches still haven't landed after nearly 9 hours!! | 03:15 |
ccavanna_ | notmyname: Hi. | 03:16 |
notmyname | https://www.openstack.org/summit/tokyo-2015/schedule/design-summit is the schedule for tokyo | 03:16 |
notmyname | ccavanna_: hi | 03:16 |
ccavanna_ | notmyname: Can I ask for your "reviewing" expertise for https://review.openstack.org/#/c/202657/ ? | 03:17 |
ccavanna_ | Samuel reviewed it, and I was hoping ... :-) | 03:17 |
ccavanna_ | Or anybody else online who wishes to do one review .... | 03:17 |
ccavanna_ | Do we need one or two cores to review? I don't remember. | 03:18 |
notmyname | yeah. normally we have 2 cores review something | 03:18 |
ccavanna_ | You were talking about an Arduino earlier, I just finished assembling an Arduino robotic arm with my daughter. | 03:19 |
ccavanna_ | (not wired yet) | 03:19 |
notmyname | cool | 03:19 |
mattoliverau | notmyname: \o/ free stuff! At least you stayed for one day :) | 03:23 |
*** sanchitmalhotra has joined #openstack-swift | 03:27 | |
notmyname | ccavanna_: honestly, I'm not likely to get to that review tonight | 03:27 |
ccavanna_ | notmyname: not a problem. | 03:27 |
ccavanna_ | Thanks anyway! I leave it open for anyone wishing to review. | 03:28 |
*** CR7 has quit IRC | 03:35 | |
*** CR7 has joined #openstack-swift | 03:36 | |
*** pberis has quit IRC | 03:48 | |
*** chenhuayi has joined #openstack-swift | 03:51 | |
*** flwang has quit IRC | 03:58 | |
*** swat30 has quit IRC | 03:58 | |
*** haomaiwang has quit IRC | 04:01 | |
*** haomaiwa_ has joined #openstack-swift | 04:01 | |
*** jamielennox is now known as jamielennox|away | 04:04 | |
*** swat30 has joined #openstack-swift | 04:04 | |
*** jamielennox|away is now known as jamielennox | 04:05 | |
*** jlk has left #openstack-swift | 04:09 | |
*** jrichli has quit IRC | 04:15 | |
*** links has joined #openstack-swift | 04:38 | |
*** trifon has joined #openstack-swift | 04:38 | |
*** CR7 has quit IRC | 04:44 | |
*** changbl has quit IRC | 04:46 | |
*** trifon has quit IRC | 04:49 | |
*** _hrou_ has joined #openstack-swift | 04:59 | |
*** haomaiwa_ has quit IRC | 05:01 | |
*** haomaiwang has joined #openstack-swift | 05:01 | |
*** zaitcev has quit IRC | 05:02 | |
*** hrou has quit IRC | 05:02 | |
*** ppai has joined #openstack-swift | 05:08 | |
*** flwang has joined #openstack-swift | 05:13 | |
*** sanchitmalhotra1 has joined #openstack-swift | 05:16 | |
*** openstackgerrit has quit IRC | 05:16 | |
*** openstackgerrit has joined #openstack-swift | 05:16 | |
*** flwang has quit IRC | 05:18 | |
*** sanchitmalhotra has quit IRC | 05:18 | |
*** trifon has joined #openstack-swift | 05:25 | |
*** SkyRocknRoll has joined #openstack-swift | 05:25 | |
*** rjaiswal has joined #openstack-swift | 05:27 | |
*** m_kazuhi_ has joined #openstack-swift | 05:29 | |
*** m_kazuhiro has quit IRC | 05:29 | |
*** albertom has quit IRC | 05:29 | |
*** albertom has joined #openstack-swift | 05:31 | |
*** silor has joined #openstack-swift | 05:42 | |
*** silor1 has joined #openstack-swift | 05:43 | |
*** baojg has quit IRC | 05:45 | |
*** silor has quit IRC | 05:46 | |
*** silor1 is now known as silor | 05:46 | |
*** baojg has joined #openstack-swift | 05:52 | |
*** baojg has quit IRC | 05:52 | |
*** haomaiwang has quit IRC | 06:01 | |
*** 5EXABZ8QP has joined #openstack-swift | 06:01 | |
*** _hrou_ has quit IRC | 06:12 | |
*** sanchitmalhotra has joined #openstack-swift | 06:16 | |
*** sanchitmalhotra1 has quit IRC | 06:18 | |
*** baojg has joined #openstack-swift | 06:22 | |
*** sanchitmalhotra1 has joined #openstack-swift | 06:28 | |
*** m_kazuhiro has joined #openstack-swift | 06:29 | |
*** m_kazuhi_ has quit IRC | 06:29 | |
*** sanchitmalhotra has quit IRC | 06:31 | |
*** dimasot has joined #openstack-swift | 06:33 | |
dimasot | Hi, I have a problem with container_sync on the master version | 06:34 |
*** albertom has quit IRC | 06:34 | |
*** albertom has joined #openstack-swift | 06:40 | |
*** SkyRocknRoll has quit IRC | 06:47 | |
dimasot | the problem that I see is: | 06:54 |
dimasot | at lines 501-504 of sync.py | 06:55 |
dimasot | put_object(sync_to, name=row['name'], headers=headers, | 06:55 |
dimasot | sync_to contains the full URL, including the "https://<ip address>" part | 06:56 |
dimasot | and it causes an error 500 at the remote proxy, since it fails to parse such a path in utils.py | 06:57 |
dimasot | when I change line 501 of sync.py to this one | 06:58 |
dimasot | put_object(urlparse(sync_to).path, name=row['name'], headers=headers, | 06:59 |
dimasot | I mean I use urlparse(sync_to).path instead of sync_to | 06:59 |
dimasot | everything works | 06:59 |
dimasot | my question is whether it is a real bug or I did something wrong in the configuration, and whether there is a fix that avoids code changes | 07:00 |
*** 5EXABZ8QP has quit IRC | 07:01 | |
*** haomaiwang has joined #openstack-swift | 07:02 | |
*** SkyRocknRoll has joined #openstack-swift | 07:02 | |
*** rledisez has joined #openstack-swift | 07:04 | |
*** sanchitmalhotra has joined #openstack-swift | 07:28 | |
*** sanchitmalhotra1 has quit IRC | 07:30 | |
*** rjaiswal has quit IRC | 07:40 | |
*** geaaru has joined #openstack-swift | 07:43 | |
*** SkyRocknRoll_ has joined #openstack-swift | 07:55 | |
*** SkyRocknRoll_ has quit IRC | 07:56 | |
*** haomaiwang has quit IRC | 08:01 | |
*** haomaiwang has joined #openstack-swift | 08:01 | |
*** baojg has quit IRC | 08:02 | |
*** jordanP has joined #openstack-swift | 08:14 | |
*** jistr has joined #openstack-swift | 08:15 | |
*** baojg has joined #openstack-swift | 08:16 | |
*** mahatic has joined #openstack-swift | 08:24 | |
dimasot | Hi, I have a problem with container_sync on the master version | 08:25 |
dimasot | at lines 501-504 of sync.py | 08:26 |
dimasot | put_object(sync_to, name=row['name'], headers=headers, | 08:26 |
dimasot | contents=FileLikeIter(body), | 08:26 |
dimasot | proxy=self.select_http_proxy(), logger=self.logger, | 08:26 |
dimasot | timeout=self.conn_timeout) | 08:26 |
dimasot | sync_to contains the full URL, including the "https://<ip address>" part | 08:26 |
dimasot | and it causes an error 500 at the remote proxy, since it fails to parse such a path in utils.py | 08:26 |
dimasot | when I change line 501 of sync.py to this one | 08:27 |
dimasot | put_object(urlparse(sync_to).path, name=row['name'], headers=headers | 08:27 |
dimasot | everything works | 08:27 |
dimasot | my question is whether it is a real bug or I did something wrong in the configuration, and whether there is a fix that avoids code changes | 08:27 |
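For readers following along, a minimal sketch of what the described workaround does: urlparse() splits the full sync_to URL so that only the path portion is handed to put_object(). This is an illustration only, not the actual sync.py code, and the URL below is a made-up example.

```python
# Illustration only (not the actual swift/container/sync.py code): the workaround
# described above passes urlparse(sync_to).path instead of the full sync_to URL.
from urllib.parse import urlparse

sync_to = "https://192.0.2.10/v1/AUTH_test/mycontainer"  # made-up example value

parsed = urlparse(sync_to)
print(parsed.netloc)  # '192.0.2.10' -- the scheme/host part the remote proxy chokes on
print(parsed.path)    # '/v1/AUTH_test/mycontainer' -- the part passed to put_object()
```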
cschwede | dimasot: what is your setting in container-sync-realms.conf? | 08:36 |
cschwede | http://docs.openstack.org/developer/swift/overview_container_sync.html#configuring-container-sync | 08:36 |
cschwede | dimasot: ^^ have a look at that section; probably you put a url in the X-Container-Sync-To header instead of the realm settings? | 08:37 |
*** haomaiwang has quit IRC | 08:51 | |
*** haomaiwang has joined #openstack-swift | 08:51 | |
*** jistr has quit IRC | 08:54 | |
*** jistr has joined #openstack-swift | 08:56 | |
*** wuhg has joined #openstack-swift | 09:00 | |
*** haomaiwang has quit IRC | 09:01 | |
*** jistr has quit IRC | 09:01 | |
*** haomaiwang has joined #openstack-swift | 09:02 | |
*** dmorita has quit IRC | 09:04 | |
*** jistr has joined #openstack-swift | 09:05 | |
*** dimasot has quit IRC | 09:05 | |
*** mfalatic has quit IRC | 09:12 | |
*** sanchitmalhotra1 has joined #openstack-swift | 09:32 | |
openstackgerrit | Merged openstack/swift: Disallow unsafe tempurl operations to point to unauthorized data https://review.openstack.org/217253 | 09:32 |
*** sanchitmalhotra has quit IRC | 09:34 | |
*** ho has quit IRC | 09:45 | |
*** baojg has quit IRC | 09:50 | |
openstackgerrit | Jiri Suchomel proposed openstack/swift: Let object-info find files in a given directory https://review.openstack.org/189258 | 09:56 |
*** haomaiwang has quit IRC | 10:01 | |
*** haomaiwang has joined #openstack-swift | 10:02 | |
*** jistr has quit IRC | 10:06 | |
*** eandersson has joined #openstack-swift | 10:10 | |
*** jistr has joined #openstack-swift | 10:18 | |
*** dimasot has joined #openstack-swift | 10:20 | |
*** mahatic has quit IRC | 10:21 | |
*** [1]dimasot has joined #openstack-swift | 10:26 | |
*** dimasot has quit IRC | 10:26 | |
*** [1]dimasot is now known as dimasot | 10:26 | |
openstackgerrit | Merged openstack/swift: Disallow unsafe tempurl operations to point to unauthorized data https://review.openstack.org/217259 | 10:26 |
openstackgerrit | Merged openstack/swift: Better scoping for tempurls, especially container tempurls https://review.openstack.org/217260 | 10:28 |
*** [1]dimasot has joined #openstack-swift | 10:32 | |
*** dimasot has quit IRC | 10:34 | |
*** [1]dimasot is now known as dimasot | 10:34 | |
*** Kennan has quit IRC | 10:38 | |
dimasot | cschwede: thanks, it looks like I really added the container-sync-realms.conf but did not remove the X-Container-Sync-To header, will try to remove it and check that it works | 10:38 |
dimasot | cschwede: I re-read the container sync configuration section (the link that you sent previously) | 10:44 |
dimasot | and found this section: Any values in the realm section whose names begin with cluster_ will indicate the name and endpoint of a cluster and will be used by external users in their containers’ X-Container-Sync-To metadata header values with the format “//realm_name/cluster_name/account_name/container_name”. Realm and cluster names are considered case insensitive. | 10:44 |
dimasot | so this is how my X-Container-Sync-To header looks: Sync To: //par-tor/tor_proxy1/AUTH_5b4569e1a2f94c252376243a4ec9934e/mycontainers3001 | 10:48 |
dimasot | and at /etc/swift/container-sync-realms.conf | 10:48 |
dimasot | I have [par-tor] | 10:50 |
dimasot | key=key | 10:50 |
dimasot | key2=key2 | 10:50 |
dimasot | cluster_par_proxy1=https://ip1/v1/ | 10:50 |
dimasot | cluster_par_proxy2=https://ip2/v1/ | 10:50 |
dimasot | cluster_tor_proxy1=https://ip3/v1/ | 10:50 |
dimasot | cluster_tor_proxy2=https://ip4/v1/ | 10:50 |
dimasot | by the way it worked this way with swift 2.2 | 10:51 |
dimasot | and now with master it doesn't | 10:51 |
dimasot | btw I did not even get any requests at the remote cluster before I added "sync_proxy = http://ip3:80,http://ip4:80" to the container-server.conf file on the origin cluster | 10:54 |
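As a rough illustration of the realm mechanism quoted above, a //realm/cluster/account/container value is resolved against the matching cluster_* entry in container-sync-realms.conf. The parsing below is made up for clarity and is not Swift's actual code; the endpoint is the placeholder value from the config shown above.

```python
# Illustration only: resolving a realm-style X-Container-Sync-To value against
# the cluster_* entries from the container-sync-realms.conf shown above.
sync_to = "//par-tor/tor_proxy1/AUTH_5b4569e1a2f94c252376243a4ec9934e/mycontainers3001"
realms = {
    "par-tor": {
        "cluster_tor_proxy1": "https://ip3/v1/",  # placeholder endpoint, as in the config above
    },
}

_, _, realm, cluster, rest = sync_to.split("/", 4)
endpoint = realms[realm.lower()]["cluster_" + cluster.lower()]  # names are case insensitive
print(endpoint.rstrip("/") + "/" + rest)
# https://ip3/v1/AUTH_5b4569e1a2f94c252376243a4ec9934e/mycontainers3001
```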
*** Kennan has joined #openstack-swift | 10:55 | |
*** haomaiwang has quit IRC | 11:01 | |
*** haomaiwa_ has joined #openstack-swift | 11:02 | |
*** aix has quit IRC | 11:02 | |
*** haomaiwa_ has quit IRC | 11:11 | |
*** m_kazuhiro has quit IRC | 11:23 | |
*** aix has joined #openstack-swift | 11:33 | |
cschwede | dimasot: oh, that doesn’t sound good if it worked before. i will check it locally later and get back to you | 11:38 |
dimasot | cschwede: thanks, Eran also thinks that maybe the addition of sync_proxy = http://ip3:80,http://ip4:80 to the container-server.conf caused this problem, so I will try to remove it and see what happens there | 11:44 |
haypo | hello. I split my python3 patch into smaller patches, easier to review: https://review.openstack.org/#/q/owner:%22Victor+Stinner%22+status:open+project:openstack/swift,n,z (ignore the work-in-progress patches) | 11:50 |
*** haomaiwang has joined #openstack-swift | 11:53 | |
*** haomaiwang has quit IRC | 12:01 | |
*** haomaiwang has joined #openstack-swift | 12:02 | |
*** dimasot has quit IRC | 12:08 | |
*** haigang has joined #openstack-swift | 12:22 | |
*** haigang has quit IRC | 12:26 | |
*** jsuchome has joined #openstack-swift | 12:27 | |
openstackgerrit | Jiri Suchomel proposed openstack/swift: Let object-info find files in a given directory https://review.openstack.org/189258 | 12:29 |
*** jsuchome has left #openstack-swift | 12:29 | |
*** annegentle has joined #openstack-swift | 12:31 | |
*** openstackgerrit has quit IRC | 12:31 | |
*** openstackgerrit has joined #openstack-swift | 12:31 | |
*** wuhg has quit IRC | 12:41 | |
openstackgerrit | Merged openstack/swift: Disallow unsafe tempurl operations to point to unauthorized data https://review.openstack.org/217254 | 12:44 |
*** km has quit IRC | 12:49 | |
*** kei_yama has quit IRC | 12:53 | |
*** hrou has joined #openstack-swift | 12:55 | |
*** haomaiwang has quit IRC | 13:01 | |
*** haomaiwang has joined #openstack-swift | 13:02 | |
*** trifon has quit IRC | 13:03 | |
*** dustins has joined #openstack-swift | 13:06 | |
*** changbl has joined #openstack-swift | 13:06 | |
*** I has joined #openstack-swift | 13:06 | |
*** I is now known as Guest62749 | 13:07 | |
*** pgbridge has joined #openstack-swift | 13:09 | |
*** trifon has joined #openstack-swift | 13:23 | |
*** ccavanna_ has quit IRC | 13:28 | |
openstackgerrit | Jiri Suchomel proposed openstack/swift: Let object-info find files in a given directory https://review.openstack.org/189258 | 13:34 |
*** Guest62749 has quit IRC | 13:39 | |
*** pushkarajthorat has joined #openstack-swift | 13:43 | |
*** petertr7_away is now known as petertr7 | 13:51 | |
*** breitz has quit IRC | 13:52 | |
*** breitz has joined #openstack-swift | 13:52 | |
*** macleanal has joined #openstack-swift | 13:54 | |
*** ppai has quit IRC | 13:57 | |
macleanal | Hello, I'm having trouble listing the containers of my OpenStack object store with s3curl. The swift command line client works fine and I can list the objects in a container with s3curl, but a call to list the containers with s3curl returns "no content found". | 13:57 |
*** trifon has quit IRC | 14:00 | |
*** petertr7 is now known as petertr7_away | 14:01 | |
*** haomaiwang has quit IRC | 14:01 | |
*** kairo has joined #openstack-swift | 14:01 | |
*** ccavanna has joined #openstack-swift | 14:02 | |
*** 7GHAA12OT has joined #openstack-swift | 14:02 | |
*** Kennan has quit IRC | 14:02 | |
*** minwoob has joined #openstack-swift | 14:03 | |
*** Kennan has joined #openstack-swift | 14:03 | |
*** ppai has joined #openstack-swift | 14:06 | |
*** ppai has quit IRC | 14:06 | |
*** jrichli has joined #openstack-swift | 14:08 | |
*** jlhinson has joined #openstack-swift | 14:13 | |
*** proteusguy_ has quit IRC | 14:26 | |
Vinsh | Why is it that when I set web-listings: true on a container... I can no longer list the container using swift client OR view the container contents in horizon? | 14:26 |
Vinsh | Indeed it enables web listings for that container in a browser. But I lose any other ability to read the container. | 14:27 |
notmyname | Vinsh: can you give a pastebin of the headers on that container? | 14:28 |
Vinsh | by headers.. do you mean what swift stat shows on it? | 14:28 |
Vinsh | when I set web-listings to true.. it hides all of that in swift stat. | 14:28 |
notmyname | then get your auth creds with `swift auth -v` and use `curl -I ...` to get the headers | 14:29 |
Vinsh | OK, i'll get that. here is more of an example: http://paste.openstack.org/show/429716/ | 14:30 |
Vinsh | notmyname: When I curl.. I get no headers back. Unless I set --header "X-Container-Meta-Web-Listings:" | 14:33 |
Vinsh | then I get headers back. | 14:34 |
notmyname | Vinsh: ok. interesting. you've got a publicly readable container that you also are wanting the auto-generated web-listings for | 14:34 |
Vinsh | Thats correct | 14:34 |
Vinsh | however enabling web-listings disables all other listing methods on the container. | 14:34 |
notmyname | while I'm starting up my VM, try this: add --header "x-web-mode: true" to the request when you have web listings turned on | 14:34 |
notmyname | to the stat request | 14:35 |
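A hedged sketch of the kind of request being suggested here, using the requests library; the URL and token below are placeholders. As I understand staticweb, the X-Web-Mode header asks for the web-style response even when an auth token is supplied.

```python
# Sketch of the header checks suggested above; the URL and token are placeholders.
import requests

url = "https://proxy.example.com/v1/AUTH_test/mycontainer"  # placeholder storage URL + container
headers = {"X-Auth-Token": "placeholder-token"}

# Plain HEAD: container headers for an authenticated request.
print(requests.head(url, headers=headers).headers)

# Same HEAD with x-web-mode set, so staticweb should answer as it would for
# a web request even though a token is present.
headers["X-Web-Mode"] = "true"
print(requests.head(url, headers=headers).status_code)
```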
*** links has quit IRC | 14:37 | |
Vinsh | to the stat request? | 14:39 |
Vinsh | Can't seem to get the syntax right for that | 14:39 |
*** petertr7_away is now known as petertr7 | 14:40 | |
notmyname | so it seems to be working for me so far | 14:40 |
notmyname | give me a moment to get my kids out the door to school. then I'll paste an example | 14:40 |
Vinsh | Ok! | 14:41 |
*** proteusguy_ has joined #openstack-swift | 14:43 | |
notmyname | ok, they're off | 14:45 |
jordanP | guys, I have a 'requirements' question related to python-swiftclient. it requires futures>=2.1.3 so if I install it, I could get futures in version 3.0.3 (latest). But taskflow Juno needs futures<=2.2.0 | 14:47 |
jordanP | so I could get a " pkg_resources.DistributionNotFound: The 'futures<=2.2.0,>=2.1.6' distribution was not found and is required by taskflow" | 14:48 |
notmyname | jordanP: a backport to juno should have already landed to fix that | 14:48 |
jordanP | notmyname,thanks. in taskflow or in swiftclient ? | 14:48 |
*** dustins has quit IRC | 14:48 | |
Vinsh | notmyname: Should containers continue to list with swift-client and horizon if web-listings is set to true? | 14:50 |
Vinsh | Is that expected? | 14:50 |
notmyname | jordanP: oh. looks like the gate is being very slow https://review.openstack.org/#/c/215786/ | 14:50 |
cschwede | Vinsh: that should be the case only for unauthenticated requests, or not? „swift list <containername>“ still works for me | 14:50 |
cschwede | Vinsh: what swift version are you using? | 14:51 |
Vinsh | 2.3.1 | 14:51 |
jordanP | notmyname, that's great, thanks! I'll just sit and relax until this gets merged :) | 14:51 |
Vinsh | 2.3.1 client. | 14:51 |
notmyname | Vinsh: a couple of fat-finger typos in there, but https://gist.github.com/notmyname/f8696896fb8f47b8d53b | 14:52 |
Vinsh | 2.2.2-0 server | 14:52 |
notmyname | jordanP: actually, it's at the top of the gate queue with an estimated 0 minutes left. been there for nearly 16 hours | 14:53 |
jordanP | notmyname, yep I just saw that too. 0 minute, not even time for a coffee... | 14:53 |
Vinsh | 2.3.1 client must have an issue here.... it hits a traceback trying to list a container with x-container-meta-web-listings: true | 14:55 |
openstackgerrit | Merged openstack/python-swiftclient: Update from global requirements https://review.openstack.org/215786 | 14:56 |
notmyname | jordanP: ^ | 14:57 |
Vinsh | a curl of the same request yields "301 Moved Permanently" | 14:57 |
notmyname | Vinsh: yeah, that sounds right | 14:57 |
jordanP | \O/ | 14:57 |
notmyname | Vinsh: add a slash | 14:57 |
*** dustins has joined #openstack-swift | 14:57 | |
notmyname | jordanP: I believe the plan is to add a release tag there | 14:57 |
*** hrou has quit IRC | 14:57 | |
cschwede | Vinsh: „swift list <containername>“ works in python-swiftclient 2.3.1 for me too | 14:58 |
Vinsh | a slash to what? | 14:58 |
cschwede | Vinsh: url in your curl command | 14:58 |
Vinsh | yeah | 14:59 |
Vinsh | that seems to work for curl now.. adding the slash | 14:59 |
Vinsh | but http://paste.openstack.org/show/429736/ | 15:00 |
Vinsh | python swift client no likey | 15:00 |
Vinsh | and I am pretty sure our horizon is using 2.3.1 also. :-/ | 15:00 |
openstackgerrit | Tristan Cacqueray proposed openstack/swift: Get better at closing WSGI iterables. https://review.openstack.org/217750 | 15:01 |
*** 7GHAA12OT has quit IRC | 15:01 | |
*** haomaiwa_ has joined #openstack-swift | 15:02 | |
Vinsh | notmyname: so by "that sounds right" you have seen this with client 2.3.1? | 15:02 |
notmyname | Vinsh: oh, no. the 301 is correct (that you got with curl) | 15:02 |
Vinsh | swift stat even comes back empty without adding the "/" | 15:03 |
Vinsh | Why do I have a lack of slashes? :) | 15:04 |
notmyname | Vinsh: what does `swift stat -v` return? | 15:04 |
Vinsh | returns 0/nothing for all the header items. | 15:04 |
Vinsh | unless i disable web-listings.. then it shows them again. | 15:04 |
Vinsh | OR if I add the "/" to the curl. | 15:05 |
notmyname | wait. it shows up again if you add the / ? | 15:06 |
Vinsh | If I curl stat with web-listings enabled. I get an empty return content-length 0 | 15:07 |
Vinsh | if I do that same curl but add a "/" at the end of the url | 15:07 |
Vinsh | I get a content-length of non-zero | 15:07 |
*** miurahr has joined #openstack-swift | 15:08 | |
Vinsh | that curl command is the one I get from swift stat in verbose mode | 15:08 |
Vinsh | the same curl the python swift client generates seems to be missing a needed "/" in what it generates to list web-listings enabled containers. | 15:08 |
notmyname | right. without the / you end up getting a 301 | 15:09 |
Vinsh | which you see in the pastebin. busts 2.3.1 | 15:09 |
notmyname | but it works for cschwede with 2.3.1 | 15:09 |
*** ccavanna has quit IRC | 15:11 | |
*** mfalatic has joined #openstack-swift | 15:11 | |
*** petertr7 is now known as petertr7_away | 15:11 | |
Vinsh | that traceback my client sees. whats that about? | 15:13 |
Vinsh | "simplejson.scanner.JSONDecodeError: Expecting value: line 1 column 1 (char 0)" | 15:13 |
*** petertr7_away is now known as petertr7 | 15:15 | |
*** lcurtis has joined #openstack-swift | 15:18 | |
Vinsh | using client 2.5.0 the traceback changes to: ValueError: No JSON object could be decoded | 15:18 |
Vinsh | it's not JSON anymore.. it's HTML with web-listings turned on, right? | 15:18 |
Vinsh | so? | 15:18 |
notmyname | Vinsh: so from what I can see, our clusters are acting the same way. the oddness is in what your client is doing | 15:19 |
Vinsh | I see this on an ubuntu 14.04 node as well as on my mac laptop. | 15:20 |
Vinsh | So, you also see that traceback? | 15:20 |
notmyname | no | 15:21 |
Vinsh | :-/ | 15:21 |
notmyname | I used curl to hit your url. the responses there look exactly like what I see when I hit my url without a token. ie they have the same info | 15:21 |
notmyname | and when I set web-listings on my container, swift stat works just fine | 15:22 |
*** links has joined #openstack-swift | 15:22 | |
Vinsh | is this a wrong way to set web-listings? --> "swift post -m 'web-listings: true' <container> ? | 15:22 |
notmyname | ya, that's fine | 15:22 |
notmyname | moreover, the behavior I'm seeing with curl and without using auth tokens is the expected and correct behavior | 15:23 |
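For anyone reproducing this, a minimal python-swiftclient sketch that sets the same metadata being discussed; the auth URL, credentials, and container name are placeholders, and the header names are the public-read ACL and staticweb listings headers mentioned above.

```python
# Minimal sketch: make a container publicly readable and enable web listings
# with python-swiftclient. Auth URL, credentials, and container are placeholders.
from swiftclient import client

url, token = client.get_auth("http://127.0.0.1:8080/auth/v1.0", "test:tester", "testing")

client.post_container(url, token, "mycontainer", headers={
    "X-Container-Read": ".r:*,.rlistings",       # public read + listings ACL
    "X-Container-Meta-Web-Listings": "true",     # staticweb auto-generated listings
})
print(client.head_container(url, token, "mycontainer"))  # returns the container headers
```

The equivalent `swift post -m 'web-listings: true' <container>` CLI form is the one shown in the conversation above.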
Vinsh | something in our cluster is different then... causing the swift-client to be upset with what's returned. | 15:24 |
notmyname | maybe, but what I'm saying is that your cluster and my cluster are doing the same thing (to the extent I can compare them--ie no auth token) | 15:25 |
Vinsh | Ah, I get ya | 15:25 |
notmyname | Vinsh: also, you said that curl seems to be doing the right thing for you | 15:25 |
Vinsh | only if I add an extra "/" to the request | 15:25 |
Vinsh | the curl that the client generates in verbose.. has no extra "/" at the end. | 15:26 |
Vinsh | if I do that same curl. I get 0 content length back | 15:26 |
Vinsh | if I add a '/' to it.. I get content. | 15:26 |
Vinsh | I wonder why that curl command the client prints in verbose lacks the "/" ? | 15:26 |
Vinsh | You get what I mean? | 15:27 |
notmyname | yeah, that's interesting. if you use curl, have an auth token, and don't have a trailing /, then you get odd results | 15:27 |
*** dustins has quit IRC | 15:28 | |
notmyname | Vinsh: ok, let's focus on just one thing at a time | 15:30 |
Vinsh | Ok, what first? | 15:31 |
notmyname | Vinsh: only use curl | 15:31 |
Vinsh | k | 15:31 |
notmyname | that will let us be explicit about the requests that are sent and avoid any translation of the results | 15:31 |
notmyname | next, clear the web-listings header | 15:32 |
Vinsh | cleared | 15:32 |
notmyname | still publicly readable? | 15:32 |
Vinsh | not publicly readable | 15:33 |
notmyname | ok | 15:33 |
notmyname | so do a HEAD to the container. `curl -I ...` | 15:33 |
notmyname | do you see the header info you expect? | 15:34 |
Vinsh | I do | 15:34 |
notmyname | good | 15:34 |
notmyname | now, mark it as public and do the HEAD again | 15:34 |
Vinsh | shouldn't "X-Container-Read: .r:*,rlistings" mean it IS public? | 15:35 |
notmyname | right | 15:35 |
Vinsh | horizon lists it as public now | 15:35 |
Vinsh | but I can't get to it in a browser even. | 15:35 |
notmyname | ok, what did the original HEAD have? did it have those headers set? | 15:35 |
Vinsh | yes it did | 15:35 |
notmyname | oh, well there is something strange | 15:36 |
*** changbl has quit IRC | 15:37 | |
notmyname | can you pastebin the result of the container HEAD? | 15:37 |
notmyname | from curl | 15:37 |
Vinsh | http://paste.openstack.org/show/429791/ | 15:38 |
notmyname | thanks. can you show -v instead of -i please? | 15:39 |
Vinsh | http://paste.openstack.org/show/429792/ | 15:40 |
* notmyname was really hoping for something strange to show up there | 15:41 | |
notmyname | ok, here's the current state as I see it | 15:42 |
notmyname | you've got the right headers set on the container. you're getting expected results back when you send an auth token | 15:42 |
notmyname | however, the container isn't publicly readable (ie without the auth token) | 15:42 |
notmyname | is there anything between you and the swift cluster? anything that would cache a response or short circuit it if it doesn't have an auth token? | 15:43 |
Vinsh | a lousy A10 load balancer. | 15:43 |
Vinsh | I swear.. if its that thing.. | 15:43 |
Vinsh | its doing the ssl offloading | 15:43 |
*** changbl has joined #openstack-swift | 15:44 | |
notmyname | lol | 15:44 |
notmyname | any way you can hit a proxy directly just to confirm that it's not causing problems? | 15:44 |
Vinsh | I should be able to set OS_ something for that... and specify a proxy | 15:45 |
Vinsh | oh yeah --os-storage-url. testing. | 15:46 |
portante | acoles_: hey, are you going to get swift running on this beast? http://www.hpl.hp.com/research/systems-research/themachine/ | 15:46 |
notmyname | Vinsh: or instead use curl and specify the proxy name/ip directly | 15:46 |
*** changbl has quit IRC | 15:51 | |
Vinsh | notmyname: Same results going directly to the proxy. | 15:55 |
notmyname | ah, ok | 15:55 |
notmyname | so at least we know it's not a load balancer | 15:55 |
Vinsh | Was hopeful :) | 15:55 |
* cschwede is back, looking at the staticweb thingy | 15:58 | |
cschwede | Vinsh: notmyname: looking at the output of http://paste.openstack.org/show/429792/ | 16:00 |
cschwede | i noticed the „-I“ in the command (upper i) | 16:00 |
cschwede | then it is normal that there is no content | 16:00 |
*** haomaiwa_ has quit IRC | 16:01 | |
cschwede | try it without the upper „-I“ | 16:01 |
*** haomaiwa_ has joined #openstack-swift | 16:02 | |
cschwede | the „-I“ will only fetch headers, no content | 16:02 |
Vinsh | Indeed, that did return a non zero content length. | 16:03 |
Vinsh | and will then list the container. | 16:04 |
cschwede | so curl works now as you would expect? | 16:04 |
Vinsh | If web-listings is not set | 16:05 |
Vinsh | if I set it to true. I get this: | 16:05 |
*** silor has quit IRC | 16:05 | |
Vinsh | http://paste.openstack.org/show/429794/ | 16:06 |
Vinsh | a before and after for you. | 16:06 |
Vinsh | Just sort of mutes what's returned. | 16:06 |
cschwede | Vinsh: add the trailing „/“ | 16:06 |
*** haomaiwa_ has quit IRC | 16:06 | |
cschwede | trailing slash, that is | 16:06 |
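A small sketch of the distinction being made here, with a placeholder URL and token: a HEAD, which is what curl -I sends, never carries a body, and with staticweb enabled a GET on the container without the trailing slash shows up as the 301 redirect mentioned earlier in the log.

```python
# Sketch of the HEAD-vs-GET and trailing-slash behaviour discussed above.
# URL and token are placeholders; allow_redirects=False keeps any 301 visible.
import requests

base = "https://proxy.example.com/v1/AUTH_test/mycontainer"  # placeholder, no trailing slash
headers = {"X-Auth-Token": "placeholder-token"}

head = requests.head(base, headers=headers)
print(head.status_code, len(head.content))              # HEAD: headers only, empty body

no_slash = requests.get(base, headers=headers, allow_redirects=False)
print(no_slash.status_code)                             # the 301 seen earlier when staticweb is on

with_slash = requests.get(base + "/", headers=headers)
print(with_slash.status_code, len(with_slash.content))  # listing body comes back
```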
Vinsh | that returns an html listing of the container names | 16:07 |
cschwede | with the X-Auth-Token set? | 16:07 |
Vinsh | Same thing I see in python-swiftclient when running debug.. but it causes a traceback there. | 16:08 |
notmyname | cschwede: the weird thing is that without the web-listings, even though the ACLs set it to public, you get a 401 | 16:08 |
Vinsh | yes, token set | 16:08 |
cschwede | Vinsh: is the token valid? if the token is not valid i also get an html listing | 16:08 |
cschwede | notmyname: hmm, checking | 16:08 |
*** SkyRocknRoll has quit IRC | 16:08 | |
Vinsh | did a keystone token-get. is still valid. | 16:08 |
Vinsh | Fernet tokens. | 16:09 |
notmyname | Vinsh: you've got the web-listings on right now, correct? | 16:09 |
Vinsh | correct | 16:10 |
cschwede | notmyname: hmm, i might have missed something, when do you get a 401? with curl, a token and public acls? | 16:10 |
*** changbl has joined #openstack-swift | 16:10 | |
notmyname | Vinsh: ok, turn off web listings (and web listings only) | 16:11 |
notmyname | cschwede: checking | 16:11 |
Vinsh | have set it to "false" | 16:11 |
Vinsh | or shall I unset it all the way? | 16:11 |
notmyname | Vinsh: unset it all the way | 16:12 |
Vinsh | done. | 16:12 |
notmyname | yes. there | 16:12 |
notmyname | ok, here's what I'm seeing | 16:12 |
notmyname | when you have public ACLs and web-listings on, I can access your container (no token, obviously) | 16:12 |
notmyname | when you have web-listings=false, I get the appropriate error (404 from staticweb) | 16:13 |
notmyname | when you completely remove web-listings, I get a 401 unauthorized | 16:13 |
notmyname | that last one is the weird one | 16:13 |
notmyname | I should be able to get a listing since you have .r:*,.rlistings | 16:13 |
notmyname | Vinsh: so when you do a GET or HEAD right now, with web-listings removed, you still see the correct ACLs for public access, right? | 16:14 |
cschwede | yep | 16:14 |
Vinsh | I do | 16:14 |
Vinsh | I see "< X-Container-Read: .r:*,rlistings" | 16:14 |
notmyname | that's the point at which I don't know what's going on. that's the difference from what I see in my local cluster | 16:15 |
notmyname | which is why I was asking about some caching thing or something | 16:15 |
Vinsh | woah | 16:15 |
Vinsh | so get this | 16:15 |
Vinsh | I just went in silly horizon. and set it private. then public again | 16:16 |
Vinsh | and I can list it in the browser | 16:16 |
* cschwede will check back later. problem solving seems to be in good hands ;) | 16:16 | |
notmyname | ah ha | 16:16 |
Vinsh | cschwede: Thank you very much for you time helping also! | 16:16 |
notmyname | cschwede: heh, That's what I was thinking. /me needs to get to the office ;-) | 16:16 |
notmyname | cschwede: you need to get home to your kids. have a good evening :-) | 16:16 |
cschwede | Vinsh: you’re welcome, as i said i will be back online later | 16:16 |
cschwede | notmyname: ha, and i need to get out of the office for dinner ;) | 16:17 |
notmyname | Vinsh: ok, that is weird. and I can see it too | 16:17 |
cschwede | notmyname: thx | 16:17 |
notmyname | Vinsh: so it does sound like there is some weird caching going on | 16:17 |
notmyname | Vinsh: right? | 16:17 |
Vinsh | now I don't see the acls though | 16:17 |
notmyname | Vinsh: with or without an auth token | 16:18 |
Vinsh | with | 16:18 |
notmyname | Vinsh: how production is this cluster? | 16:18 |
* notmyname wonders if there's something odd with memcache | 16:18 | |
Vinsh | Its full time production | 16:18 |
Vinsh | 2 regions. 4 zones. 4 replicas | 16:18 |
notmyname | ah ok. so don't restart memcache ;-) | 16:18 |
Vinsh | that would just cause token retry for a client.. not that bad? | 16:19 |
notmyname | well, a full cache flush would cause a spike in load and latency | 16:19 |
notmyname | do you have one global memcache setting or one for each region? | 16:20 |
Vinsh | Thats probably ok. | 16:20 |
Vinsh | memcache is shared between all proxies in both regions | 16:20 |
notmyname | as a general rule, I'll never recommend doing a full cache flush on a production cluster | 16:20 |
notmyname | Vinsh: ok | 16:20 |
Vinsh | Load isn't very high on this cluster.. just some occasional web page reads and desktop client backups. | 16:21 |
Vinsh | Every day we onboard a bunch of new customers.. which is cool. it's being used more. | 16:21 |
notmyname | Vinsh: so you have public access but you don't see the ACL headers when you do an authorized request to the container | 16:21 |
notmyname | is it possible that the token you're using isn't the "owner" of the swift account? | 16:22 |
Vinsh | My user has been added as admin/swift_operator in this tenant | 16:22 |
*** ccavanna has joined #openstack-swift | 16:22 | |
Vinsh | I also added the owner role to my user | 16:23 |
notmyname | just now? | 16:23 |
*** hrou has joined #openstack-swift | 16:23 | |
Vinsh | earlier this morning before I asked for help | 16:23 |
Vinsh | heh. there we go again. I set web-listings true. suddenly in horizon the container is listed as 'private' | 16:24 |
notmyname | ok | 16:24 |
*** jistr has quit IRC | 16:24 | |
*** dustins has joined #openstack-swift | 16:24 | |
notmyname | ok, I think horizon is weird here. pay no attention to it (for the time being) | 16:25 |
notmyname | (too many variables) | 16:25 |
Vinsh | yeah. best ignored. | 16:25 |
notmyname | ok, so I now see the web listing as expected | 16:25 |
*** rledisez has quit IRC | 16:26 | |
*** marzif has joined #openstack-swift | 16:27 | |
notmyname | ok, I do need to get myself ready and go to the office | 16:28 |
notmyname | I'll be back online later today | 16:28 |
Vinsh | Thank you for your time!! I'll be here if you get any ideas. | 16:29 |
*** annegentle has quit IRC | 16:33 | |
*** ctennis has quit IRC | 16:34 | |
*** ctennis has joined #openstack-swift | 16:34 | |
*** pushkarajthorat has quit IRC | 16:37 | |
*** jordanP has quit IRC | 16:44 | |
*** links has quit IRC | 16:45 | |
*** petertr7 is now known as petertr7_away | 16:46 | |
*** petertr7_away is now known as petertr7 | 16:48 | |
*** annegentle has joined #openstack-swift | 16:51 | |
*** annegentle has quit IRC | 16:51 | |
*** annegentle has joined #openstack-swift | 16:54 | |
*** marzif has quit IRC | 16:55 | |
*** aix has quit IRC | 16:58 | |
*** jordanP has joined #openstack-swift | 17:00 | |
wbhuber | tsg: clayg: what are the performance indicators, if there are any, for making liberasurecode the default as opposed to jerasure (native EC)? | 17:01 |
*** theintern has joined #openstack-swift | 17:07 | |
*** petertr7 is now known as petertr7_away | 17:11 | |
*** theintern has quit IRC | 17:15 | |
*** macleanal has quit IRC | 17:36 | |
*** annegentle has quit IRC | 17:36 | |
peluse | wbhuber, we (swift) don't want to provide guidance on which library to choose, with the exception of noting that the built-in stuff upcoming in liberasurecode is optimized like the other available independent libraries such as ISA-L from Intel or jerasure | 17:48 |
wbhuber | peluse: ah, i was just wondering about the switch - the rationale behind it in one patch. that's all. | 17:57 |
*** geaaru has quit IRC | 17:59 | |
peluse | wbhuber, so the main reason was to not include any full external library (like jerasure) within liberasurecode, mainly due to some legal stuff that happened last year with jerasure. Just staying as 'clean' as possible there | 18:00 |
wbhuber | peluse: thanks for the clarification. | 18:00 |
*** Kennan has quit IRC | 18:03 | |
*** Kennan has joined #openstack-swift | 18:03 | |
openstackgerrit | paul luse proposed openstack/swift: logic error in ssync_rcvr when getting EC frags from a handoff https://review.openstack.org/217830 | 18:05 |
notmyname | ray tracing | 18:07 |
notmyname | heh. mischan | 18:10 |
*** prometheanfire has joined #openstack-swift | 18:15 | |
prometheanfire | so, why does swift use pyeclib? has anyone checked the abysmal quality of it? | 18:15 |
notmyname | prometheanfire: wow. what a great intro comment to make friends with ;-) | 18:16 |
prometheanfire | it's been really anoying | 18:16 |
prometheanfire | I have a list of bugs if you want :P | 18:16 |
notmyname | prometheanfire: pyeclib is used for the erasure code support. you aren't the first packager that's been annoyed by it | 18:16 |
prometheanfire | it installs insecure runpaths btw | 18:17 |
prometheanfire | so sec vuln if you want | 18:17 |
prometheanfire | https://bugs.gentoo.org/show_bug.cgi?id=558884 https://bugs.gentoo.org/show_bug.cgi?id=558886 | 18:17 |
openstack | bugs.gentoo.org bug 558884 in Applications "dev-python/PyECLib fails multilib-strict check" [Normal,Confirmed] - Assigned to prometheanfire | 18:17 |
notmyname | prometheanfire: and yes, bugs/issues filed on it would be good. I'm currently working with the maintainers to figure out better ways to manage it so that issues are addressed/closed more quickly and packaging is better | 18:17 |
openstack | bugs.gentoo.org bug 558886 in Applications "dev-python/PyECLib installs files with insecure runpath" [Normal,Confirmed] - Assigned to prometheanfire | 18:18 |
prometheanfire | well, the upstream libs it bundles don't even have releases :( | 18:18 |
prometheanfire | I'd actually like to package them, at least then I could make it a little better | 18:18 |
scotticus | notmyname: prometheanfire isn't here to make friends ;) | 18:19 |
notmyname | prometheanfire: is it liberasurecode? | 18:19 |
prometheanfire | also, it doesn't look in /usr/lib64 at all, which seems odd... | 18:19 |
prometheanfire | notmyname: nah, that package is better :P | 18:19 |
prometheanfire | gf-complete and jerasure | 18:20 |
prometheanfire | http://jerasure.org/jerasure/jerasure/tags | 18:20 |
prometheanfire | http://jerasure.org/jerasure/gf-complete/tags | 18:20 |
notmyname | ah, ok | 18:21 |
notmyname | prometheanfire: so it's either good or bad that those have the same maintainer :-) | 18:21 |
prometheanfire | in this case I get the bundling | 18:21 |
notmyname | I'm told that the next release of pyeclib/liberasurecode will not have any bundling of those | 18:21 |
prometheanfire | that's nice :D | 18:22 |
prometheanfire | didn't have the bundling issue with liberasurecode at least | 18:22 |
notmyname | tsg and keving are the maintainers. tsg is around in here sometimes (but not now) | 18:23 |
*** annegentle has joined #openstack-swift | 18:23 | |
*** annegentle has quit IRC | 18:25 | |
*** dimasot has joined #openstack-swift | 18:29 | |
*** proteusguy_ has quit IRC | 18:29 | |
*** chenhuayi has quit IRC | 18:32 | |
*** petertr7_away is now known as petertr7 | 18:33 | |
wbhuber | notmyname: i believe those bugs should be attended to and fixed before we declare EC (which wraps pyeclib) production ready? sec vuln is no fun. | 18:33 |
wbhuber | not sure how high of a risk those bugs are though. usually there's a CVE score for each one, if not stamped yet. | 18:34 |
prometheanfire | local user exploit I think | 18:34 |
prometheanfire | https://blog.flameeyes.eu/2013/01/dealing-with-insecure-runpaths | 18:35 |
prometheanfire | we automatically strip the runpaths, so no issue on gentoo, but I'm guessing most of you don't run gentoo :P | 18:35 |
peluse | clayg? | 18:35 |
prometheanfire | it's more of a question if the other distros have good runpaths or not | 18:36 |
clayg | yo! | 18:36 |
wbhuber | gentoo or no gentoo, need to figure if runpaths exist or not on any platforms | 18:36 |
prometheanfire | default install location to /usr/local is also a sad | 18:36 |
prometheanfire | the install process also creates tarballs that are never used I think | 18:37 |
prometheanfire | really should be redone from the ground up imo | 18:37 |
peluse | clayg, hey just updated the bug and yeah it looks like it can be 'None' we think, take a quick look and see what you think | 18:38 |
wbhuber | know the result of running the local exploit? | 18:38 |
wbhuber | EoP or DoS? Or else? | 18:38 |
prometheanfire | local code exec / possible privilege up | 18:39 |
prometheanfire | from one user to another | 18:39 |
clayg | peluse: that ctennis is pretty smart | 18:39 |
clayg | peluse: ec reconstructor jobs shouldn't have a frag_index of None - but replication jobs would - so I think that's the place to attack the bug | 18:40 |
clayg | ... but; are you using ssync for replication in the testing? | 18:40 |
peluse | we are only using EC recon | 18:41 |
clayg | peluse: there's some ssync tests that spin up an object server (with ssync_receiver) and make real calls over the wire at the bottom of test_ssync_receiver - TestSsyncRxServer I'm going to try and add coverage there | 18:41 |
clayg | peluse: well that might mean we have two bugs then - reconstructor ssync jobs should all have frag indexes - shouldn't they? maybe syncing tombstones? | 18:42 |
peluse | clayg, OK so feel free to push over the patch that fixes it. We are going to run w/that for now | 18:43 |
peluse | clayg, to answer your last question: for sync jobs, yeah | 18:43 |
clayg | yeah there's a cute comment "this is an unfortunate situation" blah blah blah :) | 18:44 |
prometheanfire | the bundled libs don't even really obey the options in setup.py... | 18:44 |
clayg | peluse: but that's good I think - should be clear to fix - thanks for the additional details | 18:45 |
*** annegentle has joined #openstack-swift | 18:46 | |
peluse | clayg, I think that comment was yours :) | 18:49 |
*** sanchitmalhotra has joined #openstack-swift | 18:49 | |
*** sanchitmalhotra1 has quit IRC | 18:51 | |
*** dimasot has quit IRC | 19:00 | |
*** aix has joined #openstack-swift | 19:11 | |
*** zaitcev has joined #openstack-swift | 19:14 | |
*** ChanServ sets mode: +v zaitcev | 19:14 | |
*** andrey-mp has joined #openstack-swift | 19:23 | |
andrey-mp | clayg: please, could you review this again https://review.openstack.org/#/c/215766/ ? I've changed commit message (one word) and your vote is gone... | 19:25 |
*** esker has joined #openstack-swift | 19:31 | |
andrey-mp | thank you! | 19:34 |
*** annegentle has quit IRC | 19:36 | |
*** annegentle has joined #openstack-swift | 19:37 | |
wbhuber | clayg: peluse: i did some digging on LP #1488610 and was able to recreate the traceback whenever one node refuses the connection back to the caller. it's in swift/common/utils.py line #1307 where sys.exc_info() returns StopIteration (traceback) instead of errno=111 (ECONNREFUSED) for the 2nd failing node. the 1st node that refused the connection posted the message successfully. does this make you recall anything like this before? | 19:39 |
openstack | Launchpad bug 1488610 in OpenStack Object Storage (swift) "EC connection refused exception when storage node is offline" [Undecided,New] https://launchpad.net/bugs/1488610 - Assigned to Bill Huber (wbhuber) | 19:39 |
*** andrey-mp has quit IRC | 19:39 | |
wbhuber | ctennis: ^^ | 19:39 |
cschwede | Vinsh: ok, maybe i just broke my saio, but now i have a very similar behaviour to yours regarding web-listings and python-swiftclient. i need more time to verify this, will do that tomorrow morning (i’m UTC+2) and get back to you. | 19:53 |
Vinsh | I'm EST (Ithaca, NY) | 19:54 |
Vinsh | similar is good... makes me feel less crazy :) | 19:54 |
Vinsh | I'll keep this chan up. | 19:55 |
*** silor has joined #openstack-swift | 19:58 | |
*** silor1 has joined #openstack-swift | 19:58 | |
*** silor has quit IRC | 20:02 | |
*** silor1 is now known as silor | 20:02 | |
*** esker has quit IRC | 20:04 | |
*** esker has joined #openstack-swift | 20:08 | |
*** esker has quit IRC | 20:13 | |
*** dustins has quit IRC | 20:34 | |
clayg | wbhuber: great digging! | 20:35 |
clayg | wbhuber: ... but it doesn't make *that* much sense - when would exc_info not return the same Exception that it wants to log :D | 20:36 |
wbhuber | clayg: mocked_http_conn(whatevererroris * [num of nodes actually used - 1]) would generate a traceback, but that's mocked_http_conn as opposed to the real http_conn. I'm thinking that the exception is not handled when the node goes offline... there's something of a link down there. | 20:38 |
clayg | wbhuber: but the traceback is ECONNREFUSED or StopIteration? | 20:39 |
wbhuber | clayg: in the testing env, i'd not see ECONNREFUSED b/c StopIteration comes into play first. | 20:40 |
clayg | wbhuber: I get why an unexpected error would turn into a traceback when it gets through to logger.exception - but I don't get why exc_info would *not* be socket.error ECONNREFUSED when the LogAdapter is trying to translate the message - then it *would* be a socket.error when it goes to log it? | 20:40 |
wbhuber | clayg: unless i'm not understanding what the heck StopIteration actually is | 20:40 |
clayg | wbhuber: well, StopIteration is just an exception - you see them sometimes with mocked_http_conn if you don't provide enough mock responses to fulfill the UUT's http requests | 20:41 |
clayg | wbhuber: given that in the simple unittest you were able to see the socket.error log line translated as expected there must be something more sinister going on | 20:43 |
clayg | wbhuber: have you been able to duplicate the traceback in dev but shutting down some subset of services manually? (like swift-init object-server stop -c 1 and swift-init object-reconstructor once -nv -c 2 or something?) | 20:43 |
wbhuber | clayg: no, not yet, but thanks for the commands - those are helpful. | 20:44 |
*** gyee has joined #openstack-swift | 20:51 | |
*** petertr7 is now known as petertr7_away | 21:09 | |
*** jordanP has quit IRC | 21:12 | |
notmyname | there has been a *lot* that's happened in swift since the last release (2.3.0) | 21:20 |
* notmyname is working on the changelog updates | 21:20 | |
prometheanfire | notmyname: splitting out the bundled libs is SO much cleaner btw | 21:33 |
prometheanfire | http://lab.jerasure.org/jerasure/jerasure/issues/5 http://lab.jerasure.org/jerasure/gf-complete/issues/8 | 21:33 |
notmyname | prometheanfire: thanks for filing those | 21:34 |
clayg | peluse: oh neat! we're not reverting partitions with handoffs! | 21:35 |
*** silor has quit IRC | 21:39 | |
prometheanfire | notmyname: anyway, /parting, cya around :D | 21:42 |
*** prometheanfire has left #openstack-swift | 21:42 | |
*** annegentle has quit IRC | 21:45 | |
peluse | clayg, huh? | 21:49 |
clayg | clayg: s/handoffs/tombstones/ | 21:50 |
clayg | peluse: I'm having a bad week :'( | 21:50 |
peluse | heh, hey you wanna chat real quick w/us? scenario/theory to run by you | 21:50 |
*** ccavanna has quit IRC | 21:51 | |
clayg | there's this call to validate_fragment_index inside of purge and when it raises DiskFileError - we just continue - no log or nothing | 21:51 |
clayg | peluse: sure! | 21:51 |
*** NM has joined #openstack-swift | 21:56 | |
NM | Hi guys! | 22:00 |
NM | Looking at the docs: http://developer.openstack.org/api-ref-objectstorage-v1.html, Accounts don't seem to accept DELETE. Is that correct? | 22:01 |
notmyname | NM: normally correct | 22:01 |
*** CaioBrentano has joined #openstack-swift | 22:02 | |
notmyname | NM: accounts only accept PUT and DELETE when the proxy has allow_account_management = true. (default is false) | 22:02 |
CaioBrentano | Hi Swift gurus... simple question. Is it possible to delete an entire tenant? | 22:03 |
notmyname | I suspect NM and CaioBrentano know each other | 22:03 |
notmyname | CaioBrentano: yes | 22:03 |
CaioBrentano | notmyname sorry dude. my chat was idle and I couldn't see his question! He's my co-worker | 22:05 |
notmyname | no worries. | 22:05 |
notmyname | just fun to get basically the exact same question from 2 people within 2 minutes of each other :-) | 22:05 |
notmyname | CaioBrentano: the answer is yes. send a DELETE to the account | 22:06 |
notmyname | CaioBrentano: however, only some users (and proxies) are allowed to do that | 22:06 |
*** annegentle has joined #openstack-swift | 22:07 | |
NM | notmyname: Thanks a lot! | 22:07 |
NM | notmyname: Can I find that info somewhere? | 22:08 |
notmyname | which info? | 22:08 |
clayg | peluse: ctennis: I'll probably attach this to a bug report pretty soon -> https://gist.github.com/clayg/36f8e9f1bded52b6e1b1 | 22:10 |
*** AndreiaKumpera has joined #openstack-swift | 22:11 | |
NM | notmyname: About the correlation between allow_account_management and DELETE on an account. | 22:11 |
notmyname | NM: allow_account_management must be set to true. and the auth token must be for a reseller_admin | 22:12 |
notmyname | NM: I'd recommend that you have a separate proxy server deployed that is firewalled off and has allow_account_management = true | 22:13 |
notmyname | NM: then you need to get creds for a reseller admin and send the delete to the account. then, in the background, the account-reaper starts deleting stuff | 22:14 |
notmyname | NM: you can set the delay_reaping value to a number of seconds you want to wait until it actually starts deleting the data (eg a week or month or something) | 22:15 |
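A sketch of the configuration described above, with illustrative values; allow_account_management and delay_reaping are real Swift options, but the sections, file layout, and numbers here are just examples.

```
# proxy-server.conf on a separate, firewalled-off management proxy
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true

# account-server.conf -- how long the account-reaper waits before it actually
# starts deleting data after an account DELETE
[account-reaper]
delay_reaping = 604800   # one week, in seconds
```

With that in place, a reseller_admin token can send a plain DELETE to the account URL on that proxy, and the account-reaper does the actual deletion in the background after the configured delay.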
*** hrou has quit IRC | 22:16 | |
*** pgbridge has quit IRC | 22:20 | |
wbhuber | clayg: say you're doing an ECPutter.connect to a storage node and you break at one of the lines inside the connection and manually shut down one node that it's trying to connect to. I guess you'd put some kind of set_trace() or rpdb to force the thread to wait and shut down the server (swift-init object-server stop -c 1) | 22:21 |
clayg | wbhuber: sure | 22:23 |
clayg | wbhuber: did you do that!? | 22:23 |
wbhuber | clayg: I did, with nose.tools, but to no avail. | 22:23 |
mattoliverau | Morning | 22:24 |
*** annegentle has quit IRC | 22:24 | |
wbhuber | clayg: running on SAIO <---- | 22:24 |
clayg | yeah i don't know how paul made that error | 22:24 |
wbhuber | probably from harness testing | 22:24 |
peluse | this paul? | 22:24 |
clayg | it's like his socket.error fails the isinstance test in the LogAdapter.exception | 22:25 |
*** annegentle has joined #openstack-swift | 22:25 | |
CaioBrentano | notmyname thanks for the answers. I'm testing with NM! Sorry again for the duplicated question! | 22:25 |
clayg | peluse: yeah! the traceback on ECONNREFUSED - for everyone else it's all nice and clean - no traceback | 22:25 |
peluse | ahhh | 22:25 |
clayg | peluse: like we call logger.exception - but when you trace it into utils our logger wraps all that up and makes the socket.errors pretty | 22:26 |
peluse | wbhuber, yeah so we weren't on an SAIO, we just went to a storage node and stopped all services. | 22:26 |
wbhuber | that's a good logical system test approach | 22:26 |
NM | notmyname: Thanks so much. That was quite clear. And finally I got the reseller_admin thing :) | 22:27 |
notmyname | great | 22:27 |
clayg | peluse: also the plot thickens on the 'None' header story; the frag_index can be None, but only on revert jobs - which should mostly only be talking to primaries :\ | 22:27 |
notmyname | NM: CaioBrentano: so how are you using swift? | 22:27 |
*** annegentle has quit IRC | 22:29 | |
lcurtis | Hello all, grappling with another theoretical question...say if we lose 3 drives that just so happen to have 3 replicas of an object...the replicator essentially just walks the filesystem, correct? so would we ever be able to tell if that object goes missing? | 22:30 |
notmyname | lcurtis: you'd have an entry in a container listing with no corresponding object | 22:31 |
clayg | notmyname: but what if *those* drives failed too!? | 22:31 |
notmyname | clayg: then you'd look at the sum of the stats in the account and compare it against your hand-calculated total of the size of everything you could find in your account! | 22:32 |
lcurtis | but essentially would have to fall back on container listing | 22:32 |
lcurtis | ? | 22:32 |
notmyname | "but what if *those* drives failed too!?" you ask. then you'd go drink | 22:33 |
lcurtis | lol | 22:33 |
CaioBrentano | notmyname: in which sense? business side or technical side? | 22:34 |
clayg | lcurtis: fall back to what now when? like you lost a whole part - maybe multiple parts (if those three devices had many in common) - so the objects in them are for numerous /account/container/object names | 22:34 |
*** NM has quit IRC | 22:34 | |
notmyname | CaioBrentano: both! actually, I'm just curious about how you're using swift. what your use case is. how much data. public or private. so I guess that's the business side | 22:35 |
lcurtis | clayg: well...precisely...I'm not sure what I would even rely upon, after losing 3 drives, to tell me what I'd actually lost, or at least to be able to tell some technicians when they could swap a drive if we lost more than 2 at a time | 22:36 |
notmyname | lcurtis: there's nothing that automatically checks that everything referenced in a listing is somewhere else in the cluster. so if you lose 3 drives, then you use swift-ring-builder to see if there were any common partitions on them. and if so, you lost any data in those partitions | 22:36 |
clayg | lcurtis: I think maybe monitor drive read failure rates, or slow reads, bad sectors, or think about your stock and make sure you're adding new drives into new zones with different batches of hard drives so when you hit correlated failures they don't line up across multiple failure domains | 22:36 |
*** ho has joined #openstack-swift | 22:38 | |
lcurtis | thanks you guys...making sense | 22:38 |
clayg | ... and then you know... read Backblaze and decide how you're gonna manage the impending failure spike as that lot of drives in the old hardware starts to come of age | 22:38 |
*** annegentle has joined #openstack-swift | 22:39 | |
lcurtis | notmyname: which switches would i use in swift-ring-builder to match partitions? | 22:39 |
notmyname | is it not in the -h usage message? | 22:39 |
clayg | notmyname: that would be a neat feature of swift-ring-builder remove - like before it started the rebalance if it checked for "parts in common" and displayed (1 replica of X parts, 2 replicas of X parts) something like that | 22:40 |
clayg | ^ torgomatic | 22:40 |
CaioBrentano | notmyname: We work at globo.com... it is the internet company of Grupo Globo (huge media company). We are using it for several purposes, from serving static web content to serving OS images for Tsuru (our PAAS). We use both... public and private | 22:40 |
torgomatic | clayg: isn't that the dispersion report you wrote? | 22:41 |
clayg | peluse: ok, nm, so it may *all* be replication timeouts! | 22:41 |
notmyname | CaioBrentano: cool! | 22:41 |
clayg | torgomatic: yeah - probably similar - but I'm thinking specifically in the "remove failed devices" workflow | 22:42 |
notmyname | CaioBrentano: I had heard globo.com used swift. just didn't know any details | 22:42 |
clayg | like if you only remove one a time - then it *should* always be just "1 replica of X parts" | 22:42 |
CaioBrentano | notmyname: we are still not that big, because it's not easy to change the way people do their deploys/apps. But we have two regions in two DCs... | 22:42 |
lcurtis | notmyname: ah! thanks man! I never used the list_parts switch yet | 22:42 |
clayg | ... but if you remove *two devices* you could have some parts-in-common | 22:42 |
*** esker has joined #openstack-swift | 22:42 | |
torgomatic | clayg: so you get a sense of how panicked to be? | 22:44 |
notmyname | clayg: like a check to list_parts before the remove is done. and warn | 22:44 |
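A sketch of the pre-remove warning clayg and notmyname are describing, run against a builder file; the builder path and device ids are hypothetical, and _replica2part2dev is a private builder attribute, so this is illustrative rather than a supported interface:

```python
# Sketch of the proposed pre-flight check for "swift-ring-builder ... remove":
# count partitions with more than one replica on the devices about to go.
from collections import Counter
from swift.common.ring.builder import RingBuilder

builder = RingBuilder.load('/etc/swift/object.builder')  # path is an assumption
removing = {3, 7}  # ids of the devices being removed (hypothetical)

# _replica2part2dev is internal builder state: one array per replica,
# mapping partition -> device id.
per_part = Counter()
for replica in builder._replica2part2dev:
    for part, dev_id in enumerate(replica):
        if dev_id in removing:
            per_part[part] += 1

summary = Counter(per_part.values())
for hits, count in sorted(summary.items()):
    print('WARNING: %d replica(s) of %d part(s) are on the devices being removed'
          % (hits, count))
```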
*** kutija has quit IRC | 22:47 | |
ho | good morning guys! | 22:48 |
*** esker has quit IRC | 22:48 | |
lcurtis | have to run..be on later..thanks a million for that tip...had not done list_parts before | 22:48 |
lcurtis | will have to mess with it | 22:48 |
notmyname | hello ho | 22:48 |
*** lcurtis has quit IRC | 22:49 | |
*** esker has joined #openstack-swift | 22:51 | |
ho | notmyname: hi | 22:54 |
*** esker has quit IRC | 22:56 | |
*** jrichli has quit IRC | 22:58 | |
mattoliverau | ho: morning | 23:06 |
ho | notmyname: yesterday, i built kinetic-swift. i'm curious about performance with actual drives. do you have a result or feeling for it? | 23:08 |
ho | mattoliverau: morning! | 23:08 |
CaioBrentano | notmyname: where did you hear about it? | 23:09 |
notmyname | ho: we've run some numbers, but I don't have them handy right now. is kinetic something you're looking at seriously for your clusters? or is it more of a cool toy to play with? | 23:09 |
notmyname | CaioBrentano: not sure. | 23:10 |
ho | notmyname: "a cool toy to play with" :-) | 23:10 |
notmyname | heh, ok | 23:10 |
clayg | whoever updated swift-object-info to read tombstones is a fucking saint | 23:11 |
clayg | minwoob: ^ was that you? | 23:11 |
minwoob | clayg: I wish it were XD | 23:11 |
CaioBrentano | notmyname: thanks again... 8PM in BR... see ya | 23:12 |
minwoob | What's going on? | 23:12 |
clayg | Ricardo Ferreira <- whoever you are; I owe you a drink | 23:12 |
*** km has joined #openstack-swift | 23:12 | |
minwoob | Btw, I'm really close on these probe tests ... they're failing intermittently. | 23:13 |
*** jlhinson has quit IRC | 23:13 | |
*** annegentle has quit IRC | 23:14 | |
minwoob | Main issue here is that the part seems to be moved by shutil.move(); however, sometimes the frag is still there?? | 23:14 |
clayg | minwoob: ??? | 23:14 |
minwoob | Either that, or the partner primary's reconstructor non-deterministically reconstructs or doesn't reconstruct the "missing" fragment. | 23:15 |
clayg | minwoob: welll......... | 23:15 |
notmyname | what's up with https://bugs.launchpad.net/swift/+bug/1457691 ? peluse marked it as "critical" (changed from "high"), which says I shouldn't cut a release until it is resolved | 23:15 |
openstack | Launchpad bug 1457691 in OpenStack Object Storage (swift) "node timeout on overwrite can easily cause mis-matched etag fragment to 503" [Critical,In progress] - Assigned to paul luse (paul-e-luse) | 23:15 |
clayg | notmyname: peluse made that critical - i guess he wants you to not release! | 23:16 |
notmyname | https://review.openstack.org/#/c/212187/ closes it. and needs another +2. so looks like that's what's needed | 23:16 |
clayg | notmyname: I think I'm supposed to be squashing acoles fixes down into it | 23:16 |
clayg | notmyname: I was going to do that - but I'm working on another bug (or bugs?) that ctennis and he found | 23:16 |
notmyname | clayg: are there other EC fixes that need/should go into 2.4.0? I want to do that asap now that the tempurl bugs have been closed | 23:17 |
clayg | notmyname: yeah that makes sense to me - I think the priority on the EC bugs should be reduced to high and a release should be cut - but that's just like my *opinion* | 23:18 |
notmyname | ok, thanks | 23:18 |
clayg | notmyname: really I think the one we're working on now (lp bug #1489546) should be higher than the proxy GET bug since it can cause the consistency engine to get gummed up | 23:18 |
openstack | Launchpad bug 1489546 in OpenStack Object Storage (swift) "logic error in ssync_rcvr when getting EC frags from a handoff" [Undecided,In progress] https://launchpad.net/bugs/1489546 - Assigned to paul luse (paul-e-luse) | 23:18 |
notmyname | I want to get peluse's opinion on it too | 23:18 |
clayg | notmyname: well YEAH! | 23:19 |
clayg | notmyname: peluse is probably frustrated because that patch has been ongoing for a number of days (since Austin really) - but, well it's finally close to a state that we're all happy with (and we're not even *that* happy - still want to use timestamps instead of etags) | 23:20 |
clayg | notmyname: ... but like I said, the reconstructor tombstone revert bug seems like a bigger issue ATM (again, my opinion, they all need to be fixed, and the newest bug seems like the wrong one to hold up a release for) | 23:20 |
*** miurahr has quit IRC | 23:21 | |
*** kei_yama has joined #openstack-swift | 23:26 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 23:26 |
*** sanchitmalhotra1 has joined #openstack-swift | 23:30 | |
*** sanchitmalhotra has quit IRC | 23:32 | |
clayg | minwoob: oh i meant to tell you | 23:33 |
clayg | minwoob: so it's annoying, but in our dev environment every reconstructor process has two devices | 23:34 |
clayg | minwoob: that part is fine, we always expect nodes in production to have multiple devices | 23:34 |
clayg | minwoob: but in an 8 device 4+2 scheme there's just not that many non-primary devices | 23:34 |
clayg | minwoob: it's likely that when you turn the crank on a reconstructor you're hitting both a handoff device and a primary device | 23:35 |
clayg | minwoob: unless you do something to make the primary device *not* do the rebuild it's a toss up if the revert job will go first or not | 23:35 |
clayg | minwoob: if you look at... say test_reconstruct_from_reverted_fragment_archive, there's this really annoying block where it will "force the handoff device to revert instead of potentially racing with rebuild by deleting any other fragments" | 23:36 |
minwoob | clayg: Oh - so that explains the occasional "what's this fragment doing here?" vs the expected 404! | 23:37 |
minwoob | clayg: Thanks!! | 23:38 |
minwoob | clayg: I will make use of that block again. | 23:38 |
clayg | minwoob: extract it to a helper! (maybe... don't do that just 'cause I think it's a good idea before trying it - try to extract it - then decide if it's a useful helper) | 23:38 |
minwoob | clayg: Yeah, it's probably not going to be the same in these scenarios. | 23:39 |
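A hypothetical shape for the helper clayg suggests extracting: given the hash dirs that hold the other primaries' fragment archives (the probe test would work those paths out from the ring), delete their .data files so a rebuild can't race the handoff's revert job. The function name is made up and nothing here is a supported probe-test API:

```python
# Hypothetical helper: force the handoff to revert by removing the fragment
# archives the remaining primaries would otherwise use to rebuild.
import glob
import os


def remove_other_frag_archives(hash_dirs):
    """Delete fragment archives (.data files) from the given hash dirs."""
    removed = []
    for hash_dir in hash_dirs:
        for frag_path in glob.glob(os.path.join(hash_dir, '*.data')):
            os.unlink(frag_path)
            removed.append(frag_path)
    return removed
```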
*** bill_az has joined #openstack-swift | 23:39 | |
minwoob | clayg: Will come up with something. Thanks. Also, why was the reconstructor architected to look both left and right? | 23:40 |
minwoob | clayg: Can't they all just look left (or right)? | 23:40 |
minwoob | clayg: In either case, everyone is being covered for. | 23:40 |
notmyname | minwoob: because left+right was better than "look at all the other nodes". | 23:41 |
minwoob | notmyname: Yeah, but there doesn't seem to be much gain for detecting a missing fragment by doing left+right, vs. just left, or just right. | 23:43 |
notmyname | minwoob: I was giving you a reason. I didn't say it was a good reason ;-) | 23:43 |
minwoob | notmyname: I see. Okay. :) | 23:43 |
clayg | minwoob: it might have been a bit of a WAG - but it's better coverage on holes - if changes (sync) only propagate in one direction it can take a long time for them to get all the way back around the ring | 23:43 |
minwoob | clayg: Ah, okay. | 23:44 |
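For reference, a minimal sketch of the left+right idea: a node holding fragment index N checks the primaries on either side of it in the partition's node list, wrapping at the ends. The function name is illustrative rather than a quote of the reconstructor's code:

```python
def get_partners(frag_index, part_nodes):
    """Return the primary nodes to the left and right of frag_index."""
    return [
        part_nodes[(frag_index - 1) % len(part_nodes)],
        part_nodes[(frag_index + 1) % len(part_nodes)],
    ]


# e.g. with 6 primaries, index 0 partners with indexes 5 and 1
print(get_partners(0, list(range(6))))  # [5, 1]
```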
*** hrou has joined #openstack-swift | 23:48 |