*** zaitcev has joined #openstack-swift | 00:01 | |
*** ChanServ sets mode: +v zaitcev | 00:01 | |
*** HenryG has quit IRC | 00:04 | |
swift_fan | notmyname : Why's that ? | 00:08 |
swift_fan | notmyname : out of curiosity ? | 00:08 |
notmyname | swift_fan: just curious | 00:08 |
swift_fan | notmyname : I'm in central. | 00:09 |
swift_fan | notmyname : you ? | 00:09 |
swift_fan | notmyname : it's 7:09 PM right now. | 00:10 |
notmyname | I'm in san francisco | 00:10 |
swift_fan | notmyname : Ah, I see. | 00:13 |
*** traz_ has joined #openstack-swift | 00:16 | |
traz_ | Hi all, is there a way to validate keystone PKI token at swift side without using CMS ? | 00:17 |
notmyname | traz_: what's CMS? | 00:17 |
notmyname | traz_: unless you have other, non-swift requirements for keystone pki tokens (eg you're also using nova and need them there), then I'd recommend using uuid tokens from keystone instead | 00:18 |
traz_ | Cryptographic Message Syntax | 00:18 |
notmyname | traz_: I don't know | 00:19 |
swifterdarrell | traz_: like, you tell Swift some key and it can cryptographically verify the token contents w/o having to do a network round-trip back to keystone? | 00:19 |
traz_ | PKI is a cms token, cms is an openssl lib | 00:19 |
traz_ | yes | 00:19 |
swifterdarrell | traz_: no idea... but sounds like a neat feature, maybe? | 00:20 |
traz_ | I'm able to get PKI token from keystone | 00:20 |
notmyname | swifterdarrell: traz_: I think the hard part with the pki token is you still have to do the network request to get the CRL (/me is far from an expert on keystone PKI) | 00:20 |
swifterdarrell | traz_: I think you'd have to modify one or more of the keystone-related middlewares... at least one of which lives in python-keystoneclient | 00:21 |
traz_ | but its validation at swift side is failing as openssl version installed on swift server does not support cms | 00:21 |
swifterdarrell | traz_: ah | 00:21 |
swifterdarrell | traz_: upgrade openssl? I'm really just guessing at this point. | 00:21 |
traz_ | yes this is a solution, but is it possible without upgrading ? I mean is there any middleware like that ? | 00:22 |
traz_ | or any other way ? | 00:22 |
notmyname | traz_: keystone provides the auth_token middleware that turns a keystone token (uuid or pki) into an identity record that swift understands. that's the only thing in swift that does anything with keystone | 00:24 |
traz_ | notmyname: but still it should reduce the number of requests to the keystone server for validation | 00:24 |
notmyname | traz_: so eg there aren't any other tools in swift to do stuff with keystone tokens | 00:24 |
traz_ | ok. I'm using keystonemiddleware | 00:25 |
notmyname | traz_: yeah, I'm looking at that now too. interestingly, I don't see anything in requirements.txt about openssl | 00:25 |
traz_ | sorry keystoneclient middleware | 00:25 |
traz_ | paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory | 00:26 |
notmyname | traz_: that's where it used to live. they split it out into a separate repo (keystonemiddleware). same thing though | 00:26 |
traz_ | now, keystone client is using cms to validate a pki token -- which I want to avoid | 00:27 |
traz_ | ok | 00:27 |
notmyname | traz_: "from keystoneclient.common import cms" <- it would seem to be required | 00:28 |
*** foexle has joined #openstack-swift | 00:29 | |
notmyname | I joined memcached. nothing has been said in there :-) | 00:29 |
traz_ | Yes, it seems to be, but is there any way this can be done without upgrading openssl ? | 00:29 |
notmyname | traz_: I don't know | 00:30 |
traz_ | ok thanks for responding | 00:30 |
notmyname | traz_: I think you might find better answers (more helpful than "I don't know") by asking the keystone devs | 00:31 |
traz_ | ok thanks, I'll try there | 00:31 |
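For context on the middleware discussed above: keystone token validation is wired into Swift's proxy pipeline via paste configuration. A minimal sketch follows; the hostnames, credentials, and pipeline contents are illustrative, and a deployment using the newer keystonemiddleware package would point filter_factory at keystonemiddleware.auth_token instead of keystoneclient.middleware.auth_token.

```ini
# proxy-server.conf -- sketch only; values are placeholders, not taken from the conversation
[pipeline:main]
pipeline = catch_errors cache authtoken keystoneauth proxy-server

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = <keystone_host>
auth_port = 35357
auth_protocol = https
admin_tenant_name = service
admin_user = swift
admin_password = <password>

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
```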
*** tdasilva has joined #openstack-swift | 00:33 | |
*** kota_ has joined #openstack-swift | 00:38 | |
*** foexle has quit IRC | 00:48 | |
*** shri1 has left #openstack-swift | 00:51 | |
*** siwsky has left #openstack-swift | 00:53 | |
*** traz_ has quit IRC | 00:57 | |
*** tkatarki has quit IRC | 01:08 | |
*** tkatarki has joined #openstack-swift | 01:15 | |
*** tkatarki has quit IRC | 01:21 | |
*** tkatarki has joined #openstack-swift | 01:21 | |
*** haomaiw__ has quit IRC | 01:30 | |
*** haomaiwang has joined #openstack-swift | 01:31 | |
*** tdasilva has quit IRC | 01:33 | |
*** nosnos has joined #openstack-swift | 01:39 | |
*** tkatarki has quit IRC | 01:40 | |
*** Fin1te has joined #openstack-swift | 01:43 | |
*** haomaiw__ has joined #openstack-swift | 01:46 | |
*** haomaiwang has quit IRC | 01:48 | |
*** haomaiwang has joined #openstack-swift | 01:56 | |
*** haomaiw__ has quit IRC | 01:56 | |
*** kota_ has quit IRC | 02:09 | |
*** Fin1te has left #openstack-swift | 02:13 | |
*** Fin1te has joined #openstack-swift | 02:14 | |
*** tgohad has quit IRC | 02:16 | |
*** HenryG has joined #openstack-swift | 02:26 | |
*** bill_az has quit IRC | 02:40 | |
*** mrsnivvel has quit IRC | 02:46 | |
*** haomaiw__ has joined #openstack-swift | 02:48 | |
*** haomaiwang has quit IRC | 02:52 | |
*** openstackgerrit has quit IRC | 02:58 | |
*** Fin1te has quit IRC | 03:00 | |
*** mrsnivvel has joined #openstack-swift | 03:09 | |
Sanchit | clayg: None that I can see. Posting the query again. | 03:11 |
Sanchit | Hi, Regarding the ACL permissions on Container, Is it possible to specify a particular role in 'X-Container-Read' and then, all the users with that particular role can access the objects in the specified container? In general terms, is role-based ACL a feature in openstack-swift? | 03:11 |
Sanchit | Can anyone help me regarding this query posted above??? | 03:11 |
*** nosnos has quit IRC | 03:26 | |
*** nosnos has joined #openstack-swift | 03:26 | |
*** nosnos has quit IRC | 03:30 | |
*** dmsimard is now known as dmsimard_away | 04:10 | |
*** echevemaster has quit IRC | 04:14 | |
*** nosnos has joined #openstack-swift | 04:32 | |
*** zaitcev has quit IRC | 04:36 | |
*** brnelson has quit IRC | 05:12 | |
*** brnelson has joined #openstack-swift | 05:12 | |
*** elambert has joined #openstack-swift | 05:20 | |
*** haomaiw__ has quit IRC | 06:00 | |
*** haomaiwang has joined #openstack-swift | 06:00 | |
*** ttrumm has joined #openstack-swift | 06:06 | |
mattoliverau | Night swift land, I'm calling it a day (and it's friday, woo weekend!). Have a great one all. | 06:13 |
*** haomai___ has joined #openstack-swift | 06:16 | |
*** haomaiwang has quit IRC | 06:19 | |
*** d0m0reg00dthing has joined #openstack-swift | 06:28 | |
*** kyles_ne has joined #openstack-swift | 06:32 | |
*** k4n0 has joined #openstack-swift | 06:34 | |
*** d0m0reg00dthing has quit IRC | 06:47 | |
*** d0m0reg00dthing has joined #openstack-swift | 06:47 | |
*** d0m0reg00dthing has quit IRC | 06:47 | |
*** d0m0reg00dthing has joined #openstack-swift | 06:48 | |
*** kevinbenton has quit IRC | 06:50 | |
*** d0m0reg00dthing has quit IRC | 06:50 | |
*** d0m0reg00dthing has joined #openstack-swift | 06:50 | |
*** d0m0reg00dthing has quit IRC | 06:53 | |
*** d0m0reg00dthing has joined #openstack-swift | 06:53 | |
*** foexle has joined #openstack-swift | 07:08 | |
*** acoles_away is now known as acoles | 07:08 | |
*** tab___ has quit IRC | 07:10 | |
*** k4n0 has quit IRC | 07:24 | |
*** kyles_ne has quit IRC | 07:27 | |
*** kyles_ne has joined #openstack-swift | 07:27 | |
*** k4n0 has joined #openstack-swift | 07:28 | |
*** kyles_ne has quit IRC | 07:32 | |
*** jamiehannaford has joined #openstack-swift | 07:37 | |
*** haomai___ has quit IRC | 07:52 | |
*** haomaiwa_ has joined #openstack-swift | 07:53 | |
*** jistr has joined #openstack-swift | 07:56 | |
*** aix has joined #openstack-swift | 08:01 | |
*** haomaiwa_ has quit IRC | 08:17 | |
*** haomaiwang has joined #openstack-swift | 08:17 | |
*** nellysmitt has joined #openstack-swift | 08:22 | |
*** haomaiw__ has joined #openstack-swift | 08:31 | |
*** ttrumm has quit IRC | 08:32 | |
*** haomaiwang has quit IRC | 08:33 | |
*** nellysmitt has quit IRC | 08:53 | |
*** foexle has quit IRC | 08:54 | |
*** geaaru has joined #openstack-swift | 09:32 | |
*** nosnos has quit IRC | 09:39 | |
*** nosnos has joined #openstack-swift | 09:40 | |
*** nosnos has quit IRC | 09:44 | |
*** d0m0reg00dthing has quit IRC | 09:58 | |
*** kevinbenton has joined #openstack-swift | 10:03 | |
*** X019 has quit IRC | 10:41 | |
*** X019 has joined #openstack-swift | 10:54 | |
*** kopparam has joined #openstack-swift | 11:33 | |
*** miqui has quit IRC | 11:34 | |
*** kopparam has quit IRC | 11:37 | |
*** X019 has quit IRC | 11:41 | |
*** k4n0 has quit IRC | 11:42 | |
*** X019 has joined #openstack-swift | 11:44 | |
*** jistr has quit IRC | 12:00 | |
*** thofi74 has joined #openstack-swift | 12:02 | |
*** mahatic has joined #openstack-swift | 12:14 | |
*** mkollaro has joined #openstack-swift | 12:17 | |
*** jistr has joined #openstack-swift | 12:19 | |
*** foexle has joined #openstack-swift | 12:21 | |
*** CaioBrentano has joined #openstack-swift | 12:36 | |
*** SkyRocknRoll has joined #openstack-swift | 12:44 | |
*** foexle has quit IRC | 12:46 | |
*** NM has joined #openstack-swift | 12:55 | |
*** miqui has joined #openstack-swift | 12:56 | |
*** HenryG has quit IRC | 12:57 | |
*** tkatarki has joined #openstack-swift | 13:08 | |
*** tsg has joined #openstack-swift | 13:09 | |
acoles | cschwede: i figured out how to mock OutputManager without needing to change its __init__ signature, https://review.openstack.org/#/c/125759/9 | 13:09 |
*** jyoti-ranjan has joined #openstack-swift | 13:12 | |
jyoti-ranjan | I have one question related to Swift cluster tuning as I am seeing some failures if the number of concurrent requests is large | 13:13 |
jyoti-ranjan | In particular, can anyone help me understand how important the parameters below are | 13:14 |
jyoti-ranjan | sysctl -w net.ipv4.tcp_tw_recycle=1 | 13:14 |
jyoti-ranjan | sysctl -w net.ipv4.tcp_tw_reuse=1 | 13:14 |
jyoti-ranjan | sysctl -w net.ipv4.tcp_syncookies=0 | 13:14 |
jyoti-ranjan | Any input will be helpful! | 13:14 |
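For reference, the sysctl settings quoted above are usually made persistent in /etc/sysctl.conf; a sketch using the same values the questioner listed (whether they are appropriate depends on the deployment -- tcp_tw_recycle in particular is known to misbehave with clients behind NAT):

```ini
# /etc/sysctl.conf
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 0
# apply without a reboot: sysctl -p
```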
*** jistr is now known as jistr|bbl | 13:16 | |
*** bill_az has joined #openstack-swift | 13:16 | |
*** elambert has quit IRC | 13:20 | |
*** tkatarki has quit IRC | 13:21 | |
*** tkatarki has joined #openstack-swift | 13:26 | |
*** mrsnivvel has quit IRC | 13:27 | |
ctennis | what failure are you seeing jyoti-ranjan? | 13:32 |
jyoti-ranjan | I am seeing errors like | 13:34 |
jyoti-ranjan | Oct 9 06:49:08 overcloud-ce-controller-swiftstorage1-6wcs67e44qei proxy-server: ERROR with Object server 192.168.116.46:6000/d1411740199 re: Trying to DELETE /AUTH_a07be3f0ffc74183a0bba38ce1c57dc0/c/o-0-1-174: ConnectionTimeout (0.5s) (txn: tx8a8a4532ca64438fae8c2-0054362fe3) (client_ip: 192.168.116.34) | 13:34 |
jyoti-ranjan | Oct 9 06:49:08 overcloud-ce-controller-swiftstorage1-6wcs67e44qei proxy-server: Handoff requested (1) (txn: tx8a8a4532ca64438fae8c2-0054362fe3) (client_ip: 192.168.116.34) | 13:34 |
jyoti-ranjan | Oct 9 06:49:08 overcloud-ce-controller-swiftstorage1-6wcs67e44qei proxy-server: ERROR with Object server 192.168.116.48:6000/e1411744653 re: Trying to DELETE /AUTH_a07be3f0ffc74183a0bba38ce1c57dc0/c/o-0-1-174: ConnectionTimeout (0.5s) (txn: tx8a8a4532ca64438fae8c2-0054362fe3) (client_ip: 192.168.116.34) | 13:34 |
jyoti-ranjan | ConnectionTimeout error | 13:34 |
*** tsg has quit IRC | 13:35 | |
*** tsg has joined #openstack-swift | 13:35 | |
*** bill_az has quit IRC | 13:43 | |
acoles | Sanchit: Assuming you are using keystone auth, you can specify a role in X-Container-Read | 13:48 |
acoles | Sanchit: a user with that role *on the account's tenant* can then read the container | 13:51 |
acoles | Sanchit: at least, that was my experience, see http://paste.openstack.org/show/120167/ | 13:52 |
*** mahatic has quit IRC | 13:53 | |
*** bsdkurt has quit IRC | 13:55 | |
*** gvernik has joined #openstack-swift | 13:56 | |
*** dmsimard_away is now known as dmsimard | 13:57 | |
acoles | Sanchit: be aware that other access controls are based on a user having *any* role on a tenant e.g. X-Container-Write: test:tester2 will grant write access to user 'tester2' if tester2 has *any* role on tenant test | 14:01 |
acoles | clayg: ^^ | 14:02 |
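To make the ACL behaviour acoles describes concrete, a hedged curl sketch follows; the role name, token, endpoint, and container are placeholders, not values from the conversation:

```sh
# Grant read access to any user holding role "Member" on the account's tenant
curl -i -X POST \
     -H "X-Auth-Token: <token>" \
     -H "X-Container-Read: Member" \
     http://<proxy>:8080/v1/AUTH_<tenant_id>/<container>

# Caveat noted above: a tenant:user grant such as
#   X-Container-Write: test:tester2
# gives tester2 write access as long as tester2 holds *any* role on tenant "test".
```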
*** tsg has quit IRC | 14:07 | |
*** tsg has joined #openstack-swift | 14:13 | |
*** otherjon has quit IRC | 14:15 | |
*** otherjon has joined #openstack-swift | 14:15 | |
*** openstackgerrit has joined #openstack-swift | 14:16 | |
*** gvernik has quit IRC | 14:17 | |
*** CaioBrentano1 has joined #openstack-swift | 14:17 | |
*** gvernik has joined #openstack-swift | 14:19 | |
*** CaioBrentano has quit IRC | 14:19 | |
*** CaioBrentano1 is now known as CaioBrentano | 14:20 | |
*** gvernik has quit IRC | 14:23 | |
*** jamiehan_ has joined #openstack-swift | 14:25 | |
*** jamiehan_ has quit IRC | 14:26 | |
*** jamiehannaford has quit IRC | 14:27 | |
*** tsg has quit IRC | 14:28 | |
*** HenryG has joined #openstack-swift | 14:31 | |
*** NM has quit IRC | 14:40 | |
*** elambert has joined #openstack-swift | 14:54 | |
*** SkyRocknRoll has quit IRC | 15:00 | |
*** Sanchit has quit IRC | 15:01 | |
*** foexle has joined #openstack-swift | 15:07 | |
*** nellysmitt has joined #openstack-swift | 15:07 | |
*** tsg has joined #openstack-swift | 15:08 | |
*** cloudnull has joined #openstack-swift | 15:11 | |
*** aerwin has joined #openstack-swift | 15:13 | |
*** foexle has quit IRC | 15:18 | |
*** kbee has joined #openstack-swift | 15:20 | |
*** mahatic has joined #openstack-swift | 15:22 | |
*** tdasilva has joined #openstack-swift | 15:33 | |
*** kbee has quit IRC | 15:34 | |
*** kbee has joined #openstack-swift | 15:35 | |
mahatic | notmyname, Good morning. I was going through instructions for application process (opw) and stumbled onto this: https://wiki.openstack.org/wiki/OutreachProgramForWomen/Mentors | 15:49 |
mahatic | notmyname, should you not be adding yourself to the list that is at the end of the page? | 15:50 |
*** jistr|bbl is now known as jistr | 15:50 | |
ctennis | jyoti-ranjan: do your kernel logs indicate you're running out of tcp ports or connections? | 15:53 |
*** kbee has quit IRC | 15:53 | |
mahatic | acoles, hi, around? | 15:55 |
acoles | mahatic: hi | 15:55 |
mahatic | acoles, thank you for your comments. Just a couple of questions: I should also be modifying the check in service.py to accommodate "BKMG" from this: https://review.openstack.org/#/c/126310/ | 15:56 |
mahatic | correct? | 15:56 |
*** aerwin has quit IRC | 15:56 | |
*** foexle has joined #openstack-swift | 15:58 | |
acoles | mahatic: no. 126310 will remove BKMG and pass an int to service.py | 15:58 |
mahatic | acoles, oh | 15:59 |
mahatic | acoles, yes, just saw. my bad! | 16:00 |
acoles | mahatic: so your check is good | 16:00 |
mahatic | acoles, hmm, i should only be moving the test and include your suggestions, right? | 16:02 |
*** lcurtis has joined #openstack-swift | 16:04 | |
*** kbee has joined #openstack-swift | 16:04 | |
mahatic | acoles, i do not see a file like test_service.py, should i be creating that one? | 16:05 |
notmyname | gppd ,prmomg | 16:08 |
notmyname | wow | 16:08 |
*** bill_az has joined #openstack-swift | 16:08 | |
notmyname | let's try that again | 16:08 |
notmyname | good morning | 16:08 |
acoles | mahatic: at this moment, do not change the test you already have in test_shell.py | 16:09 |
*** kbee has quit IRC | 16:09 | |
*** kbee has joined #openstack-swift | 16:09 | |
acoles | mahatic: if 126310 merges, then you will probably need to change that test, but 126310 has not been approved yet | 16:09 |
mahatic | acoles, okay | 16:09 |
acoles | mahatic: the test_service.py i suggested is an additional test that will work regardless of 126310 | 16:10 |
acoles | mahatic: its up to you, but if you want to add test_service.py then, yes, it is a new file under tests/unit - you can copy/paste from my pastebin | 16:11 |
acoles | and git add the file | 16:11 |
acoles | mahatic: or... you could leave a comment to say you do not want to change your patch any more :) I'm +1, the changes are only suggestions | 16:13 |
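For readers following the review workflow, the steps acoles describes for adding a new test file to an in-flight change would look roughly as below; a sketch assuming the standard git-review workflow, with tests/unit/test_service.py as the hypothetical new file:

```sh
git add tests/unit/test_service.py
git commit --amend    # keep the same Change-Id so gerrit treats it as a new patch set
git review            # upload the revised patch set
```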
mahatic | acoles, okay :) I will work on it as a follow-up patch maybe? and my current test will not be needed at all right if 126310 gets approved? | 16:14 |
mahatic | acoles, not the check, but the test in test_shell.py | 16:14 |
acoles | mahatic: if your patch merges before 126310 then the author of 126310 will have to modify your test in test_shell.py, because the output string will change. not your problem :) | 16:16 |
*** mkollaro has quit IRC | 16:17 | |
*** foexle has quit IRC | 16:17 | |
acoles | mahatic: so if you want to take the other suggestions as a follow-up the i will +2 | 16:19 |
acoles | s/the/then/ | 16:19 |
mahatic | acoles, sure, I will do a follow-up patch | 16:21 |
*** kbee has quit IRC | 16:21 | |
*** joseff has joined #openstack-swift | 16:21 | |
joseff | hi all, can anyone help me get a command installed/working on rhel? | 16:22 |
joseff | looking to get st command working | 16:22 |
joseff | http://bazaar.launchpad.net/~hudson-openstack/swift/1.2/view/head:/bin/st | 16:22 |
joseff | doesn't seem to be in regular swift files any more | 16:23 |
mahatic | acoles, i can as well work on it now, but to give you some context, opw apparently requires a patch to be merged. Although this one is: https://review.openstack.org/119193 | 16:24 |
mahatic | acoles, that got overridden by https://review.openstack.org/#/c/93788/. So I'll be happy to have this merged first, and then continue working on the follow-uo | 16:24 |
notmyname | joseff: `st` ("swift tool") is really old. it was renamed "swift" and then separated into the python-swiftclient repo | 16:24 |
mahatic | follow-up* | 16:24 |
joseff | so its just called swift now? | 16:25 |
acoles | mahatic: ok | 16:25 |
mahatic | acoles, and opw=OutreachProgramforWomen (internship program) | 16:25 |
joseff | I installed the python-swiftclient from pip | 16:25 |
notmyname | joseff: yes. https://github.com/openstack/python-swiftclient/blob/master/bin/swift | 16:25 |
joseff | sweet thanks for the insight | 16:25 |
joseff | will try it now same command structure? | 16:25 |
notmyname | joseff: yes. | 16:26 |
joseff | got errors | 16:26 |
acoles | mahatic: ah, not the 'office of public works' then :) | 16:26 |
joseff | No module named pbr.version | 16:26 |
notmyname | oh. then let me rephrase. "no" | 16:26 |
notmyname | ;-) | 16:26 |
joseff | haha | 16:26 |
notmyname | joseff: oh. fun. pbr | 16:27 |
joseff | no idea | 16:27 |
mahatic | acoles, :) nope! | 16:27 |
joseff | all i want to do is move files lol | 16:27 |
*** jyoti-ranjan has quit IRC | 16:27 | |
notmyname | joseff: ok, so that's an openstack library to install. it's a dependency of the CLI now | 16:27 |
*** delattec has joined #openstack-swift | 16:27 | |
notmyname | joseff: it may also require you to upgrade your version of setuptools | 16:27 |
joseff | thought i did all dependencies | 16:27 |
joseff | its updated | 16:27 |
joseff | Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. Are you sure that git is installed? | 16:29 |
joseff | what is that now | 16:29 |
joseff | lol | 16:29 |
joseff | did yum install python-pip | 16:29 |
joseff | did and now this | 16:29 |
notmyname | joseff: yeah. update setuptools. hang on let me check a bug report | 16:30 |
*** delatte has quit IRC | 16:30 | |
joseff | might be working now | 16:32 |
joseff | had to update /install some del | 16:32 |
joseff | dep | 16:32 |
notmyname | joseff: https://bugs.launchpad.net/python-swiftclient/+bug/1379579 | 16:32 |
mahatic | acoles, thank you for the help and comments | 16:32 |
notmyname | joseff: if that's what you're seeing, then upgrading setuptools will fix it | 16:32 |
joseff | I did that too | 16:32 |
joseff | forced upgrade | 16:33 |
joseff | hmm maybe I have the options wrong but I don't see status | 16:33 |
joseff | swift upload -cv images /mnt/images -A https://identity.api.rackspacecloud.com/v2.0/ -U user -K key | 16:34 |
joseff | that right? | 16:34 |
notmyname | joseff: you need to add "-V2" | 16:34 |
joseff | swift upload -cv -V2 ?? | 16:35 |
joseff | or swift upload -cV2 | 16:35 |
acoles | mahatic: welcome | 16:36 |
*** delatte has joined #openstack-swift | 16:37 | |
notmyname | joseff: -c -V2 to be safe :-) | 16:37 |
joseff | ok thanks lets try | 16:37 |
notmyname | joseff: or "-V 2" | 16:37 |
joseff | hmm not showing anything | 16:39 |
joseff | just new line and blink | 16:39 |
joseff | oh... different v | 16:39 |
joseff | swift upload --verbose -c -V 2 | 16:40 |
joseff | shows me nothing | 16:40 |
*** delattec has quit IRC | 16:40 | |
notmyname | joseff: instead of "upload" start with "stat" just to see that the other stuff is working | 16:40 |
openstackgerrit | David Goetz proposed a change to openstack/swift: Fix bug with expirer and unicode https://review.openstack.org/127594 | 16:41 |
joseff | account not found? | 16:43 |
joseff | this is working with cloud fuse | 16:43 |
notmyname | joseff: you've confirmed the username and key? | 16:44 |
joseff | yea | 16:44 |
joseff | swift -A https://identity.api.rackspacecloud.com/v2.0/ -U username -K key stat -v | 16:44 |
joseff | right? | 16:44 |
notmyname | joseff: you need the "-V 2" to specify that rackspace is using the v2 auth protocol | 16:45 |
joseff | ahh | 16:45 |
joseff | swift -A https://identity.api.rackspacecloud.com/v2.0/ -U username -K key stat -v -V 2 | 16:45 |
joseff | not working either | 16:45 |
joseff | also tried swift -V 2 | 16:46 |
swifterdarrell | joseff: "swift" CLI tool distinguishes between global args and sub-command args | 16:46 |
swifterdarrell | joseff: the "-V2" should be "global" so come before "stat" | 16:46 |
swifterdarrell | joseff: also, "--debug" (also global) gives more info than "-v"/"--verbose" | 16:46 |
swifterdarrell | joseff: those may not be all of your problems w/that command, but those jump out at me | 16:47 |
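Pulling swifterdarrell's points together, a sketch of the argument ordering being described; the auth URL, username, and key are placeholders:

```sh
# global options (--debug, -V, -A, -U, -K) go before the sub-command
swift --debug -V 2 \
    -A https://identity.api.rackspacecloud.com/v2.0/ \
    -U <username> -K <api_key> \
    stat
```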
*** zackmdavis has left #openstack-swift | 16:47 | |
joseff | swift --debug -V 2 -A blah -U foo -K bar stat -- nada | 16:48 |
joseff | says account not found | 16:48 |
swifterdarrell | joseff: oh yeah, do you have any of the ST_* env vars set? | 16:49 |
joseff | that is a very good idea and I would tell if knew how | 16:49 |
joseff | lol | 16:49 |
swifterdarrell | joseff: env | grep ST_ | 16:50 |
joseff | nothing | 16:50 |
swifterdarrell | joseff: k... i was going to look up a working example, but it was using Cloud Files' v1 auth | 16:50 |
notmyname | swifterdarrell: ya, me too | 16:51 |
joseff | yes don't get me started with their examples | 16:51 |
notmyname | however, when I try to switch it to v2 auth for RAX, I get "Account not found" | 16:51 |
joseff | totally screwed me over in cloud fuse that https://auth.api.rackspacecloud.com/v1.0 doesn't work any more | 16:52 |
joseff | now it's https://identity.api.rackspacecloud.com/v2.0/ | 16:52 |
joseff | talking about RS of course | 16:53 |
swifterdarrell | joseff: ya, i dunno... may need to get fanatical support or whatever to work through how the swift CLI needs to be contorted to work against RAX v2, especially if they've made changes to it recently | 16:53 |
notmyname | joseff: I can get it to work with https://identity.api.rackspacecloud.com/v1.0 | 16:53 |
joseff | lets try that | 16:53 |
notmyname | joseff: I think one other difference might be api key vs account password in v2 auth | 16:53 |
joseff | accepted | 16:54 |
joseff | so let me see the upload | 16:54 |
notmyname | oh. that's interesting: "ClientException: No tenant specified" when trying v2 auth. not sure if user error (some missing option) or if a bug in swiftclient | 16:55 |
joseff | swift upload --verbose -c -V 1 production_images /mnt/images -A https://identity.api.rackspacecloud.com/v1.0 -U user -K key | 16:56 |
joseff | right? | 16:56 |
joseff | doing nothing again | 16:56 |
joseff | debug shows nothing either | 16:57 |
notmyname | swift --os-auth-url https://api.example.com/v2.0 --os-tenant-name tenant \ | 16:59 |
notmyname | --os-username user --os-password password list | 16:59 |
notmyname | I have no idea how to find or what the tenant name is for rackspace | 16:59 |
joseff | using v 1 seems to work on connect | 17:00 |
joseff | would it be the collection name? | 17:00 |
*** shri has joined #openstack-swift | 17:00 | |
swifterdarrell | notmyname: joseff: surely this is what Rackspace Support is *for*? | 17:00 |
joseff | hahahaha | 17:00 |
joseff | yea no | 17:00 |
swifterdarrell | notmyname: joseff: how can they sell this and not tell people how to actually use it? | 17:01 |
joseff | they just crawl docs | 17:01 |
notmyname | swifterdarrell: they could tell people to ask in the swift dev channel ;-) | 17:01 |
*** aix has quit IRC | 17:01 | |
swifterdarrell | notmyname: *that's* working | 17:01 |
clayg | acoles: that's interesting about roles in keystone auth with swift - i hadn't thought about it like that | 17:01 |
clayg | acorwin: didn't someone post in here they had managed to get some sort of policy.json support into swift + keystone? | 17:02 |
notmyname | joseff: yeah. at this point swifterdarrell is right. time to go ask rackspace support. they are in #rackspace | 17:02 |
joseff | I asked a network tech (told really) that you guys are based on the Swift protocol for cloud files so I should be able to use cloudfuse to mount to server | 17:02 |
joseff | know what he said | 17:02 |
joseff | well possibly but we don't support third party apps. You should add a raid system to your server for file storage | 17:02 |
joseff | lol | 17:02 |
acoles | clayg: yeah, i had to go and play to convince myself | 17:02 |
clayg | the tenant is the rackspace username (that you use to log into mycloud) - the *username* you have to create when you make the api keys - it's under some drop down in the upper left - something about "users" | 17:03 |
acoles | clayg: but once a user has any role on a tenant they are considered to be 'in' the tenant as far as tenant:user type acls are concerned | 17:03 |
clayg | joseff: ^ i recently hacked up a rc file that worked with novaclient | 17:03 |
acoles | clayg: its not restricted to 'admin' and '__member__' (ie swiftoperators) | 17:03 |
clayg | acoles: yeah i mean that's how you'd have to map keystone's idea of roles "in" tenants to swift's idea of "users" in "accounts" or whatever the f' we have | 17:04 |
clayg | joseff: bah - upper right :\ | 17:04 |
joseff | yea got that | 17:04 |
acoles | clayg: i believe (emphasize this is hearsay :) that in other openstack services *any* role on a tenant gets you full access | 17:05 |
joseff | swift -V 2 --debug --os-auth-url https://identity.api.rackspacecloud.com/v2.0 --os-tenant-name username --os-username username --os-password key list | 17:05 |
joseff | like this? | 17:05 |
joseff | doesn't work | 17:05 |
shri | Hey guys… question about reclaiming objects. Which server deletes the rows from the container db and which one deletes the ".ts" files from the filesystem once an object has gone past the reclaim age? | 17:06 |
clayg | joseff: well i think you have to *create* an api user for that sort of thing; i wouldn't imagine the name being the same as the account name | 17:07 |
joseff | only three user types | 17:07 |
joseff | hmm | 17:08 |
joseff | thats creating a real user | 17:08 |
btorch | joseff: just use swiftly tool https://github.com/gholt/swiftly/tree/stable it's much better than the swift tool | 17:09 |
joseff | does it do uploads to cloud file? | 17:09 |
btorch | joseff: then you can create a file like http://goo.gl/eyCpky and source it before using the swiftly commands | 17:10 |
joseff | clayg seems the v1 connection actually gives me a 204 -- no content | 17:12 |
joseff | btorch: just need a fast way of shipping files to cloud files -- faster than cp via cloudfuse | 17:12 |
btorch | joseff: btw you can use https://identity.api.rackspacecloud.com/v2.0 instead of the old auth endpoint I have on that file example | 17:13 |
joseff | yea but the v2 gives errors with tenant etc | 17:13 |
joseff | v1 just uses the user and key | 17:13 |
*** lpabon has joined #openstack-swift | 17:13 | |
btorch | joseff: swiftly gives a lot of commands that the swift tool doesn't .. just type swiftly and it will show you all the commands | 17:13 |
clayg | btorch: what did you link me bro? | 17:13 |
btorch | clayg: huh ? | 17:14 |
btorch | clayg: how you doing ? :) long time no talk | 17:14 |
joseff | btorch I'm a total swift newbie -- swiftly looks intimidating to say the least | 17:15 |
joseff | lol | 17:15 |
clayg | yeah i'm doing great - but http://goo.gl/eyCpky does not look like a file bash would source? | 17:15 |
clayg | joseff: no swiftly's great! pip install swiftly - should be just as good | 17:15 |
joseff | ok | 17:15 |
joseff | ...beings try 4 | 17:15 |
joseff | lol | 17:15 |
joseff | begins* | 17:15 |
clayg | joseff: but you're missing something obvious - you need to check in with rackspace support - they're probably better at explaining it than us johnnies | 17:15 |
joseff | error: invalid command 'egg_info' | 17:16 |
joseff | I have | 17:16 |
joseff | I'll ask again | 17:16 |
btorch | clayg: it shows the output of my .swiftly_dfw that can be sourced | 17:16 |
clayg | btorch: RLY? it looks like json - you "source" that? Like bash "source" that? | 17:17 |
btorch | joseff: not sure what you mean by the "tenant" ... you have an account user right ? | 17:17 |
btorch | clayg: lol what page are you on ? | 17:17 |
joseff | yea thats the username | 17:17 |
btorch | clayg: I've seen 0bin.net not work well for some people .. not sure why | 17:18 |
joseff | RS Rep: error: invalid command 'egg_info' | 17:18 |
btorch | clayg: ok I see | 17:18 |
notmyname | btorch: the file you linked in the pastebin has a dictionary with the keys "iv" "salt" and "ct" | 17:18 |
joseff | RS Rep: error: Have you checked out any of the support articles that we have? | 17:18 |
joseff | lol | 17:18 |
btorch | clayg: it's not decrypting when behind my vpn | 17:18 |
joseff | bet you she pulls up the API doc with the end point | 17:19 |
joseff | that we have all been looking at | 17:19 |
btorch | clayg: http://0bin.net/paste/CLfdv3ANwQ9XF92s#lqaxF8A0ohgJvfdSuvdrIPRHEx+6N+CHfwM1EWrIaBs | 17:19 |
btorch | clayg: weird now it is ... http://goo.gl/r85DDm <- worked now as well | 17:20 |
joseff | http://www.rackspace.com/knowledge_center/article/syncing-private-cloud-swift-containers-to-rackspace-cloud-files | 17:20 |
joseff | doesn't really explain how to do it to cloud file | 17:20 |
btorch | joseff: so all you need is the username and your account API key ... that's what I use and it works fine | 17:21 |
joseff | got that | 17:21 |
joseff | whats the command you use to upload | 17:21 |
joseff | ? | 17:21 |
joseff | btorch:Command python setup.py egg_info failed with error code 1 | 17:22 |
joseff | when trying to install swiftly | 17:22 |
joseff | ok | 17:23 |
joseff | got it | 17:23 |
btorch | joseff: give me a quick sec and I'll put something together | 17:23 |
btorch | ok cool | 17:23 |
joseff | got the install done that it | 17:23 |
joseff | is | 17:23 |
joseff | had to reinstall setup tools again | 17:23 |
btorch | what version did you get ? | 17:24 |
btorch | swiftly --version | 17:24 |
joseff | 2.04 | 17:24 |
btorch | cool | 17:24 |
joseff | I | 17:24 |
joseff | m trying to upload a image folder recursively skipping existing to a container | 17:24 |
joseff | talking like 850GB of images | 17:25 |
joseff | been a bit of a nightmare | 17:25 |
btorch | after you create a file like .swiftly_whatever (http://pastebin.com/ekffRFzr) you can run "swiftly auth" to test out | 17:28 |
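A hedged sketch of what such a sourceable swiftly file might contain; the SWIFTLY_* variable names are an assumption based on the SWIFTLY_REGION variable btorch mentions shortly below, and every value is a placeholder:

```sh
# ~/.swiftly_whatever -- source this before running swiftly commands
export SWIFTLY_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0
export SWIFTLY_AUTH_USER=<username>
export SWIFTLY_AUTH_KEY=<api_key>
export SWIFTLY_REGION=DFW    # region where your servers live
# quick test: swiftly auth
```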
btorch | joseff: hmm, are you doing that from a public location ? | 17:28 |
joseff | on server | 17:28 |
joseff | via ssh | 17:28 |
btorch | joseff: is it a rackspace server ? like a cloud server ? | 17:29 |
joseff | its a rs hw server to a rs cloud server | 17:29 |
joseff | cloud files I mean | 17:29 |
btorch | ok so, this is what I would find out first (and you probably know already) | 17:30 |
joseff | can't use sn | 17:30 |
btorch | 1 make sure that SWIFTLY_REGION is where your server is | 17:30 |
joseff | service net doesn't run between physical and cloud | 17:30 |
btorch | it does if you get rack-connect | 17:30 |
joseff | nope doesn't | 17:31 |
joseff | I checked | 17:31 |
joseff | i wish | 17:31 |
btorch | hmm let me double check that but that doesn't seem right | 17:31 |
joseff | i know! | 17:31 |
joseff | but thats what the net tech said when I told my acct manager to get me one | 17:31 |
joseff | two different data centers for physical and cloud -- but I thought thats what rack connect fixed | 17:32 |
btorch | ah no you can't that way | 17:32 |
btorch | must be on the same DC | 17:33 |
joseff | yea that makes rc almost useless | 17:33 |
joseff | ok config is done | 17:33 |
joseff | .swiftly_config | 17:33 |
btorch | it's meant to connect service net within a DC so that you don't have to pay public bandwidth | 17:34 |
joseff | yea, but hybrid networks get short-changed | 17:34 |
joseff | can't wait till I'm all cloud | 17:34 |
btorch | so the other option you have, and ask your AM to be sure, you could ship the data and someone on cloud support would import that data into your account | 17:35 |
btorch | like ship usb drive, and instructions on how to proceed and someone would import that data into your cloud account into a cloudfiles container on the region you want | 17:36 |
clayg | ok you two - get your own room :P | 17:36 |
btorch | lol | 17:36 |
joseff | haha | 17:36 |
btorch | I gotta head out anyway .. joseff hope it all works | 17:36 |
joseff | how do you upload? | 17:38 |
btorch | joseff: I pm'd you | 17:42 |
*** zaitcev has joined #openstack-swift | 17:51 | |
*** ChanServ sets mode: +v zaitcev | 17:51 | |
*** exploreshaifali has joined #openstack-swift | 17:53 | |
shri | Hey guys, I think my previous question got lost amidst other discussion… I have a question about reclaiming objects. Which server deletes the rows from the container db and which one deletes the ".ts" files from the filesystem once an object has gone past the reclaim age? | 17:54 |
*** themadcanudist has joined #openstack-swift | 17:56 | |
*** themadcanudist has left #openstack-swift | 17:56 | |
*** themadcanudist has joined #openstack-swift | 17:57 | |
*** themadcanudist has left #openstack-swift | 17:57 | |
*** themadcanudist has joined #openstack-swift | 17:57 | |
*** themadcanudist has left #openstack-swift | 17:57 | |
openstackgerrit | Alistair Coles proposed a change to openstack/swift: Make in process functional tests use sample proxy-server.conf https://review.openstack.org/127607 | 18:00 |
acoles | tdasilva: ^^ and portante fyi | 18:00 |
tdasilva | acoles: nice! will definitely take a look | 18:02 |
acoles | tdasilva: thx. btw, did your mouse receiver arrive? | 18:02 |
*** kyles_ne has joined #openstack-swift | 18:03 | |
joseff | swirly is refusing to play nice | 18:03 |
joseff | swiftly | 18:03 |
tdasilva | acoles: just received an email from facilities saying a package arrived for me! :-) | 18:03 |
tdasilva | acoles: thanks! | 18:03 |
acoles | tdasilva: well if its a very small package then maybe thats it !) | 18:05 |
notmyname | shri: replication. and yes, it seems that it should still work with just one replica | 18:06 |
shri | notmyname: Thanks! just to understand this correctly.. even with one replica replicator should reclaim deleted objects, correct? | 18:09 |
notmyname | shri: that's my understanding from just now looking over the code (not running it). but note that the 1-replica case isn't something that's often (ever?) tested | 18:09 |
shri | I see | 18:10 |
shri | who deletes the entries from the container db and the ".ts" files from the filesystem? | 18:10 |
notmyname | replication | 18:10 |
shri | but there are replicators for objects as well as containers. | 18:10 |
notmyname | yes | 18:10 |
joseff | to use swiftly do I use put instead of upload? | 18:11 |
shri | I mean to ask which does object-replicator delete the ".ts" files and container-replicator delete the entries from the container db? | 18:12 |
shri | s/ask which does/ask does/g | 18:12 |
*** jistr has quit IRC | 18:13 | |
notmyname | shri: yes | 18:15 |
shri | I see.. so if I want to change the reclaim_age, I should change it in both container-server.conf as well as object-server.conf | 18:15 |
notmyname | correct | 18:16 |
shri | perfect… thanks a lot! | 18:16 |
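A hedged sketch of the two config changes shri is describing; the section names and the one-week value are illustrative only, not a recommendation:

```ini
# object-server.conf
[object-replicator]
reclaim_age = 604800

# container-server.conf
[container-replicator]
reclaim_age = 604800
```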
*** acoles is now known as acoles_away | 18:16 | |
joseff | I think I got swiftly pushing files | 18:17 |
joseff | not sure really as I get some 200, 201 created and 404 | 18:17 |
joseff | and I'm not sure what they mean in this context | 18:18 |
joseff | using the put -d -i path container-name | 18:18 |
portante | acoles_away: sweet | 18:19 |
*** Anju has joined #openstack-swift | 18:19 | |
*** Anju has quit IRC | 18:23 | |
*** Anju has joined #openstack-swift | 18:23 | |
*** kyles_ne has quit IRC | 18:25 | |
*** kyles_ne has joined #openstack-swift | 18:26 | |
*** mkollaro has joined #openstack-swift | 18:29 | |
*** kyles_ne has quit IRC | 18:30 | |
*** kyles_ne has joined #openstack-swift | 18:35 | |
shri | another quick question.. I'm seeing a LockTimeout in the container-server trying to lock the db (.lock) file. Does the node_timeout config option govern this timeout value? | 18:36 |
zaitcev | no | 18:36 |
zaitcev | What version are you running? I seem to recall that Havana shipped with that bug. | 18:37 |
zaitcev | Or maybe Grizzly | 18:37 |
zaitcev | Anyway, it should not be happening. | 18:37 |
shri | I'm using version 1.13.1 … I believe thats Havana | 18:37 |
*** aix has joined #openstack-swift | 18:39 | |
shri | so is the timeout value even configurable? | 18:40 |
*** swift_fan has left #openstack-swift | 18:40 | |
zaitcev | shri: I think your case is something different, but could you look into your /usr/lib/python2.*/site-*/swift/common/db.py and verify that this is in: https://github.com/openstack/swift/commit/ef7f9e27a2d10aeaa9ed550e9595c54ccacdd4f2 | 18:42 |
shri | I see the GreenDBCursor class .. so I think the fix exists | 18:44 |
shri | not sure if this has a bearing on the timeout though, right? This affects retries. | 18:46 |
*** openstackgerrit has quit IRC | 18:48 | |
notmyname | http://www.ottawalife.com/wp-content/uploads/2012/07/actor-mime.jpg <- for torgomatic | 18:49 |
zaitcev | http://www.pbfcomics.com/114/ | 18:51 |
zaitcev | shri: You could try to track down which particular timeout you're hitting ("I'm seeing a LockTimeout in the container-server trying to lock the db (.lock) file" is not sufficient to know). | 18:51 |
zaitcev | shri: but I suggest that you look at the cluster health first | 18:52 |
shri | The LockTimeout is reported for both puts and deletes. Something like this: | 18:55 |
shri | container-server: ERROR __call__ error with PUT /r0/96/AUTH_blah/blah-blaherm/652cec6a-22676977 : LockTimeout (10s) | 18:55 |
*** openstackgerrit has joined #openstack-swift | 18:55 | |
openstackgerrit | anju Tiwari proposed a change to openstack/swift: Added a check for limit value https://review.openstack.org/118186 | 18:56 |
*** kyles_ne has quit IRC | 18:56 | |
*** kyles_ne has joined #openstack-swift | 18:57 | |
zaitcev | oh, right, it's caught so you cannot know where it happens... it is in the reply body, however. I think. | 18:57 |
*** thofi74 has left #openstack-swift | 18:58 | |
openstackgerrit | anju Tiwari proposed a change to openstack/swift: Added a check for limit value https://review.openstack.org/118186 | 18:59 |
openstackgerrit | David Goetz proposed a change to openstack/swift: Ssync does not replicate custom object headers https://review.openstack.org/127033 | 18:59 |
notmyname | shri: is this with one replica? | 19:00 |
*** kyles_ne has quit IRC | 19:01 | |
*** kyles_ne has joined #openstack-swift | 19:02 | |
*** mkollaro has quit IRC | 19:04 | |
joseff | looks like swiftly is working good | 19:07 |
zaitcev | it better | 19:21 |
notmyname | `mv python-swiftclient slowly` | 19:21 |
*** smart_developer has joined #openstack-swift | 19:24 | |
smart_developer | I am trying to implement a rolling upgrade/patch scheme for my Swift cluster. | 19:25 |
smart_developer | To do that, I think it's best to bring down 1 node at a time, upgrade that, and then put it back in. | 19:25 |
notmyname | smart_developer: what have you tried? what have you already read online? | 19:25 |
notmyname | based on what' you've tried, what current issues are you seeing? | 19:26 |
smart_developer | Once the "recovered node" has had a chance to sync all the objects from the not-upgraded nodes, I plan to take down another node, repeat the process, etc. | 19:26 |
smart_developer | notmyname : Well, I already know how to bring down the node, upgrade it, and bring it back up, but the main question is | 19:26 |
smart_developer | notmyname : How to know once the "recovered node" has had a chance to sync all of the objects that need to be synced ? | 19:27 |
smart_developer | notmyname : One method that comes to mind is using "swift-recon -d" to see that now the disk usage across the cluster is even, | 19:27 |
smart_developer | notmyname : But this does not seem like the best method out there, nor does it seem like it would be reliable in every case. | 19:28 |
notmyname | no. go read various already published articles online. try it out. look at logs. learn. then come with your specific questions based on what you are seeing | 19:28 |
smart_developer | notmyname : Not only the objects, but also the containers and accounts, and any other information that needs to be synced as well. | 19:28 |
* notmyname will not get sucked into more hours-long conversations about stuff not based on what you're seeing in your cluster | 19:28 | |
smart_developer | notmyname : swift-recon was all I could find. | 19:29 |
smart_developer | notmyname : I even came across a SwiftStack article. | 19:29 |
smart_developer | notmyname : But that seemed to skip over the part about how to know when the "recovered node" is fully-recovered, and when the next node can be taken down. | 19:30 |
smart_developer | notmyname : Sorry, I'm new to this, so it's not as easy to know what to even search for. | 19:32 |
*** kyles_ne has quit IRC | 19:32 | |
*** kyles_ne has joined #openstack-swift | 19:33 | |
smart_developer | Does anyone know? | 19:35 |
*** kyles_ne has quit IRC | 19:37 | |
*** judd7 has joined #openstack-swift | 19:38 | |
MooingLemur | Because of the decentralized nature of Swift, there is no way to know for certain how fully-replicated the cluster is at any particular point. You can only get a general idea through sampling. A decent set of metrics would be based on something like swift-recon (where you inject all the test objects against a fully-up cluster, then check it later), and by watching the logs for object-replicator and see that you've done at least ... | 19:41 |
MooingLemur | ... one full pass on all of your other nodes after an upgrade. | 19:41 |
swifterdarrell | smart_developer: what exactly are you upgrading? why do you expect data to move? | 19:43 |
MooingLemur | However, if you're just doing rolling restarts, is it that important to you to fully replicate before taking down the next node? If each node is only going to be down a few minutes for upgrades, you probably won't be gaining too much in terms of coherency/consistency, and I don't think you'll necessarily reduce the amount of time that some objects aren't fully replicated. | 19:43 |
MooingLemur | I would just take down one node at a time, and don't worry about allowing replication to happen before moving onto the next. Since you likely have 3 replicas of each object anyway, the node you just brought up will get replicated data pushed to it almost as quickly anyway. | 19:44 |
*** judd7 has quit IRC | 19:45 | |
openstackgerrit | David Goetz proposed a change to openstack/swift: Change black/white-listing to use sysmeta. https://review.openstack.org/126049 | 19:45 |
*** judd7 has joined #openstack-swift | 19:46 | |
MooingLemur | And since you're only taking one node down at a time, the likelihood of clients getting a transient 404 are minimal. | 19:46 |
*** judd7 has quit IRC | 19:47 | |
smart_developer | MooingLemur : Is this common practice, what you've suggested? How long do the upgrades usually take ? | 19:48 |
smart_developer | swifterdarrell : not necessarily expecting data to move, but for instance, if new objects were uploaded to the cluster while the upgraded node was down, then the upgraded node would need some time to get synced with the new objects. | 19:48 |
smart_developer | swifterdarrell : MooingLemur brings up a good point, but I'm not sure if that's the "recommended method" in actual production-end deployments. | 19:49 |
MooingLemur | smart_developer: As your cluster gets large, waiting for replication between upgrading boxes becomes impractical. I would venture to say that it would be common practice not to wait. Our Swift systems are on Gentoo, and I've not used the Ubuntu/RHEL installation before except for the SAIO/Swiftstack test instances. | 19:50 |
MooingLemur | So I can't really answer how long it'll take. It all depends on what you're upgrading :) | 19:50 |
smart_developer | MooingLemur : The end cluster I'm planning to put into deployment will have 3 nodes total, each node having all the proxy,account,container,object services. | 19:51 |
smart_developer | MooingLemur : And by upgrades, I generally mean patches, for instance, to the Swift code. | 19:52 |
smart_developer | MooingLemur : But "upgrade" can really be anything that's necessary, more than that. | 19:52 |
MooingLemur | The process you use is up to you. I'm just giving you a recommendation based on what I'd do in my environment (currently 20 object nodes in the largest cluster) | 19:55 |
*** Anju has quit IRC | 19:55 | |
smart_developer | MooingLemur : Do you have an upgrade recommendation for my 3-node cluster ? | 19:56 |
smart_developer | (with all services running on each node). | 19:56 |
smart_developer | I hear that you're saying that for large clusters, it may be better not to wait for replication. | 19:56 |
smart_developer | But what about for a cluster of 3-node size ? | 19:57 |
smart_developer | would there be motivation to wait ? | 19:57 |
smart_developer | (for replication). | 19:57 |
MooingLemur | nothing particularly different, it's just that the longer a node is out, a larger % of data is affected in a small cluster. I don't think it makes an appreciable difference on the overall health of the cluster based on whether you wait. It's really up to you. I wouldn't wait. | 19:58 |
MooingLemur | I guess with the exception of the first machine, you might want to watch logs to make sure everything is happening as it should after the upgrade, before you do the rest of your machines. :) But that's general advice, not anything specific to Swift. | 20:05 |
smart_developer | MooingLemur : why for the first machine ? | 20:06 |
MooingLemur | So that you can gauge whether the upgrade was successful and had the intended effect, since if the upgrade didn't do the right thing, you won't be doing the same damage to the rest of your cluster. :P | 20:07 |
smart_developer | MooingLemur : Ah, that makes sense! Thanks. | 20:15 |
smart_developer | MooingLemur : :) | 20:15 |
smart_developer | Is there any backing up of any information/data that is required, before or after the take-down of a node during the rolling upgrades process ? | 20:16 |
notmyname | torgomatic: why does https://review.openstack.org/#/c/124903/4/swift/common/swob.py change cStringIO to StringIO? inheritance? | 20:20 |
shri | notmyname: sorry was afk. About the container-server LockTimeout while locking the .lock file, yes… this was with a single replica | 20:20 |
torgomatic | notmyname: yeah, you can't inherit from cStringIO | 20:20 |
notmyname | torgomatic: oh. well that's fun | 20:20 |
*** tab_____ has joined #openstack-swift | 20:21 | |
notmyname | torgomatic: (and although I haven't gotten that far yet) I'm guessing the WsgiStringIO is so tests can squash something in those 100 headers? | 20:21 |
torgomatic | notmyname: yeah, if that method's not there, tests blow up but good | 20:21 |
notmyname | ok, thanks | 20:21 |
smart_developer | notmyname : https://swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/ | 20:22 |
smart_developer | notmyname : It seems that a lot of the web resources I've been looking through the past couple of days all point to this one. | 20:22 |
smart_developer | notmyname : I'm not sure if it was intentionally left out, but, | 20:22 |
smart_developer | notmyname : It doesn't say anything about whether backing up of any information/data before-or-after the takedown of a node during the rolling updates process | 20:23 |
smart_developer | notmyname : is required. | 20:23 |
smart_developer | notmyname : For instance, rings ? | 20:23 |
smart_developer | notmyname : or anything else | 20:25 |
ctennis | smart_developer: nothing | 20:25 |
smart_developer | notmyname : I just discussed with MooingLemur, and it looks like at least the objects isn't an issue. | 20:25 |
smart_developer | notmyname : But I'm not sure about anything else. | 20:26 |
*** lpabon has quit IRC | 20:32 | |
smart_developer | MooingLemur -- earlier you commented that there's really no way to tell whether the recovered node has been fully-synced with the other 2 nodes. | 20:40 |
smart_developer | MooingLemur -- however, I was wondering why that is so, since there has got to be a CLI command that can check that, or something | 20:40 |
ctennis | smart_developer: look in the logs for the output of the replicators (object and container) and once they've completed run through the entire set of partitions after your node is back up then you are in good shape | 20:41 |
smart_developer | MooingLemur -- Like how "swift-recon -d" can check whether all the disks scattered across the cluster are being evenly utilized. | 20:41 |
openstackgerrit | OpenStack Proposal Bot proposed a change to openstack/python-swiftclient: Updated from global requirements https://review.openstack.org/89250 | 20:44 |
openstackgerrit | OpenStack Proposal Bot proposed a change to openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 20:44 |
smart_developer | ctennis : What about the account replicator ? | 20:44 |
ctennis | smart_developer: yes same | 20:44 |
MooingLemur | smart_developer: the simple answer to that question is that it would take so long to verify that everything is synched, that an event that would cause it to get out of sync would likely happen in the meantime. That's why there are tools like swift-recon that can give you a sampling of how replicated the data is. Use swift-recon and what ctennis said, monitor the replicator logs on the nodes that were left up. | 20:45 |
smart_developer | ctennis : Are there any other replicators that I need to consider, as well ? | 20:45 |
ctennis | those are the only 3 replicators | 20:45 |
smart_developer | MooingLemur : Ok. Do you know which swift-recon command/option you would need to use ? | 20:47 |
*** mitz_ has quit IRC | 20:47 | |
*** delattec has joined #openstack-swift | 20:50 | |
*** delatte has quit IRC | 20:52 | |
*** miqui has quit IRC | 20:57 | |
MooingLemur | read it, try it, test it, figure it out. :) | 21:01 |
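For anyone following the advice above, a few swift-recon invocations that give that kind of sampling; a sketch assuming the standard recon options of this era:

```sh
swift-recon object -r      # object replication stats reported by the storage nodes
swift-recon container -r   # container replication stats
swift-recon account -r     # account replication stats
swift-recon -d             # disk usage across the cluster
```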
*** delatte has joined #openstack-swift | 21:03 | |
*** delattec has quit IRC | 21:05 | |
*** mahatic has quit IRC | 21:07 | |
*** joseff has quit IRC | 21:11 | |
*** Nadeem has joined #openstack-swift | 21:23 | |
smart_developer | MooingLemur : okay, Thanks. | 21:30 |
*** CaioBrentano has quit IRC | 21:37 | |
openstackgerrit | David Goetz proposed a change to openstack/swift: Fix bug with expirer and unicode https://review.openstack.org/127594 | 21:39 |
*** nshaikh has joined #openstack-swift | 21:42 | |
*** nellysmitt has quit IRC | 21:47 | |
*** tkatarki has quit IRC | 21:58 | |
*** navid__ has joined #openstack-swift | 22:01 | |
smart_developer | Has anyone ever had any trouble using account quotas before ? | 22:02 |
smart_developer | Because the curl commands I've been using have been returning an HTTP 200 OK, but when using a non-reseller-admin-role I can actually exceed the quota that I set, by a lot. | 22:03 |
smart_developer | Is this a known bug within the Swift community, or is it likely that I'm just doing things wrong ? | 22:03 |
*** nshaikh has quit IRC | 22:04 | |
*** navid__ is now known as nshaikh | 22:04 | |
smart_developer | I just followed one of the Swift user guides' examples, and modified what I needed to run it against my Swift cluster. | 22:04 |
smart_developer | Anyways, has account-quota-bytes been known to cause various errors ? | 22:05 |
*** gyee has quit IRC | 22:18 | |
smart_developer | Thank you, anyone who is able to answer this. :) | 22:18 |
*** IRTermite has quit IRC | 22:20 | |
*** jergerber has joined #openstack-swift | 22:23 | |
*** nshaikh has left #openstack-swift | 22:26 | |
*** erlon_ has quit IRC | 22:29 | |
smart_developer | I modeled my cURL command based on this example : curl -k -i -H POST <my_HAProxy_IP>:<my_HAProxy_port>/v1/AUTH_<my_tenant_id> -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: <token>" -H "X-Account-Meta-Quota-Bytes: 25" | 22:37 |
smart_developer | (for my issue with trying to get account-quota-bytes to work). | 22:37 |
smart_developer | Moreover, I've experimented with container quotas, and I did not experience any of these issues ......... | 22:38 |
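One visible difference from documented examples: the command above passes POST with -H rather than -X. A hedged sketch of the usual form follows; the endpoint, token, and quota value are placeholders, and the token must belong to a user with the reseller-admin role for the quota header to be accepted:

```sh
curl -i -X POST \
     -H "X-Auth-Token: <reseller_admin_token>" \
     -H "X-Account-Meta-Quota-Bytes: 25" \
     http://<proxy>:<port>/v1/AUTH_<tenant_id>
```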
*** joseff has joined #openstack-swift | 22:44 | |
*** geaaru has quit IRC | 22:46 | |
*** wasmum has left #openstack-swift | 22:49 | |
*** occup4nt is now known as occupant | 22:49 | |
*** echevemaster has joined #openstack-swift | 22:52 | |
*** openstackgerrit has quit IRC | 22:54 | |
*** dmsimard is now known as dmsimard_away | 22:56 | |
*** lcurtis has quit IRC | 22:56 | |
smart_developer | (basically, had no problems experimenting with container quotas and making them work, but when trying to make the account-quota-bytes work, I'm running into those issues). | 22:56 |
smart_developer | :( | 22:56 |
*** cutforth has quit IRC | 22:59 | |
*** marcusvrn_ has quit IRC | 23:02 | |
*** Nadeem has quit IRC | 23:07 | |
*** Nadeem has joined #openstack-swift | 23:23 | |
*** Nadeem has quit IRC | 23:24 | |
*** dmsimard_away is now known as dmsimard | 23:29 | |
*** elambert has quit IRC | 23:33 | |
*** tab_____ has quit IRC | 23:38 | |
*** dmsimard is now known as dmsimard_away | 23:49 |