*** dmsimard_away is now known as dmsimard | 00:02 | |
*** oomichi has joined #openstack-swift | 00:03 | |
torgomatic | you know it's a good unit test when it starts with "import greenlet" | 00:08 |
*** tkay has quit IRC | 00:10 | |
*** jergerber has quit IRC | 00:13 | |
*** kyles_ne has quit IRC | 00:17 | |
*** kyles_ne has joined #openstack-swift | 00:17 | |
*** kyles_ne has quit IRC | 00:21 | |
*** dmorita has joined #openstack-swift | 00:29 | |
mattoliverau | lol | 00:35 |
*** kota_ has joined #openstack-swift | 00:40 | |
*** lcurtis has joined #openstack-swift | 00:58 | |
*** dmsimard is now known as dmsimard_away | 01:02 | |
*** dmsimard_away is now known as dmsimard | 01:08 | |
*** clu_ has quit IRC | 01:11 | |
*** addnull has joined #openstack-swift | 01:13 | |
*** Edward-Zhang has joined #openstack-swift | 01:23 | |
*** dmsimard is now known as dmsimard_away | 01:23 | |
*** bill_az has quit IRC | 01:26 | |
*** tsg has quit IRC | 01:27 | |
*** nexusz99 has joined #openstack-swift | 01:32 | |
*** lcurtis has quit IRC | 01:39 | |
*** nexusz99 has quit IRC | 01:44 | |
*** nexusz99 has joined #openstack-swift | 01:44 | |
*** nosnos has joined #openstack-swift | 01:45 | |
*** nexusz99 has quit IRC | 01:47 | |
*** HenryG has joined #openstack-swift | 01:49 | |
*** X019 has quit IRC | 01:51 | |
*** X019 has joined #openstack-swift | 01:54 | |
*** haomai___ has quit IRC | 01:56 | |
*** nexusz99 has joined #openstack-swift | 01:56 | |
*** haomaiwang has joined #openstack-swift | 01:57 | |
*** kyles_ne has joined #openstack-swift | 02:00 | |
*** tkay has joined #openstack-swift | 02:07 | |
*** haomaiwang has quit IRC | 02:14 | |
*** haomaiwang has joined #openstack-swift | 02:14 | |
openstackgerrit | A change was merged to openstack/swift: Fix profile tests to clean up its tempdirs. https://review.openstack.org/123499 | 02:16 |
*** X019 has left #openstack-swift | 02:16 | |
*** haomaiwang has quit IRC | 02:19 | |
*** haomaiw__ has joined #openstack-swift | 02:19 | |
*** kyles_ne has quit IRC | 02:32 | |
*** haomaiwang has joined #openstack-swift | 02:35 | |
*** haomaiw__ has quit IRC | 02:37 | |
*** haomaiwang has quit IRC | 02:40 | |
*** haomaiwang has joined #openstack-swift | 02:41 | |
*** nosnos has quit IRC | 02:55 | |
*** nosnos_ has joined #openstack-swift | 02:56 | |
*** haomaiw__ has joined #openstack-swift | 02:56 | |
*** nosnos_ has quit IRC | 02:57 | |
*** nosnos has joined #openstack-swift | 02:57 | |
*** haomaiwang has quit IRC | 02:59 | |
*** tkay has quit IRC | 03:00 | |
*** nosnos has quit IRC | 03:01 | |
smart_developer | What's the best way to tell how many Swift-related processes are running on your machine ? | 03:01 |
smart_developer | My guess would be | 03:02 |
smart_developer | ps aux | grep swift | wc -l | 03:02 |
smart_developer | But if you do this as root, will it be accurate? | 03:02 |
smart_developer | Or is it better to log in as "swift" | 03:02 |
*** cdelatte has quit IRC | 03:03 | |
smart_developer | Because for some reason, "su - swift" on the command line is returning "This account is currently not available." | 03:03 |
*** tkay has joined #openstack-swift | 03:03 | |
smart_developer | I was wondering if this is preventing me from gathering better statistics about the number of Swift processes running. | 03:04 |
smart_developer | And maybe in the first place there is a more appropriate method than the "ps aux ....." utility. | 03:04 |
smart_developer | ?? | 03:04 |
*** tkay has quit IRC | 03:05 | |
*** Edward-Zhang has quit IRC | 03:08 | |
*** Edward-Zhang has joined #openstack-swift | 03:08 | |
zaitcev | ps is fine for showing running processes, in fact it's pretty much the only tool. You can do ps ux -U swift. | 03:08 |
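(A minimal sketch of the counting being discussed, assuming a Linux `ps` and that every Swift process has "swift" somewhere on its command line; the helper name is illustrative and this is not part of Swift itself.)

```python
import subprocess

def count_swift_processes():
    # `ps -eo comm,args` lists every process with its command name and
    # arguments; counting matches this way avoids also counting the grep itself.
    out = subprocess.check_output(['ps', '-eo', 'comm,args'])
    lines = out.decode('utf-8', 'replace').splitlines()[1:]  # drop the header row
    return sum(1 for line in lines if 'swift' in line)

if __name__ == '__main__':
    print('swift-related processes:', count_swift_processes())
```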
*** swat30 has quit IRC | 03:11 | |
*** swat30 has joined #openstack-swift | 03:12 | |
*** tkay has joined #openstack-swift | 03:14 | |
smart_developer | zaitcev : "ps ux -U swift" seems to return something completely different from "ps aux | grep swift" | 03:19 |
smart_developer | What exactly am I seeing with "ps ux -U swift"? | 03:19 |
smart_developer | a LOT more entries are being outputted than running "ps aux | grep swift" | 03:20 |
*** nexusz99 has quit IRC | 03:23 | |
*** nexusz99 has joined #openstack-swift | 03:24 | |
*** gyee has quit IRC | 03:24 | |
*** nexusz99 has quit IRC | 03:28 | |
zaitcev | try ps auxw| grep swift :-) | 03:28 |
smart_developer | zaitcev : I was just wondering what it was that you originally suggested ? | 03:28 |
zaitcev | Just read man ps | 03:29 |
smart_developer | ok. | 03:29 |
smart_developer | :) | 03:29 |
smart_developer | zaitcev : By the way, do you know if 1 worker = 1 process ? | 03:33 |
*** nexusz99 has joined #openstack-swift | 03:33 | |
smart_developer | zaitcev : And by worker, I mean the ones defined in the /etc/swift/*.conf files for proxy-server, account-server, container-server, and object-server. | 03:34 |
zaitcev | smart_developer: it's in swift/common/wsgi.py | 03:35 |
smart_developer | Ok, thank you. | 03:35 |
zaitcev | as you can see, it does os.fork() for every count of worker, so at least the intent is there | 03:37 |
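(A minimal sketch of the pre-fork pattern being described, assuming a generic server loop; this is illustrative only and not the actual code in swift/common/wsgi.py.)

```python
import os
import signal
import time

def run_server():
    # stand-in for the real worker loop (serving WSGI requests, etc.)
    while True:
        time.sleep(1)

def fork_workers(worker_count):
    children = []
    for _ in range(worker_count):
        pid = os.fork()
        if pid == 0:            # child: become a worker and never return
            run_server()
            os._exit(0)
        children.append(pid)    # parent: remember the child pid
    # The parent sticks around to reap dead children and forward signals,
    # which is why `ps` shows one more process than the configured workers.
    try:
        while children:
            pid, _status = os.wait()
            children.remove(pid)
    except KeyboardInterrupt:
        for pid in children:
            os.kill(pid, signal.SIGTERM)

if __name__ == '__main__':
    fork_workers(3)
```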
smart_developer | zaitcev : What do you mean by "at least the intent is there" ? that there's a bug and it doesn't work as expected? | 03:38 |
*** davdunc` has joined #openstack-swift | 03:44 | |
*** davdunc has quit IRC | 03:46 | |
*** addnull has quit IRC | 03:49 | |
smart_developer | zaitcev : Let's say that in the [DEFAULT] section of your proxy-server.conf you have workers=5, and in the [DEFAULT] section of account-server.conf, container-server.conf, and object-server.conf you have workers=2. | 03:49 |
smart_developer | How would you count the number of processes for Swift that you'd expect to see ? | 03:50 |
*** nexusz99 has quit IRC | 03:53 | |
*** nexusz99 has joined #openstack-swift | 03:54 | |
*** zhiyan has quit IRC | 03:54 | |
*** zhiyan has joined #openstack-swift | 03:57 | |
*** nexusz99 has quit IRC | 03:58 | |
openstackgerrit | OpenStack Proposal Bot proposed a change to openstack/python-swiftclient: Updated from global requirements https://review.openstack.org/89250 | 03:59 |
openstackgerrit | OpenStack Proposal Bot proposed a change to openstack/swift: Updated from global requirements https://review.openstack.org/88736 | 04:00 |
*** kbee has joined #openstack-swift | 04:00 | |
*** tkay has joined #openstack-swift | 04:02 | |
kbee | torgomatic: peluse: i see a +2 against https://review.openstack.org/#/c/123835/.. any idea who needs to approve it | 04:04 |
*** tkay has quit IRC | 04:04 | |
*** haomaiw__ has quit IRC | 04:10 | |
*** haomaiwa_ has joined #openstack-swift | 04:10 | |
*** jyoti-ranjan has joined #openstack-swift | 04:10 | |
openstackgerrit | Keshava Bharadwaj proposed a change to openstack/swift: Provides proper error handling on builder unpickle https://review.openstack.org/122225 | 04:15 |
redbo | kbee: it's been approved, it'll be automatically merged. it can take a little while. | 04:18 |
*** haomaiw__ has joined #openstack-swift | 04:18 | |
redbo | smart_developer: most of the servers will have workers+1 processes. The extra process just sits and waits for signals and kills the children, I think. | 04:20 |
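(A quick worked example of redbo's "workers + 1" rule using the numbers from the question above; the totals cover only the WSGI servers and ignore the consistency daemons.)

```python
# Worker settings quoted in the question above.
workers = {
    'proxy-server': 5,
    'account-server': 2,
    'container-server': 2,
    'object-server': 2,
}

# Each server shows up as its workers plus one parent process.
expected = {name: count + 1 for name, count in workers.items()}
print(expected)                 # {'proxy-server': 6, 'account-server': 3, ...}
print(sum(expected.values()))   # 15 server processes in total
```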
kbee | redbo: Thnx.. I thought it's 2 +2s and +workflow.. but no one has approved.. | 04:21 |
redbo | "workflow" the approved flag | 04:22 |
*** haomaiwa_ has quit IRC | 04:22 | |
redbo | ^is | 04:22 |
kbee | redbo: awww.. thnx.. | 04:23 |
*** exploreshaifali has joined #openstack-swift | 04:25 | |
*** nosnos has joined #openstack-swift | 04:28 | |
*** nexusz99 has joined #openstack-swift | 04:28 | |
smart_developer | redbo : What do you mean by "children" ? | 04:31 |
redbo | the workers | 04:31 |
smart_developer | redbo : Why would it kill any of the workers ? | 04:32 |
redbo | if you want to stop or restart the service | 04:33 |
*** tkay has joined #openstack-swift | 04:33 | |
smart_developer | redbo -- I didn't know that you would need an entire process dedicated, per service, for that. | 04:34 |
smart_developer | redbo -- was there no other way to make it more efficient? | 04:34 |
smart_developer | That may be a difficult question to answer. | 04:36 |
redbo | I don't think it's been a priority. A process that's just sitting there isn't that big a cost. | 04:36 |
smart_developer | So I have workers=3 in [DEFAULT] for object-server.conf | 04:36 |
smart_developer | And I see 4 swift-object-server processes, which is expected according to what you've outlined. | 04:37 |
smart_developer | But this is where it gets tricky. | 04:37 |
smart_developer | I see just 1 swift-object-updater, and 1 swift-object-replicator, but 3 swift-object-auditors | 04:37 |
smart_developer | redbo -- do you know what the logic/reasoning behind that, is? | 04:38 |
smart_developer | For swift-container-auditor I only see 1, and only 1 for swift-account-auditor, as well. | 04:39 |
redbo | there are 2 object auditor worker processes | 04:39 |
*** addnull has joined #openstack-swift | 04:39 | |
*** addnull has quit IRC | 04:40 | |
*** addnull has joined #openstack-swift | 04:40 | |
*** addnull has joined #openstack-swift | 04:40 | |
smart_developer | redbo -- but why?? | 04:40 |
smart_developer | and why only 1 for everything else | 04:41 |
redbo | because they check different things | 04:44 |
smart_developer | So if seeing 3 swift-object-auditors means there are 2 object auditor worker processes, that means the 3rd process is the one that is set aside for stopping the service? | 04:44 |
smart_developer | redbo -- And if we see only 1 swift-container-auditor, or just 1 swift-object-replicator, does that mean that there are actually 0 container auditor workers and 0 object replicators ? | 04:46 |
smart_developer | (if you're consistent with the n-1 processes actually being used for workers, as in the object auditor scenario). | 04:46 |
redbo | well, it also re-launches any children that die. those other services just don't use a worker/controller process. | 04:46 |
*** exploreshaifali has quit IRC | 04:48 | |
smart_developer | redbo : So the fact that there are 3 object auditor processes (2 workers, 1 extra) -- that is not something that is set by default by the Swift core code itself, instead of the .conf files? | 04:50 |
smart_developer | is that correct/incorrect? | 04:50 |
smart_developer | sorry, meant that is*, instead of that is not | 04:50 |
smart_developer | meant to ask: So the fact that there are 3 object auditor processes (2 workers, 1 extra) : that -is- something that is set by default by the Swift core code itself, instead of the .conf files? | 04:51 |
smart_developer | initially | 04:51 |
redbo | that sounds correct | 04:51 |
smart_developer | redbo : Okay -- Is there a way to set it through the .conf files as well? | 04:52 |
redbo | you can turn off one of the scans | 04:52 |
smart_developer | how? | 04:52 |
*** kyles_ne has joined #openstack-swift | 04:53 | |
redbo | if you set zero_byte_fps to 0 in the auditor config, it doesn't run the zero byte scan. which is a subset of the main audit scan. You can also set "concurrency" in the auditor conf, which creates more workers of the main audit. | 04:54 |
smart_developer | redbo : Does the concurrency # = # of workers for the auditor, then ? | 04:55 |
smart_developer | the concurrency # that you set | 04:55 |
redbo | yes | 04:56 |
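(A sketch of the [object-auditor] section being discussed. `concurrency` is the worker count redbo mentions; the zero-byte scan is governed by the knob redbo calls zero_byte_fps, which is spelled zero_byte_files_per_second in the sample object-server.conf if memory serves — check the sample config shipped with your version. Per redbo above, setting it to 0 disables that scan.)

```ini
# /etc/swift/object-server.conf (sketch; values are examples only)
[object-auditor]
# number of main-audit worker processes
concurrency = 2
# set to 0 to skip the separate zero-byte-file scan (per redbo above)
zero_byte_files_per_second = 0
```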
*** nexusz99 has quit IRC | 04:56 | |
smart_developer | Okay thank you! | 04:56 |
smart_developer | Btw, what exactly does setting workers = (some number) in the [DEFAULT] section of your .conf files mean, then? | 04:57 |
smart_developer | What exactly does that specify/set ? | 04:57 |
redbo | how many worker processes should be created for the server. concurrency is only used in the auditor config. | 04:58 |
smart_developer | redbo : What's used for the updaters and replicators, then ? | 04:58 |
smart_developer | the other consistency services. | 04:58 |
smart_developer | or even the middleware, as well. | 04:58 |
redbo | I think most or all of the consistency services support a "concurrency" setting, but they use threads instead of processes. | 04:59 |
smart_developer | redbo : Oh, so then the concurrency # that you set in the .conf files actually means # of threads ? | 05:00 |
smart_developer | (and not # of processes?) | 05:00 |
smart_developer | just wanted to make sure. | 05:00 |
redbo | for the replicators and updaters, yes | 05:00 |
smart_developer | but for auditor it's the # of processes, and not # of threads ? | 05:00 |
zaitcev | [zaitcev@guren swift-tip]$ find swift -name '*rep*.py'| xargs grep concurrency | 05:01 |
zaitcev | swift/common/db_replicator.py: self.cpool = GreenPool(size=concurrency) | 05:01 |
redbo | yep | 05:01 |
smart_developer | Why do they flip-flop it that way ? | 05:01 |
smart_developer | Why not stay consistent to one indicator/method ? | 05:01 |
*** addnull has quit IRC | 05:02 | |
redbo | I think it's just been a "right tool for the job" thing. Some services need to scale beyond one CPU, so they get extra processes. Threads are kind of easier (so long as they don't share state). | 05:03 |
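(A minimal sketch of the pattern zaitcev's grep turned up: a consistency daemon sizes an eventlet GreenPool from its `concurrency` setting and spawns per-partition work onto it. The job function and partition list are made up for illustration.)

```python
import eventlet
from eventlet import GreenPool

concurrency = 8  # in Swift this would come from the daemon's conf section

def sync_partition(partition):
    # stand-in for the real per-partition work (rsync a suffix dir, push db rows, ...)
    eventlet.sleep(0.01)
    return partition

pool = GreenPool(size=concurrency)            # cap on in-flight greenthreads
for done in pool.imap(sync_partition, range(100)):
    pass                                      # at most `concurrency` jobs run at once
pool.waitall()
```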
smart_developer | zaitcev : What in the world did you just share ? | 05:06 |
smart_developer | redbo : Thanks for the info. It was very helpful. | 05:06 |
smart_developer | When do you normally sign on, btw? | 05:06 |
redbo | my schedule isn't very regular | 05:07 |
zaitcev | smart_developer: well, you wanted to know how the "concurrency" flag is used | 05:08 |
smart_developer | redbo How do you check how many threads are being allocated on your node to running Swift ? | 05:08 |
smart_developer | (and zaitcev ) | 05:08 |
zaitcev | I think we only use processes and greenthreads, so you should not actually need to examine os-level threads | 05:08 |
zaitcev | You may though... ps has options for that (man ps). | 05:09 |
zaitcev | As far as greenthreads go, they are unfortunately invisible. | 05:09 |
zaitcev | I mean, without code embedded ahead of time | 05:09 |
zaitcev | I had a patch that dumped greenthreads at some point... | 05:10 |
zaitcev | https://review.openstack.org/70513 | 05:10 |
zaitcev | You know what a green thread is, right? Just asking, sorry. | 05:11 |
smart_developer | zaitcev : Not really XD | 05:12 |
zaitcev | okay | 05:13 |
zaitcev | It's a type of threading that's invisible to the OS and imitated entirely inside an OS-level thread or process. | 05:13 |
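(To make that concrete, a tiny example using the greenlet library directly — the primitive eventlet builds on. The two "threads" only run when control is explicitly switched to them, and the OS never sees them as threads.)

```python
from greenlet import greenlet

def task_a():
    print('A: start')
    gr_b.switch()        # explicitly hand control to B; no OS scheduling involved
    print('A: resumed')

def task_b():
    print('B: start')
    gr_a.switch()        # hand control back to A

gr_a = greenlet(task_a)
gr_b = greenlet(task_b)
gr_a.switch()
# prints: A: start / B: start / A: resumed
```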
smart_developer | redbo : Where did you learn all the information that we went over today -- is there a place that has good documentation about these details, or do you always end up having to look through the core Swift code ? | 05:13 |
zaitcev | he wrote it all | 05:13 |
zaitcev | to begin with | 05:13 |
redbo | haha.. yeah, some of it. And I just watch what changes. I think you might need to read the code to get a lot of that stuff. | 05:14 |
zaitcev | I'm trying to hint that reading the code is easy and profitable | 05:15 |
smart_developer | redbo zaitcev Are you both core Swift developers ? | 05:15 |
zaitcev | well, I still grep through it with xargs | 05:15 |
smart_developer | just wondering | 05:15 |
redbo | yeah | 05:15 |
zaitcev | yes. But I joined only recently, when Swift already was a fully developed project and product. | 05:15 |
zaitcev | So I do not know a lot of history. | 05:16 |
zaitcev | For example... why indeed are the replicators in one process, huh? Obviously it has a side effect of crude load control... But was it intentional? | 05:17 |
smart_developer | crude load control ? | 05:17 |
zaitcev | Well yeah. It won't eat more than 1 CPU even if concurrency is upped. | 05:17 |
redbo | I guess nobody's ever needed more than 1 CPU worth of container replication | 05:18 |
smart_developer | It also seems that object replicators are using only 1 CPU. | 05:19 |
smart_developer | (in observation of redbo 's comment) | 05:19 |
zaitcev | hmm... swift/common/db_replicator.py is the only place that subclasses run_forever | 05:20 |
*** fifieldt has joined #openstack-swift | 05:21 | |
zaitcev | sorry, that was false | 05:21 |
*** kopparam has joined #openstack-swift | 05:21 | |
smart_developer | zaitcev : Along the lines of Swift using greenthreads and not actual OS threads, what's the advantage for Swift using greenthreads ? | 05:21 |
smart_developer | zaitcev : Basically, why did the developers decide to do that ? | 05:22 |
zaitcev | smart_developer: I just told you that I'm not the right guy to ask historical questions. But in most other projects greenthreads are used for ease of locking. You don't need to worry about all your lists getting corrupted (much). | 05:23 |
smart_developer | What lists ? | 05:24 |
zaitcev | *headdesk* | 05:24 |
redbo | I don't know that there's an advantage in every case. but threads in python can perform horribly, especially if you're doing a lot of stuff over the network. | 05:24 |
smart_developer | redbo zaitcev By the way, I just did a "ps -eLf | grep swift" on one of my Swift nodes, and it seems to list over 100 Swift-related threads that are currently running. | 05:28 |
smart_developer | So that means that Swift is using both greenthreads, and OS threads ? | 05:28 |
smart_developer | So in reality there could potentially be way more threads than what I see here in "ps -eLf | grep swift" ? | 05:29 |
redbo | yes.. it gets a little complicated :) it allocates some OS threads to make certain function calls. Normally 20 threads per process. | 05:29 |
zaitcev | Good for running that experiment. Personally I suspect those are library threads that we do not consciously create. Keep in mind though, there's 1 thread per each process minimum. | 05:29 |
zaitcev | Wait, really? In what place? | 05:30 |
smart_developer | (because that command only shows the OS-level threads, I'm assuming -- but this assumption might not be correct in itself). | 05:30 |
redbo | eventlet's tpool creates 20 threads right off the bat | 05:30 |
zaitcev | oh | 05:30 |
smart_developer | redbo : eventlet's tpool creates 20 threads right off the bat -- for each service ? process? or what? | 05:30 |
redbo | each process that uses tpools, which is probably most of them | 05:31 |
redbo | then threads_per_disk in object server creates N threads per drive per worker. We turned that on on a server with a bunch of drives and a bunch of workers today and hit some system limit on the number of threads :) | 05:32 |
smart_developer | Does the Swift administrator get to control how many eventlet threads get produced per process ? | 05:32 |
redbo | there's some environment variable you can set | 05:33 |
redbo | EVENTLET_THREADPOOL_SIZE | 05:33 |
smart_developer | and that's *not* specified in the /etc/swift/*.conf files ? | 05:34 |
redbo | nope | 05:34 |
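(A small sketch of what redbo describes: eventlet's tpool runs blocking calls in a pool of native OS threads, and the pool size comes from the EVENTLET_THREADPOOL_SIZE environment variable rather than any swift conf file. It has to be set before the pool is created, so in practice it is exported in the environment that launches the process.)

```python
import os
# Safest to export this in the service's environment; setting it here only
# works because nothing has touched the thread pool yet.
os.environ.setdefault('EVENTLET_THREADPOOL_SIZE', '10')

from eventlet import tpool

def blocking_listdir(path):
    # stand-in for a call that would otherwise block the whole green hub,
    # e.g. a directory listing on a slow disk
    return os.listdir(path)

entries = tpool.execute(blocking_listdir, '/tmp')
print(len(entries), 'entries')
```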
smart_developer | Does your Swift service / Swift cluster stop working correctly if you set the system ulimit/nproc to a number that's less than all the OS threads that Swift needs ? | 05:36 |
smart_developer | I am running some workloads against my cluster right now, and I see that there are a little over 100 OS threads running for Swift, currently. | 05:37 |
smart_developer | But if I set the system ulimit/nproc to, say, 99 | 05:37 |
smart_developer | will that cause any functions of my Swift cluster/service to not work correctly ? | 05:37 |
smart_developer | (ulimit/nproc can be set by either "ulimit -u 99" or in /etc/security/limits.conf ) | 05:38 |
*** nexusz99 has joined #openstack-swift | 05:39 | |
zaitcev | most of the daemons try to respawn, so this will cause them to loop | 05:39 |
redbo | yes, I think that would cause workers to not start | 05:39 |
redbo | oh yeah, that would be hilarious. I mean unless it happened to me. | 05:39 |
smart_developer | But you would just have fewer workers | 05:39 |
smart_developer | Does that necessarily mean the user of your Swift cluster would notice any difference (other than maybe performance differences). | 05:39 |
smart_developer | So, functionality-wise, can it still be okay ? | 05:40 |
redbo | yeah, but the controller process would be trying to restart them constantly | 05:40 |
*** addnull has joined #openstack-swift | 05:41 | |
smart_developer | redbo : So you're saying that it might still be a fully-functioning Swift cluster/services, but there could be a significant performance degradation ? | 05:42 |
*** ppai has joined #openstack-swift | 05:42 | |
redbo | yes | 05:43 |
smart_developer | zaitcev : That doesn't necessarily mean the entire Swift system itself gets stuck and hangs though, does it ? | 05:43 |
redbo | maybe we should put a little sleep in there where it spawns workers | 05:44 |
smart_developer | redbo : I imagine that would have its own advantages / drawbacks though, right ? | 05:44 |
smart_developer | And also, isn't it good to set some kind of reasonable limit for the ulimit/nproc, since you want to prevent a forkbomb from using up all of your system resources, right ? | 05:45 |
redbo | I don't think it'd hurt anything. workers don't usually die anyway. | 05:45 |
smart_developer | For instance, I ran into an issue where the object-auditor started fork bombing. | 05:45 |
smart_developer | And it took up all the ulimit/nproc # of processes that the OS would allow it to utilize. | 05:46 |
redbo | doesn't sound like a terrible idea. but you'd have to set it higher than what it normally uses. which, once it's been running for a bit, it doesn't really create new threads, so that number should be easy to pin down. | 05:47 |
smart_developer | redbo : Why do you have to set it higher than what it normally uses ? | 05:47 |
redbo | so you don't hit it during normal operations? | 05:48 |
smart_developer | redbo : Why not set it at exactly what it normally uses ? | 05:50 |
*** tkay has quit IRC | 05:50 | |
*** kopparam has quit IRC | 05:51 | |
*** kopparam has joined #openstack-swift | 05:51 | |
redbo | well like the replicator needs to launch rsyncs. it might launch other transient processes or threads I can't think of. | 05:51 |
smart_developer | Ah, I see! | 05:52 |
smart_developer | ok. | 05:52 |
smart_developer | redbo : Can you do "ps -eLf | grep swift | wc -l" on some of your nodes, and let me know how many threads there tend to be ? | 05:53 |
smart_developer | I'm getting around 110. | 05:54 |
smart_developer | A little different for each node (by 1 or 2). | 05:54 |
*** kopparam has quit IRC | 05:56 | |
redbo | randomly chosen server says 965. But that's a big server with a lot of workers. | 05:57 |
*** kyles_ne has quit IRC | 05:57 | |
*** tkay has joined #openstack-swift | 05:57 | |
*** tsg has joined #openstack-swift | 05:57 | |
smart_developer | redbo : Is it a similar number on another server with similar specs ? | 05:58 |
*** kyles_ne has joined #openstack-swift | 05:58 | |
smart_developer | redbo : Moreover, should one expect it to be similar ? | 05:59 |
*** addnull has quit IRC | 05:59 | |
redbo | well these others are all coming in at 740-750. I think the first one also happened to be running container servers, and these are pure object nodes. | 06:00 |
smart_developer | redbo : Why would there be more for container servers, compared to object servers ? | 06:01 |
*** Manish_ has joined #openstack-swift | 06:01 | |
redbo | it's running both | 06:01 |
Manish_ | clayg: Hi...Hope you are doing well | 06:01 |
Manish_ | clayg: I have one query related to the permissible limit of Container/Account metadata size | 06:02 |
smart_developer | redbo : But if one had pure container servers, and a different server had pure object servers, which one would be expected to have more threads running on them ? | 06:02 |
*** kyles_ne has quit IRC | 06:02 | |
smart_developer | redbo : Comparing two hypothetical servers | 06:02 |
smart_developer | redbo : I guess we could also throw in a pure account server in this scenario, as well, for comparison. | 06:02 |
openstackgerrit | Yuan Zhou proposed a change to openstack/swift-specs: Initial draft for high level EC Get flow https://review.openstack.org/117492 | 06:03 |
Manish_ | clayg: SWIFT documentation says that "max_meta_overall_size" is 4096 bytes, does it include ACL-related information (X-Container-Read, X-Container-Write) as well, which is basically also metadata... | 06:04 |
redbo | it's just a function of how many workers you have. 21 for each worker, plus probably 21 for each consistency service process you have running, plus whatever for rsync. | 06:04 |
*** kopparam has joined #openstack-swift | 06:04 | |
redbo | then if you run threads_per_disk on object servers, you get N threads for each disk. | 06:05 |
smart_developer | "then if you run threads_per_disk on object servers, you get N threads for each disk." -- is in addition to --- "it's just a function of how many workers you have. 21 for each worker, plus probably 21 for each consistency service process you have running, plus whatever for rsync", correct ? | 06:06 |
smart_developer | (on top of that) | 06:06 |
redbo | yes, I think so. those 20 threads get created when you import tpool, which still happens even if you set threads_per_disk. | 06:07 |
redbo | oh that's a lie, they're created the first time you execute something in a thread pool. so maybe they aren't created. | 06:07 |
smart_developer | I thought tpool was for greenthreads, though ? | 06:08 |
smart_developer | (greenthreads is for eventlet, right?) | 06:08 |
redbo | tpool is the 20 system threads that it can send work over to | 06:08 |
openstackgerrit | OpenStack Proposal Bot proposed a change to openstack/swift: Imported Translations from Transifex https://review.openstack.org/124290 | 06:09 |
smart_developer | Ok then how do you know how many threads, and how many processes, are created for rsync ? | 06:09 |
redbo | umm... it should launch a maximum of the concurrency of the replicator | 06:10 |
*** addnull has joined #openstack-swift | 06:12 | |
smart_developer | threads or processes ? | 06:12 |
redbo | rsyncs are processes | 06:12 |
*** astellwag has quit IRC | 06:13 | |
smart_developer | so, replicators are threads (as mentioned earlier), but rsyncs are processes ? | 06:13 |
smart_developer | well, you just confirmed for the rsync part, but could we confirm on the replicators-being-defined-as-threads, part ? | 06:13 |
redbo | the object replicator uses worker processes, the container replicator uses green threads/probably real threads, and they both can execute [concurrency] rsync processes. | 06:15 |
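(Pulling redbo's numbers together, a rough estimator for what `ps -eLf` might show on a node: roughly 1 main thread plus ~20 tpool threads per process, plus any threads_per_disk threads on object workers. Everything here is approximate, and rsync shows up as extra processes rather than threads.)

```python
def estimate_threads(wsgi_processes, consistency_processes,
                     threads_per_disk=0, disks=0, object_workers=0):
    """Back-of-the-envelope thread estimate based on the discussion above."""
    per_process = 1 + 20   # main thread + default eventlet tpool size
    total = (wsgi_processes + consistency_processes) * per_process
    total += threads_per_disk * disks * object_workers
    return total

# Example: 4 object-server processes (3 workers + parent) and 6 consistency
# daemon processes, with threads_per_disk left at its default of 0.
print(estimate_threads(wsgi_processes=4, consistency_processes=6))   # 210
```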
*** nshaikh has joined #openstack-swift | 06:21 | |
Manish_ | any one who can resolve my query? | 06:24 |
*** dvorkbjel has quit IRC | 06:29 | |
redbo | Manish_: only things that start with x-container-meta count towards max_meta_overall_size for containers, so x-container-read and write won't. even though they're treated a lot like meta values. | 06:29 |
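(A sketch of the distinction redbo draws: only X-Container-Meta-* headers count against the overall metadata limit, while the ACL headers are stored but not counted. The 4096 figure is the max_meta_overall_size value Manish_ quoted; the helper below is illustrative and not Swift's actual constraint check.)

```python
MAX_META_OVERALL_SIZE = 4096  # bytes, as quoted from the docs above

def user_meta_size(headers):
    # Count only x-container-meta-* names and values; ACL headers don't count.
    return sum(len(name) + len(value)
               for name, value in headers.items()
               if name.lower().startswith('x-container-meta-'))

headers = {
    'X-Container-Meta-Color': 'blue',
    'X-Container-Read': '.r:*',        # ACL header: stored, but not counted here
    'X-Container-Write': 'acct:user',
}
print(user_meta_size(headers), '<=', MAX_META_OVERALL_SIZE)
```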
*** swat30 has quit IRC | 06:29 | |
*** dvorkbjel has joined #openstack-swift | 06:32 | |
*** mahatic has quit IRC | 06:32 | |
*** addnull has quit IRC | 06:34 | |
*** nexusz99 has quit IRC | 06:37 | |
*** nexusz99 has joined #openstack-swift | 06:37 | |
*** k4n0 has joined #openstack-swift | 06:37 | |
*** zaitcev has quit IRC | 06:39 | |
*** addnull has joined #openstack-swift | 06:40 | |
*** swat30 has joined #openstack-swift | 06:40 | |
Manish_ | redbo: Thanks for the information. Do you know, if there is any limitation about maximum number of users to whom read/write ACL can be provided at a time? | 06:40 |
*** tkay has quit IRC | 06:41 | |
*** tkay has joined #openstack-swift | 06:46 | |
Manish_ | redbo: I mean if i need to provide write ACL on any container, to 500 users out of 1000 users, so can i do this by setting their names in X-Container-Write? Or is there any support for userGroup in SWIFT? | 06:47 |
smart_developer | redbo zaitcev : Thank you very much for your time. Hope you guys have a good one! | 06:49 |
redbo | I think that's going to depend on the auth system. I know we can do access by roles, I'd guess keystone has something similar. There's no specific limit on number of users in an ACL, but we do limit all headers to 8192 bytes, so that might be pushing it. | 06:49 |
smart_developer | redbo zaitcev : Thank you. | 06:49 |
redbo | np, laters | 06:49 |
*** mahatic has joined #openstack-swift | 06:50 | |
*** tkay has quit IRC | 06:51 | |
*** tkay has joined #openstack-swift | 06:51 | |
redbo | Manish_: I don't know, you'll have to find someone who knows auth stuff better. I think you could create a role and give access to it with the acl and assign that role to the users, but my knowledge of that isn't very deep. | 06:55 |
openstackgerrit | Matthew Oliver proposed a change to openstack/swift: Add sort option to container listings https://review.openstack.org/121278 | 06:56 |
openstackgerrit | Matthew Oliver proposed a change to openstack/swift: Add container reverse listings https://review.openstack.org/120709 | 06:56 |
*** tkay has quit IRC | 07:00 | |
*** Edward-Zhang has quit IRC | 07:06 | |
*** tgohad has joined #openstack-swift | 07:10 | |
*** BAKfr has joined #openstack-swift | 07:11 | |
*** tsg has quit IRC | 07:13 | |
*** tkay has joined #openstack-swift | 07:15 | |
*** kopparam has quit IRC | 07:15 | |
*** dvorkbjel has quit IRC | 07:15 | |
*** dvorkbjel has joined #openstack-swift | 07:17 | |
*** mahatic has quit IRC | 07:28 | |
*** haomaiw__ has quit IRC | 07:33 | |
*** haomaiwang has joined #openstack-swift | 07:34 | |
*** tgohad__ has joined #openstack-swift | 07:48 | |
*** kopparam has joined #openstack-swift | 07:50 | |
*** tkay has quit IRC | 07:51 | |
*** tgohad has quit IRC | 07:52 | |
*** haomaiw__ has joined #openstack-swift | 07:53 | |
*** haomaiwang has quit IRC | 07:54 | |
*** haomaiw__ has quit IRC | 07:59 | |
*** haomaiwang has joined #openstack-swift | 08:00 | |
acoles | torgomatic: thanks for your replies on the zero-copy patch. | 08:01 |
*** kota_ has quit IRC | 08:02 | |
*** foexle has joined #openstack-swift | 08:08 | |
*** oomichi has quit IRC | 08:11 | |
*** haomaiw__ has joined #openstack-swift | 08:16 | |
*** haomaiwang has quit IRC | 08:18 | |
*** ppai has quit IRC | 08:29 | |
acoles | torgomatic: curious about autogenerated config, as far as i can see gate-swift-docs just builds the rst | 08:31 |
acoles | so, the rate_limit_ options are still in the proxy section of the deployment guide, but got removed from sample config a while back | 08:32 |
*** joeljwright has joined #openstack-swift | 08:40 | |
*** ppai has joined #openstack-swift | 08:42 | |
*** nshaikh has quit IRC | 08:47 | |
*** aix has joined #openstack-swift | 08:49 | |
*** nshaikh has joined #openstack-swift | 08:51 | |
*** tgohad__ has quit IRC | 08:58 | |
*** a1|away_ is now known as AbyssOne | 09:06 | |
*** Dafna has joined #openstack-swift | 09:07 | |
*** X019 has joined #openstack-swift | 09:26 | |
*** dmorita has quit IRC | 09:34 | |
Manish_ | redbo: is the metadata limit of 4096 bytes a per-request limitation, or the total size of metadata which can be associated with any container/account? | 09:46 |
*** exploreshaifali has joined #openstack-swift | 10:00 | |
*** nosnos has quit IRC | 10:09 | |
*** nosnos has joined #openstack-swift | 10:09 | |
*** kopparam has quit IRC | 10:10 | |
*** kopparam has joined #openstack-swift | 10:18 | |
*** kopparam has quit IRC | 10:20 | |
*** kopparam has joined #openstack-swift | 10:21 | |
*** nshaikh has quit IRC | 10:24 | |
*** nshaikh has joined #openstack-swift | 10:25 | |
*** tdasilva has quit IRC | 10:35 | |
*** jamiehannaford has joined #openstack-swift | 10:35 | |
*** haomaiw__ has quit IRC | 10:47 | |
*** kbee has quit IRC | 10:48 | |
*** addnull has quit IRC | 10:55 | |
*** nexusz99 has quit IRC | 10:56 | |
*** cdelatte has joined #openstack-swift | 11:09 | |
*** davdunc` has quit IRC | 11:31 | |
*** tdasilva has joined #openstack-swift | 11:33 | |
*** ppai has quit IRC | 11:38 | |
*** k4n0 has quit IRC | 11:46 | |
*** joeljwright has quit IRC | 11:47 | |
*** ppai has joined #openstack-swift | 11:51 | |
*** kopparam has quit IRC | 11:51 | |
*** kopparam has joined #openstack-swift | 11:52 | |
*** kopparam has quit IRC | 11:57 | |
*** sandywalsh has quit IRC | 12:01 | |
*** ppai has quit IRC | 12:03 | |
*** kopparam has joined #openstack-swift | 12:06 | |
*** kopparam_ has joined #openstack-swift | 12:09 | |
*** kopparam has quit IRC | 12:11 | |
*** geaaru has joined #openstack-swift | 12:13 | |
*** kopparam_ has quit IRC | 12:14 | |
*** nosnos has quit IRC | 12:24 | |
*** nosnos has joined #openstack-swift | 12:24 | |
*** nosnos has quit IRC | 12:29 | |
*** kopparam has joined #openstack-swift | 12:38 | |
*** dmsimard_away is now known as dmsimard | 12:39 | |
*** kopparam has quit IRC | 12:43 | |
*** tsg has joined #openstack-swift | 12:56 | |
*** dmsimard is now known as dmsimard_away | 13:02 | |
*** occupant has quit IRC | 13:03 | |
*** occupant has joined #openstack-swift | 13:03 | |
*** X019 has quit IRC | 13:04 | |
*** mahatic has joined #openstack-swift | 13:06 | |
*** nshaikh has quit IRC | 13:12 | |
*** lcurtis has joined #openstack-swift | 13:12 | |
*** X019 has joined #openstack-swift | 13:16 | |
*** bill_az has joined #openstack-swift | 13:28 | |
*** Manish_ has quit IRC | 13:30 | |
*** mrsnivvel has quit IRC | 13:34 | |
*** davdunc` has joined #openstack-swift | 13:34 | |
*** davdunc` is now known as davdunc | 13:35 | |
*** davdunc has quit IRC | 13:35 | |
*** davdunc has joined #openstack-swift | 13:35 | |
*** dmsimard_away is now known as dmsimard | 13:36 | |
openstackgerrit | A change was merged to openstack/swift-specs: Initial draft for high level EC Get flow https://review.openstack.org/117492 | 13:36 |
openstackgerrit | Filippo Giunchedi proposed a change to openstack/swift: Additional exit codes for swift-drive-audit https://review.openstack.org/124398 | 13:40 |
*** X019 has left #openstack-swift | 13:41 | |
*** tdasilva has quit IRC | 14:00 | |
*** kbee has joined #openstack-swift | 14:00 | |
*** exploreshaifali has quit IRC | 14:15 | |
openstackgerrit | Simon Lorenz proposed a change to openstack/swift: Added flag to create unique tmp dir per strg node https://review.openstack.org/124088 | 14:19 |
*** kbee has quit IRC | 14:22 | |
*** kbee has joined #openstack-swift | 14:23 | |
*** nexusz99 has joined #openstack-swift | 14:41 | |
*** annegent_ has joined #openstack-swift | 14:44 | |
*** tsg has quit IRC | 14:46 | |
*** tsg has joined #openstack-swift | 14:57 | |
*** bill_az has quit IRC | 15:08 | |
*** kenhui has joined #openstack-swift | 15:12 | |
*** annegent_ has quit IRC | 15:15 | |
*** jyoti-ranjan has quit IRC | 15:16 | |
*** jyoti-ranjan has joined #openstack-swift | 15:16 | |
*** annegent_ has joined #openstack-swift | 15:19 | |
*** anteaya has quit IRC | 15:23 | |
*** zacksh has quit IRC | 15:24 | |
*** zacksh has joined #openstack-swift | 15:25 | |
*** jyoti-ranjan has quit IRC | 15:26 | |
mahatic | hi, is there any specific doc on swift-recon? I only see its references, but nowhere a manual/doc on it. Can someone please point me to it? | 15:32 |
mahatic | I already went through this swift-recon part here: https://swiftstack.com/blog/2012/04/11/swift-monitoring-with-statsd/ | 15:32 |
acoles | mahatic: http://docs.openstack.org/developer/swift/admin_guide.html#cluster-telemetry-and-monitoring | 15:34 |
mahatic | acoles, oh i did go through that as well. It talks about how to use/config swift-recon | 15:35 |
*** tsg has quit IRC | 15:37 | |
acoles | mahatic: ah, ok. sorry i don't have any other links | 15:37 |
mahatic | acoles, np. thanks for the link | 15:38 |
*** annegent_ has quit IRC | 15:40 | |
*** BAKfr has quit IRC | 15:42 | |
*** mrsnivvel has joined #openstack-swift | 15:43 | |
*** annegent_ has joined #openstack-swift | 15:45 | |
notmyname | good morning | 15:46 |
notmyname | mahatic: also look at the help messages for swift-recon. and I think there is a man page | 15:46 |
mahatic | good morning | 15:46 |
mahatic | oh okay | 15:47 |
mahatic | notmyname, also, while i will continue to get familiarized with the project, we/you would still be waiting for app process to go through, correct? | 15:48 |
openstackgerrit | A change was merged to openstack/swift: Zero-copy object-server GET responses with splice() https://review.openstack.org/102609 | 15:48 |
mahatic | In that case, can you as well point me to some bugs that i could work on meanwhile? or should i just look them up on launchpad? | 15:48 |
openstackgerrit | A change was merged to openstack/python-swiftclient: Fix unit tests failing when OS_ env vars are set https://review.openstack.org/124009 | 15:48 |
*** mrsnivvel has quit IRC | 15:48 | |
notmyname | woohoo! zerocopy GET landed | 15:49 |
notmyname | hang on. gotta restart. brb | 15:49 |
mahatic | okay | 15:50 |
notmyname | back | 15:52 |
notmyname | acoles: looks like probably at least another hour for https://review.openstack.org/#/c/123975/ to land | 15:52 |
*** kyles_ne has joined #openstack-swift | 15:54 | |
acoles | notmyname: :( it could yet fail... | 15:56 |
notmyname | "at least" ;-) | 15:56 |
notmyname | interesting. cinder and triple-o are the only 2 projects that have more than one person running for ptl | 16:02 |
notmyname | https://wiki.openstack.org/wiki/PTL_Elections_September/October_2014 | 16:02 |
*** mrsnivvel has joined #openstack-swift | 16:02 | |
acoles | notmyname: torgomatic: i have 'no more copies anymore' in my head to this tune https://www.youtube.com/watch?v=g6yTRq_rJg4 | 16:03 |
acoles | i have no idea if that spans continents/generations ! | 16:03 |
*** gyee has joined #openstack-swift | 16:03 | |
notmyname | :-) | 16:04 |
*** tsg has joined #openstack-swift | 16:05 | |
notmyname | tsg: good morning | 16:06 |
notmyname | mahatic: so you were wondering about....? | 16:06 |
mahatic | notmyname, okay you remember :D I was wondering (again) if i should have ideally posted in the opw channel | 16:07 |
notmyname | mahatic: if it's about swift, here is better | 16:07 |
mahatic | but here it is: just to clarify/confirm - while i continue to get familiarized with the project, we/you would still be waiting for app process to go through, correct? | 16:07 |
mahatic | In that case, can you as well point me to some bugs that i could work on meanwhile? or should i just look them up on launchpad? | 16:07 |
mahatic | it's about partly both | 16:08 |
hurricanerix | Is anybody going to the hackathon interested in playing board/card games one night? I am considering packing some, but if nobody would be interested then I wont. | 16:09 |
notmyname | hurricanerix: nice! | 16:10 |
*** haomaiwang has joined #openstack-swift | 16:10 | |
*** annegent_ has quit IRC | 16:11 | |
notmyname | mahatic: I'm not specifically waiting on anything from opw. from my reading of it, it's more of "do various things to get involved" rather than "here's one big three month project". | 16:11 |
notmyname | mahatic: and also, what's needed today won't necessarily be the same by the time opw officially kicks off | 16:11 |
hurricanerix | notmyname: I guess I will take that as a yes. =) | 16:11 |
notmyname | hurricanerix: well, at least one yes. | 16:12 |
notmyname | mahatic: therefore, I don't think there's any reason to wait on opw eg to do the recon checker thing | 16:12 |
*** pberis has quit IRC | 16:13 | |
mahatic | notmyname, sure. Also, I will have to do a small write up on what I would be working on in those 3 months, after discussing with the mentor | 16:13 |
mahatic | that's a part of the process | 16:13 |
*** mrsnivvel has quit IRC | 16:13 | |
mahatic | notmyname, like you said, if not exactly. probably an idea would help i guess | 16:14 |
notmyname | mahatic: going through https://bugs.launchpad.net/swift to see what pain points people currently have in swift is always valuable as well. | 16:19 |
*** bkopilov has quit IRC | 16:19 | |
mahatic | notmyname, sure | 16:21 |
notmyname | in python when will [int] > [str]? | 16:28 |
*** haomaiwang has quit IRC | 16:29 | |
notmyname | ah ok. "never" http://stackoverflow.com/questions/3270680/how-does-python-compare-string-and-int | 16:31 |
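(For the record, a tiny illustration of why the answer is "never": CPython 2's fallback ordering puts numbers before strings, so the comparison is consistent but always False, while Python 3 refuses to compare them at all. The exact Python 3 error text may vary by version.)

```python
import sys

a, b = [1], ['a']
if sys.version_info[0] == 2:
    print(a > b)    # False: ints sort before strs under CPython 2's fallback ordering
    print(a < b)    # True
else:
    try:
        a > b
    except TypeError as exc:
        print('Python 3 refuses to compare them:', exc)
```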
notmyname | acoles: is HP still using the name_check middleware? | 16:31 |
notmyname | because it looks like it is (and always has been) pretty broken https://bugs.launchpad.net/swift/+bug/1372397 | 16:33 |
notmyname | mahatic: not sure if Madhuri is working on that or not, but that's a pretty simple bug to fix | 16:33 |
*** anteaya has joined #openstack-swift | 16:34 | |
mahatic | notmyname, oh looking into it | 16:36 |
acoles | notmyname: believe so, will ping otoolee | 16:37 |
kbee | notmyname: i see 2 +2s and +workflow against https://review.openstack.org/#/c/123835/.. any idea who needs to approve it.. | 16:37 |
mahatic | notmyname, maybe I will check with madhuri when she is in | 16:38 |
mahatic | oh the bug is also reported by her and assigned to herself | 16:39 |
notmyname | kbee: http://status.openstack.org/zuul/ <-- things take a _long_ time to land in openstack... | 16:39 |
notmyname | kbee: the integrated gate queue (where swift patches go to land in swift) is currently over 15 hours long | 16:40 |
kbee | notmyname: Thanks.. if i remember.. one needs to approve also even after +2 right.. i did not see any mail that it was approved.. hence asked.. | 16:40 |
notmyname | kbee: you've got a green check in the workflow column. that's what it needs | 16:41 |
kbee | notmyname: awesome.. thanks | 16:41 |
*** annegent_ has joined #openstack-swift | 16:41 | |
notmyname | kbee: but, zuul says that patch is "skipped". I have no idea why. so it still may take a while to land. maybe it doesn't rebase cleanly on top of master any more | 16:41 |
notmyname | dfg: are you (or anyone at RAX) working on https://bugs.launchpad.net/swift/+bug/1366090 ? | 16:43 |
*** annegent_ has quit IRC | 16:46 | |
kbee | notmyname: should i rebase and resubmit or what needs to be done now | 16:47 |
*** mkollaro has quit IRC | 16:47 | |
notmyname | kbee: what have you checked? | 16:47 |
kbee | notmyname: https://review.openstack.org/#/c/123835/. | 16:48 |
notmyname | kbee: no, I mean why do you think you need to do something other than wait? | 16:49 |
*** sandywalsh has joined #openstack-swift | 16:49 | |
kbee | notmyname: rebase and push code for review ? | 16:50 |
notmyname | kbee: why do you want to do that? what problem are you trying to solve by doing that? | 16:50 |
kbee | notmyname: I'm not sure.. just asked since you said that " I have no idea why. so it still may take a while to land. maybe it doesn't rebase cleanly on top of master any more" | 16:51 |
notmyname | kbee: ok, so figure out if there is a problem or not first. and if there is, then you'll have a path forward. | 16:52 |
*** Dafna has quit IRC | 16:53 | |
notmyname | kbee: all I'm saying is that it takes a while to get patches through the gate. maybe there's an issue. maybe not. I haven't looked | 16:53 |
kbee | notmyname: ok..thnx.. got it.. i'll look into it.. | 16:53 |
*** dencaval has joined #openstack-swift | 16:54 | |
notmyname | fifieldt: cool! joanna is working on https://bugs.launchpad.net/swift/+bug/1364735 ? | 16:56 |
notmyname | mahatic: that's another pretty easy one, but I'd want to hear from fifieldt first to see if joanna is working on it | 16:56 |
mahatic | notmyname, oh sure. I actually came across it just a while ago. In lowhangingfruit tag i guess | 16:59 |
notmyname | yeah, that's a good tag to search on | 16:59 |
dfg | notmyname: no. patches welcome :) | 17:01 |
tsg | notmyname: morning, missed your message earlier | 17:01 |
*** annegent_ has joined #openstack-swift | 17:01 | |
notmyname | dfg: kk. I'm going through bug reports this morning | 17:01 |
notmyname | dfg: I added a comment based on a stupid simple test on my saio | 17:02 |
dfg | notmyname: i'll fix that sos pull that you did with the conflicts- i was just busy that day | 17:02 |
notmyname | dfg: heh, no worries (but I think it would be cool) | 17:02 |
notmyname | dfg: I didn't realize it had conflicts now | 17:02 |
dfg | i merged a separate pull req. anyway- not a big deal | 17:03 |
notmyname | dfg: thanks | 17:03 |
*** acoles is now known as acoles_away | 17:04 | |
*** smart_developer has quit IRC | 17:06 | |
*** smart_developer has joined #openstack-swift | 17:11 | |
kbee | notmyname: just out of curiosity, whats the importance you would tag https://bugs.launchpad.net/swift/+bug/1370680 | 17:12 |
notmyname | kbee: low. swift-ring-builder isn't in the read/write data path | 17:13 |
notmyname | kbee: certainly a way to make things better, though. thanks for working on it | 17:13 |
kbee | notmyname: ok :).. on https://review.openstack.org/#/c/123835/, found there was a rebase problem.. have rebased and pushed again for review.. | 17:14 |
*** annegent_ has quit IRC | 17:14 | |
*** haomaiwang has joined #openstack-swift | 17:14 | |
imkarrer | Good afternoon. I am using the Tempest tests for Swift and have a few questions about the expected authentication method used when creating a new user during test setUp for the tests contained in test_container_acl.py and test_account_quotas.py. First, am I in the right place or should I place this question in the Openstack Tempest channel? | 17:15 |
notmyname | imkarrer: not sure. you won't find a lot of tempest-specific help in here (try #openstack-qa for that), but we can help if you have auth questions around swift | 17:17 |
clayg | ohai | 17:17 |
openstackgerrit | Keshava Bharadwaj proposed a change to openstack/swift: Fixes unit tests to clean up temporary directories https://review.openstack.org/123835 | 17:18 |
*** haomaiwang has quit IRC | 17:19 | |
kbee | torgomatic: peluse there was a rebase problem on https://review.openstack.org/#/c/123835/ and hence was not merged.. i've rebased and resubmitted.. | 17:20 |
*** imkarrer has quit IRC | 17:21 | |
*** imkarrer has joined #openstack-swift | 17:21 | |
*** kyles_ne has quit IRC | 17:33 | |
* kbee is logging off for the day | 17:33 | |
*** kyles_ne has joined #openstack-swift | 17:33 | |
*** tdasilva has joined #openstack-swift | 17:35 | |
*** nexusz99 has quit IRC | 17:36 | |
*** kbee has quit IRC | 17:37 | |
*** kyles_ne has quit IRC | 17:38 | |
*** foexle has quit IRC | 17:39 | |
*** geaaru has quit IRC | 17:42 | |
notmyname | mahatic: here's another small one to look at https://bugs.launchpad.net/swift/+bug/1244506 | 17:48 |
mahatic | notmyname, okay. This one: https://review.openstack.org/#/c/69576/ | 17:49 |
*** exploreshaifali has joined #openstack-swift | 17:49 | |
mahatic | I was looking at that one | 17:50 |
mahatic | I just assigned myself to the link you just gave | 17:53 |
mahatic | and "You may only assign yourself because you are not affiliated with this project and do not have any team memberships." this is confusing | 17:53 |
*** tkay has joined #openstack-swift | 17:53 | |
*** jamiehannaford has quit IRC | 17:54 | |
notmyname | mahatic: yes. that is confusing. I wonder what it means | 17:55 |
mahatic | notmyname, heh yeah | 17:56 |
*** kyles_ne has joined #openstack-swift | 18:04 | |
*** kyles_ne has quit IRC | 18:06 | |
*** kyles_ne has joined #openstack-swift | 18:06 | |
exploreshaifali | notmyname, I am trying to configure swift in my VM, there is no opt/stack/swift folder in VM | 18:07 |
exploreshaifali | I am following devstack guide for swift | 18:08 |
notmyname | exploreshaifali: you're setting up a swift dev environment? | 18:08 |
exploreshaifali | yes | 18:08 |
exploreshaifali | http://devstack.org/configuration.html | 18:08 |
openstackgerrit | Alan Erwin proposed a change to openstack/swift: Adding object partition check https://review.openstack.org/122194 | 18:08 |
notmyname | exploreshaifali: don't use devstack | 18:08 |
exploreshaifali | ok | 18:08 |
exploreshaifali | than? | 18:08 |
notmyname | exploreshaifali: https://github.com/swiftstack/vagrant-swift-all-in-one is the fast way | 18:09 |
notmyname | exploreshaifali: and that's an automated version of http://docs.openstack.org/developer/swift/development_saio.html | 18:09 |
exploreshaifali | yeah, I was about to ask for same | 18:10 |
exploreshaifali | :) | 18:10 |
exploreshaifali | also please can you provide any link for the swift CLI | 18:10 |
notmyname | ? | 18:10 |
notmyname | what doe you mean? | 18:10 |
notmyname | that's the python-swiftclient project (it provides a CLI and SDK) | 18:11 |
exploreshaifali | notmyname, this bug https://bugs.launchpad.net/python-swiftclient/+bug/1371650 | 18:11 |
exploreshaifali | is related to swift CLI so I asked for it | 18:11 |
notmyname | exploreshaifali: oh. dev work on swiftclient would need a swift cluster to talk to, but you can dev wherever you want for that | 18:11 |
exploreshaifali | ok | 18:11 |
exploreshaifali | cool :) | 18:12 |
exploreshaifali | Thanks! | 18:12 |
*** NM1 has joined #openstack-swift | 18:14 | |
NM1 | Hi guys! Did anyone run any tests concerning the new bash bug? | 18:15 |
*** NM1 has left #openstack-swift | 18:17 | |
*** NM1 has joined #openstack-swift | 18:17 | |
*** tsg has quit IRC | 18:17 | |
*** tsg has joined #openstack-swift | 18:18 | |
glange_ | what sort of tests? | 18:23 |
NM1 | I see some blog post talking about executing arbitrary commands, just by setting specific header on the http request. | 18:28 |
glange_ | true, but are worried about that affecting swift? | 18:28 |
*** annegent_ has joined #openstack-swift | 18:31 | |
glange_ | I thought that was a bash security problem and didn't have anything to do with http | 18:31 |
NM1 | Just wondering. I supposed that most of the users start swift by using bash as shell | 18:31 |
notmyname | certainly don't consider this an exhaustive pen-test, but I sent the exploit over the content-type and referer headers and didn't see any issues. swift doesn't set the environment variables and call out to a shell (on the read/write data path) | 18:31 |
NM1 | notmyname: I also tried it here with the same results. | 18:32 |
notmyname | glange_: yeah, but if you've got a server process that takes the user input and uses it to populate env vars then call a shell, you're doomed | 18:32 |
notmyname | glange_: and the first easy example was with cgi things | 18:32 |
glange_ | oh, like a sql injection? | 18:32 |
notmyname | glange_: but I've also seen a demo of doing it via dhcp | 18:32 |
notmyname | sortof. but in this case it's shell functions, not sql | 18:33 |
glange_ | yeah | 18:33 |
glange_ | "like" :) | 18:34 |
NM1 | People running swift behind an apache http server could have some problems. But of course that would be an apache issue, not swift. | 18:34 |
notmyname | glange_: like a sports metaphor | 18:35 |
glange_ | exactly | 18:35 |
notmyname | NM1: ya maybe. don't know | 18:35 |
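(A sketch of the kind of header probe described above, assuming the python `requests` library; the URL, token, and container name are illustrative. As notmyname says, a clean result only shows that this particular path doesn't copy the header into an environment variable and invoke a shell — it is not an exhaustive test.)

```python
import requests

# The well-known Shellshock probe: if anything downstream puts this header into
# an environment variable and then runs bash, the trailing command executes.
PROBE = '() { :; }; echo SHELLSHOCKED'

resp = requests.get(
    'http://127.0.0.1:8080/v1/AUTH_test/some-container',   # illustrative endpoint
    headers={
        'Referer': PROBE,
        'Content-Type': PROBE,
        'X-Auth-Token': 'AUTH_tk_example',                  # illustrative token
    },
    timeout=10,
)
print(resp.status_code)
# Afterwards, check the proxy/object server logs and the response for any sign
# of "SHELLSHOCKED" or other unexpected command output.
```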
*** annegent_ has quit IRC | 18:36 | |
*** annegent_ has joined #openstack-swift | 18:36 | |
*** tsg has quit IRC | 18:48 | |
*** tsg has joined #openstack-swift | 18:50 | |
*** kyles_ne has quit IRC | 18:51 | |
*** tkay has quit IRC | 18:52 | |
ahale | yeah i had a quick poke around swift earlier and decided the only place it shells out is the replicators calling rsync, and it's fine, though i didn't do an exhaustive check on libs | 18:55 |
*** tkay has joined #openstack-swift | 18:56 | |
notmyname | ahale: yeah. that | 18:56 |
mahatic | notmyname, I can't quite figure out which file is generating the exception. I did look into multithreading.py, but other than that, could you point me in some direction please? | 18:59 |
mahatic | https://bugs.launchpad.net/swift/+bug/1244506 | 18:59 |
notmyname | "JSONx is an IBM® standard format to represent JSON as XML." but....why? | 19:01 |
notmyname | mahatic: you've got the swift client locally? | 19:02 |
mahatic | yeah | 19:02 |
swifterdarrell | notmyname: well, you've got Java in hand and all those "web 2.0" jerks keep insisting on JSON... | 19:02 |
mahatic | notmyname, that's where i found the file /python-swiftclient/swiftclient/ | 19:03 |
swifterdarrell | notmyname: if you've got a ton of XML tooling, it's gotta be annoying to deal with JSON | 19:03 |
notmyname | mahatic: and if you upload something and set the segment size to some non-integer, do you see a traceback | 19:03 |
notmyname | swifterdarrell: sure, of course :-) | 19:04 |
notmyname | swifterdarrell: as I saw elsewhere, all these people say that they have enterprise experience, but I don't think they've spent a single day on a starship | 19:04 |
swifterdarrell | notmyname: ++ | 19:05 |
*** NM1 has left #openstack-swift | 19:08 | |
*** NM1 has joined #openstack-swift | 19:08 | |
Tyger_ | notmyname: The problem with that is the starship Enterprise was named after a US Navy vessel. (Quite a few of them, actually.) | 19:08 |
*** Tyger_ is now known as Tyger | 19:09 | |
*** tsg has quit IRC | 19:10 | |
notmyname | mahatic: basically, recreate it locally. that is, get the traceback yourself, and then let's work from there | 19:19 |
mahatic | notmyname, sorry. trying that out took me to another traceback. :D trying to resolve it. will ping back | 19:20 |
*** kyles_ne has joined #openstack-swift | 19:22 | |
*** tkay has quit IRC | 19:24 | |
*** kyles_ne has quit IRC | 19:26 | |
mahatic | notmyname, swift upload -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing v1 /home/mahatic/myswiftfile | 19:27 |
notmyname | mahatic: and that should work, right? | 19:28 |
mahatic | gives me "HTTPConnectionPool(host='127.0.0.1', port=8080): Max retries exceeded with url: " | 19:28 |
notmyname | ah that's something different than the bug report | 19:28 |
mahatic | or "Account not found" | 19:28 |
mahatic | notmyname, yeah. But last i tried, i did upload a file. Does that error mean something obvious? am i missing something here? | 19:29 |
notmyname | mahatic: how'd you set up your swift cluster? is it following the SAIO instructions? using something else? | 19:30 |
mahatic | notmyname, SAIO instructions | 19:30 |
notmyname | ok | 19:31 |
*** zohar has joined #openstack-swift | 19:31 | |
zohar | Hi all! | 19:32 |
mahatic | notmyname, might be very basic, but startmain script has to be run only initially? | 19:32 |
zohar | I have a newbie question please | 19:33 |
notmyname | mahatic: that basically "turns on" the main parts of swift. if you ever stop them, you'd run start main again | 19:33 |
notmyname | mahatic: eg I run `resetswift` and `startmain` a dozen times a day | 19:33 |
mahatic | notmyname, and that gives me: "IOError: [Errno 13] Permission denied: '/var/run/swift/proxy-server.pid' " | 19:33 |
notmyname | zohar: don't ask to ask! just ask! :-) | 19:33 |
mahatic | notmyname, correct. that's what i was looking at it as | 19:34 |
zohar | just generally, if i clone github.com/openstack/swift and i just want to run the .functests there, on a single node setup, what preparation do i need? | 19:34 |
notmyname | zohar: you can run the functional tests against any swift endpoint, including those from public providers | 19:34 |
zohar | yes that makes sense | 19:35 |
zohar | so say the endpoint exists already | 19:35 |
notmyname | zohar: you've got your own clone of the repo and want to install it and run functests? | 19:35 |
zohar | but nothing else does | 19:35 |
zohar | yes basically | 19:35 |
notmyname | zohar: oh, so you've already got a swift cluster somewhere else? | 19:35 |
notmyname | zohar: I guess I'm wondering if you're trying to change swift and test it or just test some other swift endpoint | 19:35 |
zohar | yes, but not exactly | 19:36 |
zohar | yes | 19:36 |
zohar | testing changed swift on some endpoint | 19:36 |
zohar | all that endpoint has is swift-like object storage | 19:36 |
notmyname | zohar: oh | 19:36 |
zohar | i want to create everything else necessary to run those tests | 19:36 |
mahatic | notmyname, I will have to be on branch "master"? | 19:37 |
notmyname | zohar: the functests use the test.conf https://github.com/openstack/swift/blob/master/test/sample.conf | 19:37 |
*** tkay has joined #openstack-swift | 19:37 | |
notmyname | zohar: let me see where that comes from | 19:38 |
zohar | ok | 19:39 |
notmyname | zohar: it's loaded from the value set in SWIFT_TEST_CONFIG_FILE env var or from /etc/swift/test.conf | 19:40 |
notmyname | zohar: so make a test config file and reference it when calling ./.functests in the root of the source tree | 19:41 |
notmyname | zohar: that should be it | 19:41 |
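(A minimal sketch of such a test.conf for a v1-auth SAIO-style endpoint; the key names follow the sample.conf linked above, but check test/sample.conf in your tree for the authoritative list and the extra accounts some tests expect. Then point the tests at it, e.g. `SWIFT_TEST_CONFIG_FILE=/etc/swift/test.conf ./.functests` from the top of the source tree.)

```ini
# /etc/swift/test.conf (sketch -- values are the usual SAIO defaults)
[func_test]
auth_host = 127.0.0.1
auth_port = 8080
auth_ssl = no
auth_prefix = /auth/
# v1-style credentials; a keystone-style setup points auth_* at keystone instead
account = test
username = tester
password = testing
```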
notmyname | mahatic: shouldn't matter. | 19:41 |
zohar | so on a fresh ubuntu14, i clone this repo, install keystone, i export some env variables like: OS_USERNAME,TENANT,PASSWORD, and OS_AUTH_URL=http://127.0.0.1:35357/v2.0 and then try keystone tenant-list, im obviously missing some stuff, i would like to know what | 19:42 |
notmyname | zohar: is your "swift" endpoint using keystone creds or keystone-style auth? | 19:42 |
zohar | keystone-style auth | 19:43 |
zohar | but im not even at the point of reaching that endpoint yet | 19:43 |
zohar | i just want a most minimal "openstack" distro on this node, which I will point to my "swift-like" node | 19:43 |
zohar | by creating a service and endpoint | 19:44 |
zohar | but im not even at that step yet | 19:44 |
zohar | will of course add my local tenants to the remote "swift" too, once i am able to create tenants here | 19:45 |
notmyname | zohar: you don't need anything locally. ie where you are running the test from | 19:46 |
zohar | oh really | 19:46 |
notmyname | zohar: you're only adding creds into the config file. those will need to be valid in the target swift cluster | 19:46 |
notmyname | zohar: which also means that if you're using some non-keystone auth system, that's ok as long as it understands the v1, v2, v3 auth protocols | 19:47 |
*** annegent_ has quit IRC | 19:48 | |
notmyname | zohar: eg I've got customers not using keystone but using swift with ldap or ad. so the creds would match their ldap creds. and the swift cluster knows how to translate it (which it does with my company's product) | 19:49 |
notmyname | zohar: that's only an example. I don't want to add complexity to my answers and confuse anything :-) | 19:49 |
zohar | yes thank you :) | 19:50 |
zohar | so im trying to recreate this test setup made by another person | 19:50 |
zohar | in their setup, keystone is functioning on the test node | 19:50 |
zohar | it allows me to keystone list-tenants etc.. | 19:50 |
notmyname | zohar: so how do you do auth in your "swift" cluster? can you pass in the X-Auth-User and X-Auth-Key headers and get back an X-Storage-Url endpoint? (that's the v1 style) | 19:50 |
notmyname | ah ok | 19:50 |
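For reference, the v1-style handshake notmyname is asking about looks roughly like this against a hypothetical tempauth-style endpoint (host, account, and credentials are placeholders; /auth/v1.0 is the tempauth default path):

    # exchange user/key for a token and a storage URL
    curl -i http://swift.example.com:8080/auth/v1.0 \
         -H 'X-Auth-User: test:tester' \
         -H 'X-Auth-Key: testing'
    # a v1-capable cluster answers with X-Storage-Url and X-Auth-Token headers;
    # later requests go to that X-Storage-Url with the X-Auth-Token header set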
mahatic | notmyname, okay. Any guesses about the IOError? -> "IOError: [Errno 13] Permission denied: '/var/run/swift/proxy-server.pid' " | 19:51 |
zohar | using normal keystone | 19:51 |
notmyname | mahatic: yeah. permissions issues :-) | 19:51 |
mahatic | notmyname, there is no proxy-server.pid there | 19:51 |
zohar | so on the cluster end, we have normal keystone tenants too | 19:51 |
mahatic | notmyname, :) I did try as root. But i don't see any proxy-server.pid in that path. Am i not supposed to? | 19:52 |
notmyname | mahatic: if it's not running, there shouldn't be anything there | 19:52 |
mahatic | notmyname, okay | 19:53 |
*** tkay has quit IRC | 19:53 | |
openstackgerrit | paul luse proposed a change to openstack/swift: Merge master to feature/ec https://review.openstack.org/124503 | 19:53 |
zohar | basically, what i want is to make a service endpoint on my test node pointing to the cluster, and add relevant auth tenants to it too | 19:54 |
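What zohar wants here is roughly the standard keystone v2 registration of an external object-store plus a tenant/user to test with; the sketch below uses the old `keystone` CLI of that era with placeholder names and URLs (check `keystone help` for the exact flags in your version, and note it assumes an existing `admin` role):

    # register the remote swift-like cluster as the object-store service
    keystone service-create --name swift --type object-store \
        --description "external object storage"
    keystone endpoint-create \
        --service-id $(keystone service-list | awk '/ object-store / {print $2}') \
        --publicurl 'http://CLUSTER_HOST:8080/v1/AUTH_%(tenant_id)s' \
        --internalurl 'http://CLUSTER_HOST:8080/v1/AUTH_%(tenant_id)s' \
        --adminurl 'http://CLUSTER_HOST:8080/v1'

    # a tenant/user pair that the functional tests can use
    keystone tenant-create --name test
    keystone user-create --name tester --pass testing
    keystone user-role-add --user tester --tenant test --role admin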
*** zaitcev has joined #openstack-swift | 19:54 | |
*** ChanServ sets mode: +v zaitcev | 19:54 | |
notmyname | zohar: you want to add creds to your keystone instance which point to your swift cluster. then you add those creds to the test.conf file | 19:54 |
notmyname | right? | 19:55 |
*** kyles_ne has joined #openstack-swift | 19:55 | |
zohar | honestly i have no idea about the test.conf file | 19:55 |
zohar | i dont think it was even used (at least that sample.conf one) on the other setup im trying to recreate | 19:55 |
notmyname | zohar: it's used for running the functional tests that are part of the swift codebase | 19:56 |
zohar | hmm well i think the default values are fine | 19:57 |
zohar | the auth_host should be the test node | 19:57 |
notmyname | I've got to get on a phone call | 19:58 |
notmyname | zohar: auth_host points to whatever the auth (keystone) endpoint is | 19:59 |
notmyname | I don't know what your test node is | 19:59 |
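Putting those answers together, a keystone-flavoured test config might look roughly like this; the section and field names follow the sample.conf linked earlier, and every value is a placeholder for your own deployment:

    cat > /etc/swift/test.conf <<'EOF'
    [func_test]
    # keystone-style auth: auth_host is wherever keystone runs (the test node here)
    auth_version = 2
    auth_host = TEST_NODE_IP
    auth_port = 5000
    auth_ssl = no
    auth_prefix = /v2.0/

    # credentials that keystone (and therefore the remote cluster) will accept
    account = test
    username = tester
    password = testing
    EOF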
zohar | yes | 19:59 |
zohar | ok | 19:59 |
zohar | so on the remote "cluster" i only have basic object storage | 19:59 |
zohar | nothing else | 19:59 |
zohar | i want everything else (including keystone) on the test node | 19:59 |
zohar | everything else that is necessary for running whatever tests between this node and the storage cluster through swift api | 20:00 |
notmyname | zohar: ok, that makes sense | 20:00 |
notmyname | zohar: what's required depends on what your auth integration is in your swift cluster | 20:00 |
zohar | should be /v2.0/ | 20:00 |
notmyname | zohar: if it's keystone, then run keystone on your test node | 20:00 |
notmyname | zohar: right, so it sounds like you're on the right path | 20:01 |
zohar | ive installed python-keystoneclient on the test node | 20:01 |
zohar | im supposing i need a server | 20:01 |
notmyname | yeah | 20:01 |
*** exploreshaifali has quit IRC | 20:01 | |
zohar | oh so that was my mistake | 20:01 |
zohar | anything besides keystone that i need? | 20:02 |
zohar | (ive totally missed it, did apt-get install keystone, and just went with the first package-name recommendation it gave me...) | 20:02 |
notmyname | ...phone call... | 20:03 |
zohar | no problem thanks for the help man, dont let me keep you from your call | 20:03 |
zohar | also yeah i got the apt package name for the keystone client from trying to run keystone and getting a recommended package to install, while actually FORGETTING to install keystone itself... | 20:04 |
*** annegent_ has joined #openstack-swift | 20:04 | |
zohar | now i have this process running: /usr/bin/python /usr/bin/keystone-all | 20:05 |
*** kenhui has quit IRC | 20:05 | |
zohar | keystone tenant-list returns: The request you have made requires authentication. (HTTP 401) | 20:06 |
zohar | i have: OS_AUTH_URL=http://127.0.0.1:35357/v2.0/ | 20:06 |
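A 401 at this point is expected when no users exist yet. With the keystone CLI of that era there were two usual ways in, sketched below with placeholder values: the bootstrap service token (which must match admin_token in /etc/keystone/keystone.conf, with keystone-all actually listening on 35357), or ordinary credentials once they exist:

    # bootstrap route: use the admin/service token to create the first tenants and users
    export OS_SERVICE_TOKEN=ADMIN_TOKEN_FROM_KEYSTONE_CONF
    export OS_SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0
    keystone tenant-list

    # once real users exist, switch to normal credentials
    unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
    export OS_USERNAME=admin OS_PASSWORD=ADMIN_PASS
    export OS_TENANT_NAME=admin OS_AUTH_URL=http://127.0.0.1:35357/v2.0
    keystone tenant-list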
*** mrsnivvel has joined #openstack-swift | 20:07 | |
*** jergerber has joined #openstack-swift | 20:07 | |
mahatic | notmyname, now that the permissions are alright, startmain is throwing this: "Exception: Could not bind to 0.0.0.0:8080 after trying for 30 seconds" | 20:07 |
zohar | i think i need an apache server running too, ive updated the /etc/keystone/keystone.conf and now keystone command is returning: Authorization Failed: Unable to establish connection to http://127.0.0.1:35357/v2.0/tokens | 20:11 |
zohar | what specifically do i need to install for this? | 20:12 |
NM1 | mahatic: there should be another app running on the same port | 20:12 |
NM1 | mahatic: Are you running linux? | 20:12 |
mahatic | NM1, yeah | 20:13 |
mahatic | NM1, netstat -plnt -> http://paste.openstack.org/show/115702/ | 20:14 |
NM1 | mahatic: try running netstat -nlpt|grep 8080 and see if any application is running there | 20:14 |
mahatic | python i guess | 20:14 |
NM1 | mahatic: ps -ef|grep 27691 | 20:15 |
mahatic | NM1, http://paste.openstack.org/show/115703/ | 20:16 |
NM1 | It seems your proxy server is already running | 20:16 |
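The check NM1 is walking through condenses to two commands (27691 here is just the PID from mahatic's paste; substitute whatever your own netstat prints):

    # who is listening on 8080?
    netstat -nlpt | grep 8080

    # inspect the owning process
    ps -ef | grep 27691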
mahatic | NM1, but i did do a resetswift and then did startmain | 20:17 |
mahatic | NM1, oh and also, now it is showing because i didnt do a resetswift yet after running startmain | 20:18 |
mahatic | my startmain output looks like this: http://paste.openstack.org/show/115704/ | 20:18 |
NM1 | Can you share the 'stop' command? | 20:20 |
*** aix has quit IRC | 20:21 | |
*** annegent_ has quit IRC | 20:26 | |
mahatic | NM1, stop command? | 20:27 |
*** annegent_ has joined #openstack-swift | 20:27 | |
NM1 | mahatic: it looks like you proxy is not stopping. | 20:27 |
NM1 | mahatic: *your | 20:27 |
mahatic | NM1, okay. im not aware of a stop command. can you please tell me the command? | 20:28 |
notmyname | `swift-init all stop` | 20:28 |
notmyname | or just `swift-init proxy stop` | 20:28 |
mahatic | for the latter, "no proxy-server running" | 20:29 |
mahatic | maybe i could just allocate another port to proxy? other than 8080? | 20:30 |
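If you do take the change-the-port route, it is only the bind_port setting in the proxy config; a sketch assuming the stock SAIO file with an explicit bind_port line:

    # /etc/swift/proxy-server.conf, [DEFAULT] section: bind_port = 8080 -> 8081
    sudo sed -i 's/^bind_port = 8080/bind_port = 8081/' /etc/swift/proxy-server.conf

    # restart the proxy; anything that talks to it (clients, test.conf) now needs the new port
    swift-init proxy restart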
notmyname | phone call done, but now I've got to run some errands | 20:30 |
notmyname | mahatic: https://github.com/swiftstack/vagrant-swift-all-in-one is a very simple way to get a SAIO up and running | 20:30 |
notmyname | mahatic: yes, it's important to figure out why you are having permissions/port issues, but using that will get you going quickly | 20:31 |
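For completeness, using that repo is the usual Vagrant workflow (a sketch only; the repo's README covers prerequisites such as Vagrant and VirtualBox):

    git clone https://github.com/swiftstack/vagrant-swift-all-in-one.git
    cd vagrant-swift-all-in-one
    vagrant up     # builds a VM with a working SAIO inside
    vagrant ssh    # log in to that SAIO and work there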
mahatic | notmyname, alright | 20:31 |
notmyname | zohar: you probably won't find too many hints in this channel about how to get keystone up and running (many of us don't use keystone day-to-day). #openstack-keystone would be a good place to start. | 20:32 |
openstackgerrit | Doug Hellmann proposed a change to openstack/swift-specs: Use the current date for the copyright statement https://review.openstack.org/120546 | 20:32 |
openstackgerrit | Doug Hellmann proposed a change to openstack/swift-specs: Remove templates from toctrees https://review.openstack.org/120545 | 20:32 |
openstackgerrit | Doug Hellmann proposed a change to openstack/swift-specs: Add RSS feed https://review.openstack.org/120544 | 20:32 |
notmyname | zohar: however, once you have it configured, then this is definitely the place to ask about getting the functests running against your endpoint | 20:32 |
zohar | notmyname, thank you very much! | 20:32 |
mahatic | notmyname, sure. I will definitely do that. Will try about the port a lil more and then jump onto that | 20:32 |
*** morganfainberg is now known as morgan | 20:36 | |
*** morgan is now known as morganfainberg | 20:37 | |
mahatic | NM1, http://paste.openstack.org/show/115719/ -> can i not just assign that python process (the proxy) some other port? would it be any problem? | 20:43 |
smart_developer | Hi everyone, quick question : For what functions does Swift utilize memcached ? | 20:46 |
smart_developer | For everything you can think of, or for only certain parts ? | 20:46 |
NM1 | mahatic: Is it a test server? | 20:48 |
*** annegent_ has quit IRC | 20:49 | |
mahatic | NM1, I just changed the port for proxy | 20:49 |
mahatic | it seems to be working alright | 20:49 |
NM1 | mahatic: ok. I'm just afraid you'll have to do that again. | 20:50 |
mahatic | NM1, http://paste.openstack.org/show/115725/ that's the output | 20:50 |
mahatic | NM1, again? why would that be? | 20:50 |
*** tdasilva has quit IRC | 20:50 | |
NM1 | mahatic: What command are you using to start swift? | 20:52 |
mahatic | i did a startmain | 20:52 |
mahatic | the script | 20:52 |
NM1 | mahatic: do you have something like restartmain ? | 20:53 |
mahatic | NM1, there is a reset | 20:53 |
mahatic | resetswift | 20:53 |
mahatic | NM1, oh you mean, everytime i reset, it would go back to port 8080? | 20:53 |
mahatic | NM1, i just did that. it didn't happen. | 20:54 |
NM1 | mahatic: Humm. I'm just wondering why the proxy server running on port 8080 didn't stop. | 20:55 |
mahatic | NM1, it indicated there was no proxy-server running when i gave this command: 'swift-init proxy stop' | 20:56 |
mahatic | NM1, i'm thinking it is only because the port was occupied | 20:56 |
NM1 | mahatic: I think it didn't find the pid file. I'd kill the process using kill PID | 20:57 |
mahatic | NM1, oh. but notmyname was suggesting that there wouldn't be a pid file if there was no proxy-server running | 21:00 |
NM1 | mahatic: Yes but in your case you have a proxy-server running. | 21:02 |
mahatic | NM1, no right, 'swift-init proxy stop' gave 'no proxy-server running' output | 21:03 |
mahatic | NM1, I actually went to the folder and checked. The PID file was not there. Now it is there. | 21:04 |
NM1 | mahatic: for me it looks like the proxy server started and couldn't right the pid file for some reason. Without the pidfile swift-init can't stop the process. However, it was still running. | 21:05 |
NM1 | mahatic: According to this http://paste.openstack.org/show/115703/, it looks to me like it was running. | 21:06 |
NM1 | right = write | 21:08 |
*** mahatic_ has joined #openstack-swift | 21:08 | |
mahatic_ | NM1, oh. hmm | 21:09 |
NM1 | mahatic: if it's running and ok, nevermind :) | 21:10 |
*** mahatic has quit IRC | 21:10 | |
*** tkay has joined #openstack-swift | 21:13 | |
mahatic_ | NM1, yeah. But i would also want to know how to stop a process by its pid. And does it affect anything if i use a diff port? | 21:13 |
*** tkay has quit IRC | 21:15 | |
NM1 | mahatic_: If the process doesn't have a pid file you can just run kill pid_number | 21:18 |
NM1 | mahatic_: if that doesn't work you can try kill -9 pid_number | 21:18 |
mahatic_ | NM1, ah okay. | 21:19 |
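Tying the last few exchanges together, a minimal sketch for clearing out a proxy that swift-init has lost track of (27691 is the PID from the earlier paste; use whatever netstat reports for you):

    # find the stray process holding port 8080
    netstat -nlpt | grep 8080

    PID=27691          # replace with the PID netstat printed
    kill "$PID"        # polite TERM first
    sleep 2
    kill -9 "$PID"     # only if it is still alive

    # then start the proxy again the normal way
    swift-init proxy start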
*** pberis has joined #openstack-swift | 21:37 | |
*** shri has joined #openstack-swift | 21:40 | |
*** swat30 has quit IRC | 21:40 | |
*** thurloat has quit IRC | 21:42 | |
*** thurloat has joined #openstack-swift | 21:44 | |
*** swat30 has joined #openstack-swift | 21:46 | |
*** swat30 has quit IRC | 21:54 | |
*** thurloat has quit IRC | 21:55 | |
*** tkay has joined #openstack-swift | 21:55 | |
*** swat30 has joined #openstack-swift | 22:01 | |
*** tkay has quit IRC | 22:02 | |
*** thurloat has joined #openstack-swift | 22:02 | |
*** imkarrer has quit IRC | 22:02 | |
*** imkarrer has joined #openstack-swift | 22:02 | |
*** jergerber has quit IRC | 22:13 | |
*** IRTermite has quit IRC | 22:13 | |
*** NM1 has quit IRC | 22:14 | |
*** kyles_ne has quit IRC | 22:16 | |
*** kyles_ne has joined #openstack-swift | 22:17 | |
*** thurloat has quit IRC | 22:17 | |
*** thurloat has joined #openstack-swift | 22:18 | |
*** dmsimard is now known as dmsimard_away | 22:18 | |
openstackgerrit | A change was merged to openstack/python-swiftclient: Remove a debugging print statement https://review.openstack.org/123975 | 22:35 |
*** NM1 has joined #openstack-swift | 22:40 | |
*** lcurtis has quit IRC | 22:43 | |
clayg | ^ oh no i needed that!? | 22:47 |
*** kyles_ne has quit IRC | 22:47 | |
clayg | oh nm, that was a different debug print statement | 22:48 |
*** kyles_ne has joined #openstack-swift | 22:48 | |
mattoliverau | Lol, you can always add more :p | 22:57 |
morganfainberg | notmyname or anyone else: https://bugs.launchpad.net/keystone/+bug/1299146 this doesn't look like we have anything to do on the keystone side. | 23:05 |
morganfainberg | am I correct? | 23:06 |
*** bill_az has joined #openstack-swift | 23:13 | |
*** kyles_ne has quit IRC | 23:23 | |
*** kyles_ne has joined #openstack-swift | 23:24 | |
briancline | is there currently a way at the proxy level to put a max time limit on keepalives? | 23:28 |
* briancline dug around quite a bit but didn't see anything code/doc wise | 23:29 | |
*** kyles_ne has quit IRC | 23:31 | |
*** kyles_ne has joined #openstack-swift | 23:31 | |
*** NM1 has quit IRC | 23:37 | |
*** bill_az has quit IRC | 23:40 | |
notmyname | morganfainberg: I think you can drop keystone from that. IIRC the only questions were around the guarantees of uniqueness for certain names and ids. https://bugs.launchpad.net/keystone/+bug/1299146/comments/10 | 23:49 |
morganfainberg | notmyname, cool ty | 23:49 |
*** jeblair is now known as corvus | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!