Tuesday, 2015-11-17

*** emagana has quit IRC00:00
*** laron has joined #openstack-operators00:06
*** VW has joined #openstack-operators00:08
*** krobzaur has joined #openstack-operators00:09
*** elo has joined #openstack-operators00:09
*** laron has quit IRC00:10
*** lhcheng_ has joined #openstack-operators00:13
*** lhcheng has quit IRC00:14
*** VW has quit IRC00:14
<mgagne> So I'm reading about federation. As a public cloud provider, how can I benefit from it? As an SP (service provider), I need to map users from a trusted IdP (identity provider) to a local Keystone group and assign roles to it.  00:22
<mgagne> so far, is my understanding right?  00:23
*** sanjayu has quit IRC00:55
*** stanchan has quit IRC01:08
*** VW has joined #openstack-operators01:15
*** VW has quit IRC01:19
*** kencjohnston has joined #openstack-operators01:26
*** elo has quit IRC01:34
*** SimonChung has quit IRC01:35
*** markvoelker has joined #openstack-operators01:37
*** shimura_ken has joined #openstack-operators01:42
*** krobzaur has quit IRC01:43
*** gyee has quit IRC01:55
*** mriedem_away is now known as mriedem01:59
*** jmccrory has joined #openstack-operators02:05
*** fawadkhaliq has joined #openstack-operators02:05
*** mriedem has quit IRC02:07
*** alejandrito has joined #openstack-operators02:08
*** alejandrito has quit IRC02:08
*** alejandrito has joined #openstack-operators02:08
<lhcheng_> mgagne: yes, the mapping is made to the local keystone group.  02:09
<lhcheng_> mgagne: one use case for a cloud provider is to allow support people to federate into their customers' clouds  02:10
<lhcheng_> so they can have access to troubleshoot issues  02:10
<mgagne> lhcheng_: it raises more questions than it answers. Doesn't it mean I would have to create a mapping for each customer so one doesn't end up being granted access to all projects?  02:10
<mgagne> lhcheng_: I mean, unless it's self-served, I don't see how a public cloud benefits from it without huge operational overhead.  02:11
*** krobzaur has joined #openstack-operators02:12
<lhcheng_> mgagne: I haven't had much experience with the mapping engine; perhaps you can ask in the keystone room. But I get your point, it does involve a lot of work to map each user.  02:14
<mgagne> lhcheng_: thanks, I'm starting to understand  02:14
<lhcheng_> maybe you could create a mapping of something like %username% to %username%-group  02:15
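A hedged sketch of the per-user rule lhcheng_ suggests, in the shape of a Keystone federation mapping: the asserted username is mapped into a group named "<username>-group". The assertion attribute (`REMOTE_USER`) and the `Default` domain are assumptions; adjust both to match your IdP and Keystone deployment.

```python
import json

# Illustrative Keystone federation mapping (not a tested production rule):
# "{0}" substitutes the first matched remote attribute, here REMOTE_USER.
mapping = {
    "rules": [
        {
            "local": [
                {"user": {"name": "{0}"}},
                # Per-user group, so roles can be scoped per customer.
                {"group": {"name": "{0}-group", "domain": {"name": "Default"}}},
            ],
            "remote": [{"type": "REMOTE_USER"}],
        }
    ]
}

print(json.dumps(mapping, indent=2))
```

The operational cost mgagne worries about is exactly the need to pre-create each `<username>-group` and its role assignments, unless that step is automated or self-served.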
*** shimura_ken has quit IRC02:16
*** krobzaur has quit IRC02:20
*** alejandrito has quit IRC02:21
*** armax has joined #openstack-operators02:24
*** alejandrito has joined #openstack-operators02:33
*** bapalm has quit IRC03:06
*** harshs has quit IRC03:06
*** harshs has joined #openstack-operators03:16
*** VW has joined #openstack-operators03:17
*** albertom has quit IRC03:18
*** VW has quit IRC03:22
*** albertom has joined #openstack-operators03:23
*** lhcheng_ has quit IRC03:23
*** Marga__ has quit IRC03:24
*** Marga_ has joined #openstack-operators03:35
*** Marga_ has quit IRC03:36
*** harshs has quit IRC03:46
*** VW has joined #openstack-operators03:51
*** VW has quit IRC03:53
*** VW has joined #openstack-operators03:54
*** kencjohnston has quit IRC03:59
*** VW has quit IRC04:07
*** alejandrito has quit IRC04:16
*** lhcheng has joined #openstack-operators04:23
*** laron has joined #openstack-operators04:30
*** laron has quit IRC04:34
*** zhangjn has joined #openstack-operators04:35
*** zhangjn has quit IRC04:45
*** harshs has joined #openstack-operators04:46
*** lhcheng has quit IRC04:48
*** SimonChung has joined #openstack-operators04:49
*** SimonChung1 has joined #openstack-operators04:51
*** homegrown_ has joined #openstack-operators04:53
*** SimonChung has quit IRC04:54
*** baffle has joined #openstack-operators04:55
*** __nick has joined #openstack-operators04:57
*** cfarquhar_ has joined #openstack-operators04:58
*** wasmum- has joined #openstack-operators04:59
*** ruagair_ has joined #openstack-operators04:59
*** homegrown has quit IRC05:00
*** serverascode has quit IRC05:00
*** _nick has quit IRC05:00
*** codebauss has quit IRC05:00
*** baffle_ has quit IRC05:00
*** ruagair has quit IRC05:00
*** mgagne has quit IRC05:00
*** sbadia has quit IRC05:00
*** wasmum has quit IRC05:00
*** cfarquhar has quit IRC05:00
*** notmyname has quit IRC05:00
*** ruagair_ is now known as ruagair05:00
*** mgagne has joined #openstack-operators05:00
*** elo has joined #openstack-operators05:00
*** elo has quit IRC05:02
*** codebauss has joined #openstack-operators05:03
*** notmyname has joined #openstack-operators05:03
*** serverascode has joined #openstack-operators05:05
*** sbadia has joined #openstack-operators05:06
*** Marga_ has joined #openstack-operators05:10
*** Marga_ has quit IRC05:11
*** Marga_ has joined #openstack-operators05:11
*** B_Smith__ has joined #openstack-operators05:21
*** lhcheng has joined #openstack-operators05:22
*** B_Smith_ has quit IRC05:25
*** elo has joined #openstack-operators05:34
*** elo has quit IRC05:35
*** jmccrory has quit IRC05:48
*** jmccrory has joined #openstack-operators05:51
*** lhcheng has quit IRC05:54
*** aswadr has joined #openstack-operators05:54
*** lhcheng has joined #openstack-operators05:55
*** sanjayu has joined #openstack-operators05:57
*** harshs has quit IRC06:11
*** rcernin has joined #openstack-operators06:22
*** aswadr has quit IRC06:55
*** maishsk has quit IRC07:10
*** bvandenh has joined #openstack-operators08:01
*** maishsk has joined #openstack-operators08:01
*** matrohon has joined #openstack-operators08:15
*** bvandenh has quit IRC08:22
*** egonzalez has joined #openstack-operators08:36
*** liverpooler has joined #openstack-operators08:38
*** bvandenh has joined #openstack-operators08:40
*** liverpoo1er has joined #openstack-operators08:41
*** liverpooler has quit IRC08:43
*** egonzalez has quit IRC08:46
*** egonzalez90 has joined #openstack-operators08:47
*** elemoine has joined #openstack-operators08:48
*** elemoine has quit IRC08:53
*** elemoine has joined #openstack-operators08:54
*** elemoine has quit IRC08:57
*** elemoine has joined #openstack-operators08:57
*** bvandenh has quit IRC08:58
*** elemoine has quit IRC08:59
*** lhcheng has quit IRC09:00
*** elemoine has joined #openstack-operators09:01
*** Marga_ has quit IRC09:07
*** maishsk has quit IRC09:08
*** liverpoo1er has quit IRC09:11
*** elemoine has quit IRC09:11
*** liverpooler has joined #openstack-operators09:11
*** elemoine has joined #openstack-operators09:12
*** Marga_ has joined #openstack-operators09:16
*** cbrown2_ocf has joined #openstack-operators09:22
*** openstackgerrit has quit IRC09:31
*** openstackgerrit has joined #openstack-operators09:32
*** electrofelix has joined #openstack-operators09:32
*** sleinen-AS559 has joined #openstack-operators09:41
*** esker has joined #openstack-operators09:49
*** Marga_ has quit IRC09:50
*** belmoreira has joined #openstack-operators09:52
*** esker has quit IRC09:54
*** esker has joined #openstack-operators09:55
*** esker has quit IRC09:55
*** markvoelker has quit IRC10:05
*** cbrown2ocf has joined #openstack-operators10:16
*** egonzalez90 has quit IRC10:18
*** cbrown2_ocf has quit IRC10:19
*** cbrown2ocf is now known as cbrown2_ocf10:19
*** egonzalez has joined #openstack-operators10:20
*** subscope has joined #openstack-operators10:30
*** openstackgerrit has quit IRC10:31
*** openstackgerrit has joined #openstack-operators10:32
*** lhcheng has joined #openstack-operators10:48
*** subscope has quit IRC10:50
*** lhcheng has quit IRC10:54
*** markvoelker has joined #openstack-operators11:06
*** markvoelker has quit IRC11:11
*** zhangjn has joined #openstack-operators11:12
*** zhangjn has quit IRC11:12
*** zhangjn has joined #openstack-operators11:13
*** zhangjn has quit IRC11:14
*** zhangjn has joined #openstack-operators11:14
*** zhangjn has quit IRC11:15
*** zhangjn has joined #openstack-operators11:15
*** zhangjn has quit IRC11:15
*** zhangjn has joined #openstack-operators11:16
*** zhangjn has quit IRC11:16
*** zhangjn has joined #openstack-operators11:17
*** zhangjn has quit IRC11:18
*** zhangjn has joined #openstack-operators11:18
*** zhangjn has quit IRC11:19
*** zhangjn has joined #openstack-operators11:20
*** zhangjn has quit IRC11:20
*** zhangjn has joined #openstack-operators11:21
*** berendt has joined #openstack-operators12:21
*** elemoine_ has joined #openstack-operators12:24
*** elemoine_ has left #openstack-operators12:25
*** elemoine_ has joined #openstack-operators12:25
*** fawadkhaliq has quit IRC12:26
*** fawadkhaliq has joined #openstack-operators12:27
*** elemoine has quit IRC12:30
*** fawadkhaliq has quit IRC12:31
*** markvoelker has joined #openstack-operators12:37
*** lhcheng has joined #openstack-operators12:37
*** sleinen-AS5591 has joined #openstack-operators12:41
*** sleinen-AS559 has quit IRC12:42
*** markvoelker has quit IRC12:42
*** lhcheng has quit IRC12:42
*** alejandrito has joined #openstack-operators12:42
*** maishsk has joined #openstack-operators12:54
*** bvandenh has joined #openstack-operators12:58
*** lhcheng has joined #openstack-operators13:02
*** lhcheng has quit IRC13:07
*** csoukup has joined #openstack-operators13:11
*** signed8bit has joined #openstack-operators13:17
*** __nick is now known as _nick13:17
*** markvoelker has joined #openstack-operators13:27
*** csoukup has quit IRC13:31
*** elemoine_ has quit IRC13:33
*** elemoine has joined #openstack-operators13:34
*** regXboi has joined #openstack-operators13:34
*** cbrown2_ocf has quit IRC13:49
*** cbrown2_ocf has joined #openstack-operators13:51
*** cbrown2ocf has joined #openstack-operators13:52
*** cbrown2_ocf has quit IRC13:55
*** cbrown2ocf is now known as cbrown2_ocf13:55
*** signed8bit is now known as signed8bit_ZZZzz13:56
*** signed8bit_ZZZzz is now known as signed8bit14:00
*** mdorman has joined #openstack-operators14:09
*** signed8bit is now known as signed8bit_ZZZzz14:19
*** Marga_ has joined #openstack-operators14:19
*** laron has joined #openstack-operators14:20
*** signed8bit_ZZZzz is now known as signed8bit14:21
*** laron has quit IRC14:22
*** laron has joined #openstack-operators14:22
*** sanjayu has quit IRC14:26
*** kencjohnston has joined #openstack-operators14:27
*** liverpooler has quit IRC14:29
*** krobzaur has joined #openstack-operators14:32
*** mriedem has joined #openstack-operators14:32
*** nikhil_k is now known as nikhil14:39
*** krobzaur_ has joined #openstack-operators14:40
*** krobzaur has quit IRC14:43
*** krobzaur_ has quit IRC14:45
*** fawadkhaliq has joined #openstack-operators14:48
*** Marga_ has quit IRC14:52
*** mriedem has quit IRC14:55
*** dminer has joined #openstack-operators14:57
*** laron has quit IRC14:57
*** zhangjn has quit IRC14:59
*** mriedem has joined #openstack-operators15:03
*** laron has joined #openstack-operators15:04
*** ccarmack has joined #openstack-operators15:07
*** ccarmack has left #openstack-operators15:07
*** VW has joined #openstack-operators15:14
*** VW has quit IRC15:15
*** VW has joined #openstack-operators15:16
*** Guest80875 is now known as mfisch15:16
*** mfisch is now known as Guest2059415:17
*** Guest20594 is now known as mfisch15:18
*** mfisch has quit IRC15:18
*** mfisch has joined #openstack-operators15:18
*** elemoine has quit IRC15:27
*** elemoine has joined #openstack-operators15:30
*** krobzaur_ has joined #openstack-operators15:36
*** elo has joined #openstack-operators15:39
*** armax has quit IRC15:49
*** Piet has quit IRC15:50
*** britthou_ has joined #openstack-operators15:51
*** VW has quit IRC15:53
*** britthouser has quit IRC15:53
*** krobzaur_ has quit IRC15:54
*** krobzaur_ has joined #openstack-operators16:03
*** bvandenh has quit IRC16:03
*** rcernin has quit IRC16:04
*** mriedem has quit IRC16:06
*** SimonChung1 has quit IRC16:09
*** mriedem has joined #openstack-operators16:11
*** belmoreira has quit IRC16:13
*** sleinen-AS559 has joined #openstack-operators16:14
*** sleinen-AS5591 has quit IRC16:14
<krobzaur_> Anybody here have experience with Fuel 6.1? I have a question about deploying the influxdb-grafana and elasticsearch-kibana plugins  16:19
*** armax has joined #openstack-operators16:28
*** kencjohnston has quit IRC16:29
*** kencjohnston has joined #openstack-operators16:30
*** Piet has joined #openstack-operators16:31
*** armax has quit IRC16:37
*** laron has quit IRC16:39
*** armax has joined #openstack-operators16:40
*** VW has joined #openstack-operators16:45
*** kencjohnston has quit IRC16:45
*** VW has quit IRC16:47
*** VW has joined #openstack-operators16:48
<pasquier-s> krobzaur_, I can help  16:48
*** maishsk has quit IRC16:48
*** krobzaur_ has quit IRC16:49
*** swann has joined #openstack-operators16:51
*** elemoine has quit IRC16:53
*** kencjohnston has joined #openstack-operators16:54
*** gyee has joined #openstack-operators16:55
*** elemoine has joined #openstack-operators16:56
*** kencjohnston has quit IRC16:57
*** harshs has joined #openstack-operators16:58
*** mriedem is now known as mriedem_meeting16:59
*** kencjohnston has joined #openstack-operators16:59
*** laron has joined #openstack-operators17:00
*** britthou_ is now known as britthouser17:00
*** laron has quit IRC17:04
*** alop has joined #openstack-operators17:06
*** laron has joined #openstack-operators17:19
*** SimonChung has joined #openstack-operators17:23
*** matrohon has quit IRC17:25
*** elemoine has quit IRC17:28
*** _hanhart has joined #openstack-operators17:29
<mgagne> if anyone is planning to support federation in their public cloud, I would like to hear about it  17:30
*** laron has quit IRC17:31
*** egonzalez has quit IRC17:34
*** rcernin has joined #openstack-operators17:36
*** mriedem_meeting is now known as mriedem17:39
<jlk> mgagne: I think we have some very loose plans around allowing Blue Box private clouds to federate into IBM public cloud, or vice versa. But it's VERY early thoughts, nothing really fleshed out.  17:40
<mgagne> jlk: which cloud will be the SP and which the IdP?  17:41
<jlk> well, I could see it going either way  17:41
<mgagne> jlk: I'm trying to understand the user story around consuming a federation-enabled cloud  17:41
*** laron has joined #openstack-operators17:41
<jlk> a private cloud customer may want to federate into the public cloud, so private owns the accounts  17:41
<jlk> OR the other way, so the accounts really exist in the public cloud, and it federates into the private cloud  17:41
*** Piet has quit IRC17:44
*** jmccrory has quit IRC17:44
*** lhcheng has joined #openstack-operators17:52
*** _hanhart has quit IRC17:54
<mgagne> jlk: I'm interested in the user story: how do they set this up on the public cloud? support ticket? custom control panel? self-service?  17:54
*** markvoelker_ has joined #openstack-operators17:57
*** laron_ has joined #openstack-operators17:57
*** jmccrory has joined #openstack-operators17:58
<jlk> mgagne: we don't know any of that right now  17:58
<mgagne> jlk: alright, same here ^^'  17:59
*** SimonChung has quit IRC17:59
*** baffle has quit IRC17:59
*** kencjohnston has quit IRC17:59
*** laron has quit IRC18:00
*** markvoelker has quit IRC18:00
*** sbadia has quit IRC18:00
*** baffle has joined #openstack-operators18:00
*** sbadia has joined #openstack-operators18:05
*** armax has quit IRC18:06
*** laron_ has quit IRC18:10
*** sanjayu has joined #openstack-operators18:10
*** VW has quit IRC18:12
*** kencjohnston has joined #openstack-operators18:14
<mriedem> jlk: you might be interested in testing this out in pre-prod kilo https://review.openstack.org/#/c/246530/  18:17
<mriedem> related to klindgren and the conductor load issues they are experiencing in kilo  18:18
*** sanjayu has quit IRC18:18
*** dansmith has joined #openstack-operators18:18
<mgagne> hmm  18:18
<mgagne> interesting  18:18
<mgagne> can't a command-line tool be used to set a flag in the DB once migration is completed? (since you can force the migration using the CLI)  18:19
<klindgren> apparently not :-/  18:19
*** VW has joined #openstack-operators18:20
<mgagne> but I guess it's just a change to test the impact of disabling online migration  18:20
<klindgren> asked the same question  18:20
<mgagne> how can someone detect that an online migration is completed?  18:20
*** harshs has quit IRC18:22
<mriedem> mgagne: dansmith: I was going to say we could add a dry-run option to nova-manage db migrate_flavor_data  18:23
<dansmith> mriedem: for what purpose? it tells you how many matched/ran, right?  18:23
<mriedem> dansmith: to not actually do the migration  18:23
<dansmith> right, I know what dry run means.. to what end? :)  18:24
<dansmith> perhaps I'm missing some scrollback?  18:24
<mriedem> (12:20:36 PM) mgagne: how can someone detect that an online migration is completed?  18:24
<dansmith> mriedem: okay, right, my point was that if you try to run and don't get any hits, then you're done  18:25
<dansmith> you might be able to run with --max=0 or something  18:25
<mgagne> mriedem: it was more or less a gateway question to: how can Nova know it no longer needs to try to migrate data and stop doing lookups  18:25
<mriedem> dansmith: I thought about that but don't think it will do anything https://github.com/openstack/nova/blob/stable/kilo/nova/db/sqlalchemy/api.py#L6074  18:25
<mriedem> you'll just get 0 results in your query  18:25
<dansmith> can someone test that code to see if it even addresses the cpu problem?  18:26
*** fawadkhaliq has quit IRC18:26
*** VW has quit IRC18:26
*** armax has joined #openstack-operators18:26
<dansmith> if it does, then we could explore doing things automatically, but it doesn't make sense unless that cuts out enough work to matter  18:26
<mriedem> mgagne: there is a db migration in liberty that forces it  18:26
<mriedem> mgagne: https://github.com/openstack/nova/blob/stable/liberty/nova/db/sqlalchemy/migrate_repo/versions/291_enforce_flavors_migrated.py  18:27
*** liverpooler has joined #openstack-operators18:27
<mriedem> mgagne: so I guess run this query manually https://github.com/openstack/nova/blob/stable/liberty/nova/db/sqlalchemy/migrate_repo/versions/291_enforce_flavors_migrated.py#L24  18:28
<mgagne> I'm more or less making the assumption that flavor migration is causing the CPU load (I might be wrong) and looking for a solution so an operator doesn't have to update a config once they are done with the online migration, but make it automatic  18:28
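The "run until you get no hits" completion test dansmith describes can be sketched generically. `migrate_batch` here stands in for a nova-style online-migration helper returning (matched, migrated) counts per batch; the name and signature are illustrative, not nova's actual API.

```python
def run_until_complete(migrate_batch, batch_size=50):
    """Drive a batched online data migration to completion.

    When a batch matches zero rows, nothing is left to migrate and
    any per-request migration code path could safely be turned off.
    """
    total_migrated = 0
    while True:
        matched, migrated = migrate_batch(batch_size)
        if matched == 0:
            return total_migrated  # no unmigrated rows remain
        total_migrated += migrated

# Toy stand-in: 120 "rows" that all migrate successfully, batch by batch.
def make_fake_batch(rows):
    state = {"left": rows}
    def batch(n):
        take = min(n, state["left"])
        state["left"] -= take
        return take, take
    return batch

print(run_until_complete(make_fake_batch(120)))  # → 120
```

Automating mgagne's wish would mean persisting a "migration complete" flag once this loop returns, rather than requiring an operator to flip a config option.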
*** laron has joined #openstack-operators18:28
<dansmith> mgagne: yes, I understand  18:28
<mgagne> but let's get results first =)  18:28
<dansmith> mgagne: prove that that patch makes it better and then we can talk about making it automatic  18:28
<dansmith> mgagne: you understand this is gone in liberty, right?  18:29
<mgagne> dansmith: I'm not geared to perform an upgrade every 6 months  18:29
<dansmith> we need to balance the benefit of merging something that major into kilo vs. a config option just for one release  18:29
<mgagne> which service uses this code path?  18:32
<dansmith> anything that talks to the db directly  18:32
<mgagne> nova-conductor, or all services which happen to use conductor?  18:33
<dansmith> so if you're using real conductor, that's conductor and api and scheduler  18:33
<dansmith> no, compute is unaffected  18:33
<mgagne> ok, using conductor  18:33
<mgagne> the real one  18:33
<mgagne> applied and restarted nova-conductor. reduced CPU usage by ~0.5-1% (down from 7%). can't say I ever had that problem with CPU usage so far, we tend to throw hardware at problems  18:39
<dansmith> does that mean it was 7% and now it's 6%?  18:41
<mgagne> yea  18:41
<mgagne> ~6.5%  18:41
<dansmith> any idea what you were at when you were on juno? or is the load even comparable?  18:41
<mgagne> didn't try Juno, went straight from icehouse to kilo  18:42
*** mriedem has quit IRC18:42
<dansmith> well, same difference  18:42
<mgagne> no, I don't know tbh  18:42
<mgagne> but klindgren might be the one who should test the change, since he is the one complaining about CPU usage =)  18:42
<dansmith> okay, well at that low it's hard to imagine that it's really measurable for you anyway  18:42
<dansmith> yep  18:42
<klindgren> working on it now...  18:43
<dansmith> klindgren: please be exceedingly careful about deploying that, in case it's not clear  18:43
<mgagne> faster!  18:43
* mgagne cracks the whip  18:43
<dansmith> klindgren: also, you could deploy it to one conductor node and compare  18:43
*** mriedem has joined #openstack-operators18:44
<klindgren> mgagne, I was going to wait for an integration test to actually pass ;-)  18:46
<mgagne> klindgren: :D  18:46
<dansmith> klindgren: mgagne deployed, that's probably a better metric, FWIW  18:46
<mgagne> klindgren: in our case, come back in ~1h30-2h00 :P  18:46
<dansmith> klindgren: unless you mean internal tests  18:47
<dansmith> jenkins won't exercise this code at all  18:47
<mgagne> dansmith: I often have a cowboy hat on my head, you shouldn't trust my tests =)  18:47
<dansmith> mgagne: if you applied, set the config, then at least you're running it  18:47
<klindgren> I changed the default to true  18:48
<klindgren> restarting now  18:48
<klindgren> actually waiting a few minutes to get some 1-minute sar telemetry  18:49
*** electrofelix has quit IRC18:53
*** armax has quit IRC18:53
*** armax has joined #openstack-operators18:55
* dansmith assumes klindgren lit fire to the cloud  18:57
*** SimonChung has joined #openstack-operators18:58
<klindgren> yea  18:59
<mdorman> anxiously awaiting the results here, klindgren  18:59
<klindgren> generally seeing a 4% decrease in cpu consumption across all cores  19:00
<dansmith> does that mean four times the 30 cores you're running?  19:00
<dansmith> klindgren: ^  19:00
<klindgren> http://paste.ubuntu.com/13316282/  19:01
<klindgren> 36% CPU reduction in total  19:01
<klindgren> 4% * 8 cores  19:01
<dansmith> that's good, yes?  19:02
<klindgren> yes - much much closer to pre-juno numbers  19:02
<dansmith> i.e. closer to pre-kilo numbers?  19:02
<dansmith> okay  19:02
<klindgren> s/juno/kilo  19:02
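The arithmetic behind klindgren's total is a per-core drop summed over every core on the box. With the round numbers from the conversation (4% per core, 8 cores) this gives ~32%; the 36% figure presumably comes from the exact per-core values in the sar paste. A trivial illustration, under that assumption:

```python
# Illustrative only: total CPU freed = per-core reduction x core count.
# The 4%/8-core figures are the round numbers quoted in the channel.
cores = 8
per_core_drop_pct = 4.0

total_drop_pct = per_core_drop_pct * cores
print(total_drop_pct)  # → 32.0 (vs the ~36% seen with exact per-core data)
```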
*** harshs has joined #openstack-operators19:02
<dansmith> klindgren: is this in preprod or did you put on a cowboy hat?  19:02
<klindgren> cowboy  19:02
<dansmith> jeezus  19:02
<klindgren> this is real prod with real workload  19:02
<mdorman> klindgren: always has his cowboy hat on  19:03
<klindgren> I dont always edit code - but when I do - I do it in production  19:04
<mdorman> to be fair, there are two other boxes also running conductor that weren't touched  19:04
<dansmith> well, for what it's worth, I think that was pretty risky  19:04
<dansmith> but whatever, it's your deal  19:04
<dansmith> klindgren: I will clean this up to be less snarky, but I think we'll want to see that run for a while before we get too serious about actually applying it, okay?  19:04
<klindgren> yea - agreed, I will let it bake on this node for a while  19:05
*** bvandenh has joined #openstack-operators19:08
*** berendt has quit IRC19:09
<klindgren> with all cores shown: http://paste.ubuntu.com/13316425/  19:10
*** Marga_ has joined #openstack-operators19:13
*** bvandenh has quit IRC19:14
<dansmith> klindgren: are you happy enough yet to apologize for calling my answer shit?  19:15
*** laron has quit IRC19:17
<dansmith> ..guess not.. :)  19:19
<xavpaice> he just had to duck out and fix production ;)  19:19
<dansmith> could be :)  19:19
<dansmith> I warned him  19:19
<klindgren> sorry, was getting coffee  19:19
<dansmith> xavpaice: actually, I expect my linkedin profile pic is posted somewhere in the office  19:20
<klindgren> to be fair, the shit comment was about the "throw more hardware at the problem"  19:20
<xavpaice> good to see your priorities are right  19:20
<klindgren> ;-)  19:20
<dansmith> with lots of little holes in it  19:20
<xavpaice> more hardware solves all performance proble.... oh  19:21
<mgagne> http://blogs.msdn.com/cfs-filesystemfile.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-01-32-02-metablogapi/8054.image_5F00_thumb_5F00_35C6E986.png  19:21
<mgagne> haha, better one with the cowboy hat: http://www.developermemes.com/wp-content/uploads/2013/03/i-dont-always-test-my-code-but-when-i-do-i-do-it-in-production-stay-on-call-my-friend-thumb-238x300.jpg  19:21
*** SimonChung has quit IRC19:22
*** SimonChung1 has joined #openstack-operators19:22
*** signed8__ has joined #openstack-operators19:23
*** SimonChung1 has quit IRC19:23
*** SimonChung has joined #openstack-operators19:23
<klindgren> dansmith, seems like cpu usage is better - but still seeing conductor eat cpu  19:24
<klindgren> http://paste.ubuntu.com/13316590/  19:24
*** SimonChung has quit IRC19:24
*** SimonChung1 has joined #openstack-operators19:24
<klindgren> so better - but not a magic bullet  19:25
*** signed8bit has quit IRC19:25
<dansmith> klindgren: but at pre-kilo levels, yes?  19:25
*** Piet has joined #openstack-operators19:26
*** krobzaur_ has joined #openstack-operators19:26
*** laron has joined #openstack-operators19:27
<klindgren> most of the time seems to be closer to pre-kilo levels. Honestly didn't start looking at it until after kilo - when our cpu monitors started going off  19:27
<klindgren> like I said - pre-kilo we ran 40 conductor workers on 2 servers without issues  19:28
<mdorman> we have also added a lot more compute servers since upgrading to kilo, too, so there are multiple variables here  19:28
<klindgren> I was under the impression that new compute went into another cell?  19:29
<mdorman> ah yeah, you're right, sorry  19:29
<mdorman> mixing up my regions  19:29
<dansmith> klindgren: you seem to think that conductor should not be doing a lot of work and thus not consuming a lot of CPU... why is that?  19:29
<klindgren> I am not saying it shouldn't be doing work. I am saying that it eats by far the most cpu of any process in my cloud.  19:30
<dansmith> it's doing all the database work on behalf of 250 compute nodes...  19:31
<klindgren> for something that, as I understand it, is basically acting as a database proxy?  19:31
<dansmith> more than that, even, but at least that  19:31
*** laron has quit IRC19:31
<dansmith> it is more than a database proxy, but that's the work you're seeing the most of, I expect, yes  19:32
<klindgren> to give a reference point - I also compare this to our existing provisioning system that handles building of VPS stuff  19:33
<klindgren> where it deals with thousands of nodes, 60k+ VPSes, and it uses about the same amount of hardware as our openstack control plane does  19:33
<klindgren> for something about 1/200th the size  19:34
<dansmith> heh  19:35
<mdorman> yeah, in general it would be nice if it could be more scalable  19:36
<mdorman> but that kinda applies to everything  19:36
<mdorman> maybe not more scalable, but a smaller scaling ratio  19:36
<mriedem> well, you can reduce traffic by turning down periodic tasks on the compute nodes, but it looks like you guys have already looked at some of that  19:37
<mriedem> klindgren: mdorman: are you guys running db slaves at all?  19:39
<dansmith> mriedem: what does that have to do with cpu load?  19:39
<dansmith> on conductor  19:39
<mriedem> nvm  19:40
<mgagne> I find those numbers interesting and it would be nice if the performance team could help projects reach those numbers. 200x is an impressive number  19:40
<klindgren> not saying I expect openstack to do the same thing - they provide different service levels  19:41
<mgagne> yea, sure  19:41
<klindgren> like the vps product doesn't do metadata  19:41
<mgagne> I'm sure there is online migration stuff in play, object backports, periodic tasks, etc.  19:41
<klindgren> but they do online migrations and the like all the time  19:42
*** elemoine has joined #openstack-operators19:42
<mgagne> but I find the increased cost in performance difficult to understand  19:42
<dansmith> klindgren: mdorman: so I should abandon this patch, right?  19:42
<klindgren> dansmith, I think it helps - it did decrease cpu - but it's not the main % of consumption, or so it seems  19:43
<mdorman> dansmith: no db slaves  19:43
<dansmith> klindgren: it's clearly not the main database consumption.. that would be silly  19:44
<dansmith> er, cpu consumption part of the database proxy  19:44
<mdorman> i would rather see something where nova recognizes when all flavors (or whatever) are done migrating, and then skips that code on its own  19:44
<mdorman> (or was that the piece that's removed in Liberty+?)  19:44
<dansmith> mdorman: yeah, this is gone in liberty  19:44
<mdorman> gotcha  19:44
*** elemoine has quit IRC19:46
*** ccarmack has joined #openstack-operators19:49
*** britthouser has quit IRC19:54
*** armax has quit IRC19:57
*** laron has joined #openstack-operators19:59
*** britthouser has joined #openstack-operators20:00
*** krobzaur_ has quit IRC20:10
*** regXboi has quit IRC20:11
*** Marga_ has quit IRC20:11
*** laron has quit IRC20:12
*** harshs has quit IRC20:13
*** Marga_ has joined #openstack-operators20:17
*** bvandenh has joined #openstack-operators20:18
*** ccarmack has left #openstack-operators20:21
<openstackgerrit> JJ Asghar proposed openstack/osops-tools-contrib: Adding the README  https://review.openstack.org/245316  20:22
*** cbrown2_ocf has quit IRC20:38
*** elemoine has joined #openstack-operators20:38
<klindgren> dansmith, I think we are going to try temporarily setting metadata_cache_expiration high to see if that reduces the workload on conductor  20:43
<klindgren> my assumption is that the majority of load in our cloud on conductor comes from metadata  20:44
<klindgren> we serve ~32k metadata requests per 15 minutes (~128k per hour)  20:44
<mriedem> if anything interesting comes up it'd be good to post back on that performance thread in the ops ML  20:54
*** bvandenh has quit IRC20:55
*** VW has joined #openstack-operators21:00
*** VW has quit IRC21:01
*** VW has joined #openstack-operators21:02
*** cbrown2_ocf has joined #openstack-operators21:05
*** cbrown2ocf has joined #openstack-operators21:05
*** krobzaur_ has joined #openstack-operators21:21
*** Rockyg has joined #openstack-operators21:28
*** VW has quit IRC21:31
*** maishsk has joined #openstack-operators21:32
*** maishsk_ has joined #openstack-operators21:37
*** cbrown2ocf has quit IRC21:37
*** cbrown2_ocf has quit IRC21:37
*** SimonChung1 has quit IRC21:37
*** maishsk has quit IRC21:40
*** maishsk_ is now known as maishsk21:40
<mgagne> klindgren: can metadata responses be cached, or would it defeat their purpose? (being dynamic)  21:53
*** Marga_ has quit IRC21:53
<klindgren> default cache is 15 seconds  21:59
<klindgren> a lot of stuff in metadata is not dynamic  21:59
<klindgren> or is not very dynamic  21:59
*** SimonChung has joined #openstack-operators21:59
<klindgren> like AZ, flavor, private ip, userdata (currently), image, vm uuid  22:00
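The trade-off being discussed — reusing a recently built metadata response instead of hitting conductor on every request — can be sketched as a generic TTL cache. This is not nova's implementation; `load_metadata` and the keys are hypothetical, and the 15-second TTL mirrors the default klindgren mentions.

```python
import time

class TTLCache:
    """Minimal time-based cache, analogous in spirit to raising
    metadata_cache_expiration: entries are reused until they are
    older than `ttl` seconds, then rebuilt by the loader."""

    def __init__(self, ttl):
        self.ttl = ttl
        self._entries = {}  # key -> (value, timestamp)

    def get(self, key, loader):
        entry = self._entries.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]        # still fresh: no backend hit
        value = loader(key)        # missing or expired: rebuild
        self._entries[key] = (value, now)
        return value

# Hypothetical usage: count backend hits to show the cache absorbing load.
backend_calls = []
def load_metadata(instance_uuid):
    backend_calls.append(instance_uuid)
    return {"uuid": instance_uuid, "az": "nova"}

cache = TTLCache(ttl=15)
for _ in range(1000):
    cache.get("abc-123", load_metadata)
print(len(backend_calls))  # → 1: only the first request reached the backend
```

The design risk is exactly mgagne's question: a long TTL is only safe for fields that change rarely (AZ, flavor, image), and serves stale data for anything genuinely dynamic.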
*** elemoine has quit IRC22:02
*** kencjohnston_ has joined #openstack-operators22:09
<mgagne> even with the cache, conductor is taking that much CPU?  22:11
*** kencjohnston has quit IRC22:13
*** lhcheng has quit IRC22:14
*** kencjohnston_ has quit IRC22:15
*** harshs has joined #openstack-operators22:15
*** lhcheng has joined #openstack-operators22:16
<klindgren> http://paste.ubuntu.com/13316590/  22:19
*** rcernin has quit IRC22:21
<balajin> klindgren: you mentioned a quota patch that CERN had  22:25
<balajin> i assume this was the hierarchical multi-tenancy one, or was this quota per flavor?  22:25
<balajin> any links to patches / details?  22:25
<klindgren> yea - hrm - it might be in the ops etherpad - I know it was a topic at the nova unconference  22:26
<balajin> https://etherpad.openstack.org/p/operator-local-patches  22:27
<balajin> in this one?  22:27
<klindgren> doubt it's in there tbh  22:27
<balajin> okay, any names to talk to?  22:28
<klindgren> https://etherpad.openstack.org/p/mitaka-nova-unconference  22:28
<klindgren> lines 32 - 40  22:28
<balajin> thanks  22:30
*** Marga_ has joined #openstack-operators22:33
*** SimonChung1 has joined #openstack-operators22:40
*** SimonChung has quit IRC22:40
*** liverpooler has quit IRC22:53
*** mriedem has quit IRC22:53
*** liverpooler has joined #openstack-operators22:54
*** liverpooler has quit IRC23:00
*** Rockyg has quit IRC23:03
*** xavpaice has quit IRC23:04
*** SimonChung1 has quit IRC23:07
*** SimonChung has joined #openstack-operators23:07
*** xavpaice has joined #openstack-operators23:08
*** SimonChung has quit IRC23:10
*** lhcheng has quit IRC23:25
*** lhcheng has joined #openstack-operators23:27
*** alejandrito has quit IRC23:31
*** SimonChung has joined #openstack-operators23:34
*** signed8__ has quit IRC23:42
*** SimonChung1 has joined #openstack-operators23:51
*** SimonChung has quit IRC23:51
*** SimonChung has joined #openstack-operators23:52
*** SimonChung has quit IRC23:52
*** SimonChung1 has quit IRC23:52
*** SimonChung2 has joined #openstack-operators23:52

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!