Thursday, 2015-02-19

00:05 *** Marga_ has quit IRC
00:05 *** Marga_ has joined #openstack-operators
00:19 *** markvoelker has quit IRC
00:30 *** furlongm has joined #openstack-operators
00:32 *** SimonChung2 has quit IRC
00:32 *** SimonChung has joined #openstack-operators
00:33 *** SimonChung1 has joined #openstack-operators
00:33 *** SimonChung1 has quit IRC
00:33 *** SimonChung has quit IRC
00:33 *** SimonChung2 has joined #openstack-operators
00:36 *** mdorman has quit IRC
00:41 *** david-lyle is now known as david-lyle_afk
00:46 *** Marga_ has quit IRC
00:46 *** alop has quit IRC
00:46 *** Marga_ has joined #openstack-operators
00:47 *** Marga_ has quit IRC
00:47 *** Marga_ has joined #openstack-operators
00:52 *** markvoelker has joined #openstack-operators
00:59 *** markvoelker has quit IRC
01:04 *** trad511 has quit IRC
01:06 *** Marga_ has quit IRC
01:07 *** Marga_ has joined #openstack-operators
01:12 *** Marga_ has quit IRC
01:29 *** markvoelker has joined #openstack-operators
01:37 *** SimonChung has joined #openstack-operators
01:37 *** SimonChung2 has quit IRC
01:55 *** devlaps has quit IRC
01:57 *** SimonChung has quit IRC
02:45 *** Piet has quit IRC
03:01 *** SimonChung has joined #openstack-operators
03:05 *** SimonChung1 has joined #openstack-operators
03:07 *** SimonChung has quit IRC
03:15 *** harlowja_ is now known as harlowja_away
04:13 *** snk has quit IRC
04:16 *** snk has joined #openstack-operators
04:39 *** rbrooker has quit IRC
04:42 *** markvoelker has quit IRC
04:42 *** markvoelker has joined #openstack-operators
04:46 *** markvoelker has quit IRC
05:13 *** markvoelker has joined #openstack-operators
05:15 *** sanjayu has joined #openstack-operators
05:17 *** markvoelker has quit IRC
05:19 *** VW_ has joined #openstack-operators
05:21 *** VW_ has quit IRC
05:22 *** VW_ has joined #openstack-operators
05:38 *** sandywalsh has joined #openstack-operators
05:40 *** sandywalsh_ has quit IRC
05:40 *** Marga_ has joined #openstack-operators
05:41 *** Marga_ has quit IRC
05:42 *** Marga_ has joined #openstack-operators
05:54 *** zigo has quit IRC
05:55 *** zigo has joined #openstack-operators
06:13 *** markvoelker has joined #openstack-operators
06:18 *** markvoelker has quit IRC
06:22 *** signed8bit has joined #openstack-operators
06:23 *** signed8bit_ZZZzz has quit IRC
06:26 *** belmoreira has joined #openstack-operators
06:28 *** VW_ has quit IRC
06:41 *** slaweq has joined #openstack-operators
07:08 *** slaweq has quit IRC
07:10 *** slaweq has joined #openstack-operators
07:14 *** markvoelker has joined #openstack-operators
07:19 *** markvoelker has quit IRC
07:27 *** VW_ has joined #openstack-operators
07:27 *** VW_ has quit IRC
07:28 *** VW__ has joined #openstack-operators
07:29 *** Marga_ has quit IRC
08:02 *** slaweq has quit IRC
08:03 *** harlowja_away has quit IRC
08:04 *** slaweq has joined #openstack-operators
08:09 *** slaweq has quit IRC
08:15 *** markvoelker has joined #openstack-operators
08:20 *** markvoelker has quit IRC
08:44 *** zz_avozza is now known as zz_zz_avozza
09:10 *** bvandenh has joined #openstack-operators
09:14 *** matrohon has joined #openstack-operators
09:16 *** markvoelker has joined #openstack-operators
09:19 *** derekh has joined #openstack-operators
09:21 *** markvoelker has quit IRC
09:31 *** racedo_ has joined #openstack-operators
09:34 *** VW__ has quit IRC
09:39 *** VW_ has joined #openstack-operators
09:48 *** VW_ has quit IRC
09:49 *** VW_ has joined #openstack-operators
09:52 *** zz_zz_avozza is now known as avozza
10:17 *** markvoelker has joined #openstack-operators
10:22 *** markvoelker has quit IRC
10:36 *** pboros has joined #openstack-operators
10:39 *** VW_ has quit IRC
10:57 *** pboros_ has joined #openstack-operators
10:59 *** pboros has quit IRC
11:02 *** blair has quit IRC
11:07 *** avozza is now known as zz_avozza
11:07 *** VW_ has joined #openstack-operators
11:16 *** zz_avozza is now known as avozza
11:18 *** markvoelker has joined #openstack-operators
11:23 *** markvoelker has quit IRC
11:25 *** VW_ has quit IRC
11:27 *** VW_ has joined #openstack-operators
11:27 *** VW_ has quit IRC
11:28 *** VW_ has joined #openstack-operators
11:32 *** blair has joined #openstack-operators
11:34 *** blair has quit IRC
11:34 *** blairo has joined #openstack-operators
11:39 *** blairo has quit IRC
12:19 *** markvoelker has joined #openstack-operators
12:24 *** markvoelker has quit IRC
12:29 *** VW_ has quit IRC
12:30 *** VW_ has joined #openstack-operators
12:42 *** avozza is now known as zz_avozza
12:47 *** VW_ has quit IRC
12:50 *** blair has joined #openstack-operators
12:55 *** blair has quit IRC
13:02 *** markvoelker has joined #openstack-operators
13:10 *** zz_avozza is now known as avozza
13:23 *** bilco105 has quit IRC
13:24 *** bilco105 has joined #openstack-operators
13:42 *** markvoelker_ has joined #openstack-operators
13:42 *** markvoelker has quit IRC
13:59 *** racedo_ is now known as racedo
14:10 *** cpschult has joined #openstack-operators
14:14 *** trad511 has joined #openstack-operators
14:22 *** VW_ has joined #openstack-operators
14:25 *** j05h has quit IRC
14:26 *** VW_ has quit IRC
14:33 *** VW_ has joined #openstack-operators
14:33 *** sanjayu has quit IRC
14:34 *** csoukup has joined #openstack-operators
14:38 *** markvoelker_ has quit IRC
14:39 *** markvoelker has joined #openstack-operators
14:51 *** j05h has joined #openstack-operators
15:14 *** delattec has joined #openstack-operators
15:36 *** avozza is now known as zz_avozza
15:47 *** mdorman has joined #openstack-operators
15:51 *** jaypipes has quit IRC
16:01 *** jaypipes has joined #openstack-operators
16:04 *** david-lyle_afk is now known as david-lyle
16:06 *** alaski has joined #openstack-operators
16:08 <VW_> do the meeting commands work in these channels?
16:08 <klindgren__> I dont think so
16:08 <mdorman> i doubt it, but could try
16:08 <klindgren__> I dont see an openstack bot in here
16:09 <VW_> no worries
16:09 <VW_> I'll take notes in the etherpad
16:09 <VW_> if you folks could help
16:09 <mdorman> could you paste the link?
16:09 <belmoreira> https://etherpad.openstack.org/p/Large-Deployment-Team-Meetings
16:10 <VW_> yep - that
16:10 <VW_> thanks belmoreira
16:10 <VW_> Ok, let's get started
16:10 <VW_> alaski: you here?
16:10 <alaski> yep
16:10 <VW_> cool - thanks!
16:11 <VW_> so, folks, in the last meeting, there were questions about how cells v2 was coming along
16:11 <VW_> I asked alaski to join for a bit
16:11 <mdorman> cool
16:11 <VW_> I know he has to run by mid-hour, but I thought we could catch up on the progress and find out how we can help
16:11 <alaski> sure
16:12 <alaski> there's been work on a few fronts
16:12 <alaski> we've put some effort into getting cellsv1 tested better
16:12 <alaski> there's a check job upstream that isn't voting yet, but is down to about 2 failures we need to fix to get it voting
16:13 <alaski> for cellsv2 we've been focusing on the new database that's going to be added
16:13 <alaski> that work is being tracked under https://blueprints.launchpad.net/nova/+spec/cells-v2-mapping right now
16:14 <alaski> the goal is to have that in place with a few tables by the end of Kilo, though nothing will be using it yet
16:14 <alaski> there's been some parallel work so that Nova will be able to communicate with multiple databases
16:14 <alaski> but that should be mostly invisible to operators
16:15 <VW_> so, would this be in addition to the current DB we run at the region/top level, alaski?
16:15 <VW_> or replace it
16:15 <mdorman> ^ my question too
16:15 <alaski> this would replace it
16:15 <VW_> and by multiple, nova would talk to it and all the cells directly?
16:15 <alaski> the idea is that there will be an api level database, and a cell level database, and they will now be different
16:15 <alaski> VW_: yes
16:15 <VW_> got it
16:15 <alaski> nova-api would talk to multiple cell databases directly
16:16 <mdorman> are there plans for a migration strategy to go from v1 to v2 schema?  or would that not be part of the kilo release?
16:16 <alaski> that will not be done in Kilo
16:16 <mdorman> alright.
16:17 <alaski> but the basic idea is that the current nova db will become a cell db, and a current cell db can remain.  migration would be primarily populating the new api db
16:18 <alaski> so there's not a schema change, we're just adding a new one up top, and replacing the top one if you're currently using cells
16:18 <mdorman> gotcha
16:18 <mdorman> makes sense
16:19 <alaski> and the primary focus in K is getting that db in place and getting Nova able to work with multiple dbs
16:19 <alaski> L is going to get into migrating data, scheduling, networking and more
16:19 <mdorman> kk
16:19 <alaski> oh, and multiple message queues in K (hopefully)
16:20 <alaski> discussions have begun with Neutron folks as to how the projects will work together with cells
16:20 *** j05h has quit IRC
16:20 <VW_> awesome
16:20 <mdorman> how would it look if multiple DBs are supported, but not multiple message queues?
16:20 <alaski> mdorman: multiple message queues will need to be supported before cellsv2 can be used
16:21 <mdorman> ok.
16:21 <VW_> wouldn't they be though?  The new top level DB is basically telling the API which message queue to use, right
16:21 <alaski> VW_: right
16:21 <alaski> we just need to add support for that in the code
16:21 <alaski> but currently the focus is on the database
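A purely illustrative sketch of the cells v2 idea alaski outlines above: an api-level mapping that tells nova-api which cell database and message queue to use for a given instance. The names, dict structure, and placeholder uuids here are invented for illustration and are not the actual Nova schema.

```python
# Illustrative only: an api-level "mapping" db resolves an instance to
# its cell, and the cell record carries that cell's db and mq endpoints.
# All identifiers below are made up for this sketch.

CELL_MAPPINGS = {
    'cell01': {'db': 'mysql://db01/nova_cell01', 'mq': 'rabbit://mq01'},
    'cell02': {'db': 'mysql://db02/nova_cell02', 'mq': 'rabbit://mq02'},
}

INSTANCE_MAPPINGS = {  # instance uuid -> cell it lives in
    'aaaa-1111': 'cell01',  # placeholder uuids
    'bbbb-2222': 'cell02',
}

def lookup(instance_uuid):
    """Resolve the cell db/mq endpoints the api would talk to."""
    cell = INSTANCE_MAPPINGS[instance_uuid]
    return CELL_MAPPINGS[cell]

print(lookup('bbbb-2222')['mq'])
```

This is why multiple-message-queue support has to land before cells v2 is usable: once the top-level mapping points at per-cell queues, the api must be able to talk to any of them.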
16:22 <alaski> the summary is that we're mainly working on some underpinnings in K, and nothing will be usable until L
16:22 <VW_> awesome, alaski - thanks for sharing
16:23 <VW_> what can this group do to help?
16:23 *** zz_avozza is now known as avozza
16:23 <VW_> I know that among operators, we have a larger percentage using (or needing to use for scale) cells
16:24 <alaski> continuing to ask questions and providing feedback, especially when we get to scheduling and networking
16:24 *** med_` is now known as medberry
16:24 <VW_> so folks, like myself, belmoreira and mdorman have a vested interest in helping it succeed :)
16:24 <mdorman> :)
16:24 *** medberry is now known as medberry2
16:24 <mdorman> alaski:  what is the best avenue for that?  are there specs that we can help review?  other forums?
16:24 <VW_> any specific blueprints that you haven't gotten enough feedback on, alaski?
16:25 <alaski> https://review.openstack.org/#/c/141486/ is way out of date now, but that's about scheduling
16:25 <VW_> cool - we'll try to stoke the fires on that one then
16:25 <alaski> mdorman: specs are the best, but if there's a place I can raise visibility for other avenues please let me know
16:26 *** Piet has joined #openstack-operators
16:26 <VW_> you might remind this group of your meeting times, alaski
16:26 <mdorman> openstack-operators ML i think is probably the best for that
16:26 <VW_> we should probably be better about being in those too
16:26 <alaski> mdorman: gotcha, I can send info there when new specs come up
16:26 <alaski> https://wiki.openstack.org/wiki/Meetings/NovaCellsv2#Next_Meeting
16:27 <VW_> cool - we are trying to spread the word on that.  I mentioned it last month during the alternate meeting time too
16:27 <VW_> I know you have to run, alaski.  Thanks for joining
16:27 <alaski> not a problem
16:28 <mdorman> yeah, thanks for the insights
16:28 <VW_> there is an ops mid-cycle in two weeks as well.  I won't make it, but I'll get any of the LDT folks there to help spread the word
16:28 <alaski> feel free to ping me in #openstack-nova or on email with any further questions
16:28 *** j05h has joined #openstack-operators
16:29 <VW_> that brings us to the other main thing I wanted to talk about
16:29 <VW_> the upcoming mid-cycle
16:29 <VW_> tom has an hour and a half set aside for LDT on Monday
16:30 *** delatte has joined #openstack-operators
16:31 <mdorman> looking back at previous meeting notes for follow up items we could address there...
16:31 <VW_> belmoreira, mdorman or any other Large Deployment Team folks - are you attending?
16:31 <mdorman> i and klindgren__  are
16:31 <belmoreira> i'm not attending
16:32 *** klindgren__ is now known as klindgren
16:32 *** delattec has quit IRC
16:32 <mdorman> i think mfisch and dvorak are as well
16:32 <VW_> ok cool, mdorman.  I'll have a couple folks from Rackspace there too.  I just had family travel already scheduled
16:32 <mdorman> yup
16:33 <mdorman> so is this our last meeting before that meetup, or would we have one more?
16:34 <VW_> this is our last one before
16:34 <mdorman> ok.
16:34 <VW_> I'm almost wondering if it would be useful to build a list of wants around scheduling and cell/host control
16:34 <VW_> statuses, filters, etc
16:34 <mdorman> seems like that could have some potential
16:35 <VW_> it would be something very tangible and clearly fits in a space where alaski and company need some feedback
16:35 <VW_> it could be less focused on specific specs and more of a wish/want list
16:35 <VW_> but could include feedback on the spec above as well
16:35 <mdorman> right
16:35 <belmoreira> yes
16:35 <VW_> for example, I REALLY want to be able to "disable" a cell so no new builds go there
16:36 <VW_> the folks in Philly could circulate what they come up with back to the ML for those of us not there
16:36 <VW_> then we can finalize it in the March meeting, and pass it on to the devs
16:36 <mdorman> i like it
16:36 <VW_> would be a huge win for the working group process in general
16:37 <belmoreira> yes
16:37 <mdorman> are there other general topics like that for any of the other projects where we could do something similar (neutron?)
16:37 <belmoreira> there are other interesting topics that are going on that ops should discuss, in my view...
16:37 <belmoreira> mdorman: :)
16:38 <belmoreira> nova net -> neutron migration
16:38 <VW_> I agree, belmoreira.  And many will be at the larger meeting.  I'm just trying to make sure we make the most of the time slot Tom gave LDT
16:39 <VW_> but yeah, something Neutron-ish sounds and smells like a good place to focus some time
16:39 <VW_> :D
16:40 <VW_> Tom also gave me a preview of the Vancouver schedule and we have an LDT timeslot there too
16:41 <mdorman> nice
16:41 <belmoreira> other point not really related with development is how LDT are doing ops (procedures)
16:41 <belmoreira> it would be nice to have a discussion on this
16:41 <VW_> yeah, belmoreira - that is a good idea
16:41 <belmoreira> for example, how to retire hundreds of servers
16:41 <mdorman> you thinking sort of like a show-and-tell on that?
16:42 <VW_> That might be a great thing to do in Vancouver
16:42 <VW_> let me go see if I can still pull up the schedule
16:43 <belmoreira> image management is another important topic in my view
16:43 <VW_> yes, belmoreira +eleventy billion :)
16:43 <klindgren> yes
16:44 <VW_> I'm really wanting to hear more about your squid/caching stuff
16:44 <VW_> and we are looking at some active precaching on the hypervisor
16:44 <VW_> very early stages, though
16:44 <klindgren> I have an ansible playbook to precache images on a server
16:44 <klindgren> using bittorrent
16:45 <belmoreira> that is really interesting
16:45 <belmoreira> I would love to hear more about it
16:45 <VW_> sounds like if we took one of our two 40 minute sessions in Vancouver to have an open discussion about image management, it wouldn't be a waste of time then?
16:46 <VW_> :)
16:46 <mdorman> probably not
16:46 <mdorman> should we try to include some time at the meetup to discuss wants for vancouver agenda?
16:46 <klindgren> yea - I would love for "community public" and "provider public" to be segregated somehow
16:46 <VW_> cool then - consider it done pending finalization of the official schedule by the Foundation
16:47 <belmoreira> great
16:47 *** dmsimard_away is now known as dmsimard
16:47 <klindgren> and that when people delete images you dont end up with vm's that can no longer migrate because the backing image has been destroyed
16:47 <VW_> we probably could cover that in Philly, mdorman, but we only have an hour and a half
16:47 <VW_> we still have March and April LDT meetings too
16:47 <mdorman> ah yeah good point.
16:48 <VW_> yeah klindgren - there are some useful best practices there
16:48 <VW_> I didn't chime into the thread a while back, but we actually release a new image with each update for that reason
16:48 <VW_> but we are working longer term, for a fancier way, so that we don't have to keep as many old images around
16:49 *** alop has joined #openstack-operators
16:49 <mdorman> cool
16:50 <VW_> for now, I'd like to keep one of the Vancouver sessions (40 minutes) open to produce something, but if we don't have a candidate like the scheduling stuff, we can have another sharing session like the images work
16:50 <VW_> or expand images to both
16:50 <mdorman> sounds good.
16:50 <VW_> because I'm sure folks have plenty to talk about there ;)
16:51 <klindgren> Slightly off topic - but has anyone else recently updated to juno and now getting neutron read timeouts?
16:51 <dvorak> what does the message look like?
16:52 <klindgren> We are getting it randomly - but consistently on metadata calls
16:52 <VW_> check for DB deadlocks
16:52 <klindgren> Failed to get metadata for ip: 10.22.253.11
16:52 <klindgren> <trace>
16:52 <klindgren> ConnectionFailed: Connection to neutron failed: HTTPSConnectionPool(host='openstack.int.godaddy.com', port=9696): Read timed out. (read timeout=60)
16:53 <VW_> a recent deploy of ours (which is late 2014 code) has seen an increase in those on the Neutron side
16:53 <VW_> caused us a lot of headache
16:53 <klindgren> what solved it?
16:53 <VW_> we are still solving, actually
16:54 <VW_> :\
16:54 <klindgren> ah - anyway would love to talk about what you tried after the meeting
16:54 <VW_> have you already put that on the list?  I can try and get some folks to chime in that know more about the specifics
16:54 <klindgren> nope - was going to ask in #openstack-neutron but to be honest - that always seems like a huge waste
16:55 <VW_> send it to #openstack-neutron and #openstack-operators and I'll try to kick some folks to take a look
16:55 <klindgren> kk
16:55 <VW_> we may be dealing with completely different problems
16:55 <VW_> but if our neutron folks have any insight that will help, I'd like them to share
16:55 <klindgren> agreed but I tried things that "should fix it"
16:56 <klindgren> increased workers/timeouts
16:56 <klindgren> and nothing helped
16:56 <klindgren> :-/
16:56 <VW_> I'm in our UK office till the 28th, but I can harass them from afar and, worst case, in person when I get back
16:56 <VW_> OK, well, sorry for the meeting room confusion
16:56 * VW_ is clearly way behind on wiki work
16:57 <VW_> but thank you guys for joining today.  Looks like we have some useful stuff to work on
16:57 <VW_> mdorman, klindgren I might pull you two together with a couple of my RS folks going so we can plan ahead for the session based on the above
16:58 <klindgren> kk
16:58 <mdorman> sounds good.
16:59 <VW_> alright - well thanks guys
16:59 <VW_> I'll let you get back to all your operations related fires :)
16:59 <klindgren> :-)
16:59 <belmoreira> thanks :)
17:02 <mgagne> anyone using custom weighers in Nova? I find it inconvenient that Nova will load ALL weighers found in scheduler/weights by default unlike filters. And tests break because I added one (or more) weighers in scheduler/weights.
17:08 *** Marga_ has joined #openstack-operators
17:09 *** Marga_ has quit IRC
17:10 *** Marga_ has joined #openstack-operators
17:19 <VW_> mgagne: I pinged one of our devs with the above.  He wasn't quite sure what you were asking, but mentioned the following option to control the weights used:
17:19 <VW_> cfg.ListOpt('scheduler_weight_classes',
17:19 <VW_>                default=['nova.scheduler.weights.all_weighers'],
17:19 <VW_>                help='Which weight class names to use for weighing hosts'),
17:19 <mgagne> VW_: I agree, my message wasn't that clear =)
17:21 <mgagne> VW_: I'm already using that config. The point is that I find it inconvenient that nova.scheduler.weights.all_weighers is the default. It will try to load all weighers.
17:21 <mgagne> VW_:  This same config is used in tests and someone cannot drop in a new weigher without updating a test that counts the number of weighers found. That's inconvenient when you have more than one custom weigher. You have to apply your patches in the right order, each one incrementing the previous value.
17:23 <VW_> hmm - OK - let me see if I can find more
17:23 <mgagne> VW_: my rant is mainly about the fact ALL weighers are loaded by default (using auto-discovery) unlike filters, which load a bunch of "standard" filters.
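A minimal sketch of the custom-weigher pattern under discussion. In a real deployment the weigher would subclass nova.scheduler.weights.BaseHostWeigher and be named explicitly in scheduler_weight_classes (instead of the all_weighers default mgagne is complaining about); here the base class is stubbed so the snippet runs standalone, and the host names and free_ram_mb attribute are illustrative, not Nova's actual objects.

```python
# Stub of the weigher machinery; the real base class lives in
# nova.scheduler.weights. Everything below is a sketch, not Nova code.

class BaseHostWeigher(object):
    """Stub of nova.scheduler.weights.BaseHostWeigher."""

    def weight_multiplier(self):
        return 1.0

    def _weigh_object(self, host_state, weight_properties):
        raise NotImplementedError()

    def weigh_objects(self, host_states, weight_properties):
        # Each host gets multiplier * raw weight; the scheduler then
        # prefers the host with the highest total weight.
        return [self.weight_multiplier() *
                self._weigh_object(h, weight_properties)
                for h in host_states]


class FreeRamWeigher(BaseHostWeigher):
    """Prefer hosts with more free RAM (same idea as Nova's RAMWeigher)."""

    def _weigh_object(self, host_state, weight_properties):
        return host_state.free_ram_mb


class HostState(object):
    """Tiny stand-in for the scheduler's per-host state object."""

    def __init__(self, name, free_ram_mb):
        self.name = name
        self.free_ram_mb = free_ram_mb


hosts = [HostState('cell1-hv01', 2048), HostState('cell1-hv02', 8192)]
weigher = FreeRamWeigher()
weights = weigher.weigh_objects(hosts, {})
best = max(zip(weights, hosts))[1]
print(best.name)  # the host with the most free RAM wins
```

To avoid the auto-discovery behavior, scheduler_weight_classes (the option pasted above) can be set to an explicit list of weigher classes rather than left at nova.scheduler.weights.all_weighers.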
17:24 <VW_> gotcha
17:25 <VW_> we pretty much just use the offsets to discourage builds
17:25 <mgagne> say it again?
17:25 <VW_> and now we've actually written a custom filter to block all builds if a specific offset is set
17:26 <VW_> yes, sorry - let me explain better
17:26 <VW_> when you say weights, my mind initially goes to weighting of cells in the DB
17:27 <mgagne> hehe
17:27 <VW_> cause we use that a lot
17:27 <VW_> since there is no "disable" for a cell
17:27 <VW_> so apologies for going all left field on you
17:27 <mgagne> oh right =)
17:28 <mgagne> yes, that feature could come in handy
17:28 <mgagne> I think RS mentioned it a couple of times last summit during cells presentations
17:32 *** bvandenh has quit IRC
17:34 *** derekh has quit IRC
17:34 <VW_> yes we did
17:34 <VW_> I yell at our cores about it all the time
17:34 <VW_> :)
17:34 <mdorman> devops, man!
17:34 <VW_> no, I know
17:35 <VW_> we actually wrote a custom filter for now
17:35 <mgagne> mdorman: I have a love/hate relationship with the concept of devops =)
17:35 <mdorman> heh, yeah
17:35 <VW_> but I'd like proper disabling in the DB with reason fields, etc - like hosts
17:35 <mgagne> mdorman: the only article I found that kind of summarizes my opinion can be read on puppetlabs: http://puppetlabs.com/blog/salesforce-devops-journey
17:36 <VW_> there could be others, though - admin only for example
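A sketch of the kind of custom scheduler filter VW_ describes: refusing to schedule new builds into a cell an operator has flagged as disabled. The BaseHostFilter base class is stubbed (the real one is nova.scheduler.filters.BaseHostFilter), and the cell_name attribute and DISABLED_CELLS set are invented for illustration; this is not Rackspace's actual filter.

```python
# Illustrative only: a host filter that drops hosts in "disabled" cells.
# DISABLED_CELLS is a hypothetical operator-maintained set; a real
# implementation might read a flag (or reason field) from the DB instead.

DISABLED_CELLS = {'cell02'}


class BaseHostFilter(object):
    """Stub of nova.scheduler.filters.BaseHostFilter."""

    def host_passes(self, host_state, filter_properties):
        raise NotImplementedError()

    def filter_all(self, host_states, filter_properties):
        # Keep only the hosts that pass the per-host check.
        return [h for h in host_states
                if self.host_passes(h, filter_properties)]


class DisabledCellFilter(BaseHostFilter):
    """Reject hosts that live in a disabled cell."""

    def host_passes(self, host_state, filter_properties):
        return host_state.cell_name not in DISABLED_CELLS


class HostState(object):
    """Tiny stand-in for the scheduler's per-host state object."""

    def __init__(self, name, cell_name):
        self.name = name
        self.cell_name = cell_name


hosts = [HostState('hv01', 'cell01'), HostState('hv02', 'cell02')]
passing = DisabledCellFilter().filter_all(hosts, {})
print([h.name for h in passing])  # only hosts outside disabled cells remain
```

The proper fix VW_ asks for (a disable flag with a reason field on the cell record, like hosts have) would make this filter unnecessary.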
17:37 <mgagne> mdorman: mainly because people I know only wish to do the "good" part of ops, not the bad and boring one =)
17:38 <VW_> we stopped calling it devops and only practice it-buzz-word-du-jour now :P
17:38 <mgagne> how about superhero? :D
17:38 <mgagne> with a rockstar logo
17:39 <VW_> I actually started calling it opsdev jokingly a while back because most of my teams are systems engineers first and foremost.  They've been trying to take all the "devops" tactics for deploying apps on infrastructure and turning a fleet of infrastructure nodes into an application itself
17:40 *** yapeng has joined #openstack-operators
17:41 <VW_> but anyway, I totally get where mdorman is coming from on the misunderstanding of what it is and isn't
17:42 *** matrohon has quit IRC
17:45 <VW_> alright gents, I've ignored the day job for too long.  back to meeting hell....
17:45 <mgagne> good luck
17:53 <mgagne> the vote for presentations website is slow as hell :-/
17:59 <mdorman> +1
18:00 <mgagne> Pages take forever to load, nobody ain't got time for that
18:09 *** avozza is now known as zz_avozza
18:10 *** harlowja has joined #openstack-operators
18:11 *** belmoreira has quit IRC
18:17 *** SimonChung1 has quit IRC
18:30 <mgagne> for those who need real same host resize: https://github.com/mgagne/nova/commit/bbc92b18f7e5e6105c7608469e080f6c4e8ea5ac
18:30 *** cpschult has quit IRC
18:34 *** Piet has quit IRC
18:37 *** yapeng has quit IRC
18:37 *** yapeng_ has joined #openstack-operators
19:00 *** Piet has joined #openstack-operators
19:08 *** trad511 has quit IRC
19:15 *** delatte has quit IRC
19:22 *** VW_ has quit IRC
19:39 *** cpschult has joined #openstack-operators
19:52 *** yapeng_ has quit IRC
19:54 *** Marga_ has quit IRC
20:12 *** VW_ has joined #openstack-operators
20:12 *** turnerg has joined #openstack-operators
20:18 *** VW_ has quit IRC
20:19 *** VW_ has joined #openstack-operators
20:19 *** SimonChung has joined #openstack-operators
20:25 *** Marga_ has joined #openstack-operators
20:30 *** Marga_ has quit IRC
20:31 *** VW_ has quit IRC
20:36 *** blair has joined #openstack-operators
20:40 *** blair has quit IRC
20:46 *** Piet has quit IRC
20:48 *** mdorman has quit IRC
20:50 *** vinsh has joined #openstack-operators
20:51 *** trad511 has joined #openstack-operators
21:19 *** Marga_ has joined #openstack-operators
21:20 *** Marga_ has quit IRC
21:20 *** Marga_ has joined #openstack-operators
21:22 *** blair has joined #openstack-operators
21:24 *** signed8bit has quit IRC
21:24 *** signed8bit has joined #openstack-operators
21:29 *** Piet has joined #openstack-operators
21:30 *** alop has quit IRC
21:30 *** Marga_ has quit IRC
21:31 *** Marga_ has joined #openstack-operators
21:32 *** alop has joined #openstack-operators
21:39 *** yapeng has joined #openstack-operators
21:46 <jlk> Any of you exposing keystone v3 (in the catalog) at all?
21:53 *** jaypipes has quit IRC
21:57 *** trad511 has quit IRC
22:01 <mgagne> not yet ^^'
22:01 *** Marga_ has quit IRC
22:01 <mgagne> working on it
22:02 <mgagne> have to review the new policy.json =)
22:10 *** trad511 has joined #openstack-operators
22:10 *** trad511 has left #openstack-operators
22:11 *** blair has quit IRC
22:11 *** SimonChung has quit IRC
22:12 *** SimonChung1 has joined #openstack-operators
22:12 *** SimonChung has joined #openstack-operators
22:12 *** SimonChung1 has quit IRC
22:15 *** mdorman has joined #openstack-operators
22:18 *** trad512 has joined #openstack-operators
22:21 <jlk> I'm having a hell of a time with it
22:24 *** turnerg has quit IRC
22:26 *** signed8bit has quit IRC
22:27 <klindgren> we appear to have the v3 keystone endpoint
22:27 <jlk> trying to figure out how to actually expose it
22:27 <klindgren> I think we had to do it for some custom stuff
22:27 <jlk> I mean, the /v3 URI is there, but isn't listed in the service catalog
22:27 <klindgren> around projects
22:27 <jlk> trying to put it in there... things go weird
22:27 <jlk> most clients and such are set up to expect an auth endpoint of /v2.0/
22:28 *** signed8bit has joined #openstack-operators
22:28 <jlk> some say to put an unversioned URI in to the service catalog
22:28 <mdorman> klindgren:  i think we only use it for our project api stuff so that we can assign roles to groups as well as users.   iirc v2 you can only assign roles to users, not groups
22:28 <jlk> but then things are even weirder.
22:28 <klindgren> yea I think so as well
22:29 <klindgren> however we have both the v2 and v3 exposed in the service catalog (endpoint-list)
22:29 <jlk> http://docs.openstack.org/developer/keystone/http-api.html#how-do-i-migrate-from-v2-0-to-v3
22:29 <jlk> how do you distinctly name them?
22:29 <klindgren> we also have a keystone and keystonev3 service created in the service-list
22:30 <jlk> for things like cinder, clients explicitly look for volumes or volumesv2
22:30 <jlk> as an endpoint type
22:31 <jlk> I am unsuccessful in getting openstack client or keystoneclient to work with an unversioned endpoint
22:32 <jlk> but maybe they mean with the code, not with the actual command line client
22:33 <klindgren> jlk here is what we have:
22:34 <klindgren> keystone endpoint-list
22:34 <klindgren> | <redact> | zone1  |        https://openstack-dev.int.godaddy.com:5000/v3        |        https://openstack-dev.int.godaddy.com:5000/v3        |        https://openstack-dev.int.godaddy.com:35357/v3       | <keyv3> |
22:34 <klindgren> | <redact> | zone1  |       https://openstack-dev.int.godaddy.com:5000/v2.0       |       https://openstack-dev.int.godaddy.com:5000/v2.0       |       https://openstack-dev.int.godaddy.com:35357/v2.0      | <keyv2> |
22:34 <klindgren> keystone service-list
22:34 <klindgren> | keyv2 |  keystone  |    identity   |    OpenStack Identity Service    |
22:34 <klindgren> | keyv3 | keystonev3 |   identityv3  |  OpenStack Identity Service v3   |
22:35 <klindgren> but like mike said - I believe we are using v2 for pretty much everything
22:35 <jlk> yeah, okay.
22:35 <klindgren> and only using v3 for a very specific function
22:35 <jlk> oooh, using code directly I get discovery to work right
22:35 <jlk> >>> keystone = client.Client(auth_url='http://localhost:5000')
22:35 <jlk> >>> type(keystone)
22:35 <jlk> <class 'keystoneclient.v3.client.Client'>
22:36 <jlk> I just don't think the cli interface is set up to utilize that
22:37 <klindgren> also via the cli stuff - we specifically configure the auth url to be the v2.0
22:37 *** trad512 has quit IRC
22:37 *** medberry2 is now known as med_
22:37 <jlk> how do you make use of the v3 stuff then?
22:37 *** med_ has quit IRC
22:37 *** med_ has joined #openstack-operators
22:37 <jlk> what code out there specifically looks for "identityv3" ?
22:37 <klindgren> 1 very specific project
22:37 <klindgren> hehehe
22:38 <klindgren> I should say we have keystonev3 enabled - but its only used by a single internal project
22:38 *** signed8bit has quit IRC
22:38 <klindgren> so - sorry for the wild goose chase - we aren't doing what I thought you wanted :-/
22:39 <jlk> you sort of are
22:39 <jlk> I need a v3 endpoint to do some federation stuff, but I just don't get how to make this work
22:41 <mgagne> when I tested from the CLI, I used the versioned URL and OS_IDENTITY_API_VERSION=3
22:41 <mgagne> keystonev3 and identityv3 in the catalog
mgagnekeystonev3 and identityv3 in the catalog22:41
*** VW_ has joined #openstack-operators22:46
*** vinsh has quit IRC22:48
*** matrohon has joined #openstack-operators22:56
*** yapeng has quit IRC23:08
*** blair has joined #openstack-operators23:15
*** pboros_ has quit IRC23:24
*** Marga_ has joined #openstack-operators23:30
*** vinsh has joined #openstack-operators23:32
*** vinsh has quit IRC23:32
*** SimonChung has quit IRC23:46
*** SimonChung1 has joined #openstack-operators23:46
*** csoukup has quit IRC23:47
*** markvoelker has quit IRC23:49
*** matrohon has quit IRC23:56
*** vinsh has joined #openstack-operators23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!