*** toddnni has quit IRC | 00:19 | |
*** toddnni has joined #openstack-keystone | 00:22 | |
*** Rhys has quit IRC | 00:25 | |
*** Rhvs has joined #openstack-keystone | 00:25 | |
*** d0ugal__ has joined #openstack-keystone | 00:27 | |
*** Dinesh_Bhor has joined #openstack-keystone | 00:41 | |
*** jmlowe has quit IRC | 00:43 | |
*** jmlowe has joined #openstack-keystone | 00:44 | |
*** d0ugal__ has quit IRC | 00:58 | |
*** d0ugal__ has joined #openstack-keystone | 01:06 | |
*** dklyle has quit IRC | 01:09 | |
*** dklyle has joined #openstack-keystone | 01:09 | |
*** dklyle has quit IRC | 01:11 | |
*** dklyle has joined #openstack-keystone | 01:12 | |
*** felipemonteiro__ has quit IRC | 01:18 | |
*** r-daneel has joined #openstack-keystone | 01:39 | |
*** d0ugal__ has quit IRC | 01:39 | |
*** r-daneel_ has joined #openstack-keystone | 01:42 | |
*** r-daneel has quit IRC | 01:43 | |
*** r-daneel_ is now known as r-daneel | 01:43 | |
*** panbalag has joined #openstack-keystone | 01:53 | |
*** dave-mccowan has joined #openstack-keystone | 02:12 | |
*** dklyle has quit IRC | 02:15 | |
*** dklyle has joined #openstack-keystone | 02:16 | |
*** dklyle has quit IRC | 02:22 | |
*** dklyle has joined #openstack-keystone | 02:23 | |
*** dklyle has quit IRC | 02:24 | |
*** dklyle has joined #openstack-keystone | 02:24 | |
*** nicolasbock has quit IRC | 02:25 | |
*** dklyle has quit IRC | 02:25 | |
*** dklyle has joined #openstack-keystone | 02:26 | |
*** dikonoor has joined #openstack-keystone | 02:41 | |
openstackgerrit | Merged openstack/keystone master: Use the provider_api module in limit controller https://review.openstack.org/562712 | 02:43 |
*** itlinux has joined #openstack-keystone | 02:58 | |
*** Rhvs is now known as Rhys | 03:03 | |
*** Rhys is now known as Guest29196 | 03:03 | |
*** d0ugal__ has joined #openstack-keystone | 03:05 | |
*** panbalag has quit IRC | 03:13 | |
*** d0ugal__ has quit IRC | 03:21 | |
*** d0ugal__ has joined #openstack-keystone | 03:22 | |
*** dklyle has quit IRC | 03:27 | |
*** dklyle has joined #openstack-keystone | 03:28 | |
*** d0ugal__ has quit IRC | 03:29 | |
*** itlinux has quit IRC | 03:32 | |
*** d0ugal__ has joined #openstack-keystone | 03:36 | |
*** d0ugal__ has quit IRC | 03:47 | |
*** lbragstad has joined #openstack-keystone | 03:49 | |
*** ChanServ sets mode: +o lbragstad | 03:49 | |
*** dklyle has quit IRC | 03:50 | |
*** dklyle has joined #openstack-keystone | 03:50 | |
*** dklyle has quit IRC | 04:01 | |
*** dklyle has joined #openstack-keystone | 04:01 | |
*** Kumar has joined #openstack-keystone | 04:29 | |
openstackgerrit | XiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url https://review.openstack.org/565411 | 04:35 |
openstackgerrit | Lance Bragstad proposed openstack/keystone-specs master: Add scenarios to strict hierarchy enforcement model https://review.openstack.org/565412 | 04:40 |
*** dave-mccowan has quit IRC | 04:43 | |
*** lbragstad has quit IRC | 04:51 | |
openstackgerrit | XiaojueGuan proposed openstack/keystoneauth master: Trivial: Update pypi url to new url https://review.openstack.org/565418 | 04:59 |
*** links has joined #openstack-keystone | 05:03 | |
*** Kumar has quit IRC | 05:29 | |
*** gongysh has joined #openstack-keystone | 06:07 | |
*** homeski has quit IRC | 06:12 | |
*** pcichy has joined #openstack-keystone | 06:22 | |
*** gongysh has quit IRC | 06:27 | |
*** gongysh has joined #openstack-keystone | 06:31 | |
*** dikonoor has quit IRC | 06:38 | |
*** gongysh has quit IRC | 06:49 | |
*** rcernin has quit IRC | 07:06 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/keystonemiddleware master: Imported Translations from Zanata https://review.openstack.org/565455 | 07:09 |
*** gongysh has joined #openstack-keystone | 07:12 | |
*** gongysh has quit IRC | 07:13 | |
*** gongysh has joined #openstack-keystone | 07:17 | |
*** david-lyle has joined #openstack-keystone | 07:22 | |
*** david-lyle has quit IRC | 07:24 | |
*** dklyle has quit IRC | 07:24 | |
*** gongysh has quit IRC | 08:06 | |
*** belmoreira has joined #openstack-keystone | 08:33 | |
*** pcichy has quit IRC | 08:38 | |
*** belmoreira has quit IRC | 08:42 | |
*** gongysh has joined #openstack-keystone | 08:55 | |
*** d0ugal__ has joined #openstack-keystone | 08:57 | |
*** d0ugal__ has quit IRC | 09:02 | |
*** Dinesh_Bhor has quit IRC | 09:21 | |
*** dklyle has joined #openstack-keystone | 09:26 | |
*** gongysh has quit IRC | 09:44 | |
*** lifeless_ is now known as lifeless | 09:46 | |
*** d0ugal has joined #openstack-keystone | 10:00 | |
*** d0ugal has quit IRC | 10:00 | |
*** d0ugal has joined #openstack-keystone | 10:00 | |
*** panbalag has joined #openstack-keystone | 10:21 | |
johnthetubaguy | wondering if anyone could help me better understand the heat and federated user problems: https://bugs.launchpad.net/keystone/+bug/1589993 | 10:43 |
openstack | Launchpad bug 1589993 in OpenStack Identity (keystone) "cannot use trusts with federated users" [High,Triaged] | 10:43 |
johnthetubaguy | I am using a group mapping and I think I see the same problems as described in the bug (as I don't do the create role assignment at login thingy) | 10:44 |
*** nicolasbock has joined #openstack-keystone | 10:48 | |
*** panbalag has quit IRC | 10:52 | |
mordred | kmalloc: the issue I reported yesterday on the new ksa stack is an issue with the devstack endpoint registration | 11:00 |
*** nicolasbock has quit IRC | 11:09 | |
*** dklyle has quit IRC | 11:15 | |
*** gongysh has joined #openstack-keystone | 11:15 | |
*** nicolasbock has joined #openstack-keystone | 11:22 | |
*** raildo has joined #openstack-keystone | 11:49 | |
*** dklyle has joined #openstack-keystone | 11:56 | |
*** dklyle has quit IRC | 11:58 | |
*** david-lyle has joined #openstack-keystone | 11:58 | |
*** dave-mccowan has joined #openstack-keystone | 12:08 | |
*** pcichy has joined #openstack-keystone | 12:15 | |
*** mchlumsky has quit IRC | 12:16 | |
*** edmondsw has joined #openstack-keystone | 12:19 | |
*** mchlumsky has joined #openstack-keystone | 12:20 | |
*** Horrorcat has joined #openstack-keystone | 12:40 | |
*** panbalag has joined #openstack-keystone | 12:44 | |
*** pcichy has quit IRC | 12:48 | |
*** panbalag has quit IRC | 12:48 | |
*** martinus__ has joined #openstack-keystone | 12:53 | |
*** panbalag has joined #openstack-keystone | 13:04 | |
*** wxy| has joined #openstack-keystone | 13:07 | |
*** david-lyle has quit IRC | 13:08 | |
*** panbalag has quit IRC | 13:29 | |
*** panbalag has joined #openstack-keystone | 13:29 | |
*** lbragstad has joined #openstack-keystone | 13:32 | |
*** ChanServ sets mode: +o lbragstad | 13:32 | |
*** asettle has left #openstack-keystone | 13:33 | |
*** openstackgerrit has quit IRC | 13:34 | |
*** dklyle has joined #openstack-keystone | 13:38 | |
lbragstad | johnthetubaguy: ping - i reviewed wxy|'s patch for unified limits with the use cases from CERN | 13:39 |
lbragstad | was curious if I could run something by you | 13:39 |
lbragstad | i attempted to walk through the scenarios here https://review.openstack.org/#/c/565412/ | 13:40 |
johnthetubaguy | lbragstad: sure, I should pick your brain about a federation but with heat in return :p | 13:41 |
lbragstad | johnthetubaguy: that sounds fair - we can start with federation stuff | 13:41 |
johnthetubaguy | OK, basically its this bug again: https://bugs.launchpad.net/keystone/+bug/1589993 | 13:42 |
openstack | Launchpad bug 1589993 in OpenStack Identity (keystone) "cannot use trusts with federated users" [High,Triaged] | 13:42 |
lbragstad | there is a lot of context behind https://bugs.launchpad.net/keystone/+bug/1589993 | 13:42 |
johnthetubaguy | we seem to hit that when doing dynamic role assignment | 13:42 |
johnthetubaguy | which I think is expected? | 13:42 |
johnthetubaguy | well, known I guess | 13:42 |
lbragstad | sure | 13:42 |
lbragstad | i remember them bringing this up to us in atlanta | 13:43 |
lbragstad | during the PTG | 13:43 |
johnthetubaguy | the problem is doing the dynamic role assignment on first login doesn't work for our use case | 13:43 |
johnthetubaguy | basically because we want the group to map correctly to an assertion that can be quite dynamic (level of assurance) | 13:44 |
johnthetubaguy | not sure if I am missing a work around, or some other way of doing things | 13:44 |
*** felipemonteiro__ has joined #openstack-keystone | 13:44 | |
lbragstad | well - we do have something that came out of newton design sessions aimed at solving a similar problem | 13:45 |
lbragstad | it was really for the "first login" case | 13:45 |
lbragstad | where the current federated flow resulted in terrible user experience, because a user would need to hit the keystone service provider to get a shadow user created | 13:46 |
lbragstad | then an admin would have to come through and manually create the assignments between that shadow user and various projects | 13:46 |
lbragstad | which isn't a great experience for users, because it's like "hurry up and wait" | 13:47 |
lbragstad | so we built - https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning | 13:47 |
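For reference, the auto-provisioning approach puts project and role information straight into the mapping's local rules, so keystone creates the assignments at authentication time. A minimal sketch of such a mapping; the project name, role, and remote attribute are placeholders, not the configuration being discussed here:

    {
        "rules": [
            {
                "remote": [{"type": "OIDC-preferred_username"}],
                "local": [
                    {"user": {"name": "{0}"}},
                    {"projects": [
                        {"name": "federated-project",
                         "roles": [{"name": "member"}]}
                    ]}
                ]
            }
        ]
    }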
johnthetubaguy | right, but the auto provisioning fixes the roles on first login, I thought? | 13:47 |
* lbragstad double checks the code | 13:48 | |
johnthetubaguy | we would need it to remove role assignments and replace them on a subsequent login | 13:49 |
*** felipemonteiro_ has joined #openstack-keystone | 13:49 | |
johnthetubaguy | (FWIW, I suspect we are about to hit application credential issues for the same reason, but its a pure guess at this point) | 13:49 |
lbragstad | so - it looks like the code will run each time federated authentication is used | 13:51 |
*** panbalag has left #openstack-keystone | 13:51 | |
johnthetubaguy | maybe we didn't test this properly then | 13:51 |
lbragstad | so if the mapping changes in between user auth requests, the mapping will be applied both times | 13:51 |
johnthetubaguy | would it remove role assignments? | 13:51 |
lbragstad | it would not | 13:52 |
lbragstad | not the mapped auth bit | 13:52 |
johnthetubaguy | yeah, that is the bit the group mapping does for "free" | 13:52 |
lbragstad | but... it might if you drop the user | 13:52 |
johnthetubaguy | so the use case is where the IdP says if two factor was used | 13:52 |
johnthetubaguy | if the two factor isn't used, then they get access to less projects | 13:53 |
*** felipemonteiro__ has quit IRC | 13:53 | |
lbragstad | huh | 13:53 |
johnthetubaguy | I mean its not two factor, its a generic level of assurance assertion, but same difference really | 13:53 |
lbragstad | ok - that makes sense | 13:53 |
johnthetubaguy | its not something I had considered before working through this project | 13:54 |
lbragstad | right - it's elevating permissions based on attributes of the assertion | 13:54 |
johnthetubaguy | its using EGI AAI, its an IdP proxy, so a single user can login with multiple identities, with varying levels of assurance, some folks need a minimum level | 13:54 |
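For contrast, the group-based approach being described here maps a level-of-assurance attribute from the assertion onto a group, so the membership (and the roles it grants) only lasts as long as the federated token. A rough sketch, with the assertion attribute and group name invented for illustration:

    {
        "rules": [
            {
                "remote": [
                    {"type": "OIDC-preferred_username"},
                    {"type": "OIDC-assurance", "any_one_of": ["high"]}
                ],
                "local": [
                    {"user": {"name": "{0}"}},
                    {"group": {"name": "high-assurance-users",
                               "domain": {"name": "Default"}}}
                ]
            }
        ]
    }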
*** links has quit IRC | 13:54 | |
lbragstad | couldn't you use the regular group mapping bits? | 13:55 |
johnthetubaguy | we do, but that breaks heat | 13:55 |
*** dklyle has quit IRC | 13:55 | |
johnthetubaguy | or seems to | 13:55 |
lbragstad | ah | 13:55 |
* lbragstad thinks | 13:57 | |
*** spilla has joined #openstack-keystone | 13:57 | |
johnthetubaguy | (not sure about how this maps to application credentials yet, as the openid connect access_token dance is really stupid for ansible/CLI usage patterns) | 13:57 |
lbragstad | the problem is that that trust implementation relies on the role assignment existing in keystone | 13:58 |
lbragstad | i think it would work with auto-provisioning, but then the problem becomes cleaning up stale role assignments | 14:00 |
lbragstad | right? | 14:00 |
johnthetubaguy | yeah | 14:00 |
lbragstad | so -something that would be very crude | 14:00 |
johnthetubaguy | thing is, not cleaning those up is a feature for some users, but could be a mapping option I guess force_refresh=True? | 14:01 |
*** dikonoor has joined #openstack-keystone | 14:01 | |
lbragstad | would be to use auto-provisioning and then purge users using keystone-manage mapping purge | 14:01 |
* lbragstad looks back at the code | 14:02 | |
lbragstad | ah - nevermind, mapping_purge only works for ID mappings | 14:02 |
johnthetubaguy | maybe an always_purge_old_roles mapping? would that delete app creds as we modify the role assignments? | 14:02 |
lbragstad | not shadow users | 14:02 |
johnthetubaguy | ah | 14:02 |
lbragstad | so - with a refresh option, how would that get invoked? | 14:03 |
johnthetubaguy | I guess we want something to say that the mapping should delete all other role assignments, somehow | 14:03 |
johnthetubaguy | in a way that doesn't always clear out all current application credentials | 14:03 |
johnthetubaguy | (unless there were role assignments deleted, then I guess you have to invalidate app creds, which seems fine) | 14:04 |
johnthetubaguy | hmm, needs writing up I think | 14:04 |
lbragstad | yeah - that's the stance we take on app creds currently | 14:05 |
*** felipemonteiro_ has quit IRC | 14:05 | |
johnthetubaguy | yankcrime: are you following along with my thinking here, or have I gone crazy :) | 14:05 |
lbragstad | anytime a role assignment is removed for a user, we invalidate the application credential | 14:05 |
johnthetubaguy | lbragstad: yeah, I think I still like that, its simple | 14:05 |
*** felipemonteiro_ has joined #openstack-keystone | 14:05 | |
johnthetubaguy | I think the action is to write up the use case, and document the missing piece | 14:06 |
johnthetubaguy | like a partial spec write up or something? | 14:06 |
johnthetubaguy | (a very short one!) | 14:06 |
lbragstad | at least documenting the use case would be good | 14:07 |
lbragstad | i hadn't really heard anything back from the heat team | 14:07 |
johnthetubaguy | OK, cool | 14:08 |
johnthetubaguy | feels like the group mapping might want to get deprecated if it really doesn't work with these features | 14:09 |
yankcrime | johnthetubaguy: not at all, i think that's a succinct summary of what we're trying to achieve | 14:09 |
johnthetubaguy | but anyways, that is for that write up | 14:09 |
johnthetubaguy | yankcrime: cool | 14:09 |
johnthetubaguy | now you may have walked into my trap, but we should work out who should write that up :) | 14:09 |
lbragstad | ok - one last question | 14:10 |
yankcrime | bah i'd hoped that by agreeing i'd avoid that trap ;) | 14:10 |
johnthetubaguy | lbragstad: sure | 14:10 |
yankcrime | the use-case is pretty clear, i don't mind getting that drafted | 14:10 |
yankcrime | (clear to us i mean) | 14:10 |
lbragstad | you always know which users need this refresh behavior, right? | 14:10 |
knikolla | o/ | 14:11 |
johnthetubaguy | I think its all users in this case, so I think the answer is yes | 14:11 |
yankcrime | we can infer which users need this refresh by other attributes | 14:11 |
lbragstad | hmm | 14:11 |
yankcrime | (claims, whatever) | 14:11 |
lbragstad | ok - i was thinking about a possible implementation that might be pretty easy | 14:11 |
yankcrime | but yeah right now for mvp we can safely assume all users in this case | 14:12 |
lbragstad | but it would require using the keystone API to set an attribute on the user's reference that would get checked during federated authentication | 14:12 |
johnthetubaguy | if that can be set in the mapping? maybe | 14:12 |
ayoung | what if we are overthinking this? | 14:12 |
ayoung | what if...we really don't want dynamic role assignments or any of that | 14:13 |
ayoung | and instead...somehow use HMT to solve the problem. | 14:13 |
ayoung | like... | 14:13 |
ayoung | first login, a federated user gets put into a pool | 14:13 |
ayoung | and...we have a rule (hand wave) that says that user can create their own project | 14:13 |
ayoung | the pool is the parent project, and it has 0 quota | 14:14 |
*** wxy| has quit IRC | 14:14 | |
ayoung | the child project create is triggered by the user themself, and they get an automatic role assignment for it | 14:14 |
*** dikonoor has quit IRC | 14:14 | |
yankcrime | ayoung: is this mercador? | 14:14 |
ayoung | yankcrime, I thought it was pseudoconical | 14:15 |
* ayoung just made a map joke | 14:15 | |
* yankcrime googles pseudoconical | 14:16 | |
ayoung | https://en.wikipedia.org/wiki/Map_projection | 14:16 |
johnthetubaguy | so while I do like that, the main use case is to get access to existing project resources based on IdP assertions, that are dynamic, i.e. revoke access if assertions change | 14:16 |
yankcrime | ayoung: lol | 14:16 |
ayoung | johnthetubaguy, I caught a bit about the groups | 14:16 |
lbragstad | keystone supports one way to do that - but things like trusts don't work with it | 14:17 |
ayoung | it seems to me that the role assignments should be what control that...the IdP data should be providing the input to the role assignments, but should not be creating anything | 14:17 |
lbragstad | if we use the other way (with auto provisioning) then the features work, but cleaning up the assignments is harder | 14:17 |
knikolla | i remember there was a proposed change to have the assignments from the mapping persist until the user logs in again through federation and the attributes are re-evaluated | 14:17 |
lbragstad | knikolla: interesting, do you know where that ended up | 14:18 |
lbragstad | ? | 14:18 |
knikolla | https://review.openstack.org/#/c/415545/ | 14:18 |
johnthetubaguy | knikolla: yeah, that is basically what we need here | 14:18 |
* yankcrime nods | 14:18 | |
ayoung | Dec 28, 2016 | 14:18 |
ayoung | That is as old as some of my patches | 14:19 |
ayoung | let me take a look at that | 14:19 |
johnthetubaguy | I kinda like the mapping doing a "clean out any other assignments" thing, and just have groups treated like any other role assignment | 14:20 |
johnthetubaguy | the delete re/add thing screws with app creds, which would be bad | 14:21 |
johnthetubaguy | lbragstad: back to quotas, do you have more context to your patch, was that from an email? | 14:22 |
lbragstad | johnthetubaguy: it wasn't - but wxy took a shot at working the CERN use cases into https://review.openstack.org/#/c/540803/ | 14:22 |
lbragstad | and i tried to follow it up with this - https://review.openstack.org/#/c/565412/1 | 14:23 |
johnthetubaguy | lbragstad: why is it not restricted to two levels? | 14:23 |
lbragstad | johnthetubaguy: which specification? | 14:24 |
ayoung | so...it seems to me that group membership should be based solely on the data in the assertion. If the data that triggers the group is not there, the user cannot get a token with roles based on that group | 14:24 |
johnthetubaguy | johnthetubaguy: https://review.openstack.org/#/c/540803 | 14:25 |
ayoung | groups are not role assignments. Groups are attributes from the IdP that are separate from the user herself | 14:25 |
lbragstad | johnthetubaguy: there are some things in there that need to be updated | 14:25 |
lbragstad | and that's one of them, because i don't think we actually call that out anywhere | 14:26 |
lbragstad | johnthetubaguy: so - good catch :) | 14:26 |
ayoung | so...we should not trigger any persisted data on the group based on the IdP data. It should all be dynamic. | 14:27 |
*** felipemonteiro__ has joined #openstack-keystone | 14:27 | |
johnthetubaguy | ayoung: technically, sure. But as a user we want to dynamically assign a collection of role assignments, based on attributes we get from the IdP that change over time, right now we are doing that via groups (... which breaks heat) | 14:27 |
lbragstad | because the role assignment isn't persistent and breaks when heat tries to use a trust | 14:28 |
johnthetubaguy | so the rub is you want to create an application credential, so you don't have to go via the IdP for automation, that gains what the federated user had at the time | 14:28 |
johnthetubaguy | and you want the heat trust to persist, as long as it can | 14:28 |
johnthetubaguy | now if the IdP info changes on a later login (or a refresh of the role assignments is triggered via some automation process out of band), sure that access gets revoked, but thats not a usual every day thing | 14:29 |
ayoung | johnthetubaguy, so what you are saying is that Heat trusts are based on attributes that came in a Federated assertion. Either we have them re-affirmed every time (ephemeral) or we make them persistent, and then we are stuck with them for all time. | 14:30 |
johnthetubaguy | anyways, I think we need to write up this scenario, if nothing else so its very clear in our heads | 14:30 |
ayoung | Hmmm | 14:30 |
*** felipemonteiro_ has quit IRC | 14:30 | |
ayoung | This is a David Chadwick question. | 14:30 |
johnthetubaguy | yeah, its in between those two... its like the user is partially deleted... | 14:30 |
johnthetubaguy | it is a bit fishy, to be sure | 14:31 |
johnthetubaguy | its a cache of IdP assertions | 14:31 |
johnthetubaguy | but anyways | 14:31 |
*** jdennis has quit IRC | 14:33 | |
lbragstad | johnthetubaguy: so - the examples i worked on last night to unified limits only account for two levels | 14:33 |
*** jdennis has joined #openstack-keystone | 14:34 | |
johnthetubaguy | lbragstad: they sound sensible as I read through them, I am adding a comment on the other spec a second, will see what you think about it. | 14:34 |
ayoung | OK...so the question is "how long should the data from the assertion be considered valid" | 14:35 |
ayoung | on one hand, the assertion has a time limit on it, which is something like 8 hours | 14:35 |
lbragstad | johnthetubaguy: granted, both examples I wrote in the follow on break | 14:35 |
ayoung | and that is not long enough for most use cases | 14:35 |
ayoung | on the other hand, Keystone gets no notifications for user changes from the IdP, so if we make it any longer, those are going to be based on stale data | 14:36 |
ayoung | So having a trust based on a Federated account needs to be an elevated level of priv no matter what | 14:36 |
ayoung | same is true of any other delegation, to include app creds | 14:36 |
johnthetubaguy | same with app credentials | 14:37 |
johnthetubaguy | yeah | 14:37 |
*** gongysh has quit IRC | 14:37 | |
ayoung | so, who gets to decide? | 14:37 |
ayoung | I think the answer is that the ability to create a trust should be an explicit role assignment | 14:37 |
ayoung | if you have that, then the group assignments you get via a federated token get persisted | 14:38 |
ayoung | another way to do it is at the IdP level, but that is fairly coarse-grained | 14:38 |
johnthetubaguy | it feels like this is all configuration of the mapping | 14:38 |
ayoung | and...most IdPs don't provide more than identifying info | 14:38 |
ayoung | johnthetubaguy, so, it could be | 14:38 |
johnthetubaguy | should the group membership or assignment persist or not | 14:38 |
ayoung | one part of the mapping could be "persist group assignments" | 14:39 |
ayoung | but, the question is whether you would want that for everyone from the IdP | 14:39 |
*** openstackgerrit has joined #openstack-keystone | 14:40 | |
openstackgerrit | ayoung proposed openstack/keystone master: Enable trusts for federated users https://review.openstack.org/415545 | 14:40 |
kmalloc | mordred: ah good to know and phew | 14:40 |
ayoung | that is just a manual rebase knikolla | 14:40 |
johnthetubaguy | ayoung: personally I am fine with global, but I could see arguments both ways | 14:41 |
ayoung | johnthetubaguy, I see three cases | 14:41 |
mordred | kmalloc: I have a patch up to fix devstack and the sdk patch now depends on it - and correctly shows test failures in the "install ksa from release" but test successes with "install ksa from master/depends-on" | 14:41 |
mordred | kmalloc: https://review.openstack.org/#/c/564494/ | 14:41 |
ayoung | 1. Everyone from the IdP is ephemeral. Persist nothing. 2. Everyone is persisted. 3. Certain users get promoted to persisted after the fact. | 14:41 |
ayoung | actually, a fourth | 14:41 |
ayoung | 4. there is a property in the Assertion that allows the mapping to determine whether to persist the users groups or not | 14:42 |
ayoung | note that "everyone is persisted " does not mean that "all groups should be persisted" either | 14:42 |
kmalloc | mordred: looking now. | 14:42 |
ayoung | it merely means that a specific group mapping should be persisted | 14:42 |
ayoung | johnthetubaguy, that make sense? | 14:43 |
kmalloc | mordred: so uh... Dumb question, I thought we worked hard for versionless entries in the catalog? | 14:44 |
johnthetubaguy | ayoung: I think so... although something tells me we are missing a bit, just not sure what right now | 14:44 |
kmalloc | mordred: this is, afaict, undoing the "best practices" | 14:44 |
ayoung | johnthetubaguy, I think if we make it possible to identify on a group mapping that the group assignment should be persisted, we'll have it down. We will also need a way to clean out group assignments, although that technically can be done today with e.g. an ansible role | 14:45 |
johnthetubaguy | right, today we have persist roles or transient groups, what we really need is a "clear out stale" at login option, but I need to write this all down, I am getting lost | 14:46 |
ayoung | johnthetubaguy, not at login | 14:50 |
ayoung | johnthetubaguy, that may never happen | 14:50 |
johnthetubaguy | its only one way to get refreshed info, but its a way. | 14:51 |
johnthetubaguy | you would need out of band automation... | 14:51 |
kmalloc | mordred: this is sounding like a bug in ksa | 14:52 |
kmalloc | mordred: or in cinder handling versionless endpoints | 14:52 |
johnthetubaguy | ayoung: that is the fishy bit I was worried about before | 14:52 |
ayoung | limited time trusts | 14:52 |
ayoung | make them refreshable...you have to log in periodically with the right groups to keep it alive | 14:53 |
ayoung | so...limited time group assignments? | 14:53 |
ayoung | put an expiration date on some groups assignments, with a way to make them permanent via the group API | 14:53 |
johnthetubaguy | ayoung: yeah, that could work, like app cred expiry I guess | 14:55 |
mordred | kmalloc: versionless endpoints in the catalog don't work for cinder - so the bug is there | 14:55 |
kmalloc | Ahhhh. | 14:55 |
mordred | kmalloc: I believe that basically we somehow missed actually getting cinder updated in the same way nova got updated | 14:55 |
kmalloc | So, how did this ever work :P ;) | 14:55 |
mordred | nobody has used the block-storage endpoint for any reason yet | 14:56 |
kmalloc | Haha | 14:56 |
kmalloc | Ok then. | 14:56 |
mordred | the other endpoints in the catalog in devstack - for volumev2 and volumev3 - are versioned | 14:56 |
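In other words, the volumev2/volumev3 entries that work carry a version in the URL, while the unused block-storage entry was registered without one, which cinder can't serve. Illustrative URLs only, not the actual devstack values:

    # versioned entries (work today)
    volumev3       public  http://controller/volume/v3/$(project_id)s
    # versionless entry (the broken block-storage registration)
    block-storage  public  http://controller/volume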
johnthetubaguy | lbragstad: added a comment here: https://review.openstack.org/#/c/540803/9/specs/keystone/rocky/hierarchical-unified-limits.rst@35 | 14:56 |
lbragstad | johnthetubaguy: just saw that - reading it now | 14:56 |
mordred | kmalloc: so - in short, the answer to "how did this ever work?" is "It never has" :) | 14:57 |
lbragstad | johnthetubaguy: is VO a domain? | 14:57 |
lbragstad | or a top level project? | 14:58 |
kmalloc | We need to bug cinder folks, but, annnyway, this.sounds like a clean fix for now. | 14:58 |
mordred | kmalloc: yah - ksa correctly fails with the misconfigured endpoint | 14:58 |
lbragstad | i'm struggling with the mapping of terms | 14:58 |
lbragstad | ok - so VO seems like a parent project or a domain | 15:00 |
kmalloc | mordred: yay successfully failing >.> | 15:00 |
lbragstad | and VO groups are children projects of the VO | 15:00 |
johnthetubaguy | lbragstad: yeah | 15:02 |
lbragstad | johnthetubaguy: ok | 15:02 |
johnthetubaguy | lbragstad: VO = parent project, in my head | 15:02 |
lbragstad | cool - so if we set the VO to have 20 cores | 15:03 |
lbragstad | you *always* want the VO to be within that limit, right? | 15:03 |
johnthetubaguy | lbragstad: yes | 15:03 |
lbragstad | ok | 15:04 |
lbragstad | if we set 20 cores on the VO, and the default registered limit is 10 | 15:04 |
*** wxy| has joined #openstack-keystone | 15:04 | |
johnthetubaguy | lbragstad: basically the PI pays the bill for all resources used by anyone in the VO | 15:04 |
johnthetubaguy | (via a research grant, but whatever) | 15:04 |
lbragstad | sure | 15:04 |
lbragstad | so the VO has to be within whatever the limit is | 15:05 |
lbragstad | it can never over-extend it's limit, right? | 15:05 |
johnthetubaguy | correct | 15:05 |
lbragstad | here are where my questions come in | 15:05 |
johnthetubaguy | no exceptions, modulo races and whether you care about them | 15:05 |
*** mchlumsky has quit IRC | 15:05 | |
*** pcichy has joined #openstack-keystone | 15:05 | |
lbragstad | say we have a VO with 20 cores | 15:05 |
johnthetubaguy | so you say this "which defeats the purpose of not having the service understand the hierarchy." | 15:05 |
johnthetubaguy | we should get back to that bit after your questions | 15:06 |
lbragstad | yeah | 15:06 |
lbragstad | so the VO doesn't have any children | 15:06 |
lbragstad | and the default registered limit for cores is 10 | 15:06 |
johnthetubaguy | yep | 15:06 |
lbragstad | so any project without a specific limit assumes the default | 15:06 |
lbragstad | right? | 15:06 |
johnthetubaguy | yep | 15:06 |
lbragstad | cool | 15:06 |
*** spilla has quit IRC | 15:06 | |
lbragstad | so - as the PI, I create a child under VO | 15:07 |
johnthetubaguy | yep, gets a limit of 10 | 15:07 |
lbragstad | yep - because i didn't specify it | 15:07 |
johnthetubaguy | for all 15 sub projects I create | 15:07 |
lbragstad | then i create another child | 15:07 |
lbragstad | so - two children, each default to 10 | 15:07 |
johnthetubaguy | yep | 15:07 |
lbragstad | and total resources in the tree is 20 | 15:07 |
lbragstad | technically, I'm at capacity | 15:08 |
johnthetubaguy | so not in the model I was proposing | 15:08 |
johnthetubaguy | (in my head) | 15:08 |
lbragstad | say both child use all of their quota | 15:08 |
johnthetubaguy | it might be the old "garbutt" model, if memory serves me | 15:08 |
johnthetubaguy | right, so we have project A, B, C | 15:09 |
johnthetubaguy | say we also created D | 15:09 |
johnthetubaguy | B, C, D are all children of A | 15:09 |
*** spilla has joined #openstack-keystone | 15:09 | |
lbragstad | A = VO, B and C are children with default limits for cores | 15:09 |
*** r-daneel has quit IRC | 15:09 | |
johnthetubaguy | yeah | 15:09 |
lbragstad | should you be able to create D? | 15:09 |
johnthetubaguy | let's add D, same as B and C | 15:09 |
lbragstad | without specifying a limit? | 15:09 |
johnthetubaguy | I am saying yes | 15:09 |
johnthetubaguy | (just go with it...) | 15:10 |
lbragstad | ok | 15:10 |
lbragstad | so - let's create D | 15:10 |
johnthetubaguy | so we have A (limit 20), B, C, D all with limit of 10 | 15:10 |
lbragstad | yep - so the sum of the children is 30 | 15:10 |
johnthetubaguy | yep | 15:10 |
johnthetubaguy | so lets say actual usage of all projects is 1 test VM | 15:10 |
johnthetubaguy | so current actual usage is now 4 | 15:10 |
lbragstad | sure | 15:11 |
johnthetubaguy | project A now tries to spin up 19 VMs, what happens | 15:11 |
johnthetubaguy | well thats bad, its only allowed 16 | 15:11 |
lbragstad | right | 15:11 |
johnthetubaguy | because B, C, D are using some | 15:11 |
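Putting numbers on that example: the parent limit bounds the whole tree, so A's headroom is the parent limit minus everyone's current usage.

    A.limit  = 20
    usage    = A(1) + B(1) + C(1) + D(1) = 4
    headroom = 20 - 4 = 16
    A asks for 19 more -> 4 + 19 = 23 > 20 -> denied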
* lbragstad assumes the interface for using oslo.limit is enforce(project_id, project_usage) | 15:11 | |
johnthetubaguy | that is the bit I don't like | 15:12 |
lbragstad | i'm not sure i like it either, but i'm curious to hear why you don't | 15:12 |
johnthetubaguy | so my assumption was based on the new Nova code, which is terrible I know | 15:12 |
johnthetubaguy | in that world we have a callback to the project: | 15:13 |
johnthetubaguy | count_usage(resource_type, project_id) | 15:13 |
lbragstad | oh... | 15:13 |
johnthetubaguy | so when you call enforce(project_id, project_usage)... you get the idea | 15:13 |
lbragstad | right - oslo.limit gets the tree A, B, C, and D from keystone | 15:14 |
lbragstad | then uses the callback to get all the usage for that tree | 15:14 |
johnthetubaguy | right, so the limit on A really means sum A, B, C and D usage | 15:14 |
johnthetubaguy | and any check on a leaf means check the parent for any limit | 15:14 |
lbragstad | hmm | 15:14 |
johnthetubaguy | or something a bit like that | 15:15 |
lbragstad | ok - i was operating yesterday under the assumption there wouldn't be a callback | 15:15 |
johnthetubaguy | now.. this could be expensive | 15:15 |
lbragstad | ^ that's why i was operating under that assumption :) | 15:15 |
johnthetubaguy | when there are lots of resources owned by a VO, but then its expensive when you are a really big project too | 15:15 |
lbragstad | i didn't think we'd actually get anyone to bite on that | 15:15 |
johnthetubaguy | so its one extra thing that is nice about the two level limit | 15:15 |
lbragstad | are you saying you wouldn't need the callback with a two-level limit? | 15:16 |
lbragstad | s/limit/model/ | 15:16 |
johnthetubaguy | I think you need the callback still, in some form | 15:16 |
lbragstad | ok - i agree | 15:16 |
johnthetubaguy | it just you know its not going to be too expensive, as there are only two levels of hierarchy | 15:16 |
lbragstad | you could still have a parent with hundreds of children | 15:18 |
lbragstad | i suppose | 15:18 |
johnthetubaguy | you could, and it would suck | 15:18 |
lbragstad | ok - so i think we hit an important point | 15:19 |
johnthetubaguy | now we don't have any real use cases that map to the one that is efficient to implement | 15:19 |
johnthetubaguy | we can invent some, for sure, but they feel odd | 15:19 |
lbragstad | if you assume oslo.limit accepts a callback for getting usage of something per project, then you can technically create project D | 15:20 |
johnthetubaguy | so even in the other model, what if project A has already used all 20? | 15:20 |
lbragstad | if you can't assume that about oslo.limit, then creating project D results in a possible violation of the limit, because the library doesn't have a way to get all the usage it needs to make a decision with confidence | 15:20 |
johnthetubaguy | even though the children only sum to 20, it doesn't help you | 15:21 |
lbragstad | i think that needs to be a characteristic of the model | 15:21 |
johnthetubaguy | agreed | 15:22 |
lbragstad | if A.limit == SUM(B.limit, C.limit) | 15:22 |
lbragstad | then you've decided to push all resource usage to the leaves of the tree | 15:22 |
lbragstad | which we can figure out in oslo.limit | 15:22 |
lbragstad | so when you go to allocate 2 cores to A, we can say "no" | 15:22 |
johnthetubaguy | well, I think what you are looking for is where project A usage counts any resource limits in the children, before counting its own resources | 15:22 |
lbragstad | oh... | 15:23 |
johnthetubaguy | so A, B, C, (20, 10, 10) means project A can't create anything | 15:23 |
lbragstad | i guess i was assuming that A.usage = 0 | 15:23 |
johnthetubaguy | this is where the three level gets nuts, as you get usages all over the three levels, etc | 15:24 |
lbragstad | yeah..... | 15:24 |
johnthetubaguy | I think we can't assume A has usage 0, generally it has all the shared resources | 15:24 |
johnthetubaguy | or the stuff that was created before someone created sub projects | 15:24 |
lbragstad | right... | 15:24 |
johnthetubaguy | same difference I guess | 15:25 |
johnthetubaguy | anyways, I think that is why I like us going down the specific two level Virtual organisation case, it makes all the decisions for us, its quite specific and real | 15:25 |
ayoung | anyone else feel like a lot of these problems come down to "we know when to do the action, but we have no event on which to undo it." | 15:25 |
openstackgerrit | XiaojueGuan proposed openstack/keystoneauth master: Trivial: Update pypi url to new url https://review.openstack.org/565418 | 15:26 |
*** wxy| has quit IRC | 15:27 | |
lbragstad | johnthetubaguy: ok - so what do we need to accomplish this? the callback? | 15:27 |
*** wxy| has joined #openstack-keystone | 15:27 | |
johnthetubaguy | lbragstad: this code may help: https://github.com/openstack/nova/blob/c15a0139af08fd33d94a6ca48aa68c06deadbfca/nova/quota.py#L172 | 15:28 |
johnthetubaguy | lbragstad: you could get the usage for each project as a callback? | 15:28 |
lbragstad | well - somewhere in the service (e.g. nova in this case) code, I'm envisioning | 15:28 |
lbragstad | limit.enforce(project_id, usage_callback) | 15:29 |
lbragstad | oslo.limit gets the limit tree | 15:29 |
lbragstad | then uses the callback to collect usage for each node in the tree | 15:29 |
lbragstad | and compares that to the request being made to return a "yes" or "no" to nova | 15:30 |
lbragstad | sorry - limit.enforce(resource_name, project_id, usage_callback) | 15:30 |
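A rough sketch of that library-side flow, assuming hypothetical helper names (the real oslo.limit interface was still being designed at this point):

    # Hypothetical sketch only -- get_project_tree, get_limit, and
    # ProjectOverLimit are illustrative names, not the real oslo.limit API.
    def enforce(resource_name, project_id, usage_callback):
        tree = get_project_tree(project_id)            # parent and children, from keystone
        limit = get_limit(resource_name, tree.parent)  # the parent limit bounds the whole tree
        used = sum(usage_callback(resource_name, p) for p in tree.all_projects)
        if used > limit:
            raise ProjectOverLimit(resource_name, project_id)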
johnthetubaguy | looking for a good line in Nova to help | 15:30 |
johnthetubaguy | lbragstad: https://github.com/openstack/nova/blob/644ac5ec37903b0a08891cc403c8b3b63fc2a91c/nova/compute/api.py#L293 | 15:31 |
lbragstad | ok | 15:32 |
lbragstad | part of that feels like it would belong in oslo.limit | 15:32 |
johnthetubaguy | lbragstad: my bad, here we are, this moves to an oslo.limit call, where oslo.limit has been set up with some registered callback, a bit like the CONF reload callback: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/server_groups.py#L157 | 15:34 |
lbragstad | and it would look like limit.enforce('injected_files', 'e33f5fbbe4a4407292e7ccef49ebac0c', api._get_injected_file_usage_for_project) | 15:34 |
johnthetubaguy | note the check quota, create thing, recheck quota model | 15:34 |
johnthetubaguy | sorry, the delta check is better, check we can increase by one, etc, etc. | 15:35 |
lbragstad | ahh | 15:35 |
lbragstad | so - objects.Quotas.check_deltas() would be a replaced by a call to oslo.limit | 15:35 |
johnthetubaguy | yeah | 15:36 |
lbragstad | i suppose the oslo.limit library is going to need to know the usage requested, too | 15:36 |
johnthetubaguy | more complicated example, to note how to make it more efficient: https://github.com/openstack/nova/blob/ee1d4e7bb5fbc358ed83c40842b4c08524b5fcfb/nova/compute/utils.py#L842 | 15:36 |
johnthetubaguy | thats the delta bit in the above two examples | 15:36 |
johnthetubaguy | the other case is a bit odd... its a more static limit, there are a few different cases | 15:37 |
lbragstad | so limit.enforce('cores', 'e33f5fbbe4a4407292e7ccef49ebac0c', 2, api._get_injected_file_usage_for_project) | 15:37 |
johnthetubaguy | so it needs to be a dict | 15:37 |
johnthetubaguy | https://github.com/openstack/nova/blob/ee1d4e7bb5fbc358ed83c40842b4c08524b5fcfb/nova/compute/utils.py#L852 | 15:38 |
johnthetubaguy | otherwise it really sucks | 15:38 |
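So the service-side call might look roughly like the following, with every requested delta passed in one dict so usage only has to be counted once per project (names hypothetical):

    # Hypothetical caller-side sketch -- names are illustrative.
    deltas = {'instances': 1, 'cores': 2, 'ram': 2048}
    limit.enforce(project_id, deltas, count_usage)   # one call covers all resources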
lbragstad | hmm | 15:38 |
johnthetubaguy | I mean oslo.limits wouldn't check the user quotas of course, they are deleted | 15:38 |
lbragstad | doesn't the max count stuff come from keystone now though? | 15:38 |
lbragstad | don't we only need to know the resource name, requested units, project id, and usage callback? | 15:39 |
johnthetubaguy | so you would want an oslo limits API to check the max limits I suspect, but that could be overkill | 15:40 |
johnthetubaguy | anyways, focus on the instance case, yes that is what you need | 15:41 |
lbragstad | how is a max limit different from a project limit or registered limit? | 15:41 |
*** itlinux has joined #openstack-keystone | 15:42 | |
johnthetubaguy | the count of existing resources would always return zero I think | 15:42 |
johnthetubaguy | its like, how many tags can you have on an instance, its a per project setting | 15:42 |
johnthetubaguy | but the count is not per project, its per instance | 15:43 |
johnthetubaguy | I messed that up... | 15:43 |
johnthetubaguy | reset | 15:43 |
johnthetubaguy | tags on an instance is a good example | 15:43 |
johnthetubaguy | to add a new tag | 15:43 |
johnthetubaguy | check per project limit | 15:43 |
johnthetubaguy | but you count the instance you are setting it on, not loads of resources owned by that project | 15:43 |
openstackgerrit | XiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url https://review.openstack.org/565411 | 15:44 |
johnthetubaguy | so usually we don't send a delta, we just check the actual number | 15:44 |
johnthetubaguy | but we can probably ignore all that for now | 15:44 |
lbragstad | hmm | 15:44 |
johnthetubaguy | focus on the basic case first | 15:44 |
lbragstad | ok | 15:44 |
openstackgerrit | XiaojueGuan proposed openstack/keystone-specs master: Trivial: Update pypi url to new url https://review.openstack.org/565411 | 15:44 |
lbragstad | i think i agree | 15:44 |
johnthetubaguy | core, instances, ram when doing create instance | 15:44 |
* lbragstad has smoke rolling out his ears | 15:44 | |
johnthetubaguy | did you get how we check quotas twice for every operation | 15:45 |
lbragstad | yeah - i think we could expose that both ways with oslo.limit | 15:45 |
lbragstad | and nova would just iterate the creation call and set each requested usage to 1? | 15:45 |
johnthetubaguy | lbragstad: based on my experience that is a good sign, it means you understand a good chunk of the problem now :) | 15:45 |
lbragstad | to maintain the check for a race condition, right? | 15:45 |
johnthetubaguy | so the second call you would just set the deltas to zero I think, I should check how we do that | 15:46 |
*** r-daneel has joined #openstack-keystone | 15:47 | |
johnthetubaguy | yeah, this is the second check: https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/server_groups.py#L180 | 15:47 |
lbragstad | that feels like it should be two different functions | 15:47 |
johnthetubaguy | delta of zero | 15:47 |
johnthetubaguy | I mean it could be | 15:47 |
lbragstad | the first call you want it to tell you if you can create the thing | 15:48 |
lbragstad | the second call happens after you've created said thing | 15:48 |
johnthetubaguy | yeah | 15:48 |
lbragstad | and the purpose of the second call is to check that someone didn't put us over quota in that time frame, right? | 15:48 |
johnthetubaguy | yeah | 15:49 |
lbragstad | and if they did, we need to clean up the instance | 15:49 |
johnthetubaguy | yeah | 15:49 |
lbragstad | ok | 15:49 |
johnthetubaguy | it is a very poor implementation of this: https://en.wikipedia.org/wiki/Optimistic_concurrency_control | 15:49 |
lbragstad | so the first call happens in the "begin" | 15:50 |
lbragstad | and the second is done in the "validate" | 15:50 |
johnthetubaguy | basically | 15:50 |
lbragstad | ok - i think that makes sense | 15:51 |
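A condensed sketch of that check/create/recheck flow as it might look against a hypothetical oslo.limit interface:

    # Hypothetical sketch -- enforce, create_instance, delete_instance, and
    # ProjectOverLimit are illustrative names.
    limit.enforce(project_id, {'instances': 1}, count_usage)      # 1. room for the delta?
    instance = create_instance(request)                           # 2. create the resource
    try:
        limit.enforce(project_id, {'instances': 0}, count_usage)  # 3. recheck with zero delta
    except ProjectOverLimit:
        delete_instance(instance)   # a racing request pushed us over; clean up
        raise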
johnthetubaguy | although its all a bit fast and loose, kill the API and you will be left over quota, possibly | 15:51 |
johnthetubaguy | it all assumes some kind of rate limiting is in place to limit damage around the edge cases | 15:51 |
lbragstad | right- because you didn't have a chance to process the roll back | 15:51 |
*** r-daneel has quit IRC | 15:51 | |
johnthetubaguy | we don't hide the writes from other transactions, basically | 15:52 |
johnthetubaguy | well, don't hide the writes until commit happens | 15:52 |
johnthetubaguy | its a bit more optimistic that optimistic concurrency control, its more concurrency damage limitation | 15:53 |
johnthetubaguy | s/that/than/ | 15:53 |
* lbragstad nods | 15:53 | |
johnthetubaguy | Ok, did we both burn out yet? I think I am close | 15:53 |
lbragstad | same... | 15:54 |
johnthetubaguy | I think you understand where I was thinking with all this though...? | 15:54 |
johnthetubaguy | does that make more sense to you now? | 15:54 |
lbragstad | i think so | 15:54 |
* johnthetubaguy waves arms about | 15:54 | |
lbragstad | i think i can rework the model | 15:54 |
lbragstad | because things change a lot with the callback bit | 15:54 |
johnthetubaguy | I don't think its too terrible to implement | 15:55 |
lbragstad | yeah - i just need to draw it all out | 15:55 |
lbragstad | but that callback lets us "over-commit" limits but maintain usage | 15:55 |
lbragstad | right? | 15:55 |
johnthetubaguy | I think it means we can implement any enforcement model in the future (if we don't care about performance) | 15:55 |
lbragstad | and that's the important bit you want regarding the VO? | 15:56 |
openstackgerrit | Merged openstack/keystone master: Fix the outdated URL https://review.openstack.org/564714 | 15:56 |
johnthetubaguy | yeah | 15:56 |
lbragstad | ok - so to recap | 15:56 |
lbragstad | 1.) make sure we include a statement about only supporting two-level hierarchies | 15:56 |
lbragstad | 2.) include the callback detail since that's important for usage calculation | 15:56 |
*** empty_cup has joined #openstack-keystone | 15:57 | |
lbragstad | is that it? | 15:57 |
*** r-daneel has joined #openstack-keystone | 15:57 | |
johnthetubaguy | probably... | 15:57 |
johnthetubaguy | I guess what does the callback look like | 15:57 |
*** gyee has joined #openstack-keystone | 15:57 | |
johnthetubaguy | list of resources to count for a given project_id? | 15:57 |
johnthetubaguy | or a list of project_ids and a list of resources to count for each project? | 15:58 |
johnthetubaguy | maybe it doesn't matter, so simpler is better | 15:58 |
lbragstad | i would say it's a method that returns an int that represents the usage | 15:58 |
lbragstad | it can accept a project id | 15:58 |
lbragstad | s/can/must/ | 15:58 |
lbragstad | because it will depend on the structure of the tree | 15:59 |
lbragstad | and just let oslo.limit iterate the tree and call usage | 15:59 |
lbragstad | call for usage* | 15:59 |
empty_cup | i received a 401, and looked at the keystone log and see 3 lines: mfa rules not processed for user; user has no access to project; request requires authentication | 16:00 |
johnthetubaguy | lbragstad: yeah, I think we will need it to request multiple resources in a single go | 16:00 |
wxy| | johnthetubaguy: I don't oppose two-level hierarchies. Actually I like it and I think it is simple enough for the original proposal. | 16:01 |
johnthetubaguy | lbragstad: its a stupid reason, think about nova checking three types of resources, we don't want to fetch the instances three times | 16:01 |
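i.e. the callback would take a project id plus the list of resource names and return all the counts from a single pass, something like this hypothetical signature:

    # Hypothetical callback shape -- one query per project, all resources at once.
    def count_usage(project_id, resource_names):
        usage = query_usage(project_id)   # e.g. a single DB query in the service
        return {name: usage[name] for name in resource_names}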
johnthetubaguy | wxy|: cool, for the record I don't oppose multi-level, its just that feels like a next step, and I want to see us make little steps in the right direction | 16:02 |
*** panbalag has joined #openstack-keystone | 16:02 | |
lbragstad | ++ | 16:02 |
lbragstad | this is a really hard problem... | 16:02 |
johnthetubaguy | yeah, everytime I think its easy I find some new horrible hole! | 16:03 |
lbragstad | exactly... | 16:04 |
johnthetubaguy | would love to get melwitt to take a peek at this stuff, given all the Nova quota bits she has been doing, she has way more detailed knowledge than me | 16:04 |
lbragstad | yea | 16:05 |
lbragstad | i can work on documenting everything | 16:05 |
lbragstad | which should hopefully make it easier | 16:05 |
lbragstad | instead of having to read a mile of scrollback | 16:05 |
johnthetubaguy | awesome, do ping me again for a review when it's ready, I will try to make time for that, almost all our customers will need this eventually, well most of them anyways! | 16:06 |
johnthetubaguy | once the federation is working (like CERN have already) they will hit this need | 16:06 |
*** mchlumsky has joined #openstack-keystone | 16:07 | |
lbragstad | johnthetubaguy: ++ | 16:07 |
johnthetubaguy | OK, so I should probably call it a day, my brain is mashed and its just past 5pm | 16:08 |
lbragstad | johnthetubaguy: sounds like a plan, thanks for the help! | 16:08 |
johnthetubaguy | now is not the time for writing ansible | 16:08 |
johnthetubaguy | no problem, happy I could help | 16:08 |
wxy| | johnthetubaguy: yeah. The only problem I'm concerned about is the implementation. In your spec, enabling the hierarchy is controlled by the API with "include_all_children". It means that in a keystone system, some projects may be hierarchical and some not, which is a little messy for a Keystone system IMO. How about the limit model driver way? Then all projects would behave the same at the same time. | 16:09 |
wxy| | johnthetubaguy: some way like this: https://review.openstack.org/#/c/557696/3 Of course it can be changed to two-level easily. | 16:13 |
*** panbalag has left #openstack-keystone | 16:39 | |
*** wxy| has quit IRC | 16:39 | |
* ayoung moves Keystone meeting one hour earlier on his schedule | 16:54 | |
kmalloc | ayoung: set the meeting time in UTC or the Reykjavik timezone (if your calendar doesn't have UTC) | 16:57 |
kmalloc | ayoung: means you won't have to "move" it explicitly :) | 16:58 |
kmalloc | the latter use of reykjavik was a workaround for google calendar (not sure if that has been fixed) | 16:58 |
*** spilla has quit IRC | 16:58 | |
lbragstad | the ical on eavesdrop actually has the meeting in UTC | 17:02 |
lbragstad | #startmeeting keystone-office-hours | 17:02 |
openstack | lbragstad: Error: Can't start another meeting, one is in progress. Use #endmeeting first. | 17:02 |
lbragstad | ugh | 17:02 |
lbragstad | #endmeeting | 17:02 |
*** openstack changes topic to "Rocky release schedule: https://releases.openstack.org/rocky/schedule.html | Meeting agenda: https://etherpad.openstack.org/p/keystone-weekly-meeting | Bugs that need triaging: http://bit.ly/2iJuN1h | Trello: https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap" | 17:02 | |
openstack | Meeting ended Tue May 1 17:02:43 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 17:02 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-04-24-17.56.html | 17:02 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-04-24-17.56.txt | 17:02 |
openstack | Log: http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-04-24-17.56.log.html | 17:02 |
lbragstad | #startmeeting keystone-office-hours | 17:02 |
openstack | Meeting started Tue May 1 17:02:52 2018 UTC and is due to finish in 60 minutes. The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot. | 17:02 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 17:02 |
*** openstack changes topic to " (Meeting topic: keystone-office-hours)" | 17:02 | |
*** ChanServ changes topic to "Rocky release schedule: https://releases.openstack.org/rocky/schedule.html | Meeting agenda: https://etherpad.openstack.org/p/keystone-weekly-meeting | Bugs that need triaging: http://bit.ly/2iJuN1h | Trello: https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap" | 17:02 | |
openstack | The meeting name has been set to 'keystone_office_hours' | 17:02 |
ayoung | kmalloc, UTC 12:00 right | 17:03 |
lbragstad | that's the second time the meeting bot has failed to stop office hours | 17:03 |
lbragstad | http://eavesdrop.openstack.org/#Keystone_Team_Meeting | 17:03 |
ayoung | What is the goal of office hours anyway? Answering questions from community members? | 17:04 |
lbragstad | yeah, that, closing bugs, working on specs | 17:04 |
lbragstad | it's really just time set aside each week to ensure other people are around | 17:05 |
*** r-daneel_ has joined #openstack-keystone | 17:05 | |
*** r-daneel has quit IRC | 17:07 | |
*** r-daneel has joined #openstack-keystone | 17:08 | |
*** r-daneel_ has quit IRC | 17:10 | |
ayoung | lbragstad, so...what do you think of the idea of optional time-outs on group assignments? | 17:16 |
ayoung | A user comes in via Federation, gets a group assignment and it will last for a configurable amount of time, set in the mapping | 17:16 |
ayoung | once the time is up, the group assignment is no longer valid, and role assignments based on that group are also kaput | 17:17 |
ayoung | log in again, bump the time forward. | 17:17 |
ayoung | an admin can come in and set the time out to "never" to make it permanent. | 17:17 |
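A sketch of the check that idea implies; purely illustrative, since no such expiry exists on group assignments in keystone today:

    # Purely illustrative -- a group membership row would carry an optional
    # expires_at; None means an admin made it permanent. Each federated login
    # would push expires_at forward.
    def group_assignment_is_valid(assignment, now):
        return assignment.expires_at is None or assignment.expires_at > now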
lbragstad | i need to think about that a bit more | 17:18 |
lbragstad | it's interesting though | 17:18 |
ayoung | lbragstad, do we have an appropriate forum at the summit for that? | 17:18 |
lbragstad | not really, we have one for unified limits, default roles, and operator feedback | 17:19 |
lbragstad | s/one/three | 17:19 |
kmalloc | ayoung: yeah UTC (-0 offset) | 17:20 |
knikolla | there is a forum session on federation stuff, but it will probably be extremely high level | 17:23 |
knikolla | https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21786/supporting-general-federation-for-large-scale-collaborations | 17:23 |
*** itlinux has quit IRC | 17:28 | |
*** itlinux has joined #openstack-keystone | 17:41 | |
*** pcichy has quit IRC | 17:44 | |
*** pcichy has joined #openstack-keystone | 17:51 | |
*** pcichy has quit IRC | 17:52 | |
*** pcichy has joined #openstack-keystone | 17:53 | |
ayoung | knikolla, it might be worth discussing there, as I think this is one of the critical topics | 17:56 |
breton | ayoung: that's nice | 17:59 |
breton | ayoung: re: timeout for groups | 17:59 |
ayoung | breton, you like the idea? | 17:59 |
ayoung | What about it appeals to you? | 17:59 |
breton | ayoung: when we were talking about adding users to groups, users being in the group forever was the biggest concern. And i see you bumped https://review.openstack.org/#/c/415545/ already. | 18:02 |
ayoung | breton, yeah, it was a simple bump, but that was before the Time-out discussion happened | 18:02 |
ayoung | I think I like the idea of timeouts for everything. Optional, but the norm for Federation | 18:03 |
breton | timeout equal to token ttl maybe? | 18:04 |
breton | i wonder if something like ?allow_expired will be needed | 18:05 |
*** spilla has joined #openstack-keystone | 18:09 | |
openstackgerrit | Merged openstack/keystone master: Add configuration option for enforcement models https://review.openstack.org/562713 | 18:11 |
ayoung | token TTL should be much shorter | 18:12 |
ayoung | breton, thing trusts. | 18:12 |
ayoung | think | 18:12 |
ayoung | breton, allowed_expired would be way too long a time for these calls. | 18:13 |
*** pcichy has quit IRC | 18:18 | |
*** pcichy has joined #openstack-keystone | 18:18 | |
*** sonuk has joined #openstack-keystone | 18:29 | |
*** sonuk has quit IRC | 18:36 | |
*** felipemonteiro__ has quit IRC | 18:57 | |
*** r-daneel has quit IRC | 19:00 | |
*** r-daneel_ has joined #openstack-keystone | 19:00 | |
*** ayoung has quit IRC | 19:02 | |
*** r-daneel_ is now known as r-daneel | 19:02 | |
*** raildo has quit IRC | 19:44 | |
mordred | lbragstad, cmurphy: if you are bored and want an easy +3 ... https://review.openstack.org/#/c/564495/ | 19:46 |
*** mchlumsky has quit IRC | 20:12 | |
*** itlinux has quit IRC | 20:39 | |
*** itlinux has joined #openstack-keystone | 20:41 | |
*** ayoung has joined #openstack-keystone | 20:55 | |
openstackgerrit | Lance Bragstad proposed openstack/keystone-specs master: Add scenarios to strict hierarchy enforcement model https://review.openstack.org/565412 | 20:59 |
lbragstad | johnthetubaguy: yankcrime ^ | 21:01 |
lbragstad | i think i got most of what we talked about out of my head and on paper | 21:01 |
lbragstad | the main bits are the "model behaviors" section and the "enforcement diagrams" | 21:02 |
lbragstad | #endmeeting | 21:03 |
*** openstack changes topic to "Rocky release schedule: https://releases.openstack.org/rocky/schedule.html | Meeting agenda: https://etherpad.openstack.org/p/keystone-weekly-meeting | Bugs that need triaging: http://bit.ly/2iJuN1h | Trello: https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap" | 21:03 | |
openstack | Meeting ended Tue May 1 21:03:02 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 21:03 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-05-01-17.02.html | 21:03 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-05-01-17.02.txt | 21:03 |
openstack | Log: http://eavesdrop.openstack.org/meetings/keystone_office_hours/2018/keystone_office_hours.2018-05-01-17.02.log.html | 21:03 |
*** dklyle has joined #openstack-keystone | 21:14 | |
*** martinus__ has quit IRC | 21:15 | |
*** spilla has quit IRC | 21:26 | |
*** felipemonteiro__ has joined #openstack-keystone | 21:30 | |
*** felipemonteiro_ has joined #openstack-keystone | 21:33 | |
*** itlinux has quit IRC | 21:34 | |
*** felipemonteiro__ has quit IRC | 21:36 | |
*** idlemind has joined #openstack-keystone | 21:41 | |
*** edmondsw has quit IRC | 21:48 | |
*** pcichy has quit IRC | 21:49 | |
*** pcichy has joined #openstack-keystone | 21:50 | |
*** pcichy has quit IRC | 21:54 | |
openstackgerrit | Merged openstack/keystoneauth master: Allow tuples and sets in interface list https://review.openstack.org/564495 | 21:58 |
*** jmlowe has quit IRC | 22:13 | |
*** dklyle has quit IRC | 22:21 | |
*** lbragstad has quit IRC | 22:22 | |
*** dklyle has joined #openstack-keystone | 22:26 | |
*** rcernin has joined #openstack-keystone | 22:36 | |
*** jmlowe has joined #openstack-keystone | 22:42 | |
*** lbragstad has joined #openstack-keystone | 22:46 | |
*** ChanServ sets mode: +o lbragstad | 22:46 | |
*** felipemonteiro_ has quit IRC | 22:58 | |
*** r-daneel has quit IRC | 23:23 | |
*** itlinux has joined #openstack-keystone | 23:34 | |
*** itlinux has quit IRC | 23:39 | |
*** r-daneel has joined #openstack-keystone | 23:50 | |
*** dklyle has quit IRC | 23:51 | |
*** dklyle has joined #openstack-keystone | 23:57 |