*** dklyle has quit IRC | 00:03 | |
TheJulia | friction, or a failure to convey (or maybe listen)? | 00:30 |
TheJulia | Just thinking of failure to listen from when there were people saying we must support live-migration like capabilities in bare metal :\ | 00:32 |
fungi | the xml-all-the-things folks eventually gave up and went away. the tosca-or-bust folks finally gave up and went away. the enterprise working group is now defunct i think... | 00:36 |
fungi | maybe the message will be a little easier with the present incarnation of our community | 00:36 |
*** lbragstad has joined #openstack-tc | 01:02 | |
TheJulia | what has not been proposed is not advocating one true way, but a perspective I guess | 01:32 |
*** ricolin has joined #openstack-tc | 01:35 | |
fungi | advocating one true way is sort of our status quo up to now | 01:46 |
fungi | er, ^not | 01:47 |
*** zaneb has joined #openstack-tc | 03:02 | |
*** zaneb has quit IRC | 03:21 | |
*** rosmaita has quit IRC | 03:22 | |
openstackgerrit | lei zhang proposed openstack/constellations master: Add Apple OS X ".DS_Store" to ".gitignore" file https://review.openstack.org/592259 | 04:05 |
openstackgerrit | Merged openstack/constellations master: import zuul job settings from project-config https://review.openstack.org/591739 | 04:13 |
*** gcb_ has quit IRC | 04:31 | |
*** gcb_ has joined #openstack-tc | 04:47 | |
*** e0ne has joined #openstack-tc | 05:18 | |
*** e0ne has quit IRC | 06:04 | |
*** e0ne has joined #openstack-tc | 06:12 | |
*** e0ne has quit IRC | 06:56 | |
*** e0ne has joined #openstack-tc | 06:57 | |
*** e0ne has quit IRC | 06:59 | |
openstackgerrit | Andreas Jaeger proposed openstack/governance master: Retire rst2bash https://review.openstack.org/592293 | 07:16 |
openstackgerrit | Andreas Jaeger proposed openstack/governance master: Retire rst2bash (step 5) https://review.openstack.org/592293 | 07:17 |
*** jpich has joined #openstack-tc | 07:34 | |
openstackgerrit | Merged openstack/constellations master: switch documentation job to new PTI https://review.openstack.org/591740 | 07:46 |
openstackgerrit | Merged openstack/constellations master: use python3 in tox https://review.openstack.org/591786 | 07:47 |
*** e0ne has joined #openstack-tc | 07:48 | |
*** ricolin has quit IRC | 08:12 | |
*** ianychoi_ has quit IRC | 08:42 | |
evrardjp | morning | 08:42 |
*** cdent has joined #openstack-tc | 09:08 | |
*** jaosorior has quit IRC | 09:10 | |
*** e0ne has quit IRC | 09:14 | |
*** lbragstad has quit IRC | 10:25 | |
*** jaosorior has joined #openstack-tc | 10:28 | |
*** gcb_ has quit IRC | 11:27 | |
*** e0ne has joined #openstack-tc | 11:32 | |
*** e0ne has quit IRC | 11:47 | |
*** e0ne has joined #openstack-tc | 11:50 | |
*** cdent has quit IRC | 11:57 | |
*** rosmaita has joined #openstack-tc | 12:24 | |
*** jaosorior has quit IRC | 12:25 | |
*** jaosorior has joined #openstack-tc | 12:25 | |
*** cdent has joined #openstack-tc | 12:36 | |
*** e0ne has quit IRC | 13:05 | |
*** jaosorior has quit IRC | 13:14 | |
*** jaosorior has joined #openstack-tc | 13:15 | |
*** jaosorior has quit IRC | 13:20 | |
openstackgerrit | Merged openstack/governance master: Update IRC nicks for PTLs https://review.openstack.org/590082 | 13:28 |
*** e0ne has joined #openstack-tc | 13:32 | |
*** e0ne has quit IRC | 13:38 | |
*** e0ne has joined #openstack-tc | 13:39 | |
*** e0ne has quit IRC | 13:56 | |
*** zaneb has joined #openstack-tc | 14:03 | |
*** cdent has quit IRC | 14:12 | |
TheJulia | Good morning | 14:16 |
dhellmann | o/ | 14:16 |
smcginnis | o/ | 14:18 |
*** AlanClark has joined #openstack-tc | 14:22 | |
*** AlanClark has quit IRC | 14:28 | |
aspiers | mugsie: you around at the moment? | 14:30 |
*** jaosorior has joined #openstack-tc | 14:32 | |
mugsie | aspiers: kind of - sitting in a meeting :) | 14:32 |
aspiers | you going to Denver? | 14:33 |
mugsie | i am | 14:33 |
aspiers | see https://review.openstack.org/#/c/531456/7 | 14:33 |
aspiers | latest comments | 14:33 |
*** AlanClark has joined #openstack-tc | 14:35 | |
*** annabelleB has joined #openstack-tc | 14:43 | |
*** e0ne has joined #openstack-tc | 14:46 | |
jaypipes | zaneb: really great start on the tech vision document. thanks for putting that together. | 14:55 |
zaneb | jaypipes: thanks, I appreciate your feedback | 14:55 |
jaypipes | zaneb: I can try and add some wording around the need for concrete definitions of an AZ and region later today. | 14:56 |
zaneb | that would be awesome | 14:56 |
zaneb | I really feel like I am outside my area of expertise on that subject | 14:56 |
jaypipes | zaneb: well, I actually don't want to get into the implementation details in the tech vision doc. I'm more interested in just having something in there that recognizes the *need* for clarity and definition in the arena of failure domains and partitions of a deployment. | 14:57 |
zaneb | yes, +1 | 14:59 |
zaneb | I recognise the need for that but my understanding of it is fairly shallow. it needs somebody who understands the details deeply to write a good high-level summary | 15:00 |
mnaser | tc-members: office hours :) | 15:01 |
cmurphy | o/ | 15:01 |
EmilienM | o/ | 15:01 |
dhellmann | o/ | 15:01 |
*** cdent has joined #openstack-tc | 15:01 | |
TheJulia | o/ | 15:01 |
fungi | woo! | 15:01 |
smcginnis | o/ | 15:01 |
mnaser | i have an interesting question to ask: should openstack deliverables have some sort of enforcement that they are validated by tempest? | 15:02 |
mnaser | i.e.: what if a team starts building their tempest alternative.. can we accept that project to release based on their own validation | 15:02 |
cdent | o/ | 15:02 |
fungi | gotta be careful there. we have _lots_ of "deliverables" tempest doesn't and can't see | 15:02 |
fungi | documentation repositories, for example, probably have little connection with tempest testing | 15:03 |
TheJulia | fungi's comment is my exact concern | 15:03 |
fungi | or, how would you tempest test oslo.db? | 15:03 |
TheJulia | or sushy (speaking of which, we should talk more about) | 15:03 |
fungi | (aside from it being exercised by things you're testing, of course) | 15:04 |
mnaser | so in my case: there seems to be an interest of a few to build our own 'health checks' to run at the end of an openstack ansible deployment rather than tempest. i'm not so comfortable with it, but i also don't want to be someone who stops others from doing things because im not comfortable with it | 15:04 |
mnaser | so more of a ptl hat asking for tc input in this case | 15:04 |
TheJulia | mnaser: is the desire the healthcheck reporting stuff that people want on the api so they can just poll to see if things look sane/good? | 15:04 |
fungi | mnaser: well, you likely at least need some sorts of checking besides what tempest offers. it's really focused on api tests, so if you want to test things other than the api you wouldn't want to shoehorn that into tempest | 15:05 |
mnaser | TheJulia: some of it is yes, but others is "tempest always breaks us so we want to do something that does a subset of what tempest does" | 15:05 |
evrardjp | mnaser: indeed. | 15:05 |
mnaser | my thought is that we can always cherry-pick a few tempest tests to run rather than the whole suite, which emulates a 'health check' | 15:05 |
evrardjp | The idea was to rely on projects doing the tempest :) | 15:05 |
cdent | mnaser: I think that's a valid concern. If _other_ things in tempest are causing a run to fail it just makes you wanna flip tables | 15:05 |
TheJulia | mnaser: so it sounds like part of it makes lots of sense; there is an oslo healthcheck thing and discussion to evolve it I believe (i may be way out of date), but at the same time the rest should be "let's go find those issues and fix them" | 15:06 |
dhellmann | mnaser : tempest does support running a subset of tests | 15:06 |
evrardjp | cdent: the problem is timeouts in our case | 15:06 |
cdent | evrardjp: ah yeah, i know how that can be | 15:06 |
evrardjp | moving to our own is not timeout specific as it is actually synchronous | 15:06 |
evrardjp | but that's an OSA detail :) | 15:06 |
mnaser | dhellmann: that's what i'm trying to propose, is that we run a smaller subset of specific tempest tests, so we'll still use tempest but dont have to write up our own stuff | 15:06 |
TheJulia | If tempest fails, there is a reason. It might not be clear out of the box, but those issues can be fixed with careful study of logs and problem identification | 15:06 |
mnaser | TheJulia: absolutely, and i think there is value in using a tool that is used across all other projects.. so it's easy for anyone to dive in | 15:07 |
cdent | TheJulia: frequently tempest fails because the target cloud is oversubscribed or the timeouts are wrong | 15:07 |
evrardjp | TheJulia: that's fair | 15:07 |
TheJulia | dhellmann: uhh, we do it all the time in ironic | 15:07 |
cdent | constantly changing timeouts is not a fun game | 15:07 |
dhellmann | I'm not necessarily sure it's fair to ask someone working on a deployment tool to figure out why a cinder volume test is failing in a flakey manner. So while I don't think we want an alternative to tempest, I do think having a subset of the tests that are known to be stable, fast, etc. is a good idea. | 15:07 |
evrardjp | cdent: that's what we do. I have the impression we are hitting limits | 15:07 |
TheJulia | no, and timeouts should be geared for the variability we will normally see in clouds we test on | 15:07 |
TheJulia | that takes time to find | 15:07 |
smcginnis | dhellmann: ++ | 15:07 |
dhellmann | not to pick on cinder, that's just something I've dealt with in the last week | 15:08 |
zaneb | dhellmann: ++ | 15:08 |
evrardjp | mnaser: I think OSA doesn't want to replace tempest -- it wants to build a better community relationship with ansible modules. | 15:08 |
TheJulia | tempest is not a performance measurement tool, we shouldn't have timeouts such that we're doing that. If we are, then we're wasting cloud provider resources with rechecks | 15:08 |
dhellmann | mnaser : maybe gmann and the qa team would be able to advise on how to go about defining such a test suite? | 15:08 |
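The "curated subset" approach mnaser and dhellmann sketch above could look something like this. This is an illustrative sketch only: the test names and file name are assumptions, not a vetted or endorsed stable set, and the exact selection is the kind of thing the QA team would advise on.

```python
# Hypothetical "health check" subset: write a whitelist file and point
# tempest at it instead of running the full suite. The test names below
# are illustrative assumptions, not an endorsed list.
SMOKE_TESTS = [
    "tempest.api.identity.v3.test_tokens",
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.volume.test_volumes_get",
]

# Write one test name per line, the format tempest's whitelist expects.
with open("tempest-healthcheck.txt", "w") as f:
    f.write("\n".join(SMOKE_TESTS) + "\n")

# The subset can then be run with something like:
#   tempest run --whitelist-file tempest-healthcheck.txt
```

tempest also supports regex-based selection (`tempest run --regex ...`), so a deployment project could keep its curated list in-tree and share it with other tools (as proposed for the tripleo/OSA shared ansible role) without writing a tempest alternative.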
TheJulia | dhellmann: but it would be good if they reached out to the cinder folks and said "hey, we keep seeing this, help!" | 15:08 |
dhellmann | TheJulia : oh, definitely | 15:09 |
mnaser | dhellmann: well, we started with some progress on that, chandankumar from the tripleo team started pushing some work up to our ansible role so that we can both consume the same role | 15:09 |
mnaser | so i think by having tripleo and openstack-ansible both use the same role, we can together find a good balance of a subset of tests that are stable and reliable for both of us | 15:09 |
TheJulia | and if that ask is not coming across, it is hurting the community at large, because the cinder folks might look at the job and go "oh, this!"... which is just as likely as "interesting, and we'll be back to you in a month, we need to go fix something" | 15:09 |
evrardjp | TheJulia: It's probably OSA's fault because it's a race condition, but indeed it's only on cinder :p | 15:09 |
dhellmann | it would also be nice if we could get some of the deployment tools to a state where we could gate services on using them for deployments | 15:09 |
mnaser | because imho tripleo is way ahead of OSA in terms of figuring out the right tests to whitelist/blacklist | 15:10 |
evrardjp | dhellmann: ++ | 15:10 |
dhellmann | TheJulia : sure. there's a balance though about what tests really make sense to run in what jobs. We don't have to run *everything* all the time. | 15:10 |
mnaser | and leveraging that would be great.. i can imagine it being of value to kolla-ansible too | 15:10 |
TheJulia | dhellmann: exactly! | 15:10 |
TheJulia | We had to learn that in ironic the hard way | 15:10 |
mnaser | TheJulia: yeah, i think for some reason we feel like the fault is at our side because we assume that if cinder gates are passing, we're doing something wrong | 15:11 |
mnaser | so i wouldn't want to ping cinder people when their own gates are passing fine | 15:11 |
dhellmann | if we start with a "small" subset and the deployment projects gate on those, then we could potentially have the service projects run the same gate jobs and then expand that test suite | 15:11 |
mnaser | (but maybe that's something i should change) | 15:11 |
smcginnis | evrardjp: What is the OSA/Cinder issue? | 15:11 |
evrardjp | dhellmann: that's why in OSA we've been slowly changing the configuration of what runs in tempest, to the point where I wondered: does that still make sense, when I can build the equivalent with ansible modules, to help ansible and help ourselves. | 15:11 |
mnaser | evrardjp: but ansible modules already get gating within shade/etc | 15:11 |
evrardjp | smcginnis: we follow the procedure to restart the services but it seems that cinder-scheduler now has to be restarted last. | 15:11 |
TheJulia | mnaser: I think it is worth a try at least, the cinder folks don't bite. | 15:11 |
mnaser | smcginnis: we've been struggling with services not restarting cleanly i guess, rabbitmq would reset the connections, volumes dont get created | 15:12 |
dhellmann | TheJulia : some teams do, though | 15:12 |
mordred | I didn't do it | 15:12 |
evrardjp | smcginnis: I have to dig deeper in this though. | 15:12 |
evrardjp | mordred: sure you didn't. | 15:12 |
evrardjp | :p | 15:12 |
TheJulia | dhellmann: true, and in those cases, we likely need to set a tone that it is better for the community to work together to solve the problem | 15:12 |
mnaser | idk, every time it's another issue, i'm 4 patches deep into a stack, i think i've made some progress, but still -- https://review.openstack.org/#/c/591961/4 | 15:12 |
TheJulia | well, the tone should already be set, perhaps refresher to the principles | 15:13 |
smcginnis | evrardjp: That seems odd. If that's the case, that's contrary to our published and tested (at least at some point) upgrade instructions - https://docs.openstack.org/cinder/latest/upgrade.html#during-maintenance-window | 15:13 |
mordred | evrardjp: I haven't read the whole scrollback - but I'd LOVE to get better/more comprehensive testing on the ansible modules upstream | 15:13 |
evrardjp | smcginnis: that's for sure, and it didn't appear in the past. | 15:13 |
mnaser | the cinder team has been extremely helpful, i just personally feel like there's no need to ask for their time if their gates are passing | 15:13 |
mordred | evrardjp: also, now that we have github cross-testing with ansible/ansible for them, it would be great to move the ansible module tests from theopenstacksdk repo to ansible/ansible | 15:14 |
smcginnis | mnaser, evrardjp: And grenade is a full outage, right? | 15:14 |
mordred | if anyone wants to work more on cross-testing with upstream openstack modules, that would likely be a great todo-list item to add - and openstacksdk will definitely want to particupate in that gate | 15:14 |
TheJulia | mnaser: There is a huge difference between devstack gates and production like deployments though. | 15:14 |
*** ricolin has joined #openstack-tc | 15:15 | |
mnaser | smcginnis: it's not an upgrade issue in this case, we don't even have upgrade jobs right now because that's a whole other effort. this is just standing up a new deployment | 15:15 |
smcginnis | Oh weird. | 15:15 |
evrardjp | Well I guess the question from mnaser was more about asking whether it is a problem to not use tempest. It's not, but it's not a good idea either. | 15:16 |
evrardjp | Did I summarise well? | 15:16 |
evrardjp | The other issues have to be handled in the appropriate channels :) | 15:16 |
mnaser | smcginnis: you can look at the stack that i linked, you'll see different failures for different reasons, a lot of it having to do with rabbitmq somehow resetting the connection, but yeah | 15:16 |
mordred | yah. it seems to me that tempest is a low-water-mark - you should at least have tempest. but also, adding things like "run the ansible-modules test job against your deployment" might be good additional things to do | 15:16 |
evrardjp | mordred: well it depends | 15:17 |
dhellmann | it sounds like we're getting into the technical issues of these specific upgrade tests, and those are probably better handled between the project teams | 15:17 |
mnaser | dhellmann: ++ | 15:17 |
mordred | evrardjp: it always does | 15:17 |
evrardjp | at some point if you're doing the same thing in tempest as in ansible, why carry both :p | 15:17 |
mordred | evrardjp: fwiw, openstacksdk-ansible-devel-functional-devstack is the current functional test job for the ansible modules upstream | 15:17 |
mordred | evrardjp: well, they test different things | 15:17 |
TheJulia | I see tempest as contract verification more than anything else, so ++ to what mordred said | 15:17 |
evrardjp | it doesn't prevent us from using tempest in periodics. Just thinking of the path of least resistance | 15:17 |
mordred | tempest tests that your openstack is correct | 15:18 |
mordred | the ansible modules work around a LOT of divergence | 15:18 |
evrardjp | that's for sure, but what does matter for a deployment project? | 15:18 |
mordred | so only using the ansible modules to test you could produce a very strangely behaving cloud, but shade goes out of its way to fix it for you | 15:18 |
TheJulia | Anyway, so I have a non-technical topic I guess :) | 15:18 |
mordred | so any users _not_ using shade/ansible would have a hard time using the cloud | 15:18 |
mordred | but to you you'd think it was all fine because the ansible tests passed | 15:19 |
evrardjp | mordred: that's fair and an interesting point. | 15:19 |
mordred | I think both are an excellent idea | 15:19 |
mordred | like - if the ansible modules tests don't work against your cloud your cloud is REALLY broken | 15:19 |
evrardjp | :) | 15:19 |
dhellmann | TheJulia : what's on your mind? | 15:19 |
mordred | but passing them doesn't mean it would pass refstack | 15:20 |
mnaser | mordred: i think that this type of thing can be moved to a project level discussion where openstacksdk/ansiblejobs/etc can maybe have a job where they deploy using $tool and test against it | 15:20 |
mordred | mnaser: ++ | 15:20 |
fungi | to provide another example, the debian packaging team for openstack runs tempest against a deployment of their packages, to sanity test them | 15:20 |
mnaser | i think it would be good to talk about TheJulia's topic which i assume is along the lines of sushy :p | 15:20 |
mordred | openstacksdk and the upstream ansible modules both welcome such jobs | 15:20 |
mordred | ++ | 15:20 |
* mordred shuts up now | 15:20 | |
TheJulia | There is a discussion on the legal-discuss list regarding sushy and the inclusion of content that "will be" or "now is" CC-BY-3.0 licensed, which is in the form of a json document to fall back upon for error message handling. | 15:21 |
TheJulia | http://lists.openstack.org/pipermail/legal-discuss/2018-August/000505.html | 15:21 |
TheJulia | since sushy is a client for the Redfish API, and not all redfish API endpoints may have the proper document locally for error translation | 15:22 |
TheJulia | (not language translation, they return error codes and message ID values in their API) | 15:22 |
TheJulia | I feel like the consensus is leaning towards this is not an issue, but I wanted to raise awareness to this discussion and see if there are thoughts | 15:23 |
dhellmann | having sushy reuse that data file seems fine as far as I can tell | 15:23 |
dhellmann | the licenses are compatible, we have precedent, and the redfish folks expressed that allowing it was their intent | 15:23 |
mnaser | TheJulia: forgive me if i'm not understanding this correctly. sushy is a redfish api client.. but if they raise an error and they dont have said document, what happens? | 15:23 |
dhellmann | so in the absence of a clear answer from the foundation lawyer that it is not ok, I think you should go ahead | 15:23 |
TheJulia | and it is the standards body that is wanting to put it in place | 15:23 |
dhellmann | mnaser : the error contains what amounts to an exception name, without any real details | 15:24 |
mnaser | if it's not super difficult | 15:24 |
TheJulia | mnaser: and the intent is, if the translation for the exception name is not on the remote side, to have a local copy ship with the library so that it can report a human-consumable error message | 15:25 |
mnaser | is it hard to maybe load the data if it's located in /usr/share/foo and if not just raise a generic exception? that way we just avoid all this | 15:25 |
mnaser | i know it's extra work, but.. that would just avoid all this | 15:25 |
TheJulia | mnaser: so they were originally thinking that it could be downloaded, but then it starts becoming like SNMP mibs | 15:25 |
TheJulia | and many deployments are not internet connected | 15:26 |
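The offline fallback TheJulia describes could be sketched roughly along these lines. Everything here is an assumption for illustration (the system path, the message ID, and the function name), not sushy's actual implementation:

```python
import json
import os

# Bundled copy of the (CC-BY-3.0) message registry shipped with the
# library, standing in for the real json document. The message ID and
# text are illustrative assumptions.
BUNDLED_REGISTRY = {
    "Base.1.0.GeneralError": "A general error has occurred.",
}


def resolve_message(message_id,
                    system_path="/usr/share/redfish/registries/Base.json"):
    """Translate a Redfish message ID into human-readable text.

    Prefer a registry document installed on the system; fall back to the
    copy bundled with the library (useful on non-internet-connected
    deployments); degrade to a generic message if the ID is unknown.
    """
    registry = BUNDLED_REGISTRY
    if os.path.exists(system_path):
        with open(system_path) as f:
            registry = json.load(f)
    return registry.get(message_id,
                        "Unknown Redfish message: %s" % message_id)
```

The point of the bundled copy is exactly the one made above: unlike downloading registries on demand (the SNMP-MIB failure mode), a local fallback keeps error reporting working on air-gapped deployments.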
dhellmann | other than being cautious about making what seems like a legal decision, does anyone see any clear reason this shouldn't be allowed? | 15:26 |
dhellmann | (including the file in sushy directly) | 15:27 |
TheJulia | I do not | 15:27 |
dhellmann | tc-members? | 15:27 |
dhellmann | one provision wendar pointed out was giving clear attribution. have you given any thought to how you would do that? | 15:27 |
zaneb | nope | 15:27 |
cdent | I detached from the sushy conversation pretty early on as it seemed to be teapotting | 15:28 |
TheJulia | dhellmann: I need to double check the file and contents, I'm sure that can be sorted in the review process. For now I've blocked the series of changes for the feature they are trying to implement until we get this sorted out | 15:28 |
cdent | I think if there's a good doc of the situation somewhere, just include the stuff | 15:28 |
dhellmann | TheJulia : ok. a release note seems obvious. updating your LICENSE file might be a good idea, too. | 15:29 |
TheJulia | cdent: I think we would also likely end up documenting it very clearly as well | 15:29 |
TheJulia | including LICENSE | 15:29 |
dhellmann | ok, cool | 15:29 |
dhellmann | go forth and make software | 15:29 |
TheJulia | \o/ | 15:29 |
cdent | word | 15:29 |
TheJulia | Next! | 15:29 |
dhellmann | I will reply to the mailing list thread for the record | 15:29 |
dhellmann | tc-members: so far we have 1 nay vote to remove freezer from governance (https://review.openstack.org/588645) and 1 nay vote to remove searchlight (https://review.openstack.org/588644). Should I interpret the votes in favor of appointing PTLs for those teams as a desire to keep the teams going and approve the appointments? | 15:29 |
smcginnis | I have a feeling we would end up with the same situation in Cinder if we ever support Swordfish (storage extensions to the Redfish API). | 15:29 |
TheJulia | smcginnis: likely, if you need to translate resource data from exceptions or labels in their API. Speaking of which, people are working on storage interface support in sushy...... | 15:30 |
fungi | dhellmann: since the -1 on the removal changes is mine, i guess i'm not the one you're asking ;) | 15:31 |
TheJulia | dhellmann: I feel like there were 4-5 of us that discussed that "yesterday" or the day before and it seemed like we were in favor of appointing, monitoring, and then removing IF they fail again. | 15:31 |
TheJulia | basically, someone is stepping up, lets give them a chance mindset | 15:32 |
fungi | yeah, my -1 on those wasn't a no-never but more of a not-yet | 15:32 |
smcginnis | TheJulia: ++ (for sushy and PTL appointment comments) | 15:32 |
dhellmann | fungi : no, I'm trying to get some of your colleagues to respond to the question :-) | 15:32 |
dhellmann | TheJulia : I would feel more comfortable if you all registered a vote on both patches | 15:33 |
dhellmann | I don't like abandoning a proposal that has so few votes on it, just as a matter of practice | 15:33 |
TheJulia | I'm good with that | 15:33 |
* TheJulia adds that to the after the current patch todo list | 15:33 | |
dhellmann | it helps me ensure that I am not making assumptions about what the consensus really is | 15:33 |
TheJulia | ++ | 15:34 |
dhellmann | if we get a few more -1 votes registered today I can approve those PTL appointments and we can stop having to talk about this :-) | 15:34 |
TheJulia | \o/ | 15:34 |
jbryce | hi tc folk. is it still office hours? | 15:35 |
dhellmann | hi, jbryce, yes | 15:35 |
fungi | jbryce: yes, for another ~25 minutes | 15:35 |
TheJulia | o/ jbryce | 15:35 |
dhellmann | though you don't have to wait for that if you have something to bring up, of course | 15:35 |
fungi | but it's also not a hard-and-fast timeblock | 15:35 |
fungi | yeah, we're happy to discuss things in here any time, office hours or no | 15:35 |
fungi | just (in theory) we have more people on hand to discuss things during scheduled office hours | 15:36 |
smcginnis | Some of them at least. | 15:36 |
jbryce | i didn't really have anything specific to bring up. i just wanted to stop in during an office hours. i've been on the road quite a bit over the last few weeks in random timezones and have seen things (or my name) pop up at times | 15:36 |
jbryce | but have usually seen them many hours after the discussion had moved on. so i thought i'd drop by and see if there were any foundation/staff/jbryce related things that y'all wanted to hit on | 15:37 |
* dhellmann tries to remember what that might have been | 15:37 | |
smcginnis | jbryce: We have a TC meeting set for the Sunday before the PTG. Will you be around then? | 15:38 |
fungi | jbryce: if you haven't seen it yet, zaneb has proposed a draft technical vision for openstack on which you might be interested in providing feedback: https://review.openstack.org/592205 | 15:38 |
jbryce | a couple that i remember are setting up some time at the ptg to talk about syncing up more, another one was cdent saying he had a slight jealous feeling sometimes about project management resources on new projects...trying to think of others | 15:38 |
jbryce | fungi: thanks! i was just taking a look at that | 15:38 |
cdent | jbryce: if you have thoughts to add to https://etherpad.openstack.org/p/tc-board-foundation before the ptg, that would help drive that chat some | 15:38 |
mnaser | http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-03.log.html#t2018-08-03T12:39:22 | 15:39 |
dhellmann | one of the topics on the agenda for that sunday meeting is planning for the joint leadership meeting in berlin | 15:39 |
cdent | jbryce: yeah, I wasn't sure whether it was jealousy or not, just kind of a "aw, I want some candy, maybe" | 15:39 |
dhellmann | https://etherpad.openstack.org/p/tc-stein-ptg | 15:39 |
jbryce | smcginnis: i think i'm currently arriving sunday. is there a time set for the meeting on sunday? | 15:39 |
dhellmann | jbryce : 1-5 | 15:39 |
smcginnis | ^ | 15:39 |
smcginnis | Room is back on the second floor, right? | 15:39 |
dhellmann | we'll be meeting on friday as well, so if we need to carry over part of the conversation we can do that | 15:39 |
dhellmann | it's somewhere behind the restaurant on the "atrium level" | 15:40 |
smcginnis | Where: In the “Private Dining Room” located on the Atrium Level. | 15:40 |
jbryce | ok. i put it on my calendar and will check if my travel arrangements get me in ahead of that | 15:40 |
dhellmann | there's a map linked in the etherpad | 15:40 |
jbryce | cdent: no judgement on the wanting candy | 15:40 |
fungi | happy to have you join us! sounds like AlanClark was going to try to be there as well? | 15:40 |
dhellmann | Kendall did a nice job on short notice finding us a room | 15:40 |
fungi | yes, she was super, super helpful | 15:40 |
dhellmann | fungi : yes, AlanClark said he would be | 15:40 |
jbryce | cdent: historically, some of the staff have gotten pretty strong pushback about doing those project management type things on openstack projects. but if there are areas that teams or people would like help now as resources are less abundant, i would definitely want to hear it | 15:42 |
TheJulia | jbryce: I think part of it is a perception disconnect issue | 15:42 |
dhellmann | that's an interesting perspective; I didn't realize there had been pushback | 15:43 |
fungi | well, it's a lot of stuff like foundation staff pushing these projects to arrange public meetings, plan their governance, hold elections... | 15:43 |
fungi | figure out how they're going to handle reports of suspected security vulnerabilities | 15:44 |
jbryce | i care more about openstack (in the traditional technology sense) success than ever before and i don't think there's actually a way forward for the foundation or new projects if openstack falls down. it's all very connected, so i would want to find ways to make sure people understand i and others see it that way | 15:44 |
TheJulia | It seems that perhaps some context sharing time on Sunday may be good | 15:45 |
TheJulia | If flights permit and all | 15:45 |
jbryce | ok sounds good | 15:45 |
persia | jbryce: In other contexts, I've seen staff be praised strongly by the community when offering administrative support or volunteering to be delegated to take care of things. I've seen pushback when staff (sometimes the same people) are described as "managing" things (even in minor ways, like "event manager"). I think more problems arise because of names in these situations than because of activities. | 15:45 |
persia | As an extreme example, I once saw staff praised for maintaining a (correct) gantt chart of all the development activities of all the developers in a project (only about 100 contributors), mostly because the staff member claimed to be a "scribe", rather than a "project manager" or similar. | 15:46 |
fungi | having seen some of it from the inside, the bulk of the handholding being done by foundation staff for some of the new osf projects is stuff the openstack project is already adept at and has long-standing solutions for | 15:47 |
TheJulia | persia: ++, and some people see it as their responsibility to take such tasks on or that it should be kept in the team. | 15:47 |
jbryce | dhellmann: and we still mostly work on openstack project related things (like right now, lots of effort going into gathering all the release updates) | 15:47 |
dhellmann | yes, annabelleB's work on that has meshed really well with the release team | 15:48 |
fungi | annabelleB is basically a fixture in the release team now | 15:48 |
TheJulia | This seems like a good thing | 15:48 |
dhellmann | indeed | 15:48 |
fungi | she's extremely involved | 15:48 |
persia | Is there any particular reason that nobody wrote "since annabelleB joined the release team, we've had excellent progress getting the release updates"? | 15:49 |
* persia thinks that mental separation between "staff" and "people" turns "Staff" into "non-people", which is awkward | 15:49 | |
TheJulia | persia: in progress still? | 15:50 |
jbryce | on the new direction stuff overall, i'm working on an email update to the foundation mailing list that puts out a basic draft for governance around strategic focus areas and open infrastructure projects | 15:50 |
TheJulia | jbryce: It would be interesting to hear her point of view. Oh annabelleB! :) | 15:50 |
dhellmann | jbryce: nice, I'm looking forward to participating in that discussion. | 15:50 |
*** e0ne has quit IRC | 15:50 | |
annabelleB | o/ catching up here and reading through the conversation | 15:51 |
jbryce | there's a board working group that has 2 meetings scheduled between now and the september board meeting (in about a month) so the hope is to have this portion of it mostly fleshed out by then and can then move to technical implementation details like drafting bylaws amendments if necessary ahead of berlin | 15:51 |
jbryce | which would then be on the january ballot | 15:52 |
jbryce | so overall timeline is basic outline in mid-late september, targeting final board approval in november then, if necessary, final steps in january election | 15:53 |
dhellmann | that sounds like it should leave time for discussion, as long as the proposals are delivered before the meetings | 15:53 |
fungi | persia: i think everyone's phrasing was to avoid conflating helpful (and welcome participation) with access controls granted to https://review.openstack.org/#/admin/groups/Release%20Managers | 15:55 |
* fungi sees a foundation staffer and also an actual non-person in that group, fwiw! | 15:55 | |
persia | fungi: Aha. That does seem to be a sensible distinction. | 15:55 |
persia | To be clear, I really appreciate the value added by actual non-persons to discussions in some openstack channels. I just think that unless the non-person's internal representation is known to be simple enough (e.g. by code inspection) to not be capable of emotion, it is better to consider them a "person". | 15:57 |
fungi | jbryce: thanks, that sounds like a reasonable schedule to meet the 2019 election window! | 15:57 |
persia | Whereas, for many roles in openstack, it seems nearly any person is welcomed to the role, assuming they are happy to help fill the role. To me it is key to treat those people the same, whether they are staffer or not. | 15:57 |
persia | If there are special ACLs that someone might not want to be on, then it makes sense not to say they joined a team (I know I am not on some teams for precisely those reasons). | 15:58 |
fungi | yup, same for me (and many/most of us, i'll wager) | 15:58 |
dhellmann | I think in this case it's simply a matter of "not yet" -- we're working with annabelleB on reviewing | 16:00 |
dhellmann | so in one sense she's part of the team in that we collaborate, but in another she's not in that she's not on that gerrit list, yet | 16:00 |
annabelleB | (it’s quite a process to learn! dhellmann and crew have years of knowledge i’m trying to absorb) | 16:00 |
dhellmann | annabelleB : you're doing well! | 16:01 |
annabelleB | thanks dhellmann!! | 16:01 |
zaneb | that seems similar to other teams where a subset of team members have core review permissions in Gerrit though... | 16:01 |
fungi | for sure | 16:01 |
dhellmann | yes, I'm not sure why this turned into such a pedantic semantic discussion. We love working with annabelleB. | 16:02 |
fungi | thursdays are for pedantry | 16:02 |
annabelleB | and I love working with y’all! | 16:02 |
dhellmann | /me makes a note to take thursdays off from now on | 16:02 |
evrardjp | Thank god it's friday then fungi . | 16:02 |
fungi | i guess our office hour really did wrap up on time for a change | 16:08 |
TheJulia | fungi: weird.... | 16:08 |
* TheJulia goes and finds food | 16:08 | |
zaneb | cdent, ttx: would either of y'all be interested in joining a proposal for a session about the technical vision for https://etherpad.openstack.org/p/PTG4-postlunch ? | 16:10 |
cdent | zaneb: if you like, sure. did you have a slice of the topic in mind? | 16:11 |
zaneb | cdent: I think just letting everyone know that the proposal exists and that they should read it would be a good start | 16:12 |
cdent | ✔ | 16:13 |
zaneb | we could also try to cover the content if we have time | 16:13 |
cdent | yeah, you can put me down for being willing | 16:13 |
zaneb | thanks, I'll add it to the etherpad | 16:13 |
cdent | this use of PTG4 in a few places has thrown me off a bit | 16:14 |
cdent | not in a bad way, just like a pothole | 16:15 |
persia | dhellmann: Apologies: it was because I assumed that you loved working with annabelleB that I used that situation as an example for semantics. Some prior discussion suggested that other staff members had received pushback for (unspecified) attempts to help. Using a clearly positive experience as an example was intended to show that the pushback wasn't meant as "don't do this", but rather just a tone/wording thing. | 16:16 |
zaneb | cdent: yeah, I'm gonna lose count pretty soon | 16:16 |
openstackgerrit | Merged openstack/governance master: Adding Changcai Geng candidacy for Freezer PTL https://review.openstack.org/590071 | 16:16 |
openstackgerrit | Merged openstack/governance master: mark Changcai Geng as appointed PTL for Freezer https://review.openstack.org/591472 | 16:16 |
zaneb | like that question when you get Summit tickets about how many previous summits you've been to | 16:17 |
zaneb | (10, for the record) | 16:17 |
smcginnis | Well, you won't lose count of the PTGs at least. | 16:17 |
fungi | are you sure? | 16:18 |
cdent | some day all of this will be forgotten | 16:19 |
smcginnis | Well, some day I suppose you may forget if there were 3 or 4 of them I guess. | 16:19 |
fungi | "ptg5" (to reuse the naming) is already planned to occur in denver directly on the heels of the summit and forum there | 16:19 |
fungi | 3 days of summit/forum, followed by 3 days of ptg | 16:19 |
smcginnis | I am very interested (and a bit skeptical) to see if it works as well. | 16:20 |
* fungi will just live next to the coffee dispensers those last few days | 16:20 | |
zaneb | I skipped the first Denver one so I'm already struggling with numbering :D | 16:20 |
fungi | zaneb: did you have a guess as to at what point the draft vision should be mentioned on mailing lists? before or after the lunch session at the ptg? | 16:21 |
fungi | i have a feeling the sooner we mention it to a broader public, the less talk there will be about how the tc did all of this behind closed doors and such (though there are some people who will certainly say that regardless), though i also get the attraction of progressively broadening visibility of the effort so as not to overwhelm the early draft phase with commentary | 16:23 |
cdent | fungi: why would there be any reason to wait? | 16:24 |
cdent | we should _never_ wait | 16:25 |
zaneb | fungi: yeah, I planned to post to the ML about it today | 16:26 |
TheJulia | fungi: maybe that way we will finally have a space to recharge? | 16:26 |
fungi | excellent! | 16:26 |
zaneb | I think of the lunch session more as a way of making sure we reach everybody. hopefully by that point there will be many, many people who are not encountering it for the first time | 16:27 |
*** cdent has quit IRC | 16:32 | |
*** cdent_ has joined #openstack-tc | 16:32 | |
*** jpich has quit IRC | 16:34 | |
*** openstackgerrit has quit IRC | 16:49 | |
fungi | sounds great | 17:13 |
fungi | also maybe it offers time for q&a | 17:13 |
fungi | i'm not sure how i felt about the summit session where we introduced the tc vision with a dramatic reading. it's hard to hold people's attention while reading pages of prose nonstop | 17:14 |
fungi | might be good to brainstorm on better ways to go about presenting it | 17:14 |
*** annabelleB has quit IRC | 17:16 | |
*** AlanClark has quit IRC | 17:27 | |
*** ricolin has quit IRC | 17:33 | |
*** annabelleB has joined #openstack-tc | 17:55 | |
zaneb | lol. I definitely will not be volunteering to do a dramatic reading :D | 18:11 |
smcginnis | Interpretive dance? | 18:12 |
zaneb | rofl | 18:12 |
*** diablo_rojo has joined #openstack-tc | 18:33 | |
TheJulia | a verbatim, even dramatic reading is going to result in eyes glazed over | 18:35 |
TheJulia | A dance might work, but a high level tl;dr might be along the lines needed | 18:35 |
TheJulia | maybe add in some sort of seismic event like an earthquake so people are awake.... ;) | 18:35 |
smcginnis | Maybe a loud train horn between each sentence. | 18:37 |
TheJulia | eh, some will have become desensitized to that | 18:37 |
TheJulia | Then again, some would have desensitized to both ground shaking and loud horns if they have ever spent a few days camping up in Cascade locks, OR... specifically the KOA campground there. | 18:41 |
*** harlowja has joined #openstack-tc | 18:51 | |
*** e0ne has joined #openstack-tc | 19:21 | |
*** cdent_ has quit IRC | 19:23 | |
*** e0ne has quit IRC | 19:27 | |
*** tellesnobrega has quit IRC | 19:48 | |
*** zaneb has quit IRC | 19:48 | |
*** zaneb has joined #openstack-tc | 19:49 | |
*** rosmaita has quit IRC | 19:57 | |
*** e0ne has joined #openstack-tc | 20:05 | |
*** rosmaita has joined #openstack-tc | 20:18 | |
*** openstackgerrit has joined #openstack-tc | 20:26 | |
openstackgerrit | Zane Bitter proposed openstack/governance master: [DRAFT] Add a Technical Vision statement https://review.openstack.org/592205 | 20:26 |
*** e0ne has quit IRC | 20:27 | |
*** e0ne has joined #openstack-tc | 20:49 | |
*** e0ne has quit IRC | 21:18 | |
*** annabelleB has quit IRC | 21:24 | |
*** annabelleB has joined #openstack-tc | 21:26 | |
*** annabelleB has quit IRC | 21:26 | |
*** diablo_rojo has quit IRC | 21:44 | |
*** harlowja has quit IRC | 22:14 | |
*** edmondsw has quit IRC | 22:20 | |
*** edmondsw has joined #openstack-tc | 22:25 | |
*** dklyle has joined #openstack-tc | 22:33 |