openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Fix zuul-nodepool integration test https://review.openstack.org/460325 | 00:15 |
jamielennox | jeblair: would you have an opinion on putting something like routes, or an actual web framework into the webapp? | 00:16 |
jamielennox | currently it's the only reason for the dep on both webob and paste (which is only used for the server) and has some misses | 00:16 |
jeblair | jamielennox: probably a good idea. should probably be something wsgi, so we can embed it in apache, etc. but maybe revisit this after we've productionized v3? | 00:17 |
jeblair | jamielennox: what do you mean 'some misses'? | 00:18 |
jamielennox | jeblair: so the current one i just found was that both the status routes anchor at the end, but not the beginning | 00:20 |
jamielennox | so like /status/change/(\d+,\d+)$ | 00:20 |
jamielennox | but with v3 we expect a tenant id before /status | 00:20 |
jamielennox | the old path matches, invokes the call, and then errors out into the logs because tenant_name == '' | 00:20 |
jamielennox | a 404/403/500 is probably ok, but the log message at ERROR is annoying | 00:21 |
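A minimal sketch of the anchoring mismatch jamielennox describes, with illustrative regexes rather than the exact webapp routes:

```python
import re

# Anchored only at the end, like the status route quoted above.
change_re = re.compile(r'/status/change/(\d+,\d+)$')

# Both the v3 tenant-prefixed path and the bare pre-v3 path satisfy
# the same pattern, so the tenant segment is effectively optional:
assert change_re.search('/openstack/status/change/1234,1')
assert change_re.search('/status/change/1234,1')

# The un-prefixed request then reaches the handler with
# tenant_name == '' and logs at ERROR instead of returning a 404.

# Anchoring both ends (or using a router like routes, which matches
# whole paths) rejects the bare form cleanly:
strict_re = re.compile(r'^/([^/]+)/status/change/(\d+,\d+)$')
assert strict_re.match('/openstack/status/change/1234,1')
assert not strict_re.match('/status/change/1234,1')
```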
jeblair | yeah, i think there's more work to do there to finish making that multi-tenant. | 00:21 |
jamielennox | jeblair: so i started down the wsgi route on a previous patch, but the webapp thread reaches into state and queries tenant info etc | 00:21 |
jeblair | jamielennox: yeah, if we want to make a real standalone wsgi app, it will be a bit more work (requiring interaction with the scheduler over gearman) | 00:22 |
jeblair | so i think for now, best thing to do is clean up what we have so it doesn't become a big distraction, then look at doing something better later. | 00:23 |
jamielennox | jeblair: so for now i wasn't thinking anything that serious, routes should be fairly lightweight, just doing a proper url match, but anything wsgi should be hostable from the existing paste server | 00:23 |
jamielennox | i don't really have any opinions on frameworks, just something a bit stricter than current | 00:24 |
jamielennox | jeblair: is gearman maintaining that role longer term? i was expecting zuul would go zookeeper eventually in a similar way to nodepool | 00:26 |
jeblair | jamielennox: i'd like to move to zk for zuulv4; we might want to spiff up the webapp before then. | 00:26 |
jeblair | i have to work on dinner now | 00:27 |
jamielennox | ok, thanks | 00:27 |
*** yolanda has quit IRC | 03:56 | |
*** yolanda has joined #zuul | 04:04 | |
*** yolanda has quit IRC | 04:10 | |
*** isaacb has joined #zuul | 05:18 | |
*** isaacb has quit IRC | 06:57 | |
jamielennox | jeblair: for my backscroll tomorrow, why when you source.getProject() and it's missing do you add a new project of that name? | 07:01 |
jamielennox | http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/driver/gerrit/gerritsource.py?h=feature/zuulv3#n48 | 07:01 |
jamielennox | jeblair: it means for example when you use the webapp and the source + project doesn't exist it automatically creates one and then returns something you can't use | 07:02 |
jamielennox | and any projects that come over gerrit that are not in the layout get added | 07:03 |
jamielennox | it's not usable by the webapp because public/private keys are added to the project object in configloader.py, so you get an AttributeError instead of a 404 when you try to fetch the key | 07:06 |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Move dependency cycle detection into pipelines https://review.openstack.org/451423 | 09:02 |
*** jkt has quit IRC | 10:12 | |
*** jkilpatr has quit IRC | 10:38 | |
*** jkilpatr has joined #zuul | 11:08 | |
*** dkranz has joined #zuul | 14:12 | |
*** dkranz has quit IRC | 14:43 | |
jeblair | jamielennox: within the source, that's intentional so that if a change depends-on a project which isn't configured in zuul, it still works. | 14:53 |
jeblair | jamielennox: the webapp should probably be changed to use the new hostname-qualified syntax for specifying a project, and use tenant.getProject, which will only return configured projects | 14:53 |
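Roughly the distinction jeblair is drawing, with hypothetical handler and exception names (the real zuul APIs differ):

```python
class NotFound(Exception):
    """Mapped by the webapp to a 404 response (hypothetical)."""

def webapp_status(tenant, project_name):
    # source.getProject() auto-creates unknown projects so Depends-On
    # can reference things zuul doesn't manage; the webapp instead wants
    # a hard failure for anything outside the configured tenant layout.
    # Assumes tenant.getProject() returns None for unknown names.
    project = tenant.getProject(project_name)
    if project is None:
        raise NotFound(project_name)  # 404, not a later AttributeError
    return {'project': project.name}
```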
*** dkranz has joined #zuul | 15:01 | |
SpamapS | jeblair: I thought we'd move to etcd for zuulv4. All the cool kids are... ;-) | 15:24 |
jlk | etcd deployed on coreos managed by k8s | 15:25 |
jlk | surely nobody would complain if k8s was a zuul requirement.... | 15:25 |
jeblair | SpamapS: let's see what's cool by the time we get to zuulv4 :) | 15:27 |
SpamapS | retcd -- rust etcd ;-) | 15:28 |
* SpamapS has to stop | 15:28 | |
SpamapS | that well went dry months ago :-P | 15:28 |
jlk | did it oxidize as it dried out? | 15:28 |
SpamapS | mmmmmmmmmmmmmmmmmm dad jokes | 15:29 |
*** SpamapS has quit IRC | 15:37 | |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Fix zuul-nodepool integration test https://review.openstack.org/460325 | 15:37 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Enforce cloud as a required config value https://review.openstack.org/460354 | 15:37 |
jeblair | mordred: ^ fixed one more test fixture that needed the cloud param | 15:37 |
*** SpamapS has joined #zuul | 15:37 | |
* SpamapS notes weechat 1.7 looks a lot like weechat 1.3 | 15:38 | |
pabelanger | jeblair: speaking of cloud config values. I've noticed we don't actually support reloads if clouds.yaml changes. Meaning nodepool has to stop / start today. Is that something we can change moving forward? eg: any objections to writing a patch to fix that | 15:45 |
*** herlo has quit IRC | 15:45 | |
pabelanger | or, will it be a fair amount of work | 15:45 |
jeblair | pabelanger: the tricky part is that nodepool doesn't know about it at all. probably the best compromise is to have nodepool accept a signal (or better yet, input on a socket) to force a flush of its providers. | 15:50 |
pabelanger | good idea | 15:51 |
clarkb | it could inotify on the file(s) | 15:51 |
jeblair | clarkb: but it doesn't know about the file | 15:51 |
jeblair | and part of the point of outsourcing it to osc is that it doesn't have to | 15:52 |
clarkb | maybe thats something osc could provide as a feature? | 15:52 |
pabelanger | I was thinking we'd SIGHUP from puppet when clouds.yaml changed or manually. Not sure if adding that logic into nodepool would be better or not | 15:52 |
pabelanger | I'll see how to best flush providers in nodepool | 15:53 |
jeblair | yeah, options like "here's the path to the config file we used" (if any -- you can configure a cloud entirely with env variables). or even 'register callback function if config file changes' might be possibilities | 15:53 |
jeblair | but i think the sighup/socket approach is all we'd need for our use | 15:53 |
jeblair | (and i think it's easy to incorporate into almost any config mgt system) | 15:54 |
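A minimal sketch of the signal approach, with a hypothetical flush method (not actual nodepool code):

```python
import signal

class ProviderManager:
    """Stand-in for nodepool's per-cloud provider state (hypothetical)."""

    def __init__(self):
        self.clients = {}

    def flush(self):
        # Drop cached clients; the next request re-reads clouds.yaml
        # (or env vars) through os-client-config.
        self.clients.clear()

def install_reload_handler(manager):
    # SIGHUP from puppet (or any config mgmt) after clouds.yaml changes
    # forces a provider flush without a full nodepool stop/start.
    signal.signal(signal.SIGHUP, lambda signum, frame: manager.flush())
```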
jeblair | i'd really like to avoid doing this until after zuulv3 though, please. :) | 15:54 |
pabelanger | sure, just wanted check first, before working on it | 15:58 |
*** yolanda has joined #zuul | 16:09 | |
openstackgerrit | David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Add config option for executor process user https://review.openstack.org/460671 | 16:30 |
Shrews | If someone wants to supply definitions for the undocumented options that I added there ^^^, I'll be happy to add them in a new patchset. | 16:30 |
Shrews | or add your own patch set | 16:34 |
Shrews | or do it in a different review | 16:34 |
* Shrews embraces open source freedom | 16:34 | |
jeblair | Shrews: done, in comments. | 16:36 |
Shrews | jeblair: THAT WAS NOT A VALID OPTION | 16:38 |
Shrews | :) | 16:38 |
openstackgerrit | David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Add config option for executor process user https://review.openstack.org/460671 | 16:40 |
jeblair | Shrews: can you +3 https://review.openstack.org/460354 ? | 16:41 |
Shrews | looking | 16:41 |
Shrews | jeblair: that's an interesting test failure on 460354 | 16:51 |
Shrews | we protect against that, from what I can see | 16:52 |
jlk | clarkb: I think today I'm finally going to get around to trying to run tests locally again. But "locally" in my world would be in docker, so we'll see. | 16:52 |
clarkb | jlk: I think the idea is "works in not the gate" whatever that means for you | 16:52 |
jlk | well, it was working on my 8 cpu VM | 16:52 |
jlk | I wasn't "blocked" per se. Cloud is infinite, right? :D | 16:53 |
Shrews | jeblair: oh, maybe we don't, actually | 16:53 |
mordred | jeblair, clarkb, pabelanger: re: clouds.yaml reloads - I could totally see (later) adding an optional method to the OpenStackConfig object to register a callback function and to tell the object to monitor for config changes | 16:54 |
mordred | I agree, I do not think it's a thing we should do right now - but I don't think it's a terrible or invasive feature to add later | 16:54 |
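And a sketch of the callback hook mordred floats, with an invented register function (os-client-config exposes no such API today):

```python
import os
import threading
import time

def watch_config(path, callback, interval=5.0):
    # Poll the config file's mtime and invoke the callback on change;
    # something like this could live behind a hypothetical
    # OpenStackConfig.register_change_callback().
    def loop():
        last = os.stat(path).st_mtime
        while True:
            time.sleep(interval)
            mtime = os.stat(path).st_mtime
            if mtime != last:
                last = mtime
                callback(path)
    threading.Thread(target=loop, daemon=True).start()
```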
clarkb | jlk: even on your 8cpu VM I think you should see about half the wall time and memory use goes from 4.6GB ish to 200MB ish | 16:55 |
clarkb | so now you can cloud even harder | 16:55 |
jlk | clarkb: yeah I did see tests get a lot faster after those fixes. That was very appreciated. | 16:56 |
jlk | I have a lot less time to look at twitter while tests are running now | 16:57 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Protect against no matches for an upload https://review.openstack.org/460678 | 16:59 |
Shrews | jeblair: ^^^ that should fix that race, i think | 16:59 |
jeblair | Shrews: cool. we'll need to recheck-bash mordred's change in before your fix will pass tests. | 17:01 |
jeblair | (which i see is already in progress) | 17:01 |
jeblair | thanks | 17:01 |
*** harlowja has quit IRC | 17:08 | |
mordred | jeblair: just got a question in a different channel about delay between patch merge and doc publication | 17:23 |
mordred | jeblair: I mention it here because it makes me think that perhaps $something should emit an event that could be reported into places like IRC that is something like "post-jobs-for-change-XXX-complete" | 17:24 |
jeblair | mordred: yes, an irc reporter would be great. it's on my list of things to do after v3 is out. another thing that will help there is a new "most recent only" pipeline manager. it will reduce the delay between merge and publish by reducing the number of extraneous publication jobs. | 17:25 |
pabelanger | fedmsg has an IRC plugin :) | 17:28 |
mordred | jeblair: ++ | 17:32 |
jeblair | pabelanger: yep, fedmsg/mqtt is an option too -- i suspect we may still want an irc driver though because we may want an irc trigger. | 17:33 |
mordred | jeblair, pabelanger: perhaps the fedmsg/mqtt driver would also implement the trigger interface in addition to the reporter interface? | 17:34 |
mordred | (as could an irc-specific zuul driver) | 17:34 |
jeblair | mordred: it could, but then where would we write rules that say "these commands in irc should be used to trigger these jobs, if issued by these people"? | 17:35 |
mordred | a native IRC driver sounds like a great idea to me (or a driver using that library we keep thinking of rewriting our bots in) | 17:35 |
jeblair | that sort of logic is easy to put in zuul, vs writing another system that bridges irc and fedmsg | 17:36 |
mordred | jeblair: oh, I was mostly imagining that the fedmsg folks would have some sort of irc bot something that would allow them to configure those things - so the config in that case would be the irc-fedmsg interface, which would result in fedmsg messages being emitted or not | 17:37 |
jeblair | (i expect we would have a fedmsg trigger too, of course) | 17:37 |
jeblair | mordred: sure, if fedmsg already has that, that's the easier way | 17:37 |
mordred | jeblair: and I mostly mentioned it just because i'd _guess_ they'd have such a bridge already | 17:37 |
* mordred thinks a fedmsg driver and an errbot driver would both be great - and both should implement all the things :) | 17:37 | |
pabelanger | ++ | 17:38 |
jlk | at some point we should talk about extending the driver framework so that drivers can be shipped out of tree, and discovered if they are in the same python environment. That way somebody _could_ write a fedmsg driver without having to get it accepted in upstream zuul. | 17:44 |
jeblair | jlk: yes; that's been the intent for a while. however, i'd like to have some batteries included. :) | 17:45 |
jlk | absolutely | 17:45 |
jeblair | pabelanger: are you planning on working on https://review.openstack.org/454396 ? | 17:45 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Enforce cloud as a required config value https://review.openstack.org/460354 | 17:46 |
jlk | gerrit and github seem like mighty fine batteries. But if somebody wants to support Visual Source Safe, well.... | 17:46 |
jeblair | jlk: time for an external battery pack | 17:46 |
jlk | hopefully not one that will explode | 17:46 |
jlk | (metaphor stretching) | 17:47 |
pabelanger | jeblair: I am, I should be ready to switch back to zuulv3 now | 17:47 |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Protect against no matches for an upload https://review.openstack.org/460678 | 17:47 |
*** harlowja has joined #zuul | 17:51 | |
harlowja | SpamapS let me know if u are going to do k8s bonnyci demo | 18:02 |
harlowja | i'd like to join :) | 18:02 |
harlowja | those guys and their mono-repo ..... | 18:02 |
harlowja | they seem to forget they aren't in google anymore, lol | 18:02 |
harlowja | *imho | 18:03 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Ensure PRs arent rejected for stale negative reviews https://review.openstack.org/460700 | 18:05 |
*** tobiash_ has joined #zuul | 18:09 | |
tobiash_ | Hi, just curious, did someone think about running zuul & co in docker on kubernetes? | 18:13 |
jlk | I think there's been some thoughts in that direction, but I haven't seen a lot of effort | 18:15 |
tobiash_ | I'm currently wondering whether it makes sense to switch our ansible based deployment to a containerized PaaS solution like openshift or kubernetes | 18:15 |
tobiash_ | Eventually we will make that step (if it works out) with the switch to zuulv3 | 18:16 |
tobiash_ | This also could enable easy auto scaling of the zuul executor | 18:17 |
jeblair | tobiash_: yes, i think that's worth looking into. we just haven't done much with that yet while we focus on v3 development. | 18:26 |
tobiash_ | we already run these services in containers anyway | 18:27 |
tobiash_ | getting nodepool-builder to run in docker required some hacks though | 18:28 |
jlk | the big challenge to me is separating all the individual processes so that they can be isolated in a container | 18:29 |
jlk | and figuring out what, if anything, each of those processes assumes about other local processes | 18:29 |
tobiash_ | that gets much better already in v3 | 18:30 |
tobiash_ | we run nodepool-launcher, nodepool-builder, zuul-scheduler, zuul-executor and zookeeper in docker (each one in a separate container) | 18:31 |
tobiash_ | nodepool-launcher was the trickiest one due to diskimage builder | 18:32 |
tobiash_ | but we still have to try out the transition from vm+docker (via ansible) to directly use kubernetes | 18:33 |
pabelanger | tobiash_: ya, diskimage-builder in a container is fun. Spent some time over christmas doing it | 18:34 |
pabelanger | but it works | 18:34 |
tobiash_ | s/nodepool-launcher/nodepool-builder/ | 18:34 |
tobiash_ | yepp | 18:34 |
tobiash_ | it works for me when elements, tmp and gen share the same mount into the container | 18:35 |
tobiash_ | otherwise I had trouble with diskimage builder running on the aufs filesystem | 18:36 |
mordred | tobiash_: we also _definitely_ have plans/urges to provide first-class support for using zuul v3 to perform CI of docker and kubernetes workloads | 18:36 |
mordred | (independent of whether or not zuul makes any steps towards running in k8s itself) | 18:37 |
* tobiash_ now understands what k8s means... | 18:37 | |
mordred | tobiash_: we've spoken with a few k8s people about some of it - but there are some pieces of the design that would be great to discuss with someone, such as you, who has a grasp of zuul and also is doing things with k8s already | 18:38 |
clarkb | builder you mean | 18:38 |
mordred | we've mostly been deferring thinking about it deeply until we've got v3 deployed for openstack with the existing vm support | 18:38 |
tobiash_ | mordred: I'm not using it yet but we plan to evaluate if a zuul deployment works fine in k8s | 18:39 |
mordred | tobiash_: cool - it'll be good to learn what your results are from that | 18:39 |
tobiash_ | will report in a few weeks :) | 18:39 |
tobiash_ | what do you think of adding a 'k8s' provider to nodepool as well? | 18:40 |
tobiash_ | that could greatly reduce resources used for short or lightweight jobs and reduce response time for these | 18:41 |
pabelanger | that's been talked about before | 18:41 |
tobiash_ | oh, must have overlooked that, will check scrollback buffer | 18:42 |
pabelanger | nothing official yet, but more like it would be cool if | 18:43 |
pabelanger | I know clarkb likes the idea of using nova-docker | 18:43 |
clarkb | well mostly because its there, but apparently its eol now | 18:44 |
clarkb | so :( | 18:44 |
tobiash_ | maybe we could also use a driver concept in nodepool (v4): shade, k8s, aws, azure | 18:45 |
clarkb | tobiash_: that sort of exists if you consider drivers implementing the zk protocol | 18:46 |
pabelanger | tobiash_: yes, mordred has mentioned a few times a linchpin integration for nodepool. I think the next step might be a spec for the plugin system for that? | 18:46 |
clarkb | with the only existing driver being nodepool for openstack | 18:46 |
*** openstackgerrit has quit IRC | 18:48 | |
tobiash_ | clarkb: well, you can think of nodepool as a driver but I think nodepool does much more than a driver | 18:48 |
mordred | yah - so.... | 18:49 |
mordred | I think this is definitely a thing we want to do - but I think it winds up being a few different use cases | 18:49 |
mordred | one is "I have an application in this git repo that is designed to be run in containers, so I want to request from the CI system a k8s endpoint (or a docker endpoint) that my ansible playbooks can interact with to run and validate my code" | 18:51 |
jeblair | (i think the way we will want to extend nodepool is with a separate process communicating over the zk protocol, however, there's enough boilerplate things such a system would need to do, we should abstract that out into code that can be shared among them. that's what we should do before making our second nodepool backend. we can focus on that a bit more when we get v3 out of the way.) | 18:51 |
mordred | so in that case, one would expect the user to say in their job description "I need a k8s" and for zuul to ask nodepool for a "k8s" and nodepool to hand one of those to zuul and zuul would place it into the ansible inventory | 18:52 |
mordred | *handwaving around details, clearly* | 18:52 |
mordred | there is also "I have this workload that isn't specifically container oriented, but is lightweight, and running it in a container would be just fine" | 18:53 |
mordred | which might be more something like requesting an ubuntu:xenial container in the job description, zuul asking nodepool for an ubuntu:xenial docker container, then putting it into the inventory and the job runs there mostly like it would on a vm except it's a container not a vm | 18:54 |
mordred | our pep8 jobs are a good example of content that would likely be happy to use such a job | 18:54 |
jlk | that tracks with what travis does | 18:55 |
jlk | you get a container env by default, you have to ask for something different | 18:55 |
mordred | from a resource management perspective, there are some open questions - should there just be a single k8s cluster that is registered with nodepool, and the first case is handled by nodepool making a single-use tenant on the k8s cluster then handing the endpoint and credentials to the job's inventory? | 18:56 |
clarkb | mordred: proper multitenancy is a thing in k8s now? | 18:56 |
mordred | and for the second case nodepool just asks k8s to spin up a bare ubuntu container in a pod then hands remote access creds for that container to zuul for the inventory? | 18:56 |
mordred | clarkb: as of the latest release, yes - they're just now releasing it | 18:56 |
clarkb | ah ok last I checked it was hand wavy | 18:56 |
mordred | so by the time we get to it, it should be a thing | 18:56 |
jlk | this sounds like a fascinating discussion to have, after v3 :) | 18:57 |
tobiash_ | looking forward to this :) | 18:57 |
mordred | jlk: yup. I totally don't want to attempt to solve it today - I mostly wanted to braindump a little for tobiash_ so he can think about it a bit while poking at k8s | 18:57 |
pabelanger | mordred: clarkb: from what I heard on monday meetup, 1.6 of k8s gets experimental RBAC, but I didn't think it was multi tenant yet | 18:57 |
pabelanger | inbound only :( | 18:57 |
mordred | pabelanger: ah - ok. although it still might have full multi-tenancy by the time we get to implementing this :) | 18:58 |
clarkb | pabelanger: that lines up with what I had heard. "we have things but they arent intended to secure tenants from each other" | 18:58 |
pabelanger | mordred: agree | 18:58 |
mordred | there are also questions about how to get from a set of source code repositories that zuul prepares to those repos being in a container context | 18:59 |
mordred | it's a little easier for the 2nd case - the pep8 case - but for a container-native application there is probably an intelligent way to build the right containers based on git repo content on demand | 18:59 |
pabelanger | that's why I like the idea of flagging a nodepool VM as docker ready, then it can run zuul docker jobs for X hours, then we delete and create. Then we get the multi-tenancy from openstack | 18:59 |
mordred | so that it works as that developer would expect | 18:59 |
pabelanger | but, that's basically k8s | 18:59 |
jlk | yeah I think the second case is the easy one to target. Container gets started with the source repo volume mounted in | 19:00 |
mordred | yah- I definitely don't want to accidentially re-invent k8s inside of zuul | 19:01 |
jlk | which is more of nodepool has access to static hosts that can run docker? | 19:01 |
jlk | maybe not. *shrug* | 19:01 |
clarkb | mordred: can't you just shove the whole git repo into the one true data store etcd? /s | 19:01 |
jlk | maybe i should go learn some more about this stuff before throwing out shit-posts. | 19:01 |
mordred | jlk: well, we have a nodepool feature need for nodepool to manage pre-existing hosts | 19:01 |
jlk | clarkb: *twitch* | 19:01 |
mordred | clarkb: :) | 19:01 |
mordred | clarkb: I am certain there will be a solution to that problem once we learn a few more things then turn our attention to it | 19:02 |
* jeblair notes afs would actually be a solution to that. | 19:02 | |
mordred | :) | 19:02 |
* jeblair notes the omission of "good" from that observation | 19:02 | |
Shrews | The important question that no one seems to be asking, is when will zuul support delivering games to my PS4??? | 19:04 |
*** openstackgerrit has joined #zuul | 19:05 | |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Move makemergeritem into model https://review.openstack.org/460714 | 19:05 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Use project canonical hostnames in mergers and executors https://review.openstack.org/460715 | 19:05 |
tobiash_ | Shrews: you could write a driver ;) | 19:05 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Represent github change ID in status page by PR number https://review.openstack.org/460716 | 19:05 |
jeblair | mordred: 460715 implements your go-style repo layout TODO. | 19:05 |
mordred | jeblair: super sexy | 19:06 |
jeblair | jlk: it's possible 460715 may have a small interaction with the github branch due to some test changes; hopefully not. i tried to keep it minimal. | 19:06 |
jlk | alrighty, I'll peek | 19:06 |
tobiash_ | mordred: now you mentioned pre-existing hosts, is there already a spec for this (didn't find one)? | 19:07 |
mordred | tobiash_: I _think_ it's referenced in the zuulv3 spec - one sec, lemme look (I do not think it's specced out into full implementation details though) | 19:07 |
jeblair | tobiash_: it's briefly mentioned in the zuulv3 spec, but it's not fleshed out. i don't think we need a spec. just an implementation. | 19:07 |
tobiash_ | I have this use case and maybe could work on that in the next few weeks if nobody currently works on that | 19:08 |
mordred | tobiash_: two sentences "Nodepool should also allow the specification of static inventory of non-dynamic nodes. These may be nodes that are running on real hardware, for instance." | 19:08 |
mordred | at the end of http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#id1 | 19:09 |
tobiash_ | mordred: Ah, these two sentences, so just a requirement | 19:10 |
tobiash_ | Is there already some concept for that in someone's head? | 19:12 |
pabelanger | there's been a few specs over the years to run docker containers for CI: https://review.openstack.org/#/q/infra-specs+docker | 19:13 |
clarkb | maybe a utility to register them with zk then a known zk prefix to look for such data in? | 19:13 |
tobiash_ | probably this should be stored in a zookeeper structure for static nodes | 19:13 |
clarkb | tobiash_: :) | 19:13 |
jeblair | tobiash_: it probably needs to be another 'launcher', which means we probably need to abstract out the boilerplate launcher code i mentioned above. | 19:14 |
pabelanger | I know gozer was big on the idea, IIRC | 19:14 |
clarkb | pabelanger: ya thats where the original can we use nova docker idea came from since gozer had control of the cloud aiui and could in theory deploy ironic + nova for baremetal and nova + nova-docker for containers | 19:15 |
clarkb | pabelanger: I don't know that it ever happened there but have heard some third party CIs do nodepool + ironic + nova and that does work | 19:15 |
pabelanger | clarkb: ya, I did hear that too about 3rd-party | 19:15 |
jeblair | tobiash_: basically, the config file should define the nodes, the new 'launcher' should sync those nodes to the /nodes/ tree in zk. they should get checked out as normal, and when they are returned, just reset to 'ready' rather than 'deleted'. | 19:15 |
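A rough sketch of that lifecycle, with hypothetical zookeeper helper names (the real kazoo-based APIs differ):

```python
def sync_static_nodes(zk, config_nodes):
    # The static launcher's whole job: mirror the nodes listed in its
    # config file into the /nodes/ tree so the normal allocation path
    # can hand them out like any other ready node.
    for node in config_nodes:
        zk.register_node(node, state='ready')

def return_static_node(zk, node):
    # A cloud node would go to 'deleted' here; a static node just goes
    # straight back to 'ready' for the next request.
    zk.set_node_state(node, 'ready')
```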
clarkb | pabelanger: for baremetal I think thats still a good setup but for containers things have changed all over the place since then | 19:16 |
clarkb | jeblair: ++ you made "utility" an actual useful thing in that description | 19:16 |
tobiash_ | jeblair: ok, that sounds like the simplest solution, so you mean a nodepool-static-launcher? | 19:17 |
jeblair | tobiash_: yep | 19:17 |
tobiash_ | I could maybe also think of a static node provider (which would already imply a backend driver) | 19:17 |
tobiash_ | but that probably would need more work | 19:18 |
tobiash_ | but I could imagine that this could later then be extended to support more providers like k8s, aws,... | 19:18 |
jeblair | tobiash_: if you want, i can probably write this up in more detail. but give me a few days. | 19:19 |
tobiash_ | jeblair: just decide if it should be 'another launcher' or 'multi backend support' | 19:20 |
jeblair | tobiash_: yeah, i'll take a look at that and what the config file would look like | 19:20 |
tobiash_ | I'll try to look into this then and will come up with a draft or (probably) more questions | 19:21 |
*** herlo has joined #zuul | 19:34 | |
dmsimard | How much of a pita is it to maintain Zuul's "fork" of (for example) the command module ? | 19:39 |
dmsimard | Do you rebase it often on upstream, for example ? | 19:39 |
*** tobiash_ has quit IRC | 19:44 | |
mordred | dmsimard: nope. we haven't touched it much since forking it | 19:51 |
dmsimard | mordred: not too concerned about it ? bugs/cve/whatever | 19:52 |
mordred | dmsimard: I have a todo list item to sit down with the ansible core team and go through what we're doing there in detail to see what the opportunities are to upstream/add interfaces | 19:52 |
mordred | dmsimard: not hugely, no - that said, once we get some zuul jobs running on ansible prs - we should probably get something to notify us when changes are made to those files upstream so we can track them | 19:53 |
dmsimard | I was contemplating perhaps overloading the default callback (sort of how you do with the command module) but it doesn't solve the problem I want to solve anyway | 19:53 |
dmsimard | (in the context of ara) | 19:53 |
mordred | dmsimard: speaking of overloading default callback ... we should talk at some point about what the interaction between ara's and zuul's callback plugins might look like | 19:57 |
mordred | dmsimard: specifically - since we fetch console and shell output out of band (and not through the stdout_lines attributes on the tasks) | 19:58 |
dmsimard | yeah, I need to rewrite https://review.openstack.org/#/c/444088/ to a more broader "allow end users to use callbacks" or something to that effect | 19:58 |
mordred | dmsimard: if we were to start enabling ara on the jobs by default, ara would not have access to any of the stdout for any of the shell/console tasks | 19:58 |
dmsimard | mordred: ara doesn't read from stdout | 19:58 |
dmsimard | it doesn't parse stdout | 19:58 |
mordred | I know - it's a callback plugin | 19:59 |
mordred | but tasks have output that it reports | 19:59 |
dmsimard | more or less, they return a "results" object with plenty of things, the output if appropriate | 19:59 |
dmsimard | if you don't strip the results object, the data should be there even if you don't necessarily print it to stdout | 20:00 |
mordred | right | 20:00 |
mordred | what I'm saying is - we strip the task's stdout attribute from the results object | 20:01 |
dmsimard | mordred: can you show me what you mean ? | 20:01 |
mordred | dmsimard: so - like if you look at this: http://logs.openstack.org/67/458567/2/check/gate-shade-dsvm-functional-neutron/fe38922/logs/ara/reports/index.html | 20:01 |
mordred | and click on the status link for the task "Check whether /etc/hosts contains hostname" | 20:02 |
mordred | (it would be nice if those would produce a link I could paste to people, fwiw, but let's leave that for now) | 20:02 |
dmsimard | (permalinks to playbooks and task results are coming in the next version to avoid this kind of awkward instruction btw :p) | 20:02 |
mordred | dmsimard: you can see it has a command, delta, end, start and stdout, right? | 20:02 |
dmsimard | mordred: right | 20:03 |
mordred | and the stdout is the stdout from the shell command that was run | 20:03 |
dmsimard | https://twitter.com/dmsimard/status/855170459628916739 | 20:03 |
mordred | woot! | 20:03 |
dmsimard | mordred: there was originally more output than that or something ? | 20:03 |
mordred | nope - that's a normal non-v3 ansible run, so ara is working as expected | 20:03 |
mordred | now - if it was in v3, that stdout section would be empty | 20:03 |
dmsimard | where in the code does that happen ? | 20:04 |
mordred | because our command/shell tasks do not return any content in their stdout fields | 20:04 |
mordred | one sec- I get you links | 20:04 |
mordred | dmsimard: there are multiple pieces to it - in http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/ansible/library/command.py#n127 - which is our fork of the command module | 20:05 |
mordred | we write stdout to a file and spin up a daemon thread that will stream it on demand | 20:05 |
mordred | dmsimard: then in http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/ansible/callback/zuul_stream.py?h=feature/zuulv3 - which is the callback plugin we run on the executor | 20:06 |
dmsimard | So you redirect the stdout, you mean ? | 20:06 |
mordred | yes, that's right | 20:06 |
dmsimard | ah so it's not that you strip it, ansible doesn't see it in the first place | 20:06 |
dmsimard | hum, but you shouldn't do that ? | 20:06 |
mordred | http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/ansible/callback/zuul_stream.py?h=feature/zuulv3#n95 | 20:06 |
mordred | we connect to the port on the remote host and fetch the stdout content from the streaming service running on the remote node | 20:07 |
mordred | and then output it into the ansible log | 20:07 |
mordred | dmsimard: it's important to do it for us for 2 reasons | 20:08 |
dmsimard | mordred: the command still exits json with a stdout key, so that's actually empty ? http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/ansible/library/command.py#n456 | 20:08 |
mordred | a) it's how we are able to provide streaming access to logs rather than waiting for the shell command to be done | 20:08 |
mordred | dmsimard: yes, that's correct, we send back an empty stdout key | 20:08 |
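Condensed shape of what the fork does, per the two files linked above (a sketch, not the actual command.py):

```python
import subprocess

CONSOLE_LOG = '/tmp/console.log'  # per-node log served by the streamer

def run_command(args):
    # stdout goes to a file instead of a pipe; a separate daemon
    # (zuul_console) serves that file over a TCP port so the executor's
    # zuul_stream callback can tail it live.
    with open(CONSOLE_LOG, 'ab') as console:
        proc = subprocess.Popen(args, stdout=console,
                                stderr=subprocess.STDOUT)
        rc = proc.wait()
    # The result ships back with no stdout, which is exactly what breaks
    # `register:` + `when: result.stdout ...` for job authors.
    return {'rc': rc, 'stdout': '', 'stderr': ''}
```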
dmsimard | mordred: so, pretend I'm an ansible/zuul_v3 user, that means I can't for register from a command task and read from stdout as a condition ? | 20:09 |
mordred | b) many of our shell commands are "run devstack and then run tempest" and produce a LARGE amount of output - if we didn't redirect it we'd be having giant return structures in memory :) | 20:09 |
dmsimard | s/for// | 20:09 |
dmsimard | yeah I'm 100% for what you've been doing in regards to streaming ansible output, it's cool and clever | 20:09 |
mordred | dmsimard: oh - hrm. that's a great point. you are right about that. bother - we might have to include it | 20:09 |
mordred | jeblair: ^^ see most recent 3 or 4 lines | 20:10 |
mordred | dmsimard: if nothing else, I'll consider this conversation a success just for you pointing that out | 20:10 |
dmsimard | mordred: I don't think it's an issue to return non-empty stdout/stderr keys with their actual contents -- you get to decide if you want to print them later anyway | 20:10 |
mordred | dmsimard: well - I was more concerned on that case about buffering them all in ram | 20:11 |
dmsimard | like, the default callback will (example) json.dumps(results) which has a stdout key | 20:11 |
dmsimard | but if you're not using the default callback, nothing prints | 20:11 |
mordred | dmsimard: which for some of our devstack jobs that are memory constrained anyway could be problematic | 20:11 |
dmsimard | ah, I see what you mean | 20:11 |
dmsimard | hum, I can actually help profile that if you'd like | 20:11 |
mordred | maybe we need to be able to flag some commands to do the streaming but not return stdout - we could have a custom module for that | 20:12 |
mordred | _most_ commands it should be fine to do both | 20:12 |
mordred | but for some - like "run devstack" - we, as test authors, may want to opt-out of returning the stdout content in the result object | 20:12 |
dmsimard | but wait a sec | 20:12 |
jeblair | mordred: caught up and generally agreeing :) | 20:13 |
dmsimard | I've been running puppet-openstack-integration and packstack jobs which are at the 8GB ram ceiling as well, through ansible with that 30 minute long shell command with like 20k lines of output | 20:13 |
mordred | dmsimard: oh - good! and it's not terrible? | 20:13 |
dmsimard | And we're not running into oom as far as I am aware | 20:13 |
mordred | ok. good. | 20:14 |
* dmsimard gets link | 20:14 | |
mordred | so maybe the shortest path is to just re-enable putting stdout into the results object, but keep an eye on memory pressure to see if it exists | 20:14 |
dmsimard | https://logs.rdoproject.org/weirdo-generic-puppet-openstack-scenario003/1502/ara/reports/index.html | 20:15 |
dmsimard | There's a 38 minute long task in there | 20:15 |
jeblair | mordred: that does mean that when we switch the ansible output to json, zuul will need to parse that enormous json object regardless, right? | 20:15 |
dmsimard | with an entire puppet-openstack-integration thing | 20:15 |
dmsimard | ah, the output is actually not that much -- I forgot we changed how p-o-i logged | 20:15 |
dmsimard | mordred: now I wonder where the pressure is -- is it on the control node or the remote node ? | 20:16 |
dmsimard | Is it "streamed" to the control node and kept buffered there ? Or kept on the remote node and flushed at the end ? A bit too low level for me to know | 20:17 |
mordred | jeblair: yah - we'd want to make our own version of the json callback plugin for those purposes I think | 20:19 |
jeblair | mordred: ok, so it could discard stdout, but otherwise it's returned from the node for any interested callback plugins (or registered by playbook tasks) | 20:19 |
mordred | jeblair: ya | 20:19 |
mordred | dmsimard: it's _currently_ streamed in essentially a line-by-line manner on demand - so it's never buffered on either end more than a line-ish of text | 20:19 |
jeblair | mordred: you mean our thing, or ansible? | 20:20 |
dmsimard | mordred: so is the output flushed to disk then ? could you DDoS someone with an ansible "output" bomb ? | 20:20 |
mordred | dmsimard: if we make the change to also include it in the result object - the entire output would be buffered completely into ram on both the worker and the executor node | 20:20 |
jeblair | (because i thought dmsimard was asking how ansible handled returning result objects) | 20:20 |
jeblair | dmsimard: or were you asking about how we do console streaming? | 20:21 |
dmsimard | jeblair: we're discussing how the zuul command module strips the stdout/stderr keys and how that ties into 1) ara 2) if I'm a zuul user and want to use the stdout key from a registered command task inside a condition | 20:21 |
mordred | in our current setup, the output is sent to disk, then is separately streamed - so we're not susceptible to an ansible bomb - except for filling disk space | 20:21 |
jeblair | dmsimard: yep, i got that. i'm trying to flag what i see as a miscommunication between you and mordred. :) | 20:21 |
mordred | if we start returning stdout in the result object again, then we could also be open to an ansible bomb from the node | 20:22 |
dmsimard | mordred: right, but does ansible do that "natively" (flush line by line to disk in a tmp file) is what I meant to ask, or does it keep everything in ram | 20:23 |
mordred | dmsimard: my understanding is that it keeps everything in ram | 20:23 |
mordred | because it has to construct a json payload to ship back over the wire | 20:23 |
jeblair | and it can only do that once -- there's no incremental return protocol | 20:23 |
mordred | yah | 20:23 |
*** Cibo_ has joined #zuul | 20:23 | |
mordred | (we wind up back in the place where chatting with the core team sounds good :) ) | 20:25 |
SpamapS | harlowja: also mono repo doesn't relieve you from the need for speculative merge testing that zuul does. It just relieves you from cross-repo deps | 20:25 |
harlowja | agreed | 20:26 |
mordred | jeblair, dmsimard: we could pretty easily put a limit on what we return in stdout in the result object - since we construct that at the end | 20:26 |
jeblair | SpamapS: i'd argue it reduces, not eliminates that need (most projects ultimately have deps) | 20:26 |
mordred | if it winds up being a problem | 20:27 |
jeblair | mordred: yeah, that works too | 20:27 |
dmsimard | mordred: yeah so I agree that "script.sh" outputting 30k lines is problematic, you could also argue that it's an ansible anti-pattern -- upstream would tell you: "don't do that, translate your bash script into granular ansible tasks" | 20:27 |
mordred | yah - which is one of the reasons I imagine this particular topic feels less richly handled compared to other things | 20:28 |
*** jkilpatr has quit IRC | 20:29 | |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Use project canonical hostnames in mergers and executors https://review.openstack.org/460715 | 20:29 |
mordred | but I think for now we should a) include it at the end b) if the content size winds up being a problem, add a max cap to what we return | 20:29 |
jeblair | ++ | 20:30 |
SpamapS | jeblair: the way they do things, I don't think they have any deps that aren't in the mono repo. Even binary-only deps. | 20:30 |
dmsimard | It's not an easy problem to solve, the problem being streaming output is all fine and good until you want to do multi node and then the output is interleaved | 20:30 |
mordred | so that the easier case - a person legit running a shell command and then wanting to do something with the output - isn't broken | 20:30 |
dmsimard | I think that was DeHaan's reasoning in some age old issue | 20:30 |
mordred | dmsimard: it is indeed - and in fact, we stream the logs and interleave them into the log file! | 20:30 |
mordred | :) | 20:30 |
SpamapS | No word on whether or not they maintain long arduous forks that never get pushed upstream.. ;) | 20:31 |
jeblair | fortunately, metadata was invented and we can un-interleave. :) | 20:31 |
dmsimard | I'm curious how mtreinish's work with the mqtt-publishing callback will turn out | 20:32 |
dmsimard | Or if stdout/stderr is important to him | 20:33 |
dmsimard | mordred: hey, how about this -- you still flush to disk file but you read from that file when doing the exit_json | 20:34 |
dmsimard | like, there's the zuul stream file -- not that one, you create another one just for that one command | 20:34 |
dmsimard | you could even be nice and delete the tmpfile before doing the exit_json for space constraints | 20:35 |
jeblair | i'm not as worried about the memory pressure on the node as i am on the executor (where the problem is magnified 100x) | 20:35 |
SpamapS | IIRC, memory is the main scaling problem in larger ansible installations. | 20:36 |
dmsimard | jeblair: that's sort of what I was asking before, I have no knowledge of where the pressure occurs (and when) | 20:37 |
dmsimard | SpamapS: I can see that happening -- ansible sends the raw data to all the callbacks, so a lot of output gets multiplied across every callback | 20:39 |
SpamapS | and lots of forked processes | 20:42 |
jeblair | dmsimard: i think both are important, but in zuul's case, expect 100 ansible processes running simultaneously on one executor. | 20:42 |
jeblair | so we're going to see problems there first | 20:42 |
*** jkilpatr has joined #zuul | 20:43 | |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Validate flavor specification in config https://review.openstack.org/451875 | 20:45 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Add support for specifying key-name per label https://review.openstack.org/455464 | 20:45 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Add ability to select flavor by name or id https://review.openstack.org/449784 | 20:45 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Support externally managed images https://review.openstack.org/458073 | 20:45 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Cleanup from config syntax change https://review.openstack.org/451868 | 20:45 |
*** dkranz has quit IRC | 20:45 | |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Protect against no matches for an upload https://review.openstack.org/460678 | 20:46 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Fix zuul-nodepool integration test https://review.openstack.org/460325 | 20:47 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Validate flavor specification in config https://review.openstack.org/451875 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Add support for specifying key-name per label https://review.openstack.org/455464 | 20:49 |
openstackgerrit | James E. Blair proposed openstack-infra/nodepool feature/zuulv3: Support externally managed images https://review.openstack.org/458073 | 20:50 |
mordred | jeblair: I started hacking on the change from above - because at first I thought "that's a simple 3 line change" - and indeed it could be a simple 3-line change. EXCEPT ... dum-dum-dum ... | 20:52 |
jeblair | mordred: who you calling a dum-dum? | 20:52 |
mordred | jeblair: we re-use the same logfile across tasks, so the 3-line change version "just open the file and read the contents right before returning" | 20:52 |
mordred | jeblair: would not _quite_ do what one might expect | 20:52 |
mordred | jeblair: so I think accumulating in memory as we write to the file is likely a better approach, yeah? | 20:53 |
mordred | jeblair: and clearly me :) | 20:53 |
jeblair | mordred: yeah, for now. | 20:53 |
jeblair | if it's a problem, we can always make a second (per-task) file | 20:53 |
*** jkilpatr has quit IRC | 20:57 | |
mordred | hrm. that might actually be the easiest way to accumulate. otherwise I'mma need to get all mutexy | 20:57 |
jeblair | mordred: why would you need to mutex? isn't the thing writing to the file the same thing that would accumulate in ram and return the blob? | 21:08 |
mordred | jeblair: the file writing happens in a thread | 21:08 |
mordred | I mean- could also just skip the mutex and dirty-read | 21:09 |
mordred | since we'd be fairly assured that the writer thread will be done writing by the time we go to return the data | 21:09 |
mordred | it might make my skin crawl slightly, but should be fine 99% of the time | 21:09 |
jeblair | mordred: don't we join the thread before returning? | 21:12 |
mordred | jeblair: we do - but it also might be stuck and refuse to join (which we have some code for) | 21:16 |
mordred | jeblair: but yah- I think we do enough that it should be safe | 21:16 |
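One way the accumulate-while-writing idea could look (a sketch under the assumptions above, not the contents of 460753):

```python
import threading

class ConsoleWriter(threading.Thread):
    """Tails process output into the shared console log while also
    keeping it in memory so exit_json() can return a stdout key."""

    def __init__(self, stream, console_path):
        super().__init__()
        self.stream = stream
        self.console_path = console_path
        self.chunks = []

    def run(self):
        with open(self.console_path, 'ab') as console:
            for line in self.stream:
                console.write(line)
                self.chunks.append(line)

    def getvalue(self, limit=None):
        # Read after join(), so the dirty-read concern above mostly
        # disappears; `limit` is the max cap discussed for huge outputs.
        data = b''.join(self.chunks)
        return data[-limit:] if limit else data
```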
dmsimard | mordred: there ya go: https://twitter.com/dmsimard/status/857706319805075456 | 21:22 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Also send stdout back in the Result object https://review.openstack.org/460753 | 21:22 |
mordred | dmsimard: nice! | 21:23 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Add ability to select flavor by name or id https://review.openstack.org/449784 | 21:29 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Cleanup from config syntax change https://review.openstack.org/451868 | 21:29 |
jeblair | fun question: what should the 'origin' remote url be on the git repos zuul stages in the executor? currently it's set to the url that zuul itself uses, however, that's likely to involve an ssh username (and subsequently a key) which may not exist on the workers. we could add the concept of a "public" url, but not every repo zuul manages will be public. we could use the url that zuul uses unless there is a 'public url' attribute on the ... | 21:51 |
jeblair | ... connection. or we could just remove the origin remote completely and leave it up to users to add whatever remotes they may need. | 21:51 |
jeblair | i'm inclined to start with the latter (no remote whatsoever) and then add the public-or-private url later. | 21:53 |
mordred | jeblair: do we have any thoughts on what a job might want an origin for? | 21:53 |
jeblair | mordred: i can't think of any (since we are guaranteeing that the repo is completely up to date (and possibly into the future) on all branches and tags before starting a job). those are the only things we use an origin for now. | 21:54 |
mordred | jeblair: and yah - I say start with that, then public-or-private. I mean, if it's a private repo, then the user would really only be able to interact with a remote from a job that could have secrets | 21:54 |
mordred | jeblair: not having an origin also helps people to not accidentally automatically do a git pull or something silly | 21:55 |
mordred | "here is a repo we have set up for you for this job" "great, I'm going to change it and test something other than what I expected to!" | 21:55 |
jeblair | mordred: mm good point | 21:56 |
*** tflink has quit IRC | 22:13 | |
*** harlowja has quit IRC | 22:14 | |
clarkb | I would set it to the transitive origin | 22:18 |
clarkb | that way should it be necessary things will just work | 22:18 |
*** tflink has joined #zuul | 22:18 | |
jeblair | clarkb: well, i think what you are calling the transitive origin is the thing i was calling "the url that zuul uses". but that won't work, because, for us, that's "git+ssh://jenkins@review.openstack.org/openstack...." | 22:23 |
jeblair | or, well, zuul@review.openstack.org. this is only on the v3 branch :) | 22:24 |
clarkb | jeblair: couldn't that be anonymous and then passed on? | 22:24 |
jeblair | clarkb: yes, for us, but not everyone, which is what i meant by adding a 'public url' option | 22:25 |
clarkb | ah ok | 22:25 |
mordred | jeblair: it's possible that one could infer a public_url from the canonical_hostname settings used for cloning perhaps - for if/when we go down that route | 22:25 |
mordred | so that a config of that parameter is only needed if your setup is extra strange | 22:25 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Comment on PRs if a remote call to merge a change failed https://review.openstack.org/460762 | 22:26 |
mordred | jeblair: so zuul sees that review.openstack.org is really git.openstack.org and sets the origin to https://git.openstack.org/openstack... blindly | 22:26 |
jeblair | mordred: yep. the config setting may only need to be a boolean. | 22:26 |
mordred | ++ | 22:26 |
mordred | but I'm still on "set it to nothing for now and see how that goes" | 22:27 |
jeblair | ya, me too. | 22:27 |
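If the no-remote option wins, the executor-side change is small; a GitPython sketch (zuul's mergers use GitPython, though this exact helper is invented):

```python
import git

def strip_origin(path):
    repo = git.Repo(path)
    # Remove origin from the staged repo so jobs can't accidentally
    # `git pull` against zuul's authenticated URL, whose ssh user and
    # key don't exist on the workers.
    if any(r.name == 'origin' for r in repo.remotes):
        repo.delete_remote(repo.remotes.origin)
```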
clarkb | one case I'm thinking of is the reno case | 22:28 |
clarkb | since it updates all the branches (which I don't think zuul will do necessarily?) then builds historical documentation out of that entire view of the git repo | 22:28 |
jeblair | clarkb: zuulv3 will update all the branches | 22:29 |
clarkb | ah ok | 22:30 |
jeblair | clarkb: basically, the executor is going to give you a repo or repos with a complete representation of the future state of relevant project(s), including all branches. put another way, all of the commits that end up as zuul refs now will end up as branch tip commits in the on-disk repo. | 22:30 |
jeblair | reno jobs should get *much* simpler. :) | 22:31 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Add support for requiring github pr head status https://review.openstack.org/449390 | 22:39 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Adds github triggering from status updates https://review.openstack.org/453844 | 22:39 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Implement pipeline requirement on github reviews https://review.openstack.org/453845 | 22:39 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Ensure PRs arent rejected for stale negative reviews https://review.openstack.org/460700 | 22:39 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Represent github change ID in status page by PR number https://review.openstack.org/460716 | 22:39 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Comment on PRs if a remote call to merge a change failed https://review.openstack.org/460762 | 22:39 |
openstackgerrit | Jesse Keating proposed openstack-infra/zuul feature/zuulv3: Include exc_info in reporter failure https://review.openstack.org/460765 | 22:39 |
jlk | (sorry, some more rebase of the rebase while I'm rebasing) | 22:39 |
jeblair | jlk: all your rebase are rebase to rebase | 22:39 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Use connecction to qualify projects in merger https://review.openstack.org/460766 | 22:40 |
jeblair | there is substantial overlap in the last 2 changes i wrote -- 460715 and 460766. i'm going to attempt a squash and see if the result makes sense... | 22:43 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Use connection to qualify projects in merger https://review.openstack.org/460769 | 22:47 |
jlk | jeblair: the github-v3 stack has completed the rebase. There are a few more (new to gerrit) patches to bring over from our fork, that I'm going to hit up now, before circling back through on review thoughts. | 22:50 |
jeblair | jlk: cool; i plan on picking up the review thread on that again rsn. | 22:51 |
jeblair | and yeah, the squashed version is only slightly worse than either of the originals squashed into it (it's about 120% of the size of one of them), so it's probably a net win and what we should land. i've WIP'd the originals. | 22:53 |
*** jkilpatr has joined #zuul | 22:53 | |
*** jkilpatr has joined #zuul | 22:54 | |
jlk | oh right, gotta switch gears now and put myself back into the "port v2 to v3" mind set, instead of "rebase older v3 on current v3". | 22:54 |
jlk | oh crap. Found something in our tree that should be squashed into earlier commits, more noise incoming. | 23:17 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Whitelist pydevd debug threads https://review.openstack.org/458041 | 23:19 |
jeblair | jlk, mordred: i reviewed another 3 changes in the github stack. one of them adds even more confusion to the translate/passthrough github events question. plus there's a question about reporting. once jlk gains headspace for this, i imagine we're going to need further design chats. | 23:21 |
jlk | yup! possibly tomorrow | 23:21 |
mordred | jlk: I look forward to those chats | 23:23 |
*** harlowja has joined #zuul | 23:23 |