Friday, 2019-03-08

*** rlandy has quit IRC00:35
*** zer0c00l has joined #zuul01:08
zer0c00lAny idea how I can pass a --property (metadata) when nodepool launches an instance?01:08
zer0c00lMy nodepool.yaml looks like http://paste.openstack.org/show/747434/01:11
zer0c00lI want nodepool to launch an instance with metadata (--property key=value)01:11
pabelangerzer0c00l: i believe you want to use https://zuul-ci.org/docs/nodepool/configuration.html#attr-providers.[openstack].pools.labels.instance-properties01:14
zer0c00lpabelanger: so it would look like01:17
zer0c00linstance-properties:01:17
zer0c00l<tab> key: value01:18
zer0c00lUnder pool/labels01:18
zer0c00lThanks01:18
pabelangerzer0c00l: yes: http://git.zuul-ci.org/cgit/nodepool/tree/nodepool/tests/fixtures/config_validate/good.yaml#n5701:19
zer0c00lpabelanger: Thanks. That file is a great example.01:20
zer0c00lWish it were part of the documentation01:20
pabelangergood idea01:22
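For reference, a minimal sketch of what that looks like in nodepool.yaml, modeled on the fixture pabelanger linked (the provider, pool, label, flavor, and image names below are placeholders, not taken from zer0c00l's paste):

```yaml
providers:
  - name: example-cloud          # placeholder provider name
    driver: openstack
    cloud: example-cloud
    diskimages:
      - name: fedora
    pools:
      - name: main
        max-servers: 10
        labels:
          - name: fedora-small
            flavor-name: m1.small
            diskimage: fedora
            # Each key/value pair here becomes instance metadata, the
            # equivalent of `openstack server create --property key=value`.
            instance-properties:
              owner: ci-team
              cost_center: "1234"
```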
SpamapSzer0c00l: tell Cr45h0v3rr1d3 I said hi01:42
zer0c00lSpamapS: :)01:46
zer0c00lHackers.avi01:46
*** bhavikdbavishi has joined #zuul02:11
*** jamesmcarthur has joined #zuul02:30
*** jhesketh has quit IRC02:45
*** jhesketh has joined #zuul02:47
*** bhavikdbavishi has quit IRC02:58
*** ianw_pto has quit IRC03:23
*** ianw has joined #zuul03:23
*** jamesmcarthur has quit IRC03:27
*** daniel2 has quit IRC03:42
*** bhavikdbavishi has joined #zuul03:48
*** bhavikdbavishi1 has joined #zuul03:48
*** bhavikdbavishi has quit IRC03:52
*** bhavikdbavishi1 is now known as bhavikdbavishi03:52
*** daniel2 has joined #zuul04:59
*** bjackman has joined #zuul05:46
*** saneax has joined #zuul06:43
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: tests: remove debugging prints  https://review.openstack.org/64194207:00
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: web: switch jobs list to a tree view  https://review.openstack.org/63343707:20
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: web: add jobs list filter  https://review.openstack.org/63365207:23
*** bjackman has quit IRC07:24
*** pcaruana has joined #zuul07:26
*** gtema has joined #zuul08:04
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: web: switch jobs list to a tree view  https://review.openstack.org/63343708:11
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: web: add jobs list filter  https://review.openstack.org/63365208:11
*** bjackman has joined #zuul08:15
bjackmanI'm finding that when my jobs are aborted, the processes started by ansible tasks on my test node are not killed. This is a problem because (unfortunately) I am using the static nodepool driver08:22
bjackmanDoes anyone know what I can do about that?08:22
tristanCzuul jobs tree view with filter search preview available here: http://logs.openstack.org/52/633652/6/check/zuul-build-dashboard/91c54d3/npm/html/08:23
tobiashbjackman: that's because zuul sends a sigkill to ansible: https://opendev.org/openstack-infra/zuul/src/branch/master/zuul/executor/server.py#L181108:32
tobiashbjackman: maybe it makes sense to change that and first send sigterm and after a few seconds sigkill08:32
AJaegertristanC: you mean http://logs.openstack.org/52/633652/6/check/zuul-build-dashboard/91c54d3/npm/html/jobs ? Nice!08:34
AJaegertristanC: clicking on a job name gives a 404 - is that to be expected?08:35
AJaegertristanC: oh, direct click works - but open in new tab gives 40408:36
*** rfolco has joined #zuul08:36
tristanCAJaeger: yes, zuul preview dashboard needs to be accessed from '/', and tested through the zuul-preview service08:36
tristanCAJaeger: that's why i shared the html/ link, html/jobs doesn't work from the logserver08:37
*** rfolco|ruck has quit IRC08:38
tristanCAJaeger: thanks for checking it out, it took more than a year to land the necessary zuul-web bits :)08:40
AJaegertristanC: ah, I see. A year? Wow. Thanks for persisting and getting all bits together!08:40
tristanCthe initial implementation was https://review.openstack.org/537869, but it was missing multi-parent representation08:42
bjackmantobiash, I would have expected that when Ansible is killed, its SSH processes would die too and so the remote processes would too08:42
tobiashbjackman: the ssh processes are also killed with -9 and won't have a chance to tell the remote that they're gone now, so the remote process will notice that with some kind of delay08:43
bjackmantobiash, Ah right, so the remote process is going to carry on until the socket times out or something I guess? Then yeah I guess SIGTERM is the best solution08:48
tobiashyes08:48
bjackmanExposing my network noobiness here but I also would have thought the network stack would send a FIN when the process owning the socket got killed?08:49
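A minimal sketch of tobiash's suggestion (ask the process group to exit, then force-kill after a grace period); this is illustration only, not zuul's actual abort path, and it assumes we already have the pid of the ansible-playbook process the executor started:

```python
import os
import signal
import time


def stop_ansible(pid, grace_period=5.0):
    """Send SIGTERM to the process group, escalate to SIGKILL if needed.

    With SIGTERM, ansible and its ssh children get a chance to close
    their connections, so remote processes notice the abort promptly
    instead of waiting for a socket timeout.
    """
    pgid = os.getpgid(pid)
    os.killpg(pgid, signal.SIGTERM)
    deadline = time.monotonic() + grace_period
    while time.monotonic() < deadline:
        try:
            os.killpg(pgid, 0)            # signal 0: just check it still exists
        except ProcessLookupError:
            return                        # whole group exited cleanly
        time.sleep(0.1)
    try:
        os.killpg(pgid, signal.SIGKILL)   # grace period expired, force-kill
    except ProcessLookupError:
        pass
```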
*** hashar has joined #zuul09:07
*** pcaruana has quit IRC09:12
*** pcaruana has joined #zuul09:27
*** fdegir has quit IRC09:34
*** fdegir has joined #zuul09:48
*** electrofelix has joined #zuul10:36
*** jkt has quit IRC10:50
*** bhavikdbavishi has quit IRC10:51
bjackmanIs there a way to do a reconfigure of Zuul without going through all the merger<->scheduler init stuff? I'm not sure exactly what it is, but it seems to be asking the merger to check out every tag in every repo10:53
bjackmanI have a couple of repos with a lot of tags10:54
bjackmanSo it takes a very long time10:54
*** jkt has joined #zuul10:57
*** gtema has quit IRC11:28
*** gtema has joined #zuul11:28
*** hashar is now known as hasharLunch11:33
*** bhavikdbavishi has joined #zuul11:43
*** gtema has quit IRC11:46
*** gtema has joined #zuul11:49
*** panda|rover is now known as panda|rover|lunc11:54
tobiashbjackman: zuul should only check branches, not tags afaik12:07
*** EmilienM is now known as EvilienM12:08
*** hasharLunch is now known as hashar12:15
*** gtema has quit IRC12:33
*** zbr|ssbarnea has quit IRC12:46
*** zbr has joined #zuul12:47
*** gtema has joined #zuul12:52
*** bhavikdbavishi has quit IRC13:07
*** frickler has quit IRC13:17
*** frickler has joined #zuul13:18
*** panda|rover|lunc is now known as panda|rover13:25
*** pcaruana has quit IRC13:30
*** rlandy has joined #zuul13:32
*** bhavikdbavishi has joined #zuul13:55
*** rfolco is now known as rfolco|ruck14:08
*** bhavikdbavishi has quit IRC14:10
mordredtristanC: I like that tree view a lot! how hard would it be, do you think, to add a toggle to switch between list and tree view? I ask because I went to search for "openstacksdk" - and the results are really cool and informative, but if I was looking for a specific job that I knew happened to have openstacksdk in it it's a bit harder to find. I think the tree view is a great default though - so maybe a14:21
mordred'flatten' checkbox or something? I think it could be a followup - mostly just asking while looking14:21
fungiin the project browser, what's the cause of/distinction between the multiple master branches listed like i can see at http://zuul.opendev.org/t/openstack/project/openstack/ironic14:24
tristanCmordred: yes sure, there could be a checkbox in the jobs view14:26
mordredtristanC: cool - I left a +2 with a comment saying something like that on the change14:27
tristanCfungi: that's because the zuul api returns each variant, that is each project-template applied and the different project definition14:27
tristanCfungi: i meant to look at freezing those per branch, either at the api level or in the webui14:27
fungiahh, so those are different project variants? or groups of job variants? (and what determines which tab they're grouped in?)14:28
tristanCfungi: each tab is either a project-template, or a project config (e.g. applied to a project from a config projects)14:30
fungii was going to point someone there to show how to tell what jobs a project is running, but after looking at it for a moment figured i would just confuse them even more14:30
tristanCfungi: this change show the api "issue" for project-template: https://review.openstack.org/60426414:30
fungiahh, thanks!14:31
fungiseems like fixing that ought to help14:31
tristanCfungi: indeed, that ironic page is confusing, there may be other issues when multiple project-templates are applied14:33
*** pcaruana has joined #zuul14:37
tristanCfungi: it seems like branch matchers are not taken into account, only the default branch. Thus that ironic page is conflating the master tab with some other branch.14:46
tristanCfungi: thus, either the api returns the existing branch matcher in https://git.zuul-ci.org/cgit/zuul/tree/zuul/model.py#n315914:46
tristanCfungi: or the api merges/freezes the different project configs based on the branch matchers14:47
tristanCcorvus: what's your thought on that ^ ?14:48
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: web: add flatten checkbox  https://review.openstack.org/64204715:04
tristanCmordred: ^ :)15:04
mordredtristanC: wow, you're quick :)15:05
mordredtristanC: oh - and thats way simpler of a patch than I was thinking it would be15:05
tristanCmordred: though the "processNode" may be too simple... but if we optimize it that patch may get more complex15:12
tristanCwith the amount of branches and jobs in the openstack tenant it's quite slow, but it works so...15:13
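The dashboard itself is JavaScript/React, but the flatten toggle amounts to something like this sketch (the 'name'/'children' node shape is an assumption for illustration, not the real processNode output):

```python
def flatten(nodes):
    """Collapse a job tree into a sorted, de-duplicated flat list.

    Depth-first walk over nodes shaped like {"name": ..., "children": [...]};
    a job that appears under several parents is listed only once.
    """
    flat = []
    for node in nodes:
        flat.append(node["name"])
        flat.extend(flatten(node.get("children", [])))
    return sorted(set(flat))
```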
*** jamesmcarthur has joined #zuul15:16
*** gtema has quit IRC15:30
corvustristanC: is the branch tab in http://zuul.opendev.org/t/openstack/project/openstack/ironic from the default_branch variable?15:45
corvusyes it is15:47
*** hashar has quit IRC15:48
corvustristanC, fungi: i think that's part of the confusion; that's not the right value to render there.  the default_branch is a property of the project as a whole -- it doesn't change in different contexts.  it's almost always 'master'.  but for ansible, it's 'devel'.15:48
*** TheJulia is now known as needssleep15:50
corvusit *happens* to sometimes have different values in the internal representation, just because of how it's defined, but zuul always coalesces it to the same value for a particular project before doing anything with it.  it might be useful to display it as a piece of data inside the tab, so that if it's defined twice in two different ways (which is usually an error), that's visible to the user.15:51
corvusbut it probably shouldn't be the tab at the top.15:51
corvustristanC, fungi: as to how to improve that page to make it more useful -- i think there are a couple of options15:56
corvusthe first is to keep the current approach of data representation but resolve the usability issues that make it confusing15:58
corvusthe first thing we should do is make sure that we put the source_context in the info for each tab -- that way we can see exactly where each of those config stanzas is.  that would help people understand that each of those tabs is a config stanza.15:58
corvusthen we should use some other value for the tab -- either the implied branch matcher, or source_context.branch15:58
*** irclogbot_3 has quit IRC15:59
corvusthe problem with using the implied branch matcher is that these can (just like jobs) have explicit branch matchers too, so we'd have to account for that (it may not always say "stable/queens", though it usually will.  explicit branch matchers get used a lot less often in project stanzas)16:00
corvuswe can handle it just like the job branch variant tabs16:00
corvuswe should also display the default branch and merge mode above the tabs, since it's global project info.  we should also display them inside the tabs too so users know where they are defined.16:01
corvusi think if we do that, then we'll have a more comprehensible page that answers the question "show me all of the configuration stanzas for this project"16:02
corvusthe other approach, which tristanC alluded to, would be to try to answer the question "what does this project do on this branch?"16:03
corvuswe're at a slight advantage with project variants compared to job variants -- we *do* know all the branches of a project.  so we can freeze the job graph for each of those branches and present the resulting configuration to the user.16:04
corvusi suspect that might be a slightly more common question that users ask, and perhaps displaying that as the project page may be more intuitive.16:05
corvusi think we can use the new freeze api to accomplish that.  so the javascript would say "get me all the branches for this project" then loop over those and "get me the configuration for this branch".  we could probably make a single convenience method which does all of that in one api call.16:06
corvusif we update the project page like that, i'm not sure whether we should *also* keep the current project page.  maybe we should, in case it's helpful for someone digging more deeply into the configuration.  but maybe we should move it off of the main project page.  so if you click on a project you get the new page with the frozen graph, and then from there you can click to see all the project variants.16:08
corvusregardless, the current api endpoint for returning all the project config stanzas should remain -- it's important for automated tools which need to understand the underlying configuration16:09
corvusso, in summary, i think we should make a new page using the freeze api, then link it to the current page, which we should update to correct the tab labels, but if no one thinks that's useful, we could drop it.16:10
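Roughly, the per-branch page corvus sketches would loop like this; the endpoint paths and parameter names here are placeholders for illustration, since the real zuul-web routes for listing branches and for the freeze api may differ:

```python
import requests

API = "https://zuul.example.com/api/tenant/example"   # hypothetical base URL


def frozen_project_config(project):
    """Fetch each branch of a project and its frozen configuration.

    One call to list branches, then one freeze call per branch; a real
    implementation might collapse this into the single convenience
    endpoint suggested above.
    """
    branches = requests.get(f"{API}/project/{project}/branches").json()
    return {
        branch: requests.get(
            f"{API}/project/{project}/freeze", params={"branch": branch}
        ).json()
        for branch in branches
    }
```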
*** bhavikdbavishi has joined #zuul16:12
*** dkehn has joined #zuul16:15
*** irclogbot_3 has joined #zuul16:17
*** irclogbot_3 has quit IRC16:18
*** irclogbot_3 has joined #zuul16:22
corvustobiash: i'm around if you want to chat :)16:25
tobiashcorvus: good timing, just arrived at home :)16:28
*** dkehn has quit IRC16:28
tobiashso the thing is that I'd like to spread the executors over multiple regions with each having an openshift16:28
tobiashthe restriction is that I can route into an openshift via a load balancer so I can route from the executors to gearman, but not from zuul-web to a specific executor16:29
tobiashwhich would break live log streaming16:29
*** dkehn has joined #zuul16:30
tobiashI had two ideas to solve that so far. One idea would be to do the streaming to zuul-web via gearman instead of finger.16:30
corvustobiash: hang on16:31
corvustobiash: you said you can route into openshift via a lb, so why can't you establish a lb for each executor so that you can access the streaming port?16:31
tobiashI'd have to open a new port for each executor and would need to configure a route to each executor16:32
corvustobiash: yeah, but that's the right thing to do in the container world16:32
tobiashthat makes the executor deployment much more complicated and would need a firewall rule change on the openshift nodes each time an executor is added16:33
tobiashI have another hand-wavy idea of making the finger-gw the 'load balancer' or dispatcher in the other openshift.16:34
corvustobiash: i almost hesitate to ask this, but why do you want to spread the executors out?  that's not the right architecture for zuul.  it's designed so that all the zuul components run together, and the executors reach out to external resources.16:35
tobiashthat would expand the executor zone concept a bit and could make it possible to route the stream from zuul-web to the finger-gw in the other openshift which then would dispatch the connection to the correct executor16:35
tobiashcorvus: our executors transfer a lot of data to and from the nodes, so I anticipate that this will eventually become a problem16:36
tobiashalso that would spread the io load of the executors over multiple regions16:37
tobiashbut you're right, it's not a direct requirement so far16:37
corvustobiash: ok.  i'll accept that as a given for the moment.  :)  tell me more about your fingergw idea16:38
tobiashI'm just trying to think ahead so I have ideas how to solve such bottlenecks when we hit them16:38
tobiashso the zuul-web, zuul-executor and fingergw (we would have one in each cluster) would know their 'zone'16:38
tobiashwhen zuul-web asks for the finger-address of a build it would send its zone and would get back the executor if it's in the same zone or the other fingergw if it's in another zone16:39
tobiashand the other fingergw would do the same resolution and dispatch further to the correct local executor16:40
tobiashthat way the finger-gw would be the equivalent component to the router/ingress when using http dispatching16:41
corvushow does a user end up at a particulor zuul-web?16:42
tobiashzuul-web could be in just one location or even reachable in multiple locations that are chosen via dns/geo ip or similar mechanisms16:43
tobiashif each zuul-web knows its zone that wouldn't make a difference16:43
corvusoh i thought you said zuul-web in each cluster16:43
tobiashwell, that would be possible too16:43
tobiashI meant that you need at least the fingergw in all clusters with executors16:44
corvustobiash: let's try this: https://awwapp.com/b/u1cxnco8b/16:46
corvusnever used it before, it's just the first google result for shared whiteboard16:46
tobiashwow, never saw this :)16:47
corvusyour screen is a lot larger than mine :)16:47
tobiashit's a 4k, ok I try to make it more compact ;)16:49
corvusdon't worry, i zoomed out16:49
tobiashhow can I move stuff?16:50
corvustobiash: arrow cursor at the top, drag a box around it, then move16:51
tobiashsomehow that didn't work at the first try16:51
corvusokay, i think i get it now16:52
corvusi wasn't imagining multiple executors per region earlier, only one16:52
corvus(probably because i'm more used to thinking about zones as one executor per zone)16:52
tobiashah ok16:53
corvusbut now i see what you mean, using the fingergw as an aggregator lets you set up a region with one ingress point, and then scale out to N executors in each region16:53
tobiashexactly16:53
tobiashI think that could even be relatively simple to support16:53
tobiashzuul-web already asks the scheduler for the executor executing job xyz16:54
tobiashI think we'd just need to add zone information to that call16:54
tobiashand that would magically work16:55
tobiashor maybe call it region to not confuse it with the nodepool zone information16:55
corvusi like that idea -- i don't think it's going to present problems scaling this in the future.  i think the only tricky bit will be updating the finger protocol request syntax to support routing, and maybe making sure this doesn't open some kind of DOS or vulnerability.16:55
corvuswell, since the executor knows its "zone" i think it makes sense to keep the same idea.16:56
tobiashyes16:56
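A sketch of the zone-aware dispatch being discussed, assuming the scheduler's answer to "where is build X streaming from" is extended with the executor's zone; the field and function names below are made up for illustration, not taken from zuul:

```python
def resolve_stream_address(build_info, my_zone, zone_gateways):
    """Pick where zuul-web (or a fingergw) should connect for a log stream.

    build_info: dict with the executor's finger address and its zone,
                as (hypothetically) returned by the scheduler.
    my_zone: the zone of the component doing the lookup.
    zone_gateways: map of zone name -> that zone's fingergw address.
    """
    if build_info["zone"] == my_zone:
        # Same region: connect directly to the executor's finger port.
        return build_info["executor_address"]
    # Other region: hand off to that region's fingergw, which repeats
    # this resolution against its local executors.
    return zone_gateways[build_info["zone"]]
```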
corvusoh, apparently https://www.draw.io/ is open source16:59
corvushttps://github.com/jgraph/drawio16:59
tobiashI think we'll start with all executors in the same cluster anyway, but I'm feeling good that we have a solution in mind that can solve those problems if we hit them16:59
corvustobiash: yeah, i think that will work16:59
tobiashcool, that makes me happy :)16:59
corvus\o/17:00
*** jamesmcarthur_ has joined #zuul17:00
*** jamesmcarthur has quit IRC17:03
tobiashcorvus: just curious after reading your mail, will zuul get its own zuul-tenant on opendev or will it stay in openstack?17:05
corvustobiash: its own tenant.  we could actually do that now.17:06
corvustobiash: ttx and clarkb pushed through the initial changes to make us multi-tenant: http://zuul.opendev.org/tenants17:07
pabelanger+1 to do now :)17:07
tobiashyeah, just saw that17:07
tobiashso zuul will vanish then from zuul.o.o web ui?17:08
corvusi've had too much on my plate to start that right now, but if other folks have time, i'm happy to help :)17:08
tobiashI guess the scheduler is the same17:08
clarkbtobiash: yes I think that is the mid to long term plan17:08
pabelangersounds like a swell friday afternoon project17:08
clarkbnot sure anyone has it on their immediate plan to do17:08
corvusheh, o.o is ambiguous now :)  but, if you mean that zuul and nodepool projects won't appear in zuul.openstack.org, correct :)17:08
tobiashoh right17:09
tobiashfriday evening...17:09
corvusi'm especially looking forward to having our own tenant because we can drop "clean check" and go back to enqueueing changes directly into gate :)17:09
pabelangerwhere do we see out config-project for zuul tenant? zuul-config I assume?17:10
pabelangers/out/our17:10
corvussounds reasonable, unless we wanted to reserve that name for a future tool called "zuul-config".  things get tricky in our namespace :)17:11
pabelangerzuul-project-config might also work17:11
tobiashmaybe just zuul/conf?17:11
pabelangerzuul/tenant-config?17:11
tobiashtenant-config should be really the zuul tenant configuration17:12
pabelangertrue17:12
*** rlandy is now known as rlandy|brb17:12
corvusi like zuul/config zuul/project-config17:12
tobiash++17:12
*** pcaruana has quit IRC17:12
*** gtema has joined #zuul17:13
*** panda|rover is now known as panda|rover|baby17:48
*** rlandy|brb is now known as rlandy17:51
dmsimardbtw I found this yesterday, a sphinx lexer for yaml/yaml+jinja: https://github.com/ansible/ansible/blob/devel/docs/docsite/_extensions/pygments_lexer.py17:57
dmsimardsharing in case it might be useful for zuul docs :)17:58
dmsimardI asked the author if it was possible to split it out of Ansible so it can be re-used more easily17:59
mordreddmsimard: seems like the sort of thign that should just be submitted upstream to pygments no?18:00
mordreddmsimard: http://pygments.org/docs/lexers/#pygments.lexers.data.YamlLexer <-- pygments has one - perhaps that ansible one is just improving it - or maybe adding support for ansible-specific yaml things?18:01
dmsimardmordred: I dunno. I found it by accident last night when searching for issues preventing me from lexing yaml18:01
mordreddmsimard: yes - that is a fork of the upstream yaml lexer18:01
dmsimardmordred: in any case, if you're asking me if it should be upstreamed -- you're preaching to the choir :p18:03
* mordred now imagines a choir composed of only dmsimard18:03
*** electrofelix has quit IRC18:04
dmsimardwouldn't that be something18:16
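If someone wanted to try that lexer in the zuul docs before it lands upstream in pygments, the Sphinx hookup might look roughly like this; the vendored path and the lexer class name are assumptions based on the linked Ansible file:

```python
# conf.py -- wire a vendored pygments lexer into Sphinx.
import os
import sys

sys.path.insert(0, os.path.abspath("_extensions"))   # assumed vendor location
from pygments_lexer import AnsibleYamlJinjaLexer      # class name assumed


def setup(app):
    # Older Sphinx versions expect a lexer instance here; newer ones
    # also accept the class itself.
    app.add_lexer("yaml+jinja", AnsibleYamlJinjaLexer())
```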
*** irclogbot_3 has quit IRC18:17
*** irclogbot_3 has joined #zuul18:20
*** saneax has quit IRC18:22
SpamapSHrm, kubernetes pod node type didn't work for me. :-/18:31
*** jamesmcarthur_ has quit IRC18:32
corvusmordred: you said " put in an entry to the Zuul CR" in your most recent email.  what does that mean?  what's a zuul CR?18:34
mordredcorvus: custom resource18:34
mordredcorvus: or maybe I should just say resource there18:34
corvusmordred: is that the same or different than a CRD?18:35
mordredcorvus: I was saying CRD before - but CRD is "custom resource definition" - so maybe that's still the right term to use here... shrug?18:35
*** gtema has quit IRC18:35
corvusin my mental model, a CRD is a bit of yaml that causes k8s to create a CR which is a mess of containers, etc.18:36
corvusmordred: anyway, i now parse the sentence, thx :)18:36
openstackgerritMerged openstack-infra/nodepool master: docker: don't daemonize when starting images  https://review.openstack.org/63558418:37
mordredcorvus: ah - in my mental model the CRD is the bit of yaml that tells k8s what the spec is for a type of custom resource - and then a user defines a resource using yaml that matches that spec - but it's entirely possible that I'm overthinking those words and the invocation of a custom resource is the thing referred to by CRD18:38
corvusmordred: maybe you're right.  but if i just s/CRD?/yaml/ i can understand your point well enough for this conversation.  :)18:39
mordredcorvus: yaml all the yamls with the docker in the dockers for your yamls18:39
corvussomeone is going to have to understand that difference eventually, but i don't think it's now :)18:39
corvusmordred: here, let me send you a github to yaml your docker18:40
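To spell the distinction out (all names below are made up for illustration, not from an actual zuul operator): the CRD is the yaml that teaches the API server about a new kind, while a CR is one instance of that kind that a user writes.

```yaml
# CustomResourceDefinition: defines the new "Zuul" kind and its group.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: zuuls.example.com
spec:
  group: example.com
  names:
    kind: Zuul
    plural: zuuls
    singular: zuul
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
---
# Custom resource: a single Zuul instance; this is the "entry in the
# Zuul CR" that an operator would act on.
apiVersion: example.com/v1alpha1
kind: Zuul
metadata:
  name: my-zuul
spec:
  executors: 2
```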
SpamapSI yaml my kubernetes with istio, docker and calico, and a side of fluentd.18:46
openstackgerritMerged openstack-infra/zuul master: tests: remove debugging prints  https://review.openstack.org/64194218:48
SpamapSHrm, is there a way to bypass the default base job entirely?18:52
SpamapSMy kubernetes pod issue is failing on the fact gathering in the first pre playbook of the base job.18:53
SpamapSSo I can't like, get ahead of the fail with a - pause: minutes=60 and peek inside the executor18:53
clarkbSpamapS: you can disable fact gathering iirc18:54
*** bhavikdbavishi has quit IRC18:55
pabelangerSpamapS: does autohold work?18:56
pabelangeroh, you said executor18:57
SpamapSclarkb: at the play level, yes... I don't control the play of the first base job.18:57
SpamapSpabelanger: yeah I can autohold the pod. The problem is the executor can't talk to kubernetes to exec in.18:57
pabelangerat one point, I think we discussed autohold not deleting executor build directory18:58
pabelangeronly after the hold was removed18:58
tobiashSpamapS: facts gathering is done before the first base job18:59
mordredSpamapS: we do a few instances of dropping special fact files into the fact cache so that fact gathering doesn't try to gather facts - and I think we skip fact gathering for certain types18:59
mordredmaybe we're missing this for k8s pod type18:59
SpamapSI could turn on keep actually19:00
tobiashSpamapS: https://opendev.org/openstack-infra/zuul/src/branch/master/zuul/executor/server.py#L112219:00
tobiashSpamapS: so maybe the ansible setup might need to be skippable?19:01
pabelangertobiash: yah, still would like to find a way to make that optional. Very soon it will be an issue for us, ansible-network, when we bring online network vendors that don't work properly with fact gathering19:01
mordredtobiash, SpamapS: https://opendev.org/openstack-infra/zuul/src/branch/master/zuul/executor/server.py#L396-L40119:01
tobiashSpamapS: and regarding skipping the base job, you can just define your own base job in any config-repo19:01
mordredwe skip it for localhost19:01
mordredbut plopping that in19:01
mordredbut yeah - maybe we want to do some things more generally to allow skipping fact gathering (and putting in stub files) on some hosts or types of hosts19:02
tobiashSpamapS: a base job btw, doesn't need to be called 'base', just set parent: null19:03
SpamapSparent: null was what I was looking for :)19:03
tobiashbut that won't solve your problem if the problem is facts gathering19:03
SpamapSthat's not where it's failing19:04
SpamapSso something else is going on19:04
tobiashah ok, then it might solve your issue :)19:04
SpamapSfact gathering in the base job is specifically where it is failing with a 401 from kubernetes.19:04
SpamapSAlso I just realized kubectl exec doesn't work for any pods on that cluster.. so.. might not be zuul's fault19:08
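A sketch of tobiash's point about alternate base jobs (job and playbook names are placeholders); any job with "parent: null" acts as a base job and doesn't have to be named 'base'. Note, per the discussion above, this does not by itself skip the executor's fact-gathering setup phase:

```yaml
# In a config-project:
- job:
    name: minimal-base
    parent: null
    description: Bare-bones base job that skips the usual pre/post playbooks.
    pre-run: playbooks/minimal-base/pre.yaml

- job:
    name: k8s-pod-debug
    parent: minimal-base
    run: playbooks/debug.yaml
```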
*** rfolco|ruck is now known as rfolco|ruck|brb19:11
*** jamesmcarthur has joined #zuul19:11
openstackgerritMerged openstack-infra/zuul master: SQL: only create tables in scheduler  https://review.openstack.org/61069619:12
*** jamesmcarthur has quit IRC19:30
*** jamesmcarthur has joined #zuul19:43
*** irclogbot_3 has quit IRC20:13
*** rfolco|ruck|brb is now known as rfolco|ruck20:21
openstackgerritJames E. Blair proposed openstack-infra/nodepool master: Remove prelude from AWS release note  https://review.openstack.org/64214820:42
corvuspabelanger, SpamapS, tobiash, mordred: ^ i think we should merge that before release20:42
corvusthe release notes look really weird with that in place: https://zuul-ci.org/docs/nodepool/releasenotes.html#in-development20:42
tobiash+220:43
corvusthe prelude would typically be used for introductory text.  like "the theme of this release is improving stability, therefore the following changes blah blah blah" :)20:44
tobiashI wonder why we added that20:44
corvusi think zuul is ready though -- so shall i tag 603ce6f474ef70439bfa3adcaa27d806c23511f7 as 3.6.0 ?20:46
pabelangerwfm +320:47
pabelangercorvus: +120:47
tobiash+120:49
corvuspushed20:50
pabelanger\o/20:51
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Revert "Docker: use the buildset registry if defined"  https://review.openstack.org/64215020:56
pabelanger3.6.0 looks to be on pypi, giving it a try locally now!21:00
*** daniel2 has quit IRC21:12
*** bjackman has quit IRC21:13
*** jamesmcarthur has quit IRC21:13
*** jamesmcarthur has joined #zuul21:14
*** daniel2 has joined #zuul21:16
corvusokay that's one release down :)21:25
openstackgerritMerged openstack-infra/nodepool master: Remove prelude from AWS release note  https://review.openstack.org/64214821:25
corvusand time to start the next!  :)21:25
*** jamesmcarthur has quit IRC21:27
*** jamesmcarthur has joined #zuul21:28
corvusnodepool 3.5.0 pushed21:35
tobiash\o/21:35
corvusi just realized -- what are we going to do when zuul revs to v4?  would we rev nodepool too?  even if there are no operator-visible changes?  :)21:37
corvussemver vs ocd!21:37
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul-preview master: Port in changes from the c++ version  https://review.openstack.org/64108921:37
Shrewsmordred: i think ^^ grabs your suggested changes too21:38
corvusokay, both releases done :)21:46
*** jamesmcarthur has quit IRC21:50
*** jamesmcarthur has joined #zuul21:53
SpamapS\o/21:53
*** jamesmcarthur has quit IRC21:54
clarkbI think I would prefer to stick to semver. If we uprev nodepool, people might not upgrade it at that point, thinking there is work required to do so21:55
ShrewsSpamapS: is the "&response.json::<Value>()" syntax you used in zuul-preview an old way to specify a generic type? I can't seem to find that documented anywhere.21:56
clarkbShrews: it is part of the serde json lib21:56
clarkbValue is a type defined there so it is casting the json response into the serde library value type aiui21:56
SpamapSYeah it's just explicit typing because Serde's model makes it impossible to infer.21:57
mordredSpamapS: we couldn't find docs on the explicit typing syntax for functions like that ...21:58
Shrewsfunc::<Type>() is the thing i'm asking about21:59
mordredyah. that21:59
*** jamesmcarthur has joined #zuul22:08
openstackgerritMerged openstack-infra/zuul-jobs master: Revert "Docker: use the buildset registry if defined"  https://review.openstack.org/64215022:09
*** EvilienM is now known as EmilienM22:16
*** rlandy has quit IRC22:19
*** jamesmcarthur has quit IRC22:19
pabelangercool, zuul 3.6.0 installed and working22:22
pabelangernow to bump nodepool22:22
pabelangerwith the zuul dashboard, if you don't have a database set up, the Builds and Buildsets pages don't work, they just show 'Loading...'. I know adding a database is the fix, but until that is required, is there any way to have the web first check for a database?22:32
pabelangerand not render in that case?22:32
corvuspabelanger: it's, er, supposed to do that22:45
corvusbut, tbh, if that's broken, we'll probably just fix it by requiring the db.  we're really close to that point.22:45
pabelangeryah, I've just been too lazy to setup a db in this local test environment, I think my fix is to do that and be done with it :)22:47
pabelangerso, +1 to require db22:47
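For anyone else staring at the 'Loading...' page, the missing piece is an sql connection in zuul.conf, roughly like the sketch below (connection name and credentials are examples); if I recall correctly, in 3.x the pipelines also need an sql reporter pointing at this connection before builds get recorded:

```ini
[connection database]
driver=sql
dburi=mysql+pymysql://zuul:secret@127.0.0.1/zuul
```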
pabelangeralso, it is pretty cool that zuul 3.5.0 upgraded another zuul to 3.6.0! hehe22:48
corvusShrews, mordred: i think https://matematikaadit.github.io/posts/rust-turbofish.html is what you're looking for (cc SpamapS)22:52
corvusclarkb: ^22:52
*** irclogbot_3 has joined #zuul23:02
pabelangerand nodepool 3.5.0 looks to be working properly23:08
pabelangerYay23:09
tristanCtobiash: SpamapS: executor should skip the setup phase for kubectl connection because of https://opendev.org/openstack-infra/zuul/src/branch/master/zuul/executor/server.py#L5723:34
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: web: add flatten checkbox  https://review.openstack.org/64204723:45
SpamapSmordred: Shrews that's called "turbo fish" type annotation.23:59
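A tiny illustration of the turbofish in question (serde_json is used here as in zuul-preview; the JSON string is made up):

```rust
use serde_json::Value;

fn main() {
    let body = r#"{"status": "ok"}"#;
    // Two equivalent ways to pin the generic type of from_str::<T>():
    let v = serde_json::from_str::<Value>(body).unwrap(); // turbofish
    let w: Value = serde_json::from_str(body).unwrap();   // type annotation
    assert_eq!(v, w);
}
```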
