Monday, 2018-11-12

pabelangerclarkb: Ack, it is pretty minimal. I'll see about proposing a patch this week to help provide a little more information. A user on twitter is confusing contributing to zuul with having to sign the CLA for the OpenStack foundation00:53
*** irclogbot_3 has quit IRC00:56
tristanCcorvus: it seems like we are not handling api status code correctly, i don't think it's related to the queue id thing04:11
tristanCcorvus: the typo in pipeliens is odd though04:11
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: Add monitoring driver spec draft  https://review.openstack.org/61722004:40
*** chandankumar has joined #zuul06:04
*** chandankumar is now known as chkumar|rover06:06
*** bjackman has joined #zuul06:26
bjackmanI tried increasing the max-parallel-jobs field of the Nodepool static driver configuration, and I noticed that the working directory on the test node was still directly in the $HOME directory (i.e. the project under test was at ~/src/gerrit/foo) - this seems like it will be a problem when there are multiple jobs in parallel; they'll step on each other's working directory06:31
bjackmanDo I need some extra config to set a unique workdir for each job?06:31
*** themroc has joined #zuul07:59
*** bjackman has quit IRC08:04
clarkbbjackman isn't here anymore but yes, I believe you need a different base job that is smart about where the job content goes on disk on the remote nodes08:32
corvuspabelanger: can you ask if the user read the readme?08:34
corvuspabelanger: it's two separate questions -- Is there enough information in the readme on how to contribute? vs Did the user find the information on how to contribute?08:35
corvusbut, regardless, this is why we need to use a gerrit that is not hosted at openstack.org (whether that's review.opendev.org or review.zuul-ci.org, i'm not sure).  but it's not the first time it's happened, and it's the main motivation for new top-level project hosting.08:36
corvusclarkb, fungi, mordred: ^ fyi08:36
clarkbit's unfortunate that that assumption gets made at all (but we can't control that, only make it clearer that this is openstack and that isn't, for various values of this and that)08:37
corvustristanC: are you able to reproduce the issue?  you may be able to by starting up zuul with no mergers or executors online08:38
corvusclarkb: i don't expect the assumption to be made once there are no openstack.org domains involved in zuul development08:38
clarkbcorvus: yup08:38
*** pcaruana has quit IRC08:39
*** hashar has joined #zuul08:41
*** bjackman has joined #zuul08:44
*** jpena|off is now known as jpena08:51
bjackmanAhhh I guess the answer to my previous question is to set the {{ zuul_workspace_root }} variable09:33
bjackmanI guess there is some way to set it to the {{ zuul.build }} UUID09:34
corvusbjackman: yeah, the zuul_workspace_root is a convention used by jobs and roles in the zuul-jobs repo. it's not part of zuul itself (zuul doesn't do anything on remote nodes). but if you set that in your jobs, the setup-workspace role, and others, will use it.  clarkb suggested after you left that you could do that in a base job.09:37
bjackmancorvus: Got it, thanks.09:37
bjackmanclarkb: Sorry I missed your response, I lost my network connection.09:38
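For reference, a minimal sketch of what such a base job could look like, assuming the workspace-setup roles in use read zuul_workspace_root as an Ansible variable; the job name and path are illustrative, not taken from the discussion:

    - job:
        name: base
        parent: null
        vars:
          # Give each build its own workspace so parallel jobs sharing a
          # static node don't step on each other (illustrative path).
          zuul_workspace_root: "/home/zuul/{{ zuul.build }}"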
corvusbjackman: eavesdrop.openstack.org has channel logs if you need them btw09:47
bjackmancorvus: I actually had a look in there after I got back online and didn't see anything, maybe there was a race condition09:47
tobiashbjackman: I think it updates the logs every 15 minutes09:49
clarkbya we could probably change it to render the html on demand or more frequently09:49
clarkbthere is a raw log too, but I don't think we link those anywhere09:50
clarkbhttp://eavesdrop.openstack.org/irclogs/%23zuul/%23zuul.2018-11-12.log that should always be up to date09:50
tobiashgood to know09:51
*** chkumar|rover is now known as chkumar|ruck10:51
openstackgerritFabien Boucher proposed openstack-infra/zuul master: encrypt_secret: support self-signed certificates via --insecure argument  https://review.openstack.org/61728110:56
pabelangercorvus: will try to confirm. So far I think it is like you say, users see review.o.o and think zuul is part of openstack CLA.11:51
*** aluria has joined #zuul12:05
*** bjackman has quit IRC12:08
*** sshnaidm is now known as sshnaidm|afk12:09
*** quiquell has joined #zuul12:25
quiquellHello12:25
*** bjackman has joined #zuul12:25
bjackmanLooks like there's no CLA to contribute to Zuul, is that right?12:26
quiquellI am trying to test "git" driver over a local repository12:27
clarkbbjackman: correct12:27
quiquellBut zuul is not detecting the changes at my testing pipeline12:27
quiquellI am just using trigger.event: ref-updated12:27
quiquellDoes it depend on the poll_delay ?12:28
clarkbquiquell: yes that determines how often it polls the repo. Looks like default is 2 hours?12:29
quiquellclarkb: changed it to just 30 seconds, creating a commit in this local repo is still not detected12:29
quiquellclarkb: with git show-ref I see the ref is updated12:29
quiquellclarkb: Do I have to use a bare repo and clone locally to add changes ?12:30
clarkbquiquell: you'll need to update the repo that is in your connection. it looks like it uses ls-remote to list the refs on the remote, wherever that is12:32
clarkbcorvus may know more. I think corvus added the driver12:34
*** jpena is now known as jpena|lunch12:35
quiquellclarkb: so it looks for changes to the refs, but on the remote ?12:35
clarkbquiquell: it lists refs in the repo listed in the connection12:36
clarkband checks if there are new refs in that list, if any have been deleted, or if they point to new commits12:37
quiquellclarkb: So if my connection points to a normal repository12:37
quiquellclarkb: By normal I mean a clone on my machine12:38
quiquellclarkb: And if I just make a commit on it, it should trigger ?12:38
clarkbyes I would expect so12:38
clarkbfor example adding a commit to master should create a ref-updated event for the master branch updating12:38
quiquellclarkb: ack12:41
corvusclarkb, quiquell: yes that's how i would expect it to work. debug logs from the scheduler process may be helpful in debugging12:42
quiquellcorvus: I think I found my issue, I was trying it with docker-compose external volumes, and they don't get updated inside the running containers12:42
quiquellcorvus: Will try with bind mount12:43
corvusquiquell: yep, that sounds probable12:43
quiquellcorvus: Attaching to the images I see the pseudo-remote is not updated :-)12:43
*** rlandy has joined #zuul12:47
*** sshnaidm|afk is now known as sshnaidm12:48
*** quiquell is now known as quiquell|brb12:49
*** bjackman has quit IRC12:50
*** EvilienM is now known as EmilienM13:00
quiquell|brbcorvus: If I don't set the 'ref' attribute will it work, or do I have to add it ?13:02
quiquell|brbcorvus: Doing a git ls-remote shows differences13:03
*** quiquell|brb has quit IRC13:09
*** chkumar has joined #zuul13:10
*** chkumar is now known as chkumar|rover13:11
*** chkumar|ruck has quit IRC13:13
*** chkumar|rover is now known as chkumar|ruck13:13
corvusquiquell is not here, but if ref is omitted from the git trigger, it should trigger on all refs13:31
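For context, a rough sketch of a pipeline using the git driver's ref-updated trigger; the connection name must match whatever is configured in zuul.conf, and the pipeline name and manager here are illustrative. The ref attribute is shown commented out, since omitting it matches all refs as corvus describes:

    - pipeline:
        name: post
        manager: independent
        trigger:
          my-git-connection:
            - event: ref-updated
              # ref: ^refs/heads/master$   # omit to trigger on every updated ref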
*** jpena|lunch is now known as jpena13:31
*** dkehn has joined #zuul13:52
*** chkumar|ruck has quit IRC14:22
*** hashar has quit IRC14:37
tobiashcorvus, Shrews: do you have an idea how nodes can end up in ready & locked for a few hours?14:42
*** frickler has quit IRC14:43
*** frickler has joined #zuul14:43
clarkbtobiash: iirc there is a bug in zuul (I thought corvus fixed it though?) where if a job reserves/locks a node and then, before using it, the job is removed from the pipeline by a new patchset that doesn't need the job, we leak that node14:53
tobiashclarkb: that sounds pretty similar to my issue14:54
tobiashit looks like zuul has a lock on these nodes14:54
clarkblet me see if I can find the patch that shoud've fixed it14:54
tobiashthx14:54
*** bjackman has joined #zuul14:55
clarkbtobiash: https://review.openstack.org/#/c/605527/ that one I think14:55
clarkbmaybe it didn't fully fix the issue14:55
tobiashhrm, that is for sure already in my deployment :(14:56
tobiashso it looks like something different14:56
tobiashclarkb, corvus: found the reason15:03
tobiashthe job in question has a semaphore, but the semaphore seems to not be respected when requesting and locking resources15:03
tobiashthat will be fun to sort out15:03
clarkbtobiash: oh so it has reserved the instance and is waiting for the semaphore to allow it to proceed?15:03
tobiashyes15:04
clarkbthat could cause deadlock in the system depending on ordering (if you can lock the node but not the semaphore and some other job locks the semaphore and not the node)15:04
tobiashand that's been going on for 3 hours with 20*16 core nodes :/15:04
corvustobiash: what has the semaphore?15:06
tobiashwe have a job x that has a semaphore with n=1515:06
tobiashnow we have 30 of them in the pipelines15:06
tobiash15 are running and 15 queued according to the semaphore15:06
tobiashbut there are already 30 nodes requested and locked which blocks other projects as we're at quota right now15:07
corvustobiash: so you're not actually deadlocked, right?15:07
tobiashno, but wasting 1/4th of our resources atm15:08
corvustobiash: gotcha.  obviously a problem, just wanted to make sure i understood15:08
corvustobiash, clarkb: we could have an option to wait for a semaphore before requesting nodes.  do you think that would help in this case?15:09
clarkbcorvus: is there any reason that shouldn't be the default or just how it always works?15:09
tobiashcorvus: I even think this should not be an option15:09
corvus(i think it needs to be an option, because i'm sure in some cases the current behavior is preferable)15:09
corvusha!  we disagree :)15:10
clarkbI think it should be default at least, not sure if it needs to be toggleable yet15:10
tobiashcorvus: do you have a specific case in mind where this would be preferable?15:10
clarkb(but maybe the behavior change is scary enough that for backward compat we don't make it default in the toggleable world)15:10
openstackgerritFabien Boucher proposed openstack-infra/zuul master: WIP - Pagure driver  https://review.openstack.org/60440415:11
corvusi'm pretty open to changing the default.  i don't have a specific use case, but often getting nodes is a bottleneck, and being able to run jobs in quick succession would be good.15:12
corvusactually, here's an example: in openstack, if we did afs publishing in a job, we might prefer the current behavior (the nodes aren't special, the semaphore is application-layer)15:12
tobiashcorvus: ok, so we can make this configurable as a compromise15:13
tobiashcorvus: would you make this togglable per semaphore or zuul config?15:13
corvustobiash: semaphore i think.  not global.  maybe even per-job.  (where the job specifies the semaphore)15:13
tobiashcorvus: ok, so the (safe) default behavior would be to wait before requesting nodes, and some jobs/semaphores might override that for performance?15:15
corvustobiash: sounds reasonable.  this might be worth a mailing list post to get more feedback (people may use semaphores for widely variable reasons)15:16
tobiashok, will write one15:18
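For reference, the kind of configuration being discussed: a semaphore capping a job at 15 concurrent builds. The names are illustrative; whether the node request waits for the semaphore is exactly the behavior debated above:

    - semaphore:
        name: limited-resource
        max: 15

    - job:
        name: x
        # at most 15 builds of jobs using this semaphore run at once
        semaphore: limited-resource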
*** quiquell has joined #zuul15:22
quiquellcorvus: Going back to the git driver I have the following config15:22
quiquell[connection "git.openstack.org"]15:22
quiquellname=git.openstack.org15:22
quiquelldriver=git15:22
quiquellbaseurl=file:///projects/15:22
quiquellpool_delay=3015:22
quiquellAnd under /projects/ in the container a bind mount with all the projects15:23
quiquellis this ok ?15:23
clarkbwhy not point it at gerrit at that point?15:24
clarkboh right, the url is local, logically it is git.openstack.org, nevermind15:24
corvusquiquell: you can use paste.openstack.org in the future for pastes like that15:24
clarkb(jet lag brain happening here)15:24
quiquellclarkb: Doing some experiments about running zuul locally using local repos15:24
quiquellcorvus: ack about paste sorry15:25
corvusquiquell: https://zuul-ci.org/docs/zuul/admin/drivers/git.html#attr-%3Cgit%20connection%3E.poll_delay  -- looks like the delay setting is misspelled15:25
corvusquiquell: and yeah, you may want a different name to be less confusing :)15:25
quiquellcorvus: Jojojo, so lame :-=), thanks15:25
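For reference, a sketch of the same connection with the option spelled as in the linked docs (poll_delay rather than pool_delay) and a name that doesn't shadow git.openstack.org, as corvus suggests; the connection name here is just an example:

    [connection local-git]
    driver=git
    baseurl=file:///projects/
    poll_delay=30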
*** jimi|ansible has joined #zuul15:32
quiquellworking now :-)15:36
*** chandankumar has joined #zuul15:41
*** chandankumar is now known as chkumar|ruck15:41
*** hashar has joined #zuul15:46
*** edleafe_ has joined #zuul15:47
*** chkumar|ruck has quit IRC15:58
*** themroc has quit IRC16:27
*** irclogbot_0 has joined #zuul16:35
*** irclogbot_0 has quit IRC16:36
*** pcaruana has joined #zuul17:27
*** hashar has quit IRC17:33
*** caphrim007 has joined #zuul17:58
*** jpena is now known as jpena|off18:13
* SpamapS FOMO'ing soooo hard18:20
*** quiquell is now known as quiquell|off18:48
*** bjackman has quit IRC21:49
*** bjackman has joined #zuul21:49
*** bjackman has quit IRC21:53
*** bjackman has joined #zuul21:53
*** pcaruana has quit IRC22:58
*** dkehn has quit IRC23:38
*** spsurya has quit IRC23:40
*** dkehn has joined #zuul23:44
