Wednesday, 2018-01-10

00:02 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Add alembic.ini  https://review.openstack.org/532260
00:02 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Fix error handling for pidfile  https://review.openstack.org/532206
00:04 <openstackgerrit> James E. Blair proposed openstack-infra/zuul feature/zuulv3: Move CRD tests to test_gerrit_legacy_crd  https://review.openstack.org/531886
00:04 <openstackgerrit> James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add skipped CRD tests  https://review.openstack.org/531887
00:04 <openstackgerrit> James E. Blair proposed openstack-infra/zuul feature/zuulv3: WIP: Support cross-source dependencies  https://review.openstack.org/530806
00:14 *** JasonCL has quit IRC
00:15 *** JasonCL has joined #zuul
00:24 *** rlandy has quit IRC
00:38 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: builder: do not cleanup image for driver not managing image  https://review.openstack.org/516920
00:40 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement a static driver for Nodepool  https://review.openstack.org/468624
00:44 *** corvus has quit IRC
01:43 *** JasonCL has quit IRC
01:48 *** JasonCL has joined #zuul
02:24 *** harlowja has quit IRC
02:42 *** mordred has quit IRC
02:43 *** mordred has joined #zuul
05:06 *** bhavik1 has joined #zuul
05:06 *** bhavik1 has quit IRC
05:06 *** bhavik1 has joined #zuul
05:21 *** corvus has joined #zuul
05:24 *** harlowja has joined #zuul
05:24 *** bhavik1 has quit IRC
05:35 *** bhavik1 has joined #zuul
05:42 *** bhavik1 has quit IRC
06:15 <tobiash> SpamapS: I also had trouble with the hangs and spent half a day debugging a similar issue
06:16 <tobiash> when I run zuul locally without any logging config I get the alembic errors, but not in my environment where I really run zuul
06:16 <tobiash> so this might have something to do with the log config
06:17 <SpamapS> tobiash: yeah, that was really weird.
06:17 <SpamapS> I'm just now adding some jobs to report to SQL and wondering about cleanup tasks.
06:17 <SpamapS> Seems like that DB could get big on a busy system.
07:02 *** Guest76 is now known as jlk
07:36 *** threestrands has quit IRC
07:49 *** harlowja has quit IRC
07:52 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Do pep8 housekeeping according to zuul rules  https://review.openstack.org/522945
07:52 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Use same flake8 config as in zuul  https://review.openstack.org/509715
07:55 <SpamapS> tobiash: with the sql reporter, have you seen cases where post jobs can't be reported?
07:55 <SpamapS> for some reason mine aren't showing up in builds
07:55 <tristanC> SpamapS: it should; is the sqlreporter added to your post pipeline?
07:55 <SpamapS> tristanC: I think so. :)
07:56 <SpamapS> tristanC: you win the Captain Obvious award tonight. It got left off.
07:57 <tristanC> yay :-)
08:03 <SpamapS> so do we need to add a GC thread for sql reporter connections?
08:03 <SpamapS> something that will just wake up every few minutes and delete old builds?
08:09 <tristanC> SpamapS: every few minutes sounds a bit overkill; actually, why would you even delete entries from the database?
08:10 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: handler: fix support for handler without launch_manager  https://review.openstack.org/524773
08:10 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Add a plugin interface for drivers  https://review.openstack.org/524620
08:10 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: builder: do not cleanup image for driver not managing image  https://review.openstack.org/516920
08:10 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement a static driver for Nodepool  https://review.openstack.org/468624
08:10 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Refactor run_handler to be generic  https://review.openstack.org/526325
08:10 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement an OpenContainer driver  https://review.openstack.org/468753
08:10 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Implement a Kubernetes driver  https://review.openstack.org/521356
08:10 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/nodepool feature/zuulv3: Refactor NodeLauncher to be generic  https://review.openstack.org/532450
08:11 <SpamapS> tristanC: a busy system could end up with millions of records. I doubt they'll be relevant after a year.
08:11 <SpamapS> tristanC: and every few minutes is actually less painful than batching them up
08:12 <SpamapS> you want to pick off a few and go back to sleep
08:12 <SpamapS> databases don't like big deletes.
08:16 <tristanC> but isn't MySQL able to handle hundreds of millions of records easily?
08:17 <SpamapS> Sure
08:17 <SpamapS> eventually you'll pay the price though
08:17 <SpamapS> indexes overflow RAM
08:17 <SpamapS> bad queries walk all the rows
08:18 <SpamapS> if the data isn't being used, archive it, get it out of the system.
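
A minimal sketch of the batched cleanup SpamapS describes: wake periodically, delete a small batch of old build rows, then sleep, so the database never has to absorb one huge DELETE. It assumes SQLAlchemy (1.4+ style) and a hypothetical zuul_build table with id and start_time columns; this is illustration only, not the SQL reporter's actual schema or code.

    import time
    from datetime import datetime, timedelta

    import sqlalchemy as sa

    def prune_old_builds(engine, retention_days=365, batch_size=500,
                         interval=300):
        """Delete builds older than the retention window, one batch at a time."""
        meta = sa.MetaData()
        build = sa.Table('zuul_build', meta, autoload_with=engine)
        while True:
            cutoff = datetime.utcnow() - timedelta(days=retention_days)
            with engine.begin() as conn:
                # Pick off a few rows per pass; small deletes keep lock
                # times and replication lag down on a busy server.
                rows = conn.execute(
                    sa.select(build.c.id)
                    .where(build.c.start_time < cutoff)
                    .limit(batch_size)).fetchall()
                if rows:
                    conn.execute(build.delete().where(
                        build.c.id.in_([r[0] for r in rows])))
            time.sleep(interval)

    # Could run beside the scheduler as a daemon thread, e.g.:
    # threading.Thread(target=prune_old_builds, args=(engine,),
    #                  daemon=True).start()

Selecting ids first and then deleting by primary key sidesteps DELETE ... LIMIT, which not every backend supports.
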
08:19 *** hashar has joined #zuul
08:20 <tobiash> SpamapS: there is still something broken with sql :(
08:21 <tobiash> seems like I had a small bug in the migration script which rendered it effectively a noop
08:21 <tobiash> after fixing the migration script, the test hangs when dropping the db because it's waiting on a stale metadata lock
08:26 <SpamapS> It's working for me
08:26 <SpamapS> post and all
08:39 <tristanC> SpamapS: then maybe we could add an automatic build archiving thread; not convinced it should be enabled by default though
08:40 <tobiash> SpamapS: with github?
08:45 *** jpena|off is now known as jpena
08:58 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Really change buildset to string  https://review.openstack.org/532459
10:06 *** dmellado has quit IRC
10:07 *** dmellado has joined #zuul
10:21 *** threestrands has joined #zuul
10:49 *** dmellado has quit IRC
10:51 *** hashar is now known as hasharAway
11:23 *** dmellado has joined #zuul
11:32 *** dmellado has quit IRC
11:34 *** dmellado has joined #zuul
11:44 *** jkilpatr has quit IRC
12:16 *** jkilpatr has joined #zuul
12:48 *** jpena is now known as jpena|lunch
13:24 *** dkranz has joined #zuul
13:34 *** rlandy has joined #zuul
13:51 *** jpena|lunch is now known as jpena
14:40 *** dkranz has quit IRC
14:52 <electrofelix> does zuulv3 support projects turning pipeline notifications on/off for an individual project without allowing it to alter the rest of the pipeline in any way? for example, selectively enable commenting by a post pipeline but without granting them access to a trusted config repo?
14:56 *** pabelanger has quit IRC
14:56 *** pabelanger has joined #zuul
14:58 <pabelanger> electrofelix: I don't believe so; AFAIK pipeline changes need to be done in a project-config repo
14:59 <tobiash> electrofelix: pabelanger is right, pipelines are defined only in trusted config repos
15:00 <tobiash> electrofelix: but you can have as many config repos as you want
15:01 *** openstackgerrit has quit IRC
15:02 <electrofelix> some teams would like to run other jobs and get some notification without blocking the change from landing; we're suggesting a post pipeline job at the moment
15:02 <electrofelix> some teams => some local teams
15:02 <electrofelix> rather than building another notification mechanism, I was hoping we could selectively allow teams to add a job to the post pipeline and decide whether to have zuul comment, without them needing to build their own pipeline to achieve this
15:03 <tobiash> electrofelix: are you using multiple tenants for the teams?
15:03 <tobiash> I'm separating bigger teams by tenants, where each tenant has its own additional config repo where they can do such stuff
15:03 <electrofelix> we've not migrated just yet, and the initial plan wasn't to use separate tenants
15:04 <electrofelix> but this might suggest we need to do that; I guess the question then is can we still share some jobs across tenants?
15:05 <tobiash> electrofelix: then one way could be to give them an additional config repo and let them define a post-team-x pipeline there
15:05 <tobiash> electrofelix: you can share configs but have to pay attention to where you gate changes for them
15:06 <electrofelix> tobiash: you mean they would have to define the gate in their own config repo, but could they use some predefined jobs provided by us as part of that gate?
15:07 <tobiash> electrofelix: yes, that's possible, but gating of the shared jobs must be done in a single tenant
15:08 <tobiash> I also have the base job in a single repo which is shared by all tenants
15:09 <electrofelix> ah, we aren't using a shared gate job for them, but rather run an instance of a particular job template against all projects for convenience, so more the second item
15:10 <electrofelix> tobiash: tbh, the post-team pipeline sounds like the best option, thanks
15:10 <tobiash> no problem
15:22 <electrofelix> I presume the tenant part of the spec for zuul v3 http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#tenants means that common pipelines and job definitions can be shared across tenants and they can then add to the pipelines? can they override them?
15:25 <electrofelix> mordred: reading about multi tenants and thinking about https://etherpad.openstack.org/p/zuulv3-jenkins-integration, does it need to consider use of job folders per tenant?
15:27 *** dkranz has joined #zuul
15:28 <tobiash> electrofelix: no, they cannot override them
15:35 <corvus> electrofelix: SpamapS has also proposed something similar that wants per-project pipeline configuration; i think after the v3.0 release we'll plan out some way of doing that and hopefully solve both use cases.
15:39 *** jkilpatr has quit IRC
15:44 *** dkranz has quit IRC
15:48 <electrofelix> tobiash: that's good to know
15:48 <electrofelix> corvus, SpamapS: has that been captured anywhere, or channel history only?
15:50 <corvus> electrofelix: in the proposed change
15:50 * corvus looks
15:50 *** dkranz has joined #zuul
15:50 <corvus> electrofelix: https://review.openstack.org/530521
15:51 <corvus> electrofelix: we haven't talked about it -- only that we know we need to.  right now we're focused only on the v3.0 release.
15:52 <electrofelix> corvus: no worries, I'm trying to build up information that will help our migration, and identify things to look at helping out with when some time is available
15:54 *** jkilpatr has joined #zuul
15:57 *** dkranz has quit IRC
16:16 *** jkilpatr has quit IRC
16:25 <mordred> electrofelix: I'm not sure - honestly, it might be nice to chat at some point about the zuulv3-jenkins-integration since you're much more up to date on more recent jenkins and have the use-case yourself
16:27 <SpamapS> I wonder if the simplest thing would be to allow variable expansion in pipeline configs like we do for status-url and such.
16:28 *** jkilpatr has joined #zuul
16:29 <SpamapS> electrofelix: anyway I want what you just said too. Some of my users are already like "hey this .. comments a lot".
16:29 <SpamapS> even though really it's just like "well stop writing broken patches" ;-)
16:29 <electrofelix> SpamapS: blame the users, it's always their fault
16:30 <SpamapS> precisely
16:31 <SpamapS> customer shmustomer
16:32 <electrofelix> mordred: I was thinking that as zuul can have jobs namespaced to a tenant, use of the same jobs in jenkins may need to be namespaced as well using folders; otherwise how do you allow two tenants to have the same job name but defined differently? even if sharing a definition, having it be used by another project may cause some surprise
16:34 <mordred> electrofelix: oh - so, I was kind of thinking that the zuul job corresponding to a jenkins job would be a simple job that takes the jenkins job name as a parameter - that might get unwieldy with a lot of jobs though ...
16:40 <mordred> electrofelix: are you coming to the Dublin PTG?
16:46 <electrofelix> mordred: will try to make it; after all, it's only 3 hours away ;-)
16:51 *** openstackgerrit has joined #zuul
16:51 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Remove need to start executor as root  https://review.openstack.org/532575
16:52 *** JasonCL has quit IRC
16:53 <mordred> electrofelix: :)
16:53 *** clarkb has quit IRC
16:54 *** JasonCL has joined #zuul
16:54 *** clarkb1 has joined #zuul
16:58 *** clarkb1 is now known as clarkb
17:16 *** harlowja has joined #zuul
17:30 *** dkranz has joined #zuul
17:53 <rcarrillocruz> heya, are we good to merge https://review.openstack.org/#/c/530265/ ?
17:53 <rcarrillocruz> corvus: ^
17:55 <corvus> rcarrillocruz: yep, +3
17:55 <rcarrillocruz> sweet, thx!
18:14 *** jpena is now known as jpena|off
18:19 *** jkilpatr has quit IRC
18:20 *** jkilpatr has joined #zuul
18:20 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Add specific setup inventory  https://review.openstack.org/530265
18:21 *** muhd has joined #zuul
18:21 <muhd> hi
18:33 *** openstack has joined #zuul
18:33 *** ChanServ sets mode: +o openstack
18:37 *** harlowja has quit IRC
18:40 *** jkilpatr has joined #zuul
18:45 <leifmadsen_> greetings and salutations
18:48 *** sshnaidm is now known as sshnaidm|afk
18:53 *** xinliang has quit IRC
18:53 *** xinliang has joined #zuul
18:53 *** xinliang has quit IRC
18:54 *** xinliang has joined #zuul
18:57 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Remove need to start executor as root  https://review.openstack.org/532575
19:13 *** harlowja has joined #zuul
19:16 *** harlowja_ has joined #zuul
19:19 *** harlowja has quit IRC
19:38 <SpamapS> hm
19:43 *** rlandy has quit IRC
19:48 *** rlandy has joined #zuul
19:51 <SpamapS> 2018-01-10 19:50:17.501903 | [cap] Waiting on logger
19:51 <SpamapS> What's this? Hrm.
19:54 *** JasonCL has quit IRC
19:55 *** JasonCL has joined #zuul
19:56 <clarkb> SpamapS: did the console log daemon die?
19:57 <SpamapS> no, I forgot a security group rule
19:57 <SpamapS> to open everything ;)
20:18 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Really change patchset column to string  https://review.openstack.org/532459
20:21 <SpamapS> clarkb: have you done more than 2-node multi-node jobs with zuul?
20:22 <SpamapS> v3 I mean
20:22 <clarkb> yes, we have 3- and 4-node jobs in tripleo and I think 3-node dvr ha for neutron
20:23 <SpamapS> Ok, I'm having weirdness happen with a 5-node job.
20:23 <pabelanger> SpamapS: I've done 5 myself, worked as I expected
20:23 <SpamapS> I have 2 providers in nodepool.. and it seems like it can't get enough nodes from one, even though the other has plenty of space.
20:24 <SpamapS> I had to delete a few from the smaller provider and then it was able to grab enough
20:25 <pabelanger> what is the max-servers for the smaller provider?
20:25 <SpamapS> Predicted remaining pool quota: {'compute': {'cores': inf, 'instances': 0, 'ram':
20:25 <SpamapS> inf}}
20:25 <SpamapS> so that provider was _out_ of nodes
20:25 <pabelanger> oh, right, new quota stuff
20:25 <pabelanger> keep forgetting about that
20:25 <SpamapS> But it looked like it grabbed 3, and then just kept retrying
20:25 <SpamapS> instead of letting the other worker handle it
20:26 <SpamapS> 2018-01-10 13:22:01,425 DEBUG nodepool.driver.openstack.OpenStackNodeRequestHandler[zuul.cloud.phx3.gdg-4438-PoolWorker.a-main]: Predicted remaining pool quota: {'compute': {'cores': inf, 'instances': 45, 'ram': inf}}
20:26 <pabelanger> did instances fail to boot?
20:26 <pabelanger> the other 2
20:26 <SpamapS> Nope
20:26 <SpamapS> everything booted fine
20:27 <SpamapS> Might be a logic problem.
20:27 <pabelanger> will have to defer to Shrews
20:27 <SpamapS> Seems like the p worker should have basically said "I can't handle this request" when it couldn't get all 5 nodes.
20:27 <SpamapS> (my clouds are named 'a' and 'p' because we have instance name length limits)
20:28 <SpamapS> A = IAD and P = PHX
20:28 <pabelanger> no, I think p will wait until quota is freed up
20:28 <pabelanger> IIRC
20:28 <SpamapS> pabelanger: but p's quota would never get freed up
20:28 <SpamapS> It was held by other labels.
20:28 <SpamapS> so you'd have to wait for another test entirely to run and complete.
20:29 <pabelanger> yes
20:29 <pabelanger> seen that in our nodepool
20:29 <Shrews> if a provider CAN handle a request, but doesn't have enough nodes available, it will pause until resources are free
20:29 <pabelanger> when we are at max quota
20:29 <SpamapS> Shrews: yeah, that's breaking me, a lot.
20:29 <Shrews> there's nothing to say "go try another provider"
20:30 <Shrews> SpamapS: how does it break you, other than having to wait?
20:30 <SpamapS> Shrews: I don't run those other labels very much.
20:30 <pabelanger> SpamapS: do you have min-ready more than 1 for your labels, or just a lot of labels?
20:30 <SpamapS> So I could end up waiting days.
20:30 <pabelanger> right, I can see that, waiting days
20:30 <pabelanger> I do think tobiash added something to rotate out those labels however
20:31 <pabelanger> max-lifetime?
20:31 <tobiash> pabelanger: you mean max-ready-age?
20:31 <pabelanger> yah
20:31 <pabelanger>     max-ready-age: 3600
20:31 <SpamapS> ehhh
20:31 <SpamapS> no
20:31 <pabelanger> https://docs.openstack.org/infra/nodepool/feature/zuulv3/configuration.html#labels
20:32 <SpamapS> I don't think it should take a request if it's already at capacity.
20:32 <pabelanger> SpamapS: what if you set min-ready: 0 for the label too? It does mean waiting longer for a node to launch
20:32 <SpamapS> pabelanger: yeah, these are supposed to be quick tests.
20:32 <SpamapS> It's a label for tiny ubuntu nodes to run linters and stuff.
20:33 <tobiash> but using max-ready-age to solve this problem is a bad compromise
20:33 <SpamapS> So I keep a bunch just sitting there and then those tests finish fast.
20:33 <pabelanger> tobiash: yah, would be a workaround at best for sure
20:33 <tobiash> I think we might need to tune the algorithm in nodepool
20:33 <SpamapS> Agreed, I think the workers should stop taking reqs if they get over their quota.
20:34 <SpamapS> Because really I want _most_ things to end up on my 'a' cloud.
20:34 <SpamapS> In fact I'm going to scale back 'p' even more.
20:34 <Shrews> SpamapS: reading scrollback a bit more... "it can't get enough nodes from one, even though the other has plenty of space"
20:34 <pabelanger> we kinda have this issue when we get to max capacity in openstack nodepool; some requests take longer to fulfill, so I've seen what SpamapS is reporting before
20:34 <Shrews> SpamapS: it's on purpose that we do not get nodes from across multiple providers
20:34 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Move CRD tests to test_gerrit_legacy_crd  https://review.openstack.org/531886
20:35 <Shrews> SpamapS: for a single job
20:35 <SpamapS> Shrews: yeah no, I don't want cross-provider jobs
20:35 <tobiash> I think I had a chat with corvus about that topic back in June/July
20:35 <Shrews> SpamapS: then i think i've missed something
20:35 <tobiash> but I can't find it anymore
20:36 <SpamapS> I want nodepool to not take reqs when it can't service them. Or maybe it has to introspect and release them.
20:37 <Shrews> but discussion about the node selection algorithm does keep coming up, as tobiash says. i usually defer to corvus since it's the algorithm from his spec. perhaps we should reconsider the algorithm
20:38 <SpamapS> I think size-of-req has to come into play when taking one.
20:38 <pabelanger> SpamapS: so, is (min-ready * all labels) + 5 nodes > cloud capacity?
20:38 <SpamapS> If I had a max-servers of 4, would it just be forever stuck?
20:39 <Shrews> SpamapS: but your scenario (releasing nodes it *could* get and giving up on the request) could lead to starvation of the job in a busy system
20:39 <Shrews> SpamapS: no, it would be rejected
20:39 <SpamapS> pabelanger: min-ready of all labels is 14. max-servers is 14. That was by design, thinking it might avoid this fate. I was hoping some of the min-readys would end up on 'a'.
20:40 <SpamapS> (and max-servers on 'a' is 50)
20:41 <Shrews> SpamapS: at least, it should... i'm failing to find that code now
20:41 <SpamapS> Shrews: the point about starvation is understood and complicates my thinking.
20:41 <Shrews> http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/driver/openstack/handler.py?h=feature/zuulv3#n538
20:42 <Shrews> tobiash: does that code ^^^ take into account max-servers?
20:43 <tobiash> Shrews: it should
20:43 <Shrews> i see the variable referenced, so assuming so
20:43 <SpamapS> Shrews: so that's weird, why didn't mine decline?
20:43 <Shrews> SpamapS: bug? tobiash?
20:44 <tobiash> SpamapS: it declines if request > quota available for the whole provider
20:44 <SpamapS> I'm going to watch it for a while today. I had planned on reducing the max-servers for 'p' below the total min-ready once I had verified 'a' works.
20:44 <tobiash> as the provider quota is the theoretical max
20:45 <tobiash> it does *not* decline if request < provider quota but > currently available quota
20:45 <SpamapS> Oh right
20:45 <SpamapS> ok, so that would cover the case where I had a max-servers of 4.
20:45 <tobiash> the algorithm currently takes the request and waits for nodes to be freed up
20:46 <tobiash> so that's how the current algorithm is designed
20:46 <tobiash> which is a problem if you have multiple small providers
20:47 <tobiash> I had this problem too, but now I switched to a single large provider
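
To make the accept/pause/decline behavior tobiash describes concrete, here is a minimal sketch, not nodepool's actual code: a handler declines only when the request can never fit within the provider's total quota; if it fits the total but not what is currently free, the handler holds the request and pauses, even if another provider has room right now.

    def handle_request(requested, max_servers, in_use):
        """Sketch of one provider handler's decision for a node request."""
        if requested > max_servers:
            # Can never fit this provider: 'I will never ever be able
            # to fulfill this request', so decline.
            return 'decline'
        if requested > max_servers - in_use:
            # Fits the theoretical quota but not the currently free
            # quota: hold the request and wait for nodes to be released.
            return 'pause'
        return 'fulfill'

    # SpamapS's case: a 5-node request against max-servers 14, with most
    # instances held by min-ready nodes, pauses rather than declining.
    assert handle_request(5, 14, 12) == 'pause'
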
20:49 <Shrews> SpamapS: as to your hopes about the min-readys being equally distributed, i could not think of a good way to do that. Current design just submits a request for that node type. Totally random as to which provider satisfies it.
20:49 <tobiash> decline in nodepool terms means 'I will never ever be able to fulfill this request'
20:50 <tobiash> Shrews: maybe a random delay of 0-1s before locking the request?
20:51 <tobiash> Shrews: am I right that nodepool is notified by zookeeper about a new request?
20:51 <Shrews> more randomness to achieve equal distribution? O.o   :)
20:51 <SpamapS> yes
20:51 <SpamapS> nodepool watches the req node
20:51 <tobiash> in this case maybe the notification order is deterministic and the same provider is first every time
20:51 <SpamapS> and when new reqs are added the watch fires
20:51 <SpamapS> and yeah, it may be that it's always first-watcher
20:52 <Shrews> tobiash: nodepool isn't "notified" as much as it checks for unhandled node requests in zk on a periodic basis
20:52 <SpamapS> Shrews: wait, what?
20:52 <SpamapS> we don't use a watch?
20:52 <SpamapS> that's.. kind of the point of zk...
20:52 <SpamapS> polling is soooo mysql
20:52 <Shrews> we do not
20:53 <Shrews> SpamapS: our first implementation was "let's try this first since it's so much easier and see how it works"
20:53 <Shrews> so far, it has worked great. but I welcome patches to implement watches  ;)
20:53 <SpamapS> seems legit
20:53 <Shrews> watches come with complications
20:53 <SpamapS> so as far as the algorithm goes, we need a retry state: not a 'decline, it is dead to me', just a 'not right now'
20:54 <Shrews> like, once a watch is triggered, you have to re-establish it again
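
For reference, a hedged sketch of what a watch-based consumer might look like with kazoo, the ZooKeeper client nodepool uses; this is not nodepool code, and the request path is an assumption for illustration. kazoo's ChildrenWatch recipe re-arms itself after each trigger, which hides the re-establishment chore Shrews mentions; a raw get_children() watch is one-shot and must be set again by hand.

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    REQUESTS = '/nodepool/requests'  # assumed path, for illustration only
    zk.ensure_path(REQUESTS)

    @zk.ChildrenWatch(REQUESTS)
    def on_requests_changed(children):
        # Called on every membership change under REQUESTS; kazoo
        # re-registers the underlying watch before invoking us, so the
        # next event is never missed.
        for request_id in sorted(children):
            print('pending request:', request_id)
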
20:57 <SpamapS> Yeah, I wonder if even just the order of providers ends up mattering.
20:57 <SpamapS> Because 'p' is first, and is totally getting all the reqs.
20:58 <Shrews> SpamapS: nope. just 1 thread per provider. nothing special
21:38 *** hasharAway has quit IRC
21:48 <openstackgerrit> Merged openstack-infra/zuul feature/zuulv3: Remove need to start executor as root  https://review.openstack.org/532575
21:52 *** threestrands_ has joined #zuul
21:52 *** threestrands_ has quit IRC
21:52 *** threestrands_ has joined #zuul
21:53 *** threestrands_ has quit IRC
21:53 *** threestrands_ has joined #zuul
21:53 *** threestrands_ has quit IRC
21:53 *** threestrands_ has joined #zuul
21:54 *** dkranz has quit IRC
21:54 *** threestrands has quit IRC
21:55 *** dkranz has joined #zuul
22:02 *** dkranz has quit IRC
22:21 *** corvus is now known as jeblair
22:22 *** jeblair is now known as corvus
22:24 <openstackgerrit> David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Rename _useBuilder method to useBuilder  https://review.openstack.org/531883
22:28 <corvus> Shrews: the fingergw did not stop on zuulv3.o.o.  possible the issue is fixed already by later updates, but we should keep an eye on it next time.
22:28 <Shrews> corvus: how did you try to stop it?
22:28 <corvus> also, heck, it's probably okay to start and stop it repeatedly in production to test.  :)
22:28 <corvus> Shrews: service zuul-fingergw stop
22:28 <Shrews> corvus: the init script totally doesn't work yet
22:28 <corvus> ah ok :)  uh,  confirmed!  :)
22:29 <Shrews> corvus: it doesn't use the command socket. pabelanger had a change up to fix that for the other services. he said he'd add fingergw to it
22:30 <Shrews> some nameless coder added a non-working init script, which was approved by some nameless reviewers who didn't review closely enough  ;)
22:30 * Shrews hides in a corner
22:31 <Shrews> in shame
22:31 <corvus> it looked like an init script!
22:31 <corvus> i did not realize it was actually a cryptominer
22:31 <Shrews> it had 'init' in it somewhere. totally looked valid
22:32 *** jkilpatr has quit IRC
22:34 *** JasonCL has quit IRC
22:36 <pabelanger> Shrews: sorry, I haven't pushed up that patch yet. I will this evening
22:38 <Shrews> pabelanger: k. i can do it in the morning if you can't get to it this evening. gotta go do dinner things now
22:41 *** threestrands_ has quit IRC
22:55 *** threestrands has joined #zuul
22:55 *** threestrands has quit IRC
22:55 *** threestrands has joined #zuul
23:11 *** Guest7 has joined #zuul
23:11 *** Guest7 has quit IRC
23:11 *** threestrands has quit IRC
23:20 *** threestrands has joined #zuul
23:22 <openstackgerrit> Monty Taylor proposed openstack-infra/zuul-jobs master: Adjust check for .stestr directory  https://review.openstack.org/532688
23:37 *** rlandy is now known as rlandy|bbl
23:41 *** haint has quit IRC
