Thursday, 2017-05-18

00:27 *** jamielennox is now known as jamielennox|away
00:33 <jlk> So nodepool, there's a launcher and a builder. Which needs a clouds.yaml credential? Do they both?
00:41 *** jamielennox|away is now known as jamielennox
00:42 * rbergeron hands jlk a beer instead of an answer because that's what she has. :)
00:46 <jlk> lol
00:46 <jlk> beer comes later, after Guardians 2
01:09 <mordred> jlk: both
01:09 <mordred> jlk: launcher builds nodes - builder uploads to glance
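(For context on jlk's question: both daemons reach the cloud via shade/os-client-config, so each host needs a clouds.yaml. A minimal sketch, where the cloud name, endpoint and credentials are all placeholders:

    clouds:
      mycloud:
        auth:
          auth_url: https://keystone.example.com:5000/v3
          username: nodepool
          password: secret
          project_name: nodepool
          user_domain_name: Default
          project_domain_name: Default
        region_name: RegionOne

The launcher uses it to boot and delete nodes; the builder uses it to upload images to glance.)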
01:09 <jlk> alrighty. Does v3 nodepool still use gearman?
01:10 <jlk> uh.
01:10 <mordred> it does not
01:10 <jlk> I'm seeing builder build them
01:10 <mordred> builder builds and uploads images - launcher builds nodes?
01:11 <jlk> oh sorry, I misread
01:11 <jlk> I'm seeing builder build the images using diskimage-builder. I haven't got one to build yet, so I don't know what else happens :)
01:11 <jlk> I do see launcher querying the cloud
01:11 <mordred> ah!
01:11 <jlk> I have yet to see builder query the cloud
01:12 <mordred> yah - it likely won't until it has something to upload
01:13 <jlk> so nodepool just talks zookeeper
01:13 <mordred> yup - much simpler
01:13 <mordred> (well, and openstack, but yeah)
01:14 <jlk> heh
01:17 <jlk> whee, d-i-b can't find the qemu-img executable that's right close to it
01:17 <jlk> oh haha
01:17 * jlk shakes fist at docker images that do not have "which"
01:25 <jlk> also, kind of shame on d-i-b for relying on 'which' to see if an app can be found.
01:25 <jlk> (it complains about the lack of the app, not the lack of which)
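(An aside on the 'which' complaint: on python 3.3+ an executable can be located without shelling out at all. A minimal sketch of such a check, not dib's actual code:

    import shutil

    # Returns the full path to qemu-img if it is on $PATH, else None;
    # no external 'which' binary required.
    if shutil.which('qemu-img') is None:
        raise RuntimeError('qemu-img not found in PATH')
)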
02:05 *** dougbtv_ has quit IRC
02:15 <Shrews> yah, the upload won't happen until the dib build completes
02:17 *** dougbtv_ has joined #zuul
02:23 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Use six.moves.urllib for python3 compat  https://review.openstack.org/463595
02:23 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Rename nodepool.py to launcher.py  https://review.openstack.org/463807
02:36 *** dougbtv_ has quit IRC
02:39 *** dougbtv_ has joined #zuul
02:40 *** jamielennox is now known as jamielennox|away
02:56 *** dougbtv__ has joined #zuul
02:58 *** dougbtv_ has quit IRC
03:33 *** jamielennox|away is now known as jamielennox
04:12 *** dougbtv__ has quit IRC
04:12 *** dougbtv__ has joined #zuul
06:12 *** bhavik1 has joined #zuul
06:29 *** bhavik1 has quit IRC
06:47 *** DangerousDaren has joined #zuul
07:40 <openstackgerrit> Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Add Dockerfile  https://review.openstack.org/465852
07:41 <tobiash> jlk: I don't have any public repo hosting the Dockerfile, but I think it might make sense to add one directly to nodepool ^^
07:42 <tobiash> that way it also could be easy to provide an official docker image on docker hub
07:51 *** Cibo_ has joined #zuul
07:51 <tobiash> it bakes the current workspace into the image for fast development cycles
08:02 *** Cibo_ has quit IRC
08:06 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Use ssh for git-upload-pack  https://review.openstack.org/436802
08:17 <openstackgerrit> Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Add Dockerfile  https://review.openstack.org/465852
08:36 <openstackgerrit> Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Add max-ready-age to label config  https://review.openstack.org/463338
08:44 <openstackgerrit> Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Support externally managed images  https://review.openstack.org/458073
08:51 *** fbo has joined #zuul
09:54 *** isaacb has joined #zuul
10:04 *** jamielennox is now known as jamielennox|away
10:18 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Add Dockerfile  https://review.openstack.org/465912
10:39 *** jkilpatr has quit IRC
10:48 *** openstackgerrit has quit IRC
11:15 *** jkilpatr has joined #zuul
11:45 *** openstackgerrit has joined #zuul
11:45 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Add Dockerfile  https://review.openstack.org/465912
13:14 <openstackgerrit> David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Add missing cleanup to statsd fixture  https://review.openstack.org/465962
13:28 <openstackgerrit> David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Add test config with min-ready of 0  https://review.openstack.org/465968
13:28 <Shrews> pabelanger: 465968 should fix the fail seen in your np py3 changes
13:35 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/zuul master: sql-connection: make _setup_tables staticmethod  https://review.openstack.org/465973
13:49 *** jamielennox|away is now known as jamielennox
13:53 <Shrews> tobiash: did you happen to look into this failure on your change? http://logs.openstack.org/38/463338/2/gate/gate-nodepool-python27-ubuntu-xenial/531b862/testr_results.html.gz
13:54 *** Cibo_ has joined #zuul
13:54 <tobiash> Shrews: not yet, just rechecked and it worked
13:55 <tobiash> Shrews: ah, no, it was that gate failure
13:55 <Shrews> hmm, let's not recheck things that should have passed. that's something we should look at
13:56 <tobiash> I incorporated SpamapS' worries about a race and uploaded a new patch version
13:56 <Shrews> might not be related to your review, but maybe another race we should hunt down
13:58 <Shrews> hrm, i think that test makes a false assumption
13:58 <tobiash> Shrews: ok, the recheck I did today failed due to a statsd related failure (which I think you fixed today)
14:02 <tobiash> Shrews: is it correct that this test wants two nodes of fake-label and assumes that it gets one from fake-provider1 and one from fake-provider2?
14:03 <tobiash> but I think (judging from the fixture) it could also happen that both nodes come from the same provider
14:03 <Shrews> tobiash: yes. but that's a wrong assumption. i'm now trying to determine the purpose of this test
14:04 <tobiash> Shrews: according to your commit message max-servers should be 1
14:04 <tobiash> in the fixture
14:05 <Shrews> ?
14:05 <Shrews> ooh, image-type is missing from the .yaml. ah ha
14:06 <tobiash> Shrews: https://review.openstack.org/#/c/433127/
14:06 <tobiash> quoting: "Enables the test_node_vhd_and_qcow2 test which is an example of min-ready=2/max-servers=1 across two providers."
14:08 <Shrews> i don't think that matters
14:08 <tobiash> Shrews: the test fixture says min-ready=2/max-servers=2
14:09 <tobiash> so I think if 2 nodes are requested there's a race which can result in 1,1; 1,2; 2,1; 2,2
14:10 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Fix imports for python3  https://review.openstack.org/463808
14:10 <tobiash> and the test wants 1,2 or 2,1
14:12 <tobiash> if the fixture were changed to max-servers=1, only the cases 1,2 and 2,1 (which the test expects) would be possible
14:13 <tobiash> possibly in some rare cases one provider currently fulfills both node requests before the other has the chance to
14:15 <Shrews> that shouldn't matter for this test if it were working properly
14:15 *** Cibo_ has quit IRC
14:15 <Shrews> which it isn't
14:19 <tobiash> the original v2 version of this test also seems to state max-servers=1 per provider
14:20 <tobiash> I think that was to force usage of both providers (one using vhd, the other qcow2) in parallel (which this test wants)
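(The fixture shape under debate, sketched; the field names follow nodepool configs of this era but the exact layout is illustrative. With max-servers=1 on each provider, satisfying min-ready=2 forces one node from each cloud, and therefore one vhd and one qcow2 image:

    labels:
      - name: fake-label
        min-ready: 2
    providers:
      - name: fake-provider1
        image-type: vhd
        max-servers: 1
      - name: fake-provider2
        image-type: qcow2
        max-servers: 1
)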
14:23 <pabelanger> Shrews: +2
14:23 <pabelanger> thanks
14:23 *** dkranz has joined #zuul
14:24 <Shrews> tobiash: the forced usage of both providers should come from the image-type. that's the bug
14:24 <Shrews> one provider supports vhd, the other qcow2
14:24 <Shrews> max-servers does not matter here
14:25 <clarkb> Shrews: max-servers does matter, because both servers could schedule on just the qcow2 cloud or just the vhd cloud
14:25 <clarkb> so you need to set max-servers such that both clouds are used
14:27 <tobiash> that's what I meant
14:28 <Shrews> ah, b/c we don't specify anything other than label (format doesn't matter). i see what you mean now.
14:28 <tobiash> yepp
14:28 <tobiash> :)
14:28 <pabelanger> clarkb: jeblair: have a moment to add https://review.openstack.org/#/c/465962 and https://review.openstack.org/#/c/465968/ to your review queue?  Should fix some flapping tests in nodepool, thanks to Shrews
14:29 <tobiash> have to leave now, it's eod for me
14:33 <Shrews> well, that presents an issue then because of commit 1fdb05. we don't have a way to force one provider over the other
14:33 <clarkb> right, we never did in the past either
14:33 <Shrews> if we set max-servers to 1, the test could hang
14:34 <clarkb> so what we did was require more nodes than each individual max-servers, which forced them onto the various clouds
14:34 <clarkb> Shrews: I think that's a bug then, and the test is working properly?
14:35 <Shrews> no, that's a design decision to pause processing if we've reached max-servers
14:35 <clarkb> Shrews: but only per cloud, right?
14:35 <Shrews> right
14:35 <clarkb> the other cloud should still process
14:35 <clarkb> so if this hanging causes the test to fail, that's a bug?
14:35 <Shrews> it will, if it gets other requests
14:36 <clarkb> the idea here is you have demand for 2 instances and two clouds, one server each. Cloud A should fulfill half the demand and cloud B the other half
14:36 <clarkb> nodepool should handle that case
14:36 <Shrews> nope. we don't guarantee equal distribution
14:36 <clarkb> you do if there is no other option though
14:36 <Shrews> it's whichever provider responds first
14:37 <clarkb> if cloud A is servicing half the demand it cannot service the other half
14:37 <clarkb> which will force cloud B to service it
14:37 <clarkb> (this is why max-servers is important in this test)
14:38 <Shrews> this test has 2 node requests. if max-servers=1, and both requests happen to go to provider A, then A will hang until a node is freed up. that is purposeful
14:38 <clarkb> why would both go to provider A?
14:38 <Shrews> we do not say, "oh, A is filled up, let's try B"
14:38 <Shrews> because both providers are just processing a queue
14:38 <clarkb> provider A services the first one, says "I can't do any more" and stops. provider B says "I can fulfill this" and does
14:39 <clarkb> Shrews: right, but provider A should stop once it is "full"
14:39 <clarkb> leaving provider B to do the remainder of the work
14:39 <Shrews> it does stop. but it does not relinquish any requests assigned to it
14:39 <clarkb> I thought it did; it would reject it, passing it on to the next provider
14:39 <Shrews> nope
14:39 <clarkb> then if all providers reject it, it's a failed request
14:39 <clarkb> (I seem to recall reviewing this code a while back)
14:40 <clarkb> in any case, if ^ isn't the behavior I would say it's a bug
14:40 <clarkb> and the test is catching it
14:40 <clarkb> A should relinquish the request if it is full and let another cloud that isn't take it on
14:40 <clarkb> the request record tracks the failed requests for this reason; then it knows if all providers failed
14:43 <jeblair> clarkb: being at capacity isn't a reason to fail the request though -- if it were, we'd end up failing jobs when all our providers were at capacity
14:44 <clarkb> it is a reason to not service it though?
14:44 <Shrews> the algorithm is outlined in https://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html, fwiw
14:44 <jeblair> the next logical thing to do would be to return the request to the queue unfulfilled; however, that would lead to starving large requests when we're at capacity (because only the small ones would get picked up as each provider released one node at a time)
14:45 <clarkb> if we look at this situation as more than just a test, the behavior seems clearly wrong. It means that if you are in a resource-constrained env you won't run your next test until the current test finishes, even though 50% of total capacity is available
14:46 <jeblair> so the algorithm (which is admittedly not perfect) has one rough edge, and this is it -- in a multiple-provider situation when at least one provider is at capacity, some requests will end up being delayed slightly longer than theoretically necessary.  but never more than one such request per provider.
14:46 <clarkb> could we avoid grabbing the request in the first place?
14:47 <clarkb> I guess the problem is still larger requests if doing ^
14:47 <jeblair> clarkb: to your point: yes, it is wrong (in that it is not ideal), but intentionally so.
14:48 <jeblair> clarkb: yeah, all the solutions i could think of to this involved communication between the launchers, which i wanted to avoid in our first pass at this.  we could decide it's worthwhile and make them more cooperative in the future.  but that adds some complexity.
14:48 <clarkb> maybe it is time for providers to coordinate available resources in zk?
14:49 <jeblair> i would still like to wait until after we've put this into production.  :)
14:49 <clarkb> my big concern here is that tests like this come out of production issues
14:49 <clarkb> (we know we get constrained and we know we have to support multiple image types, and we have had breakages with both things in the past, hence the test)
14:50 <jeblair> clarkb: sure, but this test isn't testing the edge case of this algorithm.  we can adjust the test to deal with the new constraint.
14:51 <clarkb> I'm not sure how you'd test the multiple image thing if a single provider can grab all requests?
14:52 <jeblair> clarkb: a provider only holds a single unfulfilled request -- so basically it grabs the request which puts it at capacity, then the one that puts it above capacity, then stops.
14:52 <clarkb> I see, so it is bounded to N+1
14:52 <clarkb> so we'd need to boot a demand of 3, not 2, and then it should work?
14:52 <jeblair> yep.  so the test could probably (i haven't looked -- just woke up and all) be adjusted to allow for an extra request for each image
14:53 <jeblair> i imagine so (and perhaps adjust some assertions or something)
14:55 <Shrews> i'm changing the test to actually verify the uploaded image, rather than the booted node
14:55 <Shrews> since that's what is really important here
14:55 <jeblair> or that :)
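(A pseudocode sketch of the accept loop jeblair describes above; this is illustrative, not the actual nodepool source:

    def run(provider, queue):
        while True:
            request = queue.next_unhandled()   # hypothetical helper
            provider.accept(request)           # never handed back once accepted
            if provider.launched >= provider.max_servers:
                # Pause holding at most this one unfulfilled request;
                # resume when a node is deleted and capacity frees up.
                provider.wait_for_capacity()

Hence the N+1 bound: with two providers at max-servers=1, a demand of 3 cannot be absorbed by one provider alone, so both clouds must be used.)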
15:03 <openstackgerrit> David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Fix test_node_vhd_and_qcow2 to validate uploads  https://review.openstack.org/465999
15:05 *** rcarrillocruz has quit IRC
15:16 *** jkilpatr has quit IRC
15:16 *** jkilpatr has joined #zuul
15:18 *** dougbtv__ is now known as dougbtv
15:18 <pabelanger> Shrews: jeblair: any objections to adding a python35 non-voting or experimental job for nodepool? Hoping to continue pushing up patches today
15:19 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Use six.reraise for python3  https://review.openstack.org/463594
15:19 <clarkb> pabelanger: how close is it to being able to run something that passes? (thinking non-voting if it's close)
15:20 <pabelanger> clarkb: I have it running locally, all tests passing. Just need to push up bytes changes and some other nits
15:20 <Shrews> pabelanger: i think we really need that if we don't want to regress current work  :)
15:22 <pabelanger> Shrews: should I recheck tests that failed on topic:py3-nodepool or hold off until your stack is merged?
15:22 <pabelanger> I guess I could rebase on top of it
15:22 <Shrews> pabelanger: i just did that for you
15:22 <pabelanger> nice
15:23 <Shrews> the fails i fixed are very rare, so hopefully they'll pass this time w/o a rebase
15:23 <Shrews> or my fixes merging
15:30 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Add missing cleanup to statsd fixture  https://review.openstack.org/465962
15:30 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Add test config with min-ready of 0  https://review.openstack.org/465968
15:30 <jeblair> pabelanger: did you want to look at 451470?  it's approved but based on an abandoned patch.
15:31 <pabelanger> jeblair: I have not, let me fix that
15:34 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Add max-ready-age to label config  https://review.openstack.org/463338
15:34 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Support externally managed images  https://review.openstack.org/458073
15:51 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Add .zuul.yaml support  https://review.openstack.org/466026
15:52 <pabelanger> ^ also adds the nodepool project to zuulv3-dev.o.o for more coverage
15:59 <openstackgerrit> David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Fix test_node_vhd_and_qcow2 to validate uploads  https://review.openstack.org/465999
16:02 <openstackgerrit> David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Fix test_node_vhd_and_qcow2 to validate uploads  https://review.openstack.org/465999
16:02 *** rcarrillocruz has joined #zuul
16:03 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Set socket timeout for SSH keyscan  https://review.openstack.org/451470
16:03 <openstackgerrit> David Shrewsbury proposed openstack-infra/nodepool feature/zuulv3: Fix test_node_vhd_and_qcow2 to validate uploads  https://review.openstack.org/465999
16:04 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Ensure zookeeper_servers is a list  https://review.openstack.org/463880
16:05 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Sort flavors with operator.itemgetter('ram')  https://review.openstack.org/463998
16:05 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Replace dict.iteritems() with dict.items() for python3  https://review.openstack.org/464026
16:06 <pabelanger> Shrews: did we recently fix http://logs.openstack.org/34/464034/4/check/nodepool-coverage-ubuntu-xenial/43e54e5/console.html#_2017-05-18_15_28_41_773418 ?
16:06 <pabelanger> or is that one new
16:06 <pabelanger> looks like our delete failed because it still had data in zk
16:06 <Shrews> no, that one is the kazoo recursive-delete-of-a-lock race
16:07 <jeblair> i'm ready to fork that shared lock implementation into our zk.py, since there's no action on the upstream pr
16:07 <Shrews> basically, we're in the middle of deleting a znode (that contains the lock as a child), and something else comes along and locks it during the delete
16:08 <pabelanger> ah
16:09 <pabelanger> jeblair: seems reasonable; then delete it if / when upstream merges it
16:09 <jeblair> pabelanger: yep
16:09 <pabelanger> +1
16:15 <pabelanger> jeblair: mind confirming 463594 is the right approach for re-raising the exception? It looks like we are using this pattern in zuul/scheduler.py
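(The pattern in question, for reference: six.reraise() re-raises the current exception with its traceback intact on both python 2 and 3. A minimal sketch, where do_work() is a hypothetical callable:

    import sys
    import six

    try:
        do_work()
    except Exception:
        # cleanup/logging would happen here ...
        six.reraise(*sys.exc_info())  # re-raise, preserving the traceback
)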
16:16 <Shrews> i don't remember how read locks alleviate this situation  :/
16:16 <jeblair> Shrews: i'm sure we'll figure it out again :)
16:18 <jeblair> pabelanger: i'll have to do that later, i'm knee deep in pipeline requirements now
16:18 <pabelanger> np
16:27 *** bhavik1 has joined #zuul
16:29 *** bhavik1 has quit IRC
16:40 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Update exception message handling for python3  https://review.openstack.org/464034
16:48 *** isaacb has quit IRC
16:51 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Use six.reraise for python3  https://review.openstack.org/463594
16:51 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Switch to next(generator) for python3  https://review.openstack.org/464040
16:51 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Python3: RuntimeError: dictionary changed size during iteration  https://review.openstack.org/466049
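(That last RuntimeError is the classic python 3 pitfall of mutating a dict while iterating over it; on python 2 the same code often worked because keys() returned a fresh list. The usual fix is to iterate over a snapshot:

    servers = {'a': 1, 'b': 2, 'c': 3}

    # On python 3, 'for k in servers:' plus 'del servers[k]' raises
    # RuntimeError: dictionary changed size during iteration.
    for k in list(servers):   # list() snapshots the keys first
        if servers[k] > 1:
            del servers[k]
)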
17:08 <jlk> o/
17:25 * SpamapS testing py3k branch with gear 0.9.0
17:25 <SpamapS> Ran 243 (+27) tests in 340.848s (+17.905s)
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: skip py3 failing tests  https://review.openstack.org/463903
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: fix webapp tests for py3  https://review.openstack.org/463902
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: misc py3 changes  https://review.openstack.org/463901
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: base64 changes for py3  https://review.openstack.org/463900
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Encoding changes in tests for py3  https://review.openstack.org/463899
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Explicitly decode decrypted secrets for py3  https://review.openstack.org/464049
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: view changes for py3  https://review.openstack.org/463898
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: writing yaml to disk needs bytes  https://review.openstack.org/463897
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Fixes for test_model in py3  https://review.openstack.org/463896
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: None does not compare to int  https://review.openstack.org/463895
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: make Job and ZuulRole hashable  https://review.openstack.org/463894
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: py3 hashlib error  https://review.openstack.org/463893
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: py3 Changes in __del__ for gitpython  https://review.openstack.org/463892
17:26 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Use gear Text interface  https://review.openstack.org/461468
17:27 <jeblair> jlk, mordred: i've been thinking about the next three github changes, and i think three things are pushing me toward thinking that we should rework some stuff now.  1) there's terminology collisions which are making the config file confusing (not to mention the code).  2) there's some shoehorning of the github data model into gerrit which i think will constrain us later.  3) as we discussed earlier, pipeline requirements are borderline driver-specific (what does it mean to evaluate a gerrit change against a pipeline requirement for a github status of "zuul:check:success"?)
17:27 <jeblair> jlk, mordred: i've written that up with more words and a suggested refactoring here: https://etherpad.openstack.org/p/zKrBKCsc7u
17:28 *** Cibo_ has joined #zuul
17:28 <jlk> I fully expected such a conversation :)
17:28 <jlk> I kind of think that we shouldn't have any driver-specific constraints at the pipeline level
17:28 <jeblair> jlk, mordred: can you take a look at that and see if the proposed design makes sense, and also evaluate whether now is the right time for that?
17:29 <jeblair> jlk: neat!  that's the opposite of my proposal :)
17:29 <jlk> I'll read the text, then respond :)
17:30 <SpamapS> mordred: btw, remember we were talking about .testrepository fail? I talked to mtreinish and he pointed me at the solution: https://pypi.python.org/pypi/stestr
17:30 * SpamapS preps a patch
17:30 *** harlowja has joined #zuul
17:50 <Shrews> oh, speaking of py3 things... jeblair, pabelanger, SpamapS: i want to propose removing zuul's use of the hacking lib. the reason is that it hasn't quite caught up to py3.5 things (e.g., its pyflakes is older and chokes on asyncio 3.5 support).
17:52 <SpamapS> +1 from me
17:52 <SpamapS> didn't know we weren't
17:52 <SpamapS> I imagine we'll want to turn off a bunch of them
17:52 <Shrews> it is on their radar, however: https://governance.openstack.org/tc/goals/pike/python35.html
17:57 <SpamapS> py35 is the only thing I've tested
17:57 <SpamapS> or would really want to bother with
17:58 <Shrews> the current code is cool with it. the only problem i've seen so far is the websocket stuff failing pep8  http://logs.openstack.org/53/463353/10/check/gate-zuul-pep8-ubuntu-xenial/41f82e6/console.html.gz#_2017-05-11_16_48_56_738736
17:59 <Shrews> which is where i stopped before i caught the plague
17:59 <pabelanger> do we know if anybody is looking at hacking ATM?
18:00 <SpamapS> ooooo the entire py3k stack passes tests
18:00 <SpamapS> _SHIP IT_
18:00 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Fix typos in __eq__ functions  https://review.openstack.org/466061
18:01 <pabelanger> Shrews: ^ a few syntax errors
18:01 *** harlowja has quit IRC
18:01 <Shrews> pabelanger: doh
18:01 <pabelanger> surprisingly, exposed with python3
18:02 *** DangerousDaren has quit IRC
18:03 <pabelanger>   py35: commands succeeded
18:03 <pabelanger> same for nodepool :D
18:03 <pabelanger> actually, just 1 thing left
18:03 <pabelanger> preexec_fn=self._activate_virtualenv is bombing
18:05 <pabelanger> Shrews: mind +3'ing https://review.openstack.org/#/c/464040/ again?
18:05 <pabelanger> was a rebase
18:07 <Shrews> pabelanger: done. and a question on 466049
18:08 *** Cibo_ has quit IRC
18:08 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Switch to next(generator) for python3  https://review.openstack.org/464040
18:09 <pabelanger> Shrews: k
18:14 <jeblair> Shrews: i'm okay with removal of hacking.  we mostly use it just to avoid the pep8 version treadmill.  if it's actually holding us back on some py3 things, there's no value there.
18:15 *** harlowja has joined #zuul
18:15 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Python3: encode / decode data as utf8  https://review.openstack.org/466065
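(The encode/decode change reflects the python 3 bytes/str split: data headed for sockets or ZooKeeper must be bytes, while text inside the program must be str. The idiom, sketched:

    payload = 'ubuntu-xenial'        # str (text) inside the program
    wire = payload.encode('utf8')    # bytes for the wire / znode
    back = wire.decode('utf8')       # str again after reading
    assert back == payload
)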
18:15 <Shrews> ok. i'll try to figure out what libs we need to specify manually in replacing it. hopefully not many
18:15 <clarkb> I think just pep8 and pyflakes
18:15 <jeblair> ya those
18:15 <clarkb> oh and flake8 for the config
18:15 <clarkb> which is in tox.ini
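(Replacing hacking then amounts to listing the tools directly and carrying over the ignores. A sketch of what the resulting config might look like; the exact pins and ignore list here are guesses, not the merged change:

    # test-requirements.txt: drop 'hacking', add:
    #   flake8>=3.0
    # tox.ini:
    [testenv:pep8]
    commands = flake8 {posargs}

    [flake8]
    # E125/E129 stay ignored per jeblair below; F405 relates to the
    # ansible star imports discussed shortly.
    ignore = E125,E129,F405
    show-source = True
)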
18:21 <Shrews> oh lawd... that opens up a bunch of new pep8 errors
18:21 <jeblair> Shrews: we likely want to pin to whatever versions hacking was pinning
18:22 <Shrews> well that was the problem
18:22 <Shrews> flake8 was too old
18:22 <Shrews> it's actually not many new ones.
18:22 <jeblair> Shrews: bump flake8 but not pep8?
18:23 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Remove use of hacking lib  https://review.openstack.org/466067
18:24 <mordred> jeblair: sorry - been writing specs all morning - looking now
18:25 <jeblair> Shrews: what's F405?
18:25 <Shrews> 1 sec... amending commit msg
18:25 <openstackgerrit> David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Remove use of hacking lib  https://review.openstack.org/466067
18:25 <Shrews> jeblair: read the new commit msg
18:26 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Python3: RuntimeError: dictionary changed size during iteration  https://review.openstack.org/466049
18:26 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Python3: encode / decode data as utf8  https://review.openstack.org/466065
18:28 <jeblair> Shrews: ok.  bummer.  i hate to turn off a pyflakes check, but that one is deep into the ansible magic.  maybe we can sprinkle more #noqa around and re-enable it
18:28 <Shrews> jeblair: it also reports that error for the time and select libs, neither of which are imported explicitly
18:29 <Shrews> jeblair: i can work on re-enabling all of those individually
18:30 <mordred> star imports are frequently used in ansible and in ansible modules ... so that one might wind up being transitively difficult
18:30 <Shrews> mordred: the answer is obviously to stop using ansible. i hear this hudson thing is pretty good
18:30 <mordred> Shrews: good call
18:31 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Wrap map() in list() for python3  https://review.openstack.org/466069
18:31 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: WIP: Fix preexec_fn for python3  https://review.openstack.org/466070
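(The map() change is another list-vs-iterator difference: on python 3, map() returns a lazy iterator, so code that indexed, len()'d, or re-used the result needs an explicit list(). For example:

    flavors = map(str.strip, [' small ', ' large '])
    flavors = list(flavors)     # required on py3 before len()/indexing/reuse
    assert flavors[0] == 'small'
)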
18:32 <jeblair> Shrews: cool.  just remember to leave E125 and E129.  also, any other E or W you want to leave are fine with me too.  :)
18:32 <SpamapS> oh weird.. py3k failed 2 patches in the middle
18:33 <pabelanger> okay, 466070 should be green for python35
18:33 <pabelanger> but it is WIP because I need to understand why it was failing
18:33 <pabelanger> it is the magic virtualenv-enabling logic
18:34 <pabelanger> personally, I would rather just remove that code and have the user wrap DIB's virtualenv in shell if possible
18:35 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Switch from testrepository to stestr  https://review.openstack.org/466071
18:39 <openstackgerrit> Merged openstack-infra/nodepool feature/zuulv3: Fix typos in __eq__ functions  https://review.openstack.org/466061
18:39 <Shrews> mordred: all of the files in zuul/ansible/action are just straight copies of the ansible modules, yeah?
18:39 <mordred> Shrews: well - they're modified copies
18:40 <Shrews> k
18:40 <mordred> Shrews: some of them are completely new code that basically subclasses the ansible module, peeks at a parameter and potentially fails
18:40 <mordred> but those should be fairly easy to notice :)
18:40 <Shrews> i was thinking of having flake8 ignore that entire directory
18:44 <jlk> tobiash: I'm running into an issue doing a dib build inside a container: a failure to communicate with the device-mapper driver. This is toward the end of a long build process. Are you doing anything special to expose these things inside the container for the builder?
18:46 <mordred> jlk: fwiw - if you want to iterate on getting dib to work in a container with a shorter build process, just make the list of elements have only "ubuntu" or "ubuntu-minimal" and "vm" and nothing else - the mechanisms for making the image itself should all get exercised by that without you needing to wait for content to be written into the chroot
18:46 <jlk> ah
18:47 <clarkb> and add devuser if you want to boot it and log in
18:47 <jlk> not as of yet
18:49 <jlk> that's so much faster
18:54 <openstackgerrit> Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Python3: Use exec for self._activate_virtualenv()  https://review.openstack.org/466070
18:54 * SpamapS rebases py3k on feature/zuulv3 and finds that the github stuff is all t3h broken in py3
18:55 <jlk> jeblair: okay, I've read what you wrote.
18:55 <pabelanger> k, that should be it for nodepool py3 support
18:55 <pabelanger> I should see what is needed to run the nodepool-dsvm python3 job also
18:56 <Shrews> hrm, testing env upgrade breaks zuul tests. neat
18:56 <jlk> jeblair: if require/reject grow driver-specific capability, would we still support generic requirements for shared states, like 'open', 'latest', etc.? Similar to how we talked about allowing generic reporters that are driver agnostic.
18:56 *** harlowja has quit IRC
18:56 <jlk> or should all of the require/reject exist inside the driver implementation?
18:57 <clarkb> pabelanger: should just need to set the python version option and devstack will do the rest
18:57 <pabelanger> jlk: mordred: ubuntu-rootfs is even smaller: https://review.openstack.org/#/c/413115/ got it down to about 42MB for xenial
18:57 <clarkb> pabelanger: I think setting it in the plugin will be too late though; need a new job or to update the existing job
18:58 <pabelanger> clarkb: k, I'll look
19:01 <pabelanger> woot
19:01 <pabelanger> http://logs.openstack.org/70/466070/2/check/gate-nodepool-python35-nv/019d010/console.html#_2017-05-18_19_00_00_254400
19:05 <SpamapS> incoming py3k rebase patch bomb
19:06 <SpamapS> With a fun surprise at the end
19:06 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: skip py3 failing tests  https://review.openstack.org/463903
19:06 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: fix webapp tests for py3  https://review.openstack.org/463902
19:06 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: misc py3 changes  https://review.openstack.org/463901
19:06 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: base64 changes for py3  https://review.openstack.org/463900
19:07 <Shrews> cookies?
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Encoding changes in tests for py3  https://review.openstack.org/463899
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Explicitly decode decrypted secrets for py3  https://review.openstack.org/464049
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: view changes for py3  https://review.openstack.org/463898
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: writing yaml to disk needs bytes  https://review.openstack.org/463897
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Fixes for test_model in py3  https://review.openstack.org/463896
19:07 <Shrews> plz be cookies...
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: None does not compare to int  https://review.openstack.org/463895
19:07 <jlk> any way to tell nodepool to try building an image once, and stop if it fails?
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: make Job and ZuulRole hashable  https://review.openstack.org/463894
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: py3 hashlib error  https://review.openstack.org/463893
19:07 <Shrews> plz be cookies...
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: py3 Changes in __del__ for gitpython  https://review.openstack.org/463892
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: fix imports in py3  https://review.openstack.org/463891
19:07 <Shrews> plz be cookies...
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Use gear Text interface  https://review.openstack.org/461468
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Fix github driver tests for py3  https://review.openstack.org/466078
19:07 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Enable py3 tests  https://review.openstack.org/466079
19:07 <SpamapS> Shrews: py3 cookies
19:07 *** harlowja has joined #zuul
19:08 <jlk> any way to tell nodepool to try building an image once, and stop if it fails?
19:08 <jlk> pabelanger: that didn't merge; is that not currently available?
19:14 <pabelanger> jlk: ya, you'd have to download the patchset
19:15 <pabelanger> maybe we should just add it to project-config at this point
19:15 <pabelanger> I've been getting a lot of pushback from DIB on them
19:15 <pabelanger> but don't understand why
19:24 *** tobiash_ has joined #zuul
19:25 <jlk> wtf..
19:25 <jlk> Journal size too big for filesystem.
19:25 <tobiash_> jlk: I had issues with dib builds inside the docker filesystem
19:25 <tobiash_> jlk: so I mount it from outside
19:26 <jlk> what does your launch look like?
19:27 <tobiash_> jlk: I'm also not using the device-mapper driver (for that to work you probably have to mount /sys stuff into the container)
19:27 <tobiash_> jlk: I'm currently using the aufs driver (nodepool runs in docker on ubuntu xenial)
19:28 <tobiash_> and I think it needs to run privileged
19:28 <jlk> not sure what I'm running
19:28 <tobiash_> but I'll have to check my docker-compose file again @work tomorrow
19:28 <jlk> Storage Driver: overlay2
19:28 <jlk>  Backing Filesystem: extfs
19:29 <jlk>  Volume: local
19:30 <tobiash_> jlk: I had problems with every storage driver until I mounted the dirs for tmp, elements and image together from the host (ext4) into the container
19:31 <tobiash_> so currently I have /mnt/builder/tmp, /mnt/builder/image, /mnt/builder/elements on the host, mount /mnt/builder into the container at some location, and configure nodepool to use that (also telling dib to use the correct tmp dir)
19:32 <clarkb> dib needs to be able to do things like mount filesystems and use losetup and nbd iirc
19:32 <clarkb> (so that's where privileged likely comes in)
19:32 <tobiash_> that's the reason for the privileged container
19:32 <jlk> yeah, I have privileged set up
19:32 <tobiash_> but just for the builder
19:32 <pabelanger> IIRC, I had DIB in docker, but pretty sure it was privileged
19:33 <jlk> it's just apparently struggling around loopback stuffs
19:33 <pabelanger> I should check my notes; it was working for me back in Jan and I don't think I had any specific bind mounts going on
19:33 <jlk> https://github.com/j2sol/z8s has my work in progress
19:33 <pabelanger> https://github.com/pabelanger/docker-diskimage-builder
19:33 <pabelanger> basically used ansible and dib to build the container
19:33 <leifmadsen> https://coreos.com/blog/introducing-zetcd
19:34 <pabelanger> then ran it under docker
19:35 <tobiash_> jlk: so basically I use this dockerfile: https://review.openstack.org/#/c/465852/
19:36 <tobiash_> and a similar command to yours in the docker-compose, except that I'm mounting the dib workspace into the container
19:36 <tobiash_> otherwise dib broke, at the latest, at the 'copy everything into the diskimage' step
19:36 <pabelanger> http://git.openstack.org/cgit/openstack/ansible-role-diskimage-builder is what I used to install DIB, then just imported that into docker
19:37 <tobiash_> I didn't figure out why (tried several storage drivers), but when just mounting that into the container it's fine
19:39 <tobiash_> pabelanger: right, I also didn't need any specific bind mounts other than the working area of dib
19:40 <clarkb> bind mounting the workspace has the potential upside of being able to share dib caches
19:40 <clarkb> (regardless of the weirdness around why it seems to be required)
19:41 *** dougbtv_ has joined #zuul
19:41 *** dougbtv has quit IRC
19:41 <tobiash_> what I also noticed is that if a dib build fails (e.g. due to some transient network failure), nodepool leaves the tmp dir of the build untouched
19:42 <tobiash_> that filled up the disk a few times (until I had monitoring in place)
19:43 <jlk> what's the dib workspace?
19:43 <jeblair> SpamapS, mordred: er, can we hold off on this stestr thing?
19:44 <jlk> the part that you needed volume mounted?  Looking at the dockerfile, the volumes don't make much sense, other than the elements
19:44 <jlk> tobiash_: ^
19:44 <tobiash_> jlk: I don't know what to call it correctly, but with that I mean the elements folder, the tmp dir (which you tell dib about) and the dest dir of the image (configured in nodepool)
19:45 <greghaynes> jlk: are you still getting journal-too-big-for-fs errors?
19:45 <tobiash_> jlk: oops, looks like these volume declarations mostly resulted from earlier refactorings (still has to be polished a bit)
19:46 <jlk> greghaynes: I was; trying a volume mount now.
19:46 *** dougbtv__ has joined #zuul
19:46 <tobiash_> jlk: the journal-too-big errors were gone when bind-mounting tmp and dest into the container
19:46 <greghaynes> ah. There's some weirdness there: dib images generally get resized up a large amount, so if dib used the default journal size for the size of disk it makes, there wouldn't be a large enough journal once the image gets resized
19:46 <greghaynes> so it forces a larger journal than needed
19:47 <greghaynes> which involves some weird math that has had some edge cases
19:47 <jlk> okay
19:47 <jlk> I'm trying with /tmp/ and /opt/nodepool/images volume mounted
19:48 <tobiash_> jlk: the volumes declared in the dockerfile more or less define the places where I bind-mount the nodepool config currently
19:48 <tobiash_> /var/log/nodepool: getting logs from nodepool out of the container
19:48 <tobiash_> /etc/nodepool: nodepool config
19:48 <tobiash_> /etc/openstack: place for clouds.yaml
19:49 *** dougbtv_ has quit IRC
19:49 <greghaynes> jlk: hrm, I wonder if that might be causing some of the journal size math to get messed up
19:49 <tobiash_> /opt/setup-scripts: I think this is outdated, as v3 has no setup scripts anymore
19:50 <jlk> somewhat annoying to watch the container output like a hawk to catch the dib error before it starts over again.
19:50 <tobiash_> I think that journal size stuff has some weird interference with the docker storage driver if the image is created within the storage managed by docker
19:51 <tobiash_> jlk: you could just run more than one builder and have a chance that the next start is on a different one ;)
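(Pulling tobiash_'s setup together, the docker-compose arrangement would look roughly like this; the paths are the ones mentioned in this conversation, everything else is illustrative. The key points are privileged mode for the builder's loop-device and mount work, and keeping dib's working area on a host bind mount rather than in the container's storage-driver filesystem:

    services:
      nodepool-builder:
        image: nodepool-builder
        privileged: true            # dib needs losetup/mounts (per clarkb above)
        volumes:
          - /etc/nodepool:/etc/nodepool          # nodepool config
          - /etc/openstack:/etc/openstack        # clouds.yaml
          - /var/log/nodepool:/var/log/nodepool  # logs out of the container
          - /mnt/builder:/opt/builder            # dib tmp/elements/image dirs
)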
19:52 <jeblair> SpamapS, mordred: for the first time in years, i'm almost as happy as i can be with how the tests are running; i kinda just want to get on with the work of writing code, and debugging a new test runner just sounds like a distraction we don't need right now.
19:53 <clarkb> jlk: can't you just grab the image build log? not sure why you'd have to watch it like a hawk
19:53 <jlk> not sure where the build log goes
19:53 <jlk> I don't have a logging config, so everything is dumping to stdout/err
19:55 <jeblair> jlk: regarding require/reject -- i think ultimately, because of the way the configuration is set up, it will end up being driver-specific.  but perhaps we can put the implementations of 'open' and 'current-patchset' in the base filter class so it's easy for drivers to implement those?
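(A very rough sketch of that idea, as a design note rather than the eventual implementation in 466105: generic states live on a base class and each driver layers on its own attributes:

    class BaseRefFilter(object):
        # Shared requirement checks every driver can reuse.
        def matchesOpen(self, change):
            return bool(getattr(change, 'open', False))

        def matchesCurrentPatchset(self, change):
            return bool(getattr(change, 'is_current_patchset', False))

    class GerritRefFilter(BaseRefFilter):
        pass   # would add approvals, etc.

    class GithubRefFilter(BaseRefFilter):
        pass   # would add statuses, reviews, etc.
)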
19:59 <jlk> nodepool-builder_1   | 2017-05-18 19:59:30,934 INFO nodepool.builder.BuildWorker.0: DIB image ubuntu-xenial is built
19:59 <jlk> tobiash_: high five!
19:59 <tobiash_> \o/
20:02 * jlk brings the whole thing up to see if it gets tossed at the cloud
20:03 *** harlowja has quit IRC
20:12 *** tobiash_ has quit IRC
20:19 <jeblair> jlk: what do you think?  should we go with https://etherpad.openstack.org/p/zKrBKCsc7u or something else?
20:19 <jlk> awww yissss. Image uploaded, VM booted, node registered.  Stuffed a github webhook event into the webapp and it did the end-to-end thing (although with a noop task, so it didn't actually touch the vm)
20:19 <jlk> jeblair: I think it's doable
20:19 <jlk> without having looked too deeply at the implementation yet
20:23 <jeblair> jlk: you think it's worth doing?  our other options are either to abstract everything (which we've generally been avoiding), or to eliminate pipeline requirements which can't be easily abstracted.  we're using all of them now, and this kind of expressiveness seems worth keeping to me: http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n41
20:24 <jlk> With the new plan, that layout would have to get modified, right? To push those existing requires into a gerrit: block?
20:24 <jlk> just making sure I understand.
20:25 <jeblair> jlk: yep, but otherwise it would look about the same, i think
20:25 <jlk> I _do_ think it's worth doing. This was one area we identified as needing better driver integration, so that we wouldn't necessarily have to mess with model.py directly
20:25 <jlk> define the filters in the driver code, allow them to be plugged in
20:29 <jeblair> jlk: okay, i'll -1 the top of the stack with that.  do you want me to write the change to add the filter driver api, or do you want to do it?
20:30 <jlk> If you have time for it, I wouldn't mind you doing it, and I'll follow up quickly with reviews and trials to use it. Otherwise I can take it on; I'll just likely be bugging you a fair amount for guidance :)
20:30 *** harlowja has joined #zuul
20:31 <jeblair> jlk: okay, i'll take a stab at it.  thanks :)
20:39 *** harlowja has quit IRC
20:41 *** jkilpatr has quit IRC
20:45 *** dougbtv__ has quit IRC
20:58 *** dougbtv__ has joined #zuul
20:58 *** dougbtv__ is now known as dougbtv
21:02 *** dougbtv_ has joined #zuul
21:05 *** dougbtv has quit IRC
21:05 *** dougbtv__ has joined #zuul
21:06 *** dkranz has quit IRC
21:07 *** dougbtv_ has quit IRC
21:11 *** dougbtv_ has joined #zuul
21:11 <openstackgerrit> James E. Blair proposed openstack-infra/zuul feature/zuulv3: WIP driver-specific filters  https://review.openstack.org/466105
21:12 <jeblair> jlk: ^ not done yet, but there's the general shape if you wanted to take a peek.
21:13 *** dougbtv has joined #zuul
21:14 <jeblair> jlk: as you'll see, i'm actually leaning toward adding it to the 'source' interface of the driver -- it seems to me to be connected to that: since a source is where a change comes from, that's the context where these attributes are being evaluated
21:14 *** jhesketh_ has joined #zuul
21:14 *** dougbtv__ has quit IRC
21:14 <jlk> alrighty
21:14 <jlk> I'll look after lunching
21:15 *** dougbtv_ has quit IRC
21:16 *** jkilpatr has joined #zuul
21:16 *** dougbtv_ has joined #zuul
21:18 *** jhesketh has quit IRC
21:18 <SpamapS> why am I getting 88M console logs?
21:19 <SpamapS> (on success)
21:19 *** dougbtv has quit IRC
21:19 <SpamapS> http://zuulv3-dev.openstack.org/logs/80b98612c6af46eda3c2082ff122e423/ <-- for instance
21:20 <SpamapS> http://logs.openstack.org/93/463893/4/check/gate-zuul-python27-ubuntu-xenial/88c599a/ too
21:20 <clarkb> looks like it's writing all the debug logs
21:20 <SpamapS> I wonder if something we merged borked it
21:21 <clarkb> I don't see the test name headers, which is what you get when things fail, so I think it's just dumping stdout all the way through
21:22 <SpamapS> http://logs.openstack.org/65/449365/15/check/gate-zuul-python27-ubuntu-xenial/cb3d801/
21:23 <SpamapS> clarkb: agreed
21:23 <SpamapS> just noticing that recently merged stuff has it too
21:24 <SpamapS> https://review.openstack.org/#/c/449365/ is the first one that has this
21:25 <jeblair> that's so unrelated it makes me wonder if something external changed?
21:26 <SpamapS> Yeah, I am looking at that now
21:27 <clarkb> SpamapS: the parent of that change has it too
21:28 <SpamapS> Add support for github enterprise?
21:28 <SpamapS> I thought it was clean. Looking.
21:28 <clarkb> I'm like 5 or 6 changes back and they all have it too
21:29 <SpamapS> oh, I didn't notice because they're compressed
21:29 <SpamapS> derp
21:31 <SpamapS> I5e312464b4f9a40f0ef00c00d12e7651e3890d4a doesn't have it
21:31 <clarkb> https://review.openstack.org/#/c/439834/ is the first one to have it
21:32 <SpamapS> https://review.openstack.org/#/c/445644/ has it
21:33 <clarkb> https://review.openstack.org/#/c/439834/21/tests/base.py line 657 possibly at fault?
21:36 <SpamapS> hmmmmmm
21:36 <SpamapS> seems like there are merges _after_ that that don't have the fail
21:36 <SpamapS> oh
21:36 <SpamapS> derp-a-derp, I've been looking at check results
21:38 <SpamapS> clarkb: that seems rather innocuous
21:39 <clarkb> SpamapS: it's loading the logging at "compile" time, not execute time when we set up the fixtures
21:39 <clarkb> which I think "wins" ?
21:39 <SpamapS> isn't that how we set up logs all over though?
21:39 <clarkb> hrm, ya, at least for scheduler
21:40 <SpamapS> it's a common pattern
21:40 <clarkb> I know it has caused problems in the past because it's a compile time hit rather than runtime
21:40 <SpamapS> and I believe the fakelogger fixture takes over the output configs of logging
21:42 <clarkb> https://review.openstack.org/#/c/439834/21/tests/unit/test_github_driver.py that might actually be it
21:42 <clarkb> because it's calling basicConfig
21:42 <SpamapS> oh yeah
21:42 <SpamapS> that's it
21:42 <SpamapS> clarkb: you want to submit a removal of that one?
21:42 <SpamapS> very naughty
21:43 <clarkb> SpamapS: I'm in the middle of smoking a brisket, so best if someone else gets to it
21:43 <SpamapS> clarkb: I'll get it, thanks!
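(The failure mode, for the record: logging.basicConfig() at module import time installs a root stderr handler before the test fixtures run, so DEBUG output from every test also lands in the captured console output. Sketched:

    import logging

    # At import time in a test module, this installs a root handler
    # outside the test fixtures' control; all DEBUG output then leaks
    # into the console log:
    logging.basicConfig(level=logging.DEBUG)

The fix in 466112 is simply to delete the call and rely on the fixture-provided logging configuration.)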
21:43 <SpamapS> also, can haz brisket? ;-)
21:43 <clarkb> I'm not sure how well it will survive transport to LA
21:44 <jlk> probably better than it'll survive being smoked
21:44 <jlk> Oh hahaha, this is my fault. SORRYYYY
21:46 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Remove errant basicConfig in test_github_driver  https://review.openstack.org/466112
21:46 <mordred> jlk: well - I reviewed it and didn't catch that - so I'll share the SORRYYYY too
21:47 <jlk> I had no idea that nobody else was getting 90M of logs.
21:49 <SpamapS> errm.. no
21:49 <SpamapS> still printing a lot
21:50 <jeblair> tests/unit/test_multi_driver.py has a basicConfig
21:50 <SpamapS> yep, just found that too
21:50 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Remove errant basicConfig calls in test suites  https://review.openstack.org/466112
21:50 <mordred> and tests/unit/test_github_requirements.py
21:51 <SpamapS> I don't have that file
21:52 <jeblair> yeah, don't think that's landed yet
21:52 <mordred> ah - it's in "Adds github triggering from status updates" I15aef35716ddbcd1e66f84a73d27ca2689c936e4
21:52 <mordred> I just happened to have my tree state with that one in there
21:52 <jeblair> mordred: speaking of which, jlk and i are embarking on http://paste.openstack.org/show/609964/  -- does that lgty?
21:53 <mordred> k. I left a comment on the patch that included the 3rd basicConfig
21:53 <SpamapS> 466112 seems to fix it
21:54 <SpamapS> suggest fast-tracking it once it finishes tests
21:54 <mordred> jeblair: yes! sorry - I totally liked that earlier and didn't say anything
21:54 <jeblair> mordred: kk
21:54 <mordred> jeblair: we should add a "like" button to etherpads :)
21:54 <SpamapS> Or perhaps I should finally give up on gnome-terminal and use one that doesn't destroy a CPU trying to display such DEBUG deluges
21:55 <mordred> jeblair: (we should absolutely not add a like button to etherpads)
21:55 * jeblair and 0 other people like this
21:56 <jeblair> SpamapS: no, this is good. i think i can safely say i, the infra team, and our log server thank you for finding this.  :)
21:57 <pabelanger> reminds me, we should add rsync logic to the playbooks for zuulv3 to not copy insanely large log folders
21:57 <pabelanger> would be a good POC
22:01 <SpamapS> pabelanger: as in, du the folder beforehand, and if it's big, refuse? I like that.
22:01 <SpamapS> that might even relieve us of the need for an executor-side du overwatch
22:03 <jlk> jeblair: I think we're going to wind up with some duplicated code between the gerrit and github drivers, at least for pipelines, branches, refs, comments, emails, username_filters, timespecs, actions
22:03 <jlk> open, current_patchset
22:03 <SpamapS> daaaaaaaaaaaaaaamnit
22:03 * SpamapS pep8's
22:04 <jlk> but, can't guarantee all drivers will have those, so ¯\_(ツ)_/¯
22:05 <jeblair> jlk: actually, i think just 'branches, refs, comments, emails, username'.  but yeah.
22:05 <jlk> is pipelines covered more generically?
22:06 <jeblair> jlk: i think it's zuul trigger specific
22:06 <jlk> k
22:06 <openstackgerrit> Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Remove errant basicConfig calls in test suites  https://review.openstack.org/466112
22:06 <jeblair> (that's another nice thing about this: the eventfilter is already quite crowded with trigger-specific stuff)
22:17 <SpamapS> hm
22:17 <SpamapS> getting random test fails now
22:18 <mordred> SpamapS: don't you love getting sucked down a rabbit hole?
22:20 <SpamapS> mordred: as long as there are cuddly rabbits at the bottom and not something awful like a worst cat or systemd.
22:20 <jlk> 2017-05-18 22:15:15.729107 | No handlers could be found for logger "paste.httpserver.ThreadPool"
22:20 <jlk> that's not an error tho
22:24 <SpamapS> that's been there a while
22:25 <SpamapS> I believe it happens because the paste server in the webapp tests adds the logger _after_ fakelogger is created
22:27 * SpamapS steps back from the rabbit hole for a few
22:37 <openstackgerrit> Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Add paste to the default list of loggers in tests  https://review.openstack.org/466118
22:37 <mordred> SpamapS: this ^^ might help that paste thingy
22:40 <jeblair> oops.  i just ran the full test suite without applying the log fixes.  :)
22:41 <mordred> jeblair: :)
22:41 <mordred> jeblair: was the logging unsurprisingly still weird?
22:43 <jeblair> mordred: it sort of spilled all over my desktop and i have a big mess now
22:43 * jlk hands jeblair a broom
22:49 <openstackgerrit> James E. Blair proposed openstack-infra/zuul feature/zuulv3: WIP driver-specific filters  https://review.openstack.org/466105
23:24 <openstackgerrit> James E. Blair proposed openstack-infra/zuul feature/zuulv3: WIP driver-specific filters  https://review.openstack.org/466105
23:25 <jeblair> jlk: ^ i think that's just about done.  i'm still trying to figure out which tests are hanging though.
23:27 <jeblair> jlk: there's a bit more decomposition we can do on the TriggerEvent class, but everything that's left will take a little bit more thought, and i think most of those are settled enough for now (ie, i don't think we need to revisit what "patchset" means right at this moment, but we should later).
23:46 <mordred> jeblair: I'm eod'ing (there are some sausages in my future), but that little logging config patch of mine failed on two RPC errors that seem troubling. I'll look tomorrow when I get up to try to sort it - but I thought I'd call your attention to it in case it troubles you too
23:46 <mordred> https://review.openstack.org/#/c/466118/
23:48 <jeblair> mordred: i think SpamapS's logging change *also* failed on rpc errors.  that does trouble me.  i worry about the gear release.
23:48 <mordred> jeblair: yah. that's why I was troubled
23:49 <SpamapS> :/
23:49 <SpamapS> they seem non-deterministic to me
23:50 <jeblair> i'm almost done with this refactor; i'll switch to looking at that then
23:50 <mordred> jeblair, SpamapS: luckily that was the only substantive thing in that release
23:51 <mordred> so at least there aren't _multiple_ unrelated work areas to poke at
23:51 <SpamapS> hm, so http://paste.openstack.org/show/609973/
23:51 * mordred is rescanning through the release diff before EOD just to see if he missed anything
23:51 <SpamapS> may legitimately be an API break that we didn't think through
23:52 <SpamapS> (weird that it didn't show up in testing before the log fix though..)
23:52 <mordred> SpamapS: ++
23:52 <mordred> SpamapS: easy way to check - cast that pipeline name to non-unicode and see if the fail goes away ...
23:53 <Shrews> oh good, i'm not the only one seeing those two RPC fails
23:53 <mordred> Shrews: nope. we're all seeing them
23:53 <Shrews> "I see RPC fails"
23:53 <SpamapS> mordred: yeah, that's what I'm doing.. trying to simultaneously trace back to see where it's being coerced to unicode
23:54 <mordred> SpamapS: I would not be surprised if we accidentally double-encode somewhere
23:55 <SpamapS> python doesn't really let you do that the bad way if you're using .encode() and .decode()
23:55 <mordred> well - let me say - I would not be surprised by any one of a number of things like that - including double-encoding, or a legit api break we missed, or accidentally using ebcdic instead of ascii
23:56 <SpamapS>     pipeline = tenant.layout.pipelines[event.pipeline_name.encode('utf8')]
23:56 <SpamapS> KeyError: 'nonexistent'
23:56 <SpamapS> so I think layout.pipelines is empty
23:57 <SpamapS> OrderedDict([('check', <Pipeline check>), ('gate', <Pipeline gate>), ('post', <Pipeline post>)])
23:57 <SpamapS> It's not supposed to be there. ;-)
23:57 <SpamapS> duh
23:57 <SpamapS> 'nonexistent'
23:58 <mordred> SpamapS: so - that keyerror should be manifesting as an RPCException but isn't
23:58 <SpamapS> yep
23:59 <SpamapS> gah.. EOD is at hand
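(The bytes/str angle on that traceback, for reference: on python 2 the .encode('utf8') on a dict key is harmless because str and unicode keys compare and hash equal, while on python 3 it produces bytes, which never match a str key:

    pipelines = {'check': 1}

    pipelines['check']                    # fine on py2 and py3
    pipelines['check'.encode('utf8')]     # py2: fine ('check' stays str)
                                          # py3: KeyError, b'check' != 'check'
)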

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!