Sunday, 2019-04-28

*** rfolco|ruck has quit IRC  02:18
*** rfolco|ruck has joined #zuul  02:19
*** bhavikdbavishi has joined #zuul  04:38
*** bhavikdbavishi has quit IRC  04:58
*** bhavikdbavishi has joined #zuul  05:36
*** snapiri has joined #zuul  05:55
*** ChrisShort has quit IRC  06:19
*** ChrisShort has joined #zuul  06:19
*** bhavikdbavishi has quit IRC  06:25
<tobiash> pabelanger: responded on 652764  06:29
*** pcaruana has joined #zuul  06:56
*** jamesmcarthur has joined #zuul  11:06
*** jamesmcarthur has quit IRC  11:10
*** bhavikdbavishi has joined #zuul  11:49
*** bhavikdbavishi has quit IRC  13:54
*** jamesmcarthur has joined #zuul  14:35
*** sshnaidm|off has quit IRC  14:37
*** jamesmcarthur has quit IRC  14:39
*** altlogbot_2 has quit IRC  15:00
*** jamesmcarthur has joined #zuul  15:01
*** altlogbot_2 has joined #zuul  15:02
<pabelanger> tristanC: http://softwarefactory-project.io/zuul/ wasn't loading properly. Could have been an outage I guess.  15:08
<openstackgerrit> Clark Boylan proposed zuul/zuul master: Tiny cleanup in change panel js  https://review.opendev.org/655589  15:14
<openstackgerrit> Paul Belanger proposed zuul/zuul master: Increase timeout value for test_playbook timeout job (again)  https://review.opendev.org/656174  15:15
<clarkb> pabelanger: thank you for the follow up on the parent of ^ the one I just rebased  15:15
<pabelanger> clarkb: np  15:15
<pabelanger> zuul-maint: ^ bumps the timeout for test_playbook again; it still is a little racy. However, I don't feel that is the best long-term solution, so open to suggestions.  15:16
<clarkb> is this the test that actually runs ansible?  15:16
<pabelanger> the good news is that out of 30 test runs, we only had 1 failure (^) for testing: https://review.opendev.org/655805/  15:16
<pabelanger> clarkb: yah  15:16
<pabelanger> clarkb: I should say, it tests that we can abort an ansible run properly  15:17
<pabelanger> once we hit the timeout  15:17
<pabelanger> but in some cases we time out in pre-run, which causes zuul to run the job again, messing up our build results  15:18
<pabelanger> so, my first thought was just to implement job.pre_timeout, so that doesn't happen  15:19
<pabelanger> but that is a change in functionality  15:19
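
A minimal illustrative sketch, in Python, of the behaviour being described above: a failure or timeout during pre-run is treated as a probable infrastructure problem, so Zuul re-attempts the job instead of reporting it, which is what skews the test's expected build results. The attribute names below are assumptions for illustration, not Zuul's actual scheduler code.

    # Illustrative only -- attribute names are assumptions, not Zuul internals.
    def should_retry(build) -> bool:
        # Pre-run failures (including timeouts) are assumed to be infrastructure
        # problems, so the build is retried up to the job's configured attempts
        # instead of being reported as a failure.
        if build.failed_in_pre_run:
            return build.attempt_number < build.job.attempts
        return False
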
<clarkb> pabelanger: also, if you are able to, https://review.opendev.org/656072/ was the other race found in testing 655805  15:20
<clarkb> pabelanger: done  15:21
<pabelanger> ty  15:21
<openstackgerrit> Paul Belanger proposed zuul/zuul master: DNM: exercise halving concurrency  https://review.opendev.org/655805  15:22
<clarkb> oh good, my tox.ini update to fix the coverage target merged too  15:23
<clarkb> that was useful when testing the security fix (so good to have in our back pocket)  15:24
<pabelanger> yah, we've stabilized things pretty well over the last few days  15:24
<pabelanger> I am hoping to get 5 rechecks out of 655805 without any failures  15:25
<clarkb> catching up on the fix for the docker image builds. Was it just the firewall update?  15:31
<pabelanger> clarkb: yah, that seems to have been the last fix  15:32
<pabelanger> but only for ipv6 I think  15:32
<clarkb> ya, I think docker modifies ipv4 rules itself  15:32
<clarkb> it might do ipv6 rules too if we tell it to manage ipv6 (I seem to recall a config flag for that)  15:32
<pabelanger> clarkb: speaking of the registry, this failure just happened: http://logs.openstack.org/89/655589/2/check/zuul-build-image/0c05cc8/job-output.txt.gz#_2019-04-28_15_36_43_818975  15:38
<clarkb> that is a different error than before right? before was an EOF (which makes sense given the firewall)  15:39
<clarkb> what is odd is it pushed other images and layers prior to that  15:41
<clarkb> I wonder if there are errors in the logs we can get at  15:41
<clarkb> (not in a great spot to look as I'm trying to pay attention to the board meeting)  15:41
<pabelanger> yah, same  15:42
<clarkb> *errors in the intermediate registry logs  15:43
<openstackgerrit> Merged zuul/zuul master: Fix test race in test_job_pause_retry (#2)  https://review.opendev.org/656072  15:56
<pabelanger> I'll be relocating to a coffee shop, but suspect I'll miss the confirmation vote for zuul. Looking forward to the results shortly  15:56
*** jamesmcarthur has quit IRC  16:06
<corvus> clarkb, pabelanger: the other thing with the registry fix was adding retries to skopeo, so we will try each skopeo command 3 times  16:10
<openstackgerrit> Tobias Henkel proposed zuul/zuul master: Make test_playbook more robust  https://review.opendev.org/656177  16:10
<tobiash> pabelanger: check out this idea regarding test_playbook ^  16:10
<corvus> the error that pabelanger linked happened 3 times, so... maybe there is something wrong with that blob  16:10
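
A rough Python sketch of the retry corvus describes: wrap each skopeo invocation and try it up to three times before giving up. This is only an illustration of the idea; the helper name and delay are assumptions, and the real fix lives in the zuul-jobs roles rather than in a Python helper like this.

    # Sketch: retry a transient skopeo failure a few times before giving up.
    import subprocess
    import time

    def skopeo_copy(src: str, dst: str, attempts: int = 3, delay: float = 5.0) -> None:
        for i in range(attempts):
            try:
                subprocess.run(['skopeo', 'copy', src, dst], check=True)
                return
            except subprocess.CalledProcessError:
                if i == attempts - 1:
                    raise
                time.sleep(delay)

    # e.g. skopeo_copy('docker://source-registry/image:tag',
    #                  'docker://target-registry/image:tag')
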
*** jamesmcarthur has joined #zuul  16:11
<openstackgerrit> Tobias Henkel proposed zuul/zuul master: DNM: exercise halving concurrency  https://review.opendev.org/655805  16:13
*** jamesmcarthur has quit IRC  16:14
<openstackgerrit> Tobias Henkel proposed zuul/zuul master: Fix race in test_job_node_failure_resume  https://review.opendev.org/656178  16:14
<openstackgerrit> Tobias Henkel proposed zuul/zuul master: DNM: exercise halving concurrency  https://review.opendev.org/655805  16:15
*** jamesmcarthur has joined #zuul  16:26
<mnaser> corvus: https://opendev.org/vexxhost/ansible-role-wireguard interesting simple multi-node vpn testing :)  16:55
<corvus> the openstack foundation board of directors confirmed zuul as an open infrastructure project  17:10
<corvus> we're in! :)  17:10
<clarkb> hip hip hurray! (is this an appropriate use of that?)  17:10
<mugsie> congrats!  17:12
<tobiash> :)  17:14
<mordred> corvus: I was going to type that sentence, then I realized that I think technically I'm not supposed to - so thanks!  17:17
<corvus> mordred: yeah, us peanut gallery folks are under no such restrictions, so i figured i'd do it :)  17:18
<mordred> \o/  17:18
<AJaeger> \o/  17:24
*** jamesmcarthur has quit IRC  17:30
<pabelanger> \o/  17:47
<pabelanger> http://logs.openstack.org/05/655805/9/check/zuul-tox-py35-1/f94c0f2/job-output.txt.gz#_2019-04-28_16_19_09_551133  17:49
<pabelanger> that is weird  17:49
<pabelanger> filesystem issue?  17:49
<clarkb> pabelanger: might not be executable?  17:52
<clarkb> (that would be weird though)  17:52
<pabelanger> yah, going to recheck and see if there is an easy way to set up an autohold on that review for all the tox jobs  17:54
<pabelanger> in case we get it again  17:54
<pabelanger> tobiash: +2  17:58
<openstackgerrit> Monty Taylor proposed zuul/zuul-website master: Add slides for Spring 2019 Board Update  https://review.opendev.org/656182  17:59
<mordred> clarkb, corvus: ^^ draft slides for this afternoon  17:59
<clarkb> one small thing noted inline  18:02
<clarkb> two small things now :)  18:03
<pabelanger> Was zuul 3.3.0 for berlin?  18:04
<pabelanger> I _think_ it was  18:05
<pabelanger> mordred: also left comments, but looks great  18:11
<pabelanger> http://logs.openstack.org/33/631933/16/check/zuul-build-image/d6f4993/job-output.txt.gz#_2019-04-28_18_15_15_226860  18:22
<pabelanger> another yarn failure  18:22
<pabelanger> wonder if there is an upstream issue  18:22
*** jamesmcarthur has joined #zuul  18:35
<openstackgerrit> Paul Belanger proposed zuul/zuul master: WIP: stream logs for ansible loops  https://review.opendev.org/656185  18:57
*** jamesmcarthur has quit IRC  19:06
*** jamesmcarthur has joined #zuul  19:14
<openstackgerrit> Monty Taylor proposed zuul/zuul-website master: Add slides for Spring 2019 Board Update  https://review.opendev.org/656182  19:15
<openstackgerrit> James E. Blair proposed zuul/zuul-jobs master: Add missing conditionals to build-docker-image  https://review.opendev.org/656187  19:39
*** pcaruana has quit IRC  19:40
<tobiash> pabelanger: commented on 656185  19:51
<openstackgerrit> Merged zuul/zuul-jobs master: Add missing conditionals to build-docker-image  https://review.opendev.org/656187  19:56
*** jamesmcarthur has quit IRC  20:05
<pabelanger> tobiash: cool, that is what I figured but couldn't find a reference  20:10
*** jamesmcarthur has joined #zuul  20:13
<clarkb> that yarn issue seems to be persistent?  20:19
<pabelanger> unsure; now that docker is updated on my laptop, I'm going to play around with it  20:22
<pabelanger> so for the react wizards out there: sometimes my network is terrible, and when I have zuul.o.o open (or any zuul web) it doesn't seem to properly reconnect once the network is better. I just have the spinning circle on the top right  20:54
<pabelanger> a browser refresh is required, but with the old web I didn't have this issue while on terrible wifi  20:54
<pabelanger> what I see in the web console is: Failed to load resource: net::ERR_NETWORK_CHANGED  20:55
*** jamesmcarthur has quit IRC  20:56
<clarkb> fwiw sometimes I have to restart all of firefox for it to work again (so not always a zuul specific issue)  20:56
<pabelanger> yah, from what I see, it seems the GET status look just gets stuck  20:57
<pabelanger> s/look/loop  20:57
*** jamesmcarthur has joined #zuul  21:04
<Shrews> Hooray for Zuul-nation!  21:37
<Shrews> I shall celebrate with drinking of a yeast-fermented beverage  21:38
<SpamapS> FYI, I believe zuul-web has some issues with reconnecting that result in CORS violations.  21:47
<SpamapS> We see it too.. it was more prevalent when the service worker was enabled.  21:48
<nickx-intel> CORS?  21:48
<nickx-intel> @SpamapS ^?  21:48
<SpamapS> But I think what happens is sometimes you need to load javascript and the headers don't authorize it, so browser policies fail that, and you get Loading...  21:48
<SpamapS> https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS  21:48
<nickx-intel> thanks SpamapS d ^__^  21:49
<SpamapS> I haven't confirmed it yet, but it's the working theory. One of our devs uses "Brave" and it is quite reliable for him to get a blank or Loading... screen; shift-refresh fixes it.  21:50
<clarkb> the only cors headers I see are set to *  21:58
<openstackgerrit> Monty Taylor proposed zuul/zuul-website master: Add slides for Spring 2019 Board Update  https://review.opendev.org/656182  21:58
<clarkb> and those are set by the api/ subpath  21:58
<clarkb> the other paths don't seem to set them at all  21:58
<SpamapS> clarkb: the static bits need them too.  21:59
<clarkb> (this is looking at opendev's deployment)  21:59
<clarkb> I thought the default if unset was basically *  21:59
<clarkb> maybe some browsers (like brave) change that?  21:59
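
One quick way to check what clarkb is looking at is to request an endpoint with an Origin header and see which Access-Control-Allow-Origin value, if any, comes back. A small Python sketch; the URL and origin below are just examples, and the requests library is assumed to be available.

    # Print the CORS header (if any) returned by a Zuul endpoint.
    import requests

    resp = requests.get(
        'https://zuul.opendev.org/api/tenant/openstack/status',  # example URL
        headers={'Origin': 'https://example.com'},  # simulate a cross-origin caller
    )
    print(resp.status_code,
          resp.headers.get('Access-Control-Allow-Origin', '<no CORS header>'))
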
*** jamesmcarthur has quit IRC  22:19
*** jamesmcarthur has joined #zuul  22:30
*** sshnaidm has joined #zuul  22:43
*** jamesmcarthur has quit IRC  22:54
*** jamesmcarthur has joined #zuul  23:02
*** jamesmcarthur has quit IRC  23:12
<SpamapS> clarkb: yes, and firefox has a more stringent option that many folks turn on.  23:17
*** jamesmcarthur has joined #zuul  23:20
<pabelanger> http://paste.openstack.org/show/750044/  23:20
<pabelanger> first time I've seen that failure in our unittests  23:20
<pabelanger> we should be able to make assertNodepoolState() retry on connection loss  23:21
<pabelanger> https://opendev.org/zuul/zuul/src/branch/master/tests/base.py#L2870  23:22
<pabelanger> Shrews: ^ maybe you have some thoughts  23:22
<pabelanger> http://logs.openstack.org/05/655805/9/check/zuul-tox-py35-2/a8df9ae/testr_results.html.gz  23:23
<pabelanger> is where the failure was  23:23
*** ianw_pto is now known as ianw  23:37
<openstackgerrit> Paul Belanger proposed zuul/zuul master: Update assertNodepoolState() to retry zk requests  https://review.opendev.org/656213  23:41
<pabelanger> first attempt ^  23:41
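
A minimal sketch of the retry idea behind 656213, assuming a kazoo client object and a hypothetical helper name; the actual review may structure this differently.

    # Sketch: retry a ZooKeeper read when the session momentarily drops, so a
    # transient ConnectionLoss doesn't fail the whole assertion.
    import time
    from kazoo.exceptions import ConnectionLoss

    def zk_get_with_retry(zk_client, path, attempts=3, delay=1.0):
        for i in range(attempts):
            try:
                return zk_client.get(path)  # returns (data, stat)
            except ConnectionLoss:
                if i == attempts - 1:
                    raise
                time.sleep(delay)
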
<openstackgerrit> Paul Belanger proposed zuul/zuul master: DNM: exercise halving concurrency  https://review.opendev.org/655805  23:42
*** jamesmcarthur has quit IRC  23:43
*** irclogbot_3 has quit IRC  23:54
*** edmondsw_ has quit IRC  23:58
*** irclogbot_0 has joined #zuul  23:58
