Tuesday, 2018-12-11

01:20 *** ianychoi has quit IRC
01:58 *** neilsun has joined #zuul
03:07 *** bhavikdbavishi has joined #zuul
03:55 *** bjackman has joined #zuul
04:12 *** bjackman has quit IRC
04:12 *** bjackman has joined #zuul
04:38 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/zuul master: executor: add support for generic build resource  https://review.openstack.org/570668
05:21 *** chandan_kumar has joined #zuul
06:08 <openstackgerrit> gaobin proposed openstack-infra/zuul master: Modify some file content errors  https://review.openstack.org/624278
06:11 <openstackgerrit> gaobin proposed openstack-infra/zuul master: Modify some file content errors  https://review.openstack.org/624278
06:35 *** sdake has quit IRC
06:40 *** sdake has joined #zuul
06:51 *** rlandy has quit IRC
06:53 <openstackgerrit> Tristan Cacqueray proposed openstack-infra/zuul-base-jobs master: Add base openshift job  https://review.openstack.org/570669
07:14 *** quiquell|off is now known as quiquell
07:29 *** gtema has joined #zuul
07:37 <openstackgerrit> Merged openstack-infra/zuul master: Add spacing to Queue lengths line  https://review.openstack.org/623960
07:51 *** gtema has quit IRC
07:53 *** quiquell is now known as quiquell|brb
08:05 *** themroc has joined #zuul
08:28 *** bhavikdbavishi has quit IRC
08:40 *** quiquell|brb is now known as quiquell
08:58 *** hashar has joined #zuul
09:03 *** jpena|off is now known as jpena
10:28 *** electrofelix has joined #zuul
10:31 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul master: Only reset working copy when needed  https://review.openstack.org/624343
11:00 *** tobias-urdin is now known as tobias-urdin|lun
11:01 *** tobias-urdin|lun is now known as tobias-urdin_afk
11:18 *** rfolco has quit IRC
11:23 *** rfolco has joined #zuul
11:27 *** tobias-urdin_afk is now known as tobias-urdin
11:41 *** hashar is now known as hasharAway
11:50 *** quiquell is now known as quiquell|brb
12:13 *** quiquell|brb is now known as quiquell
12:32 *** jpena is now known as jpena|lunch
12:42 *** hasharAway is now known as hashar
12:43 *** bjackman has quit IRC
13:10 *** panda|off is now known as panda
13:28 *** rlandy has joined #zuul
13:32 *** jpena|lunch is now known as jpena
14:03 <tobiash> corvus, clarkb, mordred: heads up, I'll take over this: https://review.openstack.org/613261 . It's now a high-priority item for me, because when skipping jobs via zuul_return the nodes of the skipped jobs are still requested and then locked forever by the scheduler
14:03 <tobiash> so skipping child jobs via zuul_return shouldn't be used atm
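
For context on the zuul_return mechanism tobiash refers to: a job can return data to the scheduler, and the zuul.child_jobs key controls which child jobs run. A minimal Python sketch of the data shape, assuming the documented key names; in a playbook this dict would be passed to the zuul_return Ansible module as its data argument:

    # Return data a job can emit to skip its children; an empty child_jobs
    # list asks the scheduler to run no child jobs. Per the discussion above,
    # doing this leaked the children's nodes until the fix in
    # https://review.openstack.org/613261 landed.
    skip_all_children = {
        "zuul": {
            "child_jobs": [],  # names of child jobs to run; empty means skip all
        }
    }
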
14:11 *** bbayszczak has joined #zuul
14:18 *** quiquell is now known as quiquell|lunch
14:30 *** quiquell|lunch is now known as quiquell
15:05 <openstackgerrit> Fabien Boucher proposed openstack-infra/zuul master: WIP - Pagure driver  https://review.openstack.org/604404
15:06 * Shrews kinda sorta has internet access today
15:11 <pabelanger> Hmm, is it possible to add back the log URL to https://zuul.openstack.org/builds ? It seems we now need to click into the result URL to pull it from e.g. https://zuul.openstack.org/build/eda30e2fccc041b9ad08c97e7346e86a
15:11 <pabelanger> This means more clicking back and forth if you want to open multiple logs
15:12 <pabelanger> maybe we can use the duration value and make it a link to the log file
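
One way to gather log URLs without clicking through each result page is zuul-web's REST API, which backs the builds page. A hedged sketch, assuming the /api/builds endpoint that white-labeled deployments such as zuul.openstack.org expose:

    # List recent builds and their log URLs via the zuul-web REST API.
    import json
    import urllib.request

    with urllib.request.urlopen("https://zuul.openstack.org/api/builds?limit=10") as resp:
        builds = json.load(resp)

    for build in builds:
        # log_url is a dedicated field on build records today, but may be
        # absent, so fall back gracefully (see fungi's point below).
        print(build["uuid"], build.get("log_url") or "<no log url>")
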
15:14 <fungi> in a recent post to the openstack-infra ml, a research student performing a study at mcgill university poses an intriguing question about finding zuul builds triggered by specific events. i wonder if storing the trigger reason in the builds db would be a sane addition? http://lists.openstack.org/pipermail/openstack-infra/2018-December/006250.html
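
Purely as an illustration of fungi's idea, not Zuul's actual schema: a hedged SQLAlchemy sketch of recording the triggering event on the builds table, with hypothetical table and column names:

    # Hypothetical model sketch; names are illustrative only.
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Build(Base):
        __tablename__ = "zuul_build"
        id = Column(Integer, primary_key=True)
        uuid = Column(String(36))
        log_url = Column(String(255))
        # New: the event that triggered the build, e.g. "patchset-created"
        # or "comment-added", making event-based queries like the one in
        # the ml post possible.
        trigger_event = Column(String(255))
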
15:15 <fungi> pabelanger: hrm, if logs are arbitrary collected artifacts, how do we know which ones to link in the main list? set an additional parameter to indicate the url is for the "default" job artifact for that display?
15:16 <fungi> (it seems the "log url" is still a dedicated field today, but we'd talked about having the ability to attach an arbitrary number of such artifacts to a build result record in the future)
15:17 <fungi> Shrews: so you've managed to dig out already?
15:18 <Shrews> fungi: yeah. but no point going anywhere since everything is closed. i made it to a starbucks this morning to attempt to work, only to find it closed.
15:19 <fungi> ouch
15:19 <fungi> broadband is still down at the house then, i guess
15:19 <pabelanger> fungi: yah, log_url is what I was thinking. I would assume each job has one, and if not, maybe just don't add the href
15:19 <Shrews> fungi: my home internet seems to be fluctuating b/w 100kbps and 7Mbps (usually 200Mbps), so it's sort of usable now
15:20 <fungi> well that's no fun
15:20 <fungi> perhaps it decided to go sledding
15:20 <Shrews> fungi: at least i have power. lost it 3 times already
15:20 <fungi> and heat hopefully
15:20 <pabelanger> Shrews: you got a dump of snow before this Canadian! Impressive
15:21 <Shrews> pabelanger: almost a foot!
15:21 <Shrews> historical for this area
15:21 <pabelanger> Nice!
15:21 <Shrews> and hysterical
15:21 <pabelanger> ++
15:21 <fungi> yes, hysteria seems to have been the result
15:22 <fungi> panic, looting in the streets, and theft of everyone's rubbish bin lids
15:22 <fungi> or maybe just that last part
15:41 *** neilsun has quit IRC
16:24 *** bhavikdbavishi has joined #zuul
16:25 *** bhavikdbavishi has quit IRC
16:31 *** bhavikdbavishi has joined #zuul
16:39 *** themroc has quit IRC
16:43 *** sean-k-mooney has quit IRC
16:48 *** quiquell is now known as quiquell|off
16:48 *** gtema has joined #zuul
16:49 *** sean-k-mooney has joined #zuul
16:58 *** bhavikdbavishi1 has joined #zuul
17:00 *** bhavikdbavishi1 has quit IRC
17:02 *** bhavikdbavishi has quit IRC
17:05 *** bhavikdbavishi has joined #zuul
17:09 *** hashar has quit IRC
17:21 *** gtema has quit IRC
17:34 <clarkb> was this a mostly unforecast amount of snow too?
17:36 <fungi> no, this one had a fair amount of warning, at least
17:37 <fungi> granted, when you're not used to that amount of snow, it doesn't make a ton of difference that you got a heads-up
17:38 <fungi> the same storm buried my parents and brother's family too, but they're in the mountains and a lot more accustomed to it
17:38 <clarkb> ya, just remembering a few years ago when we had something similar, but they forecast 2 inches and we got a foot
17:38 <clarkb> thankfully I had power and internet the whole time. Definitely understand not being prepared for it when it isn't normal though
17:38 <fungi> if memory serves they were forecasting significant accumulation from this one a few days ahead
17:38 <clarkb> hopefully everyone is staying warm and dry
17:47 *** sshnaidm is now known as sshnaidm|afk
18:21 *** bbayszczak has quit IRC
18:25 <Shrews> https://review.openstack.org/623269 could use another +2 to make nodepool tests a bit more stable
18:28 <clarkb> Shrews: done
18:28 <Shrews> clarkb: thx
18:38 *** rfolco is now known as rfolco_brb
18:40 *** jpena is now known as jpena|off
18:52 *** bhavikdbavishi has quit IRC
18:53 *** bhavikdbavishi has joined #zuul
18:59 *** rlandy is now known as rlandy|brb
19:00 <openstackgerrit> Merged openstack-infra/nodepool master: Fix race in test_handler_poll_session_expired  https://review.openstack.org/623269
19:10 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul master: Fix node leak when skipping child jobs  https://review.openstack.org/613261
19:11 *** bhavikdbavishi has quit IRC
19:11 <tobiash> corvus, clarkb, mordred: This fixes a severe node leak when skipping child jobs using zuul_return ^
19:12 <tobiash> bbayszczak is the original owner of that; I added a test case and fixed the test failures
19:14 *** rlandy|brb is now known as rlandy
19:26 *** electrofelix has quit IRC
19:34 <tobiash> mordred: are there any known connection problems to docker hub? http://logs.openstack.org/61/613261/5/check/pbrx-build-container-images/4ed28a8/job-output.txt.gz#_2018-12-11_19_14_57_397372
19:36 <pabelanger> mordred: another openstacksdk question: any idea why I am seeing http://paste.openstack.org/show/737049/ ? this is running openstacksdk 0.20.0. I plan on downgrading, but wanted to at least let you know about the traceback
19:37 <pabelanger> http://git.zuul-ci.org/cgit/nodepool/tree/nodepool/driver/openstack/provider.py#n206 hasn't changed in some time, so I suspect it is a new bug in openstacksdk?
19:39 <tobiash> pabelanger: this is known and mordred already has a fix; the downgrade to 0.19.x was merged into nodepool within the last two weeks
19:40 <tobiash> pabelanger: the sdk fix was this: https://review.openstack.org/621316
19:40 <tobiash> pabelanger: and a refactor of the fix: https://review.openstack.org/621585
19:40 <pabelanger> tobiash: okay cool, I figured it was fixed, just couldn't find the patch
19:41 <pabelanger> tobiash: yah, did an install of 3.3.1 and openstacksdk was still uncapped
19:41 <tobiash> oh, then the cap came shortly after 3.3.1
19:41 <pabelanger> maybe we should ask for a 3.4.0 release today, if zuul.o.o is holding up well
19:41 <pabelanger> tobiash: yah
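
The cap itself belongs in nodepool's requirements file, but a hedged runtime guard illustrates the same idea; the version bound reflects the 0.20.0 regression discussed above and the exception text is illustrative:

    # Refuse to run against the openstacksdk release with the known regression.
    import pkg_resources

    sdk = pkg_resources.get_distribution("openstacksdk").parsed_version
    if sdk >= pkg_resources.parse_version("0.20.0"):
        raise RuntimeError(
            "openstacksdk 0.20.0 has a known regression; install <0.20.0 "
            "(sdk fix: https://review.openstack.org/621316)")
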
19:42 <tobiash> oh, 3.3.1 was already six weeks ago
19:42 <tobiash> time flies
19:43 <tobiash> sounds good to me
19:47 <tobiash> is zuul ready for a 3.4.0? I think these should be released together because of the relative priority feature.
19:49 *** hashar has joined #zuul
19:49 <pabelanger> not sure, I don't think the new check queue logic has been tested yet
19:49 <pabelanger> for zuul.o.o
20:14 <fungi> it has just been pointed out in #openstack-tc that there is an osf board of directors call starting in 45 minutes to discuss confirmation requirements for new pilot projects: https://wiki.openstack.org/wiki/Governance/Foundation/11Dec2018BoardMeeting
20:14 <fungi> apparently they failed to announce it on the foundation ml, so i only just found out
20:17 <fungi> corvus: ^ heads up, in case you don't see that while lunching
20:19 *** tobiash has quit IRC
20:22 <clarkb> My hunch is that the switch from webex to zoom meant the webex-based invites/reminders were no longer sent
20:22 <clarkb> hopefully something that can be addressed next time
20:23 *** tobiash has joined #zuul
20:23 <openstackgerrit> Jean-Philippe Evrard proposed openstack-infra/zuul-jobs master: Add docker insecure registries feature  https://review.openstack.org/624484
20:26 <openstackgerrit> Jean-Philippe Evrard proposed openstack-infra/zuul-jobs master: Add docker insecure registries feature  https://review.openstack.org/624484
20:28 <Shrews> pabelanger: tobiash: clarkb: fungi: corvus: FYI, openstacksdk has an issue with the just released version of dogpile.cache: https://review.openstack.org/624485
20:29 <Shrews> so be careful on upgrades
20:29 <tobiash> Shrews: thanks
20:29 <pabelanger> Shrews: ack
20:32 <openstackgerrit> Tobias Henkel proposed openstack-infra/zuul master: Use gearman client keepalive  https://review.openstack.org/599573
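
The change above enables TCP keepalive on the gearman client connection so half-dead connections are noticed instead of hanging forever. A minimal sketch of the underlying socket options; these are standard Linux socket knobs, not Zuul's actual configuration option names:

    # Enable TCP keepalive on a client socket (Linux-specific TCP_KEEP* options;
    # timing values are illustrative).
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before declaring the peer dead
    sock.connect(("gearman.example.com", 4730))  # hypothetical gearman server address
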
20:33 <fungi> appreciated
20:57 *** rfolco_brb has quit IRC
21:45 <clarkb> the call is moving to pilot project confirmation guidelines now
21:46 <corvus> i didn't see that
21:56 *** hashar has quit IRC
22:07 *** EmilienM has quit IRC
22:08 *** EmilienM has joined #zuul
22:16 <clarkb> Shrews: we have in-region dockerhub caches that jobs can use instead. That particular issue appears to be a dns lookup failure though
22:17 <clarkb> Shrews: possible that docker was updating dns in the background maybe?
22:20 <Shrews> clarkb: wha?
22:21 <clarkb> Shrews: was it you asking about pbrx docker image issues?
22:21 <Shrews> clarkb: no
22:21 <clarkb> oh sorry, it was tobiash
22:21 <clarkb> you are the same color in weechat
22:21 <Shrews> oh, haha. was trying to relate that to the sdk issue and was all confused :)
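
The in-region dockerhub caches clarkb mentions are used by pointing the docker daemon at a registry mirror. A hedged sketch of writing that configuration; "registry-mirrors" is a real dockerd option, while the mirror URL here is a placeholder rather than the actual infra cache address:

    # Point dockerd at a pull-through registry mirror via /etc/docker/daemon.json.
    import json

    config = {"registry-mirrors": ["https://mirror.example.org:8082"]}
    with open("/etc/docker/daemon.json", "w") as f:
        json.dump(config, f, indent=2)
    # Restart dockerd afterwards so it picks up the mirror.
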
23:18 <openstackgerrit> Clark Boylan proposed openstack-infra/zuul-jobs master: Use mirrors if available when installing OVS on centos  https://review.openstack.org/624525
