Tuesday, 2019-04-23

00:02 *** gerrykopec has joined #starlingx
00:37 *** changcheng has quit IRC
00:38 *** changcheng has joined #starlingx
01:26 *** shuicheng_ has joined #starlingx
01:37 *** shuicheng_ has quit IRC
02:11 *** cfriesen has quit IRC
02:45 *** dklyle has quit IRC
02:46 *** dklyle has joined #starlingx
02:49 *** rcw has quit IRC
03:17 *** cfriesen has joined #starlingx
03:32 *** cfriesen has quit IRC
06:17 *** sdinescu has quit IRC
06:30 *** sdinescu has joined #starlingx
06:54 *** ericbarrett has quit IRC
07:15 *** Samiam1999DTP has joined #starlingx
07:18 *** Samiam1999 has quit IRC
07:55 *** sdinescu has quit IRC
07:55 *** sdinescu has joined #starlingx
12:33 *** ijolliffe has joined #starlingx
13:00 *** ericbarrett has joined #starlingx
13:59 *** boxiang has quit IRC
14:00 *** boxiang has joined #starlingx
14:15 *** mariocarrillo1 has joined #starlingx
14:57 *** lcastell has joined #starlingx
14:59 *** lcastell has quit IRC
15:02 *** sgw has joined #starlingx
15:05 *** lcastell has joined #starlingx
15:10 *** lcastell has quit IRC
15:12 *** lcastell has joined #starlingx
15:12 *** rcw has joined #starlingx
15:16 *** lcastell has quit IRC
15:18 *** lcastell has joined #starlingx
15:22 *** lcastell has quit IRC
15:24 *** lcastell has joined #starlingx
15:29 *** lcastell has quit IRC
15:34 *** irenexychen has joined #starlingx
15:39 *** lcastell has joined #starlingx
15:44 *** lcastell has quit IRC
15:45 *** lcastell has joined #starlingx
15:50 *** lcastell has quit IRC
15:51 *** lcastell has joined #starlingx
15:56 *** lcastell has quit IRC
15:57 *** lcastell has joined #starlingx
16:02 *** lcastell has quit IRC
16:03 *** lcastell has joined #starlingx
16:08 *** lcastell has quit IRC
16:10 *** lcastell has joined #starlingx
16:15 *** lcastell has quit IRC
16:20 *** lcastell has joined #starlingx
16:24 *** lcastell has quit IRC
16:26 *** lcastell has joined #starlingx
16:31 *** lcastell has quit IRC
16:32 *** lcastell has joined #starlingx
16:37 *** lcastell has quit IRC
16:41 *** lcastell has joined #starlingx
16:46 *** lcastell has quit IRC
16:50 *** altlogbot_3 has quit IRC
16:54 *** lcastell has joined #starlingx
16:56 *** altlogbot_0 has joined #starlingx
17:04 *** lcastell has quit IRC
17:09 <dpenney__> dtroyer: can zuul audit the commit message? i.e. if Depends-On needs to be in the final section now in order to function properly, maybe we can add a zuul check to protect against an unhandled tag?
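For illustration, a minimal sketch of the kind of commit-message check being proposed above, assuming the message is piped in on stdin (e.g. `git log -1 --format=%B | python3 check_depends_on.py`). This is a hypothetical script, not an existing Zuul or Gerrit job, and the script name is made up for the example.

#!/usr/bin/env python3
"""Hypothetical lint: warn if a Depends-On tag sits outside the final paragraph.

A sketch only; StarlingX/Zuul does not ship this check.
"""
import re
import sys

def depends_on_misplaced(message: str) -> bool:
    # Split the commit message into paragraphs separated by blank lines.
    paragraphs = [p for p in re.split(r"\n\s*\n", message.strip()) if p.strip()]
    if not paragraphs:
        return False
    tag = re.compile(r"^\s*Depends-On:", re.IGNORECASE | re.MULTILINE)
    # A Depends-On line in any paragraph other than the last one would be
    # ignored by tooling that only reads the trailer block, so flag it.
    return any(tag.search(p) for p in paragraphs[:-1])

if __name__ == "__main__":
    if depends_on_misplaced(sys.stdin.read()):
        print("Depends-On found outside the final paragraph of the commit message")
        sys.exit(1)
    sys.exit(0)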
17:10 *** lcastell has joined #starlingx
17:15 *** lcastell has quit IRC
17:17 *** lcastell has joined #starlingx
17:22 *** lcastell has quit IRC
17:23 *** lcastell has joined #starlingx
17:28 *** lcastell has quit IRC
17:29 *** lcastell has joined #starlingx
17:34 *** lcastell has quit IRC
17:35 *** lcastell has joined #starlingx
17:40 *** lcastell has quit IRC
17:44 *** lcastell has joined #starlingx
17:49 *** lcastell has quit IRC
17:51 *** lcastell has joined #starlingx
17:56 *** lcastell has quit IRC
17:56 <dpenney__> I've been seeing Zuul re-run the "check" stage on a W+1... noticed it on a few updates now... where before it would just run the "gate". Could this be due to some configuration difference in the opendev.org setup?
18:03 *** lcastell has joined #starlingx
18:05 *** lcastell has quit IRC
18:12 *** lcastell has joined #starlingx
18:16 *** lcastell has quit IRC
18:18 *** lcastell has joined #starlingx
18:22 *** lcastell has quit IRC
18:24 *** lcastell has joined #starlingx
18:48 *** tsmith2 has quit IRC
18:49 <dtroyer> dpenney__: a) that is actually a pure Gerrit function; there have been proposals in the past for adding checks to commit messages, and honestly I have forgotten what the argument was that led to them not being implemented
18:51 <dtroyer> b) can you give me an example? I don't think the opendev work itself would have reset anything to cause the check pipeline to be forced again. However, any changes to a review will do that, even (I think) the ones I did yesterday updating commit messages.
18:53 <dpenney__> most recent one is ongoing now: https://review.opendev.org/#/c/652713/ ... it failed the stx-metal devstack job in the "gate" after it passed in the "check"... I removed and re-added Workflow, which previously would just redo the gate. But I've seen this one and others, yesterday and today, go back to "check" when I expected just the "gate" to run
18:55 *** lcastell has quit IRC
18:55 <dtroyer> Hmmm… OK, I see it re-running check after your first +W, that is unexpected
18:56 <dpenney__> yep
18:56 <dpenney__> I've noticed it with a number of reviews. That may be partly responsible for how bogged down Zuul seems to be today
18:57 <dtroyer> ok, what I think happened there, though, is that after Eric's +W yesterday morning it went to the gate pipeline as expected.
18:57 <dtroyer> Devstack failed because the job updates had not yet merged; that is what reset everything and caused the check pipeline to run again
18:57 <dtroyer> and then another, different gate failure caused it again
19:05 <dtroyer> I can't find what the actual failure is in that second job… POST_FAILURE is almost always a problem in the test infrastructure somewhere. I am not seeing reports of a series of those… and often when that happens there are missing log files, etc., but here everything seems to be there
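As an aside, one way to confirm which pipeline a change is currently sitting in is to query the Zuul status API rather than watching the status page. A rough sketch follows; the endpoint URL and the JSON field names (pipelines, change_queues, heads, id) are assumptions based on the Zuul status JSON layout and may not match the current deployment exactly.

#!/usr/bin/env python3
"""Sketch: report which Zuul pipelines a Gerrit change is enqueued in."""
import json
import sys
import urllib.request

# Assumed endpoint for the OpenDev Zuul status JSON.
STATUS_URL = "https://zuul.opendev.org/api/tenant/openstack/status"

def pipelines_for_change(change_number: str):
    with urllib.request.urlopen(STATUS_URL) as resp:
        status = json.load(resp)
    found = []
    # Walk pipelines -> change queues -> queue heads -> enqueued items;
    # item ids are assumed to look like "<change>,<patchset>".
    for pipeline in status.get("pipelines", []):
        for queue in pipeline.get("change_queues", []):
            for head in queue.get("heads", []):
                for item in head:
                    item_id = item.get("id") or ""
                    if item_id.split(",")[0] == change_number:
                        found.append(pipeline.get("name"))
    return found

if __name__ == "__main__":
    change = sys.argv[1] if len(sys.argv) > 1 else "652713"
    print(pipelines_for_change(change) or "not currently enqueued")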
19:06 *** tsmith2 has joined #starlingx
19:09 <dpenney__> I see errors in there like: ERROR cinder.service [-] Manager for service cinder-backup ubuntu-bionic-rax-dfw-0005517845 is reporting problems, not sending heartbeat. Service will appear "down".
19:09 <dpenney__> but that appears in a success case, too
19:10 <dpenney__> all the ERROR messages I can see in the failure case are also in a success case
19:14 <dtroyer> Hmmm… I need to get this channel added to the opendev infra statusbot
19:14 <dtroyer> repost: (notice) NOTICE: the zuul scheduler is being restarted now in order to address a memory utilization problem; changes under test will be reenqueued automatically
19:20 <dpenney__> did that just come out?
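The comparison being done by hand here can be scripted. A rough sketch, with hypothetical local file names standing in for the downloaded passing and failing job logs; the "ERROR" substring match is intentionally crude, and node-specific hostnames may still make some lines differ.

#!/usr/bin/env python3
"""Sketch: list ERROR lines present in a failing job log but absent from a passing one."""

def error_lines(path: str) -> set:
    lines = set()
    with open(path, errors="replace") as f:
        for line in f:
            idx = line.find("ERROR")
            if idx != -1:
                # Keep only the text from "ERROR" onward so differing timestamps
                # at the start of each line do not defeat the comparison.
                lines.add(line[idx:].strip())
    return lines

if __name__ == "__main__":
    failing = error_lines("devstack-failed.log")   # hypothetical file name
    passing = error_lines("devstack-passed.log")   # hypothetical file name
    for line in sorted(failing - passing):
        print(line)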
19:26 *** lcastell has joined #starlingx
19:31 *** lcastell has quit IRC
19:32 <dpenney__> I just get "Something went wrong" now when I try accessing the zuul status page
19:32 <dpenney__> ok, I seem to have something now
19:33 <dtroyer> Jim is restarting Zuul
19:34 <dpenney__> ok, I wasn't sure if you were saying the restart was the cause of the earlier devstack failure, or if it had just been restarted
19:35 <dtroyer> no, the restart wasn't the cause, but the cause may be connected somewhere under the hood. POST_FAILURE is rarely from the job contents
19:35 *** rcw has quit IRC
19:36 *** rcw has joined #starlingx
19:36 *** boxiang has quit IRC
19:37 *** boxiang has joined #starlingx
19:57 *** lcastell has joined #starlingx
20:02 *** lcastell has quit IRC
20:03 *** lcastell has joined #starlingx
20:08 *** lcastell has quit IRC
20:09 *** lcastell has joined #starlingx
20:14 *** lcastell has quit IRC
20:16 *** lcastell has joined #starlingx
20:17 *** Samiam1999DTP has quit IRC
20:20 *** lcastell has quit IRC
20:21 *** rcw has quit IRC
20:21 *** rcw has joined #starlingx
20:22 *** lcastell has joined #starlingx
20:22 *** Samiam1999 has joined #starlingx
20:31 *** lcastell has quit IRC
20:48 *** lcastell has joined #starlingx
20:52 *** lcastell has quit IRC
21:07 *** rcw has quit IRC
21:09 *** ijolliffe has quit IRC
22:08 *** lcastell has joined #starlingx
22:13 *** lcastell has quit IRC
22:14 *** lcastell has joined #starlingx
23:07 *** lcastell has quit IRC
23:22 *** lcastell has joined #starlingx
23:24 *** lcastell has quit IRC
