*** mestery has joined #openstack-relmgr-office | 00:14 | |
*** mestery has quit IRC | 00:19 | |
openstackgerrit | Davanum Srinivas (dims) proposed openstack/releases: Fix Bug in oslo.messaging 2.6.0 and release as 2.6.1 https://review.openstack.org/233870 | 00:22 |
*** stevemar_ has quit IRC | 00:25 | |
*** stevemar_ has joined #openstack-relmgr-office | 00:29 | |
dims_ | lifeless: around? | 01:05 |
lifeless | dims_: yah | 01:43 |
openstackgerrit | Merged openstack/releases: Fix Bug in oslo.messaging 2.6.0 and release as 2.6.1 https://review.openstack.org/233870 | 01:47 |
dims_ | lifeless: thanks | 01:47 |
*** stevemar_ has quit IRC | 01:53 | |
*** dims_ has quit IRC | 02:28 | |
*** mriedem has quit IRC | 02:39 | |
*** dims__ has joined #openstack-relmgr-office | 02:41 | |
*** stevemar_ has joined #openstack-relmgr-office | 02:42 | |
*** dims__ has quit IRC | 02:59 | |
*** spzala has quit IRC | 03:03 | |
*** jhesketh has quit IRC | 03:08 | |
*** jhesketh has joined #openstack-relmgr-office | 03:13 | |
*** dims__ has joined #openstack-relmgr-office | 03:59 | |
*** dims__ has quit IRC | 04:06 | |
*** nikhil_k has joined #openstack-relmgr-office | 04:39 | |
*** nikhil has quit IRC | 04:42 | |
*** mestery has joined #openstack-relmgr-office | 04:51 | |
*** mestery has quit IRC | 05:02 | |
*** nikhil has joined #openstack-relmgr-office | 05:41 | |
*** nikhil_k has quit IRC | 05:43 | |
stevemar_ | dims and dhellmann: if we could get https://review.openstack.org/#/c/233761/ and https://review.openstack.org/#/c/233763/ going, that would be great, as we're currently blocked in keystone; they are breaking the gate :( | 05:48 |
openstackgerrit | Steve Martinelli proposed openstack/releases: keystonemiddleware 2.4.0 https://review.openstack.org/233763 | 05:56 |
openstackgerrit | Steve Martinelli proposed openstack/releases: keystoneclient 1.8.0 https://review.openstack.org/233761 | 05:59 |
*** stevemar_ has quit IRC | 06:03 | |
*** dims__ has joined #openstack-relmgr-office | 06:03 | |
*** dims__ has quit IRC | 06:09 | |
*** openstack has joined #openstack-relmgr-office | 06:29 | |
*** dims__ has joined #openstack-relmgr-office | 07:05 | |
*** dims__ has quit IRC | 07:10 | |
*** armax has quit IRC | 07:15 | |
*** Travis__ has joined #openstack-relmgr-office | 07:21 | |
*** jhesketh has quit IRC | 07:22 | |
*** jhesketh has joined #openstack-relmgr-office | 07:23 | |
*** Travis__ has left #openstack-relmgr-office | 07:32 | |
*** dims__ has joined #openstack-relmgr-office | 08:06 | |
*** dims__ has quit IRC | 08:11 | |
*** openstack has joined #openstack-relmgr-office | 08:48 | |
johnthetubaguy | ttx: not seen anything else major pop up at least | 08:49 |
*** flaper87 has quit IRC | 08:51 | |
*** flaper87 has joined #openstack-relmgr-office | 08:51 | |
ttx | johnthetubaguy: no, glance was being respun on another issue, but gate being busy is killing the respin | 08:51 |
ttx | to the point where I'm wondering if we should not abandon the idea of respinning | 08:51 |
johnthetubaguy | oh, gotcha | 08:51 |
ttx | doing those oslo releases wasn't the best idea after all | 08:53 |
johnthetubaguy | ttx: it's always good to have something for the post mortem | 08:54 |
*** dims__ has joined #openstack-relmgr-office | 09:08 | |
*** dims__ has quit IRC | 09:13 | |
*** openstackstatus has joined #openstack-relmgr-office | 09:38 | |
*** ChanServ sets mode: +v openstackstatus | 09:38 | |
-openstackstatus- NOTICE: gerrit is undergoing an emergency restart to investigate load issues | 09:43 | |
*** ChanServ changes topic to "gerrit is undergoing an emergency restart to investigate load issues" | 09:43 | |
SergeyLukjanov | ttx, re sahara - everything is still okay :) | 09:45 |
SergeyLukjanov | no need for RC | 09:45 |
SergeyLukjanov | (I mean no need for RC2) | 09:45 |
ttx | SergeyLukjanov: heh, cool | 09:46 |
*** dims__ has joined #openstack-relmgr-office | 09:49 | |
ttx | woo, faster. | 09:54 |
*** openstackgerrit has quit IRC | 10:01 | |
*** openstackgerrit has joined #openstack-relmgr-office | 10:02 | |
*** dims__ is now known as dims | 10:18 | |
ttx | Notes: considering a RC3 for Nova over https://bugs.launchpad.net/cinder/+bug/1505153 | 10:31 |
openstack | Launchpad bug 1505153 in Manila "gates broken by WebOb 1.5 release" [Critical,In progress] - Assigned to Valeriy Ponomaryov (vponomaryov) | 10:31 |
ttx | the trick is that would trigger a respin for Manila and Cinder at least, if we follow the same rationale there | 10:31 |
sdague | do these projects have their fixes up? It's the same bug from nova that they've inherited | 10:32 |
ttx | sdague: checking | 10:32 |
ttx | manila master fix is not merged yet | 10:32 |
sdague | cinder is merged | 10:33 |
sdague | in master | 10:33 |
sdague | https://review.openstack.org/#/c/233528/ | 10:33 |
ttx | cinder backport proposed | 10:33 |
ttx | tests pass, blocked by my -2 | 10:33 |
ttx | suspecting the tests would not pass anymore due to bug 1503501 though | 10:34 |
openstack | bug 1503501 in neutron "oslo.db no longer requires testresources and testscenarios packages" [High,Fix committed] https://launchpad.net/bugs/1503501 - Assigned to Davanum Srinivas (DIMS) (dims-v) | 10:34 |
* ttx rechecks to see | 10:34 | |
ttx | dims: can you propose a backport for https://review.openstack.org/#/c/231789/ to stable/liberty so that we can get check results (and rebase other things on top of it if needed) ? | 10:36 |
dims | ttx: ack | 10:37 |
ttx | dims: thx | 10:37 |
* ttx breaks for lunch | 10:37 | |
*** ChanServ changes topic to "OpenStack Release Managers office - Where weekly 1:1 sync points between release manager and PTLs happen - Logged at http://eavesdrop.openstack.org/irclogs/%23openstack-relmgr-office/" | 11:18 | |
-openstackstatus- NOTICE: Gerrit has been restarted and is responding to normal load again. | 11:18 | |
*** doug-fish has joined #openstack-relmgr-office | 11:22 | |
*** gordc has joined #openstack-relmgr-office | 11:36 | |
*** dims has quit IRC | 11:47 | |
*** dims has joined #openstack-relmgr-office | 11:48 | |
Kiall | ttx: re https://bugs.launchpad.net/designate/+bug/1505295 - I'm seeing liberty-rc-potential on this, should we be considering a new RC for it? or just a backport? | 12:40 |
openstack | Launchpad bug 1505295 in openstack-ansible "Tox tests failing with AttributeError" [High,In progress] - Assigned to Jesse Pretorius (jesse-pretorius) | 12:41 |
ttx | Kiall: looking | 12:42 |
ttx | Kiall: i think we can fix that post-release. I already have a few RC respins in the pipe and they are struggling a lot with the busy gate | 12:43 |
Kiall | Cool, Sounds good to me. Good luck :) | 12:43 |
ttx | Kiall: i'll let you know if we need to respin over this | 12:44 |
Kiall | Yea, it seems to only affect tests.. But.. we'll see as the other projects realize all their gates are failing ;) | 12:44 |
Kiall | (At least, that's the impression I get from the bug :P) | 12:45 |
ttx | Kiall: we have 2.6.1 | 12:45 |
ttx | that should gradually unfail as a result | 12:45 |
Kiall | Ah, cool.. | 12:46 |
Kiall | I saw it wasn't tagged as in-progress for o.m so assumed it hadn't been looked at there | 12:46 |
ttx | I'll add a note to the bug | 12:47 |
openstackgerrit | Merged openstack/releases: Ignore backup files. https://review.openstack.org/232207 | 12:58 |
openstackgerrit | Davanum Srinivas (dims) proposed openstack/releases: oslo.log bug fix release https://review.openstack.org/234202 | 13:18 |
*** stevemar_ has joined #openstack-relmgr-office | 13:20 | |
*** spzala has joined #openstack-relmgr-office | 13:25 | |
*** mestery has joined #openstack-relmgr-office | 13:38 | |
ttx | smcginnis: ping | 13:51 |
smcginnis | ttx: Hey | 13:59 |
*** sigmavirus24_awa is now known as sigmavirus24 | 14:01 | |
stevemar_ | dhellmann: dims ttx can we get https://review.openstack.org/#/c/233763/ and https://review.openstack.org/#/c/233761/ cut? they are currently blocking keystone gate for passing jenkins (we're pulling in clients into our tests that are pulling in requests 2.8.0) | 14:02 |
ttx | smcginnis: see -infra | 14:02 |
dims | stevemar_: ack. am on it | 14:02 |
smcginnis | ttx: ;) | 14:02 |
stevemar_ | dims: ty! | 14:04 |
*** AJaeger has joined #openstack-relmgr-office | 14:17 | |
AJaeger | release manager, we need a final release of the openstackdocstheme for Liberty so that we include links to it: https://review.openstack.org/233511 . Could this be done today, please? | 14:18 |
dims | stevemar_: check on pypi please | 14:21 |
stevemar_ | dims: thanks, it was totally blocking keystone development :) | 14:22 |
openstackgerrit | Merged openstack/releases: keystonemiddleware 2.4.0 https://review.openstack.org/233763 | 14:22 |
openstackgerrit | Merged openstack/releases: keystoneclient 1.8.0 https://review.openstack.org/233761 | 14:22 |
*** bdemers has quit IRC | 14:23 | |
dims | very welcome stevemar_ | 14:23 |
*** bdemers has joined #openstack-relmgr-office | 14:24 | |
stevemar_ | dims: oh shoot, i think i have to bump g-r | 14:34 |
AJaeger | dims: Could you release openstackdocstheme as well, please? | 14:38 |
*** bdemers has quit IRC | 14:38 | |
stevemar_ | dims: i'm not sure if you have power here: https://review.openstack.org/234255 | 14:39 |
dims | stevemar_: +2 | 14:39 |
dims | AJaeger: looking | 14:40 |
AJaeger | thanks, dims | 14:40 |
stevemar_ | dims: what about https://github.com/openstack/requirements#for-upper-constraintstxt-changes | 14:40 |
*** bdemers has joined #openstack-relmgr-office | 14:40 | |
dims | stevemar_: let's wait for the CI job; it will tell us | 14:41 |
stevemar_ | dims: alrighty | 14:41 |
dims | AJaeger: liberty release? | 14:44 |
AJaeger | dims: yes! Our theme contains links to releases - and we couldn't release earlier... | 14:44 |
AJaeger | see the left bar at http://docs.openstack.org/networking-guide/ - points to kilo | 14:45 |
AJaeger | Now is the time to point it to Liberty... | 14:45 |
dims | ack. | 14:45 |
AJaeger | and I prefer to have it out today with a bit of slack instead of pushing it through on Thursday and not being ready in time... | 14:46 |
AJaeger | so, whenever you release today is fine ;) we don't need it *now*. But I prefer today, please | 14:46 |
dims | AJaeger: working it now | 14:47 |
AJaeger | thanks, dims! | 14:47 |
AJaeger | dims, I have to leave now, will read backscroll later if there's anything for me... | 14:49 |
*** armax has joined #openstack-relmgr-office | 14:49 | |
*** AJaeger has quit IRC | 14:49 | |
dims | AJaeger: ttyl good night | 14:49 |
armax | ttx: ping | 14:58 |
*** mriedem has joined #openstack-relmgr-office | 14:59 | |
ttx | armax: o/ | 15:01 |
armax | ttx: I saw you went for an RC3 for Glance | 15:02 |
ttx | armax: yes, and probably nova and cinder as well if the gate behaves | 15:02 |
ttx | armax: anything on your map ? | 15:02 |
armax | ttx: at this point I wonder if we should have one too; we had 4 issues that are affecting liberty unit tests | 15:03 |
ttx | armax: do you have a list of them ? So far we've been respinning on the Webob one, not on the oslo ones (which should be fixed by later versions anyway, so not permanently broken) | 15:04 |
armax | ttx: https://review.openstack.org/#/c/232270/ | 15:04 |
armax | ttx: ok, they are the oslo ones | 15:04 |
ttx | armax: the trick being, the gate is very busy so it's very likely we open a door we don't know how to close | 15:04 |
armax | ttx: fair enough | 15:04 |
ttx | armax: but yes, the stable/liberty branch will need some care to get fixed | 15:05 |
openstackgerrit | Merged openstack/releases: Liberty release - openstackdocstheme 1.2.4 https://review.openstack.org/233511 | 15:06 |
ttx | armax: we can push them as -2 on the stable/liberty branch and see in what order they need to be fixed to pass | 15:06 |
ttx | and maybe we can make a quick yes/no decision based on that | 15:07 |
armax | ttx: let me check with ihar | 15:07 |
ttx | armax: also check the neutron-*aas | 15:08 |
ttx | armax: you may need to fix https://bugs.launchpad.net/nova/+bug/1503501 at the same time | 15:09 |
openstack | Launchpad bug 1503501 in neutron "oslo.db no longer requires testresources and testscenarios packages" [High,Fix committed] - Assigned to Davanum Srinivas (DIMS) (dims-v) | 15:09 |
armax | ttx: we have that in master | 15:09 |
armax | ttx: and it’s been cherry picked | 15:10 |
ttx | armax: hmm, might need to get squashed as a single fix to get in, or the tests will block each other | 15:10 |
ttx | if https://review.openstack.org/#/c/232270/ really blocks tests | 15:11 |
armax | ttx: yes, they are all squashed here: https://review.openstack.org/#/c/232270/ | 15:11 |
ttx | oh, ok | 15:11 |
armax | ttx: at this point, I think the issues are only UT's | 15:11 |
ttx | so we could do a neutron RC3 with just 232270 in ? | 15:12 |
ttx | that would be reasonable (and ready) enough | 15:12 |
ttx | how about the neutron-*aas, do they need patches in as well ? | 15:12 |
armax | ttx: in theory yes, I actually need to squash another fix in it to make sure that the UT work with oslo.service >0.7 | 15:13 |
ttx | ah, hm. That adds a minimum of 5 hours to the mix before it can clear the check queue then | 15:13 |
ttx | that makes it a lot more dangerous | 15:13 |
ttx | armax: ok, update it and when the tests pass we'll make the call | 15:14 |
armax | ttx: agreed. | 15:14 |
armax | ttx: at this point we can make an opportunistic decision | 15:14 |
armax | ttx: but if we don’t manage to, it’s not the end of the world | 15:14 |
ttx | right | 15:14 |
ttx | that can land post-release ok I think | 15:15 |
dhellmann | ttx, dims : belated good morning (sorry, I forgot I had to go to the dentist this morning) | 15:15 |
armax | ttx: indeed | 15:15 |
ttx | dhellmann: hell broke loose with the oslo releases | 15:15 |
dims | dhellmann: hey, hope you are not groggy :) | 15:15 |
ttx | + the WebOb thing | 15:15 |
dhellmann | dims: no, just a cleaning | 15:16 |
dhellmann | ttx: oh? | 15:16 |
dims | dhellmann: the oslo.db testresources thing | 15:16 |
ttx | so in retrospect it was a pretty bad idea to do oslo releases this week | 15:16 |
dhellmann | :-/ | 15:16 |
ttx | dims: and the oslo.messaging and the oslo.versionedobjects | 15:16 |
ttx | and the oslo.log | 15:16 |
dhellmann | is there an email thread or something I can catch up on? | 15:16 |
ttx | dhellmann: no, I'm in firefighting mode, the gate is overloaded and I can't get anything I need in | 15:17 |
dhellmann | ok | 15:17 |
dhellmann | how can I help? | 15:17 |
ttx | a dozen of projects want a respin | 15:17 |
dhellmann | ttx: I'll add a lib release freeze period to the schedule in the project guide | 15:17 |
dims | dhellmann: oslo.db had the testresources issue (bumping to version 3.0.0 did not help as we had not capped stable/liberty). oslo.messaging had a fixture that failed because zmq was missing (zmq is optional). o.vo: master was ok, stable/liberty had issues. oslo.log broke hyperv. | 15:20 |
dims | dhellmann: oslo.db - patches are in progress to add testresources to stable/liberty, master of all projects are ok | 15:20 |
dims | dhellmann: oslo.messaging - 2.6.1 unblocked everything | 15:20 |
ttx | dhellmann: it's admittedly not a real solution. We are just deferring the problem | 15:20 |
dims | dhellmann: o.vo - i think sdague added a exclude in stable/liberty | 15:21 |
ttx | it's the other face of the "don't cap" coin | 15:21 |
dhellmann | ttx: yeah, but if we move it out of the critical period at least it doesn't block the release | 15:21 |
dims | dhellmann: oslo.log - new point release unblocked them | 15:21 |
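The oslo.messaging failure dims summarizes above came from a test fixture that hard-imported the optional zmq backend. One defensive pattern (a minimal sketch; the helper name is hypothetical, not from oslo.messaging) is to probe for the optional dependency and skip rather than crash:

```python
import importlib.util

def optional_backend_available(name):
    """Return True if the optional dependency can be imported here."""
    return importlib.util.find_spec(name) is not None

# A fixture can then skip its tests cleanly when the extra was never
# installed, instead of breaking every consumer of the base package.
if not optional_backend_available("zmq"):
    print("zmq transport not installed; skipping zmq fixture tests")
```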
dhellmann | dims: did we get that oslosphinx release out, too? | 15:22 |
dims | dhellmann: yes, done | 15:22 |
ttx | dhellmann: well yeah. I tried to sell the argument that it shouldn't trigger respins since we would have had the same problem next week | 15:22 |
dims | right, if we had released next week we would have broken installs for whoever was trying things out | 15:23 |
sdague | also, for things we aren't going to cap, we can test | 15:23 |
sdague | dims: I did not exclude, we're backporting a fix | 15:23 |
dims | sdague: sorry | 15:23 |
dims | mis-remembered it | 15:23 |
sdague | we have the ability to add the compat jobs | 15:23 |
ttx | dhellmann: At this stage I'm leaning towards respinning the things affected by the webob issue | 15:23 |
ttx | but not the things where the branch is borked due to oslo things | 15:23 |
sdague | that's the big hole we have | 15:24 |
sdague | because that was a safety net we used to enforce | 15:24 |
ttx | in the webob case it's apparently the consuming project fault | 15:24 |
dhellmann | sdague: compat jobs? | 15:24 |
dhellmann | ttx: that approach makes sense | 15:25 |
sdague | where we tested the library against stable/foo of the rest of the stack | 15:25 |
dhellmann | sdague: I think that's the thing lifeless wants and that I've been objecting to | 15:25 |
dims | dhellmann: one note here, if nova or cinder tests break, compat jobs won't catch them | 15:26 |
ttx | dhellmann: so we are looking at projects affected by https://bugs.launchpad.net/cinder/+bug/1505153 | 15:26 |
openstack | Launchpad bug 1505153 in openstack-ansible "gates broken by WebOb 1.5 release" [High,In progress] - Assigned to Jesse Pretorius (jesse-pretorius) | 15:26 |
sdague | dhellmann: that's what we used to do, because otherwise you have to cap everything all the time | 15:26 |
mriedem | is it too late for me to ask why we can't just cap oslo.db<3.0 in g-r on stable/liberty? | 15:26 |
ttx | mriedem: because we said we wouldn't cap for the last 6 months | 15:26 |
sdague | dims: yes, that's fine, however the compat job would have caught this | 15:26 |
mriedem | but upper-constraints doesn't apply to unit test jobs | 15:26 |
dhellmann | sdague: my impression of his approach is that we would not ever be able to advance anything in the libraries, and development would become so difficult that everyone would stop trying to do it | 15:26 |
ttx | and two days before release sounds like the wrong moment to reverse 6 months' worth of thinking | 15:27 |
dhellmann | sdague: we're in this state where our requirements say we're compatible, but we're not actually trying to be compatible | 15:27 |
sdague | dhellmann: some libraries are | 15:27 |
mriedem | i personally think the upper-constraints strategy is flawed, but it's hard to get into that right now | 15:27 |
ttx | mriedem: last cycle we did that and the white walkers attacked us | 15:27 |
sdague | o.vo explicitly wants to be | 15:27 |
sdague | for instance | 15:27 |
mriedem | ttx: last cycle we didn't have the centralized releases repo either | 15:27 |
dhellmann | sdague: that's fine, but lifeless was trying for a blanket policy | 15:27 |
*** mestery has quit IRC | 15:28 | |
dhellmann | I don't have a problem if a team wants to take on that challenge, but I don't want everyone forced to do it | 15:28 |
sdague | ok, so realistically every library must either be capped in gr or have a compat job | 15:28 |
ttx | mriedem: personally I would rather try to make sure that new oslo releases work with their consuming projects /before/ they are released | 15:28 |
dhellmann | that makes sense to me | 15:28 |
sdague | honestly, I'm fine with it being a library by library decision | 15:28 |
mriedem | in my experience, we've had capping issues in the past because (1) we capped with <= so we left 0 wiggle room for patch releases and (2) projects released on their own and didn't follow semver - both of which should be resolved with a centralized release team using the releases repo | 15:28 |
sdague | but right now we've got a ton which have neither | 15:28 |
dhellmann | sdague: agreed | 15:28 |
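mriedem's first point above, that inclusive `<=` caps leave zero wiggle room for patch releases, can be illustrated with a toy version check (integer-tuple comparison for brevity; real pip follows PEP 440):

```python
def parse(version):
    """Parse 'X.Y.Z' into a comparable integer tuple."""
    return tuple(int(part) for part in version.split("."))

def allowed(version, lower, upper, upper_inclusive=False):
    """Check a version against a floor and a cap."""
    v, lo, hi = parse(version), parse(lower), parse(upper)
    return lo <= v and (v <= hi if upper_inclusive else v < hi)

# A "<=2.6.0" style cap shuts out the 2.6.1 bugfix release discussed
# earlier in the log, while a "<2.7.0" cap lets it through:
print(allowed("2.6.1", "2.5.0", "2.6.0", upper_inclusive=True))  # False
print(allowed("2.6.1", "2.5.0", "2.7.0"))                        # True
```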
dansmith | ttx: we had that for o.vo and master, but missed liberty, FWIW, but I agree we should do muuch better for oslo libs in that area | 15:29 |
sdague | mriedem: the problem ends up being propagation of the caps into libraries, which makes the N-cycle uncap craziness | 15:29 |
mriedem | maybe we shouldn't sync caps to libraries | 15:30 |
sdague | mriedem: and then you run into the pip resolver question | 15:30 |
mriedem | yeah | 15:31 |
sdague | because if they aren't there, you can get stuff that you don't expect | 15:31 |
dansmith | maybe we should rewrite in go before friday? | 15:31 |
mriedem | b/c the lib has uncapped deps and the server has capped deps | 15:31 |
* dansmith runs | 15:31 | |
sdague | and libraries depend on other libraries | 15:31 |
dims | lol | 15:31 |
sdague | which is what constraints solved, narrowly | 15:31 |
ttx | dansmith: release is tomorrow fwiw, not friday | 15:32 |
sdague | it only solves it for us, the ansible fails here demonstrate how it fails in the field | 15:32 |
dansmith | ttx: oh, then I guess my plan won't work because of *that* :P | 15:32 |
sdague | anyway | 15:32 |
mriedem | i guess i'm lost on how upper-constraints is a solution when it doesn't apply to the requirements files in a repo used during unit test jobs | 15:33 |
mriedem | which is why we're breaking this week | 15:33 |
mriedem | and i don't like having to tell packagers that they have to not only be aware of the requirements in the repo but the global upper-constraints in another repo | 15:33 |
dhellmann | mriedem: the constraints stuff just isn't done | 15:33 |
dhellmann | (finished) | 15:33 |
mriedem | but is the goal with constraints to just change the unit test jobs in our CI system to use those also, rather than just pip install requirements.txt test-requirements.txt? | 15:34 |
sdague | mriedem: yeh, well, upper constraints is a published list of "this works" | 15:34 |
dansmith | so, another direction change aside, | 15:34 |
dansmith | are we good on what we need to do in the short term? | 15:34 |
dansmith | to avoid ttx's head from exploding | 15:34 |
sdague | mriedem: yes, it's coming to unit tests | 15:34 |
sdague | it's not there yet | 15:35 |
ttx | dhellmann: apparently that wouldn't have prevented breaking the release, if we ship a requirements file that's actually only not broken if you take upper-constraints into account | 15:35 |
sdague | mriedem: https://github.com/openstack/nova/blob/e6d3b8592cab5dcf35069bffd39fb3d558c16ea1/tox.ini#L13 | 15:35 |
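The nova tox.ini line sdague links is where constraints get wired into the install step. Roughly, the pattern looks like this (a sketch only: the exact constraints location and variable name varied over time and are illustrative here, not copied from nova):

```ini
# tox.ini sketch: tell pip to honor the published "known good" list
# when building the venv, so transitive deps cannot float past it.
[testenv]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:upper-constraints.txt} {opts} {packages}
```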
ttx | at least that's the rationale under which we are considering a Nova rc3 | 15:35 |
mriedem | sdague: yeah that grosses me out | 15:35 |
sdague | mriedem: you have a better idea? :) | 15:36 |
sdague | ttx: in the o.vo case, I think we would have caught this with a liberty job | 15:36 |
dhellmann | ttx: yeah. maybe we should go back to capping requirements in servers, though there was some discussion of why that is not sufficient in the scrollback | 15:36 |
ttx | dansmith: if the omnibus fix from sdague passes tests, we'll likely respin a rc3 over it | 15:36 |
sdague | because libs_from_git punches through requirements | 15:36 |
sdague | and it would have exposed that incompatibility | 15:36 |
ttx | good day to have a busy gate | 15:36 |
dansmith | ttx: okay, I need to check the logs even if it passes before you pull that trigger | 15:36 |
sdague | dansmith: we've got all the logs for the dsvm jobs | 15:37 |
ttx | dansmith: mmmkay, now I need to remember to ping you | 15:37 |
dhellmann | ttx: there are ways to kick unimportant changes out of the gate | 15:37 |
ttx | dhellmann: ah? | 15:37 |
dhellmann | ttx: edit the commit message in gerrit | 15:37 |
sdague | dansmith: so you can look at them now | 15:37 |
ttx | ways I haven't discovered in the last 5 years of making openstack releases then | 15:37 |
dansmith | sdague: I don't see the partial job results on the omnibus patch | 15:37 |
ttx | dhellmann: it's the check queue that is overloaded | 15:37 |
sdague | dhellmann: http://logs.openstack.org/66/234166/1/check/gate-grenade-dsvm-partial-ncpu/9522191//logs/ | 15:37 |
ttx | dhellmann: not the gate queue | 15:37 |
dhellmann | ttx: oh, well, there's not much you can do about that | 15:38 |
ttx | right | 15:38 |
dhellmann | sdague: what am I looking for? | 15:38 |
sdague | dhellmann: right, we've just got so much in check, though failing gate queue patches keep resetting and sucking up nodes | 15:38 |
sdague | dhellmann: sorry, that was meant for dansmith | 15:38 |
dhellmann | ah | 15:38 |
sdague | tab complete fail | 15:38 |
dansmith | sdague: ttx: logs look fine +1 | 15:38 |
dhellmann | ttx: I just made the problem worse: https://review.openstack.org/234293 | 15:38 |
ttx | we did send "stop the presses, only push fixes that matter" heads-up in the past. But it seems to have only resulted in people retrying their patches to make sure those would pass | 15:39 |
dhellmann | ttx: maybe we need an "emergency mode" switch for gerrit | 15:39 |
ttx | dhellmann: yeah , I floated the idea in the past without a lot of success | 15:40 |
sdague | mriedem: https://jenkins03.openstack.org/job/gate-cinder-python27/1782/console | 15:40 |
sdague | cinder's use of 400 instead of 500 is apparently an issue? | 15:41 |
sdague | I thought that passed at some point | 15:41 |
mriedem | sdague: did they change it in the last 12 hours? | 15:41 |
ttx | dansmith: so we are all clear from your side ? | 15:41 |
sdague | mriedem: https://review.openstack.org/#/c/233923/ | 15:42 |
dansmith | ttx: yeah | 15:42 |
mriedem | sdague: well, he has to fix the unit test first | 15:42 |
ttx | dhellmann: in other news, I intercepted the syncs from the requests blacklist this morning | 15:43 |
dhellmann | ttx: that's one bit of good news | 15:43 |
sdague | mriedem: how did it pass unit tests in check? | 15:43 |
mriedem | idk but it's wrong | 15:44 |
mriedem | https://review.openstack.org/#/c/231438/5/cinder/tests/unit/test_exception.py | 15:44 |
mriedem | test_default_args checks for 400 | 15:44 |
sdague | mriedem: ok, so let's fix that | 15:44 |
sdague | that keeps resetting the gate | 15:44 |
sdague | I'll get it | 15:44 |
sdague | ok, this is a legit rebase error by gerrit I think | 15:46 |
sdague | manually rebasing | 15:47 |
sdague | ok, so here's what I'm seeing | 15:50 |
sdague | ceilometer - unit tests break on a timeout sometimes (no idea why) | 15:50 |
sdague | keystone - unit tests blow up on dep conflict with webob 1.5 - https://jenkins07.openstack.org/job/gate-keystone-python27/3849/consoleFull | 15:51 |
sdague | sahara - unit tests blow up on oslo.db issue | 15:51 |
sdague | in the current gate queue | 15:51 |
sdague | the sahara job is master | 15:52 |
openstackgerrit | Doug Hellmann proposed openstack/releases: Revert "Fix docs build by blocking bad oslosphinx version" https://review.openstack.org/234307 | 15:52 |
mriedem | hmm, keystonemiddleware problems with webob deps | 15:52 |
sdague | right, because we capped it and it didn't? | 15:53 |
dhellmann | so keystone needs the webob cap, it looks like | 15:53 |
dhellmann | yeah | 15:53 |
sdague | that's the whole caps bite you problem | 15:53 |
sdague | because you have to have the entire world synchronized | 15:53 |
dhellmann | well I wonder why the cap wasn't enforced when the test env was built | 15:54 |
sdague | I blame pip :) | 15:54 |
sdague | https://review.openstack.org/#/c/233923/ - mriedem that's the cinder thing | 15:54 |
dhellmann | yeah | 15:54 |
dhellmann | so what's the deal with webob 1.5? do they have a broken release, or are we not handling some aspect of a change in behavior? | 15:55 |
mriedem | the latter | 15:56 |
dhellmann | k | 15:56 |
mriedem | looks like keystone hasn't merged a global reqs update change | 15:56 |
*** mestery has joined #openstack-relmgr-office | 15:56 | |
mriedem | btw, the webob cap was temporary, goes away here https://review.openstack.org/#/c/233857/ | 15:56 |
dhellmann | mriedem: did we get a keystone client released with the cap, though? | 15:57 |
dhellmann | or middleware, I guess | 15:57 |
*** mestery has quit IRC | 15:57 | |
mriedem | this is the keystone change https://review.openstack.org/#/c/233820/ | 15:57 |
mriedem | keystonemiddleware has the cap and a release at 2.4.0 | 15:58 |
mriedem | which must have all happened in the last 12 hours or so | 15:58 |
mriedem | https://github.com/openstack/keystonemiddleware/commit/2074ca89b9b19c953c3fbde8915bba30a6e1e0bb | 15:58 |
sdague | oh, so that was the badness, because it wasn't supposed to get out there | 15:58 |
sdague | grrr | 15:58 |
dims | stevemar_: ^^ | 15:58 |
sdague | the webob 1.5 fix is a one line patch | 15:58 |
sdague | which is why we're just fixing it | 15:59 |
mriedem | the keystonemiddleware release was for something else https://review.openstack.org/#/c/233763/ | 15:59 |
mriedem | requests, but it inadvertently pulled in the webob thing | 15:59 |
dhellmann | mriedem: right, so we need to undo that to fix keystone I think | 15:59 |
sdague | dhellmann: here is what that patch looks like, fyi - https://review.openstack.org/#/c/233845/ | 15:59 |
dhellmann | sdague: ack, I saw the other one in cinder, too, lgtm | 16:00 |
ttx | sdague: we have a +1 on the omnibus fix | 16:00 |
*** mestery has joined #openstack-relmgr-office | 16:00 | |
sdague | ttx: I say do it | 16:01 |
ttx | sdague: so i can open a RC3 with the two bugs referenced and approve it to stable/liberty | 16:01 |
sdague | probably get johnthetubaguy to ack it | 16:01 |
ttx | let me prepare that | 16:01 |
mriedem | the keystone g-r sync is failing unit tests, but if we land the revert of the webob cap in g-r then the keystone sync is void | 16:01 |
mriedem | and would require a keystonemiddleware 2.4.1 release | 16:01 |
* johnthetubaguy ready to comment on those when needed | 16:01 | |
dhellmann | mriedem: yep, that sounds like the plan. shall we wait until the nova situation is resolved before doing that so we're only making one change at a time? it sounds like sdague, johnthetubaguy, and ttx are close to done | 16:02 |
mriedem | seems we can still push forward with https://review.openstack.org/#/c/233857/ in parallel | 16:03 |
mriedem | since that's only master | 16:03 |
sdague | mriedem: yeh, it can go in parallel | 16:03 |
ttx | johnthetubaguy: https://launchpad.net/nova/+milestone/liberty-rc3 | 16:04 |
mriedem | btw the merge conflict there is false | 16:04 |
ttx | about to approve https://review.openstack.org/#/c/234166/ for gate | 16:04 |
ttx | wouldn't mind your +2 on that one | 16:04 |
ttx | johnthetubaguy: ^ | 16:04 |
sdague | mriedem: no, it's real | 16:04 |
sdague | because you have an abandoned Depends-On | 16:05 |
sdague | that patch is stuck with the depends on | 16:05 |
mriedem | gah | 16:05 |
mriedem | ok | 16:05 |
mriedem | updating | 16:06 |
sdague | I actually sent a big long email about it this morning to the list to explain that | 16:06 |
sdague | because I realized it's not always obvious | 16:06 |
dhellmann | yeah, the error message doesn't include a reference to what is actually blocking things | 16:06 |
mriedem | i didn't realize depends-on ignored abandoned changes either | 16:07 |
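The behavior sdague explains above comes from the Depends-On footer in the Git commit message: the CI system resolves it against the referenced change, and an abandoned target leaves the patch stuck showing a merge conflict. A sketch of such a footer (both Change-Ids below are made up for illustration):

```
Fix response code handling for WebOb 1.5

WebOb 1.5 changed the default error status, breaking our unit tests.

Depends-On: I0123456789abcdef0123456789abcdef01234567
Change-Id: Ifedcba9876543210fedcba9876543210fedcba98
```

Restoring or re-proposing the abandoned dependency, or dropping the footer and re-uploading, clears the conflict.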
dhellmann | mriedem: +2a | 16:07 |
ttx | you get a rc3. And YOU get a RC3 | 16:07 |
mriedem | thanks oprah | 16:07 |
* dims checks under his chair :) | 16:08 | |
*** AJaeger has joined #openstack-relmgr-office | 16:08 | |
ttx | dhellmann: OK, so things need time in the zuul washing machine now. We have glance and nova RC3 windows opened with fixes in the pipe. We have tentative RC3s for Cinder and Manila over basically the same fix, pending the check results | 16:10 |
ttx | if the check result returns positive in the coming hours we'll likely respin them as well | 16:10 |
AJaeger | dims: thanks for the openstackdocstheme release. | 16:10 |
dhellmann | ttx: ack. I'll work on the keystonemiddleware release to unblock keystone | 16:11 |
AJaeger | dims, dhellmann: Could you approve the constraints changes: https://review.openstack.org/233512 and https://review.openstack.org/233514 , please? | 16:11 |
ttx | dhellmann: hopefully we can hold the RC respin line there. | 16:11 |
dhellmann | ttx: yep, just the lib | 16:11 |
dhellmann | AJaeger: we're fighting a bunch of gate issues right now, I don't think we want to start changing other requirements | 16:11 |
AJaeger | dhellmann: ok, understood. | 16:13 |
*** mestery has quit IRC | 16:15 | |
ttx | mriedem: any chance you can chase down e0ne or smcginnis to get https://review.openstack.org/#/c/233668/ updated to your liking ? The cinder rc3 hinges on a fast check result for this change | 16:18 |
ttx | we have a 6-hour tunnel on the check gate at this stage | 16:18 |
mriedem | ttx: yeah, can do | 16:19 |
ttx | mriedem: thx | 16:19 |
* ttx goes for dinner break | 16:20 | |
cp16net | dhellmann: i'd like to make a new release of the python-troveclient with a minor update from 1.3.0 to 1.3.1 could you help with this or busy atm? | 16:27 |
dhellmann | cp16net: we're fighting several requirements-related fires right now, so we're not doing releases today | 16:28 |
*** mestery has joined #openstack-relmgr-office | 16:30 | |
mriedem | cinder stable/liberty squash is updated https://review.openstack.org/#/c/234329/ | 16:32 |
cp16net | dhellmann: ok thanks i'll ping you again about that in a day or so | 16:33 |
*** mestery has quit IRC | 16:33 | |
*** mestery has joined #openstack-relmgr-office | 16:37 | |
sdague | mriedem: is there no test fix needed for - https://review.openstack.org/#/c/234329/1 ? | 16:38 |
mriedem | sdague: no, the test_default_args or whatever it was that failed on master was an independent change | 16:39 |
dhellmann | cp16net: sounds good | 16:39 |
sdague | mriedem: ok, cool | 16:39 |
mriedem | https://review.openstack.org/#/c/231438/ which i don't want to backport | 16:39 |
mriedem | since it plays with webob internals | 16:39 |
sdague | mriedem: yep, that sounds fine | 16:39 |
dhellmann | AJaeger: we should spend some time thinking about whether the doc theme actually needs to be constrained or not -- it's not used by dsvm jobs, or most of the regular unit test jobs | 16:41 |
dhellmann | sdague, mriedem, dims : are we waiting on jobs at this point or is there something for me to poke? | 16:42 |
jgriffith | sdague: yeah, so that test isn't there in liberty so he won't hit the merge issue that I did | 16:42 |
AJaeger | dhellmann: indeed. But that's today's situation - don't we want to change the framework? | 16:42 |
sdague | the cinder change is new and waiting on nodes. Unless we wanted to jump the queue and get some stuff promoted for the release | 16:42 |
dims | dhellmann: sileht/cdent and i are working the oslo.messaging problem | 16:42 |
AJaeger | dhellmann: Or should we add a comment to constraints file and say that the entry is ignored? | 16:42 |
dhellmann | AJaeger; that package may be a candidate for the set of unconstrained things in blacklist.txt | 16:42 |
sdague | dims: does that hit liberty, or just master? | 16:43 |
AJaeger | dhellmann: let me check that... | 16:43 |
sdague | I think that's got to be the game plan right now | 16:43 |
sdague | a) what's impacting liberty, how fast can we address it | 16:43 |
dims | sdague: no cap means liberty + master are both picking up latest oslo.messaging right? | 16:43 |
sdague | b) what's impacting master making the gate reset, how do we fix those | 16:43 |
sdague | everything else | 16:43 |
openstackgerrit | Merged openstack/releases: oslo.log bug fix release https://review.openstack.org/234202 | 16:43 |
sdague | dims: yep, I just had only seen the fail on master | 16:44 |
dhellmann | sdague: I'm comfortable with us jumping the queue | 16:44 |
dims | sdague: will check with cdent | 16:44 |
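[Editor's note] The "no cap" dims mentions is the absence of an upper bound on the library in the requirements lists: with an uncapped specifier every branch installs the newest release, while a cap holds stable branches back from a broken one. A requirements-file sketch (version numbers illustrative):

```
# uncapped: stable/liberty and master both pick up the latest oslo.messaging
oslo.messaging>=2.1.0

# capped (illustrative bound): stable branches stop before the broken release
oslo.messaging>=2.1.0,<2.6.0
```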
sdague | dhellmann: ok, we should collect all the changes we want to get promoted in one go then | 16:44 |
dhellmann | I'll start an etherpad to track all the things we're doing, sec | 16:44 |
mriedem | there were keystone things impacting master gate i thought, bknudson was looking at those | 16:44 |
dhellmann | https://etherpad.openstack.org/p/liberty-release-gate-race | 16:44 |
sdague | mriedem: right, let's put master fixes in tier 2 | 16:45 |
sdague | and make sure we've got all the liberty things we think we need in flight | 16:45 |
*** mestery has quit IRC | 16:46 | |
sdague | ok, what other liberty things are in flight | 16:47 |
dhellmann | dims: please add what you're working on to https://etherpad.openstack.org/p/liberty-release-gate-race | 16:47 |
*** mestery has joined #openstack-relmgr-office | 16:50 | |
sdague | dhellmann: any idea if there is a manila backport? | 16:53 |
dhellmann | sdague: I'm checking to see now | 16:53 |
dhellmann | actually manila is failing unit tests because of the oslo.db dependency change | 16:53 |
*** mestery has quit IRC | 16:54 | |
sdague | dhellmann: ok, right, then they need the omnibus fix like cinder and nova | 16:54 |
sdague | which is the 2 merged together | 16:54 |
dhellmann | this manila change does have a webob related fix | 16:54 |
dhellmann | https://review.openstack.org/#/c/234288/1 | 16:54 |
dhellmann | bswartz: ^^ | 16:55 |
dhellmann | ugh | 16:55 |
stevemar_ | dhellmann: back from lunch, what's going on with the gate?! :) was releasing ksm/ksc with the new webob req a bad move? | 16:56 |
dhellmann | stevemar_: it made things a little worse, so we'll need to spin new releases without the cap | 16:56 |
dhellmann | stevemar_: see https://etherpad.openstack.org/p/liberty-release-gate-race | 16:56 |
stevemar_ | dhellmann: okay, i can push patches to for version bumps (2.3.1 and 1.8.1) once the proposal bot updates ksc and ksm | 16:59 |
dhellmann | stevemar_: good, thanks | 16:59 |
mriedem | stevemar_: 2.4.1 for ksvm | 16:59 |
mriedem | *ksm | 16:59 |
stevemar_ | mriedem: oops, yeah | 16:59 |
stevemar_ | was going off memory | 17:00 |
sdague | dhellmann: so the only other project that I might imagine has the webob issue would be ironic | 17:00 |
sdague | because this is nova heritage code, and I'm thinking about the projects that would have forked off and gotten it | 17:00 |
sdague | cinder got it from nova, manila from cinder | 17:01 |
dhellmann | sdague: makes sense | 17:01 |
dhellmann | devananda, jroll : is ironic seeing gate issues? | 17:01 |
sdague | I'm going to go do some digging around | 17:01 |
mriedem | when i was looking yesterday at the webob thing it was only nova cinder and manila | 17:02 |
sdague | dhellmann: ironic master at least doesn't look like it subclasses webob for exceptions | 17:02 |
sdague | mriedem: that was via logstash? | 17:02 |
mriedem | yeah | 17:03 |
mriedem | before kibana died | 17:03 |
*** doug-fis_ has joined #openstack-relmgr-office | 17:04 | |
jroll | dhellmann: not that I'm aware of | 17:04 |
*** doug-fis_ is now known as doug-fish_ | 17:05 | |
jroll | dhellmann: as of 20 minutes ago we're good | 17:05 |
dhellmann | jroll: ok, thanks for confirming | 17:06 |
jroll | np | 17:06 |
*** doug-fish has quit IRC | 17:07 | |
sdague | dhellmann: so manila, what's our story there? are we going to wait for them to lead on that, or is someone else going to fix it for them | 17:11 |
sdague | I was waiting on doing the ask to infra until we felt pretty good we had a complete fix list | 17:11 |
mriedem | sdague: i think https://review.openstack.org/#/c/234288/ is complete | 17:11 |
mriedem | it has the oslo.db change in test-requirements.txt | 17:11 |
sdague | oh, but it has ttx's -2 | 17:11 |
mriedem | i think that's just ttx bot | 17:12 |
mriedem | auto -2 | 17:12 |
sdague | hey, ttx, come back from lunch! | 17:12 |
sdague | ok, I guess we should get the cinder and nova ones promoted | 17:12 |
mriedem | dinner, so it'll be like 3 hours before he's back | 17:12 |
mriedem | right? | 17:12 |
jgriffith | LOL | 17:13 |
mriedem | the manila change is in the check queue | 17:13 |
mriedem | i have to go get my quarterly haircut now so be back in an hour | 17:13 |
*** gordc has quit IRC | 17:14 | |
stevemar_ | mriedem: not your luxurious locks! | 17:15 |
sdague | mriedem: and you aren't even coming to summit, slacker | 17:18 |
sdague | dhellmann: fungi is doing promotes now of the cinder and nova fixes | 17:18 |
fungi | yep | 17:18 |
dhellmann | fungi, sdague : thanks | 17:18 |
fungi | they're enqueued and promoted to the front of the gate now | 17:20 |
sdague | fungi: thanks much | 17:20 |
*** mestery has joined #openstack-relmgr-office | 17:20 | |
fungi | any time! | 17:20 |
ttx | sdague: o/ | 17:23 |
ttx | what when which where | 17:23 |
stevemar_ | ttx: who? | 17:23 |
sdague | ttx: you've got a -2 on the manila fix, just curious what our manila story will be | 17:23 |
stevemar_ | :) | 17:23 |
ttx | sdague: I'll +2 it if it passes tests in the next hours | 17:24 |
ttx | i.e. RC3 and all | 17:25 |
ttx | afaict the tests are still running? | 17:25 |
sdague | ttx: yeh | 17:25 |
* ttx needs to put the kids in bed, back in 45min | 17:26 | |
sdague | we got fungi to promote the nova and cinder ones | 17:26 |
sdague | dhellmann / ttx - stable kilo question | 17:26 |
ttx | sdague: well if he can promote that one as well that would be great | 17:26 |
sdague | who is supposed to take care of the 2015.1.2 bump on cinder - https://github.com/openstack/cinder/blob/1ec74b88ac5438c5eba09d64759445574aa97e72/setup.cfg#L3 | 17:26 |
AJaeger | dhellmann: https://review.openstack.org/234357 and https://review.openstack.org/234358 are the move of openstackdocstheme to the blacklist | 17:26 |
ttx | glance nova cinder manila is my current target RC3 list | 17:26 |
sdague | ttx: ok, I'd say remove your -2 so it can be done | 17:26 |
ttx | sdague: done | 17:27 |
*** bknudson has joined #openstack-relmgr-office | 17:27 | |
sdague | ttx: thanks, we'll have to let the current changes run their course, but will keep an eye on a good time for this one | 17:27 |
dhellmann | sdague: not sure what you mean? do we need to raise the version # for cinder in stable/kilo? | 17:28 |
ttx | sdague: cinder-stable-maint folks | 17:28 |
sdague | dhellmann: yes, because it's failing all the tests | 17:28 |
* ttx runs | 17:28 | |
dhellmann | sdague: as ttx said, the stable team for cinder should do that | 17:28 |
jgriffith | that must've been the other item that infra pointed out, that I *thought* was the ec | 17:29 |
jgriffith | cool | 17:29 |
sdague | jgriffith: ok, you on it? | 17:29 |
jgriffith | yeah | 17:29 |
jgriffith | sdague: ttx dhellmann https://review.openstack.org/#/c/234360/ | 17:30 |
sdague | jgriffith: cool, thanks | 17:30 |
* dhellmann can't wait to get everyone onto post-version numbering | 17:31 | |
mtreinish | dhellmann: there should be no reason to do post-version numbering | 17:31 |
mtreinish | all it does is cause these headaches when someone pushes a release and thinks their job is done | 17:31 |
mtreinish | (its not the first time this has happened) | 17:31 |
dhellmann | mtreinish: I'm not sure what that means. | 17:35 |
*** mestery has quit IRC | 17:35 | |
mtreinish | dhellmann: when the stable release person (forgot the real name) pushes a point release of a stable branch there are follow up tasks (like bumping the setup.cfg version) | 17:37 |
mtreinish | but that never seems to happen | 17:37 |
mtreinish | which causes that issue on stable branches | 17:38 |
*** nikhil has quit IRC | 17:39 | |
dhellmann | mtreinish: so you're saying this isn't the only task we forget to do? why does eliminating that task entirely not help to solve that problem? | 17:39 |
sdague | ok, as I think we're a bit in a lull for test results / patch merge, I'm going to pop off and assemble my table saw and mentally regroup | 17:39 |
mtreinish | dhellmann: it does, I'm saying we should remove the version lines | 17:40 |
*** nikhil has joined #openstack-relmgr-office | 17:40 | |
dhellmann | mtreinish: ok, you said "there should be no reason to do post-version numbering" but I think you meant pre-version | 17:40 |
dhellmann | mtreinish: so I think we're saying the same thing | 17:41 |
mtreinish | dhellmann: oh sry, yeah my bad | 17:41 |
*** gordc has joined #openstack-relmgr-office | 17:41 | |
dhellmann | and for now the only reason to keep with pre-versioning is that immature projects need help with the process, we tend to use pre-versioning for the milestones but we could also just restrict tag access | 17:42 |
dhellmann | mtreinish: yeah, fwiw, I'm trying to find all the by-hand things and cut them from the process where possible | 17:42 |
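[Editor's note] The pre-versioning vs. post-versioning distinction above comes down to whether setup.cfg carries an explicit version line for pbr to use. A sketch (section contents illustrative):

```ini
[metadata]
name = cinder
# pre-versioning: the next release number is hard-coded here and must be
# bumped by hand after every tag -- the step that keeps getting missed
version = 2015.1.2

# post-versioning: delete the version line entirely and pbr derives the
# version from the most recent git tag instead, so there is nothing to forget
```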
*** harlowja has joined #openstack-relmgr-office | 17:48 | |
bknudson | so for keystone I think for now we just wait for the webob cap to merge and then we'll put out a new ksc + ksm release with the reqs update. | 17:52 |
dhellmann | bknudson: ack, thanks, I think stevemar_ is on that so we're just waiting | 17:54 |
bknudson | and then we'll figure out how to deal with whatever probs are left over | 17:54 |
* dhellmann nods | 17:56 | |
*** armax has quit IRC | 17:59 | |
*** bswartz has joined #openstack-relmgr-office | 18:05 | |
bswartz | dhellmann: I'm here now -- sorry I used to have this channel on auto-join until freenode started kicking me for autojoining too many channels | 18:06 |
dhellmann | bswartz: np, I know the feeling. We're using https://etherpad.openstack.org/p/liberty-release-gate-race to track what needs to happen to finish the release today | 18:07 |
bknudson | dhellmann: liberty release? I haven't been looking at keystone issues in L. | 18:07 |
bknudson | I assume they're the same as master | 18:07 |
dhellmann | bswartz: it looks like https://review.openstack.org/#/c/234288/ failed its tests | 18:08 |
dhellmann | bknudson: the keystone libs from master are causing some issues in the liberty jobs, too, I think | 18:08 |
bswartz | dhellmann: yeah, I'm taking a look now | 18:08 |
dhellmann | bswartz: great, thanks | 18:08 |
lifeless | sdague: dhellmann: we have a room for discussing backwards compat in tokyo right? | 18:10 |
lifeless | mriedem: realistically issues for packagers haven't changed in any fundamental way | 18:11 |
lifeless | mriedem: we've *never* been able to say with precision which versions of the billions of permutations do and don't work | 18:11 |
lifeless | mriedem: its always been an approximation | 18:11 |
stevemar_ | dhellmann: i'll be a little delayed, keystone meeting right now | 18:14 |
stevemar_ | if there's something i need to do =\ | 18:14 |
dhellmann | stevemar_: I think we're still waiting for patches to land | 18:16 |
*** mriedem has quit IRC | 18:18 | |
stevemar_ | coolio | 18:19 |
sdague | dhellmann / ttx the nova and cinder patches merged it appears | 18:20 |
sdague | so those should be rc3 able | 18:20 |
dhellmann | sdague: woot, I'll let ttx handle that when he gets back because he knows which scripts need to run | 18:20 |
sdague | the manila one failed unit tests, I'm going to reprod locally and see if I can fix it | 18:21 |
*** mriedem has joined #openstack-relmgr-office | 18:21 | |
mriedem | sdague: i had to get cleaned up for mickey and the gang | 18:21 |
mriedem | walt has high standards | 18:22 |
sdague | mriedem: yeh, wouldn't want to be blacklisted out of there | 18:22 |
* sdague still thinks mriedem should fly from orlando to tokyo | 18:22 | |
mriedem | looks like the manila backport is hitting other unit test failures | 18:25 |
mriedem | missing this for one https://github.com/openstack/manila/commit/f38b8d4efd1f68f4ea29747f7377e0936f61d89c | 18:25 |
dhellmann | sdague: bswartz was looking at the manila patch, too | 18:26 |
mriedem | it needs f38b8d4efd1f68f4ea29747f7377e0936f61d89c | 18:26 |
lifeless | mriedem: perhaps we could do a voice call to get some high bandwidth handle on the issues you're hearing from your packagers? | 18:26 |
mriedem | lifeless: maybe at some point, not today though | 18:27 |
sdague | dhellmann: ok | 18:27 |
mriedem | bswartz: your manila backport needs f38b8d4efd1f68f4ea29747f7377e0936f61d89c squashed in also | 18:27 |
sdague | mriedem: yeh, that looks like the ticket | 18:28 |
mriedem | i could cram that in quick if no one else is doing it | 18:28 |
sdague | mriedem: go for it, you found it | 18:28 |
bswartz | mriedem: thanks that would be appreciated | 18:28 |
lifeless | mriedem: definitely we should talk in tokyo | 18:28 |
mriedem | np, up in a sec | 18:28 |
mriedem | lifeless: i won't be there | 18:28 |
* bswartz is struggling to get dependent libs installed on fedora after recent upgrade | 18:29 | |
mriedem | sdague: bswartz: https://review.openstack.org/#/c/232535/ | 18:32 |
sdague | mriedem: wait, wasn't there a version that wasn't ttx -2? | 18:33 |
mriedem | yeah, https://review.openstack.org/#/c/234288/ | 18:33 |
mriedem | dueling | 18:33 |
mriedem | i can update that one instead | 18:33 |
sdague | yeh, lets update the not -2ed patch | 18:33 |
sdague | because we would like to merge | 18:33 |
bswartz | is ttx not here to remove his -2? | 18:34 |
mriedem | weird, 234288 is what i used for git review -d | 18:34 |
mriedem | starting over | 18:34 |
bswartz | mriedem: you have 2 change IDs in your commit message | 18:35 |
bswartz | only the second one was considered by gerrit | 18:35 |
mriedem | yeah, it's usually not a problem | 18:35 |
mriedem | that's how the cinder one was | 18:35 |
mriedem | but yeah, that's why it didn't update the other | 18:36 |
sdague | yeh, it ignores everything but the last one | 18:36 |
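[Editor's note] What bit mriedem here: when a squash leaves two Change-Id footers in one commit message, Gerrit keys the push off the last one only, so `git review -d` of one change can silently end up updating the other. A sketch (IDs hypothetical):

```
Squash: webob cap fix plus oslo.db test requirement

Change-Id: Iaaaa000000000000000000000000000000000000   (ignored by Gerrit)
Change-Id: Ibbbb000000000000000000000000000000000000   (this one decides which review is updated)
```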
lifeless | mriedem: :/ | 18:36 |
ttx | I'm here | 18:36 |
ttx | sdague, bswartz: am I needed ? | 18:37 |
bswartz | ttx: just 2 different changes in gerrit both trying to fix manila | 18:37 |
bswartz | mriedem updated the wrong one | 18:37 |
dansmith | ttx: in general? yes! :) | 18:37 |
sdague | not if mriedem sorts the ids on the changes | 18:37 |
sdague | we need a ttx bat signal | 18:37 |
mriedem | there https://review.openstack.org/#/c/234288/ | 18:37 |
dansmith | sdague: nice | 18:38 |
bswartz | ttx: you could remove your -2 from https://review.openstack.org/#/c/232535/ and we could use that instead | 18:38 |
ttx | sure | 18:38 |
mriedem | we're good with https://review.openstack.org/#/c/234288/ now | 18:38 |
mriedem | i fixed it | 18:38 |
bswartz | or we could not and use the one mriedem just updated | 18:38 |
ttx | well, whichever is first in the test run | 18:38 |
mriedem | too late | 18:38 |
mriedem | heh, yeah | 18:38 |
lifeless | too many fixes :) | 18:38 |
ttx | which one should we kill | 18:38 |
sdague | mriedem: I like your new topic | 18:39 |
mriedem | it's more appropriate | 18:39 |
lifeless | ttx: / dhellmann: we do have time in toyko to talk removing of version= from setup.cfg right ? | 18:39 |
dhellmann | lifeless: that's one of the topics for the fishbowl session | 18:40 |
mriedem | bknudson: ^ you should be around for that one on the version= in setup.cfg thing since you had questions about it | 18:40 |
lifeless | dhellmann: great | 18:41 |
ttx | sdague: the omnibus is in. Looks like I can tag nova rc3 | 18:41 |
dhellmann | lifeless: though given how some of the projects went this cycle, I'm mostly convinced to only encourage that for mature projects | 18:41 |
sdague | ttx: yes, for cinder as well | 18:41 |
ttx | for cinder ? | 18:42 |
ttx | the patch I have was -1ed | 18:42 |
sdague | ttx: https://review.openstack.org/#/c/234329/ | 18:42 |
lifeless | dhellmann: (I was reminded of this because the helion dev-tools-mgr was asking me about dealing with the situation where they've merged a project right after tag but before the version bump to setup.cfg | 18:42 |
ttx | ETOOMANYPATCHES | 18:42 |
lifeless | dhellmann: how so? what problems does it avoid with them ? | 18:42 |
* ttx abandons the others | 18:42 | |
lifeless | dhellmann: (Or lets wait for fishbowl?) | 18:42 |
dhellmann | lifeless: let's wait | 18:42 |
mriedem | lifeless: internally when we branch we comment out version= in setup.cfg | 18:43 |
ttx | smcginnis, johnthetubaguy: about to tag cinder and nova rc3 | 18:44 |
smcginnis | ttx: Thanks! | 18:44 |
sdague | ok, if I can get someone to +A the manila patch - https://review.openstack.org/#/c/234288/2 (stable/liberty) | 18:44 |
sdague | I think we can ask fungi to promote it | 18:44 |
ttx | sdague: on it | 18:44 |
sdague | I've manually confirmed the unit tests pass | 18:44 |
sdague | which are all that failed before | 18:45 |
lifeless | mriedem: yeah, that's a decent approach IMO | 18:45 |
fungi | yeah | 18:45 |
*** armax has joined #openstack-relmgr-office | 18:45 | |
fungi | did the nova and cinder fixes merge yet? | 18:45 |
sdague | fungi: yes | 18:46 |
sdague | fungi: 234288,2 | 18:46 |
sdague | I manually confirmed the tests that failed last time pass | 18:46 |
ttx | fungi: if you're in a good day i'd like to have https://review.openstack.org/#/c/233661/ as well | 18:46 |
ttx | blocking glance rc3 | 18:46 |
sdague | so I think it's low risk to jump straight to gate | 18:46 |
ttx | bah it's gating already | 18:46 |
smcginnis | fungi: Last one for Cinder - https://review.openstack.org/#/c/234329/ | 18:46 |
sdague | ttx: it can be promoted | 18:46 |
ttx | fungi: it's gating already, not sure it needs promotion | 18:47 |
ttx | sdague: ok then | 18:47 |
* ttx glances at the queue | 18:47 |
ttx | holy mary | 18:47 |
sdague | 234288,2 and 233661,2 it seems | 18:47 |
fungi | k | 18:47 |
sdague | ttx: so is that it for rc3s? | 18:47 |
ttx | the ones I know about yes | 18:48 |
sdague | ok | 18:48 |
ttx | (the ones we decided on Monday and the ones we added due to the webob) | 18:48 |
sdague | ok, great | 18:48 |
ttx | sdague: well, thx for the help unlocking the day | 18:48 |
* ttx likes to share the pain | 18:48 | |
sdague | yeh, no prob | 18:48 |
ttx | alright. fire in the hole, nova rc3 coming | 18:49 |
ttx | hmm, not the best hour to use the internetz for me | 18:50 |
fungi | heh, looks like manila's not in the integrated gate queue, so it's off gating on its own anyway | 18:51 |
ttx | yay | 18:52 |
fungi | and the glance change is at the front of the gate now | 18:53 |
ttx | hmm, looks like everyone is using netflix tonight | 18:53 |
ttx | can't get a proper nova repo clone | 18:53 |
sdague | ttx: it's only 600 MB | 18:54 |
bswartz | fungi: how did you get the manila change into the gate without passing through the check queue first? | 18:54 |
ttx | sdague: mind you, the release tools use the zuul cloner, supposedly to be able to cache, except they don't. | 18:54 |
ttx | bswartz: black magic I suspect | 18:54 |
fungi | bswartz: it involves dead chickens. better not to ask | 18:55 |
bswartz | I always suspected it was possible to pull the strings with zuul but I didn't know if anyone actually did it | 18:55 |
fungi | for urgent fixes blocking most of openstack or holding up the semi-annual release, yes | 18:56 |
fungi | sometimes also for critical fixes corresponding to security advisories | 18:56 |
bswartz | we've periodically had gate stability issues (jobs timing out, usually) that required endless rechecks -- it's something we're fixing in mitaka -- but I wondered if there were shortcuts to merge things | 18:56 |
sdague | bswartz: not unless it is affecting things at a large project level | 18:58 |
lifeless | bswartz: the technical capacity exists, policy is to only use it in really exceptional circumstances | 18:58 |
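[Editor's note] The string-pulling fungi alludes to is the Zuul client's promote operation, which re-enqueues the named changes at the head of a pipeline, skipping the usual wait. Roughly, per the Zuul 2.x client (change numbers taken from this log; run by infra admins, not contributors):

```
zuul promote --pipeline gate --changes 234288,2 233661,2
```

Policy, as lifeless says, restricts this to exceptional cases: release-blocking fixes and security advisories.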
ttx | i'll be back in a few -- I don't have enough bandwidth for the tags right now | 18:59 |
bswartz | well we're fixing the problem by making the first party manila driver faster so tests take less time | 18:59 |
bswartz | but it can be maddening to have a change get thrown out of the gate and go back through the check queue multiple times before merging | 19:00 |
ttx | sdague: the cinder fix went in before https://review.openstack.org/#/c/233923/ merged in master | 19:28 |
ttx | sdague: now we have to keep tabs on that one and make sure it merges | 19:28 |
jgriffith | ttx: I think we squashed that | 19:28 |
ttx | jgriffith: squashed in master ? | 19:29 |
jgriffith | ttx: in liberty | 19:29 |
ttx | right, and the fix never made it to master | 19:29 |
ttx | creating a regression opportunity | 19:29 |
jgriffith | ttx: yeah, I see now | 19:30 |
ttx | jgriffith: so we need to make sure that merges now | 19:30 |
jgriffith | ttx: yeah, I'll keep babysitting | 19:30 |
ttx | I can't even describe the state we are in on the bug | 19:30 |
jgriffith | ttx: indeed... for some reason I thought the cherry pick would autofail on the merge if it hadn't landed in master | 19:31 |
jgriffith | ttx: well.. not the cherry pick, but the merge of the cherry pick | 19:31 |
ttx | well, that would remove a lot of process if that happened | 19:31 |
jgriffith | ttx: you say that like it's a bad thing :) | 19:32 |
bswartz | ttx: one of the gate tests for 234288,2 took a very long time to install devstack -- there's a good chance it will hit a timeout | 19:33 |
ttx | not sure if it's reasonable to cut the RC3 in the meantime | 19:33 |
jgriffith | ttx: so the problem is I think that'll be around 3+ hours at least before it merges | 19:34 |
bswartz | I will recheck if it ultimately fails, but it may have to travel through the check queue again | 19:34 |
ttx | fungi: we might need 233923,2 promoted in gate queue since the stable/liberty backport was approved before the master fix landed | 19:34 |
ttx | bswartz: let's keep an eye on it | 19:34 |
fungi | let me know if it becomes necessary. not paying close attention since i'm running the infra meeting right now | 19:35 |
ttx | Looks like I don't have enough bandwidth for the tagging right now anyway | 19:35 |
ttx | fungi: 233923,2 is necessary | 19:35 |
jgriffith | net neutrality :) | 19:35 |
bswartz | ttx: i've got my eye on it | 19:35 |
* bswartz has jenkins open with auto-refresh | 19:36 | |
jgriffith | ttx: should have his own fast-lane | 19:36 |
ttx | fungi: i don't feel comfortable cutting cinder rc3 while we have that potential regression in master | 19:36 |
bknudson | +3 it | 19:36 |
fungi | ttx: necessary to promote? can do then | 19:37 |
ttx | damn, just as my cinder repo clone finally completed. Bad luck | 19:37 |
fungi | ttx: okay, i promoted it | 19:38 |
ttx | fungi: thx | 19:38 |
fungi | hopefully the other stuff from earlier already merged? i'm not really watching | 19:38 |
fungi | but i can take a look again after we wrap the infra meeting | 19:39 |
ttx | fungi: well, no :) | 19:39 |
*** mestery has joined #openstack-relmgr-office | 19:45 | |
bswartz | ttx: it failed | 19:53 |
bswartz | https://jenkins06.openstack.org/job/gate-manila-tempest-dsvm-neutron-multibackend/958/console | 19:53 |
ttx | dammit | 19:53 |
ttx | fungi: we had a fail on the manila fix https://review.openstack.org/#/c/234288/ and need it to jump to gate again (it has its own queue) | 19:54 |
bswartz | it seems to be bad luck -- the slowness of devstack makes it take too long | 19:54 |
ttx | fungi: shall I recheck it, or reapprove it or... | 19:55 |
*** AJaeger has left #openstack-relmgr-office | 19:55 | |
sdague | ttx: https://review.openstack.org/#/c/233923/ is not a critical fix | 19:58 |
sdague | ttx: it's almost a noop in how it exposes | 19:58 |
ttx | ah. I wonder why we included it in the RC then | 19:59 |
ttx | because the state of master changed ? | 19:59 |
* ttx is confused | 20:00 | |
sdague | ttx: we needed *some* change | 20:01 |
sdague | but 923 is shifting it from 400 -> 500 | 20:01 |
sdague | the issue was it previously was 0 | 20:01 |
sdague | which explodes | 20:01 |
sdague | so there is a little bit of eventual consistency here, but any valid http value is fine. It always gets overwritten | 20:02 |
fungi | ttx: i've pushed 234288,2 back into the gate for manila now | 20:02 |
sdague | it's just an invalid base value that makes things explody | 20:02 |
sdague | anyway, we can resync post tc meeting if you want | 20:03 |
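[Editor's note] sdague's explanation can be made concrete. The nova-lineage code subclasses WebOb's HTTP exception with a default status code of 0; newer WebOb validates the status at construction time, so the invalid base value explodes even though the real code is always overwritten later, and any valid HTTP code (400, 500, ...) fixes it. A stdlib-only mimic of that validation (FakeHTTPException is an illustration of the failure mode, not the real WebOb API):

```python
import http.client


class FakeHTTPException:
    """Toy stand-in for webob.exc.WSGIHTTPException's status handling."""

    code = 0  # old default in the nova-lineage base class

    def __init__(self):
        # Newer WebOb started validating the status line on construction,
        # so a base code of 0 blows up before the real code is ever set.
        if self.code not in http.client.responses:
            raise ValueError("invalid status code %r" % self.code)
        self.status = "%d %s" % (self.code, http.client.responses[self.code])


class ConvertedException(FakeHTTPException):
    # The fix: any valid HTTP code works, since it is always overwritten
    # with the real status later anyway.
    code = 400
```

Instantiating `ConvertedException` succeeds, while the uncorrected base class raises at construction time, which is the gate-breaking behavior discussed above.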
bswartz | fungi: thx | 20:03 |
smcginnis | mriedem: Saw your note on the version bump. | 20:27 |
smcginnis | mriedem: Makes sense. | 20:28 |
smcginnis | mriedem: Context of why I did it: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2015-10-13.log.html#t2015-10-13T17:25:28 | 20:28 |
mriedem | i will forever grind that axe now | 20:30 |
ttx | sdague: the evil is done now, it's top of gate anyway | 20:35 |
mriedem | sdague: manila is also setting the webob thing to 400 but as you said it doesn't really matter | 20:41 |
mriedem | let's just pull ConvertedException out to oslo.exceptions | 20:41 |
* bswartz will BRB need to drive home | 20:42 | |
*** bswartz has quit IRC | 20:42 | |
*** TravT_ has joined #openstack-relmgr-office | 20:46 | |
*** TravT__ has joined #openstack-relmgr-office | 20:48 | |
*** TravT__ has joined #openstack-relmgr-office | 20:48 | |
*** TravT has quit IRC | 20:49 | |
*** TravT__ is now known as TravT | 20:49 | |
*** TravT_ has quit IRC | 20:51 | |
jgriffith | ttx: smcginnis https://review.openstack.org/#/c/233923/ merged | 21:00 |
jgriffith | GTG | 21:00 |
ttx | alrighty | 21:00 |
ttx | all clear | 21:00 |
smcginnis | jgriffith: Just noticed that. | 21:00 |
smcginnis | Open the floodgates again. :) | 21:01 |
jgriffith | and on that note... I've got some homemade tomato sauce to make :) | 21:01 |
ttx | let's see if I can clone a repo now | 21:01 |
ttx | sdague, fungi: dammit, glance failed https://review.openstack.org/#/c/233661/ | 21:02 |
ttx | At this stage I'll pick it up tomorrow morning | 21:02 |
ttx | feel free to promote it toward the end of the day if that feels necessary to get it | 21:02 |
ttx | cinder rc3 on its way | 21:04 |
jgriffith | ttx: cool.. at this point things can/should just wait until first update :) | 21:05 |
* jgriffith bounces back out to the kitchen | 21:06 | |
*** bswartz has joined #openstack-relmgr-office | 21:07 | |
bswartz | ttx: merged https://review.openstack.org/#/c/234288/ | 21:07 |
jroll | ttx: re liberty we still have at least one patch, maybe two, inbound for ironic, jfyi | 21:08 |
ttx | bswartz: yep, will tag in a few | 21:08 |
bswartz | ttx: ty | 21:08 |
ttx | jroll: planning to make another release pre-release-day ? | 21:08 |
ttx | jroll: or is that stable point release material ? | 21:09 |
jroll | ttx: yes, there's a patch that will need backporting that is having some contention :( upgrade-related so I wanted to get it in before release day | 21:09 |
ttx | jroll: time is running out but meh | 21:09 |
jroll | ttx: I know :( | 21:10 |
ttx | intermediary released things are not so much of an issue | 21:10 |
ttx | jroll: I won't have time to help you on Thursday, so better do that release tomorrow | 21:10 |
jroll | right; I just don't want to release current version of ironic as "liberty" when there's an upgrade bug | 21:10 |
jroll | ack | 21:10 |
ttx | agreed | 21:10 |
jroll | I wanted to do it last week fwiw :P | 21:12 |
jroll | but yeah, will push people | 21:12 |
*** mestery has quit IRC | 21:12 | |
ttx | alright cinder and manila rc3 are in | 21:17 |
ttx | nova is next | 21:18 |
fungi | okay, glance 233661,2 has been shunted back to the gate again | 21:18 |
fungi | or will be as soon as zuul finishes catching up with its event queue backlog | 21:20 |
fungi | there it is | 21:21 |
ttx | alright, nova rc3 done | 21:25 |
ttx | and closing shop for the night | 21:27 |
*** mestery has joined #openstack-relmgr-office | 21:46 | |
*** gordc has quit IRC | 21:47 | |
*** mriedem has quit IRC | 21:48 | |
*** sigmavirus24 is now known as sigmavirus24_awa | 22:08 | |
*** david-lyle has quit IRC | 22:10 | |
*** david-lyle has joined #openstack-relmgr-office | 22:12 | |
anteaya | ttx: yay | 22:17 |
*** mestery has quit IRC | 22:21 | |
*** stevemar_ has quit IRC | 23:23 | |
*** stevemar_ has joined #openstack-relmgr-office | 23:24 | |
*** dims has quit IRC | 23:41 | |
*** dims has joined #openstack-relmgr-office | 23:42 | |
*** mestery has joined #openstack-relmgr-office | 23:45 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!