Monday, 2014-06-16

*** xchu has joined #openstack-infra00:00
mordredclarkb: so - I'm going to have to spin up a vm to work further on dib - I've managed to tickle a bug in dib by running a debian host to build ubuntu images, which seems to be a combination that is not (yet) tested00:04
mordredclarkb: seems like it should be a straightforward bug - but I'd rather make progress on the task at hand...00:04
clarkbright you are feeling why chroots wont work well for the general case :)00:04
mordredclarkb: nod. otoh - did you know that debian does per-user tmp dirs?00:05
mordredclarkb: TMP=/tmp/user/1000 00:05
clarkbI did not know00:05
mordredme either00:05
clarkbmakes sense though00:05
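
A minimal sketch of the per-user tmp dir behaviour described above - the libpam-tmpdir mechanism and the exact variables it exports are assumptions here, not something stated in the channel. The point is that anything going through Python's tempfile (or otherwise honouring TMPDIR/TMP) lands under /tmp/user/<uid>, while a build script that hardcodes /tmp does not, which is the kind of mismatch behind the dib bug filed a few lines further down.

    import os
    import tempfile

    # With per-user tmp dirs, TMPDIR/TMP point somewhere like /tmp/user/1000
    # and tempfile follows them.
    print(os.environ.get('TMPDIR') or os.environ.get('TMP') or '/tmp')
    print(tempfile.gettempdir())

    # A build script that assumes its scratch space lives directly in /tmp
    # (for example when bind-mounting /tmp into a chroot) will miss the
    # directory tempfile actually handed out.
    workdir = tempfile.mkdtemp(prefix='image-build-')
    print(workdir)
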
lifelessmordred: have you filed a bug for me ?00:06
mordredlifeless: no, I have not00:06
*** oomichi has joined #openstack-infra00:06
mordredlifeless: sorry, I forget you like those :)00:06
lifelessmordred: are you going to do so right now? Or do I have to fly over there and sit on you until you do ?00:06
mordreddoing00:06
lifelessgood choice :)00:07
*** yamahata has quit IRC00:08
*** jhesketh has joined #openstack-infra00:08
mordredlifeless: https://bugs.launchpad.net/diskimage-builder/+bug/133029000:09
uvirtbotLaunchpad bug 1330290 in diskimage-builder "Building ubuntu images on debian host fails because of tmpdir" [Undecided,New]00:09
lifelessthanks00:09
mordredlifeless: thank you! I do not believe I would have noticed the madness in the dir path00:09
*** ianw has quit IRC00:26
*** ianw has joined #openstack-infra00:26
openstackgerritA change was merged to openstack-infra/gitdm: Update gitdm to use gerrit 2.8 workflow names  https://review.openstack.org/9946600:27
openstackgerritMatthew Oliver proposed a change to openstack-infra/config: Create user to catch email if $sysadmin == []  https://review.openstack.org/10011800:34
*** matsuhashi has joined #openstack-infra00:35
*** matsuhashi has quit IRC00:35
*** matsuhashi has joined #openstack-infra00:35
*** sarob_ has joined #openstack-infra00:38
*** sarob_ has quit IRC00:42
anteayaupgraded the lappie to trusty, so far nothing has gone boom00:42
StevenKanteaya: I currently have an upgrade running on my desktop.00:47
StevenKSo far the only thing broken while the upgrade is running is some fonts in Firefox.00:48
anteayacool00:51
anteayaI had something with the gtk pixbuf or something00:51
anteayakeep seeing that error fly by00:51
anteayaso far nothing in the install feels buggy00:51
anteaya*shrug*00:51
*** nosnos has joined #openstack-infra00:55
morganfainberganteaya, yeah the only issue i had w/ trusty on the laptop was EFI boot (but this was a clean install)01:04
*** Ryan_Lane has joined #openstack-infra01:04
*** CaptTofu_ has joined #openstack-infra01:04
*** Ryan_Lane has quit IRC01:08
anteayaefi? I _think_ I am still bios01:09
anteayadid you get it fixed?01:09
*** melwitt has joined #openstack-infra01:10
*** wenlock_ has joined #openstack-infra01:11
morganfainberganteaya, yeah i used the Mac boot ISO,01:13
morganfainberganteaya, this laptop didn't want to let me install in 'bios' mode (just outright failed)01:13
morganfainberganteaya, took me a couple extra hours when i was installing one weekend, but it works well once using the right iso01:14
anteayawhich was the right iso for you?01:14
morganfainberganteaya, http://cdimage.ubuntu.com/releases/14.04/release/ubuntu-14.04-desktop-amd64+mac.iso01:15
anteayacool01:15
morganfainberganteaya, but if yours is working in bios mode, i prob. wouldn't switch it up :)01:16
anteayayeah, just did an inplace upgrade01:18
anteayaso far so good01:18
morganfainbergmy experience is 2 OSs really succeed at that well01:18
morganfainbergubuntu01:18
morganfainbergand os X01:18
morganfainbergmost anything else I find a reinstall to be better01:18
* anteaya nods01:21
anteayanever driven os X01:21
anteayaso I have no experience01:22
morganfainbergi like it, but that's because i've been using it for a looong time (and desktop linux has only recently been really viable)01:22
morganfainbergrecently = last ~4yrs maybe01:22
morganfainbergimo01:22
anteayakk01:24
openstackgerritlifeless proposed a change to stackforge/gertty: Filter out WIP patches by default in changes list.  https://review.openstack.org/10012101:26
lifelessanteaya: if you have an HP laptop it can do either EFI or bios boot mode - check your bios to see01:28
anteayak, thanks01:29
anteayanext boot up01:29
*** wenlock_ has quit IRC01:36
mnasermaybe this isnt much of an infra issue but someone with experience might chime in01:37
*** CaptTofu_ has quit IRC01:37
mnaseranyone ever ran 14.04 with havana? or should i keep installing 12.04 till i get icehouse in01:37
*** CaptTofu_ has joined #openstack-infra01:37
anteayahavana was tested with 12.0401:39
anteayaI don't believe we test past releases with new versions of os's01:40
mnaseri don't see why it wouldn't work, but i assume the deps/etc probably will result in the install not going through01:40
mnaserthis makes it hard, oh well01:40
anteayait is possible01:40
anteayaI would definitely spend some time testing it01:40
anteayaand be prepared for it not to work01:40
*** alugovoi has joined #openstack-infra01:41
mnaseranteaya: in that case, rather not deal with that type of trouble.. better spend resources keeping things running01:41
* anteaya nods01:41
anteayayou sound like a wise person01:41
*** CaptTofu_ has quit IRC01:42
*** sweston has joined #openstack-infra01:42
anteayakeep in mind we are still testing everything on precise, or centos for py26 and perhaps something else01:43
anteayainfra has not moved to trusty yet for testing01:43
anteayawe want to, but it hasn't happened yet01:43
morganfainberganteaya, i use VMs for all testing01:45
*** blamar has joined #openstack-infra01:45
morganfainbergand vagrant (usually)01:45
morganfainbergit's way easier to control the environment and not pollute my workstation/laptop's install01:45
*** pcrews has quit IRC01:45
*** blamar has quit IRC01:47
morganfainbergmnaser, i would even recommend staying with 12.04 until Juno (current development cycle) simply because 12.04 has a lot of drive time and as anteaya said, infra doesn't test on trusty01:48
*** alexpilotti has joined #openstack-infra01:48
mnasermorganfainberg: hmm, sounds pretty good actually, but, my concern here might be what to do when the next cycle comes in01:49
mnasernot sure how easy it is to go from 12.04 to 14.0401:49
mnaser(ex: if this is centos 5 -> centos 6, it's a big problem :p)01:49
morganfainbergmnaser, i've had great luck upgrading ubuntu. but it also depends on how you're deploying OpenStack.01:49
mnaserfor example, centos 5 to 6 is completely unsupported and "impossible" to put in other terms01:50
mnaseri think upgrading to 14.04 from 12.04 is something that can be done smoothly01:50
morganfainbergmnaser, previous job *** we would quiesce a hypervisor, then install on the new OS as part of the upgrade. so in the case of when we did os upgrades, we made sure OpenStack would run on both, then upgraded each hypervisor01:50
morganfainbergmnaser, but that was because we had very controlled package lists we wanted per release01:51
mnaserwell, i'm planning to kinda roll out with puppet packages / ubuntu repos01:51
*** sarob_ has joined #openstack-infra01:51
mnaserlife should be much simpler then01:51
morganfainbergmnaser, doesn't mean i wouldn't evacuate the hypervisor then reinstall w/ the latest ubuntu release then. (personal preference if I were to be deploying that manner)01:52
morganfainbergmnaser, but i've def. seen good upgrade paths for ubuntu01:52
mnasermorganfainberg: im thinking continue with 12.04, then when juno is out, upgrade to 14.04 first, then roll out juno after01:52
mnaserand i agree with you.. it's been pretty smooth01:53
morganfainbergmnaser, provided we get some good drive time testing on 14.04 in the check/gate, that would be my approach as well01:53
morganfainbergit isn't like 12.04 will become obsolete overnight or support juno badly if we don't get the drive time on the testing01:53
morganfainbergfor 14.04 that is.01:54
*** koolhead17 has joined #openstack-infra01:54
* morganfainberg re-reads that sentence and ... has no idea how to make it more gramatically correct - sorry for bad typing... it's sunday and i'm fighting with finding good flight times for travel :)01:54
mnasermorganfainberg / anteaya: thank you very much for bouncing the ideas around.. and thanks for your help 😊01:55
morganfainbergmnaser, of course! happy to help~01:55
morganfainberg!01:55
* mnaser has to do migrations01:55
mnasermigrations to kvm, then migrating from nova-network to quantum/neutron01:55
mnaserover 500 vms01:55
*** sarob_ has quit IRC01:55
anteayamnaser: may they go smoothly01:56
mnaserheh, thank you :)01:57
mnaseri'm worried.. but i think it'll go over smoothly01:57
*** sweston has quit IRC01:58
anteayapositive outlook, off to a good start01:59
*** zhiyan_ is now known as zhiyan02:00
*** aysyd has joined #openstack-infra02:10
*** zhiyan is now known as zhiyan_02:18
*** praneshp_ has joined #openstack-infra02:24
*** praneshp has quit IRC02:26
*** praneshp_ is now known as praneshp02:26
*** zhiyan_ is now known as zhiyan02:27
*** fifieldt__ is now known as fifieldt02:30
*** arnaud__ has joined #openstack-infra02:33
*** alugovoi has quit IRC02:34
*** penguinRaider has quit IRC02:34
*** yamahata has joined #openstack-infra02:36
openstackgerritMatthew Oliver proposed a change to openstack-infra/config: Create user to catch email if $sysadmin == []  https://review.openstack.org/10011802:37
openstackgerritMatthew Oliver proposed a change to openstack-infra/config: Create user to catch email if $sysadmin == []  https://review.openstack.org/10011802:41
*** arnaud__ has quit IRC02:45
openstackgerritJames Polley proposed a change to openstack-infra/config: Add definitions for a job to check vlan config  https://review.openstack.org/10007602:45
*** alexpilotti has quit IRC02:46
*** sarob_ has joined #openstack-infra02:51
*** CaptTofu_ has joined #openstack-infra02:53
*** sarob_ has quit IRC02:56
*** koolhead17 has quit IRC02:58
*** alexpilotti has joined #openstack-infra02:58
*** jhesketh has quit IRC03:13
*** alexpilotti has quit IRC03:16
*** adalbas has quit IRC03:16
*** shayneburgess has joined #openstack-infra03:16
anteayalifeless: thanks, I found it, uefi with or without csm, currently I am using the legacy bios, but good to know03:18
*** jhesketh has joined #openstack-infra03:26
*** aysyd has quit IRC03:27
*** crc32 has joined #openstack-infra03:27
*** nosnos has quit IRC03:36
*** CaptTofu_ has quit IRC03:42
*** CaptTofu_ has joined #openstack-infra03:42
*** shayneburgess has quit IRC03:44
*** CaptTofu_ has quit IRC03:46
*** shayneburgess has joined #openstack-infra03:48
*** shayneburgess has quit IRC03:57
*** amotoki has joined #openstack-infra03:57
*** gema has quit IRC03:58
*** otter768 has quit IRC03:58
*** gema has joined #openstack-infra03:59
*** otter768 has joined #openstack-infra04:00
*** arnaud__ has joined #openstack-infra04:02
*** pcrews has joined #openstack-infra04:03
*** morganfainberg has quit IRC04:04
*** morganfainberg has joined #openstack-infra04:04
*** morganfainberg has quit IRC04:05
*** morganfainberg has joined #openstack-infra04:06
*** yfried_ has quit IRC04:07
*** pcrews has quit IRC04:09
*** otter768 has quit IRC04:14
*** arnaud__ has quit IRC04:15
*** nosnos has joined #openstack-infra04:15
*** yaguang has joined #openstack-infra04:15
*** melwitt has quit IRC04:15
*** arnaud__ has joined #openstack-infra04:24
*** mrodden has joined #openstack-infra04:25
*** crc32 has quit IRC04:29
*** crc32 has joined #openstack-infra04:30
*** signed8bit has joined #openstack-infra04:34
*** Longgeek has joined #openstack-infra04:39
*** Longgeek has quit IRC04:40
*** Longgeek has joined #openstack-infra04:40
*** arnaud__ has quit IRC04:42
*** crc32 has quit IRC04:43
openstackgerritIan Wienand proposed a change to openstack-infra/devstack-gate: Make one copy of grenade log files  https://review.openstack.org/10013104:44
openstackgerritIsaku Yamahata proposed a change to openstack-infra/config: Add tacker project on StackForge  https://review.openstack.org/9743504:47
*** sarob_ has joined #openstack-infra04:51
*** sarob_ has quit IRC04:56
*** otherwiseguy has joined #openstack-infra04:57
*** yfried_ has joined #openstack-infra05:02
openstackgerritJoshua Hesketh proposed a change to openstack-infra/config: Add in support for 'recheck jenkins'  https://review.openstack.org/10013305:02
openstackgerritIan Wienand proposed a change to openstack-infra/devstack-gate: Cleanup of grenade and old/new log copy  https://review.openstack.org/9985205:03
*** signed8bit has quit IRC05:04
*** signed8bit has joined #openstack-infra05:05
*** otherwiseguy has quit IRC05:08
*** otherwiseguy has joined #openstack-infra05:12
*** koolhead17 has joined #openstack-infra05:13
openstackgerritlifeless proposed a change to stackforge/gertty: Don't crash on comments on unchanged files  https://review.openstack.org/9956305:14
openstackgerritlifeless proposed a change to stackforge/gertty: Filter out WIP patches by default in changes list.  https://review.openstack.org/10012105:14
openstackgerritlifeless proposed a change to stackforge/gertty: Only show verified reviews by default.  https://review.openstack.org/10013805:14
*** terryw has joined #openstack-infra05:17
*** otherwiseguy has quit IRC05:20
*** e0ne has joined #openstack-infra05:21
*** ildikov has quit IRC05:21
*** sweston has joined #openstack-infra05:28
*** signed8bit has quit IRC05:34
*** e0ne has quit IRC05:37
*** basha has joined #openstack-infra05:44
*** ildikov has joined #openstack-infra05:48
*** sarob_ has joined #openstack-infra05:51
*** penguinRaider has joined #openstack-infra05:52
*** zhiyan is now known as zhiyan_05:55
*** sarob_ has quit IRC05:55
*** zhiyan_ is now known as zhiyan05:56
*** matsuhashi has quit IRC06:00
*** matsuhashi has joined #openstack-infra06:01
*** _nadya_ has joined #openstack-infra06:05
*** e0ne has joined #openstack-infra06:07
*** _nadya_ has quit IRC06:15
*** Ryan_Lane has joined #openstack-infra06:16
*** e0ne has quit IRC06:18
*** arnaud__ has joined #openstack-infra06:18
*** e0ne has joined #openstack-infra06:18
*** arnaud__ has quit IRC06:18
*** penguinRaider has quit IRC06:21
*** arnaud__ has joined #openstack-infra06:21
openstackgerritIan Wienand proposed a change to openstack-infra/devstack-gate: Cleanup of grenade and old/new log copy  https://review.openstack.org/9985206:23
*** e0ne has quit IRC06:25
*** amotoki has quit IRC06:26
*** matsuhas_ has joined #openstack-infra06:38
*** alexpilotti has joined #openstack-infra06:38
*** alexpilotti has quit IRC06:38
*** jcoufal has joined #openstack-infra06:38
*** matsuhashi has quit IRC06:38
*** Guest47184 has joined #openstack-infra06:46
*** rgerganov_ has joined #openstack-infra06:47
*** dkehn_ has joined #openstack-infra06:54
*** ihrachyshka has joined #openstack-infra06:57
*** Clabbe has joined #openstack-infra06:58
*** dkehnx has quit IRC06:59
*** rdopieralski has joined #openstack-infra07:00
*** matsuhas_ has quit IRC07:01
*** vtapia has joined #openstack-infra07:01
*** yfried_ is now known as yfried07:03
*** _nadya_ has joined #openstack-infra07:04
*** penguinRaider has joined #openstack-infra07:04
*** matsuhashi has joined #openstack-infra07:04
*** e0ne has joined #openstack-infra07:04
*** chandan_kumar has quit IRC07:05
*** ociuhandu has joined #openstack-infra07:05
*** vtapia has left #openstack-infra07:05
*** amotoki has joined #openstack-infra07:06
openstackgerritRadomir Dopieralski proposed a change to openstack-infra/config: Add XStatic-* projects with packaged static files for Horizon  https://review.openstack.org/9571607:10
*** asettle has quit IRC07:12
ttxreed: "other" is what's not being incubated or integrated.07:13
*** rcarrillocruz has quit IRC07:13
*** rcarrillocruz has joined #openstack-infra07:13
*** e0ne has quit IRC07:14
*** Ryan_Lane has quit IRC07:15
*** Ryan_Lane has joined #openstack-infra07:15
*** basha has quit IRC07:15
*** e0ne has joined #openstack-infra07:16
*** _nadya_ has quit IRC07:16
*** pas-ha has joined #openstack-infra07:20
*** unicell has joined #openstack-infra07:21
*** flaper87|afk is now known as flaper8707:24
*** penguinRaider has quit IRC07:25
*** andreykurilin_ has joined #openstack-infra07:27
*** freyes has joined #openstack-infra07:29
*** jaypipes has joined #openstack-infra07:36
*** jaypipes has quit IRC07:36
*** jlibosva has joined #openstack-infra07:38
*** zehicle_at_dell has quit IRC07:40
mattoliverauI'm calling it a day, night all.07:41
*** zehicle_at_dell has joined #openstack-infra07:41
*** Ryan_Lane has quit IRC07:42
*** tkelsey has joined #openstack-infra07:42
*** katyafervent_awa is now known as katyafervent07:42
pas-hahi all, is there any way to login to gerrit currently? launchpad openid is not working, and google's is also not accepted (OpenID provider not permitted by site policy.)07:44
pas-hawhat are actually those permitted my site policy?07:45
pas-has/my/by/07:45
*** _nadya_ has joined #openstack-infra07:47
openstackgerritVictor Sergeyev proposed a change to openstack/requirements: Add oslo.db library  https://review.openstack.org/9140707:48
*** afazekas_ has joined #openstack-infra07:48
*** salv-orlando has joined #openstack-infra07:49
*** sarob_ has joined #openstack-infra07:51
*** rcarrill` has joined #openstack-infra07:53
*** rcarrillocruz has quit IRC07:55
*** sarob_ has quit IRC07:55
*** jistr has joined #openstack-infra07:57
tchaypoAgain?07:59
tchaypothat's exciting07:59
tchaypoHrm, it's working for me08:00
pas-haI've managed to relogin to launchpad (after some annoying errors) and now login to gerrit works08:00
pas-haso maybe we need to put such a hint somewhere08:01
*** hashar has joined #openstack-infra08:02
*** dmitryme has joined #openstack-infra08:04
*** jpich has joined #openstack-infra08:04
*** markmc has joined #openstack-infra08:05
*** yolanda has quit IRC08:06
*** jlibosva has quit IRC08:09
*** jlibosva has joined #openstack-infra08:11
*** vponomaryov has joined #openstack-infra08:13
*** jgallard has joined #openstack-infra08:13
ogelbukhhi, is the topic still relevant? hasn't lp restored openid yet?08:17
*** sarob_ has joined #openstack-infra08:18
*** fbo_away is now known as fbo08:20
pas-haogelbukh, it is restored to some extent, but it seems one needs to relogin to launchpad to be able to login to gerrit08:21
*** hashar has quit IRC08:22
*** pas-ha has left #openstack-infra08:22
tchaypo    /topic08:22
*** sarob_ has quit IRC08:22
tchaypowell done me08:22
*** lukego has joined #openstack-infra08:22
tchaypolifeless: did you get added to the ACL for the topic bot?08:23
*** IvanBerezovskiy has joined #openstack-infra08:23
lukego“testr list-tests” is failing for my 3rd party CI. possibly related to the setuptools issue over the weekend. anybody know the resolution? http://paste.openstack.org/show/84129/08:24
*** freyes has quit IRC08:25
*** jlibosva1 has joined #openstack-infra08:32
*** jlibosva has quit IRC08:35
*** freyes has joined #openstack-infra08:39
*** andreykurilin_ has quit IRC08:41
*** isviridov|away is now known as isviridov08:42
openstackgerritVictor Sergeyev proposed a change to openstack/requirements: Add oslo.db library  https://review.openstack.org/9140708:45
*** arnaud__ has quit IRC08:47
BobBallsdague: Can I please beg for an urgent review of https://review.openstack.org/#/c/100174/ pls?  The XenServer CI has been broken by a nova change and that devstack patch needs to land to get it back running.08:47
*** pelix has joined #openstack-infra08:48
*** sarob_ has joined #openstack-infra08:51
*** Guest47184 is now known as Hal_08:51
*** Hal_ has quit IRC08:52
*** Hal_ has joined #openstack-infra08:52
*** sarob_ has quit IRC08:56
*** zhiyan is now known as zhiyan_08:56
*** habib has joined #openstack-infra08:57
*** habib has joined #openstack-infra08:58
*** jp_at_hp has joined #openstack-infra08:58
*** rdopieralski has quit IRC08:59
SergeyLukjanovoh, I'm back after 4 days of not reading IRC09:02
SergeyLukjanovttx, I've +1'd your changes to release-tools09:04
ttxSergeyLukjanov: kewl, will approve now09:04
ttxSergeyLukjanov: The second change I'll probably wait until Swift RC and make sure they work well before committing them09:05
SergeyLukjanovsdague, lifeless, derekh, ttx, sorry folks, I've been on holidays and not reading irc last four days09:08
SergeyLukjanovttx, yup, it makes sense09:08
ttxSergeyLukjanov: good to see you're actually human09:08
SergeyLukjanovttx :)09:09
openstackgerritA change was merged to openstack-infra/release-tools: Add script for new-style milestone publication  https://review.openstack.org/9812309:09
SergeyLukjanovttx, I think you'll see it a lot of times this summer :)09:10
*** dizquierdo has joined #openstack-infra09:11
*** ihrachyshka has quit IRC09:15
*** yolanda has joined #openstack-infra09:18
yolandaclarkb, mordred, how do we want to manage the image replacement/removal on glance/nodepool? at the moment we upload the image to glance with the real name, for example "bare-precise", without any timestamp suffix09:19
yolandaso right now if we update the glance image again, we end up with two images named in the same way, and that causes trouble. Shall we overwrite the glance image in this case, or shall we use suffixes for each upload?09:20
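
A minimal sketch of the "suffix per upload" option raised above - the naming scheme and the pruning helper are illustrative assumptions, not the nodepool implementation. Each upload gets a unique "<label>-<timestamp>" name so it never collides with the image already in glance, and older uploads for the same label can be deleted once the new one is ready.

    import time

    def timestamped_name(label):
        # e.g. "bare-precise-1402891200"
        return '%s-%d' % (label, int(time.time()))

    def stale_uploads(images, label, keep=1):
        """images is a list of (name, image_id); return ids of older uploads."""
        ours = sorted((name, image_id) for name, image_id in images
                      if name.startswith(label + '-'))
        return [image_id for _, image_id in ours[:-keep]]
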
*** cnesa has quit IRC09:23
*** adarazs has left #openstack-infra09:26
*** xchu has quit IRC09:28
*** jamielennox is now known as jamielennox|away09:29
openstackgerritA change was merged to openstack-infra/jeepyb: Wait for ACL creation in manage-projects  https://review.openstack.org/9468409:30
*** praneshp has quit IRC09:30
*** davidhadas_ has quit IRC09:32
*** rdopieralski has joined #openstack-infra09:32
*** kashyap has quit IRC09:34
openstackgerritIsaku Yamahata proposed a change to openstack-infra/config: Add tacker project on StackForge  https://review.openstack.org/9743509:39
*** ihrachyshka has joined #openstack-infra09:40
openstackgerritA change was merged to openstack-infra/config: Add jshint job for tuskar-ui  https://review.openstack.org/9647309:44
*** weshay has quit IRC09:45
openstackgerritA change was merged to openstack/requirements: Bump minimum hacking version to 0.9.2  https://review.openstack.org/9981509:45
*** rcarrillocruz has joined #openstack-infra09:46
*** mrda is now known as mrda-away09:47
*** rcarrill` has quit IRC09:48
*** _nadya_ has quit IRC09:51
*** rgerganov_ is now known as rgerganov09:56
sdagueBobBall: done10:03
*** ociuhandu has quit IRC10:04
sdagueany other deprecated things happening for xen that should be cleaned up?10:04
*** ociuhandu has joined #openstack-infra10:04
*** Hal_ has quit IRC10:05
*** openstackgerrit has quit IRC10:06
BobBallThey were the only config options I saw - but did you have something particular in mind?10:06
BobBall+ thanks10:06
*** openstackgerrit has joined #openstack-infra10:07
*** davidhadas has joined #openstack-infra10:09
*** ociuhandu has quit IRC10:09
*** rlandy has joined #openstack-infra10:13
sdagueBobBall: nope, just curious if you are seeing any other deprecation warnings10:14
sdagueso we could get ahead of those10:14
BobBallI don't think so, but I'll check again10:15
BobBallI'm very cross with myself for missing this one... (and frustrated with those on the review for ignoring it - but that's a different story :P)10:15
*** _nadya_ has joined #openstack-infra10:17
BobBalla few services are using the deprecated auth fragments and there are a bunch of glance deprecation warnings.  I don't think any of them are XS specific10:20
openstackgerritA change was merged to openstack-infra/config: Unbreak tripleo projects  https://review.openstack.org/9977810:22
*** matsuhashi has quit IRC10:24
*** jgallard has quit IRC10:24
*** matsuhashi has joined #openstack-infra10:24
openstackgerritA change was merged to openstack-infra/config: Add a solum-spec repo to stackforge  https://review.openstack.org/9823710:25
*** unicell has quit IRC10:29
*** matsuhashi has quit IRC10:33
*** matsuhashi has joined #openstack-infra10:34
BobBallsdague: you seen the increase in the zuul job queue?10:34
BobBalllooks like something bad is about to happen...10:34
*** openstackgerrit has quit IRC10:35
sdaguelooks like we had a requirements proposal go out10:35
BobBalloh - is that expected then?10:36
BobBallfair enough :)10:36
* BobBall just saw a big spike in the graph10:36
BobBalland typically big spikes are bad :P10:36
*** openstackgerrit has joined #openstack-infra10:36
*** matsuhashi has quit IRC10:41
sdagueyeh, proposal bot should maybe wait10:41
*** Longgeek has quit IRC10:41
*** matsuhashi has joined #openstack-infra10:41
*** Longgeek has joined #openstack-infra10:42
sdagueplus proposal bot is not really supposed to propose hacking10:43
ianwsdague: when you're in a reviewing mood, https://review.openstack.org/#/c/100131/ is a follow on from one you merged to stop rm'ing grenade logs.  i believe it's causing two copies of the logs now.  i think it's kind of logical to stick those logs in /grenade/10:44
sdagueianw: why? we don't do that with devstack logs on devstack runs10:44
*** Longgeek has quit IRC10:46
sdaguealso that seems to lose all the logs again10:47
ianwsdague: it's already making /opt/stack/logs/grenade and putting its localrc in there, so seems logical to have the logs next to it.10:47
ianwsdague: http://logs.openstack.org/52/99852/3/check/check-grenade-dsvm/3b67817/logs/grenade/ looks right?10:47
*** rdopieralski has quit IRC10:49
*** rdopieralski has joined #openstack-infra10:49
sdagueso I really don't think the grenade log should be further tucked down in a grenade run10:49
ianwi've gotten quite lost in all that copying stuff debugging the fedora jobs; personally i think https://review.openstack.org/#/c/99852/ makes things better, others may disagree...10:49
*** rdopieralski has quit IRC10:49
*** rdopieralski has joined #openstack-infra10:49
*** rdopieralski has quit IRC10:50
*** rdopieralski has joined #openstack-infra10:51
*** sarob_ has joined #openstack-infra10:51
*** rdopieralski is now known as rdopiera10:52
*** pblaho has joined #openstack-infra10:52
ianwsdague: well, you're ultimately the guy with the +2 :)  i feel like keeping things related together, but whatever10:54
ianwsdague: also, my internal ci is having some issues with the "convert from awk" patch10:54
ianwseems like something is hanging around and the ssh session i run devstack under isn't exiting cleanly10:55
*** sarob_ has quit IRC10:55
tkelseyHello -infra people, could I please get some more eyeballs on this https://review.openstack.org/#/c/97250/ many thanks.10:57
ianwsdague: stack     1759  1751 98 00:49 ?        09:59:32 python ./tools/outfilter.py -o /opt/stack/logs/stack.sh.log.2014-06-16-004946.2014-06-16-004946.summary10:57
*** Longgeek has joined #openstack-infra10:57
sdagueianw: ok, so perhaps there is a signal handler we need to trigger the exit ?10:58
ianwsdague: yeah, it's just sitting there in read(0, "", 4096)10:58
*** davidhadas___ has joined #openstack-infra11:03
ianwsorry, chopped, read(0...) = 011:03
ianwsdague: maybe i'm missing something, but where's the break from the "while True" readline() loop?11:03
sdagueyeh, so I thought that would signal down at the end, apparently not11:04
sdaguejust uploaded a new version that dumps out11:04
sdaguelets see11:04
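
A minimal sketch of the readline loop fix being discussed - illustrative, not the actual outfilter.py patch. readline() returns an empty string at EOF, so without a break the filter spins forever after the writer side of the pipe goes away, which matches the 98%-CPU process pasted above; timestamping in-process also avoids forking date for every line.

    import datetime
    import sys

    def main():
        while True:
            line = sys.stdin.readline()
            if not line:        # '' means EOF: the writer closed the pipe
                break           # without this the loop never exits
            stamp = datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f')
            sys.stdout.write('%s | %s' % (stamp, line))
            sys.stdout.flush()

    if __name__ == '__main__':
        main()
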
ianwsdague: ok, my tests eventually got going when the testing vm ran out of disk-space ... that raised an exception and got things moving :)  but it took about 10 hours11:05
*** davidhadas has quit IRC11:06
*** elitenudel2500 has joined #openstack-infra11:06
elitenudel2500Hey!11:06
elitenudel2500Code Review - Error Server Error Cannot store contact information11:07
elitenudel2500Anyone has any experience with solving this?11:07
elitenudel2500(signing up for ICLA)11:07
*** ominakov has joined #openstack-infra11:08
openstackgerritSean Dague proposed a change to openstack-infra/config: index grenade logs in elastic search  https://review.openstack.org/10021911:09
sdagueianw: can you try this latest patch locally?11:09
*** yamahata has quit IRC11:10
sdagueianw: interestingly enough, based on the times we see double time stamps, I think the new approach generates a lot less overhead because we aren't fork/execing date 100k times during a run.11:10
*** habib has quit IRC11:12
elitenudel2500oh, i just saw the MOTD11:13
elitenudel2500sorry :P11:13
ianwsdague: my bot is running the change now ... tests won't pass due to other issues but it should at least fail out in ~ 40 mins11:14
*** nosnos has quit IRC11:15
sdagueianw: cool11:17
sdagueianw: it's back in check queue with the new changes, locally that seems to do the right thing11:18
sdagueI guess in the gate the fact that we just kill everything means we didn't see it11:18
BobBalloh... was just going to ask... did you kill the gate just now? :) I can't find any evidence my job is in the gate queue https://review.openstack.org/#/c/100174/11:19
BobBalloh jeeze11:20
BobBallso I did a search and it wasn't there11:20
BobBallbut now it is11:20
BobBallignore me, please...11:20
ianwsdague: ok, so yeah rhel6.5 redhatci job with revision 5 failed, but it did exit out, which is good :)11:21
sdagueok, good :)11:22
sdaguewell then +1 on the devstack review would be good with that info11:23
*** tnurlygayanov has joined #openstack-infra11:23
ianwsdague: will do, i'll let redhatci post on that one.  i think if it was rebased to TOT rhel6.5/centos6.5/f20 would work with redhatci ... just rhel7 has some issues ATM11:25
sdagueianw: so next up, I was going to take some of the exit debugging in grenade and put them into a 'worlddump' utility in devstack11:25
sdaguethe process dump, disk free list, if there are other things about the state of the box you think would be handy to dump for debug purposes, let me know11:26
ianwsdague: so almost like redhat's sos or vmware's vm-support bundle?11:27
ianwyeah, that's all good stuff.  various ip link / route / namespace output is always helpful11:28
*** matsuhashi has quit IRC11:29
*** matsuhashi has joined #openstack-infra11:29
ianwalso something i remember vm-support doing was taking two snapshots 15 seconds apart or so, can help you know if things are coming or going11:29
ianw(not of everything, but process listing, etc)11:29
sdagueianw: yeh, basically. We've been inlining these things adhoc in the console, but that just confuses everything I think11:31
sdagueso I think we should dump them out to timestamped files, which you can go look at11:31
*** CaptTofu_ has joined #openstack-infra11:31
*** matsuhashi has quit IRC11:33
*** elitenudel2500 has quit IRC11:34
ianwsdague: yeah totally.  i've been toying with the idea of a bundle download of logs, it would often be helpful11:41
openstackgerritRadomir Dopieralski proposed a change to openstack/requirements: Add xstatic-packaged JavaScript libraries for Horizon  https://review.openstack.org/9779311:41
openstackgerritRadomir Dopieralski proposed a change to openstack/requirements: Add xstatic and xstatic-jquery for Horizon  https://review.openstack.org/9433711:41
ianw(please let me know if i'm just missing the existing way to do it :)11:41
sdagueso the bundle for download would probably be best done with a wsgi script11:43
sdaguemarkmc had requested that at summit11:44
*** ociuhandu has joined #openstack-infra11:45
ianwsdague: that was my thought too, no point keeping it.  i can look into it but i'll have to bootstrap myself on the log servers ... i'm sure it's all in the config repo somewhere :)11:45
*** mbacchi has joined #openstack-infra11:48
*** YorikSar has quit IRC11:49
*** YorikSar has joined #openstack-infra11:49
*** rcarrill` has joined #openstack-infra11:50
*** mugsie has joined #openstack-infra11:50
*** sarob_ has joined #openstack-infra11:51
*** rcarrillocruz has quit IRC11:52
*** habib has joined #openstack-infra11:55
*** sarob_ has quit IRC11:56
*** weshay has joined #openstack-infra11:58
openstackgerritMaxime Vidori proposed a change to openstack-infra/storyboard-webclient: Documentation improvement  https://review.openstack.org/9977511:59
*** weshay has quit IRC12:01
*** weshay has joined #openstack-infra12:02
openstackgerritSean Dague proposed a change to openstack-infra/devstack-gate: don't use screen for grenade  https://review.openstack.org/10022912:05
sdagueianw: yeh, os-loganalyze is a decent example12:05
*** lcostantino has joined #openstack-infra12:09
*** ArxCruz has joined #openstack-infra12:09
*** Hal_ has joined #openstack-infra12:10
openstackgerritSean Dague proposed a change to openstack/requirements: revert hacking to 0.8 series  https://review.openstack.org/10023112:10
sdaguettx: I'd like your take on that ^^^12:11
*** pblaho has quit IRC12:12
*** ildikov_ has joined #openstack-infra12:12
*** pblaho has joined #openstack-infra12:13
*** ildikov has quit IRC12:13
*** ildikov_ has quit IRC12:16
*** morganfainberg has quit IRC12:18
*** CaptTofu_ has quit IRC12:19
sdagueSergeyLukjanov: do you know how to do the force deleting delete thing? I wonder if we could get some more capacity back with that. Because we seem to have grown stale deleting nodes over the weekend12:19
*** morganfainberg has joined #openstack-infra12:19
*** _nadya_ has quit IRC12:20
SergeyLukjanovsdague, hey12:20
*** cody-somerville has joined #openstack-infra12:20
*** rfolco has joined #openstack-infra12:20
SergeyLukjanovsdague, "force deleting delete thing" of what?12:20
*** _nadya_ has joined #openstack-infra12:20
*** dkranz_afk has quit IRC12:22
*** yaguang has quit IRC12:22
*** jgallard has joined #openstack-infra12:23
*** yamahata has joined #openstack-infra12:23
*** davidhadas___ has quit IRC12:24
sdaguenodepool nodes12:25
*** andreykurilin_ has joined #openstack-infra12:26
*** dprince has joined #openstack-infra12:27
openstackgerritNikita Konovalov proposed a change to openstack-infra/storyboard: Update and freeze requirements  https://review.openstack.org/9849112:28
*** adalbas has joined #openstack-infra12:29
*** ArxCruz has quit IRC12:29
SergeyLukjanovsdague, oh, do you mean that slaves are being removed very slowly?12:29
sdagueyeh12:29
SergeyLukjanovsdague, I have no credentials to our clouds12:30
*** ArxCruz has joined #openstack-infra12:32
*** hdd_ has joined #openstack-infra12:33
*** miqui has joined #openstack-infra12:33
*** pblaho has quit IRC12:37
*** dkranz_afk has joined #openstack-infra12:38
*** pdmars has joined #openstack-infra12:38
*** CaptTofu_ has joined #openstack-infra12:39
*** ildikov has joined #openstack-infra12:39
openstackgerritA change was merged to openstack-infra/devstack-gate: Use root to write syslog from journalctl  https://review.openstack.org/9828312:40
*** pdmars has quit IRC12:40
openstackgerritA change was merged to openstack-infra/devstack-gate: Save sql logs  https://review.openstack.org/9431412:41
*** zhiyan_ is now known as zhiyan12:44
*** pblaho has joined #openstack-infra12:44
sdagueso I think the new grenade fails on services not starting is just because hp cloud 1.1 is so much slower - eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiTWVzc2FnaW5nVGltZW91dDogVGltZWQgb3V0IHdhaXRpbmcgZm9yIGEgcmVwbHkgdG8gbWVzc2FnZSBJRFwiIEFORCBtZXNzYWdlOlwibmV0d29ya1wiIEFORCB0YWdzOlwic2NyZWVuLW4tY3B1LnR4dFwiXG4iLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsIm9mZnNldCI6MCwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwibW9kZSI6InNjb3JlIiwiYW5hbHl612:45
sdagueZV9maWVsZCI6ImJ1aWxkX25vZGUifQ12:45
*** pdmars has joined #openstack-infra12:47
openstackgerritDan Prince proposed a change to openstack-infra/nodepool: Shuffle requests for fairer label allocation  https://review.openstack.org/8822312:50
*** sweston has quit IRC12:52
*** izinovik has joined #openstack-infra12:53
*** julim has joined #openstack-infra12:55
*** jgallard has quit IRC12:56
*** jgallard has joined #openstack-infra12:57
izinovikHello.  Is my problem with creating a review on gerrit (review.openstack.org) connected with the disabled launchpad openid?  Right now when I'm doing `git review' it bails out with this message: "fatal: ICLA contributor agreement requires current contact information.12:59
izinovikPlease review your contact information:12:59
izinovik  https://review.openstack.org/#/settings/contact12:59
izinovikfatal: The remote end hung up unexpectedly"12:59
*** jlibosva1 is now known as jlibosva12:59
*** smarcet has joined #openstack-infra12:59
*** sandywalsh has quit IRC13:01
*** jgallard has quit IRC13:04
*** sandywalsh has joined #openstack-infra13:04
*** CaptTofu_ has quit IRC13:04
*** jgallard has joined #openstack-infra13:04
*** CaptTofu_ has joined #openstack-infra13:05
*** basha has joined #openstack-infra13:05
ttxsdague: commented13:05
*** basha has quit IRC13:05
sdaguettx: ok :)13:06
sdagueI think it's terribleness personally13:06
sdaguettx: did you look at my comment with some of the impacts of a hacking upgrade?13:06
ttxsdague: ah now it came after13:07
ttxno*13:07
ttxrereading13:07
*** e0ne has quit IRC13:08
openstackgerritBoris Pavlovic proposed a change to openstack-infra/config: Add new project loggas  https://review.openstack.org/10024313:09
*** CaptTofu_ has quit IRC13:09
*** ominakov has quit IRC13:09
srenatusizinovik: I have seen that last week when the new person had not yet joined the openstack foundation with the same email address13:10
*** ominakov has joined #openstack-infra13:10
srenatusizinovik: https://wiki.openstack.org/wiki/GerritWorkflow#Account_Setup13:10
*** mfer has joined #openstack-infra13:10
*** chuckC has joined #openstack-infra13:10
ttxsdague: I tend to agree that this should be seen as gratuitous changes and therefore preferably pushed at low tide (probably between release and first milestone)... but I'm not sure it's a cycle issue. It's more of a gate load issue13:11
sdaguettx: they are coupled13:11
sdaguebecause it steals nodes from check13:11
*** dims has joined #openstack-infra13:12
sdaguethe reason there is a 1 hrs wait for check nodes now is because of hacking proposal to requirements jobs13:12
*** cody-somerville has quit IRC13:12
openstackgerritBoris Pavlovic proposed a change to openstack-infra/config: Add new project loggas  https://review.openstack.org/10024313:12
sdaguebut if no one else sees a problem, so be it13:13
ttxsdague: the slope is a bit dangerous here: we would classify changes by their level of gratuitousness and have a window where truly gratuitous changes would land in the cycle13:13
ttxI'm ok to single out hacking there13:13
ttxand have specific rules around it13:13
ttxbut I gear there will soon be no good moment for those13:13
ttxfear*13:13
sdaguethe right time is release week13:13
sdaguethere is also something else kind of wrong with it being in the requirements proposal bot right now13:14
ttxsdague: My point is... what you're actually after here is agreement on timing rules for hacking changes.13:14
sdagueyep13:14
*** penguinRaider has joined #openstack-infra13:14
sdagueand there is a mailing list thread on that13:14
ttxit's not a one-shot, it will come back13:15
sdaguewhich I started at the same time of proposing the req change13:15
sdaguebecause this is the same issue as the pep8 fatigue that we had, where we agreed one bump, first week of cycle, then we're done13:15
ttxsdague: oh we did ? Then yes, i would put that in the same bag13:16
ttxI think we are saying the same thing, we just mean slightly different things by "cycle"13:16
sdagueok13:16
ttxI don't think landing those now jeopardizes the release -- but I agree that we could have rules for optimizing landing time for gratuitous changes.13:17
ttxWill comment as such13:17
sdaguecool13:17
*** krtaylor has joined #openstack-infra13:18
sdaguethough given that we've got fixed resources for machines and people (reviewers) I'm not sure I understand how having another openstack project's worth of changes to get through doesn't impact release :)13:18
ttxsdague: it means less things get merged. But that's not necessarily a bad thing. It impacts our velocity of things you may consider more useful.13:19
*** pblaho is now known as pblaho|afk13:19
ttxbefore j2 it doesn't really affect release quality (as would landing them during RC time, which would affect bugfixing, or during j-3, which would affect feature completeness)13:20
ttxIt would definitely mean less features merged.13:20
sdagueit impacts bug fixing now13:21
ttxas much as adding features.13:21
sdaguesure13:21
*** e0ne has joined #openstack-infra13:21
sdaguebut for instance - http://status.openstack.org/elastic-recheck/gate.html - the top gate reset bug is something that I think I have a theory on13:22
sdaguebut I'll be waiting about 3 hrs to get the first results back13:22
*** _nadya_ has quit IRC13:23
ttxright, it impacts our velocity at doing more useful things. But I would not say it jeopardizes the "release" that much (understood as releasing something coherent)13:23
*** james_li has joined #openstack-infra13:24
ttxit impacts how much features we would ship in the release for sure13:24
ttxbut that's not a quality metric for me13:24
*** trinaths has joined #openstack-infra13:25
sdagueok13:25
ttxanyway, i think we are saying the same thing.13:25
sdagueprobably13:26
*** crc32 has joined #openstack-infra13:26
*** pblaho|afk is now known as pblaho13:26
ttxit's just that I don't think "screwing up the release" is the best argument. "Gratuitously impacting our velocity so it should be put in same bag as pep8" sounds better to me13:27
ttxnow again, my answer would be different if we were past j2.13:27
sdaguethat's going to happen pretty quick13:27
sdagueand once we're past j2, based on current capacity issues, patch round trips are going to take a couple of days13:27
ttxsdague: If that's the impact, I'm starting to question the value of upgrading hacking AT ALL13:28
ttxif it really screws up a whole development milestone timeframe, is the upgrade worth it ?13:28
sdagueyeh, my back of the envelope math is there is no way this stuff gets through in j213:29
sdagueas most projects haven't started yet13:29
sdaguettx: yeh, honestly, I'm in the same sort of boat. The impact of this one is it drags in a new flake813:30
sdaguewhich means a ton of new rules from upstream13:30
ttxhaven't looked into what it brings, but we may have passed a size where style checking rules need to be frozen because closing the gap on upgrade is just too costly13:30
sdaguehttp://logs.openstack.org/68/96268/12/check/gate-python-novaclient-pep8/95a7699/console.html13:31
sdaguethat's a pretty good example13:31
*** otherwiseguy has joined #openstack-infra13:32
ttxI'll try to find the thread and comment there13:32
sdagueactually, that's mostly self inflicted now that I see it13:33
*** medieval1 has joined #openstack-infra13:33
*** jgrimm_ has joined #openstack-infra13:34
*** terryw has quit IRC13:35
*** CaptTofu_ has joined #openstack-infra13:39
*** jp_at_hp has quit IRC13:44
*** eharney has joined #openstack-infra13:45
openstackgerritA change was merged to openstack-infra/storyboard: Update method improved in db api  https://review.openstack.org/9876013:45
*** jp_at_hp has joined #openstack-infra13:47
openstackgerritMatthew Treinish proposed a change to openstack-infra/config: Add tempest jobs with nova-v3 enabled  https://review.openstack.org/9983513:51
*** sarob_ has joined #openstack-infra13:51
*** pdmars has quit IRC13:52
*** pdmars has joined #openstack-infra13:53
*** radez_g0n3 is now known as radez13:53
*** homeless has joined #openstack-infra13:53
*** sarob_ has quit IRC13:55
*** e0ne_ has joined #openstack-infra13:56
*** trinaths has quit IRC13:56
*** hdd_ has quit IRC13:57
*** maxbit has joined #openstack-infra13:58
*** e0ne has quit IRC13:59
openstackgerritA change was merged to openstack-infra/storyboard-webclient: Added field restrictions and error messages to project forms  https://review.openstack.org/9587313:59
*** mriedem has joined #openstack-infra13:59
*** otherwiseguy has quit IRC13:59
*** jistr has quit IRC13:59
*** ildikov has quit IRC14:01
*** jistr has joined #openstack-infra14:01
openstackgerritVictor Sergeyev proposed a change to openstack/requirements: Add oslo.db library  https://review.openstack.org/9140714:02
*** alugovoi has joined #openstack-infra14:03
*** markmcclain has joined #openstack-infra14:04
*** koolhead17 has quit IRC14:05
*** cody-somerville has joined #openstack-infra14:05
openstackgerritBoris Pavlovic proposed a change to openstack-infra/config: Add new project loggas  https://review.openstack.org/10024314:05
*** markmcclain1 has joined #openstack-infra14:06
*** dhellman_ has joined #openstack-infra14:06
*** dhellman_ has quit IRC14:06
*** zz_gondoi is now known as gondoi14:07
*** markmcclain has quit IRC14:08
openstackgerritAlexandre Viau proposed a change to openstack-infra/config: Added the Surveil project to gerritbot, zuul and stackforge config  https://review.openstack.org/9974614:10
*** dhellmann has quit IRC14:12
*** dhellmann has joined #openstack-infra14:12
*** otherwiseguy has joined #openstack-infra14:14
*** afazekas_ has quit IRC14:16
*** crc32 has quit IRC14:18
*** otherwiseguy has quit IRC14:20
*** terryw has joined #openstack-infra14:20
*** gargola has joined #openstack-infra14:23
*** gargola has quit IRC14:25
*** gargola has joined #openstack-infra14:27
*** terryw has quit IRC14:27
sdaguefungi / clarkb: it would be good to quick approve this today - https://review.openstack.org/#/c/100219/ so I can start pruning grenade output from the console14:27
fungisdague: i can try, though on a wireless modem watching the movers pack things so not really paying attention14:28
sdaguesure14:28
sdagueit's just an indexing rule for ES14:28
sdagueso should be low risk14:29
fungisdague: looks simple enough, agreed14:29
*** rwsu has joined #openstack-infra14:29
fungisdague: size of those log files should be fairly small, right?14:30
fungiwe're not going to overwhelm logstash/elasticsearch by adding them presumably?14:30
fungiapproved. i guess it can be reverted if it does for some reason14:32
*** _nadya_ has joined #openstack-infra14:32
fungiassuming the only things it contains are the same lines echoed into the console currently, and you'll be culling those shortly thereafter, net impact should be near zero14:33
*** cp16net_ has joined #openstack-infra14:34
*** chandan_kumar has joined #openstack-infra14:36
*** W00die has joined #openstack-infra14:36
*** doug-fish has joined #openstack-infra14:37
*** wenlock_ has joined #openstack-infra14:38
sdaguefungi: yeh, my hope is to have all the pieces in over the next 2 days14:38
*** dkliban_afk is now known as dkliban14:38
anteayafungi: does anyone else have the ability to run your magic delete the nodes scripts, that you ran on Friday?14:38
openstackgerritA change was merged to openstack-infra/config: index grenade logs in elastic search  https://review.openstack.org/10021914:38
sdaguethe problem is that it's grenade, devstack, and d-g changes that all need to land in a specific order14:38
*** amotoki_ has joined #openstack-infra14:39
sdaguemost of which are written, a few are not14:39
sdaguewhat's our rax quota anyway?14:39
fungianteaya: there's nothing magic. i was just filtering nodepool list output for nodes in a delete state longer than an hour and passing their node ids to nodepool delete --now14:39
sdaguebecause I was suprised how few rax nodes where in my timing lists14:40
fungianteaya: anyone with root access on nodepool.o.o (i.e. infra root admins) can do that14:40
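
A rough sketch of the cleanup described above - the column positions in the "nodepool list" table and the age units are assumptions. It collects node ids that have sat in the "delete" state for more than an hour and hands them to "nodepool delete --now".

    import subprocess

    def stuck_delete_nodes(max_age_hours=1.0):
        out = subprocess.check_output(['nodepool', 'list'],
                                      universal_newlines=True)
        stuck = []
        for line in out.splitlines():
            cols = [c.strip() for c in line.split('|')]
            if len(cols) < 4 or not cols[1].isdigit():
                continue                           # table borders / header row
            node_id, state = cols[1], cols[-3]     # assumed column layout
            try:
                age_hours = float(cols[-2])        # assumed to be in hours
            except ValueError:
                continue
            if state == 'delete' and age_hours > max_age_hours:
                stuck.append(node_id)
        return stuck

    for node_id in stuck_delete_nodes():
        subprocess.call(['nodepool', 'delete', '--now', node_id])
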
*** kashyap has joined #openstack-infra14:40
fungisdague: pretty low. you can see the max-servers counts in nodepool.yaml.erb14:41
fungiwell, i say "pretty low" but really only low in relation to hpcloud14:41
sdaguecan we get that bumped?14:41
sdagueespecially as the rax nodes are faster than the hp cloud ones now14:41
anteayafungi: awesome, thanks, since it looks like we may need to do that before too long14:41
fungithey lowered it recently due to capacity issues, but mordred might be able to sweet talk some more out of pvo14:41
*** esker has joined #openstack-infra14:41
fungipossibly in a different region under less load14:42
*** ildikov has joined #openstack-infra14:42
sdagueok14:44
*** W00die has quit IRC14:45
sdagueok, I'm going to go get a bike ride before it gets too too hot today14:45
anteayasdague: happy cycling14:46
jeblairgood morning14:46
anteayagooooooood morning, jeblair14:46
anteayawelcome back14:46
anteayaand welcome back14:46
sdaguethis is probably a key fix for grenade failures, hopefully check results are in by the time I get back - https://review.openstack.org/#/c/100253/14:46
mordredaroo?14:46
mordredmorning jeblair !14:47
* jeblair starts gertty (with 212 sync actions)14:48
mordredjeblair: while you were gone, everything broke. welcome back - you can fix it all now!14:48
jeblairmordred: i don't think i can fix launchpad openid14:48
mordredjeblair: blasphemy14:48
*** basha has joined #openstack-infra14:49
*** alugovoi has quit IRC14:50
*** prad_ has joined #openstack-infra14:50
*** sarob_ has joined #openstack-infra14:51
*** CaptTof__ has joined #openstack-infra14:54
*** CaptTofu_ has quit IRC14:55
*** rkukura has quit IRC14:56
*** sarob_ has quit IRC14:56
*** W00die has joined #openstack-infra14:57
openstackgerritA change was merged to openstack-infra/storyboard-webclient: Including UX Feedback on menu and nav.  https://review.openstack.org/9920914:57
*** jistr has quit IRC14:58
*** jistr has joined #openstack-infra14:59
*** sarob_ has joined #openstack-infra14:59
*** hogepodge has joined #openstack-infra15:02
*** _nadya_ has quit IRC15:03
*** ihrachyshka has quit IRC15:03
*** dkranz_afk has quit IRC15:05
*** dizquierdo has quit IRC15:05
*** changbl has joined #openstack-infra15:06
*** yfried has quit IRC15:06
SergeyLukjanovjeblair, hey15:07
*** pcrews has joined #openstack-infra15:07
jeblairSergeyLukjanov: o/15:07
jeblairstoryboard meeting in #openstack-meeting-315:08
*** dkranz_afk has joined #openstack-infra15:08
*** atiwari has joined #openstack-infra15:08
*** rkukura has joined #openstack-infra15:08
*** Guest8031 is now known as mgagne15:08
*** mgagne has joined #openstack-infra15:08
*** enikanorov_ has joined #openstack-infra15:09
*** julim has quit IRC15:12
jeblairheh, gertty is now "down to" 333 sync tasks (from 212)15:13
openstackgerrityolanda.robla proposed a change to openstack-infra/nodepool: Build images using diskimage-builder  https://review.openstack.org/4648215:14
*** sarob_ has quit IRC15:16
*** moted has joined #openstack-infra15:16
*** sarob_ has joined #openstack-infra15:16
*** alugovoi has joined #openstack-infra15:16
*** jaypipes has joined #openstack-infra15:20
*** sarob_ has quit IRC15:21
*** zul has quit IRC15:22
*** zul has joined #openstack-infra15:23
*** dangers_away is now known as dangers15:27
*** markmc has quit IRC15:28
*** basha has quit IRC15:28
*** enikanorov__ has joined #openstack-infra15:29
*** dkehn_ is now known as dkehnx15:29
*** alexpilotti has joined #openstack-infra15:30
*** enikanorov_ has quit IRC15:32
*** gondoi is now known as zz_gondoi15:34
*** IvanBerezovskiy has left #openstack-infra15:38
*** afazekas_ has joined #openstack-infra15:38
*** rdopiera has quit IRC15:40
*** denis_makogon_ has joined #openstack-infra15:41
*** denis_makogon has quit IRC15:42
*** denis_makogon_ is now known as denis_makogon15:42
*** dmakogon_ has joined #openstack-infra15:42
clarkbo/ fyi I will probably not be super here today between cold and world cup15:45
*** unicell has joined #openstack-infra15:47
anteaya:(15:47
anteayatake care of your cold15:47
anteayaclarkb: can you filter nodepool list output for nodes that have been deleting for longer than an hour and pass their node ids to nodepool delete --now15:49
anteayawe see to be in constant delete state like last week15:49
anteayas/see/seem15:49
anteayaboris-42: is this you? https://review.openstack.org/#/c/100243/315:50
*** kmartin has joined #openstack-infra15:51
*** basha has joined #openstack-infra15:52
*** zhiyan is now known as zhiyan_15:52
*** bearhands is now known as comstud15:53
openstackgerritAna Krivokapic proposed a change to openstack/requirements: Add Pint for Horizon  https://review.openstack.org/9722415:54
*** basha has quit IRC15:55
*** jcoufal has quit IRC15:56
*** sweston has joined #openstack-infra15:57
*** jlibosva has quit IRC15:57
*** ominakov has quit IRC15:59
*** dkliban is now known as dkliban_brb15:59
*** Longgeek has quit IRC16:00
krotscheckSo should we permit projects to be created that don’t have repositories? Or is it ok to create repositories for projects with the knowledge that they’ll never be used? (Context: UX wants access to storyboard, storyboard uses review.projects.yaml to preload, which also creates git repos).16:02
krotscheckThird option: Create a storyboard.projects.yaml file16:02
*** otherwiseguy has joined #openstack-infra16:03
*** e0ne_ has quit IRC16:03
rainyajeblair, are you in this morning sir?16:05
clarkbkrotscheck: I think it is ok. lp for exanple has trackers for things without repos16:05
*** freyes has quit IRC16:06
*** afazekas_ has quit IRC16:06
krotscheckclarkb: I don’t really have an opinion, it’s just that the discussion petered out before a consensus was reached.16:06
*** marun has joined #openstack-infra16:06
*** pblaho is now known as pblaho|afk16:07
*** otherwiseguy has quit IRC16:07
*** hemna_ is now known as hemna16:08
*** arnaud__ has joined #openstack-infra16:08
zaromorning16:08
anteayamorning zaro16:08
mordredclarkb: yeah - we don't have jeepyb knowledge of things not having repos right now16:09
mordredgooooooooooooooooooooooooooooooooooooooooool16:11
zaromordred: ansible16:11
morganfainbergmordred, hehe16:11
zaromorganfainberg: is your suggestion to replace jenkins with ansible?16:11
mordredzaro: no. not at all16:12
zarosorry meant for mordred ^16:12
morganfainbergzaro, i'm used to that by now. :P16:12
mordredzaro: it's simply that the yaml format for describing tasks and actions is SO CLOSE16:12
mordredthat if we aligned it somehow, it might leave us with yaml chunks that could be used by either16:12
*** pblaho|afk is now known as pblaho16:12
*** jergerber has joined #openstack-infra16:12
mordredzaro: now, there _could_ be opportunities for making use of them in ways that jhesketh suggested in the email16:15
*** blamar has joined #openstack-infra16:16
mordredzaro: but I think first step is just figuring out if effort towards alignment is meaningful in any way16:16
zaromordred: i must have missed his email.  will take a look.  thanks.16:18
*** marcoemorais has joined #openstack-infra16:20
*** dkranz_afk is now known as dkranz16:23
*** gyee has joined #openstack-infra16:26
anteayakrtaylor: going to be at today's third-party meeting?16:28
*** jgallard has quit IRC16:31
*** shayneburgess has joined #openstack-infra16:31
*** alugovoi has quit IRC16:31
*** blamar has quit IRC16:31
mordredgooooooooooooooooooooooooooooool16:32
anteayacan we get some nodes starting to be deleted please?16:33
*** marcoemorais has quit IRC16:33
anteayathe nodes that have been deleting for over an hour?16:33
*** marcoemorais has joined #openstack-infra16:33
*** markmcclain1 has quit IRC16:33
anteaya186 in check16:33
*** marcoemorais has quit IRC16:33
*** basha has joined #openstack-infra16:34
*** marcoemorais has joined #openstack-infra16:34
*** basha has quit IRC16:35
*** praneshp has joined #openstack-infra16:36
*** trinaths has joined #openstack-infra16:36
*** sweston has quit IRC16:37
*** jpich has quit IRC16:37
*** tcammann has quit IRC16:38
*** zehicle_at_dell has quit IRC16:39
jogoanteaya: good news is the nova fix for this is in the gate16:39
anteayajogo: woooo16:39
jogoanteaya: so all we need is rax to land the patch16:39
anteayayay16:40
jogoanteaya: wich may be a while :/16:40
anteayabefore the end of the week do you think?16:40
anteayaI don't know how long it takes for rax to consume master16:40
jogoanteaya: doubtful16:40
jogocomstud said up to a month16:40
anteayawell progress anyway16:40
anteayaouch16:40
*** pblaho is now known as pblaho|afk16:40
*** zzelle_ has joined #openstack-infra16:40
jogoanteaya: do you know of any plans to get more quota in general?16:40
Alex_GaynorThe only nova change I see in the gate is a deprecation warning, is that causing big issues for some reasons?16:40
anteayasomething happened at 0:00utc saturday16:40
jogoanteaya: yeah, they may try to speed it up16:41
anteayaall the nodes stuck in delete state all of a sudden ceased at 0:00 utc saturday16:41
anteayait was really odd16:41
anteayaI haven't seen any mention that anyone from rax checked logs to see why that was16:41
*** dkliban_brb is now known as dkliban16:42
anteayain backscroll I think sdague asked monty if he could ask for more quota but if there was a response from monty, I didn't see it16:42
jogoanteaya: so a 'nova show' on the nodes that broke will give some insight16:42
clarkbI doubt we get more quota from hp16:43
clarkbwe are already pushing the limits on what they give us16:43
anteayajogo: it is failing in check16:43
jogoclarkb: too bad16:43
jogoanteaya: ?16:43
anteayahttps://review.openstack.org/#/c/99796/16:43
anteayalook at it, I see it as failing in the check queue16:44
jogoanteaya: thanks16:44
anteayanp16:44
*** pblaho|afk is now known as pblaho16:44
jogowell I guess it will be a while before this lands then16:44
anteaya:(16:44
*** Hal_ has quit IRC16:45
*** pblaho has quit IRC16:45
jogoclarkb: are there any thoughts around making the check queue two stages16:45
clarkbI haven't, no.16:45
jogoshort running !integration jobs -> integration jobs.16:45
jogothat gives folks less useful feedback if they fail pep8 or forget to update a unittest16:46
mordredgoooooooooooooooooooooooooooooooooooooooooooooooooooooooool16:46
jogobut would free up some jobs16:46
jogonot sure if the tradeoff is worth it16:46
clarkbjogo they could just run them locally...16:46
mordredclarkb: I agree- I do not think we're getting any more quota from hp or rax right now16:46
krtayloranteaya, yes, I'll be there, need anything?16:46
*** jistr has quit IRC16:46
mordredanteaya: ^^16:46
jogoclarkb: and if they don't, we don't spend hours of machine time on their patch16:47
jogomordred: thoughts ^16:47
anteayakrtaylor: just wanted to know if you would be there16:47
anteayathanks16:47
jogomordred: too bad we can't get more quota we seem to be maxing out our quota every day now16:48
anteayamordred can either you or clark make nodepool delete stuck nodes?16:48
anteayaplease16:48
mordredjogo: we've gone back and forth on running non-integration patches first16:48
jogomordred: and what is the latest?16:49
mordredjogo: I'm usually the champion of the idea, and I usually wind up being wrong16:49
*** flaviof has joined #openstack-infra16:49
clarkbproblem with it is you end up doing more iterations16:49
clarkbresulting in more cpu time16:49
mordredjogo:  the biggest issue is that the unittests take as long as the integration tests (which is crazypants and should diaf btw - unittests should never take more than 5 minutes really)16:49
Alex_GaynorI think it also promotes people not running tests locally, and incentivizing that seems wrong16:49
mordredjogo: but they do - which means we double the best time through the gate16:49
clarkbanteaya: I have not yet booted a machine with keys16:49
sdagueclarkb: do you think that's still true?16:49
anteayaclarkb: ah16:50
mordredAlex_Gaynor: one of the problems is that we have a ton of quasi-functional tests in our unittests16:50
jogoclarkb: wow neutron takes 24 minutes16:50
anteayaclarkb: we need some nodes freed up to run tests16:50
mordredAlex_Gaynor: what I'd love to see is for more projects to break out functional and unittests16:50
sdagueAlex_Gaynor: it's also not clear that the current incentives create the behavior that we think we want16:50
mordredand then run functional tests against an install similar to how the swift functional tests go16:50
jogomordred: so we could do this for check queue only and not gate. but your argument is still valid16:50
flaviofhi folks. Trying to submit a change but it failed due to something unrelated to my patch: https://review.openstack.org/#/c/100130/16:50
*** ramashri has joined #openstack-infra16:51
flavioflogs: http://people.redhat.com/~iwienand/100130/16:51
*** melwitt has joined #openstack-infra16:51
*** matjazp has joined #openstack-infra16:51
flaviofis there someone in oslab in this channel who could help me?16:51
mordredAlex_Gaynor: but that's not going to happen anytime soon16:51
trinathsfungi: Hi16:52
anteayatrinaths: he is busy moving this week16:52
anteayatrinaths: what is on your mind16:52
*** sweston has joined #openstack-infra16:53
anteayaflaviof: red hat ci is ianw16:53
jogoclarkb: so you are only right for some of the bigger projects16:53
anteayapaging ianw16:53
flaviofanteaya: thanks16:53
jogoclarkb: look at 98868,5 or 98918,416:53
anteayaflaviof: np16:53
jogoin check queue16:53
*** james_li has quit IRC16:53
trinathsanteaya: okay. a question on improving the performance of CI and some guidelines on multi-node setup for CI16:54
anteayatrinaths: do you feel like bringing them up at the third party meeting?16:54
trinathsanteaya: yes.16:55
*** ociuhandu_ has joined #openstack-infra16:55
anteayawhich starts in an hour and will benefit others?16:55
*** ociuhandu has quit IRC16:55
*** ociuhandu_ is now known as ociuhandu16:55
*** sweston has quit IRC16:55
jogoAlex_Gaynor: so I can see how the incentives can be wrong,  you fail unit tests and now get really quick results. we can always just delay the results if this becomes an issue16:55
anteayagreat, do add an agenda item to the agenda16:55
trinathsanteaya: sure. kindly please add this to the agenda16:55
anteayatrinaths: you can do that16:55
anteayatrinaths: you know what you want to discuss16:55
trinathsanteaya: okay. can I edit that page ??16:55
anteayayes16:55
trinathsanteaya,https://wiki.openstack.org/wiki/Meetings/ThirdParty16:56
anteayain the top right corner you should see a button allowing you to sign in16:56
anteayause the same email and password as signing into gerrit16:56
jogoclarkb: also I don't think this would make things slower in the check queue all the time16:56
trinathsanteaya: yes doing16:56
*** alugovoi has joined #openstack-infra16:56
anteayatrinaths: k16:56
jogoclarkb: if you drop down to the bottom of the check queue many jobs have finished their !integration testing and are waiting for tempest nodes16:56
clarkbjogo: the issue is in aggregate pipeline throughput16:57
*** sarob has joined #openstack-infra16:57
*** sweston has joined #openstack-infra16:57
*** _nadya_ has joined #openstack-infra16:57
clarkbas a reviewer and code submitter I want a full picture upfront to avoid fixing lots of little issues in lots of patchsets16:57
jogoclarkb: if we assume that we will be using 100% of our quota at all times, I think on average things will be faster16:57
jogoclarkb: you should just run your unit and style checks locally first anyway16:57
clarkbjogo: agreed16:58
clarkbso this shouldn't matter :)16:58
jogoclarkb: I agree that I would prefer to get all the jobs, but we have a finite number of cloud16:58
mordredexcept16:58
clarkbmaybe that needs to be a review item16:58
clarkb-1 did not run tests locally16:58
mordredany time we say "you should ..."16:58
mordredwe fail16:58
jogomordred: true16:58
clarkbmordred: right. but I am asserting the current state is better than the proposed state16:59
jogoclarkb: I am not convinced either way16:59
mordrednot agreeing or disagreeing with that - just making sure we don't fall down the trap of expecting different behavior from our devs16:59
jogoclarkb: if we did this for 98909,416:59
mordredI believe that it is worth doing the mental exercise to consider what needs to happen to exist in a world where we are at max quota at all times16:59
jogowe would have 20 extra nodes17:00
jogomordred: ++17:00
mordredbecause thus far we have operated under the assumption that we can throw more nodes at the problem17:00
mordredif we have reached the point where that is no longer an assumption, it warrants careful consideration17:00
jogomordred: well said.17:00
clarkbjogo: you would have 20 extra nodes now17:00
trinathsanteaya: done17:01
clarkbjogo: but then they would push a patch to fix pep8 and py26 py27 fails17:01
anteayatrinaths: thank you17:01
clarkbthen you push a patch to fix py26 but py27 fails17:01
clarkbthen you push a patch to fix py27 and now devstack fails17:01
*** shayneburgess has quit IRC17:01
jogoclarkb: perhaps, as mordred said I think we need to do the full thought exercise17:01
clarkbyou fix devstack but now you find that you don't work with neutron17:01
clarkbso you fix neutron17:01
*** habib has quit IRC17:01
clarkband now you have done 5 round trips when one would have been sufficient17:01
anteayatrinaths: no as chair of the meeting, I may move agenda items around a bit, but if it at least posted somewhere in the agenda, we will get to it (or try based on time)17:01
anteayas/no as chair/now as chiar17:02
clarkbmordred: I agree17:02
anteayachair17:02
mordredclarkb: what if ...17:02
anteayaI can't spell17:02
*** matjazp has left #openstack-infra17:02
mordredclarkb: we added smarts (*waves hands*) such that the first iteration of any patch gets submitted to all of the things immediately17:02
jogoclarkb: and how would your scenario play out currently?17:02
mordredbut if, at _any_ point, you fail pep8 or unittests17:02
mordredthat change gets put into a category which requires it goes through two-stage verification17:03
mordredof running pep8/unit first, then integration17:03
clarkbjogo: all of those tests would fail together and you can address them in one new patchset17:03
mordredso basically, "run them locally because we're going to put you in jail if you don't"17:03
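[A minimal sketch of the "jail" idea mordred floats above: the first revision of a change runs everything at once, but once a change has failed pep8 or unit tests, later revisions go through a two-stage check where style/unit jobs must pass before integration jobs run. The data model and function names are hypothetical, not actual Zuul behaviour.]

    def pipelines_for(change, failed_fast_jobs_before):
        fast_jobs = ['gate-pep8', 'gate-python26', 'gate-python27']
        integration_jobs = ['gate-tempest-dsvm-full', 'gate-grenade-dsvm']

        if not failed_fast_jobs_before:
            # First offence free: run everything in parallel.
            return [fast_jobs + integration_jobs]
        # Otherwise: two stages, integration only after the fast jobs pass.
        return [fast_jobs, integration_jobs]

    if __name__ == '__main__':
        print(pipelines_for('99796,1', failed_fast_jobs_before=False))
        print(pipelines_for('99796,2', failed_fast_jobs_before=True))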
jogoso what I am saying is:17:03
sdaguemordred: so we could do this another way17:03
*** sparkycollier has joined #openstack-infra17:03
jogopep8,py26,py27 fail. you see all of those together17:03
trinathsanteaya: ok17:03
sdaguewhich is implement priority flag on patches17:03
jogothen once pass those, you run all of integration17:04
sdaguebecause the biggest issue about a full pipeline is fixes that actually help the full pipeline can't get cpu time in a timely manner17:04
sdagueand then it turns in to the manual promote process17:04
clarkbsdague: ++17:04
*** shayneburgess has joined #openstack-infra17:04
sdaguewhich isn't a process17:04
sdagueit's super adhoc17:04
clarkbwhich was intentional at the time17:05
*** marcoemorais has quit IRC17:05
clarkbbut maybe experience shows it sucks17:05
clarkband we need something else17:05
jogosdague: so this is a mostly orthogonal issue no?17:05
*** marcoemorais has joined #openstack-infra17:05
sdaguejogo: maybe17:05
sdagueexcept if critical fixes can still get through in a timely manner17:05
*** marcoemorais has quit IRC17:05
mordredI think they're different - one is about the general case of trying to not waste resources17:05
sdaguethan the queue grinding for everything else becomes less urgent17:05
*** marcoemorais has joined #openstack-infra17:06
clarkbanteaya: the first node I am attempting to delete is taking its sweet time...17:06
mordredI think queue grinding is still quite urgent, even if it's not 'important' patches17:06
mordredbecause that's still 48 hour merge windows for developers17:06
sdaguemordred: sure17:06
*** amotoki_ has quit IRC17:06
mordredI don't disagree that something like what you're saying couldn't be helpful17:06
*** marcoemorais has quit IRC17:06
mordredwait.17:06
sdaguebut honestly, I think we fix that one by throwing out tests17:06
mordredthat's too many double negatives17:07
mordredsdague: I think we probably do _many_ things17:07
*** marcoemorais has joined #openstack-infra17:07
jogomordred: I agree queue grinding is urgent. it slows down our velocity17:07
sdaguemordred: sure17:07
*** yfried has joined #openstack-infra17:07
anteayaclarkb: I think fungi uses a loop17:07
mordredwhich is why I was saying earlier that I believe since our operating assumptions may have changed, that we may need to do a deep dive on design17:07
clarkbanteaya: yeah I was testing first17:07
jeblairwe could disable clean check or lengthen the window17:07
anteayaclarkb: then if one is stubborn it moves on to easier pickings and comes back again later17:07
anteayaclarkb: ah17:07
anteayaclarkb: stubborn nodes17:08
jogomordred: a great topic to hash out in Germany17:08
mordredsince "throw more nodes at it" may have hit its ceiling17:08
jeblairanteaya, clarkb: that's what nodepool itself does...17:08
sdaguejeblair: I think both of those things make the situation worse, no?17:08
clarkbjeblair: yes, the apparent issue is something in nova17:08
clarkbjeblair: and it confuses nodepool17:08
clarkbfix is proposed to nova so we should see it arrive at some point in the future at rax17:08
*** ilyashakhat__ has joined #openstack-infra17:09
jogojeblair: so looking at status.o.o/zuul now, top of gate is <2 hours old17:09
jogobut top of check is 5 hours old, and we have 182 jobs in check17:10
clarkbthough I didn't follow the conversation about deleting nodes too closely as I was dealing with ES17:10
*** trinaths has quit IRC17:10
jogoso maybe running less check jobs would help17:10
jogoclarkb: TL;DR if a node is in deleting(error) you cannot delete it right now17:10
sdaguejogo: so the spike for the check queue started with proposal bot proposing hacking 0.9 to all the projects again, because some other requirement landed17:12
sdaguethus creating 40 - 50 patches in check failing pep817:12
jogosdague: that will happen with every requirements check change no?17:12
sdaguejogo: probably17:12
sdaguewhich is terrible17:12
jogoyes17:12
sdagueand why I think we should roll back hacking17:13
sdaguebut I seem to be the only one of that opinion17:13
jogothat argument can be made for every requirements change17:13
jogosdague: that spike may also be because of the weekend ending etc17:14
sdaguejogo: no, it really wasn't17:14
sdagueI was awake when it happened17:14
sdaguethe entire check queue filled up all at once, and it was all hacking proposal17:14
jogobut it's not full of them now, is it?17:15
openstackgerritSpencer Krum proposed a change to openstack-infra/config: Logstash: Modifying rewrite rules to allow kibana 3  https://review.openstack.org/9316717:15
sdagueprobably not17:16
sdagueso it looks like a concrete issue right now is we don't have any: check-tempest-dsvm-f20 nodes17:17
*** trinaths has joined #openstack-infra17:17
sdaguewhich means we can't complete devstack patches17:17
*** ociuhandu has quit IRC17:19
*** UtahDave has joined #openstack-infra17:20
*** basha has joined #openstack-infra17:20
*** markmcclain has joined #openstack-infra17:22
openstackgerritAlexandre Viau proposed a change to openstack-infra/config: Fixed typo: 'project.yaml' -> 'projects.yaml'  https://review.openstack.org/10030517:24
boris-42anteaya yep it's me17:26
boris-42anteaya thanks for review17:26
anteayaokay17:26
sdagueanyone able to look into nodepool about the fact that there are no: devstack-f20 nodes in rotation?17:26
anteayaboris-42: a lot of stackforge projects are requiring folks to sign the cla and I don't know if that _is_ what you want or if you just copy/pasted17:26
boris-42anteaya I want17:27
boris-42anteaya this project should be a part of integrated program17:27
boris-42anteaya cause in case of success all other projects will use this service17:27
bodepdhow do you guys handle generating/storing private keys that are configured via Puppet?17:27
boris-42anteaya so yep I would like to force CLA17:27
anteayaboris-42: great, state as such in a comment on the patch so we can keep that in mind in subsequent reviews17:28
anteayaboris-42: thanks for making that clear17:28
boris-42anteaya sure17:28
boris-42anteaya btw17:28
boris-42anteaya one question17:28
*** Ryan_Lane has joined #openstack-infra17:28
boris-42anteaya why only logaas-ptl should be able to push tags?17:28
openstackgerritSpencer Krum proposed a change to openstack-infra/config: Logstash: Modifying rewrite rules to allow kibana 3  https://review.openstack.org/9316717:29
jeblairi just logged in with lp openid no probs17:29
boris-42anteaya maybe it's okay if the whole core team is able to do that?17:29
anteayajeblair: yes that got fixed on sunday, the issue now is with statusbot17:30
jeblairanteaya: does that mean it's fixed or is there a more specific problem?17:30
jeblairoh17:30
jeblairanteaya: is anyone working on that?17:30
anteayajeblair: we can't change the channels' status back17:30
clarkbsdague: I think it has to do with the node allocator predominantly allocating precise nodes because that is what we need the most of17:30
clarkbjeblair: ^17:30
clarkbsdague: unfortunately when we say boot 2 nodes the chances of both of them failing are high17:30
*** _nadya_ has quit IRC17:30
anteayajeblair: other than knowing it is an issue, I don't know of anyone working on that, no17:30
clarkbwhich makes the allocator even worse17:30
jeblairanteaya: wow.  i think that's a super critical issue17:31
anteayajeblair: kk, glad you are back then17:31
*** julim has joined #openstack-infra17:31
jeblairanteaya: i'm going to drop what i'm doing and fix that17:31
anteayajeblair: kk17:31
jeblairi'm kind of surprised no one else thinks it is17:31
anteayajeblair: well can any non-core do anything to help?17:31
anteayaIll do what I can to help if there is anything I can do17:31
clarkbjeblair: lots of non core can change topics17:32
clarkber anteaya ^17:32
anteayaI don't have channel permissions17:32
jeblairclarkb: fixing them one at a time would be a very bad solution17:32
anteayaor a list of channel topics before it was changed17:32
jeblairclarkb: because i'm about to change them all back regardless of any that may have been fixed one at a time17:32
anteayaboris-42: so yes, tagging is best done by one person17:32
boris-42anteaya heh okay17:33
*** james_li has joined #openstack-infra17:33
anteayaboris-42: thanks17:33
sdagueclarkb: ok, well how do you feel about fast approving me moving f20 to experimental then?17:33
*** nati_ueno has joined #openstack-infra17:33
openstackgerritBoris Pavlovic proposed a change to openstack-infra/config: Add new project loggas  https://review.openstack.org/10024317:33
*** mmaglana has joined #openstack-infra17:34
clarkbsdague: let me check one more thing otherwise ++17:34
boris-42anteaya ^ fixed that17:34
*** pelix has quit IRC17:34
sdaguebecause right now devstack changes are actually blocked. And in blocking them, we block actually fixing the grenade gate issue17:34
anteayaboris-42: k thanks17:34
anteayaboris-42: I'll review again at the end of my day17:35
clarkbsdague: ok confirmed that both rax and hp have f20 images so it isn't a case of being pinned to one provider17:35
clarkbsdague: ++ to moving to experimental17:35
boris-42anteaya ok thanks=)17:35
*** UtahDave has quit IRC17:35
*** julim has quit IRC17:35
*** reed has joined #openstack-infra17:36
*** ilyashakhat__ has quit IRC17:36
sdagueclarkb: so does min ready degrade under this condition?17:37
clarkbsdague: yes17:37
sdaguecould we actually make min ready a hard floor?17:38
clarkbsdague: ebcause when we are at quota it becomes proportional17:38
clarkbsdague: you can't when at quota17:38
sdaguewhy? it seems that min ready should be an actual floor, with head room to expand beyond that17:39
*** julim has joined #openstack-infra17:39
sdagueor some other min item17:39
clarkbsdague: because if I can boot 1 node right now17:39
clarkbwhat do I do?17:39
clarkbI can't boot in ready nodes17:39
*** cindyo has joined #openstack-infra17:39
*** ChanServ changes topic to "Discussion of OpenStack Developer Infrastructure | docs http://ci.openstack.org | bugs https://launchpad.net/openstack-ci/ | https://git.openstack.org/cgit/openstack-infra/config/tree/"17:39
sdagueoh, so I guess min ready should fall back to min "out there somewhere"17:39
clarkbsdague: we may need a proper PID controller loop17:39
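[A minimal, textbook PID controller sketch for the idea clarkb mentions: drive the number of ready nodes of a label (e.g. devstack-f20) toward its min-ready target instead of allocating purely proportionally. The gains and the surrounding loop are illustrative only, not nodepool code.]

    class PID(object):
        def __init__(self, kp=1.0, ki=0.1, kd=0.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.last_error = 0.0

        def step(self, target, current, dt=1.0):
            error = target - current
            self.integral += error * dt
            derivative = (error - self.last_error) / dt
            self.last_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    if __name__ == '__main__':
        pid = PID()
        ready = 0
        for _ in range(5):
            to_boot = max(0, int(round(pid.step(target=2, current=ready))))
            ready += to_boot  # pretend every boot succeeds
            print('boot %d, now ready %d' % (to_boot, ready))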
anteayaclarkb: how soon can you get to a computer with server keys?17:40
clarkbanteaya: I am doing it now17:40
mordredjeblair: how did you do that? I tried to figure out how to do it and failed17:40
clarkbanteaya: but mostly by hand because they fail randomly17:40
sdaguethe issue right now is there are actually no fedora nodes anywhere, in use or ready17:40
mordredjeblair: so there is clearly knowledge I should learn17:40
clarkbsdague: correct because it is proportional devstack-precise wins every time17:40
*** e0ne has joined #openstack-infra17:40
jeblairmordred: stack TOPIC #openstack-meeting :OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings17:40
anteayaclarkb: thanks and :(17:40
jeblairgrr17:40
jeblairmordred: grep TOPIC statusbot_debug.log.2014-06-0217:41
jeblairmordred: then emacs17:41
cindyo@anteaya I am helping Craig get python-monascaclient onto stackforge. Do you need a pypi project for python-monascaclient before creating the stackforge repo?17:41
mordredjeblair: ah. ok. I did not think directly enough. *facepalm*17:41
anteayacindyo: refresh my memory of the url for the patch?17:41
jeblairmordred: to transform those lines into "/m chanserv topic #channel foo"17:41
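[A small sketch of the transformation jeblair describes: pull the TOPIC lines out of a statusbot debug log and turn them into chanserv commands that can be pasted into an IRC client. The log line format is guessed from the example pasted above ("stack TOPIC #channel :topic text"); purely illustrative.]

    import re
    import sys

    TOPIC_RE = re.compile(r'TOPIC (?P<channel>#\S+) :(?P<topic>.*)$')

    def chanserv_commands(log_lines):
        for line in log_lines:
            match = TOPIC_RE.search(line)
            if match:
                yield '/msg chanserv topic %s %s' % (
                    match.group('channel'), match.group('topic'))

    if __name__ == '__main__':
        # e.g. python restore_topics.py statusbot_debug.log.2014-06-02
        with open(sys.argv[1]) as log:
            for command in chanserv_commands(log):
                print(command)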
anteayaand welcome, cindyo17:41
jeblairmordred: a lhf bug would be to have statusbot record those externally to a yaml file and be smart about restarting...17:42
jeblairmordred: the fundamental problem is that if statusbot restarts during an alert, it loses state17:42
cindyothanks!  The current url is https://github.com/hpcloud-mon/python-monclient, but expecting to see github.com/stackforge/python-monascaclient17:42
mordredjeblair: yah. that bit I grokked17:42
anteayacindyo: sorry the gerrit url for the patch17:42
openstackgerritA change was merged to stackforge/gertty: Add ctrl-o to help dialog  https://review.openstack.org/9775317:42
anteayacindyo: https://review.openstack.org/#/c/99767/17:43
clarkbanteaya: looks like the really old nodes are the ones that don't want to work so I have an upper bound on the time as well and am looping over that now17:43
openstackgerritSean Dague proposed a change to openstack-infra/config: demote f20 to experimental  https://review.openstack.org/10030917:44
anteayacindyo: so yes, according to clarkb's comment on this file: https://review.openstack.org/#/c/99767/1/modules/openstack_project/files/zuul/layout.yaml17:44
sdagueclarkb: there it is17:44
cindyoanteaya:  https://review.openstack.org/9976717:44
clarkbcindyo: anteaya: or remove the pypi jobs17:44
anteayaclarkb: go you17:44
anteayaclarkb: slight increase in running jobs, thanks17:45
*** annegentle has joined #openstack-infra17:46
clarkbsdague: +2, maybe we can get jeblair and mordred to review as well17:46
clarkbI can manually submit it if neccesary17:46
sdagueclarkb: great17:46
jeblairsdague clarkb: is f20 voting?17:46
sdaguejeblair: nope17:46
anteayacindyo: do you have what you need for your next step?17:47
jeblairwho's pushing f20?17:47
sdagueianw17:47
sdaguewhich I support17:47
cindyoanteaya,clarkb To create a new pypi PKG-INFO needed for registration, I would have to build it on the old repo.  Is that what you want before creating the new stackforge repo?  We usually build and register projects from the repo.17:47
sdaguebut right now it's got the side effect that we can't make progress on any d-g or devstack patches17:48
jeblairare we going to lose inertia on that with this?  (eg, are we close to making it voting, etc?)17:48
clarkbcindyo: no, you can just register it directly in pypi17:48
clarkbrather than using setup.py17:48
mordredI'd like to have a longer conversation about the idea of that at all17:48
mordredbut we don't have to do that right now17:48
cindyoOK I'll do that, and let you know when it's ready17:48
sdaguejeblair: I don't know, but clarkb just explained that we're basically going to be always in this situation at full quota, which isn't a recipe for success17:49
jeblairsdague: it will correct... i'm mildly hesitant to make a change like this because the order in which jobs are being filled isn't the preferred one17:49
anteayacindyo: if you add a link to the pypi registration as a comment on the patch, that will help all subsequent reviewers of the patch17:49
mordredtl;dr- I don't think we should be in the business of testing linux distros - we have too many other things on our plate. If we're discussing 86-ing postgres, I don't think we should be adding base linux distros17:49
cindyoanteay: ok will do17:49
anteayacindyo: thanks, welcome to infra17:49
cindyosorry anteaya my spelling is not great17:49
mordredno matter how much I like all of my friends in the redhat world17:49
sdaguemordred: that I'm fine with as well17:49
anteayacindyo: we have a lot of fun here, do let me know anytime you want to learn more17:50
*** e0ne has quit IRC17:50
anteayacindyo: neither is mine17:50
anteayacindyo: I forgive you any spelling errors in advance17:50
clarkbanteaya: fwiw I got stuck on a less old node so for loop is going to be slow17:50
mordredI'd be happier moving the gate off of ubuntu and on to $somethingelse that isn't tied to a vendor if that would make things 'fairer'17:50
jeblairsdague, clarkb: queue systems don't work like that -- they either ultimately satisfy demand or don't.17:50
cindyohaha! and thanks for the help17:50
anteayacindyo: for irc nicknames type the first 3 letters and then tab17:50
cindyoanteaya: ahh17:51
anteayaclarkb: slow progress is better than none17:51
anteayacindyo: :D17:51
sdaguejeblair: if you believe we'll see f20 nodes show up for testing in the near future, I'm good with some other approach17:51
jeblairsdague: so if we're in a position where we will never catch up, then we need to make fundamental changes.  if we're just "busy" right now, then the nodes will eventually be allocated and the backlog will clear (and quickly)17:51
anteayaclarkb: the blue team is slowly creeping up17:51
anteayathe backlog cleared on the weekend17:52
sdaguewell the check queue is at +1.5 times its capacity at the moment17:52
*** basha has quit IRC17:52
anteayafor some reason all the nodes that were in delete state suddenly cleared at 0:00 utc saturday and I have no idea why17:52
sdaguea part of it being the pent-up demand from how terrible everything has been the last 2 weeks17:52
*** blamar has joined #openstack-infra17:52
clarkbjeblair: sdague: I think a PID controller could fulfill the demand better17:52
jeblairsdague: the operative metric is the zuul job queue17:52
sdaguejeblair: ok17:52
mordredyolanda: we may want to discuss this in here so that jeblair can chime in17:52
clarkbas it would be smarter about the change over time and realize it hasn't booted an f20 node in a while17:52
*** _nadya_ has joined #openstack-infra17:53
clarkbbut I would have to go do reading and maths to figure it out17:53
sdagueclarkb: or actually have a hard floor17:53
sdagueof in-use / ready17:53
*** dims has quit IRC17:53
jeblairclarkb: yeah, there are two issues:  f20 isn't booting fast enough to make sdague happy because he wants the changes he's looking at to merge17:53
mordredjeblair: yolanda and I were discussing the nodepool dib patch, which she's close to completing. she has it working except is missing glance delete image things17:53
jeblairclarkb: versus not being able to satisfy demand at all17:53
mordredjeblair: I was suggesting that perhaps nodepool.deleteImage may not need to change at all - since once it's an image in the cloud, it should ultimately be managed identically17:54
clarkbmordred: correct, but you need to delete the local qcow217:54
sdaguejeblair: so if you look at all the long standing changes in check, the ones > 3 hrs17:54
*** dims has joined #openstack-infra17:54
sdaguethey are all waiting on f20 nodes17:54
sdagueand from what I can tell, there are no f20 nodes anywhere17:54
mordredclarkb: so you're thinking we should keep the local qcow on disk for the same time it's live in the cloud?17:55
clarkbmordred: yes17:55
mordredclarkb: perhaps add some code to deleteImage to also find the image on disk and delete it?17:55
mordredyolanda: ^^17:55
jeblairclarkb: sdague's change will alter the behavior he's immediately seeing; however, it has a minimal impact on the second thing.  if the second thing is true to the degree that you think we will never see f20 nodes boot, then the system is unsustainable regardless.17:55
clarkbmordred: they aren't super expensive and having it local may be nice17:55
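[A hedged sketch of the shape mordred suggests for the dib-based nodepool image path: when an image is deleted from the cloud, also look for the locally built qcow2 and remove it. The attribute and helper names are hypothetical, not the actual nodepool.deleteImage implementation.]

    import errno
    import os

    def delete_image(provider_client, image_id, images_dir, image_name):
        # Delete the cloud-side image first (same path as snapshot images).
        provider_client.images.delete(image_id)

        # Then clean up the local diskimage-builder artifact, if it exists.
        local_path = os.path.join(images_dir, image_name + '.qcow2')
        try:
            os.unlink(local_path)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise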
*** ilyashakhat__ has joined #openstack-infra17:55
jeblairmordred: later17:55
clarkbjeblair: correct17:55
mordredjeblair: yah. just making sure it was in this channel for logging/scrollback17:55
clarkbjeblair: understood17:56
jeblairsdague: yeah, so what i'm saying is that the expected behavior when the system is busy is that those changes will queue in zuul until nodepool is able to satisfy the f20 demand, and then they will clear out quickly once it does17:56
jeblairsdague: (i'm not saying it's ideal, just expected)17:56
yolandai think the same code we have when we upload the image to glance, we will need to have it to delete it, otherwise we will be filling glance with images that aren't used17:56
sdaguejeblair: ok, so can this be fixed by increasing min ready? Because we added it to 2 core jobs, but it's at such a low allocation it basically never creates them under load17:57
jeblairsdague: if, however, we've crossed the threshold that every queuing system has where production exceeds consumption, then it's all over regardless and we need to do less or have more (clearly we are close to that point).17:57
mordredjeblair: ++17:57
*** marcoemorais has quit IRC17:58
*** marcoemorais has joined #openstack-infra17:58
yolandamordred, clarkb ^17:58
*** marcoemorais has quit IRC17:58
jeblairsdague: i think what i'm trying to say is that unless we have crossed that threshold, this is not a system problem, this is a human perception problem.  is the human perception problem bad enough to make the change you have proposed?17:58
*** marcoemorais has joined #openstack-infra17:58
mordredyolanda: one sec ...17:59
sdaguejeblair: it's actually a system problem in that the fix for the #1 gate reset issue is currently stuck17:59
clarkbyolanda: nodepool already delete images out of glance17:59
clarkbyolanda: I don't think you need to implement that17:59
*** basha has joined #openstack-infra17:59
sdagueand it's stuck on a non voting job17:59
sdaguemaybe that's a perception problem, I don't know. But that feels like a system problem to me.18:00
*** rustlebee is now known as russellb18:00
*** jp_at_hp has quit IRC18:00
mordredI think that's a perception problem18:00
jeblairsdague: i know what you mean, and i'm not trying to minimize it -- i'm just saying that assuming we have not crossed the threshold, the job will eventually run (it may take way longer than we are comfortable with)18:01
mordredyah. that's what I Was trying to type but jeblair said it better18:01
sdagueok, so based on usage graphs last week.18:01
sdaguethat's going to be Saturday18:01
sdagueso when I speak of system problem, maybe I'm thinking about it as a system as a whole18:02
yolandaclarkb, so we leave the images in glance without removing them? no problem in leaving the remaining images in glance?18:02
sdaguebecause if we can't actually land the patches that fix the system to be more productive in a timely manner, then I think we're in a bad place18:03
clarkbyolanda: yes as the existing image management deletes old images18:03
yolandaalso in glance?18:03
*** marcoemorais has quit IRC18:03
clarkbyolanda: I believe it will just work as expected if you populated the image list db table with a uuid and a time18:03
yolandai need to test it18:03
clarkbyolanda: yes, when we snapshot images in rax and hpcloud they go into glance18:03
jeblairsdague: it looks like last week the job queue dropped below quota each night18:03
*** marcoemorais has joined #openstack-infra18:03
yolandaok, so tomorrow i'll do a full test of it, spinning nodepool instances and deleting them, to check the results18:04
jeblairexcept the night of the 13/14th, where the # waiting got close to but did not touch 018:04
mordredsdague: I think what you're characterizing as a system problem is the fact that we have never designed the system to handle "land this patch urgently"18:04
sdaguemordred: that may be the case.18:04
mordredwe've designed it to handle "land all of these patches eventually"18:05
sdagueright18:05
mordredthis, of course, does not produce pleasing experiences in urgent time18:05
mordredtimes18:05
sdaguewhich makes healing the system within the system hard18:05
*** harlowja has joined #openstack-infra18:05
sdaguewhich is what we've been dealing with18:05
jeblairsdague: okay, so landing your patch will make the system more constant and less bursty, and also have a minor reduction in usage.  so it will help (to a degree) with all the things we're seeing18:06
jeblairsdague: and as you say it will unblock a fix for a gate-failing bug (though the window minimizes that impact somewhat)18:07
*** e0ne has joined #openstack-infra18:07
sdaguejeblair: the window minimizes it there, but patches are taking a lot more round trips through check because of it18:07
jeblairsdague: *nod*18:08
jeblairsdague: i haven't done a thorough analysis, but i'd expect it to recover by tomorrow18:08
sdaguejeblair: I would be surprised :)18:08
sdaguebecause there is a lot of pent up demand18:08
jeblairsdague: so i think our decision is to weigh that against what we lose by removing f2018:08
sdaguebecause while you were out we were pushing back on people approving not bug fix stuff18:09
sdaguebecause of the state of the gate18:09
jeblairsdague: (we are at about +250 jobs from where we were 7 days ago and waiting hit 0 for 6+ hours after that)18:09
*** alugovoi has quit IRC18:09
jeblairsdague: so is it worth dropping f20 to get that in before tomorrow?18:10
*** sarob has quit IRC18:10
sdagueI think so. But I'm biased on the fact that all my work to make grenade debuggable requires devstack and/or d-g changes18:11
sdagueand, honestly, knowing that this is the edge condition, I don't ever want f20 in the devstack check queue18:11
*** thedodd has joined #openstack-infra18:11
sdagueuntil there is a guarantee that at least some nodes will be available18:11
reedcan someone look at this patch today please? https://review.openstack.org/#/c/99481/18:12
jeblairsdague: what kind of guarantee would you like?  and do you have someone in mind that can provide nodes to satisfy that guarantee?18:12
sdaguejeblair: I would like a min floor on node types. Which is ready + in-use.18:13
sdagueor an allocator based on time waiting18:13
sdagueso that the job that was waiting the longest in check got its needs filled18:14
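[A toy sketch of the allocator sdague asks for: when a node slot frees up, give it to whichever label has the request that has been waiting longest, rather than allocating proportionally to demand. The request structure is hypothetical, not nodepool's allocator.]

    import time

    def next_label(pending_requests):
        """pending_requests: list of (label, enqueue_timestamp)."""
        if not pending_requests:
            return None
        oldest = min(pending_requests, key=lambda req: req[1])
        return oldest[0]

    if __name__ == '__main__':
        now = time.time()
        pending = [
            ('devstack-precise', now - 600),     # 10 minutes old
            ('devstack-f20', now - 4 * 3600),    # 4 hours old
        ]
        print(next_label(pending))  # -> devstack-f20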
*** jaypipes has quit IRC18:14
mordredsdague: I think that's enough of a departure in the current thing that it might be better to bucket into the aforementioned "let's talk about the gate in the world of finite cloud resources" conversation18:16
mordredrather than as a response to a current stimulus18:16
cindyoclarkb: I'm having a security issue registering my project online.  I can login to pypi, but when I try to submit I get this error...The user name or password you entered for area “pypi” on pypi.python.org:443 was incorrect. Make sure you’re entering them correctly, and then try again.18:16
openstackgerritBoris Pavlovic proposed a change to openstack-infra/config: Publish osprofiler on pypy and add check for requirments  https://review.openstack.org/10005218:17
sdaguemordred: that's fine18:17
cindyoclarkb: i am using the same username password as i did to login.  What other password is needed?18:17
sdagueI'm trying to figure out how to get out of this box though where we have a fix18:17
sdaguethat will impact our capacity18:17
sdaguewhich we can't get past tests18:17
*** sarob_ has joined #openstack-infra18:17
sdaguebecause of our capacity allocator18:17
clarkbcindyo that should be it18:17
clarkbmaybe you have a stale cookie?18:18
sdagueit feels like a very small box, and makes me claustrophobic18:18
mordredyah. I hear that18:18
cindyoclarkb: I'll try clearing cookies, I hope it's not that my email has a '-' in it.  I noticed some python programs have problems with that18:19
*** harlowja has quit IRC18:19
sdagueso I'd like to not always feel like I'm in the cheese shop sketch when trying to fix gate bugs :)18:19
*** harlowja has joined #openstack-infra18:19
sdaguebecause, that's very draining18:19
mordredtotally. but I think when we're drained is probably the wrong time to start poking large holes - precisely because of the effects of the claustrophobia18:20
mordredI think it's important that we figure out how to decrease the amount of time people feel like they are stuck in tiny boxes18:20
sdagueso demoting f20 back to experimental, where it was a week ago, doesn't seem like poking a large hole18:21
*** basha has quit IRC18:21
sdaguemaybe it was a couple of weeks ago, time doesn't really have meaning during that last sprinting towards fixes18:21
sdaguemordred: and I do agree with the sentiment18:22
jeblairsdague: wfm18:22
jeblairclarkb: is your work on deleting nodes having an effect?  i notice a change18:23
jeblair(zuul waiting is trending down, used nodes is trending up)18:23
*** basha has joined #openstack-infra18:23
jeblairapropos, i just got an email from cloudwatt :)18:24
sdagueso I could also use some additional review on - https://review.openstack.org/#/c/100253/ because dtroyer_zz doesn't appear to be around, and I think that simple fix will dramatically impact life18:24
*** _nadya_ has quit IRC18:24
reedjeblair, did you win something?18:24
jeblairreed: i'll let you know after i translate it from french18:24
sdaguebecause ER doesn't show the full scope of the problem as it's actually hit on a lot of different services18:24
jeblairah, i'm to follow a link to complete my registration18:25
clarkbjeblair: yes it is helping18:25
sdagueand it's hard to fingerprint (... I have a patch I'm working on to make it a ton more clear)18:25
clarkbhelped last week too18:25
sdagueclarkb: can you stick that in a nightly cron? :)18:25
jeblairclarkb: do you understand it enough to describe what needs to change in nodepool?18:25
mtreinishsdague: would that cause the failure file not being written thing we were seeing before?18:26
sdaguemtreinish: yep18:26
*** basha has quit IRC18:26
clarkbjeblair: aiui nodepool needs to assume even less that a deleted node will be deleted18:26
clarkbbasically nova api bug that makes deletes noop on backend18:27
clarkbso you need to be even more forceful18:27
mtreinishman those boxes must be slow then. 3 secs for bash to launch in a screen window18:27
clarkbjogo ^18:27
sdaguemtreinish: so the crux of the issue is that because we are stuffing into bash in screen, until the bash process actually spawns and gets to the tty it won't take18:27
jeblairclarkb: i'm unsure how it could do that -- it does not assume it is deleted until it disappears from the server list18:27
clarkbright but it deletes on a slowish interval iirc18:28
sdaguemtreinish: well it seems to be a bell curve. We actually had this problem before, just in very extreme cases18:28
clarkbI honestly didn't follow the issue last week18:28
clarkbI had the same thought you did18:28
sdaguebut now that the majority of our nodes are hp 1.1, and they are slower, it's happening a lot18:28
mtreinishsdague: yeah that makes sense18:28
clarkbbut jogo and comstud indicate it isn't sufficient18:28
jeblairjogo: help us out here :)18:29
comstudhm18:29
sdaguemtreinish: I made it 3 seconds to hopefully get us completely out of the bell curve18:29
comstudreading18:29
sdagueor make it super rare18:29
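[A small sketch of the sequencing sdague describes: when a new screen window is created, give bash a few seconds to reach the tty before stuffing the service command into it, otherwise the keystrokes are lost. Illustrative only; the session name and helper are hypothetical, not the actual devstack/grenade screen code.]

    import subprocess
    import time

    SESSION = 'stack'

    def screen_run(window, command, settle=3):
        # Create a new titled window in the running screen session.
        subprocess.check_call(
            ['screen', '-S', SESSION, '-X', 'screen', '-t', window])
        time.sleep(settle)  # let bash spawn and attach to the tty
        # Now stuff the command into that window.
        subprocess.check_call(
            ['screen', '-S', SESSION, '-p', window, '-X', 'stuff',
             command + '\n'])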
mtreinishsdague: hopefully that'll be enough I guess it depends on how slow things really are18:29
comstudWhat "isn't sufficient" ?18:29
mtreinishsdague: what about removing screen from grenade? That's probably a lot of work18:29
*** nati_ueno has quit IRC18:29
clarkbcomstud we already delete nodes over and over on an interval18:29
comstudoh18:29
comstudyeah18:30
comstudthere's a nova bug...18:30
clarkbbut for some reason rax nodes must be manually deleted18:30
*** nati_ueno has joined #openstack-infra18:30
comstudsuch that deletes are ignored18:30
mtreinishsdague: won't you also need to backport that change to the devstack stable branches?18:30
sdaguemtreinish: yeh, I thought we might be able to do that, it's not working18:30
sdaguemtreinish: most of the fails have been on the new side18:30
comstudclarkb: Yeah, if HP doesn't ever error during terminate, you don't hit this bug18:30
comstudbut we're erroring during terminate18:30
sdagueso yes, but this will help18:30
*** timrc-afk is now known as timrc18:30
jeblairclarkb, comstud: so i think the current algo is: issue delete; wait 10 minutes for it to disappear; if it doesn't, try again in 1 minute.18:30
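[A sketch of the retry loop jeblair describes: issue the delete, give the node up to ten minutes to disappear from the server list, and if it is still there, try again a minute later. The client calls are novaclient-shaped but the helper itself is hypothetical, not nodepool code. With the nova bug comstud describes, the API call keeps "succeeding" while the backend never acts, so the loop spins until task_state is reset.]

    import time

    def delete_until_gone(client, server_id,
                          disappear_timeout=600, retry_interval=60):
        while True:
            client.servers.delete(server_id)
            deadline = time.time() + disappear_timeout
            while time.time() < deadline:
                ids = [s.id for s in client.servers.list()]
                if server_id not in ids:
                    return True
                time.sleep(10)
            # Still listed after the timeout: wait a bit and re-issue.
            time.sleep(retry_interval)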
comstudAfter that point... the deletes are stuck and cannot be deleted again via the API18:30
comstudbecause of a nova bug18:30
comstudjeblair: a patch from jogo is merging to fix the nova bug..18:31
comstudHowever, I don't have a timeframe for us deploying that fix yet.18:31
jeblaircomstud: oh, so we're under the impression that clarkb manually running a bunch of deletes is fixing things...18:31
comstudAfter that point... your logic would work18:31
jeblaircomstud: but if what you're saying is true, that's ineffective?18:31
comstudjeblair: We're having to manually reset task_states to NULL in the DB18:32
comstudafter that point...18:32
sdagueclarkb: as https://review.openstack.org/#/c/100253/ has dtroyer +A, can we jump it to the gate queue? Then we don't have to wait for the f20 node to show up18:32
comstudyes, either manual deletes or your automated job should work18:32
sdagueclarkb: it's passed all the other tests18:32
comstudassuming we don't error on delete again and have to reset task states again... and repeat repeat18:32
comstudhehe18:32
jeblaircomstud: ok, so behind the scenes someone is resetting something that is causing api deletes to be able to be attempted again, and after that point, our attempts may work?18:32
comstudcorrect18:32
*** sparkycollier has quit IRC18:33
jeblairclarkb: so it's possible that leaving nodepool to do this on its own may work and you may not need the manual deletes -- it might just happen 10 minutes faster your way?18:33
comstudthat makes sense to me18:33
harlowjalifeless thanks for all the good comments on those oslo-specs18:34
harlowjawill adjust them in a few18:34
lifelessharlowja: yw18:34
*** rlandy has quit IRC18:34
jeblaircomstud: thanks for enlightenment :)18:34
comstudhehe np18:34
comstudSo jogo's patch will revert task state properly on delete errors18:35
jeblairclarkb, sdague: want me to force-merge the f20 patch?18:35
*** tkelsey has quit IRC18:35
comstudBut.. like I said, I'm not sure when we'll be deploying it.18:35
clarkbjeblair: these nodes were 2 hours to 31 hours in delete state18:35
comstudI'm going to raise the issue shortly internally18:35
clarkbpretty sure it won't happen 10 minutes faster18:35
clarkbjeblair: go for it18:35
comstudUntil then, we'll have to manually reset task_states for you all18:35
comstudOR better.. just fix our delete issues :)18:36
jeblairclarkb: then we should debug nodepool further to find out if it's not doing the algorithm i described18:36
comstud(actually it's not specific to delete)18:36
sdaguejeblair: force merge would be fine as well18:36
comstudi'll be back shortly if you need anything else18:36
sdaguef20 doesn't run in the gate18:37
* comstud -> errand18:37
sdagueso getting it into the gate queue would also be fine18:37
openstackgerritA change was merged to openstack-infra/config: demote f20 to experimental  https://review.openstack.org/10030918:37
sdaguejeblair: ok, how about a gate move of https://review.openstack.org/#/c/100253 now?18:38
sdaguethen we can let the check report whenever it feels like it18:39
jeblairsdague: do you know about nodepool's "test job" feature?18:39
sdagueI guess not18:39
jeblairsdague: it can run a jenkins job on a node before marking it ready (which it will only do if the job passes)18:40
sdagueok18:40
*** Ryan_Lane has quit IRC18:40
*** basha has joined #openstack-infra18:40
jeblairsdague: sorry, i have to run18:42
sdagueok18:42
sdagueclarkb / mordred: gate move of https://review.openstack.org/#/c/100253 now? - it should dramatically help on fail rates18:42
ildikovhi18:44
clarkbdoesnt it need jenkins +1?18:44
ildikovI would like to ask some questions if someone has a few minutes for me18:44
clarkbildikov best to just ask18:46
*** nati_ueno has quit IRC18:46
ildikovclarkb: ok, I will briefly describe the issue then, thanks :)18:46
sdagueclarkb: if you look at the jenkins output everything except f20 has voted18:48
ildikovin Ceilo we have some weird log entries in the py26 gate jobs: http://logs.openstack.org/40/99140/1/check/gate-ceilometer-python26/84983c8/console.html.gz18:48
sdagueand it passed all the rest of it18:48
sdagueand it's been waiting for the f20 node for 4 hrs18:48
clarkbright but it wont queue without it18:48
sdagueclarkb: if you do it manually it will18:48
ildikovthe log is kind of full of "I/O operation on closed file" messages18:48
sdaguefungi was doing that last week18:49
clarkbah ok18:49
ildikovthe posted log is the log of a failed gate job18:49
sdagueturns out we can be in both queues at once :)18:49
sdaguethe check job will just show up and vote eventually18:49
ildikovthe other strange issue is that the successful gate job looks almost the same: http://logs.openstack.org/40/99140/1/check/gate-ceilometer-python26/16d92de/console.html18:49
clarkblooks like next reset is ~5 minutes18:49
clarkbcan promote then18:50
ildikovit only contains the "running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1}" text in addition18:50
sdagueclarkb: honestly, I don't even need promote18:50
sdaguejust putting it in the gate queue is probably good enough18:50
clarkboh18:50
sdaguegiven that it's pretty short18:50
clarkbnot sure I can do that without promote18:50
clarkbbut will look18:51
sdagueI thought they were 2 different things18:51
ildikovI was wondering if any of these issues ring a bell for anyone here, as in local environments we couldn't reproduce them18:51
*** crc32 has joined #openstack-infra18:51
sdaguebut that being said, a promote in 5 mins is cool as well18:51
*** nati_ueno has joined #openstack-infra18:51
sdagueok, stepping away for a few18:51
clarkbildikov: sounds like you have a race in your tests18:52
clarkbildikov: and multiple tests are using the same file object one of which uses it after it is closed18:52
clarkbah yup enqueue vs promote. I will just wait for reset and promote18:52
ildikovclarkb: shouldn't it affect the py27 gate job too?18:52
clarkbildikov: not necessarily if it is a race18:52
clarkbraces by nature don't always show up18:53
openstackgerritBoris Pavlovic proposed a change to openstack-infra/config: Add new project loggas  https://review.openstack.org/10024318:53
ildikovclarkb: the I/O error is always present in the py26 job for a long while now and I've never seen it in the py27 gate logs18:53
clarkbit may be a timing issue in py2618:54
*** basha has quit IRC18:55
clarkbI can't know for sure without actually debugging the test but that is where i would start18:55
*** Sukhdev has joined #openstack-infra18:56
ildikovclarkb: ok, sounds reasonable18:56
boris-42clarkb hi there18:56
ildikovclarkb: thanks18:57
boris-42clarkb hm one dummy question, can I add coverage task on every check18:57
boris-42clarkb otherwise it's a bit useless...18:57
*** UtahDave has joined #openstack-infra18:57
clarkbboris-42: you can but we don't because it's slow18:57
adam_gsdague, with some fixes that were finally synced out to all devstack slaves, the *dsvm-virtual-ironic jobs are looking stable again. http://no-carrier.net/~adam/openstack/ironic_gate_status.html   there's also a patch up to get everything we need cached on slaves at build time. assuming that merges and we remain >90%, when would be a good time to talk about getting it voting again?18:57
clarkbboris-42: I don't understand why it is useless in post though18:57
clarkbboris-42: having it in post shows you a trend18:57
clarkbit can't be used for review but anyone can run that locally18:58
clarkbif we can solve the coverage is slow problem then sure18:58
boris-42clarkb but having it before merging a patch shows exactly the places that are not covered18:58
clarkbboris-42: sure and in the past we have weighed that against test time18:58
boris-42clarkb so I don't need to do this manually18:58
*** doddstack has joined #openstack-infra18:58
clarkband consensus has been don't do it if it makes tests even slower18:58
*** mrda-away is now known as mrda18:58
boris-42clarkb ok sounds reasonable18:59
boris-42clarkb but post is still useless for me=)18:59
boris-42^_^18:59
ildikovboris-42: I have a BP in Ceilo for having a coverage gate job and the outcome of the summit session about this was also to do it in post, because it's too slow :(19:00
trinathssweston: hi19:00
swestontrinaths: hello19:00
boris-42imho tempest jobs are much slower=)19:00
anteayatrinaths sweston mind chatting in -dev?19:00
trinathsanteaya: okay19:00
*** thedodd has quit IRC19:00
ildikovboris-42: you don't have to convince me =)19:01
boris-42ildikov lol19:01
*** e0ne has quit IRC19:01
clarkbboris-42: they actually aren't when you compare to say neutron and nova19:02
openstackgerritSpencer Krum proposed a change to openstack-infra/puppet-storyboard: Add Storyboard puppet module  https://review.openstack.org/10032119:03
openstackgerritSpencer Krum proposed a change to openstack-infra/puppet-storyboard: Added Authorization Header flag to storyboard module  https://review.openstack.org/10032219:03
openstackgerritSpencer Krum proposed a change to openstack-infra/puppet-storyboard: Make storyboard run over ssl  https://review.openstack.org/10032319:03
openstackgerritSpencer Krum proposed a change to openstack-infra/puppet-storyboard: Load storyboard projects from projects.yaml  https://review.openstack.org/10032419:03
openstackgerritSpencer Krum proposed a change to openstack-infra/puppet-storyboard: Load storyboard superusers from yaml file  https://review.openstack.org/10032519:03
openstackgerritSpencer Krum proposed a change to openstack-infra/puppet-storyboard: Storyboard token expiration increased  https://review.openstack.org/10032619:03
clarkbboris-42: this is slowly getting better but for more than a year neutron unittests took longer than tempest19:03
openstackgerritSpencer Krum proposed a change to openstack-infra/puppet-storyboard: Make storyboard idempotent  https://review.openstack.org/10032719:03
openstackgerritSpencer Krum proposed a change to openstack-infra/puppet-storyboard: Put a default section header above the TTL  https://review.openstack.org/10032819:03
openstackgerritSpencer Krum proposed a change to openstack-infra/puppet-storyboard: StoryBoard module bounces apache on update  https://review.openstack.org/10032919:03
boris-42^ spamer19:03
nibalizerkrotscheck: ^ want to take a gander at that19:03
boris-42=)19:03
*** e0ne has joined #openstack-infra19:03
boris-42clarkb hm didn't run coverage in nova=)19:03
krotscheckwat19:03
boris-42clarkb but for small projects they are quite faaast19:03
clarkbright but small projects are not 90% of the development effort19:03
jogoclarkb: catching up from lunch, you are seeing delete issues in hp as well?19:04
clarkbjogo: we are not19:04
nibalizerkrotscheck: that was my attempt to push the current state of the storyboard module embedded in infra/config into puppet-storyboard19:04
nibalizerwhich i thought was the plan19:04
krotschecknibalizer: Right.19:04
boris-42clarkb so probably then it's not a big deal to enable it in small projects at least?19:04
*** Ryan_Lane has joined #openstack-infra19:04
clarkbboris-42: ya I don't care what small projects do19:04
clarkbparticularly stackforge19:04
krotschecknibalizer: My current plan was actually this - https://review.openstack.org/#/c/98007/19:04
mordrednibalizer: hey - you don't feel like figuring out something hard (or maybe easy but I can't be bothered to figure it out)19:04
boris-42clarkb ahhh19:04
jogoclarkb:  kk, you were just trying to understand the bug?19:04
clarkbwe should keep the integrated projects as 1:1 as possible19:04
boris-42clarkb nice then I am going to enable it in rally19:04
*** cp16net_ has quit IRC19:04
boris-42=)19:04
clarkbjogo: yes because it doesn't make sense given the nodepool behavior19:04
nibalizermordred: sure poke it at me19:05
jogoclarkb: no it doesn't hehe. the bug is that a delete on an instance in deleting state becomes a no-op19:05
jogoso if stuck in deleting,error.19:05
*** cindyo has quit IRC19:05
clarkbjogo: right but if I manually run it which uses the same API its fine19:05
mordrednibalizer: https://review.openstack.org/#/c/98656/ - but if you read https://wiki.openstack.org/wiki/Stackalytics/HowToRun19:05
nibalizerkrotscheck: aha! i suspected that review existed but failed at finding it19:05
mordrednibalizer: they run theirs in nginx using uwsgi19:06
jogomordred: so silly question. you say we aren't going to get more quota from rax or hp in the near future? Can the foundation just buy us quota?19:06
jogoclarkb: waaa19:06
mordrednibalizer: I'd like to keep using apache for consistency with our other things19:06
jogoclarkb: you can manually delete nodes stuck in deleting,error state?19:06
mordrednibalizer: but it does seem to be sluggish just running in mod_wsgi19:06
mordrednibalizer: so the question is "how does one use uwsgi with apache similar to their nginx example"19:07
* mordred now runs and hides19:07
nibalizerhaha mk19:07
jogoclarkb: because at any given time you will have instances doing normal deletes19:07
*** dims has quit IRC19:07
nibalizerkrotscheck: i'll poke at that review in a sec19:07
clarkbjogo yup seems to work19:07
anteayapleia2: are you around today?19:08
clarkbbut I really dont have brain power to debug today19:08
anteayaor still conferencing?19:08
mordrednibalizer: the change could probably be done as a follow on patch - I think that one is landable as-is19:08
*** _nadya_ has joined #openstack-infra19:08
*** reed has quit IRC19:09
*** sarob_ has quit IRC19:09
openstackgerritBoris Pavlovic proposed a change to openstack-infra/config: Add rally coverage check and post tasks  https://review.openstack.org/10033019:09
*** trinaths has quit IRC19:09
nibalizermordred: are you refereing to storyboard or stackalytics?19:10
jogoclarkb: can you give me your logs from that?19:10
mordrednibalizer: stackalytics19:10
*** sarob_ has joined #openstack-infra19:10
jogoclarkb: in particular 'nova show' from before you successfully delete something by hand19:10
*** ilyashakhat__ has quit IRC19:10
anteayaso we agreed in the third-party meeting that the driverlog page will say ci exists instead of ci tested? since that is the actual status it is currently communicating19:10
anteayasdague: ^19:10
jogoand the delete command you use19:10
anteayaand I am going for a walk19:11
*** sweston has quit IRC19:11
jogosdague: '[OpenStack-Infra] change frequency of "Imported Translations from Transifex"19:12
jogofrom the -infra ML, that may not be a bad idea19:12
clarkbjogo next time things back up sure19:12
jogoclarkb: kk,  well what command did you use?19:13
jogoclarkb: a simple 'nova delete UUID' ?19:13
*** sarob_ has quit IRC19:14
*** reed has joined #openstack-infra19:14
clarkbno nodepool delete --now19:15
jogoclarkb: oh hrm, have any nodepool logs then?19:16
jogodid that work for all the nodes stuck in delete?19:16
*** dims has joined #openstack-infra19:18
*** eglynn-regus has joined #openstack-infra19:18
clarkbnot all nodes19:19
*** lcostantino has quit IRC19:20
jogoclarkb: any idea of what % stayed around?19:20
clarkblow ~2%19:21
*** eglynn-office has quit IRC19:21
*** arnaud__ has quit IRC19:22
*** arnaud__ has joined #openstack-infra19:22
*** 6JTAACWFP has joined #openstack-infra19:22
*** blamar has quit IRC19:23
*** lcostantino has joined #openstack-infra19:23
*** medieval1 has quit IRC19:23
*** nati_ueno has quit IRC19:23
*** cnesa has joined #openstack-infra19:23
*** medieval1 has joined #openstack-infra19:23
*** nati_ueno has joined #openstack-infra19:24
harlowjalifeless let me know when u start to want help on that rpm spec stuff (for pbr semeantic stuffs)19:24
harlowjalifeless or if u need unit tests or something else, i can help with19:24
*** doug-fish has quit IRC19:24
harlowja*when u get there*19:24
jogoclarkb: hmm yeah, next time you do it let me know so I can see the logs19:24
jogovery strange19:24
*** vhoward has quit IRC19:24
*** doug-fish has joined #openstack-infra19:25
lifelessharlowja: it just needs your reviews atm - did you see the recent updates, and also my query on using the release version component ?19:25
*** miqui has quit IRC19:25
harlowjalifeless will check that out19:25
mordredharlowja: https://review.openstack.org/#/c/96608/19:25
harlowjamordred yup, thx19:26
*** miqui has joined #openstack-infra19:26
jogoclarkb: dumb question did you try nodepool delete without '--now'?19:26
*** adalbas has quit IRC19:26
*** mestery has quit IRC19:27
*** adalbas has joined #openstack-infra19:27
*** vhoward has joined #openstack-infra19:27
clarkbjogo no as that is a noop19:28
clarkbit marks the node for deletion in the db19:28
clarkbwhich was already done19:28
*** lcostantino has quit IRC19:28
jogodo you have any nodepool logs?19:29
mordredjogo: we have SO MANY19:29
jogomordred: so I want a log that has an attempted delete in it19:30
jogomordred: any answer to if the foundation can just buy us more cloud?19:30
mordredjogo: I have not asked19:30
jogobecause the way clarkb describes things, there *may* be a nodepool bug19:31
jogomordred: kk, because isn't that part of their mission or something?19:31
clarkbmore likely the nova bug breaks a nodepool expectation19:32
clarkbbut ya19:32
sdagueanteaya: are the bugs re: not accounting for any of the infra jobs going to be addressed as well?19:32
annegentledumb bug triaging question: can only doc-core mark a milestone in a Launchpad bug?19:33
jogoclarkb: anyway some nodepool logs should shine some light on this19:33
*** CaptTof__ has quit IRC19:33
*** tkelsey has joined #openstack-infra19:33
*** tkelsey has quit IRC19:33
openstackgerritDan Bode proposed a change to openstack-infra/config: Remove dep on wget package for apt::key  https://review.openstack.org/10033319:34
*** CaptTofu_ has joined #openstack-infra19:34
*** _nadya_ has quit IRC19:34
jogoclarkb: perhaps a log with 'Could not delete node' in it19:34
ildikovannegentle: according to how it works in case of Ceilometer, yes19:35
jogoclarkb: or 'Exception deleting node'19:35
*** rfolco has quit IRC19:35
ildikovannegentle: but now I checked it, I'm not allowed to set, just like the importance option, it is also available only for cores19:38
*** CaptTofu_ has quit IRC19:38
jogoclarkb: wow I can see where you  ran 'nodepool delete --now'19:39
*** sweston has joined #openstack-infra19:39
*** krotscheck has quit IRC19:41
*** krotscheck has joined #openstack-infra19:42
*** james_li has quit IRC19:44
*** lcostantino has joined #openstack-infra19:44
*** nati_ueno has quit IRC19:44
*** dprince has quit IRC19:46
openstackgerritBoris Pavlovic proposed a change to openstack-infra/config: Publish osprofiler on pypy and add check for requirments  https://review.openstack.org/10005219:47
annegentleildikov: ok, thanks for testing! That's what ours does too19:47
*** ramashri has quit IRC19:48
*** annegentle has quit IRC19:48
*** rfolco has joined #openstack-infra19:48
SlickNikCan I get some eyeballs on https://review.openstack.org/#/c/98517/ when folks get a chance? Much appreciated - thanks!19:52
*** praneshp has quit IRC19:52
*** james_li has joined #openstack-infra19:53
openstackgerritBoris Pavlovic proposed a change to openstack-infra/config: Add new project loggas  https://review.openstack.org/10024319:57
*** james_li_ has joined #openstack-infra20:00
*** matjazp has joined #openstack-infra20:00
*** sweston has quit IRC20:00
*** rcarrillocruz has joined #openstack-infra20:01
*** james_li has quit IRC20:02
*** james_li_ is now known as james_li20:02
*** sarob has joined #openstack-infra20:02
*** matjazp_ has joined #openstack-infra20:03
*** Sukhdev has quit IRC20:03
*** rcarrill` has quit IRC20:04
*** marcoemorais has quit IRC20:04
*** marcoemorais has joined #openstack-infra20:04
*** matjazp has quit IRC20:04
*** matjazp_ has quit IRC20:07
*** eharney has quit IRC20:07
*** crc32 has quit IRC20:07
*** matjazp has joined #openstack-infra20:08
bodepdnibalizer: jesusaurus for the decoupling of modules project, are you considering throwing things away where there might be a better community alternative?20:08
*** rkukura_ has joined #openstack-infra20:09
*** sarob has quit IRC20:09
*** dhellmann has quit IRC20:10
*** loquacities has quit IRC20:10
*** sarob has joined #openstack-infra20:10
*** loquacities has joined #openstack-infra20:11
*** rkukura has quit IRC20:11
*** rkukura_ is now known as rkukura20:11
*** dhellmann has joined #openstack-infra20:11
*** nati_ueno has joined #openstack-infra20:14
*** dims has quit IRC20:14
*** sweston has joined #openstack-infra20:15
*** crc32 has joined #openstack-infra20:15
*** dims has joined #openstack-infra20:15
*** annegentle_ has joined #openstack-infra20:16
*** ramashri has joined #openstack-infra20:16
*** sarob has quit IRC20:17
boris-42clarkb https://review.openstack.org/#/c/100330/ <- just adding check and post cover jobs20:17
openstackgerritClark Boylan proposed a change to openstack-infra/config: Shift elasticsearch retention to 10 days.  https://review.openstack.org/10033920:19
*** nati_ueno has quit IRC20:19
clarkbsdague: mordred jeblair ^ I had been doing that by hand20:19
clarkbI just did it as I noticed ES backlog was increasing again and one of the nodes had ~50GB free20:20
*** matjazp has quit IRC20:20
jeblairclarkb: ack +220:20
clarkbI am currently checking sar numbers to make sure we don't have another bad volume20:20
jogoso I found a nodepool bug20:22
jogoan instance can be in the deleted state20:22
jogobut 'waitForServerDeletion' doesn't accept that as deleted20:22
jeblairjogo: why should it?20:23
clarkbhrm its possible 07 has a bad volume too20:23
jogojeblair: because an instance can be in deleted state and appear in nova list20:23
clarkbawait of 7701.91 on 07 :/20:23
jeblairjogo: what does that mean?20:23
clarkbmordred: any luck pinging pvo?20:24
jogojeblair: it means the instance was just deleted, instances don't always vanish from nova list instantaneously20:24
jeblairjogo: in particular, what is the certainty that instance will actually be truly deleted with no further action ever necessary on the part of the user?  and how is that instance accounted for in quota?20:24
clarkbI am not sure it is worth spinning up nodes until things work20:24
clarkbmuch better to get in touch with rax20:24
*** e0ne has quit IRC20:24
jeblairjogo: sure, but how long do you expect it to stay there?  it will keep checking every 5 seconds20:24
jogojeblair: to the first part, 100%; for the quota, I'm double checking20:25
*** CaptTofu_ has joined #openstack-infra20:25
jogojeblair: I  am double checking how long something stays in deleted for20:25
*** lyxus has joined #openstack-infra20:26
bodepdI'm confused about why the pip class does not install pip?20:27
*** CaptTofu_ has quit IRC20:30
*** ramashri has quit IRC20:31
*** sarob_ has joined #openstack-infra20:32
*** ramashri has joined #openstack-infra20:32
*** sweston has quit IRC20:32
jeblairjogo: ok cool; i would characterize the behavior you describe as a deliberate choice to avoid trusting nova responses in case they are wrong; depending on the answers to those 3 questions (confidence, time in state, quota), it may not be necessary20:32
openstackgerritSean Dague proposed a change to openstack-infra/devstack-gate: display grenade summary instead of all of it  https://review.openstack.org/9933320:32
jeblairjogo: but if the tradeoffs are minor (does not count toward quota but is expected to remain in deleted state for < 10 mins), it may not be worth reversing that assumption20:33
jeblairjogo: if the tradeoffs are larger (could remain in state for hours), it may be a good idea20:34
*** praneshp has joined #openstack-infra20:34
*** radez is now known as radez_g0n320:34
jeblairjogo: though if it counts toward quota, we should leave it as is (so that nodepool knows not to spin up a replacement yet)20:34
*** eharney has joined #openstack-infra20:34
*** nati_ueno has joined #openstack-infra20:34
jogojeblair: thats what I am trying to sort out20:34
*** markmcclain has quit IRC20:35
jeblairjogo: yup, and thanks.  mostly brain dumping so i don't block you because i have to run afk again.  :)20:35
*** sweston has joined #openstack-infra20:35
jogojeblair: thanks20:36
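To make the trade-off concrete, here is a hedged sketch of the kind of wait loop under discussion -- illustrative Python against a novaclient-style client, not nodepool's actual implementation; the 5-second poll interval comes from jeblair's comment above, and the timeout value is only an example:

    import time

    def wait_for_server_deletion(client, server_id, timeout=600, interval=5):
        start = time.time()
        while time.time() - start < timeout:
            servers = dict((s.id, s) for s in client.servers.list())
            if server_id not in servers:
                return True   # current behaviour: only "gone from nova list" counts
            # jogo's proposal: also accept an instance nova still lists but already
            # reports as deleted -- only safe if it needs no further action and no
            # longer counts against quota (jeblair's open questions)
            if servers[server_id].status.upper() == 'DELETED':
                return True
            time.sleep(interval)
        return False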
*** radez_g0n3 is now known as radez20:36
*** rcarrill` has joined #openstack-infra20:36
*** julim has quit IRC20:38
*** rcarrillocruz has quit IRC20:38
reedhi guys, izinovik has a problem signing the iCLA on gerrit, I have run out of options and need some help on gerrit20:38
*** nati_ueno has quit IRC20:39
anteayareed: what is the error message?20:40
*** eharney has quit IRC20:40
*** eharney has joined #openstack-infra20:41
izinovikanteaya, error message is following:20:41
izinovikfatal: ICLA contributor agreement requires current contact information.20:41
izinovikPlease review your contact information:20:41
izinovik  https://review.openstack.org/#/settings/contact20:41
izinovikfatal: The remote end hung up unexpectedly20:41
anteayaizinovik: hi20:41
izinovikwhile doing `git review'20:41
izinovikandreykurilin_, hello20:41
anteayado you know about paste.o.o20:41
izinovikanteaya, hello20:41
izinovikyeah20:41
anteayagreat thanks, so next time20:41
izinovikI mean yes20:41
reedizinovik, ah, so that's not the issue you listed20:41
anteayaso that is the sign-up with the foundation20:42
anteayaI do believe20:42
reedizinovik, go to https://review.openstack.org/#/settings/contact first20:42
andreykurilin_izinovik, hi20:42
reedand follow the instructions on that page to sign the iCLA20:42
reedactually, it's here https://review.openstack.org/#/settings/agreements20:43
izinovikyes, i see that status of my agreement is 'verified'20:43
reedand if you go to https://review.openstack.org/#/settings/contact and add your contact information there, can you save them?20:44
izinovikafter pressing 'Save changes' nothing bad happened20:45
reedok... so try git review again now please20:45
*** eharney has quit IRC20:45
*** alexpilotti has quit IRC20:45
*** sarob_ has quit IRC20:45
*** sarob_ has joined #openstack-infra20:46
*** vhoward has quit IRC20:46
izinoviksame result, ICLA requires current contact information20:46
Alex_GaynorIs there a way to get emails on a particular issue besides adding yourself as a reviewer?20:46
*** eharney has joined #openstack-infra20:46
izinovikmy gitreview.username is set to izinovik20:46
izinovikis it right?20:47
reedizinovik, I think so but I'm no expert there... anteaya?20:47
*** annegentle_ has quit IRC20:47
jesusaurusbodepd: im open to switching to community modules if the transition is an easy one20:48
anteayaizinovik: go to settings20:48
anteayawhat does it say your username is?20:48
*** 6JTAACWFP has left #openstack-infra20:48
anteayaAlex_Gaynor: not that I am aware20:48
anteayaAlex_Gaynor: you could set it as a watched repo20:49
anteayaAlex_Gaynor: and set your email settings for watched repos20:49
Alex_Gaynoranteaya: I definitely don't want every single nova change, just this one :-)20:49
*** vhoward has joined #openstack-infra20:49
anteayaAlex_Gaynor: ah20:49
anteayajust a reviewer then I think20:49
Alex_Gaynorwhatever, adding myself as a reviewer is fine20:49
anteayafor just one patch20:49
anteayakk20:49
izinovikanteaya, on 'settings' page I see that Username: izinovik, Email Address: izinovik@mirantis.com, Registered: May 12, 2014 3:24 PM20:50
*** sarob_ has quit IRC20:50
*** marcoemorais has quit IRC20:50
*** e0ne has joined #openstack-infra20:50
anteayathen that is your username20:50
nibalizerbodepd: i would like to see a default-to-upstream model20:51
*** marcoemorais has joined #openstack-infra20:51
nibalizersince maintaing our own modules for everything is blegh20:51
anteayapaste git review -v please izinovik20:51
*** radez is now known as radez_g0n320:52
izinovikanteaya,  http://paste.openstack.org/show/84218/20:52
*** e0ne has quit IRC20:53
*** maxbit has quit IRC20:53
*** maxbit has joined #openstack-infra20:53
anteayaizinovik: next time you paste please include the line with the command you ran20:53
*** ihrachyshka has joined #openstack-infra20:53
izinovikanteaya, ok20:54
anteayaizinovik: did you put in any contact information?20:54
*** lukego has quit IRC20:55
izinovikanteaya, where? in `git config --local` or in `git config --global` ?20:55
anteayahttps://review.openstack.org/#/settings/contact20:56
anteayadid you put in any contact information20:56
izinovikanteaya, I only filled Full Name and Preferred Email20:56
anteayaI think it needs something more20:56
izinovikanteaya, all other fields on the page are empty20:57
anteayalike contact information20:57
anteayayes20:57
anteayait will be20:57
anteayasince that information is stored in another db20:57
anteayanot the gerrit db20:57
anteayabut like the message says20:57
anteayait is looking for contact information20:57
izinovikanteaya, a colleague of mine has only full name and preferred address and his `git review` works as expected20:57
anteayaizinovik: when did your colleague sign up?20:58
*** amotoki_ has joined #openstack-infra20:58
izinovikanteaya, I do not know20:58
*** markmcclain has joined #openstack-infra20:58
anteayawell the system is looking for some contact information20:58
anteayayou don't have to give it some20:58
anteayabut that is what it is looking for20:59
izinovikanteaya, strange, I filled in country and hit 'save changes' and nothing happened20:59
*** markmcclain1 has joined #openstack-infra20:59
anteayahave you tried git review since you filled in country?20:59
*** markmcclain1 has quit IRC20:59
*** sarob_ has joined #openstack-infra20:59
reedI think the system needs also mailing address20:59
anteayaI don't know20:59
anteayasometimes it just needs country21:00
anteayaor did at one point21:00
reedbut I expect gerrit to throw a meaningful error message21:00
*** marcoemorais has quit IRC21:00
anteayalet's see what git review does now21:00
anteayareed: it does21:00
*** markmcclain1 has joined #openstack-infra21:00
*** marcoemorais has joined #openstack-infra21:00
anteayasince this db isn't accessible by gerrit, it doesn't know what is in there21:00
izinovikanteaya, only after filling all fields and hitting 'save changes'; as a result I see the message "Contact information last updated on Jun 17, 2014 at 12:59 AM."21:00
anteayaizinovik: what does git review do now?21:01
*** marcoemorais has quit IRC21:01
izinovikanteaya, yes, now it works21:01
anteayagreat21:01
anteayawelcome to openstack21:01
anteayahappy contributing21:01
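For anyone who hits the same ICLA error, the troubleshooting above boils down to a few checks; a hedged recap (commands as used with git-review at the time), not an official procedure:

    git config --get gitreview.username    # should match the Username shown on
                                           #   https://review.openstack.org/#/settings
    git review -v                          # verbose output, handy for paste.openstack.org
    # then fill in *every* field at https://review.openstack.org/#/settings/contact,
    # press 'Save changes', and retry 'git review'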
anteayareed: so we have no way of knowing what is or is not required in the contact info fields other than going through this painful process21:02
*** mbacchi has quit IRC21:02
anteayasince changes are made to requirements and I don't know what they are21:02
*** marcoemorais has joined #openstack-infra21:02
anteayaapparently now all fields must have content21:02
izinovikthank you for all help openstackers21:02
*** marcoemorais has quit IRC21:03
anteayaizinovik: welcome21:03
*** markmcclain has quit IRC21:03
reedanteaya, nothing should have changed, all fields have always been required21:03
anteayano21:03
*** marcoemorais has joined #openstack-infra21:03
*** rcarrill` has quit IRC21:03
anteayamy contact information is a whitespace21:03
reedanteaya, if not, then that was a bug...21:03
reedanteaya, something should be there21:03
anteayawell the bug got fixed then21:03
jogoclarkb: so I am digging into nodepool code21:03
anteayaI guess21:03
jogoand got a question21:03
*** marcoemorais has quit IRC21:03
*** marcoemorais has joined #openstack-infra21:04
jogowhere is the logic for 'if delete doesn't work try again in 10 minutes'21:04
reedanteaya, whitespace is 'something' although you may be violating the bylaws if you don't state the truth21:04
reedand firefox decided to drive me crazy today21:05
*** Sukhdev has joined #openstack-infra21:05
mordredjeblair, clarkb: digging into the ansible nova code this weekend, I learned a few things that we could potentially make use of ... but also, as expected, learned that we currently know a LOT about spinning up nodes :)21:05
*** marcoemorais has quit IRC21:06
anteayareed: it is the truth, whitespace is what I give to everyone who requests my contact info21:06
anteayaand if ever you need me, you know where to find me21:06
*** marcoemorais has joined #openstack-infra21:06
clarkbjogo: its a cron in nodepool21:06
*** denis_makogon has quit IRC21:06
reedanteaya, :) this channel is logged: you need to follow the bylaws21:07
jogoclarkb: link?21:07
*** CaptTofu_ has joined #openstack-infra21:08
jogoclarkb: because I previously saw a situation where a node is deleted but you have to send a second delete to actually get it to go away. trying to reproduce that now21:09
bodepdjesusaurus: nibalizer the example that I have in mind is: https://github.com/jenkinsci/puppet-jenkins21:09
nibalizerbodepd: i had suspected21:10
nibalizerso i think integrating that is a great idea but gonna be a ton of work21:10
nibalizerbut i also haven't done much investigation into just how much to do21:10
*** isviridov is now known as isviridov|away21:11
clarkbjogo: https://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/nodepool.py#n114221:11
nibalizerand i would prioritize getting off of custom vcsrepo and custom apache above jenkins i think21:11
*** dhellmann has quit IRC21:11
*** loquacities has quit IRC21:11
*** shayneburgess has quit IRC21:11
jogoclarkb: ahh periodic cleanup thanks21:11
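A condensed, purely illustrative sketch of what that periodic cleanup does -- the real logic lives at the nodepool.py link above; method and attribute names here are approximations, and the quoted log message is one of those mentioned earlier:

    def periodic_cleanup(pool):
        # runs from a timer inside the nodepool daemon (the "cron" clarkb refers to)
        for node in pool.getDB().getNodes(state='delete'):
            try:
                manager = pool.getProviderManager(node.provider_name)
                manager.cleanupServer(node.external_id)   # retry the provider-side delete
                node.delete()                             # drop the DB row once it succeeds
            except Exception:
                pool.log.exception("Exception deleting node %s:", node.id)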
*** marcoemorais has quit IRC21:12
*** dhellmann has joined #openstack-infra21:12
*** marcoemorais has joined #openstack-infra21:13
mordrednibalizer: I _believe_ we're going to have to land patches to upstream vcsrepo to get off of our custom one21:13
mordrednibalizer: if you wanted to, you know, accomplish that, I would buy you a chicken21:13
*** loquacities has joined #openstack-infra21:14
nibalizeroh are there github pulls i can review?21:14
mordrednibalizer: the _main_ issue, iirc, is a philosophical one - the vcsrepo seems to try to go out of its way to do the right thing for a developer laptop - that is, it tries to not break things if you did something weird21:14
anteayareed: I do follow the bylaws21:14
anteayalogged or not21:14
mordrednibalizer: whereas we want it for production purposes, so when we say we want ref X - we WANT REF X21:14
mordrednibalizer: nope. never got that far21:14
mordrednibalizer: also, my ruby is crappy21:14
nibalizerhrm well maybe we can add a force => true21:14
mordrednibalizer: I _think_ that was the main issue we had21:15
nibalizerya i was hoping to get bodepd to look at it since my ruby is pretty weak too21:15
bodepdwhat feature do you get by conneting zuul to swift?21:18
*** ociuhandu has joined #openstack-infra21:18
*** mfer has quit IRC21:18
*** james_li_ has joined #openstack-infra21:19
bodepdnibalizer: the alternative is just to patch the openstack one with all of the stuff that I need21:20
bodepdnibalizer: but given that the other jenkins module was already written as an upstream, it seems like a better path21:20
*** matjazp has joined #openstack-infra21:21
*** james_li has quit IRC21:21
*** james_li_ is now known as james_li21:21
nibalizerbodepd: do you need functionality on the master side or the slave too?21:23
clarkbbodepd: the intent is to have single use upload urls per job21:23
clarkbbodepd: so that we can stop copying things to our rapidly shrinking disk array21:23
clarkbjeblair: ^ btw there is a bug fix to zuul that was merged last week. we shoul restart it once we settle in21:23
clarkbjeblair: it is only for the swift uploads so not super urgent21:24
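One way to get the "single use upload urls per job" clarkb describes is Swift's TempURL signing scheme; a hedged sketch, not zuul's actual code -- the account path, key, and TTL are placeholders:

    import hmac
    import time
    from hashlib import sha1

    def make_upload_url(host, account, container, obj, key, ttl=3600):
        # sign "PUT\n<expiry>\n<path>" with the account's temp-url key
        path = '/v1/%s/%s/%s' % (account, container, obj)
        expires = int(time.time()) + ttl
        body = 'PUT\n%d\n%s' % (expires, path)
        sig = hmac.new(key.encode('utf-8'), body.encode('utf-8'), sha1).hexdigest()
        return 'https://%s%s?temp_url_sig=%s&temp_url_expires=%d' % (
            host, path, sig, expires)

A job handed a URL like that can PUT its logs straight into swift before the signature expires, so nothing has to be copied onto the shrinking static log volume first.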
*** matjazp has quit IRC21:25
*** mmaglana has quit IRC21:26
harlowjalifeless was there anything u wanted me to check using my rpm version spec checker tool21:27
*** pdmars has quit IRC21:27
*** otherwiseguy has joined #openstack-infra21:29
*** shayneburgess has joined #openstack-infra21:29
*** matjazp has joined #openstack-infra21:33
*** nati_uen_ has joined #openstack-infra21:34
bodepdnibalizer: I am still reviewing everything needed to build out a master (I just have two patches for that)21:40
openstackgerritSpencer Krum proposed a change to openstack-infra/infra-specs: Multiple Data Dirs proposal  https://review.openstack.org/10036321:41
bodepdnibalizer: I expect to have to write my own slave21:41
nibalizerwooo i'm specing21:41
bodepdclarkb: what things were being copied? so if I omit that config, it just writes to disc?21:42
nibalizerbodepd: do you have a review submitted that slots out the internal jenkins module for the public one?21:42
*** CaptTofu_ has quit IRC21:42
jogoclarkb: so I don't think the cronjob is working21:42
nibalizercus if you do i'll work off of that, otherwise i can start the process21:43
*** CaptTofu_ has joined #openstack-infra21:43
nibalizeralso what-to-do about jjb :P21:43
clarkbbodepd all of the test logs. we arent using it. if you omit it then you will need a jenkins plugin or something to do it for you21:43
clarkbjogo its working to an extent21:43
clarkbjogo we arent leaking all nodes21:43
jogoclarkb: when I found a failed delete, I don't see any second attempts in the logs21:43
jogoclarkb: I think you aren't leaking because the regular delete usually works21:44
swestonanteaya: hello !!21:44
*** matjazp has quit IRC21:44
jogoand I thought you doing a manual 'nodepool delete --now' implies we were leaking21:44
clarkbwe are leaking just not every node21:45
clarkbI suppose its possible the first delete works21:45
swestonanteaya: I do believe that I have what you wanted ... https://review.openstack.org/10035721:46
anteayasweston: hi21:46
jogoclarkb: yeah thats what I am thinking21:46
jogoclarkb: so as far as I can tell (and I may be way off)21:46
jogowhat is happening is:21:47
jogowe do delete a server, and wait for it to be deleted21:47
anteayasweston: well for starters I think I asked you to work with at most 5 patches21:47
anteayasweston: on sandbox21:47
*** andreykurilin_ has quit IRC21:47
anteayasweston: can you abandon a bunch of those please21:47
*** CaptTofu_ has quit IRC21:47
anteayasweston: you don't need a new patch to test a new patchset21:48
jogoif we hit a timeout (600 seconds), then nodepool records a stacktrace and moves on with its life, assuming the cron task will cleanup21:48
*** james_li has quit IRC21:48
swestonanteaya: yes, apologies .. I was running a script .. I created a new one, but I am so tired I must have been running the wrong one21:49
jogobut in the logs I cannot find a second attempt at 'cleanupServer' for nodes that failed to delete the first time21:49
anteayasweston: let's address that first please21:49
jogowhy something doesn't fully delete the first time, I'm not sure. but I thought I found cases where that happened21:49
jogoand a second nova delete was needed21:49
clarkbyes nova does that21:50
swestonanteaya: ok, working on it21:50
clarkbit is the reason the cron exists21:50
jogoclarkb: so /me thinks cron is broken21:50
openstackgerritJoe Gordon proposed a change to openstack-infra/nodepool: Add log to node cleanup task  https://review.openstack.org/10036821:51
jogo^ may help us validate my theory21:52
clarkbjogo thank you for looking21:52
jogoif we never see that log I am right, if we see it I don't know21:52
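The patch itself is not quoted in channel; the idea is just a marker log at the top of the cleanup path, roughly of this hypothetical shape (the function name, logger name, and message text here are invented for illustration):

    import logging
    log = logging.getLogger('nodepool.NodePool')

    def cleanup_node(node, manager):
        # if this line never shows up in the nodepool logs, the periodic task is
        # not reaching nodes stuck in the 'delete' state at all
        log.debug("Periodic cleanup looking at node id: %s", node.id)
        manager.cleanupServer(node.external_id)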
jogoclarkb: np21:53
*** CaptTofu_ has joined #openstack-infra21:53
*** smarcet has quit IRC21:53
sdaguekrotscheck: do you have an opinion on this - https://review.openstack.org/#/c/97788/ ?21:54
krotschecksdague: Urm.21:54
krotschecksdague: Yes.21:54
sdagueok, I want that opinion :)21:55
sdaguebecause it seems odd to me21:55
sdaguebut I'm not educated enough to know21:55
krotschecksdague: I’m trying to figure out how to make this make sense.21:55
krotscheckIf used correctly, it’s a tool.21:55
krotscheckIf not, it’s going to make the upstream package managers reject everything that comes out of horizon.21:56
krotscheckNodeenv _compiles_ node from source.21:56
sdagueinteresting....21:56
krotscheckAnd node is one of those stupid packages which compiles in dependencies.21:57
*** shayneburgess has quit IRC21:57
krotscheckLike, for example, heartbleed.21:57
sdaguenice21:57
sdagueso it looks like they only want to use it for testing21:57
sdaguedo you consider that an acceptable boundary?21:57
*** mriedem has quit IRC21:57
krotschecksdague: I’m not a package manager.21:57
sdaguebasically I'm trying to figure out if I'm -1ing or +2ing it21:57
*** lcostantino has quit IRC21:58
sdaguewell if it's only in test-requirements, it doesn't need to be packaged21:58
krotschecksdague: Yes, but I don’t know how strict they are about package requirements for testing. Do they require the ability to fully test a package only from upstream repositories?21:58
sdagueI don't know.21:59
krotscheckRight.21:59
openstackgerritSpencer Krum proposed a change to openstack-infra/config: Add node def for puppet3 master  https://review.openstack.org/8367821:59
swestonanteaya: ok, I saved three.  the rest are abandoned21:59
sdaguebut, being that it's redhat proposing, I'm going to assume they sort of figured that out.21:59
sdagueat least it sounds like it's not an insane idea if done correctly21:59
*** julim has joined #openstack-infra22:00
sdagueoh, they are doing it for style enforcement - https://review.openstack.org/#/c/97237/22:00
*** nati_uen_ has quit IRC22:00
*** nati_ueno has joined #openstack-infra22:00
krotschecksdague: Then why is it in global requirements?22:00
krotscheckDon’t they have a test-requirements?22:00
sdaguebecause you can't put something in test-requirements that's not in global requirements22:01
anteayasweston: thanks22:01
*** nati_ueno has quit IRC22:01
anteayasweston: let's keep an eye on that so your script doesn't go wild again22:01
*** homeless has quit IRC22:01
krotscheckOh, I see22:01
*** nati_ueno has joined #openstack-infra22:01
sdaguealso, interestingly, their tox calls run_tests.sh22:01
swestonanteaya:  it was a fairly useless script anyway, I deleted it22:01
sdaguethat might make mordred's head explode22:01
anteayasweston: even better22:01
anteayaeasy to keep track of then22:02
anteayasweston: so I can open both .txt files http://logs.ci.vyatta.net/57/100357/1/check/dsvm-tempest-full/bf50077746b9445c8c79fbe27467f8ee/22:02
swestonanteaya:  hehe22:02
openstackgerritSpencer Krum proposed a change to openstack-infra/puppet-storyboard: Adding Rakefile  https://review.openstack.org/10037022:02
anteayasweston: well done22:02
*** homeless has joined #openstack-infra22:02
swestonanteaya: :-)22:02
anteayasweston: but the directory just contains the same txt files again22:02
anteayaso, why?22:02
mtreinishsdague: recursive venvs?22:02
*** markmcclain1 has quit IRC22:02
swestonanteaya: let me check22:03
anteayakk22:03
mtreinishsdague: or is their run_tests kind of like pretty_tox?22:03
sdaguemtreinish: I honestly stopped reading after that point22:03
*** doug-fish has left #openstack-infra22:03
krotschecksdague: So, long story short, I want the package managers from centos, redhat, and ubuntu to +1 this, with explicit approvals from them that it’s a valid thing to use during tests.22:03
krotschecksdague: And even then it makes me leery.22:04
*** Ryan_Lane1 has joined #openstack-infra22:04
*** marcoemorais has quit IRC22:04
*** marcoemorais has joined #openstack-infra22:05
*** marcoemorais has quit IRC22:05
jogoclarkb: yeah so it must be the cron job failing because the cron job would run on nodes that are in the delete state22:05
*** marcoemorais has joined #openstack-infra22:05
jogobut no nodes in delete state are being deleted (see nodepool.nodepool:1819)22:06
swestonanteaya:  it was a problem with the jenkins job builder file, I fixed it22:06
jogojeblair: ^22:06
swestonanteaya: shall I run the whole process again?22:06
anteayasweston: yes please22:07
swestonanteaya: ok ... good thing this is super quick .. only half an hour, hehe22:07
swestonanteaya: what is the next step?22:07
*** marcoemorais has quit IRC22:07
*** marcoemorais has joined #openstack-infra22:08
*** nikhil___ is now known as nikhil___|afk22:09
devanandacould anyone point me towards figuring out how to approve patches for stable/icehouse branch of ironic?22:09
anteayasweston: well check the timestamps in the logs you have produced thus far22:11
anteayaare they utc timestamps?22:11
swestonanteaya: no22:13
swestonanteaya: I'll figure out how to fix that22:13
*** matjazp has joined #openstack-infra22:13
*** weshay has quit IRC22:14
bodepdnibalizer: I definitely don't have that. I'm still building around the current jenkins module.22:14
anteayasweston: k, that's your next step22:15
anteayasweston: I think it is a matter of changing the timezone of your server running your tests22:15
swestonanteaya:  thank you22:16
anteayasweston: unless that would create a problem for you22:16
anteayadevananda: I don't know how to do that22:16
swestonanteaya: shouldn't ... but the server is already in my local timezone, so I'm thinking it must be a Jenkins setting22:17
anteayareed: what is this group about? https://review.openstack.org/#/admin/groups/126,members22:17
anteayasweston: k, whatever works for you22:17
* reed checking22:17
anteayabut next thing is utc timestamps22:17
swestonanteaya: kk22:18
reedare auth sessions expiring faster on gerrit lately?22:18
anteayareed: not that I have seen22:18
reedanteaya, that group is those who signed the US Government CLA22:18
anteayaif you have two tabs open and are signed into one and not the other22:18
anteayathey fight with each other22:18
anteayareed: ah okay thanks22:18
*** homeless has quit IRC22:19
morganfainbergdevananda, i think PTL and stable maint folks usually weigh in on any stable branch change22:20
*** CaptTofu_ has quit IRC22:20
morganfainbergdevananda, and (at least in keystone) we use the same 2x+2 and then approval for it22:20
*** _TheDodd_ has joined #openstack-infra22:20
*** CaptTofu_ has joined #openstack-infra22:20
*** ramashri has quit IRC22:20
devanandamorganfainberg: that's what i'd expect, but i actually don't have +2/+A powers for ironic's stable/icehouse branch22:21
devanandamorganfainberg: OR i dont know how to do it22:21
devanandaand am doing something wrong22:21
*** ramashri has joined #openstack-infra22:21
morganfainbergdevananda, nah you should have that power, prob a fix needs to go into infra if you don't22:21
morganfainbergdevananda, or there is a group you're not in that you should be22:22
* morganfainberg goes and looks at how keystone works.22:22
devanandamorganfainberg: hmm... right. /me looks at nova22:22
*** doddstack has quit IRC22:23
mtreinishmorganfainberg, devananda: it depends on the project. It's in the gerrit acls for the project. For example tempest stable branches the whole core group has +2 on22:23
morganfainbergmtreinish, it's not in the keystone acl, so maybe dolph is in a special stable maintenance group?22:25
morganfainbergah22:25
mtreinishmorganfainberg: does it inherit from all-projects? Then it's https://review.openstack.org/#/admin/projects/All-Projects,access22:25
morganfainbergmtreinish, yeah dolph is in stable-maintenance22:25
morganfainbergdevananda, ^ thats why dolph can do that stuff22:25
morganfainbergdevananda, as PTL you might need to be in https://review.openstack.org/#/admin/groups/120,members22:26
devanandaah ha!22:26
devanandait wasn't jenkins config or gerrit project config22:26
morganfainbergor just bug those folks to do it22:27
mtreinishdevananda: I'm on stable maint too if you want me to look at a stable thing...22:27
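What mtreinish is describing lives in gerrit ACLs -- either a project's own ACL file or the inherited All-Projects one. An illustrative stanza (group names here are examples, not ironic's real configuration) would look like:

    [access "refs/heads/stable/*"]
        abandon = group stable-maint
        label-Code-Review = -2..+2 group stable-maint
        label-Workflow = -1..+1 group stable-maint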
devanandaadam_g: want to do stable maint for ironic?22:27
devanandamtreinish: ah, thx. this one: https://review.openstack.org/#/c/97475/22:28
adam_gdevananda, i can help with it, sure.22:28
devanandaso secondary question - can we disable certain tempest runs on stable branch?22:28
*** matjazp has quit IRC22:28
devanandaadam_g: i know we didn't enable voting in the virtual-ironic job during icehouse, but i don't know, at this point, what we'd need to backport to make that job work on it22:29
mtreinishdevananda: not unless it's tied to a specific feature flag in the tempest config.22:30
adam_gdevananda, IIRC it was passing when icehouse released but wasn't made voting till after22:30
devanandaadam_g: ahh, never mind then.22:31
devanandalet's see why it's failing22:31
*** jgrimm_ has quit IRC22:34
*** jergerber has quit IRC22:35
cody-somervillejogo: sdague: What if new hacking rules were warnings at first and then graduated to errors? or something. :)22:36
*** flaper87 is now known as flaper87|afk22:37
*** UtahDave has quit IRC22:38
*** matjazp has joined #openstack-infra22:41
openstackgerritAdam Gandelman proposed a change to openstack-infra/devstack-gate: Archive Ironic VM nodes console logs  https://review.openstack.org/9221622:42
adam_gdevananda, ^ that would help us here. not sure whats up with that stable job failing.22:42
sdaguecody-somerville: there are definitely lots of ways we could do it. :)22:42
*** mestery has joined #openstack-infra22:42
cody-somervillesdague: lintian might be a good model22:43
*** ihrachyshka has quit IRC22:44
*** _TheDodd_ has quit IRC22:46
sdaguejogo: where's your change to get rid of large-ops on havana?22:47
lifelessharlowja: well that matrix I put in my comment22:48
*** otherwiseguy has quit IRC22:48
harlowjalifeless kk, forgot about that one22:48
*** nati_ueno has quit IRC22:49
*** mestery has quit IRC22:55
*** amotoki_ has quit IRC22:55
mattoliveraumorning22:55
*** thedodd has joined #openstack-infra22:56
*** marun has quit IRC22:56
anteayamorning mattoliverau22:56
*** bhuvan has joined #openstack-infra22:58
anteayaif someone could get nodepool to delete some more suck nodes23:01
anteayathat could be useful23:01
anteayas/suck/stuck23:01
jheskethMorning23:01
anteayamorning jhesketh23:01
*** lttrl has joined #openstack-infra23:03
*** Sukhdev has quit IRC23:05
*** matjazp has quit IRC23:06
*** zzelle_ has quit IRC23:07
*** rkukura_ has joined #openstack-infra23:08
*** rkukura has quit IRC23:08
*** rkukura_ is now known as rkukura23:08
*** nati_ueno has joined #openstack-infra23:10
*** mmaglana has joined #openstack-infra23:12
*** CaptTofu_ has quit IRC23:13
harlowjalifeless comment added for matrix23:13
*** CaptTofu_ has joined #openstack-infra23:13
lifelessharlowja: thanks!23:16
harlowjanp23:16
lyxusis this the proper place to talk about a devstack issue? There was a commit that broke it (or so I think)23:17
*** CaptTofu_ has quit IRC23:18
*** yamahata has quit IRC23:18
*** thedodd has quit IRC23:18
*** julim has quit IRC23:19
lyxusOn trunk, the commit (3723814bf27fb4d78c6c3ad80d77882f75ad07c4) introduced a reference to tools/outfilter.py, a file that doesn't exist23:19
jheskethclarkb, fungi: Are either of you guys able to have a look at a job that's been stuck in queue for nearly 100hrs?23:21
*** jamielennox|away is now known as jamielennox23:21
sdaguelyxus: -qa is the right place for that23:21
lyxussdague, thanks !23:21
*** lttrl has quit IRC23:24
*** lttrl has joined #openstack-infra23:25
clarkbjhesketh sorry I am not super useful today. fighting a cold and generally blah23:25
jheskethno worries, get better soon!23:25
clarkbjhesketh is it one of the swift uploads?23:25
sdagueclarkb: if you are still kicking - https://review.openstack.org/#/c/99750/23:25
clarkbthose dont work right now23:25
jheskethclarkb: yes23:25
sdagueas our ability to land havana patches is basically pretty hit or miss until we disable largeops on them23:26
clarkbjhesketh we need to restart zuul to pick up my bug fix23:26
clarkbyou can push a new patchset to remove the job23:27
jheskethclarkb: which bug fix?23:27
clarkbthe one that gets the default swift url23:27
*** blamar has joined #openstack-infra23:27
*** prad_ has quit IRC23:30
clarkbsdague: done23:32
sdagueclarkb: thanks23:33
sdagueclarkb: also, is this out there for a reason still - https://review.openstack.org/#/q/status:open+project:openstack-infra/devstack-gate+branch:master+topic:do-not-merge,n,z ?23:36
*** sarob_ has quit IRC23:36
*** sarob_ has joined #openstack-infra23:37
*** matjazp has joined #openstack-infra23:37
*** melwitt has quit IRC23:38
clarkbit can be abandoned or WIP'd if that helps23:38
openstackgerritA change was merged to openstack-infra/config: Don't run large-ops test on stable/havana branches  https://review.openstack.org/9975023:39
*** sarob_ has quit IRC23:41
*** dims has quit IRC23:41
mtreinishsdague: do you want to start working through the v3 api patches today?23:41
*** matjazp has quit IRC23:41
*** atiwari has quit IRC23:47
sdagueclarkb: it keeps showing up in some slice of my queries, just curious23:48
sdaguemtreinish: actually, I'm about to walk away and get a drink. As it's properly quitting time :)23:48
sdagueping me in the morning though, tomorrow might work on that23:49
mtreinishsdague: ok23:49
sdaguethe zuul job queue is so close to having capacity again :)23:49
*** hemna is now known as hemna_23:50
mikalIs there a known issue in the postgres unit tests for nova at the moment?23:52
mikalI'm seeing:23:52
mikalAttributeError: 'PostgreSQLOpportunisticFixture' object has no attribute '_details'23:52
mikalWhich is new to me23:52
mikalhttp://logs.openstack.org/75/94775/6/check/gate-nova-python26/7a09354/console.html is an example23:53
*** sarob_ has joined #openstack-infra23:55
openstackgerritAdam Gandelman proposed a change to openstack/requirements: Set version bounds for diskimage-builder  https://review.openstack.org/10038523:56
sdaguelooks like it has some hits - http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkF0dHJpYnV0ZUVycm9yOiAnUG9zdGdyZVNRTE9wcG9ydHVuaXN0aWNGaXh0dXJlJyBvYmplY3QgaGFzIG5vIGF0dHJpYnV0ZSAnX2RldGFpbHMnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDI5NjI5NTU0MTZ923:57
sdaguemikal: there are definitely a bunch of races introduced by oslo.db23:57
sdagueespecially in the unit tests23:57
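The long blob in that logstash URL is just the URL-encoded search JSON; decoded, it appears to be a quoted-phrase query for the exact error over a 604800-second (7-day) window:

    "AttributeError: 'PostgreSQLOpportunisticFixture' object has no attribute '_details'"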
mikalThose three clustered ones are all me I think23:58
*** sarob__ has joined #openstack-infra23:58
mikalOh, I lie23:58
mikalNot all me23:58
sdaguemikal: well it might actually be your patch?23:58
openstackgerritIan Wienand proposed a change to openstack-infra/devstack-gate: Make one copy of grenade log files  https://review.openstack.org/10013123:58
sdagueor maybe your patch does a timing shift23:58
mikalsdague: unlikely... Its a libvirt logging patch23:58
sdagueenough to overlap things23:58
sdagueok, drinking time. mikal welcome to the rodeo :)23:59
