Wednesday, 2016-08-24

*** spzala has quit IRC00:00
sc68calclarkb: mordred: anteaya: I'm OK with a patch that switches accept_ra = 2,00:01
*** david-lyle has quit IRC00:01
clarkbsc68cal: why not stop setting forwarding across the board?00:01
sc68calbecause then your instances wouldn't be able to reach the world.00:02
clarkbit seems like the default should be to leave the physical interface alone and let people munge that directly if they desire00:02
clarkbsc68cal: that is not necessary for development00:02
clarkband for the people that want it they can munge their interface that way00:02
mordredclarkb: I believe people want to be able to run apt-get install in guests00:02
mordredin the gate00:02
sc68cal^ this00:02
clarkbmordred: really?00:02
mordredyes00:02
sc68calsdague ran into this and was annoyed00:02
clarkbbut why00:02
anteayaI do00:02
mordredbecause lots of projects have things that run in guests00:02
anteayawell not so much in the gate myself00:02
anteayabut I do like apt get install00:03
sc68calOK - anyway I'm good with accept_ra = 2 - sorry I should have caught that years ago but nobody had real IPv6 infra to test this on for a while so it slipped through00:03
clarkbmordred: fwiw I don't think any of the gate tests have that working today00:03
sc68calI gotta leave for dinner00:03
clarkbmordred: since we don't plumb that through by default iirc00:03
mordredsc68cal: thanks!00:03
anteayasc68cal: thank you00:03
anteayasc68cal: enjoy dinner00:03
clarkbmordred: guests can talk to each other and the hypervisors can talk to the guests, that's it00:03
clarkbmordred: since br-ex by default is not plumbed through to the "physical" nic00:04
mordredclarkb: well, it may be worth finding a time in barcelona to sit down with sdague and sc68cal and talk through needs/desires/interfaces there00:04
*** tonytan4ever has joined #openstack-infra00:04
mordredclarkb: perhaps accept_ra = 2 gets us breathing room until we can circle up at a table in spain?00:04
clarkbmordred: sure I think that will work00:04
clarkbare we wanting to change that on our images or in devstack?00:05
mordred I think devstack00:05
*** amotoki has joined #openstack-infra00:05
clarkbok. we should tell the people not using devstack that they will need to do similar00:05
mordredprobably at the same place that does the sysctl for forwarding I'd guess00:05
clarkbkolla osa etc00:05
mordredyah00:05
*** zhurong has quit IRC00:05
mordredI mean- we could also do it in image setup00:05
openstackgerritSagi Shnaidman proposed openstack-infra/tripleo-ci: POC: WIP: oooq undercloud install  https://review.openstack.org/35891900:05
jeblair--image setup00:05
jeblairi don't really want to be responsible for this :)00:05
mordredyah00:05
mordredclarkb: what's the full key? I've got devstack open to the place00:06
clarkbya I don't think we should be responsible for it00:06
pabelanger++00:06
*** jerryz has quit IRC00:06
*** tonytan4ever has quit IRC00:06
clarkbuh let me find it00:07
*** tonytan4ever has joined #openstack-infra00:07
mordredclarkb: is it net.ipv6.conf.all.accept_ra = 2 ?00:07
openstackgerritSagi Shnaidman proposed openstack-infra/tripleo-ci: WIP: DONT MERGE Testin OOOQ job  https://review.openstack.org/35914600:07
*** david-lyle has joined #openstack-infra00:07
clarkbmordred: net.ipv6.conf.all.accept_ra = 200:07
clarkbmordred: please write a nice comment for why too00:08
clarkbmordred: that way in 6 months when someone thinks we were silly for breaking the RFCs we have a reason00:08
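(A minimal sketch of the sysctl behaviour under discussion — not mordred's actual devstack patch; the forwarding line is assumed from the surrounding conversation:)

```bash
# With the default accept_ra=1, enabling IPv6 forwarding makes the kernel stop
# honoring router advertisements, so the host eventually loses its RA-learned
# default route.  accept_ra=2 keeps RAs accepted even while forwarding.
sudo sysctl -w net.ipv6.conf.all.accept_ra=2
sudo sysctl -w net.ipv6.conf.all.forwarding=1
```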
*** ddieterly has quit IRC00:09
mordredclarkb: https://review.openstack.org/35949000:09
mordredlemme make a longer comment00:09
clarkbmordred: and lets maybe recheck it a lot to make sure it runs happily on osic00:10
*** amotoki has quit IRC00:10
mordredclarkb: k. patchset 2 has a "please don't remove without talking to infra" note00:10
*** sarob has quit IRC00:12
clarkbmordred: you added tabs that people may or may not like (I have no opinions)00:12
*** Jeffrey4l_ has joined #openstack-infra00:12
mordredI did?00:12
anteayayes00:12
mordredhow did I add _tabs_00:12
anteaya>>00:12
anteayasomeone added tabs00:12
mordredfixed00:12
mordredit was me00:13
mordredI think my editor settings only have force-spaces set for python :)00:13
* mordred should fix that00:13
clarkbmordred: let me paste you the magic for that00:13
mordredclarkb: I have some really excellent python-specific indentation rules now00:14
clarkbhttp://paste.openstack.org/show/562778/ there you go00:14
mordredlike, vim now understands that I do not like visual indentation even00:14
mordredthanks!00:14
mordredclarkb: yah - that I have - I think I need to do it for not-python though00:15
clarkbsofttabstop and expandtab I think are the important ones00:15
mordredyah00:15
clarkboh right00:15
clarkbthis was bash00:15
mordredyah. that got it - thanks!00:16
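(A minimal sketch of the kind of vim settings clarkb points at; the paste contents are not reproduced here, and the indent width of 4 is an assumption:)

```bash
# expandtab and softtabstop are the options clarkb calls out as the important
# ones for getting spaces instead of tabs.
cat >> ~/.vimrc <<'EOF'
set expandtab
set softtabstop=4
set shiftwidth=4
EOF
```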
openstackgerritMerged openstack-infra/system-config: Add root:puppet permissions to hieradata  https://review.openstack.org/32664900:16
jeblairit's funny -- the main argument against tabs is that doing it right requires configuring your editor.  :)00:17
mordred:)00:17
clarkbmordred: ps3 caught a bunch of osic nodes too so this should tell us stuff00:17
mordredclarkb: woot!00:17
*** ddieterly has joined #openstack-infra00:17
*** docaedo4 has quit IRC00:25
*** david-lyle has quit IRC00:26
*** vhosakot has quit IRC00:27
pabelangerclarkb: I am seeing a fair bit of SSHTimeoutExceptions for rax-iad subnodes: http://paste.openstack.org/show/562779/00:28
*** tpsilva has quit IRC00:28
clarkbpabelanger: huh, I wonder if that timeout value aggregates for both hosts and we run over in that case00:28
pabelangerya, I can check that now00:29
clarkbpabelanger: since I believe it does try to ssh to them serially00:30
openstackgerritMonty Taylor proposed openstack-infra/shade: Ensure per-resource caches work without global cache  https://review.openstack.org/35877600:30
mordredclarkb: I'm having un-fun getting this stuff landed. otoh - the nodepool-shade job has caught two legit issues today - so yay for testing!00:30
*** docaedo4 has joined #openstack-infra00:31
Shrewsmordred: back now. what are the things?00:31
clarkbmordred: I think we may have lost the host running your ps3 :/ but you might also need to flip the order of your accept_ra and forwarding settings00:32
clarkbmordred: possible that things break immediately when you set forwarding to 1, and then we either don't continue on to fix it, or we do but can't reconnect?00:32
mordredclarkb: nod00:33
mordredclarkb: updated00:33
mordredShrews: all sorts of fun00:33
clarkbin any case I think this is headed down the right track, so hopefully we make progress00:33
mordredShrews: battling a few gate jobs - but although some of dealing with them have been a struggle, they've also caught some real errors, so it's worthwhile struggle00:33
*** sdake has quit IRC00:36
openstackgerritMonty Taylor proposed openstack-infra/nodepool: Install nodepool and shade into a virtualenv  https://review.openstack.org/35942500:36
*** sdake has joined #openstack-infra00:37
mordredShrews: like that^^00:37
mordredbut I think this one is a winner00:37
Shrewsi +2'd the shade things you need00:38
mordredthanks!00:38
mordredShrews: also https://review.openstack.org/#/c/357517/00:38
mordredShrews: I mean, assuming it passes tests00:38
*** claudiub has quit IRC00:38
*** caowei has quit IRC00:41
*** docaedo4 has quit IRC00:41
*** gildub has joined #openstack-infra00:42
pabelangerclarkb: it actually looks like each (sub)node gets the full timeout value. For some reason, rax-ord 120 doesn't appear to be long enough.  We could double it to 240 and see how that performs for tomorrow00:43
clarkbpabelanger: huh ok00:43
*** pvaneck has quit IRC00:46
*** Julien-zte has joined #openstack-infra00:50
pabelangerclarkb: http://paste.openstack.org/show/562780/ of the failure to launch subnode00:52
clarkbmordred: arg it died again00:52
*** esikachev has joined #openstack-infra00:52
clarkbmordred: I think what we want to do is ssh in, tail the devstacklog locally then whatever was last printed is likely to be either the thing that breaks us or the thing just prior00:52
clarkbI need to go do family time now so won't do that myself but I think doing ^ should get us really close00:53
*** kaisers_ has joined #openstack-infra00:53
*** kzaitsev_mb has quit IRC00:53
* clarkb finds dinner00:56
*** esikachev has quit IRC00:57
*** kaisers_ has quit IRC00:58
openstackgerritMerged openstack-infra/subunit2sql: Provide unit test coverage for process_results  https://review.openstack.org/35538501:02
openstackgerritPaul Belanger proposed openstack-infra/project-config: Double rax-ord boot-timeout value  https://review.openstack.org/35949601:03
*** raunak has quit IRC01:03
*** amotoki has joined #openstack-infra01:06
openstackgerritIan Wienand proposed openstack-infra/system-config: launch-node.py: save key when failing early  https://review.openstack.org/35941701:07
openstackgerritIan Wienand proposed openstack-infra/system-config: launch-node.py: set ansible log path  https://review.openstack.org/35949901:07
openstackgerritIan Wienand proposed openstack-infra/system-config: launch-node.py: More verbose logging  https://review.openstack.org/35950001:07
ianwthat's one nicely shaved yak i think01:07
*** docaedo4 has joined #openstack-infra01:08
ianwpabelanger: puppet's changed that permission ... trying again01:08
pabelangerclarkb: ianw: ^ 359496 doubles our boot-timeout in rax-ord01:09
pabelangerianw: ya, did some yak shaving today too01:09
*** amotoki has quit IRC01:10
*** tonytan_brb has joined #openstack-infra01:11
*** mtanin___ has quit IRC01:12
*** Sukhdev has quit IRC01:12
*** zhurong has joined #openstack-infra01:13
*** esikachev has joined #openstack-infra01:13
*** tonytan4ever has quit IRC01:14
*** shashank_hegde has quit IRC01:15
prometheanfireso, how's gate?01:16
* prometheanfire stops pouring gas on the fire01:16
*** pt_15 has quit IRC01:16
*** yanyanhu has joined #openstack-infra01:19
*** tqtran has quit IRC01:19
*** esikachev has quit IRC01:19
openstackgerritChangcheng Intel proposed openstack-infra/jenkins-job-builder: update base_email_ext to adapt Email-ext plugin  https://review.openstack.org/35513901:20
*** chlong has joined #openstack-infra01:20
*** itisha has quit IRC01:20
*** gyee has quit IRC01:22
*** baoli has joined #openstack-infra01:23
*** baoli_ has joined #openstack-infra01:24
anteayaI keep seeing that as chaching Intel01:28
*** baoli has quit IRC01:28
*** fguillot has quit IRC01:30
*** fguillot has joined #openstack-infra01:31
ianwAug 24 01:23:44 review-dev puppet-user[21995]: (/Stage[main]/Openstack_project::Gerrit/Exec[manage_projects]) Could not find command '/usr/local/bin/manage-projects' <- i guess we are missing jeepyb dependency?01:33
*** nwkarsten has joined #openstack-infra01:33
*** Apoorva_ has joined #openstack-infra01:33
*** Apoorva has quit IRC01:37
*** nwkarsten has quit IRC01:37
*** Apoorva_ has quit IRC01:37
ianwno ... just another yak ... missing dependencies looks like01:39
*** Julien-zte has quit IRC01:39
*** salv-orlando has joined #openstack-infra01:40
*** tonytan_brb has quit IRC01:41
cloudnullevenings01:48
cloudnullclarkb:  idk if you're still about,01:48
* cloudnull reading scrollback01:48
clarkbcloudnull: I can tell you its ok you don't need to debug cloud01:48
clarkbcloudnull: so you can mostly ignore the scrollback01:48
clarkbmordred: pabelanger sc68cal http://paste.openstack.org/show/562798/ is where we seem to be breaking with ipv601:48
clarkbcloudnull: ^ basically neutron + devstack creating an ipv6 subnet on osic causes things to bork01:49
clarkbwe also need to set accept_ra to 2 because neutron wants to set forwarding01:49
clarkbmordred: I think we still want your change because its more correct but we also need to figure out why we break when creating that subnet01:49
pabelangerlooking01:49
*** kzaitsev_mb has joined #openstack-infra01:49
cloudnullclarkb: is it something we're doing that's causing devstack to die in a fire ?01:50
clarkbIMO we should hand this off to the neutron team... This is reliably reproduceable and should be something that works01:50
clarkbcloudnull: no I think neutron just doesn't support being nested with ipv6 only because it borks routing (at least when devstack runs it)01:50
clarkbcloudnull: basically this should work, it doesn't. Its up to the project teams to figure out why I Think01:50
cloudnullok01:51
*** salv-orlando has quit IRC01:51
openstackgerritEmilien Macchi proposed openstack-infra/puppet-openstack_infra_spec_helper: Revert "Pin puppetlabs-spec-helper"  https://review.openstack.org/35953901:51
pabelangerclarkb: Agreed, I would say this is a test failure and neutron team should dive more into it01:51
clarkbcloudnull: what we found was that the instances were available from within osic both over private ipv4 and via global ipv601:51
*** thorst_ has joined #openstack-infra01:52
clarkbcloudnull: but are not accessible over global ipv6 outside of osic. route -6n shows we have no working global route when we get into that state but the routes for the local nets are still there, which is why it works from osic01:52
clarkbcloudnull: at first I thought it may have been something in osic but once I saw our routes were going wonky like that I was pretty sure it was on our end01:52
pabelangerclarkb: did the patch mordred propose work?01:53
pabelangerI haven't checked01:53
clarkbpabelanger: no, I think we need it though01:53
pabelangerack01:53
clarkbpabelanger: basically I think there are layers of broken here and mordreds patch fixes layer one01:53
pabelangerright01:53
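(A hedged diagnostic sketch of the symptom clarkb describes above — standard commands only; the ping target is just an example address:)

```bash
# Before devstack creates the IPv6 subnet the host has a global default route;
# once the subnet is created only the link/local-net routes remain, so only
# traffic from inside osic still works.
ip -6 route show                    # same information as the 'route -6n' check mentioned earlier
ping6 -c 3 2001:4860:4860::8888     # example destination outside the cloud
```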
*** yamahata has quit IRC01:57
*** raunak has joined #openstack-infra01:57
*** dkehn_ has quit IRC01:57
*** dkehn has quit IRC01:57
cloudnullok so finally done reading... :)02:00
cloudnullclarkb: so this is an issue w/ devstack creating a v6 network w/in osic nodes. has this always been a problem in the devstack gate or is this new?02:00
* cloudnull have not used devstack in FOREVER...02:01
clarkbcloudnull: I think it has always been an issue we just never noticed because we have primarily been v402:01
cloudnullok.02:01
clarkbcloudnull: zuul/jenkins have almost always used v4 for us until now02:01
clarkbcloudnull: so the problem is in devstack/neutron because this *should* work02:01
*** caowei has joined #openstack-infra02:02
cloudnullok02:02
cloudnullis this a gate blocker for them?02:03
clarkbcloudnull: sort of? basically zuul will reschedule the job over and over until it gets on not osic02:03
clarkbso things aren't broken broken just slow02:03
clarkbI really think we should use that as the carrot for getting it fixed though >_>02:03
cloudnullsounds reasonable to me02:04
jeblairwe have been too lazybusy to put a cap on the number of re-launches zuul will perform :)02:04
*** tonytan4ever has joined #openstack-infra02:04
clarkbbecause really you should be able to run neutron on an instance that relies on ipv6 networking02:05
jeblairjoin us in the future02:05
jeblairby future, i mean 1994.02:05
cloudnullha!02:05
*** vhosakot has joined #openstack-infra02:06
*** amotoki has joined #openstack-infra02:06
*** dkehn_ has joined #openstack-infra02:10
*** kzaitsev_mb has quit IRC02:11
*** amotoki has quit IRC02:11
*** dkehn has joined #openstack-infra02:11
*** sdague has joined #openstack-infra02:12
*** ddieterly has quit IRC02:15
*** esikachev has joined #openstack-infra02:16
clarkbalso I think ipv6 by default in devstack runs is relatively new so this may just be a perfect storm of new things not yet tested02:19
*** zz_dimtruck is now known as dimtruck02:19
*** fguillot has quit IRC02:19
dougwigdo you have a bug number yet?02:20
*** esikachev has quit IRC02:20
clarkbno I am juggling beer and dinner02:20
clarkbthe paste is above where it dies02:20
clarkbif someone else wants to file it otherwise I can file one in the morning02:20
dougwigclarkb: looking, though i am fairly ipv6 illiterate.  i've rattled cages, but folks seem to be in bed.02:21
clarkbdougwig: basically as soon as devstack makes an ipv6 subnet we lose connectivity02:21
clarkbdougwig: and we appear to lose connectivity because the ipv6 default route goes away02:21
dougwigclarkb: can i call it a feature?02:21
dougwig:)02:22
openstackgerritJohn L. Villalovos proposed openstack-infra/yaml2ical: Remove discover from test-requirements  https://review.openstack.org/34580902:23
dougwigclarkb: confused, the end of that paste is creating the external v4 subnet, not v602:23
*** gouthamr_ has joined #openstack-infra02:25
*** spzala has joined #openstack-infra02:25
*** raunak has quit IRC02:26
clarkbhrm maybe it cut off short02:27
*** raunak has joined #openstack-infra02:27
clarkbya i think paste failed me02:27
clarkbI will repaste in a bit02:27
dougwigclarkb: ok, thanks.02:28
*** gouthamr has quit IRC02:28
cloudnullmordred sc68cal pabelanger clarkb -- should we also set "net.ipv6.conf.default.accept_ra" in https://review.openstack.org/#/c/359490/4/lib/neutron_plugins/services/l3 ?02:30
*** sdague has quit IRC02:33
dougwigclarkb: bug 161628202:34
openstackbug 1616282 in neutron "creating ipv6 subnet on ipv6 vm will cause loss of connectivity" [Critical,Confirmed] https://launchpad.net/bugs/161628202:34
clarkbdougwig: http://paste.openstack.org/show/562831/02:34
clarkbcloudnull: maybe?02:35
clarkbdougwig: I trimmed context so I could get the whole thing, hopefully enough there though02:35
*** kzaitsev_mb has joined #openstack-infra02:37
*** caowei has quit IRC02:40
*** caowei has joined #openstack-infra02:41
*** kaisers_ has joined #openstack-infra02:42
*** kaisers_ has quit IRC02:46
*** sdake has quit IRC02:46
*** sdake has joined #openstack-infra02:49
*** vhosakot has quit IRC02:50
*** salv-orlando has joined #openstack-infra02:50
*** yuanying_ has joined #openstack-infra02:51
*** yuanying has quit IRC02:52
*** yuanying has joined #openstack-infra02:53
*** kzaitsev_mb has quit IRC02:53
cloudnullwe're seeing a package installation problems in a few of our gates. http://logs.openstack.org/25/359225/7/check/gate-openstack-ansible-os_watcher-ansible-func-ubuntu-xenial/3518959/console.html#_2016-08-24_02_48_16_34453202:53
*** salv-orlando has quit IRC02:53
cloudnullits using the osic repo servers at "http://mirror.regionone.osic-cloud1.openstack.org/ubuntu xenial"02:54
cloudnullhave others seen something similar recently?02:54
clarkbcloudnull: that looks like you may need to apt get update?02:55
*** yuanying has quit IRC02:55
*** yuanying has joined #openstack-infra02:56
cloudnullyea. im looking into that now, the ansible module **should** be doing an update, but maybe its not for some reason02:56
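(The shell equivalent of the check clarkb suggests — illustration only; in the OSA job the install actually goes through Ansible's apt module, which has an update_cache option for the same purpose:)

```bash
# Refresh the package index from the mirror first so a stale cache doesn't
# reference package versions the mirror no longer carries.
sudo apt-get update
sudo apt-get install <package>   # placeholder package name
```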
*** tphummel has quit IRC02:56
*** yuanying_ has quit IRC02:57
*** Jeffrey4l_ has quit IRC03:03
*** gongysh has joined #openstack-infra03:03
*** gouthamr_ is now known as gouthamr03:05
*** amotoki has joined #openstack-infra03:07
*** Jeffrey4l_ has joined #openstack-infra03:08
*** mtanino has joined #openstack-infra03:08
*** amotoki has quit IRC03:11
dougwigclarkb: what version of openstack is osic running?03:14
openstackgerritIan Wienand proposed openstack-infra/puppet-jeepyb: Ensure development files installed  https://review.openstack.org/35956203:14
openstackgerritNate Johnston proposed openstack-infra/project-config: Make neutron-fwaas functional job not experimental  https://review.openstack.org/35932003:17
ianwzaro / rcarrillocruz : ^ review-dev doesn't want to deploy due to issues with dependencies here ... but getting there!03:17
*** Wei_Liu has joined #openstack-infra03:18
amrithany infra cores around, could I get a second +2 and an A+1 on https://review.openstack.org/#/c/354881/ please. AJaeger has already put one +2 on it. It adds a couple of non voting jobs to the Trove gate. Thanks!03:19
Wei_Liuhi, I have one question about jjb, does it support docker-plugin in jenkins?03:20
clarkbdougwig: I think cloudnull said liberty03:21
openstackgerritEmilien Macchi proposed openstack-infra/tripleo-ci: WIP - Implement undercloud upgrade job - Mitaka -> Newton  https://review.openstack.org/34699503:22
*** amotoki has joined #openstack-infra03:24
*** rajinir has quit IRC03:25
*** dimtruck is now known as zz_dimtruck03:28
cloudnulldougwig clarkb: whats that ?03:29
cloudnullmay be missing a message there.03:29
dougwigcloudnull: i asked what version of openstack that the osic cloud was running?03:30
cloudnullah. it liberty03:30
dougwigwhich core plugin?  ovs?03:30
cloudnulllinux bridge03:30
dougwiglb, ok.03:30
cloudnullML2 + lxb03:30
zaroWei_Liu: there's something about docker build publish plugin: http://docs.openstack.org/infra/jenkins-job-builder/builders.html?highlight=docker#builders.docker-build-publish03:32
zaroianw: which dependencies?  Gerrit?  let me know if i can help03:33
*** zz_dimtruck is now known as dimtruck03:34
ianwzaro: for jeepyb, see log in 359562 ... i'm just bumbling with the puppet now03:34
zarothat seems odd. doesn't review.o.o and review-dev share that?03:34
zaroand i think we've recently built a new review.o.o to do the same thing (make it bigger).03:36
*** gouthamr has quit IRC03:36
*** gouthamr has joined #openstack-infra03:37
*** amotoki has quit IRC03:38
*** thorst_ has quit IRC03:38
dougwigif you're up, i need one more infra core to peek at this devstack-gate change, so i don't break the world when the lbaas v1 removal happens: https://review.openstack.org/#/c/358258/03:39
dougwig(trying to get that done at this week's midcycle)03:39
*** thorst_ has joined #openstack-infra03:39
openstackgerritIan Wienand proposed openstack-infra/puppet-jeepyb: Ensure development files installed  https://review.openstack.org/35956203:39
ianwzaro: well, it's dependencies ... so if something else just happened to come first it would be satisfied and you wouldn't know03:41
*** vikrant has joined #openstack-infra03:41
*** Sukhdev has joined #openstack-infra03:42
*** bhunter71 has quit IRC03:42
*** gouthamr has quit IRC03:45
*** AnarchyAo has joined #openstack-infra03:46
Wei_Liuzaro, no, I mean docker plugin, which is able to use a docker host to dynamically provision a slave and run build03:47
*** senk has joined #openstack-infra03:47
*** thorst_ has quit IRC03:47
*** yuanying has quit IRC03:48
*** spzala has quit IRC03:50
*** bauzas has left #openstack-infra03:50
*** kzaitsev_mb has joined #openstack-infra03:50
*** aeng has quit IRC03:50
*** tonytan4ever has quit IRC03:50
*** yuanying has joined #openstack-infra03:51
openstackgerritYanyan Hu proposed openstack-infra/project-config: Enable zaqar for senlin integration test  https://review.openstack.org/35456603:55
*** raunak has quit IRC03:57
*** salv-orlando has joined #openstack-infra03:58
*** gongysh has quit IRC03:59
*** amotoki has joined #openstack-infra04:00
*** baoli_ has quit IRC04:01
*** DmZDsfZoQv has quit IRC04:03
*** dimtruck is now known as zz_dimtruck04:05
*** aeng has joined #openstack-infra04:07
*** raunak has joined #openstack-infra04:08
*** salv-orlando has quit IRC04:09
*** amotoki has quit IRC04:09
*** raunak has quit IRC04:09
*** VmtcXzLmiz has joined #openstack-infra04:10
*** amotoki has joined #openstack-infra04:10
*** VmtcXzLmiz has quit IRC04:10
*** raunak has joined #openstack-infra04:13
*** roxanagh_ has joined #openstack-infra04:13
*** jraju has joined #openstack-infra04:14
*** jraju has quit IRC04:15
*** shashank_hegde has joined #openstack-infra04:18
*** gyx has joined #openstack-infra04:18
*** esikachev has joined #openstack-infra04:18
*** roxanagh_ has quit IRC04:19
*** esikachev has quit IRC04:23
*** kdas__ has joined #openstack-infra04:24
*** mdrabe has quit IRC04:28
*** kaisers_ has joined #openstack-infra04:31
*** amotoki has quit IRC04:31
*** kdas__ is now known as kushal04:31
*** kushal has joined #openstack-infra04:31
openstackgerritAndreas Jaeger proposed openstack-infra/bindep: Give some examples  https://review.openstack.org/35881104:33
*** kaisers_ has quit IRC04:35
*** amotoki has joined #openstack-infra04:37
*** senk has quit IRC04:38
*** bhunter71 has joined #openstack-infra04:38
*** esikachev has joined #openstack-infra04:39
*** yamamoto has quit IRC04:39
*** yamamoto has joined #openstack-infra04:39
*** raunak has quit IRC04:40
*** yamamoto has quit IRC04:41
*** mtanino has quit IRC04:42
*** raunak has joined #openstack-infra04:42
*** yamamoto has joined #openstack-infra04:42
*** yamamoto has quit IRC04:42
*** esikachev has quit IRC04:45
*** thorst_ has joined #openstack-infra04:45
*** asettle has joined #openstack-infra04:51
*** tonytan4ever has joined #openstack-infra04:51
*** thorst_ has quit IRC04:52
openstackgerritRabi Mishra proposed openstack-infra/project-config: Switch all heat jobs to use devstack plugin  https://review.openstack.org/31781704:53
*** salv-orlando has joined #openstack-infra04:54
*** tonytan4ever has quit IRC04:56
*** asettle has quit IRC04:56
*** Daisy has joined #openstack-infra05:02
*** Daisy has quit IRC05:02
*** andymaier_ has joined #openstack-infra05:05
*** AnarchyAo has quit IRC05:05
openstackgerritMerged openstack-infra/puppet-jeepyb: Require packages before installing jeepyb  https://review.openstack.org/32770705:07
*** jaosorior has joined #openstack-infra05:07
*** jaosorior has quit IRC05:09
*** jaosorior has joined #openstack-infra05:10
*** kzaitsev_mb has quit IRC05:10
*** psachin has joined #openstack-infra05:17
*** javeriak has joined #openstack-infra05:18
*** senk has joined #openstack-infra05:27
*** Sukhdev has quit IRC05:27
*** rkukura_ has joined #openstack-infra05:27
*** rkukura has quit IRC05:29
*** rkukura_ is now known as rkukura05:29
*** javeriak has quit IRC05:32
*** raunak has quit IRC05:38
*** andymaier_ has quit IRC05:39
*** roxanagh_ has joined #openstack-infra05:39
*** harlowja_at_home has joined #openstack-infra05:39
*** yamamoto has joined #openstack-infra05:39
*** roxanagh_ has quit IRC05:39
openstackgerritMerged openstack-infra/project-config: Move other-requirements.txt to bindep.txt  https://review.openstack.org/35486105:40
*** esikachev has joined #openstack-infra05:41
*** armax has quit IRC05:41
*** esikachev has quit IRC05:46
*** ilyashakhat has joined #openstack-infra05:47
ianwzaro: ok, so i have to wait for https://review.openstack.org/327707 to get pushed to puppetmaster because new hosts get their /etc/puppet/modules copied from there by ansible.  once that's done, i should be able to start the new review-dev, detach the cinder volume with the old one's ~gerrit2 and re-attach it, and update dns.  so hopefully finish this tomorrow my time05:48
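(A hedged sketch of the volume shuffle ianw outlines — server and volume names here are placeholders, not the real resources:)

```bash
# Detach the cinder volume holding ~gerrit2 from the old review-dev and attach
# it to the replacement; DNS is updated separately.
openstack server remove volume review-dev-old gerrit2-home
openstack server add volume review-dev-new gerrit2-home
```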
*** coolsvap has joined #openstack-infra05:49
ianwfungi: you might like to review the patches at https://review.openstack.org/#/q/status:open+project:openstack-infra/system-config+branch:master+topic:launch-node before trying to launch your wiki nodes.  at least the interface restart is required, the logging is nice to have & has helped me debug05:50
*** thorst_ has joined #openstack-infra05:50
*** rbuzatu has quit IRC05:51
openstackgerritVasyl Saienko proposed openstack-infra/devstack-gate: DO NOT REVIEW  https://review.openstack.org/35609405:52
*** adriant has quit IRC05:52
*** tonytan4ever has joined #openstack-infra05:52
*** tonytan4ever has quit IRC05:57
*** thorst_ has quit IRC05:58
*** javeriak has joined #openstack-infra05:59
*** oanson has joined #openstack-infra06:00
*** sandanar has joined #openstack-infra06:02
*** esikachev has joined #openstack-infra06:02
*** _nadya_ has joined #openstack-infra06:03
*** tqtran has joined #openstack-infra06:03
*** mikelk has joined #openstack-infra06:03
*** _nadya_ has quit IRC06:04
*** esikachev has quit IRC06:06
*** ilyashakhat has quit IRC06:06
*** kzaitsev_mb has joined #openstack-infra06:07
AJaegerianw: could you review pabelanger's change https://review.openstack.org/359496 , please?06:12
*** r-mibu has quit IRC06:12
*** _nadya_ has joined #openstack-infra06:12
openstackgerritMerged openstack-infra/puppet-infracloud: Set the ssl_key_file_contents to mandatory  https://review.openstack.org/35929406:13
*** brad_behle_ has joined #openstack-infra06:19
*** kaisers_ has joined #openstack-infra06:19
*** AnarchyAo has joined #openstack-infra06:22
*** rcernin has joined #openstack-infra06:23
*** brad_behle has quit IRC06:23
*** kaisers_ has quit IRC06:24
*** aeng has quit IRC06:28
*** kzaitsev_mb has quit IRC06:30
*** ccamacho has joined #openstack-infra06:32
*** r-mibu has joined #openstack-infra06:32
*** ccamacho has quit IRC06:32
*** javeriak has quit IRC06:33
yolandagood morning06:34
*** ccamacho has joined #openstack-infra06:36
*** senk has quit IRC06:36
*** kushal has quit IRC06:37
*** kushal has joined #openstack-infra06:38
*** indistylo has joined #openstack-infra06:38
*** andreas_s has joined #openstack-infra06:38
*** aeng has joined #openstack-infra06:42
*** harlowja_at_home has quit IRC06:44
AJaegergood morning, yolanda !06:44
openstackgerritSagi Shnaidman proposed openstack-infra/tripleo-ci: WIP: DONT MERGE Testin OOOQ job  https://review.openstack.org/35914606:47
*** kaisers_ has joined #openstack-infra06:47
*** DrifterZA has joined #openstack-infra06:50
*** _nadya_ has quit IRC06:52
openstackgerritYuval Brik proposed openstack-infra/project-config: Rename Smaug to Karbor  https://review.openstack.org/35330406:52
*** yuval has joined #openstack-infra06:55
*** thorst_ has joined #openstack-infra06:56
*** tesseract- has joined #openstack-infra06:56
*** javeriak has joined #openstack-infra06:57
*** AnarchyAo has quit IRC06:58
*** _nadya_ has joined #openstack-infra07:02
*** thorst_ has quit IRC07:03
*** esikachev has joined #openstack-infra07:03
openstackgerritSachi King proposed openstack-dev/pbr: WIP: Don't ignore data-files  https://review.openstack.org/34521007:07
*** esikachev has quit IRC07:07
*** coolsvap is now known as coolsvap_07:09
*** flepied has joined #openstack-infra07:11
*** florianf has joined #openstack-infra07:11
*** indistylo has quit IRC07:12
*** javeriak has quit IRC07:12
*** sputnik13 has quit IRC07:15
*** pgadiya has joined #openstack-infra07:15
*** javeriak has joined #openstack-infra07:17
*** javeriak has joined #openstack-infra07:17
openstackgerritVasyl Saienko proposed openstack-infra/devstack-gate: DO NOT REVIEW  https://review.openstack.org/35609407:18
*** jordanP has joined #openstack-infra07:19
*** jordanP has quit IRC07:20
*** claudiub has joined #openstack-infra07:24
*** kzaitsev_mb has joined #openstack-infra07:27
*** tphummel has joined #openstack-infra07:28
_nadya_dear openstack-infra team! Could you please add nprivalova@mirantis.com to the group https://review.openstack.org/#/admin/groups/1535,members ? It is a new project created by https://review.openstack.org/#/c/355406/ . Thanks in advance!07:29
*** jpich has joined #openstack-infra07:32
yuvalAJaeger: hey :)07:33
yuvalAJaeger: regarding https://review.openstack.org/#/c/353304/ and smaug => karbor. Is there a preference in renaming pypi packages or creating new ones?07:34
*** ganesan has joined #openstack-infra07:37
AJaegeryuval: no idea ;(07:38
AJaegeryuval: I think that depends on how many already use your packages.07:38
*** vincentll has joined #openstack-infra07:39
*** e0ne has joined #openstack-infra07:40
openstackgerritMerged openstack-infra/project-config: Double rax-ord boot-timeout value  https://review.openstack.org/35949607:40
*** andymaier_ has joined #openstack-infra07:44
*** mikelk has quit IRC07:46
*** ifarkas_afk is now known as ifarkas07:47
openstackgerritMerged openstack-infra/project-config: Removed directory changes in npm-dsvm-macro  https://review.openstack.org/35942807:48
openstackgerritAndreas Jaeger proposed openstack-infra/project-config: Update api-site zuul config  https://review.openstack.org/35965607:49
*** yaume has joined #openstack-infra07:50
*** bethwhite_ has joined #openstack-infra07:50
*** abregman has joined #openstack-infra07:51
*** martinkopec has joined #openstack-infra07:51
ganesanI am getting this error in nodepool.log: "JenkinsException: Error in request. Possibly authentication failed [500]: Server Error"07:52
*** e0ne has quit IRC07:53
ganesananyone help me to find the problem with my CI setup07:53
yuvalWould appreciate if any infra core could help merge https://review.openstack.org/#/c/359019/ (Karbor Fullstack Path Fix)07:53
*** tonytan4ever has joined #openstack-infra07:54
ganesannodes are created by nodepool and it is able to ssh into that07:54
yuvalthanks ianw :)07:55
*** matthewbodkin has joined #openstack-infra07:55
*** tonytan4ever has quit IRC07:58
*** zzzeek has quit IRC08:00
*** thorst_ has joined #openstack-infra08:00
*** zzzeek has joined #openstack-infra08:00
*** amotoki has quit IRC08:01
*** yaume_ has joined #openstack-infra08:02
*** kushal has quit IRC08:02
*** abregman has quit IRC08:02
openstackgerritMerged openstack-infra/project-config: Karbor (Smaug) Fullstack Path Fix  https://review.openstack.org/35901908:03
*** esikachev has joined #openstack-infra08:04
*** matrohon has joined #openstack-infra08:04
openstackgerritMerged openstack-infra/project-config: Add python34-jobs and python35-jobs to Almanach  https://review.openstack.org/35934508:05
*** yaume has quit IRC08:05
*** matrohon has quit IRC08:06
*** derekh has joined #openstack-infra08:08
*** esikachev has quit IRC08:08
*** thorst_ has quit IRC08:09
*** matrohon has joined #openstack-infra08:10
openstackgerritMerged openstack-infra/project-config: Use aliasByNode for Node Launches panel  https://review.openstack.org/35929908:11
openstackgerritMerged openstack-infra/project-config: Remove stale octavia job that never gets run, and is broken  https://review.openstack.org/35936708:11
*** lucas-afk is now known as lucasagomes08:12
ianwyolanda : you know anything about having to run "$ java -jar ./bin/gerrit.war reindex" as gerrit2 before "/etc/init.d/gerrit start" works?  because it seems our current puppet doesn't so the startup of gerrit fails08:12
yolandaianw, i remember doing that manually yes. But not on a clean install, i've done it on migrations08:13
*** abregman has joined #openstack-infra08:13
ianwyolanda: yeah, i'm wondering if https://review.openstack.org/#/c/355194/2/manifests/init.pp actually broke it for clean installs?08:13
*** abregman has quit IRC08:14
yolandaianw, so the parameter value has been changed? i see defaults for secondary-index to false as well, the behaviour shall be the same?08:15
ianwyolanda: do we set secondary_index though?  if we are, then we've stopped doing the index first i think?08:16
yolandaso... looks like08:16
yolandai see system-config setting secondary_index to true08:16
ianwyeah modules/openstack_project/manifests/gerrit.pp08:16
ianwso basically i guess we're not doing it any more?08:16
yolandayes, per that change, we have stopped doing that reindex08:17
ianwand yeah, on a new install, gerrit refuses to start saying you haven't indexed or something08:17
yolandacan you try a clean install with that offline_reindex set to true?08:17
openstackgerritMerged openstack-infra/project-config: Add new project "os-failures"  https://review.openstack.org/35581908:18
ianwyolanda: umm, not really ... this is from launch-node.py trying to create a new review-dev host08:18
ianwwell maybe i can, but too hard for right now :)08:18
yolandai don't have the background, but why has secondary_index parameter disappeared in favour of that offline reindex?08:18
*** dizquierdo has joined #openstack-infra08:19
ianwyolanda: for posterity, the error is http://paste.openstack.org/show/562898/08:19
*** woodster_ has quit IRC08:19
*** abregman has joined #openstack-infra08:19
yolandaso yes, that change looks as the problem08:20
yolandado you want to revert and retry?08:20
ianwyolanda: i'm about done for the day.  i might just put a change in, and we can let zaro have a look08:21
ianwthis server is for him anyway, so no rush :)08:21
yolandaok sounds good08:21
openstackgerritYuval Brik proposed openstack-infra/project-config: Rename Smaug to Karbor  https://review.openstack.org/35330408:21
ianwbringing this up has been a bit of a nightmare :(  bitrot sets in on this stuff when it's not used that frequently08:21
openstackgerritHenry Gessau proposed openstack-infra/project-config: Fix neutron failure rates dashboard integrated jobs list  https://review.openstack.org/35846208:21
*** tphummel has quit IRC08:22
*** jordanP has joined #openstack-infra08:22
*** Na3iL has joined #openstack-infra08:23
*** esikachev has joined #openstack-infra08:25
*** hashar has joined #openstack-infra08:25
*** yamahata has joined #openstack-infra08:27
*** asettle has joined #openstack-infra08:28
openstackgerritIan Wienand proposed openstack-infra/puppet-gerrit: Add secondary_index check  https://review.openstack.org/35968308:28
ianwzaro: ^ if you can look at that, it will help getting review-dev working :)08:28
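(For reference, a hedged sketch of the manual workaround ianw describes earlier — the site path is an assumption; the proper fix is the puppet-gerrit change above:)

```bash
# On a fresh install the offline reindex has to run as gerrit2 from the Gerrit
# site directory before the service will start.
cd /path/to/review_site              # hypothetical site directory
sudo -u gerrit2 java -jar ./bin/gerrit.war reindex
sudo /etc/init.d/gerrit start
```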
*** abregman has quit IRC08:29
*** abregman has joined #openstack-infra08:29
*** pcaruana has joined #openstack-infra08:29
openstackgerritOpenStack Proposal Bot proposed openstack-infra/project-config: Normalize projects.yaml  https://review.openstack.org/35968708:30
*** _nadya_ has quit IRC08:31
*** esikachev has quit IRC08:31
*** ganesan has quit IRC08:32
*** jed56 has joined #openstack-infra08:33
*** tqtran has quit IRC08:34
*** ilyashakhat has joined #openstack-infra08:37
*** flepied has quit IRC08:37
*** ilyashakhat has quit IRC08:44
*** electrofelix has joined #openstack-infra08:45
*** kzaitsev_mb has quit IRC08:49
*** yamahata has quit IRC08:50
*** kzaitsev_mb has joined #openstack-infra08:50
*** dingyichen has quit IRC08:53
*** tonytan4ever has joined #openstack-infra08:55
vrovachevHi around, please take a look https://review.openstack.org/#/c/359704/108:56
*** tonytan4ever has quit IRC08:59
*** kzaitsev_mb has quit IRC09:01
*** asettle has quit IRC09:01
AJaegervrovachev: why is that needed at all?09:02
AJaegervrovachev: your refs/heads/ rule should be just fine.09:02
*** senk has joined #openstack-infra09:05
*** mikelk has joined #openstack-infra09:05
*** asettle has joined #openstack-infra09:06
*** thorst_ has joined #openstack-infra09:06
*** esikachev has joined #openstack-infra09:08
*** asettle has quit IRC09:08
*** amotoki has joined #openstack-infra09:09
*** abregman has quit IRC09:12
*** flepied has joined #openstack-infra09:12
*** ggnel_t has joined #openstack-infra09:12
*** thorst_ has quit IRC09:13
*** esikachev has quit IRC09:15
*** Hal has quit IRC09:15
*** Hal has joined #openstack-infra09:15
*** yolanda has quit IRC09:16
*** amotoki has quit IRC09:17
*** sambetts|afk is now known as sambetts09:18
*** shashank_hegde has quit IRC09:19
*** yolanda has joined #openstack-infra09:19
*** javeriak has quit IRC09:21
*** jaosorior is now known as jaosorior_lunch09:21
*** kushal has joined #openstack-infra09:26
vrovachevJaeger: It is needed for the right workflow according to product development by different teams and for CI to work correctly09:26
*** abregman has joined #openstack-infra09:26
*** amotoki has joined #openstack-infra09:26
*** sshnaidm|afk is now known as sshnaidm09:27
*** _nadya_ has joined #openstack-infra09:27
*** ihrachys has joined #openstack-infra09:28
*** yolanda has quit IRC09:28
*** eset has joined #openstack-infra09:29
*** yolanda has joined #openstack-infra09:29
*** eset has quit IRC09:29
*** eset has joined #openstack-infra09:29
*** _nadya_ has quit IRC09:30
*** _nadya_ has joined #openstack-infra09:30
*** vinaypotluri has quit IRC09:31
*** _degorenko|afk is now known as degorenko09:32
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/puppet-infracloud: Create /etc/nova/ssl folder on compute  https://review.openstack.org/35973709:33
*** jlibosva has joined #openstack-infra09:35
zigoAJaeger: Hi there! Can you +2 adding deb-python-fixtures ? https://review.openstack.org/#/c/358819/09:36
zigoAJaeger: Or should I add the Needed-By: thing?09:36
AJaegerzigo: I wait for PTL +1 before I +2 any new repos - see the comment I left in there09:36
zigoAJaeger: PTL of what?09:37
zigopackaging-deb ?09:37
AJaegeryes, sure09:37
zigoThat's mordred, but he's not really an active PTL ...09:37
zigoI'll try to ping him when he wakes up.09:38
*** salv-orlando has quit IRC09:40
jlibosvahi, how are we gonna support Newton Open Stack in Ubuntu? Will 16.04 be the minimal required version or is it gonna be available in older versions too? Is there any document with the plan?09:40
*** salv-orlando has joined #openstack-infra09:41
*** berendt has joined #openstack-infra09:42
*** dizquierdo has quit IRC09:42
jlibosvas/Open Stack/OpenStack/ :)09:42
AJaegeryolanda: could you check https://github.com/openstack/os-failures , please? - this is created empty but the imported content is not synced over. Or should we wait a bit longer?09:43
openstackgerritSagi Shnaidman proposed openstack-infra/tripleo-ci: WIP: DONT MERGE Testin OOOQ job  https://review.openstack.org/35914609:43
AJaegerjlibosva: better ask the Ubuntu team that packages it. jamespage might give you a pointer09:44
jamespagejlibosva, 16.04 will be the minimum required ubuntu version09:44
jamespageno backport to 14.0409:45
jlibosvajamespage: AJaeger great, thanks a lot09:45
jamespagejlibosva, https://wiki.ubuntu.com/OpenStack/CloudArchive for reference09:45
jlibosvaso this https://wiki.ubuntu.com/OpenStack/CloudArchive?action=AttachFile&do=get&target=plan.png shows the plan also for basic ubuntu, not just cloud archive09:45
jamespageyeah09:45
jlibosvacool, thanks09:46
jamespagejlibosva, newton will be in Ubuntu 16.10, and in the UCA for 16.0409:46
jamespageUCA = Ubuntu Cloud Archive09:46
jlibosvaAJaeger: jamespage so is there a document/plan to switch all gate jobs to Xenial and when is this gonna happen?09:46
jamespagethat I don't know the complete answer to, but I see alot of activity switching things to xenial09:47
jlibosvaI found "Running CI jobs on Ubuntu Xenial by default" ML thread, sorry for asking before searching :)09:48
AJaegerjlibosva: we're working on that - help to move jobs over is welcome09:48
AJaegerNote that we will not switch all jobs over - we want master on Xenial, older branches on Trusty.09:48
jlibosvaAJaeger: I can help with fullstack and functional tests for Neutron. I don't have that much experience with project-config though09:49
jlibosvaAJaeger: if I send a patch to project-config, how can I trigger changed job to see it passed/failed?09:49
AJaegerjlibosva: you cannot ;( That's not really possible09:50
jlibosvaeh09:50
*** tosky has joined #openstack-infra09:50
jlibosvaAJaeger: so we send a patch and hope for the best or we introduce a xenial nv job in parallel?09:50
AJaegerwe test syntax of changes but since those jobs run in public virtual machines as root, we have security concerns for full self-service without review09:51
*** yanyanhu has quit IRC09:51
AJaegerjlibosva: check the archives how we did it for the first jobs.09:51
jlibosvaAJaeger: ok09:52
*** pilgrimstack1 has quit IRC09:52
AJaegerjlibosva: example https://review.openstack.org/34807809:52
AJaegerSo, those were "hope for the best"09:53
AJaegerand if it breaks, we would have reverted quickly09:53
*** hashar has quit IRC09:54
*** hashar_ has joined #openstack-infra09:54
AJaegerjlibosva: best talk with clarkb later - he's US based - and coordinate09:54
*** tonytan4ever has joined #openstack-infra09:56
*** pilgrimstack has joined #openstack-infra09:56
openstackgerrityolanda.robla proposed openstack-infra/puppet-infracloud: Parameterize certificates in infracloud  https://review.openstack.org/35975709:56
yolandahi AJaeger , taking a look09:56
*** zhurong has quit IRC09:59
AJaegerthanks, yolanda !09:59
yolandai just fixed, so please take a look in short09:59
openstackgerritJames Slagle proposed openstack-infra/tripleo-ci: DO NOT MERGE - test  https://review.openstack.org/35975909:59
*** hashar_ has quit IRC10:00
*** tonytan4ever has quit IRC10:00
*** hashar has joined #openstack-infra10:00
AJaegerwill do, thanks10:01
AJaegeryolanda: already fixed - wow!10:02
yolandaeasy mechanical thing :)10:02
AJaeger;)10:04
*** dtantsur has joined #openstack-infra10:07
*** kushal has quit IRC10:09
*** kushal has joined #openstack-infra10:10
*** thorst_ has joined #openstack-infra10:11
*** Na3iL has quit IRC10:13
openstackgerritJames Slagle proposed openstack-infra/tripleo-ci: Re-enable use of cached images  https://review.openstack.org/35976510:14
amrithyolanda, any other infra cores, would someone please approve https://review.openstack.org/#/c/354881/. There's a comment from Ian Wienand, which isn't a -1 but is it something that you want fixed? AJaeger already has a +2 on this, looking for the second. thanks!10:14
*** Na3iL has joined #openstack-infra10:17
openstackgerrityolanda.robla proposed openstack-infra/puppet-infracloud: Parameterize certificates in infracloud  https://review.openstack.org/35975710:17
yolandaamrith, i saw that, so i prefered to wait for the thread with Ian to finish10:17
*** asettle has joined #openstack-infra10:18
*** thorst_ has quit IRC10:18
yolandaamrith, if that's not a blocker, i'd prefer if the question is answered and Ian is ok with that before approving10:19
amrithyolanda, g'morning. I'm not sure that there's going to be anything more on that thread unless someone either says 'yes, we don't want the snowflake' or someone approves the change. i.e. as written the filter seems fine, but I agree, it is unique.10:19
openstackgerritMerged openstack-infra/puppet-infracloud: Create /etc/nova/ssl folder on compute  https://review.openstack.org/35973710:19
amrithyolanda, thanks. I will pose the question to Ian in that case.10:20
yolanda++10:20
openstackgerritJames Slagle proposed openstack-infra/tripleo-ci: DO NOT MERGE - test  https://review.openstack.org/35975910:21
openstackgerrityolanda.robla proposed openstack-infra/puppet-infracloud: Parameterize certificates in infracloud  https://review.openstack.org/35975710:25
*** sdake has quit IRC10:28
*** kaisers_ has quit IRC10:30
*** jaosorior_lunch is now known as jaosorior10:30
openstackgerritamrith proposed openstack-infra/project-config: [trove] Add more nv scenario tests  https://review.openstack.org/35488110:32
amrithsorry yolanda I took plan (b). made the change ian suggested and brought the line into compliance with the rest. AJaeger please take a look again.10:33
amrithand NO, this is NOT a blocker10:33
amrithso no requirement to do this urgently, let the CI do it's thing etc.10:33
amriththanks!10:34
*** rodrigods has quit IRC10:34
*** rodrigods has joined #openstack-infra10:34
yolandaamrith, makes sense10:36
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Rename nodepool infracloud cacert from west to vanilla  https://review.openstack.org/35978110:39
*** javeriak has joined #openstack-infra10:39
AJaegeramrith: I'm sure it passes - +2A.10:41
amrithAJaeger, rcarrillocruz thank you10:42
rcarrillocruznp10:42
*** florianf has quit IRC10:42
AJaegeryou're welcome, amrith10:42
amrithAJaeger, I'd also like to chat with you and annegentle about getting rid of the trove checkbuild job.10:45
amrithI think its time is now past10:46
amrith<link follows shortly>10:46
AJaegeramrith: I've done it already ;)10:46
amrithYOU DID!10:47
amrithnice10:47
amriththanks AJaeger10:47
AJaegeramrith: https://review.openstack.org/35844610:47
AJaegeramrith: and there's https://review.openstack.org/358738 to cleanup in trove10:47
AJaegerSo, everything taken care of ;)10:47
amrithah well, almost.10:48
amrithI saw this https://review.openstack.org/#/c/358943/10:48
amrithand realized that I should've just taken care of it when I pushed the change for api-ref10:48
amrithbut, maybe leave it there and every now and then, someone will try and fix it and we can -1 it :)10:48
AJaegeramrith: https://review.openstack.org/#/c/358738/3/tox.ini removes the lines that you commented on.10:49
AJaegerbut it has not merged yet, is currently in gate.10:49
amrithWhy do you think of everything?10:49
amrith:)10:50
amriththanks AJaeger10:50
AJaegeramrith: I was on vacation and pushed a new openstack-doc-tools version out - and then checked for the fallout ;)10:50
amrithI guess I've missed the action these past few days at openstack-east10:50
amrithaha10:50
amrithcool10:50
AJaegerSo, noticed that trove has no more XML content (great!) and old code lying around...10:50
amrithyup10:51
amrithnow we've got to get the api-ref document maintained10:51
amrithI was speaking with sdague about how other projects handle keeping it up to date10:51
amrithand he had some good ideas for me10:51
*** kzaitsev_mb has joined #openstack-infra10:52
AJaegermight be worth sharing with other projects, keeping those updated was always a challenge ;(10:52
AJaegeramrith: mugsie has been working on theming for api-ref - see https://review.openstack.org/357926. I expect some followup on that. But that's presentation not content. Just a FYI10:53
amrithwe were talking about something simple like just -1'ing changes if they don't update api-ref but make changes to the api10:53
amrithyes, saw that one10:53
AJaegeramrith: yes, good idea10:53
amrithwhen the thing was wadl and xml it was harder to enforce10:53
*** hrubi_ has quit IRC10:53
AJaegerlike you -1 a change without release-notes if it needs one10:53
amrithnow we can actually try and enforce it10:53
amrithyup10:53
amrithwe do that now10:53
AJaegerRST is much easier in this regard10:54
amriththat it is10:54
amriththanks AJaeger10:54
amrithwill get ready for another fine day at @openstackeast now ...10:54
* AJaeger pulled the plug on DocBook XML building yesterday - after all the great work annegentle, sdague, and others have been driving.10:54
AJaegeramrith: enjoy!10:54
amrithyup, saw that email. Great work!10:54
amrithtake care, and thanks again yolanda rcarrillocruz AJaeger ... now for some coffee!10:55
rcarrillocruz++10:55
*** florianf has joined #openstack-infra10:56
*** jlibosva has quit IRC10:57
*** d0ugal has quit IRC10:59
*** d0ugal has joined #openstack-infra10:59
openstackgerritMerged openstack-infra/project-config: [trove] Add more nv scenario tests  https://review.openstack.org/35488111:00
*** hrubi has joined #openstack-infra11:00
*** dizquierdo has joined #openstack-infra11:05
openstackgerritMerged openstack-infra/project-config: Remove Neutron Postgres job from the Neutron check queue  https://review.openstack.org/35751911:10
*** caowei has quit IRC11:12
*** jtomasek_ is now known as jtomasek11:13
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Add admin infracloud connection details to Puppetmaster all-clouds  https://review.openstack.org/35979711:13
*** rhallisey has joined #openstack-infra11:14
*** kushal has quit IRC11:15
*** thorst_ has joined #openstack-infra11:16
*** sdague has joined #openstack-infra11:17
*** Na3iL has quit IRC11:18
*** jkilpatr has joined #openstack-infra11:20
*** ramishra has quit IRC11:20
openstackgerritRyan Hallisey proposed openstack-infra/project-config: Few changes to the kolla-kubernetes job  https://review.openstack.org/35519911:21
*** ramishra has joined #openstack-infra11:22
*** kushal has joined #openstack-infra11:23
*** thorst_ has quit IRC11:23
*** salv-orlando has quit IRC11:25
*** salv-orlando has joined #openstack-infra11:25
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Add admin-infracloud cloud to launcher layouts  https://review.openstack.org/35980711:31
*** sdague has quit IRC11:32
*** tpsilva has joined #openstack-infra11:34
*** thorst_ has joined #openstack-infra11:37
openstackgerritMerged openstack-infra/system-config: Rename nodepool infracloud cacert from west to vanilla  https://review.openstack.org/35978111:45
*** asettle has quit IRC11:45
*** dtardivel has joined #openstack-infra11:47
*** Wei_Liu has quit IRC11:47
openstackgerritMerged openstack-infra/system-config: Add admin infracloud connection details to Puppetmaster all-clouds  https://review.openstack.org/35979711:49
zigoAJaeger: Where may I find the list of mirrors where the AFS is hosting the Debian packages?11:49
*** hashar has quit IRC11:50
*** sigmavirus|away is now known as sigmavirus11:51
*** jaosorior has quit IRC11:51
*** jaosorior has joined #openstack-infra11:52
*** jlibosva has joined #openstack-infra11:53
*** sshnaidm is now known as sshnaidm|afk11:54
*** ldnunes has joined #openstack-infra11:54
*** tonytan4ever has joined #openstack-infra11:57
*** rfolco has joined #openstack-infra11:58
openstackgerritMonty Taylor proposed openstack-infra/nodepool: Install nodepool and shade into a virtualenv  https://review.openstack.org/35942511:58
*** Wei_Liu has joined #openstack-infra11:59
*** vincentll has quit IRC11:59
*** lucasagomes is now known as lucas-hungry12:01
*** tonytan4ever has quit IRC12:01
*** andymaier has joined #openstack-infra12:03
*** andymaier has quit IRC12:03
AJaegerzigo, for what do you need those? Those are for usage by our test jobs only and set up in the images themselves12:04
*** ggnel_t has quit IRC12:05
zigoAJaeger: To answer Lana's question about where to find the repos for testing the doc.12:06
zigoAJaeger: I believe I've found a list, but I'm not sure if that's authoritative.12:06
*** kushal has quit IRC12:07
zigoAJaeger: Of course, the "official" repo list will be a Debian one after the release, but for testing the install-guide, it's IMO ok to use stuff hosted on the AFS.12:07
*** psilvad has joined #openstack-infra12:07
*** ansmith has joined #openstack-infra12:08
*** kushal has joined #openstack-infra12:09
mordredAJaeger: ++ for zigo's patch12:10
openstackgerritAleksandr Dobdin proposed openstack-infra/project-config: added: timmy  https://review.openstack.org/35983112:10
*** dprince has joined #openstack-infra12:12
*** ccamacho is now known as ccamacho|lunch12:12
*** psilvad has quit IRC12:13
openstackgerritMonty Taylor proposed openstack-infra/shade: Move list_server cache to dogpile  https://review.openstack.org/35887112:14
*** xyang1 has joined #openstack-infra12:14
*** bethwhite_ has quit IRC12:15
AJaegerzigo: best discuss with pabelanger what location to hand out12:15
zigoAJaeger: Ok.12:15
*** bethwhite_ has joined #openstack-infra12:15
zigoAJaeger: Since Monty did a +2, maybe you can +2 workflow now? https://review.openstack.org/#/c/358819/12:15
zigo:)12:15
zigoSorry to insist, but fixtures is a build-dependency of almost everything ...12:16
*** kushal has quit IRC12:16
*** hrybacki is now known as hrybacki|appt12:17
*** pradk has joined #openstack-infra12:18
*** vincentll has joined #openstack-infra12:18
*** kushal has joined #openstack-infra12:18
*** kaisers_ has joined #openstack-infra12:18
*** kdas__ has joined #openstack-infra12:21
*** coolsvap_ has quit IRC12:21
*** hieulq_ has joined #openstack-infra12:22
*** javeriak_ has joined #openstack-infra12:23
*** kaisers_ has quit IRC12:23
*** javeriak_ has quit IRC12:24
*** baoli has joined #openstack-infra12:24
*** Julien-zte has joined #openstack-infra12:24
*** kushal has quit IRC12:25
*** markvoelker has joined #openstack-infra12:25
*** gouthamr has joined #openstack-infra12:26
*** javeriak has quit IRC12:26
openstackgerritJakub Libosvar proposed openstack-infra/project-config: Non-voting jobs for Neutron Xenial fullstack and functional  https://review.openstack.org/35984312:26
*** yolanda has quit IRC12:27
AJaegerzigo: will do in a bit...12:27
*** hashar has joined #openstack-infra12:28
*** julim has joined #openstack-infra12:28
*** kdas__ is now known as kushal12:28
*** kushal has quit IRC12:28
*** kushal has joined #openstack-infra12:28
*** yolanda has joined #openstack-infra12:29
AJaegerzigo +2A12:29
zigoAwesome !12:29
*** Na3iL has joined #openstack-infra12:31
*** vincentll has quit IRC12:31
*** vincentll has joined #openstack-infra12:34
*** xyang1 has quit IRC12:34
*** xyang1 has joined #openstack-infra12:36
jlibosvaAJaeger: thanks for review. I pinged clarkb in neutron channel to ping me back once he's online :)12:38
AJaegerjlibosva: great. He'll be here as well - but might take two more hours12:38
*** asettle has joined #openstack-infra12:40
*** mdrabe has joined #openstack-infra12:41
*** markusry has joined #openstack-infra12:41
*** caowei has joined #openstack-infra12:42
*** yamamoto has quit IRC12:50
*** edmondsw has joined #openstack-infra12:50
*** zul has quit IRC12:51
*** pradk has quit IRC12:51
*** sshnaidm|afk is now known as sshnaidm12:52
pabelangermorning12:52
mordredmorning pabelanger12:53
mordredpabelanger: https://review.openstack.org/#/c/359425/ passed gate jobs12:53
*** markvoelker has quit IRC12:53
mordredrcarrillocruz: ^^ you too12:53
AJaegermorning, pabelanger !12:53
*** vincentll has quit IRC12:54
openstackgerritMerged openstack-infra/project-config: Add deb-python-fixtures to packaging-deb  https://review.openstack.org/35881912:54
*** Julien-zte has quit IRC12:55
*** rlandy has joined #openstack-infra12:56
*** kushal has quit IRC12:56
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool: Add new ZK method for sending cluster heartbeat  https://review.openstack.org/35886812:58
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool: Add new ZK method for registering a watch.  https://review.openstack.org/35883712:58
*** tonytan4ever has joined #openstack-infra12:58
openstackgerritSagi Shnaidman proposed openstack-infra/tripleo-ci: WIP: DONT MERGE Testin OOOQ job  https://review.openstack.org/35914612:58
EmilienMnibalizer: fyi https://review.openstack.org/#/c/359539/12:59
*** lucas-hungry is now known as lucasagomes12:59
AJaegermordred, pabelanger: Here're a few changes to move other-requirements to bindep.txt - could you help to land these, please? https://review.openstack.org/#/q/status:open+projects:openstack-infra+topic:bindep-mv13:00
*** gyx has quit IRC13:00
openstackgerritEmilien Macchi proposed openstack-infra/tripleo-ci: Implement non-ovb overcloud update job - Newton -> Newton  https://review.openstack.org/35133013:01
pabelangerSo, still seeing a large number of failures in rax-iad when launching nodes.  We are timing out SSHing into the ipv4 address. We have a contact at rackspace who can help debug the issue, or should I just create a ticket?13:01
*** tonytan4ever has quit IRC13:02
rcarrillocruzpabelanger, mordred : fyi, just came across https://github.com/ansible/ansible-modules-extras/pull/2709/files13:03
rcarrillocruzit's good, although we'd rather put the logic in shade, then call it from the module13:03
pabelangerAJaeger: looking13:03
*** ddieterly has joined #openstack-infra13:03
*** baoli_ has joined #openstack-infra13:04
mordredrcarrillocruz: looking13:05
rcarrillocruzdo we have documentation on what the quotas should look like on our clouds, for the openstackjenkins/openstackci projects13:05
rcarrillocruz?13:05
*** vikrant has quit IRC13:05
mordredrcarrillocruz: I agree - that is good13:05
mordredrcarrillocruz: maybe let's land it, but before we do ask if it's ok with the author if we move the logic he wrote there into shade13:06
pabelangerrcarrillocruz: http://docs.openstack.org/infra/system-config/contribute-cloud.html13:06
mordredrcarrillocruz: we need his permission because the ansible module is GPL and shade isn't13:06
rcarrillocruzmordred: ack13:06
*** kgiusti has joined #openstack-infra13:06
pabelanger8vCPUs, 8GB RAM, 80GB HDD are the flavors we want, so quota should reflect that13:06
*** baoli has quit IRC13:06
mordredthen we can land a followup patch to remove logic from the module13:07
rcarrillocruzpabelanger: k, can you look https://review.openstack.org/#/c/359807/1 ?13:07
cloudnullmornings13:07
*** salv-orlando has quit IRC13:07
*** ddieterly has quit IRC13:07
rcarrillocruzi'll run the launcher, then create the needed oscc clouds on puppetmaster13:07
*** eharney has joined #openstack-infra13:07
*** psachin has quit IRC13:08
pabelangerrcarrillocruz: looks good13:08
*** dprince has quit IRC13:08
rcarrillocruzthx13:08
mordredcloudnull: morning! did you see all the fun scrollback from last night related to osic ipv6?13:09
*** pgadiya has quit IRC13:09
mordredcloudnull: oh - I see thta you did!13:09
*** vincentll has joined #openstack-infra13:09
*** dprince has joined #openstack-infra13:09
*** asettle has quit IRC13:09
*** kushal has joined #openstack-infra13:10
*** asettle has joined #openstack-infra13:10
cloudnullre : the sysctl bits and devstack?13:10
openstackgerritMerged openstack-infra/groups: Move other-requirements.txt to bindep.txt  https://review.openstack.org/35485913:10
mordredcloudnull: yah. fun stuff right?13:11
*** esberglu has joined #openstack-infra13:12
cloudnullit looked like a blast :)13:12
rcarrillocruzso folks, do we keep the images built by nodepool somewhere? http://nodepool.openstack.org just shows logs. Any 'cache' folder where dib keeps those images?13:13
cloudnullbut good stuff to find IMO. I'm sure all of this work will be greatly appreciated as more clouds spin up and go the v6 route.13:13
* rcarrillocruz would love his ISP would follow the v6 route :/13:14
mordredrcarrillocruz: /opt/nodepool_dib13:14
cloudnullwe've also got another OSIC cloud planned for the santa clara DC which will have similar properties so once thats online (I've no idea when that'll be) we'll have yet another region for everyone to play with.13:15
rcarrillocruzk, those numbers correlate to nodepool image build logs i assume13:15
mordredyah13:15
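(For anyone poking at this: the images live on the builder's filesystem, and the nodepool CLI of this era can list them. A sketch, run from the nodepool host; assumes the listing subcommands available at the time.)

    ls /opt/nodepool_dib           # the raw dib-built image files
    nodepool dib-image-list        # locally built images with their build ids
    nodepool image-list            # images as uploaded to each provider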
mordredcloudnull: woot!13:15
openstackgerritMerged openstack-infra/reviewday: Move other-requirements.txt to bindep.txt  https://review.openstack.org/35486413:15
openstackgerritMerged openstack-infra/gear: Move other-requirements.txt to bindep.txt  https://review.openstack.org/35485813:16
openstackgerritMerged openstack-infra/openstackid: Move other-requirements.txt to bindep.txt  https://review.openstack.org/35486013:16
openstackgerritMerged openstack-infra/nodepool: Add floating-ip batching settings to clouds.yaml  https://review.openstack.org/35932713:18
mordredrcarrillocruz: hey - while I'm bothering you ...13:18
openstackgerritMerged openstack-infra/nodepool: Install nodepool and shade into a virtualenv  https://review.openstack.org/35942513:18
mordredrcarrillocruz: https://review.openstack.org/#/c/359378/ ... this is in support of launch-node - but also is likely something we can carry over into os_server and cloud-launcher13:18
*** raunak has joined #openstack-infra13:18
rcarrillocruzoh yeah13:18
rcarrillocruzthat's neat13:18
rcarrillocruzlet me review13:18
rcarrillocruzcloudnull: woot, moar clouds pls!13:19
rcarrillocruzlgtm mordred13:20
mordredthx!13:20
openstackgerritMerged openstack-infra/puppet-openstackci: Move other-requirements.txt to bindep.txt  https://review.openstack.org/35486213:22
pabelangerjeblair: we have an example of nodepool not getting data from gearman: http://paste.openstack.org/show/562933/13:22
pabelangernodepool: http://paste.openstack.org/show/562934/13:22
pabelangerzuul-launcher: http://paste.openstack.org/show/562935/13:22
openstackgerritAleksandr Dobdin proposed openstack-infra/project-config: Ansible-like tool  https://review.openstack.org/35983113:22
*** woodster_ has joined #openstack-infra13:23
*** senk has quit IRC13:24
*** pgadiya has joined #openstack-infra13:25
*** openstackgerrit has quit IRC13:26
*** openstackgerrit has joined #openstack-infra13:26
*** jlibosva has quit IRC13:28
*** amotoki has quit IRC13:29
AJaegerthanks, mordred and pabelanger !13:29
*** haleyb has joined #openstack-infra13:30
*** tqtran has joined #openstack-infra13:32
*** yamamoto has joined #openstack-infra13:32
*** sdake has joined #openstack-infra13:34
*** pcrews has quit IRC13:34
*** _ari_ has joined #openstack-infra13:35
*** tonytan4ever has joined #openstack-infra13:35
*** tqtran has quit IRC13:37
*** sdake_ has joined #openstack-infra13:37
*** zhurong has joined #openstack-infra13:37
AJaegerpabelanger: could you also review https://review.openstack.org/355038 - for windmill, please?13:37
*** eharney has quit IRC13:38
*** AJaeger has quit IRC13:39
*** yamamoto has quit IRC13:40
*** thiagop has joined #openstack-infra13:40
*** sdake has quit IRC13:40
*** ddieterly has joined #openstack-infra13:41
*** zhurong has quit IRC13:41
_nadya_dear openstack-infra team! Could you please add nprivalova@mirantis.com to the group https://review.openstack.org/#/admin/groups/1535,members ? It is a new project created by https://review.openstack.org/#/c/355406/ . Thanks in advance!13:42
*** zhurong has joined #openstack-infra13:42
rcarrillocruz_nadya_: done13:42
*** eharney has joined #openstack-infra13:43
haleybSo does the accept_ra change seem to help with ipv6 at OSIC?  i'm just waking up but was pinged and see the patch in review13:43
mordredhaleyb: it's half of the issue13:43
openstackgerritAntoni Segura Puimedon proposed openstack-infra/project-config: kuryr: Add fuxi subproject to gerritbot  https://review.openstack.org/35989913:43
*** amotoki has joined #openstack-infra13:44
mordredhaleyb: the other half we don't have a solution for ... but it seems that as soon as an ipv6 subnet is created in the devstack neutron, the default ipv6 route on the host gets set to use the loopback interface13:44
_nadya_rcarrillocruz: thanks!13:44
openstackgerritMerged openstack-infra/system-config: Add admin-infracloud cloud to launcher layouts  https://review.openstack.org/35980713:44
mordredwhich is then why we lose connectivity to the host13:44
*** sandanar has quit IRC13:45
haleybmordred: yuck, easy to reproduce?13:45
mordredhaleyb: yup. it happens consistently every time13:45
mordredhaleyb: clarkb may be more useful than me as soon as he wakes up - I'm mostly just caught up from reading scrollback13:45
*** AJaeger has joined #openstack-infra13:46
*** ddieterly has quit IRC13:46
*** ddieterly has joined #openstack-infra13:46
mordredhaleyb: here's a paste http://paste.openstack.org/show/562831/ of the relevant devstack log leading up to the route going away13:46
haleybmordred: ok, i'm just wondering if i can reproduce it locally to debug it13:46
*** VnrycepuxO has joined #openstack-infra13:46
dougwighaleyb: crazy easy to reproduce.  i expect we can do so by getting an ipv6 node on any cloud and installing devstack.13:46
dougwighaleyb: or watch zuul, it fails every time.13:47
mordredhaleyb: and this: http://paste.openstack.org/show/562777/ has the nics and routes on a host that this has happened to13:47
haleybjust on OSIC thought, right?13:47
mordred::/0                           ::                         !n   -1  1  9429 lo13:47
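(As an aside: a minimal way to inspect the state mordred is pasting above on an affected host. "eth0" is assumed to be the instance's uplink interface and may differ per cloud.)

    ip -6 route show default        # on a broken host this shows ::/0 pointing at lo
    sysctl net.ipv6.conf.all.forwarding net.ipv6.conf.eth0.accept_ra
                                    # check whether forwarding got enabled and whether RAs are still accepted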
openstackgerritAdam Coldrick proposed openstack-infra/storyboard: Send notifications to subscribers for worklists  https://review.openstack.org/35473013:47
openstackgerritAdam Coldrick proposed openstack-infra/storyboard: Make it possible to get worklist/board timeline events via the API  https://review.openstack.org/35472913:47
dougwighaleyb: correct, but that's the only ipv6 addressed cloud infra uses.13:47
mordredwell, it happens everywhere with ipv6 - but osic is the only place that ipv6 is the only thing we have13:47
mordredso we can connect to other things via ipv413:47
haleybmordred: so are you on that system now from the paste?13:48
mordredactually ... I'm kind of confused now that I say that out loud ... rackspace has ipv6 and we talk to it over ipv6 ... but I wonder if the specific route for the cloud itself saves us there, since nodepool is in rackspace as well13:48
mordred(we can connect to the osic nodes from osic, because there is a specific route for osic)13:49
openstackgerritAntoni Segura Puimedon proposed openstack-infra/project-config: kuryr-libnetwork-core addition  https://review.openstack.org/35794513:49
openstackgerritAntoni Segura Puimedon proposed openstack-infra/project-config: kuryr-kubernetes-core addition  https://review.openstack.org/35794613:49
mordredhaleyb: I'm not anymore, no13:49
dougwigmordred: rax nodes have ipv4 addresses listed in zuul.13:49
mordreddougwig: good point13:49
mordredI guess I'm not sure why they're using ipv4 and not 6 - but let's consider that a happy accident right now13:49
*** matt-borland has joined #openstack-infra13:49
dougwighaleyb: clarkb said that after the subnet create in that log, the ipv6 default route is gone.13:49
mordredhaleyb: I do have that machine still13:50
mordredand am on it again13:50
mordreddougwig, haleyb: would access to the held node be useful?13:50
pabelangermordred: rackspace has ipv6 and we talk to it over ipv6; from where nodepool?13:51
mordredpabelanger: yah13:51
haleybmordred: can you show the routing table using 'ip -6 r' instead of route?  my eyes are just more used to it, and yeah, access might help13:51
dougwigmordred: i'm not versed enough in ipv6. haleyb ?13:51
mordredhaleyb: sure13:51
cloudnullhaleyb: I'm guessing that it has something to do with bindings w/in "/etc/radvd.conf"13:51
*** sdague has joined #openstack-infra13:51
*** pt_15 has joined #openstack-infra13:51
*** AJaeger has quit IRC13:51
mordredhaleyb: http://paste.openstack.org/show/563004/13:51
*** xarses has quit IRC13:52
*** AJaeger has joined #openstack-infra13:52
*** yamamoto has joined #openstack-infra13:52
dougwigmordred: is that node using ovs or lb?13:53
haleybcloudnull: so this devstack node is running radvd too?13:53
mordreddougwig: the cloud the vm is running on? or the devstack installed in the node?13:53
pabelangermordred: I am not sure we are actually doing that.  At least in rax-iad we use ipv4, but not sure why that is13:53
dougwigmordred: the latter13:53
sdaguemordred et al. - can we promote - 359721 which is the cinder fix for the gate?13:54
*** raildo has joined #openstack-infra13:54
pabelangermordred: we get ipv6 but nodepool is still using ipv413:54
*** DrifterZA has quit IRC13:54
mordredpabelanger: weird. we should be using ipv6 since it's there ... I'll look in to that once we fix this :)13:54
mordreddougwig: it's running ovs13:54
openstackgerritVolodymyr Stoiko proposed openstack-infra/project-config: Add fuel-plugin-rally project  https://review.openstack.org/35907613:54
dougwigmordred: can you get us "ovs-vsctl show" ?13:54
pabelangermordred: Ya, I can check no problem. Just wanted to confirm we want to use ipv6 for rackspace from nodepool13:54
*** florianf has quit IRC13:55
*** DrifterZA has joined #openstack-infra13:55
dougwigpabelanger: not until we fix this, we don't.13:55
mordredpabelanger: in theory, we want to use it everywhere13:55
mordredbut yeah, what dougwig said13:55
pabelangerright13:55
mordredpabelanger: can you get sdague's promote?13:55
pabelangerlooking13:55
mordreddougwig: http://paste.openstack.org/show/563005/13:56
haleybmordred: that paste wasn't as useful as i thought, ' ip -6 r s table all' might be better, but a login might be best if the logs show anything13:56
mordredhaleyb: so - I realized after I offered that that you need a vm in osic to be able to bounce through13:56
mordredsince the node only has working routing to other osic nodes :)13:56
*** kzaitsev_mb has quit IRC13:56
mordredhaleyb: http://paste.openstack.org/show/563006/13:57
dougwighaleyb, mordred: so, the default route in the first paste was through br-ex, and looking at the ovs dump, eth0 is not in br-ex.13:57
*** hongbin has joined #openstack-infra13:57
mordreddougwig: right. which is what we're expecting - in theory the neutron on the node should not be modifying eth013:57
mordredso that's good13:58
cloudnull haleyb i thought so.13:58
dougwigright, but how do the non-neutron packets get out anymore?  they'd go into br-ex and stop.  unless going out to the local net (i.e. osic).13:58
cloudnulli thought that's what folks were saying last night13:58
cloudnullbut I could be completely wrong13:59
dougwigmordred: i was expecting to see eth0 in br-ex, not eth0 being modified.13:59
*** pradk has joined #openstack-infra13:59
pabelangermordred: sdague: promoted14:00
*** zz_dimtruck is now known as dimtruck14:00
fungimordred: to answer your earlier question about ipv6 on job nodes in rackspace... we lost that when we switched to glean. i suspect v6 address info is not being included in the configdrive metadata but haven't had time to dig into it yet14:00
fungior at least that was the case for a while14:01
mordredfungi: ah. gotcha14:01
openstackgerritSean Dague proposed openstack-infra/elastic-recheck: Add fingerprint for cinder backup test fails  https://review.openstack.org/35990414:01
sdagueprometheanfire: thank you14:01
* rcarrillocruz thinks the cloud launcher output could be GREATLY improved, to know what's going on at any given time14:01
mordredrcarrillocruz: :)14:01
prometheanfiresdague: ?14:01
*** kushal has quit IRC14:01
sdagueoh, sorry, wrong tab complete14:01
sdaguepabelanger: thank you14:02
prometheanfire:D14:02
*** amotoki has quit IRC14:02
openstackgerritSagi Shnaidman proposed openstack-infra/tripleo-ci: WIP: DONT MERGE Testin OOOQ job  https://review.openstack.org/35914614:02
fungimordred: just jumped on a trusty node in rax-ord and indeed, /mnt/config/openstack/content/0000 only has v4 addresses in it14:03
fungimordred: same for /mnt/config/openstack/latest/network_data.json14:03
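(For reference, the sort of check fungi is describing can be run directly on a booted node; the path comes from the messages above, and json.tool just pretty-prints the file.)

    sudo python -m json.tool /mnt/config/openstack/latest/network_data.json
    # on the node fungi checked, only v4 address entries were present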
rcarrillocruzagh14:04
openstackgerritMerged openstack-infra/storyboard-webclient: Add a margin to the bottom of all pages  https://review.openstack.org/35911914:04
*** kzaitsev_mb has joined #openstack-infra14:04
rcarrillocruzpuppet-infracloud doesn't put the cert of the controller on the puppetmaster, does it?14:04
rcarrillocruzyolanda: do you remember? ^14:04
rcarrillocruzi think my launcher runs fail due to that14:05
rcarrillocruzpretty sure, the oscc yaml does not have verify: no14:05
rcarrillocruzand haven't run update-ca-certificates manually14:05
yolandano, it doesn't14:05
dougwigmordred, haleyb: so, what happened to this:14:05
dougwighttps://www.irccloud.com/pastebin/7GTbZcGw/14:05
rcarrillocruzbooooh14:05
*** asettle has quit IRC14:05
mordredfungi: ip? I'd like to look14:05
fungimordred: also /mnt/config/openstack/latest/vendor_data.json seems to be v4-only14:05
yolandapuppet-infracloud is just managing controllers and computes14:05
fungimordred: 104.130.216.250 but it's not held so may go away at any time14:06
haleybmordred: so i have a local VM i ran devstack on yesterday and it's just as broken, v4 too, but could be a red herring for me14:06
mordredhaleyb: woot!14:06
haleybi just never noticed because it's running a window manager14:06
mordreddougwig: hrm. are we maybe running that too late?14:07
rcarrillocruzhmm, we should puppetize that14:07
rcarrillocruzbecause we do that on nodepool for example, but not on puppetmaster14:07
*** kaisers_ has joined #openstack-infra14:07
*** sbezverk_ has quit IRC14:07
openstackgerritAntoni Segura Puimedon proposed openstack-infra/project-config: kuryr-libnetwork-core addition  https://review.openstack.org/35794514:07
openstackgerritAntoni Segura Puimedon proposed openstack-infra/project-config: kuryr-kubernetes-core addition  https://review.openstack.org/35794614:07
*** yamahata has joined #openstack-infra14:07
mordredpabelanger: what was it you did to get the ipv6 address to show up on the osic mirror?14:08
*** sbezverk has joined #openstack-infra14:08
apuimedothanks for the review AJaeger14:08
apuimedoI fixed the redundant line14:08
pabelangermordred: you mean when I added eth1.conf and ifup?14:09
*** florianf has joined #openstack-infra14:09
mordredyah14:09
mordredthat was all?14:10
pabelangerhttp://paste.openstack.org/show/563008/14:10
pabelangeryup14:10
AJaegerapuimedo: LGTM14:10
pabelangerdropped that into /etc/network/interfaces.d14:10
pabelangersudo ifup14:10
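(The paste itself is not preserved in this log; as a rough sketch, a stanza bringing IPv6 up on a second interface via SLAAC might look like the following. The interface name and addressing mode are assumptions, not the actual contents of pabelanger's file.)

    # /etc/network/interfaces.d/eth1.cfg (hypothetical)
    auto eth1
    iface eth1 inet6 auto

followed by: sudo ifup eth1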
openstackgerritChangcheng Intel proposed openstack-infra/jenkins-job-builder: update base_email_ext to adapt Email-ext plugin  https://review.openstack.org/35513914:10
openstackgerrityolanda.robla proposed openstack-infra/puppet-infracloud: Refactor infra-cloud-bridge element to support CentOS/RH  https://review.openstack.org/35990914:10
apuimedothanks AJaeger14:10
*** kushal has joined #openstack-infra14:11
dougwigmordred: can i see /etc/neutron/plugins/ml2/ml2_conf.ini ?14:11
*** kaisers_ has quit IRC14:11
mordreddougwig: http://paste.openstack.org/show/563009/14:13
*** rbrndt has joined #openstack-infra14:15
openstackgerritMerged openstack-infra/zuul: Improve debug output from tests  https://review.openstack.org/35800814:16
openstackgerritDerek Higgins proposed openstack-infra/tripleo-ci: Remove support for legacy rh1  https://review.openstack.org/34791814:18
openstackgerritDerek Higgins proposed openstack-infra/tripleo-ci: Use delorean's exit code to decide on retry  https://review.openstack.org/34879014:19
*** yamamoto has quit IRC14:20
*** Swami has joined #openstack-infra14:20
*** Swami_ has joined #openstack-infra14:20
mtreinishfungi, pleia2: it doesn't look the e-r cron jobs are updating things14:21
pleia2let's see14:21
mtreinishI thought we got that sorted before, is there anything in the logs14:21
mtreinishpleia2: thanks14:21
*** amotoki has joined #openstack-infra14:22
fungipleia2: i'm happy to look too if you get stuck, but am even more happy to leave it to you ;)14:22
pleia2mtreinish: the new/ directory is back, and I'm positive a failure that kept the cron command from completing and deleting it is what caused it to remain14:23
*** michauds has joined #openstack-infra14:24
pleia2mtreinish: I manually removed it just now, but maybe start the cron command with rmdir /var/lib/elastic-recheck/new/ ?14:24
mtreinishpleia2: ok, I can do that. I'll push a patch to add that in front of the command14:24
openstackgerritAndreas Jaeger proposed openstack-infra/project-config: Move os-api-ref to release team  https://review.openstack.org/35991614:25
pleia2or have a conditional around creating it, so it doesn't fail if it exists14:25
* pleia2 looks at mkdir man page14:25
pleia2mtreinish: ok, so just change "mkdir new" to "mkdir -p new"14:26
*** hpe-hj has joined #openstack-infra14:26
pleia2no need to add rmdir14:26
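(The effect of the change being discussed, sketched with an assumed path; the real command lives in the puppet-elastic_recheck cron definition.)

    # before: mkdir fails when new/ already exists, so the rest of the command never runs
    mkdir /var/lib/elastic-recheck/new
    # after: idempotent, succeeds whether or not the directory is already there
    mkdir -p /var/lib/elastic-recheck/new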
mtreinishpleia2: ok, that's an easier patch14:27
* pleia2 nods14:27
openstackgerritLuigi Toscano proposed openstack-infra/project-config: sahara/tempest: run also client tests and a pre script  https://review.openstack.org/35992014:27
openstackgerritMatthew Treinish proposed openstack-infra/puppet-elastic_recheck: Add missing -p to mkdir on uncat cron job  https://review.openstack.org/35992114:28
mtreinishpleia2, fungi: ^^^14:28
*** michauds has quit IRC14:28
dougwigmordred: can you get /opt/stack/logs tarred up and put somewhere?14:29
mtreinishfungi: you can fast +A that too :)14:29
rcarrillocruzoh, voucher from ovh14:29
rcarrillocruzthat's nice :-)14:29
openstackgerritAndreas Jaeger proposed openstack-infra/project-config: Move os-api-ref to release team  https://review.openstack.org/35991614:30
*** javeriak has joined #openstack-infra14:30
fungimtreinish: i'm curious why it had the cleanup rm at the end. does it not overwrite the directory contents it finds anyway?14:30
*** ccamacho|lunch is now known as ccamacho14:30
fungircarrillocruz: no idea how far that voucher will stretch, though i gather they're one of the less expensive providers so maybe pretty far14:32
rcarrillocruz++14:32
*** kaisers_ has joined #openstack-infra14:32
mtreinishfungi: the e-r command should overwrite the contents. I'm not sure why the rm isn't doing its thing though14:32
mordreddougwig: http://mirror.regionone.osic-cloud1.openstack.org/logs.tgz14:32
mtreinishbut yeah the rm is not needed anymore if we mkdir -p it14:32
pleia2mtreinish: I think the issue is that sometimes the command fails, so it doesn't *get* to the rm part14:32
mtreinishtbh I think I just left the rm there because the old single file version used it14:32
pleia2because computers are terrible14:32
openstackgerrityolanda.robla proposed openstack-infra/puppet-infracloud: Refactor infra-cloud-bridge element to support CentOS/RH  https://review.openstack.org/35990914:33
fungimtreinish: yeah, just seems unnecessary (also, post-run cleanup with anything other than a trap is failure-prone)14:34
*** piet has joined #openstack-infra14:34
*** piet has quit IRC14:34
*** mdrabe has quit IRC14:34
mtreinishfungi: I can push another patch to remove it if you'd like14:35
mtreinishpleia2: in the meantime can we manually run the thing to update stuff while waiting for the gate14:35
fungimtreinish: if you don't mind--it makes the cronjob a bit cleaner14:35
mtreinishfungi: sure np, one sec14:35
pleia2mtreinish: yeah, sec14:35
*** amotoki has quit IRC14:36
*** eharney has quit IRC14:36
*** berendt has quit IRC14:37
openstackgerritMatthew Treinish proposed openstack-infra/puppet-elastic_recheck: Remove unnecessary rm from uncat cron job  https://review.openstack.org/35992714:37
mtreinishfungi: ^^^14:37
pleia2recheck user is running it now14:37
mtreinishpleia2: thanks14:37
*** mdrabe has joined #openstack-infra14:39
*** Swami_ has quit IRC14:40
*** Swami has quit IRC14:40
*** pcrews has joined #openstack-infra14:40
*** Swami has joined #openstack-infra14:40
pleia2mtreinish: should be updated now14:40
mtreinishpleia2: yep, thanks14:40
*** psachin has joined #openstack-infra14:41
*** nwkarsten has joined #openstack-infra14:41
*** piet has joined #openstack-infra14:42
openstackgerritMerged openstack-infra/tripleo-ci: Deploy minimal services in multinode job  https://review.openstack.org/35509714:42
*** oanson has quit IRC14:43
*** yamamoto has joined #openstack-infra14:43
*** esikachev has joined #openstack-infra14:43
*** hongbin has quit IRC14:43
*** xarses has joined #openstack-infra14:44
*** raunak has quit IRC14:44
*** raunak has joined #openstack-infra14:44
*** hieulq_ has quit IRC14:45
d0ugalWhat/where is the best place for me to report an invalid check-osc-plugin failure?14:46
openstackgerritMerged openstack-infra/nodepool: Add scheduling thread to nodepool builder  https://review.openstack.org/35607914:47
AJaegerd0ugal: I suggest to ask on #openstack-sdks - that's where the osc folks hang out. stevemar wrote the tests14:48
d0ugalAJaeger: Will do, thanks.14:48
rcarrillocruzhmm, i added the controller certificate to the ca-certificates on the puppetmaster. However I get SSL errors when running osc. Passing OS_CACERT=/etc/ssl/certs/ca-certificates explicitly works. I was expecting osc would default to that?14:48
rcarrillocruzmordred: ^14:48
stevemard0ugal: link?14:48
openstackgerritAleksandr Dobdin proposed openstack-infra/project-config: Ansible-like tool  https://review.openstack.org/35983114:48
d0ugalstevemar: https://review.openstack.org/#/c/35978414:48
rcarrillocruzshould I put an cacert param on the infracloud cloud on clouds.yaml ?14:48
mordredrcarrillocruz: are you using clouds.yaml for othe rthings?14:48
mordredrcarrillocruz: yes14:48
*** rajinir has joined #openstack-infra14:48
rcarrillocruzk thx14:48
mordredenv vars do not overlay over top of clouds.yaml settings14:49
mordredbecause there are multiple clouds in a clouds.yaml, it turns out to behave unexpectedly more often than not14:49
rcarrillocruzah gotcha14:49
mordredenv vars (other than OS_CLOUD and OS_REGION_NAME) go into a cloud named "envvars"14:49
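(A rough sketch of the kind of clouds.yaml entry rcarrillocruz's change produces, with the CA bundle pinned per cloud; the cloud name, auth details and paths here are illustrative, not the actual infracloud values.)

    clouds:
      admin-infracloud:
        auth:
          auth_url: https://controller.example.org:5000
          username: admin
          password: secret
          project_name: admin
        region_name: RegionOne
        cacert: /etc/ssl/certs/ca-certificates.crt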
openstackgerritSean Dague proposed openstack-infra/elastic-recheck: highlight when integrated_gate data is out of date  https://review.openstack.org/35993214:49
*** amotoki has joined #openstack-infra14:50
d0ugalstevemar: I'd like to add the exception to the output here: https://github.com/openstack-infra/project-config/blob/master/jenkins/scripts/check_osc_commands.py#L86 - do you think that would make sense?14:50
*** javeriak has quit IRC14:52
*** kaisers_ has quit IRC14:52
haleybmordred: would https://review.openstack.org/#/c/359490/ be part of the undercloud deployment?  just wondering how to test any patches, not including getting them to run in OSIC14:52
stevemard0ugal: sure, i'm good with that14:53
stevemard0ugal: i haven't changed it since i created it14:53
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Pass cacert param for admin-infracloud  https://review.openstack.org/35993414:54
AJaegerd0ugal: why does it fail?14:54
d0ugalAJaeger: I have no idea, the exception is swallowed :)14:54
*** matt-borland has quit IRC14:54
rcarrillocruzmordred: ^ , i put the all-clouds.yaml in place on the puppetmaster and it seems to work14:54
*** abregman has quit IRC14:54
*** eharney has joined #openstack-infra14:55
openstackgerritMerged openstack-infra/nodepool: Remove unnecessary NodePoolBuilder thread  https://review.openstack.org/35667614:55
d0ugalAJaeger: From my reading of the code I must be hitting this path: https://github.com/openstack-infra/project-config/blob/master/jenkins/scripts/check_osc_commands.py#L8614:55
*** edtubill has joined #openstack-infra14:55
AJaegerd0ugal, stevemar: I'm wondering whether you have a bug in your change - or whether it's valid. So, producing a good exception is something I happily +2 - just ignoring it sounds wrong. Not sure what you want to do...14:58
d0ugalAJaeger: Adding a patch to output the traceback.14:58
stevemarAJaeger: every time that gate has failed has been for legitimate reasons :P14:58
*** Julien-zte has joined #openstack-infra14:58
d0ugalstevemar: heh, that's fair. I may have done something but I have tested the code loads :(14:59
stevemarnormally it's a typo, but not seeing one in the patch14:59
AJaegerthen we're in agreement...14:59
*** yuval has quit IRC14:59
openstackgerritDougal Matthews proposed openstack-infra/project-config: Add the exception string when a osc command fails to load  https://review.openstack.org/35993714:59
*** jistr is now known as jistr|mtg15:00
*** rbrndt has quit IRC15:00
scottdaAJaeger: could you have a look at "Add experimental Cinder job for multibackend" when you've a chance? I think I've addressed your comments and it just needs a post-rebase re-approval . https://review.openstack.org/#/c/330678/15:00
d0ugalAJaeger, stevemar: ^ it's a bit rough, but might be enough?15:00
*** hrybacki|appt is now known as hrybacki15:00
AJaegerscottda: LGTM.15:01
AJaegerd0ugal: will review later, thanks.15:01
*** raunak has quit IRC15:01
* AJaeger needs to step out for a bit now15:01
scottdaAJaeger: Thanks. yolanda had +A'd, would either of you please re-approve ?15:02
sdagueok, new patch, for direct dump into gate - https://review.openstack.org/#/c/359939/15:05
sdaguenow that uncategorized failures are up to date, I think that's our major issue15:05
*** rbrndt has joined #openstack-infra15:05
sdagueso if someone could push to gate and promote, it will probably make a lot of things better15:05
*** raunak has joined #openstack-infra15:06
sdaguepabelanger / mordred / fungi can I tag one of you for promote on that?15:06
*** raunak has quit IRC15:06
rcarrillocruzscottda: approved15:07
scottdarcarrillocruz: Thanks for that.15:07
*** zhurong has quit IRC15:08
*** kaisers_ has joined #openstack-infra15:08
fungisdague: sure, working on that now15:09
*** abregman has joined #openstack-infra15:09
hasharhello !15:10
hashardoes anyone have details about the OVH quota accounting being off back in July?  I have seen the Nodepool max-servers got lowered to accommodate false "Quota exceeds for instances" errors in https://review.openstack.org/#/c/347075/15:10
hasharturns out I am hitting the same issue with an OpenStack Liberty cloud provider  and I am looking for any potential hints :}15:10
hasharmaybe pabelanger clarkb AJaeger would know15:11
fungisdague: okay, 359939,2 is at the top of the gate pipeline now15:12
openstackgerritMerged openstack-infra/project-config: Add experimental Cinder job for multibackend  https://review.openstack.org/33067815:13
*** vinaypotluri has joined #openstack-infra15:14
openstackgerritAleksandr Dobdin proposed openstack-infra/project-config: Ansible-like tool  https://review.openstack.org/35983115:14
*** thcipriani has joined #openstack-infra15:15
*** caowei has quit IRC15:15
sdaguefungi: thank you15:15
*** andreas_s has quit IRC15:15
zigopabelanger: It'd be nice if you could reconsider the change for min-ready of the jessie nodes. Look at this: http://grafana.openstack.org/dashboard/db/nodepool in the jessie "Ready nodes" graph. More than 50% of the time when I'm working on the packaging, there's no node ready, and therefore, I need to wait.15:15
fungisdague: ant time. let me know if you turn up others15:15
pleia2ant time \o/15:16
zigopabelanger: It shows well on the last hour graph right now.15:16
*** hockeynut has joined #openstack-infra15:16
sdaguepleia2: ++ especially with infra logo15:17
pleia2:D15:17
*** greg-g has joined #openstack-infra15:17
fungizigo: rephrased, you're saying we shouldn't build as many ubuntu nodes even though there is a massive backlog of jobs waiting for them15:17
*** david-lyle has joined #openstack-infra15:17
*** salv-orlando has joined #openstack-infra15:18
hasharthcipriani: so here is the place of the people who maintain OpenStack infra, namely Zuul / Nodepool and other things :d13:19
mordredzigo: for context, I've been trying to land 5 shade patches for 2 solid days15:19
fungizigo: we have capacity to build around 1000 nodes currently, and the number of debian jobs run is almost certainly less than 0.1%15:19
*** pgadiya has quit IRC15:20
openstackgerritJames E. Blair proposed openstack-infra/nodepool: Remove unecessecary builder scheduler thread  https://review.openstack.org/35994915:20
openstackgerritJames E. Blair proposed openstack-infra/nodepool: Remove 'running' as a public method from builder  https://review.openstack.org/35995015:20
openstackgerritJames E. Blair proposed openstack-infra/nodepool: Simplify builder start/stop methods  https://review.openstack.org/35995115:20
openstackgerritJames E. Blair proposed openstack-infra/nodepool: Rename BuilderScheduler to NodePoolBuilder  https://review.openstack.org/35995215:20
thciprianihashar: ooh I see :)15:21
*** tonytan4ever has quit IRC15:21
*** matt-borland has joined #openstack-infra15:21
*** asettle has joined #openstack-infra15:21
*** armax has joined #openstack-infra15:22
zigofungi: Not what I've wrote at all.15:22
sc68calsdague: https://review.openstack.org/#/c/359490/415:22
*** piet has quit IRC15:22
*** yamahata has quit IRC15:23
dougwigdevstack cores, spoke with sc68cal, and he recommended the approach here for fixing the osic resets: https://review.openstack.org/#/c/359490/15:23
eantyshevHello! I think https://review.openstack.org/238988 needs a final push, it's a rather important Zuul patch which many people already use in 3rd party CIs15:24
*** tonytan4ever has joined #openstack-infra15:24
fungizigo: the ready nodes counts in that graph are misleading. it would make more sense to scale those by the amount of time spend in that state15:24
*** eantyshev has left #openstack-infra15:24
*** salv-orlando has quit IRC15:25
*** david-lyle_ has joined #openstack-infra15:25
*** sdake_ has quit IRC15:25
*** yamahata has joined #openstack-infra15:25
fungiwe do spike up over 100 xenial or trusty nodes at times, but they sit ready for only a few seconds. they're basically already spoken for by waiting jobs15:26
*** david-lyle_ has quit IRC15:26
*** vhosakot has joined #openstack-infra15:26
fungiwhat might be more useful for this particular issue is if we could graph the average time jobs spend waiting in a particular pipeline, arranged by node type15:27
openstackgerrityolanda.robla proposed openstack-infra/glean: Improve the support for checking vlan interfaces  https://review.openstack.org/35996115:27
*** hockeynu_ has joined #openstack-infra15:28
openstackgerritMerged openstack-infra/elastic-recheck: Add fingerprint for cinder backup test fails  https://review.openstack.org/35990415:28
fungii suspect that the average time spent waiting for jessie nodes is actually lower than for more popular node types, if only because we force nodepool to build them even when the demand ratio would indicate they're accounting for less than one node of demand across the entirety of our backlog15:28
*** hockeynut has quit IRC15:29
fungi(expressed as an overall percentage of our quota)15:29
*** piet has joined #openstack-infra15:32
*** psachin has quit IRC15:32
openstackgerritDoug Hellmann proposed openstack-infra/project-config: remove vitrage-release tag rights for vitrage repos  https://review.openstack.org/35996515:33
mordredsc68cal: clarkb seemed to think that there is also a second issue15:33
mordredsc68cal: in addition to that one15:34
openstackgerritMerged openstack-infra/shade: Support more than one network in create_server  https://review.openstack.org/35937815:34
dougwigmordred, sc68cal: i believe that was about the default route disappearing.  though mordred's change seems to have jobs not resetting at the 15 minute mark.15:35
*** yaume_ has quit IRC15:35
*** krtaylor has quit IRC15:35
*** hockeynu_ has quit IRC15:36
*** sdake has joined #openstack-infra15:36
mordreddougwig: oh good15:37
*** dizquierdo has quit IRC15:38
rcarrillocruzasselin: https://review.openstack.org/#/c/359918/15:39
openstackgerritDmitry Ilyin proposed openstack-infra/project-config: Enable voting checks for the Fuel unit tests Puppet 4.5  https://review.openstack.org/35733515:39
lucasagomes5 min tops!15:40
fungihah15:40
lucasagomesfinishing addressing all comments15:40
rcarrillocruzi'll self approve to get past the looping failures15:40
*** hockeynut has joined #openstack-infra15:40
*** sdague has quit IRC15:41
sc68caldougwig: the default v6 route disappears as a side effect of setting v6 forwarding to 115:42
sc68calsince the default route was discovered via RA15:42
mordredsc68cal: ah - fascinating15:42
sc68calhence, accept_ra = 215:42
sc68calsorry, I had a whole gist about this stuff from my home networking setup15:42
mordredand if we do that before setting forwarding to 1, it's all good15:42
sc68caljust thought I had more time before I had to tell people about it15:42
fungioh, right-o. i forgot linux routing gets picky about accepting router announcements when it is itself configured as a router15:43
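(Putting sc68cal's explanation together as a minimal sketch: the ordering matters, since flipping forwarding to 1 while accept_ra is still 1 is what purges the RA-learned default route. "eth0" is an assumed interface name.)

    # set accept_ra=2 first so the RA-learned default route survives router mode ...
    sysctl -w net.ipv6.conf.eth0.accept_ra=2
    # ... then enabling forwarding no longer drops it
    sysctl -w net.ipv6.conf.all.forwarding=1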
*** andymaier_ has quit IRC15:43
mordredfungi: so - on the rax nodes15:43
fungier, router advertisements15:44
dougwigso we're likely about 6 hours from 359490,4 merging, assuming no gate restarts. that one might be worth bumping ahead in line.15:44
fungimordred: yeah, figured anything useful out there?15:44
mordredmy hunch is that because we're configuring eth0 with ipv4 static addresses, it's ignoring the RAs15:44
fungimordred: entirely possible, and easy to test if you hold a node15:44
*** tonytan_brb has joined #openstack-infra15:44
mordredand that we need to iface eth0 inet6 auto15:45
mordredto interfaces15:45
fungiyep15:45
openstackgerritAleksandr Dobdin proposed openstack-infra/project-config: Ansible-like tool  https://review.openstack.org/35983115:45
fungithat's what i do on my dual-stack systems using ifupdown15:45
fungii guess it would be a patch to glean15:46
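(Roughly the ifupdown layout fungi and mordred are describing for a dual-stack node: keep the static v4 config and additionally let the interface accept RAs for v6. The addresses and interface name are placeholders, not the actual rackspace values; in practice glean would write this out from configdrive data.)

    auto eth0
    iface eth0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
    # also bring up v6 via SLAAC/RAs on the same interface
    iface eth0 inet6 auto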
clarkbwait, mordred's change is working? I thought it was failing last night but I may have gotten confused; it was late15:46
clarkbexcellent news if mordreds change is sufficient15:47
mordredright?15:47
mordredclarkb: also, sc68cal's explanation of the why makes sense15:47
clarkbmordred: as long as we confirm it works on osic and doesn't crash I am happy15:47
*** tonytan4ever has quit IRC15:48
openstackgerritElizabeth K. Joseph proposed openstack-infra/system-config: Zuul has a channel, let's tell people about it  https://review.openstack.org/35997615:48
fungiin fact, most routers have a similar behavior (you need to jump through hoops to configure a router to accept an ra since they usually assume other routing protocols fulfill that role)15:48
dougwigmordred: clarkb: i did a recheck on his change to watch, noted all the v6 nodes assigned, and have been watching for resets.  several have finished with success.  given the previous 100% fail rate, i'd call that a win.15:48
clarkbmaybe I was looking at the wrong change last night when I thought I saw it crash again15:48
*** salv-orlando has joined #openstack-infra15:48
clarkbdougwig: were those neutron jobs though?15:48
*** awayne has joined #openstack-infra15:49
openstackgerrityolanda.robla proposed openstack-infra/glean: Add check to skip bridge interfaces  https://review.openstack.org/35998215:49
clarkbdougwig: we need to make sure that the devstack tests that run neutron run on osic and don't lose connectivity. if that is the case then yay15:49
*** vhosakot has quit IRC15:49
*** vhosakot has joined #openstack-infra15:49
dougwiglet me double-check15:49
mordredclarkb: http://logs.openstack.org/90/359490/4/check/gate-devstack-dsvm-updown/6df96ef/console.html15:51
clarkbalso we can get that merged and it will be easy to see if the problem persists after15:52
mordredrcarrillocruz: awesome. the os_quota guy said we can put the code into shade15:53
*** abregman has quit IRC15:53
rcarrillocruz\o/15:53
rcarrillocruzthere may be some overlap, i remember ghe putting some stuff about quotas already15:53
rcarrillocruzbut i believe it was just for a given resource, network quotas or the likes15:54
*** Sukhdev has joined #openstack-infra15:54
mordredyah15:54
mordredI like his approach to comprehensive quotas on a single thing15:54
clarkbmordred: that test doesn't appear to run the neutron command to create the ipv6-public-subnet15:54
clarkbwhich is where things were crashing for me in my paste from last night15:55
clarkb(which is why I thought there may be a second thing)15:55
clarkbhttp://paste.openstack.org/show/562831/15:55
*** mikelk has quit IRC15:55
*** adrian_otto has joined #openstack-infra15:55
openstackgerritMerged openstack-infra/puppet-elastic_recheck: Add missing -p to mkdir on uncat cron job  https://review.openstack.org/35992115:56
clarkbbut it doesn't appear to cause any failures itself so merging then reviewing results is likely fine. (I do think it is part of the puzzle at the very least)15:57
*** berendt has joined #openstack-infra15:58
dougwigclarkb: you're right, i was looking at an n-net job. made a bad assumption about the default in devstack changing. watching further.15:59
Zararcarrillocruz: aw, thanks for the nice email reply :)16:00
rcarrillocruz;-) nicely done!16:01
*** krtaylor has joined #openstack-infra16:02
*** piet has quit IRC16:03
*** tphummel has joined #openstack-infra16:03
*** gyee has joined #openstack-infra16:03
*** vincentll has quit IRC16:05
clarkbI need caffeine and foods but then I can help dig in more on the ipv6 stuff16:08
*** ifarkas is now known as ifarkas_afk16:08
*** jistr|mtg is now known as jistr16:08
haleybsc68cal: i was trying to find the kernel code that would make the v6 route disappear when setting forwarding=1, but agree that, at least eventually, things would break16:11
*** yamahata has quit IRC16:11
*** raunak has joined #openstack-infra16:11
dougwigclarkb: i added 359996 to fire mordred's patch against more neutron jobs.16:12
openstackgerritJeremy Stanley proposed openstack-infra/system-config: Update zuul-env on job nodes  https://review.openstack.org/35935216:12
fungijeblair: ^ reworked per your suggestion16:12
fungiAJaeger: ^16:13
jeblairyay i made a good suggestion! (?)16:13
fungii certainly thought so16:13
haleybsc68cal: found the code - rt6_purge_dflt_routers() - and it removes default routes learned from RA's when forwarding is enabled and accept_ra is not 216:15
pabelangerhashar: There was a quota mismatch with our project in OVH.  While I don't know the fix, they did "re-sync our tenant". Not that it helps with the fix.16:15
*** _nadya_ has quit IRC16:15
*** Julien-zte has quit IRC16:15
*** matthewbodkin has quit IRC16:15
haleybdougwig, clarkb ^^ see my comment there, think the accept_ra change will do it16:15
haleybnot that it wasn't obvious already16:15
hasharpabelanger: I guess there is a glitch in how the quota is tracked and somehow it is not always updated on instance deletion /  spawn error16:15
openstackgerritJeremy Stanley proposed openstack-infra/system-config: Update zuul-env on job nodes  https://review.openstack.org/35935216:15
fungicorrecting small typo16:16
*** shashank_hegde has joined #openstack-infra16:16
hasharpabelanger: so with time the quota keeps piling up.   I guess they manually updated the quota in whatever data backend has the quotas.16:16
jeblairfungi: something something zuul something16:16
fungiindeed16:16
clarkbhaleyb: I think it is part of the puzzle but not convinced it solves it yet16:16
hasharthcipriani: got the answer. OVH did "re-sync our tenant" which I guess is manually adjusting the bad quota16:16
fungilots of brainfog between /usr/zuul-env and /opt/zuul16:16
hasharpabelanger: thank you !16:16
clarkbneed to see a non crashed run on osic for a neutron job first16:16
*** Apoorva has joined #openstack-infra16:17
*** Apoorva has quit IRC16:17
pabelangerzigo: Yes, you do need to wait, however it still is shorter than for jobs requiring ubuntu-trusty / ubuntu-xenial. For every min-ready node we bring online, we remove that node from the pool of jobs.  And since debian-jessie is only used by openstack-infra and the deb-packaging team, I still believe 3 is a good number.16:17
*** Apoorva has joined #openstack-infra16:17
pabelangerzigo: even if we bump it to 5, you are going to have to wait for nodes to come online.16:17
haleybclarkb: agreed.  i'm assuming it will take a while to get that patch in place to then run another check on-top in the osic cloud if my infra 101 learning is correct16:19
openstackgerritMichal Dulko proposed openstack-infra/project-config: Move cinder multinode grenade job to check  https://review.openstack.org/35927516:19
pabelangerhashar: yes, I don't know why that happens, just that it does.  Since we increased OVH back to the original, we haven't had the problem yet.  From what I understand, it was / is a common enough problem that people knew that was the issue before we asked OVH to look into it16:19
hasharpabelanger: yeah I got the same issue with OpenStack Liberty.  Whenever some instance deletion / spawn screws up for some reason16:20
clarkbhaleyb: no, it's self-testing so it's tested immediately. we just need a run on osic16:20
hasharpabelanger: I suspect something in nova or whatever  glitches and does not necessarily properly lower the quota when an instance is disposed.  Or at least it does not periodically refresh it with actual values16:21
hasharpabelanger: manually editing the database would fix the discrepancy for sure, and I guess that is what they ended up doing16:21
*** matrohon has quit IRC16:22
dougwigclarkb: the dependent review will likely hit osic at least once.16:23
openstackgerritAdam Coldrick proposed openstack-infra/storyboard: Hide timeline events and comments of private stories  https://review.openstack.org/35989516:23
pabelangerclarkb: fungi: mordred: just catching up on 359490, are we planning on promoting? or waiting until we know it works in check?16:25
clarkbdougwig: haleyb telnet://2001:4800:1ae1:18:f816:3eff:feee:1908:19885 is a run against 35949016:25
dougwigclarkb: oh, sweet.  recorded, and watching.16:26
*** Na3iL has quit IRC16:27
jeblairpabelanger, clarkb, dougwig: i think we should promote it once the run clarkb is looking at passes the mark16:27
clarkbI am tailing its devstack log now too since that was more useful last night16:27
*** vhosakot has quit IRC16:27
*** florianf has quit IRC16:27
dougwigjeblair: +116:27
clarkbjeblair: pabelanger dougwig ya if this job gets past the neutron setup in devstack I think we can promote16:27
*** vhosakot has joined #openstack-infra16:28
pabelangerthanks, I figured that was the reason16:28
*** hpe-hj has quit IRC16:28
*** jpich has quit IRC16:29
clarkbright now its doing keystone things in devstack so probably about 5 minutes away? I forget how long it took16:29
dougwigclarkb: 14-15 mins from launch.16:29
dougwigbut that might include a timeout, so i expect you'll know soonest.16:29
*** rcernin has quit IRC16:29
clarkbI am also going to hold the instance as a precaution16:29
clarkb3831927 is held16:30
*** Swami has quit IRC16:30
pabelangerapuimedo: currently we don't have python35 on our centos-7 nodes.16:31
*** claudiub has quit IRC16:31
apuimedopabelanger: is it for a lacking software collection?16:31
clarkbwe don't install anything from software collections (I think thats the name for them)16:31
pabelangerapuimedo: that said, there has been some talk about using python3.5 from software collections16:31
apuimedoOr where whould the python35 come from in EL7?16:31
openstackgerritMerged openstack-infra/puppet-elastic_recheck: Remove unnecessary rm from uncat cron job  https://review.openstack.org/35992716:32
*** hockeynu_ has joined #openstack-infra16:32
*** yolanda has quit IRC16:32
clarkbepel only has 3.3 or 3.4 so it would have to be software collections if we needed that for some reason. I think I am missing some context though16:32
clarkbis there some reason that the default python (python2.7) won't work there?16:32
zaroianw: reviewed https://review.openstack.org/359683 i think something else is amiss.16:32
pabelangerclarkb: apuimedo: asked in a PM about using python3.5 on nodes, I redirected here16:32
apuimedoclarkb: well, we're now upstreaming the kuryr prototype16:33
clarkbwe have python3.5 on xenial instances16:33
apuimedofor kubernetes16:33
AJaegerthanks, fungi for the zuul-env updates!16:33
apuimedoand I know that xenial is fine16:33
apuimedoI wanted to know how far are we for the el7 nodes16:33
pabelangerwhen I did ask Red Hat about python3.5 and centos-7, there wasn't a clear plan yet16:33
apuimedosince we'd really much prefer to do 3.5+16:33
*** hashar has quit IRC16:33
*** Hal1 has joined #openstack-infra16:33
*** Hal has quit IRC16:33
apuimedopabelanger: that's what I was afraid of16:34
pabelangerbut software collections was suggested as a possible options16:34
clarkbI think my preference for that sort of setup would be that your jobs installed python3.5 from software collections if you need it on centos for some reason16:34
fungiAJaeger: it will take a bit for testing to finish on 359352, but i'll self-approve with only one core vote on it if necessary at that point so it doesn't linger16:34
clarkbI don't think infra should be managing that16:34
*** hashar has joined #openstack-infra16:34
apuimedopabelanger: would that be a first in RHOSP? To use SCs?16:34
*** hashar has quit IRC16:34
clarkb(we provide a valid python3.5 platform using available system packages, if you want to branch outside of that its fine but I don't think we will bake it in)16:34
*** kaisers_ has quit IRC16:34
pabelangerclarkb: yes, agreed. If apuimedo wants python3.5 for kuryr, adding software collections would be my suggestion16:34
*** tqtran has joined #openstack-infra16:35
*** hockeynut has quit IRC16:35
*** tesseract- has quit IRC16:35
clarkbthough more and more with bindep we don't technically provide any base python16:35
clarkb(though due to image build deps we technically do)16:35
pabelangerapuimedo: not RHOSP, we don't manage that. But you could do it upstream in your jobs16:35
clarkbdougwig: its running neutron setup now we should know soon16:35
dougwigclarkb: the executioner is climbing the stairs to the platform....16:36
pabelangerclarkb: ya, I've been meaning to try software collections on centos-7 for a python35 job. never enough time16:36
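(A rough sketch of what pulling python 3.5 from software collections onto a centos-7 node might look like; the collection name rh-python35 and its availability on CentOS at this time are assumptions, and per the discussion above this would live in the job itself, not in the infra images.)

    sudo yum install -y centos-release-scl
    sudo yum install -y rh-python35
    scl enable rh-python35 -- python3 --version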
clarkbdougwig: I think it just died creating that public subnet16:37
dougwigclarkb: my ping just froze16:37
clarkbI do note it created the private one just fine about 20 seconds beforehand16:37
apuimedoI was under the impression that SCs were at a very primitive stage for 3.516:37
*** yamamoto has quit IRC16:37
apuimedoin terms of available libs16:37
dougwigsc68cal: haleyb: ^^  anything on the node that will help us?16:38
clarkbdougwig: ya so I don't think this is a complete solution. But likely still a required piece. Something else in the subnet creation is tripping us up16:38
*** ganesan has joined #openstack-infra16:38
pabelangerapuimedo: possible? Not sure. At this point, nobody really knows. python35 support for RedHat is still a work in progress16:39
clarkbit should be trivial to reproduce this right? has anyone from neutron/devstack tried yet?16:39
*** tqtran has quit IRC16:39
*** Na3iL has joined #openstack-infra16:39
clarkbI guess it requires a global ipv6 addr which isn't necessarily something everyone has handy16:39
apuimedounderstood16:39
clarkbdougwig: the instance is held by nodepool so we can hop on it if we need to. The tricky thing is devstack doesn't stop running when this happens16:39
*** sdague has joined #openstack-infra16:40
dougwigclarkb: yes, it should be.  and no, i'm not sure anyone has a v6 only setup ready to roll, but if this drags out any longer, switching to reproing elsewhere will be a better plan.16:40
clarkbdougwig: so it and/or tempest and friends can continue to change state on the host, which might make debugging a little tricky. It's possible we could rig up a devstack change on top of mordred's that just stops doing anything after that point16:40
sdaguecan you repromote - 359939,2 - failed on unrelated tests16:40
clarkbmake public ipv6 subnet and exit 1 to kill the rest of the job16:40
sdaguefungi: ^^^16:40
fungisdague: on it16:41
sdagueor force merge, it passed all the important stuff16:41
fungisdague: can do--just a sec16:41
sdagueand it would let people get back to work quick16:41
sdaguethe osic ipv6 delay also did trip it there16:42
fungiyeah, hopefully we're really close to pinning that one down now16:42
clarkbalso this appears to be a legitimate neutron issue... not really anything osic is doing wrong16:43
ganesanI am getting a Jenkins exception in the nodepool.log: "JenkinsException: Error in request. Possibly authentication failed [500]: Server Error"16:43
fungifor some reason my gerrit superpowers are being a pain16:43
sdagueso, one of the issues with osic ipv6 only nodes... that means the console doesn't work unless you as a user have ipv616:43
ganesannodepool.log -- http://paste.openstack.org/show/563098/16:43
clarkbsdague: that is correct16:44
fungisdague: or something with ipv6 connectivity you can bounce through, or a v6 tunnel16:44
jeblairor a phone16:44
fungihah, yep16:44
fungiphones seem to all be ipv6 these days16:44
fungibecause of sheer volume16:44
openstackgerritMerged openstack-infra/project-config: TripleO scenario001 experimental job  https://review.openstack.org/35667516:44
sdaguea phone with telnet client16:44
clarkbI have been using my irc screen box in rackspace. fungi has an HE tunnel. Most/all comcast customers should have native ipv616:44
*** AnarchyAo has joined #openstack-infra16:44
jeblairsdague: i know you have one ;)16:44
sdagueclarkb: that's still a small percentage of our devs16:45
fungiand, of course, people who live in first world countries16:45
fungisomeday maybe i'll move to one16:45
*** derekh has quit IRC16:45
*** hockeynut has joined #openstack-infra16:45
cloudnullat home i use an HE tunnel works really well16:45
*** hockeynu_ has quit IRC16:45
clarkbmuch of asia is all ipv6 native and has been for a long time16:45
clarkbthat was a neat part of the tokyo summit16:45
clarkbyes, not everyone will have it working. But I think a significant number of people can make it work with minimal effort or already have it16:46
dougwigclarkb: asking for an osic node now, so i can jumpstart the setup phase of repro.16:46
clarkband osic has been gracious enough to provide a lot of resources to us16:46
*** shashank_hegde has quit IRC16:46
*** roxanagh_ has joined #openstack-infra16:47
*** jaosorior has quit IRC16:47
JayFAJaeger: for https://review.openstack.org/#/c/356797/ you said all I needed was python-jobs template, but ironic-lib already had the python-jobs template and wasn't running a docs job. I'm curious what piece I'm missing?16:47
sdagueclarkb: that's fine, I'm just saying it moves burden back onto devs that already have to figure out a lot of things.16:47
clarkbdougwig: you should also be able to reproduce on a rax instance too fwiw16:48
AJaegerJayF: let me check again...16:48
clarkbdougwig: we don't notice it in our rax instances because we connect to them via ipv4 so ipv6 going away isn't noticed but it should still happen there too16:48
dougwigclarkb: heh, since i'm sitting at rax hq, by chance, that'd be a good way to go too.16:48
*** Sukhdev has quit IRC16:48
fungisdague: also the websockets work, once it lands, will ease that since we can have a non-v6 web proxy serving it16:48
sdagueok, with that force merged, then promoting the ipv6 work around is the next order of business?16:48
AJaegerJayF: you have python34-jobs and python35-jobs but *no* python-jobs16:49
sdaguefungi: yeh, this was mostly, can we get webui for this prioritized16:49
EmilienMhey infra, here's an easy patch to approve: https://review.openstack.org/#/c/359539/ - thanks!16:49
clarkbsdague: no because the ipv6 workaround does not work. There is something else neutron is doing to explode ipv6 when creating the public ipv6 subnet16:49
JayFAJaeger: line #3479 on https://review.openstack.org/#/c/356797/4/jenkins/jobs/projects.yaml says "python-jobs"16:49
clarkbsdague: so we need to figure that out first (sounds like dougwig is working on getting it reproduced)16:49
sdagueclarkb: ok16:49
AJaegerJayF: projects.yaml != zuul16:49
JayFAJaeger: OH! in zuul layout16:49
*** AnarchyAo has quit IRC16:49
AJaegerJayF: Those are two completely different beasts. I'm talking zuul16:49
*** Hal1 has quit IRC16:50
JayFAJaeger: I got it now, will update, ty16:50
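(The distinction AJaeger is drawing, roughly: jenkins/jobs/projects.yaml only defines the Jenkins jobs, while zuul/layout.yaml decides which of them actually run for a project. A hedged sketch of the zuul layout entry JayF needs; the template names other than python-jobs are taken from AJaeger's comment above, not from the actual file.)

      - name: openstack/ironic-lib
        template:
          - name: python-jobs
          - name: python34-jobs
          - name: python35-jobs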
*** asettle has quit IRC16:50
*** mtanino has joined #openstack-infra16:50
*** Hal has joined #openstack-infra16:50
clarkbdougwig: also dreamhost and vexxhost provide native ipv6 to instances if you want more options :)16:50
sdagueok, sc68cal is sitting across from me16:50
*** asettle has joined #openstack-infra16:50
*** jordanP has quit IRC16:51
AJaegerbbl16:51
openstackgerritEmilien Macchi proposed openstack-infra/tripleo-ci: WIP - Implement undercloud upgrade job - Mitaka -> Newton  https://review.openstack.org/34699516:52
*** yamahata has joined #openstack-infra16:52
openstackgerritJay Faulkner proposed openstack-infra/project-config: Add docs jobs to ironic-lib  https://review.openstack.org/35679716:52
*** DrifterZA has quit IRC16:52
haleybclarkb: now i'm not so sure setting net.ipv6.conf.all.accept_ra=2 is doing what we think, not sure it's propagating to all the interfaces.  the forwarding setting has special code to handle that case16:53
*** csomerville has joined #openstack-infra16:54
*** agordeev has joined #openstack-infra16:55
*** asettle has quit IRC16:55
sc68calhaleyb: we have code in neutron/agent/linux/interface.py that sets it for an interface -             ['sysctl', '-w', 'net.ipv6.conf.%s.accept_ra=2' % dev_name])16:55
*** dtantsur is now known as dtantsur|afk16:55
haleybsc68cal: right, but does it do it for the public interface ?16:55
dougwigsc68cal: haleyb: no, it does not.16:56
sc68calhaleyb: nope, just for the namespace16:56
*** Shrews has quit IRC16:56
haleybthe "all" setting in ipv6 isn't like the ipv4 equivalents from waht i remember16:56
*** phschwartz has quit IRC16:56
*** mordred has quit IRC16:56
dougwighaleyb: because ipv6 wants to be special?16:56
*** tonytan_brb has quit IRC16:57
haleybdougwig: it was added later, and no one noticed, so it was too late to change without breaking things16:57
*** tonytan4ever has joined #openstack-infra16:57
haleybagain, that's if my mind is remembering from the discussion on the netdev list years ago16:58
openstackgerritSean Dague proposed openstack-infra/elastic-recheck: Add query for get me a network fails  https://review.openstack.org/35996016:58
haleybwe should try adding a sysctl for that public interface, which i think we pull a few lines below16:58
*** phschwartz has joined #openstack-infra16:58
jeblairhaleyb: how does 'default' factor into this?16:58
clarkbhaleyb: would writing to default instead of all change anything? or do we need to do it for each interface we want specifically?16:58
clarkbjeblair: jinx16:58
sc68calhaleyb: agree, let's find the public iface and sysctl it16:59
*** Shrews has joined #openstack-infra16:59
haleybclarkb: i think we need to do it individually, can't hurt to try.  i can tweak the patch i think16:59
*** mordred has joined #openstack-infra16:59
clarkbalso, any idea why the failure appears to happen at subnet creation? is that the point in time where neutron will set forwarding on the interface?17:00
haleybyes, when the interface is added to the router i believe17:00
sc68calwe do a couple route commands in _neutron_configure_router_v4, a couple lines up before the public v4 net is created17:01
sc68calahh, no we don't17:02
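A minimal sketch of the per-interface sysctl approach being discussed above, assuming eth0 is the public interface (substitute whatever PUBLIC_INTERFACE, or the host's actual uplink, resolves to):

    # Minimal sketch, assuming eth0 is the public interface. The per-interface
    # accept_ra key is what matters here; the IPv6 "all" key does not propagate
    # to every interface the way its IPv4 cousins do.
    PUB_IF=${PUBLIC_INTERFACE:-eth0}
    sysctl net.ipv6.conf.all.accept_ra "net.ipv6.conf.${PUB_IF}.accept_ra"   # compare current values
    sudo sysctl -w "net.ipv6.conf.${PUB_IF}.accept_ra=2"                     # keep accepting RAs even while forwarding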
*** sputnik13 has joined #openstack-infra17:03
*** AnarchyAo has joined #openstack-infra17:04
*** jerryz has joined #openstack-infra17:06
haleybi just pushed an update to https://review.openstack.org/#/c/359490/17:07
pabelangerjeblair: did you by chance see my comment about nodepool not getting data from gearman (manager_name that zuul-launcher sets)?17:08
*** berendt has quit IRC17:08
*** martinkopec has quit IRC17:08
*** greg-g has left #openstack-infra17:09
*** hockeynut has quit IRC17:10
sdagueso... I guess the question is, can we get osic ipv6 only out of the test path for now so the debugging can happen orthogonal to people trying to land unrelated code?17:10
fungisdague: we can, though it will cut our test capacity by half, and delay additional capacity we're probably bringing online in the next day or so17:11
openstackgerritMerged openstack-infra/storyboard: Put the logic for hiding private things in storyboard/db/api/base.py  https://review.openstack.org/35989417:11
sdaguefungi: ok, but if that capacity has the same issue, it means that the gate for anything running neutron gets extra sluggish17:11
*** amitgandhinz has quit IRC17:12
pabelangermgagne_: about mtl01, is there a new set of credentials for that?17:12
fungisdague: yep. it's a balancing act. i don't know whether rescheduling neutron jobs is decreasing our throughput by half, but reducing our quota by half definitely will17:12
*** hockeynut has joined #openstack-infra17:12
*** jcoufal has joined #openstack-infra17:12
*** amitgandhinz has joined #openstack-infra17:12
pabelangerfungi: we seem to be having some issues with rax-iad, launch node timeouts for ssh for example. Do you have a contact on IRC to help troubleshoot?17:13
fungiapparently server capacity is a lot easier for providers to give us than ipv4 addresses17:13
sdaguefungi: sure. What if we moved it out of the dsvm class?17:13
fungipabelanger: i usually end up just opening a trouble ticket with rackspace and waiting for them to eventually hand it off to someone not in their first tier of support17:14
*** yolanda has joined #openstack-infra17:14
fungisdague: i don't know what that means17:14
fungiif you mean not run jobs with dsvm in the name on osic, we ceased having separate node types for dsvm jobs back in march or so17:15
openstackgerritEmilien Macchi proposed openstack-infra/tripleo-ci: Implement non-ovb overcloud update job - Newton -> Newton  https://review.openstack.org/35133017:15
sdagueoh, right, we have the same node type between dsvm and unit tests now17:15
sdagueright...17:15
sdagueit just would be really nice to not be rushing to solve this issue while people are also trying to fix other release issues :(17:16
haleybclarkb: i have to take off for a bit, but will check back regarding that patch, don't know how soon you'll know if connectivity is ok17:16
fungisdague: yep. so anyway, we can go back to all ipv4 nodes. it just may take longer to run jobs17:16
clarkbhaleyb: as soon as osic runs the neutron job for that patch and passes the neutron ipv6 subnet setup17:16
fungisdague: which would be a shame if the ipv6 routing issue turns out to be quick-ish to fix17:16
sc68calhaleyb: do you think the sysctl for setting ipv6 forwarding to 1 has any side effects, like clearing the sysctls for accept_ra?17:17
sdaguefungi: right, but then we can not grind while we get the fix, and once we know we have the fix put things back into operation.17:17
*** bethwhite_ has quit IRC17:18
sc68calsince we set accept_ra first, then forwarding? I have zero data to back it up, just spitballing17:18
fungisdague: well, it's not failing jobs, just causing them to get rescheduled so delaying results/merging for some projects/changes and wasting some of the additional resources we have17:18
fungii'm unconvinced things will get appreciably better if we stop using those additional resources, though perhaps they won't get any worse17:19
sdaguefungi: ok17:19
*** piet has joined #openstack-infra17:20
fungii agree it's an option worth entertaining17:20
sdaguethe impact that I see is it puts delay on landing gate fixes, which make things pass quicker in check (reduce our rechecks)17:20
sdagueso the fix path being slow means harder to dig out of issues once we discover and address them17:21
sdaguebut, I'm also about to drop and go back to the conference as bat is running low.17:22
*** ganesan has quit IRC17:22
*** piet has quit IRC17:22
cloudnullusing that same logic, isn't this an issue that has been "discovered" which can now be addressed?17:22
* sc68cal is in same boat17:22
*** sshnaidm is now known as sshnaidm|afk17:23
pabelangerWhile I understand the need to roll back to ipv4, I do like the idea of iterating forward on ipv6. Especially since it gives us valid test failures17:23
*** rcernin has joined #openstack-infra17:23
pabelangerI would much rather remove the requeue job logic in zuul-launcher than move back to ipv4 :)17:23
phschwartzWhat is the best way to get the list of projects/reviews currently running in zuul?17:23
sc68caliterating forward on ipv6 is a good thing, but not when it's causing serious disruption17:23
openstackgerritEmilien Macchi proposed openstack-infra/tripleo-ci: Implement non-ovb overcloud update job - Newton -> Newton  https://review.openstack.org/35133017:24
clarkbphschwartz: grab the status json file (the same thing the fancy status page uses to render the page)17:24
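A sketch of what clarkb suggests, assuming the status JSON is served at the URL below and that jq is available; the URL and the field names are from memory and may differ in your deployment:

    # Pull the same JSON the Zuul status page renders and print pipeline,
    # project and change id for everything currently in flight.
    curl -s http://zuul.openstack.org/status.json |
      jq -r '.pipelines[] as $p
             | $p.change_queues[].heads[][]
             | "\($p.name) \(.project) \(.id)"'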
sc68caland we have people scrambling17:24
cloudnullalso asking because the OSIC has more room to give (more in cloud1 and cloud8 coming online soon) but if it's not helping then we'll hold off for now.17:24
fungicloudnull: well, one of the tools at our disposal when we discover something we're testing is broken is to stop testing it until we work out the fix. i guess in that vein the options are to stop testing the things in neutron which trigger this, or stop testing on ipv6-only nodes17:24
*** hashar has joined #openstack-infra17:25
openstackgerritMerged openstack-infra/system-config: Pass cacert param for admin-infracloud  https://review.openstack.org/35993417:25
pabelangerRight I think we also have another region in internap too17:25
pabelangerI'm on stand-by to launch cloud8 once that cloud is in place17:26
*** hashar is now known as hasharAway17:26
*** pt_15 has quit IRC17:26
cloudnullis internap using v6 too ?17:26
clarkbmy preference would be to run with osic as is since all jobs that are not devstacky with neutron should be fine (except maybe kolla and osa?) and just get neutron fixed17:26
cloudnullOSA is happy :)17:27
clarkbneutron should support this and until someone has reproduced locally I am afraid that we will only set ourselves back by no longer testing it17:27
pabelangercloudnull: ipv417:27
clarkbif neutron comes back and says "we have this reproduced and are working on a fix" I can reconsider17:27
pabelangerclarkb: I agree with that statement17:27
dougwigi think "we are taking this seriously and not ignoring it" should be enough. the pressure has served its purpose.17:28
*** tqtran has joined #openstack-infra17:28
cloudnullso maybe we can remove the17:28
cloudnulldsvm header from the osa jobs17:28
*** electrofelix has quit IRC17:29
cloudnullwhich should allow it to continue running on OSIC?17:29
cloudnullor is it not that simple ?17:29
clarkbcloudnull: ya I think it's redundant at this point if osa isn't devstack gating (since we use the same image everywhere now)17:29
*** fguillot has joined #openstack-infra17:29
*** kzaitsev_mb has quit IRC17:29
clarkbcloudnull: well if we were to turn off osic temporarily it would likely be global17:29
*** vhosakot_ has joined #openstack-infra17:29
clarkbhence my apprehension when most jobs continue to work there17:29
fungicloudnull: we can effectively remove "dsvm" from all jobs if we want. these days it mostly just means "this uses devstack-gate" but is purely cosmetic17:29
cloudnullok17:29
cloudnullclarkb: :'(17:30
dougwigcan you remove (neutron.*dsvm|dsvm.*neutron) ?17:30
pabelangerwe'd have to drop the ubuntu-trusty / ubuntu-xenial job17:30
pabelangererr, label17:30
*** csomerville has quit IRC17:30
clarkbpabelanger: ya we would have to make new labels just for neutron jobs that osic doesn't provide17:30
clarkbit would likely be more work than just fixing the issue17:30
clarkb(because glance image uploads are slow)17:30
pabelangeryes17:30
fungi"dsvm" used to mean that the job needed devstack-specific nodes, but we no longer have devstack and non-devstack nodes... one node type (of each distro/release) to rule them all17:30
dougwigwe need to get this sorted before next week's feature freeze no matter what, IMO, or we're all doomed.17:30
sdagueclarkb: the issue with that approach is that you're actually saying that, during release, everything on any project that uses neutron for default networking in a job should be delayed until this is addressed.17:31
fungidougwig: yes, i have doubts that we'll get the coming volume of change activity through without the additional capacity v6-only providers are offering us17:31
clarkbsdague: yes but shouldn't this be addressed in the next hour or two?17:31
sdaguedougwig: right, the issue is how many other unrelated features in non neutron projects miss freeze for it17:31
AJaegerproject-config cores, could you review a change for check-osc that I'd like to have in for the next image build with more debugging output: https://review.openstack.org/#/c/359937/1 - and a change to add os-api-ref under release team control: https://review.openstack.org/359916, please? Both already have one +2.17:32
*** vhosakot has quit IRC17:32
*** senk has joined #openstack-infra17:32
sdagueclarkb: I don't know17:32
pabelangerclarkb: I feel we know what the issue is right? We are just waiting on patches at this point right?17:32
pabelangeralso, right17:33
clarkbpabelanger: I think we have a sense of the issue, we are working on nailing down a fix that actually addresses it17:33
sdaguethe first specter of this was raised over a week ago with the mysterious restarting jobs that I noticed. If you think we're within 1 or 2 hours of a fix, cool.17:33
sdagueas a non ipv6 expert, I do not know if that is true or not17:33
clarkbsdague: yes but no one debugged it at that point17:33
clarkbthey handed it off to infra and said "not our issue"17:33
clarkbwhen in reality this is a neutron bug that should be trivially reproducible if anyone tried17:33
openstackgerritJay Faulkner proposed openstack-infra/project-config: Add docs jobs to ironic-lib  https://review.openstack.org/35679717:33
dougwigheh, it'll take me half that hour just to get this new node stacked.17:34
cloudnulldougwig: sdague: if you need resources on the OSIC to test things I'd be happy to spin things for you.17:34
clarkbdougwig: it should be as easy as nova boot && wget reproduce.sh && sudo ./reproduce.sh assuming you have a cloud that meets the requirements17:34
clarkbreproduce.sh is pretty awesome17:34
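Roughly the flow clarkb describes, as a hedged sketch; the image, flavor, key name and log URL below are placeholders, not real values:

    # Boot a throwaway instance, fetch the reproduce.sh that the failed job
    # uploaded with its logs, and replay the run. All names/URLs are placeholders.
    nova boot --image ubuntu-xenial --flavor m1.large --key-name mykey repro-node
    ssh ubuntu@<instance-ip>
    wget http://logs.openstack.org/<path-to-failed-job>/reproduce.sh
    chmod +x reproduce.sh
    sudo ./reproduce.sh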
AJaegerJayF: +2!17:35
dougwigmy first attempts at getting a node didn't work, just got a rax node live.17:35
*** lucasagomes is now known as lucas-dinner17:35
JayFAJaeger: tyvm for the handholding :)17:35
AJaegernp17:35
openstackgerritSean Dague proposed openstack-infra/elastic-recheck: Add query for cinder unit test OOM  https://review.openstack.org/36002317:36
*** vhosakot_ has quit IRC17:37
pabelangerdougwig: send along your public SSH key, I can setup a test node in osic-cloud117:37
sc68calclarkb: we don't know if this is a neutron bug17:37
clarkbpabelanger: you'll need to set up two fwiw17:37
*** vhosakot has joined #openstack-infra17:37
clarkbpabelanger: one to be the proxy/bastion and the other to break17:37
pabelangerclarkb: sure, I can do that17:38
*** yamamoto has joined #openstack-infra17:38
clarkbsc68cal: it seems pretty definitive to me that creating an ipv6 subnet on neutron breaks the hosts ipv6 routes. But you are correct we haven't nailed down the exact fix/cause so could be something else17:38
pabelangerclarkb: trusty right?17:39
clarkbpabelanger: xenial or trusty, both exhibit the same behavior17:39
AJaegerproject-config cores, a review to fix trove's tempest job would be nice as well: https://review.openstack.org/#/c/35699917:40
AJaegeranteaya: thanks for reviewing!17:40
pabelangerclarkb: okay17:40
*** jheroux has joined #openstack-infra17:42
*** sdague has quit IRC17:42
*** shashank_hegde has joined #openstack-infra17:43
pabelangerclarkb: k, dougwig has 2 instances in osic-cloud117:43
anteayaAJaeger: welcome, thanks for doing the heavy lifting17:43
AJaeger;)17:44
*** rbrndt has quit IRC17:44
*** ihrachys has quit IRC17:44
*** yamamoto has quit IRC17:44
*** gyee has quit IRC17:47
AJaegerpabelanger, jeblair, fungi: https://review.openstack.org/#/c/354098/ proposes a new periodic-frequent pipeline just for tripleo. didn't we want to move tripleo out of OpenStack CI instead of doing another specific change for them?17:47
fungiAJaeger: i think the ml discussion is ongoing17:48
anteayaAJaeger: I also wondered if we have any thoughts on patches to create bespoke pipelines17:48
fungithough i also don't see it coming to a resolution unless we force the issue17:48
AJaegerSo, -2 the patch - or ignore it for now?17:48
AJaegeranteaya: could you review https://review.openstack.org/359289 as well, please?17:49
fungiAJaeger: i would just skip over it for now, noting that it adds additional complexity that's perhaps unnecessary pending the outcome of the third-party discussion on the ml17:49
dougwigclarkb: what is this reproduce.sh script you're referring to?17:49
*** tosky has quit IRC17:49
pabelangerAJaeger: I am pretty sure a new pipeline is not needed, this could be achieved using the post pipeline with some changes to tripleo-ci17:50
clarkbdougwig: every devstack gate run writes out a reproduce.sh script that is copied into the logs17:50
AJaegerdougwig: it's in the log folder17:50
AJaegerdougwig: do you have a change with logs?17:50
clarkbdougwig: should make it easy to reproduce a devstack-gate test run17:50
dougwigi have a log tarball from mordred that i can use.17:51
pabelangerAJaeger: Let me see what tripleo is trying to do and will get back to you17:51
clarkbany neutron tempest full run reproduce.sh should work17:51
*** piet has joined #openstack-infra17:51
clarkbwell I guess not one from a failing run17:51
*** piet has quit IRC17:51
anteayaAJaeger: okay my shell isn't great, but I feel the risk is low changing the location of the code whilst retaining the code17:51
AJaegerpabelanger: thanks17:52
*** sdague has joined #openstack-infra17:52
*** piet has joined #openstack-infra17:52
AJaegeranteaya: thanks.17:53
*** rbrndt has joined #openstack-infra17:53
AJaegerpabelanger: I commented and put a WIP on it - and wait for more discussion and comments17:53
haleybsc68cal: yes, setting forwarding=1 will remove the default route if accept_ra is not 2, i had found the kernel code17:54
anteayaAJaeger: welcome17:54
dougwighaleyb: sc68cal: you're about to reset on the gate-grenade-dsvm-neutron job on your version of mordred's patch. the instance is down.17:55
*** rbrndt has quit IRC17:55
*** ddieterly is now known as ddieterly[away]17:55
haleybdougwig: so it didn't work, sigh17:55
openstackgerritMerged openstack-infra/project-config: Add the exception string when a osc command fails to load  https://review.openstack.org/35993717:56
openstackgerritMerged openstack-infra/project-config: [Manila] Move container job to check  https://review.openstack.org/35642117:57
sc68calhaleyb: ok cool17:57
openstackgerritMerged openstack-infra/project-config: Move os-api-ref to release team  https://review.openstack.org/35991617:57
openstackgerritMerged openstack-infra/project-config: Update trove job to install trove tempest plugin  https://review.openstack.org/35699917:57
dougwighaleyb: sc68cal: i'm trying to repro with a non-CI stack. what's your plan?17:58
sc68caldougwig: I will be on the train tonight back to my home, I'll also work to reproduce tomorrow17:59
*** Swami has joined #openstack-infra17:59
sc68calI have some nodes that I can make look like OSIC env, then run17:59
*** guilherme has joined #openstack-infra18:00
haleybdougwig: i guess we need to make sure the external interface is as expected - i'm thinking it needs to set accept_ra=2 on eth0.  i will start cobbling together a test config in the meantime18:01
sdagueclarkb: is there a logstash processor that works with - http://logs.openstack.org/21/359721/1/gate/gate-tempest-dsvm-full-ubuntu-xenial/065baa6/_zuul_ansible/ansible_log.txt  ?18:01
dougwighaleyb: if you note this hacky test i started 10 minutes before yours: https://review.openstack.org/#/c/360011/  , it also failed.  and i hardcoded eth0 in that one.18:02
clarkbsdague: I am not aware of one but I would guess someone somewhere has written one we can borrow18:03
clarkblogstash used to have a recipes website; let me see if that still exists18:03
haleybdougwig: i'm running a new stack with that change to see which one it sets18:03
sdagueclarkb: that would be great, we've got a class of early fails that are no longer in the console.html18:03
*** kushal has quit IRC18:03
clarkbhttps://github.com/logstash/cookbook/tree/gh-pages/recipes that looks old18:04
nibalizerthanks EmilienM for doing https://review.openstack.org/#/c/359539/18:04
nibalizerand thanks Hunner for the fast turnaround on that18:04
EmilienMnibalizer: cool18:04
jeblairsdague: what's the class?18:05
clarkband all the googles return are ansible playbooks to deploy logstash :/18:06
sdaguehttp://logs.openstack.org/27/348427/4/gate/gate-tempest-dsvm-full-devstack-plugin-ceph/97a875e/18:06
jeblairclarkb: how do you stash the logs from that deployment ?!? ;)18:06
*** guilherme has quit IRC18:06
sdaguejeblair: looks like an ssh connect failure in the zuul ansible18:06
sdaguethat one has a mitm warning, I've seen others which are just a hang / timeout after 10 minutes18:07
jeblairpabelanger: http://logs.openstack.org/27/348427/4/gate/gate-tempest-dsvm-full-devstack-plugin-ceph/97a875e/_zuul_ansible/ansible_log.txt may be of interest to you18:07
sdaguebut in trying to process the uncategorized board, we can't track those, because that file is not indexed18:07
*** sputnik13 has quit IRC18:08
pabelangerjeblair: looking18:09
*** sputnik13 has joined #openstack-infra18:09
*** guilherme has joined #openstack-infra18:09
dougwigfungi, jeblair, clarkb - both my stacks failed fetching cirros, and the current gerrit reviews don't work. and sean is out of pocket. we'll keep working, but realistically, this isn't a fix that's going to happen in an hour with any degree of assurance. i think we should plan around a fix by COB tomorrow, and do whatever is needed to stabilize until then.18:09
dougwigif we're early, great, but i want to set expectations.18:09
*** Gorian|work has joined #openstack-infra18:09
*** guilherme has quit IRC18:10
clarkbdougwig: ok18:10
*** guilherme has joined #openstack-infra18:10
*** _nadya_ has joined #openstack-infra18:10
jeblairsdague: thanks18:10
*** pmalik_ has left #openstack-infra18:10
sdaguejeblair: let me see if I can find that second one18:10
*** guilherme has quit IRC18:11
*** jamesden_ has joined #openstack-infra18:11
sdagueI think it's lost in my sea of tabs never to be found again18:12
sdaguehuman logstash is extremely inefficient for repeat searches :)18:13
*** hockeynut has quit IRC18:13
mwhahahaAnyone know who the owner of 'Hitachi Manila HNAS CI' is and why it's trying to run on puppet-swift changes?18:13
mwhahahai'm assuming it's misconfigured somewhere but i have no idea where18:13
sdaguejeblair: oh, here is another one http://logs.openstack.org/30/357430/2/gate/gate-nova-docs-ubuntu-xenial/c6e2293/_zuul_ansible/ansible_log.txt18:13
*** sambetts is now known as sambetts|afk18:13
sdague2016-08-22 18:44:48,708 p=21516 u=zuul |  fatal: [node]: FAILED! => {"failed": true, "msg": "Failed to connect to the host via ssh."}18:13
sdaguethe console.html is just18:14
*** Gorian|work has quit IRC18:14
clarkbmwhahaha: the gerrit account may have an email addr associated with it if you hover over the name in the comments but also all third party CIs should have a page on the wiki with contact info18:14
jeblairsdague, pabelanger: the fingerprint it got back from the second connection is indeed different than the fingerprint of the file it got via the initial ssh-keyscan18:14
sdague2016-08-22 18:45:02.304470 | [Zuul] Job complete, result: FAILURE18:14
*** roxanag__ has joined #openstack-infra18:14
*** rbrndt has joined #openstack-infra18:14
pabelangerjeblair: looks to be rax-dfw, I wonder if another node came back online with that IP address18:14
clarkbmwhahaha: google says https://wiki.openstack.org/wiki/ThirdPartySystems/Hitachi_Manila_HNAS_CI18:14
mwhahahaclarkb: thanks18:14
anteayamwhahaha: sounds like it should be disabled, if you get no response from operators do say so, and I'll get it disabled18:15
mwhahahaok thanks18:16
anteayamwhahaha: thank you18:16
pabelangerjeblair: I think we've seen this issue before in rax, and it led to us enabling ssh-keyscan for zuul-launcher.18:16
*** roxanagh_ has quit IRC18:17
*** amotoki has quit IRC18:17
*** dims has quit IRC18:18
*** dizquierdo has joined #openstack-infra18:18
jeblairpabelanger: yeah ... i think the next debugging step would be to see if we can find whether any host used 01:1b:17:f0:68:7d:c6:00:d2:af:61:04:e8:0d:e3 as a key18:18
*** dims has joined #openstack-infra18:18
pabelangeragreed18:19
jeblairpabelanger: unfortunately, while we do store the host key in the _zuul_ansible directory, we don't log its fingerprint under normal circumstances18:19
clarkbjeblair: pabelanger our images should have no key on boot forcing new keys to be generated. So I don't think the host is coming up with one key then generating a new one18:19
openstackgerritJeremy Stanley proposed openstack-infra/puppet-mediawiki: Clean up OpenStack references and genericize  https://review.openstack.org/36003618:19
clarkbhowever clouds can and do reuse IPs so maybe some flaw with not storing fingerprints?18:19
*** rhallisey has quit IRC18:20
openstackgerritJeremy Stanley proposed openstack-infra/system-config: Add a wiki-dev.o.o server to test newer mediawiki  https://review.openstack.org/35824618:20
openstackgerritJeremy Stanley proposed openstack-infra/system-config: Set wiki name and logo URL  https://review.openstack.org/36003718:20
jeblairclarkb: i agree with your assessment that the boot-up thing is unlikely, but can you elaborate on your second idea?18:20
*** marcusvrn_ has joined #openstack-infra18:20
*** kushal has joined #openstack-infra18:20
tpsilvamwhahaha, anteaya: marcusvrn_ is the owner18:21
clarkbjeblair: in $cloud we boot with IP foo, run tests, delete. Then some time later we reuse IP foo due to lack of ipv4 space. If we had stored a fingerprint somehow for that IP previously then it may not like it when things get recycled18:21
tpsilvamarcusvrn_: do you know what's happening?18:21
anteayatpsilva: great marcusvrn_ needs to attend to his system or it will be disabled18:21
jeblairclarkb: ah, right -- we don't store keys across runs.  you're right, that would be a problem, and it turns out, we would hit it in about 15 minutes in our setup.  :)18:21
clarkbjeblair: ya I would expect it to be a more common fail if that was happening18:21
*** Na3iL has quit IRC18:22
jeblairclarkb: we do store keys for a single run though -- and in the case sdague linked, we retrieved the key, ssh'd in at least once with it, then did it again and the key had changed18:22
openstackgerritDmitry Ilyin proposed openstack-infra/project-config: Enable voting checks for the Fuel unit tests Puppet 4.5  https://review.openstack.org/35733518:23
marcusvrn_anteaya: yep, I have already stopped the CI. I'm setting up a nodepool environment and ran the puppet script, which configures zuul to listen to all projects18:23
jeblairclarkb: (note the top of the ansible log, it performs a keyscan: http://logs.openstack.org/27/348427/4/gate/gate-tempest-dsvm-full-devstack-plugin-ceph/97a875e/_zuul_ansible/ansible_log.txt  and we even store and upload the results of the keyscan:  http://logs.openstack.org/27/348427/4/gate/gate-tempest-dsvm-full-devstack-plugin-ceph/97a875e/_zuul_ansible/known_hosts18:23
*** flepied has quit IRC18:24
anteayamarcusvrn_: good, have you considered pointing your system at http://git.openstack.org/cgit/openstack-dev/ci-sandbox/ while in testing mode18:25
anteayamwhahaha: ^^18:25
marcusvrn_anteaya: Yes, that's what I did now18:25
*** d34dh0r53 is now known as b3rnard0-b0n-h4r18:25
anteayamarcusvrn_: great, thank you18:26
*** sdague has quit IRC18:26
mwhahahamarcusvrn_, anteaya: thanks18:26
anteayamarcusvrn_: just as a reality check, we are coming up to feature freeze so folks have very little patience with new ci systems running amok18:27
asselin__marcusvrn_, which puppet script "configures zuul to listen to all projects"?18:27
anteayamarcusvrn_: https://releases.openstack.org/newton/schedule.html18:27
anteayaasselin__: glad you were here to see that, I also wondered18:28
*** Sukhdev has joined #openstack-infra18:28
marcusvrn_anteaya: ok, sorry for the inconvenience18:29
openstackgerritMerged openstack-infra/project-config: Run host lookup first for configure_mirror.sh  https://review.openstack.org/35928918:29
openstackgerritMerged openstack-infra/project-config: Include dib-builddate.txt for configure_mirror.sh  https://review.openstack.org/35929018:30
anteayamarcusvrn_: thank you for your attentiveness, ci systems are hard, do consider attending a third party meeting and working with other operators18:30
anteayamarcusvrn_: there are many holes to fall down, sometimes it helps to chat with others with experience to avoid a few18:30
*** hyakuhei has quit IRC18:30
marcusvrn_asselin__: I'm using this guide: http://docs.openstack.org/infra/openstackci/third_party_ci.html and started zuul before configuring it for the sandbox project... so by default, it listens to all projects18:31
*** claudiub has joined #openstack-infra18:31
*** cody-somerville has joined #openstack-infra18:31
marcusvrn_anteaya: sure! thanks18:32
asselin__marcusvrn_, I see, so you used the upstream project-config repo instead of your own?18:32
anteayamarcusvrn_: thank you18:32
*** pvaneck has joined #openstack-infra18:32
marcusvrn_asselin__: yep, now, I'm configuring mine18:32
openstackgerritEmilien Macchi proposed openstack-infra/tripleo-ci: Implement scenario001 CI job  https://review.openstack.org/36003918:32
asselin__marcusvrn_, here it says to use your own: http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/single_node_ci_data.yaml#n518:33
asselin__marcusvrn_, what doc change would have helped you avoid this?18:33
asselin__marcusvrn_, I ask b/c you aren't the first, but I would prefer you to be the last....18:34
pabelangerjeblair: clarkb: I think we can easily update zuul-launcher to also log the fingerprint of the node. Working on a patch now.18:36
jeblairpabelanger: cool, thanks.  i'm running ssh-keygen on all the known_hosts files on the log server to see if i can find where that key might have been used before.  this will not be fast.18:37
marcusvrn_asselin__: Maybe add the same step about creating a local git project to save local changes, like the system-config step18:37
marcusvrn_It would help, I think18:38
asselin__marcusvrn_, So something added to here? http://docs.openstack.org/infra/openstackci/third_party_ci.html#create-an-initial-project-config-repository18:38
asselin__reading again and not very clear18:39
*** degorenko is now known as _degorenko|afk18:39
openstackgerritMerged openstack-infra/elastic-recheck: Add query for get me a network fails  https://review.openstack.org/35996018:39
*** hyakuhei has joined #openstack-infra18:41
*** hyakuhei has quit IRC18:41
*** hyakuhei has joined #openstack-infra18:41
*** hyakuhei has quit IRC18:41
*** hyakuhei has joined #openstack-infra18:41
*** senk has quit IRC18:41
*** gyee has joined #openstack-infra18:41
*** senk has joined #openstack-infra18:41
*** zz_zz_ja has quit IRC18:41
*** ekhugen has quit IRC18:41
pabelangerjeblair: I'm using: ssh-keygen -E md5 -lf known_hosts to extract the key currently18:42
pabelangerdoes that work for you?18:42
marcusvrn_asselin__: yep... because, reading that, I created a local project-config and edited it, but when I configured my common.yaml, I added the upstream project instead of the local one... Maybe it was my fault too, some misunderstanding18:43
openstackgerritMerged openstack-infra/nodepool: Have buildZooKeeperHosts accept a config object  https://review.openstack.org/35734718:44
jeblairpabelanger: that seems good18:45
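A sketch of the brute-force search jeblair describes, combining pabelanger's ssh-keygen invocation with a walk over the uploaded known_hosts files; the log root path and the target fingerprint are placeholders:

    # Find every uploaded known_hosts file whose key matches a given MD5
    # fingerprint. Path and fingerprint are placeholders; this is slow.
    TARGET='MD5:01:1b:17:f0:...'        # fingerprint being hunted (placeholder)
    find /srv/static/logs -name known_hosts -print0 |
      while IFS= read -r -d '' f; do
        ssh-keygen -E md5 -lf "$f" | grep -qF "$TARGET" && echo "match: $f"
      done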
*** adrian_otto has quit IRC18:48
openstackgerritMerged openstack-infra/elastic-recheck: Add query for cinder unit test OOM  https://review.openstack.org/36002318:48
*** _nadya_ has quit IRC18:50
dougwigaha, repro'ed.18:52
*** flepied1 has joined #openstack-infra18:52
*** hasharAway is now known as hashar18:52
mordreddougwig: ooh!18:52
* sc68cal perks up18:53
*** piet has quit IRC18:54
*** piet has joined #openstack-infra18:54
*** thiagop has quit IRC18:55
*** raunak has quit IRC18:55
*** piet has quit IRC18:56
*** piet has joined #openstack-infra18:56
*** esikachev has quit IRC18:57
openstackgerritFrédéric Guillot proposed openstack-infra/project-config: Add Almanach IRC channel  https://review.openstack.org/36004918:58
*** ekhugen has joined #openstack-infra18:58
*** zz_ja has joined #openstack-infra18:59
*** jcoufal has quit IRC18:59
*** piet has quit IRC19:00
*** piet has joined #openstack-infra19:01
*** gordc has joined #openstack-infra19:01
*** thiagop has joined #openstack-infra19:02
*** ddieterly[away] is now known as ddieterly19:04
*** Goneri has joined #openstack-infra19:05
openstackgerritMerged openstack-infra/project-config: Add docs jobs to ironic-lib  https://review.openstack.org/35679719:07
*** ccamacho has quit IRC19:07
*** piet has quit IRC19:07
sc68caldougwig: yeah no default route19:07
haleybsc68cal: right, and accept_ra is all 119:08
haleybthink we do need to change to use the public interface19:08
dougwigsc68cal: yep.  so, why?19:08
*** esberglu has quit IRC19:09
*** piet has joined #openstack-infra19:09
sc68calhaleyb: so, we set accept_ra = 2 - then at some point in the future it gets set back to 119:09
*** kzaitsev_mb has joined #openstack-infra19:09
dougwigi expect if we added a default v6 route, it'd come right back on the air. so... how did it get removed?19:09
dougwigoh wait.19:10
sc68caldougwig: most likely when we set ipv6.forwarding = 119:10
dougwigthis was a repro case, it does *not* have the accept_ra 2 fix.19:10
sc68calah.19:10
*** senk has quit IRC19:10
haleybsc68cal: it shouldn't do that, would just remove the default route19:10
dougwigso, can we turn off forwarding, let it get a route, then set 2 and reset forwarding?19:10
haleybyes, set it to 2 on eth019:11
*** senk has joined #openstack-infra19:11
*** ilyashakhat has joined #openstack-infra19:11
openstackgerritPradeep Kilambi proposed openstack-infra/project-config: Create puppet-panko repository  https://review.openstack.org/35366219:11
haleybwho's driving?19:11
dougwigjust set it19:11
*** piet has quit IRC19:11
dougwigi'm out, i'll let one of you drive.19:12
dougwigoh, it got a default route.19:12
haleybok, forwarding off, default route back19:12
dougwigk19:12
sc68caldougwig: it just picked up a default route!19:12
sc68caldefault via fe80::def dev eth0  proto ra  metric 1024  expires 1761sec hoplimit 6419:13
sc68calI SEE IT19:13
haleybso accept_ra=2, forwarding=1, still has default19:13
sc68calit actually now has a default19:13
sc68calhere's I'll paste19:13
sc68calit didn't have a default route19:13
*** esikachev has joined #openstack-infra19:13
dougwigi see it.19:14
sc68calI ran it probably after dougwig ran the cmd, and the default route got added19:14
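The manual sequence just walked through on the test node, written out as a sketch; eth0 is an assumed interface name, and regaining the route depends on the next router advertisement arriving (the RA interval varies by provider):

    # Drop forwarding, wait for an RA to restore the default route, then set
    # accept_ra=2 before re-enabling forwarding, and confirm the route survives.
    sudo sysctl -w net.ipv6.conf.eth0.forwarding=0
    until ip -6 route show default | grep -q 'proto ra'; do sleep 5; done
    sudo sysctl -w net.ipv6.conf.eth0.accept_ra=2
    sudo sysctl -w net.ipv6.conf.eth0.forwarding=1
    ip -6 route show default        # should still be there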
haleybin my local test, PUBLIC_INTERFACE was blank, is it that way in the check jobs too?19:14
haleybsc68cal: i was setting them ala Oz19:14
dougwigso, what was wrong with the devstack fix, then?  the one where i *hard-coded* eth0 as a test?19:14
*** rhallisey has joined #openstack-infra19:14
*** b3rnard0-b0n-h4r is now known as d34dh0r5319:14
haleybdougwig: i don't know, that should have worked if that was the issue19:14
*** ddieterly is now known as ddieterly[away]19:15
*** csomerville has joined #openstack-infra19:15
clarkbdougwig: are you sure eth0 is the interface?19:15
*** dizquierdo has quit IRC19:15
dougwigclarkb: on osic it is.19:15
clarkbI think it may use biosname in some places19:15
clarkbdepending on distro release19:15
dougwigclarkb: ok.19:15
sc68caldougwig: I think it's  the timing19:16
haleybclarkb: that might be it, i think it's the PUBLIC_INTERFACE in devstack but need to verify19:16
haleybsc68cal: oh, maybe add a sleep?19:16
sc68calhaleyb: my comment earlier - maybe we need to set accept_ra after we set forwarding = 119:16
*** esberglu has joined #openstack-infra19:16
sc68calinstead of before19:16
clarkbsc68cal: we tried that first19:16
*** cody-somerville has quit IRC19:16
dougwigsc68cal: how so? going offline and then back in a few seconds should be seamless to clients.19:16
clarkbbut using "all", not the specific interface name19:16
haleybsc68cal: but forwarding=1 will have clobbered the default route19:16
haleybsc68cal: do you think i can assume PUBLIC_INTERFACE is set here?19:18
*** esikachev has quit IRC19:18
*** ddieterly[away] is now known as ddieterly19:18
sc68calhaleyb: yes19:18
openstackgerritPaul Belanger proposed openstack-infra/zuul: Store ssh_host_key of remote node  https://review.openstack.org/36005719:19
sc68calI think it's set in devstack-gate19:19
sc68calin devstack it defaults to ""19:19
pabelangerjeblair: clarkb: ^ extract ssh_host_key for zuul-launcher19:19
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool: Add new ZK method for sending cluster heartbeat  https://review.openstack.org/35886819:19
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool: Stop using NodePoolBuilder class  https://review.openstack.org/35989619:19
openstackgerritDavid Shrewsbury proposed openstack-infra/nodepool: Add new ZK method for registering a watch.  https://review.openstack.org/35883719:19
sc68calhaleyb: hmm....19:19
dougwigsc68cal: what about non-CI users?19:19
sc68calI think all our docs state that you need to set PUBLIC_INTERFACE19:20
sc68calbut we don't provide a default19:20
haleybdougwig: they're screwed for now, and yes it's "" but must be set in a yaml somewhere?19:20
*** sdake has quit IRC19:20
*** esberglu has quit IRC19:20
sc68calWe may need to do ip -6 r s and grep for the interface19:21
sc68calbefore the forwarding = 1 fires and clobbers19:22
sc68calin devstack19:22
sc68callooks like we can't rely on PUBLIC_INTERFACE19:22
dougwigdevstack is adding that masq crap. use the same variable that it uses.19:22
sc68calah good point, yeah19:22
dougwig $default_dev19:22
AJaegeranybody else that wants to review fungi's system-config change to update zuul-env on persistent nodes? https://review.openstack.org/#/c/359352/19:23
dougwigit looks it up at that point in the code. might need to copy that block, or bump its scope.19:23
*** esberglu has joined #openstack-infra19:23
*** piet has joined #openstack-infra19:23
haleybdougwig: let me move it to the top, hang on19:23
sc68calwe'll need to copy the block or assign it to a global var, it's currently a local scoped var19:24
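A sketch of deriving the device from the routing table instead of trusting PUBLIC_INTERFACE, along the lines of the ip -6 route / default_dev suggestions above; the variable name is illustrative, not devstack's:

    # Pick the device carrying the default IPv6 route and set accept_ra=2 on
    # it. Has to run before forwarding=1 clobbers the route.
    default_v6_dev=$(ip -6 route show default | awk '/^default/ {print $5; exit}')
    [[ -n "$default_v6_dev" ]] && sudo sysctl -w "net.ipv6.conf.${default_v6_dev}.accept_ra=2"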
clarkbexcept the route isn't going away when we set forwarding, right? it's going away when neutron creates the subnet19:24
dougwigit likely takes a certain amount of time to expire.19:25
rcarrillocruzpabelanger: created flavors and projects fine with the launucher on infracloud19:25
haleybdougwig: setting forwarding=1 will remove the default route immediately19:25
dougwighmm.19:25
clarkbI just worry we are focusing on the wrong code path. I do think we need to set accept_ra to 2 at some point but that isn't the point of failure currently19:25
rcarrillocruzbrb19:25
*** rvasilets___ has joined #openstack-infra19:25
clarkbdougwig: haleyb sc68cal an easy way to test would be to make a new public ipv6 subnet on your reproduce box right? then see if the route goes away19:26
dougwigclarkb is right, 2+2 just ceased being 4.19:26
haleybclarkb: that is what is causing the route to go away though19:26
haleybwe can do that on dougs box, thought we did?19:26
clarkbhaleyb: sure but that implies neutron is doing it, not devstack19:26
pabelangerrcarrillocruz: yay19:26
dougwighaleyb: then how are we seeing further activity in the log?19:27
clarkbhaleyb: which is why I think its the wrong code path19:27
*** _nadya_ has joined #openstack-infra19:27
*** dtardivel has quit IRC19:27
clarkb(I do still think we need to address the area you are focusing on, I just think that's the next step after we figure out why the new subnet kills things)19:27
dougwigi mean, let's try the fix, but there's a mystery here.19:27
haleybclarkb: it's happening when devstack sets forwarding=1, i can try again on dougs system19:27
*** senk has quit IRC19:27
dougwighaleyb: if you look at the paste, it doesn't happen then, it happens on the public v6 subnet create.19:28
dougwigthat's clarkb's point.19:28
*** senk has joined #openstack-infra19:28
clarkbhaleyb: my investigation shows it happens when neutron creates the public ipv6 subnet19:28
sc68calclarkb: I thought that paste was when it was doing the public v4 net19:29
clarkbsc68cal: my first paste was cut off short. dougwig pointed that out and I posted a new one that has less context but includes the important bits19:30
haleybwell, it does create the public v6 subnet right before setting forwarding=119:30
dougwigsc68cal: that was a truncated paste.  look in the bug report19:30
clarkbhttp://paste.openstack.org/show/562831/ that one19:30
*** Goneri has quit IRC19:30
clarkbhaleyb: ok so maybe it's a buffering issue. In any case it would be super simple to rule out if you create a new subnet19:30
haleybclarkb: i set forwarding=0, then accept_ra=1, then forwarding=1 on dougs system and default route went away19:31
openstackgerritFrédéric Guillot proposed openstack-infra/project-config: Add Almanach IRC channel  https://review.openstack.org/36004919:32
clarkbhaleyb: yes but that doesn't prove that the subnet creation isn't also a problem...19:32
clarkbhaleyb: only that forwarding=1 is also a problem19:32
clarkbI just want to rule out that we need to fix two things or learn that we do19:32
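A sketch of the isolation test clarkb is asking for: watch the host's default route while creating a public IPv6 subnet by hand. The network name and CIDR are placeholders for whatever the job normally configures:

    ip -6 route show default                      # note the route beforehand
    neutron subnet-create --ip-version 6 --name public-v6-test public 2001:db8:42::/64
    ip -6 route show default                      # did the route survive?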
sc68calI think it's a buffering issue19:32
sc68calsince it's dying at different points in the script run19:32
sc68calone of your pastes it died at the v4 public net create19:33
clarkbsc68cal: mp19:33
clarkber19:33
clarkbno19:33
clarkbthat first paste was truncated19:33
clarkband is invalid do not look at it19:33
haleybi can give doug the change to try, just hacking now19:34
*** DsAAzvTVQt has joined #openstack-infra19:34
*** esikachev has joined #openstack-infra19:34
clarkbthe paste I just pasted is the correct one from the same ssh connection. It did not get truncated19:34
dougwighaleyb: if you focus on updating the devstack patch, i'll check clarkb's concern real quick.19:34
* sc68cal resists the urge to use the wall command19:35
dougwighaleyb: sc68cal: did one of you set accept_ra back to 1 and nuke the default route on my box?19:35
sc68calnot i19:35
*** e0ne has joined #openstack-infra19:35
fungiclarkb: dougwig: haleyb: sc68cal: it's also possible the connectivity loss is a slightly delayed reaction to something changing a little earlier19:35
haleybdougwig: yes, i did it19:36
dougwighaleyb: *phew*, scared me.19:36
dougwighaleyb: are you testing on that box?  i don't want to step on you.19:36
*** senk has quit IRC19:36
* sc68cal also breathes a sigh of relief19:36
haleybdougwig: let it get the default route back, hang on19:36
dougwighaleyb: i'll stay hands off until you tell me that you're done.19:37
*** piet has quit IRC19:37
*** DsAAzvTVQt has quit IRC19:38
*** esikachev has quit IRC19:38
*** claudiub has quit IRC19:41
rcarrillocruzactually19:42
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Remove initial resources creation from infracloud controller manifest  https://review.openstack.org/36006119:42
rcarrillocruzit was a bit of a mix19:42
rcarrillocruzpabelanger: ^19:42
rcarrillocruzat the mid-cycle we put some stuff for creating resources (projects, users, etc) within puppet code19:42
*** Goneri has joined #openstack-infra19:43
rcarrillocruzlet's remove it, i just realized i had openstackjenkins, when we should have now openstackzuul19:43
*** VaUsfOUbRl has joined #openstack-infra19:43
*** VaUsfOUbRl has quit IRC19:44
dougwighaleyb: i don't see it getting the route back19:45
*** e0ne has quit IRC19:45
*** Apoorva has quit IRC19:45
haleybdougwig: i accidentally tweaked the sysctl wrong, and now it's waiting for another RA, which isn't frequent in rax i guess19:45
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Remove rcarrillocruz from infracloud users  https://review.openstack.org/36006319:45
dougwigosic, not rax19:46
clarkbits possible that cloudnull can change that for us if it will help19:46
*** kzaitsev_mb has quit IRC19:46
clarkbthey are running liberty so likely depends on what we can do with that19:46
haleybgive it another minute and it should return19:46
cloudnull++ what needs to be tweaked?19:47
haleybhttps://review.openstack.org/#/c/359490/ is updated19:47
* cloudnull reading19:47
haleybcloudnull: RAs are just infrequent in OSIC apparently, do you know the interval?19:47
* jamesden_ looking19:48
*** Goneri has quit IRC19:48
cloudnull^19:48
cloudnullthanks jamesden_19:48
haleybdougwig: it's back19:48
*** hockeynut has joined #openstack-infra19:49
jamesden_haleyb the default interval is 600 seconds19:49
jamesden_i can lower it if you'd like19:49
*** jamesden_ is now known as jamesdenton19:49
haleybtypically the client will send an RS, in this case we deleted something and had to wait for another19:49
*** xcGlQHdZGT has joined #openstack-infra19:50
dougwighaleyb: looks like we lucked into one of the jobs starting immediately on an osic node (gate-tempest-dsvm-neutron-full-ubuntu-xenial)19:50
mordredwoot19:50
nibalizerfungi: where are the docs on that release key attestation thing I need to do?19:52
nibalizer(as long as I'm doing gpg key things anyways)19:52
jeblairnibalizer: http://docs.openstack.org/infra/system-config/signing.html19:52
haleybjamesdenton: is that the max?  that defaults to 600, min would be .33 of that if not set, so 20019:52
jamesdentonnew interval is 200. hopefully that helps a little next time.19:52
haleybjamesdenton: thanks, again, was that the max interval you changed?19:53
jeblairnibalizer: that's where all the root docs are19:53
jeblairs/root//19:53
jamesdentonhaleyb ra-interval range is <4-1800>19:53
*** asettle has joined #openstack-infra19:53
jamesdentonhaleyb defaults to 60019:53
*** esikachev has joined #openstack-infra19:55
haleybjamesdenton: oh, it's a cisco device.  their description of that value isn't like radvd (or anything i'm used to) that specifies min/max - but 200 is better, thanks19:55
jamesdentonahh ok. Yes, it's a Cisco box19:55
haleybthe open source implementations all follow the RFC naming convention, cisco had to be different19:56
sc68calheh.19:56
sc68calfreebsd calls it rtadvd ;)19:56
sc68calnot radvd19:57
*** gordc has quit IRC19:57
haleybsc68cal: but i bet they have min/max settings called MaxRtrAdvInterval, etc19:57
*** piet has joined #openstack-infra19:58
sc68caloh yeah, they just added a "t" for clarity to the binary name :)19:58
nibalizerjeblair: ty ty19:58
haleybdougwig: so when will we know if it worked?19:58
cloudnulltyvm jamesdenton19:59
jamesdentonsure. ping me if you want to tweak anything else.19:59
* mordred is excited about our new RA overlords20:00
SpamapSso ipv6. very RA. #dogeV620:00
dougwighaleyb: watch and see if 2001:4800:1ae1:18:f816:3eff:fe30:66c3 goes offline in the next 10 mins or so.20:01
haleybstarted a ping20:01
dougwigi'm pinging and watching zuul20:01
*** hockeynut has quit IRC20:01
dougwigare you done with the test node for the moment?20:02
haleybdougwig: except for running a ping there, yes20:02
*** hockeynut has joined #openstack-infra20:03
*** markusry has quit IRC20:03
*** asettle has quit IRC20:03
*** markusry has joined #openstack-infra20:03
*** markusry has quit IRC20:03
*** jamesden_ has joined #openstack-infra20:04
*** jamesdenton has quit IRC20:05
*** jamesden_ is now known as jamesdenton20:05
sc68calwhat's the telnet trick for the new zuul to connect to the jenkins console20:05
*** sdague has joined #openstack-infra20:07
*** cody-somerville has joined #openstack-infra20:07
*** vhosakot has quit IRC20:08
dougwighaleyb: clarkb: sc68cal: the above job is now running tempest tests.20:08
sc68calso it worked?20:08
dougwigcautiously optimistic.20:08
*** dprince has quit IRC20:09
*** csomerville has quit IRC20:09
*** amitgandhinz has quit IRC20:09
*** raunak has joined #openstack-infra20:10
*** amitgandhinz has joined #openstack-infra20:10
*** cody-somerville has quit IRC20:12
*** vhosakot has joined #openstack-infra20:12
openstackgerritMerged openstack-infra/elastic-recheck: highlight when integrated_gate data is out of date  https://review.openstack.org/35993220:13
*** dimtruck is now known as zz_dimtruck20:13
clarkbso this issue was in not getting the interface name correct in previous attempts and needing to not use "all"20:14
* clarkb double checks the run20:14
sc68calcorrect20:14
russellb sc68cal trick?  i just grab the uri from the zuul status page and run telnet to that IP and port ...20:14
sc68calrussellb: ah. doh20:14
clarkbya its running tempest. We should make sure it finishes and nothing else comes up but this looks good20:15
sdagueprogress on the ipv6 issue?20:15
sdagueis there a patch to look at?20:15
sc68calsdague: was about to highlight you20:15
sc68calsdague: https://review.openstack.org/#/c/359490/20:15
dmsimardI'm not a Zuul gate expert but some jobs have been there for almost 24hrs now ? Is this typical ?20:15
dmsimardI was surprised to see changes +W'd this morning still hadn't merged ..20:16
*** Goneri has joined #openstack-infra20:16
pabelangerdmsimard: which review?20:16
dmsimardpabelanger: particular reviews I'm interested in are https://review.openstack.org/#/c/359391/ and https://review.openstack.org/#/c/358670/ but looking at the gate in general shows a lot of jobs >20hrs http://status.openstack.org/zuul/20:17
clarkbsc68cal: dougwig haleyb it's probably worth a writeup to the dev list so that other deployment-based tests know what they need to do too (if necessary)20:17
*** piet has quit IRC20:17
clarkbthinking kolla/chef/puppet/osa may potentially run into this as well20:17
pabelangerdmsimard: Ya, working on the fix now.  359490 should be the solution20:17
dougwigclarkb: and a doc bug on the install guide.20:18
*** krtaylor has quit IRC20:18
clarkbdougwig: ++20:18
dmsimardpabelanger: ack20:18
sdaguedmsimard: there are many bugs in the gate, this last one is one of them, but there are many20:18
*** piet has joined #openstack-infra20:18
sdaguewe've already addressed 2 others today as well20:18
sc68calI'll propose a doc patch to the networking guide20:18
dmsimardsdague: wasn't aware of the gate issues and that's why I was inquiring, thanks.20:18
dougwigsc68cal: k, thanks20:18
*** _nadya_ has quit IRC20:19
sdagueclarkb / sc68cal ok, so this is running tests fine on neutron on an ipv6 only node atm?20:19
clarkbsdague: yes, need to see if it finishes tempest cleanly but it got past where there was trouble20:20
haleybi am still able to ping the node in question20:20
*** piet has quit IRC20:20
sdagueI don't have ipv6 connectivity from here so blind on it20:20
sdaguegah, http://logs.openstack.org/49/359449/1/gate/gate-tempest-python34/c178353/console.html20:21
sdagueosic apt package mirror break?20:21
clarkbI doubt it's osic specific if the mirror is broken since it's all backed by the same afs volume20:21
sdagueclarkb: then, ug mirror break20:22
*** piet has joined #openstack-infra20:22
*** tqtran has quit IRC20:22
pabelangerthey would have updates since that run20:22
sdaguethe 2 gate fails in tempest unit tests right now are osic20:22
sdaguefwiw20:22
sdaguehttp://logs.openstack.org/40/358140/1/gate/gate-tempest-python34/db92922/console.html#_2016-08-24_19_56_02_68927120:22
sdagueso, +A on https://review.openstack.org/#/c/359490/20:23
clarkbosic is > half our resources right now I think so not really surprising it would see fails20:23
pabelangerHmm, I do see an issue in mirror-update.o.o20:23
pabelangerlet me find out why20:23
pabelangerThe lock file '/afs/.openstack.org/mirror/ubuntu/db/lockfile' already exists. There might be another instance with the20:24
sdagueI'm about to leave computers for the day, so if that actually fixes the world please promote20:24
clarkbsdague: ok I will enqueue and promote to gate as soon as this test ends happily20:24
sdagueclarkb: thanks20:24
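A hedged sketch of the sort of zuul client invocation "enqueue and promote" refers to, run on the scheduler host; the project path and the patchset number are assumptions:

    # Force the fix into the gate pipeline, then move it to the head of its queue.
    zuul enqueue --trigger gerrit --pipeline gate \
        --project openstack-dev/devstack --change 359490,<patchset>
    zuul promote --pipeline gate --changes 359490,<patchset>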
sdaguepabelanger: yeh, would be great, that mirror failure is going to cause cascading fails20:24
sdaguethat will pile up fast20:24
nibalizerfungi: jeblair im not seeing our key on the keyservers20:25
nibalizeroh nvm20:25
nibalizerlol20:25
*** chem has quit IRC20:25
clarkbsdague: fwiw I have a smallish script to run the telnet/nc through an ssh tunnel if you have access via ssh to another host with ipv620:25
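A sketch of the tunnel clarkb mentions, for anyone v4-only: forward a local port through any dual-stacked host you can ssh into, then telnet to localhost. The stream address and port come from the telnet URI on the zuul status page; the values below are placeholders:

    STREAM_ADDR='2001:db8::1234'     # from the status page (placeholder)
    STREAM_PORT=19885                # from the status page (placeholder)
    ssh -N -L "8000:[${STREAM_ADDR}]:${STREAM_PORT}" user@dual-stack-host &
    telnet 127.0.0.1 8000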
phschwartzjeblair: sorry about yesterday, got caught up in stuff for my son's school20:25
phschwartzjeblair: The issue came back after a restart and adding of missing projects to our zuul layout. I have 3 pastes that show the about 960 lines before the first exception in the log20:26
*** kzaitsev_mb has joined #openstack-infra20:27
phschwartzjeblair: http://paste.openstack.org/show/563129/ http://paste.openstack.org/show/563131/ http://paste.openstack.org/show/563133/20:27
asselin__rcarrillocruz, mordred I'm running into that issue again where I can't connect to a VM's floating ip address. I don't know if it's the same or not, but it happened after I switched back from ansible 2.1.1.0 to 2.1.0.0. Restoring ansible version back didn't help. I'm on shade 1.11.120:27
pabelangersdague: clarkb: looks like the ubuntu mirror is stale by 4 days, cleaning it up now20:27
phschwartzjeblair: it is also good to note that in 22 hours we got just over 701k of the exceptions in the log. That is an average of about 9 per second.20:27
erlonmwhahaha: hey, do you have the link of the puppet-swift changes?20:28
asselin__rcarrillocruz, mordred some debug logs: http://paste.openstack.org/show/563135/ let me know if you have any ideas.20:28
mwhahahaerlon: it was showing up on https://review.openstack.org/360032 but i believe it was fixed as there have not been any subsequent CI postings20:30
openstackgerritMerged openstack-infra/shade: Batch calls to list_floating_ips  https://review.openstack.org/31569720:31
* clarkb makes a note to delete all the held instances in osic if this fixes things20:31
clarkb(I didn't leave hold reason notes on them because I was in a hurry to beat nodepool deleting them)20:32
anteayaerlon: this might also help: https://review.openstack.org/#/q/reviewer:%22Hitachi+Manila+HNAS+CI+%253COpenstackManilaCI%2540hds.com%253E%2220:32
anteayaerlon: bit of a mixed bag there20:32
haleybclarkb: i'm writing an email to the dev list now too20:32
clarkbhaleyb: great, thank you20:33
erlonmwhahaha: anteaya: thanks, it shouldn't be posting in those other projects20:33
*** Hal has quit IRC20:33
anteayaerlon: agreed, thanks for the follow up20:33
*** Jeffrey4l__ has joined #openstack-infra20:34
erlonmwhahaha: anteaya: started from 2:56 pm. I'll check that tomorrow morning20:34
openstackgerritTravis Truman (automagically) proposed openstack-infra/project-config: Add os_monasca repository to OpenStack-Ansible  https://review.openstack.org/36007420:34
anteayaerlon: hopefully marcusvrn_ can make time for the third party meetings in future: http://eavesdrop.openstack.org/#Third_Party_Meeting20:34
anteayaerlon: and you are welcome too20:34
*** Jeffrey4l_ has quit IRC20:34
anteayaerlon: great, vigilance on a ci system, especially a new one, is greatly appreciated20:35
erlonanteaya: thanks, we will, we are in the process of moving our CIs to nodepool20:35
pabelangersdague: clarkb: okay, mirror.ubuntu updated and released. We had a stale lock in reprepro which stopped the crontab from running for a few days20:35
clarkbpabelanger: any idea how it ended up in an inconsistent state?20:35
Shrewsasselin__: you might need to add a wait for the ssh port to become active before trying the ssh. haven't seen the playbook, so don't know if you're actually doing that20:35
clarkbmy understanding of the afs setup is that we should either work or fail to update?20:35
anteayaerlon: awesome, many other third party operators will have experience with moving to nodepool20:36
*** sdague has quit IRC20:36
erlonanteaya: this one is still in our old infra, but someone must have changed something20:36
pabelangerclarkb: it may be possible it was my fault. I need to check the timelines but I did manually hold the lock a few days ago to import the source packages20:36
clarkbah20:36
*** Apoorva has joined #openstack-infra20:36
*** tonytan4ever has quit IRC20:36
anteayaerlon: something happened, I'll let you follow up20:36
pabelangerpossible I didn't clean some things up20:36
anteayaerlon: glad the spam has stopped, thank you20:36
asselin__Shrews, actually I can't connect even manually....the floating ip is just missing altogether. For reference here's the playbook: https://review.openstack.org/#/c/320466/1520:36
pabelangerclarkb: and I didn't actually check, until today, if the mirror update happened20:36
*** hongbin has joined #openstack-infra20:37
erlonmwhahaha: anteaya: welcome, thanks for helping20:37
Shrewsasselin__: ah, well that'd be a different problem20:37
anteayaerlon: mwhahaha thank you :)20:37
asselin__Shrews, do you think it's an issue with the client or the cloud itself?20:37
pabelangerclarkb: fungi: speaking of mirrors, I could use some help landing: https://review.openstack.org/#/c/347058/ Fixes an issue with jessie-security in mirror.debian20:38
Shrewsasselin__: not sure. i have 0 background on what you're doing  :)20:38
*** tonytan4ever has joined #openstack-infra20:40
*** ilyashakhat has quit IRC20:40
*** fguillot has quit IRC20:41
*** edtubill has quit IRC20:41
clarkbpabelanger: As soon as this tempest test finishes and I can reorg the gate for it, I will try to review that and fungi's zuul env change20:41
pabelangerclarkb: sure, no rush.20:42
clarkbnah I want to feel like I got something done today :)20:42
anteayaI think possibly fixing ipv6 classifies as getting something done20:43
anteayait does in my book anyway20:43
clarkbanteaya: ya I just thought I would be working on finishing up the xenialing of newton this week. I guess there is still a little time left :)20:43
anteayathat is true20:44
anteayasomething something the best laid plans something something20:44
anteayaclarkb: sc68cal haleyb dougwig thanks for your hard work on ipv620:44
pabelangerclarkb: The only other osic-cloud1 issue I am seeing, appears to be a gearman issue in nodepool. Some context: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/latest.log.html#t2016-08-24T13:22:2620:45
*** ansmith has quit IRC20:46
pabelangerclarkb: basically, we are seeing launch failures in osic-cloud1 because nodepool didn't know which launcher was used to launch the node20:46
pabelangerI don't understand why the data would be missing in gearman20:46
clarkbpabelanger: huh, gearman does run in the clear so one possible way to debug that is with tcpdump20:47
clarkbpabelanger: I think you can tell tcpdump to record a rolling log so every X bytes it makes a new file and can delete old ones20:47
clarkbpabelanger: you could set it up to do that then wait for that to show up in nodepool then go find it in the pcap file20:47
pabelangerclarkb: jeblair it could also explain why some of our jobs have 'Failed to connect to the host via ssh' from ansible, since nodepool was the one who deleted the node20:48
clarkbpabelanger: essentially that is saying there was no data in the assignment packet I think20:48
openstackgerritFrédéric Guillot proposed openstack-infra/project-config: Remove whitespaces and add parenthesis to print statements  https://review.openstack.org/36008020:48
pabelangerclarkb: right, which nodepool then deletes the node20:49
*** vhosakot has quit IRC20:49
pabelangerclarkb: I'll see if I can get a pcap going20:49
*** vhosakot has joined #openstack-infra20:50
*** roxanag__ has quit IRC20:51
clarkbpabelanger: tcpdump -s 1500 -i eth0 -w gearman.pcap-%Y-%m-%d_%H:%M:%S -W 10 -C 100 port 473020:51
clarkbpabelanger: something like that may work20:51
openstackgerritEmilien Macchi proposed openstack-infra/tripleo-ci: Implement scenario001 CI job  https://review.openstack.org/36003920:51
*** Goneri has quit IRC20:52
jeblairphschwartz: can you paste the log lines from the time zuul started until "Reconfiguration complete" ?20:52
clarkbthat should save a gigabyte of pcap data in a rotation of 10 files20:52
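For reference, a lightly annotated version of that capture command (the interface name and output path are assumptions; check the flags against the tcpdump man page on the actual host):

    # -s 1500 captures full ethernet-sized frames rather than truncated headers
    # -C 100 starts a new capture file roughly every 100 MB
    # -W 10 keeps at most ten files, overwriting the oldest, so disk use stays bounded
    # the trailing filter limits the capture to gearman traffic on port 4730
    tcpdump -s 1500 -i eth0 -C 100 -W 10 -w /tmp/gearman.pcap port 4730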
phschwartzjeblair: yeah, will be a couple of moments20:53
*** amitgandhinz has quit IRC20:54
*** amitgandhinz has joined #openstack-infra20:55
openstackgerritMerged openstack-infra/system-config: Remove initial resources creation from infracloud controller manifest  https://review.openstack.org/36006120:55
fungiclarkb: on nodepool hold reasons, you can add/update them later (i often hold and then repeat the same command but add a --reason, so i can have a better shot at winning the race)20:56
clarkb2016-08-24 20:56:35.333402 | [Zuul] Job complete, result: SUCCESS20:57
clarkbI am going to enqueue/promote now20:57
clarkbfungi: huh didn't know that20:57
*** _ari_ has quit IRC20:57
fungiclarkb: yeah, it's sort of a cheat, but nodepool hold holds a node in any state, including ones in a held state. so holding again with a --reason just replaces the old (possibly empty) reason when it overwrites the status20:58
*** claudiub has joined #openstack-infra20:58
pabelangerclarkb: thanks, running on nodepool in /tmp folder20:58
fungiclarkb: well, and also updates the held timestamp (so effectively resets the age in state)20:59
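A sketch of that hold-then-annotate trick (the node id is a placeholder; exact argument placement may differ between nodepool versions):

    # grab the node immediately, before nodepool gets a chance to delete it
    nodepool hold 1234567
    # re-hold the same node with a reason; this overwrites the (empty) reason
    # and resets the held timestamp
    nodepool hold --reason "debugging ipv6 RA behaviour" 1234567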
*** jkilpatr has quit IRC21:00
clarkbpabelanger: you might want to double check you are getting what you want after a few megabytes. Seems like I always have to fiddle with tcpdump in order to get what I need21:00
clarkbok devstack change is promoted21:00
*** tqtran has joined #openstack-infra21:01
phschwartzjeblair: hmm, looking at the log I sent these from, it says the Reconfiguration complete is almost 230k lines into the file.21:01
haleybclarkb: that doesn't need cherry-picking, right?21:01
clarkbhaleyb: no, any job starting behind it in the gate will run with it and once it merges any jobs starting in the check queue will get it21:02
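The enqueue/promote clarkb mentions would look roughly like this on the zuul scheduler (project, change and patchset numbers here are placeholders):

    # put the change into the gate pipeline directly
    zuul enqueue --trigger gerrit --pipeline gate --project openstack-dev/devstack --change 123456,7
    # then move it to the head of the gate queue so jobs starting behind it pick up the fix
    zuul promote --pipeline gate --changes 123456,7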
*** roxanagh_ has joined #openstack-infra21:03
jeblairphschwartz: i meant from the time of the restart... so the most recent set of lines between "Performing reconfiguration" and "Reconfiguration complete"21:03
jeblairphschwartz: (most recent set of those lines before the error)21:03
jeblairclarkb, haleyb: and any check jobs that "Depends-On" that change as well21:04
haleybclarkb: i guess i was thinking of the stable devstack branches21:04
clarkbhaleyb: oh yes stable will likely need it as well21:04
phschwartzjeblair: http://paste.openstack.org/show/563140/ http://paste.openstack.org/show/563141/21:07
phschwartzjeblair: 598 lines so split it21:07
nibalizeryah at least one keyserver has figured out that I signed the key http://keyserver.ubuntu.com/pks/lookup?op=vindex&search=openstack+infra&fingerprint=on21:07
*** Hal has joined #openstack-infra21:08
robcresswellWhen patches in a gate chain fail, do latter patches just get restarted? Horizon has a 24hr patch that got kicked back to Queueing :)21:08
jeblairphschwartz: that does not have the change from nibalizer to add requirements to the layout.21:08
phschwartzjeblair: yeah, we had an issue getting the server to pull that one update. But my concern is when this issue happens we are getting flooded with so many errors that in a little over 24 hours it was 22.4g of logs21:10
*** edtubill has joined #openstack-infra21:10
jeblairphschwartz: my concern is trying to fix the bug21:10
SpamapSjeblair: guessing it hasn't reloaded the layout just yet, because it's too busy in that loop21:10
jeblairphschwartz: i'm still trying to elucidate the circumstances around it21:10
SpamapSOR, the layout hasn't been delivered yet. :)21:10
SpamapSbecause some moron forgot to add his auth keys file ;)21:11
jeblairSpamapS: it's effectively never going to exit that loop and needs to be restarted21:11
SpamapSjeblair: that was my fear, makes sense.21:11
phschwartzjeblair: basically any time there is a depends-on in a review and that review it depends on is for a project not in the layout, it goes fubar21:11
*** vhosakot has quit IRC21:11
jeblairphschwartz: right, but as i said yesterday, that case is not only handled, but tested, so i'm trying to figure out what's different21:11
jeblairwhich is why i'm trying to find every detail21:11
jeblairwhether the project is in the layout (i just learned it is not) is an important one21:11
*** vhosakot has joined #openstack-infra21:11
fungiit's a configurable option, right?21:13
clarkbfungi: re the keep zuul env up to date change. Do you think we need to use pip install --upgrade? I am trying to remember under which circumstances pip won't update a package21:13
asselin__Shrews, have you ever seen this error before? http://paste.openstack.org/show/563144/21:13
clarkbfungi: I just did a local test of zuul 2.0 install upgraded to latest master and that worked without -U21:13
rcarrillocruzhey folks, i just noticed there are unmerged changes on the puppetmaster private hiera21:14
phschwartzjeblair: so, I have been able to confirm that the issue is purely related to when the project from a depends-on is not in the layout21:14
rcarrillocruza massive chmod on all files21:14
phschwartzjeblair: we missed some and we are trying to fix that21:14
pabelangerrcarrillocruz: Oh, that could be puppet21:14
rcarrillocruzanyone doing that?21:14
pabelanger1 sec21:14
jeblairfungi: that would do it21:14
clarkbmordred: dstufft do you have a tl;dr for when -U is necessary when running pip install? is it just when you want newer packages that are available when older ones match requirements?21:15
rcarrillocruzah k, i was about to do a change, but wanted to give a nudge to whoever to commit those21:15
phschwartzjeblair: but it is a worrying issue due to the volume of the errors going into the log.21:15
pabelangerrcarrillocruz: https://review.openstack.org/#/c/326649/21:15
dstufftclarkb: yes21:15
*** ldnunes has quit IRC21:15
clarkbdstufft: tyty21:15
pabelangerrcarrillocruz: the issue is 0750?21:15
dstufftclarkb: specifically, when the one you already have installed matches requirements, but you want a newer one that is available21:15
rcarrillocruzyah21:15
pabelangeror they change to 064021:15
jeblairphschwartz: one error is too many21:15
rcarrillocruzold mode 10064421:15
clarkbdstufft: ya that makes sense. thanks21:15
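In other words (a minimal illustration; the package name is arbitrary):

    # leaves an already-installed copy alone as long as it satisfies the requirement
    pip install zuul
    # upgrades to the newest available release even if the installed copy already
    # satisfies the requirement
    pip install -U zuul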
rcarrillocruznew mode 10075521:15
jeblairfungi: i don't see an option for it21:16
rcarrillocruzpabelanger: do you want to commit that or should I and link to thta change21:16
rcarrillocruz?21:16
fungijeblair: yeah going back through the manual i'm missing it as well21:16
pabelangerrcarrillocruz: ya, I think we should21:16
rcarrillocruzk, gimme a sec21:16
SpamapSphschwartz jeblair: do you want to maybe move this to #zuul?21:16
*** vhosakot has quit IRC21:17
* SpamapS doesn't want to interfere with infra business21:17
*** salv-orl_ has joined #openstack-infra21:17
jeblairwe can do that21:17
*** matt-borland has quit IRC21:17
rcarrillocruzpabelanger: done21:18
JayFJFYI gerrit is throwing 503s21:18
JayF"Service Unavailable" in the little grey popup window; just started about a minute ago21:18
*** piet has quit IRC21:19
* rcarrillocruz just had a nostalgia moment for seeing hpcloud hiera keys on puppetmaster21:19
*** jheroux has quit IRC21:19
*** salv-orlando has quit IRC21:20
pabelangerrcarrillocruz: thanks21:20
rcerninis it just me having problems with connecting to "https://review.openstack.org"21:20
pabelangerclarkb: confirmed tcpdump works as expected, thanks for the syntax21:20
cloudnullrcernin: +121:21
rcernin:(21:21
cloudnulllooks like review.o.o is upset21:21
rcerninthanks21:21
clarkbya I am trying to pull up its internal monitoring tools to see if it is in garbage collection pain21:21
*** thorst_ has quit IRC21:21
*** roxanagh_ has quit IRC21:21
rcarrillocruzpabelanger: in the clouds layout i don't see a user creation for bluebox21:21
rcarrillocruzwas that created manually, for the openstackci/openstackzuul projects ?21:21
rcarrillocruzi.e. out of band , not with the launcher21:22
asselin__rcarrillocruz, what is 'that' you're referring to?21:22
clarkbit doesn't look too bad on the memory front (though it appears to be getting close to where things go south)21:22
*** rhallisey has quit IRC21:23
*** gouthamr has quit IRC21:23
clarkbrcernin: cloudnull http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=30&rra_id=all theres the problem I think21:23
*** baoli_ has quit IRC21:23
clarkbwe need to update our apache to allow for more connections, it's using the default apache mpm event total21:23
fungioh yikes21:24
fungiprobably could stand to see where that spike came from too21:24
*** jkilpatr has joined #openstack-infra21:24
clarkbgerrit itself has an fd limit of 819221:24
clarkband it uses a bunch of those for jgit things. Somewhere in the range of 2k is probably safe21:24
fungiyeah, apache probably needs a pair to proxy every connection too21:25
rcarrillocruzasselin__: https://review.openstack.org/#/c/359807/1/cloud_launcher/clouds_layouts.yml21:25
clarkbhaleyb: in fact I think we may need the backports for grenade to work properly too. Any chance you pushed those yet?21:25
rcarrillocruzthere is no user creation there21:25
*** piet has joined #openstack-infra21:25
clarkbhaleyb: I can promote them as well once the master change gets through21:25
clarkbfungi: good point21:26
fungiclarkb: we're making 2984 connections to mysql according to netstat. that can't be right...21:26
openstackgerritMerged openstack-infra/system-config: Update zuul-env on job nodes  https://review.openstack.org/35935221:26
clarkbfungi: wow21:26
fungiand climbing21:26
asselin__rcarrillocruz, sorry, I don't have the context. Which comment of mine are you referring to? I don't think I'm using that file you referenced... but need to double-check21:26
zaroi think there's a bug with gerrit that it doesn't close stale connections21:26
fungii wonder if there's a trove or backend networking issue in rackspace dfw right now21:26
clarkbfungi: fwiw I don't think the number of fds on the gerrit process has been the issue. apache is just capping us off at a very conservative level compared to what gerrit should be able to handle21:26
clarkbzaro: could that be the source of the leak too?21:27
funginetstat -nt|grep -c :330621:27
fungi423721:27
clarkbouch21:27
rcarrillocruzi'm not referring to any comment of yours, i was talking to pabelanger about infra's puppetmaster :-)21:27
anteayawould this be a runaway third party ci?21:27
asselin__rcarrillocruz, oh sorry...my bad21:28
anteayayes: https://review.openstack.org/#/c/358065/21:28
zarohere's the fix for it, https://git.openstack.org/cgit/openstack-infra/gerrit/commit/?h=upstream/stable-2.11&id=f57d2b8f830710d6e25d501f4dc8d76f2d2e71f221:28
anteayafuel ci21:28
openstackgerritRicardo Carrillo Cruz proposed openstack-infra/system-config: Add openstackci/openstackzuul oscc clouds to all-clouds  https://review.openstack.org/36009321:28
fungianteaya: nah, that's how fuel-ci works21:28
zaroclarkb: maybe, but i think we restart gerrit enough to not have lingering connections though21:28
anteayareally?21:28
anteayawow21:28
fungiit posts a comment for every job it starts on a change21:29
fungihowever, it only tests fuel changes, so nobody's complained21:29
anteayafrightening21:29
anteayaokay21:29
clarkbzaro: we only restart it that often because of the leak though...21:29
fungianyway, i've run some stats on the gerrit logs and don't see an excessive comment volume coming from anywhere21:29
clarkbzaro: so if it is the cause of the leak that is how we are working around it21:29
anteayafungi: okay thank you21:29
fungiwhatever the spike on the graph was, it's subsided again21:30
*** hashar has quit IRC21:30
clarkbbut I think the 503s happen when we hit that limit and apache has to say go away21:31
anteayaheck of a spike21:31
clarkbbefore that it will queue things so when people say gerrit is slow I think that is what is happening. It's actually their connection being parked on the proxy before being handled21:31
*** vinaypotluri has quit IRC21:31
fungithere was a large outbound traffic spike corresponding with the open connections spike21:31
zaroclarkb: well i don't know if that is causing the memory issue, maybe we should check the ssh sessions over time?21:32
*** thiagop has quit IRC21:32
zaroor does cacti have that info?21:32
clarkbcacti only has the system level info21:33
rcarrillocruzclarkb: when you said yesterday about doing performance testing, did you mean scping a dib image from nodepool to the controller, uploading it to glance, spinning up a vm and doing a reproduce.sh from some random job there?21:33
clarkbwhich shows us at a pretty steady 400ish tcp connections during normal operations21:33
zarowell i guess there's the gerrit show-sessions command21:33
fungialso a largeish but not huge corresponding inbound spike on the internal nic, suggesting it was perhaps reading a lot of whatever it was serving from the database21:33
clarkbrcarrillocruz: yup basically. though you don't have to scp it to the controller first; you should be able to upload directly from nodepool using openstackclient21:33
pabelangerrcarrillocruz: yes, I manually created them, because passwords21:33
zaroclarkb: maybe put that on a watch?21:33
pabelangerrcarrillocruz: we need to solve that21:34
rcarrillocruzclarkb: ah yeah, need to put the clouds on nodepool for infracloud tho21:34
rcarrillocruzi have put them just on puppetmaster for now21:34
haleybclarkb: armax was working on the backport, need to review21:34
rcarrillocruzthx21:34
clarkbrcarrillocruz: or just have one in your homedir or something21:34
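A sketch of that direct upload from the nodepool host (cloud name, file path and image name here are all assumptions):

    # assumes a clouds.yaml entry named "infracloud" somewhere openstackclient can
    # find it, e.g. ~/.config/openstack/clouds.yaml
    openstack --os-cloud infracloud image create \
        --disk-format qcow2 --container-format bare \
        --file /opt/nodepool_dib/ubuntu-xenial.qcow2 \
        ubuntu-xenial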
rcarrillocruzpabelanger: oh yeah, the stuff you asked about vault21:35
rcarrillocruz++21:35
*** jamesdenton has quit IRC21:35
*** piet has quit IRC21:36
clarkbfungi: I wonder if it's someone doing a bunch of api queries?21:37
clarkbstackalytics maybe?21:37
*** guilherme has joined #openstack-infra21:37
fungiclarkb: it's possible but will need someone to have time to do some focused analysis of the apache logs21:37
clarkbin any case we should probably plan to increase the number of connections apache will allow since I think the server can do more comfortably based on memory and cpu logs21:37
fungiagreed21:38
clarkbwe are just queuing when we don't need to21:38
fungiyep, looks like the server's powerful enough to absorb it if only it were configured to try21:38
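If that tuning happens, it would presumably be an mpm_event override along these lines (values purely illustrative, not a recommendation; MaxRequestWorkers has to stay within ServerLimit x ThreadsPerChild):

    # e.g. drop an override into /etc/apache2/conf-available/connection-tuning.conf
    # (Ubuntu layout assumed) containing:
    #   <IfModule mpm_event_module>
    #       ServerLimit          16
    #       ThreadsPerChild      64
    #       MaxRequestWorkers    1024
    #   </IfModule>
    sudo a2enconf connection-tuning
    sudo service apache2 reload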
*** guilherme has quit IRC21:39
* clarkb doesn't want to get distracted from getting these devstack changes through, but can probably look at that tomorrow at some point21:41
zarohow was the review.o.o CPU utilization during that time?21:41
*** mdrabe has quit IRC21:41
clarkbzaro: its pretty low like 10%21:41
clarkbzaro: you can see it on http://cacti.openstack.org21:41
zarook.  there's a bug that will cause cpu to pin on certain queries.21:41
clarkboh I guess it jumped to ~50% during the peak21:42
*** gouthamr has joined #openstack-infra21:42
clarkbhttp://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=25&rra_id=all21:42
zarohttps://review.openstack.org/#/c/356617/21:42
clarkbstill not too bad with room to breathe if we increase the connection limits21:42
zaroyeah, that doesn't look like it would cause unresponsiveness21:43
clarkbzaro: ya but should be fixed anyways as I know more people are using those types of queries to track work. +221:44
clarkbfungi: maybe you want to look at ^ and we can get gerrit updated when we rename things?21:45
zaromaybe should include the ssh timeout fix too?21:46
*** vinaypotluri has joined #openstack-infra21:46
*** sdake has joined #openstack-infra21:48
haleybclarkb: the mitaka and liberty patches are up, but since the code has changed they are not exact picks - https://review.openstack.org/#/c/359974/ and https://review.openstack.org/#/c/359975/21:49
*** roxanagh_ has joined #openstack-infra21:49
clarkbhaleyb: awesome. I am not devstack core but I think ianw jeblair mtreinish sdake and sc68cal are21:49
*** sdake_ has joined #openstack-infra21:50
*** thorst_ has joined #openstack-infra21:51
*** aeng has quit IRC21:51
*** weshay is now known as weshay_bbiab21:53
ianwclarkb: if sdague or sc68cal don't jump on them, happy to push them in if consensus here is they're good21:53
clarkbianw: the master change has been approved and is at the top of the gate queue. As long as you are happy with the merge delta I say go for it21:54
*** sdake has quit IRC21:54
*** thorst_ has quit IRC21:55
openstackgerritJeremy Stanley proposed openstack-infra/system-config: Add a wiki-dev.o.o server to test newer mediawiki  https://review.openstack.org/35824621:56
openstackgerritJeremy Stanley proposed openstack-infra/system-config: Set wiki name and logo URL  https://review.openstack.org/36003721:56
*** rvasilets___ has quit IRC21:57
*** ddieterly is now known as ddieterly[away]21:59
*** Hal has quit IRC21:59
*** adriant has joined #openstack-infra21:59
rcarrillocruzpabelanger: mind reviewing  https://review.openstack.org/#/c/360093/ ?22:01
ianwhaleyb: just a question on if the iptables stuff was deliberately left out of -> https://review.openstack.org/#/c/359975/322:01
*** Hal has joined #openstack-infra22:01
*** sdake_ has quit IRC22:01
ianwhaleyb: i guess same question for the mitaka one22:02
*** rfolco has quit IRC22:03
*** sdake has joined #openstack-infra22:03
pabelangerrcarrillocruz: looks right22:04
haleybianw: i didn't think those were backported, don't know why, but could be done in a separate patch22:04
rcarrillocruzthx22:04
pabelangerrcarrillocruz: also, think I am going to try my hand at a lets encrypt playbook22:05
rcarrillocruzoh?22:05
rcarrillocruzso people's cool with let's encrypt ?22:05
rcarrillocruzi remember cody or someone bringing it up in a meeting and people weren't very excited about it22:05
rcarrillocruzi'm all over automating all things :-)22:05
pabelangerI think as long as we have it 100% automated22:06
ianwhaleyb: sorry, not following ... i'm but a dilettante at neutron+ipv6 ... the sysctl stuff looks like it moved over fine, just wondering where the iptables fiddling comes into it, because it's not on the mitaka and liberty patches22:06
haleybianw: oh, it was just not to duplicate code - that iptables call needed the same interface value22:07
fungircarrillocruz: pabelanger: i'm not really keen on relying on letsencrypt in production, but if you want to play around with it be my guest22:07
ianwhaleyb: ooohhhh, ok, yeah, i see that now22:08
haleybianw: someone pointed out we had just done this for iptables, so i just copied it22:08
pabelangerfungi: Ya, I wanted to give it a try first. See what can be done22:09
*** Hal has quit IRC22:09
ianwhaleyb: cool.  no worries, as expected sdague jumped on it anyway, so all good22:09
pabelangerfungi: however, it would be nice to not have a self-signed cert if possible. Even if a wildcard for ic.openstack.org22:09
clarkbI will promote those as soon as the master one finishes (that should happen soon)22:09
haleybi hope it works on the stable branches the same way...22:09
fungipabelanger: rcarrillocruz: if enough of the rest of our root team are comfortable with it then i'm just as happy to let them take that on, but i have some fundamental issues with how letsencrypt is implemented that make it not something i personally want to have to deal with22:10
pabelangerfungi: Ya, I don't want to force it if people object22:10
*** notmyname has quit IRC22:10
rcarrillocruzfungi: i guess we can do a POC22:11
ianware we going to be the type of people who hit letsencrypt's renewal limits?22:11
rcarrillocruzfungi: but i have two personal projects i'd like to tackle at some point22:11
ianwalthough i guess you can ask for exceptions22:11
rcarrillocruz1. put all infra in repo and have automation to automatically deploy it22:11
clarkbhaleyb: telnet://2001:4800:1ae1:18:f816:3eff:fe27:9a2d:19885 (mitaka) and telnet://2001:4800:1ae1:18:f816:3eff:fe3b:80f0:19885 (liberty) should tell us22:11
pabelangerianw: if we had a wildcard, we could do up to 2000 hosts22:11
pabelangerper week22:12
clarkbI will keep an eye on those and make sure they get to tempest before enqueing and promoting to the gate22:12
rcarrillocruz2. move over to designate (yeah, i'd need you or clark to check how the new designate looks like in RAX and if the other folks at the foundation are comfortable with the UI)22:12
pabelangerhttps://letsencrypt.org/docs/rate-limits/22:12
fungiianw: not sure what the renewal limits are... we'd presumably need to renew all certificates once every month or two so as to avoid exceeding the imposed three-month expiration (whereas right now i get certs good for 2 years, so an order of magnitude more frequent renewing)22:12
ianwpabelanger: i thought let'sencrypt was no wildcards?  that seems to be the complaint every time it comes up on hackernews22:12
fungiianw: though it does seem to allow subjectaltnames22:13
clarkbhaleyb: actually they have both already started tempest so just need the master change to merge then I can enqueue and promote them22:13
fungiianw: so could be worked around that way if needed22:13
*** vhosakot has joined #openstack-infra22:13
pabelangerianw: it looks like they limit you to 100 Names per Certificate22:14
pabelangerup to 20 a week gets you the 200022:12
*** roxanagh_ has quit IRC22:14
fungiyeah, we're probably not in danger of exceeding that at least22:15
haleybclarkb: cool, i'll stay online until they merge22:15
clarkblooks like master just merged22:15
*** roxanagh_ has joined #openstack-infra22:15
clarkbpromoting now22:16
*** sileht has quit IRC22:16
fungipabelanger: ianw: we have about two dozen certs in circulation at the moment, and the one with the most subjectaltnames has maybe a dozen22:16
fungiso we _would_ need to stagger renewals i think, couldn't fire them all in the same week fwiw22:17
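The staggering itself would be simple enough, e.g. a per-host cron entry where the day/hour differs from host to host (purely a sketch: certbot is just one possible client and the hook flag is an assumption):

    # /etc/cron.d style entry; certbot renews anything within ~30 days of expiry by default
    17 3 * * 1 root certbot renew --quiet --post-hook "service apache2 reload"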
*** ddieterly[away] is now known as ddieterly22:17
clarkband thats done \o/ now just to get them merged22:17
*** esberglu has quit IRC22:17
pabelangerfungi: ya, we'd need some rotation schedule22:17
clarkbonce they merge we should see the rate of ansible rc 3's fall off on the zuul launchers22:17
*** aeng has joined #openstack-infra22:17
pabelanger++22:17
*** sileht has joined #openstack-infra22:17
fungipabelanger: but that goes back to my primary worry... complexity+automation=outages22:17
openstackgerritChris Krelle proposed openstack-infra/glean: Adjust the way we wait for interfaces to become available  https://review.openstack.org/35947122:17
pabelangerfungi: Ya, valid concern too22:18
openstackgerritJames E. Blair proposed openstack-infra/zuul: Always create foreign projects if needed  https://review.openstack.org/36010522:19
jeblairSpamapS, phschwartz, fungi: ^22:19
jeblairasselin__: ^ that may be of interest to third-party ci ops22:19
fungiright now, while our cert management is manual in nature and bears a nontrivial cost, there is a human involved in the process able to double-check that things are correct and fix/undo them if not. turn that over to a daemon and wait for your services to periodically go offline until someone's around to fix certificate renewal issues on them22:20
fungijeblair: awesome, thanks22:20
mordredclarkb: ooh - I was afk for a bit ... the devstack fix wound up being the fix?22:20
clarkbmordred: yup but we had to do it to the specific interface and actually get the interface correct22:20
fungimordred: yeah, just had to get moved around a little, sounds like22:20
mordredahhh. neat. WOOT!22:21
mordredthat's super exciting22:21
clarkbpabelanger: fungi another thing to keep in mind with letsencrypt is you have to run the renewal tool as root22:21
clarkbbecause why? who knows22:21
clarkbmordred: the master change just merged and I have put the stable branch fixes at the top of the queue so hopefully in the next hour or so this will be a thing of the past22:21
fungithat sounds solvable22:21
pabelangerclarkb: neat, didn't know that22:21
fungithe as root issue i mean22:22
clarkbfungi: oh I am sure, it's just a thing that doesn't make me confident in the tooling22:22
*** rfolco has joined #openstack-infra22:22
clarkb(it wants to listen on port 443 or something to make sure you really own the ip:port combo? I dunno)22:22
jeblairclarkb, fungi, pabelanger: you don't need to turn over management of apache to letsencrypt.  it's just an api, and you can use tools to get the certs in a 'normal' way.22:22
*** esikachev has quit IRC22:22
*** rfolco has quit IRC22:22
fungii also have philosophical/community concerns with the way letsencrypt came about, but i'm willing to keep my criticisms to technical issues22:22
*** xarses has quit IRC22:23
*** fguillot has joined #openstack-infra22:23
pabelangerYa, having letsencrypt touch apache would be a no go for me22:23
fungibecause not everybody is as bitter about the state of the certificate vending machine industry and browser collusion as i am22:23
*** fguillot has quit IRC22:24
jeblairi think jamielennox had some ansible stuff to use le in a less-insane manner; we might ask about that if we decide to go down that route.22:24
clarkbjeblair: correct, aiui even if it doesn't touch apache it needs or wants root22:24
mordredclarkb: _excellent_22:24
*** ddieterly has quit IRC22:24
ianwfungi: so, how about we connect an arduino based electric buzzer up to your new firehose and insert it in your pants, it can monitor events such that it makes the voltage inversely proportionate to the time left before renewal of the letsencrypt certificates?  keep a human in the loop :)22:24
fungiianw: compelling22:25
jeblairplus, fungi would have a superhuman sense of cert expiration times22:25
jheskethMorning22:25
openstackgerritMerged openstack-infra/system-config: Add openstackci/openstackzuul oscc clouds to all-clouds  https://review.openstack.org/36009322:25
fungiyeah, after a while i could take off the training pants and just preternaturally *know* when certs are about to expire22:25
mmedvedejeblair: I just had my zuul explode yesterday with the unknown project bug, filling up 40G with logs, thanks for https://review.openstack.org/36010521:26
jeblairmmedvede: well, that answers my question of "why aren't other folks seeing this?"  :)22:26
*** nwkarsten has quit IRC22:27
mmedvedeI should hang out in infra channel more :)22:27
jeblairthat's probably going to be worth a 2.5.1 release too22:28
mordred++22:28
mmedvedethat would be appreciated. Currently pinning to 2.5.022:29
*** markvoelker has joined #openstack-infra22:29
*** fguillot has joined #openstack-infra22:29
openstackgerritK Jonathan Harker proposed openstack-infra/puppet-log_processor: Give each job's console log its own crm classifier  https://review.openstack.org/33855822:32
SpamapSmmedvede: wow really? Interesting.22:34
SpamapShere we thought we were the only ones dumb enough to add unknown projects to dependency chains ;)22:35
*** hockeynut has quit IRC22:35
*** yamahata has quit IRC22:36
openstackgerritK Jonathan Harker proposed openstack-infra/log_processor: Give each job's console log its own crm classifier  https://review.openstack.org/36011222:36
*** Julien-zte has joined #openstack-infra22:38
*** yamahata has joined #openstack-infra22:38
anteayamorning jhesketh22:39
*** fguillot has quit IRC22:40
* anteaya laughs at the vision of fungi in hawaiian shirts and buzzer pants22:40
fungiwell, they'd just be bermuda shorts with some custom wiring22:40
anteayaha ha ha22:41
openstackgerritPaul Belanger proposed openstack-infra/project-config: Fix legend for 'Time to Ready' for nodepool dashboard  https://review.openstack.org/36011322:41
fungithat is to say, not at all wired the way you'd normally wire a pair of bermuda shorts22:41
clarkbdo you normally wire bermuda shorts?22:42
anteayathe way I'd wire my bermuda shorts for instance22:42
fungishhh22:42
pleia2hehe22:42
*** xarses has joined #openstack-infra22:43
*** fguillot has joined #openstack-infra22:46
*** yamamoto has joined #openstack-infra22:48
*** thorst has joined #openstack-infra22:48
*** vhosakot has quit IRC22:50
clarkban instance in osic running grenade behind the two devstack backports is looking good. It got through the first stack run and is now upgrading services22:52
*** rcernin has quit IRC22:53
anteayayay!22:54
*** marcusvrn_ has quit IRC22:54
haleybclarkb: great.  i do have to take off now though, will check-in later22:55
*** rbrndt has quit IRC22:55
*** sheeprine has quit IRC22:55
*** rlandy is now known as rlandy|bbl22:55
*** zigo has quit IRC22:56
anteayathanks haleyb22:56
*** sdake has quit IRC22:57
*** zigo has joined #openstack-infra22:57
*** nwkarsten has joined #openstack-infra22:58
*** sheeprine has joined #openstack-infra22:58
clarkband now it is tempesting so I think we should be set once these merge. It will be easyish for infra to check tomorrow by grepping zuul launcher logs22:59
*** fguillot has quit IRC23:00
*** zz_dimtruck is now known as dimtruck23:00
jamielennoxjeblair: there are some alternative clients out there for letsencrypt that i think would be easier to automate than the official certbot one if you're looking around23:02
jamielennoxthe certbot config files are kind of crazy and unnecessary23:02
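For a flavour of what a simpler client looks like, issuing a cert via the webroot challenge can be a one-liner (hostname and webroot path are placeholders; acme.sh is just one example of such a client):

    # answers the http-01 challenge by writing into an existing webroot, so no
    # web-server reconfiguration and no daemon listening on 443 is needed
    acme.sh --issue -d mirror.example.org -w /var/www/html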
*** fguillot has joined #openstack-infra23:03
*** markvoelker has quit IRC23:04
*** cody-somerville has joined #openstack-infra23:05
openstackgerritPaul Belanger proposed openstack-infra/project-config: Fix ready node launch attemps panel for nodepool  https://review.openstack.org/36011823:05
nibalizerssss23:05
clarkbwoot that grenade job just passed23:05
*** pradk has quit IRC23:06
*** hongbin has quit IRC23:06
anteayayay!23:07
*** sdake has joined #openstack-infra23:07
*** tpsilva has quit IRC23:08
*** tqtran has quit IRC23:10
*** oomichi has quit IRC23:11
*** oomichi has joined #openstack-infra23:13
*** apetrich has quit IRC23:18
*** apetrich has joined #openstack-infra23:18
clarkband they just merged, so any job in the gate now or in check starting after nowish should be good23:19
*** shashank_hegde has quit IRC23:19
pabelangernice work everybody23:20
*** edmondsw has quit IRC23:20
pabelangeralso happy we didn't revert to ipv423:20
*** kzaitsev_mb has quit IRC23:20
clarkbI started a tail on zl01 that may pop up a few last stragglers from the check queue but within an hour we should see the error fall off to zero23:21
mordredpabelanger: ++23:21
phschwartzjeblair: that patch looks great, ty for the help.23:21
clarkbpabelanger: is the ubuntu mirror looking happy again? I am going to guess that it is due to the number of happy jobs out there23:23
clarkbpabelanger: but I think that was the only other known issue?23:24
clarkboh wait the ssh key issue is still outstanding I think23:24
pabelangerclarkb: yes, we should be good now23:24
pabelangerwhich ssh key issue?23:24
mordredwhat's the ssh key issue?23:25
zaroianw: ping23:26
jrollis the ssh thing going on from everywhere or specific locations?23:28
pabelangerclarkb: mordred: https://review.openstack.org/#/c/360057/ only change I proposed today for SSH. To help debug ssh_host_key changing23:28
* jroll sees pabelanger in #rackspace23:28
pabelangerjroll: which ssh thing? The question I asked in #rackspace?23:28
jrollpabelanger: yeah23:28
*** fguillot has quit IRC23:29
pabelangerjroll: seems to be limited to rax-iad23:29
pabelangerseeing a lot of SSH timeouts there23:29
jrollpabelanger: yeah, but only coming from zuul or can you reproduce from everywhere?23:29
openstackgerritMerged openstack-infra/tripleo-ci: Add sshnaidm to tripleo-cd-admins  https://review.openstack.org/35721723:29
pabelangerjroll: no, I couldn't access the IP from my local internet either23:30
clarkbmordred: tl;dr is ansible complains about the host key changing implying mitm or similar23:30
*** hockeynut has joined #openstack-infra23:30
mordredclarkb: oh. yeah. I saw discussion of that23:30
jrollpabelanger: ok. what's the failure rate?23:30
oomichipleia2: hi, thanks for reviewing on https://review.openstack.org/#/c/358148/23:30
clarkbpabelanger: +2 for the zl change23:31
oomichipleia2: it is fine to drop that if you're not interested in it23:31
mordredpabelanger: me too23:31
pleia2oomichi: sure, I know there are a couple others to look at, but I'm not really certain about the logic in those (why we're doing the thing you're changing)23:31
clarkband with that I need to go grocery shopping23:31
*** xyang1 has quit IRC23:31
*** krtaylor has joined #openstack-infra23:32
pleia2oomichi: like, I'm not sure if there's a reason for having the for loop in https://review.openstack.org/#/c/358147/ before the exception23:32
pleia2s/exeption/except23:33
*** fguillot has joined #openstack-infra23:33
oomichipleia2: In the changed code, lines 60-65 never raise IOError.23:34
*** nwkarsten has quit IRC23:34
pleia2ah, hm23:34
pabelangerclarkb: other issue I am tracking is gearman missing data, tcpdump in progress but haven't had the IndexError again23:35
jrollpabelanger: poking some internal support folks23:35
ianwzaro: hey23:36
pabelangerjroll: cool, thanks!23:36
clarkbpabelanger: that isn't causing failures, right?23:36
clarkbjust less efficient23:36
*** AnarchyAo has quit IRC23:36
*** jcoufal has joined #openstack-infra23:36
pabelangerjroll: http://grafana.openstack.org/dashboard/db/nodepool-rackspace?panelId=11&fullscreen shows the failure rates, you can filter on iad23:36
jrollcool, ty23:37
pabelangerclarkb: ya, less efficient23:37
pabelangerclarkb: it's only happened once today23:37
pabelangerand I'm only chasing it because it was osic-cloud123:37
*** jd__ has quit IRC23:37
jrollpabelanger: looks like over 50% :/23:38
pabelangeralso still having DNS issues in bluebox, but going to wait until we reset nodepool for shade fixes23:38
*** jd__ has joined #openstack-infra23:38
zaroianw: i checked gerrit functionality and it seems to work ok for me when i create a new site.  could you please explain to me what you did to get the error?23:38
pabelangerjroll: Ya, rax-iad has been problematic since Aug. 3rd: http://grafana.openstack.org/dashboard/db/nodepool-rackspace?from=1469489937977&to=147208193797723:39
pabelangeronly starting to debug it now23:39
jrolloh wow23:39
jrollpabelanger: so someone mentioned ssh host keys changing earlier, that's completely different, right?23:40
pabelangerjroll: it is23:40
jrollcool23:40
jrollpabelanger: well, I've got irc messages out, but you might get a quicker response via ticket23:40
zaroianw: is puppet running on new gerrit site or on existing gerrit site?23:40
zaropuppet/puppet running `java -jar gerrit.war ...`23:41
openstackgerritMerged openstack-infra/zuul: Store ssh_host_key of remote node  https://review.openstack.org/36005723:42
*** nwkarsten has joined #openstack-infra23:42
ianwzaro: this is launching a new node with our launch scripts in system-config23:43
zaroactually i'm pretty sure it's running on existing site. ok. just wanted to confirm because that would mean my theory was correct.23:43
*** Swami has quit IRC23:43
pabelangerjroll: sure, can do that too23:43
ianwzaro: the host is up i think, although i've manually created the index but we can check logs23:43
ianwjust a sec...23:43
*** jcoufal has quit IRC23:44
ianwzaro: try ssh root@104.130.159.16123:46
zaroeven against existing site i'm not really sure why you needed to run offline reindex at all.  the gerrit version didn't change so it shouldn't have been necessary23:46
*** nwkarsten has quit IRC23:47
zaroianw: you want me to see the logs?23:47
ianwzaro: we can poke at it, yeah23:47
zaroi'm not infra root23:48
dmsimardAh, I'm sure it's been brought up but the issue I see with the ipv6 VMs is that if you're in a network that's not ipv6 enabled you can't follow the logs through telnet :(23:48
*** dingyichen has joined #openstack-infra23:48
*** yolanda has quit IRC23:48
*** claudiub has quit IRC23:48
clarkbdmsimard: yes23:49
* dmsimard is going to lobby his ISP for v623:49
*** cody-somerville has quit IRC23:49
clarkbdmsimard: zuulv3 will address that but in the meantime you can get an ipv6 instance (I have used a rax instance but dreamhost and vexxhost both do ipv6); also HE tunnels are a thing23:50
jeblairdmsimard: he tunnels are easy to get and set up here: https://tunnelbroker.net/23:50
pabelangerclarkb: because of the websockets?23:50
dmsimardthanks for the suggestions :)23:51
*** shashank_hegde has joined #openstack-infra23:51
jeblairpabelanger: yes -- websockets will need a proxy23:52
jeblairmordred has a plan that involves that23:52
pabelangerjeblair: Ya, figured it was something along those lines23:52
*** jerryz has quit IRC23:53
jeblairsomething about in-browser javascript and connecting to random ports on random other hosts :)23:53
clarkbpabelanger: ya23:53
*** zhurong has joined #openstack-infra23:54
asselin__jeblair, thanks for the heads up. We recently recommended folks pin zuul to 2.5.0. http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/single_node_ci_data.yaml#n31 Are we expecting another tag after that merges?23:56
jeblairasselin__: yes23:56
*** roxanagh_ has quit IRC23:57
pabelangeryay, 7 patches merging now23:58
pabelangersuch wow23:58
