Thursday, 2018-04-19

*** rpioso is now known as rpioso|afk00:01
*** eernst has quit IRC00:02
*** newstem has joined #openstack-infra00:04
Jeffrey4ltonyb, re EOL openstack/kolla for newton today, sure00:06
*** liusheng has quit IRC00:07
*** gouthamr has joined #openstack-infra00:09
pabelangerianw: centos-7 still looks broken00:10
pabelangerI think because we set DIB_EPEL_DISABLED in nodepool.o.o00:10
clarkbpabelanger: it's explicitly enabling it for that one install though right?00:11
*** dingyichen has joined #openstack-infra00:11
pabelangerclarkb: yah, but the issue is: https://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/elements/epel/pre-install.d/05-rpm-epel-release#n3600:12
pabelangerhttps://review.openstack.org/#/c/561479/10/diskimage_builder/elements/epel/pkg-map00:12
pabelangerhas missed centos00:12
pabelangercentos7 is for the wrong element00:12
pabelangerwell, the other centos7 element00:13
pabelangercentos-minimal is centos00:13
pabelangerso, we'll need to fix and tag 2.14.100:13
clarkbthe distro is still centos7 if using centos-minimal isn't it?00:14
ianwoh i'm constantly confused over this00:15
clarkbif pkg-map doesn't work that way it probably should00:15
openstackgerritPaul Belanger proposed openstack/diskimage-builder master: Fix epel element for centos-minimal  https://review.openstack.org/56243200:15
clarkbregardless of what element you use to install centos7 the distro is still centos700:15
pabelangerclarkb: ianw: ^00:15
pabelangerno, it is just centos for centos-minimal00:16
ianwyeah, it's messed up00:16
clarkbthat seems like a bug00:16
pabelanger2018-04-18 12:27:29.435 | ++ /opt/dib/tmp/dib_build.aHtWvnMf/hooks/environment.d/10-centos-distro-name.bash:source:1 :   DISTRO_NAME=centos00:16
*** edmondsw has joined #openstack-infra00:16
clarkbeither it should always just be centos or it should always be centos700:16
clarkbnot one or the other :/00:16
*** eernst has joined #openstack-infra00:16
pabelangeryah, I am unsure why they are different00:16
ianwclarkb: yeah, it should be, all we need is a time-machine :)  at any rate i think we can learn from this for centos8 era00:17
clarkb++00:17
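The fix under discussion amounts to the epel element's pkg-map covering the "centos" distro name (what centos-minimal sets DISTRO_NAME to) alongside "centos7". A minimal sketch of the pkg-map shape; the package mapping shown is illustrative, not the exact merged change:

    {
        "distro": {
            "centos": {
                "epel-release": "epel-release"
            },
            "centos7": {
                "epel-release": "epel-release"
            }
        },
        "default": {
            "epel-release": "epel-release"
        }
    }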
ianwpabelanger: i guess this didn't fail CI because we don't pip-and-virtualenv in the nodepool tests00:17
ianwwe probably should00:17
pabelanger      DIB_EPEL_DISABLED: '1'00:18
pabelangerwe need to set that in testing00:18
pabelangeror should00:18
ianwi'm not sure that would have highlighted this though?  we need to be trying to use epel to notice it's not working00:19
ianwbut, i don't think it hurts00:19
*** dingyichen has quit IRC00:19
pabelangerwell, we cannot disable EPEL00:20
*** dingyichen has joined #openstack-infra00:20
pabelangerthat is why nb01.o.o is failing00:20
pabelangersince yum-utils didn't get installed00:20
pabelangerhttp://nb01.openstack.org/centos-7-0000008996.log00:20
*** dingyichen has quit IRC00:21
*** dingyichen has joined #openstack-infra00:21
ianwpabelanger: right, "yum-config-manager: command not found" because of the missing match00:22
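Roughly what the element's pre-install hook is doing here, as a paraphrased shell sketch (not the verbatim dib source): disabling epel needs yum-config-manager from yum-utils, which never got installed because of the missed pkg-map entry:

    # install epel-release (package names resolved via the element's pkg-map)
    yum install -y epel-release
    if [ "${DIB_EPEL_DISABLED:-0}" != "0" ]; then
        # requires yum-utils; hence "yum-config-manager: command not found"
        yum-config-manager --disable epel
    fi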
*** edmondsw has quit IRC00:23
ianwthis started because i noticed that python2-pip was actually from epel, and pip-and-virtualenv *thought* it was installing it and overwriting it, but it wasn't really00:23
ianwso i included epel in pip-and-virtualenv00:24
*** edmondsw has joined #openstack-infra00:24
ianwbut clearly none of our CI includes epel or pip-and-virtualenv?00:24
*** dingyichen has quit IRC00:24
pabelangeryah, there was a break in tripleo too, but we fixed that this afternoon00:24
pabelangerianw: no, we have pip-and-virtualenv00:25
*** dingyichen has joined #openstack-infra00:25
ianwpabelanger: hmm, from source? how did we miss this then?00:25
pabelangerit is only a failure with DIB_EPEL_DISABLED: '1'00:26
pabelangerand we don't set it in our CI00:26
pabelangerso we don't call yum-config-manager00:26
pabelangerhttp://logs.openstack.org/79/561479/10/gate/nodepool-functional-py35-redhat-src/a0dbc04/controller/logs/builds/centos-7-0000000001_log.txt.gz#_2018-04-18_12_29_58_32200:27
*** zhurong has joined #openstack-infra00:27
pabelangeris where we install get-pip.py00:27
ianwoohhh, right; ok we don't try to disable it so never noticed the missing package, i see now, sorry00:28
*** edmondsw has quit IRC00:28
pabelangeryah00:29
pabelangerhttp://logs.openstack.org/79/561479/10/gate/nodepool-functional-py35-redhat-src/a0dbc04/controller/logs/builds/centos-7-0000000001_log.txt.gz#_2018-04-18_12_28_50_28000:29
pabelangeris where it would try to disable00:29
*** edmondsw has joined #openstack-infra00:29
ianwand where i did update it to do 'yum install --enablerepo=epel python2-pip', which doesn't care00:29
pabelangerright00:29
ianwok, yay continuing the long tradition of closely spaced dib point releases00:30
pabelangerwe likely should set DIB_EPEL_DISABLED in nodepool dsvm, since we don't actually want to test EPEL packages00:30
*** edmondsw has quit IRC00:32
ianw++00:32
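In nodepool's builder config that would be an env-vars entry on the diskimage, along the lines of this illustrative stanza (the '1' matches what nodepool.o.o already sets):

    diskimages:
      - name: centos-7
        env-vars:
          DIB_EPEL_DISABLED: '1'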
*** edmondsw has joined #openstack-infra00:32
openstackgerritMerged openstack-infra/zuul-jobs master: Switch to http://security.debian.org/ for debian  https://review.openstack.org/56234900:32
*** caphrim007_ has quit IRC00:37
*** edmondsw has quit IRC00:37
*** caphrim007 has joined #openstack-infra00:38
*** eernst has quit IRC00:39
*** caphrim007_ has joined #openstack-infra00:40
*** felipemonteiro_ has joined #openstack-infra00:40
*** caphrim007 has quit IRC00:43
*** felipemonteiro_ has quit IRC00:46
*** eernst has joined #openstack-infra00:46
*** eernst has quit IRC00:51
*** tobberydberg has quit IRC00:52
*** anteaya has quit IRC01:02
*** wolverin_ has quit IRC01:04
*** wolverineav has joined #openstack-infra01:05
*** eernst has joined #openstack-infra01:08
*** wolverineav has quit IRC01:12
*** Kevin_Zheng has joined #openstack-infra01:19
*** yamahata has quit IRC01:19
*** cshastri has joined #openstack-infra01:19
*** slaweq has joined #openstack-infra01:20
*** slaweq has quit IRC01:25
*** zhurong has quit IRC01:29
*** germs has quit IRC01:30
*** germs has joined #openstack-infra01:30
*** germs has quit IRC01:30
*** germs has joined #openstack-infra01:30
*** dave-mccowan has joined #openstack-infra01:37
*** tobberydberg has joined #openstack-infra01:37
*** tobberydberg has quit IRC01:42
*** tobberydberg has joined #openstack-infra01:44
ianwok, i'm going to try deleting the 2848 images we've leaked into rax-ord01:46
ianwi am pretty sure the shade magic that stores the link to the swift container inside the image is not working in RAX01:46
ianwhence shade doesn't know how to remove the object side of things.  but let's clean up the existing images and then we can worry about that01:47
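The manual cleanup amounts to deleting each leaked glance image and then the swift objects backing it, since the image-to-object back-link shade normally stores is missing in RAX. A hedged openstackclient sketch; the placeholder uuid and container/object names are illustrative only, and care is needed not to delete in-use images:

    # remove a leaked image record from glance (<image-uuid> is hypothetical)
    openstack image delete <image-uuid>
    # the backing data lives as objects in a swift container (e.g. "images");
    # with the back-link missing they must be removed separately:
    openstack object delete images opensuse-423-1516489083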
*** ssbarnea_ has quit IRC01:51
*** markvoelker_ has joined #openstack-infra01:52
*** markvoelker has quit IRC01:53
*** zhangfei has joined #openstack-infra01:55
*** salv-orl_ has joined #openstack-infra01:59
*** dayou has quit IRC02:00
*** salv-orlando has quit IRC02:02
*** jamesmcarthur has joined #openstack-infra02:05
*** dayou has joined #openstack-infra02:05
*** janki has joined #openstack-infra02:07
*** hongbin_ has joined #openstack-infra02:10
*** jchhatbar has joined #openstack-infra02:10
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: Add allowed-triggers and allowed-reporters tenant settings  https://review.openstack.org/55408202:10
*** salv-orl_ has quit IRC02:11
*** janki has quit IRC02:12
*** gcb has joined #openstack-infra02:13
*** salv-orlando has joined #openstack-infra02:13
*** ramishra has joined #openstack-infra02:17
*** salv-orlando has quit IRC02:20
*** salv-orlando has joined #openstack-infra02:22
openstackgerritTristan Cacqueray proposed openstack-infra/nodepool master: Refactor NodeLauncher to be generic  https://review.openstack.org/53555502:23
*** masuberu has joined #openstack-infra02:27
*** masber has quit IRC02:31
*** kiennt2609 has joined #openstack-infra02:33
*** kiennt2609 has quit IRC02:34
*** kiennt2637 has joined #openstack-infra02:34
*** kiennt2637 has quit IRC02:35
*** kiennt2609 has joined #openstack-infra02:35
*** stakeda has joined #openstack-infra02:39
*** psachin has joined #openstack-infra02:41
openstackgerritMerged openstack/diskimage-builder master: Fix epel element for centos-minimal  https://review.openstack.org/56243202:42
*** zhurong has joined #openstack-infra02:50
xinliangianw/pabelanger: Now arm64 ubuntu/debian node can be accessed. thanks02:50
xinliangBut there are still some issues; we need to log in to ubuntu/debian to debug.02:50
ianwxinliang: so the debian issue i guess we know, with the scsi drivers not accessing the config-drive?02:51
ianwubuntu should be ok?02:51
xinliangDebian node also can be accessed, but the previous issue (packages can't be installed) still happened.02:52
xinliangyes, ubuntu node can be accessed now. But I found one strange issue: http://logs.openstack.org/59/557659/14/experimental/kolla-build-debian-source-arm64/4c888fd/job-output.txt.gz#_2018-04-18_11_27_57_64313002:52
ianwxinliang: oh, what was that?  i know that pabelanger has been working on the debian mirrors which might be related02:52
*** harlowja_ has quit IRC02:53
xinliangianw: this is the debian package installing issue: http://logs.openstack.org/59/557659/15/experimental/kolla-build-debian-source-arm64/7d377d9/job-output.txt.gz#_2018-04-19_01_52_57_15011402:54
ianwxinliang: hmm ... interesting.  i had to switch the download of a file during build from http to https ... it seemed there was a bad cache or something in between02:54
xinliangianw: yes, the ubuntu node seems to have an https fetching issue.02:54
*** rfolco|off has quit IRC02:55
pabelangerxinliang: you likely want to stop fetching the key from the network and load it from local filesystem02:55
*** rfolco|off has joined #openstack-infra02:55
xinliangianw: For the debian node package installing issue, maybe there is some broken package installed during image building stage02:56
ianwxinliang: to be clear, i think what pabelanger is saying is that we can cache that key into something like /opt/files during the image build, if it is required constantly, to avoid pulling over the network02:56
*** vivsoni has quit IRC02:56
pabelangerianw: or cached in the kolla project. I'm not sure what the apt-key is for02:57
xinliangpabelanger: but fetching the remote key is controlled by the building process of kolla02:57
*** vivsoni has joined #openstack-infra02:58
xinliangpabelanger: kolla might add some repos whose keys can come either from a key server or a link02:58
pabelangerxinliang: right, I am suggesting moving those keys in-tree02:59
ianw            "Setting up collectd (5.7.1-1.1) ...",02:59
ianw            "Job for collectd.service failed because the control process exited with error code.",02:59
pabelangerthe release.key you are using hasn't changed since 2017-08-23 12:0802:59
ianwxinliang: ^ so that's the debian error that causes things to fail02:59
ianwthat seems like a debian bug if collectd doesn't install correctly02:59
*** toabctl has quit IRC03:00
xinliangpabelanger: yes, caching might be better, but if a node can't fetch https things, that might not be reasonable03:00
ianwhttps://bugs.debian.org/cgi-bin/bugreport.cgi?bug=779483 ... fails if no FQDN03:00
openstackDebian bug 779483 in collectd "collectd: Fails to install if no FQDN domain name" [Wishlist,Open]03:00
pabelangernetworking in linaro-cn1 isn't the best03:00
pabelangerI'm not sure how well jobs are going to work there in general03:01
xinliangianw: hmm, let me try if i can reproduce the debian issue on my local machine03:02
ianwxinliang: ooohh, that's right, so zuul can log into debian, but none of us roots can03:03
*** aeng has quit IRC03:04
xinliangpabelanger: I've tried it; it can fetch the https key on another linaro-cn1 vm. No idea why it failed on the ubuntu node.03:04
*** udesale has joined #openstack-infra03:08
*** jamesmcarthur has quit IRC03:09
*** kiennt2609 has quit IRC03:10
xinliangianw: sorry, the debian package issue is not the same as the previous one. This time it is collectd; last time it was libffi-dev and libssl-dev03:13
xinliangbut, strangely, i can install collectd on my local machine. I will try the linaro-cn1 mirror03:14
ianwxinliang: if it's related to hostnames, etc, that might be different03:14
xinliangianw, the bug you just showed is old and was filed for jessie03:15
ianwbut still open :)03:15
*** aeng has joined #openstack-infra03:16
*** esberglu has joined #openstack-infra03:18
ianwxinliang: do you have an easily accessible host where you could build the scsi driver we need?  we could just manually drop it in until either the cloud upgrades or debian includes it?03:19
*** dhajare has joined #openstack-infra03:21
*** slaweq has joined #openstack-infra03:21
xinliangianw: which scsi driver? we can fetch the debian kernel source code package and build it03:21
*** harlowja has joined #openstack-infra03:22
xinliangI think we can build it in a debian container on the linaro-cn1 mirror host. what do you think of this?03:22
*** masuberu has quit IRC03:23
ianwxinliang: hmm, that probably would work03:24
*** slaweq has quit IRC03:26
ianwi guess it's a bit fragile, if the kernel updates03:27
ianwin that case i guess the module just doesn't load03:27
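A sketch of what building just that module in a debian container could look like; every step is illustrative (source paths, module versioning and signing may need adjusting), and the module must be rebuilt when the kernel updates, as noted above:

    apt-get install -y build-essential linux-headers-"$(uname -r)" linux-source
    tar xf /usr/src/linux-source-*.tar.xz -C /usr/src
    cd /usr/src/linux-source-*/
    cp /boot/config-"$(uname -r)" .config
    ./scripts/config --module CONFIG_SCSI_SYM53C8XX_2
    make olddefconfig modules_prepare
    make M=drivers/scsi/sym53c8xx_2 modules
    install -D drivers/scsi/sym53c8xx_2/sym53c8xx.ko \
        /lib/modules/"$(uname -r)"/extra/sym53c8xx.ko
    depmod -a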
*** masuberu has joined #openstack-infra03:33
*** nicolasbock has quit IRC03:34
*** masber has joined #openstack-infra03:35
*** cshastri has quit IRC03:35
*** masuberu has quit IRC03:38
*** dhajare has quit IRC03:40
ianwpabelanger: yay, centos images converting03:41
*** dhajare has joined #openstack-infra03:44
prometheanfirerequirements jobs are stuck because we can't build newer libvirt-python on xenial (old libvirt is messing it up), we have two choices I can see, somehow install a newer libvirt or cap libvirt-python to <4.2.003:48
prometheanfirehttp://logs.openstack.org/51/562351/1/check/requirements-tox-py27-check-uc/8b78b71/job-output.txt.gz#_2018-04-18_21_13_19_02258703:49
*** jamesmcarthur has joined #openstack-infra03:49
*** dhajare has quit IRC03:53
xinliangianw: yes, when the kernel updates we need to rebuild again.03:57
ianwprometheanfire: hmm, do we only install UCA in devstack jobs?03:57
*** cshastri has joined #openstack-infra03:58
ianwprometheanfire: why are we building it too?  we usually have a wheel03:58
prometheanfireianw: it's part of our generate-constraints job03:58
prometheanfireI'm not sure why a wheel isn't used03:58
prometheanfireah, only a tar is published03:59
prometheanfirewhich needs building03:59
ianwdid it just update?03:59
*** dhajare has joined #openstack-infra03:59
ianwsometimes a wheel is published and this just sorts itself out03:59
prometheanfireno, 4.2.0 has been out for a while04:00
prometheanfirethis has been failing since the 4th04:00
prometheanfirewe just noticed in the last couple days04:00
prometheanfire4.2.0 released april 304:00
prometheanfire4.1.0 is just a tar.gz04:01
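The cap prometheanfire describes would be a one-line change in openstack/requirements, something like this illustrative entry:

    libvirt-python<4.2.0  # LGPLv2+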
*** kiennt2609 has joined #openstack-infra04:02
*** kiennt2609 has quit IRC04:03
*** pbourke has quit IRC04:07
*** pbourke has joined #openstack-infra04:08
*** hongbin_ has quit IRC04:11
*** yamahata has joined #openstack-infra04:11
*** germs has quit IRC04:12
clarkblibvirt python is supposed to be backward compat with older libvirt04:15
clarkbshould check if upstream knows04:15
prometheanfireya, I'll do that, but I suspect it only goes so far04:16
*** bobh has joined #openstack-infra04:17
*** jchhatba_ has joined #openstack-infra04:18
*** jchhatba_ has quit IRC04:19
*** jchhatba_ has joined #openstack-infra04:19
*** jchhatbar has quit IRC04:21
*** zhurong has quit IRC04:21
*** bobh has quit IRC04:36
*** rajinir has quit IRC04:50
*** jchhatbar has joined #openstack-infra04:50
*** eernst has quit IRC04:50
*** eernst has joined #openstack-infra04:51
*** harlowja has quit IRC04:52
*** jchhatba_ has quit IRC04:53
*** Qiming has quit IRC04:59
*** Qiming has joined #openstack-infra05:02
*** bhujay has joined #openstack-infra05:05
*** claudiub|2 has joined #openstack-infra05:10
*** pcichy has joined #openstack-infra05:12
*** Qiming_ has joined #openstack-infra05:13
mordredianw: awesome05:15
*** annp has quit IRC05:15
mordredianw: (re images and objects in rax)05:15
*** annp has joined #openstack-infra05:15
ianwmordred: delete_object('images', 'opensuse-423-1516489083') recurses right?05:17
*** mwarad has joined #openstack-infra05:17
*** links has joined #openstack-infra05:18
*** mwarad has quit IRC05:18
*** udesale_ has joined #openstack-infra05:19
*** udesale has quit IRC05:19
*** mwarad has joined #openstack-infra05:19
*** _mwarad_ has joined #openstack-infra05:19
*** _mwarad_ has quit IRC05:19
*** armaan has joined #openstack-infra05:20
*** pgadiya has joined #openstack-infra05:21
*** pgadiya has quit IRC05:21
*** slaweq has joined #openstack-infra05:22
*** e0ne has joined #openstack-infra05:27
*** slaweq has quit IRC05:27
*** pcichy has quit IRC05:32
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: Add allowed-triggers and allowed-reporters tenant settings  https://review.openstack.org/55408205:33
*** larainema has quit IRC05:34
mordredianw: yes05:37
*** zhurong has joined #openstack-infra05:37
*** quiquell|off is now known as quiquell|ruck05:43
*** armaan has quit IRC05:51
*** yolanda has joined #openstack-infra05:52
*** hashar has joined #openstack-infra05:57
*** TobbeCN has joined #openstack-infra05:57
prometheanfirepabelanger clarkb: I know it's late and all, but I'm guessing no progress on the devstack patches?05:57
*** aeng has quit IRC05:59
*** _mwarad_ has joined #openstack-infra06:03
*** mwarad has quit IRC06:03
*** bhujay has quit IRC06:04
*** bhujay has joined #openstack-infra06:04
*** toabctl has joined #openstack-infra06:04
xinliangianw: i can reproduce that debian collectd installing bug. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=77948306:11
openstackDebian bug 779483 in collectd "collectd: Fails to install if no FQDN domain name" [Wishlist,Open]06:11
xinliangianw: it seems if the hostname is not a dns name it will fail to install the collectd pkg. Which means the hostname should be pingable on the node.06:15
xinliangOr installing collectd will fail06:15
*** armaan has joined #openstack-infra06:16
*** _mwarad_ has quit IRC06:16
*** jamesmcarthur has quit IRC06:17
*** bhujay has quit IRC06:20
ianwxinliang: hmm, so basically if "ping $(hostname)" doesn't work it fails?06:20
*** bhujay has joined #openstack-infra06:20
xinliangianw: yes06:20
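A minimal reproduction sketch of that failure mode (the hostname value is hypothetical):

    hostname not-a-resolvable-name   # any name that does not resolve
    apt-get install -y collectd      # postinst tries to start collectd.service
    # => "Job for collectd.service failed because the control process
    #     exited with error code."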
*** pcaruana has joined #openstack-infra06:21
xinliangianw: I also reproduced this issue on my local ubuntu machine06:21
xinliangbut the linaro-cn1 ubuntu node can install collectd06:21
xinliangso the hostname might not be set properly on the linaro-cn1 debian node, i think06:22
*** stakeda has quit IRC06:26
*** dhill__ has joined #openstack-infra06:30
*** dsariel has joined #openstack-infra06:31
xinliangianw: from uname -a06:31
xinliangubuntu: http://logs.openstack.org/59/557659/14/experimental/kolla-build-debian-source-arm64/4c888fd/zuul-info/zuul-info.primary.txt06:31
xinliangdebian: http://logs.openstack.org/59/557659/15/experimental/kolla-build-debian-source-arm64/7d377d9/zuul-info/zuul-info.primary.txt06:32
xinliangwe can see ubuntu node set hostname to ubuntu-xenial-arm64-linaro-cn1-000357998706:32
*** dhill_ has quit IRC06:32
xinliangbut the debian node's hostname is still 'debian', which might not be pingable06:33
*** zhangfei has quit IRC06:36
*** slaweq has joined #openstack-infra06:36
*** slaweq_ has joined #openstack-infra06:39
*** florianf has joined #openstack-infra06:40
*** slaweq has quit IRC06:41
*** dklyle has joined #openstack-infra06:41
*** david-lyle has joined #openstack-infra06:41
*** zhangfei has joined #openstack-infra06:48
*** dsariel has quit IRC06:49
*** dims has quit IRC06:54
*** quiquell|ruck is now known as quiquell|ruck|af06:54
*** quiquell|ruck|af is now known as quique|ruck|afk06:54
*** dims has joined #openstack-infra06:56
*** dims has quit IRC07:01
*** alexchadin has joined #openstack-infra07:02
*** dims has joined #openstack-infra07:02
*** hemna_ has quit IRC07:03
*** namnh has joined #openstack-infra07:05
*** bhavik1 has joined #openstack-infra07:05
*** bhavik1 has quit IRC07:07
*** tesseract has joined #openstack-infra07:10
*** shardy has joined #openstack-infra07:10
*** dsariel has joined #openstack-infra07:11
*** dhajare has quit IRC07:16
*** ociuhandu has joined #openstack-infra07:16
*** ociuhandu has quit IRC07:17
*** zhurong has quit IRC07:18
*** dhajare has joined #openstack-infra07:21
*** salv-orlando has quit IRC07:23
*** AL34N1X has joined #openstack-infra07:24
*** salv-orlando has joined #openstack-infra07:24
*** AL34N1X has quit IRC07:25
*** kiennt2609 has joined #openstack-infra07:26
*** salv-orlando has quit IRC07:28
ianwxinliang: hmm, sorry i'm not familiar if we setup the hostname specifically; worth looking through zuul-jobs as we might do something in there.  clarkb may also know07:31
*** amoralej|off is now known as amoralej07:32
*** dmellado has joined #openstack-infra07:32
xinliangianw: never mind07:32
*** rcernin has quit IRC07:33
*** quique|ruck|afk is now known as quiquell|ruck07:34
*** zoli|gone is now known as zoli|wfh07:35
*** dsariel has quit IRC07:35
openstackgerritIan Wienand proposed openstack-infra/project-config master: Unpause Xenial builds  https://review.openstack.org/56250807:36
*** zoli|wfh is now known as zoliXXL07:36
*** salv-orlando has joined #openstack-infra07:37
*** jpena|off is now known as jpena07:39
*** electrofelix has joined #openstack-infra07:40
openstackgerritIan Wienand proposed openstack-infra/system-config master: Add RAX image cleanup script  https://review.openstack.org/56251007:42
ianw#status log 7000+ leaked images and ~200TB of leaked images and objects cleaned up from our 3 RAX regions.  See https://review.openstack.org/#/c/562510/ for more details07:43
openstackstatusianw: finished logging07:44
*** pcaruana has quit IRC07:45
*** pcaruana has joined #openstack-infra07:46
*** jpich has joined #openstack-infra07:48
*** yamahata has quit IRC07:49
*** Qiming has quit IRC07:53
xinliangclarkb: do you know how nodepool sets the hostname of a node?  i see that the hostname is the same as the node instance name.07:54
xinliangclarkb: now we found the arm64 debian node is not setting the hostname properly07:55
fricklerxinliang: is the issue with config-drive on debian fixed? iiuc glean sets the hostname based on data from config-drive07:56
fricklerxinliang: looks like it is still missing in ansible_devices here http://logs.openstack.org/59/557659/15/experimental/kolla-build-debian-source-arm64/7d377d9/zuul-info/host-info.primary.yaml07:58
*** salv-orlando has quit IRC08:00
*** salv-orlando has joined #openstack-infra08:01
xinliangfrickler: the issue with config-drive is not fixed. So you think the incorrect hostname is due to the config-drive issue?08:02
*** tosky has joined #openstack-infra08:02
fricklerxinliang: I'm pretty sure it is, yes.08:03
xinliangfrickler: hmm, thanks, then we need to fix the config-drive issue first.08:03
*** salv-orlando has quit IRC08:05
*** bhujay has quit IRC08:10
*** lucas-afk is now known as lucasagomes08:11
*** alexchadin has quit IRC08:13
*** dayou has quit IRC08:14
*** dayou has joined #openstack-infra08:15
*** owalsh_afk is now known as owalsh08:16
*** larainema has joined #openstack-infra08:19
*** david-lyle has quit IRC08:23
*** dklyle has quit IRC08:23
*** HeOS has joined #openstack-infra08:26
*** Qiming_ is now known as Qiming08:27
*** gfidente has joined #openstack-infra08:30
*** gfidente has joined #openstack-infra08:30
*** ssbarnea_ has joined #openstack-infra08:33
*** dingyichen has quit IRC08:36
*** vivsoni has quit IRC08:38
*** rcarrillocruz has quit IRC08:39
xinliangianw: trying to fix the debian config-drive issue by using the linaro kernel until upstream fixes it. the linaro kernel will keep updating08:40
*** derekh has joined #openstack-infra08:40
xinliangfiled a bug to track enabling CONFIG_SCSI_SYM53C8XX_2: https://bugs.linaro.org/show_bug.cgi?id=375508:41
openstackbugs.linaro.org bug 3755 in Enterprise "RPK: Enable CONFIG_SCSI_SYM53C8XX_2" [Enhancement,Unconfirmed] - Assigned to graeme.gregory08:41
*** dhajare has quit IRC08:42
*** efoley has joined #openstack-infra08:44
*** eernst has quit IRC08:51
*** eernst has joined #openstack-infra08:51
*** newstem has quit IRC08:54
*** dhajare has joined #openstack-infra08:54
*** vivsoni has joined #openstack-infra08:55
*** salv-orlando has joined #openstack-infra08:58
*** bhujay has joined #openstack-infra09:00
*** newstem has joined #openstack-infra09:07
*** dizquierdo has joined #openstack-infra09:15
*** eernst has quit IRC09:20
*** eernst has joined #openstack-infra09:20
dmelladoo/ frickler09:27
dmelladowe've been observing some issues on our gates that we're totally unable to reproduce outside of the upstream infra09:27
dmelladocould you please freeze this machine (http://zuul.openstack.org/stream.html?uuid=4a08a594fe1444cab85e71bfee500e79&logfile=console.log)09:27
dmelladoand grant us access for a while?09:28
dmelladoThis is the issue http://logs.openstack.org/64/561364/2/check/kuryr-kubernetes-tempest-octavia/f028a60/controller/logs/screen-o-api.txt.gz#_Apr_19_08_25_22_23083909:28
dmelladoand we suspect it has something to do with the way the infra is configured09:28
dmelladoyolanda: ^^09:28
dmelladoclarkb: pabelanger dmsimard|off ^^09:29
dmelladoAJaeger: ^^09:32
*** panda|rover|off is now known as panda|rover09:32
*** efoley has quit IRC09:35
*** zhangfei has quit IRC09:35
*** caphrim007 has joined #openstack-infra09:36
*** caphrim007_ has quit IRC09:38
*** agopi has quit IRC09:40
dmelladoubuntu-xenial-rax-dfw-0003608416 <= this is the host I'd like to freeze for a while09:40
*** hashar is now known as hasharAway09:42
*** salv-orl_ has joined #openstack-infra09:43
fricklerdmellado: which job is that node running and for which project+branch? I need that for the hold09:47
*** salv-orlando has quit IRC09:47
dmelladofrickler: kuryr-kubernetes-tempest-octavia09:48
dmelladokuryr-tempest-plugin master09:48
dmelladohttps://review.openstack.org/#/c/561364/09:48
fricklerdmellado: o.k., it should be held when/if the job fails, but I can also give you access now already, if you let me know your ssh public key09:50
dmelladofrickler: sure,09:50
dmelladogithub.com/danielmellado.keys09:51
dmelladoin any case I'll wait for tempest to go and fail09:51
dmelladothanks!09:51
dmelladoI'll try to finish asap09:51
dmelladodulek: ^^ I'll ping you when this is done09:51
dulekdmellado: Sure, thanks.09:52
fricklerdmellado: node ip is 104.239.135.58, login as root should work for you now09:52
fricklerdmellado: please ping infra-root once you don't need that node anymore09:53
ianwxinliang: good idea, can you propose something to install that in dib / project-config?09:54
*** armaan has quit IRC09:55
*** armaan has joined #openstack-infra09:55
*** eernst has quit IRC09:55
*** eernst has joined #openstack-infra09:55
*** namnh has quit IRC10:05
*** kjackal has quit IRC10:08
*** kjackal has joined #openstack-infra10:08
*** sambetts|afk is now known as sambetts10:09
dmelladofrickler: will do, thanks again!10:12
openstackgerritOlivier Bourdon proposed openstack/diskimage-builder master: Fix /etc/network/interfaces file contents  https://review.openstack.org/56253510:13
*** efoley has joined #openstack-infra10:20
*** kiennt2609 has quit IRC10:24
*** bhujay has quit IRC10:33
*** xinliang has quit IRC10:39
*** boden has joined #openstack-infra10:42
*** zoliXXL is now known as zoli|lunch10:47
*** xinliang has joined #openstack-infra10:51
*** nicolasbock has joined #openstack-infra10:57
*** jpena is now known as jpena|lunch10:59
*** armaan has quit IRC10:59
*** armaan has joined #openstack-infra10:59
*** efoley has quit IRC11:08
*** vivsoni has quit IRC11:14
*** jamesmcarthur has joined #openstack-infra11:15
*** vivsoni has joined #openstack-infra11:19
*** jamesmcarthur has quit IRC11:19
*** zhurong has joined #openstack-infra11:22
*** dhajare has quit IRC11:31
*** lucasagomes is now known as lucas-hungry11:35
*** armaan has quit IRC11:37
*** vivsoni has quit IRC11:42
*** mugsie has quit IRC11:42
*** mugsie has joined #openstack-infra11:42
*** mugsie has quit IRC11:42
*** mugsie has joined #openstack-infra11:42
*** ldnunes has joined #openstack-infra11:52
*** psachin has quit IRC11:52
*** zoli|lunch is now known as zoli11:56
*** zoli is now known as zoli|wfh11:56
*** zoli|wfh is now known as zoli11:56
*** isviridov_away has quit IRC11:57
*** vaidy has quit IRC11:57
*** greghaynes has quit IRC11:58
*** rh-jelabarre has joined #openstack-infra11:58
*** dizquierdo has quit IRC11:58
*** armaan has joined #openstack-infra12:00
*** mriedem has joined #openstack-infra12:00
*** psachin has joined #openstack-infra12:01
*** greghaynes has joined #openstack-infra12:01
*** jpena|lunch is now known as jpena12:02
*** amoralej is now known as amoralej|lunch12:06
*** panda|rover is now known as panda|rover|lnc12:10
*** armaan has quit IRC12:11
*** quiquell|ruck is now known as quique|ruck|food12:11
*** jcoufal has joined #openstack-infra12:11
*** vaidy has joined #openstack-infra12:12
*** isviridov_away has joined #openstack-infra12:13
*** jamesmcarthur has joined #openstack-infra12:16
*** armaan has joined #openstack-infra12:17
*** yamamoto_ has quit IRC12:21
*** armaan has quit IRC12:22
*** jamesmcarthur has quit IRC12:26
*** rfolco|off is now known as rfolco12:26
*** jamesmcarthur has joined #openstack-infra12:26
*** yamamoto has joined #openstack-infra12:27
*** agopi has joined #openstack-infra12:28
*** thiagolib_ has joined #openstack-infra12:30
*** rlandy has joined #openstack-infra12:30
*** krenczewski has quit IRC12:30
*** quique|ruck|food is now known as quiquell|ruck12:31
*** trown|outtypewww is now known as trown12:32
*** cshastri has quit IRC12:32
*** jamesmcarthur has quit IRC12:32
*** lucas-hungry is now known as lucasagomes12:33
*** tpsilva has joined #openstack-infra12:37
*** krenczewski has joined #openstack-infra12:37
*** edmondsw has joined #openstack-infra12:38
*** agopi has quit IRC12:38
*** agopi has joined #openstack-infra12:39
openstackgerritAndrey Kurilin proposed openstack-infra/project-config master: [rally] Remove old rally-cinder jobs  https://review.openstack.org/56256812:39
openstackgerritAndrey Kurilin proposed openstack-infra/openstack-zuul-jobs master: [rally] Remove old rally-cinder jobs  https://review.openstack.org/56256912:39
*** agopi_ has joined #openstack-infra12:40
*** e0ne has quit IRC12:40
openstackgerritAndrey Kurilin proposed openstack-infra/project-config master: [rally] Remove old rally-cinder jobs  https://review.openstack.org/56256812:43
*** zhurong has quit IRC12:43
dhellmannclarkb & infra folks: as a heads-up, today is the first milestone deadline, so I expect to be tagging a bunch of repositories12:43
*** ssbarnea_ has quit IRC12:47
*** neiloy has joined #openstack-infra12:50
*** neiloy has quit IRC12:50
*** neiloy has joined #openstack-infra12:51
*** eernst has quit IRC12:51
*** eernst has joined #openstack-infra12:51
*** agopi__ has joined #openstack-infra12:53
*** newstem has quit IRC12:55
*** agopi has quit IRC12:55
*** agopi_ has quit IRC12:55
*** agopi__ has quit IRC12:55
*** agopi has joined #openstack-infra12:56
*** agopi has quit IRC12:56
*** e0ne has joined #openstack-infra12:59
*** efoley has joined #openstack-infra13:01
*** ssbarnea_ has joined #openstack-infra13:01
*** kgiusti has joined #openstack-infra13:01
*** agopi has joined #openstack-infra13:02
*** armaan has joined #openstack-infra13:02
*** ihar has quit IRC13:02
*** ssbarnea_ has quit IRC13:03
*** esberglu has quit IRC13:05
*** dbecker has joined #openstack-infra13:05
*** neiloy has quit IRC13:06
*** neiloy has joined #openstack-infra13:07
*** panda|rover|lnc is now known as panda|rover13:07
*** Goneri has joined #openstack-infra13:07
*** ssbarnea_ has joined #openstack-infra13:09
*** jistr is now known as jistr|mtg13:14
*** markvoelker_ has quit IRC13:16
*** e0ne_ has joined #openstack-infra13:17
*** markvoelker has joined #openstack-infra13:18
*** e0ne has quit IRC13:20
*** jcoufal has quit IRC13:21
*** jcoufal has joined #openstack-infra13:23
weshayquestion: I would like to have the same job voting on some repos and not voting on other repos.   Should the zuul def of the job be job.abstract to facilitate that?  /me working on https://review.openstack.org/#/c/562353/13:23
*** cshastri has joined #openstack-infra13:24
*** esberglu has joined #openstack-infra13:24
*** zhangfei has joined #openstack-infra13:25
*** VW has joined #openstack-infra13:26
*** salv-orl_ has quit IRC13:27
*** eharney has joined #openstack-infra13:30
*** wolverineav has joined #openstack-infra13:33
*** psachin has quit IRC13:33
*** TobbeCN has quit IRC13:35
corvusweshay: no, just set the voting attribute on the job in the project-pipeline13:36
*** TobbeCN has joined #openstack-infra13:36
*** wolverineav has quit IRC13:37
corvusweshay: left comment for clarity.  though i didn't address the actual zuul error (which looks like the job is already defined in another repo)13:38
weshaycorvus, ya.. I have it defined twice just based on the change in voting13:38
weshayso .. I think you are saying I can use a template, but override the voting attribute13:39
weshayI was told I could not do that13:39
weshaythe other is defined here https://review.openstack.org/#/c/562347/13:40
*** TobbeCN has quit IRC13:40
corvusweshay: not a template.  the change you linked doesn't have a project-template involved.13:40
dhellmannwe had a pre-release job failure caused by a failure to install libvirt during the sdist build. I feel like someone said something in the last week or so about not being able to install libvirt on xenial any more? I don't know if this is the same error, though. Has anyone encountered: http://logs.openstack.org/c9/c9263cde360d37654c4298c496cd9af251f23ce7/pre-release/release-openstack-python/541ad7d/job-output.txt.gz#_20113:41
dhellmann8-04-19_13_21_50_21097813:41
dhellmannthat link again, since it wrapped: http://logs.openstack.org/c9/c9263cde360d37654c4298c496cd9af251f23ce7/pre-release/release-openstack-python/541ad7d/job-output.txt.gz#_2018-04-19_13_21_50_21097813:41
weshaycorvus, right.. atm it's just two jobs defined w/ the same name.. I had started w/ a template13:41
weshayearlier13:41
*** cshastri has quit IRC13:41
weshaycorvus, ok.. I'll try to just override the voting variable13:41
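Concretely, corvus's suggestion is one shared job definition plus a per-repo override in each consuming repo's project-pipeline, along these lines (the job name is hypothetical):

    - project:
        check:
          jobs:
            - tripleo-ci-example-job:
                voting: false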
*** caphrim007 has quit IRC13:44
*** amoralej|lunch is now known as amoralej13:45
weshaycorvus, thanks.. this makes a lot more sense now13:46
*** hongbin_ has joined #openstack-infra13:46
*** ihar has joined #openstack-infra13:47
openstackgerritPaul Belanger proposed openstack-infra/project-config master: Revert "Pause ubuntu-xenial DIBs"  https://review.openstack.org/56259413:49
openstackgerritMerged openstack-infra/project-config master: Unpause Xenial builds  https://review.openstack.org/56250813:55
*** annp_ has joined #openstack-infra13:58
*** krenczewski has quit IRC13:58
pabelangerclarkb: ^I've unpinned xenial for jobs, FYI14:01
*** rajinir has joined #openstack-infra14:02
*** newstem has joined #openstack-infra14:03
*** newstem has quit IRC14:04
dmsimard|offpabelanger: does that debian-jessie error ring a bell? http://logs.openstack.org/45/547245/4/check/ara-integration-debian-py27-2.4.3.0/9cb4bfe/ara-report/result/5bec8917-d339-4e71-86aa-5af2ad296b1f/14:10
*** yamahata has joined #openstack-infra14:10
*** jistr|mtg is now known as jistr14:10
*** nicolasbock has quit IRC14:10
*** Adri2000 has quit IRC14:13
pabelangerdmsimard|off: yah, just fixing up reprepro for debian now14:13
pabelangerdmsimard|off: hopefully another hour and it should be working14:13
dmsimard|offCool, thanks14:14
pabelangerdmsimard|off: but you can try a recheck and see if that was just an old image14:14
*** bdodd has joined #openstack-infra14:14
dmsimard|offI tried one this morning, I'll wait a little bit before retrying14:14
*** bobh has joined #openstack-infra14:14
*** kiennt26_ has joined #openstack-infra14:16
mnaserhey, i'm not sure what's going on but i guess there are leaked volumes or something14:18
mnaserand nodepool is just create->fail->create cycle14:19
openstackgerritMerged openstack-infra/system-config master: Remove debian-security from reprepro  https://review.openstack.org/56234814:19
*** rpioso|afk is now known as rpioso14:20
pabelangermnaser: yah, we've been leaking them since switching to boot-from-volume. Have you been able to see anything on openstack side to why that is?14:20
mnaserpabelanger: honestly i've been a bit busy to be tracking those down, we have made a lot of improvements to prevent it so it would be healthy to just delete them and "try again"14:21
pabelangersure, I can clean them up and see if we leak again14:21
fricklerdhellmann: just guessing, that may be caused by the recent-ish release of python-libvirt==4.2.0. we get 4.0.0 from queens UCA and 4.1.0 is blocked. may need to block 4.2.0, too.14:22
*** efoley has quit IRC14:23
*** quiquell|ruck is now known as quiquell|off14:25
pabelangermnaser: we also need to update nodepool to add quota support for volumes, I'll see if I can do that for today14:25
fricklerclarkb: running reproduce.sh seems utterly broken by now, maybe we should drop it if we don't intend to fix it.14:27
pabelangermnaser: volume 139fe4e8-b911-4484-a3d7-3d48ca3d01da looks to be stuck in creating, do you mind looking to see why?14:27
fricklerclarkb: first issue is zuul needs py3 now, then zuul-cloner blows up: http://paste.openstack.org/show/719564/14:27
mnaserreset and deleted pabelanger, anything else you're seeing?14:28
*** armaan has quit IRC14:28
pabelangermnaser: no, everything else was cleaned out. now we wait to see if we leak again14:29
*** krenczewski has joined #openstack-infra14:30
dhellmannfrickler : yeah, thanks. prometheanfire is looking into it in #openstack-requirements14:30
pabelangermnaser: actually14:31
prometheanfirefrickler: in #virt on oftc really14:31
prometheanfireupstream is looking at it, but we should probably still mask 4.2.014:31
pabelangermnaser: any idea what is happening with volume 6e425fa4-66cc-4b55-b8eb-4fb4db5ff96f / f428e504-2ffe-4027-b0a0-0963c6de43ba14:32
*** eharney_ has joined #openstack-infra14:32
*** jamesmcarthur has joined #openstack-infra14:33
pabelangerclarkb: fungi: any objections to tagging a new release of gear today?14:33
*** eharney has quit IRC14:35
corvusfrickler: it may be a matter of installing zuul v2 instead of v314:35
corvusfrickler: though... does it use zuul-cloner to fetch from mergers?  that's not going to work anymore14:36
openstackgerritJean-Philippe Evrard proposed openstack-infra/project-config master: Import os_masakari to openstack-ansible  https://review.openstack.org/56261514:37
*** jamesmcarthur has quit IRC14:37
*** efoley has joined #openstack-infra14:40
prometheanfirefrickler: https://www.redhat.com/archives/libvir-list/2018-April/msg01916.html14:41
openstackgerritJean-Philippe Evrard proposed openstack-infra/project-config master: Import os_masakari to openstack-ansible  https://review.openstack.org/56261514:42
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources master: added endpoint to retrieve track groups metadata  https://review.openstack.org/56261814:43
openstackgerritJean-Philippe Evrard proposed openstack-infra/project-config master: Add os_masakari repo base jobs  https://review.openstack.org/56261914:44
dhellmannare there any issues with the mailing list server? I've sent a couple of messages to the list but not received the usual copies, haven't received any other messages for over an hour, and the last message on http://lists.openstack.org/pipermail/openstack-dev/2018-April/date.html is about that old too14:44
dhellmannthat's for openstack-dev, if it makes any difference14:44
openstackgerritMerged openstack-infra/openstackid-resources master: added endpoint to retrieve track groups metadata  https://review.openstack.org/56261814:45
corvusdhellmann: i'll take a look14:46
dhellmanncorvus : thanks14:46
*** markvoelker has quit IRC14:47
pabelangerianw: clarkb: just noticed, we are running diskimage-builder on nodepool-builders as python2. I thought we changed that out to be python314:47
pabelangerianw: clarkb: http://paste.openstack.org/show/719566/14:48
corvusdhellmann: have you sent messages later than your 'reminder for rocky-1' msg?14:48
pabelangerguess we have an issue some place14:48
dhellmanncorvus : yes, there should be a couple about a freezer-dr release failure14:48
*** markvoelker has joined #openstack-infra14:48
dhellmann"a couple" == exactly 2 in this case14:49
*** salv-orlando has joined #openstack-infra14:51
corvuswe do seem to be accumulating entries in the incoming mailman queue14:51
corvussomething has /srv/mailman/openstack/locks/openstack-dev.lock  locked14:53
openstackgerritsebastian marcet proposed openstack-infra/openstackid-resources master: Updated filter/order param  https://review.openstack.org/56262314:53
*** Goneri has quit IRC14:54
openstackgerritMerged openstack-infra/openstackid-resources master: Updated filter/order param  https://review.openstack.org/56262314:56
*** Goneri has joined #openstack-infra14:57
*** kjackal has quit IRC14:57
corvusi believe an apache process holding the lock for openstack-dev has died.  i will remove the lock file manually.14:58
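The manual recovery is roughly the following; checking the recorded holder first is a hedge against clearing a live lock (mailman lock files normally record the holding host/pid):

    cat /srv/mailman/openstack/locks/openstack-dev.lock
    # if that process is confirmed dead:
    rm /srv/mailman/openstack/locks/openstack-dev.lock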
corvusdhellmann: email flood inbound :)14:59
* dhellmann braces his mua14:59
*** kiennt26_ has quit IRC15:01
*** TobbeCN has joined #openstack-infra15:04
*** zhangfei has quit IRC15:06
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul master: WIP: Upgrade to Ansible 2.5  https://review.openstack.org/56266815:06
*** TobbeCN has quit IRC15:08
*** ssbarnea_ has quit IRC15:09
*** boden has quit IRC15:09
*** VW_ has joined #openstack-infra15:09
dhellmanncorvus : I think I've received the emails I expected and the summary page seems to have updated15:09
dhellmanncorvus : thanks!15:09
openstackgerritJames E. Blair proposed openstack-infra/system-config master: Add docs on mailman lock files  https://review.openstack.org/56267015:10
corvusinfra-root: ^ fyi15:10
*** zerick has joined #openstack-infra15:10
corvusdhellmann: thanks for noticing :)15:10
corvusi think there is a timeout, so it should be self-correcting, but i think the timeout is very generous.15:11
corvuslike, hours15:11
dhellmanncorvus : human-based service monitoring ftw15:11
*** Nil_ has joined #openstack-infra15:12
*** VW has quit IRC15:12
*** yamamoto has quit IRC15:13
*** yamamoto has joined #openstack-infra15:14
*** hasharAway is now known as hashar15:15
*** cshastri has joined #openstack-infra15:19
*** yamamoto has quit IRC15:19
*** annp_ has quit IRC15:23
*** yamahata has quit IRC15:23
*** germs has joined #openstack-infra15:24
*** germs has quit IRC15:24
*** germs has joined #openstack-infra15:24
openstackgerritMerged openstack-infra/zuul-website-media master: Run zuul-website jobs  https://review.openstack.org/56175015:32
*** ramishra_ has joined #openstack-infra15:33
*** yamamoto has joined #openstack-infra15:34
*** ssbarnea_ has joined #openstack-infra15:35
*** ramishra has quit IRC15:37
*** PsionTheory has joined #openstack-infra15:38
*** camunoz has joined #openstack-infra15:39
pabelangerokay, new xenial images are online with pip10 by default15:40
*** pcichy has joined #openstack-infra15:41
*** armaan has joined #openstack-infra15:42
*** hemna_ has joined #openstack-infra15:43
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP: late bind pipelines  https://review.openstack.org/55361815:45
*** pcichy has quit IRC15:46
*** armaan has quit IRC15:47
*** jmorgan1 has quit IRC15:48
*** armaan has joined #openstack-infra15:48
clarkbpabelanger: puppet uses openstack-pip provider to install diskimage_builder so it's going to be python215:51
clarkbfrickler: for reproduce.sh zuul-cloner needs to be installed from the last zuul 2.x release, however since we don't publish zuul refs anymore I'm not sure that is super helpful either15:52
*** dklyle has joined #openstack-infra15:52
*** david-lyle has joined #openstack-infra15:52
*** caphrim007 has joined #openstack-infra15:53
*** dklyle_ has joined #openstack-infra15:53
*** caphrim007 has quit IRC15:53
*** caphrim007 has joined #openstack-infra15:54
*** caphrim007 has quit IRC15:54
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP: late bind pipelines  https://review.openstack.org/55361815:54
*** jamesmcarthur has joined #openstack-infra15:54
clarkbI'm having an incredibly slow start today. Putting the kids in "school" means I get to suffer the wrath of the little plague carriers even more now I Guess15:54
*** HeOS has quit IRC15:56
*** yamahata has joined #openstack-infra15:56
*** boden has joined #openstack-infra15:56
*** caphrim007 has joined #openstack-infra15:57
corvusclarkb: send them to the dentist too!15:57
corvusoh, plague, not plaque.  sorry.15:57
*** slaweq_ has quit IRC15:59
*** slaweq has joined #openstack-infra15:59
*** slaweq has quit IRC16:00
*** slaweq has joined #openstack-infra16:00
pabelangermust be milestone week, backlog of 1.1k node requests :)16:00
*** links has quit IRC16:01
*** armaan has quit IRC16:02
clarkbI don't think I've ever had a run of time where I've been sick more often than the last 6 months. Eventually it has to get better right?16:02
*** nicolasbock has joined #openstack-infra16:02
clarkbpabelanger: did you see my comment about installing disk image builder on the builder nodes under pip2? that is why it runs under python216:02
pabelangerclarkb: wife is constantly sick, I assume when kids are 18 and out of the house, it gets better16:02
clarkbpabelanger: as for gear release I don't have any objections to it. Gear has largely been a corvus effort so maybe check with corvus too16:03
*** pcaruana has quit IRC16:03
pabelangerclarkb: okay, will check pip provider, I thought we updated it to be pip316:03
fricklerprometheanfire: thanks for the pointer. so it indeed seems reasonable to block 4.2.0 and wait for a release with that patch in it16:03
corvusclarkb, pabelanger: wfm.  there's a gear behavior change that will end up in this release, and we may want to update zuul to account for it.16:04
prometheanfirefrickler: see https://review.openstack.org/56261316:04
clarkbpabelanger: puppet-diskimage_provider is what does it16:04
fricklerclarkb: o.k., so it looks indeed like it would make sense to drop reproduce.sh and invest more effort into building a replacement for v3 jobs16:04
clarkbfrickler: ya I think so16:05
*** felipemonteiro has joined #openstack-infra16:05
corvusclarkb, pabelanger: i say go for it.  when it's released, we can bump zuul's version req and remove line 1935 of executor/server.py16:05
corvusfrickler: there are stories about what's needed in storyboard16:05
*** slaweq has quit IRC16:05
pabelangercorvus: clarkb: 0.12.0? since we have new features16:06
pabelanger0.11.1 is current16:06
pabelangerc00ca944db0d6dc6ef90859b0b9b7f3a58196fb0 as 0.12.0 for gear16:08
pabelangerwill tag in a moment16:08
*** eernst has quit IRC16:08
*** electrofelix has quit IRC16:08
corvuspabelanger: wfm16:09
pabelanger0.12.0 pushed16:11
*** jamesmcarthur has quit IRC16:12
pabelangerdmsimard|off: did you want to try debian again? I've updated reprepro to remove debian-security packages16:12
*** jpich has quit IRC16:15
*** hashar is now known as hasharAway16:17
pabelangerdmsimard|off: you also have 2 held nodes in nodepool, looks like 16 days old. Can we delete them?16:17
*** HeOS has joined #openstack-infra16:20
dmsimard|offpabelanger: yes and yes (I still need to troubleshoot that failure but it's something I'll need to circle back to)16:23
*** HeOS has quit IRC16:27
pabelangerclarkb: I wonder if we call pip install -U nodepool, if that is bumping and installing DIB as python316:28
pabelangerdmsimard|off: k, was looking to see what could be freed up for backlog of nodepool16:29
pabelangermgagne: we're seeing a spike in nodes in deleting state: http://grafana.openstack.org/dashboard/db/nodepool-inap16:31
pabelangeras well as launch failures16:31
clarkbpabelanger: ya if it is listed as a dependency then likely16:32
clarkbso we may flip back and forth in that case16:33
pabelangeryah16:33
*** ramishra_ has quit IRC16:33
pabelangerI'll update puppet-diskimage_builder for pip316:33
clarkbpabelanger: in that case I'd probably remove the normal dib pip install from the puppet manifest16:33
clarkbpabelanger: and just keep the from source option if people opt into that16:33
*** jamesmcarthur has joined #openstack-infra16:33
clarkb(otherwise rely on nodepool install to pull it in)16:33
pabelangerclarkb: only issue with that, is if we need to do a new DIB release for some reason, we'll need to wait for nodepool release to pick it up16:34
*** jchhatbar has quit IRC16:35
*** florianf has quit IRC16:35
clarkbpabelanger: oh right16:36
clarkbI guess if they both use the same python then nodepool update will see the existing install and not touch it16:36
clarkb++ to your original idea then16:36
*** lucasagomes is now known as lucas-afk16:40
*** wolverineav has joined #openstack-infra16:43
*** HeOS has joined #openstack-infra16:53
*** ssbarnea_ has quit IRC16:53
*** jamesmcarthur has quit IRC16:53
*** bobh has quit IRC16:54
mgagnepabelanger: thanks, will be looking into it16:54
*** VW_ has quit IRC17:00
*** VW has joined #openstack-infra17:00
*** derekh has quit IRC17:01
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Late bind projects  https://review.openstack.org/55361817:04
*** trown is now known as trown|lunch17:05
*** VW has quit IRC17:05
*** kjackal has joined #openstack-infra17:05
*** wolverineav has quit IRC17:05
*** wolverineav has joined #openstack-infra17:06
*** pblaho has quit IRC17:06
corvuswe have a significant backlog today, already at 1k node requests17:06
*** cshastri has quit IRC17:06
*** gouthamr is now known as gouthamr|afk17:06
pabelangeryah17:07
*** kjackal has quit IRC17:07
*** jamesmcarthur has joined #openstack-infra17:07
pabelangerfew errors in ovh, but think that is just because we are at capacity17:07
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul master: WIP: Upgrade to Ansible 2.5  https://review.openstack.org/56266817:08
*** zoli is now known as zoli|gone17:08
*** zoli|gone is now known as zoli17:08
corvuspabelanger, Shrews: we don't have a 'time to delete' metric do we?17:09
corvuslike, from the moment that nodepool switches a node to delete state to the time the znode is removed17:09
*** jpena is now known as jpena|off17:10
corvusi'm looking at inap, and it has a lot of nodes in delete, but i can't tell if they are cycling through, or if they are stuck.  obviously, looking at the individual nodes would indicate that, but that metric might show the same thing in aggregate17:10
*** wolverineav has quit IRC17:10
Shrewscorvus: i want to say "no"17:10
pabelangercorvus: yah, mgagne is looking into inap, I think there is an issue going on ATM17:10
pabelangerbut would be also nice to have that metric17:11
mgagnepabelanger: I think it's the same mysterious issue as last times17:11
corvusit'd sort of be the mirror of 'time to ready' which tells us if high nodes in building are okay or not17:11
pabelangermgagne: I wonder if related to us uploading images again to glance, ubuntu-xenial would have just been uploaded about the time the delete starting to lag.17:12
pabelangermgagne: are we still running with force raw as false?17:12
mgagnepabelanger: afaik yes17:12
*** efoley has quit IRC17:12
*** e0ne has joined #openstack-infra17:12
mgagnepabelanger: but nodes are in BUILD state, so something timed out and nodepool started to delete. but delete never happens for unknown reasons.17:13
*** bobh has joined #openstack-infra17:13
pabelangermgagne: yah, I wonder if that is because the compute nodes are busy converting qcow2 to raw17:13
mgagneand now it's stuck because: "[instance: 51612309-ba1a-4e69-9c95-b156453e8e13] Instance is already in deleting state, ignoring this request" in nova-cells. and I need to update database.17:14
*** rpioso is now known as rpioso|eat17:14
mgagnesomehow the delete task doesn't get through or is cancelled and nova still think the delete process in going on while it's not.17:14
pabelangerk17:14
*** e0ne_ has quit IRC17:15
*** wolverineav has joined #openstack-infra17:15
mgagneit's difficult to debug and pinpoint the root cause. even if I update the database, I think something cached the state and reupdated the database with the previous value. it's a mess.17:16
mgagneoh, now one is gone17:16
pabelangerdmsimard|off: how is the version of ansible detected in http://logs.openstack.org/18/562718/1/check/windmill-ubuntu-xenial/f90f64f/ara-report/ (top right)? Is that supposed to be the running version of ansible?17:16
mgagnemagic17:16
pabelangermgagne: I wonder if nodepool called delete again on that instance17:17
mgagneok, instances are getting deleted now17:17
mgagneI think so17:17
mgagnebecause I'm getting the above error multiple times17:17
pabelangerso, we try to delete, but compute doesn't respond17:17
pabelangerthen eventually it does17:17
pabelangerwould be interesting to see what CPU usage looks like on those compute nodes17:18
mgagnewell, nova-cells doesn't cooperate because it thinks the delete process is still happening17:18
pabelangeryah17:18
mgagneso you can't reinit the process without updating the database17:18
mgagnewhich is... frustrating =)17:18
pabelangeryah, we seem to be launching nodes now17:19
mgagnesome are getting deleted, I'm not sure how long it will take to complete17:19
pabelangermy gut is telling me, this might be a result of uploading the latest ubuntu-xenial dib17:19
pabelangerthundering herd17:19
mgagneand I seem to be fighting a cache somewhere because I feel like I need to update the database multiple times17:19
*** yamamoto has quit IRC17:20
mgagnewell, it's not the first time you upload a new image. why today17:20
pabelangerwe just unpaused ubuntu-xenial, it was the first time in 7 days we rotated that image17:20
pabelangerand it is our most popular image, so I imagine we are trying to use it on all the nodes at once17:21
mgagneok but... no issue last week or for last couple weeks17:21
pabelangeryah, possible we haven't used this much capacity in a while17:21
pabelangerwould need to check logs17:21
*** armaan has joined #openstack-infra17:24
*** udesale_ has quit IRC17:24
Shrewscorvus: pabelanger: mordred: oh, 2.5 brings us this lovely thing: "Added a configuration file that a site administrator can use to specify modules to exclude from being used."17:25
mgagnewhat I feel is missing is the list of running tasks on nova-compute so I can see if it's really happening or not.17:25
Shrewsoop, not #zuul17:25
*** efried has quit IRC17:26
openstackgerritMichael Johnson proposed openstack/diskimage-builder master: Add pip cache cleanup to pip-and-virtualenv  https://review.openstack.org/56205517:27
mgagnepabelanger: no more instances stuck in deleting state, it's a miracle.17:29
pabelangermgagne: yay17:29
*** pcichy has joined #openstack-infra17:29
* mgagne goes back to my ops cave17:30
*** SumitNaiksatam has joined #openstack-infra17:30
pabelangermgagne: do you have any metric for load on those compute nodes? maybe to see if the nova process is CPU pinned?17:30
openstackgerritLance Bragstad proposed openstack-infra/project-config master: Add oslo.limit to zuul project list  https://review.openstack.org/55059117:30
mgagneI can look into it, I'm not the expert in graphs and metrics :P17:30
*** VW has joined #openstack-infra17:31
*** david-lyle has quit IRC17:32
*** dklyle_ has quit IRC17:33
*** dklyle has quit IRC17:33
*** shardy has quit IRC17:35
*** efried has joined #openstack-infra17:35
mgagnepabelanger: ok, the graphs are really a piece of art. but on average, CPU looks to stay between 20-40% with spikes between 40-60%. we do have holes in the graphs but at this point, I don't know if it's caused by a real issue or some "known issues" with our monitoring tools.17:35
*** patriciadomin has quit IRC17:35
*** VW has quit IRC17:35
*** wolverineav has quit IRC17:39
*** jamesmcarthur has quit IRC17:39
*** wolverineav has joined #openstack-infra17:39
*** jamesmcarthur has joined #openstack-infra17:39
dmsimard|offpabelanger: that's the version of Ansible running on the webserver portion. The version of Ansible for the specific playbook is inside the parameters pane17:40
*** wolverineav has quit IRC17:44
*** efried has quit IRC17:44
pabelangerdmsimard|off: ah, was confused. But now see the version I expect17:45
pabelangermgagne: holes when we had the delete issue?17:45
mgagneyea, very recently and this night. but we have holes all the time so...17:46
pabelangermgagne: how are the metrics collected? We've seen gaps in cacti.o.o when there was an issue on the remote node side17:47
*** panda|rover is now known as panda|rover|off17:47
mgagnethrough snmp so udp I guess17:47
pabelangeryah, same for us17:47
mgagneour monitoring system has some issues (I don't want to use the word garbage)17:48
melwittcan anyone give a hint about what this graph means when it shows a drop in "accepting"? does that mean jobs aren't being accepted by nodes? http://grafana.openstack.org/dashboard/db/zuul-status?panelId=19&fullscreen17:48
*** thiagolib_ has quit IRC17:49
*** eharney_ is now known as eharney17:50
pabelangermelwitt: a drop usually means the executor has a queue of jobs it is trying to process; you can see that in the executor queue panel and the starting builds panel17:50
*** ssbarnea_ has joined #openstack-infra17:51
melwittthanks pabelanger17:51
pabelangerit could be because an executor has hit the limit of jobs it runs (memory / load usage)17:51
pabelangerbut things are just busy today17:52
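[editor's note: the "accepting" governor behaviour pabelanger describes is tunable on the executor side. A minimal zuul.conf sketch, assuming the option names from the Zuul executor documentation apply to the version in use here:]

    [executor]
    # stop accepting new jobs while system load exceeds
    # load_multiplier * number-of-CPUs
    load_multiplier=2.5
    # stop accepting new jobs while available RAM (as a
    # percentage of total) is below this value
    min_avail_mem=5.0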
*** sambetts is now known as sambetts|afk17:52
openstackgerritMerged openstack-infra/zuul master: Test base job secrets  https://review.openstack.org/56103017:53
*** efried has joined #openstack-infra17:53
*** e0ne has quit IRC17:54
*** felipemonteiro_ has joined #openstack-infra17:55
*** armaan has quit IRC17:56
*** gouthamr|afk is now known as gouthamr17:57
*** jamesmcarthur has quit IRC17:58
*** jamesmcarthur has joined #openstack-infra17:58
*** VW has joined #openstack-infra17:58
*** felipemonteiro has quit IRC17:59
*** slaweq has joined #openstack-infra18:01
*** amoralej is now known as amoralej|off18:03
*** VW has quit IRC18:03
*** cshastri has joined #openstack-infra18:05
*** slaweq has quit IRC18:06
*** felipemonteiro__ has joined #openstack-infra18:08
*** felipemonteiro_ has quit IRC18:08
*** SumitNaiksatam has quit IRC18:09
pabelangerclarkb: seems ubuntu-xenial DIBs are working well, I haven't seen any fallout from pip10 on the images yet18:10
pabelangersame for centos-718:10
clarkbnice, that means the tox siblings fix must be working18:11
pabelangeryah18:12
openstackgerritMerged openstack-infra/zuul master: Make gearman calls async in ZuulWeb  https://review.openstack.org/56002618:13
clarkbpabelanger: probably worth status logging that?18:14
pabelangerclarkb: sure18:17
*** pabelanger has quit IRC18:17
*** pabelanger has joined #openstack-infra18:17
*** cshastri has quit IRC18:19
pabelanger#status log all DIB images (minus gentoo) have been unpaused for nodepool-builder. Latest release of diskimage-builder fixed our issues related to pip10 and glean failing to boot.18:19
openstackstatuspabelanger: finished logging18:19
*** yamamoto has joined #openstack-infra18:20
*** trown|lunch is now known as trown18:21
*** jamesmcarthur has quit IRC18:23
*** rwsu has quit IRC18:27
*** VW has joined #openstack-infra18:28
*** rpioso|eat is now known as rpioso18:28
*** VW has quit IRC18:29
*** VW has joined #openstack-infra18:29
*** pcichy has quit IRC18:30
*** yamamoto has quit IRC18:30
*** pcichy has joined #openstack-infra18:30
*** ssbarnea_ has quit IRC18:30
*** jamesmcarthur has joined #openstack-infra18:31
imacdonnany known issues with the stable/ocata branch of devstack? Seems to be failing at:18:34
imacdonn+ lib/keystone:configure_keystone:205      :   cp -p /opt/stack/keystone/etc/policy.json /etc/keystone18:34
imacdonncp: cannot stat '/opt/stack/keystone/etc/policy.json': No such file or directory18:34
clarkbimacdonn: looks like master doesn't have that file. If I had to guess you are using master keystone18:36
imacdonnhmm, I switched to the stable/ocata branch before running ./stack.sh18:36
*** TobbeCN has joined #openstack-infra18:36
imacdonnlooking at https://bugs.launchpad.net/devstack/+bug/1633986 - should have googled before asking ;)18:37
openstackLaunchpad bug 1633986 in devstack "Got "no policy json found "when run newest devstack" [Undecided,Invalid] - Assigned to Kevin Zhao (kevin-zhao)18:37
clarkbif the repo already exists it may not check out the older branch, to avoid losing data?18:37
rm_workdid gerrit just die?18:38
rm_workor just me18:38
imacdonnthe git repo was a fresh clone18:38
rm_workah just me18:38
*** wolverineav has joined #openstack-infra18:38
clarkbimacdonn: did you clone it or did devstack?18:38
clarkbimacdonn: if you cloned it then devstack may be leaving it alone? unsure, just a hunch18:38
imacdonnI cloned it, then switched to the stable/ocata branch, then ran stack.sh18:39
imacdonn'git log' in /opt/stack/keystone looks right for ocata18:39
clarkbdoes that file exist?18:40
imacdonn(actually I need to look a bit closer to verify that it's really ocata) ... but no, the policy.json file does not exist18:40
*** TobbeCN has quit IRC18:41
clarkbgit describe should tell you what you are checked out on18:41
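[editor's note: a minimal shell sketch of the verification clarkb suggests; the clone URL and path match the ones discussed above:]

    git clone https://git.openstack.org/openstack/keystone /opt/stack/keystone
    cd /opt/stack/keystone
    git checkout stable/ocata
    # ocata keystone should describe as an 11.x-based tag; a 13.x tag
    # (as seen below) indicates master is still checked out
    git describe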
openstackgerritColleen Murphy proposed openstack-infra/system-config master: Fix puppet config for puppet 4  https://review.openstack.org/56174618:44
imacdonnyeah, that's not right .. it's keystone 13.0.0.0rc1-129-g780f97118:45
imacdonnI still don't really understand how RECLONE works18:46
imacdonnI turned that off, because it was failing with:18:46
imacdonn+ functions-common:git_update_remote_branch:631 :   git checkout -b stable/ocata -t origin/stable/ocata18:46
imacdonnerror: Your local changes to the following files would be overwritten by checkout:18:46
imacdonnupper-constraints.txt18:46
imacdonnPlease, commit your changes or stash them before you can switch branches.18:46
imacdonnAborting18:46
*** camunoz has quit IRC18:47
*** e0ne has joined #openstack-infra18:47
clarkbya, it's avoiding losing your data there18:48
imacdonnTrying again after 'rm -rf /opt/stack/*'18:48
imacdonnarg .. now it's failing because some packages are too new :/18:51
imacdonnperhaps this is futile ... I'm trying to get a Cinder 3rd Party CI to use the matching devstack branch when testing backports18:52
clarkbthe newer packages may be due to the earlier run?18:53
imacdonnyeah18:53
clarkbthe way devstack-gate and zuulv3 native jobs handle this is to effectively clone and check out all the repos before devstack runs, then tell devstack to use the repos as-is18:53
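[editor's note: a minimal localrc sketch of the approach clarkb describes, pre-staging the repos and telling devstack to leave them alone. RECLONE, ERROR_ON_CLONE and KEYSTONE_BRANCH are stock devstack settings; the branch value is just an example:]

    # clone and check out all repos yourself first, then:
    RECLONE=no
    ERROR_ON_CLONE=True    # fail loudly rather than (re-)clone anything
    KEYSTONE_BRANCH=stable/ocata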
imacdonnproblem here is that the too-new stuff is coming from dist-packages18:55
*** tesseract has quit IRC18:55
*** hasharAway is now known as hashar18:57
*** thiagolib_ has joined #openstack-infra18:57
openstackgerritPaul Belanger proposed openstack-infra/zuul master: Increase unit testing of host / group vars  https://review.openstack.org/55940519:00
openstackgerritPaul Belanger proposed openstack-infra/zuul master: Inventory groups should be under children key  https://review.openstack.org/55940619:00
*** rockyg has joined #openstack-infra19:01
*** camunoz has joined #openstack-infra19:02
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul master: WIP: Update to Ansible 2.5  https://review.openstack.org/56266819:06
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul master: WIP: Update to Ansible 2.5  https://review.openstack.org/56266819:09
*** felipemonteiro_ has joined #openstack-infra19:09
*** felipemonteiro__ has quit IRC19:13
*** jamesmcarthur has quit IRC19:16
*** jamesmcarthur has joined #openstack-infra19:17
*** wolverineav has quit IRC19:18
*** wolverineav has joined #openstack-infra19:18
*** rockyg has quit IRC19:21
*** eernst has joined #openstack-infra19:23
*** wolverineav has quit IRC19:25
*** wolverineav has joined #openstack-infra19:25
*** slaweq has joined #openstack-infra19:30
*** slaweq has quit IRC19:30
*** imacdonn has quit IRC19:36
*** imacdonn has joined #openstack-infra19:36
*** tpsilva has quit IRC19:36
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Late bind projects  https://review.openstack.org/55361819:38
*** eernst has quit IRC19:39
*** eharney has quit IRC19:39
pabelangerclarkb: just reading up on ubuntu-bionic, looks like next week (April 26th) is still release day. Bionic DIBs seem to have been working well since we started building beta images.19:45
*** niska has quit IRC19:52
openstackgerritColleen Murphy proposed openstack-infra/system-config master: Fix puppet config for puppet 4  https://review.openstack.org/56174619:52
*** slaweq has joined #openstack-infra19:54
zigopabelanger: Hi there! Did you have time to investigate the issue with the stretch-security mirror? The file at http://mirror.dfw.rax.openstack.org/debian/dists/stretch-security/main/binary-amd64/Packages still holds the jessie-security content.19:57
*** niska has joined #openstack-infra19:57
zigo(I'm not pushing, I just want to know if the status changed, or if I may help or something...)19:57
pabelangerzigo: Oh, hmm. They should be deleted19:58
pabelangerlet me check19:58
zigopabelanger: Well, the Packages file still contains Jessie stuff.19:58
zigopabelanger: Search for Package: libicu-dev in that file.19:58
pabelangerzigo: yah, we purged it from reprepro, I assumed those directories would also be deleted19:58
pabelangermight have to manually delete it19:59
pabelangerlet me check something19:59
zigoIf you see "deb8u7", then that's the wrong one. It must contain deb9u2.19:59
zigoAnd it's still the old jessie-security update for libicu-dev, not the stretch one ATM... :(20:00
pabelangerzigo: yah, the plan is not to mirror debian-security into the mirror.debian repo; now we go directly upstream. If we do want to mirror it, we'll create mirror.debian-security to hold it20:00
zigopabelanger: I very much agree with the plan ! :)20:00
zigopabelanger: security.debian.org isn't like other repositories; it is maintained by the Debian System Administrators (DSA) team, not by any random person that wants to mirror Debian.20:01
zigoSo it's a much more reliable network of mirrors.20:01
pabelangerzigo: yah, i think our mistake was trying to merge the 2 pools into a single pool in AFS20:02
pabelangerthat created checksum issues for packages20:02
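[editor's note: "go directly upstream" here means pointing apt at security.debian.org instead of the AFS mirror, i.e. the standard stretch-era sources.list line:]

    deb http://security.debian.org/ stretch/updates main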
zigook20:02
zigopabelanger: So, shall I do a recheck of the puppet-openstack patch that adds Debian checks? Or shall I wait?20:03
*** harlowja has joined #openstack-infra20:03
pabelangerzigo: yah, hold on a moment, I deleted the folders and am running reprepro again to confirm it is correct20:04
zigoThanks.20:04
pabelangerthen AFS will release the volumes20:04
pabelangerthen we can recheck, and if that still fails, I'll kick off new image builds, as we get sources from AFS20:04
*** wolverineav has quit IRC20:05
*** Adri2000 has joined #openstack-infra20:06
*** e0ne has quit IRC20:07
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP: Make config objects freezable  https://review.openstack.org/56281620:11
*** caphrim007_ has joined #openstack-infra20:12
*** rockyg has joined #openstack-infra20:12
*** tosky has quit IRC20:15
pabelangerzigo: okay, try now20:15
*** tosky has joined #openstack-infra20:16
zigopabelanger: Cheers, trying !20:16
pabelangerhttp://mirror.dfw.rax.openstack.org/debian/dists/ now updated to remove -security20:16
pabelangerand jobs are already configured to use upstream20:16
*** caphrim007 has quit IRC20:16
zigopabelanger: Though isn't this something to change in the Debian image?20:17
openstackgerritColleen Murphy proposed openstack-infra/system-config master: Fix puppet config for puppet 4  https://review.openstack.org/56174620:18
openstackgerritColleen Murphy proposed openstack-infra/ansible-role-puppet master: Don't hardcode puppet-3-specific config paths  https://review.openstack.org/56206820:18
pabelangerzigo: possible, if we are still seeing a failure then we'll need to kick off a new build now that AFS mirrors don't have -security repos.20:18
pabelangerlet me check build logs20:18
*** e0ne has joined #openstack-infra20:19
*** camunoz has quit IRC20:21
*** jamesmcarthur has quit IRC20:24
*** kgiusti has left #openstack-infra20:31
*** gyee has joined #openstack-infra20:32
*** rockyg has quit IRC20:33
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP: Make config objects freezable  https://review.openstack.org/56281620:35
*** gfidente has quit IRC20:36
openstackgerritDoug Hellmann proposed openstack-infra/project-config master: update the release tools to mark newton as closed  https://review.openstack.org/56282620:36
openstackgerritDoug Hellmann proposed openstack-infra/project-config master: update the release tools to mark newton as closed  https://review.openstack.org/56282620:38
*** jamesmcarthur has joined #openstack-infra20:39
*** jamesmcarthur has quit IRC20:43
*** jmorgan1 has joined #openstack-infra20:45
*** trown is now known as trown|outtypewww20:49
*** eharney has joined #openstack-infra20:50
*** slaweq has quit IRC20:54
*** felipemonteiro_ has quit IRC20:54
*** slaweq has joined #openstack-infra20:54
*** slaweq has quit IRC20:59
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP: Make config objects freezable  https://review.openstack.org/56281621:03
*** rfolco is now known as rfolco|off21:06
*** pcichy has quit IRC21:08
*** dbecker has quit IRC21:09
*** e0ne has quit IRC21:10
*** dbecker has joined #openstack-infra21:13
*** armaan has joined #openstack-infra21:14
*** rwsu has joined #openstack-infra21:15
*** ldnunes has quit IRC21:15
*** Goneri has quit IRC21:17
openstackgerritColleen Murphy proposed openstack-infra/puppet-mysql_backup master: Use instance variables in puppet template  https://review.openstack.org/56283221:17
*** dklyle has joined #openstack-infra21:17
ianwpabelanger: no calamities with dib 2.14.1?21:18
*** jtomasek has quit IRC21:28
*** jtomasek has joined #openstack-infra21:29
*** wolverineav has joined #openstack-infra21:29
*** esberglu has quit IRC21:30
*** vtapia has quit IRC21:32
clarkbianw: I don't think we've noticed any21:36
clarkbianw: we are using new images according to pabelanger and jobs haven't gone off the deep end21:36
*** thiagolib_ has quit IRC21:37
pabelangerianw: clarkb: no, I think things are working fine21:37
pabelangerianw: clarkb: the only outstanding issue is stale packages in debian; that should be fixed, waiting to hear from zigo to confirm21:38
zigopabelanger: It looks like it's working, see this: http://logs.openstack.org/85/561085/3/check/puppet-openstack-integration-4-scenario001-tempest-debian-stable/448abcd/job-output.txt.gz21:40
zigopabelanger: Yes, the job is failing, but that's expected ... :P21:40
zigopabelanger: At least, now, the job is *RUNNING* ! :)21:41
pabelangergreat21:41
*** agopi has quit IRC21:41
zigopabelanger: What's not working though is this: http://logs.openstack.org/85/561085/3/check/puppet-openstack-integration-4-scenario001-tempest-debian-stable/448abcd/job-output.txt.gz#_2018-04-19_21_34_29_52284321:42
zigopabelanger: I didn't know it was using the AFS mirror for puppetlabs stuff...21:42
zigopabelanger: Could you add stretch in the mix for that one?21:42
*** jcoufal has quit IRC21:43
zigoFYI, it does work here on my virtualbox using upstream puppetlabs repo.21:43
pabelangerzigo: we'll need to update: http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/mirror_update.pp#n33121:43
pabelangerthe releases list will need to include stretch21:43
openstackgerritColleen Murphy proposed openstack-infra/system-config master: Fix puppet config for puppet 4  https://review.openstack.org/56174621:44
zigopabelanger: Yeah, exactly !21:44
zigopabelanger: Should I just do a patch for this?21:45
pabelangerzigo: yah, then I can review21:45
pabelangerand ianw / clarkb21:45
zigopabelanger: ie, should: releases      => ['xenial'],21:45
zigobecome: releases      => ['xenial', 'stretch'],21:45
zigo?21:45
openstackgerritColleen Murphy proposed openstack-infra/system-config master: Fix puppet config for puppet 4  https://review.openstack.org/56174621:45
pabelangerzigo: yah, i think so21:45
pabelangerhttps://apt.puppetlabs.com/dists/ does list stretch21:46
zigoThanks, doing it.21:46
zigoYeah, I know, and it does work in my virtualbox VM here on my laptop with that one. :)21:46
*** yamamoto has joined #openstack-infra21:49
*** boden has quit IRC21:49
openstackgerritThomas Goirand proposed openstack-infra/system-config master: Also mirror stretch for puppetlabs  https://review.openstack.org/56283921:49
zigopabelanger: There you go !21:49
zigoclarkb: Can you also review https://review.openstack.org/562839 please ?21:50
pabelanger+221:52
clarkbpabelanger: looking at file sizes we should be good to just have reprepro update and vos release happen automatically, ya?21:54
clarkb(e.g. we don't need to hold the lock and go through that process for bigger updates)21:54
clarkboh ianw noted that as well21:54
clarkb(I just looked at the Packages list file and it's small in total)21:54
*** neiloy has quit IRC21:55
pabelangeryah21:56
ianwoh good https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=895532 merged21:59
openstackDebian bug 895532 in src:linux "linux-image-4.14.0-0.bpo.3-arm64: Enable CONFIG_SCSI_SYM53C8XX_2 for ARM64" [Normal,Fixed]21:59
ianwif we get that kernel, it means we can log in on arm64 debian.  currently it doesn't have the drivers for the older (non-virtio) kvm scsi devices presented by the current cloud, so no config drive and no keys deployed22:00
*** hashar has quit IRC22:00
openstackgerritPaul Belanger proposed openstack-infra/bindep master: Add debian-stable support  https://review.openstack.org/56284522:00
openstackgerritPaul Belanger proposed openstack-infra/bindep master: Remove debian-jessie support  https://review.openstack.org/56284622:00
pabelangerianw: wow, that was fast22:01
clarkbianw: that would explain why the hostnames are not real hostnames on those nodes too22:01
clarkbseems like someone had problems installing some software due to that22:01
clarkbping $hostname not working and such22:01
ianwclarkb: ahhh, yeah that could be right. yep, that was the issue with the collectd install we were discussing yesterday, where it tries to ping itself22:02
*** bobh has quit IRC22:02
*** caphrim007_ has quit IRC22:03
clarkbianw: I believe that glean would do the right thing based on hostname info from config drive if that is working (write the correct config to /etc/hosts)22:03
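[editor's note: roughly, the config-drive-driven fix-up clarkb describes would leave /etc/hosts looking like this; the hostname below is hypothetical and the exact lines glean writes may differ:]

    127.0.0.1   localhost
    127.0.1.1   ubuntu-xenial-rax-dfw-0001234567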
pabelangerokay, https://review.openstack.org/#/q/topic:debian-stable should be all that is needed to remove debian-jessie from nodepool22:03
*** caphrim007 has joined #openstack-infra22:03
pabelangerI'm going to start another thread on ML to see if there are objections22:03
pabelangerbasically ara and ansible-hardening22:03
*** dklyle has quit IRC22:08
*** caphrim007_ has joined #openstack-infra22:10
pabelangerokay, sent to ML22:12
*** caphrim007 has quit IRC22:13
clarkbalright I need to follow up on neutron job indexing22:18
clarkb*log indexing22:18
ianwout of the huge amount of stuff i deleted, we seem to have *one* stuck object in rax22:27
ianwoh, weird.  it was there, i tried to delete it via the webui, it gave an error, but now  it seems gone anyway22:28
clarkbihar: to confirm it appears that my fix to os-loganalyze got things indexed for neutron jobs again22:29
clarkbihar: build_short_uuid:"91dbaba" AND filename:"job-output.txt" is a logstash query that should confirm this for22:29
clarkb*for you22:29
clarkbianw: those are the best "please do this" "no sorry I can't" "let me try again" "oh I've already done it"22:30
clarkbianw: dmsimard|off: now I guess we just watch it and see if we leak going forward22:30
*** rcernin has joined #openstack-infra22:30
dmsimard|off\o/22:31
dmsimard|offThanks for taking care of that22:31
pabelangerdmsimard|off: mind looking at https://review.openstack.org/562844/ for ARA22:32
*** stevebaker has quit IRC22:32
dmsimard|off+322:33
ianwclarkb / dmsimard|off: one thing i noticed was that we were not falling into https://git.openstack.org/cgit/openstack-infra/shade/tree/shade/openstackcloud.py#n4560 as i was deleting, which suggests to me the link between image->object storage maybe isn't set up right22:33
ianwbut, we don't appear to have leaked objects on the latest upload either, so maybe it is working and i was just looking at really old images22:34
ianwif i get a sec, i'll upload with an instrumented shade and see if i can double check22:34
clarkbianw: I want to say mordred said current shade will remove the swift objects once the glance import is done?22:34
clarkbwhich, if true, means you typically won't need to execute that code on delete (I think you'd only need to do it if the glance import failed)22:35
*** stevebaker has joined #openstack-infra22:35
clarkbianw: https://git.openstack.org/cgit/openstack-infra/shade/tree/shade/openstackcloud.py#n4962 seems to do that22:36
pabelangerdmsimard|off: thanks22:36
*** TobbeCN has joined #openstack-infra22:37
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Late bind projects  https://review.openstack.org/55361822:37
openstackgerritJames E. Blair proposed openstack-infra/zuul master: WIP: Make config objects freezable  https://review.openstack.org/56281622:37
*** mriedem is now known as mriedem_away22:38
ianwclarkb: yeah, i didn't see where IMAGE_OBJECT_KEY was removed from the image though?22:38
ianwclarkb: anyway, https://review.openstack.org/#/c/562510/ did the job -- i think it's the api equivalent of just "rm *" anyway, despite the checks22:39
clarkbianw: https://git.openstack.org/cgit/openstack-infra/shade/tree/shade/openstackcloud.py#n4955 seems to be where we attempt to set the k/v pair at least22:41
*** TobbeCN has quit IRC22:41
openstackgerritMerged openstack-infra/system-config master: Also mirror stretch for puppetlabs  https://review.openstack.org/56283922:45
*** rwsu has quit IRC22:46
openstackgerritPaul Belanger proposed openstack-infra/bindep master: Add openstack-infra/project-config as required project  https://review.openstack.org/56286222:46
pabelangerAJaeger: I think we should start pushing on removing bindep-fallback.txt. Maybe we can come up with a plan and send it out to the ML for comments.22:47
pabelangerAJaeger: e.g.: for any project missing bindep.txt in its repo, we copy bindep-fallback.txt into it, then delete the fallback from our images22:48
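[editor's note: a minimal bindep.txt sketch of what a project would carry in-tree under this plan; the package names are illustrative, not taken from bindep-fallback.txt:]

    # compiler needed everywhere
    gcc
    # per-distro-family variants selected via bindep platform profiles
    libffi-dev [platform:dpkg]
    libffi-devel [platform:rpm]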
clarkbpabelanger: won't that affect your change above? (it assumes the nodepool elements location for the content)22:49
*** edmondsw has quit IRC22:49
*** edmondsw has joined #openstack-infra22:49
*** Sukhdev has joined #openstack-infra22:49
*** armaan has quit IRC22:50
pabelangerclarkb: yah, eventually. I think we'd keep testing it until all projects are off it. Then we can likely just remove that file and use the in-tree version to test distro-specific things22:50
pabelangermaybe we don't even need that job any more, since we are just testing we can install packages from bindep-fallback.txt22:50
*** armaan has joined #openstack-infra22:50
pabelangerI'm not really keen on adding debian-stretch packages into it22:51
pabelangerclarkb: we could also just say bionic, stretch, fedora-28 are no longer supported, lock the file, and if projects need things, copy it in-tree and modify22:52
pabelangerI actually might like that approach better22:52
pabelangeras it forces projects to react22:52
*** edmondsw has quit IRC22:53
clarkbianw: reading https://git.openstack.org/cgit/openstack-infra/shade/tree/shade/_normalize.py#n314 I think the issue is we need to check if that k is in image.properties not in image itself22:54
clarkbpabelanger: ya, the only downside to it is package names tend to not change much over time22:55
clarkbso may still be slow going for some projects but that seems like a reasonable stance, going forward you have to update in tree22:55
*** slaweq has joined #openstack-infra22:55
pabelangeryah, going to send a quick ML post to infra about it22:55
*** hongbin_ has quit IRC22:57
*** slaweq has quit IRC23:00
openstackgerritPaul Belanger proposed openstack-infra/bindep master: DNM - bionic  https://review.openstack.org/56286623:01
*** rpioso is now known as rpioso|afk23:01
*** tosky has quit IRC23:10
pabelangerclarkb: yah, it does look like bionic would pass using bindep-fallback.txt. Maybe we could prime it with a faulty package deliberately to force the issue23:11
clarkbthe downside to that approach is it will make it harder to flip everyone over to bionic by default whether they want it or not23:12
clarkb(because we'd be breaking many)23:12
pabelangerright23:12
clarkbapparently it is already time to decide if we want to attend ptg as a team23:13
clarkb(I say yes, it will be an excellent time to make progress on modernizing config management and updating base server OSes etc)23:13
*** salv-orlando has quit IRC23:13
pabelanger+123:13
*** salv-orlando has joined #openstack-infra23:13
clarkbif you anticipate attending the PTG let me know as I think part of the survey is giving a rough headcount23:15
clarkb(I'll send a more formal mention to the infra list and bring it up in the meeting too)23:15
pabelangerokay, email sent to ML about bindep-fallback23:15
*** rockyg has joined #openstack-infra23:15
*** hemna_ has quit IRC23:20
openstackgerritPaul Belanger proposed openstack-infra/openstack-zuul-jobs master: Remove legacy debian-jessie / fedora-26  https://review.openstack.org/56287023:20
openstackgerritPaul Belanger proposed openstack-infra/openstack-zuul-jobs master: Remove legacy debian-jessie / fedora-26  https://review.openstack.org/56287023:22
*** caphrim007_ has quit IRC23:23
*** caphrim007 has joined #openstack-infra23:24
*** caphrim007 has quit IRC23:25
*** caphrim007_ has joined #openstack-infra23:25
openstackgerritPaul Belanger proposed openstack-infra/openstack-zuul-jobs master: Remove legacy debian-jessie  https://review.openstack.org/56287023:27
*** caphrim007_ has quit IRC23:30
clarkbianw: before you call it a weekend it would be nice if you had time to go over topic:venv_support. I'd be curious what your feedback is on those changes23:31
ianwclarkb: ok23:33
ianwclarkb: as a very first thought, what i've always thought, is that probably this breaks every plugin23:34
ianwwhich is maybe just something we need to do23:34
clarkbianw: it mostly worked with ironic except for one assumption there23:35
clarkbbut yes, PATH in particular is likely to be a problem23:35
pabelangerand another ML post about ubuntu-bionic and legacy nodesets23:37
*** Goneri has joined #openstack-infra23:40
*** agopi has joined #openstack-infra23:41
*** jamesmcarthur has joined #openstack-infra23:43
*** rockyg has quit IRC23:44
prometheanfirediablo_rojo: how many train emails so far?23:47
pabelangerconfig-core: https://review.openstack.org/562870/ easy review to remove unused nodeset from ozj23:51
clarkbpabelanger: re legacy nodesets they don't really seem to be all that different from the non legacy nodesets23:51
clarkblegacy-ubuntu-xenial-2-node is identical to ubuntu-xenial-2-node for example23:52
clarkb(so I think we shouldn't really need to worry too much about bionic)23:52
clarkb(the non legacy nodeset should just work right)23:52
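[editor's note: a Zuul nodeset is just a named list of node labels, which is why the legacy and non-legacy variants clarkb compares can be byte-identical. A hedged sketch following Zuul's nodeset syntax:]

    - nodeset:
        name: ubuntu-xenial-2-node
        nodes:
          - name: primary
            label: ubuntu-xenial
          - name: secondary
            label: ubuntu-xenial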
pabelangerclarkb: main one is legacy-ubuntu-xenial vs ubuntu-xenial, but yah you are right.23:54
pabelangerit means jobs would still need to switch their nodesets for legacy jobs23:55
pabelangerbut again, like you said last week, we should push back on that happening in project-config in favor of in-tree changes23:55
