Tuesday, 2020-08-04

*** rlandy has quit IRC00:05
*** ryohayakawa has joined #openstack-infra00:17
*** armax has quit IRC00:32
*** eharney has joined #openstack-infra00:36
*** smarcet has joined #openstack-infra01:14
*** smarcet has left #openstack-infra01:14
*** rfolco has quit IRC02:07
*** apetrich has quit IRC02:14
*** bauzas has quit IRC02:14
*** tetsuro has quit IRC02:19
*** bauzas has joined #openstack-infra02:20
*** rcernin has quit IRC02:27
*** bauzas has quit IRC02:27
*** gyee has quit IRC02:29
*** bauzas has joined #openstack-infra02:29
*** rcernin has joined #openstack-infra02:30
*** ysandeep|away is now known as ysandeep02:32
*** bauzas has quit IRC02:42
*** bauzas has joined #openstack-infra02:43
ianwdonnyd: it looks like the host has ipv6 issues02:48
ianwour node launch attempts to ping -6 review.opendev.org and that's failing; i've held the generated node02:48
ianwit has an address : inet6 2001:470:e126:0:f816:3eff:feec:cf45/64 scope global dynamic mngtmpaddr noprefixroute02:49
*** bauzas has quit IRC02:50
openstackgerritAdrian Turjak proposed openstack/project-config master: Return gnocchi back to openstack  https://review.opendev.org/74459202:51
*** bauzas has joined #openstack-infra02:53
ianwi can ping the router : 64 bytes from fe80::f816:3eff:fe4a:404a%ens3: icmp_seq=1 ttl=64 time=0.873 ms02:55
*** bauzas has quit IRC02:58
*** bauzas has joined #openstack-infra02:58
ianwalso : Request to https://api.us-east.open-edge.io:8776.../volumes/detail timed out when trying to list volumes03:07
ianwclarkb / fungi: ^ something to take up tomorrow i think03:08
ianwi have uploaded a focal image, and it does boot and chat via ipv403:08
ianw./launch-node.py --cloud openstackci-openedge --region=us-east --flavor 8cpu-8GBram-80GBdisk --image ubuntu-focal mirror01.us-east.openedge.opendev.org03:08
ianwto save someone some typing :)03:08
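[Sketch: the sort of IPv6 checks ianw describes above, run from the held node; the interface name (ens3) and the router link-local address are taken from his pastes, so adjust for the host in question:
    ip -6 addr show ens3                          # the global 2001:470:... address should be present
    ip -6 route show dev ens3                     # is there a default route learned from the RA?
    ping -6 -c 2 fe80::f816:3eff:fe4a:404a%ens3   # link-local router answers (works per ianw's paste)
    ping -6 -c 2 review.opendev.org               # the launch-node check that is currently failing
]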
*** Tengu has quit IRC03:13
*** Tengu has joined #openstack-infra03:15
openstackgerritGhanshyam Mann proposed openstack/openstack-zuul-jobs master: Migrate functional tests jobs to Ubuntu Focal  https://review.opendev.org/73832503:27
openstackgerritGhanshyam Mann proposed openstack/openstack-zuul-jobs master: Migrate openstack-tox-docs jobs to Ubuntu Focal  https://review.opendev.org/73832603:28
openstackgerritGhanshyam Mann proposed openstack/openstack-zuul-jobs master: Pin ubuntu bionic for tox-py27 and tox-py36 in no-constraints template  https://review.opendev.org/73832703:28
openstackgerritGhanshyam Mann proposed openstack/openstack-zuul-jobs master: Pin nodejs6 & nodejs8 jobs on ubuntu-bionic  https://review.opendev.org/73832803:28
*** armax has joined #openstack-infra03:29
*** ramishra_ has quit IRC03:34
*** hongbin has joined #openstack-infra03:35
*** psachin has joined #openstack-infra03:37
*** vishalmanchanda has joined #openstack-infra03:42
*** armax has quit IRC03:43
*** Tengu has quit IRC03:57
*** ramishra has joined #openstack-infra03:58
*** Tengu has joined #openstack-infra03:58
*** hamalq has joined #openstack-infra04:09
*** hongbin has quit IRC04:10
*** hamalq has joined #openstack-infra04:10
*** hamalq has quit IRC04:17
*** raukadah is now known as chkumar|rover04:23
*** Tengu has quit IRC04:26
*** Tengu has joined #openstack-infra04:27
*** evrardjp has quit IRC04:33
*** evrardjp has joined #openstack-infra04:33
*** ysandeep is now known as ysandeep|afk04:33
*** Tengu has quit IRC04:33
*** Tengu has joined #openstack-infra04:35
*** Lucas_Gray has quit IRC04:36
*** Ajohn has joined #openstack-infra04:43
*** ykarel has joined #openstack-infra04:59
*** lmiccini has joined #openstack-infra05:03
*** marios has joined #openstack-infra05:13
*** Tengu has quit IRC05:17
*** Tengu has joined #openstack-infra05:19
*** udesale has joined #openstack-infra05:36
*** marios is now known as marios|ruck05:43
*** Tengu has quit IRC05:44
*** Tengu has joined #openstack-infra05:45
*** Tengu has quit IRC05:49
*** Tengu has joined #openstack-infra05:51
openstackgerritOpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml  https://review.opendev.org/74409606:10
*** dchen has quit IRC06:17
*** ysandeep|afk is now known as ysandeep06:18
*** xek has joined #openstack-infra06:19
*** Tengu has quit IRC06:24
*** Tengu has joined #openstack-infra06:26
*** eolivare has joined #openstack-infra06:27
*** dchen has joined #openstack-infra06:29
*** eharney has quit IRC06:35
openstackgerritMerged openstack/project-config master: Normalize projects.yaml  https://review.opendev.org/74409606:37
*** Tengu has quit IRC06:38
*** Tengu has joined #openstack-infra06:40
*** hashar has joined #openstack-infra06:49
openstackgerritMerged openstack/project-config master: Add notifications for openstack-stable channel  https://review.opendev.org/74405006:51
*** slaweq has joined #openstack-infra06:52
*** eharney has joined #openstack-infra06:53
*** Tengu has quit IRC07:01
*** Tengu has joined #openstack-infra07:02
*** dchen has quit IRC07:04
*** dchen has joined #openstack-infra07:05
*** dchen_ has joined #openstack-infra07:07
*** jcapitao has joined #openstack-infra07:08
*** dchen has quit IRC07:10
*** svyas is now known as svyas|afk07:13
*** dtantsur|afk is now known as dtantsur07:16
*** xek has quit IRC07:22
*** rcernin has quit IRC07:22
*** andrewbonney has joined #openstack-infra07:47
*** rcernin has joined #openstack-infra07:48
*** tosky has joined #openstack-infra07:48
*** ralonsoh has joined #openstack-infra07:50
*** ralonsoh has quit IRC07:50
*** ralonsoh has joined #openstack-infra07:52
*** rcernin has quit IRC07:53
*** jpena|off is now known as jpena07:55
*** Tengu has quit IRC08:00
*** xek has joined #openstack-infra08:06
*** Tengu has joined #openstack-infra08:07
*** AJaeger has joined #openstack-infra08:08
*** xek has quit IRC08:18
*** jhesketh has quit IRC08:24
*** rcernin has joined #openstack-infra08:27
*** Lucas_Gray has joined #openstack-infra08:30
*** rcernin has quit IRC08:31
openstackgerritMerged openstack/project-config master: Add Ceph iSCSI charm to OpenStack charms  https://review.opendev.org/74447908:32
*** pkopec has joined #openstack-infra08:37
*** xek has joined #openstack-infra08:37
*** priteau has joined #openstack-infra08:43
*** derekh has joined #openstack-infra08:47
*** ramishra has quit IRC08:51
*** ramishra has joined #openstack-infra08:52
*** tetsuro has joined #openstack-infra08:55
*** rcernin has joined #openstack-infra08:59
*** nightmare_unreal has joined #openstack-infra09:06
*** ramishra has quit IRC09:08
*** xek has quit IRC09:08
*** lpetrut has joined #openstack-infra09:12
*** rcernin has quit IRC09:16
*** ociuhandu_ has joined #openstack-infra09:21
*** ociuhandu has quit IRC09:21
openstackgerritMerged openstack/project-config master: Revert "Remove os_congress gating"  https://review.opendev.org/74253209:24
*** apetrich has joined #openstack-infra09:40
*** ralonsoh has quit IRC09:40
*** ralonsoh has joined #openstack-infra09:41
*** hashar has quit IRC09:42
*** auristor has quit IRC10:01
*** xek has joined #openstack-infra10:11
*** udesale_ has joined #openstack-infra10:12
*** tkajinam has quit IRC10:12
*** sshnaidm_ has joined #openstack-infra10:12
*** udesale has quit IRC10:14
*** sshnaidm has quit IRC10:15
*** jhesketh has joined #openstack-infra10:21
*** yamamoto has quit IRC10:23
*** sshnaidm_ is now known as sshnaidm10:26
*** yamamoto has joined #openstack-infra10:27
*** ociuhandu_ has quit IRC10:30
*** ociuhandu has joined #openstack-infra10:31
*** ociuhandu has quit IRC10:36
*** yamamoto has quit IRC10:38
*** tetsuro has quit IRC10:44
*** xek has quit IRC10:50
*** yamamoto has joined #openstack-infra10:50
*** rcernin has joined #openstack-infra10:55
*** dklyle has quit IRC10:56
*** dchen_ has quit IRC11:03
*** aedc has quit IRC11:08
*** eharney has quit IRC11:13
*** zxiiro has joined #openstack-infra11:15
*** tosky_ has joined #openstack-infra11:20
*** tosky has quit IRC11:20
*** tosky_ is now known as tosky11:20
*** ociuhandu has joined #openstack-infra11:23
*** ociuhandu has quit IRC11:24
*** ociuhandu has joined #openstack-infra11:25
*** udesale has joined #openstack-infra11:27
*** udesale_ has quit IRC11:27
*** ociuhandu has quit IRC11:30
*** udesale_ has joined #openstack-infra11:31
*** udesale has quit IRC11:32
*** eharney has joined #openstack-infra11:32
*** gfidente has joined #openstack-infra11:32
*** jpena is now known as jpena|lunch11:36
*** udesale_ has quit IRC11:38
*** ociuhandu has joined #openstack-infra11:40
*** ykarel_ has joined #openstack-infra11:40
*** udesale has joined #openstack-infra11:41
*** jcapitao is now known as jcapitao_lunch11:42
*** ykarel has quit IRC11:43
*** marios|ruck has quit IRC11:48
*** smarcet has joined #openstack-infra11:50
*** rfolco has joined #openstack-infra11:51
*** rcernin has quit IRC11:53
openstackgerritRico Lin proposed openstack/openstack-zuul-jobs master: Add job for openstack-python3-victoria-jobs-arm64  https://review.opendev.org/74209011:53
openstackgerritThierry Carrez proposed openstack/project-config master: Retire Zuul's Kata tenant  https://review.opendev.org/74468711:58
*** takamatsu has joined #openstack-infra12:04
*** rlandy has joined #openstack-infra12:07
*** artom has joined #openstack-infra12:10
donnydWell in the upgrade to ussuri it would seem something busted in the bgp dragent12:10
donnydit doesn't seem to work at all - so I am running that down now12:11
*** tosky has quit IRC12:14
*** tosky_ has joined #openstack-infra12:14
*** tosky_ is now known as tosky12:15
*** hashar has joined #openstack-infra12:16
*** dave-mccowan has joined #openstack-infra12:16
*** ralonsoh_ has joined #openstack-infra12:26
*** ramishra has joined #openstack-infra12:27
*** ralonsoh__ has joined #openstack-infra12:28
rpittauhello everyone! In ironic projects we do have oslo.config==5.2.0 in lower-constraints in ussuri, but it's installing 8.3.1, which is breaking the l-c tests because of the validators dependencies12:28
rpittauit seems that tox is upgrading the packages without taking lower-constraints into consideration, has anything changed recently that could cause that ?12:28
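[Sketch: a quick local check of what the lower-constraints env actually resolved, assuming the usual openstack tox layout with a lower-constraints testenv:
    tox -e lower-constraints --notest -r              # (re)build the env without running tests
    .tox/lower-constraints/bin/pip show oslo.config   # should report 5.2.0 if the pin was honoured
]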
*** ryo_hayakawa has joined #openstack-infra12:28
*** ryohayakawa has quit IRC12:29
*** ralonsoh has quit IRC12:30
*** ralonsoh_ has quit IRC12:31
*** svyas|afk is now known as svyas12:32
*** xek has joined #openstack-infra12:33
AJaegerrpittau: Not sure whether this helps, just wanted to point out http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014237.html12:34
rpittauyay12:37
rpittauAJaeger: thanks, I'll test that12:37
*** jpena|lunch is now known as jpena12:39
rpittauAJaeger: we already removed install_command roughly 3 months ago12:40
AJaegerrpittau: in that case I have no idea ;(12:43
rpittau:(12:43
*** arxcruz is now known as arxcruz|off12:43
AJaegerrpittau: IMHO the lower-constraints is just too complex, see the followup to my email as well. You might need to recreate the file12:44
*** ralonsoh__ is now known as ralonsoh12:44
rpittauAJaeger: ok, I'll have a look at that, thanks!12:45
AJaegerrpittau: did you see on the discuss mailing list the thread "[nova] openstack-tox-lower-constraints broken"? You might want to chime in there as well...12:46
rpittauAJaeger: yep, saw that too :/12:47
rpittauseems l-c hates us12:47
AJaegerthose mails seem to indicate that constraints in general hate us ;(12:48
AJaegersorry, hope others can help. Perhaps you can join forces with smcginnis and fungi about this.12:49
rpittauAJaeger: we were talking in #openstack-requirements before, they pointed out a change in tox, I guess we need to keep testing12:50
fungirpittau: i think something has changed with how tox is installing packages. in #openstack-cinder smcginnis is experimenting with my suggestion of setting -c in install_command instead of deps12:51
fungithis did seem to begin roughly the same time as the most recent tox release (within margin of error anyway)12:52
smcginnisfungi: AJaeger pointed out the reason it was removed above. That breaks the l-c job.12:52
rpittaufungi: ok, thanks12:52
smcginnisThe tox release is oddly suspicious. But as fungi pointed out, there really wasn't much in that release that should have affected this.12:53
smcginnisAnd I've updated my tox locally, and I am still unable to reproduce this on F32.12:53
fungito be clear, i didn't look at the complete list of commits between the last two releases, just their (rather minimal) changelog12:53
rpittauso basically re-add install_command with -c ?12:53
smcginnisrpittau: I have a test patch up to try that now.12:54
fungirpittau: well, that's what we're experimenting with, but we might need to do it separately in the lower-constraints env as well12:54
rpittaualright, gotcha12:54
*** yamamoto has quit IRC12:54
fungito be able to specify a different constraints file default12:54
smcginnisIf we do that, we will also need to explicitly define install_command in [testenv:lower-constraints] too.12:55
rpittausmcginnis: yeah, that's what I wanted to do12:55
*** eharney has quit IRC12:55
*** eharney has joined #openstack-infra12:56
dtantsurhey folks! I'm trying to use devstack CI jobs on a non-standard branch. How do I figure out why zuul does not run them?12:56
fungiright, that's basically what i was describing12:56
dtantsurhttps://review.opendev.org/#/c/744700/ doesn't seem enough, do I need overrides for all required projects?12:57
smcginnisfungi: Yeah, what you said. I was too busy typing to see that you had already said it.12:57
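[Sketch: the general shape of the tox.ini change being experimented with here; the constraints paths and the environment variable name vary per project and are assumptions:
    [testenv]
    install_command = python -m pip install -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} {opts} {packages}

    [testenv:lower-constraints]
    install_command = python -m pip install -c{toxinidir}/lower-constraints.txt {opts} {packages}
As smcginnis notes, without the second stanza the lower-constraints env would inherit the upper-constraints file and defeat the job.]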
smcginnisAnd now it looks like we have a grenade issue going around too.12:57
smcginnisdtantsur: You might. Probably good to ask in #openstack-qa.12:58
dtantsurwill do12:58
fungidtantsur: you can turn on debug for your project in the zuul.yaml for your change and zuul should comment with a detailed explanation of what it selected. that might help narrow down why a particular job was not selected. but also common reasons for a job to not be selected are file filters or branch filters12:58
smcginnisBut IIRC, devstack has to be aware of each "stable" branch, so this might be a little more work.12:58
dtantsurmm, branch filters, lemme check12:58
dtantsurinterestingly, one non-devstack job also does not run12:58
dtantsurfungi: could you remind me how to turn on debug?12:59
fungisure, after i remind myself, just a moment13:00
fungidtantsur: https://zuul-ci.org/docs/zuul/reference/project_def.html#attr-project.%3Cpipeline%3E.debug13:02
*** ryo_hayakawa has quit IRC13:02
dtantsurthx!13:02
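[Sketch: per the doc fungi links, debug is enabled per pipeline in the project stanza; the pipeline and job names here are placeholders:
    - project:
        check:
          debug: true
          jobs:
            - my-devstack-job
With that set, zuul comments with a detailed explanation of which job variants it selected and why others were skipped.]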
*** ryo_hayakawa has joined #openstack-infra13:03
*** auristor has joined #openstack-infra13:05
*** mgoddard has quit IRC13:07
fungidtantsur: and this is the documentation for the branch override you want to use for your required-projects list, i suspect: https://zuul-ci.org/docs/zuul/reference/job_def.html#attr-job.required-projects.override-checkout13:09
dtantsurI guess so. I wonder what to do with jobs that are fully defined in other projects..13:10
*** ryo_hayakawa has quit IRC13:10
fungiyou can still set that in a variant (basically where you include the job in your pipeline)13:10
fungiyou don't need to redefine or inherit from the job itself13:11
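[Sketch: setting override-checkout on the variant where the job is attached to the pipeline, rather than redefining the job; the job and branch names are placeholders, and a per-project override-checkout can likewise be set on individual required-projects entries as in the doc fungi linked:
    - project:
        check:
          jobs:
            - some-devstack-job:
                override-checkout: my-feature-branch
]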
*** mgoddard has joined #openstack-infra13:11
openstackgerritAntoine Musso proposed openstack/pbr master: Update some url to use opendev.org  https://review.opendev.org/74470413:17
*** ysandeep is now known as ysandeep|mtg13:18
*** jcapitao_lunch is now known as jcapitao13:20
dtantsurnice, I'll try13:21
rpittausmcginnis: I'm testing the change you did for cinder in ironic and it seems working, at least locally on ubuntu bionic and latest tox13:22
*** ykarel_ has quit IRC13:22
*** ykarel_ has joined #openstack-infra13:22
*** lbragstad_ has joined #openstack-infra13:24
*** __ministry1 has joined #openstack-infra13:26
*** lbragstad has quit IRC13:26
*** __ministry1 has quit IRC13:33
smcginnisrpittau: Looks like the cinder one is passing too.13:39
dtantsurnext question, folks: https://opendev.org/openstack/project-config/src/branch/master/gerritbot/channels.yaml#L763 doesn't seem to work (the patches do not appear). anything I missed?13:39
smcginnisThat's good news / bad news. Glad it fixes it, but if the "fix" is we have to update every tox file everywhere... that's a pain.13:39
rpittausmcginnis: ehm...yeah :/13:39
rpittauI guess we need to start from master and backport... gosh that's a lot of files13:40
*** lbragstad_ has quit IRC13:41
fungidtantsur: you haven't missed anything, we have to manually update gerritbot configuration right now because it's in limbo having not been containerized yet after we containerized gerrit itself13:44
dtantsurah! could someone please do this update?13:45
smcginnisThere was also a recent gerritbot update for the #openstack-stable channel as well.13:45
fungidtantsur: i'll set myself a reminder to try and do it in a bit13:46
dtantsurthank you again!13:46
*** lbragstad has joined #openstack-infra13:46
*** armax has joined #openstack-infra13:55
*** ysandeep|mtg is now known as ysandeep14:00
openstackgerritJeremy Stanley proposed openstack/pbr master: Re-add ChangeLog  https://review.opendev.org/74471914:00
openstackgerritJeremy Stanley proposed openstack/pbr master: Add Release Notes to documentation  https://review.opendev.org/74472014:00
*** beagles has joined #openstack-infra14:01
*** yamamoto has joined #openstack-infra14:05
donnydrouting it fixed14:05
donnydis*14:05
donnydcan someone else confirm14:05
donnydping6 2001:470:e126:0:f816:3eff:fee4:7b2d14:05
*** michael-beaver has joined #openstack-infra14:05
fungidonnyd: yep! i can reach that14:07
donnydok, then we should be good to go14:16
*** ysandeep is now known as ysandeep|off14:16
*** yamamoto has quit IRC14:17
donnydapparently the BGP agent just doesn't want to work anymore in my deployment - so I am debugging it now - but for the two simple projects to support the community - static routes work just fine14:17
*** pkopec has quit IRC14:18
donnydso the mirror should work now14:19
donnydbut I am thinking it probably needs to be redeployed14:19
*** dave-mccowan has quit IRC14:25
noonedeadpunkAJaeger: hi, can you push that change to the gates?:) https://review.opendev.org/#/c/742534/14:26
openstackgerritJeremy Stanley proposed openstack/pbr master: Re-add ChangeLog  https://review.opendev.org/74471914:27
openstackgerritJeremy Stanley proposed openstack/pbr master: Add Release Notes to documentation  https://review.opendev.org/74472014:27
*** hashar has quit IRC14:29
*** dklyle has joined #openstack-infra14:31
*** psachin has quit IRC14:33
*** Tengu has quit IRC14:35
*** udesale_ has joined #openstack-infra14:35
*** Tengu has joined #openstack-infra14:36
*** udesale has quit IRC14:38
*** smarcet has quit IRC14:46
*** dave-mccowan has joined #openstack-infra14:54
*** jamesmcarthur has joined #openstack-infra14:56
*** dave-mccowan has quit IRC14:59
*** pkopec has joined #openstack-infra14:59
*** artom has quit IRC15:03
smcginnisNot sure if it's related yet, but just noticed a new pip was released today that has a few bug fixes related to resolving dependencies. Might be interesting to see if that gets picked up and if it addresses our constraints issues.15:04
smcginnishttps://pip.pypa.io/en/stable/news/15:04
*** artom has joined #openstack-infra15:04
*** derekh has quit IRC15:05
clarkbsmcginnis: we run tox from a virtualenv iirc now to prevent polluting the global image python. You should be able to run a pre job step to upgrade the pip in that virtualenv before the run stage15:05
clarkb(otherwise will need to wait for image updates)15:05
clarkbI'm in a meeting now but can help sort that out if it is something you want to try15:05
*** smarcet has joined #openstack-infra15:05
*** derekh has joined #openstack-infra15:05
fungismcginnis: ooh, "Correctly find already-installed distributions with dot (.) in the name and uninstall them when needed."15:06
smcginnisThanks clarkb. Hoping to try some things a little later, so I can ping you.15:06
fungithat could be what we're seeing!15:06
smcginnisYeah, that one caught my eye.15:06
fungiit would explain why some already installed stuff gets ignored but other stuff doesn't (thinking about it, those had a . in their names)15:06
smcginnisOh? I didn't check that yet. Was it really only ones that had a . in the name?15:07
fungiso not actually a behavior change in tox, but rather because the new tox release brought in new pip15:07
fungioslo.cache, dogpile.cache, oslo.messaging all come to mind15:08
fungias ones i saw getting reinstalled with the wrong versions15:08
smcginnisShould we now pick up this latest pip? Or will there be a delay before we see that in our jobs? Would be great if we can just recheck one of the failing jobs and see if that's now resolved.15:09
fungismcginnis: i think it'll be delayed until the next tox release15:09
fungiunless we override the version of pip in the tox envs like clarkb described15:09
*** chkumar|rover is now known as raukadah15:10
smcginnisReading the pip bug report, it is sounding kind of like that may be it: https://github.com/pypa/pip/issues/864515:12
clarkbfungi: smcginnis we build a venv for tox in our images15:13
clarkbI think we just need to rebuild our images and that will pull in newer pip15:13
fungioh, tox pulls in latest pip?15:14
clarkbthat happens roughly daily, and we can speed it up and test it with a pre run playbook update that upgrades pip in that virtualenv15:14
clarkbfungi: yes I believe so15:14
clarkbor rather the virtualenv creation does15:14
smcginnisThat all sounds like it could also explain why we saw a slight delay between releases and when we started getting failures.15:14
fungioh, actually it depends on what version of pip is vendored in the version of virtualenv used15:15
*** lpetrut has quit IRC15:15
fungihttps://virtualenv.pypa.io/en/latest/changelog.html "v20.0.29 (2020-07-31) [...] Upgrade embedded pip from version 20.1.2 to 20.2"15:16
clarkbpython3 -m venv /usr/tox-env && /usr/tox-env/bin/pip install tox is what we do roughly15:17
clarkbso ya maybe tox is pulling in newer pip (otherwise how did it update in the first place)15:17
smcginnisShould we add another step to that of /usr/tox-env/bin/pip install --upgrade pip to make sure it grabs the latest?15:19
clarkbI'm trying to check now if tox pulls in pip15:20
*** openstackgerrit has quit IRC15:20
clarkbhttps://github.com/tox-dev/tox/blob/master/setup.cfg#L45 they install virtualenv which will pull in pip15:21
clarkbI think that explains it?15:21
clarkbthey aren't using the pip in our tox virtualenv they are using the pip installed in the tox env virtualenv15:22
fungi#status log manually updated /etc/gerritbot/channel_config.yaml on review01 with latest content from openstack/project-config:gerritbot/channels.yaml (for the first time since 2020-03-17) and restarted the gerritbot service15:22
clarkbits turtles the whole way down but as long as virtualenv updates they will update and that is in line with the changelog fungi found15:22
openstackstatusfungi: finished logging15:22
clarkbsmcginnis: but also that means updating pip in the tox venv is unlikely to fix it for us15:22
clarkbwe need that nested virtualenv version to update15:22
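[Sketch: a way to see which pip the nested environments end up with, given that tox pulls in virtualenv and virtualenv embeds pip; /usr/tox-env is clarkb's "roughly" path above:
    /usr/tox-env/bin/pip list | grep -E 'pip|virtualenv'   # what the image currently carries
    /usr/tox-env/bin/virtualenv /tmp/pipcheck              # seed a throwaway env much as tox would
    /tmp/pipcheck/bin/pip --version                        # the embedded pip that tox envs will get
]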
fungidtantsur: i've updated gerritbot's channel config just now at your request15:23
dtantsurthank you!15:23
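[Sketch: roughly what the manual step in fungi's status log amounts to, on review01; the location of the project-config checkout and the service restart invocation are assumptions:
    cd /opt/project-config && git pull
    cp gerritbot/channels.yaml /etc/gerritbot/channel_config.yaml
    systemctl restart gerritbot
]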
smcginnisSmoking gun at least. Last virtualenv release was July 31 - right when we started getting failures.15:23
smcginnisSo now we just need a new virtualenv release to happen?15:24
clarkbwith an updated pip version included yes15:24
clarkbif we need to we can change `python3 -m venv /usr/tox-env && /usr/tox-env/bin/pip install tox` to `python3 -m venv /usr/tox-env && /usr/tox-env/bin/pip install tox virtualenv==oldversionthatworks`15:25
clarkbor do a pre step that does the equivalent of ^15:25
smcginnisThat could be a good workaround for now.15:26
smcginnisI don't see any open virtualenv issues asking for updates to newer pip requirements.15:26
clarkbprobably starting with a pre step and confirming that works is a good start. Then we can put that in the images themselves if we expect to be in that state for a while15:27
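[Sketch: what the pre step or image change clarkb describes might look like; pinning virtualenv below 20.0.29 is an assumption based on the changelog fungi quoted, where that release bumped the embedded pip to 20.2:
    python3 -m venv /usr/tox-env
    /usr/tox-env/bin/pip install tox 'virtualenv<20.0.29'
]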
rpittausmcginnis, clarkb, would it still make sense to modify the install_command to fix the issue? asking as I started with a couple of patches for ironic and if there is a "global fix" would be way better :)15:38
clarkbrpittau: if this is a pip bug then no I don't think that will help much. Best case you'll end up reverting it shortly anyway15:39
ralonsohmordred, hi15:39
ralonsohcan you help me on a problem in the Neutron CI?15:39
ralonsohspecifically on the grenade jobs15:39
ralonsohI think the problem is related to https://review.opendev.org/#/c/66273415:40
ralonsohwe have released 9.1.0 version recently15:40
rpittauclarkb: ok, I actually see that what smcginnis proposed, which I applied to ironic, works; at least I'm seeing the correct packages installed in the lower-constraints job, both in master and ussuri.15:42
clarkbrpittau: to be clear I'm not saying it can't be fixed that way, just pointing out it is likely that an updated pip will address the problem and you'll want to revert15:42
*** lpetrut has joined #openstack-infra15:43
clarkbralonsoh: it would be helpful to link to a specific failure15:43
*** ykarel_ is now known as ykarel|away15:43
rpittauclarkb: I see, thanks! I'll check if we can wait for that as at the moment all stable branches are broken :/15:43
ralonsohclarkb, you are right15:43
ralonsohhttps://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/screen-placement-api.txt15:43
ralonsohkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for identity service not found15:43
ralonsohI see the project config is15:44
ralonsohkeystone_authtoken.interface   = admin15:44
*** hashar has joined #openstack-infra15:45
*** dmellado has joined #openstack-infra15:46
clarkbralonsoh: that job appears to have installed keystonemiddleware 9.0.015:46
ralonsohCollecting keystonemiddleware===9.1.015:47
clarkbralonsoh: I see 9.0.0 in that log15:48
clarkbis that the correct log15:48
ralonsohclarkb, it uninstalls 9.0.015:48
ralonsoh2020-08-04 13:39:25.813 | Requirement already satisfied: keystonemiddleware===9.1.0 in /usr/local/lib/python3.6/dist-packages (from -c /opt/stack/new/requirements/upper-constraints.txt (line 215)) (9.1.0)15:48
*** ykarel|away has quit IRC15:49
clarkbah I see. The old side installs 9.0.0 then we upgrade to the new side and get 9.1.015:50
clarkband I need to look in the grenade log to see that15:50
ralonsohclarkb, I think the problem is15:50
ralonsohkeystone_authtoken.interface   = admin15:50
ralonsohany other job in the CI uses internal15:50
ralonsohinstead of admin15:51
ralonsohI need to find now where in grenade this is configured15:51
toskyralonsoh: the grenade job is basically a devstack job which then runs grenade.sh15:53
ralonsohtosky, yeah, but why this option is different?15:53
clarkband the configs come from https://opendev.org/openstack/placement/src/branch/master/etc/placement15:53
ralonsohbecause I think this is the problem there15:53
clarkbralonsoh: I think genconfig on stable is using keystonemiddleware with 9.0.0 so the default is admin15:53
clarkbthen I believe we don't update configs for the new side15:53
ralonsohahhhhhh15:54
clarkbbut that should work if it worked on the old side? what is making admin invalid?15:54
clarkbthat would require a keystone db update wouldn't it?15:54
ralonsohI have no idea, sorry15:54
fungiyeah, it's also an intentional choice that we run grenade with config from the previous stable branch, so that we can catch when upgrades fail to be backward-compatible with configs from the previous release15:55
*** ykarel|away has joined #openstack-infra15:55
toskygrenade does update the db, but iirc the point is to not update the config15:55
toskyas fungi explained it in a better way15:55
ralonsohok, when keystone is restarted15:56
ralonsohin https://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/screen-keystone.txt15:56
clarkbya I think the trick here is to figure out why admin is no longer valid as it should be (as intentional testing design objective)15:56
ralonsohthe interface option is changed15:56
ralonsohfrom "admin" to "internal"15:56
ralonsohat 13:37:5715:56
clarkbis this the port 5000 vs 35something?15:56
clarkbor is this the endpoint attribute in the database?15:56
ralonsohkeystone_authtoken.interface   = internal15:56
ralonsohI'm noob on keystone15:57
clarkbI think the likely fix here is we need to keep admin as the value on the new side if the old side defaults to admin16:02
clarkbthen when old side defaults to internal the new side can default to internal16:02
ralonsohexactly, and this should be done in grenade, if I'm not wrong16:03
clarkbralonsoh: yes in openstack/grenade/projects/10_keystone/from_whatevertheversionthatmattershereis16:03
ralonsohclarkb, thanks a lot16:03
clarkbthere are examples of other upgrade steps like that in grenade16:03
toskyuhm, why doesn't this happen on other grenade jobs?16:03
clarkbtosky: because both old and new side use older middleware with the admin default16:03
clarkbtosky: the issue here is old defaults to admin but new defaults to internal but we don't change other service configs aiui16:04
clarkbif we keep the new side fixed to the admin default then the other services don't need their configs updated16:04
*** lmiccini has quit IRC16:05
clarkbI'm trying to check if that is set in keystones conf file already16:06
ralonsohclarkb, I'm not very sure about the use of those upgrade files16:07
ralonsohno one is calling those methods16:07
clarkbwell and https://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/etc/placement/placement_conf.txt doesn't set the interface either16:07
clarkboh I see placement is complaining about the default too16:08
clarkbwe aren't fixing the value in the configs so we use the default everywhere16:08
ralonsohyes16:08
clarkbif we fixed the value in the configs then we'd be ok I think16:08
ralonsohcan we set the default value globally?16:09
clarkbhttps://opendev.org/openstack/devstack/src/branch/master/lib/keystone#L403-L426 adds to the mystery :)16:11
clarkbwhen I first found that function I thought yes we can and that is where, but that sets the value to public and we don't see that in the services16:11
*** jpena is now known as jpena|off16:13
donnydfungi: how are things holding up down there?16:13
ralonsohclarkb, but this is being called only in nova, when upgrading to rocky16:14
clarkbralonsoh: ya I guess it is going to be branch specific and that may help explain tosky's question16:14
*** jcapitao has quit IRC16:14
clarkbralonsoh: but that job was ussuri to master right?16:15
ralonsohyes16:15
toskyI didn't check how that job differs from the normal grenade (or grenade-multinode) job16:15
*** derekh has quit IRC16:15
clarkbin devstack master and ussuri many of the services call that16:15
ralonsohneutron and nova16:15
ralonsohand placement16:16
ralonsohhttps://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/screen-placement-api.txt16:16
clarkbralonsoh: and placement swift glance and cinder16:16
ralonsohyes16:16
ralonsohall of them16:16
clarkbbut if you look in their config files we don't have that value set16:17
clarkbhttps://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/etc/placement/placement_conf.txt16:17
*** ykarel|away has quit IRC16:17
ralonsohclarkb, because we don't configure it explicitly16:17
ralonsohwe don't call configure_auth_token_middleware16:18
clarkbralonsoh: we do in https://opendev.org/openstack/devstack/src/branch/master/lib/keystone#L403-L42616:18
clarkbralonsoh: devstack does16:18
clarkbor it should16:18
ralonsohindeed16:18
clarkbiirc the way this process works is we run devstack on the old branch to configure and deploy the services. Then grenade runs specific portions of devstack things to only update the service installations but not their configs16:19
clarkbI would've expected the configs to have interface = public in them and then that won't change16:19
clarkbfiguring out why that isn't happening may be the bug that needs to be fixed here16:19
ralonsohclarkb, actually, this is the only variable not set16:19
ralonsohin https://opendev.org/openstack/devstack/src/branch/master/lib/keystone#L403-L426 list16:19
ralonsohcompared to https://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/etc/placement/placement_conf.txt16:19
*** pkopec has quit IRC16:19
ralonsohin the same order16:20
*** lpetrut has quit IRC16:21
clarkbI see that function doesn't set that var in ussuri16:21
clarkbso I think the fix here is to update devstack stable branches to include that iniset on that function call16:22
ralonsohclarkb, no way!16:22
ralonsohthanks!16:22
*** __ministry1 has joined #openstack-infra16:23
clarkbbut then we'll have the ussuri value fixed and won't have changing defaults to worry about16:23
clarkb(and train stein etc depending on how far back we go)16:23
ralonsohclarkb, do you mean we need to backport this ussuri fix up to the last supported branch?16:26
*** ociuhandu has quit IRC16:27
clarkbmaybe? I think it depends on if that default has changed before (it probably hasn't now that I think about it more)16:27
clarkbso it really only matters where the default has changed (between ussuri and master I think)16:27
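[Sketch: the devstack change clarkb is suggesting, roughly: pin the interface explicitly on stable/ussuri so the keystonemiddleware 9.0.0 to 9.1.0 default change stops mattering; the variable names follow the other iniset calls in configure_auth_token_middleware:
    # in lib/keystone, inside configure_auth_token_middleware(), alongside the existing iniset calls:
    iniset $conf_file $section interface public
]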
*** hamalq has joined #openstack-infra16:38
*** hamalq_ has joined #openstack-infra16:41
fungidonnyd: we dodged a bullet here, the water came up high enough to saturate the yard but didn't rise above the slab for our downstairs. cleanup should be minimal16:41
clarkboh that reminds me mirror things. I'll likely have time for that after the meeting today16:42
ralonsohjust a heads-up16:43
ralonsohhttps://review.opendev.org/#/c/744753/16:43
donnydclarkb: I added static routes in my edge box for the two openstack projects. Routing should not be an issue any longer16:43
clarkbcool I think the best process will be to rebuild the mirror to be sure no broken network artifacts remain and take it from there16:44
*** hamalq has quit IRC16:44
*** sshnaidm is now known as sshnaidm|afk16:48
*** gfidente is now known as gfidente|afk16:48
*** gyee has joined #openstack-infra16:49
*** priteau has quit IRC16:51
*** priteau has joined #openstack-infra16:52
*** xek has quit IRC16:55
*** dtantsur is now known as dtantsur|afk17:00
*** ralonsoh has quit IRC17:00
*** __ministry1 has quit IRC17:01
*** ralonsoh has joined #openstack-infra17:01
*** ralonsoh has quit IRC17:06
*** ralonsoh has joined #openstack-infra17:07
rpittauclarkb, smcginnis, FYI new virtualenv with the fix has been released https://pypi.org/project/virtualenv/20.0.30/17:10
clarkbrpittau: great, we can trigger image builds17:11
*** nightmare_unreal has quit IRC17:11
* clarkb does that now17:12
clarkbI've triggered image builds for the ubuntus which is where we run the tox jobs predominantly. Does anyone know if we need fedora or centos etc?17:13
clarkbif not they can pick up the updates on the normal update schedule (which is every 24 hours)17:13
*** udesale_ has quit IRC17:16
*** hashar is now known as hashardinner17:16
*** Lucas_Gray has quit IRC17:22
smcginnisrpittau: Excellent timing.17:26
smcginnisclarkb: I have only seen the failures on ubuntu. So probably good just starting there.17:26
*** openstackgerrit has joined #openstack-infra17:26
openstackgerritMerged openstack/project-config master: Retire Zuul's Kata tenant  https://review.opendev.org/74468717:26
smcginnisclarkb: Does that include xenial images that some of the stable jobs run on?17:26
clarkbyes17:26
clarkbxenial, bionic, and focal17:26
smcginnisCool, thanks. I will recheck a few to see how they go.17:27
clarkbthe builds do take some time ~an hour plus upload time17:33
clarkbI'll let you know when I think rechecks are likely to pass17:33
*** priteau has quit IRC17:33
*** andrewbonney has quit IRC17:39
*** michael-beaver has quit IRC17:45
*** lpetrut has joined #openstack-infra17:49
*** lpetrut has quit IRC17:50
*** jamesmcarthur has quit IRC18:06
clarkbxenial has finished and has started uploading. bionic and focal are still building but I expect they'll complete soon. I need to pop out for a bit then prep for the meeting but we're making progress18:07
*** jamesmcarthur has joined #openstack-infra18:13
*** yamamoto has joined #openstack-infra18:15
*** jamesmcarthur has quit IRC18:17
*** bcafarel has joined #openstack-infra18:19
*** yamamoto has quit IRC18:20
*** jamesmcarthur has joined #openstack-infra18:24
*** jamesmcarthur has quit IRC18:30
*** jamesmcarthur has joined #openstack-infra18:31
*** mmethot_ has joined #openstack-infra18:42
*** mmethot has quit IRC18:45
*** jamesmcarthur has quit IRC18:48
*** jamesmcarthur has joined #openstack-infra18:48
*** jamesmcarthur has quit IRC18:51
clarkball three images are uploading now but not done uploading18:51
clarkbwe only seem to upload with 4 threads now on each builder which is less than I thought we were doing in the past. I wonder if that is due to a nodepool update18:54
fungitobiash mentioned something about uploader threads consuming too much memory on his deployment and reducing their number, but not sure if a change was merged to nodepool itself to alter that18:55
*** markvoelker has joined #openstack-infra18:56
tobiashfungi: it's https://review.opendev.org/#/c/74379018:56
tobiashBut not yet ready18:56
fungithanks, so no, sounds like we haven't changed the default there18:57
clarkbfungi: its because we don't set upload-workers on the command anymore18:57
clarkbI think we were overriding that value with the old sysv init script but now we use the container command as is and don't add that in18:58
fungioh, got it, so this changed when containering18:58
clarkbafter the meeting I'll figure out how to fix that as it seems to be slowing us down on uploads18:58
clarkbfungi: ya appears to be so18:58
clarkbI'm guessing we have to override the command in the docker compose file18:59
fungisounds likely19:00
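[Sketch: the docker-compose override clarkb is guessing at; the service name, config path, exact flag spelling and worker count are all assumptions:
    services:
      nodepool-builder:
        command: nodepool-builder -c /etc/nodepool/nodepool.yaml --upload-workers 8
]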
*** jamesmcarthur has joined #openstack-infra19:04
tobiashmaybe the command can take an env var as arg which can be overridden19:04
tobiashOr we make that a config option19:04
clarkba config option would be nice since we already have to supply the configs19:05
tobiashIt feels weird anyway to configure everything in nodepool.yaml but not the upload workers19:05
clarkb++19:05
*** vishalmanchanda has quit IRC19:12
*** auristor has quit IRC19:14
*** ramishra has quit IRC19:15
ralonsohclarkb, sorry (again)19:23
ralonsohthere could be a problem with pip, version 20.219:23
ralonsohit's not resolving the dependencies correctly19:24
ralonsohhttps://bugs.launchpad.net/neutron/+bug/189033119:24
openstackLaunchpad bug 1890331 in neutron "[stable] oslo.service 2.3.2 requests eventlet 0.25.2, but upper version is 0.25.1" [Undecided,New]19:24
ralonsoh--> second comment19:24
*** jamesmcarthur has quit IRC19:25
clarkbralonsoh: 20.2.1 is the fix I think19:25
clarkbwe're rebuilding new images now to pick that up19:25
ralonsohclarkb, thanks a lot!19:25
ralonsohis there a LP link?19:25
ralonsohjust for documentation19:25
clarkbno it was a pip bug19:25
clarkbwhich is in github19:25
*** jamesmcarthur has joined #openstack-infra19:25
ralonsohhttps://github.com/pypa/pip/issues/8686 this one, I think19:26
ralonsohhttps://github.com/pypa/pip/issues/869519:26
ralonsohsorry, last one19:26
clarkbhttps://github.com/pypa/pip/issues/8645 is the one smcginnis found19:26
*** mtreinish has quit IRC19:27
fungiyes, 8645 is the one which has been causing problems with constraints lists not being obeyed under tox anyway19:27
ralonsohthanks a lot for the info, we'll wait for this version then19:28
fungiit'll probably be in use within the hour19:29
*** jamesmcarthur has quit IRC19:29
*** jamesmcarthur has joined #openstack-infra19:30
*** jamesmcarthur has quit IRC19:35
*** jamesmcarthur has joined #openstack-infra19:35
*** jamesmcarthur has quit IRC19:39
*** jamesmcarthur has joined #openstack-infra19:43
*** auristor has joined #openstack-infra19:46
*** ralonsoh has quit IRC19:47
*** jamesmcarthur has quit IRC19:48
*** jamesmcarthur has joined #openstack-infra19:50
ianwdonnyd: there's currently two active servers with name "mirror01.us-east.openedge.opendev.org" ... i'm guessing you started them as part of testing?19:52
ianwneither have an ipv4 address, and we have one floating ip not attached19:53
ianwfungi/clarkb: ^ i'm launching the node now19:56
clarkbk19:56
ianw(i.e. pressing up twice and enter :)19:56
clarkbianw: have a link to the sshfp change? I'd like to revie wthat either way19:56
ianw64 bytes from review01.openstack.org (2001:4800:7819:103:be76:4eff:fe04:9229): icmp_seq=1 ttl=53 time=53.9 ms19:56
ianwit's happier on ipv619:56
ianwclarkb: that was https://review.opendev.org/#/c/743461/ which i put in hoping to test yesterday, but it didn't happen19:57
ianw... and, it didn't work it seems ...19:58
ianwok, firstly, oe must be in disabled because the launch didn't do anything19:58
ianw$ ssh-keyscan -D 108.44.198.3520:02
ianwunknown option -- D20:02
ianwi guess that option came in after bionic?20:02
fungiyou can run it on any system which is able to reach that ip address20:02
ianwyeah, but that's not so good for automatically putting in the launch.py output :)20:03
fungithough the typical way to generate sshfp rrs is to run a command locally on the system where the host key resides and derive them from what's on the fs20:03
fungiusing ssh-keygen -r20:05
fungii suppose we could have ansible run that remotely?20:05
ianwyes, or just dump that from a ssh call at this point in the script20:07
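[Sketch: what dumping the records from an ssh call could look like in the launch script, using ssh-keygen -r against each host key; the login user is an assumption:
    ssh root@mirror01.us-east.openedge.opendev.org \
        'for k in /etc/ssh/ssh_host_*_key.pub; do
             ssh-keygen -r mirror01.us-east.openedge.opendev.org -f "$k"
         done'
]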
donnydianw: no, I did not start any instances20:11
clarkbleftovers from earlier launch runs? maybe the cleanups failed20:11
donnydthere is only one instance total on OE right now20:12
donnydit looks to have been running for 16 minutes20:12
donnydI would like to point out its the wrong flavor unless the disk size for the mirror nodes are being reduced20:12
donnydor block storage is being used20:13
clarkbdonnyd: ianw indicated that cinder was going to be attempted but maybe that is the wrong approach for OE?20:13
donnydoh no, cinder is fine20:13
donnydI have an all nvme block store20:13
donnydso it should be pretty awesome20:13
donnydeat all you need there20:13
donnydthere is 20TB of nvme available in cinder20:14
donnydhttps://www.irccloud.com/pastebin/VsISka7B/20:15
ianwdonnyd/clarkb: oh, ok, so the last mirror didn't use a volume but a bigger instance?20:15
clarkbianw: ya that sounds right20:15
clarkbI think maybe bceause we didn't have cinder to start?20:15
donnydianw: i can get the sshfp keys using your command20:15
clarkbbut I could be wrong about that20:15
donnydwe did20:15
donnydbut the 200G instance size was used20:16
ianwdonnyd: yeah, it seems everywhere can except the one place i want to do it (bridge.o.o :)20:16
donnydyou should be able to use cinder without issue20:16
ianwdonnyd: i still get "Request to https://api.us-east.open-edge.io:8776/v3/2ed8e9a22ebf4eaeb4149f316b9d6c3d/volumes/detail timed out"20:16
donnydhrm20:16
*** jamesmcarthur has quit IRC20:16
donnydthat is strance20:16
donnydthat is strange20:16
ianw8cpu-8GBram-250GBdisk ... i don't mind.  it's easier to not have volumes attached20:17
donnydyea I think that is the flavor20:19
donnydbut that is weird cinder is doing that20:19
donnydwhat happens when you curl cinder endpoint?20:21
ianwumm .... one sec20:22
ianw$ curl https://api.us-east.open-edge.io:8776/v320:22
ianw{"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}}20:22
ianwso that responds20:22
ianwyeah, running with debug it happily starts chatting but the /volumes/detail request just hangs20:24
donnyd[04/Aug/2020:20:19:25 +0000] "GET /v3/2ed8e9a22ebf4eaeb4149f316b9d6c3d/volumes/detail?limit=101&sort=created_at%3Adesc HTTP/1.1" 200 1169 "-" "python-cinderclient" 62152(us)20:24
donnydi can see the requests coming in - and it's weird that it just hangs (assuming this is from the bridge)20:28
donnydwhich command are you running if I can ask?20:28
donnydis it a pre-built openstacksdk app?20:29
ianwjust "volume list"20:29
clarkbcould there be ipv6 routing issues still? bridge has ipv6 interface so will prefer it to talk to that api server?20:29
ianwit's from ... umm ... the container that has openstacksdk in it on bridge20:30
donnydah.. that is possible20:30
clarkband maybe asymettric routes or similar are causing packets to disappear20:30
ianwclarkb: yeah, but the initial chats are all ok20:30
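[Sketch: one way to separate the routing theory from an API-side problem, forcing each address family against the same endpoint from a host with the clients available; both should come back quickly with a 401 like the curl above if the path is healthy:
    curl -4 -sS -o /dev/null -w '%{http_code}\n' https://api.us-east.open-edge.io:8776/v3
    curl -6 -sS -o /dev/null -w '%{http_code}\n' https://api.us-east.open-edge.io:8776/v3
    timeout 60 openstack --os-cloud openstackci-openedge --os-region-name us-east volume list --debug
]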
*** hashardinner has quit IRC20:34
donnydthat is very strange indeed20:35
donnydI am testing from a laptop that has v6 on a network that isn't mine and it seems to work ok20:36
donnydI will investigate more20:36
donnydbut if we want to just get on with it, the 250G disk flavor is what has been used in the past20:36
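[Sketch: ianw's earlier launch command with the flavor donnyd points at swapped in:
    ./launch-node.py --cloud openstackci-openedge --region=us-east \
        --flavor 8cpu-8GBram-250GBdisk --image ubuntu-focal \
        mirror01.us-east.openedge.opendev.org
]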
donnydI also just forced a machine outside my wan onto v6 only and it also seems to work ok20:37
ianw++; because it's in the disabled list i want to remove it from the inventory first, so we don't have a broken state20:37
donnydah yes, that makes sense20:38
donnyddo you think it will try to reach out via already known ip addresses?20:38
ianwit will try to talk to the old address if it's in the inventory and thus CD playbook runs will fail20:40
clarkbsmcginnis: bionic and xenial are all uploaded now. focal is still in progress20:43
clarkbsmcginnis: I think you should be able to recheck things since we haven't moved much to focal yet iirc20:43
smcginnisclarkb: Thanks! Things are already looking better - https://zuul.opendev.org/t/openstack/builds?job_name=openstack-tox-lower-constraints20:49
*** jamesmcarthur has joined #openstack-infra21:04
*** jamesmcarthur has quit IRC21:23
*** jamesmcarthur has joined #openstack-infra21:23
*** markvoelker has quit IRC21:30
*** armax has quit IRC21:46
openstackgerritSean McGinnis proposed openstack/pbr master: Fix compatiblity with virtualenv 20.x+  https://review.opendev.org/74479321:46
*** armax has joined #openstack-infra21:46
*** yamamoto has joined #openstack-infra21:55
*** smarcet has quit IRC21:58
*** slaweq has quit IRC21:58
openstackgerritSean McGinnis proposed openstack/pbr master: Fix compatiblity with virtualenv 20.x+  https://review.opendev.org/74479321:58
*** Lucas_Gray has joined #openstack-infra22:01
*** jamesmcarthur has quit IRC22:05
*** yamamoto has quit IRC22:07
*** jamesmcarthur has joined #openstack-infra22:14
*** ociuhandu has joined #openstack-infra22:15
*** jamesmcarthur has quit IRC22:17
*** jamesmcarthur has joined #openstack-infra22:17
*** ociuhandu has quit IRC22:20
*** eolivare has quit IRC22:50
*** tosky has quit IRC22:50
*** dpawlik2 has quit IRC22:55
*** bnemec-pto has quit IRC22:55
*** freerunner has quit IRC22:55
*** lastmikoi has quit IRC22:55
*** guillaumec has quit IRC22:55
*** bradm has quit IRC22:55
*** tobberydberg_ has quit IRC22:55
*** dansmith has quit IRC22:55
*** bstinson has quit IRC22:55
*** frickler has quit IRC22:55
*** tkajinam has joined #openstack-infra22:55
clarkbwe're still waiting on focal uploads ...22:56
*** dpawlik2 has joined #openstack-infra22:57
*** bnemec-pto has joined #openstack-infra22:57
*** freerunner has joined #openstack-infra22:57
*** lastmikoi has joined #openstack-infra22:57
*** guillaumec has joined #openstack-infra22:57
*** bradm has joined #openstack-infra22:57
*** tobberydberg_ has joined #openstack-infra22:57
*** dansmith has joined #openstack-infra22:57
*** bstinson has joined #openstack-infra22:57
*** frickler has joined #openstack-infra22:57
clarkbbut ianw if you get a moment, a review on https://review.opendev.org/#/c/744780/ would be great. Then I can approve it once uploads are done (though that might be tomorrow morning)22:57
*** bdodd has quit IRC22:59
*** bdodd has joined #openstack-infra23:01
ianwlgtm, thanks23:02
*** Lucas_Gray has quit IRC23:10
*** Lucas_Gray has joined #openstack-infra23:14
*** jamesmcarthur has quit IRC23:16
*** jamesmcarthur has joined #openstack-infra23:18
*** jamesmcarthur has quit IRC23:21
*** yamamoto has joined #openstack-infra23:22
*** jamesmcarthur has joined #openstack-infra23:25
*** jamesmcarthur has quit IRC23:27
*** jamesmcarthur has joined #openstack-infra23:27
*** dchen has joined #openstack-infra23:32
*** jamesmcarthur has quit IRC23:34
*** jamesmcarthur has joined #openstack-infra23:34
*** jamesmcarthur has quit IRC23:40
*** jamesmcarthur has joined #openstack-infra23:41
*** jamesmcarthur has quit IRC23:45
*** jamesmcarthur_ has joined #openstack-infra23:46
*** hamalq_ has quit IRC23:52
*** jamesmcarthur_ has quit IRC23:54
*** jamesmcarthur has joined #openstack-infra23:54
