Tuesday, 2020-09-15

*** bdodd has quit IRC00:00
*** armax has quit IRC00:01
*** tetsuro has joined #openstack-infra00:03
*** rcernin has joined #openstack-infra00:05
*** rcernin has quit IRC00:06
*** bdodd has joined #openstack-infra00:12
*** armax has joined #openstack-infra00:25
*** Topner has joined #openstack-infra00:30
*** Topner has quit IRC00:35
*** hamalq has quit IRC00:45
*** armax has quit IRC00:59
*** kaisers has quit IRC01:30
*** lxkong has joined #openstack-infra01:35
*** tetsuro has quit IRC01:43
*** ianychoi_ has joined #openstack-infra01:46
*** ianychoi has quit IRC01:48
*** iurygregory has quit IRC02:08
*** rlandy has quit IRC02:22
*** Topner has joined #openstack-infra02:31
*** Topner has quit IRC02:36
*** tetsuro has joined #openstack-infra02:43
*** hashar has joined #openstack-infra02:48
*** tetsuro has quit IRC02:49
*** psachin has joined #openstack-infra03:00
*** Lucas_Gray has joined #openstack-infra03:31
*** dave-mccowan has quit IRC03:32
*** yankcrime has quit IRC04:05
*** tetsuro has joined #openstack-infra04:14
*** ykarel|away has joined #openstack-infra04:17
*** ykarel_ has joined #openstack-infra04:21
*** ykarel|away has quit IRC04:24
*** Topner has joined #openstack-infra04:32
*** evrardjp has quit IRC04:33
*** evrardjp has joined #openstack-infra04:33
*** vishalmanchanda has joined #openstack-infra04:33
*** Topner has quit IRC04:36
*** hashar has quit IRC05:13
*** ysandeep|away is now known as ysandeep05:13
*** zzzeek has quit IRC05:19
*** zzzeek has joined #openstack-infra05:22
*** ykarel_ has quit IRC05:22
*** ykarel_ has joined #openstack-infra05:23
*** matt_kosut has joined #openstack-infra05:23
*** tetsuro has quit IRC05:34
*** tetsuro has joined #openstack-infra05:39
*** tetsuro_ has joined #openstack-infra05:40
*** tetsuro has quit IRC05:44
*** stevebaker has quit IRC05:45
*** stevebaker has joined #openstack-infra05:46
*** tetsuro_ has quit IRC05:46
*** tetsuro has joined #openstack-infra05:47
*** lmiccini has joined #openstack-infra05:48
*** tetsuro has quit IRC05:53
*** jtomasek has joined #openstack-infra06:03
*** d34dh0r53 has quit IRC06:10
*** ykarel_ has quit IRC06:10
*** ykarel_ has joined #openstack-infra06:11
*** lpetrut has joined #openstack-infra06:14
*** Lucas_Gray has quit IRC06:15
*** ykarel_ is now known as ykarel06:21
*** eolivare has joined #openstack-infra06:22
*** yankcrime has joined #openstack-infra06:27
*** yankcrime has left #openstack-infra06:27
*** dklyle has quit IRC06:29
*** ramishra_ has quit IRC06:32
*** Topner has joined #openstack-infra06:33
*** Topner has quit IRC06:37
amotokiAJaeger: gmann: hi, the neutron team is hitting a failure in the normal docs build after https://review.opendev.org/#/c/738326/ was merged.06:44
*** ramishra has joined #openstack-infra06:44
amotokiAJaeger: gmann: The failure happens because the rsvg converter referenced in doc/source/conf.py is not installed, as a result of dropping the PDF-related package installation06:44
amotokiAJaeger: gmann: an example is found at https://zuul.opendev.org/t/openstack/build/0cd86df4b6aa4896ba033d68d435794806:45
*** zzzeek has quit IRC06:45
amotokiAJaeger: gmann: what is the suggested solution for this failure? is there any ongoing fix in openstack-zuul-jobs or should each project adjust doc/source/conf.py?06:46
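For context, the conf.py wiring at issue looks roughly like the sketch below. This is illustrative only, assuming the project pulls in the sphinxcontrib-svg2pdfconverter package (which provides the sphinxcontrib.rsvgconverter extension); neutron's actual extension list may differ.

```python
# doc/source/conf.py -- illustrative sketch, not neutron's actual file
extensions = [
    'openstackdocstheme',
    'sphinxcontrib.rsvgconverter',  # SVG -> PDF conversion for the PDF docs build
]

# sphinxcontrib.rsvgconverter shells out to the rsvg-convert binary when the
# PDF (latex) build runs, so the job only works if the node has librsvg and
# the TeX packages installed -- which is what the dropped pre-run/package
# installation used to provide.
```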
*** zzzeek has joined #openstack-infra06:47
*** ralonsoh has joined #openstack-infra06:53
*** eandersson has quit IRC06:58
*** iurygregory has joined #openstack-infra06:58
*** eandersson has joined #openstack-infra06:59
*** zzzeek has quit IRC07:01
*** eandersson has quit IRC07:01
*** eandersson has joined #openstack-infra07:02
*** zzzeek has joined #openstack-infra07:03
*** slaweq has joined #openstack-infra07:05
*** jcapitao has joined #openstack-infra07:13
*** andrewbonney has joined #openstack-infra07:14
zbrdo we have/know a reminder bot for irc? -- likely not one that I would have to deploy myself.07:24
*** hashar has joined #openstack-infra07:24
*** ricolin has quit IRC07:32
*** tosky has joined #openstack-infra07:34
*** priteau has joined #openstack-infra07:39
*** tetsuro has joined #openstack-infra07:39
*** ricolin has joined #openstack-infra07:42
*** jpena|off is now known as jpena07:49
*** gfidente|afk is now known as gfidente07:51
*** tetsuro has quit IRC07:51
*** zxiiro has quit IRC07:51
AJaegeramotoki: 738326 did not drop the PDF-related package installation, it added an additional one.07:56
amotokiAJaeger: I am talking about openstack-tox-docs job for stable/train.07:57
AJaegeramotoki: 738326 should not have changed that one07:58
AJaegeramotoki: sorry, no time to dig into this right now07:58
amotokiAJaeger: It does not have a pre-run definition. It is different from the openstack-tox-docs job for the master branch.07:58
openstackgerritWitold Bedyk proposed openstack/project-config master: Use only noop jobs for openstack/monasca-transform  https://review.opendev.org/75198307:58
AJaegeramotoki: Oh? That looks like a bug07:58
AJaegeramotoki: I see it now - want to send a change to add the pre-run?07:58
amotokiAJaeger: I first wanted to check whether the change was intentional or not. Your answer is no.07:59
AJaegeramotoki: not intentional08:00
amotokiAJaeger: I am not sure texlive-full is available for bionic. If available, the fix would be easy.08:00
AJaegerpropose a change, add a test change with depends-on08:00
amotokiAJaeger: sure. I can try to add pre-run and run for the stein-ussuri job.08:01
amotokiAJaeger: thanks for clarification!08:02
*** sshnaidm|afk is now known as sshnaidm08:04
*** xek has joined #openstack-infra08:04
*** lucasagomes has joined #openstack-infra08:13
openstackgerritAkihiro Motoki proposed openstack/openstack-zuul-jobs master: openstack-tox-docs: Enable PDF build for stein-ussuri xenial job  https://review.opendev.org/75198508:13
ricolinHi, Heat is currently suffering from a "new version of package not found" error08:15
ricolinhttps://zuul.opendev.org/t/openstack/build/5d4ab1c5e2784c4593222038a4e45bef08:15
ricolinERROR: Could not find a version that satisfies the requirement oslo.log===4.4.0 (from -c /opt/stack/requirements/upper-constraints.txt (line 298)) (from versions: 0.1.0, 0.2.0, 0.3.0, 0.4.0, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.14.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 3.0.0, 3.1.0, 3.2.0, 3.3.0, 3.4.0, 3.5.0, 3.6.0, 3.7.0, 3.8.0, 3.9.0, 3.10.0, 3.11.0, 3.12.0, 3.13.0,08:15
ricolin3.14.0, 3.15.0, 3.16.0, 3.16.1, 3.17.0, 3.18.0, 3.19.0, 3.20.0, 3.20.1, 3.21.0, 3.22.0, 3.23.0, 3.24.0, 3.25.0, 3.26.0, 3.26.1, 3.27.0, 3.28.0, 3.28.1, 3.29.0, 3.30.0, 3.30.1, 3.30.2, 3.30.3, 3.31.0, 3.32.0, 3.33.0, 3.34.0, 3.35.0, 3.36.0, 3.37.0, 3.38.0, 3.38.1, 3.39.0, 3.39.1, 3.39.2, 3.40.0, 3.40.1, 3.41.0, 3.42.0, 3.42.1, 3.42.2, 3.42.3, 3.42.4, 3.42.5, 3.43.0, 3.44.0, 3.44.1, 3.44.2, 3.44.3, 3.45.0, 3.45.1, 3.45.2, 4.0.0, 4.0.1, 4.1.0,08:15
ricolin4.1.1, 4.1.2, 4.1.3, 4.2.0, 4.2.1, 4.3.0)08:15
ricolinERROR: No matching distribution found for oslo.log===4.4.0 (from -c /opt/stack/requirements/upper-constraints.txt (line 298))08:15
* ricolin sorry about the c-p08:16
ricolinthe mirror in question is https://mirror.gra1.ovh.opendev.org/pypifiles/packages/89/ac/b71a66e54c8fcf22c4205efe2b5f94dbf282c194f9f07dbf0a1ac52d4633/08:16
fricklerricolin: yes, pypi seems to have intermittent issues again with its CDN, see also the earlier discussion in #opendev. afaict usually a recheck should help08:20
openstackgerritWitold Bedyk proposed openstack/project-config master: End project gating for openstack/monasca-analytics  https://review.opendev.org/75198708:20
ricolinfrickler, thx for the info08:25
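A quick way to tell whether the proxy or PyPI itself is serving the stale index is to fetch the simple page from both and look for the missing release. A minimal sketch, with the package taken from the failure above and the proxy path assumed to follow the /pypi/simple/ layout used by the opendev mirrors:

```python
# Sketch: check whether a release shows up in the simple index served by the
# opendev proxy versus the one served by pypi.org directly.
import urllib.request

PACKAGE = "oslo-log"                  # normalized project name
WANTED = "oslo.log-4.4.0.tar.gz"      # sdist filename pip could not find
URLS = [
    "https://mirror.gra1.ovh.opendev.org/pypi/simple/%s/" % PACKAGE,
    "https://pypi.org/simple/%s/" % PACKAGE,
]

for url in URLS:
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8", "replace")
    print(url, "->", "present" if WANTED in body else "MISSING")
```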
*** ykarel is now known as ykarel|lunch08:28
*** Topner has joined #openstack-infra08:34
*** Topner has quit IRC08:38
openstackgerritWitold Bedyk proposed openstack/project-config master: End project gating for openstack/monasca-analytics  https://review.opendev.org/75198708:39
*** derekh has joined #openstack-infra08:40
openstackgerritWitold Bedyk proposed openstack/project-config master: Remove openstack/monasca-analytics  https://review.opendev.org/75199308:46
*** tetsuro has joined #openstack-infra08:53
*** tetsuro has quit IRC08:54
*** tetsuro has joined #openstack-infra08:55
*** ysandeep is now known as ysandeep|lunch08:55
*** tetsuro has quit IRC08:56
*** tetsuro has joined #openstack-infra08:57
*** tetsuro has quit IRC08:58
*** tetsuro has joined #openstack-infra08:59
Tenguah, a recheck should be enough for the oslo.log issue? oook.... hit me twice in the same review at different stages :(.08:59
*** tetsuro has quit IRC09:00
*** tetsuro has joined #openstack-infra09:00
*** dtantsur|afk is now known as dtantsur09:00
*** tetsuro has quit IRC09:12
*** ykarel|lunch is now known as ykarel09:12
*** tetsuro has joined #openstack-infra09:12
*** lajoskatona has joined #openstack-infra09:18
lajoskatonaHi, I ran into mirroring issues (see: https://api.us-east.open-edge.io:8080/swift/v1/AUTH_e02c11e4e2c24efc98022353c88ab506/zuul_opendev_logs_806/751301/9/check/networking-l2gw-tempest-dummy/806df3c/job-output.txt ), which is strange as the requirements upper-constraints file has 4.0.1 (see: https://opendev.org/openstack/requirements/src/branch/master/upper-constraints.txt#L295 )09:21
zbri see random post failures on opendev tenant09:23
zbrhttps://review.opendev.org/#/c/750958/ -- two simple runs, failed on post, different jobs.09:23
*** tetsuro has quit IRC09:24
zbrany zuul-main around?09:24
duleklajoskatona: I can confirm, I see mirroring issues too.09:24
*** ralonsoh has quit IRC09:24
*** Lucas_Gray has joined #openstack-infra09:25
lajoskatonadulek: thanks.09:25
lajoskatonadulek: is it some general issue, or was I just unlucky?09:25
*** ralonsoh has joined #openstack-infra09:26
duleklajoskatona: Seems pretty general to me. Which is quite fun given that it's now 3rd stacked issue we're dealing with on our CI. :/09:26
zbrcorrelating with POST failures, I start to believe we have networking issues.09:27
amotokiAJaeger: thanks for your quick review :)09:27
zbrianw: can you look at post failures?09:30
openstackgerritMerged openstack/openstack-zuul-jobs master: openstack-tox-docs: Enable PDF build for stein-ussuri xenial job  https://review.opendev.org/75198509:34
*** yolanda has joined #openstack-infra09:37
zbr#status alert Huge number of POST_FAILURES https://zuul.opendev.org/t/openstack/builds?result=POST_FAILURE09:39
*** lxkong_ has joined #openstack-infra09:43
fricklerzbr: the post failures I looked at just seem to be pypi errors in disguise. we don't mirror pypi, just proxy cache, so not much we can do about upstream failures09:46
*** tkajinam has quit IRC09:49
zbrI cried wolf before around that proxy implementation, now we enjoy it. Also the way POST fails is a very bad UX, as the developer gets no clue regarding what caused the failure. Zero transparency.09:50
fricklerIMHO the developer can easily see what happened in the job log, just like I did. but as always, patches with improvements are always welcome09:54
*** lxkong has quit IRC09:56
*** lxkong_ is now known as lxkong09:56
*** lajoskatona has left #openstack-infra10:07
*** ysandeep|lunch is now known as ysandeep10:10
*** slaweq has quit IRC10:13
*** slaweq has joined #openstack-infra10:21
zbrfrickler: where is that log? https://zuul.opendev.org/t/openstack/build/067a60b3b47a470daa918000b056361610:23
zbrthere is no log url on post failure10:23
*** jcapitao is now known as jcapitao_lunch10:24
*** Topner has joined #openstack-infra10:35
*** Topner has quit IRC10:39
*** ociuhandu has joined #openstack-infra10:58
*** ociuhandu_ has joined #openstack-infra11:00
*** lajoskatona has joined #openstack-infra11:02
*** ociuhandu has quit IRC11:03
*** lajoskatona has left #openstack-infra11:10
priteauHi. I am seeing job failures in blazar-nova during installation of requirements, like this:11:13
priteau2020-09-15 10:09:12.304301 | ubuntu-focal | ERROR: Could not find a version that satisfies the requirement oslo.privsep===2.4.0 (from -c /home/zuul/src/opendev.org/openstack/requirements/upper-constraints.txt (line 462)) (from versions: 0.1.0, 0.2.0, 0.3.0, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0, 1.13.1, 1.13.2, 1.14.0, 1.15.0, 1.16.0,11:14
priteau 1.17.0, 1.18.0, 1.19.0, 1.20.0, 1.21.0, 1.21.1, 1.22.0, 1.22.1, 1.22.2, 1.23.0, 1.24.0, 1.25.0, 1.26.0, 1.27.0, 1.28.0, 1.29.0, 1.29.1, 1.29.2, 1.30.0, 1.30.1, 1.31.0, 1.31.1, 1.32.0, 1.32.1, 1.32.2, 1.33.0, 1.33.1, 1.33.2, 1.33.3, 1.34.0, 2.0.0, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.3.0)11:14
priteau2020-09-15 10:09:12.304332 | ubuntu-focal | ERROR: No matching distribution found for oslo.privsep===2.4.0 (from -c /home/zuul/src/opendev.org/openstack/requirements/upper-constraints.txt (line 462))11:14
priteauWhat could be causing this?11:14
priteauoslo.privsep 2.4.0 was released four days ago11:17
*** lajoskatona has joined #openstack-infra11:18
*** zzzeek has quit IRC11:24
*** zzzeek has joined #openstack-infra11:24
dulekpriteau: See above, we've already discussed that it's an issue with the gate mirroring/proxy of PyPI.11:29
dulekI don't think we have any idea why this happens though. :/11:29
dulekI guess we'll need to wait for the US to wake up. ;)11:29
fricklerfwiw I hit the same error earlier while deploying devstack locally, with no opendev infra involved. to me this clearly proves that it is a pypi issue11:30
*** Topner has joined #openstack-infra11:30
*** jpena is now known as jpena|lunch11:35
* zbr thinking that nothing happens while US is sleeping11:36
dulekfrickler: Ah, interesting!11:38
priteaudulek: Thanks, I had not looked at earlier discussions11:43
*** xek has quit IRC11:44
*** Goneri has joined #openstack-infra11:44
*** eolivare has quit IRC11:48
*** lpetrut has quit IRC11:54
*** jtomasek has quit IRC11:54
*** rosmaita has joined #openstack-infra11:57
*** ociuhandu_ has quit IRC12:01
*** ociuhandu has joined #openstack-infra12:01
fungiAJaeger: amotoki: texlive-full is available at least as far back as xenial, i checked last week (but has also been around much, much longer than xenial if memory serves)12:01
*** rfolco|ruck has joined #openstack-infra12:03
fungidulek: we know why it's happening, the cdn network pypi relies on (fastly) is randomly returning old results12:03
fungiwe don't control it, and even if you didn't use our proxy layer you'd see the same results12:04
*** jcapitao_lunch is now known as jcapitao12:05
*** rlandy has joined #openstack-infra12:07
*** matbu has joined #openstack-infra12:10
*** zxiiro has joined #openstack-infra12:12
*** Lucas_Gray has quit IRC12:14
*** Lucas_Gray has joined #openstack-infra12:16
zbrI commented on https://bugs.launchpad.net/openstack-gate/+bug/144913612:18
openstackLaunchpad bug 1449136 in OpenStack-Gate "Pip fails to find distribution for package" [Undecided,New]12:18
zbrafaik, pypi works fine and they seem to confirm it at https://status.python.org/12:19
*** lxkong has quit IRC12:21
fungisure, pypi works fine, but pypi as accessed from certain parts of the world seems to be returning old or incomplete simple api indices missing some entries12:21
zbrfungi: do we have an upstream bug for fastly?12:21
fungino, every time we bring it up, pypa points us to https://status.fastly.com/ which sometimes shows there's a problem and sometimes doesn't12:21
fungialong with reminders that there is no sla for pypi, and if we want something more stable we should keep a copy of pypi and use that instead12:22
zbrwithout a bug we have no proof12:22
fungi"sometimes we get incorrect results" is an actionless bug, and doesn't differentiate from prior incidents12:23
*** Goneri has quit IRC12:23
fungilast time we were seeing it for requests in the montreal canada area (from multiple service providers). this time it seems to be hitting the paris france area12:24
*** sshnaidm has quit IRC12:25
zbrwhat is the issue tracker? they managed to hide it quite well, even if I remember dstufft closing a ticket I raised about random 404s a few weeks back12:26
*** lxkong has joined #openstack-infra12:26
zbrreading https://pypi.org/help/#feedback and I still fail to find any place to file an infra issue, and i bet it is like this by design12:27
fungithere's not really an issue tracker for the service, they have issue trackers for the software which runs their services12:27
*** ociuhandu has quit IRC12:28
fungithere is no sla for pypi, if you tell them about issues (and have actionable information like which specific fastly proxy nodes are returning bad data and what the data they're returning contains, which is not at all easy to get since there are hundreds in any one location and you can't directly query them so have to get lucky and catch it with a bunch of debugging turned on) so that they can in turn12:29
fungiopen a service ticket with fastly, then that sometimes helps12:29
*** slaweq has quit IRC12:30
fungii sometimes manage to ping specific people in #pypa-dev on freenode about stuff, but i think they mostly use slack and discourse lately12:30
zbrf***, we need to set up our own repository. current status of pypi seems to be a "not my problem" kind of issue.12:31
zbrit was like this for >4h today, and we do not even have an ack for it.12:31
*** Topner has quit IRC12:33
*** slaweq has joined #openstack-infra12:35
*** eolivare has joined #openstack-infra12:35
*** jpena|lunch is now known as jpena12:36
dulekThanks for the explanations about the issues folks!12:37
rosmaitafungi: this is a stupid question, but this kind of looks like a pip install bug -- it can't find the version listed in upper-constraints.txt, but so what, because requirements is asking for a lower version that is listed in the versions it did find12:39
fungirosmaita: can you clarify? those jobs are generally installing with a constraints file, which says to install exactly some version12:40
rosmaitai may be misreading the results, but i am seeing this:12:40
rosmaitaERROR: Could not find a version that satisfies the requirement oslo.service===2.4.0 (from -c /home/zuul/src/opendev.org/openstack/requirements/upper-constraints.txt (line 36)) (from versions: 0.1.0, 0.2.0, 0.3.0, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.9.1, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0, 1.16.0, 1.16.1, 1.1712:40
rosmaita.0, 1.18.0, 1.19.0, 1.19.1, 1.20.0, 1.21.0, 1.22.0, 1.23.0, 1.24.0, 1.24.1, 1.25.0, 1.25.1, 1.25.2, 1.26.0, 1.27.0, 1.28.0, 1.28.1, 1.29.0, 1.29.1, 1.30.0, 1.31.0, 1.31.1, 1.31.2, 1.31.3, 1.31.4, 1.31.5, 1.31.6, 1.31.7, 1.31.8, 1.32.0, 1.32.1, 1.33.0, 1.34.0, 1.35.0, 1.36.0, 1.37.0, 1.38.0, 1.38.1, 1.39.0, 1.40.0, 1.40.1, 1.40.2, 1.41.0, 1.41.1, 2.0.0, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.3.0, 2.3.1, 2.3.2) 2020-09-15 08:23:28.927865 |12:40
rosmaita ubuntu-bionic | ERROR: No matching distribution found for oslo.service===2.4.0 (from -c /home/zuul/src/opendev.org/openstack/requirements/upper-constraints.txt (line 36)) 2020-09-15 08:23:28.927896 | ubuntu-bionic | WARNING: You are using pip version 20.2.2; however, version 20.2.3 is available.12:40
rosmaitathat is from https://zuul.opendev.org/t/openstack/build/365944e68a0d4f3f8bf9237b637b0c2512:41
*** priteau has quit IRC12:42
rosmaitawe have oslo.service>=2.0.0 in requirements.txt12:42
toskyrosmaita: or maybe just a problem with pypi12:42
toskydidn't we hit a similar issue a few weeks ago?12:43
*** dave-mccowan has joined #openstack-infra12:43
*** piotrowskim has joined #openstack-infra12:45
zbryes we had a similar problem a few weeks back and my pypi ticket was closed quickly by a pypi admin. trying to find the link...12:45
dulekrosmaita: My bet is that it's getting the index - the higher version is listed there, but when it tries to get the individual package - the answer is 404.12:47
rosmaitadulek: makes sense12:48
zbrtosky: rosmaita: does https://github.com/pypa/warehouse/issues/8260 seem similar to you?12:48
toskythat sounds like it12:49
*** Goneri has joined #openstack-infra12:49
toskyit seems to be consistent now, maybe the system is overloaded12:49
*** sshnaidm has joined #openstack-infra12:50
weshay|ruckany chance tripleo can get this patch auto-merged... having trouble due to tox https://review.opendev.org/#/c/751828/12:51
weshay|ruckbhagyashri|rover, ^12:51
weshay|ruckzbr, fyi ^12:51
fungirosmaita: yes, the constraints list passed via -c specifies that pip must install exactly oslo.service===2.4.012:53
fungiand it's not seeing that version in the index returned by pypi12:53
*** chandankumar has joined #openstack-infra12:54
fungithe requirements list only really supplies package names and the version specifiers in it are ignored if you're using a constraints list to override version resolution12:55
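A tiny illustration of that behaviour, using the packaging library rather than pip itself; the version list is copied from the paste above, and the specifiers are the ones quoted in the error:

```python
# Sketch: why pip errors out instead of falling back to 2.3.x -- the
# constraints file pins an exact version, and the requirements entry only
# contributes the project name once constraints are in play.
from packaging.specifiers import SpecifierSet

requirement = SpecifierSet(">=2.0.0")    # from requirements.txt
constraint = SpecifierSet("===2.4.0")    # from upper-constraints.txt
stale_index = ["2.2.0", "2.3.0", "2.3.1", "2.3.2"]  # no 2.4.0 in the stale copy

candidates = [v for v in stale_index if v in requirement and v in constraint]
print(candidates)  # [] -> "No matching distribution found for oslo.service===2.4.0"
```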
*** priteau has joined #openstack-infra12:55
*** artom has joined #openstack-infra12:55
*** xek has joined #openstack-infra12:56
fungii have to go run some errands, but when i get back in a couple hours i can take a look and see if the situation has changed since what we observed yesterday, and maybe continue on from ianw's earlier efforts to play "spot the problem fastly endpoint"12:57
smcginnisLocal test works fine: Successfully installed oslo.service-2.4.012:57
smcginnisBad mirror or something on the PyPi side?12:58
fungiyeah, pypi itself seems fine, and some (likely the vast majority) of fastly's endpoints are returning coherent views of the index12:59
fungiyou basically have to test from a machine in an affected part of the world (the Paris, France area at last report) repeatedly to occasionally hit the problem copy12:59
zbrplease comment on the warehouse bug, so we can escalate it.13:01
smcginnisI'd love to go to Paris to test this.13:01
*** lpetrut has joined #openstack-infra13:03
dtantsuror from our CI machines :)13:07
* dtantsur would also prefer Paris, of course13:08
dtantsurif pypi itself is fine, would it be possible to somehow manually update our mirrors?13:10
zbrdtantsur: it is not possible because our mirrors are not mirrors, they are proxies.13:10
dtantsuraaaaahh13:11
*** ociuhandu has joined #openstack-infra13:12
gmannamotoki: AJaeger thanks for doc job fix.13:12
*** priteau has quit IRC13:15
*** hashar has quit IRC13:16
*** lbragstad has quit IRC13:16
amotokigmann: np13:19
amotokigmann: btw, do you have any failure example without texlive-full? at a glance, texlive-full installation takes a lot of time, so I wonder if we can investigate it more in detail....13:20
dulekOkay, so a different thing. I tracked some CI issues down to OOM killing VMs and containers on Kuryr gate. Anything changed regarding that recently? Or should I just look for a leaking app?13:21
dulekHm, dstat tells me that our test cases easily saturate 8 GBs of RAM.13:23
Tengumeh... if Paris has an issue, this impacts most of OVH things I guess.13:24
Tenguwas about to ask if there's a way to NOT use mirror.gra1.ovh.opendev.org in tripleo CI for a while.13:25
*** ociuhandu has quit IRC13:29
*** lbragstad has joined #openstack-infra13:32
dtantsurTengu: I see problems with rax as well13:35
*** xek has quit IRC13:36
TheJuliaI started noticing rax mirror issues late yesterday as well and figured it was just transitory13:38
TheJuliaGuess not :\13:38
Tengudtantsur: meh.....13:39
TenguI mostly saw ovh, but ok. pfff...13:39
Tenguwhere's rax infra located?13:42
TheJuliaTexas I believe13:43
gmannamotoki: yeah, it was failing with a PDF generation error on a missing package but i cannot remember exactly which. logs are not available now as it was fixed at the start of Aug, but it will be 100% reproducible if you remove that13:45
*** lmiccini has quit IRC13:45
gmannamotoki: also i did not notice any time difference in tox job after that, checked nova and cinder13:46
*** lmiccini has joined #openstack-infra13:46
amotokigmann: affected repos are those with rsvg-converter in "extensions" in doc/source/conf.py.13:46
*** ociuhandu has joined #openstack-infra13:46
TheJuliaAre we stuck just waiting until someone affiliated with pypi or fastly engages?13:46
amotokigmann: I am not sure nova and cinder use rsvg-converter13:47
gmannamotoki: ok13:48
amotokigmann: I believe I covered the context in the commit msg and you can find a failure mode there.13:49
gmannamotoki: we can give that a try but i am sure there are more failures there without texlive-full13:52
amotokigmann: np. the most important thing is to make it work. the current docs job takes long but it is not the longest one among jobs triggered by a single commit, so it does not matter now I think.13:53
*** yolanda has quit IRC13:53
*** yolanda has joined #openstack-infra13:54
amotokigmann: we can improve / explore a better set gradually13:54
*** dciabrin_ has joined #openstack-infra13:55
*** srwilkers_ has joined #openstack-infra13:57
*** arne_wiebalck_ has joined #openstack-infra13:57
*** gagehugo_ has joined #openstack-infra13:57
*** davidlenwell_ has joined #openstack-infra13:57
*** dougwig_ has joined #openstack-infra13:57
*** philroche_ has joined #openstack-infra13:57
*** guilhermesp_ has joined #openstack-infra13:57
*** nicolasbock_ has joined #openstack-infra13:57
*** gouthamr__ has joined #openstack-infra13:57
*** csatari_ has joined #openstack-infra13:57
*** gmann_ has joined #openstack-infra13:57
*** priteau has joined #openstack-infra13:57
*** cgoncalves has quit IRC13:58
*** jbryce_ has joined #openstack-infra13:58
*** d34dh0r53 has joined #openstack-infra13:58
*** xek has joined #openstack-infra13:59
*** srwilkers has quit IRC14:00
*** gouthamr has quit IRC14:00
*** jbryce has quit IRC14:00
*** guilhermesp has quit IRC14:00
*** nicolasbock has quit IRC14:00
*** dougwig has quit IRC14:00
*** davidlenwell has quit IRC14:00
*** gagehugo has quit IRC14:00
*** csatari has quit IRC14:00
*** fungi has quit IRC14:00
*** arne_wiebalck has quit IRC14:00
*** dciabrin has quit IRC14:00
*** philroche has quit IRC14:00
*** logan- has quit IRC14:00
*** gmann has quit IRC14:00
*** abhishekk has quit IRC14:00
*** jroll has quit IRC14:00
*** knikolla has quit IRC14:00
*** coreycb has quit IRC14:00
*** dtantsur has quit IRC14:00
*** antonym has quit IRC14:00
*** jbryce_ is now known as jbryce14:00
*** srwilkers_ is now known as srwilkers14:00
*** gagehugo_ is now known as gagehugo14:00
*** gouthamr__ is now known as gouthamr14:00
*** arne_wiebalck_ is now known as arne_wiebalck14:00
*** davidlenwell_ is now known as davidlenwell14:00
*** coreycb_ has joined #openstack-infra14:00
*** dougwig_ is now known as dougwig14:00
*** guilhermesp_ is now known as guilhermesp14:00
*** logan_ has joined #openstack-infra14:00
*** csatari_ is now known as csatari14:00
*** gmann_ is now known as gmann14:00
*** nicolasbock_ is now known as nicolasbock14:00
*** logan_ is now known as logan-14:00
*** coreycb_ is now known as coreycb14:00
*** philroche_ is now known as philroche14:00
*** mihalis68_ has joined #openstack-infra14:01
*** jamesdenton has quit IRC14:03
*** jamesdenton has joined #openstack-infra14:05
*** knikolla has joined #openstack-infra14:07
*** antonym has joined #openstack-infra14:07
*** fungi has joined #openstack-infra14:08
*** jroll has joined #openstack-infra14:08
*** ysandeep is now known as ysandeep|away14:08
*** dtantsur has joined #openstack-infra14:08
*** abhishekk has joined #openstack-infra14:11
Tenguzbr: is the 404 returned here a thing that can be used in the issue report? https://mirror.gra1.ovh.opendev.org/simple/oslo-log/14:15
Tenguhmm. sounds like every link on https://mirror.gra1.ovh.opendev.org/simple/ returns a 404 in fact... meh.14:15
zbrTengu: i doubt it. based on what i read from #opendev it seems to be yet another afs problem and not a pypi/cdn one.14:16
zbrstill, apparently nobody managed to set up a statusbot message around the issue, even though it is critical14:17
Tenguzbr: woohoo. filesystem issue then? afs - that's apple thing isn't it?14:17
*** lajoskatona has left #openstack-infra14:18
zbrTengu: not the apple one, neither the openafs one, likely https://docs.opendev.org/opendev/system-config/latest/afs.html ;)14:21
Tenguwhich IS openAFS apparently.14:22
zbr"AFS" seems to be a very popular acronym, not that zuul was not already confused with netflix one14:22
* Tengu recalls the old time when he had to push that thing up14:22
* zbr thinking that first letter from AFS comes from "Any" :D14:23
Tenguapparently more "Andrew"14:23
*** cgoncalves has joined #openstack-infra14:24
*** Tengu has quit IRC14:25
*** Tengu has joined #openstack-infra14:27
*** artom has quit IRC14:27
*** artom has joined #openstack-infra14:28
*** Topner has joined #openstack-infra14:30
clarkbto clarify afs has nothing to do with our pypi proxying14:34
clarkbthey are separate tools14:34
*** Topner has quit IRC14:35
*** dklyle has joined #openstack-infra14:37
weshay|rucksorry to ask this as usual.. but any chance we can get a force merge for https://review.opendev.org/#/c/751828/14:38
weshay|ruckhaving trouble getting it through due to various issues14:39
*** armax has joined #openstack-infra14:41
clarkbis one of the issues timeouts? because dd'ing 4GB of 0's to disk can be slow :/14:42
*** artom has quit IRC14:45
chandankumarclarkb: currently tripleo ci jobs are busted with multiple failures; we have to land multiple patches to fix that. https://review.opendev.org/#/c/751828/ is the first one but it got blocked on the pypi issue14:48
chandankumarso wanted it to go for force merge14:48
chandankumartimeouts, nope14:48
fungiTengu: TheJulia: we use three rackspace regions, all within the continental usa: dfw (texas), iad (virginia), ord (illinois)14:52
Tenguok. apparently the issue is deeper, I'm following discussions on #opendev.14:52
fungiTengu: zbr: 404 for https://mirror.gra1.ovh.opendev.org/simple/ would be pypi via proxy, not afs14:52
Tengustrong pointers to fastly and overall pypi infra, apparently some consistency issues...14:53
fungiahh, clarkb said as much too14:53
*** Topner has joined #openstack-infra14:56
*** dhill has quit IRC15:06
*** dhill has joined #openstack-infra15:07
*** xek has quit IRC15:08
smcginnisCan we get a status sent out so people stop proposing requirements patches to lower constraints for every package they fail to find?15:08
clarkbsmcginnis: Something like: #status notice Pypi'15:12
clarkber15:12
*** ykarel is now known as ykarel|away15:12
smcginnisHeh, that's probably enough. ;)15:12
clarkbsmcginnis: Something like: #status notice Pypi's CDN is serving stale package indexes for some indexes. We are sorting out how we can either fix or workaround that. In the meantime updating requirements is likely the wrong option.15:13
clarkb?15:13
clarkbsmcginnis: fwiw we're debugging and discussing in #opendev15:13
smcginnisYeah, watching there too.15:13
smcginnisThat message looks good to me.15:13
clarkbsmcginnis: one suspected workaround is for us to release new versions of the stale stuff15:13
smcginnisThat would really be a pain, but if that's what we need to do.15:13
smcginnisDo we even know what is all stale or not though?15:14
clarkb"yes"15:14
clarkbits the package indexes15:14
fungii doubt we'll fix it, i think the most we can do is provide sufficient info to the pypi sysadmins that they can forward along to fastly15:14
clarkbfor example oslo.log's latest version is 4.4.0 but we're getting served indexes that don't show that version (they do show 4.3.0)15:14
fungiright now i don't think we have enough information for them to do anything actionable15:14
clarkbfungi: they could resync all packages released during the fastly issue15:15
clarkbthey've already acknowledged there was a problem then and we're seeing packages released in that time frame have problems (personally I think that's enough to force a resync)15:15
smcginnisI would hope that would be an easy thing on their side, but I know nothing about their setup.15:15
fungiyeah, i honestly have no idea if that's something they have the ability to do directly, but they're also generally disinclined to reach out to fastly unless they have concrete evidence as to what the problem is15:15
clarkbfungi: they did in the past, they even exposed an api call in pypi to let people anonymously trigger it15:16
clarkbfungi: we did that with our mirroring if we got errors that were unexpected15:17
clarkbI don't know if they still expose that but we can probably try and dig that up and check if that works15:17
*** ociuhandu_ has joined #openstack-infra15:17
fungii think that only affected the specific endpoint you were curling though?15:17
clarkb`curl -X PURGE https://pypi.python.org/pypi/python-novaclient/json`15:18
fungiyeah, that's still hitting fastly15:18
clarkbsays our old bandersnatch docs15:18
funginot really sure if that tells that fastly endpoint to resync, or tells all of fastly to resync, or somehow tells pypi to force a resync in fastly15:19
*** slaweq has quit IRC15:19
clarkbshould we go ahead and try it for oslo.log? `curl -X PURGE https://pypi.org/pypi/oslo.log/json` ?15:20
*** ociuhandu has quit IRC15:21
fungiworth a try i suppose15:21
fungifor some reason i have a vague recollection someone said that stopped working at some point, but i really don't know for sure15:22
*** ociuhandu_ has quit IRC15:22
clarkbresponse was json that said status: ok15:22
clarkbwhether or not that actually did anything useful? I don't know :)15:22
fungicool, i guess log it and we can check whether new failures for the same package occur after now15:22
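For the record, the same purge request can be made without curl; a minimal sketch mirroring the command above (whether the "status: ok" reply actually forces fastly to refetch was, as noted, unknown):

```python
# Sketch: mirror the `curl -X PURGE` command above against the fastly-cached
# JSON metadata URL for a project.
import json
import urllib.request

def purge(project: str) -> dict:
    url = "https://pypi.org/pypi/%s/json" % project
    req = urllib.request.Request(url, method="PURGE")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(purge("oslo.log"))  # the attempt above got back a JSON body saying status: ok
```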
clarkbsmcginnis: any chance you have a list of packages released around august 24/25? we can trigger that for all of them and see if we get a change15:23
*** slaweq has joined #openstack-infra15:23
*** dklyle has quit IRC15:24
*** ysandeep|away is now known as ysandeep15:28
smcginnisclarkb: Looking...15:29
smcginnisclarkb: Is that date right? The last batch of oslo releases was done on Sept 11.15:31
clarkbsmcginnis: yes, that is when oslo.log 4.4.0 was released and roughly when fastly had problems15:31
clarkbI suspect anything that has had a more recent release is fine15:31
smcginnisoslo.service was another one I've seen, but that was in the Sept 11 batch.15:31
clarkbinteresting15:32
clarkbhave a link to the oslo.service failure?15:32
clarkb(mostly want to double check it's the same situation as oslo.log)15:33
fungithe pypa-dev discussion was around 2020-08-25 though15:33
fungi2020-09-11 is weeks later15:33
clarkbfungi: yes that's why I'm curious if it is the same thing15:33
clarkbif it is then the fastly issue becomes less interesting unless there was another one recently15:34
smcginnisHere's an oslo.service failure: https://zuul.opendev.org/t/openstack/build/b5118d79c5684fe08650226c5b32489e15:35
fungiand looks like i was digging into a similar issue on 2020-08-1915:36
clarkbthats a different mirror but similar issue15:37
clarkbhttps://mirror.ord.rax.opendev.org/pypi/simple/oslo-service/ shows the version that was missing in the build so similar symptoms15:38
fungii'll start a similar loop in dfw looking for oslo.service 2.4.015:39
zbrclarkb: see if https://gist.github.com/ssbarnea/404ebcfcec120224e1d97b13b4856e93 helps15:39
*** ianychoi__ has joined #openstack-infra15:41
clarkbzbr: what is the significance of the release_url?15:43
zbri was expecting to see something different than 4.4.0 on an outdated index15:44
*** ianychoi_ has quit IRC15:45
fungiyou can also just grep the response for oslo.log-4.4.0.tar.gz15:45
clarkbzbr: right it would be 4.3.0 or whatever but it would still echo passed that way15:45
openstackgerritHervé Beraud proposed openstack/pbr master: Adding pre-commit  https://review.opendev.org/74216015:45
clarkbfungi: ya I think that may be more what we want to check15:45
fungithat's what my current loop is testing15:45
fungibut it's also important to capture the X-Served-By header which tells you both layer identifiers for fastly's network (their second-level and first-level caches)15:46
*** lpetrut has quit IRC15:47
smcginnisclarkb, fungi: Releases between August 24-26: https://etherpad.opendev.org/p/august-24-25-releases15:47
fungiis pip going by what's in /simple or the json?15:50
fungibecause i've been checking the simple api index. maybe it's not stale but the json is?15:50
clarkbfungi: I think pip uses the simple html but pip has had so many changes I wouldn't bet on it15:52
*** lmiccini has quit IRC15:52
clarkbfungi: that may also explain why we have a hard time seeing it in our browsers15:56
fungiyeah, i'm digging in our proxy logs to work out which it is15:56
fungi"GET /pypi/simple/oslo-service/ HTTP/1.1" 200 15484 cache miss: attempting entity save "-" "pip/20.2.3 ...16:01
*** lucasagomes has quit IRC16:01
fungiso yeah, it's the simple api it's hitting, not the json metadata api which warehouse's webui fetches16:02
clarkband that is an up to date pip16:03
fungiright16:04
fungiany chance we know it's happening with recent pip and not just old pip 9.x?16:05
clarkbfungi: smcginnis' example was pip 20.2.216:05
* clarkb works on actually sending that status notice now16:06
fungiclose enough then16:06
*** gyee has joined #openstack-infra16:06
smcginnisclarkb: Gentle reminder that... ah, nevermind. :)16:06
clarkbslightly updated: #status notice Our PyPI caching proxies are serving stale package indexes for some packages. We think because PyPI's CDN is serving stale package indexes. We are sorting out how we can either fix or workaround that. In the meantime updating requirements is likely the wrong option.16:07
clarkbthat still look good?16:07
*** artom has joined #openstack-infra16:08
clarkb#status notice Our PyPI caching proxies are serving stale package indexes for some packages. We think because PyPI's CDN is serving stale package indexes. We are sorting out how we can either fix or workaround that. In the meantime updating requirements is likely the wrong option.16:08
openstackstatusclarkb: sending notice16:08
* clarkb goes for it so it doesn't get ignored again16:09
fungithat works, though verbose16:09
*** ykarel|away has quit IRC16:09
clarkbya it's verbose because we keep getting people showing up with input and have to walk them through our understanding again16:09
fungialso blasting 100+ irc channels to ask a handful of people to stop submitting misguided changes to the requirements repo seems a bit extreme, but i agree letting folks know there's an issue with pypi makes sense at least16:10
-openstackstatus- NOTICE: Our PyPI caching proxies are serving stale package indexes for some packages. We think because PyPI's CDN is serving stale package indexes. We are sorting out how we can either fix or workaround that. In the meantime updating requirements is likely the wrong option.16:10
clarkbanother note: fastly seems to use anycast IP addrs16:11
clarkbI see the same IPs on the west coast of the US that our mirror in france sees16:12
openstackstatusclarkb: finished sending notice16:12
clarkbzbr: ^ fyi I think that limits the utility of your script a bit16:12
*** jcapitao has quit IRC16:12
clarkbit's not sufficient to simply dig the IPs then curl against them. We have to curl against them a whole bunch and hope to get a representative sample :/16:12
fungiyeah, that's why i'm just repeatedly requesting the same url. identifying the problem by endpoint ip address isn't useful16:13
fungitheir X-Served-By: headers tell you which node served it16:14
clarkbfungi: which region at least. I don't think they identify the individual server?16:14
fungicache-bwi5138-BWI, cache-dfw18641-DFW cache-bwi5138-BWI, cache-dfw18641-DFW16:15
clarkbya I don't think we've ever seen the digits change before. But I may be wrong about that16:15
fungicache-bwi5138-BWI, cache-dfw18630-DFW16:15
clarkbthose are different regions16:15
clarkbbasically we know there is BWI presence and DFW presence but not which servers within those locations we're hitting16:16
*** ysandeep is now known as ysandeep|away16:16
fungino, cache-bwi5138-BWI seems to be the first-level (central) cache and then cache-dfw18630-DFW is the second level (local) cache16:16
clarkbah16:16
fungiif you try it from different places the first level cache is basically always in baltimore16:16
fungibut the second level cache seems to be near the origin of your request16:17
fungii'm recording both in case there's some correlation at the first level16:17
clarkbx-cache may also be interesting (says if they hit at each location)16:19
clarkbthe other thought that came up last time was maybe we're truncating the response just perfectly so that 4.3.0 is there but 4.4.0 is not. But that would result in an invalid html document (but maybe pip doesn't care)16:21
clarkbcuriously I get different serials between firefox and chrome. curl has the same serial as chrome16:22
clarkbthis is for oslo-log's simple index.html. If you view source there is a comment at the end with the serial16:23
clarkboh wait I may have different urls. Double checking16:23
fungiyeah, i think i'm going to change up how i'm looping this and record the full response when the index seems to be stale16:23
clarkbyes I was comparing oslo-service and oslo-log my mistake16:24
fungibecause as soon as i tell them i'm seeing indices which are missing that version, the next question will be what's in the response16:24
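The serials being compared here can also be pulled out programmatically. A rough sketch, assuming the simple page still ends with an HTML comment of the form <!--SERIAL N--> and the JSON API still exposes last_serial:

```python
# Sketch: compare the serial embedded in a project's /simple/ page with the
# last_serial from the JSON API -- a mismatch would point at a stale CDN copy.
import json
import re
import urllib.request

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def serials(simple_name, project):
    simple = fetch("https://pypi.org/simple/%s/" % simple_name).decode("utf-8")
    m = re.search(r"<!--SERIAL (\d+)-->", simple)
    simple_serial = int(m.group(1)) if m else None
    meta = json.loads(fetch("https://pypi.org/pypi/%s/json" % project))
    return simple_serial, meta.get("last_serial")

print(serials("oslo-log", "oslo.log"))
print(serials("oslo-service", "oslo.service"))
```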
*** psachin has quit IRC16:30
*** jlvillal has quit IRC16:32
*** jlvillal has joined #openstack-infra16:32
*** Topner has quit IRC16:33
clarkbI wonder if verbose pip would say more16:39
clarkbwe might be able to approach it from that angle too16:39
clarkbpip --log will write a maximum verbosity log file16:40
clarkbmaybe we add that to devstack16:41
fungimy new and improved loop requests the simple index for the package with response headers included, saves the result to a timestamped file, checks that file to see if it includes the new package version and deletes the file if so, or if it doesn't then it logs a timestamp to stdout so the file can be inspected16:41
fungithis way *if* we get a bad response, we can say which fastly endpoint served it up and also what the exact content it served was16:42
clarkb++16:42
fungiand the time of the request16:42
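In Python, that kind of watch loop might look like the sketch below. The real loop was shell around wget; the package, expected filename and polling interval here are just placeholders:

```python
# Sketch: poll a /simple/ index, keep a timestamped copy of any response that
# is missing the expected release, and note which fastly nodes served it.
import time
import urllib.request

URL = "https://pypi.org/simple/oslo-service/"
EXPECTED = "oslo.service-2.4.0.tar.gz"   # filename pip said it could not find

while True:
    stamp = time.strftime("%Y%m%dT%H%M%S")
    with urllib.request.urlopen(URL) as resp:
        served_by = resp.headers.get("X-Served-By", "unknown")
        headers = str(resp.headers)
        body = resp.read().decode("utf-8", "replace")
    if EXPECTED not in body:
        # stale copy: keep the headers and exact body for reporting upstream
        with open("stale-%s.txt" % stamp, "w") as out:
            out.write(headers + "\n" + body)
        print("%s stale response from: %s" % (stamp, served_by))
    time.sleep(10)
```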
*** dtantsur is now known as dtantsur|afk16:46
*** slaweq has quit IRC16:47
*** eolivare has quit IRC16:48
*** piotrowskim has quit IRC16:51
*** jpena is now known as jpena|off16:56
*** derekh has quit IRC16:56
*** jtomasek has joined #openstack-infra16:58
clarkbfungi: fwiw I've tested the --log functionality of pip locally and it isn't going to provide the verbosity we need I think17:16
clarkbit does show all of the urls for packages it does find in the index along with their details17:16
clarkbwhich might be useful depending on how exactly this fails (like the truncate case more than the stale case, I think)17:16
clarkbalso it does seem to confirm we're parsing html not json when looking at package indexes17:17
fungiso far i haven't hit any stale replies for oslo.log in ovh-gra1 or oslo.service in rax-dfw17:28
*** paladox has quit IRC17:37
*** paladox has joined #openstack-infra17:38
*** tosky has quit IRC17:40
clarkbya I'm wondering if we shouldn't try and engage with pypi more directly17:43
clarkbmaybe they can at least check things on their end?17:43
clarkbbecause we have hit this several times now and just don't have the visibility to debug easily17:44
*** Lucas_Gray has quit IRC17:50
*** vishalmanchanda has quit IRC18:13
*** andrewbonney has quit IRC18:23
*** xek has joined #openstack-infra18:25
*** ociuhandu has joined #openstack-infra18:29
*** cloudnull has joined #openstack-infra18:33
*** ociuhandu has quit IRC18:34
cloudnullqq - we're seeing a mess of post failures - it looks like it's something being seen across the board - https://zuul.openstack.org/builds?result=POST_FAILURE - is this something known / on-going?18:34
cloudnullhttps://zuul.opendev.org/t/openstack/build/4bb04a0910a44bb4bc9c6486a1e13b9b18:35
cloudnullmore specific example from one of the jobs I was looking at18:35
clarkbcloudnull: yes, there was a quota issue with one of our swift storage backends. We believe that has been fixed18:35
cloudnull++18:36
* cloudnull will begin rechecking my couple of jobs 18:36
clarkband it was fixed at about 1600UTC so right after the most recent POST_FAILURE18:36
cloudnullthanks clarkb18:36
cloudnullsounds good. Appreciate it !18:36
clarkbno problem18:36
weshay|ruckthanks!18:37
*** zzzeek has quit IRC18:39
*** zzzeek has joined #openstack-infra18:41
frickleroh, that's what the no-log POST_FAILUREs were. I was first looking at some with logs that I connected to the pypi issues, but couldn't find how that would cause logs to be missing completely18:48
*** rosmaita has quit IRC18:49
*** zzzeek has quit IRC18:51
fungiyep, completely unrelated. we topped out the max swift containers quota we had in openedge18:51
fungithat's been substantially increased now18:52
frickleryeah, I read that earlier, just didn't make the connection to POST_FAILURES18:52
*** Lucas_Gray has joined #openstack-infra18:52
*** zzzeek has joined #openstack-infra18:53
TheJuliahave things calmed down with pypi? I ask because it kind of looks like a job launched about an hour ago may be working as expected whereas one launched two hours ago is very unhappy18:55
*** ralonsoh has quit IRC18:56
fricklerTheJulia: I don't think we can confirm that yet, but it would be good news indeed18:59
fungiTheJulia: not really sure. clarkb was still finding some example failures of the same class (pip complaining about requested releases missing)18:59
TheJulia:(18:59
* TheJulia crosses her fingers18:59
clarkbthey have not calmed down18:59
TheJulia:( ^218:59
clarkbit seems to be happening in many regions and with different packages18:59
clarkband it is very very confusing18:59
fungiwe're still trying to work out how to directly reproduce such a failure while saving a complete copy of the server response, but it's a real heisenbug19:00
clarkbfungi: oh I bet you aren't pulling gzip files either19:00
*** zzzeek has quit IRC19:00
clarkbbut pip sends Accept-Encoding: gzip, deflate and Accept: text/html19:01
clarkbalso it sends a user agent string which they may key off of19:01
fungiahh, i can try adding that19:01
clarkbyou won't be able to grep it as easily if you do :/19:01
TheJuliaWell, as long as we don't need to launch a Manhattan project to remedy it... I guess.19:01
clarkbwe might almost want to hack up pip itself19:01
*** zzzeek has joined #openstack-infra19:03
fungithough i need to do more than just passing Accept-Encoding in the headers, seems i need to also tell wget about it19:03
clarkbfungi: wget has a new flag to handle gzip content iirc19:04
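To rule that out, the index can be requested with the same headers pip sends. A sketch, with the Accept and Accept-Encoding values taken from clarkb's note above and a shortened pip-style User-Agent as an assumption (the real one includes environment details):

```python
# Sketch: fetch the simple index the way pip does -- gzip-encoded and with a
# pip-like User-Agent -- in case fastly varies its cache on those headers.
import gzip
import urllib.request

URL = "https://pypi.org/simple/oslo-log/"
HEADERS = {
    "Accept": "text/html",
    "Accept-Encoding": "gzip, deflate",
    "User-Agent": "pip/20.2.3",   # abbreviated; assumption about the UA string
}

req = urllib.request.Request(URL, headers=HEADERS)
with urllib.request.urlopen(req) as resp:
    served_by = resp.headers.get("X-Served-By", "unknown")
    raw = resp.read()
    if resp.headers.get("Content-Encoding") == "gzip":
        raw = gzip.decompress(raw)   # deflate handling omitted for brevity

body = raw.decode("utf-8", "replace")
print("oslo.log-4.4.0.tar.gz" in body, served_by)
```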
*** Topner has joined #openstack-infra19:08
*** zzzeek has quit IRC19:10
*** tbachman has left #openstack-infra19:12
*** zzzeek has joined #openstack-infra19:12
*** Topner has quit IRC19:13
*** priteau has quit IRC19:16
*** Topner has joined #openstack-infra19:49
*** Topner has quit IRC19:53
*** Topner has joined #openstack-infra20:28
*** Topner has quit IRC20:33
*** xek has quit IRC20:39
*** slaweq has joined #openstack-infra20:54
*** slaweq has quit IRC20:55
*** slaweq has joined #openstack-infra20:58
*** stevebaker has quit IRC21:00
*** slaweq has quit IRC21:05
*** slaweq has joined #openstack-infra21:20
*** stevebaker has joined #openstack-infra21:23
*** dave-mccowan has quit IRC21:43
*** Lucas_Gray has quit IRC21:50
*** Lucas_Gray has joined #openstack-infra21:51
*** slaweq has quit IRC21:59
*** rcernin has joined #openstack-infra22:25
*** d34dh0r53 has quit IRC22:28
*** Topner has joined #openstack-infra22:29
*** Topner has quit IRC22:35
*** rfolco|ruck has quit IRC22:51
*** tkajinam has joined #openstack-infra22:51
*** Goneri has quit IRC23:03
*** smcginnis has quit IRC23:20
*** smcginnis has joined #openstack-infra23:22
*** tetsuro has joined #openstack-infra23:54
*** Lucas_Gray has quit IRC23:55
