opendevreview | Matthew Oliver proposed openstack/swift master: sharder: update shard storage_policy_index if roots changes - Simple https://review.opendev.org/c/openstack/swift/+/809317 | 06:46 |
opendevreview | Tim Burke proposed openstack/python-swiftclient master: Update master for stable/xena https://review.opendev.org/c/openstack/python-swiftclient/+/808446 | 15:55 |
opendevreview | Tim Burke proposed openstack/python-swiftclient master: Add Python3 yoga unit tests https://review.opendev.org/c/openstack/python-swiftclient/+/808447 | 15:55 |
opendevreview | Merged openstack/python-swiftclient master: Update master for stable/xena https://review.opendev.org/c/openstack/python-swiftclient/+/808446 | 18:03 |
opendevreview | Tim Burke proposed openstack/swift master: ring: Introduce a v2 ring format https://review.opendev.org/c/openstack/swift/+/808530 | 18:21 |
opendevreview | Tim Burke proposed openstack/swift master: ring: Allow RingData to vary dev_id_bytes https://review.opendev.org/c/openstack/swift/+/808531 | 18:21 |
opendevreview | Tim Burke proposed openstack/swift master: Allow ring-builder CLI users to specify device ID https://review.opendev.org/c/openstack/swift/+/808532 | 18:21 |
opendevreview | Tim Burke proposed openstack/swift master: ring: Allow builder to vary dev_id_bytes https://review.opendev.org/c/openstack/swift/+/808533 | 18:21 |
opendevreview | Tim Burke proposed openstack/swift master: ring: Keep track of last primary nodes from last rebalance https://review.opendev.org/c/openstack/swift/+/790550 | 18:21 |
opendevreview | Merged openstack/python-swiftclient master: Add Python3 yoga unit tests https://review.opendev.org/c/openstack/python-swiftclient/+/808447 | 18:38 |
zaitcev | What's up with Tim? Is he on vacation? | 20:30 |
opendevreview | Clay Gerrard proposed openstack/swift master: merge root timestamps https://review.opendev.org/c/openstack/swift/+/809482 | 21:11 |
zaitcev | Strange failure with Tempest, I filed this: https://bugs.launchpad.net/openstack-gate/+bug/1943884 | 21:35 |
clarkb | zaitcev: there is a bit of fallout from what we think is a tox update that pulls in newer pip and setuptools, and then some interaction between them and the dep solver in pip | 21:39 |
clarkb | Basically the dep solver in pip is having a sad. I half expect the next requirements update may help sort out a bunch of that though. You might want to add openstack requirements to that bug if they have an lp tracker | 21:40 |
clarkb | https://launchpad.net/openstack-requirements they do | 21:40 |
zaitcev | Thanks, but is this the right thing to do? I'm rather clueless on how our LP setup works. | 21:41 |
clarkb | zaitcev: well I'm not sure anyone looks at openstack-gate anymore. It needs to get to more specific people (back when you had people like mtreinish and sdague around they would look at those bugs) | 21:42 |
clarkb | And dependency solver errors are in the realm of openstack requirements | 21:42 |
clarkb | fungi: ^ fyi more fallout? | 21:42 |
zaitcev | clarkb: thanks.... I'll see if I can figure out how to add other projects. | 21:43 |
clarkb | zaitcev: click the also affects project button on the bug then type in openstack-requirements and select what the search pops up iirc | 21:44 |
zaitcev | clarkb: Launchpad says "OpenStack Global Requirements does not use Launchpad to track bugs. Nobody will be notified about this issue." | 21:45 |
clarkb | ha they need to update their readme then | 21:46 |
clarkb | zaitcev: in that case add devstack as this happened during stack.sh | 21:47 |
clarkb | and then the qa team will see it | 21:47 |
fungi | clarkb: thanks, it does sort of seem to be all over the place today | 21:50 |
fungi | great timing too, with openstack release work getting into full swing | 21:51 |
zaitcev | fungi: this has a little history: the first failure was on August 27. I re-checked it yesterday and it failed in the same place. Only then did I think to file a bug in LP. | 21:51 |
fungi | zaitcev: okay, so maybe not the same thing clarkb was talking about, which has really just started today as far as we can tell | 21:52 |
clarkb | fungi: maybe it's the pypi-serves-old-stuff issue then? And we should go check the cache to see if we can find an old index for reno? | 21:52 |
clarkb | that job ran in iweb | 21:53 |
clarkb | so ya the mtl suspicion I guess | 21:53 |
fungi | mmm, yeah that's the right region of the world for it to be that problem | 21:53 |
clarkb | fungi: do you remember how to grep in the apache cache? | 21:54 |
clarkb | looks like it is all zip data | 21:54 |
fungi | ykarel noted an issue with an old reno index getting served there two days ago | 21:54 |
fungi | the zip data could be wheels? | 21:55 |
clarkb | oh good point it could be the raw data is zipped | 21:55 |
fungi | whl files are really just renamed zipballs | 21:55 |
zaitcev | Okay. Thanks for looking into it. As you can see I'm entirely clueless, but I figured that rechecking endlessly would not do any good. | 22:03 |
clarkb | zaitcev: if it is the pypi-serves-stale-index-data problem then rechecking will eventually get it through. That particular problem is one where we think the CDN endpoint near montreal is more likely than others to fail talking to the primary pypi backend and fall back to one that is not very up to date | 22:07 |
fungi | i just did `curl -XPURGE https://pypi.org/simple/reno` (and a second time with a trailing / just in case). we've sometimes seen that help | 22:12 |
fungi | it supposedly signals to fastly (pypi's donated cdn provider) to drop the cached content for those urls and refetch | 22:13 |
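(A minimal sketch of the purge fungi describes, spelled out with both URL forms; whether Fastly honours an unauthenticated PURGE for pypi.org depends on PyPI's CDN configuration, so treat this as best-effort.)

```sh
# Ask the CDN to drop its cached copy of the reno simple index so the next
# request is refetched from the origin.
curl -X PURGE https://pypi.org/simple/reno
# The trailing-slash URL is cached as a separate object, so purge it as well
# ("just in case", as above).
curl -X PURGE https://pypi.org/simple/reno/
```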
clarkb | fungi: I ended up doing something like `find /var/cache/apache2/proxy -type f -name \*.header -exec grep reno {} \;` then skimmed that list. In the list only one entry seems to belong to pip fetching the index. However, I note that pip sets a cache-control max-age=0 in there | 22:16 |
clarkb | fungi: I think that means we have to catch it with our own clients in order to see it | 22:16 |
clarkb | (the index I found looks complete too) | 22:16 |
clarkb | basically pip is bypassing our caches (yay) and that means it will hit the backends frequently | 22:17 |
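(A hypothetical expansion of clarkb's one-liner, assuming the usual Apache mod_cache_disk layout in which each cached object has a *.header file holding its URL and response headers; it lists the cached entries that mention reno and prints any Cache-Control lines, which is where the max-age=0 sent by pip would show up.)

```sh
# The .header files are partly binary, so force grep to treat them as text (-a).
find /var/cache/apache2/proxy -type f -name '*.header' -exec grep -la reno {} + |
while read -r hdr; do
    echo "== $hdr"
    # Show anything identifying the entry plus its cache-control directives.
    grep -aiE 'reno|cache-control' "$hdr"
done
```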
clarkb | I wonder if we shouldn't just file this with pypa and say it has happened before, and ask them to check their metrics and logs to see if it happens again? | 22:17 |
clarkb | we can stop polluting this channel now. Sorry for the noise | 22:18 |
zaitcev | No, it was very edifying. | 23:10 |