Wednesday, 2021-10-20

00:05 <opendevreview> Merged openstack/project-config master: Add ubuntu-bionic-32GB  https://review.opendev.org/c/openstack/project-config/+/814683
00:22 <opendevreview> Merged openstack/project-config master: infra-package-needs: install latest pip  https://review.opendev.org/c/openstack/project-config/+/814677
04:40 *** ykarel_ is now known as ykarel
04:40 <opendevreview> Merged openstack/project-config master: Remove fedora-32 disk image config  https://review.opendev.org/c/openstack/project-config/+/795644
05:08 *** ysandeep|out is now known as ysandeep
06:34 *** ysandeep is now known as ysandeep|afk
07:03 *** ysandeep|afk is now known as ysandeep|trng
07:29 *** jpena|off is now known as jpena
08:00 *** ykarel is now known as ykarel|lunch
09:00 *** iurygregory_ is now known as iurygregory
10:15 *** ykarel|lunch is now known as ykarel
10:19 *** ysandeep|trng is now known as ysandeep|afk
10:31 *** rlandy is now known as rlandy|ruck
11:18 *** dviroel|rover|afk is now known as dviroel|rover
11:25 *** jpena is now known as jpena|lunch
11:42 *** jcapitao is now known as jcapitao_lunch
12:13 <zul> hey guys it seems we are still having problems with the bindep issue from yesterday, any eta on this on the xenial images https://zuul.opendev.org/t/openstack/build/0f2355250c814549b164c0fc6c8f9c5d
12:21 *** ysandeep|afk is now known as ysandeep
12:24 *** jpena|lunch is now known as jpena
12:41 <frickler> zul: if I look at the build logs, it seems the next build should start in 10 mins, so maybe check again in 2-3h
12:41 <zul> ack
12:43 *** jcapitao_lunch is now known as jcapitao
12:53 <fungi> yeah, the current images were uploaded around 21.5 hours ago, so chances are we'll have nodes booting from newly uploaded images in another ~2.5 hours
12:58 *** timburke_ is now known as timburke
13:10 <opendevreview> Merged openstack/openstack-zuul-jobs master: Add OpenStack Charm gate jobs without Python 3.5  https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/814126
13:36 <zaitcev> How do I report an issue to the infra team? Looks like a commit-msg hook is missing on opendev.org, although it's present at review.opendev.org.
13:36 <zaitcev> Here: https://bugs.launchpad.net/swift/+bug/1947862
13:40 <fungi> zaitcev: opendev.org wouldn't have any need for the gerrit commit-msg hook, gerrit serves that because it's the only system users can push to
13:41 <frickler> zaitcev: what version of git-review are you using? I cannot reproduce this locally
13:41 <zaitcev> fungi: see the bug though. When cloned from opendev.org, git-review constructs a URL for commit-msg which points to opendev.org. It ignores .gitreview in that case.
13:41 <fungi> according to that bug report, it's trying to retrieve the hook from github?
13:42 <fungi> The following command failed with exit code 104 "GET https://github.com/tools/hooks/commit-msg"
13:43 <fungi> the only reason i can think of that git-review would do that is if the user created a custom git remote named "gerrit" and pointed it at github
13:44 <fungi> or overriding the gerrit.host setting git-review relies on to be github instead of review.opendev.org
13:44 <fungi> regardless, it looks like someone has misconfigured something badly there
13:45 <zaitcev> Indeed, cloning from opendev.org and git-review -s work for me too, but they go straight to ssh.
13:45 <zaitcev> It says
13:45 <zaitcev> "No remote set, testing ssh://zaitcev@review.opendev.org:29418/openstack/swift.git"
13:45 <zaitcev> and everything is fine then.
13:46 <fungi> yes, git-review uses gerrit's ssh-based socket for git operations by default, though can be configured to use https
13:47 <fungi> maybe the user put the wrong thing in when setting it up to interact with gerrit over https, and told it github was the gerrit server
13:48 <zaitcev> I see. Thanks a lot, guys. I'll update the bug and we'll take it from there.
13:51 <fungi> thanks zaitcev! if they need more help troubleshooting git-review settings, they can always ask in here or on the ml... bug trackers aren't really great places to try to do user support obviously
13:51 <zaitcev> Indeed.
13:51 <zaitcev> Although that's pretty much how I do it at work... Very hard to get in touch with users.
13:52 <clarkb> they may be creating a gerrit remote that isn't gerrit?
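For anyone hitting the same symptom, a rough sketch of how to check what git-review is actually pointed at and how to fetch the commit-msg hook by hand. It assumes the default remote name "gerrit" and the standard Gerrit hook locations; the username is a placeholder, and none of this is the project's documented procedure.

    # Show whether a remote named "gerrit" already exists and where it points;
    # git-review derives the hook URL from it, so a remote aimed at github.com
    # would explain the failed GET seen in the bug report.
    git remote -v

    # If it points somewhere wrong, drop it and let git-review recreate it
    # from the repository's .gitreview file:
    git remote remove gerrit
    git review -s

    # The hook can also be fetched directly from Gerrit, over ssh or https:
    scp -p -P 29418 <username>@review.opendev.org:hooks/commit-msg .git/hooks/
    # or
    curl -Lo .git/hooks/commit-msg https://review.opendev.org/tools/hooks/commit-msg
    chmod +x .git/hooks/commit-msg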
13:53 <clarkb> zul: re your email to the starlingx list I think the issue is with python3.5 not python2.7 on older systems
13:55 <fungi> from starlingx's perspective, it's breaking their python 2.7 jobs, but it's really that the bindep install on ubuntu-xenial nodes where they're running those py27 jobs is broken
13:56 <fungi> and really the issue is that the last bindep release broke support for old pip which lacks support for the python_requires metadata by adding an open-ended requirement on something which has dropped python 3.5 support
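To make that failure mode concrete, a minimal sketch of why old pip trips over this and what a stop-gap could look like. The pin boundary shown is purely illustrative, not an actual bindep release number; the real fix is the newer pip baked into the images by the infra-package-needs change merged above.

    # pip releases that understand python_requires skip packages whose metadata
    # excludes the running interpreter; the pip bundled on ubuntu-xenial
    # predates that, so it installs the newest releases and then fails.
    python3 -m pip --version

    # Upgrading pip first (to the newest version that still supports the
    # interpreter) lets the python_requires filtering do its job:
    python3 -m pip install --upgrade pip
    python3 -m pip install bindep

    # A temporary workaround on a node that cannot be rebuilt would be to pin
    # an older bindep, e.g. (the version boundary here is hypothetical):
    python3 -m pip install 'bindep<2.9'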
13:58 <zul> clarkb: https://zuul.opendev.org/t/openstack/build/ead6561991e54b13ad2cc6ebea5a82e4
13:59 <clarkb> ah I'm a bit surprised that starlingx would run py27 jobs on such an old release. Newer releases do have python2.7 too
14:10 <zul> i'm not, which one should we be using for py27 jobs?
14:11 <clarkb> zul: I think focal still has python2.7
14:11 <clarkb> (though it may be the last one?)
14:11 <zul> i would have to check
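For reference, a quick check that could be run on an ubuntu-focal node (hypothetical session) to confirm the python2.7 package is still available there before repointing py27 jobs at it:

    apt-cache policy python2.7
    # or, inside a job's pre-run step, something along the lines of:
    sudo apt-get install -y python2.7 && python2.7 --version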
14:24 *** vishalmanchanda_ is now known as vishalmanchanda
14:26 *** andrewbonney_ is now known as andrewbonney
14:34 <opendevreview> Dr. Jens Harbott proposed openstack/project-config master: Move the daily periodic trigger earlier  https://review.opendev.org/c/openstack/project-config/+/814794
15:00 *** carloss_ is now known as carloss
15:01 <opendevreview> Elod Illes proposed openstack/openstack-zuul-jobs master: Remove ocata branch from periodic-stable template  https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/814799
15:05 <afaranha> timburke, fungi Hi, I'm back with the results :D I was able to reproduce the issue on the node you gave me
15:05 <afaranha> I hit the out of space issue, and ran 'df'
15:05 <afaranha> (.temp_venv) [zuul@centos-8-ovh-gra1-0027003455 swift]$ df -h
15:05 <afaranha> Filesystem      Size  Used Avail Use% Mounted on
15:05 <afaranha> devtmpfs        3.8G     0  3.8G   0% /dev
15:05 <afaranha> tmpfs           3.8G     0  3.8G   0% /dev/shm
15:05 <afaranha> tmpfs           3.8G  448K  3.8G   1% /run
15:05 <afaranha> tmpfs           3.8G     0  3.8G   0% /sys/fs/cgroup
15:05 <afaranha> tmpfs           777M     0  777M   0% /run/user/1000
15:05 <afaranha> tmpfs           777M     0  777M   0% /run/user/0
15:06 <afaranha> It seems that the issue is with /mnt/config
15:06 <afaranha> let me put it on paste.openstack, it had some entries
15:07 <afaranha> https://paste.openstack.org/raw/810105/
15:10 <clarkb> afaranha: /mnt/config is a RO filesystem supplying openstack metadata to the server. I think it is expected that it be 100% full and your code should never try to write to it
15:10 <clarkb> I really doubt that is the issue
15:11 <afaranha> clarkb, I don't know where the issue is then :/ xfstmp is at 5% usage only
15:13 <clarkb> is xfs metadata a different usage pool?
15:14 <clarkb> I know btrfs reports usage poorly to df
15:14 <clarkb> and you have to use btrfs specific tools to actually understand use and do a rebalance to free space that otherwise appears available
15:14 <clarkb> xfs_spaceman and xfs_scrub maybe?
15:22 <afaranha> I have no idea what you're talking about :P
15:22 <afaranha> clarkb, sorry, I'll need some googling on that matter
15:22 <afaranha> "is xfs metadata a different usage pool?" So the issue here is on the metadata itself, right?
15:23 <clarkb> basically df showing you 5% use may not tell the whole story. I would look into xfs specific tools to verify there is free space
15:23 <clarkb> afaranha: I don't know, I don't know much about your tests
15:28 <timburke> fwiw, it's a swift functional test that's failing with fips mode enabled -- but there's another fips func test job on the same patch that *succeeds*
15:28 <timburke> my best guess is that it *is* some kind of xattr/metadata issue, and not the sort of thing that df would easily identify for us
15:31 *** ykarel is now known as ykarel|away
15:48 <fungi> afaranha: the question was whether xfs may have a separate limit on the amount of extended attributes it can store, and maybe that's what's been exceeded
15:50 <timburke> and in particular, whether that limit changes when running under fips mode (since the non-fips equivalent passes)
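A few commands for looking past plain df -h on an XFS filesystem, along the lines of the xfs-specific tooling suggested above. The mountpoint and device names are placeholders, and whether any of these limits actually differ under FIPS mode is exactly the open question here.

    df -i /srv/node/disk1               # inode usage, which df -h does not report
    xfs_info /srv/node/disk1            # filesystem geometry, including inode/attr sizing
    sudo xfs_db -r -c freesp /dev/vdb1  # free-space histogram read from the device
    sudo xfs_spaceman -c freesp /srv/node/disk1   # similar report on newer xfsprogs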
16:31 *** jpena is now known as jpena|off
16:41 *** ysandeep is now known as ysandeep|away
17:01 <sean-k-mooney> clarkb: fungi quick question: if we wanted to move something from storyboard to launchpad is there any automation for that?
17:01 <clarkb> I'm not aware of any
17:01 <sean-k-mooney> e.g. to move active stories to new launchpad bugs and to create the launchpad projects
17:01 <sean-k-mooney> ok
17:01 <clarkb> all of the data is public and there are apis on both sides so you should be able to do it, but I'm not aware of anyone having done so
17:02 <sean-k-mooney> if we just flip the definition in project-config can the infra tooling create the launchpad project for us?
17:02 <fungi> probably the biggest hurdle will be that the lp api likes to choke a lot, fall over, and then you have to wait a while to try again
17:02 <fungi> if you're trying to copy old bugs into it
17:02 <fungi> sean-k-mooney: there is no infra tooling which creates launchpad projects, that's all manual
17:03 <sean-k-mooney> ok
17:03 <clarkb> (and there never was)
17:03 <sean-k-mooney> we are not sure if we will actually move things
17:03 <sean-k-mooney> but basically nova uses launchpad for everything and placement uses storyboard
17:03 <sean-k-mooney> a few cycles ago placement came back into the compute deliverable
17:04 <sean-k-mooney> we have not really been monitoring the placement storyboard projects and we are debating if we should move the placement project in storyboard to launchpad
17:04 <sean-k-mooney> and if we need to copy any bugs or not
17:06 <sean-k-mooney> i think right now we are not sure the effort to move is worth it, but we get a couple of bugs reported in storyboard every year
17:07 <sean-k-mooney> but we never really triage them or work on them since we don't see them
17:11 <fungi> sb can be set to notify you by e-mail when a project you're monitoring gets a new story task or someone comments on a story which has a task for it, but yes, if the people who are working on placement are doing most of their defect and task tracking in lp then it makes sense to tell people to start filing new reports there
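Since the data on both sides is public, a rough starting point for a one-off script that lists the stories to migrate. The project_id value and query parameters are assumptions to verify against the StoryBoard API documentation, and filing the matching Launchpad bugs would still be a separate step.

    # List active stories for a project from the public StoryBoard API
    # (the project_id below is made up):
    curl -s 'https://storyboard.openstack.org/api/v1/stories?project_id=1234&status=active' \
      | jq -r '.[] | "\(.id)\t\(.title)"'
    # Each story would then need a corresponding bug created on the Launchpad
    # project (e.g. via launchpadlib), since no infra tooling automates that.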
17:25 <clarkb> zul is gone but we expect the py35 jobs are happy again
17:25 <clarkb> (and generally other jobs on xenial)
17:27 <fungi> probably any which started after ~14:00z
17:27 <fungi> 814647 was rechecked at 14:10:36 and worked
22:25 *** rlandy|ruck is now known as rlandy|ruck|bbl
22:53 *** dviroel|rover is now known as dviroel|rover|afk
23:16 *** dviroel|rover|afk is now known as dviroel|rover|out
