opendevreview | melanie witt proposed openstack/project-config master: launchpad: Use repo name to match project + series https://review.opendev.org/c/openstack/project-config/+/883603 | 01:13 |
---|---|---|
*** timburke_ is now known as timburke | 06:46 | |
opendevreview | Merged openstack/project-config master: grafana: Update kolla dashboards with proper job names https://review.opendev.org/c/openstack/project-config/+/883234 | 07:15 |
opendevreview | Merged openstack/project-config master: Add cinder-nfs charm to OpenStack charms https://review.opendev.org/c/openstack/project-config/+/879469 | 13:14 |
*** timburke_ is now known as timburke | 16:48 | |
*** melwitt_ is now known as melwitt | 17:11 | |
ihrachys_ | I've noticed a job on stable/yoga failed in the devstack repo with what ultimately comes down to the OOM killer killing mysql. 1) should I bother to report? 2) what's the right procedure to report? (a link to a doc explaining it would be a great help; I failed to find anything authoritative) | 21:12 |
fungi | ihrachys_: at one point there was an oom checker which grepped dmesg for indicative patterns and added a clear error message to the job output. not sure how long ago it might have vanished, but adding it back could be in order if that's something which happens often enough to warrant it | 21:16 |
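A minimal sketch of the kind of OOM check fungi describes: scan dmesg for kernel messages that indicate the OOM killer fired. This is not the original job-output checker; the plain `dmesg` invocation and the exact patterns are assumptions for illustration.

```python
#!/usr/bin/env python3
"""Scan dmesg output for signs that the kernel OOM killer fired."""
import re
import subprocess

# Kernel messages that typically indicate OOM-killer activity
# ("Kill process" on older kernels, "Killed process" on newer ones).
OOM_PATTERNS = [
    re.compile(r"Out of memory: Kill(ed)? process", re.IGNORECASE),
    re.compile(r"invoked oom-killer", re.IGNORECASE),
]


def find_oom_events(dmesg_text):
    """Return the dmesg lines that look like OOM-killer activity."""
    return [
        line for line in dmesg_text.splitlines()
        if any(pattern.search(line) for pattern in OOM_PATTERNS)
    ]


if __name__ == "__main__":
    dmesg = subprocess.run(
        ["dmesg"], capture_output=True, text=True, check=False
    ).stdout
    events = find_oom_events(dmesg)
    if events:
        print("ERROR: the kernel OOM killer fired during this run:")
        for line in events:
            print("  " + line)
    else:
        print("No OOM-killer activity found in dmesg.")
```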
fungi | for devstack specifically, there's also a memory profiler included in the services run on each node, which should help in identifying where the culprit is (often what gets killed is not what used abnormal amounts of ram) | 21:17 |
fungi | if it were me, i'd start by bringing it up with the qa team (they manage devstack and the devstack-based jobs) to see if it's a known issue and/or something they need help with | 21:19 |
ihrachys_ | I don't see a particular process that could be definitely blamed. and it failed once, I just want to be a good citizen and not recheck with no report / analysis :) | 21:19 |
ihrachys_ | so #openstack-qa then? | 21:19 |
fungi | yeah, that's your best bet | 21:19 |
fungi | granted, it's a quiet friday and they may be off canoeing or something | 21:19 |
ihrachys_ | for the record, the memory tracker from the failed job at: https://zuul.opendev.org/t/openstack/build/a53c316283334d5d8514ad13cae71bed/log/controller/logs/screen-memory_tracker.txt | 21:19 |
ihrachys_ | it's mysqld taking 9% and ceph-osd some more, but nothing like a single process hogging half the memory | 21:20 |
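As an aside on reading that kind of data: the sketch below ranks processes by their share of total resident memory, using plain `ps` output rather than the actual screen-memory_tracker.txt format (whose layout is assumed, not parsed, here). Comparing such a ranking against a healthy run is the sort of analysis fungi suggests next.

```python
"""Rank processes by their share of total resident memory (RSS).

This reads live `ps` output; it does not parse devstack's
screen-memory_tracker.txt, whose exact layout is not reproduced here.
"""
import subprocess
from collections import defaultdict


def rss_by_process():
    """Sum resident memory (KiB) per command name from `ps`."""
    out = subprocess.run(
        ["ps", "-eo", "rss=,comm="], capture_output=True, text=True, check=True
    ).stdout
    totals = defaultdict(int)
    for line in out.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0].isdigit():
            totals[parts[1].strip()] += int(parts[0])
    return totals


if __name__ == "__main__":
    totals = rss_by_process()
    grand_total = sum(totals.values()) or 1
    # Top consumers with their share of total RSS, e.g. to judge whether
    # mysqld's ~9% is actually out of line for the node.
    for comm, rss in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        print(f"{comm:<20} {rss / 1024:8.1f} MiB  {100 * rss / grand_total:5.1f}%")
```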
ihrachys_ | fungi: that's not critical, I will follow up with the -qa channel on monday | 21:20 |
ihrachys_ | now - heading off to canoeing myself :) | 21:20 |
fungi | right, i think what's intended is that the memory profile get compared side-by-side with other healthier runs to see what's gone off the rails, relatively speaking | 21:21 |
fungi | in isolation a single memory profile probably doesn't tell us much | 21:21 |
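A small sketch of that side-by-side comparison, with made-up per-process figures purely to show the shape of the analysis (none of these numbers come from the job above):

```python
"""Compare per-process memory (MiB) from a failed run against a healthy run
and report the largest growth. All figures below are illustrative only."""


def compare_profiles(failed, healthy, min_mib=50):
    """Return (process, failed_mib, healthy_mib, delta) rows, biggest growth first."""
    rows = []
    for proc, failed_mib in failed.items():
        healthy_mib = healthy.get(proc, 0)
        if failed_mib >= min_mib:
            rows.append((proc, failed_mib, healthy_mib, failed_mib - healthy_mib))
    return sorted(rows, key=lambda row: row[3], reverse=True)


if __name__ == "__main__":
    # Hypothetical snapshots; real data would come from the memory profiles
    # of the failed build and a comparable healthy build.
    failed_run = {"mysqld": 780, "ceph-osd": 620, "nova-api": 310}
    healthy_run = {"mysqld": 400, "ceph-osd": 590, "nova-api": 300}
    for proc, f_mib, h_mib, delta in compare_profiles(failed_run, healthy_run):
        print(f"{proc:<10} failed={f_mib:4d} MiB  healthy={h_mib:4d} MiB  delta={delta:+4d} MiB")
```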
clarkb | there is an opt-in thing to tune mysql back. It might not be a bad idea to enable that by default and have it be opt-out, but that is for the qa team to decide | 22:09 |
clarkb | mysql gets targeted by the OOM killer because it is the biggest memory user, so dialing it back a bit helps | 22:09 |
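The opt-in clarkb mentions is, if memory serves, devstack's MYSQL_REDUCE_MEMORY setting; treat that name as an assumption here. Whatever the switch is called, "dialing mysql back" comes down to settings along these lines; the values below are illustrative, not devstack's actual defaults:

```python
"""Render a my.cnf-style override that trims MySQL's memory footprint.

The settings and values are examples of the kind of tuning meant, not the
exact configuration devstack applies.
"""

LOW_MEMORY_MYSQL_SETTINGS = {
    "innodb_buffer_pool_size": "64M",  # the buffer pool is usually the big consumer
    "performance_schema": "OFF",       # the performance schema is comparatively memory-hungry
    "max_connections": 100,
    "table_open_cache": 256,
}


def render_mysqld_overrides(settings):
    """Return a [mysqld] section suitable for a conf.d override file."""
    lines = ["[mysqld]"]
    lines += [f"{key} = {value}" for key, value in settings.items()]
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    print(render_mysqld_overrides(LOW_MEMORY_MYSQL_SETTINGS))
```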