*** jpena|off is now known as jpena | 11:27 | |
infernix | ralonsoh ping? | 11:36 |
infernix | this is your proposal, correct? https://review.opendev.org/c/openstack/neutron/+/790060 | 11:38 |
ralonsoh | infernix, it is, yes | 11:38 |
infernix | the delete is causing lock waits | 11:39 |
infernix | is there another approach here? could one do a SELECT and only execute the DELETE if > 0 rows? | 11:40 |
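A minimal sketch of the SELECT-before-DELETE idea infernix floats here, using Python's stdlib sqlite3 as a stand-in for neutron's MySQL backend (so it does not reproduce InnoDB row-lock behaviour, and the table name is illustrative, not neutron's real schema):

```python
import sqlite3

# Toy stand-in for the reservations table -- illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reservations (id INTEGER PRIMARY KEY, resource TEXT)")
conn.execute("INSERT INTO reservations (resource) VALUES ('port')")
conn.commit()

def delete_if_any(conn):
    """Only issue the DELETE when a prior SELECT finds matching rows.

    This avoids taking write locks when the table is empty, but under
    real concurrency the SELECT result can be stale by the time the
    DELETE runs, so it reduces -- not eliminates -- lock contention.
    """
    (count,) = conn.execute("SELECT COUNT(*) FROM reservations").fetchone()
    if count > 0:
        conn.execute("DELETE FROM reservations")
        conn.commit()
    return count

seen_first = delete_if_any(conn)   # finds 1 row, issues the DELETE
seen_second = delete_if_any(conn)  # finds 0 rows, skips the DELETE
```

The caveat in the comment is why this alone would not fix the pile-up described later in the log: between the SELECT and the DELETE another transaction can still insert or lock rows.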
ralonsoh | infernix, when you have reservations from other executions | 11:40 |
ralonsoh | the reservations table should be almost empty | 11:40 |
ralonsoh | and the reservation context is fast enough to not collide with other requests | 11:40
infernix | well, in our case it is not | 11:41
ralonsoh | how many reservations do you have ? | 11:41 |
ralonsoh | how many records in this table? | 11:41
infernix | i'm watching mysql processlist and it's been full of deletions for the last half hour or so | 11:41
ralonsoh | that's ok | 11:41 |
infernix | we get 500 concurrent instance creation requests and everything locks up because of these because they all get stuck in LOCK WAIT | 11:42 |
ralonsoh | every time you request a new resource | 11:42 |
infernix | let me bpaste you something | 11:42 |
ralonsoh | you create a reservation | 11:42 |
ralonsoh | then you create the resource | 11:42 |
ralonsoh | and then you delete the reservation | 11:42 |
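The three-step cycle ralonsoh describes (reserve, create, release) can be sketched as follows; sqlite3 stands in for neutron's MySQL backend, and the table and column names are assumptions for illustration, not neutron's real schema:

```python
import sqlite3

# Minimal model of the reserve -> create -> release cycle.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE reservations (id INTEGER PRIMARY KEY, resource TEXT, amount INTEGER);
    CREATE TABLE ports (id INTEGER PRIMARY KEY, name TEXT);
""")

def create_port(conn, name):
    # 1. reserve quota for the resource we are about to create
    cur = conn.execute(
        "INSERT INTO reservations (resource, amount) VALUES ('port', 1)")
    reservation_id = cur.lastrowid
    # 2. create the resource itself
    conn.execute("INSERT INTO ports (name) VALUES (?)", (name,))
    # 3. release the reservation -- this DELETE is the statement that
    #    piles up in LOCK WAIT when hundreds of requests run concurrently
    conn.execute("DELETE FROM reservations WHERE id = ?", (reservation_id,))
    conn.commit()

create_port(conn, "vm-0-eth0")
```

In steady state the reservations table should therefore stay near empty, which is why ralonsoh asks about its row count below.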
infernix | https://pastebin.com/ | 11:43 |
infernix | ugh | 11:44 |
infernix | too big. hold on | 11:44 |
infernix | https://spirit.infernix.net/dbnolock_reservations_delete_locking.txt | 11:45 |
infernix | so what I think happens here is that there are too many concurrent DELETEs because of the large concurrent instance-creation load; they all pile up, get stuck in LOCK WAIT, and time out after 50s, after which they are retried | 11:46
infernix | and this takes many minutes to a few hours to churn through | 11:46 |
ralonsoh | infernix, how many reservation records do you have? | 11:46
infernix | i'm not exactly sure what you mean | 11:47 |
ralonsoh | In the table neutron.reservations | 11:47 |
ralonsoh | how many records do you have? | 11:47
infernix | 52 | 11:47 |
ralonsoh | stable number? | 11:48 |
infernix | yep | 11:48 |
ralonsoh | you should have none | 11:48 |
ralonsoh | delete all of them | 11:48 |
infernix | some of them are old, and some of them are from today | 11:48 |
infernix | is it really safe to just truncate this table? | 11:49 |
infernix | actually number goes down slowly | 11:50 |
ralonsoh | the worst thing that will happen is that you will exceed some project quota | 11:50 |
ralonsoh | in some resource | 11:50 |
ralonsoh | another question | 11:50 |
ralonsoh | is quota mandatory now for you? | 11:50 |
infernix | but that doesn't change the lock contention issue. we had the same problem on the old driver | 11:50 |
infernix | we are considering not using quotas, but that would be for all of neutron. and we'd like to keep quotas for FIP etc | 11:51 |
infernix | ports I couldn't care less about | 11:51
ralonsoh | so disable the quotas for now | 11:51 |
ralonsoh | setting -1 in those resources | 11:51 |
infernix | wouldn't it be better to set track_quota_usage = false? | 11:53 |
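The two workarounds being discussed map onto the `[quotas]` section of neutron.conf; this is a sketch assuming the deployment reads the default config file, and the exact option names should be checked against the deployed release:

```ini
[quotas]
# Workaround 1 (ralonsoh's suggestion): stop enforcing the port quota
# entirely; -1 means unlimited, so there is nothing to reserve against.
quota_port = -1

# Workaround 2 (infernix's question): stop tracking usage in the DB.
# As ralonsoh notes below, this alone does not stop the quota engine
# from being called on every request.
track_quota_usage = false
```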
infernix | 1248 rows now. so it goes up and down. lots of ipamallocation inserts at the moment | 11:55 |
infernix | but to me, if i look at that innodb output, there are way too many concurrent DELETEs | 11:56
infernix | and they're all waiting on each other. so it doesn't look like the new nolock driver achieves higher performance | 11:57
ralonsoh | infernix, that won't stop calling the quota engine | 11:57 |
infernix | ok. let me see what happens with as many quotas set to -1 as we can manage | 11:58 |
opendevreview | Merged openstack/governance master: Updating the Yoga testing runtime https://review.opendev.org/c/openstack/governance/+/820195 | 15:37 |
*** mchlumsky2 is now known as mchlumsky | 16:41 | |
*** jpena is now known as jpena|off | 17:50 | |
*** sshnaidm is now known as sshnaidm|afk | 19:06 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!