Thursday, 2024-02-01

tonybThanks to sean-k-mooney[m] I managed to reduce the list of stale/stuck/corrupt VMs from 28 to 12 (It took a while because I was *very* careful with my db manipulations)00:31
tonybNow I have 12 instances that are in the building/scheduling vm/task state.00:32
tonybSome have entries in nova_api.instance_mappings pointing at cell0, some at the real cell (which has a NULL name?)00:33
tonybAFAICT the instances don't have any allocations in placement00:34
tonybWhat is the "most correct"/"least horrible" DB update I can do to let nova know those instances are gone?00:35
tonybI was thinking of something like update instances set vm_state='ERROR', task_state=???, updated_at=now(), deleted_at=now() where uuid in (....) ?00:39
clarkbtonyb: I wonder if a delete would work if we just set vm_state to active02:17
tonybI didn't try that.03:16
opendevreviewMerged openstack/nova stable/2023.2: Updates glance fixture for create image  https://review.opendev.org/c/openstack/nova/+/90608805:10
opendevreviewMerged openstack/nova stable/2023.2: Fixes: bfv vm reboot ends up in an error state.  https://review.opendev.org/c/openstack/nova/+/90608906:40
sean-k-mooney[m]tonyb:  to mark an instance as deleted in nova you set deleted=id08:37
tonybthanks, that's basically what I did.08:38
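The soft-delete convention being described (set deleted to the row's own id, stamp deleted_at) can be sketched against a toy sqlite table; the table, columns, and uuid below are illustrative stand-ins, not nova's real schema or tooling:

```python
import sqlite3
from datetime import datetime, timezone

# Toy stand-in for nova's instances table; the real schema is far larger.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE instances (
        id INTEGER PRIMARY KEY,
        uuid TEXT,
        vm_state TEXT,
        deleted INTEGER DEFAULT 0,
        deleted_at TEXT
    )
""")
conn.execute("INSERT INTO instances (id, uuid, vm_state) VALUES (1, 'abc-123', 'building')")

# The soft-delete convention: deleted is set to the row's own id rather
# than a bare 1, so unique constraints keep working for re-used names.
now = datetime.now(timezone.utc).isoformat()
conn.execute(
    "UPDATE instances SET vm_state = 'deleted', deleted = id, deleted_at = ? "
    "WHERE uuid IN (?) AND deleted = 0",
    (now, "abc-123"),
)

row = conn.execute(
    "SELECT vm_state, deleted FROM instances WHERE uuid = 'abc-123'"
).fetchone()
print(row)  # ('deleted', 1)
```

The `AND deleted = 0` guard makes the update idempotent, which matters when you are hand-editing a production database.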
sean-k-mooney[m]so there is no way to take an instance from cell0 and do anything other than delete it08:39
sean-k-mooney[m]cell0 is basically a graveyard for instances we were unable to boot08:39
sean-k-mooney[m]and they should not have any allocations in placement if they are in cell008:39
sean-k-mooney[m]for the ones in scheduling they have not had a host selected so they will not be in any cell db08:40
sean-k-mooney[m]they will just have a build request in the api db08:40
sean-k-mooney[m]the ones in the real cell in building likely failed because of the rabbit outage08:41
sean-k-mooney[m]i assume you just want them gone so you can boot new vms with nodepool08:41
tonybyup that's the aim.   I did get rid of them 08:42
sean-k-mooney[m]if they are in building and are in a real cell then they likely have allocations, so you would mark them as deleted in the cell db and then delete their allocations in placement08:42
sean-k-mooney[m]ok in that case are they all now cleaned up?08:43
tonybyup.  I did discover some other nodes that have allocations but are gone.08:43
tonybI'll figure that out tomorrow, but it should be easy to find them in placement 08:44
sean-k-mooney[m]we have two related commands: nova-manage placement heal_allocations and audit08:44
sean-k-mooney[m]the heal_allocations command creates allocations that are missing08:44
sean-k-mooney[m]the audit command removes allocations that should no longer exist08:45
tonyboh! cool.08:45
sean-k-mooney[m]and has a dry run mode by default that just prints them08:45
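The dry-run-by-default behaviour described above can be sketched roughly like this; the data and the `audit()` helper are hypothetical stand-ins for the placement API and nova-manage's real logic:

```python
# Toy stand-ins for data that would really come from the placement API
# and the cell databases.
allocations = {"uuid-a": {"VCPU": 2}, "uuid-b": {"VCPU": 4}, "uuid-c": {"VCPU": 1}}
known_instances = {"uuid-a", "uuid-c"}

def audit(allocations, known_instances, delete=False):
    """Report allocations whose consumer no longer exists; remove only on request."""
    orphans = [consumer for consumer in allocations if consumer not in known_instances]
    for consumer in orphans:
        if delete:
            del allocations[consumer]
        else:
            print(f"would delete allocation for {consumer}")
    return orphans

orphans = audit(allocations, known_instances)       # dry run: just prints
audit(allocations, known_instances, delete=True)    # actually removes them
print(orphans, "uuid-b" in allocations)  # ['uuid-b'] False
```

Defaulting to report-only and requiring an explicit flag to mutate state is exactly why these tools are safe to point at a confused production deployment first.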
tonybthat's perfect for tomorrow!08:45
tonybsean-k-mooney[m]: thanks for all your help08:45
sean-k-mooney[m]https://github.com/openstack/nova/blob/stable/victoria/nova/cmd/manage.py#L269508:46
tonybsean-k-mooney[m]++08:52
gibielodilles, sean-k-mooney[m]: could you check these backports please https://review.opendev.org/q/topic:%22power-mgmt-fixups%22+branch:stable/2023.2 09:14
gibithe master revert landed yesterday (finally)09:15
sean-k-mooney[m]sure, just reviewing one of mel's patches, I'll look at those next09:15
gibithanks09:19
bauzassean-k-mooney: gibi: got some devstack issue with ovn, because the package is missing09:28
bauzas(RHEL9.3 here)09:28
sean-k-mooney[m]it's in a separate repo called fast datapath on rhel09:29
bauzaswhen looking at https://github.com/openstack/devstack/blob/master/tools/fixup_stuff.sh I see that centos9 installs centos-release-openstack-victoria package09:29
sean-k-mooney[m]or you can just enable the rdo repos09:29
sean-k-mooney[m]I can give you the command for the rdo repos, one sec09:29
bauzasI can install rdo-ovn09:29
bauzasit says this is a wrapper for OVN09:30
sean-k-mooney[m]https://github.com/openstack-k8s-operators/edpm-ansible/blob/28f80d2303497972a1d3b493760c4ff1557b973f/roles/edpm_nova/molecule/default/prepare.yml#L4009:31
sean-k-mooney[m]if you don't have the repos then I believe that will fix it for you09:31
sean-k-mooney[m]change -b antelope to master09:32
bauzashttps://paste.opendev.org/show/b95bS56avBrLTeoma0QU/09:32
bauzasthat's what I have09:32
sean-k-mooney[m]ok then devstack already enabled rdo for you09:33
bauzascorrect09:33
sean-k-mooney[m]how did it fail exactly09:33
sean-k-mooney[m]you could just go back to ml2/ovs by the way09:34
sean-k-mooney[m]but if you want to keep ovn09:34
bauzashttps://paste.opendev.org/show/bkHbTVtFyFTMA34VV1B7/09:34
sean-k-mooney[m]i can take a look at the error with you in a few minutes09:34
bauzasyeah I think I'll change my local.conf and play with ml2/ovs09:34
bauzas(my local.conf is pretty simple, nothing is told about the networking)09:35
sean-k-mooney[m]oh it's the metadata agent, not ovn09:35
sean-k-mooney[m]you can always just turn that off and use config drive09:36
bauzasthat's my local.conf https://paste.opendev.org/show/bwqKtvfeZhUT9aL1kFoZ/09:37
sean-k-mooney[m]https://github.com/openstack/nova/blob/master/.zuul.yaml#L192-L20109:37
sean-k-mooney[m]that is how to swap to ml2/ovs09:37
bauzasq-ovn-metadata-agent.service that's the service which failed09:38
sean-k-mooney[m]oh you need https://github.com/openstack/nova/blob/master/.zuul.yaml#L186-L189 as well09:39
bauzasoh wait, I have ovn installed09:39
sean-k-mooney[m]yes09:39
sean-k-mooney[m]it's the neutron metadata agent for ovn that failed09:39
sean-k-mooney[m]not the ovn install09:39
bauzasyeah09:40
bauzasI can try to dig into the logs of that agent, if I'm able to find them09:40
sean-k-mooney[m]they are in the journal like all the rest09:40
sean-k-mooney[m]so just journalctl -u devstack@whatever09:40
sean-k-mooney[m]I'm stepping away for 5-10 minutes to grab coffee, but when I'm back, if you want me to ssh in and take a look with you, I can09:41
bauzasyeah09:41
bauzasand thanks09:41
bauzasahah, found the culprit https://paste.opendev.org/show/bFigNVupZWRpjlmuzXZ9/09:48
sahido/ regarding live migrations, are we agreed that we can live-migrate VMs from compute nodes that are N-1 to N?09:50
sahidin the situation of upgrading an openstack version09:50
bauzassure, not the other way09:53
sahidack thanks09:58
sean-k-mooneybauzas: ah the privsep helper10:12
sean-k-mooneybauzas: so the issue is that it's not on your path10:13
sean-k-mooneybauzas: i put a path fix in devstack to fix that at one point10:13
bauzas[stack@lenovo-sr655-01 devstack]$ which privsep-helper10:14
bauzas/usr/bin/which: no privsep-helper in (/opt/stack/.local/bin:/opt/stack/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin)10:14
bauzashmmmm10:14
sean-k-mooneyit will be in the venv bin dir10:16
sean-k-mooneyif you have it disabled10:16
bauzasI disabled the GLOBAL venv indeed10:16
sean-k-mooneythen it should be in /usr/local/bin but apparently not10:16
bauzasI have oslo.privsep10:17
bauzaslemme see where it's landed10:17
sean-k-mooneyhttps://github.com/openstack/devstack/blob/5c1736b78256f5da86a91c4489f43f8ba1bce224/stack.sh#L838 this is the workaround for using the venv10:17
sean-k-mooneyso you just need to do the same with wherever it is10:17
bauzasI suspect that privsep got installed in the venv since I first ran stack.sh with this flag10:18
sean-k-mooneyah ya so there are a bunch of things you have to manually clean up10:18
sean-k-mooneyif you did that10:18
sean-k-mooneyand removing those symlinks from /usr/local/bin is one of them10:19
bauzasand after that, even when asking devstack to reclone, I think it got a problem10:19
sean-k-mooneyreclone is not enough10:19
sean-k-mooneyso you need to delete the venv dir10:19
sean-k-mooneyand then remove all the broken symlinks from /usr/local/bin10:19
bauzasyeah10:20
bauzas[stack@lenovo-sr655-01 devstack]$ ll /usr/local/bin/privsep-helper10:20
bauzaslrwxrwxrwx. 1 root root 39 Jan 31 11:10 /usr/local/bin/privsep-helper -> /opt/stack/data/venv/bin/privsep-helper10:20
sean-k-mooneyyep10:20
sean-k-mooneybut neutron is probably not in the venv10:20
bauzasokay, I'll reinstall the package10:20
bauzasand I'll look at other symlinks in /usr/local/bin10:21
sean-k-mooneyas I said, nuke the venv first then remove any broken links10:21
sean-k-mooneyI didn't document the process, and it took me about an hour to do properly on centos, but it's doable to make work10:22
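Finding the dangling symlinks left behind after deleting the venv dir can be done mechanically; here is a small sketch (the file and directory names are made up for the demo, the real targets would live under /usr/local/bin):

```python
import tempfile
from pathlib import Path

def broken_symlinks(directory):
    """Return symlinks whose target no longer exists (e.g. after rm -rf of a venv)."""
    return [p for p in Path(directory).iterdir()
            if p.is_symlink() and not p.exists()]

# Demo in a temp dir: one good link, one dangling link.
with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "venv-bin-tool"
    target.write_text("#!/bin/sh\n")
    (Path(d) / "good").symlink_to(target)
    (Path(d) / "dangling").symlink_to(Path(d) / "deleted-venv" / "privsep-helper")
    stale = broken_symlinks(d)
    print([p.name for p in stale])  # ['dangling']
```

The key detail is that `Path.exists()` follows the link, so a symlink into a deleted venv reports `is_symlink() and not exists()`.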
bauzasyeah 10:23
bauzasthanks for the help10:23
sean-k-mooney note I had to remove some egg-link files too10:24
sean-k-mooneywhat I basically did was unstack.sh; clean.sh10:24
bauzasyup, https://github.com/openstack/devstack/blob/master/tools/fixup_stuff.sh says it10:24
sean-k-mooneyremove all openstack-related things from /usr/local/bin10:24
sean-k-mooneyand remove all openstack packages via pip and/or egg-link files10:25
sean-k-mooneyafter that I could just stack again without the venv and it was fine10:26
sean-k-mooneythe venv is a huge improvement but going between each mode is a pain10:27
bauzasyeah and clean.sh doesn't do all the cleanup :)10:27
bauzasI recommend everyone stacking a new env every quarter :)10:27
sean-k-mooneyit apparently does not uninstall ovs and ovn, which I could have bet money on it doing in the past10:27
sean-k-mooneythat bit me a few days ago10:27
sean-k-mooneyi almost added that to clean but i had other things to do 10:28
bauzassean-k-mooney: I wonder, I guess we can't easily run the zuul jobs on our local machines ?10:28
sean-k-mooneywell... you can, kind of10:29
bauzasit would be simpler for a new developer to just hook their dev machine into some zuul registry so they could run a specific zuul job for installing devstack and its knobs10:29
sean-k-mooneyI wrote an ansible molecule scenario10:29
bauzasyeah I remember that10:29
sean-k-mooneythat uses the upstream zuul job roles10:30
sean-k-mooneyhttps://github.com/SeanMooney/ard/blob/master/molecule/default/molecule.yml10:30
bauzasideally, I'd just use the .zuul.yaml definitions10:30
sean-k-mooneyya that is not possible today10:30
sean-k-mooneythere was a proposal to do that with a thing called zuul-runner10:30
sean-k-mooneyclarkb: ^ that never actually happened right10:30
bauzasinteresting10:31
bauzasI mean, don't get me wrong10:31
sean-k-mooneybauzas: the idea was you would point it at zuul, give it the project name etc. and some resources to use, and it would run the zuul executor logic locally and run the job against those resources10:31
bauzasthe usecase I see here is that if dev contributors start using zuul roles for deploying their devstacks, they'd spend less effort configuring and installing devstack, but they could also give back by contributing to the role definitions10:32
sean-k-mooneybauzas: if you have never done it, zuul is actually pretty simple to run on your laptop with a single docker-compose up10:32
bauzasfor a newcomer, trying to untangle devstack (if their install failed) is somewhat tricky10:32
*** elodilles is now known as elodilles_afk10:33
sean-k-mooneyit is, but I still think it's an exercise they should do to understand how things work10:33
sean-k-mooneybut that's why I created the ard stuff10:33
bauzasthe more they stick to 'standard validated upstream roles', the better chance they have of getting their nodes ready10:33
sean-k-mooneydid you know that devstack used to have a vagrant file10:33
bauzasyup10:33
bauzasI remember it10:33
sean-k-mooneyit might still exist10:33
sean-k-mooneyI never got it to work for what I wanted10:34
sean-k-mooneyso the stuff I have in the ard repo can be pointed at any host and it will give you a multi-node devstack10:34
sean-k-mooneyI used it to provision some hosts in china from ireland for gibi when he first joined redhat10:35
sean-k-mooneyI was looking at a zuul job yesterday when the timestamps suddenly went 5 hours into the past10:36
sean-k-mooneywhen I realised zuul was running on the us east coast and the vms were in europe10:36
sean-k-mooneyI was pretty chuffed that zuul/ansible works that well across an ocean10:36
*** ravlew is now known as Guest120110:42
*** ravlew1 is now known as ravlew10:43
opendevreviewsean mooney proposed openstack/nova master: [WIP] add libvirt connection healtcheck  https://review.opendev.org/c/openstack/nova/+/90742412:58
opendevreviewMerged openstack/nova stable/2023.2: Revert "[pwmgmt]ignore missin governor when cpu_state used"  https://review.opendev.org/c/openstack/nova/+/90567213:02
opendevreviewMerged openstack/nova stable/2023.2: cpu: make governors to be optional  https://review.opendev.org/c/openstack/nova/+/90567313:02
bauzassean-k-mooney: fwiw, straight devstack install with GLOBAL_VENV=False worked like a charm with Centos 9 Stream13:59
bauzas(ovn and all the likes)13:59
bauzasno need to tweak anything13:59
sean-k-mooneyyep 14:00
sean-k-mooneycentos 9 stream is a supported distro14:00
sean-k-mooneyrhel is not14:00
bauzasand good news, I checked that the nvidia GRID driver correctly works with the latest C9S release14:00
sean-k-mooneya lot of the checks for rpm distros explicitly don't check for rhel14:00
bauzasthat was my main concern ^14:00
sean-k-mooneyya14:00
sean-k-mooneythe kernels are close enough that it should install14:00
sean-k-mooneybut you never know14:01
bauzaswith mdev live migration, you need recent qemu and libvirt versions plus a recent kernel too14:01
bauzasso I'm double-checking the versions now with the Jan 30th c9s build14:01
sean-k-mooneyI think we should be fine. centos 9 stream has newer versions of qemu and libvirt than rhel14:01
sean-k-mooneybut if you need even newer you can enable the fedora virt-preview repos14:02
sean-k-mooneythose have c9s builds too now14:02
sean-k-mooneyhttps://copr.fedorainfracloud.org/coprs/g/virtmaint-sig/virt-preview/14:02
bauzaslibvirt-9.1014:03
bauzasqemu-kvm-common-8.214:03
sean-k-mooneylibvirt 10 came out 2 weeks ago but 9.10 is from november14:03
sean-k-mooneylibvirt 10 is in the copr repo14:04
sean-k-mooney10.0.0-3 14:04
sean-k-mooneyand qemu   2:8.2.0-6 14:04
sean-k-mooneyi think you should be good with the version you have to be honest14:05
gibielodilles_afk: sean-k-mooney: thanks for the +2s on the powermgmt backports. This is the (hopefully) last set this time to Antelope https://review.opendev.org/q/topic:%22power-mgmt-fixups%22+branch:stable/2023.114:20
bauzassean-k-mooney: sorry was in meeting but yeah we're fine https://specs.openstack.org/openstack/nova-specs/specs/2024.1/approved/libvirt-mdev-live-migrate.html#dependencies14:48
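The version gate being discussed boils down to a tuple comparison between installed and minimum versions; the minimums below are placeholders for illustration only, the real requirements are in the linked spec's dependencies section:

```python
# Hypothetical minimum versions, NOT the spec's actual numbers.
MIN_LIBVIRT = (8, 6, 0)
MIN_QEMU = (8, 1, 0)

def parse(version):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def supports_mdev_live_migration(libvirt_ver, qemu_ver):
    # Tuple comparison is lexicographic, so (9, 10, 0) >= (8, 6, 0) holds
    # even though "9.10.0" < "8.6.0" as plain strings.
    return parse(libvirt_ver) >= MIN_LIBVIRT and parse(qemu_ver) >= MIN_QEMU

print(supports_mdev_live_migration("9.10.0", "8.2.0"))  # True
print(supports_mdev_live_migration("8.0.0", "8.2.0"))   # False
```

Comparing parsed tuples rather than raw strings is the whole trick; string comparison would wrongly rank "9.10" below "9.2".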
*** elodilles_afk is now known as elodilles15:21
opendevreviewFabian Wiesel proposed openstack/nova master: vmware: Integer division Python 2 -> 3 fix  https://review.opendev.org/c/openstack/nova/+/90744415:54
opendevreviewSylvain Bauza proposed openstack/nova master: Augment the LibvirtLiveMigrateData object  https://review.opendev.org/c/openstack/nova/+/90417517:12
opendevreviewSylvain Bauza proposed openstack/nova master: check both source and dest compute libvirt versions for mdev lv  https://review.opendev.org/c/openstack/nova/+/90417617:12
opendevreviewSylvain Bauza proposed openstack/nova master: Check if destination can support the src mdev types  https://review.opendev.org/c/openstack/nova/+/90417717:12
opendevreviewSylvain Bauza proposed openstack/nova master: Reserve mdevs to return to the source  https://review.opendev.org/c/openstack/nova/+/90420917:12
opendevreviewSylvain Bauza proposed openstack/nova master: Modify the mdevs in the migrate XML  https://review.opendev.org/c/openstack/nova/+/90425817:12
bauzasdansmith: just updated my series17:13
bauzasabout the edge cases of a failing migration that would leak the dict, I'm a bit puzzled on how to correctly test that17:13
bauzasfor sure I can add another functional test that would force a migration to break, maybe that would help but that would only check the case I already planned17:14
bauzasactually this sounds doable, I'll come up with something17:15
sean-k-mooneygibi: dansmith: related to the health check conversation: I already have a timestamp for when each healthcheck result was recorded, so I'm just going to add one more timestamp for the overall response17:27
dansmithsean-k-mooney: cool17:27
dansmithbauzas: ack17:27
sean-k-mooneyyou could figure that out yourself on the client side, but it seems pretty trivial to do and it allows you one extra data point as a client: how long did the response take to get to me17:28
sean-k-mooneywe expect this to mainly run on localhost so that should be instant, but if it's not then it might be useful17:29
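A rough sketch of the two-timestamp idea: each check carries the time its result was recorded, and the response carries one overall timestamp. The response shape and field names here are assumptions for illustration, not nova's actual healthcheck format:

```python
import time

def build_healthcheck_response(checks):
    """Attach one overall timestamp to a dict of per-check results."""
    return {
        "checks": checks,
        "generated_at": time.time(),  # when the response was assembled
    }

# Per-check results recorded at different (earlier) moments.
checks = {
    "libvirt_connection": {"status": "pass", "recorded_at": time.time() - 5.0},
    "db_connection": {"status": "pass", "recorded_at": time.time() - 1.0},
}
resp = build_healthcheck_response(checks)

# A client can now distinguish result staleness from transit delay:
staleness = resp["generated_at"] - resp["checks"]["libvirt_connection"]["recorded_at"]
print(round(staleness))  # 5: the result is about five seconds older than the response
```

Comparing `generated_at` against its own receive time then gives the client the transit delay mentioned above, which should be near zero on localhost.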
sean-k-mooneymelwitt: did you push an update to your series by the way? I won't get to it today but I'll try and take a look again tomorrow17:30
melwittsean-k-mooney: no not yet :( I will later today17:31
sean-k-mooneyno rush17:31
opendevreviewSylvain Bauza proposed openstack/nova master: check we don't leak in the func test  https://review.opendev.org/c/openstack/nova/+/90746517:45
bauzasdansmith: made the functest, proving we don't leak ^17:45
bauzasI'll rebase my series by squashing that one with the change in question17:46
opendevreviewSylvain Bauza proposed openstack/nova master: Reserve mdevs to return to the source  https://review.opendev.org/c/openstack/nova/+/90420917:47
opendevreviewSylvain Bauza proposed openstack/nova master: Modify the mdevs in the migrate XML  https://review.opendev.org/c/openstack/nova/+/90425817:47
bauzasthere we go17:47
* bauzas bails out now17:49
clarkbsean-k-mooney: no there isn't a good way to run zuul jobs locally18:06
clarkbsean-k-mooney: this is why tools like tox and nox and make etc are valuable and honestly good ideas regardless of your CI system18:06
sean-k-mooneyclarkb: well we would not remove them even if we could run zuul jobs locally18:23
clarkbsure, but if those tools work properly you reduce a lot of problems18:24
clarkband the problems with not disabling selinux don't go away running a zuul job locally18:24
sean-k-mooneyhehe well that is just because bauzas is using an os that is not supported by the tool he was trying to use18:24
sean-k-mooneythe devstack is_fedora check I think works for centos and rhel but we have other checks that check for fedora and centos by name18:25
sean-k-mooneythere are docs that tell you to use specific functions in devstack to do this but people don't always follow the docs even if they wrote them :)18:25
clarkbmore generally there is a balancing act between being so tied to your CI system that the software only really works there and ensuring the software is generally deployable18:28
clarkba common place we see this is people updating devstack job vars rather than setting appropriate defaults in devstack proper18:28
clarkband we should try and avoid those tendencies and ensure the software can stand on its own18:28
sean-k-mooneyselinux disabling is done here https://github.com/openstack/devstack/blob/5c1736b78256f5da86a91c4489f43f8ba1bce224/tools/fixup_stuff.sh#L38-L43 guarded by is_fedora, so that should have worked18:29
clarkbwhere "run this zuul job locally" is potentially useful is determining why builds have failed18:30
clarkbit is less useful as a "run our software" option18:30
sean-k-mooneyclarkb: so the issue is I have been using devstack for so long now that I rarely have issues and I can fix them pretty quickly when I do18:32
clarkbyup, we end up with blinders and biases. I'm just saying that in this situation I feel like the problem exists in devstack and not a lack of tooling in zuul18:33
sean-k-mooneyzuul jobs can be written so they are runnable locally18:39
sean-k-mooneythey are just ansible after all18:39
sean-k-mooneythe only thing that makes that hard with our devstack ones is basically a meta playbook to run all the pre/post playbooks in the right order18:40
sean-k-mooneyif you could trivially generate that and the inventory, you could just update the ips and run it against your own vms18:41
sean-k-mooneyin the molecule scenario and playbooks I wrote in my ard repo I basically just did that18:41
sean-k-mooneyI reused the roles and statically combined them to give me something pretty close to the upstream jobs18:42
clarkbright. I'm basically asserting that needing to resort to that for users to run devstack is a bug though18:48
clarkband we should try and make the tool easier to use rather than make CI a stand in18:48
sean-k-mooneyright but devstack is not hard to use19:21
sean-k-mooneyall the issues bauzas hit were from trying to use it in unsupported ways (an os that is not supported, after starting in a mode that is not supported on its nearest supported cousin)19:22
sean-k-mooneydansmith: I have a terrible idea.19:42
dansmithyeah?19:43
sean-k-mooneythe dbcounter uses a plugin https://opendev.org/openstack/devstack/src/branch/master/tools/dbcounter/dbcounter.py19:43
sean-k-mooneydo you think I could create one for the healthchecks, to detect if a connection resets and is re-established19:43
sean-k-mooneywould that be an insane thing to even look into, or something to consider19:44
dansmithyeah I think that's a bad idea,19:45
dansmithbecause it has to be in the [db]/connection string to work19:45
sean-k-mooneyoh ok, ya, I didn't know that part19:45
sean-k-mooneynever mind then, I just saw it printing stats since I have journalctl open19:46
sean-k-mooneyand then I remembered it was a plugin19:46
sean-k-mooneythe example is literally just registering an event with a callback19:49
sean-k-mooneyhttps://docs.sqlalchemy.org/en/14/core/connections.html#sqlalchemy.engine.CreateEnginePlugin19:49
sean-k-mooneybut I missed the url part19:50
sean-k-mooneyalthough you can enable it a different way19:50
sean-k-mooneyanyway I was meant to finish a while ago so I'll leave that for now19:51
dansmithfeels like a monkeypatch to me :)19:53
sean-k-mooneykind of, but via a supported interface19:54
sean-k-mooneyas in it's an intentional extension point19:54
JayFI'll note Ironic uses this event interface directly to set PRAGMA on sqlite usages.19:54
sean-k-mooneythe db is the last part on my list19:54
JayFSo you could potentially get the same behavior without having to monkey patch 19:54
sean-k-mooneyJayF: im not planning to monkey patch19:54
sean-k-mooneyJayF: what I'm trying to figure out is a clean way to detect if a database connection is lost and re-established19:55
JayFOh yeah, I figured that, but just pointing out that event interface is generally useful for stuff like that: https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/__init__.py#L2819:55
sean-k-mooneywithout having to do that everywhere we interact with the db19:55
JayFYeah, I'm saying: register events in the same way they do in that CreatePlugin19:55
JayFthat example in Ironic is actually pretty darn close to half  of what you need in terms of registering the connect event19:55
sean-k-mooney ya so I looked at doing that too19:56
sean-k-mooneybut it's a little tricky to ensure we do that given how we manage our engine facades19:56
JayFNote that doesn't have a handle on anything19:57
JayFDo you not have a place, ala __init__.py in a relevant module, you're guaranteed to be run in every thread?19:57
JayFthe dbapi_connection is provided for you on a ConnectionEvent19:57
sean-k-mooneysimple answer: no, we have at least 2 places like that, possibly more19:59
sean-k-mooneyfor the cell db I believe all engine creation goes through this function https://github.com/openstack/nova/blob/master/nova/db/main/api.py#L13820:00
sean-k-mooneyfor the api db here https://github.com/openstack/nova/blob/master/nova/db/api/api.py#L5020:01
sean-k-mooneybut there is a lot of code to review20:01
JayFI'm saying I think your assumption that it has to be done *at* engine creation time is inaccurate20:01
JayFthat it can be done anytime *before* that engine creation, as well20:01
JayFwhich is likely a much easier problem to solve20:01
JayFbut I'm not super familiar with nova, so there may be a different dimension here I'm missing20:02
sean-k-mooneyprobably20:02
sean-k-mooneyI just need to review the options and then see where to go from there20:02
sean-k-mooneyI was looking at how we optionally enable osprofiler.sqlalchemy20:02
sean-k-mooneyas well20:02
sean-k-mooneyit has a handle_error function20:03
sean-k-mooneyhttps://opendev.org/openstack/osprofiler/src/branch/master/osprofiler/sqlalchemy.py#L9920:03
sean-k-mooneyright now I'm just trying to see what the options are in this space20:04
sean-k-mooneyit uses https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/enginefacade.py#L792 append_on_engine_create20:05
JayFoh, nice! I imagine we didn't use that because we only wanted to catch sqlite connections20:07
sean-k-mooneyhttps://docs.sqlalchemy.org/en/20/core/pooling.html#custom-legacy-pessimistic-ping20:07
sean-k-mooneythere are docs for how to listen for the engine_connect event20:08
sean-k-mooneyso I was wondering if I could combine some of those approaches to ensure that when an engine is created we register a listener for engine_connect20:09
sean-k-mooneythat just sets a db_connection healthcheck to pass/fail20:09
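A minimal sketch of that idea, using the "connect" pool event registered on the Engine class (the approach the Ironic example linked earlier takes, and stable across SQLAlchemy versions) rather than per-engine registration; DB_HEALTH is a hypothetical stand-in for a real healthcheck store:

```python
from sqlalchemy import create_engine, event, text
from sqlalchemy.engine import Engine

# Hypothetical healthcheck store; a real one would track pass/fail per check.
DB_HEALTH = {"connected": False, "connects": 0}

@event.listens_for(Engine, "connect")
def _on_connect(dbapi_connection, connection_record):
    # Registered on the Engine *class*, so it fires for every new DBAPI
    # connection of every engine created afterwards, with no need to hook
    # each enginefacade engine individually.
    DB_HEALTH["connected"] = True
    DB_HEALTH["connects"] += 1

engine = create_engine("sqlite://")  # in-memory db for the demo
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))

print(DB_HEALTH)
```

Counting connect events also gives a crude reconnect signal: a count that keeps climbing after startup means connections are being dropped and re-established.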
sean-k-mooneyanyway I'll let this simmer in the back of my mind and come back in a few days20:10
sean-k-mooneythanks for the ironic link20:10
JayFIn Ironic, we don't have a handle on the engine at all when we set up that event listener, so I'm a little confused as to whether it really needs to be done per engine20:10
sean-k-mooneyya I'm not sure either20:10
sean-k-mooneyI should probably just ask zzzeek about it20:11
JayFMy current belief is it is gloriously simple -- just make sure the listener is registered and you don't need to worry about doing something "per engine". If you find that is not correct, please let me know if you remember :D 20:13
sean-k-mooneyI mean I can give it a try and just see what happens20:19
sean-k-mooneyit's either going to work or not20:19
sean-k-mooneyJayF: oh I see what's happening in the example, it's calling listens_for on an instance of a class20:27
sean-k-mooneyJayF: but ironic is calling it with a type, in this case ConnectionEvents20:28
JayFYes, different types give you different args as well20:28
JayFso a ConnectionEvent gives you a dbapi_connection object20:28
sean-k-mooneyyep ok this makes sense now20:28
sean-k-mooneytrying to register an event function for the engine_connect event that just raises a RuntimeError20:49
sean-k-mooneydoes nothing, so either there is no engine_connect event when the nova conductor starts20:50
sean-k-mooneyor https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/__init__.py#L28 didnt seam to work for me20:50
sean-k-mooneyi would guess we had not used the db connection if it was not fro 20:51
sean-k-mooney DEBUG dbcounter [-] [2076032] Writing DB stats nova_cell1:SELECT=4,nova_cell1:UPDATE=1 {20:51
tonybsean-k-mooney[m]: Thanks again for all your help.  If you're interested here's somewhat of a summary22:55
*** tosky_ is now known as tosky23:14

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!