Thursday, 2025-07-17

opendevreviewMerged openstack/watcher-tempest-plugin master: Create a function for deleting old injected metrics on prometheus  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95443303:31
opendevreviewDavid proposed openstack/watcher-tempest-plugin master: Add custom flavor creation for RAM based stategy tests  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95385307:21
opendevreviewDavid proposed openstack/watcher-tempest-plugin master: Add custom flavor creation for RAM based stategy tests  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95385308:59
opendevreviewDavid proposed openstack/watcher master: Disable real metrics on devstack injected data jobs  https://review.opendev.org/c/openstack/watcher/+/95528110:50
opendevreviewDavid proposed openstack/watcher-tempest-plugin master: Add custom flavor creation for RAM based stategy tests  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95385310:51
opendevreviewJoan Gilabert proposed openstack/watcher master: Enable storage model collector by default  https://review.opendev.org/c/openstack/watcher/+/95132310:51
opendevreviewJoan Gilabert proposed openstack/watcher-tempest-plugin master: Add test for volume retype with zone migration  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95462510:52
opendevreviewDavid proposed openstack/watcher-tempest-plugin master: Add custom flavor creation for RAM based stategy tests  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95385311:01
rlandyhello all, Watcher IRC meeting here in 35 minutes ... please add any topics to: https://etherpad.opendev.org/p/openstack-watcher-irc-meeting. Thank you11:26
rlandy#startmeeting Watcher IRC Meeting - July 17, 202512:00
opendevmeetMeeting started Thu Jul 17 12:00:37 2025 UTC and is due to finish in 60 minutes.  The chair is rlandy. Information about MeetBot at http://wiki.debian.org/MeetBot.12:00
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.12:00
opendevmeetThe meeting name has been set to 'watcher_irc_meeting___july_17__2025'12:00
rlandyHi all ... who is around?12:01
amoralejo/12:01
morenodo/12:01
rlandycourtesy ping list: dviroel jgilaber sean-k-mooney12:01
jgilabero/12:01
dviroelo/12:01
rlandychandankumar is away atm ... let's start12:03
rlandy#topic: (chandan|not around today):  Move workload_balance_strategy_cpu|ram tests to exclude list to unblock upstream watcher reviews12:03
rlandy#link:  https://launchpad.net/bugs/211687512:03
dviroelright, it is failing more often now12:04
dviroelso we can skip the tests while working on a fix12:04
dviroelmorenod is already investigating and knows what is happening I think12:05
rlandyany objections to merging this and reverting when the fix is in?12:05
sean-k-mooneyo/12:06
amoralejis it expected that different runs of the same jobs have different node sizes?12:06
morenodI've added a comment on the bug, the problem is that compute nodes sometimes have 8 vCPUs and sometimes 4 vCPUs... we need to create the test with a dynamic threshold, based on the number of vCPUs of the compute nodes12:06
sean-k-mooneyI'm fine with skipping it for now; the other way to do that is to use the skip_because decorator in the tempest plugin12:06
sean-k-mooneythat takes a bug ref12:06
sean-k-mooneybut I'm ok with the regex approach for now12:07
sean-k-mooneyskip_because is slightly less work12:07
morenodI'm also ok with skipping, I will need a few more days to have the fix12:07
sean-k-mooneybecause it will skip everywhere12:07
sean-k-mooneyif there isn't a patch already I would prefer to use https://github.com/openstack/tempest/blob/master/tempest/lib/decorators.py#L6012:08
rlandymorenod: reading your comment, the fix will take some time?12:08
sean-k-mooneyyou just add it like this https://github.com/openstack/tempest/blob/master/tempest/api/image/v2/admin/test_image_task.py#L9812:09
dviroelsean-k-mooney: yeah, it is preferable12:09
morenodrlandy, I'm working on it now, maybe sometime between tomorrow and Monday it will be ready12:09
morenodI like the skip_because solution, it is very clear12:10
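As a rough illustration of the skip_because approach discussed above (the decorator and bug number 2116875 come from the links in the conversation; the class name and test layout here are simplified stand-ins for the real plugin code), the decorator skips the test unconditionally and records the bug reference:

    import unittest

    from tempest.lib import decorators

    class WorkloadBalanceStrategyTests(unittest.TestCase):
        """Simplified stand-in for the real watcher-tempest-plugin test class."""

        @decorators.skip_because(bug="2116875")
        def test_execute_workload_balance_strategy_cpu(self):
            # Never reached while the skip is in place; skip_because raises a
            # skip exception that references the launchpad bug above.
            self.fail("unreachable while skipped")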
jgilaberamoralej, I can't find the nodeset definitions, but it could be possible that different providers have a label with the same name but using different flavours12:11
rlandy#action rlandy to contact chandankumar to review above suggestions while morenod finishes real fix12:11
amoralejI guess that's what is happening, I thought there was a consensus about nodeset definitions12:11
sean-k-mooneymorenod: there is also a tempest cli command to list all the decorated tests I believe, so you can keep track of them over time12:11
amoralejanyway, good to adjust the threshold to the actual node sizes12:12
sean-k-mooneykeep in mind the node size can differ upstream vs downstream and even in upstream12:12
morenodrelated but not related to this issue, we disabled the node_exporter in the watcher-operator, but not on devstack based jobs. I have created this review for that https://review.opendev.org/c/openstack/watcher/+/95528112:12
sean-k-mooneyupstream we should always have at least 8GB of RAM, but we can have 4 or 8 CPUs depending on performance12:12
dviroelyes, we can run these tests anywhere, so it should be adjusted to node specs12:12
morenodwe will have dynamic flavors to fix RAM and dynamic threshold to fix CPU12:13
sean-k-mooneythat's an approach, and one that compute has used with some success in whitebox, but it's not always easy to do12:13
sean-k-mooneybut ok, let's see what that looks like12:14
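To make the dynamic-threshold idea concrete, here is a minimal sketch; the helper name, the headroom factor, and the assumption that a single fully busy instance drives the imbalance are all illustrative, since the actual fix is still being written:

    def workload_balance_cpu_threshold(host_vcpus, instance_vcpus=1, headroom=0.8):
        """Pick a CPU threshold (percent) that one busy instance can exceed
        whether the CI compute node exposes 4 or 8 vCPUs.

        A fully busy instance contributes roughly instance_vcpus / host_vcpus
        of the host's CPU capacity, so the threshold sits just below that.
        """
        one_instance_share = 100.0 * instance_vcpus / host_vcpus
        return round(one_instance_share * headroom, 1)

    # 4 vCPU node -> 20.0, 8 vCPU node -> 10.0
    print(workload_balance_cpu_threshold(4), workload_balance_cpu_threshold(8))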
rlandyanything more on this topic?12:15
sean-k-mooneycrickets generally means we can move on :)12:16
rlandythank you for the input - will alert chandankumar to review the conversation12:16
rlandy#topic: (dviroel) Eventlet Removal12:16
rlandydviroel, do you want to take this one?12:16
dviroelyes12:17
dviroel#link https://etherpad.opendev.org/p/watcher-eventlet-removal12:17
dviroelthe etherpad has links to the changes ready for review12:17
dviroeli also added to the meeting etherpad12:17
dviroeltl;dr; the decision engine changes are ready for review12:17
dviroelthere are other discussions that are not code related like12:18
dviroelshould we keep a prometheus-threading job as voting? 12:18
dviroelwhich we can discuss in the change itself12:18
sean-k-mooneyhum12:19
sean-k-mooneyso I think we want to run with both versions and perhaps start with it as non-voting for now12:19
dviroelin the same line, I added a new tox py3 job, to run a subset of tests with eventlet patching disabled12:20
sean-k-mooneybut if we are going to officially support both models in 2025.2 then we should make it voting before m312:20
dviroel#link https://review.opendev.org/c/openstack/watcher/+/95509712:20
sean-k-mooneywhat I would suggest is let's start with it as non-voting and look to make the threading jobs voting around the start of August12:21
dviroelsean-k-mooney: right, I can add that as a task for m3, to move to voting 12:21
dviroeland we can look at the job's history12:21
sean-k-mooneyfor the unit test job, if it's passing I would be more aggressive with that and make it voting right away12:21
dviroelack, it is passing now, but skipping the 'applier' ones, which will be part of the next effort, to add support to the applier too12:22
sean-k-mooneyya, that's what we are doing in nova as well12:22
sean-k-mooneywe have 75% of the unit test passing maybe higher12:23
sean-k-mooneyso we are using an exclude list to skip the failing ones and burning that down12:23
dviroelnice12:23
sean-k-mooneyon https://review.opendev.org/c/openstack/watcher/+/952499/412:24
sean-k-mooney1. you wrote it, so it has an implicit +212:24
sean-k-mooneybut i have also left it open now for about a week12:24
sean-k-mooneyso I was planning to +w it after the meeting if there were no other objections12:24
dviroel++12:24
dviroeli see no objections :) 12:25
sean-k-mooneyby the way, the watcher-prometheus-integration-threading job failed on the unit test patch, which is partly why I want to keep it non-voting for a week or two to make sure that's not a regular thing12:25
dviroeltks sean-k-mooney 12:25
sean-k-mooneyoh it was just test_execute_workload_balance_strategy_cpu12:26
dviroelsean-k-mooney: but failing12:26
dviroelyeah12:26
sean-k-mooneythat's the instability we discussed above12:26
dviroeli was about to say that12:26
sean-k-mooneyok, well that's a good sign12:26
dviroeland the same issue can block the decision engine patch from merging too, just fyi12:27
dviroelor trigger some rechecks12:27
dviroelso maybe we could wait for the skip if needed12:27
dviroellets see12:27
sean-k-mooneyack, I may not have time to complete my review of the 2 later patches today, but we can try to get those merged sometime next week I think12:28
dviroelack12:28
dviroelthere is one more:12:28
dviroel#link https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95426412:28
dviroelit adds a new scenario test, with continuous audit12:28
sean-k-mooneyya, that's not really eventlet removal as such12:28
dviroelthere is a specific scenario that I wanted to test, which needs 2 audits to be created12:28
sean-k-mooneyjust missing test coverage12:28
dviroelack12:29
dviroelit is a scenario that fails when we move to threading mode12:29
sean-k-mooneyI see, do you know why?12:29
sean-k-mooneydid you update the default executor for apscheduler12:30
sean-k-mooneyto not use green pools in your threading patch12:30
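For context, a minimal sketch of what explicitly switching apscheduler off green pools can look like, assuming apscheduler 3.x and plain BackgroundScheduler usage rather than watcher's own wrapper; the max_workers value is illustrative:

    from apscheduler.executors.pool import ThreadPoolExecutor
    from apscheduler.schedulers.background import BackgroundScheduler

    # Configure the default executor as an explicit native thread pool so it
    # does not depend on eventlet monkey-patching of the threading module.
    scheduler = BackgroundScheduler(
        executors={'default': ThreadPoolExecutor(max_workers=4)})
    scheduler.start()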
dviroeltoday the continuous audit is started in the Audit Endpoint constructor, before the main decision engine service forks12:31
dviroelso this thread was running in a different process12:31
dviroeland getting an outdated model12:31
sean-k-mooneyis that addressed by https://review.opendev.org/c/openstack/watcher/+/952499/412:31
sean-k-mooneyit should be, right?12:32
dviroel#link https://review.opendev.org/c/openstack/watcher/+/95225712:32
dviroelis the one that address that12:32
sean-k-mooneyoh ok12:32
sean-k-mooneyso when that merges the new scenario test should pass12:32
dviroelhere https://review.opendev.org/c/openstack/watcher/+/952257/9/watcher/decision_engine/service.py12:32
sean-k-mooneycan you add a Depends-On to the tempest change to show that12:32
dviroelthere is already12:32
dviroelthere is also one DNM patch that shows the failure too12:33
sean-k-mooneynot that I can see https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95426412:33
dviroel#link https://review.opendev.org/c/openstack/watcher/+/954364 12:33
sean-k-mooneyoh, you have the Depends-On in the wrong direction12:33
dviroelreproduces the issue12:33
sean-k-mooneyit needs to be from watcher-tempest-plugin -> watcher in this case12:34
sean-k-mooneywell12:34
dviroelsean-k-mooney: yes and no, because the tempest change is passing too, in other jobs12:34
sean-k-mooneyi guess we could merge the tempest test first assuming it passes in eventlet mode12:34
dviroelcorrect, there are other jobs that will run that test too12:35
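For readers unfamiliar with the mechanism: Depends-On is a Zuul commit-message footer, and the direction described above means the watcher-tempest-plugin change would carry a footer such as the line below (952257 is the watcher change referenced earlier), so Zuul tests the new scenario test together with that fix:

    Depends-On: https://review.opendev.org/c/openstack/watcher/+/952257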
sean-k-mooneyok, I assume the last two failures of the prometheus job are also the real data tests?12:35
dviroel#link https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_1f5/openstack/1f55b6937c9b47a9afb510b960ef12ea/testr_results.html12:35
dviroelpassing on watcher-tempest-strategies  with eventlet12:35
sean-k-mooneyactually, since we are talking about jobs, the tempest repo does have watcher-tempest-functional-2024-112:36
dviroelchandan added a comment about the failures, yes12:36
sean-k-mooneyi.e. jobs for the stable branches, but we should add a version of watcher-prometheus-integration for 2025.112:36
dviroel++12:37
sean-k-mooneyto make sure we do not break epoxy with prometheus as we extend the test suite12:37
sean-k-mooneyok, we can do that separately, I think we can move on. I'll take a look at that test today/tomorrow and we can likely proceed with it12:38
dviroelsure12:38
dviroelI think that I covered everything, any other question?12:38
sean-k-mooneyjust a meta one12:38
sean-k-mooneyit looks like the decision engine will be done this cycle12:39
sean-k-mooneyhow do you feel about the applier12:39
sean-k-mooneyalso what is the status of the api. are we eventlet free there?12:39
dviroelack, I will still look at how the decision engine performs, in terms of resource usage and the default number of workers, but it is almost done with these changes12:40
sean-k-mooneywell, for this cycle it won't be the default so we can tweak those as we gain experience with it12:40
dviroel++12:40
sean-k-mooneyI would start small, i.e. 4 workers max12:40
sean-k-mooney*threads in the pools, not workers12:41
dviroelack, so one more change for dec-eng is expected for this12:41
dviroelyes, in the code it is called workers, but yes, the number of threads in the pool12:41
sean-k-mooneywell we have 2 concepts12:42
dviroelsean-k-mooney: i plan to work in the applier within this cycle, but not sure if we are going to have it working until the end of the cycle12:42
sean-k-mooneyworkers in oslo means the number of processes normally12:42
sean-k-mooneyoh i see... CONF.watcher_decision_engine.max_general_workers12:43
sean-k-mooneyso watcher is using workers for eventlet already12:43
sean-k-mooneyok12:43
sean-k-mooneyso in nova we are intentionally adding new config options12:44
sean-k-mooneybecause the default likely won't be the same, but I'll look at what watcher has today and comment in the review12:44
dviroelack, the background scheduler is one that has no config for instance12:44
sean-k-mooneyok its 4 already https://docs.openstack.org/watcher/latest/configuration/watcher.html#watcher_decision_engine.max_general_workers12:45
dviroeland this one ^ - is for the decision engine threadpool 12:45
sean-k-mooneyya, I think it's fine; normally the eventlet pool size in most servers is set to around 1000012:45
sean-k-mooneywhich would obviously be a problem12:45
sean-k-mooneybut 4 is fine12:45
dviroeldecision engine threadpool today covers the model synchronize threads12:46
sean-k-mooneyack12:46
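A small sketch of the workers-vs-threads distinction being discussed, using futurist (common in OpenStack services; whether watcher's decision engine pool is built exactly this way is an assumption here, and the default of 4 comes from the max_general_workers docs linked above):

    import futurist

    # watcher_decision_engine.max_general_workers defaults to 4 per the docs
    # linked above; hard-coded here instead of being read from oslo.config.
    MAX_GENERAL_WORKERS = 4

    def build_general_pool(use_eventlet=False):
        """Return a pool of MAX_GENERAL_WORKERS threads: green threads under
        eventlet, native threads in the new threading mode."""
        if use_eventlet:
            return futurist.GreenThreadPoolExecutor(max_workers=MAX_GENERAL_WORKERS)
        return futurist.ThreadPoolExecutor(max_workers=MAX_GENERAL_WORKERS)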
dviroelok, I think that we can move on12:46
dviroeland continue in gerrit12:46
rlandythanks dviroel 12:46
dviroeltks sean-k-mooney12:46
rlandythere were no other reviews added on list12:47
rlandyanyone want to raise any other patches needing review now?12:47
rlandyk - moving on ... 12:48
sean-k-mooneyi have a topic for the end of the meeting but its not strictly related to a patch12:48
rlandyoops12:48
sean-k-mooneywe can move on12:48
rlandyok - well - bug triage and then all yours 12:48
rlandy#topic: Bug Triage12:48
rlandyLooking at the status of the watcher related bugs:12:49
rlandy#link: https://bugs.launchpad.net/watcher/+bugs 12:49
rlandyhas 33 bugs listed12:49
rlandy7  of which are in progress12:49
rlandyand 2 incomplete ...12:50
rlandyhttps://bugs.launchpad.net/watcher/+bugs?orderby=status&start=012:50
rlandy#link https://bugs.launchpad.net/watcher/+bug/183740012:50
rlandy^^ only that one is marked "new"12:50
rlandydashboard, client and tempest are all under control with 2 or 3 bugs either in progress or doc related12:51
sean-k-mooneythe bug seems valid if it still happens12:51
sean-k-mooneyhowever I agree that it's low priority12:51
sean-k-mooneywe marked it as need-re-triage12:51
rlandyso raising only this one today:12:51
sean-k-mooneybecause I think we wanted to see if this was fixed12:51
rlandyhttps://bugs.launchpad.net/watcher/+bug/1877956 (bug about canceling action plans)12:52
rlandyas work was done to fix canceling action plans and greg and I tested it yesterday (admittedly from the UI) and that is now working12:53
dviroelwe found evidence in the code, but I didn't try to reproduce it12:53
sean-k-mooneyso this was just a looing bug i think12:53
sean-k-mooneywhen I looked at it before, I think this is still a problem12:53
dviroelshould be a real one12:53
dviroeland also easy to fix12:54
sean-k-mooneyyep12:54
sean-k-mooneydo we have a tempest test for cancellation yet12:54
sean-k-mooneyI don't think so, right?12:54
sean-k-mooneyI think we can do this by using the sleep action and maybe the actuator strategy12:55
rlandynot as far as I know12:55
dviroelyeah, a good opportunity to add one too12:55
sean-k-mooneyI think we should keep this open and just fix the issue when we have time12:55
sean-k-mooneyI'll set it to low?12:55
dviroel++12:55
sean-k-mooneycool12:56
rlandyok - that's it for triage ...12:56
rlandysean-k-mooney: your topic?12:56
sean-k-mooneyya...12:56
sean-k-mooneyso, who has heard of the service-types-authority repo?12:56
amoralejI haven't12:57
sean-k-mooneyfor wider context https://specs.openstack.org/openstack/service-types-authority/ it's a thing that was created a very long time ago and is not documented as part of the project creation process12:57
jgilaberme neither12:57
sean-k-mooneyI discovered, or rediscovered, it Tuesday night/yesterday12:58
sean-k-mooneyAetos is not listed there and "prometheus" does not follow the required naming conventions12:58
sean-k-mooneyso the keystone endpoint they want to use, specifically the service-type12:58
sean-k-mooneyis not valid12:59
sean-k-mooneyso they are going to have to create a service type; "tenant-metrics" is my suggestion12:59
sean-k-mooneythen we need to update the spec12:59
sean-k-mooneyand use that12:59
sean-k-mooneybut we need to get the TC to approve that and we need to tell the telemetry team about this requirement13:00
sean-k-mooneyI spent a while on the TC channel trying to understand this yesterday13:00
sean-k-mooneyso ya we need to let juan and jaromir know13:01
amoralejdid the telemetry team start using the wrong names somewhere?13:01
sean-k-mooneythey planned to start using prometheus13:01
sean-k-mooneyfor Aetos13:01
amoralejat least no need to revert any code, i hope :)13:01
sean-k-mooneynot yet13:02
sean-k-mooneybut watcher will need to know the name to do the check for the endpoint13:02
sean-k-mooneyand the installer will need to use the correct name too13:02
sean-k-mooneythe other thing I found out13:02
sean-k-mooneyis that we are using the legacy name for watcher downstream I think13:02
sean-k-mooneyhttps://opendev.org/openstack/service-types-authority/src/branch/master/service-types.yaml#L31-L3413:03
sean-k-mooneyits official service-type should be resource-optimization, not infra-optim13:03
dviroeloh, good to know13:03
sean-k-mooneyso that's a downstream bug that we should fix in the operator13:03
sean-k-mooneyboth are technically valid but it would be better to use the non-alias version13:04
sean-k-mooneyso jaromir, I believe, is on PTO for the next week or two13:04
sean-k-mooneyso we need to sync with the telemetry folks on whether we or they can update the service-types-authority file with the right content13:05
sean-k-mooneyanyway, that's all I had on this13:05
dviroeltks for finding and pursuing this issue sean-k-mooney 13:06
rlandythanks for raising this - a lot of PTOs atm ... mtunge is also out from next week so maybe we try juan if possible13:06
rlandywe are over time so I'll move on to ...13:07
sean-k-mooneyit was mainly by accident; I skimmed the TC meeting notes and the repo came up this week13:07
sean-k-mooneyor last13:07
sean-k-mooneyya we can wrap up and move on13:07
rlandyVolunteers to chair next meeting:13:08
opendevreviewMerged openstack/watcher master: Merge decision engine services into a single one  https://review.opendev.org/c/openstack/watcher/+/95249913:09
dviroelo/ 13:09
dviroelI can chair13:09
rlandythank you dviroel 13:09
rlandymuch appreciated13:09
rlandyk folks ... closing out13:09
rlandythank you for attending13:09
rlandy#endmeeting13:09
opendevmeetMeeting ended Thu Jul 17 13:09:43 2025 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)13:09
opendevmeetMinutes:        https://meetings.opendev.org/meetings/watcher_irc_meeting___july_17__2025/2025/watcher_irc_meeting___july_17__2025.2025-07-17-12.00.html13:09
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/watcher_irc_meeting___july_17__2025/2025/watcher_irc_meeting___july_17__2025.2025-07-17-12.00.txt13:09
opendevmeetLog:            https://meetings.opendev.org/meetings/watcher_irc_meeting___july_17__2025/2025/watcher_irc_meeting___july_17__2025.2025-07-17-12.00.log.html13:09
opendevreviewJoan Gilabert proposed openstack/watcher-tempest-plugin master: Add test for volume retype with zone migration  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95462514:20
opendevreviewchandan kumar proposed openstack/watcher-tempest-plugin master: Mark workload_balance_strategy_cpu|ram as unstable tests  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95530215:12
chandankumarsean-k-mooney: dviroel ^^ skipped the tests in watcher tempest plugin using unstable_tests. thank you !15:13
sean-k-mooneyI guess we can use unstable_test15:22
sean-k-mooneythat passes if the test passes and skips it if it fails15:22
sean-k-mooneyright?15:22
chandankumaryes, correct, skip it if it fails15:22
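A minimal sketch of that unstable_test behaviour (the decorator and bug 2116875 come from the discussion; the class and test body are simplified stand-ins): a passing run is reported as a pass, while a failure inside the test is converted into a skip that references the bug.

    import unittest

    from tempest.lib import decorators

    class WorkloadBalanceStrategyTests(unittest.TestCase):
        """Simplified stand-in for the real watcher-tempest-plugin class."""

        @decorators.unstable_test(bug="2116875")
        def test_execute_workload_balance_strategy_ram(self):
            # If this assertion (standing in for the real check) raised, the
            # decorator would turn the failure into a skip citing the bug.
            self.assertTrue(True)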
opendevreviewchandan kumar proposed openstack/watcher-tempest-plugin master: Mark workload_balance_strategy_cpu|ram as unstable tests  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95530215:23
sean-k-mooneyI might actually bring this up in the qa/nova channel because there are a number of volume tests that probably should have this applied15:23
sean-k-mooneybut ok, I'm happy with this15:23
chandankumarsure, Once I have the results, I will share with gmaan also for review.15:24
sean-k-mooneywell, I didn't mean for this test15:24
sean-k-mooneyfor 3-4 years now there have been a set of 2-3 volume tests in tempest that frequently fail15:25
chandankumarah ok15:25
sean-k-mooneywe might want to just acknowledge that and mark them as such15:25
gmaanchandankumar: sean-k-mooney honestly speaking, the unstable_test decorator is really not the better choice and we often forget to check those tests' stability because there is no way to do that. I mean no UI etc15:25
sean-k-mooneydue to some interaction with qemu, some of the volume tests tend to trigger kernel panics in the guest15:25
gmaaninstead of unstable_test I would suggest skipping the test and unskipping it when it is fixed15:26
sean-k-mooneywell i suggested the skip_because decorator instead15:26
sean-k-mooneychandankumar chose to use the unstable_test one15:26
sean-k-mooneywe know what the issue is, it's just going to take a few days to get it fixed15:26
gmaan++ that is explicit and we know there is one test we need to fix as it is skipped15:26
sean-k-mooneyif you think it's better to just use skip_because then chandankumar can you update to that?15:27
gmaanunstable_test is really for new tests where we do not know how they will behave in all the different config jobs, and mainly race conditions15:27
chandankumaryes sure15:27
gmaanthanks15:27
sean-k-mooneygmaan: well, you know the volume attach/detach one I'm thinking of for cinder, right15:28
gmaanI will update the unstable_test doc in Tempest to clarify that, that is something we are missing there15:28
sean-k-mooneygmaan: we talked about just not running them in nova in the past because they were so unstable15:28
sean-k-mooneythat has decreased a lot after I added zswap and some other performance tuning15:28
sean-k-mooneygmaan: what I would prefer to do, but never had time,15:28
gmaansean-k-mooney: yeah, as you know, we fixed/made it better by waiting for the VM to be ssh-able in advance before attach/detach, but that does not solve everything15:29
sean-k-mooneyis to add a retry_on_panic or retry_on_<something> decorator to those15:29
gmaanyeah zswap ++15:29
gmaanI see. that will be good15:30
sean-k-mooneythe kernel panic case I think we could eventually solve by using alpine or a different image to replace cirros15:30
sean-k-mooneybut since we can detect that in the console logs15:30
sean-k-mooneywe could have tempest retry the test once15:30
sean-k-mooneyif the vm does panic and avoid a full recheck15:30
opendevreviewchandan kumar proposed openstack/watcher-tempest-plugin master: Skip workload_balance_strategy_cpu|ram due to known bug  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95530215:31
gmaanI am really up for trying a new image like alpine ++15:31
sean-k-mooneyalthough that is more of a stop-the-bleeding approach than an actual fix like using ubuntu or alpine would be15:31
gmaan' avoid a full recheck' ?15:31
sean-k-mooneyI mean right now when we have kernel panics that cause a single test failure we do "recheck kernel panic" and all jobs have to run again15:32
sean-k-mooneyvs a decorator on the tests we know can panic, that just retries that one test once before reporting fail or success15:32
gmaanohk, ++ to that approach 15:32
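Nothing like this exists in tempest today; purely as a sketch of the idea being floated, with the decorator name, the console-log hook, and the panic marker all invented for illustration:

    import functools

    def retry_on_guest_panic(get_console_log):
        """Retry a test once if it failed while the guest kernel panicked."""
        def decorator(test):
            @functools.wraps(test)
            def wrapper(self, *args, **kwargs):
                try:
                    return test(self, *args, **kwargs)
                except Exception:
                    if "Kernel panic" not in get_console_log(self):
                        raise
                    # One retry only; a second failure propagates normally,
                    # avoiding a full "recheck kernel panic" of every job.
                    return test(self, *args, **kwargs)
            return wrapper
        return decorator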
sean-k-mooneyit's partly just finding time to do it, but maybe I can ask AI to do it...15:34
sean-k-mooneythat's the new standard solution to all tech problems, right? we finally found a replacement for just turning it off and on again15:35
dviroelchandankumar: ack, I will vote and W+1 after getting the zuul report. +1 on skip vs unstable, makes sense15:42
opendevreviewDouglas Viroel proposed openstack/watcher-tempest-plugin master: Skip workload_balance_strategy_cpu|ram due to known bug  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95530216:10
opendevreviewAlfredo Moralejo proposed openstack/watcher master: Add status_message fields to audit, action and actionplan  https://review.opendev.org/c/openstack/watcher/+/95474516:14
opendevreviewAlfredo Moralejo proposed openstack/watcher master: Skip actions automatically based on pre_condition results  https://review.opendev.org/c/openstack/watcher/+/95474616:14
opendevreviewJoan Gilabert proposed openstack/watcher-tempest-plugin master: Add test for volume retype with zone migration  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95462516:20
opendevreviewJoan Gilabert proposed openstack/watcher-tempest-plugin master: Add test for volume retype with zone migration  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95462518:06
opendevreviewDouglas Viroel proposed openstack/watcher-tempest-plugin master: Skip execute_workload_balance_strategy_cpu|ram due to known bug  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95530219:14
