hemanth | tkajinam or telemetry cores: Hi, I see the hardware.memory.* metrics were removed ( https://opendev.org/openstack/ceilometer/commit/a28cef7036edc2ecb0f60b5d27a97735482e7f98 ). I would like to collect compute host memory metrics (compute host CPU metrics are already collected from nova notifications). I tried to support this via nova notifications, but new metric monitors are frozen in nova and the suggestion is to implement the same in | 01:57 |
---|---|---|
hemanth | ceilometer-compute-agent ( see feedback in https://review.opendev.org/c/openstack/nova/+/939044). Just want to check whether this approach seems OK for ceilometer before I start implementation. Can I go specless and submit the implementation patch? (A little background: the compute host memory metrics are required by the watcher service for some of its strategies) | 01:57 |
tkajinam | hemanth, technically we can re-add these metrics, captured by a different discovery/polling mechanism, but I personally prefer using different tools, because gathering host metrics is quite generic and there are a number of tools for it | 04:10 |
tkajinam | hemanth, I've seen some work in watcher to use a prometheus backend. For example, can we use prometheus-node-exporter + the prometheus backend instead of adding new metrics to ceilometer? | 04:11 |
hemanth | the prometheus backend work is ongoing in watcher this cycle, and it can be used directly without involving ceilometer. However, this feature won't be available for earlier watcher releases. I am looking for a simpler solution that can be backported as far as Caracal, so that watcher Caracal can use the compute host metrics for its strategies. | 04:17 |
hemanth | The simplest way I could find is to have either nova or ceilometer collect those metrics directly, as the agent is already running on the compute host, without depending on external tools | 04:18 |
tkajinam | although the telemetry project does not follow the stable release policy, I'm not in favor of backporting any feature unless it fixes a clear defect in the software | 04:19 |
tkajinam | so I don't feel like it's quite backportable but there might be different opinions from other cores | 04:19 |
hemanth | ack, yes I would like to confirm the backporting policy before I start implementing anything | 04:20 |
tkajinam | I'm not too sure why a feature backport is not accepted in watcher while that risk is pushed to telemetry | 04:20 |
tkajinam | though again we can discuss it within involved parties | 04:20 |
hemanth | I haven't asked explicitly in watcher about a backport; one reason is that we have some breakage in watcher due to eventlet, so backporting a feature is not straightforward | 04:21 |
hemanth | I got the gist of your thoughts about the backport and how to go ahead with this feature if required, thanks for that | 04:21 |
tkajinam | ok | 04:22 |
tkajinam | one more thing I want to add: if this is an "intermediate" solution that is intended to be replaced by a different solution later, then I think we should avoid merging it | 04:22 |
tkajinam | as we can easily expect that such an implementation will stop being maintained well quite soon. | 04:23 |
hemanth | ack, got it and completely agree with you | 04:23 |
tkajinam | so I'd like to understand the long term plan as well | 04:23 |
tkajinam | :-) | 04:23 |
-opendevstatus- NOTICE: The Gerrit service on review.opendev.org will be offline momentarily for a restart to put some database compaction config changes into effect, and will return within a few minutes | 22:55 |
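[Editor's note] The prometheus-node-exporter + Prometheus backend approach suggested above can be sketched with a minimal Prometheus scrape configuration. This is an illustrative fragment, not from the discussion: the hostnames are placeholders, and the scrape interval is an arbitrary example value. node_exporter listens on port 9100 by default and exposes host memory metrics such as `node_memory_MemAvailable_bytes` and `node_memory_MemTotal_bytes`.

```yaml
# prometheus.yml — minimal sketch for scraping node_exporter on compute hosts.
# Hostnames below are hypothetical placeholders.
scrape_configs:
  - job_name: "compute-node"
    scrape_interval: 30s
    static_configs:
      - targets:
          - "compute-01.example.org:9100"   # node_exporter default port
          - "compute-02.example.org:9100"
```

A consumer such as watcher's prometheus datasource could then query host memory usage with a PromQL expression like `node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes`, avoiding any new ceilometer pollster.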
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!