*** edmondsw has quit IRC | 00:01 | |
*** tovin07_ has joined #openstack-telemetry | 00:30 | |
*** thorst_afk has joined #openstack-telemetry | 00:31 | |
*** catintheroof has quit IRC | 00:35 | |
*** zhurong has joined #openstack-telemetry | 00:49 | |
*** Tom_ has joined #openstack-telemetry | 01:34 | |
*** Tom_ has quit IRC | 01:34 | |
*** thorst_afk has quit IRC | 01:41 | |
*** lhx__ has joined #openstack-telemetry | 01:59 | |
*** thorst_afk has joined #openstack-telemetry | 02:00 | |
*** jmlowe has quit IRC | 02:00 | |
*** jmlowe has joined #openstack-telemetry | 02:01 | |
*** catintheroof has joined #openstack-telemetry | 02:03 | |
*** Tommy_Tom has joined #openstack-telemetry | 02:41 | |
*** thorst_afk has quit IRC | 02:45 | |
*** thorst_afk has joined #openstack-telemetry | 02:46 | |
*** Tom_ has joined #openstack-telemetry | 02:48 | |
*** thorst_afk has quit IRC | 02:50 | |
*** Tommy_Tom has quit IRC | 02:51 | |
*** catintheroof has quit IRC | 03:07 | |
*** ddyer has joined #openstack-telemetry | 03:11 | |
*** jmlowe has quit IRC | 03:13 | |
*** jmlowe has joined #openstack-telemetry | 03:14 | |
openstackgerrit | SU, HAO-CHEN proposed openstack/panko master: Remove operator checking when using filter https://review.openstack.org/499435 | 03:31 |
*** ddyer has quit IRC | 03:33 | |
*** links has joined #openstack-telemetry | 03:39 | |
*** psachin has joined #openstack-telemetry | 03:43 | |
*** thorst_afk has joined #openstack-telemetry | 03:47 | |
openstackgerrit | SU, HAO-CHEN proposed openstack/panko master: Remove operator checking when using filter https://review.openstack.org/499435 | 03:50 |
*** thorst_afk has quit IRC | 03:51 | |
openstackgerrit | SU, HAO-CHEN proposed openstack/panko master: Remove operator checking when using filter https://review.openstack.org/499435 | 04:01 |
openstackgerrit | SU, HAO-CHEN proposed openstack/panko master: Remove operator checking when using filter https://review.openstack.org/499435 | 04:06 |
*** Tom_ has quit IRC | 04:23 | |
*** Tom has joined #openstack-telemetry | 04:23 | |
*** Tom has quit IRC | 04:27 | |
*** zhurong has quit IRC | 04:29 | |
*** thorst_afk has joined #openstack-telemetry | 04:47 | |
*** zhurong has joined #openstack-telemetry | 04:48 | |
*** thorst_afk has quit IRC | 04:52 | |
*** vint_bra has joined #openstack-telemetry | 04:53 | |
*** vint_bra has quit IRC | 04:58 | |
*** yprokule has joined #openstack-telemetry | 05:01 | |
*** iranzo has joined #openstack-telemetry | 05:15 | |
*** thorst_afk has joined #openstack-telemetry | 05:48 | |
*** edmondsw has joined #openstack-telemetry | 05:52 | |
*** thorst_afk has quit IRC | 05:53 | |
*** psachin has quit IRC | 06:10 | |
*** psachin has joined #openstack-telemetry | 06:14 | |
*** links has quit IRC | 06:15 | |
*** psachin has quit IRC | 06:16 | |
*** psachin has joined #openstack-telemetry | 06:16 | |
*** Tom has joined #openstack-telemetry | 06:18 | |
*** pcaruana has joined #openstack-telemetry | 06:22 | |
*** rcernin has joined #openstack-telemetry | 06:40 | |
*** vint_bra has joined #openstack-telemetry | 06:41 | |
*** vint_bra has quit IRC | 06:46 | |
*** thorst_afk has joined #openstack-telemetry | 06:49 | |
*** thorst_afk has quit IRC | 06:54 | |
openstackgerrit | liuwei proposed openstack/ceilometer master: Modify the unit conversion error, add the meter item description in the document https://review.openstack.org/499476 | 06:56 |
openstackgerrit | liuwei proposed openstack/ceilometer master: Modify the unit conversion error, add the meter item description in the document https://review.openstack.org/499476 | 06:58 |
*** flg_ has joined #openstack-telemetry | 07:03 | |
*** hoonetorg has quit IRC | 07:04 | |
*** lhx__ has quit IRC | 07:07 | |
*** lhx__ has joined #openstack-telemetry | 07:08 | |
openstackgerrit | liuwei proposed openstack/ceilometer master: Modify the unit conversion error, add the meter item description in the document https://review.openstack.org/499480 | 07:11 |
*** hoonetorg has joined #openstack-telemetry | 07:16 | |
*** lhx__ has quit IRC | 07:18 | |
*** lhx__ has joined #openstack-telemetry | 07:18 | |
*** iranzo has quit IRC | 07:20 | |
*** iranzo has joined #openstack-telemetry | 07:23 | |
*** iranzo has quit IRC | 07:23 | |
*** iranzo has joined #openstack-telemetry | 07:23 | |
*** Tom has quit IRC | 07:25 | |
*** Tom has joined #openstack-telemetry | 07:25 | |
*** links has joined #openstack-telemetry | 07:29 | |
*** tesseract has joined #openstack-telemetry | 07:32 | |
*** Tom has quit IRC | 07:35 | |
*** Tom has joined #openstack-telemetry | 07:36 | |
*** Tom has quit IRC | 07:40 | |
*** thorst_afk has joined #openstack-telemetry | 07:50 | |
*** thorst_afk has quit IRC | 07:55 | |
openstackgerrit | SU, HAO-CHEN proposed openstack/python-pankoclient master: Fix help message of event list option 'filter' https://review.openstack.org/499495 | 08:00 |
*** Tom has joined #openstack-telemetry | 08:07 | |
*** edmondsw has quit IRC | 08:08 | |
*** Tom has quit IRC | 08:12 | |
*** Tom has joined #openstack-telemetry | 08:15 | |
*** openstackgerrit has quit IRC | 08:17 | |
*** Tommy_Tom has joined #openstack-telemetry | 08:19 | |
*** efoley has joined #openstack-telemetry | 08:23 | |
*** efoley_ has joined #openstack-telemetry | 08:24 | |
*** efoley has quit IRC | 08:24 | |
*** efoley_ has quit IRC | 08:25 | |
*** efoley has joined #openstack-telemetry | 08:25 | |
*** vint_bra has joined #openstack-telemetry | 08:30 | |
*** flg_ has quit IRC | 08:33 | |
*** vint_bra has quit IRC | 08:34 | |
*** liusheng has quit IRC | 08:42 | |
*** flg_ has joined #openstack-telemetry | 08:47 | |
*** thorst_afk has joined #openstack-telemetry | 08:51 | |
*** thorst_afk has quit IRC | 08:55 | |
*** flg_ has quit IRC | 08:57 | |
*** Tommy_Tom has quit IRC | 09:04 | |
*** liusheng has joined #openstack-telemetry | 09:07 | |
*** Tom has quit IRC | 09:26 | |
*** Tom has joined #openstack-telemetry | 09:27 | |
*** Tom has quit IRC | 09:27 | |
*** hoonetorg has quit IRC | 09:36 | |
*** psachin has quit IRC | 09:45 | |
*** thorst_afk has joined #openstack-telemetry | 09:51 | |
*** hoonetorg has joined #openstack-telemetry | 09:53 | |
*** thorst_afk has quit IRC | 09:56 | |
*** psachin has joined #openstack-telemetry | 09:58 | |
*** liusheng has quit IRC | 09:59 | |
*** liusheng has joined #openstack-telemetry | 10:09 | |
*** tovin07_ has quit IRC | 10:11 | |
*** vint_bra has joined #openstack-telemetry | 10:18 | |
*** vint_bra has quit IRC | 10:22 | |
*** Tom has joined #openstack-telemetry | 10:25 | |
*** Tom has quit IRC | 10:29 | |
*** Tom has joined #openstack-telemetry | 10:39 | |
*** Tom has quit IRC | 10:44 | |
*** Tom has joined #openstack-telemetry | 10:49 | |
*** raissa has joined #openstack-telemetry | 10:52 | |
*** thorst_afk has joined #openstack-telemetry | 10:52 | |
*** thorst_afk has quit IRC | 10:57 | |
*** psachin has quit IRC | 11:08 | |
*** yassine has quit IRC | 11:12 | |
*** dave-mccowan has joined #openstack-telemetry | 11:18 | |
*** psachin has joined #openstack-telemetry | 11:19 | |
*** yassine has joined #openstack-telemetry | 11:19 | |
*** thorst_afk has joined #openstack-telemetry | 11:40 | |
*** donghao has joined #openstack-telemetry | 11:50 | |
*** nijaba has quit IRC | 12:01 | |
*** vint_bra has joined #openstack-telemetry | 12:06 | |
*** alexchadin has joined #openstack-telemetry | 12:08 | |
*** vint_bra has quit IRC | 12:11 | |
*** catintheroof has joined #openstack-telemetry | 12:30 | |
*** Tom has quit IRC | 12:42 | |
*** gordc has joined #openstack-telemetry | 12:48 | |
*** raissa has quit IRC | 12:52 | |
*** lhx__ has quit IRC | 12:56 | |
*** leitan has joined #openstack-telemetry | 12:57 | |
*** dave-mccowan has quit IRC | 12:57 | |
*** raissa has joined #openstack-telemetry | 12:58 | |
*** Tom has joined #openstack-telemetry | 13:01 | |
*** Tom has quit IRC | 13:05 | |
gordc | theglo | 13:05 |
gordc | ca | 13:05 |
gordc | dammit. so tired i thought irc was google | 13:06 |
*** dave-mccowan has joined #openstack-telemetry | 13:06 | |
*** dave-mcc_ has joined #openstack-telemetry | 13:09 | |
*** dave-mccowan has quit IRC | 13:11 | |
*** zhurong has quit IRC | 13:14 | |
*** pradk has joined #openstack-telemetry | 13:15 | |
*** raissa has left #openstack-telemetry | 13:17 | |
*** alexchadin has quit IRC | 13:22 | |
dims | gordc : LOL | 13:48 |
*** edmondsw has joined #openstack-telemetry | 13:51 | |
*** psachin has quit IRC | 14:04 | |
*** cristicalin has joined #openstack-telemetry | 14:05 | |
*** jobewan has joined #openstack-telemetry | 14:06 | |
cristicalin | hello, is workload partitioning supported for the ceilometer-agent-notification in ocata ? | 14:07 |
cristicalin | when I activate this I see ceilometer creating a _lot_ of queues in rabbitmq and it starts accumulating messages | 14:08 |
cristicalin | most of them are fetched but not acknowledged, which leads to the broker filling up and the whole thing crashing down, as I share the same rabbit with the rest of openstack | 14:08 |
gordc | cristicalin: yes, it's been active for a few cycles now. how many pipeline_processing_queues do you have set? | 14:13 |
cristicalin | gordc, I don't set the pipeline_processing_queues so probably defaults to 10 | 14:14 |
cristicalin | the problem is the messages don't get ACK'ed | 14:14 |
cristicalin | so they linger in the queues; I see consumers on the queues and messages being delivered to them, but the consumers don't ACK them | 14:14 |
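(A quick way to confirm the unacked backlog described here is to list the ready and unacknowledged message counts on the broker host; the grep filter is just an example:)

    rabbitmqctl list_queues name messages_ready messages_unacknowledged | grep ceilometer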
gordc | cristicalin: do you see errors in notification agent? | 14:15 |
cristicalin | what could be causing this ? | 14:15 |
cristicalin | it complains about metrics out of order | 14:15 |
cristicalin | and about the broker blocking due to memory pressure | 14:15 |
gordc | oslo.messaging i believe defaults to ACK on completion (unless something changed in oslo.messaging) | 14:15 |
cristicalin | but outside of that nothing of relevance | 14:15 |
cristicalin | hmm, let me check my version | 14:16 |
gordc | metrics out of order is fine (or a known thing). you can handle it a bit better by enabling batching. | 14:16 |
cristicalin | gordc, how do I do that ? | 14:17 |
cristicalin | gordc, my oslo.messaging is 5.17.2 | 14:17 |
gordc | https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L70 although it's enabled by default | 14:17 |
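(The batching gordc points to is driven by two [notification] options in ceilometer.conf. A minimal sketch follows; the values are illustrative, not necessarily the shipped defaults:)

    [notification]
    batch_size = 100      # process up to this many queued messages per pass
    batch_timeout = 5     # or stop waiting after this many seconds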
gordc | hmmm. i don't recall anything in oslo.messaging regarding ACK/REQUEUE changes | 14:19 |
gordc | what's the memory pressure error? | 14:19 |
cristicalin | gordc, "The broker has blocked the connection: low on memory" | 14:19 |
cristicalin | my brokers have 8GB each | 14:19 |
cristicalin | gordc, looking at that source file, maybe this could be a problem ? https://github.com/openstack/ceilometer/blob/master/ceilometer/notification.py#L51 | 14:21 |
gordc | did you change the default? | 14:22 |
cristicalin | no | 14:22 |
cristicalin | my [notification] section only sets workload_partitioning to True and workers to 4 | 14:23 |
gordc | then it should ACK regardless. | 14:23 |
gordc | that seems fine | 14:23 |
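(For reference, the setup cristicalin describes corresponds to something like this in ceilometer.conf, using Ocata option names; pipeline_processing_queues is left unset so the default of 10 mentioned above applies:)

    [notification]
    workload_partitioning = True
    workers = 4
    # pipeline_processing_queues = 10   (unset, so the default applies)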
gordc | i think the issue is rabbit can't recover at current memory load. | 14:23 |
gordc | if you want, you could disable collector if you have it, and configure notification agent to publish straight to db (default behaviour in pike, not sure about the default in ocata) | 14:24 |
cristicalin | that's what I did | 14:25 |
cristicalin | I only run the notification agent at the moment | 14:25 |
cristicalin | i publish to gnocchi:// and direct://?dispatcher=panko | 14:25 |
cristicalin | as panko:// is only supported in pike | 14:26 |
cristicalin | and I did not backport the dispatcher patch from pike | 14:26 |
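(The publisher setup cristicalin describes would look roughly like this in the sinks section of pipeline.yaml; this is a sketch of the Ocata-era URLs, not his actual file:)

    sinks:
        - name: meter_sink
          publishers:
              - gnocchi://
              - direct://?dispatcher=panko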
gordc | yep. hmm... i'm guessing it's related to memory issue. (i haven't really seen your problem so i can only guess) | 14:26 |
*** lhx_ has joined #openstack-telemetry | 14:28 | |
cristicalin | I can try draining the queues, I've set a low ttl (5 minutes) so they should drain quite fast | 14:30 |
*** donghao has quit IRC | 14:31 | |
*** vint_bra has joined #openstack-telemetry | 14:31 | |
cristicalin | gordc, is the workload partitioning needed in a multi agent setup though ? | 14:34 |
*** lhx__ has joined #openstack-telemetry | 14:36 | |
gordc | cristicalin: it's only needed if you have multiple notification agents AND you have transformations in your pipeline | 14:36 |
*** lhx_ has quit IRC | 14:36 | |
cristicalin | i guess the cpu_util transformation counts, right ? | 14:37 |
gordc | yeah | 14:38 |
gordc | but if you're using gnocchi, you can in theory compute that in post (if you're using master) | 14:38 |
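(The cpu_util transformation discussed here is the rate_of_change transformer; in a stock pipeline.yaml the sink looks roughly like the following, reproduced from memory of the default file and with the publisher swapped for the one used above, so treat it as a sketch:)

    - name: cpu_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                target:
                    name: "cpu_util"
                    unit: "%"
                    type: "gauge"
                    scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
      publishers:
          - gnocchi://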
cristicalin | so for these messages the emitter and consumer are both the notification agent | 14:38 |
gordc | cpu_util? | 14:39 |
cristicalin | i'm using gnocchi stable/4.0 | 14:39 |
*** aagate has joined #openstack-telemetry | 14:39 | |
cristicalin | I mean all messages passing through the queue for ceilometer.*sample | 14:39 |
gordc | i don't think the rate functionality is in v4.0... sileht added it in but i see he's not in this channel anymore | 14:39 |
cristicalin | so I'm stuck with workload partitioning for now | 14:41 |
gordc | the pipeline-* queues are IPC for notification agents | 14:41 |
gordc | i'm not sure what the polling agents use but i think they also push to *.sample queues which notification agent picks up and redirects | 14:41 |
cristicalin | redirects to where ? | 14:42 |
*** spilla has joined #openstack-telemetry | 14:42 | |
gordc | gnocchi/panko in your case... whatever is in your publisher | 14:42 |
cristicalin | oh | 14:42 |
cristicalin | the publishing part is done by the notification agent, no ? | 14:43 |
gordc | right | 14:43 |
cristicalin | i see a preference for publishing over consumption (on the queue); is there a way to tip the scales and force consumption ? | 14:44 |
gordc | sorry, i don't really understand what you typed | 14:45 |
*** iranzo has quit IRC | 14:47 | |
cristicalin | my understanding is that the notification agent processes the notification.info queue, splits up the work into the *-pipe-* queues and then also consumes these queues | 14:47 |
cristicalin | is this correct ? | 14:47 |
gordc | right | 14:48 |
*** iranzo has joined #openstack-telemetry | 14:49 | |
*** iranzo has quit IRC | 14:49 | |
*** iranzo has joined #openstack-telemetry | 14:49 | |
cristicalin | so this means that the agent acts as the publisher (emitter) to the *-pipe-* queues but also as the consumer | 14:51 |
cristicalin | is this behavior split between its workers ? | 14:51 |
cristicalin | how can I make it allocate more resources to consume the *-pipe-* queues so it catches up with the workload ? | 14:52 |
gordc | in your case, you have 4 consumers listening to the notifications.info queue. all 4 notification agents can also publish to all pipe-* queues. each pipe-* queue is listened to by one of the 4 notification agents. | 14:54 |
*** yprokule has quit IRC | 14:55 | |
gordc | you can't have more than one notification agent listening to a pipe-* queue because that's the purpose of the multiple pipes... it groups related points into a pipe so any transformation that needs to be done is guaranteed to have access to the right data. | 14:56 |
gordc | if more than one consumer listens to a pipe, it means, agentA might handle a message that agentB needs | 14:56 |
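(To make the partitioning gordc describes concrete, here is a purely illustrative Python sketch, not ceilometer's actual code, of how samples can be fanned out so that everything for one resource always lands on the same pipe-* queue:)

    import hashlib

    PIPELINE_PROCESSING_QUEUES = 10  # the [notification] option discussed above

    def pipe_index(resource_id):
        """Map a resource to a stable pipe index so one agent sees all of its samples."""
        digest = hashlib.md5(resource_id.encode("utf-8")).hexdigest()
        return int(digest, 16) % PIPELINE_PROCESSING_QUEUES

    # samples for the same resource always hash to the same ceilometer-pipe-* queue,
    # so the single agent consuming that queue can apply transformations safely
    print(pipe_index("instance-0001"))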
*** jobewan is now known as jobewan_away | 14:58 | |
cristicalin | ok, so when does the agent decide to consume the queue ? | 15:00 |
cristicalin | I mean fetch a message and (eventually) ACK it | 15:01 |
gordc | it works the same as all other work queues. it just polls the queue for work. | 15:01 |
gordc | the main queue listener will poll, and take action right away on individual messages. | 15:02 |
gordc | the pipe listeners will poll, and depending on your batch settings, wait x time or for y messages, before proceeding | 15:02 |
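(The pipe-listener behaviour gordc describes, wait for either y messages or x seconds, is essentially a batch-drain loop. An illustrative Python sketch with a hypothetical queue/ack API, not oslo.messaging's real interface:)

    import time

    def drain(pipe_queue, handle_batch, batch_size=100, batch_timeout=5.0):
        """Collect up to batch_size messages, or whatever arrived within batch_timeout seconds."""
        batch = []
        deadline = time.monotonic() + batch_timeout
        while len(batch) < batch_size and time.monotonic() < deadline:
            # hypothetical queue API: returns None if nothing arrived before the timeout
            msg = pipe_queue.get(timeout=max(0.0, deadline - time.monotonic()))
            if msg is not None:
                batch.append(msg)
        if batch:
            handle_batch(batch)   # run transformers / publishers on the whole batch
            for msg in batch:
                msg.ack()         # only ACK after the batch has been processed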
cristicalin | ok, I understand | 15:04 |
gordc | yeah, it's not pretty :(. probably something more elegant could be done with kafka... or with gnocchi master | 15:05 |
*** cristicalin has quit IRC | 15:08 | |
*** cristicalin has joined #openstack-telemetry | 15:10 | |
*** cristicalin has quit IRC | 15:13 | |
*** iranzo has quit IRC | 15:22 | |
*** pcaruana has quit IRC | 15:32 | |
*** edmondsw has quit IRC | 15:40 | |
*** jobewan_away is now known as jobewan | 15:40 | |
*** ddyer has joined #openstack-telemetry | 15:41 | |
*** edmondsw has joined #openstack-telemetry | 15:51 | |
*** lhx__ has quit IRC | 15:51 | |
*** lhx__ has joined #openstack-telemetry | 15:52 | |
*** edmondsw has quit IRC | 15:56 | |
*** lhx_ has joined #openstack-telemetry | 16:10 | |
*** lhx__ has quit IRC | 16:10 | |
*** hoonetorg has quit IRC | 16:18 | |
*** jobewan has quit IRC | 16:24 | |
*** lhx_ has quit IRC | 16:24 | |
*** flg_ has joined #openstack-telemetry | 16:36 | |
*** dave-mcc_ is now known as dave-mccowan | 17:01 | |
*** psachin has joined #openstack-telemetry | 17:11 | |
*** rcernin has quit IRC | 17:12 | |
*** tesseract has quit IRC | 17:14 | |
*** edmondsw has joined #openstack-telemetry | 17:42 | |
*** Tom has joined #openstack-telemetry | 17:43 | |
*** edmondsw_ has joined #openstack-telemetry | 17:44 | |
*** edmondsw has quit IRC | 17:46 | |
*** Tom has quit IRC | 17:47 | |
*** edmondsw_ has quit IRC | 17:48 | |
*** efoley has quit IRC | 17:51 | |
*** psachin has quit IRC | 17:52 | |
*** links has quit IRC | 17:52 | |
*** rwsu has quit IRC | 18:00 | |
*** dave-mccowan has quit IRC | 18:07 | |
*** Tom has joined #openstack-telemetry | 18:09 | |
*** edmondsw has joined #openstack-telemetry | 18:12 | |
*** dave-mccowan has joined #openstack-telemetry | 18:15 | |
*** rwsu has joined #openstack-telemetry | 18:16 | |
*** edmondsw has quit IRC | 18:17 | |
*** edmondsw has joined #openstack-telemetry | 18:17 | |
*** edmondsw has quit IRC | 18:21 | |
*** edmondsw has joined #openstack-telemetry | 18:25 | |
*** edmondsw has quit IRC | 18:29 | |
*** flg_ has quit IRC | 18:35 | |
*** ddyer has quit IRC | 18:37 | |
*** flg_ has joined #openstack-telemetry | 18:39 | |
*** rcernin has joined #openstack-telemetry | 19:01 | |
*** ddyer has joined #openstack-telemetry | 19:06 | |
*** edmondsw has joined #openstack-telemetry | 19:07 | |
*** iranzo has joined #openstack-telemetry | 19:11 | |
*** iranzo has joined #openstack-telemetry | 19:11 | |
*** edmondsw has quit IRC | 19:12 | |
*** spilla has quit IRC | 19:12 | |
*** hoonetorg has joined #openstack-telemetry | 19:14 | |
*** iranzo has quit IRC | 19:19 | |
*** Tom has quit IRC | 19:22 | |
*** edmondsw has joined #openstack-telemetry | 19:28 | |
*** edmondsw has quit IRC | 19:32 | |
*** cristicalin has joined #openstack-telemetry | 20:09 | |
cristicalin | gordc, are you still around ? | 20:24 |
cristicalin | seems in ocata at least batching is broken | 20:24 |
cristicalin | my issue earlier with unacked messages in the queue went away after I set batch_size=1 | 20:24 |
cristicalin | as the batches are processed by a single thread | 20:24 |
gordc | yep | 20:25 |
gordc | hmmm... i'm not entirely sure batch_size actually changes the threads | 20:25 |
cristicalin | https://github.com/openstack/ceilometer/blob/stable/ocata/ceilometer/notification.py#L293-L296 | 20:25 |
cristicalin | that's your comment there | 20:25 |
cristicalin | anyway with batch_size=1 my test system is able to chug along with metrics every minute | 20:26 |
gordc | yeah. but that code says if batch_size == 1, don't override thread count | 20:26 |
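(The workaround cristicalin settled on amounts to a one-line change in ceilometer.conf:)

    [notification]
    batch_size = 1    # process pipe-* messages one at a time; sidesteps the unacked backlog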
cristicalin | now ... production ... just 100 times larger | 20:26 |
gordc | there might be an issue in oslo.messaging... might need to dig through bugs there. | 20:27 |
gordc | when your system was crashing. which queues were large? | 20:27 |
gordc | the pipe-* queues or notifications.* queue? | 20:28 |
cristicalin | the ceilometer-pipe-* | 20:29 |
cristicalin | actually they grew to about 8 times the size of the notification.* queue | 20:29 |
cristicalin | which was completely puzzling | 20:29 |
cristicalin | judging by the code, the get_batch_notification_listener is just a proxy to the oslo.messaging part | 20:30 |
cristicalin | so yes, probably an oslo issue at this point | 20:30 |
cristicalin | I'll dig through their releases for ocata | 20:30 |
gordc | you're using threading executor in oslo.messaging? | 20:31 |
gordc | nm. i guess that's not actually configurable anyways | 20:32 |
cristicalin | erm, I did not set any config related to that so probably the default | 20:32 |
cristicalin | it seems hardcoded to threading | 20:32 |
cristicalin | one other issue, I'm running the rest of the cloud in mitaka | 20:34 |
cristicalin | just the telemetry part is ocata | 20:34 |
cristicalin | I spotted a problem with the new collection method on the compute nodes | 20:34 |
cristicalin | the one that queries libvirt for the metadata instead of nova-api | 20:34 |
cristicalin | not all instances have a user_id and project_id, can't really figure out why | 20:35 |
cristicalin | shouldn't the code fall back to the old method if the new one doesn't work ? | 20:35 |
cristicalin | today I just hacked the code to return 'UNKNOWN' for the missing part but I guess that breaks somewhere down the line | 20:36 |
cristicalin | and switching back to naive is not really something I'm looking forward to; one of the reasons for my upgrade to ocata was to alleviate the load on the nova-api | 20:36 |
gordc | i don't know how viable it is to fallback (not sure what part of the process it fails at) | 20:37 |
gordc | but yeah, you'll need to switch back to naive if it doesn't work in mitaka | 20:37 |
gordc | you're welcome to contribute fallback code (not sure we have the resources to do that ourselves, to be honest) | 20:38 |
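(A purely illustrative sketch of the fallback being discussed, with hypothetical helper names rather than ceilometer's real discovery code: prefer the metadata libvirt exposes, and only fall back to a nova-api lookup when user_id/project_id are missing:)

    def instance_ownership(instance_uuid, libvirt_metadata, nova_client):
        """Return (user_id, project_id), falling back to nova-api if libvirt lacks them."""
        user_id = libvirt_metadata.get("user_id")
        project_id = libvirt_metadata.get("project_id")
        if user_id and project_id:
            return user_id, project_id
        # "naive"-style fallback: one extra nova-api call per affected instance
        server = nova_client.servers.get(instance_uuid)
        return server.user_id, server.tenant_id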
*** ddyer has quit IRC | 20:38 | |
cristicalin | I'll try to propose a patch; I guess my quick and dirty solution is not something that would be accepted upstream | 20:39 |
*** ddyer has joined #openstack-telemetry | 20:39 | |
gordc | probably not... or just don't let anyone know it's a quick/dirty solution. | 20:40 |
cristicalin | I'll look into adding the fallback as I guess that still should decrease the load on nova-api | 20:40 |
cristicalin | also not sure why nova instances would be missing that info; I haven't yet identified a pattern in which instances are missing it | 20:40 |
gordc | i think when we implemented it, all the information we needed was there for some time. i guess we were wrong | 20:41 |
cristicalin | let's poke on #openstack-nova maybe they know | 20:42 |
cristicalin | tracked that capability down to something nova added in juno so in theory anything created post juno should have the user and project id | 21:02 |
cristicalin | in my case this env started with juno so ... probably a bug in there | 21:03 |
cristicalin | maybe your assumption was valid that the data should always be in there | 21:04 |
gordc | cristicalin: cool cool. good to know in theory we support "from juno" | 21:04 |
gordc | i guess now that mitaka is eol, we just need to ensure everything newer works. :p | 21:05 |
cristicalin | I'll look into the nova problem if it turns out there is actually a valid case for that info to be missing i'll propose the fallback code | 21:05 |
gordc | cristicalin: kk, works for me | 21:05 |
*** thorst_afk has quit IRC | 21:08 | |
*** dave-mccowan has quit IRC | 21:20 | |
*** ddyer has quit IRC | 21:27 | |
*** leitan has quit IRC | 21:29 | |
*** prodriguez83 has joined #openstack-telemetry | 21:30 | |
*** catintheroof has quit IRC | 21:31 | |
*** thorst_afk has joined #openstack-telemetry | 21:34 | |
*** thorst_afk has quit IRC | 21:36 | |
*** flg_ has quit IRC | 21:37 | |
*** thorst_afk has joined #openstack-telemetry | 21:39 | |
*** ddyer has joined #openstack-telemetry | 21:56 | |
*** prodriguez83 has quit IRC | 22:10 | |
*** ddyer has quit IRC | 22:12 | |
*** pradk has quit IRC | 22:13 | |
*** ddyer has joined #openstack-telemetry | 22:16 | |
*** cristicalin has quit IRC | 22:19 | |
*** edmondsw has joined #openstack-telemetry | 22:30 | |
*** rcernin has quit IRC | 22:30 | |
*** gordc has quit IRC | 22:31 | |
*** ddyer has quit IRC | 23:00 | |
*** thorst_afk has quit IRC | 23:39 | |
*** edmondsw has quit IRC | 23:48 | |
*** catintheroof has joined #openstack-telemetry | 23:55 | |
*** edmondsw has joined #openstack-telemetry | 23:59 |