Monday, 2019-11-18

*** Liang__ has joined #openstack-cinder00:04
*** Liang__ has quit IRC00:23
*** tosky has quit IRC00:46
*** smcginnis_ has joined #openstack-cinder00:53
*** ChanServ sets mode: +o smcginnis_00:53
*** brinzhang has joined #openstack-cinder01:08
*** whfnst has joined #openstack-cinder01:16
*** ociuhandu has joined #openstack-cinder01:31
*** ociuhandu has quit IRC01:41
openstackgerrit: Chih Yu Wu proposed openstack/cinder master: Synology: Improve session expired error handling  https://review.opendev.org/692064  01:48
*** zhanglong has joined #openstack-cinder02:24
*** ociuhandu has joined #openstack-cinder02:26
*** ociuhandu has quit IRC02:30
*** brinzhang_ has joined #openstack-cinder02:47
*** ruffian_sheep has joined #openstack-cinder02:49
*** brinzhang has quit IRC02:50
*** ociuhandu has joined #openstack-cinder03:00
*** smcginnis_ has quit IRC03:02
*** ociuhandu has quit IRC03:07
*** ruffian_sheep has quit IRC03:47
*** udesale has joined #openstack-cinder04:02
*** davee__ has joined #openstack-cinder04:03
*** davee_ has quit IRC04:04
*** bhagyashris has joined #openstack-cinder04:20
*** baojg has quit IRC04:22
*** ociuhandu has joined #openstack-cinder04:27
*** ociuhandu has quit IRC04:29
*** ociuhandu has joined #openstack-cinder04:30
*** ociuhandu has quit IRC04:36
*** bhagyashris has quit IRC04:45
*** anastzhyr has joined #openstack-cinder04:50
*** abhishekk has joined #openstack-cinder04:51
anastzhyr: Hello folks, may I kindly ask once again for a review of my patches, please?  04:52
anastzhyr: https://review.opendev.org/#/c/688853/ https://review.opendev.org/#/c/691527/ https://review.opendev.org/#/c/691574/  04:53
*** ociuhandu has joined #openstack-cinder04:53
*** ociuhandu has quit IRC04:58
*** ociuhandu has joined #openstack-cinder04:58
*** bhagyashris has joined #openstack-cinder05:03
*** ociuhandu has quit IRC05:03
*** bhagyashris has quit IRC05:27
*** ociuhandu has joined #openstack-cinder05:30
*** bhagyashris has joined #openstack-cinder05:31
*** ociuhandu has quit IRC05:35
*** ircuser-1 has quit IRC05:35
openstackgerrit: Chih Yu Wu proposed openstack/cinder master: Synology: Improve session expired error handling  https://review.opendev.org/692064  05:49
*** brinzhang has joined #openstack-cinder05:54
*** brinzhang_ has quit IRC05:58
*** Luzi has joined #openstack-cinder06:11
*** awalende has joined #openstack-cinder06:16
*** awalende has quit IRC06:20
*** bhagyashris has quit IRC06:45
*** bhagyashris has joined #openstack-cinder06:52
*** anastzhyr has quit IRC07:00
*** ociuhandu has joined #openstack-cinder07:02
*** rcernin has quit IRC07:04
*** ociuhandu has quit IRC07:06
*** sahid has joined #openstack-cinder07:32
*** dpawlik has joined #openstack-cinder07:34
*** tkajinam has quit IRC08:02
*** tesseract has joined #openstack-cinder08:15
*** tosky has joined #openstack-cinder08:20
*** ociuhandu has joined #openstack-cinder08:22
*** sahid has quit IRC08:29
*** ociuhandu has quit IRC08:29
*** sahid has joined #openstack-cinder08:33
*** Vadmacs has joined #openstack-cinder08:57
*** ociuhandu has joined #openstack-cinder09:04
*** martinkennelly has joined #openstack-cinder09:05
*** nanzha has joined #openstack-cinder09:10
*** awalende has joined #openstack-cinder09:16
*** awalende has quit IRC09:18
*** awalende has joined #openstack-cinder09:26
openstackgerrit: Merged openstack/cinder master: iSCSI driver initialization should fail for Primera backend  https://review.opendev.org/692360  09:29
*** awalende has quit IRC09:30
openstackgerrit: Chih Yu Wu proposed openstack/cinder master: Synology: Improve session expired error handling  https://review.opendev.org/692064  09:31
*** awalende has joined #openstack-cinder09:36
*** awalende has quit IRC09:37
*** whoami-rajat has joined #openstack-cinder09:43
*** brinzhang_ has joined #openstack-cinder09:52
*** brinzhang has quit IRC09:55
*** ociuhandu has quit IRC09:57
*** ociuhandu has joined #openstack-cinder09:58
*** ociuhandu has quit IRC10:03
*** kaisers has quit IRC10:04
*** lennyb has quit IRC10:06
*** zhanglong has quit IRC10:06
*** bhagyashris has quit IRC10:10
*** pcaruana has joined #openstack-cinder10:10
*** nanzha has quit IRC10:11
*** nanzha has joined #openstack-cinder10:12
*** abhishekk has quit IRC10:12
*** bhagyashris has joined #openstack-cinder10:12
*** brinzhang_ has quit IRC10:13
*** e0ne has joined #openstack-cinder10:14
*** kaisers has joined #openstack-cinder10:20
zigo: Hi there! I noticed that in Debian Buster (i.e. Rocky), Cinder volume migration never finishes. It looks like the new volume on the destination host is created correctly and attached to the compute node as it should, but the old volume stays in status "migrating" and is never deleted.  10:24
*** n-saito has quit IRC10:24
zigo: I'm guessing this has been fixed in either Nova, Cinder, or os-brick, though which one should I try to upgrade first?  10:24
zigo: And can someone point me to a patch fixing the problem in Rocky, so I could cherry-pick just that one into Buster and the package would be easily accepted by the Buster stable release team?  10:25
zigo: I'm attempting to upgrade os-brick to 2.5.8 first (from 2.5.5).  10:29
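The single-patch backport workflow zigo describes is a plain `git cherry-pick`; a self-contained toy demo follows (repo, branch, and file names are invented — the real fix would be fetched from Gerrit, not committed by hand):

```shell
# Build a toy repo with a master branch and a stable/rocky branch.
git init -q demo && cd demo
git -c user.email=d@example.com -c user.name=demo commit -q --allow-empty -m "base"
git branch stable/rocky

# Land a hypothetical bug fix on master.
echo "the fix" > fix.txt
git add fix.txt
git -c user.email=d@example.com -c user.name=demo commit -q -m "fix volume migration"
FIX=$(git rev-parse HEAD)

# Backport exactly that one commit onto the stable branch.
git checkout -q stable/rocky
git -c user.email=d@example.com -c user.name=demo cherry-pick "$FIX"
cat fix.txt
```

Shipping a single upstream commit this way keeps the distro diff minimal, which is what stable release teams generally want to see.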
*** CeeMac has joined #openstack-cinder10:37
*** davidsha has joined #openstack-cinder10:42
*** whoami-rajat has quit IRC10:43
*** rcernin has joined #openstack-cinder10:51
openstackgerrit: Raghavendra Tilay proposed openstack/cinder stable/train: iSCSI driver initialization should fail for Primera backend  https://review.opendev.org/694743  10:52
*** ociuhandu has joined #openstack-cinder11:02
zigo: OK, I found it: the issue is in Nova, and it is fixed by the point release in Rocky.  11:12
*** awalende has joined #openstack-cinder11:14
*** awalende has quit IRC11:14
*** udesale has quit IRC11:17
*** awalende has joined #openstack-cinder11:20
*** awalende has quit IRC11:21
*** brinzhang has joined #openstack-cinder11:23
*** brinzhang has quit IRC11:47
*** bhagyashris has quit IRC11:56
*** bhagyashris has joined #openstack-cinder11:57
*** dpawlik has quit IRC11:58
*** dpawlik has joined #openstack-cinder12:01
*** brinzhang has joined #openstack-cinder12:03
*** zhanglong has joined #openstack-cinder12:04
*** ociuhandu has quit IRC12:13
*** Liang__ has joined #openstack-cinder12:15
*** brinzhang has quit IRC12:19
*** brinzhang has joined #openstack-cinder12:20
*** bhagyashris has quit IRC12:21
*** rcernin has quit IRC12:31
*** lpetrut has joined #openstack-cinder12:42
*** ociuhandu has joined #openstack-cinder12:43
*** zhanglong has quit IRC13:02
*** ociuhandu has quit IRC13:04
*** zhanglong has joined #openstack-cinder13:07
*** lennyb has joined #openstack-cinder13:09
*** zhanglong has quit IRC13:13
*** sapd1 has quit IRC13:16
*** lseki has joined #openstack-cinder13:17
*** sfernand has joined #openstack-cinder13:17
*** nanzha has quit IRC13:24
*** nanzha has joined #openstack-cinder13:25
*** zhanglong has joined #openstack-cinder13:29
*** KeithMnemonic has joined #openstack-cinder13:40
*** rosmaita has joined #openstack-cinder13:42
*** whoami-rajat has joined #openstack-cinder13:47
*** yan0s has joined #openstack-cinder13:52
*** whfnst has quit IRC13:53
*** awalende has joined #openstack-cinder13:55
*** zhanglong has quit IRC13:55
*** zhanglong has joined #openstack-cinder13:57
*** lennyb has quit IRC14:14
*** awalende has quit IRC14:26
*** jbernard has quit IRC14:27
*** davee__ has quit IRC14:28
*** jbernard has joined #openstack-cinder14:29
*** brinzhang has quit IRC14:33
*** zhanglong has quit IRC14:34
*** jbernard_ has joined #openstack-cinder14:47
jungleboyj: Morning.  14:52
e0ne: evening :)  14:56
*** eharney has joined #openstack-cinder14:57
*** Luzi has quit IRC14:59
*** tesseract has quit IRC15:01
*** tesseract has joined #openstack-cinder15:01
*** dpawlik has quit IRC15:14
*** udesale has joined #openstack-cinder15:22
*** ociuhandu has joined #openstack-cinder15:26
*** ociuhandu has quit IRC15:28
*** Liang__ has quit IRC15:48
*** whoami-rajat__ has joined #openstack-cinder15:49
*** smcginnis_ has joined #openstack-cinder15:52
*** ChanServ sets mode: +o smcginnis_15:52
jungleboyj: :-)  15:54
*** dklyle has quit IRC15:57
*** dklyle has joined #openstack-cinder15:58
*** udesale has quit IRC16:06
*** jmlowe has joined #openstack-cinder16:07
openstackgerrit: Eric Harney proposed openstack/os-brick master: iSCSI: Support mix of FQDN and IPs for portals  https://review.opendev.org/642493  16:08
*** ociuhandu has joined #openstack-cinder16:14
*** tpsilva has joined #openstack-cinder16:17
*** ociuhandu has quit IRC16:19
*** whoami-rajat has quit IRC16:27
*** ociuhandu has joined #openstack-cinder16:30
*** umbSublime has joined #openstack-cinder16:48
umbSublime: I'm trying to find out exactly how a Cinder `volume create` goes from cinder-api to cinder-scheduler and eventually cinder-volume. How does cinder-api send the creation request to cinder-scheduler? With an AMQP message (I see no logs about that, even in debug)?  16:49
*** whoami-rajat__ has quit IRC16:56
umbSublime: I'm asking because I currently have an issue with Cinder where volumes stay in 'creating', I can't see anything about them in the scheduler logs, and the AMQP queue "cinder_scheduler" stays empty  17:01
*** yan0s has quit IRC17:02
*** jmlowe has quit IRC17:04
*** ociuhandu has quit IRC17:17
*** nanzha has quit IRC17:20
*** ociuhandu has joined #openstack-cinder17:28
*** davidsha has quit IRC17:39
*** ircuser-1 has joined #openstack-cinder17:40
*** ociuhandu has quit IRC17:40
*** e0ne has quit IRC17:43
jungleboyj: umbSublime: Yeah, the API sends it to the scheduler, and it should be via AMQP.  17:43
jungleboyj: If hemna_ is around, he knows the create flow very well.  17:43
jungleboyj: Generally, if you are stuck in 'creating' it is because of an issue with the backend.  17:44
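The flow jungleboyj describes — the API casts to the scheduler over AMQP, the scheduler picks a backend and casts to that backend's volume queue — can be sketched with plain Python queues standing in for the AMQP queues. This is a toy model, not Cinder's actual code; method and queue names are only illustrative:

```python
import queue

# Toy stand-ins for the AMQP queues oslo.messaging would use.
scheduler_q = queue.Queue()                    # "cinder-scheduler"
volume_qs = {"host1@scaleio": queue.Queue()}   # "cinder-volume.<host>@<backend>"

def api_create_volume(vol):
    # cinder-api: mark the volume 'creating' and cast (fire-and-forget RPC).
    vol["status"] = "creating"
    scheduler_q.put(("create_volume", vol))

def scheduler_run_once():
    # cinder-scheduler: filter/weigh backends, then cast to the chosen one.
    method, vol = scheduler_q.get(timeout=1)
    vol["host"] = "host1@scaleio"
    volume_qs[vol["host"]].put((method, vol))

def volume_run_once():
    # cinder-volume: actually create the volume on the backend.
    method, vol = volume_qs["host1@scaleio"].get(timeout=1)
    vol["status"] = "available"
    return vol

vol = {"id": "vol-1"}
api_create_volume(vol)
scheduler_run_once()
print(volume_run_once())
# → {'id': 'vol-1', 'status': 'creating'... becomes 'available' with 'host' set
```

A volume that stays in 'creating' means a message was lost or never consumed somewhere between these three hops.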
umbSublime: jungleboyj: that's also what I would assume, but I see nothing in the scheduler logs about this creation  17:47
umbSublime: I'm not sure if it's related, but I see a bunch of "recvfrom(4, 0x4b03020, 8192, 0, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)" when stracing cinder-scheduler (it appears to be when talking to the SQL DB)  17:47
umbSublime: I've checked the connection string and everything is fine  17:47
jungleboyj: Hmmm, strange.  17:48
umbSublime: So on the same FD I see this kind of thing looping, but with different SQL every time: http://paste.openstack.org/show/786289/  17:50
umbSublime: Is there some sort of cinder.conf timeout I can tweak for the TCP wait on SQL connections?  17:54
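For the question above: in the Queens/Rocky era these knobs live in cinder.conf's `[database]` section and come from oslo.db. Option names vary slightly between releases, so treat this as a sketch to check against the installed version:

```ini
[database]
# SQLAlchemy connection URL (already set in a working deployment)
# connection = mysql+pymysql://cinder:***@dbhost/cinder
max_retries = 10       # retries on a failed DB connection (-1 = retry forever)
retry_interval = 10    # seconds between those retries
idle_timeout = 3600    # recycle pooled connections older than this
                       # (renamed connection_recycle_time in newer oslo.db)
```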
*** smcginnis_ has quit IRC18:01
umbSublime: I was a bit off with my diagnosis; it appears the timeouts happen when the scheduler tries to talk to RabbitMQ: http://paste.openstack.org/show/786291/  18:09
*** sahid has quit IRC18:10
umbSublime: well, actually I see it on both the SQL port and the RabbitMQ port  18:12
*** mvkr has quit IRC18:32
openstackgerrit: Gorka Eguileor proposed openstack/cinderlib master: Enhance extend functionality  https://review.opendev.org/690713  18:34
openstackgerrit: Gorka Eguileor proposed openstack/cinderlib master: Fix LVM extend volume  https://review.opendev.org/694524  18:39
jungleboyj: Well, if you are seeing timeouts with RabbitMQ in the scheduler, it sounds like it isn't able to talk to the volume service, which would explain the volume being stuck in the 'creating' state.  18:41
*** martinkennelly has quit IRC18:48
*** smcginnis_ has joined #openstack-cinder18:50
*** ChanServ sets mode: +o smcginnis_18:50
*** gouthamr_ is now known as gouthamr18:51
*** tesseract has quit IRC18:52
*** e0ne has joined #openstack-cinder19:01
umbSublime: Yes, and all that is understandable, but the RabbitMQ cluster is in perfect health, and so is the SQL cluster  19:08
umbSublime: anyhow, I think I'm chasing a red herring  19:10
umbSublime: jungleboyj: do you know if this looks like the scheduler accepting the volume:create task? If so, I wonder what the next step is :/  19:17
jungleboyj: umbSublime: I can't remember what the next state is off the top of my head.  19:23
umbSublime: but those lines do suggest the volume got scheduled, right?  19:24
jungleboyj: https://docs.openstack.org/api-ref/block-storage/v3/  19:24
jungleboyj: The statuses are in there.  19:24
jungleboyj: Yes, I believe if it gets to create, the scheduler accepted it. Otherwise it would go to error.  19:25
*** dpawlik has joined #openstack-cinder19:26
umbSublime: right, and after those lines I do see the volume's "os-vol-host-attr:host" attribute get set  19:26
umbSublime: So then I'm assuming that from there the scheduler puts a message in the proper cinder-volume queue for the volume  19:27
jungleboyj: That sounds right.  19:27
jungleboyj: Then you should see a message in the c-vol logs that it received the request.  19:28
umbSublime: I wish I did  19:28
umbSublime: I don't see anything in the AMQP queue, and no logs about that in c-volume.log  19:28
umbSublime: but the weird part is that the c-vol service can surely talk to AMQP just fine, because it's reporting its state back to the scheduler every N seconds  19:29
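A quick way to check this hypothesis from the RabbitMQ side is with the standard rabbitmqctl commands (queue names follow Cinder's `cinder-volume.<host>@<backend>` pattern, as seen later in the log):

```shell
# List Cinder-related queues with pending-message and consumer counts.
rabbitmqctl list_queues name messages consumers | grep cinder

# Show which connections are consuming the volume queues.
rabbitmqctl list_consumers | grep cinder-volume
```

A queue with consumers but no messages piling up means either nothing is being published to it, or consumption is immediate — which is the ambiguity being debugged here.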
*** jdillaman has joined #openstack-cinder19:30
jungleboyj: Ok. That is weird.  19:30
umbSublime: (or, if there is a message being sent to c-vol's queue, it's consumed really quickly and nothing about it is logged)  19:30
jungleboyj: smcginnis_: Any thoughts on the above?  19:32
jungleboyj: You know the code a bit better than I do.  19:32
umbSublime: this is what I see for oslo and cinder.manager in volume.log: http://paste.openstack.org/show/786294/  19:34
*** tosky has quit IRC19:34
umbSublime: Should I be able to follow the req-id from my POST all the way to cinder-volume.log?  19:42
umbSublime: I see it in the scheduler logs, for example  19:42
jungleboyj: Yeah, that looks like the normal periodic job from the volume process.  19:44
jungleboyj: Are you seeing the updates being received in the scheduler?  19:44
*** sfernand has quit IRC19:47
umbSublime: yup, I do  19:48
umbSublime: So to me that would mean "c-sched<->rabbit<->c-vol" works fine  19:48
jungleboyj: Agreed.  19:48
umbSublime: yet even after I see the scheduler deciding where to create the volume, I see nothing in the AMQP queue  19:49
umbSublime: Let me just anonymize a few logs to show the full trace of what I'm seeing  19:49
jungleboyj: umbSublime: Ok. Sorry, I am also not the best person to be asking.  19:50
jungleboyj: e0ne: You still up?  19:50
e0ne: jungleboyj: yes  19:50
umbSublime: Sure, I'll just paste here for visibility. Thanks already for your help; I'll understand if you don't reply back :)  19:51
jungleboyj: Hey, can you look over the discussion between umbSublime and me?  19:51
* e0ne is reading19:51
jungleboyj: I am still pretty jetlagged and mush-brained.  19:51
e0ne: as I understand it, the volume is stuck in the 'creating' state and there is nothing in the scheduler logs  19:55
umbSublime: Here is a concise view of where I'm at: http://paste.openstack.org/show/786295/  19:55
e0ne: umbSublime: am I right?  19:55
umbSublime: e0ne: that's what I was thinking at first, but now I'm fairly certain it gets past the scheduling part  19:56
e0ne: umbSublime: do you have debug logging enabled in the scheduler?  19:56
umbSublime: yep, the output in my last paste is with debug logging enabled  19:57
*** Vadmacs has quit IRC20:01
e0ne: what backend do you have?  20:01
e0ne: LVM or not?  20:01
e0ne: and what version of OpenStack do you use?  20:02
*** eharney has quit IRC20:02
jungleboyj: e0ne: Looks like a ScaleIO environment, based on the logs.  20:03
e0ne: ok  20:03
jungleboyj: I need to step away for a bit. I will check back later.  20:04
e0ne: umbSublime: how many cinder-volume instances do you have?  20:05
*** adeberg has joined #openstack-cinder20:08
umbSublime: the backend is ScaleIO. I have 2 hosts running cinder-volume, and each has 3 services  20:10
umbSublime: one for each cluster  20:11
umbSublime: the cinder-volume services are running on a different host from the one cinder-api and cinder-scheduler run on  20:13
e0ne: sounds good  20:13
umbSublime: IIRC this is a Queens deployment  20:13
e0ne: do you see any messages in RabbitMQ?  20:14
*** jbernard has quit IRC20:14
umbSublime: that queue appears to be empty: "cinder-volume.ne1-iaas-blk01.cloud.ubisoft.onbe@scaleio_cluster2"  20:15
*** jmlowe has joined #openstack-cinder20:16
*** jbernard has joined #openstack-cinder20:17
e0ne: are you able to stop the volume services temporarily?  20:18
umbSublime: of course. Nothing is really working ATM  20:19
umbSublime: Ohh, I think I see what you are getting at. Stop the services -> create a volume -> check if messages get into the queue?  20:19
e0ne: yep  20:20
umbSublime: What parameter could I tweak on the scheduler side so it doesn't time out right away on a RabbitMQ message?  20:20
umbSublime: I'll give that a shot  20:21
e0ne: but you need to do it fast enough that the scheduler won't mark the volume services as 'down'  20:21
umbSublime: Doing it now  20:22
*** dpawlik has quit IRC20:22
e0ne: in that case you will see whether *any* volume service reads messages from the queue  20:22
umbSublime: Ok, so the queue went from 2 consumers to 0, but still no messages piled up  20:23
umbSublime: I did, however, see the same logs in the scheduler saying it accepted the volume create and selected the same queue again  20:25
umbSublime: I started c-volume again and I see the same queue went back to 2 consumers  20:26
*** dpawlik has joined #openstack-cinder20:27
e0ne: I don't understand what is going on :(  20:31
e0ne: I suppose you don't have any errors or warnings in the c-scheduler or c-volume logs  20:32
umbSublime: no :(  20:37
umbSublime: c-volume is just looping, sending the usage metrics back to c-sched. And c-sched does nothing when no new API calls are sent  20:37
umbSublime: Could it be an RPC timeout on the scheduler side, timing out while "sending" the message to AMQP?  20:37
umbSublime: I see some weird EAGAIN errors when stracing the scheduler on the AMQP port  20:38
umbSublime: Then again, I have "rpc_reply_retry_attempts=-1" set, so it should retry infinitely  20:39
umbSublime: My brain hurts from debugging this today XD  20:39
*** dpawlik has quit IRC20:41
umbSublime: What are the cinder-volume_fanout_..... queues for?  20:42
*** dpawlik has joined #openstack-cinder20:44
umbSublime: Would the scheduler, in DEBUG, log something if it failed to pass a message to AMQP?  20:44
e0ne: you need to enable the oslo.messaging debug logs  20:45
e0ne: honestly, I don't remember how to do it  20:45
umbSublime: I'll look into it  20:50
umbSublime: thanks!  20:50
umbSublime: About the cinder-volume_fanout queues, do you know what they are used for?  20:55
e0ne: sorry, I'm not an expert in messaging :(  20:56
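On the fanout question: with oslo.messaging's rabbit driver, a `*_fanout_*` queue is one server's private binding to a fanout exchange, used for broadcast casts to every server on a topic (as opposed to the direct/topic queues used for targeted RPC). A toy model of the broadcast behaviour — pure Python, not the real driver:

```python
import queue

class FanoutExchange:
    """Every bound queue receives a copy of every message (broadcast)."""
    def __init__(self):
        self.queues = []

    def bind(self):
        # One "<topic>_fanout_<uuid>" queue is created per consuming server.
        q = queue.Queue()
        self.queues.append(q)
        return q

    def publish(self, msg):
        for q in self.queues:
            q.put(msg)

fanout = FanoutExchange()
vol_a = fanout.bind()   # cinder-volume on host A
vol_b = fanout.bind()   # cinder-volume on host B
fanout.publish("update capabilities")

print(vol_a.get(), vol_b.get())
# → update capabilities update capabilities
```

By contrast, a message published to a shared direct queue would be delivered to exactly one of the consumers, which is why the create-volume cast goes to a single backend's queue.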
*** smcginnis_ has quit IRC21:01
*** e0ne has quit IRC21:07
umbSublime: adeberg: no prob, thanks a lot for the pointers :) I'll come back and let you know if I ever figure out what was wrong  21:09
umbSublime: e0ne ^  21:09
*** eharney has joined #openstack-cinder21:12
*** awalende has joined #openstack-cinder21:16
*** awalende has quit IRC21:21
*** awalende has joined #openstack-cinder21:26
*** dpawlik has quit IRC21:29
*** awalende has quit IRC21:31
*** awalende has joined #openstack-cinder21:36
*** ociuhandu has joined #openstack-cinder21:38
*** awalende has quit IRC21:40
*** jbernard has quit IRC21:42
*** dpawlik has joined #openstack-cinder21:43
*** ociuhandu has quit IRC21:43
*** e0ne has joined #openstack-cinder21:43
*** jbernard has joined #openstack-cinder21:45
*** smcginnis_ has joined #openstack-cinder21:45
*** ChanServ sets mode: +o smcginnis_21:45
*** awalende has joined #openstack-cinder21:46
*** jbernard_ has quit IRC21:46
*** e0ne has quit IRC21:49
*** awalende has quit IRC21:56
*** awalende has joined #openstack-cinder21:57
*** awalende has quit IRC22:07
*** awalende has joined #openstack-cinder22:07
*** dpawlik has quit IRC22:08
*** awalende_ has joined #openstack-cinder22:08
*** awalende has quit IRC22:12
openstackgerrit: Merged openstack/os-brick stable/train: Fix FC scan too broad  https://review.opendev.org/694314  22:17
*** tosky has joined #openstack-cinder22:26
*** pcaruana has quit IRC22:26
*** kaisers1 has joined #openstack-cinder22:28
*** kaisers has quit IRC22:29
*** mvkr has joined #openstack-cinder22:40
*** awalende_ has quit IRC22:46
*** rcernin has joined #openstack-cinder22:46
*** tkajinam has joined #openstack-cinder23:09
*** Liang__ has joined #openstack-cinder23:30
*** ociuhandu has joined #openstack-cinder23:31
*** ociuhandu has quit IRC23:36
*** n-saito has joined #openstack-cinder23:36
*** zhanglong has joined #openstack-cinder23:47

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!