Thursday, 2017-03-16

[00:00] <dansmith> okay then I'll put that and the quota object change up front
[00:01] <melwitt> mriedem: oh yeah, I know that song. I guess people must wave their arms when they listen to it
[00:02] <mriedem> i know i do
[00:04] <melwitt> \o\
[00:06] <melwitt> dansmith: ignore my comment on the incompatible signature, just read your newest comment
[00:06] <dansmith> well, I jumped to conclusions since it looked like you change the order, which is only half true
[00:06] <dansmith> but yeah
[00:06] <dansmith> *changed
[00:07] <melwitt> yeah. agreed it would be better to add a new one instead of wedge functionality into the old one
[00:07] <dansmith> I'm getting a bunch of quota errors with that limit check in front of basically unchanged code
[00:08] <dansmith> like things going negative or something
[00:09] <melwitt> well, that's not expected :(
[00:09] <dansmith> about to get a report
[00:10] <dansmith> http://pastebin.com/raw/BZWnPULS
[00:10] <dansmith> for example
[00:12] <melwitt> okay, that's because limit_check can't be used on ReservableResources, which is part of the change in nova/quota.py
[00:12] <melwitt> sigh
[00:13] <dansmith> is that because the last hunk at the bottom of quota.py thinks everything should be reservable?
[00:13] <dansmith> i.e., that list should be augmented in each patch in this new list?
[00:14] <melwitt> so, the last hunk is the types of the resources, old stuff has a mix of Countable and Reservable. Countable uses limit_check, Reservable uses quota.reserve
[00:14] <melwitt> so the third patch changes everything to Countable and limit_check
[00:14] <dansmith> right, so each patch in my new series should add its resource type to the list of countable things
[00:14] <dansmith> and the limit_check patch being in front should basically have no change in that hunk, right?
[00:15] <melwitt> if each patch is changing a reserve to a limit_check, then yeah
[00:15] <melwitt> they have to go in lock-step
[00:16] <dansmith> melwitt: right, so this is one patch in the series: http://pastebin.com/raw/4Lwcw7EN
[00:16] <dansmith> (minus its tests)
[00:16] <dansmith> so that one should add     CountableResource('server_groups', _server_group_count, 'server_groups'),
[00:16] <dansmith> right?
[00:16] <melwitt> dansmith: ah yes, that makes it clear. Yeah, not add, but change
[00:17] <melwitt> change it from ReservableResource to CountableResource and include it's new counting function
[00:17] <melwitt> *its
[00:17] <dansmith> ah yeah I see
[00:23] <dansmith> okay removing that chunk from the first patch to add the new limit check got me much farther
[00:23] <dansmith> 195 fails down to 46
[00:24] <dansmith> pretty much all the rest look like this:     AttributeError: <nova.quota.ReservableResource object at 0x7fcbda866310> does not have the attribute 'count'
[00:24] <dansmith> which seems to be something in the _get_usages() change that expects something from ReservableResource?
[00:24] <melwitt> the opposite, expecting there to be a count attr
[00:25] <melwitt> like, it's assuming everything is a CountableResource
[00:25] <dansmith> _get_usage hard codes which things it thinks are countable I guess
[00:25] <melwitt> yeah, since they were part of the same change. to do both, it would have to check the resource type and do each thing accordingly
[00:26] <melwitt> I think. I don't remember what _get_usage did originally. looking
[00:27] <melwitt> oh, it's new
[00:27] <dansmith> it should probably look at the resource class to see if it's countable, and maybe more details need to be in there to know if it's per-project or something?
[00:27] <dansmith> definitely looks like that could be generalized
[00:27] <melwitt> okay, yeah. I wrote that to "get counts" when it used to pull usages from the usages table in the db
[00:28] <dansmith> okay well I think if that was generalized then this first patch would pass
[00:28] <melwitt> like here it used to be a db.quota_usage_get_all_by_project_and_user() call https://review.openstack.org/#/c/416521/22/nova/quota.py@219
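The generalization dansmith suggests could look roughly like the following sketch: dispatch on whether the resource actually has a `count` attribute instead of assuming everything is countable. The class and function names are illustrative stand-ins, not nova's real `_get_usages`:

```python
# Sketch of a generalized _get_usages: countable resources are counted on
# demand, while reservable ones fall back to the tracked usage records
# (what db.quota_usage_get_all_by_project_and_user() used to return).
# All names are illustrative stand-ins, not nova's code.

class CountableResource:
    def __init__(self, name, count):
        self.name, self.count = name, count

class ReservableResource:
    def __init__(self, name):
        self.name = name

def get_usages(resources, context, project_id, tracked_usages):
    usages = {}
    for resource in resources:
        if hasattr(resource, 'count'):
            # CountableResource: count live objects on demand
            usages[resource.name] = resource.count(
                context, project_id)['project'][resource.name]
        else:
            # ReservableResource: fall back to tracked usage records
            usages[resource.name] = tracked_usages.get(resource.name, 0)
    return usages
```

With a dispatch like this, a series that converts resources one patch at a time can keep the unconverted ReservableResources working rather than raising the `does not have the attribute 'count'` error seen above.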
[00:29] <melwitt> dansmith: cool, thanks. please feel free to upload as-is and I can fix them. I feel bad you're spending too much time on it
[00:29] <dansmith> good, you should feel bad
[00:29] <melwitt> :(
[00:29] <melwitt> that's proof, the unsmiley face
[00:30] <melwitt> looks like we're not out of the woods on this bug either https://bugs.launchpad.net/nova/+bug/1670627
[00:30] <openstack> Launchpad bug 1670627 in OpenStack Compute (nova) ocata "quota is always in-use after delete the ERROR instances" [Critical,In progress] - Assigned to Matt Riedemann (mriedem)
[00:30] <melwitt> I dunno if you saw the latest comment
[00:31] <dansmith> melwitt: do you really want me to upload this on top of your stack? if so I will, I just don't want to explode your set
[00:31] <mriedem> gfdi
[00:31] <dansmith> I'm about to retire for the evening anyway
[00:32] <melwitt> dansmith: I dunno, is there any other option?
[00:32] <mriedem> we do a rollback on the quota change if the instance destroy fails
[00:32] <dansmith> melwitt: I can push it up to github or something, but I kinda think we should do this anyway, so that's just indirection
[00:32] <dansmith> melwitt: I was just going to try to get it a little closer before pushing, but.. it's up to you
[00:33] <melwitt> dansmith: yeah, I guess there's a small chance it could get more gnarly. but maybe that's gonna have to be dealt with anyway. yolo
[00:33] <dansmith> having spent hours on the patch the way it is, I don't think it can get much gnarly-er
[00:34] <dansmith> I didn't really start grok'ing most of it until I was ass-deep in trying to split it up
[00:34] <melwitt> mriedem: if we rollback on a destroy fail, then things should be working right I thought
[00:34] <melwitt> dansmith: hah, that's fair. yeah, go ahead and upload
[00:34] <mriedem> melwitt: only if we hit InstanceNotFound
[00:34] <mriedem> melwitt: they didn't say what error they hit though
[00:39] <melwitt> hm, well in _delete_while_booting, it quota.commits immediately, before trying to delete, so that seems wrong. the rollback for InstanceNotFound also wouldn't do anything if commit already happened. but I didn't think we touched _delete_while_booting
[00:40] <melwitt> yeah, these are all committing immediately, which I think would make rollback do nothing
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Count resources to check quota for cells  https://review.openstack.org/416521
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Make Quotas object favor the API database  https://review.openstack.org/410945
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Add online migration to move quotas to API database  https://review.openstack.org/410946
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Add Quotas.check_deltas() and set the stage for magic to happen  https://review.openstack.org/446239
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Count server group quotas  https://review.openstack.org/446240
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Count tenant_networks quotas  https://review.openstack.org/446241
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Count used_limits quotas  https://review.openstack.org/446242
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Remove useless quota_usage_refresh from nova-manage  https://review.openstack.org/446243
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Add get_count_by_vm_state() to Instance object  https://review.openstack.org/446244
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Add SecurityGroup.get_counts()  https://review.openstack.org/446245
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Add FixedIP.get_count_by_project()  https://review.openstack.org/446246
[00:40] <openstackgerrit> Dan Smith proposed openstack/nova master: Add FloatingIP.get_count_by_project()  https://review.openstack.org/446247
[00:40] <melwitt> thanks dansmith
[00:40] <dansmith> the final patch is like half the size
[00:41] <melwitt> woohoo
[00:41] <dansmith> fwiw, this makes it so much more obvious why this is good:
[00:41] <dansmith> https://review.openstack.org/#/c/446239/1
[00:41] <dansmith> in just quota stuff, +500,-1000
[00:42] <dansmith> anyway, I've made my mess, time for dinner
[00:42] <melwitt> yeah, that's a win for sure
[00:42] <melwitt> haha, o/
[00:44] <melwitt> mriedem: quotas.commit and quotas.rollback are mutually exclusive and I should have noticed that on the review. rollback won't work if commit already happened
[00:49] <mriedem> melwitt: then wouldn't it also be wrong in _delete_while_booting? https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1807
[00:49] <mriedem> that's what i copied this from
[00:50] <melwitt> yes, I think it's wrong there too
[00:51] <melwitt> I'm looking to see what rollback does if commit already occurred
[00:51] <melwitt> I think it's just a no-op because it would find no reservation record
[00:52] <melwitt> commit would have deleted the reservation record. then rollback comes along and reads the reservation, finds nothing, and then does nothing
[00:55] <mriedem> well hells bells
[00:55] <mriedem> i'm pretty sure _delete_while_booting copied that behavior from _delete
[00:56] <mriedem> so normal delete we get the reservation here https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1943
[00:56] <mriedem> so it probably copied from https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2028
[00:57] <mriedem> looks like in that flow though, we don't commit unless we delete https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2023
[00:57] <mriedem> so (1) create reservation, (2) delete instance = works, then commit; else rollback
[00:57] <mriedem> i think that's how things are working in the old delete flow
[00:57] <mriedem> before the cellsv2ification
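The two orderings mriedem lays out can be illustrated with a toy model. Everything below is a simplified stand-in for nova's Quotas object, not its real implementation; the point is only that rollback() after commit() is a no-op, because commit() already consumed the reservation records:

```python
# Toy model of reserve/commit/rollback ordering. Not nova's real Quotas
# class; it only demonstrates why committing before the delete makes a
# later rollback useless.

class Quotas:
    def __init__(self):
        self.reservations = None
        self.in_use = 0

    def reserve(self, delta):
        self.reservations = delta           # create the reservation record

    def commit(self):
        if self.reservations is not None:
            self.in_use += self.reservations
            self.reservations = None        # record is deleted here

    def rollback(self):
        # reads the reservation, finds nothing after a commit, does nothing
        if self.reservations is not None:
            self.reservations = None

def delete_correct(quotas, delete_instance):
    # (1) create reservation, (2) delete; commit on success, else rollback
    quotas.reserve(-1)
    try:
        delete_instance()
    except Exception:
        quotas.rollback()                   # reservation still exists: undone
        raise
    quotas.commit()

def delete_buggy(quotas, delete_instance):
    # commit immediately, as in the _delete_while_booting pattern discussed
    quotas.reserve(-1)
    quotas.commit()
    try:
        delete_instance()
    except Exception:
        quotas.rollback()                   # no-op: usage change already applied
        raise
```

In the buggy ordering, a failed destroy leaves the quota usage permanently altered even though the instance was never deleted, which matches melwitt's "mutually exclusive" observation.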
[01:01] <mriedem> also, i think i'm *this* close to getting this functional test to pass
[01:01] <mriedem> fixtures up the wazoo
[01:06] <mriedem> melwitt: i also went with your idea to stub out Claim.__init__, that was a good idea
[01:08] <mriedem> oh f yes it works now
[01:20] <mriedem> here it comes
[01:20] <openstackgerrit> Matt Riedemann proposed openstack/nova master: Add populate_retry to schedule_and_build_instances  https://review.openstack.org/444106
[01:20] <openstackgerrit> Matt Riedemann proposed openstack/nova master: Add a functional regression/recreate test for bug 1671648  https://review.openstack.org/446209
[01:20] <openstack> bug 1671648 in OpenStack Compute (nova) "Instances are not rescheduled after deploy fails" [High,In progress] https://launchpad.net/bugs/1671648 - Assigned to Shunli Zhou (shunliz)
[01:41] <openstackgerrit> Ken'ichi Ohmichi proposed openstack/nova master: Clarify os-start API description  https://review.openstack.org/446264
[03:25] <openstackgerrit> Wang Shilong proposed openstack/nova master: Lustre support  https://review.openstack.org/446288
[04:34] <openstackgerrit> Maho Koshiya proposed openstack/nova master: Add confirm resized server functional negative tests  https://review.openstack.org/421074
[05:29] <jith> Hi all, I have configured an OpenStack Kilo setup on Debian Jessie with one controller node and two compute nodes, using GlusterFS for shared storage mounted at /var/lib/nova/instances on both compute nodes. I can migrate VMs between nodes, but live migration throws the following error. Please guide me: http://pastebin.com/DHPWEkaZ
[05:56] <openstackgerrit> Sivasathurappan Radhakrishnan proposed openstack/nova master: Permit Live Migration of Rescued Instances  https://review.openstack.org/308198
[06:49] <jith> Hi all, I have configured an OpenStack Kilo setup on Debian Jessie with one controller node and two compute nodes, using GlusterFS for shared storage mounted at /var/lib/nova/instances on both compute nodes. I can migrate VMs between nodes, but live migration throws the following error. Please guide me: http://pastebin.com/DHPWEkaZ
[06:55] <openstackgerrit> lcsong proposed openstack/nova master: Modify some grammatical mistakes.  https://review.openstack.org/446321
[07:05] <openstackgerrit> Sivasathurappan Radhakrishnan proposed openstack/nova master: API changes for live migration of rescued instance  https://review.openstack.org/328280
[07:13] <alexey_weyl> Hi, I have a small question about the availability zone.
[07:14] <openstackgerrit> Sivasathurappan Radhakrishnan proposed openstack/nova master: Port binding based on events during live migration  https://review.openstack.org/434870
[07:14] <alexey_weyl> When I change the name or the availability zone itself in the host aggregates I don't receive an update event on the oslo bus. Is nove supposed to send such an event?
[07:15] <alexey_weyl> *nove=nova
[07:36] <openstackgerrit> Sivasathurappan Radhakrishnan proposed openstack/nova-specs master: Live Migration of Rescued Instances  https://review.openstack.org/347161
[07:41] <openstackgerrit> Mikhail Feoktistov proposed openstack/nova master: Add is_vz_container function  https://review.openstack.org/445947
[07:55] <openstackgerrit> Sivasathurappan Radhakrishnan proposed openstack/nova-specs master: Live Migration of Rescued Instances  https://review.openstack.org/347161
[07:58] <nirendra> Nova evacuate on Ocata is failing. Is there any known bug corresponding to this, or a workaround?
[08:10] <gmann> alex_xu: hi
[08:10] <alex_xu> gmann: hi
[08:11] <gmann> alex_xu: show_extensions seems to enforce all extension discovery policy - https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/extension_info.py#L285
[08:11] <gmann> alex_xu: instead of asked extension discovery policy only
[08:12] <gmann> self._get_extensions(context) is being called in show and it checks all extension discovery policy
[08:13] <alexey_weyl> Hi, I wanted to ask please why there is no id for the availability-zone?
[08:13] <alex_xu> gmann: emm...looks like yes
[08:14] <alex_xu> gmann: but it works correctly?
[08:15] <gmann> alex_xu: because of fatal=False on L226 right?
[08:16] <gmann> if anyone disabled other extension on policy discovery then it would not affect of getting asked extension right?
[08:17] <alex_xu> gmann: yes
[08:18] <gmann> alex_xu: yea, actually it came up on https://review.openstack.org/#/c/431740/4  i am not sure why they cannot use show_extension for all, let me check with author
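The `fatal=False` behavior gmann points at can be sketched as follows. The context object, policy names, and helpers below are illustrative assumptions, not nova's real `extension_info` code; the point is that a non-fatal policy check silently filters disabled extensions instead of raising:

```python
# Sketch of non-fatal policy checks during extension listing. The policy
# table, Context class, and extension names here are made up for
# illustration only.

DISCOVERY_POLICIES = {
    'os-agents': True,       # allowed
    'os-baremetal': False,   # operator disabled this discovery policy
}

class Context:
    def can(self, policy, fatal=True):
        allowed = DISCOVERY_POLICIES.get(policy, True)
        if not allowed and fatal:
            raise PermissionError(policy)
        return allowed

def get_extensions(context):
    # fatal=False: a disabled policy just drops that extension from the
    # listing, so checking every policy does not break show() on an
    # allowed extension
    return [name for name in DISCOVERY_POLICIES
            if context.can(name, fatal=False)]
```

This matches the exchange above: show() ends up evaluating every discovery policy, but because the checks are non-fatal the caller still gets the extension they asked for as long as its own policy allows it.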
[08:20] <Kevin_Zheng> ping gibi
[08:20] <alexey_weyl> can anyone please help me
[08:20] <alexey_weyl> I wanted to ask please why there is no id for the availability-zone?
[08:21] <alex_xu> gmann: emm....strange
[08:22] <gmann> alex_xu: yea from user point of view show_extension will be same, its only internal implementation which does extra checks on policies
[08:24] <gmann> alexey_weyl: for AZ we have only name
[08:24] <alex_xu> gmann: but the discovery policy is something we deprecated?
[08:24] <alex_xu> why people still depend on it
[08:24] <gmann> alexey_weyl: i mean its always different name so id not needed
[08:25] <gmann> alex_xu: you mean extensions right, or discovery policy also we deprecated?
[08:27] <bauzas> good morning Nova
[08:28] <alex_xu> gmann: i think the discovery policy
[08:28] <alex_xu> gmann: https://review.openstack.org/#/c/427872/ johnthetubaguy proposes to remove it in that spec
[08:28] <alex_xu> gmann: not sure whether we have a note to say it's already deprecated
[08:29] <gmann> alex_xu: yea, i was thinking the same. i think we do not have one, as there was no good way to deprecate policy...
[08:29] <openstackgerrit> jichenjc proposed openstack/nova master: Prevent delete cell0 in nova-manage command  https://review.openstack.org/433476
[08:29] <alexey_weyl> gmann: that is correct, but if i change the name of the availability zone then vitrage doesn't know which zone it was originally
[08:30] <alexey_weyl> gmann: meaning, Vitrage doesn't know what zone was changed, because the previous name doesn't appear.
[08:30] <alex_xu> gmann: not like config opt
[08:34] <gmann> alex_xu: yea
[08:34] <gmann> alexey_weyl: humm, yea name change, changes the AZ completely
[08:35] <alexey_weyl> gmann: so how can Vitrage know which availability zone was changed, besides storing a cache of all of the different availability zones?
[08:35] <gmann> alexey_weyl: but you can get the old AZ from GET server
[08:36] <gmann> alexey_weyl: i mean servers booted with AZ will be having the same
[08:39] <alexey_weyl> gmann: do you think I can open a bug that we would like the availability zone to have an ID?
[08:40] <alexey_weyl> gmann: but still, when Vitrage is doing "get all availability zones" it won't know what AZ was changed.
[08:40] <openstackgerrit> Béla Vancsics proposed openstack/nova master: Reduce code complexity - libvirt/config.py  https://review.openstack.org/359879
[08:40] <bauzas> FWIW, I'm personally opposed to accepting the AZ changing its name unless there are no instances left in there
[08:41] <bauzas> it's a source of confusion
[08:41] <gmann> bauzas: yea me too. alexey_weyl: is it a much required use case?
[08:41] <bauzas> creating an AZ is possibly one of the most important things to make sure
[08:42] <bauzas> for example, you can default instances to have an AZ by modifying the config to name the default AZ
[08:42] <gmann> AZ is something managed with name so changing name is all if you do not want to deal with old one
[08:42] <bauzas> it's also an end-user UX
[08:42] <bauzas> meaning that users see the AZs
[08:42] <gmann> yea
[08:43] <bauzas> when you want to modify a name, then you trample all your users
[08:45] <alexey_weyl> bauzas: But shouldn't the AZs have an ID, like every entity in the database, so that when its properties are changed we will know whose data was changed by the ID?
[08:46] <alexey_weyl> bauzas: gmann: another question, how can I receive update events of the AZ on the oslo messaging bus?
[08:46] <bauzas> alexey_weyl: AZs are just an aggregate metadata
[08:46] <bauzas> alexey_weyl: it's not a nova specific object, neither persisted like you think
[08:47] <alexey_weyl> bauzas: Let me explain our use case in Vitrage, and let's see what can be done.
[08:49] <alexey_weyl> bauzas: In Vitrage we monitor many entities from different OpenStack projects, as well as physical entities, and show them to the user and to the system, so that all the alarms and root-cause alarms in the system can be shown.
[08:50] <alexey_weyl> bauzas: Thus, when a user changes the AZ name, it needs to be reflected in the Vitrage entity graph that has the AZ vertex in it.
[08:50] <openstackgerrit> Béla Vancsics proposed openstack/nova master: Reduce code complexity - libvirt/config.py  https://review.openstack.org/359879
[08:50] <alexey_weyl> bauzas: When Vitrage receives the updated data from nova about the AZs, it doesn't know which AZ was changed or deleted or what has happened.
[08:51] <alexey_weyl> bauzas: Thus if the AZ had an ID, Vitrage would know which AZ was changed or deleted.
[08:51] <bauzas> alexey_weyl: I'm personally in favor of not accepting AZ changes if there are already some instances
[08:51] <bauzas> I could write a spec
[08:52] <bauzas> so it would mean a behavioural change, but post that microversion, it shouldn't be possible to change an AZ name
[08:52] <alexey_weyl> bauzas: Does that sound like a reasonable use case from your side?
[08:53] <alexey_weyl> bauzas: didn't quite understand your last sentence.
[08:54] <bauzas> alexey_weyl: lemme rephrase
[08:55] <bauzas> alexey_weyl: since the AZ information is just an aggregate metadata that is user-facing, we already have the ID you need
[08:55] <gmann> alexey_weyl: "when a user changes the AZ name": by user you mean admin/operator, right?
[08:55] <bauzas> changing that metadata value for that specific key is a source of confusion, so I'm not in favor of accepting that
[08:55] <alexey_weyl> bauzas: of course
[08:55] <alexey_weyl> bauzas: so, you have that id already? can we get it from somewhere?
[08:56] <bauzas> it's just an aggregate...
[08:59] <alexey_weyl> bauzas: don't quite understand. what can I do?
[09:00] <bauzas> alexey_weyl: do you know https://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs ?
[09:00] <gmann> alexey_weyl: we have id for aggregate not for AZ
[09:01] <alexey_weyl> gmann: I am familiar with the ID in the aggregate host.
[09:02] <bauzas> alexey_weyl: https://developer.openstack.org/api-ref/compute/?expanded=show-aggregate-details-detail tells you that availability_zone is just a specific metadata key for an aggregate
[09:03] <bauzas> alexey_weyl: we're leaking the DB PK on the API unfortunately, but we added an aggregate UUID since 2.41
[09:03] <alexey_weyl> gmann: I understand.
[09:04] <bauzas> https://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs is explaining that AZs are just a conceptual object that Nova doesn't deal with
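bauzas's point, that an AZ is nothing but the `availability_zone` metadata key on a host aggregate, can be illustrated with a sketch. The record below only mimics the general shape of an aggregate as returned by the os-aggregates API (the aggregate uuid exists since microversion 2.41, per the discussion); all field values are made up:

```python
# Illustrative aggregate record: only the aggregate has an id/uuid; the
# availability zone is derived from its metadata, so there is no separate
# AZ object (and no AZ id) to query.

aggregate = {
    "id": 1,                                         # DB PK, leaked via the API
    "uuid": "6ba7b810-9dad-11d1-80b4-00c04fd430c8",  # stable id since 2.41
    "name": "rack-a",
    "availability_zone": "az-east",
    "metadata": {"availability_zone": "az-east"},
    "hosts": ["compute1", "compute2"],
}

def az_names(aggregates):
    """Derive the set of AZ names from aggregate metadata; renaming an AZ
    is just rewriting this metadata value, which is why a consumer like
    Vitrage cannot correlate the old and new names."""
    return {agg["metadata"]["availability_zone"]
            for agg in aggregates
            if "availability_zone" in agg["metadata"]}
```

A consumer that needs a stable identity could track the aggregate uuid instead of the AZ name, since the uuid survives a metadata change.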
alexey_weylI see09:10
alexey_weylbauzas: I see09:10
alexey_weylbauzas: Then I have please another question09:10
alexey_weylbauzas: I saw that when I change the AZ name or the aggregated host name I don't receive any update event on the oslo messaging bus09:11
alexey_weylbauzas: how can i receive those updates?09:11
bauzasyou mean a notification ?09:11
alexey_weylcorrect09:11
bauzasbecause it's an admin operation09:11
bauzasso you are supposed to know what you do09:11
*** lpetrut has joined #openstack-nova09:12
alexey_weylbauzas: but vitrage needs to get a notification of such a change.09:12
bauzasI'd rather name it a callback...09:12
alexey_weylbauzas: ok09:12
bauzasalexey_weyl: well, you can propose to add some new notifications09:13
alexey_weylbauzas: but still, when even an admin changes something we need to get some update that such a thing was done, how can we get it?09:13
alexey_weyli see09:13
bauzasgibi: do you know if we need a spec for adding notifications? my guts say yes09:13
bauzasjohnthetubaguy: ^09:14
*** gszasz has joined #openstack-nova09:16
gibibauzas: so far we went with specless bp for searchlight related additions09:17
gibiKevin_Zheng: ping, I'm here now09:18
bauzasgibi: good to know09:19
Kevin_Zhenggibi: Ah, Hi, I was going to ask something for notification, but I got it sorted out :)09:19
gibiKevin_Zheng: OK :)09:19
bauzasgibi: I thought the notification object was somehow needing a consensus before writing the implementation, but okay09:19
bauzasgibi: here, we are talking of sending notifications about aggregate modifications09:20
gibibauzas, alexey_weyl: yeah, if this is a new non trivial object then I agree that a spec for the data model is a good thing to have09:20
bauzasgibi: do we notify some aggregate-related objects ?09:20
openstackgerritjichenjc proposed openstack/nova master: Prevent delete cell0 in nova-manage command  https://review.openstack.org/43347609:20
bauzasalready I mean09:20
gibibauzas: yeah, aggregate.create and .delete are both already transformed to versioned format09:20
bauzasso we already emit those ?09:21
gibibauzas: aggregate.add_host and remove_host are in the pipe09:21
gibibauzas: yes09:21
* gibi looking up the code09:21
bauzasgibi: okay, that's what alexey_weyl was looking for09:21
bauzashe would like to subscribe on aggregate metadata change09:21
gibibauzas, alexey_weyl: here is the data modell https://github.com/openstack/nova/blob/master/nova/notifications/objects/aggregate.py09:21
bauzasgibi: perfect09:22
gibibauzas, alexey_weyl: I think metadata is already part of the notification09:22
bauzasso alexey_weyl just needs to have the aggregate update to emit09:22
bauzasgibi: yup09:22
bauzasgibi: is there already a bp for tracking this ?09:23
bauzasAFAICS, a specless BP should be enough given we already have the notification model09:23
alexey_weylgibi: bauzas: I am not quite familiar with all the internals of nova.09:23
gibibauzas: the aggregate ones are part of the notification transformation bp09:24
*** karimb has joined #openstack-nova09:24
gibibauzas: as legacy had those as well09:24
bauzasgibi: yeah, but that blueprint is not intended to add *more* notifications, just transform the existing, correct?09:24
gibibauzas: the metadata update part is something that is not transformed yet09:24
gibibauzas: we have metadata update legacy notification https://github.com/openstack/nova/blob/master/nova/objects/aggregate.py#L44209:24
bauzasgibi: lemme verify if we emit legacy notifications09:25
bauzasah-ah !09:25
gibibauzas: so we have to just transform that09:25
bauzasgibi: so alexey_weyl would just need to contribute to the transformation bp09:25
gibibauzas, alexey_weyl: it seems so, yes.09:25
*** Jeffrey4l has joined #openstack-nova09:25
bauzasperfect09:25
alexey_weylbauzas: gibi: I didn't quite understand all you said. can you please summarize that for me? what do I need to do?09:25
bauzasalexey_weyl: so, your problem is solved from a nova perspective09:26
gibialexey_weyl: sure09:26
bauzasalexey_weyl: as gibi said, we already emit notifications on aggregate update09:26
bauzashttps://github.com/openstack/nova/blob/master/nova/objects/aggregate.py#L44209:26
bauzasalexey_weyl: the thing is, we emit the information on a non-versioned dict09:26
gibialexey_weyl: we are creating a new notification interface for nova, to have a proper interface. these are called versioned notifications.09:27
bauzasalexey_weyl: so gibi is the lead for transforming our legacy payloads into a richer format09:27
alexey_weylbauzas: gibi: I am listening to all of the notifications from nova and I didn't see any notification from nova when I changed the name of the AZ or the Aggregated host09:27
gibialexey_weyl: do you listen to notifications topic?09:27
alexey_weylyes09:27
*** links has quit IRC09:27
alexey_weylwe have added to notification topics a vitrage_notification09:28
bauzashttps://github.com/openstack/nova/blob/master/nova/objects/aggregate.py#L443 is the notification name09:28
gibialexey_weyl: hm, you should get aggregate.updatemetadata.end notification09:28
alexey_weyland thus we can receive all the data09:28
openstackgerritBéla Vancsics proposed openstack/nova master: Reduce code complexity - libvirt/config.py  https://review.openstack.org/35987909:28
alexey_weylbauzas: gibi: I will check it again just to make sure.09:28
bauzasin theory, that notification has a payload including the metadata that changed https://github.com/openstack/nova/blob/master/nova/objects/aggregate.py#L42309:28
gibialexey_weyl: if you don't see that after updating metadata of an aggregate then we have some bug09:29
gibialexey_weyl: I will try to verify that as well09:29
*** tovin07_ has quit IRC09:29
gibialexey_weyl: btw, does vitrage plan to move from the legacy nova notifications to the versioned ones?09:30
*** jichen has quit IRC09:30
bauzasgibi: AFAICS, if you do the WSGI action on os-aggregates to set_metadata, you call the Aggregate object method that emits the legacy notification09:31
bauzashttps://github.com/openstack/nova/blob/master/nova/api/openstack/compute/aggregates.py#L19609:31
bauzasand https://github.com/openstack/nova/blob/master/nova/compute/api.py#L449009:32
alexey_weylgibi: I have to admit that this is the first time I hear about that, so I will need to check it out. But if this is something that all the projects have then vitrage will have to use the new notifications09:32
*** jaosorior is now known as jaosorior_lunch09:32
*** namnh has joined #openstack-nova09:32
*** dgonzalez_ has joined #openstack-nova09:33
*** tpatzig_ has joined #openstack-nova09:33
*** seife_ has joined #openstack-nova09:33
*** david_1 has joined #openstack-nova09:33
gibialexey_weyl: we are in the process of defining and implementing the new interface so it is a good time to chime in from the consumer side09:33
gibibauzas: that is my understanding of the code as well09:33
*** namnh_ has quit IRC09:34
gibibauzas: but if alexey_weyl does not see the notification then there is something we don't see09:34
alexey_weylgibi: bauzas: I have checked it again, and I didn't receive any notification about AZ and Aggregated host when changed their names09:34
*** seife_ has quit IRC09:35
*** dgonzalez_ has quit IRC09:35
*** tpatzig_ has quit IRC09:35
*** david_1 has quit IRC09:35
gibialexey_weyl: do you get aggregate.create when you create a new aggregate?09:35
bauzasalexey_weyl: and what if you change a metadata key that is *not* availability_zone ?09:35
* gibi is digging up a running test system to try as well, but it will be Mitaka based09:35
alexey_weylgibi: bauzas: When I add, delete, or update instances I receive all the notifications.09:36
*** yingjun has quit IRC09:37
alexey_weylgibi: bauzas: maybe it is related to the fact that I perform the changes in the horizon admin tab?09:37
alexey_weylgibi: bauzas: I will try to create a new aggregated host via cli09:37
gibialexey_weyl: I think horizon should not be the problem09:37
*** derekh has joined #openstack-nova09:38
alexey_weylgibi: bauzas: Ok, I have created a new aggregate using the cli and I see it in horizon but I didn't receive any update09:39
gibialexey_weyl: strange09:40
gibialexey_weyl: did you check the notification from vitrage logs? doesn't vitrage filter on event_type?09:41
* gibi is still in the process to try to reproduce the issue09:42
alexey_weylI have checked, I have put a print before the filtering09:42
alexey_weylgibi: I have checked, I have put a print before the filtering09:42
*** links has joined #openstack-nova09:44
alexey_weylgibi: bauzas:09:44
alexey_weylgibi: bauzas: I might know the problem09:44
gibialexey_weyl: I set the notification driver to log in nova.conf and created an aggregate09:44
gibialexey_weyl: and I got the log:09:44
gibialexey_weyl: /var/log/nova/nova-api.log:2017-03-16T10:43:00.616385+01:00 cic-3 nova-api[22020]: 2017-03-16 10:43:00.615 22020 INFO oslo.messaging.notification.aggregate.create.start [req-0b7f199d-8d06-4d2c-bcd6-fc5a8d03b623 ad400a8540f744b5b7041a356d51b6f3 a21d93baeb5e42c8a9d1da782279e309 - - -] {"event_type": "aggregate.create.start", "timestamp": "2017-03-16 09:43:00.615696", "payload": {"name": "gibi-aggregate"}, "priority": "INF09:44
gibialexey_weyl: but this is Mitaka version09:45
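For reference, gibi's "notification driver set to log" setup above corresponds roughly to this nova.conf fragment — a minimal sketch of the oslo.messaging options; the exact section and option names vary by release (older releases used `notification_driver` in `[DEFAULT]`):

```ini
[oslo_messaging_notifications]
driver = log
topics = notifications
```

With `driver = log`, every emitted notification is written to the service log instead of (or alongside) the message bus, which is what produces the `oslo.messaging.notification.aggregate.create.start` line pasted above.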
alexey_weylgibi: bauzas: The production system I work on is liberty. This might be the problem, right?09:45
gibialexey_weyl: the legacy aggregate.create works for me on Mitaka09:45
gibialexey_weyl: let me check Liberty...09:45
*** ociuhandu has quit IRC09:46
bauzasFWIW https://blueprints.launchpad.net/nova/+spec/az-block-name-update09:46
gibialexey_weyl: Liberty code also has the notification https://github.com/openstack/nova/blob/liberty-eol/nova/objects/aggregate.py#L12609:46
openstackgerritStephen Finucane proposed openstack/nova master: Add PCIWeigher  https://review.openstack.org/37952409:47
alexey_weylgibi: bauzas: ok, now it is really weird.09:47
openstackgerritStephen Finucane proposed openstack/nova master: Add PCIWeigher  https://review.openstack.org/37952409:48
openstackgerritStephen Finucane proposed openstack/nova master: Prefer non-PCI host nodes for non-PCI instances  https://review.openstack.org/37962509:48
gibialexey_weyl: is there a way for you to reproduce the problem closer to master ?09:48
alexey_weylgibi: bauzas: Yes, I am going to work on pike now, and i will check it there.09:49
alexey_weylgibi: bauzas: will update you later on.09:49
alexey_weylgibi: bauzas: anyway, thank you very much guys!!!! :)09:49
gibialexey_weyl: just ping me when you have some test results09:49
gibibauzas: your proposal makes sense to me. fewer things to change means less confusion. So changing the AZ name would be a lot harder after it.09:50
gibibauzas: deleting aggregate and creating a new one09:50
alexey_weylgibi: no problem. thanks :)09:51
bauzasgibi: in theory, once you accept an instance to be created on that AZ, then your contract becomes harder09:52
bauzasbecause some people trust you to care for their instances09:52
gibibauzas: true.09:52
gibibauzas: what if just the az name needs to be changed but not the meaning of it? I mean the renamed az still means the same failure domain.09:53
bauzasgibi: what I need to understand is why operators need to update their AZ names09:54
bauzasit's just a string09:54
gibibauzas: renaming the room in the building? :)09:54
bauzasso they would expose their topology semantically?09:54
jithHi all, I have configured openstack kilo setup in debian jessie with one controller node and two compute nodes. I have used glusterfs for shared storage. I have mounted the shared storage in /var/lib/nova/instances on both nodes.  I can do migration of vms between nodes.. But live-migration  throws following error.. Pls do guide me.. http://pastebin.com/DHPWEkaZ09:55
gibibauzas: you are right. My point is not really valid.09:55
gibibauzas: anyhow I'm just playing devils advocate here.09:55
bauzasjith: I'm sorry, this channel is focused on development questions09:55
bauzasgibi: nah it's fine, I could be wrong09:56
gibibauzas: az name should not reflect real topology. It should be failure domain A , B etc09:56
bauzasgibi: the only problem I see with preventing the AZ rename is that it's becoming hard to migrate instances09:56
gibibauzas: so your point seems OK to me09:56
bauzassay I made a mistake and I want to fix that09:57
gibibauzas: can an admin migrate an instance by ignoring AZ?09:57
bauzasmigi: he could force the migration09:57
bauzasoops09:57
bauzass/migi/gibi09:57
migi:)09:57
gibibauzas: force migration could work for me09:57
gibimigi, bauzas: :)09:58
openstackgerritStephen Finucane proposed openstack/nova master: tests: Validate huge pages  https://review.openstack.org/39965309:58
openstackgerritStephen Finucane proposed openstack/nova master: libvirt: create functional test base class  https://review.openstack.org/40705509:58
bauzasanyway, migrating from one AZ to another is a big deal09:58
sfinucanjaypipes: Fixed that hugepages functional test https://review.openstack.org/#/c/399653/09:59
bauzasgibi: since one host can't be in two AZs at the same time, I'd suggest to place the host on a specific aggregate that is not AZ-facing10:00
*** ltomasbo is now known as ltomasbo|away10:00
bauzasgibi: and then move the host to the new aggregate with the right AZ name10:00
*** trinaths has left #openstack-nova10:00
bauzasrather than issuing a live-migration10:00
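The aggregate shuffle bauzas suggests can be done with stock novaclient commands; the aggregate and host names below are made-up examples, and the metadata step assumes the new aggregate is the one meant to carry the AZ name:

```shell
# Example names only (old-agg/new-agg/compute-1 are made up).
nova aggregate-remove-host old-agg compute-1
nova aggregate-add-host new-agg compute-1
# new-agg exposes the desired AZ via its metadata:
nova aggregate-set-metadata new-agg availability_zone=az-new
```

Because a host cannot sit in two AZs at once, the intermediate aggregate must not set `availability_zone` itself.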
*** ralonsoh_ has joined #openstack-nova10:01
*** ociuhandu has joined #openstack-nova10:01
bauzasI should test that in devstack10:01
*** rfolco has joined #openstack-nova10:01
*** rfolco has quit IRC10:01
*** efoley__ has joined #openstack-nova10:02
*** iceyao has joined #openstack-nova10:03
mdboothlyarwood (or anybody else): Do you know what would consume volume.detach instance notifications?10:03
*** nkrinner is now known as nkrinner_afk10:03
*** ralonsoh has quit IRC10:04
gibibauzas: ohh, I see10:04
* gibi is running for lunch10:05
lyarwoodmdbooth: no sorry10:05
lyarwoodmdbooth: is this regarding the ordering change?10:06
mdboothlyarwood: Yeah10:06
mdboothAlso, the notification should probably live with the detach10:06
*** efoley_ has quit IRC10:06
*** wangqun has quit IRC10:07
*** phuongnh has quit IRC10:07
*** iceyao has quit IRC10:09
*** namnh has quit IRC10:09
*** sdague has joined #openstack-nova10:11
mdboothlyarwood: Yeah, reading the contract on those notifications, notifying after makes more sense to me.10:11
*** ltomasbo|away is now known as ltomasbo10:11
*** hshiina has quit IRC10:14
sfinucanjlvillal: https://review.openstack.org/#/c/445622/110:14
*** cdent has joined #openstack-nova10:15
openstackgerritMikhail Feoktistov proposed openstack/nova master: Add is_vz_container function  https://review.openstack.org/44594710:15
*** kevinz has quit IRC10:20
*** nicolasbock has joined #openstack-nova10:20
sfinucansean-k-mooney, jaypipes: Shouldn't this be in os-vif? https://review.openstack.org/#/c/441183/10:21
*** Jeffrey4l has quit IRC10:23
*** edmondsw has quit IRC10:25
mdboothWe've had references to Cinder attachment_id in Nova since Mitaka10:33
mdboothlyarwood: ^^^10:33
jith Hi all, I have configured openstack kilo setup in debian jessie with one controller node and two compute nodes. I have used glusterfs for shared storage. I have mounted the shared storage in /var/lib/nova/instances on both nodes.  I can do migration of vms between nodes.. But live-migration  throws following error.. Pls do guide me.. http://pastebin.com/DHPWEkaZ10:34
mdboothlyarwood: What am I missing there?10:34
lyarwoodmdbooth: yeah cinderv2 attachment_id's10:34
openstackgerritStephen Finucane proposed openstack/nova master: Replace obsolete vanity openstack.org URLs  https://review.openstack.org/44326610:34
mdboothHow are they different?10:34
lyarwoodmdbooth: I think they are only optional when detaching10:35
mdboothAre the underlying objects the same?10:35
lyarwoodmdbooth: on the Cinder side? I have no idea.10:35
lyarwoodmdbooth: but in the API yes10:35
lyarwoodmdbooth: that's all we care about right?10:35
mdboothSo a cinderv2 attachment_id is the same as a cinderv3 attachment_id10:35
mdboothi.e. the uuid values are the same10:35
lyarwoodmdbooth: that I don't know10:36
openstackgerritStephen Finucane proposed openstack/nova master: Replace obsolete vanity openstack.org URLs  https://review.openstack.org/44326610:36
gibimdbooth, lyarwood: I think if you want to change when the volume.detach is sent then it would make sense to send a detach.start and a detach.end10:36
mdboothgibi: Out of scope for this change, tbh.10:36
mdboothWaaaaaaaaay out of scope10:36
*** gongysh has quit IRC10:36
gibimdbooth: then let's not change when to send the volume.detach :)10:36
lyarwoodgibi: but I can follow up with that after10:36
gibilyarwood: I think it is not super important10:37
mdboothlyarwood gibi: I don't know enough about the practical uses of notifications to know how important that order change is, but if there's any doubt we should leave it alone10:38
mdboothIt's not at all relevant to lyarwood's change10:38
mdboothlyarwood: So that means moving the notification with the detach call10:39
lyarwoodmdbooth: sure, then the notification comes before bdm.destroy10:39
mdboothlyarwood: I don't see a problem with that.10:40
gibimdbooth, lyarwood: OK10:40
openstackgerritJohn Garbutt proposed openstack/nova-specs master: Add policy-remove-scope-checks spec  https://review.openstack.org/43303710:41
openstackgerritJohn Garbutt proposed openstack/nova-specs master: Add additional-default-policy-roles spec  https://review.openstack.org/42787210:41
mdboothlyarwood: As I see it, it's more about the behaviour when detach fails, tbh10:41
lyarwoodmdbooth: true10:41
openstackgerritSylvain Bauza proposed openstack/nova-specs master: Proposed block accepting AZ renames  https://review.openstack.org/44644610:42
lyarwoodmdbooth: I need to run for a dentist appointment, back in ~90mins or so.10:42
lyarwoodmdbooth: are you around for the cinder/nova meeting later today?10:42
mdboothlyarwood: kk10:42
mdboothlyarwood: Possibly. Talk later.10:42
lyarwoodmdbooth: ack, thanks10:42
*** Jeffrey4l has joined #openstack-nova10:46
*** alexpilotti has joined #openstack-nova10:47
*** jaosorior_lunch is now known as jaosorior10:51
*** carthaca_2 has quit IRC10:52
*** sapcc-bot4 has quit IRC10:52
*** databus23_2 has quit IRC10:52
*** david_1 has joined #openstack-nova10:52
*** sapcc-bot has joined #openstack-nova10:52
*** databus23_ has joined #openstack-nova10:52
*** carthaca_ has joined #openstack-nova10:52
*** tpatzig_ has joined #openstack-nova10:52
*** seife_ has joined #openstack-nova10:52
*** sapcc-bot has quit IRC10:54
*** databus23_ has quit IRC10:54
*** carthaca_ has quit IRC10:54
*** seife_ has quit IRC10:54
*** tpatzig_ has quit IRC10:54
*** david_1 has quit IRC10:54
*** sapcc-bot has joined #openstack-nova10:54
*** databus23_ has joined #openstack-nova10:54
*** carthaca_ has joined #openstack-nova10:54
-openstackstatus- NOTICE: paste.openstack.org is down, due to connectivity issues with backend database. support ticket has been created.10:59
*** ChanServ changes topic to "paste.openstack.org is down, due to connectivity issues with backend database. support ticket has been created."10:59
*** iceyao has joined #openstack-nova11:00
*** kaisers__ has joined #openstack-nova11:01
openstackgerritSean Dague proposed openstack/nova master: remove hacking rule that enforces log translation  https://review.openstack.org/44645211:03
*** kaisers_ has quit IRC11:04
*** iceyao has quit IRC11:04
*** jdurgin has quit IRC11:18
openstackgerritJianghua Wang proposed openstack/nova master: XenAPI: device tagging  https://review.openstack.org/33378111:22
openstackgerritBing Li proposed openstack/nova master: Add server-action-removefloatingip.json file and update servers-actions.inc  https://review.openstack.org/44647111:23
*** bvanhav_ has joined #openstack-nova11:24
*** jdurgin has joined #openstack-nova11:27
gibidansmith: Hi! I left some suggestion in https://review.openstack.org/#/c/44569711:30
*** nkrinner_afk is now known as nkrinner11:31
*** ociuhandu has quit IRC11:33
*** sree has quit IRC11:36
*** ralonsoh__ has joined #openstack-nova11:36
*** ralonsoh__ is now known as ralonsoh11:37
*** ralonsoh_ has quit IRC11:40
gibialexey_weyl: I checked the aggregate notification in a new devstack from master.11:43
gibialexey_weyl: It works for me. See the logs in http://paste.openstack.org/show/602943/11:44
*** ociuhandu has joined #openstack-nova11:44
*** ChanServ changes topic to "This channel is for Nova development. For support of Nova deployments, please use #openstack. Please see: https://wiki.openstack.org/wiki/Nova/Ocata_Release_Schedule"11:46
-openstackstatus- NOTICE: paste.openstack.org service is back up - turns out it was a networking issue, not a database issue. yay networks!11:46
*** lpetrut has quit IRC11:48
*** raj_singh has quit IRC11:49
*** vks1 has quit IRC11:49
*** dave-mccowan has joined #openstack-nova11:53
*** aysyd has joined #openstack-nova11:55
*** alexey_weyl has quit IRC11:56
*** efoley_ has joined #openstack-nova12:03
*** alexey_weyl has joined #openstack-nova12:03
alexey_weylgibi: Hi12:06
*** efoley__ has quit IRC12:06
alexey_weylgibi: I have checked it also, and I saw that notifications work for vitrage with master. but still notifications don't work on liberty (notifications about the aggregated hosts)12:07
*** namnh has joined #openstack-nova12:08
*** manasm has quit IRC12:09
gibialexey_weyl: I don't have access to a Liberty test node right now. Also Liberty has already reached end of support12:10
*** amoralej is now known as amoralej|lunch12:10
gibialexey_weyl: so even if it is buggy we cannot fix it on Liberty12:10
*** jpena is now known as jpena|lunch12:11
alexey_weylgibi: ok, i see that, and it is ok. but I have some other issue12:14
alexey_weylgibi: The thing is that for example in devstack when you create it, you don't have an "aggregated host" but we have AZs.12:15
*** namnh has quit IRC12:15
*** gouthamr has joined #openstack-nova12:16
alexey_weylgibi: After our talk before I thought that I could use only the Aggregated hosts to get the needed data, but it seems that I would need to get the AZs by calling the availability-zone list, and because it doesn't have an id we have a problem12:16
mdboothlyarwood: You back?12:17
*** lpetrut has joined #openstack-nova12:17
*** jianghuaw-m has joined #openstack-nova12:18
robcresswellQuick question; someone's added a patch to Horizon that hides the Soft Reboot button if the Instance state is anything other than Active; is this correct? Had a look at the API docs but they only seem to show how to form the request, not any of the conditions around its usage.12:19
mdboothrobcresswell: soft reboot doesn't make sense for an instance which isn't running12:19
*** mlakat has quit IRC12:20
robcresswellmdbooth: Yeah, I'm not clued in on every specific status nova supports12:20
*** jianghuaw-m has quit IRC12:20
* mdbooth tries to think of non-active states where the instance is still running12:20
mdbootherror, perhaps12:20
*** ratailor has quit IRC12:21
mdboothrobcresswell: I mean, regardless of what the api allows, from a UI pov that restriction makes sense to me12:21
robcresswellmdbooth: Cool. Good enough for me.12:22
*** rfolco has joined #openstack-nova12:22
robcresswellThanks12:22
* mdbooth hopes that when you mouse-over it, it says: 'Soft reboot is only available when instance is running'12:23
* mdbooth hates disabled options which don't explain why they're disabled12:23
alexey_weylgibi: The reason that I would need to call the availability-zone list as well is because sometimes we don't have the aggregated hosts but we have AZs12:24
gibialexey_weyl: if you have an AZ then you automatically have a host aggregate behind it12:27
gibialexey_weyl: the availability-zone list just iterates through the host aggregates to see if there is a metadata key on the aggregate with name availability_zone12:28
*** karimb has quit IRC12:28
*** edmondsw has joined #openstack-nova12:29
openstackgerritJohn Garbutt proposed openstack/nova-specs master: Add policy-remove-scope-checks spec  https://review.openstack.org/43303712:29
*** esberglu has joined #openstack-nova12:29
gibialexey_weyl: what would be different for you if there were a uuid besides the AZ name?12:31
*** liverpooler has joined #openstack-nova12:32
lyarwoodmdbooth: back now, going to attempt to eat lunch and then catch up on your review comments12:32
*** ayogi has quit IRC12:32
*** udesale has joined #openstack-nova12:37
openstackgerritBéla Vancsics proposed openstack/nova master: Reduce code complexity - libvirt/config.py  https://review.openstack.org/35987912:40
*** NikhilS has quit IRC12:40
alexey_weylgibi: on my devstack which is a devstack that works with master branch, I have AZs but don't have any aggregated hosts12:40
mdboothlyarwood: You might want to start with the detach refactor, btw, because I think that requires a rethink which may affect other patches.12:41
*** efried has joined #openstack-nova12:42
gibialexey_weyl: but you have an empty host aggregate I assume12:42
lyarwoodmdbooth: kk looking now12:44
gibialexey_weyl: actually you can have more than one empty host aggregate connected to the same AZ12:44
*** liverpooler has quit IRC12:44
*** liverpooler has joined #openstack-nova12:44
gibialexey_weyl: and those aggregates have uuid12:45
gibialexey_weyl: if you need something unique12:45
openstackgerritJohn Garbutt proposed openstack/nova-specs master: Add additional-default-policy-roles spec  https://review.openstack.org/42787212:47
*** voelzmo has quit IRC12:51
*** kevinz has joined #openstack-nova12:51
*** voelzmo has joined #openstack-nova12:51
alexey_weylgibi: I don't quite understand. I ran on my devstack the command "nova aggregate-list" and it came back empty, and then I ran "nova availability-zone-list" and I saw two AZs12:51
*** voelzmo has quit IRC12:52
alexey_weylgibi: what is this empty host aggregate?12:52
*** voelzmo has joined #openstack-nova12:52
*** voelzmo has quit IRC12:53
*** lucasagomes is now known as lucas-hungry12:54
*** liangy has joined #openstack-nova12:55
*** liangy has quit IRC12:56
*** kylek3h has joined #openstack-nova12:56
*** burt has joined #openstack-nova12:57
*** tbachman has joined #openstack-nova12:58
lyarwoodhmm does anyone have an idea how I can reference a line in the original file of a gerrit diff?12:59
*** manasm has joined #openstack-nova12:59
*** mriedem has joined #openstack-nova12:59
rfolcocdent, ping13:00
cdenthi rfolco13:00
*** catintheroof has joined #openstack-nova13:00
*** sree has joined #openstack-nova13:00
rfolcocdent, cannot find how to test concurrent update/delete. How to emulate that ?13:00
cdentrfolco: the idea you have before of trying to do a DELETE ../inventories with an incorrect resource_provider_generation should be good enough13:01
*** iceyao has joined #openstack-nova13:01
rfolcocdent, hmm I thought the generation would fall into a different code path, not that exception.13:02
cdentthe generation being wrong is the only thing that can cause ConcurrentUpdateDetected to raise13:02
jaypipessfinucan: well, there will need to be *some* os-vif complementary part to that, yes.13:02
jaypipesmacsz: btw, your comment on that "i find the lack of commit message disturbing." was most excellent ;)13:03
rfolcocdent, also, delete does not produce json, only post/put do. So not sure about the other suggestion on checking delete request/response with 40913:03
gibialexey_weyl: run nova aggregate-list13:03
*** jianghuaw__ has joined #openstack-nova13:03
cdentrfolco: the error's response body will be json if you send an accept header of application/json13:04
rfolcocdent, accept header means the decorator?13:04
*** jpena|lunch is now known as jpena13:04
cdentso even though a success response is empty an error response will not13:04
cdenti mean in the gabbi test: request_headers:\naccept: application/json\n13:04
openstackgerritBalazs Gibizer proposed openstack/nova master: use context mgr in instance.delete  https://review.openstack.org/44376413:05
rfolcocdent, hmmm, ok, thanks13:05
cdentrfolco: that "hmmm" makes it sound like you're not quite sure what I mean or ... ?13:06
*** hshiina has joined #openstack-nova13:06
*** dharinic_ has joined #openstack-nova13:06
rfolcocdent, haha hmm means, ok in theory I got it, lets see in practice.13:06
*** iceyao has quit IRC13:06
cdentah, okay, cool. let me know if you need more info as you go along13:06
rfolcocdent, I'll bother you again if I get stuck13:07
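cdent's suggestion above can be sketched as a gabbi test. The fixture variable, URL, and use of a stale generation to trigger ConcurrentUpdateDetected are assumptions about the placement test environment, not copied from the patch under review:

```yaml
# Hedged gabbi sketch: a DELETE that loses a concurrent-update race
# should return 409, and with an Accept header set the error body is
# JSON even though a successful DELETE has no body.
tests:
    - name: delete all inventories with stale generation
      DELETE: /resource_providers/$ENVIRON['RP_UUID']/inventories
      request_headers:
          accept: application/json
      status: 409
      response_headers:
          content-type: /application/json/
```

The `accept: application/json` line is exactly the `request_headers` entry cdent spells out above; without it, the error body's content type is left to the server's default.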
gibialexey_weyl: the AZ is not a real entity in nova. The host aggregate is the real one and if you have an aggregate with a special metadata 'availability_zone' then the hosts in that aggregate become part of an AZ13:08
alexey_weylgibi: I ran the nova aggregate-list and it returned empty13:08
alexey_weylgibi: although I have 2 AZs13:08
*** tbachman_ has joined #openstack-nova13:09
*** bvanhav__ has joined #openstack-nova13:10
*** liangy has joined #openstack-nova13:11
*** bvanhav_ has quit IRC13:11
*** tbachman has quit IRC13:12
*** tbachman_ is now known as tbachman13:12
*** kevinz has quit IRC13:14
*** kevinz has joined #openstack-nova13:14
*** udesale has quit IRC13:14
*** vladikr has joined #openstack-nova13:15
*** mdrabe has joined #openstack-nova13:16
*** mriedem has quit IRC13:17
*** mriedem has joined #openstack-nova13:20
gibialexey_weyl: nova and internal?13:24
*** crushil has quit IRC13:24
gibialexey_weyl: nova is the default AZ. hosts not part of any AZ will belong there13:25
gibialexey_weyl: you can ignore the internal, as far as I understand internal never contains compute hosts13:26
mriedemhas anyone seen this test fail before? http://logs.openstack.org/75/446175/1/gate/gate-tempest-dsvm-neutron-full-ubuntu-xenial/479c7bf/console.html#_2017-03-16_01_31_33_68856313:26
gibialexey_weyl: I hope somebody corrects me as I have reached the limit of my knowledge13:26
*** mlavalle has joined #openstack-nova13:27
*** ociuhandu has quit IRC13:27
*** ociuhandu has joined #openstack-nova13:28
mriedembauzas: can you take a look at the ocata regression fix here https://review.openstack.org/#/c/444106/ and the functional test below it? the test is the one i was talking about yesterday.13:29
mriedemwe need to get those in because they are hurting the ironic ci13:29
*** karimb has joined #openstack-nova13:30
*** tblakes has joined #openstack-nova13:31
*** NikhilS has joined #openstack-nova13:31
*** eharney has joined #openstack-nova13:33
*** Jeffrey4l has quit IRC13:34
*** tbachman has quit IRC13:36
*** tbachman has joined #openstack-nova13:36
*** jaosorior has quit IRC13:37
*** jaosorior has joined #openstack-nova13:37
*** jaosorior has quit IRC13:37
*** jaosorior has joined #openstack-nova13:38
*** raj_sing- has joined #openstack-nova13:40
*** eharney has quit IRC13:40
*** voelzmo has joined #openstack-nova13:41
openstackgerritRafael Folco proposed openstack/nova master: DELETE all inventory for a resource provider  https://review.openstack.org/41666913:45
*** Jeffrey4l has joined #openstack-nova13:47
*** iceyao has joined #openstack-nova13:47
*** crushil has joined #openstack-nova13:48
*** felipemonteiro_ has joined #openstack-nova13:50
*** alexpilotti has quit IRC13:50
openstackgerritBalazs Gibizer proposed openstack/nova master: doc: configurable versioned notifications topics  https://review.openstack.org/44652313:50
jlvillalsfinucan, Thanks for the heads up on the flake8 patch. I commented.13:50
*** voelzmo has quit IRC13:51
*** iceyao has quit IRC13:52
*** baoli has joined #openstack-nova13:52
*** links has quit IRC13:52
mriedemlyarwood: mdbooth: i'm debugging a failure in the gate http://logs.openstack.org/75/446175/1/gate/gate-tempest-dsvm-neutron-full-ubuntu-xenial/479c7bf/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-03-16_01_01_32_897 where detach of a volume to an unshelved server fails, but it actually looks like when we did the detach on the guest, we got an error from libvirt that we usually handle (device not found) and raise that back up to13:52
mriedemcompute manager to basically ignore saying it's not actually attached,13:52
mriedembut then it looks like we try to detach again, and get a different unhandled i/o error from libvirt13:53
*** rfolco_ has joined #openstack-nova13:53
lyarwoodlooking13:53
*** eharney has joined #openstack-nova13:53
mriedemthe test steps are: create a server and wait for it to be active, create a volume and wait for it to be available, shelve the server and wait for it to be offloaded, attach the volume, unshelve the server and wait for it to be active, then verify the volume is attached, and then start the teardown (detach volume and wait for it to be available, then delete the volume, then delete the server)13:54
*** rfolco has quit IRC13:54
lyarwoodand this is failing during the teardown right?13:55
openstackgerritGábor Antal proposed openstack/nova master: Transform instance.volume_attach notification  https://review.openstack.org/40199213:55
mriedemone thing i noticed is that the test specifies /dev/vdb when attaching the volume, and since it's the libvirt driver it ignores that and the volume is actually mounted at /dev/vdc13:55
mriedemlyarwood: yeah13:55
mriedemthe volume detach fails13:55
mriedemso tempest times out waiting for the volume to go from in-use to available13:55
mriedemwe get here https://github.com/openstack/nova/blob/master/nova/virt/libvirt/guest.py#L40813:56
openstackgerritSean Dague proposed openstack/nova master: remove hacking rule that enforces log translation  https://review.openstack.org/44645213:56
*** yingjun has joined #openstack-nova13:56
openstackgerritGábor Antal proposed openstack/nova master: Transform instance.volume_detach notification  https://review.openstack.org/40867613:57
*** amoralej|lunch is now known as amoralej13:57
mriedemwhat i don't get is it looks like we hit the "no target device" error from libvirt, which we handle and reraise as DeviceNotFound, but then it looks like we try to detach again and get "libvirtError: End of file while reading data: Input/output error" which we don't handle13:59
mriedemok so the device name is different also because when attaching a volume to a shelved offloaded instance, we don't set the device on the bdm https://github.com/openstack/nova/blob/master/nova/compute/api.py#L367814:00
mriedemthe tempest test should probably not even be specifying the device name14:01
lyarwoodyeah sorry I was just looking for the device xml during the detach to confirm what the target device was14:02
*** sree has quit IRC14:03
*** cfriesen has quit IRC14:07
*** yuntongjin has quit IRC14:07
*** cfriesen has joined #openstack-nova14:07
*** lucas-hungry is now known as lucasagomes14:07
*** zz_dimtruck is now known as dimtruck14:07
*** amotoki has quit IRC14:08
*** hshiina has quit IRC14:08
mriedemso in the attach volume to shelved offloaded server case, we actually call os-attach in the cinder api to mark the volume as in-use, since we can't do it from the compute (since the instance isn't on a host),14:12
openstackgerritGábor Antal proposed openstack/nova master: Transform instance.volume_detach notification  https://review.openstack.org/40867614:12
mriedemi'm wondering how that might mess up the new create attachment flows, i guess we can just update the volume attachment in cinder later, when we're unshelving on the host and have the connection info14:12
alexey_weylgibi: so the nova AZ has no host aggregate that it is part of.14:12
alexey_weylgibi: is it correct to devstack only? or also to openstack in production?14:13
lyarwoodmriedem: right, update with the connector and get the connection_info back during unshelve14:13
gibialexey_weyl: the nova AZ contains by default every host that does not belong to any other AZ.14:14
gibialexey_weyl: it is not devstack specific but the name of the default AZ is configurable in the nova.conf with default_availability_zone option14:15
lyarwoodmriedem: do we have a bug for the detach from unshelve issue btw? I don't think it's an issue with the target dev, we look this up correctly and call for the detach using the correct xml AFAICT14:16
* lyarwood attempts to reproduce locally14:16
mriedemlyarwood: we don't yet no14:16
lyarwoodmriedem: kk, I'll write one up if I can reproduce14:16
alexey_weylgibi: I see. Thanks a lot14:16
mriedemi think we've been seeing "libvirtError: End of file while reading data: Input/output error" in the logs so much when the libvirt connection temporarily drops, that we just haven't considered this a separate issue14:16
mriedembut it clearly is related to detaching the device14:17
mriedemif ret == -1: raise libvirtError ('virDomainDetachDeviceFlags() failed', dom=self)14:17
openstackgerritBalazs Gibizer proposed openstack/nova master: Transform instance.reboot notifications  https://review.openstack.org/38295914:19
sdaguesfinucan: fixed unit tests on - https://review.openstack.org/#/c/446452/14:19
mriedemlyarwood: i think i see an issue14:19
mriedemlyarwood: in the normal attach volume flow, we pass do_driver_attach=True here https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L479914:19
*** Yingxin has quit IRC14:19
mriedemwhich is used here https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L26514:20
mriedemto actually attach the block device to the guest14:20
gibisdague: Hi! I left a detailed use case description in https://review.openstack.org/#/c/440580/5/specs/pike/approved/scheduler-hints-in-server-details.rst14:20
dansmithgibi: replied on https://review.openstack.org/#/c/445697/214:20
gibidansmith: checking...14:20
mriedemlyarwood: in the case of unshelve, we don't pass that https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L444514:20
mriedemand it defaults to False14:20
*** iceyao has joined #openstack-nova14:20
mriedemso i don't think we ever actually attach the block device to the guest during unshelve14:20
lyarwoodmriedem: isn't part of the test to start the instance?14:21
*** yuntongjin has joined #openstack-nova14:21
mriedemlyarwood: the test doesn't ssh into the guest to see that the device is there14:21
mriedemthe test just hits the compute api asking if the volume is associated with the instance, which it is via the bdms table14:21
mriedemthat doesn't mean it's actually attached in the guest though :)14:21
*** satyar has joined #openstack-nova14:22
mriedemi'm wondering if it's always been this way and the test is new, or if this was regressed when we removed check_attach internally,14:22
lyarwoodmriedem: right, I might be mixing up the ordering here, there's debug XML listing the device when we bring the instance up14:22
*** voelzmo has joined #openstack-nova14:22
mriedemhttps://github.com/openstack/nova/commit/63805735c25a54ad1b9b97e05080c1a6153d8e2214:22
*** lpetrut has quit IRC14:22
*** lucasxu has joined #openstack-nova14:23
*** Yingxin has joined #openstack-nova14:23
mriedemnvm, i guess we just never pass do_driver_attach for unshelve14:23
*** ekuris has quit IRC14:23
*** yuntongjin has quit IRC14:23
sfinucansdague: Cool. Done14:23
mriedemoh, probably because on unshelve we are spawning the instance14:23
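The do_driver_attach distinction mriedem traced above can be sketched roughly as follows. This is an illustrative toy, not nova's actual code: the class and method names (DriverVolumeBlockDevice, FakeVirtDriver, attach_volume) only mirror nova's; the point is that the driver BDM's attach() touches the hypervisor only when do_driver_attach=True, while the unshelve path leaves it False because driver.spawn() builds the guest with every BDM already in its config.

```python
# Toy sketch (assumed names, not nova's real implementation) of the
# do_driver_attach flag discussed above.

class DriverVolumeBlockDevice:
    def __init__(self, volume_id):
        self.volume_id = volume_id

    def attach(self, virt_driver, do_driver_attach=False):
        # The connection info is always negotiated with the volume service...
        connection_info = {'volume_id': self.volume_id}
        if do_driver_attach:
            # ...but the device is only plugged into the running guest
            # when explicitly requested (the normal attach-volume flow).
            virt_driver.attach_volume(connection_info)
        return connection_info


class FakeVirtDriver:
    def __init__(self):
        self.attached = []

    def attach_volume(self, connection_info):
        self.attached.append(connection_info['volume_id'])

    def spawn(self, bdms):
        # During unshelve, spawn() builds the guest with every BDM in its
        # config, so no separate driver attach is needed.
        for bdm in bdms:
            self.attached.append(bdm.volume_id)


driver = FakeVirtDriver()
bdm = DriverVolumeBlockDevice('vol-1')
bdm.attach(driver, do_driver_attach=False)  # unshelve path: no guest change
driver.spawn([bdm])                          # guest gets the device at boot
print(driver.attached)                       # ['vol-1']
```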
*** weshay is now known as weshay_pto14:24
*** pcaruana has quit IRC14:24
sfinucanjaypipes: I'd figured os-vif provided the modelling while nova would just do the wiring up, e.g. altering various aspects of the libvirt XML14:25
sfinucanthat change seemed to be more of the former, so that line is blurry :)14:25
openstackgerritGábor Antal proposed openstack/nova master: Transform instance.reboot.error notification  https://review.openstack.org/41179114:26
jaypipessfinucan: ya, I added a review comment on it.14:26
sfinucanjaypipes: Gotcha. I'll check that out now14:26
*** alexey_weyl has quit IRC14:27
gibidansmith: your proposal is OK for me, thanks for looking into this problem14:27
dansmithgibi: okay cool, I'll push it up.. thanks14:27
*** efoley_ has quit IRC14:28
sdaguegibi: ok, further response here, I think we need to at least narrow this14:28
*** udesale has joined #openstack-nova14:29
gibisdague: thanks, looking...14:29
openstackgerritEd Leafe proposed openstack/nova master: WIP - add some functional tests for placement  https://review.openstack.org/44612314:30
openstackgerritEd Leafe proposed openstack/nova master: Refactor placement fixtures  https://review.openstack.org/44612214:30
mriedemlyarwood: ooo i think i've got it14:31
*** udesale has quit IRC14:31
mriedemwe're using save_and_reraise_exception incorrectly14:31
mriedemin detach_device_with_retry14:31
lbragstadaunnam did you have a patch that commented out the policy during sample generation?14:31
*** udesale has joined #openstack-nova14:31
*** erlon has joined #openstack-nova14:31
mriedemlyarwood: looking at the logs, we get here https://github.com/openstack/nova/blob/master/nova/virt/libvirt/guest.py#L40814:31
mriedemFile "/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 408, in _try_detach_device14:32
bauzasmriedem: dansmith: so I looked this CET morning around the archive_deleted_rows method14:32
aunnamlbragstad, i am still fixing the tests that are failing14:32
lbragstadaunnam i can take a stab at porting that to oslo.policy if you want14:32
mriedemlyarwood: but because we aren't telling " with excutils.save_and_reraise_exception():" that we want to raise something new, it re-raises the original libvirtError14:32
bauzasmriedem: dansmith: it seems to me that it could be a bit difficult to use it directly14:32
lbragstadaunnam s/if you want/if you want to keep focusing on the nova tests/14:32
mriedemlyarwood: which i think then gets us into the retry loop https://github.com/openstack/nova/blob/master/nova/virt/libvirt/guest.py#L42014:32
mriedemand at that point we hit the i/o error14:33
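The save_and_reraise_exception semantics being debated here can be demonstrated with a minimal stand-in. The real helper lives in oslo_utils.excutils; this simplified reimplementation (the LibvirtError/DeviceNotFound classes and the detach() wrapper are illustrative, not nova's code) just shows the two behaviours: raising a new exception inside the block replaces the saved one, while falling through re-raises the original.

```python
# Minimal stand-in for oslo_utils.excutils.save_and_reraise_exception,
# to illustrate the semantics discussed above; not oslo's actual code.
import contextlib
import sys


class DeviceNotFound(Exception):
    pass


class LibvirtError(Exception):
    pass


@contextlib.contextmanager
def save_and_reraise_exception():
    # Capture the exception active when the context is entered (the real
    # helper is entered from inside an except block).
    _exc_type, exc_value, _tb = sys.exc_info()
    try:
        yield
    except Exception:
        # A new exception was raised in the block: the saved one is
        # (normally) logged and dropped, and the new one propagates.
        raise
    # Nothing raised in the block: re-raise the saved exception.
    raise exc_value


def detach(error_code):
    try:
        raise LibvirtError('virDomainDetachDeviceFlags() failed')
    except LibvirtError:
        with save_and_reraise_exception():
            if error_code == 'VIR_ERR_OPERATION_FAILED':
                # Translate to something the compute manager handles.
                raise DeviceNotFound('device not found in guest')
            # Any other libvirt error falls through and the original
            # LibvirtError is re-raised unchanged.


try:
    detach('VIR_ERR_OPERATION_FAILED')
except DeviceNotFound as e:
    print('translated:', e)

try:
    detach('VIR_ERR_INTERNAL_ERROR')
except LibvirtError as e:
    print('reraised:', e)
```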
aunnamlbragstad, its no the nova tests that are failing, its the tests from oslo.policy14:33
openstackgerritJim Rollenhagen proposed openstack/nova-specs master: Add spec for custom resource classes in flavors  https://review.openstack.org/44657014:33
lbragstadaunnam oh - i misunderstood that then14:33
jrolledleafe: jaypipes: cdent: ^ tell me where I'm wrong :)14:33
aunnamlbragstad, s/no/not14:33
mriedemhmm, "If another exception occurs, the saved exception is logged and the new exception is re-raised."14:33
mriedemi wonder if something regressed in oslo.utils14:34
cdentjroll: will you accept "everywhere" as an answer?14:34
* cdent puts it in the queue14:34
lbragstadaunnam for some reason i thought you were working on nova specific tests, i'll give the patch you have a look14:34
jrollcdent: yes, then I'll give up :)14:34
*** vks1 has joined #openstack-nova14:34
lbragstadaunnam is it up for review?14:35
* cdent mumbles something about never surrender, on on, rub some dirt on it, etc14:35
aunnamlbragstad, i'll post it now14:35
lyarwoodmriedem: right that's weird, we shouldn't be hitting the no target device error in the first place14:35
mriedemlyarwood: nvm that can't be right, test_detach_device_with_retry_invalid_argument tests for that case14:35
aunnamlbragstad, ya, I can discuss the problem i am facing with you14:35
lbragstadaunnam cool14:36
lbragstadaunnam fwiw - https://github.com/openstack/oslo.policy/commit/a95606c1dfd7368a247c79d5f65a54c629ce29b2 landed yesterday14:36
mriedemlyarwood: could this be related to the patch you have about how we always pass persistent=True to detach_device_with_retry ?14:36
mriedemhttps://review.openstack.org/#/c/441204/14:36
aunnamlbragstad, cool14:37
lyarwoodmriedem: hmm yeah if the domain from an unshelve isn't persistent14:37
*** VAhl has joined #openstack-nova14:37
mriedemlyarwood: can you explain what a persistent domain even is?14:37
mriedemi've never actually known14:37
*** nic has joined #openstack-nova14:38
lyarwoodmriedem: https://wiki.libvirt.org/page/VM_lifecycle#Transient_guest_domains_vs_Persistent_guest_domains14:38
openstackgerritGábor Antal proposed openstack/nova master: Transform instance.reboot notifications  https://review.openstack.org/38295914:38
aunnamlbragstad, https://review.openstack.org/#/c/443332/1/oslo_policy/generator.py I commented out the rule at line 100 in this patch14:38
aunnamlbragstad, https://github.com/openstack/oslo.policy/blob/master/oslo_policy/tests/test_generator.py#L278 this test is failing14:39
openstackgerritGábor Antal proposed openstack/nova master: Transform instance.reboot.error notification  https://review.openstack.org/41179114:39
lyarwoodmriedem: but it's just a normal driver.spawn() in the compute layer so I can't see how that could happen tbh14:39
*** hongbin has joined #openstack-nova14:40
aunnamlbragstad, because in this test it is getting rules from sample policy file and merging those with the modified rules14:40
mriedemlyarwood: yeah that page doesn't help me out here,14:40
mriedembecause that implies nova is making a decision when creating the domain as to whether or not it's persistent, right?14:40
*** pumarani_ has joined #openstack-nova14:41
mriedemmaybe we are, but can you point out where we decide that?14:41
lyarwoodmriedem: just checking but for spawn() I think they are always persistent, live migration is one of the few cases I know where we start a transient domain on the destination14:41
lbragstadaunnam so what do you have locally?14:41
aunnamlbragstad, so since I commented out the rule the parser is not getting the policy rules and its failing14:42
*** carthaca_ has quit IRC14:42
*** sapcc-bot has quit IRC14:42
*** databus23_ has quit IRC14:42
*** nic has quit IRC14:42
lbragstadaunnam can you paste a `git diff`?14:42
*** nic has joined #openstack-nova14:42
openstackgerritSujitha proposed openstack/nova master: Adding tags field to InstancePayload  https://review.openstack.org/40722814:43
aunnamlbragstad, so was thinking to write to policy-sample.yaml file without calling _generate_sample https://github.com/openstack/oslo.policy/blob/master/oslo_policy/tests/test_generator.py#L25214:44
*** mhenkel has joined #openstack-nova14:44
openstackgerritGábor Antal proposed openstack/nova master: Transform instance.reboot.error notification  https://review.openstack.org/41179114:44
aunnamlbragstad, i am not sure if it is the right way so waiting on that14:44
mhenkelhello jaypipes14:44
lyarwoodmriedem: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/host.py#L833 - defineXML == creates a persistent domain14:44
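The persistent/transient distinction lyarwood is pointing at can be illustrated with a toy in-memory "hypervisor". With real libvirt-python the calls are conn.defineXML(xml) (persistent definition, surviving shutdown) versus conn.createXML(xml, 0) (transient, existing only while running); everything below (ToyHypervisor and its methods) is an assumed stand-in, not libvirt itself.

```python
# Toy model of libvirt's persistent vs transient domains; the class and
# its storage are illustrative assumptions, only the defineXML/createXML
# method names mirror libvirt-python.

class ToyHypervisor:
    def __init__(self):
        self.defined = set()   # persistent configs (survive shutdown)
        self.running = set()

    def defineXML(self, name):
        # Persistent: the definition exists independent of run state.
        self.defined.add(name)
        return name

    def createXML(self, name):
        # Transient: the domain exists only while it is running.
        self.running.add(name)
        return name

    def start(self, name):
        self.running.add(name)

    def destroy(self, name):
        # Stopping a transient domain removes it entirely; a persistent
        # one keeps its definition and can be started again.
        self.running.discard(name)

    def exists(self, name):
        return name in self.defined or name in self.running


hv = ToyHypervisor()
hv.start(hv.defineXML('persistent-vm'))
hv.createXML('transient-vm')
hv.destroy('persistent-vm')
hv.destroy('transient-vm')
print(hv.exists('persistent-vm'), hv.exists('transient-vm'))  # True False
```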
mriedemlyarwood: ok. another odd thing is in the stacktrace and error, if we were re-raising DeviceNotFound from _try_detach_device, we should see oslo.utils log the original libvirtError before reraising the new exception http://git.openstack.org/cgit/openstack/oslo.utils/tree/oslo_utils/excutils.py#n21214:45
aunnamlbragstad, that's what I found when I looked into the code14:45
mriedemlyarwood: but i don't see that happen14:45
*** yingjun has quit IRC14:45
mriedeminstead we get here http://git.openstack.org/cgit/openstack/oslo.utils/tree/oslo_utils/excutils.py#n22014:45
lbragstadaunnam i might have missed it, but what changes have you made locally?14:45
jaypipesmhenkel: hi! currently on calls for another hour or so... gimme a bit? :)14:45
mriedemi wonder if we're using a newer eventlet that's causing some context switching bugs14:45
*** marst has quit IRC14:45
aunnamlbragstad, just commented out the rule, that's all; haven't changed the tests yet14:45
mhenkeljaypipes: sure thing, ping me when you have a minute or two14:46
openstackgerritSujitha proposed openstack/nova master: Change tags to default field in Instance object.  https://review.openstack.org/41529814:46
*** logan- has quit IRC14:46
aunnamlbragstad, commented the rule in here https://github.com/openstack/oslo.policy/blob/master/oslo_policy/generator.py#L10914:46
openstackgerritSujitha proposed openstack/nova master: Reduce calls to load_tags() to 0  https://review.openstack.org/43514614:46
*** logan- has joined #openstack-nova14:46
*** dikonoor has quit IRC14:47
mriedemno that's not it, last time we bumped eventlet up was last may14:47
*** sneti_ has joined #openstack-nova14:47
lbragstadaunnam alright - this is what i have locally http://cdn.pasteraw.com/3u2pj6gigswl55mkeqdbojeifp6gzf414:48
lbragstadwhich is showing me 6 failures14:48
lyarwoodmriedem: do you have a logstash query for this already btw?14:48
mriedemlyarwood: just this: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22AttachVolumeShelveTestJSON%5C%22%20AND%20message%3A%5C%22Failed%20to%20detach%20volume%5C%22%20AND%20message%3A%5C%22libvirtError%3A%20End%20of%20file%20while%20reading%20data%3A%20Input%2Foutput%20error%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22&from=7d14:49
aunnamlbragstad, ya that's the same thing that i have14:49
mriedemlyarwood: did you create a bug? i think we need some more logging in detach_device_with_retry because i can't make out the stacktrace from that method given all of the context managers14:49
lbragstadaunnam cool14:50
lyarwoodmriedem: thanks, not yet just waiting on devstack finishing locally before I try to reproduce14:50
*** cdent has quit IRC14:51
gibisdague: responded on https://review.openstack.org/#/c/440580/5/specs/pike/approved/scheduler-hints-in-server-details.rst I can bargain on what hints the API will return14:51
openstackgerritSujitha proposed openstack/nova master: Adding auto_disk_config field to InstancePayload  https://review.openstack.org/41918514:52
openstackgerritBéla Vancsics proposed openstack/nova master: Reduce code complexity - libvirt/config.py  https://review.openstack.org/35987914:53
*** lpetrut has joined #openstack-nova14:53
*** marst has joined #openstack-nova14:53
mriedemlyarwood: since we don't see "Original exception being dropped" i'm wondering if we're hitting _try_detach_device the first time, it's ok, and then when we try the 2nd time, it fails with that i/o error14:53
mriedemthat's why i want more debug logging in there to know if we're calling _try_detach_device before we get into the retry loop14:54
mriedembecause i'm guessing that the i/o error thing on an already detached device is newish in libvirt 1.3.114:54
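The control flow being debugged has roughly this shape: an initial detach attempt, then periodic retries until the device disappears, with DeviceNotFound signalling completion. The sketch below is an assumption-laden toy (the function and helper names are illustrative, not nova's guest.py, which uses a looping call), but it shows why per-attempt logging would reveal whether the first attempt succeeded before a later retry hit the I/O error.

```python
# Hedged sketch of a detach-with-retry loop like the one discussed above;
# names and structure are illustrative, not nova's actual implementation.

class DeviceNotFound(Exception):
    pass


def detach_device_with_retry(try_detach, max_attempts=3):
    """Call try_detach until the device is gone, recording each attempt."""
    attempts = []
    for attempt in range(1, max_attempts + 1):
        try:
            try_detach()
            # Detach was requested but the device may still be present.
            attempts.append((attempt, 'requested'))
        except DeviceNotFound:
            # Device already gone: detach is complete.
            attempts.append((attempt, 'gone'))
            return attempts
    raise RuntimeError('device still attached after retries')


# Simulate: first attempt succeeds, second finds the device already gone.
calls = iter(['ok', 'gone'])


def fake_try_detach():
    if next(calls) == 'gone':
        raise DeviceNotFound()


print(detach_device_with_retry(fake_try_detach))
# [(1, 'requested'), (2, 'gone')]
```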
mdboothlyarwood mriedem: Just finished reading scrollback. Sounds fun.14:54
*** tbachman has quit IRC14:54
*** armax has joined #openstack-nova14:54
mriedemmdbooth: definitely not what i planned on doing for my first 2 hours this morning :)14:55
mriedemgate spelunking14:55
*** READ10 has joined #openstack-nova14:56
*** Jack_Iv has joined #openstack-nova14:57
dansmithgibi: if you're still around, it'd be good to get your ack on this too: https://review.openstack.org/#/c/446053/414:57
Jack_IvHey Folks! Since Nova 13 the hooks are DEPRECATED, but what can I use instead?14:58
*** mlavalle has quit IRC14:58
*** mlavalle has joined #openstack-nova14:58
Jack_IvI need to run some code, pre- and post- instance build14:58
mriedemJack_Iv: there is the dynamic vendordata metadata api and notifications14:59
mriedemor upstream whatever your use case is14:59
mriedemhttps://docs.openstack.org/developer/nova/vendordata.html14:59
mriedemhttps://docs.openstack.org/developer/nova/notifications.html15:00
*** bvanhav__ is now known as bvanhav15:00
mdboothJack_Iv: What is your use case, btw?15:00
*** moshele has quit IRC15:00
gibidansmith: looking...15:00
Jack_IvI need to execute some code on compute nodes, after vm is UP and after termination15:01
Jack_Ivmdbooth: ^15:01
mdboothJack_Iv: Right, but what for?15:01
Jack_Ivlet's say, update some iptables rules15:01
mdboothWhat does the code do?15:01
mdboothOn the host?15:01
Jack_Ivright15:01
kashyapYeah, I'm curious, too.  About the use case15:02
*** karimb has quit IRC15:02
mriedemlyarwood: https://bugs.launchpad.net/nova/+bug/167348315:02
openstackLaunchpad bug 1673483 in OpenStack Compute (nova) "libvirt: test_attach_volume_shelved_or_offload_server times out waiting for device detach (which fails)" [Undecided,New]15:02
openstackgerritMoshe Levi proposed openstack/nova master: HW offload support for openvswitch  https://review.openstack.org/39826515:02
mdboothJack_Iv: Is this secret squirrel, or can you share the whole use case? I'm really curious as to why.15:02
kashyapFrom the an old mailing list discussion on hooks, I see:15:03
Jack_IvI want to update some iptables rules after VM is up and delete those rules after termination15:03
kashyapThere's three core scenarios for hooks15:03
kashyap 1. Modifying some aspect of the Nova operation15:03
kashyap 2. Triggering an external action synchronously to some Nova operation15:03
kashyap 3. Triggering an external action asynchronously to some Nova operation15:03
kashyap[From danpb]15:03
*** yingjun has joined #openstack-nova15:03
mriedemJack_Iv: you can't do that with security group rules?15:03
mdboothJack_Iv: Right. Just trying to understand why, and if there's some other way to achieve what you want.15:03
mdboothe.g. what mriedem said15:03
mriedemJack_Iv: i.e. listen for the notifications from nova, instance.create.start and instance.create.end, and when you get those notifications, adjust the secgroup rules in neutron15:04
Jack_IvNo, because clients can just edit security group rules15:04
mriedemJack_Iv: so change the neutron api to make admin-only rules?15:04
*** tblakes has quit IRC15:04
*** karimb has joined #openstack-nova15:04
mriedemJack_Iv: at the PTG we talked about a concept of service user locks on instances,15:04
mriedemyou could make a similar case for service/admin level "locks" on security group rules15:05
*** whenry has quit IRC15:05
*** dharinic_ has quit IRC15:05
dansmithI thought neutron had some base "provider rules" concept for this15:05
mriedemdansmith: maybe they do15:05
mriedemJack_Iv: have you talked to the neutron people?15:05
Jack_Ivnot yet15:05
mriedemJack_Iv: which version of openstack are you on?15:05
Jack_Ivnewton15:05
mriedemkevinbenton: does neutron have a concept of provider rules for security groups that the tenant user can't change?15:06
*** awaugama has joined #openstack-nova15:07
*** mvk has quit IRC15:07
mriedemJack_Iv: i think that's probably the thread you need to pull on though15:07
*** manasm has quit IRC15:08
mriedemlisten for notifications from nova for the instance build lifecycle, and then adjust rules via the networking API as needed15:08
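The approach mriedem suggests, consuming nova's instance lifecycle notifications and reacting via the networking API, can be sketched as an endpoint class. With oslo.messaging this endpoint would be registered on a notification listener and its info() method invoked per notification; here the neutron client is a fake and the rule-management methods are assumptions, so only the event_type names and the info() signature mirror the real thing.

```python
# Hedged sketch: a notification endpoint reacting to nova's
# instance.create.end / instance.delete.end events. FakeNeutronClient and
# its add_rule/remove_rule methods are illustrative stand-ins.

class FakeNeutronClient:
    def __init__(self):
        self.rules = []

    def add_rule(self, instance_uuid):
        self.rules.append(instance_uuid)

    def remove_rule(self, instance_uuid):
        self.rules.remove(instance_uuid)


class InstanceLifecycleEndpoint:
    """Notification endpoint; the method name matches the priority."""

    def __init__(self, neutron):
        self.neutron = neutron

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        uuid = payload.get('instance_id')
        if event_type == 'instance.create.end':
            self.neutron.add_rule(uuid)       # VM is up: add the rules
        elif event_type == 'instance.delete.end':
            self.neutron.remove_rule(uuid)    # VM gone: clean them up


neutron = FakeNeutronClient()
endpoint = InstanceLifecycleEndpoint(neutron)
endpoint.info({}, 'compute.host1', 'instance.create.end',
              {'instance_id': 'abc'}, {})
print(neutron.rules)   # ['abc']
endpoint.info({}, 'compute.host1', 'instance.delete.end',
              {'instance_id': 'abc'}, {})
print(neutron.rules)   # []
```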
*** manasm has joined #openstack-nova15:09
mriedemno vmware people around huh15:09
mriedemtheir CI results come in, if at all, about a week late15:09
*** udesale has quit IRC15:10
*** scottda has joined #openstack-nova15:11
*** sridharg has quit IRC15:11
*** manjeets has joined #openstack-nova15:13
openstackgerritLee Yarwood proposed openstack/nova master: compute: Move detach logic from manager into driver BDM  https://review.openstack.org/43952015:15
openstackgerritLee Yarwood proposed openstack/nova master: compute: Only destroy BDMs after successful detach call  https://review.openstack.org/44069315:15
gibidansmith: I'm +1 on https://review.openstack.org/#/c/44605315:16
dansmithgibi: awesome, thanks for looking at that15:16
dansmithgibi: on the other patch,15:16
lyarwoodmdbooth: ^ updated removing some of the nits, just to confirm the LM rollback issue isn't new right?15:16
mdboothlyarwood: No, the LM rollback issue looks new15:17
dansmithgibi: the change I made prevents projects from showing up on actual flavor create/delete, so I'm fixing that by just forcing projects to be loaded in the api before doing the notification15:17
mdboothThe problem is you've combined 2 functions in your move that were previously separate15:17
dansmithgibi: just fyi.. i'll push that up a bit later, it's not a huge rush15:17
mdboothAnd remove_volume_connection only called one of them15:17
mdboothThe problem is the interaction of remove_volume_connection and the code from _detach_volume, which we previously didn't call15:18
*** Jack_Iv has quit IRC15:18
gibidansmith: that sounds OK to me. I  will keep an eye on that review15:18
*** armax has quit IRC15:18
dansmithgibi: cool, thanks a bunch :)15:19
*** Jack_Iv has joined #openstack-nova15:19
mdboothIncidentally, I really don't like the proliferation of do_all_the_things(but_not_this_thing=True, dont_not_do_that_thing=Maybe, do_this_extra_thing='True')15:19
*** lucasxu has quit IRC15:22
*** kevinz has quit IRC15:22
*** mvk has joined #openstack-nova15:22
*** eharney has quit IRC15:22
*** Jack_Iv has quit IRC15:23
mdboothlyarwood: So, I feel like the old _driver_detach_volume is what needs to move, with perhaps a couple of volume_api calls thrown in. Ideally, though, the code in block_device won't concern itself with CONF.host. It'll be called, or it won't be called.15:23
*** dharinic_ has joined #openstack-nova15:23
*** armax has joined #openstack-nova15:23
*** armax has quit IRC15:24
*** Jack_Iv has joined #openstack-nova15:24
mdboothlyarwood: That's not 100% thought through, though. There may be practical reasons for dont_do_these_things=['foo', 'bar', 7, None, {}]15:25
*** eharney has joined #openstack-nova15:26
*** dharinic_ has quit IRC15:26
* mdbooth just noticed 'thought through, though' in previous sentence, and marvels at his native tongue.15:26
lyarwoodmdbooth: sorry just went over the diff again, I see your point but just to confirm, instance.host is going to point to the source during a rollback right?15:28
mriedemthere doesn't seem to be much they're not willing to do in their company15:28
mdboothNot during rollback_at_destination15:28
mriedemmdbooth: ^15:28
mdboothmriedem: Hehe.15:28
*** efried has quit IRC15:28
*** tbachman has joined #openstack-nova15:29
*** tblakes has joined #openstack-nova15:29
*** jaosorior has quit IRC15:29
*** arne_r has quit IRC15:30
mdboothlyarwood: Sorry, yes. instance.host *will* be the source. I didn't read you right there.15:30
mdboothHowever, it's executing on the destination, which is the problem.15:30
mdbooth                self.compute_rpcapi.remove_volume_connection(15:31
mdbooth                        context, instance, bdm.volume_id, dest)15:31
mdboothIn _rollback_live_migration15:31
lyarwoodmdbooth: haha right, I just couldn't find the instance.host update15:31
mdboothSo CONF.host != instance.host15:31
mdboothBUT, we still need to tear it down15:31
mdboothSo I feel like that's the sort of logic that lives in ComputeManager15:32
lyarwoodwell, we are going to call destroy that's going to do that anyway15:32
lyarwoodbut sure, I get your point15:32
lyarwoodmriedem / johnthetubaguy ; if you have any time before the cinder meeting the uuid and attachment_id change could really use core review - https://review.openstack.org/#/q/topic:bp/cinder-new-attach-apis & https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/148958115:33
mriedemdansmith: sdague: melwitt: can we get https://review.openstack.org/#/c/444106/ and the test patch below it in so we can move forward with the backport? it would also help ironic ci that is failing on this.15:34
bauzasdansmith: have you seen my comment on archive_deleted_rows ?15:34
mriedemlyarwood: ok, i meant to go through those again today15:34
dansmithbauzas: you said you didn't think you could use it, but didn't say why (or that I saw)15:34
bauzasdansmith: because the logic in there is looping over the cell DB tables15:34
dansmithbauzas: obviously some refactoring will be needed, but I just would like to avoid having to add yet another command to run cleanup15:34
dansmithmriedem: yeah15:34
bauzasto push them in their shadow one15:34
*** hshiina has joined #openstack-nova15:35
mdboothlyarwood: In case you didn't notice, I thought the problem in _from_db_object() here was pretty severe: https://review.openstack.org/#/c/242603/23/nova/objects/block_device.py15:35
bauzasdansmith: the main point is that we provide the number of rows per table archived15:35
dansmithbauzas: not per table, but that applies to the thing we need to do in reqspec too right?15:36
*** awaugama has quit IRC15:36
bauzasdansmith: so I should say how many rows for the request_spec table are 'archived', ie. killed?15:36
bauzasdansmith: nope, this is per table AFAICS15:36
bauzasyou return either 0 or 115:36
openstackgerritMoshe Levi proposed openstack/os-vif master: HW offload support for openvswitch  https://review.openstack.org/39827715:36
dansmithbauzas: in fact, you have to look at the cell dbs to know what you can remove from reqspec, right?15:36
sdaguemriedem: ... once I finish updating the limits spec15:36
openstackgerritMatt Riedemann proposed openstack/nova master: libvirt: add debug logging in detach_device_with_retry  https://review.openstack.org/44660115:36
mriedemlyarwood: ^ is the debug patch for detach15:36
bauzasbut the verbose prints that15:37
dansmithbauzas: are you talking about the parameter or what gets printed?15:37
bauzasdansmith: what's printed15:37
*** awaugama has joined #openstack-nova15:37
lyarwoodmdbooth: yeah sorry I did miss that, are you sure that doesn't result in a save straight away?15:37
bauzasdansmith: for looking at which record to remove, I was rather thinking on looking at InstanceMapping15:37
mdboothlyarwood: If it does, I don't know how15:37
bauzasoh wait15:37
dansmithbauzas: you need to purge those too15:37
bauzasdansmith: we don't remove yet also InstanceMapping records?15:37
bauzasa-ha, yeah15:38
dansmithwe don't in the case of soft delete, but otherwise we do15:38
bauzaserf15:38
bauzasokay, then I need to look at all the cells, yeah15:38
bauzasI should also purge the instance_mappings too then15:38
lyarwoodmdbooth: ah I might be mixing this up with the driver_bdm dict updates that result in a save15:38
bauzaswell, okay, I'll see what I can do to use the same command15:39
*** efried has joined #openstack-nova15:39
mdboothlyarwood: Also, in the previous patch I didn't like the unnecessary argument type ambiguity. But I think I was ok with everything else.15:40
*** hferenc has joined #openstack-nova15:43
lyarwoodmdbooth: the uuid vs bdm_uuid issue?15:44
mdboothlyarwood: No, just a sec15:44
lyarwoodmdbooth: ah the cellv1 values thing15:44
mdboothhttps://review.openstack.org/#/c/242602/22/nova/db/sqlalchemy/api.py15:44
mdboothblock_device_mapping_update15:45
* mdbooth doesn't like trying to guess the semantics of an argument by running a regex on it15:45
mdboothThe caller has full context, and there are multiple ways to pass it in unambiguously15:45
*** tbachman has quit IRC15:46
*** awaugama has quit IRC15:46
lyarwoodmdbooth: right and at the moment that's always the id so we can just drop this for now15:46
mdboothWell then you'd have to drop the change in update_or_create15:47
*** psachin has quit IRC15:47
mdboothIt shouldn't be a complex fix.15:47
lyarwoodah yeah got it15:48
dansmithbauzas: remind me, we don't delete reqspec at all right now, but will in the case of non-soft delete soon right?15:48
dansmithbauzas: so this command will really only be needed if operators have soft delete enabled, right?15:48
*** cdent has joined #openstack-nova15:48
bauzasdansmith: yup, we don't delete the spec yet15:49
bauzasdansmith: if someone hard-deletes an instance (ie. the instance no longer exists), then we should purge the entry15:49
mriedemmdbooth: lyarwood: in case you haven't noticed yet, the original patches to add bdm.uuid were for a cellsv1 race issue where update_or_create happened at the api cell and things got racey and crazy,15:49
mriedemmdbooth: lyarwood: so that's why there is the update_or_create stuff15:50
dansmithbauzas: you're saying you will soon delete reqspec at normal instance delete time, yes?15:50
bauzasdansmith: tbc, I was planning to iterate over all the spec records, and lookup over all cells to see if some instance exists15:50
bauzasfor the purge command15:50
dansmithnot talking about purge, I'm talking about normal api delete..15:50
mriedemmdbooth: lyarwood: but i believe that update_or_create is also still called from the compute api (and/or conductor now) when creating the bdms15:50
bauzasdansmith: ah, that, I wasn't planning to introduce it given mriedem's point15:50
dansmithbauzas: his point about what?15:50
bauzasdansmith: the fact that we shouldn't delete the spec record in case this is an instance soft-delete15:51
mdboothmriedem: IIRC that's correct. The cellsv1 caller was unique in that it passed in a versioned object instead of a list of update values, though.15:51
bauzasdansmith: https://review.openstack.org/#/c/391060/2/nova/compute/api.py@180415:51
dansmithbauzas: right but assuming like 1% of people use soft delete, we should do the right thing in the 99% case and only require people that use soft-delete to have to worry about this purge15:51
dansmithI dunno what percentage do use it, but we should always delete if we know we can15:52
bauzasdansmith: so, I could just verify if it's a soft-delete ?15:52
*** david-lyle has joined #openstack-nova15:52
bauzasdansmith: and if it's not a soft-delete, then call the _delete_req_spec method?15:52
dansmithbauzas: yeah, of course, I'm not sure why we wouldn't15:52
bauzaslemme verify, but delete_type seems like the right argument to verify it15:53
*** lucasxu has joined #openstack-nova15:53
*** tbachman has joined #openstack-nova15:53
bauzasdansmith: honestly, purging using the archive command would be super-expensive15:53
*** nic has quit IRC15:53
bauzasbecause I need to iterate over all the spec records, and then iterate over all cells for each record, so o(n2)15:54
dansmithbauzas: further, in the purge command, it's going to be very intensive to compare all the current reqspecs with all the deleted=1 instances in a cell, so I think you should just do the api-db purge when you archive an instance row, which will be much more efficient.15:54
dansmithbauzas: yeah, that's completely not okay15:54
dansmithbauzas: doing the purge inside archive_deleted_rows is the only place you can do it efficiently15:54
dansmitha separate purge command will have to do it O(n^2)15:54
bauzasyeah, sorry my keyboard doesn't do ² :p15:55
dansmitha separate purge command will not be able to effectively determine which deleted instances already have a purged api-db record,15:55
dansmithwhich makes it expensive15:55
dansmithand after you archive, it's too late15:55
*** david-lyle_ has joined #openstack-nova15:55
*** sandanar has quit IRC15:55
bauzasdansmith: correct me if I'm wrong, but archive_deleted_rows isn't yet cells-aware ?15:55
dansmithbut if you do it when you archive, then you can do it in O(n) just like archive is now15:55
*** david-lyle has quit IRC15:55
mdboothlyarwood: On locking, I think that everything in api which checks instance/task state should be required to immediately and atomically modify task state. That's probably the level of locking required for rebuild.15:56
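The "check state then immediately and atomically modify task state" pattern mdbooth describes is essentially a compare-and-swap. A toy single-process illustration (not nova code; nova does this with a conditional DB update, and the class here is invented):

```python
import threading

class Instance:
    """Toy instance with a task_state guarded by a lock (illustration only)."""

    def __init__(self):
        self._lock = threading.Lock()
        self.task_state = None

    def claim_task(self, expected, new):
        """Atomically move task_state from `expected` to `new`.

        Returns False if another operation already changed the state,
        which is how a second concurrent request (e.g. a volume-attach
        arriving mid-rebuild) would be rejected.
        """
        with self._lock:
            if self.task_state != expected:
                return False
            self.task_state = new
            return True
```

With this, lyarwood's volume-attach question below has a clean answer: the attach's own claim fails once rebuild has taken the state.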
dansmithbauzas: what does it matter if it is? whether you're doing one cell db or all of them, it's the same amount of work15:56
mriedembauzas: the delete_type arg is what tells you the type,15:56
mriedem"soft_delete" or just "delete"15:56
mdboothI think we do that in many places, btw.15:56
bauzasdansmith: sorry if I'm unclear, but that archive command would have to be run close to a child cell DB, right?15:56
bauzasdansmith: if so, that would be an upcall, right?15:56
sean-k-mooneyjaypipes: looking at https://review.openstack.org/#/c/441183 im not sure that any os-vif change is required15:56
dansmithbauzas: it's nova-manage, upcalls are fine15:57
*** david-lyle__ has joined #openstack-nova15:57
sean-k-mooneyjaypipes: the contrail plugin for os-vif would live in the networking-contrail repo not os-vif normally15:57
bauzasdansmith: mmm, okay, I need to consider that15:57
mriedembauzas: can we just focus on the hard delete case in the api as i pointed out in the review yesterday? and worry about how to best cleanup the soft delete orphans in a separate change?15:57
dansmithyes, please15:57
dansmithnot deleting when we know we can is crazypants15:57
mriedemthey should be separate changes anyway15:57
lyarwoodmdbooth: would that not still allow something like volume-attach to get through?15:57
-openstackstatus- NOTICE: The Gerrit service on review.openstack.org is being restarted to address hung remote replication tasks, and should return to an operable state momentarily15:58
lyarwoodah, not just me then15:58
mdboothlyarwood: Not if everything does it, no.15:58
*** marst_ has joined #openstack-nova15:58
bauzasmriedem: dansmith: yeah, tbc, based on dan's good point, I'm just modifying https://review.openstack.org/#/c/391060/2/nova/compute/api.py to just verify delete_type15:58
mdboothIt's not consistent, though.15:58
*** awaugama has joined #openstack-nova15:58
bauzasand leave the soft-delete deletions be done differently15:58
bauzasI need to go, ttyl15:58
*** marst has quit IRC15:59
mriedembauzas: i said that yesterday in the review :)15:59
*** david-lyle has joined #openstack-nova15:59
mriedem"So we can hard delete the request spec in the API when we hard delete an instance (the delete_type would tell you which it is)."15:59
mriedem"If we care about adding a way to purge leaked request specs (request  specs that are related to instances which are deleted), then we could  provide a nova-manage command for that, e.g. nova-manage api_db  purge_zombies, or something similar. I'd do the purge CLI in a separate  patch btw."15:59
mdboothlyarwood: Anyway, as you say that's not really in scope.15:59
*** david-lyle__ has quit IRC15:59
*** karimb has quit IRC16:00
* mriedem takes kid to school16:00
*** david-lyle_ has quit IRC16:00
mhenkelsean-k-mooney, Hello16:01
sean-k-mooneyjaypipes: that said if the plugin was ovs then yes i would have expected a patch to os-vif16:02
sean-k-mooneymhenkel: hieulq16:02
sean-k-mooneymhenkel: *hi16:02
sean-k-mooneymhenkel: just dissusing your change16:02
*** karimb has joined #openstack-nova16:02
sean-k-mooneymhenkel: so i'm not sure jay is about currently16:02
mhenkelI was just debating internally if we should upstream https://github.com/Juniper/contrail-nova-vif-driver/tree/master/vif_plug_vrouter to os-vif16:03
mhenkelor keep it as a separate plugin delivered by contrail packages16:03
*** voelzmo has quit IRC16:04
sean-k-mooneymhenkel: currently the policy for plugins in os-vif is that they can only be for the reference backends16:04
mhenkelmy personal preference would be to upstream16:04
sean-k-mooneyso ovs,linux bridge and sriov16:04
mhenkelah ok16:04
sean-k-mooneythat is something we could change but the reasoning is that it allows you to maintain and update them faster yourself16:05
*** david-lyle has quit IRC16:05
mhenkelsean-k-mooney: yes, that I agree with16:05
sean-k-mooneymhenkel: one thing that will be changing this cycle hopefully is that neutron will start passing os-vif vif object to nova16:06
*** r-daneel has joined #openstack-nova16:06
mhenkelsean-k-mooney: so we should be good with what I upstreamed so far?16:07
sean-k-mooneymhenkel: when that happens the change you are proposing will no longer be required as you will be able to specify the plugin to use for the ml2 mech driver16:07
sean-k-mooneymhenkel: yes16:07
sean-k-mooneymhenkel: though one question16:07
sean-k-mooneyfor contrail are you managing ovs or do you support other backends16:08
mhenkelno ovs, we provide our own virtual router16:08
openstackgerritDan Smith proposed openstack/nova master: Get instance availability_zone without hitting the api db  https://review.openstack.org/43975416:08
openstackgerritDan Smith proposed openstack/nova master: Avoid lazy-loading projects during flavor notification  https://review.openstack.org/44569716:08
openstackgerritDan Smith proposed openstack/nova master: Set instance.availability_zone whenever we schedule  https://review.openstack.org/44605316:08
openstackgerritDan Smith proposed openstack/nova master: Make conductor ask scheduler to limit migrates to same cell  https://review.openstack.org/43802516:08
mhenkeleither as a kmod or in userspace using dpdk16:08
sean-k-mooneymhenkel: ah ok. in that case what you have is perfect16:09
*** nic has joined #openstack-nova16:09
mhenkelsean-k-mooney: so we just need to convince jaypipes to take back his -1 ;)16:09
sean-k-mooneymhenkel: if you were managing ovs(with or without dpdk) you could upstream the changes that are required to the ovs plugin16:09
*** amotoki has joined #openstack-nova16:09
mhenkelI see16:09
sean-k-mooneymhenkel: yes but jaypipes  will likely agree when he reads my comment16:10
openstackgerritSujitha proposed openstack/nova master: Add helper method to add additional data about policy rule.  https://review.openstack.org/43484216:11
*** karimb has quit IRC16:12
melwittmriedem: on the regression test, do you know why you didn't need RetryFilter?16:12
openstackgerritSujitha proposed openstack/nova master: Add helper method to add additional data about policy rule.  https://review.openstack.org/43484216:13
*** NikhilS has quit IRC16:14
openstackgerritStephen Finucane proposed openstack/os-vif master: Use Sphinx 1.5 warning-is-error  https://review.openstack.org/44661616:15
bauzasmriedem: sorry if I misunderstood you16:16
*** nkrinner is now known as nkrinner_afk16:16
bauzasI don't want to play the French card, but...16:16
bauzasI thought both of you were asking me to abandon that change and just use the manage command for purging16:17
*** karimb has joined #openstack-nova16:17
*** belmoreira has quit IRC16:18
jrolldansmith: got a pointer to the work to expose custom classes in allocations? (is that happening yet?)16:18
dansmithjroll: I think it is .. in jay's head at least16:18
jrolldansmith: keep in mind I mostly have no idea how allocations work, so I'm not sure what's missing here. I guess it's that we can't yet create an allocation for CUSTOM_FOO?16:19
*** Drankis has joined #openstack-nova16:19
dansmithjroll: we can, we just don't16:19
jrolldansmith: right, so one of the work items is to create those allocations, unless I'm missing something I don't see the dep16:20
*** sneti_ has quit IRC16:20
*** marst_ has quit IRC16:20
*** dtp has joined #openstack-nova16:20
*** Guest15143 has quit IRC16:20
*** marst has joined #openstack-nova16:20
dansmithjroll: ah, okay I didn't interpret that work item as this thing, probably because the spec seems focused on just the flavor override and scheduling part16:21
dansmithjroll: in that case, it should probably be "RT changes to allow drivers to expose allocations of custom things" and "make ironic driver do that thing"16:21
jrolldansmith: feels like we need to handle it at the same time, else the node remains free to be scheduled to :)16:21
*** r-daneel has quit IRC16:21
dansmithjroll: I don't think so, we can do the allocation thing first16:22
jrollah, true16:22
dansmithif jay is doing that he should probably be on the contributors too16:22
*** pumarani_ has quit IRC16:22
*** slunkad_ has quit IRC16:22
jrolldansmith: okay, so I guess I'll rearrange the work items, split it into RT/ironic items, and add jaypipes. cool?16:22
*** Guest15143 has joined #openstack-nova16:23
dansmithjroll: sure, but if you're going to include all that work in here, you probably need to add some wordy description of the allocation bits into the proposed change as well, because that's probably why I assumed this was only the flavor/scheduling work16:23
jrollgotcha16:24
jrolldansmith: I have no clue how that thing works, I'll add as a dep for now until I chat with jay16:24
dansmithokay if he wants it all in this spec I'm sure he can just add it himself16:26
*** cdent has quit IRC16:27
jrollya16:27
mriedemlyarwood: so i was thinking, we should probably decouple the attachment_id and the rest of that series from the bdm.uuid patch, since that's getting overly complicated at this point16:28
mriedemlyarwood: and it's been about 2 weeks, can we just decouple that from the series?16:28
mriedemmelwitt: adding the RetryFilter in makes it blow up for some reason16:29
mriedemmelwitt: you could pull it down and add that to see what i mean, but it seems to think that there is only one host,16:29
mriedemi'm not sure if that's because both compute services have the same fake-mini node?16:30
*** lucasxu has quit IRC16:30
melwittmriedem: yeah, I suspect it's trying the same host again. it still tests the bug since it is retrying, it's just probably not landing on the second host16:30
lyarwoodmriedem: yup happy to, I'll do that now before the call16:31
mriedemmelwitt: well, it dumps a message like "host1 in list, hosts already tried: host1"16:31
lyarwoods/call/meeting/g16:31
mriedemand then kicks it out,16:31
melwittmriedem: oh? I pulled it down a few minutes ago and am playing with it, just out of curiosity16:31
mriedemi'm not sure why it doesn't see host216:31
mriedemmy guess is the node values are the same, but i don't see why that should matter16:31
openstackgerritJim Rollenhagen proposed openstack/nova-specs master: Add spec for custom resource classes in flavors  https://review.openstack.org/44657016:31
mriedemlyarwood: thanks16:32
jrolldansmith: done, thanks for the help16:32
*** karimb has quit IRC16:38
*** mdrabe has quit IRC16:38
*** tonygunk has quit IRC16:40
jaypipesmhenkel: where is the os-vif plugin for vrouter?16:43
*** aarefiev is now known as aarefiev_afk16:44
jaypipesmhenkel: nm, found it, sorry.16:45
mhenkeljaypipes: cool16:45
openstackgerritOctave Orgeron proposed openstack/nova-specs master: Enables MySQL Cluster Support for Nova  https://review.openstack.org/44662616:45
*** armax has joined #openstack-nova16:46
*** bmace has quit IRC16:47
*** bmace has joined #openstack-nova16:48
*** iceyao has quit IRC16:50
*** ltomasbo is now known as ltomasbo|away16:50
*** catinthe_ has joined #openstack-nova16:52
*** mdrabe has joined #openstack-nova16:52
*** andreas_s has quit IRC16:54
*** catintheroof has quit IRC16:54
openstackgerritOctave Orgeron proposed openstack/nova-specs master: Enables MySQL Cluster Support for Nova  https://review.openstack.org/44663116:55
*** Jack_Iv has quit IRC16:56
*** unicell has joined #openstack-nova16:56
*** unicell has quit IRC16:56
*** Jack_Iv has joined #openstack-nova16:56
*** unicell has joined #openstack-nova16:56
openstackgerritStephen Finucane proposed openstack/python-novaclient master: Use Sphinx 1.5 warning-is-error  https://review.openstack.org/44663216:56
*** Apoorva has joined #openstack-nova16:57
jaypipesmhenkel: still -1 from me, but I understand now where the os-vif stuff is. :)16:58
jaypipesmhenkel: pls see my review comments for details.16:58
mhenkeljaypipes: ok, will check, thanks!16:59
openstackgerritLee Yarwood proposed openstack/nova master: objects: Add attachment_id to BlockDeviceMapping  https://review.openstack.org/43766516:59
openstackgerritLee Yarwood proposed openstack/nova master: db: Add attachment_id to block_device_mapping  https://review.openstack.org/43759716:59
jaypipesmhenkel: np! :)16:59
jaypipesmhenkel: if you add those unit tests and make the commit message changes, I'll re-review quickly and with sean-k-mooney's +1 pass it over to sfinucan or mriedem to +2.17:00
*** hshiina has quit IRC17:00
mhenkeljaypipes: will do!17:00
*** Jack_Iv has quit IRC17:01
*** ltomasbo|away is now known as ltomasbo17:01
*** bvanhav has quit IRC17:01
*** READ10 has quit IRC17:01
*** catintheroof has joined #openstack-nova17:01
jaypipesmriedem: https://review.openstack.org/#/c/416669/ looks sensible and has had several cdent reviews around API correctness.17:01
*** avolkov` has quit IRC17:01
openstackgerritJohn Garbutt proposed openstack/nova-specs master: Add spec to use cinder's new attachment API  https://review.openstack.org/37320317:02
sean-k-mooneyjaypipes: i probably should have been more stringent in my own review but the main thing i was reviewing for was if the os-vif delegation looked correct.17:02
jaypipessean-k-mooney: it's cool duder :)17:02
mdboothmriedem lyarwood: I've been looking at that detach gate failure, discussing with danpb downstream.17:02
*** Jack_Iv has joined #openstack-nova17:02
mdboothI currently suspect it's a libvirt issue.17:03
mdboothIf you look at the libvirt logs, we see the device_del, then the job disappears into a black hole, then Nova reports a disconnect.17:03
sean-k-mooneyjaypipes: that said a lot of the code that delegates to os-vif in nova is not unit tested correctly so this is something i guess we should improve going forward17:03
jaypipessean-k-mooney: ack17:04
*** ralonsoh has quit IRC17:04
*** fragatina has joined #openstack-nova17:04
mriedemjaypipes: ok, after meetings and other things, like this big ass bowl of chili in front of me17:04
*** fragatina has quit IRC17:04
*** fragatina has joined #openstack-nova17:05
mdboothlyarwood mriedem: All of the above occurs within the context of, from Nova's pov, self._domain.detachDeviceFlags(device_xml, flags=flags)17:05
mdboothSo at the moment I don't see how it can be a Nova issue17:05
*** catinthe_ has quit IRC17:05
sean-k-mooneymhenkel: the unitest in https://review.openstack.org/#/c/334048/ may be of use to you as a reference.17:06
*** marst_ has joined #openstack-nova17:07
mdboothmriedem: How long do those log files live, btw? If I link to them in bz will they still be there in a month or 2?17:07
*** manasm has quit IRC17:08
mriedemmdbooth: i'm not sure about that, it used to be 6 months,17:09
mriedemfungi: jeblair: ^ how long does infra keep ci job logs around?17:09
*** marst has quit IRC17:09
fungimriedem: now it's down to about 45 days17:09
fungimdbooth: ^17:10
jaypipesmriedem: :)17:10
mdboothfungi: Thanks17:10
* mdbooth will copy them :)17:10
sean-k-mooneyjaypipes: actually while you are around if you get a chance to look at https://review.openstack.org/#/c/441590/ let me know if you are ok with this direction. i believe that with this change the host_info class has all the info we need to pass to neutron for the port binding negotiation.17:10
fungimriedem: mdbooth: that's keeping us barely afloat with the present job log volume http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=717&rra_id=all17:10
jeblair(it's directly tied to log volume; we've had to reduce it due to a significant increase in the amount of data jobs are storing in logs)17:10
sean-k-mooneyjaypipes: that said i don't want to merge this until i have some prototype code to consume it17:11
mriedemmdbooth: fwiw i didn't notice this type of error when we were using trusty17:11
mriedemwhich was libvirt 1.2.217:11
mdboothfungi: NP. I originally assumed it would be days, tbh.17:11
mdbooth45 days is excellent.17:12
sean-k-mooneysfinucan: https://review.openstack.org/#/c/441590/ is probably of interest to you too. again i'm going to keep the workflow -1 until i have an end to end poc working to ensure it does capture all required info17:12
*** rcernin has joined #openstack-nova17:12
fungimdbooth: we wish it could be more, but we generate _lots_ of logs these days17:14
*** catintheroof has quit IRC17:14
*** catintheroof has joined #openstack-nova17:15
*** bvanhav has joined #openstack-nova17:17
*** manasm has joined #openstack-nova17:17
*** jdillaman has joined #openstack-nova17:19
openstackgerritChristopher Brown proposed openstack/nova master: Add lan9118 as valid nic for hw_vif_model property for qemu  https://review.openstack.org/39348917:19
*** lucasagomes is now known as lucas-afk17:21
jaypipessean-k-mooney: reviewed.17:22
*** tonygunk has joined #openstack-nova17:24
mriedemfungi: totally fine17:25
mriedemwas just wondering the timeline17:25
openstackgerritOctave Orgeron proposed openstack/nova master: Enables MySQL Cluster Support for Nova  https://review.openstack.org/44664317:25
*** ociuhandu has quit IRC17:26
*** baoli has quit IRC17:26
sean-k-mooneyjaypipes: thanks, that also needs unit tests so i'll address your comments when i add them17:27
*** tbachman has quit IRC17:27
*** baoli has joined #openstack-nova17:27
openstackgerritOctave Orgeron proposed openstack/nova master: Enables MySQL Cluster Support for Nova  https://review.openstack.org/44664317:27
*** tbachman has joined #openstack-nova17:28
*** READ10 has joined #openstack-nova17:28
*** rcernin has quit IRC17:28
*** amotoki has quit IRC17:32
*** dtp has quit IRC17:32
mdboothlyarwood mriedem: libvirtd crashed17:33
mriedemmdbooth: gah17:34
mriedemremember how i said i'd always chalked this failure up to libvirt crashing :)17:34
mdboothmriedem: So, I think we're going to request some changes in the CI environment17:34
mdboothe.g. It would be nice if libvirt didn't restart17:35
mdboothAlso, if we collected core dumps17:35
mdboothAnd there are some additional logs which would be useful17:35
openstackgerritDan Smith proposed openstack/nova master: Teach HostAPI about cells  https://review.openstack.org/44216217:35
openstackgerritDan Smith proposed openstack/nova master: Make scheduler target cells to get compute node instance info  https://review.openstack.org/43989117:35
mdboothmriedem: What's the best way to request ^^^ ?17:35
mriedemi.e. http://status.openstack.org/elastic-recheck/#1643911 http://status.openstack.org/elastic-recheck/#1646779 http://status.openstack.org/elastic-recheck/#163898217:35
mriedemmdbooth: i think the libvirtd logging stuff is handled in devstack17:36
*** derekh has quit IRC17:36
*** ociuhandu has joined #openstack-nova17:36
*** tbachman has quit IRC17:36
mriedemmdbooth: https://github.com/openstack-dev/devstack/blob/master/lib/nova_plugins/functions-libvirt#L10417:37
mdboothhttp://logs.openstack.org/75/446175/1/gate/gate-tempest-dsvm-neutron-full-ubuntu-xenial/479c7bf/logs/syslog.txt.gz#_Mar_16_01_01_3217:38
mdboothbtw17:38
*** bvanhav has quit IRC17:38
openstackgerritJohn Garbutt proposed openstack/nova master: compute: Only destroy BDMs after successful detach call  https://review.openstack.org/44069317:38
cfriesenis anyone aware of a way to have libvirt/qemu live migration maintain the "sparseness" of a qcow2 disk file?  After live migration I'm seeing the physical size match the virtual size even though most of that space is not actually used.17:38
*** tbachman has joined #openstack-nova17:39
mdboothcfriesen: Is this an image-backed qcow2?17:42
mdboothBecause if so, it should already do that.17:42
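cfriesen's question is about sparse-file preservation. For the qcow2 case one would compare the `virtual size` and `disk size` lines of `qemu-img info` before and after migration; the underlying apparent-vs-allocated distinction can be seen on any POSIX sparse file with nothing but the stdlib (illustration only, assumes a filesystem that supports sparse files):

```python
import os
import tempfile

# Create a 100 MiB sparse file: large apparent size, almost no blocks
# actually allocated on disk.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'wb') as f:
    f.truncate(100 * 1024 * 1024)  # extend without writing any data

st = os.stat(path)
apparent = st.st_size            # "virtual size": 100 MiB
allocated = st.st_blocks * 512   # "physical size": near zero when sparse
os.remove(path)
```

Losing sparseness during live migration means `allocated` balloons up to `apparent`, which matches what cfriesen reports.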
*** aysyd has quit IRC17:44
mriedemmdbooth: ok so we can just duplicate this bug i guess,17:44
mriedemand i wasted everyone's time17:44
mriedembut thanks for digging into it,17:44
mriedemif we can get more details on the actual crashes, that'd be great,17:44
openstackgerritsean mooney proposed openstack/nova master: remove flake8-import-order for test requirements  https://review.openstack.org/44562217:44
mdboothmriedem: We should raise a bug against Ubuntu's libvirt package17:44
mriedembecaues it's a pretty high probability gate failure since we moved to xenial17:44
mriedemi wouldn't really know what to say in the bug,17:45
mriedemexcept "it crashes :P "17:45
mdboothWell, we've a dump from one of those crashes17:45
mriedemwhat's that palms up idk guy?17:45
mdboothSorry, not the dump17:45
mdboothThe 'panic' or whatever, summary17:45
mriedem¯\_(ツ)_/¯17:45
mriedemthat guy17:45
*** kaisers has quit IRC17:45
mdboothHehe17:45
johnthetubaguylyarwood: I have a tiny nit on this one: https://review.openstack.org/#/c/437597/17:46
lyarwoodlooking17:46
johnthetubaguylyarwood: I am wondering if we don't actually need the get_by_attachment_id method?17:46
mdboothlyarwood: In https://review.openstack.org/#/c/440693/9/nova/compute/manager.py you've still got the notification out of order17:48
mdboothlyarwood: I thought we said we wouldn't do that?17:48
lyarwoodmdbooth: didn't your review say that it was fine to raise the notification after the detach?17:49
johnthetubaguyyeah, I saw the note added in the commit message17:49
mdboothlyarwood: It did, but then we discussed it with gibi iirc?17:49
johnthetubaguyit sounds sensible17:49
mdboothI also said I don't really understand the contract :)17:49
johnthetubaguyright now its half way through the detach right, it should be either before we call cinder, or after I think17:50
mdboothAs it stands today, you'll get a notification even if the detach fails17:50
mdboothWith this patch, you *won't* get a notification if the detach fails17:50
mdboothThat makes sense to me, but it is an unrelated change of behaviour17:50
johnthetubaguywell, depends where it fails17:50
johnthetubaguyreally, you should get an error notification if it fails17:50
mdboothjohnthetubaguy: In volume_api.detach()17:50
mdboothjohnthetubaguy: The point was also raised that we could change that entirely17:51
mdboothBut I thought for safety we should just leave it as is17:51
johnthetubaguythe real error happen in _driver_detach_volume17:51
mdboothAnd change it deliberately at another time if that's what we want17:51
johnthetubaguyso I think the tweak seems good17:51
johnthetubaguyits tempting to do it after the BDM clean up I suppose17:51
lyarwoodI'm working on this now so it's not an issue to kick it out17:52
johnthetubaguyoh, wait, we can't; we need to send that info as part of the instance17:52
johnthetubaguyso here is a reason to go lyarwood's way17:52
johnthetubaguyin the new API there is only one cinder API call17:53
*** asselin has joined #openstack-nova17:53
*** asselin has left #openstack-nova17:53
*** fragatina has quit IRC17:54
johnthetubaguylyarwood: I was thinking the fake BDMs should probably have attachment_id = None, so it matches the current BDMs generated by the code: https://review.openstack.org/#/c/437665/11/nova/tests/unit/objects/test_block_device.py@3617:55
lyarwoodjohnthetubaguy: true, I was thinking ahead to testing detach but you're right that doesn't make sense yet.17:56
johnthetubaguylyarwood: I would keep the uuids for the object compat one though, that seems legit17:57
*** satyar has quit IRC17:57
lyarwoodjohnthetubaguy: kk17:57
*** READ10 has quit IRC17:58
lyarwoodjohnthetubaguy: and re get_by_attachment_id , I just assumed we would want that around tbh, if you can't think of a case where we would want that over get_by_instance* then I'll drop it17:58
*** yamahata has joined #openstack-nova17:59
*** sc68cal has quit IRC17:59
johnthetubaguylyarwood: I think we will always fetch using volume-uuid and instance-uuid, because of API flow. Its easy to add when we need it though, best to leave it out for now I think.18:00
johnthetubaguylyarwood: I will probably regret that tomorrow, but it seems the best way to go18:00
lyarwoodjohnthetubaguy: kk, should I leave the index on attachment_id?18:00
mriedemprobably don't need the index if we don't query by attachment_id right?18:00
johnthetubaguylyarwood: good question, I guess we can leave that off too18:01
johnthetubaguyyeah, what mriedem said18:01
lyarwoodright, just checking before I pull it18:01
lyarwoodthanks18:01
johnthetubaguyindex will slow down write performance, so worth dropping tht18:01
*** kaisers has joined #openstack-nova18:01
*** vks1 has left #openstack-nova18:03
*** ltomasbo is now known as ltomasbo|away18:03
*** cfriesen has quit IRC18:04
*** alexpilotti has joined #openstack-nova18:05
*** david-lyle has joined #openstack-nova18:05
*** sc68cal has joined #openstack-nova18:06
*** aysyd has joined #openstack-nova18:06
*** jpena is now known as jpena|off18:07
mriedemgibi: have you ever seen this? http://logs.openstack.org/08/445308/3/check/gate-tempest-dsvm-py35-ubuntu-xenial/7bf0d72/logs/screen-n-api.txt.gz#_2017-03-16_05_31_09_39918:08
mriedemValueError: Circular reference detected18:08
mriedemduring send_notification18:08
openstackgerritKen'ichi Ohmichi proposed openstack/nova master: Clarify os-stop API description  https://review.openstack.org/44626418:08
*** cfriesen has joined #openstack-nova18:08
*** baoli has quit IRC18:09
mriedemoomichi: left a question in ^18:13
mriedemfor os-start18:13
*** baoli has joined #openstack-nova18:13
*** gszasz has quit IRC18:13
*** manasm has quit IRC18:14
oomichimriedem: oh, nice catch. I am a fan of removing it also18:15
*** Sukhdev has joined #openstack-nova18:16
*** lpetrut has quit IRC18:16
*** tbachman has quit IRC18:16
*** sc68cal_ has joined #openstack-nova18:17
*** sc68cal has quit IRC18:17
mdboothmriedem: I've also seen Circular reference detected recently18:17
mdboothAfter an error18:17
openstackgerritKen'ichi Ohmichi proposed openstack/nova master: Clarify os-stop API description  https://review.openstack.org/44626418:17
openstackgerritIldiko Vancsa proposed openstack/nova master: Remove check_detach  https://review.openstack.org/44667118:18
openstackgerritOpenStack Proposal Bot proposed openstack/nova master: Updated from global requirements  https://review.openstack.org/44667218:18
oomichimriedem: ^^^ thanks, done18:18
mdboothmriedem: http://logs.openstack.org/42/445142/3/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/07e282a/logs/screen-n-api.txt.gz?level=ERROR#_2017-03-15_10_18_17_45918:18
mdboothIs that similar?18:18
mriedemmdbooth: yes18:19
*** sneti_ has joined #openstack-nova18:19
mriedemmdbooth: https://bugs.launchpad.net/nova/+bug/1673375/comments/318:19
openstackLaunchpad bug 1673375 in oslo.messaging ""ValueError: Circular reference detected" in send_notification" [Undecided,New]18:19
mriedemthe notification is coming from the wrap_exception decorator here https://github.com/openstack/nova/blob/2380659e358770a3f36253b93a112b9779a23958/nova/compute/api.py#L460218:19
mriedembut i don't know what the circular reference is18:19
* mdbooth doesn't know how references work in json18:20
mdboothI expect if we knew that and decoded the big chunk of text above we'd discover it18:20
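The error itself is easy to reproduce: CPython's json encoder raises exactly this ValueError when a dict (directly or indirectly) contains itself, which is presumably what is hiding somewhere in the notification payload. Minimal sketch, payload contents invented:

```python
import json

# A payload that indirectly contains itself, like a notification dict
# holding a context whose attributes refer back to the payload.
payload = {'event': 'compute.instance.update'}
payload['context'] = {'parent': payload}  # back-reference

try:
    json.dumps(payload)
    raised = None
except ValueError as exc:
    raised = str(exc)
```

json has no reference syntax, so instead of emitting one the encoder detects the cycle and bails out with this message.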
mriedemsomething is messed up in that change of yours18:21
mriedemTypeError: get_by_instance_mapping_list() got an unexpected keyword argument 'expected_addrs'18:22
openstackgerritOpenStack Proposal Bot proposed openstack/python-novaclient master: Updated from global requirements  https://review.openstack.org/44667418:22
mdboothmriedem: Yeah, I've fixed it since18:22
*** armax has quit IRC18:22
mriedemah ok18:22
mdboothIt was just a typo18:22
*** sc68cal_ has quit IRC18:22
mriedemjohnthetubaguy: https://review.openstack.org/#/c/446672/18:22
mriedemyup18:22
mriedemaddrs/attrs18:22
mriedemjohnthetubaguy: ^ is the cinderclient min version bump18:23
mdboothmriedem: Now *you* saw that straight away. I stared at it so long I became blind to it.18:23
mriedempair programming18:23
mriedemthe ibm execs were right!18:23
* johnthetubaguy gets mriedem some help, I hear cries for help18:24
*** sc68cal has joined #openstack-nova18:24
mriedemwe were told that spotify did everything correctly, and so we needed to do everything they did18:24
mriedemlike,18:24
mriedempair programming, squads, kanban, agile,18:25
mriedemwork around a single table in a room18:25
mriedemeat together18:25
mriedemsleep together18:25
mriedemetc etc18:25
mriedemout pops innovation18:25
mriedemyada yada yada, 18 months later, i left ibm18:25
mriedem")18:25
mriedem:)18:25
mdboothmriedem: So, looking at that circular reference thing, there are a few python objects on those dicts18:26
*** liusheng has quit IRC18:26
ildikovmriedem: sounds like a long time to sleep together with the whole team :)18:26
mriedemildikov: you wouldn't believe the VD18:26
mdboothIf the json dumper is trying to serialise them, and there's a circular reference in there18:26
mdboothThen... that would be a circular reference18:26
mdboothThere are plenty to choose from18:26
*** liusheng has joined #openstack-nova18:27
ildikovmriedem: VD?18:27
mdboothThere's probably stuff in there it's just not useful to serialise, tbh18:27
mriedemvenereal disease18:27
mriedemildikov: a joke18:27
*** esberglu_ has joined #openstack-nova18:27
*** esberglu_ has quit IRC18:27
*** esberglu_ has joined #openstack-nova18:28
ildikovmriedem: I knew it was a joke, just had no idea what VD is, LOL18:28
*** rcernin has joined #openstack-nova18:28
ildikovmriedem: I could've guessed though :)18:28
*** esberglu has quit IRC18:30
*** catinthe_ has joined #openstack-nova18:30
openstackgerritSujitha proposed openstack/nova master: Add helper method to add additional data about policy rule.  https://review.openstack.org/43484218:30
*** catinth__ has joined #openstack-nova18:30
*** baoli has quit IRC18:32
*** catintheroof has quit IRC18:32
*** catinthe_ has quit IRC18:34
openstackgerritmelanie witt proposed openstack/nova master: Fix functional regression/recreate test for bug 1671648  https://review.openstack.org/44668518:35
openstackbug 1671648 in OpenStack Compute (nova) ocata "Instances are not rescheduled after deploy fails" [High,In progress] https://launchpad.net/bugs/1671648 - Assigned to Matt Riedemann (mriedem)18:35
*** sneti_ has quit IRC18:35
*** Swami has quit IRC18:35
melwittmriedem: while I was working on making it consider two hosts while scheduling, I discovered a race. so I have put up a fix ^18:35
melwittthe regression test, that is18:35
mriedemmelwitt: ok, cool18:36
*** cfriesen_ has joined #openstack-nova18:37
mriedemi had backported the functional regression test and the retry fix to ocata already, they aren't merged, but we could hold them up for the race fix too18:37
mriedemin the test18:37
*** cfriesen has quit IRC18:37
*** cfriesen__ has joined #openstack-nova18:38
*** yamahata has quit IRC18:38
melwittokay18:38
*** Swami has joined #openstack-nova18:39
melwittmriedem: the "host" on the instance server attributes is the host from the start_service('compute', host=) but the "hypervisor_hostname" is the nodename, and the nodename for both hosts was the same "fake-mini" so that's how they got collapsed into only one host for scheduling18:40
dansmithand nodes bites us in the ass once again18:41
melwittthere's a fake.set_nodes() that lets you set the hostnames to return and you have to do it before starting each compute service fixture18:41
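The collapse melwitt describes, two compute services started with distinct hosts but the same default nodename ("fake-mini"), can be sketched without any nova imports; the host names below are hypothetical:

```python
# Two fake compute services: distinct service hosts, identical nodename,
# mirroring the default fake-driver behavior being described.
services = [
    {"host": "host1", "nodename": "fake-mini"},
    {"host": "host2", "nodename": "fake-mini"},
]

# Key compute nodes by nodename alone and the two hosts collapse into a
# single candidate, which is why scheduling only ever saw one host.
by_node = {s["nodename"]: s for s in services}
assert len(by_node) == 1

# Giving each service a distinct nodename (what calling fake.set_nodes()
# before each start_service() does in the functional test) keeps both
# hosts visible.
services[0]["nodename"] = "host1"
services[1]["nodename"] = "host2"
by_node = {s["nodename"]: s for s in services}
assert len(by_node) == 2
```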
*** cfriesen_ has quit IRC18:41
melwittyeah :( node ass-biters18:41
mriedemah ok18:41
mriedemouch18:41
mriedemalso, can you all please clean it up please,18:42
*** tesseract has quit IRC18:42
mriedemi'd like to keep this channel classy18:42
melwittafter I set the nodes, the race reared its head, sometimes I got a Claim, sometimes I got a NopClaim18:42
mriedemalright, my kid just threw up in class so i'm going to the school, back in a bit18:43
mriedemprobably all those stories i told here about nodes and local delete and quotas18:44
openstackgerritLee Yarwood proposed openstack/nova master: objects: Add attachment_id to BlockDeviceMapping  https://review.openstack.org/43766518:44
openstackgerritLee Yarwood proposed openstack/nova master: db: Add attachment_id to block_device_mapping  https://review.openstack.org/43759718:44
openstackgerritmelanie witt proposed openstack/nova master: Fix functional regression/recreate test for bug 1671648  https://review.openstack.org/44668518:46
openstackbug 1671648 in OpenStack Compute (nova) ocata "Instances are not rescheduled after deploy fails" [High,In progress] https://launchpad.net/bugs/1671648 - Assigned to Matt Riedemann (mriedem)18:46
melwittmriedem: hah, hope she's okay18:46
*** alexpilotti has quit IRC18:47
*** amoralej is now known as amoralej|off18:48
*** dharinic has quit IRC18:50
*** sneti_ has joined #openstack-nova18:52
*** alexpilotti has joined #openstack-nova18:53
*** alexpilotti has quit IRC18:58
*** rcernin has quit IRC18:58
*** dharinic- is now known as dharinic19:02
*** dtp has joined #openstack-nova19:02
*** alexpilotti has joined #openstack-nova19:04
*** sc68cal has quit IRC19:05
openstackgerritJohn Garbutt proposed openstack/nova-specs master: Add service-protected-server spec  https://review.openstack.org/43813419:06
*** fragatina has joined #openstack-nova19:06
*** fragatina has quit IRC19:07
*** fragatina has joined #openstack-nova19:07
*** alexpilotti has quit IRC19:09
*** cdent has joined #openstack-nova19:09
*** yamahata has joined #openstack-nova19:10
*** sc68cal has joined #openstack-nova19:11
mriedemlyarwood: when are you doing for the day?19:13
*** smatzek has joined #openstack-nova19:16
mriedem*done19:18
*** Jack_Iv has quit IRC19:18
mriedemi'm going to add tests to your bottom change, and fix the test in the object change19:18
*** xyang1 has joined #openstack-nova19:25
* cdent is suddenly thinking of diapers19:25
*** armax has joined #openstack-nova19:26
*** liverpooler has quit IRC19:27
*** cdent has quit IRC19:27
mriedemhis bottom is a mess19:27
mriedemdon't want the rash to set in19:28
*** liverpooler has joined #openstack-nova19:30
openstackgerritSteve Noyes proposed openstack/nova master: Throw exception if swap volume attempted on stopped server  https://review.openstack.org/44670819:30
openstackgerritDan Smith proposed openstack/nova master: Sort CellMappingList.get_all() for safety  https://review.openstack.org/44317419:31
openstackgerritDan Smith proposed openstack/nova master: Add get_by_instance_uuids() to InstanceMappingList  https://review.openstack.org/44329219:31
openstackgerritDan Smith proposed openstack/nova master: Clean up ClientRouter debt  https://review.openstack.org/44448719:31
openstackgerritDan Smith proposed openstack/nova master: Make server_groups determine deleted-ness from InstanceMappingList  https://review.openstack.org/44329319:31
openstackgerritDan Smith proposed openstack/nova master: Remove Mitaka-era service version check  https://review.openstack.org/44286119:31
openstackgerritDan Smith proposed openstack/nova master: Add workaround to disable group policy check upcall  https://review.openstack.org/44273619:31
openstackgerritMatt Riedemann proposed openstack/nova master: objects: Add attachment_id to BlockDeviceMapping  https://review.openstack.org/43766519:31
openstackgerritMatt Riedemann proposed openstack/nova master: db: Add attachment_id to block_device_mapping  https://review.openstack.org/43759719:31
*** dtp has quit IRC19:32
*** amotoki has joined #openstack-nova19:32
*** clenimar has quit IRC19:36
*** clenimar has joined #openstack-nova19:36
mriedemmelwitt: per your test fix, seems we should add some better descriptions for host and hypervisor_hostname in the api-ref https://developer.openstack.org/api-ref/compute/?expanded=show-server-details-detail#id2519:36
mriedemhost: The host name. Appears in the response for administrative users only.19:36
mriedemhypervisor_hostname: The hypervisor host name. Appears in the response for administrative users only.19:37
mriedemoh don't forget hostId: The ID of the host.19:37
mriedemthanks compute api ref!19:37
melwittmriedem: yeah, I actually had a look at that. I wasn't sure if maybe it was intentional that they don't mention the distinction between host name and nodename19:37
*** lpetrut has joined #openstack-nova19:38
melwittwhen you run for real, they're the same unless you're running ironic18:38
mriedemi'm not sure what the point in making the description unclear would be19:38
mriedemdo we get any security benefit from that?19:38
melwittI dunno. I was thinking maybe if there was concern about exposing implementation details like host and nodename. I'm not sure it matters19:39
*** cfriesen__ has quit IRC19:39
*** amotoki has quit IRC19:39
melwitthypervisor_hostname: either the host name or the Ironic node name if ironic_driver19:40
melwittso, maybe that's reasonable to put there19:40
mriedemi think so19:40
mriedemif we can't actually explain wtf the fields are in our API response, our API sucks, right?19:40
mriedemi'm pretty sure if cdent were around i'd get a checkmark for that19:40
melwittyeah.19:40
melwitthehe19:40
mriedemi'll open a bug19:41
melwittk19:41
openstackgerritJohn Griffith proposed openstack/nova master: Add Cinder API version detection  https://review.openstack.org/44446519:42
lyarwoodmriedem: lol, I'm back for a while now19:42
lyarwoodmriedem: thanks for sorting the tests out, rushed things before dinner, always a mistake.19:42
*** Swami has quit IRC19:47
openstackgerritEric Fried proposed openstack/nova master: PowerVM Driver: spawn/delete #1: no-ops  https://review.openstack.org/43811919:48
mriedemmelwitt: dansmith: jaypipes: sdague: jroll: see if this makes sense https://bugs.launchpad.net/nova/+bug/167359319:49
openstackLaunchpad bug 1673593 in OpenStack Compute (nova) "api-ref: descriptions for the various host fields in server GET response are useless" [Low,Confirmed]19:49
melwittmriedem: makes sense to me19:50
jrollmriedem: yeah, makes sense19:51
jrollthey are indeed useless19:51
mriedemhostId: The ID of the host.19:51
mriedemthanks19:51
jroll# start the thread19:51
jrollthread.start()19:51
jrolllove that stuff19:51
mriedem# start it up kris19:52
mriedem# it's what i was born to do19:52
melwittlol19:53
mriedem*warm it up19:53
mriedemmy mistake19:53
*** smatzek has quit IRC19:53
dansmithooh, I was just about to catch you on that19:53
melwittlet's wear our jeans backwards19:54
*** awaugama has quit IRC19:54
mriedemthat's wiggity wiggity wiggity whack19:54
*** tbachman has joined #openstack-nova19:56
*** voelzmo has joined #openstack-nova20:02
*** Jeffrey4l has quit IRC20:06
*** Jeffrey4l has joined #openstack-nova20:07
*** Sukhdev has quit IRC20:11
*** Swami has joined #openstack-nova20:11
*** baoli has joined #openstack-nova20:11
*** baoli has quit IRC20:18
*** baoli has joined #openstack-nova20:18
*** sneti_ has quit IRC20:18
*** Drankis has quit IRC20:21
*** winston-d has joined #openstack-nova20:22
winston-ddansmith: ping20:22
*** eharney has quit IRC20:23
*** Sukhdev has joined #openstack-nova20:23
winston-ddansmith: Hi, Dan.  I recall you mentioned Nova would have a 'force-shutdown' or sth similar API/feature to ensure hypervisor is truly down during PTG.20:24
winston-ddansmith: Do you have any pointer to bp or spec?  I quickly skimmed through nova-specs and couldn't find it.20:25
*** cfriesen__ has joined #openstack-nova20:29
*** Swami has quit IRC20:30
sdaguemriedem: I think the bug looks fine, push me a patch and lets get the api-ref fixed!20:30
*** dtp_ has joined #openstack-nova20:31
*** dtp_ is now known as dtp20:31
*** cfriesen has joined #openstack-nova20:34
*** cfriesen__ has quit IRC20:34
mriedemsdague: alright20:35
mriedemsdague: while you're around, would be nice to +W this so i can include it in the backports for the related bug https://review.openstack.org/#/c/446685/20:35
*** crushil has quit IRC20:35
mriedemretry on failed build is kind of important20:36
sdaguemriedem: +W20:36
sdaguemriedem: honestly, if you find a thing is wrong in api-ref, I would generally skip the bug and write the patch, and just be descriptive in the commit message. Definitely know a few of those things aren't quite right still.20:37
mriedemi only wrote the bug because i wasn't planning on doing a patch right now20:40
*** crushil has joined #openstack-nova20:40
openstackgerritMatt Riedemann proposed openstack/nova master: objects: Add attachment_id to BlockDeviceMapping  https://review.openstack.org/43766520:40
*** amrith has quit IRC20:42
dansmithwinston-d:  force-down20:42
*** amrith has joined #openstack-nova20:43
dansmithwinston-d: https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/mark-host-down.html20:49
openstackgerritJohn L. Villalovos proposed openstack/nova master: flake8: Specify 'nova' as name of app  https://review.openstack.org/44673620:49
bauzasFWIW, folks I'll be off tomorrow20:50
*** crushil has quit IRC20:50
*** voelzmo has quit IRC20:51
mriedemok20:52
mriedemi'll have a small human in the house, so i might be distracted20:52
mriedembauzas: did this get talked about in last week's nova meeting? https://blueprints.launchpad.net/nova/+spec/lvm-thin-pool20:54
mriedemand this https://blueprints.launchpad.net/nova/+spec/remove-openstack-api-directory20:54
mriedemlogan-: edleafe: ^?20:55
bauzasmriedem: the first one was not discussed given logan- wasn't there20:57
mriedemok20:57
bauzasmriedem: the second one was discussed20:57
bauzasmriedem: and we said possibly a specless BP20:57
bauzasgiven only for tracking20:57
edleafemriedem: yeah, and I have some patches for that20:58
mriedembut the bp wasn't approved20:58
edleafenot yet :)20:58
mriedemi guess i can look back at the logs, would have been nice if someone could have summarized in the bp whiteboard20:58
*** dimtruck is now known as zz_dimtruck20:58
bauzashttp://eavesdrop.openstack.org/meetings/nova/2017/nova.2017-03-09-14.00.log.html20:59
bauzasmmm, 1 min before the next one :)21:00
bauzasactually..21:00
* mriedem starts nova meeting21:01
*** sc68cal has left #openstack-nova21:03
efriedesberglu_ adreznec Could use a new +1 on https://review.openstack.org/43811921:14
*** smatzek has joined #openstack-nova21:15
*** Sukhdev has quit IRC21:16
esberglu_ack21:17
*** zz_dimtruck is now known as dimtruck21:26
openstackgerritOctave Orgeron proposed openstack/nova-specs master: Enables MySQL Cluster Support for Nova  https://review.openstack.org/44663121:26
*** catinth__ has quit IRC21:33
*** Jeffrey4l has quit IRC21:34
dtpdansmith / melwitt: can i get a new task?21:34
* jroll O_o at this static network metadata BP21:35
jrollmriedem: I don't see anything too crazy about it21:35
jrollfrom ironic POV21:35
jrollthough I wonder if there's other ways to do it, hm21:36
mriedemdtp: i will have something i think once i have a spec written21:36
dtpok21:37
dansmithmriedem: the services id thing?21:37
mriedemyes21:37
mriedemdtp: this needs a spec https://blueprints.launchpad.net/nova/+spec/service-hyper-pci-uuid-in-api21:37
mriedemdtp: if you can parse that and grok it, feel free to write the spec21:37
dansmithmriedem: cool, I just tweaked the hostapi this morning to do the thing we said it should do21:37
mriedemdansmith: ok21:37
mriedemhigh five21:37
mriedem\o\21:37
mriedem/o/21:37
dansmitho/*21:37
mriedemo/.21:37
melwitt.\o21:38
mriedemi'm just going to steal the classics from melwitt21:38
jrollo/'21:38
dtparmpit bump?21:38
jrollcheeeeeers21:38
dansmith~~.\o/.~~21:38
melwittlol21:38
mriedemodor lines?21:38
dansmithsmell lines yeah21:38
mriedemha21:38
dansmithI am in portland after all, where showers are optional21:38
dtpok mriedem, reading now21:39
*** gouthamr has quit IRC21:40
*** smatzek has quit IRC21:40
*** karimb has joined #openstack-nova21:41
*** lpetrut has quit IRC21:41
*** tblakes has quit IRC21:42
dtpwhat does the services table contain?21:46
mriedemservices21:47
dtphehe21:47
dtplike . . . keystone?21:47
mriedemno21:47
mriedeminfo about the actual nova services21:47
mriedemlike nova-compute, nova-scheduler, etc21:47
mriedemthe host/binary/topic fields21:48
mriedemservice versoin21:48
*** rfolco_ has quit IRC21:48
mriedemenabled/disabled21:48
mriedemet21:48
mriedem*etc21:48
mriedemhttps://developer.openstack.org/api-ref/compute/#compute-services-os-services21:48
dtpah.  thank you21:48
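The fields mriedem lists map onto a record roughly like this (a sketch; the values are hypothetical and the field names follow the os-services API response, not the exact DB schema):

```python
from dataclasses import dataclass

@dataclass
class Service:
    # The fields mriedem lists for nova's services table.
    host: str       # the host the service runs on
    binary: str     # nova-compute, nova-scheduler, etc.
    topic: str      # RPC topic, e.g. "compute"
    version: int    # service version, used for compat checks
    disabled: bool  # enabled/disabled flag

svc = Service(host="compute1", binary="nova-compute",
              topic="compute", version=16, disabled=False)
```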
*** Jeffrey4l has joined #openstack-nova21:48
mriedemheh "Lists all running Compute services for a tenant," that's wrong21:48
mriedemservices are not per-tenant21:48
mriedemsdague: more api-ref fun ^21:48
mriedemthat should be "for a region" right?21:49
mriedemor a cell i guess21:49
mriedembut i'm not sure we want to start using the c word in the api ref yet21:49
dansmithnot a cell21:49
dansmithyeah, don't say that21:49
mriedemtrue,21:49
mriedemapi isn't in the cell21:49
dansmithsay for a deployment or region or something21:49
mriedemyeah21:49
*** Jeffrey4l has quit IRC21:50
melwittdtp: to see how it's used, look at nova/db/sqlalchemy/models.py at Service to see the data model, then look at nova/objects/service.py to see the object API and you can sort of trace how it's used from using that as a starting point. also look at nova/service.py21:50
*** Jeffrey4l has joined #openstack-nova21:50
dtpok, thanks21:51
mriedemdansmith: guh, we need to update this at some point too https://docs.openstack.org/ops-guide/arch-scaling.html#cells-and-regions21:51
dansmithhmm, yeah21:52
*** felipemonteiro_ has quit IRC21:56
*** dtp is now known as dtp-afk21:58
mriedemdansmith: i've opened a docs bug https://bugs.launchpad.net/openstack-manuals/+bug/167361621:58
openstackLaunchpad bug 1673616 in openstack-manuals "Scaling in Operations Guide - cells section needs to be updated" [Undecided,New]21:58
mriedemdansmith: you might want to add/update/fix whatever i put in the description21:58
dansmithokay22:00
dansmithlooks alright to me at first skim but I'll think on it a bit22:00
openstackgerritMatt Riedemann proposed openstack/nova master: api-ref: fix description in os-services  https://review.openstack.org/44675722:01
*** aysyd has quit IRC22:05
*** mdrabe has quit IRC22:05
*** xyang1 has quit IRC22:08
openstackgerritmelanie witt proposed openstack/nova master: Improve descriptions for hostId, host, and hypervisor_hostname  https://review.openstack.org/44676122:18
*** liangy has quit IRC22:18
melwittmriedem: ^ patch for the host* api-ref stuff22:18
openstackgerritOctave Orgeron proposed openstack/nova-specs master: Enables MySQL Cluster Support for Nova  https://review.openstack.org/44663122:26
*** alexpilotti has joined #openstack-nova22:27
*** alexpilotti has quit IRC22:32
mriedemmelwitt: thanks, comments inline22:33
*** mriedem1 has joined #openstack-nova22:36
*** mriedem has quit IRC22:37
*** mriedem1 is now known as mriedem22:41
*** esberglu_ has quit IRC22:43
cfriesenis there a known issue in newton where pre_live_migration() can hit an RPC timeout waiting for a really big glance image to download to the destination?22:45
*** karimb has quit IRC22:46
*** ijw has quit IRC22:47
*** ijw has joined #openstack-nova22:48
*** ijw has quit IRC22:48
*** liangy has joined #openstack-nova22:48
openstackgerritOpenStack Proposal Bot proposed openstack/nova master: Updated from global requirements  https://review.openstack.org/44677722:56
*** ssurana has joined #openstack-nova23:02
*** jamielennox is now known as jamielennox|away23:03
*** iceyao has joined #openstack-nova23:03
*** jamielennox|away is now known as jamielennox23:05
*** iceyao has quit IRC23:09
mriedemeasy bug fix https://bugs.launchpad.net/nova/+bug/167362823:10
openstackLaunchpad bug 1673628 in OpenStack Compute (nova) "api-ref: 'tags' field is not in response parameters docs for "GET /servers/{server_id}"" [Medium,Confirmed]23:10
mriedemcfriesen: that's not a newton specific limitation23:11
mriedemthat's a latent issue forever23:11
mriedemcfriesen: see https://review.openstack.org/#/c/419662/123:11
*** dimtruck is now known as zz_dimtruck23:11
*** liangy has quit IRC23:11
*** zz_dimtruck is now known as dimtruck23:11
openstackgerritMatt Riedemann proposed openstack/nova-specs master: Repropose tag-instance-when-boot  https://review.openstack.org/41531523:13
mriedemKevin_Zheng: are you ok with the updates i made to ^?23:13
*** Jeffrey4l has quit IRC23:14
*** alexpilotti has joined #openstack-nova23:15
*** alexpilotti has quit IRC23:15
*** alexpilotti has joined #openstack-nova23:15
mriedemmelwitt: i think your func test change might be causing a race issue with global state23:18
mriedemyeah you need to use fake.restore_nodes23:20
*** yingjun has quit IRC23:20
*** dimtruck is now known as zz_dimtruck23:21
*** sbezverk has joined #openstack-nova23:22
sbezverkhello, I need to check liveness of the nova placement api process, could you recommend any methods to check not just that the socket is opened but also that the process is alive and sane?23:24
*** baoli has quit IRC23:24
mriedemsbezverk: make a curl request to / ?23:24
mriedemthat's testing much more than just the process is running23:24
sbezverkmriedem: would it not just check apache liveness?23:25
mriedemmaking a curl request to the actual endpoint's root would provide the versions available, which means it's up and running and accepting requests23:25
mriedemand serving a response23:25
mriedemi don't know if a token is required for GET / on the placement endpoint though23:25
sbezverkmriedem: cool, thank you very much. I was not sure if curling for root is sufficient23:26
mriedemyup that will get you the versions available23:26
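A liveness probe along the lines mriedem suggests just GETs the endpoint root and checks that a version document comes back. The response shape below is an assumption based on the usual OpenStack version-discovery format, and check_alive is a hypothetical helper:

```python
import json

# Hypothetical helper: given the body returned by GET / on the placement
# endpoint, consider the service alive if it advertises any versions.
def check_alive(body: str) -> bool:
    try:
        doc = json.loads(body)
    except ValueError:
        return False
    return bool(doc.get("versions"))

# Sample body in the usual version-discovery shape (an assumption, not a
# captured placement response).
sample = json.dumps({"versions": [{"id": "v1.0", "status": "CURRENT"}]})
ok = check_alive(sample)
bad = check_alive("<html>apache error page</html>")
```

Checking the parsed body, rather than just the HTTP status, distinguishes a live placement process from a bare apache socket.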
*** Jeffrey4l has joined #openstack-nova23:29
*** ijw has joined #openstack-nova23:33
*** ijw has quit IRC23:33
*** ijw has joined #openstack-nova23:33
*** masber has quit IRC23:36
*** amotoki has joined #openstack-nova23:36
*** mlavalle has quit IRC23:39
openstackgerritGhanshyam Mann proposed openstack/nova master: Add api-ref for filter/sort whitelist  https://review.openstack.org/42176023:39
sbezverkmriedem: any known issue with placement in ocata you are aware?23:39
openstackgerritMatt Riedemann proposed openstack/nova master: Fix functional regression/recreate test for bug 1671648  https://review.openstack.org/44668523:39
openstackbug 1671648 in OpenStack Compute (nova) ocata "Instances are not rescheduled after deploy fails" [High,In progress] https://launchpad.net/bugs/1671648 - Assigned to Matt Riedemann (mriedem)23:39
mriedemsbezverk: nothing major for placement that i can think of23:40
mriedemsbezverk: we did release some fixes for ocata in 15.0.1 earlier in the week,23:40
mriedemso if you're upgrading, i'd use 15.0.123:40
*** amotoki has quit IRC23:41
sbezverkmriedem: I'm bringing up ocata images at the gate in the kolla-kubernetes project23:41
sbezverkwhat worked with stable/newton seems broken in ocata, but it might be our issue too..23:41
*** Sukhdev has joined #openstack-nova23:41
mriedemsbezverk: i guess i'd need a more specific example of what's not working23:42
*** hongbin has quit IRC23:43
Sukhdevdear Nova folks, after a fresh install of devstack, Nova continues to return a "No Valid host found" error - can somebody provide some wisdom, please? See the details here - http://paste.openstack.org/show/603034/23:43
*** zioproto has joined #openstack-nova23:44
sbezverkmriedem: :) glad you asked. When I start instance I get " Placement API service is not responding." in nova scheduler log23:44
mriedemsbezverk: are you starting the placement service before the nova-scheduler service?23:44
sbezverkmriedem: it looks like a connectivity issue but nothing changed other than just the version of the images23:45
mriedemis there a "placement" endpoint in the service catalog?23:45
*** dave-mccowan has quit IRC23:45
*** marst_ has quit IRC23:45
sbezverkmriedem: yep, service catalog shows placement entries23:45
mriedemis nova.conf for the scheduler configured to use placement in the [placement] section, and are the credentials in there correct?23:45
sbezverkmriedem: all valid points, but as I said nothing changed in the gate job other than image version23:46
mriedemthe scheduler didn't use placement in newton,23:46
mriedemso that's a main difference with ocata23:46
sbezverkmriedem: was there any change in endpoint registrations between newton and ocata?23:46
mriedemin newton, only the nova-compute service tried connecting to the placement service23:46
mriedemno there was no change in endpoint registration23:46
sbezverkmriedem: AHHH I did not know that23:47
mriedemi'd compare the nova.conf [placement] sections between your nova-compute and nova-scheduler services23:47
mriedemmake sure that whatever you did to make it work for nova-compute, that you do that for nova-scheduler too23:47
mriedemsbezverk: https://docs.openstack.org/developer/nova/placement.html#upgrade-notes in case you haven't seen that yet23:47
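The [placement] section being compared would look something like this (option names from Ocata-era nova; all values are placeholders):

```ini
[placement]
# Credentials the scheduler (and compute) use to reach the placement API.
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
user_domain_name = Default
password = PLACEMENT_PASS
```

Since the scheduler only started talking to placement in Ocata, a section that only ever existed in the nova-compute config is the first thing to rule out.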
sbezverkmriedem: if you have 2 minutes I can get it, would be great if you could take a look at it23:48
mriedemoh boy23:48
mriedem:)23:48
mriedemSukhdev: is the nova-compute service running and listed as up when you run "nova service-list"?23:48
Kevin_Zhengmriedem: yeah thanks23:49
mriedemKevin_Zheng: yeah as in you agree with the changes, or yeah as in you will look?23:49
Sukhdevmriedem : yes  - see here - http://paste.openstack.org/show/603035/23:50
Kevin_Zhengagree:)23:50
sbezverkmriedem: http://paste.openstack.org/show/603036/23:51
sbezverkmriedem: look identical to me :(23:51
Kevin_Zhengthanks a lot23:52
Sukhdevmriedem : anything else that I should look for? BTW, I am running the latest code (did git pull 5-10 days ago)23:53
*** abalutoiu has joined #openstack-nova23:53
*** marst has joined #openstack-nova23:55
mriedemKevin_Zheng: yw23:56
mriedemSukhdev: check the nova-compute logs for errors23:56
mriedemsbezverk: those look the same to me too23:57
mriedemsbezverk: the services all run in individual containers right?23:57
mriedemis there anything in the deployment tooling that has to do anything specific for the compute service wrt placement that the scheduler container isn't doing?23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!