Wednesday, 2017-04-19

*** takashin has joined #openstack-nova00:00
*** Apoorva has quit IRC00:01
*** Apoorva has joined #openstack-nova00:04
*** Apoorva has quit IRC00:06
*** ijw has quit IRC00:06
*** ijw has joined #openstack-nova00:06
*** Apoorva has joined #openstack-nova00:10
*** Apoorva has quit IRC00:14
*** ijw has quit IRC00:20
*** thorst has quit IRC00:25
*** ijw has joined #openstack-nova00:41
mriedemdansmith: the claims in scheduler spec is close https://review.openstack.org/#/c/437424/ - i've posted some ideas on how to handle the issue with the overhead estimates coming from the virt driver which are used in the claim, and which we won't have in the controller layer00:42
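The overhead estimates mentioned here come from the compute driver's per-instance overhead hook. A minimal sketch of that interface, assuming the estimate_instance_overhead() name from nova's virt driver layer and purely illustrative values:

    class ComputeDriver(object):
        def estimate_instance_overhead(self, instance_info):
            """Return extra resources the hypervisor consumes on top of the flavor.

            instance_info carries flavor-like fields (memory_mb, vcpus, ...).
            The claim adds these values to the flavor before checking capacity.
            """
            return {'memory_mb': 0, 'disk_gb': 0}

Drivers override this per hypervisor (e.g. extra RAM for the hypervisor shim, or extra disk for a paging file), which is why the values are not known until a specific compute host has been picked.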
*** lyan has quit IRC00:42
*** david-lyle has joined #openstack-nova00:44
*** amotoki has quit IRC00:45
*** gjayavelu has joined #openstack-nova00:46
*** amotoki has joined #openstack-nova00:50
*** johnsom has quit IRC00:53
*** johnsom has joined #openstack-nova00:53
*** vladikr has joined #openstack-nova00:53
*** morgabra has quit IRC00:54
*** morgabra has joined #openstack-nova00:57
*** amotoki has quit IRC00:59
*** liusheng has quit IRC01:00
*** gyee has quit IRC01:01
*** crushil has joined #openstack-nova01:02
*** nic has quit IRC01:03
*** phuongnh has joined #openstack-nova01:03
*** cNilesh has joined #openstack-nova01:04
*** NikhilS has joined #openstack-nova01:06
*** kevinz has joined #openstack-nova01:06
*** mriedem has quit IRC01:08
*** Apoorva has joined #openstack-nova01:08
*** ijw has quit IRC01:09
*** amotoki has joined #openstack-nova01:14
*** Sukhdev has quit IRC01:19
*** gus has quit IRC01:23
*** gus has joined #openstack-nova01:26
*** Apoorva_ has joined #openstack-nova01:27
openstackgerritSTEW TY proposed openstack/nova master: Transform instance.unrescue notifications  https://review.openstack.org/38827501:28
*** Kevin_Zheng has joined #openstack-nova01:31
*** Apoorva has quit IRC01:31
*** iceyao has joined #openstack-nova01:31
*** Apoorva_ has quit IRC01:32
*** Jack_Iv has joined #openstack-nova01:38
*** Guest86170 has quit IRC01:40
*** Guest86170 has joined #openstack-nova01:42
*** dtp has quit IRC01:42
*** Jack_Iv has quit IRC01:43
*** gjayavelu has quit IRC01:44
*** iceyao has quit IRC01:45
*** ngupta has joined #openstack-nova01:47
*** tbachman has quit IRC01:48
*** Sukhdev has joined #openstack-nova01:48
*** Sukhdev has quit IRC01:51
*** stvnoyes has joined #openstack-nova01:53
*** gongysh has joined #openstack-nova01:54
*** thorst has joined #openstack-nova02:00
*** iceyao has joined #openstack-nova02:01
*** ssurana has quit IRC02:03
*** takashin has quit IRC02:04
*** Shunli has joined #openstack-nova02:05
*** ociuhandu has quit IRC02:06
*** yongjiexu has joined #openstack-nova02:06
*** fragatin_ has joined #openstack-nova02:11
*** fragatina has quit IRC02:14
*** fragatin_ has quit IRC02:15
*** thorst has quit IRC02:16
*** hongbin has joined #openstack-nova02:16
*** fragatina has joined #openstack-nova02:17
*** yamahata has quit IRC02:17
Kevin_Zhengmriedem: Hi, I replied on https://bugs.launchpad.net/nova/+bug/164517502:18
openstackLaunchpad bug 1645175 in OpenStack Compute (nova) "Neutron port got deleted when attach interface failed" [Medium,In progress] - Assigned to Matt Riedemann (mriedem)02:18
*** fragatina has quit IRC02:21
*** liusheng has joined #openstack-nova02:32
*** ngupta has quit IRC02:32
*** ngupta has joined #openstack-nova02:33
*** amotoki has quit IRC02:35
*** ociuhandu has joined #openstack-nova02:36
*** ngupta has quit IRC02:37
openstackgerritZhaoBo proposed openstack/nova master: [WIP]Less neutron call in build_nw_info  https://review.openstack.org/45784502:40
*** dave-mccowan has quit IRC02:47
openstackgerritHuan Xie proposed openstack/nova master: WIP: XenAPI use os-xenapi V2 in nova  https://review.openstack.org/45349302:48
*** dharinic has joined #openstack-nova02:48
*** amotoki has joined #openstack-nova02:50
openstackgerritHuan Xie proposed openstack/nova master: WIP: XenAPI use os-xenapi V2 in nova  https://review.openstack.org/45349302:57
openstackgerritZhenyu Zheng proposed openstack/nova master: Support tag instances when boot  https://review.openstack.org/39432103:03
*** yongjiexu has quit IRC03:06
*** ducnc has quit IRC03:06
*** ducnc1 has joined #openstack-nova03:07
*** dharinic has quit IRC03:07
*** yongjiexu has joined #openstack-nova03:08
*** kfarr has quit IRC03:08
*** ducnc1 is now known as ducnc03:09
*** iceyao has quit IRC03:10
*** phuongnh has quit IRC03:10
*** MasterOfBugs has quit IRC03:11
*** thorst has joined #openstack-nova03:13
*** amotoki has quit IRC03:15
*** ngupta has joined #openstack-nova03:17
*** links has joined #openstack-nova03:19
*** kencjohnston_ has quit IRC03:19
*** kencjohnston has joined #openstack-nova03:19
*** hongbin has quit IRC03:20
*** jerrygb has joined #openstack-nova03:21
*** imacdonn has quit IRC03:28
*** gjayavelu has joined #openstack-nova03:28
*** imacdonn has joined #openstack-nova03:28
*** ngupta has quit IRC03:28
*** ngupta has joined #openstack-nova03:29
*** ngupta has quit IRC03:31
*** ngupta has joined #openstack-nova03:32
*** gouthamr has quit IRC03:32
*** thorst has quit IRC03:32
*** amotoki has joined #openstack-nova03:35
*** ngupta has quit IRC03:36
*** iceyao has joined #openstack-nova03:37
*** lifeless has quit IRC03:38
*** mdrabe has joined #openstack-nova03:40
*** crushil has quit IRC03:41
*** iceyao has quit IRC03:42
*** hieulq_ has joined #openstack-nova03:42
*** mdrabe has quit IRC03:44
*** lifeless has joined #openstack-nova03:46
openstackgerritMatt Riedemann proposed openstack/nova master: Remove unused os-pci API  https://review.openstack.org/45785403:49
*** nicolasbock has quit IRC03:51
openstackgerritHuan Xie proposed openstack/nova master: WIP: XenAPI use os-xenapi v2 in nova  https://review.openstack.org/45349303:51
*** baoli has quit IRC03:53
*** bmace has quit IRC03:54
*** ducnc1 has joined #openstack-nova03:56
openstackgerritmelanie witt proposed openstack/nova master: Add FixedIP.get_count_by_project()  https://review.openstack.org/44624603:56
openstackgerritmelanie witt proposed openstack/nova master: Add FloatingIP.get_count_by_project()  https://review.openstack.org/44624703:56
openstackgerritmelanie witt proposed openstack/nova master: Add get_count_by_vm_state() to Instance object  https://review.openstack.org/44624403:56
*** gongysh has quit IRC03:56
openstackgerritmelanie witt proposed openstack/nova master: Add SecurityGroup.get_counts()  https://review.openstack.org/44624503:56
openstackgerritmelanie witt proposed openstack/nova master: Remove 'reserved' count from used limits  https://review.openstack.org/44624203:56
openstackgerritmelanie witt proposed openstack/nova master: Remove useless quota_usage_refresh from nova-manage  https://review.openstack.org/44624303:56
openstackgerritmelanie witt proposed openstack/nova master: Count server groups to check quota  https://review.openstack.org/44624003:56
openstackgerritmelanie witt proposed openstack/nova master: Count networks to check quota  https://review.openstack.org/44624103:56
openstackgerritmelanie witt proposed openstack/nova master: Add check_deltas() and limit_check_project_and_user() to Quotas  https://review.openstack.org/44623903:56
openstackgerritmelanie witt proposed openstack/nova master: Count instances to check quota  https://review.openstack.org/41652103:56
openstackgerritmelanie witt proposed openstack/nova master: Make Quotas object favor the API database  https://review.openstack.org/41094503:56
openstackgerritmelanie witt proposed openstack/nova master: Add online migration to move quotas to API database  https://review.openstack.org/41094603:56
openstackgerritmelanie witt proposed openstack/nova master: Add InstanceGroup.get_counts()  https://review.openstack.org/45785703:56
openstackgerritmelanie witt proposed openstack/nova master: Add InstanceGroup._remove_members_in_db  https://review.openstack.org/45785803:56
openstackgerritmelanie witt proposed openstack/nova master: Count server group members to check quota  https://review.openstack.org/45785903:56
openstackgerritmelanie witt proposed openstack/nova master: Count security groups to check quota  https://review.openstack.org/45786003:56
openstackgerritmelanie witt proposed openstack/nova master: Count fixed ips for checking quota  https://review.openstack.org/45786103:56
openstackgerritmelanie witt proposed openstack/nova master: Count floating ips to check quota  https://review.openstack.org/45786203:56
*** ducnc has quit IRC03:56
*** ducnc1 is now known as ducnc03:56
*** amotoki_ has joined #openstack-nova03:58
*** amotoki has quit IRC04:00
*** takashin has joined #openstack-nova04:00
*** Sukhdev has joined #openstack-nova04:04
*** takashin has left #openstack-nova04:05
*** takashin has joined #openstack-nova04:06
*** amotoki_ has quit IRC04:06
*** phuongnh has joined #openstack-nova04:09
openstackgerritTakashi NATSUME proposed openstack/nova master: Enable cold migration with target host(1/2)  https://review.openstack.org/40895504:10
*** vks1 has joined #openstack-nova04:11
openstackgerritTakashi NATSUME proposed openstack/nova master: Enable cold migration with target host(2/2)  https://review.openstack.org/40896404:11
openstackgerritTakashi NATSUME proposed openstack/python-novaclient master: Microversion 2.42 - Fix tag attribute disappearing  https://review.openstack.org/42951204:12
openstackgerritTakashi NATSUME proposed openstack/nova master: api-ref: Add parameters in cold migrate action  https://review.openstack.org/41004204:13
*** amotoki has joined #openstack-nova04:19
*** slaweq has joined #openstack-nova04:20
*** trinaths has joined #openstack-nova04:20
*** iceyao has joined #openstack-nova04:21
*** tbachman has joined #openstack-nova04:22
*** salv-orlando has joined #openstack-nova04:23
*** slaweq has quit IRC04:25
*** fragatina has joined #openstack-nova04:26
*** iceyao has quit IRC04:26
*** adisky_ has joined #openstack-nova04:26
*** salv-orlando has quit IRC04:28
*** fragatina has quit IRC04:29
*** fragatina has joined #openstack-nova04:30
*** diga has joined #openstack-nova04:34
*** links has quit IRC04:36
*** sridharg has joined #openstack-nova04:36
*** moshele has joined #openstack-nova04:36
*** moshele has quit IRC04:39
*** sudipto has joined #openstack-nova04:39
*** moshele has joined #openstack-nova04:39
*** hieulq_ has quit IRC04:40
*** ayogi has joined #openstack-nova04:42
*** jianghuaw has quit IRC04:44
*** jianghuaw has joined #openstack-nova04:45
*** links has joined #openstack-nova04:53
*** ratailor has joined #openstack-nova04:53
*** baoli has joined #openstack-nova04:54
*** bkopilov has joined #openstack-nova04:54
*** salv-orlando has joined #openstack-nova04:56
*** moshele has quit IRC04:57
*** hieulq_ has joined #openstack-nova04:57
*** moshele has joined #openstack-nova04:57
*** hieulq_ has quit IRC04:58
*** baoli has quit IRC04:59
*** yongjiexu has quit IRC04:59
*** bkopilov has quit IRC04:59
*** bkopilov has joined #openstack-nova05:00
*** mdnadeem has joined #openstack-nova05:00
*** gongysh has joined #openstack-nova05:01
*** iceyao has joined #openstack-nova05:01
*** bkopilov has quit IRC05:02
openstackgerritTakashi NATSUME proposed openstack/nova master: Add functional tests for cold migration to same host  https://review.openstack.org/41492605:03
*** bkopilov has joined #openstack-nova05:04
*** udesale has joined #openstack-nova05:04
*** iceyao has quit IRC05:06
*** tbachman has quit IRC05:08
*** tojuvone has quit IRC05:14
*** yamahata has joined #openstack-nova05:15
*** tbachman has joined #openstack-nova05:17
*** Jack_Iv has joined #openstack-nova05:20
*** ijw has joined #openstack-nova05:23
*** amotoki_ has joined #openstack-nova05:23
*** yingwei has joined #openstack-nova05:24
*** links has quit IRC05:25
*** jerrygb has quit IRC05:25
*** tovin07_ has joined #openstack-nova05:26
*** amotoki has quit IRC05:26
*** ltomasbo|away is now known as ltomasbo05:26
*** Jack_Iv has quit IRC05:28
openstackgerritMaho Koshiya proposed openstack/nova master: Add interfaces functional negative tests  https://review.openstack.org/44289205:29
*** thorst has joined #openstack-nova05:29
*** markvoelker has quit IRC05:29
*** Jack_Iv has joined #openstack-nova05:29
*** coreywright has quit IRC05:32
*** nmathew has joined #openstack-nova05:36
*** links has joined #openstack-nova05:37
openstackgerritmelanie witt proposed openstack/nova master: Add FixedIP.get_count_by_project()  https://review.openstack.org/44624605:38
openstackgerritmelanie witt proposed openstack/nova master: Add FloatingIP.get_count_by_project()  https://review.openstack.org/44624705:38
openstackgerritmelanie witt proposed openstack/nova master: Add get_count_by_vm_state() to Instance object  https://review.openstack.org/44624405:38
openstackgerritmelanie witt proposed openstack/nova master: Add SecurityGroup.get_counts()  https://review.openstack.org/44624505:38
openstackgerritmelanie witt proposed openstack/nova master: Remove 'reserved' count from used limits  https://review.openstack.org/44624205:38
openstackgerritmelanie witt proposed openstack/nova master: Remove useless quota_usage_refresh from nova-manage  https://review.openstack.org/44624305:38
openstackgerritmelanie witt proposed openstack/nova master: Count server groups to check quota  https://review.openstack.org/44624005:38
openstackgerritmelanie witt proposed openstack/nova master: Count networks to check quota  https://review.openstack.org/44624105:38
openstackgerritmelanie witt proposed openstack/nova master: Count instances to check quota  https://review.openstack.org/41652105:38
openstackgerritmelanie witt proposed openstack/nova master: Add InstanceGroup.get_counts()  https://review.openstack.org/45785705:38
openstackgerritmelanie witt proposed openstack/nova master: Add InstanceGroup._remove_members_in_db  https://review.openstack.org/45785805:38
openstackgerritmelanie witt proposed openstack/nova master: Count server group members to check quota  https://review.openstack.org/45785905:38
openstackgerritmelanie witt proposed openstack/nova master: Count security groups to check quota  https://review.openstack.org/45786005:38
openstackgerritmelanie witt proposed openstack/nova master: Count fixed ips for checking quota  https://review.openstack.org/45786105:38
openstackgerritmelanie witt proposed openstack/nova master: Count floating ips to check quota  https://review.openstack.org/45786205:38
*** omkar_telee has joined #openstack-nova05:38
omkar_teleeHi, I'm facing a weird issue with nova conductor05:40
omkar_teleeERROR nova.servicegroup.drivers.db DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'UPDATE services SET updated_at=%(updated_at)s, report_count=%(report_count)s, last_seen_up=%(last_seen_up)s WHERE services.id = %(services_id)s'] [parameters: {'last_seen_up': datetime.datetime(2017, 4, 19, 5, 38, 13, 911802), 'services_id': 34, 'updated_at': datetime.datetime(2017, 4, 19, 5, 38,05:40
omkar_telee13, 912601), 'report_count': 1776936}]05:40
*** ekuris has joined #openstack-nova05:41
*** zhurong has joined #openstack-nova05:42
*** slaweq has joined #openstack-nova05:42
*** iceyao has joined #openstack-nova05:46
*** armax has quit IRC05:48
*** thorst has quit IRC05:48
*** prateek has joined #openstack-nova05:48
*** moshele has quit IRC05:50
*** iceyao has quit IRC05:51
*** karthiks has joined #openstack-nova05:52
*** baoli has joined #openstack-nova05:55
*** moshele has joined #openstack-nova06:03
*** salv-orlando has quit IRC06:04
*** baoli has quit IRC06:06
*** rcernin has joined #openstack-nova06:08
*** moshele has quit IRC06:10
*** zenoway has joined #openstack-nova06:13
*** zenoway has quit IRC06:16
*** zenoway has joined #openstack-nova06:16
*** zenoway has quit IRC06:19
*** ijw has quit IRC06:19
*** moshele has joined #openstack-nova06:24
*** voelzmo has joined #openstack-nova06:25
*** zenoway has joined #openstack-nova06:27
*** iceyao has joined #openstack-nova06:28
*** markvoelker has joined #openstack-nova06:29
*** markus_z has joined #openstack-nova06:30
openstackgerritZhenyu Zheng proposed openstack/nova master: Support tag instances when boot  https://review.openstack.org/39432106:32
*** pcaruana has joined #openstack-nova06:33
*** iceyao has quit IRC06:33
*** markvoelker has quit IRC06:35
*** tojuvone has joined #openstack-nova06:39
openstackgerritRitesh proposed openstack/nova master: Add image flatten when unshelve rbd image backend  https://review.openstack.org/45788606:41
*** tbachman has quit IRC06:43
*** ralonsoh has joined #openstack-nova06:44
*** thorst has joined #openstack-nova06:45
*** nmathew- has joined #openstack-nova06:47
*** thorst has quit IRC06:49
openstackgerritTakashi NATSUME proposed openstack/nova master: Update doc/source/process.rst  https://review.openstack.org/45788806:50
*** zenoway has quit IRC06:50
*** nmathew has quit IRC06:51
*** zenoway has joined #openstack-nova06:53
*** jaosorior has joined #openstack-nova06:53
openstackgerritTakashi NATSUME proposed openstack/nova master: Add functional tests for cold migration to same host  https://review.openstack.org/41492606:54
*** Jack_Iv has quit IRC06:55
*** Jack_Iv has joined #openstack-nova07:00
*** winston-d_ has joined #openstack-nova07:00
*** Sukhdev has quit IRC07:01
*** tesseract has joined #openstack-nova07:01
*** MasterOfBugs has joined #openstack-nova07:03
*** sridharg has quit IRC07:03
*** yamahata has quit IRC07:03
*** belmoreira has joined #openstack-nova07:04
*** iceyao has joined #openstack-nova07:12
*** tovin07 has joined #openstack-nova07:12
*** damien_r has joined #openstack-nova07:13
openstackgerritDinesh Bhor proposed openstack/nova master: Limits the scheduler_hints additionalProperties to string and list  https://review.openstack.org/45789007:13
*** Jack_Iv has quit IRC07:15
*** ijw has joined #openstack-nova07:17
*** salv-orlando has joined #openstack-nova07:17
openstackgerritTakashi NATSUME proposed openstack/nova master: Avoid forcing translation on logging calls  https://review.openstack.org/41387607:19
*** faizy has joined #openstack-nova07:20
*** ijw has quit IRC07:22
*** Jack_Iv has joined #openstack-nova07:25
*** trinaths is now known as trinaths_at_lunc07:26
*** trinaths_at_lunc is now known as trinaths_lunch07:26
*** markvoelker has joined #openstack-nova07:30
*** sridharg has joined #openstack-nova07:34
*** markvoelker has quit IRC07:35
*** jpena|off is now known as jpena07:37
*** baoli has joined #openstack-nova07:37
rpodolyakaomkar_telee: it may happen when a mysql server is restarted or a galera cluster configuration is changed07:39
*** baoli has quit IRC07:42
*** zsli_ has joined #openstack-nova07:43
*** ratailor is now known as ratailor|Lunch07:45
*** thorst has joined #openstack-nova07:46
*** Shunli has quit IRC07:46
*** Jack_Iv has quit IRC07:47
*** omkar_telee has quit IRC07:49
*** trinaths_lunch is now known as trinaths07:50
*** Jack_Iv has joined #openstack-nova07:50
*** Jack_Iv has quit IRC07:50
*** thorst has quit IRC07:50
*** jerrygb has joined #openstack-nova07:55
*** iceyao has quit IRC07:56
*** trinaths has left #openstack-nova07:57
*** zzzeek has quit IRC08:00
*** jerrygb has quit IRC08:00
*** gabor_antal_ has quit IRC08:01
*** zzzeek has joined #openstack-nova08:01
*** omkar_telee has joined #openstack-nova08:03
*** ducnc has quit IRC08:06
omkar_teleerpodolyaka : Thanks for the input .. can you give me some directions to resolve this issue?08:07
*** lucas-afk is now known as lucasagomes08:07
omkar_teleewe have a galera cluster .. many queries I can see are in 'wsrep in pre-commit stage'08:08
omkar_teleenova conductor, cinder and neutron are continuously logging similar errors08:09
rpodolyakaomkar_telee: I'd start from mysql logs. look for the signs of errors or warnings08:09
rpodolyakaomkar_telee: and check whether all cluster members have a consistent view of the galera cluster08:09
rpodolyaka(i.e. they all see the same set of nodes in the cluster)08:10
omkar_teleeMysql logs : WSREP: Failed to report last committed - (Interrupted system call)08:10
*** karimb has joined #openstack-nova08:10
omkar_teleeWSREP: referenced FK check fail: 35WSREP: referenced FK check fail:08:11
omkar_teleei am not a database guy :(08:11
*** carthaca_ has joined #openstack-nova08:12
rpodolyakaomkar_telee: hmm, I haven't seen this problem before. According to https://www.percona.com/blog/2016/05/27/galera-error-failed-to-report-last-committed-interrupted-system-call/ it's just warning, though...08:14
omkar_teleerpodolyaka : yes, that's why I am a little confused08:15
*** coreywright has joined #openstack-nova08:15
*** takashin has left #openstack-nova08:18
omkar_teleerpodolyaka : we can see galera cluster status OK ..08:18
*** ijw has joined #openstack-nova08:19
rpodolyakaomkar_telee: http://galeracluster.com/documentation-webpages/monitoringthecluster.html#checking-cluster-integrity08:19
rpodolyakaomkar_telee: check that all the nodes are the part of the same cluster08:19
omkar_teleerpodolyaka : just on the same page ... checked all the commands... all look normal ..08:20
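The integrity checks from the linked galera page boil down to a few status variables per node; a minimal sketch of running them, with placeholder credentials and host names:

    import pymysql

    # Placeholders -- run this against every node in the cluster in turn.
    conn = pymysql.connect(host='galera-node1', user='root', password='secret')
    checks = [
        "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_state_uuid'",   # identical on all nodes
        "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'",         # expected node count
        "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'",       # should be 'Primary'
        "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'",  # should be 'Synced'
    ]
    with conn.cursor() as cur:
        for query in checks:
            cur.execute(query)
            print(cur.fetchone())
    conn.close()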
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove redundant code  https://review.openstack.org/45791708:21
rpodolyakaomkar_telee: and do you use pacemaker, maybe? it may restart the nodes08:21
omkar_teleeyes, we use pacemaker08:21
*** amotoki_ has quit IRC08:22
*** hferenc has quit IRC08:22
*** gjayavelu has quit IRC08:23
*** ijw has quit IRC08:24
*** derekh has joined #openstack-nova08:25
*** Jack_Iv has joined #openstack-nova08:25
*** mvk has quit IRC08:26
omkar_teleerpodolyaka : no, we dont use pacemaker08:26
omkar_teleeits just haproxy and vip08:26
*** efoley has joined #openstack-nova08:27
*** efoley_ has joined #openstack-nova08:27
*** efoley_ has quit IRC08:27
*** aloga has quit IRC08:34
*** aloga has joined #openstack-nova08:35
*** zenoway has quit IRC08:36
*** MasterOfBugs has quit IRC08:37
*** baoli has joined #openstack-nova08:38
*** tuan_luong has joined #openstack-nova08:39
*** baoli has quit IRC08:43
*** peter-hamilton has quit IRC08:47
*** thorst has joined #openstack-nova08:47
*** moshele has quit IRC08:50
*** thorst has quit IRC08:52
*** moshele has joined #openstack-nova08:52
*** sambetts|afk is now known as sambetts08:52
openstackgerritStephen Finucane proposed openstack/python-novaclient master: Explicitly set 'builders' option  https://review.openstack.org/45792908:54
openstackgerritStephen Finucane proposed openstack/python-novaclient master: doc: Remove cruft from conf.py  https://review.openstack.org/45793008:54
*** moshele has quit IRC08:55
*** mvk has joined #openstack-nova08:56
*** tuan_luong has quit IRC08:57
*** sree has joined #openstack-nova09:00
*** tuan_luong has joined #openstack-nova09:02
*** tuan_luong has quit IRC09:04
*** ratailor|Lunch is now known as ratailor09:05
*** tuan_luong has joined #openstack-nova09:06
*** cdent has joined #openstack-nova09:08
*** moshele has joined #openstack-nova09:08
*** slaweq has quit IRC09:09
*** slaweq has joined #openstack-nova09:09
*** moshele has quit IRC09:10
*** tuan_luong has quit IRC09:10
*** mvk has quit IRC09:12
*** ociuhandu has quit IRC09:12
*** zenoway has joined #openstack-nova09:15
*** rmart04 has joined #openstack-nova09:16
*** slaweq has quit IRC09:16
*** slaweq has joined #openstack-nova09:17
*** slaweq has quit IRC09:17
*** slaweq has joined #openstack-nova09:17
*** ijw has joined #openstack-nova09:20
*** fragatin_ has joined #openstack-nova09:20
*** fragatina has quit IRC09:22
*** thomasem has quit IRC09:22
*** thomasem has joined #openstack-nova09:23
*** mvk has joined #openstack-nova09:23
*** ijw has quit IRC09:26
*** zsli_ has quit IRC09:27
*** amotoki has joined #openstack-nova09:29
*** udesale__ has joined #openstack-nova09:30
*** moshele has joined #openstack-nova09:31
*** djohnsto has joined #openstack-nova09:32
*** udesale has quit IRC09:33
*** tuan_luong has joined #openstack-nova09:33
*** moshele has quit IRC09:37
*** zenoway has quit IRC09:38
*** tuan_luong has quit IRC09:38
*** zenoway has joined #openstack-nova09:38
*** mnestratov has joined #openstack-nova09:38
*** baoli has joined #openstack-nova09:39
*** moshele has joined #openstack-nova09:39
*** prateek_ has joined #openstack-nova09:40
*** prateek_ has quit IRC09:41
*** prateek_ has joined #openstack-nova09:41
*** prateek has quit IRC09:42
*** baoli has quit IRC09:43
*** aarefiev_afk is now known as aarefiev09:45
*** amotoki has quit IRC09:45
*** thorst has joined #openstack-nova09:47
*** salv-orl_ has joined #openstack-nova09:48
*** ociuhandu has joined #openstack-nova09:48
*** Jack_Iv has quit IRC09:50
*** salv-orlando has quit IRC09:51
*** thorst has quit IRC09:52
*** Jack_Iv has joined #openstack-nova09:53
*** tuan_luong has joined #openstack-nova09:53
*** tuan_luong has quit IRC09:54
*** tuan_luong has joined #openstack-nova09:55
*** sdague has joined #openstack-nova09:55
*** cNilesh has quit IRC09:55
*** Jack_Iv has quit IRC09:55
*** Jack_Iv has joined #openstack-nova09:56
*** cNilesh has joined #openstack-nova09:56
*** amotoki has joined #openstack-nova09:58
*** Jack_Iv has quit IRC09:58
*** Jack_Iv has joined #openstack-nova09:58
*** cdent has quit IRC10:00
*** karimb has quit IRC10:02
*** NikhilS has quit IRC10:04
*** gszasz has joined #openstack-nova10:04
*** karimb has joined #openstack-nova10:10
*** zenoway has quit IRC10:11
*** djohnsto has quit IRC10:15
*** zenoway has joined #openstack-nova10:15
*** tovin07_ has quit IRC10:19
openstackgerritZhaokun Fu proposed openstack/nova-specs master: fix typos  https://review.openstack.org/45800110:21
*** ijw has joined #openstack-nova10:21
*** tuan_luong has quit IRC10:21
openstackgerritAlex Xu proposed openstack/nova master: Add check to ensure the versioned_methods are sequential  https://review.openstack.org/45800410:22
*** cNilesh has quit IRC10:22
*** ijw has quit IRC10:26
*** tuan_luong has joined #openstack-nova10:26
*** nicolasbock has joined #openstack-nova10:28
*** kevinz has quit IRC10:28
*** yingwei has quit IRC10:29
openstackgerritRitesh proposed openstack/nova master: Add image flatten when unshelve rbd image backend  https://review.openstack.org/45788610:29
*** zenoway has quit IRC10:30
*** zenoway has joined #openstack-nova10:31
*** zhurong has quit IRC10:34
*** udesale__ has quit IRC10:36
*** Jack_Iv has quit IRC10:36
*** omkar_telee_ has joined #openstack-nova10:38
*** prateek has joined #openstack-nova10:38
*** prateek_ has quit IRC10:38
*** baoli has joined #openstack-nova10:40
*** Jack_Iv has joined #openstack-nova10:40
*** omkar_telee has quit IRC10:41
*** baoli has quit IRC10:44
openstackgerritZhaokun Fu proposed openstack/nova master: fix typos  https://review.openstack.org/45801610:44
johnthetubaguysfinucan: do you have more context on this one? https://review.openstack.org/#/c/43002610:44
sfinucanjohnthetubaguy: What particular aspects of it?10:45
johnthetubaguysfinucan: added some questions on the patch10:46
* sfinucan looking10:46
johnthetubaguysfinucan: basically, I am trying to work out if we are using the API correctly10:46
openstackgerritZhaokun Fu proposed openstack/nova-specs master: fix version typo  https://review.openstack.org/45801910:47
johnthetubaguyI may have found some more context in https://bugs.launchpad.net/openstack-api-site/+bug/124201910:48
openstackLaunchpad bug 1226279 in openstack-manuals "duplicate for #1242019 document multiprovidernet extension" [Medium,Fix released] - Assigned to Diane Fleming (diane-fleming)10:48
johnthetubaguy"For these attributes validation rules are identical to the provider networks extension. Obviously both extensions cannot be used at the same time."10:48
*** zenoway has quit IRC10:48
*** thorst has joined #openstack-nova10:48
*** zenoway has joined #openstack-nova10:49
johnthetubaguy"Finally, at least in ML2, the providernet and multiprovidernet extensions are two different APIs to supply/view the same underlying information. The older providernet extension can only deal with single-segment networks, but is easier to use. The newer multiprovidernet extension handles multi-segment networks and potentially supports an extensible set of a segment properties, but is more cumbersome to use, at10:51
johnthetubaguy least from the CLI. Either extension can be used to create single-segment networks with ML2. "10:51
johnthetubaguyhmm, I am confused10:51
*** salv-orl_ has quit IRC10:52
*** zenoway has quit IRC10:52
*** zenoway has joined #openstack-nova10:52
*** zenoway has quit IRC10:52
*** winston-d_ has quit IRC10:53
*** zenoway has joined #openstack-nova10:53
*** thorst has quit IRC10:53
*** zenoway has quit IRC10:54
*** zenoway has joined #openstack-nova10:55
sfinucanjohnthetubaguy: Far as I understand it, enabling that extension turns your typical response from something like this10:56
sfinucan{'provider:physical_network': {'network': 'foo', ...}, ...}10:57
sfinucanto this10:57
sfinucan{'segments': ['provider:physical_network': {'network': 'foo', ...}, ...]}10:57
*** prateek_ has joined #openstack-nova10:57
*** zenoway has quit IRC10:58
sfinucanso vladikr is checking to see if that extension is enabled, then either simply returning 'provider:physical_network'->'network', or iterating through 'segments' to find the same10:58
sfinucanjohnthetubaguy: If that makes sense?10:59
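A rough sketch of the lookup being described, assuming the two response shapes above; the helper name is illustrative and is not the actual patch:

    def get_physical_network(network):
        """Return the physnet name from a neutron network dict.

        Handles both the single-provider form ('provider:physical_network'
        at the top level) and the multiprovidernet form, where the same
        attributes are repeated per segment under 'segments'.
        """
        physnet = network.get('provider:physical_network')
        if physnet:
            return physnet
        for segment in network.get('segments') or []:
            physnet = segment.get('provider:physical_network')
            if physnet:
                return physnet
        return None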
*** prateek has quit IRC11:00
*** Jack_Iv has quit IRC11:01
*** cdent has joined #openstack-nova11:01
*** dmk0202 has joined #openstack-nova11:02
*** zenoway has joined #openstack-nova11:02
*** sudipto has quit IRC11:03
*** sudipto has joined #openstack-nova11:03
*** sudipto has quit IRC11:03
*** Jack_Iv has joined #openstack-nova11:03
*** zenoway has quit IRC11:11
*** zenoway has joined #openstack-nova11:12
*** slaweq has quit IRC11:16
*** slaweq has joined #openstack-nova11:16
*** smatzek has joined #openstack-nova11:17
*** sree has quit IRC11:18
*** dmk0202 has quit IRC11:21
*** omkar_telee_ has quit IRC11:21
*** dmk0202 has joined #openstack-nova11:21
*** liverpooler has quit IRC11:21
*** ijw has joined #openstack-nova11:22
*** rfolco has joined #openstack-nova11:27
*** ijw has quit IRC11:28
*** sree has joined #openstack-nova11:28
*** tuan_luong has quit IRC11:30
*** sree has quit IRC11:32
*** omkar_telee_ has joined #openstack-nova11:34
*** gongysh has quit IRC11:35
*** baoli has joined #openstack-nova11:41
openstackgerritZhaokun Fu proposed openstack/nova-specs master: fix overridden error  https://review.openstack.org/45803411:41
*** hferenc has joined #openstack-nova11:41
*** thorst has joined #openstack-nova11:42
*** salv-orlando has joined #openstack-nova11:42
*** thorst_ has joined #openstack-nova11:42
*** edmondsw has joined #openstack-nova11:46
*** thorst has quit IRC11:47
*** baoli has quit IRC11:47
*** salv-orlando has quit IRC11:48
*** nmathew- has quit IRC11:48
*** amotoki has quit IRC11:50
*** timello has joined #openstack-nova11:50
openstackgerritZhaokun Fu proposed openstack/nova master: fix overridden error  https://review.openstack.org/45803711:53
*** kevinz has joined #openstack-nova11:55
*** slaweq has quit IRC11:55
*** claudiub has joined #openstack-nova11:55
*** karimb has quit IRC11:55
*** karimb has joined #openstack-nova11:56
*** karimb has quit IRC11:57
*** amotoki has joined #openstack-nova11:57
*** lucasagomes is now known as lucas-hungry11:59
*** ralonsoh_ has joined #openstack-nova12:01
*** omkar_telee_ has quit IRC12:01
*** efoley_ has joined #openstack-nova12:02
*** kevinz has quit IRC12:02
*** amotoki has quit IRC12:02
*** ralonsoh has quit IRC12:04
*** efoley has quit IRC12:05
*** vks1 has quit IRC12:05
*** satyar has joined #openstack-nova12:06
*** dane-fichter has joined #openstack-nova12:06
openstackgerritRoman Podoliaka proposed openstack/osc-placement master: tests: add a hook for functional testing in the gate  https://review.openstack.org/45212212:08
*** sudipto has joined #openstack-nova12:09
*** phuongnh has quit IRC12:09
*** diga has quit IRC12:11
*** sapcc-bot has quit IRC12:11
*** carthaca_ has quit IRC12:11
*** sapcc-bot1 has joined #openstack-nova12:11
*** carthaca_1 has joined #openstack-nova12:11
*** databus23_ has joined #openstack-nova12:11
*** tpatzig_ has joined #openstack-nova12:11
*** mkoderer_ has joined #openstack-nova12:11
*** david_1 has joined #openstack-nova12:11
*** dgonzalez_ has joined #openstack-nova12:11
*** databus23_ has quit IRC12:13
*** tpatzig_ has quit IRC12:13
*** mkoderer_ has quit IRC12:13
*** dgonzalez_ has quit IRC12:13
*** david_1 has quit IRC12:13
*** cdent has quit IRC12:18
*** markvoelker has joined #openstack-nova12:19
*** karthiks has quit IRC12:22
openstackgerritAlex Xu proposed openstack/nova master: Add test ensure all the microversions are sequential in placement API  https://review.openstack.org/45804912:23
*** voelzmo has quit IRC12:23
*** voelzmo has joined #openstack-nova12:23
*** voelzmo has quit IRC12:24
*** ijw has joined #openstack-nova12:24
*** voelzmo has joined #openstack-nova12:24
*** timello has quit IRC12:25
*** liverpooler has joined #openstack-nova12:25
*** ayogi has quit IRC12:26
*** ijw has quit IRC12:29
openstackgerritZhaokun Fu proposed openstack/nova-specs master: accomodate=>accommodate  https://review.openstack.org/45805012:29
*** edmondsw has quit IRC12:29
*** edmondsw has joined #openstack-nova12:30
*** slaweq has joined #openstack-nova12:31
*** zhurong has joined #openstack-nova12:31
*** iceyao has joined #openstack-nova12:31
*** zenoway has quit IRC12:33
*** karimb has joined #openstack-nova12:35
*** slaweq has quit IRC12:37
*** zenoway has joined #openstack-nova12:37
*** slaweq has joined #openstack-nova12:38
openstackgerritAndy McCrae proposed openstack/nova master: Allow CONTENT_LENGTH to be present but empty  https://review.openstack.org/45571012:40
*** gongysh has joined #openstack-nova12:41
*** zenoway has quit IRC12:41
*** lyan has joined #openstack-nova12:43
*** jaypipes has joined #openstack-nova12:44
alex_xunova api meeting is in 15 mins at #openstack-meeting-412:45
*** zhurong has quit IRC12:45
*** ratailor has quit IRC12:46
*** zenoway has joined #openstack-nova12:46
openstackgerritRoman Podoliaka proposed openstack/osc-placement master: tests: add a hook for functional testing in the gate  https://review.openstack.org/45212212:48
*** catintheroof has joined #openstack-nova12:48
*** jpena is now known as jpena|lunch12:52
*** jerrygb has joined #openstack-nova12:53
*** falseuser has joined #openstack-nova12:54
*** falseuser has left #openstack-nova12:55
*** artom has quit IRC12:55
*** artom has joined #openstack-nova12:56
*** zenoway has quit IRC12:56
*** zenoway has joined #openstack-nova12:56
*** falseuser has joined #openstack-nova12:57
falseuser112:57
*** nmathew- has joined #openstack-nova12:57
*** nmathew- has quit IRC12:57
*** efoley__ has joined #openstack-nova12:58
falseuserhello!12:58
*** gouthamr has joined #openstack-nova12:59
*** cleong has joined #openstack-nova13:00
*** ralonsoh__ has joined #openstack-nova13:01
*** cdent has joined #openstack-nova13:01
*** efoley_ has quit IRC13:01
*** ralonsoh__ is now known as ralonsoh13:01
johnthetubaguysfinucan: sorry, missed your note, yeah thats what I was seeing, I am just trying to see if there is something else we should do too, maybe not13:01
johnthetubaguysfinucan: I wasn't expecting the neutron API to transform like that based on its settings, I guess thats what it does, confusing13:03
*** ralonsoh_ has quit IRC13:04
sfinucanjohnthetubaguy: Yeah, I must admit I based that on the docs. Maybe the docs don't reflect reality but I presumed they would13:04
*** falseuser has quit IRC13:06
*** falseuser has joined #openstack-nova13:07
*** lucas-hungry is now known as lucasagomes13:07
*** mdrabe has joined #openstack-nova13:09
*** sean-k-mooney has joined #openstack-nova13:09
*** dane-fichter has quit IRC13:10
*** jamesdenton has joined #openstack-nova13:11
*** falseuser has quit IRC13:13
*** artom has quit IRC13:14
*** artom has joined #openstack-nova13:14
*** Zhaomingjun has joined #openstack-nova13:16
*** ngupta has joined #openstack-nova13:17
*** felipemonteiro has joined #openstack-nova13:22
openstackgerritAndy McCrae proposed openstack/nova master: Allow CONTENT_LENGTH to be present but empty  https://review.openstack.org/45571013:23
*** Zhaomingjun has quit IRC13:23
*** moshele has quit IRC13:23
*** slaweq has quit IRC13:23
*** slaweq has joined #openstack-nova13:24
*** jamesdenton has quit IRC13:24
*** jamesdenton has joined #openstack-nova13:25
*** ijw has joined #openstack-nova13:25
*** xyang1 has joined #openstack-nova13:25
*** mriedem has joined #openstack-nova13:25
*** eharney has joined #openstack-nova13:26
mriedemo/13:26
*** slaweq has quit IRC13:28
*** ijw has quit IRC13:30
*** esberglu has joined #openstack-nova13:30
*** kfarr has joined #openstack-nova13:30
*** pchavva has joined #openstack-nova13:30
*** awaugama has joined #openstack-nova13:33
*** smatzek has quit IRC13:35
*** abalutoiu_ is now known as abalutoiu13:36
*** nkorabli has joined #openstack-nova13:37
openstackgerritAndy McCrae proposed openstack/nova master: Allow CONTENT_LENGTH to be present but empty  https://review.openstack.org/45571013:37
*** voelzmo has quit IRC13:38
*** voelzmo has joined #openstack-nova13:39
johnthetubaguysfinucan: I couldn't really find docs, just found the odd bug and things, where did you look for those?13:43
sfinucansec13:43
*** voelzmo has quit IRC13:43
*** vks1 has joined #openstack-nova13:43
sfinucanjohnthetubaguy: https://developer.openstack.org/api-ref/networking/v2/#multiple-provider-extension13:44
johnthetubaguysfinucan: ah, yeah, I did look at those, just wasn't 100% sure how you use it still.13:44
*** voelzmo has joined #openstack-nova13:46
sfinucanjohnthetubaguy: maybe we should ping the neutron guys directly?13:46
sfinucansomeone there could surely tell us this in a heartbeat :)13:47
sfinucanone would hope, anyway ;)13:47
*** gcb has joined #openstack-nova13:47
*** vks1 has quit IRC13:47
johnthetubaguysfinucan: yeah, we totally should13:48
johnthetubaguysfinucan: I think I get it now, just want to make sure I interpreted things correctly.13:48
sfinucanjohnthetubaguy: I'll let you do that and just lurk myself - you can ask your own questions best, heh13:48
*** dimtruck is now known as zz_dimtruck13:51
mriedemha well this was popular https://review.openstack.org/#/c/457854/13:55
*** jpena|lunch is now known as jpena13:56
mriedemlyarwood: can i get you to take a look at these backports? https://review.openstack.org/#/q/status:open+project:openstack/nova+topic:bug/168269313:57
*** smatzek has joined #openstack-nova13:58
*** kfarr has quit IRC13:59
jaypipesmriedem: reading your comment on bauzas placement-claims spec got me wondering... why is it that with multi-cell + super-conductor we are getting rid of the possibility of retries in the scheduling process?14:00
*** slaweq has joined #openstack-nova14:00
*** prateek_ has quit IRC14:00
openstackgerritFeodor Tersin proposed openstack/nova master: Implement ScaleIO image backend  https://review.openstack.org/40744014:01
*** eharney has quit IRC14:01
mriedemjaypipes: because the computes can only call back to the local cell conductor, not the super conductor, is my understanding14:01
mriedemand there is no scheduler local to the cell,14:01
mriedemso the local conductor can't ask the scheduler for a new list of hosts to retry14:01
*** links has quit IRC14:02
mriedemit would have to upcall to the top-level scheduler (the only scheduler), and i think we're avoiding upcalls, because that's like cells v114:02
dansmithit's not just that we want to avoid them (we do),14:02
openstackgerritAndy McCrae proposed openstack/nova master: Allow CONTENT_LENGTH to be present but empty  https://review.openstack.org/45571014:02
dansmithbut we have no connection information for the upcall14:02
*** vks1 has joined #openstack-nova14:03
mriedemdansmith: does the cell conductor nova.conf have api_database set?14:03
*** nkorabli has quit IRC14:03
dansmithno14:03
mriedemok, just [database]connection = cell db14:04
edleafeSo the cell conductor talks to the super conductor?14:04
*** tbachman has joined #openstack-nova14:04
mriedemedleafe: no14:04
dansmithedleafe: they're on different busses, and the lower layer has no information about how to connect to the top layer14:04
*** david-lyle has quit IRC14:04
mriedemjaypipes: so for the claims in scheduler spec and the overhead stuff, couldn't we take the overhead values from the virt driver and adjust the allocations in the resource tracker?14:05
mriedemif the overhead pushes us over what's available for inventory on that provider, we get a 409 i think, and then we fail the build14:05
dansmithugh14:06
mriedemi would think if you had reserved some space for ram and disk for the provider itself, then the chances of hitting overhead 409s once we pick a host might be slim14:06
dansmithI really hate that14:06
mriedemdansmith: see my other options in the spec :)14:06
mriedemthere are worse options14:06
openstackgerritMatt Riedemann proposed openstack/nova master: Add online data migration for populating services.uuid  https://review.openstack.org/45489914:07
*** yingwei has joined #openstack-nova14:10
jaypipesmriedem: yes, we could possibly do that14:10
*** yingwei has quit IRC14:10
jaypipesmriedem: I don't like it, same as dansmith, but not sure a better solution14:10
dansmithso the point being,14:10
mriedemi figured that would be better than conductor doing an rpc call into the compute to get the overhead values before posting allocations14:11
dansmithwe tell people to make sure they account for the slack in the reserved amounts on each host to keep the chances of a failure down?14:11
mriedemdansmith: yeah14:11
mriedemit sucks14:11
mriedemi know14:11
dansmiththe conductor->compute call isn't so terrible,14:11
dansmithit's just that we can't make it until we have a compute14:11
mriedemright14:11
dansmithwhich lengthens our window14:11
cdentif we wiggle the reserved then we can't add the overhead to the allocations, right?14:12
cdentit's one or the other14:12
mriedemso conductor asks scheduler for a host, conductor gets the host mapping, rpc calls to the compute with the flavor to get the overhead, and uses that to post the allocations to placement,14:12
mriedemif that fails, we have to go back through the scheduler again for a new host14:12
dansmithcdent: hmm, yeah, placement won't let you eat the reserved amount I guess right?14:12
cdentdansmith: correct14:12
dansmithyeah14:12
cdentI left a big comment there about quota too, not sure if/how that fits in14:13
mriedemi liked the idea of posting the allocations from the scheduler where if we needed to retry we already have the list of filtered hosts, but we know that's a toss up perf wise14:13
*** zz_dimtruck is now known as dimtruck14:13
*** iceyao has quit IRC14:13
mriedemif we have to rpc call into the compute to get overhead, i think we then have to move everything into conductor14:14
dansmithmriedem: well, and scheduler calling to compute is worse than conductor14:14
dansmithmuch worse14:14
dansmithyeah14:14
*** amotoki has joined #openstack-nova14:14
*** david-lyle has joined #openstack-nova14:14
dansmithso, another way to look at this:14:15
dansmithwe're talking about having resource overrides in the flavor14:15
*** amotoki has quit IRC14:15
dansmithwe could also just say that the drivers can't calculate different values for the resources, and have to build the vm from the things they're given,14:15
bauzassorry, was afk14:16
dansmithwhich in the case of hyperv means you get $mem fewer MBs of disk,14:16
* bauzas scrolling back14:16
dansmithand in the case of libvirt, you get one fewer vcpu14:16
dansmithand if you don't want that as an operator, then you override the actual values for those with room for the slack, leaving the display values as what people will really get14:16
dansmithfor people that really use the same flavor for two hypervisor types (if that really happens) then things get ugly14:16
dansmithbut I kinda feel like the cloudy approach here is to not be as snowflake-y on the virt side14:17
bauzascould someone tell which problem you folks are discussing ? #1 reschedules in a cellsv2 world or #2 overhead values for instance given by virt drivers ?14:17
mriedemthat also pushes a lot of complexity on the operator14:17
dansmithit does, and that sucks14:17
mriedembauzas: #214:17
*** eharney has joined #openstack-nova14:17
dansmithmriedem: so, we should probably go the extra-call-to-compute route for now and if that ends up being too expensive, then punt later14:18
dansmithwe already do that for things like getting predicted-but-never-correct bdm device names right?14:18
mriedemyeah, we do that from the api14:18
bauzasso, my take on that is that overheads, since are only virt-related are things which wouldn't be exposed to users14:18
mriedemwhen attaching a volume14:18
dansmithso here's another potential optimization:14:18
bauzasusers ask for memory sizes they know14:18
*** sree has joined #openstack-nova14:18
bauzasthey don't know how much the hypervisor has14:19
bauzasso14:19
dansmithwe could do that call once, and record the value we got for the flavor id, hypervisor type and version, and only do that once per boot if we have to try several hosts in a single cycle14:19
*** hongbin has joined #openstack-nova14:19
*** satyar has quit IRC14:19
mriedemoh btw, this is nice in the py35 tests: "Fatal Python error: Cannot recover from stack overflow."14:19
dansmithmriedem: like, if we have to try four libvirt hosts in a single boot call, we only have to call to one of the computes to get the overhead14:19
mriedemsdague: ^14:19
bauzaswhen the scheduler is choosing a destination, it has to pick based on the user request, but against the total amount of memory that is left14:20
mriedemdansmith: yeah that's not a bad idea14:20
mriedemcache the results in conductor14:20
mriedemtake the rpc call hit once14:20
dansmithwe need more information than just the hostname coming back from scheduler though, else we have to hit the db ourselves to look up the host info again14:20
dansmithyeah14:20
dansmitheither once per call or once per conductor worker14:21
dansmithor memcache or something14:21
mriedemi assumed once per worker14:21
dansmithwell, that could get big,14:21
mriedemsure, if we use the cache utils stuff,14:21
dansmithas there are tons of workers, and potentially lots of flavors and stuff14:21
mriedemit's configurable14:21
dansmithyeah, that's the way to go I think14:21
mriedemdefault is per worker in memory cache14:21
mriedemif you want memcache, configure it14:21
dansmithyeah14:21
bauzaswait14:22
bauzaswhy placement couldn't compare the user flavor vs. the size of what's left, ie. inventory - instances ?14:22
mriedemgcb: i'm seeing "Fatal Python error: Cannot recover from stack overflow." in the py35 job for nova unit tests randomly14:22
mriedemgcb: http://logs.openstack.org/97/456397/3/check/gate-nova-python35/05322ec/console.html#_2017-04-19_12_45_10_72457814:22
bauzasinstances being "instance.flavor + instance.overhead" ?14:22
mriedembauzas: placement doesn't have the overhead info14:22
*** sree has quit IRC14:22
bauzasmriedem: placement knows inventory I know14:23
mriedembauzas: we're saying we need to get the overhead from the compute before we post the allocations14:23
mriedemto take into account the overhead14:23
mriedemthen placement tells us if that works or not for the inventory on the provider14:23
*** dharinic has joined #openstack-nova14:23
mriedemif POST /allocations/instance fails, then we retry through the scheduler14:23
mriedemdan was just saying that we can cache the rpc call results in conductor - the results of the call to the compute to get the overhead14:24
mriedemper flavorid/hypervisortype/version14:24
bauzasmriedem: that would assume the overhead is not calculated per instance14:24
mriedemhuh?14:24
gcbmriedem: I saw the email , trying to figure out the root cause and fix it14:24
mriedemthe overhead is calculated per instance14:24
mriedemgcb: this is a different error :)14:24
bauzasmriedem: yup, so we would query the compute every time a boot request comes in to know the overhead, right?14:25
mriedembauzas: yes14:25
mriedemand cache the results14:25
bauzasmriedem: cache what ? given we don't know which compute14:25
gcbmriedem,  wow, let me check the details14:25
mriedembauzas: we'd do this in conductor after we get a host from the scheduler14:25
bauzaswe would have to cache overheads per flavor for all computes14:25
mriedembauzas: right14:25
dansmithno14:25
bauzasmriedem: oh, after ?14:25
bauzaserm, not sure I like that14:25
dansmithper hypervisor type/version14:26
bauzasI'd tend to prefer considering that the flavor is what the user requests14:26
mriedembauzas: that's not how it works today14:26
bauzasand placement should verify against the inventory which would be decremented by the sum of overheads14:26
*** ijw has joined #openstack-nova14:26
mriedemi request 8GB of RAM for my vm, but xen says i really need 8GB + 256MB or something14:26
bauzasat the moment, placement matches against the inventory minus the sum of allocations14:27
lyarwoodmriedem: ack sure looking now14:27
mriedembauzas: "and placement should verify against the inventory which would be decremented by the sum of overheads" - yes, but we have to get the overheads to placement14:27
bauzasI'm proposing to match against the inventory minus the allocations+overhead14:27
mriedemwe do that via the allocations14:27
mriedembauzas: yes, we all agree on that14:27
mriedemthat's not new14:27
dansmiththe fact that we have larger allocations than the flavor is not ideal, but at the moment we have no choice14:27
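In that world the padded amount is simply what gets written to placement for the instance. A sketch of such an allocation, assuming the allocations body format placement used at the time; compute_node_uuid, flavor and overhead are placeholder names, and the 8 GB + 256 MB figures echo the xen example above:

    allocation_body = {
        'allocations': [{
            'resource_provider': {'uuid': compute_node_uuid},
            'resources': {
                'VCPU': flavor.vcpus,
                'MEMORY_MB': flavor.memory_mb + overhead.get('memory_mb', 0),  # 8192 + 256
                'DISK_GB': flavor.root_gb + overhead.get('disk_gb', 0),
            },
        }],
    }
    # Written to /allocations/{instance_uuid}; a 409 means the padded request
    # no longer fits the provider's inventory and a new host has to be picked.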
*** yingwei has joined #openstack-nova14:28
bauzasbut you would not just compare with the flavor for the instance, but flavor+cached_overhead ?14:28
bauzaswell, huge thing14:28
*** dharinic has quit IRC14:28
mriedemthe allocations are made up of the flavor today14:28
bauzasdansmith: mriedem: okay, I see why you'd like to cache that14:28
*** satyar has joined #openstack-nova14:28
*** satyar has quit IRC14:29
*** satyar has joined #openstack-nova14:29
mriedemshould i go back and update the spec with a summary?14:29
mriedembauzas: unfortunately for you, this means redoing a large part of the spec where you changed it from posting allocations in conductor to scheduler, now back to conductor :(14:30
cdentmriedem: what does that mean with regard to the reasons it moved back to the scheduler in the first place?14:31
*** iceyao has joined #openstack-nova14:31
*** satyar has quit IRC14:31
*** satyar has joined #openstack-nova14:31
*** ijw has quit IRC14:31
cdentconductor ->(reasons)->sched->(new reasons)->conductor14:31
bauzasmriedem: this is fine https://www.nytimes.com/2016/08/06/arts/this-is-fine-meme-dog-fire.html14:32
mriedemcdent: we moved it to scheduler because that's where the filtered host list is, so retries there might be theoretically faster,14:33
cdentso it's just a performance thing? I guess that's okay then14:33
mriedemcdent: we're moving to conductor now because conductor is what's going to need to do the rpc call into the compute to get the overhead values,14:33
bauzasmriedem: dansmith: wait, we expose flavors to scheduler, right?14:33
bauzasoh crap, nvm14:34
*** moshele has joined #openstack-nova14:34
mriedemso conductor -> scheduler (select_destinations) -> conductor (rpc call to compute to get overhead for the selected host) -> post allocations (flavor + overhead) to placement -> (1) success, build on compute, or (2) fail, retry through scheduler for a new host14:34
mriedemcdent: ^ make sense?14:34
bauzasI'm trying to consider how we could inform placement of the overhead for the specific instance14:34
mriedembauzas: via the allocations that we post14:34
mriedemsee ^14:34
cdentmriedem: yeah, makes sense, but is frustrating14:35
mriedemcdent: agree,14:35
bauzasmriedem: I'd love to have posted when we do GET /RPs=memory=X14:35
mriedemespecially since i for one didn't even know this overhead thing existed until yesterday14:35
mriedembauzas: we don't know the host then14:35
mriedemit's a chicken and egg14:35
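The flow mriedem lays out above, as an illustrative conductor-side loop; every helper name here (get_instance_overhead, put_allocations, max_retries) is hypothetical:

    def schedule_and_claim(context, request_spec, instance):
        for _ in range(max_retries):
            host = scheduler_client.select_destinations(context, request_spec)[0]
            # RPC down to the selected compute for its driver's overhead estimate.
            overhead = compute_rpcapi.get_instance_overhead(context, host, instance.flavor)
            if placement_client.put_allocations(instance.uuid, host,
                                                instance.flavor, overhead):
                return host  # claim held, go build on this host
            # placement said 409: capacity changed under us, ask the scheduler again
        raise exception.NoValidHost(reason='claim failed on every selected host')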
*** belmoreira has quit IRC14:35
bauzasmriedem: where X would be flavor+overhead14:35
*** iceyao has quit IRC14:35
bauzasfuuuuuuuuuu14:36
mriedembauzas: we have to have the host to get the overhead14:36
mriedembecause the virt driver gives the overhead values14:36
bauzasright, so the only problem is with the new POST allocation14:36
bauzassorry, slow brain here14:36
bauzasmy brain is fried by that shit ton of political discussions I'm eating14:37
*** baoli has joined #openstack-nova14:37
mriedemi don't read the news anymore14:37
mriedemi heard we bombed afghanistan with a large bomb14:38
mriedemto show how cool we are14:38
edleafeIs the overhead specific to each hypervisor, but the same for all instances on that hypervisor, regardless of instance size?14:38
mriedemedleafe: no14:39
edleafeugh14:39
bauzasmriedem: I do recommend https://www.youtube.com/watch?v=hkZir1L7fSY for your knowledge14:39
*** amotoki has joined #openstack-nova14:40
mriedembauzas: ok, saved for later14:40
bauzasmriedem: NSFW14:40
mriedemi'm at home14:40
mriedemso everything is safe14:40
bauzass/NSFW/NSFWFH14:40
mriedembetween the hours of 9am and 4pm that is14:40
bauzasanyway14:40
mriedemedleafe: the hyperv overhead calculation is pretty simple14:41
bauzasmriedem: posting a summary in the spec ?14:41
mriedem'disk_gb': (instance_info['memory_mb'] + 512) // units.Ki14:41
mriedembauzas: yeah, in a minute14:41
mriedemlooking at this py35 weirdness14:41
bauzasmriedem: okay, because I was about to write a new rev -ish14:41
*** baoli has quit IRC14:41
*** moshele has quit IRC14:42
edleafemriedem: oh, that's much clearer14:42
mriedemedleafe: the xen and libvirt ones are more confusing14:43
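Plugging numbers into the hyper-v formula above (units.Ki is 1024), with an illustrative 8 GB flavor:

    memory_mb = 8192                             # flavor RAM
    disk_overhead_gb = (memory_mb + 512) // 1024
    assert disk_overhead_gb == 8                 # 8 GB of extra disk per instance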
mriedemgcb: i'm having a hard time knowing which test is doing the stack overflow14:44
*** Swami has joined #openstack-nova14:44
mriedembut there appears to be an infinite recursion in oslo.config14:44
mriedemgcb: i also see at the end: nova.tests.unit.test_rpc.TestRPC.test_add_extra_exmods [] ... inprogress14:44
bauzasmriedem: the libvirt one is scaring me, because I wonder if we could even cache that14:44
mriedemso maybe it's that one14:44
mriedembauzas: because it's config driven?14:44
mriedemand per-image14:44
mriedembauzas: yeah that's a good point14:45
mriedemdansmith: ^14:45
melwittmriedem: is anyone working on the intermittent ironic test failure? I want to take a stab at it if not14:45
mriedemthe libvirt overhead one kind of screws us14:45
mriedemmelwitt: no14:45
bauzasmriedem: exactly this14:45
mriedemmelwitt: i was considering just skipping it for now14:45
dansmithhmm, I don't remember the config bit14:45
* dansmith looks14:45
mriedemdansmith: might not be config, but it's based on image meta14:45
gcbmriedem,  Do you get the error from https://review.openstack.org/#/c/456397 ?  It seems oslo.config has recursive calls14:46
johnthetubaguymelwitt: I saw talk of something in the neutron channel about ironic, but seemed more like a hard failure14:46
dansmithmriedem: yeah image meta14:46
mriedemgcb: it's multiple changes failing on this14:46
mriedemaccording to logstash14:46
melwittmriedem: okay. I think the right thing to do is just raise the Timeout exception. we shouldn't be doing a real timeout14:46
melwitt(as discussed on the bug)14:46
mriedemmelwitt: so remove all of the fake stub methods that are doing logic to trigger the timeout?14:47
mriedemmelwitt: that's the easy fix14:47
mriedemi think i get the point of why the test was doing that though14:47
mriedemto stub and mimic what the driver is doing14:47
mriedembut it's pretty fragile14:47
melwittmriedem: yeah, stub the looping call to raise the timeout. I thought the test wants to check that it does the right thing when a timeout occurs, that it raises the ConsoleNotAvailable14:48
melwittthat is, I think we can assume that oslo_service looping calls work correctly, else we're also testing oslo functionality14:49
mriedemmelwitt: yeah it's a fine line14:49
bauzasmriedem: dansmith: we also lookup the instance if we don't have the numa topology set in the instance object for knowing the overhead14:49
mriedemmelwitt: i'm fine with doing that14:49
dansmithmriedem: so, a couple things14:49
mriedemmelwitt: because my solution was skipTest :)14:50
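A rough sketch of the fix melwitt describes: stub the looping call so it raises the timeout immediately and assert the driver turns that into ConsoleNotAvailable. The patched object and the method under test are placeholders, not the real ironic driver internals:

    import mock
    from oslo_service import loopingcall

    from nova import exception

    def test_console_timeout_raises(self):
        with mock.patch.object(loopingcall, 'BackOffLoopingCall') as mock_call:
            # No real waiting: make the looping call report a timeout at once.
            mock_call.return_value.start.return_value.wait.side_effect = (
                loopingcall.LoopingCallTimeOut())
            self.assertRaises(exception.ConsoleNotAvailable,
                              self.driver.console_method_under_test,
                              self.instance)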
*** omkar_telee_ has joined #openstack-nova14:50
gcbmriedem, it's really strange,  nova.tests.unit.test_rpc.TestRPC.test_add_extra_exmods is a simple test.  what I found is just a deprecation warning from oslo.messaging14:50
*** kaisers has joined #openstack-nova14:50
dansmithmriedem: (1) presumably we'll do this without the cache first, (2) we can cache on flavorid/hv/ver first, (3) we might just make a hash key function that generates a key based on all the things currently known to matter14:50
*** eharney has quit IRC14:50
mriedemdansmith: where (3) could take into account the image id too?14:51
dansmithmriedem: and (4) the penalty for missing means we *might* fail to build because of overhead on the compute, which sucks, but.. builds can fail14:51
dansmithmriedem: yeah14:51
mriedemdansmith: and the image meta could change w/o changing the image id14:51
*** kornicameister has quit IRC14:51
dansmithmriedem: or just hash in the actual fields we know are ever looked at14:51
dansmithmriedem: right well, that's why I was thinking ^ instead of id, but yeah14:51
bauzasmriedem: dansmith: so just to make it clear, we introspect the instance every time we claim the resource in the RT if libvirt14:51
melwittmriedem: heh, okay. I'll upload it and we'll see14:51
mriedemchanging meta on an image w/o dumping the old image seems like a bad idea14:52
dansmithbecause if you have two images, but without cpu policy set, those shouldn't need to be done separately14:52
mriedembauzas: if libvirt/hyperv/xen14:52
bauzasmriedem: dansmith: for knowing the numa topology14:52
*** annegentle has joined #openstack-nova14:52
mriedembauzas: where do we even set instance.numa_topology during instance create?14:52
dansmithbauzas: I know, what's your point? hopefully that all goes away anyway14:52
dansmithmriedem: it's during virt build I think14:52
mriedemheh14:52
mriedemso...14:53
dansmithwell, that's where we fill it out14:53
bauzasdansmith: the fact that we could potentially not be able to cache things14:53
bauzasbecause there could be only one occurence14:53
mriedemsfinucan: do you know where we set instance.numa_topology during instance build?14:53
bauzasI mean a cache hit of 1 occurrence14:53
sfinucanmriedem: We set it twice14:53
sfinucanOnce during the filtering stage (to see if an instance would fit on the host)14:54
dansmithbauzas: the looking at the instance.numa part is just to calculate the same thing, but without having flavor/image meta14:54
sfinucanand once when actually booting the instance14:54
sfinucanI'll get the two calls now14:54
dansmithbauzas: you would get the same answer from the instance as from the metadata14:54
dansmithmriedem: ^14:54
*** sridharg has quit IRC14:54
mriedemsfinucan: looks like RT.instance_claim sets it on the instance14:55
bauzasdansmith: mmm, you're right, two instances sharing the same flavors/images would necessarily have the same overhead even with numa14:55
dansmithright,14:56
dansmithit's just that we get the value either calculated from meta, or already set if the instance has had it done14:56
dansmithit's still the same answer14:56
bauzasright14:56
bauzasso, the key is the tuple (image, flavor)14:56
bauzaswell14:56
dansmithno14:56
bauzas(image, flavor, hv_type, hv_version)14:56
dansmiththe key is a hash value of (flavorid, a set of image properties, hv, version)14:56
dansmithyeah14:56
dansmithnot image id though14:57
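
As a sketch only (the exact key contents were still being debated above), the hash-key idea might look like this: hash the flavor id, only the image properties known to affect overhead, and the hypervisor type/version, but never the image id, so two images differing only in unrelated metadata share a cache entry.

    import hashlib

    # illustrative property names, not an exhaustive or confirmed list
    _OVERHEAD_IMAGE_PROPS = ('hw_cpu_policy', 'hw_emulator_threads_policy')

    def overhead_cache_key(flavor_id, image_props, hv_type, hv_version):
        relevant = sorted((k, image_props.get(k)) for k in _OVERHEAD_IMAGE_PROPS)
        raw = '%s|%s|%s|%s' % (flavor_id, relevant, hv_type, hv_version)
        return hashlib.sha1(raw.encode('utf-8')).hexdigest()
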
mriedemwhat other nasty things is the RT doing that we aren't thinking about?14:57
mriedemcpu pinning stuff on the host?14:57
edleafeCan we simplify things and get a good estimate of what the max total overhead could ever be on a particular host?14:57
bauzasmriedem: I think we CPU pin indeed14:57
edleafeAnd then just reserve that?14:57
*** mlakat has quit IRC14:57
mriedemedleafe: as cdent noted earlier, we don't get the reserved inventory for this14:57
bauzasmriedem: AFAIK, we just verify if we can accept a host for CPU pinning, but we do set the pin between pCPU and vCPU at the RT stage14:57
bauzassfinucan: correct ? ^14:58
sfinucanmriedem: Yeah, you're looking for calls to 'numa_fit_instance_to_host'14:58
edleafemriedem: I mean when creating the resource provider, add the total overhead it might need to the reserved part for the hypervisor14:58
sfinucanbauzas: Correct14:58
cdentmriedem: that's not what I said. What I said was _if_ we change reserved, we don't change the allocations. I actually think making a good guess at what global overhead is at boot time, and adding that to reserved is the right way to go14:58
*** marst has joined #openstack-nova14:58
cdents/boot/host boot/14:58
sfinucanWe call 'numa_fit_instance_to_host' to make sure that the instance would fit on a host with a given CPU pinning permutation14:59
dansmithcdent: meaning claim more and update later?14:59
*** marst has quit IRC14:59
edleafedansmith: no, just claim what the instance uses14:59
mriedemedleafe: cdent: i don't really know how we can safely estimate that,14:59
dansmithedleafe: eh?14:59
mriedemw/o knowing which flavors will be used, which scheduler filters are being used, etc14:59
cdentdansmith: no, continue making allocations using the non-overhead values. incorporate the expected in 'reserved' of inventory14:59
sfinucanIt's the easiest way to check that out for things like cpu_thread_policy, where we can't do a simple "have I X free CPUs" check14:59
dansmithedleafe: we're claiming before we know that, that's the whole point14:59
*** namnh has joined #openstack-nova14:59
*** marst has joined #openstack-nova14:59
bauzasokay, here is a thing15:00
edleafedansmith: it would already be accounted for15:00
*** mdnadeem has quit IRC15:00
dansmithcdent: oh, that's what I initially was saying, yeah15:00
*** karthiks has joined #openstack-nova15:00
bauzasgiven all the nasty and fancy things we do with RT15:00
edleafein the RP's reserved15:00
bauzasI have a thought15:00
*** rmart04 has quit IRC15:00
bauzaswhy not just post an allocation without all that stuff, and modify the allocation at the RT stage?15:00
dansmithcdent: not claim the flavor+overhead, but claim flavor, and require reserved to be set high enough to cover it15:00
mriedembauzas: that's one of the options i said in the spec15:00
dansmithbauzas: because that's terrible15:00
edleafeI see chasing the last tiny bit of resource is going to make the code an even bigger mess than it is now15:01
bauzasie. we claim for the resources we know, but we later heal that when we claim15:01
bauzaswhen we /RT/.claim15:01
cdentdansmith: yes, but doing that in an semi-automated way based on the initial calculated (and config'd) inventory. Updating it at boot time.15:01
*** karthiks has quit IRC15:01
bauzaseither way, placement doesn't know a bit of that NFV gangband15:01
dansmithcdent: well, whether or not we do that is a detail I guess15:01
* cdent nods15:01
bauzasso I think NFV claims would still be a 2-phase step15:01
dansmithcdent: however, I think I lost sight of that as a simple option early on15:02
bauzasie. rounding the allocation by the scheduler, and refine that allocation as soon as we have a better vision15:02
dansmithcdent: that's probably the way to go, yeah, and do the more complex thing if that ends up being to naive for some reason15:02
cdentdansmith: yes15:02
cdentotherwise we run into some very complex wangling for unclear gains15:02
dansmithcdent: I lost sight of it when we started talking about claiming the flavor+overhead in one go15:02
dansmithcdent: which means we couldn't rely on reserved15:02
*** rcernin has quit IRC15:03
bauzassoooo15:03
mriedemdansmith: cdent: so would that be based on reserved_host_disk_mb and reserved_host_memory_mb ?15:03
bauzascould we save my life, and assume that we would POST an allocation that is rounded, and later refine it within the instance_claim phase, exactly like we already do ?15:03
bauzas90% of users would have benefits, and things wouldn't change (even improve) for NFV users15:04
cdentbauzas: we slightly increase the risk of retry/error if the second claim can't claim15:04
*** voelzmo has quit IRC15:04
dansmithmriedem: cdent: and vcpu15:04
cdentseems better to not allocate overhead per instance15:04
bauzascdent: like we already do :)15:04
cdentyes15:04
cdentand instead cover it with reserved15:04
cdentit is safer15:04
dansmithmriedem: cdent: meaning, we'd have to always have at least one reserved vcpu15:04
cdentmore likely to have successful claims more often15:04
mriedemdansmith: today in _compute_node_to_inventory_dict in the scheduler report client we don't reserve any vcpu15:05
dansmithright15:05
cdentdansmith: I believe you, but why?15:05
dansmithcdent: because libvirt needs to have vcpu+1 if certain flavor configs are set15:05
cdentah, okay15:05
* edleafe almost jinxed cdent's question15:05
*** eharney has joined #openstack-nova15:05
dansmithcdent: if you ask for dedicated io threads or something, then it requires another vcpu dedicated to you to handle that15:06
mriedemdansmith: cdent: thinking ahead, _compute_node_to_inventory_dict is only used for the old style driver.get_available_resource call,15:06
dansmithcdent: which is really expensive, and probably unfortunate for people that don't want that, so we'd need a way to not reserve the last vcpu on a system for people that don't want it15:06
mriedemwith get_inventory() we could make the reservation calculation in the virt drivers that actually do this overhead thing15:06
cdentyes15:06
dansmithsure, but libvirt needs to not do that if the operator doesn't want it,15:07
dansmithbecause they may have no flavors configured for this dedicated thing,15:07
dansmithand we don't want to prevent them from using their last vcpu :)15:07
mriedemare you talking to me or cdent?15:07
cdenthow much does that matter (the last vcpu)15:07
dansmithyou15:07
cdentmriedem: yes, about get_inventory, I like get_inventory15:07
dansmithheh15:08
dansmithcdent: vcpus are few15:08
dansmithcdent: compared to memory or disk15:08
bauzassorry, in a meeting atm15:08
dansmithcdent: imagine a cloud of 4-core ARM boxes15:08
dansmithwhere you lose one core per system15:08
cdentare we losing a pcpu or a vcpu?15:08
mriedemdansmith: "libvirt needs to not do that if the operator doesn't want it," - we just take that into account in the libvirt get_inventory() method don't we?15:09
mriedemwhen determining the reserved value for the VCPU resource provider inventory?15:09
dansmithmriedem: yeah, it just has to be configurable15:09
cdent(physical, not pinned)15:09
dansmithmriedem: and we don't have that currently, AFAIK15:09
dansmithmriedem: we have reserved disk and memory only right?15:09
mriedemdansmith: as far as i know15:09
mriedembut i learn new terrible things every day :)15:09
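
A hypothetical sketch of where this is heading: a VCPU inventory record in the report client with a configurable reserved core. The reserved_vcpus knob is an assumption here; as noted above, only reserved disk and memory options exist at this point.

    def vcpu_inventory(compute_node, reserved_vcpus=0):
        # reserved_vcpus would be 1 on hosts where dedicated emulator/IO
        # threads can consume a physical core, 0 everywhere else
        return {
            'VCPU': {
                'total': compute_node.vcpus,
                'reserved': reserved_vcpus,
                'min_unit': 1,
                'max_unit': compute_node.vcpus,
                'step_size': 1,
                'allocation_ratio': compute_node.cpu_allocation_ratio,
            },
        }
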
dansmithcdent: pcpu in the case of \this dedicated thing15:09
*** sree has joined #openstack-nova15:09
*** iceyao has joined #openstack-nova15:10
cdentdansmith: gotchya15:10
namnhHi everyone, I am reading about rolling upgrades in Nova, and there is a point that makes me confused. According to the docs, we have to gracefully shut down all Nova services except nova-compute. In my understanding, this ensures no old service (like nova-api, nova-conductor, nova-scheduler) will interact with the new schema DB.15:11
namnhBut the docs also say "Install the code for the next version of Nova, either in a venv or a separate control plane node, including all the python dependencies. Using the newly installed nova code, run the DB sync. (nova-manage db sync; nova-manage api_db sync)." That means there will be a period where old services interact with the new schema DB. Could you please explain this point to me?15:11
*** Oku_OS is now known as Oku_OS-away15:11
namnhHere is the docs: https://docs.openstack.org/developer/nova/upgrade.html#minimal-downtime-upgrade-process15:11
edleafedansmith: so those who are configuring flavors this way are already sacrificed a pcpu now, right? IOW, this wouldn't change it?15:11
edleafes/sacrificed/sacrificing15:11
dansmithnamnh: nova compute does not talk to the db directly, so it won't touch the different schema15:12
dansmithedleafe: they are, but only at claim time, not at reserved-for-the-future time15:12
dansmithedleafe: since that is not config, but a flavor setting, we need a config to know if we _should_ reserve that or not15:12
edleafedansmith: ok, but in your 4 core example, they can only use 3 in either case15:13
dansmithand of course, we should validate all this with the people that wrote this code :)15:13
dansmithedleafe: but people that don't use that dedicated thing.. those are the ones we careabout15:13
dansmithedleafe: because we don't want to reserve (and thus never schedule to) the fourth cpu15:13
*** sree has quit IRC15:13
edleafedansmith: so maybe add a config option to compute: WASTE_PCPU = True15:14
*** psachin has quit IRC15:14
edleafe:)15:14
dansmithedleafe: exactly what I'm saying15:14
edleafeand these are a small minority of deployments?15:14
*** iceyao has quit IRC15:15
namnhdansmith: Yes, I know. we just have to gracefully shutdown all nova server *except* nova-compute. And in my understanding, we have to do that to ensure no nova service (like nova-api, nova-conductor, nova-scheduler) can interact with the new schema DB15:16
dansmithI would definitely imagine so15:16
dansmithedleafe: it's an expensive change to make RT performance better15:16
namnhdansmith: is that right?15:16
namnhs/server/services15:16
gcbmriedem, for the first issue you mentioned in the email , we can revert the change of file  https://review.openstack.org/#/c/457188/3/nova/tests/unit/virt/ironic/test_driver.py15:16
dansmithnamnh: no, you can apply the schema change before you shut down the control services15:16
mriedemthe schema changes are always additive so it shouldn't matter right?15:17
dansmithnamnh: that's why the docs say "or venv". You can apply the schema before you do anything, then shut down and upgrade your control services at the same time, then start on computes15:17
dansmithmriedem: right15:17
gcbmriedem, I have no idea why it fails randomly now15:17
mriedemgcb: melwitt is going to re-write the test15:17
*** satyar has quit IRC15:17
*** satyar has joined #openstack-nova15:18
gcbmriedem, good to know that. For the second issue, when did the issue occur? Since Monday?15:18
mriedemnamnh: see the 3rd bullet there which says "At this point, new columns and tables may exist in the database. These DB schema changes are done in a way that both the N and N+1 release can perform operations against the same schema."15:18
*** amotoki has quit IRC15:19
mriedemnamnh: meaning the schema changes are all additive and won't break old code running against them,15:19
mriedemlike any new column is always nullable15:19
mriedemand we don't drop columns15:19
*** jamesdenton has quit IRC15:19
namnhdansmith: so why do we have to shut down all nova services (except nova-compute)? due to RPC or something?15:20
dansmithnamnh: you only have to shut them down when you upgrade them, and all of them have to be shutdown together15:20
dansmithnamnh: what you can't have is nova-api and nova-scheduler on different *code* versions accessing the database at the same time15:21
mriedem"For maximum safety (no failed API operations),"15:22
dansmithit says that?15:22
mriedemnamnh: the control services have code in place to check versions of the computes since we support mixed level computes, but we don't for the control services15:22
mriedemdansmith: yeah15:22
dansmiththat's not really the reason15:22
mriedemsee step 2 bullet 115:22
dansmithmaybe that's easier to understand than the real reason, I dunno15:23
* dansmith goes to see if he wrote that15:23
namnhdansmith mriedem: if possible, could you give me an example of an error when nova-api and nova-scheduler on different *code* versions access the database at the same time?15:24
*** armax has joined #openstack-nova15:24
dansmithnamnh: data corruption15:24
dansmithseriously, don't do it15:24
namnhdansmith: ok, I understood. Thanks for your time.15:26
*** gszasz has quit IRC15:26
*** baoli has joined #openstack-nova15:26
*** pcaruana has quit IRC15:28
namnhmriedem: I just have understood. thanks15:28
*** ijw has joined #openstack-nova15:28
*** chyka has joined #openstack-nova15:30
*** eharney has quit IRC15:30
*** iceyao has joined #openstack-nova15:30
*** baoli has quit IRC15:30
*** kfarr has joined #openstack-nova15:31
*** Sukhdev has joined #openstack-nova15:31
*** gszasz has joined #openstack-nova15:32
*** hurricanerix has quit IRC15:32
*** ijw has quit IRC15:32
*** iceyao has quit IRC15:35
*** zenoway has quit IRC15:36
*** Sukhdev_ has joined #openstack-nova15:37
*** chyka_ has joined #openstack-nova15:37
*** chyka has quit IRC15:40
*** david-lyle has quit IRC15:41
*** dr_gogeta86 has joined #openstack-nova15:43
dr_gogeta86hi15:45
dr_gogeta86how can I force a compute node up15:45
dr_gogeta86?15:45
dr_gogeta86i've also rebooted many times but it is always down15:45
efriedandymccr https://review.openstack.org/#/c/455710/12 - Pretty.  So pretty.15:47
imacdonndr_gogeta86, I think you're supposed to ask questions like that (i.e. how to make it work, not regarding code development) in #openstack15:47
imacdonnsee topic ;)15:47
dr_gogeta86sorry15:48
imacdonnnp15:48
efrieddr_gogeta86 Yeah, what imacdonn said - but, check your n-cpu log15:48
*** namnh has quit IRC15:48
efriedSounds like compute service is failing to start or check in.15:48
cdentefried: "save, like, five whole lines of code" <- my favorite review comment today, can hear it in my head15:49
efriedcdent :)15:49
*** damien_r has quit IRC15:51
dr_gogeta86efried, imacdonn tnx15:51
mriedemcdent: to answer your question in the spec, i don't think we account for overheads when committing the quota reservation15:51
mriedemcdent: because the quota reservation is created in the api based on the flavor,15:52
cdentmriedem: that's what I figured, thus my concern15:52
cdentthanks for confirming it15:52
mriedemcdent: then the reservation id for the quota is passed through to compute15:52
mriedemand if the claim is good, we commit the reservation on the compute15:52
mriedembut we don't modify it at all15:52
*** ociuhandu has quit IRC15:52
cdentso if we make allocations be the true count, if they include overhead, user is going to be doing WTF sometimes15:53
*** chyka has joined #openstack-nova15:53
mriedemnow that i say this, i don't see us passing reservations to build_and_run_instance15:53
mriedemso i'm digging15:53
cdentgyres within gyres15:53
*** markus_z has quit IRC15:53
cdentwhat rough quota, its limit come round at last15:54
*** yamahata has joined #openstack-nova15:54
*** smatzek has quit IRC15:56
*** mlavalle has joined #openstack-nova15:56
mriedemcdent: so i was totally wrong15:56
mriedemwe commit the reservations in the api15:56
mriedemin the _provision_instances method15:56
*** vks1 has quit IRC15:57
*** chyka_ has quit IRC15:57
cdentmriedem: that sounds like only partially wrong15:57
mriedemeither way we don't consider the overhead in the quota15:57
mriedemquota usage i mean15:57
mriedemhad to fix that before melwitt dropped the 'quota usage' hammer on me15:57
*** eharney has joined #openstack-nova15:57
*** omkar_telee_ has quit IRC15:57
melwittlol15:58
cdentso many hammers, so few channels15:59
*** Jack_Iv has quit IRC15:59
*** tesseract has quit IRC15:59
mriedemok, so yeah if we don't account for the overhead in the quota usage,16:00
mriedembut we did in the allocation,16:00
*** Jack_Iv has joined #openstack-nova16:00
mriedemand we start using allocations (quota usage) for counting quotas stuff that mel is doing,16:00
mriedemwe're in a bind16:00
mriedembecause horizon shows me that i have 4GB of memory quota left, but the allocation is really 4GB + 256MB RAM16:01
mriedemhowever, that's already the state we're in today16:01
mriedemyou're getting that extra overhead from the flavor w/o paying for it16:01
mriedemif we account for the overhead in the allocations, and we use allocations for reporting quota, then we actually fix that hole don't we?16:02
mriedemso you can bill your customers based on real usage?16:02
cdentpeople will not like that change16:02
mriedemdefine people16:02
cdentend users16:02
cdent"yesterday i was able to do X but today I can do x-1"16:03
mriedemi imagine cloud providers would like it though16:03
mriedemmore cash for the utility company16:03
cdentyeah, like tron or whatever, I fight for the users16:03
*** tommylikehu_ has joined #openstack-nova16:03
*** Jack_Iv has quit IRC16:04
*** mdrabe has quit IRC16:04
mriedemcdent: just once, join evil corp16:04
melwitt"we fixed the glitch"16:04
*** tommylikehu_ has quit IRC16:04
mriedembauzas: dansmith: jaypipes: i left a summary in the spec16:04
mriedemhttps://review.openstack.org/#/c/437424/16:05
cdentmriedem: I'm not sure I have any soul left to sell16:05
*** tjones has joined #openstack-nova16:05
jaypipesmriedem: danke16:05
mriedemat a high level i feel like we should be accounting for the overhead (real usage) in the allocations16:05
mriedemwhich would also benefit the initial select_destinations since placement has the whole story for actual allocations on a provider16:05
hypothermic_catlyarwood: hi, are you around?16:05
*** gszasz has quit IRC16:06
lyarwoodhypothermic_cat: hey, what's up?16:06
*** dave-mccowan has joined #openstack-nova16:06
mriedemi agree that over-estimating the reserved inventory buffer is an easier solution on us, but it also means operators have to be taking that into consideration now when configuring the computes and doing capacity planning16:06
cdentonly if we don't automate it (through magic yet to be determined)16:07
dansmithmriedem: unless we inflate the reserved amount ourselves16:07
dansmithright16:07
mriedemyeah i don't know how we do that yet16:07
bauzasmriedem: okay, will read it16:07
mriedemthis is definitely fodder for the forum session on claims in the scheduler16:07
bauzasand will try to make a new rev soon-ish16:07
melwittI don't remember what the "overhead" is. I've seen it before in the RT16:07
mriedemmelwitt: see estimate_instance_overhead in the virt drivers16:08
melwittor the virt driver16:08
*** mdrabe has joined #openstack-nova16:08
*** lucasagomes is now known as lucas-afk16:08
bauzasmelwitt: don't put your foot into that ... :p16:08
mriedemit's related to quota so melwitt has to put both feet in16:08
melwittI have before but then I forgot, so I guess I'm safe16:08
*** mnestratov has quit IRC16:08
bauzasor take the left one, if you want to be lucky16:08
dansmithmind the boots16:09
mriedemi was going to say,16:09
mriedemit would have to be deep to get up over the boots16:09
melwitthah, yeah. feet protection16:09
*** dmk0202 has quit IRC16:09
bauzasI should consider that for myself16:09
hypothermic_catlyarwood: I split the detach patch into a zillion small ones: https://review.openstack.org/#/q/topic:bp/cinder-new-attach-apis16:10
* cdent reminds mriedem behind his hands that it is "quota usage"16:10
melwittit feels weird to think of taking up more quota usage than the flavor says though16:10
bauzasyup16:10
bauzasand it would expose the hypervisor usage to the users, nope ?16:10
melwittso I don't think I would be for that16:10
bauzasmelwitt: +116:10
hypothermic_catlyarwood: and I remember you asked about it earlier and was wondering whether you have bandwidth to look into any of those :)16:10
*** mvk has quit IRC16:11
bauzasI think we should not impact the quota usage by that, or it would expose some internal consumption logic for the end-user, nope ?16:11
bauzasif I was nasty, I'd guess which host I'm in if I'm able to see my quota decrease differently16:11
bauzasat least16:11
dansmithif we increase our allocations, that's how it would manifest16:11
dansmithwhich is another reason why hiding it in reserved is better16:11
dansmithIMHO16:11
lyarwoodhypothermic_cat: I will tomorrow morning, I've been stuck downstream all week but I'll make time to reviews these in the morning. mdbooth, johnthetubaguy & mriedem ^ might also want to look at these prior to the meeting tomorrow.16:12
openstackgerritMikhail Feoktistov proposed openstack/nova master: Remove fstype param from ploop init  https://review.openstack.org/44497016:12
bauzasdansmith: maybe16:12
bauzasif we try to merge quotas with placement16:12
dansmithbauzas: we're doing that16:12
bauzasI know16:12
bauzasspeaking out loud16:12
hypothermic_catlyarwood: additional tests are on their way for the base detach changes16:12
mriedemdansmith: bauzas: melwitt: ok i didn't think about that part, so reply in the spec i suppose16:13
bauzasdansmith: FWIW I tried to advocate way earlier this morning that I'm in favor of decrementing the total left amount rather than fixing the allocation16:13
hypothermic_catlyarwood: I'm stuck a bit too this week, I need to find more people, who could look into the code :)16:13
bauzasdansmith: unrelated, I have a couple of comments on your cells-aware series16:14
bauzasboth can be me being wrong16:14
*** nic has joined #openstack-nova16:15
*** eantyshev has joined #openstack-nova16:15
rfolcocdent, quick question: I am doing a cleanup on inventory.py replacing http error msg from db api with specific messages, like here: https://github.com/openstack/nova/blob/master/nova/api/openstack/placement/handlers/inventory.py#L409 - should we do this for other cases like BadRequest ?16:16
*** dtp has joined #openstack-nova16:16
dansmithbauzas: one is why my devstack patch is failing :D16:17
bauzasI'M NOT WRONG, WOOOOT!16:17
* bauzas should make a t-shirt16:17
*** felipemonteiro_ has joined #openstack-nova16:17
cdentrfolco: just a sec, catching up16:18
dansmithbauzas: no, you're still wrong16:18
dansmithno t-shirt for you16:18
* bauzas facepalms16:18
bauzasdansmith: L71 ?16:19
*** david-lyle has joined #openstack-nova16:19
bauzashow come ?16:19
dansmithbauzas: no, L71.516:19
rfolcocdent, more context: would be an extension of https://review.openstack.org/#/c/43677316:19
*** gszasz has joined #openstack-nova16:19
bauzasdansmith: excuse my ignorance, but what is L71.5 ?16:19
cdentrfolco: yeah, the general idea is to not expose the database error in the exception message, throughout the placement service16:19
dansmithbauzas: the line between 71 and 7216:20
cdentrfolco: sometimes the exception message will already be good, other times not. it depends on the exception that's coming from the object level: it's inconsistent16:20
bauzaslike the platform 9 3/4 ?16:20
melwittyeeeah16:20
cdentrfolco: in general the goal is to make the message that the end user sees less annoying, so use that as your guide16:20
*** felipemonteiro has quit IRC16:20
cdentrfolco: but unfortunately I need to go, house guests just showed up...16:21
dansmithbauzas: I don't get that reference, but.. probably?16:21
rfolcocdent, that helped. Thanks.16:21
melwittdansmith: you don't know ... harry potter??16:21
cdentrfolco: add me to the review(s) when stuff's ready please, so I can be sure to look at them and also to add them to the weekly email16:21
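
A rough illustration of the cleanup cdent describes (the handler shape and exception.InventoryInUse are assumptions, not the actual patch): catch the object-layer error and raise an HTTP error with a short user-facing message, keeping the raw database text out of the response.

    import webob.exc

    from nova import exception

    def _set_inventory(resource_provider, inventories):
        try:
            resource_provider.set_inventory(inventories)
        except exception.InventoryInUse as exc:
            # placement-specific message; the DB error detail stays in the logs
            raise webob.exc.HTTPConflict('update conflict: %s' % exc)
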
*** cdent has quit IRC16:21
dansmithmelwitt: never seen it16:21
melwittomg16:22
bauzasmelwitt: seriously, he doesn't lose a lot16:22
dansmithmelwitt: well, I watched five minutes of it, and couldn't handle it16:22
*** gongysh has quit IRC16:22
dansmithLOTR and harry potter, I just can't stomach16:22
bauzasdansmith: dammit, the platform 9 3/4 was shown at 10 mins after the beginning16:22
dansmithoh well :)16:22
*** ltomasbo is now known as ltomasbo|away16:23
bauzaseither way, I have to change my glasses this week16:24
bauzasso probably I'll review better16:24
*** dimtruck is now known as zz_dimtruck16:24
*** zz_dimtruck is now known as dimtruck16:25
dansmithbauzas: in my defense, the tests did add a cell, but I neglected to create the instance in the new cell, so .. I intended to have coverage to prove I was doing the right thing, I just failed.. twice16:25
*** dharinic has joined #openstack-nova16:26
*** ijw has joined #openstack-nova16:29
*** dharinic has quit IRC16:31
*** kaisers has quit IRC16:32
*** kaisers has joined #openstack-nova16:32
*** karimb has quit IRC16:34
*** ijw has quit IRC16:34
*** aarefiev is now known as aarefiev_afk16:35
openstackgerritmelanie witt proposed openstack/nova master: Mock timeout in test__get_node_console_with_reset_wait_timeout  https://review.openstack.org/45816116:35
*** yamahata has quit IRC16:36
*** kaisers has quit IRC16:37
mriedemdansmith: is this for the server groups thing?16:37
dansmithyeah16:37
mriedemcould just write a functional test instead,16:38
*** jheroux has joined #openstack-nova16:38
mriedemi'm finding that's easier with some of the api driven stuff and multi-cell16:38
dansmithmriedem: this one is basically functional.. uses the actual db16:38
dansmithmriedem: it didn't catch it because I made the same mistake in the test as the code16:38
*** Swami has quit IRC16:39
*** Apoorva has joined #openstack-nova16:40
*** smatzek has joined #openstack-nova16:43
melwittmriedem: I wonder if there's a bug in the backoff looping timer, it doesn't make sense that it wouldn't time out at all16:45
melwittit's supposed to end early if the backoff would put it over the specified timeout16:45
openstackgerritDan Smith proposed openstack/nova master: Make server groups api aware of multiple cells for membership  https://review.openstack.org/45733816:46
openstackgerritDan Smith proposed openstack/nova master: Sort CellMappingList.get_all() for safety  https://review.openstack.org/44317416:46
openstackgerritDan Smith proposed openstack/nova master: Clean up ClientRouter debt  https://review.openstack.org/44448716:46
openstackgerritDan Smith proposed openstack/nova master: Add workaround to disable group policy check upcall  https://review.openstack.org/44273616:46
mriedemmelwitt: i didn't dig into how the jitter stuff works in there16:46
melwittyeah. I'm looking at it out of curiosity16:46
mriedemmelwitt: i figured maybe there was a bug in the fake stub method logic16:46
mriedembut didn't dig into it16:47
melwitthm, yeah16:47
melwittI was thinking we shouldn't be doing real timeouts anyway, but it doesn't make sense that it wouldn't time out that way too16:47
openstackgerritFeodor Tersin proposed openstack/nova master: Implement ScaleIO image backend  https://review.openstack.org/40744016:50
*** gjayavelu has joined #openstack-nova16:51
*** david-lyle has quit IRC16:51
*** felipemonteiro_ has quit IRC16:53
openstackgerritMatt Riedemann proposed openstack/nova-specs master: Fix typo in deprecate-os-hosts spec  https://review.openstack.org/45816516:53
*** amotoki has joined #openstack-nova16:54
*** salv-orlando has joined #openstack-nova16:54
*** tjones has left #openstack-nova16:55
*** kaisers has joined #openstack-nova16:57
*** kaisers_ has joined #openstack-nova16:58
*** sambetts is now known as sambetts|afk17:00
*** kaisers has quit IRC17:02
*** ralonsoh has quit IRC17:02
*** kaisers_ has quit IRC17:02
*** derekh has quit IRC17:05
*** Sukhdev_ has quit IRC17:07
*** hieulq_ has joined #openstack-nova17:10
*** moshele has joined #openstack-nova17:10
dansmithgdi py3517:13
*** harlowja_ has quit IRC17:13
*** moshele has quit IRC17:14
*** marst_ has joined #openstack-nova17:14
*** harlowja has joined #openstack-nova17:15
*** kaisers has joined #openstack-nova17:15
*** harlowja has quit IRC17:17
*** harlowja has joined #openstack-nova17:17
*** marst has quit IRC17:17
*** efoley__ has quit IRC17:18
*** sdague has quit IRC17:19
*** xyang1 has quit IRC17:19
*** karimb has joined #openstack-nova17:24
*** toure has joined #openstack-nova17:24
*** ijw has joined #openstack-nova17:30
*** yamahata has joined #openstack-nova17:30
*** kaisers has quit IRC17:30
*** kaisers has joined #openstack-nova17:31
*** ijw has quit IRC17:35
*** kaisers has quit IRC17:35
*** kaisers has joined #openstack-nova17:35
*** ijw has joined #openstack-nova17:36
*** karimb has quit IRC17:36
*** iceyao has joined #openstack-nova17:37
*** felipemonteiro has joined #openstack-nova17:41
openstackgerritMatt Riedemann proposed openstack/nova master: Deprecate os-hosts API  https://review.openstack.org/45650417:41
mriedem^ is doctastic17:41
*** iceyao has quit IRC17:41
*** sudipto has quit IRC17:43
*** sudipto has joined #openstack-nova17:43
*** sudipto has quit IRC17:44
*** hieulq_ has quit IRC17:44
*** Vishal_ has joined #openstack-nova17:46
*** Vishal_ has quit IRC17:46
*** kaisers has quit IRC17:47
*** catintheroof has quit IRC17:47
*** jpena is now known as jpena|off17:49
*** jaosorior is now known as jaosorior_away17:54
openstackgerritOpenStack Proposal Bot proposed openstack/nova master: Updated from global requirements  https://review.openstack.org/45818517:54
openstackgerritOpenStack Proposal Bot proposed openstack/os-vif master: Updated from global requirements  https://review.openstack.org/45104917:56
*** xyang1 has joined #openstack-nova17:58
*** jamesden_ has joined #openstack-nova17:59
*** jvgrant_ has joined #openstack-nova18:02
*** jvgrant_ has left #openstack-nova18:02
*** fragatin_ has quit IRC18:02
*** sdague has joined #openstack-nova18:03
*** fragatina has joined #openstack-nova18:05
openstackgerritSteve Noyes proposed openstack/nova master: Add Cinder V3 Detach calls  https://review.openstack.org/43875018:12
*** voelzmo has joined #openstack-nova18:15
*** kaisers has joined #openstack-nova18:15
*** claudiub has quit IRC18:16
*** gszasz has quit IRC18:17
*** mlavalle has quit IRC18:18
dansmithI see melwitt pushed up the revised quotapalooza18:19
dansmithI'm glad, but also sad, 'cause now we have to review it18:19
*** adisky_ has quit IRC18:19
mriedemi think my afternoon just got busy with something else...18:21
*** mlavalle has joined #openstack-nova18:22
mriedemdo we want/need to get the placement usages API stuff in first?18:22
mriedembefore counting quotas?18:22
dansmithI think she's got it so it can count inefficiently without placement18:23
dansmithiirc18:23
mriedemalright18:23
dansmithwe really can't miss landing this this cycle, so whatever makes the most sense time-wise I guess18:23
mriedemi wanted something easier to review18:23
melwittsorry peeps18:23
mriedemoh we can miss it18:23
mriedemwe can miss it real good18:23
*** satyar has quit IRC18:25
*** Jack_Iv has joined #openstack-nova18:28
hypothermic_catlyarwood: mriedem: johnthetubaguy: this one for the new detach flow is ready for review: https://review.openstack.org/#/c/438750/18:28
hypothermic_catI will add it to the etherpad too18:28
mriedemok18:29
mriedemhypothermic_cat: doesn't the commit message need updating?18:29
*** catinthe_ has joined #openstack-nova18:30
*** amotoki has quit IRC18:30
hypothermic_catmriedem: bah, yes, that should be too :)18:31
mriedemlet me go through the rest first18:31
hypothermic_catmriedem: or well, actually I updated it18:31
mriedem"for each case of terminate_connection" is wrong18:32
hypothermic_catmriedem: but I can remove the "overview" part from it as it seems to be confusing18:32
mriedemyeah, just don't change anything yet18:33
*** fragatina has quit IRC18:34
mriedemhypothermic_cat: there you go, comments inline18:39
*** catinthe_ has quit IRC18:39
*** gyee has joined #openstack-nova18:40
mriedemhypothermic_cat: i'm confused about the changes, are they not in a series?18:41
mriedemyou're going to have conflicts if those aren't done in a series18:42
hypothermic_catmriedem: I tried not to have, like, an 8-patch-long series, to be able to get more hands on the code, etc18:42
hypothermic_catmriedem: that's a pain to maintain18:42
mriedemconflicts aren't fun either18:42
hypothermic_catthese are small changes, there won't be that much conflict18:42
hypothermic_catI have a few that have a common base18:42
mriedemok i was thinking more about the crud operations in nova/volume/cinder.py18:43
mriedemthose are common and should be at the bottom of the series18:43
hypothermic_catif we can get the base in then there's at least two IIRC that can be independent from the rest for instance18:43
hypothermic_catmriedem: those are only in this base detach patch right now18:44
mriedemok i asked for the attachment_update one to be split out of that change18:44
mriedemuntil it's used18:44
hypothermic_catmriedem: I should've marked the other as WIP, will do that a bit later18:44
mriedemwhich is why i figured things were in a series18:44
*** nicolasbock has quit IRC18:45
hypothermic_cathmm, I think we don't even use that right now, so I might simply need to delete it18:45
hypothermic_catas we have delete and create for things like swap18:45
*** lucasxu has joined #openstack-nova18:45
*** kaisers has quit IRC18:45
*** kaisers has joined #openstack-nova18:48
*** nicolasbock has joined #openstack-nova18:55
*** gyee has quit IRC18:55
*** Sukhdev has quit IRC18:56
*** Jack_Iv has quit IRC18:57
*** Jack_Iv has joined #openstack-nova18:57
*** tbachman has quit IRC18:58
*** Jack_Iv has quit IRC18:59
*** Jack_Iv has joined #openstack-nova18:59
*** MasterOfBugs has joined #openstack-nova19:03
*** sree has joined #openstack-nova19:06
*** david-lyle has joined #openstack-nova19:06
*** yingwei has quit IRC19:06
*** lucasxu has quit IRC19:08
*** lucasxu has joined #openstack-nova19:09
*** sree has quit IRC19:10
*** fragatina has joined #openstack-nova19:15
*** david-lyle_ has joined #openstack-nova19:17
*** david-lyle has quit IRC19:17
*** eharney has quit IRC19:18
*** claudiub has joined #openstack-nova19:22
*** Jack_Iv has quit IRC19:26
*** Jack_Iv has joined #openstack-nova19:27
*** gyee has joined #openstack-nova19:27
*** Jack_Iv has quit IRC19:28
*** Jack_Iv has joined #openstack-nova19:28
*** eharney has joined #openstack-nova19:33
*** kfarr has quit IRC19:42
*** Jack_Iv has quit IRC19:43
*** iceyao has joined #openstack-nova19:44
*** Jack_Iv has joined #openstack-nova19:44
*** Jack_Iv has quit IRC19:44
*** voelzmo has quit IRC19:44
*** dmk0202 has joined #openstack-nova19:47
*** iceyao has quit IRC19:48
*** awaugama has quit IRC19:49
*** pchavva has quit IRC19:51
*** david-lyle_ has quit IRC19:52
*** voelzmo has joined #openstack-nova19:57
*** Apoorva has quit IRC19:57
*** catintheroof has joined #openstack-nova19:59
*** voelzmo has quit IRC20:02
*** iceyao has joined #openstack-nova20:03
*** kfarr has joined #openstack-nova20:03
*** annegentle has quit IRC20:07
*** iceyao has quit IRC20:08
*** cdent has joined #openstack-nova20:08
*** lucasxu has quit IRC20:13
*** lucasxu has joined #openstack-nova20:14
*** damien_r has joined #openstack-nova20:14
*** cdent has quit IRC20:17
openstackgerritFeodor Tersin proposed openstack/nova master: libvirt: Use config types to parse XML for instance disks  https://review.openstack.org/41066720:18
openstackgerritFeodor Tersin proposed openstack/nova master: libvirt: Add missing tests for utils.find_disk  https://review.openstack.org/41475020:18
openstackgerritFeodor Tersin proposed openstack/nova master: libvirt: Use config types to parse XML for root disk  https://review.openstack.org/41194120:18
*** felipemonteiro has quit IRC20:18
jaypipesmriedem, dansmith: any chance either of you would be able to review https://review.openstack.org/#/c/448282/ and series? :)20:18
dansmithjaypipes: like you're looking for odds or what?20:19
dansmithI'd say about 33%20:19
*** tbachman has joined #openstack-nova20:19
jaypipesdansmith: fair enough :)20:20
mriedemjaypipes: once i get my state of the union for pike-1 email done20:21
jaypipescheers20:22
*** karimb has joined #openstack-nova20:23
*** iceyao has joined #openstack-nova20:24
dansmithjaypipes: say yes to my comment on the bottom patch and I will approve20:28
*** Apoorva has joined #openstack-nova20:28
*** iceyao has quit IRC20:29
jaypipesdansmith: you mean like this? https://github.com/openstack/os-traits/blob/master/os_traits/__init__.py#L2720:29
dansmithjaypipes: but accessible from the command line20:30
jaypipesdansmith: sure I can add that.20:30
dansmithmaybe just make this do it: "python -mos_traits"20:30
*** sneti_ has joined #openstack-nova20:30
*** sneti_ has quit IRC20:30
jaypipesdansmith: yep, I can add a patch that does that.20:31
*** ngupta has quit IRC20:31
*** liverpooler has quit IRC20:31
dansmithjaypipes: ack, +W20:31
jaypipescheers20:31
*** Sukhdev has joined #openstack-nova20:32
openstackgerritSteve Noyes proposed openstack/nova master: Update detach to use V3 Cinder API  https://review.openstack.org/43875020:32
*** ijw has quit IRC20:37
*** david-lyle_ has joined #openstack-nova20:38
*** sdague has quit IRC20:38
*** nkorabli has joined #openstack-nova20:38
*** annegentle has joined #openstack-nova20:42
dansmithvladikr: still around?20:42
dansmithvladikr: can you look over this and see if there's anything missing from this that we should add? https://review.openstack.org/#/c/448283/420:42
*** rfolco has quit IRC20:42
vladikrdansmith, sure, looking20:43
hypothermic_catmriedem: update posted, hope it looks better now20:52
*** david-lyle_ has quit IRC20:53
*** kaisers has quit IRC20:53
*** kaisers has joined #openstack-nova20:54
*** catintheroof has quit IRC20:55
*** catintheroof has joined #openstack-nova20:55
mriedemhypothermic_cat: ok20:58
*** cleong has quit IRC20:58
*** kaisers has quit IRC20:58
*** catintheroof has quit IRC21:00
vladikrdansmith, seems fine overall. sriov VFs can also be "trusted" by its PF. That will allow them to set a custom mac and enter promiscuous mode21:00
*** catintheroof has joined #openstack-nova21:01
vladikrdansmith, perhaps it should state the function (virtual/physical)?21:02
dansmithvladikr: I'm in another meeting, but can you just comment on the review?21:02
vladikrsure21:02
dansmiththanks21:02
*** edmondsw has quit IRC21:05
*** edmondsw has joined #openstack-nova21:06
cburgessDoes the libvirt LXC driver still work? or is that dead-ish code?21:06
*** edmondsw has quit IRC21:10
*** thorst_ has quit IRC21:10
*** eharney has quit IRC21:12
*** dimtruck is now known as zz_dimtruck21:18
*** flwang has joined #openstack-nova21:18
*** lucasxu has quit IRC21:19
flwangmriedem: ping21:19
*** jheroux has quit IRC21:23
openstackgerritJon Bernard proposed openstack/nova master: Add tempest-dsvm-ceph-rc  https://review.openstack.org/45629221:24
*** catintheroof has quit IRC21:24
mriedemcburgess: hard to say, we have a job in the experimental queue for it, but it's never been passing - it might be better now on the kernel in 16.04 though, i think the last time i looked at it, there were kernel issues with trusty21:25
mriedemflwang: pong21:25
cburgessmriedem Ew ok21:25
flwangmriedem: may i get your comments on this spec https://review.openstack.org/#/c/190919/ ?21:26
*** lucasxu has joined #openstack-nova21:26
*** kfarr has quit IRC21:26
flwangespecially if nova is still happy to keep the 'backup' function21:26
*** nkorabli has quit IRC21:26
flwangmriedem: Paul Murray commented  it's not really necessary to backup a volume-backed instance, may i know your opinions? thanks21:27
*** nkorabli has joined #openstack-nova21:27
*** smatzek has quit IRC21:27
*** dtp has quit IRC21:27
*** smatzek has joined #openstack-nova21:28
flwangmriedem: i revisited this spec because i got an email yesterday asking me why I gave up on this patch, which means it's useful i think21:28
*** gouthamr has quit IRC21:29
*** Sukhdev has quit IRC21:30
efriedjaypipes mriedem mordred How does a guy get "the" ServiceCatalog initially?  I can find APIs to ask keystone for endpoint URLs, but presumably that goes to keystone every time, and we want to load up the service catalog just once.21:31
melwittdansmith: I replied to your comment on the quota review. I'm not sure what to make of the Quotas object. I used it as a place to do the api/main db straddle because no other place made sense, but it's not a true object like the others21:31
*** ngupta has joined #openstack-nova21:31
*** nkorabli has quit IRC21:32
mordredefried: it actually doesn't go to keystone every time21:32
jaypipesefried: check out the code that starts here: https://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L195-L19821:32
mordredefried: the keystoneauth functions to ask for endpoint urls all work from the catalog - unless they need to do version discovery because of what you requested21:32
mordredif they do need to do version discovery, that is cached21:33
efriedmordred Okay, so I can really just use endpoint URL APIs.21:33
mordredand yes- what jaypipes pasted is a good code example21:33
mordredyes21:33
*** david-lyle_ has joined #openstack-nova21:33
mordredyou, in fact, _should_ use the endpoint url apis - definitely don't try to do it yourself :)21:33
*** catintheroof has joined #openstack-nova21:34
* mordred just spent all day in a keystoneauth patch around discovery ...21:34
efriedneat.21:34
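
A hedged sketch of the keystoneauth pattern being pointed at (the 'placement' config group is just an example): the session resolves the endpoint from the service catalog carried by the auth token, and any version discovery is cached, so this does not round-trip to keystone on every call.

    from keystoneauth1 import loading as ks_loading

    def get_endpoint(conf, group='placement'):
        auth = ks_loading.load_auth_from_conf_options(conf, group)
        sess = ks_loading.load_session_from_conf_options(conf, group, auth=auth)
        # resolved from the catalog in the token, not a fresh keystone call
        return sess.get_endpoint(service_type='placement', interface='public')
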
dansmithmelwitt: do those methods need to be remoted?21:35
*** thorst has joined #openstack-nova21:35
*** smatzek has quit IRC21:35
melwittdansmith: I'm not sure. I thought the already existing update_limit and create_limit were that these should be too21:36
melwitt*because the already existing21:37
dansmithmelwitt: are they ever called from compute? that's the reason they'd need to be remotable21:37
cfriesenHi folks...nova.virt.libvirt.driver.LibvirtDriver._supports_direct_io() checks whether we can mmap a 512-byte aligned block.   How is this supposed to work with disks with 4K sectors?21:37
dansmithsince we're moving them to the api, I'd tend to think not21:37
*** damien_r has quit IRC21:37
dansmithmelwitt: the problem is returning an unversioned dict from a supposedly-versioned ovo method21:37
dansmithmelwitt: those are all expected to, in almost all cases, return a class of the object, unless they return nothing21:38
jaypipescfriesen: probably good to just update the code to check the block size of the local disk before that mmap check.21:38
melwitthm. I think in the nova-network case compute would read limits for fixed ips21:38
mriedemflwang: i think paul's comments and questions are valid, i.e. you do backups for persistence of ephemeral disk, but if you're using volumes you already have persistence, and as john pointed out it's really an orchestration API that nova has been trying to pull away from, because we do orchestration API poorly in general (think boot from volume where nova creates a new volume, or the shelve API which is mostly broken for volume-backed instances)21:38
*** zz_dimtruck is now known as dimtruck21:38
mriedemflwang: so i don't have great interest in adding support to the backup api personally21:38
cfriesenjaypipes: I was wondering if we could just unconditionally switch to 4K21:39
*** david-lyle_ has quit IRC21:39
flwangmriedem: ok, got it. then i may stop putting more effort on that. thanks for the comments21:39
*** ijw has joined #openstack-nova21:39
jaypipescfriesen: yeah, I suppose so. not like mmap'ing 4K on a 512-byte sector disk will fail.21:39
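
A simplified, Linux-only sketch of the probe under discussion (not the real _supports_direct_io): write one buffer through O_DIRECT. mmap(-1, n) hands back page-aligned memory, so bumping the write size and alignment to 4096 would satisfy both 512-byte and 4K-sector disks, which is cfriesen's point.

    import mmap
    import os

    def supports_direct_io(dirpath, align=4096):
        testfile = os.path.join(dirpath, '.directio.test')
        fd = None
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            buf = mmap.mmap(-1, align)   # anonymous, page-aligned buffer
            os.write(fd, buf)
            return True
        except OSError:
            return False
        finally:
            if fd is not None:
                os.close(fd)
            if os.path.exists(testfile):
                os.unlink(testfile)
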
*** thorst has quit IRC21:40
melwittdansmith: I see. hm, I need to go over it again as to which things compute will call. if it needs limit objects, then I would have to add a new object QuotaLimit or something I think. currently it's just reading them from its own database21:40
mriedemflwang: alternatively work it through the operators or product work group and see if there has been any interest for the same thing from their customers21:41
mriedemflwang: but i've never heard of anyone that wanted this besides you :)21:41
dansmithmelwitt: I would think limits are only read from the api regardless, yeah21:41
melwittdansmith: compute will read them for fixed ips at least though. when allocating a fixed ip21:41
melwittwith nova-network21:41
flwangmriedem: haha, we do have some customers who want to see a built-in/out-of-the-box backup, since freezer is not really ready21:41
dansmithoh, hm21:41
dansmithmelwitt: and how does this work with the move to the api db?21:42
melwittfloating ips might be one too, it's been awhile21:42
melwittdansmith: compute would have to read them from the api db until nova-network is no longer a thing21:42
mriedemuh oh21:42
mriedemjust like server groups21:42
dansmithif it's just for n-net, then it's not a huge deal I guess21:43
dansmithtechnically a violation21:43
*** rfolco has joined #openstack-nova21:43
dansmithbut in reality..21:43
melwittyeah21:43
mriedemflwang: yeah i understand that - some people just want nova to do all of the orchestration, and we'd prefer that nova provide the APIs so that a higher-level service can put them together as the orchestrator21:43
mriedemflwang: which it certainly could with the snapshot api21:44
*** ijw has quit IRC21:44
flwangmriedem: i see and i agree it makes more sense21:44
*** shaner has quit IRC21:44
*** shaner has joined #openstack-nova21:44
*** slaweq has quit IRC21:44
flwangmriedem: yep. btw, another thing i'd like to get your comments on, from the nova point of view21:44
*** mvk has joined #openstack-nova21:44
*** slaweq has joined #openstack-nova21:45
*** Jack_Iv has joined #openstack-nova21:45
*** jerrygb has quit IRC21:47
flwangif I snapshot a volume-backed instance and then later delete the image in glance, do you expect the snapshot to be fully deleted?21:49
*** salv-orl_ has joined #openstack-nova21:49
*** slaweq has quit IRC21:49
*** lucasxu has quit IRC21:49
*** Jack_Iv has quit IRC21:49
*** salv-orlando has quit IRC21:52
flwangmriedem: i asked because currently, after deleting the image, the volume behind the image will be left21:52
mriedemflwang: an actual volume in cinder?21:52
mriedemthe volume snapshot right?21:52
flwangyes21:52
*** tbachman has quit IRC21:52
mriedemi wouldn't expect glance to delete the volume snapshot21:52
mriedemwhen the instance snapshot image is deleted21:53
flwangmriedem: interesting, why?21:53
mriedembecause glance shouldn't have control over cinder resources21:53
mriedemand it would tightly couple the image snapshot from the server with the volume snapshot, based on an operation that nova is performing21:54
mriedemwhich seems wrong21:54
imacdonnthere is no dependency between the volume and the image it was created from ... so I see no reason to delete the volume if the image is deleted21:54
*** lucasxu has joined #openstack-nova21:54
mriedemnova could just as easily not create a volume snapshot when creating an instance snapshot in glance21:54
flwangmriedem: that's a good point, but why can glance delete an image when it's stored in swift, can you see the difference?21:54
mriedemflwang: because swift is the backing store for the image right?21:54
mriedemif you're using swift as the backend21:54
mriedemcinder is not the backing store for the volume snapshot, cinder is21:55
mriedemoops21:55
mriedemglance is not the backing store for the volume snapshot, cinder is21:55
flwanggood point21:55
imacdonnyeah, that ;)21:55
mriedemswift is the backing store for the image21:55
mriedemor ceph21:55
mriedemor local file21:55
mriedemor whatever21:55
flwangbut if glance is using cinder as the backend store21:56
flwangfor this case, should the snapshot be deleted?21:56
dansmithmelwitt: there are three fields on the quotas object.. those are not sufficient to return what we've got from those new calls?21:57
mriedemflwang: only the image snapshot, not the volume snapshot21:58
dansmithI'm not really sure what we're getting there actually.. usage I guess?21:58
melwittdansmith: I don't think so because those functions are returning db models that have like a limit attribute, for example21:58
mriedemflwang: i go to glance to manage images, not volumes21:58
melwittdansmith: those are limits21:58
mriedemflwang: regardless of what is storing those images21:58
flwangmriedem: ok, cool, thanks for all the inputs21:59
flwangthat's very helpful21:59
melwittdansmith: most of them are limits. some are QuotaClass which has its own fields21:59
mriedemif you're using cinder as the backend for glance (i didn't even know that was possible), and you delete the volume out of band via the cinder API, won't that totally mess up glance's representation of that image?21:59
mriedemflwang: like if you boot a server in nova and then delete the guest via virsh21:59
dansmithmelwitt: it looks like we don't really have any of those existing methods that return raw data though right? we return an integer in one place I think, but.. nothing like a bag of dicts22:00
*** tbachman has joined #openstack-nova22:00
melwittdansmith: no, and I didn't realize the implications of what I added that needed to return things22:01
dansmithhrm.22:01
melwittdansmith: another way I considered but didn't do is I could have added the new api db accesses to nova/quota.py and do the api/main straddle there22:03
*** jamesden_ has quit IRC22:03
flwangmriedem: yep, it will. just like you delete a vm from vmcontrol instead of openstack :D22:04
dansmithmelwitt: what we need to avoid is returning unversioned data to the compute, which I don't think is solved purely by putting the straddle in quotas.py22:04
melwittdansmith: I was thinking of the Quotas object as an abstraction but wasn't thinking of the remotable methods part22:04
dansmithmelwitt: what we really need is a composite operation that just says "yo dawg, check this quota limit for me and raise if there's a problem" right?22:04
dansmithmelwitt: then all the data is on the conductor side and compute is just asking for a check instead of getting the data itself22:05
melwittdansmith: it's not but it would be doing the same thing it is now (without having a Quotas object version attached to it)22:05
melwittdansmith: yeah, that makes sense22:05
*** mdrabe has quit IRC22:05
dansmithmelwitt: right now we remote the call to the quotas db code through the object action without returning data, right?22:05
*** abalutoiu_ has joined #openstack-nova22:05
dansmithand your patch ends up with us returning that data to compute I think22:05
melwittdansmith: check_deltas is supposed to do that, but atm I'm confused as to where that occurs or if it solves the problem22:05
melwittcheck_deltas is the composite operation that calls count followed by limit_check22:06
dansmithmelwitt: where is check_deltas?22:08
*** lucasxu has quit IRC22:08
dansmiththat must be something later in your patch series?22:08
*** abalutoiu has quit IRC22:09
melwittdansmith: third patch I think22:09
melwittdansmith: https://review.openstack.org/#/c/44623922:10
*** esberglu has quit IRC22:11
cfriesenconfig option question:  when specifying the "live_migration_inbound_addr" config option, is it expected that an ipv6 address will be enclosed in square brackets?  Or should nova handle that when parsing the config option?22:11
dansmithmelwitt: ah yep.. that just takes resources and counts (from compute in this case), remotes to conductor where it looks at all the stuff and either explodes or doesn't, right?22:11
*** ngupta has quit IRC22:11
*** ngupta has joined #openstack-nova22:11
*** burt has quit IRC22:13
openstackgerritEric Fried proposed openstack/nova master: WIP/PoC: nova.utils.get_service_url(group)  https://review.openstack.org/45825722:13
melwittdansmith: it takes the deltas but it does the count and pulls the limits itself22:13
dansmithmelwitt: right, it just takes a resource and count, and explodes if the quota would be violated22:13
dansmithmelwitt: that's what we need yeah22:14
clarkbcfriesen: I thought I could answer based on how we configure things in multinode testing but that appears to be an entirely ipv4 control plane :/22:14
melwittdansmith: if by count you mean delta, then yes. if you want to check if you have room for 5 more instances, you pass it 5 and then it will count how many instances your project and user have allocated, then it pulls the limits and compares22:14
cfriesenclarkb: I'm wondering if maybe nova should do:22:15
cfriesenmigrate_data.target_connect_addr = utils.safe_ip_format(CONF.libvirt.live_migration_inbound_addr)22:15
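
For context, a rough re-sketch of the behavior cfriesen wants to reuse (the real nova.utils.safe_ip_format may differ in detail): bracket IPv6 literals and leave IPv4 addresses and hostnames untouched, so operators would not need to pre-bracket the config value.

    import netaddr

    def safe_ip_format(ip):
        try:
            if netaddr.IPAddress(ip).version == 6:
                return '[%s]' % ip
        except (TypeError, netaddr.AddrFormatError):
            pass  # hostname or otherwise unparseable value: leave as given
        return ip
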
dansmithmelwitt: yeah22:15
*** yingwei has joined #openstack-nova22:16
*** mlavalle has quit IRC22:16
melwittdansmith: okay, so that solves that problem. so I need to figure out what to do about the db access methods I have in the Quotas object22:17
*** dmk0202 has quit IRC22:17
melwittI put them there as a universal place to call api/main db to do the right thing, and the caller should only be the quota driver in nova/quota.py22:18
melwittI think ...22:18
dansmithmelwitt: right but does compute call into nova.quota for anythign? I thought it did22:18
dansmithif so, then your check_deltas conversion needs to come first22:19
melwittdansmith: it does, for the reserve/commit/rollback22:19
openstackgerritPeter Hamilton proposed openstack/nova master: Add configuration options for certificate validation  https://review.openstack.org/45767822:19
dansmithmelwitt: yeah, then you need to do something in a different order I guess22:19
melwittdansmith: check_deltas only works if the quota has been converted to Countable22:19
dansmithyeah22:19
*** ijw has joined #openstack-nova22:20
melwittI shall think on it22:24
melwittmy brain's too fried at the moment22:25
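
A minimal sketch of the check_deltas flow described above, with the count and limit lookups passed in as callables since the real accessors are exactly what is still being worked out: count current usage, add the requested delta, and raise OverQuota if the limit would be exceeded, so compute only asks for a check and never handles raw limit data.

    from nova import exception

    def check_deltas(context, deltas, project_id, count_fn, limit_fn, user_id=None):
        # count_fn/limit_fn stand in for the real per-resource usage count
        # and limit lookup; their shape here is an assumption
        overs = []
        for resource, delta in deltas.items():
            current = count_fn(context, resource, project_id, user_id)
            limit = limit_fn(context, resource, project_id, user_id)
            if limit >= 0 and current + delta > limit:
                overs.append(resource)
        if overs:
            raise exception.OverQuota(overs=overs)
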
*** marst_ has quit IRC22:26
*** felipemonteiro has joined #openstack-nova22:27
*** eandersson has joined #openstack-nova22:27
eanderssonIs there a way to extend the data in metadata? (e.g. adding tenant name)22:28
*** marst has joined #openstack-nova22:35
*** ngupta has quit IRC22:37
*** gongysh has joined #openstack-nova22:38
*** esberglu has joined #openstack-nova22:39
*** gongysh has quit IRC22:40
*** Sukhdev has joined #openstack-nova22:40
*** sree has joined #openstack-nova22:42
*** salv-orl_ has quit IRC22:44
*** david-lyle has joined #openstack-nova22:45
*** sree has quit IRC22:47
*** jerrygb has joined #openstack-nova22:49
*** gjayavelu has quit IRC22:50
*** gjayavelu has joined #openstack-nova22:50
*** jerrygb has quit IRC22:53
mriedemeandersson: you can update the metadata associated with a server yes22:53
mriedemeandersson: https://developer.openstack.org/api-ref/compute/#server-metadata-servers-metadata22:54
mriedemhttps://docs.openstack.org/cli-reference/nova.html#nova-meta22:54
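For example, updating a server's metadata through the compute API boils down to a request like the one below (endpoint, server UUID, and token are placeholders; the CLI equivalent is nova meta <server> set <key>=<value>):

    import requests

    # Placeholder endpoint, server id and token -- substitute real values.
    compute = 'http://controller:8774/v2.1'
    server_id = 'SERVER_UUID'
    headers = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    # POST /servers/{server_id}/metadata merges new items into the
    # server's existing metadata.
    resp = requests.post(
        '%s/servers/%s/metadata' % (compute, server_id),
        json={'metadata': {'team': 'example'}},
        headers=headers)
    resp.raise_for_status()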
*** rfolco_ has joined #openstack-nova22:55
*** rfolco has quit IRC22:57
*** jamesden_ has joined #openstack-nova22:59
mriedemmelwitt: did you get a patch up for that ironic unit test failure?23:04
melwittmriedem: yes, https://review.openstack.org/45816123:05
mriedemsweet23:05
melwittI think the only thing it misses out on is running the _wait_state function but I think one of the other unit tests runs it23:06
melwittdouble check it though23:06
mriedemthe commit message says it's covered in other tests23:06
mriedemi guess you're lying23:06
mriedempotentially23:06
*** yingwei has quit IRC23:07
mriedembut yeah there are23:07
mriedemoff the hook this time witt23:07
openstackgerritMathieu Gagné proposed openstack/nova master: Add ability to signal and perform online volume size change  https://review.openstack.org/45432223:10
melwitthaha23:10
melwittI thought I saw it covered in other tests but maybe I misunderstood something23:11
mriedem+223:12
*** nic has quit IRC23:13
*** felipemonteiro has quit IRC23:13
*** ngupta has joined #openstack-nova23:13
melwittmriedem: thanks. at first I removed those overrides but then kept them to use in the assert_called_with to verify they're being passed along to the timer as expected23:14
mriedemalright - that's fine for the configurable one, the module scope variable was just there for tricking the timer23:14
melwittoh23:14
*** esberglu has quit IRC23:20
mriedemmdbooth: do you plan on rebasing this https://review.openstack.org/#/c/430213/ ? - something came into triage today that was a duplicate23:22
*** dimtruck is now known as zz_dimtruck23:23
mriedemsean-k-mooney: who is running the intel 3rd party ci now that wznoinsk is gone?23:23
*** zz_dimtruck is now known as dimtruck23:24
*** ngupta has quit IRC23:29
*** ijw has quit IRC23:31
*** jamesden_ has quit IRC23:33
*** chyka has quit IRC23:33
*** chyka has joined #openstack-nova23:34
*** chyka has quit IRC23:39
imacdonnmriedem, still around ?23:39
mriedemimacdonn: yeah23:41
imacdonnhey .. re. https://bugs.launchpad.net/nova/+bug/168397223:42
openstackLaunchpad bug 1662483 in OpenStack Compute (nova) "duplicate for #1683972 detach_volume races with delete" [Undecided,In progress] - Assigned to Matthew Booth (mbooth-9)23:42
imacdonnthanks, openstack23:42
imacdonnI'm not sure it's really the same problem as the one you marked it a dup of .. it's slightly related, I guess23:42
mriedemfeel free to unduplicate it if you want, but i thought the solution was the same23:43
mriedemlock on detach_volume in the compute manager23:43
imacdonnmy locking was way more extensive (and possibly way more wrong, but it works for me)23:44
mriedemcare to post your change?23:44
mriedemi'm sure mdbooth would like to look at it23:44
imacdonnbut my case was overlap between attach and detach .. not two detaches22:44
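The locking approach being discussed is essentially to serialize compute-manager operations on the same volume so an attach, detach, or delete cannot interleave; a minimal sketch of that idea (not mdbooth's actual patch, and not the exact nova method signature):

    from oslo_concurrency import lockutils

    def detach_volume(self, context, volume_id, instance):
        # Take a per-volume lock so concurrent attach/detach/delete requests
        # for the same volume are serialized rather than racing.
        @lockutils.synchronized('volume-ops-%s' % volume_id)
        def _do_detach():
            pass  # existing detach logic would go here
        _do_detach()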
imacdonnif mdbooth is the right guy, I can get with him ... what TZ is he in ?23:44
mriedemUK23:45
mriedemso he'd be around in the morning23:45
imacdonnOK, I'll look for him then ... I can post what I did somewhere, but I'm fairly sure it's not "right"23:46
mriedemhard to know if you don't post it23:46
mriedemposting to gerrit would be preferable, else you could attach a patch file in the bug report23:46
*** dimtruck is now known as zz_dimtruck23:47
*** zz_dimtruck is now known as dimtruck23:47
imacdonnI think it's too much of a hack to put in a review .. should I attach to my bug (the one marked as a dup) ?23:47
mriedemnothing is too much of a hack to put in a review - just put WIP in the prefix of the commit title23:49
mriedemor -Workflow it in gerrit23:49
mriedemno one is going to think less of you :)23:49
imacdonnheh23:50
imacdonnI'll take another look at it first ... don't want to embarrass myself any more than necessary :)23:50
*** Fdaisuke has quit IRC23:51
imacdonnI have another thing I wanted to get your opinion on (completely unrelated)23:52
*** hongbin has quit IRC23:53
*** Fdaisuke has joined #openstack-nova23:53
mriedemok23:55
imacdonnso when nova plugs an instance's vif into a neutron network that has MTU != 1500, something needs to set the MTU on the tap interface. Whose job is that? nova-compute? os_vif? the neutron l2 agent? ... ?23:55
imacdonnasking because it's currently not getting done, and I'm not sure who to blame.23:56
imacdonnFor more detail, see https://ask.openstack.org/en/question/105521/whos-supposed-to-set-the-mtu-on-a-nova-instances-vifs-tap-interface/23:56
mriedemsean-k-mooney: ^23:56
mriedemimacdonn: in part it depends on the vif type23:57
*** dimtruck is now known as zz_dimtruck23:57
mriedemthe libvirt driver isn't using os-vif yet for all vif types23:57
imacdonnit is in my case .. vif_type is bridge23:57
mriedemthen i believe os-vif does it23:58
imacdonnI think I have the terminology right there ... in any case, vif_plug_linux_bridge under os_vif creates the bridge23:58
imacdonnthen libvirt creates the domain, which creates the tap interface, and puts it on the bridge23:58
mriedemdoes the network in neutron have the mtu attribute set?23:58
imacdonnyes23:59
imacdonnI've studied the code, and I can't find anything that's supposed to do the job23:59
imacdonnI don't think os_vif can do it, because the tap interface doesn't exist yet23:59
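Whichever component ends up owning it, the operation itself is just setting the MTU on the tap device after libvirt has created it; a minimal sketch, with a made-up device name and MTU:

    import subprocess

    def set_tap_mtu(dev='tap12345678-ab', mtu=9000):
        # Equivalent of: ip link set dev tap12345678-ab mtu 9000
        # Must run after libvirt has created the tap and attached it to the
        # bridge, and requires root / CAP_NET_ADMIN.
        subprocess.check_call(
            ['ip', 'link', 'set', 'dev', dev, 'mtu', str(mtu)])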

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!