Friday, 2017-10-06

00:09 <mriedem> mtreinish: does sourcing /opt/stack/new/devstack/lib/nova in grenade not also source the /opt/stack/new/devstack/local.conf?
00:11 <mtreinish> mriedem: probably not, in devstack the assumption is likely that is sourced well before lib/nova is called
00:12 <mriedem> alright
00:12 <mriedem> hmm, so can i even source this thing from within a post_test_hook?
00:12 <mriedem> http://logs.openstack.org/71/508271/1/check/gate-grenade-dsvm-neutron-multinode-live-migration-nv/659a2cd/logs/new/local.conf.txt.gz
00:13 <mriedem> i need to get the CELLSV2_SETUP value
00:15 <mriedem> seems like the only thing available to me is the ansible env vars from the d-g setup in http://logs.openstack.org/71/508271/1/check/gate-grenade-dsvm-neutron-multinode-live-migration-nv/659a2cd/logs/devstack-gate-setup-host.txt.gz
00:15 <mtreinish> mriedem: local.conf should be sourced by grenade already: https://github.com/openstack-dev/grenade/blob/03de9e0fc7f4fc50a00db5d547413e26cf0780dd/grenade.sh#L228
00:15 <mriedem> so i can get like GRENADE_OLD_BRANCH
00:16 <mriedem> yeah but this is a script running in the post_test_hook
00:17 <mriedem> would that carry over?
00:18 <mriedem> http://logs.openstack.org/71/508271/1/check/gate-grenade-dsvm-neutron-multinode-live-migration-nv/659a2cd/console.html#_2017-10-04_14_36_43_557726
00:18 <mtreinish> ah, ok. Yeah it won't be sourced there
00:18 <melwitt> mtreinish: still failing after installing python3-rados, even though it shows up in py3 pip freeze on the job. was not expecting that. and when I try locally I can import rados in py3 http://logs.openstack.org/63/509663/3/experimental/gate-tempest-dsvm-py35-full-devstack-plugin-ceph-ubuntu-xenial-nv/2361fe2/logs/screen-g-api.txt.gz?level=ERROR
00:18 <mriedem> ok, so i need to do https://github.com/openstack-dev/grenade/blob/03de9e0fc7f4fc50a00db5d547413e26cf0780dd/grenade.sh#L228 from within the script
00:19 <melwitt> back to the drawing board
00:20 <mtreinish> melwitt: what about rbd on py3. Looking at the glance_store code, if either rados or rbd isn't present it'll set both to None
00:20 <mtreinish> melwitt: https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/rbd.py#L37-L42
00:21 <melwitt> mtreinish: hm, good point. lemme go down that road
00:21 <melwitt> yeah you're right
00:21 <melwitt> thanks
00:21 <melwitt> need to get py3 rbd separately
00:21 <mtreinish> sure, np
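The guard mtreinish links sets both modules to None if either import fails, so checking python3-rados alone is not enough. A quick way to see which binding is actually missing (a generic sketch, not taken from the job config):

```shell
# Check both Ceph python bindings under python3; per the glance_store guard
# linked above, a failure of either import disables the rbd driver entirely.
for mod in rados rbd; do
    if python3 -c "import ${mod}" 2>/dev/null; then
        echo "${mod}: importable"
    else
        echo "${mod}: NOT importable under python3"
    fi
done
```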
00:22 <mtreinish> mriedem: probably
00:22 <mtreinish> mriedem: or something like that to source the vars you need from the conf file
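One way to do what mtreinish suggests — pull a variable such as CELLSV2_SETUP out of the devstack local.conf from inside a post_test_hook. This is a sketch only: the `[[local|localrc]]` section layout and file path are assumptions based on the job logs linked above, and the filter only handles plain KEY=value lines:

```shell
#!/bin/sh
# Sketch: extract and source the [[local|localrc]] section of a devstack
# local.conf so its KEY=value settings (e.g. CELLSV2_SETUP) become shell
# variables in the current script.
conf=${1:-/opt/stack/new/devstack/local.conf}

# Keep only plain KEY=value lines between the [[local|localrc]] header and
# the next [[...]] section header, then eval them into this shell.
eval "$(awk '$0=="[[local|localrc]]"{f=1;next} /^\[\[/{f=0} f && /^[A-Za-z_][A-Za-z0-9_]*=/' "$conf")"

echo "CELLSV2_SETUP=${CELLSV2_SETUP:-unset}"
```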
00:29 <openstackgerrit> Matt Riedemann proposed openstack/nova master: Fix live migration grenade ceph setup  https://review.openstack.org/508271
00:29 <mriedem> mtreinish: check out this crazy shit ^
02:41 <vivsoni> i am trying to attach an FC volume, then attach an iSCSI volume, to a nova instance.. while attaching, it creates proper vlun entries in /dev/disk/by-path/
02:41 <vivsoni> but after detaching the iscsi volume, some of the lun entries are still intact in the /dev/disk/by-path directory
02:44 <vivsoni> it would be great if someone can point me to the code from where the lun entries are created in /dev/disk/by-path
02:45 <vivsoni> i suspect some 'iscsiadm' or similar 'rescan' command is triggered, due to which the lun entries are created in /dev/disk/by-path... please correct me if i am wrong
02:45 <vivsoni> thanks !!!
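For context on vivsoni's question: the attach/detach plumbing lives in the os-brick library rather than nova itself, and the /dev/disk/by-path links are created by udev once the kernel sees the LUN after a login or rescan. A rough sketch of the kind of commands involved (flags from memory and values are placeholders; treat this as an assumption — the authoritative logic is in os-brick's iSCSI connector):

```shell
# Rough sketch of iSCSI attach/detach commands of the kind os-brick issues.
# TARGET_IQN and PORTAL are placeholders.
iscsiadm -m node -T "$TARGET_IQN" -p "$PORTAL:3260" --login  # login; udev then creates /dev/disk/by-path links
iscsiadm -m session --rescan                                 # rescan an existing session for new LUNs

# On detach, the SCSI device should be deleted before logout; a stale
# by-path link usually means this per-device delete did not happen:
echo 1 > "/sys/block/sdX/device/delete"
```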
03:37 <mriedem> jaypipes: dansmith: i did it! http://logs.openstack.org/18/507918/6/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/d4f175d/logs/screen-n-sch.txt.gz#_Oct_04_18_32_08_753794
03:37 <mriedem> reproduced that 409 during claim resources in the scheduler, creating 1000 instances at once
03:40 <mriedem> retried 18 times across that 1000
03:40 <mriedem> one of those poor saps just couldn't hack it
03:40 <mriedem> need to run with https://review.openstack.org/#/c/507705/2/nova/scheduler/client/report.py to find out which one i guess
03:41 <openstackgerrit> Matt Riedemann proposed openstack/nova stable/pike: Log consumer uuid when retrying claims in the scheduler  https://review.openstack.org/509961
04:43 <openstackgerrit> melanie witt proposed openstack/nova master: Improve the CellDatabases test fixture and usage  https://review.openstack.org/508432
04:44 <openstackgerrit> melanie witt proposed openstack/nova master: Target context for build notification in conductor  https://review.openstack.org/509967
04:44 <openstackgerrit> melanie witt proposed openstack/nova master: Elevate existing RequestContext to get bandwidth usage  https://review.openstack.org/509968
05:11 <hanish> one of my compute nodes is disabled because 10 vm launches failed; how can i recover that node?
05:20 <Tengu> you must re-enable the nova agent, hanish
05:25 <hanish> @Tengu: i restarted the nova-compute agent on the compute node, but i am still facing the issue
05:29 <Tengu> hmmm nope, not via systemctl
05:29 <Tengu> there's an openstack command for that, in order to re-enable it at openstack level
05:32 <Tengu> hanish: there's something like nova service-list - you should see it's disabled for your host.
05:32 <Tengu> then you have nova service-enable
05:32 <Tengu> hanish: I don't remember the "openstack unified" command for those.
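Spelled out, the sequence Tengu is describing looks like the following; the unified openstackclient form he could not recall is `openstack compute service set`. The host name is a placeholder, and the legacy `nova service-enable` argument order is per that era's client:

```shell
# Re-enable a disabled nova-compute service at the API level
# (restarting the agent alone does not clear the disabled flag).
HOST=compute-01   # placeholder for the disabled compute node

nova service-list                                            # Status column shows "disabled"
nova service-enable "$HOST" nova-compute                     # legacy novaclient form
openstack compute service set --enable "$HOST" nova-compute  # unified openstackclient form
```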
05:38 <hanish> tengu: thanks
06:17 <Tengu> hanish: did it do the trick?
06:20 <hanish> Tengu: thanks, yes it worked.
06:23 <Tengu> hanish: good :).
06:23 <Tengu> I had that kind of issue earlier with tripleO.
06:36 <openstackgerrit> Takashi NATSUME proposed openstack/nova-specs master: Abort Cold Migration  https://review.openstack.org/334732
06:50 <takashin> gmann: cdent: I have modified the spec for the "Abort cold migration" function. Would you review https://review.openstack.org/#/c/334732/ again?
06:50 <openstackgerrit> zhangyangyang proposed openstack/nova master: Move libvirts qemu-img support to privsep  https://review.openstack.org/507848
08:24 <zioproto> hello :) Working on Newton, I have a region in my cloud where openstack usage list ends in a stacktrace: Instance not found
08:24 <zioproto> looking at the nova bugs I did not find anything useful
08:25 <zioproto> tracking my logs, it looks like this has been broken since I upgraded to Newton
08:25 <zioproto> dansmith: usually you like these database stories :)
08:26 *** bauzas is now known as bauwser
08:26 <bauwser> zioproto: stacktrace ?
08:27 <bauwser> zioproto: by Newton, we began to use the API DB
08:27 <zioproto> https://pastebin.com/gtJbutvi
08:28 <zioproto> it is funny because this happens only in 1 production region
08:28 <zioproto> I have the same setup on dev/staging/prod and there it works
08:28 <zioproto> my feeling is that there is a broken database entry, or something specific to that region that breaks it
08:29 <zioproto> of course openstack server show 72afef44-1a4b-46e5-8cbc-7bd4f0eb31ff gives me an instance not found as well
08:29 <zioproto> what other tables should I dig into to look for this uuid ?
08:32 <gmann> takashin: thanks. i will check soon.
08:33 <takashin> gmann: Thanks in advance.
08:39 <bauwser> zioproto: are you aware of those commands when you deploy a Newton cloud ? https://docs.openstack.org/nova/latest/cli/nova-manage.html#man-page-cells-v2
08:40 <zioproto> bauwser: I think I did this when I upgraded to Mitaka
08:40 <zioproto> to split the nova db into two DBs
08:41 <bauwser> zioproto: if I were you, I'd be looking at the nova_api DB for the instance_mappings table
08:41 <bauwser> zioproto: and check which cell is for the instance UUID
08:41 <zioproto> I go have a look
08:42 <bauwser> zioproto: then looking at host_mappings if we have a cell record for each host in it
08:52 <zioproto> bauwser: my host_mappings table is empty, is that a bad sign ?
08:53 <bauwser> zioproto: indeed, how many hosts do you have?
08:53 <zioproto> you mean compute hosts ?
08:53 <zioproto> like hundreds
08:56 <zioproto> looks like we upgraded to newton without doing this cells housekeeping
08:56 <zioproto> was this mandatory at this point ?
08:56 <zioproto> to have a cell0 database ?
09:01 <bauwser> zioproto: I don't exactly remember when we used the api DB for getting the instance list
09:02 <bauwser> zioproto: but you should definitely map the hosts
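The checks bauwser suggests, written out as a sketch. The SQL matches the Newton-era nova_api schema as I recall it, the instance UUID is the one from zioproto's paste above, and the transport URL is a placeholder:

```shell
# Look up which cell the failing instance is mapped to, and whether any
# hosts are mapped at all, in the nova_api database.
mysql nova_api -e "SELECT instance_uuid, cell_id FROM instance_mappings
                   WHERE instance_uuid = '72afef44-1a4b-46e5-8cbc-7bd4f0eb31ff';"
mysql nova_api -e "SELECT host, cell_id FROM host_mappings;"

# If host_mappings is empty, the cells v2 bookkeeping from the nova-manage
# man page linked above was skipped; simple_cell_setup creates cell0 and
# maps the existing hosts and instances.
nova-manage cell_v2 simple_cell_setup --transport-url "rabbit://user:pass@rabbithost:5672/"
```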
09:06 <openstackgerrit> zhangyangyang proposed openstack/nova master: Move libvirts qemu-img support to privsep  https://review.openstack.org/507848
09:07 <openstackgerrit> zhangyangyang proposed openstack/nova master: Move libvirts qemu-img support to privsep  https://review.openstack.org/507848
09:15 <openstackgerrit> zhangyangyang proposed openstack/nova master: Move libvirts qemu-img support to privsep  https://review.openstack.org/507848
10:35 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Transform instance.exists notification  https://review.openstack.org/403660
10:35 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Add sample test for instance audit  https://review.openstack.org/480955
11:03 <openstackgerrit> Merged openstack/nova master: Blacklist test_extend_attached_volume from cells v1 job  https://review.openstack.org/509907
11:17 <gibi> hm, https://review.openstack.org/509907 has been merged, so I guess it is recheck time
11:25 <zioproto> bauwser: in nova-manage cell_v2 simple_cell_setup [--transport-url <transport_url>], what does <transport_url> look like ?
11:28 <zioproto> ok I found it here https://docs.openstack.org/nova/latest/user/cells.html
11:35 *** efried is now known as fried_rice
11:43 <fried_rice> jaypipes yt?  Wanted to brainstorm a couple of edge cases.
12:06 <gibi> could a second core look at this code removal patch? https://review.openstack.org/#/c/505164/
12:18 <gibi> bauwser: hi! you can still support my embarrassment in https://review.openstack.org/#/c/509750 if you would like to :)
12:19 * cdent gives fried_rice some soy sauce
12:19 <fried_rice> Gluten free, if you please.
12:20 <fried_rice> cdent Perhaps you'd be willing to give some feedback on these edge cases that came to me in my sleep (or lack thereof) last night.
12:20 * cdent swaps out for some tamari
12:21 <cdent> I can try, but these cases keep confusing me; since it will probably help to talk about it, shoot.
12:21 <fried_rice> If I ask for CUSTOM_FOO:4, is it *ever* legal for placement to give me 3 CUSTOM_FOOs from one RP and 1 from a different RP?
12:22 <cdent> In the fundament case of resource providers and inventory, no
12:22 <cdent> the request is for a chunk of size 4
12:22 <fried_rice> Here's the thing: it makes total sense for the answer to be "yes" for something like VFs on separate PFs, all other traits being equal.  Because e.g. what if I ask for 4 VFs and I've only got 2 on each PF?
12:23 <cdent> right, I was just going to say “vfs make that weird"
12:23 <fried_rice> But even there, if I'm asking for e.g. bandwidth inventory along with my VF count, how do I split that up?
12:23 <cdent> and also why I said “fundament[al]” because I think nested makes some of these decisions less clear
12:24 <cdent> (btw, it is great that you are exploring this stuff)
12:24 <fried_rice> And it makes *no* sense for something like DISK_GB.  If I ask for 4, I don't want you giving me 1GB from each of 4 providers.
12:24 * cdent nods
12:24 <fried_rice> So let's put that to bed and say Nay.
12:24 <cdent> this issue may be why in some conversations VFs have been proposed as resource providers
12:24 <fried_rice> eek
12:25 <cdent> ikr
12:25 <fried_rice> I mean, I guess you could do that, in the "pre-create" case like current VF passthrough does.
12:25 <fried_rice> No good in the "create VF dynamically" case of the future.
12:25 <fried_rice> cdent Okay, so next thing:
12:26 <fried_rice> Back to the VF scenario, if I ask for VF:2,BANDWIDTH:20000
12:26 <fried_rice> I'm answering my own question.
12:26 <zioproto> bauwser: still around here ?
12:26 <fried_rice> Those are total chunks on the RP.
12:26 <zioproto> it turns out that the uuid from the stacktrace is pretty unique
12:26 <fried_rice> Placement doesn't care how they're going to be split up
12:27 <zioproto> it is the only instance in the cloud that matches this query
12:27 <zioproto> select * from instances where vm_state="shelved_offloaded"
12:27 <fried_rice> That's up to the virt driver once it gets that information (which we still don't have a way of doing yet - discussion Monday)
12:27 <cdent> yup
12:27 <fried_rice> So I guess the op would have to assume each VF will get 10000.  And if they want a different split, they can specify them as separate request numbers (per the spec I'm composing).
12:28 <fried_rice> Cool cool.
12:28 <fried_rice> cdent Thanks for sounding-boarding.
12:28 <cdent> you’re welcome
12:28 <fried_rice> (Hopefully it wasn't too much like water-boarding)
12:28 <cdent> not today
12:29 <fried_rice> jaypipes I don't need you anymore.
12:30 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Add snapshot id to the snapshot notifications  https://review.openstack.org/453077
12:31 <fried_rice> cdent Oh, BUT if I ask for CUSTOM_FOO:3 and CUSTOM_BAR:2, placement *could* give me those guys from different RPs
12:31 <fried_rice> That is, 3 CUSTOM_FOOs from RP1 and 2 CUSTOM_BARs from RP2
12:32 <cdent> fried_rice: it depends on how you are forming your request and what sort of sharing or nested relationship there is between rp1 and rp2
12:32 <jaypipes> fried_rice: heh
12:33 <cdent> in a pre-nested universe rp1 and rp2 would need to be associated in the same aggregate
12:33 *** jaypipes is now known as leakypipes
12:33 <fried_rice> leakypipes  I take it back; feel free to weigh in on the latest craziness
12:33 * leakypipes reads back
12:33 <fried_rice> cdent Same aggregate, interesting.
12:33 <fried_rice> What if there are no aggregate associations?
12:34 <cdent> fried_rice: same _placement_ aggregate (which is not the same as a nova aggregate)
12:34 <fried_rice> Oh, I get it, because that's the only way you don't get VCPU from one host and MEM_MB from another.
12:34 * cdent nods
12:34 <fried_rice> Does RT implicitly create aggregates today?
12:35 <cdent> not yet, shared is only implemented on the placement side so far, not the nova side
12:35 <cdent> and it got punted down the priority stack
12:36 <fried_rice> Well.
12:36 <fried_rice> That's going to get interesting.
12:37 <fried_rice> So *with* nested but *without* shared, we would have to enforce that semantic as "same root provider".
12:37 * cdent nods
12:38 <fried_rice> FYI, the semantic I'm wrestling with is how resources are bound together (or not) with this numbered syntax thingy.
12:39 <fried_rice> And what I've been leaning towards is: When you use the (existing) "un-numbered" deal, the rule is "same root provider".  But when you use a "numbered" deal, the rule is "same *provider*".
12:39 <fried_rice> cdent leakypipes ^
12:39 <cdent> that’s kind of been my assumption too
12:40 <fried_rice> We've gotta have a way to express that second thing; otherwise I might wind up with my VF inventory from one PF and my bandwidth inventory from another.
12:40 <fried_rice> But
12:40 <fried_rice> Do we need to be able to do that first thing at all?
12:41 <fried_rice> Just thinking out loud here.  I believe the answer is "yes".  Because it's the most flexible and simplest UX.
12:41 <cdent> I think “yes” is correct, because we may not care, we just want some stuff
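An illustration of the two binding semantics being discussed, using the numbered-group request syntax from the spec fried_rice says he is composing. That spec was still in draft at this point, so the exact parameter names and resource class names here are assumptions, not settled API:

```shell
# Hypothetical GET /allocation_candidates requests against the placement API.
# Un-numbered group: resources may be spread across providers in the same
# tree (or sharing aggregate):
curl -s "$PLACEMENT/allocation_candidates?resources=VCPU:2,MEMORY_MB:2048,DISK_GB:40"

# Numbered group: everything in "resources1" must come from one provider,
# so the VFs and their bandwidth land on the same PF:
curl -s "$PLACEMENT/allocation_candidates?resources=VCPU:2,MEMORY_MB:2048&resources1=SRIOV_NET_VF:2,CUSTOM_BANDWIDTH:20000"
```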
12:41 <fried_rice> Cool deal.  And I can't think of a case you couldn't cover between those two options.
12:41 <fried_rice> Except the ones for which we need aggregates.
12:42 <fried_rice> Whereupon the "same root provider" rule extends to "same root provider *or* aggregate"
12:42 <fried_rice> Hum, (how) does aggregate-ness propagate around a tree?
12:43 * cdent doesn’t know
12:44 <fried_rice> Was gonna say, if it applies to a whole tree, the above reduces to "same aggregate".  But that doesn't cover the case where we didn't actually declare any aggregates.
12:44 <fried_rice> I'm inclined to think of an aggregate as kind of a special case of a trait.
12:45 <cdent> I’m not entirely following that logic?
12:45 <fried_rice> In which case, aggregates should propagate downward (from parent to child) like traits.
12:46 <fried_rice> Yeah, it's not fully formed in my bean.  Something like...
12:46 <leakypipes> fried_rice: ack
12:47 <fried_rice> An aggregate is an "implicit trait" that you don't actually ask for, but that we kinda add to the request as we go along.  That is, once we pick a RP for one piece of the request, we implicitly add its invisible-aggregate-trait for purposes of the rest of the request, so we only get the rest of the inventory from RPs with that same invisible-aggregate-trait.
12:48 <fried_rice> leakypipes What were you acking?
12:48 <leakypipes> fried_rice: ack on thing above about when you use a numbered deal, that means same provider.
12:48 <fried_rice> leakypipes Cool
12:48 <leakypipes> fried_rice: aggregates don't have traits. aggregates are nothing but grouping mechanisms
12:49 <fried_rice> yah, I get that, but am I off base logically thinking of them as described ^x4?
12:51 <cdent> leakypipes: am I right that the nested stack awaits resolution of the no-orm stack (want to mention that in the rp update if it is in fact true)?
12:54 <fried_rice> cdent That's what I have been led to understand.
12:54 <fried_rice> It also occurred to me that I hadn't seen the code on the placement side that handles tree-ness for GET /allocation_candidates
12:55 <fried_rice> Like, the SQL magic that does downward trait propagation and suchlike.
12:55 <fried_rice> At least, I don't *think* I've seen that.
12:55 <cdent> doesn’t exist yet as far as I’m aware?
12:55 <cdent> parts of it will be in https://review.openstack.org/#/q/topic:bp/nested-resource-providers+status:open
12:56 <fried_rice> cdent Yeah, I reviewed that stack and don't recall seeing the bits I'm talking about.
13:02 <openstackgerrit> OpenStack Proposal Bot proposed openstack/os-vif stable/ocata: Updated from global requirements  https://review.openstack.org/490256
openstackgerritMatthew Booth proposed openstack/nova-specs master: Virtual instance rescue with stable disk devices  https://review.openstack.org/51010613:14
*** lbragstad has joined #openstack-nova13:15
mdboothlyarwood: ^^^13:15
mdboothlyarwood: Although it's your spec with only request REST API changes13:16
*** ygl has joined #openstack-nova13:16
*** takedakn has quit IRC13:18
cdenthttps://thebritishdrea.com/?text=Nested+providers+will+fix+That13:18
*** yamamoto has quit IRC13:21
*** mingyu has quit IRC13:21
*** gbarros has joined #openstack-nova13:21
mdboothcdent: Lol13:24
*** mingyu has joined #openstack-nova13:25
*** cdent has quit IRC13:26
*** ygl has quit IRC13:27
*** dansmith is now known as superdan13:28
*** jmlowe_ has quit IRC13:29
*** mingyu has quit IRC13:30
*** jmlowe has joined #openstack-nova13:30
*** jmlowe_ has joined #openstack-nova13:33
*** trinaths has joined #openstack-nova13:33
*** spectr has quit IRC13:35
*** jmlowe has quit IRC13:35
*** yamamoto has joined #openstack-nova13:35
*** lbragstad has quit IRC13:36
*** lbragstad has joined #openstack-nova13:36
*** spectr has joined #openstack-nova13:36
*** artom has joined #openstack-nova13:38
*** trinaths1 has joined #openstack-nova13:38
*** mriedem has joined #openstack-nova13:39
*** gouthamr has joined #openstack-nova13:39
*** artom_ has joined #openstack-nova13:39
andreykurilinmriedem: hi! Do you know anything merged recently which could affect pagination?13:40
superdanandreykurilin: pagination of instances?13:40
andreykurilinyes13:40
superdanandreykurilin: what are you seeing?13:40
*** trinaths has quit IRC13:41
mriedemduh duh duh13:41
andreykurilinsuperdan: marker doesn't work in some cases13:41
superdanandreykurilin: heh, any more detail than that? :)13:41
mriedemandreykurilin: have a failed job log?13:42
mriedemnovaclient or rally?13:42
andreykurilinsuperdan: sure, just need to collect links:)13:42
andreykurilingive me a sec13:42
*** artom has quit IRC13:42
*** lucas-hungry is now known as lucasagomes13:44
andreykurilinrally gates are failing due to an issue with pagination. we use the limit=-1 option from novaclient. It is designed to make an inf loop changing the marker until the response includes an empty list. For some reason the API ignores the marker in some cases. I copy-pasted the code from novaclient and added some debug messages13:44
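[editorial note] The limit=-1 behaviour being described is roughly this loop — a simplified sketch against a fake page-fetching function, not the actual novaclient code:

```python
def list_all(fetch_page):
    """Page through servers the way novaclient's limit=-1 does:
    keep requesting with the last item as the marker until an
    empty page comes back.

    fetch_page(marker) -> list of server dicts with a 'uuid' key.
    """
    servers = []
    marker = None
    while True:
        page = fetch_page(marker)
        if not page:
            break
        servers.extend(page)
        marker = page[-1]["uuid"]
    return servers

# Fake paginated backend: returns items strictly after the marker.
DATA = [{"uuid": u} for u in ("a", "b", "c")]

def fake_fetch(marker, limit=2):
    start = 0
    if marker is not None:
        start = next(i for i, s in enumerate(DATA)
                     if s["uuid"] == marker) + 1
    return DATA[start:start + limit]

print([s["uuid"] for s in list_all(fake_fetch)])  # ['a', 'b', 'c']
```

The failure mode in the bug is exactly this loop not terminating: if the API returns a page that still contains the marker, `marker` never advances and the loop spins forever.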
andreykurilinhere is a ok execution - http://logs.openstack.org/83/509783/3/check/gate-rally-dsvm-neutron-existing-users-rally/1a480ab/console.html#_2017-10-05_23_02_16_89262013:44
openstackgerritBalazs Gibizer proposed openstack/nova master: Add error notification for instance.interface_attach  https://review.openstack.org/50664313:45
andreykurilinand just after several seconds, there is one more execution (another iteration of the workload) and it gets stuck13:45
andreykurilinhttp://logs.openstack.org/83/509783/3/check/gate-rally-dsvm-neutron-existing-users-rally/1a480ab/console.html#_2017-10-05_23_02_19_44123113:45
andreykurilinhere is the code which I'm using for debugging purposes - https://review.openstack.org/#/c/509783/3/rally/plugins/openstack/scenarios/nova/utils.py13:45
superdanandreykurilin: is that implying that you get a page with a marker and you get back a page with the marker in it?13:46
andreykurilinit is equal to what we have in novaclient but with some debug messages as I already mentioned13:46
andreykurilinsuperdan: yes13:46
andreykurilini think so13:46
superdanhrm13:47
andreykurilinbut it doesn't happen regularly. In some cases the page includes the marker, in others it doesn't13:47
superdanwhat sort key are you using?13:47
andreykurilinsuperdan: due to the migration to Zuul v3 I cannot say when it actually started happening, but I can assume 2 days ago13:47
andreykurilinno sort keys13:47
superdanandreykurilin: yeah I know what changed, so no question there13:48
andreykurilinsuperdan: here is a query http://logs.openstack.org/83/509783/3/check/gate-rally-dsvm-neutron-existing-users-rally/1a480ab/console.html#_2017-10-05_23_02_19_45228313:48
superdanandreykurilin: okay so default sort13:48
andreykurilinjust marker in query, nothing more13:48
*** links has quit IRC13:48
*** manasm has quit IRC13:48
superdanandreykurilin: are these instances from a num_instances=N type create operation?13:48
*** lajoskatona has quit IRC13:48
superdanandreykurilin: such that they probably have very similar create times?13:49
*** awaugama has joined #openstack-nova13:49
andreykurilinno13:49
superdanandreykurilin: no meaning they were created one at a time in a client loop?13:49
andreykurilinyes13:50
andreykurilinsec13:50
superdanand how many(ish)?13:50
andreykurilinsuperdan: there are 2 instances which are created at the same time (within ~1 sec), but from different threads and with different names13:50
*** cdent has joined #openstack-nova13:50
superdanandreykurilin: so you're literally paging through two instances?13:51
andreykurilinyes. just need to mention that there are 2 cases and both failed. in the first, boot_and_list actions are performed twice at the same time. in the second, the list action is performed once after both vms are booted13:52
*** edmondsw has joined #openstack-nova13:53
*** lajoskatona has joined #openstack-nova13:53
superdanandreykurilin: okay so limit=1 then?13:53
*** jdwidari has quit IRC13:54
superdanandreykurilin: and both instances are ACTIVE right?13:54
*** jwcroppe has joined #openstack-nova13:55
*** READ10 has quit IRC13:57
bauwserzioproto: sorry, I have a huge internal backlog to do13:57
*** edmondsw has quit IRC13:57
*** smatzek has quit IRC13:59
*** sree has quit IRC13:59
andreykurilinsuperdan: so there are 2 cases. The shared logging relates to the first case, but nova behaves the same in the second case. Let me describe it in more detail. there are 2 threads which perform boot_and_list actions.  the listing is performed right after the vm becomes active. Both threads are using the same user and tenant13:59
zioprotobauwser: no worries !14:00
*** sree has joined #openstack-nova14:00
*** smatzek has joined #openstack-nova14:00
andreykurilinsuperdan: in this case the first thread performs the list action successfully (using the limit=-1 option of novaclient) and the second thread fails14:00
*** ratailor has joined #openstack-nova14:00
superdanandreykurilin: and what does limit=-1 mean to novaclient?14:00
mriedempage until there is nothing returned i think14:01
andreykurilinyes14:01
superdanright but with what limit to the api?14:01
superdanno limit= default?14:01
andreykurilinno limit14:01
mriedemso default limit of 100014:01
andreykurilinyes14:01
superdanokay, so this really should get both instances in the first page,14:02
superdantry another with result[-1] and get an empty page, yes?14:02
andreykurilin`marker = result[-1]` gives the same page as previously with marker in it14:03
superdanright, I was describing what _should_ be happening14:03
*** hongbin has joined #openstack-nova14:03
andreykurilinyes14:03
superdanokay14:03
superdanI might have an idea of what is going on, but I need to do some experimentation14:04
superdanandreykurilin: in the meantime, can you alter that loop a bit just to see if it helps?14:04
gibicburgess: hi! Is there any next step about https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-set-queue-sizes I can look at / help with?14:04
andreykurilinsuperdan: sure14:04
superdanandreykurilin: can you set the sort_keys=['uuid']14:04
*** catintheroof has joined #openstack-nova14:05
*** sree has quit IRC14:05
*** slaweq has quit IRC14:05
superdanalthough that really shouldn't matter since we're only iterating instances in a single cell db here14:05
superdanandreykurilin: and I can throw up a nova patch you can depends-on right?14:05
superdanandreykurilin: got a bug number for this yet?14:06
andreykurilinsuperdan: yes, we can do depends-on to check the fix. no, I do not have a bug report14:06
*** catinthe_ has joined #openstack-nova14:06
*** catintheroof has quit IRC14:07
andreykurilinsuperdan: made a patch to check sort_keys, but based on the zuul queue the results will take several hours14:08
*** archit has joined #openstack-nova14:09
andreykurilinsuperdan: should I create a bug report?14:11
superdanandreykurilin: yeah please create a bug and I'll do some debugging14:11
superdansorry, I'm stuck on a call atm14:11
andreykurilinthanks14:11
*** bnemec is now known as beekneemech14:11
*** spectr has quit IRC14:11
*** namnh has joined #openstack-nova14:12
*** alexchadin has quit IRC14:15
*** gouthamr has quit IRC14:16
*** felipemonteiro_ has joined #openstack-nova14:21
andreykurilinsuperdan: https://bugs.launchpad.net/nova/+bug/172179114:21
openstackLaunchpad bug 1721791 in OpenStack Compute (nova) "Pagination of instances works incorrect" [Undecided,New]14:21
*** alexchadin has joined #openstack-nova14:22
*** felipemonteiro__ has joined #openstack-nova14:22
superdanandreykurilin: thanks, I'm not sure how this is happening, but I'm really distracted on this call14:22
superdanandreykurilin: are you going to be around for a while?14:22
andreykurilinnp, I'm planning to be around :)14:23
*** alexchadin has quit IRC14:23
*** gyee has joined #openstack-nova14:24
*** artom_ is now known as artom14:25
*** felipemonteiro_ has quit IRC14:26
*** armax has joined #openstack-nova14:26
openstackgerritMerged openstack/nova master: Remove dest node allocations during live migration rollback  https://review.openstack.org/50768714:30
superdanoooh, I might have a recreate14:33
cdentmust be because you’re super14:34
superdanandreykurilin: do you do a regular unpaged list after the fail at all?14:35
superdanandreykurilin: I kinda feel like one of the instances has to be in ERROR state, in cell0 to make this happen14:35
andreykurilinsuperdan: while doing an unpaged list, both instances are returned. After making a boot request, we fetch the status of the VM and do not continue until it becomes ACTIVE. Both VMs returned ACTIVE status14:38
andreykurilinso I'm pretty sure that they are not in ERROR while listing14:39
superdanandreykurilin: okay they should be sorted by created_at,id which is stable if you only have one database. Unless you have multiple cells here, or instances in cell0, I'm not sure how you could end up with unstable sort14:40
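[editorial note] The sort-stability point above can be shown in miniature: when two rows tie on every sort key, a SQL-style "rows after the marker" comparison cannot order them, so pages can lose or repeat rows; a unique tiebreaker such as uuid makes the ordering total. This is an illustrative toy, not the nova code:

```python
# Two instances created in the same second: sorting only by created_at
# cannot distinguish them, so marker pagination is unstable.
a = {"uuid": "aaa", "created_at": "2017-10-05T23:02:16"}
b = {"uuid": "bbb", "created_at": "2017-10-05T23:02:16"}

def after(rows, marker, sort_keys):
    """Rows strictly greater than the marker under sort_keys, the way
    a SQL marker clause filters the next page."""
    key = lambda r: tuple(r[k] for k in sort_keys)
    return [r["uuid"] for r in sorted(rows, key=key) if key(r) > key(marker)]

# Ties on created_at: neither row compares "after" the marker, so the
# page after 'aaa' silently loses 'bbb' (symmetric bugs can repeat rows).
print(after([a, b], a, ["created_at"]))          # []
# A unique tiebreaker makes the ordering total:
print(after([a, b], a, ["created_at", "uuid"]))  # ['bbb']
```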
*** beekneemech has quit IRC14:40
andreykurilinsuperdan: it is dsvm job with a single node. it doesn't have any special configs14:41
superdanyeah14:42
openstackgerritDan Smith proposed openstack/nova master: WIP Always put 'uuid' into sort_keys for stabile instance lists  https://review.openstack.org/51014014:42
superdanandreykurilin: can you try with this in place ^ ?14:42
*** bnemec has joined #openstack-nova14:42
superdanif you revert the functional part of that change, the test added fails in the same way14:43
andreykurilinok, will make a depends on patch14:43
andreykurilinsuperdan: btw, performance of the list action is quite good. before I added a limit to the loop, debug messages flooded the log file with 10gb of text (until jenkins killed the job on timeout) :D14:51
superdanandreykurilin: hah, cool14:51
*** yamamoto has quit IRC14:52
*** yamamoto has joined #openstack-nova14:53
*** yamamoto has quit IRC14:53
*** mdnadeem has quit IRC14:56
*** spectr has joined #openstack-nova15:04
superdanandreykurilin: hmm, actually, that test isn't fully stable, so I need to keep working on it15:04
*** spectr has quit IRC15:04
mriedemfried_rice: merry friday https://review.openstack.org/#/c/488137/2215:05
mriedemi didn't -1, but i'm sort of inlined to15:05
fried_ricemriedem ack, looking.15:05
*** gouthamr has joined #openstack-nova15:07
*** links has joined #openstack-nova15:07
*** tbachman has joined #openstack-nova15:07
*** bnemec has quit IRC15:08
*** ratailor has quit IRC15:09
*** bnemec has joined #openstack-nova15:09
*** mriedem is now known as ronlund15:09
*** Oku_OS is now known as Oku_OS-away15:11
cdentronlund: nobody wants to love on your doc fix? https://review.openstack.org/#/c/502168/15:13
*** READ10 has joined #openstack-nova15:15
*** MVenesio has quit IRC15:16
cdentfried_rice: you still have https://review.openstack.org/#/c/499826/ in your mind? what do we need to do to resolve that?15:17
fried_rice...15:17
cdenti hear that15:18
fried_ricecdent I actually keep forgetting to put it on the "stuck reviews" list for the nova meetings.15:18
fried_riceIt really seems like overkill to put up a whole microversion for that change.15:18
cdentyah15:19
fried_riceBut the process nazis would freak out if we slid it into some unrelated change that's doing a legit microversion bump.15:19
fried_riceAnd I *totally* have no problem doing it without a microversion bump.  If the rules forbid such a change, the rules are silly.15:19
*** rcernin has quit IRC15:19
openstackgerritIldiko Vancsa proposed openstack/nova master: update live migration to use v3 cinder api  https://review.openstack.org/46398715:19
fried_riceBut I clearly don't get to make that call.15:20
* cdent shrugs15:20
fried_ricecdent Makes it harder that we cut a release since the original splitup was done.15:20
* cdent nods15:21
fried_riceyou know, the one that didn't cut a new microversion when it changed the API in a similar (but more extensive) way than this.15:21
fried_ricecdent Guess I'll add it to "stuck reviews" now while I'm thinking about it, and we can discuss it next Thursday.15:22
cdentan astute plan15:22
fried_ricecdent Thanks for the reminder15:22
cdentwas doing my weekly cruise of placement tagged bugs15:22
*** jaosorior has quit IRC15:24
*** shvepsy has quit IRC15:25
*** shvepsy has joined #openstack-nova15:25
cdentgibi: you seen https://bugs.launchpad.net/nova/+bug/1721652 ? references a change you made as the potential cause15:25
openstackLaunchpad bug 1721652 in OpenStack Compute (nova) "Evacuate cleanup fails at _delete_allocation_for_moved_instance" [Undecided,New]15:25
gibicdent: looking...15:27
openstackgerritEd Leafe proposed openstack/nova master: Add alternate hosts  https://review.openstack.org/48621515:28
openstackgerritEd Leafe proposed openstack/nova master: Add Selection objects  https://review.openstack.org/49923915:28
openstackgerritEd Leafe proposed openstack/nova master: Return Selection objects from the scheduler driver  https://review.openstack.org/49585415:28
openstackgerritEd Leafe proposed openstack/nova master: Change RPC for select_destinations()  https://review.openstack.org/51015915:28
*** jwcroppe has quit IRC15:28
*** jwcroppe has joined #openstack-nova15:29
*** smatzek has quit IRC15:31
*** smatzek has joined #openstack-nova15:32
*** jwcroppe has quit IRC15:33
*** baoli has quit IRC15:34
*** baoli has joined #openstack-nova15:35
*** smatzek has quit IRC15:36
*** jwcroppe has joined #openstack-nova15:36
gibicdent: I can confirm that bug based on looking at the code. It seems the functional test somehow did not catch it15:37
cdentgibi: cool, I figured you would know what was going on15:38
gibicdent: I felt save becuause of the functional coverage, but it seems we need a better test for it15:39
gibis/save/safe/15:39
*** edmondsw has joined #openstack-nova15:41
bauwserleakypipes: maybe tracking all the changes for https://review.openstack.org/#/c/509025/ and above would be better if we have a specless BP ?15:42
bauwserleakypipes: of course, not needing a spec15:42
*** cdent has quit IRC15:44
*** gszasz has quit IRC15:44
*** lucasagomes is now known as lucas-afk15:44
*** dtantsur is now known as dtantsur|afk15:44
*** ildikov is now known as coffee_cat15:45
*** edmondsw has quit IRC15:45
*** gjayavelu has joined #openstack-nova15:45
*** yamahata has quit IRC15:46
*** chyka has joined #openstack-nova15:46
ronlundclaudiub: want to send this in? https://review.openstack.org/#/c/509766/15:50
*** smatzek has joined #openstack-nova15:50
*** yamamoto has joined #openstack-nova15:53
*** Apoorva has joined #openstack-nova16:01
*** Apoorva has quit IRC16:02
*** yamamoto has quit IRC16:02
*** Apoorva has joined #openstack-nova16:02
finucannotleakypipes superdan: Could you take a look at this? I don't think we need 'obj_make_compatible' functions because we're not transferring these objects over the wire, but it's good to be sure to be sure https://review.openstack.org/#/c/508498/16:08
*** bnemec has quit IRC16:08
superdanfinucannot: I'm in the middle of something deep right now, but we're registering those objects which means they can go over the wire, which means they need to have the make_compat routine16:09
superdanI'm sure leakypipes can speak to the over-the-wire-ness of now and future16:09
openstackgerritMerged openstack/nova master: stabilize test_resize_server_error_and_reschedule_was_failed  https://review.openstack.org/50975016:10
finucannotsuperdan: Ta. Holding for leakypipes16:10
*** gjayavelu has quit IRC16:10
*** pcaruana has quit IRC16:13
*** jwcroppe has quit IRC16:13
*** mingyu has joined #openstack-nova16:15
*** andreas_s has quit IRC16:16
*** xyang1 has joined #openstack-nova16:16
*** xyang1 has quit IRC16:16
*** jwcroppe has joined #openstack-nova16:16
*** xyang1 has joined #openstack-nova16:17
*** fried_rice is now known as fried_rice_afk16:18
andreykurilinsuperdan: the first patch, where I put sort_key=["uuid"] fixed an issue. the second patch(the check for a fix at nova's side) still waits for a resources at CI16:19
superdanandreykurilin: okay16:19
*** mingyu has quit IRC16:20
*** penick has joined #openstack-nova16:23
openstackgerritBalazs Gibizer proposed openstack/nova master: Reproduce bug 1721652 in the functional test env  https://review.openstack.org/51017616:26
openstackbug 1721652 in OpenStack Compute (nova) pike "Evacuate cleanup fails at _delete_allocation_for_moved_instance" [High,Confirmed] https://launchpad.net/bugs/172165216:26
*** penick has quit IRC16:28
*** dikonoor has quit IRC16:28
*** cdent has joined #openstack-nova16:28
gibicdent, ronlund: I started looking into bug 1721652 but I ran out of time for today. I will continue on Monday if nobody feels the urge to take it over.16:30
openstackbug 1721652 in OpenStack Compute (nova) pike "Evacuate cleanup fails at _delete_allocation_for_moved_instance" [High,Confirmed] https://launchpad.net/bugs/172165216:30
*** penick has joined #openstack-nova16:30
ronlundgibi: ok, thanks16:31
*** Tom_ has quit IRC16:33
*** baoli has quit IRC16:34
*** baoli has joined #openstack-nova16:34
melwittwhat's ronlund?16:36
*** baoli has quit IRC16:36
*** baoli has joined #openstack-nova16:37
*** dikonoor has joined #openstack-nova16:38
*** Tom has joined #openstack-nova16:44
superdanmmmmm, yeeeeeeaahhhh16:44
sean-k-mooneyanyone know if we were to abandon https://review.openstack.org/#/c/373293/7 would it prevent the proposal bot updating it in the future.16:44
superdanthat would be greeeeeeaaaat, mmmmkayyy?16:44
superdanwe're just going to go ahead and have to ask you to move your desk to the basement, mmmmmkayy? thaaaaaaankss...16:45
openstackgerritMerged openstack/os-vif master: Add Port Profile info to VIF objects Linux Bridge plugin  https://review.openstack.org/49082916:46
ronlundmelwitt: ron lund is a powerful name16:47
ronlundand is a name you can trust16:47
ronlundfor all your retirement investment needs16:47
ronlundin the greater tristate area16:47
melwittlol ahh16:47
sean-k-mooneyfinucannot: stephen is that you?16:47
ronlundron lund is a man that speaks in the 3rd person, loves turtle necks and has a great dirty blonde mustache16:48
*** Tom has quit IRC16:48
*** Swami has joined #openstack-nova16:48
superdanoh I guess I'm wrong16:48
melwittyours is bill lumbergh16:49
superdanohhh, damn, right16:49
melwittat first I thought maybe ron lund was ron swanson from parks and rec but it wasn't16:50
melwittthat's the only ron I could think of16:50
*** derekh has quit IRC16:52
ronlundi also personally know a ron bruns16:53
ronlundalso a powerful name16:53
ronlundreally anyone named "ron" shouldn't be fucked with16:53
*** baoli has quit IRC16:54
ronlundask maya, she'll tell you16:54
melwittI dunno anyone named ron in real life16:55
ronlundron bruns is a cattleman from baltic, south dakota16:56
ronlundi think he even sold insurance on the side...16:56
melwittan enterprising fellow16:56
cdentsounds like ron doesn’t like taxes16:56
*** esberglu has joined #openstack-nova16:56
*** esberglu has quit IRC16:56
sean-k-mooneycdent: someone likes taxes?16:57
sean-k-mooneypeople tolerate taxes in return for services but i have never met anyone who actually likes them16:58
cdentwell presumably anyone who cares for their fellow person in society has some small amount of like for taxes16:58
*** bnemec has joined #openstack-nova16:59
*** gbarros has quit IRC16:59
cdentbut ron, being a strong ron, sounds like the sort that might reject the federal gov’s right to tax17:00
*** trinaths1 has quit IRC17:01
*** baoli has joined #openstack-nova17:02
penickAww whaaaat "JunoMan signed on at October 4, 2017 at 9:40:36 PM PDT and has been idle for 1 day, 12 hours, 21 minutes, "17:02
*** penick is now known as MrJuno17:02
openstackgerritElod Illes proposed openstack/nova master: WIP: Transform scheduler.select_destinations notification  https://review.openstack.org/50850617:02
ronlundsdague: the "should use allow passing user_data to rebuild" thread has taken a weird path,17:02
*** melwitt is now known as jgwentworth17:02
ronlundsdague: effectively putting me on the fence about whether or not we should add that when removing personality from rebuild17:03
superdanMrJuno: bravo17:03
MrJunoGotta own it17:03
sean-k-mooneyronlund: how so? i have not been following it17:03
ronlundsdague: it's not like it would be hard to add, and we are saying that user_data replaces personality files, and people do love their rebuild17:03
ronlundsean-k-mooney: just the amount of love for rebuild17:04
sdagueronlund: I'd be fine with that17:04
MrJunorybridges I think I shall decree that all members of the OpenStack team at Oath wear the scarlet letter J until we're on Ocata17:04
ronlundi originally assumed that personality files were added to rebuild b/c they aren't persisted like user_data is, but looking at the change that added personality files to rebuild, there was no explanation of why in the commit message17:04
ronlundit predated gerrit so i wasn't surprised by that17:05
jgwentworthyeah, ppl not using floating ips like keeping their ip and volumes stay attached and all that jazz17:05
*** openstackstatus has quit IRC17:05
*** openstack has joined #openstack-nova17:07
*** ChanServ sets mode: +o openstack17:07
cdentronlund: so ron bruns probably feeks like he’d be even happier without that thieving corporation tax17:08
ronlundi don't know his actual feelings on taxes. he's genuinely a nice guy, so i doubt it bothers him that much.17:08
cdenta weak ron17:09
*** MrJuno has quit IRC17:09
openstackgerritMerged openstack/nova master: Add error notification for instance.interface_attach  https://review.openstack.org/50664317:09
*** penick has joined #openstack-nova17:09
*** yamahata has joined #openstack-nova17:13
jgwentworthuh oh, I'm seeing on some gate runs of the py27 unit test job it's not running all the unit tests, only the os profiler test http://logs.openstack.org/66/509766/1/check/gate-nova-python27-ubuntu-xenial/794e1a9/testr_results.html.gz17:13
jgwentworththis is bad17:13
*** jpena is now known as jpena|off17:19
jgwentworthI think maybe it's only happening on stable17:19
*** sambetts is now known as sambetts|afk17:20
*** bnemec is now known as beekneemech17:20
jgwentworthI see it on stable/pike and stable/ocata17:21
jgwentworthstable/newton looks okay17:22
sean-k-mooneyjgwentworth: is it on master17:24
jgwentworthsean-k-mooney: I'm not seeing it on master17:24
*** Tom has joined #openstack-nova17:25
sean-k-mooneyjgwentworth: thats not so bad then because master should prevent anything getting backported if the unit tests fail17:25
sean-k-mooneythough i guess it would not catch dependency issues17:25
jgwentworthyeah, definitely not as bad as on master17:25
superdansean-k-mooney: help, but not prevent.. a backport could assert something that is true on master and not on stable and we'll think it's okay to merge17:26
sean-k-mooneyi guess that has something to do with the zuul v2->v3->v2 changes in the last few weeks17:26
jgwentworthronlund: I noticed on stable/pike and stable/ocata there's something wrong with our unit test jobs and they're running the os test profiler test instead of all of the unit tests ^17:26
ronlundjgwentworth: sounds like an issue for mtreinish17:27
ronlunddid some stestr stuff get mixed up in stable?17:28
sean-k-mooneyjgwentworth: that should be defiend by this job spec correct https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/python-jobs.yaml#L109-L13117:28
*** Tom has quit IRC17:29
*** edmondsw has joined #openstack-nova17:29
jgwentworthI dunno, I'm not familiar with how this works but that looks like probably17:29
*** smatzek has quit IRC17:30
ronlundthe zuulv3 jobs were defined elsewhere17:30
ronlundin openstack-zuul-jobs17:30
ronlundbut that's for zuulv3, the zuulv2 stuff should be as before17:30
openstackgerritMatt Riedemann proposed openstack/nova master: Deprecate allowed_direct_url_schemes and nova.image.download.modules  https://review.openstack.org/51019517:30
ronlundbut i'm no expert17:30
*** gjayavelu has joined #openstack-nova17:30
ronlundi would be suspect of something with stestr but we shouldn't be using that in stable17:30
*** smatzek has joined #openstack-nova17:30
ronlundcdent: btw i got a recreate of the scheduling 409 failure on https://review.openstack.org/#/c/507918/17:31
ronlundhttp://logs.openstack.org/18/507918/6/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/d4f175d/logs/screen-n-sch.txt.gz#_Oct_04_18_32_08_75379417:32
ronlundit doesn't have a logging patch in that run which logs which instance failed and prompted retries17:32
*** links has quit IRC17:32
ronlundbut maybe not necessary to debug?17:32
jgwentworthnewton: https://review.openstack.org/#/c/509441/ ocata: https://review.openstack.org/#/c/509440/ pike: https://review.openstack.org/#/c/509439/17:32
*** smatzek has quit IRC17:32
*** smatzek has joined #openstack-nova17:32
sean-k-mooneylooking at https://review.openstack.org/#/c/509766/ its reporting as jenkins not zuul so i guess the python job is running as zuul v2.5 not v317:32
jgwentworthnewton job is fine, ocata and pike are messed up17:33
ronlundwell i guess we should know because the instance uuid is logged right before it17:33
*** edmondsw has quit IRC17:33
jgwentworthsean-k-mooney: yeah, this is the old jenkins stuff that I'm looking at17:33
jgwentworthmtreinish we need you17:35
*** gszasz has joined #openstack-nova17:36
*** ijw has joined #openstack-nova17:36
ronlundhmm and we only ever put RP inventory once - which is what i expected since it's the fake driver and inventory doesn't change17:37
ronlundhttp://logs.openstack.org/18/507918/6/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/d4f175d/logs/screen-placement-api.txt.gz#_Oct_04_18_27_24_49628317:37
jgwentworthI know that ostestr uses stestr underneath starting in a specific version. so if our ostestr version is sufficiently new, we could be getting stestr behavior17:37
ronlundso what else can cause "Inventory changed while attempting to allocate: Another thread concurrently updated the data." if not updating the RP inventory?17:37
ronlundleakypipes: ^ any ideas?17:37
ronlundother allocations on the same resource provider at the same time i suppose17:38
sean-k-mooneyronlund: should there not be a db lock on the inventory while a transaction is in flight that would prevent multiple concurrent updates17:39
ronlundwe're not actually updating the inventory17:40
ronlundbesides the first time when the RP is created17:40
*** MVenesio has joined #openstack-nova17:40
ronlundwe are making allocations in a loop in the scheduler17:40
ronlundone by one, this isn't concurrent as far as i know17:41
sean-k-mooneyunless you have 2+ schedulers doing this at the same time17:41
sean-k-mooneyjust looking at http://logs.openstack.org/18/507918/6/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/d4f175d/logs/screen-placement-api.txt.gz#_Oct_04_18_27_24_496283 so there we are creating the inventory right?17:42
sean-k-mooneyor is the payload of the put an update17:43
jgwentworthokay, so it's running all the tests, just showing the results of the os profiler run only, so it looks like this is just a display problem17:43
*** armax has quit IRC17:44
jgwentworthlike, it's picking up the wrong results to show in the testr_results html17:44
sean-k-mooneyjgwentworth: thats good because looking at https://raw.githubusercontent.com/openstack-infra/project-config/master/zuul.d/projects.yaml and the job definition everything looks correct17:44
ronlundsean-k-mooney: it's only 1 scheduler17:44
*** oomichi_afk is now known as oomichi17:44
openstackgerritMerged openstack/nova master: api-ref: note that project_id filter only works with all_tenants  https://review.openstack.org/50965017:44
*** avolkov has quit IRC17:45
*** MVenesio has quit IRC17:45
*** yamamoto has joined #openstack-nova17:45
ronlundsean-k-mooney: and yes, PUT /placement/resource_providers/c8d3d366-c0a0-481d-b7e7-b3e31b8b73e8/inventories is updating the inventory for the compute node resource provider with uuid c8d3d366-c0a0-481d-b7e7-b3e31b8b73e817:45
ronlundthat happens when nova-compute starts up and creates the compute node17:45
*** penick is now known as MrJuno17:45
ronlundgotta run to get my license renewed, bbiab17:46
sean-k-mooneyronlund: my point was it's not safe to decrement an inventory by doing a put with the new value; if you have 2 schedulers that will create a race17:46
ronlundsean-k-mooney: the PUT has a generation id in it17:46
ronlundlike an etag17:46
ronlundand we're only doing it once anyway17:47
ronlundso i don't think that's the issue17:47
sean-k-mooneyoh ok so if it does not match the current generation then the second put will fail and retry17:47
ronlundthe 2nd put would fail and the client would have to fetch the latest generation and update their request17:47
*** namnh has quit IRC17:47
ronlundthe server doesn't do it automatically17:47
ronlundthe consumer allocation is what's failing with the 40917:47
ronlundthere is no generation id on the consumer allocation17:47
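[editorial note] The generation mechanism described above is optimistic concurrency control: each write carries the generation the client last saw, and the server rejects it with a 409 if the provider has moved on. A toy client-side sketch with hypothetical class names — not the real placement API or nova's report client:

```python
class ConflictError(Exception):
    """Stands in for placement's 409 'Inventory changed' response."""

class FakePlacement:
    """Toy server: accepts a PUT only if the sent generation matches."""
    def __init__(self):
        self.generation = 0
        self.inventory = {}

    def get(self):
        return self.generation, dict(self.inventory)

    def put(self, generation, inventory):
        if generation != self.generation:
            raise ConflictError("Inventory changed while attempting to update")
        self.inventory = dict(inventory)
        self.generation += 1  # every successful write bumps the generation

def update_with_retry(server, mutate, max_tries=3):
    """GET the current view, apply mutate(), PUT with the generation
    we saw; on a conflict, refetch and retry."""
    for _ in range(max_tries):
        gen, inv = server.get()
        try:
            server.put(gen, mutate(inv))
            return True
        except ConflictError:
            continue
    return False

srv = FakePlacement()
srv.put(0, {"VCPU": 8})
assert update_with_retry(srv, lambda inv: dict(inv, MEMORY_MB=2048))
```

A stale writer loses the race cleanly instead of silently clobbering the other writer's update, which is why two schedulers doing blind PUTs would otherwise be unsafe.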
andreykurilinsuperdan: as expected, your fix works :)17:47
jgwentworthI'm guessing it has something to do with the newer ostestr being stestr underneath and somehow it's messing up the result gathering17:48
sean-k-mooneyya that workflow is fine17:48
ronlundsean-k-mooney: this https://developer.openstack.org/api-ref/placement/#update-allocations17:48
superdanandreykurilin: okay it won't actually work with multiple cells, but I'm polishing off the full fix17:48
superdanandreykurilin: will certainly want you to test that as well17:48
andreykurilinsuperdan: I do not have a multiple cells installation, so I will only be able to test on a regular one17:49
superdanandreykurilin: yep that's fine17:49
*** gszasz has quit IRC17:50
jgwentworth.tox/py27/bin/testr last --subunit vs .tox/py27/bin/stestr last --subunit17:50
*** lajoskatona has quit IRC17:50
sean-k-mooneyjgwentworth: so on master the tox config for py27 is https://github.com/openstack/nova/blob/master/tox.ini#L33 with stestr and on newton we delegate to pretty tox script17:52
leakypipesbauwser, superdan, ronlund: reading back... just got back in.17:53
sean-k-mooneyon pike we use ostestr17:53
*** yamamoto has quit IRC17:53
jgwentworthsean-k-mooney: yeah, I think we're gonna need mtreinish to look at this. especially now that we know it's not urgent, it's running all the tests, just showing the wrong results in the html page17:56
jgwentworthand if the run fails the unit tests, the os profiler thing won't even run, so I think in a fail case it would show the fail results17:57
sean-k-mooneyjgwentworth: looking at stable ocata i think we are overriding the result because we do this https://github.com/openstack/nova/blob/c2aa30b102808882c85d3d3f53d531c4510218cd/tox.ini#L28-L3217:57
leakypipesbauwser, superdan: https://blueprints.launchpad.net/nova/+spec/de-orm-resource-providers17:58
sean-k-mooneyi think we are running all the test first then running the osprofiler tests17:59
jgwentworthsean-k-mooney: yes, it's been like that for a long time. it's like that on master too. it's just I don't know how the results are picked out of that17:59
jgwentworthout of the fact that we have two separate runs I mean17:59
leakypipesfinucannot: we are indeed going to be transferring these objects over the wire in short order. Probably good to get the obj_make_compatible() stuff done sooner or later.17:59
sean-k-mooneyyes but on master we are not using pretty tox anymore we are using stestr18:00
leakypipescdent: sorry, I had to leave before answering your question. yes, I'd like to add the nested resource providers series on to the end of that de-orm series because it makes handling superdan's request to make root_provider_uuid and parent_provider_uuid into root_provider_id and parent_provider_id.18:01
leakypipesronlund: ok, now looking into your issue18:01
* leakypipes just realized mriedem is ronlund18:02
*** vladikr has quit IRC18:02
*** gbarros has joined #openstack-nova18:03
leakypipesronlund: technically, any change to either inventory or traits of a resource provider would cause that concurrent update error. that said, I don't believe we are yet setting traits on a resource provider (other than in functional DB tests...)18:04
sean-k-mooneyjgwentworth: basically my assertion is that ostestr(pike) and python setup.py testr(ocata) probably overwrite the results that stestr(master) is appending?18:04
leakypipesronlund: grasping at straws, but maybe cdent's patch that reduced the number of times we update aggregates might be playing into this.. cdent, does updating aggs change the generation? /me goes to check18:06
jgwentworthsean-k-mooney: maybe. I know almost nothing about how the unit test jobs work so I couldn't tell you :)18:06
sean-k-mooneyjgwentworth: https://github.com/openstack/nova/blob/353db2d1932965b6502e002b8be510440ff529c0/tox.ini#L33 yes just checked stestr docs thats what the --combine does http://stestr.readthedocs.io/en/latest/MANUAL.html#combining-test-results18:07
sean-k-mooneyjgwentworth: on newton we do not run os profiler which is why that works18:08
jgwentworthahhh, nice sleuthing18:08
*** chyka has quit IRC18:09
leakypipesronlund: that's a negative on the set aggregates changing the resource provider generation.18:09
openstackgerritDan Smith proposed openstack/nova master: Always put 'uuid' into sort_keys for stable instance lists  https://review.openstack.org/51014018:11
openstackgerritDan Smith proposed openstack/nova master: Fix instance_get_by_sort_filters() for multiple sort keys  https://review.openstack.org/51020318:11
superdanronlund: andreykurilin ^18:11
*** priteau has quit IRC18:16
*** lbragstad has quit IRC18:16
sean-k-mooneyjgwentworth: we might be able to use the  --partial flag for testr to get similar behavior ill propose a patch to stable/ocata to test it18:16
jgwentworthsounds cool18:17
sean-k-mooneythe doc of what --partial actually does is kind of vague but it mentioned that it was used with --failing to make sure unrun failures were not lost if interrupted, so I'm guessing it will prevent wiping the results of previous runs18:19
*** baoli has quit IRC18:22
*** baoli has joined #openstack-nova18:22
*** lbragstad has joined #openstack-nova18:25
*** READ10 has quit IRC18:27
*** psachin has joined #openstack-nova18:29
* figleaf has to run out for a bit18:29
*** ijw has quit IRC18:30
*** claudiub has quit IRC18:35
ronlundleakypipes: note we aren't setting traits or doing aggregate anything in this recreate18:37
ronlundRP inventory is only set once when the compute node is created18:37
leakypipesronlund: yeah, weird indeed.18:37
ronlundthe only thing we're doing is trying to create 1000 instances in a single request, so we're processing those instances in order and making the consumer allocation requests against the same RP18:38
ronlundleakypipes: which i assume means the error comes from the amount consumed changing18:38
leakypipesronlund: yes.18:38
ronlundwhich, fine, but it's happening in serial18:38
ronlundunless that just means i'm hitting some limit, but then i'd expect a different error?18:38
ronlundthere were also 18 retries logged in that recreate until the failure, so something is giving the 409 and saying, retry, and it's working for some18:39
leakypipesronlund: it could be the periodic task on the compute node kicking in, noticing the generation has updated, and pulling inventory and generation info again.18:39
ronlundso it's not a limit issue it seems18:39
ronlundleakypipes: the inventory isn't getting updated though18:39
leakypipesronlund: but it shouldn't be setting inventory to a different set of values... :(18:39
ronlundright18:39
ronlundi see only 1 PUT /resource_providers/<uuid>/inventories in the placement api logs18:40
ronlundmayhap i need a patch to dump a stacktrace when we hit the 409 in the placement api18:40
ronlundto see where it's coming from18:40
cdentif you get the inventory, update the local generation on the local node, will an inflight allocation rap out?18:40
ronlundalso, i like to say mayhap18:40
cdentcrap , not rap18:41
cdentbut who knows, allocations may like to rhyme18:41
ronlundleakypipes: specless bp approved btw18:42
leakypipesronlund: donkey shane.18:43
*** ijw has joined #openstack-nova18:43
cdentronlund: the 409 is being received in the scheduler? and the scheduler is definitely not eventlet-ing?18:44
cdent(i believe you said that before, just confirming)18:45
*** Tom has joined #openstack-nova18:45
ronlundwe don't have multiple scheduler workers18:46
cdentronlund: i know that, but if somehow eventlet is involved and socket is patched, things might go awry18:46
ronlundwell, there was something weird18:48
ronlundsec18:48
openstackgerritDan Smith proposed openstack/nova master: Fix instance_get_by_sort_filters() for multiple sort keys  https://review.openstack.org/51020318:48
openstackgerritDan Smith proposed openstack/nova master: Always put 'uuid' into sort_keys for stable instance lists  https://review.openstack.org/51014018:48
ronlundso the error in the scheduler is here18:48
ronlundhttp://logs.openstack.org/18/507918/6/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/d4f175d/logs/screen-n-sch.txt.gz#_Oct_04_18_32_08_75379418:48
ronlundright before that, it says,18:48
ronlundAttempting to claim resources in the placement API for instance af059052-6221-4685-8151-6f450e4dc97d {{(pid=29557) _claim_resources18:48
ronlundbut if you look at req-16b6ac82-6274-4300-bb65-cc94a26648fd in the placement logs,18:48
ronlundhttp://logs.openstack.org/18/507918/6/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/d4f175d/logs/screen-placement-api.txt.gz#_Oct_04_18_32_08_69664718:49
ronlundPUT /placement/allocations/da3c3f24-227b-4b3a-8b1d-43a62c637e4418:49
ronlundthat's a different consumer18:49
* cdent nods18:49
* cdent looks at code18:49
*** Tom has quit IRC18:49
cdentronlund: since the scheduler is started via nova/cmd, the monkey patch in __init__.py is called, right?18:50
cdentSo any expectations of linear may be wrong, if a socket at some point decides to wait18:51
ronlundnova/cmd/scheduler.py calls utils.monkey_patch() but i think that's a different thing18:51
cdentyues18:52
ronlundoh in nova/cmd/__init__ yes18:52
cdentbut if you are in nova/cmd, __init__ is run18:52
ronlundoh nvm the logging thing18:53
ronlundUnable to submit allocation for instance da3c3f24-227b-4b3a-8b1d-43a62c637e44 (409 {"errors": [{"status": 409, "request_id": "req-16b6ac82-6274-4300-bb65-cc94a26648fd", "detail": "There was a conflict when trying to complete your request.\n\n Inventory changed while attempting to allocate: Another thread concurrently updated the data. Please retry your update  ", "title": "Conflict"}]})18:53
ronlundso da3c3f24-227b-4b3a-8b1d-43a62c637e44 was the instance it was requesting on18:53
ronlundthe "Attempting to claim resources in the placement API for instance af059052-6221-4685-8151-6f450e4dc97d" was a red herring,18:54
ronlundif we log ^ but no error, it just means the claim was successful18:55
cdentronlund: I’m not sure that’s the case?18:55
ronlundwe don't log anything for a successful claim18:55
cdentso in that case we have an attempt for instance X, then a fail for instance Y, where is the attempt for Y?18:55
ronlundhttp://logs.openstack.org/18/507918/6/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/d4f175d/logs/screen-placement-api.txt.gz#_Oct_04_18_32_08_825897 is that other instance18:56
cdentso why are we getting the warn about the failure after the debug about the other attempt18:56
cdentthey _are_ out of order18:56
cdentwhich is what we would expect in a monkey patched eventlet scene18:57
*** Jose____ has joined #openstack-nova18:59
cdentbbs, gotta do the dishes18:59
ronlundi'm seeing something f'ed in the scheduler code,18:59
ronlundit's doing the same scheduling request 3 times18:59
ronlund"Starting to schedule for"18:59
ronlund3 seconds apart19:00
ronlundwell, 1 second apart, 3 times19:00
ronlundall with the same request id19:00
ronlundleakypipes: well that answers that i think ^19:01
ronlundthe 409 is probably because we already posted allocations for the same consumer19:01
Jose____Hi! i found this in nova placement api log: "Placement API returning an error response: JSON does not validate: 0 is less than the minimum of 1", can this be affecting with the image creation service?19:01
*** pchavva has quit IRC19:01
ronlundJose____: no19:02
leakypipesronlund: hmm, that's odd indeed.19:02
cdentronlund: is that request id the local one or the one passed up from nova-api19:03
* cdent really does the dishes19:03
ronlundcdent: the api http://logs.openstack.org/18/507918/6/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/d4f175d/logs/screen-n-api.txt.gz#_Oct_04_18_30_21_22150119:04
ronlund{"server": {"name": "test-alloc-conflict", "imageRef": "947823a9-8c15-420a-95d8-f35cd2f024b9", "flavorRef": "1", "max_count": 1000, "min_count": 1000, "networks": "none"}}19:04
ronlundi see a successful allocation for the failed instance 2 seconds after the final 409 retry failure19:04
ronlundhttp://logs.openstack.org/18/507918/6/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/d4f175d/logs/screen-placement-api.txt.gz#_Oct_04_18_34_13_85477619:04
ronlundwth, why would we have 3 greenthreads?19:05
openstackgerritJay Pipes proposed openstack/nova master: rp: de-ORM ResourceProvider.get_by_uuid()  https://review.openstack.org/50902519:08
openstackgerritJay Pipes proposed openstack/nova master: rp: Move RP._get|set_aggregates() to module scope  https://review.openstack.org/50902619:08
openstackgerritJay Pipes proposed openstack/nova master: rp: Remove RP.get_traits() method  https://review.openstack.org/50902719:08
openstackgerritJay Pipes proposed openstack/nova master: rp: move RP._set_traits() to module scope  https://review.openstack.org/50902819:08
openstackgerritJay Pipes proposed openstack/nova master: rp: remove CRUD operations on Inventory class  https://review.openstack.org/50902919:08
openstackgerritJay Pipes proposed openstack/nova master: rp: streamline InventoryList.get_all_by_rp_uuid()  https://review.openstack.org/50903019:08
openstackgerritJay Pipes proposed openstack/nova master: rp: remove dead code in Allocation._create_in_db()  https://review.openstack.org/50903119:08
openstackgerritJay Pipes proposed openstack/nova master: rp: remove ability to delete 1 allocation record  https://review.openstack.org/50903219:08
openstackgerritJay Pipes proposed openstack/nova master: rp: fix up AllocList.get_by_resource_provider_uuid  https://review.openstack.org/50903319:08
openstackgerritJay Pipes proposed openstack/nova master: rp: rework AllocList.get_all_by_consumer_id()  https://review.openstack.org/50903519:08
openstackgerritJay Pipes proposed openstack/nova master: rp: remove _HasAResourceProvider mixin  https://review.openstack.org/50903619:08
openstackgerritJay Pipes proposed openstack/nova master: rp: break functions out of _set_traits()  https://review.openstack.org/50990819:08
ronlundaha19:09
ronlundhttp://logs.openstack.org/18/507918/6/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/d4f175d/logs/screen-n-super-cond.txt.gz#_Oct_04_18_32_07_05239819:09
ronlundselect_destinations took too long, so oslo.messaging retried19:09
ronlundwell that explains that i think19:09
ronlundi'm going to bump up the rpc timeout19:09
cdentthat doesn’t sound safe19:10
cdent(dishwasher not done yet)19:10
ronlundhuh, given we don't rate limit min_count at all,19:11
ronlundyou could totally bork some stuff up with making a huge request19:11
ronlundif the server's rpc timeout isn't configured high enough to handle it19:11
cfriesenronlund: I think we check quota fairly early, no?19:11
ronlundthis isn't quota19:11
ronlundalthough, sure you wouldn't hit this for real if you had default quota of 1019:12
sean-k-mooneyjgwentworth: https://github.com/openstack-infra/project-config/blob/dbdef981de7cb56e9cd44514a41102270bfc9bac/jenkins/scripts/run-tox.sh#L32-L48 this is why it's failing: we only grab the last result dir19:12
ronlundi've disabled quota19:12
ronlundif you did allow a tenant 1000 instances to burst at once,19:12
ronlundthen you are going to have to deal with big ass rpc timeouts19:12
leakypipesronlund: there's a crapload of timeouts on the MQ in that log file...19:13
ronlundyeah i know19:14
ronlundand by default oslo.messaging retries twice19:14
superdanso it just adds to the load?19:14
superdanbecause it's starting extra scheduling runs for the same set of stuff?19:15
superdanseems like we'd be breaking hard on that already anyway, pre-placement19:15
ronlundyeah this is definitely user error on my part :)19:17
*** gbarros_ has joined #openstack-nova19:17
ronlundmaybe it's a decent simulator of sorts, but probably not19:17
*** edmondsw has joined #openstack-nova19:17
ronlund"how to dos your devstack"19:17
superdanuser error why?19:18
superdanjust because all the limits are removed?19:18
ronlundyeah19:18
ronlundwe could definitely hammer scheduler/placement with concurrent requests,19:18
cdentin a perfect world it would at least fail gracefully rather than confusedly19:18
ronlundbut within a single tenant, default quota is 10 so that'd be your max19:18
superdanwell, we probably really should never retry a call to the scheduler like this after a timeout, I'm thinking19:18
ronlundwell, the good news is it totally does the allocation cleanup properly19:19
ronlundand everything is put into error state and shoved in cell019:19
superdanbecause you could hit that timeout for other reasons19:19
*** gbarros has quit IRC19:19
superdanwell, I guess it has already created the instance records, so maybe not a huge deal I guess,19:19
superdanI expect there is a case where you could do a boot, fail to hear from scheduler, never send boots to compute,19:20
superdanbut the scheduler made allocations19:20
*** MrJuno is now known as penick19:20
superdanI guess we just live with that and assume they're cleaned up at delete, but technically it's holding space for those dead ones19:20
superdanactually, maybe we wouldn't clean up allocations on delete in that case since instance.host=None?19:21
*** edmondsw has quit IRC19:22
ronlundsuperdan: well i think if instance.host == None we assume the allocations are already gone19:23
ronlundeither it failed to schedule,19:23
ronlundor it was shelved offloaded19:23
superdanright, my point19:23
ronlundand we remove allocations when shelve offloading19:23
superdanwe call to scheduler, timeout,19:23
superdanscheduler has made allocations19:23
superdanwe just delete from db because it never scheduled19:23
ronlundinstance goes to error19:24
ronlundbut has allocations19:24
superdanyes19:24
ronlundheh yeah19:24
*** tbachman has quit IRC19:25
ronlundchecking for something like that on every delete kind of sucks if it's a super edge case19:26
superdanbut no healing, so.. leaking capacity will anger people and rightly so :)19:26
ronlundright19:26
*** tbachman has joined #openstack-nova19:26
ronlundplus a delete request for allocations that never existing should be fast19:26
ronlund*existed19:26
superdanyes19:27
ronlundi know huawei customers love nfv, i need to see what their instance quota limit is quick... :)19:27
*** cdent has quit IRC19:29
ronlundha19:29
ronlund@utils.retry_select_destinations19:29
ronlundthat's what's causing the retry19:29
ronlundit's by design19:29
ronlundit's not oslo.messaging, it's nova19:29
ronlundhttps://github.com/openstack/nova/blob/353db2d1932965b6502e002b8be510440ff529c0/nova/scheduler/utils.py#L59919:31
*** tesseract has quit IRC19:31
superdanyeah19:31
*** hemna_ has quit IRC19:31
ronlundso yeah, now that we're doing claims in the scheduler, that seems like a bad idea...19:31
ronlundit does it up to max_attempts-1, so by default 2 retries19:32
superdanthat doesn't fix the allocation leak, mind you,19:32
*** gbarros has joined #openstack-nova19:32
superdanbut yeah, seems like if you fail talking to it, you're just going to hurt things by adding to the load with a retry19:32
*** fried_rice_afk is now known as fried_rice19:32
ronlundi wonder if we double up the 2nd allocation request for the same consumer19:33
ronlundmaybe not if there is only 1 rp uuid in the request19:33
ronlundnote the 2nd time through the scheduler on the retry, we could likely target a completely different host :)19:33
ronlundthus totally fucking up things for everything19:33
superdanyeah19:33
ronlundhuh, well this is fun19:34
*** gbarros_ has quit IRC19:35
*** Tom___ has joined #openstack-nova19:36
*** Tom___ has quit IRC19:40
*** chyka has joined #openstack-nova19:42
*** baoli has quit IRC19:44
*** namnh has joined #openstack-nova19:48
leakypipesfried_rice: putting: "blueprint: XXXX" does the same thing.19:50
fried_riceleakypipes Coolio.  Is there a Source Of Truth for these taggy things?19:51
*** eharney has quit IRC19:52
*** namnh has quit IRC19:53
leakypipesfried_rice: meh, https://wiki.openstack.org/wiki/GitCommitMessages19:54
leakypipesfried_rice: but it only mentions using Implements: blueprint XXX19:54
fried_ricemm19:54
leakypipesfried_rice: that's not necessary though. the word "blueprint" followed by a tag-like thing is all that's needed to link the patch with the blueprint on Launchpad.19:55
*** baoli has joined #openstack-nova19:55
fried_riceleakypipes Including having whatever bot add the URL to the whiteboard on the bp?19:55
fried_riceCause that seems to be a thing.19:55
leakypipesfried_rice: correct, that's what I mean.19:56
fried_ricek, thought you were just talking about gerrit turning it into a nice hyperlink to the LP page.19:56
fried_riceAnyway, I dig it.19:56
*** jwcroppe has quit IRC19:57
ronlundbp also works i think19:58
ronlundmaybe not19:58
*** jwcroppe has joined #openstack-nova19:58
*** baoli has quit IRC19:58
mtreinishronlund, jgwentworth: do you have a link to the thing you're seeing?19:59
mtreinishthere is a lot of backscroll, but I couldn't see a link to what I should be looking at19:59
*** jwcroppe_ has joined #openstack-nova20:00
*** baoli has joined #openstack-nova20:00
jgwentworthsec20:00
jgwentworthmtreinish: this is happening on stable/ocata and stable/pike only http://logs.openstack.org/39/509439/1/check/gate-nova-python27-ubuntu-xenial/e456c8f/testr_results.html.gz20:00
jgwentworthI think it's just a display issue, showing the os profiler result instead of the unit tests result. in the console you can see that both ran20:01
mtreinishjgwentworth: ok, yeah that's because you have 2 test runs in the tox command20:01
mtreinishfor the post processing to generate that we run testr last --subunit and pipe that into subunit2html to generate that page20:01
jgwentworthit shows the right thing on master and stable/newton for some reason even though we have 2 runs20:01
mtreinishbut testr doesn't let you combine the results20:01
mtreinishso it's just showing the results from the second one20:02
jgwentworthyeah, that's what sean-k-mooney was saying20:02
jgwentworthwell, I think testr is showing the first one. os profiler always runs last20:02
mtreinishit works on master because stestr has a --combine flag that treats the 2 commands as a single run20:02
*** jwcroppe has quit IRC20:02
*** john5223_ has joined #openstack-nova20:02
ronlundgah 2017-10-06 04:30:20.654 | /opt/stack/new/devstack/inc/meta-config: line 209: /opt/stack/new/devstack/.localrc.auto: Permission denied20:03
jgwentworthmtreinish: oh, on stable/newton we're not running the os profiler thing20:03
jgwentworththat's why that one shows up correctly20:04
mtreinishyep20:04
mtreinishthis was the same thing I was seeing on openstack-health, which is why I pushed https://review.openstack.org/#/c/501842/ before doing the stestr migration20:04
jgwentworthweird. it seems like this would always have been happening before stestr but I could have sworn I had seen full lists of the unit tests on a pass run prior to stestr20:04
jgwentworthmaybe I dreamed it20:05
mtreinishjgwentworth: this was a longstanding issue, I just don't think anyone noticed it20:05
mtreinishbefore the osprofiler tests were added it wasn't an issue20:05
jgwentworthright20:05
jgwentworthoh well, sorry for the noise. I thought something had changed20:06
*** Tom___ has joined #openstack-nova20:06
mtreinishno worries, I'm just glad other people are looking at this stuff :)20:06
mtreinishjgwentworth: fwiw, if you want to fix that output you could probably backport 50184220:08
mtreinishor just rip the osprofiler test out on the stable branches, tbh I'm not sure what value it adds20:08
jgwentworthyeah, I like running profiler last locally so tests I'm working on run and fail first without having to wait for profiler20:10
openstackgerritMatt Riedemann proposed openstack/nova stable/pike: Set regex flag on ostestr command for osprofiler tests  https://review.openstack.org/51022620:10
ronlundlet's see20:10
*** Tom___ has quit IRC20:10
*** smatzek has quit IRC20:12
*** gouthamr has quit IRC20:21
*** artom has quit IRC20:24
*** catinthe_ has quit IRC20:28
openstackgerritIldiko Vancsa proposed openstack/nova master: update live migration to use v3 cinder api  https://review.openstack.org/46398720:29
*** spectr has joined #openstack-nova20:34
*** spectr has quit IRC20:36
*** david-lyle has quit IRC20:38
*** beekneemech has quit IRC20:45
*** archit has quit IRC20:47
*** crushil has quit IRC20:48
*** dikonoor has quit IRC20:52
*** awaugama has quit IRC20:57
*** Jose____ has quit IRC21:01
*** jwcroppe_ has quit IRC21:03
*** edmondsw has joined #openstack-nova21:05
*** jwcroppe has joined #openstack-nova21:05
*** sahid has quit IRC21:06
*** Tom has joined #openstack-nova21:07
*** zhenq has quit IRC21:07
superdanronlund: I think those two patches are good to go21:07
superdanronlund: cool if you want to wait for andreykurilin to run a test, but he confirmed on one of the earlier iterations, I have a repro test in there, etc21:08
andreykurilinsuperdan: Is the new revision ready for testing? I can recheck the patch now21:09
superdanandreykurilin: oh yeah, I poked you earlier but no response :)21:09
andreykurilinoh.21:09
superdanandreykurilin: would be awesome if you could, and maybe point me at the review so I can watch?21:09
*** edmondsw has quit IRC21:10
*** Tom has quit IRC21:11
*** lyan has joined #openstack-nova21:11
andreykurilinsuperdan: heh, sure. https://review.openstack.org/#/c/510144  gate-rally-dsvm-neutron-existing-users-rally job. grep tag-to-search to see debug messages which actually show whether everything works or not21:11
superdanandreykurilin: doesn't it just fail if it fails?21:13
superdanandreykurilin: I don't see that job in the first run of that patch...21:15
*** crushil has joined #openstack-nova21:16
ronlundok, writing up a draft spec for discussion on this max_count rate limiting thing21:16
andreykurilinsuperdan: no :( I tried to make this change as soon as possible, so i have not removed a hack to limit the loop. I'll do it now. the first results were not published, it looks like you posted a new revision before the whole jobs were finished21:16
superdanah okay21:16
*** ijw has quit IRC21:17
*** david-lyle has joined #openstack-nova21:19
andreykurilinsuperdan: so originally the listing doesn't fail. it was just an infinite loop. To add debug messages which will not flood the log file, I added a limit in the loop (10). That is why it stopped failing. Now I resubmitted a patch which fails in case of 10 iterations of the loop, so you do not need to check the logs21:19
*** xyang1 has quit IRC21:20
superdanandreykurilin: if len(log) == 10 -> fail? :)21:20
superdanandreykurilin: awesome thanks21:20
andreykurilinha21:20
*** felipemonteiro__ has quit IRC21:21
*** ijw has joined #openstack-nova21:23
openstackgerritMatt Riedemann proposed openstack/nova-specs master: Limit instance create max_count (spec)  https://review.openstack.org/51023521:29
ronlundsuperdan: cfriesen: leakypipes: i know it's late so probably just ignore until next week, but ^ has some thoughts on the concurrent scheduling issue21:29
*** ronlund is now known as mriedem21:29
superdanack21:30
mriedemnow lemme take a looksie at these crazy patches21:31
*** gbarros has quit IRC21:31
*** ijw has quit IRC21:37
*** ijw has joined #openstack-nova21:39
*** dklyle has joined #openstack-nova21:39
*** claudiub has joined #openstack-nova21:39
*** david-lyle has quit IRC21:41
*** dklyle has quit IRC21:41
*** dklyle has joined #openstack-nova21:42
*** lbragstad has quit IRC21:43
mriedemwow rally runs a lot of jobs21:44
mriedemthat's got to be a challenge to both tempest and trove21:44
mtreinishmriedem: I think that's more than tempest21:45
*** namnh has joined #openstack-nova21:49
*** dklyle has quit IRC21:52
*** namnh has quit IRC21:54
cfriesenmriedem: did we ever fix the issues with multi-boot where scheduling more than min_count but less than max_count would result in instances in the "error" state?21:56
*** jwcroppe has quit IRC21:58
mriedemcfriesen: i'd need more details than that21:58
mriedemwhy would that put them in error state?21:59
*** jwcroppe has joined #openstack-nova21:59
mriedemif i request min=5 max=10 but there are only 7 ports available in my port quota, the api changes max=721:59
mriedemyou wrote that patch21:59
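mriedem's port-quota example amounts to this clamp — a hypothetical standalone function written for illustration, not nova's actual API code:

```python
def clamp_max_count(min_count, max_count, ports_available):
    """If quota headroom covers fewer than max_count instances, lower
    max_count to match; if it can't even cover min_count, reject the
    whole request up front."""
    if ports_available < min_count:
        raise ValueError('port quota (%d) cannot cover min_count (%d)'
                         % (ports_available, min_count))
    return min(max_count, ports_available)
```

So min=5, max=10 with 7 ports of headroom becomes an effective max of 7, while only 3 ports would fail the request before anything is scheduled.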
*** Apoorva_ has joined #openstack-nova22:00
*** Apoorva has quit IRC22:01
*** Apoorva_ has quit IRC22:02
*** Apoorva has joined #openstack-nova22:02
jgwentworthzzzeek: are you around? I'm looking into something with oslo.db and was wondering, given a TransactionContextManager, how does it switch from reader to writer mode and vice versa between separate transactions?22:02
*** Apoorva has quit IRC22:03
*** jwcroppe has quit IRC22:03
*** Apoorva has joined #openstack-nova22:03
mriedemsuperdan: done https://review.openstack.org/#/c/510203/22:07
*** chyka has quit IRC22:07
mriedemsuperdan: 2 issues, (1) the tests aren't using the args passed in and (2) apparently we have to handle boolean sort keys differently22:07
*** jgwentworth is now known as melwitt22:08
mriedemsuperdan: presumably because not all db backends model booleans the same22:08
mriedemsome are bools, some are ints22:08
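The backend difference mriedem mentions (one backend hands back 1/0 where another hands back True/False) can be neutralized with a small normalization step when comparing pagination marker values — a hypothetical helper for illustration, not the code in Dan's patch:

```python
def normalize_sort_value(value):
    """Coerce backend-dependent boolean representations (True/False vs
    1/0) to ints so marker comparisons behave the same everywhere."""
    if isinstance(value, bool):
        return int(value)
    return value

def compare_markers(row_values, marker_values):
    """Lexicographic comparison over multiple sort keys; returns -1, 0
    or 1, the usual shape for 'is this row before/at/after the marker'."""
    for row_v, marker_v in zip(row_values, marker_values):
        row_v = normalize_sort_value(row_v)
        marker_v = normalize_sort_value(marker_v)
        if row_v != marker_v:
            return -1 if row_v < marker_v else 1
    return 0
```

Without the normalization, a marker saved from one backend (`1`) and a row read from another (`True`) could compare inconsistently across sort keys.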
cfriesenmriedem: I'm thinking about the case where we request min=5 and max=10 and the scheduler only finds hosts for 7 of them.22:08
superdanmriedem: ah I didn't think we had any booleans, but I guess we do22:08
superdanmriedem: hah, sorry, last minute cleanup on the tests and I didn't remember to plumb those through22:09
melwittzzzeek: it looks like it's controlled by the "independent" attribute, but I don't see that we use that22:09
cfriesenmriedem: I think the ones without hosts might get left in the BUILD state22:10
mriedemcfriesen: if you get fewer hosts than requested instances, it's NoValidHost https://github.com/openstack/nova/blob/master/nova/scheduler/filter_scheduler.py#L8622:13
mriedemand conductor will put them all into ERROR state and into cell0 yes22:13
cfriesenmriedem: yeah, was just looking at that22:13
cfriesenmriedem: but that doesn't make sense if the scheduler was able to schedule at least min_count instances.22:13
mriedemcfriesen: then don't request max?22:14
mriedembut yeah that's weird22:14
mriedemcfriesen: see check_num_instances_quota22:15
mriedemif you have quota for max, then that's what we say to build22:15
mriedemwe don't pass anything to the scheduler about what minimum number should be built if possible22:16
mriedemas far as i can tell22:16
mriedemwonder how this behaved in ocata22:16
cfriesenmriedem: if I remember right the quota part is okay, but the scheduler part is a bit wonky.   we added a local patch to pass "min_num_instances" in the spec_obj22:16
cfriesenmriedem: it's been broken since forever22:16
mriedemdid you try upstreaming that patch ever?22:17
*** lyan has quit IRC22:17
cfriesenmriedem: I think so, and it was suggested to just get rid of multiboot, but that got objections22:17
cfriesenI can try reviving it. :)22:17
cfriesenthe patch, I mean22:18
mriedemyeah so multicreate isn't going away22:18
mriedemjust like rebuild isn't going away22:18
mriedemso yeah would probably be good to revive that22:18
cfriesenwould that count as an API change and need a spec?22:19
cfriesenor is it incorrectly returning an error now and we can just correct it22:19
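The behavior cfriesen's local patch aims for — succeed with a partial placement as long as at least min_count hosts were found, instead of failing the whole request — could look like this sketch; `pick_hosts` and the local `NoValidHost` are illustrative stand-ins, not nova's scheduler code:

```python
class NoValidHost(Exception):
    """Local stand-in for the scheduler's NoValidHost exception."""

def pick_hosts(candidates, min_count, max_count):
    """Today the scheduler errors whenever it finds fewer hosts than the
    number of requested instances; this variant only errors when it
    can't satisfy min_count, and returns a partial result otherwise."""
    if len(candidates) < min_count:
        raise NoValidHost('found %d hosts, need at least %d'
                          % (len(candidates), min_count))
    return candidates[:max_count]
```

With min=5, max=10 and 7 candidate hosts, this returns the 7 hosts rather than raising, which is the gap the linked bugs describe.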
*** ijw has quit IRC22:19
zzzeekjgwentworth:. Can you email me at mike@zzzcomputing22:23
*** ijw has joined #openstack-nova22:23
*** armax has joined #openstack-nova22:23
zzzeek.com and I'll get back to you tomorrow?  Not at a computer right now22:23
melwittzzzeek: okay, will do that if I can't figure this out. thanks22:24
*** gouthamr has joined #openstack-nova22:24
*** esberglu has joined #openstack-nova22:26
*** esberglu has quit IRC22:26
fried_riceBe there existing docs describing the resources:<resource_class>=<count> syntax in flavor extra specs?22:26
* fried_rice couldn't find any22:27
*** ijw has quit IRC22:27
*** armax has quit IRC22:28
*** baoli has quit IRC22:29
cfriesenmriedem: hah, I thought I had a bug open: https://bugs.launchpad.net/nova/+bug/145812222:30
openstackLaunchpad bug 1458122 in OpenStack Compute (nova) "nova shouldn't error if we can't schedule all of max_count instances at boot time" [Wishlist,Opinion] - Assigned to Chris Friesen (cbf123)22:30
cfriesenthere's also this one: https://bugs.launchpad.net/nova/+bug/162380922:30
openstackLaunchpad bug 1623809 in OpenStack Compute (nova) "Quota exceeded when spawning instances in server group" [Wishlist,Opinion]22:30
*** erlon has quit IRC22:30
cfriesenthat latter one is really about min_count/max_count and quota_server_group_members22:30
openstackgerritDan Smith proposed openstack/nova master: Fix instance_get_by_sort_filters() for multiple sort keys  https://review.openstack.org/51020322:31
openstackgerritDan Smith proposed openstack/nova master: Always put 'uuid' into sort_keys for stable instance lists  https://review.openstack.org/51014022:31
mtreinishmriedem, jgwentworth: http://logs.openstack.org/26/510226/1/check/gate-nova-python27-ubuntu-xenial/939736e/testr_results.html.gz22:31
*** shvepsy_ has joined #openstack-nova22:31
superdanmriedem: good catches, thanks22:31
*** armax has joined #openstack-nova22:32
*** armax has quit IRC22:33
*** shvepsy has quit IRC22:33
*** NightKhaos has quit IRC22:33
*** NightKhaos has joined #openstack-nova22:37
mriedemYES!22:39
mriedemi need to get out of here and get some pizza22:40
mriedemand later some cookies maybe22:40
mriedemi won't be 160 by christmas with pizza and cookies though22:40
*** crushil has quit IRC22:41
*** leakypipes has quit IRC22:43
* cfriesen hasn't been 160 since he was 1322:44
*** mingyu has joined #openstack-nova22:48
mriedemit's all that poutine you eat22:51
mriedemi heard canadian babies are bottle fed poutine22:51
mriedemaye22:51
*** gbarros has joined #openstack-nova22:51
*** mingyu has quit IRC22:52
cfriesenmmmm...poutine.22:53
cfriesenhard to find the good stuff though....lots of crappy versions22:53
*** edmondsw has joined #openstack-nova22:53
cfriesenhot crispy fries, squeaky cheese curds, and boiling hot gravy22:54
cfriesengreat...now I want poutine and I'm pretty sure there's going to be something healthy for supper.22:55
mriedemsuperdan: ok comments in those 2 changes22:55
mriedembut really leaving this time22:56
superdanmriedem: just spiteful on that bug fix eh?22:56
superdanI think doing pagination without a requested sort order is kinda reckless, but I'll capitulate22:57
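[Editor's note: the point behind "Always put 'uuid' into sort_keys" above can be sketched as follows. If rows compare equal on the requested sort key, their relative order is undefined and can shuffle between paginated queries; appending a unique key like `uuid` as a final tiebreaker makes the ordering deterministic. The data and helper below are hypothetical, not Nova's code.]

```python
# Sketch (hypothetical data, not Nova's implementation): appending 'uuid'
# as a final tiebreaker gives rows with equal sort keys a stable order.
import uuid

instances = [
    {'display_name': 'web', 'uuid': str(uuid.uuid4())},
    {'display_name': 'web', 'uuid': str(uuid.uuid4())},
    {'display_name': 'db',  'uuid': str(uuid.uuid4())},
]

def stable_sort(rows, sort_keys):
    # Without a unique tiebreaker, the two 'web' rows could legally come
    # back in either order on each query, breaking marker-based paging.
    if 'uuid' not in sort_keys:
        sort_keys = sort_keys + ['uuid']
    return sorted(rows, key=lambda r: tuple(r[k] for k in sort_keys))

ordered = stable_sort(instances, ['display_name'])
assert [r['display_name'] for r in ordered] == ['db', 'web', 'web']
```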
*** edmondsw has quit IRC22:58
openstackgerritMerged openstack/nova master: Note TrustedFilter deprecation in docs  https://review.openstack.org/50993122:58
openstackgerritDan Smith proposed openstack/nova master: Always put 'uuid' into sort_keys for stable instance lists  https://review.openstack.org/51014022:59
*** ijw has joined #openstack-nova23:03
*** gbarros has quit IRC23:04
*** crushil has joined #openstack-nova23:05
openstackgerritEric Fried proposed openstack/nova-specs master: Granular Resource Request Syntax  https://review.openstack.org/51024423:06
*** gouthamr has quit IRC23:10
*** Apoorva has quit IRC23:16
*** Apoorva has joined #openstack-nova23:17
*** esberglu has joined #openstack-nova23:20
*** hongbin has quit IRC23:21
*** artom has joined #openstack-nova23:21
*** crushil has quit IRC23:24
*** jgriffith is now known as jgriffith_23:24
*** esberglu has quit IRC23:24
*** crushil has joined #openstack-nova23:38
*** Swami has quit IRC23:38
*** claudiub has quit IRC23:40
*** david-lyle has joined #openstack-nova23:42
*** harlowja has quit IRC23:45
*** namnh has joined #openstack-nova23:50
*** mingyu has joined #openstack-nova23:53
*** yamamoto has joined #openstack-nova23:53
*** markvoelker has quit IRC23:54
*** namnh has quit IRC23:55
openstackgerritEd Leafe proposed openstack/nova master: Change RPC for select_destinations()  https://review.openstack.org/51015923:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!