Wednesday, 2017-08-09

01:02 <mriedem> dansmith: fyi, looks like we're hitting some slow nodes or something where n-cpu is auto-disabled by the servicegroup heartbeat because the report comes in late
01:02 <mriedem> http://logs.openstack.org/39/491439/3/gate/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/d5c3c02/logs/subnode-2/screen-n-cpu.txt.gz#_Aug_08_20_55_49_474796
01:03 <dansmith> seriously?
01:03 <dansmith> that's some crazy delay
01:03 <mriedem> something like that, only seeing it on live migration jobs
01:03 <mriedem> they fail b/c the compute service is not available
01:04 <mriedem> clarkb: ^
01:04 <mriedem> it's by far and away on citycloud-lon1
01:06 <mriedem> lots of HTTPReadTimeoutErrors in the console log too
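
The auto-disable mriedem describes is the servicegroup heartbeat: every nova service reports in periodically, and a service whose last report is too old is treated as down. A minimal nova.conf sketch of the two [DEFAULT] options involved; the values shown are the long-standing defaults, stated as an assumption worth verifying against your release:

    [DEFAULT]
    # seconds between state reports from each nova service
    report_interval = 10
    # a service whose last report is older than this is considered down
    service_down_time = 60
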
01:06 <dansmith> yeah
01:08 <dansmith> mriedem: well if it's clearly a bad node or something I assume the remedy is just pulling it out right?
01:09 <mriedem> sure
01:09 <mriedem> i leave it to the infra wizards
01:09 <mriedem> https://bugs.launchpad.net/openstack-gate/+bug/1709506
01:09 <openstack> Launchpad bug 1709506 in OpenStack-Gate "Random live migration failures due to ComputeServiceUnavailable in citycloud-lon1 nodes" [Undecided,New]
01:09 <dansmith> okay just wanted to make sure I can go back to my dinner :)
01:09 <mriedem> yup
01:09 <dansmith> sweet
01:10 <mriedem> it also helped me notice that we're not indexing the super-cond/cond-cell1 logs in logstash
01:10 <mriedem> fix is up for that
01:10 <dansmith> oh interesting
01:10 <mriedem> system-config repo has a yaml of the files to index
01:11 <mriedem> with the new names we didn't account for those
01:19 <clarkb> mriedem: ya we emailed them today
01:19 <clarkb> them being citycloud
01:27 <mriedem> cool
01:56 <openstackgerrit> Matt Riedemann proposed openstack/nova master: Mark max microversion for Pike in history doc  https://review.openstack.org/491581
01:56 <openstackgerrit> Matt Riedemann proposed openstack/nova master: Add release note for shared storage known issue  https://review.openstack.org/491582
01:56 <openstackgerrit> Matt Riedemann proposed openstack/nova master: Add a prelude section for Pike  https://review.openstack.org/491424
01:56 <openstackgerrit> Matt Riedemann proposed openstack/nova master: doc: provide more details on scheduling with placement  https://review.openstack.org/491900
02:01 <mriedem> stephenfin: can you drop this -2 now? https://review.openstack.org/#/c/482216/
03:01 <openstackgerrit> Merged openstack/nova master: Add functional test for local delete allocations  https://review.openstack.org/470578
03:10 <openstackgerrit> Naichuan Sun proposed openstack/nova master: xenapi: Live migration failed in xapi pool  https://review.openstack.org/489451
03:28 <openstackgerrit> Jackie Truong proposed openstack/nova master: Add trusted_certs to Instance object  https://review.openstack.org/489408
03:29 <openstackgerrit> Jackie Truong proposed openstack/nova master: Implement certificate_utils  https://review.openstack.org/479949
03:34 <openstackgerrit> Alex Xu proposed openstack/nova master: placement: ensure RP maps to those RPs that share with it  https://review.openstack.org/480379
03:46 <openstackgerrit> Michael Still proposed openstack/nova master: Avoid chowning console logs in libvirt  https://review.openstack.org/472229
03:46 <openstackgerrit> Michael Still proposed openstack/nova master: First attempt at adding a privsep user to nova itself.  https://review.openstack.org/459166
03:46 <openstackgerrit> Michael Still proposed openstack/nova master: Move execs of touch to privsep.  https://review.openstack.org/489190
03:46 <openstackgerrit> Michael Still proposed openstack/nova master: Move libvirts dmcrypt support to privsep.  https://review.openstack.org/490737
03:46 <openstackgerrit> Michael Still proposed openstack/nova master: Move execs of tee to privsep.  https://review.openstack.org/489438
03:46 <openstackgerrit> Michael Still proposed openstack/nova master: Move libvirt usages of chown to privsep.  https://review.openstack.org/471972
03:46 <openstackgerrit> Michael Still proposed openstack/nova master: Read from console ptys using privsep.  https://review.openstack.org/489486
03:46 <openstackgerrit> Michael Still proposed openstack/nova master: Refactor libvirt.utils.execute() away.  https://review.openstack.org/489816
06:59 <openstackgerrit> huangtianhua proposed openstack/python-novaclient master: Allow boot server with multiple nics  https://review.openstack.org/492003
07:24 <openstackgerrit> Maciej Jozefczyk proposed openstack/nova master: Remove host filter for _cleanup_running_deleted_instances periodic task  https://review.openstack.org/491808
08:16 <openstackgerrit> huangtianhua proposed openstack/python-novaclient master: Remove substitutions for command error msg  https://review.openstack.org/490705
08:34 <gibi> good morning
09:05 <jianghuaw> Is the nova-api DB allowed to be accessed by the nova-compute service?
09:07 <jianghuaw> I met an error as "RemoteError: Remote error: CantStartEngineError No sql_connection parameter is established" when nova-compute tries to query data from an aggregate which belongs to the nova-api db.
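
The CantStartEngineError jianghuaw quotes is oslo.db refusing to build a database engine because no connection string is configured for the API database; the RemoteError wrapper shows the query was proxied over RPC, so it is the remote service actually running the query (not nova-compute itself, which has no direct DB access) that needs the [api_database] section filled in. A minimal sketch, with a hypothetical host and password:

    [api_database]
    # absence of this is what surfaces as
    # "CantStartEngineError: No sql_connection parameter is established"
    connection = mysql+pymysql://nova:PASSWORD@controller/nova_api
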
13:54 *** openstack has joined #openstack-nova
14:00 <openstackgerrit> Matthew Edmonds proposed openstack/nova master: update policy UT fixtures  https://review.openstack.org/398610
14:12 <mriedem> i'm going through https://review.openstack.org/#/c/491850/ now
14:13 <openstackgerrit> Ghanshyam Mann proposed openstack/nova master: Improve stable-api doc with current API state  https://review.openstack.org/489926
14:18 <openstackgerrit> Maciej Jozefczyk proposed openstack/nova master: Remove host filter for _cleanup_running_deleted_instances periodic task  https://review.openstack.org/491808
14:20 <openstackgerrit> Matthew Edmonds proposed openstack/nova master: use conf for keystone session creation  https://review.openstack.org/485121
14:21 <dtantsur> dansmith: hi! re your comment on the ironic-related patch: should we block changing node.resource_class for active nodes in Ironic?
14:22 <dansmith> dtantsur: if that's possible I think that would be an excellent idea
14:22 <dtantsur> dansmith: it's not impossible, but our beloved API microversion will kick in here. meaning, we'll only be able to block it starting with the next API version :(
14:23 <dtantsur> I would actually block it in all versions, given that it's going to screw up nova
14:23 <dansmith> dtantsur: presumably you can return a 409 for mostly any reason right?
14:23 <dtantsur> but people tend to feel quite religiously about not bypassing the versioning
14:24 <dtantsur> dansmith: right, but how does it help?
14:24 * dtantsur looks for a wild cdent
14:25 <dtantsur> it's an interesting corner case for the API WG to discuss. should we leave a feature that clearly breaks things, or should we break the versioning contract
14:25 * dtantsur assumes mordred and edleafe- might have opinions ^^^
14:25 <dansmith> dtantsur: well can you return 409 for anything else in that call?
14:25 * mordred reads
14:26 <dtantsur> dansmith: sorry, I think I don't get the question. We cannot just randomly return 409 I think..
14:27 <mriedem> dtantsur: meaning, can the node update api return a 409 already for something else
14:27 <dansmith> dtantsur: if 409 is already a valid return value then I think it's less problematic
14:27 <dansmith> right
14:27 <mriedem> so the user can already be expecting a 409 in some cases
14:27 <dansmith> and I would expect 409 to be valid for most PUTs to cover situations like this
14:27 <dansmith> like "you're violating some constraint"
14:27 <dtantsur> mriedem, dansmith, I think I get where you're heading. Yes, we can. And no, according to the API versioning ideology, we cannot do it.
14:28 * dansmith shrugs
14:28 <dtantsur> it's not only about breaking users, it's more about signaling changes /me waits for mordred to jump in
14:28 <mriedem> https://developer.openstack.org/api-ref/baremetal/#update-node
14:28 <mordred> so...
14:28 <mriedem> doesn't mention error codes
14:28 <dtantsur> (for the record: I'm not the biggest fan of the API versioning here)
14:28 <mordred> for signalling changes, it's to signal whether someone can do something or not
14:28 <dansmith> dtantsur: well, regardless of whether ironic lets you do it, the rule to operators should be that they will break stuff if they do it once things are populated
14:28 <mordred> the thing in this case isn't a valid thing to try to do
14:28 <mordred> so a user who does it today isn't actually doing a thing that works
14:29 <mordred> so they don't actually have, you know, an application that is going to break when you do this
14:29 <dtantsur> fair
14:29 <dansmith> that's true, although the application here is *probably* their ansible playbook
14:29 <mriedem> you've got a 409 right here https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L1698
14:30 <mordred> alternately, they don't need to ask the API if they can avoid doing the bad state - they can, as a client, avoid writing broken code without setting a microversion, since this is mostly about ironic returning an error when they do something stupid
14:30 <mordred> SO
14:30 <dansmith> where they've set a bunch of nodes to be a given class and will blindly blow that into ironic, I'm guessing
14:30 <mordred> I'd say this is a bug fix where requiring a version bump doesn't actually add value to anyone
14:30 <dtantsur> mriedem: let's please not use 409 though. we've made a big mistake in our youth, and now ironicclient retries it
14:30 <dansmith> eww
14:30 <dtantsur> however, the generic 400 can also be returned from essentially any endpoint, we can use it
14:30 <dansmith> mordred: agreed
14:31 <dtantsur> mordred: I like the way you put it :) thanks!
14:31 <mordred> dtantsur: I will happily advocate for the validity of this change not actually being an API break if you need me to
14:31 <mordred> rules are there to help us do the right thing - they're not there for their own sake
14:31 <dtantsur> right
14:32 <dtantsur> next tricky question :)
14:32 <dtantsur> what should we do about nodes that do not have any resource_class so far? I mean, active nodes?
14:32 <dtantsur> dansmith: ^^
14:33 <dansmith> dtantsur: if you take the strict meaning of mordred's comments above,
14:33 <dansmith> then you aren't breaking a node that is active with no RC since we're not doing anything with those yet
14:33 <dansmith> however,
14:33 <dansmith> I think it's far more confusing to allow that
14:33 <dansmith> not worth the confusion over consistency
14:34 <mriedem> jaypipes: dansmith: i'm inclined to -1 this for missing tests https://review.openstack.org/#/c/491850/ but given dansmith is out the next two days i realize there is a need to start pushing this code through
14:34 <dtantsur> this is what I'm thinking about. a user creates an instance back in Ocata, no resource_class set. they upgrade to Pike, then to Queens. still no resource_class.
14:34 <dansmith> dtantsur: ah, right, for nodes with an instance
14:34 <mordred> dtantsur: what would the resource_class be if they created it with an empty resource_class today?
14:34 <dtantsur> mordred: None
14:35 <dtantsur> (python None, or JSON null)
14:35 <dansmith> mordred: it's a thing that is required for queens
14:35 <mordred> AH
14:35 <dansmith> dtantsur: so the response probably needs to be "400: You cannot CHANGE the class of an active node"
14:35 <dtantsur> so yeah, it was perfectly valid to not have any resource_class when we introduced it (which is sad, btw)
14:35 <dansmith> dtantsur: sad indeed
14:36 * dtantsur blames jroll :D
14:36 <dansmith> heh
14:37 <dtantsur> my main question is: how will nova react when a node gets a resource_class (without having it previously)
14:37 <dansmith> dtantsur: it'll do the right thing I think.. currently it's just skipping those and logging a warning
14:37 <dansmith> it's the changing that is a problem
14:37 <dansmith> without it set, we report no inventory and don't migrate the instances
14:37 <dansmith> once it's set, we do those things
14:38 <dtantsur> okay, we can ban changing, I think
14:38 <dansmith> the changing of that (or handling the change) is the complicated bit
14:38 <dtantsur> and what about available nodes? do you see any problems with them?
14:38 <jaypipes> mriedem, dansmith: sorry, wrapping up a meeting
14:38 <dansmith> dtantsur: hmm? about available nodes?
14:38 <dansmith> dtantsur: you mean nodes with no instance on them?
14:38 <dtantsur> yep, but available for nova to deploy on
14:39 <dtantsur> (sorry, /me uses our state machine terms)
14:39 <dansmith> dtantsur: well, it's less problematic for sure, but I think the caching in the ironic driver still means there's a race there, which is not great
14:39 <dansmith> dtantsur: but I'd much rather document that as a pitfall as it doesn't require extra code,
14:40 <dansmith> it just means there's a window where you might schedule something with the old class to the node you just changed
14:40 <dtantsur> right. I guess we have the same problems right now with e.g. changing properties (except that people should not change their CPU count too often)
14:40 <dansmith> and you'll migrate the instance to the stale class most likely which will be confusing, but :/
14:40 <dansmith> I have to brb
14:40 <dtantsur> thanks dansmith, I think I know what to do
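
What dansmith sketches in words could look roughly like the following on the Ironic side. This is an illustrative sketch only, not actual Ironic code: the helper name, the 'active' state check, and the attribute names are assumptions, and it raises webob's HTTPBadRequest because a plain 400 (rather than 409, which old ironicclient releases auto-retry, per dtantsur above) is what was agreed:

    from webob import exc

    def check_resource_class_change(node, new_resource_class):
        """Reject changing (not initially setting) resource_class on an active node."""
        if (node.provision_state == 'active'
                and node.resource_class is not None
                and node.resource_class != new_resource_class):
            # 400, not 409: old ironicclients retry 409 responses
            raise exc.HTTPBadRequest(
                explanation='Cannot change the resource class of an active node')
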
14:42 * edleafe- returns from making coffee and jumps into the scrollback
14:42 <dtantsur> mmmm, coffee :)
14:43 <edleafe> dtantsur: yeah, if the API already returns a particular code, and what you're changing isn't a thing that people rely on, it's a bugfix, not a version bump
14:43 <edleafe> dtantsur: so yeah, what mordred said
14:44 <jaypipes> mriedem: k, sorry, done now.
14:44 <dtantsur> edleafe: okie, I'll bake a patch today
14:44 <edleafe> dtantsur: if a node has no resource_class set and an instance on it, and then the resource_class gets set, the next time through the instance flavor will get updated
14:44 <jaypipes> mriedem: I'm not entirely sure how one would add tests for a continue block...
14:44 <edleafe> dtantsur: dansmith: that's why the 'seen' cache is keyed on (instance_uuid, rc)
14:45 <mriedem> jaypipes: i guess you'd assert that something *didn't* happen
14:45 <mriedem> since you continued
14:46 <mriedem> but anyway, it can be a follow up given the circumstances with time
14:46 <jaypipes> mriedem: k. answered your comment about local delete..
14:47 <edleafe> dtantsur: I'll revise that migration patch to remove the support for a node with an instance changing its resource_class
14:47 <dansmith> edleafe: dtantsur thanks a bunch
14:48 <dtantsur> np
14:55 <mriedem> dansmith: jaypipes: is this required for rc1 or just sugar? https://review.openstack.org/#/c/491012/
14:55 <openstackgerrit> Eric Fried proposed openstack/nova master: nova.utils.get_ksa_adapter()  https://review.openstack.org/488137
14:55 <openstackgerrit> Eric Fried proposed openstack/nova master: Get auth from context for glance endpoint  https://review.openstack.org/490057
14:55 <dansmith> mriedem: required
14:55 <jaypipes> dansmith: to be perfectly frank, I don't know. :(
14:55 <jaypipes> sorry, that was for mriedem ^
14:56 <jaypipes> mriedem: see the comment I just responded to edleafe and dansmith with.
14:56 <jaypipes> mriedem: basically boils down to "I totally think we probably might need to maybe do something here, but I'm not sure what..."
14:56 <dansmith> jaypipes: huh? before that patch pike computes are behaving like ocata ones
14:56 <dansmith> that's where it was when I left it anyway
14:58 <mriedem> alright i'll review https://review.openstack.org/#/c/488510/ in the meantime then
14:58 <jaypipes> dansmith: well, I guess after going through the scenarios in my head, I'm wondering what we really need to do there. Probably just need another hangout session with you on it.
14:58 <dansmith> jaypipes: L1038 is the critical bit: https://review.openstack.org/#/c/491012/7/nova/compute/resource_tracker.py
14:58 <openstackgerrit> Eric Fried proposed openstack/nova master: Get auth from context for glance endpoint  https://review.openstack.org/490057
14:58 <openstackgerrit> Sean Dague proposed openstack/nova master: Add documentation for documentation contributions  https://review.openstack.org/492124
14:58 <openstackgerrit> Sean Dague proposed openstack/nova master: Clean up *most* ec2 / euca2ools references  https://review.openstack.org/492166
14:59 <dansmith> jaypipes: without that line, we don't move the ball forward, with computes just stomping all over everything on both sides of a boot or move operation
14:59 <jaypipes> dansmith: oh, I totally get that. the part I'm wondering about is whether that "heal allocations" method needs to change now.
14:59 <dansmith> er, without gating that line on the version
14:59 <dansmith> jaypipes: that's not what he was asking, right? he was asking if the whole change is required for rc1 right?
15:00 <jaypipes> dansmith: ah, sorry. mriedem, yes, it is.
15:00 <jaypipes> mriedem: I was -Workflow on it to discuss that bit about the heal allocations.
15:00 <jaypipes> sorry for confusion.
15:00 <mriedem> np
15:00 <dansmith> jaypipes: your non-functional continue in a conditional is part of your extra debug logging I assume, and that was all you so..whatever you want to do there
15:00 <mnaser> so: there should never be a scenario where an instance fails to spawn with local variable referenced before assignment.. right?
15:01 <mriedem> mnaser: correct
15:01 <mnaser> because im trying to go through this code and i cant find how/why it's happening
15:01 <mnaser> okay. i'll go do some more in depth checking then
15:01 <mriedem> UnboundLocalError right?
15:01 <jaypipes> dansmith: heh, yeah... I was tempted to put into the log debug message something like "we're really not sure whether we even get here any more and if we do, what we should do anyway" ;)
15:01 <mriedem> seems that would be obvious
15:02 <mnaser> mriedem the manager is catching the original exception so its making it a tad harder
15:02 <mnaser> based on my search, it should either be in nova/objects/numa.py or nova/virt/hardware.py as those are the two references to it
15:02 <mriedem> oh, LOG.exception?
15:02 <dansmith> jaypipes: yeah, and it might be useful. Just don't say "We were completely burned out at the end of pike and this seems like a bad thing we probably should have handled. Sorry about that."
15:03 <jaypipes> dansmith: don't give me ideas... :P
15:03 <mnaser> mriedem i'll have to do, it's def not catching by default, compute logs is just showing: 2017-08-09 14:33:00.758 4242 DEBUG nova.compute.utils [req-91225f58-71c5-44d3-ae5c-734986b4a3f7 16e021e0ed5f47b68c095d6885f18f4b d7594b0298b54bcc9e4e0f252e1da2e4 - - -] [instance: 18b91008-d42f-4eac-b57d-07195e7774ba] local variable 'sibling_set' referenced before assignment notify_about_instance_usage /usr/lib/python2.7/site-packages/nova/compute/utils.py:313
15:03 <dansmith> mriedem: comment in here about the wording https://review.openstack.org/#/c/491582/6
15:03 <jaypipes> dansmith: how about this? LOG.debug("We used to think we were indecisive. Now we're not so sure.")
15:04 <dansmith> jaypipes: that seems fine to me. highly truth-based
15:04 <jaypipes> very truthy indeed.
15:04 <dansmith> jaypipes: but I'd only +1 it and wait for others to +2
15:07 <mriedem> dansmith: replied https://review.openstack.org/#/c/491424/7/releasenotes/notes/pike_prelude-fedf9f27775d135f.yaml - so you want me to add those or leave it?
15:08 <dansmith> mriedem: I didn't realize that wasn't +A.. I would have.. I was just suggesting that maybe circling back and putting it in there might be worthwhile, but it's really not a big deal
15:08 <dansmith> it's on its way now
15:08 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: replace chance with filter scheduler in func tests  https://review.openstack.org/491529
15:08 <dansmith> I was just commenting on your comment
15:10 <mriedem> want to send this in also https://review.openstack.org/#/c/491581/ ?
15:10 <mriedem> it was previously approved
15:10 <mriedem> pad your stats before i wreck your stats on these RT patches :)
15:10 <mnaser> would someone be kind enough to just eye this for a second with me before i dive deeper?  is it possible that i get an error of sibling_set being referenced before assignment if: the siblings_set is empty, therefore the loop never occurs and it is referenced in line 805 (outside the loop) -- https://github.com/openstack/nova/blob/stable/newton/nova/virt/hardware.py#L777-L807
15:11 <mnaser> sorry poorly worded that, the loop is pretty much skipped over so sibling_set is never set to anything and it's referenced in the if statement below it
15:11 <dansmith> mriedem: :(
15:12 <dansmith> mnaser: looking
15:12 <mriedem> mnaser: https://github.com/openstack/nova/blob/stable/newton/nova/virt/hardware.py#L805 would be the problem right?
15:12 <mriedem> if sibling_sets.items() was empty
15:12 <mriedem> then there is no sibling_set variable set in the for loop above
15:12 <mnaser> thats what i was guessing -- kinda wanted a second pair of eyes before i dive in deeper in the wrong place
15:13 <mnaser> i will check and see what the value of sibling_sets is
15:13 <dansmith> yeah it's referencing the loop variable
15:13 <mnaser> ok cool, i'll do some more checking and see what the sibling set value is when it works and when it doesnt
15:13 <mnaser> (oddly enough, it fails only on the *last* instance to go in the server -- ex: if it fits 15 VMs, 14 will go in, the 15th will fail with that)
15:14 <dansmith> mnaser: you could just put 798 and below inside an "if sibling_sets" conditional
15:14 <dansmith> mnaser: then it won't run if the loop didn't do a thing, which would avoid the problem
15:14 <mriedem> if only stephenfin were around to harass
15:15 <dansmith> it'd be much better to just set something before the loop to None, and then set it inside the loop and only run the bottom code if we found a thing
15:15 <dansmith> because it's just using the last value it iterated over
15:15 <mnaser> yeah that seems cleaner, but i also wonder if the issue is siblings_set being empty
15:15 <dansmith> sibling_set has to be a tuple for the use on L805
15:15 <dansmith> so you can use None as the sentinel
15:15 <mnaser> because maybe it shouldn't be and thats the issue
15:16 * mnaser doesn't fully understand the hardware codebase yet
15:16 <dansmith> mnaser: well, it's fragile code so it deserves fixing regardless, IMHO
15:16 <mnaser> so i just want to make sure the original issue isnt siblings_set being empty?
15:16 <mnaser> true
15:16 <mnaser> i was a bit taken aback seeing a variable referenced before assignment in nova's code :p
15:16 <dansmith> if it being empty is possible (which it clearly is) and that's fatal, then this needs to check for it and log a warning
15:17 <dansmith> we could ask sahid
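
The sentinel pattern dansmith describes, as a standalone sketch rather than the actual hardware.py code (the packing predicate is a stand-in; only the None-sentinel shape is the point):

    import logging

    LOG = logging.getLogger(__name__)

    def pick_sibling_set(sibling_sets, wanted_cpus):
        """Return a usable sibling set, or None if nothing fits."""
        chosen = None  # sentinel: survives an empty or fruitless loop
        for threads_no, sibling_set in sorted(sibling_sets.items(), reverse=True):
            if threads_no * len(sibling_set) >= wanted_cpus:  # stand-in predicate
                chosen = sibling_set
        if chosen is None:
            # the code under discussion instead fell through to the loop
            # variable here, which raises UnboundLocalError when
            # sibling_sets is empty -- exactly the crash mnaser is seeing
            LOG.warning('no sibling set can hold %d vCPUs', wanted_cpus)
        return chosen
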
15:17 <openstackgerrit> Matt Riedemann proposed openstack/nova master: Add release note for shared storage known issue  https://review.openstack.org/491582
15:18 <gibi> mriedem, jaypipes, dansmith: As Jay's resize confirm fix is almost done, could you take a look at the other resize bugfix https://review.openstack.org/#/c/491491
15:19 <mnaser> yum, theory validated - SIBLING_SETS: defaultdict(<type 'list'>, {}) _pack_instance_onto_cores /usr/lib/python2.7/site-packages/nova/virt/hardware.py:781
15:20 <mnaser> now time to understand the siblings_set and why its empty
15:20 <danpb> dansmith: ?
15:20 <dansmith> mnaser: danpb is your huckleberry
15:21 <mnaser> hi danpb -- i think i've discovered an issue in nova/virt/hardware.py
15:21 <mnaser> for some reason, siblings_set is becoming an empty list
15:21 <mnaser> https://github.com/openstack/nova/blob/stable/newton/nova/virt/hardware.py#L777-L807
15:21 <openstackgerrit> Sean Dague proposed openstack/nova master: Structure cli page  https://review.openstack.org/492111
15:21 <openstackgerrit> Sean Dague proposed openstack/nova master: Add documentation for documentation contributions  https://review.openstack.org/492124
15:21 <openstackgerrit> Sean Dague proposed openstack/nova master: Clean up *most* ec2 / euca2ools references  https://review.openstack.org/492166
15:21 <openstackgerrit> Sean Dague proposed openstack/nova master: doc: Import configuration reference  https://review.openstack.org/491853
15:21 <mnaser> and when its empty (for whatever reason), line 805 references a variable that was not assigned
15:21 <dansmith> danpb: we know how to fix the acute issue, but mnaser is concerned that sibling_sets being empty might be indicative of some more fundamental problem
15:22 <dansmith> and wanted a sanity check
15:22 <sdague> mriedem: the config reference import that stephenfin was working on should be workable now, the last iteration had a wrong include stanza that worked locally because of cruft I had, but failed in the gate
15:22 <dansmith> danpb: we're hoping it was just an oversight and that set can be empty for legit reasons
15:23 <mnaser> is there any scenario where sibling_set should be empty?  this is a server with 32 cores, vcpu_pin_set is set to 2-31
15:23 <mnaser> wait
15:23 <mnaser> oh boy
15:23 <mnaser> did i just forget how to math
15:23 <mnaser> and the fact it had 29 cores probably messed up the math
15:24 <danpb> sibling_sets gets populated from available_siblings iiuc
15:24 <danpb> so presumably available_siblings is empty too ?
15:24 <mnaser> danpb: let me put some log.debug's, but i confirmed that siblings_set was empty
15:24 <mnaser> fyi this issue only occurs on the *last* VM to fit on the compute node
15:26 <danpb> well i think you'd need to work backwards through the call stack to figure out why it becomes empty
15:26 <mnaser> danpb: AVAILABLE_SIBLINGS: [CoercedSet([]), CoercedSet([]), CoercedSet([]), CoercedSet([]), CoercedSet([]), CoercedSet([]), CoercedSet([])]
15:27 <mnaser> ok i guess ill have to see why host_cell.free_siblings is being set to that
15:27 <dansmith> danpb: doesn't this come from libvirt?
15:28 <mnaser> the only thing i can imagine which can cause a corner case is the fact that we reserve 2 cores for the OS, so vcpu_pin_set=2-31 .. maybe that's not taken into consideration (guessing)
15:28 <openstackgerrit> Matt Riedemann proposed openstack/nova master: [placement] Add api-ref for usages  https://review.openstack.org/480563
15:29 <danpb> dansmith: libvirt will provide info on the siblings present on the host, but iiuc free_siblings is populated by nova
15:30 <danpb> so if mnaser is saying it works correctly for all VMs until the last one, it sounds like nova is filtering the info from libvirt and ending up with the empty set
15:30 <dansmith> danpb: oh okay, I hadn't traced it very far up the stack because I assumed we must just be getting an empty set of things from libvirt because we had no numa info or something
15:30 <dansmith> danpb: yeah, makes sense
15:30 <sahid> mnaser, danpb: just for information, i was working on this but did not find the root cause
15:30 <sahid> https://review.openstack.org/#/c/458848/
15:30 <mnaser> im checking the values of host_cell and instance_cell
15:31 <mnaser> HOST_CELL: NUMACell(cpu_usage=14,cpuset=set([2,4,6,8,10,12,14,16,18,20,22,24,26,28,30]),id=0,memory=196562,memory_usage=57344,mempages=[NUMAPagesTopology,NUMAPagesTopology],pinned_cpus=set([2,4,6,8,10,12,14,18,20,22,24,26,28,30]),siblings=[set([8,24]),set([2,18]),set([10,26]),set([12,28]),set([6,22]),set([14,30]),set([4,20])])
15:31 <danpb> how many VMs are you running ?
15:32 <danpb> you've got 7 pairs of siblings there, so if each VM wanted one pair, you'd be able to run 7 VMs
15:32 <mnaser> booted with 120x 1GB hugepages, 30 cores available, trying to boot 15 VMs with 2 cores each + 8gb of memory each
15:32 <mnaser> but im not using isolate, im using prefer which i believe should try to schedule them on the same hyperthread
15:33 <mnaser> my flavor has properties: hw:cpu_policy=dedicated, hw:cpu_thread_policy=prefer, hw:mem_page_size=1048576, hw:numa_nodes=2
15:33 <mnaser> (also hw:cpu_thread_policy=prefer forgot to put that in)
15:34 <mnaser> i was thinking hw:numa_nodes=2 and hw:cpu_thread_policy=prefer might be the source of the issue because they are a bit the opposite of each other
15:35 <mnaser> getting memory split from 2 numa nodes but also wanting to be on the same hyperthread isnt really possible considering you'll want to be on two physical cpus to access the numa node
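
For reference, the flavor mnaser describes can be reproduced with standard extra specs; a sketch using python-novaclient, where 'nova' is an authenticated Client instance and the flavor name is made up:

    # hypothetical reproduction of the flavor under discussion
    flavor = nova.flavors.create('pinned.2c.8g', ram=8192, vcpus=2, disk=20)
    flavor.set_keys({'hw:cpu_policy': 'dedicated',
                     'hw:cpu_thread_policy': 'prefer',
                     'hw:mem_page_size': '1048576',
                     'hw:numa_nodes': '2'})

As danpb explains just below, hw:numa_nodes=2 with a 2-vCPU guest forces each vCPU onto a separate host NUMA node, so the 'prefer' thread policy can never actually pair them on siblings.
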
15:36 <openstackgerrit> Matt Riedemann proposed openstack/nova master: [placement] Add api-ref for RP usages  https://review.openstack.org/450105
15:37 <danpb> mnaser: how many vcpus per guest ?
15:38 <danpb> (i mean what is the vcpu count in the flavour ?)
15:38 <mnaser> danpb: 2
15:38 <danpb> that flavour config looks flawed then
15:38 <danpb> you've got 2 vcpus, and you've said to give the VM 2 virtual numa nodes
15:38 <danpb> nova will want to place each virtual numa node on a separate host numa node
15:39 <danpb> so each vcpu will need to be on a separate socket
15:39 <openstackgerrit> Matthew Edmonds proposed openstack/nova master: use conf for keystone session creation  https://review.openstack.org/485121
15:39 <danpb> so asking nova to put the vcpus in the same hyperthread sibling is nonsensical
15:39 <danpb> as you can't have siblings if you have separate sockets
15:39 <mnaser> that is what i kinda theorized a bit
15:39 <mnaser> so it will have to fall back to the 1 numa node default
15:40 <danpb> nova shouldn't be trying to use hyperthread siblings at all in this case
15:40 <mnaser> that was my assumption of what it would do, thats why i put 'prefer' in there because i figured that was the more reasonable choice
15:40 <cfriesen> with hw:cpu_thread_policy=prefer it should just give separate pCPUs.  if you had hw:cpu_thread_policy=require I think it'd fail
15:40 <danpb> putting in prefer should be a no-op in this case
15:41 <danpb> as the need to pick separate sockets should have taken priority in nova to satisfy the numa constraint
15:41 <danpb> and if you have thread_policy=require, then nova ought to raise a fatal error
15:41 <cfriesen> hw:cpu_thread_policy=prefer is always a no-op, technically.  it's the default behaviour
15:42 <mnaser> right: so in that case, the config seems to be ok (yes, the prefer is noop/will never work) but it should still work
15:42 <mnaser> numastat -m reports 4096 free hugepages in each node (for a total of 8192)
15:45 <cfriesen> too bad sfinucan is on holidays. :)
15:45 <mnaser> the used cores are: 2-15,18-31
15:46 <mnaser> which leaves 16/17 free which are on two different sockets
15:46 <mnaser> so technically it should schedule on 16,17 which will be able to get 4096MB from each numa socket
15:47 <cfriesen> mnaser: which version of the code is this?
15:47 <mnaser> newton
15:48 <mnaser> python-nova-14.0.7-1.el7.noarch
15:48 <mnaser> more specifically
15:49 <mnaser> sahid i looked at your change however it seems siblings_set being empty is a possibility :(
15:50 <cfriesen> dansmith: for some reason I thought that pCPUs without a sibling would still be in sibling_set, just as a single-item set
15:50 <efried> sdague https://github.com/openstack/nova/blob/master/nova/cmd/status.py#L198 Am I missing something, or is this untrue?
15:51 <hongbin> oomichi: hi, my team is working on deciding the default api version, a team member asked me to check with you since you did the api version work in nova. i mainly wanted to know how nova maintains the default api version (pick the latest version or a stable version? bump the default version every time a new version is introduced? etc.) Are you the right person to ask?
15:51 <mnaser> does host_cell get assigned by the scheduler?
15:52 <cfriesen> mnaser: danpb: dansmith: this looks sort of related, but the fix should be in newton: https://bugs.launchpad.net/nova/+bug/1578155
15:52 <openstack> Launchpad bug 1578155 in OpenStack Compute (nova) newton "'hw:cpu_thread_policy=prefer' misbehaviour" [Medium,Fix committed] - Assigned to Stephen Finucane (stephenfinucane)
15:52 <dansmith> I really don't know anything about this stuff
15:54 * artom remembers something like this
15:54 <jaypipes> mriedem, dansmith: I will be afk for 2 hours, just FYI.
15:54 <cfriesen> mnaser: so are 16/17 the siblings of 0/1 which are not included in vcpu_pin_set?
15:55 <openstackgerrit> Matt Riedemann proposed openstack/nova master: [placement] Add api-ref for usages  https://review.openstack.org/480563
15:55 <artom> About the fix, at any rate.
15:55 <mnaser> cfriesen nope, 0/1 are the ones which are not included
15:55 <mnaser> so our vcpu_pin_set is 2-31
15:55 <mnaser> (32 core machine)
15:55 <cfriesen> mnaser: yes, but which are the HT siblings of 0/1?
15:56 <danpb> hmm, yes, it seems numa node 0 has even numbered cpus, and node 1 has odd numbered cpus
15:56 <mnaser> cfriesen sorry, not sure i follow, but here's lscpu output if that helps? http://paste.openstack.org/show/617955/
15:56 <danpb> so excluding 0-1 kills 1 cpu on each host numa node, instead of 2 cpus
15:56 <cfriesen> mnaser: based on the pattern above, the HT siblings of 0/1 would be 16/17....I'm wondering if that's confusing the logic in hardware.py
15:56 <mnaser> but i think that they are
15:57 <cfriesen> mnaser: running "virsh capabilities" will show sibling info
15:57 <danpb> so that in turn kills an entire sibling pair from each numa node
15:57 <danpb> effectively making 4 cpus unavailable
15:57 <cfriesen> danpb: theoretically it shouldn't kill the sibling pair though...we should have two pCPUs with no siblings
15:58 <mnaser> here is output of virsh caps. http://paste.openstack.org/show/617956/
15:58 <cfriesen> but maybe the logic can't handle a mix of sibs and no sibs
15:58 <mriedem> someone want to approve this? https://review.openstack.org/#/c/491855/
15:58 <danpb> yeah that wouldn't be surprising
15:58 <mnaser> so am i better off reserving 2 siblings and maybe that might get rid of the 'confusion' ?
15:58 <cfriesen> mnaser: seems likely
15:58 <mnaser> so instead of 0,1 .. do 0,16 instead?
15:59 <danpb> yeah
15:59 <mnaser> let me test that theory out
16:00 <openstackgerrit> Ed Leafe proposed openstack/nova master: Handle addition of new nodes/instances in ironic flavor migration  https://review.openstack.org/487954
16:00 <edleafe> dansmith: dtantsur: ^^ updated to remove the node RC change handling
16:00 <dtantsur> awesome! I'm just out of a meeting, will do the ironic part now
16:00 <mnaser> so my vcpu_pin_set should technically be 1-15,17-31 in that case, correct?
16:00 <mriedem> gibi: wasn't that also a problem in ocata then? CoreFilter isn't enabled by default
16:00 <mnaser> leaving 0,16 for the OS
16:00 <cfriesen> mnaser: either way, please raise a bug for this issue, it'd be nice to handle it properly
16:01 <danpb> mnaser: yeah
16:01 <mnaser> cfriesen agreed
16:01 <mnaser> ok, lets try this out
16:01 <mnaser> ill reboot with the newly updated isolcpus= values so itll take me a few minutes and report back
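
For concreteness, the usual pairing of the kernel and nova settings mnaser is converging on, reserving core 0 and its hyperthread sibling 16 for the host OS; the exact values are an assumption based on this box's topology:

    # kernel command line: keep the host scheduler off the guest cores
    #   isolcpus=1-15,17-31
    # nova.conf on the same compute node: pin guests to exactly those cores
    [DEFAULT]
    vcpu_pin_set = 1-15,17-31
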
16:02 <dansmith> mriedem: melwitt: I'm going to disappear suddenly somewhere in the hour of the cells meeting.. assume we were on track to punt anyway, but.. that okay?
16:03 <dansmith> jaypipes: so what's the deal with that last patch? do you want me to update it and remove the extraneous continue? or log something there or what?
16:04 <jaypipes> please, yes, go for it.
16:04 <dansmith> ...which?
16:04 <gibi> mriedem: you are right, CoreFilter is not enabled by default in Ocata. I will quickly backport the test locally to Ocata to confirm...
16:04 <jaypipes> dansmith: ^ sorry, I've been in meetings and dealing with a russian visa mess all morning and now have a doctor's appt.
16:04 <jaypipes> dansmith: remove the extraneous continue thing
16:04 <dansmith> okay
16:06 <mriedem> dansmith: yeah planned on skipping cells v2 meeting today
16:06 <dansmith> mriedem: okay
16:06 <mriedem> gibi: the fix in ocata would have to be different probably since this code all got refactored in pike
16:06 <mriedem> gibi: but just wanted to make sure it's not an rc1 regression blocker thing
16:08 <dansmith> mriedem: I actually think we probably should enable the cache.. I thought it was already done on the computes, but it's not. The other use on compute is to calculate the rpc pin, which is cached until restart anyway
16:09 <mnaser> danpb cfriesen -- i'm now getting "Not enough available CPUs to schedule instance. Oversubscription is not possible with pinned instances. Required: 1, actual: 0"
16:09 <mnaser> HOST_CELL: NUMACell(cpu_usage=14,cpuset=set([2,4,6,8,10,12,14,18,20,22,24,26,28,30]),id=0,memory=196562,memory_usage=57344,mempages=[NUMAPagesTopology,NUMAPagesTopology],pinned_cpus=set([2,4,6,8,10,12,14,18,20,22,24,26,28,30]),siblings=[set([8,24]),set([2,18]),set([10,26]),set([12,28]),set([6,22]),set([14,30]),set([4,20])])
*** trinaths has left #openstack-nova16:11
danpbwhere's your other numa cell16:11
danpbthat just shows the first cell16:11
mnaseri dont know why its not listed.  i added LOG.debug("HOST_CELL: %s" % host_cell) to _numa_fit_instance_cell_with_pinning16:12
*** pchavva has quit IRC16:12
cfriesenmnaser: I think I know what's going on...you now have fewer pCPUs in one node than the other, and you're asking for 2-node guests16:12
mnaserlet me get you16:12
mnaserboth host cell output16:12
mnaseroh16:12
mnaseri think you're right16:13
danpbcfriesen: yep makes sense16:13
mnaserlet me switch things back to how they were and get the output of both host cells16:13
danpbmnaser: why do you want the guests to have multiple virtual numa cells ?16:14
*** cdent has joined #openstack-nova16:14
cfriesenmriedem: for a reno for https://review.openstack.org/#/c/491854/ would we want to describe the removal of the two default filters in "features", "upgrade", or "other"?16:14
danpbit's generally not something you'd do unless guest memory exceeds the amount available in a single host, or you need to consume, say, PCI devices from separate nodes at the same time16:14
*** lucasagomes is now known as lucas-afk16:14
mnaserdanpb: still experimenting but the idea was more efficient use of hardware.  i have 120x 1gb large pages which means that i'll end up with 60gb on each numa node, if i put 8gb sized instances only, i'll end up with 4096mb in each numa node that's unused16:15
cfriesendanpb: or you want increased memory bandwidth16:15
*** hieulq has joined #openstack-nova16:15
danpbcfriesen: that's only increased if your guest workload  avoids cross-node traffic16:15
cfriesendanpb: agreed16:16
mnaserby doing this, it splits 4096 into each numanode and i can fill the freepages .. but it's not something that's set in stone fully16:16
danpbcfriesen: otherwise you'd actually decrease throughput16:16
cfriesenmnaser: the downside is that your guests need to be able to avoid cross-numa traffic, which makes guest coding trickier16:16
danpbby having the guest all contend on the cross-node memory bus16:16
*** crushil has quit IRC16:16
cfriesendanpb: yes, it'd be a special-case16:16
mnaseryeah i was thinking the linux kernel already does an ok job handling multiple numa nodes16:17
danpbIOW unless your guest app is intelligent you want to avoid multiple numa nodes16:17
mnaseri see16:17
cfriesenmnaser: the kernel does, but not all userspace code does.16:17
mriedemdansmith: ack - gonna be afk for about an hour16:17
mnaseralso the only other tradeoff that comes with this is that the threads are not shared which is not ideal16:17
mnaserso that was something i didnt like about doing this16:17
mriedemcan't be worst dad of the summer 2 days in a row16:17
cfriesenmnaser: so if you've got a single large app that wants most of that 8GB...16:17
dansmithmriedem: okay I'll push this up with those changes and I'll be gone before jenkins gets to it16:18
mnaserfor completion's sake btw, this is both host cells16:18
dansmithmriedem: jaypipes so baton back to you at that point16:18
mnaserhttp://paste.openstack.org/show/617957/16:18
mnaserthere are 15 sets there16:19
mnaserlet me see what it was when we had 0-1 only16:19
cdentdansmith: couple questions about expected discover_hosts behavior: is it supposed to be idempotent (run it again and again, it’s okay)? Is it supposed to cope if two different processes run it at the same time?16:20
cfriesenmnaser: but only 14 pairs of pCPUs from different numa nodes16:20
mnaseryeah, thats why it failed now, but in the original case16:20
mnaserstrangely enough16:20
mnaseri only see one output of numacell, not two16:20
dansmithcdent: yeah, it's expected to just run it over and over again, from cron even16:20
dansmithcdent: you might get a failure if you run multiples and one loses the race to insert the record, but other than that it should be fine to run multiple threads of it16:21
cdentdansmith: thanks that’s what I was hoping/expecting but wanted to confirm16:21
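For reference, the cron-driven pattern dansmith describes might look like this (schedule illustrative; nova-manage cell_v2 discover_hosts is the actual command):

    # run host discovery every ten minutes; the command is idempotent, so
    # repeated or overlapping runs are harmless apart from a possible
    # lost race on the record insert
    */10 * * * * nova-manage cell_v2 discover_hosts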
*** markvoelker has quit IRC16:23
mnaserim going to try setting hw:numa_nodes=1 and packing the server and seeing what happens (i should be able to get 14 at least)16:23
gibimriedem: ocata is not affected by the bug: https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/filter_scheduler.py#L18816:25
gibimriedem: I mean bug 170863716:26
openstackbug 1708637 in OpenStack Compute (nova) "nova does not properly claim resources when server resized to a too big flavor" [High,In progress] https://launchpad.net/bugs/1708637 - Assigned to Balazs Gibizer (balazs-gibizer)16:26
*** crushil has joined #openstack-nova16:26
cfriesenmnaser: do you actually need 8GB?  if you don't actually need it all, you could drop to 2MB hugepages and divide up the memory evenly with less waste.   For most things 1GB pages don't give that big of a boost.16:26
mnasercfriesen have you had experience with it? i just figured that if i can have 1gb pages, it would be better than 2mb pages but there isn't much substance to it other than 'it seems right'16:27
mnaserdropping to 2mb would obviously make life much easier16:27
cdentmriedem: yeah, we’ve been pretty inconsistent about which 400s are documented in the placement-api-ref. anything common like “yo, not here” and “hey, you violated schema” has frequently been dropped. I’ve not been too strict on my reviews of that stuff except where a 4xx has some particular weird sense16:27
*** danpb has quit IRC16:28
cdentthe current tooling doesn’t really have as much support for handling error responses as we might want, that’s probably something we could and should address later. I think once the whole thing is documented we’ll be able to tune it as a whole, better16:28
*** lpetrut has quit IRC16:28
*** danpb has joined #openstack-nova16:28
mnaseri just tried to create 14x 2vcpu/8gb memory and that went ok, then 1x 1vcpu/4gb memory and that was okay, so now i have 4gb free large pages.. trying to create 1vcpu/4gb - Host does not support requested memory pagesize. Requested: 1048576 kB16:29
*** randomha1k has quit IRC16:29
*** lpetrut has joined #openstack-nova16:29
mnaserlooks like it was assigned core 21, which is on node1 .. but my free 4096 is on node016:30
mnaseri guess this happened because im still using the same thread pair, probably wouldnt happen again if i reserve 0-1 for the os16:30
*** jmlowe has quit IRC16:31
*** danpb has quit IRC16:31
cfriesenmnaser: we've done some testing.  there are some cases where it makes a noticeable difference, but most of the time the difference is not worth the wasted memory.  it depends on the guest16:31
*** markvoelker has joined #openstack-nova16:32
mnasercfriesen all the guests are going to be multiples of 1gb in memory so 120gb in large pages in 1gb or in 2mb .. wouldn't make much difference, no? it looks like the # of freepages is split across both numa nodes16:33
*** slaweq_ has quit IRC16:34
cfriesenmnaser: if it must be exact multiples of 1GB, then you may as well use 1GB hugepages.16:34
*** slaweq_ has joined #openstack-nova16:35
mnasercfriesen yeah they're all multiples of 1gb.. anyways ill try going back to 0-1 and seeing if i can fully populate the server16:35
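An illustrative flavor for the layout being tested (name and sizes hypothetical; hw:cpu_policy, hw:mem_page_size and hw:numa_nodes are the real extra specs under discussion):

    openstack flavor create --vcpus 2 --ram 8192 --disk 40 v2.8g.pinned
    openstack flavor set v2.8g.pinned \
      --property hw:cpu_policy=dedicated \
      --property hw:mem_page_size=1GB \
      --property hw:numa_nodes=1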
mnaserand then after that ill file a bug regarding that sibling_set issue16:35
*** eharney has joined #openstack-nova16:37
*** crushil has quit IRC16:38
*** slaweq_ has quit IRC16:38
*** slaweq_ has joined #openstack-nova16:39
*** baoli has quit IRC16:39
*** kornicameister has quit IRC16:40
*** rcernin has quit IRC16:41
*** baoli has joined #openstack-nova16:42
*** Nil_ has joined #openstack-nova16:44
cdentmriedem: is this still relevant or has other stuff killed it: https://review.openstack.org/#/c/488187/16:44
*** lyan has quit IRC16:45
*** kornicameister has joined #openstack-nova16:45
*** markvoelker has quit IRC16:46
*** sridharg has quit IRC16:47
*** crushil has joined #openstack-nova16:50
*** sbezverk has joined #openstack-nova16:52
*** markvoelker has joined #openstack-nova16:55
*** yamamoto has joined #openstack-nova16:58
*** pchavva has joined #openstack-nova16:58
*** gcb has quit IRC17:02
*** yamamoto has quit IRC17:03
*** krtaylor has quit IRC17:05
*** eharney has quit IRC17:08
*** david-lyle has quit IRC17:08
*** david-lyle has joined #openstack-nova17:08
*** Sukhdev_ has quit IRC17:09
*** jpena|off is now known as jpena17:12
mnaserreserving 0-1 allows me to start 14 2vcpu/8gb, but refuses to let me start anymore with that same sibling_sets bug17:13
*** kristian__ has quit IRC17:13
openstackgerritChris Friesen proposed openstack/nova master: Remove ram/disk sched filters from default list  https://review.openstack.org/49185417:14
*** kristian__ has joined #openstack-nova17:14
openstackgerritChris Dent proposed openstack/nova master: replace chance with filter scheduler in func tests  https://review.openstack.org/49152917:17
*** dtantsur|brb is now known as dtantsur17:18
mnasercfriesen: doing more debugging, i believe i'm onto something -- HOST_TOPOLOGY: NUMATopology(cells=[NUMACell(UNKNOWN),NUMACell(1)])17:18
mnaserfor some reason the first numacell is unknown..? i'll have to check that17:18
*** kristian__ has quit IRC17:18
*** mriedem has quit IRC17:22
*** lyan has joined #openstack-nova17:22
melwittdansmith: +1 to skipping cells meeting17:23
*** mkrai_ has quit IRC17:23
*** randomhack has joined #openstack-nova17:25
*** mingyu_ has quit IRC17:26
*** mdnadeem has joined #openstack-nova17:26
*** mdnadeem has quit IRC17:26
*** mingyu has joined #openstack-nova17:27
*** gjayavelu has quit IRC17:27
*** mriedem has joined #openstack-nova17:29
mriedemcdent: not sure, but not something we need to care about for rc117:29
*** ralonsoh has quit IRC17:29
*** gszasz has quit IRC17:29
cdentmriedem: ‘k, will circle back round to that after we get the main things settled17:29
openstackgerritDan Smith proposed openstack/nova master: Resource tracker compatibility with Ocata and Pike  https://review.openstack.org/49101217:31
*** mingyu has quit IRC17:31
dansmithmriedem: jaypipes ^17:31
*** markvoelker has quit IRC17:31
*** randomhack has quit IRC17:32
*** sambetts is now known as sambetts|afk17:32
mriedemcfriesen: commented in your change17:33
mriedemcfriesen: if you're using the caching scheduler, you still rely on those filters since the caching scheduler doesn't use placement17:33
dansmithmriedem: did we ever push up that deprecation for those schedulers?17:33
mriedembut the caching scheduler isn't the default filter driver, the filter_scheduler is, so i think it's fine for the default enabled filters to match the default scheduler driver17:33
mriedemdansmith: i didn't see one17:33
dansmithsounded like sdague was going to but I don't know that he did17:33
dansmithmriedem: is it enough to put something in the config help text or do you want to log something?17:34
*** lpetrut has quit IRC17:34
mriedemi think we'd need both17:34
*** jmlowe has joined #openstack-nova17:35
*** jmlowe has quit IRC17:35
mnasercfriesen i'm onto something, with a totally empty hypervisor -- i see this -- AVAILABLE_SIBLINGS: [CoercedSet([1, 17]), CoercedSet([31, 15]), CoercedSet([23, 7]), CoercedSet([13, 29]), CoercedSet([27, 11]), CoercedSet([19, 3]), CoercedSet([9, 25]), CoercedSet([5, 21])]17:35
*** vks1 has quit IRC17:35
*** jpena is now known as jpena|off17:35
dansmithmriedem: okay17:35
cdentdansmith: was the resolution on the question of what to do, when we get to the end of that set of conditionals where jay had “literally no idea what to do”, to do nothing?17:35
mriedemdansmith: plus reno of course17:35
mnaser[1, 17] shouldn't be there, it should just be 17, because vcpu_pin_set is 2-3117:35
*** jmlowe has joined #openstack-nova17:35
mnaseri dont think that codebase takes vcpu_pin_set into consideration17:35
dansmithcdent: yes17:35
*** jmlowe has quit IRC17:36
*** shan has joined #openstack-nova17:36
mriedemgibi: ok so i guess it is a regression in pike then so i'll mark it pike-rc-potential17:36
*** suresh12 has joined #openstack-nova17:37
cfriesenmnaser: you had changed vcpu_pin_set to 0,16, no?17:37
cfriesenmriedem: okay, I'll respin17:38
*** jmlowe has joined #openstack-nova17:38
mnasercfriesen i switched back.  by setting it to 0,16 => i end up being unable to spin the last instance because all cores + memory are taken on numanode #217:38
mnasers/#2/#1/17:38
mnaserand numanode 0 has 4gb memory left unused17:38
mnaserby using 0,16 -- i have a single thread in both cores that can use the leftover 4gb17:39
mnaserusing 0,16 effectively means 14 threads for 1 numanode and 16 threads for the other.  vs 15/1517:39
*** mingyu has joined #openstack-nova17:40
*** gjayavelu has joined #openstack-nova17:40
cfriesenmnaser: please double-check that you restarted nova-compute or rebooted after changing vcpu_pin_set.   I agree that you shouldn't see 1 in the AVAILABLE_SIBLINGS list.17:40
mriedemgdi, people, tox -e fast8 before git review17:40
mriedemhttps://review.openstack.org/#/c/488510/17:40
mnaser DEBUG oslo_service.service [req-39a92c29-83e3-4cb0-9f78-29b97d61147d - - - - -] vcpu_pin_set                   = 2-31 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2622 in the logs,  /proc/cmdline  contains isolcpus=2-3117:41
*** rcernin has joined #openstack-nova17:42
*** jamesdenton has quit IRC17:42
*** abalutoiu has quit IRC17:42
*** kornicameister has quit IRC17:43
*** sdague has quit IRC17:43
mnaserhttp://paste.openstack.org/raw/617965/ this is what is generated from `resources` in claims.py17:44
*** yamahata has joined #openstack-nova17:44
mnaseri dont even see it in the siblings there, i wonder if it is in the process of the transformation to the DB object17:44
*** kornicameister has joined #openstack-nova17:46
cfriesenmriedem: what do you think of this wording?  http://paste.openstack.org/show/617966/17:47
*** jamesdenton has joined #openstack-nova17:47
mriedem"If you have     ``scheduler_driver`` set to ``caching_scheduler``, you should probably     add these entries back in along with CoreFilter."17:48
mriedemissue the first: scheduler_driver is the old option name17:48
mriedemit's now [scheduler]/driver17:48
cfriesenwhoops, was looking at wrong window. :)17:49
mriedemissue the 2nd: let's not tell them what to do if they're not using the filter scheduler, but just point out that non-FilterScheduler drivers, like the CachingScheduler, may want to have these filters enabled17:49
cfriesenokay17:49
mriedemif they already have that option defined in their nova.conf, then they don't have to 'add' anything back in, as it's already there, and we're just changing the defaults17:50
cfriesenmriedem: the issue I'm worried about is if they were relying on the defaults17:50
mriedemsure, and that's why you point out that non-FilterScheduler drivers, like the CachingScheduler, may want to have these enabled17:50
cfriesenfair enough17:50
mriedemisn't there also a case that you don't want the DiskFilter enabled - if you're using shared storage?17:52
mriedemlooks like we never documented that in the description for the DiskFilter17:52
*** randomhack has joined #openstack-nova17:52
mriedemi'm not saying you need to for this change, but just thinking of the operator that then starts looking at if they need to enable these or not - will they have enough information to make that decision17:53
*** mamandle has joined #openstack-nova17:53
cdentthanks mriedem, by being assigned the fix for that bug, I should receive a gloriously huge bug bounty from … someone17:53
mriedemtjones17:54
mriedemhas the bug bounty kitty17:54
*** gyee has joined #openstack-nova17:54
openstackgerritDan Smith proposed openstack/nova master: Mark Chance and Caching schedulers as deprecated  https://review.openstack.org/49221017:54
*** Sukhdev_ has joined #openstack-nova17:54
dansmithmriedem: ^17:54
*** sshwarts has quit IRC17:55
*** mingyu has quit IRC17:55
openstackgerritChris Friesen proposed openstack/nova master: Remove ram/disk sched filters from default list  https://review.openstack.org/49185417:57
*** kbaegis has joined #openstack-nova17:57
mriedemdansmith: lgtm17:57
mnasercfriesen my apologies, looks like wrong config.. but i think i've narrowed it down to this: AVAILABLE_SIBLINGS: [CoercedSet([8, 24]), CoercedSet([2, 18]), CoercedSet([10, 26]), CoercedSet([28, 12]), CoercedSet([22, 6]), CoercedSet([30, 14]), CoercedSet([20, 4])] .. technically there should be one more set in there, CoercedSet([16]) -- because 0 is reserved17:57
mnaserit's removed the entire set rather than the single one17:58
cfriesendansmith: is the performance of FilterScheduler with Placement comparable to CachingScheduler?17:58
mnaser(this might be obvious to you but i'm just figuring it out, heh)17:58
dansmithcfriesen: it should end up being much better17:58
cfriesenmnaser: if you've got 0/1 reserved then I think arguably there should be a set with just 16 and another set with just 1717:59
*** yamamoto has joined #openstack-nova17:59
cfriesenit should be noted though that the performance of 16/17 is going to be variable, depending on what load is on 0/118:00
cfriesendansmith: better?  isn't CachingScheduler in memory for the most part?18:00
mnaserof course, that's a given18:00
*** lucasxu has joined #openstack-nova18:00
*** sdague has joined #openstack-nova18:01
dansmithcfriesen: we've been discussing this in this channel for like I year, so I hope I don't have to summarize all of it, but if you count the ability to multi-thread the scheduler with placement, and zero reschedules, yes, it should be much better18:01
dansmiths/I/a18:01
*** MVenesio has quit IRC18:01
mriedemroman numeral I18:02
*** MVenesio has joined #openstack-nova18:02
mriedemyou meant18:02
sdaguedansmith: I did not push the deprecation of the other schedulers18:02
dansmithheh, yeah18:02
sdaguedansmith: if that's a patch that is needed, I could do it this afternoon18:02
dansmithsdague: I just did18:02
mriedemhttps://review.openstack.org/#/c/492210/118:02
sdaguedansmith: ok, coolio18:02
cfriesendansmith: ah, you mean overall scheduler load.  got it.  (I was thinking time to perform a single schedule operation.)18:02
mnaserhttps://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5632-L5633 -- bingo.. now to see why the decision of filtering out singles is there18:02
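A rough sketch of the suspected behaviour (illustrative only, not nova's actual code): if the pinning logic intersects each host sibling set with the usable cpuset and keeps only complete sets, a lone surviving thread like 16 disappears entirely:

    def available_siblings(sibling_sets, usable):
        out = []
        for sibs in sibling_sets:
            avail = sibs & usable
            if len(avail) == len(sibs):        # only whole pairs survive...
                out.append(avail)
        return out                             # ...so set([16]) never shows up

    siblings = [{0, 16}, {1, 17}, {2, 18}]
    usable = set(range(1, 32))                 # thread 0 reserved for the OS
    print(available_siblings(siblings, usable))  # [{1, 17}, {2, 18}] -- 16 is lost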
mriedemcfriesen: it's also that18:02
*** jpena|off is now known as jpena18:02
mriedemtime to schedule should be faster when you don't have reschedules18:03
mriedemb/c of bad decisions18:03
mriedemcfriesen: your team seems to do some performance testing, it would be cool if your guy(s) could benchmark and compare the two now in pike18:03
dansmithif you really care about the milliseconds required to place something, you should be able to do better with a bunch of filterschedulers than one in-memory caching scheduler I would think18:04
mriedemlike single caching scheduler vs multiple filter schedulers18:04
cdentcfriesen: it may be the case, but I’m not sure that anyone has proven it yet, that single schedule operations will speed up because placement will help limit the initial set of available candidates18:04
*** yamamoto has quit IRC18:04
dansmithcdent: cachingscheduler only refreshes the list when it runs out of spots I think, so it varies, but it also is using very outdated information which means fast but poor decisions18:04
cfriesenI'd expect the FilterScheduler to speed up due to Placement.  Wasn't sure that it'd be faster than CachingScheduler.  Anyways, we've been using FilterScheduler since we care about accurate resource tracking.18:05
mriedemcaching scheduler is also on a periodic18:05
cfriesen(We modified stuff like resize and migration to update the resource tracking immediately rather than wait for the resource audit.)18:05
dansmithcaching scheduler is filter scheduler, with cached host list data18:05
mriedemdansmith: so this resize/rt series has a pep8 failure in the bottom change,18:06
*** kbaegis has quit IRC18:06
mriedemdansmith: but i guess hold off on fixing that so i can go through them now18:06
mriedemthe last 2 i mean18:06
dansmithokay18:06
mriedemdansmith: it's T-~3 hours to 0 dan time yeah?18:06
dansmithcheck queue is 7h long18:06
*** kbaegis has joined #openstack-nova18:07
dansmithmriedem: I think you mean T-3 hours until free-of-dan time18:07
dansmithbut yeah18:07
dansmithI assume everyone brought liquor and cake to work today for the post-2pm PDT party18:07
mriedembrought? i think it's just generally available18:08
dansmithI was trying to act like it was the 90s18:08
*** felipemonteiro has joined #openstack-nova18:10
*** felipemonteiro_ has joined #openstack-nova18:12
mnasercfriesen thanks for all your feedback, i think the best solution for now is going to be to use 0,16 to have predictable performance for the host and assign hugepages with a slight bias towards node1 because it has 2 more cores so things can get properly packed18:13
*** dtantsur is now known as dtantsur|afk18:15
*** felipemonteiro has quit IRC18:16
*** lpetrut has joined #openstack-nova18:17
*** kbaegis1 has joined #openstack-nova18:21
*** yassine has quit IRC18:22
sdaguedansmith: yeh, we're down to 800 nodes in ci18:23
sdaguewhich is unfortunately not sufficient18:23
dansmithwell, it was fun while it lasted18:24
*** kbaegis has quit IRC18:24
sdagueif anyone wants to do doc reviews to hopefully complete the import - https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/doc-migration18:25
*** xinliang has quit IRC18:26
cfriesenmnaser: sorry you hit a bug. :)    we should be able to get it sorted out once sfinucan comes back from holidays18:30
cfriesenmnaser: ping me with the bug number once you file it please18:31
*** markvoelker has joined #openstack-nova18:32
*** nicolasbock has quit IRC18:33
*** mamandle has quit IRC18:34
*** nicolasbock has joined #openstack-nova18:34
*** pchavva has quit IRC18:37
mriedemdansmith: jaypipes: cdent: edleafe: problems https://review.openstack.org/#/c/488510/18:41
dansmithmriedem: I +2d before we had jenkins runs of the latest version18:42
dansmithand it was passing (tempest) before18:42
mriedemit's not that it's not passing tempest18:42
mriedemit's just spewing ERRORs18:42
jaypipeshey guys, just got back.18:42
jaypipeshmm, yes, the dreaded ./nova/compute/resource_tracker.py:1112:17: N352  LOG.warn is deprecated, please use LOG.warning!18:43
mriedemheh i even pointed that out18:44
mriedemit's in the bottom change btw18:44
mriedemso the entire series needs to be rebaesd18:44
mriedem*rebased18:44
mriedembut i have issues with the 2nd change in the series18:44
jaypipesyeah, I see that18:45
dansmithmriedem: we have the placement upgrade before nova thing required in the prelude of the renos18:45
dansmithwhich should be enough to merge this, IMHO18:45
jaypipes"prelude of the renos" sounds like some classical music composition.18:45
*** morgan is now known as kmalloc18:46
* jaypipes goes to read mriedem's comments18:46
mriedemdansmith: where?18:46
dansmithI probably shouldn't be reviewing this anyway since I was elbow deep in it myself18:46
mriedemi know it's in the release notes18:46
mriedembut it's not in the prelude18:46
mriedemhttps://docs.openstack.org/releasenotes/nova/unreleased.html#id118:46
* cdent queues up “prelude of the renos” on the hi-fi18:46
mriedemA new 1.10 API microversion is added to the Placement REST API. This microversion adds support for the GET /allocation_candidates resource endpoint. This endpoint returns information about possible allocation requests that callers can make which meet a set of resource constraints supplied as query string parameters. Also returned is some inventory and capacity information for the resource providers involved in the allocation18:46
mriedemcandidates.16:46
mriedemwrong one18:47
mriedemhttps://docs.openstack.org/releasenotes/nova/unreleased.html#upgrade-notes18:47
mriedemThe scheduler now requests allocation candidates from the Placement service during scheduling. The allocation candidates information was introduced in the Placement API 1.10 microversion, so you should upgrade the placement service before the Nova scheduler service so that the scheduler can take advantage of the allocation candidate information.18:47
mriedemthe code totally falls back though if 1.10 isn't available18:47
mriedemif we want to make 1.10 required, that's fine, but we need to also update nova-status, which i could do lickety split18:48
dansmithyeah I thought it was in the prelude that I read this morning18:48
cdentI think we should require 1.1018:48
dansmiththere's something that specifically says "be sure to upgrade placement first"18:48
dansmithI really thought that was the prelude18:48
mriedemthat's https://docs.openstack.org/releasenotes/nova/unreleased.html#upgrade-notes18:48
mriedemthe language there is more clear, the behavior in the scheduler code is not18:49
dansmithah it's thjs: https://review.openstack.org/#/c/491900/2/doc/source/user/placement.rst18:49
dansmithpike placement upgrade notes18:49
cdentif we haven’t got 1.10, we haven’t got alloc_candidates and we’ve not forced things forward, let’s do that18:49
mriedemi'll update nova-status quick18:49
*** nicolasbock has quit IRC18:50
cdentIf nobody gets to it tonight, I can and will look into the stuff where tempest is spewing ERROR tomorrow morning18:50
mriedemit's because we have a small flavor, 1 VCPU18:50
dansmithmriedem: did you say you think the errors during single-host migration come from the bottom patch?18:50
mriedemwhen we resize on the same host, that subtracts the flavor from itself, resulting in 0 VCPU18:50
mriedemwhich is an invalid allocation18:50
mriedemdansmith: from the middle patch18:51
mriedemthe pep8 is on the bottom patch18:51
dansmithokay I was going to say..18:51
dansmithah18:51
mriedemso we'd need some additional floor type logic18:51
cdentisn’t the allocation supposed to be doubling to 2 already? where’s that getting lost?18:51
mriedemsuch that an allocation can't dip below 018:51
*** gszasz has joined #openstack-nova18:51
mriedemcdent: good question18:52
mriedemInstance 6fa7b953-c1fe-4520-8564-aeba8e90aece has resources on 1 compute nodes18:52
mriedemOriginal resources from same-host allocation: {u'VCPU': 1, u'MEMORY_MB': 64}18:52
mriedemSubtracting old resources from same-host allocation: {u'VCPU': 0, u'MEMORY_MB': 0, 'DISK_GB': 0}18:52
mriedemthen kablammo18:52
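A minimal sketch of the floor logic being suggested (hypothetical helper): clamp each resource class at zero and drop an all-zero allocation rather than PUT it to placement, since a zero allocation is invalid there:

    def subtract_allocations(current, old):
        new = {rc: max(0, amount - old.get(rc, 0))
               for rc, amount in current.items()}
        # an allocation of all zeroes is invalid, so return only the
        # non-zero classes and let the caller skip the PUT if empty
        return {rc: amt for rc, amt in new.items() if amt > 0}

    print(subtract_allocations({'VCPU': 1, 'MEMORY_MB': 64},
                               {'VCPU': 1, 'MEMORY_MB': 64}))  # {} -> don't send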
dansmithmriedem: we're doing the -1 merge which should be subtracting us from the doubled allocation18:52
mriedemyeah the allocation doesn't appear to be doubled18:53
mriedemis the problem18:53
dansmithso18:53
dansmithin that patch,18:53
dansmithwe're still healing on both sides with reckless abandon18:53
dansmithdoes it show up in thelast patch?18:53
dansmiththat's where we stop doing that18:53
dansmithalthough that patch has a typo in it, so probably isn't running properly anyway18:54
dansmither I mean, had until I updated it a few minutes ago18:54
mriedemumm18:56
mriedemhttp://logs.openstack.org/10/488510/30/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a36c66/logs/screen-n-cpu.txt.gz#_Aug_09_16_27_56_29857218:56
mriedemSending updated allocation [{'resource_provider': {'uuid': '97ce076a-2644-465b-95dc-fc0674152976'}, 'resources': {u'VCPU': 0, u'MEMORY_MB': 0, 'DISK_GB': 0}}] for instance 6fa7b953-c1fe-4520-8564-aeba8e90aece after removing resources for 97ce076a-2644-465b-95dc-fc0674152976. {{(pid=19567) remove_provider_from_instance_allocation /opt/stack/new/nova/nova/scheduler/client/report.py:1091}}18:56
cdentah, so doubling gets cleaned up too soon (until later in the series)?18:56
mriedemoh nvm18:57
*** slaweq__ has joined #openstack-nova18:57
mriedemthat's known, i was thinking that was the bottom change18:57
*** slaweq_ has quit IRC18:57
*** kristian__ has joined #openstack-nova18:58
*** lpetrut has quit IRC18:59
mriedemthe current_allocs get logged but they don't have 2 VCPU18:59
jaypipesdansmith: are you actively making changes to these patches? if not, I can address mriedem's comments.18:59
mriedemso the doubling isn't happening it appears18:59
*** yamamoto has joined #openstack-nova19:00
mriedemthis is the resize in the scheduler http://logs.openstack.org/10/488510/30/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a36c66/logs/screen-n-sch.txt.gz#_Aug_09_16_27_37_36347119:00
mriedemreq-7cbe9eeb-dc55-4986-be30-99bd7a03d2c419:00
*** felipemonteiro_ has quit IRC19:01
openstackgerritMerged openstack/nova master: doc: provide more details on scheduling with placement  https://review.openstack.org/49190019:01
*** baoli has quit IRC19:01
mriedemreq-7cbe9eeb-dc55-4986-be30-99bd7a03d2c419:01
mriedemoops19:01
mriedemAug 09 16:27:37.633395 ubuntu-xenial-internap-mtl01-10347573 nova-scheduler[17280]: DEBUG nova.scheduler.client.report [None req-7cbe9eeb-dc55-4986-be30-99bd7a03d2c4 tempest-MigrationsAdminTest-402050297 tempest-MigrationsAdminTest-402050297] Doubling-up allocation request for move operation. {{(pid=17280) _move_operation_alloc_request /opt/stack/new/nova/nova/scheduler/client/report.py:162}}19:01
mriedemit says it's double stuffing it19:02
openstackgerritMerged openstack/nova master: Add a prelude section for Pike  https://review.openstack.org/49142419:02
*** baoli has joined #openstack-nova19:02
mriedemand it looks like it does19:02
mriedemAug 09 16:27:37.633725 ubuntu-xenial-internap-mtl01-10347573 nova-scheduler[17280]: DEBUG nova.scheduler.client.report [None req-7cbe9eeb-dc55-4986-be30-99bd7a03d2c4 tempest-MigrationsAdminTest-402050297 tempest-MigrationsAdminTest-402050297] New allocation request containing both source and destination hosts in move operation: {'allocations': [{'resource_provider': {'uuid': u'97ce076a-2644-465b-95dc-fc0674152976'}, 'resource19:02
mriedems': {u'VCPU': 2, u'MEMORY_MB': 192}}]} {{(pid=17280) _move_operation_alloc_request /opt/stack/new/nova/nova/scheduler/client/report.py:202}}17:02
*** kristian__ has quit IRC19:02
openstackgerritMerged openstack/nova master: Mark max microversion for Pike in history doc  https://review.openstack.org/49158119:02
mriedembut then i see this19:03
mriedemAug 09 16:27:37.713717 ubuntu-xenial-internap-mtl01-10347573 nova-scheduler[17280]: DEBUG nova.scheduler.filter_scheduler [None req-7cbe9eeb-dc55-4986-be30-99bd7a03d2c4 tempest-MigrationsAdminTest-402050297 tempest-MigrationsAdminTest-402050297] Successfully claimed resources for instance 6fa7b953-c1fe-4520-8564-aeba8e90aece using allocation request {u'allocations': [{u'resource_provider': {u'uuid': u'97ce076a-2644-465b-95dc-19:03
mriedemfc0674152976'}, u'resources': {u'VCPU': 1, u'MEMORY_MB': 128}}]} {{(pid=17280) _claim_resources /opt/stack/new/nova/nova/scheduler/filter_scheduler.py:289}}19:03
mriedemwhich is the single allocation again19:03
mriedemwtf19:03
openstackgerritMerged openstack/nova master: Document service layout for consoles with cells  https://review.openstack.org/49191419:03
mriedemOHHHHHHHHH19:03
mriedemi know why19:03
mriedemmfing pass by reference19:04
openstackgerritMerged openstack/nova master: Cleanup release note about ignoring allow_same_net_traffic  https://review.openstack.org/49185519:04
jaypipesmriedem: that's passing by reference.19:04
mriedemdansmith: jaypipes: cdent: problem is right here https://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L19819:04
mriedem return new_alloc_req19:04
mriedembut we're not touching new_alloc_req19:04
mriedemso we double stuff but don't persist19:04
jaypipesyep19:05
*** yamamoto has quit IRC19:05
cdentoops19:05
*** markvoelker has quit IRC19:06
*** tbachman has quit IRC19:06
cdentdoes any of gibi’s pending functional stuff cover that?19:06
mriedemnot sure19:06
jaypipesfunc or unit should have covered this...19:06
mriedemanyway, i'll get this nova-status thing done19:06
* jaypipes checks19:06
mriedemand then could push a fix for that on top19:06
jaypipesmriedem: k, I'll fix the above.19:07
mriedemand then we rebase the series on top of those19:07
mriedemor that19:07
jaypipesmriedem: you tell me. what do you want?19:07
cdentI really need to go, but please, if stuff’s not stable by the time you guys expire tonight, let me know so I can continue it in the morning19:07
jaypipesmriedem: I can tack on a fix on the bottom of this series?19:07
mriedemjaypipes: yes it would have to be19:07
mriedemnote i've got other issues in that 2nd change though,19:08
mriedemwhich i guess could be follow ups19:08
*** krtaylor has joined #openstack-nova19:08
mriedemi'm just kind of getting tired of making myself TODOs about chasing follow ups19:08
mriedembut realize we're short on time19:08
*** markvoelker has joined #openstack-nova19:08
jaypipesmriedem: I'm happy to address your comments on the second patch. just need to know if dansmith is actively working on these.19:08
dansmithnope19:09
jaypipesdansmith: gotcha. I'll get on this then.19:09
cdentk, in that case I’ll look at the existing main stack in the morning and see where we’re at19:10
cdentg’night19:10
mriedemo/19:10
*** cdent has quit IRC19:11
*** tbachman has joined #openstack-nova19:11
mriedemso i think unit tests are covering _move_operation_alloc_request19:11
mriedemNew allocation request containing both source and destination hosts in move operation: {'allocations': [{'resource_provider': {'uuid': u'97ce076a-2644-465b-95dc-fc0674152976'}, 'resources': {u'VCPU': 2, u'MEMORY_MB': 192}}]}19:11
mriedemthat's from that same method,19:11
mriedemafter merging the allocations19:11
mriedemby reference19:11
mriedemah a red herring19:12
mriedemSuccessfully claimed resources for instance 6fa7b953-c1fe-4520-8564-aeba8e90aece using allocation request {u'allocations': [{u'resource_provider': {u'uuid': u'97ce076a-2644-465b-95dc-fc0674152976'}, u'resources': {u'VCPU': 1, u'MEMORY_MB': 128}}]} {{(pid=17280) _claim_resources /opt/stack/new/nova/nova/scheduler/filter_scheduler.py:289}}19:12
jaypipesnot for same-host-resize, though, right?19:12
mriedem^ is stale19:12
mriedemno we doubled correctly19:13
mriedemhttps://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L20119:13
mriedemNew allocation request containing both source and destination hosts in move operation: {'allocations': [{'resource_provider': {'uuid': u'97ce076a-2644-465b-95dc-fc0674152976'}, 'resources': {u'VCPU': 2, u'MEMORY_MB': 192}}]}19:13
mriedemthat's doubled19:13
*** ys__ has quit IRC19:13
*** slaweq_ has joined #openstack-nova19:13
mriedemwhat we're logging here: https://github.com/openstack/nova/blob/master/nova/scheduler/filter_scheduler.py#L29319:13
mriedemis stale19:13
mriedemthat's why i got confused19:13
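The confusion above in a nutshell (illustrative sketch, not the actual report client code): the doubling helper returns a new dict, so logging the original variable afterwards shows the pre-doubled request:

    import copy

    def double_up(alloc_req):
        new_req = copy.deepcopy(alloc_req)
        for alloc in new_req['allocations']:
            alloc['resources'] = {rc: amt * 2
                                  for rc, amt in alloc['resources'].items()}
        return new_req

    alloc_req = {'allocations': [{'resources': {'VCPU': 1, 'MEMORY_MB': 128}}]}
    doubled = double_up(alloc_req)
    print(alloc_req)  # still VCPU: 1 -- logging this is the misleading part
    print(doubled)    # VCPU: 2 -- this is what actually went to placement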
mriedemhowever, when we get to the RT, and subtract, we lost the doubled allocation somewhere still19:14
mriedemjaypipes: so i don't know what the bug is yet19:14
mriedemwe just have really misleading logging19:14
*** MVenesio has quit IRC19:14
jaypipesmriedem: I don't agree that the above line is "stale".19:15
mriedemof course it is19:15
mriedemalloc_req comes from placement19:15
mriedemwe double it19:15
*** baoli has quit IRC19:15
mriedemthen we log the original alloc_req from placement19:15
mriedemwhich is not double19:15
*** slaweq__ has quit IRC19:15
mriedemfollow the 4 log messages starting here http://logs.openstack.org/10/488510/30/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a36c66/logs/screen-n-sch.txt.gz#_Aug_09_16_27_37_56374919:15
*** MVenesio has joined #openstack-nova19:16
*** kbaegis1 has quit IRC19:16
*** kbaegis has joined #openstack-nova19:16
*** kristia__ has joined #openstack-nova19:16
jaypipesmriedem: yeah, right.19:17
mriedemso, we're still losing the doubled allocation somewhere19:17
mriedemmy guess is the RT is overwriting it?19:17
jaypipesmriedem: I will change placement client claim_resources() to return the possibly-updated-for-move-operation alloc_req.19:17
mriedemjaypipes: we don't have to change all of that, just drop alloc_req from this log message https://github.com/openstack/nova/blob/master/nova/scheduler/filter_scheduler.py#L29319:18
mriedemwe already logged the original and new thing19:18
*** MVenesio has quit IRC19:18
jaypipesk19:18
mriedemwell, we logged the new thing19:18
*** adisky__ has quit IRC19:20
openstackgerritMatt Riedemann proposed openstack/nova master: Require Placement 1.0 in nova-status upgrade check  https://review.openstack.org/49223419:23
*** kristia__ has quit IRC19:25
mriedemdansmith: ^ thar she blar19:25
*** kristia__ has joined #openstack-nova19:25
sdaguemriedem: commit message?19:26
sdague1.10 right?19:26
dansmithmriedem: I got distracted by a rant opportunity.. did you figure out the doubling thing?19:26
mriedemcrap19:26
jaypipesdansmith: still working on it.19:26
openstackgerritMatt Riedemann proposed openstack/nova master: Require Placement 1.10 in nova-status upgrade check  https://review.openstack.org/49223419:27
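The shape of such a check (a sketch, assuming placement's root document reports the service's min/max microversions, which is how its version discovery works; compare as tuples so "1.10" doesn't sort below "1.9"):

    def placement_supports(version_doc, needed=(1, 10)):
        max_version = version_doc['versions'][0]['max_version']  # e.g. "1.10"
        major, minor = (int(p) for p in max_version.split('.'))
        return (major, minor) >= needed

    doc = {'versions': [{'min_version': '1.0', 'max_version': '1.10'}]}
    print(placement_supports(doc))  # True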
mriedemdansmith: are you ranting in the ops midcycle upgrades etherpad?19:27
mriedemperhaps19:27
mriedemoh -dev,19:27
mriedemi see19:27
dansmithyes19:27
dansmithplus screaming obscenities at the monitor that only Taylor and I can hear19:27
dansmithI can tell it's bad when she just starts slipping me candy to calm me down19:28
jaypipeslol19:29
*** sbezverk has quit IRC19:29
*** kristia__ has quit IRC19:30
dansmithjaypipes: "working on it" meaning "working on figuring it out" or "working on putting the fix into code" ?19:30
mriedemwe don't have it figured out19:30
mriedemwhere the doubled allocation went19:30
jaypipesdansmith: what mriedem said.19:30
dansmithand it's not what I said?19:30
dansmiththat healing is still blind in this patch19:30
mriedemit might be that19:30
mriedemthat's what i suspect anyway19:30
mriedemone node is overwriting the doubled allocation19:31
mriedemi should be able to tell that from the logs19:31
dansmithwell we're 0.25 days away from getting a run of the top one anyway19:31
openstackgerritSean Dague proposed openstack/nova master: doc: Address review comments for contributor index  https://review.openstack.org/49151719:32
sdaguedansmith: what's the non gate exposure of this?19:33
dansmithsdague: what?19:33
*** ociuhandu has joined #openstack-nova19:33
sdaguelike which jobs are showing the issue19:33
dansmithnone of them are failing19:33
dansmithif that is what you mean19:33
sdaguedansmith: one of them is showing a funny allocation though?19:34
dansmithI imagine it's the single-node tempest one that's hitting the issue though, although theoretically the multinode ones should too19:34
dansmithsdague: logging errors as they fail to do their accounting, but nothing is fatal from tempest's point of view it sounds like19:34
sdaguejust thinking if it's more effective to do local run / debug19:34
sdaguegiven the gate turn around time isn't going to get better any time soon19:34
dansmithprobably, but I have so little time left, it'd take me all that to just stack once and start looking I think19:35
mriedemgd we love to lazy-load pci_requests and pci_devices19:35
dansmithI assume jaypipes has been running this locally19:35
jaypipesdansmith: not in the last few weeks, no. been relying on functional tests and logging.19:36
jaypipesmriedem: ok if I remove that log message in scheduler _claim_resources() (with the stale alloc_req) in the bottom patch of this series? the patch called "refactor heal..."19:39
jaypipesnm, I'll just throw it in another patch19:39
jaypipespatches are cheap.19:39
mriedemhttp://logs.openstack.org/10/488510/30/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/5a36c66/logs/screen-n-cpu.txt.gz#_Aug_09_16_27_46_10032019:39
mriedemyeah so on resize to same host, it's the _instance_to_allocations_dict that overwrites the doubled allocation19:40
mriedemdansmith: which is what you were saying19:40
mriedemSending allocation for instance {'VCPU': 1, 'MEMORY_MB': 64} {{(pid=19567) _allocate_for_instance /opt/stack/new/nova/nova/scheduler/client/report.py:924}}19:40
*** kristia__ has joined #openstack-nova19:40
dansmithyeah19:40
jaypipesmriedem: right, and that's the thing that doesn't get "corrected" until the later patch19:41
mriedemthat's at Aug 09 16:27:46.10032019:41
mriedemthe doubled up allocation in the scheduler was at Aug 09 16:27:37.63372519:41
mriedemok so maybe not a turd furgeson after all19:41
mriedemalthough, an ocata compute will trample that and then the last patch will double the allocations again?19:42
dansmithmriedem: while we're healing,19:42
dansmithwe'll fix it eventually on the destination node19:42
dansmithonce we're all pike, we don't trample and thus we're good19:43
*** rmcall has joined #openstack-nova19:43
mriedembut we'll continue to log ERRORs?19:44
dansmithI dunno I didn't look at the actual site where we log that, but probably so19:44
mriedemok, and we probably don't have a job that would test it either,19:45
mriedemwould require a grenade multinode job that runs a migration19:45
mriedemwe do have a multinode grenade job that runs live migration...19:45
mriedemback and forth too so that actually might show it19:45
dansmithmriedem: you can borrow my pistol when I'm done with it19:45
mriedemi'll be classy and do Seppuku19:46
dansmithheh, less mess, how midwestern of you19:46
mriedemdisemboweling is still a mess19:47
mriedemi'll do it in the soaker tub19:47
mriedemthis error log case wouldn't be the end of the world, we could release with that and backport a fix later,19:48
mriedemadding some logic to determine we're about to push a 0 allocation and just not do that19:48
*** Sukhdev_ has quit IRC19:48
dansmithwell, normally that would be a legit error case,19:49
dansmithand worth the log19:49
*** gbarros has quit IRC19:49
dansmithso I dunno.. maybe if not has_ocata and ==0, then log error or something19:49
mriedemmy point is, it doesn't cause things to fail, so not a regression, so we could cleanup the error log later and backport if necessary, yeah?19:51
dansmithyes19:51
*** kristia__ has quit IRC19:51
openstackgerritMerged openstack/nova-specs master: doc: Remove crud from Sphinx conf.py  https://review.openstack.org/45612919:51
*** kristia__ has joined #openstack-nova19:52
*** baoli has joined #openstack-nova19:52
mriedemjaypipes: so you're putting a patch at the bottom of the series to remove that bogus log in the filter scheduler, and then fixing the pep8 in https://review.openstack.org/#/c/491850/ on top of that?19:52
mriedemand then i think we can limp https://review.openstack.org/#/c/488510/ in and fix my other problems in a follow up19:52
mriedemand then once we get a good run on https://review.openstack.org/#/c/491012/ we call it a ball for rc119:53
mriedemoh except this of course https://review.openstack.org/#/c/491491/19:53
dansmithjay needs to look at that one I think19:53
dansmithit makes sense to me, but I figure he should look19:53
mriedemgibi's?19:53
mriedemyes agree19:53
dansmithyeah19:53
mriedemhe wrote it19:53
jaypipesI'm still addressing mriedem's review comments on the confirm/reverrt resize patch.19:54
mriedemi mean, the code that's changing19:54
jaypipesshould be done shortly.19:54
dansmithmriedem: right19:54
mriedemalthough it does say, "this should really raise NoValidHost but tests...."19:54
*** baoli has quit IRC19:55
*** baoli has joined #openstack-nova19:55
*** kristia__ has quit IRC19:56
*** priteau has quit IRC19:58
*** moshele has joined #openstack-nova19:58
mriedemwhat i don't understand in his test https://review.openstack.org/#/c/490814/ is how the resize doesn't fail on the compute during the claim19:58
mriedemthe claim should fail19:58
dansmithallocation ratio?20:00
dansmithhe's not setting it so I would expect it's defaulting to 16 or whatever20:00
dansmithfor vcpu20:00
mriedemdefault=0.0,20:00
dansmithwhich means 1620:00
mriedemif set to 0.0, the value20:00
mriedemset on the scheduler node(s) or compute node(s) will be used20:00
mriedemand defaulted to 16.0.20:00
mriedemyeah20:00
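In other words (a sketch of the convention just quoted from the config help, not the exact nova code), 0.0 is a sentinel meaning "inherit from the next level down", and with nothing set anywhere VCPU falls back to 16.0:

    def effective_cpu_ratio(conf_ratio, compute_node_ratio=0.0, default=16.0):
        for ratio in (conf_ratio, compute_node_ratio):
            if ratio != 0.0:
                return ratio
        return default

    print(effective_cpu_ratio(0.0))  # 16.0 -- default overcommit, so the resize fits
    print(effective_cpu_ratio(1.0))  # 1.0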
* dansmith pats us all on the back for being awesome20:01
mriedemhaha20:01
mriedemi'll pull his test down, set that to 1.0 and make sure it fails the resize, and then if that's the case i'll be cool with the test20:01
*** yamamoto has joined #openstack-nova20:02
dansmithso, um20:02
dansmithwe really fall back to legacy scheduling in that case?20:02
dansmithI don't think I knew that was the plan20:02
openstackgerritJay Pipes proposed openstack/nova master: placement: refactor healing of allocations in RT  https://review.openstack.org/49185020:03
openstackgerritJay Pipes proposed openstack/nova master: Remove provider allocs in confirm/revert resize  https://review.openstack.org/48851020:03
openstackgerritJay Pipes proposed openstack/nova master: Resource tracker compatibility with Ocata and Pike  https://review.openstack.org/49101220:03
openstackgerritJay Pipes proposed openstack/nova master: remove log message with potential stale info  https://review.openstack.org/49224220:03
jaypipesmriedem, dansmith: ^.20:03
mriedemwell we wouldn't with gibi's patch20:03
dansmithjaypipes: I'm not reviewing any more things today unless mriedem +2s them first20:04
dansmithmy stats and ego just can't take it20:04
*** liverpooler has quit IRC20:04
*** sbezverk has joined #openstack-nova20:04
mriedemif it makes you feel better, maya told laura that she didn't want to go to camp this morning b/c she thought i'd forget her again20:04
*** abalutoiu has joined #openstack-nova20:06
*** gbarros has joined #openstack-nova20:06
dansmithhah that's pretty awesome20:06
jaypipeslol Dad of the Yea.20:06
*** yamamoto has quit IRC20:06
jaypipesYear20:06
*** gbarros has quit IRC20:07
dansmithwhen camp called, I'd have said "oh uh yeah I "forgot" and stuff"20:07
mriedemi can't remember if i swore when the front desk lady called me20:07
mriedem"oh shit!"20:07
mriedem"lemme put the bong down and i'll be right over"20:08
mriedemhmm, setting cpu_allocation_ratio=1.0 in gibi's test doesn't make it fail20:08
dimsmriedem : ouch!20:08
dansmithmriedem: nice20:09
dansmithmriedem: set it before anything gets started?20:09
mriedemyeah, in setUp20:09
* mnaser proposes periodic job to openstack-infra/project-config to remind mriedem to pick up kids from camp20:09
mriedembefore setting the virt driver20:09
dansmithmriedem: before compute or scheduler starts?20:09
*** slaweq_ has quit IRC20:10
mriedemyes before everything starts20:10
dansmithbut wait,20:10
dansmithbefore _everything_ starts? like all thethings?20:10
mriedemhttp://paste.openstack.org/show/617983/20:10
mriedemso you're just <1 from 4 day weekend and no longer caring, i see20:11
dansmithno mriedem I mean BEFORE EVERYTHING20:11
dansmithhaha20:11
mriedem*1 hour20:11
dansmithdon't worry, this four day weekend will be a special kind of hell where I'll be begging to work on unit tests by  mid-day tomorrow20:11
*** slaweq_ has joined #openstack-nova20:14
openstackgerritEric Fried proposed openstack/nova master: WIP: Use ksa adapter for placement conf & requests  https://review.openstack.org/49224720:15
mriedemdansmith: i'm +2 on this https://review.openstack.org/#/c/492242/20:16
*** suresh12_ has joined #openstack-nova20:18
mriedemok bottom 2 are approved20:19
*** suresh12 has quit IRC20:21
*** links has joined #openstack-nova20:25
mriedemjaypipes: easy one https://review.openstack.org/#/c/488510/20:26
*** moshele has quit IRC20:27
*** slaweq__ has joined #openstack-nova20:28
dansmithmriedem: I bet it's because of the fake virt driver20:28
dansmith    2017-08-09 13:27:58,682 INFO [nova.compute.claims] Total vcpu: 10 VCPU, used: 0.00 VCPU20:28
dansmith    2017-08-09 13:27:58,682 INFO [nova.compute.claims] vcpu limit not specified, defaulting to unlimited20:28
*** sbezverk has quit IRC20:29
*** smatzek has quit IRC20:30
*** slaweq_ has quit IRC20:31
edmondswefried see the comment I just added to https://review.openstack.org/#/c/485121/7 and let me know if you have any ideas there20:31
dansmithactually we're getting an empty dict of limits in filter_properties20:31
efriededmondsw Ack.20:32
openstackgerritJay Pipes proposed openstack/nova master: Remove provider allocs in confirm/revert resize  https://review.openstack.org/48851020:32
openstackgerritJay Pipes proposed openstack/nova master: Resource tracker compatibility with Ocata and Pike  https://review.openstack.org/49101220:32
jaypipesmriedem: fixed.20:32
mriedemdansmith: yeah, because the only limits we'd get are from NUMATopologyFilter20:32
dansmithbut our limit for vcpus is in there20:33
dansmithotherwise we won't check anything, AFAICT20:33
mriedemoh i see,20:34
dansmithmriedem: we're hitting this code: https://github.com/openstack/nova/blob/master/nova/compute/claims.py#L240-L24420:34
mriedemCoreFilter puts the vcpu limit in there20:34
dansmithhah20:34
mriedemright20:34
dansmithlemme try with ram or something else20:35
mriedemsince the CoreFilter isn't enabled, as noted in the commit message, we don't care about filtering on cores...20:35
*** markvoelker has quit IRC20:35
dansmithmust be same with ramfilter?20:36
dansmithbecause obviously I don't hit that either since no limit20:36
mriedemthe flavor he's using fits the ram on the fake driver20:36
dansmithI mean making it not20:36
dansmithyeah novalidhost when I do that and enable ramfilter20:36
*** shan has quit IRC20:37
*** suresh12_ has quit IRC20:37
mriedemok +2 on his test then https://review.openstack.org/#/c/490814/20:38
mriedemi didn't realize the enabled filters impacted the claim code in the compute20:38
*** rajathagasthya has joined #openstack-nova20:39
*** suresh12 has joined #openstack-nova20:39
dansmithso I didn't realize the filters controlled the compute node claiming behavior20:39
dansmithnot sure that really makes sense20:39
mriedemdid you just say the same thing as me?20:40
dansmithhowever, it's nice to know that removing those base filters will stop the compute node from defeating placement's decisions20:40
dansmithtwo things to set limits with different logic will definitely lead to confusion-based bugs20:40
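A sketch of the linkage that surprised everyone (illustrative, loosely following the claims-test pattern referenced above): the scheduler filters populate the limits dict, and the compute-side claim simply skips any resource that has no limit entry:

    def claim_test(resource, requested, used, limits):
        limit = limits.get(resource)
        if limit is None:
            # no CoreFilter/RamFilter ran => no limit => no check at all
            return True
        return used + requested <= limit

    print(claim_test('vcpu', 1, 160, {}))             # True: nothing enforced
    print(claim_test('vcpu', 1, 160, {'vcpu': 160}))  # False: limit enforced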
dansmithnow I just gotta find a typo nit in here to get my stat back and we're good to go20:41
*** markvoelker has joined #openstack-nova20:41
dansmithfound one20:42
mriedemwhich change are you talking about?20:42
mriedemoh gibi's patch20:42
dansmiththe test20:42
* dansmith wrings his hands before laying down the -120:42
mriedemremoved my +2 just in time20:43
openstackgerritJay Pipes proposed openstack/nova master: replace chance with filter scheduler in func tests  https://review.openstack.org/49152920:43
dansmithI was just bluffing20:43
dansmithI did find one but it's not worth delaying over20:43
mriedemd'oh20:43
mriedemheh replaying my +220:44
*** marst_ has joined #openstack-nova20:44
mriedemare you happy with cfriesen's wording here then? https://review.openstack.org/#/c/491854/20:44
*** dtp has joined #openstack-nova20:45
dansmithmriedem: I wouldn't say I'm happy20:45
mriedemjaypipes: i'm not sure why i'm the author on this https://review.openstack.org/#/c/488510/320:46
mriedemhttps://review.openstack.org/#/c/488510/20:46
*** jmlowe has quit IRC20:46
dansmithmriedem: I've noticed that on a couple things you've updated recently20:46
dansmithresetting the author for some reason I mean20:46
mriedemi didn't update this20:46
jaypipesyeah, me too...20:46
jaypipeshaven't cared much, though. :)20:46
dansmithmriedem: you did, PS1520:47
dansmithand 1620:47
mriedemyeah it was PS1520:47
mriedemjaypipes: you want to fix that quick?20:47
jaypipesmriedem: I certainly didn't change the author.20:47
jaypipesmriedem: how do I do that?20:47
*** marst has quit IRC20:47
mriedemgit commit --amend --author "Jay Pipes <jaypipes@gmail.com>"20:47
dansmithare we re-pushing this anyway?20:47
*** baoli has quit IRC20:48
dansmithbecause ... six hour check run20:48
jaypipesgimme a sec, will do.20:48
mriedemdansmith: i brought it up in the ML http://lists.openstack.org/pipermail/openstack-dev/2017-July/119801.html20:48
mriedemi don't know what causes it automatically, something to do with git review -d + git rebase -i + git commit20:48
openstackgerritJay Pipes proposed openstack/nova master: Remove provider allocs in confirm/revert resize  https://review.openstack.org/48851020:48
openstackgerritJay Pipes proposed openstack/nova master: Resource tracker compatibility with Ocata and Pike  https://review.openstack.org/49101220:48
jaypipesok, mriedem done20:48
dansmithmriedem: I do that all the time and it doesn't reset for me20:49
*** randomhack has quit IRC20:49
dansmithI've had my way with all of jay's patches that way20:49
*** cleong has quit IRC20:49
*** baoli has joined #openstack-nova20:49
mriedemgit version 2.7.4 ?20:49
dansmithindeed actually20:50
mriedemgit-review Version: 1.25.020:50
dansmithsame20:50
dansmithI'm not running Windows98 or whatever you run though20:50
mriedemthis is xenial 16.0420:51
*** markvoelker has quit IRC20:52
* dansmith doesn't believe mriedem 20:52
mriedemhuh20:52
mriedemuser@ubuntu:~/git/nova$ cat /etc/debian_version20:52
mriedemstretch/sid20:52
jaypipesmriedem: quick summary of the cause of the "claim mystery" from that resize too big patch please?20:52
dansmithexpected :)20:52
dansmithjaypipes: he put it in the comments20:53
dansmithjaypipes: corefilter notenabled == no limit == no vcpu check on the compute node20:53
jaypipesah20:53
*** markmc` has joined #openstack-nova20:53
dansmithI'm embarrassed to say I didn't know we had that linkage20:53
dansmithand also, am horrified20:53
mriedemdansmith: like how i gave you credit in those comments?20:54
mriedemdoes your ego feel better?20:54
*** burt has quit IRC20:54
dansmithmriedem: yes, that's why I didn't NFO your stats over a comment grammar issue :)20:54
*** markmc` has quit IRC20:54
mriedemnaval flight officer?20:54
*** markmc has quit IRC20:55
*** baoli has quit IRC20:55
dansmithnuke from orbit20:55
dansmithmaybe not a fun analogy these days I guess20:55
dansmithas PDX has been one of the west coast targets mentioned20:55
mriedemheh, fire and fury baby20:55
mriedemurban dictionary has a fun one20:56
mriedem"Near Fatal Orgasm"20:56
*** markmc has joined #openstack-nova20:56
* dansmith steps a few paces to the right20:56
*** links has quit IRC20:56
*** jpena is now known as jpena|off20:56
*** markvoelker has joined #openstack-nova20:59
mriedemjaypipes: ok +2 on gibi's fix https://review.openstack.org/#/c/491491/ but letting you take the final gander21:00
jaypipesmriedem: k, reviewing.21:00
*** dr_gogeta86 has quit IRC21:00
*** slaweq_ has joined #openstack-nova21:00
*** lucasxu has quit IRC21:01
*** krtaylor has quit IRC21:01
jaypipesmriedem: done21:02
*** markmc has quit IRC21:02
dansmithI hope all these gaping holes that gibi has found in the last two weeks are coming from him poking at it for reals21:03
*** markmc has joined #openstack-nova21:03
*** vstinner has joined #openstack-nova21:03
*** yamamoto_ has joined #openstack-nova21:03
dansmithand if so, kudos21:03
mriedemcfriesen: something you should probably fix in here https://review.openstack.org/#/c/491854/21:03
mriedemyes gibi gets the last minute testing champion award21:04
cfriesendansmith: what changes would you like in https://review.openstack.org/#/c/491854/ ?21:04
*** slaweq__ has quit IRC21:04
mriedemcfriesen: see the comment i just left21:04
*** suresh12 has quit IRC21:04
*** krtaylor has joined #openstack-nova21:05
cfriesenmriedem: I can fix that easy.  Sounded like Dan wanted something else though as well.21:06
*** suresh12 has joined #openstack-nova21:06
mriedemno he was just being dan21:06
cfriesenoh, in that case21:07
*** vstinner has left #openstack-nova21:07
mriedemdansmith: per your question about testing in https://review.openstack.org/#/c/487954/ - there is a change that depends on it here: https://review.openstack.org/#/c/476968/21:07
*** thingee_ has joined #openstack-nova21:08
*** gszasz has quit IRC21:08
efriededmondsw Do you know how we're importing the keystone_authtoken conf group??21:08
*** yamamoto_ has quit IRC21:08
openstackgerritChris Friesen proposed openstack/nova master: Remove ram/disk sched filters from default list  https://review.openstack.org/49185421:09
*** catintheroof has quit IRC21:09
openstackgerritMatt Riedemann proposed openstack/nova master: Remove ram/disk sched filters from default list  https://review.openstack.org/49185421:10
*** markmc has quit IRC21:10
cfriesenmriedem: oh crud. :)21:11
mriedemfixed21:11
mriedemapproved21:11
edmondswefried I think it's this: https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/_opts.py#L20121:13
efriededmondsw Where is that accessed by nova, though?21:13
edmondswlooks like for some reason ks_loading is used to register auth options but not session options21:13
edmondswefried the authtoken middleware is in the api pipeline21:14
*** markmc has joined #openstack-nova21:14
efriedwhat does that mean?21:14
*** gjayavelu has quit IRC21:15
edmondswefried https://github.com/openstack/nova/blob/master/etc/nova/api-paste.ini#L8721:15
*** rajathagasthya has quit IRC21:16
edmondswnote "authtoken" here: https://github.com/openstack/nova/blob/master/etc/nova/api-paste.ini#L3221:16
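For reference, an abbreviated sketch of the api-paste.ini wiring edmondsw points at above; the pipeline contents are elided and vary by release:

    # authtoken is a paste filter provided by keystonemiddleware; it sits
    # in the keystone-authenticated variant of the compute API pipeline.
    [composite:openstack_compute_api_v21]
    use = call:nova.api.auth:pipeline_factory_v21
    keystone = cors ... authtoken keystonecontext osapi_compute_app_v21

    [filter:authtoken]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory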
*** yamamoto_ has joined #openstack-nova21:16
efriededmondsw Brilliant.  What I'm trying to find is some place in nova that we can override and/or deprecate the offending opt.21:16
edmondswefried I don't think there is one... in nova. It would be in keystonemiddleware21:17
efriedBut since we don't actually have a nova/conf presence for keystone_authtoken, it would have to go in, like, workarounds or something.21:17
*** rtjure has quit IRC21:17
efriededmondsw That's probably uglier than just hacking it up the way it originally was.21:18
efriedThough if you do that, shove in a NOTE explaining why load_session_from_conf_options doesn't work.21:18
edmondswyeah, that was my thought21:18
efriededmondsw Unless there's a different conf group you could pull that cafile from...21:19
edmondswand I can bring it up with the keystone guys so they're aware... maybe they'll want to change something, maybe not21:19
edmondswwe were really trying to use keystone_authtoken for more than what it's designed for, so...21:19
efriededmondsw Yeah, perhaps a bug.21:19
*** jmlowe has joined #openstack-nova21:20
efriedOh, well, in that case, is there a more appropriate place you could/should register real session opts?21:20
edmondswdon't know of any other conf options that would already have this. We could create a new [keystone] section in nova.conf like we have for glance, neutron, etc.21:20
edmondswthat would be the *right* way to do it21:20
edmondswseems very duplicative, though21:20
edmondswmore work for operators21:21
*** yamamoto_ has quit IRC21:21
efriededmondsw Call it [identity], make mordred happy.21:21
mordredI didn't do it21:21
edmondswlol21:21
edmondswI almost said "identity", and then remembered all the current sections are using codenames21:22
edmondswodd that nobody's changed that already...21:22
efriededmondsw Yeah, I think mordred wants to push for that eventually.21:22
efriededmondsw If we talk about it in -keystone and decide it's really a bug in ksm, then we could justify leaving it in [keystone_authtoken] and making a delta in nova.conf.workarounds to... work around it.21:23
edmondswefried sure, let's do that tomorrow, I have to run21:23
efriedrgr21:24
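For reference, a minimal sketch of the separate-section approach discussed above, assuming keystoneauth1's loading helpers; the [keystone] group name is the hypothetical one from the conversation, not an existing nova option group:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    KEYSTONE_GROUP = 'keystone'  # hypothetical group name

    # Register both the auth plugin options and the session options
    # (cafile, insecure, timeout, ...) under the same group, so that
    # load_session_from_conf_options() can actually find them -- the
    # gap edmondsw describes is registering only the auth half.
    ks_loading.register_auth_conf_options(CONF, KEYSTONE_GROUP)
    ks_loading.register_session_conf_options(CONF, KEYSTONE_GROUP)

    def get_session():
        auth = ks_loading.load_auth_from_conf_options(CONF, KEYSTONE_GROUP)
        return ks_loading.load_session_from_conf_options(
            CONF, KEYSTONE_GROUP, auth=auth)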
*** crushil has quit IRC21:24
*** rajathagasthya has joined #openstack-nova21:24
*** kbaegis1 has joined #openstack-nova21:26
*** kbaegis has quit IRC21:27
*** edmondsw has quit IRC21:28
*** edmondsw has joined #openstack-nova21:29
*** derekh has quit IRC21:29
*** thorst has quit IRC21:31
*** edmondsw has quit IRC21:34
*** slaweq_ has quit IRC21:36
*** tyrefors has quit IRC21:42
*** priteau has joined #openstack-nova21:47
*** priteau has quit IRC21:47
*** esberglu has quit IRC21:48
*** esberglu has joined #openstack-nova21:48
*** Sukhdev_ has joined #openstack-nova21:48
*** thorst has joined #openstack-nova21:50
*** kbaegis1 has quit IRC21:50
*** kbaegis has joined #openstack-nova21:52
*** gouthamr has quit IRC21:52
*** esberglu has quit IRC21:53
mriedemdansmith: jaypipes: ok i went through https://review.openstack.org/#/c/491012/ and posted questions, several about evacuate from an ocata compute - i know dan is MIA now and i have to head out too, but will be back around later and tomorrow to discuss anything21:54
*** itlinux has quit IRC21:54
*** thorst has quit IRC21:55
*** yamamoto_ has joined #openstack-nova21:57
*** xyang1 has quit IRC22:00
*** vipul has quit IRC22:04
*** dklyle has joined #openstack-nova22:06
*** david-lyle has quit IRC22:06
*** vipul has joined #openstack-nova22:10
*** kylek3h has quit IRC22:10
*** esberglu has joined #openstack-nova22:12
*** abalutoiu has quit IRC22:12
*** rcernin has quit IRC22:20
*** Apoorva_ has joined #openstack-nova22:20
*** sbezverk has joined #openstack-nova22:20
*** Sukhdev has quit IRC22:21
*** Apoorva has quit IRC22:23
*** Sukhdev has joined #openstack-nova22:35
*** awaugama has quit IRC22:36
*** kristian__ has joined #openstack-nova22:40
*** kristian__ has quit IRC22:44
*** dave-mccowan has quit IRC22:44
*** kornicameister has quit IRC22:50
*** gcb has joined #openstack-nova22:51
*** krtaylor has quit IRC22:52
*** sdague has quit IRC22:53
*** kornicameister has joined #openstack-nova22:55
*** itlinux has joined #openstack-nova23:00
*** lyan has quit IRC23:04
*** armax has quit IRC23:05
*** abalutoiu has joined #openstack-nova23:19
*** mtanino has quit IRC23:19
*** gcb has quit IRC23:21
*** itlinux has quit IRC23:21
*** gcb has joined #openstack-nova23:21
*** claudiub has quit IRC23:32
*** suresh12 has quit IRC23:32
*** chyka has quit IRC23:33
*** hongbin has quit IRC23:37
*** krtaylor has joined #openstack-nova23:40
*** jaypipes has quit IRC23:44
openstackgerritMichael Still proposed openstack/nova master: Avoid chowning console logs in libvirt  https://review.openstack.org/47222923:45
openstackgerritMichael Still proposed openstack/nova master: First attempt at adding a privsep user to nova itself.  https://review.openstack.org/45916623:45
openstackgerritMichael Still proposed openstack/nova master: Move execs of touch to privsep.  https://review.openstack.org/48919023:45
openstackgerritMichael Still proposed openstack/nova master: Move libvirts dmcrypt support to privsep.  https://review.openstack.org/49073723:45
openstackgerritMichael Still proposed openstack/nova master: Move execs of tee to privsep.  https://review.openstack.org/48943823:45
openstackgerritMichael Still proposed openstack/nova master: Move libvirt usages of chown to privsep.  https://review.openstack.org/47197223:45
openstackgerritMichael Still proposed openstack/nova master: Read from console ptys using privsep.  https://review.openstack.org/48948623:45
openstackgerritMichael Still proposed openstack/nova master: Refactor libvirt.utils.execute() away.  https://review.openstack.org/48981623:45
openstackgerritMichael Still proposed openstack/nova master: Move ploop commands to privsep.  https://review.openstack.org/49232523:45
openstackgerritMichael Still proposed openstack/nova master: Don't shell out to mkdir, use ensure_tree()  https://review.openstack.org/49232623:45
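For reference, a minimal sketch of the oslo.privsep pattern this series moves nova toward; the context name, config section, and capability list below are illustrative assumptions:

    import os

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # A privileged context: entrypoints run in a separate daemon that
    # holds only the Linux capabilities listed here, not full root.
    sys_admin_pctxt = priv_context.PrivContext(
        'nova',
        cfg_section='nova_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE],
    )

    @sys_admin_pctxt.entrypoint
    def chown(path, uid=-1, gid=-1):
        # Plain Python executed in the privileged daemon, replacing a
        # shelled-out "chown" run through rootwrap.
        os.chown(path, uid, gid)

    # The last patch above similarly swaps "mkdir -p" execs for
    # oslo.utils, which needs no privilege separation at all:
    # from oslo_utils import fileutils
    # fileutils.ensure_tree('/some/instance/dir')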
*** marst_ has quit IRC23:47
*** markvoelker has quit IRC23:49
*** gyee has quit IRC23:50
*** dtp has quit IRC23:52
*** diga has joined #openstack-nova23:55
*** gcb has quit IRC23:59
