Friday, 2017-01-27

*** rcernin has joined #openstack-nova00:00
*** mingyu has quit IRC00:01
*** Swami has quit IRC00:03
*** mriedem has quit IRC00:03
*** mriedem has joined #openstack-nova00:04
*** harlowja has quit IRC00:04
openstackgerritEric Brown proposed openstack/nova: Add RPC version aliases for Ocata  https://review.openstack.org/42595800:07
mriedemlooks like that grenade change is happy now00:07
mriedemat least for single-node00:07
*** liangy has quit IRC00:09
*** dtp has quit IRC00:10
*** Swami has joined #openstack-nova00:10
*** dtp has joined #openstack-nova00:10
*** hongbin has quit IRC00:16
mriedemdansmith: so the multinode grenade job passed with your change https://review.openstack.org/#/c/424730/10/projects/60_nova/resources.sh but i don't see this in the logs00:17
mriedemecho "Expecting $expected computes"00:17
mriedemoh hrm http://logs.openstack.org/30/424730/10/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/9a57ab4/logs/grenade.sh.txt.gz#_2017-01-26_23_58_11_66100:19
mriedemExpecting 0 computes00:19
mriedemFound 2 computes00:19
*** dimtruck is now known as zz_dimtruck00:21
*** tblakes has joined #openstack-nova00:22
openstackgerritJaivish Kothari(janonymous) proposed openstack/os-vif: Fix broken Link  https://review.openstack.org/42599400:23
mriedemhttp://logs.openstack.org/30/424730/10/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/9a57ab4/logs/grenade_save/grenade_db.ini00:24
mriedemnova_compute_count = 000:24
*** mlavalle has quit IRC00:27
*** nic has quit IRC00:27
openstackgerritJaivish Kothari(janonymous) proposed openstack/os-vif: Removing Deprecated hacking Check  https://review.openstack.org/42599600:36
*** chyka has quit IRC00:38
*** chyka has joined #openstack-nova00:41
*** jose-phillips has quit IRC00:43
*** chyka has quit IRC00:43
*** chyka has joined #openstack-nova00:43
*** thorst_ has joined #openstack-nova00:45
dansmithmriedem: great00:46
*** hfu has joined #openstack-nova00:47
*** claudiub has quit IRC00:47
mriedemit's weird b/c it finds the 2 in the verify step00:48
mriedembut not before it00:48
mriedembut we're creating resources so we should have 2 computes00:48
*** chyka has quit IRC00:48
*** hfu has quit IRC00:49
*** rcernin has quit IRC00:51
alex_xumriedem: I'm already on vacation, but I thought I'd better give you an update. The servers filter BP already merged the filter and sort parts. The policy rule change hasn't merged yet; johnthetubaguy pointed out that changing a rule's enforcement from hard to soft should bump the microversion.00:52
alex_xumriedem: and gmann works on it https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/add-whitelist-for-server-list-filter-sort-parameters00:53
*** rcernin has joined #openstack-nova00:54
alex_xumriedem: that probably needs two patches: one to rename the policy rule, then one changing hard to soft via a microversion. I'm not sure whether it can get an FFE. That should be your call00:54
alex_xumriedem: or if you want more detail, I can write an email00:55
*** thorst_ has quit IRC00:55
*** rcernin has quit IRC00:55
*** rcernin has joined #openstack-nova00:55
gmannmriedem:  as alex_xu said, I will try to push the microversion patch (over the weekend at the latest) and then you can take a call on that00:55
alex_xugmann: thanks00:56
alex_xugmann: mriedem another thing worth tracking is this bug https://launchpad.net/bugs/165857100:57
openstackLaunchpad bug 1658571 in OpenStack Compute (nova) "Microversion 2.37 break 2.32 usage" [High,In progress] - Assigned to Artom Lifshitz (notartom)00:57
gmannalex_xu: mriedem  I updated the spec with the latest discussion, if you guys can have a look - https://review.openstack.org/#/c/425533/00:57
alex_xugmann: let me check now00:59
gmannalex_xu:  about your comment on https://review.openstack.org/#/c/415330/25/nova/policy.py00:59
alex_xugmann: yea00:59
gmannalex_xu:  that if condition is hit when the policy file is updated, right? for example, if an old deployer has a policy file (with the old rule overridden) and does not change the file, does it still hit that if condition?01:00
*** zz_dimtruck is now known as dimtruck01:00
mriedemalex_xu: gmann: a ML thread on the sort/filters policy thing is probably best01:01
mriedemi'm leaving soon for dinner01:01
alex_xugmann: it only happens when the policy file changed. oslo.policy will check the timestamp of the policy file, then it will reload the file. Then you will pass that if condition01:01
alex_xumriedem: got it01:02
mriedemfor https://launchpad.net/bugs/1658571 we said we'd fix it after FF as a bug fix, but that was before i was reminded that we only have 1 week until rc101:02
openstackLaunchpad bug 1658571 in OpenStack Compute (nova) "Microversion 2.37 break 2.32 usage" [High,In progress] - Assigned to Artom Lifshitz (notartom)01:02
gmannmriedem: +101:03
alex_xumriedem: 1 week from now?01:03
mriedemalex_xu: yeah01:03
alex_xuok, got it01:03
mriedemso, that one might end up waiting until pike01:03
mriedemsince it was already broken in newton01:03
mriedemwe might just have to document it for ocata01:03
alex_xuok01:03
mriedemi've tagged the bug with ocata-rc-potential though so we don't forget about it01:04
alex_xucool01:04
gmannit looks pretty close, so we might be able to get it into Ocata, but yeah, let's not hurry on that maybe01:04
*** tblakes has quit IRC01:04
alex_xugmann: +101:04
gmannalex_xu:  but for an old deployer the timestamp will not have changed, right?01:05
*** ijw has quit IRC01:05
gmannalex_xu:  where can we see those warnings so that we can verify the same?01:05
alex_xugmann: hmm... they should get a warning after upgrade, because 'saved_file_rules' is empty01:05
gmannalex_xu:  ah yeah, that's a nice point01:07
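A rough illustration of the check being discussed; the names here are made up (the real logic lives in nova/policy.py in the patch under review), and it only runs once oslo.policy reloads the file after its timestamp changes:

    # Rough illustration only -- not the actual nova/policy.py code.
    def warn_on_old_overrides(file_rules, renamed_rules, saved_file_rules, log):
        # saved_file_rules is empty on the first pass after an upgrade, so the
        # comparison below is true and any old-style rule still present in the
        # operator's policy file triggers the deprecation warning.
        if file_rules != saved_file_rules:
            for old_name, new_name in renamed_rules:
                if old_name in file_rules:
                    log.warning("Policy rule %s was renamed to %s; please "
                                "update your policy file.", old_name, new_name)
        return dict(file_rules)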
*** mtanino has quit IRC01:10
*** browne has quit IRC01:10
*** tblakes has joined #openstack-nova01:12
artommriedem, is one week really too little?01:16
artomI've addressed gmann's and alex_xu's feedback :)01:16
gmannartom: +1, I'll try to check today. I'm sure that is very close01:17
artomThe patch itself is fairly straightforward, if discussion is needed I think it's more about the strategy01:17
mriedemartom: w/o looking at the change i can't really say01:18
mriedemartom: we should have a spec for the thing too since it's a new microversion right?01:18
artomMy understanding is that a new microversion is the preferred approach01:18
mriedemyes it is01:18
artommriedem, a spec for that seems... pulled by the hair, as we say in Quebec01:18
*** dimtruck is now known as zz_dimtruck01:18
artomBut if folks want, I'll hack one up :)01:19
mriedemartom: let them eat process01:19
artommriedem, although if we do a spec, there's no chance of it landing in ocata?01:20
mriedemthere is01:20
* artom is confused... what happened to eating process?01:20
mriedemartom: fwiw,01:21
*** zz_dimtruck is now known as dimtruck01:21
mriedemhttps://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/response-for-invalid-status.html was for a bug fix that required a microversion in newton01:21
mriedemit should be a relatively small and straight-forward spec01:21
mriedemjust like ^01:21
artommriedem, PRECEDENT!!1101:21
mriedemyar01:23
*** mriedem is now known as mriedem_afk01:23
*** dtp has quit IRC01:23
*** ijw has joined #openstack-nova01:26
*** thorst_ has joined #openstack-nova01:27
*** raunak has quit IRC01:28
edleafemriedem_afk: almost done with the fixes01:29
*** ijw has quit IRC01:30
*** edmondsw has joined #openstack-nova01:31
*** edmondsw has quit IRC01:43
*** liangy has joined #openstack-nova01:44
*** Swami_ has joined #openstack-nova01:45
*** Swami_ has quit IRC01:46
*** Swami has quit IRC01:48
*** edmondsw has joined #openstack-nova01:54
*** tbachman has quit IRC01:54
*** Sukhdev_ has quit IRC01:54
*** tbachman has joined #openstack-nova01:55
*** tbachman has quit IRC01:55
*** tbachman has joined #openstack-nova01:55
openstackgerritEd Leafe proposed openstack/nova: placement: RT now adds proper Ironic inventory  https://review.openstack.org/40447201:56
edleafe^^ and there they are01:56
*** tbachman has quit IRC01:59
diana_clarkeI have what is probably a stupid question... when I do a `ps aux | grep nova-api` I don't see the nova-api-metadata service, but I can still successfully query the metadata service... why?02:03
*** tbachman has joined #openstack-nova02:04
*** kaisers has quit IRC02:05
*** hfu has joined #openstack-nova02:05
diana_clarke(asking b/c I'm looking at the # of processes started when you change osapi_compute_workers vs. metadata_workers in the nova conf)02:06
*** tbachman_ has joined #openstack-nova02:08
dansmithdiana_clarke: http://logs.openstack.org/30/424730/10/check/gate-grenade-dsvm-neutron-ubuntu-xenial/e3caf96/logs/etc/nova/nova.conf.txt.gz02:08
dansmithdiana_clarke: runs them both in a single process02:08
dansmith(he said authoritatively, despite being a little fuzzy on that stuff)02:09
dansmithdiana_clarke: enabled_apis = osapi_compute,metadata02:09
openstackgerritArtom Lifshitz proposed openstack/nova-specs: Fix tag attribute disappearing  https://review.openstack.org/42603002:09
*** tbachman has quit IRC02:09
*** tbachman_ is now known as tbachman02:09
openstackgerritArtom Lifshitz proposed openstack/nova-specs: Fix tag attribute disappearing  https://review.openstack.org/42603002:12
*** dims_ has joined #openstack-nova02:14
*** dims has quit IRC02:14
*** thorst_ has quit IRC02:18
*** thorst_ has joined #openstack-nova02:20
openstackgerritArtom Lifshitz proposed openstack/nova: Pass APIVersionRequest to extentions  https://review.openstack.org/42587602:22
openstackgerritArtom Lifshitz proposed openstack/nova: Fix tag attribute disappearing in 2.33 and 2.37  https://review.openstack.org/42475902:22
openstackgerritArtom Lifshitz proposed openstack/nova-specs: Fix tag attribute disappearing  https://review.openstack.org/42603002:23
diana_clarkedansmith: sorry for quick absence (was trying to get baby to bed), scrolling back02:24
openstackgerritGhanshyam Mann proposed openstack/nova: Add new policy for server list/detail with all_tenants  https://review.openstack.org/41533002:24
openstackgerritGhanshyam Mann proposed openstack/nova: Add new policy for server list/detail with all_tenants  https://review.openstack.org/41533002:25
*** sapcc-bot1 has quit IRC02:27
*** sapcc-bot has joined #openstack-nova02:27
*** yamahata has quit IRC02:29
*** tblakes has quit IRC02:29
*** dimtruck is now known as zz_dimtruck02:30
diana_clarkedansmith: My hunch is that a new nova-api process is started for each metadata_worker, but /usr/bin/nova-api-metadata exists too (but isn't a running process). I gather it can be run independently though in some cases. I dunno...02:34
*** baoli has joined #openstack-nova02:35
dansmithit's used when you need them separate02:35
openstackgerritArtom Lifshitz proposed openstack/nova-specs: Fix tag attribute disappearing  https://review.openstack.org/42603002:35
*** zz_dimtruck is now known as dimtruck02:36
*** Apoorva has quit IRC02:39
diana_clarkedansmith: hmmm... I think I get it now. Kind of. I'm not sure if the # of nova-api processes is osapi_compute_workers + metadata_workers + 1 or max(osapi_compute_workers, metadata_workers) + 102:41
diana_clarkedansmith: and maybe maybe nproc'd, but I think I'm getting closer. Thanks!02:41
diana_clarkeoops, Jacob Two-Two02:42
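A quick empirical check for the worker-count question above; a sketch that assumes every relevant process has "nova-api" in its command line (this is not how nova counts its own workers):

    import subprocess

    def nova_api_process_count():
        # pgrep -f matches the full command line; -c prints only the count
        out = subprocess.run(["pgrep", "-c", "-f", "nova-api"],
                             capture_output=True, text=True)
        return int(out.stdout.strip() or 0)

    if __name__ == "__main__":
        # Compare the result against
        #   1 + osapi_compute_workers + metadata_workers
        # versus
        #   1 + max(osapi_compute_workers, metadata_workers)
        # after changing the two options in nova.conf and restarting nova-api.
        print("nova-api processes:", nova_api_process_count())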
*** edmondsw has quit IRC02:42
*** edmondsw has joined #openstack-nova02:43
*** edmondsw has quit IRC02:43
*** edmondsw has joined #openstack-nova02:43
*** baoli has quit IRC02:45
*** dims_ has quit IRC02:47
openstackgerritArtom Lifshitz proposed openstack/nova-specs: Fix tag attribute disappearing  https://review.openstack.org/42603002:47
*** d-bark has joined #openstack-nova02:47
*** dims has joined #openstack-nova02:50
*** edmondsw has quit IRC02:52
*** ijw has joined #openstack-nova02:55
openstackgerritEd Leafe proposed openstack/nova: placement: RT now adds proper Ironic inventory  https://review.openstack.org/40447202:55
*** tbachman has quit IRC02:58
*** ijw has quit IRC02:59
*** hongbin has joined #openstack-nova03:01
*** edmondsw has joined #openstack-nova03:01
*** raunak has joined #openstack-nova03:06
*** baoli has joined #openstack-nova03:08
*** edmondsw has quit IRC03:09
*** edmondsw has joined #openstack-nova03:11
*** yamahata has joined #openstack-nova03:13
*** edmondsw has quit IRC03:14
*** edmondsw has joined #openstack-nova03:14
*** edmondsw has quit IRC03:14
*** edmondsw has joined #openstack-nova03:15
*** mtanino has joined #openstack-nova03:15
*** mtanino has quit IRC03:21
mriedem_afkdansmith: cells is now 21 in the gate huh03:30
*** mriedem_afk is now known as mriedem03:30
mriedemnot sure there is really anything else i'm going to work on tonight03:30
mriedemthere was an old shitty steve segal movie on the other night and i'm hoping for some similar magic tonight03:30
mriedem*steven - we're not on an informal name basis03:30
dansmithheh03:31
dansmithmriedem: yeah, it's moving slow03:31
dansmithmriedem: I think this attempt on the grenade patch is going to work03:31
dansmithjust like I thought the last 12 were going to work03:31
dansmithI'm about done for the night though03:31
*** yarkot has joined #openstack-nova03:31
*** pbandark has joined #openstack-nova03:32
mriedemooo a new one eh03:36
*** guchihiro has joined #openstack-nova03:37
*** tbachman has joined #openstack-nova03:37
diana_clarkemriedem: I'm sad you've worked so much from mexico ;( I submitted one patch from the airport, but then fully embraced beach life.03:39
mriedemi can only swim so much in the same 3 pools and single pacific ocean03:39
*** thorst_ has quit IRC03:40
diana_clarke My favorite part was ditching the car for golf-cart transportation. But I was in a private city, on a private street, so it wasn't really mexico...03:40
*** hongbin has quit IRC03:41
*** hongbin has joined #openstack-nova03:41
*** amotoki has quit IRC03:43
*** gouthamr has quit IRC03:43
mriedemedleafe: thanks for the updates, i'll review in detail in the morning03:45
mriedemdiana_clarke: i successfully ordered a cheese, and only cheese, quesadilla for my daughter using all spanish words - i made sure to gloat to my wife03:46
mriedemhigh school spanish still works03:46
*** liangy has quit IRC03:47
dansmithmriedem: well, the race between the subnode and the scheduler must not be the problem,03:55
dansmithbecause we're not having to wait at all for that to happen03:55
diana_clarkemriedem: Nice! I really regret not having tried to learn some spanish beforehand (I took german in school). I'm now doing 10 minutes per day for when I go back.03:55
diana_clarke(hoping to go back, I'm also not posting the pics I got of the mexican president online, surreal town was where I was at, lol)04:00
*** dimtruck is now known as zz_dimtruck04:00
mriedemdansmith: ok - i haven't wrapped my head around the grenade change yet and how that fixes the scheduler thing, or is meant to, but i think i get it, basically wait for as many enabled nova-compute services to show up after the upgrade as before, because the new compute service will go down and come back up so we'd wait for that04:03
mriedemanyway i'm calling it a night04:04
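A sketch of the wait logic mriedem describes above, in Python rather than the shell used by grenade's resources.sh; the endpoint URL and token handling are assumptions for illustration:

    import os
    import time
    import requests

    NOVA_URL = os.environ.get("NOVA_URL", "http://127.0.0.1/compute/v2.1")
    HEADERS = {"X-Auth-Token": os.environ["OS_TOKEN"]}

    def count_enabled_computes():
        # GET /os-services is admin-only and lists every registered service
        resp = requests.get(NOVA_URL + "/os-services", headers=HEADERS)
        resp.raise_for_status()
        return sum(1 for svc in resp.json()["services"]
                   if svc["binary"] == "nova-compute"
                   and svc["status"] == "enabled")

    def wait_for_computes(expected, timeout=300, interval=10):
        # keep polling until as many enabled computes show up as before the
        # upgrade, since the upgraded compute goes down and comes back up
        deadline = time.time() + timeout
        while time.time() < deadline:
            found = count_enabled_computes()
            print("Expecting %d computes, found %d" % (expected, found))
            if found >= expected:
                return
            time.sleep(interval)
        raise RuntimeError("timed out waiting for %d enabled computes" % expected)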
diana_clarkemriedem: If you find yourself online tomorrow, let me know if there is something I can do, so you can swim, etc. cheers04:06
*** zz_dimtruck is now known as dimtruck04:07
*** dimtruck is now known as zz_dimtruck04:08
*** zz_dimtruck is now known as dimtruck04:09
*** edmondsw has quit IRC04:17
*** edmondsw has joined #openstack-nova04:17
*** nicolasbock has quit IRC04:17
gmannalex_xu: mriedem johnthetubaguy - its ready now, can you check - https://review.openstack.org/#/c/415330/2704:20
*** edmondsw has quit IRC04:21
*** mtreinish has quit IRC04:22
*** mtreinish has joined #openstack-nova04:23
*** psachin has joined #openstack-nova04:24
*** zerda3 has joined #openstack-nova04:29
*** zerda2 has quit IRC04:29
*** mdnadeem has joined #openstack-nova04:30
*** ayogi has joined #openstack-nova04:50
*** baoli_ has joined #openstack-nova04:53
*** baoli has quit IRC04:55
*** hongbin has quit IRC04:57
*** udesale has joined #openstack-nova04:58
*** yamahata has quit IRC05:01
*** armax has quit IRC05:05
*** adisky_ has joined #openstack-nova05:11
*** prateek has joined #openstack-nova05:11
*** ratailor has joined #openstack-nova05:12
*** raunak has quit IRC05:26
*** sree has joined #openstack-nova05:27
*** dimtruck is now known as zz_dimtruck05:27
*** tbachman has quit IRC05:29
*** raunak has joined #openstack-nova05:29
*** artom has quit IRC05:42
*** david-lyle has quit IRC05:46
*** artom has joined #openstack-nova05:46
*** jaypipes_ has quit IRC05:53
*** karthiks has joined #openstack-nova05:53
*** kaisers has joined #openstack-nova05:54
*** kaisers has quit IRC05:55
*** Jack_V has joined #openstack-nova05:56
*** thorst_ has joined #openstack-nova05:56
*** thorst_ has quit IRC06:01
*** jaypipes_ has joined #openstack-nova06:02
*** jaypipes_ has quit IRC06:08
*** karthiks has quit IRC06:09
*** jaypipes_ has joined #openstack-nova06:11
*** sridharg has joined #openstack-nova06:22
*** vsaienko has joined #openstack-nova06:25
*** claudiub has joined #openstack-nova06:25
*** lpetrut has joined #openstack-nova06:26
*** claudiub|2 has joined #openstack-nova06:28
*** claudiub has quit IRC06:31
*** Sukhdev has joined #openstack-nova06:34
*** vsaienko has quit IRC06:37
*** avolkov has joined #openstack-nova06:40
*** vsaienko has joined #openstack-nova06:42
*** unicell1 has quit IRC06:46
*** moshele has joined #openstack-nova06:48
*** kaisers has joined #openstack-nova06:55
*** nkrinner_afk is now known as nkrinner06:56
*** moshele has quit IRC06:57
*** karthiks has joined #openstack-nova06:58
*** mjura has joined #openstack-nova07:01
*** vsaienko has quit IRC07:03
openstackgerritPawel Koniszewski proposed openstack/nova: Mark live_migration_downtime_steps/delay as deprecated for removal  https://review.openstack.org/40800207:06
*** amotoki has joined #openstack-nova07:07
*** hfu has quit IRC07:09
*** diga has joined #openstack-nova07:09
*** Sukhdev has quit IRC07:10
*** diga has quit IRC07:12
*** diga has joined #openstack-nova07:12
*** hfu has joined #openstack-nova07:17
*** kaisers has quit IRC07:21
*** rcernin has quit IRC07:24
*** diga has quit IRC07:26
*** andreas_s has joined #openstack-nova07:26
*** adisky_ has quit IRC07:29
*** kaisers has joined #openstack-nova07:36
*** mfeoktistov|2 has joined #openstack-nova07:45
*** ralonsoh has joined #openstack-nova07:50
*** sahid has joined #openstack-nova07:56
*** thorst_ has joined #openstack-nova07:57
*** markus_z has joined #openstack-nova07:59
*** vsaienko has joined #openstack-nova08:00
*** takashin has left #openstack-nova08:00
*** thorst_ has quit IRC08:01
*** amotoki has quit IRC08:05
*** diga has joined #openstack-nova08:11
*** tanee is now known as tanee_away08:13
*** jpena|off is now known as jpena08:19
*** vsaienko has quit IRC08:28
*** vsaienko has joined #openstack-nova08:28
*** gszasz has joined #openstack-nova08:31
*** jpena is now known as jpena|off08:32
*** owalsh_ is now known as owalsh08:36
*** adisky_ has joined #openstack-nova08:36
*** sree has quit IRC08:36
*** sree has joined #openstack-nova08:37
*** vsaienko has quit IRC08:38
*** jpena|off is now known as jpena08:39
*** sree has quit IRC08:41
*** vsaienko has joined #openstack-nova08:41
*** guchihiro has left #openstack-nova08:44
*** zzzeek has quit IRC09:00
*** rmart04 has joined #openstack-nova09:01
*** zzzeek has joined #openstack-nova09:01
*** tanee_away is now known as tanee09:05
*** claudiub has joined #openstack-nova09:08
*** gszasz has quit IRC09:09
*** claudiub|2 has quit IRC09:11
*** karimb has joined #openstack-nova09:16
*** gszasz has joined #openstack-nova09:17
*** Cristina_ has quit IRC09:20
*** efoley has joined #openstack-nova09:22
*** gszasz has quit IRC09:23
*** moshele has joined #openstack-nova09:26
openstackgerritGhanshyam Mann proposed openstack/nova: Make policy 'all_tenants_visible' a soft enforcement rule  https://review.openstack.org/42612809:28
*** jaypipes_ has quit IRC09:31
*** lucas-afk is now known as lucasagomes09:32
*** amotoki has joined #openstack-nova09:35
*** karthiks has quit IRC09:35
*** derekh has joined #openstack-nova09:36
*** amotoki has quit IRC09:45
*** hfu has quit IRC09:47
andreykurilinamrith: hi!09:48
*** hfu has joined #openstack-nova09:48
*** hfu has quit IRC09:49
*** hfu has joined #openstack-nova09:49
*** hfu has quit IRC09:49
*** CristinaPauna has joined #openstack-nova09:51
*** hfu has joined #openstack-nova09:51
*** hfu has quit IRC09:51
*** karthiks has joined #openstack-nova09:51
*** baoli_ has quit IRC09:51
*** claudiub has quit IRC09:52
*** gabor_antal has joined #openstack-nova09:52
johnthetubaguybauzas: looking at the patch of doom, eek09:54
johnthetubaguywhats going on there :S09:54
*** claudiub has joined #openstack-nova09:54
bauzasjohnthetubaguy: in a train to Paris so probably a bit lagging09:55
bauzasjohnthetubaguy: but lemme explain the problem09:55
johnthetubaguybauzas: I remember you say that, safe journey09:55
johnthetubaguylooks like we get the wrong min version?09:55
bauzasjohnthetubaguy: exactly09:56
bauzaswe get 16 as minimum09:56
bauzasso we call the placement API09:56
johnthetubaguysuper interesting if thats broken09:56
bauzasbut we should have 15 given we have an old subnode09:56
* johnthetubaguy nods09:56
bauzaswhat I'd love is some way to introspect more09:57
bauzaslike LOG.debugging the DB call itself09:57
johnthetubaguybauzas: we could add a line in there that lists all the nova compute service records09:57
bauzasI can throw a patch but I have very limited access09:57
johnthetubaguyI can do something if that fails09:57
johnthetubaguyI assume we are still trying to get this in ASAP09:58
johnthetubaguyits kinda release critical in my book09:58
bauzasyup, mriedem said he granted a FFE09:58
*** thorst_ has joined #openstack-nova09:58
johnthetubaguycool09:58
johnthetubaguyI guess this is the only one so far right?09:58
*** sree has joined #openstack-nova09:58
bauzasyeah, my one-day trip to Paris is terrible :/09:58
johnthetubaguyit seems a safe date to pick09:58
bauzasmriedem said he'd love also custom-rc09:58
johnthetubaguythat rest day after FFEE09:58
johnthetubaguyoh, not been tracking those patches09:59
johnthetubaguythat would be good, for iroinc09:59
bauzasyup09:59
bauzastbh, me neither09:59
johnthetubaguybauzas: so if you can't push a patch, let me know, and I can try something, but I am just going to do some log digging on your patch10:00
bauzasjohnthetubaguy: I provided the line to look at10:00
bauzasjohnthetubaguy: in the gerrit comment10:01
johnthetubaguyyeah, seen that10:01
bauzasjohnthetubaguy: the one matching the req-id that fails10:01
bauzasyou think it'd be better to introspect within the DB method, or outside of that?10:01
bauzaslike in the caller?10:02
bauzassec10:02
bauzasI think of something10:02
johnthetubaguyI would go for the fool proof thing, and just log out a list of services10:02
*** thorst_ has quit IRC10:02
bauzasthat's a remotable method, right?10:02
bauzasso we expect to have it run by the conductor10:03
bauzasbut10:03
bauzasit's within the scheduler10:03
bauzaswhere the indirection api is not set10:03
bauzasso we directly lookup the DB10:03
johnthetubaguybauzas: is it just me, or do the old and new nodes both have the same CONF.host setting?10:03
bauzasjohnthetubaguy: it's not10:03
bauzasjohnthetubaguy: or it shouldn't10:04
bauzasbut I briefly looked once at the subnode n-cpu log, and I saw it having the right name10:04
johnthetubaguyoops, I was on the wrong log10:04
bauzasthat test is actually pretty dummy10:04
bauzasit just takes all the computes, and force an instance to boot against each one10:05
johnthetubaguyhttp://logs.openstack.org/61/417961/32/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/bee8a26/logs/new/screen-n-sch.txt.gz#_2017-01-27_03_42_11_62610:05
johnthetubaguyhttp://logs.openstack.org/61/417961/32/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/bee8a26/logs/subnode-2/old/screen-n-cpu.txt.gz#_2017-01-27_03_31_28_94810:05
bauzasso it's passing on the new node, but not on the newton subnode10:05
johnthetubaguydo we truncate the hostname?10:05
bauzasI don't think so10:06
bauzasoh wait10:06
bauzasergh, I'm almost arriving to Paris10:06
bauzasI should loose some connectivity soon10:07
bauzaslet's add a bit of logging before my connection drops10:07
* bauzas rushes10:07
openstackgerritSylvain Bauza proposed openstack/nova: Scheduler calling the Placement API  https://review.openstack.org/41796110:13
bauzasjohnthetubaguy: I'm losing connectivity, but ^10:14
bauzasjohnthetubaguy: ttyl once I'm arriving at the office10:14
bauzaswe won't get a lot of details by that call but hopefully, we would get enough info to see if the service is found10:15
johnthetubaguy+1 thats the hack I was thinking about10:15
*** d-bark has quit IRC10:17
*** karimb_ has joined #openstack-nova10:17
*** karimb has quit IRC10:18
openstackgerritGhanshyam Mann proposed openstack/nova: Add release note and docs for filter/sort whitelist  https://review.openstack.org/42176010:20
gmannjohnthetubaguy: alex_xu mriedem : all_tenant things are ready now - https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/add-whitelist-for-server-list-filter-sort-parameters10:21
*** lpetrut has quit IRC10:21
gmannjohnthetubaguy: alex_xu mriedem : updated spec and new version bump with 'no 403' on new rule10:21
gmann johnthetubaguy: alex_xu mriedem : this one with spec patch also - https://review.openstack.org/#/q/status:open+branch:master+topic:bp/add-whitelist-for-server-list-filter-sort-parameters10:22
*** vsaienko has quit IRC10:22
*** cdent has joined #openstack-nova10:23
*** karimb_ has quit IRC10:24
*** sree_ has joined #openstack-nova10:24
*** sree_ is now known as Guest2035810:25
*** sree has quit IRC10:26
johnthetubaguygmann: thanks for fixing that up, I will try get to that10:27
*** karthiks has quit IRC10:27
johnthetubaguyjust chasing the approved FFEs10:27
gmannjohnthetubaguy: thanks, I'll check tomorrow on those if there are any comments10:28
cdentI'm available to be directed to whatever matters by whomever.10:36
*** karthiks has joined #openstack-nova10:39
*** pkoniszewski has quit IRC10:41
*** pkoniszewski has joined #openstack-nova10:42
johnthetubaguybauzas: http://logs.openstack.org/61/417961/32/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/bee8a26/logs/new/screen-n-cpu.txt#_2017-01-27_03_42_12_02910:44
johnthetubaguycdent: I think the priorities are the custom resource patches and bauzas's call placement patch10:44
johnthetubaguyseems like we get the min_service version wrong somehow10:45
johnthetubaguybut the plot thickens10:45
cdenteeeph10:45
johnthetubaguybauzas: then later you see this: http://logs.openstack.org/61/417961/32/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/bee8a26/logs/new/screen-n-cpu.txt#_2017-01-27_03_42_14_65910:45
johnthetubaguyI mean if the min_service version stuff is broken, thats a big deal for other things10:48
johnthetubaguyso it could be a good find, or a total red herring, but thats life I guess10:48
*** guchihiro has joined #openstack-nova10:50
openstackgerritAndy McCrae proposed openstack/nova: Allow placement endpoint interface to be set  https://review.openstack.org/42616310:55
openstackgerritIldiko Vancsa proposed openstack/nova: Implement new attach/detach Cinder flow  https://review.openstack.org/33028510:56
openstackgerritIldiko Vancsa proposed openstack/nova: Remove check_attach  https://review.openstack.org/33535810:56
*** ociuhandu has quit IRC10:58
openstackgerritAndy McCrae proposed openstack/nova: Allow placement endpoint interface to be set  https://review.openstack.org/42616310:59
*** udesale has quit IRC11:01
*** karimb has joined #openstack-nova11:03
*** mvk has quit IRC11:06
cdentjohnthetubaguy: It looks like edleafe has addressed mriedem's concerns on https://review.openstack.org/#/c/404472/ and I just gave it a read-through and nothing leapt out as crazy, and the voting bits of jenkins are happy, so it might be gtg11:12
johnthetubaguycdent: ah, sweet, I should look into that one11:12
*** hfu has joined #openstack-nova11:13
*** sambetts|afk is now known as sambetts11:14
johnthetubaguycdent: edleafe: I wonder if we should add a reno note for that patch, not a blocker as it can be a follow on, but worth squaring that away11:18
cdentjohnthetubaguy: perhaps, but if we're looking for an excuse to why not, this change has no visible change on end behavior (you have to look in the db or placement api with specific intent to know).11:20
johnthetubaguycdent: good point11:21
johnthetubaguy"hey that thing you didn't know way rubbish, we fixed it, yey us"11:21
cdentheh11:22
*** lpetrut has joined #openstack-nova11:23
cdentbauzas, johnthetubaguy : the current strategy on the scheduler changes is to watch the debug output and from that figure out what's next?11:24
johnthetubaguycdent: yeah11:24
johnthetubaguyI trawled through the code and logs a bit this morning, haven't seen any more clues as such, just more things that confuse me11:25
johnthetubaguyI haven't double checked that we correctly have the min_version caching working, that was one thread I haven't double checked11:25
cdentis there a chance that the old version can be cached, leaving the cache wrong? that is, the scheduler starts up while a node is old and caches the bad version?11:27
cdentwe could test that by turning off the caching?11:27
cdentthat does seem possible11:30
cdentassuming the cache is local to the current process11:30
*** yassine has joined #openstack-nova11:31
*** yassine is now known as Guest2190911:32
*** sree has joined #openstack-nova11:32
cdentjohnthetubaguy: yeah, I'm pretty sure the above is possible, because unlike the conductor, the scheduler isn't required to listen to a hup to reset the cache. This is the review that put the cache in https://review.openstack.org/#/c/265326/11:34
*** Guest20358 has quit IRC11:34
*** lpetrut has quit IRC11:34
*** lpetrut has joined #openstack-nova11:35
johnthetubaguycdent: so in this case we want to ensure we get the older version11:35
*** tbachman has joined #openstack-nova11:35
johnthetubaguycdent: so I was expecting the scheduler to help really11:35
johnthetubaguyI mean the cache to help11:35
cdentoh. hmm.11:35
cdentI guess that means we have two different problems :)11:36
cdentbecause now we still need to make sure the cache becomes right after the final compute node becomes modern?11:36
*** tbachman has quit IRC11:41
johnthetubaguythats always going to require a restart anyways11:46
johnthetubaguyfor the compute RPC api version11:46
cdentjohnthetubaguy: yesterday we had a lot of discussions about what happens when people are "accidentally" upgrading stuff11:47
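A minimal sketch of the version gate and cache being discussed (illustrative names; the threshold of 16 is taken from the log above, and the helper is assumed to behave like nova.objects.Service.get_minimum_version):

    # Illustrative threshold taken from the discussion above.
    PLACEMENT_MIN_COMPUTE_VERSION = 16

    _min_version_cache = {}

    def minimum_compute_version(ctxt, service_cls):
        # service_cls is assumed to behave like nova.objects.Service, whose
        # get_minimum_version() looks at every nova-compute service record.
        if "nova-compute" not in _min_version_cache:
            _min_version_cache["nova-compute"] = service_cls.get_minimum_version(
                ctxt, "nova-compute")
        return _min_version_cache["nova-compute"]

    def should_use_placement(ctxt, service_cls):
        # cdent's concern: if the value was cached while an old compute was
        # still registered, the cached answer stays wrong until the scheduler
        # process restarts (it is not required to handle SIGHUP to reset it).
        return (minimum_compute_version(ctxt, service_cls)
                >= PLACEMENT_MIN_COMPUTE_VERSION)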
cdentlots of test failures in unit tests and things on that latest run of 417961, since bauzas is on a train should I get that stuff?11:48
*** smatzek has joined #openstack-nova11:52
*** moshele has quit IRC11:53
johnthetubaguycdent: I think its due to a hack, those failures11:58
* cdent is unsure of what to do :)11:58
*** thorst_ has joined #openstack-nova11:59
johnthetubaguybummer11:59
johnthetubaguyServersTestManualDisk-1134536160] Bauzas: all services: ServiceList(objects=[Service(1),Service(2),Service(3),Service(4),Service(5),Service(6),Service(7),Service(8)]) _get_all_host_states11:59
johnthetubaguythats not what we wanted there11:59
cdentugh, yeah, that's not very helpful12:00
*** guchihiro has quit IRC12:00
johnthetubaguymaybe fix up str(service) ?12:00
johnthetubaguythen we should find out the versions its seeing12:01
johnthetubaguyhttp://logs.openstack.org/61/417961/33/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/a2da154/logs/new/screen-n-sch.txt.gz#_2017-01-27_10_48_09_04812:01
cdentIs there some way we can model some of this without having to use the gate?12:01
*** BobBall_AWOL is now known as BobBall12:02
*** sree has quit IRC12:02
johnthetubaguyif you have an old compute node running, and a new compute node running, and something to query the DB using objects, that should give us the same thing12:02
johnthetubaguyjust at this point, I really don't get whats happening, so its hard to know what we could fake12:02
*** thorst_ has quit IRC12:03
johnthetubaguyits tempting to also log service versions in the service updates, just to be sure12:03
johnthetubaguyI can do those changes, one sec12:03
*** sree has joined #openstack-nova12:03
*** catintheroof has joined #openstack-nova12:04
*** mvk has joined #openstack-nova12:04
*** sree has quit IRC12:04
*** sree has joined #openstack-nova12:04
*** nicolasbock has joined #openstack-nova12:04
*** sree has quit IRC12:04
*** khushbu has joined #openstack-nova12:05
*** diga has quit IRC12:16
*** jpena is now known as jpena|lunch12:17
openstackgerritJohn Garbutt proposed openstack/nova: Scheduler calling the Placement API  https://review.openstack.org/41796112:18
*** bvanhav has joined #openstack-nova12:18
cdentI hope dong.wenjuan is having a nice vacation12:18
johnthetubaguycdent: I think these should help: https://review.openstack.org/#/c/417961/33..34/12:18
johnthetubaguycdent: if you could check for fat fingers, that would be awesome12:18
* cdent is looking12:18
*** sridharg has quit IRC12:19
cdentjohnthetubaguy++12:20
johnthetubaguyI just thought, I should add something in the service object save too12:20
johnthetubaguyto make sure it gets across RPC OK12:21
*** lucasagomes is now known as lucas-hungry12:21
johnthetubaguycdent: thanks for checking, fingers crossed on this while I get some lunch12:22
openstackgerritJohn Garbutt proposed openstack/nova: Scheduler calling the Placement API  https://review.openstack.org/41796112:22
* johnthetubaguy bravely runs away12:23
* cdent sings12:23
johnthetubaguyheh12:24
*** moshele has joined #openstack-nova12:27
*** moshele has quit IRC12:29
*** sdague has joined #openstack-nova12:40
*** thorst__ has joined #openstack-nova12:43
*** ociuhandu has joined #openstack-nova12:46
*** vladikr has quit IRC12:50
*** dakmetov has joined #openstack-nova12:52
*** sdague_ has joined #openstack-nova12:52
dakmetovgreetings everyone!12:53
*** tbachman has joined #openstack-nova12:54
*** andreas_s has quit IRC12:55
*** cdent has left #openstack-nova13:01
*** cdent has joined #openstack-nova13:01
*** psachin has quit IRC13:02
*** ratailor has quit IRC13:03
dakmetov2 mbooth: thank you for the great review!13:03
*** cdent has quit IRC13:04
dakmetov2 mbooth: could you please clarify one thing: https://review.openstack.org/#/c/399679/5/nova/virt/libvirt/driver.py13:04
*** cdent has joined #openstack-nova13:04
dakmetov2 mbooth: what do you mean under 'source info'?13:04
*** catintheroof has quit IRC13:05
cdentjohnthetubaguy: something weird: http://logs.openstack.org/61/417961/35/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/7e52701/logs/new/screen-n-sch.txt.gz#_2017-01-27_12_55_20_94613:05
cdentmin is showing as 16, but a compute is 15!13:05
wznoinsksfinucan, hi13:08
sfinucanwznoinsk: hey13:09
sfinucanwhat's up?13:09
bhagyashriscdent: Hi,  Is the create resource_provider available for the user or it's only for the internal api for service?13:09
cdentbhagyashris: It's possible for anything with the admin role to create resource providers, but at the moment there's not a lot of reason to do so. Long term the idea is that resource providers for things like shared disk farms will exist13:11
bhagyashriscdent: And Is it necessary that the resource_provider name should be  the host name?13:12
wznoinsksfinucan, I saw your chat with vladikr yesterday, I'm guessing he has some hardware dependent tests, was that passthrough and alike or more of networking like sriov work?13:13
*** khushbu has quit IRC13:13
cdentbhagyashris: the name of the resource provider is there for humans to remember and filter on as needed. The resource providers that are automatically created for compute nodes use the hostname.13:14
*** catintheroof has joined #openstack-nova13:16
*** tlbr has joined #openstack-nova13:16
bhagyashriscdent: Thank you for useful information. :)13:17
cdentyou're welcome13:18
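A hedged sketch of creating a resource provider by hand, as cdent describes being possible with the admin role; the endpoint URL and token handling are assumptions, and normally the compute nodes register their own providers:

    import os
    import uuid
    import requests

    PLACEMENT_URL = os.environ.get("PLACEMENT_URL", "http://127.0.0.1/placement")
    HEADERS = {"X-Auth-Token": os.environ["OS_TOKEN"]}  # admin-scoped token

    def create_resource_provider(name):
        # the name is just for humans to remember and filter on; compute nodes
        # register themselves using their hostname
        body = {"name": name, "uuid": str(uuid.uuid4())}
        resp = requests.post(PLACEMENT_URL + "/resource_providers",
                             json=body, headers=HEADERS)
        resp.raise_for_status()
        return body["uuid"]

    if __name__ == "__main__":
        # e.g. a provider representing a shared disk farm
        print(create_resource_provider("shared-disk-farm-1"))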
*** lucas-hungry is now known as lucasagomes13:19
*** ayogi has quit IRC13:20
dakmetovguys, is there any way to get user name and tenant name using userid and tenantid inside nova?13:21
*** edmondsw has joined #openstack-nova13:21
dakmetovwithout external lookups to keystone, I mean13:21
*** pradk has joined #openstack-nova13:21
*** prateek has quit IRC13:22
*** jpena|lunch is now known as jpena13:22
*** edmondsw_ has joined #openstack-nova13:23
*** mdnadeem has quit IRC13:23
*** sdague has quit IRC13:25
*** edmondsw has quit IRC13:25
*** pbandark has quit IRC13:26
johnthetubaguycdent: crud, I think I know the problem...13:26
cdentjohnthetubaguy: oh good, because I've been staring at that trying to figure it out to no avail. do tell?13:26
johnthetubaguyadding a comment, one sec13:27
sfinucanwznoinsk: So yeah, he's looking to test the exposing of VLAN tag through the metadata service13:27
sfinucanWe're thinking that will require physical hardware. Not sure if it's something the Intel PCI CI could validate or not13:27
johnthetubaguycdent: does my comment make sense?13:27
johnthetubaguygoing to fix up now, and push it13:28
cdentoh hell13:28
cdenti hate it when it is something like that13:28
*** gszasz has joined #openstack-nova13:29
cdentI just assumed that was correct because all this service version stuff is new to me13:29
cdentsavagely educated13:29
johnthetubaguyI read that three times and checked it13:29
openstackgerritJohn Garbutt proposed openstack/nova: Scheduler calling the Placement API  https://review.openstack.org/41796113:30
johnthetubaguyI like that phrase13:30
johnthetubaguyits a bit like used in anger13:30
cdent"bugger" is the best comment ever13:30
johnthetubaguyits the correct british way, I feel13:30
cdentquite13:30
johnthetubaguyone doesn't want to offend13:30
cdentunless one really needs to13:31
johnthetubaguyheh13:31
cdentwe were nearly there13:31
johnthetubaguyso I left the debugging in, just in case13:31
*** pradk has quit IRC13:31
cdentgood choice13:31
cdentedleafe: you surfaced yet?13:35
markus_zdansmith: If you find time to review: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/newton+topic:%22bug+1455252%2213:36
*** khushbu_ has joined #openstack-nova13:37
*** gszasz has quit IRC13:40
*** jheroux has joined #openstack-nova13:40
markus_z^ or any other stable core, I'm not picky ;)13:42
*** thorst__ is now known as thorst_13:42
*** rwmjones has quit IRC13:43
*** gouthamr has joined #openstack-nova13:53
*** edleafe is now known as figleaf13:54
figleafcdent: I'm here13:54
cdentfigleaf: just wanted to know if you're up to date on the above13:55
figleafcdent: still reading...13:55
figleafcdent: want to give me the summary?13:55
cdentfigleaf: the only thing that really matters is: https://review.openstack.org/#/c/417961/35/nova/scheduler/filter_scheduler.py13:56
cdent(john's discovery)13:56
figleafjust reading the 'bugger' comment :)13:56
figleaflooks like johnthetubaguy has taken care of it13:57
cdentyes13:58
figleafand I have a merge conflict on the last of the RP series https://review.openstack.org/#/c/414769/13:58
* johnthetubaguy still has fingers crossed for a test pass13:58
cdentthe debugging stuff that's right around bauzas' patch is still in there for the current gate run, but if it's happy it will be cleared out13:58
figleafcool13:59
cdentwe're not trying to get nested rps in ocata, or are we?13:59
figleafcdent: yes, but they won't be doing very much at all14:00
cdentthen I guess my rp update 9 email is wrong, could you extend it?14:01
*** xyang1 has joined #openstack-nova14:01
jrollcdent: figleaf: safe to assume we didn't get the extra_specs=CUSTOM_FOO_RESOURCE_CLASS=1 thing done, right?14:02
johnthetubaguythe ironic custom stuff, I guess we don't query that either yet, but if its in this cycle we can query it next cycle?14:02
johnthetubaguyis looking at: https://review.openstack.org/#/c/40447214:02
jrolljohnthetubaguy: I think we have similar questions :)14:02
cdentjroll: the handling of extra specs, no, but the recording of inventory is still trying to get in14:02
johnthetubaguyjroll: heh, yup14:03
*** bvanhav has quit IRC14:03
figleafcdent: dammit, you type too fast14:03
jrollcdent: right, so operators cannot change their flavors until pike, so we can't cut over how we schedule until q :(14:03
*** kfarr has joined #openstack-nova14:03
jrollwith 'cut over' ~= 'drop cpu/ram/disk-based scheduling'14:04
figleafjroll: wait - why can't they update their flavors? The new stuff will be ignored, but still...14:04
cdentjroll: I guess so? :( It's been really hard to track some of this14:04
jrollfigleaf: because a flavor with cpu/ram/disk won't work in the new world, where ironic resource providers don't have cpu/ram/disk14:05
jrollit would require CUSTOM_FOO *and* 2 vcpu and 16384 rams, etc14:05
*** pradk has joined #openstack-nova14:05
johnthetubaguyso I am reading through, it looks like each nova-compute node can only look at one ironic resource_class, is that correct?14:06
jrollthe old flavors are incompatible with the new14:06
figleafjroll: ah, so they could update their flavors, but the flavors will have both the resource class name *and* the vm-ish stuff too14:06
figleafjohnthetubaguy: that's the idea14:07
jrollfigleaf: yeah, something like that14:07
johnthetubaguyjroll: are we OK with nova-compute to resource class 1:1 mapping?14:08
figleafjroll: the scheduler/placement will just ignore the former for now, and at some point, switch to ignoring the latter14:08
*** vsaienko has joined #openstack-nova14:08
figleafjohnthetubaguy: to be clear, there can be multiple nova-computes that are mapped to the same resource class14:08
*** ratailor has joined #openstack-nova14:09
jrollfigleaf: I'm skeptical, how would it know to ignore ram/cpu for that flavor (but not for a virt flavor)?14:09
jrolljohnthetubaguy: that seems... wrong. are we talking nova-compute service or compute_node entry? a compute service should handle more than one resource class fine14:10
*** cleong has joined #openstack-nova14:10
jrollthere should be n:1 node:resource_class, and m:n service:resource_class14:10
johnthetubaguyjroll: that was my assumption14:11
*** ratailor has quit IRC14:11
*** sdague_ is now known as sdague14:12
figleafjroll: there will probably be an "if _is_ironic():" check added to that code to look for the resource class14:12
jrolljohnthetubaguy: I also don't see that n:1 service:resource_class in the code14:12
johnthetubaguyjroll: the resource class request's presence just always dominates any other request, that seems fine14:12
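Illustrative only: the shape of the "transitional" ironic flavor being debated, carrying both the old vcpu/ram/disk values and a custom resource class; the extra-spec key is taken from jroll's wording above and is not a settled interface:

    # Illustrative only -- not a settled interface.
    baremetal_gold_flavor = {
        "vcpus": 2,
        "ram": 16384,   # MB, still present so old-style scheduling keeps working
        "disk": 100,    # GB
        "extra_specs": {
            # key spelling taken from jroll's wording above
            "CUSTOM_FOO_RESOURCE_CLASS": "1",
        },
    }
    # The open question is how the scheduler knows to ignore the vcpu/ram/disk
    # part for a baremetal flavor but not for a virt flavor (figleaf suggests
    # an "if _is_ironic():" style check).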
cdentIs it normal for us to be as confused about stuff as we've been?14:13
johnthetubaguyfigleaf: I guess we can make it nested, once we have nested?14:14
*** mdrabe has joined #openstack-nova14:14
johnthetubaguycdent: thats probably yes, I prefer the "can we do better at this" question, I hope thats a yes too14:14
* cdent nods14:14
cdentjust checking14:15
johnthetubaguyjroll: I am looking here: https://review.openstack.org/#/c/404472/23/nova/compute/resource_tracker.py@52514:15
johnthetubaguycdent: its good to check these things14:15
cdentI'm trying to decide whether it is reasonable to be chill or panicky14:15
*** owalsh is now known as owalsh-afk14:15
jrolljohnthetubaguy: yeah, so that's for a single ironic node, and that code will run once per ironic node14:15
* cdent defaults to chill14:16
*** markvoelker has joined #openstack-nova14:16
openstackgerritAndrey Volkov proposed openstack/nova: WIP Update pci device fields from deleted object  https://review.openstack.org/42624314:16
figleafcdent: it's more the overlapping terminology that confuses me14:16
johnthetubaguyjroll: doh, I see the loop now14:16
figleaf"compute" can mean multiple things14:17
johnthetubaguyfigleaf: sorry, I see what you were trying to tell me now14:18
jrollfigleaf: I've learned to always say compute *service* (nova-compute process) or compute *node* (the hypervisor/hardware), it seems to help14:18
cdentjroll: if you said compute service to me, I'd assume you meant all of nova14:18
cdentlanguage is hard14:19
jrollcdent: ahem, "nova-compute service"14:19
jrollheh14:19
*** bvanhav has joined #openstack-nova14:19
*** owalsh-afk has quit IRC14:19
* johnthetubaguy renames compute node to grape, and compute service to vine, now where does the wine come in?14:20
openstackgerritSam Betts proposed openstack/nova: Change order of _cleanup_deploy and _unprovision in Ironic virt  https://review.openstack.org/42267814:21
jrollthrough an IV14:21
cdentjroll++14:23
artomIntra-vineous :D14:26
figleafjohnthetubaguy: in that case, let's make some wine!14:26
*** tblakes has joined #openstack-nova14:27
* figleaf notes that it *is* Friday, after all14:27
*** esberglu has joined #openstack-nova14:27
*** owalsh-afk has joined #openstack-nova14:28
openstackgerritAndrey Volkov proposed openstack/nova: WIP Update pci device fields from deleted object  https://review.openstack.org/42624314:28
*** takedakn has joined #openstack-nova14:28
*** baoli has joined #openstack-nova14:30
*** karthiks has quit IRC14:30
*** esberglu_ has joined #openstack-nova14:30
*** esberglu has quit IRC14:33
johnthetubaguyfigleaf: best idea I heard all day14:36
*** efoley has quit IRC14:37
*** Dinesh_Bhor has quit IRC14:37
*** lpetrut has quit IRC14:37
*** efoley has joined #openstack-nova14:38
*** efoley has quit IRC14:39
BlackDexhello again. I'm having trouble with nova-consoleauth14:39
*** tlian has joined #openstack-nova14:39
BlackDexi have an ha setup, with hacluster and a vip14:39
BlackDex3 systems14:39
*** efoley has joined #openstack-nova14:39
BlackDexif i use horizon to view the console i have about a 1 in 3 chance of getting the console to display14:40
*** dakmetov has quit IRC14:40
* figleaf has to step out for a doctor appt - back in a few.14:40
BlackDexi have configured memcached, but nothing is getting into memcached, i can connect to memcached from the nova-controle nodes14:40
*** mjura has quit IRC14:42
*** dansmith is now known as superdan14:47
*** sahid has quit IRC14:47
superdanthat scheduler change failed on the old side due to volumes and neutron14:47
cdentjohnthetubaguy, dansmith : looks like the latest multinode grenade failed on timeouts this time14:47
cdentjinx14:47
superdanheh14:47
* johnthetubaguy shakes fist14:47
*** vladikr has joined #openstack-nova14:48
johnthetubaguyis that a real something is broken timeout, or a you hit an unlucky node timeout?14:48
johnthetubaguycdent: did we get the correct min_version back now?14:49
cdentno logs for the new side14:49
superdanjohnthetubaguy: we didn't get far enough to look14:49
johnthetubaguyI was worried you would say that14:50
* cdent gives johnthetubaguy more wine14:50
johnthetubaguyheh14:50
mriedemsuperdan: congrats https://review.openstack.org/31937914:50
* superdan takes a bow14:50
mriedemnow we just cleanup the release notes for cells v2 and maybe upgrade docs and we're good for ocata i think14:51
jrollwhat's the best way to talk to the placement service in devstack? just grab a keystone token and use curl?14:51
mriedemjroll: yeah, although mlavalle has an osc plugin somewhere14:51
jrollmriedem: cool, thanks14:52
cdentjroll: a) probably yes b) i'm not sure if mlavalle's thing is "done" c) I use gabbi yaml files, run through gabbi-run14:52
johnthetubaguyfigleaf: cdent: I am struggling to understand this line: https://review.openstack.org/#/c/404472/23/nova/compute/resource_tracker.py@54814:52
openstackgerritArtom Lifshitz proposed openstack/nova: Pass APIVersionRequest to extensions  https://review.openstack.org/42587614:52
openstackgerritArtom Lifshitz proposed openstack/nova: Fix tag attribute disappearing in 2.33 and 2.37  https://review.openstack.org/42475914:52
johnthetubaguyfigleaf: cdent: I feel I am missing something big and obvious14:53
jrollcdent: thanks14:53
cdentjroll: in case you're curious: https://gist.github.com/cdent/a9590764fbc7402d450fa36df14f35e014:53
jrollcdent: I need to poll for resources in our devstack plugin, curl seems like the right thing for now14:53
*** sahid has joined #openstack-nova14:53
cdentjroll: for that, yeah14:53
cdent(although, just because we're almost on the topic, gabbi can poll)14:54
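A minimal sketch of the "grab a token and poll" approach jroll describes, using requests instead of curl; the endpoint URL, token handling and the resource class polled for are assumptions:

    import os
    import time
    import requests

    PLACEMENT_URL = os.environ.get("PLACEMENT_URL", "http://127.0.0.1/placement")
    HEADERS = {"X-Auth-Token": os.environ["OS_TOKEN"]}

    def wait_for_inventory(rp_uuid, resource_class="VCPU", timeout=300):
        # poll the provider's inventories until the wanted class shows up
        url = "%s/resource_providers/%s/inventories" % (PLACEMENT_URL, rp_uuid)
        deadline = time.time() + timeout
        while time.time() < deadline:
            resp = requests.get(url, headers=HEADERS)
            resp.raise_for_status()
            inventories = resp.json().get("inventories", {})
            if resource_class in inventories:
                return inventories[resource_class]
            time.sleep(5)
        raise RuntimeError("no %s inventory for %s" % (resource_class, rp_uuid))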
*** owalsh-afk is now known as owalsh14:54
cdentjohnthetubaguy: the cleanup or that the exception happens?14:54
johnthetubaguycdent: I am wondering if we need cleanup in other cases, like when an ironic node is there, but there is no allocation14:55
johnthetubaguycdent: maybe the API just does that for us already?14:55
johnthetubaguyah, I think it does14:55
cdentI don't think I'm sufficiently contextualized to really follow what you're saying?14:56
openstackgerritArtom Lifshitz proposed openstack/nova-specs: Fix tag attribute disappearing  https://review.openstack.org/42603014:56
*** mlavalle has joined #openstack-nova14:56
*** prateek has joined #openstack-nova14:56
*** armax has joined #openstack-nova14:56
johnthetubaguycdent: sorry, I was worried about us not always deleting the old style ironic inventories, but I didn't realise the API would do that for us when we put the inventories14:57
mriedemsuperdan: cdent: jaypipes: bauzas: johnthetubaguy: melwitt: fyi i'm starting to make a list of things to do before rc1 https://etherpad.openstack.org/p/nova-ocata-rc1-todos14:57
johnthetubaguyits just when we have allocations we need to do better14:57
johnthetubaguymriedem: perfect14:57
mriedemjroll: https://github.com/miguellavalle/python-placementclient14:57
jrollmriedem: cool, I'll wait til that's more "official" to depend on it in devstack, I think14:58
*** khushbu_ has quit IRC14:59
cdentmriedem: do we want to be concerned with the traffic breadth from the resource tracker that I wrote about in: http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html ? There are two bug fixes in progress related to that.14:59
mriedemcdent: maybe? throw it into the etherpad, i haven't read that thread in detail yet15:01
mriedemi know we're super chatty15:01
openstackgerritJordan Pittier proposed openstack/nova: Fix unspecified bahavior on GET /servers/detail?tenant_id=X as admin  https://review.openstack.org/42625915:01
cdentk, will do15:01
mriedemthe RT is chatty i mean when using the report client15:01
mriedemi think i pointed something out about that in the resource classes series that's still up15:02
*** sahid has quit IRC15:02
*** sahid has joined #openstack-nova15:02
mriedembut now i can't remember what it was, i think something about updating inventory twice when we didn't need to15:02
*** vladikr has quit IRC15:02
*** tbachman has quit IRC15:02
cdentmriedem: yeah15:03
mriedembut thanks for doing an audit, it's easy to not care about performance when you're just trying to get something functional15:03
cdentI think in order to maintain our self-healing we still need to report more than we want15:03
cdents/want/could/15:03
cdentbut at the moment it is very redundant, and some of it is just because of old bugs15:03
cdentmriedem: was worried about what would happen when this suddenly landed on a big deployment15:04
*** tbachman has joined #openstack-nova15:04
superdancdent: so one thing to consider, while we see how this goes,15:04
superdancdent: is are we likely to make more trouble by doing what we're doing than we have now..15:04
superdanbecause currently,15:04
superdanwe do this, but over rpc and into the same database as everything else,15:04
superdanwith placement, it's over http, probably a little more compact, and to a potentially separate database15:05
*** efoley_ has joined #openstack-nova15:05
superdanand probably more efficient horizontally-scaled under apache15:05
superdanso we could assume it's at least a wash for ocata and see how it goes with some early testers15:05
cdentcertainly the horizontal scale capabilities are more gooderer15:05
mriedemand we're still doing the rpc to single db thing15:05
superdanyou didn't identify anything that is worse than today right?15:06
mriedemwill be nice in the future when we can drop that load from rpc traffic15:06
cdentso yeah, we are continuing the existing, and adding more on top15:06
superdanyeah15:06
superdanyeah, I guess I forgot we're doing both, but, theoretically to different places15:06
superdanbut yeah15:06
cdentthe thing with object comparison is something we should probably just fix as it is clear and simple15:07
*** takedakn has quit IRC15:07
*** rwmjones has joined #openstack-nova15:07
mriedemif only we had a performance working group or something15:07
cdentI'm not sure how much impact it has on the http side, but maybe some on the rpc side15:07
cdentthe thing with tracked_instances is just weird15:07
superdanI haven't looked at the two fixes15:07
cdentas the way it works now is that it doesn't at all do what it says it is going to do15:07
superdanI'll try to do that and make a judgment15:07
cdentthanks, that would be awesome, just doing that analysis felt like I had a significant leap forward on understanding the resource tracker, but then right at the end I felt like I fell way backwards :(15:08
cdentin the "why does this even work" sense15:08
superdanwell, I'm not a RT expert by any means15:08
superdanprobably jay would be better, but..15:08
*** efoley has quit IRC15:09
johnthetubaguyfigleaf: cdent: this ironic resource provider change, I could be missing something again, but I feel like we are missing something around updating new instance allocations? well possibly anyways.15:09
openstackgerritStephen Finucane proposed openstack/nova: Fix backwards compatibility for InstanceNUMACell  https://review.openstack.org/39618415:09
openstackgerritStephen Finucane proposed openstack/nova: Combine obj_make_compatible tests  https://review.openstack.org/42626115:09
cdentjohnthetubaguy: figleaf has gone to the docs, I think, and knows more about that than me, but I'll listen...15:10
mriedemjohnthetubaguy: replied in https://review.openstack.org/#/c/404472/ -15:10
mriedemjohnthetubaguy: none of that fancy new code is hit unless node.resource_class is set15:10
mriedemwhich it's not today in our CI15:10
mriedemwhich kind of blows15:10
mriedemif you look at the logs for the ironic job on that patch it's still setting ram/cpu/disk15:10
mriedemwould be nice if we could have a one-off DNM patch to set node.resource_class and test that stuff15:11
mriedemjroll: ^15:11
mriedemjohnthetubaguy: and i think the ironic-specific stuff is mostly contained in the RT15:11
mriedemwhich is why in earlier patches i had asked that long-term we subclass the ResourceTracker to do the ironic-specific bits15:11
*** yamahata has joined #openstack-nova15:12
mriedembecause i don't like the "if ironic" spaghetti code15:12
mriedemi'll review the latest on that patch when i get back in an hour or so15:12
*** nkrinner is now known as nkrinner_afk15:12
cdentI may be misremembering, but I thought part of the rt+resosurce class stack was to change the resource tracker so that there is only one of them (for everything it tracks), not one for each node it cares about?15:13
cdentthat might be somewhere else...15:14
*** mnestratov|2 has joined #openstack-nova15:14
melwittcdent: fwiw, the fact that tracked_instances is cleared every update_available_resource period is known and intentional. it surprised me too when I first realized it. I don't recall why it's that way, I guess to prevent stale things from hanging around potentially15:14
*** jamesdenton has joined #openstack-nova15:14
cdentmelwitt: I took it out and no tests failed...15:14
*** mnestratov has quit IRC15:15
cdentwhich made me sad and confused15:15
mriedemcdent: that's the same15:15
mriedembut not what i'm talking about15:15
mriedembbiab15:15
*** mtanino has joined #openstack-nova15:16
melwittyeah, I don't think tests would fail from it. I have mentioned it to jaypipes in the past and he knew about it15:16
*** raunak has quit IRC15:16
cdentmelwitt: hmm. well, thanks for providing some background. I wonder if it is worth revisiting since now it is spinning off redudant http requests?15:17
melwittbauzas: do you know anything about why tracked_instances is cleared each update_available_resource period?15:17
*** vladikr has joined #openstack-nova15:18
*** vsaienko has quit IRC15:18
*** prateek has quit IRC15:19
*** gouthamr has quit IRC15:19
johnthetubaguycdent: so the bit I am stuck with, is where we make the new style ironic resource_class allocations15:19
melwittcdent: I think in general it's worth considering since it seemed excessive even in its original pre placement api form. just don't know what's involved in doing that or what gotchas there are15:20
*** stvnoyes has quit IRC15:20
*** claudiub|2 has joined #openstack-nova15:20
*** stvnoyes has joined #openstack-nova15:21
jrollmriedem: I'm working on devstack support right now, do you want a throwaway hack in the meantime?15:21
superdanmelwitt: cdent it's just to try to always stab the latest info into the database in case it's stale, right?15:21
melwittthat's what I'm guessing15:22
cdentif the node is authoritative for itself, and it keeps a cache of what it successfully wrote last time, how does staleness happen?15:22
cdent(where last time could be 60s or 60 days ago)15:23
*** claudiub|3 has joined #openstack-nova15:23
*** hieulq_ has joined #openstack-nova15:23
cdent(mostly thinking about inventory here, not allocations)15:23
cdentjohnthetubaguy: i'm rereading the code, to bring it back into my brain15:23
*** claudiub has quit IRC15:23
*** bhagyashris has quit IRC15:24
*** jaosorior has joined #openstack-nova15:25
*** claudiub|2 has quit IRC15:25
cfriesencdent: refreshing tracked_instances would allow it to recover from errors where it missed an event that would cause the instance to be added/removed from the list.  But it should be possible to handle that in a different way.15:26
johnthetubaguycdent: you probably want to ignore any changes to the "service" field in the compute_node as well, as service will keep getting updated every DB service update interval, unsure if thats related (re https://review.openstack.org/#/c/424305)15:26
johnthetubaguycdent: ignore that15:27
melwittcdent: looking back at the code to refresh my memory ... I think without the periodic clear, the only thing that removes instances from tracked_instances is the vm_state DELETED or SHELVED_OFFLOADED and if for some reason (errors) an instance doesn't get its state saved that way, it would never be removed15:29
cdentcfriesen: is missing events of that nature a thing that happens a) often, b) sometimes, c) rarely?15:29
*** eglynn has joined #openstack-nova15:29
*** sneti_ has joined #openstack-nova15:30
*** sneti_ has quit IRC15:32
cfriesencdent: I would expect rarely15:32
cfriesenmelwitt: abort_instance_claim() also removes it, by calling _update_usage_from_instance(is_removed=True)15:33
melwittyeah, sorry. I forgot to mention that too15:34
cfriesenmelwitt: cdent: but I think melwitt is correct that there would be cases we'd need to add explicit removal15:34
*** zz_dimtruck is now known as dimtruck15:34
cdentat least over the long run, making things more explicit would probably be better15:34
cfriesencdent: agreed, and it would make the resource tracking react faster to freeing up of resources.15:35
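To make the tracked_instances discussion above concrete, here is a minimal standalone sketch of the two patterns being compared, not nova's actual ResourceTracker code (the class and helper names are made up). The periodic pass rebuilds the cache from the authoritative instance list, so a missed event can only leave a stale entry around for one period; the explicit-removal variant reacts faster but has to handle every state transition.

    # Illustrative only: "clear and rebuild each period" vs explicit removal.
    class Tracker(object):
        def __init__(self):
            self.tracked_instances = {}

        def update_available_resource(self, instances):
            # Pattern under discussion: throw away the cache every period and
            # rebuild it from the authoritative list, so stale entries cannot
            # survive more than one period even if an event was missed.
            self.tracked_instances.clear()
            for inst in instances:
                if inst['vm_state'] not in ('deleted', 'shelved_offloaded'):
                    self.tracked_instances[inst['uuid']] = inst

        def remove_instance(self, uuid):
            # The explicit alternative: remove only on delete/offload events.
            self.tracked_instances.pop(uuid, None)

    if __name__ == '__main__':
        t = Tracker()
        t.update_available_resource([
            {'uuid': 'a', 'vm_state': 'active'},
            {'uuid': 'b', 'vm_state': 'deleted'},
        ])
        print(sorted(t.tracked_instances))  # ['a']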
*** rwmjones has quit IRC15:36
*** satyar has joined #openstack-nova15:37
johnthetubaguycdent: at some point doesn't nova-conductor create the instance resource allocations, and retry if the generation id of the resource provider doesn't match (as the generation id increments on a related allocation, or something like that)? So the resource tracker is more just checking for rogue instances with missing allocations?15:40
cfriesenso hardware.numa_usage_from_instances() expects a list, but we only ever call it with a single instance.  Do we plan on ever using it with a list or should we maybe clean it up to expect a single instance?15:41
*** gouthamr has joined #openstack-nova15:41
*** rwmjones has joined #openstack-nova15:41
cdentjohnthetubaguy: when you say "allocations" do you mean allocations in the placement api, or something more general? because if the former, the conductor doesn't talk to the placement api at all.15:42
*** mnestratov|2 has quit IRC15:43
johnthetubaguycdent: I mean placement API, and when I say conductor, that happens in nova-scheduler today15:43
*** Guest21909 has quit IRC15:43
cdentjohnthetubaguy: as far as I know, right now, the only thing that writes allocations is the resource tracker15:45
cdentand the only thing that writes inventories is the resource tracker15:45
cdentwhen nova-scheduler talks to placement API it is read only15:45
*** marst has joined #openstack-nova15:47
johnthetubaguycdent: I am thinking about the (not too distant) future, as the sooner we add the allocations, the sooner other queries "notice" them15:47
johnthetubaguybut thats probably just a distraction right now15:48
cdentoh you mean "at some point won't it be the case that...".15:48
cdentYes, at some point the scheduler will request and claim resources (thus make allocations) in one go15:48
cdentprobably with a POST to /allocations of something resembling a request spec15:48
cdentinstead of the current GET /resource_providers (which is then passed into the existing filter scheduler)15:49
cdentwhen that time comes, the resource tracker will change quite a bit, presumably15:49
*** hieulq_ has quit IRC15:49
cdentbut I suspect that that time is probably further off than we think15:49
cdents/think/hope/15:49
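For reference, this is roughly what writing an allocation against the placement API looks like in the Ocata-era request format, which is what the resource tracker does today and what a claiming scheduler would eventually do in one step with the selection. This is a sketch, not nova's report client: keystone session handling is skipped and the endpoint, token, UUIDs and resource values are placeholders.

    # Rough sketch: set allocations for one consumer (an instance) against
    # one resource provider via the placement API.
    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # placeholder endpoint
    TOKEN = 'gAAAA-placeholder-token'                      # keystone token, elided

    consumer_uuid = '00000000-0000-0000-0000-00000000c0c0'  # instance (placeholder)
    provider_uuid = '00000000-0000-0000-0000-00000000beef'  # compute node RP (placeholder)

    payload = {
        'allocations': [
            {
                'resource_provider': {'uuid': provider_uuid},
                'resources': {'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 10},
            },
        ],
    }

    resp = requests.put(
        '%s/allocations/%s' % (PLACEMENT, consumer_uuid),
        json=payload,
        headers={'X-Auth-Token': TOKEN},
    )
    print(resp.status_code)  # 204 on success in this era of the API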
*** amotoki has joined #openstack-nova15:51
*** mnestratov|2 has joined #openstack-nova15:56
*** amotoki has quit IRC15:56
*** Guest21909 has joined #openstack-nova15:58
*** mvk has quit IRC15:58
*** dave-johnston has joined #openstack-nova15:59
*** ijw has joined #openstack-nova16:00
johnthetubaguycdent: fair point, just wondering if we can make baby steps in that direction, while we still have the middle ground16:03
*** efoley__ has joined #openstack-nova16:06
cdentjohnthetubaguy: I'm not super familiar with the existing claim handling, so I'm not too clear on what the options are for steps16:06
*** lpetrut has joined #openstack-nova16:07
openstackgerritChris Dent proposed openstack/nova: Add more debug logging on RP inventory update failures  https://review.openstack.org/41423016:08
openstackgerritChris Dent proposed openstack/nova: Add more debug logging on RP inventory delete failures  https://review.openstack.org/42629016:09
*** efoley_ has quit IRC16:09
*** efoley__ has quit IRC16:12
*** Oku_OS is now known as Oku_OS-away16:12
*** liangy has joined #openstack-nova16:16
superdansdague: are you back today?16:16
sdaguesuperdan: yeh16:16
superdansdague: you wanna look at the mess I've made here? https://review.openstack.org/#/c/424730/16:17
superdanI think the wait for services is probably a good thing we want to avoid a race with the subnode checking back in,16:17
superdanalthough I don't think it's related to the problem we were trying to solve16:18
superdanand I figure that should be pulled out16:18
superdanto a separate patch16:18
superdanbut, the configuring/starting of the placement stuff would be good to validate if you could16:18
*** eharney has quit IRC16:20
sdagueyeh, it seems like that should be separate, and I do agree, based on how long all these phases take, I seriously doubt we'll have a practical issue there16:21
cdentanyone: probably too late but: CORS for placement had a +2 on it a while back and there was some support for getting it in as part of ocata (probably not a huge deal if it doesn't happen): https://review.openstack.org/#/c/392891/16:21
sdaguesuperdan: I saw this patch briefly the other day, how about we bite the bullet now and make a 55_placement top level construct and do the placement service enablement and start there16:21
superdansdague: it would specifically catch the situation where we introduce a thing that broke the subnode entirely and call it out, instead of just trying to decipher it from the failed tempest run, at least16:21
*** burgerk has joined #openstack-nova16:21
sdaguesuperdan: sure, but has that shown up yet?16:22
*** tbachman has quit IRC16:22
superdansdague: meaning have we ever broken the subnode?16:22
sdaguesuperdan: in this flow right now16:23
superdansdague: at one point the service version code had a bug where the subnode couldn't check back in because its service record had timed out and it was told it couldn't check back in and regress the minimum version, yeah16:23
superdansdague: and it took a while to realize that the subnode was running, but never being sent instances16:23
sdagueI get the idea that it can happen, but lets not mix up problems being solved here16:24
superdansdague: unrelated to this flow, although this flow can definitely race with the subnode checking back in after the upgrade, I just haven't seen it happen yet16:24
sdaguewe can always put a check like this back in later if it's happening at any frequency, and even backport it16:24
superdancdent: mriedem: scheduling patch just passed multinode grenade16:24
superdanjohnthetubaguy: ^16:25
cdentamazeballs16:25
*** tbachman has joined #openstack-nova16:25
superdansdague: it just doesn't seem responsible to not check for and catch a thing we know can happen when it's trivial to do, but.. the fight is all gone from me this week16:25
sdaguesuperdan: I just want to fix one thing at a time here and get people unblocked. We can fix these other things later.16:26
superdansdague: I'm not saying that has to be in front of this necessarily16:26
mriedemsuperdan: how is that possible16:26
superdanmriedem: cosmic rays16:26
sdaguebecause it's not like you'll get the wrong test results if it happens, it's just that figuring out what went wrong takes a little longer16:27
superdanmriedem: cdent johnthetubaguy I will get a bite to eat and then do the grenade refactor sdague is asking for16:27
superdanthere's also pep8 and unit fails in the nova side patch16:27
johnthetubaguysuperdan: cool, I can roll back the patch so the hacks are removed, and the tests are passing?16:27
superdansdague: it does mean we run another 50m of tests that we know will fail instead of just bailing early, but yeah16:27
mriedemjroll: throwaway hack yes please16:27
superdanjohnthetubaguy: remove the logging and such, but keep the *correct* min ver check yeah :)16:28
johnthetubaguycdent: sorry, I meant to ask you, did you have any more thoughts on this one: https://review.openstack.org/#/c/40447216:28
johnthetubaguysuperdan: thats my *aim* anyways16:28
sdaguesuperdan: sure, but that makes it an optimization we can fix later, even deep in the freeze, once we see how often it happens16:28
superdanjohnthetubaguy: I wish you luck sir16:28
cdentsuperdan, johnthetubaguy is the issue with making sure the service cache gets updated after each compute node update suitably okay? are we assuming the scheduler will be restarted every time someone upgrades a compute node?16:29
*** david-lyle has joined #openstack-nova16:29
cdentjohnthetubaguy: sorry, I started looking and got distracted by a merge conflict on some logging fixes and lost my place. Is there a summary of what we're trying to resolve?16:30
johnthetubaguycdent: I was thinking you needed to SIG_HUP for the RPC API change anyways, but it might be the first time16:30
johnthetubaguycdent: the -1 comment on the patch should do it16:30
superdancdent: that's the assumption that my grenade service check ensures is safe, and which sdague wants to wait on since we haven't seen it yet, even though it's 100% possible16:30
cdentsig hup won't do it, as there's no reset handler?16:30
superdancdent: so I'll pull that out and do it separately and if we start to break the world after this merges, we'll have it ready16:30
johnthetubaguycdent: basically, seems like the instance allocations will fail for ironic nodes that have resource_classes defined16:30
superdancdent: correct, there's no reset handler, which I did ask for, but we can do after as well, and won't help the gate anyway16:31
johnthetubaguycdent: ah, I missed that16:31
johnthetubaguysuperdan: +1, but lets keep that separate16:31
cdentsuperdan: k, I wasn't thinking in terms of the gate, but you know, like real people ;)16:31
*** ralonsoh has quit IRC16:31
superdanjohnthetubaguy: yeah, I was going to pull it out regardless, but I was going to keep it in the path of this set, but I won't now16:32
superdancdent: yes, which is why I asked for the reset thing initially16:33
cdentjohnthetubaguy: tacking back to that allocations thing now16:34
*** mfeoktistov|2 has quit IRC16:34
mriedemis someone fixing up the nova side scheduler change? because if they are, can they address the exception wording comment in https://review.openstack.org/#/c/425806/ ?16:36
johnthetubaguycdent: ack16:37
mriedemsnikitin: are you saying the diagnostics patches were all approved yesterday before nit fixing?16:38
mriedemor just had +2s on them?16:38
*** bvanhav has quit IRC16:38
openstackgerritJim Rollenhagen proposed openstack/nova: DNM: hack ironic with resource providers  https://review.openstack.org/42629616:38
jrollmriedem: enjoy16:38
* figleaf is back, but in need of coffee16:39
*** annegentle has joined #openstack-nova16:39
mriedemjroll: cool hopefully i can load that sometime today16:41
jrollheh16:41
*** chyka has joined #openstack-nova16:41
*** mdrabe has quit IRC16:41
*** tbachman has quit IRC16:43
sdaguecdent: there are many ways that things can go wrong. My biggest concern is that we don't have a failed software stack that provides a passing result. As long as we don't have that, our cost is debug time and time until test completion.16:48
sdaguebecause we have a specific test that attempts to schedule to all nodes, and a min node count, we have one test that points in the right direction already, so debug time isn't hugely negatively impacted16:49
sdagueand test run time until failure is basically +15m here16:49
cdentsdague: I'm not sure which of several contexts to put that response in?16:49
*** mdrabe has joined #openstack-nova16:49
figleafjohnthetubaguy: can you summarize the sticking points in https://review.openstack.org/#/c/404472 for your -1?16:50
cdentjohnthetubaguy: are you concerned about the situation of old style ironic allocations being deleted but then nothing allocating the new style? I'm still confused (apparently like figleaf too)16:50
sdaguecdent: about why I said don't bother with this - https://review.openstack.org/#/c/424730/12/projects/60_nova/resources.sh@136 at the moment16:50
cdentsdague: I hadn't made any comments with respect to that?16:51
sdaguecdent: ok, well a bunch of the superdan cross comments were about that16:51
johnthetubaguycdent: figleaf: the bit I don't understand is who makes allocations for ironic instances, making use of the new resource class16:51
sdagueanyway, lunch, biab16:51
cdentjohnthetubaguy: nobody16:51
cdentwhich is why the old allocations will never be cleaned up16:52
cdentbecause InventoryInUse will never get raised16:52
mriedemjroll: CUSTOM_LOL eh16:52
johnthetubaguycdent: figleaf: it looks like we're still trying to make the old style allocations in the resource tracker16:52
johnthetubaguycdent: figleaf: which should fail I assume, as that resource class isn't present for that resource provider any more16:53
superdansdague: if you mean cross in the "unhappy" sense, you're misinterpreting my exhaustion16:53
cdentI think it was "cross" as in "interleaved"16:53
openstackgerritJohn Garbutt proposed openstack/nova: Scheduler calling the Placement API  https://review.openstack.org/41796116:53
superdansdague: I think that check should be in there given the requirement we're adding that it be right at scheduler start, but I'm okay to wait until later if you think it's best16:53
superdancdent: okay, cool16:53
cdentjohnthetubaguy: why are they not there anymore? (not asserting they are, just tracing your thought process)16:54
openstackgerritStephen Finucane proposed openstack/nova: console: Move proxies to 'console/proxy'  https://review.openstack.org/40819216:55
johnthetubaguycdent: because the code now only creates inventory entries for the new custom resource class16:55
*** unicell has joined #openstack-nova16:55
*** unicell has quit IRC16:55
*** unicell has joined #openstack-nova16:55
figleafjohnthetubaguy: existing ironic nodes won't have the custom resource class set16:56
cdentjohnthetubaguy: so if a new ironic thing comes into existence, with a defined, resource class, that's when we run into trouble?16:56
figleafjohnthetubaguy: so they'll get handled like they were vms16:56
*** bvanhav has joined #openstack-nova16:56
johnthetubaguycdent: yes16:56
figleafjohnthetubaguy: ah16:56
figleafjohnthetubaguy: so your concern is that we don't have the ironic-aware bits to select a resource provider?16:56
mriedemandreykurilin: your upper-constraints dreams have come true https://review.openstack.org/#/c/426116/5/upper-constraints.txt@33216:57
andreykurilinmriedem: \o/16:57
openstackgerritSarafraj Singh proposed openstack/nova: Adopts keystoneauth with glance client.  https://review.openstack.org/41263416:57
johnthetubaguyfigleaf: ish, its basically that this method will fail (looking for link)16:57
johnthetubaguyfigleaf: ah, so this will fail, when there is a new style ironic resource provider being used by a new instance that is created: https://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L66416:58
johnthetubaguyfigleaf: at least I think it will16:58
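Restating that failure mode as data rather than nova code: once a new-style ironic resource provider only has inventory for its custom class, an allocation built from the flavor's VCPU/MEMORY_MB/DISK_GB cannot be satisfied by that provider, so the write gets rejected. The class name and numbers below are invented for illustration.

    # Illustrative data only -- no real API calls.
    # Inventory reported for a "new style" ironic resource provider:
    inventory = {'CUSTOM_BAREMETAL_GOLD': {'total': 1}}

    # Allocation the resource tracker still tries to write for an instance,
    # derived from the flavor's vcpus/ram/disk:
    requested = {'VCPU': 4, 'MEMORY_MB': 16384, 'DISK_GB': 100}

    missing = sorted(rc for rc in requested if rc not in inventory)
    print(missing)  # all three classes are missing -> placement rejects the write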
openstackgerritSarafraj Singh proposed openstack/nova: Adopts keystoneauth with glance client.  https://review.openstack.org/41263416:58
sfinucanInteresting problem here - I've backported a bugfix that's exposed a latent Python 3 issue in the code https://review.openstack.org/#/c/42508716:58
johnthetubaguyfigleaf: cdent: maybe a quick google hangout would make this easier to talk through?16:59
sfinucanWonder if I should fix the Python 3 issue or drop the test?16:59
*** markus_z has quit IRC17:00
figleafjohnthetubaguy: maybe, but I'm switching context to follow the code through the different parts17:01
figleafjohnthetubaguy: I can't keep it all straight while working on something else :)17:02
*** sahid has quit IRC17:02
johnthetubaguyfigleaf: no worries, it took me a good while to trace all that back too17:02
*** slaweq has joined #openstack-nova17:02
*** raunak has joined #openstack-nova17:03
superdansdague: I have to merge something in devstack to make grenade consider a new projects/XX_thing plugin?17:05
cdentjohnthetubaguy: will objects.InstanceList.get_by_host_and_node return ironic things?17:06
johnthetubaguycdent: I believe it does17:06
superdancdent: it'll return ironic instances yeah17:07
cdentk, thanks johnthetubaguy and superdan, trying to properly trace the code (as figleaf said, it is meandering...)17:07
superdansdague: oh, nm I think i see17:09
*** dtp has joined #openstack-nova17:09
*** yamahata has quit IRC17:11
openstackgerritJohn Garbutt proposed openstack/nova: Scheduler calling the Placement API  https://review.openstack.org/41796117:12
openstackgerritJohn Garbutt proposed openstack/nova: Block starting compute unless placement conf is provided  https://review.openstack.org/42580617:12
*** nic has joined #openstack-nova17:12
*** slaweq has quit IRC17:12
mriedemjohnthetubaguy: ah i see you updated it :)17:13
mriedemwas just leaving comments, will finish that up on the older PS17:13
johnthetubaguymriedem: hopefully, that was a rebase, but I think it was superdan 's change that failed to merge, but never mind17:13
mriedemjohnthetubaguy: just posted comments in PS3617:15
jrollmriedem: sure why not :P17:16
johnthetubaguymriedem: I thought I had removed all that :S17:16
mriedemjohnthetubaguy: want to update the wording on https://review.openstack.org/#/c/425806/4/nova/exception.py ?17:17
mriedemwhile you're at it17:17
johnthetubaguymriedem: aye17:18
figleafjohnthetubaguy: cdent: so update_instance_allocation() should only be called for "old-style" allocations. create_provider_allocations() should be called for "new-style" (i.e., ironic with custom RC)17:18
superdanmriedem: johnthetubaguy I'm going to avoid rechecking the nova scheduler patch until I have reasonable faith in the refactor to move placement into its own top-level17:19
superdanjust FYI17:19
jrollMore than one endpoint exists with the name 'placement'. <- not cool, devstack, not cool17:19
figleafjohnthetubaguy: cdent: what it looks like is that we need to make sure that the update_instance_allocation() calls are all checked to make sure that they aren't called for new ironic nodes17:19
mriedemjohnthetubaguy: i'll go through PS38 too17:20
johnthetubaguyfigleaf: but there are no new calls right now, so surely we need someone to claim the new resource class?17:20
figleafjohnthetubaguy: I only see two places where update_instance_allocations() is called in the RT.17:21
johnthetubaguyfigleaf: I would rather update_instance_allocation() was able to know from the instance what resources it should be claiming, and it just claims the correct ones, for now thats a bit hard coded I guess, but eventually its the same logic the scheduler has to do when requesting the correct resource class17:21
figleafjohnthetubaguy: yeah, this is all transitional code17:22
figleafjohnthetubaguy: we wanted to get ironic handled before we are able to handle all types of RPs17:22
figleafjohnthetubaguy: can you think of a test that would fail now with ironic?17:23
johnthetubaguyfigleaf: add resource class on ironic node, boot instance that will land there, resource claim fails17:23
johnthetubaguyfigleaf: at least, thats the way it looks to me17:23
cdentfigleaf, johnthetubaguy my read of the code in the rt ironic inventory patch is that johnthetubaguy is right: it is possible to reach update_instance_allocation with a compute node and an instance and depending on what "compute_node" is, the allocations will either fail, or apply to the nova-compute node, incorrectly17:23
cdentbecause the data at that point is about resource usages (vcpu etc)17:24
*** Jeffrey4l_ has quit IRC17:24
*** unicell has quit IRC17:24
johnthetubaguyyeah, instead of allocating for vcpu, it should be for the custom resource class, I think17:24
*** Jeffrey4l_ has joined #openstack-nova17:24
johnthetubaguyoh balls...17:25
cdentwhich should mean that the ironic "instance" is not included in the loop that is calling update_instance_allocation17:25
cdentbecause it isn't really an instance17:25
cdent(in that context)17:25
johnthetubaguyif we merge the code to query placement, we will never land on an ironic node in the first place, unless it is reporting VCPU resources17:25
figleafjohnthetubaguy: that's a known issue17:26
cdent"oh balls" means we've passed the "bugger" state of affairs17:26
johnthetubaguycdent: heh, ack17:26
figleafjohnthetubaguy: which is why ironic has to keep reporting vm-type resources until Pike17:26
johnthetubaguyfigleaf: but this code doesn't do that, I thought17:26
cdentbrb17:27
figleafjohnthetubaguy: digging into the claim stuff - haven't looked at that in a while17:27
johnthetubaguyfigleaf: actually, I am changing my mind on the fix here, I think its probably just that we should still be reporting VCPUs, we can't drop those till we schedule based on the new resources17:28
*** annegentle has quit IRC17:29
figleafjohnthetubaguy: yes, that's been the plan17:29
*** rmart04 has quit IRC17:29
johnthetubaguyfigleaf: but the code only reports the custom resource classes, or did I read that wrongly?17:30
*** annegentle has joined #openstack-nova17:30
cdentjohnthetubaguy, figleaf: can we restate the problem again (I know we keep doing that, but I think it is useful): If we released this code today without any change it would be wrong because:17:30
figleafcdent: ...once an ironic node reports resources with a custom resource class, all the VCPU etc. stuff gets removed17:31
figleafcdent: ...and we have no way to then select that ironic node17:31
figleafjohnthetubaguy: cdent: Does that sound right?17:32
openstackgerritJohn Garbutt proposed openstack/nova: Block starting compute unless placement conf is provided  https://review.openstack.org/42580617:32
cdent"reports resources" == writes inventory?17:32
figleafyeah17:32
*** sree has joined #openstack-nova17:33
*** slaweq has joined #openstack-nova17:33
*** sree has joined #openstack-nova17:33
figleafcdent: the _cleanup_ironic_legacy_allocations() method removes all the old-style inventory17:34
cdent(if there was any)17:34
mriedemthe funny thing is, ironic + placement in newton doesn't work anyway17:35
figleafcdent: well, unless the node is newly-added, there will be17:35
mriedemi.e. https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/newton+topic:bug/164731617:35
cdentfigleaf: right, but when a node is newly added the same problem exists: there's no way to allocate or claim inventory?17:35
figleafmriedem: yeah - "funny"17:35
*** tbachman has joined #openstack-nova17:35
johnthetubaguymriedem: its almost like we need some ironic testing17:36
mriedemjohnthetubaguy: we have it in ocata17:36
mriedemjust not in newton where placement was optional17:36
johnthetubaguymriedem: ah17:36
mriedemironic + placement ci was broken for a few days in master17:36
figleafcdent: if a node is added and has a custom RC, its inventory will be added new-style17:36
mriedemwhich is why i'm backporting that series of fixes17:36
cdentfigleaf: yes, but you can't allocate against it17:37
jrollbah, no '-' in custom resource classes? :|17:37
figleafcdent: and we have no way to select that node now17:37
cdentand that too17:37
cdentfeh17:37
mriedemjroll: _ is far superior17:37
cdentjroll: just upper-case letters and numbers17:37
jrolland _17:37
*** sree has quit IRC17:38
jrollkinda lame IMO, but alas17:38
cdentjroll: yeah, but that's just to keep you confused so best to just leave it out ...17:38
* jroll continues trying to make this work in devstack17:38
johnthetubaguyfigleaf: cdent: so the summary, either we have VCPUs, claim them, and schedule on them, OR we have to schedule on custom_resource classes and claim them, current state: schedule VCPU, report custom, claim VCPU (except for existing instances that we tidy up to claim the new resource class)17:38
cdentI _think_ the constraint came into play when we didn't even have custom resource classes, not sure.17:38
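A small sketch of the naming constraint jroll hit: custom resource classes carry the CUSTOM_ prefix and, per the discussion above, allow only upper-case letters, digits and underscores, so something like baremetal-gold has to be normalized. This is a standalone illustration, not the exact helper nova or ironic use.

    import re

    def normalize_custom_rc(name):
        # Upper-case, replace anything outside A-Z/0-9 with '_', prefix CUSTOM_.
        return 'CUSTOM_' + re.sub(r'[^A-Z0-9]', '_', name.upper())

    print(normalize_custom_rc('baremetal-gold'))  # CUSTOM_BAREMETAL_GOLD
    print(normalize_custom_rc('small.lol'))       # CUSTOM_SMALL_LOL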
openstackgerritMatt Riedemann proposed openstack/nova: Block starting compute unless placement conf is provided  https://review.openstack.org/42580617:39
*** Apoorva has joined #openstack-nova17:40
*** liangy has quit IRC17:41
jrollwell, unless I did something awfully wrong, the ironic code gives me 2017-01-27 17:39:56.654 TRACE nova.compute.manager AttributeError: 'SchedulerClient' object has no attribute 'set_provider_inventory'17:41
*** Apoorva has quit IRC17:41
*** Apoorva has joined #openstack-nova17:41
jrollnope, I'm at b6339a876a440831903445bd4b10afb732e91335 which is latest here17:42
johnthetubaguyhmm, interesting17:42
*** mriedem is now known as mriedem_beach17:42
cdentthis sounds like a job for functional tests17:43
figleafjroll: there are some places I found where SchedulerClient and SchedulerClient.reportclient are treated as the same17:43
figleafIt's in reportclient17:43
*** ociuhandu has quit IRC17:43
jrollfigleaf: just reporting this code is broken :)17:44
jrollcdent: I'm working on integration stuff atm17:44
figleafjroll: heh, can relate to that17:45
*** vladikr has quit IRC17:45
cdentjroll: which is great, but there's something in the middle: more functional testing while we are developing the features (just setting a bit for future efforts, not moaning)17:46
*** jpena is now known as jpena|off17:46
jrollcdent: sure, don't disagree17:46
* jroll way more familiar with an actual deployment than nova's functests17:46
cdentjroll: shit bro, it's not you who I think should have done functional tests on this stuff. It's me and figleaf and jaypipes17:47
jrollcdent: heh. it's all of us. I just want to help get this working17:48
cdentfigleaf: are you working on dealing with the traceback jroll's reported, or should I do that?17:49
jrollfigleaf: okay, that worked, now I should see an inventory record that is mapped to that custom resource class, yeah?17:49
figleafjroll: sorry, I'm looking at 6 different bits of code, trying to keep them straight :(17:49
figleafjroll: what was it that you were doing when you hit the bug17:50
figleaf?17:50
cdenttraceback on the review17:50
jrollfigleaf: so I've got devstack. ironic stuff has resource classes. using this code.17:50
jrollfixed traceback as mentioned (leaving a comment on that now)17:50
jrolllooks like things go through but I don't see anything in inventory table that references the custom RP17:51
jroller, s/RP/RC/17:51
*** Swami has joined #openstack-nova17:51
* jroll debugging now17:53
*** ijw has quit IRC17:54
*** unicell has joined #openstack-nova17:54
*** ijw has joined #openstack-nova17:54
*** lucasagomes is now known as lucas-afk17:55
*** derekh has quit IRC17:55
*** yamahata has joined #openstack-nova17:56
*** ijw_ has joined #openstack-nova17:56
*** ijw has quit IRC17:56
figleafjroll: ugh - looks like there are more of those all over the place17:56
*** owalsh is now known as owalsh-afk18:00
cdentfigleaf, jroll, johnthetubaguy: sorry guys I gotta go. I haven't eaten enough and starting to fade badly. If there's something I can do on sunday or in the UK morning monday, let me know, either by email to os-dev or direct to me.18:01
*** eharney has joined #openstack-nova18:01
jrollcdent: no worries man, I'll just leave anything I find on gerrit18:02
*** harlowja has joined #openstack-nova18:02
jrolllooks like it submits inventory correctly18:02
johnthetubaguycdent: figleaf: I am going to run soon too, similar comments about monday morning18:02
*** hfu has quit IRC18:02
cdentbest of luck18:02
*** cdent has quit IRC18:02
johnthetubaguyjroll: well, not sure, once we get bauzas's change in, you will no longer be able to pick ironic nodes if the current proposal merges18:02
johnthetubaguyfigleaf: I attempted to add a summary comment about what I think we should do18:02
jrolljohnthetubaguy: ... O_o18:03
figleafjohnthetubaguy: cdent: ok, thanks for opening up this can of worms! (Seriously! It would have really sucked to let this out like this)18:03
* jroll will read that18:03
johnthetubaguythere may be 40% nonsense in my current suggestion, but lets see if that makes things clearer or not18:03
*** owalsh-afk has quit IRC18:04
*** vladikr has joined #openstack-nova18:05
jrolljohnthetubaguy: lgtm, I think18:05
johnthetubaguyfigleaf: are you on board with the plan I sketched out in https://review.openstack.org/#/c/404472?18:06
*** owalsh-afk has joined #openstack-nova18:06
*** sambetts is now known as sambetts|afk18:07
*** owalsh-afk is now known as owalsh18:07
*** jose-phillips has joined #openstack-nova18:08
*** vsaienko has joined #openstack-nova18:10
*** tbachman has quit IRC18:10
*** tbachman has joined #openstack-nova18:12
*** weshay is now known as weshay_brb18:12
*** ijw_ has quit IRC18:12
openstackgerritJohn Garbutt proposed openstack/nova: Scheduler calling the Placement API  https://review.openstack.org/41796118:13
figleafjohnthetubaguy: sorry, working on a different fix atm. Looking now...18:13
johnthetubaguyfigleaf: so as I understood it, this is one of the two remaining FFEs, hence my focus on it18:14
jrollfwiw I'm just getting lost in this code at this point18:14
* jroll bows out and takes a break for a bit18:15
figleafjohnthetubaguy: I was thinking similarly - getting rid of the cleanup of old-style inventory18:15
figleafjroll: welcome to the club!!18:15
figleafjohnthetubaguy: and trying to ensure that both old- and new-style can coexist18:16
figleafjohnthetubaguy: and then add back the cleanup in Pike, remove it in Queens18:16
johnthetubaguyfigleaf: cool, yeah, I think we just report the new resources, for the first step18:16
*** mvk has joined #openstack-nova18:16
johnthetubaguyfigleaf: yeah, the clean up can only be added after we start scheduling and claiming using the new resource class, I think, we can probably drop reporting the old ones at that point, but not 100% sure18:17
johnthetubaguyactually the scheduling can happen first, then we can start claiming the new style, then we drop reporting the legacy resource classes18:18
johnthetubaguyI think18:18
*** psachin has joined #openstack-nova18:19
*** tbachman has quit IRC18:20
*** JoseMello has joined #openstack-nova18:21
*** vsaienko has quit IRC18:22
openstackgerritDan Smith proposed openstack/nova: Avoid redundant call to update_resource_stats from RT  https://review.openstack.org/42430518:23
*** tbachman has joined #openstack-nova18:24
*** jose-phillips has quit IRC18:26
*** yhvh has quit IRC18:27
*** khushbu has joined #openstack-nova18:28
*** mnestratov|2 has quit IRC18:38
*** khushbu has quit IRC18:38
*** david-lyle has quit IRC18:39
openstackgerritEd Leafe proposed openstack/nova: placement: RT now adds proper Ironic inventory  https://review.openstack.org/40447218:41
figleaf^^ This just fixes the client bugs jroll found18:42
*** dharinic is now known as dharinic|lunch18:42
*** slaweq has quit IRC18:45
*** dave-johnston has quit IRC18:45
*** owalsh has quit IRC18:47
*** slaweq has joined #openstack-nova18:47
*** owalsh has joined #openstack-nova18:47
*** slaweq has quit IRC18:52
*** slaweq has joined #openstack-nova18:52
*** bvanhav_ has joined #openstack-nova18:53
*** bvanhav has quit IRC18:54
*** ijw has joined #openstack-nova18:56
*** beagles is now known as beagles-biab18:58
*** liangy has joined #openstack-nova19:00
*** dtp is now known as dtp-afk19:04
*** weshay_brb is now known as weshay19:06
*** unicell has quit IRC19:12
*** hongbin has joined #openstack-nova19:12
openstackgerritJohn Garbutt proposed openstack/nova: Added instance.reboot.error to the legacy notifications  https://review.openstack.org/41181619:15
*** Sukhdev has joined #openstack-nova19:18
*** slaweq has quit IRC19:19
*** tbachman has quit IRC19:26
*** dharinic|lunch is now known as dharinic19:27
bauzassuperdan: johnthetubaguy: I'm now in the train back to Grenoble, what's the current situation for the scheduler change?19:29
superdanbauzas: sdague wants the grenade change refactored, so I'm working on that19:30
superdanbauzas: the scheduler thing passed multinode grenade before I started, johnthetubaguy was going to fix up the pep8 and unit tests19:30
bauzasokay, thanks to both of you19:31
bauzasthe pep8 issue is really easy19:31
sdaguesuperdan: yeh, the placement add patch seems good now, pending tests19:32
bauzasnot sure I understand an unittest problem19:32
superdansdague: I think it's still not working yet, but getting closer19:32
*** Matias has quit IRC19:32
sdaguesuperdan: ok19:32
superdanbauzas: see all the red FAILED on the patch19:32
*** Matias has joined #openstack-nova19:33
superdanI have to run out to meet someone in a little bit, but I will pick it up when I get back19:33
superdanthis person says they have a nice minimum wage job flipping burgers and I am SUPER INTERESTED19:33
figleafsuperdan: much less stress, for sure19:35
superdanaye19:35
*** unicell has joined #openstack-nova19:42
*** eglynn has quit IRC19:42
*** jheroux has quit IRC19:43
*** tbachman has joined #openstack-nova19:47
*** ijw_ has joined #openstack-nova19:47
bauzasoh man, the internet connection in my train is so terrible :(19:48
*** jose-phillips has joined #openstack-nova19:49
*** ijw has quit IRC19:49
*** tbachman has quit IRC19:52
openstackgerritArtom Lifshitz proposed openstack/nova: Pass APIVersionRequest to extensions  https://review.openstack.org/42587619:52
superdanwell, it upgraded placement without exploding, we'll see if it actually works19:52
*** d-bark has joined #openstack-nova19:53
*** david-lyle has joined #openstack-nova19:53
bauzassuperdan: so you modified grenade by adding a new placement service? heh, okay19:53
*** david-lyle has quit IRC19:53
bauzasI thought it was within nova, but fine by me19:54
*** david-lyle has joined #openstack-nova19:54
bauzasFWIW, I'd need to rebase the top patch since matt wrote a new PS for the bottom one19:55
*** unicell has quit IRC19:55
openstackgerritArtom Lifshitz proposed openstack/nova: Pass APIVersionRequest to extensions  https://review.openstack.org/42587619:55
openstackgerritArtom Lifshitz proposed openstack/nova-specs: Fix tag attribute disappearing  https://review.openstack.org/42603019:57
*** vsaienko has joined #openstack-nova19:58
*** Jack_V has quit IRC19:58
*** beagles-biab is now known as beagles20:00
*** psachin has quit IRC20:00
*** david-lyle has quit IRC20:01
*** david-lyle has joined #openstack-nova20:01
*** tbachman has joined #openstack-nova20:01
*** lpetrut has quit IRC20:03
openstackgerritJim Rollenhagen proposed openstack/nova: DNM: hack ironic with resource providers  https://review.openstack.org/42629620:10
jrollfigleaf: rebased that on your better patchset ^20:10
figleafjroll: working better?20:10
*** dtp-afk is now known as dtp20:12
jrollfigleaf: I couldn't get it going for some reason, and dealing with other stuff since20:13
jrollsaw you uploaded that so figured I'd kick my test patch off at least20:13
openstackgerritAlex Szarka proposed openstack/nova: Transform instance.add_fixed_ip notification  https://review.openstack.org/33287620:27
*** slaweq has joined #openstack-nova20:28
*** jose-phillips has quit IRC20:29
*** owalsh has quit IRC20:35
*** owalsh has joined #openstack-nova20:35
*** vnovakov has joined #openstack-nova20:36
*** bvanhav_ has quit IRC20:37
*** vsaienko has quit IRC20:43
*** vsaienko has joined #openstack-nova20:47
*** pradk has quit IRC20:47
*** claudiub has joined #openstack-nova20:47
*** armax has quit IRC20:49
*** claudiub|3 has quit IRC20:49
*** lpetrut has joined #openstack-nova20:50
*** dtp_ has joined #openstack-nova20:51
*** jose-phillips has joined #openstack-nova20:52
*** dtp has quit IRC20:54
*** sdague has quit IRC20:56
*** jose-phillips has quit IRC20:58
*** smatzek has quit IRC20:58
*** smatzek has joined #openstack-nova20:59
*** liangy has quit IRC21:03
*** haplo37_ has quit IRC21:05
*** sdague has joined #openstack-nova21:07
*** satyar has quit IRC21:08
*** tblakes has quit IRC21:12
*** cleong has quit IRC21:13
*** Apoorva has quit IRC21:15
*** haplo37_ has joined #openstack-nova21:16
*** raunak has quit IRC21:17
*** raunak has joined #openstack-nova21:19
*** dtp_ has quit IRC21:22
*** smatzek has quit IRC21:22
*** karimb has quit IRC21:23
*** tblakes has joined #openstack-nova21:25
*** armax has joined #openstack-nova21:26
superdansdague: even though I did a bunch of work on that grenade patch are you cool with me +W when we've got a good run from the nova patch?21:28
sdaguesuperdan: the one I've got a +2 on?21:30
sdagueif so, yes21:30
superdansdague: yup21:30
sdagueyep21:30
superdanthanks21:31
*** tbachman has quit IRC21:31
*** Jeffrey4l_ has quit IRC21:35
*** Apoorva has joined #openstack-nova21:40
*** dimtruck is now known as zz_dimtruck21:40
*** vladikr has quit IRC21:40
*** Jeffrey4l_ has joined #openstack-nova21:41
*** pradk has joined #openstack-nova21:41
*** tbachman has joined #openstack-nova21:42
*** catintheroof has quit IRC21:44
*** catintheroof has joined #openstack-nova21:45
*** unicell has joined #openstack-nova21:46
*** unicell has quit IRC21:46
*** catintheroof has quit IRC21:49
*** unicell has joined #openstack-nova21:50
*** ijw_ has quit IRC21:51
*** raunak has quit IRC21:51
*** raunak has joined #openstack-nova21:54
*** ducttape_ has joined #openstack-nova21:54
*** JoseMello has quit IRC21:55
diana_clarkesuperdan: What do you mean by "unmapped due to normally transient state"? https://github.com/openstack/nova/blob/e57714f9f162f60e77222b1ae8e1e78d2cb10a10/nova/cmd/manage.py#L134821:59
diana_clarkesuperdan: asking as I try to respond to this review feedback: https://review.openstack.org/#/c/421436/22:00
*** Guest21909 has quit IRC22:00
*** breitz has quit IRC22:01
*** breitz has joined #openstack-nova22:02
*** pradk has quit IRC22:02
*** rmart04 has joined #openstack-nova22:03
*** rmart04 has left #openstack-nova22:04
superdandiana_clarke: like before it's scheduled22:09
diana_clarkesuperdan: So return code 2 will eventually fix itself, correct? It has an instance mapping, just no cell mapping.22:11
*** jose-phillips has joined #openstack-nova22:11
superdandiana_clarke: yeah22:13
diana_clarkeSo perhaps that should read: "Returns 0 when the instance is successfully mapped to a cell, 1 if the instance is not mapped to a cell, and 2 if the instance is in the process of being mapped to a cell."22:13
superdanit's a really small window22:13
superdansure22:13
*** vishwanathj has joined #openstack-nova22:13
diana_clarkeso return code 1 is an error case, right?22:13
superdanI mean if you like less awesome comments22:13
diana_clarkesuperdan: not changing your comment, trying to write the man page22:14
diana_clarke;)22:14
superdanthey both mean that the instance isn't functional,22:14
*** vishwanathj has quit IRC22:14
superdanand 1 could be a transient case as well I think, let me think about it22:14
superdanI know, I meant "awesome" in the snarky way22:14
*** vishwanathj has joined #openstack-nova22:15
openstackgerritDan Smith proposed openstack/nova: Scheduler calling the Placement API  https://review.openstack.org/41796122:15
openstackgerritDan Smith proposed openstack/nova: Block starting compute unless placement conf is provided  https://review.openstack.org/42580622:15
*** thorst_ has quit IRC22:16
diana_clarkeOr asking differently (like johnthetubaguy), what action should be taken for return codes 1 & 2?22:16
superdandiana_clarke: okay, so from here forward, 1 is always an error, but not one we expect to happen, because:22:16
*** zz_dimtruck is now known as dimtruck22:16
superdanin the past, we didn't have instance mappings, so if you have an instance in the database and no mapping, it was created on an older release and you didn't do your homework to patch up all the mappings yet22:17
*** vishwanathj has quit IRC22:17
superdanso the point of that tool is really to verify that you created instance mappings for all of them22:17
superdanhowever, if you run that while things are going on, you could get retval=2 due to a transient luck-based event22:17
superdanbut doing the online migrations should fix both cases up for any non-booting instances22:18
superdanso the answer to "what to do" is run the map command22:18
superdanmap_instances for 1 and map_cell_and_hosts for 2 (modulo transientness)22:19
superdanclear as mud?22:19
diana_clarkesuperdan: I'm going to re-read that 10 times in a row, and then hopefully it will be ;) much appreciated!22:21
superdanlet me say it again linearly now that I have it in my head...22:21
superdanIf you get retval=1, that means you need to run map_instances as some instances (which were created with a previous version) don't have instance mappings yet22:22
superdanif you get retval=2, that may be because you're racing with instances that are booting, or it may mean that you are missing cell mappings, which means you need to run map_cell_and_hosts22:22
*** edmondsw_ has quit IRC22:23
superdansomeone could modify that code to not return 2 if the instance has a building-related task state such that we know it's the former case for retval=222:23
*** edmondsw has joined #openstack-nova22:23
diana_clarkenice, thx!22:24
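A hedged sketch of how an operator script might act on the return codes superdan just described. The verify_instance, map_instances and map_cell_and_hosts names come from the discussion above; exact flags and behaviour may differ, so treat this as illustrative only.

    # Illustrative operator script, assuming the return-code semantics above:
    # 0 = mapped, 1 = no instance mapping (run map_instances),
    # 2 = no cell mapping yet (possibly transient; else run map_cell_and_hosts).
    import subprocess
    import sys

    instance_uuid = sys.argv[1]

    rc = subprocess.call(
        ['nova-manage', 'cell_v2', 'verify_instance', '--uuid', instance_uuid])

    if rc == 0:
        print('instance is mapped to a cell')
    elif rc == 1:
        print('missing instance mapping; run: nova-manage cell_v2 map_instances')
    elif rc == 2:
        print('no cell mapping (maybe still booting); '
              'otherwise run: nova-manage cell_v2 map_cell_and_hosts')
    else:
        print('unexpected return code: %d' % rc)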
*** gouthamr has quit IRC22:27
*** armax has quit IRC22:27
*** edmondsw has quit IRC22:28
*** baoli has quit IRC22:28
*** slaweq_ has quit IRC22:30
*** esberglu_ has quit IRC22:36
*** esberglu has joined #openstack-nova22:36
*** esberglu has quit IRC22:41
*** jaosorior has quit IRC22:42
openstackgerritEd Leafe proposed openstack/nova: placement: RT now adds proper Ironic inventory  https://review.openstack.org/40447222:43
figleafjohnthetubaguy: cdent: ^^ adds the workarounds for the problem we identified earlier.22:44
*** xyang1 has quit IRC22:44
*** ijw has joined #openstack-nova22:47
*** ijw has quit IRC22:47
*** ijw has joined #openstack-nova22:47
mriedem_beachhola22:48
*** mriedem_beach is now known as mriedem22:48
mriedemdonde esta el progresso?22:48
superdanwhere is the progress?22:49
mriedemsi22:49
mriedemgrenade patch is +W22:49
superdanis it?22:49
mriedemi thought you were going to +W it22:49
superdanI will when I see the nova one yeah22:49
mriedemoh22:49
superdanI think the nova patch was running against a stale version of the grenade one,22:50
superdanso it's going again now22:50
superdanjohn fixed up a bunch of the unit and pep8 issues22:50
mriedemyeah i see the series must have been rebased22:50
mriedemi'll review that all again22:50
*** burgerk has quit IRC22:50
*** esberglu has joined #openstack-nova22:50
*** gouthamr has joined #openstack-nova22:51
superdanstill getting NoValidHost on 11 tests  on regular grenade ...22:51
superdanjeez22:51
superdancompute is still not starting because of the placement config detection22:52
mriedemi thought things were peachy this morning?22:52
mriedemor was that not valid b/c of bad deps?22:52
superdanyeah, I dunno now, I thought we were past that22:52
*** kfarr has quit IRC22:53
*** Swami_ has joined #openstack-nova22:53
mriedemmaybe changing the grenade patch messed something up?22:53
superdanPS3 worked. but that was the last time22:53
openstackgerritPushkar Umaranikar proposed openstack/nova: Adopts keystoneauth with glance client.  https://review.openstack.org/41263422:54
superdanbut I see no reason why, there's been almost no change since22:54
*** esberglu has quit IRC22:54
*** lpetrut has quit IRC22:55
superdanyeah no placement config in nova.conf22:56
superdangrr22:56
*** Swami has quit IRC22:57
mriedemhmm,22:58
mriedemwe're now deploying placement before nova in grenade right?22:58
superdanyeah22:58
*** vsaienko has quit IRC22:58
mriedemwhich should configure nova.conf on the primary node,22:58
mriedemwhich would actually be newton at that point22:58
mriedemsince it's pre-upgrade22:58
mriedem55 is updating a newton compute i mean22:58
*** jamesdenton has quit IRC22:58
superdanbut we still call the configure bit22:58
mriedemsure,22:59
*** takedakn has joined #openstack-nova22:59
mriedemand that shouldn't get wiped out or anything by the 60_nova part22:59
openstackgerritPushkar Umaranikar proposed openstack/nova: Adopts keystoneauth with glance client.  https://review.openstack.org/41263422:59
*** Swami_ has quit IRC23:00
mriedemhuh, so now both of the nova changes depend on the grenade change23:00
mriedemdoesn't hurt anything23:00
superdanI did that because it ran against a stale grenade patch for some reason23:01
superdanso when I rebased I just added it on23:01
superdandidn't know why23:01
*** tbachman has quit IRC23:01
superdanahhh23:01
superdanI wonder if we still need to do configure_placement_nova_compute in the 60_nova bit23:01
superdannot sure why it wouldn't survive, but that's the part that gets us placement config23:02
superdanalso, what about this? http://logs.openstack.org/06/425806/6/check/gate-grenade-dsvm-neutron-ubuntu-xenial/fb281f1/logs/grenade.sh.txt.gz#_2017-01-27_20_02_44_14223:02
mriedemhttps://review.openstack.org/#/c/424730/15/projects/55_placement/from-newton/upgrade-placement@1723:02
mriedem^ should configure nova-compute on the primary node for placement23:02
superdanI know23:02
superdanbut23:03
superdanafter the run it's not set23:03
*** ducttape_ has quit IRC23:03
superdanoh wait23:03
superdanit doesn't have $NOVA_CONF in the run23:03
*** ducttape_ has joined #openstack-nova23:03
mriedembecause we didn't source the nova stuff yet?23:04
*** slaweq_ has joined #openstack-nova23:04
superdanyeah23:04
mriedemteehee23:04
mriedemi believe i called this ~6 hours ago23:04
superdanyes you did23:04
superdanit's definitely weird doing this out of body but still part of nova23:05
mriedemsuperdan: so do we no longer need the thing you added where it saves off the number of computes before and checks for that after?23:05
superdanmriedem: that is being challenged, so I split it out23:06
superdanwith this set we're introducing a very specific race because the scheduler checks the min vers at startup23:06
superdanwhich is why I think we need it23:06
*** annegentle has quit IRC23:06
superdanbut we haven't hit it yet, so ...23:06
superdanmriedem: this is the standalone patch: https://review.openstack.org/#/c/426310/23:07
superdanwhich I see I have tabs in.. gdi23:08
*** figleaf is now known as edleafe23:09
*** tblakes has quit IRC23:10
*** greghaynes has quit IRC23:10
*** mdrabe has quit IRC23:14
mriedemok the 55_placement thing looks good now23:17
mriedemnow we play the waiting game23:19
*** marst has quit IRC23:20
*** tbachman has joined #openstack-nova23:21
superdancool23:21
superdanpersonally,23:21
superdanI hate the effing waiting game23:21
superdanI've been playing it all week and I'm done with it23:21
*** ducttape_ has quit IRC23:24
*** greghaynes has joined #openstack-nova23:37
openstackgerritPushkar Umaranikar proposed openstack/nova: Adopts keystoneauth with glance client.  https://review.openstack.org/41263423:40
*** jamesdenton has joined #openstack-nova23:41
openstackgerritPushkar Umaranikar proposed openstack/nova: Adopts keystoneauth with glance client.  https://review.openstack.org/41263423:42
*** edmondsw has joined #openstack-nova23:43
*** Swami has joined #openstack-nova23:43
*** jamesden_ has joined #openstack-nova23:44
*** Swami has quit IRC23:44
*** Swami has joined #openstack-nova23:44
*** jamesdenton has quit IRC23:45
mriedemdiana_clarke: just went through https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:man - some comments inline,23:48
mriedemi think there are only a few that need some tweaks23:48
*** Swami_ has joined #openstack-nova23:48
mriedemlike discover_hosts should come after create_cell because you can't map the hosts to a cell w/o a cell23:48
openstackgerritMatt Riedemann proposed openstack/nova: DNM: hack ironic with resource providers  https://review.openstack.org/42629623:50
*** Swami has quit IRC23:52
*** smatzek has joined #openstack-nova23:52
*** greghaynes has quit IRC23:52
diana_clarkemriedem: Yes, thanks a ton! I'll circle back and add more detail etc monday at the latest.23:52
mriedemsuperdan: passed http://logs.openstack.org/61/417961/40/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/5fb6f74/23:53
*** smatzek_ has joined #openstack-nova23:53
superdanlike, yay23:53
mriedemhttp://logs.openstack.org/61/417961/40/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/5fb6f74/logs/new/screen-n-sch.txt.gz#_2017-01-27_23_44_07_73623:54
mriedem2017-01-27 23:44:07.736 12381 DEBUG nova.scheduler.filter_scheduler [req-f76d4b01-c248-4de4-8675-a4cd3cb6d4cb tempest-AttachInterfacesTestJSON-881960388 tempest-AttachInterfacesTestJSON-881960388] Skipping call to placement, as upgrade in progress. _get_all_host_states /opt/stack/new/nova/nova/scheduler/filter_scheduler.py:17923:54
superdanwoot23:54
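The "Skipping call to placement" line above is the upgrade gate being discussed: the scheduler checks the minimum nova-compute service version and keeps using the old database path instead of placement until every compute is upgraded. A simplified sketch of that shape follows; the threshold constant and function names are placeholders, not nova's exact code.

    # Simplified shape of the gate; names and the threshold are placeholders.
    PLACEMENT_MIN_COMPUTE_VERSION = 16

    def get_all_host_states(min_compute_version, placement_providers, db_nodes):
        if min_compute_version < PLACEMENT_MIN_COMPUTE_VERSION:
            # Mixed Newton/Ocata computes: placement data would be incomplete,
            # so skip it and keep using the database as before.
            print('Skipping call to placement, as upgrade in progress.')
            return db_nodes
        # Otherwise restrict host states to nodes placement knows about.
        allowed = {p['uuid'] for p in placement_providers}
        return [n for n in db_nodes if n['uuid'] in allowed]

    if __name__ == '__main__':
        nodes = [{'uuid': 'a'}, {'uuid': 'b'}]
        print(get_all_host_states(15, [{'uuid': 'a'}], nodes))  # both nodes
        print(get_all_host_states(16, [{'uuid': 'a'}], nodes))  # only 'a'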
*** smatzek_ has quit IRC23:54
*** tbachman has quit IRC23:55
*** smatzek_ has joined #openstack-nova23:55
mriedemwonder what the whiny ass grenade + live migration job is complaining about23:55
*** ducttape_ has joined #openstack-nova23:56
mriedemheh wtf http://logs.openstack.org/61/417961/40/check/gate-grenade-dsvm-neutron-multinode-live-migration-nv/8c363cd/logs/devstack-gate-post_test_hook.txt.gz23:56
mriedemtdurakov: ^23:57
mriedemgrenade + live migration apparently never passes23:57
superdanlol23:57
*** smatzek has quit IRC23:57
mriedemhttp://logs.openstack.org/61/417961/40/check/gate-grenade-dsvm-neutron-multinode-live-migration-nv/8c363cd/logs/new/tempest_conf.txt.gz23:57
mriedemblock_migration_for_live_migration = False  live_migration = False23:57
melwittI've noticed that before. when all tests are skipped, it doesn't consider that a success23:58
