Monday, 2016-05-02

*** baoli has quit IRC00:02
*** baoli has joined #openstack-ironic00:07
*** baoli has quit IRC00:16
*** baoli has joined #openstack-ironic00:17
*** mrkelley has quit IRC00:19
*** keedya_ has quit IRC00:21
*** kuthurium has joined #openstack-ironic00:27
*** baoli has quit IRC01:02
*** ekarlso has quit IRC01:33
*** causten_ has joined #openstack-ironic01:48
*** ChrisAusten has quit IRC01:51
*** ekarlso has joined #openstack-ironic01:53
*** linuxgeek has joined #openstack-ironic02:38
*** baoli has joined #openstack-ironic03:14
*** Nisha has joined #openstack-ironic03:18
*** baoli has quit IRC03:19
*** raginbaji is now known as raginbajin03:28
*** linuxgeek has quit IRC03:43
*** links has joined #openstack-ironic04:07
*** Nisha has quit IRC04:07
*** Nisha has joined #openstack-ironic04:21
*** bnemec has joined #openstack-ironic04:21
*** linuxgeek has joined #openstack-ironic04:36
*** vmud213 has joined #openstack-ironic04:37
openstackgerritYushiro FURUKAWA proposed openstack/ironic: Align cleaning behavior in maintenance b/w auto and manual  https://review.openstack.org/29909704:39
*** rcernin has joined #openstack-ironic05:05
*** Sukhdev has joined #openstack-ironic05:14
*** yolanda has joined #openstack-ironic05:32
*** yolanda has quit IRC05:34
*** yolanda has joined #openstack-ironic05:34
*** ChubYann has quit IRC05:35
*** Nisha_away has joined #openstack-ironic05:37
*** Nisha has quit IRC05:40
*** Sukhdev has quit IRC05:55
*** mkovacik has quit IRC05:56
*** causten_ has quit IRC06:01
*** jtomasek has joined #openstack-ironic06:12
*** Nisha_brb has joined #openstack-ironic06:16
*** Nisha_away has quit IRC06:16
*** moshele has joined #openstack-ironic06:16
*** deray has joined #openstack-ironic06:17
*** Nisha_away has joined #openstack-ironic06:23
*** Nisha_brb has quit IRC06:27
*** itamarl has joined #openstack-ironic06:56
*** fragatina has joined #openstack-ironic07:08
*** Nisha_away has quit IRC07:11
*** daemontool has joined #openstack-ironic07:11
*** yolanda has quit IRC07:12
*** Nisha has joined #openstack-ironic07:12
*** ifarkas has joined #openstack-ironic07:12
*** vmud213 is now known as vmud213_brb07:15
*** vmud213_brb is now known as vmud21307:16
*** daemontool has quit IRC07:21
*** daemontool has joined #openstack-ironic07:21
openstackgerritvinay kumar muddu proposed openstack/ironic-python-agent: Fix local boot issue with fedora in uefi mode  https://review.openstack.org/30214307:32
*** daemontool has quit IRC07:33
*** tesseract has joined #openstack-ironic07:33
*** tesseract is now known as Guest3759007:34
*** ohamada has joined #openstack-ironic07:35
*** yolanda has joined #openstack-ironic07:36
*** daemontool has joined #openstack-ironic07:37
*** ohamada has quit IRC07:38
*** ohamada has joined #openstack-ironic07:39
*** fragatina has quit IRC07:46
*** derekh has joined #openstack-ironic07:57
*** zzzeek has quit IRC08:00
*** zzzeek has joined #openstack-ironic08:00
*** praneshp has quit IRC08:03
*** dmk0202 has joined #openstack-ironic08:06
*** deray has quit IRC08:07
*** derekh has quit IRC08:19
*** deray has joined #openstack-ironic08:23
*** mkovacik has joined #openstack-ironic08:24
*** jistr has joined #openstack-ironic08:26
*** vmud213 has quit IRC08:39
*** daemontool has quit IRC08:54
*** divya has joined #openstack-ironic09:25
*** ohamada_ has joined #openstack-ironic09:30
*** ohamada has quit IRC09:30
*** deray has quit IRC09:35
*** deray has joined #openstack-ironic09:38
*** fragatina has joined #openstack-ironic09:46
*** fragatina has quit IRC09:52
openstackgerritAparna proposed openstack/proliantutils: Adds support in hpssa for SSD interface 'Solid State SAS'  https://review.openstack.org/31171310:16
*** irf has joined #openstack-ironic10:17
irfmorning Ironic and all10:17
irfgood news from me ....10:17
irfI am able to boot the machine from woL as well as from Ironic10:17
openstackgerritAparna proposed openstack/proliantutils: Modify minimum disk for RAID 0 in hpssa  https://review.openstack.org/31171410:19
*** vmud213 has joined #openstack-ironic10:36
*** chlong has joined #openstack-ironic10:38
*** Nisha has quit IRC10:40
*** yolanda has quit IRC10:47
*** fragatina has joined #openstack-ironic10:49
*** yolanda has joined #openstack-ironic10:51
*** fragatina has quit IRC10:53
TheJuliaGood morning everyone10:58
*** itamarl has quit IRC11:01
*** irf has quit IRC11:02
*** irf has joined #openstack-ironic11:08
jrollohai TheJulia11:09
TheJuliaGood morning jroll11:09
jrollIRONIC_VM_COUNT=6411:10
jrollthis should be fun :D11:10
TheJuliaHow much ram do you have? :)11:10
jroll128gb, I think?11:11
jrollyep11:11
TheJuliaI would suggest popcorn if I could eat it11:14
*** yolanda has quit IRC11:16
*** yolanda has joined #openstack-ironic11:16
*** yolanda has quit IRC11:29
*** yolanda has joined #openstack-ironic11:30
*** trown|outtypewww is now known as trown11:43
*** e0ne has joined #openstack-ironic11:44
*** raginbajin has quit IRC11:50
divyahi ironicers...11:53
TheJuliagood morning11:55
divyanova boot is failing for physical bare metal ironic node.12:03
divyaso i need to debug this issue, so is there any troubleshooting doc?12:03
TheJuliadivya: Generally the best place to start is to look at ironic node-show output for the node nova attempted to schedule on to12:07
divyaironic node is created successfully12:08
jrolldivya: hi, we have a start on a troubleshooting doc here: http://docs.openstack.org/developer/ironic/deploy/troubleshooting.html12:08
divyahttp://paste.openstack.org/show/495518/12:08
*** wajdi has joined #openstack-ironic12:08
divyathis is the steps i fillowed to create node12:09
jrolldivya: your paste looks like the pxe boot to the deploy ramdisk is failing, I'd suggest looking at the node's console output to start12:09
jrolllikely a network issue12:09
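A minimal sketch of the usual starting points for a failure like this (the node UUID is a placeholder):

    # check what ironic recorded for the node nova tried to schedule onto
    ironic node-show <node-uuid>            # provision_state, maintenance, last_error
    # some drivers expose a console through ironic; otherwise attach a screen or
    # serial-over-LAN to the chassis and watch the PXE ROM output directly
    ironic node-get-console <node-uuid>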
*** baoli has joined #openstack-ironic12:11
*** athomas_ has joined #openstack-ironic12:12
*** baoli_ has joined #openstack-ironic12:12
divyaJroll : you mean ironic node-get-console?12:12
TheJuliadivya: actually attaching a screen and retrying12:13
TheJuliato the physical chassis12:13
divyaTheJulia : i am getting pxe failure that's it. then nova boot shows "no valid host error"12:14
*** xavierr has joined #openstack-ironic12:14
*** e0ne has quit IRC12:14
TheJuliaupon failure, the ironic nova virt driver will tell nova that it failed and will attempt to reschedule12:15
TheJuliahence "no valid host"12:15
xavierrcd /brazil12:16
*** baoli has quit IRC12:16
divyaTheJulia : nova hypervisor-list shows ironic node.12:16
xavierrafter 24 hours... back home! (:12:17
jrolldivya: no, like TheJulia said, attach a physical screen to the machine (or something like serial-over-lan with ipmi)12:17
xavierrmorning all12:17
jrollxavierr: \o/12:17
TheJuliaxavierr: congrats! :)12:17
TheJuliadivya: Like jroll said as well, you likely have a networking issue somewhere between the physical machine and neutron, so the most logical step is to evaluate what the physical node is seeing, and work backwards from there.12:19
*** vmud213 has quit IRC12:19
TheJuliaAs such, verify MAC addresses, if you have multiple nics, verify the intended port is connected and booting first12:20
divyaJroll, TheJulia : is it neutron/ironic config issue or issue reaching pxe server?12:22
divyaTheJulia : i have only one NIC connected, removed other nodes. when i execute nova boot, bare metal node is powered on and pxe error is shown12:23
TheJuliadivya: I don't think we have enough information to know for sure.  Based on your paste output, it looks like it is an issue reaching the pxe server, your tcpdump makes me wonder if it is a networking config issue.12:23
TheJuliadivya: Does the pxe loader load at all, or does it just time out trying to get an address?12:24
TheJuliaThat would be a hint as to possible root causes12:25
*** dprince has joined #openstack-ironic12:25
divyai don't find any clue in n-cpu.log, is there a way to check?12:25
xavierrjroll, \o/12:26
xavierrTheJulia, tks : )12:26
jrolldivya: you need to look at the actual boot process on the hardware12:26
*** irf has quit IRC12:27
divyaJroll : pxe loader not loaded.12:29
*** mbound has joined #openstack-ironic12:30
*** wajdi has quit IRC12:33
*** jjohnson2_ has joined #openstack-ironic12:33
*** vmud213 has joined #openstack-ironic12:35
jrolldivya: that's the message on the console?12:35
TheJuliajlvillal: By chance is there an etherpad for grenade notes?12:42
*** mjturek1 has joined #openstack-ironic12:44
*** athomas_ has quit IRC12:49
*** jaypipes has joined #openstack-ironic12:51
*** krtaylor has joined #openstack-ironic12:51
divyaNO. Direct console shows "Boot failed. PXE network" message 2 times, that's it.12:58
divyaJroll, TheJulia : Direct console shows "Boot failed. PXE network" message 2 times, that's it.13:00
jrollhrm, sounds like a network issue13:00
jrollbut doesn't give much info13:01
*** nicodemos has joined #openstack-ironic13:01
jrollI guess I'd tcpdump for dhcp requests where neutron-dhcp-agent is running13:01
divya$ sudo tcpdump -vv port 67 or port 6813:08
divyadidn't show anything13:08
jrolldivya: right, so the dhcp requests aren't getting to your control plane, likely something in your network architecture13:10
*** rbradfor_home is now known as rbradfor13:11
*** Goneri has joined #openstack-ironic13:12
divyajroll : neutron-dhcp-agent is running and vi /opt/stack/data/neutron/dhcp/c35cbfde-db1a-4224-8098-a0e67b772a20/host file shows the same ip as shown in nova list13:12
*** vmud213 has quit IRC13:14
jrolldivya: sure, if the dhcp request doesn't get there, dhcp and therefore pxe booting won't work13:18
divyaJroll : could u take a look and give some clue to fix it? http://paste.openstack.org/show/495869/13:20
jrolldivya: not sure, but looks like it's something in the network infrastructure, not the openstack configuration13:22
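A rough sketch of the checks discussed above, assuming a devstack-style layout; the interface name and MAC address are placeholders to fill in from the environment:

    # watch for the node's DHCP/BOOTP and TFTP traffic on the interface facing
    # the provisioning network (interface name is an assumption)
    sudo tcpdump -i <provisioning-interface> -n -vv 'port 67 or port 68 or port 69'
    # the MAC addresses ironic has registered for its nodes
    ironic port-list
    # confirm that MAC landed in the dnsmasq host file neutron-dhcp-agent wrote
    # (path taken from the conversation above)
    grep -i '<node-mac>' /opt/stack/data/neutron/dhcp/*/host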
deraykrotscheck, hi.. when I do npm install @ironic-webclient dir.. I face an issue13:24
krotscheckderay: What's up?13:25
deraykrotscheck, hi .. g'morning :) everything fine here .. barring this: http://paste.openstack.org/show/495870/ :)13:27
divyaJroll : Thanks, no openstack config, i am little bit happy now.13:28
krotscheckderay: Well, first of all, you don't need to run sudo.13:28
divyaJroll: issue could be connectivity between controller and physical bare metal server?13:28
krotscheckderay: Secondly, if you do run sudo, it's likely mucked up the permissions in your npm cache directory.13:29
jrolldivya: yes13:29
*** baoli_ has quit IRC13:30
jrolldevananda: news on the project install guide stuff: https://review.openstack.org/#/c/301284/16/specs/newton/project-specific-installguides.rst13:30
divyaJroll : assigned controller em3'13:31
krotscheckderay: So first of all, I'd say- 'sudo npm cache clear; npm cache clear; sudo rm -rf ./node_modules'13:31
deraykrotscheck, but I believe I need to use "sudo -E" .. as I have a proxy setup as part of my env on my dev machine to access outside world. And npm will try installing all the devDependencies from internet, rt?13:31
*** yolanda has quit IRC13:32
krotscheckderay: Would this work? https://jjasonclark.com/how-to-setup-node-behind-web-proxy13:32
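The linked article boils down to pointing npm at the proxy directly instead of relying on sudo -E, roughly like this (the proxy URL is a placeholder):

    # make npm proxy-aware so devDependencies can be fetched from behind the firewall
    npm config set proxy http://proxy.example.com:8080
    npm config set https-proxy http://proxy.example.com:8080
    # clear out anything an earlier sudo'd run left behind in the cache
    npm cache clear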
deraykrotscheck, there's no "./node_modules" in my current dir: /opt/stack/ironic-webclient13:32
divyaJroll : assigned controller em3 ip as 100.100.100.10, bm node's em3 as 100.100.100.10 and tried ping. it works fine.13:32
deraykrotscheck, aha .. seems so .. looking13:33
divyaJroll : not sure if i am doing something wrong13:33
krotscheckderay: If there's no node_modules directory, then that's good - means that you never got to the point where it installed the downloaded packages in your local project.13:33
*** [1]cdearborn has joined #openstack-ironic13:34
xavierrdo you guys have that etherpad link with all links for etherpad for each design sessions decisions?13:34
krotscheckIt basically means the problem can be isolated to the cache.13:35
TheJuliaxavierr: https://etherpad.openstack.org/p/summit-mitaka-ironic13:35
*** links has quit IRC13:35
TheJuliaerr13:35
TheJuliawrong one13:35
TheJuliaxavierr: https://etherpad.openstack.org/p/ironic-newton-summit13:36
* TheJulia goes and makes more coffee13:36
* xavierr One link to rule them all13:36
jrolldivya: not sure how much more I can troubleshoot this, sorry :/13:36
xavierrTheJulia, thanks ;)13:36
*** yolanda has joined #openstack-ironic13:36
divyaJroll, TheJulia : Thanks much for the support.13:37
jrollyou're welcome13:37
jrollJayF: that onmetal + devstack + coreos kernel panic thing, I'm even seeing it on latest alpha coreos (4.5.2 kernel) :/ can't remember if you had thoughts on why that would happen on onmetal but not a VM13:53
jroll(also happens on the super old coreos ramdisk we publish, fwiw)13:53
jrollJayF: https://gist.github.com/jimrollenhagen/87ca2baa012d651e032bed3a2db47433 if you feel like taking a look when you have some time :)13:54
TheJuliajroll: by chance noapic on the kernel command line arguments?13:55
*** e0ne has joined #openstack-ironic13:56
jrollTheJulia: [ 0.000000] Command line: initrd=/opt/stack/data/ironic/tftpboot/c7467a06-47e1-44d3-af0e-d3d6f3e5e97d/deploy_ramdisk selinux=0 disk= iscsi_target_iqn= deployment_id= deployment_key= ironic_api_url= troubleshoot=0 text nofb nomodeset vga=normal console=ttyS0 systemd.journald.forward_to_console=yes ipa-debug=1 boot_option= ipa-api-url=http://65.61.151.138:6385 ipa-driver-name=agent_ssh boot_mode= coreos.configdrive=0 BOOT_IMAGE=/opt/stack/data/ironic/tftpboot/c7467a06-47e1-44d3-af0e-d3d6f3e5e97d/deploy_kernel ip=10.1.0.4:65.61.151.138:10.1.0.1:255.255.255.0 BOOTIF=01-52-54-00-3d-21-8a13:56
jlvillalGood morning Ironic.13:56
jrollwhoa, the was long13:56
jrollthat*13:56
jrollanyway, nope :/13:56
TheJuliajroll: I meant, try adding it :)13:56
TheJulianull pointer dereference.... wow13:56
jrollTheJulia: oh, it is on the host if that's what you meant13:56
jrollbut I can try that13:57
TheJuliajroll: so this log is from a VM booting right?13:57
TheJuliaonmetal?13:57
*** xhku has joined #openstack-ironic13:57
TheJuliaor is it native on onmetal?13:57
jrollTheJulia: that log is from a devstack VM booting on an onmetal host13:57
jrollonmetal is straight up bare metal13:57
jlvillalTheJulia, Yes there is. You can follow the chain starting here: https://wiki.openstack.org/wiki/Meetings/Ironic-QA13:58
jlvillalTheJulia, Which should get you here: https://etherpad.openstack.org/p/ironic-newton-summit-grenade-worksession13:58
*** fragatina has joined #openstack-ironic13:59
TheJuliajroll: okay, that's what I thought but I wanted to make sure... I would add noapic and see if that changes things on the kernel command line for the devstack vm booting... alternatively something is up with virtualization on that hardware I suspect, especially since it is booting in paravirtual mode13:59
TheJuliajroll: i bet if it was in full emulation mode, it wouldn't blink.14:00
TheJuliajlvillal: thanks, found the etherpad and the github repo, need to find that window again and clone :)14:00
*** mtanino has joined #openstack-ironic14:00
jrollTheJulia: yeah, I'll try noapic and then dig in further14:01
cineramajlvillal: thanks for posting the link14:01
jrollwe certainly intend for virt to work on these :P14:01
jrollTheJulia: muahaha, you rock, progress is had14:01
cineramaalso hi everyone!14:02
TheJulia\o/14:02
*** ametts has joined #openstack-ironic14:02
jroll\o cinerama14:02
deraykrotscheck, silly me .. there was already a tmp (file) in my home dir (/opt/stack). so, npm was not able to create a folder with the same name. Btw, setting up Npm behind a corporate web proxy also helped, I feel. Otherwise, would have hit that issue after resolving the earlier one.14:02
krotscheckderay: Cool, so you're all good?14:03
deraykrotscheck, YES..14:03
* TheJulia looks at her lab router's routes, raises an eyebrow, and feels like a coffee IV is needed14:03
*** fragatina has quit IRC14:03
krotscheckSweeeet14:04
deraykrotscheck, now going to start ``npm start``. But what's the port gulp uses? I already have some ports like 8081/2 used up14:04
openstackgerritXavier proposed openstack/ironic: Updating links and removing unnecessary dollar symbol from doc page  https://review.openstack.org/30882114:04
krotscheckderay: 8000. It should automatically open a browser.14:05
krotscheckderay: Though, that's only for dev purposes. For production, run `npm pack` and unzip the tarball in a webroot of your own choice.14:06
deraykrotscheck, superrrr! browser opens with link: http://localhost:8000/#/config14:06
krotscheckderay: Yep. Now tell it where your ironic api is, and it _should_ autodetect it (as long as you have CORS configured correctly)14:07
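Enabling CORS for the webclient is roughly a one-option change, assuming this ironic build carries the standard oslo.middleware CORS options and a default config path:

    # allow the dev webclient's origin in ironic.conf, then restart ironic-api
    # so it picks the change up (config path is an assumption)
    printf '\n[cors]\nallowed_origin = http://localhost:8000\n' | sudo tee -a /etc/ironic/ironic.conf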
deraykrotscheck, sure14:07
*** baoli has joined #openstack-ironic14:07
TheJuliajroll: or it could be the host OS on your onmetal machine, I just don't know the apic/paravirt interaction so not sure14:08
deraykrotscheck, gimme some time..14:08
deraykrotscheck, lemme enable cors14:08
jrollTheJulia: no, it's chugging along now, thanks :)14:12
*** mtanino has quit IRC14:12
*** e0ne has quit IRC14:13
*** e0ne has joined #openstack-ironic14:14
openstackgerritJarrod Johnson proposed openstack/pyghmi: Cope with empty agentless fields  https://review.openstack.org/31175114:17
*** ChrisAusten has joined #openstack-ironic14:23
xavierrguys, do we have spec for openstackclient support?14:26
*** itamarl has joined #openstack-ironic14:26
TheJuliahttps://github.com/openstack/ironic-specs/blob/master/specs/approved/ironicclient-osc-plugin.rst14:27
*** dprince has quit IRC14:29
openstackgerritJim Rollenhagen proposed openstack/ironic-specs: Add newton priorities doc  https://review.openstack.org/31153014:29
jrollwould love eyes/volunteers on that ^14:30
xavierrTheJulia, thanks14:33
*** mag009_ has joined #openstack-ironic14:40
*** ppiela has joined #openstack-ironic14:43
openstackgerritJim Rollenhagen proposed openstack/ironic: Devstack: allow extra PXE params  https://review.openstack.org/31175714:44
jroll^ should be an easy one :P14:44
deraykrotscheck, everything goes fine till now.. have been able to add 2 ironic clouds. But when i try to see the node with one it remains on the ``loading nodes`` prompt and seems hung14:44
deraykrotscheck, ./config.json missing I feel .. need to provide the tenant/user name there?14:48
*** ayoung has joined #openstack-ironic14:50
*** divya has quit IRC14:53
deraykrotscheck, can u mail me a sample config.json to try out? .. and where to place that.. my email: debayan.ray@gmail.com14:57
openstackgerritMerged openstack/pyghmi: Cope with empty agentless fields  https://review.openstack.org/31175114:58
*** mtanino has joined #openstack-ironic14:59
*** deray has quit IRC14:59
jrollAssertionError: 0 == 0 :15:00
jrollgg15:00
jaypipesjroll, devananda, jlvillal, JayF: mind checking my logic around this use case based on discussions around how the generic-resource-pools functionality can fulfill the multi-compute-host issues? http://paste.openstack.org/show/495878/15:00
* jroll reads15:05
*** [1]cdearborn has quit IRC15:06
*** [1]cdearborn has joined #openstack-ironic15:06
jrolljaypipes: yeah, that matches my thoughts around it, I think it's accurate15:07
jrolljaypipes: so the two issues I see right now are:15:08
jroll1) that's an explosion of effort if an operator has many hardware configurations15:08
*** itamarl has quit IRC15:09
jroll2) when hardware is added/removed/taken offline for management, the resource pool would need to be updated every time15:09
jrollmaybe we can make ironic do that automagically, though?15:09
jrollIOW, the operator is now responsible for ensuring that the resource pool has an accurate count of resources that can be scheduled to, rather than the ironic driver15:09
jrollmake sense?15:10
*** nicodemos has quit IRC15:13
*** links has joined #openstack-ironic15:14
*** links has quit IRC15:14
*** sinh_ has joined #openstack-ironic15:15
*** phschwartz_ has joined #openstack-ironic15:15
*** mtreinish_ has joined #openstack-ironic15:16
*** lekha_ has joined #openstack-ironic15:17
*** cfarquhar has joined #openstack-ironic15:18
*** cfarquhar has quit IRC15:18
*** cfarquhar has joined #openstack-ironic15:18
*** BadCub_ has joined #openstack-ironic15:18
*** igordcar1 has joined #openstack-ironic15:19
*** ijw has joined #openstack-ironic15:20
*** davidlenwell has quit IRC15:20
*** marlinc_ has joined #openstack-ironic15:21
*** ijw_ has joined #openstack-ironic15:21
*** dtantsur|afk has quit IRC15:21
*** kuthurium has quit IRC15:21
*** ppiela has quit IRC15:21
*** mtreinish has quit IRC15:21
*** lekha has quit IRC15:21
*** rcernin has quit IRC15:21
*** cfarquhar_ has quit IRC15:21
*** phschwartz has quit IRC15:21
*** marlinc has quit IRC15:21
*** sinh has quit IRC15:21
*** alineb has quit IRC15:21
*** jrist has quit IRC15:21
*** BadCub has quit IRC15:21
*** igordcard has quit IRC15:21
*** jlvillal has quit IRC15:21
*** mtreinish_ is now known as mtreinish15:21
*** dtantsur has joined #openstack-ironic15:21
*** jlvillal has joined #openstack-ironic15:21
*** dtantsur has quit IRC15:22
*** dtantsur has joined #openstack-ironic15:22
*** BadCub_ is now known as BadCub15:22
*** rcernin has joined #openstack-ironic15:22
*** kuthurium_ has joined #openstack-ironic15:23
*** marlinc_ is now known as marlinc15:23
krotscheckderay: config.json is just one of the configuration options, you shouldn't need it.15:23
*** lekha_ is now known as lekha15:24
*** ijw has quit IRC15:25
*** garthb has joined #openstack-ironic15:27
*** davidlenwell has joined #openstack-ironic15:28
*** rcernin has quit IRC15:30
*** moshele has quit IRC15:30
*** fragatina has joined #openstack-ironic15:34
*** ppiela has joined #openstack-ironic15:35
*** yolanda has quit IRC15:35
JayFNo meeting today, correct?15:36
jrollright15:36
*** jrist has joined #openstack-ironic15:37
*** chlong has quit IRC15:38
*** fragatina has quit IRC15:41
*** fragatina has joined #openstack-ironic15:42
devanandamorning, all15:43
*** alineb has joined #openstack-ironic15:45
jrollhey devananda15:46
*** e0ne has quit IRC15:47
jaypipesjroll: sorry, went to grab coffee :) on 1) there is no way around that I can see. on 2) if hardware is taken offline for maintenance and it shouldn't be scheduled to, yes, the external script would set the --reserved value in resource-pool-inventory-update for the pool in question.15:49
jaypipesjroll: and yes, we can use the ironic virt driver I believe.15:50
jrolljaypipes: so you're okay with the ironic driver handling resource pool management?15:50
jaypipesjroll: right now, it's abstract for me. "something" updates the inventory and reserved amounts :)15:50
jrollheh. yeah.15:50
jaypipesjroll: we can make the nova-compute daemon update it, sure, via ironic virt driver.15:51
jrolljaypipes: okay, now you have my ears :)15:51
*** mbound has quit IRC15:52
* jroll tries to reason about, if the compute daemon managed everything, if it would be possible to have that manage the host aggregates as well15:52
* jroll hrms really hard15:52
jaypipesjroll: keep in mind that the general direction is that these APIs are to be the broken-out scheduler REST APIs. In the same way that the compute manager calls to Neutron's REST API when plugging VIFs for a VM, we can have the compute manager (i.e. not the ironic virt driver itself but rather the container that houses the ironic virt driver) do the calls to the new scheduler REST API.15:52
jrolljaypipes: sure, and so could ironic, so we have a couple pathways to attack this15:53
jaypipesright15:53
jaypipesjroll: I'll leave that discussion for th eimplementation section of the spec versus the use case section ;)15:54
jrolljaypipes: totes15:54
jrollI'll mull this over15:54
jaypipesjroll: mull away.15:54
jaypipesjroll: I suppose that's better than mulla way.15:54
jrollthe host aggregate thing (e.g. when a new hardware config is enrolled in ironic) is the hard part15:54
* devananda catches up on scrollback15:54
jaypipesjroll: you would only need to add a new host aggregate for hardware that doesn't "fit" an existing resource class, not every time you added new hardware.15:55
jrolljaypipes: could the same set of compute hosts handle all resource pools?15:55
jrollyep, that's my thought15:55
devanandajaypipes: L11-19 is not an accurate description of the current behavior15:55
devanandaif it is supposed to be, I will reply with some clarification15:56
* devananda continues reading15:56
jrolldevananda: good catch, I glazed over that15:56
jaypipesjroll: yes, theoretically all compute nodes could handle all resource classes.15:56
jaypipesjroll: I just thought it was cleaner to have one set of compute hosts per agg and one resource class handled per agg.15:57
jrolljaypipes: cool, so in theory if something knew about all compute hosts (and all compute hosts handled all resource classes), this could be totally done in code15:57
jaypipesjroll: but I can add some commentary around putting them all in a single agg and having the resource pool have multiple inventory records, one for each resource class.15:57
jrolljaypipes: agree, if the resource class count is low15:57
jrolljaypipes: well, I was thinking aggr per resource class, but all aggrs on all hosts15:58
jaypipesjroll: in the back of my mind is also the end goal of having the scheduler be partitioned, so only requests for certain things go to a subset of scheduler daemons. The most convenient partitioning scheme in Nova is the aggregate right now :)15:58
*** ChrisAusten has quit IRC15:59
*** dmk0202 has quit IRC15:59
*** yolanda has joined #openstack-ironic15:59
jrolljaypipes: e.g. just this change in the seq part: http://paste.openstack.org/show/495884/15:59
*** Guest37590 has quit IRC15:59
jaypipesjroll: agg per resource class and all aggs on all hosts would provide no benefit to the scheduler, though...15:59
jaypipesjroll: it would still have to consider all compute hosts for every request to launch an instance.15:59
jrolljaypipes: sure, just thinking things through15:59
jaypipesjroll: yup, totes.15:59
jrollsure15:59
jaypipesdevananda: can you assist me there? :) how can I better describe the existing situation?16:00
jaypipesdevananda: is it juust lines 17-19 that are wrong?16:01
*** blakec has joined #openstack-ironic16:01
jrolljaypipes: so, I think there's less of a scale problem there for the ironic case, e.g. a compute host can handle 1k ironic nodes today. so a 100k server deployment would only be 100 compute hosts. scheduler shouldn't mind that too much :)16:01
devanandajaypipes: happy to :) I'll edit and post another pastebin16:01
jaypipesjroll: when you start talking about NFV use cases, though, where there are lots of policies around the capabilities of various hosts, it does start to weigh down the scheduler.16:02
jaypipesdevananda: rock on, thank you sir :)16:02
jrolljaypipes: I'm not sure if that applies here16:02
jrollanyway16:02
jrollneed to think on this more16:03
jrollmy goal is for operators to not need to do all of this work16:03
jrollor at least when an ironic node becomes un-schedulable, that's handled automagically16:03
*** ChubYann has joined #openstack-ironic16:04
jaypipesjroll: k16:05
* JayF thinks he should read up on nova host aggreggates and resource pools16:06
jrollJayF: you should16:06
* jroll finds the novel16:06
krotscheckjroll: It was better on the big screen.16:07
jrollJayF: http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html16:07
JayFdanke16:08
jrollJayF: probably some small changes since, but that should get the idea in your head16:08
JayFtyvvm16:09
jaypipeskrotscheck: lol :)16:10
*** links has joined #openstack-ironic16:12
devanandajaypipes: jroll: http://paste.openstack.org/show/495889/16:15
*** blakec has quit IRC16:17
jaypipesdevananda: rock on, thank you sir :)16:17
jrolldevananda: ++16:17
jaypipesdevananda: this proposal I am working on will allow us to get rid of the (host, node) tuple entirely. :)16:18
devanandajaypipes: thats fantastic!16:18
jaypipesindeed.16:18
jaypipesprogress. we march forward!16:19
devananda\o/16:19
JayFdevananda: jaypipes: Is it OK to not reflect the CCM use case in that doc at all? I think it is just making sure it wasn't forgotten by accident16:19
jaypipesJayF: CCM?16:20
devanandaCCM?16:20
JayFClusteredComputeManager16:20
jaypipesJayF: you mean like vCenter?16:20
JayFhttps://github.com/openstack/ironic/blob/master/ironic/nova/compute/manager.py16:20
jrollso, that isn't really supported at all >.>16:20
devanandai think its covered, but if not, please explain16:20
JayFIt just wasn't called out explicitly in the "how it works today" use case; I don't think it has to be16:21
jrolldevananda: well, CCM makes it possible to run more than one16:21
jrollalso makes it terrible16:21
devanandait = the desired result16:21
*** sacharya has joined #openstack-ironic16:21
devanandasorry, typing with one hand while eating ...16:21
devanandaCCM allows one to run >1 n-cpu today, though it's really cludgy16:21
JayF"really cludgy" is putting it nicely, lol16:22
jrollI feel like we should pretend CCM doesn't exist for the sake of this document and future work16:22
devanandawith this work, we won't need CCM any more, and we'll be able to run >116:22
JayF+116:22
jrolldevananda: I think JayF is just asking if you want to note it in your edited section16:23
*** jistr has quit IRC16:23
devanandajaypipes: how do you see nova handling the baremetal case where an n-cpu process is terminated?16:23
devanandajaypipes: with libvirt, clearly those instances become unmanageable and may be rescheduled -- with ironic, that's not the case, but we could easily support re*associating* them with another nova-compute process16:24
jrollnova-manage move-ironic-hosts --from downed-compute16:24
jroller, s/hosts/instances16:24
devanandayea, something like that16:24
jaypipesdevananda: same as a normal n-cpu process today. on restart nothing is lost (simply reads its instances from the database and carries on as normal).16:25
jaypipesdevananda: with libvirt, nothing happens to the instances at all.16:25
jrollright, so I imagine the first pass is ^ which is the status quo16:25
jaypipesdevananda: the n-cpu daemon doesn't house the libvirt daemons. the host does.16:25
*** jaybeale has joined #openstack-ironic16:25
devanandajaypipes: i meant the libvirt driver16:26
jrollbut eventually we can optimize, given you could just update the db to point to a different compute daemon to make the instances manageable again16:26
jaypipesdevananda: that has no state itself.16:26
*** chihhsin has quit IRC16:26
jaypipesjroll: you wouldn't "move" instances from one compute node to another. there's no point in doing that. you would just restart the n-cpu service.16:26
*** chihhsin has joined #openstack-ironic16:26
jrolljaypipes: well, consider the case where that service is hosted on a vm that disappears/dies/etc16:27
jaypipesjroll: this isn't like the neutron router agent case which has state in the agent about the routers it manages.16:27
jrollor rather, the process must be down for a long time16:27
jrolldoes that make more sense?16:27
jaypipesjroll: it won't affect anything other than control plane communication to the instances that are associated with that n-cpu.16:27
jrollright16:28
jrollso since ironic instances don't live there16:28
jaypipesjroll: n-cpu is not HA. period :)16:28
jaypipesnever has been, probably never will be...16:28
jrollwe can reduce downtime for control plane actions by just changing instance.compute_host or whatever in the db16:28
jrollit's an optimization/hack16:28
jrollbut would be a nice thing to do16:28
devanandacool. no local state on the n-cpu host makes it easy to work with16:29
*** ifarkas has quit IRC16:29
jaypipesjroll: would be easier to just set up a VIP to a passive n-cpu daemon that has the same compute node data and just use haproxy to fail over to the passive n-cpu.16:29
jaypipesdevananda: right.16:29
jaypipesby "has the same compute node data" I mean in the DB, not in the n-cpu process itself, of course.16:30
jrolljaypipes: sure, that's fine too16:30
jrollmy experience with active/passive on a vip tells me a mysql update is easier :P16:30
jaypipesheh, fair enough.16:30
jaypipeslet's tackle that issue when we get to it.16:30
devanandathis is making more and more sense to me - and I like what I'm understanding :)16:31
jrollyeah, totally16:31
jrolllike I said, first pass is what you described, later we optimize :)16:31
jaypipes++16:31
devanandajaypipes: one more question, then I probably need to run: we will need $something to correlate the nova flavors to baremetal node "types"16:32
jrolldevananda: as long as we can make most of the resource pool management automated (which we can), we can do it16:32
jrolldevananda: a flavor can be linked to a resource pool, e.g. the flavor says "requires 1 baremetal-max-cpu resource"16:32
devanandajroll: today, the nova scheduler is matching flavor to the actual hardware. with this proposal, it won't be doing that, it sounds like16:33
devanandabut rather matching flavor to resource pool16:33
jrolldevananda: right, so you have a resource pool for each distinct hardware config16:33
devanandaand then it's up to the operator to make sure those are all created correctly16:33
jroll(which today would look like a flavor)16:33
jrollright16:33
jaypipesdevananda: it will be a flavor that has a single resource class in it... the one matching the baremetal configuration type.16:33
jaypipesdevananda: with a requested_amount of 1 always for that resource class.16:34
jrolland that was what you and penick didn't like about it - because all of that work16:34
devanandabut also - once that request is routed to an n-cpu host, that host (or really, the nova.virt.ironic driver) needs to select the appropriate ironic node -- and ironic does not (today) have a "resource pool" or "flavor" property to match against16:34
jaypipesjroll: no, not a resource class for each distinct hardware configuration...16:34
jrollO_o16:35
jaypipesjroll: it would be a resource class for each distinct server type. there can be many different distinct configurations of hardware or manufacturer that provide a single "server type".16:35
devanandawe will want to codify a way to do that matching in our REST API. that's not exactly what our current tags / claims proposal does16:35
jaypipesjroll: sorry, we need better terminology here :)16:35
jrolljaypipes: I feel like I need an example here16:36
JayFHow does that work with the potential for dynamic types then?16:36
jrolloh lawd16:36
jaypipesjroll, JayF: sure, lemme paste an example.16:36
JayFhttp://specs.openstack.org/openstack/ironic-specs/specs/backlog/exposing-hardware-capabilities.html16:36
devanandaJayF: yeeeeaaah.....16:37
JayFobviously that's not hashed out yet, but I know dynamic hardware config is something Ironic has wanted to do for a while, anything from reconfiguring a setting to "I have a bunch of hardware that $fancy_chassis assembles into a server on demand"16:37
jaypipesJayF: yeah, the RSA use case... but that's not exactly what I'm referring to here.16:38
JayF"RSA use case" ?16:38
devanandaJayF: if each configuration of $fancy_hardware is expressed as a separate flavor, we would need one Instance to claim from multiple resource pools at once. That's ... ugly. But, so is a flavor that has non-defined CPU count.16:38
jaypipesJayF, jroll: I'm referring to the complaint that penick had about having to create a flavor (or resource class) every time Yahoo! registered a few new racks with hardware that was just slighly different from the last generation of hardware they racked.16:39
JayFdevananda: so if a single node X could provide flavors A, B, C, it'd be in 3 different resource pools (one for each config), and we'd have to mark it as used in all places on scheduling?16:40
jrolljaypipes: yeah, sorry, I guess I think of config as a combination of "cpu/ram/disk/extra_specs_or_capabilities_or_whatever_loaded_term"16:40
devanandajaypipes: his complaint wasn't specifically about creating a new flavor, but about starting a new n-cpu process16:40
devanandajaypipes: new flavor is just fine16:40
jaypipesJayF, jroll: if the same "server type" (say, 800G HDD, 128G RAM, 2-socket Xeon, 4 10G NIC) could be serviced by multiple models and generations of HP, Dell, and Supermicro hardware, you would only need a single "server type", not many of them.16:40
jrolljaypipes: essentially resource pool per flavor, ya?16:40
jrollat a very basic level of flavor16:41
*** [1]cdearborn has quit IRC16:41
JayFjaypipes: I'm talking about the reverse case; I have a single server type which can provide multiple flavors16:41
devanandaguys, we now have 3 conversations going at once16:41
JayFjaypipes: it's a many-to-many mapping in some use cases16:41
jaypipesjroll: well, a resource pool can provide multiple resource classes :) so you could technically have a single resource pool that provided IRON_BASE and IRON_CRAZY_PERF resources.16:41
devanandathis has just gotten too hard to track16:41
devananda*for me to track16:41
jrolljaypipes: gdi. resource class per flavor.16:41
*** dprince has joined #openstack-ironic16:42
*** blakec has joined #openstack-ironic16:42
devanandajaypipes: do you have an ERD of these concepts? the mapping of resource pool - resource class - flavor - nova compute - host agg ...16:42
jaypipesjroll: heh, yes. a flavor represents both the quantity of requested resources (amounts of some set of resource classes) as well as the qualitative side of the equation (the capabilities)16:43
devanandait's enough new terminology that I sort of want to fall back to looking at a data model :)16:43
jaypipesjroll: so you could have multiple flavors each with the same ironic resource class, but exposing different capabilities (for instance SSD machines vs. HDD machines)16:43
jrolljaypipes: right, yep, that matches what I thought/expected16:44
jaypipescoolio. have I confused everyone adequately enough today? :)16:44
jrolland I think that matches at least the "config capabilities on the fly" use case16:44
jrollnever!16:44
*** rloo has joined #openstack-ironic16:44
*** ohamada_ has quit IRC16:45
jaypipeshehe16:45
devanandahmm. jaypipes, in your pastebin example, is IRON_BASE a flavor or a resource class?16:45
jaypipesdevananda: resource class.16:45
devanandaaaaah16:45
*** harlowja has joined #openstack-ironic16:45
jaypipesdevananda: the flavor would request 1 of IRON_BASE and be called something like "Standard bare metal" or whatever16:45
devanandaok. so then that example doesn't discuss flavors at all?16:45
jaypipescorrect.16:45
devanandacould you add that? I misunderstood and thought IRON_BASE was a flavor16:46
jaypipesdevananda: the flavor is the combination of a set of requested resource classes along with a set of capabilities.16:46
devanandagotcha16:46
devanandaso in a virt cloud, a resource class might be "cpu" and a flavor might be "8 cpu"16:46
jrollyep16:46
devanandawhereas in a baremetal cloud, a resource class might be "8 cpu machine" and a flavor would be "1 of those machine"16:47
jaypipesdevananda: those capabilities are currently called "extra_specs" and we are actively trying to standardize that side of the flavor. the resource providers stuff is all about standardizing the quantitative side of the flavor (the requested amounts of stuff)16:47
jaypipesdevananda: precisely.16:47
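To make the flavor/resource-class split concrete: under the proposal a baremetal flavor would simply request one unit of a resource class, with the qualitative bits staying in extra specs. A hypothetical sketch using today's flavor CLI; the resources: key is only an illustration of the proposal, not a real extra-spec key yet, and the names and sizes are placeholders:

    nova flavor-create baremetal-standard auto 131072 800 24   # ram/disk/vcpu values are placeholders
    nova flavor-key baremetal-standard set resources:IRON_BASE=1
    # qualitative differences (e.g. SSD vs. HDD machines) stay as capabilities;
    # the capability name here is purely illustrative
    nova flavor-key baremetal-standard set capabilities:disk_type=SSD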
devanandajaypipes: that's REALLY cool. and also really different from today.16:47
jaypipesindeed.16:47
devanandaand the missing piece for me was that this doc didn't assert anything about flavor, and so I imputed it incorrectly16:47
jaypipesdevananda: check out the dependency diagram at the bottom of https://blueprints.launchpad.net/nova/+spec/compute-node-inventory-newton16:47
jaypipesdevananda: it's a LOT of work to try to get done for all this remodeling going on.16:48
devananda*blink blink*16:48
jaypipesdevananda: with the end goal being the broken-out scheduler with a full REST API.16:48
devanandaright16:48
jaypipesdevananda: what we've discussed today is *only* the generic-resource-pools and resource-providers-dynamic-resource-classes ones.16:49
jrolljaypipes: I assume you've taken this into account, but we'll need to make sure a flavor and a compute host that doesn't require/expose cpus/ram doesn't explode things16:50
devanandajaypipes: good stuff, thanks for the brainshare!16:51
jroll+116:51
*** daemontool has joined #openstack-ironic16:52
jaypipesjroll: sorry, not quite following you on that last one... could you elaborate?16:52
jrolldevananda: so the next steps for us to decide if this is feasible, is to play around with doing the pool management in code (e.g. ironic virt driver or ironic-conductor talks to the scheduler REST API when a node becomes (un)schedulable16:52
devanandajroll: yep16:52
jrolljaypipes: so cpu/ram is special, right, in that it's an implicit resource pool16:52
jrolland the resource tracker(?) populates that16:53
devanandajroll: and playing with the claims API stuff, since our virt driver will now need to "select" a node to place the instance on, rather than being handed that info16:53
jroll(iirc, this is what the inventory thing is about)16:53
jaypipesjroll: no, CPU and RAM are not in a resource pool. A compute node is a resource provider that contains fixed amounts of RAM and CPU. A resource pool provides resources that are *shared* among multiple compute nodes (via the aggregate association).16:54
jrolljaypipes: so what happens if a compute daemon doesn't populate the cpu/ram pools, or a flavor doesn't require any cpu/ram resources?16:54
jrollok, right, I'm way off16:54
jaypipesjroll: no worries, after a certain point all this stuff makes one's head bend.16:54
jrollso what does an ironic compute node report for ram/cpu? is a flavor that needs 0 ram and 0 cpus okay? :)16:55
* TheJulia suspects thats true with most complex features16:55
devanandajroll: I *think* the ironic virt driver doesn't need to report up anything about the cpu/ram/disk on an ironic Node16:55
devanandabut there is a missing piece right now16:56
devananda*someone* must know that this Ironic cluster can provide X of this resource type and Y of that resource type16:56
jrolldevananda: yeah, it's just an edge case I want to be sure is included16:56
jaypipesjroll: An ironic compute node exposes no resources itself. An ironic compute node only exposes resources of the pools that the aggregates expose that the compute node is associated with.16:56
jlvillalI'm going to send this email out about the Grenade stuff: http://paste.openstack.org/show/495891/16:56
*** wajdi has joined #openstack-ironic16:56
jlvillalIn about five minutes. Unless there are objections/comments.16:56
devanandajroll: not an edge case -- I actually think this is the crux of things. ironic virt driver will stop reporting cpu/ram/disk to the scheduler16:57
jrolljaypipes: okay, just making sure unit tests that have zeroes for these resources is in the plans :P16:57
devanandait won't report anytng at all16:57
jaypipesjroll: an ironic compute node's get_available_resources() call will return only the resources from the pools it is associated with, nothing more. whereas a "normal" compute node would do that, PLUS return the resources that the compute host itself (via the libvirt driver) has access to.16:57
jrolljaypipes: so that's all in the driver. cool.16:57
devanandait will increment/decrement the resource pool counters for the resource types that are associated with that compute host, when they are consumed/released16:57
jaypipesright.16:57
TheJuliajlvillal: lgtm16:58
devanandaso, we need a way for that driver to determine how many units of that resource type are in its pool16:58
jrolldevananda: right, making sure there isn't a count_cpus_and_report() in compute/manager.py or something16:58
jrolldevananda: yep. that's a rest api call16:58
jlvillalTheJulia: thanks :)16:58
devanandajroll: ahh. right. things outside the driver would be a problem here16:58
jaypipesdevananda: actually, no, the ironic virt driver will just return nothing for get_available_resources()... let me explain...16:58
*** blakec has quit IRC16:59
jrolldevananda: this is the "nova resource-pool-inventory-update" calls in http://paste.openstack.org/show/495878/16:59
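Very roughly, the out-of-band sync being debated looks like the following; note that resource-pool-inventory-update only exists in the pasted proposal, not in any released client, so the second command is purely a sketch:

    # how many ironic nodes are actually schedulable right now?
    # (the node-list filter flags assume a reasonably recent python-ironicclient)
    ironic node-list --provision-state available --maintenance false
    # reflect nodes pulled out for maintenance in the pool, per the proposal;
    # the command and its --reserved option are hypothetical, taken from the paste
    nova resource-pool-inventory-update <pool-uuid> IRON_BASE --reserved <nodes-in-maintenance>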
*** rloo_ has joined #openstack-ironic17:00
*** irf has joined #openstack-ironic17:00
krtaylorjlvillal, I guess that means we *are* having a meeting this week :) jk, looks good17:00
jaypipesdevananda: the compute node's resource tracker, upon starting up, queries the DB for the aggregates that it is associated to. These aggregates have a set of resource pool objects associated to them that themselves return an inventory of the resources the pool exposes. This information is known to the compute node resource tracker BEFORE ever calling to the virt driver's get_available_resources() API call. For the ironic virt driver, since there are no resources that the virt driver itself provides on the compute node, it would return nothing. For hypervisor drivers, they return the resources that the compute host exposes for guests.17:00
*** blakec has joined #openstack-ironic17:00
jlvillalkrtaylor: I only canceled the summit one :P17:01
*** jaybeale has quit IRC17:02
devanandajaypipes: sure. and for hypervisor drivers, when they create an instance, they decrement the resources in that pool17:02
*** jaybeale has joined #openstack-ironic17:02
mat128@here meeting time?17:02
jrollmat128: no meeting today17:02
JayFthere's no meeting today17:02
devanandajaypipes: clearly, ironic driver should do the same thing (decrement resources on allocation, increment resources on instance deletion)17:02
*** gugl has quit IRC17:02
jaypipesdevananda: that is how things currently work, yes. however, I am working towards having the scheduler do the claim process and the scheduler would do that decrementing...17:02
jrolldevananda: the scheduler handles the pools17:03
mat128oh, ok fine :) thanks17:03
jrollright that17:03
devanandajaypipes: oh! gotcha17:03
jrollhence the rest apis :)17:03
devanandamakes sense :)17:04
*** [1]cdearborn has joined #openstack-ironic17:04
devanandajroll: it sounds like we will, eventually, want to provide a means to keep the nova resource pools in sync with ironic's available hardware types and counts in near-real-time, eg. if nodes are taken offline for maintenance or such17:05
jrolldevananda: right, that's what I keep saying17:06
devanandabut that is out of band from the nova-compute <-> ironic-api interactions17:06
jrollI think that is critical to doing this work this way17:06
jroll(instead of the currently proposed thing)17:06
devanandait could be a separate ironic-nova-sync service or something17:06
jrollwell, the virt driver could do that17:06
jaypipesdevananda: or it could be done in the ironic virt driver itself.17:06
devanandait could. but doesn't have to.17:06
*** rloo has quit IRC17:06
*** rloo_ has quit IRC17:07
jrollI feel like I'd probably prefer it to17:07
*** rloo has joined #openstack-ironic17:07
devanandajroll: how will the nova-compute process know when an ironic node is added/removed from the available pool?17:07
jrolldevananda: it would just count available things periodically17:08
jrollit doesn't need to know anything about the nodes other than schedulable and not schedulable17:08
devanandajroll: we know how well polling and counting works ...17:08
*** fragatina has quit IRC17:08
devanandajroll: yes, it needs to know how to map each node into resource pools17:09
jrollsure, that too17:09
devanandathat could be a tag like "nova_resource_class_name"17:09
jrolleither way it will need that, and it will poll and count17:09
jrollthe only difference is that poll is via rest api for the driver, or db for the separate thing17:09
devanandajroll: there's been a bug open for a while now about the race that having it poll causes17:09
jrollwell, this is a bit different17:10
jrollhere, it's simply counting17:10
devanandaotoh, if ironic-conductor were calling to the scheduler API, it could be done in response to state changes17:10
jrolltoday, it is tracking the state of those nodes17:10
devanandano lag for the poll loop17:10
jrollthe resource pool here only cares about the count, fwiw17:10
devanandajroll: right, got that it only cares about the count, but something needs to correlate each Node to a resource pool17:11
JayFnova is keeping a counter, but still when it wants a node, it asks Ironic for one17:11
JayFso worst case with dated info is a build fails later due to thinking we had more capacity than we did, or vice versa (a build failing despite there being $small_number of usable nodes)17:11
jrolldevananda: I understand that17:11
jrollI'm not concerned about the metadata, that's easy17:12
devanandaJayF: exactly. that race condition is an open bug against our nova driver today17:12
JayFthat's what I thought. Just making sure I'm following along properly17:12
devanandafolks have proposed using notifications to inform nova, so that it has a more up to date view of available resources17:12
jrollI tend to think that's a somewhat separate thing17:12
devanandai'm pointing out that, if we put this logic in the nova.virt.ironic driver, and it does a poll/count loop, we're in the same situation17:12
JayFthe big difference between then and now17:13
jrollthe bug today is caused by nova not being up to date on the state and picking a *specific* node that is wrong17:13
* JayF == jroll17:13
devanandahow we update the MAX(resource_count) in a pool is orthogonal to how we consume resources from the pool17:13
JayFhe took the words from me17:13
*** trown is now known as trown|lunch17:13
jrollin this future, nova just asks ironic to "give me a node matching this resource pool"17:13
jrollironic handles all the state things17:13
devanandajroll: sure. so it's less of a race, but still a race17:13
*** gugl has joined #openstack-ironic17:13
jrollso we reduce that bug to just late fails (in the virt layer) when capacity is depleted17:14
JayFjroll: or false fails in the other case17:14
jrollwhich is... /shrug17:14
jrollah, yes17:14
devanandaif the last node of class_X just went offline, and someone requests it, there's a window where nova will allow the request, even though ironic knows it can't succeed17:14
JayFthe other case is the one that sucks more, bceause it's a fail for no good reason17:14
devanandayea17:14
devanandaI just fixed this thing in Ironic, now I need to wait N minutes for Nova to catch up17:14
jrollsure, it will just fail in the compute instead of the scheduler, which I don't think is a big deal17:14
*** ijw_ has quit IRC17:14
jrollbut yeah the reverse is nastier17:15
jrollat any rate, if we have a fast API to get a count of things that match a resource class, we can just sync every 5s :)17:15
devanandaheh17:15
JayFI mean, could we do something clever like before failing out a request for being out of capacity, you poll Ironic right then to get fresh data? I guess not, given that happens in the scheduler far away from Ironic specific code...17:15
jroll(I'm not opposed to the conductor handling this, fwiw, but it needs to do a full count, not simply "this node was updated, let me tell nova")17:15
devanandaJayF: that requires n-sched polling ironic directly17:16
jrollyep17:16
JayFdevananda: yep, that's what I was thinking, which is a nogo17:16
*** rbudden has joined #openstack-ironic17:17
devanandathanks, guys. this has been really helpful17:17
jrolltotes, thanks indeed :D17:18
* jroll will think on this between tempest runs and summit writeups17:18
devanandaI need to step away for a bit, gotta catch up on bills and stuff here for a while today17:18
* jroll plays first of tha month17:19
*** piet has joined #openstack-ironic17:20
*** [1]cdearborn has quit IRC17:23
guglHi folks, just wondering if this doc is update-to-date? http://docs.openstack.org/developer/ironic/deploy/install-guide.html17:23
JayFWe try to keep the documentation up to date; if it's not accurate let us know.17:24
*** daemontool_ has joined #openstack-ironic17:24
cineramagugl, are you having a specific issue with the process? we can talk you through a fix (and fix the docs!)17:24
TheJulia++17:24
*** blakec1 has joined #openstack-ironic17:25
guglI am very new to ironic...so not sure if it is accurate...before I start want to double check...17:25
jrollit should be up to date17:26
guglcool17:26
cineramagugl, are you planning on installing from source or OS packages?17:27
*** daemontool has quit IRC17:27
*** blakec has quit IRC17:28
*** david-lyle has joined #openstack-ironic17:28
*** david-lyle has quit IRC17:29
*** baoli has quit IRC17:29
xavierrguys, could you take a look at this patch: https://review.openstack.org/#/c/308821/17:30
*** david-lyle has joined #openstack-ironic17:30
xavierris just some updates on dev-quickstart.rst17:30
JayFI think that $ is intended17:31
JayFto indicate you're at a bash prompt after ssh'ing17:31
*** fragatina has joined #openstack-ironic17:32
*** fragatina has quit IRC17:33
*** fragatina has joined #openstack-ironic17:33
*** links has quit IRC17:33
xavierrJayF, should I add on the beginning of that command?17:34
JayFxavierr: I honestly don't think it matters much either way17:35
JayFjust saying it wasn't some random $ :)17:35
xavierrJayF, understood :)17:37
gugldevstack17:37
guglcinerama: devstack17:38
*** piet has quit IRC17:39
*** piet has joined #openstack-ironic17:39
gugldoes Horizon has ironic panel?17:40
guglor it is part of compute show in Horizon?17:40
TheJuliagugl: ironic-ui, under early development17:40
guglTheJulia: ic17:40
TheJuliagugl: You can use Ironic via nova or directly, depending on what you want to achieve in the end.17:42
guglTheJulia: will like to set list of all baremetal instances, deactivate and delete baremetal instances and view baremetal instance details17:44
*** blakec1 has quit IRC17:44
gugl*get17:44
*** blakec1 has joined #openstack-ironic17:44
TheJuliagugl: So in that case, seems like you would want nova although there is the distinct difference in an unused node and a deployed instance, so you'll need to use the ironic command line to add your baremetal nodes and likely do anything required in your environment configuration wise so they are able to be used for instance deployments by nova17:48
guglTheJulia: so if it is set up used by nova, the instances will show up in compute in horizon? otherwise the instances won't show up there, right?17:51
TheJuliadeployed instances should be visible, if visible by the user in the tenant17:52
*** piet has quit IRC17:55
guglTheJulia: let me recap....if it is not set up used by nova, then the deployed and undeployed instances will show up under compute(nova), if it is not set up used by nova, then the undeployed instances will not show up in compute, only deployed instances. right?17:58
*** blakec1 has quit IRC17:58
guglfirst should be * if it is set up used by nova17:58
*** piet has joined #openstack-ironic17:59
TheJuliagugl: please define what you mean by undeployed17:59
guglnot provisioned17:59
*** praneshp has joined #openstack-ironic17:59
TheJulianodes not provisioned will only be visible with-in the ironic cli or the ironic-ui as "available" nodes18:00
*** blakec1 has joined #openstack-ironic18:00
TheJuliagugl: we also have concepts of nodes being cleaned after deletion or prior to being made available, manageable but not available, enrolled but not available.  This is part of our state machine, which you can see a diagram of at http://docs.openstack.org/developer/ironic/dev/states.html18:01
guglTheJulia: those states are only visible in ironic...not in nova, right?18:02
TheJuliaEssentially, all you should see as a nova user, is nodes in active state18:03
guglTheJulia: ic18:04
*** ijw has joined #openstack-ironic18:05
*** trown|lunch is now known as trown18:05
*** rloo has quit IRC18:05
TheJuliagugl: We have an instance_uuid field in our database/api/cli/ui that ties back to the nova instance UUID18:06
*** jjohnson2_ has quit IRC18:06
*** blakec1 has quit IRC18:08
*** jjohnson2 has joined #openstack-ironic18:09
guglTheJulia: Once it shows up in nova, it can be deactivated and deleted through nova, it is also visible in ironic, I assume18:10
*** [1]cdearborn has joined #openstack-ironic18:10
TheJuliagugl: the deployed instance yes, deleting a node in nova just transitions the node to the DELETING state in our state machine18:11
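Put concretely, the split being described looks something like this on the command line (UUIDs are placeholders):

    # nodes that are enrolled but not provisioned only show up on the ironic side
    ironic node-list
    # a deployed node's instance_uuid field points back at the nova instance...
    ironic node-show <node-uuid>
    # ...and that instance is what appears, and gets deleted, on the nova side;
    # deleting it drives the node through ironic's DELETING/cleaning states
    nova show <instance-uuid>
    nova delete <instance-uuid>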
TheJuliastepping away for a little bit, bbiab18:12
*** ppiela has quit IRC18:12
*** rloo has joined #openstack-ironic18:13
*** fragatina has quit IRC18:13
*** [1]cdearborn has quit IRC18:13
*** fragatina has joined #openstack-ironic18:14
*** [1]cdearborn has joined #openstack-ironic18:14
guglTheJulia: thanks for the helps18:18
*** fragatina has quit IRC18:19
*** blakec1 has joined #openstack-ironic18:20
*** Haomeng|2 has quit IRC18:21
*** Haomeng|2 has joined #openstack-ironic18:21
*** ppiela has joined #openstack-ironic18:22
*** irf has quit IRC18:22
*** irf has joined #openstack-ironic18:23
openstackgerritClif Houck proposed openstack/ironic-specs: Add spec for image caching  https://review.openstack.org/31059418:24
*** Sukhdev has joined #openstack-ironic18:24
*** Haomeng|2 has quit IRC18:24
*** Sukhdev has quit IRC18:26
*** Sukhdev has joined #openstack-ironic18:27
*** irf has quit IRC18:29
*** mjturek1 has quit IRC18:30
*** mkoderer__ has quit IRC18:32
*** fragatina has joined #openstack-ironic18:40
*** dprince has quit IRC18:42
*** baoli has joined #openstack-ironic18:45
*** mjturek1 has joined #openstack-ironic18:49
*** ChrisAusten has joined #openstack-ironic18:55
*** alejandrito has joined #openstack-ironic18:55
*** alejandrito has quit IRC18:56
*** alejandrito has joined #openstack-ironic18:57
*** alejandrito has quit IRC18:57
jrollJayF: bored? :) https://review.openstack.org/#/c/311757/19:01
JayFYou're aware I can't land that, right mister ptl? I'm not core on ironic ;)19:02
JayF+1'd though, looks sane to me19:02
jrollah dang19:02
jrolllol19:02
jrollthanks19:02
*** piet has quit IRC19:03
*** moshele has joined #openstack-ironic19:07
*** Mr_T has quit IRC19:07
*** Mr_T has joined #openstack-ironic19:13
*** yolanda has quit IRC19:14
*** blakec1 has quit IRC19:18
*** blakec1 has joined #openstack-ironic19:21
*** baoli has quit IRC19:23
*** [1]cdearborn has quit IRC19:26
*** [1]cdearborn has joined #openstack-ironic19:26
*** absubram has joined #openstack-ironic19:30
openstackgerritMerged openstack/ironic: Fix tox cover command  https://review.openstack.org/30685419:35
openstackgerritMerged openstack/ironic: Devstack: allow extra PXE params  https://review.openstack.org/31175719:44
*** baoli has joined #openstack-ironic19:45
*** baoli_ has joined #openstack-ironic19:47
*** moshele has quit IRC19:50
*** baoli has quit IRC19:51
*** lucasagomes has quit IRC19:53
openstackgerritJarrod Johnson proposed openstack/pyghmi: Add disk inventory when possible from Lenovo IMM  https://review.openstack.org/31182619:54
*** absubram has quit IRC19:56
*** absubram has joined #openstack-ironic19:58
*** lucasagomes has joined #openstack-ironic20:00
*** mjturek1 has quit IRC20:02
*** mjturek1 has joined #openstack-ironic20:02
*** mjturek1 has quit IRC20:06
*** mjturek1 has joined #openstack-ironic20:07
*** mjturek1 has quit IRC20:07
*** baoli has joined #openstack-ironic20:08
*** mjturek1 has joined #openstack-ironic20:08
*** baoli_ has quit IRC20:08
*** baoli has quit IRC20:08
*** baoli has joined #openstack-ironic20:09
*** mjturek2 has joined #openstack-ironic20:12
*** mjturek1 has quit IRC20:12
openstackgerritJarrod Johnson proposed openstack/pyghmi: Add disk inventory when possible from Lenovo IMM  https://review.openstack.org/31182620:15
*** mjturek1 has joined #openstack-ironic20:15
openstackgerritJarrod Johnson proposed openstack/pyghmi: Add disk inventory when possible from Lenovo IMM  https://review.openstack.org/31182620:16
*** mjturek1 has quit IRC20:16
*** mjturek2 has quit IRC20:16
openstackgerritMerged openstack/pyghmi: Add disk inventory when possible from Lenovo IMM  https://review.openstack.org/31182620:22
*** izaakk_ has joined #openstack-ironic20:22
*** serverascode_ has joined #openstack-ironic20:23
*** vdrok_ has joined #openstack-ironic20:23
*** Sukhdev has quit IRC20:27
*** rbudden has quit IRC20:27
*** sylwesterB_ has joined #openstack-ironic20:28
*** yhvh- has joined #openstack-ironic20:28
*** bapalm_ has joined #openstack-ironic20:28
*** sylwesterB has quit IRC20:29
*** bapalm has quit IRC20:29
*** izaakk has quit IRC20:29
*** odyssey4me has quit IRC20:29
*** morgabra has quit IRC20:29
*** mrda has quit IRC20:29
*** mmedvede has quit IRC20:29
*** vdrok has quit IRC20:29
*** serverascode has quit IRC20:29
*** yhvh has quit IRC20:29
*** izaakk_ is now known as izaakk20:29
*** sylwesterB_ is now known as sylwesterB20:29
*** vdrok_ is now known as vdrok20:29
*** mrda has joined #openstack-ironic20:30
*** baoli has quit IRC20:31
*** morgabra has joined #openstack-ironic20:32
*** vishwanathj has joined #openstack-ironic20:33
*** serverascode_ is now known as serverascode20:34
*** odyssey4me has joined #openstack-ironic20:35
*** adu has joined #openstack-ironic20:35
*** moshele has joined #openstack-ironic20:35
*** jjohnson2 has quit IRC20:43
*** mjturek1 has joined #openstack-ironic20:44
*** jaybeale has quit IRC20:49
*** ijw has quit IRC20:51
*** rbudden has joined #openstack-ironic20:52
*** mmedvede has joined #openstack-ironic20:52
*** piet has joined #openstack-ironic20:53
*** Goneri has quit IRC20:57
*** dmk0202 has joined #openstack-ironic21:00
*** adu has quit IRC21:01
*** Sukhdev has joined #openstack-ironic21:09
*** intr1nsic has quit IRC21:09
*** xek_ has joined #openstack-ironic21:10
*** mjturek1 has quit IRC21:10
*** intr1nsic has joined #openstack-ironic21:10
*** anush has quit IRC21:10
*** Sukhdev has quit IRC21:10
*** xek has quit IRC21:11
*** trown is now known as trown|outtypewww21:11
*** anush has joined #openstack-ironic21:11
*** dmk0202 has quit IRC21:12
*** jlvillal has quit IRC21:12
*** jlvillal has joined #openstack-ironic21:12
*** mjturek1 has joined #openstack-ironic21:14
*** adu has joined #openstack-ironic21:21
*** mbound has joined #openstack-ironic21:25
*** [1]cdearborn has quit IRC21:26
*** mjturek1 has quit IRC21:36
openstackgerritJim Rollenhagen proposed openstack/ironic: Test post don't upvote  https://review.openstack.org/31186521:42
openstackgerritJim Rollenhagen proposed openstack/ironic: Test post don't upvote  https://review.openstack.org/31186521:43
*** afaranha has joined #openstack-ironic21:43
*** caiobo has joined #openstack-ironic21:44
JayFjroll: a little curious what you're up to21:45
jrollJayF: grenade things21:45
jrollsee the depends-on21:45
JayFanything I can do to help?21:45
jrollnah, I'm heading out21:46
jrollso what I've done so far is21:46
jrollrun devstack on onmetal with 64 VMs21:46
JayFthat's right, you're EST now. d'oh.21:46
jrollrun tempest smoke21:46
jrollsee what breaks21:46
JayFexclude those tests, continue21:47
JayFgot it21:47
jrolllike, the tests I skipped, just don't work21:47
jrollbecause devstack vms have one nic, they require two21:47
JayFgot it21:47
jrollbut now I think I hit some races21:47
jrolland then running in serial, I think I hit weird dirty environment things21:47
JayFso is there a doc for doing what you're doing? mocking out the gate? just installing devstack and going with it?21:47
JayFI want to push button, receive gate vm21:48
jrollso now I run in the gate and start fresh tomorrow21:48
JayFgot it21:48
jrollnot really, jlvillal has some things for making it gate-like21:48
*** Sukhdev has joined #openstack-ironic21:48
jrollI'm just running devstack with the config from our docs, but 64 VMs21:48
jlvillal64?21:48
JayFgotcha21:48
jrolloh and turns out the pxe append params thing needs noapic on onmetal21:48
jrolljlvillal: yes21:48
jlvillalCool :)21:49
jrolljlvillal: 128gb ram21:49
jrolloh and HARDWARE NOT NESTED VIRT21:49
* jroll wants hw in the gate real bad21:49
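
Roughly what jroll's setup could look like in devstack's local.conf; IRONIC_VM_COUNT and IRONIC_VM_SPECS_RAM are real devstack-ironic knobs, but the values here are illustrative, and the extra-kernel-parameter option comes from the "Devstack: allow extra PXE params" change that merged earlier in this log rather than from anything stated in the conversation:

    [[local|localrc]]
    enable_plugin ironic https://git.openstack.org/openstack/ironic
    IRONIC_VM_COUNT=64
    IRONIC_VM_SPECS_RAM=1024
    # extra kernel parameters (e.g. noapic for onmetal) would be passed through
    # the new devstack option added by the change above; the exact variable
    # name is not given here
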
*** Sukhdev has quit IRC21:49
aduhas anyone gotten ironic to work with non-x86 platforms, like raspberry pi?21:52
JayFrpi doesn't support the primitives needed to be provisioned with ironic21:54
JayFlike the ability to pxe boot or remotely configure stuff21:55
aduyeah, I'm aware, but you can create a specially crafted sd card to do network boot similar to pxe, like a tftp-downloaded kernel, etc21:56
jrollbut you'll overwrite that sd card with an image each time you deploy :/21:56
adujroll: so, just have all images contain the specially crafted bootloader...21:57
jrollyeah, I suppose21:57
*** ametts has quit IRC21:57
jrollso to answer your question, no, I don't believe anyone has run ironic against a raspi :)21:57
*** wajdi has quit IRC21:58
adujroll: I'd like to try21:58
jrollI'd love to see the results :)21:58
*** Sukhdev has joined #openstack-ironic21:58
jrollI have a raspi here on my desk that is currently unemployed21:58
JayFyou'd also need external power control21:58
adujroll: I have about 10 unemployed raspis21:58
* jroll looks at his finger21:59
jrollfake power driver ftw21:59
JayFegad21:59
aduI have a WeMo switch I can use for remote power control21:59
JayFhonestly if you're using fake power driver, I think an array of sd card writers would be more efficient than ironic21:59
JayFlol21:59
jrollJayF: real hardware test bed though22:00
jrollclearly I'm not going to deploy a DC full of pis22:00
* jroll would rather deploy a kitchen full of pies22:00
JayFI mean, fake power driver, nearly fake management driver22:00
JayFyou're not testing a lot at that point22:00
aduhttp://www.belkin.com/us/F7C027-Belkin/p/P-F7C027;jsessionid=C8BF5079206E98A3A3D6D1A67CF50A96/22:00
JayFI don't think we have a wemo power driver, but that's a cool as hell idea for one22:01
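
If someone did try this, a rough sketch using the fake power driver jroll mentions (as JayF notes there is no wemo driver, so power would have to be cycled by hand or via the switch whenever ironic expects a reboot); the node name and MAC are placeholders:

    # ironic.conf would enable a driver that fakes power but still does PXE boot
    # and the normal deploy, e.g.:
    #   [DEFAULT]
    #   enabled_drivers = fake_pxe

    # enroll the raspi as a node with no real power management
    ironic node-create -d fake_pxe -n raspi-0
    ironic port-create -n <node-uuid> -a b8:27:eb:00:00:00
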
aduI've also seen people deploy *pairs* of raspis, one for remote management and one for the actual work22:01
*** ChrisAusten has quit IRC22:02
adusince they're so cheap22:02
aduyou could just have one always running that ironic-agent thing22:02
aduor does that assume it's running on the target?22:03
JayFwell that's not exactly how the ironic-agent-thing works :P22:03
JayFIPA is a provisioning ramdisk22:03
JayFaka API calls like "erase my disk" and "write this image"22:03
JayFnot "reboot me" and such, which is generally left to BMCs/PDUs22:03
aduJayF: well, sorry, but the documentation is hard to read, since I've never been able to get ironic or nova to work22:03
aduI don't know how it's supposed to be; all I know is rumors and documentation22:04
jrolladu: well, here's how ironic deploys a thing, in terms of the interactions between ironic and hardware http://docs.openstack.org/developer/ironic/deploy/user-guide.html#example-1-pxe-boot-and-iscsi-deploy-process22:04
*** afaranha has quit IRC22:05
jrollthere's some more general things about how it interacts with nova if you scroll up22:05
aduso iscsi is a block storage protocol, right?22:06
*** rbudden has quit IRC22:08
*** praneshp has quit IRC22:08
jrolladu: yes, so that deployment method exposes the hard disk as an iscsi device, which ironic mounts and writes22:10
jrollbelow that is one that does not use iscsi, but rather http+dd22:11
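
The two deploy flavors jroll describes map to different in-tree driver families in this era's ironic; a hedged example, with the node name as a placeholder:

    # see which drivers the conductor has loaded
    ironic driver-list

    # pxe_* drivers use the iSCSI deploy described above; agent_* drivers have
    # the ramdisk pull the image over HTTP and write it to disk locally
    ironic node-create -d agent_ipmitool -n example-node-1
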
*** mbound has quit IRC22:11
aduraspi/raspbian already supports iscsi and u-boot (which implements part of PXE for the raspi), and I would probably disable ipmi because the servers are always on, so I think in theory it should work; I just need to figure out how to configure everything22:11
jrollsure, if you can get it to work that would be awesome22:12
jrollironic needs to be able to reboot the server and such, though, hence ipmi22:12
aduwell, full IPMI support would require a pair deployment22:14
adupairs would also allow you to have the second raspi hooked up to the raw console22:14
jrollsure22:15
*** moshele has quit IRC22:15
*** absubram has quit IRC22:16
*** ijw has joined #openstack-ironic22:16
*** fragatin_ has joined #openstack-ironic22:19
*** fragatin_ has quit IRC22:19
jlvillaljroll: Or anyone else. Would there be an objection to getting rid of the ironic-bm-logs directory? And moving that content up one directory?22:20
jlvillalclarkb is asking me about it over in infra22:20
jrollI'll pop over there22:20
*** fragatina has quit IRC22:22
*** ijw has quit IRC22:22
*** adu has quit IRC22:26
*** blakec1 has quit IRC22:27
*** david-lyle has quit IRC22:28
*** mkovacik has quit IRC22:34
*** lucasagomes has quit IRC22:36
*** fragatina has joined #openstack-ironic22:39
*** fragatina has quit IRC22:39
*** fragatina has joined #openstack-ironic22:40
*** lucasagomes has joined #openstack-ironic22:41
*** piet has quit IRC22:49
*** davidlenwell has quit IRC22:55
*** caiobo has quit IRC23:02
*** gugl has quit IRC23:02
*** davidlenwell has joined #openstack-ironic23:03
*** ijw has joined #openstack-ironic23:10
*** Nisha has joined #openstack-ironic23:11
*** rloo has quit IRC23:13
*** ijw has quit IRC23:16
*** jrist has quit IRC23:19
*** Sukhdev has quit IRC23:22
*** Sukhdev has joined #openstack-ironic23:25
*** blakec1 has joined #openstack-ironic23:27
*** vishwanathj has quit IRC23:30
*** keedya has joined #openstack-ironic23:33
*** jrist has joined #openstack-ironic23:34
*** ChrisAusten has joined #openstack-ironic23:34
*** keedya has quit IRC23:37
*** keedya has joined #openstack-ironic23:38
*** Nisha has quit IRC23:48
*** wajdi has joined #openstack-ironic23:51
*** blakec1 has quit IRC23:51
*** chlong has joined #openstack-ironic23:52
*** rloo has joined #openstack-ironic23:52
*** mtanino has quit IRC23:56
*** jaybeale has joined #openstack-ironic23:58
*** piet has joined #openstack-ironic23:58
