Wednesday, 2015-04-15

[00:01] *** naohirot has joined #openstack-ironic
[00:02] *** Marga_ has quit IRC
[00:03] *** ijw has quit IRC
[00:04] *** yuanying has quit IRC
[00:09] *** yuanying has joined #openstack-ironic
[00:24] *** r-daneel has quit IRC
[00:28] *** saripurigopi has joined #openstack-ironic
[00:34] <egonl>
[00:36] *** saripurigopi has quit IRC
[00:39] *** saripurigopi has joined #openstack-ironic
[00:43] *** rloo has quit IRC
[00:46] *** kkoski has joined #openstack-ironic
[00:47] *** Marga_ has joined #openstack-ironic
[00:50] *** yuanying_ has joined #openstack-ironic
[00:52] *** Marga_ has quit IRC
[00:53] *** yuanying has quit IRC
[00:53] *** Marga_ has joined #openstack-ironic
[00:54] *** thrash is now known as thrash|g0ne
[01:03] *** Marga_ has quit IRC
[01:10] *** achanda has quit IRC
[01:18] <mgagne> is there a way to tell a driver to always include some values in the extra or driver_info field?
[01:21] <mgagne> looking at the vendor interface now, thanks!
[01:24] *** chenglch has joined #openstack-ironic
[01:34] *** Sukhdev has quit IRC
[01:36] *** jxiaobin has joined #openstack-ironic
[01:49] *** saripurigopi has quit IRC
[01:57] *** kkoski has quit IRC
[01:58] *** kkoski has joined #openstack-ironic
[02:09] *** kkoski has quit IRC
[02:10] *** achanda has joined #openstack-ironic
[02:17] *** achanda has quit IRC
[02:17] *** harlowja is now known as harlowja_away
[02:33] *** kkoski has joined #openstack-ironic
[02:42] *** davideagnello has quit IRC
[02:43] *** ramineni has joined #openstack-ironic
[03:03] *** kkoski has quit IRC
[03:13] *** achanda has joined #openstack-ironic
[03:17] *** Sukhdev has joined #openstack-ironic
[03:17] *** achanda has quit IRC
[03:37] *** Nisha has joined #openstack-ironic
[03:37] *** tiagogomes has quit IRC
[03:37] *** tiagogomes has joined #openstack-ironic
[03:49] *** achanda has joined #openstack-ironic
[03:54] *** saripurigopi has joined #openstack-ironic
[03:57] <openstackgerrit> Shivanand Tendulker proposed openstack/ironic: grub2 bootloader support for uefi boot mode  https://review.openstack.org/166192
[03:58] <openstackgerrit> Shivanand Tendulker proposed openstack/ironic: Secure boot support for pxe_ilo driver  https://review.openstack.org/154808
[04:07] *** krtaylor has quit IRC
[04:10] *** krtaylor has joined #openstack-ironic
[04:30] *** davideagnello has joined #openstack-ironic
[04:35] *** davideagnello has quit IRC
[04:49] <openstackgerrit> Michael Davies proposed openstack/python-ironicclient: Cache negotiated api microversion for this server  https://review.openstack.org/173674
[04:50] *** achanda has quit IRC
[04:50] *** achanda has joined #openstack-ironic
[05:11] *** yog has joined #openstack-ironic
[05:11] *** yog is now known as Guest36614
[05:17] *** Sukhdev has quit IRC
[05:28] <saripurigopi> How to check whether BP is implemented or not, I'm looking at https://review.openstack.org/#/c/135899/ RAID interface
[05:29] <ramineni> saripurigopi: you can check the bp status - https://blueprints.launchpad.net/ironic/+spec/ironic-generic-raid-interface
[05:30] <ramineni> saripurigopi: Its in code review stage
[05:31] <saripurigopi> ramineni: got it, thank you. can I submit bp which is dependent on this implementation?
[05:31] <ramineni> saripurigopi: yes, you can do that
[05:32] <saripurigopi> ramineni: cool, thank you.
[05:32] <ramineni> saripurigopi: you need a spec for it though, which states its depending on the generic implementation
[05:34] <saripurigopi> ramineni: okay.
[05:36] <saripurigopi> Is there interface to auto discover nodes based on some qualification?
[05:59] *** ukalifon has joined #openstack-ironic
[06:01] *** Haomeng|2 has joined #openstack-ironic
[06:04] *** Haomeng has quit IRC
[06:08] <openstackgerrit> jxiaobin proposed openstack/ironic-specs: Mount config drive as loop device to supply data to cloud-init  https://review.openstack.org/173142
[06:18] *** alex_xu has quit IRC
[06:29] *** jcoufal has joined #openstack-ironic
[06:30] *** alex_xu has joined #openstack-ironic
[06:40] *** mgoddard has quit IRC
[06:41] *** mgoddard has joined #openstack-ironic
[06:48] *** achanda has quit IRC
[07:06] *** dtantsur|afk is now known as dtantsur
[07:06] <dtantsur> Morning!
[07:07] <Haomeng|2> good morning dtantsur!
[07:08] <dtantsur> o/
[07:08] <pshige> dtantsur: morning
[07:09] *** rwsu has quit IRC
[07:11] <mrda> hi dtantsur, Haomeng|2, pshige
[07:14] <pshige> haomeng|2, mrda: morning
[07:21] *** a1exhughe5 has joined #openstack-ironic
[07:21] <Nisha> dtantsur, morning
[07:22] <pshige> Nisha: morning :)
[07:22] <Nisha> pshige, morning :)
[07:24] *** jistr has joined #openstack-ironic
[07:28] <Haomeng|2> mrda, pshige, morning:)
[07:32] *** ifarkas has joined #openstack-ironic
[07:35] *** chlong has quit IRC
[07:37] <openstackgerrit> Michael Davies proposed openstack/python-ironicclient: Cache negotiated api microversion for this server  https://review.openstack.org/173674
[07:42] *** rameshg87 has joined #openstack-ironic
[07:42] <rameshg87> good afternoon ironic
[07:58] *** bkero has joined #openstack-ironic
[08:05] *** rameshg87 is now known as rameshg87-lunch
[08:06] *** pas-ha has joined #openstack-ironic
[08:08] *** viktors has joined #openstack-ironic
[08:08] *** davideagnello has joined #openstack-ironic
[08:13] *** romcheg has joined #openstack-ironic
[08:13] *** davideagnello has quit IRC
[08:14] *** Nisha has quit IRC
[08:14] *** Nisha has joined #openstack-ironic
[08:17] *** lucasagomes has joined #openstack-ironic
[08:27] *** ndipanov has joined #openstack-ironic
[08:33] *** rameshg87-lunch is now known as rameshg87
[08:37] *** dtantsur is now known as dtantsur|brb
[08:38] <openstackgerrit> Haomeng,Wang proposed openstack/ironic: supports alembic migration for db2  https://review.openstack.org/173721
[08:47] *** Nisha has quit IRC
[08:47] <openstackgerrit> Lucas Alvares Gomes proposed openstack/ironic-specs: New stateless iPXE driver  https://review.openstack.org/130812
[08:48] *** a1exhughe5 has quit IRC
[08:49] *** a1exhughe5 has joined #openstack-ironic
[08:49] *** rameshg87 has quit IRC
[08:50] *** pelix has joined #openstack-ironic
[08:50] *** rameshg87 has joined #openstack-ironic
[08:51] <openstackgerrit> Merged openstack/ironic-specs: Add pxe_ucs driver spec for liberty  https://review.openstack.org/173271
[08:54] <openstackgerrit> IWAMOTO Toshihiro proposed openstack/ironic-specs: Collect IPA logs  https://review.openstack.org/168799
[08:57] *** krtaylor has quit IRC
[08:58] <rameshg87> lucasagomes: hello
[08:58] <lucasagomes> rameshg87, hi there
[08:59] <rameshg87> lucasagomes: the specs https://review.openstack.org/#/c/130812/ and https://review.openstack.org/#/c/134865/ ideally are suited as a new implementation of boot interface
[09:00] <rameshg87> lucasagomes: we have the boot interface thing as one of priority items for kilo
[09:00] <lucasagomes> virtual media deploy?
[09:00] <rameshg87> lucasagomes: yeah
[09:00] <lucasagomes> right yeah, but didn't finish that :-(
[09:00] <rameshg87> lucasagomes: i meant boot interface thing as one of priority items for liberty
[09:01] <rameshg87> sorry
[09:01] <rameshg87> lucasagomes: i hope ipxe also fits in better as a new boot interface implementation
[09:01] <lucasagomes> rameshg87, yeah it would :-)
[09:02] <rameshg87> lucasagomes: but i am not sure how it can be prioritized
[09:02] <lucasagomes> I think we do have it as a priority as per devananda email for PTL
[09:02] <rameshg87> yeah
[09:02] <lucasagomes> he mentions that
[09:02] <rameshg87> lucasagomes: so does it make sense for https://review.openstack.org/#/c/130812/ and https://review.openstack.org/#/c/134865/ to be proposed as a new boot interface instead ?
[09:02] <lucasagomes> depending on the new boot interface?
[09:02] <openstackgerrit> Naohiro Tamura proposed openstack/ironic: Add iRMC Virtual Media Deploy module for iRMC Driver  https://review.openstack.org/151958
[09:02] <lucasagomes> or as it?
[09:03] <lucasagomes> cause they r different work
[09:03] <rameshg87> yeah i meant depending on new boot interface
[09:03] <lucasagomes> do we have a spec for the new boot interface?
[09:03] <rameshg87> lucasagomes: https://review.openstack.org/#/c/168698/
[09:03] <lucasagomes> or we are just doing it? cause to be honest iPXE doesn't depend on that... I just need a driver vendor passthru endpoint
[09:04] <lucasagomes> cool
[09:04] <lucasagomes> we can align things for sure
[09:04] <rameshg87> lucasagomes: yeah
[09:04] *** krtaylor has joined #openstack-ironic
[09:04] <rameshg87> lucasagomes: i had some rough POCs as well
[09:04] <lucasagomes> cool
[09:04] <rameshg87> lucasagomes: see references
[09:04] <lucasagomes> will take a look
[09:04] <rameshg87> lucasagomes: it's not exactly according to spec
[09:04] <rameshg87> lucasagomes: but working ones
[09:05] <rameshg87> okay, let me know what you think after having a look :)
[09:06] *** Marga_ has joined #openstack-ironic
[09:07] *** athomas has quit IRC
[09:08] *** athomas has joined #openstack-ironic
[09:10] <lucasagomes> rameshg87, will do, I'm finishing reviewing another patch
[09:13] <rameshg87> lucasagomes: sure..thanks
[09:18] *** pas-ha has quit IRC
[09:21] *** pas-ha has joined #openstack-ironic
[09:25] *** derekh has joined #openstack-ironic
[09:28] *** Marga_ has quit IRC
[09:29] *** Marga_ has joined #openstack-ironic
[09:45] *** ukalifon has quit IRC
[09:52] *** jamielennox is now known as jamielennox|away
[09:58] *** ukalifon has joined #openstack-ironic
[10:00] *** naohirot has quit IRC
[10:03] *** chenglch has quit IRC
[10:10] *** dtantsur|brb is now known as dtantsur
[10:17] *** ukalifon has quit IRC
[10:18] *** ukalifon has joined #openstack-ironic
[10:22] *** achanda has joined #openstack-ironic
[10:26] *** ukalifon has quit IRC
[10:26] *** ukalifon has joined #openstack-ironic
[10:27] *** jcoufal has quit IRC
[10:27] *** achanda has quit IRC
[10:27] *** jcoufal has joined #openstack-ironic
[10:30] <rameshg87> folks, anyone face the below issue ?
[10:30] <rameshg87> i run stack.sh with enabling ironic and virt driver as ironic
[10:31] <rameshg87> the nova-compute ironic virt driver gets started first (n-cpu) and then tries to contact the ironic-api service and retries 60 times
[10:31] <rameshg87> ir-api and ir-cond doesn't come up by that time and n-cpu dies
[10:32] <vdrok> rameshg87, yup, had this issue
[10:32] <vdrok> rameshg87, morning
[10:32] <rameshg87> later on after starting ir-api and ir-cond, ironic waits for the nova-hypervisor stats to appear
[10:32] <rameshg87> vdrok: morning :)
[10:32] <rameshg87> and fails
[10:32] <rameshg87> vdrok: did you increase the retry count ?
[10:32] <vdrok> rameshg87, this one helps
[10:32] <vdrok> rameshg87, https://review.openstack.org/#/c/171406/
[10:32] <rameshg87> vdrok: oh okay
[10:33] <vdrok> rameshg87, well you could increase retry count but it doesn't seem right :0
[10:33] <rameshg87> vdrok: but this issue may come up at some point of time later on due to some other reason
[10:33] <rameshg87> vdrok: i think we should restart n-cpu if it's not running after enrolling the nodes
[10:33] <rameshg87> vdrok: wdyt ?
[10:33] * rameshg87 fetches the devstack patch
[10:35] <vdrok> rameshg87, in this case restarting n-cpu makes sense, but not sure that we should do that in general
[10:36] <rameshg87> vdrok: yeah, may be when we see nova-compute is not running ?
[10:36] <lucasagomes> rameshg87, there's this here as well https://review.openstack.org/#/c/171313/
[10:36] <lucasagomes> perhaps we should bump that default in the Ironic driver
[10:37] <rameshg87> oh :)
[10:37] * rameshg87 realises everyone has faced this issue before
[10:38] <rameshg87> so really there are multiple ways to solve this
[10:38] <rameshg87> i just tried adding stop_nova_compute and start_nova_compute before we wait for hypervisor stats, but doesn't work straight away
[10:39] <rameshg87> stop_*() fails stack.sh if it is not running
[10:39] <rameshg87> let me try this ...
[10:39] <vdrok> rameshg87, lucasagomes, in this case when n-cert is disabled difference between failing n-cpu and starting ir-cond may be long (like 5 min)
[10:40] <vdrok> so I think increasing retries is not good for this case, seems that there is some other slowdown
[10:40] <rameshg87> vdrok: yeah i felt same ..
[10:42] <lucasagomes> yeah, do we need to stop n-cpu if Ironic is not present?
[10:42] <lucasagomes> or should we just skip it, leave it dorment
[10:42] <lucasagomes> dormant*
[10:43] <rameshg87> lucasagomes: yeah check if any process nova-compute is running, and then start it if it's not running
[10:44] <vdrok> ooh, and then there is this one https://review.openstack.org/#/c/173493/2
[10:44] <vdrok> lots of ways to fix it :)
[10:44] <rameshg87> yeah :)
[10:45] <rameshg87> and then there is this: https://review.openstack.org/#/c/156887/7/lib/ironic (684-687)
[10:45] <rameshg87> i am just testing it out
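[Editor's note] The race above (n-cpu starting before ironic-api is reachable) is what the linked devstack patches address in different ways. A minimal sketch of the "wait for the service before starting the dependent one" idea, written in Python rather than devstack's bash; the function name is illustrative, and 6385 is ironic-api's default port:

```python
# Sketch only: poll a TCP endpoint until it accepts connections,
# instead of bumping nova-compute's retry count.
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Return True once host:port accepts connections, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection raises OSError while nothing is listening
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# e.g. wait for ironic-api before starting n-cpu:
# if not wait_for_port("127.0.0.1", 6385, timeout=300):
#     raise RuntimeError("ironic-api never came up")
```

This keeps the dependency explicit in the startup script rather than hidden inside each service's retry configuration.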
[10:48] *** zhenguo has quit IRC
[10:50] <rameshg87> lucasagomes: just replied to your comment on https://review.openstack.org/#/c/168698/2/specs/liberty/new-boot-interface.rst
[10:51] <lucasagomes> rameshg87, we are not booting the same ramdisk always, specially for rescue
[10:52] <lucasagomes> rameshg87, for the rescue ramdisk we may not include any of the deploy bits
[10:52] <lucasagomes> in fact, I think that the RAX guys doesn't even use IPA for rescue
[10:52] <lucasagomes> it's yet another ramdisk
[10:52] <lucasagomes> jroll, JayF ^
[10:52] <lucasagomes> rameshg87, https://github.com/rackerlabs/onmetal-rescue-agent/
[10:55] <rameshg87> lucasagomes: oh okay
[10:56] <lucasagomes> rameshg87, one could use the same ramdisk for everything, just by setting the same glance id for everything and all
[10:56] <lucasagomes> but it's important to have the flexibility of booting different stuff for different propouses
[10:57] <rameshg87> lucasagomes: so what about deploy and cleaning ?
[10:57] <rameshg87> lucasagomes: should we have this there too ?
[10:57] <lucasagomes> rescue mainly since the ramdisk should include a bunch of tools for troubeshooting that is not needed otherwise
[10:58] <lucasagomes> rameshg87, cleaning ? hmm I think it should. For cleaning right now we just boot the same ramdisk right?
[10:58] *** ramineni has quit IRC
[10:58] <rameshg87> lucasagomes: yes
[10:59] <lucasagomes> perhaps for this first implementation we should just call the deploy methods in cleaning?
[11:00] <rameshg87> lucasagomes: but i felt we might end up overloading the boot interface
[11:00] <rameshg87> lucasagomes: i just thought of classifying it like this - either bare metal is always booted to something that ironic will use it for some operator
[11:01] <rameshg87> or operations
[11:01] <rameshg87> OR bare metal is booted to the deployed image for the user to use it
[11:01] <rameshg87> always assumption seemed wrong - more flexibility is required there
[11:01] <lucasagomes> yeah it keeps things simple indeed
[11:02] <lucasagomes> yeah it will require more flexibility... hmm /me thinks
[11:02] *** Marga_ has quit IRC
[11:05] <lucasagomes> rameshg87, how it will works in code... the deploy interface will be in control over the boot interface right?
[11:05] <rameshg87> lucasagomes: but if it still wanted deploy/cleaning to be different from rescue it can be possible, right ?
[11:05] <rameshg87> lucasagomes: yes
[11:05] <lucasagomes> rameshg87, yeah I think it should... but let's think
[11:05] <lucasagomes> maybe we just need a generic method
[11:05] <lucasagomes> as you did in that spec
[11:06] <rameshg87> lucasagomes: like this it will look: https://review.openstack.org/#/c/166513/4/ironic/drivers/modules/iscsi_deploy.py
[11:06] <lucasagomes> but the deploy interface must be aware about what is the boot interface
[11:06] <rameshg87> it's not according to the spec
[11:06] <lucasagomes> rameshg87, will take a look
[11:06] <lucasagomes> I will need to step out a bit and grab something to eat quickly
[11:06] <rameshg87> oh okay :)
[11:07] <rameshg87> np, see you later, i should be around
[11:07] <rameshg87> i am just removing "boot" from it
[11:07] * lucasagomes is in the office today, will skip thursday
[11:07] <lucasagomes> ack
[11:09] <openstackgerrit> Ramakrishnan G proposed openstack/ironic-specs: Add new boot interface in Ironic  https://review.openstack.org/168698
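[Editor's note] The split being discussed — a deploy interface driving a separate boot interface so deploy, cleaning, and rescue can each boot a different ramdisk while the instance boots the user image — could look roughly like this. A sketch based on the conversation, not on the actual spec 168698; all names here are illustrative:

```python
# Rough shape of a pluggable boot interface: the deploy interface calls
# these hooks instead of owning the boot mechanics itself.
import abc

class BootInterface(abc.ABC):
    @abc.abstractmethod
    def prepare_ramdisk(self, task, ramdisk_params):
        """Set the node up to boot a service ramdisk (deploy/clean/rescue)."""

    @abc.abstractmethod
    def clean_up_ramdisk(self, task):
        """Tear down whatever prepare_ramdisk configured."""

    @abc.abstractmethod
    def prepare_instance(self, task):
        """Set the node up to boot the deployed user image."""
```

A PXE driver and a virtual-media driver could then each implement these hooks, and a rescue ramdisk would just be different `ramdisk_params` rather than a different deploy driver.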
[11:12] *** thrash|g0ne is now known as thrash
[11:21] *** yuanying_ has quit IRC
[11:23] *** rameshg87 is now known as rameshg87-away
[11:39] *** saripurigopi has quit IRC
[11:46] *** saripurigopi has joined #openstack-ironic
[11:46] *** davideagnello has joined #openstack-ironic
[11:50] *** davideagnello has quit IRC
[11:58] *** Haomeng has joined #openstack-ironic
[11:59] <pshige> Is specs core meeting here in three hours?
[12:00] <dtantsur> specs core meeting? Oo
[12:00] *** Haomeng|2 has quit IRC
[12:01] *** rameshg87-away has quit IRC
[12:01] <pshige> https://wiki.openstack.org/wiki/Meetings/Ironic
[12:01] <pshige> the end of announcement
[12:02] <pshige> in Agenda for next meeting
[12:02] <dtantsur> communication rocks!
[12:02] <dtantsur> someone should be told about mailing list >_<
[12:02] <dtantsur> pshige, thanks!
[12:03] <pshige> dtantsur: thank you!
[12:03] <lucasagomes> dtantsur, it will collide with our meeting at 15UTC
[12:03] <dtantsur> yep
[12:04] <lucasagomes> I said that yesterday, so it's ok if we delay a bit
[12:04] <dtantsur> let's make sure that next time we have a ML announcement
[12:05] *** arif-ali has quit IRC
[12:06] *** trown|outttypeww is now known as trown
[12:08] *** dprince has joined #openstack-ironic
[12:12] *** arif-ali has joined #openstack-ironic
[12:29] *** zhenguo has joined #openstack-ironic
[12:32] <lucasagomes> dtantsur, +1 totally
[12:33] <lucasagomes> even because one of the IRC meetings happens at 3 am here :-)
[12:35] *** saripurigopi has quit IRC
[12:36] <openstackgerrit> Lucas Alvares Gomes proposed openstack/ironic: Troubleshoot: Do not power off node if deployment fail  https://review.openstack.org/172932
[12:46] <pshige> 3 am in Dublin! in 13 hours ...
[12:53] <openstackgerrit> Dmitry Tantsur proposed stackforge/ironic-discoverd: Bump version to 1.2.0  https://review.openstack.org/173807
[12:57] *** rloo has joined #openstack-ironic
[13:00] <openstackgerrit> Zhenguo Niu proposed openstack/ironic: Add attempt of max retries to power state sync logs  https://review.openstack.org/167122
[13:04] <lucasagomes> pshige, :-D
[13:04] <lucasagomes> yeah... we are UTC +1 now
[13:04] * lucasagomes likes Ireland's TZ
[13:18] * pshige likes Japan Standard Time (2218 JST now)
[13:23] * dtantsur likes Friday evening whatever time
[13:24] *** david-lyle has quit IRC
[13:24] <pshige> dtantsur, :-D
[13:28] <NobodyCam> good morning says the man waiting on koffi
[13:31] *** r-daneel has joined #openstack-ironic
[13:33] <dtantsur> NobodyCam, morning
[13:33] <NobodyCam> morning dtantsur :)
[13:33] <pshige> NobodyCam: morning :)
[13:34] <NobodyCam> morning pshige :)
[13:34] <dtantsur> folks, I'd like to seriously discuss retrying HTTP 409 in our client. Getting all kind of http://paste.openstack.org/show/203982/ makes writing automation insane...
[13:34] *** MattMan has quit IRC
[13:34] <jlvillal> Good morning, dtantsur NobodyCam lucasagomes pshige and the rest of Ironic
[13:35] <dtantsur> jlvillal, morning
[13:35] <NobodyCam> mrorning jlvillal
[13:35] <NobodyCam> dtantsur: don't we already re-try most things in te client?
[13:35] <dtantsur> NobodyCam, looks like no
[13:35] <jlvillal> dtantsur, Makes sense to me to retry, if the 409 is a temporary state.
[13:36] <pshige> jlvillal: morning
[13:37] <NobodyCam> dtantsur: ya, I think that makes sense. ocf I will note that I have yet to ingest any coffee
[13:39] * dtantsur does not risk opening IRC without the first cup
[13:41] *** Guest36614 has quit IRC
[13:41] *** mariojv has joined #openstack-ironic
[13:41] <openstackgerrit> Merged stackforge/ironic-discoverd: Bump version to 1.2.0  https://review.openstack.org/173807
[13:42] *** MattMan has joined #openstack-ironic
[13:44] <NobodyCam> hehehee
[13:44] *** saripurigopi has joined #openstack-ironic
[13:45] <lucasagomes> jlvillal, NobodyCam morning
[13:46] <NobodyCam> morning lucasagomes
[13:46] <NobodyCam> :)
[13:48] *** mtanino has joined #openstack-ironic
[13:49] <rloo> hi everyone
[13:49] <NobodyCam> good morning rloo :)
[13:50] <jlvillal> good morning rloo and again to lucasagomes :)
[13:50] <lucasagomes> rloo, morning
[13:50] <pshige> rloo: morning :)
[13:50] <rloo> morning NobodyCam, jlvillal. Hi lucasagomes, dtantsur, pshige.
[13:50] <dtantsur> rloo, morning
[13:50] <rloo> dtantsur: when does the client get a 409?
[13:51] * jlvillal is looking forward to meeting the IRC people in person at the summit
[13:51] <rloo> dtantsur: I don't see why we can't add some environment/option to the client, to retry?
[13:51] <dtantsur> rloo, I think we should
[13:51] <dtantsur> rloo, we get it on node-set-provision-state
[13:51] <dtantsur> nowadays you have to call this much more often :)
[13:53] *** romcheg has quit IRC
[13:53] <rloo> dtantsur: I guess it depends on how configurable it should be. retry only on 409, retry only on 409+node-set-provision-state, ...
[13:54] <dtantsur> rloo, I think it should be retry or don't retry, with retry being the default
[13:54] <rloo> dtantsur: i was wondering if the API folks have an opinion on that. Not that I've paid any attention to what they're doing.
[13:55] <dtantsur> rloo, we can default to "rety" for CLI and default to "as before" for API
[13:55] <rloo> dtantsur: maybe they only deal with api, not CLI.
[13:55] <rloo> dtantsur: oh, you want in both cli and api.
[13:56] <rloo> dtantsur: me thinks this may require a spec
[13:56] <dtantsur> rloo, hmm, are we talking about ironicclient API? I'm talking about it
[13:56] *** kkoski has joined #openstack-ironic
[13:56] <pshige> jlvilla: I will attend the summit !
[13:57] <NobodyCam> pshige: awesome!!!
[13:57] <rloo> dtantsur: oh. the library. I was thinking of the actual http request.
[13:57] <dtantsur> maybe, but that's not my concern right now
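[Editor's note] The retry behaviour dtantsur is proposing already exists in nova's ironic client_wrapper. A rough Python sketch of that idea; `Conflict` stands in for ironicclient's real HTTP 409 exception, and `call_with_retries` is an illustrative name, not an actual ironicclient function:

```python
# Illustrative sketch: retry a client call while the server answers
# HTTP 409 Conflict (e.g. the node is locked by another operation),
# then give up and re-raise.
import time

class Conflict(Exception):
    """Stand-in for the HTTP 409 Conflict error from ironic-api."""

def call_with_retries(func, *args, max_retries=60, retry_interval=2.0, **kwargs):
    """Call func(*args, **kwargs), retrying while it raises Conflict."""
    for attempt in range(1, max_retries + 1):
        try:
            return func(*args, **kwargs)
        except Conflict:
            if attempt == max_retries:
                raise
            time.sleep(retry_interval)

# Usage would look something like:
# call_with_retries(client.node.set_provision_state, node_uuid, 'active')
```

With retry as the default for the CLI and off by default for library callers, existing API users would keep the current behaviour.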
[13:58] <rloo> did you see this patch for the client: https://review.openstack.org/#/c/173852/
[13:59] <rloo> so we're going to have a kilo branch for the client, but we'll continue using the package versioning we've been using?
[14:03] <rloo> with the specs being moved from kilo to kilo-archive to liberty, how do people who know git, get the history of those files? eg https://github.com/openstack/ironic-specs/blob/master/specs/liberty/cisco-ucs-pxe-driver.rst?
[14:05] <jlvillal> pshige, Great! :)
[14:06] <openstackgerrit> Ruby Loo proposed openstack/ironic-specs: Remove placeholder for Liberty directory  https://review.openstack.org/173893
[14:07] <rloo> pshige, jlvillal: will be great to meet you face to face :-)
[14:08] <openstackgerrit> Ruby Loo proposed openstack/ironic-specs: Remove placeholder in Liberty directory  https://review.openstack.org/173893
[14:08] <NobodyCam> how log is "tox --e py27 -- test_pxe taking for folks? I'm getting Ran 87 tests in 151.053s
[14:08] <NobodyCam> but I think it ia ia test I added
[14:09] <NobodyCam> :-p but I think it is a test I added even
[14:09] * NobodyCam goews back to coffee
[14:09] <NobodyCam> :-p
[14:10] <rloo> NobodyCam: 1.708s for me
[14:10] <jlvillal> rloo, I don't understand your question about git and kilo, kilo-archive, and liberty
[14:10] <NobodyCam> yep its me :-p
[14:10] <rloo> jlvillal: git is funny, if you rename files.
[14:10] <rloo> jlvillal: if you look at that file I linked, what's the history of it?
[14:10] <jlvillal> rloo, One moment
[14:11] *** zz_jgrimm is now known as jgrimm
[14:11] <rloo> jlvillal: that file got moved from 'kilo' directory to 'kilo-archive' directory to 'liberty' directory
[14:11] <jlvillal> If I do this: git log --follow specs/liberty/cisco-ucs-pxe-driver.rst
[14:11] <jlvillal> I can see the rename
[14:11] <jlvillal> rloo, Without --follow I don't see the rename
[14:11] <rloo> jlvillal: I usually look at the history of a file, to get the patch/commit for changes to it
[14:12] <jlvillal> rloo, So you can see the history, but you have to use --follow
[14:13] <rloo> jlvillal: thx, I'll try it
[14:13] <jlvillal> rloo, You're welcome
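[Editor's note] jlvillal's `--follow` point can be demonstrated end to end with a throwaway repo. A sketch assuming the `git` binary is on PATH; the directory layout mirrors the spec move being discussed, but the repo and commit messages are invented for illustration:

```python
# Build a tiny repo, move a file the way the specs were moved
# (kilo/ -> liberty/), and compare `git log` with and without --follow.
import os
import subprocess
import tempfile

def git(*args, cwd):
    return subprocess.run(("git",) + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "demo@example.com", cwd=repo)
git("config", "user.name", "demo", cwd=repo)

os.makedirs(os.path.join(repo, "specs", "kilo"))
with open(os.path.join(repo, "specs", "kilo", "cisco-ucs-pxe-driver.rst"), "w") as f:
    f.write("spec body\n")
git("add", ".", cwd=repo)
git("commit", "-qm", "add spec under kilo", cwd=repo)

os.makedirs(os.path.join(repo, "specs", "liberty"))
git("mv", "specs/kilo/cisco-ucs-pxe-driver.rst",
    "specs/liberty/cisco-ucs-pxe-driver.rst", cwd=repo)
git("commit", "-qm", "move spec to liberty", cwd=repo)

target = "specs/liberty/cisco-ucs-pxe-driver.rst"
without = git("log", "--oneline", "--", target, cwd=repo)
with_follow = git("log", "--follow", "--oneline", "--", target, cwd=repo)
print(len(without.splitlines()))      # history stops at the rename commit
print(len(with_follow.splitlines()))  # --follow traces back through it
```

The plain `git log` only lists commits touching the path's current name, so the pre-rename history is invisible until `--follow` is added.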
[14:14] *** saripurigopi has quit IRC
[14:17] *** saripurigopi has joined #openstack-ironic
[14:18] <BadCub> Morning folks
[14:18] <dtantsur> BadCub, morning
[14:19] <NobodyCam> morning BadCub :)
[14:19] <rloo> jlvillal: do you really like how it looks? https://review.openstack.org/#/c/173427/
[14:20] *** achanda has joined #openstack-ironic
[14:21] <pshige> BadCub: morning :)
[14:23] <rloo> hiya BadCub
[14:24] <pshige> I sat in the back row at the previous design summit, so I didn't chat with almost anyone here, except devananda.
[14:24] *** achanda has quit IRC
[14:25] * dtantsur reapplied for visa yesterday and keeps fingers crossed
[14:26] <lucasagomes> pshige, oh... let's talk more on the next summit then!
[14:26] * BadCub returns with more coffee
[14:26] <lucasagomes> BadCub, good morning :-)
[14:27] <BadCub> morning rloo, dtantsur NobodyCam pshige :)
[14:27] <BadCub> mornin lucasagomes
[14:27] <BadCub> :)
[14:28] *** Marga_ has joined #openstack-ironic
[14:31] <pshige> lucasagomes: yes, I'll be happy with that!
[14:31] <jlvillal> rloo, I like the new look
[14:31] <jlvillal> Because it helps differentiate the conditional from the action lines
[14:32] <jlvillal> dtantsur, you had to re-apply.  Sorry if they denied first time :(
[14:32] <rloo> jlvillal: I agree with that, but I don't like it because the conditional stuff isn't aligned. I don't see why we need to enforce that look but if others want it, democracy rules. Or 2 dictators rule anyway ;)
[14:34] <jlvillal> rloo, But if the conditonal stuff is aligned then the action lines blend into the conditional.
[14:34] *** achanda has joined #openstack-ironic
[14:35] <rloo> jlvillal: it doesn't blend for me but i'm used to it. clearly, others think the way you do, otherwise that Ewhatever rule wouldn't exist.
[14:35] <jlvillal> rloo, So then you have to hunt around to find the colon to figure out where the conditional ends.
[14:35] * jlvillal is upset that the Windows update has broken his laptop's video camera :(
[14:38] <dtantsur> lucasagomes, do we really need a spec to: 1. bring https://github.com/openstack/nova/blob/master/nova/virt/ironic/client_wrapper.py#L104 to ironicclient; 2. use it when issuing shell commands?
[14:38] <lucasagomes> dtantsur, well maybe not a spec... unless we want to get some comments about the idea first
[14:39] <lucasagomes> it doesn't seem contradictory... so maybe just put a patch up for it?
[14:41] <jroll> morning $too_many_people_to_name :)
[14:42] <pshige> jroll: morning :)
[14:42] <jroll> BadCub: what time is this specs thing?
[14:43] <BadCub> jroll: 8 I beleive
[14:43] <pshige> In 16 mins
[14:43] <jroll> ok, I have another meeting then but I should be able to overlap some if it's just on irc
[14:44] <NobodyCam> morning jroll
[14:45] *** achanda has quit IRC
[14:49] <BadCub> I put a few topics on the pad for folks to ponder on too. https://etherpad.openstack.org/p/IronicSpecProcess
[14:49] *** igordcard has quit IRC
[14:49] <openstackgerrit> Nisha Agarwal proposed openstack/ironic: Update ilo drivers documentation for inspection  https://review.openstack.org/170065
[14:50] <BadCub> Some ideas that might make things less hectic
[14:50] *** igordcard has joined #openstack-ironic
[14:51] <BadCub> brb
[14:54] *** smoriya has joined #openstack-ironic
[14:55] <pshige> smoriya: good evening!
[14:55] <dtantsur> jroll, morning
[14:55] <smoriya> pshige: good evening :)
[14:56] <jroll> heya dtantsur :)
[14:59] * BadCub returns
[15:00] * NobodyCam is here too
[15:00] *** rwsu has joined #openstack-ironic
[15:05] *** smoriya has quit IRC
[15:06] *** smoriya has joined #openstack-ironic
[15:06] <NobodyCam> I'm adding my thoughts to the spec etherpad
[15:07] <BadCub> NobodyCam: cool, thank you :)
[15:07] <openstackgerrit> Zhenguo Niu proposed openstack/ironic: Cleanup some code in pxe.prepare  https://review.openstack.org/173925
[15:09] <jlvillal> Was there a meeting somewhere that started 9 minutes ago?  I thought it was going to be here.
[15:09] <NobodyCam> jlvillal: I'm meeting
[15:10] <NobodyCam> lol I'm working on the etherpad
[15:10] <jlvillal> NobodyCam, Okay.  I thought there was a discussion about specs going on.
[15:10] *** ijw has joined #openstack-ironic
[15:10] <pshige> comment authers cannot be seen on etherpad?
[15:10] <NobodyCam> I assume that will come as folks come up with idea's and questions
[15:11] <NobodyCam> pshige: I'm just adding my thoughts
[15:11] <NobodyCam> please add any ideas you have
[15:12] <pshige> I've already added my thoughs.
[15:12] <NobodyCam> pshige: on https://etherpad.openstack.org/p/IronicSpecProcess ???
[15:13] <pshige> oh
[15:14] <pshige> I've added another etherpad .. > https://etherpad.openstack.org/p/liberty-ironic-design-summit-ideas
[15:14] <lucasagomes> r we having a hangout or something for the spec review stuff?
[15:15] <NobodyCam> lucasagomes: I think at this point we are putting down the ideas
[15:15] <lucasagomes> ok
[15:15] <NobodyCam> on how to review
[15:15] <NobodyCam> :-P
[15:15] <NobodyCam> we can start a hangout
[15:16] <BadCub> I am good with hangout
[15:17] <dtantsur> -0 for hangout, it's pretty exhausting...
[15:19] *** athomas has quit IRC
[15:21] *** athomas has joined #openstack-ironic
[15:21] <openstackgerrit> Nisha Agarwal proposed openstack/ironic: Update ilo drivers documentation for inspection  https://review.openstack.org/170065
[15:24] *** jmanko has joined #openstack-ironic
[15:24] <devananda> morning, all
[15:24] <BadCub> morning devananda
[15:24] <dtantsur> devananda, morning, you're a bit late for the party :D
[15:24] <pshige> devananda: morning :)
[15:24] <devananda> indeed. unexpected travel required pre-coffee this morning.
[15:25] <devananda> anyone care to dial me in? :)
[15:25] <BadCub> devananda: everyone is jotting down ideas on the questions/ideas I put on the pad for discussion right now
[15:25] <BadCub> devananda: https://etherpad.openstack.org/p/IronicSpecProcess
[15:25] <devananda> k k
[15:26] *** jcoufal_ has joined #openstack-ironic
[15:27] *** jmank has quit IRC
[15:27] *** jrist has quit IRC
[15:28] *** jerryz has quit IRC
[15:28] <lucasagomes> the specs priority there looks confusing
[15:28] <dtantsur> yeah
[15:28] <lucasagomes> ubber important == high prio
[15:28] <dtantsur> BadCub, what's the difference between High and Uber?
[15:28] <dtantsur> and why aren't they ordered?
[15:28] <lucasagomes> we like to and need = Med
[15:29] <lucasagomes> can we keep high/med/low only
[15:29] <lucasagomes> ?
[15:29] <lucasagomes> devananda, morning
[15:29] <BadCub> The specs listed under HIGH are already in LP as HIGH. I have not moved them up on the pad yet
[15:29] <dtantsur> will anyone cry if I delete 1st 3 sections?
[15:29] <NobodyCam> oh morning devananda :)
[15:29] *** jcoufal has quit IRC
[15:29] <dtantsur> BadCub, hmm, if you have some logic there, please go on then :)
[15:30] *** Marga_ has quit IRC
[15:30] <BadCub> dtantsur: all of the specs listed under High/Med/Low were all specs that were approved in K and carried over
[15:30] <BadCub> They have not yet been tagged for L
*** Marga_ has joined #openstack-ironic15:30
BadCubI listed them, so we could look at them and move them up into the appropriate L category15:30
dtantsurBadCub, UCS already approved for L15:30
dtantsurah, I see15:30
dtantsurso, there can be duplicate15:31
BadCubdtantsur: yes, a couple BPs were tagged for L already15:31
BadCubdtantsur: actually we should be pulling the BPs from the existing list and placing them in the priority grouping for L instead of duplicating on the pad15:35
BadCubthey should move from carried over/backlog up15:35
*** jrist has joined #openstack-ironic15:38
*** Sukhdev has joined #openstack-ironic15:38
devanandait sounds like we need to track who on the core team has taken responsibility for tracking which features15:40
devanandaie, who the sponsors are for each large change15:40
devanandawhich is something we haven't done before15:40
lucasagomesyeah, I particularly like the sponsors idea15:40
BadCubdevananda: I think this sharing of ideas will help us pull a lot of good ideas forward and help the whole group :)15:41
lucasagomesbut we should be serious about it, if you sponsor something you gotta review it quick and consistently15:41
lucasagomesmaybe even help with code when it's just nits and typos15:41
BadCublucasagomes: ++15:41
devanandalucasagomes: ++15:41
dtantsurwe kind of had it, e.g. I sponsored inspection :)15:42
devanandadtantsur: and your collaboration on the out-of-band work was very helpful15:42
*** dlpartain has joined #openstack-ironic15:42
dtantsur:)15:43
lucasagomes+1 but we should make it official and documented for this cycle if that's the path we are going to15:43
devanandawe need that sort of cross-pollination // collaboration... this doesn't work very well if the sponsor is on the same team as the patch author15:43
pshigelucasagomes: ++15:43
NobodyCamif go to a sponsor based system can any one sponsor any spec.. but that I mean something like can I sponsor a IPA or discoverD spec?15:43
jlvillalNobodyCam, I would vote for this spec: https://review.openstack.org/133902  Maybe for specs we like and need??15:43
* BadCub looks15:43
lucasagomesNobodyCam, sure, if you have the technical capacity to judge the work why not15:43
devanandaNobodyCam: a) only core reviewers could sponsor things, b) I'd propose that members of the same team don't sponsor (or aren't the only sponsor) of some work15:44
NobodyCamlucasagomes: I am not core on discoverD15:44
lucasagomesyou don't need to be expert and know all about that project, but you should be happy with the idea and have technical capacity to review and test it15:44
lucasagomesNobodyCam, right, so yeah that wouldn't allow you to sponsor things in discoverd15:44
lucasagomesbut yes for IPA and Ironic15:44
lucasagomes(if IPA needs spec)15:45
NobodyCamjlvillal: :)15:45
dtantsurNobodyCam, I would say you can, if you want to invest some time in it15:45
rloowhat's the rationale for having a sponsor (vs no sponsor)?15:45
dtantsurrloo, to have a person ~responsible for landing the feature, so that it's not forgotten15:46
rlooI mean, are you talking about enforcing that all specs have a sponsor?15:46
dtantsurenforcing... hmmm....15:46
dtantsurinitially we were talking about exceptions15:46
dtantsurnow we're talking about major undertakings15:46
jlvillalI do worry about this process making it harder for people to contribute.  Especially for people who don't have any core reviewers in their company.15:46
rlooI don't mind if people volunteer to sponsor/shepherd (sp) but I wasn't sure if you were talking about 'must have'...15:47
jlvillal+115:47
dtantsurchances are high we won't need specs for e.g. implementing some interface for a driver15:47
*** dlpartain has left #openstack-ironic15:47
rlooto be honest, that would be similar to someone or someones helping out to get a patch merged.15:47
* lucasagomes thought about sponsors for exception15:47
BadCubI think the sponsoring specs idea is derived from "How do we make exceptions for specs outside the approval window/timeframe or after cutoff"15:47
lucasagomesbut maybe not... yeah it's a risky thing15:47
* jlvillal Had thought sponsors was for all specs. If only for exceptions, that is more amenable.15:48
rloooh, if you mean for after the proposal deadline, then yes, I agree that it makes sense for some core (or two) to say they think it is worthwhile/can be done/will review15:48
BadCubjlvillal: rloo yes, it is for specs after deadline15:48
devananda+1 for exceptions // after the deadline // for very large things // and from other companies15:49
jlvillalBadCub, That sounds reasonable and a good idea.  Because it is an exception.15:49
BadCubjlvillal: ++15:49
pshigesponsors for essential or high specs are needed, I think.15:49
jlvillaldevananda, Is that an 'and' statement?  Or an 'or' statement?15:49
jrolldevananda: it's unclear to me why "from other companies" matters so much for a "sponsor", as long as reviews still come in from other companies15:50
* jlvillal thinks it is an 'and' statement15:50
lucasagomesjroll, it looks bad to have people from the same company being the only ones reviewing some certain change15:51
lucasagomesI would say, that we could have people from the same company reviewing it, no problem15:51
lucasagomesbut someone from another company should also review it prior to approval15:51
devananda^ exactly15:51
BadCublucasagomes: I agree ^15:52
devanandaand if the only person who has committed to reviewing a feature is on the same team as (or in fact *IS*) the author15:52
devanandathen it is less likely, by the mere fact that someone else has already promised to do the work, that other people will do the work too15:52
jrolllucasagomes: devananda: if "the only person" is the only one reviewing, it isn't going to land15:52
rloodoes nova ask for 1 or 2 core sponsors for feature exceptions?15:52
devanandarloo: 315:53
rloowhoa!15:53
devanandabecause of the frequency with which reviewers dont have enough time to actually review things15:53
jrolllucasagomes: devananda: I'm not saying that anything should land with only people from the same company reviewing, I'm specifically asking about this 'sponsor' thing15:53
pshige3!15:53
rlooI think if we have 2, with at least one not being the author/same group/company, we're ok.15:53
lucasagomesjroll, if only people from the same company are willing to sponsor a patch for an exception, the exception won't be given to that spec15:53
devanandaand the criticality of getting really fast turn around during the exception process15:53
jrollwhich I tend to not love anyway, we should all be sponsors for things we FFE15:53
dtantsur+1 for "2, with at least one not being the author/same group/company"15:54
lucasagomesI think 2 cores is enough15:54
jrollif we're going to grant an FFE, it should be something critical to the project that everyone agrees we need15:54
lucasagomesdtantsur, +115:54
devanandaI'm not a fan of the word "sponsor" here, tbh15:54
lucasagomesbacker?!15:54
BadCubsupporter?15:54
*** viktors is now known as viktors|afk15:54
pshigeBadCub: +115:55
* jroll wonders if we could just do shorter release cycles and not FF long enough to need FFEs...15:55
devanandajroll: ++15:55
BadCub++^15:55
jrollso like15:55
lucasagomesis it up to us?15:55
devanandathat would appear to solve many of these problems15:55
lucasagomesI'm +1 with the idea, but being part of openstack means following the rules of openstack15:56
jrollwhy don't we talk about that instead of more and more processes around specs and FF15:56
* dtantsur does 3 months cycles for discoverd and likes it15:56
lucasagomesunless we go to the TC and try to change it :-D15:56
lucasagomesjroll, totally +115:56
BadCubwhat I observed in K was y'all reviewing like mad and getting uber hectic. Makes me sad to see folks stress and crunch so hard15:56
jrolllucasagomes: with the whole big tent thing we have the freedom to do different cadences15:57
lucasagomesdevananda, does the big tent thing help with it ^?15:57
lucasagomesjroll, oh ok I didn't know that15:57
rlooi thought the meeting today was to go over the submitted specs, not to discuss process :-)15:57
jrollrloo: I thought so too :/15:57
lucasagomesspec jam!15:57
devanandalucasagomes: yes, we can self-select a different cadence15:57
lucasagomesI like talking about the process as early as possible15:57
NobodyCamoh I thought it was process based, that may be /me who derailed it then15:57
NobodyCam:(15:58
devanandalucasagomes: if we all feel that more frequent releases would be better, I will work with ttx to set that up15:58
lucasagomesdevananda, cool. Is it something we are looking at then? Release early, release often?15:58
lucasagomesgotcha15:58
devanandaI will still want us to do a release coordinated with the same timing as the rest of openstack15:58
devanandabut we could do more frequent as well (ie, every 3 months)15:58
rloopersonally, i'm not convinced that more frequent releases will help, but I don't mind watching/participating in the experiment :)15:58
devanandaand follow our own freeze schedule, etc15:58
BadCubdevananda: agreed there15:58
* lucasagomes wonders how the stable branches etc will work with it15:59
jrollrloo: it will enable us to do shorter (or no) freezes, and I think if the freeze is short enough we can stop doing FFEs, which we've been talking about all morning15:59
lucasagomesperhaps it should be another discussion15:59
devanandarloo: where I think it may help us is in lessening the perceived impact of bumping a proposal15:59
devanandarloo: right now, not accepting something for K means that company has to wait 6 months15:59
devanandaor s/that company/anyone who wants that feature/16:00
openstackgerritNisha Agarwal proposed openstack/ironic: Update ilo drivers documentation for inspection  https://review.openstack.org/17006516:00
rloodevananda: so the problem is perhaps, not having a feature freeze deadline.16:00
devanandaI will say, the counterpoint is that operators and distributors *actually* *really* want a six-month release that they can count on to be stable16:01
*** Marga_ has quit IRC16:01
*** a1exhughe5 has quit IRC16:01
lucasagomesperhaps we all should read http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s04.html16:01
rloodevananda: is the bottleneck getting specs approved? we had specs in kilo that didn't get implemented.16:01
lucasagomesand discuss it16:01
* lucasagomes is particulary fond of it16:01
devanandaus simply doing shorter "sprints" with less freeze at the end doesn't change whether or not our downstream consumers want a release every 6 mo16:01
BadCubrloo: I think we had too many approved for K16:02
devanandarloo: I think we spent about half the cycle on specs and the second half on code16:02
rloowhat would 'release early' mean for us though? there are already X-1, X-2, X-3?, X-rc1...16:02
jrolldevananda: how sure are you that operators want a 6 month release (specifically, the 6 number)? I tend to see many operators running icehouse, still16:02
jrolllucasagomes: +1000000016:02
rloodevananda: so the problem is that we shouldn't spend half the cycle on specs.16:02
jlvillaljroll, The comic company here in Portland is still running Grizzly :)16:02
devanandarloo: right16:02
jlvillalDark Horse Comics I think16:03
jrolljlvillal: exactly.16:03
devanandajroll: operators want stable releases. distributors like us to produce code that they can package with their 6-month-timed releases.16:03
*** derekh has quit IRC16:03
jrollI haven't heard of very many people even running juno things yet16:03
rloodevananda: so releasing often, eg 2 months?, we'd have a feature freeze after 1 month in?16:03
*** jistr has quit IRC16:03
devanandajroll: openstack's cadence essentially comes from distros16:03
jroll/rage16:03
rlooBadCub: if we had too many approved for Kilo, how is more frequent releases going to help?16:04
* lucasagomes admires rolling release distros16:04
BadCubdevananda: what is your opinion on the cut-off time frame for Specs? A month before end-of-cycle? Two months? More?16:04
*** jistr has joined #openstack-ironic16:04
rlooIf we look back at Kilo, we didn't even finish implementing the state machine, we introduced microversions and didn't finish that, cleaning isn't finished. how are more frequent releases going to help? I think it'll be worse.16:05
devanandaBadCub: X-2 at the latest, which is 6 weeks before feature freeze16:05
devanandarloo: making releases more frequent doesn't mean we finish big features faster16:06
jrollrloo: because we won't spend the first 2-3 months on specs instead of code, look at this graph: http://stackalytics.com/?module=ironic&metric=commits16:06
BadCubrloo: I think that initially, it could be, however if we set reasonable cut-off timeframes and cap the amount of specs we can "reasonably" accommodate, the process would actually smooth out16:06
rlooso the problem is 'don't spend the first 2-3 months on specs only'16:06
jrollI mean, that's one problem16:07
rloowe now have a few specs that should only take 1-2 days to approve since they're being rolled from kilo, so that should help.16:07
jrolleverybody slams a million features home at the end of a cycle because it means waiting 6 months if they don't16:07
jrollif a cycle bump only means your feature gets out a month later, that's much more palatable16:07
BadCubwe have a bunch of specs that are already approved but not tagged to L yet.16:08
*** dttocs has joined #openstack-ironic16:08
rloojroll: my concern is that. well, are we OK releasing if a feature isn't complete. and if so, how 'not complete' can it be?16:08
BadCubdevananda: Maybe we set the cut-off at 8 weeks and allow for a 2 week "safety gap" ?16:09
jrollrloo: we can always feature flag it or whatever, just disable the API endpoint or whatever makes that feature happen until it's ready16:09
rloojroll: small features that are self-contained are fine. but i worry about a partial implementation of a big feature that gets into a release.16:09
jrollrloo: have the patch chain not enable using the feature at all until the last patch in the chain16:09
*** rameshg87 has joined #openstack-ironic16:09
*** jistr has quit IRC16:09
lucasagomesI think that depends on the feature16:09
lucasagomessometimes only part of it is already usable16:10
*** athomas has quit IRC16:10
rloojroll: it is easy to say, harder to monitor/do. i think. frankly, i worry about the overhead. but again, if people want to try it...16:10
dtantsurour patches should be self-contained IMO16:10
jrollpersonally, I think release cycles at all are insanity, they incentivize being able to land unstable code, break compatibility, etc. but I'm willing to play ball.16:10
jrolllucasagomes: sure, of course :)16:10
dtantsurso that every commit makes sense on its own16:10
devanandaanother good visualization of this tail-loading: https://github.com/openstack/ironic/graphs/contributors?from=2014-11-01&to=2015-04-15&type=c16:10
NobodyCamjroll: JoshNang: as a question: based on things found with the cleaning stuff do you forsee changes to the implement-zapping-states spec?16:10
jrolldtantsur: +++16:10
devananda16:07:33 < jroll> everybody slams a million features home at the end of a cycle because it means waiting 6 months if they don't16:10
devanandaTHAT ^16:11
jrollNobodyCam: probably, I have no idea right now16:11
JoshNangNobodyCam: i'll look again, but nothing comes to mind. other than getting a list of zap/clean steps is challenging because you may not have the agent booted.16:11
lucasagomesone thing... are we taking any decisions today? Or is it more of a brainstorm? Can someone summarize the ideas and put them on the ML?16:12
devanandaBadCub: while that sounds good, I think it will result in a similar situation -- we'll be pressured to heavily review specs early in each cycle16:12
*** tiagogomes has left #openstack-ironic16:12
rlooI think 6 months isn't too long to wait. but i suspect that the problem is that it isn't 'at most' 6 months. if you have an idea it could take 12 months. or longer.16:12
* BadCub cleans up the list of Specs16:12
devanandaBadCub: which will result in less code being reviewed early in the cycle, and more code review / code landing at the end of the cycle16:12
rameshg87rloo: hello16:13
rloohi rameshg8716:13
rameshg87rloo: regarding using driver_internal_info https://review.openstack.org/#/c/173214/1/specs/liberty/ironic-generic-raid-interface.rst16:13
BadCubdevananda: if we plan on doing our own release cycle, how about we have a spec approval window in each internal cycle?16:13
*** tiagogomes has joined #openstack-ironic16:13
jrollrloo: when you have customers that want a feature before they give you money, 6 months is a *really long time*16:13
lucasagomesBadCub, +1 I think that's the idea16:14
rameshg87rloo: one reason i prefer driver_internal_info over any other field is it is read-only and best place to put validated information16:14
jrollrloo: and really, in the long run, we're building this so we can make money (whether externally or saving money by some internal metric), right?16:14
devanandajroll: customers that want a feature downstream should not wait for that feature to be landed upstream16:14
rloohonestly. i think if people want to do releases more frequently, then I suggest they propose something. we should use this time (or some time, whatever) to review specs to unblock folks.16:14
NobodyCamis there a official spec for the boot interface stuff?16:14
* rameshg87 waits till rloo is out of other conversation16:14
BadCubso we need to determine what our "internal release cycle" is, and maybe block out a couple days or a week at the beginning of that cycle to do spec approval..16:15
devanandaI'm fine tabling the release cadence and spec process discussions if folks want to actually go review specs16:15
*** jistr has joined #openstack-ironic16:15
jrolldevananda: that's only true because of this issue.16:15
devanandabut it is something we need to discuss, and soon16:15
rloorameshg87: but driver_internal_info is for internal use by the driver. if the user specifies the raid config -- that should be read-only? read-only to whom?16:15
jrolldevananda: if I could wait a month for a feature to upstream before shipping it, I would consider it16:15
lucasagomesdevananda, right, is the ML a good place to discuss it?16:15
devanandawe have a few weeks (until vancouver) to decide on the cadence we use for Liberty16:16
rameshg87rloo: a read-only place seemed a better place to put validated information.16:16
devanandajroll: the nature of upstream is you can't ever promise a customer what will land there. you know that ...16:16
rloodevananda: and/or is that something to discuss at the summit?16:16
rameshg87rloo: i can wait till other conversation is complete. np16:16
devanandarloo: summit is fine to finalize it, but we should all be as close as possible to the same page by then16:16
* rameshg87 reads scrollback16:16
devanandarloo: by the end of the summit, we have to declare the schedule we'll follow16:17
rloorameshg87: not sure i understand. if the user provides information to/about the node, shouldn't it be placed in one of the usual places? driver_info, instance_info, properties, capabilities, i've lost track of them all.16:17
rloodevananda: got it, and yes, that makes sense. start the discussion now!16:17
*** jistr has quit IRC16:17
jrolldevananda: sure. but clearly people are shipping upstream (or else they wouldn't worry about landing things RIGHT NOW BEFORE THE CYCLE ENDS)16:18
devanandalucasagomes: ML is possible, but too low bandwidth. I think the discussion we just had was helpful16:18
BadCubdevananda: I can pull all the ideas from the pad and consolidate them into an informal game-plan concept and throw it up for folks to look at too16:18
lucasagomesdevananda, yup yeah it was helpful to brainstorm the ideas16:18
dtantsur++16:18
lucasagomesthe ML is good to document it as such16:18
* devananda throws wet cats at jroll16:18
jroll:D16:18
devanandaBadCub: ++16:18
BadCublol16:18
rlooBadCub: my suggestion is for the folks that want frequent releases, to put more details into how it will work.16:18
rameshg87rloo: yes, driver_info, instance_info, etc, but if the user/an external entity(like nova) is going to update the information through node-update.16:18
lucasagomesBadCub, that would be awesome16:19
rameshg87rloo: but in case of raid, both GET and PUT are through its own APIs16:19
jrolldevananda: to be fair, you probably know users better than I do, so take my statements with many grains of salt :)16:19
devanandajroll: you mind working with badcub and myself to draft exactly what a more frequent release cadence would look like?16:19
dtantsurrloo, 1 mon spec proposals, 1 mon spec implementation, 1 mon FF and bug fixing16:19
dtantsur:)16:19
rameshg87rloo: i mean the information is read/written using its own APIs16:19
BadCubrloo: I agree. We do need more details there. I will leave the pad for folks to add more comments for the next day and then throw together the concept.16:19
rloorameshg87: but that's the point, there's an API. which means that it is user/external info. that's not what driver_internal_info is meant for.16:19
BadCubthat sound good for everyone?16:19
jrolldevananda: push a tag at 0000 UTC daily ;D16:20
devanandajroll, BadCub: I think if the 3 of us can iron out a proposal in, say, the next 10 days, that should give others time to consider it thoroughly before the summit16:20
jrolldevananda: for real though, yes, willing to help16:20
devanandajroll: thanks16:20
BadCubdevananda: jroll I think that would be awesomeness16:20
rameshg87rloo: but where else would you suggest ? driver_info ?16:21
devanandarameshg87: RAID layout should be instance_info16:21
BadCubI will start compiling the pad details on the other topics tomorrow :)16:21
*** athomas has joined #openstack-ironic16:21
rameshg87devananda: but it's not related to instance.16:21
*** jmankov has joined #openstack-ironic16:21
rloorameshg87: like devananda says. Or a new field.16:21
devanandarameshg87: then it should be node.properties16:22
rameshg87devananda: it's done during zapping, when the instance might not be there16:22
devanandaeither this is static with respect to Nova, and nova merely schedules instances onto nodes that match the type of RAID that instance wants -- in which case it is node.properties16:22
devanandaor this is dynamic and determined by the desired instance, ie it comes from the nova Flavor, and nova has to pass that info on node.instance_info16:23
*** absubram has joined #openstack-ironic16:23
rameshg87devananda: yeah, in the spec the schedulable properties (like local_gb, raid_level) are updated in properties16:23
devanandathe problem is that some people are going to want (A) and some will want (B) and we already know this16:23
BadCubdevananda: I will also move all the backlog specs to a new pad for the Specs Cores to gander at and see if we are on track for getting them into L16:23
BadCubbrb16:23
NobodyCambrb...16:23
dtantsurfolks, I'm going for now, see you tomorrow16:23
*** dtantsur is now known as dtantsur|afk16:23
lucasagomesdtantsur, have a good night16:23
rloonight dtantsur|afk16:23
devanandaBadCub: thanks. and yea, the "review kilo leftovers" discussion is clearly separate from "talk about liberty process"16:24
rameshg87devananda: yeah B is not being targetted now16:24
pshigedtantsur: have a good night16:24
jrolldevananda: in the second case, why wouldn't flavor match to properties?16:24
rameshg87devananda: it's only A that is mentioned in that spec16:24
*** jrist has quit IRC16:24
lucasagomesdevananda, rloo rameshg87... so this is something that 1) users do a PUT request 2) Ironic validates the information and puts it in driver_internal_info if valid 3) zapping consumes the things in driver_internal_info and builds the raid 4) sets the raid information in properties so that nova is aware of it... Is that correct?16:24
rameshg87yes, that's what is there in spec right now16:25
lucasagomesI think I'm good with it16:25
*** smoriya has quit IRC16:25
*** jmanko has quit IRC16:25
lucasagomesthe key thing here is that, there's a validation prior to setting it to driver_internal_info... which is read-only16:25
rameshg87yeah, that's what i was coming to16:26
lucasagomeswhereas with other fields the user can go and tweak them, and it won't trigger any validation16:26
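[editor's note: a minimal sketch of the four-step flow summarized above, assuming hypothetical names throughout (put_raid_configuration, apply_pending_raid, target_raid_configuration) — this is not the actual Ironic code, just the validate-then-cache idea being discussed:]

```python
# Hypothetical sketch: validate the user's RAID config on PUT, cache the
# validated result in driver_internal_info, and let zapping consume it.
# Field and function names are assumptions, not the real Ironic API.

class RaidConfigError(Exception):
    pass


def put_raid_configuration(node, raid_config):
    """Steps 1+2: handle PUT /nodes/<uuid>/raid/configuration."""
    # validate before caching -- users cannot bypass this by editing
    # driver_internal_info, since that field is read-only to them
    if not raid_config.get('logical_disks'):
        raise RaidConfigError('at least one logical disk is required')
    node['driver_internal_info']['target_raid_configuration'] = raid_config


def apply_pending_raid(node, build_raid):
    """Steps 3+4: during zapping, build the RAID and expose it to nova."""
    pending = node['driver_internal_info'].pop('target_raid_configuration',
                                               None)
    if pending is None:
        return  # nothing pending, nothing to do
    build_raid(pending)  # driver-specific, e.g. in-band via the agent
    # advertise the result so the nova scheduler can match on it
    node['properties']['raid_level'] = \
        pending['logical_disks'][0]['raid_level']
```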
*** saripurigopi has quit IRC16:26
rameshg87someone could even update properties with node-update itself and modify the validated info16:26
rlooi'm not sure i agree with that, although i sort of see what you mean.16:26
lucasagomesrameshg87, yup16:26
rloothe thing is, the user is specifying information.16:26
rloothey shouldn't have to look at driver_internal_info to see what that info was.16:27
lucasagomesrloo, user gives data, which is turned into information I believe... Users can say build a raid level BLAH16:27
lucasagomesbut the information saved will contain things like the size that is possible for that node etc16:27
rameshg87rloo: the user needn't check driver_internal_info. there is an API for it, right ?16:27
rloolucasagomes: where/how does the user get access to their BLAH?16:27
lucasagomesrloo, it's the supported raid level16:28
lucasagomesand ironic will validate whether the node contains enough disks to support that level etc16:28
lucasagomesthere's a minimum number of disks required for different raid levels16:28
lucasagomesand that should be validated16:28
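[editor's note: the out-of-band check lucasagomes describes could look like the sketch below — the minimum disk counts per standard RAID level are well known, but the function name and its use by drac-style drivers are assumptions (as rameshg87 notes below, in-band drivers can't run this check before zapping):]

```python
# Hypothetical pre-validation helper: standard RAID levels each need a
# minimum number of physical disks (RAID 1 mirrors need 2, RAID 5 needs
# 3, RAID 6 and 10 need 4). Only drivers with out-of-band access to the
# disk inventory could run this at PUT time.

MIN_DISKS = {'0': 1, '1': 2, '5': 3, '6': 4, '10': 4}


def validate_disk_count(raid_level, available_disks):
    minimum = MIN_DISKS.get(str(raid_level))
    if minimum is None:
        raise ValueError('unsupported RAID level: %s' % raid_level)
    if available_disks < minimum:
        raise ValueError('RAID %s needs at least %d disks, node has %d'
                         % (raid_level, minimum, available_disks))
```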
rloolucasagomes: is the supported raid level saved anywhere?16:28
rloolucasagomes: maybe i misinterpreted the spec.16:29
lucasagomeshttp://en.wikipedia.org/wiki/Standard_RAID_levels16:29
rameshg87lucasagomes: "node contains enough disks to support that level" is an optional thing for driver16:29
rloolucasagomes: here's the API: PUT /nodes/<uuid>/raid/configuration16:30
rameshg87lucasagomes: inband can't do it :(16:30
lucasagomesrameshg87, right yeah fair16:30
rloolucasagomes, rameshg87: is that specifying the configuration? is 'configuration' hardcoded, or is that to be replaced with the user's configuration?16:30
lucasagomesrloo, I'm not sure, I think the information is not changed but introspected16:31
jrollI have a higher level question: does nova have any sort of concept of raid? it doesn't need one because there's no real disks, right?16:32
rameshg87rloo: i didn't get your question. we pass on the input directly to the driver handling it after applying some ironic *defaults*16:32
jrollok yeah, no references to the word 'raid' in their codebase, ignore me :)16:32
rameshg87jroll: not there unfortunately16:32
rloorameshg87: what is the input? I mean, is the request literally 'PUT /nodes/<uuid>/raid/configuration' where only the uuid is replaced with a real uuid?16:33
lucasagomesjroll, yeah ... but it's ok. Cause zapping happens prior to available16:33
lucasagomesso nova will only see a device already created16:33
jrollright16:33
jrollso when we say 'user' we mean the ironic user, to be clear16:33
rloorameshg87: there's a body to that PUT, right?16:33
rameshg87rloo: yeah16:33
lucasagomesjroll, yup operator16:33
rameshg87rloo: the body itself is the json data16:33
rameshg87the raid configuration16:33
rloorameshg87: right, that's what i thought. and you and lucasagomes are saying that this json data will be verified and placed in driver_internal_info, and I disagree with that cuz this is user information.16:34
devanandaBadCub: hope you dont mind me rearranging the spec list a bit :)16:34
BadCubdevananda: not at all!16:34
rloorameshg87: we're opening the door now, for all  other kinds of information that the user might specify, being put in driver_internal_info.16:35
rameshg87rloo: yeah, the data sent along with this PUT API will be validated and placed in driver_internal_info16:35
rameshg87rloo: one second getting an example16:35
rloorameshg87: and the reason we have driver_internal_info, is cuz someone else wanted to store internal info in driver_info.16:35
rameshg87to make sure we are first in same page16:35
rameshg87rloo: http://paste.openstack.org/show/171957/16:36
lucasagomescool, yeah it says current and pending16:37
lucasagomeswhich gives an overview about the state of the raid16:37
lucasagomesand this is internal info16:37
* devananda wonders why driver_internal_info is exposed in the API at all16:37
lucasagomesdevananda, +116:37
rameshg87yeah, and current state of raid *is* not the exact input from user16:37
lucasagomesyeah I wonder that too16:37
lucasagomesdevananda, perhaps for troubleshooting... but idk either16:38
jrolldriver_internal_info is useful to operators16:38
jrollfor example agent_url16:38
rameshg87lucasagomes: yeah it helps16:38
rloodevananda: I wondered about that too, but I think someone said it was to help with debugging. or maybe that's what i thought it would be useful for.16:38
rameshg87jroll: exactly :)16:38
jrollwe inspect things from there in our dashboard16:38
rameshg87it has been useful to me16:38
lucasagomesjroll, good point16:38
rameshg87to find agent_url and then ssh to it when there is some issue16:38
*** EmilienM is now known as EmilienM|afk16:38
jrolltl;dr I use it in my environments and will yell at anyone who takes it away :D16:39
lucasagomesjroll, we won't, or we would break our current api16:39
lucasagomesso you're safe16:39
lucasagomesno more api breakages please16:39
jrollheh16:39
rloolucasagomes: I dunno. we could deprecate it, give jroll 6 months (ok 12 months) to replace it ;)16:40
lucasagomesrloo, heh -216:40
lucasagomes:P16:40
jrollrloo: I only need a day, I'll revert the patch downstream :)16:40
lucasagomesit's only shown with /detail right?16:40
lucasagomesso it's grand16:40
*** jrist has joined #openstack-ironic16:40
rameshg87lol16:40
rameshg87rloo: so coming back .. yes initially the user-input is saved exactly as it is16:41
lucasagomesanyhoo... I'm +1 for driver_internal_info on raid16:41
lucasagomesI think that's the right place for it16:41
jrolllucasagomes: yeah, just node-show or node-list --detail16:41
lucasagomescool16:42
rloorameshg87: are you saving the user's config json data anywhere?16:42
*** Marga_ has joined #openstack-ironic16:42
rameshg87rloo: that's in driver_internal_info too16:42
* NobodyCam begins to wonder if we'll end up wanting a port_internal_info and node_internal_info as well as driver16:42
NobodyCambrb .. bbt16:42
rloorameshg87: if I look in driver_internal_info, will I see the user's config data?16:42
rameshg87rloo: yes16:42
lucasagomesNobodyCam, hope not :D16:42
rloorameshg87: I disagree with that16:43
rameshg87rloo: it will show up as 'target_raid_configuration'16:43
*** yog has joined #openstack-ironic16:43
rameshg87rloo: that's what the driver hopes to apply when zapping is invoked next time16:43
*** yog is now known as Guest1610616:43
rloorameshg87: the PUT operation just puts that info, no operation is done?16:43
lucasagomesvalidation16:44
rameshg87rloo: no operation is done.16:44
rameshg87rloo: just validation16:44
rameshg87yeah16:44
lucasagomesfor drac we can validate important things out-of-band16:44
lucasagomesas mentioned, number of disks for the raid level is available and so on16:44
lucasagomesthat's neat16:44
rameshg87and returns 202 accepted16:44
rloorameshg87: it seems to me that you should add a new target_raid_configuration or something like that.16:44
rameshg87rloo: add a new "target_raid_configuration" field ?16:45
*** jrist has quit IRC16:45
rloorameshg87: yeah, why not?16:45
rloorameshg87: we have a target_provision_state and target_power_state. why should this be hidden in driver_internal_info?16:46
rameshg87rloo: but all nodes might not be interested in raid no ? raid is just for nodes that support it and drivers managing the nodes that support it16:46
rameshg87rloo: in majority of cases, this field might be lying around not used :)16:47
rloorameshg87: are all nodes interested in inspecting and cleaning? we have clean_step and inspection* fields16:47
rloorameshg87: and what about console_enabled?16:47
rameshg87hmm .. interesting16:47
rloorameshg87: I'd prefer if it went in instance_info or properties but you had reasons for not.16:48
rloorameshg87:  or extra. i don't know if i ever figured out what that was for.16:48
rameshg87rloo: what was your main point against driver_internal_info ?16:48
devanandatarget_raid_level is not a bad idea16:49
rloorameshg87: it is internal to the driver. that's why it is called driver_INTERNAL_info.16:49
devanandaif that's unused on nodes that don't support raid, that's fine16:49
devanandai generally agree with rloo's concern about abusing driver_internal_info here16:49
devanandait's not something the user should have to care about16:49
lucasagomesyeah the target_raid_config seems okish16:50
lucasagomesdevananda, but the user doesn't16:50
lucasagomesI mean he should not be looking at that field16:50
lucasagomesthere's an api endpoint which he can GET16:50
devanandaif I'm PUT'ing data to Ironic to change some properties of a node, that request shouldn't in any way require me to look at *internal* info16:50
devanandaso perhaps I "PUT {new raid info} /v1/nodes/NNNN/states/raid"16:51
devanandaI should darn well be able to "GET /v1/nodes/NNNN/states/raid" and see the state after it was applied16:51
lucasagomesdevananda, that's exactly what it is16:51
lucasagomesGET/PUT v1/nodes/<uuid>/raid/configuration16:51
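[editor's note: a hedged sketch of what the GET side of that endpoint could return — the URL shape and the current/pending split come from the discussion and the linked paste, but the handler name and response fields are assumptions, not the final API:]

```python
# Hypothetical handler for GET /v1/nodes/<uuid>/raid/configuration:
# report the applied config alongside the validated-but-not-yet-applied
# one, so the user never has to read driver_internal_info directly.


def get_raid_configuration(node):
    return {
        # applied during the last zapping run (assumed storage location)
        'current': node.get('raid_config'),
        # validated by the driver, awaiting the next zapping run
        'pending': node.get('driver_internal_info', {}).get(
            'target_raid_configuration'),
    }
```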
*** ifarkas has quit IRC16:51
devanandaif I have to go look at driver_internal_info to see whether ironic succeeded or failed in applying those raid settings, we've made a terrible API16:51
rameshg87devananda: it's completely through the API in its current proposal16:52
devanandaon the other hand, if some driver wants to maintain some internal cache about the hardware's capabilities (like this node can support RAID 0/1/10 and that hardware supports 0/1/10/5/50)16:52
devanandathen that's just fine16:52
devanandadriver_internal_info IS the right place to cache that sort of thing16:52
lucasagomesdevananda, http://paste.openstack.org/show/171957/16:52
lucasagomesyes, exactly16:52
rameshg87devananda: you can tell the configuration, know whether it's applied or not AND get the configuration after applying it - all through API16:53
rameshg87devananda: rloo: it doesn't change anything much - but i was just skeptical about adding a new field altogether16:53
rameshg87when every possible operation was exposed through an API16:54
rameshg87and when i could use something that is existing16:54
devanandalucasagomes: at initial reading, that paste looks sane16:54
rameshg87and neither the implementation nor the user cared much where the information was stored16:54
lucasagomesdevananda, yup, yeah that's what the spec proposes16:54
lucasagomesdevananda, that's why I think that driver_internal_info is the right place. It's just caching the pending configuration set by the user16:55
lucasagomesit doesn't require user to look into it or anything like that16:55
devanandalucasagomes: the API should also expose a sample of the raid-config object, as well as a description of it16:55
devanandalucasagomes: not just a list of the logical_disk_properties16:55
devanandait looks like you're on the path to doing that, but there's more than just this paste16:55
devanandalucasagomes: hm, no. caching the pending configuration should not be done in driver_internal_info16:56
devanandafor one, that creates two API endpoints to get the same info16:56
*** jrist has joined #openstack-ironic16:58
lucasagomesdevananda, a sample? yeah that would be useful... but then we have also docs with samples etc16:58
lucasagomesdevananda, right... that's the problem with the driver_internal_info being externally exposed I suppose16:59
lucasagomesI mean, the information has been validated by the driver and now is pending16:59
lucasagomesdriver needs to cache it somewhere, I don't see an option other than driver_internal_info16:59
lucasagomesputting it in other fields means that users can partially update the data, without revalidating, which is not nice17:00
lucasagomesit's the same for set_boot_device17:01
lucasagomeswhen you set it to persistent, some BMCs save that internally so you can retrieve it17:01
rameshg87unless we put it on a new field which they cannot touch with any other API17:02
lucasagomessome drivers save it to driver_internal_info because the BMC cannot save it17:02
lucasagomesyou can find the option in both, whether via get-/set-boot-device or by looking at internal info17:02
lucasagomesthe info will be there17:02
rloolucasagomes: the set_boot_device bothers me too.17:02
lucasagomesright, but here's the thing... if there's something that is not nice, it's that internal information is being exposed, right?17:03
lucasagomesnot that we are saving things there, which are internal for the driver17:03
rloolucasagomes: that was meant for saving things that are internal (for the driver or instance or whatever)17:04
lucasagomesrloo, and already-validated information is something internal, no?17:04
lucasagomesI mean we have introspected into this information17:04
lucasagomesit's not just raw input17:04
rloolucasagomes: so I am fine if you want to save validated info in driver_internal_info. I'm not fine with not saving the user's data in a field that the user can find/makes sense to them.17:05
rameshg87rloo: but validated info == info-that-user-input17:05
rloolucasagomes: so if you want some user-field like target_raid to have the config, and you want driver_internal_info to have a validated config, then whatever.17:06
lucasagomesI'm not against the target_raid_config tho17:07
rloorameshg87: look. why do we even have target_power_state? we could have just provided APIs for that. I could say the same for most of the other things we have. console_enabled. Why not store whether it is enabled or disabled in driver_internal_info?17:07
rloorameshg87: I don't even know why we have inspection* fields. we could have put them in driver_internal_info too.17:07
* rloo thinks we should deprecate all the fields and just keep driver_internal_info.17:08
NobodyCamrloo: ieek17:08
*** pas-ha has quit IRC17:09
rloolook, it clearly isn't clear. And maybe if I actually read rameshg87 and grok'd it, I'd think differently. Although I don't think so. rameshg87 just needs two cores to +2/+A. This is just my opinion, although it seems like devananda also agrees. with part of it anyway ;)17:09
rloorameshg87: I mean 'read the spec in detail'17:10
lucasagomesyeah, we have many inconsistencies, especially the diff endpoints returning the same information17:10
lucasagomesGET v1/nodes/<uuid>17:10
lucasagomesGET v1/nodes/<uuid>/states17:10
lucasagomeswill return same info17:11
rloonote that we validate that the power state (and or provision state) is a valid state. So by your example, we should just put the target*states in driver_internal_info.17:11
lucasagomesI'm not against the target_raid_config to keep the consistency with the other states we have17:11
lucasagomesrloo, the thing is that target_ states are indexable17:11
lucasagomesif one wants target_raid_config to be indexable17:12
lucasagomesso that they can filter the API by it17:12
lucasagomesso yes, we should have a target_ state17:12
lucasagomesI can filter all nodes that are powered on, or transitioning to it17:12
lucasagomesis it useful for raid config? I'm not sure maybe...17:13
lucasagomesif so we can make it an indexable field17:13
rloolucasagomes: is that the only reason we added target_*_state? so that we could filter on that?17:13
lucasagomesrloo, to be very honest, when we started the ironic api nobody had any experience with REST APIs17:13
lucasagomesso it's natural that we have our inconsistencies17:13
rloolucasagomes: totally agree. i just don't see why we'd want to add a new inconsistency.17:14
lucasagomesrloo, one of the reasons to have it is for it to be indexable, yes, so we can filter on it17:14
* rameshg87 can't think of any use case of indexing a json data17:14
rameshg87and it's not guaranteed that indexing will work17:15
*** ndipanov has quit IRC17:15
lucasagomesyup17:15
rameshg87unless database is that intelligent :)17:15
rlooin this case, it seems like the only useful thing might be to see IF target_raid* is not None17:15
lucasagomesyup that's valid17:16
rloobut I don't think of target* stuff being there just for filtering. we have filtering for maintenance and instance too. I see them there so that the user can get some info as to the state/etc of the node.17:16
rameshg87yeah17:16
lucasagomesor if we have a raid already set (which is the non target field)17:16
rloowhich is why i'd prefer that this config was in properties or extras or instance_info and not a separate field, but I'm happy with anything that isn't *internal*17:17
lucasagomesyup yeah all the fields that we can filter on should be indexable basically17:17
rloorameshg87, lucasagomes: gonna get lunch. i think i've given you enough of my opinions on this :-)17:18
*** athomas has quit IRC17:18
lucasagomesrloo, fair17:18
lucasagomesand I will go home too17:18
* lucasagomes came to the office17:18
lucasagomesrloo, enjoy lunch!17:18
rameshg87and i will goto sleep too17:18
NobodyCamnight lucasagomes and rameshg87 :)17:18
lucasagomesNobodyCam, have a good night17:19
rameshg87have a good day NobodyCam, rloo17:19
rameshg87goodnight lucasagomes17:19
lucasagomesgood night/day everyone!17:19
NobodyCamthank you both great conversations!!!17:19
rameshg87bye everyone17:19
lucasagomesrameshg87, night17:19
*** rameshg87 has quit IRC17:19
*** lucasagomes has quit IRC17:20
*** ijw has quit IRC17:25
*** ijw has joined #openstack-ironic17:26
*** ijw has quit IRC17:27
*** ijw has joined #openstack-ironic17:28
*** davideagnello has joined #openstack-ironic17:28
*** trown is now known as trown|lunch17:29
*** achanda has joined #openstack-ironic17:35
*** harlowja_away is now known as harlowja17:36
jxiaobinHi @JoshNang17:37
JoshNangjxiaobin: hi!17:38
jxiaobinI have a question on the cleaning17:38
JoshNangask away!17:38
*** Sukhdev has quit IRC17:38
jxiaobindo we need to configure "cleaning_network_uuid" in order to make cleaning work?17:39
jxiaobindoes that mean cleaning can only happen in a particular network?17:40
JoshNangusing the default neutron provider, yes17:41
* JoshNang needs to add to the deploy docs17:41
JoshNangbasically, we have to be able to boot a ramdisk, the ramdisks require a network config to be able to boot17:41
jxiaobinhmm, for us, we will have a bunch of networks, and a server can only live in one network17:43
jxiaobinso a single "cleaning_network_uuid" will not work out for us17:43
JoshNangso the way we have it deployed in production is a separate vlan set aside for just cleaning17:44
jxiaobinI think Ironic shall support cleaning in the same network as well17:45
JoshNangi think that should be possible17:46
JoshNang(not today, but in the future)17:46
jxiaobinThe idea is Ironic reuse the provisioning network for cleaning17:46
JoshNangright. we'll have to modify a few things internally to hold onto that information. currently, the node has no idea what network it was on by the time it hits cleaning17:47
JoshNangyou should file a bug17:47
JoshNangso we don't lose track of it17:47
jxiaobinsure, thanks a lot!17:47
JoshNangnp!17:48
*** Marga_ has quit IRC17:49
*** Marga_ has joined #openstack-ironic17:49
jrolljxiaobin: ironic does support cleaning in the same network, you just have to tell it which network that is :)17:52
jxiaobinjroll: but that needs enhancement, right? is the feature available now?17:54
jrolljxiaobin: yes, whatever network nova is using to boot a server, use the same network for cleaning_network_uuid17:54
jxiaobinjroll: maybe I missed something, but in code, ironic will only use the "cleaning_network_uuid" from configuration file.17:56
jrolljxiaobin: right, there is a manual step of configuring that17:57
jrolljxiaobin: you say you have one network that baremetal nodes are booted on, right?17:57
jxiaobinjroll: yes17:57
jrollok, what's the ID for that network?17:57
*** andreykurilin has joined #openstack-ironic17:58
jxiaobinUUID? let's assume "123456" :)17:58
*** openstackstatus has quit IRC17:58
jrollsure17:58
*** trown|lunch is now known as trown17:58
jrollso set cleaning_network_uuid=12345617:59
jrollit's an unfortunate manual step, but it works, it can be the same network17:59
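The manual step jroll describes amounts to one line of ironic.conf. This is a Kilo-era option; the `[neutron]` group placement is my best recollection and the UUID is the placeholder from the conversation above:

```ini
[neutron]
# Use the same network nova boots servers on; "123456" is the
# placeholder UUID jxiaobin used above.
cleaning_network_uuid = 123456
```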
*** openstackstatus has joined #openstack-ironic17:59
*** ChanServ sets mode: +v openstackstatus17:59
jxiaobinbut this is a global configuration, right? in my poc env, I have 3 networks18:00
jxiaobinI'll have to change the configuration, and reload the value, then delete the node18:00
jrollah, I see18:01
jrollit's per-conductor, but yes essentially global18:01
jxiaobinyeah :-)18:02
jxiaobinbasically our management want to adopt Ironic as our baremetal solution18:02
-openstackstatus- NOTICE: Gerrit has stopped emitting events so Zuul is not alerted to changes. We will restart Gerrit shortly to correct the problem.18:03
*** ChanServ changes topic to "Gerrit has stopped emitting events so Zuul is not alerted to changes. We will restart Gerrit shortly to correct the problem."18:03
jxiaobincurrently we're running our home-made system, which already manages like ~35000 physical servers18:03
*** ijw has quit IRC18:03
jrollnice18:03
*** ijw has joined #openstack-ironic18:03
jxiaobinit's essentially a private cloud, so we do not care too much about multi tenancy, isolation18:04
jrollso better networking things in general is something we're going to be hitting hard soon18:04
jrollbut it isn't great right now18:04
*** ijw has quit IRC18:04
*** ijw has joined #openstack-ironic18:05
jxiaobinwe actually don't need fancy networking features, we're quite traditional18:05
*** htrmeira has joined #openstack-ironic18:05
jxiaobinall our servers are on the same vlan18:06
jxiaobinthanks jroll for the information though :)18:06
jrollyou're welcome :)18:07
BadCubWe just lost the entire electrical box on the house18:11
BadCubdevananda: ^^^^18:12
jroll"lost" :P18:12
BadCubTree cutters crashed a tree into house18:13
jrolldaaaang18:13
BadCubRipped the entire electric box off the wall18:13
*** Marga_ has quit IRC18:15
*** EmilienM|afk is now known as EmilienM18:22
BadCubElectricians and power company called :(18:23
*** ChanServ changes topic to "Bare Metal Provisioning | Status: http://bit.ly/ironic-whiteboard | Docs: http://docs.openstack.org/developer/ironic/ | Bugs: https://bugs.launchpad.net/ironic"18:25
-openstackstatus- NOTICE: Gerrit has been restarted. New patches, approvals, and rechecks between 17:30 and 18:20 UTC may have been missed by Zuul and will need rechecks or new approvals added.18:25
htrmeiraHey guys, did anyone install ironic in a production environment and made it work?18:28
*** sambetts has quit IRC18:29
*** sambetts has joined #openstack-ironic18:32
jrollhtrmeira: yes, I'm from rackspace where we run it in production on our public cloud18:33
*** andreykurilin has quit IRC18:36
*** andreykurilin has joined #openstack-ironic18:40
htrmeira:jroll, I'm having a really weird problem in my deploy. Nova does not recognize any ironic bm node. Have you seen something like this?18:40
jrollhtrmeira: can you be more specific?18:41
jrollhtrmeira: also, what version of nova and ironic are you running? is there a specific guide or distribution you used to install this?18:41
htrmeiraI have followed this guide: http://docs.openstack.org/developer/ironic/deploy/install-guide.html18:43
htrmeiraMy ironic version (.deb versions):  http://pastebin.com/RuNaNUGu18:43
htrmeirathe whole openstack installation is Kilo (up to date).18:43
htrmeira:jroll the problem is, when I create a bm node using ironic node-create, I can see the entry in the ironic database table. But, I can not see the entry in the nova database table (compute_nodes).18:45
*** jcoufal_ has quit IRC18:45
* jroll looks18:45
htrmeirajroll, and because of this, nova hypervisor-list does not show any ironic bm node.18:45
*** dttocs has quit IRC18:46
jrollfirst thing I would check is these configs:18:46
jrollhttp://docs.openstack.org/developer/ironic/deploy/install-guide.html#configure-compute-service-to-use-the-bare-metal-service18:46
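For reference, the Kilo-era nova.conf settings that guide section covers look roughly like the following. The values are placeholders and the exact option list may differ; the linked install guide is authoritative:

```ini
[DEFAULT]
compute_driver = nova.virt.ironic.IronicDriver
scheduler_host_manager = nova.scheduler.ironic_host_manager.IronicHostManager

[ironic]
admin_username = ironic
admin_password = IRONIC_PASSWORD
admin_tenant_name = service
admin_url = http://IDENTITY_IP:35357/v2.0
api_endpoint = http://IRONIC_NODE:6385/v1
```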
*** Marga_ has joined #openstack-ironic18:46
htrmeiraI have checked, It seems to be ok. would you like to see the nova.conf?18:49
jrollmmm18:49
jrollI guess at this point I would want to see nova-compute logs18:49
jrollmake sure it's connecting to ironic ok etc18:49
jrollalso "ironic node-list"18:49
htrmeirasure. Just a sec.18:50
*** Marga_ has quit IRC18:51
*** andreykurilin has quit IRC18:52
*** ijw has quit IRC18:57
*** Haomeng has quit IRC19:06
*** Haomeng has joined #openstack-ironic19:06
*** ijw has joined #openstack-ironic19:08
*** ijw has quit IRC19:13
htrmeirajroll: sorry for the delay, we are having some problems with our network. I will have to seek your help later, if possible. =/19:19
jrollhtrmeira: I'm always here, as are some of my other team members :)19:19
htrmeirajroll: Thank you very much. =)19:20
jrollnot a problem!19:20
*** ijw has joined #openstack-ironic19:28
*** ukalifon has quit IRC19:30
*** david-lyle has joined #openstack-ironic19:30
*** ijw_ has joined #openstack-ironic19:30
*** ijw has quit IRC19:32
*** ijw has joined #openstack-ironic19:33
*** htrmeira has left #openstack-ironic19:36
*** ijw_ has quit IRC19:36
*** ijw has quit IRC19:39
*** dttocs has joined #openstack-ironic19:46
*** alex_xu has quit IRC19:48
*** alex_xu has joined #openstack-ironic19:49
*** dttocs has quit IRC19:50
*** ijw has joined #openstack-ironic19:50
openstackgerritShilla Saebi proposed openstack/ironic: changes to upgrade-guide.rst  https://review.openstack.org/17406519:56
*** Marga_ has joined #openstack-ironic20:02
*** dttocs has joined #openstack-ironic20:17
*** dttocs has quit IRC20:21
*** kkoski has quit IRC21:01
*** Marga_ has quit IRC21:05
*** dttocs has joined #openstack-ironic21:06
*** absubram has quit IRC21:10
*** Marga_ has joined #openstack-ironic21:10
*** dprince has quit IRC21:12
*** Sukhdev has joined #openstack-ironic21:13
*** Marga_ has quit IRC21:19
*** Marga_ has joined #openstack-ironic21:19
*** andreykurilin has joined #openstack-ironic21:25
*** achanda has quit IRC21:34
*** achanda has joined #openstack-ironic21:37
openstackgerritJohn L. Villalovos proposed openstack/ironic: ironic/tests/drivers/amt: Add autospec=True to mocks  https://review.openstack.org/17411321:37
*** achanda has quit IRC21:37
*** achanda has joined #openstack-ironic21:38
TheJuliaAnyone seen issues cleaning a node with IPA?  From the conductor log http://paste.openstack.org/show/uZ9wd5p7auUGn5MNl7gD/21:38
jrollaw hell21:40
jrollTheJulia: we hit that but I thought it was a downstream shim we added21:41
jrollJoshNang: ^ :(21:41
jrollI'll put a patch up21:41
jrollbut we'll need to backport it21:41
TheJuliajroll: :(21:41
TheJuliac'est la vie21:42
*** trown is now known as trown|outttypeww21:42
JoshNang:( yeah i thought that was downstream too21:42
jrollTheJulia: mind filing a bug in the meantime?21:43
jrollidk why I thought this was in our hack, wow21:43
openstackgerritJim Rollenhagen proposed openstack/ironic: Fix heartbeat when clean step in progress  https://review.openstack.org/17411521:43
jrollthat should do it21:43
TheJuliajroll: sure21:45
TheJuliajroll: will in 2-3 minutes21:45
jrollyeah, no rush :)21:45
jrollactually, I'll just get it, no worries21:45
*** jamielennox|away is now known as jamielennox21:46
*** andreykurilin has quit IRC21:46
openstackgerritJim Rollenhagen proposed openstack/ironic: Fix heartbeat when clean step in progress  https://review.openstack.org/17411521:50
jroll^ with a bug #21:50
jrolldevananda: we're going to want to backport that to the RC, however that works (just push to stable/kilo?)21:50
devanandajroll: cheers. tag the bug with "kilo-rc-potential" pls21:55
jrollyep21:55
devanandajroll: https://wiki.openstack.org/wiki/StableBranch#Processes21:56
jrolldone21:56
jrollaha, awesome, ty21:56
jrollwell21:56
devanandanote that the fix has to land on master first21:56
jrollmy problem is I don't see stable/kilo branch, though maybe I'm doing something wrong here21:56
devanandajroll: proposed/kilo21:56
jrollaha21:57
jrollty21:57
devanandanp!21:57
devanandathe permissions are the same -- only stable maintenance team & release mgmt team can approve them21:57
devanandaand the process is the same as documented up there ^21:57
jrollyep21:57
openstackgerritJim Rollenhagen proposed openstack/ironic: Fix heartbeat when clean step in progress  https://review.openstack.org/17411521:58
jrolloh damn, I overwrote that21:58
devanandaheh21:58
jrolland still didn't end up on proposed/kilo21:58
devanandagotta jump on a call, back in a while21:59
TheJuliajroll: cool, thank you, my lag just got REALLY bad... like... maybe time to get new internet connection bad21:59
jrollThe branch 'proposed/kilo' does not exist on the given remote21:59
jroll'gerrit'. If these changes are intended to start a new branch, re-run22:00
jrollwith the '-R' option enabled.22:00
jrollhrmmmmmmm22:00
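The backport flow devananda points at, as a command sketch (branch name per the discussion; the exact git-review invocation may vary, and the commit SHA is a placeholder — the fix has to land on master first):

```
git fetch gerrit
git checkout -b kilo-backport gerrit/proposed/kilo   # once the branch exists
git cherry-pick -x <master-commit-sha>
git review proposed/kilo
```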
jrollTheJulia: :( no problem22:00
TheJuliaIt is just a sign I need to call it an evening22:00
mrdaMorning Ironic22:03
mrda(Sorry TheJulia, it's just morning, at least here :)22:03
jrollheya mrda :)22:04
mrdao/22:05
jrollTHERE we go22:08
jrollthanks for the announce openstackgerrit22:08
jrolldevananda: backport is https://review.openstack.org/17412222:09
*** ijw has quit IRC22:18
*** ijw has joined #openstack-ironic22:20
*** ijw has quit IRC22:30
jxiaobinshall we document "scheduler_use_baremetal_filters" of nova.conf in install-guide.rst?22:43
jxiaobinmy understanding is if this option is not set, the scheduler will not go through the baremetal filters, am I right?22:44
jrolldo we not document that?22:44
jrollbecause we should22:44
jxiaobinI didn't find that22:44
jroll:(22:45
jxiaobinI greped all the doc under deploy folder, none22:45
jrollcan you send a patch or file a bug? :)22:46
jxiaobinsure22:46
jxiaobinfor months, I've been running VM filters, until today I'm trying to fix this, and found the option in code :D22:47
jrollheh22:48
jrollI've been trained to always go to the code :)22:48
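The option jxiaobin found is a Kilo-era nova.conf setting; a minimal sketch of what his patch documents:

```ini
[DEFAULT]
# Without this, the scheduler uses the VM-oriented default filters
# rather than the baremetal variants.
scheduler_use_baremetal_filters = True
```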
*** achanda has quit IRC22:48
*** kbs1 has quit IRC22:49
*** Marga_ has quit IRC22:51
openstackgerritjxiaobin proposed openstack/ironic: document "scheduler_use_baremetal_filters" option in nova.conf  https://review.openstack.org/17414122:51
jrolljxiaobin: left a quick review :)22:53
*** achanda has joined #openstack-ironic22:58
openstackgerritjxiaobin proposed openstack/ironic: document "scheduler_use_baremetal_filters" option in nova.conf  https://review.openstack.org/17414122:58
jxiaobinjroll: got it and updated :)22:59
*** kbs has joined #openstack-ironic22:59
jrollthanks :)22:59
jrolljxiaobin: could you add a new line before the new stuff?23:00
*** yuanying has joined #openstack-ironic23:01
jxiaobinjroll: ok23:01
openstackgerritjxiaobin proposed openstack/ironic: document "scheduler_use_baremetal_filters" option in nova.conf  https://review.openstack.org/17414123:02
openstackgerritJohn L. Villalovos proposed openstack/ironic: ironic/tests/drivers/drac: Add spec= & autospec=True  https://review.openstack.org/17414523:03
*** Marga_ has joined #openstack-ironic23:03
*** yuanying has quit IRC23:03
jrollthank you jxiaobin, +2 :)23:03
*** yuanying has joined #openstack-ironic23:04
jxiaobinthanks jroll!23:04
*** Isotopp has quit IRC23:06
*** chlong has joined #openstack-ironic23:11
*** Sukhdev has quit IRC23:12
*** Marga_ has quit IRC23:13
*** Isotopp has joined #openstack-ironic23:15
*** Isotopp has quit IRC23:15
*** Isotopp has joined #openstack-ironic23:15
*** kbs has quit IRC23:22
*** Marga_ has joined #openstack-ironic23:23
*** dttocs_ has joined #openstack-ironic23:31
*** moshaile__ has joined #openstack-ironic23:33
*** tteggel_ has joined #openstack-ironic23:34
*** Shrews_ has joined #openstack-ironic23:35
*** HenryG_ has joined #openstack-ironic23:35
*** Haomeng|2 has joined #openstack-ironic23:36
*** Shrews has quit IRC23:36
*** Shrews_ is now known as Shrews23:36
*** mdbooth has quit IRC23:38
*** moshaile_ has quit IRC23:38
*** dttocs has quit IRC23:38
*** HenryG has quit IRC23:38
*** bigjools has quit IRC23:38
*** bigjools_ has joined #openstack-ironic23:38
*** Haomeng has quit IRC23:38
*** mdbooth_ has joined #openstack-ironic23:38
*** mdbooth_ is now known as mdbooth23:38
*** tteggel has quit IRC23:39
*** bigjools_ is now known as bigjools23:39
*** bigjools has quit IRC23:40
*** bigjools has joined #openstack-ironic23:40
*** Marga_ has quit IRC23:54

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!