Friday, 2014-12-05

00:04 *** jmankov has joined #openstack-ironic
00:04 <NobodyCam> three?
00:05 <jroll> seven
00:07 *** jmanko has quit IRC
00:09 <NobodyCam> clif_h: https://github.com/openstack/ironic-python-agent/blob/master/ironic_python_agent/tests/agent.py#L333
00:10 *** Masahiro has joined #openstack-ironic
00:14 *** Masahiro has quit IRC
00:14 *** spandhe has left #openstack-ironic
00:18 *** openstackgerrit has quit IRC
00:19 *** openstackgerrit has joined #openstack-ironic
00:21 *** andreykurilin_ has quit IRC
00:25 *** spandhe_ has joined #openstack-ironic
00:25 <JayF> clif_h: ^ I think NobodyCam just found your test failure
00:26 <NobodyCam> :-p
00:26 <clif_h> NobodyCam: Thank you
00:26 <NobodyCam> nobodycamAir:ironic-python-agent NobodyCam$ grep -R _get_kernel_params *  :-p
00:27 <clif_h> I thought I had grepped through the repo for that function name
00:27 <clif_h> but I did not hit the tests
00:27 <clif_h> :(
00:27 <NobodyCam> :)
00:27 <JayF> if you get a merge in 2 patchsets
00:27 <JayF> for your first openstack contribution
00:28 *** spandhe_ has left #openstack-ironic
00:28 <JayF> That's like, 96 less than JoshNang needed to not get his original agent driver merged :P
00:28 <NobodyCam> lol
00:28 <NobodyCam> JayF: you didn't say be nice to JoshNang the first day
00:29 <NobodyCam> lol
00:29 * NobodyCam *ducks*
00:29 <clif_h> I'm sure the agent driver was much more complex though
00:29 <JayF> You guys wouldn't have listened to me anyway :)
00:29 <clif_h> this is touching-the-ground-fruit
00:29 <NobodyCam> hehehehe :-p
00:29 <JayF> clif_h: you have no idea ... we basically had upstreamed, in a single gerrit patchset, all our downstream code. It was giant and awful and unreviewable.
00:30 <NobodyCam> +2000 lines
00:30 <NobodyCam> but it really was a "new" approach (and quite kewl)
00:30 <JayF> it was worse than that, wasn't it?
00:31 <JayF> NobodyCam: we ran it in production for a long time (still kinda do; but it's broken up even downstream now)
00:31 *** marcoemorais has quit IRC
00:31 *** marcoemorais has joined #openstack-ironic
00:33 <JayF> clif_h: the important thing is the tempest job passed; that means you didn't break anything but tests :D
00:33 *** marcoemorais has quit IRC
00:33 *** marcoemorais has joined #openstack-ironic
00:34 *** hemna has quit IRC
00:35 <openstackgerrit> Clif Houck proposed openstack/ironic-python-agent: Fix badly named function _get_kernel_params()  https://review.openstack.org/139269
00:36 <clif_h> JayF: but all the tests must pass!
00:36 <clif_h> :)
00:38 *** penick has joined #openstack-ironic
00:39 <jroll> clif_h: https://github.com/ggreer/the_silver_searcher
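The missed-test-caller problem above comes down to searching the whole tree, tests included, before renaming a function. A minimal sketch of that kind of check; the mini-repo, file names, and contents are made up for illustration:

```shell
# Hypothetical mini-repo: before renaming a function, grep the WHOLE tree
# for callers -- including tests/, which the first search missed.
repo=$(mktemp -d)
echo "def _get_kernel_params(): pass" > "$repo/agent.py"
mkdir -p "$repo/tests"
echo "_get_kernel_params()" > "$repo/tests/test_agent.py"

# grep -R finds both the definition and the test that still calls it.
hits=$(grep -Rl "_get_kernel_params" "$repo" | wc -l)
echo "$hits"
```

the_silver_searcher's `ag _get_kernel_params` would give the same hits, typically faster, and honors .gitignore by default.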
00:45 *** Haomeng|2 has quit IRC
00:47 *** Haomeng|2 has joined #openstack-ironic
00:50 *** shakamunyi has quit IRC
00:51 *** spandhe has joined #openstack-ironic
00:52 <openstackgerrit> Devananda van der Veen proposed openstack/ironic: Refactor async helper methods in conductor/manager.py  https://review.openstack.org/139217
00:52 <openstackgerrit> Devananda van der Veen proposed openstack/ironic: Begin using the state machine for node deploy/teardown  https://review.openstack.org/139216
00:52 *** igordcard has quit IRC
01:04 *** penick has quit IRC
01:06 *** Masahiro has joined #openstack-ironic
01:09 *** dlaube has quit IRC
01:11 *** penick has joined #openstack-ironic
01:13 *** penick has quit IRC
01:15 <openstackgerrit> Merged openstack/ironic-python-agent: Fix badly named function _get_kernel_params()  https://review.openstack.org/139269
01:16 <JayF> clif_h: ^ \o/ grats
01:17 *** jerryz has joined #openstack-ironic
01:18 <openstackgerrit> Merged openstack/ironic: Updated from global requirements  https://review.openstack.org/139229
01:28 <clif_h> JayF: \o/ thanks!
01:30 <NobodyCam> congratz clif_h
01:32 <JayF> NobodyCam: if you know of any good bugs or small things to point clif_h at, please recommend them. our low-hanging-fruit tagged bugs are more like rotten fruit that's been on the ground for a year
01:32 <clif_h> NobodyCam: Thanks!
01:32 <JayF> (aka all docs updates or logging improvements)
01:33 <devananda> one of the things I'd love to see is a good operator-centric logging rewrite within the ConductorManager module
01:34 <jroll> we don't know nothin' about operating ironic
01:34 * jroll runs
01:34 <devananda> heh heh heh
01:34 <JayF> http://www.stopbuyingcrap.com/pics/sbc/itsatrap.jpg
01:34 <devananda> JayF: you walked into that one
01:35 <devananda> (I'm just sneaky, so you didn't see it coming)
01:35 <devananda> actually, I'm also frustrated by it after jumping in that code again today
01:36 <JayF> Well if I was going to fix a log message, I'd fix the case jroll found the other day where ipa can fail to download an image and not log a damn thing
01:36 <JayF> which seems ... bad
01:36 <devananda> that doesn't seem helpful, no
01:36 <JayF> but we knew what happened because we had logs in devstack for ipa
01:36 <JayF> so progress at least
01:36 <jroll> I mean, the ipa logging story is horrible
01:36 <jroll> so is the docs story
01:36 <jroll> both make me feel like a bad person
01:36 <JayF> devananda: https://review.openstack.org/#/c/134436/ itsatrap.jpg
01:36 <jroll> (I might be)
01:36 <devananda> if I was refactoring ConductorManager, I might do this: https://review.openstack.org/#/c/139217/2/ironic/conductor/manager.py
01:37 <devananda> :)
01:37 <JayF> does ironic-parallel tempest usually pass nowadays?
01:37 <devananda> that's a good question
01:38 <devananda> I see several cases where it failed but pxe_ssh passed
01:39 <JayF> Yeah, I should just check adam_g's gate status thing
01:39 <adam_g> JayF, it should be
01:39 <adam_g> well
01:39 <adam_g> there's that race thing we were looking at the other week that turned up there as well
01:40 <JayF> does it happen more often/more likely in the parallel job, I wonder? That could be an interesting data point
01:40 <adam_g> JayF, it's more likely to happen in the parallel job because we're launching so many more VMs there than the current pxe_ssh job
01:40 <JayF> adam_g: did you make any progress on the persistent logs from libvirt console? I sadly never got around to testing that locally
01:40 <JayF> adam_g: that's what I thought, yeah
01:40 <adam_g> JayF, but the failure started to pop up more often in the pxe_ssh job, but seems to have calmed down, mysteriously
01:41 <adam_g> JayF, I actually haven't had a chance on that, sorry :|
01:41 <JayF> It's fine; I haven't tested it locally so I can't complain
01:41 <JayF> heh
01:41 *** vdrok has quit IRC
01:42 <adam_g> JayF, I was hoping the current patchset @ https://review.openstack.org/#/c/129099/ would work; it worked when I spawned it locally but doing it in the slave didn't work out so well
01:42 *** vdrok has joined #openstack-ironic
01:43 <openstackgerrit> Devananda van der Veen proposed openstack/ironic: Refactor async helper methods in conductor/manager.py  https://review.openstack.org/139217
01:43 <openstackgerrit> Devananda van der Veen proposed openstack/ironic: Begin using the state machine for node deploy/teardown  https://review.openstack.org/139216
01:43 <devananda> anyone else seeing this in the conductor logs lately?
01:43 <devananda> DEBUG oslo.messaging._drivers.impl_rabbit [-] Timed out waiting for RPC response: timed out _raise_timeout_if_deadline_is_reached
01:45 <devananda> adam_g: what's the race you're seeing?
01:45 <adam_g> devananda, yeah I just noticed those somewhere this past hour
01:46 <adam_g> devananda, some issue where a node sporadically cannot reach back to the host machine's tftp server. not certain it's a race or what
01:46 <devananda> ok
01:46 <devananda> the failures I've got right now in parallel-nv are u'No valid host was found. There are not enough hosts available.'
01:47 *** smoriya has joined #openstack-ironic
01:47 <devananda> seems to be on this patch set only --  https://review.openstack.org/#/c/139217/
01:47 <devananda> if this is news to you, I'll assume it's something I broke :)
01:48 <adam_g> devananda, well I was seeing these earlier, but patched devstack to address it
01:49 <adam_g> (it was starting tempest tests before n-cpu's periodic picked up recently enrolled nodes)
01:49 <adam_g> I'm actually looking at the same failure on 139223
01:49 <adam_g> which is certainly not related to the patch
01:50 <devananda> ah, k
01:52 *** nosnos has joined #openstack-ironic
01:52 <devananda> adam_g: yea, looks like the same situation here. tests failed before n-cpu picked up available resources
01:53 *** dlaube1 has joined #openstack-ironic
01:53 <adam_g> devananda, yeah
01:53 <adam_g> http://logs.openstack.org/23/139223/1/check/check-tempest-dsvm-ironic-parallel-nv/cb2bf22/logs/devstacklog.txt.gz#_2014-12-04_22_51_03_347
01:53 <adam_g> devstack should wait until 'hypervisor-stats -ge $expected_nodes'
01:53 <adam_g> which it's doing, but still failing.
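The devstack check adam_g describes boils down to a poll-until-count loop. A rough sketch, not the real devstack code; the function name and the way the stats command is passed in are assumptions:

```shell
# Poll a command that prints a node count until it reports at least
# $expected nodes, or fail after $timeout seconds. In devstack the
# command would be something like `nova hypervisor-stats` parsed down
# to a single number.
wait_for_nodes() {
    local cmd=$1 expected=$2 timeout=${3:-120} interval=${4:-5}
    local elapsed=0 count
    while [ "$elapsed" -lt "$timeout" ]; do
        count=$($cmd 2>/dev/null)
        if [ "${count:-0}" -ge "$expected" ]; then
            return 0
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    echo "Nova hypervisor-stats did not register at least $expected nodes" >&2
    return 1
}
```

The race discussed here is that the count can be high enough when the loop exits yet tempest still schedules faster than n-cpu's periodic task refreshes resources.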
01:54 <adam_g> devananda, re those messaging timeouts, I wonder if those started popping up around the time the new oslo.messaging was released today
01:59 <devananda> adam_g: oh. right. makes sense
02:06 <adam_g> i need to run; i wanna look closer at those new scheduler fails tomorrow. timestamps seem to check out on 139223--the failures happen long after nova's got the resources. i wonder if tempest is spinning up more instances concurrently now
02:12 *** ChuckC_ has quit IRC
02:12 *** ChuckC_ has joined #openstack-ironic
02:12 *** dlaube1 has quit IRC
02:15 *** alexpilotti has joined #openstack-ironic
02:17 *** dlaube has joined #openstack-ironic
02:17 *** ChuckC_ has quit IRC
02:22 *** kurtrao has quit IRC
02:22 *** Masahiro has quit IRC
02:23 *** rloo has quit IRC
02:24 *** Masahiro has joined #openstack-ironic
02:29 *** dlaube has quit IRC
02:31 *** kurtrao has joined #openstack-ironic
02:37 *** marcoemorais has quit IRC
02:40 *** ryanpetrello has quit IRC
02:41 *** ramineni has joined #openstack-ironic
02:43 *** ryanpetrello has joined #openstack-ironic
02:48 *** nosnos has quit IRC
02:49 *** ChuckC_ has joined #openstack-ironic
02:50 *** r-daneel has quit IRC
02:50 *** shakamunyi has joined #openstack-ironic
02:50 *** shakamunyi has quit IRC
02:51 *** Haomeng has joined #openstack-ironic
02:52 *** Haomeng|2 has quit IRC
02:54 *** harlowja_ is now known as harlowja_away
02:57 *** spandhe has quit IRC
02:59 *** shakamunyi has joined #openstack-ironic
03:01 *** achanda has joined #openstack-ironic
03:11 *** achanda has quit IRC
03:15 *** dlaube has joined #openstack-ironic
03:26 *** naohirot has quit IRC
03:38 *** Masahiro has quit IRC
03:41 <openstackgerrit> Jeremy Stanley proposed openstack/ironic: Workflow documentation is now in infra-manual  https://review.openstack.org/139329
03:41 <openstackgerrit> Jeremy Stanley proposed openstack/ironic-python-agent: Workflow documentation is now in infra-manual  https://review.openstack.org/139330
03:41 <openstackgerrit> Jeremy Stanley proposed openstack/ironic-specs: Workflow documentation is now in infra-manual  https://review.openstack.org/139331
03:44 *** Masahiro has joined #openstack-ironic
03:45 *** stevebaker has left #openstack-ironic
03:49 *** Masahiro has quit IRC
03:51 <openstackgerrit> Jeremy Stanley proposed openstack/python-ironicclient: Workflow documentation is now in infra-manual  https://review.openstack.org/139374
03:58 *** pcrews has quit IRC
03:59 *** Marga_ has quit IRC
03:59 *** Masahiro has joined #openstack-ironic
04:00 *** alexpilotti has quit IRC
04:04 *** naohirot has joined #openstack-ironic
04:05 *** pensu has joined #openstack-ironic
04:06 <naohirot> good afternoon ironic!
04:07 <mrda> hi naohirot
04:07 <naohirot> mrda: hi
04:07 * naohirot had a long meeting this morning :)
04:15 *** r-daneel has joined #openstack-ironic
04:37 *** Marga_ has joined #openstack-ironic
04:37 *** rameshg87 has joined #openstack-ironic
04:38 *** dlaube has quit IRC
04:53 *** r-daneel has quit IRC
04:53 *** killer_prince is now known as lazy_prince
05:01 <Haomeng> naohirot: hi
05:02 <naohirot> Haomeng: Hi :)
05:02 <Haomeng> naohirot: good afternoon:)
05:03 <naohirot> Haomeng: good afternoon
05:03 <Haomeng> naohirot: :)
05:04 <naohirot> Haomeng: are you going to attend the next IRC meeting which starts at 5 UTC?
05:09 <Haomeng> naohirot: sure, I want to; my zone is +8, it should be my 1pm?
05:09 <Haomeng> 8 Dec, 5 UTC, right?
05:10 <naohirot> Haomeng: Yes, it is; it should be 1PM, and 2PM here.
05:10 <Haomeng> naohirot: we have two weeks IRC with utc+5, right?
05:11 <naohirot> Haomeng: Yes, but there are good news and bad news.
05:11 <Haomeng> naohirot: :)
05:11 <Haomeng> naohirot: I like bad news:)
05:12 <naohirot> Haomeng: good news is that the new time schedule is absolutely easier to attend.
05:12 *** Masahiro has quit IRC
05:12 <Haomeng> naohirot: yes
05:12 <openstackgerrit> Gopi Krishna S proposed openstack/ironic-specs: Cisco UCS power driver  https://review.openstack.org/139517
05:12 <naohirot> Haomeng: Bad news is that it's easier to conflict schedule :<
05:13 <naohirot> Haomeng: So I cannot attend the next meeting due to double booking.
05:13 <Haomeng> naohirot: last release meeting was at my 3am, I attended three times totally:)
05:13 <Haomeng> naohirot: dont worry
05:13 *** Marga_ has quit IRC
05:13 <Haomeng> naohirot: I have a habit that I will review the meeting log if I can not attend
05:13 <Haomeng> naohirot: to catch the points:)
05:13 <Haomeng> naohirot: dont worry
05:14 *** Marga_ has joined #openstack-ironic
05:14 <naohirot> Haomeng: Yes, I'll check the log.
05:14 <Haomeng> naohirot: :)
05:15 <Haomeng> naohirot: for the kilo meeting, we are running with a new mode, to discuss the topics which are raised before the meeting, so we have a chance to discuss topics in the meeting; that is cool
05:15 <naohirot> Haomeng: If starting at 3am our local time it never creates double booking :-)
05:15 <Haomeng> naohirot: :)
05:15 <Haomeng> naohirot: np, dont worry
05:16 <naohirot> Haomeng: Okay :-)
05:16 <Haomeng> naohirot: :)
05:27 *** Masahiro has joined #openstack-ironic
05:52 *** saripurigopi has joined #openstack-ironic
05:54 <saripurigopi> HI, I'm trying to submit a new BP for kilo.
05:54 <saripurigopi> I've executed the tox tests locally, and do not see any errors. But the Jenkins tests failed with an error.
05:55 <saripurigopi> ft1.2: tests.test_titles.TestTitles.test_current_cycle_template_StringException: Traceback (most recent call last):
05:55 <saripurigopi>   File "tests/test_titles.py", line 142, in test_current_cycle_template
05:55 <saripurigopi>     self._check_license(data)
05:55 <saripurigopi>     self._check_license(data)
05:55 <saripurigopi> any idea what I'm missing here.
05:56 <Haomeng> saripurigopi: hi
05:57 <saripurigopi> @Haomeng Hi
05:57 <Haomeng> saripurigopi: let me help you check
05:57 <Haomeng> saripurigopi: which one?
05:58 <Haomeng> https://review.openstack.org/#/c/139517/?
05:58 <saripurigopi> Haomeng: This is the failure result http://logs.openstack.org/17/139517/1/check/gate-ironic-specs-python27/c6d67f3/testr_results.html.gz
05:58 <Haomeng> saripurigopi: ok
05:58 <saripurigopi> yes
05:58 *** lazy_prince has quit IRC
05:59 <Haomeng> saripurigopi: we can check this log in detail - http://logs.openstack.org/17/139517/1/check/gate-ironic-specs-python27/c6d67f3/console.html
05:59 *** penick has joined #openstack-ironic
05:59 <Haomeng> saripurigopi: from the first error message
06:00 <Haomeng> saripurigopi: missing the license at the top of the spec, I think
06:00 <saripurigopi> Haomeng: ok, let me check.
06:01 <Haomeng> saripurigopi: I checked your license lines, looks fine
06:01 *** penick has quit IRC
06:01 <Haomeng> saripurigopi: so dont worry, sometimes that is caused by jenkins
06:01 <saripurigopi> Haomeng: okay,
06:01 <Haomeng> saripurigopi: and some of the cases are blocked by other code
06:02 <Haomeng> did you notice "Attribution 3.0 Unported\"
06:02 <saripurigopi> Haomeng: yeah
06:02 <Haomeng> saripurigopi: your first line, can you try to remove the last char "\"
06:02 <Haomeng> saripurigopi: maybe it is not required and breaks the license validator
06:02 <Haomeng> saripurigopi: maybe
06:02 <Haomeng> saripurigopi: :)
06:04 <openstackgerrit> Gopi Krishna S proposed openstack/ironic-specs: Cisco UCS power driver  https://review.openstack.org/139517
06:05 <saripurigopi> Haomeng: submitted again, after removing the last char.
06:05 <Haomeng> saripurigopi: ok, good luck:)
06:05 <Haomeng> saripurigopi: :)
06:07 *** shakamunyi has quit IRC
06:08 <Haomeng> saripurigopi: works:)
06:09 <saripurigopi> Haomeng: Yes, thank you :)
06:09 <Haomeng> Jenkins              +1
06:09 <Haomeng> :)
06:09 <Haomeng> welcome
06:09 <saripurigopi> Haomeng: is 8th Dec the last date to submit new BPs?
06:10 *** rushiagr is now known as rushiagr_away
06:14 <Haomeng> saripurigopi: should be, let me check the reference
06:15 <Haomeng> saripurigopi: but dont worry, you have raised the spec already
06:18 <saripurigopi> Haomeng: okay, thank you.
06:19 *** mrda is now known as mrda-weekend
06:21 <Haomeng> saripurigopi: well
06:29 *** CHZ-PC has joined #openstack-ironic
06:30 <Haomeng> saripurigopi: another round of comments:)
06:31 <saripurigopi> Haomeng: sure
06:33 <Haomeng> saripurigopi: dont worry, you can fix with the other reviewers' comments
06:34 <Haomeng> saripurigopi: ：）
06:34 <Haomeng> saripurigopi: :)
06:34 <saripurigopi> Haomeng: :-)
06:35 *** Masahiro has quit IRC
06:39 *** Masahiro has joined #openstack-ironic
06:43 *** CHZ-PC has quit IRC
06:47 *** saripurigopi has quit IRC
06:49 *** rushiagr_away is now known as rushiagr
06:49 *** openstackgerrit has quit IRC
06:49 *** openstackgerrit has joined #openstack-ironic
06:54 *** cuihaozhi has joined #openstack-ironic
06:56 <cuihaozhi> Hi, when i boot a server from nova, nova-scheduler reports "Filter RamFilter returned 0 hosts", and nova hypervisor-show $baremetal-compute displays free_ram_bm=0; is there any step i missed?
06:57 <cuihaozhi> until now i can use ironic node-xxx to control the phy-node
07:04 <rameshg87> cuihaozhi, are you using ironic or nova-baremetal ?
07:04 <cuihaozhi> i'm using ironic 2014.2
07:05 <cuihaozhi> with this document http://docs.openstack.org/developer/ironic/deploy/install-guide.html
07:10 <cuihaozhi> do i need "ironic-nova-bm-migrate" to register some info to nova?
07:12 <cuihaozhi> openstack-nova-compute.log:   nova.compute.resource_tracker [-] Total physical ram (MB): 0, total allocated virtual ram (MB): 0
07:42 <rameshg87> cuihaozhi, just wondering if you have any previous instances in nova that used this bare metal
07:43 <rameshg87> cuihaozhi, instances that have errored
07:43 <rameshg87> cuihaozhi, you could try deleting those instances which have failed with error and try
07:44 <cuihaozhi> yes, i deleted the error instance.
07:45 <rameshg87> cuihaozhi, does your ironic node have properties/memory_mb set ?
07:45 <cuihaozhi> do i need ironic node-update to register the node meminfo?
07:45 <cuihaozhi> no.
07:45 <cuihaozhi> i'll try it
07:45 <rameshg87> cuihaozhi, something like this: ironic node-update $NODE_UUID add properties/cpus=4 properties/memory_mb=4096 properties/local_gb=30 properties/cpu_arch=x86_64
07:45 <rameshg87> cuihaozhi, the nova ironic virt driver reads these and sends them to the nova scheduler
07:46 <cuihaozhi> thank u i will try it
07:47 <rameshg87> cuihaozhi, wc :)
07:48 <rameshg87> cuihaozhi, you may address a person in irc by adding their irc name to the msg (like i added your name to my msg at the beginning)
07:48 <rameshg87> cuihaozhi, because i missed your message because it didn't alert me :)
07:59 *** dtantsur|afk is now known as dtantsur
07:59 <dtantsur> Morning
08:05 <openstackgerrit> Merged stackforge/ironic-discoverd: Workflow documentation is now in infra-manual  https://review.openstack.org/139472
08:09 *** jerryz has quit IRC
08:20 *** chenglch has joined #openstack-ironic
08:22 *** enterprisedc has joined #openstack-ironic
08:31 *** jcoufal has joined #openstack-ironic
08:35 *** Isotopp_ is now known as Isotopp
08:36 *** Masahiro has quit IRC
08:37 <sirushti> dtantsur, Hi, I'd replied to your concerns and then updated the spec. Could you please take a look into https://review.openstack.org/#/c/97150/ again?
08:40 *** enterprisedc_ has joined #openstack-ironic
08:41 <dtantsur> hi, will try to (a bit later)
08:41 *** enterprisedc_ has quit IRC
08:42 *** enterprisedc has quit IRC
08:44 *** enterprisedc has joined #openstack-ironic
08:47 *** Masahiro has joined #openstack-ironic
08:49 *** dtantsur is now known as dtantsur|brb
08:58 *** derekh has joined #openstack-ironic
09:06 *** rakesh_hs has joined #openstack-ironic
09:07 <cuihaozhi> hi, when i boot an instance, i can see ironic-conductor set "chassis bootdev pxe options=persistent", then the node just powers on and does not deploy; and i can't find a "dnsmasq" process on ironic-conductor. is there anything wrong?
09:10 <Haomeng> cuihaozhi: hi
09:10 <Haomeng> cuihaozhi: dnsmasq is controlled by neutron
09:11 <Haomeng> cuihaozhi: during nova booting, ironic will call neutron update to prepare the dhcp port, which will launch a dnsmasq process to serve the dhcp/pxe request from the bm
09:12 <Haomeng> cuihaozhi: so maybe it's your neutron issue
09:12 <Haomeng> cuihaozhi: check the nova ironic compute node log, and the neutron logs, to see which step has the issue during booting
09:13 <cuihaozhi> Haomeng: thank u i will check it
09:14 *** jistr has joined #openstack-ironic
09:16 *** subscope has quit IRC
09:20 <naohirot> Haomeng: Hi
09:21 <naohirot> Haomeng: are you there?
09:22 *** igordcard has joined #openstack-ironic
09:23 <openstackgerrit> Tan Lin proposed openstack/ironic: Add AMT-PXE-Driver to deploy cloud on PC  https://review.openstack.org/135184
09:26 *** subscope has joined #openstack-ironic
09:27 *** romcheg has joined #openstack-ironic
09:32 *** andreykurilin_ has joined #openstack-ironic
09:33 <openstackgerrit> Ramakrishnan G proposed openstack/ironic-specs: iLO virtual media drivers to deploy without DHCP  https://review.openstack.org/137567
09:40 *** lucasagomes has joined #openstack-ironic
09:45 *** rakesh_hs has quit IRC
09:45 *** foexle has joined #openstack-ironic
09:53 *** derekh has quit IRC
09:54 *** andreykurilin_ has quit IRC
09:54 *** andreykurilin_ has joined #openstack-ironic
09:56 *** subscope has quit IRC
10:03 <Haomeng> naohirot: hi
10:04 *** jerryz has joined #openstack-ironic
10:06 *** subscope has joined #openstack-ironic
10:10 <Haomeng> naohirot: I am back:)
10:16 *** athomas has joined #openstack-ironic
10:17 *** rakesh_hs has joined #openstack-ironic
10:19 *** naohirot has quit IRC
10:20 *** andreykurilin_ has quit IRC
10:21 *** rakesh_hs2 has joined #openstack-ironic
10:21 *** andreykurilin_ has joined #openstack-ironic
10:22 *** rakesh_hs has quit IRC
10:25 *** pelix has joined #openstack-ironic
10:26 *** chenglch has quit IRC
10:29 <openstackgerrit> Lucas Alvares Gomes proposed openstack/ironic: Add tests to iscsi_deploy.build_deploy_ramdisk_options  https://review.openstack.org/139097
10:33 *** Masahiro has quit IRC
10:39 *** rameshg87 has quit IRC
10:40 *** rameshg87 has joined #openstack-ironic
11:01 *** ramineni has quit IRC
11:07 *** Haomeng has quit IRC
11:09 *** derekh has joined #openstack-ironic
11:10 *** Haomeng has joined #openstack-ironic
11:24 <openstackgerrit> Ramakrishnan G proposed openstack/ironic: Fix for broken deploy of iscsi_ilo driver  https://review.openstack.org/139602
11:34 *** derekh has quit IRC
11:34 *** Masahiro has joined #openstack-ironic
11:36 *** cuihaozhi has left #openstack-ironic
11:39 *** Masahiro has quit IRC
11:44 *** naohirot has joined #openstack-ironic
11:45 *** rameshg87 has quit IRC
11:47 *** derekh has joined #openstack-ironic
11:47 <naohirot> Haomeng: I'm back too
11:48 <naohirot> Haomeng: I'm just wondering if "More than one compute_driver in nova.conf" is possible or not.
11:48 <naohirot> Haomeng: http://lists.openstack.org/pipermail/openstack/2013-August/000513.html
11:51 *** lucasagomes is now known as lucas-brb
11:51 <naohirot> Haomeng: it is a little bit old mail. If we configure nova for ironic, does that nova become exclusively for ironic?
11:58 <openstackgerrit> Vladyslav Drok proposed openstack/ironic: Remove 'glance://' prefix strip from image hrefs  https://review.openstack.org/139057
11:59 *** derekh has quit IRC
12:01 *** Haomeng|2 has joined #openstack-ironic
12:01 *** Haomeng has quit IRC
12:06 *** dtantsur|brb is now known as dtantsur
12:09 *** andreykurilin_ has quit IRC
12:20 *** dlaube has joined #openstack-ironic
12:22 *** kurtrao has quit IRC
12:22 *** kurtrao has joined #openstack-ironic
12:25 *** rakesh_hs2 has quit IRC
12:25 *** jcoufal has quit IRC
12:27 *** jcoufal has joined #openstack-ironic
12:28 *** jcoufal has quit IRC
12:29 *** jcoufal has joined #openstack-ironic
12:34 *** dlaube has quit IRC
12:36 *** smoriya has quit IRC
12:48 *** dprince has joined #openstack-ironic
12:48 *** kurtrao has quit IRC
12:48 *** kurtrao has joined #openstack-ironic
12:52 <Haomeng|2> naohirot: I don't think more than one compute_driver is possible for a nova compute node
12:52 <Haomeng|2> naohirot: one nova compute node supports one single hyper type only
12:52 <Haomeng|2> hypervisor type
12:53 <naohirot> Haomeng|2: Hi, I see
12:53 <Haomeng|2> naohirot: :)
12:55 <naohirot> Haomeng|2: so typically a deployer prepares the same number of nova compute nodes as the number of hypervisors the deployer deals with?
12:55 *** mikedillion has joined #openstack-ironic
12:55 <Haomeng|2> naohirot: one compute node supports one single hypervisor I think
12:55 <Haomeng|2> naohirot: you can confirm it from nova ironic
12:56 <Haomeng|2> naohirot: I did not test such a case :)
12:56 <naohirot> Haomeng|2: I see. I was asked by my colleague, but I couldn't figure it out :-)
12:57 <Haomeng|2> naohirot: nova conf guide - http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html
12:57 <Haomeng|2> naohirot: ok
12:59 <Haomeng|2> naohirot: compute_driver = None    (StrOpt) Driver to use for controlling virtualization. Options include: libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver, baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver
12:59 <naohirot> Haomeng|2: when I looked at the nova source, it seems that defining it like this is impossible
12:59 *** sambetts has joined #openstack-ironic
13:00 <naohirot> Haomeng|2: compute_driver=baremetal.BareMetalDriver:libvirt.LibvirtDriver
13:00 <Haomeng|2> naohirot: sure?
13:00 <Haomeng|2> naohirot: in this doc - http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html
13:01 <Haomeng|2> compute_driver is not a MultiStrOpt or ListOpt
13:01 <Haomeng|2> naohirot: so it should be a single hypervisor type
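Since compute_driver is a plain StrOpt, the conclusion above means running both bare metal and VMs requires separate nova-compute processes, each with its own nova.conf. A sketch of the two configurations (the ironic driver path is an assumption based on the Juno-era nova ironic virt driver, not something stated in the log):

```ini
# nova.conf for a nova-compute process dedicated to ironic:
[DEFAULT]
compute_driver = nova.virt.ironic.IronicDriver

# A second, separate nova-compute process/host would carry the VM side.
# Combining them on one process, e.g.
#   compute_driver = baremetal.BareMetalDriver:libvirt.LibvirtDriver
# is not valid: the option accepts exactly one driver.
[ironic]
# ironic API credentials for the driver would go here
```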
13:01 <Haomeng|2> naohirot: can you check from the nova irc room?
13:02 <Haomeng|2> good night:) I will be offline:)
13:02 <Haomeng|2> naohirot: nice weekend:)
13:02 <naohirot> Haomeng|2: Okay, thanks, good night:-)
13:02 <naohirot> Haomeng|2: you too!
13:02 <Haomeng|2> naohirot: :)
13:08 *** Masahiro has joined #openstack-ironic
13:12 *** bauzas has joined #openstack-ironic
13:13 <bauzas> adam_g: hi, re: https://bugs.launchpad.net/ironic/+bug/1398128 it seems that there is a regression now
13:13 <bauzas> http://logs.openstack.org/73/126573/18/check/check-tempest-dsvm-ironic-pxe_ssh/a61d4c8/logs/devstacklog.txt.gz
13:13 <bauzas> adam_g: http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTm92YSBoeXBlcnZpc29yLXN0YXRzIGRpZCBub3QgcmVnaXN0ZXIgYXQgbGVhc3QgMyBub2Rlc1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTc3ODQzODI2NDZ9
13:13 *** Masahiro has quit IRC
13:13 <bauzas> adam_g: I can't reopen the bug, can you ?
13:19 *** mikedillion has quit IRC
13:28 *** pensu has quit IRC
13:34 *** dlaube has joined #openstack-ironic
13:43 *** pensu has joined #openstack-ironic
13:49 *** lucas-brb is now known as lucasagomes
13:50 <openstackgerrit> Naohiro Tamura proposed openstack/ironic-specs: iRMC Management Driver for Ironic  https://review.openstack.org/136020
13:52 *** dlaube has quit IRC
13:56 <openstackgerrit> Dmitry Tantsur proposed stackforge/ironic-discoverd: Cherry-pick changelog from 0.2.5  https://review.openstack.org/139632
13:56 *** alexpilotti has joined #openstack-ironic
14:00 *** jcoufal has quit IRC
14:00 <dtantsur> lucasagomes, o/ usual request to review a couple of discoverd patches: https://review.openstack.org/#/c/137374/  https://review.openstack.org/#/c/139096/ :)
14:00 *** dlaube has joined #openstack-ironic
14:01 <lucasagomes> dtantsur, hey :) will do, just finishing something up here
14:01 <dtantsur> ack thanks!
14:07 *** derekh has joined #openstack-ironic
14:09 *** Marga_ has quit IRC
14:11 *** Haomeng|2 has quit IRC
14:14 *** pensu has quit IRC
14:14 <openstackgerrit> Dmitry Tantsur proposed stackforge/ironic-discoverd: Do not fail if ipmi_address is not present in discovery data  https://review.openstack.org/139635
14:15 *** Haomeng|2 has joined #openstack-ironic
14:19 *** kbyrne has quit IRC
14:21 *** kbyrne has joined #openstack-ironic
14:21 *** shakamunyi has joined #openstack-ironic
14:28 <openstackgerrit> Dmitry Tantsur proposed openstack/ironic: Workflow documentation is now in infra-manual  https://review.openstack.org/139329
14:30 *** romcheg has quit IRC
14:30 *** romcheg has joined #openstack-ironic
14:30 *** alexpilotti has quit IRC
14:31 *** alexpilotti has joined #openstack-ironic
14:31 <openstackgerrit> Dmitry Tantsur proposed openstack/python-ironicclient: Workflow documentation is now in infra-manual  https://review.openstack.org/139374
14:32 <NobodyCam> Good morning Ironic
14:32 <lucasagomes> NobodyCam, morning
14:33 <dtantsur> NobodyCam, morning, looks like TGIF :)
14:33 <NobodyCam> morning lucasagomes :) TGIF
14:33 <openstackgerrit> Naohiro Tamura proposed openstack/ironic-specs: iRMC Virtual Media Deploy Driver for Ironic  https://review.openstack.org/134865
14:33 <lucasagomes> NobodyCam, \o/ last day before holidays
14:33 <NobodyCam> oh ya... morning dtantsur
14:33 <NobodyCam> lucasagomes: how long are you off?
14:34 <lucasagomes> NobodyCam, only 1 week
14:34 <lucasagomes> next week I will be on holidays
14:34 <lucasagomes> 8-12 of dev
14:34 <lucasagomes> dec*
14:34 <NobodyCam> :) oh lucasagomes is there a public picture of Pixie Boots?
14:34 <lucasagomes> NobodyCam, oh I should add it to the wiki, right?
14:34 <lucasagomes> or to our documentation?
14:35 <NobodyCam> :) ya
14:35 *** romcheg has quit IRC
14:35 <lucasagomes> I will post all the inkscape files
14:35 <lucasagomes> and png versions of it :)
14:35 <NobodyCam> I was thinking of this page: https://wiki.openstack.org/wiki/Ironic
14:36 <NobodyCam> "-p
14:36 <lucasagomes> will add
14:36 <jroll> morning everybody :)
14:36 <NobodyCam> woo hoo :-p
14:37 <NobodyCam> I've shown Pixie to a few people and they Love him/her/it
14:37 <dtantsur> jroll, o/
14:37 <NobodyCam> morning jroll
14:37 <openstackgerrit> Merged stackforge/ironic-discoverd: Cherry-pick changelog from 0.2.5  https://review.openstack.org/139632
14:38 *** alexpilotti has quit IRC
14:39 <naohirot> good morning, and good evening to all
14:40 <NobodyCam> morning naohirot :)
14:40 *** alexpilotti has joined #openstack-ironic
14:40 <naohirot> I've updated two specs, and the things to be done this week have been done :-)
14:40 <lucasagomes> NobodyCam, !
14:40 <lucasagomes> jroll, morning
14:40 <naohirot> NobodyCam: Hi
14:41 <NobodyCam> :)
14:41 * lucasagomes is trying to edit the wiki
14:41 <dtantsur> naohirot, morning
14:41 <naohirot> so I go to bed :-)
14:41 <dtantsur> or is it evening?
14:41 <naohirot> dtantsur: Hi :)
14:41 * dtantsur is confused
14:42 <naohirot> dtantsur: it is approaching midnight here:-)
14:42 <naohirot> so have a nice weekend, all!
14:42 *** alexpilotti has quit IRC
14:43 <dtantsur> naohirot, have a nice weekend then
14:43 <naohirot> dtantsur: you too
14:43 <naohirot> bye
14:43 *** naohirot has quit IRC
NobodyCamhave a great weekend14:45
*** mikedillion has joined #openstack-ironic14:45
lucasagomesew I got upload the pics somewhere before linking in the wiki -.-14:47
*** ichi-the-one has joined #openstack-ironic14:49
ichi-the-oneHELLO nayone here?14:49
ichi-the-oneI have a question and i will be very grateful if someone would answer me14:49
*** erwan_taf has joined #openstack-ironic14:50
erwan_tafheya world14:51
ichi-the-onehey erwan14:51
openstackgerritYuriy Zveryanskyy proposed openstack/ironic-specs: Add a new driver for Fuel Agent  https://review.openstack.org/13811514:52
NobodyCammorning ichi-the-one14:52
NobodyCammorning erwan_taf14:52
ichi-the-onehey thank you14:52
jrollichi-the-one: hi :) don't ask to ask, just ask :)14:52
ichi-the-onecool14:52
NobodyCamhehhe14:52
NobodyCam:)14:52
dtantsurichi-the-one, erwan_taf, hey!14:52
jrollmorning(?) erwan_taf :)14:53
ichi-the-onei was looking into the documentation of ironic, it's mentionned that neutron is supported, is that mean that we can have isolated networks for our baremetal instances?14:53
dlaubeg'morning guys14:54
dtantsurdlaube, morning14:54
erwan_taflo NobodyCam, dtantsur, jroll14:54
erwan_tafjroll: morning yes, still in North America for a couple of hours14:54
*** r-daneel has joined #openstack-ironic14:55
NobodyCammornign dlaube14:55
ichi-the-oneso anyone knows the answer?14:55
NobodyCamichi-the-one: all of my testing is done with a flat network14:57
jrollichi-the-one: theoretically, yes. in reality, neutron does not have support today for configuring real hardware switches.14:57
jroll(as far as I know)14:57
*** Masahiro has joined #openstack-ironic14:57
ichi-the-oneso, we have two options: use a flat network with baremetal, or an SDN controller to configure hardware switches as neutron can't do that, right?14:58
jrollichi-the-one: sounds correct15:00
ichi-the-oneok, aren't there any plans to add this to neutron?15:00
jrollyes15:01
* jroll finds links15:01
openstackgerritMerged openstack/ironic: Add tests to iscsi_deploy.build_deploy_ramdisk_options  https://review.openstack.org/13909715:01
*** Masahiro has quit IRC15:01
jrollichi-the-one: maybe it's abandoned, https://review.openstack.org/#/q/topic:bp/neutron-external-attachment-points,n,z15:02
*** alexpilotti has joined #openstack-ironic15:02
jrollor moved to a different way15:02
jrollyou should ask the neutron channel15:02
* jroll will bbl15:03
ichi-the-oneok thanks15:03
ichi-the-onei found this on github15:03
ichi-the-onehttps://github.com/rackerlabs/ironic-neutron-plugin15:03
jrollaha :)15:03
jrollthis talks more about that plugin and how we use it https://etherpad.openstack.org/p/ironic-neutron-bonding15:04
ichi-the-onewell thank you guys for your time15:04
jrollI really have to go for now but I will be back later15:04
jrollyou're welcome :)15:04
ichi-the-one:)15:04
openstackgerritMerged stackforge/ironic-discoverd: Do not fail if ipmi_address is not present in discovery data  https://review.openstack.org/13963515:05
*** sambetts has quit IRC15:06
NobodyCambrb15:12
lucasagomesdtantsur, was thinking, if we find multiple disks shouldn't we pick the first one that matches the hints?15:13
lucasagomesdtantsur, for example, someone may just care about the disks being size X15:13
dtantsurlucasagomes, always a hard question. for this case probably yes. we should probably have AND of all hints, not OR (like in discoverd)15:14
lucasagomesdtantsur, this gives flexibility, because if someone wants to pick a very specific one he still can by passing the serial or wwn of that disk15:14
erwan_taflucasagomes: _o/15:14
lucasagomeserwan_taf, yo15:14
lucasagomesdtantsur, supporting operators?15:14
lucasagomesdtantsur, I was thinking about adding operators in a later version of that, talked a bit with jroll bout it15:15
lucasagomeslike greater than, etc... we can add that stuff on top of that work15:15
dtantsurlucasagomes, I meant using AND by default, instead of OR15:15
dtantsurwell, tho it still can give more than 1 variant...15:15
erwan_tafwith multi-factor items per device be careful with and/or15:15
lucasagomesdtantsur, oh yeah it's AND by default :)15:15
dtantsurnevermind15:15
erwan_taf(device is a _and_ device is b) or (device_is c _and_ device is d)15:16
erwan_tafcould make long rules15:16
lucasagomeserwan_taf, https://review.openstack.org/#/c/138729/15:16
lucasagomeserwan_taf, yeah I'm keeping it simple for now15:16
dlaubeso I built a custom IPA deploy image, but ran into kernel panics. then I tried using the IPA that is automatically built over at http://tarballs.openstack.org/ironic-python-agent/coreos/files/    and that is also hitting kernel panics. Is there some common failure mode that I can try working around?15:17
erwan_taf*reading*15:17
lucasagomeserwan_taf, like pick the disk that matches THIS THIS THIS AND THAT15:17
lucasagomesno ORs15:17
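The matching semantics dtantsur and lucasagomes settle on above (all hints ANDed together, first matching disk wins, and a unique hint like a serial or wwn pins one specific disk) can be sketched roughly like this. The disk fields and hint names are illustrative, not the spec's exact schema:

```python
def find_root_device(disks, hints):
    """Return the first disk matching *all* hints (logical AND), else None."""
    for disk in disks:
        if all(disk.get(k) == v for k, v in hints.items()):
            return disk
    return None

disks = [
    {'name': '/dev/sda', 'size_gb': 500, 'wwn': '0x5000c5004ef3'},
    {'name': '/dev/sdb', 'size_gb': 500, 'wwn': '0x5000c5009abc'},
    {'name': '/dev/sdc', 'size_gb': 1000, 'wwn': '0x5000c500dead'},
]

# A generic hint may match several disks (first one wins);
# adding a unique hint such as wwn pins down a specific disk.
assert find_root_device(disks, {'size_gb': 500})['name'] == '/dev/sda'
assert find_root_device(disks, {'size_gb': 500, 'wwn': '0x5000c5009abc'})['name'] == '/dev/sdb'
assert find_root_device(disks, {'size_gb': 2000}) is None
```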
* lucasagomes brb 1 sec15:18
*** andreykurilin_ has joined #openstack-ironic15:20
*** ryanpetrello has quit IRC15:21
*** romcheg has joined #openstack-ironic15:21
*** ChuckC_ has quit IRC15:26
*** kurtrao has quit IRC15:26
*** ryanpetrello has joined #openstack-ironic15:26
*** kurtrao has joined #openstack-ironic15:26
*** Marga_ has joined #openstack-ironic15:27
*** romcheg has quit IRC15:27
*** ndipanov is now known as ndipanoff15:33
*** rushiagr is now known as rushiagr_away15:35
erwan_taflucasagomes: commented your proposal. Nice one, I had very few additions / comments15:36
lucasagomeserwan_taf, ta much!15:36
NobodyCamhey hey lucasagomes how familiar are you with the RH openstack packages?15:41
lucasagomesNobodyCam, not much :) but I can point you to the right people15:41
lucasagomeserwan_taf, thanks, so I was talking to dtantsur about what to do if multiple disks are found right now15:42
NobodyCamseems there is something strange with the nova versions.. if you have a second could you just read the replies on: https://bugs.launchpad.net/nova/+bug/137937315:42
lucasagomeserwan_taf, I was thinking that we should just pick one of the disks that matches all the expected criteria, if operator wants to pick a very specific disk he still can pass a unique hint (like uuid or wwn)15:43
lucasagomesNobodyCam, will do15:43
erwan_taflucasagomes: +115:43
NobodyCamI'm not sure but it seems like the nova BM code is 1/2 there15:43
lucasagomeserwan_taf, right I will update the spec15:43
* lucasagomes loads of comments15:44
lucasagomesNobodyCam, wondering if that's specific to RH packages or if those commands (nova-baremetal-manage) were actually deleted upstream as well15:48
lucasagomesNobodyCam, I will check internally see if I find something15:48
erwan_tafThe best hacking scene ever : https://www.youtube.com/watch?v=boEb8zKfPBo \o/15:48
NobodyCamlucasagomes: the proxy code is in place15:49
NobodyCamthe thing I dont get is the ref to the old nova bm tables is gone.15:49
NobodyCamThey could have a older version15:49
lucasagomeserwan_taf, lol ninja!15:49
NobodyCambut then why is the nova-baremetal-manage db sync gone?15:50
lucasagomeserwan_taf, I like this one https://www.youtube.com/watch?v=hkDD03yeLnU15:50
lucasagomesNobodyCam, yeah not sure :/15:50
lucasagomesso the proxy is for Ironic right? he says he has no ironic installed15:51
NobodyCamyea. the command should not work15:52
*** pcrews has joined #openstack-ironic15:52
NobodyCambut it's the looking for the bm tables without the db sync that I'm missing15:53
jrolldlaube: how much ram do you have?15:54
jrolldlaube: you're gonna need about 1gb or more15:54
dlaube16GB15:55
openstackgerritDmitry Tantsur proposed openstack/ironic: Workflow documentation is now in infra-manual  https://review.openstack.org/13932915:55
jrollhuh15:56
jrolldlaube: feel free to paste that somewhere, I'm about to head to my office but can look later15:56
dlaubeok, thanks jroll15:56
*** killer_prince has joined #openstack-ironic15:57
*** killer_prince is now known as lazy_prince15:57
openstackgerritLucas Alvares Gomes proposed openstack/ironic-specs: Root device hints  https://review.openstack.org/13872915:58
* NobodyCam likes the cat 5 cable out of the plane16:03
NobodyCamlol16:03
* dtantsur brb16:06
*** rushiagr_away is now known as rushiagr16:07
*** ichi-the-one has quit IRC16:08
*** romcheg has joined #openstack-ironic16:10
*** killer_prince has joined #openstack-ironic16:13
*** lazy_prince has quit IRC16:14
*** killer_prince is now known as lazy_prince16:14
openstackgerritMerged stackforge/ironic-discoverd: Return serialized node to the ramdisk  https://review.openstack.org/13737416:17
*** smoriya has joined #openstack-ironic16:20
*** ChuckC_ has joined #openstack-ironic16:24
* lucasagomes wants to use IPA and not have to deal with bash anymore :(16:28
*** ChuckC_ is now known as ChuckC16:28
* NobodyCam switches lucasagomes' shell to csh :-p16:29
lucasagomesNobodyCam, it's the deploy ramdisk :(16:29
lucasagomessh is it16:29
NobodyCam:-p16:29
JayFlucasagomes: TBH it probably wouldn't be terribly difficult to implement a "pxe ramdisk" in the style of IPA's (embedded container in coreos)16:31
JayFlucasagomes: if you'd be interested in that, I can upstream some work I did downstream to split out the build method from the build content (i.e. downstream we have >1 ramdisk so I have kinda a "ramdisk builder" library downstream)16:31
lucasagomesJayF, yeah it won't and we want to do that16:31
lucasagomesthing is, I got some priorities and can't work on that in the moment16:32
lucasagomesbut I will!16:32
JayFSo general question along those lines16:32
lucasagomesJayF, I would love to have only one ramdisk for everything16:32
JayFI was going to put up a spec proposing we create an ironic-ramdisks (or something similar) repository for 1) Ironic DIB elements and 2) IPA ramdisk builders not using DIB16:32
NobodyCamlucasagomes: +++16:32
JayFlucasagomes: I'm mildly -1 to that, but primarily because it means both images will bloat larger than they need to only support one driver16:33
lucasagomesJayF, right, we can make it modular right?16:34
lucasagomeslike in the DIB case we can have 1 element per module16:34
lucasagomes(and a base one with the core of IPA)16:34
lucasagomesso you can customize the ramdisk at build time16:34
lucasagomesand include whatever modules you want to work in ur env16:35
JayFI have something like that-ish setup downstream for our builder16:35
JayFwe use image inheritance stuff from docker to make it easy though16:35
lucasagomesyeah so in ur case onmetal will be a module for IPA16:35
lucasagomesJayF, nice16:35
lucasagomesyeah something like that16:36
JayFbuild IPA container, from the IPA container, add "vendor" stuff from a separate dockerfile, then export the unified image as a tarball to inject into the coreos thing16:36
JayFyeah, I'm totally going to upstream my refactor of the builds16:36
lucasagomes+116:36
lucasagomesit sounds good, and very flexible16:36
JayFlucasagomes: would you be +1 to my idea of a separate ramdisk/elements repo for Ironic?16:36
*** igordcard has quit IRC16:36
JayFI'd love to get the IPA builder out of the repo (we build IPA out of a separate repo downstream, and it's actually simpler) and break ironic's dependence on tripleo-image-elements16:37
lucasagomesJayF, +1 sounds good16:37
JayFokay, I might find time today or this weekend to put a spec up about it16:38
JayFit's not a lot of work but I think would likely have a lot of benefit16:38
*** smoriya has quit IRC16:38
lucasagomesright, next week I will be a bit away (holidays) but add my name on the reviewers list so I'm aware of it once I'm back16:38
* JayF is here except for the week of christmas16:39
lucasagomesJayF, definitely. Man having to deal with bash to add something to the default ramdisk is painful :(16:40
JayFI really wish I could just upstream our ramdisk builder repo, as I think I have all the pieces in place to make a pxe ramdisk trivial there16:40
JayFbut I can't because it has a crapton of NDA hardware utils in it16:40
NobodyCamlucasagomes: install python in the ram disk and be done with it16:41
NobodyCamlol16:41
dlaubeme too JayF :)16:41
lucasagomesNobodyCam, can I ? :P16:41
NobodyCamit's prob there16:41
lucasagomesNobodyCam, nah I don't think so16:41
lucasagomesbut we should heh16:41
*** Marga_ has quit IRC16:43
*** igordcard has joined #openstack-ironic16:45
*** Masahiro has joined #openstack-ironic16:46
*** mjturek has quit IRC16:48
*** Masahiro has quit IRC16:50
*** yjiang5 is now known as yjiang5_away17:09
devanandamornin, all17:10
NobodyCamgood morning devananda17:10
erwan_tafheya devananda17:11
erwan_tafdevananda: sounds like you made progress with the FSM idea17:11
openstackgerritJim Rollenhagen proposed openstack/ironic: Add network provider interface and implementations  https://review.openstack.org/13968717:14
devanandaerwan_taf: to support current users, we need a migration path. to think about a migration path, we need a starting point ...17:14
jrollcohn: ^^ check that patch out, it's a wip but just need to shovel more code17:16
jrolldevananda: ^17:16
erwan_tafvictor_lowther: I think we have a serious issue with cleaning & rebuild RAIDS in clean17:17
erwan_tafvictor_lowther: that will prevent recognizing raids to select the proper boot device17:17
victor_lowthererwan_taf: how so?17:17
victor_lowtherNm, will reply to comment.17:19
*** romcheg1 has joined #openstack-ironic17:19
*** romcheg has quit IRC17:20
erwan_tafI mean if someone creates a RAID with the future API, they will get something in return like a UUID or similar17:20
erwan_tafif we do rebuild it, we're done :/17:20
jrollwhat?17:21
jrollwe can likely make that work17:21
jrollthough I'm not sure what that's supposed to mean17:21
jrollif the node has a raid, it cleans it and rebuilds, it will store that uuid17:21
*** andreykurilin_ has quit IRC17:24
*** mikedillion has quit IRC17:24
*** mikedillion has joined #openstack-ironic17:25
*** shakamunyi has quit IRC17:25
NobodyCamerwan_taf: I'm not sure I see the logic error, Are you thinking of creating more than one raid volume?17:27
devanandajroll: why is neutron_plugin.py still ABC? just a copy-paste thing, or ?17:31
*** romcheg has joined #openstack-ironic17:31
*** david-lyle has joined #openstack-ironic17:31
jrolldevananda: I clearly haven't implemented that yet :P17:31
erwan_tafNobodyCam: for sure I could need that17:31
devanandaright17:31
jrolldevananda: train ride ended, I pushed :P17:32
openstackgerritDmitry Tantsur proposed stackforge/ironic-discoverd: Extend node_cache.pop_node() result to be a structure  https://review.openstack.org/13909617:32
victor_lowthererwan_taf: for MD and DM based software RAID, you should name the array. Ditto for hardware based raid arrays that use DDF metadata.17:34
*** foexle has quit IRC17:35
*** romcheg has quit IRC17:35
devanandavictor_lowther: just to be clear, ironic doesn't deal with software raid ...17:35
victor_lowtherdevananda: I know.17:36
victor_lowtherWhich is why the UUID question struck me as odd.17:37
*** Marga_ has joined #openstack-ironic17:38
*** athomas has quit IRC17:39
*** Marga_ has quit IRC17:39
*** Marga_ has joined #openstack-ironic17:40
*** jmankov has quit IRC17:40
*** mikedillion has quit IRC17:40
*** jmankov has joined #openstack-ironic17:40
*** Marga_ has quit IRC17:40
*** derekh has quit IRC17:40
*** Marga_ has joined #openstack-ironic17:41
erwan_tafvictor_lowther: I mean at some point, some will ask to create RAID volumes right ?17:41
erwan_tafvictor_lowther: and you'll have to "point it" at the bootable disk17:41
erwan_tafbut you could also have other raids volumes and DAS disks17:41
*** Marga_ has quit IRC17:41
erwan_tafif you do delete it and recreate it in clean, how do you "point" this raid as the disk to be installed ?17:42
*** Marga_ has joined #openstack-ironic17:42
*** Marga_ has quit IRC17:43
*** Marga_ has joined #openstack-ironic17:44
*** Marga_ has quit IRC17:44
*** jistr has quit IRC17:45
victor_lowtherwhat, you mean like megacli -adpBootDrive -set -l0 -a017:45
*** Marga_ has joined #openstack-ironic17:45
victor_lowtherUp to whatever RAID driver you are using17:45
*** Marga_ has quit IRC17:45
victor_lowtherbut that is part of the RAID config that needs to be saved and restored.17:45
victor_lowtheror recreated17:46
*** Marga_ has joined #openstack-ironic17:46
*** Marga_ has quit IRC17:47
NobodyCammost raid controllers allow for a label..17:47
*** Marga_ has joined #openstack-ironic17:48
jrollI mean17:48
jrollerwan_taf: we're building that into ironic right now17:48
NobodyCamlucasagomes: can you add disk labels to https://review.openstack.org/#/c/138729/17:48
jrollhints for the disks17:48
*** Marga_ has quit IRC17:49
NobodyCamerwan_taf: see review ^^^17:49
jrolland the cleaning stuff could automatically set that, if it's rebuilding a raid17:49
*** Marga_ has joined #openstack-ironic17:50
openstackgerritMerged stackforge/ironic-discoverd: Extend node_cache.pop_node() result to be a structure  https://review.openstack.org/13909617:50
NobodyCamjroll: kinda like config drive partition is along the lings I was thinking17:50
NobodyCamlines even17:50
jrollyeah17:51
lucasagomesNobodyCam, disk label? yes17:51
NobodyCam:)17:51
* lucasagomes gotta catch up with the scroll back17:51
devanandaso i'll preface by saying I dont have the RAID config spec paged into memory right now, but ...17:52
*** Marga_ has quit IRC17:52
devanandais that direction taking us further away from managing cattle, and closer to managing pets?17:53
*** Marga_ has joined #openstack-ironic17:53
jrollI mean, you could have a bunch of cattle with identical raid configs17:53
devanandavictor_lowther: also, I like zehicle's idea of mayflies. describes the HPC use-case well17:53
devanandajroll: sure. in which case, that should be defined on the flavor17:53
jrollsure17:54
devanandajroll: if we put RAID config data per-node, then we're managing pets17:54
lucasagomesNobodyCam, apart from the label it lgty?17:54
jrollbut you want to do it before provision time17:54
* lucasagomes is working on it already http://paste.openstack.org/show/145628/17:55
NobodyCamlucasagomes: yea,17:55
victor_lowtheras far as hardware RAID goes, I did a pretty deep dive on what you can see about a RAID volume using megacli17:55
victor_lowther(in https://github.com/opencrowbar/hardware/blob/develop/raid/crowbar_engine/barclamp_raid/app/models/barclamp_raid/lsi_megacli.rb)17:55
*** krtaylor has quit IRC17:56
lucasagomesNobodyCam, ta much, I will update it17:56
victor_lowtherand it does not expose a UUID for RAID volumes17:56
lucasagomesNobodyCam, just one thing... labels have to deal with filesystem :/17:57
victor_lowthermuch less the ability to set one.17:57
lucasagomesI'm not looking at that level actually17:57
lucasagomeswhy we need label ther?17:57
lucasagomesthere*17:57
victor_lowtherwhich is why I just gave the RAID volume a name17:57
jrollbecause it's useful :P17:57
lucasagomesjroll, right, but we are going to overwrite it with the new image17:58
lucasagomesand the label will be gone17:58
jrolloh, true17:58
lucasagomes(unless the image has the same label)17:58
lucasagomesbut still17:58
jrollmaybe it's not useful17:58
lucasagomesyeah, I think it's a layer below the one I'm using17:58
lucasagomescause this is to find the device to deploy the image onto17:58
jrollright, you want to deal with disks, not partitions17:59
lucasagomesnot find device for other usages17:59
lucasagomesyes17:59
lucasagomesI filter out partitions already17:59
lucasagomesas well as cdrom devices, ram etc17:59
jrollyeah, I agree, don't need labels17:59
lucasagomesloop devices as well17:59
jrollI wonder how much you're doing that IPA already does17:59
lucasagomes:/17:59
* lucasagomes too17:59
lucasagomesI will do it in IPA as well17:59
jrollhttps://github.com/openstack/ironic-python-agent/blob/master/ironic_python_agent/hardware.py#L226-26017:59
lucasagomesbut u know priorities in the moment :(17:59
jrollnot everything, but would be easy to add walking /sys or whatever18:00
lucasagomesah yeah I use lsblk for some18:00
lucasagomesjroll, +1 yeah that's what I'm doing18:00
jrolllike, IPA knows about all the hardware18:00
lucasagomesfor vendor, model size etc18:00
lucasagomesI look at sysfs18:00
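A rough sketch of the filtering lucasagomes describes above: enumerate /sys/block and drop the virtual and optical entries (loop, ram, sr, etc.) so only whole disks remain as root-device candidates. The prefix list and the fake-directory demo are illustrative, not IPA's or the spec's actual code:

```python
import os
import tempfile

# Device names starting with these prefixes are not whole physical disks
# (loop devices, ramdisks, optical drives, floppies, device-mapper nodes).
SKIP_PREFIXES = ('loop', 'ram', 'sr', 'fd', 'dm-')

def candidate_disks(sys_block='/sys/block'):
    """Return device names under sys_block that look like whole disks."""
    return sorted(name for name in os.listdir(sys_block)
                  if not name.startswith(SKIP_PREFIXES))

# Demonstrate against a fake /sys/block so the sketch runs anywhere.
fake = tempfile.mkdtemp()
for dev in ('sda', 'sdb', 'loop0', 'ram0', 'sr0'):
    os.mkdir(os.path.join(fake, dev))
assert candidate_disks(fake) == ['sda', 'sdb']
```

Per-device attributes like vendor, model and size would then come from files under each /sys/block/&lt;dev&gt;/ entry, as discussed.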
jrolllet's just kill the iscsi ramdisk, please18:00
NobodyCamlucasagomes: for HP raid cards I can: "(Optional) Enter a label of no more than 15 characters to identify the array."18:00
lucasagomes* lucasagomes wants to use IPA and not have to deal with bash anymore :(18:00
jrollI can't imagine doing all of this in bash18:00
jrollyeah18:00
jrolljust... do it18:01
lucasagomesjroll, I was talking with JayF18:01
JayFipa-deploy-driver=pxe # default to agent18:01
JayFdooet18:01
jrollright18:01
NobodyCamthat is with out a file system18:01
jrollbut18:01
lucasagomesjroll, I can't, I need it asap18:01
JayFif the pxe driver actually uses the agent itself18:01
JayFthat's awesome18:01
lucasagomesso I will have to extend a bit the deploy ramdisk first, then I can jump on IPA18:01
jrollJayF: that doesn't actually help you, this doesn't get hardware detection in python18:01
* dtantsur is sick of doing discointrospecinspection in bash as well18:01
jrolllucasagomes: what do you need that's not in IPA?18:02
jrollwhere you can't do this asap in IPA18:02
*** rwsu has joined #openstack-ironic18:02
lucasagomesjroll, I need the pxe_* driver to use IPA :)18:02
jrollright, so you need partition images and boot from pxe18:02
lucasagomesyup pretty much18:03
lucasagomesand ofc gate stuff18:03
jrollall we really need to do for that is rip partition code out into a library18:03
jrolland use it in both18:03
lucasagomes+118:03
jrollright, gate stuff is there, just needs approval18:03
lucasagomesactually, take a look at blivert18:03
jrollaha, yeah18:04
lucasagomesjroll, http://fedoraproject.org/wiki/Blivet18:04
jrollyeah, looking18:04
lucasagomesit's the anaconda stuff now in a library18:04
lucasagomesit seems pretty good, it can do things like creating EFI partitions and all18:05
victor_lowtherNobodyCam: That basically reflects what everyone else supports via the DDF18:05
lucasagomesJayF, if you get a time to look at it as well18:05
*** subscope has quit IRC18:06
*** shakamunyi has joined #openstack-ironic18:07
NobodyCamvictor_lowther: I suspect thats how its doing that.. looking over the DDF stuff18:07
lucasagomesalright folks18:07
lucasagomesI will call it a day, I still have to finish my packing18:07
lucasagomeshave a good night, a great weekend18:08
lucasagomesand I see y'all after next week :D18:08
*** achanda has joined #openstack-ironic18:08
victor_lowtherthe DDF spec is some interesting reading.18:08
victor_lowtherI never knew there were so many ways to lay a RAID 5 onto the disks.18:08
NobodyCamhave a good time off lucasagomes18:09
lucasagomesNobodyCam, ta much!18:09
*** lucasagomes is now known as lucas-packing18:13
*** krtaylor has joined #openstack-ironic18:13
dtantsurcalling it a day, have a nice weekend!18:13
dtantsurlucas-packing, have a safe trip and enjoy your PTO18:13
*** dtantsur is now known as dtantsur|afk18:13
*** pelix has quit IRC18:14
NobodyCamhave a good weekend dtantsur|afk18:14
openstackgerritMerged openstack/ironic-specs: Workflow documentation is now in infra-manual  https://review.openstack.org/13933118:16
*** ChuckC has quit IRC18:16
*** achanda has quit IRC18:17
*** achanda has joined #openstack-ironic18:18
*** pensu has joined #openstack-ironic18:20
*** achanda has quit IRC18:22
adam_gbauzas, hey, still around?18:23
openstackgerritMerged openstack/ironic: Workflow documentation is now in infra-manual  https://review.openstack.org/13932918:30
*** harlowja_away is now known as harlowja_18:31
openstackgerritDmitry Tantsur proposed openstack/ironic-specs: Introduce driver capabilities  https://review.openstack.org/12892718:32
*** jerryz has quit IRC18:33
*** Masahiro has joined #openstack-ironic18:35
*** chuckC_ has joined #openstack-ironic18:38
*** Masahiro has quit IRC18:39
*** shakamunyi has quit IRC18:44
*** spandhe has joined #openstack-ironic18:44
NobodyCambrb18:46
*** shakamunyi has joined #openstack-ironic18:46
openstackgerritJay Faulkner proposed openstack/ironic-specs: Exposing Hardware Capabilities  https://review.openstack.org/13127218:47
JayFdevananda: ^18:47
JayFIf others want to take a look, that's just a backlog spec, and it'd be nice to get it in ^18:47
*** shakamunyi has quit IRC18:48
devananda+2'd18:48
adam_ghas anyone seen conductor stall out on startup like this? http://logs.openstack.org/94/138294/7/check/check-tempest-dsvm-ironic-pxe_ssh/e3124f6/logs/screen-ir-cond.txt.gz18:49
devanandadtantsur's driver capabilities spec is now also backlog'd, and the two are fairly tightly related18:49
devanandawe ought to land both of them, IMO18:49
*** ndipanoff has quit IRC18:50
devanandaadam_g: that file looks truncated18:50
*** shakamunyi has joined #openstack-ironic18:52
adam_gdevananda, hmm18:52
adam_gdevananda, looking at a similiar scheduling issue as yesterday--where nova finds some enrolled nodes but they've never left power_state=None and their resources get ignored, sounded like the conductor is never syncing that on startup and thats the log i found18:53
*** erwan_taf has quit IRC19:14
*** dprince_ has joined #openstack-ironic19:17
*** mikedillion has joined #openstack-ironic19:17
*** dprince has quit IRC19:18
*** Nisha has joined #openstack-ironic19:21
*** Nisha_away has joined #openstack-ironic19:24
*** Nisha has quit IRC19:26
NobodyCamif a spec lands in backlog does that basically mean it's approved? or will it have to be revoted on to be moved into the "current" cycle?19:26
*** igordcard has quit IRC19:33
*** igordcard has joined #openstack-ironic19:38
NobodyCambrb19:38
*** pensu has quit IRC19:43
*** marcoemorais has joined #openstack-ironic19:49
*** mjturek has joined #openstack-ironic19:50
*** shakamunyi has quit IRC19:54
*** mjturek has quit IRC19:54
*** alexpilotti has quit IRC19:55
*** shakamunyi has joined #openstack-ironic19:59
*** shakamunyi has quit IRC20:00
*** shakamunyi has joined #openstack-ironic20:01
*** Nisha_away has quit IRC20:03
devanandaNobodyCam: it means there is agreement with the direction, and we want to record that. but no one's working on it now, and it is not approved for *this* cycle20:09
devanandaNobodyCam: moving a spec out of backlog requires another round of reviews / approval20:09
dlaubeis there a good image out there that i can build with DIB that will boot up and drop me to a root shell?20:10
dlaubealternatively it would be sweet if I could set pxe deploy properties to get into single user or something20:10
dlaubedoes anything like that exist?20:11
devanandadlaube: I believe there's a kernel param you can pass to the DIB ramdisk image which does that20:13
devanandaor there was at one point20:13
devanandamm, rebooting. brb20:13
*** dprince_ has quit IRC20:14
*** dprince has joined #openstack-ironic20:14
dlaubehmm20:15
*** marcoemorais has quit IRC20:17
NobodyCamdevananda: TY20:18
*** Masahiro has joined #openstack-ironic20:23
*** Masahiro has quit IRC20:27
*** romcheg1 has quit IRC20:30
*** rushiagr is now known as rushiagr_away20:31
*** romcheg has joined #openstack-ironic20:34
jrolldlaube: if you embed ssh keys in an IPA ramdisk, you can ssh in and do things20:35
*** marcoemorais has joined #openstack-ironic20:36
dlaubethats also a problem, the IPA i've built kernel panics and the ones I pulled off http://tarballs.openstack.org/ironic-python-agent/coreos/files/  kernel panic too20:37
*** marcoemorais has quit IRC20:37
dlaubeI've set u'pxe_append_params': u'console=ttyS0'    but I dont see anything on serial console20:37
jrolloh, right20:37
jrollthis with a real server20:37
jroll?20:37
*** marcoemorais has joined #openstack-ironic20:37
dlaubeKVM over lan shows me *something* but I havent been able to catch it fly by and I dont have any backscroll20:38
dlaubeyeah real server20:38
jrollyou might try other params like console=ttyS120:38
jrollah ok20:38
jrollhmm20:38
*** igordcard has quit IRC20:38
jrollthat's interesting, I wonder if it's a coreos or kernel bug20:38
jrollif you can somehow get it logged, I'd love to try to help fix it, or pass it on to the coreos folks20:39
jrollany weird hardware or is the server pretty standard?20:39
dlaubepretty standard … everything is on-board nothing bleeding edge20:40
dlaubeno HW raid or anything20:40
jrollhuh, interesting20:42
jrolllet me grab the latest tarball and make sure that works20:42
jroll(it should)20:42
jrollJayF: https://review.openstack.org/#/c/134436/ landed!20:42
jrollor +A anyway20:43
dlaubethanks.. I cant imagine the auto build of the image is producing a bad image but I would appreciate the check20:43
jrolldlaube: right, it seems to be working, see the check-tempest-dsvm-ironic-agent_ssh-nv job on https://review.openstack.org/#/c/139602/20:43
dlaubedoes IPA work with the pxe_ipmitool driver?20:45
jrollno, only with agent_*20:45
dlaubeugghh I was afraid you would say that20:45
jrollI mean, that's what the agent deploy driver is for20:46
jrollpxe_ipmitool should really be called iscsi_ipmitool by the way20:46
dlaubewill agent_deploy driver use ipmi ?20:46
jrollit can, yes20:46
jrollironic in general composes deploy and power drivers (and other types)20:46
jrollthese all use the agent deploy driver https://github.com/openstack/ironic/blob/master/setup.cfg#L38-4120:47
jrolltwo of those have ipmi20:47
dlaubeahh, so it sounds like we should switch to using the agent_ipmitool20:47
PaulCzarso I'm real close here ...  I can pxeboot guests and they run through the initial install process but then fail because they can't resolve url for my ironic api20:48
PaulCzarI am passing through a correct dns server via dhcp options20:48
PaulCzarand if I steal the MAC into another VM and boot that and get the dhcp lease that the ironic node should have ... I can ping/curl/etc the api node url20:48
jrolldlaube: right, the deploy driver has nothing to do with the power driver :)20:48
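jroll's point about composition can be sketched as follows: a full ironic driver is just a bundle of independent interfaces (power, deploy, and others), so the same agent deploy interface can be paired with either an IPMI or an SSH power interface. The class names and strings here are illustrative, not ironic's real classes:

```python
# Illustrative power interfaces: same contract, different backends.
class IPMIPower:
    def reboot(self):
        return 'rebooted via ipmitool'

class SSHPower:
    def reboot(self):
        return 'rebooted via ssh/virsh'

# One deploy interface, shared by several composed drivers.
class AgentDeploy:
    def deploy(self):
        return 'image written via IPA (http + dd)'

class Driver:
    """A driver is just a bundle of interfaces."""
    def __init__(self, power, deploy):
        self.power = power
        self.deploy = deploy

# Roughly what names like agent_ipmitool / agent_ssh mean:
agent_ipmitool = Driver(IPMIPower(), AgentDeploy())
agent_ssh = Driver(SSHPower(), AgentDeploy())

# Same deploy behavior regardless of the power interface chosen.
assert agent_ipmitool.deploy.deploy() == agent_ssh.deploy.deploy()
```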
NobodyCamPaulCzar: ubuntu?20:48
PaulCzarNobodyCam: ya20:49
dlaubegot it20:49
NobodyCammaybe something with /etc/resolv.conf20:50
PaulCzarNobodyCam: is that something I can easily modify inside the image ?20:50
jrollPaulCzar: so, the ramdisk is failing, to be clear?20:50
PaulCzarjroll: appears so ...   'request IRONIC API to deploy image \n curl (6) Could not resolve host: openstack.example.org20:52
dlaubejroll: will agent_ipmitool use iscsi for deployment at all?20:52
jrolluhhh20:52
jrollPaulCzar: openstack.example.org?20:52
jrolldlaube: no20:52
PaulCzaryeah it's valid in my env20:52
PaulCzarour CI uses that for our dns20:52
PaulCzarand dnsmasq's dns server resolves it fine20:52
jrolldlaube: pxe_* uses iscsi, agent_* uses IPA and thus http/dd20:52
jrollPaulCzar: ok, yeah20:52
jrollseems like a diskimage-builder thing, seems awful weird20:53
dlaubejroll: ok thanks for clarifying20:53
jrollI'd probably bug #tripleo20:53
jrolldlaube: np20:53
PaulCzarjroll: somebody mentioned the other day that there's a cirros test image?  is that just the standard image?  if so what do I use for vmlinuz/ramdisk/etc ?20:57
jrollaweeks: you should look at https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:states,n,z20:57
jrollPaulCzar: it can be broken up into kernel/ramdisk, sec20:57
aweeksjroll: looking20:57
*** shakamunyi has quit IRC20:59
jrollPaulCzar: https://launchpad.net/cirros/+download21:00
jrollthe -uec images are split21:00
dlaubejroll: in order to use http + dd to deploy via the agent_ipmi driver, I dont need to use iPXE do I?21:03
*** mikedillion has quit IRC21:03
jrolldlaube: nope, agent does what it does regardless of how you boot it :)21:04
jrollactually, I don't know if there's ipxe support for the agent driver yet (which is a bit silly)21:04
dlaubeok cool21:05
dlaubety!21:05
*** romcheg has quit IRC21:05
jrollnp :)21:05
*** shakamunyi has joined #openstack-ironic21:06
JayFagent_ssh-src is voting on IPA \o/ https://review.openstack.org/#/c/134436/21:06
*** spandhe has quit IRC21:06
*** dprince has quit IRC21:08
*** ChuckC has joined #openstack-ironic21:10
PaulCzarjroll: do I need to untar the uec and add them individually ... or will ironic know how to handle the uec image ?21:12
jrollPaulCzar: I think untar and add manually21:13
PaulCzark, and are the kernel/ramdisk image IDs added to the glance image metadata different to the ones set in the ironic node options ( -i pxe_deploy_kernel= ) ?21:15
PaulCzaror can I use them for both ?21:15
jrollI have no idea, I've never done this before21:17
jrollI'm not entirely sure what you're asking21:17
jrolloh21:17
PaulCzarhaha nobody has it seems :D21:17
jrollso, upload the cirros k/r to glance21:17
jrolluse those ids as the metadata for the cirros root image21:17
*** marcoemorais has quit IRC21:17
jrollthe root image points at its own kernel and ramdisk, hopefully that makes sense21:17
*** marcoemorais has joined #openstack-ironic21:18
PaulCzarokay, so I shouldn't need the -i pxe_deploy_kernel=  -i pxe_deploy_ramdisk options in my ironic node-create command then ?21:18
jrollcorrect, ironic handles that21:19
PaulCzarright21:19
NobodyCambrb quick walkies21:23
openstackgerritMerged openstack/ironic-specs: Exposing Hardware Capabilities  https://review.openstack.org/13127221:23
*** romcheg has joined #openstack-ironic21:25
bauzasadam_g: hey I'm there21:29
*** ChuckC has quit IRC21:30
*** chuckC_ has quit IRC21:31
aweeksjroll: devananda: so, from my reading of the three FSM related changes out right now, we're not actually using the FSM to control how state transitions are handled.  As in, the FSM is being used to verify that a state transition is valid, but not to actually trigger the action as a result of the state transition.21:31
devanandaaweeks: baby steps .... :)21:32
aweeksheh, ko21:32
aweeks*ok21:32
devanandaaweeks: we aren't using *any* state machine today. but I think those three lay enough down that I can start using task_manager / fsm.py to do callbacks21:32
devanandathus the refactoring of eg, _do_node_deploy and so on21:33
aweeksyeah, that makes sense21:33
aweeksawesome21:33
devanandaif you feel like having a go at that, lemme know21:33
aweeksdevananda: this might be more of a philosophical question, but do you think that the conductor *is* a state machine (or really multiple state machines, one for each node), or that it simply is *implemented* as a state machine.  not sure if that makes sense21:35
*** dlaube has quit IRC21:36
aweeksmostly having to do with the api--is the contract that we are a state machine, and you tell us how to transition?21:36
devanandaoh21:36
aweeksor is the relationship between api calls and the state machine looser than that21:36
devanandaI'll rephrase - see if this is what you meant21:36
adam_gbauzas, hi, regarding that scheduling issue you saw. is that the first time you've hit that? ive been looking into some other similar weirdness nova's resource tracking with ironic21:37
devananda"ConductorManager is the state machine; it accepts input, determines what state to transition to, and then transitions to it"21:37
devanandavs21:37
bauzasadam_g: well, I just discovered the Ironic problem because of the Nova job which was failing on my patch21:38
devananda"ConductorManager tracks the state of each node in a logical state machine, but transitions are handled externally (eg, by the drivers)"21:38
devanandaaweeks: not sure my words are any clearer :(21:39
bauzasadam_g: indeed, the Nova RT is by design providing updates to the sched every 60 secs21:39
aweeksdevananda: no, I think we're on the same page21:39
aweeksI'm just curious which vision you have right now21:39
bauzasadam_g: so, the question I'm wondering about is why it takes more than 120 secs to deploy 3 ironic nodes21:39
adam_gbauzas, yeah, i looked into that specific failure case you saw and it looks like an issue on the ironic side (nova's RT is discovering nodes in the ironic inventory, but ironic's conductor never syncs them to a usable state)21:40
adam_gbauzas, one sec, let me pull up those logs again21:40
devanandaaweeks: #221:40
bauzasadam_g: oh ok21:40
aweekskk21:40
devanandaaweeks: logical state machine. Conductor instantiates one state machine for each node it manages, uses that object to validate events and initiate(*) transitions21:41
devanandaaweeks: (*) I need to add callbacks to actually do this21:41
aweeksdevananda: I guess another way to think about this is whether the API is more state oriented, or more transition oriented.21:41
devanandaaweeks: which api, to be precise?21:42
aweeksmy understanding is there are two apis involved:21:42
aweeks1. you would set the provision state in ironic api21:43
aweeks2. it would make an rpc call to the conductor21:43
aweeksthe first one is more state oriented (you tell it what state you want it in), the second one is more transition oriented (you would call do_deploy, or do_spawn, etc.)21:43
devanandaaweeks: hm. I think you've drawn that line in the wrong place21:44
devanandaaweeks: the RPC call is essentially just routing the API request to the appropriate conductor. it's not really different, though21:45
aweeks1 sec, let me double check the code21:45
devanandaaweeks: within the ConductorManager.do_node_deploy(), for example, there are multiple calls out to the driver, to utils, etc21:45
devanandaaweeks: each of those calls performs a transition. some are micro-state (eg, DEPLOYING -> DEPLOYDONE) and others are macro-state (AVAILABLE -> DEPLOYING)21:46
devanandaright now, the Conductor handles updating the node.*_state fields based on a) requests to start an action, and b) return values from other libraries (drivers, utils, etc)21:47
devanandaso, in that sense, yes, the ConductorManager today *IS* the state machine21:47
adam_gbauzas, so it shouldn't take more than 120 secs for nodes to register. it typically takes less, 2 minutes was generous to give a grenade nova-bm -> ironic time to migrate nodes into ironic. something went wrong on the ironic side and resources never ended up in nova21:47
devanandabut as such, not clearly defined. within the API service, there is some state checking (and it can get out-of-sync w.r.t. the conductor's state checking ....)21:48
aweeksgot it21:48
devanandaaweeks: and the state transitions are implicit within the conductor and driver code, not formally modelled in a class somewhere --- that is the biggest change my patch series does21:48
adam_gbauzas, nova will check ironic for resources at startup, and then update during periodic tasks. the devstack polling is there for the case that nodes aren't enrolled until after the initial startup resource update, and we need to wait for a subsequent periodic task21:49
devanandathough it's not complete yet. there are still places making implicit state checks (eg, the API service) which aren't using fsm.machine21:49
devananda** states.machine, i mean21:49
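The two roles discussed above — an FSM that merely *validates* transitions versus one that also *initiates* them through callbacks — can be sketched with a toy class (illustrative only, not ironic's actual states module; the state and event names are made up):

```python
class ToyFSM:
    """Validates events against a transition table and, when a callback
    is registered for the target state, initiates the action too."""

    def __init__(self, transitions, start):
        self.transitions = transitions   # {(state, event): new_state}
        self.state = start
        self.callbacks = {}              # new_state -> callable

    def add_callback(self, state, fn):
        self.callbacks[state] = fn

    def process_event(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            # the "validate only" role: reject invalid transitions
            raise ValueError("invalid transition: %s + %s" % key)
        self.state = self.transitions[key]
        cb = self.callbacks.get(self.state)
        if cb:
            cb()   # the "initiate" role: actually trigger the action

fsm = ToyFSM({("available", "deploy"): "deploying"}, start="available")
actions = []
fsm.add_callback("deploying", lambda: actions.append("start deploy"))
fsm.process_event("deploy")
```

With no callback registered, the class only checks validity, which is roughly where devananda says the current patch series stops.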
bauzasadam_g: k, see the problem21:49
adam_gbauzas, feel free to recheck, i'd like to get a bug filed and an e-r query in case this turns up again21:50
bauzasadam_g: I guess we can probably update the existing e-r query21:51
adam_gbauzas, this is a different bug21:52
adam_gbauzas, incidentally, the existing bug is still showing up but for other unknown RT reasons21:52
bauzasadam_g: what are your symptoms ?21:53
bauzasadam_g: I'm working on the RT/Scheduler interfaces in Nova so I can probably help21:53
aweeksdevananda: so, from my possibly uninformed perspective, I've been thinking about nodes in ironic as being controlled by a state machine, of which the nova state machine is a subset.  we have our own set of states, along with a view of state that is shown to nova.21:53
devanandaaweeks: right. that's actually inverted21:54
adam_gbauzas, cool! let me update Bug #1398128 with the details21:54
devanandaaweeks: logically speaking, set{ironic_states} intersects set{nova_states}21:55
devanandaaweeks: there are states that Nova can represent to its users which do not apply to Ironic (eg, migrating, snapshotting)21:55
bauzasadam_g: k, will look at logstash for the frequency21:55
devanandaaweeks: and there are states within Ironic which Nova does not model in any way (eg, discovering, cleaning, managed, or available)21:56
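The set relationship devananda describes is easy to check with Python sets; the state names below are illustrative picks from this discussion, not either project's full list:

```python
# Illustrative, partial state sets from the conversation above.
nova_states = {"active", "error", "migrating", "snapshotting"}
ironic_states = {"active", "error", "discovering", "cleaning",
                 "manageable", "available"}

shared = nova_states & ironic_states        # states both services can represent
nova_only = nova_states - ironic_states     # Nova-only, e.g. migrating
ironic_only = ironic_states - nova_states   # Ironic-only, e.g. cleaning
```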
*** lucas-packing has quit IRC21:56
devanandaaweeks: functionally, each service is modelling a different type of resource. Further, Ironic actually has no direct knowledge of what state the corresponding Nova instance is in21:57
aweeksdevananda: ok, that makes sense21:57
aweeksall we get is the nova calls through the virt driver21:57
devanandayup21:57
devanandawhich implement a very small set of primitives21:58
aweekswhich might implicitly tell us something21:58
aweekscoincidentally21:58
devanandayup21:58
devanandaeg, nova.virt.driver.spawn() implies that there is a Nova instance21:58
devanandaand we should save certain information in Ironic to indicate the association of that instance with that node21:59
aweeksyep21:59
devanandaso that Nova can find it again later21:59
*** marcoemorais has quit IRC22:01
*** marcoemorais has joined #openstack-ironic22:01
*** marcoemorais has quit IRC22:01
*** marcoemorais has joined #openstack-ironic22:01
*** marcoemorais has quit IRC22:02
*** marcoemorais has joined #openstack-ironic22:02
*** ryanpetrello_ has joined #openstack-ironic22:06
*** ryanpetrello has quit IRC22:06
*** ryanpetrello_ is now known as ryanpetrello22:06
*** igordcard has joined #openstack-ironic22:09
adam_gbauzas, okay, added some notes to https://bugs.launchpad.net/ironic/+bug/1398128  found the race as i was looking at the logs in more detail22:11
*** Masahiro has joined #openstack-ironic22:12
*** ryanpetrello has quit IRC22:14
bauzasadam_g: that's really weird22:15
bauzasadam_g: that's the same section of code in the RT22:15
bauzasadam_g: there is only a branch if the compute is present or not22:16
*** Masahiro has quit IRC22:17
bauzasadam_g: do you know how the Ironic driver reports stats to Nova ?22:17
bauzasadam_g: I basically know that 1 host can run multiple nodes22:17
bauzaswhich is not the same for other drivers22:18
adam_gbauzas, yeah, so it makes an api call to ironic to list nodes. each node is a nova hypervisor. each ironic node has properties associated with it describing mem/cpu/gb, which the driver maps to nova's resources22:18
adam_gbauzas, it looks like, when nova first finds some nodes in ironic, it sets the hypervisor count accordingly but doesn't update associated resources until the next periodic sync22:19
adam_gat least according to the symptoms, looking thru nova code now22:19
*** spandhe has joined #openstack-ironic22:24
adam_gbauzas, whats confusing me is how we're ending up with 0 resources here: http://logs.openstack.org/87/139687/1/check/check-tempest-dsvm-ironic-parallel-nv/55ce684/logs/screen-n-cpu.txt.gz#_2014-12-05_18_01_26_85222:27
bauzasadam_g: I should look at the driver's code in Nova...22:29
bauzasadam_g: that's 11.30pm here, could we catch up on Monday morning your time ?22:30
adam_gbauzas, sure! still early here, check back at that bug i'll update it with anything i find this afternoon22:30
bauzasadam_g: which TZ are you in ?22:30
adam_gbauzas, PST USA22:30
bauzasadam_g: I'm CET22:30
bauzasadam_g: ie. France TZ22:31
bauzasadam_g: ok, so I'll try to find some time to investigate on Monday22:31
adam_gbauzas, cool, have a good weekend22:31
NobodyCamoh thats new: error: Failed to start domain seed22:38
NobodyCamerror: Unable to add port vnet1 to OVS bridge brbm: Operation not permitted22:38
*** alexpilotti has joined #openstack-ironic22:49
*** r-daneel has quit IRC22:49
PaulCzarif I set the ironic-api to IP then I can actually boot an instance!   but it takes forever because it can't reach metadata .. is there a SNAT or similar trick to allow access to metadata on the 169. address ?22:50
*** andreykurilin_ has joined #openstack-ironic22:52
NobodyCamPaulCzar: TripleO does this: https://github.com/openstack/tripleo-image-elements/blob/master/elements/nova-ironic/os-refresh-config/configure.d/81-nat-metadata#L622:56
*** ChuckC has joined #openstack-ironic22:59
*** alexpilotti has quit IRC23:00
*** shakamunyi has quit IRC23:01
*** Haomeng has joined #openstack-ironic23:06
*** Haomeng|2 has quit IRC23:07
PaulCzarNobodyCam: is that on the guest ?23:13
NobodyCamPaulCzar: yep: os-refresh-config is baked into the image and run after deploy23:14
NobodyCamSpamapS: are there better words for os-refresh-config (see ^)23:15
*** alexpilotti has joined #openstack-ironic23:20
*** spandhe has left #openstack-ironic23:23
SpamapSNobodyCam: yeah, os-refresh-config is run by os-collect-config .. but that is irrelevant really.23:23
NobodyCam:)23:23
*** rwsu has quit IRC23:24
NobodyCamTy SpamapS23:25
SpamapSNobodyCam: btw, that is not on the guest.23:27
SpamapSNobodyCam: that is on the conductor23:27
* NobodyCam has a headache after spending way too much time troubleshooting swift post -m ‘Temp-URL-Key:unset’ vs swift post -m "Temp-URL-Key:unset"23:27
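The headache is easy to reproduce: shell double quotes delimit the argument, while curly quotes pasted from rich text are ordinary characters that travel into the header value. A quick check with Python's `shlex` (which follows POSIX shell quoting) shows the difference:

```python
import shlex

good = shlex.split('swift post -m "Temp-URL-Key:unset"')
bad = shlex.split('swift post -m ‘Temp-URL-Key:unset’')

# good[-1] is the clean header string; in bad[-1] the curly quotes
# survive into the argument, so swift receives a mangled header name.
```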
NobodyCamoh good catch23:27
SpamapSPaulCzar: ^^23:27
NobodyCamPaulCzar: see SpamapS comment ^^23:28
*** andreykurilin_ has quit IRC23:28
SpamapSPaulCzar: your instance should be getting a pushed route via DHCP23:30
SpamapSPaulCzar: we set a DHCP option in Neutron to tell your instance that 169.254.169.254 routes through $NETWORK_NODE23:30
NobodyCamSpamapS: i don't think PaulCzar is running TripleO scripts23:31
SpamapSPaulCzar: the NAT rule above just translates that to the local neutron-metadata-agent23:31
SpamapSNobodyCam: I thought that was what the ironic nova driver did when making ports.23:32
SpamapSI could be wrong, might be a config option or something.23:32
NobodyCami don't think we push routes23:32
NobodyCamthou imbw23:32
NobodyCamtoo23:32
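If the route really is pushed via Neutron (as SpamapS suspects, while allowing he may be wrong), it would be done with a port's `extra_dhcp_opts`; the option name below is dnsmasq's classless-static-route and the next-hop address is hypothetical:

```python
# Hedged sketch: attach a static route for the metadata IP to a Neutron
# port via extra_dhcp_opts (a real Neutron port attribute).
metadata_ip = "169.254.169.254"
network_node = "192.0.2.1"   # hypothetical $NETWORK_NODE address

port_update = {
    "port": {
        "extra_dhcp_opts": [
            {"opt_name": "classless-static-route",
             "opt_value": "%s/32,%s" % (metadata_ip, network_node)},
        ]
    }
}
```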
NobodyCamoh SpamapS side question you may just know the answer to23:33
*** marcoemorais has quit IRC23:33
NobodyCamshould os-refresh-config/migration.d scripts retry if they exit with other than 123:33
*** marcoemorais has joined #openstack-ironic23:34
NobodyCam*other than 023:34
SpamapSNobodyCam: o-r-c just explodes when that happens23:35
SpamapSNobodyCam: but os-collect-config will indeed retry over and over again.23:35
NobodyCamheheh think I found a bug. then lol23:35
NobodyCamocc reruns orc but then something in os-config-refresh/post-configure.d/99-refresh-completed fails23:37
NobodyCamdon't waste any thought on it.. it was a bad approach...23:38
*** igordcard has quit IRC23:42
SpamapSNobodyCam: That's intentional that it keeps running over and over until everything succeeds with the latest version of metadata.23:43
SpamapSNobodyCam: o-r-c scripts _MUST_ be idempotent.23:43
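Because os-collect-config re-runs os-refresh-config until every script succeeds, each script must be safe to execute repeatedly. A minimal check-then-act sketch in Python (the marker path and function name are hypothetical):

```python
import os

def migrate_once(marker="/var/lib/example/.migrated"):  # hypothetical path
    """Idempotent o-r-c-style step: do the work only the first time,
    succeed quietly on every later run."""
    if os.path.exists(marker):
        return "already done"   # re-run: no side effects, exit success
    # ... perform the one-time migration work here ...
    os.makedirs(os.path.dirname(marker), exist_ok=True)
    with open(marker, "w") as f:
        f.write("done\n")
    return "migrated"
```

A script written this way can fail partway, be retried by os-collect-config, and converge — which is exactly the property NobodyCam's paste below is missing.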
* jroll working on https://bugs.launchpad.net/nova/+bug/139983023:48
NobodyCamSpamapS: see http://paste.openstack.org/show/RD3Wi5Iyu7PsNDMKte6z/23:48
NobodyCamjroll: awesome TY23:49
jrollNobodyCam: pretty nasty bug :/23:49
NobodyCam:(23:49
NobodyCamSpamapS: as a result of what that paste shows the migration.d script never runs again23:50
SpamapSNobodyCam: that's unfortunate, but it is part of the fact that the signal failed.. which is a serious issue.23:52
NobodyCam:) it was a bad approach23:53
adam_gif conductor is blocking on 'Attempting to reserve node 5', is there an easy way to find out what task has that node locked?23:55
NobodyCamgrep the log for the nodes uuid23:56
adam_gNobodyCam, not much to grep for, http://logs.openstack.org/94/138294/7/check/check-tempest-dsvm-ironic-pxe_ssh/e3124f6/logs/screen-ir-cond.txt.gz23:57
adam_ga different log than what im working with, but the same issue23:57
devanandaadam_g: what's the issue?23:58
devanandaadam_g: I don't see any error23:58
adam_gdevananda, its the that log we saw yesterday ^^, seems to be related to the resource tracking/scheduling issues that are popping up23:58
adam_gnova's node.list/get returns all of the nodes, but none of them have a power state synced because conductor is stuck waiting for a reservation23:59
adam_gwithout a power state, nova skips them for scheduling23:59
devanandagoing through the other logs in that run ... this looks interesting23:59
devananda2014-12-05 10:11:38.819 31500 INFO urllib3.connectionpool [-] Resetting dropped connection: 127.0.0.123:59
devanandain ir-api23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!