Wednesday, 2014-12-03

*** naohirot has joined #openstack-ironic00:00
naohirotgood morning ironic!00:00
NobodyCammorning marios00:00
NobodyCamgah00:00
NobodyCammorning naohirot00:01
naohirotNobodyCam: Hi, thank you for the approval :-)00:01
NobodyCamhuh00:01
NobodyCam:-p00:01
NobodyCamthank you00:02
naohirotNobodyCam: so is it okay to change the status to "approval" and the delivery to "started" ?00:03
*** clif has joined #openstack-ironic00:03
*** clif is now known as clif_h00:03
naohirotNobodyCam: I'm asking about https://blueprints.launchpad.net/ironic00:03
JayFnaohirot: that's something devananda has to do once he sees the spec is approved00:04
NobodyCamhe has a tool for that00:04
NobodyCamat least I think he does00:04
jrollyeah, spec2bp.py or something00:04
jrollttx wrote it00:05
naohirotJayF: good evening00:05
*** dlaube has quit IRC00:05
NobodyCamhe may have a set time to run that.. I'm not sure00:05
naohirotJayF: NobodyCam: Okay I understood, thanks.00:05
NobodyCamdevananda: around still?00:05
jrollI mean, blueprint status isn't actually that important, as a developer00:06
naohirotjroll: Hi00:06
jrollit only really matters to release folks00:06
jrollhiya naohirot :)00:06
NobodyCamya naohirot feel free to start00:06
JayFhey guys, so clif_h is a new developer working with us on Rackspace OnMetal and as such will be working as well on Ironic :) everyone treat him nicely00:07
naohirotNobodyCam: yes, I'll ask a lot of questions maybe once I start coding :-)00:07
JayFand NobodyCam this means no more [^j|J*]00:07
jrolllolol00:07
jrolland by "nicely", we mean throw a few low-hanging busses at him :)00:08
* naohirot started meeting00:08
clif_hlow-hanging fruit you mean00:08
jrollbusses, too00:09
JayFbeep beep00:09
jrollhttp://31.media.tumblr.com/tumblr_lz1lwc25ox1rnua94o1_500.gif00:09
jrolland, just realized I forgot to take care of my latest bus yesterday :|00:10
JayFwe'll have to find a bigger, scarier bus to throw you under as punishment00:10
NobodyCamlol00:10
jrollnooooooooo00:10
NobodyCamHi ya clif_h00:10
clif_hNobodyCam: hello00:11
NobodyCamJayF: clif_h: https://bugs.launchpad.net/ironic/+bugs?field.tag=low-hanging-fruit00:12
rloojroll: maybe clif_h can fix that set-maintenance-mode-off-via-node-update-should-reset-maintenance-reason00:19
jrollok, weekly status report finally sent00:21
jrollrloo: sure, why not00:21
rlooclif_h: if you have any questions, just ask jroll ;)00:21
* jroll runs00:21
NobodyCamnight jroll00:22
jrolllol00:22
jrollI'm not leaving00:22
jrolljust running away from ruby00:22
*** ryanpetrello has quit IRC00:22
*** yuanying has quit IRC00:23
NobodyCamlol00:23
*** yuanying has joined #openstack-ironic00:23
devanandaNobodyCam: pong00:24
*** yuanying has quit IRC00:24
*** yuanying has joined #openstack-ironic00:27
*** smoriya has joined #openstack-ironic00:27
NobodyCamdevananda: ping00:28
devanandapong00:28
NobodyCamlol00:28
jroll-.-00:29
jrolldevananda: the ping earlier I believe was asking about updating blueprint statuses00:29
NobodyCamoh00:29
devanandaoh00:29
NobodyCamthat00:29
jrolllol, I should have kept my mouth shut and watched this keep going back and forth00:30
NobodyCamjroll: https://www.youtube.com/watch?v=IhJQp-q1Y1s00:31
*** ryanpetrello has joined #openstack-ironic00:31
jrollhahaha00:31
NobodyCamdevananda: I was just checking if there was a set time you ran the blueprint update script00:32
devanandanope00:32
devanandajust whenever I'm reminded to00:32
NobodyCamack :)00:33
NobodyCamnaohirot: ^^^^00:33
NobodyCamlol00:33
devanandanaohirot: please update the delivery status as you go. I will not necessarily know when the work is completed00:33
naohirotNobodyCam: yes00:33
naohirotdevananda: good evening00:33
devanandanaohirot: since kilo-1 is on Dec 18, I targeted irmc power driver to the next milestone, kilo-200:35
naohirotdevananda: Okay, I'll update the delivery status, thanks00:35
devanandanaohirot: if you think the code will be ready this week, let me know and I can re-target to kilo0100:35
naohirotdevananda: I'll work toward kilo-2 since I haven't started coding :-)00:36
devanandasounds good00:37
naohirotdevananda: thanks00:38
*** Masahiro has joined #openstack-ironic00:46
*** Masahiro has quit IRC00:50
*** ryanpetrello has quit IRC00:52
*** spandhe has quit IRC01:02
*** kurtrao has joined #openstack-ironic01:07
*** Masahiro has joined #openstack-ironic01:08
*** yuanying has quit IRC01:08
*** yuanying has joined #openstack-ironic01:09
*** yuanying has quit IRC01:15
*** yuanying has joined #openstack-ironic01:16
*** yuanying has quit IRC01:17
*** yuanying has joined #openstack-ironic01:19
*** rloo has quit IRC01:22
lintanHaomeng:Hi01:25
*** chenglch has joined #openstack-ironic01:31
*** yuanying has quit IRC01:33
*** ryanpetrello has joined #openstack-ironic01:34
*** yuanying has joined #openstack-ironic01:38
*** yuanying has quit IRC01:40
*** killer_prince is now known as lazy_prince01:40
*** ryanpetrello has quit IRC01:48
*** yuanying has joined #openstack-ironic01:50
Haomenglintan: good morning:)01:51
*** hemna__ has quit IRC01:53
*** nosnos has joined #openstack-ironic01:56
*** r-daneel has quit IRC02:03
*** Marga_ has quit IRC02:11
*** Masahiro has quit IRC02:18
*** Masahiro has joined #openstack-ironic02:21
*** hemna__ has joined #openstack-ironic02:22
*** marcoemorais has quit IRC02:24
*** ryanpetrello has joined #openstack-ironic02:26
openstackgerritArun S A G proposed openstack/python-ironicclient: Add option to specify node uuid in node-create subcommand  https://review.openstack.org/13213702:30
*** ryanpetrello has quit IRC02:31
*** rushiagr_away is now known as rushiagr02:42
*** Masahiro has quit IRC02:52
*** Masahiro has joined #openstack-ironic02:53
*** hemna__ has quit IRC02:55
*** Masahiro has quit IRC02:59
*** ryanpetrello has joined #openstack-ironic03:02
*** ryanpetrello has quit IRC03:12
*** Masahiro has joined #openstack-ironic03:22
*** ramineni has joined #openstack-ironic03:22
*** rushiagr is now known as rushiagr_away03:27
*** naohirot has quit IRC03:27
*** ryanpetrello has joined #openstack-ironic03:31
*** enterprisedc has quit IRC03:33
*** enterprisedc has joined #openstack-ironic03:33
*** nosnos has quit IRC03:39
*** Nisha has joined #openstack-ironic03:40
*** achanda has joined #openstack-ironic03:44
*** harlowja_ is now known as harlowja_away03:47
*** Haomeng|2 has joined #openstack-ironic03:48
*** Haomeng has quit IRC03:48
*** rushiagr_away is now known as rushiagr03:53
openstackgerritNisha Agarwal proposed openstack/ironic-specs: uefi support for agent-ilo driver  https://review.openstack.org/13702404:02
*** naohirot has joined #openstack-ironic04:04
*** pensu has joined #openstack-ironic04:05
*** yuanying_ has joined #openstack-ironic04:14
*** pensu has quit IRC04:17
*** yuanying has quit IRC04:17
*** nosnos has joined #openstack-ironic04:19
*** chenglch|2 has joined #openstack-ironic04:20
*** chenglch has quit IRC04:23
*** rameshg87 has joined #openstack-ironic04:23
openstackgerritShivanand Tendulker proposed openstack/ironic-specs: Ironic Management Interfaces to support UEFI Secure Boot  https://review.openstack.org/13584504:27
*** lazy_prince is now known as killer_prince04:39
openstackgerritNisha Agarwal proposed openstack/ironic-specs: Discover node properties using new CLI node-introspect  https://review.openstack.org/10095104:41
*** achanda has quit IRC04:43
openstackgerritNisha Agarwal proposed openstack/ironic-specs: Discover node properties for iLO drivers  https://review.openstack.org/10300704:43
*** achanda has joined #openstack-ironic04:43
*** achanda has quit IRC04:46
*** achanda_ has joined #openstack-ironic04:46
*** ryanpetrello has quit IRC04:47
zer0c00lHaomeng|2: Hi, i see your comments on the test case https://review.openstack.org/13213705:04
*** Kamilion has quit IRC05:04
Haomeng|2zer0c00l: hi, let me check again05:04
zer0c00li guess you want me to write testcase for an invalid uuid05:04
zer0c00lor may be am i asking the wrong person? :)05:05
Haomeng|2zer0c00l: yes, it is better that we have more cases to cover the new option05:05
Haomeng|2zer0c00l: such as if we input an invalid-format uuid, what is the expected result05:06
Haomeng|2zer0c00l: and for the result verification, that is optional I think, depends on you:)05:06
*** pensu has joined #openstack-ironic05:06
*** Marga_ has joined #openstack-ironic05:07
zer0c00lAlso what is the second test case?05:07
zer0c00li don't understand the point number 205:08
Haomeng|2zer0c00l: 2nd one is - can we verify the result to see if the created node uuid is the same as the input argument - if this is difficult to test, we can ignore this comment05:08
Haomeng|2zer0c00l: so you can go ahead to add the 1st case05:09
*** killer_prince is now known as lazy_prince05:09
zer0c00lHow do i test the first case? The code doesn't validate the uuid05:10
zer0c00lmay be i need to add that validation stuff?05:10
Haomeng|2zer0c00l: let me check the api05:10
zer0c00lBasically anyone can send anything in --uuid05:10
Haomeng|2zer0c00l: ok, if our api does not support the uuid validation, that should not be your coverage05:12
zer0c00li am checking the ironic API05:12
Haomeng|2zer0c00l: and another point, can we add a test case to create a node with an existing uuid?05:12
Haomeng|2zer0c00l: me too05:12
Haomeng|2zer0c00l: we have such code to validate a uuid to get an instance - https://github.com/openstack/ironic/blob/b2121442204f5148c96a2016dc9f2721b48c1fcc/ironic/db/sqlalchemy/api.py#L29005:14
*** alexiz has quit IRC05:15
Haomeng|2zer0c00l: but for node_create call, looks like we have no such validation - https://github.com/openstack/ironic/blob/b2121442204f5148c96a2016dc9f2721b48c1fcc/ironic/db/sqlalchemy/api.py#L25305:15
Haomeng|2zer0c00l: so can we add the validation for the input uuid, what do you think?05:16
zer0c00lyes we can do that05:17
zer0c00lis there a way to validate uuid?05:17
zer0c00lon the client side?05:17
zer0c00luuid.UUID05:22
zer0c00lis_uuid_like05:22
*** achanda_ has quit IRC05:23
zer0c00lfrom ironic/openstack/common/uuidutils.py05:23
*** lazy_prince is now known as killer_prince05:29
*** Marga_ has quit IRC05:31
*** Marga_ has joined #openstack-ironic05:31
*** pcrews has quit IRC05:37
*** pcrews has joined #openstack-ironic05:45
*** achanda has joined #openstack-ironic05:51
Haomeng|2zer0c00l: let me check client code05:52
*** Masahiro has quit IRC05:52
Haomeng|2zer0c00l: yes05:53
*** Masahiro has joined #openstack-ironic05:53
mrdathanks zer0c00l - I didn't know about that func05:53
Haomeng|2zer0c00l: it is better that we validate the uuid format on the api side05:54
*** killer_prince is now known as lazy_prince05:55
Haomeng|2zer0c00l: so, the client will not have such logic, all the logic stays on the api side05:55
*** achanda has quit IRC05:55
Haomeng|2zer0c00l: and for api change, maybe we need another patch05:55
zer0c00lok05:57
openstackgerritNaohiro Tamura proposed openstack/python-ironicclient: Removed "py26" from tox.ini in python-ironicclient  https://review.openstack.org/13863405:57
Haomeng|2zer0c00l: so don't worry, those are just my comments:)05:57
Haomeng|2zer0c00l: and you can wait for more comments, and commit a new patch to address them05:57
*** Marga_ has quit IRC06:02
zer0c00lsure06:03
Haomeng|2zer0c00l: and another option is that we just verify uuid format from client side, for example - https://github.com/openstack/python-ironicclient/blob/0a6ec955c096fa19371b08f47716bb494a78ac5c/ironicclient/openstack/common/cliutils.py#L25306:03
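(A minimal sketch, in Python, of the client-side check being discussed. is_uuid_like() is the oslo-incubator helper zer0c00l found; the simplified copy and the validate wrapper below are illustrative, not the actual patch.)

    import uuid

    def is_uuid_like(val):
        # Simplified version of the oslo-incubator uuidutils helper:
        # accept any string that parses as a canonical-form UUID.
        try:
            return str(uuid.UUID(val)) == val.lower()
        except (TypeError, ValueError, AttributeError):
            return False

    def validate_node_uuid(val):
        # Hypothetical guard for the --uuid option of node-create.
        if not is_uuid_like(val):
            raise ValueError("'%s' is not a valid UUID" % val)
        return val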
*** Kamilion has joined #openstack-ironic06:05
*** ParsectiX has quit IRC06:08
*** Marga_ has joined #openstack-ironic06:08
*** Masahiro has quit IRC06:09
*** Masahiro has joined #openstack-ironic06:10
*** Nisha has quit IRC06:14
*** k4n0 has joined #openstack-ironic06:20
*** Marga_ has quit IRC06:31
*** nosnos has quit IRC06:38
*** mrda is now known as mrda-away06:40
*** nosnos has joined #openstack-ironic06:40
*** pcrews has quit IRC06:42
*** pcrews has joined #openstack-ironic06:43
openstackgerritjiangfei proposed openstack/ironic-python-agent: add deprecated_name to the config variables  https://review.openstack.org/13163206:57
*** Nisha has joined #openstack-ironic07:10
*** yuriyz has quit IRC07:16
*** yuriyz has joined #openstack-ironic07:16
*** pcrews has quit IRC07:20
openstackgerritTan Lin proposed openstack/ironic: Add AMT support with Ironic  https://review.openstack.org/13518407:25
*** Masahiro has quit IRC07:27
*** Masahiro has joined #openstack-ironic07:31
openstackgerritTan Lin proposed openstack/ironic: Add AMT support with Ironic  https://review.openstack.org/13518407:45
*** jerryz has joined #openstack-ironic07:49
*** openstackgerrit has quit IRC07:50
*** openstackgerrit has joined #openstack-ironic07:50
*** Masahiro has quit IRC07:59
*** Masahiro has joined #openstack-ironic08:02
*** Nisha has quit IRC08:10
*** smoriya has quit IRC08:33
*** jcoufal has joined #openstack-ironic08:38
*** Nisha has joined #openstack-ironic08:53
*** tatyana has joined #openstack-ironic09:01
*** andreykurilin_ has joined #openstack-ironic09:06
*** romcheg has joined #openstack-ironic09:28
*** lucasagomes has joined #openstack-ironic09:29
*** dlpartain has joined #openstack-ironic09:30
*** ndipanov_gone has quit IRC09:32
*** jcoufal has quit IRC09:40
lucasagomesjroll, devananda sorry catching up with (some of) the backlog from yesterday now09:48
*** ndipanov has joined #openstack-ironic09:48
lucasagomesjroll, I think that a machine that doesn't conform failing at deploy time is not the worst-case scenario, it's the best. If something changes - due to memory/disk hot spares, or even BIOS tweaks like disabling CPU cores - while it's sitting in AVAILABLE for some amount of time (could be months) and the machine gets deployed successfully, that's the worst-case scenario09:49
lucasagomesjroll, because now ur client asked for X,Y,Z and you gave him X,Y,A and that is hard to debug, you're basically giving someone a different thing than what he asked for and/or is paying for09:50
*** athomas has joined #openstack-ironic09:54
*** lazy_prince has quit IRC09:57
*** enterprisedc has quit IRC10:07
*** enterprisedc has joined #openstack-ironic10:07
rameshg87lucasagomes, hi10:10
lucasagomesrameshg87, hi10:10
rameshg87lucasagomes, i have a question - do we ask operator to enroll all the nics of a bare metal into ironic ?10:11
*** killer_prince has joined #openstack-ironic10:11
*** killer_prince is now known as lazy_prince10:11
lucasagomesrameshg87, usually yes, because ironic will create a PXE config for all of them. Unless you have control over which one exactly will boot, so it won't fail with a "Couldn't find a valid PXE config"-thing10:12
lucasagomesrameshg87, dtantsur|afk has seen a problem with multiple nics tho10:12
*** alexpilotti has joined #openstack-ironic10:12
rameshg87lucasagomes, if i have 4 nics on my bare metal and i enroll all of them10:13
rameshg87lucasagomes, and from nova i request connectivity to 2 networks10:13
*** andreykurilin_ has quit IRC10:14
rameshg87lucasagomes, oh okay10:14
rameshg87lucasagomes, i see there is a check10:14
lucasagomesright :)10:15
rameshg87lucasagomes, so if there are 4 nics and i select connectivity to 4 networks in nova10:16
rameshg87lucasagomes, then selects 4 ports of ironic, creates neutron ports in 4 networks. it can't be 4 flat networks right ?10:17
Nisharameshg87, the NICs must be pxe enabled for ironic to choose it for deploy10:18
lucasagomesrameshg87, not completely sure. Is this part of the virtual media without DHCPing spec work?10:19
Nishaby default only one NIC is pxe enabled10:19
rameshg87lucasagomes, yes.10:19
rameshg87Nisha, yes, but i was just wondering how the mapping would work.10:19
Nisharameshg87, all are connected to public network?10:20
Nishaor all connected to private network10:20
Nisha?10:20
Nishaor its mix?10:20
rameshg87lucasagomes, Nisha, if i have NIC1, NIC2, NIC3, NIC4 physically connected to flat networks NET1, NET2, NET3, NET4 respectively, then it wouldn't work right ?10:21
*** sambetts has joined #openstack-ironic10:21
Nisharameshg87, are all NICs connected to the same kind of network?10:22
Nishai.e. public or private?10:22
rameshg87lucasagomes, Nisha, so i guess it would be managed by neutron to provide connectivity for the respective NICs10:22
rameshg87Nisha, they are 4 separate networks10:22
rameshg87lucasagomes, my question was if i have 4 ports enrolled into Ironic10:22
rameshg87lucasagomes, and i use virtual media driver10:23
*** naohirot has quit IRC10:25
rameshg87lucasagomes, i have it like this10:26
rameshg87lucasagomes, https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/ilo/deploy.py#L158-L16710:26
lucasagomesrameshg87, right, only one port will have a VIF?10:27
* lucasagomes checks10:27
*** pelix has joined #openstack-ironic10:27
lucasagomesI don't have an answer, I gotta investigate a bit10:27
rameshg87lucasagomes, yeah but we have VIF for all the ports in Ironic10:27
*** chenglch|2 has quit IRC10:27
lucasagomesoh10:28
rameshg87lucasagomes, i smell something wrong here.10:28
lucasagomes+110:28
rameshg87lucasagomes, i think we should pass the IP details of all the ports to the booted ramdisk and then let it assign the IP to all the interfaces10:28
rameshg87lucasagomes, just like a config drive will do10:28
lucasagomeshttps://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L898-L90510:29
lucasagomesyeah seems all ports will have a vif indeed10:29
*** Masahiro has quit IRC10:29
rameshg87lucasagomes, yes. so all the ports will have an IP assigned in neutron10:29
rameshg87lucasagomes, so if we are going to avoid dhcp and statically assign IPs, we should rather pass network details for all the MAC addresses to the ramdisk10:30
rameshg87lucasagomes, so that it has connectivity to all the networks - one of them might be the actual provisioning network10:30
lucasagomesyeah :/10:32
*** Nisha has quit IRC10:32
lucasagomesidk how the ip= BOOTIF thing will work for that10:32
lucasagomescan you specify which mac each address will go to?10:33
rameshg87lucasagomes, yeah it might be a problem, because there are multiple details to pass on10:33
rameshg87lucasagomes, let me just think over10:34
rameshg87lucasagomes, the problem was that we have only 1 network connected in our bare metals that we use for testing :)10:34
rameshg87lucasagomes, will keep the dhcp spec on hold for now until i figure out this10:35
*** jistr has joined #openstack-ironic10:37
lucasagomesack10:37
lucasagomesthanks for looking into it10:37
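(For reference, the kernel argument this thread worries about has the form ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf> - it keys on a device name, not a MAC, which is what makes per-NIC static assignment awkward. A hedged sketch of emitting one ip= argument per NIC, assuming a dracut-style ramdisk that honors multiple ip= arguments; the helper and field names are made up:)

    def build_ip_args(nics):
        # One static ip= argument per NIC; device names eth0, eth1, ...
        # are assumed to line up with the enrolled ports' MAC order,
        # which is exactly the fragile part discussed above.
        args = []
        for i, nic in enumerate(nics):
            args.append('ip=%s::%s:%s::eth%d:off' % (
                nic['address'], nic['gateway'], nic['netmask'], i))
        return ' '.join(args)

    build_ip_args([{'address': '10.0.0.5', 'gateway': '10.0.0.1',
                    'netmask': '255.255.255.0'}])
    # -> 'ip=10.0.0.5::10.0.0.1:255.255.255.0::eth0:off'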
*** jistr is now known as jistr|trng10:38
*** athomas has quit IRC10:41
*** jcoufal has joined #openstack-ironic10:42
*** kurtrao has quit IRC10:46
*** derekh has joined #openstack-ironic10:50
*** ramineni has quit IRC10:50
*** rameshg87 has quit IRC10:59
*** athomas has joined #openstack-ironic11:06
*** Marga_ has joined #openstack-ironic11:20
*** k4n0 has quit IRC11:23
*** Masahiro has joined #openstack-ironic11:30
*** Masahiro has quit IRC11:34
*** pensu has quit IRC11:41
*** kurtrao has joined #openstack-ironic11:43
kurtraoI've hit this problem when creating a disk image using disk-image-builder. It reports: /hooks/root.d/99-shared_apt_cache: line 12: DISTRO_NAME: unbound variable11:45
kurtraocan anyone please help?11:45
kurtraoi was following the install-guide.html from openstack official site11:46
*** Haomeng has joined #openstack-ironic11:47
*** Haomeng|2 has quit IRC11:47
lucasagomeskurtrao, hi there, diskimage-builder is a tool from TripleO11:50
lucasagomeskurtrao, I think that people on #tripleo can help you better with it11:50
kurtraook. thanks a lot~ Will it possibly be a problem with the document? I've seen some bugs before like mis-use of "service" and "services" as tenant name. So, just to confirm. That's ok. I'll check with #tripleo11:51
*** tatyana has left #openstack-ironic11:52
*** dlpartain has quit IRC11:59
*** romcheg has quit IRC12:05
*** romcheg has joined #openstack-ironic12:12
*** pensu has joined #openstack-ironic12:16
*** jistr|trng has quit IRC12:28
*** jistr has joined #openstack-ironic12:34
*** jistr is now known as jistr|trng12:35
*** Marga_ has quit IRC12:41
*** lucasagomes is now known as lucas-hungry12:41
*** rameshg87 has joined #openstack-ironic12:53
*** MattMan has joined #openstack-ironic12:57
*** pensu has quit IRC12:59
*** tatyana has joined #openstack-ironic13:01
*** naohirot has joined #openstack-ironic13:02
*** ryanpetrello has joined #openstack-ironic13:07
*** Masahiro has joined #openstack-ironic13:08
*** dprince has joined #openstack-ironic13:09
*** rushiagr is now known as rushiagr_away13:10
*** Masahiro has quit IRC13:13
*** rameshg87 has quit IRC13:20
*** ryanpetrello has quit IRC13:37
*** jjohnson2 has joined #openstack-ironic13:38
*** ryanpetrello has joined #openstack-ironic13:46
*** lucas-hungry is now known as lucasagomes13:49
*** ryanpetrello has quit IRC13:50
*** jcoufal has quit IRC13:55
*** jcoufal has joined #openstack-ironic13:56
openstackgerritLucas Alvares Gomes proposed openstack/ironic-specs: Root device hints  https://review.openstack.org/13872914:01
*** dlaube has joined #openstack-ironic14:01
openstackgerritLucas Alvares Gomes proposed openstack/ironic-specs: Root device hints  https://review.openstack.org/13872914:02
*** dlaube has quit IRC14:03
*** rushiagr_away is now known as rushiagr14:05
*** rloo has joined #openstack-ironic14:08
*** mjturek has joined #openstack-ironic14:08
*** lazy_prince is now known as killer_prince14:10
victor_lowtherGood morning party people.14:10
*** dlaube has joined #openstack-ironic14:19
*** erwan_taf has joined #openstack-ironic14:21
lucasagomesvictor_lowther, yo morning14:26
erwan_tafheya world14:26
lucasagomeserwan_taf, howdy14:26
erwan_tafhey lucasagomes14:28
*** dlaube has quit IRC14:28
* rloo is too busy partying to say good morning to victor_lowther. What, is it morning already? :-)14:28
*** nosnos has quit IRC14:28
victor_lowtherwell, it is in Texas.14:28
lucasagomesrloo, morning :)14:29
* victor_lowther runs off for coffee, email, and spec reviews14:29
rlooafternoon lucasagomes!14:29
*** dlaube has joined #openstack-ironic14:33
*** rameshg87 has joined #openstack-ironic14:35
*** rameshg87 has quit IRC14:36
*** rameshg87 has joined #openstack-ironic14:37
*** jogo has joined #openstack-ironic14:40
jogohttps://bugs.launchpad.net/nova/+bug/137937314:41
jogowondering if nova still needs nova/api/openstack/compute/contrib/baremetal_nodes.py14:41
rloojogo: ohhh. that was there to help 'migrate' BM users to ironic. but BM was deleted from nova. so do we still want to support those BM APIs...14:43
rloojogo: I think we need to discuss with devananda and/or is there a policy wrt how long to support something before it is deprecated?14:47
jogorloo: right, but kilo nova doesn't support nova-baremetal14:49
jogorloo: yeah not sure what the right policy answer is here14:49
rameshg87NobodyCam, hi14:50
rloojogo: I know. So I'm fine if we don't support/proxy the nova BM apis too. But... user experience blah blah... There are only two: baremetal-node-list and baremetal-node-show.14:50
rloojogo: and I think we only proxy'd them cuz someone (outside of ironic) asked for it. maybe as part of graduation...14:51
jogorloo: yeah, I am interested in hearing what devananda has to say14:52
rloojogo: me too ;)14:52
*** Masahiro has joined #openstack-ironic14:57
*** krtaylor has quit IRC14:57
*** pcrews has joined #openstack-ironic14:57
dlaubeg'morning guys15:02
*** Masahiro has quit IRC15:02
dlaubeanyone know how ironic or nova sets up the magic IP "169.254.169.254" for the metadata service?15:05
dlaubeis there a request that gets sent to neutron in order to create some routes or something?15:06
PaulCzarhey folks,  i'm way further along with the pxe_ssh driver15:17
PaulCzaram getting errors regarding setting ports to DHCP15:17
PaulCzarhttps://gist.github.com/paulczar/10f421479ac264368f7c15:17
PaulCzarthe interesting error is at the bottom15:17
*** r-daneel has joined #openstack-ironic15:21
*** jerryz has quit IRC15:21
*** Nisha has joined #openstack-ironic15:24
NobodyCammorning Ironic.. says the man running late :-p15:25
NobodyCamhi rameshg87 :)15:26
*** rameshg87 has quit IRC15:28
NobodyCamPaulCzar: neutron up and running?15:29
*** killer_prince has quit IRC15:29
jrollhuh15:35
jroll/var/log/nova/nova-compute.log:2014-12-03 15:07:09.135 8182 DEBUG nova.virt.ironic.driver [-] unplug: instance_uuid=25cb63eb-4b08-4925-b0ea-e810c2e2849e vif=[VIF({'profile': {}, 'ovs_interfaceid': None, 'network': Network({'bridge': u'brq38e82ca6-74', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 'address': u'172.16.255.6'})], 'version': 4,15:35
jroll'meta': {'dhcp_server': u'172.16.255.3'}, 'dns': [], 'routes': [], 'cidr': u'172.16.255.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u'172.16.255.1'})})], 'meta': {'injected': False, 'tenant_id': u'78246742e14049fc81be918cba080e87', 'should_create_bridge': True}, 'id': u'38e82ca6-7475-4022-a824-85bf1faa89d8', 'label': u'internal'}), 'devname': u'tap5b4f39c1-95',15:35
jroll'vnic_type': u'normal', 'qbh_params': None, 'meta': {}, 'details': {u'port_filter': True}, 'address': u'08:00:27:a0:3a:50', 'active': False, 'type': u'bridge', 'id': u'5b4f39c1-955a-45d5-b861-02f192898eea', 'qbg_params': None})] _unplug_vifs /usr/local/lib/python2.7/dist-packages/nova/virt/ironic/driver.py:93415:35
jrollwhoa, my bad15:35
jrolldidn't realize that was so long15:35
NobodyCamwow its early for that15:35
NobodyCamlol15:35
jrolllol15:35
jrollI call that making an entrance15:35
NobodyCammorning jroll15:35
jrollmorning :)15:36
dlaubeg'morning15:36
jrollPaulCzar: so it looks like nova deleted the port from neutron just before ironic tried to use it15:36
NobodyCammorning dlaube15:36
jrollerr, actually, unplugged it from ironic15:37
jrollstrange15:37
Shrews:)15:37
NobodyCammorning Shrews15:37
Shrewsmorning NobodyCam15:38
*** krtaylor has joined #openstack-ironic15:38
jrollhey Shrews and dlaube15:39
Shrewshiya jrollyroll15:39
jrolldlaube: btw, there's fancy network things going on there, someone had figured it out but I forget who (sorry, I don't use metadata service, unfortunately)15:39
*** zz_jgrimm is now known as jgrimm15:39
lucasagomesNobodyCam, jroll Shrews dlaube morning15:40
NobodyCammorning lucasagomes :)15:40
jroll\o lucasagomes15:40
Shrewshi lucasagomes15:40
dlaubejroll: ahh ok… darn15:40
lucasagomesjroll, re boot() vs deploy() interface :( I don't think I will have much time to work on that15:42
PaulCzarjroll: that's what I'm thinking ...  I can't see why though15:42
lucasagomesI've been assigned to some other priorities15:42
*** jmank has joined #openstack-ironic15:43
jrolllucasagomes: priorities are hard :(15:43
lucasagomesaye15:43
jrolllucasagomes: it might take me a little bit of time, but maybe I could take that on and trade you the iscsi driver configdrive stuff? :)15:43
lucasagomesjroll, that sounds good to me15:43
lucasagomescause that is one of my priorities :)15:44
jrollPaulCzar: right, either that or maybe the port wasn't successfully created, take a look at neutron logs?15:44
jrollexactly :)15:44
jrolland it sounds painful :P15:44
lucasagomesdoes it? :(15:44
PaulCzarjroll: nova's error for the instance shows : no valid host, 3 attempts made15:44
lucasagomesyou mean creating the partition and dealing with the rebuild stuff? I probably will need to pick urs and devananda brains a bit to catch up with u guys about the problems15:45
jrolllucasagomes: yeah, mostly just that I'm scared of that partitioning code :P15:45
lucasagomescool, fair trade :)15:46
jrollPaulCzar: right, it will try to reschedule to another node if the deploy fails15:46
NobodyCamPaulCzar: three attempts are you running the RetryFilter ?15:46
jroll(I'm also scared of refactoring our interfaces) :P15:46
NobodyCamjroll: is that a new default15:46
NobodyCamI don't recall it being like that15:47
lucasagomesheh yeah, the main problem with boot() and deploy() is actually abstracting, in a generic way, a means for the deploy() interface to pass information to the boot() interface15:47
jrollNobodyCam: dunno, I think it's been that way for a while15:47
lucasagomesthings like the ipa url that is part of the pxe template15:47
NobodyCam:-p15:47
lucasagomesor the iscsi_iqn15:47
lucasagomesonce that's kinda sorted I believe it will be easier15:47
PaulCzarjroll: Bound port: fbcef375-dd45-47ea-b279-04f5bc18be9b, host: allinone, vnic_type: normal, profile: , driver: linuxbridge, vif_type: bridge, v15:48
PaulCzarif_details: {"port_filter": true}15:48
jrollhrm15:48
jrolllucasagomes: yeah, I'm thinking just... driver.deploy.get_deploy_config(), we'll need to normalize some things etc15:49
lucasagomesyeah, something like that15:49
lucasagomeswe use Jinja2 templates for the configs so it may be a question of adding a deploy_information field15:50
*** sambetts has quit IRC15:50
lucasagomeswhich defaults to "" (empty string)15:50
jrollright15:50
jrollOTOH15:50
jrollIPA will ignore iscsi_iqn, iscsi ramdisk will ignore ipa-api-url15:51
jrollmaybe just pass everything :P15:51
lucasagomesheh, one thing is15:51
lucasagomesthe deploy() interface will probably have control over the boot() interface15:51
PaulCzarjroll: neutron logs .. just grepping for the port ID - http://pastebin.com/KUDgRLcg15:51
lucasagomesso you can give it the right parameters depending on the deploy() but generic enough on the boot()15:51
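(A rough sketch of the hand-off being circled here, not Ironic's real interfaces: the deploy side exposes its ramdisk-specific options through a hypothetical get_deploy_config() hook, and the boot side renders them into its Jinja2 template via a field that defaults to an empty string, as suggested above.)

    import jinja2

    PXE_TEMPLATE = jinja2.Template(
        'kernel deploy_kernel {{ deploy_opts }}')  # trimmed-down template

    class AgentDeploy(object):
        def get_deploy_config(self, task):
            # An iscsi-style deploy driver would return iscsi_target_iqn
            # and friends here instead of the IPA API URL.
            return {'ipa-api-url': 'http://ironic.example.com:6385'}

    def render_pxe_config(deploy_iface, task):
        opts = deploy_iface.get_deploy_config(task)
        deploy_opts = ' '.join('%s=%s' % kv for kv in sorted(opts.items()))
        return PXE_TEMPLATE.render(deploy_opts=deploy_opts)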
PaulCzarso it looks like it does a lookup for the port and can't see it for some reason so just goes ahead and deletes it15:52
jrollPaulCzar: yeah... also I see PUT /v2.0/ports/fbcef375-dd45-47ea-b279-04f5bc18be9b.json HTTP/1.1" 40415:53
jroll*before* it's deleted15:53
lucasagomes[off-topic] if anyone has some time, please take a look at https://review.openstack.org/#/c/138729/15:54
openstackgerritLucas Alvares Gomes proposed openstack/ironic-specs: Root device hints  https://review.openstack.org/13872915:54
PaulCzarjroll: hmmm, is it an auth/tenant thing ?   I think I'm running nova as the admin user / admin tenant ...  just remember ironic has a user hard coded into ironic.conf15:59
PaulCzarso maybe the port is getting created for admin, and then ironic can't see it15:59
jrollPaulCzar: ooo, maybe, though ironic's user should be an admin, no?15:59
openstackgerritMerged openstack/ironic: Add documentation for SeaMicro driver  https://review.openstack.org/13632416:01
*** naohirot has quit IRC16:01
PaulCzarjroll: yeah, I set the ironic policy to allow the ironic service user to work16:01
PaulCzarbut I'm switching the ironic.conf user to the same user as me16:01
PaulCzarsee if that works16:01
jrollPaulCzar: hrm, to the same user as the user doing nova boot?16:05
jrollseems wrong16:05
jrollironic's user needs to, at a minimum, be able to validate tokens with keystone16:05
*** dlpartain has joined #openstack-ironic16:06
NobodyCamlucasagomes: why model?16:08
PaulCzarjroll: just trying to think why ironic can't see the neutron port ... thought it might be because it's not in the same user or at least tenant as the user calling neutron16:08
PaulCzar(grasping at straws )16:08
lucasagomesNobodyCam, that's the field I was wondering whether I should include or not heh but as it was part of the info I can find on sysfs I thought why not16:09
openstackgerritJim Rollenhagen proposed openstack/ironic-python-agent: Use _ instead of - for config options  https://review.openstack.org/13163216:09
jrollJayF: ^ that lgtm16:09
lucasagomesNobodyCam, tho I think I should remove it16:09
jrollPaulCzar: could be, yeah, I don't know much about neutron :/16:09
lucasagomesNobodyCam, it's not like it's a unique identifier or something16:10
jrolllucasagomes: it could be unique16:10
*** dlaube has quit IRC16:10
NobodyCamyea, I'll add a comment16:10
jrollsay I have two disks, one is model cheapo-ssd and one is giant-hdd16:10
lucasagomesjroll, :D yeah if you have only one of the disks on that model16:10
jrollright16:10
NobodyCamjroll: in /SOME/ cases16:10
jrollright16:10
jrollso it can be useful16:10
lucasagomesjroll, right, yeah16:10
lucasagomesI don't think it's totally invalid16:10
jrollinstead of looking up serial numbers for each node16:10
jrollI can just spam every node with a model name16:11
lucasagomeshmm it does makes sense yea16:11
NobodyCamI'll add a comment please feel free to disagree16:11
jrolllucasagomes: I wonder if size would be another reasonable thing there16:11
jrollif you know your root disk is 32gb, tell it to look for the 32gb disk16:12
lucasagomesjroll, I was thinking about it, but I was trying to make it shorter16:12
jrollI would use that16:12
lucasagomesbecause I know u guys use it on IPA16:12
rloolucasagomes: just reading your comments wrt the state machine. there was some discussion in IRC about it yesterday that might help.16:12
lucasagomesjroll, for size we could add some operators16:12
lucasagomeslike greater than etc16:12
lucasagomesrloo, right, I tried to catch up with some of the IRC conversation16:13
jrolllucasagomes: yeah, for this spec I would just do "exact" size16:13
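(A toy version of the hint matching sketched in the spec comments: exact-match on whatever fields the operator supplies. Field names follow the discussion - model, exact size - and the sample values are invented.)

    def find_root_device(devices, hints):
        # Return the first device whose fields all equal the given hints.
        for dev in devices:
            if all(dev.get(field) == value for field, value in hints.items()):
                return dev
        return None

    disks = [{'name': '/dev/sda', 'model': 'cheapo-ssd', 'size_gb': 32},
             {'name': '/dev/sdb', 'model': 'giant-hdd', 'size_gb': 4000}]
    assert find_root_device(disks, {'size_gb': 32})['name'] == '/dev/sda'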
rloolucasagomes: wrt zap/clean. We wanted to distinguish them, so eg clean might be a subset of zap. You might want to do some things only once after enrolling the node, but other things (whatever-in-clean) always after deleting.16:13
lucasagomesrloo, but it's difficult if we don't summarize the decisions in gerrit16:13
lucasagomesjroll, right, mind adding a ocmment for it? I would glad add that16:13
jrollsure thing16:13
rloolucasagomes: yup. i was thinking about that. (summarizing decisions etc). it is easy to forget to do that.16:14
lucasagomesrloo, right I mean, many things can be a subset of zapping, idk why cleaning tasks are an exception16:14
lucasagomesrloo, because, the RAID example idk if it fits on CLEANING16:15
lucasagomesunless we say that cleaning is also building RAID16:15
rloolucasagomes: i think to identify things that one ALWAYS wants to do after deleting, vs things that most likely only want to do once after enrolling.16:15
rloolucasagomes: so maybe 'clean' is the wrong verb.16:16
lucasagomesright16:16
rloolucasagomes: what happened to 'decommission'?16:16
lucasagomesyeah could be naming problem16:16
jrolldecommission has too many meanings16:16
rloolucasagomes: how about 'zappy' instead of 'clean' :-)16:16
lucasagomesI would think that normalising16:16
lucasagomescause cleaning is called before the node is provisioned as well16:17
jrolleh?16:17
jrollisn't it just after delete?16:17
jrolland when triggered by operator16:17
* lucasagomes rechecks the state machine16:17
lucasagomesjroll, nop, it goes from MANAGED to CLEANING and then AVAILABLE16:17
rloolucasagomes: so if you want to clean before the first time a node is provisioned, you add that/those tasks to the zap16:18
lucasagomeswhich is good, it makes sense16:18
lucasagomeswe don't know if the machine is being recycled or not16:18
*** romcheg has quit IRC16:18
jrolllucasagomes: but from available, it only goes to managed and cleaning if the operator requests16:18
jrollbut yeah, when you first bring it in it should be cleaned16:18
*** Marga_ has joined #openstack-ironic16:19
jrollsorry, I thought "before the node is provisioned" meant every time the node is provisioned :P16:19
lucasagomesrloo, but that's the thing. Imagine someone managing many nodes in Ironc with different drivers16:19
lucasagomesrloo, one driver's ZAPPING only builds arrays, the other's also cleans16:19
lucasagomesso he is used to putting the node back in MANAGED and calling ZAPPING16:19
lucasagomesone will completely erase all the data and the other will not16:19
lucasagomesit's hard to manage16:19
lucasagomesjroll, oh no, like before it gets into the main loop :) as a first step (which makes sense)16:20
jrolllucasagomes: commented on the disk thing16:20
jrollyeah, ok :)16:20
rloolucasagomes: I'm not sure I understand. Is your example after a deletion, or only after enrolling, or both?16:20
lucasagomesjroll, ta much16:20
jrolllucasagomes: to be clear, when we talked about separating CLEAN and ZAP, there will be actions that happen in both16:21
jrolllike erase disks will happen in both16:21
lucasagomesrloo, after enrolling16:21
rloomaybe 'clean' -> 'mini-zap'16:21
lucasagomesjroll, right yeah16:22
lucasagomesjroll, the example I was using is that, before erasing the disks, one may want to break the RAID16:22
rloolucasagomes: after enrolling only? So driver1 only wants to ZAP, not CLEAN, and driver2 ZAP and CLEAN?16:22
lucasagomesand clean individual disks to not have to deal with the RAID abstraction16:22
lucasagomesand be confident that individual disks are erased16:22
jrollright, hmm16:23
lucasagomesbut then if the ZAPPING doesn't happen after clean, we can't rebuild the raid (unless clean does it)16:23
rloolucasagomes: so driver1 codes the 'clean' so that it doesn't clean if it came from 'managed' state, and driver2 is happy as-is.16:23
jrolllucasagomes: well16:23
lucasagomesrloo, after enrolling is not a one-way pass16:23
jrollfor a raid16:23
lucasagomesrloo, you can put it back to MANAGED16:23
lucasagomesand call zap again16:23
jrollthe individual disks will still show as sda/sdb/etc, won't they?16:23
jrollat least for software raid, idk about hw raid16:24
openstackgerritSirushti Murugesan proposed openstack/ironic-specs: Whole Disk Image Support  https://review.openstack.org/9715016:24
lucasagomesjroll, in raid I believe it's shown as an individual device16:24
rloolucasagomes: sorry, when I said 'after enrolling' (and not after deleting), I meant it as a one-way/one-time. Exceptions are with avail->manage.16:24
lucasagomeserwan_taf does it to clean his disks16:24
jrolllucasagomes: software raid will have /dev/sda, /dev/sdb, and the raid is at /dev/md016:24
jrollfor example16:24
lucasagomesjroll, right, I can take a look at it and see if we could clean individual devices without touching RAID16:26
jrolllucasagomes: cool, hardware raid is the only one I'm worried about16:26
lucasagomesyup16:27
* lucasagomes investigate16:27
jrollthanks16:28
*** dlpartain has quit IRC16:28
erwan_tafback16:30
erwan_tafIf you want to clean disks for security issues, you have to delete all your raid arrays, create JBOD-like config and clean each individual disk by itself16:31
*** dlaube has joined #openstack-ironic16:31
erwan_tafthis is the way to ensure none of your disks have any user data on them16:31
erwan_tafI mean, physical disks16:32
erwan_tafi.e. a 6-physical-disk setup will be turned into 6 individual raid volumes of 1 disk each16:33
victor_lowthererwan_taf: Several RAID controllers I work with do not allow direct access to disks16:33
erwan_tafthen I can address the cleaning of every single phys disk16:33
jrollerwan_taf: for software raid, hardware raid, or both?16:33
erwan_tafvictor_lowther: by creating a raid 0 of 1 disk you can16:33
victor_lowtherso you cannot clean the entire physical disk no matter what you do16:33
victor_lowtherwithout attaching it to something else16:33
erwan_tafjroll: software is easy to handle, just process all the /dev/sd* something16:33
victor_lowtherno16:33
jrollerwan_taf: right, thought so16:34
victor_lowthera single-disk RAID 0 still has the raid metadata + other controller specific stuff that you cannot touch from userspace.16:34
erwan_tafon the hardware raid, I do use inbound cleaning16:34
erwan_tafvictor_lowther: sure but we are considering the addressable space16:34
erwan_tafthis is where the data we are interested in is located isn't it ?16:35
victor_lowtherthen in that case you don't have to disassemble the raid to clean it.16:35
erwan_tafthen I do clean the raid I created16:35
victor_lowtherjust spam it with zeros16:35
erwan_tafI don't know the server, and I don't know what that server was used for before16:35
erwan_tafmaybe a raid array exists, but some setups use other disks for another purpose16:35
jrollif you overwrite every existing block device with zeroes, I would think that would be enough16:36
erwan_tafas I can't predict what disks were used before, considering cleaning everything and gaining access to every phys disk is the sole way I know to ensure all disks are cleaned16:36
jrollbut this is not my expertise16:36
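(A naive illustration of the "spam it with zeros" approach, assuming direct access to the block device; as victor_lowther notes, a hardware RAID controller may hide blocks that userspace can never reach, and real cleaning code would prefer mechanisms like ATA secure erase.)

    import os

    def zero_device(path, block_size=1024 * 1024):
        # Overwrite an entire block device with zeroes, one block at a time.
        fd = os.open(path, os.O_WRONLY)
        try:
            size = os.lseek(fd, 0, os.SEEK_END)
            os.lseek(fd, 0, os.SEEK_SET)
            remaining = size
            while remaining > 0:
                remaining -= os.write(fd, b'\x00' * min(block_size, remaining))
        finally:
            os.close(fd)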
erwan_tafjrist: depends on your expectations of cleaning16:36
erwan_tafjroll: sorry ^16:36
jrollmy expectations are that the second tenant cannot access the first tenant's data16:37
erwan_tafIf you expect DoD 5220.22-M, that's a little bit more complicated ;)16:37
jrist:)16:37
victor_lowtheror only by RAID controllers that allow for secure erase commands to be passed through. :)16:37
victor_lowtherGood luck with that.16:37
erwan_tafI usually don't expect raid controllers to be cooperative16:37
jrolljrist: I'm going to buy you a beer one day for all these false highlights16:37
jroll:)16:38
erwan_tafthat makes me much more resilient to various setups16:38
jristjrist: sounds good16:38
jristbut I'm allergic to beer and that will kill me16:38
erwan_taf\o/16:38
jrist;(16:38
lucasagomesjrist, ouch :(16:38
erwan_tafjrist did highlight himself to answer jroll16:38
jrollthat was the best false highlight16:38
erwan_tafjrist: that's sad man16:38
jristlol16:38
jristI know erwan_taf . I know16:39
jrolljrist: the drink of your choice, then :P allergic to beer or gluten or?16:39
jristdidn't used to be allergic16:39
jristdunno. went to an allergist for it even16:39
jristhe had no clue, lol16:39
jrollI guess gluten probably doesn't kill people16:39
jrollwow, weird16:39
victor_lowtherI am also of the opinion that if your CLEANING is smart enough to tear down your RAID arrays for cleaning to your desired level of paranoia, making it smart enough to rebuild what it tore down is not too much to ask.16:39
NobodyCamoh man :(16:39
erwan_tafso the definition of how deep should be the cleaning is variable16:39
lucasagomesit's actually hard to read a conversation between jrist and jroll :D16:39
jristlucasagomes: wow, I'm surprised you didn't say jrist and jrist16:39
jrolllmao16:40
jrist:)16:40
victor_lowtherheh16:40
lucasagomeslol16:40
erwan_tafI suggest that clean explodes raids, cleans disks if required, and leaves the raid unconfigured16:40
erwan_taf-ETOOMANY_JR16:40
jristerwan_taf: :)16:40
JayFjroll: gluten isn't an "allergy" technically... it's an intolerance. IDK what the difference is but I used to :P16:40
victor_lowtherthat is silly if the default expectation is that a cleaned system is ready to be reassined the same sort of role it had.16:41
victor_lowtherer, reassigned16:41
erwan_tafin my opinion, a clean system is a system ready to be configured to become something16:41
erwan_tafclean does clean16:41
erwan_tafprepare does prepare and expects it to be clean16:42
victor_lowtherhm16:42
victor_lowtherI have different expectations, then16:42
jrollyeah, clean should not build a raid imo16:42
lucasagomesI agree with that level of abstraction16:42
victor_lowtherto me, a clean system has been sanitized16:42
victor_lowtherconfiguration and role assignment happens elsewhere.16:42
erwan_tafdoes cleaning your car make it ready to travel ?16:43
erwan_tafthat's semantically wrong for my taste16:43
victor_lowtherno, it gets rid of the dirt and bugs16:43
victor_lowtherit doesn't change the design or config16:43
jrollvictor_lowther: ++ I think y'all are on the same page16:43
erwan_tafso I missed where the raid level should be defined16:44
victor_lowtherZAP16:44
victor_lowtheror defined in MANAGE and initially instantiated in ZAP16:44
jrollum16:44
jrollI'm not so sure about that16:45
victor_lowtherscrubbed in CLEAN16:45
jrollat the summit, we talked about potentially doing long-running tasks in ZAP16:45
jrollshort-running tasks can be done at deploy time16:45
victor_lowtheryes16:45
jrollhow long does configuring a hardware raid take?16:45
JayFhours16:45
jrollblah16:45
JayFif you're doing a full init16:45
erwan_tafdoes ZAP knows what the host will be used for and decide which raid level/disks to use ?16:45
victor_lowtherwhich is why systems available to be deployed are in AVAILABLE, not MANAGED16:45
jrollerwan_taf: no, that's why zap can't do that IMO16:45
jrollor16:46
jrollidk16:46
lucasagomeshmm16:46
*** Masahiro has joined #openstack-ironic16:46
jrollmaybe it can16:46
victor_lowtherand CLEAN happens between MANAGED/DELETE and AVAILABLE16:46
JayFjroll: ZAP is operator initiated, and it does whatever the operator tells it to16:47
JayFjroll: so an operator could say "ZAP this node to RAID 10" and it would do it16:47
jrollok, yeah, the machine in the spec looks fine to me16:47
jrollright16:47
jrollhad to re-read16:47
JayFcoolio16:47
erwan_tafthat's why I still consider that Zap is doing too many things. Deleting raid arrays & taking care of user data is one thing, preparing the server for a particular role is another16:47
jrollwhen you unprovision a node, it cleans up after previous tenant16:47
erwan_tafclean vs prepare16:47
jrollwhen you want to configure raid or whatever the heck, you do a zap16:48
jrollerwan_taf: we have split this between clean and zap already16:48
jrollzap ~= prepare16:48
erwan_tafbut at this stage we don't know what this node will be used for16:48
erwan_tafwhich could define a particular hw config16:48
jrollhuh?16:48
*** Marga_ has quit IRC16:49
PaulCzarjroll: I can run neutron port-update from the ironic user ... so it's not a permissions problem16:49
victor_lowtherWe decide what the node will be used for in MANAGED16:49
jrollthe operator requests ironic to do a particular zap16:49
victor_lowtherand set node.properties accordingly16:49
erwan_tafok16:49
jrollright16:49
victor_lowtherbefore making it AVAILABLE16:49
jrolland the flavor matches capabilities in node.properties16:49
jrolle.g. "raid10"16:49
JayFerwan_taf: the idea of ZAP doesn't make as much sense until it's combined with hardware capabilities (there's a backlog spec up) which allows you to schedule based on a server having given capabilities (like, say, having a RAID configured)16:49
*** dlpartain has joined #openstack-ironic16:49
*** Masahiro has quit IRC16:50
erwan_tafso clean should be done before zap16:51
victor_lowtherno16:51
jrollyes.16:51
erwan_tafif zap does create the raid16:51
jrolland it is.16:51
victor_lowtherbecause zap can change what the hardware looks like16:51
erwan_tafit has to be cleaned first16:51
jrollclean is done when the node is unprovisioned, or when the node is unrolled16:51
victor_lowtherand clean should not.16:51
jrollenrolled16:51
jrollto get to MANAGED, you have to do CLEAN16:51
jrollto get to ZAP, you have to be in MANAGED16:51
victor_lowtherno16:51
jrollyes16:51
victor_lowtherto get to AVAILABLE, you have to CLEAN.16:51
jrollper the current spec16:51
JayFAVAILABLE -> MANAGED is a transition16:51
jrollsure16:51
JayFMANAGED -> [CLEAN] -> AVAILABLE is the path out of managed16:52
JayFthat's what deva's diagram yesterday indicated16:52
jrollCLEAN -> AVAILABLE -> MANAGED16:52
jrollyou can't get to managed without a clean16:52
victor_lowtherjroll: you are reading the graph wrong16:53
jroller, I guess you can16:53
*** dlpartain has quit IRC16:53
jrollwth16:53
*** ryanpetrello has joined #openstack-ironic16:53
*** achanda has joined #openstack-ironic16:54
rloojroll is right. If you went through the DELET* state, to get to MANAGED, you have to DELET* -> CLEAN* -> AVAILABLE -> (make request) -> MANAGED16:55
jrollrloo: right, but I was wrong about initial enrollment16:56
*** yuriyz has quit IRC16:57
*** yuriyz has joined #openstack-ironic16:57
rloojroll: okay ;)16:59
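(To make the path just agreed on concrete, a toy encoding of the transitions under discussion - state names follow the spec draft; this is an illustration, not the actual state-machine code.)

    # Allowed provision-state transitions, as debated above.
    TRANSITIONS = {
        'DELETING': ['CLEANING'],
        'CLEANING': ['AVAILABLE'],
        'AVAILABLE': ['DEPLOYING', 'MANAGED'],  # MANAGED on operator request
        'MANAGED': ['ZAPPING', 'CLEANING'],
        'ZAPPING': ['MANAGED'],
        'DEPLOYING': ['ACTIVE'],
        'ACTIVE': ['DELETING'],
    }

    def check_transition(src, dst):
        if dst not in TRANSITIONS.get(src, []):
            raise ValueError('invalid transition %s -> %s' % (src, dst))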
*** deva__ has joined #openstack-ironic17:01
*** achanda has quit IRC17:02
*** viktors|afk is now known as viktors17:03
NobodyCamhumm if an available node is put back into manage it has to go thru clean again to become available again17:03
JayFYes :) That's by design17:04
devanandamornin, all17:04
NobodyCammorning devananda :)17:04
JayFbecause who knows what the oper did to the machine while it was in MANAGE, and it has to be re-normalized (I like that term) before going back into the pool17:05
JayFor else you risk provisioning an instance to a machine that's slightly different than all the rest17:05
jrollheya devananda17:05
jrollso I think this solves the whole issue with raid stuff17:06
jrollit goes DELETING -> CLEAN -> AVAILABLE -> MANAGED -> ZAP (build raid) -> MANAGED -> CLEAN -> AVAILABLE17:06
jrollerwan_taf: lucasagomes: ^ this should do what you need, yes? assuming CLEAN knows how to tear down a raid?17:06
lucasagomesjroll, hmm kinda, but there's a race there between AVAILABLE and MANAGED no?17:07
jrollmmm.17:07
lucasagomesnode becomes AVAILABLE, then have to be manually moved to MANAGED17:07
*** subscope has quit IRC17:08
jrolllucasagomes: assuming there's a flavor that can schedule to that node when it doesn't have a raid, yeah, there's a race17:08
devanandajroll: the problem there is, if the node is AVAILABLE, it might be scheduled by Nova at that point17:08
victor_lowtherjroll: seriously, what is wrong with making CLEANING be responsible for recreating whatever raid config it tore down?17:08
jrolldevananda: right17:08
victor_lowtherif it even needs to do so?17:08
jrollvictor_lowther: I'm not sure that it fits, maybe it does17:08
devanandajroll: if you need to do a destructive task between DELETING and DEPLOYING, it has to happen before AVAILABLE, if you want to guarantee that it happens at all17:08
lucasagomesyeah17:09
devanandavictor_lowther: ++17:09
lucasagomesas rloo pointed out, it may be a language problem, maybe cleaning is not the best term for it17:09
lucasagomescause cleaning building RAID, sure it can, but it sounds a bit out of place17:10
*** romcheg has joined #openstack-ironic17:10
jrollwell17:10
*** achanda has joined #openstack-ironic17:10
jrollI'm fine with cleaning building exactly the raid it tore down17:10
erwan_tafwe are not facing technical issues but semantic ones17:10
JayFjroll: thats what I had in mind17:10
erwan_tafin a state machine having a single verb per action is nice to ease the communication17:11
erwan_tafwe are hiding states under a single name17:11
erwan_tafcleaning is not creating17:11
erwan_tafthat's where the confusion comes from17:11
*** jistr|trng has quit IRC17:11
erwan_tafcleaning should clean17:11
devanandacan we not argue about languages in such a multi-cultural, multi-national environment?17:12
devanandaI'm going to rename them states AAA, BBB, CCC, DDD17:12
jrollthe problem is that we can't feasibly encode every potential cleaning or zapping task in this state machine17:12
lucasagomesdevananda, heh, well default is english :D17:12
devanandaand then start adding one character from every language I know17:12
victor_lowtherif the system config does not change from the POV of the rest of the graph, then who cares?17:12
devanandasome kanji, sanskrit, and greek for good measure17:12
*** marcoemorais has joined #openstack-ironic17:13
victor_lowtherInteresting Game of Life patterns.17:13
devanandabecause UTF8 is cool17:13
devanandabrb, gotta walk into the office17:13
victor_lowtherI want the glider gun state.17:13
lucasagomes"There are only two hard problems in Computer Science: cache invalidation and naming things"17:13
JayFµå˜å©´∂ -> [笴å˜] -> å√刬å∫¬´17:13
NobodyCamJayF: huh17:14
JayFNobodyCam: alternate state names with utf817:15
rlooso if we rename 'CLEAN' to 'QQQ' and 'ZAP' to 'PPP' does it help?17:15
*** marcoemorais1 has joined #openstack-ironic17:15
rloo(we can figure out the actual names or vote on it or something later)17:15
erwan_tafdevananda: I'm sorry to disagree. we are already exchanging with english words/verbs. If we hide various actions under a single umbrella, that creates confusion and useless debates because everyone understands that word differently17:15
*** marcoemorais1 has quit IRC17:16
NobodyCamstate 1; state 2 state 3....17:16
victor_lowtheror since people are getting hung up on the word state, taskbucket 1, taskbucket2, etc.17:16
erwan_tafI mean we are defining server states which do exist, so they should be verbs17:16
erwan_tafor adverbs17:17
romchegLet's use universal musical notation: sad sound for DELETING, etc..17:17
jrollI mean17:17
jrollthe words don't matter17:17
*** marcoemorais1 has joined #openstack-ironic17:17
jrollthe spec has a description of every state that is defined17:17
jrollwe should use those definitions, not the words themselves17:17
rlooerwan_taf: can you propose something that works with AAA, BBB, CCC states or whatever. we can deal with the actual names later.17:17
*** marcoemorais has quit IRC17:17
rloo++ jroll17:17
jrollin the state diagram, the words are only a reference to those definitions17:18
erwan_tafif that doesn't shock anyone that a state named "clean" is made for creating something ....17:20
* erwan_taf read : "CLEANING be responsible for recreating whatever raid config"17:20
erwan_tafthat hurts my logical brain ... sorry for that17:20
victor_lowtherofftopic: compliance training audio that I cannot fast-forward through17:20
erwan_tafIf I'm the sole one, I'll step down on that topic17:20
jrollerwan_taf: where do you see that?17:20
erwan_taf<victor_lowther> jroll: seriously, what is wrong with making CLEANING be responsible for recreating whatever raid config it tore down?17:21
victor_lowtherit is not as if I have not been taking this course every year for the last 17 years. :/17:21
rlooerwan_taf: please humour us, try to be constructive. Replace 'CLEAN*' with whatever for now. If you could/did, would it work?17:21
jrollerwan_taf: so, here's a thing17:21
jrollerwan_taf: clean should clean the raid volume, agree?17:21
*** ParsectiX has joined #openstack-ironic17:22
jrollerwan_taf: a raid volume is cleaned by disassembling the raid, scrubbing each disk, and reassembling the raid, agree?17:22
erwan_tafrloo: I'm just trying to say that sometimes if a state is hiding several different tasks it just means that we need to split this state in two17:22
rlooerwan_taf: so what two states would you suggest?17:22
rlooerwan_taf: clean & prepare?17:22
Nishadevananda, victor_lowther : [off_topic here] i have a question regarding states. Why is DISCOVERYFAIL/INTROSPECTIONFAIL/INSPECTIONFAIL not a valid state?17:22
victor_lowthererwan_taf: not if the end states from the POV of the graph are the same.17:23
rlooNisha: I think it is INSPECTFAIL, and it is a valid state17:23
victor_lowtherNisha: they are, why would you think otherwise?17:23
* lucasagomes feels we are being sucked into the FSM thing again17:23
jrollwe have a dirty raid volume, we want a clean raid volume, this means we need to re-assemble the raid when it's done cleaning17:23
Nisharloo, the states spec doesn't mention the state17:23
Nishavictor_lowther, ^^^17:23
rlooNisha: it is mentioned. Maybe we need to clarify it.17:24
jrollNisha: comment that on the spec, please17:24
erwan_tafrloo: yes something lke clean/erase/wipe versus build/create/17:24
victor_lowtherhttps://www.irccloud.com/pastebin/uPnYSYQy17:24
rlooNisha: see line 11917:24
lucasagomesjroll, victor_lowther maybe having CLEAN -> PREPARE -> AVAILABLE?17:24
devanandaback17:24
NobodyCamwb devananda17:25
lucasagomeswhere prepare could also prepare the machine like preboot17:25
jrolllucasagomes: but, what is PREPARE? put the server back how it was? sounds like CLEAN17:25
lucasagomesdevananda, wb17:25
victor_lowtherlines 13 - 15 in that paste.17:25
rlooNisha: ...a -ED suffix, and the fail state has a -FAIL suffix17:25
*** ndipanov is now known as ndipanov_gone17:25
Nisharloo, ohk. thanks17:26
rlooNisha: if you can think of a better way to word it or if an example would help, please comment in the spec.17:27
jrollcan I point out here that this whole "rebuild a raid" thing is only one use case? and that I think it fits into the current proposal, I think we would be safe moving forward with what we have, and if it causes problems later we can fix it.17:28
jrollalso, documentation is a thing17:28
lucasagomesjroll, right, the preboot thing I think is also valid17:28
Nisharloo, ok :)17:28
jrollif we make it clear that any existing raids are rebuilt in CLEAN, that's fine17:28
lucasagomesjroll, cause you are powering on the node right? and leave it like that17:28
jrolllucasagomes: I disagree that we need a preboot state, just power on at the transition to available or whatever, and use node.properties to signal to nova that it's prebooted17:29
lucasagomesand your prebooted nodes are different than non-prebooted nodes, so it changes the states (and the scheduler is aware)17:29
jrollhave the scheduler prefer nodes with node.properties['prebooted'] == True17:29
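(A toy weigher for the idea floated here: prefer nodes that are already powered on via a property flag rather than a dedicated state. The 'prebooted' property name is jroll's example, not an established convention.)

    def prefer_prebooted(nodes):
        # Prebooted nodes sort first: False sorts before True, hence 'not'.
        return sorted(
            nodes,
            key=lambda node: not node['properties'].get('prebooted', False))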
erwan_tafjroll: sorry for being the grumpy guy. It clean needs to rebuild something it means for me it's not the proper time to do it :/17:29
erwan_tafjroll: would you build a house, dismantle it to clean it and rebuild it then ?17:30
*** jcoufal has quit IRC17:30
erwan_tafs/It clean/If clean/17:30
rloowas lucasagomes the only one that asked for a PREBOOT state? I see that state from the summit whiteboard17:30
jrollerwan_taf: if I was told by someone smarter than me that this is the proper way to clean a house, yes.17:30
lucasagomesrloo, nope, I think J's had the use case for that17:30
lucasagomesthe way I understood it was like an optimization prior to making the node AVAILABLE17:31
JayFerwan_taf: if you want to make sure that nobody hid any landmines in the floor; yep17:31
JayFlol17:31
jrollrloo: lucasagomes: we have a use case for prebooting things, not for a state though17:31
victor_lowthererwan_taf: Wouldn't do it to a house, definitely would to a server. :)17:31
lucasagomeswhich could be powering on the node17:31
lucasagomesor perhaps even pre-imaging it17:31
victor_lowtherand I would have canned air and a dust mask ready.17:31
victor_lowthermaybe even new thermal paste17:31
erwan_tafstill thinking that clean should clean the raid at some point, destroy everything needed, and then build something that will be used in the end17:32
erwan_tafcreate-raid / delete / create again could change IDs etc..17:32
*** ryanpetrello has quit IRC17:32
victor_lowtherand after doing that, the server would be left in the same configuration as it was in before cleaning, minus the unneeded cruft that had built up.17:32
*** Marga_ has joined #openstack-ironic17:32
jrollerwan_taf: in general, what we want to do is go from dirty raid volume to clean raid volume17:32
jrollerwan_taf: you told me the only way to do this is to tear down the raid, clean each disk, and build it again17:33
rlooso erwan_taf suggested two states instead of the one CLEAN*: clean/erase/wipe versus build/create/17:33
erwan_tafI mean17:33
jrollerwan_taf: so yes, I think that should all be done in the CLEAN phase17:33
rlooand lucasagomes suggested CLEAN and PREPARE17:33
jrollthis isn't preparing17:33
jrollthis is cleaning17:33
devanandaerwan_taf: this really feels like a linguistic argument, and not constructive. can you make your point without relying on the word "CLEAN"?17:33
erwan_tafto clean a server, we need to dismantle its raid and clean every single disk, yes17:33
rloowhat's the issue with adding a 2nd state that can be a noop for those that don't want it?17:33
rlooand/or what's the issue with adding a 2nd state later when there is a usecase for it?17:33
erwan_tafthen the server is clean and not configured17:33
devanandarloo: unnecessary complexity17:33
victor_lowtherbecause so far it is just for this one special case.17:34
erwan_tafit's then time to prepare it by setting the raid17:34
rlooso we need a word that means 'clean up and get ready/prep'17:34
*** viktors is now known as viktors|afk17:35
jrollvictor_lowther++17:35
rloo"recuperate" or "take a holiday" I think.17:35
victor_lowtherand I think CLEAN is a good enough one-word summary.17:35
openstackgerritMerged openstack/python-ironicclient: Add option to specify node uuid in node-create subcommand  https://review.openstack.org/13213717:35
jrollerwan_taf: think about it as cleaning the server. one step of that is to clean the raid. when you clean a raid volume, you should come out the other end with a raid volume.17:35
jrollthis is all semantics17:35
jrollall english17:36
jrolland I think we're all wasting time here17:36
devanandai am not going to engage in a serious discussion about whether or not the word CLEAN is the optimal word. it's good enough.17:36
jrollwhile half the specs up for review wait for us to finish17:36
rlooCLEANUP?17:36
devanandawe already perform many sub-actions within each state transition17:36
erwan_tafjroll: no, I mean if you want to clean the server, it means more than cleaning the actual raid17:36
devanandaan argument based on breaking every single micro task into its own state is a meritless strawman17:37
erwan_tafjroll: it does mean cleaning _all_ the disks, even the ones that are not part of that particular raid17:37
victor_lowtherCLEAN can be -ING and -ED ed, and CLEANUP is just awkward there.17:37
erwan_tafjroll: we are supposed to clean the server not only the raid volume17:37
devanandacan we please get back to the business of making software?17:37
jrollerwan_taf: of course, the raid is one step17:37
jrollI'm done17:37
* erwan_taf feels sorry for not being able to express himself clearly enough to be understood17:37
jrollI'm +1 on the proposed state machine17:37
jrolland I've reflected that in the review17:38
erwan_tafand being received as grumpy while just trying to make things explicit17:38
jrollI think we should all get to the point where we are +1 on the proposed state machine17:38
erwan_tafI'm seen as being grumpy for being grumpy17:38
jrolland then we can move on to the actual important parts, like how the hell do we do this in an upgradeable fashion17:38
lucasagomesjroll, yeah sounds better indeed17:38
devanandajroll: speaking of that, i have something ...17:38
devanandajroll: https://github.com/devananda/ironic/tree/FSM17:39
lucasagomeserwan_taf, no worries :) I mean everyone is trying to help17:39
devanandastarted poking at it last night to see if i could model our current states in an FSM17:39
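(A minimal sketch of the kind of explicit FSM under discussion — illustrative only; this is not the contents of devananda's branch.)

```python
# Tiny table-driven finite state machine: transitions are data,
# and invalid events raise instead of silently changing state.
class InvalidTransition(Exception):
    pass


class FSM(object):
    def __init__(self, start, transitions):
        # transitions maps (current_state, event) -> next_state
        self.state = start
        self._transitions = transitions

    def process_event(self, event):
        key = (self.state, event)
        if key not in self._transitions:
            raise InvalidTransition(
                '%r not allowed in state %r' % (event, self.state))
        self.state = self._transitions[key]


# A few of the states being debated, wired up as an example:
fsm = FSM('AVAILABLE', {
    ('AVAILABLE', 'deploy'): 'DEPLOYING',
    ('DEPLOYING', 'done'): 'ACTIVE',
    ('ACTIVE', 'delete'): 'DELETING',
    ('DELETING', 'clean'): 'CLEANING',
    ('CLEANING', 'done'): 'AVAILABLE',
})
fsm.process_event('deploy')
assert fsm.state == 'DEPLOYING'
```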
erwan_tafdevananda: I thought you didn't want an FSM...17:40
devanandastarting from where we are, going straight to new states AND a new modelling layer will be a breaking upgrade path, IMO17:40
jrolldevananda: neat17:40
jrollagree17:40
devanandaerwan_taf: i never said that17:40
erwan_tafdevananda: if you want to see if that is FSM compatible, the answer is no. If you want an FSM which is ___nice___, that requires some rules that are not applied here ...17:42
erwan_tafdevananda: having a FSM compatible thing was the starting point with lucasagomes and that got rejected17:42
devanandaerwan_taf: unless i did. in which case, i was being sarcastic, or i changed my mind. all of these are possible. but if you read victor_lowther 's spec for a state machine, and followed the discussion we had been having over the last month at all, you'd know that we've been working on a state machine this whole time17:42
devanandaerwan_taf: also, there are multiple types of FSM. what we have in production today IS an FSM17:42
erwan_tafbut all the arguments we pushed for a FSM got rejected17:43
devanandabut not explicitly modelled as such17:43
devanandano17:43
devanandaseriously. i'm sorry you don't understand why this was met with the response it was17:43
devanandait's not because it is an FSM. we were working on one too. it's the way you presented it, in throwing out all the work we were already doing17:43
devananda and insisting that our approach is not really an FSM17:44
erwan_tafi agree we were wrong on the process to present that17:44
lucasagomesdevananda, that wasn't the intention17:44
erwan_tafhaving the requirement to get an FSM is the best argument ever to get a state machine that works perfectly well and simplifies the core of ironic17:45
erwan_tafit would be so brilliant to get ironic propelled by an FSM17:45
devanandaoh no... FSMs are not simple. this massively complicates things17:45
devanandai've resisted a FORMAL state machine for the last three years because of this17:46
devanandaanyway, i'm taking a break again and going to work on this. sorry if i seem rude, but having this discussion again isn't helpful for the channel17:46
jrollon that note, can everybody please go post their comments on the spec?17:46
*** tatyana has quit IRC17:47
victor_lowtherformal state machines are great when they can perfectly model every aspect of a system.  That is decidedly not the case when we have to do nasty things like deal with hardware.17:47
victor_lowtheryes, give me all your comments.17:47
*** pensu has joined #openstack-ironic17:47
victor_lowtherwhile I ditch this ridiculous compliance training and get lunch17:47
lucasagomesvictor_lowther, g'luck :)17:48
* lucasagomes has to do it as well17:48
jrollerwan_taf, lucasagomes, rloo, post comments on the spec please, we don't make any progress if you don't17:48
lucasagomesjroll, will do17:48
*** romcheg1 has joined #openstack-ironic17:49
*** romcheg has quit IRC17:49
erwan_tafdevananda, victor_lowther : can I take as a will/goal to get a FSM like state machine ?17:51
erwan_tafI mean for my comments17:51
jrollerwan_taf: you should comment with your beliefs IMO17:52
jrollerwan_taf: I'd also love to be educated why this state machine is not finite17:52
erwan_tafdone17:54
erwan_tafstupid question, but who here ever did implement an FSM ?17:55
jrollcan you rephrase that?17:56
rlooerwan_taf: it would be helpful to add a comment there, explaining WHY it couldn't be implemented in an FSM. I don't think a goal was to have that, but it would be useful to know, for the future.17:56
rlooerwan_taf: what's a "stupid question"?17:56
erwan_tafI mean I feel hurt by what I read about this state machine, but wonder how many here ever worked with an FSM before17:57
erwan_tafI mean having a state that has 3 inputs ...17:57
*** killer_prince has joined #openstack-ironic17:58
*** killer_prince is now known as lazy_prince17:58
erwan_tafthe same state is in fact hiding different sub-states17:58
jrollI have implemented plenty of finite state machines back in university, maybe since then but I don't remember17:58
jrollperhaps what we do cannot be modeled by a perfect FSM17:59
jrollthe inspect and zap tasks are optional, manual, tasks17:59
erwan_tafso a managed server could be zapped or not, right ?17:59
jrollcorrect18:00
*** derekh has quit IRC18:00
erwan_tafour proposal with lucasagomes was to make zap a mandatory step, but one _doing nothing_ by default18:00
erwan_tafso we have a state representing a "zapped" server, whether or not the zap actually did anything18:01
jrollthe thing is, you can't predict when a node needs to be zapped18:01
erwan_tafdepends on the implementation of that driver18:01
jrollin my use case, a node needs to be zapped when my vendor ships me new firmware18:01
erwan_tafsure that's implementation18:01
erwan_tafin that example we have: managed -> zapping -> zapped18:02
jrollin a way; in your proposal, when is a node zapped?18:02
erwan_tafwhen you need to prepare it18:02
jrollwhich is when?18:02
lucasagomesjroll, we called it "normalising" for the BIOS/firmware update, "preparing" for raid stuff, "cleaning" for cleaning18:03
jrolllucasagomes: sure. when does normalizing happen?18:03
lucasagomesjroll, will get the diagram for u18:03
lucasagomes1 sec18:03
erwan_tafjroll: what I mean is, when a server reaches a particular state, you should be in a position to say what was done before18:03
lucasagomesit was after managed btw18:03
erwan_tafin that case you can't, as managed represents too many options/cases/states18:04
lucasagomesjroll, https://docs.google.com/drawings/d/1Pxy2lr270yAlP78P7knhYhmBbanKWFf2EfcwVrduOtA/edit?usp=sharing18:04
jrolllucasagomes: so how do I go from available to available with a firmware update in between?18:04
* jroll looks18:04
lucasagomesjroll, this was part of the document we were writing18:04
jrollor in this case, ready to ready18:04
*** harlowja_away is now known as harlowja_18:04
lucasagomesyou could always go back to manageable18:04
lucasagomesand restart the main loop18:05
jrollok, but then we have the same problem18:05
lucasagomesso you can get zapped18:05
jrolltwo entry points to that state18:05
erwan_tafno state is self looping here18:06
erwan_tafjroll: another way to say that18:06
erwan_tafjroll: in this example, a clean server, is a server that passed all the steps before18:06
erwan_tafjroll: without any exception18:07
erwan_tafjroll: even if some are empty body18:07
erwan_tafjroll: so you can trust the state to define a particular "time" in the life cycle18:07
erwan_tafjroll: with such loops on MANAGED, that becomes unpredictable18:07
*** andreykurilin_ has joined #openstack-ironic18:07
jrollhmm18:08
lucasagomesyeah, states become self-explanatory18:08
lucasagomesjust by reading the previous states18:08
erwan_tafjroll: you cannot know if the server is zapped or not, and even worse .... is it a server that should have been zapped but wasn't?18:08
erwan_tafit's only MANAGED18:08
lucasagomeswhat does READY mean? it means the machine is CLEAN, CONSISTENT, NORMALIZED etc...18:08
lucasagomesit's important because an FSM has no "memory"18:09
lucasagomesit's limited by its states18:09
erwan_tafbtw, not trying to propose that version but illustrate the issues on the current one18:09
lucasagomesso each state is very clear about what the state of the server is and how it got there18:09
jrollZAP is not a state that needs to be done every cycle18:09
jrollthat's the thing18:09
jrollthe whole point of splitting zap/clean18:09
erwan_tafthis is the perfect thing18:09
lucasagomesjroll, when we did it, we based it on decommissioning18:10
lucasagomeswhich was done every cycle18:10
jrollzapping is something that an external system (human/robot/something else) triggers in extraordinary circumstances18:10
jroll(as we've defined it recently, that is)18:10
lucasagomesjroll, right, it's cool to move some of it out of the main loop18:13
lucasagomesbut you gotta guarantee that once "zapped" the machine will reenter the main loop18:13
lucasagomesand do the required steps to become READY18:13
lucasagomesbut always through a single path18:13
erwan_taf+118:17
*** rushiagr is now known as rushiagr_away18:17
rloolucasagomes: where/what is the 'introspect' state, in the FSM?18:19
rlooerr, s/introspect/inspect/18:20
*** achanda has quit IRC18:20
*** MattMan has quit IRC18:20
jrollthere's another problem I have, you can't inspect every time a node is cycled18:20
jrollby our definition of inspect18:20
lucasagomesrloo, we did it as part of becoming "defined"18:21
erwan_tafjroll: if you have a strong expectation of the state of this hw, I do suggest inspecting before actually using it18:21
lucasagomesrloo, and the conformity check should check if the machine is conforming18:21
*** achanda has joined #openstack-ironic18:21
rloolucasagomes: at what point do you know you can talk to the node with the supplied credentials?18:21
lucasagomesrloo, because if you do something like a firmware update you can introduce new features18:22
lucasagomesor bios flashing etc18:22
lucasagomesyou can disable/enable CPU cores for example via BIOS18:22
erwan_tafjroll: in my daily work, I can prevent a node from being used if the NICs are not connected to the right switches or if the system is already too hot18:22
jrollerwan_taf: we've defined inspecting as, "find out what this hardware is and update the database". we can't do that every cycle, say ram fails, suddenly you're running on different hardware without knowing it18:22
erwan_tafjroll: that's a very late decision to take18:22
jrollwhat18:22
lucasagomesrloo, once it's manageable18:22
jrollwe do this in production during "decommissioning" (which has become cleaning/zapping)18:23
rloolucasagomes: can we always inspect before we know if we can talk to the node?18:23
lucasagomesrloo, the "validating manageability" action will test if you have the right credentials etc18:23
rloolucasagomes: I'm just wondering why we put inspecting after managed in the proposed18:23
jrollerwan_taf: our discovery stuff won't tell you it isn't connected or too hot, it will just update ironic's database to say what it is and isn't connected to18:23
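(In client terms, "update ironic's database" looks roughly like the patch below — a hedged sketch against the 2014-era python-ironicclient; the endpoint, token, uuid, and values are all made up.)

```python
# Sketch: writing inspected hardware facts onto a node record.
from ironicclient import client

ironic = client.get_client(1, os_auth_token='TOKEN',
                           ironic_url='http://localhost:6385/')
patch = [
    {'op': 'replace', 'path': '/properties/memory_mb', 'value': 65536},
    {'op': 'replace', 'path': '/properties/cpus', 'value': 24},
]
ironic.node.update('NODE_UUID', patch)
```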
erwan_tafjroll: you have two different times: one for checking the unlikely-to-change values, one for checking the likely-to-change values18:23
jrollno18:24
jrollno18:24
jrollno18:24
jrollvalues should never change18:24
jrollwithout me knowing18:24
erwan_taf*ahah*18:24
rloolucasagomes: but you said inspecting happened as part of becoming defined, and that happens before validating.18:24
erwan_tafsorry, but they will18:24
jrollwhat18:24
erwan_tafdisk is broken ?18:24
erwan_tafsystem too hot ?18:24
jrollyeah, that will change18:24
jrollI don't want my database to change to reflect this18:24
erwan_tafnic not well connected ?18:24
erwan_tafbroken DIMM ?18:24
jrollI want that node to be failed18:24
erwan_tafyes but you have to check its conformity to a model18:25
jrollour "inspect" work explicitly only does the former18:25
erwan_tafyes I know18:25
erwan_tafwould be nice at some point being able to cope with more precise cases18:25
lucasagomesrloo, right, I believe that's the 2 "discovery" thing18:25
erwan_tafdeploying on a {almost}broken node isn't a good idea18:25
jrollno shit18:25
erwan_tafbut that's a different topic18:25
lucasagomesthat's why we have that "introspection/inspect" in the proposed, it's not actually discovering a new node18:26
lucasagomeswhere CREATED is like discovering18:26
rloolucasagomes: you lost me. what '2 "discovery" thing'?18:26
lucasagomesdiscover nodes vs discovering node properties18:26
jrollthe difference is18:26
jrollinspection is manual in the current proposal18:26
lucasagomesthe latter we changed to "introspecting" to avoid confusion18:27
rlooso lucasagomes, in the FSM (== the FSM, and proposed==our-not-FSM-being-proposed)18:28
rloolucasagomes: are you saying that inspecting happens in the 'Filing Cred...' action?18:28
aweeksjust going to leave this here https://i.imgur.com/mgnDvDz.jpg18:28
lucasagomesjroll, I think that the problem happens if the node doesn't fail to deploy18:28
lucasagomesthen the deploy succeeded, but you gave your client something different than what he asked for or is paying for18:29
lucasagomesbecause you haven't checked that the machine is exactly what it says it is before giving the node to him18:29
*** athomas has quit IRC18:29
jrolllucasagomes: there's room for validating hardware every cycle, we need it, I agree18:29
lucasagomesclient asks for X,Y,A but gets X,Y,Z18:29
lucasagomesthat's hard to debug18:29
lucasagomesif it fails at deploy time, that's the best that can happen18:30
JayFThat type of verification is something I could see being done in a CLEAN state though18:30
lucasagomesrloo, yes18:30
JayFin fact we do some of it today in our downstream decommissioning18:30
lucasagomesJayF, a machine can sit in AVAILABLE for 1 month18:30
lucasagomessomething can happen there18:30
lucasagomesafter that you have no validation18:30
rloolucasagomes: what if the inspecting can't inspect cuz it can't talk to the node?18:31
JayFlucasagomes: that's a reasonable point; but I don't want to pay that inspection time on every deploy tbh18:31
lucasagomesrloo, maintenance with reason18:31
lucasagomessomeone has to look at the machine and figure out that it's inaccessible18:31
lucasagomesretry after fixing18:31
lucasagomesit's a retryable state18:31
jrollso18:31
lucasagomesJayF, it's cool, I mean you can make that a no-op18:32
jrollwe validate hardware today at deprovision time18:32
jrollnot at provision time18:32
lucasagomesJayF, that's why they're separate actions with diff names18:32
jrolland problems have been minimal18:32
rloolucasagomes: so then, in the proposed diagram (note, I'm not calling it a FSM;)), do you think we should move INSPECT* to ENROLL -> INSPECT* -> VALIDAT* ?18:32
lucasagomesyou don't have to implement if ur not using18:32
jrollI think we've had like... maybe 5 deploys of a not-quite-working node18:32
jroll*maybe*18:32
JayFout of 1000+ instance deploys on average per day18:32
lucasagomesjroll, JayF I understand ur use case18:33
NobodyCambrb18:33
jrolllucasagomes: I mean, I understand why it would be good18:33
lucasagomesand in no way does that diagram say you _have_ to implement that18:33
jrollI'm just saying in the real world, it's rare18:33
lucasagomeswhat it does say is that, for those who want it (and I think it's valid)18:33
lucasagomesit can be done18:33
lucasagomesit can cope with both use cases18:33
lucasagomesjroll, right, rare but can happen18:34
lucasagomesfor 1000+ ok yeah 5 times18:34
lucasagomesbut once you get bigger and bigger it can get worse and worse18:34
jrollnonono18:34
jroll1000+ per day18:34
jrollsince june18:34
jrolland maybe 5 total since june18:34
*** Masahiro has joined #openstack-ironic18:35
*** pelix has quit IRC18:35
lucasagomesright, but if you could capture those 5 cases beforehand wouldn't that be helpful?18:35
lucasagomesI think that we have been talking about not trusting hardware18:35
lucasagomesnot trusting BMCs all the time18:35
lucasagomesthat's what that action is for18:35
lucasagomesif u want to guarantee, you can18:35
lucasagomes5 of ur machines became non-conformant; few cases or not, it's a valid state that all servers can potentially end up in18:36
* jroll thinks18:36
jrollif validating hardware is part of the deploy task18:37
jrollshouldn't that just be done in the deploying state?18:37
*** ParsectiX has quit IRC18:38
lucasagomesit could, I could fit actions inside other actions. But again, a deploy failure is different than a conformity check18:38
lucasagomesa conformity check is not a failure18:38
lucasagomesit's a check that takes ur server to a state which says it's not conformant18:38
*** Masahiro has quit IRC18:40
lucasagomesbecause that's actually the state of the machine, "look I'm not what you're telling me I am"18:40
lucasagomesso please take a look at me :)18:40
jrolldunno, I tend to think it's a failure18:40
jrollDEPLOYFAIL, maintenance=true, maintenance_reason="RAM did not match what's in the db"18:41
JoshNanglucasagomes: it could go to a state other than deployfail18:41
JoshNangor that^18:41
jrollidk18:41
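(jroll's alternative, spelled out with the same hedged client sketch as above — set_maintenance is recalled from the python-ironicclient of that era; verify the signature before relying on it.)

```python
# Sketch: fail the node into maintenance instead of silently
# rewriting its record. Endpoint, token, and uuid are made up.
from ironicclient import client

ironic = client.get_client(1, os_auth_token='TOKEN',
                           ironic_url='http://localhost:6385/')
ironic.node.set_maintenance(
    'NODE_UUID', 'true',
    maint_reason="RAM did not match what's in the db")
```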
lucasagomesJoshNang, that's why we have that action YES or NO cause it has multipaths18:41
lucasagomesif, when DEPLOY fails, you can move to 2 diff states18:42
lucasagomesdepending on what is happening internally in deploy you have no predictability18:42
jrollI have some thoughts that people probably won't love18:42
jroll1) why did we decide to rewrite the entire state machine in the first place?18:43
jroll2) is it possible (and/or detrimental to the project) to punt on this for now?18:43
lucasagomes1) I think we need it, we have the decom use case for it18:44
lucasagomesthat by itself is a valid reason18:44
lucasagomes2) idk18:44
jrollwe need to add to it18:44
jrolldoes not mean we need to rewrite the whole thing18:45
lucasagomesyes18:45
lucasagomesbut the way we have states now it's hard18:45
jrollright18:45
jrollok18:45
lucasagomescause it's too few states doing too many things18:45
lucasagomeswith well defined states, if you need to add a state in between later, it's easy18:45
jrolljust curious, I couldn't remember why we started this in the first place18:45
jrollI think the summit thing went from "what does our state machine actually look like" to "fuck this thing, let's fix it"18:46
lucasagomescause you know exactly what to expect when the server gets there18:46
lucasagomeswithout multipaths18:46
lucasagomesjroll, yeah I think so too18:46
lucasagomeswe started drawing on that whiteboard and started thinking about a better model18:46
lucasagomeswhich was good :)18:46
jrolland now we're going to spend half the cycle fixing it18:47
lucasagomesyeah18:47
lucasagomeswe will have to do it at one point IMO18:48
devanandajroll: thank you18:48
devanandaso, we don't need to rewrite the whole thing. and we can't without breaking compat18:48
jrolldevananda: for what, I've said a lot18:48
devananda18:43:10 < jroll> 1) why did we decide to rewrite the entire state machine in the first place?18:49
devanandathat18:49
jrollah18:49
devanandawe needed to see how certain new states fit within the existing thing18:49
devanandaand without making it overly complex to reason about18:49
devanandastatements such as "we can not have multiple paths leading to a single state" and "every state/action can only do one thing" -- I have no idea where these notions came from18:50
devanandaI don't believe either of them are part of the current code, nor possible without rewriting the entire project, nor are they an integral part of our project's mission18:51
devanandawhich is, to manage and provision hardware in a cloud-like way (paraphrasing here)18:51
*** achanda has quit IRC18:51
devanandawe have existing users. actually, a lot of them18:51
devanandasupporting an upgrade path for them is more important to me than an idealized version of what a state machine should look like18:52
*** achanda has joined #openstack-ironic18:52
*** achanda has quit IRC18:52
*** achanda has joined #openstack-ironic18:52
jroll++18:52
*** arif-ali has quit IRC18:54
devanandalucasagomes: where did the idea that we shouldn't have multiple paths come from? I don't get it18:54
lucasagomesdevananda, we were trying to make the states predictable18:55
devanandalucasagomes: why is that important?18:55
devanandai mean, why is "no multipath" relevant to "be predictable"18:55
lucasagomesso you know exactly what steps the node has passed through before becoming what it says it is (the state)18:55
jrolllucasagomes: I'd like to point out that you can get that info from the logs18:56
devanandaa finite state machine has, for any given state, a set of acceptable inputs, which determine the next state that it transitions to18:56
lucasagomescause if you come from multiple places18:56
lucasagomesand state machines have no "memory"18:56
devanandalucasagomes: why is THAT important?18:56
lucasagomesyou can't tell where you came from if there are multiple paths18:56
jrollbut ironic has a database, and a log file18:56
jrollthe code may not know where it came from18:56
jrollbut that's irrelevant18:56
*** arif-ali has joined #openstack-ironic18:56
lucasagomesdevananda, it's good practice, I mean, we were trying to come up with something simple18:56
jrollthe code doesn't need to know that18:57
lucasagomessomething people can look at and see18:57
lucasagomeswhat my server is, and what it did to become it18:57
lucasagomesby simply looking at the diagram18:57
lucasagomesI get it's not what we are aiming for now18:57
jrollwhy do you need to know that unless you're debugging18:57
lucasagomesbut I don't know why so much bashing, or why we are causing frustration18:57
lucasagomessimply because we spent some time and effort to present something nice to the project18:58
jrollI agree this has been rough, and probably too much rage18:58
jrollwhich comes from everyone working on this for weeks and not making progress18:58
jrollit's not directed towards y'all specifically18:58
lucasagomesat no time did I want to hurt the project, or delay the current proposal18:58
lucasagomesjroll, yeah I understand18:59
lucasagomesand I have a thick skin too :) so no problem18:59
jrollha18:59
jrollI hope most people can make it to the midcycle, everyone deserves much beer18:59
lucasagomes+1 :D18:59
jrollI'll cover drinks for the whole week for anyone that +2's that spec right now :P19:00
rlooanyone that +2's that spec should get their core status removed ... don't listen to jroll ;)19:00
jrollI'll still buy drinks if the spec makes it through that way :P19:01
NobodyCamwe are doing what we think is best for the project, I have not felt otherwise throughout this whole discussion.19:01
rlooso are we moving forward... ?? lucasagomes? were your comments in the spec addressed?19:02
lucasagomesrloo, yeah, just adding a comment about "what nova expects"19:03
jrollwhat does nova have to do with this?19:03
lucasagomesnothing19:03
lucasagomesjust answering victor_lowther comment19:03
*** deva__ has quit IRC19:03
lucasagomescause he said nova expects ACTIVE and DELETED state from Ironic19:03
rloojroll: I think that is in reference to the name of the 'ACTIVE' state19:03
erwan_tafdevananda: in such a particular case, on such a simple use case, having 3 outputs on a state where 2 of them loop back to the same states means the state is not well defined19:03
lucasagomeswhich is not true19:03
jrolloh, right19:04
jrollyeah, nova code can be changed19:04
lucasagomesyeah19:05
*** andreykurilin__ has joined #openstack-ironic19:09
*** andreykurilin_ has quit IRC19:09
rlooso lucasagomes, wrt the spec, are you 'ok' with the state machine as proposed? no preboot, with CLEAN* states, ACTIVE/DELETED?19:14
rloolucasagomes: I know you didn't -1, but wanted to check ;)19:15
lucasagomesrloo, oh I'm ok with it yeah19:15
lucasagomeswill vote19:15
rloothx19:15
lucasagomesthere are some undefined things in the spec, I think Nisha pointed to one state which is not being described19:15
jroll\o/19:16
lucasagomesbut I'm overall ok with that yes19:16
rlooso everyone, how do we move the spec along? we need to flesh out the rest of it :-(19:16
jrolllucasagomes: nah, it was a *FAIL state, which is implicitly defined somewhere19:16
jrollunless Nisha pointed out something else19:16
* lucasagomes will recheck her comment and vote19:16
rloolucasagomes: Nisha's question has been answered. All the *FAIL were described together, not individually except for zapfail.19:17
*** krtaylor has quit IRC19:17
lucasagomesvoted19:19
lucasagomes+1'ed19:19
*** spandhe has joined #openstack-ironic19:20
lucasagomesaight folks, with that19:22
lucasagomesI will go get something to eat :)19:22
lucasagomeshave a good night everyone! I ttyl19:22
jrollnight lucas :)19:22
Shrewslucasagomes: weren't we supposed to have a name for our mascot today?19:22
* Shrews has been battling a messed up ubuntu upgrade and may have missed it in scrollback19:23
NobodyCamhave a good night lucasagomes19:23
lucasagomesShrews, holy moly yes19:23
lucasagomesI will close the voting!19:23
NobodyCamoh on the name19:24
lucasagomesNobodyCam, congrats! Pixie Boots19:24
rloonight lucasagomes. thx for staying later ;)19:24
lucasagomesis the name :D19:24
NobodyCamoh wow :)19:24
lucasagomesPixie Boots is the name of our new mascot! I will send the email later on19:24
rloogoodbye ironica19:24
lucasagomespool is closed!19:24
lucasagomesShrews, ta much for the reminder!19:24
jroll\o/19:24
NobodyCamnow is Pixie Boots a boy or girl?19:25
JayFWhy do you need to assign gender to an animation :)19:25
ShrewsNobodyCam: a perfectly ambiguous name  :)19:25
NobodyCam:)19:25
PaulCzarironic-conductor doesn't print out its config options when it starts in debug mode19:25
Shrewsthough i will miss the opportunity to design a Ironic Maiden t-shirt19:25
PaulCzarI have tftp_server = 172.16.255.100 set in ironic.conf19:26
PaulCzarbut it sets the dhcp_option to 10.0.2.2 on the neutron port19:26
lucasagomesNobodyCam, it can be any :D19:26
lucasagomesor both at the same time19:26
NobodyCami like it19:27
*** spandhe has quit IRC19:27
lucasagomesemail sent :D19:27
lucasagomesfood time :) thanks for everyone that voted!19:27
* rloo thinks we should vote on the gender of pixie boot ;)19:27
jrollPaulCzar: wtf19:28
* lucasagomes should draw a pair of boots 19:28
PaulCzarfek ..  it's not under the [pxe] section19:28
NobodyCamlucasagomes: ++19:28
jrolloh, ha, I do that all the time19:28
PaulCzarbut still, conductor should print out the config options set19:28
lucasagomeso/ night everyone19:28
*** lucasagomes is now known as lucas-dinner19:29
jrollPaulCzar: I thought it did :|19:29
JayFPaulCzar: that'd be a good bug to file --> make sure to tag it low-hanging-fruit19:29
*** krtaylor has joined #openstack-ironic19:30
*** spandhe has joined #openstack-ironic19:39
*** pensu has quit IRC19:40
*** achanda has quit IRC19:41
*** achanda has joined #openstack-ironic19:42
NobodyCamwhy is it so hard to motivate one's self when it's grey and raining outside :-p19:45
openstackgerritMerged openstack/ironic-python-agent: Use _ instead of - for config options  https://review.openstack.org/13163219:46
*** achanda has quit IRC19:46
PaulCzarit seems I need to run the tftpd server in the network namespace that belongs to the neutron dhcp agent ?20:02
*** dprince has quit IRC20:04
NobodyCamPaulCzar: namespace or subnet?20:05
PaulCzarnamespace20:08
PaulCzarrunning it in the dhcp namespace my node connects and manages to get pxelinux.020:11
PaulCzarbut then it fails:  unable to locate configuration file20:12
NobodyCamPaulCzar: do you have a map file in the tftp dir20:13
PaulCzarhmmm no ?20:14
PaulCzardoes ironic create that ?20:14
NobodyCamno we don't. I thought we landed a patch that added that to the docs but I'm not seeing it20:16
PaulCzarthat'd do it20:16
NobodyCamPaulCzar: https://github.com/openstack/tripleo-image-elements/blob/master/elements/ironic-conductor/install.d/69-ironic-tftp-support#L42-L4320:17
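(For reference, the kind of map file the linked element creates — recalled from the Ironic docs of the same era, so check the link for the authoritative version; in.tftpd is pointed at it with --map-file /tftpboot/map-file.)

```
# /tftpboot/map-file: remap requests so absolute paths and bare
# filenames both resolve under /tftpboot.
re ^(/tftpboot/) /tftpboot/\2
re ^/tftpboot/ /tftpboot/
re ^(^/) /tftpboot/\1
re ^([^/]) /tftpboot/\1
```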
victor_lowtheryay, more comments on the spec!20:20
NobodyCamrloo: (or anyone) am I nuts did we not add a patch to the docs about the tftp map file?20:21
jrollI swear it was there20:21
*** ChuckC has quit IRC20:21
* NobodyCam thinks a dependent service quick config ref page may be a good thing20:22
rlooi thought there was something... looking...20:22
PaulCzarNobodyCam: I run it with the map file and now it can't find pxelinux.020:23
*** Masahiro has joined #openstack-ironic20:23
*** Nisha has quit IRC20:24
NobodyCamoh joy20:24
NobodyCamanything in the log about where it's looking20:24
*** dprince has joined #openstack-ironic20:24
rloosorry NobodyCam; I don't see anything about a map file :-(20:27
*** Masahiro has quit IRC20:28
NobodyCamya20:30
NobodyCamoh well I guess I'm just going crazy :-p20:31
PaulCzarNobodyCam: okay ... if I run it without --secure ( because security is for chumps ) and the mapfile it works20:33
PaulCzarsomething to do with absolute symlinks20:33
PaulCzarso it boots up to a (initramfs) prompt20:36
PaulCzarso that's closer20:36
NobodyCamPaulCzar: are you running selinux or apparmor or other such20:36
PaulCzaron the host ?20:37
NobodyCamtftp host20:37
PaulCzarit's a fairly standard ubuntu so I think it's got apparmor20:38
NobodyCamoh brb ... fresh coffee is ready20:38
*** mrda-away is now known as mrda20:38
mrdaMorning Ironic20:38
NobodyCamPaulCzar: ack was just thinking of what would block access to files20:38
NobodyCammorning mrda20:39
PaulCzarNobodyCam: its a tftp thing20:39
PaulCzarwhen in secure mode it doesn't allow following symlinks outside of the tftp directory ...  apparently absolute symlinks make it think they're outside20:40
PaulCzarIronically ( NOOO ) I found info on it here - https://bugs.launchpad.net/ironic/+bug/128026720:42
NobodyCamPaulCzar: yep20:45
*** penick has joined #openstack-ironic20:52
*** igordcard has joined #openstack-ironic21:11
*** Marga_ has quit IRC21:11
*** arif-ali has quit IRC21:11
*** Marga_ has joined #openstack-ironic21:12
openstackgerritChris Krelle proposed openstack/ironic: Add info on creating a tftp map file  https://review.openstack.org/13886421:12
*** arif-ali has joined #openstack-ironic21:13
*** igordcard has quit IRC21:14
*** Marga_ has quit IRC21:17
*** ryanpetrello has joined #openstack-ironic21:18
*** Marga_ has joined #openstack-ironic21:18
*** ChuckC has joined #openstack-ironic21:18
jrolldevananda: NobodyCam: found a super interesting thing :)/b 8021:24
jrollgahhhhhhh21:24
NobodyCambingo21:24
jrollanyway21:24
jrollhttps://github.com/openstack/nova/blob/master/nova/scheduler/filter_scheduler.py#L265-27321:24
NobodyCamit's a fun game but I tend to like video games better21:24
jrolldefault is 1.21:24
jrollthose are ordered in the same order they come back from ironic21:25
jrollwhich tends to be the same21:25
jrollso you're picking the first available, every time21:25
jrollif your servers have shorter cycle times, some will be used much more often than others21:25
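(A paraphrase, from memory, of the Juno-era logic at the URL above — see the real code for the authoritative version; the option was scheduler_host_subset_size, default 1, so the top-weighed host wins every time.)

```python
# Sketch of nova's host-subset pick: random within the top N.
import random


def choose_host(weighed_hosts, host_subset_size=1):
    # Assumes a non-empty list. Clamp to a sane range, then pick
    # randomly within the subset. With the default of 1 this is
    # deterministic: the first (best-weighed) host always wins.
    size = max(1, min(host_subset_size, len(weighed_hosts)))
    return random.choice(weighed_hosts[:size])
```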
NobodyCamahh the old add a new node to cassandra and watch it fall over issue21:26
NobodyCam:-p21:26
jrolllol21:26
jrollbasically21:26
JayFNobodyCam: you know they fixed that with vnodes in cass >= 221:27
NobodyCam:)21:27
openstackgerritOpenStack Proposal Bot proposed openstack/ironic-python-agent: Updated from global requirements  https://review.openstack.org/13596421:36
jroll^ funny how it produces a new patchset even when it doesn't change anything, all 4 patchsets there are the same21:39
* devananda steps away for a while21:39
*** alexpilotti has quit IRC21:40
rlooouch, forgot to ask devananda. NobodyCam, wrt that nova bug https://bugs.launchpad.net/nova/+bug/137937321:41
rlooNobodyCam: any idea whether/how long we want to support/proxy nova BM API?21:42
JayFjroll: ^ if you wanna land those reqs, I just +2'd it21:42
jrollJayF: waiting for tests, yo21:43
*** Marga_ has quit IRC21:43
JayFjroll: nothing changed since patch set 3 ?21:43
jrollJayF: yeah, still21:43
jrollidk, you're the one that taught me to wait :P21:43
*** Marga_ has joined #openstack-ironic21:43
jroll+A21:43
*** Marga_ has quit IRC21:44
*** Marga_ has joined #openstack-ironic21:44
*** jgrimm is now known as zz_jgrimm21:46
NobodyCamrloo: I'm not sure21:48
NobodyCambut I have a question posted on that bug21:48
rlooNobodyCam: ok. I didn't actually look in detail at that bug, but it seems like if/how we address it might have to do with what support if any, we provide.21:48
*** Marga_ has quit IRC21:48
*** Marga__ has joined #openstack-ironic21:48
*** dprince has quit IRC21:49
*** Marga__ has quit IRC21:50
NobodyCamrloo: I believe that error came from a misconfig, the user still has the baremetal driver.. but i'm not 100% sure21:50
rlooNobodyCam: so eg, if you take master/nova, there is no BM code there. So should a 'nova baremetal-show-whatever' proxy to ironic if ironic is running? Or should it just return an error.21:50
NobodyCambut ya21:50
*** Marga_ has joined #openstack-ironic21:50
NobodyCamrloo: that's really a question for nova21:50
NobodyCamthey were concerned about backwards compatibility issues21:51
*** Marga_ has quit IRC21:52
jrollI think what they're concerned about21:52
jrollis what happens when someone runs a baremetal client command21:52
*** Marga_ has joined #openstack-ironic21:52
jrollagainst nova that doesn't have that21:52
jrollthat being baremetal21:52
jrollbecause people may upgrade clients without upgrading nova21:53
NobodyCamya21:53
rlooNobodyCam: did you see jogo's comment in the bug (and in IRC earlier today) about whether we still need nova/api/openstack/compute/contrib/baremetal_nodes.py?21:53
*** andreykurilin_ has joined #openstack-ironic21:53
rlooNobodyCam: without that extension, there is no proxying.21:53
*** andreykurilin__ has quit IRC21:53
NobodyCamyea21:53
NobodyCamThat's the file we hacked the proxying support into21:54
rlooso if he's good with deleting it and another nova core is good with it, i'm fine with it too. but... i'm not sure what the 'right' thing might be.21:54
rlooI think since BM code is gone, it makes sense to get rid of the proxying too.21:55
NobodyCamI would point every thing at https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/baremetal_nodes.py#L9521:56
NobodyCamfor a cycle21:56
*** penick has quit IRC21:57
rlooNobodyCam: ooo, that's a good idea (if we agree we don't want to proxy)21:57
NobodyCambut that's really up to the nova folk21:57
rlooNobodyCam: ok, jogo wanted to know what devananda thought, but I forgot to ask him. He'll be back though;)21:58
*** Marga_ has quit IRC22:02
*** Marga_ has joined #openstack-ironic22:02
jjohnson2Well, I got it to the point of sending a bad RAKP2 packet22:03
openstackgerritJarrod Johnson proposed stackforge/pyghmi: Implement server side IPMI protocol (WIP)  https://review.openstack.org/13810922:03
jjohnson2and now it is time to vacation for a while..22:03
NobodyCamjjohnson2: :) so we'll see you next year?22:04
jjohnson2January 5th is my return date22:04
NobodyCamhave a great time away22:04
jjohnson2between vacation and comp time, I have a good time to spend22:04
NobodyCam:)22:04
jjohnson2NobodyCam, but after fixing rakp2, then all that's needed is one more function before easy street22:05
jjohnson2oh I'm dumb...22:05
NobodyCam:) jjohnson2 thank you so much for this22:05
*** Marga_ has quit IRC22:06
jjohnson2rakp2 is fine22:06
jjohnson2I was actually using the wrong password...22:06
jjohnson2so it legitimately bounced me22:06
*** Marga_ has joined #openstack-ironic22:06
*** penick has joined #openstack-ironic22:06
NobodyCamd'oh22:06
jjohnson2myserver = sin.IpmiServer({'USERID': 'password'})22:06
jjohnson2import pyghmi.ipmi.private.serversession as sin22:07
jjohnson2import pyghmi.ipmi.private.session as sess22:07
jjohnson2then loop on sess.Session.wait_for_rsp(30)22:07
jjohnson2has been my test harness22:07
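(jjohnson2's harness from the lines above, reassembled in import-first order — untested, names exactly as given.)

```python
# jjohnson2's virtual-BMC test harness, reordered to run top-down.
import pyghmi.ipmi.private.serversession as sin
import pyghmi.ipmi.private.session as sess

# One BMC account: username 'USERID', password 'password'.
myserver = sin.IpmiServer({'USERID': 'password'})

# Pump the session machinery so the server can answer requests.
while True:
    sess.Session.wait_for_rsp(30)
```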
jjohnson2but oh well, rakp3 handling is next22:07
jjohnson2but my implementation is good enough for an attacker to offline attack the user password, so it's well on its way to a proper ipmi implementation22:08
NobodyCamjjohnson2: are you adding cipher zero support :-p22:08
jjohnson2nope22:08
NobodyCamj/k22:08
jjohnson2cipher 0 does not work22:08
jjohnson2null usernames, also don't work22:08
jjohnson2you shall have a username, it shall be ipmi 2, and screw cipher suite 022:09
jjohnson2support for cipher suites 2 and whatever the sha256 numbers are would come after the first pass22:09
NobodyCam++ :)22:09
jjohnson2pretty easy to add, but ipmitool doesn't use those by default and neither does pyghmi22:09
jjohnson2# ipmitool -I lanplus -U USERID -P password -L Administrator+ -H 127.0.0.122:10
jjohnson2> Error: no response from RAKP 3 message22:10
jjohnson2that's where the virtual bmc gets to so far22:10
NobodyCamthat's a lot more than my initial attempt got22:10
jjohnson2hmm, and needs administrator+, I might be acting like an old intel bmc22:10
jjohnson2all fun for the new year22:10
NobodyCam:)22:11
jjohnson2wasn't as bad once I actually started working on it22:11
*** Marga_ has quit IRC22:12
*** Masahiro has joined #openstack-ironic22:12
NobodyCamjjohnson2: this will advance our testing big time.22:12
*** Marga_ has joined #openstack-ironic22:12
NobodyCamso many many thank you's from me22:13
jjohnson2well, hopefully you remain patient22:13
jjohnson2and I'll party down for a few weeks22:13
NobodyCam:)22:13
NobodyCamhave a great time jjohnson2 :)22:14
* NobodyCam will brb22:14
*** Marga_ has quit IRC22:15
*** Marga_ has joined #openstack-ironic22:15
*** Marga_ has quit IRC22:16
*** Marga_ has joined #openstack-ironic22:16
rloojjohnson2: are you actually leaving your house?22:16
*** Masahiro has quit IRC22:17
jjohnson2rloo, I'll be going all the way to halfway across the state, big travel in my terms22:17
rlooWOW! good for you jjohnson2!22:17
openstackgerritMerged openstack/ironic-python-agent: Updated from global requirements  https://review.openstack.org/13596422:17
*** jjohnson2 has quit IRC22:23
*** Marga_ has quit IRC22:27
*** Marga_ has joined #openstack-ironic22:27
*** penick has quit IRC22:35
*** penick has joined #openstack-ironic22:36
*** penick has quit IRC22:36
*** penick has joined #openstack-ironic22:38
*** davideagnello has quit IRC22:41
*** davideagnello has joined #openstack-ironic22:43
*** krtaylor has quit IRC22:52
*** Marga_ has quit IRC22:52
*** Marga_ has joined #openstack-ironic22:53
*** mjturek has quit IRC23:04
*** krtaylor has joined #openstack-ironic23:04
*** penick has quit IRC23:06
*** pcrews has quit IRC23:13
*** erwan_taf has quit IRC23:16
*** romcheg1 has quit IRC23:19
*** jjohnson2 has joined #openstack-ironic23:20
*** naohirot has joined #openstack-ironic23:23
*** igordcard has joined #openstack-ironic23:24
*** ryanpetrello has quit IRC23:24
Haomengmorning Ironic:)23:26
NobodyCammorning Haomeng :)23:27
HaomengNobodyCam: :)23:27
NobodyCam:)23:28
*** penick has joined #openstack-ironic23:31
NobodyCamw00 h00 actually was able to help someone in the internal ironic channel :)23:33
JayFwhy do you have an internal ironic channel :(23:33
NobodyCam:-p no-one is ever in it ...23:34
* jroll looks at his internal ironic channel23:34
NobodyCamlol23:34
JayFjroll: I don't see any ironic channel, just a place where dentists hang out to talk about teeth <.< >.>23:34
jrollto be fair, we push people this way if it's not an internal question :P23:34
jrolllol23:34
NobodyCamJayF: internal support of corp folks23:34
*** jjohnson2 has quit IRC23:35
NobodyCammany don't have outside IRC access23:35
jrollah yeah23:35
JayFI'm mainly just poking fun :) We do redirect non-internal questions to here, even if we end up answering them23:37
NobodyCam:) I do try but for many it's a real pain to access freenode23:39
NobodyCamso we have a "hipChat" channel23:39
jrollport 443 ftw23:40
jrollhipchat isn't too bad really23:40
NobodyCami (actually) like the client23:40
NobodyCamsupports last-line editing with s/a/b/23:40
*** ryanpetrello has joined #openstack-ironic23:51
naohirotgood morning ironic23:53
mrdamorning naohirot23:53
NobodyCammorning naohirot23:53
naohirotHaomeng: goodmorning23:53
mrdamorning Haomeng23:53
NobodyCamPaulCzar: got it all working?23:53
Haomengnaohirot: Good Morning, naohirot:)23:53
Haomengmrda: Good morning, mrda:)23:53
naohirotmrda: NobodyCam: good evening23:54
naohirotmrda: you are in the US, right?23:54
mrdanaohirot: No, Australia23:54
naohirotmrda: Oh really, same time zone!23:55
mrdanaohirot: very close :)23:55
*** cohn has joined #openstack-ironic23:55
naohirotmrda: Yeah, almost same :-)23:55
naohirotjroll: JayF: good evening23:56
jrollhi there naohirot23:56
spandheanybody here tried ipxe yet?23:56
NobodyCamhi spandhe, I believe several folks have used it23:57
JayFhiya23:58
spandheNobodyCam: awesome! :) I am trying to understand the ipxe config template format..23:58
spandheany idea why every option has boot in the end?23:58
NobodyCamso it boots23:59
