Tuesday, 2014-11-25

openstackgerritDevananda van der Veen proposed openstack/ironic: Add new enrollment and troubleshooting doc sections  https://review.openstack.org/13620200:03
devananda(I forgot we had a separate command for set maintenance mode)00:04
devananda(oh! I need more coffee)00:04
NobodyCamlol coffee is always a good thing00:05
NobodyCamspeaking of I should get some food00:12
*** rushiagr_away is now known as rushiagr00:17
*** romcheg has quit IRC00:19
*** penick has quit IRC00:20
*** penick has joined #openstack-ironic00:24
JayFhttps://review.openstack.org/#/c/136867/2 is tested working locally if anyone has devstack power to point at it00:26
jrolldtroyer is a good person to bug00:27
jroll(#openstack-qa)00:27
HaomengJayF: +1:)00:28
*** rushiagr is now known as rushiagr_away00:28
devanandajroll: https://bugs.launchpad.net/ironic/+bug/139594600:28
jrolldevananda: I completely agree, I raised this concern when people asked me to put that line there00:28
jrolldo you think just removing it is ok?00:29
jrollalso00:29
jroll"The PATCH mechanism was not actually deprecated or removed"00:29
jrollthe goal was to do that00:29
jrollbut that should have an api version bump00:30
jrollbackwards compat is hard :|00:30
openstackgerritJim Rollenhagen proposed openstack/ironic: Remove useless deprecation warning for node-update maintenance  https://review.openstack.org/13693400:32
jrolldevananda: at any rate, ^00:32
*** penick has quit IRC00:33
*** zhenzanz has joined #openstack-ironic00:42
JayFadam_g: I'm digging down the libvirt path00:42
JayFadam_g: if we can reconfigure it to do the right thing, that'd be great00:42
*** zhenzanz_ has joined #openstack-ironic00:42
devanandajroll: cheers00:44
devanandaalso this: 00:30:07 < jroll> backwards compat is hard :|00:44
jrollyeah.00:45
jrollfolks wanted to deprecate setting maintenance that way, idk if that's good or bad, consistency is nice to have00:46
jrollcan we have "micro-versions"?00:46
adam_gJayF, im fairly certain its a 'limitation' of qemu--there are alternatives to file logging (ie, tcp/telnet)00:46
jroll(which, to me, means real actual versioning)00:46
*** zhenzanz has quit IRC00:47
*** zhenzanz_ is now known as zhenzanz00:47
devanandammm, micro-versioning the REST API ... that'd be great00:48
jrollyes00:48
devanandaI mean, it sounds nice00:48
devanandaalso, a proposal to add PUT /v1/nodes is up -- https://review.openstack.org/#/c/130228/00:48
jrollyeah, I don't really see the point in that, I guess00:49
jrollother than not having to diff?00:49
JayFadam_g: yeah, I'm well down the rabbithole now :P00:49
jrollI'm of the opinion that if you're changing a node, you should know what you're changing00:49
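The "micro-versions" being floated here amount to the client requesting a "major.minor" API version and the server validating it against the range it supports. A minimal illustrative sketch of that negotiation step (the `parse_api_version` helper and the version bounds are made up for this example, not what Ironic eventually shipped):

```python
def parse_api_version(header, minimum=(1, 0), maximum=(1, 4)):
    """Parse a client-requested 'major.minor' API version string.

    Returns the (major, minor) tuple to serve, falling back to the
    minimum when the client sends no version header.  Raises
    ValueError for malformed or unsupported versions.
    """
    if not header:
        return minimum
    try:
        major, minor = (int(part) for part in header.split('.'))
    except ValueError:
        raise ValueError('version must look like "major.minor"')
    requested = (major, minor)
    if not (minimum <= requested <= maximum):
        raise ValueError('unsupported version %s' % header)
    return requested
```

The point of "real actual versioning" is that behaviour changes (like deprecating PATCH of maintenance) gate on the negotiated tuple instead of silently changing for all clients.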
JayFadam_g: I also thought about things like adding a hook in the ssh driver to "rotate the logs" since that driver is essentially targeted at devstack00:51
JayFadam_g: but that seems like it could be a layering violation :P00:52
adam_gJayF, we just need to be able to spawn the equiv of 'while [ ! -f $console_log_file ]; do sleep 1; done; sudo tail -f $console_log_file >$DATA/logs/persistent_$console_log_file', which is hard to do given how devstack does its process tracking. might be easiest to just add a 1 line script to ./tools/ironic/ that it can call rather than the in-line scripting00:52
*** ryanpetrello has quit IRC00:52
adam_grun_process() or whatever gets used00:52
JayFadam_g: I'd be +1 to putting that script in00:53
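The one-liner adam_g describes (wait for the console log to appear, then tail it into a persistent copy) can be sketched in Python; this is only an illustrative stand-in for whatever script lands in ./tools/ironic/, not devstack's actual implementation:

```python
import os
import time


def follow(path, poll_interval=1.0):
    """Wait until `path` exists, then yield lines as they are
    appended, so a caller can mirror them into a persistent log
    that survives libvirt truncating the original on reboot."""
    while not os.path.exists(path):
        time.sleep(poll_interval)
    with open(path) as f:
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                # no new data yet; wait and retry like `tail -f`
                time.sleep(poll_interval)
```

A caller would append each yielded line to something like a persistent_console.log under the devstack logs directory; handling the file being truncated/recreated on reboot would still need extra logic.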
adam_gJayF, ill poke at it tomorrow. i had a bunch of devstack hacking i wanted todo today but was side tracked with stable branch maintenance00:54
JayFtenfour00:54
JayFI'm just looking into this now and far into the rabbithole00:54
JayFbecause that log file getting blanked is no bueno :(00:55
JayFI'm thinking I change something IPA logs00:55
JayFI wanna validate it; I can't , because the log is all gone00:55
JayFetc etc00:55
adam_gyea00:55
JayFmakes me glad that it didn't require more digging to get that low-hanging IPA change into devstack00:56
JayFwould've been highly annoyed to put something in the image that does that for me then have it get nuked by libvirt on reboot, haha00:56
*** ryanpetrello has joined #openstack-ironic01:02
jrolldevananda: so you're going to thank me for that patch without reviewing it? :P01:03
*** r-daneel has quit IRC01:07
*** ChuckC has quit IRC01:12
*** dlaube has quit IRC01:17
*** chenglch has joined #openstack-ironic01:18
*** ChuckC has joined #openstack-ironic01:18
*** ChuckC has quit IRC01:22
*** ChuckC has joined #openstack-ironic01:23
*** ryanpetrello has quit IRC01:35
*** achanda has quit IRC01:50
*** nosnos has joined #openstack-ironic02:01
*** zhenzanz_ has joined #openstack-ironic02:03
*** zhenzanz has quit IRC02:04
*** zhenzanz_ is now known as zhenzanz02:04
*** yjiang5 has quit IRC02:32
*** Haomeng|2 has joined #openstack-ironic02:41
*** Haomeng has quit IRC02:41
*** vipul has quit IRC02:43
*** ramineni has joined #openstack-ironic02:53
*** zhenzanz has quit IRC03:03
*** zhenzanz has joined #openstack-ironic03:08
*** zhenzanz_ has joined #openstack-ironic03:11
*** zhenzanz has quit IRC03:13
*** zhenzanz_ is now known as zhenzanz03:14
*** nosnos has quit IRC03:29
*** rloo has quit IRC03:34
*** ryanpetrello has joined #openstack-ironic03:34
*** spandhe has quit IRC03:42
*** nosnos has joined #openstack-ironic04:03
*** pensu has joined #openstack-ironic04:12
naohirotdevananda: NobodyCam: I could access the Awesome Developer dashboard: http://perm.ly/ironic-review-dashboard when connecting directly to the Internet.04:15
naohirotdevananda: NobodyCam: so the root cause seems to be http proxy or firewall.04:16
*** rameshg87 has joined #openstack-ironic04:19
naohirotrameshg87: Hi, good morning04:25
rameshg87naohirot, hello, good morning04:26
naohirotrameshg87: I waited for you :-) do you have time right now?04:26
rameshg87naohirot, we can talk, but for an hour my responses might be slow as i am in another meeting. :)04:27
*** vipul has joined #openstack-ironic04:28
naohirotrameshg87: I see, I know your situation. I don't mind if your responses are slow. so let me ask :-)04:29
rameshg87naohirot, sure go ahead :)04:29
naohirotrameshg87: http://docs.openstack.org/developer/ironic/drivers/ilo.html#deploy-process04:29
naohirotrameshg87: the last bullet, "On the first and subsequent reboots iscsi_ilo driver attaches this boot ISO image in Swift as Virtual Media CDROM and then sets iLO to boot from it."04:29
naohirotrameshg87: my question is that which ISO image does "this boot ISO image" refer to?04:30
rameshg87naohirot, just have a look at the bullet point above it04:31
rameshg87naohirot, "The driver bundles the boot kernel/ramdisk for the Glance deploy image into an ISO and then uploads it to Swift. This ISO image will be used for booting the deployed instance."04:31
naohirotrameshg87: the first boot is for deploy, which copies the user OS through iscsi.04:32
naohirotrameshg87: the second and subsequent boot must use different ISO from the first boot.04:33
naohirotrameshg87: Am I correct?04:33
naohirotrameshg87: Is "this boot ISO image" same as deploy-ramdisk.iso or not?04:36
naohirotrameshg87: I believe that the last bullet should be like this:04:42
rameshg87naohirot, yes it's correct04:43
rameshg87naohirot, first boot is with deploy iso.  this is different from the iso used for second and subsequent boots04:44
naohirotrameshg87: On the *second* and subsequent reboots iscsi_ilo driver attaches *newly created* boot ISO image in Swift as Virtual Media CDROM and then sets iLO to boot from it.04:44
rameshg87naohirot, is that last bullet misleading ?04:44
rameshg87naohirot, ah okay. yeah i agree. *newly created* avoids ambiguity04:45
*** ryanpetrello has quit IRC04:45
naohirotrameshg87: Yes, the wording confused me a lot, such as "The deploy kernel/ramdisk", "the deploy ramdisk image", "the ISO deploy ramdisk image", etc.04:46
naohirotrameshg87: I wondered those are same image or different image.04:47
naohirotrameshg87: Generally speaking, is there any difference between "ramdisk image" and "kernel/ramdisk image"?04:49
*** subscope_ has joined #openstack-ironic04:55
rameshg87naohirot, okay. i will fix the wording.05:01
rameshg87naohirot, yeah, "ramdisk image" and "kernel/ramdisk image" are the same05:01
rameshg87naohirot, basically the functionality of deployment is within the ramdisk, that's why we give more priority for ramdisk :)05:01
rameshg87naohirot, priority in the sense, we mention it alone05:02
rameshg87naohirot, i just saw your spec for virtual media deploy driver: https://review.openstack.org/#/c/134865/3/specs/kilo/irmc-deploy-driver.rst05:04
naohirotrameshg87: Okay, I understood.05:05
rameshg87naohirot, i have one question - is there any reason why you prefer nfs/cifs instead of the current iscsi approach ?05:05
naohirotrameshg87: Yes, that's the reason I have a lot of questions:-)05:05
rameshg87naohirot, actually i am interested in making the virtualmedia deploy driver independent of the virtual media implementation05:06
naohirotrameshg87: NFS/CIFS is based on my misunderstanding.05:06
naohirotrameshg87: I thought that iscsi_ilo uses iscsi to mount virtual media. this was my misunderstanding.05:07
rameshg87naohirot, ah okay05:07
rameshg87naohirot, iscsi is used for copying the image to the bare metal's disk05:08
naohirotrameshg87: iRMC uses NFS or CIFS to mount virtual media.05:08
naohirotrameshg87: so I renamed it iscsi_irmc driver.05:08
*** mikedillion has quit IRC05:09
rameshg87naohirot, so you need an nfs share/cifs share for mounting it as virtual media ?05:09
naohirotrameshg87: yes, devananda gave me advice that I need to use Manila.05:10
naohirotrameshg87: so I need to implement image preparation part using Manila instead of Swift.05:11
*** Nisha_ has joined #openstack-ironic05:14
rameshg87naohirot, okay05:15
naohirotrameshg87: do you think that I can reuse most of your code even if I use Manila?05:16
rameshg87naohirot, i am not sure05:16
rameshg87naohirot, can you boot from your virtual media ?05:17
naohirotrameshg87: Yes, I can boot from virtual media.05:17
rameshg87naohirot, can you boot from an iso image ?05:17
naohirotrameshg87: Yes, I can boot from ISO image too.05:18
rameshg87naohirot, how do we do that ? you give the nfs share location of the iso image ?05:18
naohirotrameshg87: iRMC mounts ISO as a virtual media through NFS or CIFS.05:19
rameshg87naohirot, okay, but if i want to mount to an iso image through nfs05:20
naohirotrameshg87: From baremetal server's point of view, it is CDROM/DVD.05:20
rameshg87naohirot, what's the input to iRMC ?05:20
rameshg87naohirot, is it like 1.2.3.4:/path/to/image.iso ?05:20
naohirotrameshg87: Yes, exactly05:21
rameshg87naohirot, does it support http ?05:21
rameshg87naohirot, mounting an image over http ?05:21
naohirotrameshg87: Unfortunately, no.05:21
rameshg87naohirot, :(05:21
rameshg87naohirot, then i think % of code you can reuse from existing virtual media deploy driver might be less :(05:23
*** ryanpetrello has joined #openstack-ironic05:23
naohirotrameshg87: Okay, how about diskimage-builder and IPA coreos image part?05:24
rameshg87naohirot, yeah, we can reuse those05:24
rameshg87naohirot, the main reason is we rely extensively on temp-url, a feature of swift which provides access to a swift object through http05:25
naohirotrameshg87: Okay, I'm relieved :-)05:25
*** subscope_ has quit IRC05:25
rameshg87naohirot, since proliant bare metal can use virtual media over http, it made things easy05:26
rameshg87naohirot, so we can use glance images directly as bare metal's virtual media05:26
naohirotrameshg87: so the main difference is how to mount virtual media.05:26
rameshg87naohirot, yes, that also makes the difference in how we prepare for virtual media boot05:27
rameshg87naohirot, i think the first question is05:27
rameshg87naohirot, if user uploads the ISO as a glance image, how do i make the bare metal node boot from it05:27
rameshg87naohirot, in our case, we could just generate the swift-temp-url for the glance image (which is http) and attach it as virtual media05:28
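The temp-url rameshg87 mentions is a standard Swift feature: the URL carries an HMAC-SHA1 signature over the HTTP method, an expiry timestamp, and the object path, signed with an account key. A minimal sketch of the signing step (the account/container/object names and key below are made up):

```python
import hmac
from hashlib import sha1


def make_temp_url(method, path, key, expires_at):
    """Build a Swift-style temp URL query string for `path`
    (e.g. '/v1/AUTH_acct/container/boot.iso'), valid for `method`
    requests until the Unix timestamp `expires_at`."""
    # Swift signs exactly this three-line body.
    body = '%s\n%s\n%s' % (method, expires_at, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires_at)
```

Because the signature only grants access to one object, for one method, for a limited time, no shared secret has to be handed to the BMC or the deploy ramdisk, which is the security point devananda raises below.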
naohirotrameshg87: Is that ISO User OS?05:29
rameshg87naohirot, i mean the deploy ISO image05:29
*** nosnos has quit IRC05:30
*** nosnos_ has joined #openstack-ironic05:30
rameshg87naohirot, which contains the deploy kernel/deploy ramdisk05:30
naohirotrameshg87: I'm not sure right now, but at least I can copy the deply image to the NFS disk which Manila provides.05:31
naohirots/deply/deploy/05:31
naohirotrameshg87: I think it's not efficient, but I need to investigate.05:32
rameshg87naohirot, yeah.05:32
rameshg87naohirot, even i have concerns with it.05:32
naohirotrameshg87: thank you for sparing your time, I'll read your source code.05:35
rameshg87naohirot, wc :)05:35
naohirot:-)05:35
*** rakesh_hs has joined #openstack-ironic05:40
devanandanaohirot: i think you might have misunderstood. Manila is a new openstack project whose goal is to provide an NFS service in the cloud05:46
devanandanaohirot: requiring an operator to have an NFS server available for Ironic and iRMC to use is orthogonal to /how/ that NFS server is provided05:48
devanandanaohirot: as an operator, perhaps I set up my own NFS server, or perhaps I use a hardware NAS that I already have ... or perhaps I use Manila05:49
devanandarameshg87: another reason for using swift is security - the access to an object is encoded in the temp-url, and you do not need to give out a shared secret (password or token) to the ramdisks05:50
rameshg87devananda, yes05:51
naohirotdevananda: Hi05:52
* naohirot I'm reading your message05:54
devanandanaohirot: if all conductors share a single NFS volume, you should look at the file caching done by ironic/drivers/modules/image_cache.py05:54
devanandanaohirot: and you may need to consider using file-based locks (on the NFS volume) rather than the process locks we use within that module today05:54
devanandanaohirot: so that multiple conductors do not attempt to write the same file at the same time05:55
naohirotdevananda: do you think that it is one way to use NFS of conductor's OS?05:57
devanandanaohirot: what does "use NFS of conductor's OS" mean?05:57
naohirotdevananda: I meant the OS where ironic conductor is running.05:58
devanandanaohirot: sorry, i still don't understand the question :(05:59
naohirotdevananda: I couldn't understand "as an operator, perhaps I set up my own NFS server, or perhaps I use a hardware NAS that I already have ... or perhaps I use Manila"05:59
devanandak06:00
naohirotdevananda: The OS I'm talking about is for example Ubuntu in case of DevStack.06:01
devanandanaohirot: as I understood your spec, the iRMC driver will require: 1) that there exists an NFS / CIFS server, 2) that the driver (and thus the conductor) has access to files on the NFS / CIFS server, 3) that the hardware BMC also have access to the NFS / CIFS server06:01
devanandanaohirot: and then, iRMC driver will store some file(s) on the NFS / CIFS server, inform the BMC how to attach those files (path, credentials, etc), and then initiate a boot from those files06:01
devanandais that correct?06:02
naohirotdevananda: Yes06:02
devananda:)06:02
devanandanaohirot: so, to my mind, it is reasonable to consider that NFS server as a separate operator requirement06:02
devanandaan operator MUST have an NFS server, but neither Ironic nor iRMC care HOW that NFS server is run06:03
naohirotdevananda: are you saying that ironic does nothing with NFS, and the operator is responsible for preparing NFS so that iscsi_irmc can work?06:06
devanandayes06:08
naohirotdevananda: If so, I don't need to investigate Manila, but I have to write thorough procedure document.06:08
devanandalike tftp service -- the PXE drivers require that a TFTP service is running on each Conductor where the PXE driver is enabled06:08
devanandabut Ironic does not start that service06:08
devanandanaohirot: correct :)06:09
naohirotdevananda: Okay, that's good. I'm wondering that Manila is still incubation stage, I'll face a lot of problem in development stage :-)06:10
devanandalikely so06:11
devanandanaohirot: I mentioned Manila more to say, "hey, there's something you might be interested in", that's all06:11
naohirotdevananda: I think it's a reasonable approach to think about Manila after Manila has stabilized.06:12
devanandaat some point in the future, someone may want to use these together. but that's optional06:12
naohirotdevananda: Okay06:13
naohirotdevananda: One thing I want to clarify is about the OS.06:13
*** ryanpetrello has quit IRC06:14
naohirotdevananda: Is it illegal to access NFS partition on OS where Ironic conductor is running?06:14
devanandanaohirot: well, why would that pose a problem?06:16
naohirotdevananda: Because OpenStack is some kind of Cloud OS, I thought that it is not good to directly access any resources on the OS where OpenStack services are running.06:18
*** killer_prince is now known as lazy_prince06:18
*** ifarkas has joined #openstack-ironic06:19
*** rushiagr_away is now known as rushiagr06:19
devanandanaohirot: can you be more specific? acces by who/what?06:20
naohirotdevananda: When I'll write the thorough procedure document how to set up NFS and deploy image,06:20
naohirotdevananda: I thought that one way to describe it to the operator is: export a disk partition on the conductor node as NFS, and put the deploy image into that NFS.06:22
naohirotdevananda: That is my question; in this case, the operator doesn't have to prepare an NFS box.06:23
naohirotdevananda: the operator can temporarily use the conductor node as NFS too.06:23
*** k4n0 has joined #openstack-ironic06:24
rameshg87naohirot, i guess the current ipxe implementation does the same thing. it assumes that an http server is running on the conductor node.06:28
devanandanaohirot: that volume will be shared by all nodes which are deployed from that conductor. And because it is NFS, also visible to anyone else on the same network ...06:28
devanandarameshg87: difference: http is probably a read-only service. nfs is not06:29
rameshg87devananda, ah okay :)06:29
rameshg87devananda, so if we share an nfs across all conductor nodes, we could use it and use the image cache06:30
devanandaif the operator had a separate NFS server, which all conductors connected to, then yes, all conductors could share an image cache06:31
naohirotdevananda: Okay, If I can use conductor node as NFS server for iRMC, I think it's very convenient, I don't have worry about NFS.06:31
devanandarameshg87: this is possible today even with the pxe driver (but there is a risk of file collisions)06:32
rameshg87devananda, what do you mean by file collision06:33
*** spandhe has joined #openstack-ironic06:33
rameshg87devananda, do you mean locking can't be done effectively when it is shared ?06:33
devanandanaohirot: I feel that how the NFS server is configured should be outside of the control of ironic-conductor06:33
devanandanaohirot: in other words, iRMC driver should have config options to inform it about the NFS service, but conductor can not start or manage the NFS mount point or the NFS service itself06:34
naohirotdevananda: My question came from the fact that OpenStack storages are all abstracted. I thought that this is some kind of design policy or something.06:34
*** nosnos_ has quit IRC06:34
devanandanaohirot: it has nothing to do with OpenStack Block or Volume storage services06:34
*** nosnos has joined #openstack-ironic06:34
naohirotdevananda: Okay, I use NFS partition on conductor node as the first option.06:36
devanandanaohirot: personally, I think that's going to pose a security problem, and many people will not be willing to use it06:36
devanandanaohirot: but I may be wrong on that06:36
devanandanaohirot: where the NFS server is run should not matter to Ironic or the iRMC driver -- as long as it's accessible on the network06:37
naohirotdevananda: if the operator doesn't want to write data onto the conductor node, then I'll suggest using NFS somewhere else06:37
devanandarameshg87: look at drivers/modules/image_cache.py06:38
*** ifarkas has quit IRC06:39
naohirotdevananda: Okay, if there is a security issue, I'll reverse the order: first ask to prepare NFS somewhere, then consider using the conductor node as the last resort.06:39
devanandarameshg87: fetch_image() assumes only a single conductor process is writing to the cache dir, and that it is on a file system which supports hard-links06:40
* naohirot I'm looking at fetch_image()06:41
devanandarameshg87: if you want to make image_cache work on a network file system shared by >1 conductor , you'll need to look at lockutils.lock() -- I believe there is a way to make that use a file-based lock (rather than the default in-memory lock)06:42
rameshg87devananda, doesn't nfs support hardlinks ?06:42
devanandarameshg87: it may - I don't know offhand06:43
rameshg87devananda, we count the number of hardlinks to see whether it is time to cleanup the master image or not, right ?06:43
rameshg87devananda, yeah okay, file based locks might work06:44
rameshg87devananda, but is it just that nobody has tried out such a thing with pxe driver ?06:44
rameshg87devananda, oh okay, file based locks :)06:44
devanandarameshg87: also, file-based locks across a shared file system might be really slow and error prone - so please don't take this discussion as an endorsement of the idea06:48
rameshg87devananda, okay06:48
devanandarameshg87: I think it's an _interesting_ idea. but I dont know if it's a good solution06:48
rameshg87devananda, :)06:48
devanandaok - I need to get some sleep. good night, folks. see you tomorrow!06:50
naohirotdevananda: good night, thanks!06:50
rameshg87devananda, good night :)06:51
naohirotrameshg87: regarding file locks, isn't the NFS lock daemon enough for our case?06:52
openstackgerritHarshada Mangesh Kakad proposed openstack/ironic: Add documentation for SeaMicro driver  https://review.openstack.org/13632406:58
openstackgerritAnusha Ramineni proposed openstack/ironic: iLO Management Interface  https://review.openstack.org/13274606:58
*** harlowja is now known as harlowja_away07:01
*** aswadr has joined #openstack-ironic07:04
rameshg87naohirot, i am just checking what nfs lockd is ..07:06
*** achanda has joined #openstack-ironic07:08
naohirotrameshg87: okay, I just would like to understand the lock issue you and devananda discussed.07:09
naohirotrameshg87: The lock NFS lockd provides may be just advisory lock, I'm not sure.07:13
rameshg87naohirot, current image_cache uses locks while fetching and cleaning up the image07:15
rameshg87naohirot, https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/image_cache.py#L10807:15
rameshg87naohirot, this uses oslo.concurrency module https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L268-L31107:16
rameshg87naohirot, we should use file based locking for locking during operations in the image_cache07:17
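A sketch of what file-based locking could look like using POSIX flock(2); this is only an illustration with a hypothetical `file_lock` helper, not the oslo.concurrency implementation, and (as devananda cautions above) its reliability over a shared NFS mount would need to be verified:

```python
import fcntl
import os
from contextlib import contextmanager


@contextmanager
def file_lock(lock_path):
    """Hold an exclusive advisory lock on lock_path for the duration
    of the with-block, so that only one conductor process fetches or
    cleans up a given cache entry at a time."""
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # blocks until the lock is free
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

On NFS, flock() semantics depend on the client, server, and mount options (historically it was local-only), which is exactly why the channel is hesitant about advisory locks on a shared filesystem.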
*** dlpartain has joined #openstack-ironic07:20
naohirotrameshg87: Okay, is there something I should consider when using a file-based lock mechanism on NFS?07:21
*** mrda is now known as mrda_away07:21
* naohirot rereading the discussion b/w rameshg87 and devananda 07:23
rameshg87naohirot, i am not aware of file-based locking mechanisms in nfs. but if you are gonna have a single nfs server and are going to share images (like deploy ISO images), then you will definitely need a better locking mechanism on the shared nfs07:23
naohirotrameshg87: Okay, I got that the issue would be performance.07:27
rameshg87naohirot, bigger worry would be will it work properly in all cases :)07:28
rameshg87naohirot, may be the nfs specific locking mechanisms can address that07:28
naohirotrameshg87: Okay, I'll mention it in the spec.07:29
rameshg87naohirot, okay07:32
naohirotrameshg87: thanks! :-)07:32
*** achanda has quit IRC07:35
rameshg87wc07:39
*** achanda has joined #openstack-ironic07:43
*** achanda has quit IRC07:46
*** spandhe has quit IRC07:47
*** achanda has joined #openstack-ironic07:48
*** kiku has joined #openstack-ironic07:50
kikuHii07:51
kikuneed some help about pxe_ssh driver07:51
*** achanda has quit IRC07:53
*** kiku has quit IRC07:53
*** achanda has joined #openstack-ironic07:58
*** jcoufal has joined #openstack-ironic08:00
*** spandhe has joined #openstack-ironic08:01
*** athomas has joined #openstack-ironic08:05
*** lucasagomes has joined #openstack-ironic08:06
*** lucasagomes has quit IRC08:06
*** lucasagomes has joined #openstack-ironic08:07
*** ifarkas has joined #openstack-ironic08:10
*** romcheg has joined #openstack-ironic08:10
*** achanda has quit IRC08:10
*** dtantsur has joined #openstack-ironic08:10
*** dtantsur has quit IRC08:11
*** dtantsur_ has joined #openstack-ironic08:11
*** dtantsur_ is now known as dtantsur08:11
*** spandhe has quit IRC08:13
*** jistr has joined #openstack-ironic08:18
GheRiveromorning all08:20
Haomeng|2hi08:21
Haomeng|2kiku: hi08:21
dtantsurmorning :)08:32
lucasagomesmorning folks08:32
*** ndipanov_gone is now known as ndipanov08:33
openstackgerritYuiko Takada proposed openstack/python-ironicclient: Fix node-set-provision-state cmd's help strings  https://review.openstack.org/13551808:35
Nisha_dtantsur: lucasagomes Morning !08:45
dtantsurNisha_, hi08:46
*** mkerrin has quit IRC08:48
Nisha_dtantsur: hi08:49
Nisha_just now saw the comments on discovery spec08:49
Nisha_dtantsur: some clarifications on it08:50
Nisha_dtantsur what shall be the proposed timeout?08:50
Nisha_dtantsur: and shall it be a new element in node object?08:50
Nisha_or part of node.extra?08:51
Nisha_dtantsur: i wanted to move my spec out of states spec as states spec is anyway a major dependency for all the specs08:51
Nisha_dtantsur: means most of them. So while that piece of code is still in progress, we can at least get our specs in?08:52
dtantsurNisha_, we could, but there's no point IMO. Nothing is preventing you from writing the code right now, but it won't be approved until state machine is finished08:56
Nisha_yes :(08:57
dtantsurNisha_, re timeout: it's configuration option, not a node property. We probably need a new Node.discovery_started datetime field though...08:57
Nisha_dtantsur: ok clarification needed above on the review comments.^^^08:57
Nisha_dtantsur: why we need timestatrted?08:58
dtantsurNisha_, to calculate timeouts. Example:08:58
Nisha_we are anyway adding last_discovered08:58
Nisha_ohk,08:58
Nisha_that may not be a seperate field anyway08:59
dtantsurNisha_, we have discovery started at 12:00 and it died for some reason (for OOB it may be the conductor dying). timeout is 1hr, so at 13:00 conductor sets it to DISCOVERYFAILED08:59
Nisha_we can take a timestamp when discovery started and timeout can be calculated09:00
Nisha_?09:00
Nisha_thats fine, but why do we need it as a separate field09:00
Nisha_dtantsur: it can be done even if it is not a separate field09:00
Nisha_correct?09:00
dtantsurNisha_, where are you going to store " a timestamp when discovery started" for each node?09:00
Nisha_the management interface can have a local variable, not saved to node object09:01
Nisha_isnt that possible?09:01
Nisha_ohk, or we can store it temporariy in node.etra09:01
Nisha_and remove when discovery done09:02
Nisha_s/node.etra/node.extra09:02
dtantsurNisha_, local variable: won't work with HA and conductor dying; extra: will require going thorough all nodes instead of ``select * from nodes where discovery_started < now() - timeout``09:02
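dtantsur's point can be shown with a toy version of that query: with a real discovery_started column, finding timed-out nodes is a single SELECT rather than loading and parsing every node's extra blob. A sketch with sqlite3 (the table layout and node names are illustrative, not Ironic's schema):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE nodes (uuid TEXT, provision_state TEXT, '
             'discovery_started TEXT)')

now = datetime(2014, 11, 25, 13, 0, 0)
nodes = [
    ('node-a', 'DISCOVERING', now - timedelta(hours=2)),     # started 11:00
    ('node-b', 'DISCOVERING', now - timedelta(minutes=10)),  # started 12:50
]
# ISO-8601 strings compare correctly in lexicographic order
conn.executemany('INSERT INTO nodes VALUES (?, ?, ?)',
                 [(u, s, t.isoformat()) for u, s, t in nodes])

timeout = timedelta(hours=1)
expired = conn.execute(
    'SELECT uuid FROM nodes WHERE provision_state = ? '
    'AND discovery_started < ?',
    ('DISCOVERING', (now - timeout).isoformat())).fetchall()
# only node-a exceeded the 1 hour timeout
```

A periodic task in the conductor could run this kind of query and flip the expired nodes to the failure state.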
*** zhenzanz has quit IRC09:03
Nisha_ok09:03
Nisha_Ok i will add it in the spec09:03
Nisha_dtantsur: do u have any idea of default tiimeout?09:03
openstackgerritAnusha Ramineni proposed openstack/ironic-specs: Add get/set boot mode to Management Interface  https://review.openstack.org/12952909:04
dtantsurNisha_, hmm. it should be quite large to account for in-band. what about 1 hour?09:04
Nisha_1 hr, isnt that too large a time for a node?09:05
Nisha_i was thinking in terms of seconds :)09:05
Nisha_or minutes09:05
dtantsurNisha_, honestly I don't know :) for in-band it's definitely not seconds. also, discoverd had its own timeouts and OOB will rarely fail that badly.09:05
dtantsurso it's the last resort and can be large09:06
Nisha_dtantsur: :)09:06
Nisha_ok i will propose it to be that but i am not sure if people will really agree on such large timeouts09:06
dtantsurlet us see :)09:06
Nisha_dtantsur: one more thing wanted to ask09:07
Nisha_as part of properties discovery, we plan to even discover hardware capabilities09:07
Nisha_that said it will discover many properties than what schedular requires and update in the form of capabilities in node.properties09:08
Nisha_so do i need to propose another spec for that? or it is not required to mention? or it is fine to do as part of hardware discovery09:09
Nisha_above we want to do for ilo drivers09:09
Nisha_i.e. discovering hw capabilities along with mandatory properties09:09
dtantsurgood question. depending on what you mean for capabilities...09:10
dtantsurthe only thing I suggest is not overcomplicating this spec even more :)09:10
ekarlso-what's instack btw?09:10
dtantsurNisha_, so you may start with putting it into your ilo spec09:10
dtantsurekarlso-, Red Hat's undercloud installer based on tripleo09:11
ekarlso-ah ok09:11
dtantsurekarlso-, https://github.com/agroup/instack-undercloud09:11
Nisha_dtantsur: devananda suggested to not have a another spec for ilo, the same spec can say the reference implementation will be done for ilo drivers09:11
dtantsurekarlso-, https://openstack.redhat.com/Deploying_an_RDO_Undercloud_with_Instack09:11
dtantsurNisha_, for just discovery - yes. for more capabilities... maybe you need it09:12
ekarlso-is discovery stuff coming to ironic ?09:12
Nisha_dtantsur: ok. so the same management interface can be used for hw capabilities?09:12
dtantsurekarlso-, yes :) https://review.openstack.org/#/c/100951/ https://review.openstack.org/#/c/13560509:12
ekarlso-oh kewl :)09:13
Nisha_or should it be done as a separate mgmt interface09:13
dtantsurNisha_, probably. I didn't give it a good think yet and I don't know what exactly you mean by "capabilities"09:13
Nisha_i mean several prop like supported boot modes, server model, VT-d, etc etc09:14
ekarlso-I was wondering about the consolidation stuff that someone asked about in the MLs, maybe use ceilometer / nova api's to pull instances on hosts periodically and using ironic to control powerstate after vm's has been consolidated onto new hosts would be a fun idea09:14
Nisha_we are yet to finalize the list of what we want to discover, but still for ilo it is closely tied to discovery09:14
*** derekh has joined #openstack-ironic09:14
dtantsurNisha_, honestly we'll have kind of these for in-band... but we're in the middle of the discussion right now at the meeting :)09:15
Nisha_dtantsur: i didnt get09:16
Nisha_for in-band, it will be done as part of hw discovery?09:16
Nisha_or seperately09:16
dtantsurNisha_, likely part of discovery. I just suggest not putting it into this spec, so that it can land soon :)09:17
Nisha_:)09:18
Nisha_dtantsur: ok then i will just update that in my ilo spec09:18
Nisha_and should i rename that as hw capabilities spec for ilo ;)09:18
dtantsuryeah, makes sense09:18
Nisha_dtantsur: one basic ques09:18
Nisha_Even if it is going to be part of discovery, then also DISCOVERYFAIL shall be set only when mandatory prop fail09:20
Nisha_is that fine?09:20
Nisha_for other capabilities, it will not be set as DISCOVERYFAIL09:21
Nisha_but the timeout will matter then09:21
Nisha_how much time in-band takes for mandatory prop, approx09:21
Nisha_?09:21
dtantsurNisha_, in-band will take ~the same time no matter which properties are discovered. The boot time is dominating.09:22
*** andreykurilin_ has joined #openstack-ironic09:22
dtantsurNisha_, should be fine to call discovery done, once we have props required for scheduling, even if the other failed09:22
Nisha_but its asynchronous call correct, the last disocvered will be set when the mgmt interface returns correct?09:23
dtantsurNisha_, when last_discovered is set depends on the driver. it's set before mgmt returns by the OOB driver and inside discoverd for the discoverd driver09:25
Nisha_dtantsur: will do as discussed and update both the specs09:26
dtantsurNisha_, thanks!09:26
Nisha_dtantsur: thank you.09:27
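The rule dtantsur describes above can be sketched in a few lines: discovery is considered done once the properties Nova needs for scheduling are present, and DISCOVERYFAIL is set only when a mandatory property is missing. All names below are illustrative, not Ironic's actual API.

```python
# Sketch of the policy discussed above: optional capabilities
# (boot modes, server model, etc.) never cause DISCOVERYFAIL on
# their own; only missing scheduling-mandatory properties do.

MANDATORY_PROPS = ('cpus', 'memory_mb', 'local_gb', 'cpu_arch')

def discovery_outcome(discovered):
    """Return ('DISCOVERYFAIL', missing) only when a mandatory prop is absent."""
    missing = [p for p in MANDATORY_PROPS if discovered.get(p) in (None, '')]
    if missing:
        return 'DISCOVERYFAIL', missing
    return 'DONE', []
```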
Nisha_dtantsur: i was going thru the spec https://review.openstack.org/12892709:27
dtantsurit's very unfinished and will probably be abandoned in favor of https://review.openstack.org/#/c/13127209:28
Nisha_do you plan to use properties field for driver capabilities09:29
Nisha_:)09:29
Nisha_dtantsur: thats what my main ques was - do we plan to use a different node field for driver capabilities and hw capabilities, or is it the same for all the capabilities09:30
Nisha_dtantsur: so it looks like all capabilities will be stored same09:31
dtantsursorry, I don't know right now :) it was not my focus recently, I better ask devananda, or JayF, or jroll, I guess09:31
Nisha_:)09:31
Nisha_dtantsur: one more ques09:32
*** jistr has quit IRC09:33
Nisha_dtantsur: for states like zapping, things like firmware settings and firmware update are discussed to be done thru capabilities,09:33
Nisha_how does nova select between a node which is in zapping and one which is in available09:34
Nisha_do we need a different scheduler filter for such a case09:34
Nisha_?09:34
dtantsurNisha_, nova inspects provision_state09:34
dtantsurso interesting for Nova are only AVAILABLE nodes, it should ignore the others09:34
Nisha_but then how do the nodes which are in zapping get selected for taking the action requested in the flavor?09:35
Nisha_AFAIU, firmware settings, firmware update, RAID, etc need to be implemented thru capabilities and do not have a separate endpoint09:36
dtantsurNisha_, not sure for now (again, ask rackspace guys), but Nova is not related to scheduling zapping actions, it's solely done by Ironic itself09:37
*** foexle has joined #openstack-ironic09:37
Nisha_dtantsur: actually i did experiment with hw capabilities in nova, so as of now it matches the nodes as given in the flavor key09:38
Nisha_so i was just thinking there is nothing as of now in Nova to prevent the node from being selected for deploying if it goes thru available filters09:39
dtantsurright. but Ironic only passes it nodes in AVAILABLE state09:39
Nisha_as of now do we have any such discrimination?09:43
Nisha_dtantsur: in my experiment i can see nova can see all the hypervisors09:44
dtantsurI think so. We call it "NOSTATE" now, though. But we don't provide Nova with nodes in say "DELETING" state IIRC09:45
Nisha_then all the filters act09:45
Nisha_hmmm not sure about deleting state, :) because mostly that time i would have been watching my instance ;)09:46
Nisha_i think we will need one filter in Nova for state , not very sure as of now09:47
Nisha_as of now, nova transparently selects any node whose capabilities match, there is no filter for state09:48
dtantsurhmm09:48
dtantsurI don't remember how it's done :) lucasagomes ^^^?09:49
Nisha_if we want nova to select nodes only in AVAILABLE state, how do we plan for the hw capabilities actions to be taken as per the requested flavor09:49
*** pensu has quit IRC09:49
dtantsurNisha_, these do not look related. Nova will select AVAILABLE nodes that have appropriate capabilities and hw properties. Do you expect any problems here?09:51
Nisha_for ex, say a node has the capability to give different kinds of power. the flavor says i need optimized power. The node which can provide the optimized power shall be chosen and then the driver shall take a particular action to give it optimized power09:51
Nisha_the above is part of firmware setting09:51
Nisha_which shall be part of hw capabilities09:51
Nisha_so during this requested action node should not be in AVAILABLE state correct?09:52
dtantsurNisha_, if it's a fast action - yes. Nova will choose an AVAILABLE node with a capability like `can_optimize_power` and afterward the specific way will be enforced as part of the prepare action of DEPLOY (not sure how exactly - ask J*)09:53
Nisha_so if it has to come thru Nova flavor then nova should be able to select the node for that requested action and do the requested work.09:53
Nisha_i just gave an example of power09:53
Nisha_the firmware settings can then be part of AVAILABLE state ????09:54
dtantsurNisha_, of DEPLOY state09:54
dtantsurit should be set in advance (I'm not aware of the details)09:54
Nisha_DEPLOY is after AVAILABLE correct...but firmware setting is supposed to be part of ZAPPING09:55
dtantsurNisha_, it depends. ZAPPING is for long-running things. They become "properties" of the AVAILABLE node afterwards. Short-running tasks are part of DEPLOY however.09:56
Nisha_dtantsur: ^^^^ 1) how does Nova know whether it will be a short-running task or a long-running task? 2) Will the time taken not differ per driver? 3) How does Nova take action during the ZAPPING state?09:57
Nisha_dtantsur: if they have a separate endpoint then it is fine, but if they dont have a separate endpoint then nova needs to know the mechanism to differentiate between several capabilities09:59
*** andreykurilin_ has quit IRC10:00
*** andreykurilin_ has joined #openstack-ironic10:00
*** dlpartain has quit IRC10:02
*** jistr has joined #openstack-ironic10:03
*** pensu has joined #openstack-ironic10:04
*** andreykurilin_ has quit IRC10:05
*** andreykurilin_ has joined #openstack-ironic10:06
*** nosnos has quit IRC10:08
*** nosnos has joined #openstack-ironic10:09
*** mkerrin has joined #openstack-ironic10:11
*** Nisha_ has quit IRC10:14
openstackgerritDmitry Tantsur proposed openstack/ironic-specs: In-band hardware properites discovery via ironic-discoverd  https://review.openstack.org/13560510:23
*** naohirot has quit IRC10:29
*** sambetts has joined #openstack-ironic10:39
*** vdrok has quit IRC10:55
*** MattMan has quit IRC10:58
*** vdrok has joined #openstack-ironic10:59
*** MattMan has joined #openstack-ironic10:59
*** naohirot has joined #openstack-ironic11:00
*** dlaube has joined #openstack-ironic11:01
*** ramineni has quit IRC11:03
*** dlaube has quit IRC11:05
*** andreykurilin_ has quit IRC11:12
*** rameshg87 has quit IRC11:13
*** k4n0 has quit IRC11:20
sambettsMorning all11:24
openstackgerritYuriy Zveryanskyy proposed openstack/ironic: Rename parameters for ilo driver  https://review.openstack.org/13702211:32
*** chenglch has quit IRC11:38
openstackgerritNisha Agarwal proposed openstack/ironic-specs: uefi support for agent-ilo driver  https://review.openstack.org/13702411:46
openstackgerritTan Lin proposed openstack/ironic-specs: Bare Metal Trust  https://review.openstack.org/13390211:53
*** Nisha has joined #openstack-ironic11:58
Nishadtantsur: hi11:58
Nishadtantsur: wanted to check one thing on your comment11:59
Nishadtantsur: "New option: ``discovery_timeout`` (integer, in seconds, default to 1 hour)   with the amount of time after which discovery is considered to be failed and   node is moved to DISCOVERYFAIL state" Do you propose it to be a new option in the CLI?11:59
*** dlaube has joined #openstack-ironic12:02
*** dlaube has quit IRC12:07
dtantsurNisha, configuration option, in /etc/ironic/ironic.conf12:08
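For reference, the option dtantsur points to would be an ordinary entry in /etc/ironic/ironic.conf; a sketch of what it might look like (the section name is a guess, the option name and default are as quoted from the spec above):

```ini
# /etc/ironic/ironic.conf -- sketch, per the spec under review
[discoverd]
# Amount of time after which discovery is considered failed and the
# node is moved to the DISCOVERYFAIL state (seconds; default 1 hour).
discovery_timeout = 3600
```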
dtantsursambetts, morning12:09
sambettsMorning dtantsur12:09
lucasagomesNisha, it may clarify some about the zapping state https://review.openstack.org/#/c/102685/12:24
lucasagomessambetts, morning12:24
Nishadtantsur: Ok.12:24
* lucasagomes is in a meeting so I'm quiet here today12:25
sambettslucasagomes, morning12:25
*** dlpartain has joined #openstack-ironic12:26
*** mjturek has quit IRC12:26
*** jistr is now known as jistr|english12:32
Nishalucasagomes: yes i saw it today morning itself. But still it doesnt clarify much on nova dependencies, as it is still unclear12:37
Nishadtantsur: the zapping spec is not the child spec of state spec though it requires it12:37
Nishaso shall i also make discovery spec as independent and have state spec in the dependencies section12:37
Nisha?12:37
dtantsurNisha, I'm not to blame, please ask its author :)12:37
Nisha:)12:38
Nishadtantsur: i am not blaing you12:38
dtantsurNisha, they're not independent, so no you can't IMO12:38
Nishablaming*12:38
lucasagomesNisha, right... This is some discussion that we have to have as part of the new state machine work12:38
Nishajust to land the discovery spec ;)12:38
*** dlpartain1 has joined #openstack-ironic12:38
*** dlpartain has quit IRC12:38
lucasagomeshow nova is going to interact with the new state machine12:38
lucasagomes(afk a bit... meeting)12:38
Nishayes, lucasagomes correct12:39
Nishaok i will post it as a dependent patch but then the discovery spec cant land till the state spec lands :(12:39
dtantsurNisha, I personally won't approve a spec with non-solved dependencies. Because we can't land any code.12:40
dtantsurNisha, also imagine, we change smth discovery-related in the states spec after we landed the discovery spec. What will we do?12:41
*** bradjones has quit IRC12:45
*** Haomeng has joined #openstack-ironic12:45
*** bradjones has joined #openstack-ironic12:45
*** Haomeng|2 has quit IRC12:46
Nishadtantsur: ok :)12:51
*** rakesh_hs has quit IRC12:51
*** ramineni has joined #openstack-ironic12:55
*** jcoufal has quit IRC12:56
*** jcoufal has joined #openstack-ironic12:57
*** Haomeng|2 has joined #openstack-ironic13:01
*** Haomeng has quit IRC13:02
*** dlaube has joined #openstack-ironic13:03
*** dprince has joined #openstack-ironic13:06
*** dlaube has quit IRC13:07
*** Nisha has quit IRC13:23
*** naohirot has quit IRC13:29
*** Haomeng has joined #openstack-ironic13:35
*** Haomeng|2 has quit IRC13:37
*** linggao has joined #openstack-ironic13:37
*** jjohnson2 has joined #openstack-ironic13:43
*** dlpartain1 has left #openstack-ironic13:47
*** nosnos has quit IRC13:48
*** nosnos has joined #openstack-ironic13:49
*** nosnos has quit IRC13:54
*** trown has quit IRC13:55
*** trown has joined #openstack-ironic13:56
*** rloo has joined #openstack-ironic14:02
*** foexle has quit IRC14:03
*** dlaube has joined #openstack-ironic14:03
*** jcoufal has quit IRC14:03
*** jcoufal has joined #openstack-ironic14:04
openstackgerritVladyslav Drok proposed openstack/ironic-specs: Support for non-Glance image references  https://review.openstack.org/13527614:04
*** dlaube has quit IRC14:08
*** pensu has quit IRC14:09
*** jistr|english is now known as jistr14:13
*** ryanpetrello has joined #openstack-ironic14:13
*** rushiagr is now known as rushiagr_away14:24
*** mjturek has joined #openstack-ironic14:38
*** mjturek has quit IRC14:39
*** mjturek has joined #openstack-ironic14:39
*** ryanpetrello has quit IRC14:40
*** ryanpetrello has joined #openstack-ironic14:46
NobodyCamgood morning Ironic14:46
rloomorning NobodyCam14:48
GheRiveromorning Ironic14:49
NobodyCammorning rloo and GheRivero14:49
*** r-daneel has joined #openstack-ironic14:51
*** rushiagr_away is now known as rushiagr14:51
dtantsurmorning NobodyCam, rloo, GheRivero!14:53
rloohi GheRivero, dtantsur14:53
*** dlpartain has joined #openstack-ironic14:53
*** jcoufal_ has joined #openstack-ironic14:53
*** jcoufal has quit IRC14:53
*** dlpartain has left #openstack-ironic14:53
jrollmorning everybody :)14:54
*** jcoufal_ has quit IRC14:54
dtantsurjroll, morning14:54
rloomorning jroll ;)14:54
*** jcoufal has joined #openstack-ironic14:54
NobodyCammorning dtantsur and jroll14:55
jrolldtantsur: to answer nisha's question about nova: https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L240-25014:55
jrollwe'll have to modify that for the new state machine, but there you have it14:55
dtantsuraha I see14:55
dtantsurthanks14:55
jrollnp14:56
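The nova code jroll links decides which Ironic nodes count as usable resources; roughly, the check looks like the sketch below (a paraphrase for illustration, not the actual upstream code; constants and field names are assumptions):

```python
# Rough paraphrase of the availability check in nova's Ironic driver:
# a node is unusable for scheduling if it is in maintenance, in a bad
# power state, or not in a provision state Nova can deploy to.

BAD_POWER_STATES = ('error',)
DEPLOYABLE_PROVISION_STATES = (None, 'available')  # NOSTATE / AVAILABLE

def node_resources_unavailable(node):
    return (node.get('maintenance', False)
            or node.get('power_state') in BAD_POWER_STATES
            or node.get('provision_state') not in DEPLOYABLE_PROVISION_STATES)
```

This is the piece that would need updating for the new state machine, since the set of "deployable" provision states changes.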
* jroll responds to sdague's email14:57
*** zz_jgrimm is now known as jgrimm15:03
*** dlaube has joined #openstack-ironic15:04
openstackgerritChris Krelle proposed openstack/python-ironicclient: Update README  https://review.openstack.org/13454115:05
*** dlaube has quit IRC15:08
openstackgerritVladyslav Drok proposed openstack/python-ironicclient: Fix log_curl_request API version duplication  https://review.openstack.org/13677315:09
jrollyo lucasagomes, I don't think either of those status codes would work here https://review.openstack.org/#/c/136934/15:09
*** ndipanov has quit IRC15:09
openstackgerritVladyslav Drok proposed openstack/python-ironicclient: Fix log_curl_request API version duplication  https://review.openstack.org/13677315:10
*** ryanpetrello has quit IRC15:16
openstackgerritVladyslav Drok proposed openstack/python-ironicclient: Fix log_curl_request API version duplication  https://review.openstack.org/13677315:17
*** ndipanov has joined #openstack-ironic15:18
*** ryanpetrello has joined #openstack-ironic15:20
*** achanda has joined #openstack-ironic15:29
*** achanda has quit IRC15:32
*** alexpilotti has joined #openstack-ironic15:32
dtantsurifarkas, https://github.com/agroup/instack-undercloud/blob/master/elements/discovery-ironic/init.d/80-discovery-ironic15:45
ifarkasdtantsur, thanks15:45
*** PaulCzar has joined #openstack-ironic15:46
*** achanda has joined #openstack-ironic15:48
NobodyCambrb15:51
dtantsurifarkas, now https://review.openstack.org/#/c/122151/15:54
ifarkasdtantsur, I am wondering if a ramdisk for a stackforge project will be accepted or not...15:55
dtantsurifarkas, we can see :)15:56
devanandamorning, all15:56
dtantsurit would be a bit strange if it doesn't actually15:56
dtantsurdevananda, morning15:56
ifarkasmorning devananda15:57
sambettsmorning, devananda15:57
NobodyCamgood morning devananda15:57
jrollheya deva15:57
ifarkasdtantsur, I am not convinced since diskimage-builder is an already accepted openstack project. We will see...15:58
*** anderbubble has joined #openstack-ironic15:58
*** kfox1111 has quit IRC15:58
jrollifarkas: you mean if ironic will find it acceptable to use a ramdisk from a stackforge project?15:59
openstackgerritDmitry Tantsur proposed openstack/ironic-specs: In-band hardware properites discovery via ironic-discoverd  https://review.openstack.org/13560516:00
ifarkasjroll, no, I am wondering if a ramdisk for stackforge/ironic-discoverd element fits to the diskimage-builder repo16:00
ifarkasjroll, but I am not against it16:00
jrolloh, I don't see why not16:00
devanandaifarkas: any reason why DIB can't build a ramdisk for discoverd?16:01
dtantsurjroll, I'm trying https://review.openstack.org/#/c/122151/ to get the reference ramdisk in diskimage-builder. It will be kind of strange to have support for it in Ironic but not having a ramdisk anywhere within openstack16:01
devanandaifarkas: I would expect, if it can be built, that project will accept it16:01
jrolldtantsur: agree, I don't see why they wouldn't merge that16:01
ifarkasdevananda, it can be built with DIB, so it's all cool then16:01
*** achanda has quit IRC16:02
devanandaalso, fwiw, the tripleo folks have wanted this functionality for a while, so I would expect them to be _pleased_ to accept it16:02
ifarkasthat's great! then we can create a discoverd element16:03
ifarkasif it doesn't exist already16:04
* devananda reviews it16:04
devanandain https://review.openstack.org/#/c/122151/2/elements/ironic-discoverd-ramdisk/init.d/80-discovery-ironic -- is the node UUID already known? (maybe passed in the $ironic_api_url?)16:05
*** dlaube has joined #openstack-ironic16:05
jrollis the reason for using bash in these ramdisks that python would bloat it too much?16:05
dtantsurdevananda, no. for that we need a way to associate a machine with UUID. and that's exactly what discoverd will be doing. chicken-and-egg problem :)16:06
devanandadtantsur: ok. thought so.... so this will always create a new Node record, even if it is re-run multiple times on the same host16:06
dtantsurjroll, IIRC our ramdisks do not have infrastructure for Python. Also: I would like to port the ramdisk to IPA one day :)16:07
dtantsurdevananda, oh no16:07
devanandadtantsur: ??16:07
dtantsurdevananda, first of all discoverd does not create nodes :)16:07
jrolldtantsur: I mean, it's just a matter of installing python, DIB knows how to install things16:07
devanandaNODE_RESP=$(request_curl POST $IRONIC_API_URL $NODE_DATA)16:07
dtantsurdevananda, second, $ironic_api_url is an awful name, because it's actually URL of discoverd16:07
dtantsurdevananda, it is discoverd that handles the logic after that point16:08
devanandaooooh16:08
devanandaok - please change that :)16:08
dtantsur(oh and $IRONIC_API_URL is one more awful name :D16:08
devanandain that case, this is quite confusing: readonly IRONIC_API_URL=$(get_kernel_parameter ironic_callback_url)16:08
jrolldtantsur: I'd be happy to take this in IPA, dunno if adding to the agent itself or making a new container is the right way16:09
dtantsurdevananda, yeah, I forgot about it, sorry :) some time ago it really was Ironic endpoint16:09
dtantsurjroll, dunno either16:09
jrolldtantsur: probably just add a "discover" command to IPA16:10
dtantsurjroll, the problem is to call it. Discoverd has to know it's IP address.16:11
openstackgerritYuriy Zveryanskyy proposed openstack/ironic: Change some exceptions from invalid to missing  https://review.openstack.org/13712416:11
jrollmmm16:11
*** ramineni1 has joined #openstack-ironic16:12
jrollwell, today it calls back to ironic on boot16:12
jrollyou could do a similar thing16:12
devanandadtantsur: what do you think of adding stackforge/discoverd to the IRC bot notices?16:12
dtantsurjroll, if we do it specially for discoverd, we can just as well just send all the data :) exactly how the reference ramdisk works.16:12
dtantsurdevananda, to this channel? I'd be happy, if you don't mind.16:12
devanandadtantsur: ++16:12
jrolldtantsur: right, maybe a separate container is better, use a kernel param to switch16:12
dtantsurright16:13
dtantsurdevananda, ack thanks :)16:13
jrolldevananda: sdague's nova/ironic email is highly related to your interests :)16:13
*** ramineni has quit IRC16:13
dtantsurI also need someone to review it except for 2 Red Hat guys :) and I'm ready to fast-track any ironic-code to ironic-discoverd-core as well :)16:13
devanandajroll: yup. on my short list to reply to. once coffee kicks i16:14
devanandain16:14
jrollcool16:15
devanandadtantsur: adding it to my gertty config now :)16:15
dtantsuroh good :)16:15
* jroll bbiab16:17
NobodyCambrb16:18
*** kfox1111 has joined #openstack-ironic16:24
kfox1111Morning all.16:24
kfox1111so, what would be the best way to go about setting up a raid with the agent?16:26
kfox1111or should the image itself just pack itself as a partial array and then extend itself to the other disk?16:27
openstackgerritJarrod Johnson proposed stackforge/pyghmi: Provide access to chassis identify  https://review.openstack.org/13712916:28
devanandakfox1111: if you're doing hardware raid, I'd do that in the agent. if software raid, I'd go with option 216:28
*** igordcard has joined #openstack-ironic16:31
devanandakfox1111: i haven't tried the degraded raid image approach yet. from several discussions, I think it could work16:32
*** hemna__ has joined #openstack-ironic16:32
kfox1111there is no hardware raid controllers, so I'll have to do the latter.16:33
kfox1111do you know if dib supports tweaking the partitioning?16:33
JayFI'm confused by sdague's email. I thought the entire point of being integrated was being able to have voting jobs?16:35
devanandaJayF: it used to be. That's something some people want to change16:35
devanandaJayF: where "that" == "the definition of being 'part of OpenStack'"16:36
JayFSure; but I haven't seen anything official come down that is changing; although lots of text has been written about ideas to change it16:36
devanandayep16:37
*** alexpilotti has quit IRC16:37
*** alexpilotti has joined #openstack-ironic16:38
*** ramineni1 has quit IRC16:40
*** anderbubble has quit IRC16:41
devanandajroll: do you recall the patch where unit tests were added to Nova that enforced the contract(s) we expect in their driver api ?16:41
kfox1111ah. elements/vm/block-device.d/10-partition. :)16:41
devanandaI'm grepping around and don't see them... but for now I'm going to believe that I'm just overlooking them16:42
JayFI do remember that16:43
JayFI think it was posted to the mailing list16:44
* JayF searches16:44
JayFdevananda: https://review.openstack.org/#/c/9820116:44
* JayF is good at the internet16:44
openstackgerritMerged stackforge/pyghmi: Provide access to chassis identify  https://review.openstack.org/13712916:45
JayFShrews: going to test your devstack patch this morning16:46
JayFerm16:46
JayFadam_g: ^16:46
lucasagomesjroll, right (sorry the delay to answer)16:48
lucasagomesjroll, I was just thinking of how we can actually tell the user of the API that, maintenance is actually deprecated16:48
lucasagomesmaybe only documentation is enough16:48
lucasagomes(better than the logging that he can't see anyway)16:48
lucasagomesbut we need somehow to indicate that16:48
devanandathis makes me sad ...16:49
devanandadeva@9470m:/opt/source/openstack/nova⟫ find ./ -name test_ironic_api_contracts.py16:49
devanandadeva@9470m:/opt/source/openstack/nova⟫16:49
devanandawhere did the file go?16:49
lucasagomes:/16:49
rloomy guess is that it was deleted after the ironic driver was merged16:49
dtantsurdevananda, re IRC: https://review.openstack.org/#/c/13713616:50
lucasagomesdevananda, git log --  <file path>16:50
devanandadtantsur: cheers16:50
dtantsurok now time to go, see you tomorrow16:51
*** dtantsur has quit IRC16:52
devanandaok -- wtf? we deleted it? https://review.openstack.org/#/c/111425/16:52
devanandapatch set 1116:53
*** yjiang5 has joined #openstack-ironic16:53
openstackgerritMerged openstack/python-ironicclient: Fix node-set-provision-state cmd's help strings  https://review.openstack.org/13551816:53
lucasagomesew16:54
devanandamrda_away: when you're around, do you remember why you deleted test_ironic_api_contracts.py in https://review.openstack.org/#/c/111425/ ?16:55
openstackgerritNisha Agarwal proposed openstack/ironic-specs: Discover node properties using new CLI node-discover-properties  https://review.openstack.org/10095116:56
*** ifarkas has quit IRC16:56
lucasagomesdoes seems to have any comments to indicate why we deleted that16:56
lucasagomesdoesn't*16:57
lucasagomesI will have to go (in paris now) they are closing the office16:57
*** kfox1111 has quit IRC16:57
lucasagomessee ya16:57
*** lucasagomes has quit IRC16:58
openstackgerritJay Faulkner proposed openstack/ironic: Improve docs for running IPA in Devstack  https://review.openstack.org/13713916:58
openstackgerritJay Faulkner proposed openstack/ironic: Improve docs for running IPA in Devstack  https://review.openstack.org/13713916:58
*** athomas has quit IRC16:59
*** davideagnello has quit IRC17:00
*** davideagnello has joined #openstack-ironic17:01
*** pensu has joined #openstack-ironic17:01
*** sambetts has quit IRC17:02
*** sambetts has joined #openstack-ironic17:02
*** kfox1111 has joined #openstack-ironic17:02
*** anderbubble has joined #openstack-ironic17:03
jrolldevananda: maybe we thought having our unit tests in tree was enough? (I don't think it is)17:05
devanandamaybe?17:06
jrolloh, there's a comment on patch set 1317:07
jrollThis was removed as a result of a comment by Daniel Berrange. We added in this test:17:07
jrolldef test_public_api_signatures(self):17:07
jroll        self.assertPublicAPISignatures(self.driver)17:07
jrollwhich is equivalent to these tests I believe.17:07
jrollYes, the assertPublicAPISignatures test case validates that every API implemented by ironic/driver.py has 100% matching parameter set to the driver API in nova/virt/driver.py It is driven by python's introspection capabilities, so guaranteed to always reflect current state of affairs unlike this manually written test attempting to achieve the same end goal.17:07
jroll^ that is from mrda, then berrange17:07
*** igordcard has quit IRC17:10
devanandaah, ok17:15
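The introspection-driven check berrange describes can be sketched like this (a simplified stand-in for nova's assertPublicAPISignatures, not the real implementation):

```python
import inspect

# Simplified version of the idea behind assertPublicAPISignatures:
# every public method of the base driver class must exist on the
# implementation with an identical parameter list. Introspection
# keeps it current, unlike a hand-written contract test.

def check_public_api_signatures(base, driver):
    mismatches = []
    for name, func in inspect.getmembers(base, inspect.isfunction):
        if name.startswith('_'):
            continue
        impl = getattr(driver, name, None)
        if impl is None:
            mismatches.append('%s: missing' % name)
        elif inspect.signature(impl) != inspect.signature(func):
            mismatches.append('%s: signature differs' % name)
    return mismatches
```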
*** yjiang5 has left #openstack-ironic17:18
*** derekh has quit IRC17:25
NobodyCamzapping just seems like a strange state to do burn-in in :-p17:26
adam_gJayF, that devstack patch is still not quite right, at least not running in the gate with USE_SCREEN=False.17:26
JayFthen I guess I'm, in a roundabout way, glad my VM didn't have enough ram and died during the first stack17:27
JayFlol17:27
JayFadam_g: https://review.openstack.org/137139 is relevant to your interests17:27
JayFjroll: ^ as well17:27
* jroll looks17:28
*** alexpilotti has quit IRC17:28
jroll+217:29
openstackgerritNisha Agarwal proposed openstack/ironic-specs: Discover node properties for iLO drivers  https://review.openstack.org/10300717:29
openstackgerritJay Faulkner proposed openstack/ironic-python-agent: Improve/add docstrings for CommandResult classes  https://review.openstack.org/12066317:33
JayF^ another easy review if someone wants to have a look17:36
*** lazy_prince is now known as killer_prince17:36
* JayF trying to tie up loose ends17:36
NobodyCamJayF: lines 62-64 seem strange to me RE: https://review.openstack.org/#/c/120663/7/ironic_python_agent/extensions/base.py17:38
NobodyCamyou're returning exactly what was just passed in?17:38
JayFI'm only responsible for the comment :)17:38
JayFfor the base command it does nothing17:38
jrollNobodyCam: that's the base class, check out join() for AsyncCommandResult17:39
JayFor for the sync17:39
JayFfor the asy... yeah17:39
*** ndipanov is now known as ndipanov_gone17:39
*** jcoufal has quit IRC17:40
*** Nisha has joined #openstack-ironic17:41
*** andreykurilin_ has joined #openstack-ironic17:42
jrollJayF: reviewed, left comments17:43
JayFwill fix real quick17:45
*** harlowja_away is now known as harlowja17:49
*** jistr has quit IRC17:52
NobodyCamhummm I think our spec repo may have the wrong license file: https://github.com/openstack/ironic-specs/blob/master/LICENSE is an Apache License while the specs reference the Creative Commons Attribution 3.0 Unported License..17:55
NobodyCamdevananda: ^^^ is that an issue?17:56
NobodyCamreally anyone17:56
jrollseems fine17:57
jrollapache license for code, CC for prose17:57
jroll(I think)17:57
jrollI would assume that a license on a single file overrides the license for the repo, also, but I am not a lawyer :)17:58
*** rushiagr is now known as rushiagr_away17:59
*** romcheg has quit IRC17:59
NobodyCamjroll: I ask because I saw this on the ML: http://lists.openstack.org/pipermail/openstack-dev/2014-November/051498.html18:00
jrolloh, hm18:01
jrollmaybe respond with "this is what we're doing in ironic, is this correct?"18:01
devanandasee next email in that thread, referencing Nova: http://git.openstack.org/cgit/openstack/nova-specs/tree/LICENSE18:10
devanandaso, yup, we're possibly doing it wrong18:11
*** jjohnson2 has quit IRC18:11
NobodyCamthats how I took that18:11
*** russellb has quit IRC18:11
*** jjohnson2 has joined #openstack-ironic18:11
devanandawhere "wrong" == "not like Nova"18:11
devanandaalso, I don't know why Apache v2 would be an inappropriate license here18:11
*** russellb has joined #openstack-ironic18:12
devanandagiven that each spec has a CC-BY-UL header, which overrides the repo-level LICENSE, I do interpret this to mean we've put an Apache license on any code in ironic-specs/18:12
devanandaand that is correct18:12
*** bradjones has quit IRC18:13
NobodyCamok so no update is required :)18:14
devanandajuno/ironic-ilo-virtualmedia-driver.rst is the only one missing a license header18:15
openstackgerritDevananda van der Veen proposed openstack/ironic-specs: Add missing license header to ironic-ilo-virtualmedia-driver  https://review.openstack.org/13716218:17
devanandaIANAL, but I think this is fine18:18
openstackgerritJay Faulkner proposed openstack/ironic-python-agent: Improve/add docstrings for CommandResult classes  https://review.openstack.org/12066318:18
JayFjroll: ^18:18
NobodyCamack Thank you devananda :)18:19
NishaJayF: jroll i was working on enabling agent-ilo for uefi. the other day you gave me the efi capable image location: http://stable.release.core-os.net/amd64-usr/current/coreos_production_rackspace_onmetal_image.img.bz2 ... but the image doesnt boot up in uefi mode18:22
Nishafor bios it works18:22
JayFI wasn't saying it was UEFI18:22
JayFI was saying it was GPT18:22
JayFand that you could use IPA to deploy an image with UEFI boot configured properly and it would work18:22
JayFI personally do not know where to find such an image, nor do I have hardware to test it18:22
*** davideagnello has quit IRC18:23
Nishathe system is in uefi boot mode. what else needs to be configured?18:23
Nishathe deploy iso goes thru, but the second boot up screen is stuck18:24
JayFmeaning you have to have the image setup properly18:24
*** davideagnello has joined #openstack-ironic18:24
JayFi.e. GPT with a fat partition with a .efi rom inside18:24
Nishaplease elaborate18:24
JayFExactly like I said before: if you took a system, used the installer to build out a UEFI booting linux18:25
*** sambetts has quit IRC18:25
JayFthen took an image of that disk18:25
JayFand used it as the image on another machine18:25
Nishabut the other day you said that it is all inbuilt inside the image18:25
JayFit would work and UEFI boot18:25
JayFit is!18:25
JayFan image can have multiple partitions18:25
Nishayes that's true18:26
JayFSo that means if you make an image with the right GPT partitions, and a UEFI boot rom in the proper place, all you need is a deployment method for full disk images18:26
JayFwhich IPA supports *today*18:26
Nishais there a way to create such image ?18:27
Nishafor agent driver alone also, i was thinking it cannot automatically switch the boot mode as per the desired capability18:28
JayFI honestly don't know :( That's handled by a different group at Rackspace18:28
Nishahmmmm how to qualify the agent ilo driver for uefi then?18:29
Nisha:(18:29
Nishaand even agent driver18:29
JayFIDK, I've been reviewing and trying to help but I don't have any UEFI capable hardware for testing18:29
*** sirushti has left #openstack-ironic18:29
Nishacan u test that on a vm? i think that should be same...just a suggestion not very sure18:30
*** alexpilotti has joined #openstack-ironic18:33
JayFI don't know.18:34
*** penick has joined #openstack-ironic18:34
jrollNisha: perhaps there is someone at your employer who can make an image that is capable of UEFI18:35
*** rushiagr_away is now known as rushiagr18:36
openstackgerritMerged openstack/ironic: Update 'Introduction to Ironic' document  https://review.openstack.org/13613618:36
Nishacan IPA deploy images apart from coreos?18:36
jrollIPA can deploy any full disk image18:36
jrollthe coreos image was an example of an image with GPT partitions18:37
Nishai asked this because even the ramdisk and kernel creation using docker uses coreos for its creation18:37
Nishahence asked if it can support any disk image18:38
jrollwe run the IPA ramdisk in CoreOS18:38
jrollfor deploying an image with ironic, IPA can write any full disk image18:38
jrollit just downloads the image and dd's it to the disk18:38
jrollnothing fancy18:39
jrollno partitioning18:39
jrolljust download and dd18:39
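What jroll describes — "just download and dd", no partitioning — amounts to streaming the full-disk image byte-for-byte onto the target device. A minimal sketch of that step (illustrative only; IPA's real deploy code also handles downloading, checksums, and error handling):

```python
# "Nothing fancy": copy a full-disk image byte-for-byte onto the
# target block device in fixed-size chunks, dd-style. The GPT layout,
# UEFI boot partition, etc. all come from inside the image itself.

def write_image(image_path, device_path, block_size=1024 * 1024):
    written = 0
    with open(image_path, 'rb') as src, open(device_path, 'wb') as dst:
        while True:
            chunk = src.read(block_size)
            if not chunk:
                break
            dst.write(chunk)
            written += len(chunk)
    return written
```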
rlooNobodyCam, devananda, wrt nova-specs license. Looks like nova changed to cc-by cuz it is docs not code: http://git.openstack.org/cgit/openstack/nova-specs/commit/LICENSE?id=d230458f333fba8e4f5f1908c51df8d4e5c14dc318:39
jrollhuh18:40
jrollbut what about the code?18:40
rloojroll: code we use apache18:41
rloobut I'm no lawyer ;)18:41
NobodyCamis this because the code is in a different repo?18:41
jrollthere is code in ironic-specs18:41
rloooh, you mean like the tests18:41
rloorussell referenced this thread: http://lists.openstack.org/pipermail/legal-discuss/2014-March/000201.html18:42
jrollright, I saw that18:43
jrollthat was before they made the repo afaict18:43
*** spandhe has joined #openstack-ironic18:43
rlooit seems like the repo licence should be apache then, but the specs themselves cc-by?18:43
jrollas it is today :18:44
jroll:)18:44
rloojroll: ha ha. I hadn't looked to see what we were doing, only read the scroll back! We're good then. I think ;)18:45
*** achanda has joined #openstack-ironic18:47
openstackgerritAlex Weeks proposed openstack/ironic-specs: Moved metrics spec to kilo  https://review.openstack.org/13717118:48
devanandarloo: except for one spec, which was missing a cc-by header. I proposed a patch to add it. we should also have a test for it18:48
NobodyCamdevananda: which should be landing now18:49
rloodevananda: +1 for a test. Do we have such a test (for apache) for our code?18:49
* rloo never looks at that 'stuff' at the top, when reviewing18:50
openstackgerritMerged openstack/ironic-specs: Add missing license header to ironic-ilo-virtualmedia-driver  https://review.openstack.org/13716218:50
*** achanda_ has joined #openstack-ironic18:52
*** achanda has quit IRC18:52
JoshNangrloo: we def have a test for missing apache license18:53
JoshNangi've hit it before when creating new files18:53
JayFnot in -specs18:53
rlooJoshNang: cool.18:53
JayFwhich is the context of the conversation currently :)18:53
rloosomeone could go through all the *-specs repos, adding a test for the license. I'm sure someone will...18:54
*** penick has quit IRC18:54
NobodyCamrloo: sounds like a good LHF bug18:56
rlooNobodyCam: yeah. That's why I'm sure someone will jump at it. Much as I'd like to do that, I figure I should look at the specs ;)18:58
*** harlowja is now known as harlowja_away18:58
NobodyCamJayF: landing 120663 now :)18:59
NobodyCamwoo hoo new meeting times are posted19:02
*** achanda_ has quit IRC19:04
NobodyCambrb... quick walkies19:04
*** achanda has joined #openstack-ironic19:04
*** achanda has quit IRC19:05
*** achanda has joined #openstack-ironic19:07
*** harlowja_away is now known as harlowja19:07
*** penick has joined #openstack-ironic19:08
dlaubehey guys, quick question about console in ironic19:09
*** jistr has joined #openstack-ironic19:09
*** aswadr has quit IRC19:10
dlaubewhen I run ironic node-get-console NODE_UUID,  property console_info comes back as None19:11
jrolldlaube: may depend on the driver? or might need to turn on console mode, I forget19:11
jrollwhat driver are you using?19:11
dlaubeare there some specific values I should be setting under the node's console_info in order to get that populated?19:11
dlaubepxe_ipmitool19:12
jrollok, that driver should work19:12
jrollI have no idea on the rest, never tried it :)19:12
dlaubeok19:13
dlaubeI appreciate the honest answer19:13
jrollI feel like there's a console state you need to flip, though19:14
dlaubeyou mean this?19:14
dlaubeironic node-set-console-mode 21119b95-d11a-4f95-b3c8-bf3514e06d77 true19:14
jrollnode-set-console-mode19:14
jrollyeah19:14
dlaubelet me paste output, 1sec19:15
dlaubehttp://pastie.org/private/bvcqxhtnlqluqklvporkg19:15
*** achanda_ has joined #openstack-ironic19:19
*** achanda has quit IRC19:20
*** pensu has quit IRC19:20
jrolldlaube: interesting, no idea :|19:20
rloodlaube: the paste output doesn't show you setting console mode to True?19:21
rloodlaube: forget that. had to scroll19:21
rloodlaube: might want to look at the logs then.19:21
openstackgerritMerged openstack/ironic-python-agent: Improve/add docstrings for CommandResult classes  https://review.openstack.org/12066319:21
NobodyCamthank you devananda19:23
devanandaadam_g, NobodyCam: do you recall, last cycle, the times when nova changes broke our gate? sdague is asking for specific cases of that19:23
adam_gdevananda, yeah im digging thru git now19:23
adam_ghttps://review.openstack.org/#/c/94043/ was one19:23
devanandaI know we kept some notes on the etherpad, but ..19:23
NobodyCamgah I think that history was wiped out19:24
devanandaadam_g: yup. which, sadly, I +1'd19:24
adam_gdevananda, i remember there being a really bad one when we were sitting together  in raleigh, but im having trouble finding it19:24
devanandaadam_g: I recall one that I thought changed test_ironic_api_contracts but now I can't find that either19:24
jrolldevananda: etherpad has history19:25
devanandajroll: yup. ever try digging through a long-lived pad's history?19:25
jroll:D19:25
jrollI just removed it last week though19:25
devanandaoh - FYI folks, I've updated the meeting time in the places and things19:26
jrolldevananda: oh, and apparently it only is showing history for today19:26
jrollthanks etherpad19:27
devanandajroll: wait for it ...19:27
adam_gdevananda, the only two i see that touched contract tests after they were added were https://review.openstack.org/#/c/68942/ and https://review.openstack.org/#/c/94043/19:27
jroll"Version 0 Saved November 25, 2014"19:27
*** agordeev has quit IRC19:29
*** agordeev has joined #openstack-ironic19:29
NobodyCamdevananda: https://review.openstack.org/#/c/116093/ this is one I think19:30
adam_gyeah ^19:30
adam_gboth 94043 and 68942 broke us19:30
* devananda trawls the etherpad19:32
devanandaadam_g: https://review.openstack.org/#/c/68942/ is the one I was thinking of -- but it looks like our -nv test passed??19:33
devanandapatch 3219:33
devanandacheck-tempest-dsvm-virtual-ironic-nv SUCCESS in 38m 56s (non-voting)19:33
devanandaI also found these looking through the etherpad19:35
devanandahttps://review.openstack.org/#/c/107562/19:35
devanandahttps://review.openstack.org/#/c/111538/19:35
devanandahttps://review.openstack.org/#/c/105599/19:35
devanandahttps://review.openstack.org/#/c/71557/19:35
adam_gdevananda, is there an associated bug with 68942? i dont remember how that broke us, if it did at all19:36
jrolldevananda: I'm guessing our tests don't call shutdown19:37
*** ChuckC has quit IRC19:40
*** Nisha has quit IRC19:43
NobodyCamI'm glad we saved that info :)19:45
*** penick has quit IRC19:49
*** achanda_ has quit IRC19:49
*** penick has joined #openstack-ironic19:49
*** penick has quit IRC19:51
*** penick has joined #openstack-ironic19:52
*** alexm__ has joined #openstack-ironic19:54
*** tchaypo has joined #openstack-ironic19:56
alexm__hey, why do we need to run nova-compute for ironic and how does the resource tracking work with that?19:59
jrollalexm__: because ironic appears to nova as a virt driver20:01
*** ChuckC has joined #openstack-ironic20:01
jrollif you use the ironic virt driver and the ironic host manager, there's code in those to handle resource tracking things20:01
alexm__so must I run one nova-compute per metal node?20:06
JayFOh absolutely not :)20:06
JayFthe nova compute that talks to ironic exposes each metal node as a hypervisor host though20:06
JayFso you have one nova compute, it can expose hundreds or thousands of "hypervisors" (bare metal hosts)20:06
alexm__yes makes more sense :)20:06
*** achanda has joined #openstack-ironic20:07
alexm__how do we register the available memory, cpu and local disk of a remote metal node? is this part of the extra data in the ironic node?20:07
tchaypoI seem to have hit a fun case where my vm is running at the top of driver.modules.ssh._power_off()20:07
JayFit's part of node.properties on the ironic node20:07
*** andreykurilin_ has quit IRC20:07
*** andreykurilin_ has joined #openstack-ironic20:08
tchaypobut dead by the time virsh destroy gets called a few lines down; so the attempt to destroy it fails, and that error bubbles up and ironic gives up on the node even though it’s now in the desired state20:08
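One way to handle the race tchaypo describes (the VM powers itself off between the power check in _power_off() and the virsh destroy call, so the destroy fails even though the node reached the desired state) is to classify that specific failure as success instead of bubbling it up. This is a hedged sketch, not the actual ssh driver code, and the libvirt error string is the one virsh commonly prints but should be verified:

```python
def destroy_succeeded(returncode, stderr):
    """Decide whether a `virsh destroy` attempt left the domain off.

    A zero return code means the destroy worked. A nonzero code whose
    stderr says the domain is not running means the VM died on its own
    between the power check and the destroy call: it is already in the
    desired (off) state, so the operation should count as a success
    rather than fail the whole power action.
    """
    if returncode == 0:
        return True
    return "domain is not running" in stderr.lower()
```

Any other failure (e.g. the hypervisor being unreachable) still returns False and should be raised to the caller as before.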
JayFand work to automatically populate that via introspection of the node is ongoing and mapped for kilo20:08
alexm__@JayF: something like this https://pypi.python.org/pypi/ironic-discoverd/ ?20:08
JayFthat is exactly what's being used for the in-band bit20:09
JayFdtantsur|afk is working on that20:09
alexm__ok thanks20:09
alexm__so when I add the node properties, the nova-compute will report those new resource stats for the scheduler?20:09
jrollyes20:10
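As JayF says above, the scheduler-visible resources live in node.properties on the ironic node, typically set with `ironic node-update <uuid> add properties/cpus=... properties/memory_mb=...` and so on. A small sketch of building the equivalent JSON PATCH body; the helper name is hypothetical, while the keys cpus, memory_mb, local_gb, and cpu_arch are the conventional properties the ironic virt driver reports to the nova scheduler:

```python
def properties_patch(cpus, memory_mb, local_gb, cpu_arch="x86_64"):
    """Build the JSON PATCH list that a node-update sends to register
    a bare metal node's resources under /properties, so nova-compute
    can report them for scheduling."""
    values = {
        "cpus": cpus,
        "memory_mb": memory_mb,
        "local_gb": local_gb,
        "cpu_arch": cpu_arch,
    }
    return [{"op": "add", "path": "/properties/%s" % key, "value": val}
            for key, val in sorted(values.items())]
```

For example, properties_patch(8, 16384, 500) yields four "add" operations, one per property, which is the shape the Ironic API expects for a node update.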
alexm__excellent, I’ll try this right away thanks guys20:11
*** achanda has quit IRC20:11
*** achanda has joined #openstack-ironic20:12
*** jgrimm is now known as zz_jgrimm20:13
*** anderbubble has quit IRC20:14
*** zz_jgrimm is now known as jgrimm20:18
*** jgrimm is now known as zz_jgrimm20:19
*** achanda has quit IRC20:19
*** achanda has joined #openstack-ironic20:21
*** rushiagr is now known as rushiagr_away20:22
*** andreykurilin_ has quit IRC20:24
*** andreykurilin_ has joined #openstack-ironic20:24
*** achanda has quit IRC20:24
*** achanda has joined #openstack-ironic20:25
*** ChuckC_ has joined #openstack-ironic20:28
*** dprince has quit IRC20:29
*** ChuckC has quit IRC20:32
*** igordcard has joined #openstack-ironic20:40
*** jistr has quit IRC20:41
*** anderbubble has joined #openstack-ironic20:47
devanandaanyone remember the wacky change to nova scheduler that broke us, then got reverted?20:47
JayFthe arch.canonicalize() ?20:48
*** mrda_away is now known as mrda20:50
devanandafound it - https://review.openstack.org/#/c/71557/20:51
devanandabroke ironic and tripleo with a subtle change to resource_tracker.py20:51
mrdaMorning all20:52
jrollmorning mrda20:52
*** zz_jgrimm is now known as jgrimm20:53
*** jgrimm is now known as zz_jgrimm20:53
NobodyCammorning mrda :-p20:55
rloodevananda: don't know if this is useful or not or if you got it: https://bugs.launchpad.net/nova/+bug/136915120:59
*** alexpilotti has quit IRC21:04
alexm__quick question after testing nova+ironic, what would be the most common reason of my instance sticking to BUILD/spawning state ?21:12
alexm__the scheduling phase seems to have succeeded, the instance data was set on the ironic node21:12
alexm__but it doesn’t actually trigger the pxe_ipmi deployment and doesn’t create the VIF21:13
devanandarloo: thanks. that wasn't on my list21:16
*** davideagnello has quit IRC21:25
*** achanda has quit IRC21:28
openstackgerritChris Krelle proposed openstack/ironic-specs: Add a test for license header  https://review.openstack.org/13721521:30
NobodyCamI'm sure there are better ways of doing ^^^ but there is one way21:30
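A minimal form of the check that patch proposes might scan each spec's opening lines for a CC-BY notice. Everything in this sketch (the file layout, the marker string, the helper name) is an assumption for illustration, not the contents of the actual review:

```python
import os

# Substring expected in the license header of every spec (assumed).
LICENSE_MARKER = "creative commons attribution"

def specs_missing_license(specs_dir, header_lines=10):
    """Return paths of .rst spec files whose first few lines lack a
    CC-BY notice, so a test can fail if the list is non-empty."""
    missing = []
    for root, _dirs, files in os.walk(specs_dir):
        for name in files:
            if not name.endswith(".rst"):
                continue
            path = os.path.join(root, name)
            with open(path) as f:
                head = "".join(f.readline() for _ in range(header_lines))
            if LICENSE_MARKER not in head.lower():
                missing.append(path)
    return missing
```

A unit test in the specs repo could then simply assert that specs_missing_license("specs/") returns an empty list.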
*** zz_jgrimm is now known as jgrimm21:32
*** hemna__ has quit IRC21:39
*** davideagnello has joined #openstack-ironic21:40
*** hemna__ has joined #openstack-ironic21:42
*** bradjones has joined #openstack-ironic21:47
*** Marga_ has joined #openstack-ironic21:49
*** davideagnello has quit IRC21:57
*** davideagnello has joined #openstack-ironic21:59
*** jjohnson2 has quit IRC22:05
*** ryanpetrello has quit IRC22:10
*** alexpilotti has joined #openstack-ironic22:20
*** Hefeweizen has joined #openstack-ironic22:22
*** achanda has joined #openstack-ironic22:28
*** achanda has quit IRC22:39
devanandammm, dentist appointment .... I'll probably be offline the rest of the day22:46
JayFgl, have a good evening22:47
NobodyCamhope its just a cleaning devananda22:47
*** andreykurilin_ has quit IRC23:01
*** achanda has joined #openstack-ironic23:03
*** linggao has quit IRC23:05
*** achanda has quit IRC23:17
Haomengalexm__: hi23:23
*** achanda has joined #openstack-ironic23:23
Haomengalexm__: can you check neutron logs, maybe neutron port update call failed23:24
HaomengNobodyCam: morning:)23:24
NobodyCammorning Haomeng :)23:24
HaomengNobodyCam: :)23:29
*** anderbubble has quit IRC23:29
*** mjturek has quit IRC23:36
mrdaHi Haomeng23:39
*** igordcard has quit IRC23:46
*** yuanying_ has joined #openstack-ironic23:50
Haomengmrda: morning:)23:53
*** yuanying has quit IRC23:54
*** davideagnello has quit IRC23:54
*** davideagnello has joined #openstack-ironic23:56
*** penick has quit IRC23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!