Monday, 2014-03-24

*** max_lobur has joined #openstack-ironic00:11
*** max_lobur has quit IRC00:11
*** max_lobur has joined #openstack-ironic00:11
*** max_lobur has quit IRC00:11
*** matsuhashi has joined #openstack-ironic00:25
*** max_lobur has joined #openstack-ironic00:31
*** max_lobur has quit IRC00:31
*** yongli has joined #openstack-ironic01:02
*** BadCub01 has quit IRC01:12
*** eghobo has joined #openstack-ironic01:23
*** nosnos has joined #openstack-ironic01:36
*** eghobo has quit IRC02:08
openstackgerritDan Prince proposed a change to openstack/ironic: Run ipmi power status less aggressively  https://review.openstack.org/8040002:28
openstackgerritDan Prince proposed a change to openstack/ironic: Run ipmi power status less aggressively  https://review.openstack.org/8040002:30
*** matsuhashi has quit IRC02:55
*** eghobo has joined #openstack-ironic03:13
*** matsuhashi has joined #openstack-ironic03:21
*** rameshg87 has joined #openstack-ironic03:38
*** ekarlso has quit IRC03:48
*** ekarlso has joined #openstack-ironic03:48
*** killer_p- has joined #openstack-ironic03:52
*** killer_p- is now known as killer_prince03:52
*** lazy_prince has quit IRC03:53
*** todd_dsm has joined #openstack-ironic04:05
*** killer_prince has quit IRC04:16
*** lazy_prince has joined #openstack-ironic04:18
*** lazy_prince is now known as killer_prince04:18
*** todd_dsm has quit IRC04:41
*** vkozhukalov has joined #openstack-ironic04:42
*** vkozhukalov has left #openstack-ironic04:45
*** killer_prince2 has joined #openstack-ironic04:52
*** killer_prince2 is now known as lazy_prince04:52
*** nosnos_ has joined #openstack-ironic05:16
*** pradipta_away is now known as pradipta05:18
*** nosnos has quit IRC05:19
*** vkozhukalov has joined #openstack-ironic05:40
*** vkozhukalov has left #openstack-ironic05:41
*** todd_dsm has joined #openstack-ironic05:52
*** matsuhas_ has joined #openstack-ironic06:00
*** matsuhashi has quit IRC06:01
openstackgerritJenkins proposed a change to openstack/ironic: Imported Translations from Transifex  https://review.openstack.org/7886206:07
*** nosnos has joined #openstack-ironic06:19
*** nosnos_ has quit IRC06:22
*** matsuhas_ has quit IRC06:31
*** matsuhashi has joined #openstack-ironic06:31
*** saju_m has joined #openstack-ironic06:35
*** mrda is now known as mrda_away06:45
*** saju_m has quit IRC06:59
*** max_lobur has joined #openstack-ironic07:08
*** saju_m has joined #openstack-ironic07:12
*** saju_m has quit IRC07:17
*** saju_m has joined #openstack-ironic07:17
*** branen_ has joined #openstack-ironic07:18
*** todd_dsm has quit IRC07:19
*** branen__ has quit IRC07:19
*** saju_m has quit IRC07:22
*** saju_m has joined #openstack-ironic07:23
*** saju_m has quit IRC07:30
*** saju_m has joined #openstack-ironic07:31
*** eghobo has quit IRC07:32
*** jistr has joined #openstack-ironic07:38
*** todd_dsm has joined #openstack-ironic07:57
*** athomas has joined #openstack-ironic08:21
*** tzumainn has joined #openstack-ironic08:26
*** ifarkas has joined #openstack-ironic08:35
*** ndipanov has joined #openstack-ironic08:41
*** lucasagomes has joined #openstack-ironic08:52
*** derekh has joined #openstack-ironic08:57
*** jistr has quit IRC08:59
*** martyntaylor has joined #openstack-ironic09:01
*** jistr has joined #openstack-ironic09:16
dtantsurMorning Ironic09:19
GheRiveromorning09:19
dtantsurCan anyone have a quick look at https://review.openstack.org/#/c/81770/ ? It is (hopefully) the last patch to run Devstack+Ironic on Fedora09:20
*** martyntaylor has left #openstack-ironic09:20
agordeevdtantsur, GheRivero morning!09:22
yuriyzmorning Ironic09:26
agordeevyuriyz: morning!09:28
*** pradipta is now known as pradipta_away09:28
lucasagomesmorning all :)09:36
agordeevdtantsur: are you in a hurry with that patch?09:41
agordeevlucasagomes: morning!09:41
dtantsuragordeev, no, nothing serious, just wanting to close this topic09:42
agordeevdtantsur: ah, you can add this link/topic to today's meeting agenda :)09:47
dtantsuragordeev, the thing is, I am pretty new here and I'm not sure how to add something to the agenda (btw, when and where is the meeting?)09:48
ifarkasdtantsur, https://wiki.openstack.org/wiki/Meetings#Ironic_.28Bare_Metal.29_team_meeting09:50
*** vkozhukalov has joined #openstack-ironic09:50
dtantsurifarkas, thnx. We may discuss Fedora status during "integration & testing" topic or in the end of the meeting09:53
mdurnosvistovMorning all! :)09:53
ifarkasdtantsur, sure. I think it fits better to the "integration & testing" part09:54
agordeevmdurnosvistov: morning10:01
*** dshulyak has joined #openstack-ironic10:09
*** vkozhukalov has left #openstack-ironic10:11
*** aignatov is now known as bucash10:20
*** bucash is now known as aignatov10:20
openstackgerritA change was merged to openstack/ironic: Correct version.py and update current version string  https://review.openstack.org/8132710:24
*** jistr has quit IRC10:24
*** jistr has joined #openstack-ironic10:27
*** romcheg has joined #openstack-ironic10:29
*** matsuhashi has quit IRC10:33
*** matsuhas_ has joined #openstack-ironic10:35
*** max_lobur1 has joined #openstack-ironic10:35
*** max_lobur has quit IRC10:36
*** vkozhukalov has joined #openstack-ironic10:54
*** rameshg87 has quit IRC10:55
*** max_lobur1 has quit IRC11:02
*** matsuhas_ has quit IRC11:07
*** athomas has quit IRC11:16
*** nosnos has quit IRC11:23
*** athomas has joined #openstack-ironic11:24
*** romcheg has quit IRC11:42
*** romcheg1 has joined #openstack-ironic11:42
*** romcheg has joined #openstack-ironic11:47
*** romcheg1 has quit IRC11:47
openstackgerritJenkins proposed a change to openstack/ironic: Updated from global requirements  https://review.openstack.org/7933411:48
*** matsuhashi has joined #openstack-ironic11:53
*** nosnos has joined #openstack-ironic11:54
*** ndipanov has quit IRC11:59
*** ndipanov has joined #openstack-ironic12:03
*** lucasagomes is now known as lucas-hungry12:05
*** max_lobur has joined #openstack-ironic12:08
*** romcheg has quit IRC12:11
*** matsuhashi has quit IRC12:14
*** matsuhas_ has joined #openstack-ironic12:17
*** romcheg has joined #openstack-ironic12:21
*** romcheg has quit IRC12:25
*** romcheg has joined #openstack-ironic12:31
*** lazy_prince has quit IRC12:31
*** romcheg has quit IRC12:34
*** romcheg has joined #openstack-ironic12:34
*** rloo has joined #openstack-ironic12:36
*** matty_dubs|brno is now known as matty_dubs12:46
*** vkozhukalov has quit IRC12:49
*** vkozhukalov1 has joined #openstack-ironic12:50
*** linggao has joined #openstack-ironic12:55
*** romcheg has quit IRC13:03
*** lucas-hungry is now known as lucasagomes13:07
*** matsuhas_ has quit IRC13:12
*** matsuhashi has joined #openstack-ironic13:13
jrollvkozhukalov1: hi13:13
*** ndipanov_ has joined #openstack-ironic13:13
jrollvkozhukalov1: etherpad isn't working for me, the cursor is jumping all over the place, so I want to make a couple of quick comments here13:14
*** romcheg has joined #openstack-ironic13:14
*** jbjohnso_ has joined #openstack-ironic13:14
jrollvkozhukalov1: devananda is correct in that a CMDB is far outside the scope of the agent project. I want to be clear that this is an *ironic* agent, not a generic agent framework.13:15
*** ndipanov has quit IRC13:16
*** matsuhashi has quit IRC13:17
jrollvkozhukalov1: also, sorry for butchering "command line" in the etherpad, again, etherpad isn't working right for me :(13:19
*** max_lobur has quit IRC13:22
*** max_lobur has joined #openstack-ironic13:23
*** rustlebee is now known as russellb13:23
*** nosnos has quit IRC13:24
openstackgerritRohan Kanade proposed a change to openstack/ironic: Adds max retry limit to sync_power_state task  https://review.openstack.org/7742013:24
vkozhukalov1jroll: hello, it is ok, external cmdb is not critical13:24
*** nosnos has joined #openstack-ironic13:25
vkozhukalov1jroll: let's leave it outside of scope13:25
vkozhukalov1jroll: I've just added some points to discuss on today meeting. https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting13:26
jrollvkozhukalov1: ok :) you can always have the CMDB talk to ir-api and the agent, to do what you need13:26
jrollcool13:26
jrollvkozhukalov1: LGTM13:27
*** nosnos has quit IRC13:29
vkozhukalov1jroll: the main question is granular procedural approach vs declarative (hard coded) approach, I believe both of them are compatible with one another.13:31
jrollI agree13:31
jrollI think granular APIs are a hard requirement13:32
jrolland if you have that, adding a "flow" sort of thing on top is simple13:32
*** tatyana has joined #openstack-ironic13:32
jrollbecause it's just a list of arbitrary commands to run13:32
jrollvalidation might be difficult, but should be doable13:33
jrollvkozhukalov1: I'd like to focus on the granular APIs first, the "flow" thing to me is just a nice to have13:33
vkozhukalov1jroll: yes, I agree that implementing "flow" over granular API is quite simple13:34
jrollcool :)13:35
*** tatyana has quit IRC13:36
*** tatyana has joined #openstack-ironic13:36
jrollvkozhukalov1: I'm also working on a draft for updating the agent wiki page - I'll wait until after the meeting to finish it, though13:36
vkozhukalov1jroll: another question about granularity and validation is: should particular tasks be dependent on others? For example, if you want to granularly install bootloader, it is certainly not always possible when you did not configure disks before. It seems not so trivial to follow such inter-dependencies.13:38
*** jgrimm has quit IRC13:39
jrollvkozhukalov1: I would argue that the API should not handle those dependencies13:40
jrollwith the bootloader example13:40
jrollthe install bootloader endpoint should first check if it is possible13:40
jrolle.g. if there are no partitions, error out13:40
jrollbut it should not actually do the disk setup13:40
jrolldoes that make sense?13:40
openstackgerritRohan Kanade proposed a change to openstack/ironic: Adds max retry limit to sync_power_state task  https://review.openstack.org/7742013:42
*** jrist has joined #openstack-ironic13:44
vkozhukalov1jroll: we are on the same page here, I also think that driver should just error out, not doing anything if it is not possible13:45
*** max_lobur has quit IRC13:46
jroll:)13:46
*** max_lobur has joined #openstack-ironic13:46
vkozhukalov1AFK13:48
*** rloo has quit IRC13:54
*** rloo has joined #openstack-ironic13:54
openstackgerritDan Prince proposed a change to openstack/ironic: Run ipmi power status less aggressively  https://review.openstack.org/8040013:54
*** rloo has quit IRC13:55
*** rloo has joined #openstack-ironic13:55
*** vkozhukalov1 has left #openstack-ironic13:57
*** nosnos has joined #openstack-ironic13:58
*** matsuhashi has joined #openstack-ironic13:59
*** vkozhukalov1 has joined #openstack-ironic13:59
*** nosnos has quit IRC14:00
*** nosnos has joined #openstack-ironic14:01
*** nosnos has quit IRC14:05
NobodyCamGood Morning Ironic14:09
romchegMorning NobodyCam14:09
lucasagomesmorning NobodyCam romcheg  :)14:10
romchegMorning lucasagomes14:10
*** nosnos has joined #openstack-ironic14:11
NobodyCamMorning romcheg and lucasagomes :)14:11
NobodyCamhow was your weekends14:12
*** vkozhukalov1 has left #openstack-ironic14:12
lucasagomesgreat :) just came back from prague14:13
lucasagomesawesome city14:13
NobodyCam:)14:13
romchegI've returned from Moscow. Border control guys there are assholes :)14:13
*** nosnos has quit IRC14:14
romchegHow was yours?14:14
*** krtaylor has quit IRC14:15
*** vkozhukalov1 has joined #openstack-ironic14:16
*** matsuhashi has quit IRC14:16
*** vkozhukalov1 has left #openstack-ironic14:16
*** vkozhukalov1 has joined #openstack-ironic14:16
*** rwsu has joined #openstack-ironic14:17
jrollmorning all :)14:18
*** max_lobur1 has joined #openstack-ironic14:19
*** jgrimm has joined #openstack-ironic14:21
*** max_lobur has quit IRC14:21
*** vkozhukalov1 has left #openstack-ironic14:22
NobodyCamwas good ... installed a water misting system on the rv for the trip to atalana14:22
NobodyCammorning jroll14:22
jrollthat'll be a heck of a drive14:23
NobodyCamyep leaving april 1 :)14:24
jrollnice :)14:25
*** vkozhukalov1 has joined #openstack-ironic14:26
*** vkozhukalov1 has left #openstack-ironic14:26
*** toure has joined #openstack-ironic14:28
*** vkozhukalov1 has joined #openstack-ironic14:29
*** matsuhashi has joined #openstack-ironic14:32
*** lucasagomes_ has joined #openstack-ironic14:33
*** lucasagomes has quit IRC14:34
*** lucasagomes_ is now known as lucasagomes14:37
*** toure has quit IRC14:37
* NobodyCam calls tmobile14:37
*** toure has joined #openstack-ironic14:39
*** saju_m has quit IRC14:39
*** ndipanov_ has quit IRC14:41
*** krtaylor has joined #openstack-ironic14:44
NobodyCamwoo hoo /me has a phone again :-p14:45
NobodyCamso much for updating the sim card over the weekend14:45
*** ndipanov_ has joined #openstack-ironic14:53
*** vkozhukalov1 has left #openstack-ironic15:03
*** dhellmann_ is now known as dhellmann15:05
*** vkozhukalov has joined #openstack-ironic15:08
NobodyCamThank you lucasagomes :)15:10
lucasagomesNobodyCam, np! thank u :)15:10
*** todd_dsm has quit IRC15:11
*** vkozhukalov has quit IRC15:12
*** toure has quit IRC15:13
*** toure has joined #openstack-ironic15:13
*** eghobo has joined #openstack-ironic15:16
*** ndipanov_ has quit IRC15:20
*** matsuhashi has quit IRC15:25
*** todd_dsm has joined #openstack-ironic15:27
NobodyCambrb15:27
*** matsuhas_ has joined #openstack-ironic15:28
*** matsuhas_ has quit IRC15:33
*** ndipanov_ has joined #openstack-ironic15:34
*** mkerrin has joined #openstack-ironic15:35
*** rloo has quit IRC15:39
*** rloo has joined #openstack-ironic15:40
devanandag'morning, all15:48
NobodyCamgood morning devananda :)15:48
lucasagomesmorning derekh15:52
lucasagomesdevananda,15:52
lucasagomessorry derekh :)15:52
derekhlucasagomes: np15:52
*** BadCub01 has joined #openstack-ironic15:53
*** blamar has joined #openstack-ironic15:59
*** blamar has joined #openstack-ironic16:00
*** ifarkas has quit IRC16:04
* NobodyCam starts a devtest run...16:08
NobodyCambrb16:08
*** dwalleck has joined #openstack-ironic16:11
*** dwalleck has quit IRC16:15
*** eghobo has quit IRC16:16
devanandaromcheg: hi there! welcome back :)16:19
romchegMorning devananda16:19
*** romcheg has left #openstack-ironic16:19
*** romcheg has joined #openstack-ironic16:19
romchegwhoops, accidentally left the chat16:19
*** tatyana has quit IRC16:21
*** eghobo has joined #openstack-ironic16:22
romcheg*is trying to understand the math here https://review.openstack.org/#/c/80400/6/ironic/drivers/modules/ipmitool.py*16:23
*** toure has left #openstack-ironic16:23
jrollheh16:23
*** tatyana has joined #openstack-ironic16:27
devanandaugh16:34
NobodyCamugh?16:34
devanandathat does not need to be called recursively, on every iteration of the loop16:34
jrollnot to mention it will infinitely recurse...16:36
jrollx ** 2 will never == -116:36
lucasagomesjroll, but that's total_time16:37
jrollOH16:37
lucasagomeshe's checking if retry == -116:37
* jroll runs off to get coffee :)16:37
lucasagomesbut anyway, that logic is a bit confuse16:37
devanandawhen retry == 0, the next call will == -1 and not recurse further16:38
devanandait's really bad logic. and the variable "total_time" is not total_time at all16:38
lucasagomeshehe yeah16:38
devanandaand there's a simple formula for sum-of-a-sequence16:38
lucasagomesshould we have 2 parameters for it? one for max sleep time16:39
lucasagomesand the second for number of retries?16:39
devanandano16:39
devanandajust one -- total time to retry16:39
romchegIsn't that too implicit?16:39
devanandathen do an exponential back-off starting until that total time is reached16:39
devananda1, 2, 4, 8, ....16:39
devanandaor use fibonacci sequence. similar effect16:40
lucasagomes1,4,9,16...16:40
devanandan^2 vs 2^n :)16:40
lucasagomesheh16:41
devanandapoint is to avoid DOS a BMC with polling the power state16:41
devanandaand we don't need complicated logic, multiple CONF options, or recursive methods to do that16:42
lucasagomesyeah, indeed16:43
jroll+116:43
devanandaromcheg: https://review.openstack.org/#/c/81336/ could use another pair of eyes16:43
romcheg*looks*16:43
devanandahmm16:44
devanandalucasagomes: you -1'd https://review.openstack.org/#/c/81340/ - let's chat for a minute16:44
lucasagomesdevananda, sure16:44
lucasagomesI put some comments there16:44
lucasagomesI agree that the rescue api is incomplete/not mature16:44
devanandalucasagomes: ah. so i wasn't clear -- i meant, we have no REST API for it16:44
lucasagomesdevananda, ahh that's correct16:45
devanandalucasagomes: the driver API may or may not be (in)complete -- it's untested as no driver has even started implementing it16:45
lucasagomesdevananda, right16:45
devanandalucasagomes: so I wanted to hide the driver interface so it isn't presented in the list of supported interfaces16:45
devanandain the API16:45
NobodyCambbiafm post bbt walkies...16:45
devanandawithout any REST API to call it16:45
lucasagomesdevananda, right16:46
lucasagomesbut when it's implemented we would need to revert the change that that patch is doing16:46
devanandalucasagomes: dependency of https://review.openstack.org/#/c/81336/ whcih you +2 :)16:46
devanandayes16:46
lucasagomesso I would just leave it there for now16:46
lucasagomesironic didn't graduate so our api will still change16:46
devanandasure16:46
lucasagomesdevananda, I agree with 81336, which shows rescue as not supported16:47
lucasagomesbut 81340 to me sounds like, removing part of the plumbing that will be needed in the future16:47
devanandalucasagomes: so when someone implements an out-of-tree driver based on the Icehouse release *and* adds a driver.rescue() interface16:47
devanandalucasagomes: they will have been mislead since there wouldn't be any REST API to invoke it16:47
devanandaI'm happy to revert 81340 as soon as Juno opens16:48
devanandabut I think it should be hidden in Icehouse release16:48
lucasagomesright... so wouldn't be better to implement the rest api instead of ripping that off?16:48
devanandawell. if we had done that a month ago, yes :)16:48
lucasagomesheh16:48
devanandaRC1 is this week, I think16:49
*** harlowja has joined #openstack-ironic16:49
lucasagomesok grand, we also need to remove from the python client16:49
devanandathere's still a bunch on https://launchpad.net/ironic/+milestone/icehouse-rc1 that haven't landed yet16:49
lucasagomeswhich on previous versions will still showing rescue in the validate16:49
devanandahm?16:49
devananda"on previous versions" -- i'm not sure what you mean16:50
lucasagomesdevananda, ah ignore that, I thought that in the client we were using print_dict16:51
lucasagomesand had a list with the driver interfaces that needs to be listed16:52
lucasagomesbut we are not16:52
lucasagomes(which is good)16:52
lucasagomeshttps://github.com/openstack/python-ironicclient/blob/master/ironicclient/v1/node_shell.py#L16216:52
lucasagomesdevananda, right... so agreed, for the icehouse release we can hide that interface16:53
lucasagomesdevananda, also, can I have ur opnion on https://blueprints.launchpad.net/ironic/+spec/credentials-keystone-v3 ?16:53
NobodyCamlucasagomes: +1 from me on that BP16:54
lucasagomesNobodyCam, :) yeah I was doing some experimentation here with keystone, seems quite flexible16:55
devanandalucasagomes: hm, interesting idea16:55
lucasagomesyeah16:56
lucasagomesI don't think ironic should be managing credentials at all16:56
NobodyCamdevananda: remove all passwords from our db16:56
devanandalucasagomes: i've been toying with ironic using reversible AES to store credentials16:56
devanandaI agree we need to handle credentials more sanely than we do today16:56
lucasagomesyeah16:56
devanandalike, seriously. that's something I should have done a while back16:56
lucasagomesbut there's also a problem with fragmentation, cause keystone is the service that is supposed to manage this sort of things16:57
devanandakeystone manages credentials for openstack services16:57
devanandahmm16:57
lucasagomesyeah, but with v3 they made it flexible enough to other services to store other credentials within keystone as well16:58
devanandait's also a question of how tightly coupled should ironic be with keystone. if we put all creds there, then it's not possible to start ironic until after keystone starts16:58
NobodyCami like offloading the cerds to keystone, I would be ok with keeping basic password support as we have now so that ironic could be used by "devs" outside of the openstack env for testing16:58
devanandaif we support >1 location for cred storage, it needs to be pluggable. and then we've just re-implemented keystone16:59
dtantsurI need your help :) I'm trying to follow recently-merged guide for Ironic on devstack (and as usual on Fedora 20), and I'm stuck with the problem: instance is in "building" state for looong-long time, then fails16:59
lucasagomesdevananda, maybe we should offer some flexibility like... you can store, ipmi_password/username, ipmi_credential_id?17:00
dtantsurthe only thing I found in logs grepping by ERROR was: http://paste.openstack.org/show/74162/17:00
dtantsur(traceback converted to human-readable form: http://paste.openstack.org/show/74164/ )17:00
dtantsurany advice is appreciated17:00
devanandadtantsur: ERROR ironic.conductor.manager [-] Timeout reached when waiting callback for node17:01
devanandadtantsur: check syslog for messages from tftp17:01
NobodyCamdtantsur: how did you build your deploy ramdisk?17:01
romchegdevananda: I've looked through 81336 and I think we can land it17:01
dtantsurdevananda, yeah, that should be the cause, though exception also bothers me17:01
romchegagree?17:01
JayFIf I understand correctly, you said v3 Keystone is what you need to store creds for other services. Is it OK for Ironic to take a hard dep on Keystone v3 API?17:01
*** vkozhukalov has joined #openstack-ironic17:01
dtantsurNobodyCam, devstack: BM_DEPLOY_FLAVOR="-a amd64 fedora deploy-ironic"17:01
NobodyCamok17:02
lucasagomesJayF, that's need to be discussed17:02
lucasagomesJayF, right now we have a mix of v2 and v317:02
lucasagomeswhich seems kinda messy17:02
NobodyCamdtantsur: do you have access to console on the node your deploying?17:02
devanandadtantsur: if syslog shows the node being served kernel/ramdisk by tftp, and also fetching the deploy token, then it should have worked. my guess is one or both of those did not get pulled17:02
lucasagomesmaybe we should do -just like other services: heat - and use v3 only17:02
devanandalucasagomes: i thought we are only eystone v2 today17:03
*** todd_dsm has quit IRC17:03
lucasagomesdevananda, the common/keystone.py supports v3 as well17:03
JayFI think we'd strongly prefer to not have a hard dep on Keystone v317:03
devanandaah17:03
lucasagomesto get things from the catalog17:03
dtantsurdevananda, is it sudo journalctl | grep -i tftp ? Than nothing intriguing..17:03
lucasagomesJayF, right, any particular reason?17:03
devanandadtantsur: i dont know journalctl. /var/log/syslog ?17:03
JayFlucasagomes: it significantly raises the bar for integration with existing clouds17:04
devanandaJayF: hm. afaik, we need keyv3 for signed URLs, both in swift and ironic, which is how we are looking at doing any sort of secure callback from teh agent17:04
dtantsurdevananda, sudo grep -rni tftp /var/log/ gives nothing (journalctl seems to be replacement for syslog in Fedora)17:04
devanandadtantsur: are you running a tftp service?17:04
lucasagomesJayF, yeah indeed... but afaik services like heat only supports v3 no?17:04
lucasagomesso we won't be first17:05
lucasagomesdevananda, yeah, the trust thing for the ramdisk we might need v3 as well (I think)17:05
devanandadtantsur: look in devstack/lib/ironic for 'tftp' and see which service should be running17:05
JayFIMO heat taking the dependency wasn't a decision I would've made/supported, but I work on Ironic not heat :)17:05
lucasagomesheh17:06
JayFI'm just saying it might be worthwhile to have it be more than a passing IRC conversation to add that dependency17:06
lucasagomesJayF, oh sure17:06
lucasagomesI mean, the credentials thing is not even approaved or anything17:06
lucasagomeswe will discuss it more17:06
lucasagomesand see the impacts17:06
JayFThanks, that's what I wanted to be sure of17:07
* NobodyCam starts devtest again,17:07
devanandalucasagomes: so one reason I suspect folks will object to using keystone for IPMI creds17:07
devanandalucasagomes: is privilege separation17:07
devanandadirect BMC access (and the credentials thereof) are generally much more tightly controlled than access to the tools which *use* those accounts to provision hardware17:08
devanandaeg, separate accounts to grant "nova boot --flavor baremetal" vs "ironic node-show | grep password"17:09
devanandaright now, we only support very rudimentary access for this via keystone v217:09
devanandabut we do support separating those two actions so that "users" of baremetal can't see the credentials17:09
lucasagomesright17:10
lucasagomesso yeah the keystone thing right now is admin only17:10
dtantsurdevananda, I see no signs of tftp in ps, nor in services; seems like tftp should be operated by xinetd, and there's config file for it. It is supposed to listen on port 69, right?17:10
lucasagomesironic would only store a ref to the credentials17:10
*** romcheg has quit IRC17:10
lucasagomesbut only admins will be able to list/get it17:10
devanandadtantsur: it should definitely be running. that's (one of) the problem(s)17:10
lucasagomeshttps://github.com/openstack/keystone/blob/master/etc/policy.json#L5917:10
dtantsurdevananda, netstat only gives   udp6       0      0 :::69                   :::*17:10
dtantsurdevananda, maybe it only binds to ipv6 endpoint?17:11
NobodyCamdtantsur: that should be both v4 and v617:11
devanandalucasagomes: IMHO, ironic should continue to "own" BMC creds, but use keyv3 policy to limit access to them even further17:11
devanandalucasagomes: perhaps to teh point of preventing retrieval via the REST API17:11
devanandalucasagomes: eg, write-but-dont-read17:11
devanandaDC ops often have security/compliance requirements around this stuff17:12
devanandamy feeling right now is taht stashing the BMC creds in keystone is going to violate those compliance reqs17:13
NobodyCamdevananda: thats why I like pushing the creds on to keystone17:13
devanandaNobodyCam: huh?17:14
NobodyCamyou think so17:14
*** ifarkas has joined #openstack-ironic17:14
devanandaNobodyCam: yes, because it would put all the credentials in one place17:14
NobodyCamkeystones job is creds so they are keeping up on how to store passwords / keys17:14
NobodyCamoh17:14
devanandait's clearly worth bringing this up on the ML :)17:15
lucasagomesdevananda, +117:15
dtantsurdevananda, NobodyCam: added `flags = IPv4` to xinet.d configuration, now I have xinet.d listen on ipv4 endpoint as well, will try again17:15
JayFYeah but there needs to be some separation of responsibilities; user auth and BMC auth are different use cases and security cases IMO17:15
devanandaJayF: right17:15
JayFdevananda: perhaps a topic for the meeting? But maybe needs to be hashed out more first17:15
lucasagomesyeah I think for the meeting next week we could talk more about it17:15
lucasagomesit's too fresh right now17:16
lucasagomesneeds more thought17:16
NobodyCamlucasagomes: is what dtantsur  just did a fedora requirment?17:16
* lucasagomes reads17:16
NobodyCamadded `flags = IPv4` to xinet.d configuration,17:17
dtantsurNobodyCam, I've seen similar issues on boxes with both IPv4 and v6 supported, when service uses common approach: enumerate all possible endpoints and try to bind, unless one succeed17:18
JoshNangI added a blueprint to describe the agent driver we're working on. I should have some preliminary code pushed up shortly. Perhaps its another topic for the meeting? https://blueprints.launchpad.net/ironic/+spec/agent-driver17:18
lucasagomesNobodyCam, hmm I didn't do that in my env17:18
*** epim has joined #openstack-ironic17:18
jrollJoshNang: nice17:18
jrollJoshNang: that depends on some un-merged reviews, right?17:19
jrolloh, I should finish reading first ;)17:19
NobodyCamdevananda: lucasagomes want a creds topic on the agenda, as JayF says to early?17:20
JoshNangjroll: yup. maybe some other ones i forgot about too17:20
lucasagomesNobodyCam, maybe fft17:20
NobodyCam:)17:20
JayFNobodyCam: I'd say next week after a ML thread this week?17:21
lucasagomesor for the meeting next week, with a better idea/understanding of the implications17:21
jrollJoshNang: I think it's just that one17:21
lucasagomesJayF, sounds good17:21
NobodyCam:) well hold off to see what ML comes up with17:21
devanandaJayF, lucasagomes: either of you want to start the ML thread?17:23
lucasagomessure17:23
devanandalucasagomes: also, did you change your mind on https://review.openstack.org/#/c/81340/2 ?17:24
devanandalucasagomes: *thanks, aslo ...17:24
lucasagomesdevananda, oh yeah, I was about to remove my -117:24
lucasagomesdevananda, there's that nit in the commit message, but I think it's ok, if u rebase that or put another patch set up u can fix17:25
devanandalucasagomes: ah. I'll fix taht nit if you're ready to +2 afterwards17:26
devanandajust want to get it in today if we're going to get it in :)17:26
lucasagomesdevananda, don't need to, I will +2 that with the nit17:26
lucasagomescause it's not like a problem :)17:26
devanandaNobodyCam: did you decide what you want to do with https://review.openstack.org/#/c/80376/ ?17:26
devanandalucasagomes: cheers17:26
devanandaoh, and https://review.openstack.org/#/c/81336/ needs a +A17:27
devanandalooks like romcheg +2'd but didn't +A17:27
lucasagomesdevananda, done17:28
devanandaty17:28
NobodyCamdevananda: I'd like to switch to kwystone, but will wait for ML, if we keep creds in our db then the reverisable aes stuff will be needed, and a what to filter out the password/key from node show17:29
* NobodyCam looks17:29
NobodyCamlucasagomes: beet me to the +a17:30
NobodyCam:)17:30
lucasagomesNobodyCam, I will try to send the email to the ML today or tomorrow morning tops17:30
NobodyCams/what/way/17:30
NobodyCamlucasagomes: :) TY :)17:30
lucasagomessomeone knows whether the signed url for the ramdisk needs v3 or not?17:33
devanandayes17:33
devananda, AFAIK ,signed url's is only v317:33
lucasagomesright17:33
lucasagomesthanks17:33
JayFThat is code that doesn't exist atm though :)17:34
JoshNanglucasagomes: signed url to download the glance image from swift?17:34
JayFJoshNang: I think they're talking about what was mentioned at the mid-cycle meetup of using signed urls, provided by keystone, to auth ironic to the agent and back17:34
lucasagomesyup, neither signed url or our credentials in keystone exist, I just want to gather the ideas we have in the moment that touches v317:34
devanandaJoshNang: that too. but we've also been discussing a signed url for the agent to POST back to ir-api17:34
devanandaJayF: yes17:35
lucasagomesJayF,*17:35
JoshNanggotcha17:35
lucasagomesJoshNang, yeah for the vendor_passthru, pass_deploy_info() that we use in pxe right now17:35
JoshNangdefinitely a very useful security feature17:35
lucasagomesyeah, the auth_token in the /tftp needs to go away :)17:36
*** jistr has quit IRC17:36
devanandalucasagomes: another interesting one for you https://review.openstack.org/#/c/78912/17:36
JayFProbably something we might want to wait to actually implement in the agent+driver until after Atlanta summit though, since it's not strictly required for a working prototype17:36
lucasagomesdevananda, will see17:37
devanandaJayF: ++17:37
NobodyCamlucasagomes: devananda: https://review.openstack.org/#/c/82180/17:37
devanandawe may well end up with a session for key v3 integration discussion @Atlanta17:37
NobodyCamadd driver-show to client17:37
devanandasounds like there are multiple angles to consider and things to implement17:38
NobodyCamdevananda: seems like it would be worth it17:38
lucasagomesdevananda, +117:39
NobodyCamhumm NodeLocked: Node 24e2d627-5a8d-4619-abfe-14f54d428783 is locked by host ubuntu, please retry after the current operation is completed.17:40
lucasagomesdevananda, would be nice to have some ceilometer guys there as well to rethink the push-ipmi-data-to-ceilometer as well17:40
devanandalucasagomes: i thought that bp was abandoned // haomeng was working on integration17:40
devanandaso that ir-cond would push notifications to ceil17:40
lucasagomesdevananda, right, yeah he was working on it17:41
JayFDo all Openstack services only push statistics to Ceilometer? Are there any integrations with non-OS monitoring tools like statsd?17:41
lucasagomesdevananda, but I think we agreed with the ceilometer guys that ironic will do it, and the main reason was because ironic owns the ipmi credentials17:41
lucasagomeswhich might change in that discussion17:41
lucasagomesso it would be nice to have their presence there as well17:41
devanandayuriyz: on 81763, any reason not to set action_timeout to 0?17:42
devanandayuriyz: afaict, this is only controlling the sleep() time for a mocked function anyway17:42
lucasagomesJayF, hmm I think ceilometer has a central agent that pulls statistics as well17:42
lucasagomesJayF, and the central agent sends it via RPC to the collectors17:42
lucasagomesthat then write to the database17:43
lucasagomeswhat Ironic would do, would send it via RPC to the collectors directly AFAIUI17:43
JayFJust thinking about how I would metric Ironic in an enviornment without Ceilometer.17:43
JayFIt sounds like the answer might have to be, run a Ceilometer and have it report into another monitoring system17:44
*** athomas has quit IRC17:45
devanandalucasagomes: re: "ironic owns the creds" -- sorta. the real reason AFAIR is "ironic owns the BMC access channel"17:45
devanandalucasagomes: adding more services with access to the BMC means more complicated ops and security17:46
lucasagomesdevananda, sure, but we could do it in a controller way using trust etc17:46
devanandalucasagomes: the original BP for ceilo was to use *local* IPMI access via an agent on the host. which clearly doesn't apply to ironic's instances17:46
devanandaJayF: that depends onw hat you want to monitor17:47
lucasagomesdevananda, but that original approached changed no?17:47
devanandaJayF: resource utilization (how many nodes used / available) vs stats of each node (cpu temp, fan speed, etc)17:47
lucasagomesafter the discussion u guys had in hong kong17:47
devanandalucasagomes: right17:48
JayFdevananda: I'd probably want to monitor both, but do different things with them (i.e. capcity planning vs failure prediction)17:48
lucasagomeswhich would be the ironic conductor that would retrieve the data and send it to the ceilometer collectors?17:48
JayFdevananda: which is why I'd want the flexibility to get the data, especially the second type of data, into another system outside of openstack17:48
lucasagomesJayF, which ceilometer might fit well with their alarms17:49
JayFlucasagomes: to be blunt; my A+ preference would be to cut ceilometer out of the conversation completely, but given that's not possible, I'm just looking for the most efficient way to get the data out and into another system17:50
lucasagomesright ack17:50
devanandaJayF: my ideal would be for ironic to expose an API for the retrieval of said information17:51
devanandaJayF: iirc, and it's been a while so imbw, ceilometer didn't want to poll to get it -- they wanted ironic to push the notifications out on some periodicity17:52
devanandaNobodyCam: https://review.openstack.org/#/c/77939/ could use eyes17:52
JayFPushing fits more with other monitoring systems17:52
jrolldevananda: sounds expensive unless ironic is storing that data already17:52
JayFbut I'd love it to be pluggable17:52
devanandajroll: right17:53
* NobodyCam looks17:53
JayFwhere I could, for instance, ship to statsd and/or graphite (in addition to? || in place of?) ceilometer17:53
devanandaso. pushing means it needs to be pluggable17:53
jrollindeed17:53
JayFHas there been much thought on this? Any blueprints, etc? I'll gladly toss up a blueprint for metrics gathering17:53
devanandaas longas the message format is standardized across all drivers (which is one of the driver API req's) then I think that should be doable17:53
devanandayes17:53
devanandahttps://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer17:54
*** derekh has quit IRC17:54
JayFI'll add some comments to that then. Thanks17:54
NobodyCamdevananda: jenkins failed on that review...?17:54
devanandaand https://review.openstack.org/#/c/72538/17:54
NobodyCamahh rebase17:54
devanandaJayF: there's a _long_ discussion there. worth reading up on before you comment17:54
JayFdevananda: absolutely :) reading the context is part of the process. Thanks for the links17:55
NobodyCamyuriyz: wanta tosss up a quick rebase on https://review.openstack.org/#/c/77939/17:55
*** max_lobur1 has quit IRC17:58
Shrewsdevananda: for bug 1295870, i was sort of thinking following the model nova uses, which can be seen in this class: https://github.com/openstack/nova/blob/master/nova/image/glance.py#L16417:59
Shrewsdevananda: i don't think putting retry logic in the client itself is the better route. no other client code does that, from what i can tell. And I think a developer using the client lib would want tighter control over that, anyway18:00
* NobodyCam makes a bagel b4 meeting ... brb18:03
ShrewsNobodyCam: I should really invest some money in that bagel company you're supporting  :)18:05
devanandaShrews: ++18:06
NobodyCamlol :)18:06
NobodyCamhttp://www.saraleebread.com18:08
openstackgerritA change was merged to openstack/ironic: Change JsonEncodedType.impl to TEXT  https://review.openstack.org/8158318:09
openstackgerritA change was merged to openstack/ironic: Imported Translations from Transifex  https://review.openstack.org/7886218:09
lucasagomesShrews, nice! yeah calling manually the _retry_if... is not ideal18:10
Shrews agreed18:11
*** zul has quit IRC18:13
openstackgerritJarrod Johnson proposed a change to stackforge/pyghmi: Fix missing delay_xmit argument breaking power wait requests  https://review.openstack.org/8256918:15
*** zul has joined #openstack-ironic18:16
openstackgerritA change was merged to openstack/python-ironicclient: Add support for 'driver-show' command  https://review.openstack.org/8218018:17
jrollvkozhukalov, agordeev, if either of you are around, here's my draft for updating the agent wiki page: https://etherpad.openstack.org/p/282Ocf7oXR18:18
jrolland anyone else interested ^18:18
jrollof course, some of that may change after today's meeting18:18
jrolls/may/will likely/ :)18:18
vkozhukalovjroll: having a look18:19
NobodyCamspeaking of meeting ... anyone have or want anything on the agenda that thats not there already?18:20
*** lucasagomes is now known as lucas-afk18:21
jrollagenda lgtm :)18:21
devanandaeasy review for another core to approve: https://review.openstack.org/#/c/81555/18:23
* NobodyCam clicks18:24
NobodyCam:-p18:24
dtantsurIn the meanwhile, I've tried to deploy instance, while tftp should be listening on ipv4 udp:69, but with the same result. And again no signs of tftpd working in any logs..18:26
dtantsurlucas-afk, did you say you use Fedora as well?18:27
NobodyCamno tftp at all? ie deployment k&R not served?18:28
devanandadtantsur: do you have network bridge properly set up? you should see DHCP BOOTP request coming from the VM18:28
devanandadtantsur: either tail dnsmasq's log or tcpdump that network18:28
NobodyCamdtantsur: real hw or vm?18:28
dtantsurdevananda, I'll see18:29
dtantsurNobodyCam, vm, libvirt18:29
*** Hefeweizen has quit IRC18:30
NobodyCamdtantsur: dies node get any dhcp info... ie an ip18:30
NobodyCams/dies/does/18:30
dtantsurNobodyCam, `DHCPACK(tap1d4fb9d1-3e) 10.0.0.5 52:54:00:4f:8e:7a host-10-0-0-5`  <-- seems like yes18:31
dtantsurNobodyCam, and DHCPRELEASE follows in 30 minutes18:32
NobodyCamdtantsur: any firewall running on host?]18:32
dtantsurNobodyCam, I thinks firewalld. Let me check..18:34
NobodyCamoh if so is port 69 open?18:35
dtantsurmaybe not, trying to figure out18:37
* dtantsur never liked iptables >_<18:38
NobodyCamlast I used iptables was centOs but I used something like iptables -I INPUT -p udp --dport 69 -j ACCEPT18:39
devanandaNobodyCam: another one for you - https://review.openstack.org/#/c/81340/218:39
NobodyCamto open the port18:39
NobodyCamlol thats a good reason : because the API for this has not been created yet18:40
dtantsurok, trying again with port opened18:41
NobodyCamdtantsur: :)18:41
NobodyCamshould work better that way :)18:41
*** greghaynes has joined #openstack-ironic18:42
NobodyCamdevananda: you ok with the spelling error lucas pointed out? just commit message so I'm ok with it :)18:42
dtantsurbtw, I get {"message": "'HTTPInternalServerError' object has no attribute '__name__'", "code": 500, "created": "2014-03-24T18:41:50Z"} while trying to delete failed instance, but that should be another story...18:44
NobodyCamdtantsur: ya sounds unreleated to your current deploy issue18:45
NobodyCamundercloud deployed from seed with ironic but getting error in undercloud..18:46
NobodyCamTiming out after 600 seconds:18:46
NobodyCamCOMMAND=ironic chassis-create -d devtest_canary18:46
NobodyCamI think I know why...18:46
openstackgerritA change was merged to openstack/ironic: Stop incorrectly returning rescue: supported  https://review.openstack.org/8133618:47
*** martyntaylor has joined #openstack-ironic18:48
*** martyntaylor has left #openstack-ironic18:49
NobodyCamten minute bell!18:50
dtantsurNobodyCam, great, some tftp activity in logs and node can be pinged, but ssh gives "connection refused" (better than "no route to host" already!)18:52
NobodyCamdtantsur: what is the status18:53
NobodyCamthe initial deployment ramdisk will not have ssh18:53
dtantsurNobodyCam, "spawning" for now. Should I wait more?18:54
NobodyCamyes wait18:54
NobodyCamcan you watch the nodes console?18:54
dtantsurNobodyCam, not sure how18:55
NobodyCamits ok18:55
NobodyCamwatch the status field18:55
*** tatyana has left #openstack-ironic18:56
devanandadtantsur: what's the status in ironic's API?18:56
devananda"spawning" is from nova-api18:56
*** romcheg has joined #openstack-ironic18:56
NobodyCamdtantsur: ironic node-show18:56
NobodyCamor just ironic node-list18:57
dtantsurdevananda, NobodyCam provision_state        | wait call-back18:57
dtantsur target_provision_state | deploy complete18:57
*** lucas-afk is now known as lucasagomes18:57
*** agordeev2 has joined #openstack-ironic18:58
*** romcheg has quit IRC18:58
devanandadtantsur: that looks good. tail ir-cond log18:58
lucasagomesdtantsur, yeah I do, haven't tested that devstack patch tho18:58
NobodyCamdtantsur: nova is waiting for the node to ping back and say start deploy18:58
*** romcheg has joined #openstack-ironic18:58
devananda^ s/nova/ironic/18:58
NobodyCamdoh TY devananda :)18:58
devanandadtantsur: you should see in the tftp logs, in addition to kernel & ramdisk, that the token file is also fetched18:58
devanandadtantsur: depending on how slow nested virt is for you, it may be several minutes before the deployment resumes18:59
dtantsurdevananda, "/tftpboot/token-..."? Yes, seems like it was fetched18:59
dtantsurdevananda, ok, thanks you. I'll try not to panic a bit more :)19:00
*** harlowja has quit IRC19:01
*** max_lobur has joined #openstack-ironic19:02
*** mrda_away is now known as mrda19:02
*** harlowja has joined #openstack-ironic19:03
*** romcheg has quit IRC19:09
*** romcheg has joined #openstack-ironic19:09
*** yonglihe_ has joined #openstack-ironic19:15
*** yongli has quit IRC19:16
*** romcheg1 has joined #openstack-ironic19:22
*** romcheg has quit IRC19:24
openstackgerritA change was merged to openstack/ironic: Hide rescue interface from validate() output  https://review.openstack.org/8134019:26
dtantsurTo panic again: still does not work :(19:29
dtantsurand I'm going to file a bug about cleaning up after failure: http://paste.openstack.org/show/74179/ http://paste.openstack.org/show/74164/19:30
*** romcheg1 has quit IRC19:38
*** romcheg has joined #openstack-ironic19:38
*** epim has quit IRC19:42
openstackgerritA change was merged to openstack/ironic: Fix traceback hook for avoid duplicate traces  https://review.openstack.org/8155519:46
dtantsurcreated https://bugs.launchpad.net/ironic/+bug/129691819:49
dtantsurlifeless, you mentioned something about nova delete not working? I have troubles with deleting failed instances - they just stay in "deleting" state. May it be related?19:51
lifelessdtantsur: thats the symptom19:52
lifelessdtantsur: though it fails for running instances too for me19:52
dtantsurlifeless, nova show for such instance gives me {"message": "'HTTPInternalServerError' object has no attribute '__name__'", "code": 500, "created": "2014-03-24T17:15:50Z"}19:53
dtantsuris it the same for you?19:53
lifelessdunno right now19:54
lifelessI've context switched away from Ironic until we get consensus on the n-n startup issue19:55
dtantsurok, will try to investigate as well19:56
lifelessn-c, I mean19:58
lifelessdtantsur: https://bugs.launchpad.net/ironic/+bug/129550319:59
jroll12:59:24        JayF | I think we should take jroll's proposed wiki page, make it the wiki page, and put arch discussions in there/in a blueprint20:00
jrollI agree20:00
agordeev2+120:00
devanandaJayF: so etherpads are, IMO, good for near-real-time discussions20:00
devanandaJayF: not so much for long async design sessions20:00
dtantsurlifeless, thanks, will look into this as well20:00
devanandaJayF: but BPs are even worse for that20:00
vkozhukalovjroll: devananda: let's remove all that stuff about CMDB20:00
JayFI completely agree20:00
jrollvkozhukalov: +120:00
*** JoshNang has quit IRC20:00
NobodyCambrb20:00
JayFSo what's the best way to go? I suggest codifying the Minimum-viable-architecture in the wiki, working towards implementing that in a prototype for the summit, then at the summit hashing it out further20:01
*** JoshNang_ has joined #openstack-ironic20:01
jrollmakes sense to me20:01
devanandaJayF: that sounds good. do you have that MVP drafted somewhere already?20:01
jrolldevananda: https://etherpad.openstack.org/p/282Ocf7oXR20:01
lucasagomeswhat about some clarification on the procedural vs declarative API?20:01
JayFjroll: you can hash out the architecture stuff more in that though, right? and pull references to the old etherpad?20:01
vkozhukalovone of the main questions is about procedural vs declarative approach20:02
jrollJayF: yes, I can20:02
jrollok, so20:02
jrollis anyone opposed to a fine-grained API with all available commands?20:02
devanandalifeless: ok, so the problem seems to actually be here: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L100820:02
devanandalifeless: which is out of scope for us to be able to change20:02
vkozhukalovis everyone ok with having both of those approaches simultaneusly?20:02
jrolland is anyone opposed to adding an additional endpoint to submit multiple commands?20:02
jrollI think we should have both20:02
devanandalifeless: it's not the nova.virt.ironic.driver:init_host() method taht's failing -- it's compute.manager :(20:02
*** max_lobur has quit IRC20:03
jrolllucasagomes: I think we should support both - I'll add that to https://etherpad.openstack.org/p/282Ocf7oXR20:04
lucasagomesjroll, cheers20:04
vkozhukalovjroll: i still think we need to have a minimal list of drivers as well, let's move one by one and decide which of those mentioned here  https://etherpad.openstack.org/p/IronicPythonAgent we really need.20:06
jrollvkozhukalov: I mentioned them in the other etherpad20:06
jrollnot which functionality they cover, but which drivers we should have20:06
jrolland I can list out which functions they should support, if that's helpful20:07
vkozhukalovjroll: ok, see, line 3620:07
jrollyes20:07
devanandavkozhukalov: jroll: one thing to consider about the agent -- ironic's driver API and the agent API will converge over time20:07
jrolldevananda: perhaps. I think the agent API will always be more fine-grained20:08
devanandaany action that we need to perform on hardware will a) need to be expressed in some way via teh REST and Driver APIs20:08
lifelessdevananda: yes, I know :(20:08
devanandaand b) likely be expressed in other driver APIs as well20:08
lifelessdevananda: I have an idea though which you may hate20:08
*** romcheg has quit IRC20:08
lifelessdevananda: which is to be evil20:08
devanandalifeless: ehh....20:08
jrolldevananda: eh, maybe you're right. I can definitely see them converging. maybe not 100% but close.20:09
*** JoshNang_ has quit IRC20:09
devanandajroll: what we do with the agent, vendors will do in hardware20:09
devanandajroll: update firmware? yea, some vendors can do taht directly via the BMC20:10
devanandasame for build raid, etc20:10
jrollsure20:10
NobodyCamlifeless: evil? how so?20:10
JayFI think it's possible that the agent might be able to do things that other drivers might find difficult/impossible over time20:11
lifelessdevananda: https://review.openstack.org/#/c/81959/ + https://review.openstack.org/#/c/82414/ + https://review.openstack.org/#/c/81627/20:11
JayFbut that doesn't exclude those things from being a part of the larger ironic api if some driver comes along that can implement them, good20:11
lifelessdevananda: should let you a) get a seed ironic that Just Works20:11
devanandajroll: not 100% identical, but the agent and driver APIs will necessarily converge (since ir-cond needs a way to tell the driver to do things on the hardware)20:11
lifelessdevananda: and b) see the undercloud fail20:11
jrollJayF: at a minimum, the agent driver could implement them :P20:11
jrolldevananda: right, I agree20:11
JayFjroll: exactly :D20:11
devanandaJayF: yes. and things which the upstream agent can do which vendor drivers _cant_ will encourage vendors to start _addign_ those functions to their hardware20:12
JayFexactly20:12
JayFLook at this -> we need a whole ramdisk + some python agent just to update your bios. Don't you just want to make it a call via the BMC?20:12
JayFThat's why I like getting to a full prototype of things working together. Almost certainly we can learn more from a working example than from talking about what a working example might look like ;)20:13
lifelessJayF: am I wrong to say please nononono I can audit the ramdisk and python code20:13
lifelessJayF: I can't audit vendor code, and we *know* its often got problems20:13
devanandalifeless: you have different priorities than hw vendors ;)20:13
JayFlifeless: In my experience, you sometimes might be exec()'ing a vendor binary to flash the firmware or update the bios20:14
lifelessdevananda: arguably not:)20:14
JayFlifeless: so the only thing auditable in those cases are the scaffolding we've built to enable it20:14
devanandaand almost all the time, we'll be relying on vendor tooling to do the actual plumbing anyway20:14
lifelessJayF: yes but at least I'm not gambling they get network security correct20:14
vkozhukalovanother question which is still open for me is: how is conductor going to do inventory? is it supposed to be implemented as periodic task?20:14
lifelessJayF: cipher suite 0.20:14
devanandavkozhukalov: two approaches have been proposed so far20:14
devanandavkozhukalov: 1) default PXE ramdisk which POSTs enrollment data20:15
devanandavkozhukalov: 2) driver API to "scan" for unregistered hardware20:15
jroll^20:15
JayFlifeless: I just think we'll have won if we can get most vendors to start cryptographically signing their firmwares and configs and things of that matter. That way the path doesn't matter as much.20:15
devanandaJayF: ++20:15
jrolldevananda, vkozhukalov: I think we could support both methods.20:15
jrolls/could/should20:15
*** JoshNang_ has joined #openstack-ironic20:16
devanandajroll: ++should20:16
NobodyCambrb20:16
*** romcheg has joined #openstack-ironic20:18
*** romcheg has left #openstack-ironic20:18
lifelessdevananda: so by evil I mean tarpitting the ironic calls c.m makes20:18
*** romcheg has joined #openstack-ironic20:18
*** lucasagomes is now known as lucas-dinner20:18
lifelessdevananda: but can we go big picture for a second ?20:18
devanandalifeless: sure20:19
lifelessdevananda: isn't the idea of Ironic that nova instances won't  be owned by a nova compute in quite they way they are today ?20:19
lifelessdevananda: e.g. three n-cs, three ironics, one fails, there shouldn't be any big drama20:19
devanandalifeless: yes, except that will require rearchitecting some chunks of n-cpu, n-cond, n-sched20:19
lifelesssure sure20:19
lifelessbut if thats the big picture20:19
lifelessperhaps we can do some compromise stuff today with that in mind20:19
lifelesse.g.20:19
lifelessthe failure in list_instances20:20
vkozhukalovjroll: devananda: for me they are to different tasks 1) discovery - list of nodes available 2) inventory - hardware details20:20
JayFvkozhukalov: What use is a list of nodes without the details with them?20:21
lifelesswhat if list_instances was also ring based (using the nova knowledge of running n-c)20:21
devanandavkozhukalov: perhaps there is a terminology difference. "discovery" for us means "find new hardware that is not yet known by ironic"20:21
vkozhukalovjroll: devananda: 1) discovery - heartbeat 2) inventory - hardware info exposed via API20:21
jrollvkozhukalov: yes. that's why we should support both.20:21
devanandavkozhukalov: discovery != heartbeat20:21
lifelessdevananda: agh, got too detailed. Let me make a pad.20:21
devanandalifeless: please do. I dont believe nova has any hash ring or knowledge of ir-cond's ring20:22
jrollvkozhukalov: so, discovery is so far an unsolved problem, and I'd like to punt on that for now.20:22
lifelesshttps://etherpad.openstack.org/p/ironic-nova-friction20:23
devanandalifeless: that said, i /think/ i see where you're going, broadly... eager to see it more clearly20:23
vkozhukalovdevananda: heartbeat > discovery, right?20:23
jrollvkozhukalov: heartbeat, the hardware info is send on the *first* heartbeat and stored on the node object in the db20:23
devanandavkozhukalov: heartbeat == "an idle agent informing ir-cond that it is still alive"20:23
jrollvkozhukalov: further heartbeats just say "I'm alive"20:23
devanandajroll: "hw info is sent on first POST"20:23
devanandajroll: to clarify my view, that POST is not a heartbeat20:24
jrollvkozhukalov: (I should mention that the "first heartbeat" is not actually a heartbeat)20:24
vkozhukalovjroll: ok, now i see20:24
jrolldevananda: agreed, bad wording on my part20:24
devananda:)20:24
devanandajroll: do you have a pad / flow diagram of the agent's init process?20:25
jrollvkozhukalov: for "inventory", I think that may be outside the scope of ironic itself, but I do like the idea of exposing an API endpoint in both ironic and the agent20:25
devanandajroll: eg, from DHCP BOOT through ... until it is finally idle20:25
*** romcheg1 has joined #openstack-ironic20:25
jrolldevananda: I think so but not positive, let me poke around20:26
devanandaas far as inventory and exposing hw specs -- to me, the answer is simple: stash any details that ironic doesn't need in node.properties or node.extra20:26
jroll+120:26
devanandair-api already exposes those20:26
devanandaso there's NOTHING to do :)20:26
jroll:)20:26
devanandajroll: thanks. i'd like to understand how yhou see the auth during agent startup (both for a known and an unknown node)20:26
*** romcheg has quit IRC20:27
jrolldevananda: we use a trusted network for authentication right now20:27
*** harlowja_ has joined #openstack-ironic20:27
vkozhukalovdevananda: jroll: we definitely need to have detailed diagram about heartbeat/discovery/inventory flow20:27
JayFdevananda: I think we're enforcing security of the network agents come on, rather than in the agent itself right now20:27
*** romcheg has joined #openstack-ironic20:28
jrollvkozhukalov: except for exposing a "list hardware" api call, we're not in the business of inventory management20:28
lifelessdevananda: soime thoughts there, but stepping away for a sec we have a contractor here needs shown around20:28
devanandalifeless: ack20:28
JayFdevananda: if, for instance, I pass some option or PXE config based on MAC onto a network, any device can spoof that mac and get the 'credentials' which means truly authenticating an agent on a trustworthy network is borderline impossible20:28
*** notq has joined #openstack-ironic20:29
devanandaJayF: unless SDN // MAC filtering on the switch's ports20:29
devanandaJayF: then yes. but we're not there yet :)20:29
JayFdevananda: exactly, and at that point you're completely relying on the security of the network20:29
*** epim has joined #openstack-ironic20:29
NobodyCamlifeless: question, is there any reason each element that needs it could not check if keystone is setup and if not call a keystone init from a O-R-C script?20:30
JayFdevananda: not saying we shouldn't add layers eventually, just saying I think there's, at least for a little while unless there's some more clever solution I haven't thought of/heard yet, an intrisic requirement that the PXE network be secured20:30
*** romcheg2 has joined #openstack-ironic20:30
*** romcheg1 has quit IRC20:30
devanandaJayF: i agree that there is that assumption today20:31
*** harlowja has quit IRC20:31
devanandaJayF: but it's one taht i think we should aim to get away from20:31
JayFI'm just curious as to what could happen in the future, even, to change that assumption20:31
JayFNot really urgent to know now, I just don't see a path away from that requirement20:32
devanandaack20:32
JayFunless you change from 'pxe' to some more authenticated mechanism to transmit the agent and/or configs (like virtual media, or some fancy UEFI secure remote booting thing that may not exist yet but would theoretically be awesome)20:32
devanandaJayF: example: ^20:32
devanandayep20:33
*** romcheg has quit IRC20:33
devanandawork is in flight for exactly that20:33
jrollhow will that work?20:34
devanandahypothetically20:36
devanandadriver creates disk w/ secure token, mounts disk via VM channel20:37
devanandadriver powers on node20:37
devanandanode PXE boots generic ramdisk20:37
devanandathen uses secure token to auth back to ironic20:37
devananda<end>20:37
devanandasame process can be extended for validating signatore of the PXE and user images, firmware image, etc. it's a step towards UEFI support20:38
jrollhmm, ok20:38
jrollwhat is VM channel?20:38
jrollvirtual media?20:39
devanandavirtual media channel20:39
JayFso there's an implict requirement for hardware support for mounting remote media?20:39
devanandanot part of IPMI spec -- but nearly all vendors have one20:39
devanandayes20:39
jrollhmm20:39
devanandathis is one of the main benefits taht hw vendors are looking to drive20:39
devananda*get in their drivers20:39
devanandathat's not to say "assume PXE net is secure" is an invalid model -- it's fine for a lot of situations20:40
devanandabut as we think about APIs and architeture, we should avoid limiting ourselves to ^20:40
JayFwell it's the only model without hardware cooperation of some kind it seems :C20:40
devanandato be pedantic, we kinda need hardware cooperation to do /any/ of this :p20:41
jrollhahaha20:41
JoshNang_:D20:41
lifelessok back20:41
JayFipmitool chassis power on20:41
JayF# no20:41
devanandahehehe :)20:41
* JayF has seen BMCs almost that ornery20:41
lifelessNobodyCam: say there are three undercloud nodes20:41
lifelessNobodyCam: which one should do the init-keystone initialisation ?20:41
* jroll afk for a few20:42
NobodyCamlifeless: hummm.. yes odd on they would all try at the same time20:42
NobodyCams/odd/odds/20:44
vkozhukalovgood night guys, tomorrow going to draw heartbeat/discovery sequence diagram, and think  we are almost agreed about main points20:45
*** linggao has quit IRC20:45
NobodyCamnight vkozhukalov20:45
devanandacomstud: around? want to continue the compute.manager:init_host discussion from last week? we're jotting notes at https://etherpad.openstack.org/p/ironic-nova-friction20:46
lifelessNobodyCam: so, no, not without some careful plumbing/thought.20:46
*** max_lobur has joined #openstack-ironic20:46
NobodyCamlifeless: would be neat of there was a hash ring type check the undercloud could do..20:47
NobodyCamso only one  would attempt to register20:47
NobodyCam:-p20:47
lifelessNobodyCam: we have a etherpad for that20:47
NobodyCamwas just a thoiught I had walking th dogs20:47
NobodyCamoh link?20:47
devanandavkozhukalov: g'night! thanks!20:47
devanandalifeless: re: hash ring for tripleo things -- wouldn't it be nice if we had a quorum manager? ;)20:48
devanandasomething like, oh, zookeeper maybe ...20:49
lifelessdevananda: then we have the same bootstrap problem for that :)20:49
*** vkozhukalov has quit IRC20:49
devanandahehe20:49
lifelessdevananda: we're looking at some plumbing to help, and yes, I think perhaps we need to make a consistent API and facility for that in openstack but thats a later battle.20:49
NobodyCam:)20:50
devanandalifeless:  so a problem with nova.virt.ironic.driver:list_instances merely returning [] when unable to auth/reach ironic/etc is that this will lead to very abberant behavior later on20:51
comstuddevananda: Half around... making/eating lunch, but I can multitask20:52
devanandalifeless: short of that, i'm not sure what you could be proposing20:52
lifelessdevananda: just finishing the analysis, bear with me :)_20:52
lifelessI may be on crack20:52
lifelessthis code is a problem20:54
lifeless        for instance in local_instances:20:54
lifeless            if instance.host != our_host:20:54
russell_hlifeless: "Nova-compute is intrinsically HA with federated state mirrored into a central scheduler + DB" <- that seems wrong21:02
lifelessrussell_h: ok! how so21:03
*** jbjohnso_ has quit IRC21:03
russell_hlifeless: I mean, the statements about state mirroring are correct, but the conclusion that its HA I think is wrong21:03
russell_htoday, to make nova-compute HA you would need to implement some sort of failover or master-election on top21:03
russell_hand there are definitely a lot of challenges along the way21:03
lifelessrussell_h: if you have 5 libvirt kvm hypervisors and one fails, the system as a whole remains available21:03
*** JoshNang_ is now known as JoshNang21:04
russell_hlifeless: ah, I see what you mean. You mean that collectively, the nova-compute layer is HA?21:04
lifelessrussell_h: no, there is no master in n-c, its federated, not replicated state.21:04
lifelessright21:04
russell_hok, gotcha, agreed21:04
lifelessindividual VMs aren't HA with any of the current hypervisors21:04
comstudrussell_h: There is a way to do HA for nova-compute21:04
lifelessbut n-c isn't a hypervisor, its an abstraction.21:04
comstudit's kinda hacky, but we think it works21:04
lifelesscomstud: running live-migrate and stopping pre-pivot ?21:04
comstudI'm talking about nova-compute itself21:05
comstudrun N of them21:05
comstudset their 'host' all to the same thing21:05
comstudCONF.host21:05
lifelesscomstud: oh right - so yeah thats one of the options here for ironic actually21:05
lifelesscomstud: but big concerns about scale21:05
comstudyeah, i'm just talking in the ironic context21:05
lifelesslike, 10K machines with Ironic, do you want them all reporting all instances ?21:05
comstudnot really!21:05
comstudbut21:05
comstudthat's what cells is for21:06
comstud:)21:06
* comstud hides21:06
lifelesscomstud: so not :)21:06
comstudso is21:06
russell_hwhat we need is a hashring service21:06
lifelesscomstud: I thought cells was for scaling DB / rabbit /scheduler  usage ?21:07
comstudlifeless: it's a general way to break up work21:07
comstudBut yes, I still don't like all nova-computes talking to ironic and getting all instances21:07
lifelesscomstud: its very manual though, right? you have to decide how many cells21:07
comstudit's not really a solution for that.. for that part I'm somewhat kidding21:07
comstudyes21:08
comstudyou break up hosts into cells manually... or you can wrap config with something automatic21:08
lifelessyes, so - not dynamic :)21:08
lifelessanyhooo21:08
comstudi don't think you want 10K nodes in a single cell either way though21:09
comstudbut we can ignore that21:09
comstudI think you need a solution regardless21:09
comstudcoincidentally i was just talking to dansmith about this re: nova conductor21:11
comstudtrying to find a way to perhaps have N nova-conductors manage a set of M nova-computes21:11
lifelessit doesn't do that already?21:12
lifelessI thought the whole *point* of nova-conductor was to scale DB access separately to computes (as well as the security aspects)21:12
comstudwell, right now there's no need to have any conductor own a particular compute21:13
comstudbut we're talking about maybe doing that with respect to periodic tasks21:13
comstudso yes, we have that today without any 'ownership'21:13
comstud(M:N)21:14
lifelessfor things with no real locality of reference, that seems fine to me21:15
lifelessanyhow, hash rings++21:15
lifelesscomstud: you're looking at https://etherpad.openstack.org/p/ironic-nova-friction ?21:15
mrdacomstud: so formalising the relationship between computes and conductors.  Where *would* cells fit into that discussion?21:16
comstudlifeless: I haven't looked at that, no21:16
comstudmrda: It doesn't21:16
mrda:)21:16
comstudcells is mostly supposed to be a manual "i assign certain nodes to cells" and they don't flip21:17
comstudbecause there can be layer 2 boundaries and so forth21:17
*** jbjohnso_ has joined #openstack-ironic21:19
devanandaN n-cpu with a static 1:M relationship to nodes invalidates the HA which ir-api/ir-cond services provide21:19
*** agordeev2 has quit IRC21:19
mrdahmmm21:20
devanandaand N n-cpu with no relationship to nodes, where each n-cpu reports all nodes, will confound and overwhelm the scheduler21:20
devanandaand only 1 N-cpu instance invalidates nova's HA21:20
devanandatherefor: we must remove n-cpu! :-D21:20
devanandalifeless: actually, what's the harm in running ony 1 n-cpu instance?21:21
devanandalifeless: if we assume it is easy to start a new one if the existing one fails21:21
devanandacan heat ensure that we have 0 or 1 copies of n-cpu running at any given time, but no more than 1?21:22
lifelessdevananda: means we need to wrap it in pacemaker, which is sad.21:22
lifelessdevananda: no, heat cannot.21:22
devanandadarn21:22
lifelessdevananda: heats granularity is the APIs it orchestrates(*)21:22
lifeless*: this is changing a bit with software config, but not to solve this aspect yet21:22
lifelessdevananda: also scale21:23
devanandaright. until we test it, I suspect scale of n-cpu wont be taht much of an issue21:23
devanandasince it's merely greenthreads waiting on API calls with a small amount of python in the middle21:23
lifelessdevananda: it has a (green)thread pool21:23
devanandaCPU's scale well21:23
devanandamake the pool bigger :)21:23
lifelessdevananda: R U SRS?21:23
devanandawe've talked about the scaling of ir-cond :: nodes since conductors do a lot of IO21:24
devanandaand in nova-bm taht same IO pressure is on n-cpu21:24
devanandabut not the case with nova.virt.ironic21:24
devanandaor am i missing something?21:24
devanandacomstud: roughly, what's teh scale factor for n-cpu :: n-cond ?21:25
lifelessdevananda: I'm worried about efficieny of things like list_instances21:25
lifelessdevananda: and the interaction with e.g. scheduler etc there, which partitioning amongst n-cs would address.21:25
devanandalifeless: you are referrign to the work done inside n-cpu when it gets the list of instances back from ir-api?21:27
devanandalifeless: or the latency in getting said list?21:27
devananda*are you21:27
lifelessdevananda: yes, yes and then what it hands to conductor etc.21:28
lifelessdevananda: all those codepaths are designed for up to hundreds of VMs, not thousands+21:28
*** romcheg has joined #openstack-ironic21:29
devanandaah. so you think CPU would become the bottleneck then. gotcha21:29
lifelessconcerned21:29
lifelessdata will show :)21:29
devanandaindeed21:29
comstuddevananda: not sure that's been determined21:30
*** romcheg1 has joined #openstack-ironic21:30
comstuddevananda: it's a lot higher than it should be right now... beacuse DB calls in conductor all block21:30
devanandacomstud: <facepalm>21:30
comstudhehe ya.. well they do everywhere21:31
*** romcheg2 has quit IRC21:31
comstudit's just that spreading the load across all computes is actually better right now21:31
comstudbut anyway... fixes coming hopefully soonish21:31
lifelessdevananda: so _init_instance21:32
devanandaok, task at hand. relationship between each n-cpu host and ironic nodes21:32
lifelessdevananda: have you read through that ?21:32
lifelessdevananda: its basically recovering from things that ironic either doesn't support (e.g. migrations) or partially applied local logic like deletes.21:33
lifelessI think, if we have 2 n-cs with the same hostname that we *DO NOT WANT* any _init_instance stuff happening, as its just a massive race condition waiting to happen21:33
devanandalifeless: right21:33
*** romcheg has quit IRC21:34
openstackgerritJay Faulkner proposed a change to openstack/ironic: Set good defaults for heartbeat interval & timeout  https://review.openstack.org/8261521:34
*** romcheg has joined #openstack-ironic21:34
devanandalifeless: do we ever want any of that stuff to happen (eg, if we had 1 n-cpu and restarted it)21:35
devanandai think the answer is yes21:35
lifelessdevananda: it would be a nicety - e.g. if n-c is killed mid-delete21:35
devanandaright21:36
lifelessfinishing the cleanup without a user having to run 'nova delete' again21:36
lifelessbut, in principle, its no different than if the delete threw an exception and n-c wasn't restarted.21:36
devanandabut some steps in there dont make sense21:36
*** romcheg1 has quit IRC21:37
lifelessso21:38
lifelesswe want destroy-evacuated-instances to be a no-op21:38
lifelesswe want init-instance to run but only on one n-c21:38
lifelessand the rest is a wash21:39
openstackgerritJay Faulkner proposed a change to openstack/ironic: Set good defaults for heartbeat interval & timeout  https://review.openstack.org/8261521:39
lifelessso I've put my evil plan in the etherpad ;)21:40
lifelessdevananda: plug_vifs for instance doesn't make any sense21:41
devanandalifeless: the more i read _init_instance() the more i think none of that needs to be run for ironic21:41
devanandalifeless: a few things (clean up a failed delete && retry any pending reboots) make sense but are not strictkkly necessary21:42
lifelessright21:42
devanandaso it's clearner to just avoid the whole thing21:42
devanandaexcept i dont know that we actually can do that :(21:42
lifelesswe could use a different compute manager - etc/nova/nova.conf.sample:#compute_manager=nova.compute.manager.ComputeManager21:43
jrollupdated the agent wiki, if anyone is interested: https://wiki.openstack.org/wiki/Ironic-python-agent21:43
jrollnot 100% done but it's most of the way there21:44
JayFjroll: we might wanna get the source for that PDF and s/teeth-agent/ironic-python-agent/ ;)21:44
devanandaenfi?21:44
jrollJayF: oops21:44
jrollJayF: also, blamar should have the source for that PDF21:45
lifelessdevananda: E No F* Idea.21:45
devanandaahh21:45
devanandaE_NFI21:45
comstudI'd avoid a different compute manager.. that starts to get into hacky territory21:46
comstudalthough we're trying about making compute pluggable at a higher layer at some point21:46
comstudI'd try to hold out for that21:46
lifelesscomstud: we just need to neuter one method21:46
comstudThings like cells, vmware, hyperv, ironic all fit into a model where an external thing is managing the real nodes/hosts21:47
lifelesscomstud: and we're looking at 'how to make this usable for I', not long term plans.21:47
comstud_init_instance?  i missed why it needs neutered21:47
comstudsure21:47
lifelesscomstud: init_host actually, thats 90% irrelevant/problematic. _init_instance is problematic for two reasons.21:48
comstudIt may do a bunch of unnecessary things, but there's nothing you can just choose to 'fake' in the driver?21:48
lifelesscomstud: a) with 3 N-C's with the same hostname we'll run _init_instance *concurrently* for the same instances on the different N-Cs21:48
comstudwe can look at moving some of _init_instance into the driver layer or something21:48
comstudyeah ok21:48
devanandacomstud: several methods called in _init_instance are also called elsewhere. we can't fake all of those.21:48
lifelessb) much of it is irrelevant (and just noise but not harmful) but some is just weird to do to an already running instance - like plug_vifs.21:49
comstudi gotcha21:49
lifelesscomstud: I agree with longer term refactorings21:49
lifelesscomstud: problem is the chicken-egg thing21:49
comstudsure21:49
lifelesscomstud: until we're @ parity with nova-bm + tested, no inclusion. Parity means no less robust etc, and nova-bm had a federated model of host ownership21:50
lifelessso it had the 'generally available with N n-cs if one fails' property that we're looking at reclaiming here.21:50
comstudi'm for getting things working now :)21:50
lifelesscomstud: so my proposed short term plan is21:50
comstudjust don't want "*too* hacky*"21:50
comstudthere are some things already that I think need to move to driver layer21:51
devanandalifeless: no, ,it didn't21:51
comstud_init_instance could be one of them21:51
devanandalifeless: nova-bm had a strict ownership21:51
lifelessa) run N n-c same hostname. b) subclass ComputeManager and neuter init-host to permit 1) startup without ironic for TripleO and 2) avoid _init_instance21:51
lifelessdevananda: yes it did21:51
devanandalifeless: when did you add that,a nd why wasn't the BP updated?21:51
lifelessdevananda: which meant if I had 100 nodes and 4 n-cs 'nova boot' would work if one n-c was offline.21:51
devanandalifeless: it would work 75% of the time21:52
devanandaif the scheduler happened to pick a node not owned by the offline n-cpu21:52
lifelessdevananda: scheduler would reschedule21:52
devanandaheh21:52
lifelessdevananda: and after timeout scheduler would pick a live n-c directly.21:52
devanandagotcha21:52
lifelessdevananda: what blueprint ? what did I add?21:52
comstudscheduler does not reschedule if the n-c is down21:52
devanandalifeless: nvm. i misunderstood your statement about "generally available"21:52
comstudand it sends to there21:52
comstudHow would that have worked?21:53
lifelesscomstud: it doesn't ?21:53
comstudbut yes, after timeout, things would have been fine21:53
comstudno21:53
comstudit has no idea if compute got the msg or not21:53
comstudit's a cast21:53
lifelesscomstud: ah, so my misunderstanding, but timeout would do it too21:53
comstudyeah, after the timeout period, sched would notice it's down21:53
lifelesseither way21:53
lifelessyou don't need to run an ops firedrill over a down n-c21:53
lifelessdo we have consensus on this approach ?21:54
comstudyou did with bm after the instance is built21:54
devanandaassuming you have considerable excess capacity21:54
comstudcan't do any actions21:54
devanandaand no running instances owned by taht n-cpu21:54
lifelesscomstud: yeah, you can't ignore it21:54
comstud(but that is not the case with ironic)21:54
lifelesscomstud: but you don't need to be screaming down the streets with sirens on either21:54
comstudi'd still call it an ops firedrill :)21:54
lifelessdevananda: running instances are fine until they want to reboot21:54
devanandalifeless: right21:54
comstudi guess it depends on if you're running a large public cloud or not21:55
comstud:)21:55
*** max_lobur has quit IRC21:55
lifelesscomstud: a cloud of nova-bm or one deployed by nova-bm:)21:55
lifelessanyhow21:55
comstuda cloud of nova-bm21:55
lifelesscomstud: so, don't do that :)21:55
devanandacomstud: hopefully no one is doing that :)21:55
comstudin this case21:55
lifelessanyhoo....21:55
comstudi doubt anyone is doing that with baremetal driver21:56
comstudyeah, anyhoo...21:56
devananda21:51:22 < lifeless> a) run N n-c same hostname. b) subclass ComputeManager and neuter init-host ...21:56
devanandabacking up to that :)21:56
comstudseems like a reasonable option for now21:56
devanandalifeless: i think that might work. but ... it means ironic isn't just a driver for nova21:56
comstuddevananda: you already need the special scheduler host manager21:57
comstud:-/21:57
comstudwhich i'd also like to ditch the need for21:57
devanandayea ...21:57
comstudi think it might be fine short term21:57
comstudbut21:57
devanandacomstud: so i think, long term, the scheduler host manager could be addressed by more capable filtering21:57
comstudI think maybe we can get more of the init process down to the driver layer in nova-compute21:58
comstudso that you can ... not do things21:58
devanandathat would be much better IMO21:58
devanandaat this point, landing a new compute.manager subclass in noav is probably out of the question for Icehouse :)21:58
comstudcorrect21:58
comstudwithin nova21:58
devanandai haven't looked at how easily we can plug in an outof-tree compute mgr21:58
comstudyou could ship one in Ironic21:58
devanandapossible?21:58
comstudyeah it is very possible last i knew21:58
devanandak21:59
comstudchecking...21:59
comstudshould be compute_manager conf setting21:59
devananda#compute_manager=nova.compute.manager.ComputeManager21:59
devanandaso, hypothetically, yea21:59
comstud92     cfg.StrOpt('compute_manager', 93                default='nova.compute.manager.ComputeManager',21:59
lifelessdevananda: its trivial.21:59
devanandak21:59
lifelessdevananda: I linked the setting aove21:59
lifelessabove21:59
devanandaso22:00
devanandadependency here that we need to address anyway, but this just raises it22:00
devanandawe need to run the nova unit test suite22:00
devanandain our check/gate22:00
devanandafor as long as these things aren't in nova22:00
devanandaat least then we will spot when nova changes an internal API and breaks the out of tree code22:01
devanandawhcih seriously sucks. but that's the life of an incubated project for now.22:01
*** jbjohnso_ has quit IRC22:03
lifelessdevananda: I don't hold much confidence that it will tell us that.22:05
lifelessdevananda: because there are few interface tests in nova22:05
devanandasigh22:05
lifelessdevananda: by which I mean tests that test tht 'all drivers support X'22:05
lifelessNobodyCam: so, https://review.openstack.org/#/c/80376/ ?22:07
NobodyCamahh yes, I chatted with lucas, and even before I got to chat with him he had put up. https://blueprints.launchpad.net/ironic/+spec/credentials-keystone-v322:10
lifelessNobodyCam: does that block the ssh patch somehow ?22:11
lifelessNobodyCam: it seems like a good improvement to me, but orthogonal.22:11
NobodyCamhe is going to shot a letter to the ML to see what people think about the idea of removing all user creds from ironic22:11
openstackgerritJim Rollenhagen proposed a change to openstack/ironic: Add Node.instance_info field  https://review.openstack.org/7946622:11
lifelessNobodyCam: sure, but - does this block it ?22:12
lifelessNobodyCam: if its going to be blocked indefinitely, I'll abandon it to get it out of my working set.22:12
NobodyCamnot block but seems a extra amount of work to land only to pull out again22:12
lifelessNobodyCam: otherwise I'm checking it every day to see if there are things I need to do to it.22:12
lifelessNobodyCam: it wouldn't get pulled out22:12
comstuddevananda: different topic. Were objects added to Ironic somewhat after some code was already using dbapi directly?  (Pointing to some things in conductor somewhat suggests this...like getting a list of nodes)22:13
lifelessNobodyCam: it compromises two distinct things22:13
lifelessNobodyCam: a) storing a different form of password22:13
*** matty_dubs is now known as matty_dubs|gone22:13
lifelessNobodyCam: b) utilising a stored SSH key in a different way22:13
comstuddevananda: (Also things passing a node_id to conductor instead of the Node object itself)22:13
lifelessNobodyCam: a) is a patch that has to happen to all drivers with the keystone v3 thing. b) would stay in.22:13
Shrewsso this is an easily missed bug that somehow found its way in: https://github.com/openstack/ironic/blob/master/ironic/nova/virt/ironic/driver.py#L26622:14
lifelessNobodyCam: or put another way if you do the keystone v3 thing first, we'll still have to land a patch to enable SSH with API provided keys in future.22:14
devanandacomstud: iirc, only things doing direct dbapi calls are in the api service for object creation22:15
devanandacomstud: there may be a better way to do that with objects that was added after we copied objects from nova22:15
devanandacomstud: as for passing node_id to conductor -- we started by passing the whole object, but then quickly realized that was wasted bytes in RPC when we need to fetch it from the DB anyway22:16
NobodyCamdevananda: how much work did you put into that aes crypto stuff you looked at22:16
devanandacomstud: if a condcutor trusted the potentially-stale RPC object, it would lead to bad locking22:17
devanandaNobodyCam: no code yet22:17
lifelessNobodyCam: does that make sense ?22:17
NobodyCamlifeless: that put another way does make a vary valid point22:17
devanandaShrews: what's teh bug?22:17
NobodyCamvery even22:18
Shrewsdevananda: no 'raise'  :-P22:18
*** romcheg has quit IRC22:18
devanandaoh. hah!22:18
Shrewsas i said, easily missed22:18
devanandaagain,  why we need unit tests22:18
devananda*why we need to be running unit tests22:19
comstuddevananda: Yeah, there's a nice refresh() call on the object that could be used22:19
comstudwell, at least in nova22:19
notqon the ironic white board, it says that ironic failed to graduate in icehouse. what does "graduate" in this context mean? Is that graduate from an incubation project, or does that mean it won't be released in icehouse?22:19
comstuddevananda: but if you're always going to refresh, I can understand the bytes saving22:19
devanandalifeless: quick testr question -- testr config in ironic to a) clone nova b) run nova's unit test suite on the nova driver code in our tree22:20
comstuddevananda: But anyway, there's a number of direct DB API use in conductor right now...22:20
comstudit seems for things that are not implemented in objects yet22:20
comstudlike (getting) a list of nodes22:20
devanandanotq: means we are still an incubated project and therefor not part of the official release or the symmetric gate22:20
devanandanotq: does not mean that downstream package managers won't release packages of ironic (in fact, some of them are)22:20
*** romcheg has joined #openstack-ironic22:21
notqdevananda: okay, so, from a user perspective not much difference?22:21
*** romcheg has quit IRC22:21
lifelessdevananda: it all depends on discover22:21
devanandacomstud: ahh. yes. those return a list of objects tho22:21
comstudit should, but right now it's a list of sql-a models22:21
comstudso it seems22:22
lifelessdevananda: but again, novas code doesn't understand 'run tests on a driver'22:22
comstud$ grep -c dbapi *.py22:22
lifelessdevananda: drivers have tests.22:22
comstudmanager.py:1222:22
comstudtask_manager.py:722:22
*** eghobo has quit IRC22:22
comstuddevananda: I'm guessing you're open to fixes :)22:22
NobodyCamwe need to move the password / key out of the info dict22:22
lifelessdevananda: so I think you're approach the problem backwards. The way I'd approach it is 'how to have Ironic unittests that extend/trigger on Nova changes and exercise the nova driver'22:22
lifelessNobodyCam: separate problem though, right ? :)22:23
devanandacomstud: you mean get_nodeinfo_list?22:23
comstudNode has no 'destroy' right now, so dbapi.destroy_node() is called, as an example as well22:23
NobodyCamlifeless: yes it is...22:23
comstudyeah, get_nodeinfo_list would be the first one I'd nail22:23
lifelessNobodyCam: note that ssh keys are better than passwords with the current structure, because ssh keys can be limited but passwords cannot22:23
lifeless(by sshd)22:23
devanandacomstud: so taht should be the only one... and it should stay taht way :)22:23
comstud501         node_list = self.dbapi.get_nodeinfo_list(columns=columns,22:23
comstud502                                                  filters=filters)22:23
comstudwhat should stay what way?22:23
devanandacomstud: well. let me ask22:24
comstudok22:24
JayFhs check-tempest-dsvm-virtual-ironic been passing in most cases? I see it's nonvoting, but it failed for my patch and was curious is that was expected or somehow caused by my task22:24
devanandacomstud: with the object code, can an object be init'd with only partial field data? if so, what does taht do later on?22:24
jrollJayF: I haven't seen it pass, ever22:24
comstudyou can22:24
comstudand you can make it lazy load other junk on access later if you want22:24
devanandacomstud: the point of get_nodeinfo_list is to avoid unnecessarily fetching the whole node when you only need eg. 2 columns22:24
devanandacom22:24
devanandaok22:24
comstudoh, gotcha22:25
devanandacomstud: so the lazy stuff either wasn't implemented, or I didn't undersatnd at the time22:25
comstudyou can do that still, yes22:25
comstudin fact, we just recently had to do this for something in nova22:25
comstudthat queried specific columns22:25
devanandacool. so yea, taht'd be fine :)22:25
comstudit may not have been implemented at the time, depending on how early you grabbed stuff22:25
devanandacomstud: as for destroy ... that was probably an oversight on my part. objects should have destroy :)22:25
JayFjroll: thanks, that's the feedback I was lookin' for22:25
comstuddevananda: cools22:25
jroll:)22:25
comstud511                 node = self.dbapi.get_node(node_uuid)22:26
notqtrying to get me head around it all, tasked to create a baremetal cloud. just got some contacts with hp cloud os that i'm supposed to work with to help us, but trying to catch up and understand as much as possible.22:26
comstudthere's that one too which is just an oversight i'm guessing22:26
notqanyway, don't mean to clutter your dev work. i'll shut up now :)22:26
NobodyCamlifeless: I have not tested, but do you know off the top of your head waht paramiko would do with both key and key file set22:26
lifelessNobodyCam: doesn't matter, the code prevents that22:26
lifelessNobodyCam: one and only one credential is permitted22:27
lifelessNobodyCam: 0/2/3 all error - and there are tests :)22:27
comstuddevananda: I'll get some patches together here22:27
comstudi can see some of these not really needing objects, like 494         self.dbapi.touch_conductor(self.host)22:27
devanandaJayF: last i heard from adam_g this morning, -virtual-ironic test is passing on HPCS nodes but not on RAX nodes. pending more work22:27
comstudheh22:27
devanandacomstud: right :)22:27
NobodyCamahh yes it does22:27
comstuddevananda: btw, thanks for implementing your DB api as a class :)22:28
devanandacomstud: welcome :)22:28
comstudfor nova, I have larger plans22:28
JayFdevananda: ah yeah, I didn't make the connection that was the test he was talking about, but he said something to dwalleck and I about it before. Apparently not much we can do on the image side to fix but he's working with libcloud for something upstream22:28
devanandacomstud: the singleton thing gets a bit wonky, but the class should make life much easier in the long run22:28
comstudwhere the DB API actually exposes objects itself22:28
comstudso we don't have this huge list of bullshit22:28
devanandaJayF: yep22:28
comstuddevananda: right22:29
devanandalifeless: so there are two different angles22:29
devanandalifeless: a) have reasonable unit test coverage of the nova.virt.ironic driver, and ensure we can catch issues in it at that layer22:29
devanandalifeless: b) have some integration tests with nova + the ironic driver22:30
adam_gdevananda, heh no, not exactly. the actual devstack setup of ironic is succeeding but tempest-against-ironic is not. still work to do there22:30
adam_gJayF, ^22:30
devanandaadam_g: ahh. and is taht succeeding on both RAX and HPCS?22:30
adam_gdevananda, HPCS so far22:30
devanandaack22:30
JayFadam_g: k, ty22:31
*** ifarkas has quit IRC22:31
openstackgerritlifeless proposed a change to openstack/ironic: Provide a new ComputeManager for Ironic.  https://review.openstack.org/8263722:32
devanandalifeless: commented on https://bugs.launchpad.net/ironic/+bug/129550322:32
lifelessdevananda: what do you mean by integration tests22:33
devanandalifeless: in this case, i mean something which ensures the nova virt driver API as implemented by ironic's driver isn't stale22:33
devanandalifeless: as your point earlier, that may not be wells erved by nova's unit test suite. i haven't checked22:34
devanandalifeless: but my first point (unit tests of our driver code) should be gettign run22:34
devanandaand they're not run anywhere today22:34
lifelessdevananda: they should be22:34
lifelesstest_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ ./ $LISTOPT $IDOPTION22:34
lifelessdevananda: are you sure they are not ?22:34
comstudIs this new compute manager only to work around the fact that you're trying to use keystone before keystone is configured?22:35
lifelesscomstud: no22:35
NobodyCamdevananda: did we have a BP to refactor password out of dict already?22:35
comstudI assume it's more than that22:35
comstudok22:35
lifelesscomstud: you did read the commit message right ?22:35
comstudsorry, I was reading a bug22:35
devanandaNobodyCam: not afaik22:35
lifelesscomstud: and the etherpad22:35
lifelesscomstud: where we listed 5 or so reasons Ironic might not be available at init-host time22:36
NobodyCamlifeless: 80376 LGTM22:36
lifelessNobodyCam: so +A it ;)22:36
devanandacomstud: see my comment at end of bug22:36
comstudyeah22:36
NobodyCamlifeless: I am now... :)22:36
NobodyCamdevananda: also going to file BP for that refactor22:37
comstudsorry, I just reverted back to a sore subject I have22:37
comstud:)22:37
lifelesscomstud: whats the sore subject?22:37
comstudneedlessly jumping to conclusions22:38
devanandalifeless: i beleive instances = instance_obj.InstanceList.get_by_host is not necessary22:38
comstudlifeless: needing to bring services online unconfigured in order to configure them22:38
lifelesscomstud: I'm not sure which side of that you're on :)22:39
lifelesscomstud: I'm on the 'I just want to deploy easily and not have to embed arbitrary state state machines into my deploy logic'22:39
lifelesscomstud: side.22:39
comstudi'm on the side that I think it's dumb to have to bring up unconfigured services22:39
comstud:)22:39
comstudnod22:39
lifelesscomstud: so, in this case, nova *is* configured.22:39
lifelesscomstud: a *dependency* isn't.22:39
comstudin this case i'm referring to keystone22:39
lifelesssure, keystone is a sore point for me too :)22:40
comstudif you start up nova first22:40
comstudand ironic before that even22:40
comstudthen you start keystone and configure it22:40
comstudthere's a race in there22:40
comstudetc22:40
lifelesswhats the race?22:40
lifelessI think I know, but for clarity22:40
comstudnova querying keystone between unconfigured and configured22:41
lifelesswhich will error right ?22:41
comstudit get a 401 and should not retry because it generally should mean you put invalid creds in22:41
comstudright22:41
openstackgerritlifeless proposed a change to openstack/ironic: Provide a new ComputeManager for Ironic.  https://review.openstack.org/8263722:41
lifelessso perhaps nova should rety22:41
lifelessretry there22:41
comstudnova can retry on CONNREFUSED easily.  once keystone is up, it should be configured22:41
devanandathe issue is, if keystone's offline, it should retry22:41
comstudi argue nova should not retry on 401 :)22:42
devanandaif the auth is bad, it shouldn't assume auth will change22:42
comstudbecause that's a 'invalid creds man, you screwed up your nova config'22:42
lifelessbut the auth might be bad because the creds are being rotated22:42
comstud(or you screwed up your keystone config)22:42
devanandaso we can point the finger at keystone for being "available" when it's not configured yet22:42
lifelessyou don't know if its keystone or nova thats wrong22:42
comstudhah, well, that's an interesting case22:42
lifelessinitial bring up is a special case of key rotation22:42
lifeless:)22:42
comstudi think you add new creds first to keystone22:42
comstudfix nova22:43
comstudremove old creds22:43
comstud;p22:43
comstud(no retry on 401 in nova)22:43
lifelessso, in a SOA, stopping cold and not retrying is a pretty significant increase in management complexity.22:44
comstudi could see nova maybe retrying once it's been running22:44
lifelessWhats the rationale for making a 401 stop cold22:44
comstudbut on startup, it should maybe bail22:44
comstudyeah, i dunno, but it's all somewhat besides my point22:45
comstudgenerally you bring services online when they are ready to be used22:45
comstudyou don't want people querying an unconfigured service22:45
devanandalifeless: please no (c) in empty files22:46
devanandawant to test it, but otherwise LGTM22:46
NobodyCamdevananda: https://blueprints.launchpad.net/ironic/+spec/refactor-password-key-storage22:46
devanandaNobodyCam: thanks22:47
NobodyCamlifeless: in zuuls hands now!22:47
openstackgerritlifeless proposed a change to openstack/ironic: Provide a new ComputeManager for Ironic.  https://review.openstack.org/8263722:47
NobodyCambrb mid afternoon walkies22:47
* devananda rewrites dprince's patch22:48
lifelesscomstud: so, I agree about not querying an unconfigured service, but not about nova giving up :)22:48
lifelesscomstud: keystone is kindof special that much of its configuration is done through the service22:48
comstudthat's fair22:48
comstudwell22:49
lifelesscomstud: so one could argue that its actually configured once it has a service token22:49
comstudnot sure keystone is special here in OpenStack with that regard22:49
comstudmy soreness comes from nova wanting to remove nova-manage and make it all API driven22:49
comstudbecause it's apparently difficult to roll out configs.22:49
comstudor something.22:50
comstud(but easy to query the API)22:50
lifelessnot sure I see the connect between nova-manage and configs22:50
lifelesswasn't nova-manage all DB backdoor access?22:50
comstudyeah, sorry it's more "because it's apparently hard to update the DB"22:51
comstudyes it was/is22:51
comstudThere was pushback to adding cells configuration (which is DB driven) to nova-manage22:51
comstudI made enough of a case to land it, but still annoyed by the general thought process, myself22:52
comstudmaybe i'm in the minority22:52
jrolldevananda: working on this agent startup diagram - how much detail are you looking for? this is what I have right now, it's a little hand-wavey: https://dl.dropboxusercontent.com/u/363486/IPA-Startup-Flow.png22:53
adam_gdtantsur, still around?22:53
devanandajroll: looking22:53
jrollbut I think it's a good at-a-glance overview22:53
devanandajroll: some numbers or an indication of where to "enter" the flow would help22:54
devanandai think i got it, though22:54
jrollok, cool, thanks22:54
jrollit'll be on the wiki soon22:54
devanandajroll: also, clarification that the response to POST for unknown hw info22:54
devanandas/that/what/22:55
devanandaotherwise, yea, good start, thanks22:55
jrollsure22:55
NobodyCamlifeless: in your TripleO / Ironic testing the under cloud did deploy from the seed you weren't getting Stack create failed, status FAILED on hte undercloud?? is that correct?23:10
lifelessNobodyCam: stack will fail because the wait condition never triggers23:11
lifelessNobodyCam: which is due to the bug we just worked through23:11
NobodyCamack23:11
NobodyCam:)23:11
lifelessNobodyCam: to fix, you need to use the review I put up, add a new setting to nova.conf to set the compute manager, and have undercloud-vm-ironic-source.yaml set that setting23:12
NobodyCam:)23:13
*** dwalleck has joined #openstack-ironic23:13
jrolldevananda: https://dl.dropboxusercontent.com/u/363486/IPA-Startup-Flow.png23:15
devanandajroll: nice. i would add a logical break above "issue commands"23:16
jrolldevananda: like23:17
devanandabut everything is much clearer23:17
jroll"some time passes..."23:17
jroll?23:17
devanandajroll: something to indicate taht a User is driving the API at that point. not the agent23:17
devanandait looks like the agent is driving itself in a very round-about awy :)23:18
devanandaway23:18
jrollright right23:18
jrollwill do23:18
*** dwalleck has quit IRC23:21
openstackgerritDevananda van der Veen proposed a change to openstack/ironic: Run ipmi power status less aggressively  https://review.openstack.org/8266823:30
openstackgerritAdam Gandelman proposed a change to openstack/ironic: Pass no arguments to _wait_for_provision_state()  https://review.openstack.org/8266923:31
adam_gdevananda, ^ something merged friday and broke instance deletion. :|23:32
devananda:(23:32
jrolladam_g: should check-tripleo-ironic-undercloud-precise be passing, currently?23:33
devanandaoil-ci-bot ??23:34
adam_gis the nova driver unit test coverage in the ironic tree currently, or buried in nova's history somewhere?23:34
adam_gdevananda, heh, my firefox LP session was still logged into an old bot account.23:34
adam_gi barely use FF these days23:34
adam_gjrist, i do not know about tripleo23:34
adam_gjroll, ^23:35
jrollah23:35
jrollwhose gate jobs are those?23:35
devanandaadam_g: lol23:35
devanandajroll: see #tripleo :)23:35
adam_gjroll, that is a separate CI effort focused on tripleo.  #tripleo would be able to tell you more23:35
devanandaadam_g: it should be contained in ironic's tree... let me check23:36
jrollgot it, thanks23:36
JayFThis fix to the heartbeat bug (https://bugs.launchpad.net/ironic/+bug/1295874) that was assigned to me in the meeting is awaiting core review: https://review.openstack.org/#/c/82615/23:37
devanandaJayF: thanks for the ping23:38
JayFdevananda: reading through the periodic task code, it looked like they might also need a bugreport filed for not logging or throwing an exception when you give an interval lower than the logic permits (it idles 60s between runs), wdyt?23:39
JayFI would think a WARN in the log would be sufficient if it sees an interval lower than DEFAULT_INTERVAL23:40
devanandaJayF: yea, please follow up on that23:40
devananda(because I forgot to)23:40
devanandai remember a discussion around the last summit of, hey, we should fix that. but i dont tink anyone did23:41
JayFyeah it's not even documented in the comments on the decorator that appear to be pulled into some user docs23:41
JayFso it needs to throw a warn and get the docs fixed :x23:41
JayFI might just file the bug and push the fix to them too if it's easy23:41
devanandaso in previous systems I've worked on23:42
devanandahb_timeout is always ~ 2.5x hb_interval23:42
JayFHere's my logic in making it what it is:23:42
JayFWhen we run tests, do we think it's OK for tests to pass consistently if the first heartbeat fails.23:42
JayFMy gut said no, so I made the timeout reflect that.23:43
JayFI agree I would not use these values in a production environment though.23:43
devanandaheh23:43
devananda#sanedefaults23:43
devanandaI agree with your reasoning. but I think we should have sane production defaults to the extent possible23:43
devanandaand it's much easier to codify changing these in a test env23:43
devanandaeg, in the test class setUp, just override the conf23:44
devanandawe do that a lot already23:44
russell_hdevananda: 2.5x - 3x is what I've always done too23:44
JayFI have no problem with bumping the timeout to 150s, I'll see if I can also fix the tests to use 60/9023:44
JayFShould that be in the same merge req/commit or a separate one?23:44
devanandaJayF: same one23:44
devanandaJayF: i'll toss a comment up23:44
JayFty23:44
NobodyCamhey hey lifeless still about?23:53
lifelessyes23:53
NobodyCam:) can you push up a quick rebase on https://review.openstack.org/#/c/82637 also it got hit by H803  git commit title ('Provide a new ComputeManager for Ironic.') should not end with period23:54
NobodyCam:-p23:55
openstackgerritA change was merged to openstack/ironic: Permit passing SSH keys into the Ironic API  https://review.openstack.org/8037623:55
NobodyCamalso ^^^ :)23:55
openstackgerritJim Rollenhagen proposed a change to openstack/ironic: Add Node.instance_info field  https://review.openstack.org/7946623:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!