Friday, 2014-11-21

kfox1111netstat says it's listening...00:01
openstackgerritDevananda van der Veen proposed openstack/ironic: Minor fix to install guide for associating k&r to nodes  https://review.openstack.org/13618700:01
NobodyCamkfox1111: the node is attempting to pxe boot. is it getting dhcp address and trying to pull the boot image?00:01
kfox1111haven't gotten there yet. just trying a tftp client from localhost to see if thats working.00:02
NobodyCamahh that was the only full example of node create we had00:03
*** igordcard has quit IRC00:03
NobodyCamdevananda: ^^^00:03
*** Viswanath has joined #openstack-ironic00:04
NobodyCamkfox1111: I may have asked already but forgotten what is you os?00:04
devanandaNobodyCam: yea, and in the wrong place00:05
devanandaI'll add another one00:05
NobodyCamw00t :) hehehehe00:05
*** penick has quit IRC00:05
devanandaoh jeez ... the user_guide still references the Baremetal wiki :(00:06
NobodyCamomg didn't you remove that like months ago00:07
NobodyCam:(00:07
devanandayes00:08
kfox1111can't delete nova instance... {"message": "Error destroying the instance on node 9cb00e2b-c117-427a-9fc8-ab62487579e8. Provision state still 'error'00:09
*** Viswanath has quit IRC00:09
kfox1111I think I fixed the problem that caused it to get into that state on launch, but I can't delete to retry.00:09
devanandakfox1111: there are probably better ways to fix that, but the simplest one: restart nova-compute00:10
kfox1111k.00:10
devanandakfox1111: also: ironic node-show NNNN00:11
devanandashould give more details on the error00:11
kfox1111still no good.00:11
kfox1111I set the state on the node to deleted. it seems happy, but nova wont delete it.00:11
devanandawhat does ironic node-show show?00:12
kfox1111looks happy. power_state power off, provision_state None, target_provision_state None, last_error None00:12
devanandayep - that's good00:15
devanandakfox1111: instance_uuid None too?00:15
kfox1111ah. no, its 609d56b5-1f0a-4059-a508-e54f9c8b765d00:16
devanandaahh00:16
NobodyCam:)00:16
devanandanot sure why nova won't finish cleaning up there00:16
devanandaif you unset the instance_uuid, nova should realize the instance is gone and delete its references00:17
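The suggestion above can be sketched as the following commands (a hedged sketch: `<node-uuid>` is a placeholder, and the exact client syntax may vary by python-ironicclient version):

```shell
# Confirm the stale instance_uuid is still set on the node
ironic node-show <node-uuid> | grep instance_uuid

# Remove it so Nova can realize the instance is gone
ironic node-update <node-uuid> remove instance_uuid
```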
NobodyCamdevananda: you know if there is a bug around that00:17
devanandaNobodyCam: i thought we squashed those bugs last cycle ...00:18
*** ryanpetrello has quit IRC00:18
devanandait's possible there's another (or a new one) ?00:18
NobodyCamyea...00:18
kfox1111still won't delete. :/00:19
kfox1111the instance_uuid is gone for sure.00:19
NobodyCamwhats nova saying00:19
kfox1111"message": "Error destroying the instance on node 9cb00e2b-c117-427a-9fc8-ab62487579e8. Provision state still 'error'.", "code": 500, "details": "  File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 314, in decorated_function00:19
*** r-daneel has quit IRC00:20
devanandakfox1111: at this point, you should be able to just create a new instance ...00:20
kfox1111quota doesn't allow it.00:21
devanandahah00:21
kfox1111:)00:21
devanandawelp. there are "can't delete this instance" bugs in Nova that are unrelated to Ironic, too.00:21
*** alexpilotti has joined #openstack-ironic00:21
kfox1111fair enough.00:21
kfox1111bumping quota... :)00:21
devanandaI dunno what this one is caused by ... restarting n-cpu should cause the compute manager to clean that up00:21
devanandabut if it's not ... then I haven't seen that issue before00:22
NobodyCamyea00:22
kfox1111hmm... back to no valid host was found....00:22
kfox1111its like something in nova thinks ironic's node is allocated.00:23
*** ryanpetrello has joined #openstack-ironic00:24
NobodyCamcan you paste the output of nova list and ironic node-list00:24
kfox1111sure.00:25
kfox1111http://pastebin.com/PJ92GsFk00:26
kfox1111maybe I need to force the vm into a better state, then retry?00:26
kfox1111ah. there we go...00:27
kfox1111nova reset-state seemed to let me delete it.00:27
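The recovery path described above, sketched as commands (instance UUID taken from earlier in the log; assumes admin credentials are loaded in the environment):

```shell
# Force the instance back into a state Nova is willing to delete from
nova reset-state --active 609d56b5-1f0a-4059-a508-e54f9c8b765d

# Then retry the delete
nova delete 609d56b5-1f0a-4059-a508-e54f9c8b765d
```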
kfox1111OSError: [Errno 13] Permission denied: '/tftpboot/9cb00e2b-c117-427a-9fc8-ab62487579e8'   rarg....00:28
NobodyCamya looks like nova lost something. NOSTATE for power is not a good thing00:28
NobodyCamahh yep tftp dir need to be owned by the user the conductor is running as00:29
kfox1111hmm.... chown ironic /tftpboot seems to have done the trick. :)00:30
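The fix from the log as a command sketch (assumes the conductor runs as the `ironic` user; the recursive flags are an assumption for safety on an already-populated directory):

```shell
# The tftp root must be writable by the user the ironic-conductor runs as
sudo chown -R ironic /tftpboot
sudo chmod -R u+rwX /tftpboot
```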
NobodyCam:)00:30
kfox1111looks like the node's finally trying to boot. :)00:30
devanandakfox1111: I don't suppose you're recording the bumps you're hitting in the docs, and planning to post a patch?00:30
kfox1111I'm recording everything in a shell script so I can redeploy. I should be able to get feedback back. :)00:31
NobodyCamlol devananda took the words right off my keyboard00:31
kfox1111some of the rpm's are missing dependencies too. :)00:31
devanandaawesome, ty00:31
kfox1111sure. :)00:31
kfox1111hmm... netboot fail...00:32
NobodyCamdid it get a dhcp address and attempt to pull the boot image?00:32
kfox1111didn't have the console up at the time. deleting vm and trying again...00:33
*** david-lyle is now known as david-lyle_afk00:33
kfox1111does setting pxe persistent over ipmi work on all nodes?00:35
kfox1111hmm... looks like that part might not have worked.00:36
*** ryanpetrello_ has joined #openstack-ironic00:37
kfox1111hmm... nova's in task_state deleting vm_state building..... for a long time.00:37
kfox1111do I need to manually go into the bios and set pxe to always be first?00:37
*** naohirot has joined #openstack-ironic00:38
*** ryanpetrello has quit IRC00:38
*** ryanpetrello_ is now known as ryanpetrello00:38
naohirotgood morning ironic00:38
kfox1111hmm... ironic node-get-boot-device is showing boot_device None00:39
NobodyCamkfox1111: how are you controling the node ipmi?00:42
NobodyCammorning naohirot00:42
kfox1111what do you mean?00:42
naohirotNobodyCam: GM00:42
NobodyCamis this a HW node or vm?00:42
kfox1111hardware.00:42
kfox1111real ipmi.00:42
NobodyCamipmi should be able to set the boot device00:43
kfox1111I tried this: ironic node-set-boot-device --persistent 9cb00e2b-c117-427a-9fc8-ab62487579e8 pxe00:43
kfox1111but it didn't seem to stick.00:43
kfox1111I just did it again, and am double checking...00:43
kfox1111hmm... so this time its trying to netboot.00:43
kfox1111hmm.... ok. weak. cobbler took it....00:44
NobodyCam:-p00:44
*** ryanpetrello_ has joined #openstack-ironic00:44
*** ryanpetrello has quit IRC00:47
kfox1111the tune to dueling banjos is in my head now. :/00:49
*** ryanpetrello_ has quit IRC00:50
*** spandhe has joined #openstack-ironic00:50
*** alexpilotti has quit IRC00:50
openstackgerritMerged openstack/ironic: Support configdrive in agent driver  https://review.openstack.org/12838800:53
kfox1111BTW, whats the status of booting local off of ironic managed nodes?00:53
kfox1111hmm.... yes, the --persistent flag to node-set-boot-device seems to not work on this hardware. :/00:54
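A hedged workaround sketch for hardware that ignores Ironic's persistent boot-device request: query and set the boot flags directly with ipmitool. BMC address and credentials are placeholders, and not all firmware honors the persistent option:

```shell
# Show the current IPMI boot flags (boot parameter 5)
ipmitool -I lanplus -H <bmc-address> -U <user> -P <pass> chassis bootparam get 5

# Request PXE as the persistent boot device
ipmitool -I lanplus -H <bmc-address> -U <user> -P <pass> chassis bootdev pxe options=persistent
```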
jrollyou can do local boot with the agent driver00:57
jrollrequires full disk images atm but hey00:57
kfox1111agent driver?00:57
kfox1111ok... so cobbler dhcp is leaving it alone now, but no joy from neutron....00:58
kfox1111ah.... bridge's wrong.00:59
jrollit's a different deploy driver, ignore me for now01:00
kfox1111oh. ok.01:00
kfox1111jroll: in juno or in development?01:01
jrollin juno01:01
jrollalso making it better in dev01:01
kfox1111hmm.. ok. I'll put it on my to look at list. Thanks. :)01:02
devanandakfox1111: wait, cobbler?01:02
kfox1111devananda: had to deploy my ironic box somehow. ;)01:02
devanandakfox1111: AFAIK, if you have cobbler running on the same network, you'll need to exclude the Node from cobbler's DHCP service01:02
devanandaotherwise the two will race ...01:02
kfox1111yup. found and squashed.01:02
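One way to squash that race, sketched as an ISC dhcpd fragment (an assumption: that cobbler's DHCP is ISC dhcpd; the host name and MAC are hypothetical placeholders for the Ironic-managed node):

```
# Tell cobbler's dhcpd to never answer the Ironic-managed node,
# so only neutron-dhcp-agent responds to its PXE requests
host ironic-node-0 {
  hardware ethernet 00:11:22:33:44:55;  # hypothetical MAC of the Ironic node
  ignore booting;
}
```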
devanandak01:02
devanandaoh ya. you said that01:03
* devananda actually reads scrolback :)01:03
kfox1111just got to figure out how to get the neutron dhcp server onto the network. :)   sharing eth0 here is going to make it a bit tricky.01:03
kfox1111the real trick will be getting it all right, so that I don't drop ssh out from under me. :)01:04
devanandaah, heh, yep01:06
openstackgerritDevananda van der Veen proposed openstack/ironic: Correct link in user guide  https://review.openstack.org/13620101:11
openstackgerritDevananda van der Veen proposed openstack/ironic: Add new enrollment and troubleshooting doc sections  https://review.openstack.org/13620201:11
devanandaNobodyCam: there's your enrollment example :)01:11
devanandakfox1111: you're probably past those by now, but ^ may still have some bits that will help you01:11
* devananda steps away to get ready for dinner01:11
* NobodyCam reads back ... woohoo 01:13
kfox1111nope.... failure. :)01:15
*** Marga_ has joined #openstack-ironic01:18
*** chenglch has joined #openstack-ironic01:25
NobodyCamkfox1111: :(01:33
NobodyCamdevananda: wow I must be tired as I +2'd all three patches01:35
NobodyCam:-p01:35
NobodyCam'night ya'll :)01:36
*** kfox1111 has quit IRC01:41
Haomeng|2morning:)01:52
jrollheya Haomeng|201:53
Haomeng|2jroll: :)01:54
*** rloo has quit IRC01:58
*** dlaube has quit IRC02:04
*** ryanpetrello has joined #openstack-ironic02:04
*** chenglch|2 has joined #openstack-ironic02:09
*** chenglch has quit IRC02:09
*** penick has joined #openstack-ironic02:09
*** ChuckC has quit IRC02:10
*** penick_ has joined #openstack-ironic02:12
*** nosnos has joined #openstack-ironic02:12
*** penick has quit IRC02:14
*** penick_ is now known as penick02:14
*** Marga_ has quit IRC02:16
*** Marga_ has joined #openstack-ironic02:16
*** Marga__ has joined #openstack-ironic02:20
*** krtaylor has quit IRC02:21
*** Marga_ has quit IRC02:21
*** krtaylor has joined #openstack-ironic02:33
*** ramineni has joined #openstack-ironic02:53
*** ryanpetrello has quit IRC03:01
*** ryanpetrello has joined #openstack-ironic03:04
*** ujuc has joined #openstack-ironic03:05
openstackgerritMerged openstack/ironic: Add serial console feature to seamicro driver  https://review.openstack.org/13262803:08
openstackgerritMerged openstack/python-ironicclient: Add maintenance_reason to node-show output  https://review.openstack.org/13599903:09
*** pcrews has quit IRC03:13
*** ryanpetrello has quit IRC03:16
*** ryanpetrello has joined #openstack-ironic03:17
*** nosnos has quit IRC03:24
*** naohirot has quit IRC03:26
*** ryanpetrello has quit IRC03:31
*** Marga__ has quit IRC03:33
*** harlowja is now known as harlowja_away03:39
*** penick has quit IRC03:56
*** ChuckC has joined #openstack-ironic03:58
*** pensu has joined #openstack-ironic04:02
*** Marga_ has joined #openstack-ironic04:04
*** naohirot has joined #openstack-ironic04:07
*** Viswanath has joined #openstack-ironic04:09
*** Marga_ has quit IRC04:12
*** Viswanath has quit IRC04:13
*** spandhe has quit IRC04:17
*** vinbs has joined #openstack-ironic04:22
*** spandhe has joined #openstack-ironic04:23
*** nosnos has joined #openstack-ironic04:29
*** Marga_ has joined #openstack-ironic04:39
*** rushiagr_away is now known as rushiagr04:43
*** david-lyle_afk has quit IRC04:43
*** Marga_ has quit IRC04:44
*** ryanpetrello has joined #openstack-ironic04:54
*** spandhe has quit IRC04:57
openstackgerritSirushti Murugesan proposed openstack/ironic-specs: Whole Disk Image Support for PXE Deploy Driver  https://review.openstack.org/9715005:00
*** killer_prince is now known as lazy_prince05:01
vinbsmorning ironic!05:09
*** nosnos has quit IRC05:14
*** nosnos has joined #openstack-ironic05:16
*** __mohit__ has joined #openstack-ironic05:21
*** __mohit__ has quit IRC05:22
*** Marga_ has joined #openstack-ironic05:40
*** Marga_ has quit IRC05:45
*** rakesh_hs has joined #openstack-ironic05:45
*** nosnos has quit IRC05:59
*** nosnos has joined #openstack-ironic06:00
*** mrda is now known as mrda_weekend06:01
*** Marga_ has joined #openstack-ironic06:02
*** jcoufal has joined #openstack-ironic06:03
*** vinbs has quit IRC06:07
*** Marga_ has quit IRC06:07
openstackgerritMerged openstack/ironic: Minor fix to install guide for associating k&r to nodes  https://review.openstack.org/13618706:11
openstackgerritMerged openstack/ironic: Correct link in user guide  https://review.openstack.org/13620106:11
*** vinbs has joined #openstack-ironic06:12
*** vinbs has quit IRC06:17
*** k4n0 has joined #openstack-ironic06:17
*** vinbs has joined #openstack-ironic06:22
Haomeng|2vinbs: morning:)06:23
*** Marga_ has joined #openstack-ironic06:46
*** Marga_ has quit IRC06:51
*** lazy_prince has quit IRC06:59
*** ryanpetrello has quit IRC07:05
*** killer_prince has joined #openstack-ironic07:11
*** killer_prince is now known as lazy_prin07:12
*** lazy_prin is now known as lazy_prince07:13
*** lazy_prince has quit IRC07:34
*** lazy_prince has joined #openstack-ironic07:34
*** vinbs has quit IRC07:35
*** achanda has quit IRC07:38
*** bradjones has quit IRC07:40
*** bradjones has joined #openstack-ironic07:40
*** Marga_ has joined #openstack-ironic07:47
*** dlpartain has joined #openstack-ironic07:47
*** Marga_ has quit IRC07:51
*** pensu has quit IRC07:56
*** dlpartain1 has joined #openstack-ironic08:04
*** dlpartain has quit IRC08:06
naohirotrushiagr: hello, I read your presentation slide http://www.slideshare.net/openstackindia/filesystem-as-a-service-in-openstack08:09
naohirotrushiagr: if you have time right now, can I ask some basic questions?08:09
*** pensu has joined #openstack-ironic08:11
*** nosnos has quit IRC08:24
*** nosnos has joined #openstack-ironic08:24
*** jiangfei has quit IRC08:30
*** jiangfei has joined #openstack-ironic08:32
*** achanda has joined #openstack-ironic08:38
*** achanda_ has joined #openstack-ironic08:40
*** achanda has quit IRC08:40
*** achanda_ is now known as achanda08:41
*** Marga_ has joined #openstack-ironic08:47
*** nosnos has quit IRC08:51
*** nosnos has joined #openstack-ironic08:51
*** Marga_ has quit IRC08:52
*** romcheg has joined #openstack-ironic08:59
*** nosnos has quit IRC09:06
*** ujuc has quit IRC09:06
*** ifarkas has joined #openstack-ironic09:09
*** ndipanov_gone is now known as nidipanov09:12
*** lucasagomes has joined #openstack-ironic09:24
*** andreykurilin_ has joined #openstack-ironic09:30
*** achanda has quit IRC09:33
*** nosnos has joined #openstack-ironic09:34
openstackgerritHarshada Mangesh Kakad proposed openstack/ironic: Set info = _parse_driver_info(self.node) in setup rather than testcases  https://review.openstack.org/13627409:34
*** derekh has joined #openstack-ironic09:35
raminenilucasagomes: morning :)09:35
*** dtantsur|afk is now known as dtantsur09:37
dtantsurMorning Ironic, TGIF!09:37
raminenidtantsur: morning :)09:37
*** viktors has joined #openstack-ironic09:43
*** Marga_ has joined #openstack-ironic09:48
naohirotdtantsur: good morning!09:52
dtantsurnaohirot, ramineni o/09:53
*** Marga_ has quit IRC09:53
*** k4n0 has quit IRC09:53
naohirotdtantsur: can I ask basic question about swift?09:53
dtantsurnaohirot, only if very basic, I know very little about swift :(09:54
dtantsur#openstack-swift should be much better place09:54
naohirotdtantsur: Swift export object though HTTP/HTTPS, right?09:55
dtantsurI suppose so09:55
*** dlpartain1 has quit IRC09:55
naohirotdtantsur: and iscsi_ilo mounts ISO from swift object storage.09:56
naohirotdtantsur: I suppose iLO accesses the ISO by HTTP/HTTPS.09:56
naohirotdtantsur: by mounting ISO as virtual media.09:57
dtantsurthen it's not iscsi_ilo, iscsi deploy is not using virtual media09:57
dtantsuroh no, you're right09:57
dtantsursorry, I confused it with pxe_ilo09:57
naohirotdtantsur: when is iscsi protocol used? that's my question.09:58
dtantsurnaohirot, virtual media is used to put kernel & ramdisk on the machine. iscsi is used to put a large instance image on the target hard drive09:58
dtantsurnaohirot, it's the ramdisk's responsibility to expose the hard drive via iscsi09:59
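The flow dtantsur describes can be sketched from the conductor's side roughly as below. This is illustrative only, not Ironic's actual code; the IQN, portal address and device path are placeholders for values the deploy ramdisk publishes back to the conductor:

```shell
# Log in to the iSCSI target the deploy ramdisk exposed
iscsiadm -m node -T iqn.2008-10.org.openstack:<node-uuid> -p <ramdisk-ip>:3260 --login

# The node's disk now appears as a local block device; write the instance image to it
qemu-img convert -O raw user-image.qcow2 /dev/disk/by-path/<iscsi-lun>

# Disconnect before rebooting the node into the instance
iscsiadm -m node -T iqn.2008-10.org.openstack:<node-uuid> -p <ramdisk-ip>:3260 --logout
```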
*** chenglch|2 has quit IRC10:00
naohirotdtantsur: Aha, ramdisk is iscsi target from conductor's point of view.10:00
naohirotdtantsur: If so, nfs_irmc and cifs_irmc should be also iscsi_irmc, what do you think?10:01
*** Masahiroo has joined #openstack-ironic10:01
dtantsurnaohirot, I'm not sure how nfs_irmc and cifs_irmc are going to work :)10:02
naohirotdtantsur: I totally misunderstood iscsi_ilo.10:02
naohirotdtantsur: nfs or cifs is used to mount virtual media in case of irmc.10:03
dtantsuras you see, I also confuse our numerous drivers from time to time :)10:03
naohirotdtantsur: me too :-)10:03
dtantsurnaohirot, we have 2 major means of putting an instance image on a node: iSCSI and ironic-python-agent (widely known as IPA or just agent)10:03
dtantsurnaohirot, I suppose your drivers should work with one of these10:04
naohirotdtantsur: Okay10:04
*** dlpartain has joined #openstack-ironic10:10
raminenilucasagomes: hi, need your opinion on separating boot out as its own resource and keeping device and mode as subresources of boot in the REST API, to expose boot_device and boot_mode respectively10:10
raminenilucasagomes: currently, 3 resources i could think of correspond to boot: boot_device, boot_mode, secureboot (proposed as part of https://review.openstack.org/#/c/135845/1/specs/kilo/uefi-secure-boot-management-interfaces.rst ). so, thinking exposing boot as a separate resource would be a cleaner way to do this? what do you think, sounds good to you?10:11
openstackgerritImre Farkas proposed openstack/ironic-specs: New driver interface for RAID configuration  https://review.openstack.org/13589910:18
*** andreykurilin_ has quit IRC10:20
*** Masahiroo has quit IRC10:21
*** andreykurilin_ has joined #openstack-ironic10:22
*** jcoufal has quit IRC10:24
*** jcoufal has joined #openstack-ironic10:25
*** blinky_ghost has joined #openstack-ironic10:26
openstackgerritAnusha Ramineni proposed openstack/ironic-specs: Add get/set boot mode to Management Interface  https://review.openstack.org/12952910:31
*** Haomeng has joined #openstack-ironic10:38
*** Haomeng|2 has quit IRC10:39
*** dtantsur is now known as dtantsur|brb10:41
*** igordcard has joined #openstack-ironic10:41
rushiagrnaohirot: hey hi10:42
rushiagrnaohirot: I no longer work on it, so I might not be up-to-date on it10:42
rushiagrnaohirot: the part I was working on, has been moved to a new openstack project named as Manila10:42
rushiagrnaohirot: you can ask more information in it's own channel: #openstack-manila10:43
*** Marga_ has joined #openstack-ironic10:49
*** igordcard has quit IRC10:50
*** igordcard has joined #openstack-ironic10:51
*** naohirot has quit IRC10:52
*** foexle has joined #openstack-ironic10:53
*** Marga_ has quit IRC10:54
*** igordcard has quit IRC10:55
*** ramineni has quit IRC10:59
*** pensu has quit IRC11:05
*** igordcard has joined #openstack-ironic11:08
*** Haomeng|2 has joined #openstack-ironic11:18
*** Haomeng has quit IRC11:18
*** pelix has joined #openstack-ironic11:19
*** pensu has joined #openstack-ironic11:32
openstackgerritHarshada Mangesh Kakad proposed openstack/ironic: Avoid calling _parse_driver_info in every test  https://review.openstack.org/13627411:32
*** Marga_ has joined #openstack-ironic11:50
*** Marga_ has quit IRC11:55
blinky_ghostHi all, I'm trying to use autodiscovery in tuskar-ui  to provision my node using ironic. The node gets ramdisk discovery but kernel panics: http://picpaste.com/Screenshot_from_2014-11-21_11_53_25-avYqqHX0.png  any hint what could be wrong? thanks11:55
*** dlpartain has quit IRC11:57
*** jcoufal has quit IRC11:58
*** naohirot has joined #openstack-ironic12:00
lucasagomesblinky_ghost, it looks like a different error than u had yesterday, yesterday the ramdisk booted right?12:00
lucasagomesblinky_ghost, kernel panic can be a lot of different things, missing base drivers, wrong combination of kernel and ramdisk12:01
lucasagomesetc...12:01
lucasagomeswhat did it change from yesterday to today that it's not booting anymore?12:01
blinky_ghostlucasgomes: yes, different error, I was testing autodiscovery.12:01
lucasagomesblinky_ghost, have u manually updated the ramdisk? or u built it again with DIB?12:02
blinky_ghostlucasgomes now I'm testing without autodiscovery. I built the ramdisk with DIB and it didn't work, so I updated it manually. testing12:03
*** dlpartain has joined #openstack-ironic12:03
lucasagomesright u could have messed permissions when compacting it again and it caused the ramdisk to fail to boot12:03
lucasagomesare u logged in as root to extract and compact the ramdisk before modifying it?12:04
lucasagomes" The node gets ramdisk discovery but kernel panics" So it's the deploy ramdisk!?12:04
blinky_ghostlucasgomes yes I used root to compact it12:06
lucasagomesright, does the deploy ramdisk works? are they built using the same image?12:07
blinky_ghostlucasgomes I think you're right, I created the ramdisk manually and somehow messed it because I get the error with or without autodiscovery12:08
blinky_ghostlucasgomes I'll try to create a new ramdisk12:09
*** alexpilotti has joined #openstack-ironic12:13
*** romcheg has quit IRC12:14
lucasagomesright12:19
lucasagomesyeah, it's not hard to mess things up doing that :)12:19
naohirotlucasagomes: hi12:22
naohirotblinky_ghost: hi12:22
blinky_ghosthi12:22
naohirotblinky_ghost: may I talk to lucasagomes , are you still discussing?12:23
blinky_ghostnaohirot: sure :)12:24
lucasagomesnaohirot, hi12:24
naohirotblinky_ghost: thanks!12:24
naohirotlucasagomes: I'd like to hear your opinion how to implement nfs_irmc and cifs_irmc.12:24
naohirotlucasagomes: I have totally misunderstood iscsi_ilo.12:25
lucasagomesnaohirot, right, is it one of the specs proposed?12:25
naohirotlucasagomes: I though that iscsi is used to mount virtual media.12:25
naohirotlucasagomes: yes, it is one on spec, irmc virtual media deploy driver.12:26
lucasagomesnaohirot, oh... right... not really that's how the ramdisk exposes the disk to the conductor, not really to do with mounting a iscsi volume using the BMC or anything like that12:26
naohirotlucasagomes: Yes, but I thought that iLO uses iscsi to mount virtual media :-)12:27
*** romcheg has joined #openstack-ironic12:27
lucasagomesi c :)12:27
naohirotlucasagomes: that's the reason I named nfs_irmc and cifs_irmc, because iRMC use NFS/CIFS to mount virtual media.12:28
naohirotlucasagomes: and by reading iLO4 manual, iLO seems to use HTTP/HTTPS to mount virtual media.12:29
naohirotlucasagomes: that's the reason iscsi_ilo uses swift object storage.12:29
lucasagomesnaohirot, right, AFAIUI. they put it on swift and get a temp url from there12:30
naohirotlucasagomes: what I'd like to hear your opinion is that whether I should use cinder instead of swift or not.12:30
lucasagomesnaohirot, not 100% sure, you may want to talk to ramesh about it, he was directly involved into writting the driver12:30
lucasagomesbut I believe that12:31
lucasagomesby using swift and temp urls u avoid the need of having a authentication key to fetch the image12:31
lucasagomesjust like it would be required if they used only glance for example12:31
lucasagomesnaohirot, for what u want to do, attaching external block devices12:31
lucasagomesI believe cinder is the right place12:32
naohirotlucasagomes: but iRMC cannot mount virtual media by HTTP/HTTPS.12:32
lucasagomesand time to time we see folks wanting the cinder integration12:32
lucasagomesnaohirot, also, jroll was looking at having diskless nodes in Ironic12:33
lucasagomesidk if he's planning to use cinder or what, but you guys may want to talk12:33
naohirotlucasagomes: Okay I'll talk to jroll, I understood that cinder is right place for iRMC.12:34
lucasagomesyeah it seems to be12:35
lucasagomesI think there's even a old blueprint (not spec) about cinder boot volumes12:35
* lucasagomes checks12:35
lucasagomeshttps://blueprints.launchpad.net/ironic/+spec/cinder-integration12:35
lucasagomeswell not many info there12:35
naohirotlucasagomes: regarding iscsi, is the conductor initiator of iscsi?12:35
naohirotlucasagomes: or is nova initiator of iscsi to install the User OS?12:37
lucasagomesnaohirot, the deploy ramdisk will create the target and pass it to the ironic conductor12:38
lucasagomesthe conductor will then mount it, format it and copy the image12:38
naohirotlucasagomes: Okay, ironic conductor is initiator of iscsi.12:38
*** pensu has quit IRC12:39
lucasagomesyup12:39
naohirotlucasagomes: If so, do all the drivers, pxe_*, iscsi_ilo, agent_*, use iscsi to install the user os, right?12:39
* lucasagomes rechecks12:40
lucasagomesnaohirot, no the agent_ does it differently12:40
lucasagomesthey fetch the image directly from glance (I believe) convert and write it locally12:40
lucasagomesit's something that the normal ramdisk could also do, but right now it doesn't12:40
naohirotlucasagomes: do you mean agent_* by "they"?12:41
lucasagomesyeah... I mean drivers using the Agent deploy ramdisk12:42
lucasagomesfetches the image directly and writes it to the disk locally12:42
lucasagomeswithout exposing the local disk via iscsi, passing it to the conductor to write it etc12:42
lucasagomes(maybe it sounded confusing... )12:43
lucasagomesI mean the Agent ramdisk does fetch the image and write it locally12:43
lucasagomeson the machine it's booted on12:43
naohirotlucasagomes: don't you know what kind of protocol is that?12:43
naohirotlucasagomes: user os is large size12:44
*** andreykurilin_ has quit IRC12:44
lucasagomesnaohirot, I believe they use HTTP to fetch it12:44
* lucasagomes gotta dig into the code12:45
openstackgerritHarshada Mangesh Kakad proposed openstack/ironic: Add documentation for SeaMicro driver  https://review.openstack.org/13632412:45
naohirotlucasagomes: I see.12:45
*** Haomeng has joined #openstack-ironic12:46
*** Haomeng|2 has quit IRC12:46
naohirotlucasagomes: In case of agent_*, the deploy ramdisk becomes a client to the conductor or glance.12:46
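The agent-side behavior lucasagomes describes can be sketched roughly as below. This is a hedged illustration of what IPA does on the node itself, not its actual code; the URL, image name and target disk are placeholders:

```shell
# Inside the agent ramdisk running on the node:
# fetch the instance image over HTTP ...
curl -o /tmp/user-image.qcow2 http://<image-service-url>/user-image.qcow2

# ... then convert and write it straight to the local disk
# (/dev/sda as the target disk is an assumption)
qemu-img convert -O raw /tmp/user-image.qcow2 /dev/sda
```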
*** rakesh_hs has quit IRC12:47
*** Marga_ has joined #openstack-ironic12:47
*** rakesh_hs has joined #openstack-ironic12:48
*** dprince has joined #openstack-ironic12:50
naohirotrushiagr: thanks for replying me. I'm new to openstack. I just wanted to make sure the difference b/w swift and cinder12:56
*** sambetts has joined #openstack-ironic12:58
Shrewsmorning ironicers13:03
lucasagomesShrews, morning13:06
*** athomas has joined #openstack-ironic13:06
*** lucasagomes is now known as lucas-hungry13:15
*** nosnos has quit IRC13:16
*** Marga__ has joined #openstack-ironic13:18
*** Marga_ has quit IRC13:21
*** dtantsur|brb is now known as dtantsur13:27
dtantsurShrews, morning13:28
*** linggao has joined #openstack-ironic13:37
*** linggao_ has joined #openstack-ironic13:38
*** rushiagr is now known as rushiagr_away13:38
sambettsAfternoon ironic13:41
*** lazy_prince is now known as killer_prince13:48
*** dlpartain has left #openstack-ironic13:49
openstackgerritNaohiro Tamura proposed openstack/ironic-specs: iRMC Power Driver for Ironic  https://review.openstack.org/13448713:53
*** Marga__ has quit IRC13:54
*** Marga_ has joined #openstack-ironic13:55
*** mjturek has joined #openstack-ironic13:59
*** jistr has joined #openstack-ironic14:03
*** ryanpetrello has joined #openstack-ironic14:03
*** jistr is now known as jistr|mtg14:04
*** ryanpetrello_ has joined #openstack-ironic14:05
*** ryanpetrello has quit IRC14:08
*** ryanpetrello_ is now known as ryanpetrello14:08
*** dlpartain has joined #openstack-ironic14:09
*** lucas-hungry is now known as lucasagomes14:13
*** rainya has quit IRC14:15
*** rainya has joined #openstack-ironic14:16
*** cohn has quit IRC14:16
*** russell_h has quit IRC14:16
*** cohn has joined #openstack-ironic14:16
*** cohn is now known as Guest8271114:16
*** russell_h has joined #openstack-ironic14:17
*** rakesh_hs has quit IRC14:17
*** rushiagr_away is now known as rushiagr14:27
*** dlpartain has left #openstack-ironic14:27
*** ronald has quit IRC14:29
NobodyCamGood Morning Ironic .... TGIF!14:31
dtantsurNobodyCam, oh really TGIF :)14:34
NobodyCammorning dtantsur :) ya14:34
dtantsursambetts, NobodyCam, morning/afternoon14:34
NobodyCamdtantsur: are you planning on posting a status report today ?14:39
dtantsurNobodyCam, you mean bugs? we agreed on creating a report on Mon, but as I may be absent on Mon, I'll probably create it now14:40
NobodyCam:) you are correct... /me goes back to sipping coffee14:41
NobodyCam:-p14:41
ShrewsNobodyCam: no sip. gulp14:41
dtantsurI'll post it today and update on Mon, if I'm available14:42
NobodyCammorning Shrews :)14:51
NobodyCamthank you dtantsur14:51
NobodyCamShrews: if I gulp I'll be out of coffee sooner14:51
ShrewsNobodyCam: well we can't have that. best just sip then  :)14:52
*** Guest82711 is now known as cohn14:53
*** cohn has quit IRC14:53
*** cohn has joined #openstack-ironic14:53
*** r-daneel has joined #openstack-ironic14:54
NobodyCamlol14:56
NobodyCamI love some of these names.15:02
NobodyCamI want to combine them like: Pixie Boots15:03
dtantsurlol15:03
NobodyCamlucasagomes: is that allowed ^^^15:03
dtantsurthat could be a winner actually :D15:04
lucasagomeshah15:04
lucasagomeswell yeah why not15:04
lucasagomesadd to the list :D15:04
* NobodyCam replys15:04
lucasagomes\o/15:05
NobodyCam:) lucasagomes I also added a section to the meeting agenda, please feel free to remove if its not needed15:10
lucasagomesNobodyCam, right15:10
lucasagomesthe only thing is I might not be present on next meeting ;(15:10
lucasagomesI'll be going to paris (again) on monday >.<15:11
NobodyCamoh nice :)15:11
NobodyCamdang that city is expensive ... wow15:11
lucasagomesyeah15:11
*** Marga_ has quit IRC15:21
blinky_ghostlucasgomes: after recreating the image it works :)15:23
NobodyCamblinky_ghost: great to hear!15:25
*** Marga_ has joined #openstack-ironic15:25
blinky_ghosta big thank you to everyone been 2 weeks struggling with this :)15:26
lucasagomesblinky_ghost, :) yvw15:28
dtantsurifarkas, lucasagomes, I have something for you to review ;) https://review.openstack.org/#/c/136291/15:30
lucasagomeso/15:31
dtantsur... and anyone suddenly feeling like reviewing ironic-discoverd patches too :D15:32
*** zz_jgrimm is now known as jgrimm15:34
*** andreykurilin_ has joined #openstack-ironic15:36
*** Marga_ has quit IRC15:37
foexlehey guys, short question: every time i try to boot a new node i get "node is locked by xxx". compute => Error contacting Ironic server for 'node.update' ####### api => "PATCH /v1/nodes/f1ddbd1a-4758-4ba5-bce6-822435e3da0b HTTP/1.1" 409 node is locked -.- anyone have a short hint ?15:37
lucasagomes*sighs*15:39
NobodyCamfoexle: what state is the node in.15:39
lucasagomesfoexle, yeah that node may have a lock stuck :/15:39
foexle| f1ddbd1a-4758-4ba5-bce6-822435e3da0b | None          | power off   | None            | False       |15:40
NobodyCamI've two cases where node did not remove the instance-uuid15:40
foexlelucasagomes: yeah .... i had delete (in db) and recreate again .... but i doesn't work ....15:40
lucasagomesfoexle, delete the node? or the lock?15:40
*** Haomeng has quit IRC15:41
foexleafter a while if 60 attemps reached by ironic conductor i get the node (nova list) in ERROR state15:41
foexleand i can delete15:41
*** naohirot has quit IRC15:43
foexlelucasagomes: i've read thats a bug ... so not a bug but it's not possible to solve a deadlock right ?15:43
foexlei need to delete the whole node in my ironic db and recreate it15:43
foexlebut how does this deadlocks occurs ?15:44
lucasagomesfoexle, u can delete the lock in the db only15:44
lucasagomesdon't need to delete the whole node15:44
foexlelucasagomes: how ?15:44
lucasagomesat the node tables, there's a (forgot the name of the field)15:44
lucasagomeslemme check15:44
foexlereservation ?15:44
lucasagomesreservation column15:45
lucasagomesexactly15:45
foexlei need to clean ? so update nodes set reservation=''; ?15:46
lucasagomesfoexle, so it occurs if a conductor that is managing that node dies mid-operation15:46
lucasagomesfoexle, it won't work... we talked in Paris about exposing it in the API15:46
foexlehmmm dies? ... ok i'll test :)15:46
lucasagomesI mean exposing a way in the API to break the lock15:46
foexlelucasagomes: yeah i know; i saw it in launchpad15:46
foexlelike the state changer in nova15:47
lucasagomesfoexle, people also want to make it pluggable to be able to use zookeeper, for e.g15:47
lucasagomesyup15:47
foexleso i need only to clean this column and try again ?!15:47
foexlelet me check :)15:47
lucasagomesbut right now, yeah I know it kinda sucks to not have a way to break it from the api15:47
lucasagomesfoexle, yeah15:47
*** Haomeng has joined #openstack-ironic15:50
foexlelucasagomes: nope, doesn't have an effect :( to clean the column and try again isn't working15:51
foexlesame error ... is there other relationship maybe ? ... i found nothing i've checked all tables15:51
*** nidipanov has quit IRC15:51
lucasagomeshmm checks15:52
lucasagomeswondering if it changed after the conductor_affinity was introduced15:52
lucasagomesfoexle, will check the code 1 sec15:52
lucasagomesfoexle, ur getting a NodeLocked exception right?15:53
*** pcrews has joined #openstack-ironic15:54
*** anderbubble has joined #openstack-ironic15:54
devanandamornin, all15:55
* anderbubble waves15:55
*** achanda has joined #openstack-ironic15:56
lucasagomesdevananda, morning15:56
NobodyCammorning devananda15:57
foexlelucasagomes: yep :)15:58
sambettso/ devananda15:59
lucasagomesfoexle, odd, looks like setting reservation to NULL should do the trick16:00
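What the two of them are describing is a single-row UPDATE on the nodes table. A minimal sketch against an in-memory SQLite stand-in for the Ironic database (the `reservation` column name is from the chat; everything else here is illustrative — on a real deployment the UPDATE would run against MySQL/PostgreSQL):

```python
import sqlite3

# Stand-in for the Ironic DB: nodes.reservation holds the hostname of the
# conductor that currently holds the lock on the node.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nodes (uuid TEXT PRIMARY KEY, reservation TEXT)")
db.execute("INSERT INTO nodes VALUES (?, ?)",
           ("f1ddbd1a-4758-4ba5-bce6-822435e3da0b", "dead-conductor-host"))

# Break the stale lock: set reservation to NULL, not '' -- an empty string
# may still read as "reserved" to the lock check, per the chat above.
db.execute("UPDATE nodes SET reservation = NULL WHERE uuid = ?",
           ("f1ddbd1a-4758-4ba5-bce6-822435e3da0b",))
db.commit()

row = db.execute("SELECT reservation FROM nodes").fetchone()
print(row[0])  # None
```

This NULL-vs-empty-string distinction may be why foexle's earlier `update nodes set reservation='';` appeared to have no effect.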
*** achanda has quit IRC16:02
foexlelucasagomes: nope -.- .... it's ok i'll delete the node completely16:03
lucasagomesdevananda, when u get some time, wanna talk about the ENROLL and INIT states of the new state machine?16:03
foexleand recreate again16:03
*** andreykurilin_ has quit IRC16:03
*** andreykurilin__ has joined #openstack-ironic16:03
lucasagomesfoexle, :(16:03
* lucasagomes tries to figure out what's going on16:03
devanandalucasagomes: I have ~15 minutes, then need to run -- have to go to the DoT office and get a replacement title for my old car16:04
devanandathat should be .. .fun ...16:04
lucasagomesdevananda, I see... have u checked the spec recently?16:04
lucasagomesI added some more thoughts there16:05
foexlelucasagomes: maybe i'm wrong ... so i'm trying my first ironic setup from scratch (juno) ... but it runs into lock errors every time :D ... so maybe something's wrong. Save your time16:05
foexlelucasagomes: but thanks for your help !16:05
devanandalucasagomes: ack. will look now16:05
lucasagomesfoexle, ack, if u get really stuck lemme know16:05
foexleall right thanks16:06
*** anderbubble has quit IRC16:07
*** Viswanath has joined #openstack-ironic16:11
*** Viswanath has quit IRC16:14
devanandalucasagomes: "both ENROLL and INIT means the same, it means that the node is registered in Ironic but may not contain all the information to be managed by it yet"16:19
devanandalucasagomes: that's where I'm thinking of it differently16:19
devanandalucasagomes: to me, ENROLL means "may or may not have all the information necessary to manage it"16:20
devanandalucasagomes: whereas INIT means "ironic has confirmed it has all the information it needs, node is -initialized-"16:20
devanandaexample16:20
*** Marga_ has joined #openstack-ironic16:21
devanandaI create a node and don't pass in ipmi creds. node is in ENROLL state, and driver validation fails16:21
devanandai update node to add ipmi creds, but no properties16:21
devanandanode is in ENROLL state still, but driver validation succeeds16:21
lucasagomesa-ha, right16:21
devanandaI now have a choice16:21
lucasagomesI see I understand it better with the validation checks16:21
devananda1. PUT DISCOVERING /provision_state16:21
devanandaand ironic tries to find the properties16:22
devanandaor 2. PUT {some data} /node/properties16:22
devanandaat the end of either of those, ironic checks again -- does it have enough info? if so, automatically transition to INIT16:22
devanandabecause now the node is "initialized"16:22
devanandastill requiers an operator to move it from INIT to AVAILABLE16:23
lucasagomesone thing, can a node in INIT trigger discovery again? Or should the operator put it back to ENROLL and then trigger discovery16:23
lucasagomesI see16:23
devanandaINIT can go back to DISCOVERY by manual action (not automatic)16:23
devanandaoooh16:23
devanandawhat if all the *FAIL states dump a node to INIT16:23
devanandai mean, try to transition it *to* INIT16:24
devanandaso it re-runs the same validation that that step does? -- just thinking out loud here16:24
lucasagomesheh it shouldn't16:24
devanandabut anyway -- do you see the difference that I'm thinking of between ENROLL and INIT? is that useful?16:24
lucasagomesyeah16:24
lucasagomesI see what it means now16:24
lucasagomesI will think over, I'm thinking about jumping with both feet on that state machine16:25
lucasagomesit's kinda strategic for us, so we need to get it done asap16:25
devanandayup16:26
devanandalots of things depend on this16:26
devanandaI think we should prioritize getting this done by k-116:26
lucasagomes+116:26
devanandai'll try to spend some time over the holidays working on it, assuming it's going to take at least a few weeks to get ready to land it16:26
devanandaooh, also, what I said above is not what victor_lowther had in mind16:27
devananda"INIT: Ironic has verified that it can manage the node, but the node may require further configuration before it can be made available."16:27
devanandahis distinction may be even more useful there16:27
devanandaENROLL: there is a node, but Ironic is not able to even contact it16:27
* lucasagomes doesn't remember that part16:27
lucasagomesoh he just commented on it16:28
* lucasagomes refreshes16:28
devanandaINIT: Ironic has confirmed it can control the node16:28
lucasagomesyeah, I see the distinction more clear now16:28
devanandaAVAILABLE: Ironic has enough information about the node to make it available AND the operator has requested that it is made available16:28
lucasagomesthe validation to check whether it's manageable or not makes sense16:28
lucasagomessounds good16:29
devanandaok - I gotta run .. .hope that's helpful16:29
devanandawill bbl16:29
lucasagomesdevananda, good luck there16:29
lucasagomesI will read his comments and think more about it16:29
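The ENROLL/INIT semantics devananda sketched above can be written down as a tiny table-driven state machine. This is a paraphrase of the discussion, not the actual spec or Ironic code; state names and rules are taken from the chat:

```python
# Allowed transitions, per the discussion: ENROLL may or may not have enough
# info; INIT means Ironic has confirmed it can manage the node; moving from
# INIT to AVAILABLE requires an explicit operator action.
TRANSITIONS = {
    "ENROLL": {"DISCOVERING", "INIT"},
    "DISCOVERING": {"INIT"},
    "INIT": {"DISCOVERING", "AVAILABLE"},  # back to DISCOVERING is manual
    "AVAILABLE": set(),
}

def advance(state, target, validated=False):
    """Move a node to `target`, enforcing the rules from the discussion."""
    if target not in TRANSITIONS[state]:
        raise ValueError("invalid transition %s -> %s" % (state, target))
    if target == "INIT" and not validated:
        # INIT only once driver validation succeeds (e.g. ipmi creds present)
        raise ValueError("cannot enter INIT before driver validation passes")
    return target

# devananda's example: add ipmi creds, validation passes, node reaches INIT;
# an operator then makes it AVAILABLE.
state = "ENROLL"
state = advance(state, "INIT", validated=True)
state = advance(state, "AVAILABLE")
print(state)  # AVAILABLE
```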
*** Marga_ has quit IRC16:32
*** foexle has quit IRC16:33
victor_lowtherlucasagomes: please do. My irc access at work is spotty, so I would prefer that discussion on the spec happen in the comments.16:33
lucasagomesvictor_lowther, sure, +1 for having discussions on the spec16:34
lucasagomesI just needed a high-throughput here to try to understand the differences better16:35
*** Marga_ has joined #openstack-ironic16:36
victor_lowtherYeah, if I did not have to do irc over my phone at work for misguided security reasons I would pay more attention to it.16:38
lucasagomesfair enuff16:39
jrollmorning everybody :)16:49
jrollvictor_lowther: :(((16:50
victor_lowtherAlthough irccloud lets me know when my name is said...16:51
lucasagomesjroll, moring16:59
lucasagomesmorning*16:59
NobodyCammorning jroll :)17:01
*** Marga_ has quit IRC17:02
*** Marga_ has joined #openstack-ironic17:03
dtantsurmorning jroll :)17:08
dtantsurand g'night to everyone else! have a nice weekend17:09
*** dtantsur is now known as dtantsur|afk17:10
*** Marga_ has quit IRC17:10
*** Marga_ has joined #openstack-ironic17:11
NobodyCamnight dtantsur|afk :)17:15
NobodyCamhave a good weekend your self17:15
*** anderbubble has joined #openstack-ironic17:20
*** dprince has quit IRC17:26
*** dprince has joined #openstack-ironic17:26
*** Marga_ has quit IRC17:30
*** Marga_ has joined #openstack-ironic17:31
*** Marga_ has quit IRC17:31
*** Marga_ has joined #openstack-ironic17:32
*** Marga_ has quit IRC17:32
*** Marga_ has joined #openstack-ironic17:33
*** Marga_ has quit IRC17:33
*** Marga_ has joined #openstack-ironic17:34
*** Marga_ has quit IRC17:35
*** Marga_ has joined #openstack-ironic17:36
*** Marga_ has quit IRC17:36
*** Marga_ has joined #openstack-ironic17:37
*** kfox1111 has joined #openstack-ironic17:43
kfox1111Almost got ironic working. pxe booting is working up to the deploy image, then deploying is failing.17:44
kfox1111http://pastebin.com/851N1jEM17:44
kfox1111Any idea what might cause that?17:44
jrollNov 21 09:38:07 ironic.osg.pnl.gov ironic-conductor[27312]: Stderr: 'Error: Partition(s) 1, 2, 3 on /dev/sdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.\n'17:45
jrollthat's weird17:45
*** Marga_ has quit IRC17:46
kfox1111yeah.17:47
kfox1111I followed the install docs and used the ubuntu image instructions.17:48
kfox1111Note, the image on disk from before did have md arrays on it. Maybe that had an effect?17:48
jrolloh, maybe?17:49
jrollwe do clear the very beginning and end17:49
jrollseems like we could let that error through, though17:49
kfox1111so if you clear it, maybe redeploying it a second time will work?17:49
jrollmaybe?17:50
jrollworth a shot17:50
kfox1111let me try it. if so, I'll submit a bug.17:50
jrollcool17:50
kfox1111Is there any way to get the deploy image to use the ipmi console? its very... quiet during that part.17:50
jrollI have no idea, I don't really use the pxe driver17:51
kfox1111hmm.. now Stderr: 'Error: The location 1024001 is outside of the device /dev/sdd.\n'17:51
*** harlowja_away is now known as harlowja17:52
*** sambetts has quit IRC17:52
kfox1111what does -i pxe_root_gb=<root partition size> \ mean?17:56
jrollin the client?17:57
kfox1111in the flavor17:57
*** derekh has quit IRC17:57
jrolloh, that's the size for the root partition17:57
kfox1111sorry. in the node-create17:57
jrollthere's also (optionally) ephemeral/swap17:57
jrollum, that shouldn't be in the node-create call, nova sends that17:58
kfox1111there's pxe_root_gb and there is -p local_gb=$DISK_GB17:58
kfox1111http://docs.openstack.org/developer/ironic/deploy/install-guide.html#configure-identity-service-for-bare-metal17:58
JayFkfox1111: absolutely nuke the whole disk if it had a raid on it17:58
kfox1111the local_gb makes sense. there is no explanation on what pxe_root_gb means.17:58
JayFkfox1111: this is one of the reasons I've been advocating for decom/zapping as a default; you can have weird things happen with raid superblocks left over :)17:58
jrollI don't see pxe_root_gb there?17:59
kfox1111JayF. a couple of tries should have cleared it by now?17:59
kfox1111jroll: search in the document for "ironic node-create -d pxe_ipmitool"17:59
JayFkfox1111: yeah, it's likely other things were/are wrong too; but generally Ironic doesn't clear your whole disk, only beginning/end then writes an image out ...18:00
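The "clear beginning and end" step JayF mentions amounts to zeroing the first and last stretch of the device, which is where MBR/GPT headers and most md/RAID superblocks live. A rough sketch (sizes are illustrative; a throwaway file stands in for a real block device such as /dev/sdc, where you would get the size via ioctl rather than `os.path.getsize`):

```python
import os

WIPE = 1024 * 1024  # zero 1 MiB at each end of the device

def wipe_disk_ends(path):
    """Zero the head and tail of a disk to clear partition/RAID metadata."""
    size = os.path.getsize(path)  # real block devices need BLKGETSIZE64
    with open(path, "r+b") as dev:
        dev.write(b"\x00" * min(WIPE, size))  # head: MBR/GPT primary header
        if size > WIPE:
            dev.seek(size - WIPE)
            dev.write(b"\x00" * WIPE)         # tail: GPT backup, md superblock
        dev.flush()
        os.fsync(dev.fileno())

# demo on a 4 MiB throwaway file full of non-zero bytes
demo = "/tmp/fake-disk.img"
with open(demo, "wb") as f:
    f.write(b"\xee" * (4 * WIPE))
wipe_disk_ends(demo)
data = open(demo, "rb").read()
```

Metadata that lives elsewhere on the disk (e.g. older md superblock formats) can survive this, which fits the weirdness kfox1111 saw on a disk that previously held md arrays.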
jrollI don't even see node-create on that page :|18:00
kfox1111Section "Flavor Creation"18:00
kfox1111step 3.18:00
kfox1111Under Juno18:01
jrollI see flavor-create18:01
kfox1111second box.18:01
*** imtiaz has joined #openstack-ironic18:01
jrolland node-update18:01
jrollbut I don't see pxe_root_gb18:01
jrollironic node-update $NODE_UUID add \18:01
jrolldriver_info/pxe_deploy_kernel=$DEPLOY_VMLINUZ_UUID \18:01
jrolldriver_info/pxe_deploy_ramdisk=$DEPLOY_INITRD_UUID \18:01
kfox1111what?!?18:02
kfox1111heh. I just refreshed the page, and its different now. :/18:02
kfox1111yeah. no node-create anywhere.18:02
jrolllol :(18:02
kfox1111not a useful update to the doc. :/18:02
kfox1111so whats the proper way to node-create?18:03
jrollodd18:03
kfox1111the old doc said: http://pastebin.com/pegSdXJP18:04
jrolldevananda: uh18:04
jrolldevananda: you removed the only node-create from our deployer docs :P18:04
jrollhttps://review.openstack.org/#/c/136187/1/doc/source/deploy/install-guide.rst,cm18:04
*** ChuckC has quit IRC18:04
kfox1111its also missing the port create. though that was never there.18:05
*** romcheg has quit IRC18:05
*** romcheg has joined #openstack-ironic18:05
jrollkfox1111: yeah, that's correct, just skip the pxe_root_gb bit18:05
kfox1111it should do the right thing automatically?18:05
jrollyes, nova sets that18:05
kfox1111k.18:05
*** romcheg has quit IRC18:06
mjtureknot sure this is the best place to ask, but when running diskimage-builder I'm getting "dib-run-parts: command not found". Anyone seen this before? Already ran the dependency install script from the tripleo guys18:06
jrollmjturek: I'd ask in #tripleo18:07
mjturekjroll thanks will do!18:07
jroll:)18:07
kfox1111could it also be that my disk sizes are wrong? I have 1 TB disks, but set flavor and node to have 1000GB disk sizes. The partitioning complaint was that it couldn't do 1024001 which doesn't count the way hd manufacturers count. :/18:09
jrollhmm, maybe18:10
jrollthat's strange18:10
jrollwell, what's your flavor look like18:10
jrollin terms of root/ephemeral/swap18:10
kfox1111cbf47302-9ac3-4f03-99b2-a04c6f6320eb | my-baremetal-flavor | 65536     | 1000 | 0         |      | 32    | 1.0         | True18:10
jrollwhat's that 32?18:10
kfox1111vcpu's18:10
jrolloh, so no swap or ephemeral?18:11
kfox1111correct. the doc didn't say to create any18:11
jrollright18:11
kfox1111should I just drop both of them to 900GB and see if it works better?18:12
jrollI have no idea, then18:12
jrollyeah, that will probably work18:12
NobodyCamgah corp exp report filed... brain only slightly damaged as a result18:12
JayFNobodyCam: I did mine first day back18:12
jrolllol18:12
JayFNobodyCam: made that a habit early on because otherwise I'd end up with a drawer of receipts that had faded 3 months later18:13
JayFlol18:13
NobodyCamI wait for all expenses to show up, else I end up filing like two or three reports for the same trip18:13
JayFAha, corporate card or such? I just keep receipts18:14
NobodyCamyep corp card :-p18:15
*** achanda has joined #openstack-ironic18:18
kfox1111hmm.. yeah. dropping it to 900GB seems to be working better.18:18
jrollneat18:18
jrollthat should still be filed as a bug18:18
kfox1111stupid hard drive vendors and their winning supreme court case. :/18:19
JayFand probably clarified in the docs18:19
jrollthough I can't math right now, so18:19
jrollit might be correct18:19
JayFwanna become an openstack ATC? fix the doc? :)18:19
jrollkfox1111: oh, is the drive actually 1,000,000,000,000 bytes?18:19
*** achanda has quit IRC18:19
jrollor whatever?18:19
*** achanda has joined #openstack-ironic18:19
kfox1111jroll: yes.18:19
JayFiirc there was a lot of discussion about GiB vs GB when that code waas refactored18:19
* jroll cries18:19
JayFI think it was left the way it was because it had been that way and backwards compat, etc18:20
kfox1111but ironic thinks it should be 102400018:20
JayFthat's really awkward18:20
NobodyCamJayF: you see deva's doc refactoring patches from last night18:20
JayFNobodyCam: no18:20
* JayF was ETO yesterday18:20
NobodyCamlike three or four of them18:20
*** spandhe has joined #openstack-ironic18:21
jrollright, 1000GB * 102418:21
jrollor something18:21
*** Marga_ has joined #openstack-ironic18:21
jrollGiB vs GB always confuses me18:21
kfox1111yeah. but hd vendors never count that way anymore.18:21
jrollNobodyCam: he removed the only node-create command18:21
jrollright18:21
kfox1111they always are base 10.18:21
jrollalways? or just consumer drivers?18:21
jrolldrives?18:21
JayFit's pretty much always jroll18:22
kfox1111so if they say they are 1TB, it means 1,000,000,000,00018:22
jrollugh18:22
kfox1111always.18:22
JayFthat's why our "32 GB" satadoms show up as less via df18:22
JayFOS tools tell the troof, marketeers and box-writing folk do not18:22
jrollyeah18:22
kfox1111Some customers sued them a while back for not giving them base2 numbers.18:22
kfox1111the court upheld what the vendors were doing.18:22
jrollyeah, I remember that18:22
kfox1111so it's in the hd vendors' best interest to always use base 10 so they can inflate their numbers.18:23
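The earlier parted error drops straight out of this unit mismatch: treating the flavor's 1000 "GB" as GiB asks for more mebibytes than a base-10 1 TB drive actually has (assuming parted is working in MiB, which matches the "1024001" in the error):

```python
# Flavor requests a 1000 "GB" root partition; partitioning treats it as GiB.
root_gb = 1000
requested_mib = root_gb * 1024            # 1024000 MiB -- parted complained
                                          # about location 1024001

# A marketing "1 TB" drive is 10**12 bytes, counted base 10.
drive_bytes = 10**12
drive_mib = drive_bytes // (1024 * 1024)  # ~953674 MiB actually available

print(requested_mib, drive_mib, requested_mib > drive_mib)
```

Dropping the flavor/node sizes to 900 stays well under the real capacity, which is why kfox1111's retry worked.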
kfox1111at least they aren't as bad as the old game cartridge convention, giving you everything in megabits. ;)18:24
jrolllol18:24
*** dlaube has joined #openstack-ironic18:24
*** Marga__ has joined #openstack-ironic18:24
*** igordcard has quit IRC18:25
*** Marga_ has quit IRC18:25
*** achanda_ has joined #openstack-ironic18:26
kfox1111argh. I gotta figure out how to get ironic to specify the ipmi serial console... not seeing what's going on sucks. :)18:27
openstackgerritOpenStack Proposal Bot proposed openstack/ironic: Updated from global requirements  https://review.openstack.org/13596318:27
openstackgerritOpenStack Proposal Bot proposed openstack/ironic-python-agent: Updated from global requirements  https://review.openstack.org/13596418:27
kfox1111hmm... looks like the node imaged... its pingable now..18:27
jrollkfox1111: there's a config option for extra kernel params, if you're just trying to set the console18:28
kfox1111yeah. in the flavor or node?18:28
jrollit's an ironic-conductor config18:28
* jroll looks18:28
kfox1111ah. ok.18:28
kfox1111just gota poke: console=ttyS1,115200 in there somehow.18:29
jrollpxe.pxe_append_params18:29
jroll"pxe." meaning in the [pxe] section18:30
kfox1111perfect. thanks.18:30
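For reference, the option jroll names goes in ironic.conf like this; the console value is the one kfox1111 wanted, and the other parameters shown are only an illustration of typical defaults:

```ini
[pxe]
# Extra parameters appended to the kernel command line of the deploy ramdisk
pxe_append_params = nofb nomodeset vga=normal console=ttyS1,115200
```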
*** achanda has quit IRC18:30
jrollnp18:30
kfox1111gota love the crudini tool. :)18:31
openstackgerritOpenStack Proposal Bot proposed openstack/python-ironicclient: Updated from global requirements  https://review.openstack.org/13598518:33
NobodyCambrb18:33
*** achanda_ is now known as achanda18:34
*** pelix has quit IRC18:35
lucasagomesI will call it a day folks18:37
lucasagomeshave a great weekend18:37
lucasagomessee ya18:37
JayFlater lucas18:38
*** lucasagomes is now known as lucas-beer18:39
NobodyCamhave a great weekend lucas-beer18:39
kfox1111ahhh. there we go. serial console. thanks. :)18:40
jrollnight lucas18:40
*** imtiaz has quit IRC18:40
JayFkfox1111: Mind talking about what you're using Ironic for? /me curious18:40
JayFalso wondering if I should evangalize the agent driver to you :P18:40
jroll"deploying bare metal"18:41
*** imtiaz has joined #openstack-ironic18:41
JayFjroll: "Deploying openstack clouds" vs "deploying instances on demand for customers"18:41
kfox1111right now, just kicking the tires. I work for the pacific northwest national lab, in their super computer devision.18:41
kfox1111We're using openstack right now internally to provide support services for our big hpc cluster.18:42
jrolloh, hpc stuff? I think you mentioned that yesterday18:42
*** andreykurilin__ has quit IRC18:42
*** andreykurilin_ has joined #openstack-ironic18:42
kfox1111We use cobbler for most deployments and are interested in ironic as a possible replacement for the provisioning piece.18:42
*** Marga__ has quit IRC18:42
kfox1111I think we'll still need it for its awesome repo mirroring capability.18:43
*** Marga_ has joined #openstack-ironic18:43
jrollneat18:43
kfox1111hmm.... Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [25/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by <class 'socket.error'>: [Errno 113] No route to host)]18:43
kfox1111is that expected?18:43
JayFThat's from cloud-init inside the image?18:43
jrollare you running nova's metadata service?18:43
kfox1111yeah. hmmm.. let me double check on its status.18:44
jrollso that's probably a networking thing18:44
jrollor configs maybe?18:44
jrollidk how that works18:44
jrollthe 169.254.169.254 looks sketchy to me :P18:44
NobodyCamthats metadta service18:44
*** ChuckC has joined #openstack-ironic18:44
kfox1111k. I've debugged the metadata server before. just wasn't sure it was expected to work with bare metal or not.18:44
kfox1111169.254.169.254 is a link local address.18:45
jrollright18:45
jrollyour metadata service is probably not accessible via that IP18:45
JayFkfox1111: I suspect you need physical networking to cooperate to make metadata service work18:45
jrollI would assume18:45
*** imtiaz has quit IRC18:45
JayFkfox1111: we're using ConfigDrive downstream in what we do, and patches are up to support it already (I think?) but only for agent driver18:45
kfox1111hmm..18:45
jrollJayF: ironic patches are merged :)18:46
JayFkfox1111: jroll would know as he's implementing configdrive for upstream :)18:46
JayFjroll: oh wow, that was fast18:46
jrollnova patches aren't quite up yet as I need to build a swift wrapper etc18:46
jrollI'm good like that :P18:46
kfox1111trying to remember how this works... with neutron, I think the router namespace puts in a metadata proxy.18:46
kfox1111but since I'm using a flat network, I didn't build a router...18:46
jrollright18:47
jrollcan you set that IP in cloud-init configs, maybe?18:47
jrollor, the correct IP, I should say18:47
kfox1111would have to build images with it hardcoded I think. easier just to get the service on the right ip.18:47
jrollright, you need image cooperation18:48
jrollon a link local IP?18:48
kfox1111should be able to build a dummy router connected to the network with no gateway and it should then work I think.18:49
JayFjroll: IDK if metadata service URL is configurable, that's a damn good idea18:49
jrollwell, I would think normally the OVS or whatever on the hypervisor would be what handles that18:49
jrollI have no idea, this is way out of my knowledge range18:49
jrollJayF: yes18:49
kfox1111JayF: chicken and the egg problem. how does an image built by a 3rd party know what url to contact?18:50
kfox1111Solution, always the same link local one. :/18:51
kfox1111Amazon guarantees it, so everyone uses it.18:51
jrollsadface18:51
NobodyCamtripleo folks have been known to do something like: https://github.com/openstack/tripleo-image-elements/blob/master/elements/devstack/install.d/98-baremetal-network#L1618:52
*** imtiaz has joined #openstack-ironic18:52
JayFkfox1111: ++ I agree, but you can build your own images18:52
JayFkfox1111: the images we use for Rackspace OnMetal have a bit of cloud-init configuration in them too, to bootstrap it up18:52
kfox1111I know. but am lazy. I'd rather all images work unchanged rather than tweak every one. ;)18:53
kfox1111ok. have a router...18:53
*** alexpilotti has quit IRC18:53
*** alexpilotti has joined #openstack-ironic18:53
kfox1111hmm... no I don't... strange...18:53
*** Marga_ has quit IRC18:54
*** Marga_ has joined #openstack-ironic18:54
kfox1111ah... l3-agent is not working... probably because of the initial packstack config.. just a sec.18:54
*** Marga_ has quit IRC18:57
*** Marga_ has joined #openstack-ironic18:58
kfox1111hmm.. still no metadata agent rule...18:58
*** Marga_ has quit IRC18:58
*** Marga_ has joined #openstack-ironic18:59
*** JackieHurst has joined #openstack-ironic19:00
*** igordcard has joined #openstack-ironic19:01
*** alexpilotti has quit IRC19:03
*** killer_prince has quit IRC19:03
*** jistr|mtg has quit IRC19:04
*** killer_prince has joined #openstack-ironic19:05
*** killer_prince is now known as lazy_prince19:05
kfox1111so, anyway, we deploy a lot of our systems using xcat, cobbler or rocks. It would be interesting if we could use ironic, and let ourselves more easily switch nodes between hpc and more traditional cloud use cases.19:08
kfox1111looking at the messages in the log, the deploy image exports an iscsi volume and conductor copies the bits to it?19:10
*** ifarkas has quit IRC19:13
jrollyes, that's how the pxe drivers work19:15
jrollthis is all the deploy ramdisk does: https://github.com/openstack/diskimage-builder/blob/master/elements/deploy-ironic/init.d/80-deploy-ironic19:16
jrollthe agent drivers run a python agent in a ramdisk, which has a rest api that ironic can call to tell it to do things, like download an image over http and write it to disk19:16
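The agent-side "download an image over http and write it to disk" step jroll describes is, at its core, a chunked stream copy onto the block device. A minimal sketch — the real ironic-python-agent also verifies checksums, handles compressed/converted images, and writes to an actual device node:

```python
import io
import shutil

CHUNK = 1024 * 1024  # copy in 1 MiB chunks so large images never sit in RAM

def write_image(stream, dev_path):
    """Stream a raw disk image onto a block device (or file) path."""
    with open(dev_path, "wb") as dev:
        shutil.copyfileobj(stream, dev, CHUNK)
        dev.flush()

# demo: in real use `stream` would be the HTTP response body for the image,
# and dev_path something like /dev/sda; a temp file stands in for both here.
image = io.BytesIO(b"raw-image-bytes" * 1000)
write_image(image, "/tmp/fake-block-device")
```

Because the bytes go straight from the HTTP source to the local disk, the conductor never sits in the data path the way it does with the iSCSI-based pxe driver.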
kfox1111so the agent driver version should scale better?19:17
jrollyes, that's one of the reasons we built it :)19:18
kfox1111ok. cool.19:18
*** lazy_prince has quit IRC19:18
kfox1111we usually have a few hundred machines per associate director machine, so they could potentially overwhelm it. I'd guess we'd do one ironic conductor per associate director.19:19
*** killer_prince has joined #openstack-ironic19:19
*** killer_prince is now known as lazy_prince19:19
*** andreykurilin_ has quit IRC19:19
jrollsounds about right19:20
JayFI think we have something around 200 nodes per conductor in our env?19:20
jrollwe (rackspace) are currently running just over 200 nodes per conductor19:20
jrollusing the agent driver19:20
*** andreykurilin_ has joined #openstack-ironic19:20
kfox1111ok. cool.19:21
*** lazy_prince has quit IRC19:24
kfox1111hmm... the metadata agent/proxy is up. just doesn't seem to be working... lets see...19:24
*** Marga__ has joined #openstack-ironic19:25
*** Marga_ has quit IRC19:25
*** JackieHurst is now known as GiancarloRizzi19:27
*** killer_prince has joined #openstack-ironic19:29
*** GiancarloRizzi has quit IRC19:30
*** JackieHurst has joined #openstack-ironic19:30
*** killer_prince is now known as lazy_prince19:30
*** romcheg has joined #openstack-ironic19:32
openstackgerritChris Krelle proposed openstack/python-ironicclient: Update README  https://review.openstack.org/13454119:33
*** romcheg has quit IRC19:33
*** romcheg has joined #openstack-ironic19:33
*** blinky_ghost has quit IRC19:34
*** Marga__ has quit IRC19:34
*** Marga_ has joined #openstack-ironic19:35
*** Marga_ has quit IRC19:36
*** yjiang5_away is now known as yjiang519:36
*** Marga_ has joined #openstack-ironic19:36
*** Marga_ has quit IRC19:37
*** Marga_ has joined #openstack-ironic19:38
*** Marga_ has quit IRC19:39
*** Marga_ has joined #openstack-ironic19:40
*** igordcard has quit IRC19:40
*** JackieHurst has quit IRC19:41
*** lazy_prince has quit IRC19:41
*** bahazc has joined #openstack-ironic19:41
*** killer_prince has joined #openstack-ironic19:43
*** Marga_ has quit IRC19:43
*** killer_prince is now known as lazy_prince19:43
*** Marga_ has joined #openstack-ironic19:44
*** bahazc has quit IRC19:45
*** penick has joined #openstack-ironic19:46
*** Marga_ has quit IRC19:48
*** Marga_ has joined #openstack-ironic19:49
*** romcheg has quit IRC19:50
*** romcheg has joined #openstack-ironic19:51
*** spandhe has quit IRC19:52
*** bahazc has joined #openstack-ironic19:53
*** spandhe has joined #openstack-ironic19:54
*** Marga_ has quit IRC19:54
*** Marga_ has joined #openstack-ironic19:55
*** bahazc has quit IRC19:55
*** Marga_ has quit IRC20:00
*** Marga_ has joined #openstack-ironic20:01
*** Marga_ has quit IRC20:01
*** Marga_ has joined #openstack-ironic20:02
*** lazy_prince has quit IRC20:03
*** killer_prince has joined #openstack-ironic20:04
*** killer_prince is now known as lazy_prince20:05
*** Marga_ has quit IRC20:06
*** datajerk has joined #openstack-ironic20:07
kfox1111yay. passed my Mirantis certification. :)20:09
dlaubenice!20:11
dlaubecongrats kfox111120:11
dlaubehow many questions? or is it lab based or something?20:12
kfox1111lab based.20:17
*** r-daneel has quit IRC20:17
dlaubeany ironic material in there yet?20:17
kfox1111hehe. no, not yet.20:18
dlaubedarn20:18
kfox1111now, back to figuring out why the metadata server isn't working quite right...20:18
*** lucas-beer has quit IRC20:19
kfox1111OH.... I think I know why.... its being routed via the neutron router...20:20
*** dprince has quit IRC20:22
NobodyCambrb quick run to grab some smokes, maybe some starbucks20:31
*** andreykurilin_ has quit IRC20:37
*** achanda has quit IRC20:41
kfox1111YAY!20:42
kfox1111Metadata server's working.20:42
kfox1111so nova boot -> ironic, ssh -> instance = joy. :)20:42
*** jjohnson2 has joined #openstack-ironic20:42
jroll\o/20:43
jjohnson2fyi, I just pushed 1.0 RC to pypi for confluent20:43
kfox1111ok, so to get the metadata server working, I setup a router on a different address, on the same network, then20:44
kfox1111neutron subnet-update osg-subnet --host-route 'destination=169.254.169.254/32,nexthop=192.168.122.248'20:44
kfox1111alittle hackish, but it worked. :)20:45
kfox1111now to really kick the tires. :)20:45
kfox1111hmm... so ironic deploy only uses the first drive it finds?20:46
jrollyes, it uses /dev/sda20:47
jrollanother benefit of the agent20:47
jrollstandard agent uses smallest disk larger than 4GB, you can write a few lines of python to customize that20:47
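jroll's description of the default behaviour ("smallest disk larger than 4GB") is easy to sketch; this is a paraphrase of the rule as stated in the chat, not the actual ironic-python-agent hardware-manager code:

```python
MIN_SIZE = 4 * 10**9  # 4 GB floor, per the discussion above

def pick_root_disk(disks):
    """disks: list of (name, size_in_bytes); return the default root device."""
    candidates = [d for d in disks if d[1] >= MIN_SIZE]
    if not candidates:
        raise RuntimeError("no disk >= 4GB found")
    # smallest qualifying disk wins, leaving the big ones for data
    return min(candidates, key=lambda d: d[1])

# e.g. two identical 1 TB disks like kfox1111's box, plus a small satadom
disks = [("/dev/sda", 10**12), ("/dev/sdb", 10**12), ("/dev/sdc", 32 * 10**9)]
print(pick_root_disk(disks)[0])  # /dev/sdc -- smallest disk over the floor
```

Customizing this (e.g. to prefer a md raid0 of the two 1 TB disks) is the kind of "few lines of python" a site-specific hardware manager would carry.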
* kfox1111 thinks he needs to start looking closer at the agent...20:47
kfox1111these nodes have two identical 1T disks.20:48
kfox1111usually just raid0 them.20:48
JayFWe don't support configuring raid via ironic today20:48
JayFhowever it would absolutely be possible to hack into the agent20:49
kfox1111any pointers to playing with the agent?20:49
JayFdriver: s/pxe/agent/20:50
*** rushiagr is now known as rushiagr_away20:50
JayFyou can use a ramdisk of master ipa20:50
JayFhttp://tarballs.openstack.org/ironic-python-agent/coreos/files/20:50
JayFor you can build your own (it's simple, and documented in openstack/ironic-python-agent)20:50
JayFand IPA only handles full disk images20:50
JayFso you'll have to use a full disk image instead of a partition image20:51
*** Marga_ has joined #openstack-ironic20:51
JayFjroll: ^ forgetting anything?20:51
kfox1111cool. I'll give it a shot. :)20:52
JayFkfox1111: you can embed your own hardware managers into the agent as well20:52
JayFkfox1111: for instance; this is what we use: https://github.com/rackerlabs/onmetal-ironic-hardware-manager20:52
*** alexpilotti has joined #openstack-ironic20:53
* NobodyCam is back20:53
kfox1111so, that lets you configure additional hardware or something?20:54
JayFkfox1111: so today; in upstream Ironic, there's not much interesting hardware stuff that the agent does20:54
JayFkfox1111: at Rackspace, we're running an early version of "zapping" which has specs up today20:54
JayFwhich does things like erase disks, setup bios settings, flash firmwares, etc etc (anything you'd want to happen between workloads)20:55
kfox1111ah. ok.20:55
JayFThat's obviously very hardware specific; so it's important to be able to make your own code to do it20:55
kfox1111yeah. makes sense.20:55
*** athomas has quit IRC20:56
*** Marga_ has quit IRC20:56
kfox1111I can't remember if I asked or not. can you use the agent driver to let you pxeboot once, then boot off of the image until the instance is deleted?20:57
jrollJayF: dunno, don't think so20:59
jrollkfox1111: the agent driver today only supports whole disk images, and booting the instance from disk20:59
jrollit never pxe boots the instance today20:59
kfox1111cool. thats awesome. :)20:59
kfox1111separating the kernel/initrd out is kind of a pain,21:00
kfox1111and I've wondered about the TripleO-type undercloud stuff. if you must pxeboot your controllers, that means you must keep your seed around. If they can local boot after being deployed, the seed can go away.21:00
jrollyeah, indeed21:01
kfox1111it lets you yum upgrade the instance nicely too.21:01
jrollright21:01
jrolltripleo doesn't believe in that21:01
kfox1111yeah. Ideally you wouldn't have to. but sometimes its nice to have the option.21:02
jrolldefinitely21:03
kfox1111is there plans on making the agent the default at some point?21:03
JayFkfox1111: a lot of the tripleo people like the behavior of none of the instances coming up until the control plane is up21:04
jrollthat's a bit of a goal, yes21:04
JayFkfox1111: idgi but it's considered by some people to be a feature not a bug21:04
kfox1111having to keep one seed vm up and running so your "fault tolerant" cluster can boot seems counter intuitive. ;)21:05
*** Marga_ has joined #openstack-ironic21:07
*** romcheg has quit IRC21:08
kfox1111is there any cinder support in ironic yet?21:10
*** Marga_ has quit IRC21:10
*** Marga_ has joined #openstack-ironic21:10
jrollnot yet, folks are working on it21:11
kfox1111k.21:12
jroll(theoretically, I haven't seen code)21:12
kfox1111I'm guessing security groups are off the table too? The docs had you configure them, but I'm guessing they don't work at all?21:12
jrollprobably not?21:13
jrollI haven't tried, personally21:13
jrollor heard of folks trying21:13
jrollbut neutron doesn't have support for hardware switches yet21:13
kfox1111do you need to match the ironic-python-agent image build to the openstack release, or will trunk always work with the latest stable release?21:14
jrolltrunk will always work21:14
kfox1111yeah. that was what I was thinking. just wondered if something happened that I had missed.21:14
kfox1111k. thx.21:14
* NobodyCam notes date and time : 21:14 | jroll > trunk will always work21:14
kfox1111hehe.21:15
jrollnope, we do our best to keep trunk working with last stable21:15
jrollshhhhh21:15
NobodyCam:-p21:15
kfox1111Oh. btw, thanks for posting official prebuilt images. something that TripleO REALLY needs to do.21:16
jroll:)21:17
*** imtiaz has quit IRC21:18
*** imtiaz has joined #openstack-ironic21:18
NobodyCam"the J crew" are an awesome group of guys21:19
jrolld'aww21:19
JayFkfox1111: it's built by the CI, so those will change on each commit to IPA21:20
kfox1111even better. :)21:20
JayFkfox1111: but I strongly, strongly recommend you end up building your own agent. Ramdisk agent is super useful for fleet management, just gotta drop an ssh key inside :)21:20
* JayF stuffs all kinds of tools into his deploy ramdisk21:20
kfox1111yeah. I don't mind building everything. We have a jenkins setup for that. We end up customizing all of our images.21:21
kfox1111It's more that I want to kick the tires to see if something is worth spending time on.21:21
kfox1111You have to implement all of TripleO before you can ensure it's a good fit. :/21:21
jrollha21:21
kfox1111I'd rather try it out at scale with prebuilt images, make sure its what we want, then customize the bits we care about.21:22
jroll+121:22
*** david-lyle has joined #openstack-ironic21:22
*** Marga_ has quit IRC21:22
*** Marga_ has joined #openstack-ironic21:23
* devananda waves21:23
jrollohai21:23
devanandahi!21:23
devanandain the office today, which means talking with people and not getting any work done21:23
jrollwoohoo21:23
kfox1111yeah, that's the way of it, isn't it...21:24
jrollsounds like a blast :P21:24
*** Marga_ has quit IRC21:26
*** Marga_ has joined #openstack-ironic21:27
NobodyCamwb devananda21:29
NobodyCamdevananda: talking to folks IS getting work done!21:30
jrollI like to pretend it is21:30
devanandajroll: I just call it like it is21:30
*** Marga_ has quit IRC21:30
*** Marga_ has joined #openstack-ironic21:31
NobodyCamhumm -c has stopped working on my devtest... gotta be something with my setup21:33
*** Marga_ has quit IRC21:34
*** achanda has joined #openstack-ironic21:35
*** Marga_ has joined #openstack-ironic21:35
*** Marga_ has quit IRC21:37
*** Marga_ has joined #openstack-ironic21:38
*** romcheg has joined #openstack-ironic21:38
*** Marga_ has quit IRC21:38
*** Marga_ has joined #openstack-ironic21:39
*** Marga__ has joined #openstack-ironic21:43
*** Marga_ has quit IRC21:43
*** Marga__ has quit IRC21:44
dlaubehey guys, just deployed an ubuntu image with ironic and I'm sitting at a login prompt21:44
*** Marga_ has joined #openstack-ironic21:44
NobodyCamnice!!!21:44
jroll\o/21:44
dlaube:D21:44
*** Marga_ has quit IRC21:44
dlaubeI wish I could say it was all of my doing but I certainly had some help!!!21:44
-openstackstatus- NOTICE: gating is going offline while we deal with a broken block device, eta unknown21:44
*** ChanServ changes topic to "gating is going offline while we deal with a broken block device, eta unknown"21:44
dlaubeBUT....21:44
*** Marga_ has joined #openstack-ironic21:45
dlaubecan anyone tell me the username associated with the adminpass that nova boot spits out?21:45
NobodyCamhow did you build your image21:45
dlaubeimage-builder21:45
jrollif it's the standard ubuntu image, should be root or ubuntu21:45
NobodyCamdid you add the stackuser element21:45
dlaubeahhh21:45
dlaubelooks like an image rebuild and glance create is in order21:46
dlaubeheh21:46
NobodyCamalso the root password should be listed on the nova boot output21:46
NobodyCambut ya I use the stackuser element a lot :-p21:47
jrollyeah, always try root first21:47
dlauberoot user was a no-go21:47
NobodyCamwith password from nova boot?21:47
NobodyCamI think it listed as adminPass21:49
dlaubeyep, with the adminPass that nova boot spits out21:49
NobodyCam:(21:49
NobodyCamubuntu / ubuntu?21:49
NobodyCamgoogle seems to say Try 'ubuntu' with an empty password21:50
dlaubenope21:51
dlaubeneither of those two work21:51
jrollI mean21:51
jrolldo you have the metadata service set up properly?21:51
NobodyCamdid cloud-init fire off21:51
jrollif not, the password probably isn't getting set21:51
dlaubenope21:51
jrollyeah that21:51
NobodyCamthere ya go21:52
dlaubesaw some warnings at provision time about metadata failure21:52
dlaubeaha!21:52
NobodyCamstackuser element will allow you log in to the instance21:52
dlaubeI thought it was only needed for cloud-init stuff21:52
NobodyCambut I'd suggest setting up meta data21:52
jrollcloud-init is what sets the root password21:52
NobodyCamand installs ssh keys21:53
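The fallback NobodyCam recommends above can be sketched as a diskimage-builder run that bakes a login user into the image, so a shell is reachable even when cloud-init can't hit the metadata service. The ELEMENTS_PATH location is an assumption for a typical checkout; adjust for your setup.

```shell
# Sketch: build an Ubuntu image with the "stackuser" element so a "stack"
# user exists regardless of metadata availability. Paths are assumptions.
export ELEMENTS_PATH=/opt/stack/tripleo-image-elements/elements
disk-image-create -o ubuntu-with-stackuser ubuntu vm stackuser
```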
JayFjroll: Are you sure cloud-init will set a root password?21:53
jrollI'm not21:53
JayFjroll: Pretty sure it only does $distroname users + sudo and/or keys for any user21:53
jrollbut what else would21:53
NobodyCamJayF: it should yes21:53
JayFbut imbw21:53
NobodyCamJayF: how else would nova set the admin password21:54
JayFNobodyCam: nova-agent21:54
jrollthere's nova-agent21:54
NobodyCamagent?21:54
NobodyCamicky21:54
jrollis that another rackspace thing21:54
* jroll hides21:54
NobodyCamlol21:54
dlaubeheh21:55
NobodyCamhumm why is my neutron only listing the dhcp agent... what happened to my Open vSwitch agent?21:56
NobodyCam:(21:56
dlaubeok, well thanks for the info gents21:56
dlaubegoing to get back to it and see what we can do21:56
*** Marga_ has quit IRC21:56
NobodyCam:)21:56
JayFjroll: I'm somewhat sure that's upstream?21:56
JayFNobodyCam: I think agents are icky too; that's why onmetal uses only cloud-init21:56
NobodyCam:) +++++21:57
*** Marga_ has joined #openstack-ironic21:57
dlaubeseriously, going to head up into SF one of these days and buy you guys some coffees or beers21:57
NobodyCamI believe we (ironic and to that extent nova) should be hands off a user image21:57
jrollJayF: I have no idea, man21:57
jrolldlaube: and a cider for JayF :P21:58
jrollNobodyCam: ++21:58
jrollI don't want my provider running any code in my instance21:58
JayFI won't make a value judgement about doing that generally, becuase the company I work for manages servers and clouds and I'd rather them use a python agent than a human agent21:59
JayFbut if people just want compute, they should just get that, without anyone mucking in their instance/image21:59
dlaube:P21:59
dlaubetrue that21:59
NobodyCamyes there are cases when I ask the provider to do something for me... (i.e. I have asked for backups with past providers) though I installed their software myself.22:01
*** Marga_ has quit IRC22:03
*** Marga_ has joined #openstack-ironic22:05
dlaubehmm…  any way to specify the metadata agent IP at provision time?22:05
*** Marga_ has quit IRC22:05
dlaubeor is that something that comes out of the nova.conf ?22:05
*** Marga_ has joined #openstack-ironic22:06
NobodyCamoh looks like I have to stop installing Boinc updates now22:07
NobodyCamthe ip I believe is always 169.254.169.25422:07
NobodyCam?22:07
dlaubeinteresting22:09
PaulCzarI think I'm making some progress here. I've got the virtualbox power stuff working with the ssh agent ... now I need to figure out how to plumb networking over22:13
kfox1111dlaube: metadata ip always should be 169.254.169.254.22:14
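As kfox1111 says, the metadata address is fixed, not something set at provision time; a quick sketch of checking it from inside a deployed instance (endpoint paths are the standard EC2-style ones):

```shell
# Sketch: the metadata service is always at the link-local address
# 169.254.169.254; cloud-init pulls ssh keys / passwords from here.
curl http://169.254.169.254/latest/meta-data/instance-id
curl http://169.254.169.254/openstack/latest/meta_data.json
```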
dlaubeahh ok thanks NobodyCam and kfox111122:14
PaulCzarthe neutron dhcp agent runs in a network namespace ..so I assume I have to do something to allow the external node to dhcp off it ?22:15
*** ryanpetrello has quit IRC22:17
kfox1111you need to ensure the network you create is bound to the flat provider network exposed to the external host.22:19
PaulCzaryeah22:19
kfox1111is there a document describing how to set a node to use the agent?22:21
*** imtiaz has quit IRC22:23
JayFkfox1111: IDK if you can change the driver of a node post-adding it to Ironic22:23
JayFkfox1111: might wanna delete that node and recreate it, just specifying the agent driver22:24
*** Marga_ has quit IRC22:24
kfox1111JayF: Thats ok.22:25
kfox1111I'm not seeing it in the driver-list. I'm assuming I need to add it to enabled_drivers22:25
JayFaha, yes22:26
kfox1111know offhand what the line needs to be?22:26
jrollagent_*, where * is ssh/ipmitool/pyghmi22:29
*** Marga_ has joined #openstack-ironic22:29
kfox1111k.22:30
*** jjohnson2 has quit IRC22:30
kfox1111There we go.22:32
kfox1111crudini --set /etc/ironic/ironic.conf DEFAULT enabled_drivers pxe_ipmitool,agent_ipmitool,agent_ssh22:32
jrollnice22:32
jrollyou don't need agent_ssh unless you want the ssh power driver, btw22:32
kfox1111yeah. I figured it was easy to enable, so.. why not...22:33
kfox1111hmm.. and then I need to update agent_pxe_append_params too...22:34
jrollyeah22:36
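kfox1111's crudini approach extends naturally to the agent settings discussed here; a hedged sketch (the append-params value is illustrative, not canonical, and the conductor restart is assumed to use sysvinit-style service management):

```shell
# Sketch: enable the agent drivers, set the agent ramdisk kernel command
# line, and reload the conductor so the new drivers register.
crudini --set /etc/ironic/ironic.conf DEFAULT enabled_drivers \
    pxe_ipmitool,agent_ipmitool,agent_ssh
crudini --set /etc/ironic/ironic.conf agent agent_pxe_append_params \
    "nofb nomodeset vga=normal console=ttyS0"
service ironic-conductor restart
```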
kfox1111so, since kvm images are usually partitioned, I should be able to use a stock ubuntu cloud image with the agent?22:36
jrollshould22:37
jrollbut I've noticed this before: https://github.com/openstack/diskimage-builder/blob/master/elements/ubuntu/install.d/10-support-physical-hardware#L622:37
kfox1111ah.22:38
kfox1111hmm... is there any way to tag nodes with a name so they can  be tracked easier?22:41
*** imtiaz has joined #openstack-ironic22:46
jrollkfox1111: soon https://review.openstack.org/#/c/134439/22:49
kfox1111cool. thx.22:50
kfox1111hehe. better reason would be, I don't want to print out, then affix a uuid to a physical node for hardware replacement. ;)22:50
kfox1111and have to update the physical label when I delete and re-add the node. ;)22:51
jrollso, node.extra is a dict22:52
jrolland super nice for stuffing random data22:52
jroll(like server name)22:52
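The node.extra suggestion can be sketched with the ironic client; UUID and key names are illustrative, and (per jroll below) whether the client exposes a detailed node list depends on your version:

```shell
# Sketch: stuff a human-friendly name and location into node.extra, which
# ironic itself ignores but operators can query later.
ironic node-update 9cb00e2b-c117-427a-9fc8-ab62487579e8 add \
    extra/name=compute-03 extra/rack=b12 extra/chassis_slot=4
# extra appears in the detailed view:
ironic node-show 9cb00e2b-c117-427a-9fc8-ab62487579e8
```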
kfox1111cool. I'll give that a shot.22:53
dlaubecouldn't you also just add that extra data to the node properties or extra ?22:53
*** anderbubble has quit IRC22:53
*** achanda has quit IRC22:53
NobodyCamdlaube: I'd use extra for that data22:53
*** romcheg has quit IRC22:54
*** achanda has joined #openstack-ironic22:54
JayFWe totally add lots of IDs in node.extra22:54
kfox1111would be nice if it showed up in node-list though.22:55
kfox1111yeah. I could see extra being useful for stuff like, rack, U, slot in chassis, etc.22:56
jrollnode-list --detail might be a thing22:56
jrollor /nodes/detail definitely is22:56
kfox1111its totally ignored by things?22:56
jrollextra? yeah22:57
kfox1111k.22:57
kfox1111strange... disk-image-create -u ubuntu -o my-image doesn't partition things?22:58
*** romcheg has joined #openstack-ironic22:58
*** imtiaz_ has joined #openstack-ironic22:58
NobodyCampartition things?22:58
jrollwhat do you mean "partition things"22:58
*** Marga_ has quit IRC22:58
kfox1111will the agent work with that image, or do I need to get it partitioned nicely?22:58
kfox1111root is in /dev/sda.22:59
jrollthe agent takes a qcow and dd's it22:59
jrollthat's it22:59
kfox1111not in /dev/sda122:59
jrolloh, so no bootloader in the image?22:59
NobodyCamkfox1111: you want to add vm element for agent22:59
kfox1111ah. ok.22:59
kfox1111disk-image-create -u ubuntu -o my-bare-image vm22:59
NobodyCamsure23:00
kfox1111k. thx.23:00
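Putting the exchange above together: the vm element produces a whole-disk image (partition table plus bootloader), which is what the agent driver dd's to disk. A sketch of build plus glance registration, with illustrative names:

```shell
# Sketch: build a whole-disk Ubuntu image for the agent driver and register
# it with glance. Image name and public flag are illustrative.
disk-image-create -u ubuntu -o my-bare-image vm
glance image-create --name my-bare-image --disk-format qcow2 \
    --container-format bare --is-public True < my-bare-image.qcow2
```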
kfox1111so if you use disk image builder on ubuntu images, it always installs the physical hardware bits into the image too?23:01
devananda"physical hardware bits" ?23:01
NobodyCamsame question here?23:01
NobodyCamhi again devananda :)23:01
kfox1111< jroll> but I've noticed this before: https://github.com/openstack/diskimage-builder/blob/master/elements/ubuntu/install.d/10-support-physical-hardware#L623:02
*** imtiaz has quit IRC23:02
NobodyCamkfox1111: yep thats part of the ubuntu element so it will get included23:02
kfox1111ok.23:03
Haomengmorning23:03
*** mjturek has quit IRC23:03
NobodyCammorning Haomeng23:03
*** jgrimm is now known as zz_jgrimm23:05
kfox1111? http://tarballs.openstack.org/ironic-python-agent/coreos/files/ no longer works. :/23:05
JayFnote /topic23:05
HaomengNobodyCam: :)23:05
kfox1111ah.23:05
JayFkfox1111: I'd presume tarballs.o.o is impacted by that23:05
NobodyCamd'oh23:06
NobodyCamlol23:06
kfox1111yeah...23:06
JayFconfirmed by infra, it's impacted23:06
*** jeblair is now known as corvus23:06
JayFkfox1111: really building the image is a cakewalk if you have a trusty vm23:07
JayFnot trusty as in trustworthy23:07
JayFtrusty as in ubuntu 14.04 :)23:07
*** corvus is now known as jeblair23:07
kfox1111arg. k.23:07
JayFkfox1111: https://github.com/openstack/ironic-python-agent/tree/master/imagebuild/coreos23:08
JayFkfox1111: clone the thing, run full_trusty_build.sh23:08
JayFkfox1111: if you wanna know what's actually happening; use the README23:08
JayFbut that .sh file is exactly what the gate runs against a bare-trusty node to generate the images you were using23:08
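The steps JayF describes can be sketched end to end; this assumes an Ubuntu 14.04 host with docker already running, and the output filenames are from that era of IPA and may vary:

```shell
# Sketch: clone IPA and run the same CoreOS ramdisk build the gate uses.
git clone https://github.com/openstack/ironic-python-agent.git
cd ironic-python-agent/imagebuild/coreos
./full_trusty_build.sh
# the deploy kernel/ramdisk pair lands under UPLOAD/ when the build succeeds
```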
adam_gJayF, what are the current memory requirements for DIB built agent ramdisks?23:10
JayFadam_g: I think jroll got it working with 3GB ?23:11
NobodyCamadam_g: at least 307223:11
adam_gah23:11
NobodyCambut I understand testing was done on 8gb systems23:11
kfox1111I've got an sl7 box I'm building all this on. I should be able to tweak the script to build on sl7?23:11
adam_gi just got multinode ironic devstack working with a second node just hosting libvirt + conductor23:11
kfox1111its just doing docker stuff?23:11
adam_gwe may be able to get concurrency working with 3GB VMs across both nodes23:11
JayFadam_g: lets use the CoreOS image for that now then? until the DIB image goes on a diet and gets better vetted?23:14
adam_gJayF, yeah--this should give us plenty of resources if we end up using it upstream23:14
JayFthat's killer man, great job23:15
JayFseriously23:15
JayFthat'll also be nice if we can get it working locally as well23:15
kfox1111copying the full_trusty_build.sh and changing it to yum install docker instead seems to be working.23:16
adam_gi just finished hacking it together based on some multi node stuff others are working on.. it should be easy with some devstack updates23:16
jrolladam_g: yeah, coreos ramdisk only needs 1gb, let's just do that23:18
adam_gjroll, word23:19
jrollkfox1111: just curious, what's sl7?23:19
kfox1111oh. scientific linux 7. almost identical to centos7.23:20
jrollah, neat23:21
*** andreykurilin_ has joined #openstack-ironic23:21
JayFkfox1111: yeah; the build itself is driven totally by docker, if you have recentish docker you're gtg23:21
*** alexpilotti has quit IRC23:22
JayFkfox1111: The container IPA runs in is still ubuntu; although I wouldn't be opposed to s/coreos/coreos-ubuntu/ and adding a coreos-fedora23:22
JayFbut I think I want to make the builder more split-out before doing that23:22
kfox1111not recent enough for my tastes, but usually does work. :)23:22
kfox1111doesn't matter to me. We're just much more familiar with rhel.23:23
JayFI don't care much about distributions ... I care about init systems, and I <3 me some systemd23:23
JayFso I've been running a lot of fedora lately :)23:23
kfox1111yeah. I really don't get the flamewar over systemd.23:23
kfox1111I think its a big misunderstanding.23:24
kfox1111sure systemd doesn't follow unix philosophy.23:24
kfox1111but neither does the linux kernel.23:24
jrollmodular monolith23:25
jrollis like my new favorite phrase23:25
kfox1111If you were to s/systemd|lennart/linux|linus/ they suddenly would be on the wrong side of the argument.23:25
JayFI think people see systemd-* and think it's all in their init system23:25
JayFwhen really the project just believes in namespacing23:26
kfox1111seriously, you really want unix philosophy, its plan9 or the hurd for you. :)23:26
kfox1111yeah. exactly.23:26
JayFI want to never write an initscript ever again :)23:26
kfox1111otherwise, you want the linux philosophy. :)23:26
JayFkfox1111: if you're a systemd guy; you should appreciate the IPA image being in coreos; that's driven entirely by systemd (see imagebuild/coreos/oem/cloud-config.yml)23:26
kfox1111JayF: yeah, I hear you. proper init scripts are a PAIN.23:26
* devananda posts more spec reviews23:27
JayFand the image we use downstream for Rackspace OnMetal has probably about a 3x as large cloud-config.yml23:27
* devananda goes AFK23:27
kfox1111yeah. coreos is on my list of things to study closer once I have some time.23:27
NobodyCamhave a good weekend devananda23:27
JayFwell, you are about to be using it. Congratulations :P23:27
kfox1111wow... its really working on building this image...23:28
*** imtiaz_ has quit IRC23:29
*** imtiaz has joined #openstack-ironic23:29
*** imtiaz has quit IRC23:33
kfox1111there we go.23:35
jrollyeah, it caches docker layers so it will be faster after the first build23:36
kfox1111so building on el7 seems to work.23:36
JayFI'd expect so; if the build broke across different OS versions that'd either be my bug or docker's bug23:37
JayFit should just need docker, around 1.0 or newer23:38
JayFand the various basic linux utilities that are on "every" distribution23:38
JayFkfox1111: they fixed http://tarballs.openstack.org/ironic-python-agent/coreos/files/23:38
kfox1111should be able to modify the shell script to check for ubuntu/redhat and yum install -y docker; systemctl start docker and then the rest of the script is the same. then rename it to something more generic.23:38
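The ubuntu/redhat check kfox1111 describes can be sketched as a small helper that maps the distro ID from /etc/os-release to the right docker install command; package names here are assumptions for that era:

```shell
# Sketch: emit the docker install command for a given distro ID, so one
# build script covers ubuntu and rhel/centos/sl hosts.
docker_install_cmd() {
    case "$1" in
        ubuntu|debian)         echo "apt-get install -y docker.io" ;;
        rhel|centos|sl|fedora) echo "yum install -y docker && systemctl start docker" ;;
        *) echo "unsupported distro: $1" >&2; return 1 ;;
    esac
}

# /etc/os-release sets ID on any systemd-era distro
. /etc/os-release
docker_install_cmd "$ID"
```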
JayFkfox1111: JIT for you to be done building your own :)23:38
kfox1111heh. nice.23:39
JayFkfox1111: in that case; I'd probably ask it be made a separate build file, because I don't wanna make what runs in the gate more complex23:39
kfox1111stupid murphy.23:39
kfox1111:)23:39
jrollit's interesting to me that tarballs.o.o isn't ssl23:39
kfox1111or that. just fork the file and change the two lines.23:39
*** imtiaz has joined #openstack-ironic23:40
kfox1111jroll: http should be just fine if there are reliable SHAs somewhere?23:40
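kfox1111's point sketched out: plain http downloads are fine if checksums are verified out of band. sha256sum -c reads "&lt;hash&gt;  &lt;filename&gt;" lines and verifies each file; the filename below is an illustrative stand-in for a downloaded agent image.

```shell
# Sketch: verify a downloaded image against a checksum list obtained over a
# trusted channel (fake file contents here just demonstrate the mechanics).
printf 'fake ramdisk contents\n' > coreos_production_pxe_image-oem.cpio.gz
sha256sum coreos_production_pxe_image-oem.cpio.gz > SHA256SUMS
sha256sum -c SHA256SUMS
```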
jrollfull-fedora/centos/sl7-build23:40
jrollkfox1111: I'm not sure there is23:40
kfox1111full-el7?23:41
kfox1111el is usually the shortened term for rhel/centos/sl.23:41
*** Marga_ has joined #openstack-ironic23:45
jrollgot it23:45
kfox1111ok.. so I got an image up... I have the driver enabled... lets see...23:46
kfox1111ok. gota redo the node-create...23:46
jrollshould be able to node-update I would think?23:46
*** imtiaz_ has joined #openstack-ironic23:46
kfox1111are all the params the same?23:47
kfox1111hmm.... pxe_deploy_ramdisk/kernel or agent_deploy_ramdisk?23:48
*** achanda has quit IRC23:48
jrolljust deploy_ramdisk, deploy_kernel23:48
jrollafaik23:48
jrollyep, double checked23:48
kfox1111k. thx.23:49
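The node-update path jroll suggests can be sketched as follows; whether switching the driver in place works is hedged above (JayF wasn't sure), and the UUID variables are placeholders:

```shell
# Sketch: point an existing node at the agent driver and its deploy images.
# With agent_* drivers the keys are deploy_kernel/deploy_ramdisk, not
# pxe_deploy_*.
ironic node-update $NODE_UUID replace driver=agent_ipmitool
ironic node-update $NODE_UUID add \
    driver_info/deploy_kernel=$KERNEL_IMAGE_UUID \
    driver_info/deploy_ramdisk=$RAMDISK_IMAGE_UUID
```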
*** imtiaz has quit IRC23:51
kfox1111k... moment of truth..23:53
*** achanda has joined #openstack-ironic23:53
NobodyCamtotally unrelated but /me wants this: http://www.fasthemis.com/procharger-supercharger-kit-dodge-challenger-6.1l-srt8-2008-2009-2010.aspx23:54
jrollom nom23:54
kfox1111hmm? ironic conductor's mentioning swift. why?23:55
NobodyCamit's an agent thing23:56
NobodyCamthe agent uses swift temp urls23:56
kfox1111hmm... I never setup swift.23:56
kfox1111we haven't had much need of it.23:56
kfox1111arg.23:56
*** imtiaz_ has quit IRC23:57
jrolloh23:57
jrollohhhh.23:57
jrollJayF: there's the difference you missed23:57
jrollkfox1111: basically, it requires swift to be your glance backend23:57
kfox1111yeah. :/23:58
kfox1111would a ceph gateway work?23:58
jrollnot out of the box, idk much about ceph23:58
kfox1111hmm.. probably not, since glance stores directly in ceph, not through the ceph gateway..23:58
jrollbasically, using the swift backend allows you to make a signed url for an image23:59
kfox1111this is a major setback.23:59
JayF:-C23:59
jrollwhich doesn't require an auth token23:59
jrollyeah, sorry :(23:59
JayFsorry kfox1111 :((23:59
JayFthe agent itself supports downloading from any http endpoint23:59
JayFbut ironic only knows how to tell it about swift temp urls23:59
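A hedged sketch of the constraint jroll and JayF describe: glance must back onto swift, and ironic needs temp-URL settings so the agent can fetch images without an auth token. All values below (key, host, account, container) are illustrative:

```shell
# Sketch: set a swift account temp-URL key, make swift the glance backend,
# and tell ironic how to build signed image URLs for the agent.
swift post -m "temp-url-key:SECRET"
crudini --set /etc/glance/glance-api.conf DEFAULT default_store swift
crudini --set /etc/ironic/ironic.conf glance swift_temp_url_key SECRET
crudini --set /etc/ironic/ironic.conf glance swift_endpoint_url http://SWIFT_HOST:8080
crudini --set /etc/ironic/ironic.conf glance swift_account AUTH_tenantid
crudini --set /etc/ironic/ironic.conf glance swift_container glance
```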

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!