Tuesday, 2014-04-22

kevinbentonjroll: ok, we should plan to meet up then. Maybe I’ll send an email to the Ironic dev list to see who else is interested so we can try to schedule something00:01
devanandakevinbenton: do you have an idea what that integration would look like? what are you hoping to achieve (and what's reasonable today)?00:01
devanandakevinbenton: i know there are some folks in HP as well who are interested (not sure if they're attending)00:01
jrollkevinbenton: yeah, agreed. we've been looking at what we want to do with neutron but it's not ready yet00:02
kevinbentondevananda: so the workflow is that when an admin adds a physical server to the network, he/she would create a neutron physical port with two identifiers (the MAC and the attachment ID)00:02
kevinbentondevananda: the attachment ID is something the network backend understands eg. “Switch1:Port8”00:03
kevinbentonthen the physical port is assigned to whichever tenant has control of the server00:03
morgabrakevinbenton: devananda: fwiw, I've been working on a plugin for this00:03
morgabraI had planned on it being kind of a stopgap for us, but I should have been better about making it open00:04
kevinbentonthat tenant can then assign it to various neutron networks00:04
morgabrakevinbenton: https://gist.github.com/morgabra/ccef48de278583ef3ad700:04
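
A minimal sketch of the port-create call kevinbenton describes, assuming a hypothetical "physical port" extension; the binding attribute below is an assumed name, not an existing Neutron API, and all values are placeholders:

```python
# Hypothetical sketch: Neutron had no "physical port" extension at this
# time, so 'binding:profile' carrying an attachment ID is an assumption.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://127.0.0.1:5000/v2.0')

# The admin registers the physical server: the MAC identifies the NIC,
# and the attachment ID is something the backend understands,
# e.g. "Switch1:Port8".
port = neutron.create_port({'port': {
    'network_id': 'TENANT_NET_UUID',   # network the tenant assigns it to
    'tenant_id': 'TENANT_UUID',        # tenant that controls the server
    'mac_address': '52:54:00:12:34:56',
    'binding:profile': {'attachment_id': 'Switch1:Port8'},  # assumed key
}})
```
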
*** dwalleck has quit IRC00:04
devanandaso, folks, one thing to point out...00:04
jrollmorgabra: bad paste? -H 'Content-Type: applicatiobec-81f8-4e26-94d8-274391706a41" :P00:04
devanandaironic does not associate hw nodes to tenants00:05
morgabrajroll: fixed00:05
jrollcool00:05
kevinbentondevananda: ah, so there isn’t a concept of a tenant owning baremetal servers?00:05
devanandakevinbenton: nope00:05
jrollit's all in nova, right?00:05
devanandakevinbenton: ironic's API is *not* meant to be exposed to end users. full stop.00:05
devanandait is an admin only API right now00:05
devanandauser talks to Nova to request an instance00:06
devanandanova uses the ironic driver to talk to ironic-api, then ironic provisions the hw, and lets nova know when it's done and ready for the user00:06
devanandaneutron sets up the networking00:07
jrollso, what if the neutron calls happened in the virt driver where we know about things like tenants? (disclaimer: I know almost nothing about neutron)00:07
devanandaironic knows the MACs, passes them out to Nova, which passes them off to neutron during virt.driver.spawn()00:07
devanandajroll: they do00:07
jrollaha.00:07
jrollok00:07
devanandaso00:07
devanandaa network and an IP range are associated to a tenant00:07
devanandaneutron assigns an IP from that tenant to the nova Instance00:07
devanandathe normal workflow is that neutron would also generate a MAC, but in this case, it requests the MAC from nova (which gets it from ironic)00:08
devanandawhat we need, security wise, is for the hw node and its ports to be isolated00:08
devanandafrom other tenants00:08
devanandaand from the control network00:08
devanandawhen an instance is provisioned on it00:08
devananda-- and have it moved back when that instance is deleted, etc00:08
devanandakevinbenton: does that fit with your thinking?00:08
kevinbentondevananda: yeah, that latter part would be handled00:08
kevinbentondevananda: the workflow is slightly different though00:09
kevinbentondevananda: let me think about it for a second00:09
devanandaone add'l component -- ironic passes the dhcp extra options to neutron00:09
devanandato direct the dhcp boot request to an appropriate tftp server00:09
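
Those extra DHCP options go through Neutron's extra_dhcp_opt extension; roughly like the sketch below, reusing the client from the earlier sketch (values are placeholders, and the exact option set Ironic sends may differ):

```python
# Point the node's PXE boot at the conductor's TFTP server via
# Neutron's extra_dhcp_opt extension.
dhcp_opts = [
    {'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0'},
    {'opt_name': 'server-ip-address', 'opt_value': '192.0.2.10'},
    {'opt_name': 'tftp-server', 'opt_value': '192.0.2.10'},
]
neutron.update_port(port['port']['id'],
                    {'port': {'extra_dhcp_opts': dhcp_opts}})
```
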
jrolldevananda: right, so, the isolation is exactly what morgabra is working on right now :)00:10
devanandagreat00:11
devanandamorgabra: what are the requirements for that isolation?00:11
devanandaie, does it work with only switches from XX vendor?00:11
morgabrawell, it's pretty severely hardware dependent00:11
*** eguz has quit IRC00:11
kevinbentondevananda: how does this workflow work at all right now? how can an instance be correctly placed on the VLAN corresponding to the neutron network?00:11
devanandakevinbenton: flat network00:12
jrollmorgabra: but that's pluggable, yeah?00:12
devanandakevinbenton: is the only network model that works today00:12
devanandakevinbenton: we *want* it to work with VLANs00:12
devanandakevinbenton: and afaik, cloud-init can pass the necessary vlan metadata into the instance -- but we lack the integration with hardware to move the node between VLANs00:13
morgabrajroll: yes, but it's yet another driver abstraction on top of a plugin. I'm fairly convinced my implementation is wrong for the rest of neutron proper but I wanted something to work immediately for us00:13
jrollheh, yeah00:13
devanandamorgabra: an openflow / ovs implementation will be required for testing, fwiw00:13
kevinbentondevananda: i don’t think you would want cloud-init to pass vlan info into the instance, would you?00:14
devanandakevinbenton: why not?00:14
kevinbentondevananda: this should be all transparently handled by the network00:14
devanandakevinbenton: the instance has to come up on the tenant's network00:14
devanandathere's no agent00:14
devanandaoh00:14
kevinbentondevananda: because if you don’t trust the tenant, they could change their vlan tagging00:14
devanandaright right00:15
morgabrathe switch won't respect that00:15
morgabraor shouldn't00:15
devanandayou'd have the switch force all traffic on that port to that vlan00:15
devanandayes00:15
kevinbentondevananda: yeah, since the switch requires configuration, it should just be untagged traffic and ignore everything else00:15
devanandakevinbenton: yes00:16
devanandai hate to run right now, but i need to ...00:17
devanandaplease continue w/o me, and def follow up on the ML :)00:17
kevinbentonkevinbenton: so we might just need to adjust our extension a bit. is it safe to assume then that ironic will pass in that switch port identifier information in the port-create request?00:17
kevinbentondevananda: ^^00:17
kevinbentonmorgabra: ^^. that’s the approach you are using, right?00:18
morgabrakevinbenton: yes, but I'm pretty sure this is the first that devananda and co have heard of it, I've been off on my own for a bit00:19
*** matsuhashi has joined #openstack-ironic00:19
kevinbentonmorgabra: ok, i will think about our extension and see how we can handle ironic and non-ironic attachments to the network and then I will send an email to the list to see if we can start converging on an approach00:20
*** zdiN0bot has quit IRC00:27
*** dwalleck has joined #openstack-ironic00:27
*** radsy has joined #openstack-ironic00:31
*** dwalleck_ has joined #openstack-ironic00:36
*** zdiN0bot has joined #openstack-ironic00:38
*** dwalleck has quit IRC00:40
*** zdiN0bot1 has joined #openstack-ironic00:42
*** zdiN0bot has quit IRC00:42
*** zdiN0bot1 has quit IRC00:53
*** datajerk has joined #openstack-ironic00:54
*** dwalleck_ has quit IRC01:01
*** newell has quit IRC01:01
*** datajerk has quit IRC01:22
*** radsy has quit IRC01:33
*** nosnos has joined #openstack-ironic01:36
*** rwsu has quit IRC01:47
openstackgerritOpenStack Proposal Bot proposed a change to openstack/ironic-python-agent: Updated from global requirements  https://review.openstack.org/8872201:48
*** hemna_ has joined #openstack-ironic02:11
*** rameshg87 has joined #openstack-ironic02:47
*** coolsvap|afk is now known as coolsvap02:48
*** eghobo has joined #openstack-ironic02:53
*** harlowja is now known as harlowja_away02:58
*** harlowja_away is now known as harlowja03:16
*** matsuhashi has quit IRC03:20
openstackgerritJim Rollenhagen proposed a change to openstack/ironic: Adding a reference driver for the agent  https://review.openstack.org/8479503:25
*** nosnos has quit IRC03:32
*** dkehn_ has joined #openstack-ironic04:21
*** dkehnx has quit IRC04:25
*** matsuhashi has joined #openstack-ironic04:28
openstackgerritJim Rollenhagen proposed a change to openstack/ironic: Adding a reference driver for the agent  https://review.openstack.org/8479504:30
*** nosnos has joined #openstack-ironic04:37
*** matsuhas_ has joined #openstack-ironic05:02
*** matsuhashi has quit IRC05:05
*** dwalleck has joined #openstack-ironic05:06
*** dwalleck has quit IRC05:11
*** vkozhukalov has joined #openstack-ironic05:15
*** harlowja is now known as harlowja_away05:41
*** uberj has quit IRC05:50
*** uberj has joined #openstack-ironic05:52
*** uberj has quit IRC05:53
*** uberj_ has joined #openstack-ironic05:54
*** matsuhas_ has quit IRC06:02
openstackgerritOpenStack Proposal Bot proposed a change to openstack/ironic: Imported Translations from Transifex  https://review.openstack.org/8850806:07
*** Mikhail_D_ltp has joined #openstack-ironic06:08
*** matsuhashi has joined #openstack-ironic06:10
*** ifarkas has joined #openstack-ironic06:10
*** early has quit IRC06:14
*** viktors has joined #openstack-ironic06:16
*** vkozhukalov has quit IRC06:17
*** early has joined #openstack-ironic06:17
*** raies has joined #openstack-ironic06:27
raieshi,06:27
raiescan someone tell me the possible values for node provision state ??06:28
raiesalso please point to it in the ironic code06:28
*** max_lobur has joined #openstack-ironic06:40
rameshg87hello raies:06:41
raiesrameshg87: hi06:41
rameshg87are you looking for this: https://github.com/openstack/ironic/blob/master/ironic/common/states.py#L50-L60 ?06:41
raiesrameshg87: Yes, I checked it but I am not fully sure about it06:43
*** Manishanker has joined #openstack-ironic06:43
rameshg87raies: are you looking for something specific that deals with these node states ?06:43
raiesNo no, I just want to know which provision states are possible.06:44
raiesAnd if I set the provision state to a particular state as above using an API request, then which field of node-show will indicate that it was set successfully06:45
*** Mikhail_D_ltp has quit IRC06:50
raiesactually when I set the provision state to active using the API, then node-show gives "target_provision_state" as "deploy complete" and the field "provision_state" is None06:50
raiesrameshg87: ^^ is that correct behavior? can you explain??06:51
*** yfujioka has joined #openstack-ironic06:51
rameshg87raies: how did you set the provision state ? did you use nodes/$NODE/states/provision from the API ?06:56
raiesyes06:56
*** hemna_ has quit IRC06:56
rameshg87if you make a call to this api, then this is the expected behaviour06:56
rameshg87raies: the api call that you make ends up here: https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L222-L22406:57
*** viktors has quit IRC06:58
rameshg87it calls do_node_deploy() on the conductor which sets the target state to "deploy complete", and the current state to "deploying". it's done here on the conductor: https://github.com/openstack/ironic/blob/master/ironic/conductor/manager.py#L350-L35306:58
raiesrameshg87: got it. but one more issue - "node.target_provision_state = states.DEPLOYDONE" means deploy complete07:00
raiesbut node.provision_state = states.DEPLOYING07:00
raieswhat does that indicate ??07:00
raiesrameshg87: ok got it :)07:01
raiesthanks for this great help07:01
rameshg87raies: it just indicates that the node would ideally next go to node.target_provision_state07:01
rameshg87raies: anyways you already got it :-)07:01
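
The states module linked above boils down to a handful of constants; a paraphrase of the values confirmed in this exchange (the real file defines more, e.g. power states):

```python
# Paraphrased from ironic/common/states.py; not the verbatim source.
NOSTATE = None                    # enrolled, nothing provisioned yet
DEPLOYING = 'deploying'           # current state while a deploy runs
DEPLOYDONE = 'deploy complete'    # target state during a deploy
ACTIVE = 'active'                 # deploy finished, node usable
DELETED = 'deleted'               # target state for a teardown
```
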
raiesrameshg87: One more thing I want to ask, in what order can provisioning be called ??07:02
raiesI mean, if I called active first07:02
raiesand then if I next call deploying, it will give an error07:02
raiesdid you get my question ?07:03
rameshg87raies: you cannot ask nodes/$NODE/states/provision API to set the node to any intermediate states. you can only set it to either "active" or "deleted"07:04
rameshg87raies: any other state, the API will simply reject: https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L205-L21107:05
rameshg87raies: you can do two basic operations by using these two states - either "do a deploy" or "do a teardown"07:05
raiesrameshg87: I didn't get your last suggestion07:06
raies'do a deploy' and 'do a teardown' I didn't get it07:07
rameshg87raies: when we send target: "active" with nodes/$NODE/states/provision API call, we are asking ironic to "do a deploy" on the baremetal node07:07
rameshg87raies: similarly, when we send target: "deleted" with nodes/$NODE/states/provision API call, we are asking ironic to "do a teardown" on the already provisioned baremetal node07:08
raiesok got it thanks for your great help :)07:08
rameshg87raies: :-)07:08
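
The gist of the check and dispatch in the node.py lines linked above, paraphrased as a sketch (not the verbatim code):

```python
# Only the two end states are accepted; each maps to one conductor
# operation. Anything else is rejected by the API.
def set_provision_state(rpcapi, context, node_uuid, target):
    if target == 'active':        # "do a deploy"
        rpcapi.do_node_deploy(context, node_uuid)
    elif target == 'deleted':     # "do a teardown"
        rpcapi.do_node_tear_down(context, node_uuid)
    else:
        raise ValueError("target must be 'active' or 'deleted', got %r"
                         % target)
```
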
raiesrameshg87: One more thing, it is very silly but I want to confirm it anyway07:09
rameshg87raies: sure ..07:09
raiesrameshg87: what does the provision state of a node actually mean ??07:10
rameshg87raies: it's the current state of the node w.r.t. the deploy operation. it is used both internally and externally to know the current status of a registered baremetal node07:11
rameshg87raies: when we register a new node, the node is not provisioned by ironic, so the provision state is None07:11
rameshg87raies: when we want ironic to deploy (or when nova wants ironic to deploy), we ask ironic to do so by changing the provision state07:12
*** viktors has joined #openstack-ironic07:13
rameshg87raies: in the process, ironic conductor sets the current state to deploying, and calls the deploy() method of the driver which is associated with the node07:13
raiesrameshg87: when a node gets registered using node-create, we cannot use it directly. When we deploy a baremetal node using nova, then these APIs get called. Am I right ??07:13
rameshg87raies: yes07:13
raiesafter deployment we can use the node, right07:14
raies??07:14
rameshg87raies: the deploy() method of the driver takes it through various stages of deployment and sets the state of the node accordingly. finally when the deployment is complete, the driver will set the node state as active07:14
rameshg87raies: yes, after deployment, the end user can use the node07:14
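
Putting the exchange together, a minimal client-side example against the REST endpoint discussed above (endpoint URL, token, and node UUID are placeholders):

```python
import json
import time

import requests

IRONIC = 'http://127.0.0.1:6385/v1'
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN', 'Content-Type': 'application/json'}
NODE = 'NODE_UUID'

# "do a deploy": target=active (target=deleted would tear it down)
requests.put('%s/nodes/%s/states/provision' % (IRONIC, NODE),
             data=json.dumps({'target': 'active'}), headers=HEADERS)

# the deploy takes a while; poll until provision_state flips to active
while True:
    states = requests.get('%s/nodes/%s/states' % (IRONIC, NODE),
                          headers=HEADERS).json()
    if states.get('provision_state') == 'active':
        break
    time.sleep(10)
```
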
raiesrameshg87: thanks for your great help, u cleared up all my confusion07:15
rameshg87raies: :-)07:15
raiesare u going to Atlanta ??07:15
*** eghobo has quit IRC07:15
rameshg87raies: nope.. :-( . are you going ?07:25
*** yuriyz|2 has quit IRC07:26
*** yuriyz has joined #openstack-ironic07:26
*** ndipanov has joined #openstack-ironic07:42
*** vkozhukalov has joined #openstack-ironic07:46
Mikhail_D_wkGood morning Ironic  :)07:47
rameshg87good morning Mikhail_D_wk:07:54
*** romcheg has joined #openstack-ironic07:55
vkozhukalovmorning guys07:55
*** ndipanov has quit IRC07:56
*** romcheg1 has quit IRC07:58
*** romcheg1 has joined #openstack-ironic07:58
*** derekh has joined #openstack-ironic08:03
dtantsurMorning Ironic08:04
openstackgerritRamakrishnan G proposed a change to openstack/ironic: Add IloDriver and its utils  https://review.openstack.org/8950008:05
Mikhail_D_wkdtantsur: morning :)08:06
dtantsurMikhail_D_wk, hi :) Have you been to the IRC meeting? Anything interesting besides the design sessions? I completely forgot about it due to the holiday :)08:08
Mikhail_D_wkdtantsur: Nope  :( I forgot too... :)08:09
*** ndipanov has joined #openstack-ironic08:09
dtantsurDid you have a holiday yesterday as well? :) (I am not sure where you're located)08:11
*** jistr has joined #openstack-ironic08:12
*** romcheg has quit IRC08:14
*** martyntaylor has joined #openstack-ironic08:14
raiesrameshg87: yes I am going08:16
rameshg87raies: great :-)08:17
*** mrda is now known as mrda_away08:17
*** lucasagomes has joined #openstack-ironic08:19
*** yfujioka has quit IRC08:25
*** romcheg1 has quit IRC08:46
raiesrameshg87: hi08:51
raiescan we clear the provision state of a node ??08:51
raiesi.e. if it is set to active ----> can we get back to None ??08:52
rameshg87raies: once you have the node state as "active", you need to tear down the node.  send target: "deleted" to nodes/$NODE/states/provision API08:54
raiesok, i.e. is this flow of actions possible: None--->active---->deleted----->None   ??08:56
*** romcheg has joined #openstack-ironic08:58
raiesbut what I see in action is that if the node provision state is set to active, then the "deleted" API call fails09:00
raies^^ as per https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L225-L22709:00
raiesrameshg87: If I have a baremetal node registered, and I want to test whether both provision states (active, deleted) get set successfully or not, then with a single node what would the call flow be ??09:03
rameshg87raies: the following flow should work09:05
rameshg871. register the node09:05
raiesrameshg87: As per me - 1. create baremetal node associated with chassis 2. set provision state active (ends with deploy complete) 3. set deleted (this case is failing) 4.09:05
raiesok please continue :)09:05
rameshg87raies: the #3 in the above flow should work09:05
rameshg87raies: i was about to say the same thing09:06
rameshg87:-)09:06
rameshg87raies: once the state of the node is "active", we should be able to call provision with target: "deleted"09:06
raiesrameshg87: I created a node, set active, which sets target_provision_state to "deploy complete"09:08
rameshg87raies: okay09:08
rameshg87then ?09:08
raiesrameshg87: I checked node-show and didn't find the "active" keyword anywhere09:08
rameshg87the deploy takes some time to complete.  after the deploy is complete, your target_provision_state will be None and provision_state will be active09:08
rameshg87the deploy will take some time to complete ..09:09
raiesohh09:09
raiesso that explains it09:09
raiesok can you please tell me the timeout ??09:09
rameshg87deploy_callback_timeout is set to 1800 seconds by default in /etc/ironic/ironic.conf09:10
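
That knob lives in the conductor section of the config file, something like the following (default shown, per the quote above):

```ini
# /etc/ironic/ironic.conf
[conductor]
# how long a node has, after the power cycle, to call back to the
# conductor before the deploy is considered failed
deploy_callback_timeout = 1800
```
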
Mikhail_D_wkdtantsur: Yeah!!! We had holiday yesterday :) (I'm in Ukraine in Kharkiv)09:11
raiesrameshg87: actually I am writing tests in tempest for baremetal APIs09:12
raiesthere in the configuration we need to set timeouts09:12
rameshg87raies: okay09:12
raiesrameshg87: as per /etc/ironic/ironic.conf a node will take 30 minutes to get active ??09:14
raiesomg it is too high to have it in the tempest conf09:14
rameshg87no, it won't take 30 minutes to get active09:15
rameshg87the pxe deploy operation configures the pxe environment and power cycles the node09:16
rameshg87after the node boots, the node informs ironic conductor, and the conductor starts writing the image to the disk09:16
rameshg87the deploy_callback_timeout is the timeout for the node after the power cycle to contact the ironic conductor09:16
rameshg87if the node doesn't contact within that time, the deploy operation should fail09:17
raiesok09:17
raiesduring deploy, is the node required to be powered on ??09:18
raiesrameshg87: I am using a devstack development environment. I followed the steps as I told above (1. Create node 2. Provision state set active (deploy completed), 3. show node (after an hour) target_provision_state is still "deploy complete" and provision state is still None)09:24
raiesI don't know whether I am going right or wrong09:25
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: Add ManagementInterface  https://review.openstack.org/8606309:25
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: IPMITool to use the new ManagementInterface  https://review.openstack.org/8609209:25
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: SeaMicro to use the new ManagementInterface  https://review.openstack.org/8632809:25
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: IPMINative to use the new ManagementInterface  https://review.openstack.org/8658809:26
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: IPMINative set_boot_device persistent  https://review.openstack.org/8574209:26
*** foexle has joined #openstack-ironic09:28
openstackgerritA change was merged to openstack/ironic: Implement list_instance_uuids() in Nova driver  https://review.openstack.org/8868409:41
openstackgerritA change was merged to openstack/ironic: Modify the get console API  https://review.openstack.org/8776009:41
*** dshulyak has joined #openstack-ironic09:42
*** andreykurilin has quit IRC09:49
raiesrameshg87: for API verification, will this level of verification do or not ??09:55
raiesi.e. when I set the provision state to "active"09:56
raieshow can I validate that the provision state is set as requested09:57
raiesI am using devstack environment09:57
raiesrameshg87: what steps I followed, I explained to you. It has been more than an hour but the provision state is still None09:58
*** dkehn__ has joined #openstack-ironic10:07
*** dkehn_ has quit IRC10:10
dtantsurGuys, maybe you know: is it possible to run several conductor processes on the same machine and thus share the master image directory?10:12
dtantsurI am trying to understand why we had file-based locking for master image directory in PXE module10:13
dtantsurmy intention is to just replace it with lockutils10:13
dtantsurlucasagomes, hi, maybe you know ^^^ ?10:16
lucasagomesdtantsur, hmm not off the top of my head, I know it's possible to run more than one conductor (for testing the hash ring for example)10:30
lucasagomesidk if it's something that should be done in production tho10:30
lucasagomesnow, about the file-based lock: I think it's used to prevent that case, to not allow the dir to be used by diff processes at the same time10:31
dtantsurlucasagomes, but there are a lot of places already where a process-based lock is used, so I am a bit confused10:32
lucasagomesright10:32
lucasagomesyou're running the processes with the same user?10:32
dtantsurlucasagomes, I am not, I am thinking about possible race conditions in caching code :)10:33
dtantsurthere are some already, and it seems like I introduced even more10:33
dtantsurand now trying to fix things at once :)10:34
dtantsurit is re https://review.openstack.org/#/c/8538710:34
lucasagomesheh, right...10:36
lucasagomesone thing is I don't think we will ever want to run >1 conductor on the same node in production10:36
dtantsurlucasagomes, that's a relief :) ok, I will stick to convenient lockutils for now and see what people are going to say10:37
lucasagomesdtantsur, ack...10:37
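
A minimal sketch of what dtantsur proposes, assuming the oslo-incubator lockutils of the time; with external=True the lock is still file-backed, so it also covers a second conductor process on the same host:

```python
from ironic.openstack.common import lockutils


@lockutils.synchronized('master-image-cache', 'ironic-', external=True)
def ensure_master_image(image_uuid, master_path):
    """Populate the shared master image cache under an external lock.

    Body elided: fetch the image from Glance if it is missing, then
    hard-link the master copy into the node's TFTP directory.
    """
```
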
*** coolsvap is now known as coolsvap|afk10:40
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: Document ClusteredComputeManager  https://review.openstack.org/8953311:00
*** derekh has quit IRC11:01
*** rameshg87 has left #openstack-ironic11:21
*** lucasagomes is now known as lucas-hungry11:22
*** killer_prince has quit IRC11:57
openstackgerritMikhail Durnosvistov proposed a change to openstack/ironic: Unified object model  https://review.openstack.org/8506512:01
openstackgerritMikhail Durnosvistov proposed a change to openstack/ironic: Get rid object model `dict` methods part 3  https://review.openstack.org/6410812:01
openstackgerritMikhail Durnosvistov proposed a change to openstack/ironic: Get rid object model `dict` methods part 1  https://review.openstack.org/6002512:01
openstackgerritMikhail Durnosvistov proposed a change to openstack/ironic: Get rid object model `dict` methods part 2  https://review.openstack.org/6233112:01
openstackgerritMikhail Durnosvistov proposed a change to openstack/ironic: Get rid object model `dict` methods part 5  https://review.openstack.org/6427812:01
openstackgerritVladimir Kozhukalov proposed a change to openstack/ironic-python-agent: Added list and report disk utils  https://review.openstack.org/8860212:02
*** jdob has joined #openstack-ironic12:06
*** datajerk has joined #openstack-ironic12:07
*** jdob has quit IRC12:09
*** jdob has joined #openstack-ironic12:09
*** Manishanker has quit IRC12:15
*** andreykurilin has joined #openstack-ironic12:15
*** lucas-hungry is now known as lucasagomes12:20
*** datajerk has quit IRC12:25
NobodyCamgood morning Ironic12:28
Mikhail_D_wkNobodyCam: g'morning :)12:32
NobodyCammorning Mikhail_D_wk :)12:33
*** Mikhail_D_wk has left #openstack-ironic12:38
*** Mikhail_D_wk has joined #openstack-ironic12:40
*** derekh has joined #openstack-ironic12:41
*** matsuhashi has quit IRC12:43
*** matsuhashi has joined #openstack-ironic12:43
lucasagomesmorning NobodyCam Mikhail_D_wk12:44
Mikhail_D_wk lucasagomes: morning :)12:45
NobodyCammorning lucasagomes :)12:45
NobodyCamhave a good weekend?12:45
romchegHi NobodyCam, lucasagomes and everyone else!12:47
Mikhail_D_wk lucasagomes: I left a comment in https://review.openstack.org/#/c/60025/12:47
*** datajerk has joined #openstack-ironic12:47
NobodyCamhey hey romcheg :)12:47
*** datajerk has quit IRC12:47
openstackgerritMikhail Durnosvistov proposed a change to openstack/ironic: Old value 'updated_at' field returned after update  https://review.openstack.org/7543012:47
*** matsuhashi has quit IRC12:48
NobodyCamyou're back in the news over here :-p ... Hope everyone is safe and well :)12:48
*** blamar has quit IRC12:48
*** nosnos has quit IRC12:48
lucasagomesMikhail_D_wk, thanks will see12:48
*** blamar has joined #openstack-ironic12:49
*** rloo has joined #openstack-ironic12:50
*** nosnos has joined #openstack-ironic12:51
*** nosnos has quit IRC12:52
*** jbjohnso has joined #openstack-ironic12:53
*** viktors has quit IRC12:54
*** viktors has joined #openstack-ironic12:56
*** dkehn__ is now known as dkehnx12:57
NobodyCamwow 2 X 2 X 40G fiber network connections for 160G of network bandwidth per server...12:57
*** killer_prince has joined #openstack-ironic12:58
NobodyCambrb13:00
*** sseago has quit IRC13:04
*** max_lobur has quit IRC13:14
*** max_lobur has joined #openstack-ironic13:15
*** mdickson has quit IRC13:16
*** matty_dubs|gone is now known as matty_dubs13:23
*** linggao has joined #openstack-ironic13:24
jrollmorning ironic :)13:26
NobodyCammorning jroll :)13:26
NobodyCamlucasagomes: question... was this not by design? https://bugs.launchpad.net/python-ironicclient/+bug/131044613:29
*** mdickson has joined #openstack-ironic13:29
lucasagomesNobodyCam, hey, it was13:30
lucasagomesNobodyCam, we added it to the libs (so that nova can start the deployment) but not the cli13:30
NobodyCamthe the need for bp approval13:30
NobodyCamyep13:30
NobodyCams/the the/thus the/13:31
lucasagomesright13:31
*** jgrimm has joined #openstack-ironic13:32
NobodyCamcommented13:33
NobodyCam:-p13:33
lucasagomes:D13:33
lucasagomescheers13:33
*** datajerk has joined #openstack-ironic13:35
openstackgerritJim Rollenhagen proposed a change to openstack/ironic: Adding a reference driver for the agent  https://review.openstack.org/8479513:42
openstackgerritJarrod Johnson proposed a change to stackforge/pyghmi: Fix mass thread initialization of sessions  https://review.openstack.org/8957313:50
NobodyCamok heres a question I am seeing may come up.13:52
NobodyCamcan we deploy a image with out hdd... by that I mean say the user wishes to deploy only a ram disk13:53
*** ndipanov has quit IRC13:53
*** ndipanov has joined #openstack-ironic13:55
*** datajerk has quit IRC13:55
lucasagomesNobodyCam, network volume?14:00
lucasagomeswhen we support that we could than deploy a node without a local hd14:00
NobodyCamlucasagomes: nope.. system with 3TB of RAM14:00
lucasagomeswe will be able then*14:00
NobodyCamthe ramdisk is the user image14:01
lucasagomesright14:01
lucasagomeswell, I don't know the use cases for that... but right now we can't14:01
lucasagomesbecause the provision_state will be set to active14:02
lucasagomesafter the pass_deploy_info14:02
lucasagomesbut it doesn't sound hard to implement14:02
NobodyCamlucasagomes: yea.. it's for big data: in-memory DBs, auto scaling14:03
dtantsurre blueprints: there was an idea to consider Glance checksum when caching master images (or using one from cache). Do you think I need to create a blueprint before starting?14:03
dtantsurI just have no idea what change is large enough to have a blueprint14:04
linggaoping matty_dubs14:04
openstackgerritDmitry Tantsur proposed a change to openstack/ironic: Implement more robust caching for master images  https://review.openstack.org/8538714:04
NobodyCamdtantsur: if you have to ask whether it's a big enough change for a BP, then it prob is :)14:04
lucasagomesNobodyCam, i see :D14:05
dtantsurNobodyCam, it's not large, but it seems it can be controversial14:05
NobodyCamdtantsur: then I would do a BP for it14:05
dtantsurthanks14:05
NobodyCameheheh :)14:06
jrollNobodyCam: send me that 3TB system and I'll figure it out :P14:06
NobodyCamjroll: I want to do a burn in with ... oh say an unreal server :-p14:07
jroll:D14:07
lucasagomesheh thats a crap load of ram :P14:08
NobodyCam96 dimm slots filled with 32 gb dimms :)14:09
*** rwsu has joined #openstack-ironic14:09
jrollNobodyCam: these are pretty cool http://www.innodisk.com/intel/SATADOM_all.html14:09
jrolljust get a little one for the boot disk and good to go14:10
*** datajerk has joined #openstack-ironic14:10
linggaoping devananda14:10
NobodyCamhumm...14:10
NobodyCamlinggao: prob a little bit b4 he's online14:10
linggaomorning NobodyCam.14:11
NobodyCamjroll: those are cool :)14:11
NobodyCammorning linggao :)14:11
matty_dubsjroll: What do those cost?14:11
matty_dubsNormal people money, or 3TB RAM money?14:12
jrollmatty_dubs: I have no idea, but I don't see why they'd be too expensive14:12
jrollafaik it's just like the flash memory in your phone or whatever plus a sata connector14:12
matty_dubsThe SLC bit made me nervous14:12
jrollrandom amazon search says < $10014:12
NobodyCamya I can pick up like a 20 GB SSD for $30 or less nowadays14:12
matty_dubsI've been toying with building a ZFS box at home and keep reading that you should put the ZIL on an SLC drive14:13
jrolloh yeah, slc is a bit more expensive14:13
linggaoping matty_dubs14:15
matty_dubslinggao: pong14:15
linggaomatty_dubs, for the console patch https://review.openstack.org/#/c/64100/, I think I need to do some restructuring.14:15
matty_dubsHow so?14:16
linggaoI want to pull the functions related to shellinabox into a separate file like console_util.py because ipminative and many other drivers will use the same functions.14:16
matty_dubsHmm, that seems fairly reasonable.14:17
matty_dubslinggao: I was actually poking at the ssh drivers for something unrelated, and thinking we could plug in console support there, too.14:18
matty_dubslinggao: Not in this patch, though :)14:18
linggaoAnd the 3 configuration settings 'terminal_cert_dir', 'terminal_pid_dir', 'terminal' are not really for ipmi. They should be under [console] section and be shared by different drivers.14:18
matty_dubsBut I like the idea of making it a bit more extensible.14:18
matty_dubsAh, +1 to that14:18
linggaomatty_dubs, yes so I'd like to get this patch landed and then make the changes to pull the common functions out.14:19
matty_dubslinggao: Oh, so you want to merge 64100 soon and _then_ refactor?14:19
matty_dubsI'm fine with either, FWIW.14:19
linggaogreat. I'll work on a patch that depends on this one.14:20
linggaomatty_dubs, thanks.14:21
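
A sketch of the config half of that refactor, assuming the oslo.config API of the era; the option names come from the discussion above, while the default and help strings are illustrative:

```python
from oslo.config import cfg

console_opts = [
    cfg.StrOpt('terminal', default='shellinaboxd',
               help='Terminal program used to expose the serial console.'),
    cfg.StrOpt('terminal_cert_dir',
               help='Directory containing the SSL certificate for the '
                    'terminal program.'),
    cfg.StrOpt('terminal_pid_dir',
               help='Directory where the terminal writes its PID files.'),
]

CONF = cfg.CONF
# a shared [console] section, instead of per-driver (ipmi) options
CONF.register_opts(console_opts, group='console')
```
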
matty_dubslinggao: Really, we just need more reviewers. I already gave a +1, but I'm the only one :'(14:22
linggaomatty_dubs, let's see if we can bribe someone, like lucasagomes or NobodyCam?14:25
NobodyCamhuh14:25
NobodyCamhehehe14:25
matty_dubsOoh, good idea14:26
openstackgerritlinggao proposed a change to openstack/ironic: Support serial console access  https://review.openstack.org/6410014:27
*** newell has joined #openstack-ironic14:28
linggaoplease ignore this set. I need to do pep814:28
*** datajerk has quit IRC14:28
lucasagomeslinggao, will take a look (need to finish one thing before)14:29
linggaothanks, lucasagomes. I need to check in another patch set for it to incorporate the new console api change.14:30
*** sseago has joined #openstack-ironic14:32
*** datajerk has joined #openstack-ironic14:33
*** sseago has quit IRC14:41
openstackgerritJim Rollenhagen proposed a change to openstack/ironic: Adding a reference driver for the agent  https://review.openstack.org/8479514:50
*** sseago has joined #openstack-ironic14:55
*** datajerk has quit IRC14:57
*** dwalleck has joined #openstack-ironic14:59
matty_dubslinggao: What changed in this one? Does this include the above change, or was it a rebase? (Diffing it against the previous patch shows about 16k lines changed, seemingly due to changes on master.)14:59
NobodyCammatty_dubs: sounds like rebase to me14:59
matty_dubsI guess my question is more accurately, "Was it just a rebase, or are there code changes too?"15:00
linggaomatty_dubs, please ignore this one. I am working on a rebase and making changes to get_console to return a dict instead of a string. The API changes are in 87760.15:01
matty_dubsAh, will do. Thanks.15:01
*** derekh has quit IRC15:02
*** datajerk has joined #openstack-ironic15:10
NobodyCambbt..brb15:13
*** hemna_ has joined #openstack-ironic15:13
*** hemna_ has quit IRC15:14
*** hemna_ has joined #openstack-ironic15:18
*** datajerk has quit IRC15:20
*** ndipanov has quit IRC15:24
*** hemna_ has quit IRC15:25
dwalleckjroll: ping?15:29
openstackgerritlinggao proposed a change to openstack/ironic: Support serial console access  https://review.openstack.org/6410015:29
*** datajerk has joined #openstack-ironic15:36
*** ndipanov has joined #openstack-ironic15:37
NobodyCamdid we come up with a way to do rack aware provisioning?15:42
jrolldwalleck: waiting for the bus, will get back to you from the office :)15:44
*** yuriyz has quit IRC15:45
*** datajerk has quit IRC15:45
*** hemna_ has joined #openstack-ironic15:45
*** jistr has quit IRC15:48
*** jistr has joined #openstack-ironic15:50
*** eghobo has joined #openstack-ironic15:51
*** viktors is now known as viktors_away15:52
*** vkozhukalov has quit IRC15:52
*** derekh has joined #openstack-ironic15:53
*** mdickson has quit IRC16:00
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: Port to oslo.messaging  https://review.openstack.org/8830716:03
*** mdickson has joined #openstack-ironic16:03
openstackgerritDmitry Tantsur proposed a change to openstack/ironic: Implement more robust caching for master images  https://review.openstack.org/8538716:04
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: Port to oslo.messaging  https://review.openstack.org/8830716:06
lucasagomesdevananda, https://blueprints.launchpad.net/ironic/+spec/promote-set-boot-device as asked for the set_boot_device work16:07
dtantsurifarkas, hope I managed to create a proper blueprint for all these cache-related things https://blueprints.launchpad.net/ironic/+spec/pxe-master-images-caching16:07
dtantsurlucasagomes, we need MOAR blueprints :D16:11
lucasagomeshah16:11
*** matty_dubs is now known as matty_dubs|lunch16:11
*** jistr has quit IRC16:12
ifarkasdtantsur, wow, do we need a blueprint for that?16:12
ifarkasbut yeah, makes sense16:12
dtantsurifarkas, as I understand, the trend now is to have a blueprint for major changes. ours proved to be a major one16:12
*** zdiN0bot has joined #openstack-ironic16:13
dtantsurand guys, we need a blueprint for having more blueprints :)16:13
*** lucasagomes is now known as lucas-afk16:13
*** datajerk has joined #openstack-ironic16:16
dtantsurnow seriously, what is the process of approving blueprints?16:16
openstackgerritlinggao proposed a change to openstack/python-ironicclient: node-get-console incorporate the changes in API  https://review.openstack.org/8776916:17
NobodyCamdtantsur: as of right now... put up blueprint and then have people go look at it..16:17
NobodyCamthou that will be changing soon16:17
dtantsuryeah, I've heard about the Nova procedure, I am just wondering what to do right now :)16:18
*** datajerk has quit IRC16:18
dtantsurNobodyCam, because in my situation I started to work on a bug (proper caching? piece of cake, what can go wrong?) and then realized that this is a major rework actually, so I created a blueprint for this and future changes16:19
dtantsurprobably not exactly the right thing to do (tm), but...16:19
NobodyCam:) thats the way it goes sometimes16:20
*** uberj_ is now known as uberj16:25
dtantsurbtw, NobodyCam, could you actually have a look? https://blueprints.launchpad.net/ironic/+spec/pxe-master-images-caching16:26
* dtantsur be back in ~hour16:26
*** jgrimm has quit IRC16:26
jrolldwalleck: here now, got a few?16:26
NobodyCamdtantsur: sure.. but am about to jump on a conf call so ok if I look after that :)16:26
dtantsursure16:27
NobodyCam:)16:27
devanandamornin, all16:28
NobodyCamgood morning devananda :)16:29
BadCubMorning Devananda16:29
jrollheya devananda :)16:30
NobodyCamdtantsur: what are the impacts of this change.. i.e. will APIs need changing?16:30
NobodyCamwith the doc require changes16:30
NobodyCams/doc/docs/16:30
NobodyCamhehe on hold.. :-p16:30
dwalleckjroll: Was my turn to bail out, hit the food trucks16:38
*** shakamunyi has joined #openstack-ironic16:39
jrollheh, no worries. good now?16:39
dwalleckyup!16:40
jrollcool16:40
jrollso I'm thinking about agent + tempest16:40
jrollidk how much you've looked at it16:40
jrollbut should it be pretty similar to the existing tempest things?16:40
devanandaBadCub: !! hi there :)16:40
*** ndipanov is now known as ndipanov_gone16:41
dwalleckAhh, that's what you mean. I'm aware of the agent, haven't messed with it much16:41
jrolldwalleck: or are we going to have to start from scratch16:41
jrollok16:41
dwalleckWell, sort of. Tempest mostly interacts with APIs16:41
jrollwhere can I find the existing tests btw?16:41
dwalleckTempest is at: github.com/openstack/tempest CloudCafe at github.com/stackforge/cloudroast16:42
jrollas far as the agent, it's somewhat similar to the existing pxe driver16:42
jrollsee https://wiki.openstack.org/wiki/Ironic-python-agent16:42
jrolland the diagrams linked from there16:42
jrollok16:42
dwalleckHaven't tried it, but I think it's doable. How are things communicating with the agent? sockets?16:43
jrollhttp, yeah16:43
jrollso16:43
dwalleckOh! The agent does expose a rest API!16:43
jrollwhen the agent starts up, it registers with a node in ironic through the api16:43
jrollthen ironic sends commands to the agent through the agent's rest api16:44
jrolland yes :)16:44
jrollthat API is only ever touched by ironic16:44
*** zdiN0bot has quit IRC16:44
jrollit seems to me that tempest focuses on integration testing, so we would only want tempest to poke the ironic API, right?16:44
jrollor would we also want to write standalone agent tests?16:45
dwalleckThe Tempest folks really only like people poking at the API and validating side effects. For example, I sort of hit the Nova agent when I change a password via the public API. It catches it as an integration piece16:46
dwalleckIt's possible we want both16:46
dwalleckTempest was originally integration only, but if you look at it now it's a bit of both16:47
dwalleckAs long as it's a deployed, functional thing, testing via Tempest should be fair game16:47
jrollok, right on16:48
jrollseems like this is where actually spinning up a node is tested? https://github.com/openstack/tempest/blob/master/tempest/scenario/test_baremetal_basic_ops.py16:48
openstackgerritA change was merged to openstack/ironic: cleanup docstring for drivers.utils.get_node_mac_addresses  https://review.openstack.org/8839516:49
jrollso we would just need another class for the agent_ssh driver I think16:49
jrollshouldn't be too bad16:49
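
What "another class" might look like: a hypothetical subclass of the scenario test linked above (the base class name inside that module and the driver hook are assumptions, not the actual tempest code):

```python
from tempest.scenario import test_baremetal_basic_ops


class BaremetalAgentBasicOps(test_baremetal_basic_ops.BaremetalBasicOptsTest):
    """Run the same boot/verify/delete scenario against agent_ssh."""

    driver = 'agent_ssh'  # assumed hook, instead of the hardcoded pxe_ssh
```
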
jrolldevananda: wdyt? tldr: agent testing - make another test scenario in tempest for the agent_ssh driver, and maybe even have tempest tests for the agent APIs16:50
dwalleckSo if I spin up an server right now, its already getting the ironic agent?16:50
dwalleckI wasn't sure we were that far yet16:51
jrolldwalleck: would you be able to work on that?16:51
dwalleckjroll: Absolutely16:51
jrolldwalleck: are you talking internally or?16:51
*** packet has joined #openstack-ironic16:51
*** derekh has quit IRC16:51
jrollhow the agent works overall is it's in a ramdisk that we pxe boot16:51
jrolland if you're talking about our test environment, yes, we have agents running there16:52
jrollnow, deploys aren't fully functional16:52
jrollbut we're almost there, just some bugs to work out (which might just be environmental things)16:53
dwalleckYeah, having something working will help speed things up :) I can futz around if I need to. I think I'm making a bad translation though. When I think of agents, I think of the Nova agent, which is a process that lives inside each VM. That's not the case with Ironic agent?16:53
jrollah, nonono16:54
jrollthe agent will only run on unprovisioned servers16:54
jrollit provisions an instance and we pass off to user16:54
dwalleckOh!16:54
dwalleckThat makes total sense16:54
jrollthen when user is done, it does decom work and then waits for a provision command again16:54
jrollcool16:55
dwalleckCool! So is there one agent per chassis or something like that?16:55
jrollby ironic terms, per node16:55
jrollbut a node is usually a chassis16:55
jrollso yeah16:55
dwalleckOkay, now I'm on the wagon16:55
jroll\o/16:56
jrollthe diagrams on that wiki page should have slightly more detail than that16:56
*** martyntaylor has left #openstack-ironic16:56
dwalleckOkay. Let me read through this so I have more intelligent questions to ask16:57
*** romcheg1 has joined #openstack-ironic16:57
jrollsure thing16:57
dwalleckBut my general feeling is that this shouldn't be too hard (jinx). Where these tests live might be the only thing up for debate16:58
*** zdiN0bot has joined #openstack-ironic16:58
jrollI'm just trying to get a general plan thought out atm16:58
jrollheh16:58
jrollas far as tempest vs cloudcafe or something else?16:58
dwalleckWell, Tempest only likes testing public APIs. This might be considered an internal API16:58
dwalleckBut I'll roll the dice and see what they think.16:59
jrollmmm16:59
jrollwell16:59
jrollwe currently have tempest tests for the pxe driver of ironic16:59
jrollso ironic + agent driver are fair game16:59
dwalleckSounds reasonable17:00
jrollI would prefer to have agent API tests, but if it's not allowed then /shrug17:00
JoshNangmost of the agent APIs should get hit by ironic tests though17:01
*** datajerk has joined #openstack-ironic17:01
jrollyeah, exactly17:01
jrollbut we could exercise more cases than ironic would, I think17:01
openstackgerritDevananda van der Veen proposed a change to openstack/ironic: Remove 'fake' driver from default enabled drivers  https://review.openstack.org/8871117:02
jrolldwalleck: also, what's the deal with cloudroast? is the goal to replace tempest?17:02
*** matty_dubs|lunch is now known as matty_dubs17:02
dwalleckjroll: Not really. Tempest (as proven this morning in a lively discussion) is focused on API testing only. CloudRoast is more of a holistic view17:03
openstackgerritA change was merged to stackforge/pyghmi: Fix mass thread initialization of sessions  https://review.openstack.org/8957317:04
jrollmmm17:04
dwalleckThe major difference is that Tempest is something I wrote when I first started testing APIs. CloudCafe is what I wrote after I learned my lessons :-)17:04
dwalleckSo for example, today I put up the idea that Tempest should know the configured hypervisor/driver and skip based on what actions are implemented for that driver. That was shot down because they don't want Tempest to know anything about the environment17:05
dwalleckIn CloudCafe we're not only knowledgeable about the hypervisors, but go further in depth testing behaviors that occur because of that hypervisor17:06
*** eghobo has quit IRC17:07
*** eghobo has joined #openstack-ironic17:07
devanandadwalleck: you know that, to get tempest to test ironic (via nova) there's a regex filter on the tests, right?17:08
jrolldwalleck: I see17:08
devanandadwalleck: we're essentially doing ^^ but at a layer above tempest ...17:08
jrolldevananda: did you catch this btw?17:08
jroll09:50:43           jroll | devananda: wdyt? tldr: agent testing - make another test scenario in tempest for the agent_ssh driver, and maybe even have tempest tests for the agent APIs17:08
devanandajroll: hmm. well...17:09
jroll:)17:09
devanandajroll: we have two sets of tempest tests for ironic today, and i dont think we need more (though separate tests for the agent API might make sense... i'll come back to that)17:10
devananda* API c/r/u/d tests w/ fake driver17:10
devanandawhere tempest talks directly to ironic17:10
devananda* integration test17:10
devanandawhere devstack configures nova to use ironic, then tempest talks to nova17:11
devanandajroll: as we add drivers that require testing, i think they should fit into that second category transparently17:11
devanandawhether we configure devstack to enroll the hw with driver A or B, it shouldn't matter to tempest17:12
devanandabecause it shouldn't matter to the user of nova17:12
devanandaif it does, we've broken the abstraction17:12
jrolldevananda: sure17:12
jrollso, would a better way be to have integration tests provision n nodes, each with different drivers?17:13
devanandapossibly17:14
jrollthat's kind of what I had in mind17:14
jrollthen I saw https://github.com/openstack/tempest/blob/master/tempest/scenario/test_baremetal_basic_ops.py#L5217:14
jrolland thought maybe we should just add another class like that17:14
*** zdiN0bot has quit IRC17:14
*** zdiN0bot has joined #openstack-ironic17:14
*** datajerk has quit IRC17:15
devanandaadam_g: thoughts ^ ?17:15
devanandajroll: I think that's the same thing17:16
jrollah17:16
jrollyeah, so that was my plan17:16
devanandajroll: well, "it shouldn't matter to tempest" may be a bit misleading17:16
jrolland then maybe some agent API testing if we can talk tempest folks into testing internal apis17:16
devanandajroll: tempest is validating that certain things happened, which are, in this case, specific to the ironic driver in question17:17
jrollright17:17
devanandadwalleck's comments seem to indicate that is not in keeping with what folks want tempest to do... i'm not sure17:17
jrollright17:17
devanandathough this information is exposed in Ironic's API, so tempest is able to validate it there, without having to reach "underneath" ironic to check17:18
jrolleither way, we're going to have tests for the API, it's just a matter of where they live and if things are mocked :)17:18
jrollyeah17:18
devanandaalso, some of that needs to move to instance_info (not driver_info)17:18
*** harlowja_away is now known as harlowja17:18
devanandaand will be the same for the pxe and agent drivers17:18
dwalleckdevananda: Yeah, they were not very amused by my proposal to give Tempest even the knowledge of what driver Nova is using17:18
devanandaeg, deploy_ramdisk, root_gb, etc17:18
jrollyep17:18
devanandadwalleck: heh. well, they've allowed that in for ironic17:19
devanandadwalleck: see the link jroll posted17:19
devanandait's verifying specific results in Ironic's API as a result of Nova using the Ironi driver here: https://github.com/openstack/tempest/blob/master/tempest/scenario/test_baremetal_basic_ops.py#L8317:20
dwalleckdevananda: Actually, that's very true17:20
dwalleckThat would never work with another driver17:20
devanandajroll: once we move as much as we can from driver_info to instance_info, and the agent driver uses the same DHCP integration with Neutron, I think these tests will look identical17:21
devanandadwalleck: right17:21
jrolldevananda: I hope so :)17:22
dwalleckBut you do have to have the Ironic service enabled for those tests to run, which goes back to just declaring your nova driver. Docker and LXC folks are in the same boat17:22
devanandadwalleck: right17:22
devanandadwalleck: but there are a lot of tests which we also have to *disable* when using the ironic driver17:23
*** Mikhail_D_ltp has joined #openstack-ironic17:24
*** Mikhail_D_ltp has quit IRC17:24
dwalleckright. That's why I proposed a higher level of abstraction, as it would make the effort much, much easier. The general feeling was that any nova driver should support most or some agreed-on subset of the API, so I withdrew to fight another day :)17:25
devanandadwalleck: yea. i've had that discussion with clarkb a few times, and i think there's a session in atlanta on it17:25
devanandatldr; there are some things which ironic doesn't (and essentially cant) implement that other virt drivers do17:26
devanandadoes that mean ironic can never be a supported driver for nova?17:26
dwalleckdevananda: That's the point I was starting to make17:27
dwalleckKVM doesn't support change password, so is it out of Nova too? :)17:27
devanandaheh17:28
*** vkozhukalov has joined #openstack-ironic17:30
*** jgrimm has joined #openstack-ironic17:32
adam_gjroll, devananda my original idea was to extend the current tempest scenario test beyond *PXESSH, and have tests skip based on what drivers are reported by ironic17:32
devanandaadam_g: ah. so then, when used for third-party tests, it might run only the tests for that specific (eg, seamicro) driver?17:33
adam_gif we want to test other drivers or the agent, we'd need to have devstack be able to deploy it with some driver flag  (it currently only supports pxe_ssh, hardcoded)17:33
dtantsurNobodyCam, sorry, was out for language lesson :) Should I updated a blueprint with information on impact (none expected on API, not quite sure about docs)?17:34
devanandaadam_g: if at some point we have multiple drivers being tested upstream, think it's reasonable for devstack to create multiple nodes, one (or more) with each driver?17:35
adam_gdevananda, ideally yeah, we need to figure out a good way of targeting / discovering a specific driver. say we have an ironic deployment that deploys / reports different drivers, can we boot a nova instance and target a specific driver ahead of time?17:35
devanandaadam_g: or would we want separate devstack runs, one for each driver?17:35
adam_gdevananda, i would think different runs for each driver. and devstack deployment deploys an ironic endpoint that lists only one supported driver, so tempest knows what to test17:36
*** epim has joined #openstack-ironic17:37
NobodyCamdtantsur: those are things that will be in the new BP template / format. I put it out there more for thought. it is not required at this point but will be soon, so I am trying to get folks to think about it17:37
devanandaadam_g: nova allows targeting a specific host:node when booting an instance17:37
dtantsurack, thanks17:37
jrolldevananda: does it? comstud had to hack that in for me the other day :P17:38
devanandaadam_g: separate devstack run for each driver would require separate jenkins / devstack-gate job for each ironic driver, which seems like a waste of resources if we can avoid that17:39
devanandaadam_g: whereas it makes more sense for third-party tests, where they may only want one driver17:39
devanandaadam_g: though it should be easy now to have devstack configure ironic with only one driver, based on an ENV var17:40
adam_gdevananda, so... a seamicro test would query nodes for one enlisted /w that driver, skip if it doesnt find it, and run the scenario with a host:node constraint?17:40
adam_g..if it does17:40
devanandawell, as soon as https://review.openstack.org/#/c/86958/ lands, it will need one more patch17:40
devanandaadam_g: possibly17:41
devanandai'm thinking of two different test environments17:41
devananda1) upstream, where we can test reference drivers // drivers taht are not hardware-specific (eg, pxe_ssh, agent_ssh)17:41
devananda2) third-party ci, where specific hardware is required (eg, seamicro, ilo)17:41
adam_gwhatever the test environment, you should ideally be able to point-and-shoot /w tempest and have it figure out what to run/skip17:41
devanandaactually, that's going to be more complicated than i thought at first17:43
devanandabut i think we need to17:44
NobodyCamShrews: I think #1 is dangerous17:44
devanandajroll: this is sort of a follow on to yesterdays' conversation. if we can run pxe_* and agent_* in the same environment, we should be able to test both in the same devstack/tempest environment ,too17:44
jrollindeed17:45
devanandajroll: if we can't run them in the same env (eg, because they have different DHCP requirements) then we've broken the architecture17:45
devanandaso it seems that having devstack deploy both and tempest test both in parallel would actually be good17:45
devanandamaybe a high bar, but a good one17:45
jrollyep, +117:45
devanandaand we should then impose the same requirement on thrid-party drivers17:46
devanandaeg, run both pxe_ipmi and seamicro_ipmi in the same test env17:46
devanandaadam_g: so i think we actually need tempest to run tests against two drivers in parallel to ensure that no driver breaks the architectural assumption that multiple drivers _can_ coexist17:46
adam_gwith the agent, are there ironic API calls we can make while the nova provisioning flow is happening, so the test can verify all things are happening correctly? as we do with the current pxe_ssh scenario test?17:47
ShrewsNobodyCam: #1 is not my first choice either17:47
*** martyntaylor has joined #openstack-ironic17:47
jrolladam_g: yes. goal is to make it work similarly to the pxe driver, e.g. you'll be able to see target_provisioning_state set correctly etc17:47
NobodyCammaybe if maintenance mode was also true17:47
NobodyCambut still scary17:48
Shrewsyeah17:48
adam_gdevananda, running them in parallel may be a challenge, at least in the near term. we explicitly turn off concurrent tests. but having a single tempest run hit both tests serially would probably be doable.17:48
devanandaadam_g: serial is fine17:48
rlooMikhail_D_wk: are you around?17:52
NobodyCammorning rloo :)17:52
Shrewsdevananda: fyi, my initial -1 vote on 87346 was b/c logging wasn't working. It does after a rebase so I changed my vote.17:52
rloomorning NobodyCam!17:52
NobodyCam:)17:52
openstackgerritJosh Gachnang proposed a change to openstack/ironic: Adding a reference driver for the agent  https://review.openstack.org/8479517:53
openstackgerritJosh Gachnang proposed a change to openstack/ironic: Drivers determine acceptable power states  https://review.openstack.org/8674417:55
*** jbjohnso has quit IRC17:58
comstuddevananda: ya, i couldn't/can't find any way to target a specific host or node.  There happens to be internal use of it, but it's not accessible via scheduler hints17:59
comstudfrom what i could find17:59
ShrewsSpamapS: rebuild is not really a "state", but triggers a state change on the instance. To trigger a deploy, we set the target state to "active", but since the instance is already active, the api complains.  :/17:59
devanandacomstud: nova boot --availability-zone zone:host:node17:59
*** packet has quit IRC18:00
comstudah right, with AZS18:00
comstudbut without AZs, there's no way18:00
devanandayea. the old --force_host got replaced by that18:00
adam_gdevananda, isn't that an admin-only capability?18:00
comstudunless maybe 'default' works18:00
devanandawell, there should be a default AZ18:00
devanandaadam_g: yes18:00
comstudso maybe default:host:node works18:01
devanandaadam_g: well, by default, yes. but i think you can change that with nova policy18:01
adam_gdevananda, so WRT tempest we run everything as the non-admin user18:01
devanandaadam_g: set up one nova flavor per node?18:02
devanandahmm, nvm.18:02
devanandait might work but is a pretty bad hack18:02
adam_gdevananda, that was my original idea. baremetal_pxe, baremetal_seamicro?18:02
adam_gflavor per driver18:02
devanandawe could add a scheduler filter that matched on that, perhaps18:02
devanandabut not the way i'd recommend operators to deploy it, and not something users should see18:03
adam_gright18:03
*** jbjohnso has joined #openstack-ironic18:03
devanandai would rather just change the nova policy for --availability-zone18:04
jrollthere also may be a need for a driver to use multiple flavors18:04
devanandainitial guess:     "compute:create:forced_host": "is_admin:True",18:04
*** BLZbubba_ has quit IRC18:05
devanandawe just need to change that18:05
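
i.e. something along these lines in nova's policy.json; an empty rule means any user may do it (example only; pick a rule that fits the deployment):

```json
{
    "compute:create:forced_host": ""
}
```
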
*** rameshg87 has joined #openstack-ironic18:05
devanandajroll: IMO the flavor should match the type of hardware being provisioned, not the specific driver doing the provisioning18:05
jrollexactly.18:05
jrollI didn't word that very well :P18:06
devanandaif I ask for a node with 8 cores and 32GB RAM, it shouldn't matter whether it's HP or AMD hardware18:06
devananda:)18:06
jrollas an HP employee I would think you *would* have an opinion on that :P18:06
* jroll stops derailing18:06
adam_gha18:07
devanandajroll: with my HP hat on, yes. with my PTL hat on, no :p18:08
NobodyCamhummm.. speaking of derailing... should we register our port? http://www.iana.org/form/ports-services18:08
jroll:)18:08
devanandaNobodyCam: ahh. yes ...18:08
devanandaNobodyCam: thanks for reminding me18:09
NobodyCam:-p18:09
SpamapSShrews: I don't really understand what that means. It sounds like implementation details.18:14
devanandaNobodyCam: interesting... looks like several openstack services are not registered18:14
NobodyCamyep I saw that18:15
*** eguz has joined #openstack-ironic18:15
*** eghobo has quit IRC18:19
matty_dubsboris-42: devananda: Say, any pointers on where to get started for taking a look at perf-testing Ironic? I've got a couple boxes with 128GB RAM and 10Gb Ethernet to play with^W^Wuse for testing18:24
NobodyCamlan party at matty_dubs house :-p18:26
*** krtaylor has quit IRC18:26
matty_dubslol18:26
*** martyntaylor has left #openstack-ironic18:26
*** hemna__ has joined #openstack-ironic18:26
NobodyCammatty_dubs: I would love to know if we are faster than nova BM18:26
* NobodyCam thinks not.. but18:27
jrolldidn't lifeless determine that we are recently?18:27
lifelessIronic is linearly faster today18:27
jroll(faster than BM)18:27
NobodyCamw00t :) and morning lifeless :)18:28
matty_dubsooh18:28
lifelesshttps://review.openstack.org/#/c/87938/ will close the gap a bit18:28
lifelessbut there are things Ironic overlaps better beyond that overhead18:28
lifelessmatty_dubs: perf testing or scale testing18:29
lifelessmatty_dubs: time-to-deliver 1 node isn't interesting. Hundreds or thousands would be interesting18:29
lifelesstime-to-finished was 22 minutes for 40 machines w/Ironic, 120 or so for nova-bm18:29
matty_dubslifeless: I have a small number of moderately-powerful machines, so more perf than scale. Though I could test scale with VMs, I suppose18:29
jrolloh, that was 40 machines? nice.18:29
lifelessjroll: yup, amortised 30 seconds/machine with Ironic, 3m with nova-bm18:30
* jroll has a secret goal of beating that record18:30
matty_dubsOh, wow18:30
matty_dubsI assume that's with parallel provisioning?18:30
lifelessno18:30
matty_dubs!18:30
*** datajerk has joined #openstack-ironic18:30
jroll:o18:30
NobodyCamnice :)18:30
jrollseriously?18:30
jrollthat's impressive18:31
lifelesswell, whatever Ironic's default is. I think 2 dd's in parallel. Probably would be slightly faster if tuned to one at a time.18:31
jrollthat's time to get into reboot, yes? not including the actual boot?18:31
lifelessthe next big jump needs the multicast stuff implemented18:31
jrolland these 40 were real machines, not VMs?18:31
lifelessjroll: thats time to ACTIVE, HP Proliants.18:31
devanandalifeless: that's with 40 "nova boot" in parallel, but the actual dd's are serialized, right?18:31
lifelessjroll: 10Gbps network.18:31
jrolldayum18:32
matty_dubsThat's pretty slick. Most machines I've seen can't _boot_ in 30sec18:32
lifelessdevananda: the driver for the test was 'devtest_overcloud.sh' with OVERCLOUD_COMPUTE_SCALE=4018:32
matty_dubsMuch less get provisioned18:32
lifelessmatty_dubs: that's why I said 'amortised'18:32
devanandamatty_dubs: that's not the per-machine time-to-boot18:32
lifelessmatty_dubs: the time to first machine was 10m, and then they tick over in rapid fire sequence18:32
lifeless10m-or-so - I didn't time it precisely.18:33
devanandaright, so 2 POST cycles18:33
matty_dubsHa! Okay, yeah, that makes sense. I was doing nonsense math.18:33
*** dwalleck has quit IRC18:33
matty_dubsI'm with you now18:33
lifeless| NovaCompute8                    | 7e28aa0f-dd34-4672-918c-fc8945dd8ac9 | state changed          | CREATE_IN_PROGRESS | 2014-04-15T08:01:21Z |18:33
devanandajroll: you guys are trying to optimize the linear time-to-boot one machine though, right?18:33
lifeless...18:34
lifeless| NovaCompute8                    | 46bac141-ca08-4b67-b32e-c813d75a33d2 | state changed          | CREATE_COMPLETE    | 2014-04-15T08:08:33Z |18:34
lifeless7 minutes for the one POST cycle + partitioning + dd + initiate the second POST18:34
devananda7:12... nice18:34
devanandayep18:34
lifelessso <=14m to be in userspace18:34
jrolldevananda: that's the main number I want to look at, but I also am looking at whole-rack times18:34
lifelessmulticast should let us get the rack down from 22m to 14m.18:35
lifelesshttps://etherpad.openstack.org/p/glance-multicast-image-transfer18:35
lifelesslibpgm is looking pretty good, I just need more timeslices on it18:35
devanandalifeless: is there going to be a perf-related session in tripleo track?18:35
lifelessdevananda: probably not18:35
lifelessdevananda: not enough sessions18:36
devanandalifeless: heh, right. and you have 2 more than we do :)18:36
lifelessjroll: so Ironic is very close to 'optimal' today for single machine cases.18:37
lifelessjroll: to make it faster I believe we'd need to write some custom C code and instead of dding, do a smart used-blocks-only copy from the qcow2 - probably preprocessing it to avoid high compute costs while still guaranteeing linear IO18:38
jrolllifeless: or remove one POST cycle :)18:38
lifelessjroll: this is something I think should be done as a successor to multicast, so we can obsolete the dd backend in the fullness of time18:38
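A rough illustration of the used-blocks idea in Python rather than the custom C lifeless describes (a sketch only, not his proposal): once the qcow2 has been converted to a sparse raw file, Linux's SEEK_DATA/SEEK_HOLE can drive a copy of only the allocated extents instead of dd'ing every byte.

    import os

    CHUNK = 1 << 20  # 1 MiB

    def copy_used_blocks(src_path, dst_path):
        """Copy only the allocated extents of a sparse raw image
        (e.g. the output of `qemu-img convert -O raw`), skipping holes.
        Requires Linux and Python >= 3.3 for os.SEEK_DATA/SEEK_HOLE."""
        with open(src_path, 'rb') as src, open(dst_path, 'r+b') as dst:
            size = os.fstat(src.fileno()).st_size
            pos = 0
            while pos < size:
                try:
                    data = os.lseek(src.fileno(), pos, os.SEEK_DATA)
                except OSError:   # ENXIO: no data beyond pos
                    break
                hole = os.lseek(src.fileno(), data, os.SEEK_HOLE)
                src.seek(data)
                dst.seek(data)
                left = hole - data
                while left > 0:
                    buf = src.read(min(left, CHUNK))
                    dst.write(buf)
                    left -= len(buf)
                pos = hole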
lifelessjroll: we did that, it wasn't reliable.18:38
devanandafwiw, i'm not aware of any OOB mechanism which removes a POST cycle18:38
devanandalifeless: he's referring to pre-running an agent on all the hosts18:38
jroll^18:39
devanandanot kexec (i hope)18:39
jrollwell, that's also a thing, but a bit more questionable18:39
jroll:)18:39
devanandajroll: you did mention kexec in your session proposal. fwiw, that's not an option because of poor support from PCI vendors18:39
lifelessoh, ok, but that doesn't remove a POST cycle for reusing / reimaging, only for greenfields18:39
jrolldevananda: maybe as a config option or something. just a thought for now.18:40
devanandait assumes excess capacity where the agent could sit idle (and, arguably, waste power)18:40
jrolllifeless: we don't really plan on fast turnover for a reimage or whatever. we'll do the decom when it's released and have the agent wait for a deploy.18:41
lifelessjroll: yeah, deploy-kexec in diskimage-builder will get you a nova-bm kexec deploy ramdisk, if you want to work on it.18:41
jrollhmm, ok18:41
lifelessjroll: reimage == nova rebuild --preserve-ephemeral btw. So same user, user data preserved. Time matters there.18:41
*** hemna__ has quit IRC18:42
jrolllifeless: right, I know. that's not a high priority for us right now. or at least for speed.18:42
lifelessjroll: ack18:43
*** dwalleck has joined #openstack-ironic18:43
*** openstackgerrit has quit IRC18:49
*** openstackgerrit has joined #openstack-ironic18:49
*** jistr has joined #openstack-ironic19:01
*** derekh has joined #openstack-ironic19:03
*** foexle has quit IRC19:05
*** krtaylor has joined #openstack-ironic19:07
*** GheAway is now known as GheRivero19:07
*** datajerk has quit IRC19:11
devanandaadam_g: I tossed some notes up here https://wiki.openstack.org/wiki/Ironic/Testing19:14
devanandajroll: also applies to testing the agent19:15
devanandaplease comment/edit as appropriate19:15
*** zdiN0bot has quit IRC19:16
jrollcool19:16
jrolldwalleck: ^19:16
dwalleckjroll: Thanks! I'll check that out19:17
*** shakamunyi has quit IRC19:33
*** shakamunyi has joined #openstack-ironic19:36
*** hewbrocca has quit IRC19:39
devanandaNobodyCam: answer to your IANA question -- keystone is the only one registered because it is the discovery mechanism for all the rest. thus the individual service ports don't matter19:47
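In sketch form, the discovery devananda is describing, using the era's keystoneclient (credentials and URLs are placeholders; 'baremetal' is ironic's service type): only keystone's port has to be well-known, everything else comes out of the catalog.

    from keystoneclient.v2_0 import client as ks_client

    ks = ks_client.Client(username='demo', password='secret',
                          tenant_name='demo-tenant',
                          auth_url='http://keystone:5000/v2.0')
    # The only hard-coded port is keystone's; ironic's endpoint (and
    # every other service's) is looked up in the service catalog.
    ironic_url = ks.service_catalog.url_for(service_type='baremetal',
                                            endpoint_type='publicURL')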
linggaoping matty_dubs19:48
matty_dubslinggao: pong19:50
linggaomatty_dubs, now that I think about rearranging the functions for the console patch, I should use the current patch. what do you think?19:51
NobodyCamahh ok19:52
linggaoIt makes sense to make it right since the console has never landed.19:52
matty_dubslinggao: Yes, it does make sense now. The only risk is that it'll take even longer to get the patch merged.19:52
matty_dubsBut I guess we're not in a huge rush.19:52
NobodyCambut without registering our port, couldn't "new service x" come along and snatch it from us?19:53
linggaomatty_dubs, I'll get it in tomorrow. I'll put both your and my names as co-authors to be fair.19:53
matty_dubslinggao: OK, I didn't mean to rush you. I just meant that if I were in your shoes I'd want the thing merged soon since it's >20 revisions. ;) I'm all for getting it right, though.19:54
matty_dubsI've only contributed a handful of lines to the patch; not sure what the standard is for Co-authored by19:55
linggaomatty_dubs, I found out there need to be more doc changes in the code while doing the rearrangement. That's why I think it's better to do it in the new file in one shot. :-)19:56
linggaomatty_dubs, you also helped test it and contributed ideas.19:57
matty_dubsSure, that makes sense20:00
matty_dubsI should really test this stuff on other hardware, too -- I realized that the box I'm testing on is IBM/Lenovo, and I presume yours is the same.20:00
matty_dubsIPMI should be the same, but it wouldn't hurt for me to test this on a Dell box or something20:01
linggaomatty_dubs, yes, I tested on IBM/Lenovo too, "great minds think alike" haha.20:02
matty_dubs:-D20:02
jrollI haven't looked at this yet, are you using SOL?20:02
jrolllinggao: ^20:02
*** derekh has quit IRC20:03
matty_dubsYeah, it's IPMI SoL20:03
matty_dubsErr, not to speak for linggao ;)20:03
linggaojroll, yes, ipmitool sol activate with shellinabox20:03
jrollcool20:03
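For the curious, the shape of that combination, as a hypothetical sketch only (the host, port, and credentials are invented; the real implementation is the patch under review):

    import subprocess

    # Wrap `ipmitool sol activate` in shellinaboxd so the serial-over-LAN
    # console is reachable from a browser. -t disables SSL; the
    # --service format is /url-path:user:group:cwd:command.
    sol_cmd = ('ipmitool -I lanplus -H 10.0.0.5 -U admin -P secret '
               'sol activate')
    subprocess.Popen([
        'shellinaboxd', '-t', '--port=8023',
        '--service=/:nobody:nogroup:/:%s' % sol_cmd,
    ])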
openstackgerritJosh Gachnang proposed a change to openstack/ironic: Adding a reference driver for the agent  https://review.openstack.org/8479520:11
jrollthis driver_info -> instance_info patch might get massive... :/20:13
*** dwalleck has quit IRC20:14
devanandajroll: it's gonna have to touch the pxe driver, nova.virt.ironic, devstack, and tempest20:15
jrollI know :/20:15
*** ifarkas has quit IRC20:19
*** jistr has quit IRC20:20
rloodevananda: if you have a few minutes, this was rebased, needs approval again: https://review.openstack.org/#/c/78651/20:20
*** vkozhukalov has quit IRC20:33
NobodyCambrb20:41
*** rch has joined #openstack-ironic20:44
*** zdiN0bot has joined #openstack-ironic20:47
*** romcheg2 has joined #openstack-ironic20:48
*** romcheg1 has quit IRC20:50
*** romcheg2 has quit IRC20:56
*** epim has quit IRC21:00
*** matty_dubs is now known as matty_dubs|gone21:03
*** jdob has quit IRC21:04
*** linggao has quit IRC21:10
openstackgerritA change was merged to openstack/ironic: Add worker threads limit to _check_deploy_timeouts task  https://review.openstack.org/7865121:12
*** mdickson has quit IRC21:14
jrolldevananda: hmm. looking at https://bugs.launchpad.net/ironic/+bug/130868021:15
jrolldevananda: rpc_thread_pool_size comes from ironic/openstack/common/rpc21:15
jrollis there a way we can change the default in ironic code somehow?21:15
*** dwalleck has joined #openstack-ironic21:15
devanandajroll: the oslo.messaging patch from lucas-afk adds an option to change the worker pool separately21:18
devanandaooh. i see what you mean21:18
jrollyeah.21:18
devanandawe want the rpc thread pool to be very small21:18
devanandathat option gets imported into our config file21:18
devanandabut with the default from oslo21:18
devanandahmmm21:18
jrollright21:19
devanandalemme look once this meeting is over21:19
lucas-afk:)21:19
jrollok, just curious if you knew something offhand21:19
jrollI'll poke oslo and see if this has come up before21:19
devanandanope21:19
devanandawe do things in the unit tests21:19
devanandato override conf defaults21:19
lucas-afkbtw devananda jroll  afternoon... devananda can you take a look at https://blueprints.launchpad.net/ironic/+spec/promote-set-boot-device ? (requested on the adding the management interface patch series)21:19
lucas-afkno hurry as well, finish the meeting before21:20
devanandak k21:20
jrollgood evening lucas :)21:20
*** jbjohnso has quit IRC21:22
*** romcheg1 has joined #openstack-ironic21:24
JoshNangtempest is giving me a lot of 'no valid host, no conductor supports driver fake' errors. http://logs.openstack.org/95/84795/34/check/check-tempest-dsvm-ironic/c71655d/console.html21:27
JoshNanghas anyone else been getting that?21:28
adam_gJoshNang, conductor isn't starting, probably due to something in your patch? http://logs.openstack.org/95/84795/34/check/check-tempest-dsvm-ironic/c71655d/logs/screen-ir-cond.txt.gz21:30
JoshNanghmm. that's not good :)21:30
JoshNangthanks adam_g !21:30
adam_gnp!21:31
*** max_lobur has quit IRC21:40
*** gpizza has joined #openstack-ironic21:46
*** romcheg1 has quit IRC21:50
devanandalucas-afk: ok, mostly done. so, about https://blueprints.launchpad.net/ironic/+spec/promote-set-boot-device21:58
devanandalucas-afk: two things come to mind immediately21:59
devananda* we should just make it work with the ssh power driver, or we can't call it a "core interface"21:59
devananda* we need to test it with tempest, in the gate21:59
NobodyCamugg munging xml :-p21:59
NobodyCamhttp://stackoverflow.com/questions/19011159/how-to-set-boot-order-on-kvm-libvirt-virsh22:00
devanandalucas-afk: so i'd like to see a plan for testing set boot device with VMs, even if it's not implemented prior to landing the management interface22:01
NobodyCamvbox and vmware should be able to make the change via command line22:03
lucas-afkdevananda, right, I can put some work on getting the ssh driver to have a set_boot_device22:04
lucas-afkdevananda, thanks for the review, I will take a look at it tomorrow then22:04
devanandajust posted comments to BP as well22:04
devanandaand updated direction to approved22:05
lucas-afkdevananda, ack, /me added it to my todo list22:05
lucas-afkNobodyCam, thanks for the link :D22:05
NobodyCamlucas-afk: http://download3.vmware.com/sample_code/Perl/VMBootOrder.html  <- vmware22:05
openstackgerritJim Rollenhagen proposed a change to openstack/ironic: Set a sane default for rpc_thread_pool_size  https://review.openstack.org/8970422:07
NobodyCamlucas-afk: VBoxManage modifyvm  <uuid|name>  [--boot<1-4> none|floppy|dvd|disk|net]  <- vbox command to change boot order22:07
lucas-afkNobodyCam, ah wow, thanks for that!22:08
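For the KVM case from the stackoverflow link above, the job reduces to rewriting the domain's <boot> element and redefining it. A hypothetical sketch (the function name echoes the blueprint; everything else is illustrative, and a real driver would run virsh over ssh):

    import subprocess
    from xml.etree import ElementTree as ET

    def set_boot_device(domain, device):
        """Point a libvirt domain at 'hd', 'network', or 'cdrom' --
        the values libvirt accepts for <boot dev=...>."""
        xml = subprocess.check_output(['virsh', 'dumpxml', domain])
        root = ET.fromstring(xml)
        os_elem = root.find('os')
        for boot in os_elem.findall('boot'):   # drop the existing order
            os_elem.remove(boot)
        ET.SubElement(os_elem, 'boot', {'dev': device})
        path = '/tmp/%s.xml' % domain
        with open(path, 'wb') as f:
            f.write(ET.tostring(root))
        subprocess.check_call(['virsh', 'define', path])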
jrolldevananda: haven't tested that bug along with lucas' patch, but there's the other half of your proposed fix22:08
lucas-afkNobodyCam, idk how I will test vmware tho22:08
jrolllucas-afk: does your oslo-messaging patch work or is it WIP?22:08
NobodyCamask whoever added the patch to test it :-p22:09
lucas-afkjroll, it works22:09
jrollcool22:09
*** datajerk has joined #openstack-ironic22:09
jrollthanks22:09
lucas-afkNobodyCam, lol ack22:09
NobodyCam:)22:09
lucas-afkjroll, np :)22:09
jrolllucas-afk, NobodyCam, JoshNang has vmware on his laptop, and I'm happy to try to throw him under that bus ;)22:10
devanandajroll: i'm going to test both patches together and see what i get22:10
*** sseago has quit IRC22:10
JoshNang:D22:10
jrolldevananda: cool22:10
NobodyCamlol Ty jroll :)22:10
lucas-afkdevananda, ^ about the oslo.messaging... I left some comments inline, the eventlet one I talked to markmc about; basically prepare() returns a new context which does not alter the self.client22:10
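What prepare() does, in sketch form (the topic and method match ironic's conductor; the rest is placeholder): it hands back a new call context carrying the overrides, so the shared client object is never mutated under a concurrent caller.

    from oslo import messaging
    from oslo.config import cfg

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='ironic.conductor_manager',
                              version='1.0')
    client = messaging.RPCClient(transport, target)

    # prepare() returns a *new* call context with the overrides applied;
    # `client` itself is untouched, so it is safe to share.
    cctxt = client.prepare(version='1.5')
    cctxt.cast({}, 'do_node_deploy', node_id='NODE_UUID')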
NobodyCam(and TY too JoshNang) lol22:10
JoshNangnp haha22:11
lucas-afkjroll, JoshNang ah, nice... I will try to put something up tomorrow22:11
lucas-afkthanks22:11
jrolldevananda: also quick thing I noticed, this isn't reflected in the sample config, is that expected or can I fix that real quick? https://github.com/openstack/ironic/blob/master/ironic/common/config.py#L2522:11
devanandajroll: oh. i see. would you mind rebasing on lucas' patch?22:11
devanandayou've got a TODO in there22:11
jrollheh, sure thing22:11
devanandathanks :)22:11
jrollyeah22:11
jrollwell22:11
jrollthat's not lucas' patch in the todo22:11
jrollthat's in oslo-incubator https://review.openstack.org/#/c/89702/22:11
devanandaoh! hah22:12
jrollI can still rebase if you'd like22:12
devanandajroll: nvm22:12
jrollok22:12
*** derekh has joined #openstack-ironic22:13
devanandajroll: well, your change is for the oslo-incubator code base, not the oslo.messaging library22:13
jrolldevananda: right22:13
jrolloh22:13
jrollohhhhh22:13
devanandalucas' patch rips that out22:13
jrollright22:14
jrollwill fix22:14
devananda:)22:14
jrollthanks for that22:14
* devananda tests lucas' patch in isolation22:14
lucas-afkjroll, you might want to fix that in the oslo.messaging22:14
lucas-afkhttps://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/transport.py#L5622:14
jrollright22:14
lucas-afkjroll, yeah my patch removes the whole openstack/common/rpc/ stuff22:14
jrolllucas-afk: actually it might need to be here: https://github.com/openstack/oslo.messaging/blob/19053b6e652d54444af44cff2c7db640b5b5b207/oslo/messaging/_executors/impl_eventlet.py22:15
jrollthat's the config that needs to change22:15
NobodyCamok calling it a day22:15
jrollotherwise I could do something like: cfg.set_defaults(_eventlet_opts, rpc_thread_pool_size=4)22:15
NobodyCamhave a good night22:16
lucas-afkjroll, right22:16
lucas-afkNobodyCam, have a good night!22:16
jrollbut I'd rather not be reaching into _executors either :/22:16
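Spelled out, the two options jroll is weighing, both as sketches; whether either runs early enough is exactly the question devananda raises below.

    from oslo.config import cfg

    # (a) Override via the global CONF object. Raises NoSuchOptError if
    #     oslo.messaging hasn't registered the option yet -- the
    #     ordering problem discussed below.
    cfg.CONF.set_default('rpc_thread_pool_size', 4)

    # (b) What jroll mentions: mutate the option list directly, at the
    #     cost of importing a private module.
    # from oslo.messaging._executors import impl_eventlet
    # cfg.set_defaults(impl_eventlet._eventlet_opts,
    #                  rpc_thread_pool_size=4)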
NobodyCam:)22:16
jrollnight NobodyCam22:16
devanandag'nght, NobodyCam !22:16
NobodyCamnight lucas-afk jroll devananda and all others :)22:17
devanandahmm, i haven't looked at the oslo libs enough yet to know the answer22:17
devanandahow do we change the CONF default at run time before instantiating the class whose __init__ registers it?22:17
jrolloh, ouch22:18
devanandadhellmann: ping22:18
jrollhrm22:18
lucas-afkyeah dunno as well22:18
* devananda drops an email to the list22:19
jrollso there's this https://github.com/openstack/oslo.messaging/search?q=rpc_thread_pool_size&ref=cmdform22:19
jrollI'm curious how zmq implementation uses that22:19
jrollassuming zmq doesn't use impl_eventlet somewhere22:19
jrolloh heh https://github.com/openstack/oslo.messaging/blob/06ab616d8ffb73fa1607878c52edbcbd3b54b91e/oslo/messaging/_drivers/impl_zmq.py#L88922:20
*** epim has joined #openstack-ironic22:20
*** epim has quit IRC22:20
lucas-afkheh22:20
devanandajroll: it probably does https://github.com/openstack/oslo.messaging/blob/06ab616d8ffb73fa1607878c52edbcbd3b54b91e/oslo/messaging/_drivers/impl_zmq.py#L3422:20
lucas-afkthe /search is pretty cool, I didn't know it existed22:21
jrollyeah I just linked to where they register it, deva22:21
devanandayea, i pasted while switching windows, heh22:21
devanandadidn't see your msg before i hit enter22:21
jrollheh22:21
*** sseago has joined #openstack-ironic22:22
* jroll crosses fingers22:28
devanandai have an email drafted, and might have spotted a solution in the mean time22:29
* devananda tests before hitting send22:29
jrollooo I'm curious to see this22:29
devanandanope, doesn't seem to work22:49
*** newell has quit IRC22:50
*** epim has joined #openstack-ironic22:50
*** zdiN0bot has quit IRC22:52
devanandaemail sent22:54
*** lucas-afk has quit IRC22:57
*** zdiN0bot has joined #openstack-ironic23:14
*** datajerk has quit IRC23:34
*** eguz has quit IRC23:36
*** yfujioka has joined #openstack-ironic23:36
*** eghobo has joined #openstack-ironic23:36
*** jgrimm has quit IRC23:36
*** krtaylor has quit IRC23:43
*** dwalleck has quit IRC23:50
jrollthanks for looking into that, devananda23:52