Wednesday, 2014-01-22

00:19 <openstackgerrit> dekehn proposed a change to openstack/ironic: Adds Neutron support to Ironic  https://review.openstack.org/66071
*** rongze has joined #openstack-ironic00:35
*** zul has joined #openstack-ironic00:37
*** rongze has quit IRC00:40
*** blamar has joined #openstack-ironic01:10
01:13 * NobodyCam wanders afk
*** jbjohnso has quit IRC01:19
*** harlowja has joined #openstack-ironic01:33
*** nosnos has joined #openstack-ironic01:35
*** rongze has joined #openstack-ironic01:46
*** sameer has quit IRC01:46
*** datajerk has joined #openstack-ironic01:48
*** datajerk has quit IRC01:58
*** rloo has quit IRC02:01
*** sameer has joined #openstack-ironic02:02
*** rloo has joined #openstack-ironic02:02
*** datajerk has joined #openstack-ironic02:02
*** rloo has quit IRC02:06
*** sameer has quit IRC02:06
*** rloo has joined #openstack-ironic02:06
*** rloo has quit IRC02:11
*** rloo has joined #openstack-ironic02:11
*** michchap has quit IRC02:24
*** michchap has joined #openstack-ironic02:25
*** rloo has quit IRC02:27
*** rloo has joined #openstack-ironic02:27
*** rloo has quit IRC02:29
*** vkozhukalov has joined #openstack-ironic02:30
*** rloo has joined #openstack-ironic02:30
*** rloo has quit IRC02:31
*** rloo has joined #openstack-ironic02:31
*** coolsvap_away has quit IRC02:33
*** rloo has quit IRC02:34
*** rloo has joined #openstack-ironic02:34
*** rloo has quit IRC02:41
*** rloo has joined #openstack-ironic02:42
*** rloo has quit IRC02:42
*** rloo has joined #openstack-ironic02:43
*** rloo has quit IRC02:46
*** rloo has joined #openstack-ironic02:46
*** epim has quit IRC02:56
03:14 <openstackgerrit> A change was merged to openstack/ironic: Use oslo.rootwrap library instead of local copy  https://review.openstack.org/66389
03:15 <openstackgerrit> A change was merged to openstack/ironic: Add missing "Filters" section to the ironic-images.filters  https://review.openstack.org/66390
03:16 <openstackgerrit> A change was merged to openstack/ironic: Remove the absolute paths from ironic-deploy-helper.filters  https://review.openstack.org/66400
*** rloo has quit IRC03:22
*** harlowja is now known as harlowja_away03:54
*** datajerk has quit IRC04:03
*** coolsvap has joined #openstack-ironic04:08
*** rongze has quit IRC04:52
*** sameer has joined #openstack-ironic04:54
*** sameer has quit IRC05:19
*** sameer has joined #openstack-ironic05:19
*** coolsvap is now known as coolsvap_away05:23
*** rongze has joined #openstack-ironic05:23
*** rongze has quit IRC05:28
*** rongze has joined #openstack-ironic05:35
*** rongze has quit IRC05:40
*** sameer has quit IRC06:00
*** blamar has quit IRC06:08
*** aignatov_ is now known as aignatov06:17
*** rongze has joined #openstack-ironic06:17
*** coolsvap_away is now known as coolsvap06:19
*** blamar has joined #openstack-ironic06:23
*** mrda has quit IRC06:35
*** nosnos_ has joined #openstack-ironic06:43
*** nosnos has quit IRC06:43
*** mrda__ is now known as mrda_away06:51
*** vkozhukalov has quit IRC06:52
*** aignatov is now known as aignatov_07:20
*** mdurnosvistov has joined #openstack-ironic07:29
*** jistr has joined #openstack-ironic07:43
07:48 <GheRivero> morning Ironic!
07:54 *** aignatov_ is now known as aignatov
07:55 <Haomeng> GheRivero: morning:)
07:57 <openstackgerrit> Jenkins proposed a change to openstack/ironic: Imported Translations from Transifex  https://review.openstack.org/68024
*** Haomeng has quit IRC08:03
*** Haomeng has joined #openstack-ironic08:04
*** aignatov is now known as aignatov_08:06
*** romcheg has joined #openstack-ironic08:21
*** romcheg1 has joined #openstack-ironic08:23
*** yuriyz has joined #openstack-ironic08:24
*** romcheg has quit IRC08:26
*** vkozhukalov has joined #openstack-ironic08:30
*** Alexei_987 has joined #openstack-ironic08:44
09:01 <openstackgerrit> Ghe Rivero proposed a change to openstack/ironic: Remane and update ironic-deploy-helper rootwrap  https://review.openstack.org/68340
*** derekh has joined #openstack-ironic09:03
*** mdurnosvistov has quit IRC09:08
*** aignatov_ is now known as aignatov09:10
*** vetalll has joined #openstack-ironic09:12
*** derekh has quit IRC09:18
*** aignatov is now known as aignatov_09:34
*** aignatov_ is now known as aignatov09:37
*** jistr has quit IRC09:37
*** jbjohnso has joined #openstack-ironic09:45
*** ndipanov has joined #openstack-ironic09:45
*** aignatov is now known as aignatov_09:49
*** aignatov_ is now known as aignatov09:53
*** jistr has joined #openstack-ironic09:57
*** athomas has joined #openstack-ironic10:00
*** martyntaylor has joined #openstack-ironic10:08
*** martyntaylor has quit IRC10:12
*** mdurnosvistov has joined #openstack-ironic10:15
*** martyntaylor has joined #openstack-ironic10:27
*** derekh has joined #openstack-ironic10:35
*** max_lobur_afk is now known as max_lobur10:39
*** jbjohnso has quit IRC10:45
*** romcheg has joined #openstack-ironic10:45
10:46 <GheRivero> what was the consensus about getting rid of the object dict behaviour?
10:46 *** romcheg1 has quit IRC
10:54 <max_lobur> Hi GheRivero
10:54 <GheRivero> hi max_lobur
10:55 <max_lobur> we decided to drop all the usages but not to drop the actual interface
10:55 <max_lobur> because the oslo object model still has it
10:55 <max_lobur> and once we merge with it we will get it again
10:56 <GheRivero> ok. thanks for the info
10:56 <max_lobur> we will probably want to create an intermediate class between the oslo object and ours, to make sure we're not using this interface (e.g. throw an exception in these methods) until it is dropped from oslo
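The intermediate-class idea max_lobur describes might look roughly like this (a hypothetical sketch; the class and method names are illustrative, not Ironic's actual object code):

```python
class IronicObject(object):
    """Stand-in for the oslo-derived base object, which still exposes
    dict-style access (illustrative only)."""

    def __getitem__(self, name):
        return getattr(self, name)


class NoDictAccess(IronicObject):
    """Intermediate class that forbids the dict interface, so new Ironic
    code cannot grow a dependency on it before oslo drops it."""

    def __getitem__(self, name):
        raise TypeError("dict-style access is deprecated; "
                        "use attribute access instead")


class Node(NoDictAccess):
    """Example concrete object inheriting the guard."""

    def __init__(self, uuid):
        self.uuid = uuid
```

With this in place, `node.uuid` works but `node['uuid']` raises, which flushes out any remaining dict-style usages in the codebase.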
10:57 *** lucasagomes has joined #openstack-ironic
10:58 <max_lobur> morning lucasagomes :)
10:58 <lucasagomes> morning max_lobur :)
10:59 <max_lobur> take a look https://bugs.launchpad.net/ironic/+bug/1271317
10:59 <max_lobur> <devananda> tl;dr -- let's just drop XML support and be done with it
11:04 <lucasagomes> max_lobur, btw I will test the other problem about the traceback this morning; yesterday I got busy with other stuff
11:05 * lucasagomes clicks on the link
11:05 <max_lobur> lucasagomes, you can test, but deva confirmed that he cannot reproduce it either
11:05 <max_lobur> https://bugs.launchpad.net/ironic/+bug/1244747
11:05 <lucasagomes> max_lobur, ah nice :D so it might have been the debug flag
11:05 <max_lobur> if you have time please test to be sure
11:06 <max_lobur> but I think it's OK now
11:06 *** michchap has quit IRC
11:06 <lucasagomes> ah good, so it's already closed
11:06 <max_lobur> yep
11:06 <lucasagomes> :D
11:06 *** michchap has joined #openstack-ironic
11:08 <lucasagomes> max_lobur, great email about xml
11:08 <lucasagomes> I agree with that
11:08 <lucasagomes> (and with the bug as well, will comment to express that)
11:08 <max_lobur> yep
11:08 <max_lobur> I totally agree too
11:08 <max_lobur> xml is always a headache
11:11 <lucasagomes> heh +1
11:11 <lucasagomes> yea, and supporting both is costly
11:12 <lucasagomes> that email does give a nice explanation/summary of the problems involved
11:12 <Haomeng> max_lobur: morning:)
11:12 <Haomeng> lucasagomes: morning:)
11:12 <max_lobur> morning Haomeng ! (:
11:13 <lucasagomes> morning Haomeng
11:14 <Haomeng> lucasagomes: :)
11:14 <Haomeng> max_lobur: :)
11:14 <max_lobur> need to save that email somewhere to show to my future customers :)
11:14 <lucasagomes> hah
11:15 <Haomeng> do you know how our api content_types are controlled? I debugged with pecan, still have no idea:)
11:16 <Haomeng> currently the supported type(s) are: ['application/json', 'application/xml']
11:16 <max_lobur> Haomeng, I found this in our api: v1.media_types = [MediaType('application/json',
11:16 <max_lobur>                           'application/vnd.openstack.ironic.v1+json')]
11:16 <max_lobur> I think that's it
11:16 <max_lobur> 'application/xml' currently is not supported as a content type
11:16 <max_lobur> only as an accept type
11:17 *** lucasagomes is now known as lucas-afk
11:18 <Haomeng> max_lobur: I debugged; currently we support ['application/json', 'application/xml']
11:18 <Haomeng> max_lobur: ok, let me check api.v1.media_types
11:19 <max_lobur> hm
11:19 *** lucas-afk is now known as lucasagomes
11:19 <max_lobur> I tried to send xml requests and got server errors
11:19 <Haomeng> I tried to send an invalid content_type in the http request, such as 'application/abc'
11:20 <Haomeng> max_lobur: we get the error - 2014-01-22 02:15:21.336 6377 ERROR pecan.core [-] Controller 'get_all' defined does not support content_type 'None'. Supported type(s): ['application/json', 'application/xml']
11:20 <Haomeng> max_lobur: I cannot find where we define 'application/xml'
11:23 <max_lobur> hm
11:24 <max_lobur> that's probably just a wrong message
11:24 <max_lobur> try to figure out where it starts
11:24 <max_lobur> because if you try to send xml you should get the server error too
11:24 <Haomeng> max_lobur: found some default content_types defined by wsexpose
11:24 <Haomeng> max_lobur: yes
11:25 <Haomeng> max_lobur: so I will try to disable xml support
11:25 <Haomeng> max_lobur: looks like it is defined by - http://nullege.com/codes/show/src@w@s@WSME-0.5b1@wsmeext@pecan.py/44/pecan.expose
11:26 <Haomeng> max_lobur: but not sure how to change the settings to disable xml support; other projects such as Glance support json only
11:26 <Haomeng> however Glance uses its own wsgi py
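The behaviour being chased here (the server advertising both 'application/json' and 'application/xml' while only JSON actually works) is a content-negotiation question. A minimal, framework-free sketch of JSON-only negotiation — illustrative logic, not WSME/pecan internals:

```python
# Media types the API actually serves; xml is deliberately absent.
SUPPORTED_TYPES = ['application/json']


def negotiate(content_type):
    """Return the media type to respond with, or None for a 415.

    `content_type` is the raw Content-Type header value (may be None
    when the client sent no header, as in the pecan error above).
    """
    if content_type is None:
        # Default when the client sends no Content-Type header.
        return 'application/json'
    # Strip any parameters, e.g. '; charset=utf-8'.
    base = content_type.split(';')[0].strip()
    if base in SUPPORTED_TYPES:
        return 'application/json'
    return None  # caller should answer 415 Unsupported Media Type
```

A real fix would restrict the types passed to the wsexpose decorator rather than negotiate by hand, but the accept/reject decision is the same shape.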
*** aignatov is now known as aignatov_11:26
*** rongze has quit IRC11:26
*** rongze has joined #openstack-ironic11:27
*** derekh has quit IRC11:29
*** rongze has quit IRC11:31
*** aignatov_ is now known as aignatov11:50
*** vetalll1 has joined #openstack-ironic12:05
*** vetalll has quit IRC12:07
*** rongze has joined #openstack-ironic12:09
*** rongze has quit IRC12:14
12:14 <openstackgerrit> A change was merged to openstack/ironic: Add [keystone_authtoken] to ironic.conf.sample  https://review.openstack.org/68256
*** ifarkas has quit IRC12:16
*** ifarkas has joined #openstack-ironic12:17
12:24 <openstackgerrit> Lucas Alvares Gomes proposed a change to openstack/ironic: Support preserve_ephemeral  https://review.openstack.org/68236
*** aignatov is now known as aignatov_12:26
*** rongze has joined #openstack-ironic12:39
*** jbjohnso has joined #openstack-ironic13:02
*** coolsvap has quit IRC13:03
*** aignatov_ is now known as aignatov13:11
*** yfujioka has joined #openstack-ironic13:35
*** derekh has joined #openstack-ironic13:35
*** rongze has quit IRC13:37
*** jdob has joined #openstack-ironic13:39
*** rongze has joined #openstack-ironic13:47
*** rloo has joined #openstack-ironic13:47
*** yfujioka has quit IRC13:52
*** yfujioka has joined #openstack-ironic13:53
*** aignatov is now known as aignatov_13:57
*** rloo has quit IRC13:58
*** rloo has joined #openstack-ironic13:59
*** datajerk has joined #openstack-ironic14:07
14:08 <openstackgerrit> A change was merged to stackforge/pyghmi: Add support for discrete sensors  https://review.openstack.org/68279
*** aignatov_ is now known as aignatov14:09
*** rloo has quit IRC14:09
*** yfujioka has quit IRC14:09
*** rloo has joined #openstack-ironic14:09
*** matsuhashi has joined #openstack-ironic14:14
*** rongze_ has joined #openstack-ironic14:14
*** rongze has quit IRC14:16
*** datajerk1 has joined #openstack-ironic14:17
*** rongze has joined #openstack-ironic14:17
*** rongze_ has quit IRC14:19
*** datajerk has quit IRC14:19
*** coolsvap has joined #openstack-ironic14:22
*** rongze_ has joined #openstack-ironic14:22
*** rongze has quit IRC14:25
*** rloo has quit IRC14:27
*** rloo has joined #openstack-ironic14:27
*** datajerk1 has quit IRC14:33
14:35 <openstackgerrit> Yuriy Zveryanskyy proposed a change to openstack/ironic: Add parameter for filtering nodes by maintenance mode  https://review.openstack.org/63937
*** datajerk has joined #openstack-ironic14:35
*** rloo has quit IRC14:36
*** rloo has joined #openstack-ironic14:36
*** rloo has quit IRC14:38
*** rloo has joined #openstack-ironic14:38
*** rloo has quit IRC14:39
*** rloo has joined #openstack-ironic14:40
*** matty_dubs|gone is now known as matty_dubs14:41
*** rloo has quit IRC14:42
*** rloo has joined #openstack-ironic14:43
*** rloo has quit IRC14:43
*** rloo has joined #openstack-ironic14:43
*** matsuhashi has quit IRC14:45
*** rongze_ has quit IRC14:49
*** linggao has joined #openstack-ironic14:53
*** nosnos_ has quit IRC14:57
*** rloo has quit IRC14:59
*** rloo has joined #openstack-ironic15:00
*** rongze has joined #openstack-ironic15:05
15:11 <NobodyCam> morning Ironic, says the man slowly waking up :)
15:12 <rloo> morning NobodyCam. WAKE UP! :)
15:13 <NobodyCam> lol morning rloo
15:14 <NobodyCam> :)
15:17 <rloo> btw NobodyCam, if you get a chance, maybe you can take a look at https://review.openstack.org/#/c/61859/. it is in response to a bug you filed.
15:17 <NobodyCam> oh :)
15:18 <yuriyz> morning rloo nobodycam and Ironic
15:18 <rloo> NobodyCam, just that the 'soln' is wrong, and it might be useful for the assignee to get your opinion. not sure.
15:18 <NobodyCam> morning yuriyz
15:18 <rloo> afternoon yuriyz!
*** rongze has quit IRC15:22
15:23 <max_lobur> morning Ironic :)
15:23 <NobodyCam> morning max_lobur ;)
15:27 <NobodyCam> rloo: interesting point about the multiple nodes, do you have a specific example of where this could be an issue?
15:28 <rloo> NobodyCam. No, but just reading the code, I can see it won't work. I think Ghe said he got an exception.
15:29 <rloo> NobodyCam. I looked at that a while ago. Just thought it'd be good to give the person feedback sooner rather than later.
15:29 <linggao> Morning NobodyCam.
15:30 <NobodyCam> morning linggao
15:30 <linggao> Can you let me know where the Ironic hypervisor (driver) is located?
15:30 <NobodyCam> linggao: https://review.openstack.org/#/c/51328/9
15:30 <NobodyCam> there?
15:31 *** lucasagomes is now known as lucas-hungry
15:31 <linggao> NobodyCam, yes it is!  I thought it was merged already :-)
15:32 <NobodyCam> no
15:32 <NobodyCam> :(
15:33 <NobodyCam> we need to land the pxe / boot options patch from don so it can actually work
15:37 <openstackgerrit> Max Lobur proposed a change to openstack/ironic: Fix JSONEncodedDict default values  https://review.openstack.org/68413
*** rloo has quit IRC15:46
*** rloo has joined #openstack-ironic15:46
*** aignatov is now known as aignatov_15:49
15:50 <GheRivero> morning all
15:50 <NobodyCam> morning GheRivero :)
15:51 <rloo> morning GheRivero.
15:55 <NobodyCam> lucas-hungry: around?
*** datajerk has quit IRC15:56
15:59 <NobodyCam> brb
*** datajerk has joined #openstack-ironic16:04
16:05 <rloo> yuriyz: wrt https://review.openstack.org/#/c/63937/, patch 4, my comment about 'chassis_id' instead of 'chassis' was my bad, i forgot to delete the comment. Sorry. (I can't seem to add a comment to that now.)
16:05 <yuriyz> ok :)
*** vkozhukalov has quit IRC16:07
*** hemna has quit IRC16:10
*** aignatov_ is now known as aignatov16:19
16:21 *** lucas-hungry is now known as lucasagomes
16:21 <lucasagomes> NobodyCam, morning
16:22 <lucasagomes> morning rloo yuriyz linggao :D
16:22 <rloo> afternoon lucasagomes!
16:22 <NobodyCam> morning / afternoon lucasagomes
16:22 <lucasagomes> :) I will take a look at the patch
16:23 <linggao> morning lucasagomes, rloo, yuriyz, GheRivero and everyone.
16:26 <rloo> morning linggao ;)
16:26 <openstackgerrit> Yuriy Zveryanskyy proposed a change to openstack/ironic: Add parameter for filtering nodes by maintenance mode  https://review.openstack.org/63937
*** datajerk has quit IRC16:28
*** matty_dubs has quit IRC16:35
*** martyntaylor has quit IRC16:39
*** datajerk has joined #openstack-ironic16:40
*** aignatov is now known as aignatov_16:42
*** vetalll1 has quit IRC16:43
*** matty_dubs has joined #openstack-ironic16:47
16:49 <NobodyCam> lucasagomes: off the top of your head, do you recall if the nova driver will need to call the neutron update_port before issuing deploy?
16:49 <lucasagomes> NobodyCam, I think not; I think the pxe module will do that
16:50 *** datajerk has quit IRC
16:50 <lucasagomes> NobodyCam, https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/pxe.py#L478
16:50 <NobodyCam> :) that was what I recalled was thinking...
16:50 <NobodyCam> that last sentence made no sense
16:50 <NobodyCam> lol
16:51 <NobodyCam> :) that was what I recalled...
16:51 <lucasagomes> xD
16:51 <lucasagomes> I can try to make a patch for that if u want
16:51 <lucasagomes> that stub is already being called as part of the deploy
16:52 <NobodyCam> let's see what I get in testing today
16:52 *** yuriyz has quit IRC
16:53 <lucasagomes> ack
*** datajerk has joined #openstack-ironic16:53
*** matty_dubs has quit IRC16:55
*** datajerk has quit IRC16:58
*** mdurnosvistov has quit IRC16:59
*** datajerk has joined #openstack-ironic16:59
*** datajerk has quit IRC17:05
*** datajerk has joined #openstack-ironic17:07
17:08 <NobodyCam> bbt brb
*** matty_dubs has joined #openstack-ironic17:09
17:09 <devananda> morning, all
17:11 <NobodyCam> morning devananda
17:11 *** matty_dubs is now known as matty_dubs|lunch
17:11 <lucasagomes> morning devananda
17:11 <rloo> morning devananda
*** matty_dubs|lunch has quit IRC17:15
17:15 <devananda> awesome job landing i2 fixes yesterday guys
17:16 <devananda> i think we got ~5 in at the last minute :)
17:16 <devananda> https://launchpad.net/ironic/+milestone/icehouse-2 is great
17:16 *** ndipanov has quit IRC
17:16 <NobodyCam> woo hoo :)
17:17 <rloo> yay! Are you/we going to try to land more changes for I2?
17:17 <devananda> rloo: nope. it's being cut now()
17:17 <rloo> ooo. I thought tomorrow was the deadline.
17:17 <devananda> i only had to defer 3 bug fixes though
17:17 <devananda> ah. no. yesterday was the deadline
17:18 <rloo> that explains the long queue!
17:18 <devananda> mp-cut is tuesdays. there's a day or two for backports; thursday is the final
17:19 <devananda> rloo: https://wiki.openstack.org/wiki/ReleaseTeam/How_To_Release#MP_cut_.28Tuesday.29
17:19 <rloo> cool. so anything that lands now is for i3.
17:19 <devananda> basically, yep
17:20 <rloo> the ironic driver didn't land, so ironic will be in i2, but not quite usable?
17:21 *** datajerk has quit IRC
17:21 <lucasagomes> nice
17:22 <devananda> rloo: the nova driver is a separate thing entirely. a) we don't control when it lands, b) ironic is functional without it
17:22 <rloo> devananda. ok :)
17:23 <devananda> rloo: i mean, it's much better WITH it. but it's a separate thing in a separate project. nova gets to decide how important that is to their release cycle
17:23 *** ndipanov has joined #openstack-ironic
17:24 *** matty_dubs|lunch has joined #openstack-ironic
17:24 <rloo> devananda: got it. thx!
17:25 <devananda> also guys, our plans for i3 are a bit more aggressive :) https://launchpad.net/ironic/+milestone/icehouse-3
17:25 <devananda> and please, if you see a bug or bp that you expect in Icehouse and it's not targeted, update it or talk to me
17:26 <NobodyCam> devananda: rloo and GheRivero raised some very good points on https://bugs.launchpad.net/bugs/1259346
17:27 <NobodyCam> I need to look into them a bit more
17:27 <rloo> it would be very nice if we could land the dict* changes and ()-instead-of-\ changes cuz they are PITAs.
17:29 <rloo> in case you have nothing to do. but i think ()-instead-of-\ isn't ready yet (well, my opinion anyway).
17:30 *** hemnafk is now known as hemna
17:31 <NobodyCam> post bbt walkies... bbiafm
*** jistr has quit IRC17:32
*** lucasagomes has quit IRC17:40
17:40 <devananda> NobodyCam, rloo - re: don't pass node // task has >1 node
17:40 *** blamar_ has joined #openstack-ironic
17:40 <devananda> yes, this is something we may start thinking about
17:43 <rloo> devananda: i don't believe I've seen a case where a task has > 1 node, but I assume we may want that functionality?
17:43 *** blamar has quit IRC
17:43 *** blamar_ is now known as blamar
17:45 <devananda> here's a thought. what if (in the long run) we restructure the ConductorManager and TaskManager code into something more like this...
17:45 <devananda> http://paste.openstack.org/show/61693/
17:45 <devananda> actually this has been my plan all along. but we have to walk before we can run, as they say
17:47 <rloo> ha ha. I don't think I've ever heard that wrt coding!
17:47 <NobodyCam> in the above example, is nodes a [list] of node_uuids or node objects?
17:48 <rloo> i think Nodes
17:49 <rloo> unless it is cheap to get the Node from the uuid
17:50 <devananda> bigger example
17:50 <devananda> http://paste.openstack.org/show/61694/
17:50 <devananda> list of node uuids, actually
17:50 <devananda> because task_manager.acquire() will create the node objs when it instantiates the task
17:51 <devananda> we could change that
17:51 <devananda> task_manager.acquire could accept node objects and just call obj.refresh()
17:51 *** datajerk has quit IRC
17:52 <devananda> so, also, a relevant discussion is what tier the group-awareness should be at
17:52 <rloo> ah sorry NobodyCam. node uuids are probably better.
17:52 <devananda> things like Heat expect that they operate on discrete things
17:52 <devananda> not on groups of things
17:53 <rloo> cuz until task_manager does the acquire, any Node could be out of sync.
17:53 <devananda> rloo: right
17:53 <devananda> rloo: hence either task_manager sets up the Node obj, or it must call refresh() on it
17:53 <rloo> i like the clean look of the new code.
17:54 <devananda> it moves a lot of logic out of ConductorManager and into TaskManager
17:54 <NobodyCam> refresh() seems kinda hackish to me?
17:54 *** datajerk has joined #openstack-ironic
17:54 <rloo> i think when I looked before, it was only the operation that modified a node that passed a Node object (with the changes) to the ConductorManager.
17:54 <devananda> NobodyCam: how so? objects are designed to be long-lived and refreshed when needed, eg, after being passed over RPC
17:55 *** lucasagomes has joined #openstack-ironic
17:55 *** lucasagomes has quit IRC
17:55 <NobodyCam> devananda: true, guess I just like the get-it-when-you-need-it approach
17:55 *** lucasagomes has joined #openstack-ironic
17:56 <devananda> lifeless: if you're around, this mockup may be interesting. we talked about this ~6mo ago, and we're still probably another few months from doing anything with it. but worth your input on layer effects: http://paste.openstack.org/show/61694/
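The paste itself is no longer available, but the shape being discussed — a task_manager.acquire() that atomically locks a group of nodes by uuid — could be sketched like this (a purely hypothetical reconstruction; names, in-process locks, and the RuntimeError are illustrative, not Ironic's real TaskManager):

```python
import contextlib
import threading

# node_uuid -> threading.Lock; stands in for the DB-backed reservation.
_locks = {}
_registry_lock = threading.Lock()


@contextlib.contextmanager
def acquire(context, node_uuids):
    """Atomically lock a group of nodes for the duration of a task.

    Locking in sorted-uuid order gives every caller the same lock
    ordering, so two tasks requesting overlapping groups cannot
    deadlock; one of them fails fast instead.
    """
    taken = []
    try:
        for uuid in sorted(node_uuids):
            with _registry_lock:
                lock = _locks.setdefault(uuid, threading.Lock())
            if not lock.acquire(False):
                raise RuntimeError("node %s is already locked" % uuid)
            taken.append(lock)
        # Real code would yield a task object holding Node objects
        # (created here, or refreshed if passed in).
        yield list(node_uuids)
    finally:
        for lock in reversed(taken):
            lock.release()
```

A caller would use it as `with acquire(context, uuids) as nodes: ...`, and the locks are released whether the body succeeds or raises.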
17:56 *** matty_dubs|lunch is now known as matty_dubs
17:57 <devananda> rloo: iirc, only update_node expects a changed node object
17:57 <devananda> rloo: the rest of the CM methods expect a uuid
17:58 <devananda> tangent: we should probably pull in an update of nova's object code before the icehouse release
17:58 <devananda> so we don't slip further out of sync
17:59 <rloo> devananda. ah yes, we already made that change to use node_ids.
18:00 <devananda> it's more efficient for RPC when we really don't need to either manipulate the node_obj in the API or pass the whole obj over RPC
18:00 *** datajerk has quit IRC
18:01 <rloo> to get back to the bug that started this discussion -- ??
18:03 <devananda> yes?
18:04 <rloo> what to do with it? should the person continue working on it?
18:05 <rloo> (or down the path they've taken, which doesn't seem right.)
18:05 *** vkozhukalov has joined #openstack-ironic
18:11 <devananda> hmm
18:12 *** datajerk has joined #openstack-ironic
18:13 <devananda> i think the question is: do we want drivers to be aware of groups of nodes (in which case they need to receive a task)
18:14 <devananda> or just operate on one node at a time (in which case they should receive the node)
18:15 <rloo> even with groups of nodes, unless they need something other than the list of nodes from the task, they don't need the task.
18:15 <devananda> but they do
18:16 <devananda> the require_exclusive_lock decorator
18:16 <devananda> that currently pulls the task from the *args
18:16 <rloo> oh... I'm in over my head now...
18:16 <devananda> i suppose we could try to pull it from the stack instead?
18:16 *** derekh has quit IRC
18:16 <devananda> we are also accessing task.context in a few places, but we could move that to *args
18:18 *** mdurnosvistov has joined #openstack-ironic
18:18 <devananda> so instead of e.g. def tear_down(self, task, node) it would be def tear_down(self, context, node)
18:22 <rloo> just did a grep; pxe.py uses the task to e.g. get resources
18:23 *** epim has joined #openstack-ironic
18:23 <devananda> yes. which is a short-cut to avoid doing a DB query
18:24 <devananda> to fetch the Ports associated with a node
18:24 <devananda> since those are already bundled in the Task
18:24 <devananda> that could be refactored
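The decorator behaviour devananda mentions — pulling the task out of the positional args to check its lock before running a driver method — could look roughly like this (an illustrative sketch, not Ironic's actual require_exclusive_lock; the Task class here is a minimal stand-in):

```python
import functools


class Task(object):
    """Minimal stand-in for a TaskManager task with a lock flag."""

    def __init__(self, shared):
        self.shared = shared


def require_exclusive_lock(f):
    """Refuse to run a driver method if the task holds only a shared lock.

    This is why driver methods currently receive the task: the
    decorator inspects the first positional arg after self.
    """
    @functools.wraps(f)
    def wrapper(self, task, *args, **kwargs):
        if task.shared:
            raise RuntimeError("%s requires an exclusive lock" % f.__name__)
        return f(self, task, *args, **kwargs)
    return wrapper


class Driver(object):
    @require_exclusive_lock
    def tear_down(self, task, node):
        return "torn down %s" % node
```

Switching the signature to (self, context, node) would break this pattern, which is the refactoring cost being weighed in the discussion.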
18:26 <rloo> I am wondering whether it is worth deferring changes here until we have more end-to-end working.
*** Alexei_987 has quit IRC18:30
*** harlowja_away is now known as harlowja18:32
18:36 <lifeless> devananda: morning
18:37 <devananda> lifeless: mornin!
18:46 <openstackgerrit> Lucas Alvares Gomes proposed a change to openstack/ironic: Handle multiple exceptions raised by jsonpatch  https://review.openstack.org/68457
18:48 <NobodyCam> devananda: I think the issue lsmola was / is having is due to incorrect (incomplete) instance_type_extra_specs in his nova conf
18:49 <NobodyCam> note his nova conf (last line) http://paste.openstack.org/show/61696/
18:50 *** athomas has quit IRC
18:51 <NobodyCam> brb
19:14 <openstackgerrit> Devananda van der Veen proposed a change to openstack/ironic: Enable driver methods without explicit 'task' arg  https://review.openstack.org/68463
19:14 <devananda> rloo: ^
19:15 <rloo> ask and ye shall receive ;) thx devananda!
19:17 <lifeless> devananda: ok, so about that paste - a little more context would be good
19:17 <devananda> lifeless: operate on a group of nodes at once
19:17 <devananda> lifeless: the conversation from ~6mo ago where we talked about whether grouping should be done /only/ at a higher layer, or whether we should allow grouping within ironic as well
19:17 <lifeless> oh, ok
19:18 <lifeless> mmm
19:18 <lifeless> I remember now
19:18 <devananda> this is a code sketch of how we could enable it
19:18 <lifeless> I was arguing that the Ironic API shouldn't group
19:18 <devananda> yep
19:18 <lifeless> that you should coalesce instead within Ironic
19:18 <devananda> ah
19:18 <lifeless> which the code you've pasted doesn't show
19:19 *** max_lobur is now known as max_lobur_afk
19:19 <devananda> well, true, but if we coalesced adjacent API requests within some middle tier, this code would perform the action
19:19 <devananda> on that group
19:19 <lifeless> and I argued this because you'll get much bigger wins if you don't require every possible client to be fixed
19:19 <lifeless> sure
19:19 <devananda> as a code structure for "do something on a group of nodes" i think this is pretty clean
19:19 <lifeless> ok, so about that code specifically: when coalescing you need to preserve the original requests so you can raise separate exceptions
19:20 <devananda> but, yes, there are issues like that any time you coalesce separate requests
19:20 <devananda> that also raises the sync vs async discussion
19:20 <devananda> we're moving more towards async
19:20 <lifeless> also you probably need to try all the nodes even if one fails
19:20 <lifeless> and when raising, signal that x, y and z failed and a, b, c were all ok
19:20 <devananda> esp with max_lobur_afk's threading proposal
19:20 <lifeless> I haven't seen his threading proposal
19:20 *** mdurnosvistov has quit IRC
19:21 <lifeless> what else - you're not re-raising at the end of your except
19:21 <lifeless> so no one will know if you succeeded or failed
19:21 <lifeless> I have a couple of thoughts here
19:21 <devananda> https://review.openstack.org/#/c/66368/1
19:22 <lifeless> firstly, I know some vendors have multiple-machine APIs; your code is actually a layer up from that
19:22 <lifeless> they will need to do their own coalescing
19:22 <lifeless> per-conductor or even cross-conductor
19:23 <lifeless> so, I'd say - why bother coalescing /here/? the driver seems like a more obvious place to do it
19:23 <lifeless> secondly
19:24 <lifeless> the points where coalescing is useful are (not exclusively, but I'd argue primarily) around high-latency operations
19:24 *** epim has quit IRC
19:24 <lifeless> e.g. imagine multicast deploys
19:24 <lifeless> for a single deploy you want to cache the image, start streaming to the group, power the node, wait for complete, stop streaming to the group
19:24 *** epim has joined #openstack-ironic
19:25 *** datajerk has quit IRC
19:25 <lifeless> for a second, later deploy, you want to use that cached image, start streaming, power the node, wait for complete, stop streaming
19:25 *** zenfish has joined #openstack-ironic
19:25 <lifeless> for a second *concurrent* deploy, it's exactly the same, but you want to not start a second stream, and stop streaming only when both images are finished
19:26 <lifeless> which suggests to me that you want a hand-off from the per-node deploy code to a multicast stream management thing, which has a refcount and manages the image cache etc
19:26 <lifeless> that could also live in the conductor layers, have its own ring, and be distributed, but with only one active for a given image at any time
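The refcounted stream manager lifeless describes could be sketched like this (purely illustrative; the class name is made up and the stream start/stop side effects are stubbed out to a set):

```python
import threading


class StreamManager(object):
    """Share one multicast stream per image among concurrent deploys.

    The first deploy to join an image starts its stream; the last one
    to leave stops it. Deploys arriving minutes apart still coalesce,
    as long as their lifetimes overlap.
    """

    def __init__(self):
        self._refs = {}          # image_id -> number of active deploys
        self._lock = threading.Lock()
        self.active = set()      # image_ids with a running stream (stub)

    def join(self, image_id):
        with self._lock:
            if self._refs.get(image_id, 0) == 0:
                self.active.add(image_id)     # stub: start streaming here
            self._refs[image_id] = self._refs.get(image_id, 0) + 1

    def leave(self, image_id):
        with self._lock:
            self._refs[image_id] -= 1
            if self._refs[image_id] == 0:
                self.active.discard(image_id)  # stub: stop streaming here
```

Each per-node deploy would call join() before waiting for completion and leave() afterwards; the manager, not the deploy code, decides when the stream actually starts and stops.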
19:27 <devananda> hmm
19:27 <devananda> that's an interesting one, yea
19:27 <lifeless> as a driver writer writing the pxe driver, I'd want that stuff to live in the vendor extension area, so it doesn't need core changes
19:27 <devananda> join existing operation
19:28 <lifeless> note that this example above isn't coalescing *adjacent* requests
19:28 <lifeless> it's coalescing over possibly multiple minutes apart
19:28 <devananda> yep
19:28 <lifeless> and if you consider heat rolling updates, which might be updating (say) no more than 10% of the cluster at once
19:28 <lifeless> we'll trigger this
19:29 <lifeless> because we'll start a few, wait for finish, start one, wait for finish of any other, start one, repeat
19:29 <lifeless> [*] note that bittorrent or other things along the same lines will have the same characteristics
19:29 <devananda> indeed
19:30 <devananda> that kind of optimization will need to live within the driver
19:30 <lifeless> so - personally - I wouldn't do any coalescing work at all at this point; instead I'd look at making the design support this sort of message passing and workers
19:32 *** datajerk has joined #openstack-ironic
19:36 <lifeless> because I think you'll get more bang for buck
19:36 *** datajerk has quit IRC
19:36 <lifeless> and not have to deal with questions like coalescing requests for different conductors
19:37 *** datajerk has joined #openstack-ironic
19:37 <devananda> right, several points in there are interesting
19:38 <devananda> multi-machine APIs doing their own coalescing seems not to fit with their requirements
19:38 <devananda> which is that some layer above the hardware understands the group API that said hardware exposes
19:38 <devananda> this mockup could be a step towards addressing that
19:39 <lifeless> 'their' - do you mean vendor features?
19:39 <devananda> yes
19:39 <lifeless> ok
19:39 <lifeless> so I think their driver would do the same basic framework I described for multicast
19:40 <devananda> tuskar folks have been looking at expressing groupings - arbitrary groupings - in the UI
19:40 <lifeless> add in a sleep(1) or something in the 'and now we call our vendor backend library' step, and that would let them let more requests coalesce fairly naturally
19:40 <devananda> and asked how to model that in ironic
19:40 <lifeless> sadface
19:40 <devananda> to do that we need something more than just the notion of a chassis
19:40 <lifeless> I think there is some confusion afoot there
19:40 <devananda> possibly
19:40 <devananda> i pushed back :)
19:40 <lifeless> in that I've suggested tags as a useful adhoc thing
19:41 <devananda> saying that we should store tags and allow search-by-tags and such
19:41 <lifeless> and that for anything functional like HA distribution we need to model the specifics
19:41 <lifeless> arbitrary grouping for grouping's sake is not something I believe tuskar needs
19:41 <lifeless> or that users need
19:41 <lifeless> devananda: I suspect you and I are entirely on the same page here :)
19:41 <devananda> i think so :)
19:42 <devananda> also, i think there are situations where ironic does need some understanding that nodes x, y, z are discrete but not independent
19:42 <devananda> eg, when it's not possible to perform action A on x without affecting y and z
19:43 <devananda> i don't think we need that soon, though
19:44 <devananda> taking a step back -- this pseudocode came up as a way to clarify how we're handling locks and passing requests to drivers
19:44 <devananda> does our api allow atomically locking >1 node?
19:45 <devananda> *internal api
19:45 <devananda> does that internal api allow passing >1 node to a driver at once?
19:45 <devananda> these are related questions
19:45 *** vkozhukalov has quit IRC
19:46 <devananda> if the RESTful API only allows operations on one node at a time -- never operations on multiple nodes OR operations on a single group-of-nodes -- which I think is much clearer and simpler --
19:46 <devananda> then how should the code model the coalescing of adjacent (or non-adjacent) requests?
19:47 <devananda> using one TaskManager to contain a grouping of nodes would allow that coalescing to be done at the API or Conductor tiers, rather than inside a driver
19:47 <devananda> is that better? i'm not sure yet :)
19:48 * devananda gets coffee and breakfast and continues ruminating
*** datajerk has quit IRC19:49
*** rloo has quit IRC19:51
*** rloo has joined #openstack-ironic19:52
*** rloo has quit IRC19:53
*** rloo has joined #openstack-ironic19:53
lifelessroger19:54
lifelessso, it seems to me that having one-node-lock-at-a-time is going to be much less likely to discover bugs, as it's simpler code19:55
lifelessso I'd just do that everywhere19:55
*** rloo has quit IRC19:58
*** rloo has joined #openstack-ironic19:59
*** datajerk has joined #openstack-ironic20:01
*** datajerk has quit IRC20:08
NobodyCamdevananda: when you're around a keyboard I have a question on your nova-ironic dib element ( https://review.openstack.org/#/c/66461 )20:10
*** rloo has quit IRC20:10
devanandaNobodyCam: i'm sort of here20:10
*** rloo has joined #openstack-ironic20:10
NobodyCamit's an easy question20:11
*** derekh has joined #openstack-ironic20:11
NobodyCamI was looking at https://review.openstack.org/#/c/66461/1/elements/nova-ironic/install.d/80-pxelinux20:11
NobodyCambut most of that seems to have already landed:20:11
NobodyCamhttps://github.com/openstack/tripleo-image-elements/blob/master/elements/ironic-conductor/install.d/68-ironic-tftp-support20:11
NobodyCamdid you want to replace that?20:12
devanandareplace with?20:15
NobodyCamyour review?20:15
NobodyCamI was looking at https://review.openstack.org/#/c/66461/1/elements/nova-ironic/install.d/80-pxelinux20:15
devanandaseems totally unrelated20:16
devanandaoh!20:16
NobodyCamreally20:16
devanandathat file shouldn't be there20:16
NobodyCam:-p20:16
NobodyCamok20:16
devanandanova-compute doesn't need tftpdir20:16
NobodyCamI'm working thru that now20:17
NobodyCamwill remove that file :)20:17
devanandaawesome, ty. i haven't had time...20:17
NobodyCam:)20:17
*** datajerk has joined #openstack-ironic20:20
*** lucasagomes is now known as lucas-afk20:21
*** rloo has quit IRC20:26
*** rloo has joined #openstack-ironic20:26
*** rloo has quit IRC20:26
*** rloo has joined #openstack-ironic20:27
*** rloo has quit IRC20:29
*** rloo has joined #openstack-ironic20:29
* NobodyCam makes a bagel20:30
*** rloo has quit IRC20:30
*** rloo has joined #openstack-ironic20:31
*** datajerk has quit IRC20:34
devanandalifeless: python question -- note this is just a proof-of-concept at this stage to see if I could do it -- is this particularly bad? https://review.openstack.org/#/c/68463/1/ironic/conductor/task_manager.py20:34
*** mrda_away is now known as mrda20:35
lifelessdevananda: meep! why?20:47
devanandalifeless: wanted to see if i could remove the "task" parameter from driver methods20:47
devanandawhile still using the @require_exclusive_lock decorator20:47
devanandain fact, i can - but it seems terrible :)20:48
*** datajerk has joined #openstack-ironic20:48
devanandait seems like nonlocal would be better, but that's not in py2.7, and i couldn't find a similar solution20:49
devanandawell, nonlocal also isn't quite what i want, i think20:49
NobodyCambrb20:49
devanandalifeless: put another way, is there a pythonic way to pass something to a decorator but NOT to the method it decorates?20:50
*** rloo has quit IRC20:55
*** rloo has joined #openstack-ironic20:55
*** datajerk has quit IRC20:57
lifelessdevananda: just don't pass it when you call the decorated thing20:58
devanandalifeless: delete it from *args?20:59
lifelessdef wrapper(*args, foo=None, **kwargs):20:59
lifeless   ... f(*args, **kwargs)21:00
devanandasometimes the obvious is too simple :)21:00
devanandathanks21:00
lifelessor if task is first21:00
lifelesswrapper(task, *args, **kwargs):21:00
lifeless  f(*args, **kwargs)21:00
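[Editor's note: lifeless's suggestion above — have the wrapper accept the extra argument and simply not forward it to the decorated function — can be sketched as below. The names (`require_exclusive_lock`, `power_on`, `Task`) follow the discussion but are illustrative, not Ironic's real code.]

```python
# Sketch: a decorator that consumes an argument (`task`) itself,
# so the decorated driver method never receives it.
import functools


def require_exclusive_lock(f):
    @functools.wraps(f)
    def wrapper(task, *args, **kwargs):
        # the decorator inspects `task`, then drops it before calling f
        if not getattr(task, "exclusive", False):
            raise RuntimeError("exclusive lock required")
        return f(*args, **kwargs)
    return wrapper


@require_exclusive_lock
def power_on(node_id):
    # note: no `task` parameter in the driver method's signature
    return "powering on %s" % node_id


class Task(object):
    exclusive = True


print(power_on(Task(), "node-1"))  # -> powering on node-1
```

This is the second variant lifeless shows (`task` as the first positional argument); his first variant, `def wrapper(*args, foo=None, **kwargs)`, uses keyword-only argument syntax that works on Python 3 but not the py2.7 mentioned earlier in the discussion.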
*** mdurnosvistov has joined #openstack-ironic21:11
*** zenfish has quit IRC21:18
*** lucas-afk has quit IRC21:25
*** datajerk has joined #openstack-ironic21:26
*** rloo has quit IRC21:27
*** rloo has joined #openstack-ironic21:27
*** datajerk has quit IRC21:31
*** datajerk has joined #openstack-ironic21:35
*** rloo has quit IRC21:35
*** rloo has joined #openstack-ironic21:35
*** rloo has quit IRC21:42
*** datajerk has quit IRC21:44
*** rloo has joined #openstack-ironic21:44
*** mdurnosvistov has quit IRC21:49
*** derekh has quit IRC22:03
NobodyCambrb22:04
*** datajerk has joined #openstack-ironic22:05
*** linggao has quit IRC22:06
*** ColinTaylor has joined #openstack-ironic22:08
*** datajerk has quit IRC22:17
*** matty_dubs is now known as matty_dubs|gone22:17
*** datajerk has joined #openstack-ironic22:19
*** jdob has quit IRC22:27
*** datajerk has quit IRC22:38
*** datajerk has joined #openstack-ironic22:40
*** datajerk has quit IRC22:41
lifelessdevananda: sorry, ESTUFF22:54
lifelessdevananda: I think we sorted that though, no?22:55
*** hstimer has joined #openstack-ironic23:15
*** jbjohnso has quit IRC23:18
*** romcheg has quit IRC23:27
devanandalifeless: yep. been sketching it out for the list23:33
*** mdurnosvistov has joined #openstack-ironic23:36
dkehnNobodyCam: u around23:55
NobodyCammaybe23:56
NobodyCami'm currently Waiting for seed VM to boot.23:57
NobodyCam:-p23:57
dkehnNobodyCam: I'm seeing some strange things after running the generate_sample.sh, no errors, tox -epep8 is giving me a diff error on include_service_catalog=true?/23:58
dkehnNobodyCam: did a rebase all seems good, one conflict but resolved23:58
dkehnNobodyCam: think I'll hand edit to remove it and see23:59
NobodyCamin pxe.py?23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!