Wednesday, 2014-02-05

*** kobier has quit IRC00:07
*** thedodd has quit IRC00:19
*** matsuhashi has joined #openstack-ironic00:25
*** r-mibu has joined #openstack-ironic00:30
*** jbjohnso has joined #openstack-ironic00:34
*** nosnos has joined #openstack-ironic00:57
*** nosnos has quit IRC01:01
*** nosnos has joined #openstack-ironic01:01
*** hemna has quit IRC01:02
*** rloo has quit IRC01:11
openstackgerritJarrod Johnson proposed a change to stackforge/pyghmi: Make pyghmi tolerate arbitrary threading models  https://review.openstack.org/7117401:20
*** digambar has quit IRC01:26
devanandalifeless: what's your opinion on futures?01:30
lifelessdevananda: the concept in general - fine. The python library specifically, I haven't used.01:35
devanandalifeless: ack, thanks01:38
lifelessdevananda: I'm curious what Ironic needs async scatter-gather for ?01:40
lifelessdevananda: remembering that with physical IO involved, nearly any concurrency is a pessimisation01:40
devanandayes01:40
devanandaa layer above that01:40
lifelessdevananda: e.g. I'm seriously wondering how to knobble nova-compute to stop doing concurrent image downloads from glance right now.01:40
lifelesscause its the suck01:40
devanandaknobble... nice word01:41
devanandalifeless: take a look at the ironic pxe driver. IIRC, GheRivero already added that there01:41
lifelessdevananda: added concurrency or added single-op-at-a-time?01:42
* devananda double checks the code01:42
lifelessdevananda: anyhow, whats ironic considering futures for? I recall a review that mentioned it01:42
devanandaapi<->conductor tier01:42
lifelessah01:42
lifelessyeah, ok that could make sense01:43
devanandawhen an RPC request is received by the conductor, take the TaskManager lock, start off a future, then fire back the "OK, i started it"01:43
lifelesshuh, wait01:43
lifelesswhere's the async in that01:43
devanandathe requested action may take a long time01:43
lifelessyes...01:43
devanandauser gets answer once the job is successfully /started/01:43
devanandaor user gets an error if it can't be started01:44
devanandathat much is sync01:44
devanandathe job completes async01:44
lifelessI thought there was a worker pool already associated with the rpc layer01:44
lifelessso you're going to use a worker pool from the rpc worker threads?01:44
devanandaworker pool?01:44
lifelessyeah01:44
devanandathis isn't gearman01:44
devananda?01:44
devanandasorry, greenthread01:45
devanandalifeless: call'able methods return a message to the caller when the invoked method exits. cast'able methods don't return anything01:47
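(A self-contained toy of the call vs cast semantics described here; the helpers are illustrative, not the oslo RPC API itself:)

    import threading

    def remote_method(node_id):
        # stand-in for a conductor-side method invoked over RPC
        return 'power state change requested for %s' % node_id

    def call(node_id):
        # 'call' semantics: the caller blocks until the invoked method
        # exits, and gets its return value (or exception) back
        return remote_method(node_id)

    def cast(node_id):
        # 'cast' semantics: fire-and-forget; nothing comes back, so the
        # caller never learns whether the method succeeded
        threading.Thread(target=remote_method, args=(node_id,)).start()

    print(call('node-1'))  # blocks and prints the result
    cast('node-1')         # returns immediately; the result is discarded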
lifelessdevananda: I'm confused01:48
lifelessdevananda: I know the cast vs call difference01:48
lifelessdevananda: I was fairly sure in nova that there wasn't a single RPC service thread for the conductor (to use an example), rather there is a [green]thread pool01:48
devanandaahh01:48
lifelessso message from amqp -> worker thread in the thread pool01:49
devanandagotcha, yes01:49
lifelessand what you're saying is that that worker thread is now going to use *another* thread pool.01:49
lifelesseither I'm confused, or this is bonkers.01:49
devanandahah01:49
devanandait may be bonkers. we may need to solve it slightly differently.01:50
lifelessIn Python, more concurrency is not the answer to the problem, *whatever* the problem is.01:50
lifelessdevananda: so what is the problem ?01:50
lifelessdevananda: and, am I confused ?01:50
lifelessdevananda: 'rpc_thread_pool_size'01:51
devanandathe goal is being able to validate a request and start an action, then inform the user that the action was started01:51
lifelessdevananda: defaults to 64 - in Ironic /openstack/common/rpc/__init__.py01:51
lifelessdevananda: so, I don't think I'm confused.01:51
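(The option being cited, roughly as it was registered in the openstack.common rpc code of that era; the help text here is approximate:)

    from oslo.config import cfg  # pre-namespace-change import path

    # Incoming RPC messages are dispatched onto a green thread pool of
    # this size, one pool per service.
    rpc_opts = [
        cfg.IntOpt('rpc_thread_pool_size',
                   default=64,
                   help='Size of RPC thread pool'),
    ]
    cfg.CONF.register_opts(rpc_opts)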
lifelessdevananda: sketching to check I understand01:52
lifelesshttps://etherpad.openstack.org/p/action-started01:52
lifelessdevananda: ok, so I understand it now01:54
lifelessdevananda: let me ask, how will the user tell the thing failed if it fails after start ?01:54
devanandaediting pad too01:54
lifelessok, so if the user is going to poll01:55
lifelesswhy do this at all?01:55
lifeless[I see some complex failure modes that avoiding entirely would be nice]01:55
devanandalifeless: so there are a few ways to avoid this, most of which seem to fail much worse01:56
devanandalifeless: one is that the API does some very lightweight checking, responds 202-ACCEPTED to the user, and *then* sends the message to conductor01:56
devanandalifeless: with no guarantee that it can actually do the work requested01:56
lifelessI'll let you enumerate, then we can address point by point01:57
lifelessdon't want to rabbit hole early01:57
devanandak. i'll put in pad so we have references :)01:57
*** kobier has joined #openstack-ironic01:59
lifelessmy laptop being hammered, switching to desktop for a sec02:08
devanandahah. i was suddenly wondering who had joined :)02:09
lifelessmy 2G VM is deploying 3 x 4G images...02:11
lifelessTHRASH IT BABY02:12
lifelessoom killer killed rabbit before02:12
lifelessthat made deployments unhappy :P02:12
*** coolsvap_away has quit IRC02:34
devanandaright02:55
devanandamax_lobur_afk: something for you & lucas to look over when ya'll wake up: https://etherpad.openstack.org/p/action-started03:30
*** harlowja is now known as harlowja_away04:03
*** killer_prince has joined #openstack-ironic04:19
*** killer_prince has quit IRC04:31
devanandalifeless: message for when you're back (though i'll probably be gone by then) -- what if we limit the futures' threads like we do rpc threadpool? does that affect your preference?04:34
*** killer_prince has joined #openstack-ironic04:37
mrdadevananda: Just a question on bug/1271291, when the bug report suggests the methods should be consolidated - are you talking about just implementation, or the public signatures as well?  i.e. are the abstract methods in db/api.py up for change, or only the implementation in db/sqlalchemy/api.py ?04:37
devanandaoh04:38
devanandaboth04:38
devanandabut04:38
devanandai thought i saw that change in a recent patch?04:38
* mrda goes looking04:38
devanandamrda: https://review.openstack.org/#/c/63937/04:39
devanandatagged with a different bug id though ...04:39
*** jbjohnso has quit IRC04:41
mrdadevananda: it's kind of half the job, because it ignores get_node, and get_node_by_instance.04:42
mrdaI'll review the patch and request the bug be added04:42
mrdain the patch04:42
mrdarats04:43
devanandamrda: one i just filed today -- a bit trickier though :) https://bugs.launchpad.net/ironic/+bug/127639304:45
devanandaif you're hunting for bugs to take on, there are some others04:46
*** shausy has joined #openstack-ironic04:47
mrdaSure! I'm happy to take any that won't be too hard :)04:47
mrdadevananda: Feel free to send any others my way04:51
devanandamrda: I just targeted a bunch of unassigned bugs to i3 ... https://launchpad.net/ironic/+milestone/icehouse-304:52
devanandalet's see how much I regret that when I have to postpone a bunch of them later :)04:53
devanandabut feel free to take anything that's not assigned04:53
devanandaanyhow, it's almost 9pm and I really should eat some dinner :)04:54
* devananda really goes afk now04:54
mrdathanks devananda, have a good night04:55
*** jcooley_ has quit IRC05:19
*** killer_prince has quit IRC05:20
*** igor__ has joined #openstack-ironic05:37
*** killer_prince has joined #openstack-ironic05:40
*** jcooley_ has joined #openstack-ironic05:48
*** shawal has joined #openstack-ironic05:50
*** killer_prince has quit IRC06:03
*** killer_prince has joined #openstack-ironic06:04
*** killer_prince has quit IRC06:05
lifelessdevananda: back06:05
*** killer_prince has joined #openstack-ironic06:05
*** killer_prince has quit IRC06:05
openstackgerritJenkins proposed a change to openstack/ironic: Imported Translations from Transifex  https://review.openstack.org/7119206:06
*** killer_prince has joined #openstack-ironic06:08
*** igor__ has quit IRC06:21
*** mrda is now known as mrda_away06:44
*** coolsvap has joined #openstack-ironic07:09
*** jcooley_ has quit IRC07:12
*** jcooley_ has joined #openstack-ironic07:12
*** jcooley_ has quit IRC07:17
*** shausy has quit IRC07:42
*** shausy has joined #openstack-ironic07:43
devanandalifeless: so i should be asleep, but i'm not. i probably don't have braincells for much discussion, but after you went afk i looked closer at futures.07:46
devanandalifeless: if implemented differently than in the current patch, it should be possible to limit the # of threads. with that, what concerns do you have?07:46
devanandas/do/would/07:46
lifelessdevananda: as a short term fix - fine; longer term the interaction with housekeeping seems necessary to fix regardless07:57
lifelessdevananda: and - honestly - that seems simple to fix right now - don't use a lock for background updates - just run one and only one, and use a rate limit on the node to ensure both the concurrency and rate limit thresholds are not exceeded07:58
lifelessdevananda: that said, I don't see what futures gains you - its api is for map-reduce abstractions, which isn't the problem you have07:59
devanandalifeless: it provides a way to return a message on the RPC bus (work started) while the work continues to run in another thread08:03
lifelessdevananda: so do threads :)08:03
lifelessdevananda: and thread pools08:03
devanandalifeless: um. that's what this is, no?08:03
devanandamaybe i misunderstood something08:04
lifelessdevananda: its a backport of a python 3.2+ stdlib module that abstracts out threads vs external processes08:04
lifelesshttp://pythonhosted.org//futures/#executor-objects08:04
lifelessits intended to be used from synchronous code that wants to spawn a bunch of async workers.08:05
lifelesse.g. map-reduce08:05
*** mdurnosvistov_ has joined #openstack-ironic08:05
*** shausy has quit IRC08:05
lifelessits a lovely thing, but /not/ the problem you have, since you want your synchronous code to return, not to block.08:05
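(The executor API linked above, in the scatter-gather shape it was designed for; fetch() is a stand-in for real blocking work:)

    from concurrent import futures  # the 'futures' backport on Python 2

    def fetch(url):
        return len(url)  # stand-in for real blocking work

    # Synchronous code fans work out to a pool of async workers, then
    # blocks gathering all the results -- the map-reduce style use case.
    with futures.ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(fetch, ['http://a', 'http://b', 'http://c']))
    print(results)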
*** shausy has joined #openstack-ironic08:06
devanandalifeless: also, not taking a lock for background/housekeeping things might reduce the likelihood of tripping on this today, but doesn't solve the races08:06
devanandaright08:06
lifelessdevananda: if it gives breathing room to do the long term solution, I think thats fine.08:06
devanandaso i want the synchronous code to spawn one and exactly one async worker08:06
lifelessdevananda: since both the defects we identified in the discussion need to be fixed.08:06
lifeless[eventually]08:06
devanandayea08:07
devanandaso non-locking-housekeeping addresses the "nova boot failed because of periodic task"08:07
devanandabut doesn't address any of the races between clients08:07
lifelessif the sync code spawns one and only one worker, then when the sync code runs another rpc call08:07
lifelesseither it errors [there is a worker], or it blocks [your DOS concern], or it queues [ queueing]08:08
devanandaor improve the client's visibility into "was my request actioned?"08:08
*** jistr has joined #openstack-ironic08:08
lifelessI think you would be better served by saying 'given a thread pool serving RPCs, I want a thread pool of the same exact size serving RPC requested actions'08:08
lifelessand when that thread pool is full, start erroring incoming requests.08:08
devanandasorry - i mean, for each incoming request, the sync code should start one async worker then synchronously return08:08
devanandathere may be a pool of N async workers that it can take one from08:09
lifelessright08:09
devanandaright!08:09
devanandathat's what I meant08:09
devanandaE_LATE :)08:09
lifelessstandard eventlet thread pools will do that for you I think08:09
devananda...?08:09
devanandahmm, ok08:09
lifeless(though I still favour being -much- simpler about this :)) - doc ref for thread pool- http://eventlet.net/doc/threading.html#tpool-simple-thread-pool08:10
lifelesstis what oslo.messaging uses08:10
devanandacan i create two separate pools of pools?08:10
devanandaer, pools of threadpools? or whatever08:10
lifelessyeah08:10
lifelessthese are real threads not greenthread, though I don't think that will make any odds for your code08:11
lifelessoh, maybe not multiple pools available - but if not that will bite oslo eventually too :P08:11
lifelesssigh, I hate this, someday will read code without eyes bleeding08:12
devanandahah08:12
lifelessanyhow08:12
lifelessyou should sleep on it08:12
lifelesssee if my arguments for doing a 'hack' now are sufficient ;)08:13
lifelessjust checked08:13
lifelessyou can instantiate a new Pool08:13
lifelessproably use http://eventlet.net/doc/modules/greenpool.html08:13
devanandayea, reading that page already08:13
lifelessspawn blocks when the worker thing is full, which I think is fine - its still a signal to the user that something is wrong08:14
devanandalooks like just instantiating a GreenPool within ConductorManager.start(), with the same number of worker threads, should do the trick08:14
lifelessbrilliant - no, fine - yes.08:14
devanandayep08:14
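(A minimal sketch of that trick: a GreenPool owned by the conductor, sized to match the RPC pool. The class and method names are stand-ins, not Ironic's real code:)

    from eventlet import greenpool

    class ConductorManager(object):
        def start(self, pool_size=64):
            # one worker pool, sized like rpc_thread_pool_size
            self._worker_pool = greenpool.GreenPool(size=pool_size)

        def handle_request(self, job, *args):
            # spawn() blocks when every worker is busy, which is the
            # back-pressure signal discussed above
            return self._worker_pool.spawn(job, *args)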
lifelessagain, we should focus right now on good enough, since there is a deadline, and once past it, sic redundant Russians on the request id thing08:15
devanandaya08:15
devanandascheduler//taskflow//thing is needed08:15
devanandabut not this cycle08:15
lifelessmmm, not a scheduler, nor taskflow08:16
devanandathing :)08:16
lifelessthing :)08:16
devanandalet's just call it Thing08:16
lifelessWARNING nova.openstack.common.loopingcall [-] task run outlasted interval by 50.013854 sec08:16
lifelessnoice08:16
lifeless2014-02-05 07:55:05.979 8284 WARNING oslo.messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 91c001fd98a34e3489204b819cb1ae51, message : {u'_unique_id': u'734e2577dd3b472e852d73e09a7581e5', u'failure':08:16
lifelesspoor little node, overworked.08:16
devanandaheh08:17
lifelesstrying to spawn a 3 node overcloud08:17
*** romcheg has joined #openstack-ironic08:29
*** ifarkas has joined #openstack-ironic08:40
*** ifarkas_ has joined #openstack-ironic08:42
*** ifarkas has quit IRC08:44
*** lsmola has quit IRC08:46
*** mdurnosvistov_ has quit IRC08:52
*** matsuhashi has quit IRC08:54
*** matsuhashi has joined #openstack-ironic08:55
*** romcheg1 has joined #openstack-ironic08:57
*** romcheg has quit IRC08:58
*** aignatov_ is now known as aignatov08:59
*** lsmola has joined #openstack-ironic08:59
*** aignatov is now known as aignatov_09:02
*** derekh has joined #openstack-ironic09:05
*** romcheg has joined #openstack-ironic09:06
*** romcheg1 has quit IRC09:08
*** aignatov_ is now known as aignatov09:10
*** lucasagomes has joined #openstack-ironic09:16
*** lsmola has quit IRC09:19
*** lsmola has joined #openstack-ironic09:32
openstackgerritYuriy Zveryanskyy proposed a change to openstack/ironic: Expose 'reservation' field of a node via API  https://review.openstack.org/7121109:37
openstackgerritYuriy Zveryanskyy proposed a change to openstack/ironic: Add ability to break TaskManager locks via REST API  https://review.openstack.org/7121209:38
openstackgerritYuriy Zveryanskyy proposed a change to openstack/ironic: wip0  https://review.openstack.org/7121109:38
openstackgerritYuriy Zveryanskyy proposed a change to openstack/ironic: Expose 'reservation' field of a node via API  https://review.openstack.org/7121109:44
openstackgerritYuriy Zveryanskyy proposed a change to openstack/ironic: Add ability to break TaskManager locks via REST API  https://review.openstack.org/7121209:44
*** tatyana has joined #openstack-ironic09:58
*** max_lobur_afk is now known as max_lobur10:04
lucasagomesmax_lobur, romcheg congratz for the nomination, you guys deserve it :)10:20
openstackgerritA change was merged to openstack/ironic: Move test__get_nodes_mac_addresses  https://review.openstack.org/7048110:20
romchegMorning lucasagomes, max_lobur10:20
romchegThanks!10:20
lucasagomesmorning10:20
romchegI appreciate that!10:20
max_loburmorning lucasagomes and romcheg10:21
max_loburthanks Lucas10:21
max_loburThanks for your confidence and support folks!10:22
lucasagomes:)10:23
romchegI join Max's thanks :)10:23
*** romcheg has left #openstack-ironic10:24
*** rsacharya has joined #openstack-ironic10:24
*** aignatov is now known as aignatov_10:25
*** rsacharya has quit IRC10:26
*** rsacharya has joined #openstack-ironic10:28
*** rsacharya has quit IRC10:36
*** rsacharya has joined #openstack-ironic10:36
*** rsacharya has joined #openstack-ironic10:37
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: API tests to check for the return codes  https://review.openstack.org/7076610:39
*** jistr has quit IRC10:40
*** shawal has quit IRC10:40
*** shawal has joined #openstack-ironic10:41
*** martyntaylor has joined #openstack-ironic10:42
*** walsha has joined #openstack-ironic10:45
*** shawal has quit IRC10:47
*** nosnos has quit IRC10:53
Haomengmorning Ironic:)10:54
Haomengmax_lobur, romcheg congrats:)10:54
HaomengI am back from our Chinese new year:)10:55
*** romcheg has joined #openstack-ironic10:56
*** jistr has joined #openstack-ironic11:00
max_loburmorning Haomeng :)11:00
max_loburthanks!11:00
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: API tests to check for the return codes  https://review.openstack.org/7076611:00
Haomengmax_lobur: morning:)11:00
Haomengmax_lobur: :)11:00
Haomengmax_lobur: I learned a lot from your code review comments, thank you:)11:01
max_lobur:)11:01
Haomengmax_lobur: :)11:01
openstackgerritLucas Alvares Gomes proposed a change to openstack/ironic: Handle multiple exceptions raised by jsonpatch  https://review.openstack.org/6845711:02
lucasagomesHaomeng, morning, welcome back11:02
max_loburHaomeng, how's your holidays? :)11:02
Haomenglucasagomes: thanks:)11:03
Haomengmax_lobur: very busy holidays to visit all friends:)11:04
max_loburI've seen a few TV reports from the Chinese celebration, seems it was awesome :)11:04
max_loburHaomeng, heh, same here :)11:04
Haomengmax_lobur: yes, Chinese New Year is special, all the people go back home to celebrate the new year:)11:05
max_loburcool :)11:08
Haomengmax_lobur: so the traffic is terrible:)11:08
max_loburI know how it is :)11:08
Haomengmax_lobur: :)11:09
max_loburwe also have a lot of people working in big cities, and they're all trying to get home for NY :)11:09
Haomengmax_lobur: yes, same case:)11:09
*** dshulyak has joined #openstack-ironic11:10
max_loburbtw, the winter olympics will start this saturday in Sochi, it should be a great show as well11:11
Haomengmax_lobur: great:)11:11
max_loburwondering what Russians will invent to surprise the world :)11:11
Haomengmax_lobur: )11:11
*** rsacharya has quit IRC11:13
*** aignatov_ is now known as aignatov11:14
*** Alexei_987 has joined #openstack-ironic11:25
openstackgerritLucas Alvares Gomes proposed a change to openstack/python-ironicclient: Add node.states() to the client library  https://review.openstack.org/7097911:37
*** shausy has quit IRC11:42
*** shausy has joined #openstack-ironic11:43
*** lsmola has quit IRC11:46
*** shausy has quit IRC11:55
*** shausy has joined #openstack-ironic11:56
*** shausy has quit IRC11:56
*** igor_ has joined #openstack-ironic11:57
*** shausy has joined #openstack-ironic11:57
*** lsmola has joined #openstack-ironic12:01
*** kobier has quit IRC12:30
*** coolsvap has quit IRC12:33
*** shausy has quit IRC12:35
*** shausy has joined #openstack-ironic12:36
*** lucasagomes is now known as lucas-hungry12:43
*** lsmola has quit IRC12:53
*** lsmola has joined #openstack-ironic12:56
openstackgerritMax Lobur proposed a change to openstack/ironic: Allow concurrent image downloads in pxe logic  https://review.openstack.org/6390413:19
*** igor_ has quit IRC13:27
*** igor_ has joined #openstack-ironic13:28
*** jdob has joined #openstack-ironic13:30
*** igor_ has quit IRC13:33
*** walsha has quit IRC13:34
openstackgerritYuriy Zveryanskyy proposed a change to openstack/ironic: Add parameter for filtering nodes by maintenance mode  https://review.openstack.org/6393713:37
*** lucas-hungry is now known as lucasagomes13:48
*** zigo_ is now known as zigo13:50
*** igor_ has joined #openstack-ironic13:58
*** igor__ has joined #openstack-ironic14:00
*** shausy has quit IRC14:01
*** igor_ has quit IRC14:03
*** rloo has joined #openstack-ironic14:09
*** medberry is now known as med_14:14
*** jbjohnso has joined #openstack-ironic14:17
openstackgerritYuriy Zveryanskyy proposed a change to openstack/ironic: Fix 'run_as_root' parameter check in utils  https://review.openstack.org/7032414:22
*** matty_dubs|gone is now known as matty_dubs14:29
NobodyCamgood mornic Ironig says the man without coffee14:39
lucasagomesNobodyCam, morning14:42
NobodyCam:)14:45
NobodyCammorning lucasagomes14:45
max_loburmorning NobodyCam :)14:45
NobodyCammorning max_lobur :)14:45
romchegMorning NobodyCam14:45
NobodyCamhey hey romcheg :) good morning... a14:46
NobodyCams/a//14:46
NobodyCam:-p14:46
NobodyCamI've got a conf call at 7 am pst to see if we can get some resources to help us out!14:48
NobodyCamromcheg: you're GMT+2?14:48
romchegNobodyCam: yes, GMT+214:48
NobodyCamawesome :)14:48
*** aignatov is now known as aignatov_14:49
NobodyCamanyone seen linaggo online last few days?14:50
*** rloo has quit IRC14:53
*** aignatov_ is now known as aignatov14:53
*** rloo has joined #openstack-ironic14:53
romchegNo, haven't seen her14:55
romchegNobodyCam: Please keep in mind I have a Spanish class from 6PM to 8 PM my time14:56
lucasagomesNobodyCam, haven't seen her/him14:57
devanandamorning, all14:57
lucasagomesNobodyCam, maybe it's because of the chinese new year14:57
lucasagomesdevananda, morning14:57
devanandamax_lobur, lucasagomes - i'm fairly sure that, once i have some coffee in me, i'll be able to explain https://etherpad.openstack.org/p/action-started14:58
devanandaand another solution for the race that doesn't require futures // expose a DOS attack on the conductor14:58
NobodyCamahh lucasagomes yes14:59
* lucasagomes clicks14:59
max_loburmorning devananda14:59
NobodyCamwill do romcheg :)14:59
lucasagomesdevananda, wait, that etherpad is about the race condition?15:00
devanandayes15:00
devanandaheh15:00
devanandalifeless and I were at it for a while ...15:00
NobodyCammorning devananda15:00
max_lobur:)15:00
lucasagomesdevananda, awesome, ok I gotta read it15:00
devanandaNobodyCam: conf code?15:00
max_loburI'm almost at the end of that doc :)15:00
lucasagomesNobodyCam, btw I'm adding a bunch of tests to the driver, also tested the patch-set #16 (w/ volume driver)15:00
lucasagomesit works15:00
* lucasagomes having a bad time with the tests, I suck on mocking stuff15:01
NobodyCamlucasagomes: awesome15:01
NobodyCam:)15:01
devanandamax_lobur: tldr - use greenthreads, not futures. my comment on the review is otherwise still correct15:02
NobodyCamhttp://www.youtube.com/watch?v=fGGWrJp4JHA&feature=kp15:02
devanandamax_lobur: i'll have a mock up for you in a bit15:02
max_loburyea, I've seen it15:02
max_loburI got general concept15:02
devanandak15:02
devanandamax_lobur: while testing this out15:03
devanandagranted, it was very late last night15:03
devanandabut I think i discovered a bug in task_manager15:03
devanandathey aren't actually locking!! :(15:03
devanandacause i can spin up >1 thread working on the same node at the same time15:04
lucasagomeshmm15:04
openstackgerritDevananda van der Veen proposed a change to openstack/ironic: MOCK of a greenthread pool  https://review.openstack.org/7128115:04
devanandathat ^15:05
*** bashok has joined #openstack-ironic15:07
openstackgerritYuriy Zveryanskyy proposed a change to openstack/ironic: Add ability to break TaskManager locks via REST API  https://review.openstack.org/7121215:08
*** rsacharya has joined #openstack-ironic15:12
max_loburdevananda, have you found a bug in my race patch, or in your greenpool mockup?15:15
devanandamax_lobur: I *think* it's a problem in neither15:16
max_loburI just posted a comment to the mockup https://review.openstack.org/#/c/7128115:16
devanandamax_lobur: well. there's a problem with your futures patch. but there's ALSO a problem with the task_manager locks15:17
*** matsuhashi has quit IRC15:17
max_loburalso please take a look https://review.openstack.org/#/c/69135/1/ironic/tests/conductor/test_manager.py15:17
max_loburL285 - 28715:18
*** matsuhashi has joined #openstack-ironic15:18
*** aignatov is now known as aignatov_15:28
*** coolsvap has joined #openstack-ironic15:29
max_loburdevananda, lucasagomes I have to go for now, I'll join IRC in ~4 hours15:32
lucasagomesack15:32
* lucasagomes is in a call15:32
devanandamax_lobur: Cannot call refresh on orphaned Node object15:33
*** matsuhashi has quit IRC15:34
max_loburdevananda, do you mean the test is broken / does nothing ?15:37
openstackgerritGhe Rivero proposed a change to openstack/ironic: Allow to tear-down a node waiting to be deployed  https://review.openstack.org/7129715:37
devanandamax_lobur: when i run my patch, adding a node.refresh() this is what i get in the greenthread15:38
max_loburahh15:38
devanandamax_lobur:  i believe your test isn't really testing node.refresh() or a separate thread15:38
max_loburi'm using db_node.refresh(self.context)15:38
max_loburyou're getting OrphanedObjectError15:38
max_loburhttps://github.com/openstack/ironic/blob/2ce7c44bd16fb65719e554bd66ff775a2e8c6332/ironic/objects/base.py#L11615:39
max_loburthe only place where it raised15:39
max_loburthis means the session stored within the object is not valid15:39
devanandamax_lobur: ah, right. it's early - i'm still making sleepy mistakes15:39
max_loburtherefore we usually pass session to refresh15:40
max_loburas arg15:40
devanandamax_lobur: so if i explicitly pass session to refresh, i get a different one now15:40
max_lobur:)15:40
devanandaDBError: can't compare offset-naive and offset-aware datetimes15:40
max_loburheh15:40
max_loburhaven't seen that15:40
devanandacoming from session.commit15:40
devanandain the greenthread15:40
devanandamax_lobur: anyway, my poorly-written mockup aside, the point of that long etherpad is that, long term, we should indeed be returning a tracking token for every user request15:41
max_loburdevananda, have you seen my comment https://review.openstack.org/#/c/71281/115:42
max_loburto your patch15:42
max_loburbecause it won't work as it is15:42
devanandaand using a third (or external) service to store those requests... but short term15:42
devanandashort term we need a thread pool with limited # of workers, not spawning a new thread for every request with no upper bound15:43
max_loburdevananda, I agree15:43
devanandamax_lobur: gah! yes - you're right. it was after midnight when I was testing that.15:43
max_loburI'll take a look if we can create a pool using futures15:44
devanandalemme grab the manual locking from your patch. that explains why my test isn't working :)15:44
max_loburhonestly I like futures API more than greenthread15:44
devanandamax_lobur: futures, afaict, doesn't support this. why not use eventlet.greenpool15:44
max_loburat least the have done_callback that we need so much15:44
max_loburbut15:44
max_loburwe can have a pool of submitted futures15:44
max_loburwe will have it anyway15:44
max_loburbecause we want to send a signal to cancel them15:45
devanandawe already pass the task object into the job, so the jobs can release the task when they finish15:45
devanandarather than needing a separate wrapper that releases the task15:45
max_loburwell15:45
devanandahm15:45
devanandaedit edit edit :)15:46
devanandawe need a wrapper to handle exception cases15:46
max_loburin that way we'll need to check for exception15:46
max_loburyep15:46
devanandabut still, i dont see why that doesn't work with greenthread api15:46
max_loburbecause those point of task that releases lock may be not hit at all15:46
*** rsacharya has quit IRC15:46
max_loburdone_callback guarantees that it will be hit15:47
devanandatry: finally:15:47
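(The try/finally shape being suggested, sketched out; it guarantees the lock is released on every exit path, which is what done_callback buys in the futures API. 'task', its methods, and 'do_work' are illustrative:)

    def run_job(task, do_work, *args):
        # wrap the async job so the TaskManager lock is always released,
        # whether the work succeeds, raises, or exits early
        try:
            do_work(task, *args)
        except Exception as e:
            task.node.last_error = str(e)  # surface the failure to pollers
        finally:
            task.release()  # always reached: the done_callback analogue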
max_loburwell, maybe, I need to check15:48
max_loburk, I really gotta go, will be back in ~4 hrs15:48
max_lobursee ya!15:48
*** max_lobur is now known as max_lobur_afk15:49
NobodyCamdevananda: lucasagomes and /me have +2'd https://review.openstack.org/#/c/71211/ I did not approve just so you could take a quick peek15:52
* devananda looks15:52
lucasagomesyea that's a simple one I will give a try/review the one actually breaking the lock later15:53
devanandayuriyz: one nit - the commit message says closes-bug, shouldn't it be partial-bug?15:55
*** igor__ has quit IRC15:57
NobodyCamwow TripleO jobs in zuul for nearly 64 hours16:00
*** igor_ has joined #openstack-ironic16:03
NobodyCambrb16:04
*** igor_ has quit IRC16:08
lucasagomesdevananda, NobodyCam great news!16:12
lucasagomesI will see you guys in Sunnyvale :)16:13
NobodyCamwoo hoo16:13
NobodyCamoh how about romcheg and max_lobur_afk ?16:13
romchegNobodyCam: I don't know yet16:13
rloolucasagomes: sweet!16:13
NobodyCammorning rloo :)16:14
lucasagomesrloo, morning :D16:14
lucasagomesrloo, see ya there as well?16:14
romchegPlease remind me how much time before the event do I have?16:14
rloolucasagomes: question about https://review.openstack.org/#/c/71211/. Should 'reservation' be added to sample()?16:14
rloomorning NobodyCam!16:14
NobodyCamits march 3 thru 7th16:14
lucasagomesrloo, good catch! Yes, now that it's part of the Node resource16:14
rloolucasagomes: am thinking about it. i can be there for part of it and am not sure that is worth it. was going to ask.16:15
NobodyCamromcheg: you'll be core by then, so you have to go :-p16:15
romchegNobodyCam: I still did not manage to get a visa. I will poke management team here to speed up the process16:15
devanandalucasagomes: awesome!16:15
lucasagomesrloo, I see, I've been to the last mid-cycle in seattle16:16
lucasagomesI can say it was very useful16:16
NobodyCam:)16:16
rlooWhat is done in a mid-cycle? 5 days seems like a lot.16:16
lucasagomesit was more about tripleo, but we managed to talk about ironic16:16
lucasagomesthe cli started there :)16:16
devanandawe should have more ironic folks this time -- last time it was just, what, 4 of us?16:16
lucasagomesrloo, talks, making decisions, fixing a lot of bugs16:16
devanandaalso the project is much further along16:16
devanandawe have real things that work now! :-D16:17
rlooahh. hmm. i can be there wed-fri, but I am worried that 'most' of the good stuff will be mon-tues.16:17
lucasagomesdevananda, yea last time was: me, you, NobodyCam and martyntaylor16:17
rloowow, how ironic has grown ;)16:17
devanandarloo: i suspect friday, not much will happen16:17
rloodevananda. so it is 'worth' being there wed, thurs then?16:18
devanandarloo: missing mon-tue will hurt, but wed-thu should be productive still16:18
NobodyCamrloo: yes!16:18
devanandahurt in the sense of, we'll probably change everything on tuesday ;)16:18
NobodyCamI think so16:18
NobodyCamlol16:18
lucasagomeshah16:18
rlooha ha. ok. I'll look into it then. Seems like Tues is the day!16:20
devanandasweet16:21
devanandagot greenthreads to do what i wanted16:21
*** jcooley_ has joined #openstack-ironic16:21
lucasagomesnice!16:21
NobodyCamwoo16:22
*** bashok has quit IRC16:22
openstackgerritDevananda van der Veen proposed a change to openstack/ironic: DO NOT MERGE: mock of a greenthread pool  https://review.openstack.org/7128116:23
devanandamax_lobur_afk: borrowed your task_manager changes and yes, that's all I was missing. see ^ for a way to use greenthread api to clean up. It's still just a mock, probably has some holes that you'll see ... :)16:24
devanandalucasagomes: that fixes the race condition16:25
devanandalucasagomes: without a lot of changes16:25
devanandahttps://review.openstack.org/#/c/71281/2/ironic/conductor/manager.py16:26
lucasagomesah sweet /me will take a look16:26
*** jcooley_ has quit IRC16:28
openstackgerritJarrod Johnson proposed a change to stackforge/pyghmi: Make pyghmi tolerate arbitrary threading models  https://review.openstack.org/7117416:31
devanandajbjohnso: hah, have you been watching our threading discussions? :)16:32
*** rloo has quit IRC16:33
*** rloo has joined #openstack-ironic16:33
*** jcooley_ has joined #openstack-ironic16:33
*** rloo has quit IRC16:34
*** rloo has joined #openstack-ironic16:35
openstackgerritJarrod Johnson proposed a change to stackforge/pyghmi: Make pyghmi tolerate arbitrary threading models  https://review.openstack.org/7117416:38
*** Alexei_987 has quit IRC16:40
rlooHi NobodyCam: https://bugs.launchpad.net/ironic/+bug/1261915. icbw, and it was awhile ago, but I thought you had mentioned something about this?16:50
* NobodyCam looks16:51
NobodyCamahh yes16:52
NobodyCamrloo: I came up with this https://review.openstack.org/#/c/51328/15/nova/virt/ironic/ironic_driver_fields.py16:53
NobodyCamfor the nova driver16:53
NobodyCambut thought it would be really cool if there was a way to query for what is required16:54
NobodyCamso we didn't have to maintain a static mapping file16:54
rlooNobodyCam: ah, i thought you had done something.16:55
NobodyCamlol  if you count a static mapping file well then... :-p16:55
NobodyCamI see by the bouncing bubbie that it is BBT... so i'll BRB :)16:57
rlooNobodyCam. I should take a look at your ironic driver... seems like if we provide a way of querying, there's going to be *some* sort of static list/map.16:57
NobodyCamrloo: it is our driver :)16:58
rloooh yeah, sorry NobodyCam. Our driver! ha ha.16:59
*** athomas has joined #openstack-ironic16:59
*** rloo has quit IRC17:03
*** rloo has joined #openstack-ironic17:03
*** igor_ has joined #openstack-ironic17:04
NobodyCamhehehehe L(17:05
NobodyCams/L(/:)/17:05
lucasagomesI've a mock on the fixtures, does anyone know how to disable that mock for a particular test?17:08
lucasagomesI tried to stop() but that doesn't work17:08
lucasagomesI mean it works, but it doesn't "remove" the mock17:09
*** igor_ has quit IRC17:09
*** ndipanov is now known as ndipanov_gone17:09
*** ifarkas_ has quit IRC17:10
NobodyCamlucasagomes: https://code.google.com/p/googlemock/wiki/FrequentlyAskedQuestions#How_can_I_delete_the_mock_function's_argument_in_an_action?17:13
NobodyCamwait nope thats not it17:14
lucasagomeshah yea it's google mock lib in c++17:14
rloolucasagomes. did you try setting the variable that holds the mock to None?17:18
lucasagomesrloo, not really lemme try17:19
rloolucasagomes, after stopping.17:19
lucasagomes        # stop _get_client mock17:19
lucasagomes        self.mock_cli.stop()17:19
lucasagomes        self.mock_cli = None17:19
lucasagomessame error :(17:19
lucasagomesurgh!17:19
rloohmm. I think that's how I've disabled it before. what error?17:20
rlooor maybe I thought it was disabled...17:20
lucasagomesthis _get_client17:21
lucasagomescalls an ironic_client.get_client17:21
*** hemna_ is now known as hemna17:22
lucasagomesso in the test of the _get_client method I have an assert testing that ironic_client.get_client was called with the right parameters17:22
lucasagomesAssertionError: Expected to be called once. Called 0 times.17:22
*** jcooley_ has quit IRC17:22
rlooso ironic_client.get_client is mocked17:22
NobodyCamlucasagomes: can you .andreturn=ironic_client.get_client?17:23
*** jcooley_ has joined #openstack-ironic17:23
devanandalucasagomes: make a different test class17:26
devanandalucasagomes: so the mock during __init__ isn't there17:26
*** jistr has quit IRC17:26
lucasagomesdevananda, heh, that would work but sounds a bit overkill :P17:26
devanandalucasagomes: it sounds like most tests want the mock in place, but you also want a test to make sure that the thing you're mocking is going to work17:26
lucasagomesdevananda, exactly, only 2 tests I dont17:27
devanandaso one class has one test: test the thing. the other class has lots of tests: assume the first thing works, test the rest17:27
lucasagomesneed that mock17:27
lucasagomesI c17:27
lucasagomesyea that would work17:27
devanandanot overkill. it's easier to understand than starting and stopping the mock17:27
devanandabut put it in the same file/module :)17:28
lucasagomesack17:29
lucasagomesthanks :D17:29
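(A self-contained sketch of that split: one class really exercises the wrapped call, the other installs the mock once in setUp. _get_client and real_get_client stand in for the code under test:)

    import mock
    import testtools

    def real_get_client():
        return 'client'  # stand-in for ironic_client.get_client

    def _get_client():
        return real_get_client()  # stand-in for the method under test

    class GetClientTestCase(testtools.TestCase):
        # no mock in setUp: this class genuinely tests _get_client
        def test_calls_through(self):
            with mock.patch(__name__ + '.real_get_client') as m:
                _get_client()
                m.assert_called_once_with()

    class EverythingElseTestCase(testtools.TestCase):
        # every other test assumes _get_client works and stubs it once
        def setUp(self):
            super(EverythingElseTestCase, self).setUp()
            p = mock.patch(__name__ + '._get_client')
            self.mock_cli = p.start()
            self.addCleanup(p.stop)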
*** athomas has quit IRC17:33
devanandagoing to be afk for a few hours17:33
devanandalucasagomes: your devstack walkthrough still accurate?17:33
*** jcooley_ has quit IRC17:34
lucasagomesdevananda, I think so, I've been using it to test the driver17:34
devanandagreat17:34
devanandai'll give that a shot this afternoon and update the pad if needed17:34
*** jcooley_ has joined #openstack-ironic17:34
lucasagomesdevananda, ack, thanks!17:34
*** tatyana has quit IRC17:35
devanandagonna fix up my neutron patch -- would love to see that land soon :)17:36
devananda*then go afk :)17:37
NobodyCam:)17:37
lucasagomes+2!!!17:37
openstackgerritDevananda van der Veen proposed a change to openstack/ironic: Implement _update_neutron in PXE driver  https://review.openstack.org/7046817:38
openstackgerritDevananda van der Veen proposed a change to openstack/ironic: Fix log and test for NeutronAPI.update_port_dhcp_opts  https://review.openstack.org/7109417:38
*** Haomeng has quit IRC17:38
*** Haomeng has joined #openstack-ironic17:39
lucasagomesa-ha found a way (kinda tricky) to disable that mock... I will leave it like this but we can put it in another test class if that looks too tricky17:41
devanandahttps://review.openstack.org/#/c/64711/3 and https://review.openstack.org/#/c/70267/1 both need reviews too17:41
*** jcooley_ has quit IRC17:41
*** jcooley_ has joined #openstack-ironic17:42
devanandaok - really AFK now.17:44
*** matty_dubs is now known as matty_dubs|lunch17:53
lucasagomesNobodyCam, I will submit a new patchset adding the tests (not completed yet, tomorrow I will add more)17:55
NobodyCamlucasagomes: awesome!!!!!!17:55
lucasagomesyea not yet, but we are getting there :D17:56
NobodyCam:)17:56
lucasagomeshttps://review.openstack.org/#/c/51328/17/nova/tests/virt/ironic/test_driver.py17:57
*** aignatov_ is now known as aignatov17:57
NobodyCamlucasagomes: that is looking great!18:00
NobodyCamTY for all the assistance!18:00
*** derekh has quit IRC18:00
lucasagomesNobodyCam, cheers :) ah np!18:00
lucasagomeslet's get it ready for review and merged soon!18:01
NobodyCamya18:01
NobodyCamahhh I have a question18:01
lucasagomessure18:01
openstackgerritDevananda van der Veen proposed a change to openstack/ironic: Implement a multiplexed VendorPassthru example  https://review.openstack.org/7086318:02
NobodyCamquestion is about nova service state18:02
NobodyCamhttps://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L448-L45118:02
NobodyCamis that a tuple as a dict key18:03
lucasagomeslemme take a look18:04
lucasagomesNobodyCam, it looks like it's a tuple yea18:05
*** igor__ has joined #openstack-ironic18:05
NobodyCamya, now /me needs to learn how to set that18:06
*** igor__ has quit IRC18:10
lucasagomesNobodyCam, https://review.openstack.org/#/c/13920/25/nova/scheduler/host_manager.py L34118:10
lucasagomeslooking if it has changed18:11
lucasagomeshttps://github.com/openstack/nova/blob/master/nova/tests/scheduler/fakes.py#L162-L17518:13
* lucasagomes is confused18:13
lucasagomesseems there's no format to be honest18:13
NobodyCamya I tried : http://paste.openstack.org/show/5D37KvDYkOuKwV8OplgX/18:15
NobodyCamworking scheduler tests18:15
lucasagomesright18:16
lucasagomeswe have our own hostmanager right?18:16
NobodyCamsi18:16
lucasagomesI think it will be easier to figure things out if we know what self.host_state_map looks like18:18
lucasagomesthat method returns an iterator over that data structure (which is a dict)18:19
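(The structure under discussion, reduced to a snippet: host_state_map is a plain dict keyed by (host, nodename) tuples; the values here are illustrative:)

    host_state_map = {}

    # a tuple works as a dict key because it is hashable
    state_key = ('compute-1', 'node-uuid-1234')
    host_state_map[state_key] = {'free_ram_mb': 512}

    # lookups use the same tuple
    print(host_state_map[('compute-1', 'node-uuid-1234')])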
*** jcooley_ has quit IRC18:19
NobodyCamseems I need to do a quick walkies.. brb18:19
lucasagomesright and I'm done for today as well18:20
lucasagomeswill prepare something for dinner18:20
lucasagomeshave a good night everybody18:20
lucasagomesnight devananda NobodyCam rloo18:21
*** lucasagomes has quit IRC18:21
*** jcooley_ has joined #openstack-ironic18:21
*** jcooley_ has quit IRC18:23
*** jcooley_ has joined #openstack-ironic18:23
*** aignatov is now known as aignatov_18:24
*** aignatov_ is now known as aignatov18:30
jbjohnsodevananda, actually, no18:31
jbjohnsodevananda, I've not been paying much attention18:31
openstackgerritGhe Rivero proposed a change to openstack/ironic: Set boot device to PXE when deploying  https://review.openstack.org/7133218:32
jbjohnsoI have been doing profiling and being disappointed that while my threaded code is much easier to follow18:32
jbjohnsoit is not nearly as nice on cpu time...18:32
*** digambar has joined #openstack-ironic18:32
jbjohnsothe commit was because all pyghmi stuff did have to be kept on a thread18:32
jbjohnsoif you patched it with eventlet.green stuff18:33
jbjohnsobecause eventlet would raise a RuntimeError because multiple things touched the same filehandle18:33
jbjohnsowhich is something only a crazy person would do18:33
jbjohnsoso I pulled out the io touching things into a distinct thread18:33
*** zul has quit IRC18:33
digambarHey18:34
digambarupdated the libvert to 1.0.218:34
jbjohnsowhen I have a bunch of servers doing console in a process, that process takes like 15% of one cpu18:34
jbjohnsoa bunch being defined as about 718:35
jbjohnsobut it seems to not be changing much with scale...18:35
digambarHello NobodyCam18:35
NobodyCamhey digambar :)18:36
digambarDone with libert 1.0.318:36
digambarnow wants to start work on ironic18:36
digambarplease suggest me any bug for ironic18:37
jbjohnsoone of my next changes will be to adjust the sockets to be pooled, because I project that the socket sharing for SOL will fall apart at default rmem at about 800 servers or so18:37
*** zul has joined #openstack-ironic18:37
jbjohnsotuning servers per client socket based on the socket buffer size would be nice and let me delete some code to nicely balance that18:38
digambarHello NobodyCam18:40
digambarplease suggest where I should start for dev in ironic18:40
digambarso that I can start immediately on the bug18:41
NobodyCamdigambar: one sec18:41
digambarokk18:41
*** matty_dubs|lunch is now known as matty_dubs18:44
*** aignatov is now known as aignatov_18:46
NobodyCamdigambar: how about this one : https://bugs.launchpad.net/ironic/+bug/1276393??18:46
digambarlet me check18:46
digambarcool18:49
digambarlet me start with this one18:49
*** lynxman has quit IRC18:51
*** lynxman_ has joined #openstack-ironic18:51
*** lynxman_ is now known as lynxman18:51
*** lynxman has quit IRC18:51
*** lynxman has joined #openstack-ironic18:51
NobodyCamawesome digambar :) please assign yourself to the bug18:51
digambaryep18:53
*** anniec has joined #openstack-ironic18:55
*** mdurnosvistov_ has joined #openstack-ironic18:56
digambarironic command not displayed19:01
digambarwhy so19:01
*** hemna has quit IRC19:01
*** anniec has quit IRC19:02
digambarroot@openstack:/opt/stack# ironic ironic: command not found19:03
*** rloo has quit IRC19:03
digambar.19:03
digambarIs there something I am missing here19:03
*** rloo has joined #openstack-ironic19:03
*** rloo has quit IRC19:04
*** harlowja_away is now known as harlowja19:05
digambarHey NobodyCam19:05
NobodyCamhey digambar sorry was on another window19:05
digambarNP19:05
digambarbut ironic command is not found19:05
digambarthough I have done the source openrc19:06
*** igor__ has joined #openstack-ironic19:06
NobodyCamhow did you build your stack devstack or diskimage builder?19:06
digambaris there anything I am missing here19:06
digambardevstack19:06
*** jcooley_ has quit IRC19:07
NobodyCamdid you install ironic?19:07
openstackgerritJarrod Johnson proposed a change to stackforge/pyghmi: Make pyghmi tolerate arbitrary threading models  https://review.openstack.org/7117419:07
digambaryes19:08
*** jcooley_ has joined #openstack-ironic19:08
NobodyCamis it in your path?19:08
digambaryes19:09
digambarI am seeing both services ir-api & ir-cond running @ screen19:09
digambarI am able to do nova list19:09
digambarbut not ironic related command19:09
NobodyCamwhat do you get with 'which ironic'19:10
digambarnothing19:10
digambarno output19:10
*** igor__ has quit IRC19:10
digambar9$ ir-api*  10$ ir-cond19:11
digambarsee above service found @ screen also19:11
digambarwith running19:11
NobodyCamtry find / -name ironic19:11
digambarGot it19:12
*** rloo has joined #openstack-ironic19:12
digambarpython-ironicclient is not installed19:13
digambarwith devstack19:13
NobodyCamahh19:13
openstackgerritA change was merged to stackforge/pyghmi: Make pyghmi tolerate arbitrary threading models  https://review.openstack.org/7117419:13
NobodyCamthat would do it19:13
NobodyCam:-p19:13
digambarfrom where should I download it19:14
digambarany link19:14
NobodyCampip install python-ironicclient19:14
digambarokk19:15
*** coolsvap is now known as coolsvap_away19:15
*** rloo_ has joined #openstack-ironic19:16
*** rloo_ has quit IRC19:16
*** rloo_ has joined #openstack-ironic19:17
*** rloo has quit IRC19:17
digambarHey19:24
digambarwhile running command19:24
digambarPATH=$PATH:../tripleo-incubator/scripts/ ./tripleo-incubator/scripts/create-nodes 1 512 10 amd64 119:24
digambargetting error19:25
digambarvirsh: /usr/lib/libvirt-lxc.so.0: version `LIBVIRT_LXC_1.0.4' not found (required by virsh)19:25
digambarvirsh: /usr/lib/libvirt.so.0: version `LIBVIRT_1.1.3' not found (required by virsh)19:25
digambarHi NobodyCam19:26
NobodyCamHey digambar19:26
digambarwhile creating mac it gives me error19:26
digambarvirsh: /usr/lib/libvirt.so.0: version `LIBVIRT_1.1.3' not found (required by virsh)19:26
NobodyCamdoes something like 'virsh list' work form the command line19:26
NobodyCam?19:26
digambaryes19:26
digambarwhen I run19:27
digambarPATH=$PATH:../tripleo-incubator/scripts/ ./tripleo-incubator/scripts/create-nodes 1 512 10 amd64 119:27
digambarin the last once it going to create mac19:27
digambargives me error19:27
digambarvirsh: /usr/lib/libvirt.so.0: version `LIBVIRT_1.1.3' not found (required by virsh)19:27
* NobodyCam looks at create nodes script19:28
digambarroot@openstack:/opt/stack# virsh --version 1.0.319:29
digambarroot@openstack:/opt/stack# virsh list19:29
digambarId    Name                           State ----------------------------------------------------19:30
NobodyCamdigambar: what is your base os?19:30
digambarubuntu19:30
digambarubuntu 12.0419:30
digambar32bit19:30
NobodyCamdigambar: Have you ever used http://paste.openstack.org ?19:30
digambarno19:31
NobodyCamahh ^^^ is a much better way to send multi-line posts19:31
digambarokk19:31
NobodyCampaste to url and just put the link in IRC19:31
*** igor___ has joined #openstack-ironic19:32
digambarI'll do it next time19:32
NobodyCam:)19:32
digambarbut after upgrading the libvirt package I am getting this error19:32
digambarI haven't found the reason behind this19:33
NobodyCamya something in libvirt seems broken19:33
NobodyCamfind / -name libvirt.so.019:34
NobodyCamis that file on the sysytem?19:34
digambaryes19:34
digambarits under /usr/list19:34
digambarhey19:35
digambarits under /usr/lib19:35
digambar path /usr/lib/libvirt.so.019:35
NobodyCamhow about ownership and chmod settings?19:36
*** igor___ has quit IRC19:37
digambarlrwxrwxrwx 1 root root 19 Feb  5 09:31 /usr/lib/libvirt.so.0 -> libvirt.so.0.1000.319:37
digambarroot & all the permission it has19:38
NobodyCamis libvirt.so.0.1000.3 there too.. that is just a sym link19:39
*** rloo_ has quit IRC19:39
digambaryes19:39
*** rloo has joined #openstack-ironic19:40
*** rloo has quit IRC19:40
*** rloo has joined #openstack-ironic19:40
*** rloo has quit IRC19:41
devanandaback for a bit19:46
NobodyCamwB devananda how was the run19:46
*** rloo has joined #openstack-ironic19:47
*** rloo has joined #openstack-ironic19:48
*** rloo has joined #openstack-ironic19:49
devanandagood. it's cold out there!19:49
*** rloo has quit IRC19:49
*** rloo has joined #openstack-ironic19:50
*** rloo has quit IRC19:52
*** rloo has joined #openstack-ironic19:53
*** rloo has quit IRC19:53
*** igor___ has joined #openstack-ironic20:03
*** igor____ has joined #openstack-ironic20:05
*** igor___ has quit IRC20:07
*** martyntaylor has left #openstack-ironic20:12
*** digambar has quit IRC20:14
*** romcheg has left #openstack-ironic20:32
*** rloo has joined #openstack-ironic20:52
*** max_lobur has joined #openstack-ironic20:58
max_loburfinally back20:59
NobodyCamWB max_lobur20:59
max_loburdid I miss something important? (not able to read scrollback now:( )20:59
max_loburthanks :)20:59
NobodyCamnope :-p21:00
max_lobur:)21:00
NobodyCamoh also http://eavesdrop.openstack.org/irclogs/21:01
NobodyCam:-p in case you ever need21:01
*** rloo has quit IRC21:04
*** rloo has joined #openstack-ironic21:05
max_loburah cool21:07
max_loburI forgot about logging21:07
NobodyCam:-p21:08
devanandaconstruction noise has gotten too loud after lunch ... i'm heading to a coffee shop to work. bbiab21:09
NobodyCamenjoy the GOOD coffee21:09
NobodyCam:-p21:09
*** rloo has quit IRC21:12
*** rloo has joined #openstack-ironic21:13
*** rloo has quit IRC21:13
*** rloo has joined #openstack-ironic21:14
*** mrda_away is now known as mrda21:17
* mrda is working from a cafe this morning21:17
NobodyCam:)21:19
NobodyCamnice /me feels left out21:19
*** rloo has quit IRC21:22
*** rloo has joined #openstack-ironic21:22
NobodyCambrb21:23
*** blamar has joined #openstack-ironic21:25
*** blamar has quit IRC21:26
*** blamar has joined #openstack-ironic21:26
*** romcheg has joined #openstack-ironic21:29
*** rloo has quit IRC21:32
*** rloo has joined #openstack-ironic21:32
*** rloo has quit IRC21:33
*** rloo has joined #openstack-ironic21:34
*** rloo has quit IRC21:49
*** datajerk1 has quit IRC22:04
*** jdob has quit IRC22:07
*** jbjohnso has quit IRC22:07
*** mrda is now known as mrda_away22:18
*** matty_dubs is now known as matty_dubs|gone22:25
max_loburdevananda: I added my comment to https://review.openstack.org/#/c/71281/2 (race using green pool)22:29
* devananda looks22:30
devanandamax_lobur: so, there's one important difference between futures and greenpool22:32
*** mdurnosvistov_ has quit IRC22:32
devanandamax_lobur: http://eventlet.net/doc/modules/greenpool.html#eventlet.greenpool.GreenPool.spawn22:32
devanandamax_lobur: "If the pool is currently at capacity, spawn will block until one of the running greenthreads completes its task and frees up a slot.22:32
max_loburahh22:33
devanandawith futures, the max_threads setting determines the # of workers that run in parallel, but there is an unlimited # of jobs that can queue up22:33
max_loburso it won't queue too much22:33
devanandaright22:33
devanandawe'll actually get blocking if the worker pool is at capacity22:33
devanandawhich we want22:33
max_loburyea I see now22:34
max_loburyes, makes sense22:34
max_loburthen I'll rework my patch to futures22:35
max_loburgah22:35
max_loburto greenpool :)22:35
devananda:)22:35
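(A small demonstration of the difference just quoted, assuming eventlet is installed: three jobs on a two-slot pool, so the third spawn() blocks instead of queueing unboundedly the way Executor.submit() would:)

    import time

    import eventlet
    from eventlet import greenpool

    def job(n):
        eventlet.sleep(0.1)  # simulated work that yields
        return n

    pool = greenpool.GreenPool(size=2)
    t0 = time.time()
    threads = [pool.spawn(job, i) for i in range(3)]  # third call blocks
    print('spawning 3 jobs on 2 slots took %.2fs' % (time.time() - t0))
    pool.waitall()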
max_loburwhat about sync/async parts of the job? do you think it's better to control that at manager level (e.g. call sync part and then call async within greenthread)22:36
max_loburor share worker_pool and control that from utils22:37
devanandamax_lobur: what sync parts are you referring to?22:37
max_loburor define worker_pool inside thread util that I created22:37
devanandamy view is that job should be run async, what ever the "job" is defined as22:38
devanandano -- worker pool must be a singleton // class property22:38
devanandasince the conductor.ConductorManager class will only have one instance created, a class property there is effectively a singleton22:38
max_loburmodule level for thread util will serve for that too22:39
max_loburmodule imported only once22:39
max_loburhttps://review.openstack.org/#/c/69135/1/ironic/conductor/utils.py22:39
max_lobur_validate_node_power_action(task, node, state):22:39
max_loburthis is what I'd like to do sync22:39
max_lobur_perform_node_power_action(task, node, state): - this one should be async22:39
devanandaah, so22:42
devanandai'd like to get rid of all the validate calls as separate RPC things22:42
devanandaeven validate_vendor_passthru22:42
devanandathe only reason those existed was we had to make call+cast until now22:42
devanandaalso, even internally, i dont think we need them as separate steps22:42
devananda- take lock22:43
max_loburyep, I understand22:43
devananda-- start thread (to do work)22:43
devananda-- return22:43
devananda--- thread does work22:43
devananda--- thread updates node.last_error if it fails22:43
devananda--- thread returns to pool when complete22:43
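(That flow, sketched as a conductor-side handler; last_error, the task object, and the pool mirror Ironic's shape, but the names and calls are illustrative, not the code that eventually merged:)

    from eventlet import greenpool

    worker_pool = greenpool.GreenPool(size=64)

    def change_node_power_state(task, driver, new_state):
        # sync part already happened: the caller acquired 'task' (the
        # lock); a NodeLocked error there is the "failed to start" path
        def _worker():
            try:
                driver.set_power_state(task, new_state)  # the slow work
            except Exception as e:
                task.node.last_error = str(e)  # visible via GET /nodes/X
            finally:
                task.release()  # frees the lock and the worker slot
        worker_pool.spawn(_worker)
        # returning here is the synchronous "job started" answer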
max_loburhmm22:44
max_loburso22:44
max_loburtask.driver.power.validate(node)22:44
max_loburwill be async too22:45
max_loburthe user would have to poll to know its result22:45
max_loburmy thought was to perform all quick validation in a sync way22:46
max_loburalso there are a bunch of validations directly in the api code22:46
max_loburhttps://review.openstack.org/#/c/69135/1/ironic/api/controllers/v1/node.py see my TODOs22:47
max_loburI'd like to move them to conductor, so we can have all our rules defined in one place22:47
devanandahmm22:48
max_loburwe may not need to hit the DB from the API at all - no need to get the rpc node22:48
devanandaiterating through the driver.interface.validate() method calls22:48
max_loburwe just passing node uuid and desired state to conductor and it does all the job22:49
devanandacould take a bit, but otherwise how do we get the result correctly back to the user (without a separate request tracking service)22:49
devanandai mean for GET /v1/nodes/XXXX/validate22:49
devanandathat has to be sync22:49
max_loburwell, I meant something else22:49
max_loburGET /v1/nodes/XXXX/validate will be sync22:50
max_loburbecause it's a separate request, we're already doing RPC "call" there right?22:50
* NobodyCam takes a few minutes to look for some food stuffs22:51
max_loburI was talking about task.driver.power.validate(node) which is called within "change_node_power_state"22:51
max_loburI thought we will make it sync too22:51
max_loburhttps://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L531-L53722:55
max_loburhttps://github.com/openstack/ironic/blob/master/ironic/conductor/rpcapi.py#L21922:55
max_loburGET /v1/nodes/XXXX/validate will always be sync, it's a separate method22:56
*** romcheg has left #openstack-ironic22:57
* max_lobur re-reads discussion23:00
max_loburRE: but otherwise how do we get the result correctly back to the user23:00
max_loburtrue23:00
max_loburit will appear in last error, and in logs23:01
max_loburit may not be his error, right? e.g. how would he know whether he caused that last_error23:01
devanandammm, back. was distracted in another channel23:02
devanandamax_lobur: right, so calling task.driver.power.validate(node) from within change_node_power_state should be part of the async job23:03
devanandaor rather23:03
devanandawe shouldn't call it explicitly23:03
devanandait's the driver's responsibility to fail sanely if it is lacking info when we call driver.power.set_power_state23:03
max_loburso, you're proposing to remove this https://github.com/openstack/ironic/blob/master/ironic/conductor/utils.py#L3723:06
max_loburright?23:06
*** anniec has joined #openstack-ironic23:06
max_loburso the user will have to issue GET /v1/nodes/XXXX/validate to validate separately23:07
devanandamax_lobur: user may or may not issue validate -- that is up to user23:07
devanandamax_lobur: if user issues PUT /v1/nodes/xxx/state/power {'on'}, then this will synchronously *start* the job23:08
max_loburagree23:08
devanandauser will know if job starts or fails to start23:08
devanandabut no more than that23:08
max_loburk23:08
max_loburgot it23:08
devanandathen user will GET /v1/nodes/XXX23:08
devanandato observe the state23:08
max_loburor last_error23:09
devanandaright23:09
devanandanow...23:09
max_loburthis will simplify my patch23:09
devanandathis resolves the race that was causing nova.spawn to fail23:09
devanandaif the user sees that job failed to start (NodeLocked exception) he can wait & retry some # of times23:09
devanandaand get status, etc, to make sure things are OK to continue23:10
devanandait does NOT solve a separate race that we also have today23:10
devanandawhere multiple jobs run SUCCESSFULLY in sequence23:10
devananda^D^D^D23:10
devanandawhere multiple jobs start successfully in sequence23:10
devanandaand the status of all N-1 jobs are overwritten by job N23:10
devanandathis is not a problem for nova, where it should be the only "user" requesting power or deploy changes23:11
devanandaand there's no way to solve it today23:11
devanandawe need separate "job" service to track each request's status23:11
devanandain Juno :)23:11
max_loburcan't get the last23:12
max_loburRE: where multiple jobs start successfully in sequence23:12
max_loburyou mean start and end? right?23:12
devanandauser A starts job 123:12
devanandajob 1 finishes23:12
devanandauser B starts job 223:12
devanandajob 2 finishes23:12
max_loburyep, I see23:12
devanandauser A gets node status to see what happened to job 123:12
max_loburwe need a separate request modeling in db23:13
max_loburyea23:13
devanandaright23:13
max_loburI thought about this as well23:13
max_loburright23:13
devanandathat's very much outside the scope for Icehouse23:13
max_loburthis is too much for icehouse23:13
devanandawhat we are doing with greenpool solves the important race that is breaking Nova23:13
max_loburk, cool, thanks for your time, I'll rework my patch tomorrow23:14
devanandamax_lobur: np, thank you!23:14
devanandasolving problems like this one is fun :)23:15
max_lobur:)23:15
*** mrda_away is now known as mrda23:45
*** epim has joined #openstack-ironic23:58
