*** romcheg1 has joined #openstack-ironic | 00:04 | |
*** romcheg has joined #openstack-ironic | 00:07 | |
*** romcheg1 has quit IRC | 00:09 | |
*** romcheg has left #openstack-ironic | 00:51 | |
*** jbjohnso has joined #openstack-ironic | 00:57 | |
*** jbjohnso has quit IRC | 01:08 | |
*** romcheg has joined #openstack-ironic | 01:23 | |
*** romcheg has left #openstack-ironic | 01:23 | |
*** nosnos has joined #openstack-ironic | 01:24 | |
*** Haomeng has quit IRC | 01:51 | |
*** jbjohnso has joined #openstack-ironic | 02:15 | |
*** mrda has quit IRC | 02:39 | |
*** mrda has joined #openstack-ironic | 02:42 | |
*** d4n_ has joined #openstack-ironic | 03:14 | |
*** d4n_ has left #openstack-ironic | 03:14 | |
*** vkozhukalov has joined #openstack-ironic | 03:28 | |
*** jbjohnso has quit IRC | 04:06 | |
*** coolsvap has joined #openstack-ironic | 04:19 | |
*** yfujioka has quit IRC | 04:26 | |
*** aignatov has joined #openstack-ironic | 05:12 | |
*** aignatov has quit IRC | 05:17 | |
*** michchap has quit IRC | 05:37 | |
*** michchap has joined #openstack-ironic | 05:37 | |
*** nosnos has quit IRC | 05:52 | |
*** nosnos_ has joined #openstack-ironic | 05:52 | |
openstackgerrit | Jenkins proposed a change to openstack/ironic: Imported Translations from Transifex https://review.openstack.org/65032 | 06:04 |
*** bashok has joined #openstack-ironic | 06:26 | |
*** tzumainn has quit IRC | 06:26 | |
*** mrda has quit IRC | 06:41 | |
*** vkozhukalov has quit IRC | 06:44 | |
*** nosnos_ has quit IRC | 06:52 | |
*** nosnos has joined #openstack-ironic | 06:53 | |
*** mrda has joined #openstack-ironic | 07:09 | |
*** mrda has quit IRC | 07:20 | |
*** Haomeng has joined #openstack-ironic | 07:34 | |
*** Haomeng has quit IRC | 07:40 | |
*** Haomeng has joined #openstack-ironic | 07:40 | |
*** ifarkas has joined #openstack-ironic | 07:43 | |
*** mdurnosvistov has joined #openstack-ironic | 07:54 | |
*** mrda has joined #openstack-ironic | 08:02 | |
*** jistr has joined #openstack-ironic | 08:06 | |
*** mrda has quit IRC | 08:09 | |
agordeev | morning Ironic :) | 08:20 |
Haomeng | agordeev: morning:) | 08:23 |
GheRivero | morning all | 08:24 |
Haomeng | GheRivero: morning:) | 08:25 |
agordeev | Haomeng, GheRivero morning :) | 08:27 |
Haomeng | agordeev: :) | 08:27 |
*** vkozhukalov has joined #openstack-ironic | 08:28 | |
*** vkozhukalov has quit IRC | 08:34 | |
*** Alexei_987 has joined #openstack-ironic | 08:36 | |
*** Haomeng has quit IRC | 08:39 | |
*** kpavel has joined #openstack-ironic | 08:40 | |
*** kpavel_ has joined #openstack-ironic | 08:40 | |
*** Haomeng has joined #openstack-ironic | 08:43 | |
*** kpavel has quit IRC | 08:44 | |
*** kpavel_ is now known as kpavel | 08:44 | |
*** mrda has joined #openstack-ironic | 08:44 | |
*** vkozhukalov has joined #openstack-ironic | 08:51 | |
openstackgerrit | Dmitry Shulyak proposed a change to openstack/ironic: alembic with initial migration and tests https://review.openstack.org/67415 | 08:53 |
*** romcheg has joined #openstack-ironic | 09:01 | |
*** dshulyak has joined #openstack-ironic | 09:04 | |
*** derekh has joined #openstack-ironic | 09:15 | |
*** aignatov_ has joined #openstack-ironic | 09:24 | |
*** jistr has quit IRC | 09:31 | |
*** jistr has joined #openstack-ironic | 09:32 | |
*** lsmola_ has joined #openstack-ironic | 09:38 | |
*** derekh is now known as derekh_afk | 09:59 | |
*** bashok has quit IRC | 10:02 | |
*** bashok has joined #openstack-ironic | 10:02 | |
*** rwsu has joined #openstack-ironic | 10:03 | |
*** vkozhukalov has quit IRC | 10:04 | |
*** vetalll has joined #openstack-ironic | 10:11 | |
*** vkozhukalov has joined #openstack-ironic | 10:16 | |
openstackgerrit | Dmitry Shulyak proposed a change to openstack/ironic: alembic with initial migration and tests https://review.openstack.org/67415 | 10:16 |
*** max_lobur_afk is now known as max_lobur | 10:18 | |
*** ifarkas has quit IRC | 10:29 | |
*** martyntaylor has joined #openstack-ironic | 10:31 | |
*** ifarkas has joined #openstack-ironic | 10:32 | |
*** mrda has quit IRC | 10:39 | |
*** mrda has joined #openstack-ironic | 10:46 | |
*** bashok has quit IRC | 11:01 | |
*** martyntaylor has quit IRC | 11:07 | |
*** martyntaylor1 has joined #openstack-ironic | 11:08 | |
*** lucasagomes has joined #openstack-ironic | 11:09 | |
*** jistr has quit IRC | 11:11 | |
*** jistr has joined #openstack-ironic | 11:14 | |
*** martyntaylor has joined #openstack-ironic | 11:15 | |
*** martyntaylor1 has quit IRC | 11:15 | |
*** bashok has joined #openstack-ironic | 11:20 | |
*** athomas has joined #openstack-ironic | 11:33 | |
*** derekh_afk is now known as derekh | 11:38 | |
max_lobur | ping GheRivero | 11:50 |
GheRivero | max_lobur: pong | 11:50 |
max_lobur | Hi Ghe | 11:50 |
GheRivero | morning | 11:50 |
max_lobur | Thank you for review of https://review.openstack.org/#/c/66349/3 | 11:51 |
GheRivero | let's hope it makes its way on time | 11:51 |
max_lobur | do we need some other reviews to get it merged / do I need to ping someone or just wait | 11:52 |
GheRivero | I don't know. Actually, I never know how I ended in the core group for requirements in the first place :) | 11:54 |
max_lobur | hehe :) | 11:54 |
*** mrda has quit IRC | 11:56 | |
max_lobur | I'll try to ask someone in infra channel | 11:57 |
openstackgerrit | A change was merged to openstack/ironic: Imported Translations from Transifex https://review.openstack.org/65032 | 12:03 |
*** coolsvap has quit IRC | 12:07 | |
*** jbjohnso has joined #openstack-ironic | 13:02 | |
*** max_lobur is now known as max_lobur_afk | 13:26 | |
*** martyntaylor has quit IRC | 13:33 | |
*** jdob has joined #openstack-ironic | 13:33 | |
*** martyntaylor has joined #openstack-ironic | 13:36 | |
*** linggao has joined #openstack-ironic | 13:45 | |
*** rloo has joined #openstack-ironic | 13:47 | |
Haomeng | GheRivero: ping | 14:07 |
Haomeng | GheRivero: I have a global requirement patch to be reviewed https://review.openstack.org/#/c/64408/ , can you help? Thanks:) | 14:10 |
Haomeng | GheRivero: :) | 14:10 |
*** bashok has quit IRC | 14:14 | |
*** tzumainn has joined #openstack-ironic | 14:21 | |
*** tzumainn has quit IRC | 14:21 | |
*** vetalll has left #openstack-ironic | 14:21 | |
lucasagomes | are the tests in ironic failing for u guys as well? DBError: (IntegrityError) UNIQUE constraint failed: | 14:23 |
lucasagomes | (wondering if it's the sqlite version) | 14:23 |
Haomeng | lucasagomes: what db are you using? MySQL? | 14:24 |
lucasagomes | Haomeng, it's for the tests so sqlite I think it is | 14:24 |
lucasagomes | [lucasagomes@lucasagomes ironic]$ sudo rpm -q sqlite | 14:25 |
lucasagomes | sqlite-3.8.2-1.fc20.x86_64 | 14:25 |
*** matty_dubs|gone is now known as matty_dubs | 14:25 | |
Haomeng | lucasagomes: I found we have some code branch for sqlite only, and I use MySQL by default:) | 14:25 |
Haomeng | lucasagomes: this code - https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/migrate_repo/versions/014_add_address_uc_sqlite.py#L21 | 14:27 |
lucasagomes | Haomeng, lemme take a look | 14:27 |
lucasagomes | thanks :D | 14:27 |
*** jrist has quit IRC | 14:28 | |
Haomeng | lucasagomes: not sure if this is root cause:) | 14:28 |
Haomeng | lucasagomes: another possible root cause: mysql is case-insensitive, but sqlite is case-sensitive, and our Jenkins will use MySQL as the default database | 14:30 |
Haomeng | lucasagomes: so some code works with MySQL, but not with sqlite:) | 14:31 |
lucasagomes | yea I will try to change the engine | 14:31 |
Haomeng | lucasagomes: good luck:) | 14:31 |
lucasagomes | and run the tests with an older version of sqlite | 14:31 |
lucasagomes | Haomeng, cheers | 14:31 |
Haomeng | lucasagomes: :) | 14:31 |
lucasagomes | thanks for the inputs as well :) | 14:31 |
Haomeng | lucasagomes: :) | 14:31 |
Haomeng | lucasagomes: nice day, I will go to sleep:) | 14:32 |
lucasagomes | Haomeng, have a g'night buddy | 14:33 |
Haomeng | lucasagomes: :) | 14:33 |
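The failure lucasagomes and Haomeng are chasing can be illustrated with the standard library alone. This hypothetical sketch (table and column names invented, loosely modeled on the port-address unique constraint they link to) shows both points from the exchange: SQLite compares TEXT case-sensitively by default where MySQL's default collations do not, and an exact duplicate raises the IntegrityError seen in the test traceback (SQLite 3.8+ changed the message wording to "UNIQUE constraint failed: ..."):

```python
import sqlite3

# In-memory database standing in for the test suite's SQLite backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ports (id INTEGER PRIMARY KEY, address TEXT UNIQUE)")
conn.execute("INSERT INTO ports (address) VALUES ('aa:bb:cc:dd:ee:ff')")

# SQLite compares TEXT case-sensitively by default, so this does NOT
# violate the unique constraint -- MySQL's default collation would reject it.
conn.execute("INSERT INTO ports (address) VALUES ('AA:BB:CC:DD:EE:FF')")

# An exact duplicate does trip the constraint; the message wording
# depends on the SQLite version (3.8+ says "UNIQUE constraint failed").
try:
    conn.execute("INSERT INTO ports (address) VALUES ('aa:bb:cc:dd:ee:ff')")
except sqlite3.IntegrityError as e:
    print(e)
```

This is why a unit test can pass against MySQL in the gate yet fail locally against a newer SQLite.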
*** max_lobur_afk is now known as max_lobur | 14:33 | |
*** yuriyz has joined #openstack-ironic | 14:34 | |
*** rloo has quit IRC | 14:36 | |
*** rloo has joined #openstack-ironic | 14:36 | |
GheRivero | I found this about sqlite/mysql the other day https://bugs.launchpad.net/ironic/+bug/1265518 | 14:39 |
GheRivero | Haomeng: taking a look at your patch | 14:39 |
*** coolsvap has joined #openstack-ironic | 14:41 | |
*** rloo has quit IRC | 14:44 | |
*** rloo has joined #openstack-ironic | 14:44 | |
*** rloo has quit IRC | 14:46 | |
*** rloo has joined #openstack-ironic | 14:46 | |
openstackgerrit | Dmitry Shulyak proposed a change to openstack/ironic: alembic with initial migration and tests https://review.openstack.org/67415 | 14:55 |
*** nosnos has quit IRC | 15:06 | |
*** pradipta has joined #openstack-ironic | 15:19 | |
*** rloo has quit IRC | 15:22 | |
*** rloo has joined #openstack-ironic | 15:22 | |
*** martyntaylor has quit IRC | 15:23 | |
*** rloo has quit IRC | 15:23 | |
*** martyntaylor has joined #openstack-ironic | 15:23 | |
*** rloo has joined #openstack-ironic | 15:24 | |
*** lucasagomes is now known as lucas-hungry | 15:33 | |
*** vkozhukalov has quit IRC | 15:34 | |
*** kpavel has quit IRC | 15:38 | |
openstackgerrit | Mikhail Durnosvistov proposed a change to openstack/ironic: Removes use of timeutils.set_time_override https://review.openstack.org/67432 | 15:45 |
openstackgerrit | dekehn proposed a change to openstack/ironic: Adds Neutron support to Ironic https://review.openstack.org/66071 | 15:59 |
*** ifarkas has quit IRC | 16:08 | |
*** lucas-hungry is now known as lucasagomes | 16:21 | |
*** romcheg1 has joined #openstack-ironic | 16:21 | |
*** romcheg has quit IRC | 16:22 | |
openstackgerrit | Yuriy Zveryanskyy proposed a change to openstack/ironic: Add parameter for filtering nodes by maintenance mode https://review.openstack.org/63937 | 16:24 |
*** mdurnosvistov has quit IRC | 16:30 | |
*** dshulyak has quit IRC | 16:36 | |
*** rloo has quit IRC | 16:47 | |
*** jistr has quit IRC | 16:48 | |
*** rloo has joined #openstack-ironic | 16:48 | |
*** rloo has quit IRC | 16:49 | |
*** rloo has joined #openstack-ironic | 16:49 | |
*** aignatov_ has quit IRC | 16:50 | |
*** GheRiver1 has joined #openstack-ironic | 16:53 | |
*** GheRiver1 has quit IRC | 16:53 | |
*** zheliabina has joined #openstack-ironic | 17:02 | |
*** rloo has quit IRC | 17:07 | |
*** ifarkas has joined #openstack-ironic | 17:08 | |
*** rloo has joined #openstack-ironic | 17:08 | |
*** vkozhukalov has joined #openstack-ironic | 17:08 | |
*** zheliabina has quit IRC | 17:08 | |
openstackgerrit | Lucas Alvares Gomes proposed a change to openstack/ironic: Delete the iscsi target the deploy https://review.openstack.org/67877 | 17:11 |
*** rloo has quit IRC | 17:13 | |
*** rloo has joined #openstack-ironic | 17:13 | |
*** rloo has quit IRC | 17:14 | |
*** rloo has joined #openstack-ironic | 17:14 | |
*** rloo has quit IRC | 17:14 | |
openstackgerrit | Lucas Alvares Gomes proposed a change to openstack/ironic: Delete the iscsi target https://review.openstack.org/67877 | 17:15 |
*** rloo has joined #openstack-ironic | 17:15 | |
*** rloo has quit IRC | 17:16 | |
*** rloo has joined #openstack-ironic | 17:16 | |
*** agordeev2 has joined #openstack-ironic | 17:16 | |
openstackgerrit | Lucas Alvares Gomes proposed a change to openstack/ironic: Delete the iscsi target https://review.openstack.org/67877 | 17:18 |
*** rloo has quit IRC | 17:18 | |
lucasagomes | pep8 :P | 17:18 |
*** rloo has joined #openstack-ironic | 17:18 | |
*** romcheg has joined #openstack-ironic | 17:20 | |
*** romcheg1 has quit IRC | 17:21 | |
*** mdurnosvistov has joined #openstack-ironic | 17:30 | |
*** Alexei_987 has quit IRC | 17:33 | |
*** rloo has quit IRC | 17:36 | |
*** rloo has joined #openstack-ironic | 17:36 | |
*** rloo_ has joined #openstack-ironic | 17:36 | |
*** rloo has quit IRC | 17:36 | |
*** martyntaylor has quit IRC | 17:38 | |
*** matty_dubs is now known as matty_dubs|lunch | 17:38 | |
*** rloo_ has quit IRC | 17:40 | |
*** rloo has joined #openstack-ironic | 17:40 | |
*** aignatov_ has joined #openstack-ironic | 17:41 | |
*** matty_dubs|lunch has quit IRC | 17:43 | |
*** rloo has quit IRC | 17:43 | |
*** rloo has joined #openstack-ironic | 17:44 | |
*** rloo has quit IRC | 17:45 | |
*** rloo has joined #openstack-ironic | 17:46 | |
*** aignatov_ has quit IRC | 17:47 | |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Add support for retrieving SDR data https://review.openstack.org/67299 | 17:48 |
*** rloo has quit IRC | 17:51 | |
*** rloo has joined #openstack-ironic | 17:51 | |
jbjohnso | if anyone is interested, that's probably where I'm going to land concretely in SDR support for IPMI | 17:52 |
jbjohnso | for numeric linear/linearizable sensors | 17:53 |
*** rloo has quit IRC | 17:53 | |
*** rloo has joined #openstack-ironic | 17:54 | |
NobodyCam | good morning Ironic says the man lazy-a-lee poking his head in and waving | 17:55 |
GheRivero | morning NobodyCam | 17:56 |
openstackgerrit | dekehn proposed a change to openstack/ironic: Adds Neutron support to Ironic https://review.openstack.org/66071 | 17:56 |
rloo | morning NobodyCam. Happy Martin Luther King Jr. Day | 17:57 |
jbjohnso | on that neutron stuff, it looks like the approach to neutron is very simplistic; is the idea just pxelinux options, or supporting adaptive responses to support non-BIOS stuff? | 17:58 |
NobodyCam | morning GheRivero rloo :) | 17:58 |
NobodyCam | so anyone have any thoughts on something like node-reset in the cli? | 18:01 |
NobodyCam | lucasagomes: devananda ^^^ | 18:01 |
rloo | NobodyCam: what would a node-reset do? | 18:02 |
romcheg | NobodyCam: what is it intended to do? | 18:03 |
NobodyCam | reset everything to default... | 18:03 |
romcheg | whoops, rloo was first :) | 18:03 |
NobodyCam | make my testing easier :-p | 18:03 |
*** derekh has quit IRC | 18:03 | |
devananda | morning, all! | 18:03 |
rloo | NobodyCam: sure then! | 18:03 |
romcheg | NobodyCam: reset ramdisk? :D | 18:03 |
devananda | I'm running late today... gonna grab breakfast and be back for the meeting | 18:03 |
romcheg | Morning devananda, NobodyCam rloo | 18:03 |
NobodyCam | say you get a node stuck in provision state deploying | 18:04 |
NobodyCam | :) devananda we too | 18:04 |
NobodyCam | romcheg: nope just the nodes data base | 18:04 |
lucasagomes | NobodyCam, morning | 18:04 |
lucasagomes | devananda, romcheg rloo morning | 18:04 |
NobodyCam | morning lucasagomes | 18:04 |
rloo | afternoon lucasagomes, evening romcheg. | 18:05 |
lucasagomes | NobodyCam, you mean node tear-down? | 18:05 |
lucasagomes | to clean up the environment etc | 18:05 |
romcheg | Well, we can definitely just power down the node but the data which is already on the node must be cleaned up | 18:05 |
lucasagomes | you can PUT {'target': 'deleted'} to /states/provision | 18:05 |
rloo | NobodyCam: you just want the db updated to reflect a different state? | 18:05 |
NobodyCam | lucasagomes: not if stuck in deploying | 18:06 |
NobodyCam | and oh say the instance uuid disapired | 18:06 |
NobodyCam | :-p | 18:06 |
lucasagomes | NobodyCam, so it's deploying, but didn't fail yet? | 18:07 |
NobodyCam | correct ... inface would never fail | 18:07 |
lucasagomes | yea, cause there's no timeout | 18:07 |
NobodyCam | *infact | 18:07 |
lucasagomes | for pxe, :P | 18:07 |
*** matty_dubs|lunch has joined #openstack-ironic | 18:07 | |
lucasagomes | :/* | 18:07 |
NobodyCam | so I thought about node-reset | 18:08 |
romcheg | We need lock breaking to implement node-reset | 18:08 |
NobodyCam | just in case nodes ever got out of sync | 18:08 |
lucasagomes | hmm /me thinking | 18:08 |
lucasagomes | cause idk about node-reset, seems too powerful, might leave things in an inconsistent state | 18:09 |
lucasagomes | I see the benefits for our tests | 18:09 |
lucasagomes | that would help us | 18:09 |
lucasagomes | but then we can just go and edit the database | 18:09 |
lucasagomes | I think the timeout would be a better final approach | 18:09 |
NobodyCam | lucasagomes: that use is helpful for us in testing but in the real world... could be misused... | 18:09 |
lucasagomes | deploy would fail cause it timed out, so you can do a node tear-down | 18:09 |
lucasagomes | NobodyCam, yea | 18:10 |
*** rloo has quit IRC | 18:10 | |
NobodyCam | :-p | 18:10 |
romcheg | But we never know what time should pass before a timeout | 18:10 |
lucasagomes | romcheg, there's a configuration for that already | 18:10 |
lucasagomes | we just don't use it | 18:10 |
*** rloo has joined #openstack-ironic | 18:10 | |
*** athomas has quit IRC | 18:10 | |
NobodyCam | yea it's up to the network to set a realistic value for their network | 18:11 |
NobodyCam | gah /me needs more coffee | 18:11 |
romcheg | Yes, but as far as I remember we agreed to not perform operations that might break anything automatically | 18:11 |
NobodyCam | its up to the network admin to .... | 18:11 |
*** rloo has quit IRC | 18:13 | |
*** pradipta has quit IRC | 18:13 | |
*** rloo has joined #openstack-ironic | 18:13 | |
max_lobur | morning / evening Ironic | 18:14 |
NobodyCam | morning max_lobur | 18:15 |
lucasagomes | morning max_lobur | 18:15 |
lucasagomes | romcheg, https://bugs.launchpad.net/nova/+bug/1195073 | 18:16 |
*** vkozhukalov has quit IRC | 18:19 | |
*** matty_dubs|lunch is now known as matty_dubs | 18:26 | |
* NobodyCam makes coffee for the meeting | 18:39 | |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Add support for retrieving SDR data https://review.openstack.org/67299 | 18:43 |
*** linggao has quit IRC | 18:55 | |
*** yuriyz has left #openstack-ironic | 18:56 | |
*** linggao has joined #openstack-ironic | 18:57 | |
* devananda is back | 18:57 | |
NobodyCam | WB | 18:57 |
romcheg | Guys, why do we set os=False when doing monkeypatch? | 18:58 |
*** aignatov_ has joined #openstack-ironic | 19:03 | |
openstackgerrit | Matt Wagner proposed a change to openstack/ironic: API: Add sample() method on Node https://review.openstack.org/65536 | 19:35 |
*** pokrov has joined #openstack-ironic | 19:41 | |
devananda | max_lobur, lucasagomes if you guys can't reproduce bug 1244747, we can reclose it and chalk it up to weirdness in my env (which needed to be wiped anyway) | 19:42 |
lucasagomes | devananda, hmm I have to try again and see if I can reproduce that | 19:43 |
lucasagomes | I tested it before it got reopened | 19:43 |
lucasagomes | I will do it tomorrow | 19:43 |
max_lobur | lucasagomes, thanks! | 19:43 |
max_lobur | I tested just today | 19:44 |
max_lobur | worked fine | 19:44 |
lucasagomes | right I'll write it down here and test it tomorrow then | 19:45 |
lucasagomes | max_lobur, thank you for fixing :D | 19:45 |
*** pokrov has quit IRC | 19:45 | |
*** k4n01 has joined #openstack-ironic | 19:51 | |
romcheg | max_lobur and I tested the thread pool executor from futures, which might be useful for performing some tasks in Ironic | 19:57 |
romcheg | We discovered that we need to have eventlet monkey-patch os for that | 19:57 |
romcheg | However, currently os is not patched. | 19:57 |
romcheg | Can someone please tell me why? | 19:58 |
romcheg | I ran tests with eventlet.monkey_patch(os=True) and everything seems to work | 19:58 |
NobodyCam | great meeting eveyone | 20:01 |
max_lobur | let's finish with tempest and then go to race | 20:01 |
k4n01 | same here | 20:01 |
devananda | awesome meeting, everyone! | 20:01 |
max_lobur | https://review.openstack.org/#/c/67854/1 this is the first patch expanding tempest | 20:01 |
devananda | ok, tempest API tests | 20:01 |
romcheg | regarding tempest test coverage: there might be problems in changing the API specification: to update Ironic's code we need to update tempest, but that update will fail until the patch to Ironic is merged | 20:01 |
devananda | so | 20:02 |
devananda | that's a good point, BUT the wrong way to look at it | 20:02 |
devananda | we shouldn't be breaking API compat after icehouse release | 20:02 |
devananda | right now we're in the weird situation of having no prior release that we need to maintain compat with | 20:02 |
max_lobur | :) | 20:02 |
devananda | so we have been breaking things between patches without regard | 20:02 |
devananda | we need to stop that | 20:03 |
devananda | and the tempest API stuff will force us to :) | 20:03 |
devananda | it's good, IMNSHO | 20:03 |
NobodyCam | devananda: +1 | 20:03 |
lucasagomes | yea I tend to agree with that | 20:03 |
devananda | when we update our API, we need to be compatible at a minimum between one patchset and the next | 20:03 |
lucasagomes | api should ne stable | 20:03 |
lucasagomes | be* | 20:03 |
devananda | and when we release icehouse, we'll need to remain compatible with THAT until the J release | 20:03 |
NobodyCam | if we need to break it then we'll need to talk about v2 of the api | 20:03 |
devananda | fwiw, once Icehouse is released, it will be added to the stable checks | 20:04 |
max_lobur | ok, then I'm going to port all the existing tests from the API to tempest + create new ones for uncovered pathways, right? | 20:04 |
devananda | and we should continue to test compat of the tip of python-ironicclient against the last stable release, too | 20:05 |
devananda | max_lobur: ++ | 20:05 |
max_lobur | ok :) | 20:05 |
devananda | max_lobur: in an ideal world, i think we should have the same API unit and functional tests, but i may be overzealous in testing some times ... | 20:05 |
max_lobur | haha | 20:05 |
rloo | are we happy with our api then? lucasagomes: any TODOs or 'i'd like to change' things? | 20:06 |
max_lobur | how many changes do we expect in the api till icehouse? | 20:06 |
devananda | deploy is done, afaik | 20:06 |
lucasagomes | rloo, yea, well there's only one inconsistency i'd like to remove | 20:07 |
devananda | we're missing an endpoint for interrupt-type things, though, right? | 20:07 |
lucasagomes | between the /provision and /power calls | 20:07 |
devananda | (gah, i need to step AFK again for a few minutes... sorry. brb) | 20:07 |
lucasagomes | /power does return something while /provision doesn't | 20:07 |
lucasagomes | there's a patch for that | 20:07 |
lucasagomes | also there's some FIXMEs on that part as well | 20:08 |
*** ifarkas has quit IRC | 20:08 | |
rloo | lucasagomes: ok, so we should try to get those in soon. | 20:08 |
lucasagomes | because we need to return the Location header pointing to /states so that clients know that they need to GET /states in order to track the state changes | 20:08 |
lucasagomes | rloo, unfortunately it needs to be fixed in wsme first | 20:08 |
rloo | lucasagomes: :-( | 20:09 |
lucasagomes | wsme doesn't currently support changing the location header | 20:09 |
lucasagomes | there's a bug for it, we can try to implement that in wsme | 20:09 |
lucasagomes | it's open source :) | 20:09 |
rloo | lucasagomes: true! | 20:09 |
max_lobur | :) | 20:09 |
lucasagomes | rloo, https://bugs.launchpad.net/wsme/+bug/1233687 | 20:10 |
rloo | thx lucasagomes. Guess it affects me too ;) | 20:11 |
lucasagomes | hah | 20:11 |
lucasagomes | yea, well it's been assigned to him for a long time | 20:12 |
rloo | lucasagomes: it is 'wishlist' so may not be important to fix soon. | 20:12 |
lucasagomes | so it might be worth talking to him to see if he's making a patch for it first | 20:12 |
lucasagomes | rloo, yea | 20:12 |
lucasagomes | although it's very important | 20:12 |
rloo | is it worth adding a comment that it is important for us? or i guess contacting julien. | 20:13 |
max_lobur | devananda, ok, so I'll start from what we have in the API, and then will increase that once new APIs are merged | 20:13 |
max_lobur | we can move to race | 20:13 |
lucasagomes | rloo, +1 | 20:14 |
max_lobur | brb in 2 mins | 20:14 |
devananda | back | 20:14 |
lucasagomes | rloo, I think it's important for the REST thing in general | 20:14 |
lucasagomes | many people would use the location header for async tasks | 20:14 |
lucasagomes | maybe we can even talk about changing that priority | 20:14 |
lucasagomes | to medium at least | 20:14 |
rloo | lucasagomes: yeah, at a minimum, medium! | 20:15 |
devananda | fwiw, other folks are spinning up projects based on pecan/wsme now, too | 20:15 |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Add support for retrieving SDR data https://review.openstack.org/67299 | 20:15 |
max_lobur | back | 20:15 |
devananda | so there'll be more exposure for missing capabilities like that | 20:16 |
lucasagomes | indeed | 20:16 |
devananda | max_lobur: ok, so, race conditions ... | 20:16 |
max_lobur | yes | 20:16 |
max_lobur | I investigated the greenthreads world, also asked a few guys around | 20:16 |
*** aignatov_ has quit IRC | 20:16 | |
max_lobur | including romcheg :) | 20:16 |
max_lobur | 1. Currently we patch threads, so each our RPC request is already handled within greenthread | 20:17 |
max_lobur | It already can hang the whole process as you mentioned | 20:17 |
max_lobur | so moving it to another greenthread won't add new problems | 20:18 |
devananda | max_lobur: sounds about right | 20:18 |
max_lobur | 2. python futures thread pool executor has handy add_done_callback feature | 20:18 |
max_lobur | http://pythonhosted.org/futures/#concurrent.futures.Future.add_done_callback | 20:19 |
max_lobur | so we can reuse futures instead of implementing our own threads | 20:19 |
max_lobur | 3. What our guys suggested to me is to use processes instead of greenthreads | 20:19 |
*** aignatov_ has joined #openstack-ironic | 20:19 | |
max_lobur | to protect ourselves from hanging conductor | 20:20 |
max_lobur | It may be possible, since I think we won't have a lot of deployments/reboots handled at the same time by one conductor | 20:20 |
devananda | max_lobur: that would require IPC to coordinate any effect of the callable | 20:20 |
max_lobur | processes won't hurt performance | 20:20 |
devananda | max_lobur: ah. that is definitely not true | 20:20 |
max_lobur | am I right? | 20:21 |
devananda | max_lobur: we should not plan according to "we wont do a lot of X" | 20:21 |
lucasagomes | we might have many deployments at the same time | 20:21 |
max_lobur | ok :) | 20:21 |
openstackgerrit | A change was merged to stackforge/pyghmi: Add support for retrieving SDR data https://review.openstack.org/67299 | 20:21 |
devananda | right now, we dont know what scalability issues we'll hit | 20:21 |
devananda | and im sure we'll hit some | 20:21 |
max_lobur | then we should go with greenthreads | 20:21 |
devananda | but as a goal, we shouldn't set ourselves low | 20:22 |
devananda | think thousands or tens of thousands :) | 20:22 |
max_lobur | furthermore there is an issue with processes + eventlet :) | 20:22 |
max_lobur | http://stackoverflow.com/questions/21242022/eventlet-hangs-when-using-futures-processpoolexecutor | 20:22 |
devananda | yes | 20:22 |
max_lobur | I hit it today :) | 20:22 |
devananda | heh | 20:22 |
max_lobur | so I'm glad we're choosing threads :D | 20:22 |
devananda | ++ stick with threads | 20:23 |
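The add_done_callback feature max_lobur points to is the same API as the stdlib concurrent.futures module (the futures package is its backport). A minimal sketch, with an invented deploy_node task standing in for conductor work and an invented release_lock callback standing in for "release the node when done":

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def deploy_node(node_id):
    # ... long-running conductor work would happen here ...
    return "deployed %s" % node_id

def release_lock(future):
    # Invoked once the future completes (in the worker thread, or
    # immediately if the future already finished when it was attached).
    results.append(future.result())

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(deploy_node, "node-1")
    future.add_done_callback(release_lock)

print(results)  # ['deployed node-1']
```

Note the callback runs even if the submitted callable raised, which is exactly what makes it suitable for lock cleanup.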
lucasagomes | right, there's also the two-step lock approach (while I haven't thought a lot about it), I thought about implementing it like: a flag on acquire to not release the node once it's finished... | 20:23 |
max_lobur | yes | 20:23 |
devananda | yep | 20:23 |
max_lobur | an alternate solution - intent lock | 20:23 |
devananda | my initial thoughts here (some months abck) were about an intent lock / two-phase lock | 20:23 |
lucasagomes | now that we have the routing algorithm in place we are sure that the rpc call will land in the same conductor | 20:23 |
max_lobur | e.g. to maintain lock between two rpc requests | 20:23 |
lucasagomes | so on we can acquire the lock with another flag, and we would successfully acquire the node on the second time if conductor is the same and the flag is true | 20:24 |
max_lobur | yep. this means our WITH block won't actually release the lock if the flag was passed | 20:25 |
max_lobur | right? | 20:25 |
lucasagomes | max_lobur, yes | 20:25 |
lucasagomes | max_lobur, we would have a flag for that, | 20:25 |
*** rloo has quit IRC | 20:25 | |
lucasagomes | I mean, a parameter on the acquire() method | 20:25 |
max_lobur | yea, I see | 20:26 |
devananda | lucasagomes: not just "true" - i think we would need a token | 20:26 |
*** rloo has joined #openstack-ironic | 20:26 | |
max_lobur | lucasagomes, + your current patch should be merged | 20:26 |
devananda | so that other API requests remain blocked during the interval | 20:26 |
devananda | however | 20:26 |
lucasagomes | max_lobur, yes, actually we need that intent lock patch before, and then my current implementation depending on the intent lock patch | 20:26 |
devananda | let's think about this in another situation | 20:26 |
devananda | what happens when the RPC layer is choked and slow? | 20:27 |
lucasagomes | devananda, right, so you mean like... you acquire and it returns a key to you | 20:27 |
*** rloo has quit IRC | 20:27 | |
devananda | what happens when RPC messages get lost? | 20:27 |
lucasagomes | that is passed as an argument for the next acquire() so that releases the lock | 20:27 |
devananda | lucasagomes: exactly | 20:27 |
lucasagomes | devananda, right that also works | 20:27 |
devananda | lucasagomes: otherwise there's still a race | 20:27 |
lucasagomes | but needs to add a db field | 20:27 |
devananda | lucasagomes: nope | 20:27 |
lucasagomes | where the key will live | 20:27 |
*** rloo has joined #openstack-ironic | 20:27 | |
devananda | lucasagomes: it'd be in memory in the ResourceManager | 20:28 |
devananda | i think | 20:28 |
lucasagomes | devananda, hmm right | 20:28 |
lucasagomes | yea that might work as well | 20:28 |
devananda | if that conductor fails or the RPC mapping gets changed, another conductor wouldn't need to break the lock | 20:28 |
devananda | if a real TaskManager lock hasn't been taken yet | 20:28 |
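A rough, hypothetical sketch of the token idea devananda describes (class and method names invented; Ironic's real TaskManager works differently): the first acquire() hands back a token held in memory, only a caller presenting that token may re-enter during the interval, and everyone else stays blocked:

```python
import threading
import uuid

class IntentLock:
    """Invented two-phase "intent lock": acquire() returns a token,
    and only the holder of that token can re-enter or release."""

    def __init__(self):
        self._guard = threading.Lock()
        self._token = None          # lives in memory, not the database

    def acquire(self, token=None):
        with self._guard:
            if self._token is None:
                self._token = uuid.uuid4().hex
                return self._token  # first phase: take the intent lock
            if token == self._token:
                return token        # second phase: same caller re-enters
            raise RuntimeError("node is locked by another request")

    def release(self, token):
        with self._guard:
            if token != self._token:
                raise RuntimeError("wrong token")
            self._token = None

lock = IntentLock()
token = lock.acquire()      # first RPC call takes the intent lock
lock.acquire(token)         # second call presents the token and proceeds
lock.release(token)
```

Because the token is in-process state, it disappears with a failed conductor, which matches the point that another conductor would not need to break the lock.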
devananda | but | 20:29 |
devananda | let's think about what happens with threads and with two-phase lock if RPC fails or is slow | 20:29 |
*** k4n01 has quit IRC | 20:30 | |
devananda | the point of the API tier making two RPC requests | 20:30 |
devananda | is to give the user some reasonably-quick feedback | 20:30 |
devananda | if their request is likely to succeed or not, and thus whether it was accepted | 20:30 |
max_lobur | yes | 20:30 |
devananda | - user request | 20:30 |
devananda | -- api receives it | 20:30 |
devananda | -- api makes synchronous RPC call | 20:30 |
devananda | -- conductor does something, returns a value across RPC | 20:30 |
devananda | -- api makes async RPC call | 20:31 |
devananda | - user gets response | 20:31 |
devananda | -- conductor does something else | 20:31 |
max_lobur | + | 20:31 |
lucasagomes | looks correct | 20:31 |
max_lobur | but the second async call may get lost | 20:31 |
devananda | we're talking about adding $something at the fourth step (conductor does something) | 20:32 |
max_lobur | and the user already got 200 response | 20:32 |
devananda | which sets up a lock until the last step | 20:32 |
max_lobur | devananda, + | 20:32 |
devananda | in the threading model, IIRC, this is a separate greenthread which will effectively sleep | 20:32 |
devananda | and get woken up // reattached at the last step | 20:32 |
devananda | max_lobur: what prevents >1 API request triggering >1 greenthread? | 20:33 |
max_lobur | the first API will acquire the lock | 20:33 |
max_lobur | subsequent ones will fail to acquire it | 20:33 |
devananda | max_lobur: ah. the TaskManager lock | 20:33 |
devananda | max_lobur: and that is then held until the last step? | 20:34 |
max_lobur | greenthread is started only if lock has been acquired | 20:34 |
*** rloo has quit IRC | 20:34 | |
max_lobur | greenthread has a callback | 20:34 |
max_lobur | in the end | 20:34 |
max_lobur | add_done_callback() | 20:34 |
max_lobur | this callback will release the lock | 20:34 |
max_lobur | so it will be held until greenthread is done | 20:34 |
devananda | when is greenthread done? | 20:35 |
*** rloo has joined #openstack-ironic | 20:35 | |
devananda | what "ends" it? | 20:35 |
max_lobur | let's see | 20:35 |
*** rloo has quit IRC | 20:35 | |
devananda | the second RPC call will create a new greenthread | 20:35 |
*** rloo has joined #openstack-ironic | 20:35 | |
devananda | so how will that cause the first one to finish, triggering the add_done_callback hook? | 20:35 |
*** rloo has quit IRC | 20:35 | |
*** rloo has joined #openstack-ironic | 20:36 | |
max_lobur | greenthread is done when utils.node_power_action(task, task.node, new_state) returned | 20:36 |
max_lobur | or the part of utils.node_power_action(task, task.node, new_state) responsible for taking power action | 20:36 |
max_lobur | the async part | 20:36 |
devananda | max_lobur: that isn't called by the first RPC call though | 20:36 |
devananda | how does the second RPC call attach to the greenthread created in the first one? | 20:37 |
*** agordeev2 has quit IRC | 20:37 | |
devananda | maybe i'm missing something obvious... | 20:37 |
max_lobur | in threading model we have one rpc call | 20:37 |
lucasagomes | I think that in the thread approach | 20:37 |
lucasagomes | there's no second rpc call | 20:37 |
lucasagomes | it's only a sync call | 20:37 |
max_lobur | lucasagomes, ++ | 20:37 |
lucasagomes | that will acquire the lock manually (not using with) | 20:37 |
lucasagomes | and that will spawn a greenthread (if data is correct and validated) | 20:38 |
lucasagomes | and return to the client | 20:38 |
devananda | ahh | 20:38 |
max_lobur | it will acquire lock, do validation, tell greenthread to release lock at the end, start greenthread, return 200 to API | 20:38 |
lucasagomes | so it's a sync call, but at the same time it's async | 20:38 |
max_lobur | lucasagomes, yea :D | 20:38 |
max_lobur | sync call which leaves something running after it | 20:39 |
lucasagomes | yup | 20:40 |
devananda | gotcha | 20:40 |
devananda | that makes sense | 20:40 |
devananda | thanks for explaining! | 20:41 |
max_lobur | sure :) | 20:41 |
devananda | so the greenthread resumes and does the actual work of power_on right after the RETURN 200 is done | 20:41 |
lucasagomes | yea | 20:41 |
max_lobur | no, right before 200 is returned | 20:41 |
devananda | so there is tiny risk if the API tier is too slow for some reason | 20:41 |
devananda | right before? | 20:42 |
max_lobur | we spawn greenthread in the step before return 200 | 20:42 |
max_lobur | it starts running in parallel | 20:42 |
max_lobur | because we can't do anything after return :) | 20:42 |
max_lobur | we'll need yield in this case, but that's another story :) | 20:43 |
lucasagomes | max_lobur, hmm well it needs to return to the client 200 | 20:43 |
max_lobur | it does | 20:43 |
lucasagomes | so the client knows that the request was accepted and is being processed (in the background by the greenthread) | 20:43 |
lucasagomes | right ok | 20:43 |
lucasagomes | so yea it does the actual work after the return (I mean it starts before but will finish after) | 20:44 |
max_lobur | https://review.openstack.org/#/c/66368/1/ironic/conductor/manager.py | 20:44 |
max_lobur | lucasagomes, ++ | 20:44 |
max_lobur | utils.async_node_power_action | 20:44 |
max_lobur | spawns a greenthread | 20:44 |
max_lobur | and after this start_change_node_power_state returns control -> sends RPC response | 20:45 |
devananda | https://review.openstack.org/#/c/66368/1/ironic/conductor/utils.py L 101 | 20:45 |
devananda | validates, then spawns a ThreadWithCallback | 20:45 |
max_lobur | ahh | 20:46 |
devananda | when that returns, it'll trickle up and trigger the API return, yes? | 20:46 |
max_lobur | that's not a mistake | 20:46 |
max_lobur | devananda, + | 20:46 |
devananda | right | 20:47 |
max_lobur | async_node_power_action will either just return (which means everything validated and the thread is running) | 20:47 |
max_lobur | or throw an exception | 20:47 |
max_lobur | from validators | 20:47 |
max_lobur | in both cases it will go back to the API | 20:47 |
devananda | yep. or a NodeLocked exception | 20:47 |
devananda | and for either exception, it won't start the ThreadWithCallback | 20:48 |
max_lobur | or exception from task.driver.power.validate(node) | 20:48 |
lucasagomes | btw, node locked exception should be converted to conflict in the api layer | 20:48 |
max_lobur | yes | 20:48 |
devananda | actually | 20:48 |
lucasagomes | but that's another thing, just a note here | 20:48 |
devananda | NodeLocked would come from https://review.openstack.org/#/c/66368/1/ironic/conductor/manager.py : 189 | 20:48 |
*** derekh has joined #openstack-ironic | 20:48 | |
max_lobur | devananda, ++ | 20:49 |
max_lobur | which is also before the spawning thread | 20:49 |
devananda | right | 20:49 |
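The error paths just walked through (NodeLocked or a validation failure, both raised before any thread is spawned, both traveling back over the synchronous RPC call) might look like the following sketch. The exception classes and the `spawn` hook are hypothetical stand-ins, not Ironic's actual code:

```python
# Validate-then-spawn: any exception raised here happens before the
# background thread exists, so it propagates back to the API layer over
# the single synchronous RPC call.
import threading


class NodeLocked(Exception):
    pass


class InvalidState(Exception):
    pass


def async_node_power_action(node, new_state, lock, spawn):
    if not lock.acquire(blocking=False):
        raise NodeLocked(node)            # API maps this to 409 Conflict
    try:
        if new_state not in ("power on", "power off", "reboot"):
            raise InvalidState(new_state)  # validator failure -> API error
    except Exception:
        lock.release()                     # nothing was spawned, release here
        raise
    # everything validated: hand off to the background thread, whose
    # done-callback is responsible for releasing the lock
    return spawn()
```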
devananda | max_lobur: stylistic question | 20:49 |
devananda | max_lobur: could we use a context manager at that layer (in conductor/manager.py) to achieve a similar thing with threads | 20:49 |
devananda | max_lobur: instead of adding manual_acquire, task.next(), etc | 20:50 |
max_lobur | I thought of it | 20:50 |
max_lobur | It can be done, but | 20:50 |
max_lobur | 1. It will be more complex | 20:51 |
max_lobur | 2. It may not be obvious that this with won't release the lock | 20:51 |
max_lobur | 2 seems critical to me | 20:51 |
max_lobur | It will be a fake WITH | 20:51 |
devananda | ah | 20:52 |
max_lobur | that releases nothing when control leaves it | 20:52 |
devananda | stylistically, i'm not fond of what's there now, either | 20:52 |
devananda | it's not clear when the lock is released | 20:52 |
devananda | all we see is endcallback = task.next() | 20:52 |
max_lobur | end_callback = lambda: task.next() | 20:52 |
devananda | yea | 20:52 |
max_lobur | yes not obvious | 20:53 |
*** mrda has joined #openstack-ironic | 20:53 | |
max_lobur | I wanted to rewrite this | 20:53 |
max_lobur | replace generator with usual class | 20:53 |
max_lobur | with meaningful methods | 20:53 |
devananda | ++ | 20:53 |
devananda | task.start, task.end | 20:53 |
max_lobur | and then wrap it into generator for our automatic with | 20:53 |
max_lobur | yep | 20:53 |
devananda | sounds good | 20:53 |
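The rewrite max_lobur proposes here (replace the generator with a plain class whose methods make the lock lifetime explicit, while the existing synchronous `with` call sites keep working) might look roughly like this hypothetical `Task` class:

```python
# Hypothetical Task class replacing the generator-based TaskManager:
# explicit start()/end() for the background-thread flow, plus
# __enter__/__exit__ so the synchronous "with task:" usage is unchanged.
import threading


class Task:
    def __init__(self, node_id):
        self.node_id = node_id
        self._lock = threading.Lock()

    def start(self):
        """Acquire the node lock; raise if another task already holds it."""
        if not self._lock.acquire(blocking=False):
            raise RuntimeError("NodeLocked: %s" % self.node_id)

    def end(self):
        """Release the node lock (called from a done-callback when async)."""
        self._lock.release()

    def __enter__(self):
        self.start()
        return self

    def __exit__(self, *exc_info):
        self.end()
        return False
```

The async path would call `task.start()` manually and pass `task.end` as the thread's done-callback, which is far more readable than `end_callback = lambda: task.next()`.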
lucasagomes | hmm I tend to agree, I thought that the two-step lock (intent lock) would look a bit complicated to understand, but the thread approach is actually very complicated as well (when the lock is released, having a sync call which is not sync) | 20:54 |
lucasagomes | both will need some documentation | 20:54 |
devananda | i think this could be much cleaner | 20:54 |
devananda | i like the approach, though | 20:54 |
max_lobur | cool | 20:54 |
max_lobur | then I'll polish the code in the near future | 20:54 |
max_lobur | and create tests | 20:54 |
devananda | it'll be much easier to understand and debug than a two-phase lock, based on my experience | 20:55 |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Add 'get_sensor_data' to Command class https://review.openstack.org/67935 | 20:55 |
max_lobur | It will shine like a new Ferrari :) | 20:55 |
devananda | it's very clear that the API makes a call to the Conductor to start a process. that requires a lock. if the lock is granted, the process starts in the background | 20:55 |
lucasagomes | :D | 20:55 |
devananda | if lock is not granted, it fails immediately | 20:55 |
devananda | there's a single RPC message + answer, regardless | 20:56 |
devananda | and so debugging it is much easier -- we always know that there should be RPC request + response | 20:56 |
devananda | what needs to be clearer is when the Manager is starting a background greenthread, what that does, and when it will end | 20:56 |
devananda | vs when the Manager is just doing something synchronously and returning it | 20:57 |
devananda | eg, I think it could be something like .... | 20:57 |
max_lobur | I changed change_node_power_state | 20:57 |
max_lobur | to start_change_node_power_state | 20:57 |
max_lobur | in my patch | 20:57 |
devananda | yea. i mean in the manager code | 20:58 |
max_lobur | we can change to start_async_change_node_power_state | 20:58 |
devananda | the RPC method names are important, too | 20:58 |
max_lobur | yes, I'm talking about manager and the rpc api | 20:59 |
max_lobur | we had change_node_power_state | 20:59 |
*** rloo has quit IRC | 20:59 | |
devananda | what about something like this | 20:59 |
max_lobur | and will have start_change_node_power_state | 20:59 |
devananda | method = utils.node_power_action | 20:59 |
*** rloo has joined #openstack-ironic | 20:59 | |
devananda | args = [new_state] | 20:59 |
*** rloo has quit IRC | 20:59 | |
devananda | task_manager.background(context, node_id, args) | 21:00 |
*** rloo has joined #openstack-ironic | 21:00 | |
devananda | vs, eg. for something that is synchronous | 21:00 |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Add 'get_sensor_data' to Command class https://review.openstack.org/67935 | 21:00 |
devananda | method = update_node | 21:00 |
devananda | args = [values...] | 21:00 |
*** rloo has quit IRC | 21:01 | |
devananda | task_manager.do(context, node_id, method, args) | 21:01 |
*** rloo has joined #openstack-ironic | 21:01 | |
devananda | ... (woops, first example should have had "method, args" too) | 21:01 |
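Put together, devananda's sketch suggests a uniform TaskManager entry point where the conductor either runs a method synchronously under the node lock or spawns it in the background and releases the lock from a done-callback. A minimal interpretation, with every name hypothetical:

```python
# Rough shape of the proposed API: do() is "with lock: do something
# synchronous", background() is "spawn the work; the lock is held until
# the background thread is done".
import threading
from concurrent.futures import ThreadPoolExecutor

_locks = {}
_executor = ThreadPoolExecutor(max_workers=4)


def _acquire(node_id):
    lock = _locks.setdefault(node_id, threading.Lock())
    if not lock.acquire(blocking=False):
        raise RuntimeError("NodeLocked: %s" % node_id)
    return lock


def do(context, node_id, method, args):
    """Synchronous: run method under the node lock and return its result."""
    lock = _acquire(node_id)
    try:
        return method(*args)
    finally:
        lock.release()


def background(context, node_id, method, args):
    """Async: spawn the work; a done-callback releases the lock at the end."""
    lock = _acquire(node_id)
    future = _executor.submit(method, *args)
    future.add_done_callback(lambda f: lock.release())
    return future
```

With this shape, both call sites in ConductorManager read the same way, and the only difference between "sync" and "async" handlers is which entry point they use.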
*** rloo has quit IRC | 21:02 | |
devananda | that wasn't very clear. i can expand on that in a pastebin | 21:03 |
*** rloo has joined #openstack-ironic | 21:03 | |
max_lobur | yep, would be great | 21:03 |
devananda | the gist is, i'd like to see the ConductorManager code be more consistent between the existing "with lock: do something synchronous" and the new "with thread: do something async" | 21:03 |
devananda | k. i'll go sketch something out | 21:04 |
max_lobur | I see | 21:04 |
max_lobur | thanks! | 21:04 |
lucasagomes | nice | 21:04 |
lucasagomes | so right, should I abandon my patch and stop thinking about the two-phase lock? | 21:05 |
lucasagomes | as I said to max_lobur I'm ok with both approaches | 21:05 |
max_lobur | ah, wanted to ask | 21:05 |
max_lobur | devananda, do you think we can have some other features requiring two phase lock? | 21:05 |
max_lobur | others than avoiding race conditions | 21:06 |
max_lobur | I meant intent lock | 21:06 |
devananda | probably many things will need to use this mechanism | 21:06 |
devananda | power, deploy, rescue, firmware, etc | 21:06 |
devananda | anything that needs to make a physical change to a machine | 21:06 |
devananda | should probably validate that before it starts | 21:07 |
devananda | and return to user "OK, i'm going to start now" | 21:07 |
max_lobur | no, those are all things that can be fixed with threads | 21:07 |
devananda | ah, sorry | 21:07 |
max_lobur | I meant some others | 21:07 |
devananda | no | 21:07 |
lucasagomes | yea I can't think about any other use case as well | 21:07 |
max_lobur | that will definitely require two subsequent RPC calls or even more | 21:07 |
lucasagomes | apart from fixing a lock between two rpc calls | 21:07 |
devananda | what comes to mind is that, if we need to return a handle to the two-phase lock *to the user* | 21:08 |
devananda | then we can't do that with a background thread | 21:08 |
max_lobur | true | 21:08 |
devananda | but that seems like a terrible idea :) | 21:08 |
max_lobur | so the user will gain the control over the node right? | 21:08 |
devananda | i mean, let's NOT do that | 21:08 |
lucasagomes | heh | 21:08 |
lucasagomes | +1 | 21:08 |
max_lobur | ok :) | 21:08 |
devananda | i'm just saying, everything else should be doable with bgthread | 21:08 |
max_lobur | ok, thanks guys for the discussion, I gotta hurry to the bus stop, otherwise I'll need to walk through the forest in half a meter of snow :D | 21:09 |
devananda | max_lobur: lol. dont miss the bus :) | 21:10 |
devananda | max_lobur: and thanks for the patches and discussion, too! | 21:10 |
max_lobur | bye All! | 21:10 |
lucasagomes | max_lobur, hah see ya | 21:10 |
lucasagomes | gnight | 21:10 |
*** max_lobur is now known as max_lobur_afk | 21:10 | |
* devananda should walk home soon | 21:10 | |
lucasagomes | yea I'm overtime as well | 21:10 |
lucasagomes | devananda, thanks for ur input! | 21:11 |
devananda | np! | 21:11 |
lucasagomes | I'm done for the day | 21:11 |
lucasagomes | devananda, have a g'night :D | 21:11 |
devananda | g'night! | 21:12 |
*** lucasagomes has quit IRC | 21:12 | |
devananda | ok, noisy kids in cafe, i'm walking home... bbiafm | 21:27 |
mrda | hey jh | 21:33 |
*** aignatov_ has quit IRC | 21:34 | |
*** mdurnosvistov has quit IRC | 21:38 | |
openstackgerrit | A change was merged to openstack/ironic: Fix non-unique pxe driver 'instance_name' https://review.openstack.org/65657 | 21:41 |
openstackgerrit | A change was merged to openstack/ironic: Fix non-unique tftp dir instance_uuid https://review.openstack.org/66858 | 21:42 |
devananda | back | 21:48 |
openstackgerrit | Matt Wagner proposed a change to openstack/ironic: API: Add sample() method on Node https://review.openstack.org/65536 | 21:55 |
*** datajerk has quit IRC | 21:55 | |
*** martyntaylor has joined #openstack-ironic | 22:00 | |
*** matty_dubs is now known as matty_dubs|gone | 22:01 | |
*** linggao has quit IRC | 22:06 | |
*** jdob has quit IRC | 22:10 | |
openstackgerrit | Ghe Rivero proposed a change to openstack/ironic: PXE instance_name is no longer mandatory https://review.openstack.org/67974 | 22:20 |
*** michchap has quit IRC | 22:30 | |
*** michchap has joined #openstack-ironic | 22:31 | |
openstackgerrit | A change was merged to openstack/ironic: Replace assertTrue with explicit assertIsInstance https://review.openstack.org/67420 | 22:49 |
*** datajerk has joined #openstack-ironic | 23:02 | |
*** martyntaylor has quit IRC | 23:07 | |
*** romcheg has quit IRC | 23:08 | |
openstackgerrit | A change was merged to stackforge/pyghmi: Add 'get_sensor_data' to Command class https://review.openstack.org/67935 | 23:17 |
*** datajerk has quit IRC | 23:26 | |
*** derekh has quit IRC | 23:30 | |
devananda | Haomeng: ping | 23:32 |
devananda | Haomeng: how likely do you think it is that https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer will be done by i3? | 23:32 |
mrda | hey devananda | 23:37 |
devananda | hiya! | 23:38 |
mrda | so I was wondering if you have any low-hanging ironic fruit that I could look at? | 23:39 |
devananda | i'm doing bug/bp cleanup today anyway | 23:39 |
devananda | so i'll start tossing things your way as I spot them. take what you like :) | 23:40 |
mrda | cool - just looking to learn the code base and help wherever I can | 23:40 |
devananda | https://bugs.launchpad.net/nova/+bug/1195073 should be closed, and another one opened and fixed | 23:40 |
devananda | to remove the pxe_deploy_timeout variable from drivers/modules/pxe.py | 23:40 |
devananda | since we don't have any mechanism today to enforce that | 23:40 |
mrda | ok | 23:40 |
devananda | the variable is unused | 23:40 |
* devananda notes that we will need to introduce a generic "timeout" mechanism at a higher level, so a bug should be filed for that | 23:41 | |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Add 'get_health' to Command class https://review.openstack.org/67987 | 23:41 |
devananda | mrda: this would be a slightly meatier bug: https://bugs.launchpad.net/ironic/+bug/1264596 | 23:42 |
mrda | devananda: I'll look at both. First I'll close down +bug/1195073, open a new one, and then look at +bug/1264596 | 23:43 |
devananda | mrda: https://bugs.launchpad.net/ironic/+bug/1255648 would be another good one, but may require real hardware with IPMI support to test your fix | 23:47 |
devananda | mrda: let me ask, as there's a few more like that -- do you have access to hardware to test against? | 23:49 |
devananda | with access to the IPMI network | 23:49 |
mrda | devananda: Bottom line, is that I don't right this moment. But I need to get the lab back up again. I might need to work up to that. | 23:49 |
devananda | mrda: ack | 23:50 |
mrda | devananda: so with regards to https://bugs.launchpad.net/nova/+bug/1195073 , I should remove the Ironic tag as I've raised and linked https://bugs.launchpad.net/ironic/+bug/1270981 for the Ironic portion, but the nova half of this still needs action, so leave it open and unassigned? | 23:58 |
devananda | mrda: yep | 23:58 |
devananda | and i just triaged the new bug | 23:58 |
mrda | ta | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!