*** derekh has quit IRC | 00:05 | |
*** jcooley_ has quit IRC | 00:10 | |
*** derekh has joined #openstack-ironic | 00:17 | |
*** jcooley_ has joined #openstack-ironic | 00:17 | |
*** matsuhashi has joined #openstack-ironic | 00:30 | |
openstackgerrit | Yongli He proposed a change to openstack/ironic: ironic clean up:kill few backslash of conductor https://review.openstack.org/69760 | 00:38 |
*** bigjools_ is now known as bigjools | 00:52 | |
*** bigjools has quit IRC | 00:53 | |
*** bigjools has joined #openstack-ironic | 00:53 | |
*** jbjohnso has quit IRC | 01:04 | |
* NobodyCam waves night all | 01:26 | |
rloo | night NobodyCam. | 01:33 |
*** derekh has quit IRC | 01:36 | |
*** nosnos has joined #openstack-ironic | 01:37 | |
*** yongli has joined #openstack-ironic | 01:51 | |
dkehn | dkehn_: waves back | 01:52 |
*** yongli has quit IRC | 01:53 | |
*** yongli has joined #openstack-ironic | 01:58 | |
*** rloo has quit IRC | 02:28 | |
openstackgerrit | Yongli He proposed a change to openstack/ironic: ironic clean up:kill few backslash of conductor https://review.openstack.org/69760 | 02:31 |
*** aignatov_ is now known as aignatov | 03:07 | |
*** vkozhukalov has joined #openstack-ironic | 03:17 | |
*** matsuhashi has quit IRC | 03:30 | |
*** matsuhashi has joined #openstack-ironic | 03:31 | |
*** matsuhashi has quit IRC | 03:35 | |
*** jcooley_ has quit IRC | 03:41 | |
*** harlowja is now known as harlowja_away | 04:16 | |
NobodyCam | lol will have to try this out http://www.urbandictionary.com/define.php?term=Python%20Bomb | 04:38 |
*** aignatov is now known as aignatov_ | 04:41 | |
mrda | Hey all, looking for an ironic core to re +2 this patch - https://review.openstack.org/#/c/68852/ It's already approved, but I hit bug/1254890 a few times; after re-checking it's ready for the gate. | 04:46 |
*** matsuhashi has joined #openstack-ironic | 04:55 | |
*** jcooley_ has joined #openstack-ironic | 05:22 | |
*** matsuhashi has quit IRC | 05:34 | |
*** matsuhashi has joined #openstack-ironic | 05:35 | |
*** nosnos has quit IRC | 05:38 | |
*** nosnos has joined #openstack-ironic | 05:41 | |
*** mrda is now known as mrda_away | 05:41 | |
*** jcooley_ has quit IRC | 05:50 | |
*** jcooley_ has joined #openstack-ironic | 06:06 | |
openstackgerrit | Jenkins proposed a change to openstack/ironic: Imported Translations from Transifex https://review.openstack.org/68024 | 06:07 |
*** jbjohnso has joined #openstack-ironic | 06:08 | |
*** vkozhukalov has quit IRC | 06:13 | |
*** jbjohnso has quit IRC | 06:14 | |
*** matsuhashi has quit IRC | 06:23 | |
*** matsuhas_ has joined #openstack-ironic | 06:25 | |
*** Haomeng|2 has joined #openstack-ironic | 06:37 | |
*** Haomeng has quit IRC | 06:37 | |
*** early has quit IRC | 06:39 | |
*** early has joined #openstack-ironic | 06:47 | |
*** jcooley_ has quit IRC | 06:53 | |
*** jcooley_ has joined #openstack-ironic | 06:55 | |
*** ndipanov has joined #openstack-ironic | 07:16 | |
*** coolsvap has joined #openstack-ironic | 07:19 | |
*** ifarkas has joined #openstack-ironic | 07:41 | |
*** jistr has joined #openstack-ironic | 07:45 | |
*** rwsu has quit IRC | 07:46 | |
*** rwsu has joined #openstack-ironic | 07:50 | |
*** romcheg has joined #openstack-ironic | 07:54 | |
*** vkozhukalov has joined #openstack-ironic | 07:55 | |
*** aignatov_ is now known as aignatov | 07:59 | |
*** romcheg has quit IRC | 08:00 | |
*** romcheg1 has joined #openstack-ironic | 08:17 | |
*** aignatov is now known as aignatov_ | 08:27 | |
*** hstimer has quit IRC | 08:27 | |
*** jcooley_ has quit IRC | 08:30 | |
*** matsuhas_ has quit IRC | 09:04 | |
*** matsuhashi has joined #openstack-ironic | 09:04 | |
*** athomas has joined #openstack-ironic | 09:11 | |
*** aignatov_ is now known as aignatov | 09:14 | |
*** matsuhashi has quit IRC | 09:17 | |
*** derekh has joined #openstack-ironic | 09:18 | |
*** matsuhashi has joined #openstack-ironic | 09:19 | |
*** jcooley_ has joined #openstack-ironic | 09:21 | |
*** coolsvap_away has joined #openstack-ironic | 09:33 | |
*** coolsvap has quit IRC | 09:34 | |
*** jcooley_ has quit IRC | 09:37 | |
*** mdurnosvistov has joined #openstack-ironic | 09:38 | |
*** coolsvap has joined #openstack-ironic | 09:45 | |
*** coolsvap_away has quit IRC | 09:48 | |
*** matsuhashi has quit IRC | 09:51 | |
*** nosnos has quit IRC | 09:51 | |
*** martyntaylor has joined #openstack-ironic | 10:01 | |
*** jcooley_ has joined #openstack-ironic | 10:09 | |
*** jcooley_ has quit IRC | 10:15 | |
*** max_lobur_afk is now known as max_lobur | 10:30 | |
*** aignatov is now known as aignatov_ | 10:37 | |
*** athomas has quit IRC | 10:50 | |
*** athomas has joined #openstack-ironic | 10:56 | |
*** jistr has quit IRC | 10:59 | |
*** lucasagomes has joined #openstack-ironic | 11:05 | |
*** coolsvap has quit IRC | 11:15 | |
*** lucasagomes has quit IRC | 11:15 | |
*** lucasagomes has joined #openstack-ironic | 11:15 | |
*** jistr has joined #openstack-ironic | 11:19 | |
*** zul has quit IRC | 11:20 | |
*** athomas has quit IRC | 11:22 | |
*** zul has joined #openstack-ironic | 11:24 | |
*** athomas has joined #openstack-ironic | 11:31 | |
openstackgerrit | Mikhail Durnosvistov proposed a change to openstack/ironic: Get rid object model `dict` methods part 6 https://review.openstack.org/64336 | 11:39 |
openstackgerrit | Mikhail Durnosvistov proposed a change to openstack/ironic: Get rid object model `dict` methods part 5 https://review.openstack.org/64278 | 11:39 |
*** aignatov_ is now known as aignatov | 11:43 | |
openstackgerrit | Mikhail Durnosvistov proposed a change to openstack/ironic: Get rid object model `dict` methods part 6 https://review.openstack.org/64336 | 11:47 |
openstackgerrit | Mikhail Durnosvistov proposed a change to openstack/ironic: Get rid object model `dict` methods part 5 https://review.openstack.org/64278 | 11:47 |
*** jcooley_ has joined #openstack-ironic | 11:58 | |
*** jcooley_ has quit IRC | 12:08 | |
*** dshulyak has joined #openstack-ironic | 12:10 | |
*** max_lobur is now known as max_lobur_afk | 12:27 | |
*** jcooley_ has joined #openstack-ironic | 12:55 | |
*** linggao has joined #openstack-ironic | 13:03 | |
*** jdob has joined #openstack-ironic | 13:08 | |
*** jcooley_ has quit IRC | 13:09 | |
ifarkas | lucasagomes, ping | 13:10 |
lucasagomes | ifarkas, pong | 13:10 |
ifarkas | lucasagomes, hey, did you manage to debug the heat create-in-progress issue? | 13:10 |
lucasagomes | ifarkas, I didn't debug that; since my undercloud was up and running I just went on to test the next steps instead | 13:11 |
lucasagomes | I'm rerunning the tests now | 13:11 |
lucasagomes | I can try to take a look at the problem when it appears again | 13:11 |
ifarkas | lucasagomes, ok, let me know if that happens. I figured out another way to debug such an issue | 13:12 |
lucasagomes | ifarkas, right, will let u know | 13:13 |
ifarkas | I ran os-collect-config on the undercloud and checked its output | 13:13 |
lucasagomes | ifarkas, btw I had a problem with my seedvm today | 13:13 |
ifarkas | lucasagomes, oh, what was the issue? | 13:13 |
lucasagomes | http://paste.openstack.org/show/62088/ | 13:13 |
lucasagomes | I'm taking a look at it now | 13:13 |
lucasagomes | happened twice http://paste.openstack.org/show/62094/ | 13:13 |
*** vkozhukalov has quit IRC | 13:26 | |
*** jistr is now known as jistr|english | 13:30 | |
*** vkozhukalov has joined #openstack-ironic | 13:39 | |
*** rloo has joined #openstack-ironic | 13:47 | |
*** jcooley_ has joined #openstack-ironic | 13:49 | |
*** jbjohnso has joined #openstack-ironic | 13:56 | |
*** jcooley_ has quit IRC | 13:58 | |
*** rloo has quit IRC | 14:01 | |
*** rloo has joined #openstack-ironic | 14:02 | |
*** rloo has quit IRC | 14:02 | |
*** rloo has joined #openstack-ironic | 14:03 | |
*** rloo has quit IRC | 14:11 | |
*** rloo has joined #openstack-ironic | 14:12 | |
*** aignatov is now known as aignatov_ | 14:19 | |
*** matty_dubs|gone is now known as matty_dubs | 14:20 | |
*** aignatov_ is now known as aignatov | 14:21 | |
*** rloo has quit IRC | 14:22 | |
*** rloo has joined #openstack-ironic | 14:22 | |
*** max_lobur_afk is now known as max_lobur | 14:23 | |
*** rloo has quit IRC | 14:36 | |
*** rloo has joined #openstack-ironic | 14:36 | |
*** aignatov is now known as aignatov_ | 14:40 | |
*** jistr|english is now known as jistr | 14:41 | |
*** rloo has quit IRC | 14:42 | |
*** rloo has joined #openstack-ironic | 14:42 | |
*** aignatov_ is now known as aignatov | 14:43 | |
*** rloo has quit IRC | 14:47 | |
*** rloo has joined #openstack-ironic | 14:47 | |
*** rloo has quit IRC | 14:48 | |
*** rloo has joined #openstack-ironic | 14:48 | |
*** vkozhukalov has quit IRC | 14:49 | |
*** rloo has quit IRC | 14:50 | |
*** rloo has joined #openstack-ironic | 14:51 | |
*** rloo has quit IRC | 14:53 | |
*** rloo has joined #openstack-ironic | 14:53 | |
*** vkozhukalov has joined #openstack-ironic | 15:03 | |
*** aignatov is now known as aignatov_ | 15:08 | |
*** rloo has quit IRC | 15:14 | |
*** rloo has joined #openstack-ironic | 15:14 | |
*** zul has quit IRC | 15:30 | |
*** rloo has quit IRC | 15:30 | |
*** zul has joined #openstack-ironic | 15:30 | |
romcheg1 | Morning folks | 15:31 |
*** aignatov_ is now known as aignatov | 15:31 | |
max_lobur | morning Ironic! | 15:32 |
romcheg1 | lucasagomes: I've seen you posted a manual on how to use the deploy interface with curl. I cannot find it | 15:33 |
*** romcheg1 is now known as romcheg | 15:33 | |
lucasagomes | romcheg, pong | 15:33 |
lucasagomes | lemme find the link | 15:33 |
lucasagomes | max_lobur, romcheg morning | 15:33 |
lucasagomes | romcheg, https://etherpad.openstack.org/p/IronicDeployDevstack ? | 15:33 |
romcheg | lucasagomes: Ah, exactly! | 15:34 |
romcheg | Thanks | 15:34 |
lucasagomes | romcheg, np | 15:34 |
*** rloo has joined #openstack-ironic | 15:34 | |
GheRivero | morning all | 15:38 |
*** rloo has quit IRC | 15:41 | |
*** rloo has joined #openstack-ironic | 15:42 | |
*** lucasagomes is now known as lucas-hungry | 15:44 | |
*** rloo__ has joined #openstack-ironic | 15:49 | |
*** rloo has quit IRC | 15:49 | |
*** coolsvap has joined #openstack-ironic | 15:49 | |
devananda | gmorning, all | 15:52 |
*** jcooley_ has joined #openstack-ironic | 15:53 | |
openstackgerrit | Mikhail Durnosvistov proposed a change to openstack/ironic: Get rid object model `dict` methods part 6 https://review.openstack.org/64336 | 15:55 |
openstackgerrit | Max Lobur proposed a change to openstack/ironic: Fix JSONEncodedDict default values https://review.openstack.org/68413 | 15:56 |
NobodyCam | Good Morning Ironic, says the man moving slowly | 15:59 |
max_lobur | lol | 16:00 |
max_lobur | morning NobodyCam :) | 16:00 |
NobodyCam | morning max_lobur :) | 16:01 |
*** rloo__ has quit IRC | 16:01 | |
*** rloo has joined #openstack-ironic | 16:01 | |
devananda | NobodyCam: g'morning! I'm up before you again :) | 16:01 |
NobodyCam | ya. this tiny trailer is rough at night .. lol | 16:02 |
devananda | so folks, FYI, looks like the global gate is still having some serious problems | 16:04 |
NobodyCam | :( | 16:06 |
openstackgerrit | Max Lobur proposed a change to openstack/ironic: Fix JSONEncodedDict default values https://review.openstack.org/68413 | 16:06 |
*** vkozhukalov has quit IRC | 16:07 | |
NobodyCam | ifarkas: question on your last review of 66461 if your around? | 16:14 |
dkehn | NobodyCam: yes it is : https://review.openstack.org/#/c/66071/ check-tempest-dsvm-ironic-postgres FAILURE | 16:15 |
NobodyCam | dkehn: ??? yes it is ? gate having problems? | 16:18 |
dkehn | NobodyCam: can't hardly wait till code freeze | 16:18 |
devananda | i'm waiting on -infra to finish fixing what they're fixing, then i'm going to fire off a round of recheck/reverify | 16:19 |
dkehn | NobodyCam: where volume jumps up 40% | 16:19 |
NobodyCam | I saw the TripleO gate lastnight was way way backed up | 16:20 |
*** aignatov is now known as aignatov_ | 16:25 | |
* devananda looks at the history of https://review.openstack.org/#/c/69495/ | 16:25 | |
devananda | what are the chances of NOT hitting a non-deterministic failure? very, very low. | 16:26 |
rloo | devananda. What's the diff between reapproving, vs doing a recheck (for https://review.openstack.org/#/c/69495/) | 16:32 |
*** lucas-hungry is now known as lucasagomes | 16:34 | |
lucasagomes | morning NobodyCam devananda rloo | 16:35 |
*** jcooley_ has quit IRC | 16:37 | |
*** jcooley_ has joined #openstack-ironic | 16:37 | |
NobodyCam | morning lucasagomes | 16:38 |
max_lobur | lucasagomes, devananda, do you have some time to discuss background task cancellation today? | 16:41 |
lucasagomes | max_lobur, yes :) | 16:41 |
*** jcooley_ has quit IRC | 16:42 | |
*** mdurnosvistov has quit IRC | 16:45 | |
devananda | max_lobur: yes! | 16:45 |
rloo | afternoon lucasagomes ;) | 16:46 |
devananda | rloo: recheck -- jenkins will run check tests again | 16:46 |
devananda | rloo: reverify -- jenkins will run gate tests, and if it passes, merge to trunk | 16:46 |
max_lobur | cool | 16:46 |
devananda | rloo: core member re-approving is a lazy way to reverify without tagging a bug | 16:46 |
max_lobur | so currently I can imagine two ways | 16:46 |
devananda | rloo: anyone can "recheck no bug" but "reverify no bug" doesn't work. and I didn't feel like trying to figure out which bug caused the failure /again/ on that patch | 16:47 |
rloo | devananda. thx. one of the many advs of being a core member ;) | 16:47 |
NobodyCam | speak softly a=but carry a big check mark :-p | 16:48 |
NobodyCam | s/a=// | 16:48 |
devananda | anyone around looking for low-hanging-fruit? | 16:49 |
max_lobur | 1. To maintain a thread pool and terminate the running thread somehow. Which is not a good pattern. For reference http://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread-in-python | 16:50 |
devananda | max_lobur: yes. killing a thread abruptly == NO. | 16:50 |
max_lobur | In this case we may have corrupted data in the conductor process and a corrupted node - somewhere in the middle of deployment or changing power state | 16:50 |
devananda | max_lobur: and quite likely, leaked memory/resources/etc | 16:51 |
max_lobur | devananda, true | 16:51 |
devananda | max_lobur: which is why proper SMP architecture uses signals and traps | 16:51 |
max_lobur | 2. To have a set of checkpoints within every long-running task. Every checkpoint is a point where the task can be stopped and rolled back | 16:51 |
max_lobur | I like this more | 16:52 |
lucasagomes | max_lobur, 2 sounds good | 16:52 |
devananda | 2 is half-way there. it may help but may not be enough | 16:52 |
lucasagomes | and opens up a space to use things like | 16:52 |
lucasagomes | taskflow for it | 16:52 |
devananda | think of the step "dd ${image} ${iscsi_target}" | 16:52 |
devananda | 2 will have to wait for that to finish | 16:52 |
max_lobur | yep | 16:52 |
max_lobur | if we want to keep our data consistent | 16:53 |
devananda | so this is good | 16:53 |
devananda | it's not an immediate interrupt | 16:53 |
max_lobur | we need to accept the fact that a task cannot be cancelled immediately | 16:53 |
devananda | right | 16:53 |
max_lobur | it can only be scheduled for cancellation | 16:53 |
lucasagomes | +1 | 16:53 |
devananda | ++ | 16:53 |
lucasagomes | firmware updates for e.g | 16:53 |
lucasagomes | I don't see any problem in waiting a task to finish before canceling the operation | 16:54 |
devananda | where task == some smaller unit of work | 16:54 |
lucasagomes | yup | 16:54 |
devananda | not the TaskManager instance | 16:54 |
lucasagomes | devananda, have u looked at taskflow? | 16:54 |
max_lobur | I think not only cancelling, but rolling back to the beginning, right? | 16:54 |
devananda | lucasagomes: yes, but not in a while. harlowja_away and I talked a lot when he started the project | 16:54 |
lucasagomes | do you think that we could benefit from it? | 16:54 |
lucasagomes | cinder uses it afaict | 16:55 |
devananda | lucasagomes: possibly, but I don't want to introduce that right now | 16:55 |
devananda | in Juno, maybe | 16:55 |
lucasagomes | devananda, right | 16:55 |
lucasagomes | makes sense | 16:55 |
devananda | lucasagomes: but bringing up proper task management // checkpoint&rollback // etc can not be discussed within our current framework | 16:55 |
devananda | lucasagomes: for that, i think we will need a third service | 16:56 |
lucasagomes | yea | 16:56 |
max_lobur | devananda, + | 16:56 |
devananda | API <-> TaskManager <-> ConductorManager | 16:56 |
lucasagomes | for the rolling back part that would makes a lot of sense to use taskflow instead of implementing our own | 16:56 |
max_lobur | to make possible rollbacks in case conductor has crashed | 16:56 |
*** matty_dubs is now known as matty_dubs|lunch | 16:56 | |
devananda | same thing applies to the ML thread about batching or coalescing requests | 16:57 |
max_lobur | right | 16:57 |
devananda | a third service which received the API requests and could coalesce them could then issue them all in a batch to the conductor | 16:57 |
devananda | as a single task | 16:58 |
lucasagomes | hmm | 16:58 |
lucasagomes | there's a service mistral I think | 16:58 |
lucasagomes | I think is the name | 16:58 |
devananda | gannt, too | 16:59 |
lucasagomes | to provide task as a service or something like that | 16:59 |
lucasagomes | https://wiki.openstack.org/wiki/Mistral | 16:59 |
lucasagomes | I see | 16:59 |
devananda | that said, i'm looking quite far down the road | 17:00 |
max_lobur | so, at current time, do you think it's possible/worth time to try to implement it? | 17:00 |
* devananda pulls his head out of the rabbit hole | 17:00 | |
max_lobur | I was thinking about rollback batch | 17:00 |
devananda | max_lobur: let's focus on interrupt for the moment | 17:00 |
devananda | without a separate service to coordinate things | 17:00 |
devananda | we will need to route API request (interrupt what node XXX is doing) to the appropriate conductor | 17:01 |
devananda | - i think that can be done today | 17:01 |
max_lobur | devananda, yes | 17:01 |
devananda | and the greenthread that's doing $THING will need to be checking whether it has been signalled to stop | 17:01 |
max_lobur | using reservation field right? | 17:01 |
devananda | well | 17:01 |
devananda | max_lobur: no | 17:02 |
devananda | max_lobur: API service will use the hash_ring to find which conductor $NODE is mapped to, then issue an RPC request to that conductor's bus | 17:02 |
max_lobur | ah, right | 17:02 |
devananda | there is risk, if the ring changes, that interrupt would not be possible | 17:02 |
devananda | because message would be routed to a different conductor | 17:03 |
devananda | so we need to handle this and send back error to the API | 17:03 |
devananda | we could use broadcasts... but i'm side tracking again | 17:03 |
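The routing devananda describes above (a hash_ring lookup in the API, then an RPC on the owning conductor's bus) can be illustrated with a toy consistent-hash ring. This is a plain-Python sketch, not ironic's actual hash_ring module -- the real one has partitions, replicas, and rebalancing logic omitted here, and the conductor hostnames are made up:

```python
import bisect
import hashlib

class HashRing(object):
    """Toy consistent-hash ring mapping node UUIDs to conductor hosts.

    Illustrative only -- ironic's real hash_ring module is richer
    (partitions, replica counts, ring rebuilds on membership change).
    """

    def __init__(self, conductors, replicas=16):
        # Place each conductor at several points on the ring so load
        # spreads more evenly.
        self._ring = []
        for host in conductors:
            for r in range(replicas):
                self._ring.append((self._hash('%s-%d' % (host, r)), host))
        self._ring.sort()
        self._keys = [k for k, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)

    def get_conductor(self, node_uuid):
        """Return the conductor responsible for this node."""
        idx = bisect.bisect(self._keys, self._hash(node_uuid)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(['conductor-1', 'conductor-2', 'conductor-3'])
host = ring.get_conductor('node-uuid-1234')
# the API would then issue the interrupt RPC on that conductor's topic
```

Note the failure mode raised in the conversation: if ring membership changes between the original task starting and the interrupt arriving, the lookup can return a different conductor, so the interrupt RPC lands on a host that isn't running the task and must be answered with an error.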
devananda | within each interruptible method (eg, deploy) | 17:04 |
devananda | i see two options | 17:04 |
devananda | - do some "magic" in eventlet that will check for our interrupt signal between each LOC | 17:04 |
max_lobur | :) | 17:05 |
devananda | - add an explicit call at certain points in the method to check for the signal | 17:05 |
*** jistr has quit IRC | 17:05 | |
devananda | i prefer #2 | 17:05 |
max_lobur | I like the second | 17:05 |
* devananda mocks up something on pastebin | 17:05 | |
max_lobur | devananda, lucasagomes so do we want only interruption, or interruption + rollback? | 17:05 |
lucasagomes | max_lobur, I would aim for interruption only in the moment | 17:06 |
lucasagomes | for rollback we would have to break the method doing the actual work | 17:06 |
lucasagomes | in atomic subclasses/methods | 17:06 |
lucasagomes | so we know hw to rollback each class etc | 17:06 |
max_lobur | yea, rollback will probably introduce a lot of bugs and it won't be reliable, because of possible conductor crashes | 17:07 |
lucasagomes | I mean, both can be done but I would focus on the interruption first | 17:07 |
lucasagomes | and then if there's time we can start taking a look at rollingback | 17:07 |
lucasagomes | yea also, it's more things to test etc | 17:07 |
*** linggao has quit IRC | 17:07 | |
lucasagomes | s/each class/each task/g | 17:08 |
lucasagomes | max_lobur, +1 | 17:08 |
max_lobur | I was thinking about rollback batches. E.g. each action like dd will add an opposite action to the rollback batch; then, if we want to cancel the task, we'll hit one of the checkpoints and run all callables from the rollback batch in opposite order | 17:10 |
max_lobur | but this will probably add a lot of code | 17:10 |
max_lobur | it's something like rewinding back | 17:11 |
devananda | http://paste.openstack.org/show/mSj9WnYdJvaICPCC2uvJ/ | 17:11 |
devananda | including rollback -- but we can leave that out | 17:11 |
devananda | just an idea of how it might be done | 17:11 |
* max_lobur looks | 17:12 | |
devananda | max_lobur: the "if some_interrupt_was_received" step is not clear to me -- i'm hoping you have some insight into how we can do that with greenthreads | 17:13 |
devananda | max_lobur: eg, inject some change into a greenthread | 17:13 |
devananda | er, not what i meant to say | 17:13 |
max_lobur | hmm lemme think | 17:15 |
max_lobur | initially I thought about DB | 17:15 |
max_lobur | but it will DDOS it | 17:15 |
devananda | yes | 17:15 |
devananda | won't scale | 17:15 |
max_lobur | need something better | 17:15 |
max_lobur | devananda, in your mockup the _rollback_second() will need to rollback first too right? | 17:16 |
max_lobur | ah | 17:16 |
max_lobur | I see | 17:16 |
devananda | it does | 17:16 |
max_lobur | you called rollback first | 17:16 |
max_lobur | but what if we have first and second in different scopes | 17:16 |
max_lobur | in different methods | 17:17 |
devananda | so, as an aside, this is a very crude form of TaskFlow | 17:17 |
max_lobur | pls take a look at my comment about rollback batches (see above), what do you think | 17:18 |
lucasagomes | devananda +1 | 17:18 |
lucasagomes | max_lobur, https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L1678-L1681 | 17:18 |
* max_lobur looks how to pass something to greenthread | 17:18 | |
lucasagomes | https://github.com/openstack/cinder/blob/master/cinder/volume/flows/api/create_volume.py#L680-L698 | 17:18 |
max_lobur | lucasagomes, right | 17:19 |
max_lobur | so they have execute and revert for each unit of work | 17:19 |
max_lobur | cool | 17:20 |
lucasagomes | I would just leave the rollback aside, or if we are going to implement it | 17:20 |
max_lobur | it's like a DB migrations :) | 17:20 |
lucasagomes | let's not reinvent it and use something that does it already | 17:20 |
max_lobur | yes | 17:20 |
lucasagomes | max_lobur, exactly | 17:20 |
lucasagomes | like a transition | 17:20 |
max_lobur | yea this will require us to rewrite most of the code using taskflow approach | 17:21 |
lucasagomes | yup | 17:21 |
devananda | so all that is doing a flow and handling rollback if the flow fails part-way | 17:21 |
devananda | what we're talking about is somewhat different | 17:21 |
devananda | sending a message to the flow FROM OUTSIDE to stop it | 17:21 |
devananda | harlowja_away: is ^ supported by taskflow today? | 17:21 |
lucasagomes | devananda, I think | 17:22 |
lucasagomes | I think that, hmm it would be just a trigger | 17:22 |
lucasagomes | I mean, the flow is running, each task has it's execute() method | 17:23 |
lucasagomes | and the execute() method for each class can check whether the flow should stop or not | 17:23 |
lucasagomes | if yes, it raises an exception that will make taskflow roll it back | 17:24 |
lucasagomes | but I think it's too complex for now, I would go with a simple approach just to interrupt the current work | 17:24 |
lucasagomes | without rollback/taskflow etc | 17:24 |
*** hemnafk is now known as hemna_ | 17:25 | |
lucasagomes | (note: I wasn't thinking about a third service managing the flow, I'm thinking about the conductor using the taskflow lib to create the flow inside it) | 17:26 |
max_lobur | so roughly to send a signal to the thread we need to maintain some common collection of the cancellation tokens - for example a dictionary global to each conductor process | 17:27 |
max_lobur | it should be {"greenthread_id": <True/False> (cancelled or not), ...} | 17:28 |
max_lobur | then from within each check_signals() we'll need to get the current greenthread id | 17:28 |
max_lobur | go to that collection | 17:28 |
max_lobur | and look if it was cancelled | 17:28 |
max_lobur | this needs to be prototyped | 17:29 |
NobodyCam | max_lobur: are you thinking shared directory like nfs? | 17:29 |
max_lobur | since we will just read the collection from all the greenthreads it won't block | 17:30 |
max_lobur | and we'll write to it from only one thead | 17:30 |
max_lobur | that one which handles rpc requests | 17:30 |
max_lobur | well, not from one | 17:30 |
max_lobur | NobodyCam, no, I mean usual python dictionary, just global for a conductor | 17:31 |
max_lobur | somewhere on module level | 17:31 |
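max_lobur's module-level collection might look like the sketch below. `threading.get_ident()` stands in for the eventlet greenthread handle (`eventlet.greenthread.getcurrent()`) so the sketch stays dependency-free; the point is the single-writer/many-readers pattern he describes, and both function names are made up:

```python
import threading

# Module-level collection of cancellation tokens, global to the
# conductor process.  Many worker (green)threads each read only
# their own entry; only the RPC-handling side writes, so plain
# dict access suffices in this sketch.
_cancelled = {}

def request_cancel(thread_id):
    """Called when an interrupt RPC arrives for a running task."""
    _cancelled[thread_id] = True

def is_cancelled():
    """Called from inside a worker at each checkpoint."""
    return _cancelled.get(threading.get_ident(), False)
```

With eventlet's cooperative scheduling there is no true parallel mutation inside one process, which is why a bare dict is enough here; a prototype would still need to clean entries up when a task finishes.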
*** rloo__ has joined #openstack-ironic | 17:31 | |
*** rloo has quit IRC | 17:31 | |
devananda | max_lobur: look at resource_manager.py | 17:31 |
NobodyCam | ahh /me misread :-p | 17:31 |
*** rloo__ has quit IRC | 17:32 | |
devananda | max_lobur: there is already a shared object for each node which maintains references back to every TaskManager which acquired that node | 17:32 |
*** rloo has joined #openstack-ironic | 17:32 | |
devananda | max_lobur: in the case of shared locks, may be >1 greenthread in the conductor which has acquired the Node | 17:32 |
devananda | max_lobur: but only one with exclusive lock | 17:32 |
devananda | max_lobur: i think this would be the right place to add a greenthread_id reference | 17:33 |
max_lobur | devananda, right, I'll take a look now | 17:33 |
max_lobur | we will also need to maintain a greenthread_id<-->task_id mapping somewhere, and maintain tasks in the db. So once the user submits a deployment, he will receive his task_id and will be able to use it to cancel the task, right? or are there other ways? | 17:36 |
max_lobur | how do we want to say exactly which background task we want to cancel? :) | 17:36 |
max_lobur | or it will be - cancel every active task for a particular node | 17:37 |
devananda | max_lobur: no | 17:37 |
devananda | max_lobur: we dont maintain task id in DB. and we don't support >1 task per node | 17:37 |
devananda | sorry for confusion | 17:37 |
devananda | most tasks we use today require exclusive lock | 17:38 |
NobodyCam | bbt...brb | 17:38 |
devananda | there are some which, in principle, may take a shared lock | 17:38 |
devananda | so we support shared locks too | 17:38 |
devananda | exclusive lock is tracked in DB -- just in case there is a node rebalance, exclusive lock is maintained across conductors | 17:38 |
devananda | shared lock is only tracked locally, within the memory state of the conductor. it doesn't need to be interrupted | 17:39 |
devananda | shared lock is for eg. validate() | 17:39 |
devananda | exlusive is for eg. set_power_state or deploy | 17:39 |
devananda | max_lobur: so you'll want to find the greenthread_id of the task holding an exlusive lock | 17:40 |
max_lobur | yes:) | 17:40 |
max_lobur | I meant, for example, if the user submitted a deployment | 17:40 |
max_lobur | how the command to cancel the deployment will look like? | 17:41 |
max_lobur | it will only have node id right? | 17:41 |
devananda | yes | 17:42 |
devananda | actually this may not need to be exposed at all | 17:42 |
devananda | it should probably be internal to our RPC API | 17:43 |
devananda | or even just inside the conductor | 17:43 |
devananda | eg | 17:43 |
devananda | if a user wants to cancel the deploy they started, they should just issue undeploy | 17:43 |
devananda | yes? | 17:43 |
max_lobur | hmm | 17:44 |
devananda | if power_on is taking too long, what is more intuitive -- "ironic interrupt-node $NODE" or "ironic power-off $NODE" | 17:44 |
NobodyCam | ironic cancel-current-node-action $NODE | 17:44 |
max_lobur | I'd say interrupt | 17:45 |
max_lobur | NobodyCam, or so | 17:45 |
max_lobur | because if power on hangs | 17:45 |
lucasagomes | also, the interruption might be triggered internally | 17:45 |
lucasagomes | due a timeout | 17:45 |
*** matty_dubs|lunch is now known as matty_dubs | 17:45 | |
max_lobur | why should I think that power off will succeed :) | 17:45 |
devananda | NobodyCam: think how this is goign to be implemented in the nova driver | 17:45 |
devananda | nova may get a request to delete the instance which hasn't finished deploying yet | 17:46 |
devananda | because users are impatient :) | 17:46 |
max_lobur | lucasagomes, + | 17:46 |
max_lobur | hehe :) | 17:46 |
max_lobur | brb | 17:46 |
NobodyCam | post bbt walkies.. bbiafm | 17:47 |
lucasagomes | devananda, NobodyCam what needs to be set on the nova.conf in order to use the ironic driver? http://paste.openstack.org/show/62119/ | 17:48 |
lucasagomes | does that look correct? I need to set something else | 17:48 |
lucasagomes | ? | 17:48 |
max_lobur | back | 17:48 |
devananda | lucasagomes: looks right?? | 17:49 |
devananda | bbiafm | 17:49 |
lucasagomes | devananda, cheers :) | 17:49 |
*** aignatov_ is now known as aignatov | 17:50 | |
* max_lobur saved the link to nova.conf :D | 17:51 | |
lucasagomes | :) | 17:52 |
*** jcooley_ has joined #openstack-ironic | 17:53 | |
*** harlowja_away is now known as harlowja | 17:54 | |
harlowja | devananda u guys talking about flows :-P | 17:55 |
harlowja | devananda define outside :) | 17:57 |
*** derekh has quit IRC | 18:01 | |
max_lobur | harlowja, hi! yes we are :) | 18:01 |
harlowja | sweet | 18:01 |
harlowja | max_lobur from the above it seems u guys are building mini-workflows :-P | 18:01 |
* harlowja maybe u are just discussing (not sure) | 18:02 | |
max_lobur | harlowja, almost :) we're trying to make it possible to cancel background threads, using cancellation checkpoints set across the code | 18:03 |
*** aignatov is now known as aignatov_ | 18:03 | |
max_lobur | we're not going to have rollbacks on this stage | 18:03 |
harlowja | k, so let me describe a little bit of how taskflow is doing this :) | 18:04 |
harlowja | it does have i think what u are describing | 18:04 |
harlowja | and it also has rollbacks | 18:04 |
*** vkozhukalov has joined #openstack-ironic | 18:04 | |
harlowja | so in taskflow, there's a concept of a task object (it has execute and revert methods) | 18:05 |
max_lobur | yep, lucasagomes posted an example | 18:05 |
harlowja | ah | 18:05 |
harlowja | u guys ahead of me :-P | 18:05 |
harlowja | haha | 18:05 |
max_lobur | heh :) | 18:05 |
harlowja | ok, so those are formed into larger structures (flows) | 18:05 |
harlowja | flows can describe data-flow dependencies or just other random dependencies (no cycles currently) | 18:06 |
*** aignatov_ is now known as aignatov | 18:06 | |
harlowja | all of that gets executed by an engine | 18:06 |
harlowja | the engine does a bunch of state-transitions | 18:06 |
harlowja | and manages the data-flow between tasks | 18:07 |
harlowja | it has suspend methods that are equivalent to your cancel | 18:07 |
harlowja | and at any time during running u can suspend it (after the current tasks finish running, aka no preemption, the engine will stop running other tasks) | 18:07 |
devananda | me is back | 18:07 |
max_lobur | devananda, wb:) | 18:08 |
harlowja | so the suspending can be activated by another thread (if thats how this wants to be used) | 18:08 |
harlowja | u then also don't need 'cancellation checkpoints' | 18:09 |
max_lobur | harlowja, yea that's exactly what we need, but | 18:09 |
harlowja | since at every state-transition the engine does it will check if it has been suspended | 18:09 |
harlowja | *https://wiki.openstack.org/wiki/TaskFlow/States_of_Task_and_Flow | 18:09 |
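harlowja's description above maps onto a small amount of code. Below is a plain-Python toy of the execute/revert/suspend pattern, not the real taskflow API (see the cinder flows linked earlier for the genuine article); `Task`, `Engine`, and `Record` are invented names for this sketch:

```python
class Task(object):
    """One unit of work with a compensating action, taskflow-style."""
    def execute(self):
        raise NotImplementedError
    def revert(self):
        raise NotImplementedError

class Engine(object):
    """Toy engine: runs tasks in order, checking for suspension
    at every state transition between tasks (no preemption)."""
    def __init__(self, tasks):
        self._tasks = tasks
        self._suspended = False
        self._done = []

    def suspend(self):
        # May be called from another (green)thread; takes effect
        # only at the next transition, never mid-task.
        self._suspended = True

    def run(self):
        for t in self._tasks:
            if self._suspended:
                for finished in reversed(self._done):
                    finished.revert()  # rewind completed work
                return 'REVERTED'
            t.execute()
            self._done.append(t)
        return 'SUCCESS'

class Record(Task):
    """Demo task that just logs its execute/revert calls."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def execute(self):
        self.log.append('run ' + self.name)
    def revert(self):
        self.log.append('undo ' + self.name)
```

This mirrors the key point of the discussion: there are no hand-placed cancellation checkpoints inside the work itself -- the engine's own state transitions are the checkpoints.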
max_lobur | at the current stage it may be overhead to rework all that we have using the taskflow framework | 18:10 |
max_lobur | harlowja, looks cool! | 18:11 |
devananda | harlowja: suspend ... or cancel? | 18:11 |
devananda | harlowja: we dont want to leave things dangling and resumable | 18:11 |
harlowja | max_lobur sure, of course, never easy to do refactoring :-P | 18:11 |
devananda | harlowja: this is for eg. an impatient user cancelled their deploy | 18:11 |
devananda | harlowja: or the deploy has stalled waiting for a callback from teh hardware that never happened due to a network error | 18:11 |
devananda | harlowja: in either case, we want to stop the "task" at the next interruptible point | 18:12 |
harlowja | devananda so i think it would be fine to expose the immediate revert method then, which would stop then start reversion | 18:12 |
harlowja | thats currently not exposed as a public engine api, but could be | 18:13 |
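The task/flow/engine model harlowja describes above can be sketched in plain Python. This is an illustrative mock of the pattern only — the class names and return values here are stand-ins, not the real taskflow API:

```python
import threading

class Task:
    """A unit of work with a forward action and a compensating revert."""
    def execute(self):
        raise NotImplementedError

    def revert(self):
        raise NotImplementedError

class Engine:
    """Runs tasks in order, checking for suspension at every
    state transition (so no preemption of a running task)."""
    def __init__(self, tasks):
        self._tasks = list(tasks)
        self._done = []
        self._suspended = threading.Event()

    def suspend(self):
        # May be called from another thread; takes effect once the
        # currently running task finishes.
        self._suspended.set()

    def run(self):
        for task in self._tasks:
            if self._suspended.is_set():
                return 'SUSPENDED'
            task.execute()
            self._done.append(task)
        return 'SUCCESS'

    def revert_all(self):
        # Roll back completed tasks in reverse order.
        while self._done:
            self._done.pop().revert()
```

Because the suspension check happens between tasks rather than inside them, no explicit cancellation checkpoints are needed in the task bodies themselves.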
devananda | max_lobur: also we do need some sort of cleanup | 18:13 |
devananda | max_lobur: even if it's not a full rollback, each method that can be interrupted needs to be able to be cleaned up | 18:13 |
lucasagomes | yea, taskflow looks promising for us | 18:13 |
devananda | harlowja: cool, thanks | 18:13 |
max_lobur | devananda, + | 18:13 |
harlowja | devananda np | 18:13 |
lucasagomes | but I don't think we will get it done for this cycle | 18:13 |
devananda | so folks - there's a lot of long-tail work we can see | 18:13 |
devananda | let's focus on the real bug in front of us :) | 18:14 |
lucasagomes | harlowja, thank you for the explanation | 18:14 |
harlowja | *isn't there always ;) | 18:14 |
lucasagomes | devananda, +2 | 18:14 |
harlowja | lucasagomes np | 18:14 |
*** martyntaylor has left #openstack-ironic | 18:14 | |
devananda | https://bugs.launchpad.net/ironic/+bug/1270986 | 18:14 |
devananda | and https://blueprints.launchpad.net/ironic/+spec/generic-timeouts | 18:15 |
devananda | and https://blueprints.launchpad.net/ironic/+spec/abort-deployment | 18:15 |
devananda | these are all related | 18:16 |
devananda | a deploy can be interrupted, broadly speaking, at three places | 18:16 |
devananda | *a deploy of the PXE driver ... | 18:16 |
devananda | during driver.deploy(), which finishes when the node powers on | 18:17 |
devananda | during the interim while the node PXE boots the deploy ramdisk and POSTs back | 18:17 |
devananda | during the driver.vendor._continue_deploy() phase, which dd's the image onto the node and power cycles it once again | 18:17 |
devananda | in step 2, there is currently nothing running inside of the ironic conductorManager | 18:18 |
devananda | no lock held. nothing to interrupt. | 18:18 |
devananda | so we need a general timeout that is registered when deploy starts, can trigger at any point during all 3 stages, and is removed at end of stage 3 | 18:20 |
devananda | that timeout should fire off the same interrupt that a user could initiate | 18:20 |
devananda | and perform the same cleanup | 18:20 |
devananda | etc | 18:20 |
devananda | EOL | 18:21 |
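A minimal sketch of such a timeout registry — register at deploy start, remove at the end of stage 3, and have a periodic task fire the same interrupt path a user-initiated cancel would take. All names here are hypothetical; the real implementation would live in the conductor's periodic tasks:

```python
import time

class TimeoutRegistry:
    """Tracks per-node deadlines for in-flight deploys."""
    def __init__(self, interrupt_cb):
        self._deadlines = {}      # node_uuid -> absolute monotonic deadline
        self._interrupt = interrupt_cb

    def start(self, node_uuid, timeout_s):
        # Registered when deploy starts (stage 1).
        self._deadlines[node_uuid] = time.monotonic() + timeout_s

    def finish(self, node_uuid):
        # Removed at the end of stage 3.
        self._deadlines.pop(node_uuid, None)

    def check(self):
        # Run from a periodic task; a timeout can trigger during any
        # of the three stages, including stage 2 when nothing is
        # running inside the conductor and no lock is held.
        now = time.monotonic()
        for uuid, deadline in list(self._deadlines.items()):
            if now >= deadline:
                del self._deadlines[uuid]
                self._interrupt(uuid)
```

The key property is that the periodic check does not care which stage the deploy is in; it only fires the interrupt, and the cleanup logic decides what that means.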
max_lobur | so at stage 1 and 3 we'll need to cancel the greenthread | 18:22 |
max_lobur | at 2 we'll need to clear node state in db and wipe the node, right? | 18:22 |
*** hstimer has joined #openstack-ironic | 18:22 | |
* max_lobur meant steps to interrupt deployment for each stage | 18:23 |
devananda | 1 and 3 need to interrupt a greenthread | 18:24 |
devananda | 2 and 3 need to power off the node | 18:24 |
devananda | all steps need to call driver.deploy.clean_up() to delete images, tftp config, etc | 18:24 |
max_lobur | currently we can't see from outside what stage we're at, right? | 18:25 |
max_lobur | e.g. our node state doesn't track the, | 18:25 |
max_lobur | *them | 18:26 |
devananda | we can look at node.provision_state | 18:26 |
devananda | yes | 18:26 |
devananda | actually no | 18:27 |
devananda | state == DEPLOYING for all 3 | 18:27 |
max_lobur | sorry, I haven't dug into deployment much, I may ask stupid questions :) | 18:27 |
devananda | not a stupid question :) | 18:28 |
max_lobur | right, so once we distinguish between the stages of deployment | 18:28 |
max_lobur | we can take an appropriate action | 18:28 |
max_lobur | to interrupt it | 18:28 |
max_lobur | periodic task may serve as a watchdog that will check timer expiration | 18:29 |
devananda | the cleanup logic should reside in the driver | 18:29 |
lucasagomes | should we maybe add a new state? for when the config files are just being created? | 18:29 |
lucasagomes | instead of using deploying for all of the tasks? | 18:29 |
devananda | and should be registered as a callback | 18:29 |
devananda | that's how it should track what to do for rollback, not by a dependency on node state | 18:29 |
devananda | lucasagomes: that way leads to problems | 18:30 |
* devananda points to the pastebin he linked earlier | 18:30 | |
devananda | http://paste.openstack.org/show/mSj9WnYdJvaICPCC2uvJ/ | 18:30 |
devananda | note that all the logic for rolling back is encapsulated within do_long_thing | 18:31 |
devananda | at the interruptible points, a callback is set which knows how to clean up at that point | 18:31 |
devananda | i can flesh it out with deploy and continue_deploy if that helps make it clear | 18:31 |
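The shape of that pastebin pattern can be sketched as follows, with hypothetical stage names. The point is that each interruptible point registers the cleanup appropriate to exactly that point, so no inspection of node state is needed to know how to roll back:

```python
class InterruptibleDeploy:
    """At each interruptible point, register a callback that knows
    how to clean up from exactly that point; an interrupt request
    takes effect at the next checkpoint."""
    def __init__(self, log):
        self.log = log
        self._interrupt_requested = False

    def interrupt(self):
        # May be called by a user cancel or a timeout watchdog.
        self._interrupt_requested = True

    def _checkpoint(self, cleanup):
        if self._interrupt_requested:
            cleanup()
            raise InterruptedError('deploy cancelled')

    def do_long_thing(self):
        self._checkpoint(lambda: self.log.append('cleanup: nothing to do'))
        self.log.append('stage 1: prepare images + tftp config')
        self._checkpoint(lambda: self.log.append('cleanup: delete images + tftp config'))
        self.log.append('stage 2: wait for ramdisk callback')
        self._checkpoint(lambda: self.log.append('cleanup: power off, delete images'))
        self.log.append('stage 3: write image, reboot')
```

In Ironic terms the cleanup callbacks would call things like driver.deploy.clean_up() or tear_down(), depending on location.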
NobodyCam | *flush even | 18:32 |
lucasagomes | hmm I see | 18:32 |
devananda | NobodyCam: no, flesh :) http://eggcorns.lascribe.net/english/349/flush/ | 18:34 |
devananda | NobodyCam: unless we're chasing vermin and need to flush them out of hiding ;) | 18:35 |
max_lobur | :D | 18:35 |
* devananda likes eggcorns | 18:36 | |
NobodyCam | flesh: the soft substance consisting of muscle and fat that is found between the skin and bones of an animal or a human. | 18:36 |
NobodyCam | also (put weight on.) | 18:37 |
devananda | yep | 18:37 |
devananda | so my idea is too skinny. i need to flesh it out :) | 18:37 |
devananda | to add weight to it | 18:37 |
NobodyCam | ahh, /me learns new work usage | 18:37 |
lucasagomes | hah cool expression indeed | 18:37 |
NobodyCam | word even | 18:37 |
NobodyCam | :) | 18:37 |
devananda | max_lobur: so, i think we should implement the interrupt first | 18:39 |
NobodyCam | devananda: quick question on line # 4 for https://review.openstack.org/#/c/66461/4/elements/nova-ironic/os-refresh-config/configure.d/80-ironic-ssh-power-key | 18:39 |
max_lobur | ok, so I'll try to create a prototype of how we're going to cancel a greenthread that holds an exclusive lock | 18:39 |
max_lobur | will see how it looks | 18:39 |
NobodyCam | I added my nick to the todo but, I'm not sure we can actually do it | 18:39 |
NobodyCam | images can be built outside of an ironic env. also ironic can support multiple power control modes | 18:41 |
NobodyCam | just wanted to check on your thought path when you initially added that | 18:42 |
devananda | max_lobur: steps seem to be 1) tell a greenthread to stop at the next checkpoint, 2) add some checkpoint hooks to conductor_manager.do_node_deploy and pxe.deploy.[prepare|deploy], and 3) have these checkpoints register a callback (it'll call pxe.deploy.clean_up, pxe.deploy.teardown, etc, depending on location) | 18:42 |
devananda | max_lobur: then we can add something to register timeouts and a periodic-task that checks timeouts and fires off interrupts | 18:42 |
devananda | max_lobur: sound about right? | 18:42 |
devananda | NobodyCam: looking | 18:43 |
devananda | NobodyCam: i dont see a problem with generating it all the time | 18:43 |
NobodyCam | it was the todo bail out I was questioning | 18:44 |
devananda | NobodyCam: ssh power driver is part of trunk. it's unlikely that folks will deploy ironic and explicitly remove it from the codebase | 18:44 |
max_lobur | devananda, great plan, thanks for ordering things in my mind :D | 18:44 |
devananda | NobodyCam: right. I'd just remove the TODO | 18:44 |
NobodyCam | :) thats what I thought just wanted to check and see if you had a plan I didn't think of | 18:44 |
NobodyCam | :) | 18:44 |
devananda | max_lobur: could you toss this up somewhere for future reference? Either on one of the existing BPs or send a summary to the ML | 18:44 |
max_lobur | I'll take a look for existing bps | 18:45 |
max_lobur | mailing threads become so long with time | 18:45 |
lucasagomes | maybe an etherpad? | 18:46 |
lucasagomes | devananda, http://paste.openstack.org/show/62126/ | 18:46 |
devananda | https://blueprints.launchpad.net/ironic/+spec/generic-timeouts and https://blueprints.launchpad.net/ironic/+spec/abort-deployment and https://blueprints.launchpad.net/ironic/+spec/breaking-resource-locks | 18:46 |
devananda | we should probably consolidate those ... | 18:46 |
devananda | a bit of redundancy :) | 18:46 |
lucasagomes | devananda, I just looked at my nova.conf | 18:47 |
lucasagomes | it does have an entry | 18:47 |
lucasagomes | scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler | 18:47 |
max_lobur | devananda, seems we need a parent bp for those 3 | 18:49 |
max_lobur | makes sense? | 18:49 |
devananda | max_lobur: or we invalidate one and leave other 2 | 18:50 |
devananda | actually, nvm | 18:51 |
max_lobur | which one? | 18:51 |
devananda | you're right | 18:51 |
max_lobur | deployment? | 18:51 |
max_lobur | ok :) | 18:51 |
devananda | breaking-resource-locks is not the same as abort-deployment | 18:51 |
devananda | it's also about breaking the lock of a dead conductor | 18:51 |
max_lobur | true | 18:51 |
max_lobur | I'll try to fill in the parent bp if you don't mind | 18:53 |
devananda | sure, thanks | 18:56 |
lucasagomes | devananda, NobodyCam have a minute? I'm trying to configure the ironic driver for nova with devstack (and implement it in devstack later, that will be used by our tests) | 18:57 |
lucasagomes | devananda, NobodyCam how would the scheduler pick an ironic node? | 18:57 |
devananda | lucasagomes: nova-ironic driver exposes nodes to scheduler | 18:58 |
lucasagomes | does this flow sounds correct? register the node in ironic (with properties etc etc etc) -> create a flavor in nova -> issue nova boot | 18:58 |
* max_lobur filling a bp | 18:58 | |
NobodyCam | devananda: yes it does | 18:58 |
lucasagomes | devananda, right, via IronicHostManager? | 18:58 |
devananda | lucasagomes: you're missing a step: wait ~ 2 minutes for nova scheduler to become aware of resources | 18:58 |
lucasagomes | devananda, ahnn | 18:59 |
lucasagomes | right right | 18:59 |
NobodyCam | yes there is a delay | 18:59 |
lucasagomes | devananda, is there any way I can check if it's aware of the resources? | 18:59 |
devananda | lucasagomes: no, via nova.ironic.driver:get_available_nodes | 18:59 |
devananda | lucasagomes: tail -f nova-compute.log | 18:59 |
devananda | you'll see the audit of available resources | 18:59 |
NobodyCam | devananda: ++ | 18:59 |
devananda | when n-cpu becomes aware of the resources, it'll log it | 18:59 |
devananda | there is a way to check in the DB, but I forget it right now | 19:00 |
lucasagomes | devananda, NobodyCam gotcha | 19:00 |
lucasagomes | thanks | 19:00 |
NobodyCam | though I cheat and watch the ironic api log to see nova query for the node | 19:00 |
devananda | that ^ works too | 19:00 |
devananda | but watching for resource in n-cpu is the same for baremetal and ironic :) | 19:01 |
NobodyCam | yes | 19:01 |
matty_dubs | lucasagomes: Do you know if https://review.openstack.org/#/c/66925 is still needed for devstack+Ironic? Brant's comment sort of suggested otherwise, but I'm unsure. | 19:08 |
lucasagomes | right, I might have been doing something wrong cause it's not returning the nodes to nova /me will investigate | 19:08 |
lucasagomes | http://paste.openstack.org/show/62128/ | 19:09 |
lucasagomes | matty_dubs, yea | 19:09 |
matty_dubs | thanks lucasagomes, will apply | 19:09 |
lucasagomes | matty_dubs, I will investigate if we should add the version to the url | 19:10 |
lucasagomes | if it's not present | 19:10 |
lucasagomes | matty_dubs, but at the moment it's needed | 19:10 |
lucasagomes | NobodyCam, https://review.openstack.org/#/c/51328/12/nova/virt/ironic/driver.py L292 | 19:11 |
lucasagomes | if power_state is None we are not going to use it | 19:11 |
lucasagomes | hmm | 19:11 |
NobodyCam | lucasagomes: yes a valid node will have a valid power state | 19:12 |
NobodyCam | only fake driver will leave it at none | 19:12 |
NobodyCam | which is why I do ironic node-set-power-state $IRONIC_NODE_ID off | 19:13 |
lucasagomes | NobodyCam, right, we don't immediately set the power state when we create the node | 19:13 |
NobodyCam | for my testing with fake driver | 19:13 |
lucasagomes | I see | 19:13 |
NobodyCam | ssh and ipmi will auto set via background task | 19:13 |
lucasagomes | yup yea | 19:14 |
lucasagomes | cheers :D | 19:14 |
NobodyCam | :) | 19:14 |
NobodyCam | brb make'n mo coffee | 19:15 |
*** athomas has quit IRC | 19:24 | |
lucasagomes | NobodyCam, can I submit reviews for the nova driver? or u prefer me to just comment on it? | 19:31 |
lucasagomes | https://review.openstack.org/#/c/51328/12/nova/virt/ironic/driver.py L563 causes it to fail, in the unplug_vifs it's handling the wrong exception | 19:32 |
NobodyCam | humm in the try ? | 19:33 |
NobodyCam | ironic_exception.HTTPInternalServerError | 19:33 |
lucasagomes | NobodyCam, yup | 19:34 |
lucasagomes | related:https://review.openstack.org/#/c/68457/ | 19:34 |
lucasagomes | http://paste.openstack.org/show/62130/ | 19:34 |
max_lobur | devananda, we talked primarily about deployment cancellation. are there other similar tasks (firmware updates?) | 19:34 |
max_lobur | I think of bp name cancel-long-running-tasks | 19:34 |
openstackgerrit | Devananda van der Veen proposed a change to openstack/ironic: Sanitize node.last_error message strings https://review.openstack.org/64711 | 19:34 |
max_lobur | node power on / off probably too | 19:35 |
max_lobur | if they hang | 19:35 |
devananda | max_lobur: deploy cancellation is the main need today. but yes, there will be others | 19:35 |
max_lobur | so is cancel-long-running-tasks ok? | 19:36 |
devananda | max_lobur: we alrady have bp for generic-timeouts | 19:36 |
devananda | hm | 19:37 |
NobodyCam | humm do we have a keyerror | 19:37 |
max_lobur | its gist is "Individual timeouts should be configurable per-driver." | 19:37 |
max_lobur | As I understood | 19:37 |
devananda | yes | 19:37 |
*** martyntaylor has joined #openstack-ironic | 19:38 | |
max_lobur | maybe just task-cancellation | 19:38 |
*** martyntaylor has left #openstack-ironic | 19:38 | |
devananda | ah, so an umbrella might be make-tasks-interruptible | 19:38 |
devananda | or something | 19:38 |
devananda | yea | 19:38 |
lucasagomes | NobodyCam, actually it will be clientsideerror | 19:38 |
devananda | cause it's not just about long-running things, or timeouts | 19:38 |
max_lobur | make-tasks-interruptible sounds better | 19:39 |
lucasagomes | http code 400 instead of internalservererror httpcode 500 | 19:39 |
devananda | max_lobur: ++ | 19:39 |
max_lobur | ok, finishing | 19:39 |
lucasagomes | NobodyCam, I'll investigate that 1 sec | 19:39 |
NobodyCam | lucasagomes: ack | 19:39 |
openstackgerrit | Devananda van der Veen proposed a change to openstack/ironic: Sanitize node.last_error message strings https://review.openstack.org/64711 | 19:40 |
max_lobur | are we able to edit the bp body after it's created? I marked some unclear places with "?" and would like to remove them after one more round of discussion | 19:41 |
devananda | max_lobur: yes | 19:42 |
max_lobur | and is somebody other than the creator able to edit it | 19:42 |
openstackgerrit | Devananda van der Veen proposed a change to openstack/ironic: API: Add sample() method on Node https://review.openstack.org/65536 | 19:42 |
devananda | also, max_lobur, take a look at https://review.openstack.org/#/c/48198/2 | 19:44 |
devananda | max_lobur: only creator and PTL, i believe | 19:44 |
devananda | max_lobur: you can use the "specification URL" to point to a wiki or etherpad | 19:44 |
devananda | max_lobur: that is usually better than making many edits to the bp description | 19:44 |
max_lobur | right, will move detailed description to etherpad | 19:45 |
max_lobur | thx! | 19:45 |
* max_lobur looking at 48198 | 19:46 |
devananda | yuriyz, max_lobur - we may want to revive https://review.openstack.org/#/c/48198/2 and incorporate this into the work max_lobur is starting for interruptions | 19:46 |
*** mdurnosvistov has joined #openstack-ironic | 19:46 | |
* devananda trolls through the status:abandoned list | 19:49 | |
devananda | erm, woops | 19:49 |
devananda | s/trolls/trawls/ :) | 19:49 |
max_lobur | :D | 19:50 |
lifeless | devananda: trolls sounds appropriate | 19:50 |
max_lobur | trolls may be acceptable too | 19:50 |
max_lobur | :D | 19:50 |
lifeless | o/ | 19:50 |
NobodyCam | :) | 19:50 |
max_lobur | https://blueprints.launchpad.net/ironic/+spec/make-tasks-interruptible done | 19:50 |
NobodyCam | hey lifeless :) | 19:51 |
max_lobur | hi lifeless | 19:51 |
max_lobur | devananda, lucasagomes ^ the bp ref | 19:51 |
lucasagomes | max_lobur, cheers :D | 19:52 |
max_lobur | and I'm going to go home.. | 19:52 |
devananda | max_lobur: thanks! g'night :) | 19:52 |
NobodyCam | have a good night max_lobur :) | 19:52 |
lucasagomes | max_lobur, g'night | 19:52 |
lucasagomes | NobodyCam, http://paste.openstack.org/show/62131/ | 19:53 |
max_lobur | yep, see you tomorrow :) | 19:53 |
max_lobur | night Everyone! | 19:53 |
*** max_lobur is now known as max_lobur_afk | 19:53 | |
* NobodyCam looks | 19:53 | |
lucasagomes | NobodyCam, so, the kernel+ramdisk should be registered in nova? | 19:54 |
NobodyCam | lucasagomes: did you set deploy_kernel_id on the flavor? | 19:54 |
lucasagomes | NobodyCam, heh nop | 19:54 |
lucasagomes | I mean | 19:54 |
lucasagomes | I thought I would register it directly in ironic | 19:54 |
devananda | lucasagomes: looks like https://review.openstack.org/#/c/58266/2 may want to be restored? | 19:54 |
lucasagomes | devananda, ohhh +1 | 19:55 |
lucasagomes | devananda, will restore that | 19:55 |
lucasagomes | devananda, thank u! | 19:55 |
lucasagomes | NobodyCam, do you have the command line you use to register the flavor handy? | 19:56 |
lucasagomes | I was just doing | 19:56 |
lucasagomes | nova flavor-create baremetal auto 512 10 1 | 19:56 |
NobodyCam | lucasagomes: wait one sec | 19:56 |
*** vkozhukalov has quit IRC | 19:56 | |
NobodyCam | I may have spoke too soon | 19:57 |
lucasagomes | NobodyCam, right thanks | 19:57 |
NobodyCam | nope | 19:57 |
NobodyCam | lines 48 thru 52 of https://review.openstack.org/#/c/51328/12/nova/virt/ironic/ironic_driver_fields.py | 19:58 |
devananda | lucasagomes: same for https://review.openstack.org/#/c/61960/3 | 19:58 |
NobodyCam | 'nova_object': 'flavor','object_field': 'extra_specs/baremetal:deploy_kernel_id'}, | 19:58 |
lucasagomes | devananda, I think you already proposed a better solution for that | 19:59 |
devananda | ah, k | 19:59 |
lucasagomes | devananda, using the hashring to identify if the driver is available or not | 19:59 |
devananda | lucasagomes: ah yes ,thanks! | 20:00 |
devananda | i should scroll all the way to the end of the bug report :) | 20:00 |
lucasagomes | ^^ | 20:00 |
lucasagomes | NobodyCam, cheers | 20:00 |
NobodyCam | lucasagomes: https://github.com/openstack/tripleo-incubator/blob/master/scripts/setup-baremetal#L41-L44 | 20:00 |
lucasagomes | NobodyCam, a-ha here we go! | 20:01 |
lucasagomes | ta much! | 20:01 |
lucasagomes | the pxe_root_gb I continue to set in ironic? | 20:01 |
NobodyCam | lucasagomes: lines 45-46 of the above ironic_driver_fields | 20:02 |
NobodyCam | that comes from instance | 20:02 |
devananda | lucasagomes: pls see yuriyz' response on https://review.openstack.org/#/c/68018/2/ironic/api/controllers/v1/node.py | 20:02 |
devananda | lucasagomes: regarding your -1 on the patch | 20:02 |
lucasagomes | devananda, will take a look | 20:02 |
lucasagomes | NobodyCam, thanks | 20:02 |
openstackgerrit | Devananda van der Veen proposed a change to openstack/ironic: API validates driver name for both POST and PATCH https://review.openstack.org/68018 | 20:03 |
lucasagomes | devananda, hmm, true it makes sense, because the acquire will look at the driver right? | 20:04 |
*** ndipanov has quit IRC | 20:05 | |
devananda | max_lobur_afk: looks like this should be restored too: https://review.openstack.org/#/c/63904/ | 20:08 |
devananda | lucasagomes: acquire? | 20:08 |
devananda | lucasagomes: ah. right. | 20:08 |
NobodyCam | brb | 20:08 |
devananda | lucasagomes: but no, i think because update_node isn't performed by the API for /any/ data | 20:09 |
lucasagomes | devananda, true | 20:09 |
devananda | lucasagomes: update_node will always need to be routed to a conductor. and if no conductor is currently alive and advertising support for that driver, there is no topic for it | 20:09 |
lucasagomes | devananda, true, but before, get_topic_for would use a generic topic | 20:10 |
lucasagomes | if the driver was invalid | 20:10 |
devananda | yes | 20:10 |
lucasagomes | so any conductor alive would be able to get that request and update the node | 20:10 |
devananda | which would allow any arbitrary conductor to work on it | 20:10 |
devananda | yep | 20:10 |
lucasagomes | independent of the driver it has | 20:10 |
lucasagomes | yea | 20:10 |
lucasagomes | hmm do you think it's a bad practice? | 20:10 |
devananda | which, if the conductor tried to load the driver (eg, in acquire()) would of course fail | 20:10 |
lucasagomes | right | 20:11 |
lucasagomes | that was what I was thinking | 20:11 |
lucasagomes | hmm | 20:11 |
devananda | iow, the old behavior was actually broken | 20:12 |
devananda | but since we aren't doing functional testing w/ multiple conductor instances w/ different drivers, we haven't hit that issue yet | 20:12 |
lucasagomes | ok fair, it was more an observation, cause it sounds a bit odd if the partial update fails because of an unrelated attribute | 20:12 |
devananda | well | 20:12 |
devananda | old way: update would fail for any attribute if driver not found | 20:13 |
devananda | but you'd get an error about driver-not-found even when updating node.properties | 20:13 |
devananda | new way: update will fail for any attribute if driver not found | 20:13 |
devananda | but you'll get a more sensible error :) | 20:13 |
lucasagomes | heh yea | 20:13 |
devananda | and the error is generated in API, rather than in a random conductor | 20:13 |
lucasagomes | which is fair enough | 20:14 |
devananda | whether we should allow updates for nodes which have no active driver is a different question ;) | 20:14 |
lucasagomes | indeed | 20:14 |
lucasagomes | I thought the old way would allow us to update independent of whether the driver is available or not | 20:15 |
lucasagomes | but I might be wrong there | 20:15 |
devananda | i thought acquire() will load the driver | 20:15 |
devananda | and thus fail if eg, you try to update a node which has driver='foobar' | 20:15 |
*** jdob has quit IRC | 20:15 | |
lucasagomes | devananda, just took a look at the code, yea it will try to load the driver | 20:17 |
lucasagomes | and raise | 20:17 |
lucasagomes | raise exception.DriverNotFound(driver_name=driver_name) | 20:17 |
lucasagomes | in case it's not found | 20:17 |
devananda | ya | 20:17 |
lucasagomes | I was a bit confused because | 20:17 |
lucasagomes | in the acquire method we have the default argument | 20:17 |
lucasagomes | driver_name=None | 20:17 |
devananda | humm | 20:17 |
lucasagomes | but following it down yea it will always try to load it | 20:17 |
lucasagomes | :param driver_name: Name of Driver. Default: None | 20:18 |
lucasagomes | And then on the NodeManager: driver_name = driver_name or self.node.get('driver') | 20:18 |
devananda | yes | 20:19 |
devananda | so | 20:19 |
lucasagomes | devananda, so yea the patch is correct, I will review it later | 20:20 |
lucasagomes | sorry for the confusion | 20:20 |
lucasagomes | (and for suggesting something that won't work) | 20:20 |
devananda | lucasagomes: http://git.openstack.org/cgit/openstack/ironic/commit/?id=425a4438 | 20:20 |
devananda | I think this is to allow changing the driver_name | 20:21 |
devananda | and is needed if we want to move a node from one driver (on conductor A) to another driver (on conductor B) | 20:21 |
lucasagomes | right, hmm makes sense | 20:23 |
devananda | yea, that's it -- normally it uses self.node.get('driver'), except when a new driver_name is passed | 20:24 |
devananda | in which case the update_node method should have used the new driver_name when routing the RPC message | 20:25 |
devananda | and conductor needs to acquire() the node with the new driver, not the current one | 20:25 |
devananda | :) | 20:25 |
devananda | s/:)// -- fingers movign too fast in the wrong window | 20:26 |
NobodyCam | for the next rev of the nova driver do you think I should switch 'except ironic_exception.HTTPInternalServerError' to 'except Exception'? | 20:28 |
lucasagomes | devananda, thanks! | 20:28 |
lucasagomes | http://paste.openstack.org/show/62134/ | 20:29 |
lucasagomes | I've seen a lot of "Node is already locked by another process" when triggering the deploy | 20:29 |
lucasagomes | I think the periodic tasks are running too often | 20:29 |
lucasagomes | or we should somehow tell it to not acquire that node while it's being deployed | 20:30 |
NobodyCam | lucasagomes: do we check that the nodes instance <> uuid? | 20:31 |
NobodyCam | that would tell us | 20:31 |
devananda | lucasagomes: oh, interesting | 20:32 |
lucasagomes | NobodyCam, +1 or the target_provision_state | 20:32 |
devananda | lucasagomes: we should put a retry in there | 20:32 |
lucasagomes | if it's active | 20:32 |
lucasagomes | devananda, +1 | 20:32 |
devananda | retry in the nova-ironic driver | 20:32 |
devananda | since api calls may get transitory failures like this | 20:32 |
devananda | it's normal | 20:32 |
lucasagomes | cause the deploy might fail because the periodic task is running on the node | 20:32 |
lucasagomes | devananda, yup | 20:32 |
lucasagomes | we can check if the API returned conflict | 20:33 |
devananda | yep | 20:33 |
lucasagomes | and retry it | 20:33 |
devananda | yep | 20:33 |
devananda | it should only retry for certain HTTP codes, ofc | 20:33 |
lucasagomes | +1 | 20:33 |
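The retry-on-conflict idea above might look something like the sketch below. ApiError and the hard-coded status codes are stand-ins for illustration, not the real ironicclient exception classes:

```python
import time

class ApiError(Exception):
    """Stand-in for an Ironic client HTTP error."""
    def __init__(self, http_status):
        super().__init__('HTTP %d' % http_status)
        self.http_status = http_status

def call_with_retry(fn, retry_codes=(409,), attempts=3, delay=0.0):
    """Retry an API call only on transient statuses such as
    409 Conflict ("Node is already locked by another process");
    anything else is re-raised immediately."""
    for attempt in range(attempts):
        try:
            return fn()
        except ApiError as exc:
            if exc.http_status not in retry_codes or attempt == attempts - 1:
                raise
            time.sleep(delay)
```

Restricting the retry to specific HTTP codes matters: a 409 from a periodic task holding the node lock is transitory, while a 400 or 500 would just be retried pointlessly.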
lucasagomes | NobodyCam, about the InternalServerError | 20:33 |
lucasagomes | it should be clienterror | 20:33 |
lucasagomes | cause in that case it's trying to remove an attribute that doesn't exist in that node | 20:34 |
lucasagomes | so the APi should return 400 (client error) | 20:34 |
lucasagomes | bad request* | 20:34 |
devananda | i'm going to head home... afk for ~ an hour | 20:34 |
NobodyCam | :) | 20:35 |
lucasagomes | I'm going to take a break as well | 20:35 |
NobodyCam | devananda: you go into the office today? | 20:35 |
devananda | NobodyCam: no. been working from the hill, tho | 20:35 |
NobodyCam | :) ahh | 20:35 |
NobodyCam | enjoy the walk (driver) home | 20:35 |
lucasagomes | NobodyCam, devananda g'night | 20:35 |
NobodyCam | night lucasagomes :) | 20:35 |
lucasagomes | devananda, safe drive back home | 20:36 |
lucasagomes | NobodyCam, thanks, and thanks for all the help with the deploy stuff | 20:36 |
NobodyCam | :) | 20:36 |
NobodyCam | lucasagomes: fyi I should have yet another nova driver up today | 20:36 |
lucasagomes | NobodyCam, good stuff | 20:36 |
lucasagomes | I will test that tomorrow | 20:36 |
NobodyCam | :) | 20:36 |
NobodyCam | :) | 20:37 |
*** lucasagomes has quit IRC | 20:37 | |
devananda | g'night, lucas! | 20:40 |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Fix keepalive behavior on broken Sessions https://review.openstack.org/69967 | 20:43 |
jbjohnso | I've moved on to long term test with network failure injection to find issues | 20:44 |
jbjohnso | That one was interesting: an application with multiple sessions to multiple bmcs, where one of those sessions connected successfully and then lost connectivity, would lose its mind | 20:45 |
jbjohnso | the fun sorts of problems I get to enjoy by being more ambitious than ipmitool | 20:45 |
NobodyCam | :) | 20:46 |
jbjohnso | and the shortest git review I think I've ever done | 20:46 |
*** coolsvap is now known as coolsvap_away | 20:46 | |
NobodyCam | so if you have 5 connections and lose four, the one that was still active goes wacko | 20:46 |
jbjohnso | NobodyCam, or if you had 5 concurrent sessions and just one went bad | 20:48 |
jbjohnso | then the one bad one would basically starve the 4 | 20:48 |
jbjohnso | well, except for SOL payloads | 20:48 |
NobodyCam | ahh :) good catch | 20:48 |
jbjohnso | which still carried on | 20:48 |
jbjohnso | but cpu usage would be high | 20:48 |
jbjohnso | man, going through two apache reverse proxy setups really visibly impacts the chunkiness of my remote console server | 20:49 |
jbjohnso | I should stop doing that.. | 20:49 |
NobodyCam | lol | 20:49 |
jbjohnso | btw, was curious | 20:50 |
jbjohnso | so shellinabox is currently part of the strategy of things | 20:50 |
NobodyCam | add a squid cache that will help...LOL | 20:50 |
NobodyCam | :-p | 20:50 |
jbjohnso | so if I do show off this console server and I license it as Apache | 20:50 |
jbjohnso | how would you want to work the javascript logistics in the browser? | 20:51 |
jbjohnso | I used the shellinabox javascript sloppily (since I am a terrible web designer) with a trivial ajax filled select and an iframe to change the console | 20:52 |
NobodyCam | browser like lynx? | 20:52 |
jbjohnso | ironically | 20:52 |
jbjohnso | lynx or links wouldn't work very well as a text console browser | 20:52 |
jbjohnso | since it doesn't do javascript | 20:52 |
jbjohnso | but you could run links | 20:52 |
jbjohnso | and then embed that in another browser | 20:52 |
jbjohnso | you just gave me an idea to demo the external application console plugin... what better console than lynx | 20:53 |
*** rloo has quit IRC | 20:54 | |
*** rloo has joined #openstack-ironic | 20:54 | |
NobodyCam | many times I do not even have access to gui browsers | 20:54 |
jbjohnso | well, all this stuff is catering to web people | 20:54 |
jbjohnso | the exact same console is available over non-http | 20:54 |
NobodyCam | so I think (as it is a text console) text browser support would be awesome | 20:55 |
jbjohnso | in fact, concurrent accesses from http and non-http can see each other type | 20:55 |
jbjohnso | it's just the non-http users don't notice latency as badly as the http users | 20:55 |
NobodyCam | ssh/telnet and http | 20:55 |
jbjohnso | well, it's a socket that's available over TLS or unix domain socket | 20:56 |
* NobodyCam remembers plugging 9600 bps modems into serial ports for remote console access | 20:56 |
jbjohnso | I remember real vt100s | 20:57 |
NobodyCam | try and find a computer with a db9(25) port now | 20:57 |
jbjohnso | a server or other? | 20:57 |
jbjohnso | server it's easy | 20:57 |
* NobodyCam user ibm rs and hp mini's (mini took up a whole room :-p ) | 20:58 | |
NobodyCam | s/user/used/ | 20:58 |
jbjohnso | I love how parallel cables are technically 'subminiature' | 21:00 |
jbjohnso | so tiny, miniature isn't small enough | 21:00 |
jbjohnso | and yet, gigantic by today's standards | 21:01 |
NobodyCam | :) | 21:02 |
openstackgerrit | A change was merged to stackforge/pyghmi: Fix keepalive behavior on broken Sessions https://review.openstack.org/69967 | 21:05 |
*** jcooley_ has quit IRC | 21:11 | |
*** jcooley_ has joined #openstack-ironic | 21:12 | |
openstackgerrit | Ruby Loo proposed a change to openstack/ironic: mock's return value for processutils.ssh_execute https://review.openstack.org/69479 | 21:15 |
NobodyCam | lol got side tracked looking up the old debug command to setup mfm/rll drives ... congrats to seagate for keeping the info up. (fyi: ftp://ftp.seagate.com/techsuppt/controllers/st11m-r.txt) | 21:17 |
NobodyCam | how many people here remember mfm/rll hard drives | 21:17 |
* NobodyCam wonders | 21:17 |
rloo | NobodyCam, what are you talking about? :-) | 21:25 |
NobodyCam | lol :-P | 21:28 |
* NobodyCam feels old | 21:28 | |
rloo | but NobodyCam is young at heart! | 21:32 |
NobodyCam | :) | 21:40 |
NobodyCam | Ty rloo | 21:40 |
jbjohnso | some people probably don't even remember IDE | 21:43 |
NobodyCam | jbjohnso: lol wow am I that old... I missed punch cards but did use paper tape! | 21:45 |
jbjohnso | I vaguely remember messing with Xylogics controllers in Sun4 systems | 21:45 |
jbjohnso | mainly because someone screwed around with them for no good reason and I spent a long time with tweezers unbending the pins on the controller | 21:46 |
jbjohnso | someone wanted to reorganize the Sun4 VME bits into one super-duper Sun4 system with lots of disk in the student computer lab | 21:47 |
jbjohnso | I do remember as the Sun4 systems went down one by one we did salvage, ended up with a massive 128 MB of ram in one system, that was amazing... | 21:48 |
NobodyCam | nice | 21:48 |
NobodyCam | we had a dec pdp-11 | 21:49 |
jbjohnso | in that day, a 5U storage enclosure held precisely 1 drive: 540 Megabytes | 21:49 |
NobodyCam | yep. I installed several on the hp I used to work on... >220 lbs each | 21:50 |
NobodyCam | took four of us | 21:50 |
jbjohnso | this is going to turn into a tech variant of the four yorkshiremen skit | 21:51 |
NobodyCam | lol jb did you see the link I posted last night? Re: python bomb? | 21:53 |
NobodyCam | http://www.urbandictionary.com/define.php?term=Python%20Bomb | 21:55 |
jbjohnso | heh | 21:57 |
jbjohnso | hmm... wonder why it is the advertisements now decide to talk to me about lenovo so much... | 21:57 |
jbjohnso | guess the acquisition caused me to do things that the ad servers mistook for looking to buy lenovo rather than being bought by lenovo | 21:57 |
NobodyCam | lol google is watching where you go! | 21:58 |
jbjohnso | I for one take some fascination in how tracking me changes my experience | 21:58 |
*** epim has joined #openstack-ironic | 21:58 | |
jbjohnso | ok, I look up some auto part and I see ads for that auto part for a while, I'm with you... | 21:58 |
jbjohnso | I look at a site about speedboats, and then I see banner ads for *yachts* | 21:59 |
NobodyCam | yep! | 21:59 |
jbjohnso | one, you've grossly overestimated my success in the world | 21:59 |
NobodyCam | lol | 21:59 |
jbjohnso | two, even if I were that successful, does a *banner* ad really make anyone buy a yacht... | 21:59 |
jbjohnso | "oh yeah, might as well, where's the 'buy it now' button..." | 21:59 |
NobodyCam | lol None of NobodyCam's credit cards would pass the 'buy a Yacht now' check | 22:00 |
devananda | folks who want to land things -- | 22:08 |
devananda | in case you don't want to just "recheck no bug", this bug has been causing a lot of the failures: https://bugs.launchpad.net/nova/+bug/1273386 | 22:09 |
devananda | pretty easy to tell if that's why jenkins failed it -- look in the test failure's kernel.log for this: http://paste.openstack.org/show/61869/ | 22:10 |
rloo | devananda. so we should just keep doing 'recheck's until it wins the lottery? | 22:15 |
devananda | rloo: probably not. that'll just further choke the queues. though, i'm doing that with the stevedore fix to try to unblock /everything/ else | 22:16 |
devananda | rloo: but if you feel something needs to be rechecked/reverified, that bug seems to be causing most of our pain | 22:17 |
rloo | devananda. ok, i think for important stuff (like the stevedore) it makes sense, but otherwise, I figured I'd just wait. | 22:17 |
rloo | devananda. i'm even wondering whether it is worth reviewing right now. | 22:18 |
NobodyCam | brb afternoon walkies | 22:22 |
*** aignatov is now known as aignatov_ | 22:27 | |
devananda | agordeev, romcheg - don't suppose either of you are around? | 22:34 |
devananda | could use your eyes on our tempest test suite | 22:34 |
devananda | i suspect we're doing something we shouldn't, because we're hitting a neutron bug EVERY SINGLE TIME. and even Nova is not. | 22:34 |
devananda | ahh | 22:36 |
devananda | we are testing tempest with neutron | 22:37 |
romcheg | devananda: Hi, I'm semi-available here :) | 22:37 |
*** matty_dubs is now known as matty_dubs|gone | 22:38 | |
NobodyCam | Hi romcheg :) How goes the revolution | 22:38 |
devananda | romcheg: https://review.openstack.org/70001 | 22:38 |
romcheg | devananda: But we need to test neutron integration as well... | 22:40 |
devananda | romcheg: today? | 22:40 |
romcheg | No | 22:40 |
devananda | romcheg: right | 22:40 |
devananda | romcheg: and we can't land ANYTHING today because of bugs in neutron | 22:40 |
romcheg | Now it's not required | 22:40 |
romcheg | devananda: that's what I was about to mention | 22:40 |
devananda | romcheg: even Nova does not have neutron enabled in their tempest tests | 22:40 |
devananda | romcheg: look at tempest-dsvm-full | 22:41 |
devananda | for example | 22:41 |
devananda | it's not using neutron | 22:41 |
romcheg | Yes, neutron's baas (bug as a service) works great. I had several problems with it while testing Ironic tests but they were not critical | 22:42 |
romcheg | I will support that patch to infra | 22:43 |
devananda | ty | 22:47 |
* devananda wants to unblock our gate ... | 22:48 | |
*** mrda_away is now known as mrda | 22:49 | |
mrda | morning all | 22:49 |
NobodyCam | morning mrda | 22:49 |
romcheg | NobodyCam: we had some good progress but there are a lot of things to do yet | 22:51 |
mrda | hey NobodyCam, just wondering if you could +2 it. It's already approved and it's passed check; it's just stuck and won't run gate until a core re-approves | 22:51 |
romcheg | NobodyCam: Ukraine drives to Europe http://cl.ly/image/3s222R1g3440 :) | 22:51 |
romcheg | NobodyCam: Thanks for your interest | 22:51 |
NobodyCam | romcheg: :) | 22:51 |
mrda | Sorry NobodyCam it's https://review.openstack.org/#/c/68852/ | 22:51 |
NobodyCam | mrda: link | 22:51 |
NobodyCam | :) | 22:51 |
mrda | thnx | 22:52 |
NobodyCam | mrda: it is approved but held up on https://review.openstack.org/#/c/66078/2 | 22:52 |
devananda | mrda: so. read scrollback if you want more history ... | 22:52 |
devananda | mrda: tldr - gate has been stalled for a while. i'm trying to unblock it | 22:53 |
devananda | mrda: ignore jenkins' -1's right now, too. same problem | 22:53 |
NobodyCam | in this case the dep is not approved | 22:53 |
devananda | mrda: also, rebase anything on top of 69495 if you want to have a chance of passing gate | 22:53 |
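(Editor's sketch: the rebase devananda suggests looks roughly like the following. The branch and file names here are invented for illustration; with the git-review tool, the unblocking change itself would be fetched via `git review -d 69495` before rebasing, and the rebased change re-uploaded with `git review`.)

```shell
# Simulate "rebase your change on top of the gate fix" with plain git.
# The gate-fix branch stands in for change 69495.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email "dev@example.com" && git config user.name "Dev"
echo base > base.txt && git add base.txt && git commit -qm "base"
git checkout -qb gate-fix                  # stands in for change 69495
echo fix > fix.txt && git add fix.txt && git commit -qm "unblock gate"
git checkout -q -                          # back to the starting branch
git checkout -qb my-feature
echo feat > feat.txt && git add feat.txt && git commit -qm "my feature"
git rebase -q gate-fix                     # feature now sits on top of the fix
git log --format='%s'                      # my feature / unblock gate / base
```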
NobodyCam | devananda: so https://review.openstack.org/#/c/66078/2 is good to go now | 22:54 |
* devananda looks | 22:54 | |
devananda | yea | 22:54 |
mrda | thanks, been reading scrollback - lots of discussion o/night | 22:54 |
devananda | as soon as gate is fixed :) | 22:54 |
NobodyCam | ahh I had already +2 it | 22:55 |
devananda | mrda: the discussion about interrupting a deploy is probably worth a read | 22:55 |
mrda | devananda: ok, thanks | 22:55 |
openstackgerrit | Ruby Loo proposed a change to openstack/ironic: SSHPower driver raises IronicExceptions https://review.openstack.org/66990 | 23:00 |
NobodyCam | omg romcheg that's not near you, is it? | 23:00 |
romcheg | That's a burnt police bus in Kyiv | 23:00 |
NobodyCam | wow | 23:00 |
devananda | romcheg: lol, nice ride :p | 23:01 |
romcheg | NobodyCam: Quite far from me but I hope people here in Kharkiv will become more active eventually... | 23:02 |
NobodyCam | http://www.bbc.co.uk/news/world-europe-25885588 | 23:03 |
devananda | (not intending to be offensive. making light of very grave things is best, IMO, otherwise they are just depressing) | 23:03 |
NobodyCam | that does not look safe to /me | 23:03 |
NobodyCam | :) | 23:03 |
romcheg | NobodyCam: the funniest one I've seen http://pbs.twimg.com/media/Be7W1zKCQAA0JpR.jpg:medium | 23:05 |
*** epim has quit IRC | 23:05 | |
NobodyCam | omg | 23:06 |
NobodyCam | :) | 23:06 |
NobodyCam | i like this one: http://www.bbc.co.uk/news/world-europe-25927212 | 23:06 |
*** rloo has quit IRC | 23:07 | |
NobodyCam | sorry, it's the 5th picture | 23:07 |
*** rloo has joined #openstack-ironic | 23:08 | |
NobodyCam | the link takes you to the first one | 23:08 |
*** rloo has quit IRC | 23:09 | |
*** rloo has joined #openstack-ironic | 23:10 | |
romcheg | NobodyCam: I think this one was in all world news http://www.popularresistance.org/wp-content/uploads/2013/12/Ukraine-man-playing-pianor-to-riot-police-Dec-7-2013.jpg | 23:11 |
NobodyCam | :) romcheg lol Thats really good too | 23:11 |
*** rloo has quit IRC | 23:17 | |
*** rloo has joined #openstack-ironic | 23:17 | |
* NobodyCam wanders afk for a few min.. | 23:22 | |
*** mdurnosvistov has quit IRC | 23:24 | |
*** epim has joined #openstack-ironic | 23:32 | |
*** epim has quit IRC | 23:48 | |
*** romcheg has left #openstack-ironic | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!