Monday, 2015-06-29

*** smoriya has joined #openstack-ironic00:09
*** sinval has joined #openstack-ironic00:23
*** jamielennox is now known as jamielennox|away00:26
*** jamielennox|away is now known as jamielennox00:30
*** Haomeng has joined #openstack-ironic00:44
*** Haomeng|2 has quit IRC00:47
openstackgerritSinval Vieira Mendes Neto proposed openstack/ironic: Adds port creation passing the name of the node instead of the UUID of the node  https://review.openstack.org/19343900:52
openstackgerritSinval Vieira Mendes Neto proposed openstack/ironic: Add port creation passing the name of the node instead of the UUID of the node  https://review.openstack.org/19343900:59
*** zhenguo has joined #openstack-ironic01:16
*** puranamr has quit IRC01:37
*** ThomasPB has joined #openstack-ironic01:38
*** puranamr has joined #openstack-ironic01:45
*** chenglch has joined #openstack-ironic01:48
openstackgerritSinval Vieira Mendes Neto proposed openstack/ironic: Adds port creation passing the name of the node instead of the UUID of the node  https://review.openstack.org/19343901:54
*** Haomeng has quit IRC01:54
*** Sukhdev has joined #openstack-ironic02:01
*** sinval has quit IRC02:08
*** Haomeng has joined #openstack-ironic02:39
*** ramineni has joined #openstack-ironic02:40
*** zz_natorious is now known as natorious02:52
*** achanda has joined #openstack-ironic03:11
*** achanda has quit IRC03:18
*** saripurigopi has joined #openstack-ironic03:34
saripurigopigood morning Ironic...03:35
*** saripurigopi has quit IRC03:43
*** Nisha has joined #openstack-ironic03:47
*** davideagnello has joined #openstack-ironic03:51
*** puranamr has quit IRC03:51
*** coolsvap|away is now known as coolsvap03:52
*** Nisha_away has joined #openstack-ironic03:54
*** Nisha has quit IRC03:54
*** sinval has joined #openstack-ironic03:54
*** davideagnello has quit IRC03:56
*** Sukhdev has quit IRC04:01
*** saripurigopi has joined #openstack-ironic04:05
*** Marga_ has joined #openstack-ironic04:06
*** Marga_ has quit IRC04:07
*** Marga_ has joined #openstack-ironic04:07
*** puranamr has joined #openstack-ironic04:11
*** Nisha_brb has joined #openstack-ironic04:13
*** Nisha_away has quit IRC04:13
*** puranamr has quit IRC04:16
*** saripurigopi has quit IRC04:18
*** Marga_ has quit IRC04:20
*** Marga_ has joined #openstack-ironic04:21
*** Marga_ has quit IRC04:25
openstackgerritZhenguo Niu proposed openstack/ironic: Add ability to filter nodes by provision_state via API  https://review.openstack.org/19652904:36
openstackgerritAnusha Ramineni proposed stackforge/proliantutils: Add RIS support for firmware update  https://review.openstack.org/19395204:50
*** sinval has quit IRC04:55
*** rameshg87 has joined #openstack-ironic04:55
*** davideagnello has joined #openstack-ironic04:57
*** davideagnello has quit IRC05:03
*** davideagnello has joined #openstack-ironic05:03
*** davideagnello has quit IRC05:03
*** puranamr has joined #openstack-ironic05:08
*** saripurigopi has joined #openstack-ironic05:18
*** radek__ has joined #openstack-ironic05:20
*** yuanying_ has joined #openstack-ironic05:22
*** radek_ has quit IRC05:23
*** Nisha_brb has quit IRC05:23
*** yuanying has quit IRC05:25
*** puranamr has quit IRC05:26
Haomengsaripurigopi: morning:)05:28
saripurigopimorning Haomeng o/05:29
Haomengsaripurigopi: :)05:29
*** stegranet has joined #openstack-ironic05:30
*** Marga_ has joined #openstack-ironic05:35
naohirotHaomeng: saripurigopi: good morning :)05:38
Haomengnaohirot: morning:)05:38
naohirotrameshg87: good morning05:39
naohirotrameshg87: or Haomeng:05:39
Haomengnaohirot: :)05:39
naohirotrameshg87: or Haomeng: I have question about rest api05:39
Haomengnaohirot: sure05:39
*** Marga_ has quit IRC05:40
naohirotHaomeng: If rest api is async, that api returns 202 if request is accepted.05:40
*** Marga_ has joined #openstack-ironic05:40
naohirotHaomeng: how can user know if error happened after api returned 202?05:41
naohirotHaomeng: for instance power control.05:41
Haomengnaohirot: yes, 202 just means that api request is accepted already05:41
Haomengnaohirot: have to check the status api if we have one, I think05:41
naohirotHaomeng: okay, I'm looking source code too, node.py in api.05:42
Haomengnaohirot: we can call node-show api to get power status05:43
naohirotHaomeng: Okay, I believe that is a solution, but please look at Jim's comment here05:44
Haomengnaohirot:  GET http://9.119.58.234:6385/v1/nodes/<NODE_UUID>05:44
Haomengnaohirot: for example05:44
Haomengnaohirot: and Lucas has a patch to support get subset for node properties05:44
naohirotHaomeng: https://review.openstack.org/#/c/186700/4/specs/liberty/enhance-power-interface-for-soft-reboot-and-nmi.rst05:45
Haomengnaohirot: so you can try to get node's power_state only05:45
naohirotHaomeng: line 15605:45
Haomengnaohirot: ok, let me check05:45
*** puranamr has joined #openstack-ironic05:46
Haomengnaohirot: yes, this is a concern: the user has to call another api to check the status, and if the user runs the cli, it is easy to call another api and return the result to the user05:47
naohirotHaomeng: This is a way of async call, so do we have a solution without making another call?05:49
Haomengnaohirot: sorry, no more idea so far:)05:49
*** yog__ has joined #openstack-ironic05:50
Haomengnaohirot: but I think we can follow the async call mode in openstack05:50
Haomengnaohirot: do you want to make it a sync call?05:50
naohirotHaomeng: Okay, I should ask Jim, but the timezone is different. What kind of answer do you think he expected?05:50
Haomengnaohirot: I understand Jim just think from user view, how to tell user if soft-power failed05:51
*** puranamr has quit IRC05:51
HaomengHaomeng: so we can have another api call to get status, I think it makes sense:)05:51
naohirotHaomeng: Yes, we can have another api to get status, but not error code of the previous call.05:52
Haomengnaohirot: same behavior with current existing  PUT v1/nodes/<NODE_UUID>/states/power api call05:53
naohirotHaomeng: It's very difficult to keep track of the previous call.05:53
Haomengnaohirot: yes05:53
Haomengnaohirot: let me post these into your spec as my comments, thanks for your spec:)05:54
naohirotHaomeng: I understand that making another call is a way of async call. thanks!05:54
Haomengnaohirot: yes05:54
naohirotHaomeng: Yes, please, that is absolutely helpful for me.:-)05:55
Haomengnaohirot: so dont worry, that should be common behavior that we run api in async mode, I think:)05:55
naohirotHaomeng: Yep, :)05:56
Haomengnaohirot: jim's concern is "For "soft reboot", "get power state" will *always* return POWER_ON, I think. This makes it impossible to tell if the call succeeded or not. The only way to check is to log on and look at uptime."05:57
Haomengnaohirot: so can we add more status to identify the soft reboot status05:57
Haomengnaohirot: such as soft-reboot-pending status, make sense?05:58
*** gridinv_ has quit IRC05:58
Haomengnaohirot: So can we add more pending status, such as 'SOFT_REBOOTING' status as an indicator for the user?05:59
naohirotHaomeng: Yes, that makes sense.05:59
Haomengnaohirot: ok, I will add comments, and wait more response for this issue, and we can get solutions I think, dont worry05:59
naohirotHaomeng: actually, current power status are06:00
Haomengnaohirot: ?06:00
naohirotHaomeng: just second06:01
Haomengnaohirot: yes06:01
naohirotHaomeng: POWER_ON, POWER_OFF,REBOOT, POWER_OFF_SOFT, REBOOT_SOFT or INJECT_NMI06:01
Haomengnaohirot: so for our status, we missed some 'PENDING' status06:01
naohirotHaomeng: I incorporated rameshg87's advice06:02
naohirotHaomeng: so power status will be06:02
Haomengnaohirot: can not find ramesh comments, which revision?06:03
naohirotHaomeng: POWER_ON, POWER_OFF,REBOOT, POWER_OFF_SOFT, REBOOT_SOFT, INJECT_NMI, POWER_OFF_SOFT_IN_PROGRESS, SOFT_REBOOTING06:03
naohirotHaomeng: something like that?06:03
naohirotHaomeng: there are a lot :)06:04
Haomengnaohirot: I think that is good, but not sure if other guys have more thoughts:)06:04
Haomengnaohirot: and IMO, we need to add more 'PENDING' status for existing power actions, such as power on/off06:04
Haomengnaohirot: we need to have power-on-inprogress and power-off-inprogress for the user to understand the pending status06:05
naohirotHaomeng: I see, PENDING is good, we can use for all of them by one word.06:05
Haomengnaohirot: yes, i think so:)06:06
naohirotHaomeng: May be Jim may have expected just POWER_ERROR and POWER_PENDING06:07
Haomengnaohirot: yes:)06:07
Haomengnaohirot: dont worry06:07
naohirotHaomeng: we had a good discussion :)06:08
Haomengnaohirot: :)06:08
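(For reference, a minimal sketch of the async pattern discussed above; the endpoint, token and node UUID below are placeholders, not values from this log. The power call returns 202 as soon as the request is accepted, and the caller polls the node's states until the target power state appears.)

    import time
    import requests

    IRONIC = 'http://127.0.0.1:6385/v1'                      # placeholder endpoint
    NODE_UUID = '1be26c0b-03f2-4d2e-ae87-c02d7f33c123'       # placeholder node
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}                # placeholder token

    # Ask Ironic to power the node off; 202 only means "request accepted".
    resp = requests.put('%s/nodes/%s/states/power' % (IRONIC, NODE_UUID),
                        json={'target': 'power off'}, headers=HEADERS)
    assert resp.status_code == 202

    # Poll the states sub-resource until the target state (or a timeout) is reached.
    for _ in range(30):
        states = requests.get('%s/nodes/%s/states' % (IRONIC, NODE_UUID),
                              headers=HEADERS).json()
        if states['power_state'] == 'power off':
            break
        time.sleep(2)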
*** yog__ has quit IRC06:12
*** Marga_ has quit IRC06:17
saripurigopinaohirot : hi06:17
*** Marga_ has joined #openstack-ironic06:17
saripurigopirameshg87 : hii06:17
naohirotsaripurigopi: good morning ;)06:17
openstackgerritMerged openstack/ironic: Do not use "private" attribute in AuthTokenMiddleware  https://review.openstack.org/19500006:18
openstackgerritMerged openstack/ironic: Use oslo_log  https://review.openstack.org/19598906:20
*** dasm|afk is now known as dasm06:23
*** chenglch has quit IRC06:25
naohirotHaomeng: I got a solution, GET /v1/nodes/(node_ident)/states06:33
naohirotHaomeng: this rest api returns Return type:06:33
naohirotNodeStates06:33
naohirotHaomeng: NodeStates has last_error field. http://docs.openstack.org/developer/ironic/webapi/v1.html#nodes06:34
naohirotHaomeng: so we can avoid explosion of power states.06:35
Haomengnaohirot: yes, we can call this api to get status06:36
Haomengnaohirot: you mean to set last_error if we soft-power fail?06:37
naohirotHaomeng: If we introduced POWER_ERROR, that would be very problematic.06:37
Haomengnaohirot: yes06:38
naohirotHaomeng: because if SOFT POWER OFF failed, the node is still in POWER_ON state, not POWER_ERROR state.06:38
naohirotHaomeng: yes, if SOFT POWER OFF failed, we put error message into last_error.06:39
naohirotHaomeng: then user issues  GET /v1/nodes/(node_ident)/states to know what happened.06:40
*** mitchjameson has joined #openstack-ironic06:40
Haomengnaohirot: yes, but  last_error is common field, it will be overwritten by other error06:41
naohirotHaomeng: Yes, it is likely to happen06:41
naohirotHaomeng: I believe the user tries to power (hard) off if soft power off failed.06:43
naohirotHaomeng: so the state of the node has to remain POWER ON.06:43
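(Continuing the sketch above, the failure case naohirot describes can be surfaced without new power states by reading last_error from the same states document; a hedged illustration, reusing the placeholder names from the earlier snippet.)

    import requests

    IRONIC = 'http://127.0.0.1:6385/v1'                      # placeholder, as above
    NODE_UUID = '1be26c0b-03f2-4d2e-ae87-c02d7f33c123'       # placeholder, as above
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}

    # After a soft power off attempt the node may legitimately still report
    # power on, so inspect last_error as well as power_state.
    states = requests.get('%s/nodes/%s/states' % (IRONIC, NODE_UUID),
                          headers=HEADERS).json()
    if states['power_state'] == 'power on' and states.get('last_error'):
        print('soft power off seems to have failed: %s' % states['last_error'])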
rameshg87naohirot: saripurigopi: Haomeng: morning. sorry was away from my desk06:45
saripurigopirameshg87: o/06:45
naohirotrameshg87: good morning06:45
Haomengrameshg87: good morning:)06:49
*** rameshg87 is now known as rameshg87-lunch06:51
*** davideagnello has joined #openstack-ironic06:52
*** yuanying_ has quit IRC06:53
*** athomas has joined #openstack-ironic06:53
*** yuanying_ has joined #openstack-ironic06:54
*** davideagnello has quit IRC06:57
*** mitchjameson has quit IRC07:13
*** yog__ has joined #openstack-ironic07:17
*** amotoki has joined #openstack-ironic07:17
*** stegranet has quit IRC07:20
*** lsmola has joined #openstack-ironic07:26
*** natorious is now known as zz_natorious07:32
*** jistr has joined #openstack-ironic07:32
*** ifarkas has joined #openstack-ironic07:35
*** Marga_ has quit IRC07:37
*** kfox1111 has quit IRC07:40
*** Marga_ has joined #openstack-ironic07:46
*** stegranet has joined #openstack-ironic07:59
*** lucasagomes has joined #openstack-ironic08:01
*** mitchjameson has joined #openstack-ironic08:01
*** dtantsur|afk is now known as dtantsur08:03
openstackgerritMerged openstack/bifrost: Add additional note dependent upon os_ironic_facts  https://review.openstack.org/19511108:03
dtantsurMorning Ironic!08:03
*** bethelwell has joined #openstack-ironic08:05
lucasagomesdtantsur, morning08:09
dtantsurlucasagomes, o/ you're earlier than me today :)08:09
lucasagomesyeah I've slept a lot on the weekend because I was a bit under the weather08:10
lucasagomesheh08:10
*** Marga_ has quit IRC08:10
*** yuanying_ has quit IRC08:12
dtantsurlucasagomes, good for you, I wasn't sleeping much this weekend :(08:12
lucasagomesdtantsur, hah I would say good for you for it too08:12
lucasagomesI mean you've enjoyed it more I bet08:13
dtantsurheh08:13
zhenguomorning dtantsur, lucasagomes08:16
dtantsuro/08:16
lucasagomeszhenguo, hi there, good morning08:17
zhenguoo/08:17
*** MattMan has joined #openstack-ironic08:18
*** Marga_ has joined #openstack-ironic08:22
*** puranamr has joined #openstack-ironic08:22
*** dguerri` is now known as dguerri08:23
*** puranamr has quit IRC08:25
*** mitchjameson has quit IRC08:26
*** Marga_ has quit IRC08:27
*** stegranet1 has joined #openstack-ironic08:30
*** stegranet has quit IRC08:31
*** stegranet1 is now known as stegranet08:31
*** derekh has joined #openstack-ironic08:33
*** Marga_ has joined #openstack-ironic08:35
*** rkhanbikov has joined #openstack-ironic08:41
*** davideagnello has joined #openstack-ironic08:41
openstackgerritMerged openstack/ironic: Log configuration options on ironic-conductor startup  https://review.openstack.org/19625608:42
*** Marga_ has quit IRC08:45
*** rkhanbikov has quit IRC08:45
*** Marga_ has joined #openstack-ironic08:45
*** Marga_ has quit IRC08:45
*** stegranet has quit IRC08:45
*** puranamr has joined #openstack-ironic08:45
*** stegranet has joined #openstack-ironic08:45
*** Marga_ has joined #openstack-ironic08:46
*** davideagnello has quit IRC08:46
*** kbyrne has quit IRC08:47
openstackgerritSergey Vilgelm proposed openstack/ironic: Switch to oslo.service  https://review.openstack.org/19500808:49
*** puranamr has quit IRC08:50
*** kbyrne has joined #openstack-ironic08:53
*** ndipanov has joined #openstack-ironic08:54
*** bethelwe_ has joined #openstack-ironic09:00
openstackgerritSergey Vilgelm proposed openstack/ironic: Update ironic.conf.sample  https://review.openstack.org/19600209:01
*** pelix has joined #openstack-ironic09:03
*** romcheg has joined #openstack-ironic09:03
*** bethelwell has quit IRC09:03
*** joe__ has joined #openstack-ironic09:06
*** joe__ has quit IRC09:10
openstackgerritNaohiro Tamura proposed openstack/ironic-specs: Enhance Power Interface for Soft Reboot and NMI  https://review.openstack.org/18670009:11
*** bethelwe_ has quit IRC09:14
*** bethelwell has joined #openstack-ironic09:17
*** bethelwell has quit IRC09:19
*** bethelwell has joined #openstack-ironic09:19
*** puranamr has joined #openstack-ironic09:26
sambettsMorning all o/09:31
*** puranamr has quit IRC09:32
dtantsursambetts, morning09:33
sambettsdtantsur: Hows it going?09:33
dtantsurpretty fine, except for inspector gate09:34
*** Marga_ has quit IRC09:34
*** bethelwe_ has joined #openstack-ironic09:44
*** athomas has quit IRC09:46
*** romcheg has quit IRC09:47
*** bethelwell has quit IRC09:48
*** naohirot has quit IRC09:50
*** athomas has joined #openstack-ironic09:52
*** openstackgerrit has quit IRC09:53
*** puranamr has joined #openstack-ironic09:53
*** openstackgerrit has joined #openstack-ironic09:54
lucasagomessambetts, morning09:54
*** bethelwe_ has quit IRC09:54
sambettslucasagomes: Morning o/09:55
*** bethelwell has joined #openstack-ironic09:55
sambettsdtantsur: Something broke?09:55
dtantsursambetts, yep, https://bugs.launchpad.net/devstack/+bug/146916009:56
openstackLaunchpad bug 1469160 in devstack "Cinder tries to start even if it wasn't requested - and fails" [Undecided,In progress] - Assigned to Dmitry Tantsur (divius)09:56
*** Marga_ has joined #openstack-ironic09:57
*** puranamr has quit IRC09:57
sambettsah, I better not pull devstack on my testbed then :-P09:58
*** uschreiber_ has joined #openstack-ironic10:03
*** uschreiber_ has quit IRC10:05
*** romcheg has joined #openstack-ironic10:11
openstackgerritMerged stackforge/proliantutils: Adding RIS support for virtual media interfaces  https://review.openstack.org/19457010:13
*** romcheg has quit IRC10:13
*** romcheg has joined #openstack-ironic10:18
openstackgerritDmitry Tantsur proposed openstack/ironic-inspector: Move Python ramdisk code out of tree  https://review.openstack.org/19661510:23
*** davideagnello has joined #openstack-ironic10:30
*** davideagnello has quit IRC10:34
*** Marga_ has quit IRC10:40
dtantsursambetts, answered on https://bugs.launchpad.net/ironic-inspector/+bug/1441117 and https://bugs.launchpad.net/bugs/139186510:40
openstackLaunchpad bug 1441117 in Ironic Inspector "Provide way to append/prepend plugins to processing_hooks w/o overriding the defaults" [Low,Confirmed]10:40
openstackLaunchpad bug 1391865 in Ironic Inspector "Client should try to fetch base_url from Keystone " [Low,Confirmed]10:40
*** subscope has joined #openstack-ironic10:42
sambettsdtantsur: awesome :D10:42
*** jistr_ has joined #openstack-ironic10:49
*** dtantsur_ has joined #openstack-ironic10:49
*** dtantsur has quit IRC10:50
*** jistr has quit IRC10:51
*** subscope has quit IRC10:52
*** jistr_ has quit IRC10:54
*** dtantsur_ has quit IRC10:54
*** dtantsur has joined #openstack-ironic10:59
*** ramineni has quit IRC11:01
*** jistr_ has joined #openstack-ironic11:06
TheJuliagood morning everyone11:07
dtantsurTheJulia, morning11:07
* rameshg87-lunch goes home 11:08
*** rameshg87-lunch has quit IRC11:08
*** coolsvap is now known as coolsvap|away11:10
*** romcheg has quit IRC11:21
*** dguerri is now known as dguerri`11:28
*** bethelwell has quit IRC11:29
*** bethelwell has joined #openstack-ironic11:29
*** keekz has joined #openstack-ironic11:31
lucasagomesTheJulia, good morning11:32
openstackgerritLucas Alvares Gomes proposed openstack/ironic: Allow vendor methods to serve static files  https://review.openstack.org/18971611:32
openstackgerritJulia Kreger proposed openstack/bifrost: WIP: Split GIT downloads and OpenStack CI logic out  https://review.openstack.org/19639811:37
openstackgerritLucas Alvares Gomes proposed openstack/ironic: Allow vendor methods to serve static files  https://review.openstack.org/18971611:42
*** saripurigopi has quit IRC11:48
*** dguerri` is now known as dguerri11:48
*** lucasagomes is now known as lucas-hungry11:48
openstackgerritDmitry Tantsur proposed openstack/ironic-inspector: Make functional test importable and stop depending on DIB code  https://review.openstack.org/19663211:50
*** trown|outttypeww is now known as trown11:53
openstackgerritMerged openstack/ironic: Add IPMI 1.5 support for the ipmitool power driver  https://review.openstack.org/19515711:58
*** stegranet1 has joined #openstack-ironic11:59
dtantsurlucas-hungry, ^^ \o/12:00
*** stegranet has quit IRC12:01
*** stegranet1 has quit IRC12:04
*** dprince has joined #openstack-ironic12:06
*** davideagnello has joined #openstack-ironic12:18
*** romcheg has joined #openstack-ironic12:23
openstackgerritJulia Kreger proposed openstack/bifrost: WIP: Split GIT downloads and OpenStack CI logic out  https://review.openstack.org/19639812:23
*** davideagnello has quit IRC12:23
*** Marga_ has joined #openstack-ironic12:25
*** dguerri is now known as dguerri`12:28
dtantsurlucas-hungry, mind reviewing https://review.openstack.org/#/c/191710/ ? we already have futurist public release12:28
*** dguerri` is now known as dguerri12:32
openstackgerritJulia Kreger proposed openstack/bifrost: WIP: Split GIT downloads and OpenStack CI logic out  https://review.openstack.org/19639812:48
*** lucas-hungry is now known as lucasagomes12:51
lucasagomesdtantsur, w00t... will do12:51
*** ukalifon1 has joined #openstack-ironic13:02
*** ukalifon3 has joined #openstack-ironic13:03
*** krtaylor has quit IRC13:04
*** smoriya has quit IRC13:05
*** ukalifon1 has quit IRC13:07
*** jlvillal has quit IRC13:08
lucasagomesdtantsur, done, pretty good. Only one thing inline13:11
lucasagomesdtantsur, I think that reusing workers_pool_size for setting the max number of parallel tasks running at the same time is wrong13:11
*** puranamr has joined #openstack-ironic13:11
dtantsurlucasagomes, I had some reason to do it, don't remember right now, but probably we should just change all of tasks/jobs/etc to use one thread pool13:12
dtantsurlucasagomes, otherwise it's just too much settings affecting concurrency13:12
lucasagomesdtantsur, yeah... we could try to simplify13:13
lucasagomesbut like setting that to max is wrong because 1 periodic task could spawn more workers from that same pool13:13
dtantsurlucasagomes, yeah, right, why do you think we need 2 thread pools for Ironic?13:13
lucasagomeswhich now is max 813:13
*** zhenguo has quit IRC13:13
lucasagomesdtantsur, I don't htink we need 213:13
lucasagomesI think we need to set some maximum number of parallel periodic tasks running at the same time13:14
lucasagomesmax != pool size13:14
dtantsurlucasagomes, it defaults to 100 btw: https://github.com/openstack/ironic/blob/master/ironic/conductor/manager.py#L12813:14
dtantsurlucasagomes, no sorry, max == pool size13:14
dtantsurIIRC at least13:14
lucasagomesdtantsur, if we set 100 periodic tasks13:14
dtantsurlucasagomes, "If the pool is currently at capacity, spawn will block until one of the running greenthreads completes its task and frees up a slot.13:15
dtantsur"13:15
lucasagomesand if they all run in parallel at the same time we will starve workers from doing other things13:15
dtantsurlucasagomes, correct, that's how pool work13:15
dtantsurthat's the whole idea of pools, now you try to overcome it by demanding one more pool for tasks :)13:15
lucasagomesI think ironic won't wait, it will raise NoFreeConductorWorker13:16
*** krtaylor has joined #openstack-ironic13:17
dtantsurlucasagomes, we also limit number of threads run by periodic tasks to 8 per each task (periodic_max_workers setting)13:17
dtantsurI guess you're confusing these 213:18
lucasagomesdtantsur, hmm /me thinking13:18
lucasagomesyeah so we have only 1 pool right? default 10013:18
dtantsuroh, this NoFreeConductorWorker is nonsense, how come we even have it? Oo13:18
dtantsurwe should clean up our async code definitely....13:18
lucasagomesright now it's used for: async tasks (e.g power state changes, deploying etc...) and also periodic tasks can use up to 8 of those workers to do something13:19
lucasagomeswith futurist we will use workers from that same pool that will be the periodic tasks13:19
*** jlvillal has joined #openstack-ironic13:19
dtantsuroh my god, this NoFreeConductorWorker even issues 503....13:19
lucasagomesdtantsur, yeah it's a bit messy that part13:20
dtantsurso we have 2 sources of random failures actually...13:20
lucasagomesso I think we may want to cap the number of parallel periodic tasks running at the same time so we don't starve workers from the conductor to do the rest13:21
dtantsurlucasagomes, why? parallel tasks are just as well important13:21
lucasagomesyes but if max == pool size the conductor will become unusable13:21
lucasagomesit will only be able to run periodc tasks13:21
lucasagomeswould be good to have a cap so people can tune it13:22
dtantsurlucasagomes, it will finish running periodic tasks, and will process other requests13:22
lucasagomesdtantsur, right, but currently it will raise that exception and request will fail13:22
*** mjturek1 has joined #openstack-ironic13:22
dtantsurlucasagomes, right, it is broken. Already is, it's not broken by my future change13:23
dtantsurthe same thing will happen if a user issues >100 requests13:23
lucasagomesyes13:23
lucasagomeswell perhaps one could just increase that number...13:23
* lucasagomes thinks13:23
dtantsurexactly13:24
lucasagomesyeah perhaps that's the way it should go for now :-/13:24
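(The GreenPool behaviour dtantsur quotes above can be seen with a few lines of eventlet; an illustrative sketch only, not Ironic code, and the pool size is arbitrary. Ironic's conductor instead checks for a free slot up front and raises NoFreeConductorWorker rather than blocking, as discussed.)

    import eventlet

    pool = eventlet.GreenPool(size=2)   # stand-in for workers_pool_size

    def task(n):
        eventlet.sleep(0.1)             # pretend to do some work
        return n

    for i in range(5):
        pool.spawn(task, i)             # blocks once the pool is at capacity
    pool.waitall()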
dtantsurlucasagomes, and now I realize that we need ironicclient retry for error 503 as well...13:26
lucasagomes503!?13:26
lucasagomesouch13:26
dtantsurlucasagomes, NoFreeConductorWorker is error 50313:26
dtantsurmore vague errors for gods of vague errors \o/13:26
lucasagomesyeah13:26
lucasagomesnot great13:28
* dtantsur writes a patch13:31
*** puranamr has quit IRC13:31
*** rloo has joined #openstack-ironic13:31
*** zz_jgrimm is now known as jgrimm13:33
*** absubram has quit IRC13:36
*** absubram has joined #openstack-ironic13:36
openstackgerritDmitry Tantsur proposed openstack/python-ironicclient: Also retry on HTTP 503 (service unavailable)  https://review.openstack.org/19666413:37
dtantsurlucasagomes, done ^^13:37
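(A hedged sketch of the retry idea in that patch, not the actual python-ironicclient change: retry a request a bounded number of times when the server answers 503, which is the code NoFreeConductorWorker maps to.)

    import time
    import requests

    def request_with_retry(method, url, attempts=5, delay=2, **kwargs):
        # Retry only on 503 (service unavailable); other codes are returned as-is.
        for _ in range(attempts):
            resp = requests.request(method, url, **kwargs)
            if resp.status_code != 503:
                break
            time.sleep(delay)
        return resp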
*** mtanino has joined #openstack-ironic13:43
*** Marga_ has quit IRC13:51
*** igordcard_ has quit IRC13:55
*** boris-42 has joined #openstack-ironic13:59
*** amotoki has quit IRC13:59
*** subscope has joined #openstack-ironic13:59
*** saripurigopi has joined #openstack-ironic14:00
*** subscope has quit IRC14:04
*** davideagnello has joined #openstack-ironic14:07
openstackgerritJulia Kreger proposed openstack/bifrost: Disambiguate the roles of ci_testing, ci_testing_zuul etc  https://review.openstack.org/19575914:10
openstackgerritJulia Kreger proposed openstack/bifrost: Split GIT downloads and OpenStack CI logic out  https://review.openstack.org/19639814:10
*** absubram has quit IRC14:11
TheJuliahmm... rebased again14:12
*** davideagnello has quit IRC14:12
*** yuikotakada has joined #openstack-ironic14:12
openstackgerritLucas Alvares Gomes proposed openstack/ironic: Allow vendor methods to serve static files  https://review.openstack.org/18971614:12
*** absubram has joined #openstack-ironic14:15
*** absubram has quit IRC14:15
*** r-daneel has joined #openstack-ironic14:19
*** ukalifon3 has quit IRC14:23
*** subscope has joined #openstack-ironic14:24
*** ukalifon has joined #openstack-ironic14:29
NobodyCamgood morning Ironicers14:33
dtantsurNobodyCam, morning14:34
NobodyCammorning dtantsur14:34
rlooHappy Monday ironickers, NobodyCam, dtantsur, TheJulia, lucasagomes :)14:36
lucasagomesrloo, NobodyCam TheJulia good ugt morning!14:36
dtantsurrloo, o/14:36
TheJuliagood morning rloo14:37
rloohey dtantsur, you ok if we approve this? https://review.openstack.org/#/c/173674/ ?14:38
dtantsurrloo, yep14:39
NobodyCammorning rloo lucasagomes14:39
rloodtantsur: done!14:40
jrollgoooood morning NobodyCam TheJulia dtantsur lucasagomes rloo and anyone else :)14:40
dtantsurjroll, morning!14:40
rloomorning jroll!14:40
lucasagomesjroll, yo! good morning14:41
NobodyCammornign jroll :)14:41
NobodyCam:-p14:41
*** yog__ has quit IRC14:42
NobodyCammeeting agenda looks light today. call for agenda items?14:54
*** Marga_ has joined #openstack-ironic14:55
*** rameshg87 has joined #openstack-ironic14:56
*** mitchjameson has joined #openstack-ironic14:56
*** pradipta has joined #openstack-ironic14:58
rameshg87rloo: hello14:58
rloohi rameshg87!14:58
rameshg87rloo: regarding https://review.openstack.org/#/c/194786/1/ironic/drivers/modules/iscsi_deploy.py14:59
rameshg87rloo: what do we do next ? looks good to me as it is14:59
rameshg87rloo: but my question is whether we can proclaim driver_internal_info['instance'] as a place to store driver internal data related to instance15:00
rloorameshg87: oh. what do you think we should do?15:00
rloorameshg87: personally, i'm afraid to use 'instance' right now cuz it seems too generic.15:00
rameshg87rloo: lucasagomes had a comment we should change to driver_internal_info['disk_layout']15:00
rameshg87lucasagomes: ^^15:00
*** ijw has joined #openstack-ironic15:00
rloorameshg87: yeah, i prefer 'disk_layout' but the original code was approved with 'instance'. that's why i left it :)15:01
lucasagomesrameshg87, rloo hi yeah. I just think "instance" is too generic for that15:01
*** cdearborn has joined #openstack-ironic15:01
lucasagomesit was a suggestion tho, since instance was used before15:01
rameshg87rloo: lucasagomes: I was coming to that15:01
rameshg87couldn't we just provide driver_internal_info['instance'] to the drivers as such (just like we give instance_info)15:01
rameshg87to store internal attributes related to instance15:02
rameshg87we could clear that off at the end of deploy15:02
rloorameshg87: no. or yes. i'm afraid to say explicitly that we do that, cuz then it is sort of an 'api' contract with drivers.15:02
rloorameshg87: and then it means we should think about what really goes in 'instance', and is it really then a new instance_internal_info.15:02
rloorameshg87: which might be a good thing. I don't really know. yet.15:03
dtantsuryuikotakada, g'evening/night :) could you remove -1 from https://review.openstack.org/#/c/196050/ if you're ok with it now?15:03
lucasagomesyeah sounds like an instance_internal_info indeed15:03
* lucasagomes is in a call 1 sec15:03
rameshg87ah may be a new field then ..15:04
yuikotakadadtantsur, hi, g'evening :) OK, I've done15:04
rameshg87rloo: this is not the first time we have come across such a need for drivers15:04
*** absubram has joined #openstack-ironic15:04
rloorameshg87: well, that's why we added driver_internal_info :)15:04
dtantsuryuikotakada, thanks! will try to grab someone to review it today..15:04
rloorameshg87: so it seems like we have two 'sets' of information: 1. the users provide; 2. what we actually use15:04
rameshg87but the problem it is related to node rather than an instance15:04
rameshg87yeah #2 is something we never want the user to even know15:05
rloorameshg87: sorry, what is 'related to node rather than an instance'?15:05
rloorameshg87: I mean WHAT?15:05
rameshg87I meant some information is related to instance which should be cleared at the end of the life of the instance15:06
rameshg87so rather fitting it into driver_internal_info requires manual care15:06
openstackgerritYuiko Takada proposed openstack/ironic-inspector: Migrate to oslo_db  https://review.openstack.org/18190515:06
rameshg87so if we could get an agreement on instance_internal_info or a driver_internal_info['instance'] or whatever, which is a contract15:07
rloorameshg87: well, it is all 'manual'? i haven't analyzed what info we have in driver_internal_info. is there a pattern?15:07
rameshg87well there isn't really :)15:08
rloorameshg87: aren't you going to have to save some RAID info too?15:08
rameshg87rloo: but that got moved to a new field15:08
rameshg87node.raid_config and node.target_raid_config15:08
yuikotakadadtantsur, yeah, please. I hope that will be merged soon, too15:09
rloorameshg87: ohhh, I hate this. You're making me think, and it is only Monday morning!15:10
rameshg87rloo: :) I don't want to drag you again into that conversation15:10
rameshg87rloo: so my proposal is just this15:10
rameshg87rloo: we can get along with https://review.openstack.org/#/c/194786/1/ironic/drivers/modules/iscsi_deploy.py in it's current way (it's only addressing your comments on the review and not changing anything)15:11
rameshg87rloo: and may be come up with a proposal for something like a contract (may be node.instance_internal_info) to drivers for storing this information15:11
rloorameshg87: so you think we need to distinguish 'driver_internal_info' vs 'instance_internal_info'?15:12
rloorameshg87: what if we had just called it 'internal_info'?15:12
rameshg87rloo: but some *internal* information needs to be cleared at the end of life of instance15:13
rameshg87like this one we just had15:13
rameshg87it will be good if drivers don't need to explicitly take care in removing it and it is automatically done at the end of tear down15:14
rloorameshg87: I almost think that if we are going to save "some" instance info, why don't we just save all the 'deployinfo' or whatever we call it15:14
rameshg87rloo: where do mean to save it ?15:14
rloorameshg87: my only concern with generalizing it (which makes sense) is that odd case where some driver wants to keep the internal instance info around beyond tear down.15:14
*** mitchjameson has quit IRC15:15
rameshg87rloo: oh, then they should goto driver_internal_info15:15
rameshg87rloo: we already have that in one of the proposed patches15:15
rameshg87ipmi_persistent_boot_device15:15
rameshg87it should persist beyond tear down15:15
rloorameshg87: I mean, if we're going to save some of the instance info (like root_gb, swap_mb, ephemeral_gb), why not just save all the deploy (in this case, the variable is called i_info) instead of picking out bits to save15:16
rameshg87rloo: but this is just one implementation of deploy interface15:17
rameshg87rloo: there are deploy drivers proposed for diskless nodes - it might never have above fields15:17
rameshg87connecting directly to a iscsi lun15:17
rameshg87rloo: so I think it will be hard to generalize the fields even15:18
rloorameshg87: just looking, we have 'is_whole_disk_image', 'instance', 'clean_steps', 'agent_last_heartbeat', 'agent_url', 'root_uuid_or_disk_id'15:18
rameshg87yes15:18
rameshg87except root_uuid_or_disk_id, everything above should persist beyond tear down15:19
rloorameshg87: I don't understand your point about 'there are deploy drivers ... might never have above fields'. what does that have to do with it? they have instance-info that you're arguing they might want to save?15:19
rameshg87instance as well :) (though name is misleading )15:19
rameshg87rloo: they might have. my point was information that different deploy drivers might want to save might be different.15:21
rloorameshg87: I thought you were arguing for generalizing/using 'instance'. ?? You aren't? I'm confused.15:21
rameshg87rloo: no. I am not actually :)15:21
rloorameshg87: and if 'instance' name is misleading, why are we using it?15:21
rameshg87rloo: I was just suggesting we need one common place for a driver to store all internal info related to an instance (that shouldn't persist beyond tear down)15:22
rloorameshg87: and yeah, i agree, different drivers might want to save (or not save) different info15:22
rloorameshg87: what does that have to do with (I think) your argument with having some 'instance' field that gets cleared after teardown?15:22
rameshg87rloo: ah no. I am sorry if I couldn't convey it right15:23
*** e0ne has joined #openstack-ironic15:23
openstackgerritMerged openstack/python-ironicclient: Cache negotiated api microversion for server  https://review.openstack.org/17367415:24
rameshg87rloo: I just meant there are some information that needs to be cleared on tear down (which we saw things like root_uuid_or_disk_id, root_gb, swap_gb, ...) AND some information that needs to persist even after tear down (agent_last_heartbeat, agent_url, ipmi_persistent_boot_device (proposed), etc)15:24
rloorameshg87: just to step back. wrt that patch, I submitted it cuz the code is wrong. I most likely would have objected to using 'instance' in the first place, but it got approved and I don't have strong-enough opinion to change that now. Unless others want to change it too.15:25
*** e0ne is now known as e0ne_15:25
rameshg87rloo: I agree.15:25
rloorameshg87: yes, I agree. and that info, when it gets saved and when it gets removed, is up to the driver.15:25
yuikotakadagood night, ironic :)15:25
*** yuikotakada has quit IRC15:25
*** e0ne_ is now known as e0ne15:25
rloorameshg87: so if you want it to still be 'instance', you can voice your opinion/discuss with lucasagomes. I'll change it or not change it based on the reviewers' wishes :)15:26
rloorameshg87: lucasagomes was making a suggestion about that15:27
*** david-ly_ is now known as david-lyle15:27
lucasagomesrloo, rameshg87 I haven't read the scrollback. It was just a suggestion because I think it was more accurate15:27
* lucasagomes reads the scrollback15:27
rameshg87lucasagomes: yeah, I agree. but I was just talking to rloo that if we want we could generalize it to a field (something like node.instance_internal_info) which is cleared automatically by the conductor at the end of deploy.  I am voting up for it.15:29
rameshg87lucasagomes: but if we feel we still don't have a reason for doing it now, I am +1 for changing it to driver_internal_info['disk_layout']15:30
*** davideagnello has joined #openstack-ironic15:31
rameshg87lucasagomes: which you and rloo anyway agree upon15:31
* rameshg87 feels I managed to confuse both of them15:31
lucasagomesrameshg87, right, yeah I think it would be beneficial to have a field that gets cleared up at the end15:31
lucasagomeswe already have plenty of keys that could be cleaned up15:31
lucasagomesroot_uuid_or_disk_id, is_whole_disk_image, that "instance" now15:31
rameshg87root_uuid_or_disk_id is one15:32
rameshg87yeah15:32
lucasagomess/now// it was already there15:32
lucasagomesrameshg87, but it seems that if we opt of having such field, it should be done outside the patch that rloo proposed15:33
rameshg87lucasagomes: yes15:33
lucasagomeswhich is just fixing something else15:33
rameshg87yeah15:33
lucasagomesack :-)15:33
lucasagomesrloo, I would be +1 to change to disk_layout, but if you prefer to keep instance to minimize the changes I'm good too15:33
rloolucasagomes: I actually prefer 'disk_layout' but others approved 'instance' so I was hesitant to change.15:34
rloorameshg87: are you good with 'disk_layout'?15:34
rameshg87rloo: yes I am  if we decide we never gonna have something like node.instance_internal_info15:35
rameshg87rloo: but until then I am of the opinion to keep it as it is. wdyt ?15:35
rloorameshg87: we cannot decide that now. 'never' isn't guaranteed15:35
rloolucasagomes: rameshg87 likes 'instance' so I'm going to leave it unless others disagree too.15:36
lucasagomes++15:36
lucasagomesack15:36
openstackgerritMerged openstack/ironic: Switch to oslo.service  https://review.openstack.org/19500815:36
rameshg87rloo: yeah, so can we just keep driver_internal_info['instance'] for now (same as your patch), move along, and decide on the rest later15:36
rloobut yeah, lucasagomes, we're right :)15:36
rameshg87lucasagomes: rloo: +1, you were right15:36
rloorameshg87: I am held hostage by you reviewers :)15:37
* rameshg87 relaxes it's not only me ;-)15:37
lucasagomes(-:15:37
rloorameshg87: ha ha15:37
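(For readers following along, a hedged sketch of the pattern being debated; the key follows the review linked above but the helper functions are illustrative, not the merged code. A deploy driver stashes instance-scoped data under driver_internal_info['instance'] and clears it on tear down.)

    def save_disk_layout(node, i_info):
        # Remember the disk layout used for this deployment.
        info = node.driver_internal_info
        info['instance'] = {'root_gb': i_info['root_gb'],
                            'swap_mb': i_info['swap_mb'],
                            'ephemeral_gb': i_info['ephemeral_gb']}
        node.driver_internal_info = info
        node.save()

    def clear_instance_internal_info(node):
        # Called on tear down so nothing instance-specific outlives the instance.
        info = node.driver_internal_info
        info.pop('instance', None)
        node.driver_internal_info = info
        node.save()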
*** achanda has joined #openstack-ironic15:37
* rameshg87 brb15:38
*** rameshg87 is now known as rameshg870brb15:38
*** rameshg870brb is now known as rameshg8715:41
*** davideagnello has quit IRC15:42
*** mitchjameson has joined #openstack-ironic15:42
*** zz_natorious is now known as natorious15:44
*** Marga_ has quit IRC15:44
*** Sukhdev has joined #openstack-ironic15:46
*** e0ne has quit IRC15:46
*** e0ne has joined #openstack-ironic15:48
*** radek__ has quit IRC15:48
NobodyCamjust checking on sub-team status...any updates for the white board?15:48
dtantsuroh, bug stats15:52
*** e0ne is now known as e0ne_15:53
*** mitchjameson has quit IRC15:53
*** davideagnello has joined #openstack-ironic15:53
devanandag'morning, all15:55
jrollthanks for reminder NobodyCam15:55
*** e0ne_ is now known as e0ne15:55
jrollohai devananda :)15:55
rloodtantsur: do you think it is worth discussing the is_hostname_safe check at the meeting? https://review.openstack.org/#/c/193587/ ?15:55
dtantsurdevananda, morning15:55
rloomorning devananda15:56
dtantsurrloo, hmm maybe15:56
*** trown is now known as trown|lunch15:56
rloodtantsur: I feel bad giving Chris the runaround, and it seems like we should just decide what to do15:56
NobodyCammornign devananda15:56
jrollrloo: dtantsur +115:57
rloojroll: ok, will add it15:57
jrollI thought we were trying to relax the names but am not sure15:57
*** romcheg has quit IRC15:57
rloojroll: i thought so too but devananda doesn't seem to want it15:57
*** derekh has quit IRC15:58
jrollrloo: where is that discussion, here in irc or?15:58
*** davideagnello has quit IRC15:58
jrolldisclaimer: I'm still catching up from vacation :P15:58
rloojroll: yeah, i think it was in irc. devananda had some good reasons.15:58
rloojroll: will see if i can dig it up and add to the meeting agenda15:59
devanandarloo: doesn't want what?15:59
* jroll will look too15:59
jrolldevananda: relaxed node.name validation15:59
rloodevananda: freeform names for nodes. the unicode stuff15:59
*** jistr_ has quit IRC15:59
jrolloh yeah, unicode sounds like a horrible idea15:59
rloodevananda: cuz of requests15:59
devanandaright16:00
jrollbut other than that is there opposition to relaxing it?16:00
devanandathe node.name field is usable as a resource identifier, eg. in the URL16:00
jroll(or any push to relax it at all)16:00
devanandawhich makes certain characters unacceptable16:00
lucasagomesdevananda, morning16:01
jrolldevananda: https://www.panic.com/blog/the-worlds-first-emoji-domain/16:01
jroll:P16:01
*** mitchjameson has joined #openstack-ironic16:02
*** puranamr has joined #openstack-ironic16:02
devanandajroll: :)16:03
lucasagomesjroll, lol16:03
rloojroll: you're a source of ... information :D16:04
jlvillalGood morning Ironic16:04
lucasagomesjlvillal, morning16:04
jrollrloo: hah16:04
NobodyCammorning jlvillal16:04
jlvillal:)16:04
NobodyCamjlvillal: how many days now?16:05
devanandajroll: I didn't say "makes unicode unacceptable". but this: https://tools.ietf.org/html/rfc3986#page-1216:05
jlvillalNobodyCam, 2 more work days :)16:05
jlvillalWhich I guess is just two more days.16:06
NobodyCam:)16:06
jrolldevananda: yeah, I'm fine with not relaxing things, I just thought there was some push for that and this was reverse of that16:06
*** romcheg has joined #openstack-ironic16:07
devanandajroll: afaik, this is just a result of rally guys running into inconsistency across projects and wanting to clean things up16:07
lucasagomesdevananda, do we have already some sort of agenda for the midcycle?16:07
jrolldevananda: btw, ironic/neutron meeting in meeting-4 in case you forgot :)16:07
devanandalucasagomes: nope16:07
*** Marga_ has joined #openstack-ironic16:07
lucasagomesor it will be decided there?16:07
devanandajroll: blarg. thanks. mondays at 9am is terrible16:07
jrolldevananda: yeah, I guess I just don't care much about this being 100% hostname safe or whatever... it's on the agenda, let's talk at 10 about it :)16:07
lucasagomesjroll, ++ to talk about it at the meeting16:08
devananda++16:08
*** rwsu has joined #openstack-ironic16:08
*** puranamr has quit IRC16:12
*** radek__ has joined #openstack-ironic16:12
*** meghal has joined #openstack-ironic16:12
*** puranamr has joined #openstack-ironic16:13
*** lazy_prince has joined #openstack-ironic16:13
rameshg87dtantsur: hi16:17
dtantsurrameshg87, o/16:17
rameshg87dtantsur: I didn't get where it should str(e) here in https://review.openstack.org/#/c/196003/1/ironic/common/raid.py16:17
dtantsurrameshg87, eek, sorry, I meant "instead of e.message"16:18
*** meghal has left #openstack-ironic16:18
rameshg87e.message is already a readable string as per http://python-jsonschema.readthedocs.org/en/latest/errors/16:18
rameshg87dtantsur: http://python-jsonschema.readthedocs.org/en/latest/errors/#jsonschema.exceptions.ValidationError.message16:18
dtantsurrameshg87, well, for standard python exceptions e.message is deprecated. you can probably ignore it for 3rdparty libraries, but "str(exc)" is kind of a pattern already :)16:19
rameshg87dtantsur: oh okay. but since it's documented explicitly about e.message, I think it should be okay to go ahead with that. wdyt ?16:20
dtantsuryeah, fine with me16:20
rameshg87okay, thanks16:20
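(A small illustration of the point above, with a made-up example schema rather than the one from the patch: jsonschema's ValidationError documents a human-readable .message, and str(exc) works as well but includes the schema context.)

    import jsonschema

    schema = {'type': 'object',
              'properties': {'size_gb': {'type': 'integer'}},
              'required': ['size_gb']}

    try:
        jsonschema.validate({'size_gb': 'ten'}, schema)
    except jsonschema.ValidationError as exc:
        print(exc.message)   # "'ten' is not of type 'integer'"
        print(str(exc))      # longer message including the failing schema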
*** puranamr_ has joined #openstack-ironic16:21
openstackgerritRamakrishnan G proposed openstack/ironic: Add node fields for raid configuration  https://review.openstack.org/15523016:21
openstackgerritJohn L. Villalovos proposed openstack/ironic: Refactor check_allow_management_verbs  https://review.openstack.org/19625116:21
openstackgerritJim Rollenhagen proposed openstack/ironic-specs: Change release model to independent releases  https://review.openstack.org/18517116:21
*** beekneemech has joined #openstack-ironic16:21
jroll^^ can we land this yet? :)16:21
rloojroll: I was tempted to add that to the meeting agenda, but I've already added two items :)16:22
jrollrloo: then I'm going to add it :)16:23
rloojroll: as long as they are *after* mine!16:23
*** puranamr has quit IRC16:23
jroll:D16:23
rloojroll: just kidding, I don't really care.16:23
*** bnemec has quit IRC16:23
*** puranamr_ has quit IRC16:26
*** puranamr has joined #openstack-ironic16:26
*** puranamr_ has joined #openstack-ironic16:27
*** puranamr has quit IRC16:27
*** puranamr has joined #openstack-ironic16:28
*** puranamr_ has quit IRC16:32
*** radek_ has joined #openstack-ironic16:33
*** radek__ has quit IRC16:34
*** krtaylor has quit IRC16:35
*** ifarkas has quit IRC16:36
openstackgerritRamakrishnan G proposed openstack/ironic: Add RAIDInterface for RAID configuration  https://review.openstack.org/19600316:40
*** dguerri is now known as dguerri`16:40
openstackgerritRamakrishnan G proposed openstack/ironic: Add RPCAPIs for RAID configuration  https://review.openstack.org/19600616:40
openstackgerritRamakrishnan G proposed openstack/ironic: Add APIs for RAID configuration  https://review.openstack.org/19600716:40
*** maurosr has quit IRC16:41
*** radek_ has quit IRC16:41
* lucasagomes brb will be back for the meeting16:43
*** jgrimm has quit IRC16:43
*** athomas has quit IRC16:46
*** radek_ has joined #openstack-ironic16:47
*** maurosr has joined #openstack-ironic16:49
*** zz_jgrimm has joined #openstack-ironic16:49
*** e0ne has quit IRC16:59
devanandareminder: meeting starting in #openstack-meeting-317:01
*** mitchjameson has quit IRC17:03
*** krtaylor has joined #openstack-ironic17:04
*** davideagnello has joined #openstack-ironic17:07
*** bethelwell has quit IRC17:07
*** trown|lunch is now known as trown17:07
*** lsmola has quit IRC17:09
openstackgerritMerged openstack/ironic: Save disk layout information when deploying  https://review.openstack.org/19478617:10
*** saripurigopi has quit IRC17:11
*** achanda has quit IRC17:17
*** thiagop has joined #openstack-ironic17:20
*** afaranha has joined #openstack-ironic17:21
*** dontalton has joined #openstack-ironic17:32
*** Marga_ has quit IRC17:32
*** Marga_ has joined #openstack-ironic17:32
*** Sukhdev has quit IRC17:46
*** achanda has joined #openstack-ironic17:46
*** lazy_prince has quit IRC17:47
*** pelix has quit IRC17:51
*** abrito has joined #openstack-ironic17:54
jlvillalkrtaylor, Are you interested in working on functional testing?  I'm delighted to get all the help I can! :)17:55
* jlvillal thinks lucasagomes replied in wrong channel ;)17:56
krtaylorjlvillal, I would be if I could stuff it in, my day job is keeping me crazy busy right now - but, my team does have test case writing experience17:57
* jlvillal thinks jlvillal doesn't know what he is talking about17:57
jlvillalkrtaylor, Great! :)17:57
krtaylorjlvillal, I am hoping I can free up in a few weeks and we can start a CI discussion17:57
krtaylormaurosr, mjturek1 ^^^^  re: test cases17:58
jlvillalkrtaylor, Okay.  If you want to start on things before I get back, feel free.  So far only openstack/python-ironicclient is setup for functional testing.17:58
lucasagomesme?17:58
jlvillallucasagomes, I was confused!17:58
lucasagomesjlvillal,  :-D ack no worries17:58
jlvillalJoshNang, I wouldn't mind your feedback on: https://review.openstack.org/196251  Sort of related to your 'fail' patch18:00
JoshNangjlvillal: ah yeah i was looking at that earlier, got distracted18:00
* rameshg87 runs to bed to sleep 18:00
rameshg87good night folks18:00
NobodyCamnight rameshg8718:00
jlvillalJoshNang, Thanks.18:00
devanandarameshg87: good night!18:00
NobodyCambrb18:01
jlvillalrameshg87, Ciao18:01
*** rameshg87 has quit IRC18:01
* dtantsur is going too18:01
dtantsurg'night!18:01
lucasagomesdtantsur, night18:01
jlvillaldtantsur, Paka18:01
dtantsur:D18:01
* jlvillal is not sure he spelled 'paka' correctly18:01
lucasagomesJoshNang, devananda right yeah... I'm ++ for abort too18:01
lucasagomesI think it describes it better18:01
dtantsurjlvillal, пока!18:02
lucasagomesJoshNang, devananda re breaking the lock. You think that we should forcibly break it if users ask for "abort" ?18:02
JoshNangso does deployfail automatically go to deleting? or does nova move it that way?18:02
*** dtantsur is now known as dtantsur|afk18:02
devanandaJoshNang: aiui, nova exposes the failed instance to the user. the user may choose to 'rebuild' or 'delete' at that point18:03
lucasagomesJoshNang, I will give it a go tomorrow to confirm18:03
devanandalucasagomes: i dont think this has to do with breaking the lock18:03
devanandalucasagomes: the state machine currently has some unrecoverable states that dont hold any locks18:03
devanandalucasagomes: oh, and that never time out either18:04
lucasagomesdevananda, if it's DEPLOYING it has a lock, so we can't change the state to fail18:04
lucasagomeswithout breaking it18:04
JoshNanglucasagomes: sweet. fwiw we don't run this downstream yet, so i'm not sure how well it'll work yet. i wanna test in devstack, but it was a quick patch before i took off for the day fri18:04
lucasagomesJoshNang, yup, yeah I got that impression18:04
devanandalucasagomes, JoshNang: I think the issue is cleaning doesn't have cleanwait, tbh18:04
lucasagomesbut it's great18:04
lucasagomesthe goal is good18:04
devananda*the issue in my case. there could certainly be other issues18:05
jrollJoshNang: devananda: when nova sees a build failure (e.g. DEPLOYFAIL) it calls destroy() to clean it up or something like that18:05
JoshNangjroll: k that's what i thought. counts a reschedule and tries another node, right?18:05
jrollor when it reschedules a failure, I should say18:05
jrollyep18:05
jrolland last one gets "no valid host"18:05
lucasagomesdevananda, yeah, that's because cleaning is only done by the agent right?18:05
lucasagomeswhich coordinates things via heartbeat18:06
lucasagomesthe DEPLOYWAIT fits more into the DIB case than agent to be honest18:06
devanandalucasagomes: deploywait is needed by the agent just as much18:06
JoshNang++ on cleanwait18:07
devanandalucasagomes: for the period of time before the agent boots up and "phones home"18:07
lucasagomesright18:07
lucasagomeswhich is basically the heartbeat18:07
devanandathings are breaking for my test env because the agent isn't calling back to ironic, either for deploy or clean18:07
lucasagomesbut it's not like an state visible for the user, I'm ok having DEPLOYWAIT CLEANWAIT18:07
devanandanode gets into deploywait, but I can recover from this easily with 'delete'18:07
lucasagomestho it's nothing more than a signal (which heartbeat does)18:08
devanandathere's no way out of "cleaning but the agent hasn't called home yet"18:08
*** pradipta has quit IRC18:08
devanandaJoshNang: rather than implementing "fail" or "abort" verb -- what if you just implemented CLEANWAIT state?18:09
devanandawould that also resolve your issue(s)?18:09
JoshNangi don't think so18:10
lucasagomesit doesn't solve all problems of having a node stuck in *ING state18:10
lucasagomesit solves some problems18:10
JoshNangthe reason i did more than just cleaning was yeah, that ^18:10
lucasagomesif something happens with the conductor while node is *ING18:10
lucasagomesit is stuck18:10
JoshNanglike, i've had to go into the db to fix things more often than i should, and i'm hoping to avoid it & script it18:11
devanandaah, right18:12
JoshNangbut i think adding cleanwait is the right thing anyway. it's way better than than the hack that is in there now18:12
devanandaso an API that exposes the list of active conductor hosts, and then an API to 'abort' transitions on nodes that are locked by non-active conductors18:13
lucasagomesdevananda, JoshNang you can see any major problem in allowing calling Node.release() and moving it to "fail" if user explicitly ask for "abort" ?18:13
devanandaso an operator can get a view of things that went sideways after, say, a conductor host had crashed and they started a new conductor with a different host name18:14
devanandalucasagomes: any time there is a long-running operation (whether in or out of band) that does destructive things, like firmware updates, yes18:14
lucasagomesdevananda, right, like for zapping it wouldn't be allowed18:14
devanandasince cleaning and zapping are intended to do just that18:14
devanandaright18:15
devanandawell, same for cleaning18:15
lucasagomesyeah and cleaning18:15
jrolldevananda: cleaning getting stuck can also happen without a conductor dying18:15
jrolljust for the record18:15
lucasagomesI wonder if the zapping/cleaning tasks could somehow indicate "this task is not abortable"18:15
lucasagomesor something like that for certain tasks18:15
devanandajroll: I'm not surprised -- but could you describe how?18:15
jrolldevananda: tftp fails18:16
devanandajroll: tftp fails to boot the agent?18:16
jrollyes18:16
JoshNangalso, if a node suddenly died, it would get stuck, since it would never heartbeat again18:16
devanandajroll: so that's the case I already described, and would be solved if that was represented as CLEANWAIT18:16
JoshNangdevananda: if you're rebooting during cleaning, you'd be going cleaning->cleanwait->cleaning over and over. not sure if that's a problem or not18:17
devanandaJoshNang: and releasing the taskmanager lock between those18:17
lucasagomesJoshNang, I think that's fine/good18:17
devanandaJoshNang: that would resolve my present issue18:17
jrolldevananda: ok, fair. some firmware update occasionally causes networking to go away until a reboot happens :)18:17
*** Marga_ has quit IRC18:17
lucasagomeswith local boot we go deploying-deploywait-deploying-deploywait-deploying...18:17
JoshNangah18:17
JoshNangalright, wfm then18:17
lucasagomesthe combination of those 2 *WAIT and aborting certian *ING states, I think would cover it all18:18
lucasagomescertain*18:18
devanandaI'm actually not seeing the *ING states needing "abort" at all right now18:19
devananda*ING should indicate 'there is an active conductor doing something'18:19
lucasagomesdevananda, conductor dies?18:19
devananda- waiting for node to reboot / heartbeat? put it in *WAIT18:19
jrolldevananda: you don't think disk erase, firmware updates, etc, can get stuck?18:19
devananda- conductor dies and restarts? it clears its own locks during __init__18:19
lucasagomesif conductor has the same hostname18:19
devananda- conductor dies and does not restart? those locks should hit a timeout and/or be able to be removed manually (via the API)18:20
lucasagomeswe can add a timeout too, but that involves breaking the lock18:20
devanandaok, so that last case is the only case where "abort" is needed18:20
devanandaand it is a manual process taking the place of a smart[-enough] timeout system18:20
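(Purely as an illustration of the CLEANWAIT idea above — a toy transition table, not Ironic's actual fsm module, with illustrative state and event names: cleaning bounces into a wait state while the ramdisk boots, so the conductor holds no lock there and an operator-driven abort has a safe place to land.)

    # (state, event) -> next state; illustrative names only.
    TRANSITIONS = {
        ('cleaning',   'wait'):   'clean wait',
        ('clean wait', 'resume'): 'cleaning',
        ('clean wait', 'abort'):  'clean failed',
        ('cleaning',   'fail'):   'clean failed',
        ('cleaning',   'done'):   'available',
    }

    def advance(state, event):
        # Unknown events leave the node where it is.
        return TRANSITIONS.get((state, event), state)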
*** gridinv_ has joined #openstack-ironic18:21
devanandaJoshNang: clearly this is going to require a spec, heh18:21
devanandaJoshNang: also, have you seen my reworking of the state machine spec?18:21
JoshNangdevananda: yeah, i figured that was going to happen :P18:21
JoshNangi saw a patch, haven't looked at it yet though18:22
lucasagomesJoshNang, I'm having a couple of problems with locks in the internal setup18:22
lucasagomesJoshNang, I'm more than glad to contribute18:22
lucasagomeswith code and spec too18:22
JoshNanglucasagomes: sweet18:22
lucasagomess/locks/nodes stuck18:23
*** Marga_ has joined #openstack-ironic18:23
lucasagomesdevananda, JoshNang another thing re abort18:24
*** Sukhdev has joined #openstack-ironic18:24
lucasagomesis that nova allows you to do a delete on a "spawning" state instance18:24
lucasagomeswhich is similar to our abort here18:24
*** ukalifon has quit IRC18:24
lucasagomesso this is aligned with the interface we have in nova I believe18:24
JoshNangthat's true18:25
lucasagomeshttps://review.openstack.org/#/c/182992/18:26
devanandafunctionally speaking, interrupting a deploy should be non-destructive. practically speaking, our driver API doesn't support it18:26
lucasagomesthis allows it for DEPLOYWAIT18:26
devanandainterrupting during DEPLOYWAIT is possible today18:26
lucasagomesyeah it wasn't via nova18:26
lucasagomesnow it is18:26
devanandaahhh cool!18:26
lucasagomesdue that patch ^18:26
lucasagomesbut what I mean is, interface-wise we should somehow allow to do it in DEPLOYING as well, since it's allowed for other drivers in nova18:27
lucasagomesbut ironic18:27
devanandalucasagomes: question -- this exception will cause self.spawn() to call self.destroy from its exception handler18:30
devanandathat's great -- but18:30
*** openstackgerrit has quit IRC18:30
*** Sukhdev has quit IRC18:30
devanandalook at destroy, at L82718:30
devanandawhat if the node is in state DEPLOYING ?18:30
lucasagomesdevananda, no it doesn't, it just breaks the loop18:30
* lucasagomes looks18:30
*** ijw has quit IRC18:30
*** openstackgerrit has joined #openstack-ironic18:30
devanandalucasagomes: L744 starts a timer on _wait_for_active18:31
devanandaif that raises an exception, it calls destroy on L75118:31
devanandain destroy, it checks the node state18:31
*** romcheg has quit IRC18:31
devanandaif the node state is DEPLOYING, it *does not call* self._unprovision18:31
devanandaand instead, just calls _cleanup_deploy18:32
devanandawhich is going to fail, because a PATCH request to remove instance_info will be refused while the node is in DEPLOYING state18:32
lucasagomesright yeah18:33
lucasagomeseven if we allow yeah, _unprovision() for DEPLOYING it will fail because DEPLOYING doesn't allow "deleted"18:34
devanandaright18:34
lucasagomeswith the "abort" idea it would work18:34
devanandalucasagomes: so, while your change works for nodes in DEPLOYWAIT state, I think this is going to create orphaned nodes, if it happens to get called on a node that is DEPLOYING18:35
*** achanda has quit IRC18:35
lucasagomesdevananda, I will give it a go, I think I've tested it18:35
*** achanda has joined #openstack-ironic18:36
JoshNangi've gotta run, bbiab18:39
NobodyCam:)18:39
JoshNangdevananda: i'll review the state machine changes later today18:39
*** Marga_ has quit IRC18:40
*** Marga_ has joined #openstack-ironic18:40
*** achanda has quit IRC18:41
lucasagomesdevananda, JoshNang yeah can we continue this tomorrow? It's a bit late here18:41
lucasagomesdevananda, I will take a look at the spec machine review tomorrow morning too18:41
* lucasagomes marks him on the spec18:42
lucasagomeswhere is it?18:42
devanandathx18:42
devanandahttps://review.openstack.org/#/c/196320/1/specs/devref/new-ironic-state-machine.rst,cm18:42
lucasagomesah was looking for state machine in the title18:43
lucasagomesdevananda, thanks18:43
lucasagomeslol "Topic: versions-are-hard"18:43
*** romcheg has joined #openstack-ironic18:44
lucasagomesok have a good night all!18:44
*** krtaylor has quit IRC18:44
lucasagomessee you later18:44
*** lucasagomes is now known as lucas-dinner18:44
NobodyCamnight lucas-dinner18:44
NobodyCamdevananda: looking at 196320.. line # 343 ... can I ask why 415 vs something like 406?18:48
devananda The resource identified by the request is only capable of generating response entities which have content characteristics not acceptable according to the accept headers sent in the request.18:49
devananda The server is refusing to service the request because the entity of the request is in a format not supported by the requested resource for the requested method.18:49
devanandaNobodyCam: AIUI, 406 is specific to the Content-Type header18:50
devanandaeg, xml vs json18:50
NobodyCamahh ok :)18:50
devanandasorry, i mean the "Accept: Content-Type" header18:50
NobodyCamUnsupported Media Type just seemed strange but I think I get it now18:50
NobodyCamTY18:51
*** Sukhdev has joined #openstack-ironic18:51
devanandaoooh18:51
devanandaI've been reading from the old list18:51
devanandahttp://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml is actualy the reference we want to use18:52
NobodyCamya I like 426 but it's kinda a misuse18:52
*** Marga_ has quit IRC18:53
devanandaomg. there's an RFC for HTTP-based resource locking18:53
openstackgerritChris St. Pierre proposed openstack/ironic: Replace is_hostname_safe with a better check  https://review.openstack.org/19358718:53
NobodyCambrb18:56
openstackgerritStephanie Miller proposed openstack/ironic: Mute ipmi debug log output  https://review.openstack.org/19626718:59
*** ijw has joined #openstack-ironic19:05
*** Marga_ has joined #openstack-ironic19:10
*** boris-42 has quit IRC19:12
*** dguerri` is now known as dguerri19:18
*** dguerri is now known as dguerri`19:20
*** achanda has joined #openstack-ironic19:22
*** maurosr has quit IRC19:28
*** maurosr has joined #openstack-ironic19:32
*** maurosr has quit IRC19:39
*** maurosr has joined #openstack-ironic19:40
*** romcheg has quit IRC19:46
*** romcheg has joined #openstack-ironic19:47
*** dguerri` is now known as dguerri19:47
*** achanda has quit IRC19:48
*** gridinv_ has quit IRC19:49
*** gridinv_ has joined #openstack-ironic19:54
openstackgerritJulia Kreger proposed openstack/bifrost: WIP: Ansible 2.0 compatability **DO NOT MERGE**  https://review.openstack.org/19683219:58
*** achanda has joined #openstack-ironic20:00
*** ijw has quit IRC20:05
*** gridinv_ has quit IRC20:22
*** achanda has quit IRC20:22
*** gridinv_ has joined #openstack-ironic20:22
*** dontalton has quit IRC20:26
*** cdearborn has quit IRC20:30
*** maurosr has quit IRC20:30
*** maurosr has joined #openstack-ironic20:30
*** boris-42 has joined #openstack-ironic20:31
openstackgerritOpenStack Proposal Bot proposed openstack/python-ironicclient: Updated from global requirements  https://review.openstack.org/19684920:33
*** zz_jgrimm has quit IRC20:37
*** maurosr has quit IRC20:37
*** gridinv_ has quit IRC20:37
*** krtaylor has joined #openstack-ironic20:38
*** krtaylor has quit IRC20:39
*** krtaylor has joined #openstack-ironic20:40
*** marzif_ has joined #openstack-ironic20:43
*** jgrimm has joined #openstack-ironic20:45
*** maurosr has joined #openstack-ironic20:46
*** maurosr has quit IRC20:51
*** Sukhdev has quit IRC20:51
*** afaranha has quit IRC20:52
*** jgrimm has quit IRC20:52
*** jgrimm has joined #openstack-ironic20:55
*** maurosr has joined #openstack-ironic20:56
*** marzif_ has quit IRC20:58
*** Marga_ has quit IRC20:59
*** ijw has joined #openstack-ironic21:04
*** trown is now known as trown|outttypeww21:05
*** thiagop has quit IRC21:06
*** maurosr has quit IRC21:11
*** jgrimm has quit IRC21:13
*** krtaylor has quit IRC21:13
*** radek_ has quit IRC21:13
*** ijw has quit IRC21:19
*** dprince has quit IRC21:20
*** r-daneel has quit IRC21:23
*** Sukhdev has joined #openstack-ironic21:27
*** kevinbenton has quit IRC21:31
*** natorious is now known as zz_natorious21:37
*** maurosr has joined #openstack-ironic21:51
*** krtaylor has joined #openstack-ironic21:52
*** zz_jgrimm has joined #openstack-ironic21:52
*** ijw has joined #openstack-ironic22:02
*** ijw has quit IRC22:03
*** ijw has joined #openstack-ironic22:03
*** rloo has quit IRC22:08
mrdaMorning Ironic22:18
*** absubram has quit IRC22:19
NobodyCamgood UGT mornig mrda22:19
mrdaNobodyCam: umm, ok :)22:19
NobodyCamhehehe22:20
BadCubhowdy mrda :)22:25
mrdahey BadCub22:25
*** lucas-dinner has quit IRC22:39
*** sinval has joined #openstack-ironic23:00
*** romcheg has quit IRC23:05
*** ijw has quit IRC23:05
*** ijw has joined #openstack-ironic23:05
*** ijw has quit IRC23:07
*** ijw has joined #openstack-ironic23:07
*** ijw has quit IRC23:10
jlvillalGood morning mrda23:10
openstackgerritSinval Vieira Mendes Neto proposed openstack/ironic: Add port creation passing the name of the node instead of the UUID of the node  https://review.openstack.org/19343923:11
*** dguerri is now known as dguerri`23:17
*** ijw has joined #openstack-ironic23:18
mrdahey jlvillal, welcome back23:21
*** Sukhdev has quit IRC23:27
*** yuanying has joined #openstack-ironic23:30
*** boris-42 has quit IRC23:32
*** smoriya has joined #openstack-ironic23:54
jlvillalmrda, Thanks23:57
*** Sukhdev has joined #openstack-ironic23:59
