Tuesday, 2019-03-05

*** hwoarang has quit IRC00:06
*** hwoarang has joined #openstack-ironic00:08
*** sdake has joined #openstack-ironic00:09
*** sdake has quit IRC00:12
*** sdake has joined #openstack-ironic00:12
*** jcoufal has joined #openstack-ironic00:22
*** jcoufal has quit IRC00:39
*** sdake has quit IRC00:41
*** hwoarang has quit IRC00:59
*** hwoarang has joined #openstack-ironic01:00
*** rloo has quit IRC01:00
*** sdake has joined #openstack-ironic01:07
*** openstackgerrit has joined #openstack-ironic01:16
openstackgerritLars Kellogg-Stedman proposed openstack/virtualbmc master: [WIP] Add Serial-over-LAN (SOL) support  https://review.openstack.org/64088801:16
*** sthussey has quit IRC01:22
openstackgerritKaifeng Wang proposed openstack/ironic-tempest-plugin master: inspector py3 gate fix  https://review.openstack.org/64091001:26
openstackgerritKaifeng Wang proposed openstack/ironic-inspector master: Exclude unrelevant files from tempest job  https://review.openstack.org/64091201:36
*** dmellado has quit IRC01:50
*** stevebaker has quit IRC01:50
*** hwoarang has quit IRC01:55
*** hwoarang has joined #openstack-ironic01:57
*** whoami-rajat has joined #openstack-ironic02:02
*** sdake has quit IRC02:05
*** sdake has joined #openstack-ironic02:26
*** hwoarang has quit IRC02:33
*** hwoarang has joined #openstack-ironic02:35
*** cdearborn has quit IRC02:43
*** dmellado has joined #openstack-ironic02:55
*** andrein has quit IRC02:56
openstackgerritLars Kellogg-Stedman proposed openstack/ironic master: [WIP] honor ipmi_port in serial console drivers  https://review.openstack.org/64092802:56
openstackgerritLars Kellogg-Stedman proposed openstack/ironic master: [WIP] honor ipmi_port in serial console drivers  https://review.openstack.org/64092802:58
openstackgerritLars Kellogg-Stedman proposed openstack/ironic master: [WIP] honor ipmi_port in serial console drivers  https://review.openstack.org/64093002:59
*** dsneddon has quit IRC03:19
*** stevebaker has joined #openstack-ironic03:25
*** dsneddon has joined #openstack-ironic03:35
*** dsneddon has quit IRC03:40
*** stendulker has joined #openstack-ironic03:47
*** sdake has quit IRC03:51
*** andrein has joined #openstack-ironic03:51
*** gyee has quit IRC03:54
*** andrein has quit IRC04:13
*** dsneddon has joined #openstack-ironic04:15
*** dsneddon has quit IRC04:20
*** dsneddon has joined #openstack-ironic04:50
*** dsneddon has quit IRC04:57
openstackgerritKaifeng Wang proposed openstack/ironic-inspector master: Exclude unrelevant files from tempest job  https://review.openstack.org/64091205:08
openstackgerritQianBiao Ng proposed openstack/ironic master: Add Huawei iBMC driver support  https://review.openstack.org/63928805:24
*** dsneddon has joined #openstack-ironic05:30
openstackgerritMerged openstack/ironic master: Update the log message for ilo drivers  https://review.openstack.org/63998905:31
*** dsneddon has quit IRC05:40
*** sdake has joined #openstack-ironic05:47
*** jhesketh has quit IRC05:47
*** jhesketh has joined #openstack-ironic05:48
*** sdake has quit IRC05:50
*** pcaruana has joined #openstack-ironic05:52
*** sdake has joined #openstack-ironic05:56
*** pcaruana has quit IRC06:07
*** dsneddon has joined #openstack-ironic06:10
*** rh-jelabarre has quit IRC06:13
*** jtomasek has joined #openstack-ironic06:17
*** dims has quit IRC06:24
*** e0ne has joined #openstack-ironic06:24
*** dims has joined #openstack-ironic06:26
*** e0ne has quit IRC06:33
*** dims has quit IRC06:36
*** dims has joined #openstack-ironic06:37
*** e0ne has joined #openstack-ironic06:46
*** Chaserjim has quit IRC06:47
*** Qianbiao has joined #openstack-ironic06:59
QianbiaoHello.07:04
QianbiaoI am working on https://review.openstack.org/#/c/639288/07:04
patchbotpatch 639288 - ironic - Add Huawei iBMC driver support - 6 patch sets07:04
Qianbiaothe openstack-tox-lower-constraints CI results in an error now07:05
QianbiaoThe reason is that "rfc3986==0.3.1" does not match my code.07:05
QianbiaoMay I upgrade the lower-constraints?07:06
QianbiaoOr do I need to use the old-version style of rfc3986?07:07
*** e0ne has quit IRC07:07
openstackgerritNikolay Fedotov proposed openstack/ironic-inspector master: Use getaddrinfo instead of gethostbyname while resolving BMC address  https://review.openstack.org/62655207:10
*** lekhikadugtal has joined #openstack-ironic07:36
arne_wiebalckgood morning, ironic07:46
*** tssurya has joined #openstack-ironic08:16
rpittau|afkgood morning ironic! o/08:16
*** lekhikadugtal has quit IRC08:16
*** rpittau|afk is now known as rpittau08:16
*** rh-jelabarre has joined #openstack-ironic08:17
*** e0ne has joined #openstack-ironic08:17
*** pcaruana has joined #openstack-ironic08:18
*** jtomasek has quit IRC08:20
*** yolanda has joined #openstack-ironic08:21
*** yolanda has quit IRC08:23
*** pcaruana has quit IRC08:25
*** yolanda has joined #openstack-ironic08:25
*** sdake has quit IRC08:36
*** pcaruana has joined #openstack-ironic08:37
*** priteau has joined #openstack-ironic08:42
*** pcaruana has quit IRC08:44
iurygregorygood morning o/08:46
iurygregorymorning rpittau =)08:46
rpittauhi iurygregory :)08:46
mgoddardmorning all08:50
rpittauhi mgoddard :)08:50
iurygregorymorning mgoddard08:51
mgoddardI'll be at a client's office today and tomorrow, with no internet access. Should be able to push a patch with deploy template nits before my train arrives at the station...08:51
mgoddardmorning rpittau iurygregory08:51
*** lekhikadugtal has joined #openstack-ironic08:51
*** amoralej|off is now known as amoralej08:53
*** e0ne has quit IRC08:58
*** iurygregory has quit IRC08:58
*** e0ne has joined #openstack-ironic09:00
*** iurygregory has joined #openstack-ironic09:01
*** pcaruana has joined #openstack-ironic09:01
*** mbeierl has quit IRC09:02
*** mbeierl has joined #openstack-ironic09:04
*** lekhikadugtal has quit IRC09:07
openstackgerritMark Goddard proposed openstack/ironic-tempest-plugin master: Deploy templates: add API tests  https://review.openstack.org/63718709:08
*** andrein has joined #openstack-ironic09:09
*** moshele has joined #openstack-ironic09:09
openstackgerritMark Goddard proposed openstack/ironic master: Deploy templates: conductor and API nits  https://review.openstack.org/64044609:13
*** S4ren has joined #openstack-ironic09:13
*** stendulker has quit IRC09:19
*** dtantsur|afk is now known as dtantsur09:22
dtantsurmorning ironic09:22
*** andrein has quit IRC09:24
iurygregorymorning dtantsur o/09:26
*** andrein has joined #openstack-ironic09:27
*** mariojv has quit IRC09:30
Qianbiaohello09:31
QianbiaoMay someone look at the story https://storyboard.openstack.org/#!/story/200514009:32
dtantsurQianbiao: please join #openstack-requirements and work with them on updating constraints. we have no power over it.09:34
Qianbiaook thanks dtantsur09:34
dtantsuroh wait09:35
dtantsurQianbiao: my bad, I thought about upper-constraints. lower-constraints can be updated with your patch.09:35
QianbiaoI could update it directly in my patch?09:35
dtantsurQianbiao: just make sure you update requirements.txt to >= x.y.z AND lower-constraints.txt to === x.y.z09:35
dtantsurQianbiao: you should, actually, yes.09:36
Qianbiaook thanks09:36
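For readers following along: bumping a dependency's minimum version in an ironic change normally touches both files together. A minimal sketch, with the version number purely illustrative rather than taken from the iBMC patch:

    # requirements.txt
    rfc3986>=1.1.0  # Apache-2.0
    # lower-constraints.txt
    rfc3986==1.1.0

The lower-constraints job installs exactly the minima declared there, which is why the two files have to agree.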
*** derekh has joined #openstack-ironic09:36
* dtantsur needs coffee09:36
openstackgerritArkady Shtempler proposed openstack/ironic-tempest-plugin master: Test BM with VM on the same network  https://review.openstack.org/63659809:45
openstackgerritRachit Kapadia proposed openstack/ironic master: Set boot_mode in node properties during OOB Introspection  https://review.openstack.org/63969809:46
*** lekhikadugtal has joined #openstack-ironic09:47
*** lekhikadugtal has quit IRC09:48
S4renGood morning ironic, I have a question: if I have an instance deployed on a node in ironic, is it possible for that node to be wiped and become available, but for the instance itself to remain active in nova?09:50
*** moshele has quit IRC09:52
openstackgerritIury Gregory Melo Ferreira proposed openstack/python-ironicclient master: Move to zuulv3  https://review.openstack.org/63301009:54
*** sdake has joined #openstack-ironic09:55
*** lekhikadugtal has joined #openstack-ironic09:55
openstackgerritQianBiao Ng proposed openstack/ironic master: Add Huawei iBMC driver support  https://review.openstack.org/63928809:55
arne_wiebalckS4ren: I guess you’re describing a situation you see in your deployment? (And not something you’d like to see.)09:56
S4renarne_wiebalck, yes it is a situation I saw and definitely not something I'd like to see09:58
arne_wiebalckS4ren: :)09:58
arne_wiebalckS4ren: I’d say nova and ironic got out of sync.09:58
arne_wiebalckS4ren: Do you know how the instance got wiped?09:58
dtantsurS4ren: you can do it via ironic API09:59
dtantsuri.e. you deploy a node via nova but undeploy via ironic09:59
S4renWell thats the thing I do not, Im trying to figure this out09:59
S4rendtantsur, Is that possible? I thought that once an instance is deployed on a node in ironic ironic wont let you do stuff on the node09:59
dtantsurS4ren: "undeploy" is a legitimate operation for a deployed node10:00
dtantsurone of few that are allowed10:00
dtantsuractually, nova uses precisely this operation when you issue 'openstack server delete'10:00
* arne_wiebalck thinks of dd’ing to a block device which has a fs on top10:00
S4renSo I can undeploy a node in ironic, but that will leave the nova instance untouched is that right?10:01
dtantsurS4ren: yep. we have no way of syncing it back.10:02
S4rendtantsur, Is it possible to undeploy the node via the horizon GUI? I am looking at mine now and there is no option to undeploy in the dropdown10:03
dtantsurS4ren: I don't have a lot of experience with horizon. are you using ironic-ui?10:03
openstackgerritDigambar proposed openstack/ironic stable/ocata: Fix OOB introspection to use pxe_enabled flag in idrac driver  https://review.openstack.org/64096910:03
openstackgerritDigambar proposed openstack/ironic stable/ocata: Fix OOB introspection to use pxe_enabled flag in idrac driver  https://review.openstack.org/64096910:05
S4rendtantsur, I am not sure, let me doublecheck10:07
S4rendtantsur, yeah I am using ironic-ui10:12
arne_wiebalckS4ren: nova should also notice this inconsistency; it probably won’t do anything about it (it doesn’t for VMs), but it should log it10:12
*** mbuil has joined #openstack-ironic10:13
S4renIt did, there are logs saying "Instance is unexpectedly not found. Ignore."10:13
arne_wiebalckS4ren: that’s what I meant :)10:14
dtantsuriurygregory: images updated \o/ but we apparently forgot the branch suffix >_< https://tarballs.openstack.org/ironic-python-agent/tinyipa/files/10:15
iurygregorydtantsur, omg10:15
iurygregorywell we almost there \o/10:15
dtantsurI wonder at which point it disappeared.. wanna take a look?10:15
dtantsuryeah, at least it builds10:15
iurygregorysure10:15
iurygregorylooking now10:15
iurygregory=)10:15
dtantsuriurygregory: the logs from the last build, if you need it: http://logs.openstack.org/dd/dd300fe49e0799936111832a869631b9ea6775f6/post/ironic-python-agent-buildimage-tinyipa/c73025b/10:16
iurygregoryit's called zuulv3 magic dtantsur XD10:16
dtantsurlol10:16
mbuilhey guys, what is the login for the Fedora IPA image? I am having some problems and I would like to see the logs of the IPA process10:16
dtantsurmbuil: I don't think there is a password set, unless you set it explicitly when building an image10:16
dtantsurmbuil: check https://docs.openstack.org/ironic-python-agent/latest/admin/troubleshooting.html#gaining-access-to-ipa-on-a-node10:17
openstackgerritNisha Agarwal proposed openstack/ironic master: [WIP] Adds graphical console implementation for ilo drivers  https://review.openstack.org/64097310:18
S4renHmm would setting an active node to available state cause it to self clean?10:18
mbuildtantsur: ok, thanks! any idea why nodes could be stuck for a very long time in "wait call-back"?10:21
dtantsurmbuil: usually it's DHCP, PXE or otherwise networking problems10:21
arne_wiebalckS4ren: how would you do that?10:24
mbuildtantsur: In the node, I can see a successful heartbeat and I see "ens1f0: state change: disconnected -> prepare". Then, "ens1f0: state change: prepare -> config". Then, "ens1f0: state change: config -> ip-config". Then, "ens1f0: activation: beginning transaction (timeout in 45s)". And finally "ens1f0: dhclient started with pid 3014"10:28
dtantsurmbuil: just to double-check: make sure your nodes are not in maintenance10:30
mbuildtantsur: in the server I see "ironic_inspector.pxe_filter.base [-] The PXE filter driver NoopFilter, state=initialized left the fsm_reset_on_error context fsm_reset_on_error /usr/lib/python2.7/site-packages/ironic_inspector/pxe_filter/base.py:153"10:30
mbuildtantsur: maintenance is False for both nodes10:31
mbuildtantsur: and I also see in the server "dnsmasq-dhcp[27659]: DHCPDISCOVER(eth0) 5c:b9:01:8b:a6:30 ignored" :(10:32
dtantsurmbuil: it may be ironic-inspector DHCP. anyway, if you have IPA running, you're past the DHCP stage.10:32
mbuildtantsur: ok, that's what I was wondering10:33
dtantsurmbuil: also check that you're not affected by https://docs.openstack.org/ironic/latest/admin/troubleshooting.html#dhcp-during-pxe-or-ipxe-is-inconsistent-or-unreliable10:34
dtantsur(you shouldn't since you're in IPA, but just in case)10:34
mbuildtantsur: I can also see from time to time in the node "cancel DHCP transaction", is that relevant? TBH, I am a bit lost because I am not sure what should be happening when node is in "wait call-back" state. It should contact the server with info about the node through HTTP?10:36
dtantsurmbuil: what is happening is that IPA heartbeats into ironic, and as a reaction of these heartbeats ironic tells IPA to do something10:36
dtantsur(partition a disk, flash an image, etc)10:36
dtantsurso if you see heartbeats, ironic is supposed to react to them10:36
dtantsur* successful heartbeats10:37
S4renarne_wiebalck, When an instance is active the user has the option of moving it into available state. This will undeploy it/wipe it and this is available in ironic-ui as well. Is this expected behavior?10:37
dtantsuryes10:38
mbuildtantsur: I think they are successful: "Mar 05 10:37:13 linux ironic-api[26625]: 2019-03-05 10:37:13.760 26914 INFO eventlet.wsgi.server [req-48bd1c85-bf40-4275-8193-dd5eb5928868 - - - - -] 192.168.122.3 "POST /v1/heartbeat/e1369efa-5391-5035-8533-3a065c44a584 HTTP/1.1" status: 202  len: 298 time: 0.0552812"10:38
dtantsuryep10:39
dtantsurmbuil: anything in ironic-conductor logs? I wonder what it is doing.10:39
mbuildtantsur: nope, in journalctl I only see logs for ironic-inspector10:43
dtantsurmbuil: maybe they're in /var/log/ironic?10:44
mbuildtantsur: nope, nothing. Is ironic-conductor the one supposed to tell IPA to do something?10:46
S4renFollow on point, is there a way to disable this behavior so that the only way to delete the instance is through nova, at least in ironic-ui?10:46
dtantsurmbuil: yeah. you may need to turn DEBUG logging on if you haven't already.10:47
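As a side note, turning on conductor debug logging is normally done in ironic.conf followed by a service restart; a minimal sketch (service name varies by distro):

    [DEFAULT]
    debug = True
    # then restart the conductor, e.g.:
    # systemctl restart openstack-ironic-conductor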
*** v12aml has quit IRC10:48
arne_wiebalckS4ren: “has the option“ you mean the ui offers this?10:48
dtantsurS4ren: I don't think so10:48
S4renYes it does10:48
S4rendtantsur, :(10:48
arne_wiebalckdtantsur: what would be the cli equivalent?10:48
dtantsurarne_wiebalck: 'openstack baremetal node undeploy'?10:49
arne_wiebalckdtantsur: sure, but what I meant is to change the provisioning states10:49
dtantsurarne_wiebalck: this is the command for "deleted" action. we no longer have a universal command for them.10:50
dtantsurit's $ openstack baremetal node {undeploy,deploy,rebuild,clean,inspect,...}10:50
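For context, the per-verb provision-state commands referred to here look roughly like this (node name and arguments illustrative):

    openstack baremetal node undeploy <node>   # the "deleted" action: active -> (cleaning) -> available
    openstack baremetal node deploy <node>
    openstack baremetal node rebuild <node>
    openstack baremetal node clean <node> --clean-steps my-steps.json
    openstack baremetal node inspect <node>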
*** sdake has quit IRC10:51
arne_wiebalckdtantsur: right, hence my question to S4ren where the user has the option to move a node from active to available10:51
arne_wiebalck“When an instance is active the user has the option of moving it into available state.”10:51
S4renThat option is definitely there10:51
arne_wiebalckS4ren: So, the ui offers this option and under the hood it calls ‘undeploy’?10:52
S4renIt seems that way yes10:52
arne_wiebalckS4ren: dtantsur: That sounds like a dangerous option … to say the least :)10:52
S4renIt is somewhat confusing because normally moving a node to available is done from manageable, and that doesn't have a specific effect apart from triggering cleaning if it is enabled10:53
S4renarne_wiebalck, Agreed10:53
*** v12aml has joined #openstack-ironic10:53
dtantsursounds like a UX issue10:53
arne_wiebalckdtantsur: ++10:53
arne_wiebalckS4ren: correct10:54
*** sdake has joined #openstack-ironic10:54
S4renI might raise this as a bug/request, would the appropriate way be as a storyboard?10:56
arne_wiebalckS4ren: yes10:57
mbuildtantsur: after turning on DEBUG logging, these are the logs. I can't see anything strange there but hopefully you can: http://paste.openstack.org/show/747279/10:57
*** dougsz has joined #openstack-ironic10:58
dtantsurmbuil: it looks like IPA is still downloading the image. weird. is the image large?11:00
mbuildtantsur: can you tell me where you see that in the logs? The image is 870M11:04
openstackgerritDmitry Tantsur proposed openstack/ironic-tempest-plugin master: Deploy templates: add API tests  https://review.openstack.org/63718711:05
dtantsurmbuil: Status of agent commands for node e1369efa-5391-5035-8533-3a065c44a584: prepare_image: result "None", error "None"11:08
dtantsurmbuil: note that if the image is not RAW, you need enough RAM on the target machine to fit it for conversion.11:08
mbuildtantsur: ok. It has just finished: "Image successfully written to node c1a56cb3-fcef-59d5-8105-7e6815154f70" but it took a long time :(11:08
mbuildtantsur: not sure why, this was much faster a few months ago11:09
dtantsurmbuil: it can be some network condition OR it can be that ironic started converting images to RAW to stream them (without fitting in RAM)11:09
dtantsurmbuil: for the latter, you can play with https://github.com/openstack/ironic/blob/f4576717ba8e37fdd5868370b0cdd5ac84e2b668/ironic/conf/agent.py#L3711:10
mbuildtantsur: so let me understand this, if stream_raw_images=True, less memory is needed but it takes longer? RAM is 128GB so, more than enough11:14
dtantsurmbuil: you're right. you may want to disable this option if so.11:22
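For reference, the option under discussion lives in the [agent] section of ironic.conf; a minimal sketch for the case described here (nodes with plenty of RAM, so the image is converted in memory rather than streamed):

    [agent]
    stream_raw_images = False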
arne_wiebalcketingof: ping11:23
etingofarne_wiebalck, o/11:24
arne_wiebalcketingof: hey o/11:24
arne_wiebalcketingof: we discussed some weeks ago to test the parallel power sync11:25
etingofright11:25
arne_wiebalcketingof: I could give that a try11:25
etingofwould be interesting!11:25
arne_wiebalcketingof: could you point me to patches I’d need?11:25
arne_wiebalcketingof: please ;)11:26
etingofarne_wiebalck, I think they are all merged already11:26
etingofso just current master should have everything in place11:26
etingofarne_wiebalck, or do you want to cherry pick?11:27
arne_wiebalcketingof: yeah … but I won’t be able to run master that easily11:27
arne_wiebalcketingof: yes11:27
* arne_wiebalck checking master11:28
etingofarne_wiebalck, https://review.openstack.org/#/c/631872/11:28
patchbotpatch 631872 - ironic - Parallelize periodic power sync calls (MERGED) - 5 patch sets11:28
etingofarne_wiebalck, https://review.openstack.org/#/c/610007/11:29
patchbotpatch 610007 - ironic - Kill misbehaving `ipmitool` process (MERGED) - 9 patch sets11:29
arne_wiebalcketingof: awesome, thx, I’ll keep you posted11:29
etingofarne_wiebalck, the first patch runs up to 8 ipmitool processes concurrently, the second patch kills off the hung ones11:30
etingofthere are a couple of follow ups, but these are insignificant I think11:30
arne_wiebalcketingof: ok. I’ll start with the first one to see how that affects the power sync loop11:31
*** e0ne has quit IRC11:32
*** andrein has quit IRC11:32
dtantsurthanks arne_wiebalck, this will be extremely useful11:38
arne_wiebalckdtantsur: :)11:39
*** andrein has joined #openstack-ironic11:40
etingofarne_wiebalck, watch for long-running ipmitool processes then11:41
arne_wiebalcketingof: ok!11:41
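For anyone reproducing this test: the parallel power sync work adds a worker-count knob next to the existing sync interval in the [conductor] section; a hedged sketch (option names from memory of the merged patches, values illustrative):

    [conductor]
    sync_power_state_interval = 60
    sync_power_state_workers = 8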
*** e0ne has joined #openstack-ironic11:42
openstackgerritMerged openstack/python-ironicclient master: Deploy templates: client support  https://review.openstack.org/63693111:49
dtantsur\o/11:50
*** andrein has quit IRC11:53
*** andrein has joined #openstack-ironic11:53
*** lekhikadugtal has quit IRC11:57
*** amoralej is now known as amoralej|lunch12:00
*** jtomasek has joined #openstack-ironic12:06
*** mmethot has quit IRC12:15
*** bfournie has quit IRC12:20
*** early` has quit IRC12:26
*** andrein has joined #openstack-ironic12:28
*** early` has joined #openstack-ironic12:29
jrollmornings12:30
Qianbiaohello12:31
Qianbiaoif a node deploy failed, how could we re-deploy it?12:31
QianbiaoProvisioning State is "deploy failed", Maintenance is False.12:32
iurygregorydtantsur, do we need to change this https://github.com/openstack/ironic-python-agent/blob/80be07ae791980a1c444b3b0d685775c1688ca34/imagebuild/tinyipa/common.sh ?12:34
iurygregorymorning jroll o/12:34
iurygregoryBRANCH_PATH will get master I think since we don't have stable/stein https://github.com/openstack/ironic-python-agent/blob/5b6bf0b6c86aad352db271cf530a6321ad4248eb/playbooks/ironic-python-agent-buildimage/run.yaml#L912:35
dtantsuriurygregory: well, it does not get master, that's the problem. it's just empty.12:36
dtantsurprobably ZUUL_REFNAME is not a thing12:37
iurygregoryyeah12:38
iurygregorygoing to check with infra to see12:38
*** sdake has quit IRC12:39
dtantsuriurygregory: https://github.com/openstack/openstack-manuals/commit/21edbc931fe09eedf8fa4e219fde1b222c9bce9312:47
dtantsurwe probably need the ZUUL_BRANCH thingy12:47
iurygregoryhttp://git.openstack.org/cgit/openstack-infra/project-config/tree/playbooks/proposal/propose_update.sh#n8212:47
iurygregoryi found this too12:47
iurygregoryshould I go and try AJaeger's approach?12:49
dtantsuriurygregory: I think so (we only need a branch though, not all of these)12:52
iurygregorydtantsur, going to push a patch =)12:53
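The direction being discussed is to stop relying on the legacy ZUUL_REFNAME environment variable and take the branch from the Zuul v3 job variables instead; a hedged sketch of what such a playbook change could look like, not the actual content of the patch that follows:

    # playbooks/ironic-python-agent-buildimage/run.yaml (illustrative)
    environment:
      BRANCH_PATH: "{{ zuul.branch | replace('/', '-') }}"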
openstackgerritHarald Jensås proposed openstack/ironic master: Initial processing of network port events  https://review.openstack.org/63372912:59
openstackgerritIury Gregory Melo Ferreira proposed openstack/ironic-python-agent master: Replace ZUUL_REFNAME for zuul.branch  https://review.openstack.org/64100713:05
openstackgerritMerged openstack/networking-generic-switch master: Adding python 3.6 unit test  https://review.openstack.org/64079613:12
openstackgerritIury Gregory Melo Ferreira proposed openstack/ironic-python-agent master: Replace ZUUL_REFNAME for zuul.branch  https://review.openstack.org/64100713:13
*** sdake has joined #openstack-ironic13:20
openstackgerritRiccardo Pittau proposed openstack/networking-baremetal master: Adding py36 environment to tox  https://review.openstack.org/64079413:26
*** PabloIranzoGmez[ has joined #openstack-ironic13:28
*** amoralej|lunch is now known as amoralej13:28
PabloIranzoGmez[hi13:30
PabloIranzoGmez[I'm trying to add a baremetal host to an environment with regular instances13:30
openstackgerritRiccardo Pittau proposed openstack/networking-baremetal master: Supporting all py3 environments with tox  https://review.openstack.org/64079413:30
*** dtantsur is now known as dtantsur|brb13:31
PabloIranzoGmez[thing is that aggregateinstancespecfilter removes all 4 hosts (the ones in the virtual-host aggregate), but says nothing about the baremetal-hosts aggregate13:32
PabloIranzoGmez[where I've added the controllers13:32
PabloIranzoGmez[but I cannot add the baremetal host13:32
PabloIranzoGmez[however, baremetal host shows in details that is member of that aggregate13:32
PabloIranzoGmez[scheduling an instance for baremetal fails because the aggregate for baremetal is not reported, and it filters out all the virtual-hosts as expected13:32
PabloIranzoGmez[osp release is queens13:33
PabloIranzoGmez[any hint?13:33
*** oanson has joined #openstack-ironic13:36
openstackgerritIlya Etingof proposed openstack/sushy-tools master: Add memoization to expensive emulator calls  https://review.openstack.org/61275813:40
*** priteau has quit IRC13:41
*** Qianbiao has quit IRC13:42
openstackgerritLars Kellogg-Stedman proposed openstack/virtualbmc master: [WIP] Add Serial-over-LAN (SOL) support  https://review.openstack.org/64088813:43
*** priteau has joined #openstack-ironic13:52
*** jcoufal has joined #openstack-ironic13:53
*** sthussey has joined #openstack-ironic14:05
*** mmethot has joined #openstack-ironic14:05
*** bfournie has joined #openstack-ironic14:09
*** sdake has quit IRC14:11
*** mjturek has joined #openstack-ironic14:17
openstackgerritMerged openstack/ironic master: Add option to protect available nodes from accidental deletion  https://review.openstack.org/63926414:21
*** sdake has joined #openstack-ironic14:21
*** sdake has quit IRC14:23
arne_wiebalckQianbiao: How about moving the node via manage & provide back to available and then retriggering the creation of an instance (if this is via nova)?14:29
arne_wiebalcketingof: dtantsur: I tried the 1st power sync patch now on one of our controllers (which has ~700 nodes)14:30
arne_wiebalcketingof: power sync loop took 105 seconds before14:30
arne_wiebalcketingof: is now down to 42 secs14:31
arne_wiebalcketingof: (default number of workers)14:31
*** rloo has joined #openstack-ironic14:32
arne_wiebalcketingof: I haven’t checked for ipmi outliers yet14:33
*** sdake has joined #openstack-ironic14:33
arne_wiebalcketingof: but with that patch we could go back to default values for the power sync interval14:34
TheJulia_sicko/14:37
TheJulia_sickHey everyone14:37
arne_wiebalckHey TheJulia_sick o/14:37
rpittauhi TheJulia_sick :)14:38
TheJulia_sickrpittau: Hey, regarding python-hardware, do you think we could submit an update to constraints to get the upper-constraint updated since they just released?14:38
rloomorning ironicers arne_wiebalck, TheJulia_sick, rpittau.14:38
arne_wiebalckrloo o/14:39
rlooTheJulia_sick: you shouldn't be here...14:39
arne_wiebalckrloo ++14:39
*** dtantsur|brb is now known as dtantsur14:40
rpittauTheJulia_sick, yes definitely14:40
dtantsurmorning TheJulia_sick, how are you?14:40
rpittauhi rloo :)14:41
dtantsuralso morning rloo14:41
TheJulia_sickwoot, and the wifey now has it14:41
rloohi dtantsur!14:41
TheJulia_sickjoy14:41
rlooTheJulia_sick: :-(14:41
dtantsur:(14:41
rlooTheJulia_sick: nice that you are sharing... again :-(14:42
TheJulia_sickrloo: she said the same thing...14:42
TheJulia_sickI must have picked up the new flu variant that has been hitting the midwestern United States14:42
TheJulia_sickluckily, if one has had the vaccine, it's still awful, but fairly quick for the flu14:42
dtantsuroh, I see14:42
rpittauoh gosh :/14:43
TheJulia_sicks/farily quick/fairly rough and quick/14:43
TheJulia_sickI slept most of saturday/sunday14:43
dtantsurTheJulia_sick: FYI I've been fast-approving attempts to make our IPA post job back into operation14:43
TheJulia_sickdtantsur: oh.. joy14:43
dtantsuryeah, it's been down since mid-Jan14:44
TheJulia_sickdtantsur: ack ack, anything else I need to be aware of14:44
TheJulia_sicksweet!14:44
* iurygregory zuulv3 magic14:44
dtantsurand I really want it back before FF14:44
dtantsurTheJulia_sick: nothing else, go back to bed :)14:44
TheJulia_sickdtantsur: eh, FF is only in our minds ;)14:44
iurygregoryget better TheJulia_sick o/14:44
* TheJulia_sick <3s you all14:44
rpittauTheJulia_sick, take care :)14:45
* TheJulia_sick is still reading email :)14:46
TheJulia_sickdtantsur: btw, I don't know if you saw, the fast track change set actually worked in CI. Just the scenario test I built needs a little more work :)14:46
TheJulia_sicks/built/cobbled together/14:46
dtantsurneat! I'll try to get to it after I finish going through TC candidate statements/discussions and finally vote..14:47
rpittauTheJulia_sick, one thing though, the old hardware package is not in the current upper-constraints, I guess it's because it's in plugin-requirements and not in requirements14:47
etingofarne_wiebalck, \o/14:47
TheJulia_sickdtantsur: ack, I figure we can rip the depends-on off and take out the test job from running, and add it back in a later patch14:48
TheJulia_sickjust so we can get it merged and iterate from there14:48
dtantsurI think it's a good idea14:48
*** sdake has quit IRC14:48
dtantsurrloo: if you have a minute today, https://review.openstack.org/#/c/639050/ would make my life somewhat easier14:49
patchbotpatch 639050 - ironic - Allow building configdrive from JSON in the API - 8 patch sets14:49
rloodtantsur: ok14:49
*** sdake has joined #openstack-ironic14:50
* etingof wishes s/TheJulia_sick/TheJulia/ quickly14:54
rloohjensas: hi, i'm looking at the neutron event stuff. wondering where you have gotten to. Do you have it working, tested, etc? There are two WIP PRs and I don't know if you will be adding more PRs, so wanted to get an idea.14:55
dtantsurhjensas: .. and what's the situation around the networking-baremetal part?14:57
*** pcaruana has quit IRC14:57
hjensasrloo: I have only the add/remove cleaning network implemented. It's working, but it's a PoC. It doesn't cover the port update setting dhcp_opts either, so it's very early. Won't be done this week.14:58
dtantsurhjensas: should we press to get it done in Stein then? how likely is it even?14:58
*** munimeha1 has joined #openstack-ironic14:59
etingofarne_wiebalck, I wonder why the time period reduction is far from being 8-fold though15:00
hjensasdtantsur: not likely to finish for weeks yet. We need to have notifications in CI as well; the ironic jobs do not use the baremetal mech driver. So we would need that, or implement the notifier in neutron.15:01
arne_wiebalcketingof: maybe it's dominated by something incompressible15:01
rloohjensas: thx for the status. I think that it won't get done in Stein.15:01
*** e0ne has quit IRC15:02
dtantsurright. then we should plan on early Train.15:02
dtantsurThis looks scary to rush in..15:02
hjensasrloo: dtantsur: yes, and we may want to have another discussion at PTG regarding mgoddard's idea of integrating it in steps?15:03
rloohjensas, dtantsur: yeah, i don't want to rush this in. I was thinking about what mark said yesterday about using steps (deploy/clean) and i think it makes sense.15:03
dtantsurhjensas: totally. have you added it to the etherpad?15:03
rloohjensas: ++ :)15:03
*** mjturek has quit IRC15:03
rloohjensas, dtantsur: are you OK if I remove the neutron events stuff from our weekly priorities?15:03
dtantsurfine with me15:04
hjensasrloo: fine with me as well.15:04
*** sdake has quit IRC15:04
rloothx hjensas.15:04
arne_wiebalckTheJulia_sick: jroll: dtantsur: a first version of our downstream software RAID patches is now available from: https://review.openstack.org/#/q/topic:software_raid15:06
dtantsurarne_wiebalck: nice! what's left to get rid of [WIP]?15:06
arne_wiebalckdtantsur: there are a few open points I’d like to discuss (I made corresponding tasks in the story as well)15:07
*** sdake has joined #openstack-ironic15:07
dtantsurarne_wiebalck: is it PTG level of discussion or just on the patches?15:07
etingofarne_wiebalck, how about trying more power sync workers? ;)15:08
arne_wiebalcketingof: doing that already … ;)15:09
arne_wiebalckdtantsur: half and half I’d say15:09
dtantsurI see. are you coming to the PTG, arne_wiebalck?15:10
arne_wiebalckdtantsur: no, sorry15:10
arne_wiebalckdtantsur: on the list for next one :)15:10
arne_wiebalckdtantsur: the code is woking, btw, I verified it in our QA env with real machines15:10
arne_wiebalckit’s not only woking it’s even working15:11
arne_wiebalckdtantsur: I can add an item to the next weekly meeting15:11
dtantsurarne_wiebalck: yes please, ideally with specific questions15:11
arne_wiebalckdtantsur: just wanted to signal that the code is up in a first version15:12
arne_wiebalckdtantsur: I can remove the [WIP] if that attracts more eyes15:12
*** e0ne has joined #openstack-ironic15:13
dtantsurarne_wiebalck: generally, if you think the code is worth merging, remove [WIP]15:13
arne_wiebalckdtantsur: in that case I leave the [WIP] until we discussed some of the points :)15:14
arne_wiebalcketingof: going to 32 workers does not change the timing15:15
arne_wiebalcketingof: (assuming I successfully changed the number of workers)15:15
hjensasdtantsur: added to the PTG etherpad to discuss neutron events again.15:17
dtantsurthanks!15:18
rloodtantsur: was this approved? https://storyboard.openstack.org/#!/story/200508315:18
rloodtantsur: jroll seems good with it.15:19
dtantsurrloo: apparently not officially. jroll said okay in the comment, TheJulia_sick agreed on the meeting (IIRC)15:19
dtantsurif you agree, we can call it approved15:19
openstackgerritIury Gregory Melo Ferreira proposed openstack/python-ironicclient master: Move to zuulv3  https://review.openstack.org/63301015:22
* iurygregory starts to pray for the CI15:23
rloodtantsur: trying to parse the description of that story. The configdrive now can be string or gzip-base-64-encoded. You want to accept configdrive as a json with 3? (not 2) keys?15:24
dtantsurrloo: up to three keys, yes (did I leave two somewhere still? it's from an older version)15:25
rloodtantsur: ok, got it. with user_data value maybe being json or not, but the other two have jsons as their values.15:25
etingofarne_wiebalck, is it plausible that you have one slow BMC on which ipmitool blocks for 42 sec?15:26
dtantsurrloo: exactly15:26
rloodtantsur: and the only defaults will be for meta_data. if the others aren't specified we don't do anything.15:26
dtantsurrloo: correct15:26
rloook, got it. thx. will approve.15:26
dtantsurthnx!15:26
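For readers following along, the configdrive-as-JSON format being approved here accepts up to three top-level keys; a hedged illustration with made-up values:

    {
      "meta_data": {"public_keys": {"0": "ssh-rsa AAAA..."}},
      "user_data": "#cloud-config\nusers: []",
      "network_data": {"links": [], "networks": [], "services": []}
    }

If meta_data is omitted, defaults are generated for it; the other two keys are only included when supplied.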
* etingof wonders if arne_wiebalck applied the killer patch as well...?15:27
*** Chaserjim has joined #openstack-ironic15:27
arne_wiebalcketingof: I verified that I indeed increased  the number of threads.15:27
arne_wiebalcketingof: no killer patch yet15:27
arne_wiebalcketingof: I was thinking of switching to debug mode and seeing if there is a slow “power status” call … not sure if I will find it, though15:28
*** openstackgerrit has quit IRC15:28
* arne_wiebalck sharpens the grep knife15:28
arne_wiebalcketingof: you’d think I should go with the killer patch first?15:29
*** bfournie has quit IRC15:29
etingofthis killer patch is the most reliable part of the whole system, arne_wiebalck15:33
etingofso I'd try it sooner or later15:33
*** openstackgerrit has joined #openstack-ironic15:33
openstackgerritMerged openstack/ironic-python-agent master: Replace ZUUL_REFNAME for zuul.branch  https://review.openstack.org/64100715:33
etingofbut debugging what's happening first makes sense as well15:33
rloodtantsur: sorry, another question. are you going to update the client to support that new configdrive format?15:37
dtantsurrloo: so, this is something I've been postponing because ironicclient already has decent support for building configdrives.15:38
rloodtantsur: well, just seems inconsistent. would someone using the client, want to pass in json for the configdrive?15:39
dtantsurrloo: okay, I can (and I will) fix https://github.com/openstack/python-ironicclient/blob/9cd584548a77492bfa0f41e7ea72546baed4a58d/ironicclient/v1/node.py#L537-L540 to not blow up on a dict15:39
dtantsurbut CLI already supports passing a directory, which should be enough15:39
rloodtantsur: i'm fine if you do it in train. i was just wondering if it was something that we wanted to land this week15:39
dtantsurrloo: the ironicclient fix will be small, I'll prepare it right now.15:40
rloodtantsur: ok. now i'll look at your ironic PR :)15:40
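As an aside, the existing directory-based flow mentioned above looks roughly like this (path illustrative):

    openstack baremetal node deploy <node> --config-drive /tmp/my-configdrive-dir/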
*** pcaruana has joined #openstack-ironic15:42
openstackgerritMerged openstack/ironic-tempest-plugin master: Deploy templates: add API tests  https://review.openstack.org/63718715:44
*** sdake has quit IRC15:49
*** mjturek has joined #openstack-ironic15:49
dtantsurhuh, I seem to have found a bug in our old configdrive code..15:52
rpittaujust realized gate is broken for ironic-introspection :/15:53
iurygregoryD:15:53
dtantsurSIGH15:53
dtantsurthere was a patch this morning15:54
dtantsurthis https://review.openstack.org/#/c/640910/ ?15:54
patchbotpatch 640910 - ironic-tempest-plugin - inspector py3 gate fix - 1 patch set15:54
rpittauyes15:55
rpittaubut I was just rechecking like a blind lemming15:55
rpittauI guess I'll add a depends on until that is merged15:56
openstackgerritArkady Shtempler proposed openstack/ironic-tempest-plugin master: Test BM with VM on the same network  https://review.openstack.org/63659815:56
mbuildtantsur: do you know what network protocol is it used to transmit the image to the nodes?15:56
dtantsurmbuil: depends on deploy_interface used. I assume you use "direct", thus HTTP(s)15:58
iurygregoryrpittau, just remember that depends-on won't trigger a merge if your patch has +2 +W XD15:58
mbuildtantsur: that's what I thought but I can't see any HTTP traffic going on :/, need to investigate15:59
rpittauiurygregory, yeah, it should be ok15:59
openstackgerritDmitry Tantsur proposed openstack/python-ironicclient master: Support passing a dictionary for configdrive  https://review.openstack.org/64106115:59
openstackgerritRiccardo Pittau proposed openstack/ironic-inspector master: [trivial] removing pagination in Get Introspection Data api-ref  https://review.openstack.org/64070516:00
*** sdake has joined #openstack-ironic16:01
*** ajya[m] has quit IRC16:01
*** ajya[m] has joined #openstack-ironic16:02
arne_wiebalcketingof: there are some nodes where the lock for the power sync state is held 6+ seconds16:02
etingofarne_wiebalck, should we power sync locked nodes later in time...?16:05
arne_wiebalcketingof: the lock is shared, so is the time the lock is taken relevant?16:06
arne_wiebalcketingof: it is shared, right?16:06
arne_wiebalcketingof: oh, wait … I missed that the call time seems to be logged!16:09
* arne_wiebalck collects more data16:09
openstackgerritDmitry Tantsur proposed openstack/python-ironicclient master: Support passing a dictionary for configdrive  https://review.openstack.org/64106116:11
dtantsurrloo: ^^ (even with CLI, it proved straightforward)16:13
* dtantsur -> exercising16:13
rloodtantsur: you're faster than me :)16:13
etingofarne_wiebalck, yes, lock is shared16:14
arne_wiebalcketingof: I see 4 BMCs replying only in 4+ seconds plus 10 or so in 1+ secs16:14
dtantsurheh16:14
openstackgerritIury Gregory Melo Ferreira proposed openstack/python-ironicclient master: Move to zuulv3  https://review.openstack.org/63301016:14
arne_wiebalcketingof: if ~40 secs is b/c this is simply the sum of the ipmi calls of the “slowest” worker I don’t understand why more workers don’t help16:15
arne_wiebalcketingof: but the spectrum of ipmi runtimes varies from 0.03s to 5.1s16:17
etingofarne_wiebalck, there are just 8 workers, right? if you have 8 slow BMCs, everything beyond 8 would accumulate time perhaps16:17
*** bfournie has joined #openstack-ironic16:17
arne_wiebalcketingof: 32 workers atm16:17
arne_wiebalcketingof: let me go to 4 :)16:18
etingofarne_wiebalck, I think you should sum up all runtimes and divide by 3216:18
etingofarne_wiebalck, you should get the Answer to the Ultimate Question of Life, the Universe, and Everything16:19
arne_wiebalcketingof: ha!16:19
rpittaulol16:20
* arne_wiebalck calculates16:20
arne_wiebalcketingof: total time divided by workers is ~616:22
arne_wiebalcketingof: 201 secs and 32 workers16:22
arne_wiebalcketingof: this is the max locking time and close to the max ipmitool runtime16:23
etingofarne_wiebalck, can it mean that the workers are idling, but you still have a chunk of slow BMCs that line-up to 42 secs?16:24
arne_wiebalcketingof: if the slow BMCs go all the same worker you mean?16:24
etingofarne_wiebalck, yes, that would be the *slowest* worker meaning there could be more than one16:25
*** kandi has joined #openstack-ironic16:25
etingofarne_wiebalck, but you see that some workers are waiting on the shared lock?16:26
etingofarne_wiebalck, if you have an exclusive lock on a node, that would cause power sync to block, right?16:27
arne_wiebalcketingof: the max ipmitool time is 5sces, the max lock time is 6secs, and I was assuming these were the same workers16:27
etingofarne_wiebalck, is it 'do_sync_power_state' time that you count?16:28
arne_wiebalcketingof: it’s the ipmitool call time which is printed in debug mode16:29
arne_wiebalcketingof: the total time is from instrumenting _sync_power_states16:30
etingofarne_wiebalck, _sync_power_states is misleading - it actually instruments (1) spawning N-1 threads and (2) running the N-th thread from start to finish16:32
arne_wiebalcketingof: correct16:32
etingofarne_wiebalck, 'do_sync_power_state' seems to measure a single ipmitool runtime16:33
arne_wiebalcketingof: do you know what the METRICS.imer decorator does?16:38
arne_wiebalcketingof: METRICS.timer16:38
etingofarne_wiebalck, my understanding is that it measures the runtime of the decorated callable, no?16:40
arne_wiebalcketingof: right … do you know how to access the generated data?16:41
arne_wiebalcketingof: it doesn’t seem to go to the debug output16:41
arne_wiebalcketingof: 4 threads is roughly the same as 8 threads16:42
iurygregorydtantsur, we will only know if the patch worked tomorrow?16:43
*** tssurya has quit IRC16:45
etingofarne_wiebalck, statsd?16:46
etingofarne_wiebalck, https://docs.openstack.org/ironic/ocata/deploy/metrics.html16:46
openstackgerritRiccardo Pittau proposed openstack/ironic-ui master: Supporting all py3 environments with tox  https://review.openstack.org/64108016:46
arne_wiebalcketingof: yes … not configured16:47
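If anyone wants to capture the timer data being discussed, ironic's metrics are normally pointed at a statsd daemon via ironic.conf; a hedged sketch (section and option names per the metrics doc linked above, host/port illustrative):

    [metrics]
    backend = statsd
    [metrics_statsd]
    statsd_host = localhost
    statsd_port = 8125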
etingofarne_wiebalck, so the runtimes from debug - can we trust them?16:49
etingofarne_wiebalck, where in the code is it calculated...?16:50
etingofarne_wiebalck, we have also tried 'sa' against ipmitool if memory serves16:51
arne_wiebalcketingof: yes we did!16:52
arne_wiebalcketingof: let me see if I can find out where the 40s come from16:53
etingofarne_wiebalck that would be extremely useful - may be we could squeeze some more time out of the power sync loop16:54
arne_wiebalcketingof: my current guess is that the slow BMCs end up with the same worker16:54
arne_wiebalcketingof: the nodes are ordered, no?16:55
etingofarne_wiebalck, the seems unlikely16:55
arne_wiebalcketingof: fully agree16:55
etingofarne_wiebalck, the nodes might be ordered in the shared queue, then the workers pick up one job at a time16:56
mbuildtantsur: after one hour it finished the dump of the image; however, I only saw 8080 traffic two minutes before it worked. I think IPA is stuck somewhere and after an hour it tries to download the image, but I have no idea where. Anything that comes to your mind?16:56
arne_wiebalcketingof: yes, that was my understanding16:56
etingofarne_wiebalck, so how slow nodes can end up in a single worker then?16:57
openstackgerritHarald Jensås proposed openstack/ironic master: Initial processing of network port events  https://review.openstack.org/63372916:57
openstackgerritHarald Jensås proposed openstack/ironic master: WiP - Implement Event Handler in driver interfaces  https://review.openstack.org/63784016:57
openstackgerritHarald Jensås proposed openstack/ironic master: WIP - Cleaning network - events  https://review.openstack.org/63784116:57
arne_wiebalcketingof: I don’t know16:57
etingofarne_wiebalck, but the answer is still 4216:57
arne_wiebalcketingof: it’s just my theory … no proof16:57
arne_wiebalcketingof: yes … more or less independent from the number of workers16:58
etingofarne_wiebalck, at least 1/4 of nodes are slow enough to occupy all 4 workers for 42 secs?16:59
etingofarne_wiebalck, or can it be locking...?16:59
*** andrein has quit IRC17:00
etingof(I am saying 4 workers as you say that's the breaking point)17:00
arne_wiebalcketingof: I guess I should try 2 and 117:01
etingofarne_wiebalck, well, I think we started with one...17:02
arne_wiebalcketingof: 1 was the old code17:02
etingofarne_wiebalck, right, but for 1 worker the new code behaves the same17:02
dtantsuriurygregory: periodic tasks can be watched like http://zuul.openstack.org/builds?job_name=ironic-python-agent-buildimage-tinyipa17:02
dtantsuriurygregory: I see a new version at https://tarballs.openstack.org/ironic-python-agent/tinyipa/files/ \o/17:03
dtantsurFYI folks watch out for new IPA-related failures, since the version we use in gate was just updated (the previous one was from mid-January)17:08
-openstackstatus- NOTICE: Gerrit is being restarted for a configuration change, it will be briefly offline.17:10
*** jistr|sick is now known as jistr17:14
*** e0ne has quit IRC17:18
arne_wiebalcketingof: 2 threads ~60 secs17:19
arne_wiebalck: etingof: 2 threads ~120 secs17:19
arne_wiebalcketingof: sorry, 1 threads ~120 secs17:20
arne_wiebalcketingof: scales nicely at the beginning, then hits 42 secs17:21
etingofarne_wiebalck, so current suspects are 1) a subset of slow nodes and 2) exclusively locked nodes17:22
arne_wiebalcketingof: the ipmitool runtime output confirms we have some nodes which need ~5 secs to get their power state17:23
arne_wiebalcketingof: I can try that by hand to confirm it’s real17:23
dtantsurI can easily believe that. I've seen rare cases when it took more than a minute..17:23
etingofarne_wiebalck, will these nodes take up 42 secs combined across 3-4 threads?17:23
arne_wiebalcketingof: well, the total of the ipmi calls is ~200 secs17:24
arne_wiebalcketingof: so, the right combination will, yes17:24
etingofarne_wiebalck, that does not explain why 32 workers do not beat 42 secs17:25
arne_wiebalcketingof: agreed, this should mix up the distribution17:25
dtantsurarne_wiebalck: how much CPU is ironic-conductor using during power sync?17:26
dtantsurmaybe we're shelving there?17:26
arne_wiebalckdtantsur: looks pretty busy indeed17:28
dtantsurwe may be just hitting the limitation of a single process..17:28
arne_wiebalckdtantsur: so with 1 thread we should see a significantly lower load …17:29
dtantsurnot sure, it's one OS thread anyway17:34
arne_wiebalckhmm, so picking one “slow” node and running ipmitool against it to check the power status, it returns in <<1 sec17:34
dtantsurwith more green threads we spend less time just spinning and waiting, but the CPU load should be comparable?17:35
*** sdake has quit IRC17:35
arne_wiebalckrunning ipmitool multiple times, the runtime has basically two values: ~0secs and ~5secs, seems reproducible17:40
arne_wiebalckthere are two levels17:40
arne_wiebalckthe 5 secs comes from the ’-N 5’ the code uses17:41
etingofarne_wiebalck, may be try enabling debugging in ipmitool? looks like a timeout and retry?17:41
etingofor change -N to see if it reflects runtime17:42
arne_wiebalcketingof: it does17:43
arne_wiebalcketingof: with “-N 10”, it sometimes takes now 10secs17:43
arne_wiebalcketingof: still comes back ok17:43
etingofarne_wiebalck, how about hacking down min_command_interval in ironic?17:45
etingofthat's in ipmi config17:46
* arne_wiebalck checking …17:46
arne_wiebalcketingof: min_command_interval is now 1 sec, workers=8 … let’s see17:48
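For reference, the knob being lowered lives in the [ipmi] section and feeds the -N retransmission interval that ipmitool is invoked with; a hedged sketch (BMC address, credentials and retry count illustrative):

    [ipmi]
    min_command_interval = 1
    # roughly what the conductor then runs per node:
    # ipmitool -I lanplus -H <bmc-address> -U <user> -f <password-file> -N 1 -R 3 power status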
* etingof is thrilled17:48
arne_wiebalcklol17:48
arne_wiebalckpower sync is now 34 secs17:50
* arne_wiebalck is disappointed17:50
etingofat least arne_wiebalck did not kill the BMCs17:51
* etingof is trying to cheer arne_wiebalck up17:51
arne_wiebalcketingof: :)17:51
dtantsurrloo: thanks for review! I'll think about a tempest test (we don't have API tests for configdrive yet)17:51
rpittaugood night o/17:52
etingofarne_wiebalck, did you try ipmitool -N 1 by hand?17:52
arne_wiebalcketingof: yes17:52
rloodtantsur: ok. i am fine with follow up to that PR, just wanted your comments first. so let me know.17:52
*** rpittau is now known as rpittau|afk17:52
arne_wiebalcketingof: wait, no17:52
arne_wiebalcketingof: I tried -N 1017:52
* TheJulia_sick needs lots of coffeee17:52
etingofarne_wiebalck, try -N 117:52
etingofipmitool is a smart beast17:53
arne_wiebalcketingof: gives a plateau at 1 sec17:53
arne_wiebalcketingof: -N clearly influences the total runtime of ipmitool17:53
*** gyee has joined #openstack-ironic17:54
arne_wiebalcketingof: but not so much the total time to power sync17:54
etingofarne_wiebalck, how about trying -N 1 with 1 and 2 workers again?17:54
arne_wiebalcketingof: trying …17:55
etingofthe desire for coffee must be a sign of recovery, TheJulia_sick17:56
*** S4ren has quit IRC17:56
TheJulia_sickI am feeling decent enough that I did put on clothing that makes me look like I stepped out of the 1950s again17:57
TheJulia_sickalso... everything is in the laundry17:57
openstackgerritIury Gregory Melo Ferreira proposed openstack/python-ironicclient master: Move to zuulv3  https://review.openstack.org/63301017:57
arne_wiebalcketingof: min_command_interval=1 and workers=2 gives 53 secs17:58
arne_wiebalcketingof: compared to 60 before17:58
* arne_wiebalck needs to stop for today17:58
etingofarne_wiebalck, does it mean that the bottleneck is somewhere else?17:58
arne_wiebalcketingof: I think so17:59
arne_wiebalcketingof: the main contributor is sth else17:59
etingoflets sleep on it17:59
arne_wiebalcketingof: +117:59
etingofI will be at the university tomorrow in the morning, back by noon17:59
arne_wiebalcketingof: ok, thx for your help today!18:00
etingofthank you, arne_wiebalck!18:00
* etingof is still trying to imagine the fashion of fifties18:01
openstackgerritDmitry Tantsur proposed openstack/ironic master: Allow building configdrive from JSON in the API  https://review.openstack.org/63905018:08
dtantsurrloo: updated, thanks ^^^18:08
rloodtantsur: thx18:09
dtantsurugh, I think the huawei CI went insane: https://review.openstack.org/#/c/639050/18:10
patchbotpatch 639050 - ironic - Allow building configdrive from JSON in the API - 9 patch sets18:10
TheJulia_sickdtantsur: that looks like there are cogs and some smoke coming out of that CI18:12
dtantsurI wonder if we can make it stop somehow without waiting for their morning..18:13
TheJulia_sicketingof: I have to take a new bio picture in the next day or two, I'll share when I do18:13
TheJulia_sickTheir morning is in like 8 hours...18:13
openstackgerritDmitry Tantsur proposed openstack/ironic master: Allow building configdrive from JSON in the API  https://review.openstack.org/63905018:15
dtantsurrloo: mostly dummy update to try stop the CI madness ^^18:15
rloodtantsur: ok18:15
TheJulia_sick45 minutes until my next meeting \o/18:15
dtantsuroh my, it became worse now..18:16
dtantsurTheJulia_sick: I'm asking an intervention from infra18:16
TheJulia_sickdtantsur: ++18:16
* TheJulia_sick is going to have to lay down in a little bit18:19
*** e0ne has joined #openstack-ironic18:19
dtantsur++18:19
* etingof would consider trading a meeting for a day-off18:20
TheJulia_sickYeah, after the meeting I'm likely going to take a nap18:23
*** amoralej is now known as amoralej|off18:28
*** Chaserjim has quit IRC18:30
etingof++18:31
*** baha has joined #openstack-ironic18:33
openstackgerritJulia Kreger proposed openstack/ironic master: fast tracked deployment support  https://review.openstack.org/63599618:35
openstackgerritJulia Kreger proposed openstack/ironic master: Add fast-track testing  https://review.openstack.org/64110418:35
TheJulia_sickdtantsur: ^^ split the scenario test apart so the code can be merged as the test still needs some work18:35
TheJulia_sickand... it doesn't make sense to require the test to merge before the feature18:35
dtantsuryeah :)18:36
*** andrein has joined #openstack-ironic18:56
*** kanikagupta has joined #openstack-ironic19:15
*** e0ne has quit IRC19:17
*** dtantsur is now known as dtantsur|afk19:31
dtantsur|afkg'night19:31
TheJulia_sicksleep sounds good19:31
*** sdake has joined #openstack-ironic19:33
*** e0ne has joined #openstack-ironic19:37
larsksWhat is responsible for populating iscsi targets? After restarting the controller, a host configured for boot-from-volume is ACTIVE, but there are no iscsi targets defined. I was hoping that 'openstack server stop' followed by 'openstack server start' would re-create the targets, but no luck.19:53
*** sdake has quit IRC20:17
*** sdake has joined #openstack-ironic20:18
*** whoami-rajat has quit IRC20:22
*** kanikagupta has quit IRC20:25
TheJulia_sicklarsks: no iscsi targets defined where?20:35
TheJulia_sicklarsks: power off/power on should cause ironic to attempt to update what it has on file from cinder.20:36
*** andrein has quit IRC20:52
*** andrein has joined #openstack-ironic20:53
*** anupn has joined #openstack-ironic20:56
*** pcaruana has quit IRC21:10
*** anupn has quit IRC21:13
*** e0ne has quit IRC21:13
*** dsneddon has quit IRC21:18
*** jcoufal has quit IRC21:25
*** e0ne has joined #openstack-ironic21:26
*** e0ne has quit IRC21:30
*** mjturek has quit IRC21:43
*** baha has quit IRC21:43
larsksTheJulia_sick: in answer to the first question: in the kernel. The controller is not offering any iscsi targets. Starting and stopping the server in Nova does not restore the targets.21:57
larsksDestroying and re-creating the server seems to work.21:57
*** anupn has joined #openstack-ironic22:01
*** priteau has quit IRC22:01
*** dougsz has quit IRC22:09
*** MattMan_1 has quit IRC22:11
*** MattMan_1 has joined #openstack-ironic22:11
openstackgerritLin Yang proposed openstack/sushy master: Fix wrong default JsonDataReader() argument  https://review.openstack.org/64114622:26
*** sdake has quit IRC22:28
*** mmethot has quit IRC22:34
*** sdake has joined #openstack-ironic22:37
*** andrein has quit IRC22:40
*** sdake has quit IRC22:45
*** sdake has joined #openstack-ironic22:50
*** dsneddon has joined #openstack-ironic22:51
*** munimeha1 has quit IRC22:52
*** dsneddon has quit IRC22:56
*** anupn has quit IRC22:58
*** sdake has quit IRC23:02
*** dsneddon has joined #openstack-ironic23:10
*** hwoarang has quit IRC23:14
*** dsneddon has quit IRC23:15
*** hwoarang has joined #openstack-ironic23:18
*** dsneddon has joined #openstack-ironic23:25
*** sdake has joined #openstack-ironic23:33
*** derekh has quit IRC23:36
