Friday, 2019-01-18

*** dsneddon has quit IRC00:01
*** dsneddon has joined #openstack-ironic00:12
*** rh-jelabarre has joined #openstack-ironic00:13
NobodyCamanyone ever seen a case where a node pxe boots the inspection image and then loses its ip when dhcp-all-interfaces runs from within the image?00:20
*** hwoarang has quit IRC00:22
*** hwoarang has joined #openstack-ironic00:23
*** TxGirlGeek has quit IRC00:29
*** dustinc has quit IRC00:32
*** ijw has quit IRC00:36
*** sthussey has quit IRC00:43
*** fragatina has quit IRC00:44
*** betherly has joined #openstack-ironic00:45
*** JayF has quit IRC00:47
*** fragatina has joined #openstack-ironic00:48
*** fragatina has quit IRC00:49
*** betherly has quit IRC00:50
*** ijw has joined #openstack-ironic00:52
*** ijw has quit IRC00:56
*** fragatina has joined #openstack-ironic00:57
*** fragatina has quit IRC01:01
*** fragatina has joined #openstack-ironic01:12
*** fragatina has quit IRC01:17
*** zenpac has quit IRC01:26
TheJuliaNobodyCam: sounds like they are firing off on different interfaces01:32
*** mmethot_ has joined #openstack-ironic01:51
*** jrist- has joined #openstack-ironic01:53
*** vabada_ has joined #openstack-ironic01:55
*** lifeless_ has joined #openstack-ironic01:55
*** rloo has quit IRC01:56
*** tridde has joined #openstack-ironic01:57
*** mmethot has quit IRC02:00
*** lifeless has quit IRC02:00
*** zufar_ has quit IRC02:00
*** larsks has quit IRC02:00
*** vabada has quit IRC02:00
*** iurygregory has quit IRC02:00
*** trident has quit IRC02:00
*** mgoddard has quit IRC02:00
*** jrist has quit IRC02:00
*** larsks has joined #openstack-ironic02:00
*** mgoddard has joined #openstack-ironic02:00
*** openstackgerrit has quit IRC02:02
*** dims has quit IRC02:02
*** dims has joined #openstack-ironic02:05
*** betherly has joined #openstack-ironic02:17
*** fragatina has joined #openstack-ironic02:17
*** _fragatina_ has joined #openstack-ironic02:17
*** _fragatina_ has quit IRC02:19
*** _fragatina_ has joined #openstack-ironic02:19
*** zufar has joined #openstack-ironic02:20
*** _fragatina_ has quit IRC02:21
*** fragatina has quit IRC02:21
*** betherly has quit IRC02:21
*** Soopaman has joined #openstack-ironic02:32
*** hamzy has joined #openstack-ironic02:33
*** hwoarang has quit IRC02:40
*** hwoarang has joined #openstack-ironic02:41
*** betherly has joined #openstack-ironic02:48
*** betherly has quit IRC02:52
*** Soopaman has quit IRC03:06
*** tridde is now known as trident03:06
*** betherly has joined #openstack-ironic03:08
*** betherly has quit IRC03:13
*** dustinc has joined #openstack-ironic03:15
*** dustinc has quit IRC03:20
*** Soopaman has joined #openstack-ironic03:20
*** betherly has joined #openstack-ironic03:28
*** rh-jelabarre has quit IRC03:33
*** betherly has quit IRC03:34
*** TxGirlGeek has joined #openstack-ironic03:35
*** hwoarang has quit IRC03:39
*** hwoarang has joined #openstack-ironic03:39
*** betherly has joined #openstack-ironic03:49
zufarhi, i want to ask something: if my baremetal node is using virtualbmc, which driver should i use? ipmi or pxe_ipmitool?03:50
zufarusing PXE or iPXE?03:50
*** betherly has quit IRC03:53
*** dsneddon has quit IRC03:55
*** dsneddon has joined #openstack-ironic04:05
*** betherly has joined #openstack-ironic04:09
*** dsneddon has quit IRC04:10
*** betherly has quit IRC04:14
*** hwoarang has quit IRC04:16
*** hwoarang has joined #openstack-ironic04:18
*** dsneddon has joined #openstack-ironic04:19
*** JayF has joined #openstack-ironic04:23
*** dsneddon has quit IRC04:24
*** betherly has joined #openstack-ironic04:30
*** betherly has quit IRC04:35
*** Bhujay has joined #openstack-ironic04:40
*** Bhujay has quit IRC04:41
*** Bhujay has joined #openstack-ironic04:41
*** Soopaman has quit IRC04:45
*** hwoarang has quit IRC04:54
*** hwoarang has joined #openstack-ironic04:55
*** dsneddon has joined #openstack-ironic04:57
zufaranyone can help me with this issue on Ironic? https://storyboard.openstack.org/#!/story/200481405:01
*** dsneddon has quit IRC05:03
*** dsneddon has joined #openstack-ironic05:18
*** dsneddon has quit IRC05:23
*** dnuka has joined #openstack-ironic05:27
dnukagood morning o/05:28
*** hwoarang has quit IRC05:29
*** hwoarang has joined #openstack-ironic05:29
dnukaHi zufar :) you may need to talk to dtantsur|afk or etingof, but please note that both are in Central European time.05:32
zufarhi dnuka, thank you for suggestion05:37
zufari will standby in here haha05:37
dnuka:)05:37
*** TxGirlGeek has quit IRC05:46
*** dsneddon has joined #openstack-ironic05:56
*** dsneddon has quit IRC06:00
*** TxGirlGeek has joined #openstack-ironic06:09
*** TxGirlGeek has quit IRC06:18
*** dsneddon has joined #openstack-ironic06:33
*** hwoarang has quit IRC06:36
*** hwoarang has joined #openstack-ironic06:37
*** dsneddon has quit IRC06:38
*** MattMan_1 has quit IRC06:59
*** MattMan_1 has joined #openstack-ironic07:00
*** dsneddon has joined #openstack-ironic07:05
*** dsneddon has quit IRC07:10
*** dnuka is now known as dnuka|brb07:11
*** dsneddon has joined #openstack-ironic07:20
*** pcaruana has joined #openstack-ironic07:25
*** dsneddon has quit IRC07:25
*** dsneddon has joined #openstack-ironic07:31
arne_wiebalckgood morning, ironic07:36
*** dsneddon has quit IRC07:41
*** arne_wiebalck has quit IRC07:43
*** arne_wiebalck has joined #openstack-ironic07:43
*** arne_wiebalck_ has joined #openstack-ironic07:44
*** iurygregory has joined #openstack-ironic07:48
*** arne_wiebalck_ has quit IRC07:48
iurygregorymorning everyone o/07:51
*** arne_wiebalck_ has joined #openstack-ironic07:51
arne_wiebalckiurygregory: good morning! (… to Brazil? seems early)07:53
iurygregoryarne_wiebalck, i moved to Brno =)07:54
iurygregoryi've been living here since sep last year =)07:54
*** pcaruana has quit IRC07:55
*** pcaruana has joined #openstack-ironic07:55
arne_wiebalckiurygregory: ah, that makes more sense :)07:55
arne_wiebalckiurygregory: same timezone then07:56
iurygregoryarne_wiebalck, yep =D07:56
*** rpittau has joined #openstack-ironic07:59
*** dnuka|brb is now known as dnuka08:00
rpittaugood morning ironic! o/08:00
dnukagood morning arne_wiebalck , iurygregory :)08:00
iurygregorymorning rpittau dnuka o/08:00
dnukagood morning rpittau o/08:00
rpittauhi iurygregory dnuka :D08:00
arne_wiebalckgood morning rpittau dnuka o/08:00
rpittauTGIF \o/08:01
rpittauarne_wiebalck, good morning :)08:01
iurygregory\o/ TGIF \o/08:01
dnukasearching for the "TGIF" :)08:02
* rpittau feels like this week has gone very fast08:02
* iurygregory thinks the same08:03
dnukaoh! it's, Thank God It's Friday \o/08:03
rpittauyep :D08:04
dnuka:D08:04
*** jtomasek has joined #openstack-ironic08:06
rpittauthere will be an upgrade to review.openstack.org today, exciting! :D08:11
dnukarpittau, wow! really..08:12
rpittauit's just a minor version, but still08:13
dnuka:)08:13
dnukarpittau is there a place(link) that I could read about it :)08:14
*** dsneddon has joined #openstack-ironic08:15
rpittaudnuka, there was an announcement in the openstack-discuss mailing list08:18
dnukarpittau, thank you. I'll look for it :)08:19
rpittaudnuka, http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001833.html08:19
dnukarpittau, thanks :)08:20
*** dsneddon has quit IRC08:21
*** dsneddon has joined #openstack-ironic08:40
*** S4ren has joined #openstack-ironic08:45
*** dsneddon has quit IRC08:45
*** e0ne has joined #openstack-ironic08:55
*** dsneddon has joined #openstack-ironic09:10
*** openstackgerrit has joined #openstack-ironic09:11
openstackgerritparesh sao proposed openstack/ironic master: [docs] OOB RAID implementation for ilo5 based HPE Proliant servers  https://review.openstack.org/63145809:11
*** dsneddon has quit IRC09:26
etingofgood morning everyone o/09:30
etingofhjensas, so I played with last night's vbmc problem, but could not reproduce it in a reasonable way09:31
rpittauhi etingof :)09:31
dnukaetingof, good morning09:31
etingofI thought about making vbmc ignore that file access error, but after giving it some thought I am not sure that'd be a good thing09:32
etingofespecially since I can't get hold of the original failure in my env09:32
*** derekh has joined #openstack-ironic09:34
*** dtantsur|afk is now known as dtantsur09:35
zufarhi etingof09:36
dtantsurianw: hey, sorry, I think https://review.openstack.org/#/c/631286/ should fix that. And this should serve us a reminder to get a DIB job.. do you folks have some lightweight job with IPA for us to run?09:37
patchbotpatch 631286 - ironic-python-agent - Fix a regression in generate_upper_constraints.sh (MERGED) - 2 patch sets09:37
dnukagood morning dtantsur o/09:37
zufaranyone know why my baremetal cannot download the item from swift? it gets an input/output error. all of my configuration is written up in here https://storyboard.openstack.org/#!/story/200481409:38
dtantsurmorning everyone09:40
iurygregorymorning dtantsur09:40
iurygregoryzufar, wich OS you are using?09:41
zufarhi dtantsur09:41
rpittauhi dtantsur :)09:41
zufardtantsur, i am using centos 7.5 for building openstack.09:41
zufarfor the user image, i am using fedora (built with diskimage-builder) because of trouble building ubuntu (grub2 error)09:43
arne_wiebalckmorning dtantsur09:43
zufarfor deploy image, i am using core os prebuild from the website09:43
zufariurygregory, the OS used for openstack? i am using centos 7.509:44
dtantsurzufar: you need to check what is wrong with your iPXE configuration. Try fetching images locally with cURL, try tcpdump on HTTP port. It's hard to debug remotely from the generic bits of information you provide.09:44
iurygregoryjust saw =)09:44
zufardtantsur, my swift is working well with a dummy container. i tried to fetch the boot.ipxe over http but cannot because of the swift temporary url.09:45
dtantsurzufar: this has absolutely nothing to do with swift09:48
dtantsurthese are static files in your /httpboot09:48
dtantsur(well, some of the files are generated by ironic, but they're still just static files served by httpd/whatever)09:49
zufardtantsur, I think access is only given via swift. I am following this tutorial. https://docs.openstack.org/ironic/pike/install/configure-glance-swift.html09:51
zufari will try to curl or wget into swift. thank you.09:51
dtantsurzufar: swift is only used to serve the final (user) image. The boot.ipxe file is just a file on an HTTP server, it is absolutely unrelated to swift.09:52
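To make the split concrete: the user-image URL that swift serves in this setup is a time-limited, signed TempURL, which is why it cannot simply be fetched without the right signature. A minimal sketch of the TempURL signing scheme (the path and key below are hypothetical placeholders):

```python
import hmac
import time
from hashlib import sha1

def make_temp_url(path, key, method="GET", ttl=3600):
    """Build a Swift TempURL query string for *path*.

    *path* is the object path, e.g. /v1/AUTH_acct/glance/<image-uuid>;
    *key* is the account's X-Account-Meta-Temp-URL-Key.
    """
    expires = int(time.time()) + ttl
    body = "%s\n%d\n%s" % (method, expires, path)
    # The signature is an HMAC-SHA1 over "METHOD\nexpires\npath".
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%d" % (path, sig, expires)

# Placeholder values for illustration only.
url = make_temp_url("/v1/AUTH_demo/glance/my-image", "secret-key")
print(url)
```

Anyone holding the URL can download the object until `temp_url_expires` passes; without the account key, the signature cannot be forged.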
openstackgerritDmitry Tantsur proposed openstack/ironic-inspector stable/rocky: Prevent abnormal timeout values from breaking sync with ironic  https://review.openstack.org/63175309:53
*** dsneddon has joined #openstack-ironic09:53
openstackgerritDmitry Tantsur proposed openstack/ironic-inspector stable/queens: Prevent abnormal timeout values from breaking sync with ironic  https://review.openstack.org/63175609:56
zufardtantsur: I see, thank you, got it.10:06
*** dtantsur is now known as dtantsur|brb10:08
zufarhm, i tried to see what process is running on 8080, it's swift. http://paste.opensuse.org/view//5671184910:09
*** dnuka has quit IRC10:13
rpittauzufar, that's absolutely normal, 8080 is swift proxy default port10:13
openstackgerritIury Gregory Melo Ferreira proposed openstack/ironic-python-agent master: Moving publish jobs to zuulv3  https://review.openstack.org/63071310:17
zufarrpittau: dtantsur said that boot.ipxe is served by an HTTP server (not swift). my baremetal is getting the boot.ipxe on port 8080, so i checked what service is running on that port, and it's swift10:18
zufardo I need to change the `http_url` under the [deploy] section in ironic.conf to another port?10:19
rpittauzufar, you should check your ipxe configuration, you need to assign a different port10:40
*** Bhujay has quit IRC10:41
zufarrpittau: did you mean this? https://docs.openstack.org/ironic/pike/install/configure-pxe.html#http-server10:41
zufarin the docs, the port is 8080. i have tried changing it to 8088, but when i check with netstat it is not listening on 8088.10:42
rpittauwhat's the port assigned to the ipxe vhost ?10:43
*** S4ren has quit IRC10:43
*** S4ren has joined #openstack-ironic10:43
zufarrpittau: ipxe vhost? sorry i don't get it.10:44
rpittauzufar, did you configure your httpd server to provide access to /httpboot ?10:48
rpittauif you're keeping the ipxe files under /httpboot ofc10:49
zufarrpittau: until now, not yet. should i create an additional vhost for this? because i was just following the ironic queens docs.10:49
rpittauzufar, yes, ideally you want to create an additional vhost, exposing a port that is not used by another service, e.g. 808810:52
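A hypothetical httpd vhost along those lines (the path and port are assumptions for illustration; the port must match `http_url` under `[deploy]` in ironic.conf):

```apache
# Serve ironic's iPXE files from /httpboot on port 8088.
Listen 8088
<VirtualHost *:8088>
    DocumentRoot /httpboot
    <Directory /httpboot>
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>
</VirtualHost>
```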
*** dsneddon has quit IRC10:57
*** S4ren has quit IRC10:58
*** Bhujay has joined #openstack-ironic10:59
zufarrpittau: done, now i can curl the boot.ipxe. thank you11:04
rpittauzufar, yw11:04
zufarhi, i get another problem with dhcp. the error is `setting client-ip does not make sense for 'dhcp'` https://ibb.co/5LQCwCQ11:14
zufardid the boot.ipxe fail the setup?11:14
*** odyssey4me has joined #openstack-ironic11:30
*** dsneddon has joined #openstack-ironic11:31
* TheJulia raises an eyebrow11:34
TheJuliazufar: your IPA image is too new compared to your ironic deployment. You have two options: update your ipxe template that ironic renders... OR use a prior version's IPA image.11:35
TheJulia(prior version's image being hopefully old enough)11:35
*** dsneddon has quit IRC11:40
openstackgerritMerged openstack/networking-baremetal master: Break out ironic client from neutron agent  https://review.openstack.org/63104411:48
*** dsneddon has joined #openstack-ironic12:00
*** dtantsur|brb is now known as dtantsur12:02
dtantsurgood news, TheJulia, everyone. we got pre-built IPA images with ironic-lib 2.16.1 this morning. I hope it will stabilize the gate a bit.12:03
dtantsurwe need to collect new statistics, so let's recheck the hell out of everything :)12:05
*** dsneddon has quit IRC12:05
*** robbbe has joined #openstack-ironic12:06
openstackgerritDmitry Tantsur proposed openstack/ironic-lib stable/queens: Add retry attempts for the partprobe command  https://review.openstack.org/63178712:26
openstackgerritDmitry Tantsur proposed openstack/ironic-lib stable/queens: Run sync and partprobe after adding a configdrive partition  https://review.openstack.org/63018112:27
*** e0ne has quit IRC12:33
*** dsneddon has joined #openstack-ironic12:34
*** dsneddon has quit IRC12:38
*** e0ne has joined #openstack-ironic12:39
*** e0ne has quit IRC12:40
*** e0ne has joined #openstack-ironic12:40
zufarTheJulia: IPA image? you mean the deploy images, right?12:40
zufaror User Images?12:40
TheJuliazufar: deploy images12:40
TheJuliadtantsur:  \o/ thanks!12:41
*** TheJulia is now known as needssleep12:42
needssleepEngage casual nick Friday!12:42
*** rpittau has quit IRC12:46
zufarTheJulia: thank you, I am downloading from the docs, https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html#deploy-ramdisk12:46
zufari will downgrade to the queens version. thank you12:48
*** dtantsur is now known as creepy_owlet12:50
* creepy_owlet follows the lead of our sleepless PTL12:50
*** dsneddon has joined #openstack-ironic12:55
*** dsneddon has quit IRC13:00
ianwdtantsur: yep we could make something that only does IPA.  could you file a bug and send it to me so i don't forget, but i can do it on monday :)13:08
w14161_1Anyone know: if we have 1000 machines to provision, for ironic inventory info like the port id, mac address, and switch port id of each node, is the only way to input them one by one manually for all 1000 nodes? Any automatic way to do this?13:09
*** rh-jelabarre has joined #openstack-ironic13:11
*** Bhujay has quit IRC13:16
etingofw14161_1, maybe this would be useful? -- https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/node_discovery.html13:18
* etingof has also heard of the xcat project which does node discovery (among other things) -- https://xcat-docs.readthedocs.io/en/stable/13:20
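Besides discovery, plain enrollment can also be scripted: `openstack baremetal create` accepts a YAML/JSON file describing nodes and their ports, so the file can be generated from an existing inventory rather than entered by hand. A hypothetical minimal example (names, addresses, and credentials are placeholders):

```yaml
# nodes.yaml -- enroll with: openstack baremetal create nodes.yaml
nodes:
  - name: node-0001
    driver: ipmi
    driver_info:
      ipmi_address: 10.0.0.1
      ipmi_username: admin
      ipmi_password: secret
    ports:
      - address: 52:54:00:aa:bb:01
        local_link_connection:
          switch_id: 00:1e:14:aa:bb:cc   # switch chassis MAC
          port_id: Ethernet1/1           # switch port identifier
```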
creepy_owletianw: okie. well, if it's not too hard. I guess long term we'll need a real job that does some deployment / introspection.13:20
creepy_owletdo you use LP or storyboard?13:20
*** rpittau has joined #openstack-ironic13:25
openstackgerritDerek Higgins proposed openstack/ironic master: Replace use of Q_USE_PROVIDERNET_FOR_PUBLIC  https://review.openstack.org/62114613:26
*** dsneddon has joined #openstack-ironic13:27
*** trown|outtypewww is now known as trown13:35
*** dsneddon has quit IRC13:37
w14161_1etingof, thanks, I will check this; https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/node_discovery.html seems to work automatically, just to confirm whether I can find the port id, mac, and switch port id fields, not sure OpenStack already has built-in support for those fields13:46
iurygregoryderekh, you around?13:46
derekhiurygregory: yup13:47
iurygregorydo you know if we need to keep the -{node} in JOBNAME=ironic-python-agent-buildimage-{image-type}-{node} ?13:47
iurygregoryalso i think we can have a single run.yaml since we have the case condition in shell, wdyt?13:48
*** rloo has joined #openstack-ironic13:48
creepy_owletw14161_1: these can be discovered by LLDP in case of a real switch13:48
hjensasetingof: I missed the ping earlier. I have a workaround with sudo, I can open a story with more details, and reproduce steps.13:51
derekhiurygregory: not sure about the job name, I guess it could be removed,13:51
etingofhjensas, hey, would be a way to go if you feel like it's the problem worth fixing13:51
creepy_owletI think {node} is no longer required13:51
hjensasetingof: I'm not convinced vbmc is to blame, it may also be something in devstack scripts.13:51
iurygregorycreepy_owlet, ty!13:52
iurygregoryany objections in use just one run.yaml? =)13:52
* etingof finds devstack generally easier to blame ;)13:52
derekhiurygregory: yup, the case condition looks like it would work13:52
creepy_owletiurygregory: I'd actually encourage having only one13:52
iurygregorydoing now \o/13:52
creepy_owletside note: also check where bash can be replaced with actual ansible13:52
* iurygregory not expert with ansible =(13:52
derekhiurygregory: me neither13:53
derekhiurygregory: also there is another line of diff between those files http://paste.openstack.org/show/742950/13:53
creepy_owletyou can check metalsmith's jobs for tons of inspiration :) but that's just a wish, not blocking the patch.13:53
derekhiurygregory: might need to be handled somehow13:53
* iurygregory start to ping friends from UFCG that knows more about ansible =D 13:54
*** Bhujay has joined #openstack-ironic13:54
iurygregoryderekh, yeah i checked the diff; since it is in the tinyipa, i will keep the part of the run they use13:54
derekhack13:55
iurygregorylet's see how it goes with the normal changes13:55
*** arne_wiebalck_ has quit IRC13:59
*** arne_wiebalck_ has joined #openstack-ironic14:00
w14161_1creepy_owlet, really useful information! I already got the mac address and port id at https://docs.openstack.org/ironic-inspector/latest/_modules/ironic_inspector/common/lldp_tlvs.html#mapping_for_switch14:00
* creepy_owlet has to jump on a meeting14:00
*** dsneddon has joined #openstack-ironic14:05
zufarhas anyone seen an error like `ipmi_si: unable to find any system interface(s)` when trying to clean the node?14:08
openstackgerritIury Gregory Melo Ferreira proposed openstack/ironic-python-agent master: Moving publish jobs to zuulv3  https://review.openstack.org/63071314:08
*** dsneddon has quit IRC14:10
creepy_owletzufar: it's not an error, just a warning (expected on e.g. a VM)14:12
zufarcreepy_owlet: okey, i think i need to wait a litle bit more time. thank you14:14
zufarwhen i try to move the node state to `available`, the status is clean_wait, but the baremetal node has already reached the login page.14:19
zufarbaremetal node cli : https://ibb.co/HHF5npY14:20
zufarironic provisioning state : https://ibb.co/hBDr7LT14:20
*** zul has joined #openstack-ironic14:22
zufaris this part of the cleaning process? i think the node reached the timeout. https://access.redhat.com/solutions/334908114:24
*** priteau has joined #openstack-ironic14:25
*** jrist- is now known as jrist14:34
*** jcoufal has joined #openstack-ironic14:37
creepy_owletit's part of cleaning indeed14:40
creepy_owlettry $ openstack baremetal node show <node>14:40
creepy_owletthere may be some hints on what it is going to do14:40
*** dsneddon has joined #openstack-ironic14:42
arne_wiebalck_General question: for backports to stable branches, do you actually need/want reviews?14:43
creepy_owletarne_wiebalck_: yes/yes :)14:44
creepy_owletbut the review approach is a bit different, since most of the changes must have merged to master before14:44
creepy_owletso the main question is "is it appropriate to backport" rather than "is it a good change overall"14:44
arne_wiebalck_creepy_owlet: ok, understood!14:45
arne_wiebalck_creepy_owlet: thx!14:45
creepy_owletarne_wiebalck_: https://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes has some guidance14:46
arne_wiebalck_creepy_owlet: awesome, thx!14:47
*** dsneddon has quit IRC14:47
zufarcreepy_owlet: the provisioning status is now clean_wait. the timeout is 1800s.14:56
*** jtomasek has quit IRC14:56
zufarlast error from `maintenance reason` is `Timeout reached while cleaning the node.`14:56
zufarsorry, the full message is `Timeout reached while cleaning the node. Please check if the ramdisk responsible for the cleaning is running on the node. Failed on step {}.`14:56
creepy_owletI guess you still experience some communication problems between the ramdisk and the node14:56
creepy_owlet* the ramdisk and ironic, sorry14:57
zufarhm, so ramdisk cannot contact the API?14:57
zufari tried adding a dummy host to the `provisioning & cleaning network` and it was able to contact the API.14:57
*** jtomasek has joined #openstack-ironic14:59
*** zul has quit IRC15:00
*** jtomasek has quit IRC15:00
*** livelace has joined #openstack-ironic15:01
*** jtomasek has joined #openstack-ironic15:01
iurygregorycreepy_owlet, do you have any idea how to not rely on the configuration (pxe/pxe enabled) for the online data migration?15:04
*** dsneddon has joined #openstack-ironic15:06
*** mjturek has joined #openstack-ironic15:07
creepy_owletiurygregory: I'm afraid we'll have to rely on it15:07
*** sthussey has joined #openstack-ironic15:07
iurygregoryi don't see many problems if the operator is aware of what he is doing hehe15:08
iurygregorymgoddard, you around?15:08
mgoddardiurygregory: yeah15:09
*** baha has joined #openstack-ironic15:09
mgoddardiurygregory: ipxe?15:09
iurygregoryyup15:09
iurygregorycan you elaborate on the idea where we wouldn't rely on the config? .-.15:10
*** dsneddon has quit IRC15:10
mgoddardiurygregory: yeah. It's a corner case, but it's possible for different conductors to have different values for ipxe_enabled15:11
iurygregoryso in that case the migration for the nodes wouldn't occur (this would be expected I think)15:12
mgoddardiurygregory: the other potential problem is, the environment in which we do the migration might not have all the config (think containers)15:12
mgoddardat least in that case we would only miss some migrations, rather than migrating nodes incorrectly15:13
iurygregorymgoddard, not having the config is strange to me15:13
iurygregorybecause it would assume the Default value no?15:13
mgoddardiurygregory: in a multi-node, containerised system, there isn't necessarily such a thing as 'the config'15:14
iurygregoryhummm15:14
iurygregorygotcha15:14
mgoddardwe might have different hosts, running api and conductor in containers, each with their own config file15:14
mgoddardthen after an upgrade, the deployment tool performs an online migration, possibly using yet another container15:15
mgoddardthe operator might assume that container only needs config necessary to access the DB15:15
iurygregorygot it15:16
mgoddardultimately, the operator isn't going to think about this anywhere near as much as you are - they won't really want to have to make a decision, so the deployment tool will just run the migrations automatically15:16
mgoddardso it needs to 'just work'15:17
iurygregoryyeah15:17
mgoddardthis is all just corner cases, but I think it's worth keeping in mind15:18
* iurygregory thinking "any way we could check the conductor for the nodes to make sure it will only apply to specific nodes?"15:18
zufaranyone know how to log into the baremetal node at `login:`? what username and password do we use?15:18
mgoddardin reality I'd expect most deployments to have the same ipxe_enabled everywhere15:18
zufarwhen the `login:` prompt appears on the console?15:19
iurygregorymgoddard, do you think is worth see in the meeting on monday people feedback?15:19
iurygregoryso we can try to get a solution15:19
arne_wiebalckzufar: that depends on your image, I’d say15:20
iurygregoryzufar, if is cirros the login is cirros15:21
arne_wiebalckzufar: we add our ssh keys during the build process15:21
mgoddardiurygregory: yes, can you add it to the etherpad?15:21
iurygregorymgoddard, doing now =)15:21
mgoddardiurygregory: cool15:21
*** arne_wiebalck_ has quit IRC15:21
arne_wiebalckzufar: I was referring to the deploy ramdisk image15:22
zufararne_wiebalck, iurygregory: it's not creating the instance, I am stuck cleaning the node, and get this screen on the baremetal. https://ibb.co/HHF5npY15:23
zufarwait a minute. the ramdisk image type, right?15:23
arne_wiebalckzufar: yes15:24
zufarI think i use fedora. fedora disables password login, right?15:25
*** baha has quit IRC15:26
zufarI will try disabling automated_clean, following this tutorial https://access.redhat.com/solutions/334908115:26
zufarthe current state is `clean wait`, and i have waited 1 hour. should i wait more?15:27
arne_wiebalckzufar: You’ll need an image with the IPA to actually do the cleaning.15:27
zufararne_wiebalck: i created the Deploy Image with this command, https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html#deploy-ramdisk15:29
zufarand added it to the baremetal node via the deploy_kernel and deploy_ramdisk options.15:29
arne_wiebalckarne_wiebalck: sounds good15:30
arne_wiebalckzufar: I think the image will use the ssh keys from the build node.15:31
zufararne_wiebalck: do we need to install the openstack-ironic-python-agent package? i only installed the python package `python2-ironic-python-agent` =))15:32
arne_wiebalckzufar: you built your image with the disk-image-builder?15:32
arne_wiebalckzufar: I mean: you used disk-image-create?15:34
zufararne_wiebalck: which image? i built the `User Image` with disk-image-builder, but used a pre-built image for the `Deploy Image`15:34
zufarfollowing this docs, https://docs.openstack.org/ironic/queens/install/configure-glance-images.html15:34
arne_wiebalckzufar: I’m talking about the deploy image.15:34
zufararne_wiebalck: I use pre-built image from CoreOS15:35
zufarqueens version, download from https://tarballs.openstack.org/ironic-python-agent/coreos/files/15:35
creepy_owletmgoddard: we tend to assume that we do have the config when we run online-data-migrations15:35
creepy_owletand having ipxe_enabled inconsistent across your cloud is a recipe for trouble :)15:36
iurygregorycreepy_owlet, i added to the discussion for the mid-cycle =)15:38
arne_wiebalckzufar: ah, ok. I have only used the disk-image-builder so far. I guess you need to have a look at how you get into the coreos deploy image. That will let you see what the IPA is doing and why cleaning gets stuck.15:38
zufararne_wiebalck: I think i forgot to install the openstack-ironic-python-agent.noarch package. I just installed the python dependency (`python2-ironic-python-agent`)15:40
*** mjturek has quit IRC15:42
*** dsneddon has joined #openstack-ironic15:42
arne_wiebalckzufar: I don’t think that’s required, is it? The IPA comes from the git repo you clone in the very beginning, no?15:42
zufararne_wiebalck: I installed the openstack environment with packstack and did a manual install of Ironic.15:43
zufarpackstack (Centos 7.5 queens)15:43
zufarand recorded all installation steps.15:44
NobodyCamGood Morning Ironic'ers15:45
mgoddardcreepy_owlet: perhaps, but you could have different drivers on different conductors and make it work15:46
NobodyCamand15:46
NobodyCamofc15:46
NobodyCamTGIF!15:46
*** dsneddon has quit IRC15:46
iurygregorymorning NobodyCam15:48
NobodyCamGood Morning iurygregory :)15:48
zufararne_wiebalck: do you have an example command to build a deploy image with a custom username and password for login?15:52
rpittauhi NobodyCam :)15:52
NobodyCamhey hey rpittau happy Friday15:52
rpittauhappy Friday :D15:53
NobodyCam:)15:53
*** Bhujay has quit IRC15:58
*** mjturek has joined #openstack-ironic15:59
*** pcaruana has quit IRC16:00
*** iurygregory is now known as iurygregory_mtg16:00
*** dsneddon has joined #openstack-ironic16:00
arne_wiebalckzufar: we use keys16:01
arne_wiebalckzufar: and these are taken from .ssh/authorized_keys when the image is built16:02
arne_wiebalckzufar: so, we basically follow the docs you pointed to earlier, but have the .ssh dir set up as well16:03
NobodyCamzufar: if you're using DIB there is the devuser element: https://docs.openstack.org/diskimage-builder/latest/elements/devuser/README.html16:07
zufari think i will add coreos.autologin. https://docs.openstack.org/ironic-python-agent/ocata/troubleshooting.html#id316:07
zufardoes anyone here have an example configuration of `ironic-python-agent` (/etc/ironic-python-agent/agent.conf)?16:08
zufararne_wiebalck: done, coreos is autologin. how can i check the IPA log?16:15
arne_wiebalckzufar: journalctl --follow _SYSTEMD_UNIT=ironic-python-agent.service16:15
*** dustinc has joined #openstack-ironic16:18
*** e0ne has quit IRC16:22
zufararne_wiebalck: too many error logs. i think the baremetal node cannot connect to the API endpoint. https://ibb.co/tMXFRnb16:23
zufarI think the problem is the firewall and selinux. http://paste.opensuse.org/view//6222376416:24
zufarbecause i installed the Ironic packages manually.16:24
zufararne_wiebalck: thank you very much. i will continue tomorrow. it's midnight here. happy weekend all.16:26
arne_wiebalckzufar: have good weekend!16:27
arne_wiebalckzufar: use telnet or similar to check if the IPA can get to the API16:28
arne_wiebalckzufar: but not now :)16:28
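When telnet isn't available in the ramdisk, a few lines of Python (usually present there) can do the same reachability check; the host and port in the usage comment are placeholders, 6385 being ironic's default API port:

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures alike.
        return False

# Example with a placeholder address: can_reach("192.0.2.10", 6385)
```

A True result only proves TCP connectivity; an SELinux or firewall block like the one discussed above shows up as False.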
*** baha has joined #openstack-ironic16:29
*** iurygregory_mtg is now known as iurygregory16:29
iurygregoryhave a good weekend everyone o/16:30
*** iurygregory has quit IRC16:31
*** mjturek has quit IRC16:31
*** mjturek has joined #openstack-ironic16:33
zufararne_wiebalck: yeah i have tested this. i can curl other endpoints (e.g. glance) but cannot curl the ironic baremetal endpoint16:34
zufarit's blocked by selinux & the firewall.16:34
arne_wiebalckzufar: ah, there you go!16:35
rpittaubye all! have a nice weekend o/16:35
arne_wiebalckzufar: good result to stop for today :)16:36
*** rpittau has quit IRC16:36
openstackgerritDmitry Tantsur proposed openstack/metalsmith master: Clean up the edge cases around states  https://review.openstack.org/63182316:37
arne_wiebalckbye everyone o/16:39
openstackgerritDmitry Tantsur proposed openstack/metalsmith master: Clean up the edge cases around states  https://review.openstack.org/63182316:49
*** TxGirlGeek has joined #openstack-ironic16:54
*** fragatina has joined #openstack-ironic17:02
*** fragatina has quit IRC17:05
*** fragatina has joined #openstack-ironic17:06
*** dsneddon has quit IRC17:07
*** fragatina has quit IRC17:07
*** fragatina has joined #openstack-ironic17:08
*** dsneddon has joined #openstack-ironic17:12
*** robbbe has quit IRC17:14
*** dsneddon has quit IRC17:19
*** jaypipes is now known as leakypipes17:30
openstackgerritMark Goddard proposed openstack/ironic master: WIP: Deploy templates: data model, DB API & objects  https://review.openstack.org/62766317:31
openstackgerritMark Goddard proposed openstack/ironic master: WIP: Deploy templates: API & notifications  https://review.openstack.org/63184517:31
*** creepy_owlet is now known as dtantsur|afk17:38
dtantsur|afkhave a great weekend17:38
openstackgerritJulia Kreger proposed openstack/ironic-python-agent master: Deprecate CoreOS ramdisk support  https://review.openstack.org/63184717:42
mgoddardyou too dtantsur|afk and ironic17:42
*** TxGirlGeek has quit IRC17:50
*** dsneddon has joined #openstack-ironic17:52
*** trown is now known as trown|lunch17:56
*** dsneddon has quit IRC17:57
*** fragatina has quit IRC17:58
*** fragatina has joined #openstack-ironic17:59
*** fragatina has quit IRC18:00
*** derekh has quit IRC18:00
*** fragatina has joined #openstack-ironic18:00
* needssleep requests a teleporter18:04
openstackgerritMerged openstack/ironic master: Allocation API: minor fixes to DB and RPC  https://review.openstack.org/62818718:05
etingofneedssleep, how many ipmitool processes do we run simultaneously for power sync?18:05
needssleepetingof: I just don't remember right now. The week of meetings and now travel headaches has kind of scrambled my brain18:06
etingofneedssleep also needs some rest18:07
needssleepetingof: no rest for the in-flight... now that I'm finally on an aircraft18:07
etingofhave an easy flight!18:07
needssleepheh18:08
needssleepdelayed nearly 3 hours and unable to continue on to my destination today :(18:08
needssleeps/destination/home/18:08
etingofI am reading through the code and it makes me think that it's max one ipmitool instance for power sync running at all times... Maybe I am missing something yet18:09
etingofs/all/any18:09
needssleepetingof: I feel like I had that impression previously, but.. one per bmc I think is what it ends up being18:09
needssleepthen again, brain scrambled and it has been a while18:10
etingofbecause power sync actions are called from a periodic thread sequentially18:10
needssleepsounds like what I thought early on as well but $brains18:10
bfournieneedssleep: ugh, 3 hour delay :-(18:11
*** dsneddon has joined #openstack-ironic18:11
* needssleep upgrades brain to zombie18:11
needssleeps/upgrade/downgrades/18:11
etingofhttps://github.com/openstack/ironic/blob/master/ironic/conductor/manager.py#L167018:11
needssleepthat does seem like it, and it yields between steps18:13
*** fragatina has quit IRC18:13
etingofthis is where we iterate over all nodes for power sync... I suspect that do_sync_power_state() executes one-by-one even though each is non-blocking relative to other I/O tasks18:14
*** fragatina has joined #openstack-ironic18:14
needssleepyeah, but one at a time, sequentially...18:14
* needssleep wonders if anyone has reviewed locking18:14
etingofI am not sure why it yields given that it might yield to the mainloop on I/O down the call stack18:14
* needssleep is afraid to look18:14
etingofso would it make sense to push this code to a couple of threads for greater parallelism?18:15
needssleepetingof: now we're hitting my brain just can't seek those blocks at this time, they are on tapes or something someplace18:15
*** dsneddon has quit IRC18:16
etingofneedssleep needs to rewind the neurons ;)18:16
needssleepheh18:17
etingofjust thinking, if we got to powersync 1K nodes one by one...18:17
etingofall in 60 seconds18:17
needssleepyeah, that would be bad experience18:17
etingofhow it could possibly be done18:17
needssleepit would never complete18:17
needssleepwhich is a whole other conundrum18:18
etingofeven if we had free task executors hanging around...18:18
* needssleep really wants locking and to enable multiple conductors to look at last_updated time in the db18:18
*** priteau has quit IRC18:18
etingofwould it make sense to just run power sync from a couple of green threads?18:19
needssleepIt might be good to find some time to talk to rloo about it, at some point... I know she has looked at the exec stuff before18:19
needssleepENOBRAINS18:19
etingofthat should not hurt locking I think18:19
* etingof could probably draft a patch proposal on that18:19
needssleepi _thought_ the ipmitool exec ends up semi-blocking for reasons()18:20
needssleepbut, I just don't remember18:20
needssleepit would be best to close the laptop... since I'm nearly out of battery18:20
etingofsorry, needssleep!18:20
etingoftake some rest!18:20
rlooneedssleep: etingof: time, i need more time too. i'll book you in for 2020. and i don't recall looking into that stuff. maybe i did? anyway, bye needssleep!18:20
needssleepI feel like saying goodnight18:21
needssleeprloo: you did... a long long time ago, in a galaxy far far away18:21
needssleepanyway, closing laptop18:21
* etingof can't help but wonder how this civilization is still functional given all the IT we depend on nowadays...18:22
*** dsneddon has joined #openstack-ironic18:22
*** fragatina has quit IRC18:23
*** fragatina has joined #openstack-ironic18:23
*** fragatina has quit IRC18:24
*** fragatina has joined #openstack-ironic18:25
*** fragatina has quit IRC18:25
*** fragatina has joined #openstack-ironic18:26
*** fragatina has quit IRC18:26
*** mjturek has quit IRC18:27
*** fragatina has joined #openstack-ironic18:27
*** mjturek has joined #openstack-ironic18:30
*** fragatina has quit IRC18:32
*** fragatina has joined #openstack-ironic18:33
*** mjturek has quit IRC18:35
*** mjturek has joined #openstack-ironic18:37
*** mjturek has quit IRC18:48
*** mjturek has joined #openstack-ironic18:49
*** fragatina has quit IRC18:50
*** fragatina has joined #openstack-ironic18:51
*** fragatina has quit IRC18:52
*** fragatina has joined #openstack-ironic18:53
*** mjturek has quit IRC18:54
*** fragatina has quit IRC18:55
*** fragatina has joined #openstack-ironic18:55
*** mjturek has joined #openstack-ironic18:58
*** mjturek has quit IRC18:59
*** fragatina has quit IRC18:59
*** mjturek has joined #openstack-ironic18:59
*** fragatina has joined #openstack-ironic19:00
*** fragatina has quit IRC19:00
*** fragatina has joined #openstack-ironic19:01
*** ijw has joined #openstack-ironic19:02
*** fragatina has quit IRC19:03
*** fragatina has joined #openstack-ironic19:04
*** fragatina has quit IRC19:06
*** fragatina has joined #openstack-ironic19:06
*** fragatina has quit IRC19:07
*** fragatina has joined #openstack-ironic19:08
*** fragatina has quit IRC19:09
*** fragatina has joined #openstack-ironic19:09
*** fragatina has quit IRC19:10
*** fragatina has joined #openstack-ironic19:11
*** fragatina has quit IRC19:11
*** fragatina has joined #openstack-ironic19:11
*** fragatina has quit IRC19:13
*** fragatina has joined #openstack-ironic19:14
*** fragatina has quit IRC19:15
*** fragatina has joined #openstack-ironic19:15
*** fragatina has quit IRC19:16
*** fragatina has joined #openstack-ironic19:17
*** fragatina has quit IRC19:19
*** fragatina has joined #openstack-ironic19:20
*** fragatina has quit IRC19:20
*** trown|lunch is now known as trown19:21
*** fragatina has joined #openstack-ironic19:21
*** _fragatina_ has joined #openstack-ironic19:22
*** _fragatina_ has quit IRC19:22
*** _fragatina_ has joined #openstack-ironic19:23
*** _fragatina_ has quit IRC19:25
*** _fragatina_ has joined #openstack-ironic19:25
*** fragatina has quit IRC19:26
*** _fragatina_ has quit IRC19:29
*** e0ne has joined #openstack-ironic19:29
*** fragatina has joined #openstack-ironic19:29
openstackgerritIlya Etingof proposed openstack/ironic master: WIP: Parallelize periodic power sync calls  https://review.openstack.org/63187219:31
etingofrloo, needssleep would appreciate your feedback ^19:32
rlooetingof: sorry can't look now/today. working on something urgent right now19:32
*** fragatina has quit IRC19:32
etingofsure, no worries!19:33
*** fragatina has joined #openstack-ironic19:33
*** fragatina has quit IRC19:34
*** fragatina has joined #openstack-ironic19:35
*** fragatina has quit IRC19:49
*** fragatina has joined #openstack-ironic19:50
*** fragatina has quit IRC19:50
*** fragatina has joined #openstack-ironic19:51
*** fragatina has quit IRC19:56
*** fragatina has joined #openstack-ironic19:56
*** fragatina has quit IRC20:05
*** fragatina has joined #openstack-ironic20:06
*** fragatina has quit IRC20:31
*** martin_ken has joined #openstack-ironic20:39
martin_kenhey all, I was just wondering is there any talk previously for an rsd driver for ironic?20:40
*** jcoufal has quit IRC20:48
*** e0ne has quit IRC20:55
*** trown is now known as trown|outtypewww21:47
*** fragatina has joined #openstack-ironic21:49
*** priteau has joined #openstack-ironic21:56
*** jtomasek has quit IRC22:00
*** priteau has quit IRC22:10
*** baha has quit IRC22:19
*** jistr has quit IRC22:32
*** jistr has joined #openstack-ironic22:33
*** mjturek has quit IRC22:34
* etingof is not aware of rsd driver for ironic. though there are rsd client and lib -- https://github.com/openstack/python-rsdclient22:35
*** jistr has quit IRC22:49
*** jistr has joined #openstack-ironic22:50
*** dustinc has quit IRC22:56
*** dustinc has joined #openstack-ironic22:57
*** dustinc has quit IRC23:00

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!