Wednesday, 2018-08-22

*** rloo has quit IRC00:05
*** yolanda has quit IRC00:08
*** gyee has quit IRC00:19
*** harlowja has quit IRC00:23
*** moshele has quit IRC00:25
*** livelace has quit IRC00:35
*** livelace has joined #openstack-ironic00:35
*** phuongnh has joined #openstack-ironic01:19
*** dmellado has quit IRC01:22
*** rcernin_ has joined #openstack-ironic01:23
*** rcernin has quit IRC01:25
openstackgerritTony Breeds proposed openstack/ironic-python-agent master: Add a UUID to the extra-hardware data on ppc64le  https://review.openstack.org/59459101:32
openstackgerritKaifeng Wang proposed openstack/ironic-inspector master: Wrap rpc server into oslo.service  https://review.openstack.org/58475801:41
*** rcernin has joined #openstack-ironic02:00
*** rcernin_ has quit IRC02:03
openstackgerritKaifeng Wang proposed openstack/ironic-inspector master: Wrap Flask into oslo.service  https://review.openstack.org/56182302:23
*** zhangfei has joined #openstack-ironic02:28
*** dmellado has joined #openstack-ironic02:38
*** rpioso is now known as rpioso|afk02:43
*** mmethot has quit IRC02:46
*** edleafe has quit IRC02:46
*** edleafe has joined #openstack-ironic02:47
*** mmethot has joined #openstack-ironic02:49
*** Bhujay has joined #openstack-ironic04:16
*** galaxyblr has joined #openstack-ironic04:16
*** galaxyblr has quit IRC04:38
*** galaxyblr has joined #openstack-ironic04:39
*** trungnv has quit IRC04:45
*** moshele has joined #openstack-ironic05:00
*** moshele has quit IRC05:02
*** sdake has quit IRC05:13
*** sdake has joined #openstack-ironic05:13
*** sdake has quit IRC05:50
*** sdake has joined #openstack-ironic05:51
*** hjensas has quit IRC05:52
openstackgerritKaifeng Wang proposed openstack/ironic-inspector master: Wrap Flask into oslo.service  https://review.openstack.org/56182305:56
*** sdake has quit IRC06:02
*** sdake has joined #openstack-ironic06:04
*** moshele has joined #openstack-ironic06:06
*** adrianc has joined #openstack-ironic06:26
*** gkadam has joined #openstack-ironic06:31
*** pcaruana has joined #openstack-ironic06:41
openstackgerritMerged openstack/ironic stable/pike: Fix iDRAC hardware type does not work with UEFI  https://review.openstack.org/58894206:43
*** tssurya has joined #openstack-ironic06:47
*** hjensas has joined #openstack-ironic06:56
*** amoralej|off is now known as amoralej07:01
*** zhangfei has quit IRC07:03
*** dmellado has quit IRC07:04
*** dmellado has joined #openstack-ironic07:06
*** rcernin has quit IRC07:07
openstackgerritKaifeng Wang proposed openstack/ironic-inspector master: Wrap Flask into oslo.service  https://review.openstack.org/56182307:07
*** dmellado has quit IRC07:12
*** phuongnh has quit IRC07:17
*** dmellado has joined #openstack-ironic07:19
*** dtantsur|afk is now known as dtantsur07:22
dtantsurmorning ironic07:23
etingofdtantsur, \o07:23
openstackgerritlei zhang proposed openstack/bifrost master: Migrate the link of bug report button to storyboard  https://review.openstack.org/59487607:39
*** jtomasek has joined #openstack-ironic07:42
openstackgerritIlya Etingof proposed openstack/sushy master: Add a virtual media resource  https://review.openstack.org/57081007:49
*** adrianc has quit IRC07:50
*** jaganathan has quit IRC07:52
openstackgerritMerged openstack/ironic-inspector-specs master: import zuul job settings from project-config  https://review.openstack.org/59238907:56
*** e0ne has joined #openstack-ironic08:01
*** mgoddard has joined #openstack-ironic08:04
*** w-miller has joined #openstack-ironic08:11
*** amoralej is now known as amoralej|brb08:16
*** pcaruana has quit IRC08:28
*** pcaruana has joined #openstack-ironic08:30
*** dirk has joined #openstack-ironic08:45
*** amoralej|brb is now known as amoralej08:47
*** sambetts_ is now known as sambetts09:00
*** moshele has quit IRC09:05
*** moshele has joined #openstack-ironic09:05
*** amoralej is now known as amoralej|brb09:24
*** moshele has quit IRC09:34
*** dougsz_ has joined #openstack-ironic09:44
*** moshele has joined #openstack-ironic09:45
*** amoralej|brb is now known as amoralej09:56
*** yolanda has joined #openstack-ironic09:56
openstackgerritcheng li proposed openstack/ironic-lib master: Check GPT table with sgdisk instead of partprobe  https://review.openstack.org/59492210:11
openstackgerritcheng li proposed openstack/ironic-lib master: Check GPT table with sgdisk instead of partprobe  https://review.openstack.org/59492210:26
*** jesusaur has quit IRC10:38
*** adrianc has joined #openstack-ironic10:39
*** jesusaur has joined #openstack-ironic10:41
yolandahi... i have more problems testing standalone mode in ironic. Now i'm able to boot the ipa, but it seems to be stuck in the deploying state10:50
yolandai can login into the ipa10:50
yolandaand i see it just stops after finding the node; it finds the root device at /dev/sda10:50
yolandabut after that it does nothing, just heartbeats10:50
yolandawhat could be the problem there?10:51
sambettsyolanda: is the node in maintenance?10:51
yolandasambetts, no, it's stuck in deploying state10:51
yolandain conductor logs, the last i can see is Node 37d158bc-821d-409d-88c0-d0bc9c24f4ea moved to provision state "deploying" from state "wait call-back"; target provision state is "active"10:51
sambettshmmm, the only time I've seen IPA just sit and heartbeat without doing anything is when the node has got maintenance=true. it might be doing something in the background if it's in deploying state; if you're using iscsi deploy then once IPA sets up the iscsi endpoint all the work is done by the conductor, IPA is hands off at that point10:53
yolandai cannot see any logs in the conductor, just moving to wait call-back, then to deploying10:54
yolandaand it remains locked until i restart the conductor service10:55
yolandai can't even see any logs in the conductor saying it tries to fetch the image or anything, although i enabled debugging10:56
sambettshmm how are you viewing the conductor logs? there should be logs touching on all of that10:57
yolandait's a container, with docker logs10:57
yolandaand on the ipa, i enabled ssh access, so i see it directly10:58
sambettsyolanda: kolla container?10:58
yolandasambetts, i am trying helm.. but yep, i think it uses kolla image10:58
yolandapike version, as queens was failing to install10:59
*** slagle has quit IRC11:01
sambettsyolanda: something I found when I was working to get ironic running in kolla-k8s was that the docker logs didn't contain much information; kolla put all the proper logs into a /var/log/kolla/ironic directory, so you had to view them by doing docker exec with a cat /var/log/kolla/ironic/ironic-conductor.log or something like that11:01
sambettsnot sure if its the same with the helm version11:01
yolandalet me check11:02
yolandai need to get more clues11:02
yolandai needed to do something like that on the http container11:02
yolanda /var/log/ironic is empty, but i could configure it11:03
sambettsany other directories? kolla put everything in /var/log/kolla11:03
yolandai don't have a kolla directory there11:03
yolandajust "io.kubernetes.container.logpath": "/var/log/pods/6b0fd3d1-a60e-11e8-8d7a-ecf4bbc0b534/ironic-conductor/0.log",11:04
yolandaand that's quite empty11:04
sambettshmmm yeah might be best to configure ironic.conf to log to a specific location you can look at if it isn't already11:06
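
For reference, pointing the conductor at a dedicated log file is just a couple of standard oslo.log options in ironic.conf; the paths below are illustrative, not taken from yolanda's deployment:

    [DEFAULT]
    # emit debug-level logs and write them somewhere readable from
    # inside the container (paths are only an example)
    debug = True
    log_dir = /var/log/ironic
    log_file = ironic-conductor.log
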
sambettssomething else I found when doing ironic things was that getting iscsi to work inside a container was a bit tricky, and I ended up having to run a privileged iscsid/tgtd container alongside ironic-conductor to make sure the host kernel was set up right to make it work11:08
sambettsbut this was a little while ago11:08
yolandai could try with deploy direct as well11:08
yolandamaybe gives less headache11:08
yolandai was thinking in something like that: permissions, communication11:08
sambettsdirect deploy should work with standalone; you just need to have an http server to serve the images from11:09
yolandai even could reuse the ironic http one, right?11:09
sambettsyolanda: should be able to, that's how non-containerised bifrost does it, it uses the same http server as for ipxe11:10
yolandayep, httpboot11:10
yolandai was thinking on that11:10
yolandamay be easier11:10
yolandai don't have any special requirement for iscsi11:11
sambettsyeah for standalone I think it'll be simpler11:12
yolandathis http server works for sure as it's used to serve the boot files11:12
sambettsyeah, and it takes the workload off the conductor too11:12
yolandatrying11:14
yolandayay!11:18
yolandathx sambetts11:20
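
As a point of reference, switching a standalone node over to direct deploy and reusing the existing HTTP server can be done with python-ironicclient roughly like this; the endpoint, image URL, checksum and node UUID are placeholders, and the node is assumed to already use an agent-based deploy driver:

    from ironicclient import client

    # standalone/noauth connection (values are placeholders)
    ironic = client.get_client(1, ironic_url='http://127.0.0.1:6385',
                               os_auth_token='fake')

    node = '37d158bc-821d-409d-88c0-d0bc9c24f4ea'
    # point the node at an image served by the conductor's http server
    ironic.node.update(node, [
        {'op': 'add', 'path': '/instance_info/image_source',
         'value': 'http://conductor:8080/images/ubuntu.qcow2'},
        {'op': 'add', 'path': '/instance_info/image_checksum',
         'value': 'd41d8cd98f00b204e9800998ecf8427e'},
    ])
    ironic.node.set_provision_state(node, 'active')
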
moshelesambetts: hi11:21
sambettsno problem :) it would be interesting to get some logs from the iscsi deploy to see what was failing and if there is something we need to fix there11:21
sambettsmoshele: Hi11:21
moshelesambetts: can we talk about the smart-nic spec https://review.openstack.org/#/c/582767/?11:21
patchbotpatch 582767 - ironic-specs - Add Support for Smart NIC - 0h 42m 27s spent in CI11:21
sambettsmoshele: sure11:22
*** S4ren has quit IRC11:23
moshelesambetts: first of all we want the plug of the port to happen on the neutron ovs side, see https://review.openstack.org/#/c/586252/. this is because neutron has all the bridge information11:23
patchbotpatch 586252 - neutron - Add SmartNIC representor port to integration bridge - 8h 43m 16s spent in CI11:23
moshelesambetts:  second I am not sure how your proposal solves the deploy issue, as we need to change the deployment flow: power on before plugging the network11:25
moshelesambetts: also creating another network_interface seems like lots of code duplication ...11:25
sambettsmoshele: I personally don't think the existing OVS neutron mechanism driver should include that logic, I think that if we want to use the OVS mech driver we should behave like nova and plug and configure the downstream end of the connection to the baremetal, that way the OVS agent in neutron can operate exactly as it would for a VM and Ironic remains in control of how and when the baremetal11:32
sambettsports are connected to OVS, and we might be able to use https://github.com/openstack/os-vif to help us there too11:32
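
A loose sketch of what "use os-vif" could mean here: os_vif.initialize() and os_vif.plug() are the library's actual entry points, while every field value below (UUIDs, MAC address, names) is a placeholder invented for illustration:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    # describe the integration bridge and the port ironic wants plugged
    net = network.Network(id='6b0fd3d1-a60e-11e8-8d7a-ecf4bbc0b534',
                          bridge='br-int')
    port = vif.VIFOpenVSwitch(id='37d158bc-821d-409d-88c0-d0bc9c24f4ea',
                              address='52:54:00:12:34:56',
                              network=net, vif_name='tap-bm-0')
    info = instance_info.InstanceInfo(
        uuid='37d158bc-821d-409d-88c0-d0bc9c24f4ea', name='baremetal-0')

    # ironic (not neutron) performs the plug, mirroring what nova does
    os_vif.plug(port, info)
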
*** rh-jelabarre has joined #openstack-ironic11:33
sambettsmoshele: my proposal for the deployment flow adds a call to the network interface before the node is powered off after deployment, in addition to the one that is already called after power off. doesn't that solve your issue?11:33
sambettsmoshele: I think the binding data for an OVS port is significantly different from the binding data for a baremetal vnic type port, so I think there would be little duplication between the network interfaces, right?11:35
moshelesambetts: this an example of the change we need https://review.openstack.org/#/c/583573/5/ironic/drivers/modules/agent.py11:36
patchbotpatch 583573 - ironic - Add support for smartNICs - 9h 28m 24s spent in CI11:36
moshelesambetts: for example in deploy the current code powers off the machine, but we need to reboot it to make sure we can connect to the ovs before we plug the ports11:37
moshelesambetts: if we put aside who will be responsible for plugging the port, ironic or neutron, it will be almost the same: just update the port host_id with the location of the neutron agent11:39
moshelesambetts: also we're not just talking about ovs; it can be any neutron l2 agent that can support it11:40
sambettsmoshele: I understand that, but sticking "if smartnics" everywhere isn't reusable in any way and very specific. my proposal would add pre-reboot (power on) tasks to the official API for ironic network interfaces so that other network interfaces can also add tasks at that point in the process if required11:40
sambettsmoshele: having other ML2 l2 agents support it is exactly another reason why Ironic should use os-vif and not put the logic in neutron11:41
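
The driver-API shape sambetts is describing might look something like the sketch below; NetworkInterface and remove_provisioning_network are real ironic driver API names, but before_deploy_power_off is an invented hook name used purely for illustration:

    from ironic.drivers import base

    class SmartNicNetwork(base.NetworkInterface):
        """Hypothetical network interface for smart NICs."""

        def before_deploy_power_off(self, task):
            # invented hook: runs while IPA is still up and the node
            # still has power, so the smart NIC can reprogram itself
            self.remove_provisioning_network(task)
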
jrollmorning11:44
sambettso/ jroll11:44
jrolldon't have full context, but +1 for pre/post power off things in the network driver, changing the deployment flow to support smartnic seems hairy11:47
moshelesambetts: not sure i understand. the deploy driver calls the network interface telling it when to switch to the provisioning network. today this is done pre-reboot. for smart nics the power must be on to configure the network, so having the deploy call the network driver before power does not help. are you suggesting the network driver control/affect the power state?11:51
*** mgoddard has quit IRC11:52
jrollmoshele: what sam is saying is that attach_tenant_networks() or whatever would still be before power on, and some new post_power_on_work() method would be after power on, and the smartnic driver would do its work in the latter11:52
sambettsI was thinking the other way around: a pre power off action (before we shut down IPA), and then the existing one, because we need to unplug the provisioning network before we shut down IPA if the server needs to be on11:54
jrollwell, same idea11:55
jroll:)11:55
sambettsyup :)11:55
jrollthough moving the existing one feels wrong, if there's a driver that depends on that, it's suddenly broken :)11:55
sambettsthe existing one is already after we shut down IPA, but before we power it back on to the guest OS11:56
sambettsmy proposal adds an action before we shut down IPA; the existing one doesn't move11:56
jrolloh, I need more coffee11:57
jrollI get what you're saying now11:58
jrollbut it sounds like moshele needs the existing one to be after power on, for the same reason, the NICs need power11:58
sambettsmy suggestion is that the smart nics would have a new network_interface (as it's quite different to the existing implementation) and they would only implement the pre-power-off function, not the existing one11:59
jrollso tenant networks would be plugged while IPA is up?12:01
jrollsounds sketchy :)12:01
moshelesambetts: let's say the baremetal is in the powered off state and I am doing a deploy. does it mean that my new network interface should now do power on from the network driver?12:02
sambettsno12:02
sambettsI'm saying there will be a new function in the network interface that will be called once the power is in the right state12:02
jrolldeploy driver: new_pre_power_off_networking_foo(); power_off(); current_networking_foo(); power_on(); done()12:04
jrollyes?12:04
sambettsmore or less yup, we might need more points in the process, pre-IPA-shutdown and post-power-on into the tenant image, e.g. unplugging the provisioning network needs to be done before shutting down IPA, but plugging the tenant network might need to be done after powering up into the tenant image12:05
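
Put together, the ordering being converged on would look roughly like this; node_power_action, the states constants and the network interface methods are real ironic code, while before_deploy_power_off is the same hypothetical hook as in the sketch above:

    from ironic.common import states
    from ironic.conductor import utils as manager_utils

    def switch_to_tenant_networks(task):
        """Hedged sketch of the proposed end-of-deploy flow."""
        network = task.driver.network
        network.before_deploy_power_off(task)       # NEW: IPA still running
        manager_utils.node_power_action(task, states.POWER_OFF)
        network.remove_provisioning_network(task)   # existing call site
        network.configure_tenant_networks(task)
        manager_utils.node_power_action(task, states.POWER_ON)
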
moshelesambetts: so the deploy will tell the network driver that power is on, and the network driver will infer that it should plug the network? the network driver should not need to guess what to do based on power state. the deploy sets the flow, and the network driver does what it is told to do when it is told to do it.12:06
*** amoralej is now known as amoralej|lunch12:06
sambettspretty much, which is why I wonder if this starts heading in the deploy steps direction12:08
sambettse.g. a network interface could just change the priority of the plugs and unplugs, wdyt jroll ?12:08
jroll... that might make sense12:09
jrollagain, don't have full context12:09
jrollthe basic premise is that smartnics need power to connect or disconnect a network, right?12:09
sambettsyup, but the current deploy interfaces always power off the server before calling the network interface and there's no way to tamper with that unless you make your own deploy interface12:10
jrollright12:11
* jroll will try to have a think about it12:12
* jroll also thinks we have a lot of spaghetti he'd rather untangle before introducing new spaghetti12:12
sambettsyeah... I feel like deploy steps making the deploy process composable pretty much solves this issue without introducing a whole bunch of extra driver functions etc to ignore/implement depending on what you need; however I am out of touch with the progress on that work12:13
jrollbasically we've got it done, but there's only one deploy step right now12:14
jrollso we need to map things out and break it down12:14
moshelecan you point me to the deploy steps spec/implementation?12:15
jrollhttp://specs.openstack.org/openstack/ironic-specs/specs/11.1/deployment-steps-framework.html12:15
jrollvia http://specs.openstack.org/openstack/ironic-specs/ ctrl-f "steps" :)12:15
*** trown|outtypewww is now known as trown12:21
moshelejroll: so as I understand it, all the in-tree deploy interfaces are now just one step, and the plan is to break them up in the future, right?12:27
jrollmoshele: correct12:27
jrollthere's also talk about making deploy 'templates' that define a set of steps - so you might have a 'raid5' template which will set up a raid 5 before deploying and a 'standard' template that does not12:28
*** rnoriega_ has joined #openstack-ironic12:38
*** mgoddard has joined #openstack-ironic12:38
*** jcoufal has joined #openstack-ironic13:01
TheJuliaGood morning everyone13:01
sambettso/13:01
*** rloo has joined #openstack-ironic13:05
*** amoralej|lunch is now known as amoralej13:05
dtantsurmorning TheJulia, jroll13:06
TheJuliaI'm kind of concerned about changing the api interface significantly with more logic. Why not have simple helper methods that look at interface capabilities and do the needful dance based on the helper? That allows us to have one logic location for each pertinent major chunk that different deploy interfaces could call13:06
jrollmorning TheJulia and dtantsur13:06
TheJulias/more logic/more logic and more methods/13:07
sambettssomething like driver.network_interface.needs_power_on = true/false, for the interface capability, then the deploy logic can call things at the right point? I still think that deploy steps priorities might be better, because then we could just make the priority of the plug X network step, before or after the reboot to guest OS step :/13:09
TheJuliaThat is kind of the same problem we're trying to solve, but the step would have to be aware that task.driver.network_interface.capabilities has 'smartnic' or something buried in the list and should ideally sort that out, because if it is entirely left to the operator to figure out on a template they define, then we're in for a world of hurt unless we have detection logic buried all over the place to throw errors13:12
TheJuliathat their template might not work as desired.13:12
TheJuliathen again needs_power_on = true/false is another possibility :)   Same meaning in the end, just different names and slightly different implementation details13:12
*** rh-jelabarre has quit IRC13:13
jrollTheJulia: I believe sam is implying the driver author defines the priority, not the operator13:13
TheJuliaso then we come up with two different priority matrices?13:14
sambettsI had imagined these would be steps that are defined by the interfaces with priorities like the clean steps are (again I'm super out of touch), so in that case the smartnic network interface would define the same steps as the neutron network interface just with different priorities, so depending on which interface is loaded on the node things occur at different points13:14
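
sambetts' idea, sketched with the real @base.deploy_step decorator from the framework linked earlier; the priority numbers are invented, and whether network interfaces can expose deploy steps like this is exactly the open question being debated:

    from ironic.drivers import base

    class NeutronNetwork(base.NetworkInterface):

        @base.deploy_step(priority=60)  # would run before the reboot step
        def configure_tenant_networks(self, task):
            ...

    class SmartNicNetwork(base.NetworkInterface):

        @base.deploy_step(priority=40)  # would run after power-on instead
        def configure_tenant_networks(self, task):
            ...
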
*** mjturek has joined #openstack-ironic13:14
jrollI don't think we've discussed operators defining deploy step priorities at all, only which steps happen or don't happen. but I could be wrong.13:14
TheJuliaI remember we even started discussing an api for it...13:15
TheJuliabut we also tabled all of that to get the actual steps work done13:15
TheJuliaso we can revisit and try and figure out what works, but tl;dr operators do want to be able to provide control of steps, invocation of optional steps, and the priority. The templates were to have inferring logic to take traits and turn them into actions as well13:16
TheJuliathose actions might be different steps being turned on/off too13:16
* TheJulia gets a headache13:16
TheJuliawe should likely decompose this a little bit more to figure out what our real mvp is13:16
jrolllet's just throw everything out and only support the ansible driver13:17
TheJulialol13:17
jrollthen operators can do anything they want13:17
* jroll is sick of infinite customization13:17
TheJuliabut how do you support anything they want? :)13:18
* jroll steps away to play with dogs instead13:18
TheJuliaI think the mvp was likely just traits turning on/off steps13:19
TheJuliaso it could be a "smartnic" trait about the node that we have13:19
mjturekgm ironic!13:21
*** Bhujay has quit IRC13:21
*** etingof has quit IRC13:21
*** etingof has joined #openstack-ironic13:22
*** galaxyblr has quit IRC13:28
*** hjensas has quit IRC13:28
rloogood morning ironic'ers, mjturek, TheJulia, jroll, sambetts, dtantsur13:31
dtantsurmorning rloo13:31
rlooTheJulia: doesn't this need a reno? (even the master patch) https://review.openstack.org/#/c/592247/13:33
patchbotpatch 592247 - ironic (stable/rocky) - Fix not exist deploy image within irmc-virtual-med... - 5h 48m 56s spent in CI13:33
*** jcoufal has quit IRC13:33
TheJuliarloo: just yanked my +2, irmc results still not posted and yeah, since after 11.1 we do need a reno13:36
rlooTheJulia: maybe add one in master and squash in the backport? whatever works, I am easy :)13:37
TheJuliayeah, we can do that too13:38
TheJuliawe could also post the reno after the patch lands13:38
TheJuliait's not a big deal, the real issue is irmc not voting13:38
openstackgerritMerged openstack/ironic-specs master: import zuul job settings from project-config  https://review.openstack.org/59239813:45
*** jcoufal has joined #openstack-ironic13:47
*** r-daneel has joined #openstack-ironic13:48
*** dmellado has quit IRC14:01
*** rh-jelabarre has joined #openstack-ironic14:06
*** S4ren has joined #openstack-ironic14:07
*** hjensas has joined #openstack-ironic14:09
*** moshele has quit IRC14:10
*** jiapei has joined #openstack-ironic14:16
openstackgerritMerged openstack/ironic-python-agent stable/ocata: import zuul job settings from project-config  https://review.openstack.org/59243014:27
openstackgerritMerged openstack/ironic master: Minor fixes to contributor vision  https://review.openstack.org/59373614:27
*** cdearborn has joined #openstack-ironic14:38
*** jrist has joined #openstack-ironic14:45
*** mmedvede is now known as mmedvede_14:55
*** mmedvede_ is now known as mmedvede14:55
*** tonyb has quit IRC15:04
*** rnoriega_ is now known as rnoriega15:07
*** pcaruana has quit IRC15:10
*** rh-jelabarre has quit IRC15:11
*** r-daneel has quit IRC15:23
*** r-daneel has joined #openstack-ironic15:24
*** scas has joined #openstack-ironic15:25
*** openstackgerrit has quit IRC15:31
*** cdearborn has quit IRC15:34
*** dmellado has joined #openstack-ironic15:45
*** openstackgerrit has joined #openstack-ironic15:46
openstackgerritMerged openstack/ironic-inspector master: import zuul job settings from project-config  https://review.openstack.org/59238615:46
*** pcaruana has joined #openstack-ironic15:49
*** gkadam is now known as gkadam-afk15:52
*** tssurya has quit IRC15:53
openstackgerritMerged openstack/ironic-inspector stable/rocky: import zuul job settings from project-config  https://review.openstack.org/59246015:56
openstackgerritMerged openstack/ironic-inspector stable/queens: import zuul job settings from project-config  https://review.openstack.org/59244815:56
*** rpioso|afk is now known as rpioso16:02
rpiosoGood late morning, ironicers16:03
* rpioso was occupied downstream16:03
openstackgerritMerged openstack/ironic-inspector stable/pike: import zuul job settings from project-config  https://review.openstack.org/59243716:05
openstackgerritMerged openstack/ironic-inspector stable/ocata: import zuul job settings from project-config  https://review.openstack.org/59242716:05
openstackgerritMerged openstack/ironic-inspector master: switch documentation job to new PTI  https://review.openstack.org/59238716:05
TheJuliagood afternoon rpioso16:08
rpiosoTheJulia: :)16:09
rpiosoAnd thanks, again, for your help getting the idrac UEFI bug fix merged everywhere.16:09
openstackgerritMerged openstack/ironic-inspector master: add python 3.6 unit test job  https://review.openstack.org/59238816:15
*** rh-jelabarre has joined #openstack-ironic16:28
*** moshele has joined #openstack-ironic16:28
TheJuliarpioso: happy to help16:33
*** jiapei has quit IRC16:35
rpiosodtantsur: You, too :)16:45
*** mgoddard has quit IRC16:47
dtantsuryou're welcome :)16:48
*** gkadam-afk is now known as gkadam16:49
*** trown is now known as trown|lunch16:52
*** gkadam has quit IRC16:54
*** S4ren has quit IRC16:59
*** dougsz_ has quit IRC17:00
*** w-miller has quit IRC17:00
*** gyee has joined #openstack-ironic17:03
*** NobodyCam has joined #openstack-ironic17:06
*** e0ne has quit IRC17:07
*** rh-jelabarre has quit IRC17:15
*** sambetts is now known as sambetts|afk17:23
sambetts|afknight all o/17:23
*** etingof has quit IRC17:25
*** rnoriega has quit IRC17:25
*** sambetts|afk has quit IRC17:25
*** mmedvede has quit IRC17:25
*** jlvillal has quit IRC17:25
*** dims has quit IRC17:25
*** Guest92900 has joined #openstack-ironic17:34
TheJuliaGoodnight!17:35
TheJuliasadly it is not nap time...17:35
*** dims_ has joined #openstack-ironic17:36
TheJuliao/ dims_17:36
*** rloo has quit IRC17:46
*** mmedvede has joined #openstack-ironic17:58
*** rloo has joined #openstack-ironic18:00
*** etingof has joined #openstack-ironic18:01
NobodyCamhi18:12
NobodyCamoh its working now18:12
NobodyCamI was registered but still could not join :(18:12
TheJuliao/18:15
TheJuliayeah, freenode did a whole sweeping restart yesterday it seems18:15
*** trown|lunch is now known as trown18:21
*** dmellado has quit IRC18:22
openstackgerritJulia Kreger proposed openstack/bifrost stable/pike: Only install libvirt-python and python-lxml via pip  https://review.openstack.org/59528018:27
openstackgerritMerged openstack/ironic-specs master: Remove the duplicated word  https://review.openstack.org/59289218:43
*** dtantsur is now known as dtantsur|afk18:43
*** moshele has quit IRC18:46
*** amoralej is now known as amoralej|off18:51
*** mjturek has quit IRC19:11
*** adrianc has quit IRC19:16
*** cdearborn has joined #openstack-ironic19:32
*** eandersson has joined #openstack-ironic19:42
*** pcaruana has quit IRC19:47
eanderssonWhat is the most common place for nova-compute-ironic? Do most people just put it on the control plane together with other nova services?19:52
eanderssonI am guessing that compute_driver = libvirt.LibvirtDriver has no effect on services like nova-scheduler etc right?19:53
*** e0ne has joined #openstack-ironic19:57
*** MattMan has quit IRC20:12
*** tonyb has joined #openstack-ironic20:26
*** e0ne has quit IRC20:27
*** harlowja has joined #openstack-ironic20:42
*** jcoufal has quit IRC20:49
TheJuliaeandersson: people either run it on the control plane when they use ironic, or they run it on the nodes they are running ironic-conductor on20:53
eanderssonI see - thanks20:54
TheJuliaeandersson: nova-compute process feeds information into scheduling/placement20:54
*** trown is now known as trown|outtypewww20:54
TheJuliabut in ironic's case, the virt driver distributes nodes using a hash ring20:54
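
A toy illustration of what "distributes nodes using a hash ring" means; this is generic consistent hashing, not ironic's actual implementation:

    import bisect
    import hashlib

    class HashRing:
        """Map node UUIDs onto compute hosts consistently."""

        def __init__(self, hosts, replicas=32):
            # place each host at several points around the ring
            self.ring = sorted(
                (int(hashlib.md5(('%s-%d' % (h, i)).encode()).hexdigest(), 16), h)
                for h in hosts for i in range(replicas))
            self.keys = [k for k, _ in self.ring]

        def host_for(self, node_uuid):
            h = int(hashlib.md5(node_uuid.encode()).hexdigest(), 16)
            idx = bisect.bisect(self.keys, h) % len(self.ring)
            return self.ring[idx][1]

    ring = HashRing(['compute-1', 'compute-2', 'compute-3'])
    print(ring.host_for('37d158bc-821d-409d-88c0-d0bc9c24f4ea'))
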
*** jtomasek has quit IRC20:57
anupnHi TheJulia: Any idea if we need to add proxies separately inside the glance image? I have added proxies including no_proxy to my env variables, and am still seeing a ProxyError when IPA tries to download the instance image registered with glance21:07
TheJuliaanupn: no idea, I think more information is required21:08
anupnTheJulia: I am trying to launch an instance over BM and I see this error inside IPA logs http://paste.openstack.org/show/728634/21:09
TheJulia"Temporary failure in name resolution"21:10
TheJuliano dns to reach a named proxy?21:10
TheJuliaor is the proxy an ip address?21:10
anupnTheJulia: But i am able to curl www.google.com21:10
anupnso i assume the nameserver should not be an issue?21:10
anupnproxy is the URL21:11
anupnXXX.intel.com21:11
TheJuliawhat kind of ramdisk are you using?21:11
anupndefault that devstack use - agent tinycore21:12
anupnTheJulia: And i see http_proxy is getting added while the request is made, but it is not adding 'no_proxy'21:13
TheJuliaI'm not sure anyone has tried the proxy code with tinycore before21:13
jrollis the proxy hostname publically addressable?21:15
jroller, publically resolvable?21:15
anupnjroll: Yes21:15
jrollmight be using a nameserver that doesn't know about the proxy's hostname21:15
jrollhrm21:15
TheJuliabut tl;dr, from within the running IPA image, it needs to be able to perform a dns lookup to be able to connect to the proxy; it is basically saying it is unable to21:15
anupnon running curl with that address, I see it getting connected21:15
TheJuliarunning curl where?21:15
jrollyeah, definitely a dns problem21:15
anupnTheJulia: curl from the devstack machine, from where I am running nova commands21:16
TheJuliaSounds like a DNS problem where your provisioning network is unable to resolve DNS21:17
TheJuliaor the DNS name of the proxy specificially21:17
anupnTheJulia: Umm, maybe I will check from within the network namespace if it is able to ping the proxy server21:18
TheJuliaanupn: but it also depends on what neutron is handing out on the provisioning network for DNS resolution21:19
*** cdearborn has quit IRC21:20
anupnTheJulia: But isn't the same route used when downloading the deploy kernel and ramdisk for provisioning? i didn't see such an error at that time21:21
TheJuliadevstack uses ip addresses21:21
TheJuliaand ironic likely uses ip addresses as a result of devstack21:22
anupnTheJulia: So do you mean it is better to use IP address for proxy?21:23
TheJuliayou won't have dns resolution issues if you use that, at least in development. It is good to make sure dns works though21:24
anupnTheJulia: Ok, thanks. Will check that, replace the URL with a specific IP, and see if it makes a diff21:26
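
To make the failure mode concrete: the download in the ramdisk behaves like any requests-based client, so the proxy hostname itself must resolve from inside the ramdisk; a hedged sketch with placeholder hosts:

    import os
    import socket

    import requests

    os.environ['http_proxy'] = 'http://proxy.example.com:8080'
    os.environ['no_proxy'] = '192.168.24.1'  # hosts to reach directly

    # requests has to resolve the *proxy* hostname, so DNS inside the
    # ramdisk must work even when the image URL is a bare IP address
    try:
        socket.getaddrinfo('proxy.example.com', 8080)
    except socket.gaierror:
        print('cannot resolve the proxy from here - same failure as IPA')

    # with the server's IP in no_proxy, this request bypasses the proxy
    requests.get('http://192.168.24.1:8080/images/deploy.img', timeout=30)
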
*** dansmith is now known as htimsnad21:34
TheJuliahtimsnad: is it almost friday? :)21:38
*** vcgi has joined #openstack-ironic21:40
vcgiI wonder what will be the best setup option for openstack ironic in a new environment. All servers have dual nics. Option 1: one for the Bare Metal Provisioning Network, and one for external connectivity. Option 2: nic 1 and nic 2 bonded with multiple vlan setups.21:43
openstackgerritMerged openstack/ironic-python-agent master: switch documentation job to new PTI  https://review.openstack.org/59239421:43
anupnTheJulia: in my resolv.conf, I see nameserver as my hostname itself21:45
anupnTheJulia: I feel no_proxy should get added when IPA tries to make a request21:47
TheJuliaanupn: but your resolv.conf on your workstation is not what is present in the deployment ramdisk while it is running21:48
TheJuliavcgi: guess it depends on your hardware and security posture/requirements21:48
TheJuliavcgi: does your switch support LACP fallback to a single link ?21:49
anupnTheJulia: Yes, that's why I feel that if no_proxy with my hostname's IP address is added to the ramdisk, it might solve the problem21:50
anupnknow?21:50
vcgi@TheJulia yes they do support lacp fallback to a single link21:50
TheJuliavcgi: so conceivably, you can do bonded/portgroup links with vlans. I think most neutron ml2 drivers lack the portgroup concepts though, so it would likely need to be manual static switch-side configuration21:52
TheJuliabut then you can do vlan as access ports on the bonds I guess21:52
TheJuliafrom a security standpoint, that would be ideal. A static back-end provisioning network is not really a great idea since only one of those nodes needs to be compromised to begin to allow the rest to be targeted and pivoted through.21:53
TheJuliaanupn: It doesn't work that way21:53
TheJuliawell21:54
TheJuliaanupn: if there is direct connectivity, then you don't need a proxy, and then yes, that should work. It won't fix dns resolution issues though21:54
* TheJulia thought you meant for the dns issue21:55
vcgi#TheJulia what if I don't use lacp for bonding ?21:56
TheJuliavcgi: pxe booting is likely to fail. You can just do vlan enabled ports and not have one of the two ports connected to anything21:57
anupnTheJulia: direct connectivity with what?21:57
TheJuliavcgi: specifically, PXE chain loading through tftpboot (with UDP packets) won't survive the traffic distribution algorithms that switches use to transmit down the wire21:57
TheJuliaanupn: direct connectivity between you and the node (in this case, I believe it is your VM) that you're trying to provision21:58
anupnTheJulia: yes21:59
anupnTheJulia: There is a direct connectivity in that sense22:00
*** livelace has quit IRC22:24
*** rcernin has joined #openstack-ironic22:34
*** livelace has joined #openstack-ironic22:37
*** slagle has joined #openstack-ironic23:17
*** rpioso is now known as rpioso|afk23:18
*** r-daneel has quit IRC23:58
