Friday, 2019-08-16

*** igordc has quit IRC00:11
*** igordc has joined #openstack-ironic00:11
*** gyee has quit IRC00:34
*** igordc has quit IRC00:44
*** igordc has joined #openstack-ironic00:45
*** rloo has quit IRC00:46
*** ricolin has joined #openstack-ironic00:58
*** absubram has joined #openstack-ironic01:39
*** absubram has quit IRC01:54
*** absubram has joined #openstack-ironic01:58
*** hwoarang has quit IRC02:36
*** hwoarang has joined #openstack-ironic02:40
*** ash2307 has joined #openstack-ironic02:46
*** igordc has quit IRC04:11
*** rh-jelabarre has quit IRC04:22
*** absubram has quit IRC05:01
openstackgerritShivanand Tendulker proposed openstack/ironic master: Add iPXE boot interface to 'ilo' hardware type  https://review.opendev.org/67685405:03
*** zaneb has quit IRC05:04
*** zbitter has joined #openstack-ironic05:04
openstackgerritMerged openstack/ironic-python-agent-builder master: Updates the build file in DIB  https://review.opendev.org/67542705:05
*** e0ne has joined #openstack-ironic05:13
*** e0ne has quit IRC05:27
*** kaifeng has joined #openstack-ironic05:30
*** jtomasek has joined #openstack-ironic05:36
*** verma-varsha has joined #openstack-ironic06:01
*** belmoreira has joined #openstack-ironic06:16
*** belmoreira has quit IRC06:16
arne_wiebalckGood morning, ironic!06:18
*** ociuhandu has joined #openstack-ironic06:31
*** ociuhandu has quit IRC06:35
*** stendulker has joined #openstack-ironic06:49
*** yolanda has joined #openstack-ironic06:55
*** jtomasek has quit IRC07:25
*** stendulker has quit IRC07:49
*** e0ne has joined #openstack-ironic07:52
*** lucasagomes has joined #openstack-ironic08:02
*** trident has quit IRC08:03
*** trident has joined #openstack-ironic08:11
*** mgoddard_ has joined #openstack-ironic08:22
openstackgerritVanou Ishii proposed openstack/ironic master: Add logic to determine Ironic node is HW or not into configure_ironic_dirs  https://review.opendev.org/67195708:24
*** dsneddon has quit IRC08:32
*** derekh has joined #openstack-ironic08:37
*** stendulker has joined #openstack-ironic08:54
*** alexmcleod has joined #openstack-ironic08:55
verma-varshaetingof: o/ I am trying to write code for creating a volume via a POST request on a volume collection. After successful creation, I need to send a response of the form here (https://github.com/openstack/sushy/blob/master/sushy/tests/unit/resources/system/storage/test_volume.py#L183). I intend to do something like `return '{'Location': '/url/to/vol'}', 201`. How should I code it?08:59
*** dsneddon has joined #openstack-ironic08:59
etingofverma-varsha, o/ I think you should just fill-in 'headers' in response, no?09:04
etingofverma-varsha, like this -- https://github.com/openstack/sushy/blob/master/sushy/resources/sessionservice/sessionservice.py#L11709:06
etingofverma-varsha, from server's perspective -- https://stackoverflow.com/questions/25860304/how-do-i-set-response-headers-in-flask09:07
*** verma-varsha has quit IRC09:12
*** verma-varsha has joined #openstack-ironic09:21
*** dsneddon has quit IRC09:22
openstackgerritIlya Etingof proposed openstack/sushy master: Change OEM extensions architecture  https://review.opendev.org/67688909:24
etingofrpioso, ^09:25
verma-varshaetingof, If I use flask.Response(), won't it conflict with the decorator @returns_json that the method uses?09:30
etingofverma-varsha, I think it won't *if* the view returns a Response object rather than bare Python types -- https://github.com/openstack/sushy-tools/blob/master/sushy_tools/emulator/main.py#L16609:32
etingof(which is implicit if you are setting headers)09:33
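Putting etingof's suggestion together, a minimal sketch of what such a Flask view could look like (the route, helper name, and volume URL below are illustrative assumptions, not the actual sushy-tools code):

    import flask

    app = flask.Flask(__name__)

    # NOTE: route and URL layout are made up for illustration only
    @app.route('/redfish/v1/Systems/<identity>/Storage/<storage_id>/Volumes',
               methods=['POST'])
    def volume_collection_post(identity, storage_id):
        # ... create the volume on the emulator backend here ...
        new_vol = '/redfish/v1/Systems/%s/Storage/%s/Volumes/1' % (
            identity, storage_id)
        # Returning a Response object (rather than bare Python types) sets
        # the Location header and the 201 status explicitly, and per the
        # note above should not clash with a decorator like @returns_json.
        return flask.Response(status=201, headers={'Location': new_vol})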
openstackgerritArne Wiebalck proposed openstack/ironic-python-agent master: Software RAID: Ignore missing component devices or holder disks  https://review.opendev.org/67585709:36
*** Lucas_Gray has joined #openstack-ironic09:37
openstackgerritIlya Etingof proposed openstack/sushy master: Change OEM extensions architecture  https://review.opendev.org/67688909:41
etingofrpioso, rebased onto the latest sushy patch proposed above -- https://github.com/etingof/sushy-oem-dellemc/blob/master/sushy_oem_dellemc/resources/manager/manager.py09:43
*** kaifeng has quit IRC09:45
*** ociuhandu has joined #openstack-ironic09:48
*** ociuhandu has quit IRC09:50
*** dsneddon has joined #openstack-ironic09:53
*** dsneddon has quit IRC09:57
verma-varshaThanks etingof! I'll try that.10:03
*** verma-varsha has quit IRC10:24
*** ociuhandu has joined #openstack-ironic10:25
*** dsneddon has joined #openstack-ironic10:25
*** dsneddon has quit IRC10:31
*** Lucas_Gray has quit IRC10:32
*** verma-varsha has joined #openstack-ironic10:33
*** ociuhandu has quit IRC10:38
*** ociuhandu has joined #openstack-ironic10:39
*** ociuhandu has quit IRC10:42
*** ociuhandu has joined #openstack-ironic10:42
*** Lucas_Gray has joined #openstack-ironic10:43
*** verma-varsha1 has joined #openstack-ironic10:50
*** verma-varsha has quit IRC10:51
*** verma-varsha1 is now known as verma-varsha10:51
*** ociuhandu has quit IRC10:53
*** verma-varsha has quit IRC10:56
*** ociuhandu has joined #openstack-ironic10:58
*** jtomasek has joined #openstack-ironic11:03
nishagbGreetings ironic!11:04
*** dsneddon has joined #openstack-ironic11:05
nishagbCan someone review the patch - https://review.opendev.org/#/c/676157/ ?11:07
nishagbI can check if the ironic-python-agent-buildimage-dib test passes only after this patch merges.11:07
patchbotpatch 676157 - ironic-python-agent - Installs diskimage-builder to pass the DIB image b... - 3 patch sets11:07
*** dsneddon has quit IRC11:10
TheJuliagood morning everyone11:21
jrollmorning y'all11:22
*** ijw has joined #openstack-ironic11:27
*** ijw has quit IRC11:32
*** ociuhandu has quit IRC11:38
*** ociuhandu has joined #openstack-ironic11:40
*** dsneddon has joined #openstack-ironic11:42
openstackgerritparesh sao proposed openstack/ironic master: Out-of-band `erase_devices` clean step for Proliant Servers  https://review.opendev.org/64158211:43
*** ociuhandu has quit IRC11:44
openstackgerritparesh sao proposed openstack/ironic master: Out-of-band `erase_devices` clean step for Proliant Servers  https://review.opendev.org/64158211:46
*** dsneddon has quit IRC11:47
*** ociuhandu has joined #openstack-ironic11:50
*** mgoddard has quit IRC11:52
*** Lucas_Gray has quit IRC11:53
*** belmoreira has joined #openstack-ironic11:53
*** ociuhandu has quit IRC11:55
*** mgoddard has joined #openstack-ironic11:59
*** dsneddon has joined #openstack-ironic12:00
openstackgerritJulia Kreger proposed openstack/ironic master: Test: Move to unsafe caching  https://review.opendev.org/67691612:05
TheJuliaWell, it will be interesting to see if ^-- hits any of the same timeouts we've been seeing.12:05
*** rh-jelabarre has joined #openstack-ironic12:14
*** stendulker has quit IRC12:18
*** henriqueof has joined #openstack-ironic12:21
arne_wiebalckGood morning TheJulia jroll: in case you didn't see it yet: the power sync patches have been merged \o/12:23
jrollniiice12:26
TheJuliaarne_wiebalck: awesome12:27
jrolland just now I got to that email :P12:28
*** verma-varsha has joined #openstack-ironic12:32
*** rloo has joined #openstack-ironic12:38
*** ociuhandu has joined #openstack-ironic12:40
openstackgerritHarald Jensås proposed openstack/networking-baremetal master: Fix networking-baremetal CI  https://review.opendev.org/67570112:41
openstackgerritIlya Etingof proposed openstack/sushy master: Add conditional field matching  https://review.opendev.org/67507312:50
*** Lucas_Gray has joined #openstack-ironic12:51
*** ociuhandu has quit IRC12:57
*** dsneddon has quit IRC13:03
TheJuliawow, that cache change really changes the execution time on basic baremetal ops tests13:09
TheJuliaLowest I've seen in job logs is 381 seconds13:09
*** verma-varsha has quit IRC13:12
*** verma-varsha has joined #openstack-ironic13:12
openstackgerritDigambar proposed openstack/ironic master: DRAC : clear_job_queue clean step to fix pending bios config jobs  https://review.opendev.org/67402113:14
TheJuliaeh, ~400 seconds again. Not bad13:14
*** verma-varsha has quit IRC13:22
*** zbitter is now known as zaneb13:23
TheJuliahjensas:  reading your last change, I guess changing the neutron plugin out is what broke it since the job inherits ironic-base13:25
*** verma-varsha has joined #openstack-ironic13:26
TheJuliahjensas: s/mynetwork/br-ex/?13:26
hjensasTheJulia: hm that "Invalid input for operation: physical_network 'mynetwork' unknown for VLAN provider network." doesn't happen in my local reproducer.13:31
hjensasTheJulia: must be something else I got different.13:31
TheJuliawhat neutron plugin are you using?13:31
*** dsneddon has joined #openstack-ironic13:32
hjensasTheJulia: hm, looks like it's the legacy plugin. Since I have "WARNING: Using lib/neutron-legacy is deprecated, and it will be removed in the future"13:34
TheJuliayup... so there is the disconnect I guess13:35
hjensasTheJulia: I have to run, got a wedding to attend tomorrow and need to drive to Oslo this evening.13:35
hjensasTheJulia: I'll pick it up on monday.13:36
TheJuliahjensas: okay, I might poke it some later today13:36
TheJuliasince I'm the crazy person that fixed vlan support in the neutron devstack plugin13:37
*** dsneddon has quit IRC13:37
*** ociuhandu has joined #openstack-ironic13:38
*** ociuhandu has quit IRC13:42
*** sthussey has joined #openstack-ironic13:44
*** ociuhandu has joined #openstack-ironic13:45
*** zaneb has quit IRC13:55
*** hjensas has quit IRC13:56
verma-varshaetingof, we'd discussed auto-creation of storage pools at startup. For creating a libvirt storage pool, we need to construct the pool-xml which requires us to supply a uuid attribute for the pool. Can I use uuid.uuid1() to generate a uuid for the pool being created?13:58
openstackgerritparesh sao proposed openstack/ironic master: Out-of-band `erase_devices` clean step for Proliant Servers  https://review.opendev.org/64158214:02
*** bdodd has joined #openstack-ironic14:02
etingofverma-varsha, I see no reason why not?14:05
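For reference, a rough sketch of generating the pool XML with a client-side uuid as discussed here (the pool name and target path are illustrative assumptions only; uuid.uuid1() would work just as well):

    import uuid

    POOL_XML_TEMPLATE = """
    <pool type='dir'>
      <name>%(name)s</name>
      <uuid>%(uuid)s</uuid>
      <target>
        <path>%(path)s</path>
      </target>
    </pool>
    """

    # 'sushy-pool' and the target path below are made-up examples
    pool_xml = POOL_XML_TEMPLATE % {'name': 'sushy-pool',
                                    'uuid': uuid.uuid4(),
                                    'path': '/var/lib/libvirt/sushy-pool'}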
mjturekwhen deploying a whole disk image, is it required to have "is_whole_disk_image" set in the driver_internal_info?14:06
*** dsneddon has joined #openstack-ironic14:06
verma-varshaetingof, Also, I see you have used a default pool here (https://github.com/openstack/sushy-tools/blob/master/sushy_tools/emulator/resources/systems/libvirtdriver.py#L122) for virtual media emulation. So, you assume that it already exists? Because I do not see it being created anywhere in code.14:08
openstackgerritRadosław Piliszek proposed openstack/bifrost master: Install mariadb-server on Debian/Ubuntu  https://review.opendev.org/66590214:10
*** dsneddon has quit IRC14:10
etingofverma-varsha, my impression is that default pool should be created on libvirt installation... however one could remove it afterwards14:10
*** bobmel has joined #openstack-ironic14:12
verma-varshaetingof: Oh! Is that so? Somehow I can't find any pool named default on my m/c. I must have deleted it by mistake :/14:14
verma-varshaetingof: Also, while creating a new pool, a target path is also to be provided. I am thinking of something like '/var/lib/libvirt/pool-name'. I hope that doesn't cause any problems!14:19
etingofverma-varsha, that's off the top of my head and out of experience. I may be wrong re the default pool. It would be great if you research that a bit14:20
etingofverma-varsha, I am wondering if we could get some sort of default for libvirt-managed directory from libvirt itself rather than hardcode our own...14:21
verma-varshaetingof: libvirt stores its VM images by default in '/var/lib/libvirt/images'. That's where I came up with the path14:23
arne_wiebalckmjturek: upfront on the node you mean?14:24
mjturekarne_wiebalck yep!14:24
etingofverma-varsha, right, but that can change so it would be great if we could discover the actual setting from libvirt14:24
arne_wiebalckmjturek: I don't think so. Our nodes don't have it.14:25
mjturekah okay, thank you arne_wiebalck14:25
rpiosoetingof: Thanks for looping me in on https://review.opendev.org/67688914:26
patchbotpatch 676889 - sushy - Change OEM extensions architecture - 2 patch sets14:26
arne_wiebalckmjturek: But it may be set at some point during deployment ... ?14:26
mjturekarne_wiebalck fair enough, but it's not something the user needs to set before deployment you think?14:27
* rpioso shudders at re-architecture ;-)14:27
verma-varshaetingof: Can you elaborate on what is exactly needed here?14:28
etingofrpioso, I tried to explain why it's a relatively big deal in the commit message14:28
etingofverma-varsha, if there is an official way within libvirt API to figure out the prefix directory for storage pool, that would be ideal14:29
etingofverma-varsha, alternatively, may be we could enumerate the existing pools (if any) and use their directory or at least its parent directory14:31
etingofverma-varsha, I am cautious about hardcoding our own because then we will have to manage it (e.g. ensure it's large enough, clean it up eventually etc)14:31
verma-varshaetingof: Using the parent of an existing pool directory is doable. But, what do we do in case none exists? Go with '/var/lib/libvirt'?14:33
etingofverma-varsha, perhaps, *unless* you find some other source of information on a libvirt-owned directory we could piggyback. somewhere in libvirt configuration, likely14:34
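As a starting point for that research, a rough sketch (assuming the libvirt-python bindings and an established connection; the helper name is made up) of enumerating existing pools to discover a libvirt-owned directory, falling back to the conventional path:

    import os
    import xml.etree.ElementTree as ET

    import libvirt

    def guess_pool_parent_dir(conn):
        # Look at directory-backed pools libvirtd already knows about and
        # reuse their parent directory rather than hardcoding our own.
        for pool in conn.listAllStoragePools():
            root = ET.fromstring(pool.XMLDesc(0))
            if root.get('type') != 'dir':
                continue
            path = root.find('target/path')
            if path is not None and path.text:
                return os.path.dirname(path.text)
        return '/var/lib/libvirt'  # conventional default, may vary per distro

    conn = libvirt.open('qemu:///system')
    print(guess_pool_parent_dir(conn))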
etingofverma-varsha, that reminds me! can storage pool be remote i.e. managed by libvirtd sitting somewhere outside of the machine we are running our code on?14:35
verma-varshaetingof: All of the auto-created libvirt pools on my machine have a directory in '/var/lib/libvirt', so it seems to be what we are looking for. Should I mail the libvirt folks to confirm?14:35
*** belmoreira has quit IRC14:36
etingofverma-varsha, asking libvirt folks would never hurt!14:36
verma-varshaetingof: I guess the pools can be remote, although, I haven't tried implementing/using one.14:37
arne_wiebalckadfkikded14:37
arne_wiebalckmjturek: The admin on the node you mean? No.14:38
mjturekperfect, thanks14:39
etingofverma-varsha, I am thinking, they might always be remote in a sense. That is they are always created/managed by libvirtd which can easily run on a different machine.14:39
verma-varshaetingof: I am not sure about the remote thing. 'Libvirt provides storage management on the physical host through storage pools and volumes.', says the official libvirt documentation14:39
etingofverma-varsha, if pools are remote, then, generally, we have no influence/control over that directory at all... so I suggest another quick research on that ;-)14:41
verma-varshaetingof: https://libvirt.org/storage.html#StorageBackendNetFS Is this a remote pool? I'm sorry I can't make that out having little experience in that area.14:44
openstackgerritVladyslav Drok proposed openstack/ironic master: Allow to configure additional ipmitool retriable errors  https://review.opendev.org/67696314:44
*** dsneddon has joined #openstack-ironic14:45
*** bnemec is now known as beekneemech14:48
etingofverma-varsha, I think here we literally "upload" boot image to libvirtd -- https://github.com/openstack/sushy-tools/blob/master/sushy_tools/emulator/resources/systems/libvirtdriver.py#L67814:50
*** dsneddon has quit IRC14:50
etingofverma-varsha, I suspect that you are confusing two "remote" things - the filesystem where the user will have their data (your link is about a remote variant, we will probably use a local block device) and the way we (as a libvirt client) configure storage pools to libvirtd.14:52
verma-varshaetingof: I've looked up some stuff. It seems libvirt will always have at least one pool. We can use its parent directory for our new pool.14:54
etingofverma-varsha, in our use-case, we want to use a local filesystem for the storage pool (local relative to where libvirtd runs), however if libvirtd runs on a different machine than the libvirt client (i.e. sushy-tools), then from the client's perspective the pool is remote, i.e. the client has no filesystem access to that directory (because it's elsewhere)14:55
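In other words, anything pushed into the pool has to go through the libvirt API rather than the local filesystem. A rough sketch of how an image could be streamed into a volume on a possibly remote libvirtd (the volume XML, names and raw format are assumptions; the actual sushy-tools code may differ):

    import os

    import libvirt

    VOL_XML = """
    <volume type='file'>
      <name>%(name)s</name>
      <capacity unit='bytes'>%(size)d</capacity>
      <target><format type='raw'/></target>
    </volume>
    """

    def upload_image(conn, pool_name, vol_name, image_path):
        # names/arguments here are illustrative only
        pool = conn.storagePoolLookupByName(pool_name)  # raises if missing
        size = os.path.getsize(image_path)
        vol = pool.createXML(VOL_XML % {'name': vol_name, 'size': size}, 0)
        stream = conn.newStream()
        vol.upload(stream, 0, size)
        with open(image_path, 'rb') as src:
            # the handler is called back for each chunk libvirt wants to send
            stream.sendAll(lambda st, nbytes, f: f.read(nbytes), src)
        stream.finish()
        return vol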
openstackgerritJulia Kreger proposed openstack/ironic master: Move to unsafe caching  https://review.opendev.org/67691614:56
*** verma-varsha1 has joined #openstack-ironic14:58
verma-varsha1etingof: But doesn't libvirt have access to the filesystem on the host?14:58
*** verma-varsha has quit IRC14:59
*** verma-varsha1 is now known as verma-varsha14:59
TheJuliajroll: arne_wiebalck: I'd <3 thoughts on ^ since presently we end up in a situation where, when we're trying to issue syncs, we have two abstraction layers we're going through with different caching, so at least in test VMs it seems reasonable to run with unsafe as they are for testing... and then we hopefully don't end up in any weird races on devices that wouldn't really exist with multiple layers.14:59
openstackgerritShivanand Tendulker proposed openstack/ironic master: Add iPXE boot interface to 'ilo' hardware type  https://review.opendev.org/67685415:01
*** verma-varsha has quit IRC15:05
*** verma-varsha has joined #openstack-ironic15:06
*** dsneddon has joined #openstack-ironic15:08
verma-varshaetingof: In case of remote pools, if we don't have access, we can't really create pools or volumes, right?15:09
arne_wiebalckTheJulia: Seems ok to me ... what triggered this patch?15:12
arne_wiebalckTheJulia: This only affects test VM anyway, no?15:12
arne_wiebalckVMs15:12
*** gyee has joined #openstack-ironic15:12
*** Garyx has joined #openstack-ironic15:15
etingofverma-varsha, we can't do that at the filesystem level (mkdir /var/lib/libvirt/pool) because the /var/lib filesystem might reside on some other machine. however if we ask libvirtd to do that for us, that should work out. but obviously we can't ask for any random directory of our choice - libvirtd might reject that15:16
verma-varshaetingof: Then maybe the idea of creating a user-specified pool that could not be found on the backend is absurd. Should we just throw an error in case that happens?15:22
etingofverma-varsha, sounds reasonable to me15:25
etingofverma-varsha, we can only choose the pool name, not the filesystem location15:25
verma-varshaSo now, we are only creating the volumes (and not pools) that could not be found on the backend but are specified in the config file.15:27
verma-varshaAlso, for creating volumes for a POST request, use the 'default' pool?15:28
*** gyee has quit IRC15:35
TheJuliaarne_wiebalck: we had like 6 gate jobs all fail in a row yesterday with timeouts on deploy15:36
jrollTheJulia: nice find15:36
TheJuliaor, timeouts seeing /dev/vda115:37
TheJuliaand sync() and then refresh from the device is the key there which is only problematic in VMs15:37
etingofverma-varsha, perhaps that's a reasonable start15:37
TheJuliaso... "what if unsafe makes this better?" jumped into my brain15:37
TheJuliahence the patch15:37
arne_wiebalckI wouldn't have thought this makes such a difference!15:38
TheJuliaso, libvirt passes the device sync request down to the host, so ultimately we're going vm sync -> devstack vm sync() -> hypervisor host sync() -> whatever the back-end devices are sync()15:39
TheJuliaso... yeah15:39
*** ociuhandu has quit IRC15:40
TheJuliagoing unsafe just makes the devstack vm's hypervisor go "yup, synced, here is that data you asked for (but I secretly only gave you the buffer cache data instead of trying to read from the disk *muahahahaha*)15:40
arne_wiebalckOTOH, if a 15% speedup pushes us under the timeout, isn't the timeout also something to look at?15:40
TheJulia"15:40
arne_wiebalckTheJulia: LOL15:40
TheJuliaVM performance across providers varies quite a bit. my 10-20% gut feeling is just based on wallclocking a few of the jobs to what I've seen previously15:41
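For context on what the caching patch above means at the libvirt level, a tiny illustration (the domain XML below is made up, not the actual devstack template): switching the test VM's disk driver to cache='unsafe' lets qemu acknowledge guest flushes without forcing them all the way down to the hypervisor host's storage:

    import xml.etree.ElementTree as ET

    # Made-up minimal domain XML, for illustration only
    DOMAIN_XML = """
    <domain type='kvm'>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/path/to/node-0.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    domain = ET.fromstring(DOMAIN_XML)
    for driver in domain.findall('devices/disk/driver'):
        driver.set('cache', 'unsafe')  # trade data safety for speed in CI VMs
    print(ET.tostring(domain, encoding='unicode'))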
* arne_wiebalck sees evil hypervisors now15:41
*** ociuhandu has joined #openstack-ironic15:41
jrollonly now? I've been seeing them for years :P15:42
arne_wiebalckThe patch seems fine to me : if something really breaks, consistent data on the disk probably won't help us / be needed.15:42
arne_wiebalckjroll: heh, I'm still the new kid on the block!15:42
TheJuliaI see hypervisors as Doctor Chaotica15:43
TheJuliaStar Trek Voyager reference there15:44
*** ociuhandu has quit IRC15:45
* arne_wiebalck hears muahahahaha15:46
*** gyee has joined #openstack-ironic15:48
JayFTheJulia: nice patch15:49
* TheJulia steps away to go run an errand and hope that her migraine goes away15:49
TheJuliaJayF: thanks!15:49
*** ociuhandu has joined #openstack-ironic15:50
*** gyee has quit IRC15:52
*** ociuhandu has quit IRC15:54
*** ociuhandu has joined #openstack-ironic15:55
*** e0ne has quit IRC15:58
*** ijw has joined #openstack-ironic16:02
*** lucasagomes has quit IRC16:03
*** gyee has joined #openstack-ironic16:08
*** Lucas_Gray has quit IRC16:13
mbuilTheJulia: Hello. As you probably remember, in OPNFV we are using Bifrost. Currently, we are moving towards Python3 and we get an error with this list https://github.com/openstack/bifrost/blob/master/playbooks/roles/bifrost-ironic-install/defaults/required_defaults_Ubuntu_16.04.yml#L8 because for example, it expects python3-mysqldb instead of python-mysqldb. I am pretty sure Bifrost works with python3, so we are wondering if perhaps16:15
mbuilthere is a way to install the python3 version of those packages without having to add the 3 after python (e.g. python3-dev, python3-mysqldb...)16:15
*** bnemec has joined #openstack-ironic16:19
*** beekneemech has quit IRC16:19
*** verma-varsha has quit IRC16:19
*** bnemec is now known as beekneemech16:20
*** ociuhandu_ has joined #openstack-ironic16:28
*** yolanda has quit IRC16:30
*** ociuhandu has quit IRC16:31
*** beekneemech has quit IRC16:32
*** ociuhandu_ has quit IRC16:32
arne_wiebalckbye, everyone o/16:33
openstackgerritIlya Etingof proposed openstack/ironic master: [WIP] Add iDRAC boot interface  https://review.opendev.org/67249816:35
*** bnemec has joined #openstack-ironic16:35
etingofrpioso, I've updated sushy-oem-dell to make use of the reworked sushy oem support -- https://github.com/etingof/sushy-oem-dellemc/blob/master/sushy_oem_dellemc/resources/manager/manager.py#L9316:36
*** verma-varsha has joined #openstack-ironic16:37
*** yolanda has joined #openstack-ironic16:37
etingofand adjusted idrac boot interface in ironic ^ accordingly16:37
*** ijw_ has joined #openstack-ironic16:41
*** verma-varsha1 has joined #openstack-ironic16:42
*** verma-varsha has quit IRC16:43
*** verma-varsha1 has quit IRC16:43
*** bnemec has quit IRC16:44
*** ijw has quit IRC16:45
*** bnemec has joined #openstack-ironic16:45
*** derekh has quit IRC16:48
*** mgoddard_ has quit IRC16:50
TheJuliambuil: I guess ultimately we would have to template that out further for python3 on ubuntu 16.04....16:54
mbuilTheJulia: ok. Does that change in Ubuntu 18.04?16:56
TheJuliambuil: I don't think we explicitly re-targetted to python3 in 18.04, I honestly don't really remember at this point. The issue really is a duality situation where we were unable to support python3 early on, so lots of things ended up python2-focused. :\16:58
mbuilTheJulia: ok, thanks. I'll create a patch next week to fix that16:59
TheJulialooks like xenial has python 3.5 which means ansible should run on that just fine17:00
TheJuliambuil: thanks, ping me with the patch and I'll review it once it is up17:00
mbuilYes, it works fine, I have been trying it. Great, I'll ping you, have a nice weekend17:01
*** bnemec has quit IRC17:02
TheJuliambuil: you too!17:02
*** bnemec has joined #openstack-ironic17:03
*** igordc has joined #openstack-ironic17:04
*** ociuhandu has joined #openstack-ironic17:05
*** mgoddard has quit IRC17:07
*** igordc has quit IRC17:08
*** ociuhandu has quit IRC17:09
*** yaawang has quit IRC17:14
*** bnemec has quit IRC17:14
*** yaawang has joined #openstack-ironic17:14
*** bnemec has joined #openstack-ironic17:15
*** alexmcleod has quit IRC17:26
*** igordc has joined #openstack-ironic17:26
*** bnemec has quit IRC17:31
openstackgerritRadosław Piliszek proposed openstack/bifrost master: Install mariadb instead of mysql on Debian Buster  https://review.opendev.org/66590217:33
*** bnemec has joined #openstack-ironic17:34
*** ijw_ has quit IRC17:35
*** bnemec has quit IRC17:40
*** bnemec has joined #openstack-ironic17:42
*** igordc has quit IRC17:49
*** igordc has joined #openstack-ironic17:50
*** jtomasek has quit IRC17:53
openstackgerritChristopher Dearborn proposed openstack/ironic master: Fix exception on provisioning with idrac hw type  https://review.opendev.org/67700318:08
*** ricolin has quit IRC18:24
*** zaneb has joined #openstack-ironic18:29
*** bdodd has quit IRC18:36
*** ijw has joined #openstack-ironic18:38
*** jcoufal has joined #openstack-ironic18:39
rpiosoetingof: Thank you!18:59
*** dsneddon has quit IRC18:59
*** jcoufal has quit IRC19:01
*** rloo has quit IRC19:04
*** rloo has joined #openstack-ironic19:04
*** rloo has quit IRC19:09
*** rloo has joined #openstack-ironic19:09
*** rloo has quit IRC19:19
*** rloo has joined #openstack-ironic19:19
*** ociuhandu has joined #openstack-ironic19:28
*** ociuhandu has quit IRC19:29
*** ociuhandu has joined #openstack-ironic19:31
*** ociuhandu has quit IRC19:34
*** ociuhandu has joined #openstack-ironic19:34
*** ociuhandu has quit IRC19:48
*** ociuhandu has joined #openstack-ironic19:48
*** dsneddon has joined #openstack-ironic19:53
*** ijw has quit IRC19:55
*** ociuhandu has quit IRC19:56
openstackgerritEric Fried proposed openstack/ironic master: devstack: Fix libvirtd/libvirt-bin detection  https://review.opendev.org/67702520:12
*** bnemec is now known as beekneemech20:14
openstackgerritMerged openstack/ironic master: Install sushy if redfish is a hardware type  https://review.opendev.org/67673220:43
*** yolanda__ has joined #openstack-ironic20:45
*** yolanda has quit IRC20:46
*** ociuhandu has joined #openstack-ironic20:48
*** pcaruana has quit IRC20:57
*** ociuhandu has quit IRC20:58
*** igordc has quit IRC21:01
*** igordc has joined #openstack-ironic21:01
openstackgerritMerged openstack/python-ironicclient master: Strip prefix when paginating  https://review.opendev.org/67060021:04
openstackgerritJulia Kreger proposed openstack/python-ironicclient master: Remove the ironic command  https://review.opendev.org/67651521:05
*** rh-jelabarre has quit IRC21:21
*** rh-jelabarre has joined #openstack-ironic21:21
*** ijw has joined #openstack-ironic21:26
*** ijw has quit IRC21:28
*** ijw has joined #openstack-ironic21:29
rlooTheJulia: did you forget to add the reno to ^^ 676515? I don't see it.21:33
TheJuliarloo: d'oh21:36
TheJuliarloo: I already swapped laptops for the evening, It will be at latter or tomorrow before I do that21:36
rlooTheJulia: no worries, if i forget to review, ping me next week :)21:37
TheJuliarloo: will do, have a wonderful weekend21:37
rlooTheJulia: you too. I wonder about you 'swapping laptops for the evening', vs 'turn off laptop for the rest of the weekend' :D21:38
TheJuliarloo: kerbal space program21:38
rlooTheJulia: ok, that is allowed!21:39
*** henriqueof has quit IRC21:52
*** ociuhandu has joined #openstack-ironic22:29
*** ociuhandu has quit IRC22:33
*** igordc has quit IRC22:46
openstackgerritMerged openstack/ironic stable/stein: Check for deploy.deploy deploy step in heartbeat  https://review.opendev.org/67615122:59
*** ociuhandu has joined #openstack-ironic23:00
*** ociuhandu has quit IRC23:05
openstackgerritMerged openstack/ironic-inspector master: CI documentation  https://review.opendev.org/67399223:26
*** vesper11 has quit IRC23:32
*** sthussey has quit IRC23:54
