Monday, 2019-07-08

*** cgoncalves has joined #openstack-nova00:09
*** rcernin has joined #openstack-nova00:13
*** brinzhang has joined #openstack-nova00:29
*** trident has joined #openstack-nova00:33
*** ircuser-1 has quit IRC00:35
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: ksa auth conf and client for cyborg access  https://review.opendev.org/631242  00:39
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Add Cyborg device profile groups to request spec.  https://review.opendev.org/631243  00:39
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Create and bind Cyborg ARQs.  https://review.opendev.org/631244  00:39
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Get resolved Cyborg ARQs and add PCI BDFs to VM's domain XML.  https://review.opendev.org/631245  00:39
*** _alastor_ has joined #openstack-nova00:48
*** _alastor_ has quit IRC00:52
*** imacdonn has quit IRC01:13
*** imacdonn has joined #openstack-nova01:13
*** slaweq has joined #openstack-nova01:14
*** slaweq has quit IRC01:24
*** altlogbot_2 has quit IRC01:28
*** altlogbot_0 has joined #openstack-nova01:29
*** spatel has joined #openstack-nova01:37
*** Dinesh_Bhor has joined #openstack-nova01:44
*** slaweq has joined #openstack-nova02:16
*** brinzhang_ has joined #openstack-nova02:17
*** brinzhang has quit IRC02:20
*** slaweq has quit IRC02:24
*** BjoernT has joined #openstack-nova02:33
<openstackgerrit> Yongli He proposed openstack/nova master: clean up orphan instances  https://review.opendev.org/627765  02:35
*** BjoernT_ has joined #openstack-nova03:08
*** BjoernT has quit IRC03:09
*** spatel has quit IRC03:14
*** slaweq has joined #openstack-nova03:15
*** slaweq has quit IRC03:24
*** psachin has joined #openstack-nova03:28
<openstackgerrit> Boxiang Zhu proposed openstack/nova master: Support agile samples name  https://review.opendev.org/669591  03:28
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: ksa auth conf and client for cyborg access  https://review.opendev.org/631242  03:33
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Add Cyborg device profile groups to request spec.  https://review.opendev.org/631243  03:33
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Create and bind Cyborg ARQs.  https://review.opendev.org/631244  03:33
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Get resolved Cyborg ARQs and add PCI BDFs to VM's domain XML.  https://review.opendev.org/631245  03:33
*** prometheanfire has joined #openstack-nova03:43
<prometheanfire> I'm guessing volumes showing up as attached in nova but not cinder (and not actually being attached) is a nova problem?  03:43
<prometheanfire> can't 'remove volume' it      No volume with a name or ID of '3e92f9b6-1a3b-4a7e-8487-6ff253e888db' exists.  03:43
<prometheanfire> problem starts when attaching it, times out with a 504 :|  03:50
*** udesale has joined #openstack-nova04:09
*** slaweq has joined #openstack-nova04:11
*** factor has quit IRC04:13
*** factor has joined #openstack-nova04:14
<openstackgerrit> Takashi NATSUME proposed openstack/python-novaclient master: Deprecate cells v1 and extension commands and APIs  https://review.opendev.org/669597  04:22
*** slaweq has quit IRC04:24
*** slaweq has joined #openstack-nova05:11
*** ociuhandu has joined #openstack-nova05:20
*** slaweq has quit IRC05:25
*** ociuhandu has quit IRC05:27
*** ociuhandu_ has joined #openstack-nova05:27
*** whoami-rajat has joined #openstack-nova05:30
*** BjoernT_ has quit IRC05:32
*** Luzi has joined #openstack-nova05:32
*** ociuhandu_ has quit IRC05:33
*** brinzhang_ has quit IRC05:36
*** ratailor has joined #openstack-nova05:45
*** ileixe has joined #openstack-nova05:54
*** ccamacho has joined #openstack-nova06:00
*** brinzhang has joined #openstack-nova06:02
*** slaweq has joined #openstack-nova06:11
*** pcaruana has joined #openstack-nova06:12
*** yaawang has quit IRC06:15
<openstackgerrit> Arthur Dayne proposed openstack/nova-specs master: Proposal for a safer noVNC console with password authentication  https://review.opendev.org/623120  06:16
*** maciejjozefczyk has joined #openstack-nova06:21
*** etp has joined #openstack-nova06:24
*** slaweq has quit IRC06:26
*** damien_r has joined #openstack-nova06:28
*** xek_ has joined #openstack-nova06:30
*** damien_r has quit IRC06:32
*** luksky11 has joined #openstack-nova06:34
*** slaweq has joined #openstack-nova06:39
*** yaawang has joined #openstack-nova06:43
<openstackgerrit> Merged openstack/nova master: Clean up test_virtapi  https://review.opendev.org/667419  06:45
*** yaawang has quit IRC06:50
*** yaawang has joined #openstack-nova06:55
*** rcernin has quit IRC07:12
*** ivve has joined #openstack-nova07:13
*** helenafm has joined #openstack-nova07:14
*** ricolin has joined #openstack-nova07:18
*** psachin has quit IRC07:19
<openstackgerrit> wangwei1 proposed openstack/nova master: fix spelling error in nova/api/validation/__init__.py  https://review.opendev.org/669244  07:20
<openstackgerrit> Boxiang Zhu proposed openstack/nova master: Add host and hypervisor_hostname flag to create server  https://review.opendev.org/645520  07:22
*** tssurya has joined #openstack-nova07:28
*** jangutter has joined #openstack-nova07:31
*** adriant has quit IRC07:38
*** ociuhandu has joined #openstack-nova07:45
*** damien_r has joined #openstack-nova07:49
*** ralonsoh has joined #openstack-nova07:55
*** ttsiouts has joined #openstack-nova08:01
*** rpittau|afk is now known as rpittau08:03
*** ttsiouts has quit IRC08:13
*** ttsiouts has joined #openstack-nova08:13
*** ttsiouts has quit IRC08:18
*** ttsiouts has joined #openstack-nova08:24
*** mdbooth has joined #openstack-nova08:33
*** psachin has joined #openstack-nova08:34
*** gokhani has joined #openstack-nova08:34
<openstackgerrit> Merged openstack/nova master: Add VirtAPI.update_compute_provider_status  https://review.opendev.org/668706  08:41
*** yaawang has quit IRC08:41
*** brinzhang_ has joined #openstack-nova08:47
*** brinzhang has quit IRC08:50
*** derekh has joined #openstack-nova08:54
*** lpetrut has joined #openstack-nova09:00
*** cdent has joined #openstack-nova09:05
*** Zara has joined #openstack-nova09:08
*** priteau has joined #openstack-nova09:08
*** ociuhandu has quit IRC09:11
*** brinzhang has joined #openstack-nova09:13
*** brinzhang has quit IRC09:13
*** brinzhang has joined #openstack-nova09:14
*** brinzhang_ has quit IRC09:16
*** brinzhang has quit IRC09:17
*** brinzhang has joined #openstack-nova09:17
*** brinzhang_ has joined #openstack-nova09:18
*** mdbooth has quit IRC09:21
*** brinzhang has quit IRC09:22
*** davidsha has joined #openstack-nova09:23
*** mdbooth has joined #openstack-nova09:38
*** cdent has quit IRC09:45
*** panda is now known as panda|bbl09:48
*** ociuhandu has joined #openstack-nova09:48
<openstackgerrit> Brin Zhang proposed openstack/nova master: Specify availability_zone to unshelve  https://review.opendev.org/663851  09:51
<openstackgerrit> Miguel Ángel Herranz Trillo proposed openstack/nova stable/queens: Fix type error on call to mount device  https://review.opendev.org/669629  09:57
*** luksky11 has quit IRC10:12
*** moshele has joined #openstack-nova10:15
<moshele> @sean-k-mooney: hi  10:16
*** ttsiouts has quit IRC10:32
*** ttsiouts has joined #openstack-nova10:33
*** panda|bbl has quit IRC10:33
*** cdent has joined #openstack-nova10:33
*** awalende has joined #openstack-nova10:36
<gokhani> hi team, because of a power outage most of our compute nodes unexpectedly shut down and now I can not start our instances. Instance power status is No State. Error log is http://paste.openstack.org/show/754107/. My environment is OpenStack Pike and instances are on NFS shared storage. Nova version is 16.1.6.dev2. There are many important instances in this environment. How can I rescue my instances? What would you suggest?  10:37
*** ttsiouts has quit IRC10:37
*** panda has joined #openstack-nova10:37
*** priteau has quit IRC10:42
*** udesale has quit IRC11:00
*** ttsiouts has joined #openstack-nova11:01
*** priteau has joined #openstack-nova11:09
*** luksky11 has joined #openstack-nova11:11
*** cdent has quit IRC11:12
*** priteau has quit IRC11:16
*** tesseract has joined #openstack-nova11:17
*** cdent has joined #openstack-nova11:18
<openstackgerrit> Ivaylo Mitev proposed openstack/nova master: VMware VMDK detach: get adapter type from instance VM  https://review.opendev.org/653738  11:19
*** moshele has quit IRC11:20
*** tesseract has quit IRC11:20
*** tesseract has joined #openstack-nova11:21
*** mdbooth has quit IRC11:24
*** ratailor has quit IRC11:25
*** ricolin has quit IRC11:26
*** mdbooth has joined #openstack-nova11:29
<openstackgerrit> Merged openstack/nova master: Correct the comment of RequestSpec's network_metadata  https://review.opendev.org/667061  11:56
*** _alastor_ has joined #openstack-nova12:02
*** etp has quit IRC12:02
*** etp has joined #openstack-nova12:02
*** sean-k-mooney has quit IRC12:03
*** _alastor_ has quit IRC12:06
*** etp has quit IRC12:08
*** sean-k-mooney has joined #openstack-nova12:16
*** cdent has quit IRC12:26
<openstackgerrit> Brin Zhang proposed openstack/nova master: Specify availability_zone to unshelve  https://review.opendev.org/663851  12:33
*** artom has joined #openstack-nova12:33
<openstackgerrit> Shilpa Devharakar proposed openstack/nova master: Support filtering of hosts by forbidden aggregates  https://review.opendev.org/667952  12:35
*** edleafe has joined #openstack-nova12:42
*** shilpasd has joined #openstack-nova12:49
<openstackgerrit> Merged openstack/os-resource-classes master: Add Python 3 Train unit tests  https://review.opendev.org/669479  12:57
<openstackgerrit> Merged openstack/os-traits master: Add Python 3 Train unit tests  https://review.opendev.org/669480  13:00
*** takashin has left #openstack-nova13:02
*** lbragstad has joined #openstack-nova13:03
*** damien_r has quit IRC13:04
*** tesseract has quit IRC13:14
*** tesseract has joined #openstack-nova13:16
*** mriedem has joined #openstack-nova13:23
<openstackgerrit> Matt Riedemann proposed openstack/nova stable/rocky: Fix type error on call to mount device  https://review.opendev.org/669664  13:25
*** spatel has joined #openstack-nova13:28
*** brinzhang_ has quit IRC13:29
*** spatel has quit IRC13:29
*** francoisp has quit IRC13:30
*** spatel has joined #openstack-nova13:31
*** francoisp has joined #openstack-nova13:32
*** cdent has joined #openstack-nova13:34
<bauzas> mriedem: around ?  13:41
<bauzas> mriedem: I replied to the thread but I wonder about something  13:42
<mriedem> yes  13:42
<bauzas> in both http://paste.openstack.org/show/734146/ and https://github.com/larsks/os-placement-tools/blob/master/check_placement.py they first ask about all the instances  13:42
<bauzas> and then they loop around allocations by checking some persisted object  13:42
<bauzas> but then, we could have some races  13:43
<bauzas> plus, it could be super huge to get all the instances for CERN eg.  13:43
<bauzas> so, I wonder whether I should call the API for every instance  13:43
<bauzas> erh  13:43
<bauzas> I mean, checking the DB  13:43
<mriedem> i'd think you'd avoid races by checking if an instance has a task_state != None, and if looking at migrations, is the migration in progress  13:46
<mriedem> mnaser's script doesn't loop over instances,  13:48
*** Luzi has quit IRC13:48
<mriedem> it gets hypervisors and the servers on those hypervisors and compares to the resource provider inventory/allocation per hypervisor/server  13:48
*** eharney has joined #openstack-nova13:49
*** amodi has joined #openstack-nova13:50
<mriedem> i'd think it would be easiest to first determine what kinds of problems the audit command is going to try and find  13:50
<mriedem> and then figure out the best way to implement that  13:50
*** awalende has quit IRC13:50
<mriedem> i think we know the two main issues are missing allocations (which heal_allocations should fix) and leaked / orphaned allocations, so i'd start by focusing on the latter issue  13:51
*** awalende has joined #openstack-nova13:51
*** BjoernT has joined #openstack-nova13:52
*** efried_pto is now known as efried13:54
<mriedem> i think we could pretty easily identify leaked allocations by starting from the provider allocations and working backward - looking for servers or migration records with the allocation consumer id (since we don't have the type recorded yet)  13:55
*** awalende has quit IRC13:55
*** awalende_ has joined #openstack-nova13:55
<mriedem> so i'd probably start by getting all hypervisors (nodes/resource providers) and for each hypervisor, get the allocations - similar to what mnaser's script does  13:59
<mriedem> getting the hypervisors + servers in the same call like his script would also tell you if allocations on that provider was a server or not - and if not, it's either leaked or it's a migration allocation  14:00
<mriedem> so no i don't think you need to loop over all instances  14:00
<mriedem> maybe i should just write this thing :)  14:00
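
A rough sketch of the audit flow mriedem outlines above: walk the resource providers, pull their allocations, and flag consumers that are neither an existing server nor a known migration record. This is not the eventual nova-manage implementation; the endpoints, token handling and error handling are placeholder assumptions for illustration only.

    # Hypothetical audit sketch: flag placement allocations whose consumer is
    # neither an existing server nor a known migration record.
    import requests

    PLACEMENT = 'http://placement.example.com'          # assumed endpoint
    NOVA = 'http://nova.example.com/v2.1'               # assumed endpoint
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',           # assumed pre-fetched token
               'X-OpenStack-Nova-API-Version': '2.59'}  # 2.59+ exposes migration uuids

    def get_json(url):
        resp = requests.get(url, headers=HEADERS)
        return resp.status_code, (resp.json() if resp.content else {})

    # Known migration consumers (the 'uuid' field is in the response with 2.59+).
    _, migrations = get_json(NOVA + '/os-migrations')
    migration_uuids = {m['uuid'] for m in migrations.get('migrations', [])}

    _, rps = get_json(PLACEMENT + '/resource_providers')
    for rp in rps['resource_providers']:
        _, allocs = get_json(PLACEMENT + '/resource_providers/%s/allocations' % rp['uuid'])
        for consumer in allocs.get('allocations', {}):
            status, _ = get_json(NOVA + '/servers/%s' % consumer)
            if status == 404 and consumer not in migration_uuids:
                print('possibly leaked allocation: consumer %s on provider %s'
                      % (consumer, rp['name']))
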
*** awalende_ has quit IRC14:00
*** _alastor_ has joined #openstack-nova14:02
*** priteau has joined #openstack-nova14:02
*** FlorianFa has quit IRC14:05
<bauzas> mriedem: I'm pretty sure I can do it  14:06
*** hongbin has joined #openstack-nova14:07
<openstackgerrit> François Palin proposed openstack/nova master: WIP - call volume detach even if terminate_connection fails  https://review.opendev.org/669674  14:16
<kashyap> mriedem: You're right ... indeed I didn't hear any responses on the list so far (http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007521.html)  14:19
<kashyap> alex_xu: Hiya, when you're about, do you have a comment on the above thread?  (And the change which Matt Cced you on, which removes Intel CMT events)  14:19
* kashyap goes back to the Other Yak(tm)  14:19
<efried> kashyap: As predicted, my wife immediately stole the sleep book  14:20
<efried> and it is life-changing  14:20
<kashyap> efried: Hehe.  Sharing is caring.  I'm sure she will not hide her fascination with the book  14:20
<efried> I don't know why, but stuff I've been telling her for years for some reason is having more of an impact coming from a complete stranger  14:20
<kashyap> efried: Ah, she already finished it?  14:20
<efried> no, hasn't finished yet, but every chapter brings new insight.  14:21
<efried> she has bought a copy for her parents  14:21
<efried> and recommended it to a number of people.  14:21
<kashyap> :-)  14:21
<efried> I got a couple chapters in before she stole it, and I'm definitely sold  14:21
<kashyap> efried: I know, you must've become like a "broken record", and people zone out the moment the broken reel starts spinning :D  14:21
<kashyap> Sorry, just teasing.  14:22
<efried> though for me it's more fascination with the science than altering lifestyle, which was already pretty close to right sleep-wise.  14:22
<kashyap> Yeah, agreed.  14:23
<kashyap> efried: Also, I'd highly recommend watching him give a tech talk.  He's one of the most effective speakers I've ever seen.  14:23
<efried> noted  14:24
<kashyap> The way he pauses for the words to sink in, the diction, is all worth observing.  At least to me.  14:24
<kashyap> https://www.youtube.com/watch?v=aXflBZXAucQ  14:24
*** dpawlik has quit IRC14:24
<kashyap> efried: If you click, look how quickly (within 5 seconds) he escalates the talk. :D  14:24
* kashyap stops drumming the beat and goes to muck around with Secure Boot patches  14:25
*** purplerbot has quit IRC14:28
<prometheanfire> was pointed here for nova adding a cinder volume to a server (in the record/db at least) but not actually adding it to the running VM (client reports a 504 when trying to add the volume)  14:29
*** purplerbot has joined #openstack-nova14:30
<prometheanfire> it shows up in the volumes_attached field, but can't remove any of them as they either do not exist or.....  14:30
<prometheanfire> Invalid volume: Invalid input received: Invalid volume: Unable to detach volume. Volume status must be 'in-use' and attach_status must be 'attached' to detach. (HTTP 400) (Request-ID: req-01d8a3c5-66f4-43a4-bac5-9f1104a292fe) (HTTP 400) (Request-ID: req-358132e4-38a1-493a-b87b-d135791bae2d)  14:30
*** ricolin has joined #openstack-nova14:35
*** mlavalle has joined #openstack-nova14:36
*** gouthamr has quit IRC14:47
*** awalende has joined #openstack-nova14:55
*** gouthamr has joined #openstack-nova14:56
*** awalende has quit IRC14:59
<efried> kashyap: Would you mind taking a look at this please?  15:02
<efried> https://review.opendev.org/#/c/667975/  15:02
<efried> It seems straightforward enough, but I'm not sure if there could e.g. be security problems, or if there's another accepted way to do this, etc.  15:02
<kashyap> efried: In a meeting; will look  15:03
<efried> no hurry  15:03
<efried> thanks  15:03
<stephenfin> lyarwood: Could you send this on its merry way? https://review.opendev.org/#/c/667355/  15:09
<lyarwood> stephenfin: looking  15:11
<lyarwood> stephenfin: sorry I did mean to sort that out last week once the stein change landed  15:11
*** BjoernT_ has joined #openstack-nova15:11
<lyarwood> stephenfin: done  15:11
<mriedem> gibi: comments in https://review.opendev.org/#/c/669188/ just so i'm sure i'm following it correctly (i know this patch was because i brought it up elsewhere)  15:13
*** dklyle has joined #openstack-nova15:13
<gibi> mriedem: looking..  15:13
*** BjoernT has quit IRC15:14
*** ivve has quit IRC15:16
*** gyee has joined #openstack-nova15:17
*** mdbooth has quit IRC15:21
<gibi> mriedem: replied. I only added some extra explanation but I think we agree.  15:26
<gibi> mriedem: I can re-spin the patch to fix the test redundancies  15:27
*** wwriverrat has joined #openstack-nova15:28
*** dklyle has quit IRC15:28
*** dklyle has joined #openstack-nova15:28
<mriedem> yeah just fix the test and i'm +2  15:28
<gibi> mriedem: OK, I will do it quickly  15:28
*** amodi has quit IRC15:29
<cdent> mriedem, efried: I have a situation where under high load the _update_to_placement call made at the end of instance_claim in the compute manager can fail on ResourceProviderConflict on all 4 retry attempts (because nova-scheduler is simultaneously making many allocations to the same resource provider). This fails the instance build even though the instance did build.  15:29
<efried> cdent: That is very interesting.  15:30
<cdent> I'm considering a hack to make instance_claim not call _update_to_placement and just let the periodic job do that. The reason _update_to_placement is actually talking to placement is because inventory (DISK_GB max_unit) does change after many (most) instance creations  15:31
<cdent> ideally that max_unit thing wouldn't be in place but that's not really an immediate option  15:31
<efried> I'm guessing backing off the retry interval wouldn't be of help, because the problem is the timing between the GET and the PUT(s).  15:31
<sean-k-mooney> cdent: were we not looking at disabling the periodic job at one point  15:31
<efried> wait, what?  15:31
<efried> max_unit changes?  15:32
<cdent> to account for multiple different sources of one disk_gb in a cluster  15:32
<mriedem> cdent: is the vcenter driver reporting different max_unit DISK_GB from update_provider_tree during the claim?  15:32
<cdent> being presented as one disk  15:32
<sean-k-mooney> cdent: you mean for things like ceph  15:32
<openstackgerrit> Balazs Gibizer proposed openstack/nova master: Remove assumption of http error if consumer not exists  https://review.opendev.org/669188  15:32
<mriedem> this is likely not libvirt/ceph  15:33
<mriedem> it's vcenter with shared storage provider or something?  15:33
<cdent> vcenter _faking_ shared storage providers  15:33
<sean-k-mooney> cdent: normally it should not need to change and should stay at the total capacity right  15:33
<cdent> sean-k-mooney: yes  15:33
<efried> oh, this isn't in libvirt, okay. Yeah, in libvirt max_unit == total and doesn't seem to change.  15:33
<cdent> but if you have N datastores in one cluster, no single request can be more than the free space on one of those datastores  15:33
<efried> okay, but as you say, fixing that is unrelated to the 409 bounce  15:34
<cdent> (this is why I've been eager for shared providers, as it could fix this (except that storage "policies" break all that)  15:34
<sean-k-mooney> cdent: well if you have overallocation set to something other than 1 it could  15:34
<mriedem> sure, but why would max_unit change between initially reporting the inventory and then during the claim?  15:34
<sean-k-mooney> but ya i get your point  15:34
<mriedem> when the service starts up, it runs upt which should report the max_unit from the Nth datastore with the most amount of total disk right?  15:34
<efried> mriedem: the claim itself reduces the amount of free space on one of those disks  15:34
<sean-k-mooney> we physically could not allocate more than the free space on the datastore  15:35
<cdent> let's assume for the moment that the max_unit hack is immutable as that's not really germane to the point, it's just the proximate cause  15:35
<mriedem> efried: couldn't we make that same argument for the libvirt driver then?  15:35
<kashyap> melwitt: efried: On that 'initenv' change (https://review.opendev.org/#/c/667976/2), doesn't it seem like a broken detection by 'cloud-init'?  15:36
<gibi> mriedem: done https://review.opendev.org/669188  15:36
<cdent> the point is that a failed _update_to_placement during instance claim (which is about inventory, traits and aggregates, not allocations) can fail an already succeeded instance  15:36
<sean-k-mooney> mriedem: i dont think we should be adjusting max_unit in either case  15:36
<efried> mriedem: In libvirt it's not an issue because we're only looking at one disk, so the (total - usage) already limits.  15:36
<kashyap> [Ignore me, I'll add a comment in the change.]  15:36
<mriedem> cdent: what do you mean by "already succeeded"?  15:36
<efried> kashyap: Note the subsequent patch and the bug it points to.  15:36
<mriedem> we do the claim before the driver.spawn right?  15:36
<sean-k-mooney> it should remain at total size and rely on placement to do the free calculation + allocation ratio  15:36
<cdent> mriedem: let me look at my notes  15:37
<sean-k-mooney> or total size of the largest datastore i guess for vcenter  15:37
<mriedem> yeah driver.spawn is way late in the build process, it's like the last thing  15:37
<mriedem> first we claim, then we build volumes and networking, then we call driver.spawn  15:37
<efried> anyway, I think the point here is whether we can maybe do without that _update_to_placement at the end of instance_claim? What else is it doing?  15:37
<mriedem> instance_claim calls _update which was historically for updating info on the compute node record  15:38
<cdent> mriedem: yeah, sorry, not that it did work, but _would_ work  15:38
<mriedem> for things like free_vcpus and whatnot  15:38
<mriedem> yeah i was going to say... https://github.com/openstack/nova/blob/86524773b8cd3a52c98409c7ca183b4e1873e2b8/nova/compute/manager.py#L2223 :)  15:38
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: ksa auth conf and client for cyborg access  https://review.opendev.org/631242  15:38
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Add Cyborg device profile groups to request spec.  https://review.opendev.org/631243  15:38
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Create and bind Cyborg ARQs.  https://review.opendev.org/631244  15:38
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Get resolved Cyborg ARQs and add PCI BDFs to VM's domain XML.  https://review.opendev.org/631245  15:39
*** amodi has joined #openstack-nova15:39
<mriedem> so i think what's being asked is a kwarg on _update to say whether or not to call _update_to_placement  15:39
*** mdbooth has joined #openstack-nova15:39
<efried> though the fact that it's doing something in this case is an argument in favor of keeping it. Kinda wondering what breaks if we don't have that. Presumably it would be possible for the disk allocation to be unsatisfiable late in the spawn process.  15:39
<mriedem> which in the startup and update_available_resource periodic would be yes,  15:40
<mriedem> but in instance_claim you're saying no  15:40
<cdent> mriedem: pretty much, yes  15:40
<efried> configurable ^ perhaps  15:40
<cdent> you'd have to know that if you chose to do that you have to keep the periodic regular  15:40
<cdent> s/regular/frequent/  15:40
<cdent> what I really want to do is not have inventory changing all the time...  15:41
<mriedem> reshapes are in that flow but it looks like reshape should only be allowed on startup  15:41
<kashyap> efried: Yeah, just read the bug.  Miguel is right.  And this change makes sense.  I don't see any security implications here (checked also w/ a libvirt dev)  15:41
<sean-k-mooney> cdent: as mriedem asked earlier is there any reason on start that we could not set the max_unit to the largest datastore and then not update it  15:41
<sean-k-mooney> or not update it outside the periodic  15:42
<mriedem> sean-k-mooney: from the upt perspective i'm not sure if the driver knows if it's called from the periodic, on startup, or during a claim  15:42
<sean-k-mooney> e.g. does it really need to be set to the free space  15:42
<mriedem> note that _move_claim will have the same issue here i'd think  15:42
<cdent> a) what mriedem said, b) the idea is to make sure placement has a way to prevent someone asking for more physical disk than is available  15:43
<cdent> mriedem: it's less of a risk there as they a) don't happen as much, b) happen even less in the vmware environment, c) this problem really only shows up when throwing > 2000 vms as the same nova-compute  15:43
*** helenafm has quit IRC15:43
<cdent> s/as the/at the/  15:43
<sean-k-mooney> cdent: ya i get that. i wonder if this is something we could do when we make the initial allocation  15:44
<mriedem> the initial allocation doesn't involve the driver  15:44
<sean-k-mooney> e.g. is this something we should consider for all drivers in general  15:44
<sean-k-mooney> true  15:44
<efried> kashyap: Thank you. Your votes/comments on those patches will be beneficial. Sounded like melwitt will also appreciate it.  15:44
<cdent> no other driver (that I'm aware of) would need this as they report real and true inventory  15:44
<sean-k-mooney> but should we be doing it for libvirt with ceph/nfs  15:44
<cdent> it's only when you have non-contiguous disk(s)  15:45
<sean-k-mooney> cdent: ok would this be solved in the future by representing  15:45
<mriedem> in the libvirt+ceph case, we report total and max_unit the same, and placement will reject requests that are too big because of the capacity calculation in placement, right?  15:45
<sean-k-mooney> the datastores as different sharing providers  15:46
<cdent> sean-k-mooney: yes, except for the thing I said above about something called "storage policy" which would disrupt placement's understanding of where things are  15:46
*** tesseract has quit IRC15:46
<sean-k-mooney> mriedem: yes which is fine for ceph as we have a single pool for vms  15:46
<mriedem> i'd prefer not to have a config option for this behavior,  15:47
<mriedem> you could add a variable on the driver itself,  15:47
<mriedem> which could be controlled by the driver depending on if the vcenter driver is doing this shared storage pool modeling thing  15:47
<cdent> yeah, I'm not even clear if it should ever be upstreamed  15:47
<mriedem> the RT would check that, and by default we do as we do today  15:47
* cdent nods  15:47
<mriedem> i've done similar hacks in the RT for ironic  15:47
<mriedem> b/c they are a special unicorn as well  15:47
<mriedem> *we've done  15:48
*** tesseract has joined #openstack-nova15:48
<cdent> unicorn's are a PITA  15:48
<cdent> hmm unicorns too  15:48
<mriedem> always pooping out frosting on my yard and such  15:48
<cdent> Thanks mriedem, sean-k-mooney, efried this has been very useful.  15:48
<cdent> and such  15:49
<kashyap> efried: Yes, will review  15:49
<cdent> If it turns out I can extract a little bit of 'if this virtdriver magic' i'll make it so  15:49
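
For context on the ResourceProviderConflict cdent is hitting: placement inventory updates are guarded by the resource provider generation, so a PUT racing with another writer (for example the scheduler writing allocations against the same provider) comes back 409 and has to be retried with a freshly read generation. A minimal sketch of that pattern follows; the endpoint, token and retry count are placeholder assumptions, not nova's actual report-client code.

    # Sketch of the generation-guarded inventory update that can 409 under load.
    import requests

    PLACEMENT = 'http://placement.example.com'      # assumed endpoint
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}       # assumed pre-fetched token

    def set_disk_max_unit(rp_uuid, max_unit, retries=4):
        url = '%s/resource_providers/%s/inventories' % (PLACEMENT, rp_uuid)
        for _ in range(retries):
            current = requests.get(url, headers=HEADERS).json()
            inventories = current['inventories']
            inventories['DISK_GB']['max_unit'] = max_unit
            payload = {
                # The generation we just read; if another writer updated the
                # provider in the meantime, placement rejects the PUT with 409.
                'resource_provider_generation': current['resource_provider_generation'],
                'inventories': inventories,
            }
            resp = requests.put(url, json=payload, headers=HEADERS)
            if resp.status_code != 409:
                return resp
        raise RuntimeError('still conflicting after %d attempts' % retries)
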
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: ksa auth conf and client for cyborg access  https://review.opendev.org/631242  15:50
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Add Cyborg device profile groups to request spec.  https://review.opendev.org/631243  15:50
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Create and bind Cyborg ARQs.  https://review.opendev.org/631244  15:50
<openstackgerrit> Sundar Nadathur proposed openstack/nova master: WIP: Get resolved Cyborg ARQs and add PCI BDFs to VM's domain XML.  https://review.opendev.org/631245  15:50
*** lpetrut has quit IRC15:50
*** tssurya has quit IRC15:53
<mriedem> gibi: looks like https://review.opendev.org/#/c/637955/31 and https://review.opendev.org/#/c/669188/2 conflict, maybe you want to rebase the fix underneath the bigger change?  15:53
<sean-k-mooney> cdent: by the way i assume there are business reasons that vmware would prefer not to map each compute node to a specific datastore and use cinder with volume types to express storage policies for everything else  15:57
<cdent> sean-k-mooney: yeah, the basic root of all these things is "let the DRS continue to work"  15:57
<sean-k-mooney> i know vmware have lots of fancy features built into the hypervisor and storage solution so im guessing the current approach is trying to expose them  15:57
<sean-k-mooney> right ok makes sense  15:58
<cdent> it creates a huge impedance mismatch between the way nova thinks and the way vmware thinks, but that's the way it goes  15:58
<sean-k-mooney> ya hyperv also has some limited clustering of vms and storage pools but its not as advanced as what vcenter does so the impedance mismatch is less of an issue for them i guess  15:59
<cdent> there's been discussion of trying to listen to events from the DRS via some agent that would then "correct" placement when DRS makes a change  16:00
<cdent> but that would require more effort than currently available resources  16:01
<gibi> mriedem: will do  16:01
<sean-k-mooney> that might require you to reshape allocations if it moved things, right, if you had a sharing provider per datastore  16:01
<sean-k-mooney> although if you had just one RP and hid the datastore i guess it would just have to modify the max_unit  16:02
<sean-k-mooney> as it does today  16:02
<sean-k-mooney> cdent: also i assume you are more or less the resources that are available to work on it  16:03
<cdent> sean-k-mooney: I consider myself already fully booked  16:04
<sean-k-mooney> cdent: yep probably overbooked  16:05
<prometheanfire> which db field is the 'volumes_attached' field sourced from?  16:07
<prometheanfire> smcginnis: you might know? ^  16:07
<mriedem> prometheanfire: block_device_mappings  16:08
<smcginnis> mriedem is the expert there. ;)  16:08
<prometheanfire> mriedem: thanks, I can't reproduce it, but cinder created a volume fine, but nova was 504 on the attach (it shows up in the db but no action taken on the compute host)  16:09
<prometheanfire> restart nova services with debug/verbose and it attaches fine  16:09
<prometheanfire> since it was only half added, the api can't delete it  16:09
*** whoami-rajat has quit IRC16:10
<mriedem> prometheanfire: meaning nova-compute asked cinder to create the volume, right?  16:10
<mriedem> bdm source_type != 'volume'  16:10
<mriedem> "the api can't delete it" - you mean the cinder api can't delete the volume? or the compute api can't delete the server?  16:11
<prometheanfire> mriedem: no, volume created manually (worked), then nova was asked to attach volume (failed)  16:11
<prometheanfire> api can't remove it from the instance  16:11
<prometheanfire> the failed part half attached it to the instance  16:11
*** BjoernT has joined #openstack-nova16:11
<prometheanfire> added to block_device_mapping table but no action taken on compute host  16:12
<openstackgerrit> Merged openstack/os-vif master: Add Python 3 Train unit tests  https://review.opendev.org/669438  16:12
<mriedem> so there is a bdm, and the volume shows up for the server but when you try to detach it what happens? nova gets an error from cinder?  16:12
<mriedem> saying it's already detached (not attached) or something?  16:12
<prometheanfire> nova questions cinder if it's attached, cinder says no, nova does nothing  16:13
<mriedem> as in nova doesn't delete the bdm so it stops showing up as attached to the server  16:14
<prometheanfire> does nothing -> says it's not attached, why are you asking me to detach something that's not attached (returned to user)  16:14
<mriedem> which release?  16:14
<prometheanfire> stein  16:14
<prometheanfire> I haven't been able to reproduce this after restarting nova stuff infra side  16:14
<prometheanfire> so maybe some timeout or hanging connection?  16:15
*** whoami-rajat has joined #openstack-nova16:15
*** BjoernT_ has quit IRC16:15
<mriedem> my guess is cinder is failing on this call: https://github.com/openstack/nova/blob/86524773b8cd3a52c98409c7ca183b4e1873e2b8/nova/compute/api.py#L4175  16:15
<prometheanfire> maybe, for the detach part  16:15
<mriedem> well what error do you get from the compute API?  16:16
<mriedem> "Invalid volume: %(reason)s"  16:16
<prometheanfire> No volume with a name or ID of '3e92f9b6-1a3b-4a7e-8487-6ff253e888db' exists.  16:16
<prometheanfire> since I removed the volumes  16:17
<prometheanfire> Invalid volume: Invalid input received: Invalid volume: Unable to detach volume. Volume status must be 'in-use' and attach_status must be 'attached' to detach. (HTTP 400) (Request-ID: req-01d8a3c5-66f4-43a4-bac5-9f1104a292fe) (HTTP 400) (Request-ID: req-358132e4-38a1-493a-b87b-d135791bae2d)  16:17
<mriedem> msg = _("Unable to detach volume. Volume status must be 'in-use' "  16:17
<mriedem>                     "and attach_status must be 'attached' to detach.")  16:17
<mriedem> yup  16:17
<mriedem> that's the begin_detaching call failing  16:17
<prometheanfire> because it never actually attached :D  16:18
<openstackgerrit> sean mooney proposed openstack/os-vif master: Sync Sphinx requirement  https://review.opendev.org/666387  16:18
<mriedem> right i get it  16:18
*** tesseract has quit IRC16:18
<openstackgerrit> Balazs Gibizer proposed openstack/nova master: nova-manage: heal port allocations  https://review.opendev.org/637955  16:18
<gibi> mriedem: rebased ^^  16:18
<mriedem> prometheanfire: interesting you wouldn't hit this in the compute service when the attach fails and delete the bdm https://github.com/openstack/nova/blob/86524773b8cd3a52c98409c7ca183b4e1873e2b8/nova/compute/manager.py#L5722  16:19
<prometheanfire> haven't been able to reproduce after restart of services, but am leaving it in debug mode in case it happens again  16:20
<openstackgerrit> Balazs Gibizer proposed openstack/nova master: Move consts from neutronv2/api to constants module  https://review.opendev.org/668945  16:21
<mriedem> were there errors in the compute log after the failed attach about not being able to delete the bdm?  16:21
<openstackgerrit> Balazs Gibizer proposed openstack/nova master: Use neutron contants in cmd/manage.py  https://review.opendev.org/668946  16:21
<openstackgerrit> Balazs Gibizer proposed openstack/nova master: Add 'resource_request' to neutronv2/constants  https://review.opendev.org/668947  16:21
<prometheanfire> compute didn't even attempt to attach from what I could see  16:22
<prometheanfire> didn't get that far  16:22
*** davidsha has quit IRC16:22
<mriedem> so...the rpc cast from nova api to nova compute failed?  16:22
<mriedem> you said you got a 504 somewhere  16:22
<prometheanfire> 504 from nova-api to client  16:22
<prometheanfire> openstackclient  16:23
<mriedem> ok i don't know why that would happen  16:23
<prometheanfire> I'm fine waiting til I can reproduce, I've left stuff in debug  16:24
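
One hedged way to spot the kind of half-attached volume prometheanfire describes is to cross-check nova's view of a server's volume attachments (backed by the block_device_mapping table) against cinder's view of those volumes; anything nova lists that cinder does not report as attached to that server is a leftover BDM. The endpoints and token handling below are placeholder assumptions, not a supported cleanup tool.

    # Cross-check nova's volume attachments for a server against cinder's view.
    import requests

    NOVA = 'http://nova.example.com/v2.1'               # assumed endpoint
    CINDER = 'http://cinder.example.com/v3/PROJECT_ID'  # assumed endpoint
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}           # assumed token

    def check_server(server_id):
        # What nova thinks is attached to the server.
        url = '%s/servers/%s/os-volume_attachments' % (NOVA, server_id)
        attachments = requests.get(url, headers=HEADERS).json()['volumeAttachments']
        for att in attachments:
            resp = requests.get('%s/volumes/%s' % (CINDER, att['volumeId']),
                                headers=HEADERS)
            if resp.status_code == 404:
                print('nova has a BDM for %s but the volume no longer exists'
                      % att['volumeId'])
                continue
            vol = resp.json()['volume']
            attached_to = {a['server_id'] for a in vol.get('attachments', [])}
            if vol['status'] != 'in-use' or server_id not in attached_to:
                print('stale attachment: volume %s is %s in cinder'
                      % (att['volumeId'], vol['status']))
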
<sean-k-mooney> stephenfin: jangutter so as i was saying on the placement channel ... i addressed my nit in https://review.opendev.org/#/c/666387/2 if ye want to take a look at that  16:26
<openstackgerrit> Balazs Gibizer proposed openstack/nova master: Translatable output strings in heal allocation  https://review.opendev.org/668925  16:28
*** cdent has quit IRC16:33
<sean-k-mooney> aspiers: so you know that domaincap api you added for sev ... its broken  16:33
*** dtantsur is now known as dtantsur|afk16:34
<sean-k-mooney> we cant default to q35 here https://github.com/openstack/nova/blob/master/nova/virt/libvirt/host.py#L744-L745 the logic breaks if you have emulators installed that dont support it  16:34
<sean-k-mooney> currently we dont call this code in nova yet because your sev stuff that uses it has not merged yet but i tried to use it for my device model filter stuff and it explodes beautifully  16:35
*** igordc has joined #openstack-nova16:37
<efried> mriedem: I'm going to bump current runways by, what say, 2 days due to last week's holiday?  16:38
<mriedem> shrug  16:39
<mriedem> not sure how much it matters  16:39
<efried> aspiers: SEV series is in merge conflict - any chance of getting that rebased soon?  16:40
*** ttsiouts has quit IRC16:43
*** ttsiouts has joined #openstack-nova16:44
*** ivve has joined #openstack-nova16:48
*** ttsiouts has quit IRC16:49
*** rpittau is now known as rpittau|afk16:50
*** luksky11 has quit IRC16:51
<sean-k-mooney> mriedem: so quick question. https://review.opendev.org/#/c/659703/8 changes the behavior of force config drive to remove the config drive after the first boot  16:54
<sean-k-mooney> to me that seems like a regression as it could lead to data loss in the form of injected files if the user does not copy the injected files from the config drive, or if the operator customises the config drive contents with data or scripts that run on each boot  16:55
*** psachin has quit IRC16:55
<sean-k-mooney> if you are using the dynamic vendor data and do not have the metadata service deployed, is this not an issue?  16:57
<sean-k-mooney> the injected files case might be ok if that sets instance.config_drive automatically but i think https://review.opendev.org/#/c/659703/8/nova/virt/configdrive.py@169 might break some use cases  16:58
<sean-k-mooney> as such im not sure how safe it is to backport this  16:59
*** derekh has quit IRC17:00
*** mdbooth has quit IRC17:02
<mriedem> "change the behavior of force config drive to remove the config drive after first boot" - you mean if you boot the server with force_config_drive=True, then change to force_config_drive=False? the bug is actually the opposite - they created a server without a config drive, then changed to force_config_drive=True and after that they can't reboot the servers on the host w/o a config drive since the file doesn't exist  17:02
<mriedem> i said on lyarwood's stein backport, "I'd like to move a bit slowly with this one since it's a very latent issue and is a bit of a behavior change, though justified for the reboot issue. It should also be pretty rare (I don't imagine lots of people are changing the force_config_drive value on their computes once they are deployed)."  17:03
<mriedem> so there it's been sitting  17:03
<sean-k-mooney> mriedem: no what im concerned about is the comment suggests that on a reboot the config drive will be removed  17:04
<mriedem> sean-k-mooney: i'm not really following you on what specific use case you think this regressed  17:04
<sean-k-mooney> e.g. its only present on first boot  17:04
<mriedem> what comment?  17:04
<sean-k-mooney> https://review.opendev.org/#/c/660914/1/nova/virt/configdrive.py  17:05
<mriedem> from the bug, if you go from no config drive to a forced config drive and reboot the vm, it pukes in libvirt  17:05
<mriedem> that's the bug they are fixing  17:05
<sean-k-mooney> if we reboot the vm that was started with a config drive launched_at will not be null right  17:05
<sean-k-mooney> no im talking about if you boot a vm with force_config_drive=true  17:06
<sean-k-mooney> then when you reboot it it should continue to have a config drive  17:06
*** dasp has quit IRC17:06
<sean-k-mooney> but instance.launched_at will not be None on the second boot  17:06
<mriedem> you mean on the reboot  17:06
<sean-k-mooney> yes  17:07
<sean-k-mooney> boot with force, it will have a config drive  17:07
<mriedem> so boot with force_config_drive=true, change force_config_drive=false, reboot the vm, the config drive isn't in the vm  17:07
<sean-k-mooney> then if you hard reboot it will go away right  17:07
<sean-k-mooney> no config change  17:07
<sean-k-mooney> if you leave it at force_config_drive=true  17:07
<sean-k-mooney> always  17:07
<sean-k-mooney> boot a vm  17:08
<sean-k-mooney> then hard reboot it  17:08
<sean-k-mooney> it should still have a config drive after the reboot  17:08
<sean-k-mooney> but on the reboot launched_at would be non-None  17:08
<sean-k-mooney> so 'not instance.launched_at' would be false  17:09
<mriedem> yeah i see what you're saying, and instance.config_drive is only set from the API?  17:09
<sean-k-mooney> i think its set in the flavor or image  17:09
<sean-k-mooney> actually no  17:09
<sean-k-mooney> you are right  17:09
<sean-k-mooney> the api  17:09
<sean-k-mooney> so im thinking about deployments where the metadata service is not deployed  17:09
<sean-k-mooney> so you use force_config_drive=true to make cloud init work and for things like vendor data or device role tagging  17:10
<sean-k-mooney> or file injection with the v2.1 api  17:10
*** igordc has quit IRC17:10
<sean-k-mooney> the last point is less relevant because that is deprecated so i would be ok with that on master, but im not sure we should backport that change and it might break people on upgrade that dont deploy the metadata service  17:11
<mriedem> yeah as i said i wasn't totally comfortable with backporting it  17:11
<mriedem> oh wait  17:12
<mriedem> i remember digging into this  17:12
<mriedem> https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1758  17:12
<mriedem> instance.config_drive will be set on compute if force_config_drive=True and the user doesn't specify it in the api  17:13
<mriedem> so i don't think your scenario holds  17:13
<mriedem> https://review.opendev.org/#/c/659703/4/nova/virt/configdrive.py@169  17:13
<mriedem> so tl;dr, you likely need to recreate whatever regression you think there is and report a bug if we're going to talk about reverting that change  17:14
<mriedem> and i've got a hungry kid here and need to make lunch  17:14
<sean-k-mooney> https://github.com/openstack/nova/blob/master/nova/virt/configdrive.py#L181  17:15
<sean-k-mooney> so if we have already set instance.launched_at then it wont  17:15
<sean-k-mooney> mriedem: sure take care of lunch :)  17:15
<mriedem> on first create, if the host has force_config_drive=True, we'll set instance.config_drive=True, and on subsequent calls to update_instance we won't update it because "if not True" is False  17:16
<mriedem> anyway, like i said, this would be easier if you can actually recreate a problem and report a bug rather than both of us mostly just guessing based on code  17:16
<mriedem> but i did look into this earlier on the patch  17:17
<sean-k-mooney> i think we need to reorder 1757 and 1758 https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1758  17:17
<sean-k-mooney> mriedem: ok no worries  17:17
<sean-k-mooney> i can test this in my devstack setup and see if it causes an issue or not  17:17
<sean-k-mooney> go make food :) ill let you know if i find anything  17:18
*** ralonsoh has quit IRC17:19
*** priteau has quit IRC17:21
<openstackgerrit> Matt Riedemann proposed openstack/nova master: Add host and hypervisor_hostname flag to create server  https://review.opendev.org/645520  17:21
<openstackgerrit> Matt Riedemann proposed openstack/nova master: Update AZ admin doc to mention the new way to specify hosts  https://review.opendev.org/666767  17:21
<openstackgerrit> Merged openstack/nova stable/rocky: Ignore hw_vif_type for direct, direct-physical vNIC types  https://review.opendev.org/667355  17:35
*** spatel has quit IRC17:38
<sean-k-mooney> ya so with master if you set force_config_drive=true, then boot a new vm and hard reboot it, it will not have the config drive on the second boot  17:40
<sean-k-mooney> so that introduced a new bug  17:40
<sean-k-mooney> which should be fixable by swapping instance.launched_at and configdrive.update_instance here https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1758  17:41
<sean-k-mooney> which ill go test now  17:41
*** tbachman has joined #openstack-nova17:41
*** igordc has joined #openstack-nova17:41
*** maciejjozefczyk has quit IRC17:42
*** dasp has joined #openstack-nova17:42
*** igordc has quit IRC17:43
*** maciejjozefczyk has joined #openstack-nova17:45
<sean-k-mooney> yep that fixes it  17:45
<sean-k-mooney> ill file a bug and push a patch  17:46
*** igordc has joined #openstack-nova17:46
<sean-k-mooney> the order didnt matter before because required_by did not depend on the launched_at field, now it does, so we need to set that after  17:47
<sean-k-mooney> ill try this: change the config to disable force and see if it does the right thing now  17:47
<sean-k-mooney> yep it works correctly after the config update too. old vms keep the config drive and new ones dont have them. and if i go back from false to true old vms dont get a config drive but new vms do  17:54
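
A simplified sketch of the ordering problem sean-k-mooney describes, not the actual nova code: once required_by() only forces a config drive for an instance that has never launched, update_instance() has to run before launched_at is stamped, otherwise instance.config_drive is never persisted and a hard reboot rebuilds the guest without the drive. The function names mirror the nova modules being discussed, but the bodies are illustrative assumptions.

    # Illustrative only: why the order of launched_at vs. config_drive matters.
    from datetime import datetime

    FORCE_CONFIG_DRIVE = True  # stand-in for CONF.force_config_drive

    class Instance:
        def __init__(self):
            self.launched_at = None
            self.config_drive = ''

    def required_by(instance):
        # After the change under discussion, the forced config drive only
        # applies to an instance that has never launched.
        return FORCE_CONFIG_DRIVE and not instance.launched_at

    def update_instance(instance):
        # Persist the decision so later hard reboots keep the drive.
        if not instance.config_drive and required_by(instance):
            instance.config_drive = True

    # Buggy order: launched_at is stamped first, so required_by() is already
    # False and config_drive never gets recorded -> a hard reboot drops the drive.
    inst = Instance()
    inst.launched_at = datetime.utcnow()
    update_instance(inst)
    assert inst.config_drive == ''

    # Fixed order: record the config drive decision before stamping launched_at.
    inst2 = Instance()
    update_instance(inst2)
    inst2.launched_at = datetime.utcnow()
    assert inst2.config_drive is True
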
*** maciejjozefczyk has quit IRC17:54
*** ociuhandu_ has joined #openstack-nova18:01
*** ociuhandu has quit IRC18:03
*** BjoernT_ has joined #openstack-nova18:05
*** ociuhandu_ has quit IRC18:06
*** BjoernT has quit IRC18:06
*** whoami-rajat has quit IRC18:20
*** priteau has joined #openstack-nova18:23
*** spatel has joined #openstack-nova18:28
<openstackgerrit> Matt Riedemann proposed openstack/nova master: Add host and hypervisor_hostname flag to create server  https://review.opendev.org/645520  18:30
<openstackgerrit> Matt Riedemann proposed openstack/nova master: Update AZ admin doc to mention the new way to specify hosts  https://review.opendev.org/666767  18:30
<spatel> sean-k-mooney: one of my vms is stuck in powering-on state so i did  18:32
<spatel> nova reset-state --active 368f2b45-c4b8-460e-9269-023ef80a69d1  18:32
<spatel> now when i start the vm it is saying - Cannot 'start' instance 368f2b45-c4b8-460e-9269-023ef80a69d1 while it is in vm_state active (HTTP 409) (Request-ID: req-29b635ac-4d8f-4139-9170-19824f019806)  18:33
<mriedem> i'm +2 on "Add host and hypervisor_hostname flag to create server  https://review.opendev.org/645520" now  18:33
<sean-k-mooney> spatel: you will need to hard reboot it  18:33
<spatel> what is the command line option to hard-reboot?  18:36
<spatel> let me try from GUI  18:36
<sean-k-mooney> "openstack server reboot --hard" but its available in horizon too  18:37
<openstackgerrit> sean mooney proposed openstack/nova master: libvirt: make config drives sticky bug 1835822  https://review.opendev.org/669738  18:42
<openstack> bug 1835822 in OpenStack Compute (nova) "vms loose acess to config drive with CONF.force_config_drive=True after hard reboot" [Medium,Confirmed] https://launchpad.net/bugs/1835822 - Assigned to sean mooney (sean-k-mooney)  18:42
<sean-k-mooney> mriedem: ^ that is the fix for the config drive issue  18:42
<spatel> sean-k-mooney: that works :)  18:43
<spatel> you are awesome  18:43
<sean-k-mooney> no i just hit the same problem  18:43
<sean-k-mooney> if you use reset-state to set it to active you need to use hard-reboot to fix it  18:44
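
The same recovery sean-k-mooney walks spatel through, sketched with python-novaclient instead of the CLI; the keystone URL and credentials are placeholder assumptions.

    # Reset a stuck instance to ACTIVE and hard reboot it (mirrors
    # `nova reset-state --active` followed by `openstack server reboot --hard`).
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client as nova_client

    auth = v3.Password(auth_url='http://keystone.example.com/identity/v3',  # assumed
                       username='admin', password='SECRET',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    nova = nova_client.Client('2.1', session=session.Session(auth=auth))

    server_id = '368f2b45-c4b8-460e-9269-023ef80a69d1'   # the stuck instance from the log

    nova.servers.reset_state(server_id, state='active')  # nova reset-state --active
    nova.servers.reboot(server_id, reboot_type='HARD')   # openstack server reboot --hard
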
*** ociuhandu has joined #openstack-nova18:47
*** whoami-rajat has joined #openstack-nova18:57
*** BjoernT_ has quit IRC19:05
*** ricolin has quit IRC19:21
*** wwriverrat has left #openstack-nova19:24
<openstackgerrit> Matt Riedemann proposed openstack/python-novaclient master: Add host and hypervisor_hostname to create servers  https://review.opendev.org/647671  19:26
*** BjoernT has joined #openstack-nova19:28
*** factor has quit IRC19:30
*** factor has joined #openstack-nova19:30
*** BjoernT_ has joined #openstack-nova19:41
*** BjoernT has quit IRC19:44
*** icarusfactor has joined #openstack-nova19:45
*** eharney has quit IRC19:46
*** factor has quit IRC19:47
*** factor has joined #openstack-nova20:01
*** ociuhandu has quit IRC20:02
*** icarusfactor has quit IRC20:03
*** bbowen has joined #openstack-nova20:14
<openstackgerrit> Merged openstack/os-vif master: Sync Sphinx requirement  https://review.opendev.org/666387  20:23
*** luksky11 has joined #openstack-nova20:25
*** icarusfactor has joined #openstack-nova20:25
*** factor has quit IRC20:27
<openstackgerrit> Lee Yarwood proposed openstack/nova master: nova-lvm: Disable [validation]/run_validation in tempest.conf  https://review.opendev.org/662176  20:34
<openstackgerrit> Lee Yarwood proposed openstack/nova master: nova-lvm: Disable [validation]/run_validation in tempest.conf  https://review.opendev.org/662176  20:34
*** priteau has quit IRC20:44
*** spatel has quit IRC20:48
*** priteau has joined #openstack-nova20:50
*** pcaruana has quit IRC20:50
<mriedem> oh lyarwood  20:59
<mriedem> wrong location for that variable  20:59
*** whoami-rajat has quit IRC21:00
*** nicolasbock has joined #openstack-nova21:04
*** BjoernT_ has quit IRC21:07
<sean-k-mooney> mriedem: im going to leave it for this evening but any idea why http://paste.openstack.org/show/754180/ would fail on the last assert.  21:10
<sean-k-mooney> it kind of looks like the compute manager code is not running  21:11
<sean-k-mooney> i might move it to the fake libvirt functional tests  21:11
<sean-k-mooney> but i would have expected this to work with the fake driver  21:11
<mriedem> sean-k-mooney: you're not waiting for the server to be ACTIVE for one  21:12
<mriedem> unless this is using the CastAsCall fixture, but i'd avoid that for new tests if possible  21:12
<sean-k-mooney> ah yes  21:12
<mriedem> you also shouldn't need more than one compute for this test  21:12
<sean-k-mooney> oh ya i know im just using a for loop because it was quick but i believe it just uses self.compute  21:13
<sean-k-mooney> i will change that  21:13
<sean-k-mooney> but ill try adding the wait, that is probably the issue  21:13
<sean-k-mooney> hmm no that does not seem to fix it  21:19
<sean-k-mooney> oh i didnt wait correctly  21:23
*** priteau has quit IRC21:27
*** priteau has joined #openstack-nova21:27
*** priteau has quit IRC21:29
*** brault has quit IRC21:30
<sean-k-mooney> ok ill add the rest of the test cases tomorrow but adding self._wait_for_state_change(created_server, 'BUILD') solved my issue, thanks  21:30
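
The wait mriedem suggests amounts to polling the server until it leaves the BUILD state before asserting anything about the spawned instance. A generic sketch of that idea against the plain compute API (not nova's functional-test helpers; the URL and token are assumptions):

    # Poll a server until it leaves BUILD (or a timeout expires) before asserting.
    import time
    import requests

    NOVA = 'http://nova.example.com/v2.1'   # assumed endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN'}     # assumed token

    def wait_for_status_change(server_id, from_status='BUILD', timeout=60):
        deadline = time.time() + timeout
        while time.time() < deadline:
            server = requests.get('%s/servers/%s' % (NOVA, server_id),
                                  headers=HEADERS).json()['server']
            if server['status'] != from_status:
                return server
            time.sleep(1)
        raise TimeoutError('server %s still %s after %ss'
                           % (server_id, from_status, timeout))

    # e.g. server = wait_for_status_change(created_server_id)
    #      assert server['status'] == 'ACTIVE'
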
*** ivve has quit IRC21:39
*** mriedem has quit IRC21:53
*** slaweq has quit IRC22:07
*** tjgresha has joined #openstack-nova22:10
*** mlavalle has quit IRC22:13
*** luksky11 has quit IRC22:19
*** slaweq has joined #openstack-nova22:23
*** slaweq has quit IRC22:28
*** lbragstad has quit IRC22:48
*** lbragstad has joined #openstack-nova22:50
*** tkajinam has quit IRC23:01
*** tkajinam has joined #openstack-nova23:01
*** hongbin has quit IRC23:11
*** andreaf has quit IRC23:15
*** andreaf has joined #openstack-nova23:15
*** _alastor_ has quit IRC23:23
*** hoonetorg has quit IRC23:32
*** rcernin has joined #openstack-nova23:36
*** spatel has joined #openstack-nova23:41
*** gyee has quit IRC23:42
*** hoonetorg has joined #openstack-nova23:45
*** BjoernT has joined #openstack-nova23:48
*** BjoernT has quit IRC23:48
*** gyee has joined #openstack-nova23:57
