Wednesday, 2017-11-15

*** ChiTo has quit IRC00:08
*** aojea has joined #openstack-operators00:28
*** aojea has quit IRC00:33
*** markvoelker has quit IRC00:37
*** chyka has quit IRC00:41
*** gfhellma_ has quit IRC01:11
*** gyee_ has quit IRC01:23
*** aojea has joined #openstack-operators01:28
*** aojea has quit IRC01:33
*** markvoelker has joined #openstack-operators01:37
*** Apoorva_ has quit IRC01:46
*** vijaykc4 has joined #openstack-operators01:59
*** chyka has joined #openstack-operators02:09
*** chyka has quit IRC02:14
*** AlexeyAbashkin has joined #openstack-operators02:22
*** AlexeyAbashkin has quit IRC02:26
*** aojea has joined #openstack-operators02:29
*** vijaykc4 has quit IRC02:29
*** vijaykc4 has joined #openstack-operators02:31
*** aojea has quit IRC02:33
*** fragatina has quit IRC03:10
*** fragatina has joined #openstack-operators03:13
*** fragatina has quit IRC03:17
*** aojea has joined #openstack-operators03:30
*** aojea has quit IRC03:34
*** rcernin has quit IRC03:34
*** rcernin has joined #openstack-operators03:36
*** vijaykc4 has quit IRC03:40
*** slaweq has joined #openstack-operators03:43
*** slaweq has quit IRC03:48
*** Apoorva has joined #openstack-operators04:08
*** dbecker has quit IRC04:24
*** sticker has quit IRC04:24
*** aojea has joined #openstack-operators04:31
*** aojea has quit IRC04:35
*** dbecker has joined #openstack-operators04:37
*** Apoorva_ has joined #openstack-operators04:46
*** Apoorva has quit IRC04:49
*** dmibrid_ has quit IRC04:52
*** AlexeyAbashkin has joined #openstack-operators05:22
*** AlexeyAbashkin has quit IRC05:26
*** chyka has joined #openstack-operators05:30
*** aojea has joined #openstack-operators05:31
*** chyka has quit IRC05:35
*** aojea has quit IRC05:36
*** Apoorva_ has quit IRC05:41
*** simon-AS559 has joined #openstack-operators05:42
*** simon-AS5591 has quit IRC05:43
*** yprokule has joined #openstack-operators05:57
*** makowals has joined #openstack-operators06:27
*** makowals_ has quit IRC06:28
*** arnewiebalck has quit IRC06:29
*** makowals has quit IRC06:29
*** arnewiebalck has joined #openstack-operators06:29
*** makowals has joined #openstack-operators06:30
*** aojea has joined #openstack-operators06:32
*** threestrands has quit IRC06:34
*** aojea has quit IRC06:37
*** Long_yanG has quit IRC06:41
*** LongyanG has joined #openstack-operators06:41
*** simon-AS559 has quit IRC06:41
*** sticker has joined #openstack-operators06:42
*** khyr0n has joined #openstack-operators06:50
*** slaweq has joined #openstack-operators06:51
*** slaweq has quit IRC06:55
*** TuanLA has joined #openstack-operators06:55
*** aojea has joined #openstack-operators06:57
*** aojea has quit IRC06:58
*** aojea has joined #openstack-operators06:58
*** aojea has quit IRC06:59
*** arnewiebalck_ has joined #openstack-operators06:59
*** arnewiebalck has quit IRC07:00
*** arnewiebalck_ is now known as arnewiebalck07:00
*** dtrainor has quit IRC07:04
*** slaweq has joined #openstack-operators07:07
*** spectr has joined #openstack-operators07:13
*** slaweq has quit IRC07:15
*** kelv_ has quit IRC07:18
*** rcernin has quit IRC07:20
*** Oku_OS-away is now known as Oku_OS07:24
*** spectr has quit IRC07:38
*** slaweq has joined #openstack-operators07:47
*** pcaruana has joined #openstack-operators07:51
*** slaweq has quit IRC07:52
*** belmoreira has joined #openstack-operators07:53
*** mmedvede has quit IRC07:59
*** purp has quit IRC07:59
*** maciejjozefczyk has quit IRC08:00
*** simon-AS559 has joined #openstack-operators08:01
*** purp has joined #openstack-operators08:04
*** slaweq has joined #openstack-operators08:05
*** mmedvede has joined #openstack-operators08:06
*** maciejjozefczyk has joined #openstack-operators08:06
*** dtrainor has joined #openstack-operators08:07
*** slaweq has quit IRC08:10
*** vijaykc4 has joined #openstack-operators08:11
*** tesseract has joined #openstack-operators08:17
*** vijaykc4 has quit IRC08:17
*** Jack_Iv has joined #openstack-operators08:33
*** Jack_Iv has quit IRC08:33
*** Jack_Iv has joined #openstack-operators08:34
<Jack_Iv> Does Magnum deploy a private etcd service for discovery? I don't actually have access to the public one. What is the solution?  08:35
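One way operators commonly avoid the public discovery.etcd.io dependency is to point Magnum at a private etcd discovery endpoint. The sketch below is a minimal illustration only, not the answer given in channel; the option name etcd_discovery_service_endpoint_format in the [cluster] group and the internal endpoint URL are assumptions to verify against the installed Magnum release.

    # Minimal sketch (assumed option name and placeholder endpoint): generate a
    # magnum.conf stanza that points cluster discovery at a private etcd
    # discovery service instead of the public https://discovery.etcd.io.
    import configparser
    import sys

    overrides = configparser.ConfigParser(interpolation=None)
    overrides.add_section("cluster")
    # Magnum is expected to substitute %(size)d with the requested cluster size.
    overrides.set(
        "cluster",
        "etcd_discovery_service_endpoint_format",
        "https://etcd-discovery.example.internal/new?size=%(size)d",
    )

    # Print the stanza to merge into /etc/magnum/magnum.conf on the controller.
    overrides.write(sys.stdout)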
*** AlexeyAbashkin has joined #openstack-operators08:37
*** jbadiapa_ is now known as jbadiapa08:43
*** simon-AS559 has quit IRC08:45
*** gkadam has joined #openstack-operators09:02
*** chyka has joined #openstack-operators09:06
*** zenpwner has joined #openstack-operators09:08
*** chyka has quit IRC09:11
*** khyr0n has quit IRC09:21
*** simon-AS5591 has joined #openstack-operators09:24
*** derekh has joined #openstack-operators09:34
*** belmoreira has quit IRC10:01
*** belmoreira has joined #openstack-operators10:06
*** makowals has quit IRC10:08
*** zenpwner has quit IRC10:15
*** TuanLA has quit IRC10:17
*** TuanLA has joined #openstack-operators10:18
*** TuanLA has quit IRC10:19
*** rmart04 has joined #openstack-operators10:40
*** rmart04 has quit IRC11:36
*** sticker_ has joined #openstack-operators11:36
*** sticker has quit IRC11:40
*** rmart04 has joined #openstack-operators11:41
*** Jack_Iv has quit IRC11:41
*** slaweq has joined #openstack-operators12:06
*** racedo has quit IRC12:08
*** slaweq has quit IRC12:11
*** sticker_ has quit IRC12:32
*** Caterpillar has quit IRC12:54
*** Caterpillar has joined #openstack-operators12:54
*** racedo has joined #openstack-operators13:00
*** makowals has joined #openstack-operators13:11
<openstackgerrit> Martin Mágr proposed openstack/osops-tools-monitoring master: Fix for check_neutron_floating_ip  https://review.openstack.org/516381  13:26
*** markvoelker has quit IRC13:29
*** markvoelker has joined #openstack-operators13:29
*** nguyentrihai has joined #openstack-operators13:33
*** liverpooler has quit IRC13:34
<dims> folks, anyone interested in volunteering? add your name at the bottom of the etherpad - https://etherpad.openstack.org/p/LTS-proposal  13:46
<dims> there are 3 people already!  13:47
<zioproto> Hello Ops, have a look at this bug and see if it matches your use case: https://bugs.launchpad.net/nova/+bug/1732428 - possible data loss  14:01
<openstack> Launchpad bug 1732428 in OpenStack Compute (nova) "Unshelving a VM breaks instance metadata when using qcow2 backed images" [Undecided,New]  14:01
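For operators wondering whether the bug above might apply to them, a small sketch follows that inspects an instance's libvirt disk and reports whether it is a qcow2 image with a backing file, i.e. the configuration the bug report concerns. The instance path assumes the default libvirt layout under /var/lib/nova/instances; adjust for the deployment.

    # Minimal sketch: report whether a nova/libvirt instance disk is qcow2 and
    # backed by a base image. Pass the instance UUID as argv[1]; the disk path
    # below assumes the default instances_path layout.
    import json
    import subprocess
    import sys

    disk = "/var/lib/nova/instances/%s/disk" % sys.argv[1]
    info = json.loads(
        subprocess.check_output(["qemu-img", "info", "--output=json", disk])
    )
    print("format:", info.get("format"))
    print("backing file:", info.get("backing-filename", "none"))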
*** slaweq_ has joined #openstack-operators14:08
*** fragatina has joined #openstack-operators14:11
*** slaweq_ has quit IRC14:12
*** makowals has quit IRC14:15
*** fragatina has quit IRC14:16
*** belmoreira has quit IRC14:23
*** liverpooler has joined #openstack-operators14:41
*** liverpooler has quit IRC14:45
*** liverpooler has joined #openstack-operators14:47
*** simon-AS5591 has quit IRC15:20
*** simon-AS559 has joined #openstack-operators15:20
*** belmoreira has joined #openstack-operators15:49
*** rmart04 has quit IRC16:07
*** Oku_OS is now known as Oku_OS-away16:13
*** markvoelker_ has joined #openstack-operators16:18
*** markvoelker has quit IRC16:20
*** simon-AS5591 has joined #openstack-operators16:29
*** simon-AS559 has quit IRC16:30
*** simon-AS559 has joined #openstack-operators16:35
*** simon-AS5591 has quit IRC16:36
*** simon-AS559 has quit IRC16:40
*** Apoorva has joined #openstack-operators16:45
*** makowals has joined #openstack-operators16:49
*** chyka has joined #openstack-operators16:52
*** Chris__ has joined #openstack-operators16:52
<openstackgerrit> Merged openstack/osops-tools-monitoring master: Fixed check_nova_instance  https://review.openstack.org/516716  16:53
*** belmoreira has quit IRC16:53
<openstackgerrit> Merged openstack/osops-tools-monitoring master: Fix for check_neutron_floating_ip  https://review.openstack.org/516381  16:54
<Chris__> Hi, I am having some issues with a compute node deployed using OpenStack Juno. When looking at my compute logs, I am seeing this error: https://paste.ofcode.org/ZXi2m8EE7U8jzfYxXW4Qzj . This seems to stop the nova-compute service. I've tried deleting the instance with nova delete <instance_id>, but when I try to restart the nova-compute service this error keeps popping up.  16:59
<Chris__> It seems the compute filter command it's getting stuck on is: blockdev: RegExpFilter, blockdev, root, blockdev, (--getsize64|--flushbufs), /dev/.*  16:59
<Chris__> Does the path /dev/.* store the instance id somewhere that nova-compute looks for when running?  17:02
*** tesseract has quit IRC17:03
*** gfhellma has joined #openstack-operators17:09
*** yprokule has quit IRC17:10
*** ChiTo has joined #openstack-operators17:12
*** d0ugal has quit IRC17:12
*** masber has quit IRC17:15
*** B_Smith has quit IRC17:28
*** makowals has quit IRC17:28
*** AlexeyAbashkin has quit IRC17:29
*** gyee_ has joined #openstack-operators17:34
*** rmart04 has joined #openstack-operators17:34
*** gfhellma_ has joined #openstack-operators17:34
*** rmart04 has quit IRC17:37
*** gkadam has quit IRC17:37
*** gfhellma has quit IRC17:37
*** racedo has quit IRC17:46
*** derekh has quit IRC17:59
*** fragatina has joined #openstack-operators18:01
*** fragatina has quit IRC18:01
*** fragatina has joined #openstack-operators18:02
*** pcaruana has quit IRC18:11
*** aojea has joined #openstack-operators18:15
*** fragatina has quit IRC18:15
*** gfhellma__ has joined #openstack-operators18:20
*** B_Smith has joined #openstack-operators18:24
*** gfhellma_ has quit IRC18:24
*** aojea has quit IRC18:31
<klindgren_> 1.) Juno is like *really* old.  18:33
<klindgren_> 2.) That error looks like it's coming from rootwrap - most likely it's missing a rootwrap filter, so it's not allowing the blockdev command to run  18:34
*** fragatina has joined #openstack-operators18:34
<klindgren_> I would look at your rootwrap.d stuff and make sure blockdev is allowed on that compute node  18:34
<klindgren_> https://ask.openstack.org/en/question/56558/usrbinnova-rootwrap-unauthorized-command-blockdev-getsize64-cinder1img-no-filter-matched/  18:36
<klindgren_> Chris__, ^^  18:36
<klindgren_> also - https://review.openstack.org/#/c/65944/1/etc/nova/rootwrap.d/compute.filters  18:37
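A minimal sketch of the check suggested above: scan the compute node's rootwrap filter files and report whether a blockdev filter is present. The path below is the stock nova rootwrap location of that era; adjust for the distro's packaging.

    # Minimal sketch: verify that nova's rootwrap filters on this compute node
    # allow the blockdev command that nova-compute is failing on.
    # Assumes the stock filter location /etc/nova/rootwrap.d/*.filters.
    import configparser
    import glob

    FILTER_GLOB = "/etc/nova/rootwrap.d/*.filters"

    found = False
    for path in glob.glob(FILTER_GLOB):
        filters = configparser.RawConfigParser(strict=False)
        filters.read(path)
        if filters.has_section("Filters") and filters.has_option("Filters", "blockdev"):
            print("%s allows blockdev: %s" % (path, filters.get("Filters", "blockdev")))
            found = True

    if not found:
        # The review linked above adds a line like this to compute.filters:
        #   blockdev: RegExpFilter, blockdev, root, blockdev, (--getsize64|--flushbufs), /dev/.*
        print("no blockdev filter found under %s" % FILTER_GLOB)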
*** makowals has joined #openstack-operators18:42
*** TxGirlGeek has joined #openstack-operators18:55
*** TxGirlGeek has quit IRC18:55
*** TxGirlGeek has joined #openstack-operators18:55
*** TxGirlGeek has quit IRC18:58
*** TxGirlGeek has joined #openstack-operators18:58
*** TxGirlGeek has quit IRC18:59
<Chris__> Thanks for the response, klindgren_  19:00
*** TxGirlGeek has joined #openstack-operators19:02
*** TxGirlGeek has quit IRC19:02
*** TxGirlGeek has joined #openstack-operators19:02
*** simon-AS559 has joined #openstack-operators19:11
*** AlexeyAbashkin has joined #openstack-operators19:13
*** AlexeyAbashkin has quit IRC19:17
*** d0ugal has joined #openstack-operators19:20
*** AlexeyAbashkin has joined #openstack-operators19:22
*** TxGirlGeek has quit IRC19:26
*** AlexeyAbashkin has quit IRC19:26
*** TxGirlGeek has joined #openstack-operators19:34
*** TxGirlGeek has quit IRC19:43
*** TxGirlGeek has joined #openstack-operators19:44
*** TxGirlGeek has quit IRC19:53
*** TxGirlGeek has joined #openstack-operators19:59
*** TxGirlGeek has quit IRC20:00
*** slaweq has joined #openstack-operators20:10
*** slaweq has quit IRC20:15
*** dmibrid_ has joined #openstack-operators20:16
*** d0ugal_ has joined #openstack-operators20:25
*** d0ugal has quit IRC20:26
*** jbadiapa_ has joined #openstack-operators20:33
*** jbadiapa has quit IRC20:36
*** d0ugal_ has quit IRC20:40
*** d0ugal has joined #openstack-operators20:40
*** d0ugal has quit IRC20:40
*** d0ugal has joined #openstack-operators20:40
*** Apoorva has quit IRC20:43
*** Apoorva has joined #openstack-operators20:43
*** Apoorva has quit IRC20:43
*** d0ugal has quit IRC20:52
*** TxGirlGeek has joined #openstack-operators21:00
*** TxGirlGeek has quit IRC21:04
*** simon-AS559 has quit IRC21:15
*** Chris__ has quit IRC21:15
*** liverpooler has quit IRC21:20
*** threestrands has joined #openstack-operators21:32
*** threestrands has joined #openstack-operators21:32
*** mriedem has left #openstack-operators21:53
*** mriedem has joined #openstack-operators21:53
<mriedem> klindgren_: i'm not even sure how to tell anymore, but attach interface doesn't work with upstream cells v1, right?  21:53
<mriedem> or,  21:54
<mriedem> is that routed to the child cell api?  21:55
<klindgren_> attach interface - afaik that always worked  21:59
<mriedem> yeah, i got confused by something internal where it's looking at either the parent api or the child api  22:00
<klindgren_> the things that we have patches around are mainly for creating the following from the top level: host aggregates, flavors, server_groups (and key pairs iirc)  22:01
<mriedem> yeah ok  22:01
<mriedem> everything that's in the nova_api db now  22:02
<mriedem> minus placement things  22:02
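To make the point above concrete: the resources listed (flavors, host aggregates, server groups, key pairs) now live in the nova_api database, so under cells v2 a single create against the top-level compute API is enough, with no per-cell syncing patches. The sketch below illustrates this with openstacksdk; the cloud name and flavor values are placeholders, and the SDK call names should be checked against the installed version.

    # Minimal sketch (placeholder cloud name and flavor values): with cells v2,
    # a flavor created once through the top-level compute API lands in the
    # nova_api database and is visible everywhere, with no per-cell duplication.
    import openstack

    conn = openstack.connect(cloud="mycloud")  # cloud entry from clouds.yaml

    flavor = conn.compute.create_flavor(
        name="m1.example", ram=2048, vcpus=2, disk=20)
    print("created flavor", flavor.id)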
*** makowals has quit IRC22:06
*** slaweq has joined #openstack-operators22:11
*** liverpooler has joined #openstack-operators22:12
<mgagne> mriedem: there are patches floating around for that IIRC  22:13
<mgagne> mriedem: just grep for CellsV1 to see the extent of what I (and other ops) need to forward-port each release for cellsv1: https://github.com/NeCTAR-RC/nova/commits/nectar/mitaka  22:14
*** slaweq has quit IRC22:15
<mriedem> but if/when you can get to that sweet place of using cellsv2, you won't need those anymore  22:16
<mgagne> mriedem: sure, still playing catch-up. and also, tbh, I still don't fully understand what I will need to do to get to cellsv2. there are upgrade guides available, but I have a very hard time digesting the information and figuring out how I will need to update my deployment tools to make it happen.  22:17
<mriedem> yeah, and that's mostly because we (nova devs) don't have large cells v1 clusters to kick at, so we can provide suggestions on what we think needs to happen, but we also need large cells v1 users to help us and each other with that - which belmiro is doing a bit with what he's doing at CERN  22:18
<mriedem> and i think nectar is going to do it a little differently too,  22:18
<mgagne> mriedem: also, english is not my first language, so I can easily get lost trying to read it all and see how it applies to me (or not)  22:18
<mriedem> e.g. per-cell placement service or a global placement  22:18
<mriedem> i don't know yet what godaddy is going to do, not sure if they know :)  22:18
<mgagne> I have *zero* idea so far about what I will need to do to get the placement api, haven't fully read about the subject yet  22:19
<mriedem> i think CERN is the first large cells v1 user moving to ocata?  22:20
<mgagne> those are major changes and they slow down our upgrade pace, since we need to read and try to grasp what this is all about and, eventually, how to operate and support it.  22:20
<mriedem> yes, i understand  22:20
<mriedem> i can't fully appreciate it, but i can understand that this isn't a simple thing  22:20
*** AlexeyAbashkin has joined #openstack-operators22:21
<mriedem> fwiw, nova considers placement external and global, like keystone and neutron and cinder  22:21
<mgagne> compared to keystone: I jokingly told a coworker I would upgrade it in our private cloud this afternoon, and did it in less than a day. Can't do the same with Nova tbh, too many changes, patches, test cases, etc.  22:21
<mriedem> well, that's not really a fair comparison, is it?  22:22
<mriedem> you're not running keystone agents on 100K machines  22:22
<mgagne> well, nova is the only one falling behind in our setup tbh =)  22:22
<mgagne> I think the number of agents is irrelevant  22:22
<mriedem> so you're falling behind because of a large number of patches to add functionality to the API, because you're using cellsv1, right?  22:23
<mgagne> because once the details are ironed out, the upgrade is relatively fast  22:23
*** rcernin has joined #openstack-operators22:23
<mgagne> mriedem: partially. we also support baremetal/ironic, so we need to test those use cases too.  22:23
<mriedem> i'm not sure what "too many test cases" means  22:23
<mriedem> would you rather we test less?  22:24
<mgagne> I'm not sure what you are suggesting  22:24
<mriedem> you said, "Can't do the same with Nova tbh, too many changes, patches, test cases, etc."  22:25
<mriedem> what does 'too many test cases' mean?  22:25
<mriedem> i'm assuming you just mean merge conflicts or test failures when syncing up your code with upstream  22:25
<mgagne> "... that I have to test"  22:25
<mriedem> oh, because of baremetal  22:25
<mriedem> plus kvm  22:25
<mgagne> yes  22:25
<mgagne> ~12 images, at least ~3 major hw setups (mostly related to network)  22:25
*** AlexeyAbashkin has quit IRC22:26
<mgagne> so 36 baremetal use cases if you want full coverage  22:26
<mriedem> so would people rather we just say nova is feature complete and call it a day?  22:26
<mgagne> so it's not really nova's problem here.  22:26
<mgagne> but with major refactors coming (cells, placement, etc.) it's just more on my already full plate.  22:27
<mriedem> sure, i get that,  22:27
<mriedem> cellsv2 is meant to replace the problem that is cellsv1  22:27
<mriedem> so everyone is aware  22:27
<mriedem> and allow non-cellsv1 people to scale out in the same way w/o all of the extra headache  22:27
<mgagne> sure... meanwhile, I've been forward-porting patches for 4-5 releases already.  22:27
<mgagne> and can't easily catch up to the latest nova release because I'm caught in an upgrade nightmare.  22:28
<mriedem> it would be useful to know how much of that forward-porting work is due to being on cells v1 though  22:28
<mriedem> like, if 75% of your patch load is to fill gaps in the api for cells v1, that goes away if/when you get to cellsv2  22:29
<mriedem> because it works natively with the compute api  22:29
<mriedem> if the other 25% is some amount of custom stuff or whatever, then i'm not sure what to do about that except encourage you to upstream those, like the extend volume stuff we worked on in pike,  22:30
<mriedem> and if half of that 25% is common across a lot of nova deployments, but ops aren't telling us everyone is carrying the same patch, we need to know that  22:30
<mgagne> sorry, I don't want to be harsh, but being told for a couple of releases/years already that cellsv2 is the holy grail is tiresome.  22:31
<mgagne> ops tried to upstream cellsv1 patches but were told it's unsupported and they won't be merged anymore.  22:31
<mriedem> yeah, because we were trying to focus on cells v2  22:32
<mriedem> so rather than limp along with a thing we don't want,  22:32
<mriedem> we focused on the future replacement  22:32
<mriedem> which makes those api changes to make cells v1 feature complete irrelevant  22:32
<mgagne> ok, I don't know what to say anymore  22:34
<mriedem> that's fine :)  22:34
<mriedem> i'm not trying to argue, just trying to understand and share my perspective  22:34
<mgagne> about "ops aren't telling us everyone is carrying the same patch, we need to know that": we did, and we didn't get a satisfactory answer.  22:34
<mriedem> i do remember going through the etherpad of patches and noting on several, things like "this shouldn't be a problem, it's a bug fix, just push it up"  22:35
<mriedem> for the API parity things, i did bring it up a few releases ago and the consensus was to not take those on because of the focus on v2  22:36
<mgagne> I remember that one  22:36
<mriedem> so it's not like it wasn't considered  22:36
*** ChiTo has quit IRC22:36
<mriedem> this was specifically the neutron external events / vif plug timeout stuff  22:36
<mgagne> I don't think there are that many patches related to the API, mostly bug fixes  22:36
<mgagne> That being said, nectar already did the job of forward-porting the patches, so I can't complain. But given the actual situation on my side, it's hard to invest that much to perform an upgrade every 6 months.  22:38
*** lbragstad has joined #openstack-operators22:43
*** d0ugal has joined #openstack-operators22:51
*** ChiTo has joined #openstack-operators22:51
*** ChiTo has quit IRC22:59
*** ChiTo has joined #openstack-operators22:59
*** AlexeyAbashkin has joined #openstack-operators23:20
*** AlexeyAbashkin has quit IRC23:25
*** masber has joined #openstack-operators23:30
