Monday, 2015-11-09

*** rady has quit IRC00:06
*** lhcheng has quit IRC00:06
*** lhcheng has joined #openstack-operators00:07
*** mriedem is now known as mriedem_away00:10
*** zhangjn has joined #openstack-operators00:11
*** lhcheng has quit IRC00:11
*** lhcheng has joined #openstack-operators00:11
*** zhangjn has quit IRC00:15
*** jproulx has joined #openstack-operators00:18
*** lhcheng has quit IRC00:33
*** xavpaice has quit IRC00:34
*** lhcheng has joined #openstack-operators00:35
*** xavpaice has joined #openstack-operators00:38
*** Piet has joined #openstack-operators00:47
*** VW has joined #openstack-operators00:55
*** ahonda has joined #openstack-operators00:56
*** zhangjn has joined #openstack-operators00:56
*** zhangjn has quit IRC00:56
*** zhangjn has joined #openstack-operators00:57
*** VW has quit IRC00:59
*** zhangjn_ has joined #openstack-operators01:02
*** zhangjn has quit IRC01:04
*** lhcheng_ has joined #openstack-operators01:08
*** lhcheng has quit IRC01:08
*** lhcheng_ has quit IRC01:24
*** lhcheng has joined #openstack-operators01:26
*** vinsh has joined #openstack-operators01:32
*** dminer has joined #openstack-operators01:36
*** harshs has quit IRC01:39
*** smemon92 has joined #openstack-operators01:45
*** lhcheng has quit IRC01:55
*** harshs has joined #openstack-operators01:56
*** lhcheng_ has joined #openstack-operators01:56
*** harshs has quit IRC02:00
*** vinsh has quit IRC02:00
*** VW has joined #openstack-operators02:09
*** VW has quit IRC02:10
*** VW has joined #openstack-operators02:10
*** lhcheng_ has quit IRC02:17
*** lhcheng has joined #openstack-operators02:17
*** dminer has quit IRC02:21
*** lhcheng has quit IRC02:24
*** lhcheng has joined #openstack-operators02:25
*** VW has quit IRC02:34
*** VW has joined #openstack-operators02:38
*** VW has quit IRC02:38
*** VW has joined #openstack-operators02:39
*** vinsh has joined #openstack-operators02:41
*** harshs has joined #openstack-operators02:43
*** vinsh has quit IRC02:58
*** smemon92 has quit IRC03:00
*** harshs has quit IRC03:13
*** harshs has joined #openstack-operators03:23
*** lhcheng has quit IRC03:26
*** lhcheng_ has joined #openstack-operators03:26
*** lhcheng_ has quit IRC03:28
*** DarthVigil has quit IRC03:40
*** zhangjn_ has quit IRC03:55
*** zhangjn has joined #openstack-operators03:58
*** zhangjn has quit IRC03:59
*** zhangjn has joined #openstack-operators04:00
*** harshs has quit IRC04:06
*** zhangjn has quit IRC04:07
*** lhcheng has joined #openstack-operators04:21
*** zhangjn has joined #openstack-operators04:35
*** Marga_ has quit IRC04:38
*** lhcheng_ has joined #openstack-operators04:47
*** lhcheng has quit IRC04:50
*** britthouser has quit IRC04:58
*** britthouser has joined #openstack-operators05:07
*** harshs has joined #openstack-operators05:13
*** Piet has quit IRC05:15
*** Marga_ has joined #openstack-operators05:27
*** sanjayu has joined #openstack-operators05:46
*** pontusf has quit IRC05:50
*** pontusf has joined #openstack-operators05:50
*** zhangjn_ has joined #openstack-operators06:01
*** zhangjn has quit IRC06:04
*** zhangjn_ has quit IRC06:18
*** lhcheng has joined #openstack-operators06:19
*** lhcheng_ has quit IRC06:19
*** zhangjn has joined #openstack-operators06:26
*** zhangjn has quit IRC06:33
*** zhangjn has joined #openstack-operators06:35
*** zhangjn has quit IRC06:38
*** zhangjn has joined #openstack-operators06:44
*** zhangjn has quit IRC06:48
*** zhangjn has joined #openstack-operators07:18
*** harshs has quit IRC07:23
*** zhangjn has quit IRC07:24
*** Marga_ has quit IRC07:26
*** zhangjn has joined #openstack-operators07:26
*** miyagishi_t has joined #openstack-operators07:28
*** zerda has joined #openstack-operators07:46
*** zhangjn has quit IRC07:47
*** zhangjn has joined #openstack-operators07:53
*** zhangjn has quit IRC07:54
*** liverpooler has joined #openstack-operators07:56
*** zhangjn has joined #openstack-operators08:03
*** zigo has quit IRC08:16
*** bvandenh has joined #openstack-operators08:17
*** zigo has joined #openstack-operators08:19
*** berendt has joined #openstack-operators08:23
*** zhangjn has quit IRC08:27
*** zhangjn has joined #openstack-operators08:33
*** zhangjn has quit IRC08:36
*** zhangjn has joined #openstack-operators08:37
*** miyagishi_t has quit IRC08:49
*** VW has quit IRC08:49
*** bvandenh has quit IRC08:53
*** bvandenh has joined #openstack-operators08:55
*** matrohon has joined #openstack-operators09:00
*** cbrown2_ocf has joined #openstack-operators09:03
*** cbrown2_ocf has quit IRC09:07
*** cbrown2_ocf has joined #openstack-operators09:13
*** bvandenh has quit IRC09:16
*** openstack has joined #openstack-operators09:19
*** subscope has joined #openstack-operators09:22
*** lhcheng has quit IRC09:29
*** derekh has joined #openstack-operators10:02
*** electrofelix has joined #openstack-operators10:08
*** Marga_ has joined #openstack-operators10:15
*** belmoreira has joined #openstack-operators10:37
*** bradjones has joined #openstack-operators10:37
*** bradjones has quit IRC10:37
*** bradjones has joined #openstack-operators10:37
*** subscope has quit IRC10:51
*** subscope has joined #openstack-operators10:55
*** subscope has quit IRC11:03
*** maishsk has joined #openstack-operators11:05
*** zhangjn has quit IRC11:10
*** zhangjn has joined #openstack-operators11:36
*** zhangjn has quit IRC11:36
*** zhangjn has joined #openstack-operators11:36
*** bvandenh has joined #openstack-operators11:41
*** jaypipes has joined #openstack-operators11:41
*** weihan has joined #openstack-operators11:43
*** liverpooler has quit IRC11:47
*** maishsk has quit IRC12:01
*** liverpooler has joined #openstack-operators12:02
*** maishsk has joined #openstack-operators12:03
*** zerda has quit IRC12:06
*** liverpooler has quit IRC12:11
*** liverpooler has joined #openstack-operators12:11
*** bvandenh has quit IRC12:12
*** subscope has joined #openstack-operators12:14
*** bvandenh has joined #openstack-operators12:15
*** ig0r__ has joined #openstack-operators12:17
*** _nick has quit IRC12:41
*** _nick has joined #openstack-operators12:42
*** liverpoo1er has joined #openstack-operators12:51
*** sanjayu has quit IRC12:55
*** weihan has quit IRC12:57
*** subscope has quit IRC12:57
*** maishsk has quit IRC13:02
*** subscope has joined #openstack-operators13:08
*** maishsk has joined #openstack-operators13:08
*** VW has joined #openstack-operators13:44
*** VW has quit IRC13:45
*** ccarmack1 has left #openstack-operators13:45
*** regXboi has joined #openstack-operators13:53
*** liverpoo1er has quit IRC13:56
*** cbrown2_ocf has quit IRC14:03
*** cbrown2_ocf has joined #openstack-operators14:19
*** VW has joined #openstack-operators14:20
*** VW has quit IRC14:24
*** VW has joined #openstack-operators14:24
*** zhangjn has quit IRC14:30
*** mriedem_away is now known as mriedem14:35
*** subscope has quit IRC14:38
*** signed8bit has joined #openstack-operators14:44
*** ctina has joined #openstack-operators14:53
*** ctina_ has joined #openstack-operators14:54
*** ctina_ has quit IRC14:54
*** ctina has quit IRC14:54
*** liverpooler has quit IRC15:07
*** dminer has joined #openstack-operators15:10
*** cbrown2_ocf has quit IRC15:15
*** openstackgerrit has quit IRC15:17
*** openstackgerrit has joined #openstack-operators15:17
*** cbrown2_ocf has joined #openstack-operators15:27
*** csoukup has joined #openstack-operators15:30
*** berendt has quit IRC15:35
*** jmckind has joined #openstack-operators15:44
*** prometheanfire has joined #openstack-operators15:44
*** Brainspackle has quit IRC15:48
*** emagana has joined #openstack-operators15:49
*** maishsk has quit IRC15:56
*** maishsk has joined #openstack-operators16:07
*** krotscheck has joined #openstack-operators16:08
*** mdorman has joined #openstack-operators16:09
*** sanjayu has joined #openstack-operators16:11
*** SimonChung1 has quit IRC16:11
*** VW has quit IRC16:12
*** maishsk has quit IRC16:14
*** maishsk has joined #openstack-operators16:15
*** VW has joined #openstack-operators16:18
*** bvandenh has quit IRC16:21
*** vinsh has joined #openstack-operators16:26
*** VW has quit IRC16:30
*** VW has joined #openstack-operators16:30
*** Piet has joined #openstack-operators16:34
*** maishsk has quit IRC16:41
*** harshs has joined #openstack-operators16:46
*** gyee has joined #openstack-operators16:48
*** lhcheng has joined #openstack-operators16:58
*** jmckind is now known as jmckind_17:00
*** jmckind_ is now known as jmckind17:00
*** Brainspackle has joined #openstack-operators17:02
*** matrohon has quit IRC17:05
*** krotscheck has quit IRC17:12
*** armax has joined #openstack-operators17:14
*** SimonChung has joined #openstack-operators17:16
*** jmckind is now known as jmckind_17:18
*** VW has quit IRC17:19
*** jmckind_ is now known as jmckind17:21
*** krotscheck has joined #openstack-operators17:29
*** alop has joined #openstack-operators17:30
*** sanjayu has quit IRC17:44
*** regXboi has quit IRC17:47
*** derekh has quit IRC17:52
*** VW has joined #openstack-operators18:04
*** VW has quit IRC18:05
*** VW has joined #openstack-operators18:05
*** belmoreira has quit IRC18:08
klindgrenI keep reading in the ops mailing list that "upgrades" are terrible.  However, having done upgrades since havana, I don't recall them being terrible at all.  I got lucky on some of the upgrades - like I was already running ml2.  But it seems to me like people are blowing the upgrade issue *WAY* out of proportion18:10
*** jmckind is now known as jmckind_18:11
dmsimardI think it's especially that there's a lot of work involved in upgrading properly, without impacting end users or systems in large-scale production environments, and the features in the newer version might not warrant them spending that time18:15
dmsimardthat last message probably needed a couple commas or periods18:16
klindgrenI can see that the new features in a new version may not provide anything you care about.  But from what I have seen, our upgrades have had pretty much zero data plane impact.  Only the control plane takes a hit18:17
dmsimardklindgren: How did you upgrade from Icehouse to Juno ? It outright required an OS upgrade afaik (CentOS6 to CentOS7 and Ubuntu 12.04 to Ubuntu 14.04)18:19
klindgrenWe don't use distro packages18:19
klindgrenso cent6 runs Juno code without any issues18:19
klindgrenbut we were able to either upgrade or reload our control plane nodes to cent7 one by one18:20
dmsimardBah, that's cheating :)18:20
dmsimardyeah, doing upgrades without impacting customers is possible, just a lot of work18:20
dmsimardlive migrations around, re-installs (when using packages), etc.18:21
klindgrenin fact we have a hack in place to run kilo nova-compute and neutron-agent on Cent6 HVs18:21
klindgrenthat we might continue running under liberty18:21
*** stanchan has joined #openstack-operators18:21
klindgrendepends on when we can get our last 80 cent6 hv's upgraded to cent718:21
klindgrenI have shared the hack with the folks at Cern and they are using it / going to be using it for their kilo stuff18:22
klindgrensince they have like 500 or 800 hv's to reload - I don't recall which18:22
klindgreniirc, cern also got CentOS to build juno packages for cent618:23
dmsimardSo the lesson from the story is to roll your own packages ?18:23
klindgrenThat def helps :-)18:26
klindgrenthough - since it's python - building packages is not strictly required18:27
mdormani think a lot of people would really like to see more of the ‘yum update’ experience for updating, instead of this complicated orchestration of certain things that have to go in a certain order, upgrading schemas, etc. etc.18:35
mdormanwhich, granted, for a complicated system like openstack, is kind of a pipe dream18:35
*** elo has joined #openstack-operators18:37
*** prometheanfire has left #openstack-operators18:40
*** signed8bit is now known as signed8bit_ZZZzz18:41
xavpaicecertainly when I did an upgrade using distro packages there were significant challenges, in that we had to do the whole thing in one hit18:44
xavpaicethe switch to our own packages made it easier18:44
*** signed8bit_ZZZzz is now known as signed8bit18:44
xavpaiceif we had each service in its own container or VM that would have made the whole thing a bunch easier, and we may be happy to switch back to distro packages if we go down that route18:45
*** harshs has quit IRC18:50
xavpaicesomething I've learned so far is that the bugfixes in the newer versions are far more important than the new features - in fact so much so that I'd rather be running a new version and collecting bug reports than looking at long-fixed issues that haven't been backported18:51
klindgren+1 on bug fixes18:56
klindgrenit's also easier, if you have a bug and fix it locally, to get it upstreamed18:56
klindgrenas the delta between what you are running and where master is - is usually smaller/less painful18:56
*** signed8bit is now known as signed8bit_ZZZzz18:57
xavpaiceI guess the bug fixes we've had to carry locally have been the ones that are needed within hours because they're affecting a customer - so we can't afford to wait for the merge upstream18:59
xavpaiceand adding those to a package is miles easier than patching code in place18:59
xavpaicefor those making packages in venvs - have you had to add mysql-python to the requirements.txt?  Not sure if that's just a by-product of my setup, or whether others have the same thing19:00
xavpaicealso because I'm letting pbr do the version, and my tag is, e.g. 2015.1.2cat1, pbr gets quite upset19:01
xavpaiceupset if it's 0.11.0 that is...19:01
*** signed8bit_ZZZzz is now known as signed8bit19:03
klindgrenyea - PBR is terrible19:03
klindgrenwe just pass the ENV variable for PBR_VERSION when building packages19:04
xavpaiceit's only the versioning I'm struggling with, and I'm sure it's just that I don't understand how to do it right19:04
xavpaiceI can get the debs built OK, but when the code runs it spits errors about versions - guess I'll try that env var and see if it's better :)19:04
xavpaiceI should read the pbr code and figure out how it works19:05
klindgrenif you do 2015.1.2.119:05
klindgrenit should give you a pbr version of 2015.1.2.dev119:06
xavpaicePBR_VERSION=2015.1.2.1 makes a package with 2015.1.2.dev1?19:06
*** electrofelix has quit IRC19:18
*** rady has joined #openstack-operators19:19
*** signed8bit has quit IRC19:23
klindgrenno - if you use the PBR override it will be 2015.1.2.119:36
klindgrenif not the semver stuff will set it to 2015.1.2.dev119:36
klindgrenor -dev1 I forgot which19:36
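
For reference, a minimal sketch of the PBR_VERSION override discussed above, assuming a stock pbr-based project; the "mypackage" directory and the version string are placeholders:

    # Hypothetical sketch: pin the version pbr reports instead of letting its semver
    # logic derive one from git tags (which adds a dev suffix for untagged commits,
    # as described above).
    import os
    import subprocess

    os.environ["PBR_VERSION"] = "2015.1.2.1"   # explicit override, no dev suffix
    subprocess.check_call(["python", "setup.py", "sdist"], cwd="mypackage")

Exporting PBR_VERSION in the shell before running the package build accomplishes the same thing.
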
*** Piet has quit IRC19:40
*** VW has quit IRC19:44
*** VW has joined #openstack-operators19:46
*** VW has quit IRC19:46
*** VW has joined #openstack-operators19:47
*** berendt has joined #openstack-operators19:50
xavpaiceI'm happy with either - thanks for that!19:52
klindgrennp - harlowja and I spent a *long* time fighting with PBR for naming of packages19:54
*** jmckind_ is now known as jmckind19:57
*** cfarquhar has quit IRC20:01
*** matrohon has joined #openstack-operators20:01
mgagneklindgren: live upgrade will make our life much harder since you won't be able to skip versions anymore AFAIK20:05
mgagneklindgren: the problem isn't the upgrade itself or database migration, it's all the required prep work which we can't afford every 6 months20:05
*** cfarquhar has joined #openstack-operators20:07
*** cfarquhar has joined #openstack-operators20:07
*** kencjohnston has joined #openstack-operators20:08
*** harshs has joined #openstack-operators20:10
klindgrenyeah, for us it was 2-3 weeks of updating our patches to the new version of the code - removing patches that aren't needed, or re-writing them from scratch due to major arch changes20:10
mgagneklindgren: I like Tom Cameron's reply: http://lists.openstack.org/pipermail/openstack-operators/2015-November/008704.html20:12
klindgrenI was just reading that20:12
klindgrenHonestly - I wonder how many people who responded actually updated the details of their cloud20:13
klindgrenI *ALMOST* didn't in the last user survey - which would have said we were on Icehouse20:13
mgagneklindgren: I think the openstack community has yet to fully understand why people aren't upgrading every 6 months, and IMO they are often misled in their hypotheses and assumptions, made after misinterpreting surveys or whatnot.20:13
mgagneklindgren: that's just an impression, not (yet) based on any empirical evidence20:15
mgagnereply by Clint Byrum on the list: "Upgrading across 2 years of development would possibly be a herculean effort that so far nobody has even tried to tackle without stepping through all intermediary releases."20:16
mgagneI have yet to reply to this comment but IMO it wasn't made in good faith, since it takes an overly exaggerated example. We have plenty of examples of people skipping 1 or 2 releases. True, it's not 2 years of releases, but the idea of skipping versions is out there, and for good reason.20:17
xavpaicecan't do that now with Nova though20:19
mgagnecan't anymore yes20:19
mgagneand it's frustrating because now we will be stuck with an older version for longer20:19
xavpaiceup to that point, I gather it's fine, but I've never been confident enough to do it20:19
xavpaicemgagne: is this because of older Centos versions?20:20
mgagnewe aren't gaining *anything* from the live migration thing. we don't have issues with database migrations here. we have issues with the whole upgrade process.20:20
mgagnexavpaice: say that again?20:20
xavpaicethe difficulty upgrading - are you stuck on centos 6 and having trouble with that?20:20
mgagnexavpaice: i'm not, we use ubuntu20:21
mgagnexavpaice: ... where the prep work is much bigger than the upgrade itself20:21
xavpaicecos of the distro packages and dependencies with versions?20:21
mgagnewe do the operating system upgrade in between openstack upgrades20:21
xavpaiceah - you don't use the LTS release?20:22
mgagnewe use LTS20:22
mgagnelast time we upgraded was for precise -> trusty20:22
xavpaice...? I'm missing something20:22
mgagneIcehouse is the only version where both LTS are supported20:22
mgagnexavpaice: no, I manage that part (packages) and it isn't that bad. Takes me ~2-3 weeks to repackage, update configs, test provisioning and upgrade.20:22
xavpaicewe did the Trusty upgrade for Icehouse, but upgrading to Juno was easy once we got past the dependencies for packages20:23
mgagnexavpaice: now ask the business to prep itself (external systems depending on/using openstack)20:23
xavpaiceright, that's the same sort of pain we have too20:23
mgagnexavpaice: we have billing, custom interfaces, and management interfaces to upgrade, and we (the infra team) aren't the ones managing/developing those things20:23
xavpaice"hey, we have to just take down the entire region's network for a bit while we restart the routers"20:24
mgagnexavpaice: so the whole business has to go as fast as the openstack project itself, and we aren't ready yet20:24
xavpaicesame deal with a public cloud - customers are running things we have absolutely no control over20:24
xavpaicelive migration has been mixed, sometimes it works fine and sometimes not20:25
xavpaicemigrating routers between network nodes has been painful and not the most reliable - and sometimes, a 10 second network outage causes breakage20:25
drwahl^^ a lot of that depends on the version of libvirt you are running20:25
xavpaicefor sure20:25
mgagneyes20:25
drwahlactually, migrating routers has been amazingly smooth for us20:25
xavpaicemigrating across libvirt, qemu and kernel versions is.... risky...20:25
drwahlwe RARELY lose even a single packet during migration20:25
xavpaicedo you use VPNaaS?20:26
drwahlnot yet. we're using astara for our routers20:26
xavpaiceright - so there's the warning for you :)  We use VPNaaS (ipsec), and the L3 agent with OVS etc for our networking20:26
mgagneSo we can't slow down the openstack project itself. 6 months is fine. The problem is when you no longer have the ability to skip major versions and save time/money. You end up getting pressured by the community to upgrade faster, but you can't afford it20:26
xavpaice100% open source20:26
xavpaicemgagne: yep, feel your pain.  We just did the Juno upgrade, then last week I hit Keystone with Kilo and am trying to get the next one done this week.20:27
drwahlastara has VPNaaS support in the works. i can't think of any reason why VPNaaS would cause issues during migration if we don't see any packets being dropped20:28
drwahlbut there are always surprises :)20:28
mgagne"We assumed early on that the people deploying OpenStack would be more agile because of the ephemeral nature of cloud."20:28
mgagneon the list *now*20:28
xavpaicehmm, how did I not read about Astara before?20:30
drwahlit's a company we (dreamhost) spun out about a year ago20:30
drwahlwe created the akanda router, then spun off the company and it's not become a "big tent" project20:30
xavpaicethat makes it available to us - we're only allowed to use 100% open source software20:31
mgagnes/not/now/ ?20:31
drwahlit is 100% open source20:31
xavpaice:)20:31
* xavpaice reads up on it20:31
drwahlhttp://akanda.io/20:31
xavpaiceinterestingly, we're talking about cumulus too20:32
drwahlcumulus has been pretty good for us20:33
drwahlthere have been a few small hiccups, but overall, it's been really nice20:33
drwahlit's kind of weird to have your switch just be a linux box with a ton of ports on it20:33
drwahlbut it's nice, 'cause then you can easily use tools like chef to configure them20:33
wasmumok, wrapping my head around adding multiple networks to a single l3 agent.  Have that working.  The original provider net was a single net out.  When I would assign a floating IP and ping it, I would get the response from the ip I was pinging (the floating ip).  Now, after I have added the additional networks, added the config in the plugin.ini and the ml2_config, added the bridges and ports, added the neutron networks, and assigned a floating ip to an instance on one of the newly created/added nets, the reply from a ping comes from the internal IP of the instance20:34
wasmumI'm thinking that's not how it's supposed to look?  Did I miss something snat/nat related?20:34
drwahlxavpaice: if you care for a deep-dive into how we're using akanda, i'd be happy to share20:34
*** Marga_ has quit IRC20:35
xavpaicedrwahl: thanks, that would be good20:35
*** sanjayu has joined #openstack-operators20:36
claytonxavpaice: yes, we have to add mysql-python also20:45
claytonit's not in requirements.txt, I assume because it's not actually required.  you could be using db220:45
claytonthe choice seems to be between having requirements.txt list every db backend people could be using, or me manually adding mysql-python, so I'm ok with the latter20:45
xavpaicefair call20:46
xavpaicejust means a local patch is needed, but only because we're carrying venv-built local packages, so I guess that's the sort of thing I should do with quilt20:46
claytonI'm doing this for requirements.txt - http://paste.openstack.org/show/478388/20:46
claytonI don't use what's in the package directly20:47
klindgrenI find that a lot of the missing req's are defined in test-requirements.txt20:47
claytonxavpaice for heat, our requirements-source.txt looks like this - http://paste.openstack.org/show/478389/20:48
xavpaicebingo - the mysql part is the same as ours20:49
xavpaicethanks for that20:49
claytonthe update-requirements script I pasted will always produce a requirements.txt with everything frozen, and if you use branch names in the requirements-source.txt, it will freeze things as a commit hash20:49
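
A hypothetical sketch of the freezing idea described above; the actual update-requirements script is in the linked paste and may differ. File names follow the requirements-source.txt convention mentioned there:

    # Pin git branch references to commit hashes so rebuilds are reproducible;
    # plain version pins are passed through unchanged.
    import subprocess

    def freeze_requirement(line):
        line = line.strip()
        if not line.startswith("git+"):
            return line                                   # e.g. "mysql-python==1.2.5" stays as-is
        url_part, egg = line.split("#", 1)                # "git+https://...@stable/kilo", "egg=heat"
        url, branch = url_part[len("git+"):].rsplit("@", 1)
        sha = subprocess.check_output(["git", "ls-remote", url, branch]).split()[0].decode()
        return "git+{0}@{1}#{2}".format(url, sha, egg)

    with open("requirements-source.txt") as src, open("requirements.txt", "w") as out:
        for req in src:
            if req.strip() and not req.lstrip().startswith("#"):
                out.write(freeze_requirement(req) + "\n")
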
xavpaicehow is uwsgi working out for you with heat?20:49
klindgreneg: https://github.com/openstack/nova/blob/master/test-requirements.txt#L1120:49
claytonhaven't deployed it, but I think the dev work is mostly done20:49
claytonseems to be ok, I think we'll have to try it out and tweak settings20:49
claytonwe're planning to use uwsgi directly in http mode, and front it with haproxy in http mode20:49
xavpaicePyMySQL =/= mysql-python though?20:49
klindgrenyes20:50
claytonno, PyMySQL is the pure python implementation20:50
xavpaicethat's what I was thinking - and people were saying it's really slow20:50
claytonthat's the one the rackspace guy was saying was really slow in the liberty issues session20:50
claytonI was confused when he was saying that, since I didn't even know there were two different drivers.  We've always used python-mysql, since that's what ubuntu packages20:51
klindgrenah - right - sorry - I keep getting mixed up on the ones that redhat renames and the ones that are different.20:51
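
For anyone following along, a small sketch of how the two drivers under discussion are selected in a SQLAlchemy/oslo.db connection URL; hosts and credentials are placeholders:

    # mysql-python (the MySQLdb C extension) is SQLAlchemy's default driver for the
    # plain "mysql://" dialect; PyMySQL, the pure-python implementation, is selected
    # explicitly with "mysql+pymysql://".
    from sqlalchemy import create_engine

    engine_mysqldb = create_engine("mysql://nova:secret@db.example.com/nova")
    engine_pymysql = create_engine("mysql+pymysql://nova:secret@db.example.com/nova")
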
claytonI'm looking into moving our nova control servers into docker now, to hopefully avoid having to upgrade to kilo.1 before we go to liberty20:51
*** csoukup has quit IRC20:52
claytonso hopefully we can upgrade *just* nova control services to latest stable/kilo20:52
xavpaicewe were talking about containers for liberty, simply because we don't want to wait to finish the work on that before moving to kilo20:52
claytonxavpaice: we were planning to do keystone, heat, designate and glance before moving to liberty and the rest afterwards20:53
*** kencjohnston has quit IRC20:53
claytonwe have two primary operational issues with docker right now: 1) restarting the docker engine restarts all services20:53
claytonand 2) we have an issue with aufs getting wedged and not letting us delete containers20:53
xavpaicethose are both pretty significant20:53
claytonthe latter I'm hoping to fix by upgrading the kernel and switching to overlayfs20:53
*** kencjohnston has joined #openstack-operators20:54
claytonthe first one isn't too bad if ovs agent doesn't destroy everything when it restarts.20:54
claytonwe restart services all the time anyway, so the latter is stupid, but manageable20:54
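
As a side note, a quick way to confirm which storage backend a given docker daemon is using (aufs vs overlay), sketched with the docker-py client of that era; the socket path is the usual default and an assumption here:

    # Query the daemon's /info endpoint and print the active storage driver.
    import docker

    client = docker.Client(base_url="unix://var/run/docker.sock")
    print(client.info().get("Driver"))   # e.g. "aufs" or "overlay"
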
klindgrenwe are planning on actually doing the reverse.  Revamp our deployment process to include ci/cd and auto deployment, then move to liberty20:54
*** nikhil_k has quit IRC20:54
xavpaiceyou're on kilo now though aren't you?20:54
claytonyes20:54
xavpaiceyeah, so we're well behind you folk20:55
claytonnot sure if you meant me or kris, but we're both on kilo :)20:55
*** jmckind is now known as jmckind_20:55
* xavpaice hangs head in shame at still being so far behind20:55
xavpaice:)20:55
claytonwe've spent a ridiculous amount of time on upgrading to juno and kilo20:56
claytonbut I don't really see any other alternative20:56
raginbajinclayton:  The whole upgrading problem for us is all around upgrading neutron, restarting the agent, and it losing the flows.20:56
raginbajinWe went to Juno in containers this last upgrade on the controller nodes.20:56
raginbajinand haven't actually had any problems with aufs or docker-engine, so we've been lucky on that side of things.20:56
claytonraginbajin yeah, we've managed to mitigate that to some extent, at least on control nodes.20:57
xavpaicethat restarting issue is fixed in liberty iirc20:57
claytonon compute nodes it's fast enough to rebuild that it's not that big an issue20:57
raginbajinYeah it's still an outage20:57
claytondepends on your tolerance for pain though :)20:57
raginbajinnot mine, my users' tolerance for pain20:57
xavpaicealso depends on how many VMs on the hypervisor20:57
raginbajinand they don't like anything.20:57
claytonyou should watch the talk we did for tokyo on upgrading to kilo ;)20:58
raginbajinI'll definitely check it out. We are just at Juno now and the thought of going to kilo/liberty sounds kinda daunting. One of the biggest issues we had was around deprecated settings that don't mean what they used to mean20:59
claytonraginbajin this part specifically: https://youtu.be/47YAH5Km9ho?t=4m37s :)20:59
raginbajinthanks!20:59
claytonyou'll appreciate the 15 seconds there.20:59
claytonerr, maybe 45 seconds, I linked to the wrong point :)20:59
claytonhonestly, the OVS agent restart change is reason enough for us to upgrade to liberty21:00
claytonand upgrading to kilo was driven by the rabbit heartbeat changes21:00
claytonmfisch is upgrading rabbit in production tonight!21:01
raginbajinthose are the two biggest things that are killing us I think as well.21:01
*** berendt has quit IRC21:01
raginbajinbut juno was a little more rocky than we'd like, so we are taking a breather to let the pitchforks and torches die down21:01
claytonjuno was easy for us, compared to kilo :)21:02
claytonin fact, I'm pointing that out now, as I write my self-review ;)21:02
*** sanjayu has quit IRC21:03
*** jmckind_ is now known as jmckind21:06
*** Piet has joined #openstack-operators21:08
*** kencjohnston has quit IRC21:09
*** VW has quit IRC21:11
*** jmckind is now known as jmckind_21:13
*** VW has joined #openstack-operators21:13
*** VW has quit IRC21:14
*** VW has joined #openstack-operators21:14
klindgrenfor us our upgrades were all easy21:15
klindgrenbut I wonder how much of that is due to our chosen network arch21:15
klindgrenBTW we do upgrades at 9am ;-)21:16
xavpaiceI do upgrades 'after hours' but in reality, a public cloud has no such thing and I might as well do them at 9am21:16
xavpaiceyay for haproxy21:16
*** subscope has joined #openstack-operators21:16
claytonwell, our customers are all in the US, so we definitely have off hours.21:21
*** jmckind_ is now known as jmckind21:21
raginbajinWe've been doing them later on in the day, say around 5pm-6pm; that way it's not really late for us and we have some users we can use as guinea pigs if need be21:23
raginbajinklindgren: I definitely think your network arch makes it much easier thing to do.21:23
claytonI expect once we have HA routers in place and the OVS agent restart fix, the data plane impact should be almost nil21:23
claytonHA routers have been in the code base for like 2 releases now, so we're planning to try them once we're on liberty21:24
raginbajinThat's been our thinking as well.21:28
xavpaicewe were going to try that too, then apparently it's not great with VPNaaS21:31
xavpaiceso we might need to wait till Liberty for that21:31
claytonit doesn't work with L2pop until liberty apparently21:32
xavpaicethis is why we have to keep up with upgrades21:32
claytonyeap, exactly.21:33
*** k_stev has joined #openstack-operators21:46
mfischare we discussing the off-the-rails thread on operators?21:48
mfischclayton: I think we should first strive to get devs to backport a single release since they won't for the ovs fix...21:49
*** jaypipes has quit IRC21:49
claytonmfisch I think it started that way.  kris is amazed everyone doesn't find openstack upgrades to be trivial21:52
mfischI was surprised that Matt said instances were the biggest issue21:53
mfischother than virtual routers, instances don't cause us any issues21:53
claytonyeah, that seems crazy to me.  if you care about that, you do whatever you have to do to get live migration21:53
mfischwe don't l-m instances though, although we can21:53
xavpaiceinstances are only a problem when we want to do hypervisor maint, but that's not an openstack problem21:53
claytonsure, I think he meant for like operating system upgrades21:54
xavpaiceklindgren: thanks for the tip re pbr, that worked perfectly for me21:57
*** matrohon has quit IRC21:58
klindgrenxavpaice, cool21:58
klindgrenclayton, mfisch it's less that I think upgrades are trivial.  It's more that I think some people are making them out to be *way* harder than they actually are.  At least in my experience.21:59
mfischonce we don't have to move routers around like a shitty 3-card monte they will be way better21:59
klindgrenprovider networks - this time - for the win!22:00
*** jmckind has quit IRC22:02
*** nikhil has joined #openstack-operators22:27
jlkklindgren: they're hard if you're trying to preserve some level of uptime22:33
jlkor if you have busy clouds22:34
jlkthey were way harder in previous releases22:34
*** harshs has quit IRC22:39
*** rady has quit IRC22:41
klindgrenour upgrades have always been just about a 30 minute planned control plane outage - with 0 data-plane downtime.  Aside from the patch-rebasing and testing/planning phase, the upgrades themselves have gone smoothly without craziness.  But again, for me, we made some lucky design choices so I missed out on some of the upgrades other people had to go through, such as the neutron OVS -> ML2+OVS migration.  I was just pointing out that I seem to be reading people on the mailing list saying ZOMG upgrades, it's the end-of-the-world, when my experience tells me for the most part it's otherwise.  Especially if you aren't running with local patches that need to be rebased.22:42
*** harlowja_ has quit IRC22:43
*** harshs has joined #openstack-operators22:43
klindgrenHonestly, for those running icehouse/juno, just going to kilo for heartbeat support would most likely give them *more* uptime day-to-day22:43
*** delattec has quit IRC22:50
*** albertom has quit IRC22:51
*** csoukup has joined #openstack-operators22:53
jlkklindgren: yeah, for small clouds that works, small clouds that don't rely on vendor distros for the openstack software22:55
claytonklindgren it's not really meaningful that the actual upgrade itself is easy22:55
claytonI think most of us have automated it and agree that's the case22:55
jlklarge clouds, 30 minutes is laughable. Mostly due to very lengthy database migrations22:56
claytonit's all the research and testing leading up to the upgrade that is the time sink22:56
jlkand some clouds can't take 30 minutes of API outage, so they're trying to optimize for 0 minutes of API outage22:56
xavpaiceanyone having issues with configs changing with Juno -> kilo?  I am finding that after doing keystone, when I upgrade all the other services they get WARNING keystonemiddleware.auth_token [-] Authorization failed for token22:56
*** SimonChung has quit IRC22:57
*** SimonChung1 has joined #openstack-operators22:57
klindgrenpretty sure you have to redo the middleware paste config from juno -> kilo22:57
*** SimonChung has joined #openstack-operators22:57
*** SimonChung1 has quit IRC22:58
xavpaiceI've been nabbing the paste.ini from the distro packages, so haven't even started to look there22:58
*** SimonChung1 has joined #openstack-operators22:58
*** SimonChung2 has joined #openstack-operators22:58
*** SimonChung1 has quit IRC22:58
*** SimonChung has quit IRC22:58
*** albertom has joined #openstack-operators22:58
*** SimonChung has joined #openstack-operators22:59
*** SimonChung2 has quit IRC22:59
*** Marga_ has joined #openstack-operators23:00
*** Marga_ has quit IRC23:01
*** Marga_ has joined #openstack-operators23:01
xavpaicewait, that's the per-project api-paste.ini, not the one in /etc/keystone/keystone-paste.ini?23:02
klindgrenthe per-project paste23:02
klindgreniirc some of the middleware was deprecated between juno -> kilo, so if you haven't updated those since, say, icehouse - they've got the wrong middleware code paths in them.23:03
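
A rough sketch of checking for the stale paste configs described above; one common case from that era is an authtoken filter still pointing at the old keystoneclient location instead of keystonemiddleware. The config paths are assumptions, adjust per deployment:

    # Scan per-project api-paste.ini files for the deprecated auth_token filter
    # factory carried forward from older releases.
    import glob
    import configparser

    OLD = "keystoneclient.middleware.auth_token:filter_factory"
    NEW = "keystonemiddleware.auth_token:filter_factory"

    for path in glob.glob("/etc/*/api-paste.ini"):        # assumed config layout
        cp = configparser.RawConfigParser(strict=False)
        cp.read(path)
        for section in cp.sections():
            if cp.has_option(section, "paste.filter_factory"):
                if cp.get(section, "paste.filter_factory").strip() == OLD:
                    print("%s [%s]: deprecated factory, switch to %s" % (path, section, NEW))
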
klindgrenthough due to our app server layout we have to do all services at once on upgrade, due to package deps.  Hello, moving to containers.23:04
*** csoukup has quit IRC23:06
*** csoukup has joined #openstack-operators23:10
*** jmckind has joined #openstack-operators23:13
*** cdelatte has joined #openstack-operators23:21
*** subscope has quit IRC23:26
*** k_stev has quit IRC23:35
*** mdorman has quit IRC23:45
*** mriedem is now known as mriedem_away23:54
*** harshs has quit IRC23:54
*** dminer has quit IRC23:57
