Friday, 2015-11-06

*** zhangjn has joined #openstack-operators00:03
*** Marga__ has quit IRC00:12
*** Marga_ has joined #openstack-operators00:13
*** Marga_ has quit IRC00:17
*** Marga_ has joined #openstack-operators00:17
*** SimonChung1 has quit IRC00:20
*** SimonChung has joined #openstack-operators00:20
*** zhangjn has quit IRC00:22
*** Piet has joined #openstack-operators00:51
*** zhangjn has joined #openstack-operators01:01
<klindgren> mgagne, can you get me the neutron wsgi logging output of a neutron net-list call. Sanitize whatever you want  01:01
<klindgren> 2015-11-05 13:45:24.302 14549 INFO neutron.wsgi [req-ab633261-da6d-4ac7-8a35-5d321a8b4a8f ] 10.224.48.132 - - [05/Nov/2015 13:45:24] "GET /v2.0/networks.json?id=2d5fe344-4e98-4ccc-8c91-b8064d17c64c HTTP/1.1" 200 655 0.027550  01:02
<klindgren> is what I get  01:02
<klindgren> but I am seeing logs where [ req-blah ] is [ req-blah user_id project_id ]  01:02
<klindgren> and am wondering WTF is wrong with my config  01:02
<klindgren> logs from other people, that is  01:03
<mgagne> from dev: http://paste.openstack.org/show/478137/  01:04
<mgagne> with eventlet  01:04
<klindgren> son of a f*cking b*tch  01:07
<klindgren> so  01:07
<klindgren> so neutron.conf [DEFAULT] logging_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s  01:08
<klindgren> will log the username/project name on every wsgi request  01:08
<xavpaice> oooh  01:08
<klindgren> so if someone happens to go rampant and delete, say, a ton of floating IPs because they happened to have an admin user  01:08
<klindgren> you could go kick them in the balls  01:09
<klindgren> not confirming nor denying that may have happened  01:09
<klindgren> so yea  01:10
<klindgren> note to everyone - if you care about traceability in your neutron logs  01:10
<klindgren> you might want to set something like that in neutron.conf  01:10
<xavpaice> :)  01:10
<xavpaice> does that work in the other logs too?  01:10
<xavpaice> (or something similar)  01:10
<logan2> *saves this* thanks  01:11
<klindgren> after a full day of going down this rabbit hole  01:11
<klindgren> anything that uses oslo.log should have that  01:11
<xavpaice> nice  01:11
<klindgren> apparently devstack does this shit by default  01:11
<klindgren> so yea  01:11
<xavpaice> dammit, if it's the default in devstack, why not the default in real life...  01:12
<klindgren> because it worked in devstack, lolz!  01:12
<klindgren> https://review.openstack.org/#/c/172508/2  01:13
<klindgren> apparently it's getting removed  01:13
<klindgren> but yea  01:13
* klindgren goes and enables that everywhere  01:13
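(For reference, a minimal sketch of the neutron.conf setting discussed above. The format string is the one klindgren pasted; since logging_context_format_string is an oslo.log option, the same line works for other services that use oslo.log.)

    # neutron.conf -- sketch only; adjust the format string to taste
    [DEFAULT]
    # include user_name/project_name in the request context so API calls are traceable
    logging_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s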
*** SimonChung has quit IRC01:33
*** verdurin has quit IRC01:35
*** dminer has quit IRC01:44
*** verdurin has joined #openstack-operators01:49
*** VW has quit IRC02:19
*** harshs has quit IRC02:22
*** Marga_ has quit IRC02:50
*** Marga_ has joined #openstack-operators02:51
*** rebase has quit IRC02:51
*** Marga_ has quit IRC02:55
*** ccarmack has joined #openstack-operators03:01
*** lhcheng has quit IRC03:11
*** k_stev has joined #openstack-operators03:11
*** k_stev has quit IRC03:19
*** rebase has joined #openstack-operators03:19
*** rebase has quit IRC03:19
*** harshs has joined #openstack-operators03:20
*** albertom has quit IRC03:32
*** k_stev has joined #openstack-operators03:37
*** klindgren_ has joined #openstack-operators03:41
*** albertom has joined #openstack-operators03:41
*** klindgren has quit IRC03:43
*** sanjayu has joined #openstack-operators03:45
*** harshs_ has joined #openstack-operators03:50
*** k_stev has quit IRC03:52
*** klindgren_ is now known as klindgren03:53
*** harshs has quit IRC03:54
*** harshs_ is now known as harshs03:54
*** sanjayu has quit IRC04:01
*** emagana has quit IRC04:10
*** lhcheng has joined #openstack-operators04:17
*** lhcheng_ has joined #openstack-operators04:18
*** subscope has joined #openstack-operators04:18
*** fawadkhaliq has joined #openstack-operators04:20
*** k_stev has joined #openstack-operators04:21
*** lhcheng has quit IRC04:21
*** k_stev has quit IRC04:22
*** emagana has joined #openstack-operators04:33
*** emagana has quit IRC04:41
*** subscope has quit IRC05:02
*** subscope has joined #openstack-operators05:05
*** zhangjn has quit IRC05:09
*** Marga_ has joined #openstack-operators05:14
*** Marga_ has quit IRC05:14
*** Marga_ has joined #openstack-operators05:15
*** zhangjn has joined #openstack-operators05:16
*** zhangjn has quit IRC05:29
*** lhcheng_ has quit IRC05:30
*** lhcheng has joined #openstack-operators05:30
*** stanchan has joined #openstack-operators05:45
*** nikhil_k has joined #openstack-operators05:48
*** nikhil has quit IRC05:48
*** lhcheng has quit IRC06:04
*** zhangjn has joined #openstack-operators06:08
*** subscope has quit IRC06:26
*** openstackgerrit has quit IRC06:31
*** openstackgerrit has joined #openstack-operators06:32
*** harshs has quit IRC06:38
*** david-lyle has joined #openstack-operators06:40
*** simon-AS559 has joined #openstack-operators06:42
*** david-lyle has quit IRC06:45
*** simon-AS559 has quit IRC06:51
*** Marga_ has quit IRC07:01
*** fawadkhaliq has quit IRC07:19
*** simon-AS559 has joined #openstack-operators07:19
*** simon-AS5591 has joined #openstack-operators07:20
*** sanjayu has joined #openstack-operators07:22
*** simon-AS559 has quit IRC07:24
*** fawadkhaliq has joined #openstack-operators07:34
*** simon-AS5591 has quit IRC07:37
*** fawadkhaliq has quit IRC07:52
*** lhcheng has joined #openstack-operators07:52
*** lhcheng has quit IRC07:57
*** fawadkhaliq has joined #openstack-operators07:58
*** matrohon has joined #openstack-operators08:11
*** ahonda has quit IRC08:13
*** simon-AS5591 has joined #openstack-operators08:25
*** zigo has quit IRC08:37
*** zigo has joined #openstack-operators08:42
*** andyhky has quit IRC08:42
*** andyhky has joined #openstack-operators08:44
*** racedo has joined #openstack-operators08:48
*** zerda has joined #openstack-operators08:51
<zerda> Hello. Does anybody have some performance figures for neutron+OVS+VXLAN? I observe no more than 2 Gbit/s with RDO Kilo on RHEL 7 and some 10G Emulex 11xxx adapters (1.5K MTU VM, 9K MTU host), which is kinda small IMO  08:56
*** subscope has joined #openstack-operators08:57
*** subscope has quit IRC08:58
*** simon-AS5591 has quit IRC09:08
*** derekh has joined #openstack-operators09:11
*** subscope has joined #openstack-operators09:18
*** simon-AS5591 has joined #openstack-operators09:26
*** cbrown2_ocf has quit IRC09:27
*** fwdit has joined #openstack-operators09:31
*** VW has joined #openstack-operators09:35
*** VW has quit IRC09:36
*** VW has joined #openstack-operators09:37
*** lhcheng has joined #openstack-operators09:42
*** fwdit has quit IRC09:45
*** lhcheng has quit IRC09:46
*** simon-AS5591 has quit IRC10:01
*** zhangjn has quit IRC10:01
*** zhangjn has joined #openstack-operators10:03
*** zhangjn has quit IRC10:04
*** VW has quit IRC10:04
*** lhcheng has joined #openstack-operators10:05
*** zhangjn has joined #openstack-operators10:08
*** lhcheng has quit IRC10:10
*** fawadkhaliq has quit IRC10:10
*** simon-AS5591 has joined #openstack-operators10:12
*** lhcheng has joined #openstack-operators10:17
*** cbrown2_ocf has joined #openstack-operators10:29
*** zhangjn has quit IRC10:40
*** ferest has joined #openstack-operators10:43
*** ferest has quit IRC10:47
*** VW has joined #openstack-operators10:56
*** electrofelix has joined #openstack-operators10:57
*** lhcheng has quit IRC11:04
*** simon-AS5591 has quit IRC11:15
*** simon-AS5591 has joined #openstack-operators11:24
*** simon-AS5591 has quit IRC11:34
*** lhcheng has joined #openstack-operators11:37
*** _nick has quit IRC11:38
*** _nick has joined #openstack-operators11:38
<darrenc> darren  11:39
*** subscope has quit IRC11:41
*** ferest has joined #openstack-operators11:41
*** subscope has joined #openstack-operators11:43
*** ferest has quit IRC11:46
*** Marga_ has joined #openstack-operators11:49
*** ybabenko has joined #openstack-operators11:49
*** zhangjn has joined #openstack-operators11:53
*** VW has quit IRC11:54
*** VW has joined #openstack-operators11:56
*** ferest has joined #openstack-operators11:57
*** VW has quit IRC12:00
*** zerda has quit IRC12:04
*** ferest has quit IRC12:11
*** VW has joined #openstack-operators12:15
*** zigo has quit IRC12:36
*** zigo has joined #openstack-operators12:37
*** weihan has joined #openstack-operators12:38
*** ybabenko has quit IRC12:39
*** ybabenko has joined #openstack-operators12:41
*** rady has joined #openstack-operators12:44
*** weihan has quit IRC12:51
*** lhcheng has quit IRC13:00
*** sanjayu has quit IRC13:21
*** VW has quit IRC13:43
*** VW has joined #openstack-operators13:47
*** regXboi has joined #openstack-operators13:47
*** subscope has quit IRC13:53
*** ybabenko has quit IRC14:02
*** subscope has joined #openstack-operators14:07
*** ccarmack has quit IRC14:09
*** ccarmack has joined #openstack-operators14:12
*** Piet has quit IRC14:14
*** mriedem has joined #openstack-operators14:18
*** vinsh has joined #openstack-operators14:20
*** subscope has quit IRC14:35
*** alaski is now known as lascii14:52
*** ybabenko has joined #openstack-operators14:57
*** Piet has joined #openstack-operators15:01
*** ybabenko has quit IRC15:01
*** alejandrito has joined #openstack-operators15:04
*** stanchan has quit IRC15:07
<mriedem> mgagne: klindgren: fyi, operators that are still on juno or just moved to juno might be interested in this: http://lists.openstack.org/pipermail/openstack-dev/2015-November/078630.html  15:07
<mgagne> mriedem: true about stable support. We have been running Kilo in all regions since yesterday (we migrated the last region last night) =)  15:08
<mriedem> cool  15:09
<mriedem> so this affects you less  15:09
<mgagne> well, maybe I would feel more concerned if it were about kilo =)  15:09
*** Marga_ has quit IRC15:10
*** Marga_ has joined #openstack-operators15:10
*** Marga_ has quit IRC15:15
*** rbrooker has joined #openstack-operators15:35
*** rbrooker has quit IRC15:51
<wasmum> mgagne: klindgren: +1 for a consistent project log format ;)  15:59
<wasmum> zerda: what's the MTU set on your tenant and external nets?  16:02
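(zerda had left the channel before this, so the question goes unanswered in the log. As a hedged aside, a common Kilo-era workaround for poor VXLAN throughput on a 9000-byte underlay is to raise the MTU handed to instances; the snippet below is only a sketch of that approach, not something from this conversation, and the file paths and the 8950 value are assumptions.)

    # neutron.conf -- sketch: 9000-byte underlay minus 50 bytes of VXLAN overhead
    [DEFAULT]
    network_device_mtu = 8950

    # dhcp_agent.ini -- point the DHCP agent at a custom dnsmasq config
    [DEFAULT]
    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

    # /etc/neutron/dnsmasq-neutron.conf -- push the larger MTU to guests (DHCP option 26)
    dhcp-option-force=26,8950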
<mgagne> mriedem: trying to debug the tests for cells+neutron  16:06
<mgagne> mriedem: 1st step: trying to figure out why no floating IP is assigned to the instance. The assert makes a lot of assumptions and doesn't help you: if the number of FIPs != 1, it assumes there are too many FIPs to choose from when in fact there are none. :-/  16:07
<mriedem> mgagne: that might be a tempest bug?  16:08
<mriedem> i haven't checked results, but devstack makes some assumptions about neutron using floating IPs for ssh  16:08
<mriedem> i had changed that to fixed IPs  16:08
<mgagne> mriedem: trying to get to the root cause; the web interface isn't helping me, filtering by log level melts my CPU  16:08
<mriedem> in the d-g change  16:08
<mgagne> mriedem: http://paste.openstack.org/show/478196/  16:09
<mgagne> mriedem: so yea, if it's fixed IPs, then FIP won't work I guess  16:09
*** jaypipes is now known as leakypipes16:09
<mgagne> mriedem: 9/10 failures in gate-tempest-dsvm-full are caused by a missing FIP  16:12
<mriedem> mgagne: ok, let me see if that's configurable in tempest  16:12
<mgagne> mriedem: the other failure is caused by resize  16:12
<mriedem> http://logs.openstack.org/85/235485/5/check/gate-tempest-dsvm-neutron-full/e37ab26/logs/tempest_conf.txt.gz  16:13
<mriedem> ssh_connect_method = floating  16:13
<mriedem> so my change hasn't worked, hmm  16:13
<mgagne> mriedem: and this could be a bug or race Belmiro and I saw in prod in kilo where resize is never confirmed  16:13
<mriedem> export TEMPEST_SSH_CONNECT_METHOD="fixed"  16:14
<mgagne> mriedem: could be that resize takes longer with cells; I'm not sure of the root cause and am only throwing ideas at the wall  16:14
<mriedem> in https://review.openstack.org/#/c/235485/5/devstack-vm-gate-wrap.sh  16:14
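(A sketch of what that devstack-gate export should end up producing in tempest.conf; the [compute] section is an assumption based on the tempest config layout of that era, and the tempest_conf.txt linked above shows the value still set to floating.)

    # tempest.conf -- sketch only
    [compute]
    # "fixed" makes tests ssh to the instance's fixed IP instead of allocating a floating IP
    ssh_connect_method = fixed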
<mriedem> yeah, i thought we were actually skipping resize tests for the cells jobs  16:14
<mgagne> If we manage to make tests pass, we should test resize too =)  16:14
<mriedem> https://github.com/openstack/nova/blob/master/devstack/tempest-dsvm-cells-rc  16:14
*** emagana has joined #openstack-operators16:15
<mriedem> i *think* the resize stuff is skipped b/c of issues with flavor sync  16:15
<mgagne> oh...  16:15
<mgagne> well  16:15
<mriedem> i'd have to ask alaski in -nova  16:15
<mgagne> if the default flavors are used, it should work just fine  16:15
<mgagne> since they will get the same auto_increment IDs on both sides  16:15
<mriedem> it doesn't  16:16
<mriedem> devstack creates its own flavors for tempest  16:17
<mgagne> why is that?  16:18
<mgagne> can't it be disabled for the cells job?  16:18
<mriedem> we can certainly disable resize testing for cells jobs, i can do that in devstack (and will after morning meetings are over)  16:19
<mriedem> so we can clean up that regex  16:19
<mgagne> ok, let's start with the FIP config  16:20
<mriedem> yeah  16:21
<mriedem> also on a call atm :)  16:21
<mgagne> no problem, I have plenty of work here =)  16:21
<mriedem> jroll from ironic had a devstack change that was going to insert baremetal flavors into the cells db too,  16:22
<mriedem> that's really what needs to happen in devstack for these custom flavors, but that probably means a new nova-manage command  16:22
<mriedem> cernops apparently has a script that does that  16:22
*** derekh has quit IRC16:26
<mriedem> alright, i'll be back in a couple of hours (appts)  16:26
*** mriedem is now known as mriedem_away16:27
<klindgren> mriedem_away, I am not sure why people aren't upgrading. So far all of our upgrades have been painless in prod  16:34
<mgagne> klindgren: our upgrade took close to 3 months to plan, test and execute :-/  16:35
<klindgren> minus some UTF-8 collation issues with databases in our non-preprod envs  16:35
<mgagne> klindgren: the upgrade itself isn't that bad but there is a lot of prep work and a lot of people to sync  16:35
<klindgren> we were able to sort out any possible issues when we updated our dev/test/staging labs  16:35
<mgagne> yea, collation was a big pain in the butt  16:36
<mgagne> and I suspect all the people affected by this bug were using Puppet :D  16:36
*** emagana has quit IRC16:37
<klindgren> mgagne, agreed - however for our users it's only been a control plane outage - and we've done them in the morning during business hours  16:37
*** HenryG has quit IRC16:37
<klindgren> for us the hardest part about an upgrade is building all the new packages and porting our patches over to the newer code  16:39
*** HenryG has joined #openstack-operators16:39
<klindgren> the actual db upgrades and everything have not been problematic  16:39
<klindgren> but I understand that for some, neutron upgrades with routers being restarted are problematic  16:40
*** matrohon has quit IRC16:41
*** emagana has joined #openstack-operators16:45
*** Marga_ has joined #openstack-operators16:49
<mgagne> klindgren: we don't use neutron routers =)  16:51
<klindgren> huzzah!  16:51
*** VW has quit IRC16:51
*** VW has joined #openstack-operators16:52
<klindgren> was part of your upgrade pain the fact that you cut over to cells as well?  16:52
*** gyee has joined #openstack-operators16:52
<klindgren> once we get our code moved over (~1-2 weeks), we are usually able to go through our clouds in ~1 week  16:53
<klindgren> with the upgrades taking ~45 min-1 hour in each cloud  16:53
<klindgren> most of that is waiting on ansible and serializing over some number of computes to avoid the flood detection stuff kicking in and blocking some of the communication from my workstation to the hosts  16:54
*** VW has quit IRC16:56
*** mdorman has joined #openstack-operators16:58
*** ccarmack has quit IRC17:05
*** alop has joined #openstack-operators17:06
<_nick> [16:36:24] <mgagne> and I suspect all people affected by this bug were using Puppet :D  17:07
<_nick> +1  17:07
<_nick> the time taken to get modules updated, fix provisioning code / update for new configuration etc., and then properly test takes a lot of effort  17:07
<mgagne> klindgren: migration to cells was easy: copy the database, update nova.conf to use cells and start nova-cells  17:08
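(A minimal cells-v1 nova.conf sketch of what "update nova.conf to use cells" roughly looks like; this is not mgagne's actual config, the cell names are made up, and the message-queue registration between cells, e.g. via nova-manage cell create, is omitted.)

    # nova.conf on the API (top) cell -- sketch only
    [cells]
    enable = True
    name = api
    cell_type = api

    # nova.conf on the child (compute) cell -- sketch only
    [cells]
    enable = True
    name = cell01
    cell_type = compute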
*** VW has joined #openstack-operators17:08
<mgagne> klindgren: about the same here too with ansible  17:08
<mdorman> what bug is this?  17:08
<klindgren> they changed the collation stuff in one of the modules  17:09
<klindgren> from like utf8  17:09
<klindgren> to utf8_something  17:09
<mgagne> utf8_unicode_ci to utf8_general_ci  17:09
<klindgren> and it hosed a bunch of db upgrades  17:09
<mdorman> oh  17:10
*** ccarmack has joined #openstack-operators17:11
<mdorman> when going to liberty?  17:11
<mgagne> kilo  17:11
<mgagne> and probably juno too  17:11
<mgagne> we went from icehouse to kilo, we skipped juno  17:11
<mdorman> i see  17:12
*** SimonChung has joined #openstack-operators17:18
*** lhcheng has joined #openstack-operators17:25
<klindgren> mdorman handles the puppet stuff for me - it's like magic :-D  17:28
*** subscope has joined #openstack-operators17:30
<mdorman> o_O  17:30
*** cbrown2_ocf has quit IRC17:37
*** zhangjn has quit IRC17:38
*** SimonChung1 has joined #openstack-operators17:49
*** SimonChung1 has quit IRC17:50
*** SimonChung2 has joined #openstack-operators17:50
*** SimonChung has quit IRC17:50
*** electrofelix has quit IRC18:01
*** VW has quit IRC18:03
*** VW has joined #openstack-operators18:03
*** SimonChung has joined #openstack-operators18:06
*** SimonChung2 has quit IRC18:06
*** VW has quit IRC18:08
*** mriedem_away is now known as mriedem18:09
*** alop has quit IRC18:18
*** ccarmack has quit IRC18:18
*** ccarmack has joined #openstack-operators18:24
*** SimonChung has quit IRC18:31
*** SimonChung1 has joined #openstack-operators18:31
*** SimonChung1 has quit IRC18:32
*** SimonChung has joined #openstack-operators18:32
*** rady has quit IRC18:38
*** emagana has quit IRC18:38
*** rady has joined #openstack-operators18:43
*** jmckind has joined #openstack-operators18:44
*** mdorman has quit IRC19:04
*** emagana has joined #openstack-operators19:05
*** mdorman has joined #openstack-operators19:05
*** harshs has joined #openstack-operators19:11
*** SimonChung1 has joined #openstack-operators19:27
*** SimonChung has quit IRC19:27
*** Marga_ has quit IRC19:33
*** Piet has quit IRC19:33
*** VW has joined #openstack-operators19:34
*** VW has quit IRC19:38
*** VW has joined #openstack-operators19:43
*** VW has quit IRC19:45
*** VW has joined #openstack-operators19:46
*** jmckind is now known as jmckind_19:51
*** Marga_ has joined #openstack-operators19:56
<clayton> mdorman: it changed in the kilo modules  19:56
*** stanchan has joined #openstack-operators20:01
*** jmckind_ is now known as jmckind20:08
*** rady has quit IRC20:10
*** Piet has joined #openstack-operators20:21
*** rady has joined #openstack-operators20:24
*** k_stev has joined #openstack-operators20:36
*** codebaus1 is now known as codebauss20:38
*** SimonChung1 has quit IRC20:49
*** ccarmack has left #openstack-operators20:51
*** harlowja_ has joined #openstack-operators20:56
*** harlowja has quit IRC20:56
*** ccarmack has joined #openstack-operators20:59
*** openstackgerrit has quit IRC21:01
*** openstackgerrit has joined #openstack-operators21:02
*** mriedem has left #openstack-operators21:03
*** mriedem has joined #openstack-operators21:04
*** signed8bit has joined #openstack-operators21:22
*** k_stev has quit IRC21:25
*** k_stev has joined #openstack-operators21:25
*** subscope has quit IRC21:39
*** alejandrito has quit IRC21:56
*** rady has quit IRC22:05
*** lascii is now known as alaski22:13
*** rady has joined #openstack-operators22:18
*** vinsh has quit IRC22:21
*** vinsh has joined #openstack-operators22:22
*** vinsh has quit IRC22:26
*** signed8bit has quit IRC22:33
<mriedem> woot http://logs.openstack.org/85/235485/6/check/gate-tempest-dsvm-neutron-full/2289378/logs/testr_results.html.gz  22:33
<mriedem> 4 failures really, the ebs fail is a known issue, the skip patch for that is in the gate  22:33
<mriedem> else https://bugs.launchpad.net/tempest/+bug/1513983  22:34
<openstack> Launchpad bug 1513983 in tempest "tempest is trying to use floating IPs for ssh even though ssh_connect_method = fixed" [Undecided,New]  22:34
<mriedem> we should be very close to a cells + neutron job config next week  22:34
*** rady has quit IRC22:36
*** mriedem has quit IRC22:37
*** SimonChung has joined #openstack-operators22:50
*** k_stev has quit IRC22:51
*** SimonChung1 has joined #openstack-operators22:52
*** SimonChung has quit IRC22:55
*** jmckind is now known as jmckind_22:58
*** jmckind_ has quit IRC23:00
*** regXboi has quit IRC23:07
<mdorman> awesome, thanks mriedem  23:40

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!