Monday, 2015-11-30

*** openstack has joined #openstack-ansible15:46
*** leifmadsen_ has joined #openstack-ansible15:46
*** woodard has quit IRC15:47
*** woodard has joined #openstack-ansible15:47
cloudnullmorning15:47
odyssey4meadac this would tell you the running state: ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"15:47
adacodyssey4me, exactly yes15:48
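For reference, a node's grastate.dat looks roughly like this (illustrative uuid/seqno values; a seqno of -1 generally means the node is running or was shut down uncleanly):

    # GALERA saved state
    version: 2.4
    uuid:    1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d
    seqno:   1234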
*** cfarquhar has joined #openstack-ansible15:49
*** cfarquhar has quit IRC15:49
*** cfarquhar has joined #openstack-ansible15:49
*** Mudpuppy has quit IRC15:51
*** alkari has joined #openstack-ansible15:51
*** alextricity has joined #openstack-ansible15:53
*** Mudpuppy has joined #openstack-ansible15:53
*** agireud has joined #openstack-ansible15:53
*** grumpycatt has quit IRC15:54
*** leifmadsen has quit IRC15:55
openstackgerritMajor Hayden proposed openstack/openstack-ansible-security: Updating tests for openstack-ansible-security  https://review.openstack.org/25143015:55
*** leifmadsen_ is now known as leifmadsen15:56
*** jasondotstar has joined #openstack-ansible15:58
mhaydenif someone could check my tox work there ^^ i'd be much obliged16:01
mhaydenthat'll unblock some of my future work16:01
adacEven though I restarted the mysql daemons on the lxc containers (one with "/etc/init.d/mysql start --wsrep-new-cluster"  and the other with "/etc/init.d/mysql start")  A 'cat' still shows me this: https://gist.github.com/anonymous/eaaa3d336a6671c15520 Any ideas?16:01
*** mss has joined #openstack-ansible16:03
*** stevelle_ is now known as stevelle16:03
*** woodard has quit IRC16:08
*** woodard has joined #openstack-ansible16:08
*** openstackstatus has joined #openstack-ansible16:14
*** ChanServ sets mode: +v openstackstatus16:14
adachmm nope no luck. tried it again. The cluster does not come up16:15
openstackgerritMiguel Alex Cantu proposed openstack/openstack-ansible: Add documentation for HA ceilometer  https://review.openstack.org/24890516:17
spotzadac I think this might help http://galeracluster.com/documentation-webpages/restartingcluster.html16:19
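The procedure in that link boils down to: find the node with the highest committed seqno and bootstrap the cluster from it. A sketch using the commands already mentioned above:

    # 1. compare the seqno across all galera containers
    ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"
    # 2. on the node with the highest seqno, bootstrap a new cluster
    /etc/init.d/mysql start --wsrep-new-cluster
    # 3. start mysql normally on the remaining nodes so they rejoin
    /etc/init.d/mysql start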
*** alkari has quit IRC16:22
adacspotz, thanks! looks promising!16:22
spotzadac you're welcome. Haven't done a lot with galera but I'll help if I can16:23
adacspotz, Yes this looks like the solution! :-)16:23
spotzsweet!16:24
*** alkari has joined #openstack-ansible16:24
*** pabelanger has joined #openstack-ansible16:24
*** metral_zzz has joined #openstack-ansible16:25
*** metral_zzz is now known as metral16:25
adacspotz, ultrasweet :-)16:26
alextricityIs OSA going to eventually use submodules pointing to the new role repos?16:26
kysseodyssey4me: PortBindingFailed: Binding failed for port e97bcaef-1750-45f5-9a37-76e973c1f0d6, please check neutron logs for more information. sounds familiar?16:26
spotz:)16:26
kyssenova's log when creating instance.16:26
Sam-I-Amodyssey4me: you around?16:27
*** woodard has quit IRC16:28
*** woodard has joined #openstack-ansible16:29
palendaealextricity: The plan was galaxy roles16:31
palendaeInstead of straight submodules16:32
cloudnullalextricity: ++ what palendae said16:32
alextricitythen just ansible-galaxy <role-name> what you need?16:34
alextricityor 'ansible-galaxy *install*', rather16:34
cloudnullwe're building out the ansible-role-requirements.yml file16:36
cloudnullhttps://github.com/openstack/openstack-ansible/blob/master/ansible-role-requirements.yml16:37
cloudnullwhich is resolved https://github.com/openstack/openstack-ansible/blob/master/scripts/bootstrap-ansible.sh#L88-L9016:37
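Entries in that file look like this (abridged; the role name and URL follow the repo's naming pattern), and bootstrap-ansible.sh resolves them with roughly the ansible-galaxy invocation shown:

    # ansible-role-requirements.yml (one stanza per role)
    - name: apt_package_pinning
      scm: git
      src: https://git.openstack.org/openstack/openstack-ansible-apt_package_pinning
      version: master

    # roughly what the bootstrap script runs to resolve it:
    ansible-galaxy install --role-file=ansible-role-requirements.yml \
                           --roles-path=/etc/ansible/roles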
*** grumpycatt has joined #openstack-ansible16:38
alextricityohh i see. cool16:42
alextricitythanks :)16:42
*** sdake has joined #openstack-ansible16:42
odyssey4meSam-I-Am sort-of :)16:42
odyssey4mekysse from what I saw in #openstack you likely have a config error of some sort - it may relate to some missing config on the hosts involved... without some depth of knowledge in your openstack_user_config and user_variables, it's hard to tell... I'm unfortunately on a train heading home, so I don't have time to assist right now - perhaps someone else can help? Apsu ?16:44
Sam-I-Amodyssey4me: i was looking, but the config looked ok16:44
odyssey4mealextricity if you're working with master, you'll now find the roles in /etc/ansible/roles/16:44
*** mgoddard_ has joined #openstack-ansible16:44
odyssey4mealextricity and no, no submodules - we're trying to work with ansible in the way it was designed16:45
*** sdake_ has joined #openstack-ansible16:46
*** sdake has quit IRC16:46
alextricityAre there any plans to move the openstack roles as well?16:47
*** mgoddard__ has quit IRC16:48
ApsuWhat am I helping with? port binding failures...16:48
Apsukysse: What's the Neutron log say?16:49
*** woodard has quit IRC16:49
*** woodard has joined #openstack-ansible16:50
stevellealextricity: yes, all in increments16:52
*** sacharya has joined #openstack-ansible16:52
*** adac has quit IRC16:53
kysseApsu: compute nodes or neutron server nodes log?16:54
Apsukysse: Compute.16:55
Sam-I-Amkysse: do you see any errors in the agent logs in the agent containers or just compute?16:55
kysselets look16:55
ApsuIt was a port binding error on starting an instance, and directed them to the compute node's Neutron log. I suspect it's a vif binding failure16:55
kysseyes but what causes vif binding failure?16:56
Sam-I-Amkysse: usually bad bridge mappings16:56
ApsuThe log usually has some info about that, but yeah16:56
Sam-I-Amat least one of the more common problems16:56
*** sacharya has quit IRC16:56
ApsuUsually bad ml2 config16:56
kysseok, I try to get a real info packet for you guys.16:57
*** sacharya has joined #openstack-ansible16:58
*** Mudpuppy has quit IRC17:00
*** mgoddard_ has quit IRC17:02
*** mgoddard has joined #openstack-ansible17:02
*** Mudpuppy has joined #openstack-ansible17:06
*** sdake_ has quit IRC17:06
*** woodard has quit IRC17:10
*** woodard has joined #openstack-ansible17:11
odyssey4mealextricity yep, all the roles are moving - it just takes time to get it done and cloudnull is doing them in small groups17:15
*** rebase has joined #openstack-ansible17:19
cloudnullhughsaunders:  still around ?17:20
kysseSam-I-Am: Apsu http://sprunge.us/iFKS :')17:22
Apsukysse: Is the network you created a flat one?17:23
kyssevlan17:24
ApsuOk. Is p1p2 already in a bridge? That's why I'm asking really17:24
Apsuip link show p1p2, look for "master ___"17:25
*** tiagogomes has quit IRC17:25
kyssehttp://paste.nerv.fi/54101194.txt17:25
ApsuDoesn't matter here for making a vlan network, but if it is.. won't work if you try to make a flat later17:26
ApsuYeah17:26
ApsuCan't do it that way.17:26
kysseoh, is there a real reason why?17:26
ApsuNeutron creates bridges and puts interfaces you map into them.17:26
ApsuYou can't put the same interface into two bridges.17:26
kysseyes you can17:26
kyssehmh. lol. you may be right17:27
Apsu:)17:27
kyssewell..17:28
Sam-I-AmApsu: we're not automagically doing that, right?17:28
ApsuYou can cheat a little if you need both flat and vlan17:28
ApsuSam-I-Am: Nope17:28
Sam-I-Ami thought we included some magic to make this possible17:28
kysseI need only vxlan and vlan with same interface ;-) and vxlan in different vlan17:28
ApsuSam-I-Am: There's an example of it in the example network configs17:28
ApsuBut we don't do host network config for you17:28
ApsuExcept the AIO does some of the magic for you17:28
Sam-I-AmApsu: i thought our host network config covered this situation because people generally want to use flat and vlan networks on the same underlying interface17:29
Apsukysse: Ok, cool. Well that's pretty easy17:29
Sam-I-Amand on a regular host i think this works... except we have br-vlan to deal with17:29
ApsuJust make a subinterface for the VXLAN vlan, and put it in br-vxlan17:29
ApsuAnd take the flat mapping out17:29
kysseah, lol, so I'm bridging bridge to bridge17:30
ApsuSo fun fact, the interface that you see when you create a bridge is *not* a bridge.17:30
ApsuIt's a port on that bridge, and it can itself be bridged.17:30
ApsuOr subinterfaced with a VLAN tag17:31
*** woodard has quit IRC17:31
ApsuJust spin a top to see what level of bridging you're in ;)17:31
kyssehmh. can you make me an example interface file (just something) to clear my head17:31
*** woodard has joined #openstack-ansible17:31
*** sacharya has quit IRC17:31
ApsuWell, what the result looks like as I describe is something like this17:32
Apsup1p2 ... master br-vlan17:32
Apsup1p2.401 ... master br-vxlan17:32
ApsuAnd in the ml2, vlan: br-vlan, vxlan: br-vxlan17:32
*** daneyon_ has joined #openstack-ansible17:32
kysseok17:32
ApsuUsing 401 as an example VLAN ID for VXLAN traffic, of course17:33
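As a Debian-style /etc/network/interfaces sketch of that layout (p1p2 and VLAN 401 as above; the VTEP address is a placeholder):

    auto p1p2
    iface p1p2 inet manual

    # tagged subinterface that carries the VXLAN endpoint traffic
    auto p1p2.401
    iface p1p2.401 inet manual
        vlan-raw-device p1p2

    # untagged trunk handed to neutron for vlan provider networks
    auto br-vlan
    iface br-vlan inet manual
        bridge_ports p1p2
        bridge_stp off

    # bridge holding the VXLAN tunnel endpoint address
    auto br-vxlan
    iface br-vxlan inet static
        bridge_ports p1p2.401
        bridge_stp off
        address 172.29.240.10    # placeholder VTEP IP
        netmask 255.255.252.0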
ApsuThe only tricky or unexpected thing here is just that folks are used to assuming that the bridge interface they can see *is* the bridge17:33
ApsuAnd it's true they're linked in some ways17:34
ApsuBut it's also just an interface17:34
ApsuSo you can do interface-oriented things, like bridge it17:34
ApsuI think of these things like switches and cables.17:34
ApsuMaking a bridge creates a switch and attaches a cable, to start17:34
ApsuPutting other interfaces in it attaches more cables17:34
ApsuYou can use the first cable to hook into another switch, of course17:35
*** daneyon__ has joined #openstack-ansible17:35
Sam-I-AmApsu: a mapping for vxlan in ml2?17:35
*** daneyon has quit IRC17:36
ApsuSam-I-Am: A mapping for the bridge for it. Oh, right, it doesn't work that way. It just needs the endpoint IP17:36
Sam-I-Amyarp17:36
ApsuYou still need to bridge the subinterface, but don't have to specify the vxlan mapping, because you don't specify --provider:physical_network on VXLAN17:36
Apsu@ kysse17:36
Sam-I-Amthe ansible magic uses that, but ml2 itself does not17:36
ApsuSam-I-Am is right17:37
Sam-I-Amit sometimes happens17:37
ApsuSometimes.17:37
kysseApsu: http://paste.nerv.fi/69548198.txt running configuration like this now. So i should change..17:38
Apsukysse: So if you take the 'flat' mapping out and restart the neutron-linuxbridge-agent, do the things work? In theory it's unrelated to your problem since you were using a VLAN type network... but...17:38
ApsuNeutron is picky17:38
*** daneyon_ has quit IRC17:38
ApsuThat's actually fine and correct.17:38
*** karimb has quit IRC17:38
kysse...17:38
ApsuJust take the flat mapping out of the ml217:38
kysseok17:38
cloudnullany cores about (or anyone for that matter) that would17:39
ApsuWait, hold on...17:39
cloudnullbe able to test https://review.openstack.org/#/c/241483/ ?17:39
Apsuhttp://sprunge.us/WJgh your ml2_conf.ini17:39
ApsuYou don't have any mappings specified here... I guess it's using the ones from the network type sections17:40
ApsuNo, because they're just the identifiers, not interface names17:40
*** egonzalez has quit IRC17:40
ApsuOh, they're in linuxbridge_agent.ini17:41
Apsuderp17:41
kysseyes, ansible does that :')17:41
kyssehow derp is that?!17:41
Apsulol. I will withhold my commentary17:42
ApsuHuh. We should probably be using bridge_mappings instead of interface_mappings.17:43
ApsuSince we're providing bridges17:43
kysseso if I change it to bridge_mappings = vxlan:br-vxlan,vlan:br-vlan17:44
Sam-I-AmApsu: bridge_mappings is an ovs thing17:44
kysseah17:44
ApsuSam-I-Am: Not according to the docs (source code)17:44
Sam-I-AmApsu: linky17:44
Apsuhttps://github.com/openstack/neutron/blob/c8a7d9bfdba9cab82cc29f563a387d8c3088c630/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L9817:44
ApsuUses it throughout the agent17:44
ApsuBasically for when you have bridges already, and Neutron will just use them.17:45
Apsukysse: Don't think it should matter for your case, because it works either way for everyone17:45
kysselets try17:45
ApsuJust commenting, bridge_mappings is more correct seemingly17:45
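The difference in linuxbridge agent config terms, roughly (bridge_mappings per the agent code Apsu links above; only one mapping style is needed per physical network):

    [linux_bridge]
    # interface_mappings: the agent builds its own brqXXXX bridges and
    # plugs the named interface into them
    physical_interface_mappings = vlan:p1p2
    # bridge_mappings: the agent uses a bridge you already created as-is
    bridge_mappings = vlan:br-vlan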
Sam-I-AmApsu: thats sort of a new not-well-tested thing17:46
ApsuNeat17:47
kysseaaaaaand failed! what a surprise!17:47
ApsuSounds like "Stable" in Neutron land17:47
Sam-I-Ami think the idea was that people did not like brq### stuff, so there's an option for neutron to use existing bridges17:47
*** javeriak has quit IRC17:48
Sam-I-AmApsu: because creating brq### and plugging the bare interface into it effectively moves the ip configuration of the bare interface to the bridge17:48
Sam-I-Amfrankly i see it as a corner case thats solvable in better ways17:48
Sam-I-Ambut alas, neutron17:49
Sam-I-Amno one really understands networking, so you get patches like this17:49
kysseits a real problem17:49
kyssecoders are not network engineers17:49
ApsuIndeed.17:51
*** javeriak has joined #openstack-ansible17:51
Apsukysse: So what we really need to see are neutron logs when you try to create an instance, showing the failure from its perspective17:51
kyssefrom compute neutron bridge agent?17:51
ApsuMight need to turn on verbose and/or debug to see useful info17:51
ApsuYeah17:51
Apsuafk 5-10 min, grabbing some food17:51
kysseI'll turn debug option on..17:51
kysseye ok17:52
*** woodard has quit IRC17:52
*** woodard has joined #openstack-ansible17:52
kyssehttp://sprunge.us/IXcO no failures :/17:56
*** harlowja has joined #openstack-ansible18:10
*** b3rnard0 is now known as b3rnard0_lunch18:10
*** phiche1 has quit IRC18:10
*** woodard has quit IRC18:12
*** woodard has joined #openstack-ansible18:13
*** karimb has joined #openstack-ansible18:15
*** neillc has quit IRC18:17
*** neillc has joined #openstack-ansible18:18
*** stevelle has quit IRC18:18
*** odyssey4me has quit IRC18:19
*** dalees has quit IRC18:20
*** b3rnard0_lunch has quit IRC18:20
*** daneyon__ has quit IRC18:20
*** mss has quit IRC18:20
*** daneyon has joined #openstack-ansible18:21
*** finchd-also has quit IRC18:21
*** Guest83268 has quit IRC18:23
*** erikwilson has quit IRC18:23
*** xar- has quit IRC18:23
*** karimb has quit IRC18:23
*** mss has joined #openstack-ansible18:24
*** stevelle has joined #openstack-ansible18:25
*** odyssey4me has joined #openstack-ansible18:25
*** b3rnard0 has joined #openstack-ansible18:25
*** erikmwilson has joined #openstack-ansible18:27
*** mgagne has joined #openstack-ansible18:28
*** mgagne is now known as Guest6345318:28
*** xar- has joined #openstack-ansible18:29
*** finchd has joined #openstack-ansible18:30
Apsukysse: Is that log including trying to boot an instance and it failing?18:31
*** phiche has joined #openstack-ansible18:32
kysseyes18:32
*** woodard has quit IRC18:33
*** woodard has joined #openstack-ansible18:34
ApsuNeat.18:35
ApsuMaybe restart nova-compute18:36
*** adac has joined #openstack-ansible18:37
*** Guest63453 has quit IRC18:38
*** Guest63453 has joined #openstack-ansible18:38
*** Guest63453 is now known as mgagne18:39
*** eil397 has joined #openstack-ansible18:39
kyssedone already :-)18:44
*** woodard_ has joined #openstack-ansible18:45
*** dalees has joined #openstack-ansible18:45
*** woodard has quit IRC18:47
*** woodard_ has quit IRC18:54
*** woodard has joined #openstack-ansible18:55
*** sdake has joined #openstack-ansible19:10
*** woodard has quit IRC19:15
*** woodard has joined #openstack-ansible19:15
*** mancdaz has quit IRC19:20
*** adac has quit IRC19:20
*** xek has quit IRC19:21
*** mancdaz has joined #openstack-ansible19:22
kysseI just can't understand what's going on. Maybe I should go to the dark side and test ovs and/or packstack19:25
Sam-I-Amkysse: have you installed openstack before?19:25
*** javeriak has quit IRC19:25
kyssenope19:26
Sam-I-Ami recommend starting out with something a bit simpler that you can learn from19:27
Sam-I-Amtrying to deploy openstack the first time using a complicated deployment tool that installs openstack in a production (read: complicated) way may not be beneficial19:27
kysse(complicated deployment tool that i'm familiar with)19:28
Sam-I-Amso there's that...19:28
kysseand complicated networking which was correct19:28
Sam-I-Ambut using containers and stuffing neutron into them adds a lot of complexity to the picture19:28
kysseand complicated setup where I dont get _any_ red color when installing :') and everything works smooth and correct except those compute nodes19:29
cloudnullkysse: sorry im late to the party, but reading the scrollback, are you still getting the vif type binding failure?19:30
kysseyes.19:30
cloudnullcan we see you openstack_user_config.yml ?19:30
kysseyes.19:30
cloudnullidk if you pasted that before ...19:30
* cloudnull reading more19:30
d9khttp://sprunge.us/iFKS19:31
Sam-I-Amkysse: you might get it working, but you really need to learn how openstack itself works because it'll break someday.19:31
kysseSam-I-Am: you are right, and yes I'm learning how it works right now.19:32
kyssecloudnull: d9k's url contains all information.19:33
cloudnullkysse:  remove http://cdn.pasteraw.com/af3bx8jx934rr761r4l7fziongg69gd from the user config19:33
kysseerr, okey19:34
cloudnullw/out the host bind override neutron will not support an ml2 mapping using the same interface19:34
cloudnulland on compute nodes that interface would be br-vlan19:34
cloudnullwhich based on your user config would be shared with vlan and flat network types19:35
*** adac has joined #openstack-ansible19:35
*** woodard has quit IRC19:36
*** sacharya has joined #openstack-ansible19:36
cloudnullonce you do that rerun the os-neutron-install.yml play19:36
cloudnulland you can limit the command with the tag "neutron-config"19:36
cloudnullat least i think you can...19:36
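Spelled out, the rerun cloudnull is suggesting (kysse runs the setup-everything.yml variant of this below):

    openstack-ansible os-neutron-install.yml --tags neutron-config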
*** woodard has joined #openstack-ansible19:36
* cloudnull brain is full19:36
kyssei'm running it now, lets see.19:37
cloudnullin our gate test we use both flat and vlan networks together.19:37
cloudnullhttps://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio#L40-L4819:37
cloudnullbut set host_bind_override to be the eth12 device19:38
cloudnullwhich is a veth https://github.com/openstack/openstack-ansible/blob/master/etc/network/interfaces.d/aio_interfaces.cfg#L53-L5819:38
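The AIO stanza being referenced looks roughly like this (abridged from the linked file; eth12 is the veth end that host_bind_override substitutes for br-vlan on the host):

    # openstack_user_config.yml - provider_networks (abridged)
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent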
cloudnullkysse: also i'd recommend disabling segmentation offload (sg-offload) on your bridge devices. it's been known to cause issues with network performance and LXC in general. we do that like so: https://github.com/openstack/openstack-ansible/blob/master/etc/network/interfaces.d/aio_interfaces.cfg#L4219:41
* cloudnull was reading http://paste.nerv.fi/69548198.txt19:41
kysseit's not my full interfaces file, but thanks! Good advice :)19:42
cloudnullfor sure. I figured there was more, but i thought i'd mention it19:43
*** sdake has quit IRC19:43
*** javeriak has joined #openstack-ansible19:43
kysseI ran it against compute without that part you told me, still same error. Do I need to change physical mappings?19:49
kysseah, lets see. hmhm19:49
cloudnullthe physical mappings shouldve been updated19:50
kysseyes they were. physical_interface_mappings = vlan:br-vlan19:50
cloudnullyou will need to run it everywhere19:50
cloudnullagents need to be restarted / updated across the cluster19:51
kysseah, okey19:51
*** woodard has quit IRC19:56
*** woodard has joined #openstack-ansible19:57
*** sacharya has quit IRC20:05
cloudnullkysse: anything ?20:05
kysse1 sec .. or 2 .. :P20:06
cloudnullok . no worries20:06
*** woodard has quit IRC20:06
kyssenope. I just rebooted all neutron containers and compute node. I'm missing something clearly.20:12
cloudnullso the mapping is consistent ?20:14
cloudnulland the agents are running i assume20:14
cloudnulldoes `neutron agent-list` return anything intersting ?20:14
cloudnulland do you see the neutron-lxb-agent running on the compute node?20:15
kyssewhat does  :-) mean?20:15
cloudnullalso I assume your running liberty. is that right?20:16
kysseyes20:16
cloudnullthat's OpenStack for "it's working" ...20:16
cloudnullwhich i hate !20:16
kysseno I dont see neutron-lxb-agent on my compute node :-(20:16
cloudnullsorry neutron-linuxbridge-agent20:16
kysseah, it's running yes20:16
* cloudnull was being a lazy person20:16
kyssewhat interfaces should I see in the neutron-agent and server containers?20:18
cloudnullits still the "PortBindingFailed: Binding failed for port <UUID>, please check neutron logs for more information" error?20:19
cloudnullin the containers you should see eth0,1,1120:19
cloudnullsorry20:20
cloudnullin the containers you should see eth0,1,10,1120:20
cloudnullthe networking containers that is.20:20
cloudnulland the host should have br-vlan,vxlan,mgmt20:20
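Quick ways to verify that (generic commands, nothing OSA-specific):

    # on the host: list bridges and their member ports
    brctl show
    # inside a network agent container (container name is a placeholder):
    lxc-attach -n <neutron_agents_container> -- ip -o link show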
kysseerror is the same, yes20:21
kyssehmm20:21
cloudnullwhen you boot your VM are you using a vxlan network type ?20:22
kysseno, vlan20:22
cloudnullwhen you created the neutron networks did you do so with the range "200:400" ?20:23
cloudnullcan you try and build a vm that is only on a vxlan network ?20:23
kysseok, i'll try.20:23
cloudnulljust to see if it completes .20:23
cloudnullalso how were the neutron networks created ? horizon? neutron CLI ?20:24
ApsuWould like to see "neutron net-show $yournetwork" and "neutron subnet-show $yoursubnet"20:24
cloudnull^ that20:24
ApsuTo see more details around the VLAN ID, CIDR, provider flags, etc20:24
kysseI used horizon.. also tried with cli20:25
cloudnullw/in horizon did you set the segmentation ID of your network to be within your vlan range ?20:26
kyssesame error with vxlan20:26
cloudnullok so neutron is still mad20:26
Apsukysse: Let's get a paste with the details on the network and subnet20:26
kysseyes ok20:26
ApsuAlso, take a look at dmesg on the compute node20:26
ApsuIn case there's some kernel messages that might shed some light20:27
cloudnullalso is there anything else showing up in the neutron-linuxbridge-agent.log within the compute node and the neutron agent container  ?20:28
kyssehttp://paste.nerv.fi/27203893.txt20:28
cloudnullkysse: the provider:segmentation_id needs to be within the 200:400 range .20:29
Apsu^^^20:29
ApsuWas just pasting the lines that conflict, lol20:29
Apsunetwork_vlan_ranges = vlan:200:400, in your ml2_conf20:29
kyssewtf20:29
cloudnullwhat Apsu said20:29
kysseaaah! but hey, it didnt work with vxlan either.20:30
Sam-I-Amcloudnull: as an admin you can use any arbitrary segmentation id20:30
kysseand it was between allowed vxlans20:30
Sam-I-Amthe range is just there for non-priv networks that cant choose vlan ids20:30
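For example, creating a vlan network whose segmentation ID falls inside the configured 200:400 range (liberty-era neutron CLI; names and CIDR here are placeholders):

    neutron net-create test-vlan \
        --provider:network_type vlan \
        --provider:physical_network vlan \
        --provider:segmentation_id 300
    neutron subnet-create test-vlan 192.0.2.0/24 --name test-vlan-subnet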
kysseI can try to create a network between 200:400, but I'm just saying that it's not gonna work any better.20:30
ApsuThat rings a bell, actually.20:31
cloudnullkysse: this is likely, theres something else happening thats making neutron not happy.20:31
kysseindeed.20:31
ApsuOk, let's look at dmesg on the compute. Also.... if you run "ip a", do you see any ipv6 addresses on interfaces?20:31
cloudnulljust pointing out that the vlan network needs a segmentation id within your set vlan range .20:31
ApsuOr, did you purposefully unload the ipv6 module and blacklist it on your compute box?20:32
kyssethere was no errors in dmesg.20:32
kyssenothing unusual20:32
kysseI can see link local addresses yes20:32
ApsuOk20:32
ApsuNeutron has an obscure bug when the ipv6 module is blacklisted, so guess it's not that20:33
cloudnulland nothing in the neutron-linuxbridge-agent log file ?20:33
kyssenope, not even with debug mode.20:34
Apsucloudnull: kysse posted this earlier, it's a collection of pastes of various things http://sprunge.us/iFKS20:34
Apsukysse: What about turning on debug for nova-compute and checking it on the failure20:34
ApsuKind of wondering if it's not neutron but something libvirt/nova related, and its just blaming neutron20:35
cloudnullyea but the log is mostly empty, so was curious if its gotten more data since the original paste20:35
cloudnulli wonder if the issue is "WARNING oslo_config.cfg [-] Option "username" from group "neutron" is deprecated. Use option "user-name" from group "neutron"."20:36
Sam-I-Amcloudnull: nah, thats all jamie lennox20:37
Sam-I-Amin other words, bs no one wants to fix20:37
cloudnullit says deprecated but do they mean "removed" :)20:37
Sam-I-Amturns out user-name doesnt even work20:37
cloudnullok nevermind then20:38
*** KLevenstein has quit IRC20:40
kyssehttp://sprunge.us/WcCO nova debug log when creating instance20:41
*** mfisch` has quit IRC20:56
*** mfisch has joined #openstack-ansible20:57
*** mfisch is now known as Guest715020:57
kysseno comments? :P20:57
cloudnullsorry was looking elsewhere20:58
cloudnullthe raised exception is here https://github.com/openstack/nova/blob/stable/liberty/nova/network/neutronv2/api.py#L34220:59
cloudnullkysse:  when you reran the os-neutron-install.yml play did you do so with or without a tag / limit ?21:01
*** Guest7150 has quit IRC21:01
*** javeriak has quit IRC21:01
kysseopenstack-ansible setup-everything.yml --tags neutron-config21:02
cloudnulli hate to ask, however: can you run ``openstack-ansible os-neutron-install.yml`` and then try to create the vm using a vxlan network ?21:02
kyssesure. sec.21:03
cloudnullmaybe the tag i told you is missing a step ...21:03
cloudnullbut the only thing i can think of that would cause that is a busted ml2 config21:03
*** metral is now known as metral_zzz21:03
*** metral_zzz is now known as metral21:04
cloudnulland it could be in your agent container(s), or the compute node(s)21:04
ApsuIt seems a little light to me, but then I know we split out some LB things into its own file...21:04
kyssemaybe I and d9k should contribute to openstack-ansible's documentation and stuff; we saw that there is a lot of information missing from the documentation.21:05
kyssebtw.21:05
cloudnullkysse: d9k that would be awesome21:05
ApsuPatches welcome! Encouraged. Lauded.21:05
cloudnulldoc updates help everyone, and we've done our best to make the docs what they are, however they could use some more love, that's for sure.21:07
Sam-I-Amplenty of love needed21:07
Sam-I-Amespecially around that finicky host_bind_override thing21:07
cloudnullback in a min, making food21:11
kyssecloudnull: no luck.21:11
*** adac has quit IRC21:14
*** phiche1 has joined #openstack-ansible21:17
*** phiche has quit IRC21:17
cloudnullkysse:  hum...21:21
Apsukysse: cloudnull: Sam-I-Am: "binding:vif_type": "binding_failed"21:23
ApsuThis is the standard vif_type binding_failed error21:23
kysseI see that when I query mysql neutron something..21:23
ApsuAlso... from the debug output of nova-compute...21:24
ApsuI see neutron.admin_* values are None21:24
kyssei pasted debug output of nova-compute long time ago21:24
ApsuWhich means nova can't log into neutron's endpoint21:24
ApsuThat's probably the whole problem. Nova needs creds for Neutron in the nova.conf21:25
*** sdake has joined #openstack-ansible21:25
ApsuThis doesn't look right to me either: neutron.admin_auth_url         = http://localhost:5000/v2.021:25
ApsuGuessing keystone isn't running on your compute node21:26
Sam-I-AmApsu: so i was noticing earlier, working with kysse, that neutron.conf did not have a [keystone_authtoken] section21:26
Sam-I-Amwhich i found a bit odd21:26
ApsuYeah seems like there's some auth shenanigans here.21:26
Sam-I-Ampretty sure thats a) needed and b) something o-a has stuffed into that file for a LONG time21:26
cloudnullwhich should be "internal_lb_vip_address: 10.0.8.4" based on the user_config21:28
*** adac has joined #openstack-ansible21:29
cloudnullkysse: did you by chance run the nova-compute play w/ ``ansible-playbook`` or w/ ``openstack-ansible`` ? also, w/ the updates we made earlier in the openstack_user_config file we might want to rerun the os-nova-install.yml (openstack-ansible os-nova-install.yml)21:33
cloudnullbut that would be odd if the keystone auth secions are missing from the nova-compute nodes21:33
kyssewith openstack-ansible21:34
kyssehmh. Maybe I should reinstall whole compute node or something..21:34
*** karimb has joined #openstack-ansible21:34
cloudnulljust rerun ``openstack-ansible os-nova-install.yml --tags nova-config``21:34
kysseok21:35
cloudnullthe adminurl shouldve been defined here " https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_nova/templates/nova.conf.j2#L167"21:35
cloudnullwhile that's running, would you mind opening the nova.conf file on the compute node and seeing if auth_url is in the [neutron] section21:36
Apsucloudnull: And what about the admin_username and tenant_id and all that for neutron, too?21:39
ApsuI assume that's still required for nova these days21:39
cloudnullhttps://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_nova/templates/nova.conf.j2#L154-L16921:40
cloudnullall that should be in the file21:40
* Apsu nods21:40
ApsuSo either the template didn't run, or the variables are blank or something21:40
ApsuAll blank/missing would mean it'd use defaults, and I bet the auth URL default is localhost21:41
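For context, the [neutron] section that template renders is along these lines (all values are placeholders; the exact option names were in flux between kilo and liberty, which is the crux of what's being chased here):

    [neutron]
    url = http://<internal_lb_vip>:9696
    auth_plugin = password
    auth_url = http://<internal_lb_vip>:35357/v3
    username = neutron
    password = <neutron_service_password>
    project_name = service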
*** sdake_ has joined #openstack-ansible21:47
*** sdake has quit IRC21:47
kyssecloudnull: no luck. We also checked those neutron.admin_auth urls etc.21:47
*** mfisch has joined #openstack-ansible21:48
*** adac has quit IRC21:48
*** mfisch has quit IRC21:48
*** mfisch has joined #openstack-ansible21:48
*** KLevenstein has joined #openstack-ansible21:51
*** stevelle has quit IRC21:53
cloudnulldo they say localhost  ?21:53
*** stevelle has joined #openstack-ansible21:56
kyssethe internal lb vip21:57
*** alkari has quit IRC21:57
cloudnullkysse:  and you have [keystone_authtoken] in the nova.conf22:00
cloudnulland its auth_ur* entries are using the internal vip too   ?22:00
*** coolj has left #openstack-ansible22:01
*** sdake_ has quit IRC22:02
cloudnullkysse:  on the compute node, are you restarting the service w/ "service nova-compute restart"22:04
cloudnullor have you started it by backgrounding the nova-compute command ?22:04
*** markvoelker has joined #openstack-ansible22:07
ApsuYeah seems like maybe it's not reading the configs22:11
cloudnullwhats odd is I see "2015-11-30 22:39:45.641 5513 DEBUG oslo_service.service [req-4aab2824-7df8-4159-9914-ce087783fabc - - - - -] oslo_messaging_rabbit.rabbit_host = localhost"22:12
cloudnulland 2015-11-30 22:39:45.644 5513 DEBUG oslo_service.service [req-4aab2824-7df8-4159-9914-ce087783fabc - - - - -] neutron.admin_auth_url         = http://localhost:5000/v2.0 log_opt_values /openstack/venvs/nova-12.0.1/lib/python2.7/site-packages/oslo_config/cfg.py:223322:12
cloudnullthe rabbit one makes sense22:12
cloudnullbecause we use rabbit_hosts22:12
cloudnullwhich has the correct values of the rabbit nodes ['10.0.9.97:5671', '10.0.9.82:5671']22:12
cloudnullbut "neutron.admin_auth_url         = http://localhost:5000/v2.0" seems wrong...22:13
cloudnullunless we've missed something22:13
cloudnullkysse:  i have a new liberty build going on now22:13
cloudnulland will be able to see if i can test the same things here in a min22:14
*** spotz is now known as spotz_zzz22:14
cloudnullkysse: Apsu im thinking that https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_nova/templates/nova.conf.j2#L16722:14
cloudnullshould be admin_auth_url = ...22:15
Apsuyeah22:16
kyssehmh.22:16
Apsupgrep -fa nova-compute22:16
ApsuSee what parameters its running with22:16
*** spotz_zzz is now known as spotz22:19
*** daneyon has quit IRC22:23
*** mancdaz has quit IRC22:26
*** mancdaz has joined #openstack-ansible22:28
cloudnullkysse: so this is not your fault. you would've had the same issue even after the openstack_user_config change we made a while back, because the config options in the nova.conf file have recently changed...22:29
cloudnulland i think its a change in keystonemiddleware that is causing the issue22:29
kysseah, so I'm not crazy afterall.22:30
cloudnullnot at all22:31
cloudnullit's just taken me a while to track the issue down22:31
cloudnullsorry about that22:31
kysseno problems.22:32
*** lkoranda_ has joined #openstack-ansible22:48
*** Mudpuppy has quit IRC22:49
*** lkoranda has quit IRC22:50
*** lkoranda_ has quit IRC22:52
*** metral has quit IRC22:56
cloudnullkysse:  can you try something for me22:56
cloudnullcan you add "auth_plugin = password" to the [neutron] section in your nova.conf22:56
cloudnullon the compute node22:56
cloudnullrun: service nova-compute restart" and then try again22:56
kyssewell, I'd have to do all that stuff you told me a while ago.22:57
cloudnullthis time try booting an instance w/ a vxlan network22:57
*** metral_zzz has joined #openstack-ansible22:57
*** metral_zzz is now known as metral22:57
cloudnullwhats that ?22:57
*** lkoranda has joined #openstack-ansible22:58
*** phiche1 has quit IRC22:58
kysseammmm. nothing. I'll test service nova-com... sec.22:58
kysseit's already there! but the password is password22:59
cloudnullauth_plugin = password is already there ?22:59
kysseyes indeed.23:00
cloudnullok23:00
kysse/etc/nova/nova.conf @compute23:00
cloudnullin the neutron section right  ?23:00
*** tlian2 has joined #openstack-ansible23:00
kyssey23:01
*** tlian has quit IRC23:03
cloudnullok last thing, try adding "admin_" to the auth_url, password, username opts in the [neutron] section23:04
cloudnullrestart nova-compute and build a vm23:05
ApsuDid we verify it's actually reading these configs?23:06
cloudnullit is23:06
Apsupgrep -fa nova-compute, make sure the confs are in the commandline?23:06
Apsuok23:06
cloudnullhowever this https://github.com/openstack/nova/blob/stable/liberty/nova/network/neutronv2/api.py#L182-L208 seems to be loading the plugin23:07
cloudnullwhich has all of the deprecated ops https://github.com/openstack/nova/blob/stable/liberty/nova/network/neutronv2/api.py#L51-L5323:07
cloudnulland is not using the regular auth_plugin like it should23:07
*** spotz is now known as spotz_zzz23:08
cloudnulljamielennox: question for you when your around .23:08
cloudnull[neutron] auth_plugin seems to be ignoring the values of the keystone_authtoken section when auth_plugin = password23:09
cloudnullfrom the [neutron] section23:09
Sam-I-Amthis was kilo?23:09
Sam-I-Amor is...23:09
cloudnullliberty23:09
Sam-I-Amcloudnull: did you see the related sections in here? http://docs.openstack.org/liberty/install-guide-ubuntu/23:10
cloudnullSam-I-Am:  yes thats what we have23:11
cloudnullhowever thats not what is being loaded in nova.conf23:11
Sam-I-Amsure this is liberty? because it was different in kilo.23:12
kyssehmh23:12
cloudnullSam-I-Am: yes, liberty23:12
kyssewe're running the newest one.23:13
Sam-I-Amalso wondering why this would have all of the sudden broke23:13
Sam-I-Amkysse: which tag?23:13
cloudnullSam-I-Am: 12.0.1 (based on the log files)23:13
kysse12.0.123:13
*** baker has quit IRC23:14
cloudnullSam-I-Am: http://sprunge.us/WcCO23:14
cloudnullhas neutron.admin_auth_url         = http://localhost:5000/v2.023:14
kysse 01:04   cloudnull| ok last thing, try adding "admin_" to the auth_url, password, username opts in the [neutron] section23:15
kyssetrying this now23:15
cloudnullSam-I-Am:  i'd expect to see "neutron.admin_auth_url" or neutron.auth_url set to the keystone auth endpoint23:17
kysseahmh.23:17
kysseCould not clean up failed build, not rescheduling23:17
cloudnullbut the only two uses of port 5000 are set to localhost23:17
kyssehttp://sprunge.us/CjZG23:18
cloudnullkysse:  with all of these tests you may need to delete a bunch of the dead vms23:18
kysseok23:18
cloudnullwell thats an odd error23:19
kyssehttp://paste.nerv.fi/77158297.txt neutron section23:20
cloudnullSam-I-Am jamielennox do you know if the keystone_authtoken will simply not show up in the debug output when the config is loaded ?23:22
cloudnullbecause maybe this is a red herring ?23:23
Sam-I-Amcloudnull: in that [neutron] stuff, the usual username/password (rather than admin_username) should work23:23
Sam-I-Amcloudnull: what parts of keystone_authtoken are you looking for?23:24
cloudnullany of it23:24
Sam-I-Amdoubt it23:24
cloudnullif i restart nova-compute that section does not show up in the running output23:24
cloudnullsame for the neutron section23:25
Sam-I-Amoh, i think i've seen that23:25
cloudnullhowever the old opts do23:25
Sam-I-Amyeah because he never updated those23:25
cloudnull:'(23:25
*** spotz_zzz is now known as spotz23:26
Sam-I-Amthe whole cfg.CONF is broken for keystone middleware23:27
Sam-I-Amand should not be used23:27
Sam-I-Amso you sort of just figure it out by reading code, or a blog post23:28
cloudnullkysse:  i have to run for the evening however I may be back online later on .23:31
cloudnullyou can rerun the os-nova-install.yml --tags nova-config23:31
cloudnullto restore the configs back to the way they were prior to all of the messing about.23:31
kysseok. :)23:32
Sam-I-Ami think this went down the wrong rabbit hole23:32
cloudnullas for the issues, i still dont know, however I have my 10 node cluster I'm going to beat on to see if i can recreate the issues.23:32
Sam-I-Amjust a hunch23:32
cloudnullSam-I-Am:  that may be, however kysse still has a broken cluster, and it would be good to figure out why.23:33
cloudnullit could be something to do with our liberty code, or a random misconfiguration23:33
cloudnullIDK .23:33
Sam-I-Amgiven the stuff i saw that was missing earlier23:33
Sam-I-Ami wasnt around if y'all were solving (or solved) that23:34
Sam-I-Ambut curious what else might be missing23:34
cloudnullkysse: are all of the bridges up on all of the network and compute nodes ?23:37
cloudnulland do you see the corresponding devices in the containers23:37
openstackgerritMerged openstack/openstack-ansible: Allow ramdisk_id, kernel_id to be null on schema  https://review.openstack.org/24650323:38
kyssehmh.23:39
*** sigmavirus24 is now known as sigmavirus24_awa23:44
*** spotz is now known as spotz_zzz23:49
kyssecorresponding devices as?23:50
