Tuesday, 2017-05-16

04:58 <openstackgerrit> cheng proposed openstack/octavia master: Enable add debug account for ssh access  https://review.openstack.org/464560
09:44 <openstackgerrit> cheng proposed openstack/octavia master: Add allocate vip port when create loadbalancer in server side  https://review.openstack.org/463289
14:07 <leitan> Hi guys, quick question: I'm testing Octavia, but I need to give the devs an LBaaS service ASAP, so I'm planning to deploy V2 with the agent version while I work on Octavia in parallel
14:07 <leitan> can I have both service providers configured in neutron,
14:08 <leitan> with the agent as default and Octavia as the other choice?
14:09 <leitan> something like: service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default,LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver
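
A minimal sketch of that dual-provider setup as it would appear in the neutron config, assuming the stock Mitaka neutron-lbaas driver paths quoted above (xgerman confirms the syntax further down):

    [service_providers]
    # HAProxy agent driver, flagged as the default provider
    service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
    # Octavia as a second, non-default choice
    service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver
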
14:11 <leitan> that's question one; second, is there already a mechanism to reschedule an lbaasv2 agent-mode load balancer onto another agent if agent1 dies?
14:15 <leitan> from what I see it has been implemented in Ocata; is there any chance to backport it to Mitaka?
14:18 <leitan> or is there any "manual" option to reassign which agent the LBaaS runs on, to manually reschedule on Mitaka
14:35 <nmagnezi> leitan, hi there
14:36 <leitan> nmagnezi: hi Nir!
14:36 <leitan> saw you working on this patch, so I think you're the man with the answer :P
14:37 <nmagnezi> leitan, as for both service_providers at once, I think it's not currently possible since you'd need two keystone endpoints. I know this is in the works since Octavia is about to become its own endpoint; when it's done it will be possible. Pinging rm_work and xgerman to keep me honest here.
14:37 <nmagnezi> leitan, as for rescheduling, yup, it is possible
14:37 <nmagnezi> leitan, indeed I coded that one :)
14:37 <nmagnezi> leitan, there's a single flag you'll need to flip: https://review.openstack.org/#/c/299998/51/releasenotes/notes/auto_reschedule_load_balancers-c0bc69c6c550d7d2.yaml
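
Flipping that flag is one line of neutron config; a sketch, assuming the allow_automatic_lbaas_agent_failover option name and [DEFAULT] placement from that release note (and, as noted just below, only available from Ocata on):

    [DEFAULT]
    # reschedule load balancers away from an lbaas agent that is marked dead
    allow_automatic_lbaas_agent_failover = True
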
14:38 <leitan> yup, saw the flag, but I'm running Mitaka
14:38 <leitan> and I didn't see the code get into that release
14:38 <xgerman> ok, you can have both providers. I would make Octavia the default - but I am biased ;-)
14:38 <nmagnezi> xgerman, lol
14:38 <nmagnezi> leitan, good thing I pinged xgerman :)
14:39 <leitan> xgerman: that's great to hear, can I do that on Mitaka also?
14:39 <xgerman> yes
14:39 <leitan> like I stated up there
14:39 <leitan> great, is the syntax correct?
14:39 <leitan> I will make Octavia the default once I get it working :P
14:39 <nmagnezi> leitan, and as for auto-reschedule, I'm sorry but it's only available starting with stable/ocata
14:39 <xgerman> yep, the syntax should work
14:39 <nmagnezi> leitan, and we cannot backport this; it's against the backports policy to backport new features
14:40 <xgerman> you can also deploy master Octavia with Mitaka, IMHO, if you have a way to isolate dependencies (e.g. run your control plane in a container)
14:40 <xgerman> but the Mitaka Octavia already works… most of the work for Newton was bug fixing…
14:40 <leitan> xgerman: I'm working from several doc pages to deploy Octavia; I know you guys are working on the docs, so maybe I can help improve them as I go, with my experience
14:41 <xgerman> that would be great!!
14:41 <nmagnezi> +1
14:41 <nmagnezi> :)
14:41 <leitan> nmagnezi: ok, I'll try to backport that myself, just to offer "fake HA" until I get Octavia working
14:41 <nmagnezi> leitan, why?
14:42 <leitan> because I have to deliver LBaaS this week, and I don't think I'll get Octavia up and running by that time
14:42 <leitan> I'm using https://docs.openstack.org/developer/octavia/guides/dev-quick-start.html
14:42 <leitan> but I found it pretty "generic"; I don't know if there's any more detailed doc
14:43 <leitan> I'm combining it with https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html to configure everything
14:43 <leitan> and I think it would be great to actually create something like the metadata-proxy agent, so the management network for Octavia could run on a vxlan, for example
14:44 <leitan> because I have a working cloud, and I needed to create a new flat network, reconfigure OVS, and restart all the agents to reach the amphorae from my controller
14:44 <xgerman> we have OpenStack-Ansible scripts which talk more about the network
14:45 <leitan> xgerman: yup, I surfed the Ansible scripts to understand that correctly, and finally created that network to manage the amphora instances
14:46 <xgerman> yeah, the network is the hardest part
14:46 <leitan> but I think it would be great to have a "namespaced" way to talk to the amphorae
14:46 <xgerman> mmh, that's an interesting idea
14:46 <leitan> so you don't depend on your network administrators, and possibly you can also fully delegate administration of the amphorae to the owning tenant
14:47 <leitan> in the current model you depend on a management network that exists in your DC, and you probably don't want your fully isolated tenants to access it
14:47 <leitan> so it's kind of hard to RBAC this setup
14:48 <xgerman> nah, the amphorae are supposed to be a black box for the tenant. They are supposed to interact through the LBaaS V2 API
14:49 <xgerman> but I can see how the management net is cumbersome
14:49 <leitan> xgerman, ok, I can totally give you that, but the mgmt net is a pain in the ass for a running production cloud
14:50 <leitan> more so if I can't dedicate new compute nodes to run amphorae and have to touch the network config of the production ones
14:50 <leitan> like I faced last week
14:51 <leitan> something like octavia-ns-proxy, with an admin-owned VXLAN network ... I think that would really ease the setup
14:51 <xgerman> yep, agreed - it might be best to file an RfE bug
14:52 <xgerman> I also know people who run the mgmt net on the public provider net
14:52 <xgerman> (rm_work does something like that) to avoid the extra mgmt net
14:54 <leitan> xgerman: I flirted with that idea, but I found it stupid to "consume" addressing from my provider net for lb_mgmt purposes
14:55 <leitan> and to mix "functionalities" across networks
14:55 <leitan> xgerman: want me to file an RFE bug?
14:56 <xgerman> yes
14:56 <leitan> I can totally do that if it makes sense to you
14:58 <xgerman> it makes sense - the big problem is always finding developers to actually do it…
15:13 <rm_work> leitan: yeah, our default provider net is "private", so I just let nova schedule it to some network, figure out what the network is, and use that
15:13 <rm_work> everything is over HTTPS, so I'm not super concerned about it
15:13 <rm_work> and the endpoints are authed
15:13 <rm_work> that way I avoid using a special management net
15:13 <rm_work> I just merged the patch that makes that possible, actually
15:14 <rm_work> leitan: https://review.openstack.org/#/c/463660/
15:14 <rm_work> leitan: and per what xgerman was saying, I'd recommend running Octavia as new as Ocata or Pike, even against an old cloud, as it is totally independent
15:15 <rm_work> we're running *master* of Octavia against a Liberty cloud with no problems
15:15 <rm_work> there are definitely enough bug fixes and improvements to make it worth it
15:15 <rm_work> and if you go to *master*, you can deploy Octavia with the same API as n-lbaas, so it's seamless
15:15 <rm_work> brb
15:17 <leitan> rm_work: yeah, I will run the Ocata Octavia version actually; I'm using Kolla containers for it to make it easier
15:18 <leitan> xgerman nmagnezi rm_work: I filed an RFE for what we were discussing about the lb-net: https://bugs.launchpad.net/octavia/+bug/1691134
15:18 <openstack> Launchpad bug 1691134 in octavia "[RFE] Implement a namespace-agent to manage amphorae via isolated/tunnel networks" [Undecided,New]
15:18 <leitan> so if you guys can take a look when you can, and let me know if it's clear enough
15:25 <xgerman> k
15:34 <xgerman> moved it to confirmed
15:34 <johnsom> lb-mgmt can be routed, not flat. We use a simple neutron network (can be vxlan) in devstack. I am not understanding why you would need a proxy at all
15:36 <johnsom> the only hard part is getting the controller processes onto the neutron network (which is owned by the octavia service account, like the amphora VMs, BTW)
15:38 <xgerman> I assume a namespace proxy would avoid dealing with new networks…
15:39 <xgerman> I like that it's captured, so we can discuss and see if it's worthwhile
15:41 <leitan> johnsom: you are right about the network being routed, but it turned out simpler for my use case to do a flat network, so as not to mess with our OSPF and wait years for a new network to be published
15:45 <leitan> johnsom: and also we don't have a mechanism today to get the controllers into the vxlan network, since they are in an isolated DMZ; with this "proxy" mechanism I wouldn't need to change anything in my setup to introduce Octavia into it
19:11 <leitan> guys, just trying to create my first Octavia load balancer, and the neutron API keeps complaining that it's trying to reach keystone at 127.0.0.1; I have added auth_url to service_auth in neutron_lbaas.conf with no luck, am I missing something?
19:12 <leitan> or will I need an octavia.conf also in /etc/neutron on the neutron server machine
19:29 <leitan> I'm getting this: http://paste.openstack.org/show/609713/
19:33 <xgerman> ok, there is a section in the lbaas/neutron configs where you need to add the keystone values
19:36 <xgerman> https://github.com/openstack/openstack-ansible-os_neutron/blob/master/templates/neutron.conf.j2#L191-L205
19:36 <xgerman> it's called service_auth
19:46 <leitan> ok, I have that section exactly like that, but in the neutron_lbaas.conf file
19:46 <leitan> I'll move that to neutron.conf
19:46 <leitan> and try again
19:54 <leitan> hmm, seems the trailing /v3 is important; I'm getting a 404 now
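
For reference, the [service_auth] block ends up looking roughly like this; a sketch modeled on the OpenStack-Ansible template linked above, with purely illustrative values (option names should be checked against that template); note the trailing /v3 on auth_url:

    [service_auth]
    auth_url = http://controller:5000/v3
    admin_user = admin
    admin_password = secret
    admin_tenant_name = admin
    admin_user_domain = default
    admin_project_domain = default
    region = RegionOne
    auth_version = 3
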
20:01 <xgerman> it should work in lbaas.conf, so I'm a bit surprised…
20:02 <leitan> xgerman: moved it to neutron.conf and it worked
20:02 <xgerman> yeah, surprising
20:02 <leitan> maybe by default neutron-server is not loading the neutron_lbaas.conf file in the systemd unit?
20:03 <leitan> xgerman: made it to the Octavia API now :), but getting AttributeError: 'NoopAmphoraLoadBalancerDriver' object has no attribute 'get_vrrp_interface'
20:03 <xgerman> oh, there must still be some setting in octavia.conf that's off
20:04 <leitan> I think that too, I'll recheck
20:04 <leitan> I'm using ACTIVE_STANDBY btw
20:06 <leitan> hmm, and that's why
20:06 <leitan> only the amphora_haproxy_rest_driver
20:06 <leitan> has get_vrrp_interface
20:10 <leitan> should I use the amphora_haproxy_rest_driver instead of the noop one that seems to be the default?
20:10 <leitan> xgerman?
20:10 <xgerman> yes
20:12 <leitan> thanks
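
The change boils down to one octavia.conf line; a minimal sketch, assuming the in-tree driver name above and that loadbalancer_topology and amphora_driver both live in the [controller_worker] section:

    [controller_worker]
    loadbalancer_topology = ACTIVE_STANDBY
    # ACTIVE_STANDBY needs the real driver: only it implements the VRRP calls
    amphora_driver = amphora_haproxy_rest_driver
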
21:09 <leitan> xgerman: this is curious: when I try to delete an LB that's in status ERROR because it never got created, it gets stuck in PENDING_DELETE, and the octavia_api loops constantly over a GET /v1/loadbalancers/0f42ad66-fd4a-4f68-a735-2c55f4658db0 that returns a 404 (ok, because it doesn't exist); then after a few iterations it comes back to status ERROR
21:11 <xgerman> could be a bug if you run master…
21:11 <xgerman> I haven't seen that (yet)
21:19 <rm_work> ugh
21:20 <rm_work> was "get_vrrp_interface" part of my patch?
21:20 <rm_work> sec
21:20 <rm_work> it's missing on the noop?
21:21 <rm_work> crap
21:21 <rm_work> it's in the rest_api_driver but not actually in the interface >_< wtf
21:21 <rm_work> ugh
21:22 <rm_work> wow, this is messy
21:22 <rm_work> the amphora driver pulls in a mixin for VRRP
21:22 <rm_work> but even THAT interface doesn't include "get_vrrp_interface"
21:22 <rm_work> this needs to be a bug
21:23 <rm_work> and get cleaned up
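
The cleanup lands later in this log (review 465185, "VRRP amphora_driver functions weren't handled by noop driver"); a rough Python sketch of the shape of the fix, with the base class elided and the method body illustrative rather than the merged code:

    class NoopAmphoraLoadBalancerDriver(object):  # real base class elided in this sketch

        def get_vrrp_interface(self, amphora):
            # Nothing real to query in noop mode: return a dummy interface
            # name so ACTIVE_STANDBY flows can proceed in tests instead of
            # raising AttributeError.
            return 'noop-vrrp-interface'
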
21:30 <leitan> rm_work: want me to file it?
21:30 <rm_work> that would be excellent
21:30 <rm_work> since you know how to repro
21:30 <leitan> rm_work: right away
21:30 <rm_work> awesome, thanks :P
21:31 <leitan> I moved forward; after solving the certificate issue I got to here: http://paste.openstack.org/show/609723/
21:32 <rm_work> yeah, really you need the rest driver anyway
21:32 <rm_work> hmmm
21:32 <leitan> I'm on the rest driver now
21:33 <rm_work> something broke at a higher level
21:33 <rm_work> because that should always work
21:33 <leitan> the VIP address gets "allocated": | ee03a772-3576-468c-90af-5cf6da97bb2f | octavia-lb-test12 | 80.80.80.15 | ERROR | octavia |
21:33 <rm_work> the amphora doesn't have a valid network config
21:36 <leitan> hmm, probably it's because I didn't define amp_boot_network_list
21:36 <rm_work> AHHH yes
21:36 <rm_work> you need that
21:36 <rm_work> unless you have my patch
21:36 <rm_work> which you won't yet
21:36 <leitan> I thought it would extract it from the [networking] section
21:37 <leitan> so I need to also specify the management network in the boot network list
21:37 <rm_work> yes
21:37 <rm_work> err
21:37 <rm_work> hold on
21:37 <rm_work> maybe it will extract it
21:37 <rm_work> let me look
21:39 <leitan> rm_work: ok
21:39 <leitan> rm_work: I thought that was for "additional networks" you want to plug in
21:40 <leitan> and the mgmt network got extracted from the lb_network_name variable in [networking]
21:40 <rm_work> doesn't look like it extracts it from that
21:40 <rm_work> you need to specify it
21:40 <rm_work> hmm
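
In practice that means listing the management network explicitly; a sketch, assuming amp_boot_network_list sits in octavia.conf's [controller_worker] section and takes network (not subnet) IDs, per rm_work's conclusion a bit further down:

    [controller_worker]
    # network(s) the amphorae boot attached to; the lb-mgmt net UUID goes here
    amp_boot_network_list = <lb-mgmt-net-uuid>
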
21:41 <leitan> yes, checking the code, and it seems it doesn't get used anywhere
21:41 <leitan> but it's still there in the config
21:41 <rm_work> I think it's not
21:41 <rm_work> ummmmmmm
21:42 <rm_work> yeeeeaaaahhhh
21:42 <rm_work> that's what I just noticed too
21:42 <rm_work> the code doesn't use it
21:42 <rm_work> at all
21:42 <rm_work> ... bug?
21:42 <leitan> it's just the CONF option definition, and the same in the test cases
21:42 <rm_work> yes
21:42 <leitan> I'll file a bug for that too
21:42 <rm_work> k
21:42 <rm_work> thanks :)
21:42 <leitan> no worries
21:43 <rm_work> let me know the bug ID :P
21:46 <rm_work> and it looks like it was ACTUALLY supposed to be a name
21:46 <rm_work> whereas the boot list is subnet_ids
21:48 <leitan> rm_work: https://bugs.launchpad.net/octavia/+bug/1691286
21:48 <openstack> Launchpad bug 1691286 in octavia "lb_network_name option defined on tests and CONF.getops but not used anywhere" [Undecided,New]
21:48 <rm_work> xgerman: so I'm about to make our HM system driver-able and write an etcd3 driver <_<
21:50 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Remove lb_network_name from config (it was bogus)  https://review.openstack.org/465183
21:50 <rm_work> leitan: alright, "fix" posted :P
21:51 <rm_work> wait, was johnsom on? I thought he was on vacation
21:54 <leitan> rm_work: he was answering a couple of hours ago
21:54 <leitan> rm_work: also filed https://bugs.launchpad.net/octavia/+bug/1691287
21:54 <openstack> Launchpad bug 1691287 in octavia "get_vrrp_instance missing on NoopAmphoraLoadBalancerDriver" [Undecided,New]
21:54 <rm_work> k
21:54 <rm_work> that's a little more involved; I don't have time to tackle it *immediately*
21:54 <rm_work> actually...
21:54 <rm_work> well, OK, yeah I do
21:56 <leitan> rm_work: :)
22:05 <leitan> rm_work: looking at the code I see amphorae_network_config.get(amphora.id).vip_subnet; I tried specifying the network_id and also the network name, but ... should I specify the subnet NAME?
22:05 <openstackgerrit> Adam Harwell proposed openstack/octavia master: VRRP amphora_driver functions weren't handled by noop driver  https://review.openstack.org/465185
22:05 <rm_work> subnet_id
22:05 <rm_work> all subnets are referenced by ID
22:05 <rm_work> which config var are you referring to?
22:05 <rm_work> amp_boot_network_list?
22:06 <leitan> yes
22:06 <leitan> because it just says "# Networks to attach to the Amphorae examples:"
22:06 <leitan> it doesn't say subnet_id anywhere
22:06 <leitan> so it may sound silly, but it's not self-explanatory
22:06 <rm_work> hmm, no, I think you're right
22:06 <rm_work> one sec
22:06 <leitan> ok
22:06 <rm_work> want to file a bug?
22:06 <rm_work> :P
22:07 <leitan> yes, so it should say "subnet_id", right?
22:07 <rm_work> hold on, actually
22:07 <rm_work> looking
22:07 <leitan> ok
22:07 <rm_work> hmmmmmmm
22:07 <rm_work> no, this does look like network_ids
22:08 <rm_work> you specified the correct network_id and it didn't work?
22:08 <leitan> it didn't work
22:08 <rm_work> hmm
22:08 <rm_work> still the same error?
22:09 <leitan> but the curious thing is that it's failing on the .get for the VIP subnet
22:09 <leitan> subnet = amphorae_network_config.get(amphora.id).vip_subnet
22:09 <leitan> AttributeError: 'NoneType' object has no attribute 'get'
22:09 <rm_work> well, it's because the .get(amphora.id) fails
22:09 <rm_work> so get() returns None
22:10 <leitan> yes, that's clear
22:10 <leitan> but why vip_subnet?
22:10 <rm_work> so amphorae_network_config doesn't have info for that amp
22:10 <rm_work> it's not really relevant
22:10 <leitan> the VIP subnet is not the tunneled net?
22:10 <leitan> line 112, in post_vip_plug
22:11 <leitan> I guess then it's not actually a VIP? or does it use a VIP for mgmt too?
22:11 <rm_work> so
22:11 <rm_work> this is getting the info to set up the VIP on the amphora
22:11 <leitan> yes
22:11 <rm_work> right
22:11 <rm_work> so it needs the subnet_id of the VIP
22:11 <rm_work> so it can fill the networking templates properly
22:11 <rm_work> to send to the amphora
22:12 <leitan> right, but the VIP subnet is the one you're passing as an argument to loadbalancer-create
22:13 <leitan> and as you can see here: http://paste.openstack.org/show/609725/
22:13 <leitan> it's actually allocating it fine from neutron
22:13 <rm_work> yes, so you're saying it should just come from the LB?
22:13 <rm_work> instead of from the amp config
22:14 <leitan> rm_work: at least in post_vip_plug, I guess so, yes
22:14 <rm_work> looking
22:15 <leitan> but if it's using the same function to actually plug the mgmt network ... then we come back to the amp_boot_network_list
22:15 <leitan> that we were talking about
22:15 <rm_work> soooo
22:15 <rm_work> in the allowed_address_pairs driver
22:16 <rm_work> it comes directly from loadbalancer.vip.subnet_id
22:16 <rm_work> lol
22:16 <rm_work> OH, uhhhhh
22:16 <rm_work> you replaced the amp driver
22:16 <rm_work> did you replace ALL of the drivers?
22:16 <rm_work> amphora_driver = amphora_haproxy_rest_driver
22:16 <rm_work> compute_driver = compute_nova_driver
22:16 <rm_work> network_driver = allowed_address_pairs_driver
22:16 <rm_work> you need to set ALL of those
22:16 <rm_work> >_>
22:16 <rm_work> or it won't work
22:17 <rm_work> though admittedly, yes, it looks like there's no reason that should need to come from the amphora and not the load balancer
22:17 <leitan> rm_work: I didn't change those
22:17 <rm_work> lol
22:17 <rm_work> ok yeah
22:18 <rm_work> you need all three
22:18 <rm_work> for it to DO anything
22:18 <rm_work> no wonder it isn't working :P
22:18 <rm_work> not sure what our docs mention about that...
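
Putting rm_work's three lines together, the relevant octavia.conf block would look roughly like this; the driver names are quoted verbatim above, while [controller_worker] as the section name is an assumption based on the standard octavia.conf layout (the noop defaults are only meant for testing):

    [controller_worker]
    amphora_driver = amphora_haproxy_rest_driver
    compute_driver = compute_nova_driver
    network_driver = allowed_address_pairs_driver
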
22:18 <leitan> now that you're saying it, it makes sense to change the noops there too, but ... it's not actually clear
22:18 <rm_work> but still, I am not sure why this is pulled this way
22:18 <rm_work> when it comes directly from the load balancer
22:19 <leitan> rm_work: want me to file a bug to make that reference a little more self-explanatory?
22:19 <rm_work> maybe it has to do with which object models we have access to at various points of taskflow
22:19 <rm_work> you can file one if you want :)
22:19 <leitan> rm_work: I think it's actually related to taskflow
22:19 <leitan> ok, I'll do that; if you're not an Octavia developer you can crash like me :P
22:21 <rm_work> yeah
22:22 <leitan> rm_work: is it stupid to assume that if you set the amphora driver to the rest driver, the CONF option should react to this and set the compute and network drivers to nova and allowed address pairs?
22:22 <rm_work> well
22:22 <leitan> or would it conflict with other setups that use custom drivers there?
22:22 <rm_work> it theoretically could be any combination
22:22 <rm_work> it's just that the other noops don't fill stuff in
22:22 <rm_work> so if you used some other network driver
22:22 <rm_work> ...
22:22 <rm_work> it just happens there aren't others at the moment :P
22:22 <rm_work> so if you had, like,
22:22 <rm_work> container_compute_driver and opendaylight_network_driver
22:23 <leitan> understood; I'll file a bug just so the conf file is more specific if you're changing this
22:23 <rm_work> that would work too
22:23 <rm_work> I think it's a doc problem
22:23 <leitan> yes
22:23 <rm_work> more than a conf problem
22:23 <rm_work> I mean, I don't think we need a whole set of docs INSIDE our conf file
22:23 <rm_work> but honestly the real fix is that the noop drivers SHOULD set things up in a way that will work
22:23 <rm_work> though
22:23 <rm_work> it still wouldn't *work* work
22:23 <rm_work> but it shouldn't *crash* at that stage like that
22:24 <leitan> rm_work: so does it make more sense to file a bug in that direction, or more like a doc bug first?
22:24 <leitan> I'll file both
22:25 <leitan> in order to "handle" this scenario and update the docs
22:27 <leitan> rm_work: now I'm on MissingAuthPlugin: An auth plugin is required to determine endpoint URL
22:27 <leitan> seems that I need to configure something somewhere else
22:27 <rm_work> mmm
22:28 <leitan> should I fill in the [neutron] section in octavia.conf? or does it automatically get it from keystone?
22:29 <rm_work> yeah, you're missing something in the config
22:29 <rm_work> nooo
22:29 <rm_work> you need to fill it in
22:29 <rm_work> that's HOW it gets it from keystone
22:29 <rm_work> you don't need `endpoint`
22:29 <rm_work> but you need `region_name`
22:29 <leitan> service_name, region, and endpoint type
22:30 <rm_work> yes
22:30 <rm_work> for nova/neutron/glance
22:30 <leitan> I guess it will be the same for [nova] and glance
22:30 <rm_work> yep
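
A sketch of those per-service octavia.conf sections, assuming the option names discussed here (region_name, endpoint_type, and optionally service_name), with purely illustrative values:

    [neutron]
    region_name = RegionOne
    endpoint_type = internalURL
    # service_name = neutron   # only if your catalog uses a custom name

    [nova]
    region_name = RegionOne
    endpoint_type = internalURL

    [glance]
    region_name = RegionOne
    endpoint_type = internalURL
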
22:30 <leitan> ok, since I'm specifying an admin account in the keystone_authtoken section
22:31 <rm_work> yes
22:31 <leitan> why doesn't it do this discovery
22:31 <leitan> automatically?
22:31 <rm_work> err
22:31 <leitan> just wondering
22:31 <rm_work> it does discover the endpoints
22:31 <rm_work> but
22:31 <rm_work> how do we know what your nova is?
22:31 <rm_work> to be fair, there could be better defaults
22:31 <rm_work> but like, in our environments, those are custom names
22:31 <leitan> ok, I understand
22:31 <leitan> like nova is not "compute"
22:31 <rm_work> region is always custom
22:31 <rm_work> it can be
22:31 <rm_work> but we have others
22:32 <leitan> like 80% of the deploys :P
22:32 <rm_work> I think "compute" would be a better default than BLANK, though
22:32 <rm_work> but for region there is no good default
22:32 <rm_work> you need to set that
22:32 <rm_work> everyone uses something else
22:32 <leitan> yes, referenced in the doc, and also as the default if you leave it uncommented, I guess
22:32 <rm_work> ah, hmm
22:32 <rm_work> maybe there is a default region?
22:32 <rm_work> I don't know
22:33 <rm_work> I think there isn't in our deploy, but maybe
22:33 <leitan> in the code it's "None", but the default in every other OpenStack conf file is RegionOne
22:34 <leitan> gotta go; I'll file the doc guidance bugs from home
22:34 <rm_work> kk
22:34 <rm_work> RegionOne is devstack
22:34 <leitan> thanks for the support rm_work, I guess tomorrow I'll be bothering you again
22:34 <rm_work> but yeah, we COULD default it if there is no such thing as a "default keystone region"
22:34 <rm_work> I may be out tomorrow
22:35 <rm_work> and the next day, actually :(
22:35 <rm_work> but I can try to respond if I am around
22:35 <leitan> rm_work: yes, devstack, and in every doc guide, when you create the services it does it with RegionOne
22:35 <leitan> rm_work: thanks
22:35 <rm_work> kk
22:35 <rm_work> if there is such a thing as a keystone default region
22:35 <rm_work> then we don't want to change it
22:35 <rm_work> but if there isn't, RegionOne makes sense
22:36 <leitan> rm_work: like this, for example: https://docs.openstack.org/mitaka/install-guide-rdo/nova-controller-install.html
22:36 <leitan> these also: https://docs.openstack.org/newton/install-guide-ubuntu/nova-controller-install.html
22:36 <leitan> and so on for the other core services
23:07 <openstackgerrit> Jude Cross proposed openstack/python-octaviaclient master: Add member commands to client  https://review.openstack.org/463035