*** cpuga_ has quit IRC | 00:01 | |
*** reedip_ has quit IRC | 00:03 | |
*** fnaval has joined #openstack-lbaas | 00:22 | |
*** fnaval has quit IRC | 00:23 | |
*** fnaval has joined #openstack-lbaas | 00:23 | |
*** blogan_ has quit IRC | 00:26 | |
*** cpuga has joined #openstack-lbaas | 00:44 | |
*** harlowja has joined #openstack-lbaas | 00:49 | |
*** cpuga has quit IRC | 00:58 | |
*** cpuga has joined #openstack-lbaas | 00:59 | |
*** cpuga has quit IRC | 01:03 | |
*** gongysh has joined #openstack-lbaas | 01:07 | |
*** cpuga has joined #openstack-lbaas | 01:29 | |
*** harlowja has quit IRC | 01:42 | |
*** cpuga has quit IRC | 01:42 | |
*** harlowja has joined #openstack-lbaas | 01:56 | |
*** cpuga has joined #openstack-lbaas | 02:04 | |
*** aojea has joined #openstack-lbaas | 02:11 | |
*** aojea has quit IRC | 02:16 | |
*** harlowja has quit IRC | 02:20 | |
*** gongysh has quit IRC | 02:39 | |
*** yuanying has quit IRC | 02:51 | |
*** fnaval has quit IRC | 03:12 | |
*** links has joined #openstack-lbaas | 03:30 | |
*** yuanying has joined #openstack-lbaas | 03:58 | |
*** yuanying has quit IRC | 04:23 | |
*** yuanying has joined #openstack-lbaas | 04:23 | |
*** belharar has joined #openstack-lbaas | 04:44 | |
*** krypto has joined #openstack-lbaas | 04:48 | |
*** krypto has quit IRC | 04:48 | |
*** krypto has joined #openstack-lbaas | 04:48 | |
*** strigazi_ has joined #openstack-lbaas | 04:51 | |
*** rochaporto has joined #openstack-lbaas | 04:51 | |
*** rochapor1o has quit IRC | 04:54 | |
*** strigazi has quit IRC | 04:54 | |
*** armax has quit IRC | 04:56 | |
*** strigazi has joined #openstack-lbaas | 04:56 | |
*** rochapor1o has joined #openstack-lbaas | 04:56 | |
*** rochaporto has quit IRC | 04:57 | |
openstackgerrit | cheng proposed openstack/octavia master: Enable add debug account for ssh access https://review.openstack.org/464560 | 04:58 |
*** belharar has quit IRC | 04:58 | |
*** strigazi_ has quit IRC | 04:58 | |
*** ssmith has joined #openstack-lbaas | 05:16 | |
*** yuanying has quit IRC | 05:18 | |
*** yuanying has joined #openstack-lbaas | 05:19 | |
*** yuanying has quit IRC | 05:19 | |
*** yuanying has joined #openstack-lbaas | 05:20 | |
*** strigazi_ has joined #openstack-lbaas | 05:25 | |
*** rochaporto has joined #openstack-lbaas | 05:26 | |
*** rochapor1o has quit IRC | 05:27 | |
*** strigazi has quit IRC | 05:27 | |
*** gcheresh has joined #openstack-lbaas | 05:28 | |
*** rochaporto has quit IRC | 05:30 | |
*** strigazi has joined #openstack-lbaas | 05:30 | |
*** rochaporto has joined #openstack-lbaas | 05:31 | |
*** strigazi_ has quit IRC | 05:32 | |
*** links has quit IRC | 05:33 | |
*** links has joined #openstack-lbaas | 05:41 | |
*** harlowja has joined #openstack-lbaas | 05:47 | |
*** cpuga has quit IRC | 05:48 | |
*** belharar has joined #openstack-lbaas | 05:50 | |
*** belharar has quit IRC | 05:51 | |
*** belharar has joined #openstack-lbaas | 05:52 | |
*** yuanying has quit IRC | 06:06 | |
*** sbalukoff_ has quit IRC | 06:09 | |
*** sbalukoff_ has joined #openstack-lbaas | 06:10 | |
*** pcaruana has joined #openstack-lbaas | 06:22 | |
*** gongysh has joined #openstack-lbaas | 06:27 | |
*** krypto has quit IRC | 06:27 | |
*** krypto has joined #openstack-lbaas | 06:28 | |
*** krypto has quit IRC | 06:28 | |
*** krypto has joined #openstack-lbaas | 06:28 | |
*** rcernin has joined #openstack-lbaas | 06:32 | |
*** gongysh has quit IRC | 06:35 | |
*** cpuga has joined #openstack-lbaas | 06:49 | |
*** harlowja has quit IRC | 06:50 | |
*** cpuga has quit IRC | 06:53 | |
*** yuanying has joined #openstack-lbaas | 06:59 | |
*** ssmith has quit IRC | 07:02 | |
*** aojea has joined #openstack-lbaas | 07:20 | |
*** krypto has quit IRC | 07:40 | |
*** krypto has joined #openstack-lbaas | 07:41 | |
*** krypto has quit IRC | 07:46 | |
*** krypto has joined #openstack-lbaas | 07:47 | |
*** kobis has joined #openstack-lbaas | 08:38 | |
*** aojea has quit IRC | 09:03 | |
*** aojea has joined #openstack-lbaas | 09:04 | |
*** krypto has quit IRC | 09:04 | |
*** krypto has joined #openstack-lbaas | 09:04 | |
*** krypto has quit IRC | 09:04 | |
*** krypto has joined #openstack-lbaas | 09:04 | |
*** belharar has quit IRC | 09:27 | |
*** kobis has quit IRC | 09:41 | |
openstackgerrit | cheng proposed openstack/octavia master: Add allocate vip port when create loadbalancer in server side https://review.openstack.org/463289 | 09:44 |
*** cpuga has joined #openstack-lbaas | 10:51 | |
*** aojea has quit IRC | 10:53 | |
*** cpuga has quit IRC | 10:55 | |
*** reedip_ has joined #openstack-lbaas | 11:09 | |
*** atoth has joined #openstack-lbaas | 11:13 | |
*** cpuga has joined #openstack-lbaas | 12:17 | |
*** cpuga has quit IRC | 12:19 | |
*** cpuga has joined #openstack-lbaas | 12:19 | |
*** krypto has quit IRC | 12:54 | |
*** krypto has joined #openstack-lbaas | 12:55 | |
*** aojea has joined #openstack-lbaas | 13:21 | |
*** ssmith has joined #openstack-lbaas | 13:24 | |
*** aojea has quit IRC | 13:27 | |
*** rcernin has quit IRC | 13:48 | |
*** rcernin has joined #openstack-lbaas | 13:49 | |
*** aojea has joined #openstack-lbaas | 13:52 | |
*** leitan has joined #openstack-lbaas | 14:06 | |
leitan | Hi guys, quick question: im testing octavia but i need to give the devs an lbaas service asap, so im planning to deploy LBaaS v2 with the agent-based driver while in parallel i work on octavia | 14:07 |
leitan | can i have both service providers on neutron configured | 14:07 |
*** fnaval has joined #openstack-lbaas | 14:08 | |
leitan | with agent as default, and octavia as the other choice ? | 14:08 |
leitan | something like: service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default,LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver | 14:09 |
leitan | thats question one, and second: is there already a mechanism implemented to reschedule a lbaasv2 agent-mode loadbalancer onto another agent if agent1 dies ? | 14:11 |
*** armax has joined #openstack-lbaas | 14:11 | |
*** fnaval has quit IRC | 14:14 | |
*** fnaval has joined #openstack-lbaas | 14:14 | |
leitan | from what i see it has been implemented in ocata, is there any chance to backport it to mitaka ? | 14:15 |
leitan | or is there any "manual" option to reassign which agent the lbaas runs on, to manually reschedule on mitaka | 14:18 |
*** fnaval has quit IRC | 14:19 | |
*** KeithMnemonic has joined #openstack-lbaas | 14:28 | |
*** fnaval has joined #openstack-lbaas | 14:28 | |
nmagnezi | leitan, hi there | 14:35 |
leitan | nmagnezi: hi nir ! | 14:36 |
leitan | saw you working on this patch, so i think youre the man with the answer :P | 14:36 |
nmagnezi | leitan, as for both service_providers at once, I think it's not currently possible since you'd need two keystone endpoints. I know this is in the works since Octavia is about to become its own endpoint.. when it's done it will be possible. pinging rm_work and xgerman to keep me honest here. | 14:37 |
nmagnezi | leitan, as for reschedule, yup it is possible | 14:37 |
nmagnezi | leitan, indeed i coded that one :) | 14:37 |
nmagnezi | leitan, there's a single flag you'll need to flip https://review.openstack.org/#/c/299998/51/releasenotes/notes/auto_reschedule_load_balancers-c0bc69c6c550d7d2.yaml | 14:37 |
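For reference, a sketch of where that flag would sit on the neutron server; the option name below is assumed from that review's release note (auto_reschedule_load_balancers), so verify it against your release before relying on it:

    # neutron.conf on the neutron-server host (Ocata or later)
    [DEFAULT]
    # assumed option name: reschedule load balancers away from dead LBaaS v2 agents
    allow_automatic_lbaas_agent_failover = True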
leitan | yup saw the flag, but im running mitaka | 14:38 |
leitan | and i didnt see the code get to that release | 14:38 |
xgerman | ok, you can have both providers. I would make Octavia default - but I am biased ;-) | 14:38 |
nmagnezi | xgerman, lol | 14:38 |
nmagnezi | leitan, so good that i ping xgerman .. :) | 14:38 |
leitan | xgerman: thats great to hear, can i do that on mitaka also ? | 14:39 |
xgerman | yes | 14:39 |
leitan | like i stated up there | 14:39 |
leitan | great, is the syntax correct ? | 14:39 |
leitan | i will make octavia default once i get it working :P | 14:39 |
nmagnezi | leitan, and as for auto reschedule i'm sorry but it's only available starting stable/ocata | 14:39 |
xgerman | yep, syntax should work | 14:39 |
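For context, the two-provider setup being discussed would look roughly like this in the neutron server's LBaaS config, written as one service_provider line per driver; this is a sketch only, and (as the service_auth discussion later in the log shows) it has to land in a file neutron-server actually loads:

    [service_providers]
    service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
    service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver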
nmagnezi | leitan, and we cannot backport this.. it's against the backports policy to backport new features | 14:39 |
xgerman | you can also deploy master Octavia with mitaka IMHO if you have a way to isolate dependencies (e.g. run your control plane in a container) | 14:40 |
xgerman | but the Mitaka Octavia already works… most of the work for Newton was bug fixing… | 14:40 |
leitan | xgerman: im working with several doc pages to deploy octavia, i know you guys are working on the docs, so maybe i can help improve them as i go with my experience | 14:40 |
xgerman | that would be great!! | 14:41 |
nmagnezi | +1 | 14:41 |
nmagnezi | :) | 14:41 |
leitan | nmagnezi: ok, ill try to backport that myself just to offer "fake HA" until i get octavia working | 14:41 |
nmagnezi | leitan, why? | 14:41 |
leitan | cause i have to deliver LBaaS this week, and i dont think ill get octavia up and running by that time | 14:42 |
leitan | im using the https://docs.openstack.org/developer/octavia/guides/dev-quick-start.html | 14:42 |
leitan | but i found it pretty "generic", i dont know if there is any more detailed doc | 14:42 |
leitan | im combining it with https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html to configure everything | 14:43 |
leitan | and i think it would be great to actually create something like the metadata-proxy agent, so the management network for octavia can run on a vxlan for example | 14:43 |
leitan | cause i have a working cloud and i needed to create a new flat network, reconfigure ovs and restart all the agents to reach the amphorae from my controller | 14:44 |
*** KeithMnemonic has quit IRC | 14:44 | |
xgerman | we have OpenStack Ansible scripts which talk more about the network | 14:44 |
leitan | xgerman: yup, i surfed the ansible scripts to understand that correctly, and finally created that network to manage the amphora instances | 14:45 |
xgerman | yeah, the network is the hardest part | 14:46 |
leitan | but i think it would be great to have a "namespaced" way to talk to amphorae | 14:46 |
xgerman | mmh, that’s an interesting idea | 14:46 |
leitan | so you dont depend on your network administrators, and possibly you can also fully delegate administration of the amphorae to the owning tenant | 14:46 |
leitan | in the current model you depend on a management network that exists in your dc, and you probably dont want your fully isolated tenants to access it | 14:47 |
leitan | so its kinda hard to RBAC this setup | 14:47 |
xgerman | nah, the amphora are supposed to be a black box for the tenant. They are supposed to interact through the LBaaS V2 API | 14:48 |
xgerman | but I can see how the management net is cumbersome | 14:49 |
leitan | xgerman, ok i can totally give you that, but the mgmt net is a pain in the ass for a running production cloud | 14:49 |
leitan | even more if i cant dedicate new compute nodes to run amphorae and i have to touch the network config of the production ones | 14:50 |
leitan | like i faced last week | 14:50 |
leitan | something like octavia-ns-proxy, with an admin owned VXLAN network ... i think that will really ease the setup | 14:51 |
xgerman | yep, agreed — it might be best to file an RfE bug | 14:51 |
xgerman | I also know people who run the mgmt net on the public provider net | 14:52 |
xgerman | (rm_work does something like that) to avoid the extra mgmt net | 14:52 |
*** gcheresh has quit IRC | 14:53 | |
leitan | xgerman: i flirted with that idea but i found it stupid to "consume" addressing from my provider net for lb_mgmt purposes | 14:54 |
leitan | and to mix "functionalities" on networks | 14:55 |
leitan | xgerman: want me to file an RFE bug ? | 14:55 |
xgerman | yes | 14:56 |
leitan | i can totally do that if that makes sense to you | 14:56 |
xgerman | it makes sense — the big problem is always finding developers to actually do it… | 14:58 |
*** links has quit IRC | 14:58 | |
*** KeithMnemonic has joined #openstack-lbaas | 15:09 | |
rm_work | leitan: yeah, our default provider net is "private" so I just let nova schedule it to some network, figure out what the network is, and use that | 15:13 |
rm_work | everything is over HTTPS so I'm not super concerned about it | 15:13 |
rm_work | and the endpoints are authed | 15:13 |
rm_work | that way I avoid using a special management-net | 15:13 |
rm_work | i just merged the patch that makes that possible, actually | 15:13 |
rm_work | leitan: https://review.openstack.org/#/c/463660/ | 15:14 |
rm_work | leitan: and per what xgerman was saying, I'd recommend running Octavia as new as Ocata or Pike, even against an old cloud, as it is totally independent | 15:14 |
rm_work | We're running *master* of Octavia against a liberty cloud with no problems | 15:15 |
rm_work | there are definitely enough bugfixes and improvements to make it worth it | 15:15 |
rm_work | and if you go to *master*, you can deploy octavia with the same API as n-lbaas so it's seamless | 15:15 |
rm_work | brb | 15:15 |
*** reedip_ has quit IRC | 15:17 | |
leitan | rm_work: yeah, i will run the ocata octavia version actually, im using kolla containers for it, to make it easier | 15:17 |
leitan | xgerman nmagnezi rm_work i filed an RFE for what we were talking about for the lb-net https://bugs.launchpad.net/octavia/+bug/1691134 | 15:18 |
openstack | Launchpad bug 1691134 in octavia "[RFE] Implement a namespace-agent to manage amphorae via isolated/tunnel networks" [Undecided,New] | 15:18 |
*** reedip_ has joined #openstack-lbaas | 15:18 | |
leitan | so if you guys can take a look when you can, and let me know if its clear enough | 15:18 |
*** reedip_ has quit IRC | 15:19 | |
*** reedip_ has joined #openstack-lbaas | 15:25 | |
xgerman | k | 15:25 |
*** KeithMnemonic has quit IRC | 15:27 | |
*** reedip_ has quit IRC | 15:32 | |
xgerman | moved it to confirmed | 15:34 |
johnsom | Lb-mgmt can be routed, not flat. We use a simple neutron network (can be vxlan) in devstack. I am not understanding why you would need a proxy at all | 15:34 |
johnsom | The only hard part is getting the controller processes on the neutron network (which is owned by the octavia service account like the amphora vms BTW) | 15:36 |
xgerman | I assume a namespace proxy would avoid dealing with new networks… | 15:38 |
xgerman | I like that it's captured so we can discuss and see if it's worthwhile | 15:39 |
leitan | johnsom: you are right about the network being routed, but it turned out simpler for my use case to do a flat network, in order not to mess with our OSPF and wait years for a new network to get published | 15:41 |
*** reedip_ has joined #openstack-lbaas | 15:42 | |
leitan | johnsom and also we dont have a mechanism today to get the controllers into the vxlan network, since they are on an isolated dmz; with this "proxy" mechanism i dont need to change anything on my setup in order to introduce octavia into it | 15:45 |
*** aojea has quit IRC | 15:52 | |
*** gcheresh has joined #openstack-lbaas | 15:54 | |
*** gcheresh has quit IRC | 16:00 | |
*** blogan has joined #openstack-lbaas | 16:13 | |
*** ipsecguy has joined #openstack-lbaas | 16:32 | |
*** ipsecguy_ has quit IRC | 16:32 | |
*** pcaruana has quit IRC | 16:53 | |
*** rcernin has quit IRC | 16:53 | |
*** reedip_ has quit IRC | 16:56 | |
*** reedip has quit IRC | 17:02 | |
*** reedip has joined #openstack-lbaas | 17:09 | |
*** reedip_ has joined #openstack-lbaas | 17:15 | |
*** krypto has quit IRC | 17:38 | |
*** dasanind has quit IRC | 17:43 | |
*** amit213 has quit IRC | 17:43 | |
*** sindhu has quit IRC | 17:43 | |
*** amit213 has joined #openstack-lbaas | 17:45 | |
*** dasanind has joined #openstack-lbaas | 17:46 | |
*** sindhu has joined #openstack-lbaas | 17:47 | |
*** cpuga has quit IRC | 17:48 | |
*** harlowja has joined #openstack-lbaas | 17:50 | |
*** reedip has quit IRC | 17:53 | |
*** belharar has joined #openstack-lbaas | 18:02 | |
*** gcheresh has joined #openstack-lbaas | 18:14 | |
*** openstackgerrit has quit IRC | 18:17 | |
*** cpuga has joined #openstack-lbaas | 18:19 | |
*** cpuga_ has joined #openstack-lbaas | 18:20 | |
*** cpuga has quit IRC | 18:23 | |
*** belharar has quit IRC | 18:41 | |
*** leitan has quit IRC | 18:52 | |
*** aojea has joined #openstack-lbaas | 19:04 | |
*** leitan has joined #openstack-lbaas | 19:10 | |
leitan | guys, just trying to create my first octavia loadbalancer, and the neutron api keeps complaining that it is trying to reach keystone at 127.0.0.1, i have added auth_url to service_auth in neutron_lbaas.conf with no luck, am i missing something ? | 19:11 |
leitan | or will i need an octavia.conf also in /etc/neutron on the neutron server machine | 19:12 |
*** harlowja has quit IRC | 19:17 | |
leitan | im getting this: http://paste.openstack.org/show/609713/ | 19:29 |
xgerman | ok, there is a section in the lbaas/neutron configs where you need to add the keystone values | 19:33 |
xgerman | https://github.com/openstack/openstack-ansible-os_neutron/blob/master/templates/neutron.conf.j2#L191-L205 | 19:36 |
xgerman | it’s called service_auth | 19:36 |
leitan | ok, i have that section exactly like that but in the neutron_lbaas.conf file | 19:46 |
leitan | ill move that to neutron.conf | 19:46 |
leitan | and try again | 19:46 |
leitan | hmm, seems the trailing /v3 is important, im getting a 404 now | 19:54 |
*** armax has quit IRC | 19:58 | |
*** man_arab has joined #openstack-lbaas | 20:00 | |
xgerman | it should work in lbaas.conf so a bit surprised… | 20:01 |
leitan | xgerman: moved to neutron.conf and worked | 20:02 |
*** man_arab has left #openstack-lbaas | 20:02 | |
xgerman | yeah, surprising | 20:02 |
leitan | maybe by default neutron-server is not loading the neutron_lbaas.conf file in the systemd unit ? | 20:02 |
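For anyone hitting the same 127.0.0.1 keystone error: the section xgerman means is sketched below. Option names are assumptions taken from the OSA template linked above and may differ between releases; per leitan's experience it has to live in a file neutron-server really loads (neutron.conf here) and the auth_url needs the trailing /v3:

    # neutron.conf on the neutron-server host
    [service_auth]
    auth_url = http://KEYSTONE_HOST:5000/v3
    admin_user = admin
    admin_tenant_name = admin
    admin_password = SECRET
    admin_user_domain = default
    admin_project_domain = default
    auth_version = 3
    region = RegionOne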
leitan | xgerman: made it to the octavia api now :), but getting AttributeError: 'NoopAmphoraLoadBalancerDriver' object has no attribute 'get_vrrp_interface' | 20:03 |
xgerman | oh, there must still be some setting in octavia.conf off | 20:03 |
leitan | i think that too, ill recheck | 20:04 |
leitan | im using ACTIVE_STANDBY btw | 20:04 |
*** harlowja has joined #openstack-lbaas | 20:04 | |
leitan | hmm and thats why | 20:06 |
leitan | only the amphora_haproxy_rest_driver | 20:06 |
leitan | has get_vrrp_interface | 20:06 |
leitan | should i use the amphora_haproxy_rest_driver instead of the noop driver that seems to be the default | 20:10 |
leitan | xgerman ? | 20:10 |
xgerman | yes | 20:10 |
leitan | thanks | 20:12 |
*** aamerine has joined #openstack-lbaas | 20:46 | |
*** gcheresh has quit IRC | 20:48 | |
leitan | xgerman: this is curious: when i try to delete a lb thats in status ERROR because it never got created, it gets stuck in PENDING_DELETE and the octavia_api loops constantly over a GET /v1/loadbalancers/0f42ad66-fd4a-4f68-a735-2c55f4658db0 that returns a 404 (ok, because it doesnt exist), then after a few iterations it comes back to status ERROR | 21:09 |
xgerman | Could be a bug if you run master… | 21:11 |
xgerman | I haven’t seen that (yet | 21:11 |
*** JudeC has joined #openstack-lbaas | 21:19 | |
rm_work | ugh | 21:19 |
rm_work | was "get_vrrp_interface" part of my patch? | 21:20 |
rm_work | sec | 21:20 |
rm_work | it's missing on the noop? | 21:20 |
rm_work | crap | 21:21 |
rm_work | it's in the rest_api_driver but not actually the interface >_< wtf | 21:21 |
rm_work | ugh | 21:21 |
rm_work | wow this is messy | 21:22 |
rm_work | the amphora driver pulls in a Mixin for VRRP | 21:22 |
rm_work | but even THAT interface doesn't include "get_vrrp_interface" | 21:22 |
rm_work | this needs to be a bug | 21:22 |
rm_work | and get cleaned up | 21:23 |
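To make the gap concrete, here is a minimal, hypothetical sketch of the kind of cleanup being described: giving the noop amphora driver a harmless stub for the VRRP call that, per the discussion above, only exists on the HAProxy REST driver. The class name is from the traceback; the method signature is assumed, and rm_work's real fix is the review linked further down.

    # Hypothetical sketch only -- not Octavia's actual code.
    # The noop driver grows a stub so ACTIVE_STANDBY flows don't die with
    # AttributeError before reaching a real amphora driver.

    class NoopAmphoraLoadBalancerDriver(object):
        """No-op amphora driver: records calls, does no real work."""

        def get_vrrp_interface(self, amphora):
            # A real driver would ask the amphora agent which interface
            # carries VRRP traffic; the noop just returns a placeholder.
            return 'noop-vrrp-interface'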
*** eandersson has joined #openstack-lbaas | 21:23 | |
*** JudeC has quit IRC | 21:29 | |
leitan | rm_work: want me to file it ? | 21:30 |
rm_work | That would be excellent | 21:30 |
rm_work | since you know how to repro | 21:30 |
leitan | rm_work: right away | 21:30 |
rm_work | awesome, thanks :P | 21:30 |
leitan | i moved forward, and after solving the certificate issue i got to this: http://paste.openstack.org/show/609723/ | 21:31 |
rm_work | yeah really you need the rest_driver anyway | 21:32 |
rm_work | hmmm | 21:32 |
leitan | im with the rest driver now | 21:32 |
rm_work | something broke at a higher level | 21:33 |
rm_work | because that should always work | 21:33 |
leitan | vip address gets "allocated" | ee03a772-3576-468c-90af-5cf6da97bb2f | octavia-lb-test12 | 80.80.80.15 | ERROR | octavia | | 21:33 |
rm_work | the amphora doesn't have a valid network config | 21:33 |
leitan | hmm, probably its because i didnt define amp_boot_network_list | 21:36 |
rm_work | AHHH yes | 21:36 |
rm_work | you need that | 21:36 |
rm_work | unless you have my patch | 21:36 |
rm_work | which you won't yet | 21:36 |
leitan | i thought that it would extract it from the [networking] section | 21:36 |
leitan | so i need to also specify the management network in the boot_network_list | 21:37 |
rm_work | yes | 21:37 |
rm_work | err | 21:37 |
rm_work | hold on | 21:37 |
rm_work | maybe it will extract | 21:37 |
rm_work | let me look | 21:37 |
leitan | rm_work: ok | 21:39 |
leitan | rm_work: i thought that this is for "additional networks" that you want to plug in | 21:39 |
leitan | and the mgmt network got extracted from lb_network_name variable on [networking] | 21:40 |
rm_work | doesn't look like it extracts it from that | 21:40 |
rm_work | you need to specify | 21:40 |
rm_work | hmm | 21:40 |
*** ssmith has quit IRC | 21:41 | |
leitan | yes, checking the code and it seems it doesnt get used anywhere | 21:41 |
leitan | but still there on the config | 21:41 |
rm_work | I think it's not | 21:41 |
rm_work | ummmmmmm | 21:41 |
rm_work | yeeeeaaaahhhh | 21:42 |
rm_work | that's what I just noticed too | 21:42 |
rm_work | code doesn't use it | 21:42 |
rm_work | at all | 21:42 |
rm_work | ... bug? | 21:42 |
leitan | its just the CONF opt definition, and the same in the test cases | 21:42 |
rm_work | yes | 21:42 |
leitan | ill file a bug for that too | 21:42 |
rm_work | k | 21:42 |
rm_work | thanks :) | 21:42 |
leitan | no worries | 21:42 |
rm_work | let me know the bug ID :P | 21:43 |
*** JudeC has joined #openstack-lbaas | 21:44 | |
rm_work | and it looks like it was ACTUALLY supposed to be a name | 21:46 |
rm_work | whereas the boot list is subnet_ids | 21:46 |
leitan | rm_work: https://bugs.launchpad.net/octavia/+bug/1691286 | 21:48 |
openstack | Launchpad bug 1691286 in octavia "lb_network_name option defined on tests and CONF.getops but not used anywhere" [Undecided,New] | 21:48 |
rm_work | xgerman: so i'm about to make our HM system driver-able and write an etcd3 driver <_< | 21:48 |
*** openstackgerrit has joined #openstack-lbaas | 21:50 | |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Remove lb_network_name from config (it was bogus) https://review.openstack.org/465183 | 21:50 |
rm_work | leitan: alright, "fix" posted :P | 21:50 |
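For anyone following along, the option leitan ended up needing is sketched below; the UUID is a placeholder for the lb-mgmt network created earlier, and per the discussion (and the patch just linked) lb_network_name under [networking] was never read by the code:

    # octavia.conf
    [controller_worker]
    # Neutron network ID(s) to attach to each amphora at boot -- use the lb-mgmt network
    amp_boot_network_list = 00000000-0000-0000-0000-000000000000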
rm_work | wait was johnsom on? i thought he was on vacation | 21:51 |
leitan | rm_work: he was answering a couple of hours ago | 21:54 |
leitan | rm_work: also filed https://bugs.launchpad.net/octavia/+bug/1691287 | 21:54 |
openstack | Launchpad bug 1691287 in octavia "get_vrrp_instance missing on NoopAmphoraLoadBalancerDriver" [Undecided,New] | 21:54 |
rm_work | k | 21:54 |
rm_work | that's a little more involved, don't have time to tackle that *immediately* | 21:54 |
rm_work | actually... | 21:54 |
rm_work | well, OK yeah I do | 21:54 |
*** aojea has quit IRC | 21:55 | |
leitan | rm_work: :) | 21:56 |
leitan | rm_work: looking at the code i see amphorae_network_config.get(amphora.id).vip_subnet , i tried specifying network_id and also network-name, but ... should i specify the subnet NAME ? | 22:05 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: VRRP amphora_driver functions weren't handled by noop driver https://review.openstack.org/465185 | 22:05 |
rm_work | subnet_id | 22:05 |
rm_work | all subnets are referenced by id | 22:05 |
rm_work | which config var are you referring to? | 22:05 |
rm_work | amp_network_boot_list? | 22:05 |
leitan | yes | 22:06 |
leitan | cause it says # Networks to attach to the Amphorae examples: | 22:06 |
leitan | it doesnt say subnet_id anywhere | 22:06 |
leitan | so it may sound silly but its not self-explanatory | 22:06 |
rm_work | hmm no i think you're right | 22:06 |
rm_work | one sec | 22:06 |
leitan | ok | 22:06 |
rm_work | want to file a bug? | 22:06 |
rm_work | :P | 22:06 |
leitan | yes, so it should say "subnet_id" right ? | 22:07 |
rm_work | hold on actually | 22:07 |
rm_work | looking | 22:07 |
leitan | ok | 22:07 |
rm_work | hmmmmmmm | 22:07 |
rm_work | no this does look like network_ids | 22:07 |
rm_work | you specified the correct network_id and it didn't work? | 22:08 |
leitan | it didnt work | 22:08 |
rm_work | hmm | 22:08 |
rm_work | still the same error? | 22:08 |
leitan | but the curious thing is that its failing on the .get on the VIP_SUBNET | 22:09 |
leitan | subnet = amphorae_network_config.get(amphora.id).vip_subnet | 22:09 |
leitan | AttributeError: 'NoneType' object has no attribute 'get' | 22:09 |
rm_work | well it's because the .get(amphora.id) fails | 22:09 |
rm_work | so get() returns None | 22:09 |
leitan | yes thats clear | 22:10 |
leitan | but why vip_subnet ? | 22:10 |
rm_work | so amphorae_network_config doesn't have info for that amp | 22:10 |
rm_work | it's not really relevant | 22:10 |
leitan | vip subnet is not the tunneled net ? | 22:10 |
leitan | line 112, in post_vip_plug | 22:10 |
leitan | i guess then its not actually a vip ? or does it also use a vip for mgmt ? | 22:11 |
rm_work | so | 22:11 |
rm_work | this is getting the info to set up the VIP on the amphora | 22:11 |
leitan | yes | 22:11 |
rm_work | right | 22:11 |
rm_work | so it needs the subnet_id of the VIP | 22:11 |
rm_work | so it can fill the networking templates properly | 22:11 |
rm_work | to send to the amphora | 22:11 |
leitan | right, but the vip subnet is the one that youre passing as an argument to loadbalancer-create | 22:12 |
leitan | and as you can see here http://paste.openstack.org/show/609725/ | 22:13 |
leitan | its actually allocating it ok from neutron | 22:13 |
rm_work | yes, so you're saying it should just come from the LB? | 22:13 |
rm_work | instead of from the amp config | 22:13 |
leitan | rm_work: at least for post_vip_plug i guess so, yes | 22:14 |
rm_work | looking | 22:14 |
leitan | but if its using the same function to actually plug the mgmt network ... then we need to come back to the amp_network_boot_list | 22:15 |
leitan | that we were talking about | 22:15 |
rm_work | soooo | 22:15 |
rm_work | in the allowed_address_pairs driver | 22:15 |
rm_work | it comes directly from loadbalancer.vip.subnet_id | 22:16 |
rm_work | lol | 22:16 |
rm_work | OH uhhhhh | 22:16 |
rm_work | you replaced the amp driver | 22:16 |
rm_work | did you replace ALL of the drivers? | 22:16 |
rm_work | amphora_driver = amphora_haproxy_rest_driver | 22:16 |
rm_work | compute_driver = compute_nova_driver | 22:16 |
rm_work | network_driver = allowed_address_pairs_driver | 22:16 |
rm_work | you need to set ALL of those | 22:16 |
rm_work | >_> | 22:16 |
rm_work | or it won't work | 22:16 |
rm_work | though admittedly yes, it looks like there's no reason that should need to come from the amphora, and not the loadbalancer | 22:17 |
leitan | rm_work: i didnt change those | 22:17 |
rm_work | lol | 22:17 |
rm_work | ok yeah | 22:17 |
rm_work | you need all three | 22:18 |
rm_work | for it to DO anything | 22:18 |
rm_work | no wonder it isn't working :P | 22:18 |
rm_work | not sure what our docs mention about that... | 22:18 |
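Spelled out, the three driver options rm_work lists above sit together in octavia.conf under [controller_worker] (values exactly as given in the chat); all three noop defaults have to be replaced for anything real to happen:

    [controller_worker]
    amphora_driver = amphora_haproxy_rest_driver
    compute_driver = compute_nova_driver
    network_driver = allowed_address_pairs_driver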
leitan | now that you say it, it makes sense to change the noops there too, but ... its not actually clear | 22:18 |
rm_work | but still, I am not sure why this is pulled this way | 22:18 |
rm_work | when it comes directly from the loadbalancer | 22:18 |
leitan | rm_work: want me to file a bug to make that reference a little more self-explanatory | 22:19 |
rm_work | maybe it has to do with what object models we have access to at various points of taskflow | 22:19 |
rm_work | you can file one if you want :) | 22:19 |
leitan | rm_work: i think its actually related to taskflow | 22:19 |
leitan | ok ill do that, if youre not an octavia developer you can crash like me :P | 22:19 |
rm_work | yeah | 22:21 |
*** armax has joined #openstack-lbaas | 22:21 | |
leitan | rm_work: is it stupid to assume that if you set the amphora driver to the rest driver, the CONF option reacts to that and sets the compute and network drivers to nova and allowed address pairs ? | 22:22 |
rm_work | well | 22:22 |
leitan | or would that conflict with other users that use custom drivers there ? | 22:22 |
rm_work | it theoretically could be any combination | 22:22 |
rm_work | it's just that the other noops don't fill stuff | 22:22 |
rm_work | so if you used some other network driver | 22:22 |
rm_work | ... | 22:22 |
rm_work | it just happens there arent others at the moment :P | 22:22 |
rm_work | so if you had like | 22:22 |
rm_work | container_compute_driver and opendaylight_network_driver | 22:22 |
leitan | understood, ill file a bug just so the conf file is more specific if youre changing this | 22:23 |
rm_work | that would work too | 22:23 |
rm_work | I think it's a doc problem | 22:23 |
leitan | yes | 22:23 |
rm_work | more than a conf problem | 22:23 |
rm_work | i mean i don't think we need a whole set of docs INSIDE our conf file | 22:23 |
rm_work | but honestly the real fix is that the noop drivers SHOULD set things up in a way that will work | 22:23 |
rm_work | though | 22:23 |
rm_work | it still wouldn't *work* work | 22:23 |
rm_work | but it shouldn't *crash* at that stage like that | 22:23 |
leitan | rm_work: so does it make more sense to file a bug in that direction, or more like a doc bug first | 22:24 |
leitan | ill file both | 22:24 |
leitan | in order to "handle" this scenario and update the docs | 22:25 |
leitan | rm_work: now im hitting MissingAuthPlugin: An auth plugin is required to determine endpoint URL | 22:27 |
leitan | seems that i need to configure something somewhere else | 22:27 |
rm_work | mmm | 22:27 |
leitan | should i fill in the [neutron] section in octavia.conf ? or does it automatically get it from keystone ? | 22:28 |
rm_work | yeah you're missing something in config | 22:29 |
rm_work | nooo | 22:29 |
rm_work | you need to fill it | 22:29 |
rm_work | that's HOW it gets it from keystone | 22:29 |
rm_work | you don't need `endpoint` | 22:29 |
rm_work | but you need `region_name` | 22:29 |
leitan | service_name, region and endpoint_type | 22:29 |
rm_work | yes | 22:30 |
rm_work | for nova/neutron/glance | 22:30 |
leitan | i guess it will be the same for [nova] and glance | 22:30 |
rm_work | yep | 22:30 |
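A sketch of what that looks like in octavia.conf, using the option names from the chat (region_name, endpoint_type, service_name); the values are deployment-specific placeholders, and service_name/endpoint_type only matter when your catalog entries are non-standard:

    [neutron]
    region_name = RegionOne        # whatever your keystone region is called
    # endpoint_type = publicURL    # pick the endpoint interface you actually expose
    # service_name = neutron       # only if your catalog uses a custom name

    [nova]
    region_name = RegionOne

    [glance]
    region_name = RegionOne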
leitan | ok, since im specifying an admin account on the keystone_authtoken section | 22:30 |
rm_work | yes | 22:31 |
leitan | why doesnt it do this discovery | 22:31 |
leitan | automatically ? | 22:31 |
rm_work | err | 22:31 |
leitan | just wondering | 22:31 |
rm_work | it does discover the endpoints | 22:31 |
rm_work | but | 22:31 |
rm_work | how do we know what your nova is? | 22:31 |
rm_work | to be fair, there could be better defaults | 22:31 |
rm_work | but like, in our environments, those are custom names | 22:31 |
leitan | ok, i understand | 22:31 |
leitan | like nova, is not "compute" | 22:31 |
rm_work | region is always custom | 22:31 |
rm_work | it can be | 22:31 |
rm_work | but we have others | 22:31 |
leitan | like 80% of the deploys :P | 22:32 |
rm_work | I think "compute" would be a better default than BLANK though | 22:32 |
rm_work | but, region -- there is no good default | 22:32 |
rm_work | you need to set that | 22:32 |
rm_work | everyone uses something else | 22:32 |
leitan | yes, referenced in the docs, and also as the default if you leave it uncommented i guess | 22:32 |
rm_work | ah hmm | 22:32 |
rm_work | maybe there is a default region? | 22:32 |
rm_work | i don't know | 22:32 |
rm_work | i think there isn't in our deploy, but maybe | 22:33 |
leitan | in the code it's "None" but the default in every other openstack conf file is RegionOne | 22:33 |
leitan | gotta go, ill file the doc guidance bugs from home | 22:34 |
rm_work | kk | 22:34 |
rm_work | RegionOne is devstack | 22:34 |
leitan | thanks for the support rm_work i guess tomorrow ill be bothering you again | 22:34 |
rm_work | but yeah we COULD default it if there is no such thing as a "default keystone region" | 22:34 |
rm_work | I may be out tomorrow | 22:34 |
rm_work | and the next day actually :( | 22:35 |
rm_work | but I can try to respond if i am around | 22:35 |
leitan | rm_work: yes, devstack and every doc guide; when you create the services they do it with RegionOne | 22:35 |
leitan | rm_work: thanks | 22:35 |
rm_work | kk | 22:35 |
rm_work | if there is such a thing as a keystone default region | 22:35 |
rm_work | then we don't want to change it | 22:35 |
rm_work | but if there isn't, RegionOne makes sense | 22:35 |
leitan | rm_work: like this for example https://docs.openstack.org/mitaka/install-guide-rdo/nova-controller-install.html | 22:36 |
leitan | these also: https://docs.openstack.org/newton/install-guide-ubuntu/nova-controller-install.html | 22:36 |
leitan | and so on for the other core services | 22:36 |
*** leitan has quit IRC | 22:41 | |
*** fnaval has quit IRC | 22:49 | |
*** fnaval has joined #openstack-lbaas | 22:57 | |
*** cpuga_ has quit IRC | 23:00 | |
openstackgerrit | Jude Cross proposed openstack/python-octaviaclient master: Add member commands to client https://review.openstack.org/463035 | 23:07 |
*** aamerine has quit IRC | 23:28 | |
*** jmccrory is now known as jmccrory_awaythi | 23:30 | |
*** jmccrory_awaythi is now known as jmccrory_away | 23:30 |