*** harlowja has quit IRC | 00:34 | |
*** yamamoto has joined #openstack-lbaas | 00:58 | |
*** hongbin has joined #openstack-lbaas | 01:01 | |
*** abaindur has joined #openstack-lbaas | 01:28 | |
abaindur | johnsom: hey, i've been testing failover of amphora | 01:28 |
abaindur | we dont have a spares pool | 01:28 |
abaindur | but is there a reason we dont first spin up the new amphora, then only when its ready, we cut over the VIP to the new amphora and delete the old? | 01:29 |
abaindur | why is the old one deleted first, then the new one spun up? That process leads to considerable downtime. This is Queens, and we are not running ACTIVE-ACTIVE | 01:30 |
abaindur | maybe i'm missing something here, but shouldn't it be possible to spin up the new one first? Moving the VIP over should be similar to migrating a floating IP - it would just send out a GARP | 01:31 |
*** yamamoto has quit IRC | 01:49 | |
*** ramishra has quit IRC | 01:54 | |
*** Emine has quit IRC | 01:55 | |
*** ramishra has joined #openstack-lbaas | 02:28 | |
*** abaindur has quit IRC | 02:41 | |
*** abaindur has joined #openstack-lbaas | 02:42 | |
*** abaindur has quit IRC | 02:47 | |
*** abaindur has joined #openstack-lbaas | 02:50 | |
*** abaindur has quit IRC | 03:00 | |
*** ramishra has quit IRC | 03:29 | |
johnsom | There are actually comments in the code about this. The delete first is in place to attempt to make it possible to failover a failed amphora if the cloud is resource constrained. Take a look at the comments in the code for more details. | 03:47 |
*** ramishra has joined #openstack-lbaas | 04:06 | |
*** abaindur has joined #openstack-lbaas | 04:27 | |
*** yamamoto has joined #openstack-lbaas | 04:54 | |
*** hongbin has quit IRC | 05:12 | |
*** ramishra has quit IRC | 05:20 | |
*** ramishra has joined #openstack-lbaas | 05:37 | |
*** yboaron has quit IRC | 05:38 | |
*** yamamoto has quit IRC | 05:51 | |
*** ivve has quit IRC | 06:02 | |
*** ivve has joined #openstack-lbaas | 06:14 | |
*** pck has quit IRC | 06:25 | |
*** pck has joined #openstack-lbaas | 06:30 | |
*** abaindur has quit IRC | 06:39 | |
*** abaindur has joined #openstack-lbaas | 06:39 | |
*** rcernin has quit IRC | 07:00 | |
*** luksky has joined #openstack-lbaas | 07:04 | |
*** yamamoto has joined #openstack-lbaas | 07:14 | |
*** pcaruana has joined #openstack-lbaas | 07:19 | |
*** velizarx has joined #openstack-lbaas | 07:19 | |
lxkong | johnsom: hi, we ran into an issue with batch pool member updates https://storyboard.openstack.org/#!/story/2003608, i'm not sure if it's a known issue | 07:29 |
openstackgerrit | Reedip proposed openstack/octavia-tempest-plugin master: [DNM] Modify Member tests for Provider Drivers https://review.openstack.org/598476 | 07:40 |
*** abaindur has quit IRC | 07:41 | |
*** abaindur has joined #openstack-lbaas | 07:42 | |
*** celebdor has joined #openstack-lbaas | 07:45 | |
*** luksky has quit IRC | 07:58 | |
*** luksky has joined #openstack-lbaas | 08:28 | |
*** abaindur has quit IRC | 08:44 | |
*** abaindur has joined #openstack-lbaas | 08:45 | |
*** velizarx has quit IRC | 09:00 | |
*** velizarx_ has joined #openstack-lbaas | 09:00 | |
*** ramishra has quit IRC | 09:13 | |
*** yamamoto has quit IRC | 09:38 | |
*** ramishra has joined #openstack-lbaas | 09:48 | |
*** bzhao__ has joined #openstack-lbaas | 09:53 | |
-openstackstatus- NOTICE: Jobs using devstack-gate (legacy devstack jobs) have been failing due to an ara update. We use now a newer ansible version, it's safe to recheck if you see "ImportError: No module named manager" in the logs. | 09:57 | |
*** ramishra has quit IRC | 10:27 | |
*** ramishra has joined #openstack-lbaas | 10:28 | |
*** cgoncalves has quit IRC | 10:48 | |
*** cgoncalves has joined #openstack-lbaas | 10:50 | |
*** ramishra has quit IRC | 11:58 | |
*** velizarx_ has quit IRC | 12:07 | |
*** velizarx has joined #openstack-lbaas | 12:14 | |
jiteka | Hello, I got a question regarding the CLI | 12:59 |
jiteka | Do you think it could be possible to add a feature --set to loadbalancer to force change the status when a LB is stuck in PENDING_CREATE ? | 12:59 |
jiteka | Right now, the only way to get rid of them with delete --cascade (to avoid deleting objects manually one by one: amphora VM, server group, security group, neutron port) | 12:59 |
jiteka | is to go into the Octavia DB and perform that switch with an UPDATE SQL query | 12:59 |
jiteka | mysql> update load_balancer set provisioning_status = "ERROR" where provisioning_status = "PENDING_CREATE"; | 12:59 |
jiteka | or | 12:59 |
jiteka | update load_balancer set provisioning_status = 'ERROR' where id = 'loadbalancer id'; | 12:59 |
jiteka | maybe something similar to this : | 13:02 |
jiteka | https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/server.html#server-set | 13:02 |
jiteka | openstack loadbalancer set --state ERROR <loadbalancer uuid> | 13:02 |
cgoncalves | jiteka, I'd expect LBs in PENDING_CREATE to eventually go to ERROR after a timeout. have you waited long enough? | 13:09 |
cgoncalves | like 25 minutes, I think is the default | 13:09 |
*** yamamoto has joined #openstack-lbaas | 13:12 | |
*** salmankhan has joined #openstack-lbaas | 13:23 | |
*** salmankhan has quit IRC | 13:27 | |
velizarx | Hi folks. Where can I find documentation about what kinds of notifications Octavia sends about itself to AMQP (like in nova, where I can receive a notification such as compute.instance.update)? Does Octavia send notifications? | 13:29 |
cgoncalves | velizarx, there are only two event streamers available: EventStreamerNoop (which is a no-op driver) and EventStreamerNeutron | 13:38 |
cgoncalves | EventStreamerNeutron is not recommended due to known issues | 13:39 |
cgoncalves | http://git.openstack.org/cgit/openstack/octavia/tree/octavia/controller/queue/event_queue.py#n53 | 13:39 |
cgoncalves | you can find a description of what it does in ^ | 13:40 |
*** luksky has quit IRC | 13:40 | |
*** yamamoto has quit IRC | 13:43 | |
velizarx | This is not exactly what I was asking. That functionality exists for integrating Octavia with Neutron's LBaaS. I was asking about a general notification mechanism. For example, I want my third-party application to know when a load balancer is created. Yes, the EventStreamerNeutron class could help me, but it's a little wrong. | 13:51 |
velizarx | cgoncalves, Generally, you already answered my question, thank you | 13:52 |
cgoncalves | velizarx, my answer to you was, in short: there's no event streamer like nova and neutron have, no | 13:53 |
cgoncalves | :) | 13:53 |
johnsom | Right, we have not implemented notifications in Octavia yet | 14:00 |
velizarx | cgoncalves, One more question, maybe you know: if I want to add this functionality to Octavia, is the best way to use the 'event_stream_topic' option and add a new EventStreamer type class? Or to create separate code to work with these events? | 14:05 |
cgoncalves | velizarx, yes, a new EventStreamerBase type should be good | 14:06 |
cgoncalves | johnsom, would you agree? | 14:06 |
johnsom | New code | 14:06 |
johnsom | The eventstreamer code is planned to be removed and is poorly implemented | 14:07 |
cgoncalves | johnsom, the EventStreamerBase altogether or EventStreamerNeutron? | 14:07 |
cgoncalves | I'd agree with removing EventStreamerNeutron, sure | 14:07 |
johnsom | If we add notifications I would expect it to look more like the notifications nova and neutron send | 14:07 |
cgoncalves | the code base would be similar to what exists in EventStreamerNeutron, in the sense of leveraging oslo.messaging | 14:09 |
cgoncalves | hmm versioned notifications? :D | 14:10 |
johnsom | I would recommend not using any of the existing code and targeting around optional “notifications”. I am not even sure the existing code is in the right file locations in our repo | 14:11 |
johnsom | It was slapped in and never should have been approved IMO | 14:11 |
johnsom | Thus, marked for deletion | 14:12 |
cgoncalves | I'm okay with removing it all and then start from scratch | 14:12 |
velizarx | johnsom, Ok, understood. I'll write new functionality; the code will use the 'octavia' exchange and the 'notifications' topic, like nova and neutron | 14:14 |
johnsom | Key parts I would like to see in a notification RFE/spec are: how it aligns to other services, that it is optionally turned on, how we make sure the messages have a ttl, how we make sure rabbit issues don’t impact the existing code, who/what the use case is, and that the queue/transport settings are unique for the notifications | 14:15 |
johnsom | The default can be the existing queue, but it should be configurable | 14:16 |
velizarx | Sure, it's the same functionality in nova for example | 14:17 |
johnsom | Ok, sounds good | 14:19 |
xgerman_ | +1 yep, completely new is best | 14:23 |
xgerman_ | if nova, neutron, etc. are doing it shouldn’t there be an Oslo_notifications? | 14:23 |
velizarx | xgerman_, no, this functionality is in the Notifier class of oslo_messaging | 14:30 |
xgerman_ | yeah, shouldn’t be hard to implement, just add something to our flows | 14:31 |
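For anyone following along, the notification velizarx describes would carry an envelope roughly like the one below. This is a pure-Python sketch of the shape only: the event-type naming and field names are assumptions modeled on nova's `<service>.<resource>.<action>.<phase>` convention, not an actual Octavia format, since the feature does not exist yet.

```python
from datetime import datetime, timezone

def build_notification(resource, action, phase, payload):
    """Assemble a notification envelope shaped like what oslo_messaging's
    Notifier emits. All field names here are illustrative, not a wire format."""
    return {
        'event_type': 'octavia.%s.%s.%s' % (resource, action, phase),
        'publisher_id': 'octavia.controller',  # hypothetical publisher id
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'payload': payload,
    }

# e.g. what might be emitted at the end of a load balancer create flow
msg = build_notification('loadbalancer', 'create', 'end',
                         {'id': 'lb-uuid', 'provisioning_status': 'ACTIVE'})
```

In a real implementation the dict would not be built by hand: `oslo_messaging.Notifier.info(ctxt, event_type, payload)` produces the envelope, which is why hooking a call into the existing flows, as xgerman_ suggests, is the straightforward route.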
*** celebdor has quit IRC | 14:33 | |
*** velizarx has quit IRC | 14:34 | |
*** celebdor has joined #openstack-lbaas | 14:35 | |
*** reedipb has joined #openstack-lbaas | 14:52 | |
*** celebdor has quit IRC | 14:58 | |
reedipb | johnsom : ping | 15:03 |
johnsom | reedipb Heloo | 15:04 |
johnsom | hello | 15:04 |
johnsom | Ugh, just made my morning coffee | 15:04 |
reedipb | hi johnsom : Do you have a timing + state diagram for Octavia ? I was searching for it but got links to 404 | 15:05 |
johnsom | reedipb Maybe these: http://logs.openstack.org/35/598235/2/check/openstack-tox-docs/0372d8b/html/contributor/devref/flows.html | 15:06 |
johnsom | We had to disable them temporarily due to a python package dependency issue, but I have a patch to re-enable it | 15:07 |
reedipb | ok, thanks | 15:08 |
reedipb | johnsom : sorry to disturb your morning covfefe | 15:13 |
johnsom | Ha, no worries | 15:13 |
reedipb | johnsom : but I was wondering why a member goes from Active to No_Monitor after the Update has occurred https://github.com/openstack/octavia-tempest-plugin/blob/master/octavia_tempest_plugin/tests/api/v2/test_member.py#L635 | 15:14 |
johnsom | Hmm, well, it should update on first health message from the amphora. "NO_MONITOR" means there is no health monitor configured on the pool, so we assume it is ONLINE blindly. | 15:15 |
johnsom | ACTIVE is a provisioning status, not an operating status like NO_MONITOR | 15:16 |
johnsom | Those are two very different status fields. | 15:16 |
reedipb | oh, yeah, sorry .. Wrong word ACTIVE, I meant online :P | 15:16 |
johnsom | Operating status is what we observe from the load balancer, so for example what haproxy is stating the status is | 15:16 |
reedipb | johnsom : so in case there is no HealthMonitor Configured/ HealthMonitor Supported, we should assume it to be online blindly, right? So I guess I need to change this https://review.openstack.org/#/c/598476/ for the same | 15:18 |
johnsom | Right, if it has no health monitor, we state that in the operating status as NO_MONITOR and assume the members are healthy. | 15:19 |
reedipb | johnsom : So provider drivers not supporting HMs should never update the Operating Status to Online, is it ? | 15:20 |
reedipb | IIUC ^^ | 15:20 |
reedipb | because if they do not support HMs, then there is no way the operating status (for members, e.g.) can be made Online | 15:21 |
johnsom | Correct, with no health monitor we can't really make a call of if it is healthy or not. | 15:21 |
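The rule johnsom describes can be sketched in a few lines of Python. This is a deliberate simplification for illustration; the real decision lives in Octavia's controller and health manager and covers more statuses (the API reference also lists DEGRADED, DRAINING, and OFFLINE).

```python
def member_operating_status(has_health_monitor, observed_healthy):
    """Operating status of a pool member, per the discussion above."""
    if not has_health_monitor:
        # No health monitor on the pool: health cannot be observed, so
        # report NO_MONITOR and blindly assume the member is up.
        return 'NO_MONITOR'
    # With a monitor, report what was actually observed.
    return 'ONLINE' if observed_healthy else 'ERROR'
```

This is also why admin_state_up alone never yields ONLINE: being administratively enabled says nothing about whether anything is listening on the port.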
tobias-urdin | o/ I'm super confused by the documentation on how I should issue CAs, could somebody clear things up for me? so I have one root private key (ca_private_key) two CAs client_ca.pem (client_ca), server_ca.pem (ca_certificate, server_ca) then I issue my client_cert from the client_ca? | 15:21 |
tobias-urdin | (production deployment btw) | 15:21 |
johnsom | reedipb I have tried to capture some of this logic in the API guide: https://developer.openstack.org/api-ref/load-balancer/v2/index.html#status-codes | 15:22 |
johnsom | tobias-urdin One minute, I will find a recent chat log where we helped another user with that | 15:23 |
xgerman_ | tobias-urdin: yep, client-cert from client-ca | 15:24 |
reedipb | johnsom : thats right, but the confusion comes, when you have admin_state_up=True ( means it is administratively enabled ), there is no Health Monitor, and all the entities are healthy :) | 15:24 |
reedipb | xgerman_ 0/ | 15:24 |
johnsom | tobias-urdin Try reading from here down, it will likely answer your questions. If not, please feel free to come back here: http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas/%23openstack-lbaas.2018-08-27.log.html#t2018-08-27T19:15:34 | 15:25 |
tobias-urdin | johnsom: xgerman_ thanks! | 15:25 |
johnsom | reedipb Just because it is admin_state_up doesn't mean there is anything listening on the port | 15:25 |
reedipb | johnsom : so the priority of the Health Monitor is the highest... :) | 15:26 |
johnsom | With no monitor we can't know if the application is working or not, so assume so and report that we don't know to the user | 15:26 |
*** pcaruana has quit IRC | 15:26 | |
reedipb | Ideally we should only have an operating status of NO_MONITOR if the HM is not configurable for the provider driver. It's up to the driver itself whether it can handle the call or not | 15:27 |
reedipb | johnsom : ^^ | 15:27 |
johnsom | reedipb We do have an operating status of "NO_MONITOR".... | 15:28 |
reedipb | johnsom : yeah, thats what I said, I think :) | 15:28 |
johnsom | Ha, ok. I was confused there.... (see above first cup of coffee) | 15:29 |
reedipb | go ahead, enjoy your extended weekend .. I will log out now .. :D :) Great discussing with you :) | 15:29 |
tobias-urdin | johnsom: if you wouldn't mind can you verify my logic is not completely off and it's what I want to do, would really really appreciate it :) | 16:02 |
tobias-urdin | https://review.openstack.org/#/c/599018/ | 16:02 |
tobias-urdin | (the commit message describes the logic changes) | 16:02 |
*** openstackgerrit has quit IRC | 16:06 | |
johnsom | tobias-urdin I'm not very familiar with the puppet setup, but I think what you describe makes sense. | 16:06 |
tobias-urdin | johnsom: ok, thanks. just wanted to verify my logic that ca_certificate/server_ca and client_ca should be separated in a production deployment | 16:07 |
johnsom | Yes, that is a best practice | 16:07 |
tobias-urdin | awesome, thanks for verifying, have a great weekend everyone and thanks for the help o/ | 16:07 |
* cgoncalves should pay more attention to 'reply all' vs 'reply' | 16:43 | |
johnsom | At least you got his name right.... sigh | 16:44 |
cgoncalves | lol! | 16:44 |
johnsom | Apologies again. | 16:45 |
cgoncalves | dayou will soon have +4 powers | 16:45 |
*** Swami has joined #openstack-lbaas | 16:48 | |
*** openstackgerrit has joined #openstack-lbaas | 17:34 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Set some amphora driver optimizations https://review.openstack.org/598379 | 17:34 |
xgerman_ | lol | 17:57 |
johnsom | cgoncalves All official now, welcome to the core team! | 17:59 |
colin- | congrats | 18:03 |
*** luksky has joined #openstack-lbaas | 18:08 | |
cgoncalves | thank you, all! | 18:18 |
cgoncalves | I'm officially open to trading +2's for beer | 18:18 |
* johnsom quickly starts reverting the permissions | 18:18 | |
cgoncalves | heh :) | 18:19 |
johnsom | We at least ask for a bottle of whiskey here.... | 18:20 |
xgerman_ | per person | 18:23 |
xgerman_ | just tell assay you need a bigger expense account because you will be buying in Denver :-) | 18:23 |
xgerman_ | also make sure we are not left out of the Red Hat party - I vividly remember the bus in Dublin | 18:24 |
cgoncalves | which bus?! :) | 18:26 |
rm_work | we /all/ saw the bus | 18:28 |
rm_work | though to be fair, I had a perfectly good dinner elsewhere :P | 18:29 |
rm_work | with mugsie IIRC | 18:29 |
rm_work | and may also have been half passed-out already | 18:29 |
colby_ | Hey Guys. I got barbican up and working with dogtag. Are there any guides on integrating barbican with octavia? | 18:30 |
cgoncalves | colby_, Octavia can talk to Barbican to retrieve secrets (certificates, etc) for configuring TLS-terminated listeners: https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html#deploy-a-tls-terminated-https-load-balancer | 18:33 |
colby_ | I meant how do I enable barbican on octavia. What configs need to change etc. Im on pike | 18:36 |
colby_ | Im guessing [certificates]cert_manager,cert_generator,barbican_auth,service_name | 18:38 |
cgoncalves | ah, sorry. cert_generator=barbican_cert_generator | 18:41 |
cgoncalves | I *think* that should be enough with rest of default configs | 18:41 |
colby_ | ok great just wanted to make sure there was not anything else needed. Thanks | 18:42 |
cgoncalves | oops, ignore cert_generator. there's no barbican there | 18:42 |
cgoncalves | cert_manager=barbican_cert_manager | 18:43 |
cgoncalves | which is the default actually :) | 18:43 |
colby_ | hah ok. | 18:45 |
xgerman_ | yep, cert_manager | 18:51 |
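Pulling the thread above together, the Pike-era config colby_ is after would look something like this sketch. Only cert_manager is confirmed in this conversation; treat the rest of the layout as an assumption and check it against your release's octavia.conf sample.

```ini
# octavia.conf -- sketch based on the discussion above
[certificates]
# Default per cgoncalves; retrieves TLS secrets from Barbican.
cert_manager = barbican_cert_manager
```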
xgerman_ | we used to have anchor as a cert generator but that is now on the growing heap of stale openstack projects | 18:52 |
xgerman_ | colby_: there is some dance with permissions on the barbican container (stores the certs) needed + the barbican client only targets public endpoints, so depending on your cloud that might be problematic | 18:53 |
colby_ | yea I have the public endpoint set up. I was going to use endpoint_type to point to internal but sounds like that wont work | 18:54 |
cgoncalves | also in pike users need to explicitly add the octavia uuid to the secret acl | 18:54 |
cgoncalves | starting from Rocky, Octavia can handle that for the user: https://review.openstack.org/#/c/552549/ | 18:55 |
colby_ | ug ok...is that the case for the horizon interface too? I have not looked at that yet but feel our users would be more likely to use that (except in the case of magnum container clusters) | 18:55 |
*** openstackstatus has quit IRC | 18:58 | |
cgoncalves | colby_, you, as cloud admin, would need to inform your users of octavia uuid so that they could go to the CLI/Horizon (never tried via Horizon) and add that uuid to the ACL | 18:59 |
cgoncalves | compare latest page ^ with https://docs.openstack.org/octavia/queens/user/guides/basic-cookbook.html#deploy-a-tls-terminated-https-load-balancer | 19:00 |
cgoncalves | compare the command example and you'll see what I'm talking about | 19:00 |
colby_ | yea I see that. I looked through the barbican acl controls and see the commands to setup that for octavia in the cookbook. Thanks for that tip. Im guessing it would be less secure (but less hassle to users) to allow the octavia user access to secrets via policy files in barbican but then that is full access to all secrets so probably a bad idea. | 19:03 |
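The ACL step cgoncalves mentions is a one-liner per secret with the python-barbicanclient OSC plugin. These commands are illustrative only (they need a live cloud, and the UUID/href values are placeholders):

```shell
# As the tenant owning the certificate secret/container, grant the
# Octavia service user read access (manual in Pike; Rocky automates this):
openstack acl user add -u <octavia-service-user-uuid> <secret-or-container-href>

# Verify the ACL took effect:
openstack acl get <secret-or-container-href>
```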
cgoncalves | yeah | 19:15 |
*** openstackstatus has joined #openstack-lbaas | 19:38 | |
*** ChanServ sets mode: +v openstackstatus | 19:38 | |
*** openstackstatus has quit IRC | 19:55 | |
*** openstackstatus has joined #openstack-lbaas | 19:58 | |
*** ChanServ sets mode: +v openstackstatus | 19:58 | |
*** openstackstatus has quit IRC | 20:11 | |
*** openstackstatus has joined #openstack-lbaas | 20:12 | |
*** ChanServ sets mode: +v openstackstatus | 20:12 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Update amphora-agent to report UDP listener health https://review.openstack.org/598192 | 20:35 |
*** openstackstatus has quit IRC | 20:36 | |
*** openstackstatus has joined #openstack-lbaas | 20:37 | |
*** ChanServ sets mode: +v openstackstatus | 20:37 | |
*** harlowja has joined #openstack-lbaas | 20:46 | |
*** luksky has quit IRC | 21:00 | |
xgerman_ | cgoncalves: what only +1? use your new +2 powers :-) | 21:11 |
cgoncalves | xgerman_, I could ask you the same :P | 21:11 |
xgerman_ | I was skeptical about that parameter :-) | 21:11 |
cgoncalves | I wanted to know if you agree based on my last comment (keepalived killed by user) | 21:12 |
xgerman_ | I think if somebody logs into an amp and kill keepalived he is on his own | 21:16 |
johnsom | I don't even know what you guys are talking about.... | 21:23 |
johnsom | Oh, got it | 21:23 |
johnsom | Ha, yeah, if the user is in the amp.... you are already in trouble. That said, I have been sitting on that as I needed to do more research to vote | 21:25 |
cgoncalves | I am okay with either restart option, except for one thing where restart=no would be better: situations where keepalived keeps exiting, hence restarting infinitely | 21:30 |
johnsom | Right, a max restart attempts is usually a good idea | 21:30 |
xgerman_ | +1 | 21:33 |
xgerman_ | but it said it would already do that by itself (on the Interweb) | 21:33 |
xgerman_ | johnsom: what’s your take on https://review.openstack.org/#/c/587505/ — do we need to expose the new state in the Docs? | 21:34 |
johnsom | Yes we would as amphora is now exposed via the API. Still don't understand why we need that state, but haven't reviewed yet | 21:36 |
xgerman_ | well, unless you want to fire off deletes right when you get the message you need a way to transfer that | 21:37 |
johnsom | Commented, not sure about the API version bump, but it does need to be documented. | 21:43 |
xgerman_ | Ok, will add it to the docs then | 21:44 |
openstackgerrit | Michael Johnson proposed openstack/octavia-tempest-plugin master: Fix the readme example https://review.openstack.org/599086 | 22:19 |
*** harlowja has quit IRC | 22:24 | |
openstackgerrit | Merged openstack/octavia master: Re-enable flow diagrams https://review.openstack.org/598235 | 22:28 |
*** Swami has quit IRC | 23:08 | |
openstackgerrit | German Eichberger proposed openstack/octavia master: Delete zombie amphora when detected https://review.openstack.org/587505 | 23:50 |
xgerman_ | ok, added the one word to docs... | 23:50 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!