Friday, 2018-08-31

01:28 <abaindur> johnsom: hey, I've been testing failover of amphorae
01:28 <abaindur> we don't have a spares pool
01:29 <abaindur> but is there a reason we don't first spin up the new amphora, and only when it's ready, cut the VIP over to the new amphora and delete the old?
01:30 <abaindur> why is the old one deleted first, then the new one spun up? That process leads to considerable downtime. This is Queens, and we are not running ACTIVE-ACTIVE
01:31 <abaindur> maybe I'm missing something here, but shouldn't it be possible to spin up the new one first? Moving the VIP over should be similar to migrating a floating IP - it would just send out a GARP
03:47 <johnsom> There are actually comments in the code about this. The delete-first order is in place so that failing over a failed amphora remains possible even when the cloud is resource constrained. Take a look at the comments in the code for more details.
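The trade-off johnsom describes can be sketched with a toy model. The names below (`Cloud`, `failover_delete_first`, `failover_create_first`) are hypothetical, not Octavia's actual flow or task names: delete-first still succeeds on a cloud with zero spare capacity, while abaindur's create-first order needs headroom for one extra instance but avoids most of the downtime.

```python
class Cloud:
    """Tracks free compute capacity to show why delete-first can matter."""

    def __init__(self, free_slots):
        self.free_slots = free_slots

    def boot_amphora(self):
        if self.free_slots < 1:
            raise RuntimeError("quota/capacity exhausted")
        self.free_slots -= 1
        return "new-amp"

    def delete_amphora(self, amp):
        self.free_slots += 1


def failover_delete_first(cloud, old_amp):
    # Octavia's approach: free the failed amphora's resources first so the
    # replacement can boot even on a resource-constrained cloud.
    cloud.delete_amphora(old_amp)
    return cloud.boot_amphora()


def failover_create_first(cloud, old_amp):
    # abaindur's suggestion: boot the replacement, move the VIP (GARP),
    # then delete the old amphora. Needs room for one extra instance.
    new_amp = cloud.boot_amphora()  # raises if the cloud is full
    cloud.delete_amphora(old_amp)
    return new_amp
```

On a full cloud, `failover_create_first` fails outright while `failover_delete_first` still completes - which is the scenario the code comments are guarding against.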
07:29 <lxkong> johnsom: hi, we ran into an issue with batch pool member updates https://storyboard.openstack.org/#!/story/2003608 - I'm not sure if it's a known issue
07:40 <openstackgerrit> Reedip proposed openstack/octavia-tempest-plugin master: [DNM] Modify Member tests for Provider Drivers  https://review.openstack.org/598476
09:57 -openstackstatus- NOTICE: Jobs using devstack-gate (legacy devstack jobs) have been failing due to an ara update. We now use a newer Ansible version; it's safe to recheck if you see "ImportError: No module named manager" in the logs.
12:59 <jiteka> Hello, I have a question regarding the CLI
12:59 <jiteka> do you think it would be possible to add a --set feature to loadbalancer to force-change the status when an LB is stuck in PENDING_CREATE?
12:59 <jiteka> right now the only way to get rid of them (using delete --cascade, to avoid deleting the objects manually one by one: amphora VM, server group, security group, neutron port)
12:59 <jiteka> is to go into the Octavia DB and perform that switch with an UPDATE SQL query
12:59 <jiteka> mysql> update load_balancer set provisioning_status = "ERROR" where provisioning_status = "PENDING_CREATE";
12:59 <jiteka> or
12:59 <jiteka> update load_balancer set provisioning_status = 'ERROR' where id = 'loadbalancer id';
13:02 <jiteka> maybe something similar to this:
13:02 <jiteka> https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/server.html#server-set
13:02 <jiteka> openstack loadbalancer set --state ERROR <loadbalancer uuid>
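jiteka's manual fix can be demonstrated against an in-memory SQLite stand-in for the Octavia database (the real table lives in MySQL; only the table and column names match the queries above, everything else here is illustrative). Scoping the UPDATE to a single LB id is safer than flipping every PENDING_CREATE row, since another LB may legitimately still be building:

```python
import sqlite3

# In-memory stand-in for the Octavia DB; lb-2 simulates an LB that is
# healthy and must not be touched by the fix.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE load_balancer (id TEXT, provisioning_status TEXT)")
conn.execute("INSERT INTO load_balancer VALUES ('lb-1', 'PENDING_CREATE')")
conn.execute("INSERT INTO load_balancer VALUES ('lb-2', 'ACTIVE')")

# Scope the UPDATE to a single LB id rather than every PENDING_CREATE row,
# so an LB that is legitimately still building is not clobbered.
conn.execute(
    "UPDATE load_balancer SET provisioning_status = 'ERROR' WHERE id = ?",
    ("lb-1",),
)

rows = dict(conn.execute("SELECT id, provisioning_status FROM load_balancer"))
print(rows)
```

Once the LB is in ERROR, `openstack loadbalancer delete --cascade` can clean up the rest, as described above.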
13:09 <cgoncalves> jiteka, I'd expect LBs in PENDING_CREATE to eventually go to ERROR after a timeout. Have you waited long enough?
13:09 <cgoncalves> like 25 minutes, which I think is the default
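The ~25 minute window cgoncalves mentions most likely falls out of the amphora connection retry settings (300 retries at a 5-second interval by default); treat the option names below as something to verify against your release's configuration reference rather than a definitive answer:

```ini
[haproxy_amphora]
# 300 retries x 5 seconds = roughly the 25-minute build timeout window
connection_max_retries = 300
connection_retry_interval = 5
```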
13:29 <velizarx> Hi folks. Where can I find documentation on what notifications Octavia sends about itself to AMQP (in nova, for example, I can receive the compute.instance.update notification)? Does Octavia send notifications at all?
13:38 <cgoncalves> velizarx, there are only two event streamers available: EventStreamerNoop (which is a no-op driver) and EventStreamerNeutron
13:39 <cgoncalves> EventStreamerNeutron is not recommended due to known issues
13:39 <cgoncalves> http://git.openstack.org/cgit/openstack/octavia/tree/octavia/controller/queue/event_queue.py#n53
13:40 <cgoncalves> you can find a description of what it does in ^
13:51 <velizarx> That's not exactly what I was asking. That functionality exists for integrating Octavia with Neutron's LBaaS, but I was asking about a general notification mechanism. For example, I want my third-party application to know when a load balancer has been created. Yes, the EventStreamerNeutron class could help me, but it's not quite right.
13:52 <velizarx> cgoncalves, generally you've already answered my question, thank you
13:53 <cgoncalves> velizarx, my answer to you was, in short: no, there's no event streamer like nova and neutron have
13:53 <cgoncalves> :)
14:00 <johnsom> Right, we have not implemented notifications in Octavia yet
14:05 <velizarx> cgoncalves, one more question, maybe you know: if I want to add this functionality to Octavia, is the best way to use the 'event_stream_topic' option and add a new EventStreamer type class? Or to create separate code to handle these events?
14:06 <cgoncalves> velizarx, yes, a new EventStreamerBase type should be good
14:06 <cgoncalves> johnsom, would you agree?
14:06 <johnsom> New code
14:07 <johnsom> The event streamer code is planned to be removed and is poorly implemented
14:07 <cgoncalves> johnsom, EventStreamerBase altogether or EventStreamerNeutron?
14:07 <cgoncalves> I'd agree with removing EventStreamerNeutron, sure
14:07 <johnsom> If we add notifications I would expect it to look more like the notifications nova and neutron send
14:09 <cgoncalves> the code base would be similar to what exists in EventStreamerNeutron, in the sense of leveraging oslo.messaging
14:10 <cgoncalves> hmm, versioned notifications? :D
14:11 <johnsom> I would recommend not using any of the existing code and targeting something around optional "notifications". I am not even sure the existing code is in the right file locations in our repo
14:11 <johnsom> It was slapped in and never should have been approved IMO
14:12 <johnsom> Thus, marked for deletion
14:12 <cgoncalves> I'm okay with removing it all and starting from scratch
14:14 <velizarx> johnsom, OK, understood. I'll write new functionality, and the code will use the 'octavia' exchange and the 'notifications' topic like nova and neutron
14:15 <johnsom> Key parts I would like to see in a notification RFE/spec: how it aligns with other services, that it is optionally turned on, how we make sure the messages have a TTL, how we make sure rabbit issues don't impact the existing code, who/what the use case is, and that the queue/transport settings are unique to the notifications
14:16 <johnsom> The default can be the existing queue, but it should be configurable
14:17 <velizarx> Sure, it's the same functionality as in nova, for example
14:19 <johnsom> Ok, sounds good
14:23 <xgerman_> +1 yep, completely new is best
14:23 <xgerman_> if nova, neutron, etc. are doing it, shouldn't there be an oslo_notifications?
14:30 <velizarx> xgerman_, no, this functionality is in the Notifier class of oslo_messaging
14:31 <xgerman_> yeah, shouldn't be hard to implement, just add something to our flows
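The plan above (emit nova-style notifications on the 'octavia' exchange / 'notifications' topic via oslo_messaging's Notifier) can be sketched without a running broker. This pure-Python sketch only builds the envelope that oslo.messaging puts on the wire; the event type name and payload fields are hypothetical, since no Octavia notification format exists yet, and a real implementation would call `oslo_messaging.Notifier(...).info(ctxt, event_type, payload)` instead of building a dict by hand:

```python
import datetime
import uuid


def make_notification(event_type, payload, publisher_id="octavia.controller"):
    # Mirrors the envelope fields oslo.messaging notifications carry.
    return {
        "message_id": str(uuid.uuid4()),
        "publisher_id": publisher_id,
        "event_type": event_type,
        "priority": "INFO",
        "payload": payload,
        "timestamp": datetime.datetime.utcnow().isoformat(),
    }


msg = make_notification(
    "octavia.loadbalancer.create.end",  # hypothetical event name
    {"id": "lb-1", "provisioning_status": "ACTIVE"},
)
```

A third-party consumer, like the one velizarx describes, would then subscribe to the notifications topic and dispatch on `event_type`.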
15:03 <reedipb> johnsom: ping
15:04 <johnsom> reedipb Heloo
15:04 <johnsom> hello
15:04 <johnsom> Ugh, just made my morning coffee
reedipbhi johnsom : Do you have a timing + state diagram for Octavia ? I was searching for it but got links to 40415:05
johnsomreedipb Maybe these: http://logs.openstack.org/35/598235/2/check/openstack-tox-docs/0372d8b/html/contributor/devref/flows.html15:06
johnsomWe had to disable them temporarily due to a python package dependency issue, but I have a patch to re-enable it15:07
reedipbok, thanks15:08
reedipbjohnsom : sorry to disturb your morning covfefe15:13
johnsomHa, no worries15:13
15:14 <reedipb> johnsom: but I was wondering why a member goes from ACTIVE to NO_MONITOR after the update has occurred https://github.com/openstack/octavia-tempest-plugin/blob/master/octavia_tempest_plugin/tests/api/v2/test_member.py#L635
15:15 <johnsom> Hmm, well, it should update on the first health message from the amphora. "NO_MONITOR" means there is no health monitor configured on the pool, so we blindly assume it is ONLINE.
15:16 <johnsom> ACTIVE is a provisioning status, not an operating status like NO_MONITOR
15:16 <johnsom> Those are two very different status fields.
15:16 <reedipb> oh, yeah, sorry... wrong word, ACTIVE - I meant ONLINE :P
15:16 <johnsom> Operating status is what we observe from the load balancer - for example, what haproxy states the status is
15:18 <reedipb> johnsom: so in case there is no health monitor configured/supported, we should blindly assume it to be ONLINE, right? So I guess I need to change https://review.openstack.org/#/c/598476/ accordingly
15:19 <johnsom> Right, if it has no health monitor, we state that in the operating status as NO_MONITOR and assume the members are healthy.
15:20 <reedipb> johnsom: so provider drivers not supporting HMs should never update the operating status to ONLINE, is that it?
15:20 <reedipb> IIUC ^^
15:21 <reedipb> because if they do not support HMs, then there is no way the operating status for members (for e.g.) can be made ONLINE
15:21 <johnsom> Correct, with no health monitor we can't really make a call on whether it is healthy or not.
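The status logic johnsom lays out reduces to a few lines. A toy sketch (function and constant names are illustrative, not Octavia internals): NO_MONITOR is an *operating* status, distinct from provisioning statuses like ACTIVE, and it means nothing is probing the member, so Octavia reports that it doesn't know and assumes healthy:

```python
ONLINE, OFFLINE, NO_MONITOR = "ONLINE", "OFFLINE", "NO_MONITOR"


def member_operating_status(has_health_monitor, health_check_passed=None):
    """Derive a member's operating status, per the NO_MONITOR semantics."""
    if not has_health_monitor:
        # Without a monitor we can't claim ONLINE or OFFLINE; report that
        # we don't know, and blindly assume the member is healthy.
        return NO_MONITOR
    return ONLINE if health_check_passed else OFFLINE
```

Under this model, a provider driver that doesn't support health monitors would only ever report NO_MONITOR for its members, never ONLINE - which is the conclusion reedipb reaches above.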
15:21 <tobias-urdin> o/ I'm super confused by the documentation on how I should issue CAs, could somebody clear things up for me? So I have one root private key (ca_private_key) and two CAs: client_ca.pem (client_ca) and server_ca.pem (ca_certificate, server_ca), then I issue my client_cert from the client_ca?
15:21 <tobias-urdin> (production deployment btw)
15:22 <johnsom> reedipb I have tried to capture some of this logic in the API guide: https://developer.openstack.org/api-ref/load-balancer/v2/index.html#status-codes
15:23 <johnsom> tobias-urdin One minute, I will find a recent chat log where we helped another user with that
15:24 <xgerman_> tobias-urdin: yep, client_cert from client_ca
15:24 <reedipb> johnsom: that's right, but the confusion comes when you have admin_state_up=True (meaning it is administratively enabled), there is no health monitor, and all the entities are healthy :)
15:24 <reedipb> xgerman_ o/
15:25 <johnsom> tobias-urdin Try reading from here down, it will likely answer your questions. If not, please feel free to come back here: http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas/%23openstack-lbaas.2018-08-27.log.html#t2018-08-27T19:15:34
15:25 <tobias-urdin> johnsom: xgerman_ thanks!
15:25 <johnsom> Just because it is admin_state_up doesn't mean there is anything listening on the port
15:26 <reedipb> johnsom: so the priority of the health monitor is the highest... :)
15:26 <johnsom> With no monitor we can't know if the application is working or not, so we assume it is and report to the user that we don't know
15:27 <reedipb> ideally we should only have an operating status of NO_MONITOR if an HM is not configurable for the provider driver. It's up to the driver itself whether it can handle the call or not
15:27 <reedipb> johnsom: ^^
15:28 <johnsom> reedipb We do have an operating status of "NO_MONITOR"....
15:28 <reedipb> johnsom: yeah, that's what I said, I think :)
15:29 <johnsom> Ha, ok. I was confused there.... (see above, first cup of coffee)
15:29 <reedipb> go ahead, enjoy your extended weekend... I will log out now :D :) great discussing with you :)
16:02 <tobias-urdin> johnsom: if you wouldn't mind, can you verify my logic is not completely off and it's what I want to do? Would really, really appreciate it :)
16:02 <tobias-urdin> https://review.openstack.org/#/c/599018/
16:02 <tobias-urdin> (the commit message describes the logic changes)
16:06 <johnsom> tobias-urdin I'm not very familiar with the puppet setup, but I think what you describe makes sense.
16:07 <tobias-urdin> johnsom: ok, thanks. Just wanted to verify my logic that ca_certificate/server_ca and client_ca should be separated in a production deployment
16:07 <johnsom> Yes, that is a best practice
16:07 <tobias-urdin> awesome, thanks for verifying. Have a great weekend everyone, and thanks for the help o/
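A hedged sketch of the dual-CA split tobias-urdin is describing, roughly along the lines of the Octavia certificate configuration guidance: the server CA signs the per-amphora certificates, the client CA signs the controller's client certificate, and each side verifies the other against the opposite CA. Paths and the passphrase are placeholders; verify the option names against your release's configuration reference:

```ini
[certificates]
cert_generator = local_cert_generator
# CA used to issue the amphora (server) certificates
ca_certificate = /etc/octavia/certs/server_ca.cert.pem
ca_private_key = /etc/octavia/certs/server_ca.key.pem
ca_private_key_passphrase = not-a-real-passphrase

[controller_worker]
# CA that signed the controller's client certificate; installed into the
# amphorae so they can verify the controller
client_ca = /etc/octavia/certs/client_ca.cert.pem

[haproxy_amphora]
# client certificate (with key) issued from client_ca, presented by the
# controller when talking to the amphora agent
client_cert = /etc/octavia/certs/client.cert-and-key.pem
# server CA the controller uses to verify amphora certificates
server_ca = /etc/octavia/certs/server_ca.cert.pem
```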
16:43 * cgoncalves should pay more attention to 'reply all' vs 'reply'
16:44 <johnsom> At least you got his name right.... sigh
16:44 <cgoncalves> lol!
16:45 <johnsom> Apologies again.
16:45 <cgoncalves> dayou will soon have +4 powers
17:34 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Set some amphora driver optimizations  https://review.openstack.org/598379
17:57 <xgerman_> lol
17:59 <johnsom> cgoncalves All official now, welcome to the core team!
18:03 <colin-> congrats
18:18 <cgoncalves> thank you, all!
18:18 <cgoncalves> I'm officially open to trading +2's for beer
18:18 * johnsom quickly starts reverting the permissions
18:19 <cgoncalves> heh :)
18:20 <johnsom> We at least ask for a bottle of whiskey here....
18:23 <xgerman_> per person
18:23 <xgerman_> just tell assay you need a bigger expense account because you will be buying in Denver :-)
18:24 <xgerman_> also make sure we are not left out of the RedHat party - I vividly remember the bus in Dublin
18:26 <cgoncalves> which bus?! :)
18:28 <rm_work> we /all/ saw the bus
18:29 <rm_work> though to be fair, I had a perfectly good dinner elsewhere :P
18:29 <rm_work> with mugsie IIRC
18:29 <rm_work> and may also have been half passed-out already
18:30 <colby_> Hey guys. I got barbican up and working with dogtag. Are there any guides on integrating barbican with octavia?
18:33 <cgoncalves> colby_, Octavia can talk to Barbican to retrieve secrets (certificates, etc.) for configuring TLS-terminated listeners: https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html#deploy-a-tls-terminated-https-load-balancer
18:36 <colby_> I meant how do I enable barbican on octavia - what configs need to change, etc. I'm on Pike
18:38 <colby_> I'm guessing [certificates] cert_manager, cert_generator, barbican_auth, service_name
18:41 <cgoncalves> ah, sorry. cert_generator=barbican_cert_generator
18:41 <cgoncalves> I *think* that should be enough with the rest of the default configs
18:42 <colby_> ok great, just wanted to make sure there was not anything else needed. Thanks
18:42 <cgoncalves> oops, ignore cert_generator. There's no barbican there
18:43 <cgoncalves> cert_manager=barbican_cert_manager
18:43 <cgoncalves> which is the default actually :)
18:45 <colby_> hah ok.
18:51 <xgerman_> yep, cert_manager
18:52 <xgerman_> we used to have Anchor as a cert generator but that is now on the growing heap of stale openstack projects
18:53 <xgerman_> colby_: there is some dance with permissions on the barbican container (which stores the certs) needed + the barbican client only targets public endpoints, so depending on your cloud that might be problematic
18:54 <colby_> yeah, I have the public endpoint set up. I was going to use endpoint_type to point to internal but it sounds like that won't work
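Pulling the thread together, the octavia.conf fragment for the setup colby_ is describing would be roughly the following. Whether the endpoint-selection options are honored by the barbican client path in Pike is exactly the doubt raised above, so treat this as a sketch to validate, not a known-good config:

```ini
[certificates]
# The default in recent releases, per cgoncalves above
cert_manager = barbican_cert_manager
# Endpoint selection - verify these are respected on your release
region_name = RegionOne
endpoint_type = publicURL
```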
18:54 <cgoncalves> also, in Pike users need to explicitly add the octavia uuid to the secret ACL
18:55 <cgoncalves> starting from Rocky, Octavia can handle that for the user: https://review.openstack.org/#/c/552549/
18:55 <colby_> ugh ok... is that the case for the horizon interface too? I have not looked at that yet but feel our users would be more likely to use that (except in the case of magnum container clusters)
18:59 <cgoncalves> colby_, you, as cloud admin, would need to inform your users of the octavia uuid so that they could go to the CLI/Horizon (never tried via Horizon) and add that uuid to the ACL
19:00 <cgoncalves> compare the latest page ^ with https://docs.openstack.org/octavia/queens/user/guides/basic-cookbook.html#deploy-a-tls-terminated-https-load-balancer
19:00 <cgoncalves> compare the command example and you'll see what I'm talking about
19:03 <colby_> yeah, I see that. I looked through the barbican ACL controls and see the commands to set that up for octavia in the cookbook. Thanks for that tip. I'm guessing it would be less secure (but less hassle for users) to allow the octavia user access to secrets via policy files in barbican, but then that is full access to all secrets, so probably a bad idea.
19:15 <cgoncalves> yeah
20:35 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Update amphora-agent to report UDP listener health  https://review.openstack.org/598192
21:11 <xgerman_> cgoncalves: what, only +1? Use your new +2 powers :-)
21:11 <cgoncalves> xgerman_, I could ask you the same :P
21:11 <xgerman_> I was skeptical about that parameter :-)
21:12 <cgoncalves> I wanted to know if you agree based on my last comment (keepalived killed by user)
21:16 <xgerman_> I think if somebody logs into an amp and kills keepalived, he is on his own
21:23 <johnsom> I don't even know what you guys are talking about....
21:23 <johnsom> Oh, got it
21:25 <johnsom> Ha, yeah, if the user is in the amp.... you are already in trouble. That said, I have been sitting on that as I needed to do more research to vote
21:30 <cgoncalves> I am okay with either restart option, except for one thing where restart=no would be better: situations where keepalived keeps exiting, hence restarting infinitely
21:30 <johnsom> Right, a max restart attempts setting is usually a good idea
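One way to get "restart, but not forever" is a systemd drop-in for keepalived inside the amphora image, e.g. a hypothetical /etc/systemd/system/keepalived.service.d/restart.conf. The values below are illustrative, and note that older systemd releases spell the limit options StartLimitInterval/StartLimitBurst inside [Service] rather than [Unit]:

```ini
[Unit]
# Give up if the service fails 5 times within 60 seconds
StartLimitIntervalSec=60
StartLimitBurst=5

[Service]
Restart=on-failure
RestartSec=2
```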
21:33 <xgerman_> +1
21:33 <xgerman_> but it said it would already do that by itself (on the Interweb)
21:34 <xgerman_> johnsom: what's your take on https://review.openstack.org/#/c/587505/ - do we need to expose the new state in the docs?
21:36 <johnsom> Yes, we would, as amphorae are now exposed via the API. Still don't understand why we need that state, but I haven't reviewed yet
21:37 <xgerman_> well, unless you want to fire off deletes right when you get the message, you need a way to transfer that
21:43 <johnsom> Commented. Not sure about the API version bump, but it does need to be documented.
21:44 <xgerman_> Ok, will add it to the docs then
22:19 <openstackgerrit> Michael Johnson proposed openstack/octavia-tempest-plugin master: Fix the readme example  https://review.openstack.org/599086
22:28 <openstackgerrit> Merged openstack/octavia master: Re-enable flow diagrams  https://review.openstack.org/598235
23:50 <openstackgerrit> German Eichberger proposed openstack/octavia master: Delete zombie amphora when detected  https://review.openstack.org/587505
23:50 <xgerman_> ok, added the one word to the docs...

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!