Friday, 2019-06-21

00:41 *** mithilarun has quit IRC
00:49 *** yamamoto has quit IRC
00:55 *** ricolin has joined #openstack-lbaas
01:59 *** goldyfruit has quit IRC
02:01 <rm_work> hmm yeah, not seeing how that revert could fail
02:01 <rm_work> interesting
02:01 <rm_work> though i'm looking at master
02:01 <rm_work> need to check out Queens
02:02 <rm_work> hmm yeah weird, same
02:03 <rm_work> more log context would be good :D
02:03 <rm_work> if the listener didn't have an ID somehow? uhhh, no, don't think that's possible
02:04 <rm_work> we log an error if either of the functions it runs fails
02:17 *** goldyfruit has joined #openstack-lbaas
03:06 *** psachin has joined #openstack-lbaas
03:11 *** yamamoto has joined #openstack-lbaas
03:53 *** gthiemonge has quit IRC
03:53 *** yamamoto has quit IRC
03:54 *** gthiemonge has joined #openstack-lbaas
04:01 *** yamamoto has joined #openstack-lbaas
04:17 *** ramishra has joined #openstack-lbaas
04:45 *** pcaruana has joined #openstack-lbaas
05:12 *** gcheresh has joined #openstack-lbaas
05:19 *** gcheresh has quit IRC
05:59 *** rcernin has quit IRC
06:05 *** vishalmanchanda has joined #openstack-lbaas
06:15 *** ivve has joined #openstack-lbaas
06:43 *** luksky has joined #openstack-lbaas
07:02 *** ccamposr__ has joined #openstack-lbaas
07:05 *** ccamposr has quit IRC
07:19 *** tesseract has joined #openstack-lbaas
07:40 *** ricolin_ has joined #openstack-lbaas
07:43 *** ricolin has quit IRC
07:47 *** rpittau|afk is now known as rpittau
07:59 *** ccamposr has joined #openstack-lbaas
08:00 *** ccamposr__ has quit IRC
08:09 <openstackgerrit> Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Add amphora failover API test  https://review.opendev.org/633614
08:23 *** ccamposr__ has joined #openstack-lbaas
08:25 *** ccamposr has quit IRC
08:25 *** gcheresh has joined #openstack-lbaas
08:30 *** ccamposr has joined #openstack-lbaas
08:32 *** ccamposr__ has quit IRC
08:41 *** gcheresh has quit IRC
09:10 *** gcheresh has joined #openstack-lbaas
09:10 *** ricolin_ has quit IRC
09:11 *** ricolin has joined #openstack-lbaas
09:16 *** gcheresh has quit IRC
09:57 *** ricolin has quit IRC
10:11 *** gcheresh has joined #openstack-lbaas
10:19 *** gcheresh has quit IRC
10:25 *** yamamoto has quit IRC
10:57 *** yamamoto has joined #openstack-lbaas
10:59 *** yamamoto has quit IRC
11:00 *** yamamoto has joined #openstack-lbaas
11:23 *** gcheresh has joined #openstack-lbaas
11:58 *** gcheresh has quit IRC
12:02 *** sapd1_x has joined #openstack-lbaas
12:08 *** ccamposr__ has joined #openstack-lbaas
12:11 *** ccamposr has quit IRC
12:22 *** goldyfruit has quit IRC
12:36 <openstackgerrit> Merged openstack/octavia master: Install DIB binary dependencies from bindep.txt  https://review.opendev.org/632051
13:10 <openstackgerrit> Merged openstack/octavia stable/rocky: Fix allocate_and_associate DB deadlock  https://review.opendev.org/665557
13:15 *** goldyfruit has joined #openstack-lbaas
13:28 *** vishalmanchanda has quit IRC
13:31 *** yboaron_ has joined #openstack-lbaas
13:35 *** yboaron_ has quit IRC
13:51 <emccormick> johnsom: FYI, that debug message about the shared attribute being excluded by policy has no bearing on the SubnetNotFound result
13:52 <emccormick> tried this in another region (running Rocky, FWIW) and the log entries are the same
13:53 <emccormick> 200 response + that debug message from Neutron. It's returning the subnet info to the request. I just can't find where the SubnetNotFound message gets triggered
13:59 <emccormick> interestingly, in the Queens region where I'm unable to add new load balancers, I have 3 others that are stuck in PENDING_UPDATE. Maybe related to cgoncalves's bug
14:03 *** Vorrtex has joined #openstack-lbaas
14:18 <emccormick> Is there any clean way to revert an LB from PENDING_UPDATE back to ACTIVE, short of forcing it in the database? Status shows every element of it as online and active except the load balancer itself
14:30 <cgoncalves> emccormick, SQL query: update loadbalancer set provisioning_status="ACTIVE" where id="YOUR_LB_ID";
14:30 <emccormick> that's what I was afraid of ;)
14:30 <emccormick> Thanks
14:31 <xgerman_> you can add a feature request for the --force flag for failover or so :-)
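cgoncalves's one-liner, fleshed out as a hedged sketch. The database name `octavia` and the table name `load_balancer` match recent Octavia schemas but are assumptions that vary by deployment and release (check with `SHOW TABLES` first), and the update should only be run when no controller is still working on the LB:

```shell
# Hedged sketch: manually un-stick a load balancer's provisioning_status.
# "octavia" (database) and "load_balancer" (table) are assumptions about
# the deployment; YOUR_LB_ID is a placeholder.
LB_ID="YOUR_LB_ID"
mysql octavia -e "
  UPDATE load_balancer
     SET provisioning_status = 'ACTIVE'
   WHERE id = '${LB_ID}';"
```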
14:34 *** yboaron_ has joined #openstack-lbaas
14:35 <cgoncalves> the --force option has been discussed on multiple occasions. the team is working on supporting flow persistence. see https://storyboard.openstack.org/#!/story/2005072
14:37 <xgerman_> Yep, saw that in the Denver minutes... not sure if that's the only way to get into *_PENDING though
14:38 <emccormick> I think this thing has gotten itself into a bad state where amphorae do not exist and are therefore in ERROR state
14:38 <emccormick> out of sync with the servers that actually exist somehow. Fun
14:39 *** yboaron_ has quit IRC
14:39 <cgoncalves> emccormick, try marking the LB's provisioning_status as ERROR and triggering a load balancer failover
14:39 <emccormick> they seem to slowly be getting there on their own. I now have 0 amphorae
14:40 <xgerman_> yeah, at some point the system will notice :-)
14:40 <emccormick> it's not spawning new ones so far though...
14:44 *** gcheresh has joined #openstack-lbaas
14:48 *** spatel has joined #openstack-lbaas
14:48 <spatel> Does the lbaas mgmt network need to be routable, or is it just for internal heartbeats?
14:56 *** fnaval has joined #openstack-lbaas
14:58 *** sapd1_x has quit IRC
15:00 <emccormick> spatel: it needs to allow access from your controllers to the amphorae
15:00 <xgerman_> non-routable is fine
15:00 <emccormick> so either routed, or shared with some interface on your controllers
15:01 <xgerman_> if you put that all on one L2 subnet
15:01 <spatel> in short, my controller node needs to be able to reach all compute nodes on an L2 network
15:01 <spatel> I got your point, thank you..
15:01 <spatel> emccormick: xgerman_
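The reachability requirement discussed above shows up in a couple of octavia.conf settings. A hedged sketch of the relevant fragment (the option names are real Octavia settings; the addresses and the network UUID are placeholders):

```shell
# Hedged sketch: the octavia.conf settings that ride on the lb-mgmt-net.
# Addresses and the UUID below are placeholders for illustration only.
conf='
[health_manager]
# Amphorae send UDP heartbeats to these controller endpoints; they must be
# reachable from the amphorae over the management network (routed or L2).
controller_ip_port_list = 172.31.0.2:5555,172.31.0.3:5555

[controller_worker]
# Management network the amphorae get their management port on.
amp_boot_network_list = 11111111-2222-3333-4444-555555555555
'
printf '%s\n' "$conf"
```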
15:02 <spatel> Does the Horizon GUI support uploading an SSL certificate to Octavia or Barbican?
15:03 <spatel> I haven't seen one.. so far. not sure if i am missing anything here
15:04 <cgoncalves> spatel, the Octavia dashboard does not, and should not IMO. it should be handled in a Barbican dashboard, which does not exist today
15:04 <cgoncalves> I heard the Barbican team was planning to start one though
15:04 * redrobot pokes head in at the sound of Barbican
15:05 <spatel> cgoncalves: sounds good
15:05 <cgoncalves> redrobot, can you confirm or deny the rumors? :)
15:10 <emccormick> anything required to get new amphorae to try to spin up for LBs in ERROR state?
15:10 <emccormick> assuming there are no amphorae presently
15:10 <emccormick> or just wait
15:10 <emccormick> been about 5 minutes so far...
15:11 <emccormick> apparently failover isn't a valid command in ERROR state
15:12 * johnsom holds the tape recorder closer to redrobot to get a quote
15:12 <johnsom> There used to be castellan_ui, but I think that died.
15:12 <johnsom> emccormick: you must have an un-patched deployment; that bug about failover on ERROR objects was fixed some time ago
15:13 <cgoncalves> emccormick, hmm, I reckon there was a patch to fix that corner case where there are no amps associated with the LB
15:13 <cgoncalves> this: https://review.opendev.org/#/c/620414/
15:14 <emccormick> was it backported to Queens?
15:14 <johnsom> cgoncalves: that is a different issue
15:15 <johnsom> cgoncalves: that patch should be abandoned.
15:15 <johnsom> cgoncalves: though the issues are part of the failover refactor I have on my short list.
15:17 <johnsom> emccormick: it was: https://review.opendev.org/#/c/643740/
15:18 <emccormick> in the short term, if I go forcibly set them to ACTIVE again in the DB, should the amphorae get created?
15:19 <emccormick> I kinda have bitchy customers that won't be too happy if I can't fix it til I build a whole new image and push it out ;)
15:20 <johnsom> emccormick: it was released in 2.1.0 for Queens. No image should be required to fix the failover issue; it's an API bug.
15:20 <johnsom> You can try a force to active
15:20 <johnsom> Then a failover
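johnsom's "force to active, then failover" sequence would look roughly like this once the provisioning_status has been reset in the DB. A hedged sketch: `LB_ID` is a placeholder, and both commands are standard python-octaviaclient commands:

```shell
# Hedged sketch: after forcing provisioning_status back to ACTIVE in the
# database, trigger a failover so the controller rebuilds the amphorae.
LB_ID="YOUR_LB_ID"   # placeholder

openstack loadbalancer failover "${LB_ID}"

# then watch it cycle through PENDING_UPDATE back to ACTIVE
openstack loadbalancer show -f value -c provisioning_status "${LB_ID}"
```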
15:26 <emccormick> fun. It completely ignored the failover request, with no errors at all
15:26 <emccormick> the load balancer is still ACTIVE though *heh*
15:28 <emccormick> good news is, I can create new load balancers now that I fixed the first problem, and I get amphorae
15:29 <emccormick> FYI, this was all initially caused by a second region getting added and no region being specified in octavia.conf in the first region. It was contacting the right endpoints, but for some reason was not able to deal with the results that came back
15:41 *** goldyfruit has quit IRC
15:48 *** yamamoto has quit IRC
15:49 *** yamamoto has joined #openstack-lbaas
15:54 *** yamamoto has quit IRC
15:57 *** rpittau is now known as rpittau|afk
16:34 <emccormick> the only thing that seems to trigger amphora creation is deleting and recreating, which is less than ideal
16:34 <emccormick> not sure why it ignores failover requests
17:03 *** ataraday_ has joined #openstack-lbaas
17:12 <ataraday_> johnsom, Hi! Can you please take a look at https://review.opendev.org/#/c/662791/ ? And whether the approach for pool flows is acceptable: https://review.opendev.org/#/c/665381/
17:12 <johnsom> ataraday_: OK. Might not get to it until next week.
17:13 <ataraday_> johnsom, OK, thanks!
17:20 *** gcheresh has quit IRC
17:22 *** ccamposr has joined #openstack-lbaas
17:24 *** ccamposr__ has quit IRC
17:43 <spatel> One more question: if one of the amphora VMs dies in active-standby, does Octavia rebuild it?
17:50 *** tesseract has quit IRC
18:01 <johnsom> Yes
18:02 *** gcheresh has joined #openstack-lbaas
18:08 *** ramishra has quit IRC
18:17 <mloza> hello, what would cause this error? https://pastebin.com/raw/TVjm9AKj
18:18 <johnsom> mloza: 409 means conflict. In this case, some other action is being executed against the member by one of the controllers. You need to retry until the object is no longer in use and the 409 clears.
18:23 <johnsom> Oh, not member, load balancer.
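The retry loop johnsom describes for immutable-object 409s can be sketched like this. Hedged: `wait_for_active` and the fake status command are illustrative names; in a real deployment the status command would wrap `openstack loadbalancer show -f value -c provisioning_status "$LB_ID"`:

```shell
# Hedged sketch: poll a load balancer's provisioning_status and only retry
# the mutating API call once the object leaves the immutable PENDING_*
# states. "$1" is any command that prints the current status.
wait_for_active() {
    _cmd="$1"; _max="${2:-30}"; _sleep="${3:-5}"
    _n=0
    while [ "$_n" -lt "$_max" ]; do
        _status="$("$_cmd")"
        [ "$_status" = "ACTIVE" ] && return 0   # clear to retry the call
        _n=$((_n + 1))
        sleep "$_sleep"                         # back off between polls
    done
    return 1                                    # still busy; give up
}
```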
18:51 *** goldyfruit has joined #openstack-lbaas
18:53 *** mithilarun has joined #openstack-lbaas
18:56 *** gcheresh has quit IRC
18:59 *** psachin has quit IRC
19:16 *** goldyfruit has quit IRC
19:20 <mloza> thanks
19:20 <mloza> I keep getting this error: 'The amphora 9c270c8a-c413-46bf-bb79-4ccf4f42d764 with IP 172.31.0.56 is missing from the DB, so it cannot be automatically deleted (the compute_id is unknown). An operator must manually delete it from the compute service.'
19:21 <mloza> failover doesn't even bring it back
19:21 <mloza> this is another issue
19:27 *** Vorrtex has quit IRC
19:29 *** mithilarun has quit IRC
19:29 *** fnaval has quit IRC
19:48 *** mithilarun has joined #openstack-lbaas
19:49 *** yamamoto has joined #openstack-lbaas
19:53 *** yamamoto has quit IRC
19:59 <cgoncalves> mloza, that's a warning message, and the message hints at what to do to resolve it
19:59 *** goldyfruit has joined #openstack-lbaas
20:00 <cgoncalves> find the nova instance with IP 172.31.0.56 and delete it
20:10 <mloza> thanks
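cgoncalves's manual cleanup step, sketched with the standard openstack client. Hedged: `--ip` takes a regex, so confirm the match really is the orphaned amphora before deleting, and `SERVER_ID` is a placeholder:

```shell
# Hedged sketch: find and remove the orphaned amphora instance by the
# management IP quoted in the warning message.
openstack server list --all-projects --ip '172.31.0.56' \
    -f value -c ID -c Name

# once you are certain it is the orphaned amphora (placeholder ID):
openstack server delete SERVER_ID
```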
20:15 *** mithilarun has quit IRC
20:15 *** mithilarun has joined #openstack-lbaas
20:20 *** mithilarun has quit IRC
20:24 *** mithilarun has joined #openstack-lbaas
20:44 *** fnaval has joined #openstack-lbaas
21:16 *** pcaruana has quit IRC
21:25 *** mithilarun has quit IRC
21:25 *** mithilarun has joined #openstack-lbaas
21:30 *** mithilarun has quit IRC
21:33 *** openstackgerrit has quit IRC
21:36 *** mithilarun has joined #openstack-lbaas
21:42 *** squarebracket has joined #openstack-lbaas
21:49 *** spatel has quit IRC
22:24 *** ccamposr has quit IRC
22:38 <rm_work> cgoncalves: hmmm, I BET we could actually improve that: could look up that IP in neutron, check which port->compute it's hooked to, attempt an amp-agent /info call to see if it really belongs to us, and if it DOES, *then* we could delete it automatically?
22:38 <rm_work> :D
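rm_work's idea, done manually, would look something like this. Heavily hedged: the cert paths are placeholders, and the agent API path (`/1.0/info` on port 9443, TLS client-cert auth) varies by Octavia release, so verify against your deployment:

```shell
# Hedged sketch: map a stray amphora IP back to its compute instance via
# neutron, then probe the amphora agent before trusting it as "ours".
IP='172.31.0.56'

# 1) find the neutron port holding that fixed IP
PORT_ID=$(openstack port list --fixed-ip ip-address="${IP}" -f value -c ID)

# 2) the port's device_id is the nova server that owns it
SERVER_ID=$(openstack port show "${PORT_ID}" -f value -c device_id)

# 3) only if the amphora agent answers would we consider auto-deleting;
#    cert paths and API version are deployment-specific placeholders
curl --cert /etc/octavia/certs/client.pem \
     --cacert /etc/octavia/certs/ca.pem \
     "https://${IP}:9443/1.0/info" && echo "amp agent responded: ${SERVER_ID}"
```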
22:42 *** mithilarun has quit IRC
22:42 *** mithilarun has joined #openstack-lbaas
22:43 *** spatel has joined #openstack-lbaas
22:47 *** spatel has quit IRC
22:47 *** mithilarun has quit IRC
23:01 *** mithilarun has joined #openstack-lbaas
23:31 *** luksky has quit IRC
23:40 *** goldyfruit has quit IRC
23:44 *** goldyfruit has joined #openstack-lbaas

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!