Wednesday, 2019-05-22

johnsomWell, I wasn't just sitting watching zuul... grin00:07
johnsomThough slightly watching the paint peal while running tempest tests.00:08
*** altlogbot_2 has quit IRC00:10
johnsomBlah, end of day. s/peal/peel/g00:10
*** altlogbot_3 has joined #openstack-lbaas00:11
*** rcernin has quit IRC00:21
*** rcernin has joined #openstack-lbaas00:21
*** goldyfruit has quit IRC01:06
*** rcernin has quit IRC01:22
*** rcernin has joined #openstack-lbaas01:22
*** happyhemant has quit IRC01:28
*** goldyfruit has joined #openstack-lbaas01:31
openstackgerritAdam Harwell proposed openstack/octavia master: Allow multiple VIPs per LB  https://review.opendev.org/66023902:13
*** gthiemon1e has quit IRC02:24
*** gthiemonge has joined #openstack-lbaas02:25
*** ricolin has joined #openstack-lbaas02:44
*** goldyfruit has quit IRC02:54
openstackgerritAdam Harwell proposed openstack/octavia master: Allow multiple VIPs per LB  https://review.opendev.org/66023902:55
*** ramishra has joined #openstack-lbaas03:38
*** psachin has joined #openstack-lbaas03:39
*** AlexStaf has joined #openstack-lbaas04:06
*** ivve has quit IRC04:26
*** rm_work has quit IRC05:31
*** rm_work has joined #openstack-lbaas05:34
*** ivve has joined #openstack-lbaas05:39
*** ivve has quit IRC05:54
*** gcheresh_ has joined #openstack-lbaas06:36
*** pcaruana has joined #openstack-lbaas07:17
*** tesseract has joined #openstack-lbaas07:18
*** rpittau|afk is now known as rpittau07:19
*** luksky has joined #openstack-lbaas07:29
*** lemko has joined #openstack-lbaas07:38
*** nmagnezi has joined #openstack-lbaas07:40
*** luksky has quit IRC07:42
*** luksky has joined #openstack-lbaas07:57
*** yboaron_ has joined #openstack-lbaas08:00
openstackgerritAnn Taraday proposed openstack/octavia master: [Jobboard] Importable flow functions  https://review.opendev.org/65953808:09
openstackgerritAnn Taraday proposed openstack/octavia master: [Jobboard] Importable flow functions  https://review.opendev.org/65953808:35
openstackgerritAnn Taraday proposed openstack/octavia master: [WIP] Jobboard based controller  https://review.opendev.org/64740608:36
*** ataraday_ has joined #openstack-lbaas08:39
*** ccamposr has joined #openstack-lbaas08:47
*** luksky has quit IRC08:48
openstackgerritAnn Taraday proposed openstack/octavia master: [WIP] Jobboard based controller  https://review.opendev.org/64740608:49
*** sapd1_x has joined #openstack-lbaas08:50
openstackgerritAnn Taraday proposed openstack/octavia master: [WIP] Transition member flows to use dicts  https://review.opendev.org/65784209:09
*** rcernin has quit IRC09:10
*** luksky has joined #openstack-lbaas09:22
kklimondahmm, in stein horizon is trying to access flavorprofiles as part of the LB creation and I'm getting 403 - that seems reasonable based on the policy and code, but I don't quite get how to make it work, is that a bug?09:28
*** ricolin has quit IRC09:40
rm_workkklimonda: is horizon newer than your octavia install?10:01
rm_workif octavia doesn't have the flavorprofiles API yet, it might break10:01
rm_workit really should do version discovery though... but maybe that wasn't handled properly10:02
rm_workthat or possibly the flavorprofiles stuff is behind a policy that your users don't have... i didn't actually review the policy side of that so i'm not sure. you would be better off asking johnsom when he's up in a few hours10:03
rm_works/better/best/10:03
kklimondahmmm, both octavia and horizon are from stein (although not final releases, but RCs for now) - I can see that horizon does a query to flavorprofiles and the query is denied by policy for the user. I'll come back later when johnsom is up. Thanks.10:12
kklimondaactually scratch that, I see that even though kolla's config file says 4.0.0.0rc1, the image comes with final releases (4.0.0 and 15.0.0 respectively)10:13
openstackgerritAnn Taraday proposed openstack/octavia master: [WIP] Jobboard based controller  https://review.opendev.org/64740610:25
*** yboaron_ has quit IRC10:28
*** yboaron_ has joined #openstack-lbaas10:29
*** luksky has quit IRC10:47
*** luksky has joined #openstack-lbaas11:18
openstackgerritAdam Harwell proposed openstack/octavia master: Allow multiple VIPs per LB  https://review.opendev.org/66023911:37
*** henriqueof has joined #openstack-lbaas11:43
openstackgerritAdam Harwell proposed openstack/octavia master: Allow multiple VIPs per LB  https://review.opendev.org/66023911:47
*** boden has joined #openstack-lbaas11:58
openstackgerritAnn Taraday proposed openstack/octavia master: [WIP] Transition member flows to use dicts  https://review.opendev.org/65784212:24
*** psachin has quit IRC12:27
*** ricolin has joined #openstack-lbaas12:55
openstackgerritAdam Harwell proposed openstack/octavia master: Allow multiple VIPs per LB  https://review.opendev.org/66023913:02
*** goldyfruit has joined #openstack-lbaas13:07
rm_worklol our amps have postfix13:12
*** luksky has quit IRC14:43
*** henriqueof has quit IRC14:45
johnsomkklimonda So, dashboard should not be looking at flavor profiles at all. They are an admin only object. It should only be looking at flavors....  Let me look at the dashboard code.15:00
*** Vorrtex has joined #openstack-lbaas15:01
johnsomkklimonda Ok, bummer, the patch that merged for that is trying to look up the flavor profile to get the provider, which is not correct. We will have to fix that bug.15:03
johnsomIt must have only been tested with the admin account. sigh.15:05
johnsomI have opened https://storyboard.openstack.org/#!/story/2005759 for this15:08
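For anyone following along, the distinction johnsom describes can be checked from the CLI - a minimal sketch, assuming the default Octavia policies: flavors are visible to regular users, while flavor profiles are an admin-only object, so the dashboard should never need to query them for a member user.

```bash
# As a regular (member) user, listing flavors is expected to work:
openstack loadbalancer flavor list

# ...while listing flavor profiles should be rejected (403) under the
# default policy, since flavor profiles are admin-only:
openstack loadbalancer flavorprofile list
```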
colin-what is causing my housekeeping to routinely spike up to ~50 CPU utilization and back down again? any guesses?15:23
colin-i'm not super familiar with the duties of this guy besides deleting stuff and purging records15:24
cgoncalvescolin-, could you check if you are running with this patch: https://review.opendev.org/#/q/Iffc960c7c3a986328cfded1b4e408931ab0a787715:27
*** gcheresh_ has quit IRC15:27
johnsomFYI, I think I have a fix for the dashboard bug, starting testing now.15:30
colin-i do _not_ have that but have a deployment window coming up later where i will be implementing it15:31
colin-how satisfying thx cgoncalves15:31
cgoncalvescolin-, you're welcome :) let us know if it helps15:33
cgoncalvesalso if it doesn't :)15:34
*** ltomasbo has quit IRC15:36
*** ramishra has quit IRC15:51
openstackgerritMichael Johnson proposed openstack/octavia-dashboard master: Fix 403 issue when creating load balancers  https://review.opendev.org/66076815:51
johnsomkklimonda https://review.opendev.org/66076815:51
johnsomNow that is service... lol15:51
openstackgerritMichael Johnson proposed openstack/octavia-dashboard stable/stein: Fix 403 issue when creating load balancers  https://review.opendev.org/66076915:52
rm_work#startmeeting Octavia16:00
openstackMeeting started Wed May 22 16:00:00 2019 UTC and is due to finish in 60 minutes.  The chair is rm_work. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
*** openstack changes topic to " (Meeting topic: Octavia)"16:00
openstackThe meeting name has been set to 'octavia'16:00
rm_worko/16:00
xgermanO/16:00
ataraday_hi16:00
rm_workI am ... still working from yesterday, so bear with me16:00
cgoncalveso/16:00
johnsomo/16:00
johnsomKeep forgetting I need to raise my hand now.... lol16:00
cgoncalvesgood, already warmed up16:01
rm_workyes, you have been demoted (promoted?) to "regular participant" :D16:01
rm_work#topic Announcements16:01
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"16:01
johnsom#link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006478.html16:01
johnsomSome minor requirements changes are coming.16:02
johnsomI think we are mostly up to date on that, but thought I would highlight the thread16:02
rm_workI'm just working away... nothing really to mention specifically... any other announcements?16:03
johnsomI think the gate issues are now fixed with the requirements upper-constraints file updated this morning.16:03
colin-o/16:03
johnsomAlso, if you haven't done the user survey:16:04
johnsom#link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006393.html16:04
johnsomPlease raise awareness that Octavia matters to you....16:04
nmagnezio/16:04
rm_workok... moving on then16:04
rm_work#topic Brief progress reports / bugs needing review16:05
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"16:05
*** henriqueof has joined #openstack-lbaas16:05
johnsomI have pivoted from the unset work to help jump start the jobboard work.16:05
johnsomI have posted a patch that creates an "amphorav2" provider driver/controller:16:05
johnsom#link https://review.opendev.org/65968916:06
johnsomI am currently working on the "demo" patch for switching those flows over to using the provider driver models. This will also remove the DB objects from the flows for the jobboard work.16:06
johnsomHowever, I was a genius and picked "listener" as the demo, which is probably the most complicated of them all. A still very WIP patch:16:07
johnsom#link https://review.opendev.org/66023616:07
johnsomI hope I can wrap that up today. I think there is a follow on patch for octavia-lib to add the project ID to the objects. I'm pretty sure the vmware NSX driver needs that as well.16:08
cgoncalvesit will be a breeze reviewing these jobboard patches16:08
ataraday_I mostly rebased my changes on johnsom's "amphorav2" provider driver/controller and looked at the refactor example - started modifying the patches that I already have on this topic.16:08
rm_workI am working on one of the items I signed up for this cycle, multi-vip! The strategy is to add "additional_vips" as a list of subnet_id (+ optionally ip_address) that will be added to the VIP port. This would allow ipv6+ipv4 on the same LB, as an example. Maybe take a look and give feedback now if you don't like the way it's set up on the user-facing side: https://review.opendev.org/#/c/660239/16:08
rm_work#link https://review.opendev.org/#/c/66023916:08
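As a rough sketch of what the user-facing side described above might look like - the additional_vips field name and shape follow rm_work's description, and the patch is still under review, so treat this as an assumption rather than the final API; $OCTAVIA_ENDPOINT and $OS_TOKEN are placeholders:

```bash
# Hypothetical create request with an extra (e.g. IPv6) VIP subnet; the
# additional_vips shape may change while https://review.opendev.org/660239
# is under review.
curl -s -X POST "$OCTAVIA_ENDPOINT/v2.0/lbaas/loadbalancers" \
  -H "X-Auth-Token: $OS_TOKEN" -H "Content-Type: application/json" \
  -d '{
        "loadbalancer": {
          "name": "dual-stack-lb",
          "vip_subnet_id": "<ipv4-subnet-id>",
          "additional_vips": [{"subnet_id": "<ipv6-subnet-id>"}]
        }
      }'
```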
johnsomataraday_ Feedback is welcome. Let me know if what I'm doing makes sense, etc.16:08
ataraday_johnsom, all seems pretty good, thanks a lot for this huge piece of work!16:09
johnsomataraday_ Also shame me if I end up doing something you have already posted. grin I'm kind of just running with this.16:09
rm_workI still have a bit of work on the backend / plugging side of things16:09
cgoncalvesI pushed to Gerrit a patch I started in November that intends to implement VIP ACL API. I just rebased it before pushing, still much WIP. listener POST works, PUT doesn't16:09
cgoncalves#link https://review.opendev.org/#/q/topic:vip-acl16:09
*** tesseract has quit IRC16:10
johnsomI also want to highlight a critical patch for Octavia dashboard:16:10
ataraday_also this small change ready for review https://review.opendev.org/#/c/659538/ - but it is based on johnsom's change anyway16:10
johnsom#link https://review.opendev.org/66076816:10
johnsombackport to stein here:16:10
johnsom#link https://review.opendev.org/66076916:10
rm_workso much stuff in progress \o/16:10
johnsomWe missed that it was trying to access flavor profiles which only admins can do.16:11
rm_workyeah, i had a feeling that might have happened16:12
johnsomSince lxkong isn't likely here I will also highlight his autoscaling demo using heat and octavia: https://youtu.be/dXsGnbr7DfM16:13
johnsomHe posted it to the openstack discuss mailing list16:13
xgermanWoot!!16:14
rm_workI feel like we're getting into open discussion-ish16:14
johnsomoops, auto healing, not auto scaling16:14
johnsomSorry, just thought it was an update from another team member.16:14
rm_workyeah that's true16:14
*** tesseract has joined #openstack-lbaas16:14
*** tesseract has quit IRC16:14
rm_worki am just trying to move things along because my vision is starting to get sparkly and i'm looking forward to sleep :D16:15
cgoncalvesnice! I'll watch it after the meeting for sure16:15
rm_workany other progress reports?16:15
johnsomlol, ok, I'm done16:15
rm_work#topic Open Discussion16:16
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"16:16
rm_workanything else folks want to discuss today? we haven't really had any specific agenda items in a while16:16
rm_worknot sure if there is anything pressing16:16
ataraday_about jobboard redis and zookeeper..16:17
johnsomI don't have any other topics today.16:17
ataraday_I put a comment on the 9th patch set on this change https://review.opendev.org/#/c/647406/916:18
ataraday_if anyone interested could take a look16:18
*** Vorrtex has quit IRC16:18
*** rpittau is now known as rpittau|afk16:18
johnsomAh, yes. I think that should be ok to move the loop into the flow using the "retry" flow logic16:19
rm_workhopefully yeah16:19
johnsom#link https://docs.openstack.org/taskflow/latest/user/atoms.html#retry16:20
ataraday_OK, then I try to do this16:21
rm_workA little bit of a meta topic -- I think we should nominate someone to update our meeting wiki -- I think johnsom was doing it before when he ran the meetings, but I basically forgot it existed until now. Any volunteers for meeting scribe?16:21
johnsomProbably just need to move the loop out to a retry "times" element.16:21
rm_work#link https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda16:21
* johnsom takes a step back. Needs a break....16:22
rm_workthe minutes are automatically created, so this is actually like ... a pre-scribe (because the idea would be to put the next meeting up ahead of time so people can add topics)16:22
rm_workthough also, they need to be added to here maybe? https://wiki.openstack.org/wiki/Octavia/Meeting_Minutes16:22
rm_workwhich is just a matter of copy/pasting a link once a week after the meeting16:23
johnsomYeah, the two steps I had been doing is: create agenda, then after post the links16:23
rm_workany takers? ;)16:24
rm_workcgoncalves: have I delegated enough to you yet?16:24
eanderssono/16:24
* cgoncalves looks away16:24
rm_workeandersson: a volunteer? :D16:24
eanderssonlol16:24
cgoncalvesoh, there! eandersson :)16:24
rm_workaha! he's volunteered!16:24
johnsomI don't know if anyone cares about the links or if they are fine just using the eavesdrop page16:24
eanderssonunfortunately just meant to highlight that I showed up lol16:24
johnsomeandersson Perfect, thanks for volunteering!16:25
eandersson=]]16:25
*** sapd1_x has quit IRC16:25
cgoncalvesrm_work, I can be eandersson's backup16:25
rm_workalright, that's all I had, eandersson will be our agenda pre-scribe moving forward ^_^16:25
cgoncalvesI'll be on PTO next Wednesday though16:26
rm_workhonestly, you could probably do like a month ahead at once16:26
rm_workpeople can pick the right date hopefully for adding their topics16:26
rm_workit's just copy/pasting and changing a few digits16:26
rm_workand maybe we let the minutes page die, and refer people to the eavesdrop index16:27
cgoncalvessounds reasonable to me16:27
*** lemko has quit IRC16:28
rm_workok, any other topics for today? we might be able to get out of here early ;)16:28
rm_worklooks like maybe we're done?16:29
rm_workalright, thanks for the meeting folks, see you next week o/16:30
rm_work#endmeeting16:30
*** openstack changes topic to "Discussions for OpenStack Octavia | Train PTG etherpad: https://etherpad.openstack.org/p/octavia-train-ptg"16:30
openstackMeeting ended Wed May 22 16:30:25 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:30
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-05-22-16.00.html16:30
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-05-22-16.00.txt16:30
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-05-22-16.00.log.html16:30
* rm_work passes out16:30
ataraday_bye16:30
*** Vorrtex has joined #openstack-lbaas16:35
*** ataraday_ has quit IRC16:35
openstackgerritAdam Harwell proposed openstack/octavia master: WIP: Allow multiple VIPs per LB  https://review.opendev.org/66023916:38
*** ccamposr__ has joined #openstack-lbaas16:52
*** Swami has joined #openstack-lbaas16:53
*** ccamposr has quit IRC16:53
*** gthiemonge has quit IRC17:00
*** ricolin has quit IRC17:13
kklimondajohnsom: wow, that was fast ;)17:19
johnsomkklimonda That was a critical bug.  Thanks for reporting it to us.  Glad I could help.17:20
*** luksky has joined #openstack-lbaas17:23
openstackgerritMerged openstack/neutron-lbaas-dashboard stable/stein: Imported Translations from Zanata  https://review.opendev.org/65662017:28
*** Swami has quit IRC17:35
*** yboaron_ has quit IRC18:00
*** yboaron_ has joined #openstack-lbaas18:01
*** ccamposr__ has quit IRC18:22
*** ganso has joined #openstack-lbaas18:38
gansohello folks. I just deployed octavia using https://docs.openstack.org/devstack/latest/guides/devstack-with-lbaas-v2.html but with horizon enabled. After deployment is complete, I don't see the load balancers tab in Horizon, am I missing a configuration step? Could someone please assist?18:39
johnsomganso You un-commented the octavia-dashboard section in the plugin?18:47
gansojohnsom: oh I forgot about # enable_plugin octavia-dashboard https://git.openstack.org/openstack/octavia-dashboard.git18:48
gansojohnsom: thanks!18:48
johnsomganso Also please note the bug that was fixed this morning: https://review.opendev.org/#/c/660768/18:52
gansojohnsom: thanks! I am deploying octavia to verify a slightly similar problem18:54
gansojohnsom: I suppose that bug above affects only when load balancers are *created*, right?18:55
gansojohnsom: the problem I am facing is that users with just the member role on their project cannot load the load balancers tab or the overview page, nor create tenant networks18:58
*** ramishra has joined #openstack-lbaas19:02
johnsomganso: it would impact some details pages as well.19:18
gansojohnsom: hmmm ok, seems it would be best to wait for it to merge first19:19
gansojohnsom: thanks!19:19
johnsomganso I can't speak to an issue creating tenant networks as that is a neutron feature and not Octavia.19:20
johnsomganso However, a role issue for load balancers might be a mis-understanding of the advanced RBAC we have in Octavia.  See this page for more information: https://docs.openstack.org/octavia/latest/configuration/policy.html19:21
johnsomganso They may have the advanced RBAC enabled, but have not created the right role / group configurations.  They may also have this disabled and be running the "admin_or_owner" policy file.19:22
*** sfilatov has joined #openstack-lbaas19:23
gansojohnsom: thanks for the link. This closely matches what I am experiencing. A regular user without any octavia roles fails to load the load balancers tab and overview page, and cannot create tenant networks. Once the role "load-balancer_member" is added to the same user, everything is fixed19:23
gansojohnsom: but the way I see it, the user experience is not very good when the user doesn't have that role, so I am trying to reproduce this bad user experience problem and work on improving it if it is as bad as my customer says it is19:24
johnsomganso If you want to disable the advanced RBAC you can install the admin_or_owner-policy.json located here: https://github.com/openstack/octavia/tree/master/etc/policy19:24
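Roughly, the two options discussed here look like this - a sketch, assuming the load-balancer_member role already exists in keystone and that Octavia reads its policy file from /etc/octavia (check [oslo_policy]/policy_file in octavia.conf):

```bash
# Option 1: keep the advanced RBAC and grant the user the member role
openstack role add --user <user> --project <project> load-balancer_member

# Option 2: fall back to admin-or-owner behaviour by installing the sample
# policy file from the Octavia tree, then restart the Octavia API
cp etc/policy/admin_or_owner-policy.json /etc/octavia/policy.json
```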
johnsomganso Cool, thank you for that. We are a small team and appreciate any help we can get.19:25
gansojohnsom: that could be a solution, but I would need to double check if my customer has other users using the advanced roles (I assume yes)19:25
*** ramishra has quit IRC19:27
sfilatovHi! I got an issue with octavia: LB stops working(fip assigned on VIP is not accessible) after l3-neutron-agent full sync is triggered(for example after RMQ outage). It stops working because iptables rule in -A neutron-l3-agent-PREROUTING -d <fip> -j DNAT --to-destination <vip> is removed during full sync. Has anyone faced this issue or a similar one?19:28
johnsomsfilatov Not that I know of. That sounds like a pretty big bug in neutron if iptables rules are lost.19:29
johnsomsfilatov Have you inquired in #openstack-neutron?19:30
sfilatovnot yet. I'll try openstack-neutron as well. Thx!19:31
gansojohnsom: btw I was already stacking while we were talking, and devstack failed with this error while configuring octavia-dashboard: http://paste.openstack.org/show/751954/19:31
gansojohnsom: it looks like octavia-dashboard doesn't work with Python3. Is that correct?19:32
johnsomUgh, yeah, it's a problem with python3. Horizon has the same issue.  It's actually the nodejs manage command that is broken.19:32
gansojohnsom: ok, thanks! Will make sure to not enable python3 on next attempt19:33
johnsomganso Or hack the file to have #!/usr/bin/env python3 instead of python19:34
johnsomI hit that with the horizon manage.py this morning when I was fixing that other bug.19:34
gansojohnsom: hmmm, will try that =) thanks!19:35
johnsomI might be able to fix our version actually. I should look at that devstack script19:35
johnsomganso It's this line: https://github.com/openstack/horizon/blob/master/manage.py#L119:35
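If anyone needs johnsom's workaround before a proper fix lands, something along these lines should do it - a sketch, assuming a default devstack layout under /opt/stack:

```bash
# Point the manage.py shebang at python3 instead of python
sed -i '1s|^#!/usr/bin/env python$|^#!/usr/bin/env python3|;1s|\^||' /opt/stack/horizon/manage.py
# or simply edit the first line of manage.py to read: #!/usr/bin/env python3
```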
johnsomAh, that is being run a different way in the devstack plugin. Yeah, this should be a quick fix as well19:38
openstackgerritMerged openstack/octavia-dashboard master: Fix 403 issue when creating load balancers  https://review.opendev.org/66076819:38
johnsomIt's this line: https://github.com/openstack/octavia-dashboard/blob/master/devstack/plugin.sh#L1119:39
gansojohnsom:  thanks!19:39
openstackgerritMichael Johnson proposed openstack/octavia-dashboard master: Fix devstack plugin python3 support  https://review.opendev.org/66081319:47
openstackgerritMichael Johnson proposed openstack/octavia-dashboard stable/stein: Fix devstack plugin python3 support  https://review.opendev.org/66081419:48
openstackgerritMichael Johnson proposed openstack/octavia-dashboard stable/rocky: Fix devstack plugin python3 support  https://review.opendev.org/66081619:49
johnsomganso Thanks for letting us know about that. Those should fix that issue with the devstack plugin19:49
gansojohnsom: thank you for addressing the issues =)19:55
johnsomYeah, cool, those work just fine.20:07
johnsom2019-05-22 20:02:10.479 lib/horizon:configure_horizon:81  /usr/bin/python3.6 manage.py compilemessages20:07
johnsomHa, helps if I copy the right line....20:08
johnsom2019-05-22 20:03:20.101 | /opt/stack/octavia-dashboard/devstack/plugin.sh:octavia_dashboard_configure:11  /usr/bin/python3.6 ../manage.py compilemessages20:08
*** pcaruana has quit IRC20:20
colin-cgoncalves: that did help, thanks again20:20
*** Vorrtex has quit IRC20:24
cgoncalvescolin-, cool! happy to hear that20:46
openstackgerritMichael Johnson proposed openstack/octavia master: Convert listener flows to use provider models  https://review.opendev.org/66023620:58
openstackgerritMichael Johnson proposed openstack/octavia master: Create Amphora V2 provider driver  https://review.opendev.org/65968921:00
openstackgerritMichael Johnson proposed openstack/octavia master: Convert listener flows to use provider models  https://review.opendev.org/66023621:00
*** sfilatov has quit IRC21:05
*** boden has quit IRC21:19
*** henriqueof has quit IRC21:36
*** ut2k3 has joined #openstack-lbaas22:15
ut2k3Hi johnsom, maybe you remember me and my problem with the empty amphora table. Maybe you can help me out with a follow-up problem. The LBs are up, but not reachable via their FloatingIPs (which were working before) - could it be that OS / Octavia tries to point them to the wrong internal IP/port?22:16
ut2k3Just for my Understanding, do the amphora need to have the same IP as the vip_address?22:17
johnsomut2k3 We were just talking about a neutron bug earlier: https://bugs.launchpad.net/neutron/+bug/182606622:17
openstackLaunchpad bug 1826066 in neutron "Iptables rules for unbound ports removed during agent sync" [Undecided,Incomplete]22:17
johnsomut2k3 I suspect that is your problem22:17
johnsomut2k3 Another user was saying that a rabbit outage caused this "agent full sync" and wiped out his floating IPs in neutron's iptables.22:18
johnsomQuoting the other user:22:19
johnsomLB stops working(fip assigned on VIP is not accessible) after l3-neutron-agent full sync is triggered(for example after RMQ outage). It stops working because iptables rule in -A neutron-l3-agent-PREROUTING -d <fip> -j DNAT --to-destination <vip> is removed during full sync.22:19
ut2k3In my case the amphora table was empty and you helped me recreating them. So in theory detaching/attaching of floating IP should help or not?22:20
johnsomut2k3 Can you check if those iptables rules are present for neutron?22:20
johnsomut2k3 Probably. I don't think it is related to the amphora table being empty, we repaired those properly, so they still have the same VIP address. This is why I suspect it is the neutron bug.22:20
ut2k3Ok, so just to confirm my understanding: the amphora doesn't need to have the vip_address? Because the vip_address != the private IP address of the currently running amphora22:23
johnsomCorrect, each amphora has two "ports", one is real, one is a neutron "Allowed Address Pairs" port that is fake. The AAP port has the VIP, the base port has a different IP.22:24
johnsomA server list will only show the base port22:24
johnsomBut, if you do an openstack port show on that base port ID, you will see at the top the AAP configuration with the VIP on it.22:25
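A minimal sketch of checking that from the CLI:

```bash
# Find the amphora's base port, then inspect it; the VIP appears in the
# allowed_address_pairs field rather than as a fixed IP of its own.
openstack port list --server <amphora-server-id>
openstack port show <base-port-id> -c fixed_ips -c allowed_address_pairs
```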
ut2k3Ok let me check therefore `iptables -L neutron-l3-agent-PREROUTING`22:26
johnsomThis is neutron's complicated way of having two IP addresses on a single port.22:27
*** goldyfruit_ has joined #openstack-lbaas22:28
*** goldyfruit has quit IRC22:30
ut2k3The bug ticket you've shared is something related to neutron-vpn-agent, I am not using that.22:32
ut2k3Other floating IPs are working.22:32
johnsomRight, the same happens with the L3 agent you are using for floating IPs22:33
*** goldyfruit_ has quit IRC22:33
*** pnull has joined #openstack-lbaas22:35
ut2k3Ok, do you know how to solve or check that in detail? That's not on the host, right? Is it on the router, or where should that iptables rule exist?22:36
johnsomThey would be in the router namespace where your l3-agents are installed22:37
openstackgerritMichael Johnson proposed openstack/octavia master: Convert listener flows to use provider models  https://review.opendev.org/66023622:39
ut2k3When I do an `ip netns exec qrouter-6253b80b-de18-4081-8ddf-45db94cef43a iptables -L neutron-l3-agent-PREROUTING -t nat` on a network node22:42
*** goldyfruit has joined #openstack-lbaas22:42
ut2k3It shows a line that looks correct to me: `DNAT       all  --  anywhere             x.x.x.x        to:10.123.0.8` where x.x.x.x is my floating IP22:42
ut2k3> | acae625f-01ff-4bfc-9b74-df0df3f59be6 | cluster-production-k8s-de-kar--4fo2nxngz7a6-api_lb-7erqpobqhuh3-loadbalancer-try3py43mbhm  | 0e790846485a4db6b0f9ab6ec958c1ed | 10.123.0.9  | ACTIVE              | amphora  |22:42
johnsomhmmm, ok, that should be it22:42
ut2k3That's the corresponding line from `openstack loadbalancer list`22:43
johnsomoh, that is not the right DNAT line then22:43
johnsomWell, I'm trying to remember how floats work with AAP ports.22:43
johnsomis .8 the base port?22:43
ut2k310.123.0.8 == vip_address22:45
ut2k3Ah sorry pasted you the wrong line22:45
ut2k3| 9e950a88-af30-4eeb-8fb6-d8c20388db17 | ac8042c20554011e9a047fa163ef65c4                                                           | 0e790846485a4db6b0f9ab6ec958c1ed | 10.123.0.8  | ACTIVE              | amphora  |22:45
ut2k3http://paste.openstack.org/show/751959/22:45
johnsomLet me build an LB with a FIP so I can compare22:45
ut2k3Ok thanks22:47
johnsomSo it should look like this:22:53
johnsomhttps://www.irccloud.com/pastebin/98K1bHZF/22:53
johnsomThe DNAT should point to the VIP IP22:54
johnsomhttps://www.irccloud.com/pastebin/AVCdlDlv/22:55
ut2k3http://paste.openstack.org/show/751960/22:59
johnsomThat looks ok22:59
ut2k3From a host, I am able to ping the Amphora Instance IP itself, but not the vip_address or request something via the port that should be open.22:59
johnsomYeah, you should not be able to ping the VIP23:00
johnsomBut it's not responding to queries to the VIP on the port that it is load balancing?23:00
ut2k3Interestingly, the status of the port is "down", could that be the reason?23:02
johnsomNo, the VIP port is a fake neutron port, it is always down23:02
johnsomThe base port must be up however23:02
ut2k3OK23:03
ut2k3So thats normal: | 10389c9a-3250-41a3-a398-b3027c0ad1c8 | octavia-lb-9e950a88-af30-4eeb-8fb6-d8c20388db17                                                                    | fa:16:3e:a3:0e:1e | ip_address='10.123.0.8', subnet_id='843f80f1-cf5c-4b16-a04b-b234bb6b4e88'     | DOWN   |23:03
johnsom| 9c0ae11d-6124-4136-bc04-24177b176126 | octavia-lb-e64e658e-a0ac-46f6-8634-440d5f716212      | fa:16:3e:0d:53:14 | ip_address='10.0.0.4', subnet_id='254c0e46-e4ab-405e-abf3-ca62856828a4'                             | DOWN   ||23:03
johnsomyep23:03
johnsomSo what do you get if you do a status tree call on the load balancer?23:03
ut2k3Whats the command for that?23:04
johnsomopenstack loadbalancer status show <lb ID>23:05
ut2k3http://paste.openstack.org/show/751962/23:06
johnsomSo the LB is up and the members are healthy assuming your health manager process is running23:08
ut2k3Yep, requesting 10.123.0.23 in that case on Port 6443 works fine23:09
ut2k3(From an instance)23:10
johnsomHmmm, I'm running out of ideas as it looks like octavia is healthy.23:11
johnsomYou did a load balancer failover to repair these right? Not just an amphora failover?23:11
ut2k3Yep I did a `openstack loadbalancer failover...`23:12
johnsomDid you try detaching the FIP and re-attaching it?23:12
ut2k3Yes, in our case even the vip_address is not accessible from the private net23:13
ut2k3So I think the problem is a bit deeper, not only the floating ip23:13
johnsomWell, I would still like to try the FIP detach as this still could be a neutron issue. Especially if you are using DVR23:14
ut2k3Ok, I'm gonna detach the FIP of the LB with vip_address 10.123.0.9 for a test23:15
ut2k3So it's gone, let me try to do a `curl -v https://10.123.0.9:6443`23:16
ut2k3Nope still not available: `curl: (7) Failed to connect to 10.123.0.9 port 6443: No route to host`23:16
johnsomNo route to host implies you are not connected to the 10.123.0.9 subnet.23:17
johnsomLet's re-attach the FIP and see if it works23:17
ut2k3http://paste.openstack.org/show/751964/23:19
ut2k3> (10.123.0.9) at <incomplete> on eth023:19
ut2k3Gonna attach it back23:19
*** rcernin has joined #openstack-lbaas23:20
johnsomJust to double check, is fa:16:3e:f6:5c:49  the mac of the base port for the .23 port?23:21
*** goldyfruit has quit IRC23:21
johnsomAh, nevermind, I got confused because the pastes have been from different load balancers23:22
ut2k3Sorry for jumping here.23:23
ut2k3fa:16:3e:f6:5c:49 is the MAC address of an Kubernetes instance23:23
johnsomHow is the FIP after the attach?23:23
ut2k3The 10.123.0.9 == vip_address23:23
ut2k3The FIP is still not available after attaching it back23:23
johnsomBummer. Well, I guess the next thing I would try is another loadbalancer failover23:24
johnsomForce it to rebuild the ports and security groups23:24
ut2k3The LB is quite fresh, I did a failover 2-3 times already.23:25
ut2k3Since it was not reachable23:25
johnsomwow, ok, then this is super odd as the status shows it as up and healthy.23:25
johnsomI guess you can check the security groups just to make sure the port is open on the VIP base port.23:26
johnsomAfter that I would jump inside the amphora with ssh, go into the netns and do a tcpdump to see if the packets are getting to the instance.23:26
johnsomNote, by default, from inside the amp, you cannot directly connect to the members without bringing up the lo interface. We don't bring it up as we don't need it.23:27
johnsomThere should be a security group called lb-4d3df5bc-240b-4dc5-a03a-5c6e089f9add where the uuid is the load balancer ID.23:29
johnsomIt should have your listener port open, plus 1025 which is used for sync on the base port.23:30
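Roughly, those two checks could look like this - a sketch: the SSH key path, the login user and the eth1 interface name are assumptions that vary by deployment and amphora image:

```bash
# 1. Confirm the per-LB security group opens the listener port (plus 1025):
openstack security group rule list lb-<loadbalancer-id>

# 2. SSH to the amphora over the lb-mgmt-net and watch for traffic to the
#    VIP inside the amphora-haproxy namespace (key path, user and interface
#    name are assumptions - adjust for your deployment):
ssh -i /etc/octavia/.ssh/octavia_ssh_key ubuntu@<amphora-lb-mgmt-ip>
sudo ip netns exec amphora-haproxy tcpdump -nn -i eth1 port 6443
```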
ut2k3In my case the `octavia-lb-acae625f-01ff-4bfc-9b74-df0df3f59be6` port == vip_address => doesn't have a security group23:31
ut2k3But the corresponding VRRP port has a security group.23:32
johnsomRight, it is only on the base port, the VIP port is a fake neutron port23:32
ut2k3The `octavia-lb-vrrp` port also contains the vip_address as an allowed address pair23:33
johnsomHmmm, actually, my vip port does have an SG on it23:33
johnsomYeah, on mine, both ports have the same SG on it23:34
johnsomMaybe that is the issue, maybe the VIP port lost its SG somehow23:34
johnsomI would try applying that SG from the octavia-lb-vrrp port to the VIP (AAP) port. Just make sure you have the right ports and don't cross the LB SGs23:35
johnsomDoesn't seem to make a difference for me. If I delete that SG I can still connect. As I remembered, only the base port SG matters to neutron; the AAP port's doesn't23:37
ut2k3Yeah, and since `octavia-lb...` is in another namespace I can't attach the SG to it anyway.23:39
ut2k3http://paste.openstack.org/show/751967/23:40
ut2k3And thats the vrrp http://paste.openstack.org/show/751968/23:41
johnsomThose look fine to me23:46
ut2k3So, without checking the documentation, my understanding is that the VRRP IP is configured as a secondary IP on the eth interface within the amphora, right?23:49
ut2k3Or to be more precise: the thing I don't understand is how I can access the VRRP IP but NOT the VIP. IMHO, they should be configured on the same network interface inside the amphora instance. So how can it be that I don't even get an ARP response for the VIP while I can access/ping the VRRP address?23:50
*** goldyfruit has joined #openstack-lbaas23:53
johnsomThey are both on the same interface inside the "amphora-haproxy" network namespace inside the amphora.23:55
johnsomThe VIP address is the secondary IP on the interface23:55
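So from inside the amphora, a check along these lines should show both addresses on the one interface - a sketch; the interface name may differ:

```bash
# Run inside the amphora: the base (VRRP) address and the VIP should both
# show up on the interface in the amphora-haproxy namespace.
sudo ip netns exec amphora-haproxy ip addr show
```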
johnsomI need to sign off for the day, it's already a few hours past when I planned to stop. I can continue to help you tomorrow or maybe some of the other cores will be on and be of assistance.23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!