Thursday, 2018-03-22

*** Swami has quit IRC00:02
*** ipsecguy_ has quit IRC00:21
*** ipsecguy has joined #openstack-lbaas00:22
*** fnaval has quit IRC00:36
*** ipsecguy has quit IRC00:42
*** fnaval has joined #openstack-lbaas00:43
*** yamamoto has joined #openstack-lbaas00:52
*** yamamoto has quit IRC00:57
<rm_work> ok, just had the first complaint from a user about the 50s timeout  [01:08]
<rm_work> i'm gonna do the timeout expose patch tomorrow  [01:08]
*** ipsecguy has joined #openstack-lbaas01:09
<rm_work> what did we say? listener level?  [01:09]
<rm_work> right now we put it in "defaults" section, which is for backend and listen  [01:09]
<rm_work> oh  [01:10]
<rm_work> but, derp, each listener is one haproxy config so yeah it's fine in defaults :P  [01:10]
*** harlowja has quit IRC01:31
<johnsom> rm_work: please put it in the listener both on the API and the config. I eventually want to fix the process per listener and there are some strange interactions with the defaults if I remember  [01:40]
<rm_work> right  [01:40]
<johnsom> Otherwise, yes please!  Grin  [01:41]
<rm_work> err, the "listener"?  [01:41]
<rm_work> IE, the frontend?  [01:41]
<johnsom> Yep  [01:41]
<rm_work> which, is only for client timeout  [01:41]
<rm_work> server timeout is on backend  [01:41]
<rm_work> usually it's easier to just throw them both in the defaults section  [01:41]
<rm_work> but  [01:41]
<rm_work> i can poke at it  [01:41]
<johnsom> Oh really, hmm  [01:41]
<johnsom> Ok, i can always review comment too when I have the docs open  [01:41]
<johnsom> Mobile at the moment  [01:42]
<rm_work> k  [01:42]
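For context on the placement question above: in the per-listener haproxy configuration Octavia renders, `timeout client` applies to the frontend side, `timeout server` to the backend side, and both can sit in the `defaults` section when one haproxy process serves one listener. A minimal, hand-written sketch in Python (not the actual Octavia Jinja template; the field names and default values are illustrative, matching the 50 s default the user complained about):

    # Illustrative only -- the real template lives in Octavia's Jinja config
    # module.  Values are in milliseconds, haproxy's default time unit.
    def render_defaults(timeout_client_data=50000,
                        timeout_member_data=50000,
                        timeout_member_connect=5000):
        """Render a 'defaults' block holding both client- and server-side timeouts."""
        return "\n".join([
            "defaults",
            "    log global",
            "    option redispatch",
            "    timeout connect %d" % timeout_member_connect,  # TCP connect to the member
            "    timeout client  %d" % timeout_client_data,     # client-side inactivity
            "    timeout server  %d" % timeout_member_data,     # member-side inactivity
        ])

    if __name__ == "__main__":
        print(render_defaults())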
*** yamamoto has joined #openstack-lbaas01:53
*** yamamoto has quit IRC01:59
*** jaff_cheng has joined #openstack-lbaas02:14
*** dayou has quit IRC02:18
*** yamamoto has joined #openstack-lbaas02:37
*** jaff_cheng has quit IRC02:42
*** yamamoto has quit IRC02:46
*** yamamoto has joined #openstack-lbaas02:56
*** annp has joined #openstack-lbaas03:14
*** AlexeyAbashkin has joined #openstack-lbaas03:17
*** AlexeyAbashkin has quit IRC03:21
*** harlowja has joined #openstack-lbaas03:32
*** harlowja has quit IRC03:54
*** isssp has joined #openstack-lbaas05:03
*** burned has quit IRC05:06
*** slashnick has joined #openstack-lbaas05:07
*** slashnick has left #openstack-lbaas05:09
*** isssp has quit IRC05:09
*** isssp has joined #openstack-lbaas05:12
*** imacdonn has quit IRC05:15
*** imacdonn has joined #openstack-lbaas05:15
*** rcernin has quit IRC05:34
*** rcernin has joined #openstack-lbaas05:46
*** links has joined #openstack-lbaas05:53
*** rcernin has quit IRC06:06
*** rcernin has joined #openstack-lbaas06:08
*** AlexeyAbashkin has joined #openstack-lbaas06:16
*** AlexeyAbashkin has quit IRC06:21
*** threestrands has quit IRC06:25
*** kobis has joined #openstack-lbaas06:34
*** kobis has quit IRC06:40
*** velizarx_ has joined #openstack-lbaas07:00
*** links has quit IRC07:15
*** links has joined #openstack-lbaas07:23
*** velizarx_ has quit IRC07:23
*** kobis has joined #openstack-lbaas07:24
<openstackgerrit> OpenStack Proposal Bot proposed openstack/octavia-dashboard master: Imported Translations from Zanata  https://review.openstack.org/555177  [07:24]
*** kobis has quit IRC07:25
*** isssp has quit IRC07:25
*** bbzhao has quit IRC07:25
*** ptoohill has quit IRC07:25
*** kobis has joined #openstack-lbaas07:26
*** bbzhao has joined #openstack-lbaas07:26
*** isssp has joined #openstack-lbaas07:26
*** ptoohill has joined #openstack-lbaas07:28
*** rcernin has quit IRC07:31
*** pcaruana has joined #openstack-lbaas07:42
*** pcaruana has quit IRC07:44
*** velizarx has joined #openstack-lbaas07:44
*** pcaruana has joined #openstack-lbaas07:44
*** pcaruana has quit IRC07:45
*** pcaruana has joined #openstack-lbaas07:45
*** pcaruana has quit IRC07:47
*** pcaruana has joined #openstack-lbaas07:47
*** pcaruana has quit IRC07:48
*** pcaruana has joined #openstack-lbaas07:48
*** pcaruana has quit IRC07:50
*** pcaruana has joined #openstack-lbaas07:50
*** pcaruana has quit IRC07:51
*** pcaruana has joined #openstack-lbaas07:51
*** pcaruana has quit IRC07:53
*** pcaruana has joined #openstack-lbaas07:53
*** pcaruana has quit IRC07:54
*** pcaruana has joined #openstack-lbaas07:55
*** AlexeyAbashkin has joined #openstack-lbaas07:56
*** pcaruana has quit IRC07:56
*** ispp has joined #openstack-lbaas07:58
*** isssp has quit IRC08:00
*** shananigans has quit IRC08:05
*** pcaruana has joined #openstack-lbaas08:06
*** pcaruana has quit IRC08:07
*** pcaruana has joined #openstack-lbaas08:08
*** pcaruana has quit IRC08:09
*** shananigans has joined #openstack-lbaas08:09
*** pcaruana has joined #openstack-lbaas08:10
*** pcaruana has quit IRC08:10
*** pcaruana has joined #openstack-lbaas08:11
*** tesseract has joined #openstack-lbaas08:11
*** pcaruana has quit IRC08:12
*** pcaruana has joined #openstack-lbaas08:13
*** pcaruana has quit IRC08:15
*** pcaruana has joined #openstack-lbaas08:15
*** pcaruana has quit IRC08:16
*** AlexeyAbashkin has quit IRC08:17
*** pcaruana has joined #openstack-lbaas08:18
*** AlexeyAbashkin has joined #openstack-lbaas08:18
*** pcaruana has quit IRC08:20
*** pcaruana has joined #openstack-lbaas08:20
*** pcaruana has quit IRC08:21
*** pcaruana has joined #openstack-lbaas08:21
*** pcaruana has quit IRC08:22
<velizarx> Hi, all. I have a question about support for the active-active topology. Could you tell me what priority this blueprint (https://blueprints.launchpad.net/octavia/+spec/l3-active-active) has on your roadmap? What tasks currently have a higher priority? I ask because I am researching LBaaS technologies for our virtual private cloud, and one of the main killer features I see is the active-active topology.  [08:24]
<velizarx> I want to understand when you are planning to add this functionality.  [08:26]
*** pcaruana has joined #openstack-lbaas08:29
*** pcaruana has quit IRC08:30
*** pcaruana has joined #openstack-lbaas08:36
*** pcaruana has quit IRC08:37
*** pcaruana has joined #openstack-lbaas08:38
*** pcaruana has quit IRC08:39
*** pcaruana has joined #openstack-lbaas08:40
*** pcaruana has quit IRC08:40
*** pcaruana has joined #openstack-lbaas08:41
*** pcaruana has quit IRC08:41
*** sapd has quit IRC08:44
*** sapd has joined #openstack-lbaas08:45
*** salmankhan has joined #openstack-lbaas08:52
*** pcaruana has joined #openstack-lbaas09:02
*** pcaruana has quit IRC09:03
*** pcaruana has joined #openstack-lbaas09:03
*** pcaruana has quit IRC09:05
*** dayou has joined #openstack-lbaas09:38
*** dayou has quit IRC09:44
*** nmagnezi_ has quit IRC09:46
*** dayou has joined #openstack-lbaas09:53
*** velizarx_ has joined #openstack-lbaas09:56
*** velizarx_ has quit IRC10:00
*** nmagnezi has joined #openstack-lbaas10:12
<toker_> Hi again everyone, I'm still struggling to get the octavia-dashboard to work with horizon on OSP 12 (pike). I get the menu, but when clicking it it just "reloads" the page in a constant loop.  [10:22]
*** nmagnezi has quit IRC10:30
*** nmagnezi has joined #openstack-lbaas10:30
*** salmankhan has quit IRC10:52
*** salmankhan has joined #openstack-lbaas10:57
*** AlexeyAbashkin has quit IRC11:00
*** AlexeyAbashkin has joined #openstack-lbaas11:00
*** logan- has quit IRC11:03
*** logan- has joined #openstack-lbaas11:03
<dayou> toker_, check this, http://logs.openstack.org/91/544591/5/check/build-openstack-sphinx-docs/cad913d/html/readme.html#enabling-octavia-dashboard-and-neutron-lbaas-dashboard  [11:25]
<dayou> I don't know whether it works with pike or not, but it should work with queens  [11:25]
<toker_> hm, ok, so now I get to this point, clicking loadbalancer in the gui gives me an exception in horizon, SDKException: public endpoint for load_balancer service in regionOne region not found  [11:45]
<toker_> which is really weird since "e4e001c5da1847eea1821c63ccde35a1 | regionOne | octavia | load-balancer | True | public | https://xxx:13876 |" gets listed when I do "openstack endpoint list | grep load"  [11:46]
<dayou> That might be related to https://review.openstack.org/#/c/524011/  [11:48]
<dayou> You can revert that commit and give it a try  [11:48]
<dayou> on Pike  [11:48]
<dayou> Somehow octavia-dashboard relies on a newer version of openstacksdk to work  [11:49]
<dayou> which might not be available on Pike  [11:50]
<openstackgerrit> Jacky Hu proposed openstack/octavia-dashboard master: Add package-lock.json  https://review.openstack.org/555270  [11:51]
<toker_> dayou: Hm, I figured it had to do with the type of the endpoint, it seems like octavia-dashboard was requesting "load_balancer" but mine was "load-balancer" (note the - vs _). So I renamed it, but then I got another exception :)  [12:01]
<toker_>     service_type)   File "/usr/lib/python2.7/site-packages/openstack/session.py", line 277, in _get_version_match     for link in version["links"]: KeyError: 'links'  [12:01]
<toker_> I'll try with your suggestion  [12:05]
*** aojea has joined #openstack-lbaas12:18
*** velizarx has quit IRC12:19
<toker_> hm, no luck.  [12:20]
<toker_> still stuck with the 'SDKException: public endpoint for load_balancer service in regionOne region not found'  [12:20]
<toker_> argh, really frustrating :(  [12:30]
<openstackgerrit> Jacky Hu proposed openstack/octavia-dashboard master: Add package-lock.json  https://review.openstack.org/555270  [12:31]
<toker_> how does octavia-dashboard try to list the endpoints ? with which user ?  [12:36]
*** velizarx has joined #openstack-lbaas12:40
<dayou> https://github.com/openstack/octavia-dashboard/blob/master/octavia_dashboard/api/rest/lbaasv2.py#L38-L64  [12:41]
<dayou> It tries to use the current horizon user token to access the loadbalancer api with openstacksdk  [12:41]
<dayou> openstack/load_balancer/load_balancer_service.py:            service_type='load-balancer',  [12:42]
<dayou> So if you have v2-api enabled in your octavia.conf  [12:43]
<toker_> hm, well that makes sense  [12:46]
<toker_> it doesnt make sense that it doesnt find the endpoint though  [12:47]
<toker_> but as a regular user, i cant do "openstack endpoint list"  [12:47]
<toker_> I can only do "openstack catalog list"  [12:47]
<toker_> but thats the default  [12:47]
<dayou> so you tried to patch your installed openstacksdk with the forward compatibility disabled?  [12:48]
<dayou> and openstack loadbalancer list works  [12:48]
<toker_> https://review.openstack.org/#/c/524011/ <- I reversed this in lbaasv2.py  [12:49]
<toker_> only one file  [12:49]
<toker_> yes, openstack loadbalancer list  [12:49]
<toker_> works  [12:49]
<dayou> then restart apache?  [12:49]
<toker_> yes  [12:49]
<toker_> even tried to add broken code just to see that it "actually reloaded and using the modified version", which it does.  [12:53]
*** yamamoto has quit IRC12:54
<toker_> [Thu Mar 22 12:56:57.272783 2018] [:error] [pid 16] InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html [Thu Mar 22 12:56:57.272827 2018] [:error] [pid 16] WARNING:py.warnings:InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: http  [12:57]
<toker_> ^ is shown in the apache error log on horizon when I visit the loadbalancer page  [12:57]
<toker_> is it supposed to be that way ?  [12:57]
*** velizarx has quit IRC13:01
<dayou> It seems the internal services are accessed using https, but with a self-generated certificate  [13:03]
<dayou> pip list | grep openstack  [13:08]
<dayou> What is your output of this command?  [13:08]
<toker_> I have an OSP 12 installation so the horizon is packed in a docker container  [13:12]
<toker_> (which is the one I'm testing against)  [13:12]
*** velizarx has joined #openstack-lbaas13:17
<toker_> https://github.com/openstack/python-openstacksdk/blob/master/openstack/load_balancer/load_balancer_service.py <- where do i define version ?  [13:19]
<toker_> should version somehow be defined for the endpoint ?  [13:19]
<toker_> I cant find that anywhere  [13:20]
<toker_> doesnt make sense though since the openstack cli works  [13:20]
<toker_> I dont think there is anything wrong with the endpoint itself, rather how octavia-dashboard tries to look it up  [13:21]
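For anyone following along, the lookup the dashboard performs through openstacksdk is essentially a service-catalog query made with the user's token, and it fails the same way if the catalog entry's service type, interface, or region does not match what the SDK asks for (note the load_balancer vs load-balancer underscore/hyphen mismatch toker_ found). A rough equivalent using keystoneauth directly, with placeholder auth values:

    # Rough equivalent of the catalog lookup octavia-dashboard needs to succeed.
    # Auth URL, credentials and region below are placeholders.
    from keystoneauth1 import identity, session

    auth = identity.Password(auth_url='https://keystone.example.com/v3',
                             username='demo', password='secret',
                             project_name='demo',
                             user_domain_name='Default',
                             project_domain_name='Default')
    sess = session.Session(auth=auth)

    # Fails if no catalog entry matches all three filters.
    endpoint = sess.get_endpoint(service_type='load-balancer',
                                 interface='public',
                                 region_name='regionOne')
    print(endpoint)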
*** yamamoto has joined #openstack-lbaas13:21
*** fnaval has quit IRC13:29
<dayou> https://github.com/openstack/python-openstacksdk/blob/master/openstack/load_balancer/load_balancer_service.py#L19  [13:35]
<toker_> yes but what i mean is should that be defined on the endpoint or anything ?  [13:35]
*** aojea has quit IRC13:36
<dayou> I am not aware of that, it seems it only constrains what it has received from upstream  [13:37]
<toker_> Ok, well Im not sure how I can debug this further. I would like to see what kind of request octavia-dashboard makes to the SDK and why it can't find my endpoint.  [13:38]
<toker_> Is there a debug log for the sdk somewhere?  [13:39]
<dayou> Maybe debug=True for the get_one_cloud method will work  [13:41]
<dayou> connection.Connection(  [13:42]
<dayou> Or if you reverted, it's the method above  [13:42]
<dayou> People at #openstack-sdks should know more about the openstacksdk itself  [13:44]
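If it helps, the SDK and keystoneauth request/response traffic can usually be surfaced with plain Python logging (newer openstacksdk releases also ship an openstack.enable_logging() helper). A minimal sketch:

    # Turn on debug logging for the HTTP layers openstacksdk sits on top of.
    # This prints every request URL, including the catalog/version lookups.
    import logging
    import sys

    logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)
    for name in ('keystoneauth', 'openstack', 'urllib3'):
        logging.getLogger(name).setLevel(logging.DEBUG)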
*** fnaval has joined #openstack-lbaas13:44
<dayou> https://github.com/openstack/requirements/blob/stable/pike/upper-constraints.txt#L405  [13:45]
<dayou> So in pike it is using 0.9.17 of openstacksdk, and in queens it is 0.11.3  [13:46]
*** aojea has joined #openstack-lbaas13:46
<dayou> I'll see if I can downgrade my openstacksdk to 0.9.17 and see what happens  [13:47]
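A quick way to confirm which openstacksdk a given environment (or the horizon container) actually has installed, without relying on pip being on PATH:

    # Print the installed openstacksdk version from inside the Python
    # environment horizon runs with (works inside the container too).
    import pkg_resources

    print(pkg_resources.get_distribution('openstacksdk').version)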
*** aojea has quit IRC13:50
*** yamamoto has quit IRC13:53
<toker_> request.user.domain_id <- this value is None  [13:57]
<toker_> in the connection to the SDK  [13:57]
<toker_> that doesnt seem right ?  [13:57]
*** yamamoto has joined #openstack-lbaas14:08
*** links has quit IRC14:10
*** yamamoto has quit IRC14:13
*** atoth has joined #openstack-lbaas14:14
*** huseyin has joined #openstack-lbaas14:17
<huseyin> Hi guys. Can anybody help me with the configuration of Octavia? I followed all the instructions in the docs, and when I want to create a load-balancer I get the “Driver error: The request you have made requires authentication. (HTTP 401)” error.  [14:18]
*** huseyin has left #openstack-lbaas14:22
*** huseyin_ has joined #openstack-lbaas14:23
*** yamamoto has joined #openstack-lbaas14:24
<huseyin_> Hi guys. Can anybody help me with the configuration of Octavia? I followed all the instructions in the docs, and when I want to create a load-balancer I get the “Driver error: The request you have made requires authentication. (HTTP 401)” error.  [14:24]
*** huseyin_ has quit IRC14:25
*** huseyin_ has joined #openstack-lbaas14:27
*** yamamoto has quit IRC14:28
*** huseyin_ has quit IRC14:28
<toker_> #openstack-sdks  [14:31]
*** kobis1 has joined #openstack-lbaas14:33
*** kobis1 has quit IRC14:34
*** kobis has quit IRC14:35
<johnsom> toker_ I'm not in the office yet, but your endpoint list should look like:  [14:38]
<johnsom> https://www.irccloud.com/pastebin/cjj1PdiX/  [14:38]
*** yamamoto has joined #openstack-lbaas14:39
<johnsom> domain is not used in most OpenStack deployments and is fine being None  [14:39]
<toker_> what, should it be /load-balancer ?  [14:39]
<toker_> I dont have that  [14:39]
<johnsom> Hmm, in older OpenStack you might have a port.  [14:39]
<toker_> Yes, I only have a port.  [14:40]
<toker_> And only a public addr  [14:40]
<johnsom> You can check this with "openstack --debug loadbalancer list" and see what URL it is using  [14:40]
<toker_> Yes, I mean with the openstack loadbalancer list everything works, thats why I'm saying the endpoint seems to be right.. Just octavia-dashboard and the SDK not getting it :/  [14:41]
<toker_> I'm talking with the sdk people now, they're helping me  [14:41]
<johnsom> huseyin_ I'm not sure if you are using neutron-lbaas or not (hope not), but that error indicates the [service_auth] part of the configuration is missing or incorrect: https://github.com/openstack/octavia/blob/master/etc/octavia.conf#L308 and https://docs.openstack.org/octavia/latest/configuration/configref.html#service-auth  [14:42]
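For the 401 huseyin_ hit: the [service_auth] credentials are what the Octavia controller uses to authenticate against keystone, so a quick sanity check is to try those exact credentials outside of Octavia. A small sketch, with placeholder values standing in for whatever is in octavia.conf:

    # Validate the [service_auth] credentials from octavia.conf by hand.
    # All values below are placeholders -- copy them from your config.
    from keystoneauth1 import identity, session

    auth = identity.Password(auth_url='https://keystone.example.com/v3',
                             username='octavia', password='service-password',
                             project_name='service',
                             user_domain_name='Default',
                             project_domain_name='Default')
    sess = session.Session(auth=auth)

    # If this raises Unauthorized (HTTP 401), the same failure will surface
    # as "Driver error: The request you have made requires authentication".
    print(sess.get_token()[:16], '...')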
*** yamamoto has quit IRC14:43
*** yamamoto has joined #openstack-lbaas14:46
*** yamamoto has quit IRC14:46
<johnsom> toker_ You should be able to curl the endpoint and get a version document:  [14:48]
<johnsom> https://www.irccloud.com/pastebin/PyGDHHWG/  [14:49]
<toker_> yes, I can do that  [14:49]
<johnsom> Ok, so if your URL is good, then the only thing I see wrong is you don't have the other endpoint types. It might be looking for the "internal" endpoint  [14:50]
<toker_> johnsom: well I had all the urls first, but I removed them. The error message from the sdk is pretty verbose about trying to find the public endpoint, "SDKException: public endpoint for load_balancer service in regionOne region not found"  [14:51]
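The earlier KeyError: 'links' fits here too: the SDK does version discovery against the endpoint's root URL and expects a standard OpenStack version document in which each version entry carries a 'links' list. A quick probe, assuming a reachable endpoint URL (the host below is an example, like the redacted one in the pasted endpoint row) and tolerating the self-signed cert seen in the horizon logs:

    # Fetch the root version document the SDK uses for discovery and check
    # that each version entry has the 'links' key it expects.
    import requests

    resp = requests.get('https://xxx:13876/', verify=False, timeout=10)
    for version in resp.json().get('versions', []):
        print(version.get('id'), 'links present:', 'links' in version)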
*** salmankhan has quit IRC14:55
*** Swami has joined #openstack-lbaas14:57
<dayou> toker_  [15:01]
<dayou> I got it working  [15:01]
<dayou> git revert -n baae43b65  [15:01]
<dayou> then sudo pip install openstacksdk==0.9.17  [15:02]
<dayou> ==0.9.19  [15:02]
<dayou> should be  [15:02]
<dayou> so the official requirements in pike is 0.9.17, but it does not work, you need to use 0.9.19  [15:02]
<dayou> https://github.com/openstack/python-openstacksdk/commit/d1b242ee850f286f1fae56021f2edf13cde1eb9c  [15:04]
<dayou> Also you need to cherry pick this  [15:04]
<dayou> to get health monitor tabs working  [15:04]
<toker_> Oh, wait. You say I need a newer SDK to get it working in pike ?  [15:05]
<dayou> If you want it working  [15:06]
<dayou> It took me like only 10 minutes to figure it out after my devstack was up  [15:06]
<toker_> hm ok  [15:06]
<dayou> Go to sleep now, good luck, have fun  [15:07]
<toker_> Oh, thank you very much ! I'll try it out  [15:07]
*** mordred has joined #openstack-lbaas15:08
<dayou> Might not work for you :-D  [15:08]
*** aojea_ has joined #openstack-lbaas15:09
*** aojea_ has quit IRC15:14
<johnsom> Yeah, the queens forward SDK had a lot of changes to how it interacts with the cloud. A number of things were broken late in that version, so this could be one of them.  Give me a few and I will dig through some code and see if I can find out what went wrong.  [15:19]
<johnsom> Hmm, that error is coming from keystoneauth  [15:27]
<rm_work> johnsom: err, how exactly do these timeouts we defined in the PTG etherpad line up with the timeout names in haproxy?  [15:28]
<rm_work> i think i get a couple of them  [15:28]
<rm_work> but not all maybe  [15:28]
<johnsom> You wanted them re-ordered to be consistent  [15:28]
<toker_> Hm ok, talking to mordred in #openstack-sdks, he says he's having the same issue with the 0.9.19 SDK. He's looking into it though.  [15:29]
*** Swami has quit IRC15:29
<johnsom> Ok, cool  [15:29]
<johnsom> rm_work if you have ones you don't know, give me that list  [15:30]
<rm_work> yeah working on it  [15:30]
<velizarx> Hello everybody. Could you comment on my question (http://eavesdrop.openstack.org/irclogs/#openstack-lbaas/#openstack-lbaas.2018-03-22.log.html#t2018-03-22T08:24:18), please?  [15:30]
<toker_> I dont think just updating my SDK to a newer version would help my problem, and that would explain why mordred is having the same problem.  [15:30]
<toker_> although, still not sure why the sdk cant find the loadbalancer service.. but yeah, he's looking into it.  [15:31]
<rm_work> johnsom: http://paste.openstack.org/show/708974/  [15:32]
<rm_work> or just look at the etherpad  [15:32]
<rm_work> it's from there  [15:32]
<rm_work> xgerman_: i responded on https://review.openstack.org/#/c/552632/  [15:37]
<xgerman_> k  [15:37]
*** velizarx has quit IRC15:40
*** pcaruana has joined #openstack-lbaas15:42
<johnsom> velizarx Yes, L3 active/active is currently being worked on by a team. Their goal is a Rocky release.  [15:43]
*** pcaruana has quit IRC15:44
*** yamamoto has joined #openstack-lbaas15:47
<rm_work> johnsom: also, not sure if `timeout_tcp_inspect` is the best either  [15:47]
<rm_work> is it actually a timeout?  [15:47]
<rm_work> i guess technically?  [15:47]
<johnsom> Yes, it is  [15:48]
<rm_work> more like `delay_tcp_inspect`  [15:48]
<johnsom> It times out waiting for additional packets  [15:48]
*** pcaruana has joined #openstack-lbaas15:48
<rm_work> but anyway, are those others correct?  [15:48]
*** salmankhan has joined #openstack-lbaas15:48
<johnsom> timeout is more consistent with the other drivers  [15:48]
<rm_work> k  [15:48]
<rm_work> whatever, it's 5 lines  [15:49]
<rm_work> timeout_client_connect - timeout connect  [15:49]
<rm_work> timeout_client_data - timeout client  [15:49]
<rm_work> timeout_member_connect - ??  [15:49]
<rm_work> timeout_member_data - timeout server  [15:49]
<rm_work> timeout_tcp_inspect - tcp-request inspect-delay?  [15:49]
<johnsom> Yeah, just a sec, looking.  I have like five people pinging me this morning  [15:50]
<rm_work> lol k  [15:50]
<rm_work> yeah i'm working anyway, will adjust according to decisions  [15:50]
<rm_work> no biggie  [15:50]
<rm_work> also, timeouts, milliseconds?  [15:52]
<rm_work> or seconds  [15:52]
<johnsom> I am updating the etherpad version.  [15:52]
<johnsom> I think in haproxy they are all milliseconds, so go with that.  [15:52]
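Pulling the mapping from this exchange together (and noting that later in the day timeout_client_connect is dropped because no clean haproxy counterpart was found), here is a Python summary of the proposed API fields against the haproxy directives they would render to, in milliseconds. This is a sketch of the discussion, not the final merged schema; the default values are illustrative:

    # Proposed listener API fields -> haproxy directives, as discussed above.
    # Values are milliseconds; 0 is treated here as "no timeout" (illustrative).
    TIMEOUT_MAP = {
        'timeout_client_data':    'timeout client',             # frontend inactivity
        'timeout_member_connect': 'timeout connect',            # TCP connect to member
        'timeout_member_data':    'timeout server',             # backend inactivity
        'timeout_tcp_inspect':    'tcp-request inspect-delay',  # content-switch wait
    }

    DEFAULTS_MS = {name: 50000 for name in TIMEOUT_MAP}  # e.g. the 50 s users noticed
    DEFAULTS_MS['timeout_member_connect'] = 5000
    DEFAULTS_MS['timeout_tcp_inspect'] = 0

    for field, directive in TIMEOUT_MAP.items():
        ms = DEFAULTS_MS[field]
        print('%s -> %s %s' % (field, directive, ms if ms else '(unset)'))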
*** yamamoto has quit IRC15:53
<toker_> hm, ok so I got octavia-dashboard to work on the OSP12 setup.. But when trying to create a loadbalancer from the gui, I can push the "create a loadbalancer" button.. But then after filling in all the details, the button is "non clickable"... Anyone got any thoughts ?  [15:54]
<johnsom> toker_ Do not install the policy file on horizon for Octavia dashboard  [15:54]
<johnsom> toker_ It is broken in older versions.  [15:54]
<toker_> ok ok  [15:55]
<johnsom> toker_ Also, try logging out, restarting apache, and logging back in. I have seen it do that on initial startup.  [15:55]
<toker_> ok ok  [15:55]
<johnsom> It just has to do with the caching in horizon and all of those changes underneath it.  [15:55]
<johnsom> One-time thing when you are setting up  [15:56]
*** harlowja has joined #openstack-lbaas16:04
*** harlowja has quit IRC16:09
*** velizarx has joined #openstack-lbaas16:11
*** velizarx has quit IRC16:11
*** velizarx has joined #openstack-lbaas16:12
*** yamamoto has joined #openstack-lbaas16:17
*** yamamoto has quit IRC16:22
*** yamamoto has joined #openstack-lbaas16:32
*** AlexeyAbashkin has quit IRC16:34
*** yamamoto has quit IRC16:36
<ftersin> Hi everybody. Is there any chance for you guys to fix https://storyboard.openstack.org/#!/story/2001481 soon?  [16:37]
<johnsom> ftersin That turned out to be a very complicated issue. One patch was proposed, but it had defects.  Are you seeing this situation occur or just concerned about it?  Commenting your interest on the story can help us prioritize it.  [16:41]
<johnsom> rm_work ^^^ act/stdby both down story  [16:41]
<johnsom> We have some ideas of how to solve it, but it will be a bit of work  [16:42]
<ftersin> We got it while we were testing Octavia  [16:42]
<xgerman_> I have seen it in conjunction with delayed health manager processing  [16:43]
<johnsom> ftersin Thanks for letting us know, it does help us prioritize effort. This was one we weren't sure how much it was happening in the real world.  [16:45]
<ftersin> Can you use external resources to fix it? If our customer wants it to be fixed we'll work on it in any case...  [16:45]
<xgerman_> we love help ;-)  [16:46]
*** yamamoto has joined #openstack-lbaas16:47
<johnsom> Yeah, help would be great.  I laid out some things I think need to be updated to address it in the story. Adam (rm_work) can also probably provide some commentary  [16:47]
<rm_work> yeah, it's just ... a very complex issue  [16:48]
<xgerman_> my kneejerk fix would be to mark all amphora on the LB busy ;-)  [16:49]
<rm_work> erm  [16:49]
<ftersin> I see. Does https://storyboard.openstack.org/#!/story/1703547 duplicate it? Or do these differ from each other?  [16:49]
<rm_work> i think that would be a possible deadlock case as it can't be atomic  [16:50]
<rm_work> so what, you open a transaction and try to grab both, what if it happens at the same time T_T  [16:50]
<rm_work> i have been considering a different fix  [16:50]
<xgerman_> I think DBs have a way to time out locks but yes, I know my idea wasn't great  [16:51]
<rm_work> yeah so i want to disentangle it a bit  [16:52]
<rm_work> it's on my shortlist  [16:52]
*** yamamoto has quit IRC16:52
<xgerman_> ftersin: potentially, but it also references a discrepancy between neutron and octavia so it's hard to say what's going on  [16:53]
<johnsom> Yeah, we already do mark them all busy, that is not the issue really.  It's more about the way some of our code is designed where it assumes both are available all of the time.  Really, that needs to be refactored so that we can work on one amp in isolation, restore it, then come back and fix the other.  [16:55]
<rm_work> yes  [16:56]
<rm_work> johnsom: so is timeout_client_connect - not a thing?  [16:56]
<johnsom> Ugh, I got pulled off looking at that, sorry. It should be a thing, it might be connect just assigned to the frontend.  I did this mapping once before, but lost track of it.  Hmm, maybe it was on the story.  [16:57]
*** aojea_ has joined #openstack-lbaas16:58
<johnsom> Yeah, likely that other story is a duplicate  [16:58]
*** velizarx has quit IRC16:58
<xgerman_> they talk about n-lbaas so not 100% sure  [17:01]
*** aojea_ has quit IRC17:02
*** yamamoto has joined #openstack-lbaas17:02
<johnsom> Yeah, two layers of issues there, but I think the end case was the same  [17:03]
*** yamamoto has quit IRC17:06
<rm_work> so, 0 is infinite, and what is our max? literal maxint?  [17:08]
*** salmankhan has quit IRC17:09
*** yamamoto has joined #openstack-lbaas17:17
*** yamamoto has quit IRC17:22
<toker_> yey! got octavia-dashboard working with openstack-sdk 0.9.19. :D  [17:24]
<toker_> ... except that I cant press that god damn blue create loadbalancer button :)  [17:24]
<ftersin> johnsom: xgerman_: Ok, thank you. I'll let you know if we have to do something with that.  [17:28]
*** salmankhan has joined #openstack-lbaas17:29
*** yamamoto has joined #openstack-lbaas17:32
<toker_> I've restarted the container and tried logging in from different browsers... Still having the issue with not being able to create stuff via the gui  [17:33]
<toker_> What determines if the blue button should be clickable or not ?  [17:34]
*** yamamoto has quit IRC17:37
<johnsom> toker_ You are talking about the final create button in the wizard right?  [17:41]
<toker_> johnsom: yes thats right  [17:41]
<johnsom> One part is that all of the required fields are filled in, those with a * next to them, on all of the screens of the wizard  [17:41]
<toker_> hm, well yes I've filled in information in every field... still no go :/  [17:43]
<johnsom> Yeah, looking through the code. What is the label on that button?  [17:44]
<toker_> "Create Load Balancer"  [17:46]
*** yamamoto has joined #openstack-lbaas17:47
<johnsom> So, the RBAC is handled here: https://github.com/openstack/octavia-dashboard/blob/master/octavia_dashboard/static/dashboard/project/lbaasv2/loadbalancers/actions/create/create.service.js#L59  [17:50]
<johnsom> But we have confirmed you do not have the octavia_policy.json installed in the horizon policy file directory, so that should be disabled.  [17:50]
<johnsom> The other checks are all under here: https://github.com/openstack/octavia-dashboard/tree/master/octavia_dashboard/static/dashboard/project/lbaasv2/loadbalancers  [17:51]
<johnsom> I am looking through that now  [17:51]
*** yamamoto has quit IRC17:52
<toker_> yep, no octavia_policy.json anywhere  [17:54]
<johnsom> Just to double check, no neutron_lbaas_policy.json either right?  [17:55]
<johnsom> It would be in the horizon directory somewhere. (booting my dashboard VM now)  [17:56]
<toker_> mm nope,  [17:57]
<openstackgerrit> Merged openstack/octavia-dashboard master: Imported Translations from Zanata  https://review.openstack.org/555177  [17:58]
<toker_> no such file.  [17:58]
<toker_> this was also working before switching from neutron-lbaas to octavia directly..  [17:58]
<johnsom> FYI horizon/openstack_dashboard/conf is the directory with these policy files.  [17:59]
<johnsom> Hmmm, yeah, still thinking about it.  dayou is our expert, but he is probably offline now.  [18:00]
<toker_> ok, no worries, I'll figure it out sooner or later =)  [18:00]
<johnsom> Yeah, on mine as soon as I select the "Monitor type" drop down, the create button goes live  [18:01]
*** yamamoto has joined #openstack-lbaas18:02
*** beisner has joined #openstack-lbaas18:04
<beisner> hi all - curious if anyone has a hardware hsm in place (+ barbican), testing or using with octavia?  if not, what barbican back-end is under dev/test with octavia?  [18:04]
*** yamamoto has quit IRC18:06
*** yamamoto has joined #openstack-lbaas18:06
*** yamamoto has quit IRC18:06
<johnsom> We support barbican and castellan.  Most of us are developers, so use the devstack default local store.  [18:07]
<johnsom> I think at one point someone used an Atalla in testing, but not prod.  [18:08]
<johnsom> I know there has been a lot of interest in Vault, which you can use with Octavia queens via Castellan. But I personally have not tried it.  [18:09]
<johnsom> Here is an old HPE helion white paper talking about using the Atalla ESKM with barbican  [18:13]
<johnsom> http://files.asset.microfocus.com/4aa6-5241/en/4aa6-5241.pdf  [18:13]
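Whatever back end sits behind barbican (local store, an HSM such as the ESKM, or Vault via castellan), Octavia itself only talks to the barbican/castellan API when it pulls TLS secrets, so one way to exercise a back end end-to-end is to round-trip a secret with python-barbicanclient using the same kind of keystoneauth session as in the sketches above. Auth values are placeholders:

    # Round-trip a secret through barbican to exercise whatever HSM/back end
    # is configured behind it.  Auth values are placeholders.
    from keystoneauth1 import identity, session
    from barbicanclient import client as barbican_client

    auth = identity.Password(auth_url='https://keystone.example.com/v3',
                             username='demo', password='secret',
                             project_name='demo',
                             user_domain_name='Default',
                             project_domain_name='Default')
    sess = session.Session(auth=auth)
    barbican = barbican_client.Client(session=sess)

    secret = barbican.secrets.create(name='octavia-smoke-test',
                                     payload='not-a-real-key')
    ref = secret.store()                       # hits the configured back end
    print(barbican.secrets.get(ref).payload)   # should print 'not-a-real-key'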
*** harlowja has joined #openstack-lbaas18:15
*** AlexeyAbashkin has joined #openstack-lbaas18:15
*** harlowja_ has joined #openstack-lbaas18:17
*** harlowja has quit IRC18:19
*** AlexeyAbashkin has quit IRC18:20
<rm_work> johnsom: i am not sure timeout_client_connect exists  [18:24]
<xgerman_> rm_work, johnsom: https://review.openstack.org/554960  [18:30]
<johnsom> xgerman_ Can we have someone from infra check that?  [18:31]
<xgerman_> you know somebody?  [18:31]
<rm_work> i just +2'd it because it can't be worse  [18:31]
*** tesseract has quit IRC18:32
<beisner> johnsom: thx for the input  [18:34]
<johnsom> Sure, NP  [18:34]
<beisner> rm_work: absent any context, that is comical :)  [18:35]
<rm_work> beisner: :P  [18:35]
<rm_work> I think it's pretty comical *in-context*, actually >_<  [18:36]
<rm_work> but yes, even more so out  [18:36]
<johnsom> rm_work back to looking, context switching is killing me this morning.  [18:37]
<rm_work> lol yes same  [18:37]
<rm_work> sorry for my part :(  [18:37]
<rm_work> but this is almost done I think lol  [18:37]
<johnsom> rm_work Yeah, I am not finding it. Let's drop it for now.  [18:43]
<rm_work> k  [18:43]
<johnsom> Wondering about timeout client-fin and timeout tunnel though.  But these feel very HAProxy ish  [18:43]
<rm_work> yes  [18:45]
<rm_work> tunnel is interesting but QUITE haproxy  [18:45]
<rm_work> it seems  [18:45]
*** aojea has joined #openstack-lbaas18:46
<johnsom> rm_work FYI, there is a story for this: https://storyboard.openstack.org/#!/story/1457556  [18:48]
<rm_work> k  [18:49]
<rm_work> man  [18:49]
<rm_work> timeout http-keep-alive timeout http-request timeout tunnel  [18:49]
<rm_work> so many  [18:49]
<johnsom> Yeah, there are the keepalive options too that I am sure someone will need in the future  [18:50]
<rm_work> sooo many, lol  [18:50]
*** aojea has quit IRC18:50
<rm_work> ALMOST feels like worth a little object, but i think no  [18:50]
<rm_work> did reedip actually do anything?  [18:50]
<rm_work> k looks like not on octavia  [18:53]
*** voelzmo has joined #openstack-lbaas19:01
*** aojea has joined #openstack-lbaas19:01
*** yamamoto has joined #openstack-lbaas19:07
*** voelzmo has quit IRC19:10
*** Swami has joined #openstack-lbaas19:10
*** voelzmo has joined #openstack-lbaas19:11
*** yamamoto has quit IRC19:13
*** voelzmo has quit IRC19:22
*** aojea has quit IRC19:23
<rm_work> johnsom: i think this is going to take some revisions  [19:24]
<rm_work> lol  [19:24]
<johnsom> Yeah, the patch for n-lbaas added a "timeout" which I rejected as too vague, etc.  [19:30]
*** _toker_ has joined #openstack-lbaas19:32
<_toker_> Hm, after switching to octavia-api from neutron-lbaas I can no longer see the port the loadbalancer uses as vip. Before the switch I saw the ipaddress and then the "attached device" would be "neutron:LOADBALANCERV2".. But now, I only see the ports if I log in as the octavia user (and the attached device is "Octavia")... Can anyone explain that change of behaviour ?  [19:34]
<_toker_> This is what breaks terraform now I believe, since terraform tries to get that port information...  [19:35]
<johnsom> You might be missing this patch: https://review.openstack.org/#/c/524254/  [19:36]
<_toker_> johnsom: *checking*  [19:37]
<johnsom> I am of course assuming you are on the pike series  [19:37]
<_toker_> yes  [19:37]
<_toker_> correct, that patch is not applied.  [19:39]
*** aojea has joined #openstack-lbaas19:43
*** aojea has quit IRC19:45
*** aojea has joined #openstack-lbaas19:45
*** aojea has quit IRC19:46
*** aojea has joined #openstack-lbaas19:49
<xgerman_> rm_work: have you ever seen a memory leak in the health manager  [19:50]
<xgerman_> ?  [19:50]
<rm_work> nope  [19:51]
<rm_work> but i didn't look SUPER closely?  [19:51]
<xgerman_> well mine were eating up 30 Gigs of memory  [19:51]
<rm_work> ummmmmm  [19:51]
<rm_work> that's not... ideal  [19:51]
<rm_work> let me look  [19:51]
<xgerman_> you would have noticed ;-)  [19:51]
<rm_work> maybe?  [19:51]
<rm_work> maybe  [19:52]
<xgerman_> I had people noticing it for me…  [19:52]
<rm_work> yeah nope, my memory footprint on my HMs hasn't budged in a month  [19:52]
<xgerman_> sweet, something must have been fixed since Pike  [19:53]
<rm_work> umm did we backport the HM fixes?  [19:53]
<rm_work> i don't think so?  [19:53]
<xgerman_> no  [19:53]
<rm_work> the thing where they were actually really shitty? lol  [19:53]
<xgerman_> here is the latest attempt to improve it: https://review.openstack.org/#/c/554063/  [19:55]
<_toker_> johnsom: Sweet! That did it :D  [20:01]
<johnsom> Yeah, I leave devstacks running for a while too and have not seen any memory leaks  [20:01]
*** yamamoto has joined #openstack-lbaas20:09
*** yamamoto has quit IRC20:14
*** aojea has quit IRC20:17
*** aojea has joined #openstack-lbaas20:18
<_toker_> Still cant get my head around the "create buttons being greyed out" issue. So, it turns out I can edit already added loadbalancers, pools, members etc. Its JUST when I try to create each one of them, the button is greyed out.  [20:18]
<_toker_>       spyOn(policy, 'ifAllowed').and.returnValue(true);  [20:24]
<_toker_>       expect(service.allowed()).toBe(true);  [20:24]
<_toker_>       expect(policy.ifAllowed).toHaveBeenCalledWith({rules: [['neutron', 'create_loadbalancer']]});  [20:24]
<_toker_> What do those lines do ?  [20:24]
*** aojea has quit IRC20:33
*** aojea has joined #openstack-lbaas20:34
<_toker_> bah, I give up for today. Listing and editing through gui and creating through terraform on OSP 12 is fine for me... Maybe I'll be back tomorrow asking questions again =)  [20:39]
<_toker_> Anyway, thanks a bunch for all the help!  [20:39]
*** _toker_ has quit IRC20:44
*** AlexeyAbashkin has joined #openstack-lbaas20:45
<rm_work> ok, so are we tracking what we might need to backport? :P  [20:45]
<rm_work> ah i guess we DID backport that actually, but OSP12 didn't catch it :(  [20:46]
*** kobis has joined #openstack-lbaas20:50
*** AlexeyAbashkin has quit IRC20:52
*** kobis has quit IRC20:52
<rm_work> johnsom: going to need descriptions for these timeouts  [20:55]
<rm_work> and then also I guess we can debate which additional ones we will need  [20:56]
<rm_work> I just put 4 to start  [20:56]
<rm_work> (the ones we listed in the etherpad)  [20:56]
*** aojea has quit IRC20:57
<xgerman_> rm_work so maybe we need to be smarter with our hm and keep track of the futures we schedule in a weak ref map indexed by amphora-id so we can cancel updates when a new one comes in  [20:57]
<rm_work> hmmm  [20:57]
<xgerman_> so far we assumed that we can keep up but if we can't…  [20:57]
<rm_work> well if you can't keep up  [20:57]
<rm_work> you have *problems*  [20:57]
<xgerman_> don't tell me about them  [20:57]
<xgerman_> I am burning ~120% of CPU  [20:58]
*** salmankhan has quit IRC20:58
<xgerman_> and got up to 26 GB of memory  [20:58]
*** ssmith has joined #openstack-lbaas20:59
<johnsom> Didn't you have keystone problems in that environment too?  [20:59]
<xgerman_> well, if I eat up all the mem keystone starts swapping  [21:00]
<xgerman_> we have 64 GB and if hm needs 26GB there is not much left over for the rest of the control plane  [21:00]
<johnsom> I'm just skeptical given the issues there  [21:00]
<rm_work> ok well here goes  [21:01]
<rm_work> xgerman_: the CPU might be related to the HM issues also  [21:01]
<xgerman_> ok  [21:01]
<rm_work> xgerman_: you REALLY need the updated multiproc HM code  [21:01]
<openstackgerrit> Adam Harwell proposed openstack/octavia master: Allow members to be set as "backup"  https://review.openstack.org/552632  [21:01]
<openstackgerrit> Adam Harwell proposed openstack/octavia master: [WIP] Expose timeout options  https://review.openstack.org/555454  [21:01]
<rm_work> you probably have the same issues I did  [21:02]
<rm_work> johnsom: ^^  [21:02]
<xgerman_> I am running Pike. So maybe we just need to backport that wholesale  [21:02]
<rm_work> that's... fairly complete. just need some messages, and some discussion about what else to expose  [21:02]
<xgerman_> I don't have that many amps  [21:02]
<rm_work> xgerman_: i don't know if we can  [21:02]
<rm_work> xgerman_: right, i didn't either  [21:02]
<rm_work> it starts to break down at like 20, lol  [21:02]
<rm_work> it is so bad  [21:02]
<xgerman_> ouch  [21:02]
<rm_work> honestly we NEED to backport it  [21:03]
<rm_work> or Pike will be essentially useless  [21:03]
<xgerman_> +1000  [21:03]
<rm_work> let me look  [21:03]
<johnsom> Yeah, I'm just saying I want to see something like that outside that environment before I say it's us  [21:03]
<johnsom> I mean you can't even tell neutron to delete a port in there  [21:03]
<rm_work> johnsom: i can't imagine a scenario where the old HM code wouldn't fall over  [21:03]
<xgerman_> https://usercontent.irccloud-cdn.com/file/vA5OL9ym/Screen%20Shot%202018-03-22%20at%2012.48.04%20PM.png  [21:04]
<johnsom> Well, we fixed one issue  [21:04]
<rm_work> it falls into the bucket of "did anyone ever try to actually run this"  [21:04]
<rm_work> hmmm  [21:04]
<rm_work> uwsgi shouldn't be that bad tho >_>  [21:04]
<xgerman_> yep  [21:04]
<johnsom> Yeah, pretty sure this is the effect of something else  [21:04]
<xgerman_> https://usercontent.irccloud-cdn.com/file/RiGiDVAH/Screen%20Shot%202018-03-22%20at%2012.39.56%20PM.png  [21:04]
<rm_work> yeah i mean  [21:05]
<rm_work> octavia-health ALSO shouldn't be even close  [21:05]
<rm_work> he is definitely suffering from that one  [21:05]
<xgerman_> yeah, normally I would agree but the only thing hm needs is the DB  [21:05]
<xgerman_> and mysql runs at 50% CPU  [21:06]
*** salmankhan has joined #openstack-lbaas21:06
<johnsom> Given the multiple issues, including those outside octavia, I bet they lost storage for a bit or something similar  [21:07]
*** aojea has joined #openstack-lbaas21:07
<xgerman_> well, it's trending up after the restart  [21:07]
<xgerman_> (hm memory)  [21:07]
<xgerman_> and they even now allegedly tuned mysql  [21:09]
<rm_work> wait how the heck do I *checkout* stable/pike  [21:10]
*** salmankhan has quit IRC21:10
<xgerman_> git checkout origin/stable/pike  [21:10]
<rm_work> ah  [21:10]
*** yamamoto has joined #openstack-lbaas21:10
<xgerman_> yeah, I think having the hm being unbound is not ok — unless we want people to run it in some straight jacket cgroup  [21:11]
*** yamamoto has quit IRC21:16
<xgerman_> https://bugs.python.org/issue14119  [21:17]
*** AlexeyAbashkin has joined #openstack-lbaas21:17
*** kobis has joined #openstack-lbaas21:18
*** AlexeyAbashkin has quit IRC21:21
<rm_work> umm  [21:22]
<xgerman_> yeah, rumor on the Internet has it that ProcessPool blocks  [21:22]
<rm_work> xgerman_: there's a number of changes that i think will fix this  [21:22]
<rm_work> and you won't need to worry about that  [21:23]
<rm_work> i'm working on the cherry-picks  [21:23]
<rm_work> the merge conflicts are somewhat insane tho  [21:23]
<xgerman_> sweet  [21:23]
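To make xgerman_'s earlier idea concrete: if the health manager hands every incoming heartbeat to a ProcessPoolExecutor faster than the workers drain them, the executor's internal queue (and memory) grows without bound. The sketch below illustrates the two mitigations discussed here: bounding in-flight work, and remembering the latest future per amphora so a stale, not-yet-started update can be cancelled when a newer heartbeat arrives. It is an illustration of the pattern, not the actual Octavia health manager code; update_health() stands in for the real handler:

    # Sketch: bounded, per-amphora deduplicated dispatch of heartbeat updates.
    import threading
    from concurrent.futures import ProcessPoolExecutor

    def update_health(amphora_id, payload):
        ...  # the real work: DB updates, failover triggers, etc.

    class HeartbeatDispatcher(object):
        def __init__(self, workers=4, max_in_flight=64):
            self._pool = ProcessPoolExecutor(max_workers=workers)
            self._slots = threading.BoundedSemaphore(max_in_flight)
            self._latest = {}  # amphora_id -> most recent Future

        def submit(self, amphora_id, payload):
            # A queued-but-unstarted update for the same amphora is superseded
            # by the newer heartbeat; its done-callback frees the slot.
            prev = self._latest.get(amphora_id)
            if prev is not None:
                prev.cancel()

            self._slots.acquire()  # block instead of queueing unboundedly
            fut = self._pool.submit(update_health, amphora_id, payload)
            fut.add_done_callback(lambda _f: self._slots.release())
            self._latest[amphora_id] = fut
            return fut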
*** kobis has quit IRC21:26
*** kobis has joined #openstack-lbaas21:28
*** kobis has quit IRC21:28
*** ssmith has quit IRC21:36
*** pcaruana has quit IRC21:53
<rm_work> WHEEEELP this will be funtimes  [21:54]
<rm_work> you ready xgerman_?  [21:54]
*** yamamoto has joined #openstack-lbaas21:55
<rm_work> oh one sec  [21:55]
<rm_work> alright here goes nothing...  [21:57]
<rm_work> errr  [21:58]
<rm_work> it rejected my PR  [21:58]
<rm_work> s/PR/review/  [21:58]
<rm_work> OH. right.  [21:59]
<rm_work> ok so.... i remember why we got stuck on this  [22:00]
<rm_work> anyway, here's the end of the chain: https://review.openstack.org/#/c/555475/  [22:01]
<rm_work> but the first one was blocked due to adding a requirement <_<  [22:01]
<rm_work> which we just can't avoid  [22:01]
<rm_work> several of those things solve bits of the problems you have xgerman_  [22:02]
<rm_work> like, "Minimize the effect overloaded Health Manager processes"  [22:02]
<rm_work> solves the "we got way behind" problem  [22:03]
*** rcernin has joined #openstack-lbaas22:34
*** dayou has quit IRC22:42
*** dayou has joined #openstack-lbaas22:43
*** fnaval has quit IRC22:46
*** aojea has quit IRC22:49
*** dayou has quit IRC22:52
*** dayou has joined #openstack-lbaas22:55
*** AlexeyAbashkin has joined #openstack-lbaas23:16
*** AlexeyAbashkin has quit IRC23:20
*** Swami has quit IRC23:26
*** fnaval has joined #openstack-lbaas23:41
