Thursday, 2018-09-20

00:24 *** abaindur has joined #openstack-lbaas
00:27 *** abaindur has quit IRC
00:27 *** abaindur has joined #openstack-lbaas
01:00 *** hongbin_ has joined #openstack-lbaas
01:00 *** ltomasbo has quit IRC
01:00 *** tobias-urdin has quit IRC
01:17 *** phuoc_ has joined #openstack-lbaas
01:17 *** phuoc has quit IRC
01:34 *** abaindur has quit IRC
02:04 <sapd1_> rm_work: build success now :D
02:11 *** jiteka has quit IRC
02:11 *** eandersson has quit IRC
02:11 *** jiteka has joined #openstack-lbaas
02:15 *** jiteka has quit IRC
02:20 <rm_work> cool :)
02:24 *** eandersson has joined #openstack-lbaas
02:31 *** eandersson has quit IRC
03:02 *** hongbin_ has quit IRC
03:32 *** abaindur has joined #openstack-lbaas
03:35 *** abaindur has quit IRC
03:35 *** abaindur has joined #openstack-lbaas
03:45 *** sapd1_ has quit IRC
03:45 *** sapd1 has joined #openstack-lbaas
04:15 *** ramishra has joined #openstack-lbaas
04:19 *** dayou_ has joined #openstack-lbaas
04:34 *** eandersson has joined #openstack-lbaas
04:35 *** yamamoto has quit IRC
04:35 *** yamamoto has joined #openstack-lbaas
04:36 *** yamamoto has quit IRC
04:45 *** yboaron_ has joined #openstack-lbaas
05:10 *** yamamoto has joined #openstack-lbaas
05:31 *** abaindur has quit IRC
07:02 *** rcernin has quit IRC
07:41 *** celebdor has joined #openstack-lbaas
07:42 *** dayou_ has quit IRC
08:07 *** yboaron_ has quit IRC
08:08 *** yboaron_ has joined #openstack-lbaas
08:27 *** ccamposr has joined #openstack-lbaas
08:41 <openstackgerrit> Reedip proposed openstack/octavia-tempest-plugin master: Add configuration support for skipping tests  https://review.openstack.org/599393
08:45 *** jiteka_ has joined #openstack-lbaas
08:46 *** yboaron_ has quit IRC
08:46 *** yboaron_ has joined #openstack-lbaas
08:50 *** jiteka_ has quit IRC
08:52 *** ccamposr_ has joined #openstack-lbaas
08:54 *** ccamposr has quit IRC
08:57 *** ccamposr__ has joined #openstack-lbaas
08:59 *** ccamposr_ has quit IRC
09:16 *** salmankhan has joined #openstack-lbaas
09:19 *** ccamposr__ has quit IRC
09:19 *** ccamposr__ has joined #openstack-lbaas
09:28 *** baffle has joined #openstack-lbaas
09:35 *** tobias-urdin has joined #openstack-lbaas
09:39 <maciejjozefczyk> rm_work I can help to make it public
09:40 <maciejjozefczyk> For OVH, the stock Octavia implementation is not that compatible, so using a floating IP and keepalived, but without ARP, looks great.
09:44 <baffle> I'm trying to get Octavia working in my installation. The haproxy amphora now fails when trying to bring up eth1:0 inside the network namespace, because it both adds the "default" route and then tries to add 0.0.0.0/0 from the host routes as well; ip of course responds with "RTNETLINK answers: File exists. Failed to bring up eth1:0." and then bringing up the amphora fails.
09:45 <baffle> Do I just bastardize the template and rebuild my image, or am I missing something else?
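The conflict baffle describes could be avoided by filtering the duplicate route in the amphora's interface template. A minimal sketch of such a guard in jinja2; the template structure and variable names (host_routes, hr.network, hr.gw) are assumptions for illustration, not Octavia's actual template:

```jinja
{# Hypothetical guard: the default gateway is already written above, so    #}
{# skip any host route that would re-add 0.0.0.0/0 and trip "File exists". #}
{% for hr in host_routes if hr.network != '0.0.0.0/0' %}
up route add -net {{ hr.network }} gw {{ hr.gw }} dev eth1
{% endfor %}
```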
09:54 <openstackgerrit> Yang JianFeng proposed openstack/octavia master: Add quota support to octavia's l7policy and l7rule  https://review.openstack.org/590620
09:58 <openstackgerrit> Yang JianFeng proposed openstack/octavia master: Refactor 'check_quota_met' and 'decrement_quota'  https://review.openstack.org/596665
10:04 *** pcaruana has joined #openstack-lbaas
10:26 <sapd1> johnsom: Could you review my patch?
10:38 <rm_work> maciejjozefczyk: oh, are you basically using the L3 driver?
10:38 <rm_work> maciejjozefczyk: https://review.openstack.org/#/c/435612/
10:39 <rm_work> (uses FLIPs on L3, keepalived with no ARP, calls back to the HM over the health message to trigger a neutron FLIP move :P
10:39 <rm_work> )
10:39 <rm_work> I guess it might work for you out-of-the-box?
10:39 <rm_work> if so, I could actually clean it up and see if we could get it mergeable
10:41 <maciejjozefczyk> rm_work: wow
10:41 <maciejjozefczyk> rm_work: that's the thing! yes, we would like to use only floating_ip without this allowed_address_pairs logic
10:41 <rm_work> maciejjozefczyk: well, it kinda uses it...
10:41 <rm_work> try out that whole chain
10:42 <rm_work> from multi-AZ (does anti-affinity over AZs!), and evacuate (lets you take down one AZ at a time for patching), then the FLIP driver
10:42 <rm_work> that is what I run here
10:43 <rm_work> actually this is my patch list on top of master, in order: http://paste.openstack.org/show/730406/
10:43 <rm_work> probably you don't need the first one
10:44 <rm_work> or the second one, I suppose; people kept asking me about adoption here tho :P
10:44 <rm_work> anyway, if you have questions let me know, would love to see if that driver is useful for anyone else
10:44 <rm_work> I can walk you through it
10:45 <rm_work> some parts are a little dirty right now since I haven't had the motivation yet to clean it up and add good testing T_T
10:45 <rm_work> but I can vouch that it works, been using it for over a year and a half here
10:47 <maciejjozefczyk> rm_work: it's great work! We've just started prodding Octavia and we went through the issues this should solve for us
10:47 <maciejjozefczyk> rm_work: I'll dig into it to understand how it works
10:47 <rm_work> yeah, give it a shot, and poke at me if you need anything
10:47 <maciejjozefczyk> rm_work: thanks a lot!
10:47 <rm_work> basically -- I have keepalived use a notify script, so when it takes over as master, it sends a custom signal to the agent
10:48 <rm_work> the agent sends a special broadcast type of health message over the normal health monitoring channels
10:48 <rm_work> the HMs detect that, and one of them takes the "failover" task, which moves the FLIP and refreshes the other (dead) amp
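A minimal sketch of the keepalived side of that mechanism; notify_master is a real keepalived directive, but the instance layout and the script path are hypothetical placeholders, not the driver's actual configuration:

```
vrrp_instance amphora_vip {
    state BACKUP
    interface eth1
    virtual_router_id 1
    priority 100
    # keepalived runs this hook on transition to MASTER; in the scheme
    # described above it would signal the amphora agent, which then
    # broadcasts the special health message to the health managers.
    notify_master /usr/local/bin/amphora-notify-master.sh
}
```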
10:48 <rm_work> I have a tempest test for the failover it does here: https://review.openstack.org/#/c/501559/
10:49 <maciejjozefczyk> rm_work: that will work perfectly with our network solution
10:49 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: WIP: Failover test  https://review.openstack.org/501559
10:50 <rm_work> dumb pep8 bug lol
10:50 <rm_work> maciejjozefczyk: yeah, I hope you can give me some feedback on how it works for you, and please don't hesitate to hit me up if you have problems/questions
10:50 <rm_work> important notes: "ha_port_id" in the amp table is overloaded to be the FLIP id
10:51 <rm_work> and there's a couple config bits you should probably set
10:52 <rm_work> for multi-AZ, just list all your available AZs
10:52 <rm_work> (I assume you have multiple?)
10:52 <rm_work> have you patched nova to accept multiple --availability-zone?
10:52 <rm_work> if so, there's a setting you should set as well, though that's less common
10:52 <rm_work> sorry, really excited to find someone else stuck in this situation :P
10:55 <maciejjozefczyk> rm_work: we're not using AZs, unfortunately
10:55 <maciejjozefczyk> :(
10:55 <rm_work> k, then you can ignore that bit
10:55 <rm_work> actually, I guess you can probably just cherry-pick the FLIP driver patch without it ... maybe
10:56 <rm_work> I don't remember if I had to do anything to make it specifically work on top of the AZ patch
10:56 <maciejjozefczyk> ok
11:08 <rm_work> ah, in [networking] in your config, you'll want:
11:08 <rm_work> `allow_vip_subnet_id = False` and `allow_vip_port_id = False`, and you want to set `valid_vip_networks = <a list of your flip networks>`
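Collected into octavia.conf form, that advice looks roughly like this (the network IDs are placeholders):

```ini
[networking]
# Don't let users hand-pick VIP subnets or ports; the FLIP driver
# manages VIP placement itself.
allow_vip_subnet_id = False
allow_vip_port_id = False
# Networks the VIP (FLIP) is allowed to come from.
valid_vip_networks = <flip-network-id-1>,<flip-network-id-2>
```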
11:13 <rm_work> finally got around to taking a look at the kolla-ansible octavia config <_< https://review.openstack.org/604043
11:15 *** pcaruana has quit IRC
11:20 *** pcaruana has joined #openstack-lbaas
11:32 *** pcaruana has quit IRC
11:39 *** pcaruana has joined #openstack-lbaas
11:50 <maciejjozefczyk> rm_work: good, what about the openstack client? the vip_* field is mandatory while creating an lb, isn't it?
11:50 *** pcaruana has quit IRC
11:50 <maciejjozefczyk> aah, ok
12:30 *** salmankhan1 has joined #openstack-lbaas
12:33 *** salmankhan has quit IRC
12:34 *** yboaron_ has quit IRC
12:35 *** salmankhan1 has quit IRC
12:35 *** yboaron_ has joined #openstack-lbaas
12:42 *** yamamoto has quit IRC
12:42 *** yamamoto has joined #openstack-lbaas
12:51 *** salmankhan has joined #openstack-lbaas
13:28 <baffle> maciejjozefczyk: Wouldn't it work with "--vip-network-id <flip network id>"?
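That is a standard client flag; a create using only a network (no subnet or port) would look like this (the name and ID are placeholders):

```
openstack loadbalancer create --name lb1 --vip-network-id <flip-network-id>
```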
13:29 *** yamamoto has quit IRC
13:38 *** yboaron_ has quit IRC
14:03 *** ccamposr__ has quit IRC
14:03 <johnsom> We should note that FLIP failover is much slower than the native Act/Stdby failover
14:20 *** yboaron_ has joined #openstack-lbaas
14:34 <xgerman_> did we ever time it in comparison to SINGLE?
14:34 <johnsom> Yeah, would be interesting to compare
14:36 <xgerman_> because with the spare pools it shouldn't be much slower (once detected)
14:37 <xgerman_> so all we might need to do is beef up detection speed
14:40 *** KeithMnemonic has joined #openstack-lbaas
14:48 *** yamamoto has joined #openstack-lbaas
14:57 *** Swami has joined #openstack-lbaas
14:59 *** yboaron_ has quit IRC
15:03 *** yamamoto has quit IRC
15:09 *** luis5tb has joined #openstack-lbaas
15:11 <luis5tb> ping johnsom (I'm ltomasbo)
15:12 <johnsom> Hello
15:12 <luis5tb> johnsom, I have two things to discuss with you
15:12 <johnsom> Ok
15:12 <luis5tb> johnsom, one is whether you discussed the patch I mentioned the other day in the weekly IRC meeting (sorry, in the end I could not attend it)
15:13 <luis5tb> johnsom, and second, I'm seeing some weird behaviour in octavia (db/repositories/create_pool_on_load_balancer)
15:13 <luis5tb> johnsom, it seems the transaction sometimes is not completed and the last bits are not executed, leading to inconsistencies and breaking, for instance, ovn-octavia driver support
15:14 <johnsom> Yes, I had it on the agenda and we did discuss it, but the outcome was that no one on the team had time to do the research we discussed at the PTG, so we postponed for another week to give folks more time to research.
15:14 <luis5tb> johnsom, you mean changing the SG on the VIP?
15:14 <johnsom> Correct
15:14 <luis5tb> johnsom, I actually researched that option, and it is not working
15:14 <luis5tb> johnsom, at least it does not work with the ml2/ovs driver
15:15 <luis5tb> johnsom, as the security groups that apply are the ones on the amphora port, not the one on the VIP
15:15 <johnsom> The request is that people research how SGs work in neutron and a few ideas folks have on how to stack the SGs.
15:15 <johnsom> FYI, you can always read the meeting transcripts here: https://wiki.openstack.org/wiki/Octavia/Meeting_Minutes#2018-09-19_Weekly_meeting:
15:16 <luis5tb> johnsom, oohh, true! thanks!
johnsomlooking at "create_pool_on_load_balancer" now15:17
luis5tbjohnsom, ok, given what I discuss with the neutron folks, if they enforce VIP SG, it will not be through allow address pair but a new feature being merged...15:17
johnsomYeah, we discussed other options. Just many of us have not looked at the neutron SG code in a long time15:18
johnsomSo, what part of create_pool_on_load_balancer isn't getting run?15:18
luis5tbjohnsom, I'm not getting the listener updated15:19
johnsomAnd you are doing a direct POST to create your pool? or single-call-create?15:21
luis5tbwell, I'm using kuryr to create the lbaas, but it is going throug the POST method at pool.py15:23
luis5tbjohnsom, ^15:24
johnsomok, so not single-call-create path.  Still looking/testing15:24
luis5tbjohnsom, probably is happening for the amphora driver too, but in that case the fact that listener_id is not there is not breaking the support I guess...15:25
*** celebdor has quit IRC15:25
johnsomSo I just created a pool and the default_pool_id got filled in as expected... If that was broken I would expect a huge number of our tests to fail as it is the common code path.15:26
johnsomDo you have the json being passed to the API?15:27
luis5tbjohnsom, it is not happening all the time15:27
luis5tbI'm creating lb -> listener -> pool -> member15:28
johnsomIt is a valid case that you create pools that are not bound to listeners.  Pools can be created, correctly, that are only bound to the load balancer. These can then be used for L7 policies.15:28
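As a concrete example of that valid case, a pool can be created against the load balancer alone, with no listener, using the standard client (names are placeholders):

```
openstack loadbalancer pool create --name l7-pool --loadbalancer lb1 \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
```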
15:28 <luis5tb> johnsom, yes, but in this case I passed the listener_id, and it is a problem that it is dropping it
15:29 <johnsom> Yeah, that is what I do as well; I have a test script I use to create LBs.
15:30 <luis5tb> johnsom, https://github.com/openstack/octavia/blob/master/octavia/api/v2/controllers/pool.py#L244-L245
15:30 <luis5tb> johnsom, there, db_pool is not getting listener_id set, even when it is passed in
15:30 <luis5tb> (sometimes...)
15:32 <luis5tb> johnsom, I added some logs, and I see that the transaction seems to complete...
15:33 <luis5tb> still, the listener id is missing...
15:33 <johnsom> Yeah, I'm not immediately seeing where there could be an issue. Can you add a few log points for me?
15:34 <johnsom> https://github.com/openstack/octavia/blob/master/octavia/api/v2/controllers/pool.py#L185 log the contents of the pool object.
15:34 <luis5tb> johnsom, I have a problem doing that... including a LOG.debug in that function makes it work...
15:35 <luis5tb> johnsom, but I did print the listener that is obtained in that function on line 234
15:35 <luis5tb> so, the listener id is there for sure...
15:35 <johnsom> Also here: https://github.com/openstack/octavia/blob/master/octavia/api/v2/controllers/pool.py#L233 log the pool_dict
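A sketch of the two log points johnsom is asking for; these are hypothetical debug additions to pool.py's post() handler, not code in the tree:

```python
from oslo_log import log as logging

LOG = logging.getLogger(__name__)

# Around pool.py#L185: dump the incoming pool object from the request.
LOG.debug("Incoming pool request object: %s", pool)

# Around pool.py#L233: dump the dict about to be handed to the DB layer.
LOG.debug("pool_dict before create_pool_on_load_balancer: %s", pool_dict)
```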
15:35 <luis5tb> johnsom, I printed the listener ID right after that line 234
15:36 <luis5tb> and it has the listener id value!
15:36 <johnsom> Ok, so on the run you had problems with, listener_id was there????
15:36 <luis5tb> johnsom, yes
15:37 <luis5tb> johnsom, but it is not on db_pool after executing line 244
15:40 <johnsom> Well, wait, I thought you said the listener was not getting its "default_pool_id" updated. That would not be in db_pool, as that is a listener table update, not the returned pool object
15:42 <luis5tb> johnsom, it seems this is not getting the updated object sometimes: https://github.com/openstack/octavia/blob/master/octavia/db/repositories.py#L225
15:43 <luis5tb> johnsom, I meant that db_pool is not getting the listener_id set and thus not passing it to the driver
15:43 <luis5tb> johnsom, well, maybe it is getting set, but not returned in the get at https://github.com/openstack/octavia/blob/master/octavia/db/repositories.py#L225
15:44 <johnsom> But the pool object returned there does not have a listener_id parameter
15:45 <johnsom> https://github.com/openstack/octavia/blob/master/octavia/db/models.py#L262
15:46 <luis5tb> johnsom, but I saw that provider_pool sometimes includes the listener id, and other times it does not...
15:46 <luis5tb> johnsom, and actually, it tries to set it at https://github.com/openstack/octavia/blob/master/octavia/api/drivers/utils.py#L242
15:47 <johnsom> Ok, so this is different from the "default_pool_id" in the listener DB table...
15:48 <luis5tb> johnsom, and, if I do db_pool.listener_id = listener_id, things work...
15:49 <luis5tb> (in pool.py, before the db_pool_to_provider_pool call)
15:51 <luis5tb> johnsom, I see listeners is a property on the pool model...
15:51 <johnsom> What DB are you using?
15:52 <johnsom> Yes, listeners is on the returned pool object
15:53 <luis5tb> mariadb
15:53 <luis5tb> johnsom, ^^
15:53 <johnsom> Ok.
15:54 <johnsom> Yeah, I would have to have a test case to really run this to ground. The only thing I can think of is that sqlalchemy isn't handling the sub-transaction reliably for some reason. You could try a "session.flush()" before this line: https://github.com/openstack/octavia/blob/master/octavia/db/repositories.py#L225
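A minimal sketch of that suggestion; the body of create_pool_on_load_balancer is paraphrased from the chat's description of repositories.py, not copied from the tree, and only the session.flush() line is new:

```python
def create_pool_on_load_balancer(self, session, pool_dict, listener_id=None):
    with session.begin(subtransactions=True):
        db_pool = self.pool.create(session, **pool_dict)
        if listener_id:
            # Point the listener at its new default pool.
            self.listener.update(session, listener_id,
                                 default_pool_id=db_pool.id)
        # johnsom's suggestion: flush pending sub-transaction state to the
        # DB before re-reading the pool, in case stale state is returned.
        session.flush()
    return self.pool.get(session, id=db_pool.id)
```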
15:55 <luis5tb> johnsom, ok, I'll try
15:55 <luis5tb> if it's not that, then the listeners property is not getting the value...
15:56 <luis5tb> btw, the listener.update there
15:56 <luis5tb> ahh, no, sorry, that is the listener table...
15:56 <luis5tb> going to try the flush
15:56 <johnsom> Yeah, I just haven't seen this and would really just have to walk that code path, watching the listener id as I go, to narrow it down
15:58 <luis5tb> ok... thanks very much for the help!
15:58 <luis5tb> johnsom, ^^
15:58 <johnsom> Sure, good luck
15:58 <luis5tb> johnsom, I was mostly asking in case it was a known issue (and to ask about the other patch...)
15:58 <luis5tb> johnsom, I'll let you know if flush helps...
15:59 <johnsom> No open story for that as far as I remember
16:03 *** luis5tb has quit IRC
16:06 *** luis5tb has joined #openstack-lbaas
16:06 <luis5tb> johnsom, ok, flush is not helping either...
16:15 <luis5tb> johnsom, it seems that moving that pool.get inside the subtransaction may help...
16:23 <openstackgerrit> Luis Tomas Bolivar proposed openstack/octavia master: Ensure pool.get obtains listeners information  https://review.openstack.org/604152
16:24 <luis5tb> johnsom, ^^ this fixes it...
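The shape of that fix, continuing the paraphrased sketch above: the re-read moves inside the sub-transaction, so the pool is fetched after the listener update but before the transaction scope closes:

```python
def create_pool_on_load_balancer(self, session, pool_dict, listener_id=None):
    with session.begin(subtransactions=True):
        db_pool = self.pool.create(session, **pool_dict)
        if listener_id:
            self.listener.update(session, listener_id,
                                 default_pool_id=db_pool.id)
        # Re-read while still inside the sub-transaction so the returned
        # object carries the freshly updated listener information.
        new_pool = self.pool.get(session, id=db_pool.id)
    return new_pool
```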
16:26 <openstackgerrit> Merged openstack/neutron-lbaas master: Fix memory leak in the haproxy provider driver  https://review.openstack.org/603460
16:36 *** Swami has quit IRC
16:46 *** luis5tb has quit IRC
17:01 *** salmankhan has quit IRC
17:02 *** ramishra has quit IRC
17:03 *** yamamoto has joined #openstack-lbaas
17:05 *** Swami has joined #openstack-lbaas
17:20 *** yamamoto has quit IRC
17:34 *** eandersson has quit IRC
17:37 *** jiteka has joined #openstack-lbaas
17:39 *** eandersson has joined #openstack-lbaas
17:50 *** yamamoto has joined #openstack-lbaas
17:56 *** yamamoto has quit IRC
18:42 *** salmankhan has joined #openstack-lbaas
18:46 *** abaindur has joined #openstack-lbaas
18:47 *** salmankhan has quit IRC
19:15 <johnsom> FYI, I have uploaded my "stresshm" tool I used to stress-test the health manager. It can also be used to populate the database with "ghost" load balancers.
19:15 <johnsom> https://github.com/johnsom/stresshm
19:15 <johnsom> Same disclaimers, it was slapped together....
19:58 *** yamamoto has joined #openstack-lbaas
20:06 *** yamamoto has quit IRC
20:32 *** abaindur has quit IRC
20:33 *** abaindur has joined #openstack-lbaas
20:35 *** abaindur has quit IRC
20:35 *** abaindur has joined #openstack-lbaas
21:20 <rm_work> that sounds yiddish
21:30 <johnsom> Boy, bionic wants to run home to python2.7 fairly often....  sigh
21:33 <openstackgerrit> German Eichberger proposed openstack/octavia master: Refactor the AAP driver to not depend on nova  https://review.openstack.org/604226
21:34 <xgerman_> Boom!
21:35 <johnsom> Cool!
21:45 <johnsom> xgerman_ Reviewed
21:46 <xgerman_> Thanks… will fix later today. Kids' school's out soon-ish
22:08 *** yamamoto has joined #openstack-lbaas
22:24 *** yamamoto has quit IRC
22:27 <openstackgerrit> Merged openstack/octavia master: Updates README-Vagrant.md to use OSC commands  https://review.openstack.org/603055
22:47 <rm_work> johnsom: just deployed to prod
22:47 <rm_work> the return times look A++
22:47 <johnsom> Excellent
22:53 *** rcernin has joined #openstack-lbaas
22:55 <rm_work> yep, very nice
22:55 <rm_work> unfortunately I can't +2 it any harder
22:55 <rm_work> I could have our deploy-bot +1 it, I guess
22:56 <johnsom> Ha
22:56 <johnsom> rm_work Are you running the HM fix too? https://review.openstack.org/#/c/600332/
22:57 <rm_work> ugh, I will be sad if I forgot to pull that in
22:57 <rm_work> let me check
22:57 <rm_work> yes
22:57 <rm_work> I have both
22:58 <rm_work> was worried for a sec
22:58 <johnsom> You could +2 that one too... grin
23:00 <rm_work> johnsom: http://paste.openstack.org/show/730490/
23:00 <johnsom> I am futzing with DIB on bionic. I am able to reproduce the issue I saw on the bionic nodepool instances
23:01 <johnsom> rm_work is that the "before"
23:01 <johnsom> ?
23:01 <rm_work> no
23:01 <rm_work> that's now
23:01 <rm_work> before it was creeping up to the 0.4s
23:01 <johnsom> Wow, a whole magnitude slower than my workstation
23:01 <rm_work> well
23:01 <rm_work> the DB isn't on the same machine
23:01 <rm_work> sooooo
23:01 <rm_work> yes
23:01 <rm_work> lol
23:01 <rm_work> there's some minimal wire time involved :P
23:02 <rm_work> each one is doing ~6/s
23:02 <rm_work> 360 amps / 10s heartbeats / 6 HMs
23:03 <rm_work> the math works out lovely right now, heh
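Spelled out: 360 amphorae each sending a heartbeat every 10 s means 360 / 10 = 36 heartbeats/s arriving in total; spread evenly across 6 health managers, that is 36 / 6 = 6 heartbeats/s each, matching the observed ~6/s.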
23:06 <rm_work> are the gates still f'd?
23:06 <rm_work> looks like it?
23:06 <johnsom> Yeah
23:34 *** andreykurilin has quit IRC
23:35 *** andreykurilin has joined #openstack-lbaas
23:36 *** rcernin has quit IRC
23:36 *** rcernin has joined #openstack-lbaas
23:45 *** Swami has quit IRC
