Wednesday, 2019-01-30

johnsomFYI: tags support for the openstacksdk : https://review.openstack.org/63384900:11
openstackgerritAdam Harwell proposed openstack/python-octaviaclient master: Add some commands for octavia flavor and flavor_profile  https://review.openstack.org/62468600:25
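For context, the python-octaviaclient change above adds flavor and flavor_profile commands roughly along these lines; a rough sketch only, with option names and flavor-data keys taken from the review and subject to change:

    openstack loadbalancer flavorprofile create \
        --name single-amp \
        --provider amphora \
        --flavor-data '{"loadbalancer_topology": "SINGLE"}'
    openstack loadbalancer flavor create \
        --name standalone --flavorprofile single-amp --enable
    openstack loadbalancer flavor list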
rm_workwait why did the tempest side patches fail?00:47
johnsomSome nova silliness today00:47
johnsom"no hosts"00:48
rm_work>_>00:50
rm_workwell, lots of merging happening today00:50
rm_workexciting00:50
johnsomYeah, good stuff!00:50
johnsomI'm trying to update openstacksdk as we have neglected it for a bit.00:50
johnsomPlus add provider/flavor stuffs00:50
rm_workah yeah00:51
colin-has there ever been discussion or consideration of enhancing loadbalancer list output with loadbalancer stats?00:51
openstackgerritMerged openstack/python-octaviaclient master: Add some commands for octavia flavor and flavor_profile  https://review.openstack.org/62468600:51
johnsomIt has been discussed. It gets pretty busy though with all of that data00:52
colin-yeah i can see the dilemma, could easily be too much00:53
johnsomI think the discussion led to one of two different paths:00:53
johnsom1. Maybe we need a list stats API00:53
johnsom2. Maybe we should be shipping those off to a service that deals with stats in a better way than we do.00:54
johnsomFrankly, IMO the whole stats functionality needs to be re-worked from the amp up.00:55
johnsomI say that as the author of most of the drivel that is there now.00:55
colin-haha, fair00:55
johnsomIt was a "we need this now, so make something work, knowing we should fix this later" item00:56
colin-found myself looking at list output wondering which was the busiest (currently)00:56
colin-looping through them to figure it out but it felt inelegant00:56
johnsomMostly because OpenStack was still figuring out the metrics/stats strategy00:56
colin-gotcha00:56
johnsomIMO really it should be time series data with deltas.00:57
johnsomThat can roll up for queries via the API00:58
johnsomOr be fed to some useful metrics service00:58
colin-that would be cool, yeah00:58
johnsomSolves a lot of the failover ugliness we have today00:58
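As an aside, the "looping through them" approach colin- describes can be scripted against the existing per-load-balancer stats call; a rough sketch, assuming jq is available and that the stats output exposes bytes_in/bytes_out:

    for lb in $(openstack loadbalancer list -f value -c id); do
        total=$(openstack loadbalancer stats show "$lb" -f json \
                | jq '.bytes_in + .bytes_out')
        echo "$total $lb"
    done | sort -rn | head -5    # five busiest load balancers by traffic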
openstackgerritMerged openstack/octavia master: Update the amphora driver for flavor support.  https://review.openstack.org/63188901:28
openstackgerritMerged openstack/octavia master: Add compute flavor support to the amphora driver  https://review.openstack.org/63190601:28
openstackgerritMerged openstack/octavia master: Fix a topology bug in flavors support  https://review.openstack.org/63255101:28
*** sapd1_ has quit IRC01:40
*** sapd1 has joined #openstack-lbaas01:42
johnsomOk, stats added to the SDK. Calling it a night.  Failover, providers, flavor, flavor profiles, and amphora tomorrow01:45
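For reference, consuming the stats support johnsom just added would look roughly like this; a sketch that assumes the proxy method and attribute names from the in-flight openstacksdk patches (e.g. get_load_balancer_statistics) and a clouds.yaml entry named devstack:

    import openstack

    conn = openstack.connect(cloud='devstack')  # 'devstack' is a placeholder cloud name
    for lb in conn.load_balancer.load_balancers():
        stats = conn.load_balancer.get_load_balancer_statistics(lb.id)
        print(lb.name, stats.bytes_in, stats.bytes_out)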
*** yamamoto has joined #openstack-lbaas02:02
*** yamamoto has quit IRC02:07
lxkongjohnsom: hi, one quick question, which ip address does the backend server receive? Is it the amphora vm's ip or the original client ip?02:42
*** Dinesh_Bhor has joined #openstack-lbaas02:43
johnsomThe amphora VM's member network IP. Though there is a header option that will put the client IP into the Proxy protocol or an HTTP header.02:44
johnsomOctavia amphora driver is a full proxy02:44
lxkongso i need to specify `X-Forwarded-For` in the `insert_headers` field when creating the listener, right?02:45
johnsomRight02:46
lxkongjohnsom: thanks02:46
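For anyone reading later, the listener setting discussed above looks roughly like this with the client; lb1 and the listener name are placeholders:

    openstack loadbalancer listener create \
        --name http-listener \
        --protocol HTTP --protocol-port 80 \
        --insert-headers X-Forwarded-For=true \
        lb1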
*** sapd1 has quit IRC02:56
*** sapd1 has joined #openstack-lbaas03:11
*** ramishra has joined #openstack-lbaas03:23
openstackgerritYang JianFeng proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model  https://review.openstack.org/52885003:30
*** yamamoto has joined #openstack-lbaas03:40
*** sapd1_ has joined #openstack-lbaas03:55
*** sapd1 has quit IRC03:59
*** sapd1 has joined #openstack-lbaas04:11
*** Dinesh_Bhor has quit IRC04:33
*** Dinesh_Bhor has joined #openstack-lbaas04:38
openstackgerritMerged openstack/octavia-tempest-plugin master: Add the flavor service client.  https://review.openstack.org/63040704:55
*** psachin has joined #openstack-lbaas05:55
*** aojea has joined #openstack-lbaas06:23
*** aojea has quit IRC06:27
openstackgerritYang JianFeng proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model  https://review.openstack.org/52885006:43
*** ccamposr has joined #openstack-lbaas06:51
openstackgerritYang JianFeng proposed openstack/octavia master: [WIP] Add distributor CURD api  https://review.openstack.org/63045607:00
*** numans is now known as numans_lunch07:29
*** rpittau has joined #openstack-lbaas07:41
*** gcheresh_ has joined #openstack-lbaas08:00
*** gcheresh_ has quit IRC08:06
*** gcheresh has joined #openstack-lbaas08:06
*** mkuf has quit IRC08:10
*** pcaruana has joined #openstack-lbaas08:10
*** yboaron has joined #openstack-lbaas08:26
*** yboaron has quit IRC08:27
*** yboaron has joined #openstack-lbaas08:27
*** mkuf has joined #openstack-lbaas08:33
*** yamamoto has quit IRC08:37
*** yamamoto_ has joined #openstack-lbaas08:37
*** numans_lunch is now known as numans08:41
*** Dinesh_Bhor has quit IRC09:10
*** Dinesh_Bhor has joined #openstack-lbaas09:34
*** ramishra has quit IRC09:40
*** ramishra has joined #openstack-lbaas09:40
*** celebdor has joined #openstack-lbaas10:08
*** ramishra has quit IRC10:26
*** ramishra has joined #openstack-lbaas10:39
*** yamamoto_ has quit IRC10:42
openstackgerritYang JianFeng proposed openstack/octavia master: [WIP] Add distributor CURD api  https://review.openstack.org/63045610:44
openstackgerritYang JianFeng proposed openstack/octavia master: [WIP] Add distributor CURD api  https://review.openstack.org/63045610:51
*** yamamoto has joined #openstack-lbaas10:52
*** yamamoto has quit IRC10:53
*** salmankhan has joined #openstack-lbaas10:55
sapd1johnsom: Can we change the flavor of a load balancer which was created with another flavor?10:56
*** salmankhan has quit IRC11:10
*** sapd1 has quit IRC11:14
*** mkuf has quit IRC11:14
*** Dinesh_Bhor has quit IRC11:16
openstackgerritYang JianFeng proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model  https://review.openstack.org/52885011:17
*** salmankhan has joined #openstack-lbaas11:22
openstackgerritYang JianFeng proposed openstack/octavia master: [WIP] Add distributor CURD api  https://review.openstack.org/63045611:31
*** yamamoto has joined #openstack-lbaas11:34
*** salmankhan has quit IRC11:35
*** rtjure has quit IRC11:40
*** Dinesh_Bhor has joined #openstack-lbaas11:43
*** mkuf has joined #openstack-lbaas11:48
*** Dinesh_Bhor has quit IRC11:51
*** salmankhan has joined #openstack-lbaas11:51
*** yamamoto has quit IRC11:52
*** rtjure has joined #openstack-lbaas11:53
*** rpittau has quit IRC12:11
*** rpittau has joined #openstack-lbaas12:12
*** sapd1 has joined #openstack-lbaas12:16
*** yamamoto has joined #openstack-lbaas12:19
*** yboaron has quit IRC12:31
*** rpittau has quit IRC12:42
*** rpittau has joined #openstack-lbaas12:45
*** trown|outtypewww is now known as trown12:48
*** amuller has joined #openstack-lbaas12:50
openstackgerritYang JianFeng proposed openstack/octavia master: [WIP] Add distributor CURD api  https://review.openstack.org/63045612:50
openstackgerritYang JianFeng proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model  https://review.openstack.org/52885012:54
*** rpittau has quit IRC12:58
*** rpittau has joined #openstack-lbaas12:58
*** rpittau has quit IRC13:27
*** rpittau has joined #openstack-lbaas13:29
*** yamamoto has quit IRC13:44
*** psachin has quit IRC13:44
*** pcaruana has quit IRC13:50
*** psachin has joined #openstack-lbaas13:53
*** pcaruana has joined #openstack-lbaas13:57
*** psachin has quit IRC13:58
*** psachin has joined #openstack-lbaas14:00
*** yamamoto has joined #openstack-lbaas14:16
*** yamamoto has quit IRC14:16
*** yamamoto has joined #openstack-lbaas14:17
*** yamamoto has quit IRC14:21
*** aojea_ has joined #openstack-lbaas14:32
*** yboaron has joined #openstack-lbaas14:42
*** Dinesh_Bhor has joined #openstack-lbaas14:46
*** sapd1 has quit IRC14:48
*** yamamoto has joined #openstack-lbaas14:51
johnsomsapd1_: not at this point, no. Flavors are permanent after creation.15:05
johnsomMaybe in the future15:05
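For reference, a flavor can only be chosen when the load balancer is created; a sketch assuming the --flavor option from the client work discussed above, with placeholder names:

    openstack loadbalancer create \
        --name lb1 \
        --vip-subnet-id private-subnet \
        --flavor standalone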
*** aojea_ has quit IRC15:09
*** aojea_ has joined #openstack-lbaas15:09
*** aojea_ has quit IRC15:13
*** Dinesh_Bhor has quit IRC15:37
*** gcheresh has quit IRC15:42
*** pcaruana has quit IRC16:01
*** pcaruana has joined #openstack-lbaas16:17
*** salmankhan1 has joined #openstack-lbaas16:27
johnsomFYI there is a new pycodestyle check out today, E117, that checks indentation more closely. Seems to be hitting a bunch of OpenStack repos. I checked Octavia; it is not impacted.16:28
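For projects that are impacted, the usual short-term workaround is to exclude the new check in the pep8 configuration until the code is cleaned up; a minimal sketch for the [flake8] section of tox.ini or setup.cfg:

    [flake8]
    # E117 (over-indented) is new in pycodestyle; skip it until the code is fixed
    ignore = E117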
*** salmankhan has quit IRC16:28
*** salmankhan1 is now known as salmankhan16:28
*** ramishra has quit IRC16:40
*** pcaruana has quit IRC16:45
*** psachin has quit IRC16:48
*** ccamposr has quit IRC16:53
*** celebdor has quit IRC16:59
*** yboaron has quit IRC17:06
*** yboaron has joined #openstack-lbaas17:12
*** yboaron has quit IRC17:14
*** yboaron has joined #openstack-lbaas17:15
*** yboaron has quit IRC17:19
*** salmankhan1 has joined #openstack-lbaas17:27
*** salmankhan has quit IRC17:28
*** salmankhan1 is now known as salmankhan17:28
colin-can anyone help me locate something that discusses the mutual exclusivity between the spare pool and anti-affinity?17:33
colin-https://docs.openstack.org/openstack-ansible-os_octavia/latest/configure-octavia.html#optional-tuning-octavia-for-production-use this seems to suggest it is supported17:42
johnsomcolin- So if anti-affinity is enabled, the controller will not use spare pool amphora.17:44
johnsomThis is because nova won't allow us to add an already built amphora into the server group, it won't live migrate it to meet the anti-affinity rule, and we don't want to become a nova scheduler by replacing all of those nova features.17:45
colin-and the availability zone args that the nova create api supports?17:45
colin-are those not an appropriate tool in this respect?17:45
johnsomNo, AZs are very different than anti-affinity. AZs are like separate data centers and usually do not have the same subnets, so failover can't move the VIP, etc.17:47
johnsomanti-affinity is not on the same host, but in the same AZ17:47
colin-i understand the difference in their concepts but we have used the AZ convention to describe individual compute nodes in our cloud17:48
colin-so, i was imagining that the creates to accomplish the desired size of the spare pool could be spread between however many hosts are in the aggregate that way17:48
colin-but i guess that would necessitate tracking which spares belong to which of those nodes and that wouldn't work17:49
johnsomIt would work, but we would be becoming a nova scheduler at that point, duplicating the anti-affinity feature of nova (though arguably making it more useful).17:50
johnsomIt's a line we just haven't decided we needed to cross, or even if it is a good idea.17:51
colin-understood17:51
johnsomIdeally, we could just take a spare amp, add it to the nova server group, and nova would "make it right", but that is not how it works today.17:52
colin-real shame to have to choose between the two17:52
colin-they're nice features.17:52
colin-thanks for the detail, understanding this will help plan17:52
johnsomYeah, it's something we can talk about at the meeting if you are proposing to do the coding.....17:53
colin-sounds like it might be a new enough behavior that soliciting feedback from others would be worthwhile before pursuing it, given that this has historically been an area that the project didn't want to be too opinionated on (scheduling)17:54
colin-i'll ask next meeting and see if anyone else has thoughts on it17:55
colin-if that's cool17:55
johnsomYep, sounds good. I can add it to the agenda17:55
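For context, the anti-affinity side of this discussion is controlled in octavia.conf; a sketch with illustrative values (when enabled, the controller creates a nova server group per load balancer and skips the spares pool):

    [nova]
    enable_anti_affinity = True
    anti_affinity_policy = anti-affinity    # nova server group policy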
*** rpittau has quit IRC17:56
*** eandersson has joined #openstack-lbaas18:03
*** salmankhan has quit IRC18:10
*** trown is now known as trown|lunch18:10
*** pcaruana has joined #openstack-lbaas18:17
xgermancolin-:  rm_work's az aware patches use spare pools and spread things across azs - that looks like what you are looking for18:58
rm_workyeah i was about to say18:59
xgerman:-)18:59
rm_workboth things colin- mentioned are things i've pushed for and worked on :P18:59
rm_worklooking for my patch for the other one18:59
xgermanmaybe it's time to add them in some contrib folder?18:59
rm_workwell, multi-az is ready to merge IMO18:59
rm_workerg tho i need to rebase maybe18:59
rm_workagain19:00
rm_workit's optional so like19:00
* rm_work shrugs19:00
johnsomYeah, would rather push to a merge solution rather than some contrib mess19:00
rm_workenable it or not19:00
rm_workit's marked "experimental"19:00
rm_worki think19:00
rm_workfailing py36 tho ... hmmm19:00
rm_workhttps://review.openstack.org/#/c/558962/19:00
rm_workcolin- / johnsom https://review.openstack.org/#/c/489727/19:02
rm_worki just never had time to finish it19:02
rm_workseems lots of people asking about old work i started this week tho :P19:03
rm_workcolin-: if you wanted to take that over, i don't think it's hard19:03
*** aojea has joined #openstack-lbaas19:04
colin-cool nice to hear, i will take a look at that thanks rm_work xgerman19:04
rm_worki'm gonna try to make the multi-az one pass now19:04
rm_workthe issue is it PROBABLY doesn't work with the default driver, unless somehow *all of your AZs* are on one L219:05
rm_workwhich is unlikely19:05
rm_workin which case you have to use the L3 network driver i wrote too, which is NOT ready to merge, lol19:05
colin-you mean a common VLAN?19:05
*** aojea has quit IRC19:09
johnsomYeah, the VIP address has to be valid on the active and standby instances, so if they are on different AZs you need to be able to move the VIP between the instances.19:10
colin-ah ok, yeah that makes sense19:10
johnsomOtherwise you need to NAT (floating IP) or be running L3 models19:11
*** trown|lunch is now known as trown19:23
*** tomtom001 has joined #openstack-lbaas19:33
tomtom001hello, I'm running octavia in queens and I deleted my LBs from the database, yet octavia keeps spinning up instances. Is there a cache or something that it uses for this?19:34
johnsomNo, but it is set up to automatically rebuild failed/deleted instances. Why are you deleting things via the database and not the API?19:37
johnsomAlso, if you enabled the spares pool it will keep building spare instances.19:38
tomtom001We can't seem to delete load balancers the normal way19:39
tomtom001ok so how do I clear the spares pool19:39
tomtom001??19:39
johnsomWhy?19:39
johnsomWhy can't you delete them the normal way?19:40
tomtom001I don't know it just says unable to delete load balancer19:40
johnsomWell, then something is wrong in your deployment or you have an active listener on the load balancer (which it says in the response message).19:41
johnsomSpares pool is configured via this setting: https://docs.openstack.org/octavia/latest/configuration/configref.html#house_keeping.spare_amphora_pool_size19:41
tomtom001so I'd have to unconfigure that to stop the behaviour?  But surely something is telling the pool to keep making instances?19:42
openstackgerritAdam Harwell proposed openstack/octavia master: Experimental multi-az support  https://review.openstack.org/55896219:42
johnsomthe default is 0 and will not build spare instances. If you change that back to 0 or comment it out, restart the housekeeping process(es) to have it take effect. Then it will no longer build spare amphora.19:42
johnsomYeah, if that is configured to something larger than 0, it will rebuild instances to make sure there is a minimum of that number of spares booted (but not yet configured with a load balancer).19:43
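In configuration terms, the setting johnsom is pointing at is a minimal octavia.conf entry; housekeeping has to be restarted for changes to take effect:

    [house_keeping]
    # default is 0 = no spares; any value > 0 keeps that many amphorae booted
    spare_amphora_pool_size = 0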
openstackgerritAdam Harwell proposed openstack/octavia master: Add usage admin resource  https://review.openstack.org/55754819:46
tomtom001Yeah but how does it know how many to build if according to the DB there are no loadbalancers?19:48
rm_workthis: https://docs.openstack.org/octavia/latest/configuration/configref.html#house_keeping.spare_amphora_pool_size19:49
rm_workis how it knows how many to build. exactly that many.19:49
johnsomYeah, if you set that, you told it to boot "spare" instances to be used if someone requests a load balancer.19:50
rm_workwhen you say "I deleted my LBs from the database, yet octavia keeps spinning up instances", what exactly do you mean by "deleted my LBs"?19:50
rm_worklike19:50
tomtom001remove them from the load_balancer table.19:50
rm_workok19:50
rm_workso, they will still have records in the amphora table19:50
tomtom001thus I would figure that spares would be based on what is in that table.19:50
rm_workspares is completely different from LBs19:51
johnsomNo, spares are not tied to load balancers until a load balancer is created via the API19:51
tomtom001I removed those too, I cleared out the pools, listeners, health_monitors, amphora, etc... tables.19:51
rm_workand then it continues to create new records in the amphora table?19:52
tomtom001yes and boot new instances in OS, i can see them under the admin -> instances19:52
tomtom001that's why I'm wondering if there is a cache or something controlling it19:53
rm_workyes, there is a configuration value19:53
rm_workhttps://docs.openstack.org/octavia/latest/configuration/configref.html#house_keeping.spare_amphora_pool_size19:53
johnsomJust the configuration setting I posted19:53
rm_workthe number of loadbalancers is irrelevant19:53
tomtom001I've stopped all octavia services -> cleared the tables -> started octavia services, and then it continues to build spares.19:53
rm_workthat makes sense19:53
johnsomYep, as long as that setting is set, it will19:54
rm_workif the value in https://docs.openstack.org/octavia/latest/configuration/configref.html#house_keeping.spare_amphora_pool_size is set to > 019:54
colin-always working to achieve that value19:54
tomtom001ok, I set it to 2, but 6 amphora instances are created when the octavia services are started back up19:56
rm_workhow many housekeeping processes do you run?19:56
colin-i sometimes see extra when i run the processes on multiple controllers, but i'm not sure if it would account for 3x19:56
johnsomThat *may* happen. It can overshoot some, but six seems like a reaction to deleting stuff from the database19:56
johnsom#startmeeting Octavia20:00
openstackMeeting started Wed Jan 30 20:00:26 2019 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: Octavia)"20:00
openstackThe meeting name has been set to 'octavia'20:00
nmagnezio/20:00
colin-o/20:00
johnsomHi folks20:00
johnsom#topic Announcements20:01
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"20:01
tomtom001ok that's making sense: 3 controllers, so 2 per controller20:01
johnsomFirst up, a reminder that TC elections are coming up.  If you have an interest in running for the TC, please see this post:20:01
johnsom#link http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001829.html20:01
johnsomNext up, HAProxy has reached out and is soliciting feedback on the next "native" protocol to add to HAProxy.20:02
johnsomThey recently added gRPC and are interested in what folks are looking for.20:02
colin-that's awesome20:02
rm_worko/20:03
johnsomThey have put up a poll here:20:03
johnsom#link https://xoyondo.com/ap/6KYAwtyjNiaEOjJ20:03
johnsomPlease give them feedback on what you would like to see20:03
johnsomIf you select other, either comment on the poll or let me know what you are looking for and I can relay it20:03
johnsomIf you aren't following the upstream HAProxy work, HTTP/2 is really coming together with the 1.9 release.20:05
johnsom#link https://www.haproxy.com/blog/haproxy-1-9-has-arrived/20:05
rm_workah yeah i was wondering why that wasn't in the poll, but i guess since it was partially in 1.8 then they were already planning to finish it in 1.920:05
colin-was surprised not to see amqp there for our own msging20:06
rm_workwas googling to see if they finished it before i put it as "other" :P20:06
johnsomYeah, it's pretty much done in 1.9 but they are working out  the bugs, edge cases, etc.20:06
johnsomAny other announcements today?20:07
rm_workcool, so is it packaged anywhere yet? :P20:07
johnsomWell, to quote the release announcement "An important point to note, this technical release is not suitable for inclusion in distros, as it will only be maintained for approximately one year (till 2.1 is out). Version 2.0 coming in May will be long-lived and more suitable for distros."20:07
*** aojea has joined #openstack-lbaas20:07
johnsomSo, likely no.20:08
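For the curious, HTTP/2 termination in HAProxy 1.9 is enabled per bind line via ALPN; a hand-written illustration only, not something the amphora haproxy templates generate today:

    frontend www
        bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
        mode http
        default_backend app_servers

    backend app_servers
        mode http
        server web1 192.0.2.10:80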
johnsomHowever, we might want to consider work on our DIB setup to allow users to point to a non-distro package. Not a high priority (easy enough to install it in an image), but something for the roadmap.20:09
openstackgerritMerged openstack/octavia-tempest-plugin master: Add the provider service client.  https://review.openstack.org/63040820:09
johnsomI didn't see any other announcements, so...20:09
johnsom#topic Brief progress reports / bugs needing review20:09
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:09
johnsomI have been working on flavors. The client, tempest, and octavia patches are all done and ready for review.  Some have started merging.20:10
johnsomI also added an API to force a configuration refresh on the amphora-agent.  That is all posted as well.20:10
nmagneziI back ported some fixes for stable/queens, a chain of 3 patches (the last one is from cgoncalves) that starts here: https://review.openstack.org/#/c/633412/120:11
nmagneziAll related to IPv6 / Keepalived / Active Standby stuff20:11
johnsomI had to stop and deal with some gate issues this week as well. 1. requests_toolbelt released with broken dependencies which broke a release test gate. 2. pycodestyle updated and broke some pep8 stuff in openstacksdk.20:11
johnsomCurrently I am working on getting the openstacksdk updated for our new API capabilities.20:12
johnsomtags, stats, amp/provider/flavor/flavorprofile, etc.20:12
johnsomAnd finally, working on patch reviews.20:12
johnsomnmagnezi Thank you!20:13
rm_worksorry, been reading that haproxy 1.9 announcement. nice caching stuff coming soon too, we might need to think about that again20:13
nmagnezijohnsom, np at all20:13
johnsomOnce the SDK stuff is cleaned up I will work on reviews and helping with the TLS patches that are up for review20:13
nmagneziHad some conflicts so look closely :)20:13
johnsomYeah, that is why I want to carve some time out for that review20:14
rm_workthe last change there (ipv6 prefix) needs to merge in rocky first too... need to figure out why it has a -1 there20:15
johnsomYeah, the caching stuff was neat. At least in 1.8 you had to turn off other things though to use it. Hopefully that got fixed.20:15
nmagnezirm_work, thank you for pointing this out!20:16
johnsomAny other updates today?20:16
johnsom#topic Discuss spares pool and anti-affinity (colin-)20:17
*** openstack changes topic to "Discuss spares pool and anti-affinity (colin-) (Meeting topic: Octavia)"20:17
rm_workoh, cgoncalves responded on the rocky one, i may be missing other backports there too, if you have time to look at that20:17
rm_work^^ nmagnezi20:17
rm_worksimilar to your chain in queens20:17
nmagnezirm_work, will do20:17
johnsomSo colin- raised the question of enabling spares pool when anti-affinity is enabled.  colin- Did you want to lead this discussion?20:17
nmagnezirm_work, I know that the second patch in the chain exists in Rocky , will double check for the rest20:18
colin-sure, was looking at what the limitations were and the channel helped me catch up on why that's not the case today. in doing so we discussed a couple of ways it could be addressed, including rm_work's experimental change here https://review.openstack.org/55896220:18
rm_work👍20:19
rm_workah yeah, so that is the way I handled it for AZs, not sure if we want to try to handle it similarly for flavors20:19
colin-i was curious if the other operators have a desire/need for this behavior or if it would largely be for convenience sake? is anti-affinity strongly preferred over spares pool generally speaking?20:19
rm_workoh sorry this is anti-affinity not flavors20:19
johnsomThe question came up whether we should be managing the scheduling of spare amphora such that they could be used to replace the nova anti-affinity capability.20:20
rm_workso, anti-affinity really only matters for active-standby, and the idea is that there it's ok to wait a little longer, so spares pool isn't as important20:20
colin-ok, that was sort of the intent i was getting. that makes sense20:20
*** salmankhan has joined #openstack-lbaas20:21
* rm_work waits for someone to confirm what he just said20:21
johnsomYeah, it should be a delta of about 30 seconds20:21
rm_workyeah i don't THINK it's relevant for STANDALONE20:21
johnsomAt least on well behaved clouds20:21
rm_worksince ... there's nothing to anti-affinity to20:22
johnsomRight, standalone we don't enable anti-affinity because it would be dumb to have server groups with one instance in them20:22
rm_workso that's why we never really cared to take the effort20:22
colin-i'm guessing things like querying nova to check a hypervisor attribute on an amphora before pulling a spare out of the pool would be out of the question?20:23
colin-a comparison to see if it matches the value on the STANDALONE during a failover maybe20:23
*** salmankhan1 has joined #openstack-lbaas20:23
rm_worki mean, the issue is cache reliability20:23
johnsomYeah, I think ideally, nova would allow us to add an instance to a server group with anti-affinity on it, and nova would just live migrate it if it wasn't meeting the anti-affinity rule. But this is not a capability of nova today.20:23
rm_workwith AZ i'm pretty confident20:23
rm_workwith HV, not so much20:24
rm_workyes, i had a productive talk about that in the back of an uber with a nova dev at the PTG a year ago20:24
rm_workand then i forgot who it was and nothing happened <_<20:24
*** salmankhan has quit IRC20:25
*** salmankhan1 is now known as salmankhan20:25
johnsomI mean, you could query nova for the current host info, then decide which to pull from the spares pool. That is basically the scheduling that *could* be added to octavia. Though it seems heavyweight.20:25
colin-and would be assuming some scheduling responsibilities it sounds like20:26
rm_workyes, and unless the spares pool manager tries to maintain this20:26
colin-conceptually20:26
johnsomWith nova live migration, you don't know if the boot host is the current host, so you have to ask20:26
rm_workwhich it COULD technically20:26
rm_workthere's a good chance you'd have to try several times20:26
rm_workor be out of luck20:26
rm_workespecially if your nova uses pack20:26
*** celebdor has joined #openstack-lbaas20:26
rm_workso if we did add a field, and had housekeeping check the accuracy of it every spares run.... that could work20:26
rm_workbut still like... eh20:27
colin-is this the relevant section for this logic? https://github.com/openstack/octavia/blob/979144f2fdf7c391b6c154c01a22f107f45da833/octavia/controller/worker/flows/amphora_flows.py#L224-L23020:27
johnsomYeah, I feel like it's a bit of tight coupling and getting into some nasty duplicate scheduling code with nova.20:27
johnsomNope, just a second20:28
johnsom#link https://github.com/openstack/octavia/blob/master/octavia/controller/worker/tasks/database_tasks.py#L47920:28
johnsomThis is where we check if we can use a spare and allocate it to an LB20:29
colin-thanks20:29
johnsomThere would probably be code in the "create amphora" flow to make sure you create spares with some sort of host diversity too20:29
johnsombasically if that MapLoadbalancerToAmphora task fails to assign a spare amphora, we then go and boot one20:30
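In rough pseudocode, the flow johnsom describes looks like this; names are simplified for illustration and are not the actual Octavia task code:

    spare = map_loadbalancer_to_amphora(lb_id)   # try to claim a READY spare from the pool
    if spare is None:
        amphora = boot_new_amphora(lb_id)        # no spare available, fall through to a nova boot
    else:
        amphora = spare                          # reuse the spare; the much faster path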
johnsomSo, I guess the question on the table is: Is the boot time long enough to warrant building compute scheduling into the Octavia controller or not?20:31
colin-given the feedback i've gotten i'm leaning towards not (yet!)20:31
rm_workright, exactly what i have in my AZ patch, just using a different field20:32
rm_workthat's host-based instead of AZ based20:32
rm_workwhich i commented in the patch i think20:32
rm_workand yeah, i think it's not worth it20:32
johnsomFor me, the 30 second boot time is ok.20:32
colin-ok. appreciate the indulgence :)20:33
rm_workeven if it's minutes....20:33
johnsomPlus we still have container dreams which might help20:33
rm_workit's an active-standby LB20:33
rm_workit's probably fine20:33
rm_worki see failures so infrequently, i would hope you wouldn't see two on the same LB within 10 minutes20:33
rm_workand if you do, i feel like probably you have bigger issues20:33
jitekajohnsom: for us it's more like 1:30, and up to 2min when the first amp hits a new compute20:33
rm_workespecially if antri-affinity is enabled20:34
johnsomjiteka Why is it 1:30? have you looked at it?20:34
rm_workthey are still using celerons20:34
johnsomDisk IO? Network IO from glance?20:34
rm_work😇✌️20:34
jitekajohnsom: our glance is global across regions (datacenter/location)20:34
johnsomhyper visor overhead?20:34
rm_workno local image cacheing?20:34
jitekawe have local cache20:35
jitekabut not pre-cached20:35
johnsomYeah, the 2 minutes on first boot should be loading the cache20:35
jitekawhen new amphora image is promoted and tagged20:35
rm_workoh, right, new compute20:35
jitekaand used for the first time20:35
rm_workyeah, at RAX they had a pre-cache script thing20:35
johnsomBut after that, I'm curious what is driving the 1:30 time.20:35
rm_workso they could define images to force into cache on new computes20:35
jitekain public cloud yes20:35
jitekacan't say in private ;)20:35
rm_work:P20:36
rm_workbut per what i said before, even up to 10m... not a real issue IMO20:36
johnsomjiteka20:36
johnsomAre you using centos images?20:36
colin-the spares pool has given us the latitude to overlook that for the time being but abandoning it will be cause to look into long boot times, for sure20:36
jitekaalso I don't recall any way directly with glance to cherry-pick image you want to pre-cache among all available image20:36
jitekajohnsom: using ubuntu20:36
rm_workah for new LBs you mean? colin-20:36
rm_workbecause that's the only place anyone would ever see an issue with times20:36
colin-yes, jiteka and i are referring to the same platform20:37
rm_workjiteka: they had a custom thing that ran on the HVs20:37
jitekajohnsom: Ubuntu 16.04.5 LTS20:37
johnsomHmm, yeah, would be interested to learn what is slow.20:37
rm_workthey == rax20:37
johnsomOk, yeah, I know there is a centos image slowdown that folks are looking into.20:37
jitekajohnsom: all other officially released and used VMs in our cloud are centos based20:37
jitekajohnsom: which makes octavia often hit that "first spawn not yet cached" situation20:38
johnsom#link https://bugzilla.redhat.com/show_bug.cgi?id=166661220:39
openstackbugzilla.redhat.com bug 1666612 in systemd "Rules "uname -p" and "systemd-detect-virt" kill the system boot time on large systems" [High,Assigned] - Assigned to jsynacek20:39
jitekajohnsom: sorry, I mean all other users used the same glance images that we provided to them, which are centos based20:39
johnsomJust an FYI on the centos boot time issue.20:39
jitekajohnsom: thx20:40
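As a sketch of the kind of pre-cache step rm_work describes (entirely hypothetical: it just boots and deletes a throwaway server on the new compute so the image lands in that host's local cache; the flavor, network, host name and image tag are made up):

    IMAGE_ID=$(openstack image list --tag amphora -f value -c ID | head -1)
    openstack server create --image "$IMAGE_ID" --flavor m1.amphora \
        --network private --availability-zone nova:compute-42 --wait cache-warmer
    openstack server delete --wait cache-warmer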
johnsomOk, are we good on the spares/anti-affinity topic? Should I move on to the next topic?20:41
colin-yes20:41
rm_workIMO AZ anti-affinity is better anyway20:41
rm_workif you can use it20:41
rm_work:P20:41
johnsomCool. Thanks for bringing it up. I'm always happy to have a discussion on things.20:41
johnsom#topic Need to start making decisions about what is going to land in Stein20:42
*** openstack changes topic to "Need to start making decisions about what is going to land in Stein (Meeting topic: Octavia)"20:42
johnsomI want to raise awareness that we are about one month from feature freeze for Stein!20:42
johnsomBecause of that I plan to focus more on reviews for features we committed to completing at the last PTG and bug fixes.20:43
jitekajohnsom: I noticed the exact same behaviour in my rocky devstack when I perform the "Rotating the Amphora Images" scenario: after killing the spare pool and forcing it to re-generate using the newly tagged amphora image, it takes a long time. Once that first amphora ends up ready, the next ones are really fast20:43
johnsomSadly we don't have a lot of reviewer cycles for Stein (at least that is my perception) so I'm going to be a bit picky about what we focus on and prioritize reviews for things we agreed on at the PTG.20:44
johnsom#link https://etherpad.openstack.org/p/octavia-stein-ptg20:44
jitekajohnsom: and nothing more to add about this part20:44
johnsomThat is the etherpad with the section we listed our priorities.20:44
johnsomDo you all agree with my assessment and approach?  Discussion?20:45
johnsomI would hate to see patches for things we agreed on at the PTG go un-merged because we didn't have review cycles or spent them on patches for things we didn't agree to.20:46
openstackgerritAdam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/43561220:47
colin-anything known to be slipping out of Stein at this stage? or too early20:47
colin-from the etherpad20:47
johnsom^^^^ Speaking of patches not targeted for Stein....20:47
rm_work:P20:48
rm_workpatch not targeted for merging ever :P20:48
johnsomI think the TLS patches (client certs, backend re-encryption) are posted but at-risk.  Flavors is in the same camp.  Log offloading.20:48
johnsomprivsep and members as a base resource are clearly out.20:49
johnsomHTTP2 is out20:49
rm_workdamn, that one is so easy too :P20:49
rm_workjust needed to ... do it20:49
rm_workbut agreed20:49
johnsomOk. I didn't want to make a cut line today. I think that is too early.20:50
johnsomI just wanted to raise awareness and make sure we discussed the plan as we head to feature freeze.20:50
johnsomWe have a nice lengthy roadmap, but limited development resources, so we do need to plan a bit.20:51
rm_worki'm trying to spin back up20:51
rm_workjust catching up on everything20:51
johnsomGood news is, we did check things off that list already, so we are getting stuff done and there are patches posted.20:52
johnsomOk, unless there are more comments, I will open it to open discussion20:52
johnsom#topic Open Discussion20:53
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"20:53
johnsomOther topics today?20:53
colin-we have a plan to maintain a separate aggregate with computes that will exclusively host amphorae; i believe we have what we need to proceed, but wanted to see if anyone has advice against this or experience doing it successfully?20:53
jitekajohnsom: I would like to know more about Containerized amphora20:54
jitekajohnsom: on the etherpad, maybe more context about statement "k8s performance overhead"20:54
johnsomcolin- The only issue I have seen is people treating that group as pets and getting scared to update the hosts, or a lack of maintenance as they aren't on the main compute radar.20:55
jitekawith that strategy of an aggregate of a few computes hosting all the small amphora VMs, I feel that our build times will be relatively small except when performing a new image rotation20:55
colin-johnsom: strictly cattle here, good call out :)20:56
johnsomjiteka Yeah, lengthy discussion that comes up regularly. One of the baseline issues with k8s is the poor network performance due to the ingress/egress handling.20:56
johnsomjiteka The real issue now is we don't have anyone that can take time to work on containers.  We were close to having lxc working, but there were bugs with nova-lxd. These are likely not issues any longer, but we don't have anyone to look at them.20:57
johnsomjiteka Also, k8s isn't a great platform for base services like networking. You don't exactly want it to shuffle/restart the pods during someone's TCP flow.....20:58
johnsomWe tend to work a bit lower in the stack.  That said, using containers is still something we think is good and should happen. Just maybe not k8s, or some of the orchestrators that aren't conducive to networking.21:00
johnsomDang, out of time. Thanks folks!21:00
johnsom#endmeeting21:00
*** openstack changes topic to "Discussions for Octavia | Stein priority review list: https://etherpad.openstack.org/p/octavia-priority-reviews"21:00
openstackMeeting ended Wed Jan 30 21:00:33 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)21:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-01-30-20.00.html21:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-01-30-20.00.txt21:00
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-01-30-20.00.log.html21:00
jitekathanks21:00
johnsomjiteka Let me know if you want to keep talking about the containers.21:00
jitekanot now, but later, it's still something we have in mind for a future deployment phase to reduce the amphora footprint on our hypervisors21:01
johnsomYeah, same here.21:01
jitekajohnsom: also I will need to get more familiar with K8s to have more specific questions21:02
johnsomZun was another option that someone was starting to work on, but I don't think it's working yet21:02
*** aojea has quit IRC21:08
*** aojea has joined #openstack-lbaas21:08
rm_workyeah i would much rather have it work with something like zun or nova-lxc21:09
jitekarm_work: I don't have strong opinion about using k8s over zun/nova-lxc, it was more container vs VM for me21:14
johnsomOr have both with Kata containers!  lol21:15
jitekajohnsom: kata what ? :D21:15
johnsomjiteka https://katacontainers.io/21:16
*** salmankhan1 has joined #openstack-lbaas21:16
*** salmankhan has quit IRC21:16
*** salmankhan1 is now known as salmankhan21:16
jitekajohnsom: I was joking I heard a lot about it at Openstack summit in Vancouver  ;)21:18
*** salmankhan has quit IRC21:23
*** aojea has quit IRC21:31
*** aojea has joined #openstack-lbaas21:32
*** aojea has quit IRC21:39
*** amuller has quit IRC21:42
*** openstackgerrit has quit IRC21:50
*** trown is now known as trown|outtypewww22:07
rm_workso much <_<22:48
johnsomI am having fun getting my head around the sdk framework.22:56
johnsomIt's always too long between working with it22:56
johnsomIt's fine testing frameworks like this:22:57
johnsomhttps://www.irccloud.com/pastebin/T6sepeoD/22:57
johnsomThat always makes me scratch my head22:57
johnsomWell, until I figure ^^^ that out22:57
