johnsom | FYI: tags support for the openstacksdk : https://review.openstack.org/633849 | 00:11 |
---|---|---|
openstackgerrit | Adam Harwell proposed openstack/python-octaviaclient master: Add some commands for octavia flavor and flavor_profile https://review.openstack.org/624686 | 00:25 |
rm_work | wait why did the tempest side patches fail? | 00:47 |
johnsom | Some nova silliness today | 00:47 |
johnsom | "no hosts" | 00:48 |
rm_work | >_> | 00:50 |
rm_work | well, lots of merging happening today | 00:50 |
rm_work | exciting | 00:50 |
johnsom | Yeah, good stuff! | 00:50 |
johnsom | I'm trying to update openstacksdk as we have neglected it for a bit. | 00:50 |
johnsom | Plus add provider/flavor stuffs | 00:50 |
rm_work | ah yeah | 00:51 |
colin- | has there ever been discussion or consideration of enhancing loadbalancer list output with loadbalancer stats? | 00:51 |
openstackgerrit | Merged openstack/python-octaviaclient master: Add some commands for octavia flavor and flavor_profile https://review.openstack.org/624686 | 00:51 |
johnsom | It has been discussed. It gets pretty busy though with all of that data | 00:52 |
colin- | yeah i can see the dilemma, could easily be too much | 00:53 |
johnsom | I think the discussion led to one of two different paths: | 00:53 |
johnsom | 1. Maybe we need a list stats API | 00:53 |
johnsom | 2. Maybe we should be shipping those off to a service that deals with stats in a better way than we do. | 00:54 |
johnsom | Frankly, IMO the whole stats functionality needs to be re-worked from the amp up. | 00:55 |
johnsom | I say that as the author of most of the drivel that is there now. | 00:55 |
colin- | haha, fair | 00:55 |
johnsom | It was a "we need this now, so make something work, knowing we should fix this later" item | 00:56 |
colin- | found myself looking at list output wondering which was the busiest (currently) | 00:56 |
colin- | looping through them to figure it out but it felt inelegant | 00:56 |
johnsom | Mostly because OpenStack was still figuring out the metrics/stats strategy | 00:56 |
colin- | gotcha | 00:56 |
johnsom | IMO really it should be time series data with deltas. | 00:57 |
johnsom | That can roll up for queries via the API | 00:58 |
johnsom | Or be fed to some useful metrics service | 00:58 |
colin- | that would be cool, yeah | 00:58 |
johnsom | Solves a lot of the failover ugliness we have today | 00:58 |
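A quick sketch of the loop colin- describes above, using the existing per-LB stats call (there is no list-stats API yet). The field name total_connections is assumed from the standard stats output, and sorting by it is just one way to define "busiest":

```shell
# List every load balancer, fetch its stats one at a time, and sort by
# total_connections to surface the busiest ones. One API call per LB,
# so this is exactly the inelegant loop mentioned above.
for lb in $(openstack loadbalancer list -f value -c id); do
    echo "$lb $(openstack loadbalancer stats show "$lb" -f value -c total_connections)"
done | sort -k2 -rn | head -5
```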
openstackgerrit | Merged openstack/octavia master: Update the amphora driver for flavor support. https://review.openstack.org/631889 | 01:28 |
openstackgerrit | Merged openstack/octavia master: Add compute flavor support to the amphora driver https://review.openstack.org/631906 | 01:28 |
openstackgerrit | Merged openstack/octavia master: Fix a topology bug in flavors support https://review.openstack.org/632551 | 01:28 |
*** sapd1_ has quit IRC | 01:40 | |
*** sapd1 has joined #openstack-lbaas | 01:42 | |
johnsom | Ok, stats added to the SDK. Calling it a night. Failover, providers, flavor, flavor profiles, and amphora tomorrow | 01:45 |
*** yamamoto has joined #openstack-lbaas | 02:02 | |
*** yamamoto has quit IRC | 02:07 | |
lxkong | johnsom: hi, one quick question, which ip address does the backend server receive? Is it the amphora vm's ip or the original client ip? | 02:42 |
*** Dinesh_Bhor has joined #openstack-lbaas | 02:43 | |
johnsom | The amphora VM's IP on the member network. Though there is a header option that will put the client IP into the PROXY protocol or an HTTP header. | 02:44 |
johnsom | Octavia amphora driver is a full proxy | 02:44 |
lxkong | so i need to specify `X-Forwarded-For` in the `insert_headers` field when creating the listener, right? | 02:45 |
johnsom | Right | 02:46 |
lxkong | johnsom: thanks | 02:46 |
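For reference, a minimal sketch of the CLI call lxkong is after; the listener and load balancer names are placeholders, and other headers (e.g. X-Forwarded-Port) can be added the same way:

```shell
# Create an HTTP listener that injects the original client IP into an
# X-Forwarded-For header on each request sent to the backend members.
openstack loadbalancer listener create \
    --name http-listener \
    --protocol HTTP \
    --protocol-port 80 \
    --insert-headers X-Forwarded-For=true \
    my-lb
```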
*** sapd1 has quit IRC | 02:56 | |
*** sapd1 has joined #openstack-lbaas | 03:11 | |
*** ramishra has joined #openstack-lbaas | 03:23 | |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model https://review.openstack.org/528850 | 03:30 |
*** yamamoto has joined #openstack-lbaas | 03:40 | |
*** sapd1_ has joined #openstack-lbaas | 03:55 | |
*** sapd1 has quit IRC | 03:59 | |
*** sapd1 has joined #openstack-lbaas | 04:11 | |
*** Dinesh_Bhor has quit IRC | 04:33 | |
*** Dinesh_Bhor has joined #openstack-lbaas | 04:38 | |
openstackgerrit | Merged openstack/octavia-tempest-plugin master: Add the flavor service client. https://review.openstack.org/630407 | 04:55 |
*** psachin has joined #openstack-lbaas | 05:55 | |
*** aojea has joined #openstack-lbaas | 06:23 | |
*** aojea has quit IRC | 06:27 | |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model https://review.openstack.org/528850 | 06:43 |
*** ccamposr has joined #openstack-lbaas | 06:51 | |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: [WIP] Add distributor CURD api https://review.openstack.org/630456 | 07:00 |
*** numans is now known as numans_lunch | 07:29 | |
*** rpittau has joined #openstack-lbaas | 07:41 | |
*** gcheresh_ has joined #openstack-lbaas | 08:00 | |
*** gcheresh_ has quit IRC | 08:06 | |
*** gcheresh has joined #openstack-lbaas | 08:06 | |
*** mkuf has quit IRC | 08:10 | |
*** pcaruana has joined #openstack-lbaas | 08:10 | |
*** yboaron has joined #openstack-lbaas | 08:26 | |
*** yboaron has quit IRC | 08:27 | |
*** yboaron has joined #openstack-lbaas | 08:27 | |
*** mkuf has joined #openstack-lbaas | 08:33 | |
*** yamamoto has quit IRC | 08:37 | |
*** yamamoto_ has joined #openstack-lbaas | 08:37 | |
*** numans_lunch is now known as numans | 08:41 | |
*** Dinesh_Bhor has quit IRC | 09:10 | |
*** Dinesh_Bhor has joined #openstack-lbaas | 09:34 | |
*** ramishra has quit IRC | 09:40 | |
*** ramishra has joined #openstack-lbaas | 09:40 | |
*** celebdor has joined #openstack-lbaas | 10:08 | |
*** ramishra has quit IRC | 10:26 | |
*** ramishra has joined #openstack-lbaas | 10:39 | |
*** yamamoto_ has quit IRC | 10:42 | |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: [WIP] Add distributor CURD api https://review.openstack.org/630456 | 10:44 |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: [WIP] Add distributor CURD api https://review.openstack.org/630456 | 10:51 |
*** yamamoto has joined #openstack-lbaas | 10:52 | |
*** yamamoto has quit IRC | 10:53 | |
*** salmankhan has joined #openstack-lbaas | 10:55 | |
sapd1 | johnsom: Can we change the flavor of a load balancer that was created with another flavor? | 10:56 |
*** salmankhan has quit IRC | 11:10 | |
*** sapd1 has quit IRC | 11:14 | |
*** mkuf has quit IRC | 11:14 | |
*** Dinesh_Bhor has quit IRC | 11:16 | |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model https://review.openstack.org/528850 | 11:17 |
*** salmankhan has joined #openstack-lbaas | 11:22 | |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: [WIP] Add distributor CURD api https://review.openstack.org/630456 | 11:31 |
*** yamamoto has joined #openstack-lbaas | 11:34 | |
*** salmankhan has quit IRC | 11:35 | |
*** rtjure has quit IRC | 11:40 | |
*** Dinesh_Bhor has joined #openstack-lbaas | 11:43 | |
*** mkuf has joined #openstack-lbaas | 11:48 | |
*** Dinesh_Bhor has quit IRC | 11:51 | |
*** salmankhan has joined #openstack-lbaas | 11:51 | |
*** yamamoto has quit IRC | 11:52 | |
*** rtjure has joined #openstack-lbaas | 11:53 | |
*** rpittau has quit IRC | 12:11 | |
*** rpittau has joined #openstack-lbaas | 12:12 | |
*** sapd1 has joined #openstack-lbaas | 12:16 | |
*** yamamoto has joined #openstack-lbaas | 12:19 | |
*** yboaron has quit IRC | 12:31 | |
*** rpittau has quit IRC | 12:42 | |
*** rpittau has joined #openstack-lbaas | 12:45 | |
*** trown|outtypewww is now known as trown | 12:48 | |
*** amuller has joined #openstack-lbaas | 12:50 | |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: [WIP] Add distributor CURD api https://review.openstack.org/630456 | 12:50 |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model https://review.openstack.org/528850 | 12:54 |
*** rpittau has quit IRC | 12:58 | |
*** rpittau has joined #openstack-lbaas | 12:58 | |
*** rpittau has quit IRC | 13:27 | |
*** rpittau has joined #openstack-lbaas | 13:29 | |
*** yamamoto has quit IRC | 13:44 | |
*** psachin has quit IRC | 13:44 | |
*** pcaruana has quit IRC | 13:50 | |
*** psachin has joined #openstack-lbaas | 13:53 | |
*** pcaruana has joined #openstack-lbaas | 13:57 | |
*** psachin has quit IRC | 13:58 | |
*** psachin has joined #openstack-lbaas | 14:00 | |
*** yamamoto has joined #openstack-lbaas | 14:16 | |
*** yamamoto has quit IRC | 14:16 | |
*** yamamoto has joined #openstack-lbaas | 14:17 | |
*** yamamoto has quit IRC | 14:21 | |
*** aojea_ has joined #openstack-lbaas | 14:32 | |
*** yboaron has joined #openstack-lbaas | 14:42 | |
*** Dinesh_Bhor has joined #openstack-lbaas | 14:46 | |
*** sapd1 has quit IRC | 14:48 | |
*** yamamoto has joined #openstack-lbaas | 14:51 | |
johnsom | sapd1_: not at this point, no. Flavors are permanent after creation. | 15:05 |
johnsom | Maybe in the future | 15:05 |
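Since the flavor is fixed at creation time, it has to be chosen up front. A sketch, assuming the --flavor option from the flavor work being merged in this log and a pre-created flavor named "single-amp" (both names are placeholders):

```shell
# Pick the flavor when the load balancer is created; it cannot be
# changed afterwards (a new LB would have to be built instead).
openstack loadbalancer create \
    --name my-lb \
    --vip-subnet-id my-subnet \
    --flavor single-amp
```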
*** aojea_ has quit IRC | 15:09 | |
*** aojea_ has joined #openstack-lbaas | 15:09 | |
*** aojea_ has quit IRC | 15:13 | |
*** Dinesh_Bhor has quit IRC | 15:37 | |
*** gcheresh has quit IRC | 15:42 | |
*** pcaruana has quit IRC | 16:01 | |
*** pcaruana has joined #openstack-lbaas | 16:17 | |
*** salmankhan1 has joined #openstack-lbaas | 16:27 | |
johnsom | FYI there is a new pycodestyle check out today, E117, that checks indentation more closely. Seems to be hitting a bunch of OpenStack repos. I checked Octavia; it is not impacted. | 16:28 |
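For anyone curious what the new check flags, E117 is "over-indented"; a tiny sketch of code that the new pycodestyle release would now fail:

```python
# pycodestyle E117: the function body is over-indented (8 spaces
# instead of 4), which older pycodestyle releases did not flag.
def get_status():
        return "ONLINE"
```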
*** salmankhan has quit IRC | 16:28 | |
*** salmankhan1 is now known as salmankhan | 16:28 | |
*** ramishra has quit IRC | 16:40 | |
*** pcaruana has quit IRC | 16:45 | |
*** psachin has quit IRC | 16:48 | |
*** ccamposr has quit IRC | 16:53 | |
*** celebdor has quit IRC | 16:59 | |
*** yboaron has quit IRC | 17:06 | |
*** yboaron has joined #openstack-lbaas | 17:12 | |
*** yboaron has quit IRC | 17:14 | |
*** yboaron has joined #openstack-lbaas | 17:15 | |
*** yboaron has quit IRC | 17:19 | |
*** salmankhan1 has joined #openstack-lbaas | 17:27 | |
*** salmankhan has quit IRC | 17:28 | |
*** salmankhan1 is now known as salmankhan | 17:28 | |
colin- | can anyone help me locate something that discusses the mutual exclusivity between the spare pool and anti-affinity? | 17:33 |
colin- | https://docs.openstack.org/openstack-ansible-os_octavia/latest/configure-octavia.html#optional-tuning-octavia-for-production-use this seems to suggest it is supported | 17:42 |
johnsom | colin- So if anti-affinity is enabled, the controller will not use spare pool amphora. | 17:44 |
johnsom | This is because nova won't allow us to add an already built amphora into the server group, it won't live migrate it to meet the anti-affinity rule, and we don't want to become a nova scheduler by replacing all of those nova features. | 17:45 |
colin- | and the availability zone args that the nova create api supports? | 17:45 |
colin- | are those not an appropriate tool in this respect? | 17:45 |
johnsom | No, AZs are very different than anti-affinity. AZs are like separate data centers and usually do not have the same subnets, so failover can't move the VIP, etc. | 17:47 |
johnsom | anti-affinity is not on the same host, but in the same AZ | 17:47 |
colin- | i understand the difference in their concepts but have used the convention to describe individual compute nodes in our cloud | 17:48 |
colin- | so, i was imagining that the creates needed to reach the desired size of the spare pool could be spread across however many hosts are in the aggregate that way | 17:48 |
colin- | but i guess that would necessitate tracking which spares belong to which of those nodes and that wouldn't work | 17:49 |
johnsom | It would work, but we would be becoming a nova scheduler at that point, duplicating the anti-affinity feature of nova (though arguably making it more useful). | 17:50 |
johnsom | It's a line we just haven't decided we needed to cross, or even if it is a good idea. | 17:51 |
colin- | understood | 17:51 |
johnsom | Ideally, we could just take a spare amp, add it to the nova server group, and nova would "make it right", but that is not how it works today. | 17:52 |
colin- | real shame to have to choose between the two | 17:52 |
colin- | they're nice features. | 17:52 |
colin- | thanks for the detail, understanding this will help plan | 17:52 |
johnsom | Yeah, it's something we can talk about at the meeting if you are proposing to do the coding..... | 17:53 |
colin- | sounds like it might be a new enough behavior that soliciting feedback from others would be worthwhile before pursuing it, given that this has historically been an area that the project didn't want to be too opinionated on (scheduling) | 17:54 |
colin- | i'll ask next meeting and see if anyone else has thoughts on it | 17:55 |
colin- | if that's cool | 17:55 |
johnsom | Yep, sounds good. I can add it to the agenda | 17:55 |
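For anyone following along, the two knobs discussed above live in octavia.conf; a sketch with illustrative values (option names per the Octavia configuration reference):

```ini
# /etc/octavia/octavia.conf
[nova]
# With anti-affinity enabled the controller will not consume spare pool
# amphorae for that load balancer, since nova cannot add an already
# booted instance to a server group.
enable_anti_affinity = True
anti_affinity_policy = anti-affinity

[house_keeping]
# Spares are only consumed when anti-affinity is disabled.
spare_amphora_pool_size = 2
```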
*** rpittau has quit IRC | 17:56 | |
*** eandersson has joined #openstack-lbaas | 18:03 | |
*** salmankhan has quit IRC | 18:10 | |
*** trown is now known as trown|lunch | 18:10 | |
*** pcaruana has joined #openstack-lbaas | 18:17 | |
xgerman | colin-: rm_work's az aware patches use spare pools and spread things across azs - that looks like what you are looking for | 18:58 |
rm_work | yeah i was about to say | 18:59 |
xgerman | :-) | 18:59 |
rm_work | both things colin- mentioned are things i've pushed for and worked on :P | 18:59 |
rm_work | looking for my patch for the other one | 18:59 |
xgerman | maybe it's time to add them in some contrib folder? | 18:59 |
rm_work | well, multi-az is ready to merge IMO | 18:59 |
rm_work | erg tho i need to rebase maybe | 18:59 |
rm_work | again | 19:00 |
rm_work | it's optional so like | 19:00 |
* rm_work shrugs | 19:00 | |
johnsom | Yeah, would rather push to a merge solution rather than some contrib mess | 19:00 |
rm_work | enable it or not | 19:00 |
rm_work | it's marked "experimental" | 19:00 |
rm_work | i think | 19:00 |
rm_work | failing py36 tho ... hmmm | 19:00 |
rm_work | https://review.openstack.org/#/c/558962/ | 19:00 |
rm_work | colin- / johnsom https://review.openstack.org/#/c/489727/ | 19:02 |
rm_work | i just never had time to finish it | 19:02 |
rm_work | seems lots of people asking about old work i started this week tho :P | 19:03 |
rm_work | colin-: if you wanted to take that over, i don't think it's hard | 19:03 |
*** aojea has joined #openstack-lbaas | 19:04 | |
colin- | cool nice to hear, i will take a look at that thanks rm_work xgerman | 19:04 |
rm_work | i'm gonna try to make the multi-az one pass now | 19:04 |
rm_work | the issue is it PROBABLY doesn't work with the default driver, unless somehow *all of your AZs* are on one L2 | 19:05 |
rm_work | which is unlikely | 19:05 |
rm_work | in which case you have to use the L3 network driver i wrote too, which is NOT ready to merge, lol | 19:05 |
colin- | you mean a common VLAN? | 19:05 |
*** aojea has quit IRC | 19:09 | |
johnsom | Yeah, the VIP address has to be valid on the active and standby instances, so if they are on different AZs you need to be able to move the VIP between the instances. | 19:10 |
colin- | ah ok, yeah that makes sense | 19:10 |
johnsom | Otherwise you need to NAT (floating IP) or be running L3 models | 19:11 |
*** trown|lunch is now known as trown | 19:23 | |
*** tomtom001 has joined #openstack-lbaas | 19:33 | |
tomtom001 | hello, I'm running octavia in queens and I deleted my LBs from the database, yet octavia keeps spinning up instances. Is there a cache or something that it uses for this? | 19:34 |
johnsom | No, but it is set up to automatically rebuild failed/deleted instances. Why are you deleting things via the database and not the API? | 19:37 |
johnsom | Also, if you enabled the spares pool it will keep building spare instances. | 19:38 |
tomtom001 | We can't seem to delete load balancers the normal way | 19:39 |
tomtom001 | ok so how do I clear the spares pool | 19:39 |
tomtom001 | ?? | 19:39 |
johnsom | Why? | 19:39 |
johnsom | Why can't you delete them the normal way? | 19:40 |
tomtom001 | I don't know it just says unable to delete load balancer | 19:40 |
johnsom | Well, then something is wrong in your deployment or you have an active listener on the load balancer (which it says in the response message). | 19:41 |
johnsom | Spares pool is configured via this setting: https://docs.openstack.org/octavia/latest/configuration/configref.html#house_keeping.spare_amphora_pool_size | 19:41 |
tomtom001 | so I'd have to unconfigure that to stop the behaviour? But surely something is telling the pool to keep making instances? | 19:42 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Experimental multi-az support https://review.openstack.org/558962 | 19:42 |
johnsom | The default is 0 and will not build spare instances. If you change that back to 0 or comment it out, restart the housekeeping process(es) to have it take effect. Then it will no longer build spare amphora. | 19:42 |
johnsom | Yeah, if that was configured to something larger than 0, it will rebuild instances to make sure there is a minimum of that number of spares booted, but not configured. | 19:43 |
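A minimal sketch of turning the spare pool back off, per the config reference linked above; the exact housekeeping service name may differ by deployment:

```ini
# /etc/octavia/octavia.conf
[house_keeping]
# 0 (the default) means no spare amphorae are pre-booted; restart the
# octavia-housekeeping process on each controller after changing this.
spare_amphora_pool_size = 0
```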
openstackgerrit | Adam Harwell proposed openstack/octavia master: Add usage admin resource https://review.openstack.org/557548 | 19:46 |
tomtom001 | Yeah but how does it know how many to build if according to the DB there are no loadbalancers? | 19:48 |
rm_work | this: https://docs.openstack.org/octavia/latest/configuration/configref.html#house_keeping.spare_amphora_pool_size | 19:49 |
rm_work | is how it knows how many to build. exactly that many. | 19:49 |
johnsom | Yeah, if you set that, you told it to boot "spare" instances to be used if someone requests a load balancer. | 19:50 |
rm_work | when you say "I delete my LB's from the database yet, octavia keeps spinning up instances", what exactly do you mean by "delete my LB's"? | 19:50 |
rm_work | like | 19:50 |
tomtom001 | remove them from the load_balancer table. | 19:50 |
rm_work | ok | 19:50 |
rm_work | so, they will still have records in the amphora table | 19:50 |
tomtom001 | thus I would figure that spares would be based on what is in that table. | 19:50 |
rm_work | spares is completely different from LBs | 19:51 |
johnsom | No, spares are not tied to load balancers until a load balancer is created via the API | 19:51 |
tomtom001 | I removed those too, I cleared out the pools, listeners, health_monitors, amphora, etc... tables. | 19:51 |
rm_work | and then it continues to create new records in the amphora table? | 19:52 |
tomtom001 | yes and boot new instances in OS, i can see them under the admin -> instances | 19:52 |
tomtom001 | that's why I'm wondering if there is a cache or something controlling it | 19:53 |
rm_work | yes, there is a configuration value | 19:53 |
rm_work | https://docs.openstack.org/octavia/latest/configuration/configref.html#house_keeping.spare_amphora_pool_size | 19:53 |
johnsom | Just the configuration setting I posted | 19:53 |
rm_work | the number of loadbalancers is irrelevant | 19:53 |
tomtom001 | I've stopped all octavia services -> cleared the tables -> start octavia services and then it continues to build spares. | 19:53 |
rm_work | that makes sense | 19:53 |
johnsom | Yep, as long as that setting is set, it will | 19:54 |
rm_work | if the value in https://docs.openstack.org/octavia/latest/configuration/configref.html#house_keeping.spare_amphora_pool_size is set to > 0 | 19:54 |
colin- | always working to achieve that value | 19:54 |
tomtom001 | ok, I set it to 2, but 6 amphora instances are created when the octavia services are started back up | 19:56 |
rm_work | how many housekeeping processes do you run? | 19:56 |
colin- | i sometimes see extra when i run the processes on multiple controllers, but i'm not sure if it would account for 3x | 19:56 |
johnsom | That *may* happen. It can overshoot some, but six seems like reactions to deleting stuff from the database | 19:56 |
johnsom | #startmeeting Octavia | 20:00 |
openstack | Meeting started Wed Jan 30 20:00:26 2019 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot. | 20:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 20:00 |
*** openstack changes topic to " (Meeting topic: Octavia)" | 20:00 | |
openstack | The meeting name has been set to 'octavia' | 20:00 |
nmagnezi | o/ | 20:00 |
colin- | o/ | 20:00 |
johnsom | Hi folks | 20:00 |
johnsom | #topic Announcements | 20:01 |
*** openstack changes topic to "Announcements (Meeting topic: Octavia)" | 20:01 | |
tomtom001 | ok that's making sense 3 controllers so 2 per controller | 20:01 |
johnsom | First up, a reminder that TC elections are coming up. If you have an interest in running for the TC, please see this post: | 20:01 |
johnsom | #link http://lists.openstack.org/pipermail/openstack-discuss/2019-January/001829.html | 20:01 |
johnsom | Next up, HAProxy has reached out and is soliciting feedback on the next "native" protocol to add to HAProxy. | 20:02 |
johnsom | They recently added gRPC and are interested in what folks are looking for. | 20:02 |
colin- | that's awesome | 20:02 |
rm_work | o/ | 20:03 |
johnsom | They have put up a poll here: | 20:03 |
johnsom | #link https://xoyondo.com/ap/6KYAwtyjNiaEOjJ | 20:03 |
johnsom | Please give them feedback on what you would like to see | 20:03 |
johnsom | If you select other, either comment on the poll or let me know what you are looking for and I can relay it | 20:03 |
johnsom | If you aren't following the upstream HAProxy work, HTTP/2 is really coming together with the 1.9 release. | 20:05 |
johnsom | #link https://www.haproxy.com/blog/haproxy-1-9-has-arrived/ | 20:05 |
rm_work | ah yeah i was wondering why that wasn't in the poll, but i guess since it was partially in 1.8 then they were already planning to finish it in 1.9 | 20:05 |
colin- | was surprised not to see amqp there for our own msging | 20:06 |
rm_work | was googling to see if they finished it before i put it as "other" :P | 20:06 |
johnsom | Yeah, it's pretty much done in 1.9 but they are working out the bugs, edge cases, etc. | 20:06 |
johnsom | Any other announcements today? | 20:07 |
rm_work | cool, so is it packaged anywhere yet? :P | 20:07 |
johnsom | Well, to quote the release announcement "An important point to note, this technical release is not suitable for inclusion in distros, as it will only be maintained for approximately one year (till 2.1 is out). Version 2.0 coming in May will be long-lived and more suitable for distros." | 20:07 |
*** aojea has joined #openstack-lbaas | 20:07 | |
johnsom | So, likely no. | 20:08 |
johnsom | However, we might want to consider work on our DIB setup to allow users to point to a non-distro package. Not a high priority (easy enough to install it in an image), but something for the roadmap. | 20:09 |
openstackgerrit | Merged openstack/octavia-tempest-plugin master: Add the provider service client. https://review.openstack.org/630408 | 20:09 |
johnsom | I didn't see any other announcements, so... | 20:09 |
johnsom | #topic Brief progress reports / bugs needing review | 20:09 |
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)" | 20:09 | |
johnsom | I have been working on flavors. The client, tempest, and octavia patches are all done and ready for review. Some have started merging. | 20:10 |
johnsom | I also added an API to force a configuration refresh on the amphora-agent. That is all posted as well. | 20:10 |
nmagnezi | I back ported some fixes for stable/queens, a chain of 3 patches (the last one is from cgoncalves) that starts here: https://review.openstack.org/#/c/633412/1 | 20:11 |
nmagnezi | All related to IPv6 / Keepalived / Active Standby stuff | 20:11 |
johnsom | I had to stop and deal with some gate issues this week as well. 1. requests_toolbelt released with broken dependencies which broke a release test gate. 2. pycodestyle updated and broken some pep8 stuff in openstacksdk. | 20:11 |
johnsom | Currently I am working on getting the openstacksdk updated for our new API capabilities. | 20:12 |
johnsom | tags, stats, amp/provider/flavor/flavorprofile, etc. | 20:12 |
johnsom | And finally, working on patch reviews. | 20:12 |
johnsom | nmagnezi Thank you! | 20:13 |
rm_work | sorry, been reading that haproxy 1.9 announcement. nice caching stuff coming soon too, we might need to think about that again | 20:13 |
nmagnezi | johnsom, np at all | 20:13 |
johnsom | Once the SDK stuff is cleaned up I will work on reviews and helping with the TLS patches that are up for review | 20:13 |
nmagnezi | Had some conflicts so look closely :) | 20:13 |
johnsom | Yeah, that is why I want to carve some time out for that review | 20:14 |
rm_work | the last change there (ipv6 prefix) needs to merge in rocky first too... need to figure out why it has a -1 there | 20:15 |
johnsom | Yeah, the caching stuff was neat. At least in 1.8 you had to turn off other things though to use it. Hopefully that got fixed. | 20:15 |
nmagnezi | rm_work, thank you for pointing this out! | 20:16 |
johnsom | Any other updates today? | 20:16 |
johnsom | #topic Discuss spares pool and anti-affinity (colin-) | 20:17 |
*** openstack changes topic to "Discuss spares pool and anti-affinity (colin-) (Meeting topic: Octavia)" | 20:17 | |
rm_work | oh, cgoncalves responded on the rocky one, i may be missing other backports there too, if you have time to look at that | 20:17 |
rm_work | ^^ nmagnezi | 20:17 |
rm_work | similar to your chain in queens | 20:17 |
nmagnezi | rm_work, will do | 20:17 |
johnsom | So colin- raised the question of enabling spares pool when anti-affinity is enabled. colin- Did you want to lead this discussion? | 20:17 |
nmagnezi | rm_work, I know that the second patch in the chain exists in Rocky , will double check for the rest | 20:18 |
colin- | sure, was looking at what the limitations were and the channel helped me catch up on why that's not the case today. in doing so we discussed a couple of ways it could be addressed, including rm_work's experimental change here https://review.openstack.org/558962 | 20:18 |
rm_work | 👍 | 20:19 |
rm_work | ah yeah, so that is the way I handled it for AZs, not sure if we want to try to handle it similarly for flavors | 20:19 |
colin- | i was curious if the other operators have a desire/need for this behavior or if it would largely be for convenience sake? is anti-affinity strongly preferred over spares pool generally speaking? | 20:19 |
rm_work | oh sorry this is anti-affinity not flavors | 20:19 |
johnsom | The question came up whether we should be managing the scheduling of spare amphora such that they could be used to replace the nova anti-affinity capability. | 20:20 |
rm_work | so, anti-affinity really only matters for active-standby, and the idea is that there it's ok to wait a little longer, so spares pool isn't as important | 20:20 |
colin- | ok, that was sort of the intent i was getting. that makes sense | 20:20 |
*** salmankhan has joined #openstack-lbaas | 20:21 | |
* rm_work waits for someone to confirm what he just said | 20:21 | |
johnsom | Yeah, it should be a delta of about 30 seconds | 20:21 |
rm_work | yeah i don't THINK it's relevant for STANDALONE | 20:21 |
johnsom | At least on well behaved clouds | 20:21 |
rm_work | since ... there's nothing to anti-affinity to | 20:22 |
johnsom | Right, standalone we don't enable anti-affinity because it would be dumb to have server groups with one instance in them | 20:22 |
rm_work | so that's why we never really cared to take the effort | 20:22 |
colin- | i'm guessing things like querying nova to check a hypervisor attribute on an amphora before pulling a spare out of the pool would be out of the question? | 20:23 |
colin- | a comparison to see if it matches the value on the STANDALONE during a failover maybe | 20:23 |
*** salmankhan1 has joined #openstack-lbaas | 20:23 | |
rm_work | i mean, the issue is cache reliability | 20:23 |
johnsom | Yeah, I think ideally, nova would allow us to add an instance to a server group with anti-affinity on it, and nova just live migrate it if it wasn't meeting the anti-affinity rule. But this is not a capability of nova today. | 20:23 |
rm_work | with AZ i'm pretty confident | 20:23 |
rm_work | with HV, not so much | 20:24 |
rm_work | yes, i had a productive talk about that in the back of an uber with a nova dev at the PTG a year ago | 20:24 |
rm_work | and then i forgot who it was and nothing happened <_< | 20:24 |
*** salmankhan has quit IRC | 20:25 | |
*** salmankhan1 is now known as salmankhan | 20:25 | |
johnsom | I mean, you could query nova for the current host info, then make a decision of which to pull from the spares pool. That is basically the scheduling that *could* be added to octavia. Though it seems heavyweight. | 20:25 |
colin- | and would be assuming some scheduling responsibilities it sounds like | 20:26 |
rm_work | yes, and unless the spares pool manager tries to maintain this | 20:26 |
colin- | conceptually | 20:26 |
johnsom | With nova live migration, you don't know if the boot host is the current host, so you have to ask | 20:26 |
rm_work | which it COULD technically | 20:26 |
rm_work | there's a good chance you'd have to try several times | 20:26 |
rm_work | or be out of luck | 20:26 |
rm_work | especially if your nova uses pack | 20:26 |
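The "ask nova" step johnsom mentions above could look roughly like this; OS-EXT-SRV-ATTR:host is the admin-only extended attribute for the current hypervisor, and <compute-id> stands for the amphora's compute id from the Octavia amphora table/API:

```shell
# Ask nova which host the amphora is currently running on (live
# migration may have moved it since boot), then compare it against the
# host of the surviving amphora before pulling a spare from the pool.
openstack server show <compute-id> -f value -c OS-EXT-SRV-ATTR:host
```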
*** celebdor has joined #openstack-lbaas | 20:26 | |
rm_work | so if we did add a field, and had housekeeping check the accuracy of it every spares run.... that could work | 20:26 |
rm_work | but still like... eh | 20:27 |
colin- | is this the relevant section for this logic? https://github.com/openstack/octavia/blob/979144f2fdf7c391b6c154c01a22f107f45da833/octavia/controller/worker/flows/amphora_flows.py#L224-L230 | 20:27 |
johnsom | Yeah, I feel like it's a bit of tight coupling and getting into some nasty duplicate scheduling code with nova. | 20:27 |
johnsom | Nope, just a second | 20:28 |
johnsom | #link https://github.com/openstack/octavia/blob/master/octavia/controller/worker/tasks/database_tasks.py#L479 | 20:28 |
johnsom | This is where we check if we can use a spare and allocate it to an LB | 20:29 |
colin- | thanks | 20:29 |
johnsom | There would probably be code in the "create amphora" flow to make sure you create spares with some sort of host diversity too | 20:29 |
johnsom | basically if that MapLoadbalancerToAmphora task fails to assign a spare amphora, we then go and boot one | 20:30 |
johnsom | So, I guess the question on the table is: Is the boot time long enough to warrant building compute scheduling into the Octavia controller or not? | 20:31 |
colin- | given the feedback i've gotten i'm leaning towards not (yet!) | 20:31 |
rm_work | right, exactly what i have in my AZ patch, just using a different field | 20:32 |
rm_work | that's host-based instead of AZ based | 20:32 |
rm_work | which i commented in the patch i think | 20:32 |
rm_work | and yeah, i think it's not worth it | 20:32 |
johnsom | For me, the 30 second boot time is ok. | 20:32 |
colin- | ok. appreciate the indulgence :) | 20:33 |
rm_work | even if it's minutes.... | 20:33 |
johnsom | Plus we still have container dreams which might help | 20:33 |
rm_work | it's an active-standby LB | 20:33 |
rm_work | it's probably fine | 20:33 |
rm_work | i see failures so infrequently, i would hope you wouldn't see two on the same LB within 10 minutes | 20:33 |
rm_work | and if you do, i feel like probably you have bigger issues | 20:33 |
jiteka | johnsom: for us it's more like 1:30, and up to 2 min when the first amp hits a new compute | 20:33 |
rm_work | especially if antri-affinity is enabled | 20:34 |
johnsom | jiteka Why is it 1:30? have you looked at it? | 20:34 |
rm_work | they are still using celerons | 20:34 |
johnsom | Disk IO? Network IO from glance? | 20:34 |
rm_work | 🤷‍♂️ | 20:34 |
jiteka | johnsom: our glance is global accross region (datacenter/location) | 20:34 |
johnsom | hyper visor overhead? | 20:34 |
rm_work | no local image cacheing? | 20:34 |
jiteka | we have local cache | 20:35 |
jiteka | but not pre-cached | 20:35 |
johnsom | Yeah, the 2 minutes on first boot should be loading the cache | 20:35 |
jiteka | when new amphora image is promoted and tagged | 20:35 |
rm_work | oh, right, new compute | 20:35 |
jiteka | and used for the first time | 20:35 |
rm_work | yeah, at RAX they had a pre-cache script thing | 20:35 |
johnsom | But after that, I'm curious what is driving the 1:30 time. | 20:35 |
rm_work | so they could define images to force into cache on new computes | 20:35 |
jiteka | in public cloud yes | 20:35 |
jiteka | can't say in private ;) | 20:35 |
rm_work | :P | 20:36 |
rm_work | but per what i said before, even up to 10m... not a real issue IMO | 20:36 |
johnsom | jiteka | 20:36 |
johnsom | Are you using centos images? | 20:36 |
colin- | the spares pool has given us the latitude to overlook that for the time being but abandoning it will be cause to look into long boot times, for sure | 20:36 |
jiteka | also I don't recall any way directly with glance to cherry-pick the image you want to pre-cache among all the available images | 20:36 |
jiteka | johnsom: using ubuntu | 20:36 |
rm_work | ah for new LBs you mean? colin- | 20:36 |
rm_work | because that's the only place anyone would ever see an issue with times | 20:36 |
colin- | yes, jiteka and i are referring to the same platform | 20:37 |
rm_work | jiteka: they had a custom thing that ran on the HVs | 20:37 |
jiteka | johnsom: Ubuntu 16.04.5 LTS | 20:37 |
johnsom | Hmm, yeah, would be interested to learn what is slow. | 20:37 |
rm_work | they == rax | 20:37 |
johnsom | Ok, yeah, I know there is a centos image slowdown that folks are looking into. | 20:37 |
jiteka | johnsom: all other officially released and used VMs in our cloud are centos based | 20:37 |
jiteka | johnsom: which makes octavia often hit that "first spawn not yet cached" situation | 20:38 |
johnsom | #link https://bugzilla.redhat.com/show_bug.cgi?id=1666612 | 20:39 |
openstack | bugzilla.redhat.com bug 1666612 in systemd "Rules "uname -p" and "systemd-detect-virt" kill the system boot time on large systems" [High,Assigned] - Assigned to jsynacek | 20:39 |
jiteka | johnsom: sorry, I mean all the other users use the same glance images that we provide to them, which are centos based | 20:39 |
johnsom | Just an FYI on the centos boot time issue. | 20:39 |
jiteka | johnsom: thx | 20:40 |
johnsom | Ok, are we good on the spares/anti-affinity topic? Should I move on to the next topic? | 20:41 |
colin- | yes | 20:41 |
rm_work | IMO AZ anti-affinity is better anyway | 20:41 |
rm_work | if you can use it | 20:41 |
rm_work | :P | 20:41 |
johnsom | Cool. Thanks for bringing it up. I'm always happy to have a discussion on things. | 20:41 |
johnsom | #topic Need to start making decisions about what is going to land in Stein | 20:42 |
*** openstack changes topic to "Need to start making decisions about what is going to land in Stein (Meeting topic: Octavia)" | 20:42 | |
johnsom | I want to raise awareness that we are about one month from feature freeze for Stein! | 20:42 |
johnsom | Because of that I plan to focus more on reviews for features we committed to completing at the last PTG and bug fixes. | 20:43 |
jiteka | johnsom: I noticed the exact same behaviour in my rocky devstack when I perform the "Rotating the Amphora Images" scenario: after killing the spare pool and forcing it to regenerate using the newly tagged amphora image, it takes a long time. Once that first amphora ends up ready, the next ones are really fast | 20:43 |
johnsom | Sadly we don't have a lot of reviewer cycles for Stein (at least that is my perception) so I'm going to be a bit picky about what we focus on and prioritize reviews for things we agreed on at the PTG. | 20:44 |
johnsom | #link https://etherpad.openstack.org/p/octavia-stein-ptg | 20:44 |
jiteka | johnsom: and nothing more to add about this part | 20:44 |
johnsom | That is the etherpad with the section we listed our priorities. | 20:44 |
johnsom | Do you all agree with my assessment and approach? Discussion? | 20:45 |
johnsom | I would hate to see patches for things we agreed on at the PTG go un-merged because we didn't have review cycles or spent them on patches for things we didn't agree to. | 20:46 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s) https://review.openstack.org/435612 | 20:47 |
colin- | anything known to be slipping out of Stein at this stage? or too early | 20:47 |
colin- | from the etherpad | 20:47 |
johnsom | ^^^^ Speaking of patches not targeted for Stein.... | 20:47 |
rm_work | :P | 20:48 |
rm_work | patch not targeted for merging ever :P | 20:48 |
johnsom | I think the TLS patches (client certs, backend re-encryption) are posted but at-risk. Flavors is in the same camp. Log offloading. | 20:48 |
johnsom | privsep and members as a base resource are clearly out. | 20:49 |
johnsom | HTTP2 is out | 20:49 |
rm_work | damn, that one is so easy too :P | 20:49 |
rm_work | just needed to ... do it | 20:49 |
rm_work | but agreed | 20:49 |
johnsom | Ok. I didn't want to make a cut line today. I think that is too early. | 20:50 |
johnsom | I just wanted to raise awareness and make sure we discussed the plan as we head to feature freeze. | 20:50 |
johnsom | We have a nice lengthy roadmap, but limited development resources, so we do need to plan a bit. | 20:51 |
rm_work | i'm trying to spin back up | 20:51 |
rm_work | just catching up on everything | 20:51 |
johnsom | Good news is, we did check things off that list already, so we are getting stuff done and there are patches posted. | 20:52 |
johnsom | Ok, unless there are more comments, I will open it to open discussion | 20:52 |
johnsom | #topic Open Discussion | 20:53 |
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)" | 20:53 | |
johnsom | Other topics today? | 20:53 |
colin- | have a plan to maintain a separate aggregate with computes that will exclusively host amphorae, believe we have what we need to proceed but wanted to see if anyone has advice against this or experience doing it successfully? | 20:53 |
jiteka | johnsom: I would like to know more about Containerized amphora | 20:54 |
jiteka | johnsom: on the etherpad, maybe more context about statement "k8s performance overhead" | 20:54 |
johnsom | colin- The only issue I have seen is people treating that group as pets and getting scared to update the hosts, or a lack of maintenance as they aren't on the main compute radar. | 20:55 |
jiteka | with that strategy of an aggregate of a few computes hosting all the small amphora VMs, I feel that our build times will be relatively small except when performing a new image rotation | 20:55 |
colin- | johnsom: strictly cattle here, good call out :) | 20:56 |
johnsom | jiteka Yeah, lengthy discussion that comes up regularly. Some of the baseline issues with k8s are the poor network performance due to the ingress/egress handling. | 20:56 |
johnsom | jiteka The real issue now is we don't have anyone that can take time to work on containers. We were close to having lxc working, but there were bugs with nova-lxd. These are likely not issues any longer, but we don't have anyone to look at them. | 20:57 |
johnsom | jiteka Also, k8s isn't a great platform for base services like networking. You don't exactly want it to shuffle/restart the pods during someone's TCP flow..... | 20:58 |
johnsom | We tend to work a bit lower in the stack. That said, using containers is still something we think is good and should happen. Just maybe not k8s, or some of the orchestrators that aren't conducive to networking. | 21:00 |
johnsom | Dang, out of time. Thanks folks! | 21:00 |
johnsom | #endmeeting | 21:00 |
*** openstack changes topic to "Discussions for Octavia | Stein priority review list: https://etherpad.openstack.org/p/octavia-priority-reviews" | 21:00 | |
openstack | Meeting ended Wed Jan 30 21:00:33 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 21:00 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-01-30-20.00.html | 21:00 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-01-30-20.00.txt | 21:00 |
openstack | Log: http://eavesdrop.openstack.org/meetings/octavia/2019/octavia.2019-01-30-20.00.log.html | 21:00 |
jiteka | thanks | 21:00 |
johnsom | jiteka Let me know if you want to keep talking about the containers. | 21:00 |
jiteka | not now, but later, it's still something we have in mind in future deployment phase to reduce amphora footprint on our hypervisors | 21:01 |
johnsom | Yeah, same here. | 21:01 |
jiteka | johnsom: also I will need to get more familiar with K8s to have more specific questions | 21:02 |
johnsom | Zun was another option that someone was starting to work on, but I don't think it's working yet | 21:02 |
*** aojea has quit IRC | 21:08 | |
*** aojea has joined #openstack-lbaas | 21:08 | |
rm_work | yeah i would much rather have it work with something like zun or nova-lxc | 21:09 |
jiteka | rm_work: I don't have strong opinion about using k8s over zun/nova-lxc, it was more container vs VM for me | 21:14 |
johnsom | Or have both with Kata containers! lol | 21:15 |
jiteka | johnsom: kata what ? :D | 21:15 |
johnsom | jiteka https://katacontainers.io/ | 21:16 |
*** salmankhan1 has joined #openstack-lbaas | 21:16 | |
*** salmankhan has quit IRC | 21:16 | |
*** salmankhan1 is now known as salmankhan | 21:16 | |
jiteka | johnsom: I was joking I heard a lot about it at Openstack summit in Vancouver ;) | 21:18 |
*** salmankhan has quit IRC | 21:23 | |
*** aojea has quit IRC | 21:31 | |
*** aojea has joined #openstack-lbaas | 21:32 | |
*** aojea has quit IRC | 21:39 | |
*** amuller has quit IRC | 21:42 | |
*** openstackgerrit has quit IRC | 21:50 | |
*** trown is now known as trown|outtypewww | 22:07 | |
rm_work | so much <_< | 22:48 |
johnsom | I am having fun getting my head around the sdk framework. | 22:56 |
johnsom | It's always too long between working with it | 22:56 |
johnsom | It's fine testing frameworks like this: | 22:57 |
johnsom | https://www.irccloud.com/pastebin/T6sepeoD/ | 22:57 |
johnsom | That always makes me scratch my head | 22:57 |
johnsom | Well, until I figure ^^^ that out | 22:57 |