Thursday, 2020-02-27

*** yamamoto has joined #openstack-lbaas  00:05
*** yamamoto has quit IRC  00:10
<openstackgerrit> guang-yee proposed openstack/octavia stable/rocky: HTTPS HMs need the same validation path as HTTP  https://review.opendev.org/710161  00:31
<rm_work> johnsom: lol yes though in reviewing the patch for adding quotes on some other objects ... i honestly don't understand why we BOTHER for anything besides LBs <_< everything else is just a logical object on the same amphorae >_>  01:08
<rm_work> *for adding quotas  01:09
<rm_work> like https://review.opendev.org/#/c/590620/  01:09
<rm_work> why? who cares? lol  01:09
<johnsom> Yeah, that one was ... odd  01:10
<johnsom> I mean there probably is some limit, but .... it would be very large  01:10
<rm_work> but like... what does the limit affect?  01:14
<rm_work> the user's LB?  01:14
<rm_work> if there's a hard limit (like, in haproxy itself), we should just put code in for it  01:14
<rm_work> not make a quota, lol  01:14
<rm_work> i remember there being one like that...  01:15
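For context, a quick sketch of how those per-object quotas surface to operators; the project ID is a placeholder, and -1 is Octavia's "unlimited" value:

    # Inspect the current quotas for a project
    openstack loadbalancer quota show <project_id>
    # Effectively opt out of the per-object quotas while keeping the LB quota
    openstack loadbalancer quota set --listener -1 --pool -1 \
        --member -1 --healthmonitor -1 <project_id>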
*** ramishra has joined #openstack-lbaas  01:18
<openstackgerrit> Michael Johnson proposed openstack/octavia master: WIP - Refactor the failover flows  https://review.opendev.org/705317  01:22
*** ramishra has quit IRC  01:26
*** ramishra has joined #openstack-lbaas  01:48
<rm_work> what the shit  01:49
<rm_work> $ curl $OCTAVIA_URL/v2/lbaas/loadbalancers/ -H "X-Auth-Tent-Type: application/json" -X POST -d ""  01:49
<rm_work> {"faultcode": "Client", "faultstring": "Missing argument: \"load_balancer\"", "debuginfo": null}  01:49
<rm_work> ^^ "load_balancer"?!?! where is that coming from  01:49
<rm_work> in our API we take it as "loadbalancer"  01:49
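A minimal valid create request for comparison (assumes a valid token in $TOKEN and a real subnet ID; the documented request root key is "loadbalancer", which is why the "load_balancer" in the fault string looks off):

    curl -X POST "$OCTAVIA_URL/v2/lbaas/loadbalancers/" \
        -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
        -d '{"loadbalancer": {"name": "test-lb", "vip_subnet_id": "<subnet_id>"}}'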
*** yamamoto has joined #openstack-lbaas  01:56
<rm_work> also, listener list doesn't show provisioning_status? O_o  01:59
<rm_work> but pool list does  01:59
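In the meantime, two ways to get a listener's provisioning_status from the CLI (IDs are placeholders):

    # Full provisioning/operating status tree for the whole LB
    openstack loadbalancer status show <lb_id>
    # Or per listener
    openstack loadbalancer listener show <listener_id> -c provisioning_status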
*** luketollefson has joined #openstack-lbaas  02:16
<rm_work> ugh, i really want to revisit https://review.opendev.org/#/c/549297/  02:27
<rm_work> `Updated 1 year, 12 months ago`  02:28
<rm_work> am I mistaken about how time works? wouldn't that be "2 years"? :D  02:28
*** archiephan has joined #openstack-lbaas  02:47
*** yamamoto has quit IRC  02:49
*** nicolasbock has quit IRC  03:09
*** yamamoto has joined #openstack-lbaas  03:55
*** yamamoto has quit IRC  04:00
*** yamamoto has joined #openstack-lbaas  04:13
*** rcernin has quit IRC  05:33
*** rcernin has joined #openstack-lbaas  05:33
*** gcheresh has joined #openstack-lbaas  06:15
*** rcernin has quit IRC  06:24
*** ccamposr__ has joined #openstack-lbaas  07:49
*** ccamposr has quit IRC  07:52
*** tesseract has joined #openstack-lbaas  07:53
*** tkajinam has quit IRC  08:02
<openstackgerrit> Ann Taraday proposed openstack/octavia master: [Amphorav2] Fix noop driver case  https://review.opendev.org/709696  08:02
<openstackgerrit> Ann Taraday proposed openstack/octavia master: [Amphorav2] Fix noop driver case  https://review.opendev.org/709696  08:02
<openstackgerrit> Ann Taraday proposed openstack/octavia master: Testing  https://review.opendev.org/697213  08:03
*** rpittau|afk is now known as rpittau  09:09
*** yamamoto has quit IRC  09:35
<openstackgerrit> Merged openstack/octavia-lib master: Remove all usage of six library  https://review.opendev.org/703500  09:56
*** dmellado has quit IRC  09:59
*** ccamposr__ has quit IRC  10:48
*** ccamposr__ has joined #openstack-lbaas  10:48
*** rpittau is now known as rpittau|bbl  11:12
*** yamamoto has joined #openstack-lbaas  11:42
*** yamamoto has quit IRC  11:46
*** ccamposr__ has quit IRC  11:48
*** ccamposr__ has joined #openstack-lbaas  11:49
*** ivve has joined #openstack-lbaas  11:53
<ivve> hi, i have a small question (this might be a bug). i have a standalone lb that functioned all well. did a failover on it and it came up all nice, but it doesn't seem to have bound its ha_ip in the amphora namespace, and haproxy is listening on the vrrp_ip and is therefore not functioning  11:56
<ivve> https://hastebin.com/inoluhejoz.rb  11:59
<ivve> just keep getting resets from the ha_ip  12:02
*** nicolasbock has joined #openstack-lbaas  12:02
<ivve> so the question is  12:11
<ivve> should octavia, via the api, use some rootwrap or similar mechanism to "ip netns exec amphora-haproxy ip addr add <ha_ip>/32 dev eth1"  12:12
<ivve> and why did it not do that when i issued a failover (on the new amphora)  12:12
<rm_work> no, in STANDALONE it's supposed to be brought up via the agent I believe  12:13
<ivve> checking agent log  12:13
<rm_work> unclear why the agent didn't do that (or how haproxy got configured on the vrrp address?)  12:13
<ivve> i even tried a configure api command  12:13
<rm_work> haproxy config comes from the controller-worker, which knows the ha_ip even if it isn't bound right -- so it should have sent an haproxy.conf that had that IP  12:13
<ivve> but it didn't add the ha_ip  12:13
<ivve> it only has vrrp_ip configured on eth1 (in the amphora-haproxy namespace)  12:14
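A diagnostic sketch for what is being described, run on the amphora itself (the paths are the usual amphora image defaults; the per-listener haproxy config directory layout is a Stein-era assumption):

    # What addresses actually exist inside the namespace
    sudo ip netns exec amphora-haproxy ip addr show dev eth1
    # What the rendered haproxy config was told to bind to (should contain the ha_ip)
    sudo grep -r "bind " /var/lib/octavia/*/haproxy.cfg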
<ivve> i tried bumping the listeners too to see if that changed anything, no bueno  12:15
<ivve> just to test things out  12:15
<ivve> https://hastebin.com/ukogohuzuh.rb  12:17
<ivve> i can't really see anything in the agent log  12:18
<ivve> i mean nothing that fails or errors out, not even warnings. just a couple of debug GETs/PUTs  12:19
<ivve> the last thing it issues is a reload  12:19
<ivve> could be associated with my bumps on the listener  12:20
<ivve> this is version 4.1.1, the amphora agent is 4.1.2.dev7  12:22
<ivve> could that be an issue?  12:22
<ivve> it's built from the branch, not a tag  12:23
<ivve> and octavia-api on the controller nodes is also stein, which ends up as 4.1.1  12:23
*** yamamoto has joined #openstack-lbaas  12:25
<ivve> rm_work: found the problem  12:37
<ivve> if you have a config "mismatch" between the topology in octavia-api and the standalone you spawned, the agent config contains active_standby instead of single  12:37
<ivve> so i'm guessing here that the amphora is simply waiting for keepalived to set the ha_ip on the interface  12:38
<ivve> which never happens, because it's single and there is no keepalived  12:38
<ivve> quite easily reproduced  12:42
<ivve> shall i submit is as a bug?  12:42
<ivve> it*  12:42
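Based on that diagnosis, a rough check-and-workaround sketch (the agent config path and the eth1 device name are the usual amphora defaults; adjust for your image):

    # On the amphora: confirm which topology the rendered agent config ended up with
    sudo grep -i topology /etc/octavia/amphora-agent.conf
    # Temporary workaround until a proper failover: plumb the VIP by hand,
    # which is what keepalived would have done in ACTIVE_STANDBY
    sudo ip netns exec amphora-haproxy ip addr add <ha_ip>/32 dev eth1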
<rm_work> uhh  12:54
<rm_work> how do you have a mismatch there  12:55
<rm_work> we totally allow for multiple topologies in the system -- we look at the topology of the LB when deciding how to make the config  12:55
<rm_work> the LB and amp *cannot* have different topos, it's not possible (at least without some admin intervention)  12:56
*** servagem has joined #openstack-lbaas  12:59
*** nicolasbock has quit IRC  13:03
*** nicolasbock has joined #openstack-lbaas  13:04
*** dmellado has joined #openstack-lbaas  13:05
<ivve> rm_work: basically you can set octavia.conf to topology = standalone, create a loadbalancer via k8s, and it will use the default topology since you can't really set that in heat  13:12
<rm_work> uhh, ok?  13:12
<ivve> reconfig the octavia.conf to active_standby and then failover that amphora  13:12
<rm_work> that shouldn't happen  13:12
<ivve> i can reproduce that  13:12
<rm_work> if the LB was created as standalone, it should stay standalone forever  13:12
<rm_work> if you can reproduce that, then yeah, bug  13:13
<ivve> so switching it back again to standalone  13:13
<rm_work> but i've got a ton of both in my env  13:13
<ivve> and then failover again, then it works  13:13
<rm_work> and no issues...  13:13
<rm_work> what version are you on again?  13:13
<rm_work> this may have been resolved in like ... rocky  13:13
<ivve> api 4.1.1  13:16
<ivve> agent in amphora 4.1.2.dev7  13:16
<ivve> https://hastebin.com/apogohodec.makefile  13:17
<ivve> so it gets this config when octavia-api/worker etc is reconfigured  13:18
*** nicolasbock has quit IRC  13:21
<ivve> just to be super clear, i am performing the octavia.conf change on the api on the controller nodes  13:23
*** rpittau|bbl is now known as rpittau  13:24
<ivve> well, it's 4 different ones, because all the different components are in different containers, but i change and restart all of them  13:24
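The reproduction being described, condensed (service names and the restart mechanism depend on the deployment; this assumes the OpenStack CLI and admin credentials):

    # 1. Deploy with [controller_worker] loadbalancer_topology = SINGLE and create an LB
    openstack loadbalancer create --name repro-lb --vip-subnet-id <subnet_id>
    # 2. Change loadbalancer_topology to ACTIVE_STANDBY in octavia.conf on the
    #    controllers and restart the octavia services/containers
    # 3. Fail over the existing SINGLE-topology LB
    openstack loadbalancer failover <lb_id>
    # Result reported above: the new amphora's agent config says ACTIVE_STANDBY,
    # so the VIP is never bound because no keepalived is running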
*** nicolasbock has joined #openstack-lbaas  13:25
*** yamamoto has quit IRC  13:35
*** yamamoto has joined #openstack-lbaas  13:54
<johnsom> So your api is configured with one topology setting and your controller a different one? Can you do an lb show for us?  13:59
*** yamamoto has quit IRC  14:18
*** gcheresh has quit IRC  14:43
*** ccamposr__ has quit IRC  14:48
*** ccamposr__ has joined #openstack-lbaas  14:49
<rm_work> yeah went back and looked, it REALLY shouldn't matter what's set in config, during a failover it will ONLY care about what the LB's topology is from the DB  14:51
<johnsom> That is what I thought as well  14:55
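For reference, the failover-time source of truth rm_work mentions can be checked directly (assumes the default MySQL database name "octavia"; the LB ID is a placeholder):

    mysql octavia -e "SELECT id, topology, provisioning_status FROM load_balancer WHERE id = '<lb_id>';"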
<mloza> Hello, I have a kolla-ansible deployment and I wanted to test https://github.com/a10networks/a10-octavia, but I don't know which octavia container the a10 driver should be installed in. Should it be in the worker or the api?  15:56
<rm_work> mloza: i believe it should be in all of them  16:04
<mloza> rm_work: including housekeeping and health-manager?  16:06
<rm_work> probably, yes  16:06
<rm_work> if it isn't needed, it just won't be used  16:06
<rm_work> but it looks like it'd be necessary in all pieces  16:06
<rm_work> this code is confusing me a bit tho, they are replacing a lot of stuff they shouldn't need to replace I think...  16:07
*** tesseract has quit IRC  16:27
<johnsom> mloza I don't know much about the A10 driver. I would contact A10 for install instructions, etc.  16:38
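With johnsom's caveat in mind, a rough sketch of what "install it in all of them" could look like on a kolla-ansible host (the container names are the usual kolla defaults and the pip package name is an assumption -- defer to A10's own install docs):

    for c in octavia_api octavia_worker octavia_health_manager octavia_housekeeping; do
        docker exec -u root "$c" pip install a10-octavia
    done
    # Then add the A10 provider to [api_settings] enabled_provider_drivers in
    # octavia.conf and restart the containers.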
*** dosaboy has quit IRC  16:45
*** ccamposr__ has quit IRC  16:49
*** ccamposr__ has joined #openstack-lbaas  16:49
*** dosaboy has joined #openstack-lbaas  17:01
*** rpittau is now known as rpittau|afk  17:23
*** ccamposr__ has quit IRC  17:49
*** ccamposr__ has joined #openstack-lbaas  17:50
*** ccamposr has joined #openstack-lbaas  18:05
*** ccamposr__ has quit IRC  18:09
*** gcheresh has joined #openstack-lbaas  18:37
-openstackstatus- NOTICE: Memory pressure on zuul.opendev.org is causing connection timeouts resulting in POST_FAILURE and RETRY_LIMIT results for some jobs since around 06:00 UTC today; we will be restarting the scheduler shortly to relieve the problem, and will follow up with another notice once running changes are reenqueued.  19:11
*** gcheresh has quit IRC  19:22
-openstackstatus- NOTICE: The scheduler for zuul.opendev.org has been restarted; any changes which were in queues at the time of the restart have been reenqueued automatically, but any changes whose jobs failed with a RETRY_LIMIT, POST_FAILURE or NODE_FAILURE build result in the past 14 hours should be manually rechecked for fresh results  19:45
*** nicolasbock has quit IRC  20:07
*** gcheresh has joined #openstack-lbaas  20:31
*** gcheresh has quit IRC  20:55
*** gcheresh has joined #openstack-lbaas  21:00
*** gcheresh has quit IRC  21:09
*** servagem has quit IRC  21:23
*** rcernin has joined #openstack-lbaas  21:44
*** xakaitetoia1 has quit IRC  21:47
*** tkajinam has joined #openstack-lbaas  22:51
*** tkajinam has quit IRC  22:51
*** tkajinam has joined #openstack-lbaas  22:51
*** ivve has quit IRC  22:57
*** jamesdenton has quit IRC  23:49
*** jamesdenton has joined #openstack-lbaas  23:50

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!