Tuesday, 2018-07-17

00:04 *** rcernin_ has joined #openstack-lbaas
00:04 *** rcernin has quit IRC
00:43 *** fnaval has quit IRC
00:44 *** hongbin has joined #openstack-lbaas
00:56 *** hongbin_ has joined #openstack-lbaas
00:56 *** longkb has joined #openstack-lbaas
00:58 *** JudeC has quit IRC
00:58 *** hongbin has quit IRC
01:39 *** sapd has joined #openstack-lbaas
01:45 *** hongbin_ has quit IRC
01:45 *** hongbin has joined #openstack-lbaas
03:21 *** ramishra has joined #openstack-lbaas
03:39 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Cleanup Octavia create VIP ports on LB delete  https://review.openstack.org/581168
04:08 *** hongbin has quit IRC
04:17 <openstackgerrit> ZhaoBo proposed openstack/octavia master: Treat null admin_state_up as False  https://review.openstack.org/582929
05:18 *** JudeC has joined #openstack-lbaas
05:25 <openstackgerrit> Yang JianFeng proposed openstack/octavia master: Add amphora_flavor field for amphora api  https://review.openstack.org/582914
05:25 *** rcernin_ has quit IRC
05:26 *** rcernin has joined #openstack-lbaas
05:33 *** links has joined #openstack-lbaas
05:49 <openstackgerrit> Yang JianFeng proposed openstack/octavia master: Add compute_flavor field for amphora api  https://review.openstack.org/582914
06:06 *** yboaron has joined #openstack-lbaas
06:14 <openstackgerrit> ZhaoBo proposed openstack/octavia master: Treat null admin_state_up as False  https://review.openstack.org/582929
06:26 *** phuoc_ has joined #openstack-lbaas
06:29 *** phuoc has quit IRC
06:30 *** kobis has joined #openstack-lbaas
06:35 *** kobis has quit IRC
06:35 *** yboaron has quit IRC
06:46 *** velizarx has joined #openstack-lbaas
06:48 *** tesseract has joined #openstack-lbaas
06:59 *** ispp has joined #openstack-lbaas
07:02 *** yamamoto has joined #openstack-lbaas
07:03 *** velizarx has quit IRC
07:04 *** JudeC has quit IRC
07:04 *** peereb has joined #openstack-lbaas
07:11 *** rcernin has quit IRC
07:22 *** AlexeyAbashkin has joined #openstack-lbaas
07:23 *** velizarx has joined #openstack-lbaas
07:28 *** ispp has quit IRC
07:34 *** yboaron has joined #openstack-lbaas
07:39 *** longkb has quit IRC
07:41 *** longkb has joined #openstack-lbaas
07:59 <openstackgerrit> Carlos Goncalves proposed openstack/octavia master: Correct naming for quota resources  https://review.openstack.org/559672
08:03 *** yboaron_ has joined #openstack-lbaas
08:06 *** yboaron has quit IRC
08:07 *** PagliaccisCloud has quit IRC
08:09 *** PagliaccisCloud has joined #openstack-lbaas
08:15 *** ispp has joined #openstack-lbaas
08:21 *** kobis has joined #openstack-lbaas
08:22 <cgoncalves> https://review.openstack.org/#/c/583068/ should fix our gate. I rebased the quota renaming patch ^ on top of that for testing
08:22 <cgoncalves> nmagnezi, ^
08:31 *** yboaron has joined #openstack-lbaas
08:33 *** yboaron_ has quit IRC
09:01 *** ktibi has joined #openstack-lbaas
09:02 *** ispp has quit IRC
09:18 *** annp has quit IRC
09:26 *** annp has joined #openstack-lbaas
09:30 *** salmankhan has joined #openstack-lbaas
09:37 *** yamamoto has quit IRC
10:02 <bzhao__> cgoncalves: Nice. :)
10:03 <openstackgerrit> ZhaoBo proposed openstack/octavia master: Treat null admin_state_up as False  https://review.openstack.org/582929
10:03 <openstackgerrit> ZhaoBo proposed openstack/octavia master: UDP jinja template  https://review.openstack.org/525420
10:04 <openstackgerrit> ZhaoBo proposed openstack/octavia master: UDP for [2]  https://review.openstack.org/529651
10:05 <openstackgerrit> ZhaoBo proposed openstack/octavia master: UDP for [3][5][6]  https://review.openstack.org/539391
10:30 *** longkb has quit IRC
10:54 *** velizarx has quit IRC
10:55 *** ispp has joined #openstack-lbaas
10:58 *** velizarx has joined #openstack-lbaas
11:02 *** rabel has joined #openstack-lbaas
11:03 <rabel> hi there. with lbaasv2: does the vip port have to be on the same network node as the loadbalancer itself?
11:03 <rabel> we're experiencing problems with one of our lbaas loadbalancers and i just saw that the vip port is not on the same network node as the loadbalancer
11:03 *** sapd has quit IRC
11:08 <openstackgerrit> Carlos Goncalves proposed openstack/neutron-lbaas master: Update new documentation PTI jobs  https://review.openstack.org/530314
11:14 *** atoth has joined #openstack-lbaas
11:20 *** sapd has joined #openstack-lbaas
11:48 <nmagnezi> cgoncalves, o/
11:49 <nmagnezi> cgoncalves, the errors here are also because of the same diskimage-builder issue? https://review.openstack.org/#/c/580724/
11:53 <cgoncalves> nmagnezi, yes: http://logs.openstack.org/24/580724/3/gate/octavia-v1-dsvm-py3x-scenario/145d0e3/logs/devstacklog.txt.gz#_2018-07-16_13_20_39_909
11:54 *** amuller has joined #openstack-lbaas
12:22 *** ispp has quit IRC
12:30 *** yboaron_ has joined #openstack-lbaas
12:33 *** yboaron has quit IRC
12:56 *** peereb has quit IRC
13:09 *** salmankhan has quit IRC
13:17 *** ispp has joined #openstack-lbaas
13:35 *** velizarx has quit IRC
13:35 *** velizarx has joined #openstack-lbaas
13:36 *** salmankhan has joined #openstack-lbaas
13:47 *** fnaval has joined #openstack-lbaas
13:48 *** ispp has quit IRC
13:50 *** ispp has joined #openstack-lbaas
13:52 *** ispp has quit IRC
14:01 *** ispp has joined #openstack-lbaas
14:02 *** ispp has quit IRC
14:11 *** yboaron has joined #openstack-lbaas
14:11 *** kobis has quit IRC
14:13 *** yboaron_ has quit IRC
14:27 *** velizarx has quit IRC
14:38 *** ispp has joined #openstack-lbaas
14:39 *** ispp has quit IRC
14:39 <openstackgerrit> huangshan proposed openstack/python-octaviaclient master: Support backup members  https://review.openstack.org/576530
14:45 <openstackgerrit> huangshan proposed openstack/python-octaviaclient master: Support backup members  https://review.openstack.org/576530
15:03 *** ispp has joined #openstack-lbaas
15:07 *** JudeC has joined #openstack-lbaas
15:21 *** JudeC has quit IRC
15:23 *** ftersin has joined #openstack-lbaas
15:57 *** tesseract has quit IRC
16:09 <rabel> why are there two haproxy processes per loadbalancer on lbaasv2?
16:10 <johnsom> rabel You mean neutron-lbaas?
16:10 <johnsom> rabel Which provider driver are you using?
16:10 <rabel> johnsom: yes. the provider driver is haproxy
16:11 <johnsom> So the old namespace driver. Not sure why there would be two, as it doesn't support high availability. It could be that one is the parent controlling process and the second is the actual load balancing process.
16:12 <johnsom> Maybe they have them configured for nbproc 2, which they can do as they don't do HA.
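
For reference, nbproc is the haproxy global directive meant here; a minimal, illustrative global section carrying it (values are examples, not the driver's rendered config):

    global
        daemon
        nbproc 2

With nbproc 2 the parent forks two worker processes, which is another way to end up with multiple haproxy processes for one load balancer.
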
16:14 <rabel> johnsom: the reason i'm asking: while investigating a broken lbaas loadbalancer, i found that the vip port is on a different network node than the loadbalancer itself. in this case there is one haproxy process on each network node. i think something went wrong there, but i don't know if this could also work somehow
16:15 <johnsom> I'm not very familiar with that driver, but I would expect the VIP port to be bound to the network node that is running the haproxy process for the VIP.
16:17 <rabel> so the port should be on the same node as the lbaas agent shown by neutron lbaas-agent-hosting-loadbalancer?
16:18 <rabel> i can't imagine how it would work otherwise. but it's also a little strange that there are two haproxy services running on two different network nodes for the same loadbalancer.
16:19 <johnsom> Yeah, I don't think that is how that driver normally works. It should be one LB tied to one agent running one haproxy
16:20 <rabel> ok, thanks a lot
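
A hedged way to check rabel's situation on a namespace-driver deployment — the qlbaas- namespace prefix is that driver's convention, and the load balancer ID is a placeholder:

    # which agent neutron believes hosts the load balancer
    neutron lbaas-agent-hosting-loadbalancer <loadbalancer-id>

    # on each network node: is there a namespace for it, and what runs inside?
    ip netns list | grep qlbaas
    ip netns pids qlbaas-<loadbalancer-id>

If more than one node shows a qlbaas namespace with live haproxy PIDs for the same ID, the deployment is in the inconsistent state described above.
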
16:22 *** yboaron has quit IRC
16:24 *** links has quit IRC
16:32 *** ktibi has quit IRC
16:36 *** JudeC has joined #openstack-lbaas
16:48 <johnsom> courtesy reminder: the deadline to submit a Berlin summit presentation is July 18 at 6:59 am UTC (July 17 at 11:59 pm PST)
16:59 *** AlexeyAbashkin has quit IRC
17:06 *** ispp has quit IRC
17:12 *** atoth has quit IRC
17:17 *** ramishra has quit IRC
17:22 *** salmankhan has quit IRC
17:54 *** harlowja has quit IRC
18:13 *** atoth has joined #openstack-lbaas
18:14 *** ianychoi has quit IRC
18:16 *** KeithMnemonic has joined #openstack-lbaas
18:18 *** kberger has quit IRC
18:32 *** harlowja has joined #openstack-lbaas
19:04 *** phuoc has joined #openstack-lbaas
19:07 *** phuoc_ has quit IRC
19:50 *** kberger has joined #openstack-lbaas
19:51 *** KeithMnemonic has quit IRC
19:52 *** amuller has quit IRC
20:22 *** AlexeyAbashkin has joined #openstack-lbaas
20:39 <openstackgerrit> Nir Magnezi proposed openstack/octavia master: Fixes unlimited listener connection limit  https://review.openstack.org/580724
20:40 <apuimedo> johnsom: xgerman_: any of you there?
20:40 <johnsom> Hi
20:40 <apuimedo> Hi!
20:40 <apuimedo> :-)
20:40 <apuimedo> quick question
20:40 <openstackgerrit> German Eichberger proposed openstack/octavia master: [WIP] Switch amphora agent to use privsep  https://review.openstack.org/549295
20:41 <xgerman_> Hi
20:41 <johnsom> Sure
20:42 <apuimedo> if I set event_streamer_driver to queue_event_streamer
20:42 <apuimedo> when octavia is used standalone without the neutron adapter
20:42 <apuimedo> do we get any side effects
20:42 <apuimedo> or do we just get the health events in rabbitmq for whoever wants to see them?
20:43 <johnsom> Don't do it. Yes, horrible performance and eventual rabbit backlogs that tank rabbit
20:43 <apuimedo> johnsom: why is that?
20:43 <apuimedo> Because nothing will be consuming them?
20:44 <johnsom> This was a slapped-in thing to try to keep neutron's database straight if you are using neutron-lbaas with the octavia driver. It has two main issues: 1. it doesn't TTL out the messages, they just accumulate in rabbit. 2. the code is not efficient in that it checks the old state and then sends the event. This will slow down the health manager greatly.
20:44 <johnsom> As soon as neutron-lbaas is EOL that code is going away
20:45 <apuimedo> johnsom: ok
20:45 <apuimedo> I'll give some context
20:45 <johnsom> It was never built to be a generic "event streamer", sadly.
20:46 <apuimedo> we have an intern working on a project that takes rabbitmq events and turns them into kubernetes/etcd-style HTTP watch events
20:46 <johnsom> There is a patch to try to help someone that has to run it here: https://review.openstack.org/#/c/581585/
20:46 <nmagnezi> apuimedo, \o/
20:46 <johnsom> Basically they are feeling that pain
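
As a stopgap for the no-TTL problem johnsom describes, an operator can cap message lifetime broker-side with a RabbitMQ policy; a hedged example — the queue-name pattern and TTL value are assumptions, not Octavia defaults:

    # expire queued event-streamer messages after 5 minutes (300000 ms)
    rabbitmqctl set_policy --apply-to queues octavia-event-ttl "^octavia.*" '{"message-ttl": 300000}'
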
20:46 <apuimedo> he started trying it with Neutron
20:46 <apuimedo> and he gets the port events just fine
20:46 <apuimedo> so now he wanted to add support for octavia
20:46 <apuimedo> since kuryr-kubernetes makes extensive use of it
20:46 <apuimedo> and polling gets old really fast :P
20:46 <johnsom> Yeah, senlin wanted it from us too. It's just that no one has done the development to do it right yet.
20:47 <apuimedo> johnsom: wouldn't it just be a matter of making a better queue_event_streamer?
20:47 <apuimedo> Or is there anything else needed?
20:48 <johnsom> Well, I would ignore all of that code and start over with a cleaner solution focused on notification-style events and not deltas/syncing neutron.
20:48 <apuimedo> one that doesn't care about checking the old status and just emits the events
20:48 <xgerman_> we update the queue every couple of seconds, so we should be much more selective with what we ship and how often
20:48 <johnsom> It wouldn't be hugely crazy work, just doing a decent design and updating the health manager driver(s)
20:48 <apuimedo> right
20:49 <apuimedo> it should be just on resource creation
20:49 <apuimedo> then on status updates
20:49 <cgoncalves> create, update, delete
20:49 <johnsom> Hmmm, resource creation... Are you looking for the audit events?
20:49 <apuimedo> cgoncalves: that's right
20:49 <apuimedo> johnsom: audit events?
20:50 <johnsom> API audit events
20:50 <johnsom> https://docs.openstack.org/keystonemiddleware/latest/audit.html
20:50 <johnsom> It's on my back-burner to-do list
20:50 <johnsom> I think neutron supports it. we don't yet
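
For context, the keystonemiddleware audit filter johnsom links is enabled in services that support it by wiring it into the service's api-paste.ini pipeline; a minimal sketch, with an illustrative map-file path:

    [filter:audit]
    paste.filter_factory = keystonemiddleware.audit:filter_factory
    audit_map_file = /etc/neutron/api_audit_map.conf

It emits CADF audit notifications for API requests, which is a different thing from the health/status events discussed next.
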
20:50 <apuimedo> johnsom: that would not tell us about the health of the LB
20:50 <apuimedo> :-)
20:51 <apuimedo> like when the LB is really ready and so on
20:51 <xgerman_> ok, so the event streamer was meant to sync two databases; you want more of a pub/sub type thingy
20:51 <johnsom> Right, but the create/update/delete stuff. If that is not what you are looking for, I get it; just asking for clarification on what you are using on the neutron side. So it sounds like you are looking for the CRUD events.
20:52 <johnsom> Yeah, so, we don't have what you are looking for today.
20:52 <xgerman_> neutron has an event stream where they have stuff like port create/delete
20:52 <apuimedo> johnsom: on the neutron side we just use the port rabbit events that get published
20:52 <cgoncalves> CUD, not CRUD ;)
20:52 <johnsom> Yeah, I am aware of those
20:52 <apuimedo> usually they are port_create_start, port_update_end
20:52 <apuimedo> and such
20:52 <cgoncalves> xgerman_, correct. nova too
20:52 <apuimedo> exactly
20:53 <xgerman_> and we would do lb create, update, …
20:53 <apuimedo> since I saw that you have rabbitmq stuff and emit in the health manager
20:53 <apuimedo> I thought maybe the scope was the same :P
20:53 <johnsom> Nope, sorry
20:53 <johnsom> That is bad hackery that needs rm -rf
20:53 *** erjacobs has joined #openstack-lbaas
20:53 <apuimedo> thanks for all the info johnsom xgerman_ and cgoncalves
20:54 <apuimedo> johnsom: well, at that point, if it needs writing anew
20:54 <apuimedo> it's probably as cheap to make the watch API directly in octavia xD
20:54 *** AlexeyAbashkin has quit IRC
20:54 <johnsom> Sure, NP. Sorry. You can always put in a story for us: https://storyboard.openstack.org/
20:55  * cgoncalves waits for johnsom's *grin*
20:55 <apuimedo> is that something that would be acceptable to put in your API?
20:56 <nmagnezi> cgoncalves, that's his trademark :)
20:56 <johnsom> If it makes you feel better, it was on my roadmap... https://wiki.openstack.org/wiki/Octavia/Roadmap as "Status change notifications via oslo messaging"
20:56 <apuimedo> johnsom: good
20:56 <cgoncalves> apuimedo, REST API? no changes would be needed AFAIK
20:57 <apuimedo> cgoncalves: just to take a &watch=True
20:57 <apuimedo> on any resource
20:57 <johnsom> Yeah, I'm not sure I see how that would relate to the API.
20:57 <apuimedo> and that would put the client on an HTTP chunked transfer
20:57 <apuimedo> that would get events for each health update
20:57 <apuimedo> it is not trivial
20:57 <cgoncalves> apuimedo, now I sort of understand what you meant by "watch api", but which watch api?
20:57 <xgerman_> ah, so you would HTTP-stream
20:58 <cgoncalves> ah
20:58 <apuimedo> xgerman_: that's right
20:58 <apuimedo> :-)
20:58 <johnsom> Does any OpenStack API support that?
20:58 <cgoncalves> apuimedo, consider leveraging aodh for that: alarming as a service
20:58 <xgerman_> mmh, that can quickly get out of hand…
20:58 <johnsom> Seems like it would hang an API thread just feeding that, which I suspect people would not like...
20:59 <cgoncalves> yeah. go aodh
20:59 <apuimedo> johnsom: no. No service has that. That's why the intern is writing a rabbitmq -> watch streaming API
20:59 <apuimedo> :P
20:59 <apuimedo> WEaaS
20:59 <apuimedo> Watch Endpoints as a Service
20:59 <apuimedo> :-)
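
Purely for illustration of what apuimedo describes — no OpenStack service exposes such an endpoint today, as the thread itself notes, so the URL, watch parameter, and payload fields below are invented:

    import json

    import requests

    # Hypothetical chunked endpoint streaming one JSON event per line.
    resp = requests.get(
        "http://octavia.example:9876/v2.0/lbaas/loadbalancers?watch=true",
        stream=True, timeout=(3.0, None))
    for line in resp.iter_lines():
        if line:  # skip keep-alive blank chunks
            event = json.loads(line)
            print(event["type"], event["resource"]["id"])
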
20:59 <xgerman_> well, I would think we would add something to the API to feed some event thing so that those threads don't need to poll
21:00 <johnsom> I think I like the pub/sub over oslo messaging (rabbit if you like) better than adding that to our API
21:00 <xgerman_> +1
21:00 <johnsom> Ha, dipping your toe into Ceilometer....
21:00 <apuimedo> xgerman_: well, the idea is to prevent the polling from those threads
21:01 <apuimedo> johnsom: ok, so rabbit neutron/nova style then, right?
21:02 <johnsom> Yeah, maybe done better.. grin
21:02 <apuimedo> cgoncalves: how would aodh help? How does it get the events from octavia?
21:02 <xgerman_> well, johnsom and i occasionally talk about using etcd for health/status stuff, which might be more trigger-on-update friendly
21:02 <johnsom> It doesn't today
21:02 <apuimedo> xgerman_: etcd provides watch for free
21:02 <apuimedo> on all keys
21:03 <cgoncalves> apuimedo, aodh can listen for notification events and trigger an alarm for each tenant that has requested to be informed. it can be via http post or other methods
21:03 <johnsom> Yeah, I have crazy dreams of an etcd driver for the health manager... Just no time/resources to develop it. I don't get interns anymore... grin
21:03 <apuimedo> cgoncalves: but something has to notify it
21:03 <apuimedo> johnsom: you should apply for Outreachy mentoring
21:03 <cgoncalves> apuimedo, octavia would send out a notification to the oslo messaging bus and that's it
21:03 <apuimedo> I got an awesome intern last cycle
21:04 <johnsom> Yeah, my last intern wrote our active/standby code
21:04 <apuimedo> cgoncalves: well, if we have to add sending stuff to oslo messaging, the intern project can already consume it
21:04 <xgerman_> +1000
21:04 <apuimedo> since we want http watch endpoints and not aodh http POST
21:05 <apuimedo> johnsom: you should trick cgoncalves into writing the etcd health backend
21:05 <johnsom> Yeah, I would add the pub/sub code and fire them that way. Then multiple projects can consume them. Once the infrastructure is in the code (oslo setup, etc.) the hook points are pretty clear in our flows and health manager drivers
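
A minimal sketch of the consumer side of that pub/sub, using the standard oslo.messaging notification listener API; the topic, pool name, and event naming are assumptions, since the Octavia-side emitter described here does not exist yet:

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@localhost:5672/")
    targets = [oslo_messaging.Target(topic="notifications")]  # assumed topic


    class LBEventEndpoint(object):
        # Invoked for notifications published at INFO priority.
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # e.g. event_type == "loadbalancer.create.end" (assumed naming)
            print(event_type, payload)


    listener = oslo_messaging.get_notification_listener(
        transport, targets, [LBEventEndpoint()], executor="threads",
        pool="lb-watchers")  # consumers in the same pool share one stream
    listener.start()
    listener.wait()
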
21:06  * johnsom tries jedi mind trick on cgoncalves
21:06 <apuimedo> :-)
21:06  * johnsom that is not the code you are looking for. You want to work on johnsom's projects.....
21:06  * cgoncalves never watched Star Wars \o/
21:07 <apuimedo> cgoncalves: Star Trek is way better anyway
21:07 <johnsom> Oye
21:07 <apuimedo> johnsom: maybe make him an offer he won't refuse
21:07 <johnsom> Time to go back to working on my internal project
21:08 <johnsom> I will offer him a cookie
21:09 <rm_work> i almost did a whole etcd health driver, like a year and a half ago
21:09 <rm_work> it should work
21:09 <rm_work> it's just... meh
21:10 <apuimedo> rm_work: why meh?
21:10 <cgoncalves> how are you stealing my cookie!
21:10 <cgoncalves> *how dare you
21:10 <rm_work> it's not as scalable
21:11 <apuimedo> rm_work: as scalable as what?
21:11 <rm_work> the UDP health driver
21:12 <johnsom> cgoncalves Cookie pair: zuul_filter_string=neutron-lbaas
21:12 <johnsom> My thought was to use it instead of mysql as the backend, not as the amp->controller mech
21:13 <johnsom> I'm interested in in-memory with disk checkpointing
21:14 <rm_work> yeah, we were looking at having the amps do cert-auth to etcd and then keep their status in it via that mechanism -- both aliveness (connection active) and also health of members
21:14 <rm_work> and then the HM daemon would just be checking with etcd for failure events
21:14 <rm_work> it's actually a little bit SIMPLER imo, and a little faster for long-failover detection
21:14 <johnsom> Yeah, not so warm and fuzzy on that one
21:14 <rm_work> but it doesn't scale as well
21:14 <johnsom> right
21:15 <rm_work> and it's scary if you don't trust an etcd cluster to stay up properly
21:15 <rm_work> (which I don't)
21:15 <rm_work> (I don't trust ANY service to stay up properly that hasn't been heavily tested for like 20 years)
21:15 <johnsom> We would find a way to break it, I'm sure
21:15 <rm_work> (and even then barely)
21:17 <apuimedo> johnsom: yeah, I also thought of it instead of mysql
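
For reference, the etcd watch apuimedo mentions, via the python etcd3 client; the key prefix and the lease-based heartbeat design are assumptions for illustration, since no such Octavia driver exists:

    import etcd3

    client = etcd3.client(host="127.0.0.1", port=2379)

    # Stream every change under an assumed per-amphora health prefix.
    events_iterator, cancel = client.watch_prefix("/octavia/healthmanager/")
    for event in events_iterator:
        # PutEvent on heartbeat writes; DeleteEvent when, e.g., a
        # heartbeat key's lease expires (assumed failure signal).
        print(type(event).__name__, event.key, event.value)
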
21:21 *** apuimedo has quit IRC
21:28 *** erjacobs has quit IRC
22:31 <openstackgerrit> German Eichberger proposed openstack/octavia master: [WIP] Switch amphora agent to use privsep  https://review.openstack.org/549295
22:32 *** rcernin has joined #openstack-lbaas
22:57 <rm_work> did something merge somewhere that causes our py3 tests to all fail?
22:57 <rm_work> they're failing on EVERYTHING
22:58 <rm_work> well, one specific one is
22:58 <rm_work> octavia-v1-dsvm-py3x-scenario is failing on every patchset, even on rechecks
22:59 <johnsom> Yeah, astroid broke pylint and a bunch of stuff yesterday.
22:59 <johnsom> I think the patch merged though
23:00 <johnsom> Yep: http://logs.openstack.org/41/566741/7/gate/octavia-v1-dsvm-py3x-scenario/4a34984/logs/devstacklog.txt.gz#_2018-07-16_21_55_33_303
23:01 <johnsom> This is what Carlos was talking about yesterday
23:01 <cgoncalves> not merged yet. pending on a depends-on from tripleo-ci IIRC
23:02 <johnsom> Ah, bummer
23:02 <cgoncalves> https://review.openstack.org/#/c/583068/ & https://review.openstack.org/#/c/583102/
23:07 <rm_work> k, then our gates are stuck until that goes through <_<
23:16 <cgoncalves> in the meantime, backport patches could use some reviews :D
23:16 <cgoncalves> https://review.openstack.org/#/q/status:open+project:openstack/octavia+NOT+branch:master
