Thursday, 2017-06-15

00:09 <korean101> ajo: well. different timezone...
00:09 <korean101> i'll try the master branch and report to you
00:09 <korean101> thanks for the reply
00:11 <johnsom> korean101 I don't know anyone using the vagrant stuff, so it might be stale
00:11 <korean101> johnsom: OMG
00:12 <korean101> johnsom: any other documents?
00:12 <johnsom> korean101 documents for what?
00:12 <korean101> johnsom: I get a lot of help with devstack.
00:13 <korean101> johnsom: installation manuals
00:13 <korean101> johnsom: i already deployed octavia in the Newton release
00:13 <johnsom> Our docs are here: https://docs.openstack.org/developer/octavia/
00:13 <korean101> johnsom: and now i'm trying the Ocata release
00:13 <johnsom> There is an overview here: https://docs.openstack.org/developer/octavia/guides/dev-quick-start.html
00:14 <johnsom> Most of us day-to-day for development use devstack (without vagrant)
00:14 <johnsom> This is the main octavia setup script for devstack: https://github.com/openstack/octavia/blob/master/devstack/plugin.sh
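For anyone following along, the plugin.sh linked above is wired in from devstack's local.conf. A minimal sketch of what that typically looks like (service names per the plugin; exact lines may vary by release, so verify against the devstack docs):

```shell
# local.conf fragment (sketch only -- check the Ocata devstack guide)
[[local|localrc]]
enable_plugin octavia https://git.openstack.org/openstack/octavia
enable_service octavia o-api o-cw o-hm o-hk
```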
00:14 <korean101> i use this file (https://github.com/openstack/octavia/blob/stable/ocata/devstack/samples/singlenode/Vagrantfile)
00:14 <korean101> johnsom: not a fresh file?
00:15 <korean101> OK. plugin.sh
00:15 <korean101> johnsom: other questions
00:15 <korean101> johnsom: any solutions for Octavia HA?
00:15 <johnsom> Yeah, I don't know if the vagrant stuff is current or out of date
00:16 <johnsom> What about octavia HA?  We support HA on both control and data plane
00:16 <johnsom> Have since Mitaka if I remember correctly
00:20 <korean101> johnsom: what happens if the Octavia daemons die?
00:20 *** aojea has quit IRC
00:20 <johnsom> korean101 You can run multiple instances of all of the octavia processes.
00:21 <johnsom> We recommend three or more
00:21 *** aojea has joined #openstack-lbaas
00:21 <korean101> instances?
00:22 <korean101> octavia-api, octavia-health-manager, octavia-housekeeping, octavia-worker
00:22 <johnsom> Correct
00:23 <korean101> those daemons run on network nodes only?
00:23 <korean101> but what if the network nodes die?
00:23 <johnsom> You will need to run a load balancer in front of the API instances.  You need to configure all of the health manager endpoints in the configuration file so the amps use them all
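The health-manager part of what johnsom describes maps to the `controller_ip_port_list` option; a hedged octavia.conf sketch (the addresses are placeholders, not real endpoints):

```ini
# octavia.conf fragment (sketch) -- list every health manager endpoint
# so the amphorae report their heartbeats to all of them.
[health_manager]
controller_ip_port_list = 192.0.2.10:5555,192.0.2.11:5555,192.0.2.12:5555
```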
00:23 <johnsom> No, none of those need to run on network nodes
00:24 <korean101> johnsom: but the o-hm0 device needs br-int?
00:25 <johnsom> The old neutron-lbaas haproxy driver has to run on network nodes, but octavia does not.  It's an advantage of using octavia.
00:26 *** armax has quit IRC
00:27 <johnsom> Not necessarily, no.  That is just how devstack is set up.  The lb-mgmt-net (which in devstack we set up as o-hm0) is just a private network between the control plane processes (o-cw, o-hm, o-hk) and the amphora-agent.  It can be a private neutron network or a provider (VLAN, etc.) network
00:28 <korean101> johnsom: hmmmm
00:28 <johnsom> It is up to the deployer how they set up that network
00:29 <johnsom> It can be routable too
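As an illustration of the provider-network option johnsom mentions, creating a dedicated lb-mgmt-net might look like this (a sketch only; the network name, physical network, VLAN segment, and CIDR are all made up for the example):

```shell
# Hypothetical VLAN provider network for the amphora management plane
openstack network create lb-mgmt-net \
    --provider-network-type vlan \
    --provider-physical-network physnet1 \
    --provider-segment 200
openstack subnet create lb-mgmt-subnet \
    --network lb-mgmt-net \
    --subnet-range 192.0.2.0/24
```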
00:30 <korean101> johnsom: so... that's hard for me
00:31 <korean101> johnsom: some examples for me?
00:32 <johnsom> OpenStack Ansible has a playbook for Octavia
00:33 <korean101> johnsom: this one? (http://git.openstack.org/cgit/openstack/openstack-ansible-os_octavia/)
00:33 <johnsom> Yep
00:34 <korean101> johnsom: Oh thanks!!!!
00:34 <korean101> johnsom: i'll try that
00:34 <johnsom> Sure, NP
00:34 <korean101> johnsom: last questions
00:34 <johnsom> Ok
00:34 <korean101> johnsom: will the 1.0 version come out in the Pike release?
00:35 <johnsom> Yes, it will have its own endpoint
00:36 <johnsom> https://developer.openstack.org/api-ref/load-balancer/v2/index.html
00:36 <johnsom> It is compatible with the neutron-lbaas LBaaS v2 API
00:37 <korean101> johnsom: OH that's good
00:37 <korean101> johnsom: OMG... really really last question
00:37 <johnsom> Much easier deployment as it doesn't plug into neutron
00:37 <korean101> johnsom: was the DVR + Octavia problem solved?
00:37 <johnsom> That document isn't done yet, still L7 and quota to write
00:37 <korean101> johnsom: floating IPs
00:38 <johnsom> Oh, the neutron bug with floating IPs, DVR, and allowed-address-pairs ports?
00:40 <korean101> johnsom: well, this one (https://docs.openstack.org/ocata/networking-guide/config-lbaas.html#associating-a-floating-ip-address)
00:40 <korean101> neutron floatingip-associate FLOATINGIP_ID LOAD_BALANCER_PORT_ID
00:41 <johnsom> Ok, what about it?
00:42 <korean101> https://bugs.launchpad.net/neutron/+bug/1583694
00:42 <openstack> Launchpad bug 1583694 in neutron "[RFE] DVR support for Allowed_address_pair port that are bound to multiple ACTIVE VM ports" [Wishlist,In progress] - Assigned to Swaminathan Vasudevan (swaminathan-vasudevan)
00:44 <johnsom> Yeah, that is the neutron bug with floating IPs, DVR, and allowed-address-pairs ports
00:44 <johnsom> DVR is still broken
00:45 <johnsom> Though I think the neutron folks are working on it for Pike
00:45 <korean101> johnsom: ok. i hope that bug gets fixed
00:46 <korean101> johnsom: and i hope the (DVR + Octavia) problem is solved in the Pike release
00:46 <korean101> johnsom: many many thanks!
00:47 <johnsom> Yeah, it breaks more than just octavia.  I hope they get it fixed soon too
02:10 <johnsom> Oh brother, these functional tests are screwed up.
02:10 <rm_work> johnsom: seems like healthchecks aren't ... actually working right
02:11 <rm_work> we have http checks configured and it's still doing ping checks it seems
02:11 <johnsom> Hmmm, wonder what is up with that.  They were working when I tested the alternate port thing
02:12 <rm_work> looking at the haproxy config and we have: http://paste.openstack.org/show/612629/
02:12 <rm_work> which looks right
02:12 <rm_work> but the boxes at ip1/ip2 don't show connections from the lb ever
02:12 <rm_work> and we change the healthcheck page to status 500 and the members are still up
02:12 <rm_work> but if we take down the box, it does go away
02:12 <rm_work> <_<
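For reference, an HTTP health check in a rendered haproxy backend looks roughly like the following (a sketch, not the paste above; backend name and member addresses are placeholders). The symptom described — members staying up while the check page returns 500 but going down when the box dies — is exactly what happens if haproxy falls back to plain TCP connect checks:

```
# Sketch of an haproxy backend with an HTTP health check.
# If a server line has "check" but the backend lacks "option httpchk",
# haproxy performs TCP connect checks only -- the ping-like behavior
# discussed above.
backend pool_example
    option httpchk GET /healthcheck
    http-check expect status 200
    server member_1 192.0.2.21:80 weight 1 check inter 10s fall 3 rise 2
    server member_2 192.0.2.22:80 weight 1 check inter 10s fall 3 rise 2
```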
02:13 <rm_work> which functional tests?
02:15 <johnsom> API functional tests, they are setting the statuses before they check the statuses
02:15 <rm_work> yeah i fixed a ton of those
02:15 <rm_work> they were essentially testing nothing
02:15 <rm_work> you'll see a lot of the files that i heavily edited, those went down to ... not bothering
02:15 <rm_work> because checking statuses isn't useful
02:15 <rm_work> so i just clipped it out
02:16 <rm_work> except where i was checking specifically if things went to pending
02:16 <rm_work> but nothing can possibly go to ACTIVE/ONLINE legitimately in functional tests because there's no worker, lol
02:17 <johnsom> Right, exactly, they should all be checking for pending
02:25 *** aojea has quit IRC
02:26 <johnsom> rm_work SoB, yep, it's broken.  Just confirmed on my stack
02:26 *** aojea has joined #openstack-lbaas
02:26 *** aojea has quit IRC
02:26 <rm_work> T_T
02:27 <rm_work> johnsom: i wonder what happened... are we typoing the check type and haproxy still accepts the config?
02:28 <rm_work> should it be "httpchk" or "httpcheck"?
02:28 <johnsom> Wait, no, it is working
02:28 <rm_work> nope, httpchk is right...
02:28 <rm_work> hmmm
02:29 <johnsom> Yeah, just confirmed, it is working for me, I see HTTP hits
02:30 <johnsom> https://www.irccloud.com/pastebin/Eccu7t7d/
02:31 <rm_work> hmm
02:32 <rm_work> ubuntu amps?
02:32 <rm_work> i'm on centos amps? different haproxy?
02:32 <johnsom> Yes
02:33 <rm_work> HA-Proxy version 1.5.18 2016/05/10
02:33 <rm_work> <_<
02:33 <rm_work> >_>
02:33 <rm_work> <_<
02:33 <rm_work> that doesn't look right
02:33 <johnsom> Well, that is old, but
02:33 <rm_work> yeah i mean
02:33 <rm_work> it shouldn't break this
02:33 <rm_work> but it *is* wrong
02:34 <rm_work> centos doesn't have 1.6 i guess?
02:34 <rm_work> need to make sure we have the ability to run a custom haproxy version I suppose... :/
02:34 <johnsom> ii  haproxy                            1.6.3-1ubuntu0.1
02:34 <johnsom> Which is still old, but better
02:35 <rm_work> yeah...
02:38 <johnsom> Ok, I am going to give up on having functional tests for the LB policy stuff done today (they work, though the tests are bogus)
02:39 <johnsom> I was just sitting here puzzled why the heck you would have "set" methods in assert wrappers....
02:39 <johnsom> Oh....
02:39 <rm_work> rofl
02:39 <rm_work> yeah it's dumb
02:39 <rm_work> i removed them as I saw them
02:39 <rm_work> but there's a ton
02:39 <rm_work> because it all got copy/pasted
02:39 <rm_work> literally 7 times
02:40 <johnsom> Yeah, you aren't kidding.  I fixed one and now 14 tests blow up.  I'm sure that is just the start.
02:40 <johnsom> Tomorrow for that fun
02:43 <rm_work> whelp.... https://github.com/DBezemer/rpm-haproxy
02:43 <rm_work> maybe that
02:43 <rm_work> lol
02:43 <rm_work> AFAICT centos is dead
02:43 <rm_work> i don't understand why people use it
02:43 <rm_work> but i'm being forced to
02:43 <rm_work> (not for any good reason)
02:44 <rm_work> at least fedora has 1.7
02:44 <rm_work> https://haproxy.debian.net/#?distribution=Ubuntu&release=xenial&version=1.7
02:44 <rm_work> so does debuntu
02:45 <rm_work> @*($&@* Debian stable has 1.7 in its base repos rofl
02:45 <rm_work> if *debian stable* has significantly newer packages than your distro, your distro is dead and buried
05:07 <dementor> hello, can somebody help me with using load balancer v2 (octavia)? i find creating one very difficult
05:08 <dementor> i can't find a practice guide or example for the load balancer in the dashboard
05:25 <gans> dementor, i am not sure about the dashboard, but you can see this to create an LB via CLI: https://docs.openstack.org/developer/devstack/guides/devstack-with-lbaas-v2.html
05:31 *** blogan_ has joined #openstack-lbaas
05:31 <dementor> is that the same for LB v2???
05:32 <dementor> ok, I'll try
05:32 <dementor> thank you gans
05:33 <gans> dementor, yeah, for lb v2
07:51 <dementor> hello, why can't i delete the load balancer in the dashboard?
09:11 <nmagnezi> rm_work, I wonder how many more rechecks..
09:11 <nmagnezi> O_o
13:56 <xgerman_> o/
14:06 *** aojea has joined #openstack-lbaas
14:34 <korean101> johnsom: are you there?
14:35 <korean101> for you it's probably 14:34
15:07 <openstackgerrit> Merged openstack/neutron-lbaas master: Remove usage of parameter enforce_type  https://review.openstack.org/474062
15:10 <johnsom> o/
16:28 <openstackgerrit> OpenStack Proposal Bot proposed openstack/neutron-lbaas master: Updated from global requirements  https://review.openstack.org/474671
16:30 *** diltram has joined #openstack-lbaas
16:30 <openstackgerrit> OpenStack Proposal Bot proposed openstack/octavia master: Updated from global requirements  https://review.openstack.org/474674
16:39 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Add RBAC enforcement to Octavia v2 API  https://review.openstack.org/472872
16:44 <johnsom> ^^^ WIP - I will probably be check-pointing a few times today.  That one is LB basically done with functional tests.  I need to incorporate the new policy idea I pitched yesterday and then start turning the crank for the rest of the API.
17:06 <rm_work> nmagnezi: our whole gate system is really bonkers
17:06 <rm_work> nmagnezi: we have so many checks, and they're unreliable enough again for various reasons, that 75% of the time or so at least one will fail >_>
17:10 *** csomerville has joined #openstack-lbaas
17:11 *** JudeC has joined #openstack-lbaas
17:13 *** cody-somerville has quit IRC
17:15 <johnsom> rm_work the only intermittent test failure I am aware of is the gunicorn 404 issue.  Are you aware of others?
17:24 <rm_work> johnsom: i was going to look today
17:24 <rm_work> some stuff appears to be failing during devstack setup...
17:25 *** greghayn1 has quit IRC
17:25 *** SumitNaiksatam has joined #openstack-lbaas
17:26 <rm_work> http://logs.openstack.org/18/474318/2/gate/gate-octavia-v1-dsvm-scenario-ubuntu-xenial/9c55621/logs/devstacklog.txt.gz#_2017-06-15_00_48_45_130
17:26 <rm_work> wtf happened there
17:28 <johnsom> My two first thoughts: the test host is out of disk, or I know DIB had recently made changes to the block device stuff.
17:28 <rm_work> >_>
17:29 <johnsom> Well, the host looks fine for disk
17:30 <johnsom> Would be nice if DIB captured the error stream.  It probably would if we turned on its debug, but ouch, talk about a flood of log messages
17:30 <rm_work> yeah :/
17:30 *** JudeC has quit IRC
17:30 <rm_work> ugh, why can't i run coverage tests inside pycharm
17:30 <rm_work> blegh
17:31 *** SumitNaiksatam has quit IRC
17:35 *** SumitNaiksatam has joined #openstack-lbaas
17:35 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Small refactor for load_balancer v2 vip validation  https://review.openstack.org/474711
17:35 <rm_work> johnsom: ^^ that is tiny and probably meaningless in general, but i would appreciate merging it quickly, it will save me a lot of work
17:38 <johnsom> rm_work You owe me if I merge that.  It's likely going to merge-conflict with the RBAC patch.
17:39 <rm_work> is it? lol
17:39 <rm_work> hmmmm
17:39 <rm_work> what if i rebase for you? :P
17:39 <johnsom> No worries, just giving you a hard time.
17:41 <johnsom> Oh, geez, is that mkfs issue what blocked the GR update?
17:42 <johnsom> No, broken mirrors:
17:42 <johnsom> is_admin:True
17:43 <johnsom> E: Unable to locate package python-pip
17:43 <johnsom> 2017-06-15 17:04:30.505 | E: Package 'python3-pip' has no installation candidate
17:43 <johnsom> 2017-06-15 17:04:30.505 | E: Package 'python-virtualenv' has no installation candidate
17:43 <johnsom> 2017-06-15 17:04:30.505 | E: Unable to locate package python3-virtualenv
17:44 <johnsom> Oh, and plus:
17:44 <johnsom> Connected to amphora. Response: <Response [404]> request /opt/stack/new/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:280
17:44 <rm_work> >_<
17:45 <johnsom> Wow, and mkfs.  Three gate tests, all failed in different ways....  Though only one is our own code
17:48 <rm_work> this is what i'm talking about T_T
17:48 <rm_work> ugh
17:48 *** JudeC has joined #openstack-lbaas
17:53 *** JudeC has quit IRC
17:55 <johnsom> The 404 is the one that kills me, pretty random
18:30 <rm_work> yeah i still don't understand that
18:30 <rm_work> might need to track down an expert or something
18:30 <rm_work> gunicorn expert?
18:30 <rm_work> wsgi expert? i dunno
18:33 <rm_work> I'm asking in #gunicorn
18:42 <johnsom> I am almost thinking we need to put some ugly debug code in so that when this situation triggers we scp the amp logs off the amp before it gets deleted in the revert.
18:42 <rm_work> i'm considering other stupidly hacky workarounds as well
18:44 <rm_work> change the default 404 page in the agent, so once the agent is loaded up, a real 404 is distinct via the body?
18:46 <johnsom> Currently that code to connect is shared across a lot of the functions.  That is why I have been reluctant to up the retry count.
18:46 <rm_work> right
18:46 <rm_work> but if we made the 404 distinct
18:46 <rm_work> we could tell if it's a real 404, or a "not yet loaded" 404
18:46 <johnsom> If we are going super hacky fix style, I guess we could create a "special" code path just for that initial connect.
18:47 <rm_work> because it's more than just an HTTP code
18:47 <rm_work> 404s have a body
18:47 <rm_work> we can scan that
18:47 <johnsom> It just seems like that would impact the other API calls and points in time....
18:47 <rm_work> why
18:48 <johnsom> Hmm, so agent custom 404 means it's really not there for all API calls, non-custom 404 means not started?
18:48 <rm_work> yes
18:49 <rm_work> conveniently using the failure to load to our advantage :P
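The scheme rm_work sketches above boils down to a tiny predicate: the agent answers 404s with a JSON body, while the server's default 404 (served before the agent API has loaded) is HTML. This is an illustrative sketch only, not Octavia's actual driver code; the class and function names are made up:

```python
# Sketch: distinguish the amphora agent's own JSON 404 ("resource
# really absent") from the default HTML 404 served while the agent
# API is still booting. Names here are illustrative.

class FakeResponse:
    """Stand-in for a requests.Response, for demonstration only."""
    def __init__(self, status_code, content_type):
        self.status_code = status_code
        self.headers = {"content-type": content_type}

def agent_answered(resp):
    """Return True if the agent itself produced this response.

    A non-404 means the agent handled the request. A 404 with a JSON
    content type is the agent's own "not found"; a 404 with any other
    content type looks like the server default, i.e. "still booting,
    retry later".
    """
    if resp.status_code != 404:
        return True
    ctype = resp.headers.get("content-type", "")
    return ctype.startswith("application/json")

# The driver-side retry loop would then retry on a non-JSON 404 and
# raise a real "not found" error on a JSON 404.
print(agent_answered(FakeResponse(404, "text/html")))         # False
print(agent_answered(FakeResponse(404, "application/json")))  # True
```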
18:49 <johnsom> This is making me ill right before lunch.  I think I might be willing to add the hacky debug code first and try to figure out the "why" part
18:49 <rm_work> lol
18:49 <rm_work> i might ... do the custom 404
18:50 <johnsom> For all I know this is some stupid gate environment issue where our connections are being routed wrong (proxy server or some such mess) or there is something else listening on the IP.
18:50 <johnsom> I have never seen it local
18:50 <johnsom> Though I haven't done a CHO local in a while.
18:51 <rm_work> well
18:51 <rm_work> yeah me either lol
18:51 <rm_work> but my solution would still work :P
18:51 <rm_work> though also i wish i could get rid of Flask completely
18:51 <rm_work> i had some progress on that
18:51 <johnsom> This is one of these things that 6 months later you are going to look at and go "Ugh, why????" and I will remind you it was your idea.  Grin
18:52 <johnsom> Ok, I need to run an errand over lunch, back in around an hour.
19:00 <rm_work> johnsom: lol BTW, we already DO have a "get_haproxy_config", I thought that was the case
19:00 <rm_work> on the agent
19:02 <johnsom> Yeah, I could not remember if it got implemented or not.
19:05 <xgerman_> mmh, if it’s not up it should do something like 503 but…
19:08 <rm_work> johnsom: ummmmmm
19:08 <rm_work> johnsom: i think we might already be doing that rofl
19:08 <rm_work> i think we have custom error pages all around
19:09 <rm_work> rofl
21:42 <rm_work> johnsom: ok i'm going to post a workaround because this is stupid
21:42 <rm_work> working out the mock issues for testing tho
21:42 <johnsom> Ok, I will have a look.
21:50 <rm_work> k got it i think, it's hard to test for sure since, well, can't repro
21:50 <rm_work> but
21:50 <rm_work> i can recheck this a few times and hopefully not get any fails
21:51 <rm_work> unfortunately it's all bundled in with the webob work i was doing, but i can pull it out if we need to
21:53 *** cpuga has quit IRC
21:54 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Agent: swap flask responses to webob, handle 404 retries better  https://review.openstack.org/474790
21:54 <rm_work> johnsom: ^^ so the real change is https://review.openstack.org/#/c/474790/1/octavia/amphorae/drivers/haproxy/rest_api_driver.py@283
21:54 <johnsom> Yeah, already looking at it.
21:55 <johnsom> How do you know the fake 404 isn't returning json?
21:55 <rm_work> because that's insane
21:55 <rm_work> nothing does that by default
21:55 <johnsom> Hahaa
21:55 <rm_work> we have to override to make everything json specifically
21:55 <rm_work> we do it here:
21:55 <johnsom> Leave it to us to find the counter example.....
21:56 <rm_work> https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/server.py#L32-L35
21:56 <rm_work> :P
21:56 <rm_work> the work was already done for me
21:56 <rm_work> lol
21:56 <rm_work> but to be explicit: https://review.openstack.org/#/c/474790/1/octavia/amphorae/backends/agent/api_server/server.py@36
21:57 <rm_work> actually i don't need one of those lines, sec
21:57 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Agent: swap flask responses to webob, handle 404 retries better  https://review.openstack.org/474790
21:57 <rm_work> i didn't have json= when i was testing
21:58 <rm_work> but it does the content-type automatically with that
21:59 <rm_work> so this does a ton more towards removing flask
21:59 <rm_work> then I just need to figure out what to use to do routing
21:59 <rm_work> though maybe we can just leave Flask for that <_<
21:59 <rm_work> very minimal
21:59 <rm_work> JUST use its routing engine
21:59 <rm_work> though I have a feeling the issue is related
21:59 <rm_work> i think Flask is where the bug is
22:00 <johnsom> So, this will retry ten times for any amp API call that comes back 404.
22:00 <rm_work> ONLY if it's non-json
22:01 <rm_work> which will ONLY be if it isn't spun up yet
22:01 <rm_work> because once our API is actually loaded and responding, 404s will be json
22:01 <johnsom> Only if it is json...
22:01 <rm_work> errr
22:02 <rm_work> i did it backwards lol one sec
22:02 <rm_work> good catch :P
22:02 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Agent: swap flask responses to webob, handle 404 retries better  https://review.openstack.org/474790
22:02 <rm_work> forgot a "not"
22:02 <johnsom> If we are putting hacks in I'm going to look at it closely....
22:02 <johnsom> grin
22:02 <rm_work> yes :P
22:02 <rm_work> feel free
22:02 <rm_work> to pick this apart
22:02 <rm_work> but I THINK it'll work
22:03 <rm_work> also I think removing the stupid flask jsonify stuff and moving to webob is ++
22:03 <rm_work> ugh forgot to put it in requirements tho >_< one more patch
22:04 <rm_work> oh nm it's there
22:04 <rm_work> i guess we already had it for some reason
22:08 <johnsom> Yeah, so no test that fakes up a non-json 404 or two?
22:08 <rm_work> ugh
22:09 <rm_work> sometimes .... i just... >_>
22:09 * rm_work sighs
22:11 <rm_work> also the other tests are wrong now rofl
22:12 <johnsom> Otherwise I think it looks ok.  Just curious about the unit test coverage and if we can put in a functional test that tries that out.
22:13 <rm_work> ah i did it wrong anyway
22:13 <rm_work> find is -1 / 0
22:13 <rm_work> which ...
22:13 <rm_work> both are false
22:13 <rm_work> lol
22:14 <johnsom> Just to capture my comment
22:15 <rm_work> ok there you go
22:15 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Agent: swap flask responses to webob, handle 404 retries better  https://review.openstack.org/474790
22:15 <rm_work> A) fixed it
22:15 <rm_work> B) added a negative
22:16 <rm_work> really i just want to see if it passes
22:16 *** KeithMnemonic has quit IRC
22:18 <rm_work> johnsom: we seriously can't get a check to pass today
22:18 <rm_work> sorry, ONE of 10 did
22:18 <rm_work> <_<
22:22 <johnsom> Yeah, I'm working on the mkfs thing in the DIB channel
22:23 <rm_work> k
22:23 <johnsom> My WIP patch passed though: https://review.openstack.org/#/c/472872/ LOL
22:23 <rm_work> but that's also intermittent
22:23 <rm_work> lol. great.
22:23 <johnsom> Yeah I think Ian is on the right track with the mkfs thing
22:24 <rm_work> wait what is the dib channel
22:24 <rm_work> am i somehow not in that
22:27 <rm_work> johnsom: ^^
22:29 <johnsom> openstack-dib
22:29 *** fnaval has joined #openstack-lbaas
22:43 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Enable DIB trace logging  https://review.openstack.org/474800
22:43 <rm_work> johnsom: that?
22:44 <johnsom> Yes, I think so
22:44 <johnsom> Though, it might have to be a: export DEVSTACK_LOCAL_CONFIG+=$'\n'"DISABLE_AMP_IMAGE_BUILD=True"$'\n'
22:45 <rm_work> lol
22:45 <rm_work> whelp
22:45 <johnsom> I think it does actually have to get shoved in the local.conf
22:45 <rm_work> k
22:45 <johnsom> The scoping is nasty for this stuff.
22:46 <johnsom> Still  OCTAVIA_DIB_TRACING=1 obviously
22:47 <rm_work> well
22:47 <rm_work> True works
22:47 <openstackgerrit> Adam Harwell proposed openstack/octavia master: Enable DIB trace logging  https://review.openstack.org/474800
22:47 <rm_work> the plugin looks for !=0
22:59 *** fnaval has quit IRC
23:55 <rm_work> lol we have so many broken tests

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!