Tuesday, 2017-08-08

*** tongl has quit IRC00:16
*** harlowja has quit IRC00:25
*** armax has quit IRC00:37
*** xingzhang has joined #openstack-lbaas00:41
openstackgerrit: Jude Cross proposed openstack/octavia-tempest-plugin master: Create scenario tests for load balancers  https://review.openstack.org/486775  00:46
*** xingzhang has quit IRC00:47
*** armax has joined #openstack-lbaas00:48
rm_work: yeah crap  00:52
rm_work: johnsom: right now this causes 2x FLIPs to be created per LB, and one is just orphaned forever  00:53
*** sanfern has joined #openstack-lbaas00:53
rm_work: :(  00:54
rm_work: trying to figure out why it doesn't actually GET the data for the first one created  00:54
*** JudeC has quit IRC00:55
openstackgerrit: Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/435612  00:56
*** jick has quit IRC01:01
*** catintheroof has joined #openstack-lbaas01:02
*** ipsecguy_ has joined #openstack-lbaas01:04
*** armax has quit IRC01:07
*** ipsecguy has quit IRC01:07
openstackgerrit: Ghanshyam Mann proposed openstack/octavia master: Remove usage of credentials_factory.AdminManager  https://review.openstack.org/491640  01:10
*** catintheroof has quit IRC01:11
*** dayou2 has quit IRC01:18
*** dayou has joined #openstack-lbaas01:19
rm_work: johnsom: did you verify that patch actually WORKED in devstack?  01:23
rm_work: https://review.openstack.org/#/c/463289/8/octavia/api/v2/controllers/load_balancer.py  01:23
rm_work: so  01:23
rm_work: on 219-223  01:23
rm_work: it seems like it is not ACTUALLY saving that back to the DB  01:23
rm_work: do our models work that way?  01:23
rm_work: will changing the values there and doing a commit below it "work" like expected?  01:24
rm_work: because ... in my deployment, it's getting the right data for the VIP (and might "just work"), except it's not actually being stored in the DB in the VIP object ever  01:24
rm_work: it doesn't return it to the user either  01:25
rm_work: so yeah it's not storing that back to the DB....  01:32
rm_work: did you guys actually test that patch? johnsom / xgerman_  01:32
rm_work: IMO it is not working yet  01:32
rm_work: and right now will PROBABLY be creating an extra port  01:32
rm_work: and throwing it away, and not ACTUALLY using the IP specified  01:32
xgerman_: Mmh, it looked sane when I read it...  01:34
rm_work: yes it LOOKS sane  01:36
rm_work: but it isn't actually writing the vip object back to the DB  01:36
rm_work: could you verify in a devstack?  01:36
*** dougwig has quit IRC01:36
rm_work: what i'm seeing is the vip object returns correctly, the data is transferred into "db_lb.vip" ... and then thrown away  01:36
rm_work: SQLAlchemy says the session is clear even after touching the values  01:55
rm_work: in db_lb.vip  01:55
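
To make the bug rm_work is describing concrete: the controller mutated the VIP fields on the loaded load balancer object and relied on a later commit, but the change never reached the database. The sketch below is illustrative only, not the actual patch; vip_repo, allocated_vip and the update() signature are approximations of the octavia/db/repositories.py conventions.

    # Illustrative sketch, not the code from https://review.openstack.org/491643.
    def store_vip(session, vip_repo, db_lb, allocated_vip):
        # What the controller was effectively doing: mutate the nested VIP on the
        # loaded object and rely on a later commit. Per the observation above, the
        # session stays "clear", so no UPDATE is ever issued for these values.
        db_lb.vip.ip_address = allocated_vip.ip_address
        db_lb.vip.port_id = allocated_vip.port_id

        # Writing the VIP row back explicitly through the repository does not
        # depend on the mutated object being tracked by the session at all.
        vip_repo.update(session, db_lb.id,
                        ip_address=allocated_vip.ip_address,
                        port_id=allocated_vip.port_id,
                        subnet_id=allocated_vip.subnet_id)
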
openstackgerrit: Adam Harwell proposed openstack/octavia master: Properly store VIP data on LB Create  https://review.openstack.org/491643  02:03
rm_work: johnsom / xgerman_ ^^ this fixes it  02:03
rm_work: plz to be testing to verify my findings, and merging  02:03
xgerman_: Ok. Testing it is -  02:05
*** xingzhang has joined #openstack-lbaas02:10
*** mestery has quit IRC02:17
*** afranc has quit IRC02:17
*** sanfern has quit IRC02:17
*** mestery has joined #openstack-lbaas02:22
*** afranc has joined #openstack-lbaas02:22
*** dayou1 has joined #openstack-lbaas02:37
*** dayou has quit IRC02:40
*** armax has joined #openstack-lbaas02:41
*** armax has quit IRC02:44
*** yamamoto_ has joined #openstack-lbaas02:46
*** yamamoto has quit IRC02:46
openstackgerrit: lidong proposed openstack/octavia master: Update links in README  https://review.openstack.org/491655  03:09
*** sanfern has joined #openstack-lbaas03:48
*** links has joined #openstack-lbaas03:53
*** reedip_afk is now known as reedip04:17
*** fnaval has quit IRC04:21
*** xingzhang has quit IRC04:33
*** xingzhang has joined #openstack-lbaas04:34
*** harlowja has joined #openstack-lbaas04:35
*** Alex_Staf has joined #openstack-lbaas04:54
*** harlowja has quit IRC05:14
*** xingzhang has quit IRC05:38
*** xingzhang has joined #openstack-lbaas05:38
*** JudeC has joined #openstack-lbaas05:44
*** rcernin has joined #openstack-lbaas06:21
*** JudeC has quit IRC06:26
*** gcheresh has joined #openstack-lbaas06:32
*** pcaruana has joined #openstack-lbaas07:00
*** Alex_Staf has quit IRC07:04
*** gtrxcb has quit IRC07:11
*** tesseract has joined #openstack-lbaas07:21
*** cpuga has joined #openstack-lbaas07:43
*** yamamoto_ has quit IRC07:44
*** yamamoto has joined #openstack-lbaas07:50
*** rtjure has quit IRC07:58
*** belharar has joined #openstack-lbaas08:04
*** leyal has joined #openstack-lbaas08:18
*** Guest14 has joined #openstack-lbaas08:20
*** leyal- has joined #openstack-lbaas08:26
leyal-: Hi - i am trying to run a single-node devstack instance with this local.conf - https://github.com/openstack/octavia/blob/master/devstack/samples/singlenode/local.conf  08:27
leyal-: After ./stack.sh succeeded - i tried to create a load-balancer and got a provisioning error..  08:28
*** leyal has quit IRC08:29
leyal-: i tried to look at the log of o-cw - and i saw the following exception - ComputeWaitTimeoutException - while trying to create the lbaas-instance ..  08:30
*** aojea has joined #openstack-lbaas08:32
leyal-: I will be happy for any debugging hints :)  08:33
*** Alex_Staf has joined #openstack-lbaas08:40
*** cpuga has quit IRC08:44
*** aojea has quit IRC08:46
*** rtjure has joined #openstack-lbaas08:46
*** aojea has joined #openstack-lbaas08:47
*** yamamoto has quit IRC08:49
nmagnezi: rm_work, o/  08:51
nmagnezi: leyal-, hi there  08:51
nmagnezi: leyal-, it is possible that your machine is a bit slow and the timeout was just too short for it.  08:52
nmagnezi: leyal-, a couple of suggestions  08:52
nmagnezi: leyal-, try to create another loadbalancer, it is possible that the boot time will be quicker this time since the image should now be cached. and also when the loadbalancer gets created, monitor the instance creation (nova list would do)  08:53
nmagnezi: leyal-, a second option would be to increase the timeout  08:53
*** dayou1 has quit IRC08:55
leyal-: nmagnezi, Hi. thanks! i don't think it's a timeout issue, as it was only around 1 minute in the PENDING_CREATE state..  08:55
*** dayou has joined #openstack-lbaas08:55
nmagnezi: leyal-, could you please paste the worker log somewhere and send a link?  08:55
leyal-: But i will try both ..  08:56
leyal-: where can i find the worker log?  08:57
nmagnezi: leyal-, hmm.. basically devstack now uses systemd. I still use screens, so just try something like systemctl status devstack@octavia-cw and check where the log is.. i'm sorry i don't know the exact cmd  08:59
nmagnezi: leyal-, if you want to wait i can spawn a devstack node with systemd and look at it  09:00
leyal-: Is the worker one of the following - devstack@o-api, devstack@o-cw, devstack@o-hk, devstack@o-hm?  09:01
nmagnezi: leyal-, yes. it is devstack@o-cw  09:02
leyal-: nmagnezi, thanks! - i see the following msg repeated there - "WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance."  09:03
nmagnezi: leyal-, btw if you wish to increase the wait timeout (your debug hint ;) ), the param you should try to fiddle with is amp_active_retries  09:04
nmagnezi: leyal-, https://github.com/openstack/octavia/blob/32819ecc8de293511a7fbb1020abb97b3087665d/octavia/controller/worker/tasks/compute_tasks.py#L185  09:04
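
For anyone hitting the same ComputeWaitTimeoutException on a slow host, the knobs being discussed live in octavia.conf, roughly as below. This is a hedged example: option names and values are from memory and should be checked against the sample config for your release.

    [controller_worker]
    # how many times, and how long between attempts, the worker polls nova
    # waiting for the amphora instance to go ACTIVE
    amp_active_retries = 30
    amp_active_wait_sec = 10

    [haproxy_amphora]
    # retries behind the "Could not connect to instance." warning, i.e. waiting
    # for the amphora agent REST API to come up after the instance boots
    connection_max_retries = 300
    connection_retry_interval = 5
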
nmagnezi: leyal-, yeah, but one line is not enough for me to understand the root cause  09:04
nmagnezi: leyal-, you may paste with http://paste.openstack.org/  09:05
nmagnezi: leyal-, and link it here  09:05
leyal-: nmagnezi, cool - pasted that here - http://paste.openstack.org/show/617750/  09:06
nmagnezi: leyal-, Eyal, when you create an additional loadbalancer, do you see a nova instance being created?  09:08
leyal-: Hi - the second one was created fine :)  09:09
nmagnezi: leyal-, magniv (awesome) :P  09:09
nmagnezi: leyal-, caching helped you here  09:10
leyal-: nmagnezi - just noticed that when i checked the server list, only one was created - Thanks a lot!! "toda-raba"!!  09:11
*** yamamoto has joined #openstack-lbaas09:15
*** yamamoto has quit IRC09:22
openstackgerrit: Ji Chengke proposed openstack/neutron-lbaas master: make lbaasv2 support "https" keystone endpoint  https://review.openstack.org/491734  09:23
openstackgerrit: Ji Chengke proposed openstack/neutron-lbaas master: make lbaasv2 support "https" keystone endpoint  https://review.openstack.org/491734  09:54
*** yamamoto has joined #openstack-lbaas09:54
*** yamamoto has quit IRC10:07
*** yamamoto has joined #openstack-lbaas10:12
*** mdavidson has quit IRC10:16
*** mdavidson has joined #openstack-lbaas10:17
*** yamamoto has quit IRC10:17
*** yamamoto has joined #openstack-lbaas10:18
*** yamamoto has quit IRC10:20
*** yamamoto has joined #openstack-lbaas10:20
*** xingzhang has quit IRC10:48
*** xingzhang has joined #openstack-lbaas10:48
*** Guest14 is now known as ajo10:54
*** sanfern has quit IRC10:56
*** yamamoto has quit IRC10:58
*** slaweq has joined #openstack-lbaas10:59
*** yamamoto has joined #openstack-lbaas10:59
*** yamamoto has quit IRC11:05
openstackgerrit: garyk proposed openstack/neutron-lbaas master: Use flake8-import-order plugin  https://review.openstack.org/480543  11:06
*** yamamoto has joined #openstack-lbaas11:07
*** yamamoto has quit IRC11:08
*** yamamoto has joined #openstack-lbaas11:10
*** yamamoto has quit IRC11:13
*** xingzhang has quit IRC11:13
*** yamamoto has joined #openstack-lbaas11:15
*** yamamoto has quit IRC11:16
*** yamamoto has joined #openstack-lbaas11:16
*** atoth has joined #openstack-lbaas11:48
*** aojea has quit IRC12:05
*** aojea has joined #openstack-lbaas12:06
*** aojea has quit IRC12:11
openstackgerrit: OpenStack Proposal Bot proposed openstack/neutron-lbaas master: Updated from global requirements  https://review.openstack.org/490856  12:16
openstackgerrit: OpenStack Proposal Bot proposed openstack/octavia master: Updated from global requirements  https://review.openstack.org/491293  12:18
*** yamamoto has quit IRC12:18
*** yamamoto has joined #openstack-lbaas12:20
*** yamamoto has quit IRC12:20
*** yamamoto has joined #openstack-lbaas12:40
*** leitan has joined #openstack-lbaas12:45
*** aojea has joined #openstack-lbaas12:53
*** ssmith has joined #openstack-lbaas12:58
*** cpusmith has joined #openstack-lbaas12:59
*** catintheroof has joined #openstack-lbaas13:00
*** ssmith has quit IRC13:03
openstackgerrit: ZhaoBo proposed openstack/octavia master: Fix sg_rule didn't set protocol field  https://review.openstack.org/491792  13:09
*** links has quit IRC13:11
*** sanfern has joined #openstack-lbaas13:16
*** gcheresh has quit IRC13:50
*** sanfern has quit IRC14:05
*** sanfern has joined #openstack-lbaas14:06
*** slaweq has quit IRC14:10
*** xingzhang has joined #openstack-lbaas14:12
*** xingzhang has quit IRC14:17
*** Guest13936 is now known as med_14:21
*** med_ has quit IRC14:21
*** med_ has joined #openstack-lbaas14:21
*** med_ is now known as medberry14:21
*** cpuga has joined #openstack-lbaas14:33
*** cpuga_ has joined #openstack-lbaas14:35
*** armax has joined #openstack-lbaas14:37
*** aojea has quit IRC14:38
*** cpuga has quit IRC14:39
*** medberry is now known as med_14:43
*** links has joined #openstack-lbaas14:48
*** links has quit IRC14:49
*** belharar has quit IRC15:01
*** belharar has joined #openstack-lbaas15:04
*** belharar has quit IRC15:35
*** belharar has joined #openstack-lbaas15:37
*** armax has quit IRC15:42
*** armax has joined #openstack-lbaas15:43
*** dougwig has joined #openstack-lbaas15:44
*** belharar has quit IRC15:45
*** yamamoto has quit IRC15:54
*** ajo has quit IRC15:57
*** yamamoto has joined #openstack-lbaas15:58
*** yamamoto has quit IRC16:03
*** leyal- has quit IRC16:08
*** leyal has joined #openstack-lbaas16:09
*** armax has quit IRC16:10
*** armax has joined #openstack-lbaas16:11
*** pcaruana has quit IRC16:27
*** rcernin has quit IRC16:31
*** tesseract has quit IRC16:31
*** sshank has joined #openstack-lbaas16:42
*** KeithMnemonic has joined #openstack-lbaas16:45
*** KeithMnemonic has quit IRC16:47
*** KeithMnemonic has joined #openstack-lbaas16:49
*** KeithMnemonic has quit IRC16:52
*** cpuga_ has quit IRC17:04
*** fnaval has joined #openstack-lbaas17:12
*** sanfern has quit IRC17:16
*** cpuga has joined #openstack-lbaas17:19
openstackgerrit: Jude Cross proposed openstack/octavia-tempest-plugin master: Create scenario tests for load balancers  https://review.openstack.org/486775  17:28
openstackgerrit: Jude Cross proposed openstack/octavia-tempest-plugin master: Create scenario tests for load balancers  https://review.openstack.org/486775  17:30
*** Alex_Staf has quit IRC18:10
rm_work: i'm fixing the functional test failure presently in my fix  18:20
openstackgerrit: Merged openstack/neutron-lbaas master: Updated from global requirements  https://review.openstack.org/490856  18:22
rm_work: ah man, these functional tests are proving this is broken  18:22
rm_work: the tests were set up to make sure it didn't allocate a VIP before the worker stage...  18:23
*** sshank has quit IRC18:49
rm_work: johnsom: will you have time to test this today? or still focused on 404s  19:03
johnsom: yeah, I will take a look.  I'm having workstation issues today. Ugh  19:03
rm_work: :(  19:03
johnsom: Pretty close to a restore from backup....  19:04
rm_work: just looking for you to verify the *issue*  19:04
rm_work: not test the fix yet  19:04
rm_work: well, that's.... sad  19:04
openstackgerrit: Adam Harwell proposed openstack/octavia master: Properly store VIP data on LB Create  https://review.openstack.org/491643  19:25
rm_work: ok testing .... "fixed" >_>  19:25
*** aojea has joined #openstack-lbaas19:30
nmagnezi: o/  19:31
rm_work: nmagnezi: https://review.openstack.org/#/c/489015/  19:32
nmagnezi: rm_work, hello to you too :)  19:32
rm_work: :P  19:32
* nmagnezi reads  19:32
nmagnezi: rm_work, I was under the impression that his patch is more generic (in a very good way) and should also update members who did not report to hm to OFFLINE  19:35
rm_work: whose patch?  19:36
nmagnezi: rm_work, yours  19:36
rm_work: oh you mean "this" patch prolly, derp  19:36
nmagnezi: rm_work, quote "  19:36
nmagnezi: By actively updating the status of members that don't report, we can set  19:36
nmagnezi: disabled members to OFFLINE exactly when this becomes true."  19:36
rm_work: uhh  19:37
rm_work: right  19:37
rm_work: so i'm not sure what you mean by an updated "reported_members" list  19:37
nmagnezi: not sure why I wrote quote and also "" :P  19:37
rm_work: "Such case could be handled with a simple check that a member is still not in (an updated) reported_members list before you set its status to OFFLINE."  19:37
nmagnezi: rm_work, let me try to explain myself with this comment  19:37
rm_work: we have literally only this one "reported_members" list to work with, there's no getting an updated one  19:38
*** xingzhang has joined #openstack-lbaas19:38
nmagnezi: rm_work, can't we re-read a single member from the db each time?  19:41
rm_work: it's not from the DB  19:43
rm_work: that list is from the health message  19:43
*** xingzhang has quit IRC19:43
nmagnezi: rm_work, sec, cherry-picking this again  19:44
nmagnezi: rm_work, yeah so fetching a single member by id would need an addition to octavia/controller/healthmanager/update_db.py  19:49
rm_work: no i mean we literally can't  19:50
nmagnezi: oh, not that path  19:50
nmagnezi: sec  19:50
rm_work: "reported_members" isn't from the DB  19:50
nmagnezi: https://github.com/openstack/octavia/blob/98799b54ddc778722c0434af58a0b11d9b00958b/octavia/db/repositories.py#L735-L735  19:50
nmagnezi: why? you fetch the pool a few lines beforehand, no?  19:50
rm_work: *reported_members* is from the health message  19:51
rm_work: real_members is from the db  19:51
rm_work: we're comparing DB to HM  19:51
*** aojea has quit IRC19:51
* nmagnezi looks at reported_members  19:52
nmagnezi: rm_work, ah, it comes from the 'health' param provided to that function, right?  19:55
rm_work: yeah it comes from the health message we're processing  19:55
nmagnezi: got it  19:56
rm_work: i do wonder if we could combine some of those messages  19:59
rm_work: instead of throwing N messages on the queue  19:59
nmagnezi: rm_work, changed my vote  19:59
nmagnezi: rm_work, I wish there was a way to check the health state of a single member at that point in the code. but I understand why this is not doable here.  20:00
nmagnezi: rm_work, I had good intentions in mind :P  20:00
rm_work: :)  20:02
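
To make the reported_members vs. real_members comparison above concrete, here is a simplified sketch of what the health manager does in octavia/controller/healthmanager/update_db.py; the function name, message layout and repository call are approximations, not the exact code.

    def _process_pool_members(session, pool, pool_health, member_repo):
        # members the amphora reported in this heartbeat (the health message)
        reported_members = pool_health.get('members', {})  # {member_id: status}
        # members the database says belong to the pool
        real_members = {member.id for member in pool.members}

        for member_id in real_members:
            if member_id not in reported_members:
                # e.g. an admin-down member never shows up in the haproxy status
                # output, so it is actively marked OFFLINE here
                member_repo.update(session, member_id,
                                   operating_status='OFFLINE')
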
nmagnezi: rm_work, i have a question, not related to this patch  20:03
nmagnezi: rm_work, now that we run the api with the wsgi (or uwsgi, can't remember) magic  20:03
nmagnezi: rm_work, how can I query the API service directly?  20:03
rm_work: ?  20:04
nmagnezi: rm_work, I don't see it listening on any port, which is strange  20:04
rm_work: ah  20:04
rm_work: it's because it is routed via apache  20:04
rm_work: johnsom could probably answer this one better  20:04
nmagnezi: aye. the api-ref examples (which really helped me so far) might need to get some update to fit that.. different port maybe..  20:05
johnsom: Yeah, new openstack world order, no ports but paths  20:06
nmagnezi: rm_work, this actually stopped me from reviewing your batch members patch https://review.openstack.org/#/c/477034/  20:06
nmagnezi: johnsom, hi o/  20:06
rm_work: you can still hit it directly  20:07
rm_work: it's not "indirect"?  20:07
johnsom: It is like /load-balancer/v2.... now  20:07
rm_work: i still do curl commands as I did before  20:07
nmagnezi: rm_work, on which port number?  20:07
rm_work: just have to use the path per:  20:07
rm_work: export OCTAVIA_URL=$(openstack catalog show load-balancer -f json | jq -r .endpoints | awk '/publicURL/ {print $2}')  20:07
rm_work: no port #  20:07
rm_work: just curl that path  20:07
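
Putting that together, querying the v2 API behind apache/uwsgi looks roughly like this. The catalog/awk line is the one rm_work quotes above; the token step and the /v2.0/lbaas/loadbalancers path are assumptions to double-check against the api-ref.

    export OCTAVIA_URL=$(openstack catalog show load-balancer -f json \
        | jq -r .endpoints | awk '/publicURL/ {print $2}')
    export TOKEN=$(openstack token issue -f value -c id)
    # no port number needed -- the path routed through apache is enough
    curl -s -H "X-Auth-Token: $TOKEN" "$OCTAVIA_URL/v2.0/lbaas/loadbalancers"
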
nmagnezi: ah  20:07
nmagnezi: alright  20:07
nmagnezi: k.. I'm going AFK  20:08
rm_work: kk  20:08
nmagnezi: I'm on PTO tomorrow, so might miss the weekly meeting as well  20:08
rm_work: have fun :)  20:09
nmagnezi: thanks :)  20:09
*** aojea has joined #openstack-lbaas20:09
*** sshank has joined #openstack-lbaas20:16
johnsom: Fyi, I ran 620 create/delete cycles last night.  Not a single 404  20:16
xgerman_: did that break your box?  20:17
johnsom: Ha, maybe  20:17
johnsom: It has been acting odd for a few weeks  20:18
*** aojea has quit IRC20:24
*** aojea has joined #openstack-lbaas20:28
johnsom: rm_work looks good, can't test at the moment, but reading it looked ok.  Waiting for gates  20:37
*** cpusmith has quit IRC20:38
*** aojea has quit IRC20:51
*** yamamoto has joined #openstack-lbaas21:09
*** aojea has joined #openstack-lbaas21:10
*** sshank has quit IRC21:18
*** JudeC has joined #openstack-lbaas21:19
*** cpuga has quit IRC21:20
*** cpuga_ has joined #openstack-lbaas21:21
*** yamamoto has quit IRC21:21
*** sshank has joined #openstack-lbaas21:22
*** JudeC has quit IRC21:24
*** ssmith has joined #openstack-lbaas21:25
*** cpuga_ has quit IRC21:28
*** cpuga has joined #openstack-lbaas21:28
*** cpuga has quit IRC21:33
rm_work: johnsom: i don't know how to specify in the API Ref that we take a list of some object  21:47
rm_work: >_>  21:47
johnsom: In the param docs?  21:48
johnsom: They call it array  21:48
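
A hedged example of what "array" means in practice for an os-api-ref parameters.yaml entry; the field name and description here are made up for illustration, not taken from the Octavia api-ref.

    # api-ref/source/parameters.yaml (illustrative entry)
    members:
      description: |
        A list of member objects to apply to the pool.
      in: body
      required: true
      type: array
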
xgerman_: well, I can fire up devstack on my Mac… are we still trying to test rm_work’s patch from last night?  21:55
*** yamamoto has joined #openstack-lbaas21:59
rm_work: yesplz  21:59
rm_work: or rather  21:59
rm_work: please just verify my findings on master  21:59
rm_work: which is, you'll still not get a vip_address back on create  22:00
rm_work: and it'll still create the port on the worker side  22:00
rm_work: and you'll have an extra orphaned port  22:00
xgerman_: ok  22:00
rm_work: also:  22:01
openstackgerrit: Adam Harwell proposed openstack/octavia master: Allow PUT to /pools/<id>/members to batch update members  https://review.openstack.org/477034  22:01
rm_work: fixed  22:01
rm_work: but less important by far  22:01
xgerman_: ok, will test master  22:01
*** sshank has quit IRC22:06
*** sshank has joined #openstack-lbaas22:21
*** fnaval has quit IRC22:24
*** leitan has quit IRC22:40
xgerman_: and because it’s close to RC devstack doesn’t work  22:54
rm_work: <_<  22:57
*** aojea has quit IRC23:12
rm_work: johnsom: in the amp driver, if we try to delete a listener and it gets a 404 from the amp that the listener is already gone ....... is that ... "OK"? Could we ignore that?  23:20
johnsom: Yeah, I thought german just had a patch for that  23:21
johnsom: https://review.openstack.org/#/c/487232/  23:22
xgerman_: Yep, pls merge  23:22
rm_work: ah, i see  23:23
rm_work: yeah k  23:23
rm_work: xgerman_: i wonder what makes this happen -- i just ran into it in our env  23:25
xgerman_: Concurrency - in my case those gopher things were deleting concurrently all over the place  23:27
johnsom: Wasn't it that the amp ran out of disk during create, so the listener didn't finish create, then the user tried to delete it to fix it?  23:28
xgerman_: Nope. That was something else  23:28
*** catintheroof has quit IRC23:29
rm_work: in my case, it looks like the request was made but died on a read error for some reason  23:31
rm_work: so the driver retried  23:31
rm_work: but it had actually made it to the amp already and the agent had done the delete  23:31
rm_work: so then it died on the 404  23:31
rm_work: http://paste.openstack.org/show/617856/  23:34
rm_work: johnsom / xgerman_  23:34
rm_work: in the meantime, reviewing that fix, will prolly merge it  23:34
xgerman_: Thx  23:35
rm_work: yep looks good, merging  23:36
rm_work: you did basically what i was looking at doing  23:36
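
The behaviour being merged (treat a 404 on listener delete as success) amounts to something like the sketch below; the client call, exception class and logger are approximations, and the real change is in the review linked above.

    def delete(self, listener):
        try:
            self.client.delete_listener(self.amphora, listener.id)
        except exc.NotFound:
            # A retried delete that already succeeded on the amphora comes back
            # as a 404. The listener being gone is the desired end state, so log
            # it and carry on instead of failing the flow.
            LOG.debug('Listener %s already removed from amphora %s, ignoring.',
                      listener.id, self.amphora.id)
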
johnsom: Hmm, wonder why the listener delete took more than 30 seconds????  23:36
rm_work: so yeah :P  23:36
rm_work: yeah no clue what happened...  23:37
rm_work: but it had worked  23:37
rm_work: because the next try was a 404  23:37
johnsom: I wonder if haproxy took a while to stop.  23:38
johnsom: Maybe we should bump that timeout up....  23:38
rm_work: hmmm  23:39
rm_work: dunno, why would it take that much time?  23:39
johnsom: Long running flows maybe?  23:39
rm_work: err  23:40
rm_work: flows? this is on the amp http request right?  23:40
johnsom: No, this was delete listener which stops HAProxy (that's about the only thing that would take any time).  Maybe HAProxy waits to stop until the existing flows finish  23:41
rm_work: right but  23:41
rm_work: taskflow isn't running on the amp  23:41
rm_work: this is a request to the amp agent  23:41
johnsom: TCP flow  23:41
rm_work: oh  23:41
rm_work: so haproxy waits for ... connections to finish?  23:42
rm_work: before stopping?  23:42
johnsom: https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/listener.py#L273  23:42
johnsom: Maybe, it was a guess/thinking out loud  23:42
rm_work: yeah i mean it's got to be that  23:42
johnsom: The whole process lifecycle for HAProxy has changed a bunch since I really messed with it.  23:42
rm_work: so yes, maybe we need to either make it not wait, or increase that timeout  23:42
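
For reference on the "waits for connections" guess: per the HAProxy docs a soft stop drains existing sessions while a hard stop exits immediately, which is the kind of difference that could blow past a 30-second request timeout. The pid-file path below is illustrative, not necessarily the amphora's actual layout.

    # soft stop: stop listening, let established connections finish (can be slow)
    kill -USR1 "$(cat /var/run/haproxy.pid)"
    # hard stop: exit immediately, dropping active connections
    kill -TERM "$(cat /var/run/haproxy.pid)"
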
rm_work: johnsom: you get a chance to look at the IP issue?  23:59
rm_work: this is seriously *bad*  23:59
rm_work: we are leaving master broken for way too long, IMO  23:59
