Thursday, 2019-01-31

00:51  *** yangjianfeng has joined #openstack-lbaas
00:53  *** yangjianfeng has quit IRC
01:29  *** celebdor has quit IRC
01:48  *** Dinesh_Bhor has joined #openstack-lbaas
02:38  *** dims has quit IRC
02:55  *** dims has joined #openstack-lbaas
03:00  *** sapd1 has joined #openstack-lbaas
03:01  *** psachin has joined #openstack-lbaas
03:50  *** ramishra has joined #openstack-lbaas
04:48  *** hongbin has joined #openstack-lbaas
05:04  *** hongbin has quit IRC
05:43  *** sapd1 has quit IRC
05:58  *** gcheresh has joined #openstack-lbaas
06:30  *** sapd1 has joined #openstack-lbaas
06:56  *** ccamposr has joined #openstack-lbaas
07:02  *** ccamposr__ has joined #openstack-lbaas
07:03  *** gcheresh has quit IRC
07:04  *** ccamposr has quit IRC
07:08  *** phuochc has joined #openstack-lbaas
07:08  *** phuoc has quit IRC
07:09  *** phuochoang has joined #openstack-lbaas
07:13  *** phuochc has quit IRC
07:16  *** gcheresh has joined #openstack-lbaas
07:46  *** takamatsu has joined #openstack-lbaas
08:07  *** rpittau has joined #openstack-lbaas
08:17  *** ramishra has quit IRC
08:18  *** celebdor has joined #openstack-lbaas
08:22  *** sapd1 has quit IRC
08:28  *** ramishra has joined #openstack-lbaas
08:32  *** yboaron has joined #openstack-lbaas
08:44  *** Dinesh_Bhor has quit IRC
08:44  *** Dinesh_Bhor has joined #openstack-lbaas
09:40  *** psachin has quit IRC
09:55  *** sapd1 has joined #openstack-lbaas
10:26  *** AlexStaf has joined #openstack-lbaas
10:54  *** Dinesh_Bhor has quit IRC
10:54  *** salmankhan has joined #openstack-lbaas
11:18  *** sapd1_ has quit IRC
11:21  *** sapd1 has quit IRC
11:34  *** rpittau has quit IRC
11:34  *** rpittau has joined #openstack-lbaas
11:52  *** gcheresh has quit IRC
11:53  *** gcheresh has joined #openstack-lbaas
12:14  *** salmankhan1 has joined #openstack-lbaas
12:15  *** salmankhan has quit IRC
12:15  *** salmankhan1 is now known as salmankhan
12:16  *** yboaron has quit IRC
13:30  *** yboaron has joined #openstack-lbaas
13:35  *** rpittau has quit IRC
13:46  *** rpittau has joined #openstack-lbaas
13:52  *** pcaruana has quit IRC
13:54  *** takamatsu has quit IRC
14:02  *** pcaruana has joined #openstack-lbaas
14:14  *** mkuf_ has joined #openstack-lbaas
14:18  *** mkuf has quit IRC
14:26  *** numans has quit IRC
14:49  *** gcheresh has quit IRC
15:09  *** openstackgerrit has joined #openstack-lbaas
15:09  <openstackgerrit> Arnaud Morin proposed openstack/octavia master: Update requirements for ubuntu  https://review.openstack.org/624405
15:24  *** AlexStaf has quit IRC
15:30  <tomtom001> johnsom: thank you for your help yesterday. Since I have 3 controllers and thus 3 octavia containers, it seems each octavia process creates its own number of spares, so 3 octavia containers @ 2 spares = six spares. I changed the number to 1 so that only 3 spares were created.
15:35  <xgerman> An Octavia install should coordinate and only create the number of spares specified… if that's not the case please file a bug
15:36  <johnsom> Is each container using its own database?
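For context on the spare-pool math above: if each housekeeping process maintains its own spare pool instead of coordinating through the shared database, the totals multiply by the number of controllers. A minimal sketch of the arithmetic (the function name is illustrative, not an Octavia API; the config option involved is `[house_keeping] spare_amphora_pool_size`):

```python
def effective_spares(controller_count: int, spare_amphora_pool_size: int) -> int:
    """Worst-case spare amphora count when per-process pools are not coordinated."""
    return controller_count * spare_amphora_pool_size

# 3 controllers each keeping 2 spares -> 6 spares total, as tomtom001 observed
assert effective_spares(3, 2) == 6
# The workaround of setting the pool size to 1 yields 3 spares
assert effective_spares(3, 1) == 3
```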
15:38  *** trown|outtypewww is now known as trown
16:07  *** celebdor has quit IRC
16:15  *** mloza has joined #openstack-lbaas
16:16  <mloza> Hello, I tried to deploy the octavia UI and the neutron-lbaas UI. Both have panels, but the octavia UI keeps looping/refreshing and the neutron dashboard is empty. I have octavia running as standalone
16:22  <johnsom> If you are running Octavia standalone, you don't need the neutron-lbaas-dashboard and should remove it from horizon.
16:25  <mloza> johnsom: I need both. I have a third-party driver in neutron-lbaas
16:26  <johnsom> Ah. Ok. Encourage them to get their Octavia driver done. How did you install your OpenStack and how did you install the dashboards?
16:27  <mloza> johnsom: I installed through kolla-ansible. If both dashboards are enabled, the issue happens. The issue is just horizon; I'm able to create load balancers separately.
16:30  <johnsom> Hmm, ok. I am not very familiar with the kolla installer, but I can try to help you debug the dashboards.
16:30  <tomtom001> johnsom: this was an openstack queens setup with openstack-ansible. There is one galera database that is shared by octavia across all 3 containers.
16:31  <tomtom001> one more question: the Provisioning Status for members is always on Pending Update in Horizon, and has been for about an hour now... The load balancer seems to be working fine but the status never updates. Is this horizon weirdness or a known display bug?
16:32  <johnsom> That means something is wrong with your cloud, or someone did a kill -9 on a controller process instead of a graceful shutdown
16:33  <johnsom> mloza Can you do an ls of the ${HORIZON_DIR}/openstack_dashboard/local/enabled/ directory?
16:33  <mloza> johnsom: just a sec
16:33  <johnsom> I'm going to boot up a dashboard VM as well so I can look at my settings. I haven't run both in well over a year.
16:35  <mloza> johnsom: http://sprunge.us/ccJRim
16:36  <johnsom> tomtom001 Also note, older versions of the dashboard don't auto-refresh the status. You need the rocky or newer dashboard for auto-refresh
16:36  <mloza> this is in /var/lib/kolla/venv/lib/python2.7/site-packages/openstack_dashboard/local/enabled/
16:36  <johnsom> Ok, that looks good.
16:37  <johnsom> The next thing I would try is re-running the horizon commands to load all of the dashboard parts. Maybe one got missed or interrupted.
16:38  <johnsom> From the horizon home directory, inside the venv, you will want to run the following commands:
16:38  <mloza> are you talking about ./manage.py collectstatic and ./manage.py compress?
16:38  <johnsom> $ ./manage.py collectstatic
16:38  <johnsom> $ ./manage.py compress
16:38  <tomtom001> johnsom: thank you
16:38  <johnsom> Yeah, I guess you are already on top of that.
16:39  *** yboaron has quit IRC
16:39  *** yboaron has joined #openstack-lbaas
16:41  <mloza> johnsom: I have multiple manage.py files http://sprunge.us/1VPbEy . Which one should I run?
16:42  <johnsom> Hmmm, that is a good question.
16:42  <johnsom> Are /var/lib/kolla/venv/bin/manage.py and /horizon-source/horizon-14.0.2/manage.py the same file?
16:43  <johnsom> Doesn't kolla run all of the services inside containers? Are you inside the horizon container?
16:44  <mloza> johnsom: They are the same. No output with diff -Naur
16:44  *** yboaron has quit IRC
16:44  <mloza> I'm inside the container
16:44  <johnsom> Yeah, ok
16:45  <johnsom> If you do a "pip list" do you see the dashboards installed? If not, we need to activate a venv. Possibly the /var/lib/kolla/venv
16:46  <johnsom> Sorry, there are something like eight different deployment tools for OpenStack and I can't keep up with them all. Kolla is one I haven't really followed much.
16:54  <mloza> johnsom: I have both dashboards installed. neutron-lbaas-dashboard and octavia-dashboard
16:54  <johnsom> Ok, so you may not need to activate a venv first then. I would run the /var/lib/kolla/venv/bin/manage.py version
16:56  <johnsom> When you login to horizon, what account are you using? The admin account?
16:57  <mloza> I did source /var/lib/kolla/venv/bin/activate
16:57  <mloza> I'm using the admin account
16:58  <johnsom> Ah, ok. Yeah, as long as the python modules are listed with pip list, you are in the right place.
16:58  *** ramishra has quit IRC
16:58  <johnsom> Ok, I have mine up and running now too. So we can compare things if necessary
17:00  <mloza> it still doesn't work
17:00  <mloza> this is what it looks like https://transfer.sh/11cZRO/b1.png
17:00  <mloza> that is the neutron-lbaas-dashboard
17:01  <mloza> this is the octavia dashboard https://transfer.sh/DMHKM/b2.png
17:01  <mloza> it just keeps looping
17:03  <johnsom> Ok, can you click on the "API Access" left nav, and make sure "Load Balancer" is there and the endpoint looks right?
17:03  <mloza> the manage.py version is 1.11.14 for both /horizon-source/horizon-14.0.2/manage.py and /var/lib/kolla/venv/bin/manage.py
17:03  <johnsom> Load Balancer  http://10.21.21.65/load-balancer
17:04  <johnsom> That is what I have in my line
17:04  <johnsom> You should also have: Network  http://10.21.21.65:9696/ for neutron/neutron-lbaas
17:05  <johnsom> Of course your IP is going to be different than my lab
17:05  <mloza> Load Balancer  http://10.0.0.1:9876
17:06  <mloza> it's the only one listed in the API access page
17:06  <johnsom> Hmm, ok, so using the old port format. This *might* be an issue.
17:06  <johnsom> From the horizon container can you "curl http://10.0.0.1:9876"?
17:07  <mloza> yes
17:07  <johnsom> You should see a version document like this:
17:07  <johnsom> https://www.irccloud.com/pastebin/o7yurDR5/
17:08  <mloza> A bit different
17:09  <mloza> I don't have load-balancer/v2 in the uri
17:09  <mloza> I only have load-balancer  http://10.0.0.1:9876/ and http://10.0.0.1:9876/v2
17:09  <johnsom> Yeah, mine is master and running a different deployment tool. As long as you see something similar this is ok.
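The version document johnsom pastes is not reproduced in this log, so as a rough sketch, here is what consuming one might look like. The sample body below is an assumption shaped like the endpoints mloza describes, following the usual OpenStack version-discovery layout; treat the exact field names as illustrative:

```python
import json

# Illustrative version document, modeled on the endpoints discussed above.
sample = json.dumps({
    "versions": [
        {"status": "SUPPORTED", "id": "v1",
         "links": [{"href": "http://10.0.0.1:9876/v1", "rel": "self"}]},
        {"status": "CURRENT", "id": "v2.0",
         "links": [{"href": "http://10.0.0.1:9876/v2", "rel": "self"}]},
    ]
})

def current_endpoint(body: str) -> str:
    """Pick the href of the CURRENT API version from a version document."""
    doc = json.loads(body)
    for version in doc["versions"]:
        if version["status"] == "CURRENT":
            return version["links"][0]["href"]
    raise LookupError("no CURRENT version advertised")

assert current_endpoint(sample) == "http://10.0.0.1:9876/v2"
```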
17:11  <johnsom> Ok, the next thing we should look at is the logs. In the container, cd to /var/log/apache2
17:12  <johnsom> Have a look in the horizon_error.log file and see if there are errors being listed.
17:12  <mloza> from the table in this link https://docs.openstack.org/octavia-dashboard/latest/readme.html#enabling-octavia-dashboard-and-neutron-lbaas-dashboard
17:13  <mloza> I satisfy the condition of having no octavia driver but another driver, with the v2 API enabled and v1 in octavia
17:13  <johnsom> Yeah, you are fine compared to the table
17:14  *** takamatsu has joined #openstack-lbaas
17:16  <mloza> johnsom: I don't get any error
17:17  <johnsom> Hmm, ok, have a look in the other logs and see if something jumps out there. They won't pop into the log until you navigate to the "load balancer" tab in horizon, but there should be some old errors from your previous attempts
17:19  *** ccamposr__ has quit IRC
17:20  <mloza> Still no error
17:20  <mloza> this is what I'm getting
17:20  <mloza> 10.0.91.9 - - [31/Jan/2019:17:19:36 +0000] "GET /project/load_balancer/ HTTP/1.1" 200 4815 137688 "http://10.0.0.1/project/load_balancerlancer/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36"
17:20  <johnsom> Yeah, those are successful calls
17:20  <mloza> it just keeps repeating while the page loops
17:21  <johnsom> It's having trouble somewhere else.
17:22  <johnsom> Is this a typo? "http://10.0.0.1/project/load_balancerlancer/"
17:24  <mloza> it's a typo
17:24  <mloza> i had to manually type it
17:24  <mloza> can't copy paste from tmux
17:24  <johnsom> Ok
17:25  <johnsom> Since it had a 200, success, I wondered
17:30  <johnsom> Ok, can you check one other thing: in the octavia container, can you look in the octavia API log and see if there are any messages there that give us a clue?
17:32  <mloza> johnsom: I tailed the neutron and octavia logs but they don't indicate any errors
17:32  <johnsom> Do you see the dashboard accessing the octavia API?
17:33  <mloza> for octavia, no
17:33  <mloza> for neutron-lbaas, yes
17:33  <johnsom> Hmmm, this is very odd.
17:33  <johnsom> There should be something in the horizon logs about why the panel isn't able to load
17:35  <mloza> hmm
17:35  <mloza> can't figure it out
17:35  <johnsom> We may need to ask in the horizon channel for folks with deeper horizon experience. Short of you sending me all of the horizon logs to dig through.
17:35  <mloza> anyway, I can still create lbs in neutron-lbaas via cli
17:35  <mloza> and the octavia UI works without the neutron-lbaas dashboard
17:36  <mloza> and vice versa
17:36  <johnsom> Ah, ok, so if you uninstall the neutron-lbaas-dashboard, octavia starts working?
17:36  <mloza> yes
17:36  <johnsom> Ah, hmm, ok, that is a hint.
17:36  <mloza> i have to rebuild though
17:36  <mloza> if i don't have the octavia UI, the neutron-lbaas dash works
17:37  <johnsom> At least I know they worked together at one point. I wonder if something changed that caused them to fight. neutron-lbaas-dashboard doesn't get a lot of attention given it is deprecated and going to be retired in September.
17:37  <mloza> yeah. Hopefully the third-party driver will support octavia as a provider by that time
17:37  <johnsom> What version of the dashboards are you running, are they both rocky?
17:38  <mloza> both rocky
17:38  <johnsom> Ok, let me go take a look at those two repos. Give me a few minutes.
17:39  <mloza> ok
17:46  <johnsom> Ok, I'm not seeing anything obvious in the configs, but I wonder if the static content has a namespace conflict we didn't notice before, since they were cloned repos. But now that octavia-dashboard has advanced there is an issue.
17:48  <johnsom> Yeah, pretty sure that is the issue. The static content overlaps
17:48  *** rpittau has quit IRC
17:52  <mloza> I see. I guess I have to drop the neutron dashboard and just use octavia. I can create vips in neutron-lbaas via cli. No biggy. Thanks for the help
17:52  <johnsom> To fix that it's going to be some work.... Are you ok with upgrading your neutron-lbaas-dashboard to a master version if we fix it there?
17:53  <mloza> the octavia dashboard is the one that keeps looping
17:53  <johnsom> Ok. I will open a story and see if we can get a patch up for neutron-lbaas-dashboard to give it a separate static content namespace.
17:53  <mloza> the neutron dashboard is just empty when you access it
17:54  <johnsom> Yeah, it is probably a weird mix as the code parts are separate, but the javascript, etc. are mixed.
17:54  <mloza> in your setup, is it working?
17:55  <mloza> probably the kolla-ansible codebase messed up something
17:55  <johnsom> I only have the octavia-dashboard loaded. I don't have neutron-lbaas in this environment. I would have to build a new VM to have both
17:56  <johnsom> I'm pretty sure it's that these overlap: https://github.com/openstack/neutron-lbaas-dashboard/tree/stable/rocky/neutron_lbaas_dashboard/static/dashboard/project and https://github.com/openstack/octavia-dashboard/tree/master/octavia_dashboard/static/dashboard/project/lbaasv2
17:56  <mloza> Works fine. Just don't install both. It blows up when you have both
17:56  <johnsom> when horizon imports the static content
17:57  <johnsom> It was working fine, so I think everyone forgot about the static content paths
17:58  <johnsom> Until of course the octavia-dashboard significantly changed its static content, which we have.
18:01  <johnsom> Ok, tracking it here: https://storyboard.openstack.org/#!/story/2004913
18:02  <johnsom> Again, is this something you would like soon or not so important? We will try to get it fixed before the end of Stein, but trying to prioritize.
18:04  <mloza> Not that important but it's good to have
18:04  <johnsom> Yeah, it's something we need as a migration tool for sure.
18:04  <johnsom> Ok, thanks
18:05  <johnsom> Sorry we dropped the ball on that
18:05  <mloza> Thank you too
18:05  <mloza> No worries
18:10  *** Swami has joined #openstack-lbaas
18:14  <rm_work> i may have found a bug in the batch-update-members
18:15  <johnsom> Joy
18:15  <rm_work> because it uses the MemberPOST type, there are some fields that are defaulted to None
18:15  <rm_work> instead of just leaving them Unset
18:15  <rm_work> i think it will always clear those fields
18:15  <rm_work> when you update a member
18:15  <rm_work> (using batch)
18:15  <rm_work> i'm not really sure why we had to default to None...
18:15  <rm_work> Unset should have been fine?
18:16  <johnsom> Oh, for member port and address?
18:16  <rm_work> yes
18:17  <johnsom> Yeah, really it's better to not put defaults in the types....
18:17  <rm_work> technically also anything with a default, but fixing backup and admin_state_up will be a little more tricky
18:17  <johnsom> Just for this exact reason
18:17  <johnsom> Just move it to the controller, where you have more context about whether it should have a default or not
18:18  <rm_work> yes
18:18  <rm_work> I will do that in ... err... a patch i may have written
18:19  <johnsom> Really we need to scrub the whole API for Unset vs. blank issues. Along with the CLI updates for "unset" commands
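A toy illustration of the Unset-vs-None problem rm_work describes. This is not the real Octavia/WSME code; `Unset` and `apply_update` here are hypothetical stand-ins. The point is that when a request type defaults omitted fields to None, an update handler cannot distinguish "client didn't send this field" from "client explicitly cleared it", so a batch member update silently wipes fields:

```python
# A sentinel distinguishes "field not sent" from "field explicitly set to null".
class _UnsetType:
    def __repr__(self):
        return "Unset"

Unset = _UnsetType()

def apply_update(existing: dict, patch: dict) -> dict:
    """Only overwrite fields the client actually supplied."""
    result = {}
    for key, old in existing.items():
        new = patch.get(key, Unset)
        result[key] = old if new is Unset else new
    return result

member = {"address": "10.0.0.5", "protocol_port": 80, "weight": 1}

# If the type defaults omitted fields to None, weight gets clobbered:
bad_patch = {"address": "10.0.0.6", "protocol_port": None, "weight": None}
assert apply_update(member, bad_patch)["weight"] is None

# With Unset defaults, omitted fields are left alone:
good_patch = {"address": "10.0.0.6"}
assert apply_update(member, good_patch)["weight"] == 1
assert apply_update(member, good_patch)["protocol_port"] == 80
```

This is also why johnsom suggests moving defaults into the controller: there the code has enough context to decide whether a missing field means "keep the old value" or "apply a default".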
18:22  <rm_work> yes
18:29  *** trown is now known as trown|lunch
18:58  <tomtom001> Hello, my octavia health manager log is full of these messages: http://paste.openstack.org/show/744334/  Does anyone know why this occurs?
18:58  *** celebdor has joined #openstack-lbaas
19:00  <xgerman> tomtom001: did you change the heartbeat encryption key? https://docs.openstack.org/octavia/latest/admin/guides/operator-maintenance.html#changing-the-heartbeat-encryption-key
19:00  <johnsom> It could be caused by old amphorae still running from when you changed your controller heartbeat_key. https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.heartbeat_key
19:01  <xgerman> of course we did change the format, and if you upgraded the amps but not the control plane things can get messy
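Background for the exchange above: amphora heartbeats are authenticated with an HMAC derived from `heartbeat_key`, so a key mismatch between amphorae and controllers makes the health manager reject every packet and log errors. A rough sketch of the idea using the stdlib; the digest algorithm and packet layout here are illustrative assumptions, not Octavia's exact wire format:

```python
import hashlib
import hmac

def sign(payload: bytes, key: bytes) -> bytes:
    """Append an HMAC digest to a heartbeat payload."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify(packet: bytes, key: bytes) -> bytes:
    """Check the trailing digest; raise on mismatch (e.g. a rotated key)."""
    payload, digest = packet[:-32], packet[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        raise ValueError("heartbeat HMAC mismatch (wrong heartbeat_key?)")
    return payload

packet = sign(b'{"id": "amphora-1"}', b"old-key")
assert verify(packet, b"old-key") == b'{"id": "amphora-1"}'

# An amphora built with the old key fails against a controller with a new key:
try:
    verify(packet, b"new-key")
except ValueError:
    pass  # this failure is what floods the health manager log
```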
19:02  <xgerman> johnsom: we probably should remove/deprecate https://docs.openstack.org/octavia/latest/admin/Anchor.html
19:02  <johnsom> Yes
19:05  <tomtom001> thank you, I will check
19:07  <tomtom001> johnsom, xgerman: no key was changed, however, is the health manager still supposed to listen on port 5555?
19:07  <johnsom> yes
19:07  <tomtom001> ok, so that might be part of the issue, my health manager starts up, but never listens on 5555 in netstat.
19:09  <johnsom> Well it must be; it's getting the packets and logging that message
19:10  *** celebdor has quit IRC
19:11  <tomtom001> johnsom, you're right lol, it's using udp is all
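The netstat confusion above comes down to TCP vs UDP: the health manager receives heartbeats as UDP datagrams, so it shows up under `netstat -u` / `ss -lun` rather than the default TCP listing, and there is no "LISTEN" state at all. A minimal sketch of a UDP listener (binding port 0 here picks any free port just for the demo; Octavia's default health manager port is 5555):

```python
import socket

# SOCK_DGRAM = UDP: no connection, no LISTEN state, just datagrams.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"heartbeat", ("127.0.0.1", port))

data, addr = server.recvfrom(1024)
assert data == b"heartbeat"

client.close()
server.close()
```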
19:13  *** salmankhan has quit IRC
19:22  *** AlexStaf has joined #openstack-lbaas
19:30  *** pcaruana has quit IRC
19:39  <rm_work> i wish our pep8 didn't take longer to run than our functional tests + unit tests combined
19:39  <rm_work> it makes no sense
19:39  <johnsom> It is the pylint
19:40  <rm_work> i mean, i run pylint on some other projects and it doesn't take THIS long i feel <_<
19:40  <rm_work> ah well
19:41  <johnsom> Yeah, I feel like we should be capping it from going back so far.... Would be curious to see what the other teams do
19:42  *** sapd1 has joined #openstack-lbaas
19:43  *** trown|lunch is now known as trown
19:45  <rm_work> eh, it didn't even GET to pylint before my functional+unit test suites ran
19:45  <rm_work> (when running in parallel)
19:48  *** sapd1 has quit IRC
19:54  <johnsom> I am testing out some of the queens backports at the moment. Trying to catch up on my review backlog
19:55  <rm_work> ok. i'm busy doing stuff i'm not supposed to be doing because i got a wild hair. T_T
19:56  <johnsom> lol
19:57  <rm_work> please ignore any changes you might see in the next few minutes
19:57  <johnsom> Grr, trying to build a centos image....
19:57  <johnsom> 2019-01-31 19:55:52.036 | Cannot uninstall 'virtualenv'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
19:57  <rm_work> (i'm waiting for a devstack to build anyway)
19:57  <rm_work> ROFL yes fffff centos
19:57  <rm_work> that error is so dumb
19:57  <rm_work> i see it in all kinds of different circumstances when i'm on centos
19:58  <rm_work> but it's also semi-related to the new pip i think
19:58  <rm_work> "newer pip", not the latest one (just anything >9)
19:58  <johnsom> Yeah, hmm. This is queens.... Wonder what the fix is....
20:04  <johnsom> I'm going to try an upgraded version of DIB
20:07  <openstackgerrit> Adam Harwell proposed openstack/octavia master: Allow members to be accessed as a base resource  https://review.openstack.org/634302
20:08  <rm_work> is our DIB version pinned? if so, wtf? is the centos image we use NOT pinned in queens?
20:08  <rm_work> what would have changed
20:08  *** AlexStaf has quit IRC
20:10  <johnsom> The new DIB did work. I don't think it is....
20:11  <rm_work> hmm
20:11  <johnsom> But I don't think it will upgrade what I have installed either
20:17  <johnsom> Ok, everything seemed to work with ubuntu images. booting up a centos lb now.
20:30  <openstackgerrit> Merged openstack/octavia-tempest-plugin master: Add configuration to enable/disable L7,L4 protocols  https://review.openstack.org/621493
20:35  <johnsom> WrappedFailure: WrappedFailure: [Failure: octavia.amphorae.drivers.haproxy.exceptions.InternalServerError: Internal Server Error, Failure: octavia.amphorae.drivers.haproxy.exceptions.InternalServerError: Internal Server Error]
20:35  <johnsom> sigh
20:36  *** celebdor has joined #openstack-lbaas
20:42  <johnsom> Jan 31 20:39:51 amphora-67b111f9-2346-44f5-bb10-a58e94ff7c6a amphora-agent: 2019-01-31 20:39:51.696 3142 ERROR flask.app CalledProcessError: Command '['rpm', '-q', '--queryformat', '%{VERSION}', 'ipvsadm']' returned non-zero exit status 1
20:43  *** jlaffaye has quit IRC
20:45  <johnsom> Hmm, looks like I forgot to set the variables to get the right amp-agent version.
20:48  *** jlaffaye has joined #openstack-lbaas
21:06  *** celebdor has quit IRC
21:07  *** celebdor has joined #openstack-lbaas
21:26  <rm_work> johnsom: want to fix https://review.openstack.org/#/c/632842/ really quick? :P
21:27  <johnsom> Sure
21:28  *** salmankhan has joined #openstack-lbaas
21:29  <openstackgerrit> Michael Johnson proposed openstack/octavia master: Add amphora agent configuration update admin API  https://review.openstack.org/632842
21:29  <johnsom> Yeah, I went to fix the duplicate, but edited the wrong one. doh
21:31  *** salmankhan1 has joined #openstack-lbaas
21:31  *** celebdor has quit IRC
21:32  *** salmankhan has quit IRC
21:32  *** salmankhan1 is now known as salmankhan
21:34  <rm_work> i still need to review the patch below it tho T_T
21:38  <johnsom> I just signed off on two of the queens backports. They tested good for me
21:41  <openstackgerrit> Merged openstack/octavia master: Support remote debugging with PyDev  https://review.openstack.org/619944
21:43  *** salmankhan has quit IRC
22:02  *** trown is now known as trown|outtypewww
23:26  <openstackgerrit> Michael Johnson proposed openstack/octavia master: Fix the amphora noop driver  https://review.openstack.org/634344
23:26  <johnsom> rm_work  ^^^^ nun worthy
23:34  <colin-> Provider 'amphora' raised a driver error: Unable to complete operation for network edf99090-df57-46eb-9fbe-973041560b98. The IP address 172.16.5.89 is in use.
23:34  <colin-> curious what makes this determination, does it consult with neutron?
23:35  <johnsom> Yes
23:35  <colin-> anything i can simulate from the cli?
23:36  <johnsom> Yeah, allocate a port with an IP on a network via neutron. Then try to create a load balancer with a VIP that requests that fixed IP.
23:39  <johnsom> We need to add more driver error types to make these errors a bit more user friendly again.
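A toy model of the check behind that "IP address ... is in use" error. Illustrative only: the real check is performed by neutron during port creation, and the class and function below are hypothetical, not any neutron or Octavia API:

```python
class IPInUse(Exception):
    """Raised when a requested fixed IP is already allocated on the network."""

def allocate_fixed_ip(allocated: set, ip: str) -> set:
    """Toy allocator: refuse a fixed IP that an existing port already holds."""
    if ip in allocated:
        raise IPInUse(f"The IP address {ip} is in use.")
    return allocated | {ip}

# A pre-created neutron port holds the address...
ips = allocate_fixed_ip(set(), "172.16.5.89")

# ...so a VIP create requesting the same fixed IP fails, as colin- saw:
try:
    allocate_fixed_ip(ips, "172.16.5.89")
except IPInUse as exc:
    assert "172.16.5.89" in str(exc)
```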
23:55  <openstackgerrit> Michael Johnson proposed openstack/octavia master: Fix the amphora noop driver  https://review.openstack.org/634344

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!