*** yangjianfeng has joined #openstack-lbaas | 00:51 | |
*** yangjianfeng has quit IRC | 00:53 | |
*** celebdor has quit IRC | 01:29 | |
*** Dinesh_Bhor has joined #openstack-lbaas | 01:48 | |
*** dims has quit IRC | 02:38 | |
*** dims has joined #openstack-lbaas | 02:55 | |
*** sapd1 has joined #openstack-lbaas | 03:00 | |
*** psachin has joined #openstack-lbaas | 03:01 | |
*** ramishra has joined #openstack-lbaas | 03:50 | |
*** hongbin has joined #openstack-lbaas | 04:48 | |
*** hongbin has quit IRC | 05:04 | |
*** sapd1 has quit IRC | 05:43 | |
*** gcheresh has joined #openstack-lbaas | 05:58 | |
*** sapd1 has joined #openstack-lbaas | 06:30 | |
*** ccamposr has joined #openstack-lbaas | 06:56 | |
*** ccamposr__ has joined #openstack-lbaas | 07:02 | |
*** gcheresh has quit IRC | 07:03 | |
*** ccamposr has quit IRC | 07:04 | |
*** phuochc has joined #openstack-lbaas | 07:08 | |
*** phuoc has quit IRC | 07:08 | |
*** phuochoang has joined #openstack-lbaas | 07:09 | |
*** phuochc has quit IRC | 07:13 | |
*** gcheresh has joined #openstack-lbaas | 07:16 | |
*** takamatsu has joined #openstack-lbaas | 07:46 | |
*** rpittau has joined #openstack-lbaas | 08:07 | |
*** ramishra has quit IRC | 08:17 | |
*** celebdor has joined #openstack-lbaas | 08:18 | |
*** sapd1 has quit IRC | 08:22 | |
*** ramishra has joined #openstack-lbaas | 08:28 | |
*** yboaron has joined #openstack-lbaas | 08:32 | |
*** Dinesh_Bhor has quit IRC | 08:44 | |
*** Dinesh_Bhor has joined #openstack-lbaas | 08:44 | |
*** psachin has quit IRC | 09:40 | |
*** sapd1 has joined #openstack-lbaas | 09:55 | |
*** AlexStaf has joined #openstack-lbaas | 10:26 | |
*** Dinesh_Bhor has quit IRC | 10:54 | |
*** salmankhan has joined #openstack-lbaas | 10:54 | |
*** sapd1_ has quit IRC | 11:18 | |
*** sapd1 has quit IRC | 11:21 | |
*** rpittau has quit IRC | 11:34 | |
*** rpittau has joined #openstack-lbaas | 11:34 | |
*** gcheresh has quit IRC | 11:52 | |
*** gcheresh has joined #openstack-lbaas | 11:53 | |
*** salmankhan1 has joined #openstack-lbaas | 12:14 | |
*** salmankhan has quit IRC | 12:15 | |
*** salmankhan1 is now known as salmankhan | 12:15 | |
*** yboaron has quit IRC | 12:16 | |
*** yboaron has joined #openstack-lbaas | 13:30 | |
*** rpittau has quit IRC | 13:35 | |
*** rpittau has joined #openstack-lbaas | 13:46 | |
*** pcaruana has quit IRC | 13:52 | |
*** takamatsu has quit IRC | 13:54 | |
*** pcaruana has joined #openstack-lbaas | 14:02 | |
*** mkuf_ has joined #openstack-lbaas | 14:14 | |
*** mkuf has quit IRC | 14:18 | |
*** numans has quit IRC | 14:26 | |
*** gcheresh has quit IRC | 14:49 | |
*** openstackgerrit has joined #openstack-lbaas | 15:09 | |
openstackgerrit | Arnaud Morin proposed openstack/octavia master: Update requirements for ubuntu https://review.openstack.org/624405 | 15:09 |
*** AlexStaf has quit IRC | 15:24 | |
tomtom001 | johnsom: thank you for your help yesterday. Since I have 3 controllers and thus 3 octavia containers, it seems that each octavia process creates its own set of spares, so 3 octavia containers @ 2 spares = 6 spares. I changed the number to 1 so that only 3 spares were created. | 15:30 |
xgerman | an Octavia install should coordinate and only do the spares specified… if that's not the case please file a bug | 15:35 |
johnsom | Is each container using its own database? | 15:36 |
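The setting tomtom001 changed is presumably the housekeeping spare-pool option in octavia.conf; a minimal sketch (the value shown is his workaround, and the per-process behavior described above is his observation, not confirmed in the channel):

    [house_keeping]
    # Target number of spare amphorae. With 3 housekeeping processes that
    # do not coordinate, each may try to maintain this count on its own,
    # so a value of 2 can yield 6 spares; setting 1 yields 3.
    spare_amphora_pool_size = 1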
*** trown|outtypewww is now known as trown | 15:38 | |
*** celebdor has quit IRC | 16:07 | |
*** mloza has joined #openstack-lbaas | 16:15 | |
mloza | Hello, I tried to deploy the octavia-dashboard and neutron-lbaas-dashboard UIs. Both have panels, but the octavia UI keeps looping/refreshing and the neutron dashboard is empty. I have octavia running as standalone | 16:16 |
johnsom | If you are running Octavia standalone, you don’t need the neutron-lbaas-dashboard and should remove it from horizon. | 16:22 |
mloza | johnsom: I need both. I have third party driver in neutron-lbaas | 16:25 |
johnsom | Ah. Ok. Encourage them to get their Octavia driver done. How did you install your OpenStack and how did you install the dashboards? | 16:26 |
mloza | johnsom: I installed through kolla-ansible. If both dashboards are enabled, the issue happens. The issue is just in horizon; I'm able to create load balancers separately. | 16:27 |
johnsom | Hmm, ok. I am not very familiar with the kolla installer, but I can try to help you debug the dashboards. | 16:30 |
tomtom001 | johnsom: this was an openstack queens setup with openstack-ansible. There is one galera database shared by octavia across all 3 containers. | 16:30 |
tomtom001 | one more question: the Provisioning Status for members is always Pending Update in Horizon, and has been for about an hour now... The load balancer seems to be working fine but the status never updates. Is this horizon weirdness or a known display bug? | 16:31 |
johnsom | That is something wrong with your cloud, or someone did a kill -9 on a controller process instead of a graceful shutdown | 16:32 |
johnsom | mloza Can you do a ls of the ${HORIZON_DIR}/openstack_dashboard/local/enabled/ directory? | 16:33 |
mloza | johnsom: just a sec | 16:33 |
johnsom | I'm going to boot up a dashboard VM as well so I can look at my settings. I haven't run both in well over a year. | 16:33 |
mloza | johnsom: http://sprunge.us/ccJRim | 16:35 |
johnsom | tomtom001 Also note, older versions of the dashboard don't auto-refresh the status. You need the rocky or newer dashboard for auto-refresh | 16:36 |
mloza | this is in /var/lib/kolla/venv/lib/python2.7/site-packages/openstack_dashboard/local/enabled/ | 16:36 |
johnsom | Ok, that looks good. | 16:36 |
johnsom | The next thing I would try is re-running the horizon commands to load all of the dashboard parts. Maybe one got missed or interrupted. | 16:37 |
johnsom | from the horizon home directory, inside the venv, you will want to run the following commands: | 16:38 |
mloza | are you talking about ./manage.py collectstatic and ./manage.py compress? | 16:38 |
johnsom | $ ./manage.py collectstatic | 16:38 |
johnsom | $ ./manage.py compress | 16:38 |
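Roughly, run from the Horizon home directory with the venv active (a sketch; the --noinput and --force flags are standard Django/django-compressor options added here for non-interactive use, not quoted from the channel):

    source /path/to/venv/bin/activate
    ./manage.py collectstatic --noinput   # copy every enabled dashboard's static files into place
    ./manage.py compress --force          # regenerate the compressed JS/CSS bundles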
tomtom001 | johnsom: thank you | 16:38 |
johnsom | Yeah, I guess you are already on top of that. | 16:38 |
*** yboaron has quit IRC | 16:39 | |
*** yboaron has joined #openstack-lbaas | 16:39 | |
mloza | johnsom: I have multiple manage.py http://sprunge.us/1VPbEy . Which one should I run? | 16:41 |
johnsom | Hmmm, that is a good question. | 16:42 |
johnsom | Are the /var/lib/kolla/venv/bin/manage.py and /horizon-source/horizon-14.0.2/manage.py the same file? | 16:42 |
johnsom | Doesn't kolla run all of the services inside containers? are you inside the horizon container? | 16:43 |
mloza | johnsom: They are the same. No output with diff -Naur | 16:44 |
*** yboaron has quit IRC | 16:44 | |
mloza | I'm inside the container | 16:44 |
johnsom | Yeah, ok | 16:44 |
johnsom | If you do a "pip list" do you see the dashboards installed? If not, we need to activate a venv. Possibly the /var/lib/kolla/venv | 16:45 |
johnsom | Sorry, there are something like eight different deployment tools for OpenStack and I can't keep up with them all. Kolla is one I haven't really followed much. | 16:46 |
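A quick check along those lines, assuming the kolla venv path mentioned above (illustrative):

    source /var/lib/kolla/venv/bin/activate
    pip list | grep -i dashboard   # should list octavia-dashboard and neutron-lbaas-dashboard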
mloza | johnsom: I have both dashboards installed, neutron-lbaas-dashboard and octavia-dashboard | 16:54 |
johnsom | Ok, so you may not need to activate a venv first then. I would run the /var/lib/kolla/venv/bin/manage.py version | 16:54 |
johnsom | When you login to horizon, what account are you using? The admin account? | 16:56 |
mloza | I did source /var/lib/kolla/venv/bin/activate | 16:57 |
mloza | I'm using the admin account | 16:57 |
johnsom | Ah, ok. Yeah, as long as the python modules are listed with pip list, you are in the right place. | 16:58 |
*** ramishra has quit IRC | 16:58 | |
johnsom | Ok, I have mine up and running now too. So we can compare things if necessary | 16:58 |
mloza | it still doesn't work | 17:00 |
mloza | this is what it looks like https://transfer.sh/11cZRO/b1.png | 17:00 |
mloza | that is the neutron-lbaas-dashboard | 17:00 |
mloza | this is octavia dashboard https://transfer.sh/DMHKM/b2.png | 17:01 |
mloza | it just keeps looping | 17:01 |
johnsom | Ok, can you click on the "API Access" left nav, and make sure "Load Balancer" is there and the endpoint looks right? | 17:03 |
mloza | the manage.py version is 1.11.14 for both /horizon-source/horizon-14.0.2/manage.py and /var/lib/kolla/venv/bin/manage.py | 17:03 |
johnsom | Load Balancer: http://10.21.21.65/load-balancer | 17:03 |
johnsom | That is what I have in my line | 17:04 |
johnsom | You should also have: Network: http://10.21.21.65:9696/ for neutron/neutron-lbaas | 17:04 |
johnsom | Of course your IP is going to be different than my lab | 17:05 |
mloza | Load Balancer http://10.0.0.1:9876 | 17:05 |
mloza | it's the only one listed in the API Access page | 17:06 |
johnsom | Hmm, ok, so using the old port format. This *might* be an issue. | 17:06 |
johnsom | From the horizon container can you "curl http://10.0.0.1:9876"? | 17:06 |
mloza | yes | 17:07 |
johnsom | You should see a version document like this: | 17:07 |
johnsom | https://www.irccloud.com/pastebin/o7yurDR5/ | 17:07 |
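The linked paste is not preserved in the log; the Octavia API root normally returns a JSON version document roughly like this (an illustrative sketch, not the actual paste):

    {
      "versions": [
        {"id": "v2.0", "status": "CURRENT",
         "links": [{"href": "http://10.21.21.65/load-balancer/v2", "rel": "self"}]}
      ]
    }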
mloza | A bit different | 17:08 |
mloza | I don't have load-balancer/v2 in the uri | 17:09 |
mloza | I only have load-balancer: http://10.0.0.1:9876/ and http://10.0.0.1:9876/v2 | 17:09 |
johnsom | Yeah, mine is master and running a different deployment tool. As long as you see something similar this is ok. | 17:09 |
johnsom | Ok, next thing we should look at is the logs. In the container, cd to /var/log/apache2 | 17:11 |
johnsom | Have a look in the horizon_error.log file and see if there are errors being listed. | 17:12 |
mloza | from the table in this link https://docs.openstack.org/octavia-dashboard/latest/readme.html#enabling-octavia-dashboard-and-neutron-lbaas-dashboard | 17:12 |
mloza | i satisfied the condition that i have no octavia driver but another driver, with the v2 API enabled and v1 in octavia | 17:13 |
johnsom | Yeah, you are fine compared to the table | 17:13 |
*** takamatsu has joined #openstack-lbaas | 17:14 | |
mloza | johnsom: I don't get any errors | 17:16 |
johnsom | Hmm, ok, have a look in the other logs and see if something jumps out there. They won't pop into the log until you navigate to the "load balancer" tab in horizon, but there should be some old errors from your previous attempts | 17:17 |
*** ccamposr__ has quit IRC | 17:19 | |
mloza | Still no error | 17:20 |
mloza | this is what i'm getting | 17:20 |
mloza | 10.0.91.9 - - [31/Jan/2019:17:19:36 +0000] "GET /project/load_balancer/ HTTP/1.1" 200 4815 137688 "http://10.0.0.1/project/load_balancerlancer/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36" | 17:20 |
johnsom | Yeah, those are successful calls | 17:20 |
mloza | it just keeps repeating while the page loops | 17:20 |
johnsom | It's having trouble somewhere else. | 17:21 |
johnsom | Is this a typo? "http://10.0.0.1/project/load_balancerlancer/" | 17:22 |
mloza | it's a typo | 17:24 |
mloza | i had to manual type it | 17:24 |
mloza | can't copy paste from tmux | 17:24 |
johnsom | Ok | 17:24 |
johnsom | Since it had a 200, success, I wondered | 17:25 |
johnsom | Ok, can you check one other thing, in the octavia container, can you look in the octavia API log and see if there are any messages there that give us a clue? | 17:30 |
mloza | johnsom: I tailed the neutron and octavia logs but they don't indicate any errors | 17:32 |
johnsom | Do you see the dashboard accessing the octavia API? | 17:32 |
mloza | for octavia, no | 17:33 |
mloza | for neutron-lbaas yes | 17:33 |
johnsom | Hmmm, this is very odd. | 17:33 |
johnsom | There should be something in the horizon logs about why the panel isn't able to load | 17:33 |
mloza | hmm | 17:35 |
mloza | can't figure it out | 17:35 |
johnsom | We may need to ask in the horizon channel for folks with deeper horizon experience. Short of you sending me all of the horizon logs to dig through. | 17:35 |
mloza | anyway, i can still create lbs in neutron-lbaas via cli | 17:35 |
mloza | and octavia ui is working without neutron lbaas dashboard | 17:35 |
mloza | vice versa | 17:36 |
johnsom | Ah, ok, so if you uninstall the neutron-lbaas-dashboard, octavia starts working? | 17:36 |
mloza | yes | 17:36 |
johnsom | Ah, hmm, ok, that is a hint. | 17:36 |
mloza | i have to rebuild though | 17:36 |
mloza | if i dont have octavia ui, neutron lbaas dash works | 17:36 |
johnsom | At least I know they worked together at one point. I wonder if something changed that caused them to fight. neutron-lbaas-dashboard doesn't get a lot of attention given it is deprecated and going to be retired in September. | 17:37 |
mloza | yeah. Hopefully the third party driver will support octavia as provider by that time | 17:37 |
johnsom | What version of the dashboards are you running, are they both rocky? | 17:37 |
mloza | both rocky | 17:38 |
johnsom | Ok, let me go take at look at those two repos. Give me a few minutes. | 17:38 |
mloza | ok | 17:39 |
johnsom | Ok, I'm not seeing anything obvious in the configs, but I wonder if the static content has a namespace conflict we didn't notice before, since one dashboard was cloned from the other. Now that octavia-dashboard has advanced, there is an issue. | 17:46 |
johnsom | Yeah, pretty sure that is the issue. The static content overlaps | 17:48 |
*** rpittau has quit IRC | 17:48 | |
mloza | I see. I guess I have to drop the neutron dashboard and just use octavia. I can create VIPs in neutron-lbaas via the CLI. No biggy. Thanks for the help | 17:52 |
johnsom | To fix that it's going to be some work.... Are you ok with upgrading your neutron-lbaas-dashboard to a master version if we fix it there? | 17:52 |
mloza | the octavia dashboard is one that keeps looping | 17:53 |
johnsom | Ok. I will open a story and see if we can get a patch up for neutron-lbaas-dashboard to give it a separate static content namespace. | 17:53 |
mloza | neutron dashboard is just empty when you access it | 17:53 |
johnsom | Yeah, it is probably a weird mix as the code parts are separate, but the javascript, etc. are mixed. | 17:54 |
mloza | in your setup, is it working? | 17:54 |
mloza | probably the kolla-ansible codebase messed up something | 17:55 |
johnsom | I only have the octavia-dashboard loaded. I don't have neutron-lbaas in this environment. I would have to build a new VM to have both | 17:55 |
johnsom | I'm pretty sure it's that these overlap: https://github.com/openstack/neutron-lbaas-dashboard/tree/stable/rocky/neutron_lbaas_dashboard/static/dashboard/project and https://github.com/openstack/octavia-dashboard/tree/master/octavia_dashboard/static/dashboard/project/lbaasv2 | 17:56 |
mloza | Works fine. Just don't install both. It blows up when you have both | 17:56 |
johnsom | when horizon imports the static content | 17:56 |
johnsom | It was working fine, so I think everyone forgot about the static content paths | 17:57 |
johnsom | Until of course the octavia-dashboard significantly changes its static content, which we have. | 17:58 |
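A sketch of the suspected collision, based on the repo paths linked above (exact file lists are assumptions):

    # Both plugins ship static files under the same Horizon namespace:
    #   .../neutron_lbaas_dashboard/static/dashboard/project/...
    #   .../octavia_dashboard/static/dashboard/project/lbaasv2/...
    # 'collectstatic' flattens them into one tree, so whichever plugin is
    # processed last silently overwrites the other's JS/HTML fragments:
    ls ${HORIZON_DIR}/static/dashboard/project/lbaasv2/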
johnsom | Ok, tracking it here: https://storyboard.openstack.org/#!/story/2004913 | 18:01 |
johnsom | Again, is this something you would like soon or not so important? We will try to get it fixed before the end of Stein, but trying to prioritize. | 18:02 |
mloza | Not that important but it's good to have | 18:04 |
johnsom | Yeah, it's something we need as a migration tool for sure. | 18:04 |
johnsom | Ok, thanks | 18:04 |
johnsom | Sorry we dropped the ball on that | 18:05 |
mloza | Thank you too | 18:05 |
mloza | No worries | 18:05 |
*** Swami has joined #openstack-lbaas | 18:10 | |
rm_work | i may have found a bug in the batch-update-members | 18:14 |
johnsom | Joy | 18:15 |
rm_work | because it uses the MemberPOST type, there are some fields that are defaulted to None | 18:15 |
rm_work | instead of just leaving them Unset | 18:15 |
rm_work | i think it will always clear those fields | 18:15 |
rm_work | when you update a member | 18:15 |
rm_work | (using batch) | 18:15 |
rm_work | i'm not really sure why we had to default to None... | 18:15 |
rm_work | Unset should have been fine? | 18:15 |
johnsom | Oh, for member port and address? | 18:16 |
rm_work | yes | 18:16 |
johnsom | Yeah, really it's better to not put defaults in the types.... | 16:16 |
rm_work | technically also anything with a default, but fixing backup and admin_state_up will be a little more tricky | 18:17 |
johnsom | Just for this exact reason | 18:17 |
johnsom | Just move it to the controller, where you have more context about if it should have a default or not | 18:17 |
rm_work | yes | 18:18 |
rm_work | I will do that in ... err... a patch i may have written | 18:18 |
johnsom | Really we need to scrub the whole API for Unset vs. blank issues. Along with the CLI updates for "unset" commands | 18:19 |
rm_work | yes | 18:22 |
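A minimal sketch of the Unset-vs-None problem in WSME-style types like Octavia's (the class and attribute names here are illustrative, not the real MemberPOST definition):

    # Illustrative only: how a default of None erases the Unset/None distinction.
    from wsme import types as wtypes

    class MemberIllustration(wtypes.Base):
        # With an explicit default, an omitted field deserializes to None,
        # which a batch update cannot tell apart from "please clear this field".
        monitor_port = wtypes.wsattr(wtypes.IntegerType(), default=None)

        # With no default, an omitted field stays wtypes.Unset; the controller
        # can skip it and apply any default itself, where it has more context.
        monitor_address = wtypes.wsattr(wtypes.StringType())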
*** trown is now known as trown|lunch | 18:29 | |
tomtom001 | Hello, my octavia health manager log is full of these messages: http://paste.openstack.org/show/744334/ Does anyone know why this occurs? | 18:58 |
*** celebdor has joined #openstack-lbaas | 18:58 | |
xgerman | tomtom001: did you change the heartbeat encryption key? https://docs.openstack.org/octavia/latest/admin/guides/operator-maintenance.html#changing-the-heartbeat-encryption-key | 19:00 |
johnsom | It could be caused by old amphora running when you changed your controller heartbeat_key. https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.heartbeat_key | 19:00 |
xgerman | of course we did change the format, and if you upgraded the amps but not the control plane things can get messy | 19:01 |
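For reference, the key in question lives in octavia.conf on the controllers and must match what the running amphorae were configured with (a sketch; the value is a placeholder):

    [health_manager]
    # Must be identical on every controller and on every amphora, or the
    # heartbeat packets fail validation on the health manager.
    heartbeat_key = some-shared-secret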
xgerman | johnsom: we probably should remove/deprecate https://docs.openstack.org/octavia/latest/admin/Anchor.html | 19:02 |
johnsom | Yes | 19:02 |
tomtom001 | thank you I will check | 19:05 |
tomtom001 | johnsom, xgerman: no key was changed, however, is the health manager still supposed to listen on port 5555? | 19:07 |
johnsom | yes | 19:07 |
tomtom001 | ok, so that might be part of the issue, my health manager starts up, but never listens on 5555 in netstat. | 19:07 |
johnsom | Well it must be, it's getting the packets and logging that message | 19:09 |
*** celebdor has quit IRC | 19:10 | |
tomtom001 | johnsom, you're right lol, it's using udp is all | 19:11 |
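The gotcha, for anyone reading along: the health manager listener is UDP, so a TCP-only netstat misses it (illustrative commands):

    netstat -ulnp | grep 5555   # -u includes UDP sockets
    ss -ulnp | grep 5555        # iproute2 equivalent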
*** salmankhan has quit IRC | 19:13 | |
*** AlexStaf has joined #openstack-lbaas | 19:22 | |
*** pcaruana has quit IRC | 19:30 | |
rm_work | i wish our pep8 didn't take longer to run than our functional tests + unit tests combined | 19:39 |
rm_work | it makes no sense | 19:39 |
johnsom | It is the pylint | 19:39 |
rm_work | i mean, i run pylint on some other projects and it doesn't take THIS long i feel <_< | 19:40 |
rm_work | ah well | 19:40 |
johnsom | Yeah, I feel like we should be capping it from going back so far.... Would be curious to see what the other teams do | 19:41 |
*** sapd1 has joined #openstack-lbaas | 19:42 | |
*** trown|lunch is now known as trown | 19:43 | |
rm_work | eh, it didn't even GET to pylint before my functional+unit test suites ran | 19:45 |
rm_work | (when running in parallel) | 19:45 |
*** sapd1 has quit IRC | 19:48 | |
johnsom | I am testing out some of the queens backports at the moment. Trying to catch up on my review backlog | 19:54 |
rm_work | ok. i'm busy doing stuff i'm not supposed to be doing because i got a wild hair. T_T | 19:55 |
johnsom | lol | 19:56 |
rm_work | please ignore any changes you might see in the next few minutes | 19:57 |
johnsom | Grr, trying to build a centos image.... | 19:57 |
johnsom | 2019-01-31 19:55:52.036 | Cannot uninstall 'virtualenv'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. | 19:57 |
rm_work | (i'm waiting for a devstack to build anyway) | 19:57 |
rm_work | ROFL yes fffff centos | 19:57 |
rm_work | that error is so dumb | 19:57 |
rm_work | i see it in all kinds of different circumstances when i'm on centos | 19:57 |
rm_work | but also semi-related to the new pip i think | 19:58 |
rm_work | "newer pip", not the latest one (just anything >9) | 19:58 |
johnsom | Yeah, hmm, This is queens.... Wonder what the fix is.... | 19:58 |
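One commonly used workaround for this distutils-installed-package failure (an assumption, not something confirmed in the channel) is to have pip skip the uninstall step:

    pip install --ignore-installed virtualenv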
johnsom | I'm going to try an upgraded version of DIB | 20:04 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: Allow members to be accessed as a base resource https://review.openstack.org/634302 | 20:07 |
rm_work | is our DIB version pinned? if so, wtf? is the centos image we use NOT pinned in queens? | 20:08 |
rm_work | what would have changed | 20:08 |
*** AlexStaf has quit IRC | 20:08 | |
johnsom | The new DIB did work. I don't think it is.... | 20:10 |
rm_work | hmm | 20:11 |
johnsom | But, I don't think it will upgrade what I have installed either | 20:11 |
johnsom | Ok, everything seemed to work with ubuntu images. booting up a centos lb now. | 20:17 |
openstackgerrit | Merged openstack/octavia-tempest-plugin master: Add configuration to enable/disable L7,L4 protocols https://review.openstack.org/621493 | 20:30 |
johnsom | WrappedFailure: WrappedFailure: [Failure: octavia.amphorae.drivers.haproxy.exceptions.InternalServerError: Internal Server Error, Failure: octavia.amphorae.drivers.haproxy.exceptions.InternalServerError: Internal Server Error] | 20:35 |
johnsom | sigh | 20:35 |
*** celebdor has joined #openstack-lbaas | 20:36 | |
johnsom | Jan 31 20:39:51 amphora-67b111f9-2346-44f5-bb10-a58e94ff7c6a amphora-agent: 2019-01-31 20:39:51.696 3142 ERROR flask.app CalledProcessError: Command '['rpm', '-q', '--queryformat', '%{VERSION}', 'ipvsadm']' returned non-zero exit status 1 | 20:42 |
*** jlaffaye has quit IRC | 20:43 | |
johnsom | Hmm, looks like I forgot to set the variables to get the right amp-agent version. | 20:45 |
*** jlaffaye has joined #openstack-lbaas | 20:48 | |
*** celebdor has quit IRC | 21:06 | |
*** celebdor has joined #openstack-lbaas | 21:07 | |
rm_work | johnsom: want to fix https://review.openstack.org/#/c/632842/ really quick? :P | 21:26 |
johnsom | Sure | 21:27 |
*** salmankhan has joined #openstack-lbaas | 21:28 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Add amphora agent configuration update admin API https://review.openstack.org/632842 | 21:29 |
johnsom | Yeah, I went to fix the duplicate, but edited the wrong one. doh | 21:29 |
*** salmankhan1 has joined #openstack-lbaas | 21:31 | |
*** celebdor has quit IRC | 21:31 | |
*** salmankhan has quit IRC | 21:32 | |
*** salmankhan1 is now known as salmankhan | 21:32 | |
rm_work | i still need to review the patch below it tho T_T | 21:34 |
johnsom | I just signed off on two of the queens backports. They tested good for me | 21:38 |
openstackgerrit | Merged openstack/octavia master: Support remote debugging with PyDev https://review.openstack.org/619944 | 21:41 |
*** salmankhan has quit IRC | 21:43 | |
*** trown is now known as trown|outtypewww | 22:02 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix the amphora noop driver https://review.openstack.org/634344 | 23:26 |
johnsom | rm_work ^^^^ nun worthy | 23:26 |
colin- | Provider 'amphora' raised a driver error: Unable to complete operation for network edf99090-df57-46eb-9fbe-973041560b98. The IP address 172.16.5.89 is in use. | 23:34 |
colin- | curious what makes this determination, does it consult with neutron? | 23:34 |
johnsom | Yes | 23:35 |
colin- | anything i can simulate from the cli? | 23:35 |
johnsom | Yeah, allocate a port on a network that has an IP via neutron. Then try to create a load balancer with a VIP that requests that fixed IP. | 23:36 |
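Sketched with the openstack CLI (network name, IP, and resource names are placeholders):

    # Claim the address with a plain neutron port:
    openstack port create --network private \
        --fixed-ip ip-address=172.16.5.89 occupied-port
    # Then request the same address as a VIP; this should fail with the
    # "IP address ... is in use" driver error:
    openstack loadbalancer create --name lb1 \
        --vip-network-id private --vip-address 172.16.5.89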
johnsom | We need to add more driver error types to make these errors a bit more user friendly again. | 23:39 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix the amphora noop driver https://review.openstack.org/634344 | 23:55 |