*** yamamoto has joined #openstack-lbaas | 00:54 | |
*** yamamoto has quit IRC | 01:49 | |
*** goldyfruit has joined #openstack-lbaas | 02:23 | |
*** ianychoi has quit IRC | 02:25 | |
*** ianychoi has joined #openstack-lbaas | 02:26 | |
lxkong | johnsom: for the fedora amphora image, here are the logs you asked for | 02:39 |
lxkong | nova console log: http://paste.openstack.org/show/753065/ | 02:39 |
lxkong | cloud-init.log: http://paste.openstack.org/show/753066/ | 02:39 |
lxkong | `/var/log/message`: http://paste.openstack.org/show/753067/ | 02:39 |
*** goldyfruit has quit IRC | 02:42 | |
lxkong | johnsom: (updated)the `/var/log/message`: http://dpaste.com/1Z0BZ9V | 02:54 |
*** yamamoto has joined #openstack-lbaas | 03:07 | |
*** psachin has joined #openstack-lbaas | 03:37 | |
*** yamamoto has quit IRC | 04:51 | |
*** yamamoto has joined #openstack-lbaas | 04:53 | |
*** gcheresh has joined #openstack-lbaas | 05:03 | |
*** vishalmanchanda has joined #openstack-lbaas | 05:18 | |
*** ivve has quit IRC | 05:30 | |
*** gcheresh has quit IRC | 05:32 | |
*** ramishra has joined #openstack-lbaas | 05:38 | |
*** AlexStaf has quit IRC | 05:47 | |
*** gcheresh has joined #openstack-lbaas | 05:49 | |
*** luksky has joined #openstack-lbaas | 05:55 | |
*** rcernin has quit IRC | 06:02 | |
*** gcheresh has quit IRC | 06:09 | |
*** gcheresh has joined #openstack-lbaas | 06:25 | |
*** ccamposr has joined #openstack-lbaas | 06:30 | |
*** yboaron has joined #openstack-lbaas | 06:34 | |
*** gcheresh has quit IRC | 06:36 | |
*** ivve has joined #openstack-lbaas | 06:46 | |
*** rpittau|afk is now known as rpittau | 06:56 | |
*** rcernin has joined #openstack-lbaas | 07:00 | |
*** trident has quit IRC | 07:06 | |
*** trident has joined #openstack-lbaas | 07:08 | |
*** yamamoto has quit IRC | 07:16 | |
*** tesseract has joined #openstack-lbaas | 07:24 | |
*** AlexStaf has joined #openstack-lbaas | 07:27 | |
*** yamamoto has joined #openstack-lbaas | 07:48 | |
*** yboaron has quit IRC | 07:54 | |
*** yamamoto has quit IRC | 07:57 | |
*** yamamoto has joined #openstack-lbaas | 08:08 | |
*** ricolin has joined #openstack-lbaas | 08:12 | |
*** yboaron has joined #openstack-lbaas | 08:44 | |
*** yboaron_ has joined #openstack-lbaas | 08:50 | |
*** pcaruana has joined #openstack-lbaas | 08:52 | |
*** yboaron has quit IRC | 08:52 | |
*** luksky has quit IRC | 08:54 | |
*** numans has joined #openstack-lbaas | 09:00 | |
*** ricolin has quit IRC | 09:14 | |
*** luksky has joined #openstack-lbaas | 09:27 | |
*** yamamoto has quit IRC | 09:34 | |
*** lemko has joined #openstack-lbaas | 09:35 | |
*** rcernin has quit IRC | 09:36 | |
lemko | Hi. I'm getting the error "Amphora ae091665-aa7d-44c0-bc33-f3107e879e41 health message was processed too slowly: 18.0750341415s! The system may be overloaded or otherwise malfunctioning. This heartbeat has been ignored and no update was made to the amphora health entry. THIS IS NOT GOOD.". What can I do to make this appear less often? Sometimes it tends to kill my amphorae. What could I do to avoid this? | 10:04 |
*** yamamoto has joined #openstack-lbaas | 10:05 | |
*** ccamposr__ has joined #openstack-lbaas | 10:08 | |
*** ccamposr has quit IRC | 10:12 | |
*** yamamoto has quit IRC | 10:15 | |
*** yamamoto has joined #openstack-lbaas | 10:34 | |
*** yamamoto has quit IRC | 10:36 | |
*** yamamoto has joined #openstack-lbaas | 10:39 | |
*** yamamoto has quit IRC | 10:43 | |
openstackgerrit | Merged openstack/octavia master: Fix TCP listener logging bug https://review.opendev.org/665470 | 10:48 |
*** yamamoto has joined #openstack-lbaas | 11:08 | |
*** yamamoto has quit IRC | 11:12 | |
sapd1 | lemko, What version are you running? I hit this issue before on an older version. | 11:45 |
lemko | Some version from like 5 months ago. | 11:46 |
lemko | How did you fix it, sapd1? | 11:46 |
lemko | did you increase the health check interval? | 11:46 |
sapd1 | there was an optimization patch which fixed it. | 11:46 |
sapd1 | The root cause is that Octavia ran a query-all against the database. | 11:47 |
*** yamamoto has joined #openstack-lbaas | 11:49 | |
lemko | So the problem is in the database query? | 11:51 |
sapd1 | This patch: https://review.opendev.org/#/c/603242/ | 11:51 |
*** boden has joined #openstack-lbaas | 11:55 | |
*** yamamoto has quit IRC | 11:56 | |
*** yamamoto has joined #openstack-lbaas | 11:57 | |
*** yamamoto has quit IRC | 12:02 | |
lemko | thanks a lot :) | 12:04 |
*** yboaron_ has quit IRC | 12:13 | |
*** yamamoto has joined #openstack-lbaas | 12:14 | |
*** yamamoto has quit IRC | 12:14 | |
*** yamamoto has joined #openstack-lbaas | 12:14 | |
*** rtjure has quit IRC | 12:16 | |
*** yamamoto has quit IRC | 12:19 | |
openstackgerrit | Nir Magnezi proposed openstack/octavia master: Fix a python3 issue in the amphora-agent https://review.opendev.org/665498 | 12:20 |
openstackgerrit | Nir Magnezi proposed openstack/octavia master: Fix a python3 issue in the amphora-agent https://review.opendev.org/665498 | 12:22 |
lemko | It seems I already have this patch, sapd1 :( Maybe increasing the heartbeat interval will help | 12:27 |
*** ricolin has joined #openstack-lbaas | 12:45 | |
*** psachin has quit IRC | 12:46 | |
johnsom | lemko With that patch the process should take a few thousandths of a second to complete, but yours are taking over 10 seconds. Something is seriously overloaded in your environment, and changing the heartbeat interval will likely only buy you some time. Check that the database is not having trouble or overloaded. That is the most likely cause. | 12:47 |
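
For context on the warning lemko is seeing, here is a minimal sketch of the guard that produces it. This is a simplification, not Octavia's actual health-manager code: the handler times its own database update, and if the wall-clock time exceeds the heartbeat interval the data is considered stale and is discarded rather than recorded.

```python
import time
import logging

LOG = logging.getLogger(__name__)

# Illustrative only: in the real code the threshold comes from the
# octavia.conf [health_manager] settings; 10s is the default
# heartbeat interval.
HEARTBEAT_INTERVAL = 10.0

def process_heartbeat(amphora_id, session, update_health_entry):
    start = time.time()
    update_health_entry(session, amphora_id)  # the DB write that patch 603242 optimized
    elapsed = time.time() - start
    if elapsed > HEARTBEAT_INTERVAL:
        # Too slow: the data is stale by now, so roll back instead of
        # committing -- hence "no update was made" in the log message.
        session.rollback()
        LOG.warning('Amphora %s health message was processed too slowly: '
                    '%ss! The system may be overloaded or otherwise '
                    'malfunctioning.', amphora_id, elapsed)
    else:
        session.commit()
```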
*** vishalmanchanda has quit IRC | 12:57 | |
*** gcheresh has joined #openstack-lbaas | 12:58 | |
*** gcheresh has quit IRC | 13:09 | |
*** yamamoto has joined #openstack-lbaas | 13:13 | |
*** ricolin has quit IRC | 13:17 | |
*** yamamoto has quit IRC | 13:19 | |
*** yboaron_ has joined #openstack-lbaas | 13:22 | |
*** goldyfruit has joined #openstack-lbaas | 13:24 | |
*** pcaruana|afk| has joined #openstack-lbaas | 13:26 | |
*** pcaruana has quit IRC | 13:26 | |
*** gcheresh has joined #openstack-lbaas | 13:30 | |
*** ricolin has joined #openstack-lbaas | 13:40 | |
lemko | any tips to check how overloaded the database is, johnsom? | 13:40 |
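
No concrete commands follow in the log, so here is a minimal sketch of the kind of database check being suggested, assuming a MySQL/MariaDB backend and the pymysql client; the host and credentials are placeholders. The SHOW GLOBAL STATUS counters are standard MySQL and give a quick read on saturation.

```python
import pymysql

# Placeholders: point this at the database Octavia uses.
conn = pymysql.connect(host='db.example.com', user='octavia',
                       password='secret', database='octavia')
with conn.cursor() as cur:
    for var in ('Threads_running', 'Threads_connected',
                'Slow_queries', 'Innodb_row_lock_time_avg'):
        cur.execute("SHOW GLOBAL STATUS LIKE %s", (var,))
        row = cur.fetchone()
        if row:
            # High Threads_running or climbing Slow_queries while the
            # health manager logs "processed too slowly" points at the DB.
            print(f'{row[0]}: {row[1]}')
conn.close()
```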
*** pcaruana has joined #openstack-lbaas | 13:48 | |
*** pcaruana|afk| has quit IRC | 13:51 | |
kklimonda | so I have an interesting case. Over the last 4-5 days our network has been flapping like crazy, and now I have two loadbalancers in a weird state: one has provisioning_status 'PENDING_UPDATE' and two amphorae: one ERROR/BACKUP and another ACTIVE/STANDBY. | 13:58 |
kklimonda | the other loadbalancer is ACTIVE with one amphora ALLOCATED/MASTER but no BACKUP even though topology is ACTIVE_STANDBY | 13:58 |
kklimonda | erm, for the first one the states are ERROR/BACKUP and ACTIVE/STANDALONE, not STANDBY | 13:59 |
kklimonda | not sure how to get this into a working state | 13:59 |
sapd1 | lemko, Does it take a long time to list all load balancers in your deployment? | 14:01 |
sapd1 | kklimonda, In my experience, you can update the provisioning status for the errored LB in your DB to ACTIVE, then perform a load balancer failover. | 14:03 |
kklimonda | @sapd1 yeah, that was my plan - both were in the ERROR/PENDING_UPDATE state initially, and I've sacrificed one of the LBs, changing its state to ACTIVE, but it's not spawning the BACKUP amphora | 14:04 |
kklimonda | I'll try failing over amphora now, instead of the LB itself | 14:05 |
sapd1 | So the amphora was deleted. You can change its status from DELETED to ACTIVE, then fail over that errored amphora. | 14:06 |
kklimonda | hmm, I think I see what you mean | 14:07 |
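
A hedged sketch of sapd1's recipe, assuming direct access to the Octavia database: the table and column names (load_balancer.provisioning_status, amphora.status) follow the Octavia schema, and the UUIDs are placeholders. Hand-editing state like this is a last resort, so take a backup first.

```python
import pymysql

LB_ID = 'REPLACE-WITH-LB-UUID'
AMP_ID = 'REPLACE-WITH-AMPHORA-UUID'

conn = pymysql.connect(host='db.example.com', user='octavia',
                       password='secret', database='octavia')
with conn.cursor() as cur:
    # Release the stuck load balancer so the API accepts a failover.
    cur.execute("UPDATE load_balancer SET provisioning_status = 'ACTIVE' "
                "WHERE id = %s", (LB_ID,))
    # Resurrect the amphora record sapd1 mentions so the failover flow
    # can act on it.
    cur.execute("UPDATE amphora SET status = 'ACTIVE' "
                "WHERE id = %s AND status = 'DELETED'", (AMP_ID,))
conn.commit()
conn.close()
```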
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix diskimage-create.sh datasources https://review.opendev.org/665680 | 14:07 |
openstackgerrit | Nir Magnezi proposed openstack/octavia stable/rocky: Fix auto setup Barbican's ACL in the legacy driver. https://review.opendev.org/665681 | 14:08 |
openstackgerrit | Gregory Thiemonge proposed openstack/octavia stable/rocky: Update amphora-agent to report UDP listener health https://review.opendev.org/665683 | 14:14 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix diskimage-create.sh datasources https://review.opendev.org/665680 | 14:14 |
*** yboaron_ has quit IRC | 14:16 | |
kklimonda | @sapd1 thanks, seems that's helped - now I just have to figure out the 2 minute delay between the amphora-agent starting and it starting to process requests from the worker/health check | 14:21 |
johnsom | kklimonda There is a built in delay after a controller restart where the health checks will not run. This is to allow the amphora instances to all report in fresh heartbeats before we start checking for failed amphora. | 14:25 |
johnsom | Basically it's in receive-only mode for a short window, then it will start looking for failed instances. | 14:25 |
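
A simplified sketch of the startup behavior johnsom describes, not the actual health-manager implementation: failure checks are suppressed until a full heartbeat window has elapsed since the controller started, so freshly rebooted controllers don't fail over amphorae that simply haven't reported in yet. The 60s window and 3s poll are illustrative values.

```python
import time

STARTUP_GRACE = 60.0  # illustrative; the real value derives from octavia.conf
_started_at = time.monotonic()

def should_check_for_failures() -> bool:
    # Receive-only until every healthy amphora has had a chance to
    # report at least one fresh heartbeat.
    return (time.monotonic() - _started_at) > STARTUP_GRACE

def health_check_loop(find_stale_amphorae, trigger_failover):
    while True:
        if should_check_for_failures():
            for amp_id in find_stale_amphorae():
                trigger_failover(amp_id)
        time.sleep(3)  # illustrative polling interval
```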
kklimonda | @johnsom mhm, but in my deployment it takes a long time for the worker to connect on port 9443 and fetch `/0.5/info` - I get connection refused, but the agent is running | 14:26 |
johnsom | Oh, you are talking about a different issue. | 14:26 |
johnsom | kklimonda CentOS or Ubuntu image? | 14:26 |
kklimonda | ubuntu | 14:26 |
johnsom | Hmm, ok. We are tracking down a CentOS issue along these lines, but Ubuntu has been fine. | 14:27 |
*** fnaval has joined #openstack-lbaas | 14:27 | |
johnsom | I would start by watching the instance console to see how long it is taking for the instance to come up. | 14:27 |
kklimonda | it's much quicker - I can login to the amphora way before it starts listening on port 9443 | 14:28 |
kklimonda | it takes ~30 seconds for the amphora instance to come up | 14:31 |
kklimonda | not sure how to debug it further | 14:31 |
johnsom | Yeah, my whole load balancer build finishes in about 23 seconds | 14:31 |
kklimonda | and then agent is stuck for ~2 minutes - not sure how to debug that | 14:32 |
kklimonda | journalctl is not helpful, but there seems to be a 2 minute delay of some sort: ```Jun 17 14:30:59 amphora-7e0666b0-a07f-4510-a428-141ccf19de26 systemd[1]: Started OpenStack Octavia Amphora Agent. | 14:33 |
kklimonda | Jun 17 14:32:43 amphora-7e0666b0-a07f-4510-a428-141ccf19de26 amphora-agent[843]: 2019-06-17 14:32:43.165 843 INFO octavia.common.config [-] Logging enabled!``` | 14:33 |
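
To put numbers on the gap kklimonda is chasing, a small probe along these lines can record when the agent actually starts answering. The port and /0.5/info path are from the conversation; the address, cert paths, and the use of the requests library are assumptions.

```python
import time
import requests
import urllib3

urllib3.disable_warnings()  # the agent presents a cert we won't verify here

AMP_IP = '192.0.2.10'  # placeholder lb-mgmt-net address

start = time.monotonic()
while True:
    try:
        r = requests.get(f'https://{AMP_IP}:9443/0.5/info',
                         cert=('client.pem', 'client.key'),
                         verify=False, timeout=2)
        print(f'agent answered {r.status_code} after '
              f'{time.monotonic() - start:.1f}s')
        break
    except requests.exceptions.RequestException:
        time.sleep(2)  # connection refused until the agent binds :9443
```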
johnsom | Well, like I said, I would check the console log. The amphora-agent startup should be included there. The amphora-agent logs to both the amphora-agent log and syslog in older versions of the image (like before master last week), so those would also be good places to look at timing | 14:33 |
kklimonda | ah, lets see | 14:33 |
kklimonda | I thought console would basically display the same thing journalctl does | 14:33 |
johnsom | No, before last week gunicorn was writing straight to log files. | 14:34 |
johnsom | I was just working on cleaning up the logging inside the amphora | 14:34 |
kklimonda | the console has nothing but the login prompt | 14:35 |
*** AlexStaf has quit IRC | 14:45 | |
*** pcaruana has quit IRC | 14:47 | |
openstackgerrit | Gregory Thiemonge proposed openstack/neutron-lbaas stable/rocky: Improve performance on get and create/update/delete requests https://review.opendev.org/665696 | 14:56 |
openstackgerrit | Nir Magnezi proposed openstack/octavia stable/stein: Fix a python3 issue in the amphora-agent https://review.opendev.org/665698 | 14:57 |
openstackgerrit | Gregory Thiemonge proposed openstack/neutron-lbaas stable/queens: Improve performance on get and create/update/delete requests https://review.opendev.org/665700 | 14:59 |
*** ivve has quit IRC | 15:05 | |
*** luksky has quit IRC | 15:20 | |
cgoncalves | johnsom, backport candidate? https://review.opendev.org/#/c/559460/ | 15:21 |
*** bonguardo has joined #openstack-lbaas | 15:23 | |
johnsom | cgoncalves Technically that was an API change so I think that is why we did not backport it. | 15:24 |
cgoncalves | I was/am on the fence with this one. it's an API change, but at the same time compatibility is kept and it is a fix patch | 15:26 |
kklimonda | well, that's unexpected: https://pastebin.com/5ZMHatj9 | 15:31 |
kklimonda | how can I have two standalone amphoras for a single ACTIVE_STANDBY LB? | 15:31 |
johnsom | cgoncalves Yeah, the extent of the API change is adding the links... Not a huge change. | 15:32 |
kklimonda | bonus points, neither one actually has VIP set in the amphora-haproxy netns | 15:33 |
johnsom | kklimonda During a failover the amps are in standalone configuration, then transitioned to their final roles. | 15:33 |
openstackgerrit | Nir Magnezi proposed openstack/octavia stable/stein: Fix a python3 issue in the amphora-agent https://review.opendev.org/665698 | 15:33 |
kklimonda | yeah, but that's not happening | 15:33 |
johnsom | Did someone kill -9 one of the controllers? | 15:33 |
kklimonda | not while they were spawning | 15:33 |
johnsom | Or is the load balancer marked as provisioning_status ERROR? | 15:33 |
kklimonda | no, although it was marked as ERROR initially | 15:34 |
johnsom | That would not be spawning, that would be failover. | 15:34 |
kklimonda | I'm trying to get it to work now | 15:34 |
johnsom | Ok, yeah, if it was in ERROR that means we ran out of retry attempts due to some cloud failure. It's likely the failover could not progress so we stopped and marked it in ERROR. | 15:34 |
kklimonda | ok, how do I proceed in that case? | 15:35 |
johnsom | Manually trigger a failover once the cloud is back to functional | 15:35 |
kklimonda | failover the LB or amphorae? | 15:35 |
johnsom | Use the load balancer failover, not the amphora failover | 15:35 |
johnsom | lol, yeah, was just clarifying | 15:36 |
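
For reference, a hedged illustration of the two failover operations being distinguished here, driven from Python via the OpenStack CLI; command availability depends on the python-octaviaclient version in use, and the UUIDs are placeholders.

```python
import subprocess

LB_ID = 'REPLACE-WITH-LB-UUID'

# Fail over the whole load balancer (what johnsom recommends here):
# this rebuilds the amphorae and restores the topology.
subprocess.run(['openstack', 'loadbalancer', 'failover', LB_ID], check=True)

# A per-amphora failover also exists in newer clients, targeting just
# one amphora instead of the whole LB:
# subprocess.run(['openstack', 'loadbalancer', 'amphora', 'failover',
#                 'REPLACE-WITH-AMPHORA-UUID'], check=True)
```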
kklimonda | mhm, I've done that and that's how I ended up with two STANDALONE amphorae | 15:36 |
johnsom | So you did a manual failover, but you ended up back in ERROR? | 15:36 |
kklimonda | no, the LB itself is ACTIVE now, but both amphorae are STANDALONE | 15:37 |
*** tesseract has quit IRC | 15:38 | |
johnsom | Ummm, now that is not good. Let me look in the code for a minute, but if you don't mind sharing, it would be helpful to have the controller logs for that failover flow. | 15:38 |
kklimonda | from the initial one, or can I failover now? | 15:39 |
johnsom | The failover now | 15:39 |
*** pcaruana has joined #openstack-lbaas | 15:40 | |
johnsom | I wonder if you have a warning log "Could not fetch Amphora %s from DB, ignoring " | 15:42 |
johnsom | Or: skipping the failover | 15:43 |
*** gcheresh has quit IRC | 15:43 | |
johnsom | There are a few known bugs in the failover flow that are planned to be fixed soon. | 15:44 |
kklimonda | huh, there is no flow: https://pastebin.com/VSrnL6ts | 15:46 |
kklimonda | after that the second amphora is being failed over | 15:46 |
*** ianychoi_ has joined #openstack-lbaas | 15:48 | |
johnsom | Ah, the rest is probably in debug-level logging.... | 15:49 |
*** Vorrtex has joined #openstack-lbaas | 15:49 | |
kklimonda | I can bump to debug and retry | 15:50 |
johnsom | Ah, ok, I see the bug | 15:50 |
*** ianychoi has quit IRC | 15:52 | |
johnsom | Hmm, no, it still should have come in. I think what happened is an automated failover occurred, but did not complete. This ties back to the known issue with the failover flow. It assumes too much based on what is in the DB about the previous amphora. | 15:52 |
kklimonda | ah, the flow is only logged with debug | 15:53 |
kklimonda | now I have more :) | 15:53 |
kklimonda | it's possible that I'm partially to blame, initially the LB was stuck in PENDING_UPDATE | 15:54 |
johnsom | From a controller kill -9? | 15:54 |
kklimonda | well, it wasn't kill -9 | 15:55 |
johnsom | Or did the retry timer not yet expire? | 15:55 |
kklimonda | but I think the L2 flapping resulted in the same as kill -9 | 15:55 |
kklimonda | we've had the network flapping for like 12 hours, and that affected rabbit, mysql, and probably any other connectivity | 15:55 |
johnsom | If we could not reach the DB to update the status, that could happen | 15:55 |
kklimonda | so I ended up changing its status to ACTIVE (or ERROR) directly in the DB | 15:56 |
*** trident has quit IRC | 15:56 | |
kklimonda | because everything was failing with "LB immutable" | 15:56 |
johnsom | Ok, yeah. So, path forward, I would go into the DB, set one to role=MASTER and one to role=BACKUP, then failover again. It should fix it | 15:57 |
kklimonda | even with it being stuck in PENDING_UPDATE? failover command wasn't working | 15:57 |
kklimonda | http://paste.openstack.org/show/753104/ <- the new log from failover with debug enabled | 15:57 |
johnsom | Yeah, if the controller can't set the status to ERROR or ACTIVE in the DB, it will log the ERRORs and give up. | 15:57 |
*** trident has joined #openstack-lbaas | 15:58 | |
johnsom | Yeah, PENDING_* anything means that one of the controllers is ACTIVELY working on the objects and no other actions should occur until the controller releases ownership. We are also working on improving that this cycle with the jobboard work. However, a DB outage is kind of catastrophic, as we can't update any state then. After that retry timer expires, we don't really have much we can do.... | 15:59 |
kklimonda | mhm, I've seen some discussions regarding making it more robust | 16:01 |
kklimonda | could I set one STANDALONE to MASTER, another to BACKUP in the DB and initiate failover? | 16:03 |
johnsom | Yeah, that is what I recommended above. | 16:03 |
johnsom | <johnsom> Michael Johnson Ok, yeah. So, path forward, I would go into the DB, set one to role=MASTER and one to role=BACKUP, then failover again. It should fix it | 16:04 |
kklimonda | oh, I thought you meant initially | 16:04 |
johnsom | Nope, current state | 16:04 |
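
A minimal sketch of that recommendation, again assuming direct Octavia DB access: amphora.role and the MASTER/BACKUP values follow the Octavia schema, and the UUIDs are placeholders for the two STANDALONE amphorae.

```python
import pymysql

AMP_MASTER = 'REPLACE-WITH-FIRST-AMPHORA-UUID'
AMP_BACKUP = 'REPLACE-WITH-SECOND-AMPHORA-UUID'

conn = pymysql.connect(host='db.example.com', user='octavia',
                       password='secret', database='octavia')
with conn.cursor() as cur:
    # Give each STANDALONE amphora a proper ACTIVE_STANDBY role so the
    # failover flow rebuilds the pair correctly.
    cur.execute("UPDATE amphora SET role = 'MASTER' WHERE id = %s",
                (AMP_MASTER,))
    cur.execute("UPDATE amphora SET role = 'BACKUP' WHERE id = %s",
                (AMP_BACKUP,))
conn.commit()
conn.close()
# Then trigger the load balancer failover as above.
```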
kklimonda | also, bonus points for explaining why it takes 2 minutes for worker to connect to amphora on port 9443 ;) | 16:06 |
kklimonda | the instance is up after ~30 seconds | 16:06 |
johnsom | I saw your log earlier. I'm not sure why there is a two minute delay there between systemd saying it's starting it and it actually starting. Last time I checked I wasn't seeing that myself; I was still at 23 seconds to LB ACTIVE. However, I will say someone else reported seeing that recently too. | 16:08 |
johnsom | We may have to turn on some kind of systemd tracing | 16:08 |
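
As a first pass at that kind of tracing, something like the following (run inside the amphora) shows per-unit startup cost. systemd-analyze is standard on systemd-based images, though note it reports unit activation times rather than the in-process gap between "Started" and the first application log line seen above.

```python
import subprocess

# Survey where boot time went, then walk the dependency chain for the
# agent unit named in the journal lines above.
for cmd in (['systemd-analyze', 'blame'],
            ['systemd-analyze', 'critical-chain', 'amphora-agent.service']):
    print('$', ' '.join(cmd))
    subprocess.run(cmd, check=False)
```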
kklimonda | thanks, setting roles and failover made the LB work again | 16:13 |
johnsom | +1 | 16:14 |
*** lemko has quit IRC | 16:15 | |
*** mithilarun has joined #openstack-lbaas | 16:21 | |
*** yamamoto has joined #openstack-lbaas | 16:27 | |
*** yamamoto has quit IRC | 16:31 | |
*** rpittau is now known as rpittau|afk | 16:31 | |
*** ricolin has quit IRC | 16:35 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: DNM CentOS7 gate test https://review.opendev.org/665464 | 16:48 |
*** ricolin has joined #openstack-lbaas | 16:53 | |
*** ivve has joined #openstack-lbaas | 16:58 | |
*** ramishra has quit IRC | 17:08 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: DNM: CentOS test job - cpu-mode https://review.opendev.org/665726 | 17:10 |
openstackgerrit | Michael Johnson proposed openstack/octavia-tempest-plugin master: DNM: CentOS test - cpu-mode https://review.opendev.org/665727 | 17:15 |
*** eandersson has joined #openstack-lbaas | 17:19 | |
*** bonguardo has quit IRC | 17:22 | |
*** trident has quit IRC | 17:27 | |
*** trident has joined #openstack-lbaas | 17:29 | |
*** ccamposr__ has quit IRC | 17:30 | |
*** ricolin has quit IRC | 17:30 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Add a note about nova hardware architectures https://review.opendev.org/665732 | 17:37 |
*** AlexStaf has joined #openstack-lbaas | 18:11 | |
*** luksky has joined #openstack-lbaas | 18:12 | |
*** ccamposr has joined #openstack-lbaas | 18:34 | |
*** yamamoto has joined #openstack-lbaas | 18:36 | |
*** ccamposr__ has joined #openstack-lbaas | 18:39 | |
*** ccamposr has quit IRC | 18:42 | |
*** pcaruana has quit IRC | 18:44 | |
*** gcheresh has joined #openstack-lbaas | 19:07 | |
*** AlexStaf has quit IRC | 19:17 | |
*** pcaruana has joined #openstack-lbaas | 19:17 | |
*** yamamoto has quit IRC | 19:21 | |
*** yamamoto has joined #openstack-lbaas | 19:25 | |
*** pcaruana has quit IRC | 19:27 | |
*** yamamoto has quit IRC | 19:30 | |
*** mithilarun has quit IRC | 19:34 | |
*** trident has quit IRC | 19:37 | |
*** trident has joined #openstack-lbaas | 19:39 | |
*** yamamoto has joined #openstack-lbaas | 19:39 | |
*** yamamoto has quit IRC | 19:39 | |
*** yamamoto has joined #openstack-lbaas | 19:40 | |
*** yamamoto has quit IRC | 19:45 | |
*** mithilarun has joined #openstack-lbaas | 20:02 | |
*** gcheresh has quit IRC | 20:34 | |
*** mithilarun has quit IRC | 20:36 | |
*** mithilarun has joined #openstack-lbaas | 20:36 | |
*** mithilarun has quit IRC | 20:41 | |
*** mithilarun has joined #openstack-lbaas | 20:49 | |
*** xgerman_ has joined #openstack-lbaas | 20:54 | |
xgerman | Shanghai will be Mandarin and English… reminds me of Tokyo where half the presentations were Japanese :-) | 20:55 |
johnsom | Ha, yeah. Tokyo was fun | 20:56 |
johnsom | xgerman BTW, the log offloading patches are merging as we chat | 20:56 |
xgerman | yep, sweet :-) | 20:56 |
*** boden has quit IRC | 21:10 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: DNM CentOS7 gate test https://review.opendev.org/665464 | 21:16 |
*** Vorrtex has quit IRC | 21:17 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Add the Amphora image building guide to the docs https://review.opendev.org/665769 | 21:39 |
*** rcernin has joined #openstack-lbaas | 21:43 | |
*** ccamposr__ has quit IRC | 21:51 | |
rm_work | johnsom / kklimonda: yes, i was seeing at least 30s between when i was able to ssh in to an amp and when the agent even started... but i redid my devstack and moved to a different patch and now i can't reproduce | 21:52 |
rm_work | if i am able to repro again I will let you know | 21:52 |
*** lemko has joined #openstack-lbaas | 21:52 | |
rm_work | and for me it wasn't just centos, it was ubuntu also | 21:52 |
johnsom | Right, that user was using ubuntu as well. | 21:54 |
rm_work | yeah | 21:54 |
rm_work | wasn't sure if when you said "but it was on centos" you were referring to my testing | 21:54 |
rm_work | because i saw it on both i think | 21:55 |
johnsom | It looked like a gap between when systemd said it was starting it to when the process actually started | 21:55 |
johnsom | Carlos and I are working on the CentOS issue which is yet another boot performance issue | 21:55 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Add the Amphora image building guide to the docs https://review.opendev.org/665769 | 21:56 |
*** mithilarun has quit IRC | 21:57 | |
*** mithilarun has joined #openstack-lbaas | 22:00 | |
*** fnaval has quit IRC | 22:05 | |
*** luksky has quit IRC | 22:08 | |
*** ChanServ sets mode: +o johnsom | 22:21 | |
*** luksky has joined #openstack-lbaas | 22:21 | |
*** blake has joined #openstack-lbaas | 22:22 | |
*** goldyfruit has quit IRC | 22:27 | |
openstackgerrit | Merged openstack/octavia-tempest-plugin master: Save amphora logs in gate https://review.opendev.org/626406 | 22:34 |
*** luksky has quit IRC | 22:39 | |
*** blake has quit IRC | 22:40 | |
*** sapd1_x has joined #openstack-lbaas | 23:01 | |
johnsom | I keep forgetting how painful running tox on neutron is.... | 23:16 |
rm_work | yes | 23:36 |
rm_work | ugh been trying to get remote-debugging working on my devstack VM for like 4 hours now | 23:36 |
rm_work | it's just not behaving | 23:36 |
rm_work | very frustrating | 23:36 |
rm_work | upgraded to latest pycharm and still no luck. connects but doesn't break | 23:37 |
rm_work | and only the first time, after the first connection it never tries to connect again >_>< | 23:37 |
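
For comparison, a minimal sketch of the pydevd hookup PyCharm's remote debugging relies on, assuming the pydevd-pycharm package matching the IDE version is installed in the devstack VM; the host and port are placeholders for the machine running PyCharm's Python Debug Server.

```python
import pydevd_pycharm

# Connect back to the IDE's debug server. suspend=False lets the service
# keep running until a breakpoint set in PyCharm is hit.
pydevd_pycharm.settrace('198.51.100.5', port=5678,
                        stdoutToServer=True, stderrToServer=True,
                        suspend=False)
```

If it "connects but doesn't break", mismatched path mappings between the VM's source tree and the local project are a common culprit.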
*** goldyfruit has joined #openstack-lbaas | 23:43 | |
johnsom | I am working on cleaning out the neutron-lbaas stuff from neutron-lib/neutron. It's the part I signed up for at the PTG | 23:47 |
johnsom | neutron-lib: https://review.opendev.org/665828 | 23:49 |