Thursday, 2017-08-03

rm_workummm00:12
rm_workhttp://paste.openstack.org/show/617353/00:12
rm_workwut00:12
rm_workjohnsom: you know what that means at all?00:12
rm_workgreghaynes: or you ^^00:12
rm_workstarted happening to me :/00:13
johnsomThe root error probably happened above00:13
johnsomThat looks like revert/rollback to me00:14
rm_workah00:14
rm_worki don't think so00:15
rm_workjohnsom: http://paste.openstack.org/show/617354/00:16
rm_workthere's more00:16
rm_workit looks like normal cleanup00:16
rm_workah00:16
rm_workhttps://bugs.launchpad.net/diskimage-builder/+bug/170638600:16
openstackLaunchpad bug 1706386 in diskimage-builder "build-image role failing in master" [Critical,Fix released] - Assigned to John Trowbridge (trown)00:16
rm_workwhelp00:16
rm_workneed them to release it i guess00:17
rm_workor fix our build process to use dib-master00:17
johnsomPretty soon you will be the canary00:18
*** http_GK1wmSU has joined #openstack-lbaas00:19
*** http_GK1wmSU has left #openstack-lbaas00:21
rm_workam i not already? :/00:25
JudeCHey johnsom you ever seen anything like this in devstack before:00:25
johnsomOh the things I have seen....  Grin00:25
JudeChttps://pastebin.com/gdMsXLE100:26
JudeCthis is the nova logs when trying to build an LB00:26
JudeCI keep getting provisioning_status = ERROR :(00:26
johnsomYeah, nova is puking on you.  Ubuntu?00:27
johnsomAh, yeah00:27
johnsomCan you "dpkg --list | grep qemu" and paste the versions?00:27
rm_workdidn't ubuntu fix their shit?00:27
rm_workmine seems to work currently...00:27
johnsomNo, we agreed to disagree00:27
rm_workoh right but we fixed the nova config to fix that00:28
rm_workso this is something else00:28
JudeCqemu                               1:2.8+dfsg-3ubuntu2.3~cloud0               amd64        fast processor emulator00:28
johnsomYeah, ok, so if you are on xenial, you have the same issue I had....00:28
JudeCdo you want all of them?00:28
johnsomNo, that is fine00:28
rm_workbut i'm on xenial too and mine seems to be fine >_> i think00:29
johnsomhttps://bugs.launchpad.net/ubuntu/+source/qemu/+bug/170342300:29
openstackLaunchpad bug 1703423 in qemu (Ubuntu) "zesty qemu packages (2.8) released for xenial-updates" [Undecided,Incomplete]00:29
rm_workrestacking now00:29
JudeCSo you know how to fix it :P00:29
johnsomAdd to nova.conf under [libvirt]: live_migration_uri = qemu+ssh://stack@%s/system | cpu_mode = none | virt_type = kvm | hw_machine_type = x86_64=pc-i440fx-xenial https://usercontent.irccloud-cdn.com/file/17Lw8wBp/image.png00:29
johnsomThat last line, hw_machine_type00:29
johnsomnow, note, nova also messed up their config files, so you may have two config files in /etc/nova/ that need to be edited.00:30
rm_workyeah, we've done that00:30
rm_workah though00:30
rm_workhis say qemu not kvm00:30
johnsomThen restart devstack@n-* and restart your instances or build a VM00:30
rm_workOH00:30
rm_workbecause you didn't check the box i bet00:30
rm_workJudeC: parallels?00:30
JudeCyes00:31
rm_workthere's a box you have to check in config00:31
rm_workto enable nested virt00:31
rm_workso it will detect and use kvm instead of qemu00:31
rm_workthat's why mine works00:31
JudeCI hate parallels....00:31
johnsomWell, the new "split" config could also be why it didn't pick it up00:31
rm_workunder CPU & Memory00:31
rm_workadvanced00:31
rm_workjohnsom: he has that too00:31
rm_workjohnsom: he's using devstack_deploy00:31
johnsomOk00:32
rm_workso yeah stop your VM, enable nested00:32
rm_workand then start and redo your stack00:32
rm_workthe setting should save across reverts00:32
johnsomJudeC ^^^ That will make a huge difference in performance too00:32
rm_workyeah did you... just redo it or something00:33
rm_workor has your devstack ALWAYS been running like that O_o00:33
JudeCI reinstalled parallels at some point00:34
JudeCa little while ago and may have wiped that out00:34
johnsomYeah, LB creation goes from taking like 8-10 minutes to start down to ~30 seconds00:35
rm_worki think his LB creation has been broken since he redid his VM <_<00:36
rm_workhe's been on no-op00:36
JudeClikely yes I prob never noticed because I am in noop00:36
johnsomPoor soul00:36
JudeCwelp restacking now, thanks for the help lol00:39
rm_workyeah somehow it didn't click that all of that said qemu00:39
rm_workbecause i was on qemu for so long myself00:39
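[The kvm-vs-qemu question above comes down to whether the guest CPU exposes the hardware-virtualization flags; that is what lets nova use virt_type = kvm instead of falling back to plain qemu emulation. A minimal sketch of that check — the helper name is made up for illustration, not a nova or Octavia API:]

```python
# Sketch: detect whether hardware virtualization (Intel VT-x / AMD-V)
# is exposed to the guest by looking for the vmx/svm CPU flags in
# /proc/cpuinfo-style text. Illustrative helper, not a real nova API.

def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if any CPU 'flags' line lists vmx (Intel) or svm (AMD)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# On a real guest you would feed it the live file:
#   has_hw_virt(open("/proc/cpuinfo").read())
# If this returns False inside the devstack VM, nested virt is not
# enabled in the hypervisor (the Parallels checkbox discussed above).
```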
*** bzhao has joined #openstack-lbaas01:26
*** gongysh has joined #openstack-lbaas01:30
*** armax has joined #openstack-lbaas01:34
*** yamamoto has quit IRC01:45
*** yamamoto has joined #openstack-lbaas01:45
*** sanfern has quit IRC02:02
*** sanfern has joined #openstack-lbaas02:05
*** sanfern has quit IRC02:12
*** ducnc has joined #openstack-lbaas02:22
*** ducnc has quit IRC02:33
*** yamamoto has quit IRC02:41
*** yamamoto has joined #openstack-lbaas02:51
*** kbyrne has quit IRC02:54
*** tongl has quit IRC02:56
*** kbyrne has joined #openstack-lbaas02:58
*** yamamoto has quit IRC03:33
*** yamamoto has joined #openstack-lbaas03:42
*** JudeC has quit IRC03:43
*** sanfern has joined #openstack-lbaas03:44
rm_workjohnsom: i think housekeeping should also recycle spares pool VMs that are out of date (periodically check the image tag and see which image it is, and recycle the spares if they aren't the same image03:46
rm_work)03:46
johnsomSure03:47
rm_workpossibly we should track which image an amp was created with03:47
rm_workin the DB03:47
rm_worksince it won't change03:47
rm_workand it'd make that easier03:47
rm_workadding it to my TODO03:47
rm_workstale spares is my bane right now03:48
rm_workevery time I make an image i have to create a bunch of LBs and delete them03:48
rm_workto clear it out03:48
rm_worki guess i could try to fail them over...03:48
*** yamamoto has quit IRC03:48
johnsomMan, I thought we did store the image id in the amp table, but I guess not03:49
johnsomYeah, I hope to have the failover API finished soon03:49
*** yamamoto has joined #openstack-lbaas03:49
johnsomGerman started it, I offered to finish it, then got busy with other things.03:49
johnsomWe hoped to get it into pike, but sigh, time03:50
johnsomYeah, you can just nova delete them.  Just make sure you have the right IDs....03:51
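[The spares-recycling pass rm_work sketches above could look roughly like this; it assumes the image id each amphora was built from is stored in the DB (the TODO mentioned above), and all names here are hypothetical stand-ins, not the actual Octavia models:]

```python
# Sketch of the proposed housekeeping pass: compare the image each spare
# amphora was built from against the image currently carrying the tag,
# and flag mismatched spares for recycling. Names are illustrative only;
# this is not the real Octavia housekeeping code.
from dataclasses import dataclass

@dataclass
class SpareAmp:
    amp_id: str
    image_id: str  # would need to be recorded in the amp table, as discussed

def stale_spares(spares, current_image_id):
    """Return the spare amphorae whose image no longer matches the tagged one."""
    return [amp for amp in spares if amp.image_id != current_image_id]

# Housekeeping would then delete (or, once the failover API lands,
# fail over) each stale spare so the pool refills from the new image.
```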
xgerman_Well, we can merge as-is and risk the rare case somebody steals the amp. After all it's not worse than before03:52
johnsomFeature freeze was last week03:53
*** yamamoto has quit IRC03:54
*** links has joined #openstack-lbaas03:54
xgerman_Well, it had a patch for a while...03:54
johnsomYep03:54
johnsomJust not enough cycles for everything we are trying to do...03:55
openstackgerritZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id  https://review.openstack.org/45830803:55
xgerman_Feature freeze used to be no new patches but finish the open ones...03:56
rm_workmm03:56
rm_workactually i do kinda remember that >_>03:56
rm_workBTW can confirm https://review.openstack.org/#/c/489015/ works03:57
johnsomstop merging code adding new features, new dependencies, new configuration options, database schema changes, changes in strings… all things that make the work of packagers, documenters or testers more difficult.03:57
rm_workdeployed it to production here already :)03:57
johnsomThat is the current definition03:58
rm_workahhhhh03:58
rm_workumm03:58
rm_workok03:58
rm_workstringfreeze is later03:58
rm_workvery specifically tho03:58
johnsomSoft string freeze is with feature freeze.  Hard freeze is RC103:59
xgerman_Yeah, no merging after RC-1 has always been the case03:59
johnsomYeah, I have to say, I agree with drawing a line.  It sucks, but it's what we could get done in a reasonable time.03:59
rm_workdo we, uh.... have translators?03:59
johnsomThere are a number of patches I wish we could have made happen, it just didn't work out.04:00
rm_workoh FYI, I am taking the rest of this week off04:00
johnsomYes, we have translators04:00
rm_worki am tired of not having vacations04:00
johnsomrm_work Cool, enjoy!04:00
rm_workso i am going to ... sit around at home by myself and play video games04:00
xgerman_I don't agree with taking weeks away from Dev time04:00
rm_workprobably i will be on IRC04:00
rm_work>_>04:00
xgerman_Yeah, enjoy04:00
johnsomWell, it's still dev time.  Just focused on clearing out the pile of bugs that have accumulated....04:01
rm_workyeah it actually allows for a clear focus on ONLY bugs04:01
rm_workwhich is the idea04:01
johnsomLike the one rm_work just mentioned04:01
rm_workstabilize during the last weeks04:01
xgerman_That's what we did during RC04:02
johnsomRC1 should ship ideally04:02
rm_workyeah, only supposed to even cut an RC2 or RC3 if you HAVE to04:03
johnsomIf we get things stablized, we can release early and start on queens....04:03
xgerman_I don't have to agree with the new world order04:03
johnsomHa04:03
johnsomTrue04:03
xgerman_Also that's the first time it was mentioned but I don't pay much attention04:04
xgerman_Otherwise i would have filed the FFE04:04
johnsomNah, this has been the case for a few releases now.  I want to say mitaka, but maybe newton04:04
xgerman_Newton makes sense - my attention span has been real short beginning then04:05
johnsomI remember feeling really bad about our ocata release because the same thing happened.  We didn't get a few patches in that I really hoped we would.04:06
johnsomI think the best thing that could help is getting more reviews on patches.  The cores can't do all of the reviews....04:07
xgerman_We can make our own rules - there are projects which even backport features ;-)04:08
rm_workwe lose some tags if we do that04:08
xgerman_Yeah, more reviewers would be good!04:08
xgerman_Aren't those tags meaningless? Our project not diverse? My a**04:09
rm_workeh04:10
johnsomThe lowest core review is still 4x the next reviewer04:10
rm_workhate to say it but... at this point... if we lost rax or gd it'd be bad04:10
xgerman_We might gain Octavia Inc04:11
rm_worklol04:11
johnsomHa04:11
rm_workhey if we could find funding i'd do it :P04:11
johnsomYeah, would love to read the business plan04:11
xgerman_1 develop Octavia04:11
xgerman_2 ?04:11
xgerman_3 Profit04:12
johnsomHahaha, yeah, as entertaining as I expected04:12
rm_workfoolproof04:12
*** gongysh has quit IRC04:15
johnsomrm_work What are you planning to play?04:15
rm_worknot 100% sure04:15
rm_workI do a lot of PlayerUnknown's Battlegrounds04:16
johnsomHahaha, fair enough04:16
rm_workand I got a Vive recently04:16
rm_workso maybe some VR stuff04:16
rm_workRec Room is fun, Darknet is really cool too04:16
johnsomNice.  haven't tried any of the VR gear yet04:16
rm_workhttps://www.darknetgame.com/04:17
johnsomDownside of not being on an HP site any longer04:17
rm_workno cool toys to play with? :(04:17
rm_workah when is that Eclipse thing04:17
*** belharar has joined #openstack-lbaas04:17
rm_workI was tempted to go down to Oregon04:17
rm_workI have a friend who's driving down04:18
johnsomAug 21st04:18
johnsomIt's going to be a cluster here04:18
rm_workheh yeah04:18
rm_workare you in the "zone"?04:18
johnsomLike 150,000 people are supposed to show up here in our 50,000 person town04:18
rm_worklol04:18
johnsomI am in the zone04:18
rm_worknice :P at least you can hide in your house04:19
rm_workand just look outside04:19
johnsomThe university is having a big "event" and opening the dorms04:19
xgerman_Selling rooms on airbnb?04:19
johnsomNah, airbnb creeps me out, but many people here are04:19
johnsomWe are taking the day off and sheltering-in-place04:19
rm_workI use AirBnB all the time04:19
rm_workso :P04:19
johnsomMaybe walk to the nearby park04:20
xgerman_Get a month mortgage in a few nights04:20
johnsomSee, my point exactly04:20
johnsomIn a single night from what I have heard about the hotels04:20
xgerman_Chance of a lifetime04:21
johnsomrm_work if you are so motivated, you are of course welcome here04:21
rm_workheh not sure what my friend's plan is... i honestly wonder if he's planning to sleep in his car, or try to camp04:21
rm_workbut i assume all the legal campgrounds are crazy booked too04:21
rm_worki will probably end up being way too busy/lazy04:22
johnsomYeah, the permits for most of the area sold out in a few hours04:22
johnsomWe are too close to the highest point in the coast range (mary's peak)04:22
rm_workyeah we were gonna camp on the drive up from TX and then we realize you can't just... camp04:22
xgerman_My Oregon relatives are throwing a big party...04:23
johnsomAll for less than ten minutes of entertainment in the morning.  My guess is tailgating the rest of the day.  Sigh.  Living in a college town04:24
johnsomThus, why we are sheltering-in-place...  grin04:24
rm_workyeah what time is it?04:24
rm_workearly?04:25
rm_worki might not even want to be AWAKE that early04:25
johnsomLike 9 or 10 am04:25
rm_workugh yeah04:25
rm_workthat's a bit before my alarm04:25
johnsomHahaha04:25
xgerman_Will be dark so great for sleeping;-)04:25
johnsomAnd if you stare without glasses, it will be dark the rest of the day....04:26
johnsomThey have activated the national guard for it here.04:26
johnsomWhatever that means04:27
xgerman_There was some big eclipse in Germany when I was in school - so feel like having seen it ;-)04:27
johnsomYeah, I am pretty sure we had one when I was pre-school age.  They wouldn't let us go outside so we didn't blind ourselves04:28
rm_work<_<04:29
xgerman_Also selling glasses last minute is a lucrative business04:29
johnsomhahaha, darn youngins04:29
johnsomYeah, the astronomy group is selling them at the fair.  We volunteer each year, so I plan to pick some up there.04:30
xgerman_Just make sure they are not fake ;-)04:33
johnsomYeah, that is a big problem from what I hear.04:34
xgerman_Yep. Was the same with the German eclipse04:35
xgerman_We are contemplating coming up. My wife + kids would like to see the eclipse...04:36
xgerman_But as you said getting there is a cluster04:37
rm_workjohnsom: you know how to specify a DB time offset in SQLA?04:42
rm_workthere's func.now() which is just "CURRENT_TIME" in MySQL04:43
johnsomNo04:43
rm_workbut I want to be like04:43
rm_workfunc.now()+6004:43
johnsomSQLA is super limited here04:43
rm_workblegh04:43
rm_worki can probably QUERY the DB for the time04:43
rm_workand do it myself04:43
johnsomI fought with that when I wanted the DB to be the "single point of truth for time" and gave up.04:44
rm_workso is it already not?04:44
johnsomThey claim it's too DB specific, when I claim BS04:44
johnsomYeah, gave up.  The assumption is the controllers are all NTP clients04:44
rm_workwell i'm doing the initial heartbeat insert thing04:44
rm_workand i have everything but this04:44
johnsomI despise SQLA04:45
rm_workjust need to do now+offset as the initial last_update04:45
johnsomYeah, as it is now, just use the host UTC time and python time math04:45
rm_workif the time is off from the DB ... what happens is ... either they failover-loop instantly because always past the time04:46
rm_workor ... >_>04:46
rm_workoh, though already we really need DB-time to be in sync i guess04:46
rm_worksince the HMs do the retrieval based on their time?04:46
rm_workor ... OH no that's a DB-side query04:46
rm_workbleh04:46
johnsomIt came down to I would have had to ask SQLA what engine it is and then push native SQL through it.  Totally defeats the purpose (SQLA ???  Purpose???) of SQLA and different DB backends04:47
rm_workwelllll04:48
rm_workyou can supposedly do like04:48
rm_workselect([some_table, func.current_date()]).execute()04:48
rm_workand get the time back04:49
johnsomAt this point, I think NTP is probably good enough for us.  We can probably save the DB roundtrip04:50
johnsomI don't think we need to go to the PTP level here04:51
rm_workk04:51
rm_workjust hope your DB is the same timezone is all >_>04:52
johnsomYeah, the world should be UTC...  Am I right???04:52
johnsomSeems like it is kind of your current timezone....04:56
rm_workI THINK this works tho...04:57
rm_workdb_apis.get_session().execute(expression.select(functions.now()))04:57
johnsomJust make sure it works with our sqlite functional tests....04:57
rm_workaugh04:57
johnsomSince you are kind of bypassing SQLA04:57
rm_workyeah hopefully04:57
rm_workit's not04:57
rm_worki do not do any pure SQL04:58
rm_workit's all from SQLAlchemy04:58
johnsomDoesn't expression bypass?04:58
rm_workbut the only thing i send is `functions.now()`04:58
rm_workwhich should work for any DB04:59
rm_workright?04:59
johnsomSigh, young padawan...04:59
johnsomThere will always be a DB exception04:59
rm_workT_T04:59
johnsomon the path to load balancer enlightenment05:00
johnsomGrin05:00
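[The approach johnsom lands on above — skip the DB round trip, trust NTP, and do the "now + offset" in Python — is just stdlib time math. A minimal sketch, using the 60-second offset from the example; the function and constant names are illustrative, not the actual Octavia code:]

```python
# Sketch: compute the initial last_update for a heartbeat record as
# "host UTC now + offset" on the controller, instead of asking the DB
# for CURRENT_TIMESTAMP. Assumes all controllers are NTP-synced, per
# the discussion above; otherwise amps can failover-loop because they
# always look past the threshold. Names are illustrative only.
from datetime import datetime, timedelta, timezone

HEARTBEAT_OFFSET = timedelta(seconds=60)

def initial_last_update(offset: timedelta = HEARTBEAT_OFFSET) -> datetime:
    """UTC timestamp 'offset' in the future, so a freshly booted amphora
    is not immediately considered stale before its first heartbeat."""
    return datetime.now(timezone.utc) + offset
```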
johnsomToo long of a day, too many virtual networks on my diagrams.05:00
rm_workthis seems simple from the outside... (the whole initial heartbeat thing)05:00
rm_workbut instead i'm going to go watch Game of Thrones and start my vacation05:01
johnsomYeah, Trevor's initial caused the gates to failover05:01
rm_workyeah05:01
johnsomWise man, kick can to next week...  grin05:01
rm_worki just started over05:01
rm_workit was easier05:01
rm_workk yeah, i'll be around :P05:02
johnsomYeah, probably a good plan05:02
*** belharar has quit IRC05:02
rm_workgood week though, and good note to leave on05:03
rm_workjust closed out four stories in JIRA that have been open for a while :P05:03
rm_worktiming issue is resolved05:03
rm_workas are all these member status bugs :)05:03
rm_worktiming was REALLY because of k8s >_<05:03
rm_workthey override the resolv.conf on the box with really idiotic settings that break DNS badly, so DNS resolution to hit keystone for token auth was timing out all the time05:04
johnsomOuch, yeah, that will not be fun05:05
rm_works/on the box/on the container/05:05
johnsomAren't they heavy on the hosts file overloading?05:05
rm_workbut yeah, default dnsPolicy is "ClusterFirst" .... the one that makes it work right is "Default" ... which is not default.05:05
rm_workno, actually none05:05
rm_workmy fix was actually to add the keystone server to /etc/hosts05:06
johnsomI am also watching sous vide youtube videos.  Got the wife an immersion heater.05:06
rm_worknoice, i wanted one of those for a while05:06
rm_worknow i realize i do not have the patience05:07
johnsomAmazon black friday in July deal05:07
rm_workbut it's awesome supposedly :)05:07
rm_worklol i must have missed that05:07
johnsomI will let you know05:07
rm_worknight :)05:07
johnsomCatch you next week05:08
*** armax has quit IRC05:13
*** armax has joined #openstack-lbaas05:13
*** armax has quit IRC05:13
*** armax has joined #openstack-lbaas05:14
*** armax has quit IRC05:14
*** armax has joined #openstack-lbaas05:15
*** armax has quit IRC05:15
*** armax has joined #openstack-lbaas05:16
*** armax has quit IRC05:16
*** JudeC has joined #openstack-lbaas05:17
sanfernhi johnsom, how to identify which amphora is serving in ACTIVE-STANDBY mode ?05:27
openstackgerritSanthosh Fernandes proposed openstack/octavia master: [WIP] Adding exabgp-speaker element to amphora image  https://review.openstack.org/49016405:31
*** aojea has joined #openstack-lbaas05:33
*** aojea has quit IRC05:33
*** aojea has joined #openstack-lbaas05:33
*** robcresswell has quit IRC05:41
*** aojea_ has joined #openstack-lbaas05:46
*** aojea has quit IRC05:48
*** gcheresh_ has joined #openstack-lbaas05:54
*** Guest14 has joined #openstack-lbaas05:56
*** jick has joined #openstack-lbaas06:01
*** belharar has joined #openstack-lbaas06:01
*** aojea_ has quit IRC06:03
*** aojea has joined #openstack-lbaas06:03
openstackgerritZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id  https://review.openstack.org/45830806:14
*** rcernin has joined #openstack-lbaas06:19
*** JudeC has quit IRC06:32
*** pcaruana has joined #openstack-lbaas06:48
*** aojea_ has joined #openstack-lbaas06:57
*** kobis has joined #openstack-lbaas06:58
*** aojea has quit IRC06:59
*** aojea has joined #openstack-lbaas07:02
*** aojea_ has quit IRC07:04
*** aojea_ has joined #openstack-lbaas07:07
*** ducnc has joined #openstack-lbaas07:07
*** aojea has quit IRC07:10
*** tesseract has joined #openstack-lbaas07:17
*** gtrxcb has quit IRC07:24
*** ducnc has quit IRC07:28
*** isantosp has joined #openstack-lbaas07:32
*** robcresswell has joined #openstack-lbaas07:41
*** sanfern has quit IRC08:05
*** sanfern has joined #openstack-lbaas08:05
*** sanfern has quit IRC08:06
openstackgerritNir Magnezi proposed openstack/octavia master: Remove WebTest from test requirements  https://review.openstack.org/49038208:15
*** openstackgerrit has quit IRC08:18
*** yamamoto has joined #openstack-lbaas09:41
*** rochaporto has quit IRC09:49
*** strigazi_OFF has quit IRC09:49
*** rochaporto has joined #openstack-lbaas09:50
*** strigazi_OFF has joined #openstack-lbaas09:50
*** fnaval has quit IRC11:00
*** belharar has quit IRC11:11
*** atoth has joined #openstack-lbaas11:40
*** catintheroof has joined #openstack-lbaas12:36
*** dosaboy has quit IRC12:38
*** dosaboy has joined #openstack-lbaas12:38
*** sanfern has joined #openstack-lbaas13:04
*** yamamoto has quit IRC13:06
*** links has quit IRC13:06
*** ssmith has joined #openstack-lbaas13:38
*** cpusmith has joined #openstack-lbaas13:40
*** ssmith has quit IRC13:43
*** fnaval has joined #openstack-lbaas14:00
*** nakul_d has quit IRC14:03
*** nakul_d has joined #openstack-lbaas14:04
*** yamamoto has joined #openstack-lbaas14:06
*** gcheresh_ has quit IRC14:07
*** aojea_ has quit IRC14:07
*** yamamoto has quit IRC14:11
mnaserhas anyone been able to successfully plumb an octavia controller to a tenant network such as one over vxlan or gre14:17
mnaserminus the obvious run it in a VM on the cloud thing14:18
*** kobis has quit IRC14:18
*** fnaval_ has joined #openstack-lbaas14:21
*** fnaval has quit IRC14:24
xgerman_I have a management network on vxlan14:26
xgerman_and that turns in any other neutron network past linux bridge14:27
xgerman_^mnaser any specifics?14:27
mnaserxgerman_ but how do you get your physical machine to access the management network?14:28
xgerman_VxLan->bridge->conroller14:29
xgerman_bridge is one of those virtual br you can create with brctl14:29
mnaseroh hm i see14:29
mnaserso your switches do the vxlan work?14:30
mnaserin this case using neutron-openvswitch-agent + gre tunnels, i cant imagine being able to hook into a network in a physical machine14:30
xgerman_gre tunnels are tricky14:31
mnaserunless i create a vxlan management network and configure it manually on the controller node14:31
mnaserbut im not sure if i can mix those in the cloud14:31
xgerman_well, you should have things come in on a trunk port so you can take one of the vxlans for your management purposes14:32
mnaseri suppose that's the way to do it but it involves a fair bit of configs across switches and what not14:32
mnaserso i'm trying to see if there's a cleaner way but i could just configure a vlan and use that but yeah14:33
mnaserthinking about it maybe my issue is more of an octavia one anyways14:34
xgerman_I think somebody experimented with a VPN14:34
mnaseraccessing the load balancers from the same network as the mgmt network results in no traffic14:35
mnaserthe path of the traffic is <client ip> => <load balancer ip> incoming, where client ip is an IP in the mgmt subnet and load balancer is the vip14:35
xgerman_I think rm_work is using mgmt. network as VIP network… but he made patches14:35
mnaserhowever, the path of return traffic is <load balancer vip> => <client ip> .. however i think this is where things go wrong because the load balancer vip and the client ip arent on the same network14:36
mnaserso the traffic comes with srcip in the 10.x network (the one the lbaas is sitting on)14:36
mnaserrather than come back to the virtual router doing the floating ip and back to the user14:37
xgerman_we proxy the traffic so it should be  VIP -> LB IP -> node14:38
mnaserto make it easy.. my floating ip is 192.168.0.220, my lbaas vip is 10.0.0.5, my client ip is 192.168.0.133.  making a request hits the backend and comes back, but on the way back, tcpdump shows: 10.0.0.5.webcache > 192.168.0.133.6035914:38
mnaserwhat should happen is 10.0.0.5 sends traffic back to default gateway (neutron) which will do SNAT to send traffic back14:39
mnaserbut because there is a route for 192.168.0.0/24, it uses that instead14:39
xgerman_ah, that sounds like a bug14:39
mnaseri mean the load balancers have always kinda worked14:40
mnaserso i wonder if this is a one time thing that it got stuck14:40
mnaserlet me try to recreate the load balancer14:40
xgerman_yeah, the LB should proxy and hence send things from 10.x14:40
*** fnaval_ has quit IRC14:41
mnaseri figured that it sits in its own netns so it shouldn't use the routes of the host but yeah14:41
mnaserlet me do some checks14:41
johnsomIt will honor "host_routes" defined in neutron for the neutron networks.  So if there are "host_routes" configured for the network in neutron, the netns in the amp will pick those up14:42
*** armax has joined #openstack-lbaas14:47
*** yamamoto has joined #openstack-lbaas14:47
*** yamamoto has quit IRC14:47
*** fnaval has joined #openstack-lbaas14:51
mnaserjohnsom no host routes, i wonder if it was a one time oddity14:52
johnsomOk, just wanted to share in case....14:53
*** kobis has joined #openstack-lbaas14:57
*** kobis has quit IRC15:00
mnaserso packet flow is: 192.168.0.133.60720 > 10.0.0.7.webcache (initial request, DNAT by neutron floating ip), 10.0.0.4.33042 > 10.0.0.12.31841 (request towards backend), 10.0.0.12.31841 > 10.0.0.4.33042 (response from backend), 10.0.0.7.webcache > 192.168.0.133.60720 (response to client)15:01
mnaserwhere it is going wrong is the last one should be hitting the virtual router (aka 10.0.0.1) rather than the route installed by being on the same l2 network15:02
mnaserand i believe netns should have made sure this doesnt happen but i guess this is where i have to start troubleshooting15:02
*** Guest14 is now known as ajo15:04
*** kobis has joined #openstack-lbaas15:27
mnaseromg15:29
mnaserjohnsom got it15:29
mnasermtu of network is not respected in amphorae15:29
mnaser2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 100015:29
mnaserbut in reality the network has a lower mtu (which neutron even sets by dhcp)15:29
mnaser ip netns exec amphora-haproxy ip link set eth1 mtu 1458 fixed it15:29
mnaserso i guess we need to make sure that amphorae respect the correct mtu15:30
mnaseri guess it worked before because the mtu was not an issue due to the size of the page15:30
mnaserhmm https://github.com/openstack/octavia/commit/89f6b2ccefa67d6a5b2d1c518d2361580f83fcc115:31
mnaseri guess my image was out of date, darn15:32
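[The MTU values in this thread follow from encapsulation overhead on a 1500-byte underlay; a quick sketch of the arithmetic. The overhead figures below are the usual neutron defaults (VXLAN 50 bytes, GRE-with-key 42 bytes) and are worth double-checking for a given deployment:]

```python
# Sketch: why a 1500-byte physical network leaves tenant interfaces
# with a smaller MTU. Common neutron overhead figures:
#   VXLAN: outer IP (20) + UDP (8) + VXLAN header (8) + inner Ethernet (14) = 50
#   GRE (with key, as neutron uses): outer IP (20) + GRE (8) + inner Ethernet (14) = 42
ENCAP_OVERHEAD = {"vxlan": 50, "gre": 42}

def tenant_mtu(physical_mtu: int, encap: str) -> int:
    """MTU the guest (or amphora netns) interface should use for a tunnel type."""
    return physical_mtu - ENCAP_OVERHEAD[encap]

# e.g. the 1458 set on eth1 above matches GRE on a 1500 underlay,
# and neutron's usual VXLAN tenant MTU of 1450 falls out the same way.
```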
*** kobis has quit IRC15:38
*** yamamoto has joined #openstack-lbaas15:48
*** yamamoto has quit IRC15:53
*** kobis has joined #openstack-lbaas16:00
*** belharar has joined #openstack-lbaas16:10
*** rcernin has quit IRC16:12
*** kobis has quit IRC16:19
*** kobis has joined #openstack-lbaas16:29
*** belharar has quit IRC16:37
johnsomFYI, PTG etherpads: https://wiki.openstack.org/wiki/PTG/Queens/Etherpads16:41
*** oomichi has quit IRC16:41
*** oomichi has joined #openstack-lbaas16:44
*** cpusmith_ has joined #openstack-lbaas16:49
*** cpusmith has quit IRC16:49
*** cpusmith has joined #openstack-lbaas16:50
*** cpusmith_ has quit IRC16:54
*** JudeC has joined #openstack-lbaas16:54
*** pcaruana has quit IRC16:54
*** sshank has joined #openstack-lbaas16:55
*** rcernin has joined #openstack-lbaas17:02
*** armax_ has joined #openstack-lbaas17:03
*** armax has quit IRC17:03
*** armax_ is now known as armax17:03
*** ajo has quit IRC17:04
*** kobis has quit IRC17:05
*** sanfern has quit IRC17:09
*** sanfern has joined #openstack-lbaas17:09
*** atoth has quit IRC17:20
*** atoth has joined #openstack-lbaas17:22
*** sanfern has quit IRC17:24
*** sanfern has joined #openstack-lbaas17:26
*** tongl has joined #openstack-lbaas17:38
*** tesseract has quit IRC17:49
*** SumitNaiksatam has joined #openstack-lbaas17:54
*** cpusmith has quit IRC17:58
*** kobis has joined #openstack-lbaas18:17
*** sshank has quit IRC18:31
*** gcheresh_ has joined #openstack-lbaas19:09
*** openstackgerrit has joined #openstack-lbaas19:27
openstackgerritSwaminathan Vasudevan proposed openstack/octavia master: Adds support for SUSE distros  https://review.openstack.org/48888519:27
*** kobis has quit IRC19:39
*** harlowja has quit IRC19:57
*** sshank has joined #openstack-lbaas20:02
xgerman_@johnsom how do we envision that distributor stuff to work for the end user?20:33
xgerman_are we letting them say how many amphora should be in there or is that in the flavor?20:33
*** SumitNaiksatam has quit IRC20:56
johnsomxgerman_ That is a flavor setting IMO21:00
xgerman_yeah, my feel as well21:00
xgerman_so we need to get flavor before we can have ACTIVE-ACTIVE21:00
*** gcheresh_ has quit IRC21:04
*** rtjure has joined #openstack-lbaas21:05
*** harlowja has joined #openstack-lbaas21:15
*** aojea has joined #openstack-lbaas21:17
*** yamamoto has joined #openstack-lbaas21:21
*** aojea_ has joined #openstack-lbaas21:23
openstackgerritMerged openstack/neutron-lbaas master: Updated from global requirements  https://review.openstack.org/48886321:24
*** aojea has quit IRC21:24
*** yamamoto has quit IRC21:26
*** aojea has joined #openstack-lbaas21:28
*** aojea_ has quit IRC21:30
*** aojea_ has joined #openstack-lbaas21:32
*** aojea has quit IRC21:35
*** aojea has joined #openstack-lbaas21:37
*** aojea_ has quit IRC21:40
*** aojea_ has joined #openstack-lbaas21:42
*** aojea has quit IRC21:45
*** aojea has joined #openstack-lbaas21:47
johnsomxgerman_ Well, in the interim we can do like we do for act/stdby and just make it a blanket config setting21:49
xgerman_yeah, I am trying to make that distributor driver interface21:49
*** aojea_ has quit IRC21:50
*** aojea has quit IRC21:56
*** aojea has joined #openstack-lbaas21:58
*** yamamoto has joined #openstack-lbaas21:58
*** aojea_ has joined #openstack-lbaas22:03
*** aojea has quit IRC22:06
*** aojea has joined #openstack-lbaas22:08
*** aojea_ has quit IRC22:10
*** apuimedo has quit IRC22:11
*** aojea_ has joined #openstack-lbaas22:13
*** apuimedo has joined #openstack-lbaas22:14
*** fnaval has quit IRC22:16
*** aojea has quit IRC22:16
*** aojea has joined #openstack-lbaas22:18
*** aojea_ has quit IRC22:21
*** aojea has quit IRC22:26
openstackgerritGerman Eichberger proposed openstack/octavia master: ACTIVE-ACTIVE Topology: Initial Distributor Driver Mixin  https://review.openstack.org/31300622:33
xgerman_okey, that should be a good basis for next steps22:34
johnsomCool22:35
xgerman_so on the OSA front jenkins is obstructing but they are more lenient with deadlines…22:46
*** fnaval has joined #openstack-lbaas22:56
johnsomWhat is up with Jenkins?22:57
*** rcernin has quit IRC22:58
xgerman_timeouts, tempest disk images not loaded22:59
johnsomBummer.  For the glance issue, I wonder if it isn't waiting for glance to mark it ready.  That is async and can take a while23:22
xgerman_well, I will keep hitting recheck and hopefully get that stuff merged23:32

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!