Wednesday, 2018-06-20

*** blake has quit IRC00:10
*** blake has joined #openstack-lbaas00:12
<openstackgerrit> Michael Johnson proposed openstack/octavia master: Fix health manager to be spare amphora aware  https://review.openstack.org/576668  00:12
*** blake has quit IRC00:32
*** Swami has quit IRC00:33
*** longkb has joined #openstack-lbaas00:35
*** blake has joined #openstack-lbaas00:38
*** blake has quit IRC00:45
*** yamamoto has joined #openstack-lbaas00:48
*** yamamoto has quit IRC00:53
*** annp has joined #openstack-lbaas01:18
*** ramishra has joined #openstack-lbaas01:34
<openstackgerrit> huangshan proposed openstack/python-octaviaclient master: Support backup members  https://review.openstack.org/576530  01:34
*** yamamoto has joined #openstack-lbaas01:49
*** yamamoto has quit IRC01:56
*** ianychoi_ has quit IRC02:27
*** ianychoi has joined #openstack-lbaas02:30
*** yamamoto has joined #openstack-lbaas02:51
*** yamamoto has quit IRC02:57
*** links has joined #openstack-lbaas03:23
*** mnaser has quit IRC03:42
<openstackgerrit> huangshan proposed openstack/python-octaviaclient master: Support backup members  https://review.openstack.org/576530  03:42
*** dougwig has quit IRC03:43
*** zioproto has quit IRC03:44
*** fyx has quit IRC03:44
*** beisner has quit IRC03:44
*** mnaser has joined #openstack-lbaas03:45
*** dougwig has joined #openstack-lbaas03:45
*** dougwig is now known as Guest29132  03:46
*** zioproto has joined #openstack-lbaas03:46
*** fyx has joined #openstack-lbaas03:46
*** beisner has joined #openstack-lbaas03:46
*** wolsen has quit IRC03:47
*** amitry has quit IRC03:49
*** johnsom has quit IRC03:50
*** ctracey has quit IRC03:50
*** hogepodge has quit IRC03:50
*** Guest29132 has quit IRC03:50
*** bzhao__ has quit IRC03:51
*** beisner has quit IRC03:51
*** mnaser has quit IRC03:51
*** fyx has quit IRC03:51
*** zioproto has quit IRC03:51
*** lxkong has quit IRC03:51
*** coreycb has quit IRC03:51
*** mrhillsman has quit IRC03:51
*** xgerman_ has quit IRC03:52
*** yamamoto has joined #openstack-lbaas03:53
*** yamamoto has quit IRC03:59
*** Eran_Kuris has quit IRC04:11
*** openstack has joined #openstack-lbaas04:32
*** ChanServ sets mode: +o openstack04:32
*** yamamoto has joined #openstack-lbaas04:55
*** yamamoto has quit IRC04:58
*** yamamoto has joined #openstack-lbaas04:59
<openstackgerrit> huangshan proposed openstack/python-octaviaclient master: Support backup members  https://review.openstack.org/576530  05:13
*** lxkong has joined #openstack-lbaas05:24
*** ctracey has joined #openstack-lbaas05:27
*** coreycb has joined #openstack-lbaas05:30
*** bzhao__ has joined #openstack-lbaas05:35
*** johnsom has joined #openstack-lbaas05:36
*** zioproto has joined #openstack-lbaas05:40
*** mnaser has joined #openstack-lbaas05:42
*** fyx has joined #openstack-lbaas05:43
*** beisner has joined #openstack-lbaas05:45
*** kobis has joined #openstack-lbaas05:46
*** beisner has quit IRC05:51
*** mnaser has quit IRC05:51
*** fyx has quit IRC05:51
*** mnaser has joined #openstack-lbaas05:51
*** beisner has joined #openstack-lbaas05:51
*** zioproto has quit IRC05:52
*** beisner has quit IRC05:56
*** mnaser has quit IRC05:56
*** coreycb has quit IRC05:57
*** johnsom has quit IRC05:57
*** lxkong has quit IRC05:58
*** bzhao__ has quit IRC05:58
*** ctracey has quit IRC05:59
*** dims has quit IRC06:09
*** dims has joined #openstack-lbaas06:10
*** mnaser has joined #openstack-lbaas06:10
*** beisner has joined #openstack-lbaas06:10
*** zioproto has joined #openstack-lbaas06:11
*** dims has quit IRC06:16
*** fyx has joined #openstack-lbaas06:16
*** dims has joined #openstack-lbaas06:17
*** mrhillsman has joined #openstack-lbaas06:20
*** xgerman_ has joined #openstack-lbaas06:20
*** Guest29132 has joined #openstack-lbaas06:21
*** bzhao__ has joined #openstack-lbaas06:21
*** coreycb has joined #openstack-lbaas06:21
*** lxkong has joined #openstack-lbaas06:21
*** coreycb has quit IRC06:22
*** coreycb has joined #openstack-lbaas06:22
*** lxkong has quit IRC06:22
*** lxkong has joined #openstack-lbaas06:22
*** ctracey has joined #openstack-lbaas06:22
*** johnsom has joined #openstack-lbaas06:22
*** amitry_ has joined #openstack-lbaas06:55
*** wolsen has joined #openstack-lbaas06:55
*** hogepodge has joined #openstack-lbaas07:00
*** ispp has joined #openstack-lbaas07:03
*** tesseract has joined #openstack-lbaas07:04
*** ispp has quit IRC07:18
*** nmanos has joined #openstack-lbaas07:23
*** nmanos has quit IRC07:27
*** nmanos has joined #openstack-lbaas07:29
*** ktibi has joined #openstack-lbaas07:37
*** yboaron has joined #openstack-lbaas07:44
*** AlexeyAbashkin has joined #openstack-lbaas07:49
*** rcernin has quit IRC08:03
*** rpittau is now known as rpittau_08:05
*** rpittau_ is now known as elfosardo08:05
*** elfosardo is now known as rpittau08:05
*** ispp has joined #openstack-lbaas08:08
*** rpittau is now known as rpittau__08:09
*** rpittau__ is now known as rpittau08:09
*** rpittau has quit IRC08:10
*** rpittau has joined #openstack-lbaas08:10
*** peereb has joined #openstack-lbaas08:10
*** pcaruana has joined #openstack-lbaas08:11
*** peereb has left #openstack-lbaas08:11
*** peereb has joined #openstack-lbaas08:12
*** peereb has quit IRC08:14
*** peereb has joined #openstack-lbaas08:15
*** peereb has quit IRC08:16
*** peereb has joined #openstack-lbaas08:16
*** peereb has quit IRC08:17
*** peereb has joined #openstack-lbaas08:18
*** peereb has quit IRC08:19
*** peereb has joined #openstack-lbaas08:19
*** peereb has quit IRC08:20
*** peereb has joined #openstack-lbaas08:21
*** peereb has quit IRC08:21
*** rcernin has joined #openstack-lbaas08:54
*** salmankhan has joined #openstack-lbaas09:02
*** cvm has joined #openstack-lbaas09:09
*** kobis has quit IRC09:21
*** aojea_ has joined #openstack-lbaas09:42
*** aojea_ has quit IRC09:47
*** kobis has joined #openstack-lbaas09:51
*** rcernin has quit IRC09:56
*** annp has quit IRC09:58
*** yboaron_ has joined #openstack-lbaas10:01
<devfaz> hi, anyone here?  10:02
*** salmankhan has quit IRC10:04
*** yboaron has quit IRC10:04
*** salmankhan has joined #openstack-lbaas10:04
*** yamamoto has quit IRC10:18
*** cristicalin has joined #openstack-lbaas10:18
*** cristicalin has quit IRC10:37
*** cristicalin has joined #openstack-lbaas10:48
*** cristicalin has quit IRC10:53
*** yboaron_ has quit IRC10:58
<devfaz> I'm seeing "Amphora X health message reports X listeners when 0 expected", but I'm unable to locate the given amphora. ?!  11:15
*** yamamoto has joined #openstack-lbaas11:19
*** cristicalin has joined #openstack-lbaas11:20
*** yamamoto has quit IRC11:24
*** cristicalin has quit IRC11:25
*** yboaron_ has joined #openstack-lbaas11:29
*** longkb has quit IRC11:41
*** atoth has joined #openstack-lbaas11:42
*** yamamoto has joined #openstack-lbaas11:49
*** cristicalin has joined #openstack-lbaas11:51
*** cristicalin has quit IRC11:56
*** frickler has joined #openstack-lbaas12:00
*** yamamoto has quit IRC12:00
*** salmankhan has quit IRC12:10
*** cristicalin has joined #openstack-lbaas12:11
*** yboaron_ has quit IRC12:26
*** cristicalin has quit IRC12:29
*** kman has joined #openstack-lbaas12:29
*** crazik has joined #openstack-lbaas12:32
*** fnaval has joined #openstack-lbaas12:38
*** kman has quit IRC12:38
<crazik> hello, two questions about LBaaSv2 with haproxy  12:39
<crazik> 1. is it possible to manually reschedule to another node?  12:39
<crazik> 2. I got an error about rescheduling with a non-existent lb ID  12:40
<crazik> I found that in the lbaas_loadbalanceragentbindings tables  12:40
<crazik> table*  12:40
<crazik> can I clean the non-existing lb out of this table?  12:40
<crazik> and why is it not being cleaned up properly?  12:40
*** yamamoto has joined #openstack-lbaas12:51
*** kobis has quit IRC12:54
*** salmankhan has joined #openstack-lbaas13:02
*** ispp has quit IRC13:07
*** ispp has joined #openstack-lbaas13:10
<johnsom> devfaz: we have done some work over the last few months to improve that logging. Basically it is an amphora reporting health that we don't expect to be there.  13:14
<johnsom> What version are you running? I wonder if we missed a backport.  13:15
<devfaz> that's what I expected from the msg, but I'm unable to find the given amphora in the database or even as an instance.  13:15
<devfaz> our prod is currently running Pike, so the issue is in 1.0.2  13:16
<johnsom> Yeah, we added logging that gives the IP and amphora ID in the log message  13:16
<johnsom> Ok, when I get in the office I will look to see if it didn't get backported.  13:17
<devfaz> johnsom: great, in the meantime I'll try to upgrade the octavia part to master, but I think this will need some more time/testing.  13:18
<johnsom> If you do an 'openstack server list --all' and grep for amphora you will get all of the amphora IDs; you can then compare that list to the READY and ALLOCATED amphorae in the DB.  13:19
<johnsom> Any that are in nova, but not in the DB, are candidates. If they only have the lb-mgmt-net in nova I usually delete them  13:20
<johnsom> Something happened where Octavia orphaned that amp. Either a failover gone bad or a dirty shutdown of a controller.  13:21
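
A minimal sketch of the comparison johnsom describes, assuming the standard amphora-<uuid> instance naming, direct MySQL access to the octavia database (credentials below are placeholders), and the openstack CLI's --all-projects flag for the admin-wide server list:

    import json
    import subprocess

    import pymysql  # assumed DB driver; any MySQL client works

    def nova_amphora_ids():
        # All servers across projects whose names follow the amphora-<uuid>
        # convention the controller uses when booting amphorae.
        out = subprocess.check_output(
            ["openstack", "server", "list", "--all-projects", "-f", "json"])
        return {s["Name"][len("amphora-"):]
                for s in json.loads(out)
                if s["Name"].startswith("amphora-")}

    def db_amphora_ids(conn):
        # READY and ALLOCATED are the statuses Octavia expects to be running.
        with conn.cursor() as cur:
            cur.execute("SELECT id FROM amphora "
                        "WHERE status IN ('READY', 'ALLOCATED')")
            return {row[0] for row in cur}

    conn = pymysql.connect(host="localhost", user="octavia",
                           password="secret", database="octavia")
    print("orphan candidates:",
          sorted(nova_amphora_ids() - db_amphora_ids(conn)))

Anything in the first set but not in the second is a candidate orphan, per the procedure above.
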
*** nmanos has quit IRC13:23
<devfaz> johnsom: there is no instance running which is not in the database - the other way around: all running instances are in the database.  13:24
<devfaz> argh... one moment, didn't filter for READY/ALLOCATED  13:24
<johnsom> Yeah, it might be in the deleted list  13:25
<devfaz> johnsom: nope, no running instance which should be running.  13:27
<johnsom> We have been working to improve this area of the code. Depending on our results I am half tempted to have Octavia just delete those instances.  13:28
<devfaz> johnsom: I think it is much easier to backport your logging change.  13:29
<johnsom> Hmm, maybe a quick scan to see which ones only have the lb-mgmt-net attached? I don't know how many amps you have running.  13:29
<devfaz> currently 39, so not very many.  13:30
<johnsom> Yeah, please feel free. Like I said, I am not in the office yet so I can't do that myself. It is 6:30am here  13:30
<devfaz> johnsom: Ok, so have a nice cup of coffee ;)  13:31
<johnsom> Plus if you backport it, I can approve it. Grin  13:31
<devfaz> johnsom: even in master the log msg is the same (in the code).  13:35
<devfaz> # Amphora <UUID> health message reports X listeners when Y expected  13:35
<johnsom> So you have the amphora ID? The nova instances are named with it; it should be in the 'nova list --all'  13:37
<devfaz> yes, but it isn't in the list of nova instances. That's the problem.  13:37
<johnsom> Ok, then I am stumped how an amphora could get booted and configured with that uuid, but not have a nova instance record. Maybe there is a qemu running that nova lost track of?  13:39
<johnsom> I thought there was a patch that added the source IP address to the logs as well. Hmm  13:40
<devfaz> johnsom: looks like I looked at the wrong location. I'm getting the same msg on dev (master) and there is an IP in the msg.  13:46
<devfaz> https://github.com/openstack/octavia/commit/1417f6f0f8cbf13fd26440ae2dc53de42873d6cc  13:49
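
For context, a paraphrased sketch (not the exact merged code) of the health manager check behind that warning; the linked commit extends the message with the sender's IP so an amp that is missing from nova can still be tracked down on the wire:

    import logging

    LOG = logging.getLogger(__name__)

    def check_listener_count(health, expected_listener_count):
        # 'health' is the decoded heartbeat payload from an amphora;
        # 'expected_listener_count' is what the octavia DB says should exist.
        listeners = health.get("listeners", {})
        if len(listeners) != expected_listener_count:
            LOG.warning("Amphora %s health message reports %d listeners "
                        "when %d expected", health["id"], len(listeners),
                        expected_listener_count)
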
*** ispp has quit IRC13:58
*** ispp has joined #openstack-lbaas14:01
*** ispp has quit IRC14:01
*** ispp has joined #openstack-lbaas14:01
*** ispp has quit IRC14:55
*** kobis has joined #openstack-lbaas15:00
*** kobis has quit IRC15:03
*** ispp has joined #openstack-lbaas15:06
*** kobis has joined #openstack-lbaas15:16
*** kobis has quit IRC15:19
<mnaser> hey  15:34
<mnaser> i think we have to bump the default #  15:34
<mnaser> for local disk  15:34
<mnaser> /dev/vda1                 1.9G  1.7G     0 100% /  15:34
<mnaser> in an amphora  15:34
<johnsom> Oops, yeah, we have been trying to reduce the usage in the amps. Though centos is still pretty large.  15:35
<johnsom> I know ubuntu keeps putting more stuff in the cloud image that isn't helping us....  15:35
<mnaser> i wonder if that can explain some of the health issues  15:36
<johnsom> /dev/vda1                 1.9G  1.5G  293M  84% /  15:37
<johnsom> On a fresh amp  15:37
<mnaser> yeah that doesn't leave much breathing room  15:37
<johnsom> Yeah, especially on active load balancers, the flow logging will probably eat that before it rotates and compresses  15:38
*** ktibi has quit IRC15:38
<mnaser> it is a very very active lb  15:38
*** ktibi has joined #openstack-lbaas15:39
*** links has quit IRC15:41
<johnsom> For the stable branches I had to resort to https://review.openstack.org/#/c/569531/ removing snapd  15:41
<johnsom> Since I couldn't backport the switch to ubuntu-minimal  15:42
<johnsom> due to stable branch policy  15:42
<mnaser> we'll bump it here to 20  15:44
<mnaser> but i think we should increase it upstream in os_octavia to like 10 or something  15:44
<mnaser> storage is cheap...  15:45
<johnsom> Ha, that is not the feedback we get. People are pissed over the 2GB  15:53
<johnsom> which is thin provisioned..  ha  15:54
<johnsom> There is a patch up that disables local logging as well that would help.  15:55
<mnaser> that's silly, 2gb..  15:55
<mnaser> that's pennies..  15:55
<johnsom> We hoped to have those logs being shipped off the amp in Rocky too, but just didn't have the developers  15:55
<johnsom> mnaser: https://review.openstack.org/566741  15:56
*** kobis has joined #openstack-lbaas15:56
<mnaser> johnsom: yay, problem #2  16:04
<mnaser> scale testing octavia  16:04
<mnaser> 188.114.110.22:13195 [20/Jun/2018:16:02:31.314] <snip> 225/0/0/1/226 200 2065 - - ---- 2000/2000/1/1/0 0/0 "GET <snip> HTTP/1.1"  16:04
<mnaser> 2000/2000  16:04
<mnaser> can we adjust maxconn?  16:04
<johnsom> mnaser: that is a user setting on the listener  16:04
<johnsom> mnaser: --connection_limit  16:05
<mnaser> ugh sweet, thank you so much  16:05
<mnaser> but so here's the thing  16:05
<mnaser> connection_limit=-1  16:05
<johnsom> There is an open bug about -1 being 2000 on the amphora driver. Darn, forgot about that. I need to put that on the priority bug list  16:05
<mnaser> is a lie?  16:06
<mnaser> haha  16:06
<mnaser> yes.  16:06
<mnaser> i can take care of that  16:06
<johnsom> I will find the story for you  16:06
<johnsom> mnaser: https://storyboard.openstack.org/#!/story/1635416  16:07
<johnsom> Since we have released the API we really need to make -1 work and not change the default to 2000  16:08
<johnsom> This was an odd one because what -1 or missing means to haproxy is dependent on how it was compiled, so it varies package to package  16:10
<johnsom> Thus we missed that it can sometimes mean just 2000  16:10
<johnsom> Good news is setting that listener setting will immediately resolve it  16:10
*** crazik has quit IRC16:11
<mnaser> johnsom: are we still at a stage where this is the case?  16:11
<mnaser> that was in 2016 after all :p  16:11
<johnsom> Sigh  16:12
*** yamamoto has quit IRC16:12
<johnsom> It's been a rough few releases for us with our goals and then people disappearing.  16:13
<johnsom> This one just got lost in the shuffle  16:13
<mnaser> johnsom: no worries, i get that :(  16:14
<mnaser> i think ideally it's defaulting to 2000  16:14
<johnsom> It is up to the packager. You can see what their compiled-in default is with "/usr/sbin/haproxy -vv"  16:15
*** ispp has quit IRC16:15
<mnaser> the docs don't seem to explain a default value  16:15
<mnaser> and i think -1 is a good way to shoot yourself in the foot  16:15
<johnsom> Yeah, so thanks if you take that story  16:15
<johnsom> Yeah, but some vendors do support -1 as unlimited so...  16:16
<mnaser> oh i know  16:16
<mnaser> i have a user that wants to run at -1  16:16
<mnaser> haproxy -vv shows maxconn = 2000 for the default  16:16
<johnsom> -1 as an option is necessary and we need to translate that into haproxy speak.  16:16
<johnsom> Yeah, that is what I have on ubuntu too  16:16
<johnsom> It's the DEFAULT_MAXCONN option at compile time  16:17
<mnaser> can we set -1 in the haproxy config file?  16:17
<johnsom> I'm not sure, but I don't think so.  16:17
*** yamamoto has joined #openstack-lbaas16:22
*** yamamoto has quit IRC16:27
*** kobis has quit IRC16:31
*** ramishra has quit IRC16:32
*** tesseract has quit IRC16:34
*** yamamoto has joined #openstack-lbaas16:37
*** yamamoto has quit IRC16:41
<johnsom> You know it's getting serious when I need to clone a copy of the taskflow code....  16:43
*** yamamoto has joined #openstack-lbaas16:52
*** yamamoto has quit IRC16:56
*** crazik has joined #openstack-lbaas16:58
*** crazik has quit IRC16:59
*** crazik has joined #openstack-lbaas16:59
*** atoth has quit IRC17:01
*** yamamoto has joined #openstack-lbaas17:07
<rm_work> mnaser / johnsom: we could probably just set -1 to whatever the largest number that haproxy supports is  17:08
* rm_work shrugs  17:08
<rm_work> at some point it doesn't matter if it's exact  17:08
<johnsom> Yeah  17:08
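
A rough sketch of the translation rm_work suggests, for illustration only (the cap below is an assumed placeholder, not an official haproxy constant); the immediate operator workaround remains setting an explicit limit via 'openstack loadbalancer listener set --connection-limit':

    # Hypothetical config-rendering helper: map the API's -1 ("unlimited")
    # to an explicit large maxconn instead of omitting the directive and
    # silently inheriting the package's compile-time DEFAULT_MAXCONN
    # (often just 2000).
    ASSUMED_HAPROXY_MAX_CONN = 1000000  # tune to what your haproxy build handles

    def render_maxconn(connection_limit):
        if connection_limit is None or connection_limit == -1:
            return "    maxconn %d" % ASSUMED_HAPROXY_MAX_CONN
        return "    maxconn %d" % connection_limit
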
<mnaser> rm_work: thanks for pushing up that queens backport  17:09
<rm_work> yep :)  17:10
<rm_work> now we can do Pike! :)  17:10
<rm_work> (I think)  17:10
<crazik> hello, two questions about LBaaSv2 with haproxy: 1. is it possible to manually reschedule to another node? 2. I got an error about rescheduling with a non-existent lb ID. I found that in the lbaas_loadbalanceragentbindings table. Can I clean the non-existing lb out of this table, and why is it not being cleaned up properly?  17:11
*** yamamoto has quit IRC17:12
*** atoth has joined #openstack-lbaas17:14
<crazik> well, that was 3 questions in fact.  17:15
<rm_work> crazik: you can trigger a "failover" as an administrator, which will manually force it to move the active node  17:15
<rm_work> oh, wait  17:15
<johnsom> rm_work: He isn't using octavia  17:16
<crazik> still haproxy.  17:16
<crazik> octavia work in progress ;)  17:16
<rm_work> are you talking about *neutron-lbaas*?  17:16
<rm_work> with the namespace driver?  17:16
<crazik> I think so.  17:16
*** salmankhan has quit IRC17:16
<rm_work> augh. then, no idea  17:16
<crazik> blah.  17:17
<johnsom> crazik: 1. no, I don't think so. 2. I don't know. 3. Don't know either. I don't use the namespace driver  17:17
<crazik> ;)  17:17
<crazik> johnsom: thank you  17:17
<johnsom> Sorry, I don't think there are many folks around anymore that use that.  17:17
<rm_work> do people still use that? besides crazik I guess  17:17
<rm_work> yeah hmm  17:17
<crazik> well, what do you mean by namespace driver?  17:17
<crazik> this one which creates haproxy instances inside namespaces?  17:18
<rm_work> yes  17:18
<rm_work> on the agent hosts I think?  17:18
<crazik> correct  17:18
<crazik> so, it's my case  17:18
<rm_work> yeah, that was deprecated years ago I think  17:18
<crazik> lbaasv1 was deprecated  17:18
<crazik> v2 wasn't, as of Ocata afair  17:19
<rm_work> hmm, lbaasv1 was completely removed by now, but i thought we at least deprecated (not removed yet though) namespace-haproxy  17:19
<rm_work> but it does make sense that it's still commonly in use  17:19
<rm_work> it's *very* simple  17:19
<crazik> yeah  17:20
<rm_work> we just don't know much about it :(  17:20
<crazik> I was trying to set up both: haproxy + octavia together, but had some issues.  17:20
<crazik> now still thinking about a way to migrate  17:20
<rm_work> the entire current team started working on lbaas after it was essentially abandoned :(  17:20
<rm_work> I mean, after the namespace driver was -- not the project  17:21
<crazik> well, I wish I had a clean start and could use Octavia  17:21
<rm_work> yeah, not always possible :(  17:21
<rm_work> I know, my cloud is still on liberty :P  17:21
<crazik> yeah. there is mention of migration tools on the roadmap  17:21
<rm_work> yeah, that is started  17:22
<rm_work> there is a WIP up I believe from johnsom  17:22
<crazik> any unofficial tools available?  17:22
*** yamamoto has joined #openstack-lbaas17:22
<rm_work> but you can also just start running Octavia as a separate service entirely, and run both at the same time  17:22
<rm_work> that is what we did actually  17:22
<crazik> as I said - I had some issues  17:22
<rm_work> i mean, completely disconnected  17:22
<rm_work> continue running neutron-lbaas with the namespace backend, and run octavia as a separate service/endpoint  17:23
<rm_work> don't try to pull octavia in to neutron-lbaas  17:23
<johnsom> The migration tool works for neutron-lbaas load balancers using the octavia provider, but not others yet. It also does not migrate from one provider to another  17:23
*** AlexeyAbashkin has quit IRC17:23
<rm_work> we ran them in parallel for a while, got people to migrate themselves in time, and then eventually EOL'd neutron-lbaas  17:23
<crazik> I got a problem with horizon when using them in parallel  17:24
<rm_work> ahh yeah  17:24
<rm_work> the horizon dashboards conflicting :/  17:24
<rm_work> that is a dumb problem, i am still not clear why we can't just run both dashboards >_<  17:24
<rm_work> but yeah, it doesn't work :/  17:24
<crazik> ;)  17:24
<johnsom> Should work, I did it once  17:24
<rm_work> we don't use horizon here so it was not a problem for us  17:24
<johnsom> The only odd thing was they both had the same title  17:25
<crazik> next issue I had was that amphorae need an ephemeral disk  17:25
<rm_work> oh yeah, maybe it was just a problem when it was the old/new neutron-lbaas-dashboard  17:25
<crazik> I have only volume-based instances  17:25
<johnsom> rm_work: Any way you could possibly ask harlowj a taskflow question?  17:25
<johnsom> Or has he sworn off OpenStack for good now?  17:26
<rm_work> he's off this week  17:26
<rm_work> but certainly  17:26
*** yamamoto has quit IRC17:27
<johnsom> Bummer, ok. Trying to figure out how subflow revert / retry works. I want a sub-flow to revert, but not revert up the parent.  17:28
<rm_work> he'll be back hmmm  17:28
<rm_work> next week  17:28
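
One documented taskflow mechanism that may be what johnsom is after (a sketch under that assumption, not a confirmed answer): a retry controller attached to a subflow owns failures inside its scope, so while attempts remain only the subflow is reverted and re-run; once attempts are exhausted the failure still propagates to the parent:

    from taskflow import engines
    from taskflow import retry
    from taskflow import task
    from taskflow.patterns import linear_flow

    class FlakyBoot(task.Task):
        def execute(self):
            raise RuntimeError("amp boot failed")  # simulated subflow failure

        def revert(self, **kwargs):
            print("reverting the subflow scope only")

    # Times() retries the subflow up to 'attempts' times before giving up,
    # keeping the reverts contained below this flow while it retries.
    subflow = linear_flow.Flow("rebuild-amp", retry=retry.Times(attempts=3))
    subflow.add(FlakyBoot())

    parent = linear_flow.Flow("failover")
    parent.add(subflow)
    engines.load(parent).run()  # raises after the third failed attempt
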
*** links has joined #openstack-lbaas17:37
*** links has quit IRC17:39
*** atoth has quit IRC17:39
*** atoth has joined #openstack-lbaas17:39
*** links has joined #openstack-lbaas17:39
*** yamamoto has joined #openstack-lbaas17:52
*** yamamoto has quit IRC17:57
*** yamamoto has joined #openstack-lbaas18:02
*** yamamoto has quit IRC18:02
*** links has quit IRC18:11
*** links has joined #openstack-lbaas18:12
*** links has quit IRC18:14
*** Swami_ has joined #openstack-lbaas18:20
*** Swami has joined #openstack-lbaas18:20
*** yamamoto has joined #openstack-lbaas18:36
*** ianychoi has quit IRC18:36
*** atoth has quit IRC18:40
*** yamamoto has quit IRC18:41
<openstackgerrit> Merged openstack/octavia master: Align logging on oslo_log  https://review.openstack.org/575931  18:49
*** yamamoto has joined #openstack-lbaas18:51
*** yamamoto has quit IRC18:56
*** aojea_ has joined #openstack-lbaas19:03
*** fnaval has quit IRC19:03
*** yamamoto has joined #openstack-lbaas19:06
*** fnaval has joined #openstack-lbaas19:08
*** fnaval has quit IRC19:08
*** fnaval has joined #openstack-lbaas19:09
*** yamamoto has quit IRC19:10
*** aojea_ has quit IRC19:16
*** yamamoto has joined #openstack-lbaas19:22
*** yamamoto has quit IRC19:26
*** yamamoto has joined #openstack-lbaas19:37
*** salmankhan has joined #openstack-lbaas19:41
*** yamamoto has quit IRC19:43
*** ktibi has quit IRC19:57
<johnsom> #startmeeting Octavia  20:00
<openstack> Meeting started Wed Jun 20 20:00:08 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.  20:00
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  20:00
*** openstack changes topic to " (Meeting topic: Octavia)"20:00
<openstack> The meeting name has been set to 'octavia'  20:00
<cgoncalves> hi  20:00
<nmagnezi> o/  20:00
<johnsom> Hi folks  20:00
<johnsom> Another hot and sunny day here  20:00
<xgerman_> o/  20:00
<johnsom> #topic Announcements  20:01
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"20:01
<johnsom> I wanted to highlight that I have set up the Rocky priority patch/review list  20:01
<johnsom> #link https://etherpad.openstack.org/p/octavia-priority-reviews  20:01
<johnsom> I am trying to update this daily with progress on the patches  20:01
<johnsom> If you have some time please try to help review and/or work on patches  20:02
<nmagnezi> Is it okay for others to update this as well? (In case I notice updates)  20:02
<johnsom> It is a decent-sized list, but we are making some progress.  20:02
<johnsom> nmagnezi: Yes please!  20:02
<nmagnezi> ack  20:03
<johnsom> We are heading towards feature freeze and MS3  20:03
<nmagnezi> I have a question about those python 3 patches, but let's take it to the open discussion part  20:03
<johnsom> MS3 and feature freeze is the week of July 23!!!!  20:04
<johnsom> Ok, sounds good.  20:04
<johnsom> The only other announcement I had is about the upcoming PTG  20:04
<johnsom> We will have a room for Octavia. I will be attending  20:04
<johnsom> If you need travel assistance, the foundation's round 1 closes July 1st.  20:05
<johnsom> So apply soon  20:05
<johnsom> Any other announcements today?  20:05
<johnsom> #topic Brief progress reports / bugs needing review  20:06
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:06
<johnsom> I have been busy trying to squash bugs. There have been a few little ones and some tech debt.  20:06
<johnsom> I am currently focused on fixing the dual amphora failure bug that causes failover to not be fully successful.  20:07
<nmagnezi> Link? :)  20:07
<johnsom> I have a plan, a PoC task and parallel flow. I just need to finish out the rest of the failover flow  20:08
<johnsom> #link https://storyboard.openstack.org/#!/story/2001481  20:08
<johnsom> Basically if more than one amphora is down/failed, the failover flow will not fully bring up the LB. One amp will be rebuilt, but the LB will be in provisioning ERROR  20:09
*** rm_mobile has joined #openstack-lbaas20:09
<rm_mobile> o/  20:10
<johnsom> Hopefully I can get a patch posted today for it.  20:10
* rm_mobile is running late  20:10
<nmagnezi> Put it in the prio list :)  20:10
<johnsom> It is, under "Priority Patches (These need work or are WIP)"  20:10
<johnsom> Line 45  20:10
<johnsom> Any other progress updates from the team?  20:11
*** rm_mobile has quit IRC20:12
*** rm_mobile has joined #openstack-lbaas20:12
<johnsom> FYI, for those of you interested in stable branch backporting, this patch would be good to backport. Someone was asking about it this morning. It will need some minor work to cherry-pick though.  20:12
<johnsom> #link https://review.openstack.org/#/c/561369/  20:12
<cgoncalves> not sure it was reported last week: grenade support is merged  20:13
<johnsom> So if someone is motivated....  20:13
<johnsom> Yes! Excellent work.  20:13
<johnsom> Anybody have comments on making that voting?  20:13
<xgerman_> if nobody is motivated I can grab that  20:13
<johnsom> I think we need to have it voting and an upgrade procedure doc, and then we can assert our tag(s)  20:13
<xgerman_> yeah, let's #vote on voting  20:14
<cgoncalves> xgerman_, go for it. propose the backport  20:14
<xgerman_> k  20:14
<johnsom> Ha, I would only call a formal vote on this if Doug was here. Just to bug him  20:14
<cgoncalves> johnsom, I started the upgrade procedure doc but am out traveling this week  20:15
<johnsom> I think I will look at the job history and propose a voting patch  20:15
<johnsom> cgoncalves: Ok, cool!  20:15
<johnsom> Any other updates or should we move on?  20:16
<johnsom> #topic Talk about API versioning/microversioning  20:17
*** openstack changes topic to "Talk about API versioning/microversioning (Meeting topic: Octavia)"20:17
<johnsom> We opened this topic a few weeks ago.  20:17
* nmagnezi looks at rm_work  20:17
<johnsom> Any new thoughts on this? Have people had time to look at the reference links?  20:18
* rm_work looks at nmagnezi  20:18
<johnsom> #link https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html  20:19
<rm_work> oh right, i was going to look up stuff  20:19
<johnsom> #link https://docs.openstack.org/tempest/latest/microversion_testing.html#step4-separate-test-classes-for-each-microversion  20:19
* rm_work looks at the floor, which is where that task ended up  20:19
<rm_work> johnsom: oh fff no  20:19
<rm_work> bleh  20:19
<johnsom> As I mentioned last week, I lean towards not doing microversioning, but to increment the API version document with dot versions and only do additions to the API.  20:20
<cgoncalves> I have not. I looked +1 years ago at nova's microversion implementation. we need some sort of versioning support for rolling upgrades, for sure  20:20
<xgerman_> +1  20:20
<johnsom> Yeah, it's the clients that are becoming the bigger issue  20:20
<johnsom> clients don't know if they can ask for features  20:21
<johnsom> Ok, so  20:21
* johnsom glares at the room  20:21
<nmagnezi> What do other projects do when the client asks for a feature that is not supported in a specific version?  20:22
<johnsom> Let's all take a look at those and have thoughts/ideas/comments ready for next week.  20:22
<johnsom> Likely the same 404 we do  20:22
<cgoncalves> has api versioning made it to a community goal for rocky?  20:22
<johnsom> No  20:23
<johnsom> The goals for Rocky are no mox (we don't use it) and enable mutable configs  20:23
<johnsom> Stein is likely going to be py3 and something I forgot.  20:23
<rm_work> sometimes i think the extensions thing neutron did seems like a sane way to approach this. and then I flip my table and cry in a corner for a while.  20:23
<xgerman_> +1, as much as I hate extensions they seem a bit better than this microversioning  20:24
<johnsom> Can't say I'm a huge fan of the extensions  20:24
<nmagnezi> Would it make sense to do some sort of version discovery behind the scenes for specific client actions that we know not all versions support? In that case we could mask the error and output something like "this is not supported by the current API version"  20:24
<rm_work> for now i kinda just want to keep doing  20:25
<rm_work> the thing  20:25
<rm_work> that johnsom said  20:25
<johnsom> Oh, the other Stein goal that has traction is cold upgrade  20:25
<nmagnezi> For example an action like amp failover, which was added in queens IIRC  20:25
<xgerman_> well, if you returned 404 before, you are retroactively changing API behavior  20:25
<rm_work> <johnsom> As I mentioned last week, I lean towards not doing microversioning, but to increment the API version document with dot versions and only do additions to the API.  20:26
<cgoncalves> johnsom, cold upgrade: check! :)  20:26
<johnsom> cgoncalves: Yep!  20:26
<johnsom> So, if I update my version discovery patch and update the version, are we good with that? Leave the path /2.0 and have the multiple dot versions?  20:27
<johnsom> I think I will do that as a proposed patch. Then we can discuss again next week and make a call.  20:27
<rm_work> I think so IMO  20:27
<xgerman_> +1  20:28
<nmagnezi> +1  20:28
<cgoncalves> #link https://review.openstack.org/#/c/559460/  20:28
<johnsom> Yeah, that needs fixing though. I think I don't like how it is now  20:29
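
A hedged illustration (hypothetical, not the patch under review) of what "keep the /2.0 path, add dot versions" could look like in the root version discovery document, so clients can check for a feature before using it:

    # Example only: additive API changes bump a dot-version entry while the
    # URL path stays /v2.0; hostnames here are placeholders.
    VERSION_DISCOVERY = {
        "versions": [
            {"id": "v2.0", "status": "SUPPORTED",
             "links": [{"rel": "self", "href": "https://lb.example.com/v2.0"}]},
            {"id": "v2.1", "status": "CURRENT",  # additions only, same path
             "links": [{"rel": "self", "href": "https://lb.example.com/v2.0"}]},
        ]
    }
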
<johnsom> #topic Open Discussion  20:29
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"20:29
<johnsom> nmagnezi: I think you had a py3 queston  20:30
<johnsom> question  20:30
<nmagnezi> yup  20:30
<nmagnezi> So re: https://review.openstack.org/#/c/573348/  20:30
*** salmankhan has quit IRC20:30
*** aojea has joined #openstack-lbaas20:31
<nmagnezi> So re: https://review.openstack.org/#/c/573348/  20:31
<nmagnezi> once that merges  20:31
<nmagnezi> does that mean we gate on python 3 *only*?  20:31
<nmagnezi> Just double checking myself here  20:31
<johnsom> It means that those jobs will run under py3, but we will still have py27 unit, functional, and tempest  20:32
<johnsom> So pep8 will run under py3, as will cover, docs, releasenotes, and the debug job.  20:32
<nmagnezi> But pep8 and cover will run only with python3, right?  20:32
<nmagnezi> yeah  20:33
<johnsom> Basically they want to make py27 the exception instead of py3 being the exception as it is today  20:33
<johnsom> Correct, this change moves those over to run only under py3  20:33
<rm_work> yeah A++  20:33
<johnsom> This is lining things up for the proposed Stein goal of all projects running py3 jobs as a minimum  20:34
<nmagnezi> yeah. I just wonder because we still gate on python 2.x in RDO and internally  20:34
<johnsom> Yeah, the important jobs: unit tests, functional, and tempest will still run both py27 and py3 to show that our code works under both  20:35
<johnsom> For the near future. Eventually OpenStack will drop py2.7 support.  20:35
<rm_work> in U at the earliest  20:35
<nmagnezi> Got it. Thanks for the answers :)  20:36
<johnsom> #link https://pythonclock.org/  20:36
<johnsom> This is being driven by the fact that python 2.7 itself is going end-of-life in a year and a half  20:36
<johnsom> Well, 1 year, 6 months, 11 days, 9 hours, 22 minutes  20:37
<nmagnezi> Haha  20:37
<nmagnezi> I trust them to start Python 4.x just to have two versions again  20:38
<johnsom> I will just be happy to only have to deal with one version  20:38
<johnsom> lol, yeah, probably  20:38
<xgerman_> yep  20:38
<johnsom> Ok, any other discussions for today?  20:38
<xgerman_> but by then we might have rewritten OpenStack in golang  20:38
<johnsom> C#?  20:39
<nmagnezi> xgerman_, dougwig once wanted to port neutron-lbaas to Ruby.. :-)  20:39
<rm_work> as an aside, python recommends not doing version detection :P https://docs.python.org/3/howto/pyporting.html#use-feature-detection-instead-of-version-detection  20:39
<nmagnezi> johnsom, fortran  20:39
<rm_work> (related to the microversioning discussion)  20:39
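
The pyporting doc rm_work links recommends probing for a capability rather than branching on version numbers; a minimal sketch of that pattern:

    # Feature detection: try the capability and fall back, instead of
    # checking sys.version_info (version detection).
    try:
        from functools import lru_cache  # py3.2+ (or a backport package)
    except ImportError:
        def lru_cache(maxsize=128):
            # Hypothetical no-op fallback so the decorator always exists.
            def decorator(func):
                return func
            return decorator

    @lru_cache(maxsize=None)
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)
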
*** yamamoto has joined #openstack-lbaas20:39
<rm_work> I recommend we rewrite the amp-agent in Rust  20:40
<johnsom> Hey, fortran is not a dead language....  20:40
<rm_work> Pascal?  20:40
<johnsom> Oye, pascal, now we are going downhill. Might as well throw out perl or LISP  20:40
<rm_work> oooh, we had to learn J in uni...  20:40
<rm_work> https://en.wikipedia.org/wiki/J_(programming_language)  20:40
<rm_work> APL, but typable by normal humans with normal keyboards  20:41
<johnsom> Sadly I have coded in all of the above except for rust  20:41
<johnsom> at one point or another  20:41
<xgerman_> Rust is new  20:41
*** pcaruana has quit IRC20:41
<johnsom> Ok, if nothing else I will let you fine folks go work on reviews. Please help with those, MS3 is coming quick.  20:41
<xgerman_> https://en.wikipedia.org/wiki/System_programming_language#Major_languages  20:42
<rm_work> pretty sure http://www.cs.trinity.edu/old-index.cgi is written in J  20:42
<xgerman_> take your pick  20:42
<rm_work> alright. I'm ... temporarily kinda semi-away from octavia for a few weeks; ping me for reviews or if there are patches I need to update  20:43
<rm_work> :(  20:43
<xgerman_> ok  20:43
<johnsom> Ok, thanks folks!  20:44
<johnsom> #endmeeting  20:44
*** openstack changes topic to "Discussion of OpenStack Load Balancing (Octavia) | https://etherpad.openstack.org/p/octavia-priority-reviews"20:44
<openstack> Meeting ended Wed Jun 20 20:44:28 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  20:44
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-06-20-20.00.html  20:44
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-06-20-20.00.txt  20:44
<openstack> Log:            http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-06-20-20.00.log.html  20:44
*** yamamoto has quit IRC20:44
*** rm_mobile has quit IRC21:04
*** yamamoto has joined #openstack-lbaas21:41
*** yamamoto has quit IRC21:47
*** rcernin has joined #openstack-lbaas22:10
*** fnaval has quit IRC22:23
*** aojea has quit IRC22:36
*** yamamoto has joined #openstack-lbaas22:43
*** yamamoto has quit IRC22:49
*** rcernin has quit IRC22:50
<johnsom> Ok, got a working dual-amp-down failover. There is a bunch of flow rework that could be done to parallelize better, but I want a small patch we can backport for the bug. We can do that optimization later.  22:56
<johnsom> On my workstation it's 29 seconds until the LB is functional again, another minute for the rest of the first amp's cleanup to finish, then another 34 seconds until the second amp is fully restored.  22:58
<johnsom> So about two minutes and seven seconds of wall clock time to restore full redundancy after both amps were nuked. But the important number is the 29 seconds until the LB is functional again.  22:59
<johnsom> Thirty seconds of that is retrying to connect to the secondary in case it was just a network blip. On the fence whether we should retry that many times on the secondary or fail it faster  23:02
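
A rough sketch of the trade-off johnsom describes (the attempt count, interval, and use of the amphora agent port 9443 are assumptions): fewer or shorter connect retries to the secondary shorten time-to-repair when it is truly dead, at the cost of failing over on a transient blip:

    import socket
    import time

    def secondary_reachable(ip, port=9443, attempts=10, interval=3.0):
        # Worst case ~attempts * interval seconds before declaring the amp
        # dead; with these placeholder defaults that is roughly the thirty
        # seconds mentioned above.
        for _ in range(attempts):
            try:
                socket.create_connection((ip, port), timeout=interval).close()
                return True
            except OSError:
                time.sleep(interval)
        return False
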
*** rcernin has joined #openstack-lbaas23:43
*** yamamoto has joined #openstack-lbaas23:45
*** Swami has quit IRC23:47
*** Swami_ has quit IRC23:47
*** yamamoto has quit IRC23:50
