Wednesday, 2018-03-28

*** Swami has quit IRC00:06
johnsomDid the backup patch too now00:06
openstackgerritJacky Hu proposed openstack/octavia master: Able to set frontend network for loadbalancer  https://review.openstack.org/52993600:07
*** yamamoto has joined #openstack-lbaas00:12
*** openstack has joined #openstack-lbaas00:17
*** ChanServ sets mode: +o openstack00:17
*** yamamoto has quit IRC00:17
johnsomrm_work do you still have the timeout patch running in a stack?00:23
rm_workyes00:23
johnsomcan you paste me from mysql "show create table listener;"?00:23
johnsomOut of time to do a stack, but I want to finish this read-through review based on how the migration came out in the DB.00:27
johnsomsqlalchemy adds a bit of chance to how it interprets things, so I don't trust it to do what I think it's going to do.00:27
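A minimal sketch, assuming a devstack-style MySQL install (the driver, credentials and database name are illustrative), of pulling the same information with SQLAlchemy's inspector instead of pasting "show create table listener;":

    from sqlalchemy import create_engine, inspect

    # Connection URL is an assumption; adjust user/password/host/database.
    engine = create_engine('mysql+pymysql://octavia:secret@127.0.0.1/octavia')

    inspector = inspect(engine)
    for column in inspector.get_columns('listener'):
        # Shows each column name and the type the migration actually produced.
        print(column['name'], column['type'])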
*** yamamoto has joined #openstack-lbaas00:38
*** yamamoto has quit IRC00:45
rm_workerr k00:54
rm_workone sec00:54
rm_workjohnsom: sorry am debugging things internally00:54
rm_workhttp://paste.openstack.org/show/715635/00:55
rm_workjohnsom: ^^00:55
*** yamamoto has joined #openstack-lbaas00:56
*** fnaval has quit IRC01:05
*** yamamoto has quit IRC01:14
*** PagliaccisCloud has quit IRC01:19
*** PagliaccisCloud has joined #openstack-lbaas01:20
*** Jeffrey4l has joined #openstack-lbaas01:31
Jeffrey4lfor the octavia housekeeping spare_amphora_pool_size feature: if i have several octavia-housekeeping processes, for example 3, they may create 3 * spare_amphora_pool_size VMs; there doesn't seem to be any global lock to prevent this. is this expected? or should it be fixed?01:34
rm_workit should not do that01:36
johnsomIt is expected.  We balanced the extra code and complexity against having a few extra spare amps01:37
rm_workah, yeah at the very start01:37
rm_workit can build a few extra01:37
rm_workbut it will not STAY at a few extra01:37
rm_workafter the initial build, it should end up at or very close to the number01:37
johnsomIf it is really a problem for someone it can be fixed.01:39
Jeffrey4li expected it to boot spare_amphora_pool_size VMs. i think it is a bug01:42
Jeffrey4lon the other hand, if i have more than spare_amphora_pool_size VMs, housekeeping won't shrink the number of VMs. and i was hoping it could shrink the pool too.01:43
*** atoth has quit IRC01:54
rm_workJeffrey4l: you aren't *wrong*, it's a bug technically, and we could have a bug filed for it ... but it's a bug we knowingly introduced, because the fix is a LOT of work, and the result of the bug is ... not a big deal01:55
Jeffrey4lrm_work, how about introducing tooz to add a global lock around creating VMs?02:01
*** AlexeyAbashkin has joined #openstack-lbaas02:12
*** yamamoto has joined #openstack-lbaas02:15
*** AlexeyAbashkin has quit IRC02:16
rm_worki mean, we can do it without tooz also02:18
rm_workit's just work02:18
rm_workand relying on another service just to lock internally is a bit heavy <_<02:18
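A rough sketch of the tooz-style global lock Jeffrey4l suggests for the spare-pool fill; the backend URL, member id and lock name are illustrative, and this is not Octavia code:

    from tooz import coordination

    # Any tooz backend would do; memcached is just an example choice here.
    coordinator = coordination.get_coordinator(
        'memcached://127.0.0.1:11211', b'housekeeping-1')
    coordinator.start()

    lock = coordinator.get_lock(b'octavia-spare-pool')
    if lock.acquire(blocking=False):
        try:
            # Count existing spares and boot amphorae up to
            # spare_amphora_pool_size while holding the lock.
            pass
        finally:
            lock.release()
    # If the lock is held, another housekeeping process is already filling
    # the pool, so this one simply skips the cycle.

    coordinator.stop()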
*** yamamoto has quit IRC02:21
openstackgerritsapd proposed openstack/octavia master: Support create amphora instance from volume based.  https://review.openstack.org/55711102:31
Jeffrey4lroger. btw, i created a story for tracking this https://storyboard.openstack.org/#!/story/200174802:32
*** yamamoto has joined #openstack-lbaas02:37
*** yamamoto has quit IRC03:18
*** kbyrne has quit IRC03:51
*** kbyrne has joined #openstack-lbaas03:56
openstackgerritJacky Hu proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model  https://review.openstack.org/52885004:05
openstackgerritJacky Hu proposed openstack/octavia master: L3 ACTIVE-ACTIVE data model  https://review.openstack.org/52472204:05
openstackgerritJacky Hu proposed openstack/octavia master: Make frontend interface attrs less vrrp specific  https://review.openstack.org/52113804:05
openstackgerritJacky Hu proposed openstack/octavia master: Able to set frontend network for loadbalancer  https://review.openstack.org/52993604:05
*** yamamoto has joined #openstack-lbaas04:19
*** yamamoto has quit IRC04:25
*** links has joined #openstack-lbaas04:46
*** Alex_Staf has joined #openstack-lbaas04:48
*** yamamoto has joined #openstack-lbaas05:21
*** yamamoto has quit IRC05:27
*** velizarx has joined #openstack-lbaas05:51
*** velizarx has joined #openstack-lbaas05:53
*** velizarx has quit IRC06:02
*** voelzmo has joined #openstack-lbaas06:18
*** yamamoto has joined #openstack-lbaas06:23
*** yamamoto has quit IRC06:28
*** pcaruana has joined #openstack-lbaas06:38
*** voelzmo has quit IRC06:55
*** voelzmo has joined #openstack-lbaas06:55
*** voelzmo has quit IRC07:08
*** voelzmo has joined #openstack-lbaas07:10
*** ianychoi has quit IRC07:10
*** voelzmo has quit IRC07:11
*** rcernin has quit IRC07:15
*** voelzmo has joined #openstack-lbaas07:15
*** yamamoto has joined #openstack-lbaas07:25
*** tesseract has joined #openstack-lbaas07:25
*** yamamoto has quit IRC07:30
*** voelzmo has quit IRC07:36
*** Alex_Staf has quit IRC07:43
*** velizarx has joined #openstack-lbaas07:43
*** Alex_Staf has joined #openstack-lbaas07:45
*** AlexeyAbashkin has joined #openstack-lbaas08:00
*** kobis has joined #openstack-lbaas08:02
*** kobis has quit IRC08:02
*** kobis has joined #openstack-lbaas08:02
*** voelzmo has joined #openstack-lbaas08:04
*** Alex_Staf has quit IRC08:11
*** voelzmo has quit IRC08:22
*** voelzmo has joined #openstack-lbaas08:22
*** voelzmo has quit IRC08:23
*** voelzmo has joined #openstack-lbaas08:23
*** voelzmo has quit IRC08:24
*** voelzmo has joined #openstack-lbaas08:24
*** rcernin has joined #openstack-lbaas08:24
*** voelzmo has quit IRC08:24
*** voelzmo has joined #openstack-lbaas08:25
*** voelzmo has quit IRC08:25
*** voelzmo has joined #openstack-lbaas08:25
*** voelzmo has quit IRC08:26
*** yamamoto has joined #openstack-lbaas08:26
*** yamamoto has quit IRC08:30
*** irenab has quit IRC08:33
*** oanson has quit IRC08:34
*** oanson has joined #openstack-lbaas08:36
*** irenab has joined #openstack-lbaas08:40
*** rcernin has quit IRC08:42
*** salmankhan has joined #openstack-lbaas09:10
*** yamamoto has joined #openstack-lbaas09:27
*** Alex_Staf has joined #openstack-lbaas09:27
*** salmankhan has quit IRC09:32
*** yamamoto has quit IRC09:32
*** salmankhan has joined #openstack-lbaas09:35
*** kobis has quit IRC09:55
*** kobis has joined #openstack-lbaas09:56
*** kobis has quit IRC09:56
*** ianychoi has joined #openstack-lbaas10:15
*** Alex_Staf has quit IRC10:22
*** yamamoto has joined #openstack-lbaas10:29
*** yamamoto has quit IRC10:34
openstackgerritMerged openstack/octavia master: Fix logging level for keystone auth bypass  https://review.openstack.org/55697010:47
*** Alex_Staf has joined #openstack-lbaas10:50
*** atoth has joined #openstack-lbaas10:52
*** links has quit IRC11:04
*** chandankumar has quit IRC11:04
*** links has joined #openstack-lbaas11:05
*** chkumar246 has joined #openstack-lbaas11:20
*** kobis has joined #openstack-lbaas11:24
*** yamamoto has joined #openstack-lbaas11:31
*** yamamoto has quit IRC11:35
*** velizarx has quit IRC11:45
*** velizarx has joined #openstack-lbaas11:56
*** sapd__ has joined #openstack-lbaas12:06
*** sapd_ has quit IRC12:09
*** sapd__ has quit IRC12:17
*** sapd_ has joined #openstack-lbaas12:17
*** yamamoto has joined #openstack-lbaas12:22
*** sapd_ has quit IRC12:27
*** sapd_ has joined #openstack-lbaas12:27
*** sapd_ has quit IRC12:32
*** sapd_ has joined #openstack-lbaas12:32
*** chkumar246 is now known as chandankumar12:43
*** kobis has quit IRC12:48
*** salmankhan has quit IRC12:50
*** sapd__ has joined #openstack-lbaas12:52
*** sapd_ has quit IRC12:52
*** kobis has joined #openstack-lbaas12:53
*** salmankhan has joined #openstack-lbaas12:59
*** fnaval has joined #openstack-lbaas13:20
openstackgerritChuck Short proposed openstack/neutron-lbaas-dashboard master: Update tox.ini  https://review.openstack.org/55734313:57
*** velizarx has quit IRC14:00
*** velizarx has joined #openstack-lbaas14:01
*** ianychoi_ has joined #openstack-lbaas14:33
*** ianychoi has quit IRC14:36
*** sapd__ has quit IRC14:37
*** sapd__ has joined #openstack-lbaas14:38
*** sapd__ has quit IRC14:40
*** sapd__ has joined #openstack-lbaas14:41
*** sapd__ has quit IRC14:41
*** sapd__ has joined #openstack-lbaas14:41
*** sapd__ has quit IRC14:41
*** sapd__ has joined #openstack-lbaas14:42
*** velizarx has quit IRC14:56
*** velizarx has joined #openstack-lbaas14:57
*** kevinbenton has quit IRC14:58
*** links has quit IRC15:03
*** Alex_Staf has quit IRC15:04
*** kobis has quit IRC15:05
*** kevinbenton has joined #openstack-lbaas15:06
*** pcaruana has quit IRC15:09
*** velizarx has quit IRC15:18
*** fnaval_ has joined #openstack-lbaas15:19
*** fnaval_ has quit IRC15:20
*** fnaval_ has joined #openstack-lbaas15:21
*** fnaval has quit IRC15:21
*** imacdonn has quit IRC15:26
*** imacdonn has joined #openstack-lbaas15:27
openstackgerritMerged openstack/octavia-tempest-plugin master: Updated from global requirements  https://review.openstack.org/55631415:28
*** vishyj has joined #openstack-lbaas16:36
*** kobis has joined #openstack-lbaas16:36
vishyjhi, I am doing some research on using anycast routing for load balancers....was wondering if OpenStack neutron allowed specifying the same IP address for multiple VMs that had LBaaS running on those VMs...new to lbaas and in research mode, please pardon if my question is invalid16:38
johnsomvishyj Take a look at this doc: https://docs.openstack.org/octavia/latest/contributor/specs/version1.1/active-active-l3-distributor.html16:39
openstackgerritMurali Annamneni proposed openstack/neutron-lbaas master: Enable MySQL Cluster Support for neutron-lbaas  https://review.openstack.org/51308116:39
*** velizarx has joined #openstack-lbaas16:40
vishyjjohnsom thanks a lot .... will take a look16:40
*** huseyin has joined #openstack-lbaas16:47
huseyinHi guys, I can finally create a load-balancer under octavia after changing network, compute and amphora drivers to noop16:54
huseyinLoad balancer status is active, and it gets an IP address from the private subnet16:54
huseyinI can access amphora images from controller nodes and vice versa16:55
huseyinBut the VIP address on the private subnet is not accessible16:55
huseyini assigned a floating IP to the LB, but i can not access members via this floating IP16:56
huseyinthe VIP address shown on the dashboard is not accessible from project VMs16:56
huseyinOn the amphora image, i can not see any namespace16:57
huseyinamphora image has only IP address on the lb-mgmt-net16:57
huseyinis it expected behavior?16:58
*** velizarx has quit IRC16:58
huseyinany idea?16:59
*** Swami has joined #openstack-lbaas17:08
*** AlexeyAbashkin has quit IRC17:08
*** kobis has quit IRC17:12
*** velizarx has joined #openstack-lbaas17:14
*** velizarx has quit IRC17:16
openstackgerritMerged openstack/neutron-lbaas master: Updated from global requirements  https://review.openstack.org/55557417:16
*** velizarx has joined #openstack-lbaas17:16
*** yamamoto has quit IRC17:29
*** tesseract has quit IRC17:35
johnsomhuseyin The no-op drivers don't actually do anything. They don't call out to neutron or nova for example.  They are purely used for testing when the tests don't need to use those components.17:36
huseyini reverted back the config17:37
huseyinon the amphora images, is it expected to see a namespace on the private subnet?17:37
*** velizarx has quit IRC17:38
huseyini mean, amphora image has an IP address on the lb-mgmt-net, but nothing on the private subnet (tenant network)17:38
johnsomhuseyin Correct, the lb-mgmt-net is in the default namespace with the amphora-agent. The vip  and member networks are inside the amphora-haproxy namespace with haproxy. This isolates the tenant networks from the lb-mgmt-net.17:41
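A small sketch of checking that layout from Python, assuming it runs as root on the amphora itself; it is equivalent to the ip commands being discussed:

    import subprocess

    # The lb-mgmt-net address lives in the default namespace...
    print(subprocess.check_output(['ip', 'addr']).decode())
    # ...while the VIP and member interfaces should appear in amphora-haproxy.
    print(subprocess.check_output(['ip', 'netns', 'list']).decode())
    print(subprocess.check_output(
        ['ip', 'netns', 'exec', 'amphora-haproxy', 'ip', 'addr']).decode())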
huseyinthere is not any namespace in the amphora images17:41
huseyinwhat can be the cause?17:41
huseyin“sudo ip netns list” returns empty17:42
huseyinany clue?17:42
*** velizarx has joined #openstack-lbaas17:43
huseyinamphora images reach octavia-worker nodes17:44
huseyinand worker nodes can reach amphora images via lb-mgmt-net17:44
johnsomhuseyin And this amphora has a load balancer with a VIP?17:45
huseyinyes, sure17:45
huseyini can see the vip address on the dashboard17:45
huseyinunder the load balancer17:45
*** yamamoto has joined #openstack-lbaas17:46
*** kobis has joined #openstack-lbaas17:46
huseyinthis IP is in the private subnet17:46
johnsomummm, this is new to me, "sudo ip netns" should show the namespace.17:47
johnsomThis would only happen if the amphora driver is still no-op17:47
*** velizarx has quit IRC17:48
huseyini tried everything17:50
huseyinwhen i set the amphora driver to no-op load balancer is created successfully17:51
huseyinwhen i set the driver to haproxy rest, its provisioning status turns to error17:51
huseyinand at the backend several amphora images are built and deleted17:51
johnsomOh, so you are running network and compute with normal drivers, but amphora no-op. That makes sense. no-op won't actually configure the amphora instance. It doesn't call out to the VM.17:52
huseyini can see an amphora image in the build state, then in active state, and after a few seconds i can not see the same amphora image17:52
johnsomYeah, by default if we get a failure we clean up our resources.17:53
huseyinActually, at the moment amphora driver is set to haproxy rest but lb creation fails17:53
johnsomSo, when the LB goes into provisioning error, what do you see in the worker log on the controller? There should be an error message there17:53
huseyinlet me check again17:54
*** kobis has quit IRC17:55
*** salmankhan has quit IRC17:55
*** velizarx has joined #openstack-lbaas17:58
huseyinthe only thing is octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.17:58
*** kobis has joined #openstack-lbaas17:59
huseyinand an error message about dns integration17:59
*** jmccrory has quit IRC17:59
johnsomhuseyin ok, those Could not connect to instance. Retrying. will eventually become ERROR.  This means your worker cannot reach the amphora agent on the amphora VM using the lb-mgmt-net18:00
huseyini can ping the amphora image from the worker nodes18:00
huseyinand ping worker nodes from amphora images18:01
huseyinisn’t it enough?18:01
johnsomOk, get the IP address on the amphora lb-mgmt-net18:01
huseyin172.16.0.218:01
huseyinbut as i said before it changes priodically due to clean up process18:02
rm_workyou should get a long series of those Retrying messages18:03
huseyinit is now 172.16.0.618:03
rm_workbut eventually they will *stop*18:03
johnsomThen try "openssl s_client -connect 172.16.0.2:9443"18:03
rm_workthe error right when they stop is the part we are probably interested in18:03
rm_workthough probably it is just hitting the max timeout18:03
huseyinhmm, we have to wait a bit more18:04
johnsomYou should see some output like:18:04
johnsomhttps://www.irccloud.com/pastebin/1qfCfjkc/18:04
johnsomYou will have to ctrl-c out of that as it waits for data.18:04
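An equivalent reachability check using Python's standard library, for anyone without openssl handy; 172.16.0.2 is the example amphora lb-mgmt-net address from above, and the agent may still reject the session for lack of a client certificate, which would nevertheless prove the network path works:

    import socket
    import ssl

    addr = ('172.16.0.2', 9443)

    context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    try:
        with socket.create_connection(addr, timeout=5) as sock:
            with context.wrap_socket(sock):
                # Getting here means the agent port answered and spoke TLS.
                print('TLS handshake completed')
    except ssl.SSLError as exc:
        # A certificate complaint still proves the lb-mgmt-net path works.
        print('reached the agent but TLS failed:', exc)
    except OSError as exc:
        # Timeouts or connection refused point at the network path itself.
        print('could not reach the agent:', exc)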
huseyinok i will try18:04
*** jmccrory has joined #openstack-lbaas18:06
*** yamamoto has quit IRC18:12
*** velizarx has quit IRC18:13
*** sshank has joined #openstack-lbaas18:14
*** kobis1 has joined #openstack-lbaas18:15
*** kobis has quit IRC18:16
*** yamamoto has joined #openstack-lbaas18:21
openstackgerritMerged openstack/octavia-dashboard master: Add package-lock.json  https://review.openstack.org/55527018:24
*** yamamoto has quit IRC18:26
openstackgerritMichael Johnson proposed openstack/octavia-dashboard master: List children pools on LB details page  https://review.openstack.org/55130518:27
johnsomFYI, I am going to add release notes to a bunch of these dashboard patches as an example. So that will be all of these dashboard patches18:28
*** voelzmo has joined #openstack-lbaas18:33
openstackgerritMichael Johnson proposed openstack/octavia-dashboard master: Being able to edit default pool of listener  https://review.openstack.org/55143618:36
*** yamamoto has joined #openstack-lbaas18:36
rm_workk lol18:39
rm_worki'm starting to *hate* the pecan wsme model return code SO MUCH18:41
rm_workit is blocking me on two patches where i try to do things intelligently18:41
rm_workand the code is so dumb18:41
rm_worki may have to spend some time making a PR against wsme_pecan18:41
*** yamamoto has quit IRC18:41
*** sshank has quit IRC18:43
openstackgerritMichael Johnson proposed openstack/octavia-dashboard master: Add l7 support  https://review.openstack.org/55194718:44
*** ssmith has joined #openstack-lbaas18:45
johnsomHmm, this is to return full single-call creates?18:45
rm_workor i dunno, maybe trying to use wsme_pecan is just not right for me18:45
rm_workmaybe I should just be constructing the return value myself18:45
rm_workalso for usage18:45
rm_worki want to do this:18:45
johnsomPersonally I kind of like the types stuff we use, but it can be a bit constraining.  Which can also be good.18:46
johnsomCool stuff, the L7 support in dashboard works great.18:46
*** velizarx has joined #openstack-lbaas18:48
*** voelzmo has quit IRC18:48
rm_workhttp://paste.openstack.org/show/716351/18:49
rm_work^^ I want that18:49
rm_workI don't want to type a bunch of BS static code for no reason18:49
rm_workit constructs the object perfectly18:49
rm_workand then nulls the whole thing out inside the wsme_pecan return code, because it can't deal with dynamic *anything*18:49
rm_workbut honestly, the DB query already gave me a perfectly formatted json response18:50
rm_workso this is literally just converting from json, to model, so it can be converted back to json18:50
rm_workand it makes me want to die inside18:51
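For readers following along, a hedged illustration of the friction rm_work describes; the model and controller below are hypothetical, not Octavia's actual code. wsme_pecan requires the return value to match a declared type, while plain pecan would pass the already-JSON-shaped dict straight through.

    from pecan import expose
    from wsme import types as wtypes
    from wsmeext import pecan as wsme_pecan


    class UsageResponse(wtypes.Base):
        # Hypothetical response model; every field must be declared up front.
        load_balancers = wtypes.wsattr(int)
        listeners = wtypes.wsattr(int)


    class UsageController(object):

        @wsme_pecan.wsexpose(UsageResponse)
        def get_typed(self):
            row = {'load_balancers': 3, 'listeners': 7}  # already JSON-shaped
            # Must round-trip the dict through the declared model so wsme can
            # serialize it back to JSON; anything undeclared gets dropped.
            return UsageResponse(**row)

        @expose('json')
        def get_raw(self):
            # Plain pecan would happily return the dict as-is.
            return {'load_balancers': 3, 'listeners': 7}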
*** yamamoto has joined #openstack-lbaas18:52
johnsomrm_work So why not just not inherit from our cooked BaseType?18:53
rm_work?18:53
johnsomYou are inheriting from this: https://github.com/openstack/octavia/blob/master/octavia/api/common/types.py#L7718:53
*** AlexeyAbashkin has joined #openstack-lbaas18:54
rm_workoh from octavia.api.common import types18:54
johnsomyeah18:54
rm_worki don't think it's ours18:54
rm_workbut yeah sure i don't need to18:54
rm_worki mean i don't think our stuff is the problem18:54
rm_workthe place where it gets wrecked is inside the pecan code (i've traced it through)18:54
johnsomWouldn't that get you around the data model mapping stuffs?18:55
rm_workthat isn't the issue18:55
johnsomAh, ok, maybe I didn't understand the issue18:56
johnsomGoing to grab lunch18:56
*** yamamoto has quit IRC18:56
rm_workoh, but18:57
rm_workthey have a different thing called DynamicBase18:57
* rm_work looks18:57
rm_worknope didn't help18:57
*** AlexeyAbashkin has quit IRC18:59
*** velizarx has quit IRC19:01
*** kobis1 has quit IRC19:08
*** atoth has quit IRC19:10
*** kobis has joined #openstack-lbaas19:15
*** yamamoto has joined #openstack-lbaas19:17
*** yamamoto has quit IRC19:17
*** vishyj has quit IRC19:24
*** yamamoto has joined #openstack-lbaas19:28
*** yamamoto has quit IRC19:32
*** yamamoto has joined #openstack-lbaas19:43
*** yamamoto has quit IRC19:47
*** jniesz has joined #openstack-lbaas19:49
*** yamamoto has joined #openstack-lbaas19:57
*** yamamoto has quit IRC19:57
johnsom#startmeeting Octavia20:00
openstackMeeting started Wed Mar 28 20:00:02 2018 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.20:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:00
*** openstack changes topic to " (Meeting topic: Octavia)"20:00
openstackThe meeting name has been set to 'octavia'20:00
johnsomHi folks20:00
jnieszhi20:00
johnsom#topic Announcements20:00
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"20:00
johnsomWe have a new core reviewer Jacky Hu (dayou)!20:00
nmagnezio/20:00
xgerman_o/20:01
xgerman_Congrats!!20:01
jnieszcongrats20:01
nmagnezicongrats :-)20:01
johnsomYes, congratulations are in order. It is great to expand our core reviewer team20:01
johnsomOther than that, I don't have any more announcements. Anyone else?20:02
johnsomThe summit schedule is now published and includes all of the sessions. I think we have three.20:02
johnsomI plan to do an on-boarding session this time. If anyone else is attending and would like to help, let me  know.20:03
johnsom#topic Brief progress reports / bugs needing review20:03
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:03
rm_worko/20:03
rm_workyeah i can help20:04
johnsomI have been busy looking at gate issues, fixing bugs, reviewing patches, and trying to make some progress on my patches.20:04
johnsomWe saw a nasty little bug in health manager when nova goes out to lunch.  When we were deleting amps nova would acknowledge, we would mark it DELETED, then move on. However, in nova the amp was stuck in "deleting" and was still running. It would post a health heartbeat; we would see an amp that was supposed to be deleted, so we would delete it again.20:05
johnsomThe cycle continued....20:06
johnsomThis eventually pushes the process CPU usage up and then memory as the failover backlog grows.20:06
johnsomSo, anyway, unique situation. I pushed a patch to stop the cycle. I think there is another patch up that would help here too.20:07
johnsomToday I am working through our dashboard patch review backlog. Then back to tempest fun.20:07
johnsomAny other updates from folks?20:07
johnsom#topic Other OpenStack activities of note20:08
*** openstack changes topic to "Other OpenStack activities of note (Meeting topic: Octavia)"20:08
johnsom#link https://review.openstack.org/#/c/523973/20:08
johnsomKeystone is working on a common RBAC policies spec20:08
johnsomI have commented there once, but this might be of interest to others.20:09
johnsomIt's very similar to the current Octavia default RBAC config20:09
johnsom#topic Octavia deleted status vs. 40420:09
*** openstack changes topic to "Octavia deleted status vs. 404 (Meeting topic: Octavia)"20:09
johnsomThis was from last week.20:10
johnsomThe patch is still up for review. It looks like it needs a rebase.20:10
johnsomI had a quick look at the SDK and I think it will be ok.20:10
johnsomAny other updates here?20:10
johnsom#topic Open Discussion20:11
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"20:11
johnsomOk, either a quiet bunch or netsplit...  ha20:11
johnsomOther topics for this week?20:11
johnsomPlease remember to add release notes to your patches if they warrant one.20:12
xgerman_hi20:12
xgerman_still here — just. not much news20:12
nmagnezilikewise20:12
johnsomOk, then we can have a quick meeting and get back to work.20:13
nmagneziI'm still making progress in Rally, but no news just yet20:13
johnsomNice.20:13
nmagneziI wanted to raise something about storyboard in general but I want cgoncalves to also be around20:13
nmagneziso will wait for next week20:13
johnsomOk20:13
cgoncalvesI am actually20:14
nmagnezihaha20:14
nmagneziokay..20:14
nmagneziso20:14
*** kobis has quit IRC20:15
nmagneziwe kinda looked into it again in the past week, in the context of finding stories we would like to mark for backport potential20:15
johnsomrelease team recently moved over, requirements team is moving soon. FYI20:15
nmagnezibut since storyboard has no option (to the best of my knowledge) to mark priority / severity20:15
nmagnezithat.. is kind of a problem..20:15
nmagnezisorry, was a long day.20:16
johnsomYeah, the "storyboard" way of doing priority is to create a worklist or board and prioritize by sort order.20:16
nmagneziso the question / rant is: what do we think of the results of that migration? aren't we missing features we used to have in LP?20:16
cgoncalvesone of many things where storyboard falls short20:17
johnsomThat said, it's a pain so I haven't really done that yet.20:17
johnsomYeah, I kind of liked LP better myself. But....20:17
johnsomWe can give the storyboard team feedback.  You can open stories for them.  They have IRC meetings (it was earlier today).20:18
cgoncalvespeople are neglecting opening bugs in storyboard and linking that to commit msgs, me included20:18
johnsomAs for backport potential there are a couple of options:20:18
johnsom1. Create a worklist and add the stories there.20:19
johnsom2. Use the tags feature to add a "backport potential" tag to the story20:19
rm_worki mean... i find it takes me longer to deal with opening a story and linking it, than actually fixing the bugs, in many instances :P20:19
rm_workor maybe I just hate dealing with it for trivial stuff20:19
johnsom3. Ignore storyboard and mark it on the patch commit message.20:19
rm_workso I normally don't >_>20:19
johnsomYou can also have the worklist automatic and pull in all stories with the "backport" tag20:20
nmagnezijohnsom, the thing is (this might add more context): in Neutron we created a proactive backport process, in which we pull the list of high severity bugs from LP so humans can manually triage and backport20:21
johnsomIf it would be a benefit for you guys I can set that up20:21
nmagneziin storyboard.. it's more difficult to do that..20:21
xgerman_I would like some more robust backport processes, too20:21
xgerman_I thought last week we elected cgoncalves to be our stable team lead20:22
xgerman_so he should set that up20:22
johnsomYeah, I think the best option for a workflow like that is to use the tags. If we think it's high priority and should be backported, add the backport tag to the story. Then work against the worklist to do the backports.20:22
xgerman_+120:22
cgoncalvesxgerman_: storyboard does not provide the means to accurately retrieve relevant data20:22
nmagnezijohnsom, +1. worth a try. worst case we contact the storyboard team to add stuff if that does not work for us20:23
xgerman_+120:23
nmagnezicgoncalves, what do you think?20:24
johnsomYeah, it looks like other teams are doing it this way, there are already tags for release backports.20:24
cgoncalvesnmagnezi: tags would be a workaround20:24
nmagnezijohnsom, to me it looks like tags are kind of a workaround20:24
nmagnezihaha20:24
nmagneziyeah20:24
nmagneziLP had both tags and bug severity metadata20:25
johnsomWell, I don't disagree.  It's a short term solution until you get the storyboard team to add the feature you want.20:25
cgoncalvesjohnsom: ... or we could move back to LP :P20:25
johnsomSo, let me ask, are you advocating moving back to LP?20:25
cgoncalvesat this point in time I don't believe it would be well received/accepted by TC20:26
johnsomThat would be something you would need to bring up with the foundation.20:26
nmagneziwell, it worked well for us. if we can sort things out in storyboard that would be okay too20:27
johnsomThere are a couple of reasons they are moving everyone off launchpad20:27
cgoncalvesbut I am personally not happy with storyboard as I can't find a single feature that I prefer over LP20:27
nmagnezicgoncalves, +120:27
johnsomOne, it's not really supported and has some big bugs that people can't fix.20:27
johnsomTwo, they want to have other ways to authenticate people that isn't one vendor20:27
johnsomThree, people wanted a more "agile/scrum/kanban" like tool. (a former Octavia PTL was in that camp)20:28
johnsomSo, we as a team need to make a choice:20:29
johnsom1. We work with the storyboard team (and/or submit patches) to improve it in a way that works for us.20:29
johnsom2. We approach the foundation and ask to move back.20:30
xgerman_3. We use trello20:30
*** sapd__ has quit IRC20:30
cgoncalves99. We propose Bugzilla20:30
nmagnezilol20:30
* johnsom glares at xgerman.20:30
johnsomI am not a fan of trello20:30
*** sapd__ has joined #openstack-lbaas20:31
nmagnezijohnsom, I would say that 2 is probably less likely to actually happen. so for 1 let's maybe start something like an etherpad and write down everything we are missing from storyboard20:31
nmagneziwhen we finish we can communicate that to them..20:31
johnsomOk20:31
johnsomThere is a #storyboard IRC channel too BTW20:32
nmagneziIf we start stories for things we miss there may be a chance they will miss them (since the search engine of storyboard is sooooooooo lacking)20:32
johnsomYeah, and broken IMO.  I keep getting stories for other projects20:33
johnsomI opened a story on that....20:33
nmagnezi</rant>20:33
nmagnezijohnsom, how did they react to the story you submitted?20:33
johnsom#link https://storyboard.openstack.org/#!/story/200146820:34
cgoncalvesyou have to raise priority/severity on your stories. oh, wait.... xD20:34
xgerman_just put [URGENT] in the title20:34
johnsomPlease don't do this for octavia stories...20:35
cgoncalves[++URGENT]20:35
nmagnezixgerman_, feels like we moved back in time :D20:35
johnsom#link https://etherpad.openstack.org/p/storyboard-issues20:35
johnsomThere you go20:35
xgerman_some approaches are timeless20:35
johnsomWho wants to take the lead on this with the storyboard team?20:36
nmagneziI need to drop. will catch up with the log later guys20:36
* xgerman_ takes step back20:36
nmagnezijohnsom, I don't mind20:36
nmagnezi*but*20:36
* johnsom steps back20:36
nmagneziwe do need to start an etherpad20:36
johnsomI just did20:36
johnsom#link https://etherpad.openstack.org/p/storyboard-issues20:36
xgerman_we let nmagnezi drop and then assign him - problem solved20:36
cgoncalvesjohnsom: I can add a few issues to the list and ask other folks to do the same20:37
xgerman_+10020:37
johnsomAlso, do you want to try the backport tag? I will  volunteer to set that up20:37
cgoncalvesaren't tags a free-form text thing everyone could create?20:37
johnsomYes, but I meant setting up the work list based on the tag. (plus adding the tag as I review stories)20:38
johnsomWe also need to agree on the tag, something like backport-candidate?20:39
cgoncalvessure, why not :) thanks!20:39
cgoncalvesI fear, though, we will not have many stories to tag since people don't use storyboard that much20:40
johnsomOh, they do20:40
johnsomI think we got ~12 in the last week20:40
cgoncalvesok, I was not aware of that20:41
johnsomYou can "favorite" projects and setup e-mail notifications for stories like we had in LP20:42
johnsomI try to look through those and comment. In fact I fixed one from Alex that was reported recently about the log level issue20:42
cgoncalvesI thought I had set that. checking20:42
cgoncalvesI'm aware of that story. thanks for the patch!20:43
johnsomAlex seems perfectly happy to add stories...  Grin20:43
johnsomOk, any other items today?20:44
xgerman_#link http://tarballs.openstack.org/octavia/test-images/20:45
xgerman_our images are building20:45
johnsomAh, nice20:45
cgoncalvescentos image size is smaller than ubuntu's \o/20:46
johnsomlol20:46
xgerman_I haven’t tested those images (yet)20:46
johnsomOk, thanks folks!20:47
johnsom#endmeeting20:47
*** openstack changes topic to "Discussion of OpenStack Load Balancing (Octavia) | https://etherpad.openstack.org/p/octavia-priority-reviews"20:47
openstackMeeting ended Wed Mar 28 20:47:39 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:47
openstackMinutes:        http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-03-28-20.00.html20:47
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-03-28-20.00.txt20:47
openstackLog:            http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-03-28-20.00.log.html20:47
*** fnaval_ has quit IRC20:47
cgoncalvesdayou: you may be happy to see https://review.openstack.org/#/c/556888/20:48
*** fnaval has joined #openstack-lbaas20:52
*** yamamoto has joined #openstack-lbaas20:57
johnsomHere is the backport-candidate worklist: https://storyboard.openstack.org/#!/worklist/26820:58
huseyinmichael, here is the error log from worker node after exhausting retries https://justpaste.it/1iwd720:59
*** beisner is now known as beisner-afk20:59
johnsomhuseyin Yep, the amphora was not reachable via the lb-mgmt-net from that controller21:00
huseyinthere is not an active amphora image after the error21:00
huseyinwhat is the correct way to test connectivity?21:01
huseyinwhen an amphora image is fired up, i log in to that image and i can ping worker node21:01
huseyini can also ping amphora image from the workers21:02
*** velizarx has joined #openstack-lbaas21:02
huseyinduring these retries, amphora image id and IP address changes repeatedly21:02
johnsomhuseyin If you want the amp to stay after a failure you can set this setting in the config: https://docs.openstack.org/octavia/latest/configuration/configref.html#task_flow.disable_revert21:03
*** yamamoto has quit IRC21:03
johnsomIt disables the automatic clean up21:03
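The setting lives in the [task_flow] section of octavia.conf, per the configuration reference linked above:

    [task_flow]
    disable_revert = True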
huseyinok, and what about the connectivity?21:04
johnsomhuseyin The "openssl s_client -connect <ip>:9443" is the exact connection the controller makes to the amphora that is failing for you21:04
johnsomhuseyin I'm not understanding the part about the image id changing and the IP changing.21:04
huseyini check the images with “openstack server list” with admin credentials21:05
huseyinduring retries21:05
johnsomhuseyin is your health manager failing these over? Is there log entries in the health manager log?21:05
huseyinhere you are https://justpaste.it/1iwdj21:07
*** fnaval has quit IRC21:07
*** fnaval has joined #openstack-lbaas21:07
johnsomOk, that is probably confusing things a bit.  Until we get the worker fixed let's stop the health manager process.21:08
johnsomThere is actually a fix for that behavior (failing over an amp in PENDING_*) under review.  It's not the cause of the trouble, but it is making things confusing for sure21:09
*** fnaval_ has joined #openstack-lbaas21:09
huseyinwhat is the suggested way of implementing and testing lb-mgmt-net ?21:10
johnsomTesting is using the openssl command.  Implementing there are a bunch of ways to do it.21:10
huseyinlet me explain my way21:11
huseyinthe subnet that worker nodes are running is also a provider network on openstack21:11
johnsomIn devstack we create a port using OVS or linux bridge as seen here: https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L32821:11
*** fnaval has quit IRC21:12
johnsomMany deployments use a provider network for the lb-mgmt-net.21:12
huseyinand the lb-mgmt-net has a router on that provider network21:13
huseyinthis router's IP on the lb-mgmt-net is the default gw for the lb-mgmt-net21:14
huseyinso all amphora images use this router to access worker nodes21:14
*** velizarx has quit IRC21:15
johnsomok, yeah, that should work.  I assume the worker has a route through this router to the amphora?21:15
huseyinworker nodes are running on the same network as this router, and there is a route for the lb-mgmt-net with this router as the next hop21:15
huseyinso why they can not reach amphora image?21:15
johnsomIs there a security group getting in the way?  Can you tcpdump -nli eth0 and see the packets from the worker?21:16
huseyinis there any possible reason like the amphora-image itself that can cause the problem?21:17
johnsomare there errors in the /var/log/syslog from the amphora-agent of why it might reject the packet? maybe a bad certificate setup?21:17
huseyinok i can try with disable_revert21:17
johnsomYes, the amphora-agent must be running in the amp and listening on 944321:18
johnsomhuseyin Don't forget to stop your health-manager21:18
huseyinok, got it21:18
huseyinmichael, you are right21:34
huseyinsystemd[1]: amphora-agent.service: Main process exited, code=exited, status=1/FAILURE21:34
huseyinValueError: certfile "/etc/ssl/private/uyum.in.crt" does not exist21:34
johnsomHmm, now that is an odd error.21:35
huseyinbut disable_revert did not work21:36
huseyinit deletes the images after finishing retries21:36
*** sticker has joined #openstack-lbaas21:40
johnsomhuseyin So, I suspect the [certificates] section of your octavia.conf has some issue.21:45
huseyinnow i try to use auto generated certificates21:46
huseyini got the error Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')]21:46
johnsomThe "-----BEGIN CERTIFICATE-----" is missing from the pem file21:47
huseyini used auto generate script in the github21:47
johnsomMaybe you have DER certs?21:47
huseyinhttps://github.com/openstack/octavia/blob/master/bin/create_certificates.sh21:47
huseyinhttps://justpaste.it/1iwez21:49
johnsomhuseyin Here are the configs and example certs used in the gates: http://logs.openstack.org/89/548989/4/check/octavia-v1-dsvm-scenario/90dc6c6/logs/etc/octavia/21:49
johnsomhuseyin https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L29421:51
*** ssmith has quit IRC21:51
johnsomThat is the script that sets it up21:51
johnsomThat line and the config file settings below it21:51
-openstackstatus- NOTICE: the zuul web dashboard will experience a short downtime as we roll out some changes - no job execution should be affected21:52
huseyinthanks michael, i will try it21:53
*** harlowja has joined #openstack-lbaas21:53
*** huseyin has quit IRC21:54
*** yamamoto has joined #openstack-lbaas21:59
*** yamamoto has quit IRC22:04
johnsomCan one of the cores review/approve this? https://review.openstack.org/#/c/555965 Once it's merged I can propose the governance tag for it.22:12
*** aojea has joined #openstack-lbaas22:23
*** rcernin has joined #openstack-lbaas22:28
xgerman_done22:30
*** fnaval_ has quit IRC22:40
johnsomThanks22:52
*** yamamoto has joined #openstack-lbaas23:00
openstackgerritMichael Johnson proposed openstack/octavia-dashboard master: Add rbac support for octavia service apis  https://review.openstack.org/55031923:05
*** yamamoto has quit IRC23:06
*** beisner-afk is now known as beisner23:14
*** aojea has quit IRC23:33
openstackgerritMerged openstack/octavia master: add lower-constraints job  https://review.openstack.org/55596523:48
openstackgerritAdam Harwell proposed openstack/octavia master: WIP: Add usage admin resource  https://review.openstack.org/55754823:52
-openstackstatus- NOTICE: Zuul has been restarted to update to the latest code; existing changes have been re-enqueued, you may need to recheck changes uploaded in the past 10 minutes23:58
