Thursday, 2018-06-14

*** yamamoto has quit IRC00:00
*** ktibi_ has quit IRC00:02
*** SumitNaiksatam has quit IRC00:03
*** blake has quit IRC00:22
*** longkb has joined #openstack-lbaas00:23
*** Swami has quit IRC00:25
*** Swami_ has quit IRC00:25
*** annp has joined #openstack-lbaas00:48
*** blake has joined #openstack-lbaas00:49
*** blake has quit IRC00:52
*** yamamoto has joined #openstack-lbaas00:56
*** yamamoto has quit IRC01:02
*** fnaval has joined #openstack-lbaas01:16
*** yamamoto has joined #openstack-lbaas01:58
*** hongbin has joined #openstack-lbaas02:01
*** yamamoto has quit IRC02:04
*** yamamoto has joined #openstack-lbaas03:00
*** yamamoto has quit IRC03:05
*** SumitNaiksatam has joined #openstack-lbaas03:13
*** yamamoto has joined #openstack-lbaas03:38
*** longkb has quit IRC03:55
*** longkb has joined #openstack-lbaas03:55
*** annp has quit IRC04:00
*** annp has joined #openstack-lbaas04:00
*** hongbin has quit IRC04:21
*** threestrands has quit IRC04:47
*** links has joined #openstack-lbaas05:00
*** pcaruana has quit IRC05:09
*** nmanos has joined #openstack-lbaas05:25
*** fnaval has quit IRC05:50
*** nmanos has quit IRC06:06
*** pcaruana has joined #openstack-lbaas06:17
*** kobis has joined #openstack-lbaas06:18
*** kobis has quit IRC06:19
*** AlexeyAbashkin has joined #openstack-lbaas06:20
*** kobis has joined #openstack-lbaas06:20
*** threestrands has joined #openstack-lbaas06:27
*** AlexeyAbashkin has quit IRC06:33
*** AlexeyAbashkin has joined #openstack-lbaas06:59
*** tesseract has joined #openstack-lbaas07:16
*** nmanos has joined #openstack-lbaas07:19
*** kobis has quit IRC07:22
*** kobis has joined #openstack-lbaas07:27
*** rcernin has quit IRC07:27
*** sapd has quit IRC07:27
*** sapd has joined #openstack-lbaas07:27
cgoncalvesrm_work, I don't know (ansible RH person). the impression I have is that the ansible community is pretty quick with reviews so if they are not, ping me later :)08:02
*** ramishra has joined #openstack-lbaas08:19
*** openstackgerrit has joined #openstack-lbaas08:26
openstackgerritMerged openstack/octavia master: Improve the error logging for zombie amphora  https://review.openstack.org/56136908:26
ramishraHi All, Any idea why we see these errors intermittently with octavia? http://logs.openstack.org/61/502961/13/check/heat-functional-orig-mysql-lbaasv2/84df284/logs/screen-o-hm.txt.gz#_Jun_14_07_25_34_34644308:28
*** ianychoi has quit IRC08:39
openstackgerritCarlos Goncalves proposed openstack/octavia master: Add grenade support  https://review.openstack.org/54965408:43
cgoncalvesramishra, hi. https://review.openstack.org/#/c/561369/ may fix the error you're seeing. it was merged a few minutes ago. worth rechecking your patch08:48
ramishracgoncalves: thanks! Will recheck08:51
*** AlexeyAbashkin has quit IRC08:53
ramishracgoncalves: hopefully it's related. But we see these kinds of errors from time to time and I don't see any way to dig deeper for details... maybe some db transaction issues?08:55
cgoncalvesramishra, let's hope it fixes :)08:57
cgoncalvesramishra, dunno, I'd need to check it a bit more08:57
ramishracgoncalves: OK, thanks!09:02
*** AlexeyAbashkin has joined #openstack-lbaas09:04
*** salmankhan has joined #openstack-lbaas09:07
*** kobis has quit IRC09:28
*** links has quit IRC09:33
*** links has joined #openstack-lbaas09:33
*** threestrands has quit IRC09:54
*** AlexeyAbashkin has quit IRC10:03
*** kobis has joined #openstack-lbaas10:07
*** kobis has quit IRC10:07
*** salmankhan has quit IRC10:07
*** kobis has joined #openstack-lbaas10:08
*** salmankhan has joined #openstack-lbaas10:10
*** salmankhan has quit IRC10:13
*** salmankhan has joined #openstack-lbaas10:13
*** annp has quit IRC11:02
*** yamamoto has quit IRC11:02
*** AlexeyAbashkin has joined #openstack-lbaas11:03
*** yamamoto has joined #openstack-lbaas11:44
*** yamamoto has quit IRC11:48
*** atoth has joined #openstack-lbaas12:04
*** yamamoto has joined #openstack-lbaas12:06
*** yamamoto has quit IRC12:10
*** amuller has joined #openstack-lbaas12:24
*** longkb has quit IRC12:26
*** fnaval has joined #openstack-lbaas12:32
*** yamamoto has joined #openstack-lbaas12:40
*** yamamoto has quit IRC12:45
*** ianychoi has joined #openstack-lbaas12:46
*** AlexeyAbashkin has quit IRC12:56
*** AlexeyAbashkin has joined #openstack-lbaas13:04
*** yamamoto has joined #openstack-lbaas13:22
*** yamamoto has quit IRC13:27
*** kobis has quit IRC13:28
*** salmankhan1 has joined #openstack-lbaas13:29
*** salmankhan has quit IRC13:30
*** salmankhan1 is now known as salmankhan13:30
*** yamamoto has joined #openstack-lbaas13:38
*** yamamoto has quit IRC13:42
*** AlexeyAbashkin has quit IRC13:43
*** kobis has joined #openstack-lbaas13:44
*** AlexeyAbashkin has joined #openstack-lbaas13:46
*** nmanos has quit IRC13:56
*** yamamoto has joined #openstack-lbaas14:08
*** yamamoto has quit IRC14:12
*** links has quit IRC14:15
*** kobis has quit IRC14:16
*** yamamoto has joined #openstack-lbaas14:20
*** yamamoto has quit IRC14:20
*** kobis has joined #openstack-lbaas14:30
*** AlexeyAbashkin has quit IRC14:32
*** ramishra has quit IRC14:37
*** Swami has joined #openstack-lbaas14:54
*** kobis has quit IRC15:12
*** salmankhan has quit IRC15:18
*** yamamoto has joined #openstack-lbaas15:20
*** pcaruana has quit IRC15:24
*** yamamoto has quit IRC15:25
cgoncalvesjohnsom, hi. https://review.openstack.org/#/c/554004/ requires a story, no?15:56
johnsomcgoncalves Yes, it should15:57
johnsomThat stuff is for German's proxy plugin15:57
xgerman_yeah, why is that not merged ;-)15:58
cgoncalvesI know. I'm about to -1 it because of missing story15:58
cgoncalvesit was required for https://review.openstack.org/#/c/568361/ so we should be consistent15:58
johnsomGo for it15:58
xgerman_or write the story and add it to the patch — be part of the solution15:59
cgoncalvesxgerman_, you may have a good looking traceback worth attaching to the story :P15:59
johnsomYeah, I have to say that patch is *super* low on my personal priority list right now16:00
cgoncalvesjohnsom, the grenade patch passed CI with your "Improve the error logging for zombie amphora" patch in and without Adam's16:14
johnsomYep, good.  Now that zombie merged we should be able to merge grenade!16:15
*** yamamoto has joined #openstack-lbaas16:22
*** yamamoto has quit IRC16:27
*** Swami has quit IRC16:35
*** SumitNaiksatam has quit IRC16:36
*** kobis has joined #openstack-lbaas16:43
*** kobis has quit IRC16:45
*** SumitNaiksatam has joined #openstack-lbaas17:02
rm_workjohnsom: well, it looks like they fixed whatever it was upstream in centos17:18
rm_workjohnsom: as well as the ridiculous cloud-init bug17:18
rm_workso17:18
* rm_work shrugs17:18
johnsomNice17:18
rm_worktwo less patches for me17:19
*** yamamoto has joined #openstack-lbaas17:23
*** yamamoto has quit IRC17:29
*** tesseract has quit IRC17:30
*** pcaruana has joined #openstack-lbaas17:43
johnsomrm_work Can you abandon that patch then?17:51
*** salmankhan has joined #openstack-lbaas17:51
rm_workI guess so17:53
rm_workit irks you a lot, doesn't it :P17:53
johnsomWell, plus I would like to reduce the ghost patches laying around.17:55
*** salmankhan has quit IRC17:56
openstackgerritAdam Harwell proposed openstack/octavia master: Experimental multi-az support  https://review.openstack.org/55896218:00
openstackgerritAdam Harwell proposed openstack/octavia master: WIP: AZ Evacuation resource  https://review.openstack.org/55987318:00
*** pcaruana has quit IRC18:01
openstackgerritMerged openstack/octavia master: Add grenade support  https://review.openstack.org/54965418:06
johnsomWahoo!18:06
*** pcaruana has joined #openstack-lbaas18:06
johnsomrm_work Is this one dead now too? https://review.openstack.org/#/c/572702/18:07
* johnsom is building the priority review list so looking through stuff18:07
*** kobis has joined #openstack-lbaas18:08
rm_workehhhhh18:08
rm_worklet me see18:08
johnsomMy zombie version merged18:08
openstackgerritAdam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/43561218:09
rm_workyeah18:10
rm_worki'm trying to make sure18:11
*** kobis has quit IRC18:13
rm_workso the only thing is that mine tries a little harder than yours18:15
rm_workbut probably it shouldn't (?) be possible for it to matter18:15
rm_workso, sure18:16
cgoncalvesjohnsom, I take it that we'll deprecate the 'octavia' provider alias at some point, so a follow-up patch for grenade would be to create an LB with provider=octavia, update the DB and verify18:19
johnsomYes, it will go away, but I don't see a need to rush.18:20
cgoncalvessure18:21
johnsomI think the next step is to make grenade voting, then maybe work on testing that the data plane (amps) don't go down. I think that is the next step on the upgrade tag ladder18:22
cgoncalvesyes for making it voting18:23
cgoncalvesaren't we testing that data plane is up during upgrade? I think we are18:23
cgoncalvesthat's the verify_noapi function18:23
johnsomThat would be great if we are18:23
cgoncalveswe curl the vip18:23
johnsomOk.  I haven't looked at the patch in a while18:24
johnsomSo cool18:24
johnsomSo I guess the next step is getting the multi-node stuff going again and work on rolling upgrades18:24
cgoncalvesspreading out o-* services across multiple nodes?18:26
johnsomYep18:26
cgoncalvesif so, we'd need to change a few things in devstack plugin right? to make sure o-worker can reach amps without relying on o-hm018:27
cgoncalvesit should have its own interface or whatever18:27
johnsomWouldn't we have an o-hm0 on each control plane instance?18:27
openstackgerritAdam Harwell proposed openstack/neutron-lbaas master: Add gate for Direct L7-Proxy to Octavia  https://review.openstack.org/56104918:30
rm_workI always link o-hm to the dataplane <_<18:30
johnsombefore charging the flux capacitor18:31
rm_work^_^18:31
cgoncalvesjohnsom, the name 'o-hm0' suggests that it's more for the o-hm service18:31
rm_workah you're talking about the interface18:31
rm_worknot the service18:31
cgoncalvesso in case of composable deployments where o-hm is not running on a given (set of) controller nodes.....18:31
johnsomYeah, true, but, it's just a devstack hack, so, does it matter?18:32
cgoncalvesdepends on how multi-node is set up I guess18:32
*** salmankhan has joined #openstack-lbaas18:32
johnsomNot sure if we can do fake VLANs in multi-node to support adding a provider network18:33
*** yamamoto has joined #openstack-lbaas18:34
cgoncalvesI'm not sure either18:34
*** yamamoto has quit IRC18:39
*** yamamoto has joined #openstack-lbaas18:49
*** kobis has joined #openstack-lbaas18:53
*** yamamoto has quit IRC18:53
johnsomOk folks, I have updated the priority patch review list for Rocky: https://etherpad.openstack.org/p/octavia-priority-reviews18:53
johnsomThe link is also in the topic for this channel18:54
johnsomIf I missed something you think is a priority feel free to add it to the Awaiting section and ping me18:54
johnsomThese are things we think are a priority to get into Rocky18:55
rm_workxgerman_: https://review.openstack.org/#/c/573470/18:56
rm_workor nmagnezi18:56
*** yamamoto has joined #openstack-lbaas18:58
rm_workthx18:58
*** kobis has quit IRC19:06
*** amuller has quit IRC19:07
*** yamamoto has quit IRC19:07
*** yamamoto has joined #openstack-lbaas19:16
*** yamamoto has quit IRC19:16
*** aojea_ has joined #openstack-lbaas19:41
*** kobis has joined #openstack-lbaas19:51
openstackgerritAdam Harwell proposed openstack/octavia master: Gather fail/pass after executor is done  https://review.openstack.org/47772019:54
*** yamamoto has joined #openstack-lbaas20:16
*** yamamoto has quit IRC20:22
openstackgerritMerged openstack/octavia-tempest-plugin master: Spare amps have no role  https://review.openstack.org/57347020:35
*** kobis has quit IRC20:35
eanderssonAnyone happen to know how the policy worked for neutron-lbaas?20:36
eanderssonIf I want to allow a role to list all load balancers20:36
eanderssonfrom all projects20:36
johnsomUmmm, by default it's "owner or admin", but you can override that20:36
eanderssonI just can't find what that key is called in the policy20:36
johnsomSure, just a minute, I will find it20:37
eanderssonoh20:37
eanderssonhttps://github.com/openstack/neutron-lbaas/blob/master/etc/neutron/policy.json.sample20:37
eanderssonI am silly20:37
eanderssonalthough not sure how one of these would control that20:38
johnsomYou know if you were running Octavia: https://docs.openstack.org/octavia/latest/configuration/policy.html GRIN20:38
eanderssonhaha20:38
eanderssonWe are moving towards it20:39
eanderssonbut even if we deployed it today, it would take time to migrate over :p20:39
johnsomDid you need help defining the rule?20:39
eanderssonYea - if you have a moment20:39
johnsomSure, what is it you want to do?20:39
johnsomA global list all lbs?20:40
eanderssonSo basically we have a readonly user that we want to be able to list all load balancers (across all projects)20:40
eanderssonYea20:40
johnsomso the rule would be "rule:admin_or_owner or role:<audit role name>" where <audit role name> is the role you setup for this and assigned to the user20:41
johnsomLet me find the root policies for neutron, that file looks old and incomplete20:41
eanderssonhttps://github.com/openstack/neutron/blob/master/etc/policy.json20:42
johnsomHmmm, yeah, it must be "get_loadbalancer" though I am rusty on the neutron-lbaas side20:46
johnsomYeah, their granularity is only at the HTTP method, so GET loadbalancer and GET loadbalancers is the same RBAC rule20:55
eanderssonYea - but looks like it's "" by default, which means no restrictions21:01
johnsomNo, it falls back to the "default" rule21:04
eanderssonAh21:04
johnsomWhich is owner_or_admin21:04
johnsomhttps://github.com/openstack/neutron/blob/master/etc/policy.json#L1521:04
eanderssonIs that a neutron special? Because according to oslo.policy, "" means always21:08
eanderssonhttps://docs.openstack.org/oslo.policy/latest/admin/policy-json-file.html21:08
eanderssonAh I see - default is what is used if the rule is not defined, but "" means always.21:10
johnsomThere you go, ok21:11
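For reference, the override johnsom describes is a one-line policy.json change. A minimal sketch, assuming a hypothetical audit role named "lb_audit" assigned to the read-only user (per johnsom, "get_loadbalancer" covers both the single GET and the list, since neutron-lbaas only distinguishes rules per HTTP method):

    # /etc/neutron/policy.json fragment -- "lb_audit" is a placeholder role name
    "get_loadbalancer": "rule:admin_or_owner or role:lb_audit",

Note, as the rest of the discussion turns up, neutron's DB layer additionally scopes list queries on is_admin/is_advsvc, so the policy override alone may not be enough to list other projects' load balancers.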
*** SumitNaiksatam has quit IRC21:12
eanderssontl;dr octavia>lbaas :p21:13
johnsomWe think so21:14
*** yamamoto has joined #openstack-lbaas21:19
*** yamamoto has quit IRC21:24
rm_workwhy is octavia-grenade just going to ERROR on all of the currently running jobs21:37
rm_work?21:37
rm_worklook at zuul status21:37
johnsomHmm, that is odd.  Well, we have to wait for the run to finish to see the zuul error21:38
johnsomLooks like the experimental AZ support patch will be the first to finish21:40
openstackgerritAdam Harwell proposed openstack/octavia master: Ignore a port not found when deleting an LB  https://review.openstack.org/56484821:46
eanderssonjohnsom, figured it out22:00
eandersson> is_advsvc22:00
eanderssonThis is the flag that determines if you can get all lbs or not22:00
johnsomhmmm22:01
eanderssonactually is_admin or is_advsvc22:04
johnsomYeah, just found it in the neutron code.  very odd....22:05
johnsomhttps://github.com/openstack/neutron/blob/master/neutron/db/_utils.py#L7522:07
eanderssonYep22:08
eanderssonWas just looking at the same code22:08
johnsomI think neutron is in need of an RBAC overhaul22:09
*** aojea_ has quit IRC22:11
eanderssonYep22:16
eanderssonThanks for the help johnsom22:17
eanderssonAnother selling point for Octavia :p22:17
johnsom+1 NP22:18
*** rcernin has joined #openstack-lbaas22:20
*** yamamoto has joined #openstack-lbaas22:20
*** yamamoto has quit IRC22:25
mnaserso22:29
mnaserreasons why http requests will randomly give 503 service unavailable?22:30
mnasereven though backends are responding with no problems?22:30
mnaserand it would flap between ok and not ok22:30
johnsomUmmm22:30
johnsom503 usually means the pool doesn't have any healthy members.22:31
*** fnaval has quit IRC22:33
mnaserjohnsom: ever seen a case where maybe two processes are fighting for the port or a bad reload maybe22:33
mnaserhttp://paste.openstack.org/show/723513/22:33
mnasersee how it's flapping22:33
mnasernow if i show http traffic back and forth from the backends22:34
johnsomWhat do you get if you do repeated member list?22:34
mnaserjohnsom: let me check but for whats its worth you see traffic is ok here - http://paste.openstack.org/show/723514/22:35
mnasertwo backends are a-ok22:35
johnsomOk, then the next step is to grab the haproxy log from the amp22:37
johnsomIt will tell us exactly the issue.22:37
mnaseri think i'll have to finally accept my fate and add keys22:37
johnsomAnd yes, we know we need to add the log forwarding stuff22:37
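Until log forwarding lands, the manual check looks roughly like the following sketch; it assumes SSH was left enabled on the amp image, the key path and management IP are placeholders, "ubuntu" is the default login on Ubuntu-based amps, and "amphora-haproxy" is the namespace haproxy runs in:

    # hop onto the amphora over the lb-mgmt-net (placeholder key path and IP)
    ssh -i /etc/octavia/.ssh/octavia_ssh_key ubuntu@<amp-lb-mgmt-ip>
    # on the amp: confirm the listener is actually bound inside its namespace
    sudo ip netns exec amphora-haproxy ss -ltnp
    # and pull the log johnsom asks for
    sudo tail -n 100 /var/log/haproxy.log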
mnaseris there a way to inject keys into a running amphora by any chance22:38
mnaserusing the api22:38
johnsomNot our API, but there is nova recovery/rescue mode22:38
mnaserwont shutting down the vm into rescue failover?22:39
rm_workmnaser: so you don't build your amps with the ssh-disabled flag? :P22:39
johnsomNo, it should reboot fast enough that it won't trigger a full failover22:39
mnaseri actually build them with ssh disabled with the idea that i should never touch them, but i guess life ain't easy22:39
rm_workthat's the hardcore way to go ^_^22:39
johnsomIf they are using TLS offload it will though22:39
johnsomYeah, we have more work to do for admin stuff before running with no keys is ok22:40
rm_workmnaser: you can go to the health table and flip the busy bit22:44
rm_workmnaser: for just that amp, it will protect it while you debug22:44
rm_workwhen you're done, flip it back22:44
mnaseroh that's an idea22:44
rm_workthat's what I do22:44
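A sketch of the poke rm_work describes, assuming the stock schema where the health manager tracks amps in the octavia.amphora_health table with a boolean busy column:

    # mark the amp busy so the health manager leaves it alone while debugging
    mysql octavia -e "UPDATE amphora_health SET busy = 1 WHERE amphora_id = '<amp-uuid>';"
    # flip it back when done
    mysql octavia -e "UPDATE amphora_health SET busy = 0 WHERE amphora_id = '<amp-uuid>';"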
johnsomrm_work FYI, infra says it was a zuul race condition with merging the grenade gate and those jobs. They say just recheck.22:44
rm_workok22:44
johnsomYeah, if you are using the default settings, as long as your cloud can reboot instances inside 60 seconds, the amp will come back up. However, if they are using TLS offload, it will failover as the cert(s) and key will be gone.22:46
mnaseryeah but the amp will go back up in rescue mode which won't have any keys or anything22:47
mnaseri might just rebuild em22:47
xgerman_mnaser: I always run with no keys22:47
rm_workit won't go into normal run mode22:47
rm_workcorrect22:47
mnaserxgerman_: how do you troubleshoot issues like this22:47
xgerman_I don't have issues like that — but plenty of others22:48
xgerman_never had to log onto a box… all my problems are control plane22:48
mnaserlooping over member list22:49
mnasershows them constantly online22:49
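(The loop here is just something along the lines of the following; the pool ID is a placeholder:)

    # poll member operating_status every few seconds
    watch -n 5 openstack loadbalancer member list <pool-id>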
mnaseri think i might add keys22:50
mnaserseems to be a common issue22:50
johnsomYeah, let's look in the logs and see what is up.22:51
mnaseri think i'll add keys22:52
mnaserand fail it over22:52
mnasereasier and less messy22:52
johnsomyep22:52
mnasercan you effectively run multiple load balancers in a single amphora22:55
mnasersomething like.. multiple listeners with each their own pool?22:55
rm_workno22:55
johnsomWait...22:55
mnaseri see22:55
rm_workoh, that yes22:55
rm_workmultiple LBs no22:55
rm_workmultiple listeners totally do go in one amp22:55
rm_workbut that's != multiple LBs22:55
rm_workmight be a terminology issue22:55
mnasermultiple listeners across multiple ips though is a no go right?22:55
johnsomNo to multiple LBs per amp. So only one VIP per amp.22:55
rm_workhttps://docs.openstack.org/octavia/pike/reference/glossary.html22:56
mnaseryeah one vip at the end of teh day22:56
mnaserso only one port 44322:56
rm_workyes22:56
johnsomYou can have as many listeners (ports) etc. as you want22:56
*** threestrands has joined #openstack-lbaas22:56
*** threestrands has quit IRC22:56
*** threestrands has joined #openstack-lbaas22:56
rm_work"Object describing a logical grouping of listeners on one or more VIPs" that's an interesting definition22:56
*** threestrands has quit IRC22:57
johnsomAn Amp will have one VIP, but will have zero or more listeners (ports)22:57
*** threestrands has joined #openstack-lbaas22:57
*** threestrands has quit IRC22:57
*** threestrands has joined #openstack-lbaas22:57
johnsomlisteners can have multiple pools for L7 policies22:57
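In CLI terms, the shape described above is one load balancer (one VIP, one amp or active/standby pair) carrying several listeners, each with its own pool; the names and subnet ID below are placeholders:

    openstack loadbalancer create --name lb1 --vip-subnet-id <subnet-id>
    openstack loadbalancer listener create --name http-li --protocol HTTP --protocol-port 80 lb1
    openstack loadbalancer listener create --name https-li --protocol HTTPS --protocol-port 443 lb1
    openstack loadbalancer pool create --name http-pool --listener http-li --protocol HTTP --lb-algorithm ROUND_ROBIN
    # a second VIP would mean a second load balancer, and therefore its own amphora(e)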
mnaserok cool, time to get some amphoras rebuilt i guess23:02
mnaserdo i just delete the idle amphoras?23:02
mnaserand then failover23:02
mnaseronce the new ones get started23:02
johnsomYou should have a failover API/command23:03
mnaserbut before failing over i want it to not fail over to the amphoras that are already booted with no keys23:03
mnaser(i have a few on stand by)23:03
johnsomI mean deleting the amp will trigger one (please only delete one out of an active/standby pair at a time!)23:03
rm_workmnaser: what release are you on23:04
mnaserqueens23:04
rm_workok so23:04
rm_worki think queens had amp failover23:04
rm_workuse the amphora failover api to fail the spares23:04
rm_worki THINK we backported my fix for that?23:04
* rm_work checks23:04
johnsomhttps://developer.openstack.org/api-ref/load-balancer/v2/index.html#failover-amphora23:04
rm_workcrap maybe not23:04
mnasersetting admin port to down has worked in the past23:05
mnaserjust takes a little while23:05
johnsomYes23:05
rm_workhttps://review.openstack.org/#/c/564082/23:05
rm_workwithout that, he can't use it on spares23:05
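For reference, the call behind the doc johnsom linked is a single admin-only request; the endpoint, port and amphora ID below are placeholders, and the path follows the amphora API under /v2.0/octavia/:

    TOKEN=$(openstack token issue -f value -c id)
    curl -X PUT -H "X-Auth-Token: $TOKEN" \
        "http://<octavia-api-host>:9876/v2.0/octavia/amphorae/<amphora-id>/failover"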
rm_workmnaser: you can just go to the health table and change "updated time" to 023:05
rm_work;)23:05
mnasernaughty naughty ideas23:05
rm_workthat's how i make stuff failover fast, lol23:05
rm_workI am a dirty little db-monkey23:06
rm_worki have a constant DB connection up for all my DCs <_<23:06
johnsomlol23:06
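The inverse of the busy-bit poke above, under the same amphora_health schema assumption: "change updated time to 0" translates to backdating last_update so the health manager declares the amp stale on its next sweep (use with care):

    # make the amp look dead immediately; the next health sweep kicks off a failover
    mysql octavia -e "UPDATE amphora_health SET last_update = '1970-01-01 00:00:01' WHERE amphora_id = '<amp-uuid>';"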
rm_workThis is GREAT on OSX btw: https://www.sequelpro.com/23:07
rm_workone tab per DB :)23:07
rm_worknicely color coded23:07
rm_worksupports connections through an ssh bastion23:07
rm_workreconnects well23:07
johnsomNice23:07
mnaserok failed over all 3 standby (by standby i mean preprovisioned? not sure what is the best term)23:09
johnsomI used to use dbvisualizer at HP, but I don't have a license (or a need really) anymore23:09
johnsomSpares23:09
johnsomIt's your spare pool23:09
mnaseroh fyi23:10
mnaserhttps://wiki.openstack.org/wiki/OpenStack_health_tracker23:10
mnaserso let me know if you need anything :)23:10
johnsomGrin, you two signed up for us because you know we are low maintenance didn't you....23:11
johnsomLittle did you know the trouble we cause....23:12
mnasernah, i picked the teams i work with often so it's pretty easy to keep in touch with whats going on :p23:12
johnsomSo let's talk about the diversity tag.....23:12
mnaserjohnsom: little did you know that i have some nested virt running very very well and hoping to put that in your hands soon23:12
johnsomNice, yeah, both limestone and OVH have it, but something is breaking it. We've been talking with both to see if we can debug, but not moving very quickly23:13
johnsomMaybe when the nodepool images go 18.04 the kernel will have a patch23:13
mnaser16.04, fedora and centos have been stable in the case of this user23:14
mnaser17.10 has been a mess23:14
johnsomYeah, they crash booting up cirros even, it's not our amp image23:14
mnaserok23:17
mnaseri got new ones with keys23:17
mnasertime to fail over the amphoras that have stuff running on them23:17
johnsomWhich health monitor type is the LB using?23:18
mnaserhttp23:18
rm_workLOL mugsie is also our liaison :P23:18
johnsomRight that is why I laughed too23:19
*** yamamoto has joined #openstack-lbaas23:21
mnaserso23:22
mnaseras i totally expected23:22
mnaserafter failover23:22
mnaserit works fine.23:22
rm_work:P23:22
mnaserthe customer seems to have been able to replicate it23:22
rm_workof course it does23:22
johnsomUmmm?????23:22
mnaserso at least i have ssh access the next time it happens23:22
johnsomI am puzzled on that one.  Well, grab the /var/log/haproxy.log for us if you see another one.23:24
johnsomI have seen that flapping before, but it's always been the backend app or server had issues23:25
*** yamamoto has quit IRC23:26
rm_workanyone know if RHEL8/CENT8 is going to support live kernel patching?23:29
rm_workcgoncalves / nmagnezi? :P23:29
johnsomVoodoo I tell you23:41
rm_worklol23:43
