Thursday, 2017-08-17

00:04 <johnsom> rm_work, xgerman_: Can you guys poke the RC2 backports? https://review.openstack.org/494332 and https://review.openstack.org/494331
00:04 <johnsom> That way I can cut RC2 tomorrow morning
00:05 <rm_work> +2'd
00:05 <johnsom> Thanks
07:20 <openstackgerrit> Ji Chengke proposed openstack/octavia master: Change 14.04 to 16.04 in devstack setup guide  https://review.openstack.org/494407
12:26 <saphi> Hi everyone, I'm installing neutron-lbaas-dashboard from the stable/pike branch, but after restarting apache2 I can't access horizon; the apache error log shows: NotImplementedError: The DisableLoadBalancer BatchAction class must have both action_past and action_present methods
12:37 <amotoki> followup comment: it must be a bug in neutron-lbaas-dashboard
12:37 <amotoki> horizon pike dropped the action_past and action_present *attributes* (which had been deprecated for a long time)
12:38 <amotoki> the change happened early in Pike
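For context on the error amotoki describes: horizon now calls `action_present` and `action_past` as methods taking a count, and raises NotImplementedError when a BatchAction subclass still defines them as plain string attributes. A minimal sketch of the method-based pattern, using a hypothetical `BatchActionBase` stand-in rather than horizon's real `tables.BatchAction` (and plain strings instead of django's `ungettext_lazy`):

```python
class BatchActionBase:
    """Hypothetical stand-in for horizon.tables.BatchAction."""

    def _conjugate(self, count, past=False):
        # Horizon pike invokes these as methods; string attributes
        # with these names now trigger NotImplementedError.
        return self.action_past(count) if past else self.action_present(count)


class DisableLoadBalancer(BatchActionBase):
    @staticmethod
    def action_present(count):
        # Real dashboard code would return ungettext_lazy() values here.
        return "Disable Load Balancer" if count == 1 else "Disable Load Balancers"

    @staticmethod
    def action_past(count):
        return "Disabled Load Balancer" if count == 1 else "Disabled Load Balancers"
```

The fix amotoki volunteers for neutron-lbaas-dashboard would amount to converting each action class's attributes into methods of this shape.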
13:33 <johnsom> Check that you are following the installation instructions exactly and not loading both files.
13:34 <johnsom> Dashboard was working fine a week or two ago
14:32 <amotoki> johnsom: I am a bit surprised the dashboard worked well, as horizon dropped action_present/past in Pike-2 https://github.com/openstack/horizon/commit/ad642346e0fefa285b0a471bbd5fa9d2779477b0
14:32 <amotoki> if nobody works on it, I can send a fix tomorrow
15:02 <johnsom> amotoki: that would be appreciated
22:04 <rm_work> johnsom: how is the keepalived service started?
22:20 <rm_work> johnsom: hey, uhh... I don't know if we're restarting the keepalived process correctly when it gets sent a new config
22:20 <rm_work> it seems we *always* just do: upload_vrrp_config, manage_service_vrrp(start)
22:20 <rm_work> never reload
22:21 <rm_work> so like, in this case i am testing by failing the *backup*, but what happens is: backup goes down, healthmanager recycles it, sends a new vrrp config to the old one, runs "start"
22:21 <rm_work> it should be reload, right?
22:24 <rm_work> the `reload_vrrp_service` function in `vrrp_rest_driver.py` is never used T_T
22:24 <rm_work> only `start_vrrp_service`
22:24 <rm_work> there is no driver task for it, only start/stop
22:26 <rm_work> ok so yeah this is broken
22:26 <johnsom> Looking
22:26 <rm_work> but the question is, does it MATTER
22:26 <rm_work> look for get_vrrp_subflow
22:26 <rm_work> in our failover flow, that's the only one we'd ever call
22:26 <rm_work> and it always does upload_config->start
22:26 <rm_work> non-conditional
22:26 <xgerman_> yeah, why would you need to restart vrrp?
22:27 <xgerman_> it's not like we change the VIP or anything
22:27 <rm_work> hmmm
22:27 <rm_work> well, why do we bother re-uploading the config even then?
22:27 <xgerman_> if a machine is down we just replace it
22:28 <rm_work> ok so ... lulz...
22:28 <rm_work> what IP does it use? looks like.. uhh
22:28 <xgerman_> yep, that bug might make it work ;-)
22:28 <rm_work> vrrp_ip right?
22:28 <johnsom> Yeah, on failover it's not started yet, so it will always be a start
22:28 <rm_work> sooooo in *my* setup, that IP *does* change
22:28 <xgerman_> yep, vrrp-ip
22:29 <rm_work> when it does a failover, the new amp will have a different vrrp_ip
22:29 <xgerman_> well, that is a new challenge
22:29 <johnsom> Where are we ever uploading a VRRP config to an already running VRRP?
22:29 <rm_work> that's not the case upstream? (sometimes i lose track of what exactly my differences are)
22:29 <xgerman_> we made it so that we could bring up stuff with the same port (IP)
22:29 <rm_work> i didn't think that was a change on my side
22:29 <rm_work> oh maybe it is
22:30 <xgerman_> you should still have the port
22:30 <rm_work> yeah ok, so it's just broken for me
22:30 <rm_work> well, we can't salvage that port
22:30 <xgerman_> in your world the machine might be on a subnet the port isn't on?
22:30 <johnsom> Where are we ever uploading a VRRP config to an already running VRRP?
22:30 <rm_work> because there's a 95% chance or so that it's unreachable from whatever host the new VM is spun up on
22:30 <rm_work> johnsom: i'm just looking at the amp agent log...
22:30 <rm_work> let me pastebin
22:31 <xgerman_> rm_work: I don't say this lightly, but your network is screwing you
22:31 <rm_work> http://paste.openstack.org/show/mzwf9iUXssb4U7R915ud/
22:32 <rm_work> xgerman_: yeah it's really dumb, thus the whole reason for the L3 VIP driver
22:32 <rm_work> anyway, that paste shows what happens on the amp agent for the MASTER as I repeatedly failover the BACKUP
22:32 <xgerman_> without HA VIPs you are basically never HA
22:33 <rm_work> lines 10-21 are the first failover, 22-32 the second
22:34 <rm_work> let me look at the flows
22:38 <xgerman_> yeah, we use the same flows for create with VRRP
22:39 <johnsom> rm_work: What do you mean first and second failover? The logs would be wiped with each failover
22:39 <rm_work> ?
22:39 <rm_work> this is for the one that DIDN'T fail
22:39 <rm_work> i'm failing one and watching logs on the other
22:40 <johnsom> Ah, ok, yeah, that makes sense. It just iterates over all the amps on the LB, so basically it's pushing down the same config and a "start" which is already started.
22:40 <rm_work> yeah
22:40 <xgerman_> yep
22:40 <xgerman_> takes the LB
22:40 <rm_work> but in my case the vrrp_ips change
22:40 <xgerman_> instead of a specific amp
22:40 <rm_work> so the one that didn't fail, needs a new config? I THINK?
22:40 <johnsom> But you are saying the config would change? I thought the unicast peer was the VIP address and not the base address
22:40 <rm_work> ah hmm
22:40 <rm_work> let me see if there's any mention of the peer address
22:41 <rm_work> i thought that had to be there? for them to communicate?
22:41 <johnsom> Right, but that is the only thing that *could* change in that config
22:41 <rm_work> yes, unicast_peer
22:41 <rm_work> it updates the config with the new peer
22:41 <rm_work> but the service isn't restarted/reloaded
22:41 <xgerman_> in our case we re-use the port so it doesn't matter
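For reference, the piece of config being discussed is the unicast peering block in the amphora's keepalived.conf. A sketch of its shape (addresses and the instance name are illustrative, not taken from an actual amphora):

```
vrrp_instance MASTER_1 {
    state MASTER
    interface eth1
    priority 100
    unicast_src_ip 10.0.0.11      # this amphora's vrrp_ip
    unicast_peer {
        10.0.0.12                 # the other amphora's vrrp_ip
    }
    ...
}
```

The issue rm_work describes follows from this: `unicast_peer` names the other amphora's vrrp_ip, so when a replacement amphora comes up with a new vrrp_ip, the surviving amphora's on-disk config is updated but the running keepalived keeps advertising to the old peer until it is reloaded or restarted.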
22:42 <rm_work> sooo... can we do it right though? :/
22:42 <johnsom> Well, maybe....
22:42 <rm_work> johnsom: yeah so my question would be
22:43 <rm_work> if the *new peer* has the correct config and is newly started
22:43 <rm_work> does it initiate the pair by connecting
22:43 <rm_work> and the bad config on the older node doesn't matter?
22:43 <rm_work> i feel like it probably should matter if the configs are correct.. but
22:43 <johnsom> I think so, but ... I'm pretty sure that is just for the initial connections and they share the peer list from then on, but, not 100% on that
22:44 <rm_work> i guess this really is just a me-problem
22:44 <johnsom> Let me take a look at what actually happens here.
22:44 <xgerman_> so my fear is if we restart, keepalived forgets who is master, since we probably have the priorities wrong, but…
22:44 <johnsom> Give me a couple minutes
22:44 <rm_work> hmm k
22:45 <rm_work> yeah xgerman_: my understanding of how any of this vrrp stuff works is really shaky
22:45 <johnsom> Priorities are static per our config, that will not change. Once a BACKUP, always a backup.
22:45 <xgerman_> but if we do failover, BACKUP becomes master
22:45 <rm_work> johnsom: so when the master comes back up... it'll try to take back over?
22:45 <xgerman_> and we still write down the MASTER config
22:45 <xgerman_> to the new node
22:46 <xgerman_> rm_work: there is some code to prevent flopping
22:46 <johnsom> xgerman: virtually, but its config is always a BACKUP with the lower priority
22:46 <johnsom> Right
22:47 <xgerman_> not sure if a restart of both disturbs the balance
22:47 <rm_work> hmm so german's point is that if we restart the config on the backup, will it revert to being a backup?
22:47 <rm_work> and then we have two backups? or what
22:48 <johnsom> Yeah, it *could* re-evaluate positions, but I think it should just load the config and maintain its state. Though this could be why it's a start.
22:48 <xgerman_> indeed
22:48 <johnsom> It has been a few years since I was deep into this stuff
22:48 <johnsom> Like mitaka
22:49 <xgerman_> poking with a stick has been my MO
22:52 <rm_work> i might just modify the start script
22:52 <rm_work> such that when it gets "start", it can see if it's running, and do a reload if it's already started
22:52 <rm_work> rather than try to muck with the logic in the flows themselves, which are <_<
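The idempotent-start idea rm_work proposes can be sketched as a small decision function. This is only an illustration of the logic (the actual change he links later lives in the amphora agent's keepalived.py; `is_running` here stands in for however the agent would detect a live keepalived process):

```python
def choose_action(requested, is_running):
    """Map a requested 'start' onto 'reload' when keepalived is already
    running, so a freshly uploaded VRRP config actually takes effect
    instead of the 'start' being a no-op against a running service."""
    if requested == "start" and is_running:
        return "reload"
    return requested


def build_service_command(requested, is_running):
    # Mirrors the agent's pattern of shelling out to the service wrapper,
    # e.g. "/usr/sbin/service octavia-keepalived <action>".
    action = choose_action(requested, is_running)
    return "/usr/sbin/service octavia-keepalived {action}".format(action=action)
```

This keeps the flow logic untouched: the failover flow can keep issuing unconditional `start`, and the agent quietly does the right thing either way.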
22:54 <rm_work> yeah that is easy
22:58 <johnsom> Yeah, so HUP -- this causes keepalived to close down all interfaces, reload its configuration, and start up with the new configuration.
22:58 <johnsom> So, yeah, reload will definitely cause a re-negotiation and flip the VIP over to the other host
22:59 <johnsom> rm_work: Question, in your environment, why are you even messing with keepalived/VRRP?
22:59 <johnsom> Oh, just the "down" notification?
22:59 <rm_work> keepalived needs to run a script
22:59 <rm_work> yes
22:59 <rm_work> i'm removing the garp config
22:59 <rm_work> and adding a notify_master
23:00 <johnsom> I don't think you can actually disable the garp in keepalived. That is kind of its only reason in life
23:00 <rm_work> err
23:00 <rm_work> it's some config lines
23:00 <rm_work> if i remove them, it shouldn't do it? i assume
23:00 <rm_work> garp_master_refresh
23:00 <rm_work> garp_master_refresh_repeat
23:00 <johnsom> No, that just changes the defaults
23:00 <rm_work> no?
23:00 <rm_work> lol k
23:00 <rm_work> well
23:01 <johnsom> Let me look, but I don't think you can disable it totally.
23:01 <rm_work> it doesn't REALLY matter if they're garping
23:01 <johnsom> But, shouldn't matter for you right?
23:01 <rm_work> nothing looks at it
23:01 <rm_work> it'd just be ignored
23:01 <rm_work> maybe I can set the time up to something ridiculous
23:01 <rm_work> so it at least doesn't spam stuff over the network as often
23:02 <rm_work> "every 20 years, send a garp please"
23:02 <rm_work> effectively off
23:02 <johnsom> vrrp_garp_master_repeat 0 maybe
23:02 <rm_work> yeah
23:02 <rm_work> was just thinking
23:04 <rm_work> anyway i'm basically failing at getting it to actually run the notify scripts anyway <_<
23:04 <rm_work> it doesn't seem to want to do it
23:06 <rm_work> i have all of these set:
23:06 <rm_work> notify_master "/bin/haproxy-vrrp-alert"
23:06 <rm_work> notify_backup "/bin/haproxy-vrrp-alert"
23:06 <rm_work> notify_fault "/bin/haproxy-vrrp-alert"
23:06 <rm_work> none of them run
23:06 <rm_work> bleh
23:07 <johnsom> perms right?
23:07 <rm_work> hmm
23:07 <rm_work> what user does it run as
23:07 <johnsom> root
23:08 <rm_work> seems root
23:08 <rm_work> yes
23:08 <rm_work> <_<
23:08 <johnsom> But are the scripts exec?
23:08 <rm_work> I mean i can run the script manually
23:08 <rm_work> so yes
23:08 <rm_work> trying again
23:08 <rm_work> also, 0 for repeat doesn't work
23:09 <johnsom> Yeah, I pretty much guessed turning off garp was a no-go
23:09 <rm_work> i just set the refresh time to 20 years
23:10 <rm_work> I think that is workable :P
23:10 <rm_work> OOOOOR not
23:10 <rm_work> that makes it do it every second
23:11 <rm_work> ok it seems to be fine with monthly
23:11 <rm_work> yearly is also too long
23:12 <rm_work> i wonder what MAX is
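Summarizing the garp experiments as a config fragment (illustrative values only): setting the repeat count to 0 was rejected, and very large refresh intervals like "20 years" or a year misbehaved, but a roughly monthly interval parsed fine, which effectively mutes the periodic gratuitous ARP without disabling it.

```
vrrp_instance MASTER_1 {
    ...
    # effectively quiet the periodic gratuitous ARP spam;
    # 0 is not accepted, and very large values wrapped around in testing
    garp_master_refresh 2592000       # ~30 days, in seconds
    garp_master_refresh_repeat 1
}
```

As noted in the discussion, the remaining garps are harmless in rm_work's L3 environment since nothing on the network consumes them.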
23:14 <johnsom> You are putting the scripts inside the vrrp_instance block, right?
23:14 <rm_work> yes
23:14 <rm_work> http://paste.openstack.org/show/618728/
23:15 <rm_work> eventually it'll be only notify_master but
23:15 <rm_work> i just want it to freaking trigger, lol
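The notify hooks under discussion sit inside the vrrp_instance block. A sketch of the shape that ended up working here (instance name illustrative; per the findings later in this log, the unquoted paths loaded for rm_work, while the quoted form was interpreted as expecting parameters):

```
vrrp_instance MASTER_1 {
    ...
    # run on VRRP state transitions; unquoted paths worked in this setup
    notify_master /bin/haproxy-vrrp-alert
    notify_backup /bin/haproxy-vrrp-alert
    notify_fault  /bin/haproxy-vrrp-alert
}
```

One caveat also raised below: `notify_master` fires on the initial election too, not only on failover, so the script has to tolerate being called when the instance first becomes master.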
23:16 <johnsom> try dropping the quotes
23:16 <rm_work> hmmm
23:16 <xgerman_> didn't you guys implement etcd for health?
23:17 <johnsom> Hahahaha
23:17 <xgerman_> just use a lock there and then switch :-)
23:17 <johnsom> That was just a Josh drive-by
23:17 <xgerman_> he probably would feel vindicated right now ;-)
23:17 <johnsom> kinda like the patches to fix health manager lifecycle
23:17 <xgerman_> lol
23:17 <xgerman_> he feels like a manager
23:18 <xgerman_> like the guy who wrote the UDP code
23:18 <johnsom> So, the other thought, if dropping the quotes doesn't work, is to try the group approach
23:18 <johnsom> https://www.irccloud.com/pastebin/E7Dq1IBq/
23:18 <johnsom> Just for giggles
23:19 <johnsom> Also, what is syslog saying? it usually spews on MASTER elect, I would expect something in there about the notify script
23:19 <rm_work> hmmm
23:19 <rm_work> yeah it didn't
23:19 <rm_work> weird
23:19 <rm_work> i'm looking at journalctl
23:21 <johnsom> rm_work: BTW, I think -C for keepalived is what you are looking for.
23:21 <rm_work> -C ?
23:21 <johnsom> https://www.irccloud.com/pastebin/RUJQwcDH/
23:21 <rm_work> ah
23:21 <rm_work> ok
23:22 <rm_work> though we use the default start script so i'm not sure how to change what it does
23:22 <xgerman_> maybe you should use a different tool? pacemaker, carp?
23:22 <johnsom> So, with your notify in the config and keepalived running, kill -USR1 and look at /tmp/keepalived.data
23:22 <rm_work> cmd = ("/usr/sbin/service octavia-keepalived {action}".format(
23:23 <xgerman_> you might like that better than keepalived: https://github.com/jedisct1/UCarp
23:23 <johnsom> Yeah, well, you are hacking this far, it's just a flag in the systemd service script
23:23 <johnsom> xgerman: You are starting to sound like Stephen
23:24 <xgerman_> it seems we are using a hammer for a screw; if we don't need GARP, why not use a tool which doesn't offer it
23:24 <johnsom> CARP does GARP too
23:24 <rm_work> because this one is already installed and configured :P
23:25 <rm_work> and i'd have to use an even bigger hammer to dislodge it
23:25 <johnsom> and since we are already using keepalived it's probably less work
23:25 <rm_work> ^^ this
23:25 <rm_work> if i had to do it from scratch i'd use something simpler maybe
23:25 <johnsom> Not to mention, ucarp seems to have even less docs than keepalived: http://www.ucarp.org/
23:27 <xgerman_> well, they advertise: "no need for any dedicated extra network link between redundant hosts."
23:27 <johnsom> We don't either
23:28 <johnsom> Even though I kind of wanted to do that as part of the design
23:29 <rm_work> interesting, so yeah no-quotes causes the script to load
23:29 <rm_work> but uhh
23:29 <rm_work> it also seems to call it the first time it comes up?
23:29 <johnsom> no notify possibility with ucarp either
23:29 <johnsom> Yeah, that makes sense
23:30 <rm_work> <_<
23:30 <johnsom> With quotes it is expecting some params
23:30 <rm_work> that's .... un-ideal
23:30 <johnsom> Well, one of them has to become the master right?
23:30 <rm_work> i mean, when the master comes up the first time, it does the master notify, which in my case would be a FLIP reassign
23:31 <xgerman_> the example on the UCARP page explicitly shows scripts which run when becoming master and when losing that status
23:31 <rm_work> i need it to happen only when it becomes master due to a fail
23:31 <rm_work> not just when it comes up and is like "woo i'm master"
23:31 <johnsom> Oh, the up/down scripts
23:31 <xgerman_> yep
23:31 <johnsom> to like do the interface up/down
23:31 <xgerman_> well, I guess you could put in anything
23:32 <xgerman_> anyhow, gotta get my kids… AFK for a long while
23:34 <johnsom> rm_work: Not sure how to work around that or why it matters. It seems like this is an easy way to trigger your flip assignment in the first place
23:34 <rm_work> well
23:34 <rm_work> i mean i guess so :P
23:34 <rm_work> i suppose it's a noop if there is no change
23:34 <rm_work> so whatever
23:34 <rm_work> it should be fine
23:34 <rm_work> *airquotes*
23:35 <johnsom> I'm also thinking more that maybe you should just code up another topology and make this a supported config, moving flips. It's just a standard neutron flip call right?
23:35 <rm_work> well
23:35 <rm_work> I have a driver for this, remember
23:35 <rm_work> FLIP is a driver
23:35 <rm_work> and this goes along with that
23:39 <openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas master: Set stable/pike to pull stable/pike neutron  https://review.openstack.org/494745
23:39 <johnsom> I need to do a bit of housekeeping in release prep.
23:39 <openstackgerrit> Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/435612
23:40 <rm_work> https://review.openstack.org/#/c/435612/36/octavia/amphorae/backends/agent/api_server/keepalived.py
23:40 <rm_work> johnsom: ^^ that is what i went with
23:41 <johnsom> Is this something you plan to try to merge?
23:48 <rm_work> not in this state
23:48 <rm_work> once I finally get it to "working working" state, I'm going to clean it up a lot
23:49 <rm_work> and I'll have to generalize a thing or two
23:49 <rm_work> but then maybe
23:49 <rm_work> it SHOULD run in devstack without problem
23:49 <johnsom> Ok, we just need to remember that that reload will flip the VIP
23:49 <rm_work> the idea is that I think it's useful to people

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!