Monday, 2017-07-17

00:17 <openstackgerrit> Merged openstack/octavia master: Updated from global requirements  https://review.openstack.org/483376
02:43 *** sanfern has quit IRC
02:47 *** yamamoto has joined #openstack-lbaas
03:09 *** yamamoto has quit IRC
03:39 *** gans has joined #openstack-lbaas
03:44 *** links has joined #openstack-lbaas
03:45 *** sanfern has joined #openstack-lbaas
04:09 *** yamamoto has joined #openstack-lbaas
04:14 *** yamamoto has quit IRC
04:31 *** gtrxcb has joined #openstack-lbaas
05:23 *** armax has joined #openstack-lbaas
05:30 *** armax has quit IRC
05:30 *** armax has joined #openstack-lbaas
05:31 *** armax has quit IRC
05:31 *** armax has joined #openstack-lbaas
05:32 *** armax has quit IRC
05:32 *** armax has joined #openstack-lbaas
05:32 *** armax has quit IRC
05:33 *** armax has joined #openstack-lbaas
05:33 *** armax has quit IRC
05:34 *** armax has joined #openstack-lbaas
05:34 *** armax has quit IRC
05:35 *** armax has joined #openstack-lbaas
05:35 *** armax has quit IRC
06:00 *** rcernin_ has joined #openstack-lbaas
06:01 *** gans819 has joined #openstack-lbaas
06:04 *** gans has quit IRC
06:39 *** diltram has quit IRC
06:46 *** diltram has joined #openstack-lbaas
06:52 *** gans819 has quit IRC
07:04 *** gtrxcb has quit IRC
07:07 *** cody-somerville has joined #openstack-lbaas
07:07 *** cody-somerville has quit IRC
07:07 *** cody-somerville has joined #openstack-lbaas
07:09 *** csomerville has joined #openstack-lbaas
07:12 *** cody-somerville has quit IRC
07:19 *** aojea has joined #openstack-lbaas
07:30 *** gans819 has joined #openstack-lbaas
07:31 *** gans819 has quit IRC
07:32 *** tesseract has joined #openstack-lbaas
07:41 *** diltram has quit IRC
07:47 *** diltram has joined #openstack-lbaas
08:26 *** gcheresh has joined #openstack-lbaas
08:50 *** tesseract has quit IRC
08:53 *** tesseract has joined #openstack-lbaas
09:13 *** yamamoto has joined #openstack-lbaas
09:13 *** yamamoto has quit IRC
09:14 *** yamamoto has joined #openstack-lbaas
09:16 *** yamamoto has quit IRC
09:17 *** yamamoto has joined #openstack-lbaas
09:21 *** yamamoto has quit IRC
09:21 *** yamamoto has joined #openstack-lbaas
09:26 *** yamamoto has quit IRC
09:30 *** yamamoto has joined #openstack-lbaas
09:38 *** yamamoto has quit IRC
09:51 *** csomerville has quit IRC
10:37 *** yamamoto has joined #openstack-lbaas
10:48 *** sanfern has quit IRC
10:52 <openstackgerrit> Pradeep Kumar Singh proposed openstack/octavia master: [WIP] Add provider table and API  https://review.openstack.org/484325
11:10 *** atoth has quit IRC
11:40 *** sanfern has joined #openstack-lbaas
11:57 *** aojea has quit IRC
12:03 *** atoth has joined #openstack-lbaas
12:17 <nmagnezi> rm_work, johnsom, that neutron-lbaas breakage is so strange. I know which patch in neutron broke us, but for some reason it seems like it shouldn't have broken us. Also, I just can't reproduce it locally
12:17 <nmagnezi> johnsom
12:29 <isantosp> Hi, we are trying to run Octavia in our infrastructure and we have an issue with the subnet_id we have to pass when creating the load balancer. In our infrastructure we would like to pass a DNS name directly instead of a subnet_id and create different load balancers. Any idea how I could implement this?
12:51 *** catintheroof has joined #openstack-lbaas
12:52 *** aojea has joined #openstack-lbaas
12:56 *** aojea has quit IRC
13:10 <xgerman_> isantosp: Octavia doesn't have any DNS integration right now, but in the latest version you can pass in a port. So if you create a DNS name for said port beforehand, that should cover your use case (though I think you could do subnet -> VIP -> assign DNS name as well)
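[Editor's note: a rough sketch of the port-first flow xgerman_ describes. The network name, zone, and IDs below are made up, and the DNS step assumes designate is deployed; the `--vip-port-id` option is available in recent python-octaviaclient releases.]

```shell
# Create a neutron port on the desired network, then point DNS at its
# fixed IP before handing it to Octavia (names/IDs are illustrative).
openstack port create --network private-net lb-vip-port

# Create the DNS record for the port's fixed IP out of band, or via
# designate if deployed, e.g.:
#   openstack recordset create example-zone. lb.example.com. \
#       --type A --record <port-fixed-ip>

# Then create the load balancer on that pre-created port instead of
# passing a subnet:
openstack loadbalancer create --name lb1 --vip-port-id <port-id>
```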
13:19 *** aojea has joined #openstack-lbaas
13:26 *** aojea has quit IRC
13:28 <nmagnezi> xgerman_, o/
13:29 <xgerman_> o/
13:30 <nmagnezi> xgerman_, did you notice ^^? The neutron-lbaas unit tests got kind of broken.. not sure if anyone has picked that up yet
13:30 <nmagnezi> xgerman_, I tried to debug this, but for some unknown reason I can't reproduce it locally
13:31 <xgerman_> mmh, I usually look at the Jenkins job and use the exact commands it uses
13:37 *** links has quit IRC
13:48 *** aojea has joined #openstack-lbaas
13:49 *** yamamoto has quit IRC
14:04 *** sanfern has quit IRC
14:04 *** sanfern has joined #openstack-lbaas
14:05 *** chlong has joined #openstack-lbaas
14:09 *** yamamoto has joined #openstack-lbaas
14:09 *** yamamoto has quit IRC
14:10 *** chlong has quit IRC
14:27 <nmagnezi> xgerman_, aye. Deployed a brand new node that wasn't stacked yet, following your advice.
14:38 *** gcheresh has quit IRC
14:40 *** yamamoto has joined #openstack-lbaas
14:40 *** aojea has quit IRC
14:46 *** yamamoto has quit IRC
14:50 <isantosp> xgerman_: thank you, I'll try that. However, if you are trying to integrate DNS into Octavia, I can help with that too, just let me know :)
14:52 <johnsom> isantosp: We had DNS integration via neutron for a while, but there were bugs that caused us to remove it. When we moved a port from one instance to another (failover scenario) it would block us. Those bugs might be fixed now.
14:53 *** armax has joined #openstack-lbaas
14:58 *** armax has quit IRC
15:03 *** csomerville has joined #openstack-lbaas
15:06 *** armax has joined #openstack-lbaas
15:10 <xgerman_> that would be AWS ELB style
15:11 <xgerman_> usually people assign more sensible names, so I think what I would find useful is to specify something like "www.bigcorp.com" on the LB create and then we assign the IP in the DNS record
15:15 <redondo-mk> Hi. Question: has anyone already used that v1-to-v2 LBaaS migration script successfully in production? And how much work would it be to migrate from the v2 agent to v2 Octavia? With that v1 -> v2 migration script you can get away without any downtime of the load balancer itself, right? Is there a way to migrate from the v2 agent to Octavia without any downtime? I'm aware there isn't any work done regarding migration
15:15 <redondo-mk> to Octavia, but there were words on the mailing list saying it is planned.
15:17 <openstackgerrit> Pradeep Kumar Singh proposed openstack/octavia master: [WIP] Add provider table and API  https://review.openstack.org/484325
15:21 <openstackgerrit> Pradeep Kumar Singh proposed openstack/octavia master: [WIP] Add provider table and API  https://review.openstack.org/484325
15:23 <johnsom> Hi redondo-mk. So, the only script I know of for v1->v2 was the database migration script. I can't speak to how well it works or whether anyone used it. Most of the folks who were around for the v1 transition are no longer working on the project (v1 was deprecated a long time ago).
15:25 <johnsom> As for the old agent code, we would like to move it under Octavia as a driver/provider, but so far we don't have the resources to do so. We are planning to focus on providers/drivers in Queens
15:26 <johnsom> I think it would be pretty straightforward to migrate from a neutron-lbaas v2 agent LB to an Octavia LB with some downtime, but without downtime it might be some work. We really haven't scoped that yet.
15:26 <johnsom> We do intend to have a neutron-lbaas v2 to Octavia migration script, but that work has not yet started either.
15:28 <openstackgerrit> Pradeep Kumar Singh proposed openstack/octavia master: [WIP] Add provider table and API  https://review.openstack.org/484325
15:31 *** aojea has joined #openstack-lbaas
15:35 <redondo-mk> @johnsom Yeah, I'm aware of that. v1 was deprecated in Liberty and removed in Newton, and if I wanted to use it I would, I guess, have to patch neutron_lbaas to keep it. But that is not our interest, since we want to implement some failover/HA into our production LBs. I know Octavia offers some failover capabilities, but I don't know the details, since it's been about a week since I started looking into the
15:35 <redondo-mk> project (at this point I haven't even managed to spin it up with devstack properly; I get an error on horizon and the CLI as well, just after having the env up and running from the devstack Vagrantfile that is present there). I'm just analyzing the script to see how and under what constraints it migrates the data from v1 tables to v2 tables. At this point I don't know whether, when doing the switch to v2, the v2
15:35 <redondo-mk> agent automatically "picks up" and handles the already existing haproxy and netns that were created by v1 lbaas?
15:36 *** chlong has joined #openstack-lbaas
15:36 *** aojea has quit IRC
15:36 <johnsom> redondo-mk: Yeah, if HA is important to you, Octavia is your project. It has active/standby with ~second failover and layers of failover capability.
15:37 <redondo-mk> What kind of work would be required to provide migration without downtime? I would be interested in any kind of info at this point, and if I manage to make something work I would be more than happy to contribute...
15:37 <johnsom> redondo-mk: I'm not sure about the agent interacting with v1; I think v1 had its own version of the namespace driver/agent
15:38 *** chlong has quit IRC
15:38 <johnsom> Yeah, the issue is going to be migrating the VIP IP over to the Octavia amphora-based load balancers.
15:45 *** chlong has joined #openstack-lbaas
15:45 <redondo-mk> I think going from the v2 agent to v2 Octavia is going to be less difficult than going from v1 to v2... don't know how it is with migrating heat stacks that have a v1 LB to heat stacks with v2...
15:46 <johnsom> Yeah, I agree. V1 has some very strange data model constructs.
15:53 *** atoth has quit IRC
16:03 *** armax has quit IRC
16:07 *** gcheresh has joined #openstack-lbaas
16:10 *** tongl has joined #openstack-lbaas
16:19 *** rcernin_ has quit IRC
16:19 *** gcheresh has quit IRC
16:21 *** atoth has joined #openstack-lbaas
16:26 *** aojea has joined #openstack-lbaas
16:29 *** csomerville has quit IRC
16:30 *** aojea has quit IRC
16:31 <tomtomtom> anyone run into trouble with Octavia and neutron where the "pools" are in an ERROR state for provisioning status? I have checked my neutron and Octavia logs, but don't see any information about pools being a problem.
16:33 <tomtomtom> I've upgraded to the latest master branch of Octavia too, just did it last week.
16:37 <johnsom> tomtomtom: The pools in the Octavia database show ERROR for provisioning?
16:42 <tomtomtom> no, it's in the neutron status where they do; let me check the database.
16:45 <tomtomtom> no, they show ACTIVE in the Octavia DB
16:45 *** aojea has joined #openstack-lbaas
16:49 *** aojea has quit IRC
16:49 <johnsom> tomtomtom: So, the issue is neutron-lbaas giving up on the status sync too early. This is a proposal to address that problem: https://review.openstack.org/#/c/478385/
16:50 *** fnaval has joined #openstack-lbaas
16:50 <johnsom> The longer-term solution is to get rid of the neutron database and neutron-lbaas, hence the Octavia v2 API work we have been doing.
16:51 <tomtomtom> ok, so is it just an informational problem, or does neutron do other things when it thinks the pool status is ERROR? Does this prevent the LB from going ONLINE?
16:53 *** ssmith has joined #openstack-lbaas
16:53 <tomtomtom> I installed the Octavia master branch last week, shouldn't that be v2?
16:53 <johnsom> tomtomtom: It means that the neutron client and the neutron-lbaas API will think it is in ERROR when it is not, and it might restrict some actions.
16:54 <johnsom> Yes, that would be. In that case you could go straight to the Octavia v2 API and bypass neutron-lbaas altogether. You would also be able to use the openstack client
16:56 <johnsom> If you don't need neutron-lbaas (provider drivers, flavors, or horizon), using the straight Octavia v2 API is the way to go. Those others we just haven't finished yet
16:56 <johnsom> If you need neutron-lbaas, there are two things to try:
16:56 <johnsom> 1. Edit the neutron-lbaas config: in the octavia section, add request_poll_timeout and set it to something higher than 100.
16:57 <johnsom> Though, if you aren't stress testing or running in VirtualBox, 100 should be a fine number.
16:57 <johnsom> 2. Use the patch I pointed to above that will force-sync neutron.
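[Editor's note: a sketch of the config change johnsom describes in step 1. The file path and the value 300 are illustrative; only the [octavia] section and the request_poll_timeout option name come from the conversation.]

```ini
# neutron-lbaas service config (path varies per deployment)
[octavia]
# Seconds neutron-lbaas polls Octavia for provisioning status before
# giving up and marking the resource ERROR. Raise above the ~100s
# default only for slow/overloaded environments.
request_poll_timeout = 300
```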
17:01 <ssmith> johnsom, we're on production hardware
17:02 <johnsom> Hmm, yeah, so it seems odd that the status wouldn't be updating within 100 seconds
17:02 <johnsom> If you want, I am happy to look through the o-api, o-cw, and q-svc logs and see if I can find something.
17:03 *** aojea has joined #openstack-lbaas
17:03 *** chlong has quit IRC
17:07 <johnsom> nmagnezi: Are you working on the neutron issue, or should I pick that up?
17:07 *** aojea has quit IRC
17:07 <johnsom> It looks like the agent test code is doing bad things and using a private method from neutron.... Sigh. That agent....
17:09 <tomtomtom> another quick question I have: does Octavia create the public endpoint in keystone somehow, or is that part of the installation?
17:10 <johnsom> That is part of installation
17:11 <johnsom> Our devstack plugin does it here: https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L167
17:11 <johnsom> But it is pretty quick and easy via "openstack endpoint create ..."
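[Editor's note: a hand-run sketch of what the devstack plugin johnsom links does. The region, host IP, and port are placeholders; `load-balancer` is the official keystone service type for Octavia.]

```shell
# Register the Octavia API service and endpoints in keystone by hand
# (URLs and region below are examples, adjust for your deployment).
openstack service create --name octavia \
    --description "OpenStack Octavia Load Balancing" load-balancer
openstack endpoint create --region RegionOne load-balancer public   http://203.0.113.10:9876
openstack endpoint create --region RegionOne load-balancer internal http://203.0.113.10:9876
openstack endpoint create --region RegionOne load-balancer admin    http://203.0.113.10:9876
```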
17:12 *** aojea has joined #openstack-lbaas
17:12 *** armax has joined #openstack-lbaas
17:12 <xgerman_> +1
17:13 *** sshank has joined #openstack-lbaas
17:16 *** aojea has quit IRC
17:17 *** harlowja has joined #openstack-lbaas
17:17 *** sshank has quit IRC
17:19 *** chlong has joined #openstack-lbaas
17:20 *** sshank has joined #openstack-lbaas
17:21 *** aojea has joined #openstack-lbaas
17:23 <tongl> Quick question: in case of an exception during creation of an LBaaS resource, should we just call failed_completion and put the resource in ERROR state, or should we not even bother to create anything?
17:24 <johnsom> neutron-lbaas?
17:25 <johnsom> Ah, yes. Umm, my rule is usually: if it has gotten to the point that the DB record is there for the LB, it should be marked ERROR
17:25 <tongl> yes
17:25 *** aojea has quit IRC
17:27 <johnsom> If the failure is in your driver, you really should call failed_completion, as the API is async and may have already returned 201 to the user.
17:29 <tongl> For example, we first create a load balancer and it is online and active. When we create a listener for it, there is an exception in the driver and we call failed_completion for the listener. In this case, the DB has a record of the failed listener. Do we update the load balancer's state to ERROR too?
17:31 *** aojea has joined #openstack-lbaas
17:36 *** aojea has quit IRC
17:39 <johnsom> Hmm, good question. I'm not sure how neutron-lbaas handles that.
17:40 *** aojea has joined #openstack-lbaas
17:41 <johnsom> I guess it's up to the driver. I.e., if a listener deployment error would negatively impact the whole load balancer (like it's not deployed), it should bubble up. If it only impacts that listener, but the load balancer could still service requests on other listeners, I might lean towards just erroring out the failed component.
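[Editor's note: the error-handling pattern discussed above, sketched with stand-in classes. `FakeResource` and `DriverManager` are hypothetical; they mimic, not reproduce, the neutron-lbaas driver base's successful_completion/failed_completion callbacks.]

```python
# Sketch: a driver manager that marks only the failed component ERROR,
# since the async API may already have returned 201 to the user.

class FakeResource:
    def __init__(self, name):
        self.name = name
        self.provisioning_status = "PENDING_CREATE"

class DriverManager:
    """Minimal stand-in for a neutron-lbaas driver manager."""

    def successful_completion(self, context, obj):
        obj.provisioning_status = "ACTIVE"

    def failed_completion(self, context, obj):
        # Only the failed component goes to ERROR; the parent LB can
        # keep servicing requests on its other listeners.
        obj.provisioning_status = "ERROR"

    def create(self, context, listener):
        try:
            self._deploy(listener)          # talk to the backend here
        except Exception:
            # API is async and may have already returned 201, so the
            # failure has to be recorded on the resource itself.
            self.failed_completion(context, listener)
            return
        self.successful_completion(context, listener)

    def _deploy(self, listener):
        if listener.name == "bad":
            raise RuntimeError("backend rejected the listener")

mgr = DriverManager()
good, bad = FakeResource("good"), FakeResource("bad")
mgr.create(None, good)
mgr.create(None, bad)
print(good.provisioning_status, bad.provisioning_status)  # ACTIVE ERROR
```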
17:42 <johnsom> Ok, so the neutron/neutron-lbaas issue is a namespace collision....
17:45 *** aojea has quit IRC
17:50 *** aojea has joined #openstack-lbaas
17:51 *** aojea has quit IRC
17:51 *** aojea has joined #openstack-lbaas
17:52 <openstackgerrit> Michael Johnson proposed openstack/neutron-lbaas master: Fix a namespace collision issue.  https://review.openstack.org/484437
17:52 <johnsom> rm_work, xgerman_: That should fix the neutron-lbaas gate. It's a bit cheesy, so let me know if you think we should take a different direction. It's a unit test issue
17:56 *** aojea has quit IRC
17:57 *** sanfern has quit IRC
18:01 *** tesseract has quit IRC
18:07 *** aojea has joined #openstack-lbaas
18:11 *** aojea has quit IRC
18:16 *** aojea has joined #openstack-lbaas
18:21 *** aojea has quit IRC
18:26 <tongl> When is our Octavia session at the upcoming PTG?
18:26 *** rcernin has joined #openstack-lbaas
18:26 <johnsom> I have a room reserved Wednesday, Thursday, and Friday
18:33 <tongl> johnsom: Thanks Michael!
18:35 *** aojea has joined #openstack-lbaas
18:37 *** aojea has quit IRC
18:45 *** gcheresh has joined #openstack-lbaas
18:45 <rm_work> ugh, still working on getting to that
18:45 <rm_work> THOUGH it looks like Sydney is probably going to happen O_o
19:02 *** sshank has quit IRC
19:26 <xgerman_> Awesome - so at least one of us can run that lab ;-)
19:32 *** harlowja has quit IRC
19:34 *** dougwig has joined #openstack-lbaas
20:10 *** csomerville has joined #openstack-lbaas
20:27 *** chlong has quit IRC
20:32 *** chlong has joined #openstack-lbaas
20:38 *** gcheresh has quit IRC
20:45 *** sshank has joined #openstack-lbaas
20:56 *** harlowja has joined #openstack-lbaas
21:06 *** cody-somerville has joined #openstack-lbaas
21:06 *** cody-somerville has quit IRC
21:06 *** cody-somerville has joined #openstack-lbaas
21:08 *** csomerville has quit IRC
21:17 <johnsom> Ok, this passed now: https://review.openstack.org/#/c/484437/
21:22 <xgerman_> +2'd
21:22 <johnsom> Thanks
21:22 <xgerman_> rm_work?
21:26 *** gtrxcb has joined #openstack-lbaas
21:26 *** chlong has quit IRC
21:31 <johnsom> xgerman_: commented on https://review.openstack.org/#/c/483520 but it went on patchset 1...
21:32 <xgerman_> thanks
21:33 *** rcernin has quit IRC
21:43 <johnsom> This one doesn't even look like it got to your code: https://review.openstack.org/#/c/483520
21:43 <johnsom> Oops, this one: https://review.openstack.org/#/c/482664/
21:45 <johnsom> Seems odd that it timed out after an hour; it's usually two
21:46 <xgerman_> yeah, this stuff is really odd
21:49 *** ssmith has quit IRC
22:07 <rm_work> johnsom: wait, why command vs shell in Ansible?
22:07 <rm_work> I ask because I'm using shell all over the place
22:07 <johnsom> I assume it's a bit more secure and lighter weight. They have a lint rule for it.
22:09 <johnsom> rm_work: Probably similar to this: https://security.openstack.org/guidelines/dg_avoid-shell-true.html
22:09 <rm_work> oh, you were just mentioning it gave a warn/error in the linter
22:09 <johnsom> Yeah, an error in the linter. It failed the gate
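[Editor's note: a small illustration of the concern behind the linked guideline and the Ansible `command`-over-`shell` lint rule. An argv list (like Ansible's `command`) passes arguments literally, while `shell=True` (like Ansible's `shell`) hands the string to /bin/sh, where metacharacters in untrusted values are interpreted. The filename value is contrived for the demo.]

```python
import subprocess

filename = "notes; echo INJECTED"   # imagine an attacker-controlled value

# Safe: argv list, no shell. The whole string is one literal argument.
safe = subprocess.run(["echo", filename], capture_output=True, text=True)
print(safe.stdout.strip())   # notes; echo INJECTED

# Risky: shell=True lets the `;` start a second command.
risky = subprocess.run("echo " + filename, shell=True,
                       capture_output=True, text=True)
print(risky.stdout.strip())  # two lines: "notes" then "INJECTED"
```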
22:14 *** sshank has quit IRC
22:19 *** sshank has joined #openstack-lbaas
22:19 *** tongl has quit IRC
22:26 *** tongl has joined #openstack-lbaas
22:47 *** armax has quit IRC
23:01 *** fnaval has quit IRC
23:07 *** catintheroof has quit IRC
23:17 *** fnaval has joined #openstack-lbaas
23:24 *** chlong has joined #openstack-lbaas

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!