*** ducttape_ has quit IRC | 00:06 | |
*** ducttape_ has joined #openstack-lbaas | 00:07 | |
*** ducttape_ has quit IRC | 00:15 | |
*** ducttape_ has joined #openstack-lbaas | 00:23 | |
*** ducttape_ has quit IRC | 00:41 | |
*** chlong has joined #openstack-lbaas | 01:25 | |
*** armax has quit IRC | 01:29 | |
*** manishg has joined #openstack-lbaas | 01:31 | |
*** manishg_ has quit IRC | 01:34 | |
*** manishg has quit IRC | 01:59 | |
*** bana_k has joined #openstack-lbaas | 02:26 | |
openstackgerrit | Bo Chi proposed openstack/octavia: Assign load_balancer in _port_to_vip() https://review.openstack.org/259391 | 02:43 |
*** bana_k has quit IRC | 03:35 | |
*** manishg has joined #openstack-lbaas | 04:38 | |
*** armax has joined #openstack-lbaas | 04:53 | |
*** prabampm has joined #openstack-lbaas | 05:01 | |
*** manishg has quit IRC | 05:18 | |
*** manishg has joined #openstack-lbaas | 05:19 | |
*** manishg has quit IRC | 05:24 | |
*** bochi-michael has joined #openstack-lbaas | 05:59 | |
*** bana_k has joined #openstack-lbaas | 06:07 | |
openstackgerrit | tianqing proposed openstack/neutron-lbaas: Remove invalid fileds of healthmonitor when its type is TCP/PING https://review.openstack.org/263141 | 06:31 |
*** bana_k has quit IRC | 06:32 | |
*** armax has quit IRC | 06:52 | |
*** chlong has quit IRC | 06:55 | |
*** sbalukoff has quit IRC | 07:13 | |
*** ljxiash has joined #openstack-lbaas | 07:17 | |
*** sbalukoff has joined #openstack-lbaas | 07:17 | |
ljxiash | using devstack, after stack.sh there are two q-lbaas services running, can anyone explain why there are two? | 07:19 |
*** numans has joined #openstack-lbaas | 07:20 | |
ljxiash | Hi, can someone kindly help me with this message: neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver [-] Stats socket not found for pool a5ed91d2-fe1e-4555-90b8-deb7cee61bd5 | 07:27 |
ljxiash | It is a WARNING | 07:27 |
*** amotoki has joined #openstack-lbaas | 07:38 | |
*** Aish has joined #openstack-lbaas | 07:41 | |
*** Aish has quit IRC | 07:41 | |
*** Aish has joined #openstack-lbaas | 07:42 | |
*** Aish has quit IRC | 07:43 | |
openstackgerrit | tianqing proposed openstack/neutron-lbaas: Remove invalid fileds of healthmonitor when its type is TCP/PING https://review.openstack.org/263141 | 08:58 |
*** openstackgerrit has quit IRC | 10:02 | |
*** openstackgerrit has joined #openstack-lbaas | 10:03 | |
*** bochi-michael has quit IRC | 10:35 | |
*** ljxiash has quit IRC | 10:50 | |
*** chlong has joined #openstack-lbaas | 12:03 | |
*** rtheis has joined #openstack-lbaas | 12:14 | |
*** doug-fish has joined #openstack-lbaas | 12:31 | |
*** ducttape_ has joined #openstack-lbaas | 13:07 | |
*** chlong has quit IRC | 13:20 | |
*** ducttape_ has quit IRC | 13:29 | |
*** chlong has joined #openstack-lbaas | 13:33 | |
*** m1dev has joined #openstack-lbaas | 14:26 | |
*** numans has quit IRC | 14:29 | |
*** prabampm has quit IRC | 14:39 | |
*** ducttape_ has joined #openstack-lbaas | 14:57 | |
*** manishg has joined #openstack-lbaas | 15:13 | |
*** shakamunyi has quit IRC | 15:15 | |
*** barra204 has quit IRC | 15:15 | |
*** TrevorV has joined #openstack-lbaas | 15:15 | |
*** markvan has quit IRC | 15:20 | |
*** markvan has joined #openstack-lbaas | 15:22 | |
*** manishg_ has joined #openstack-lbaas | 15:22 | |
*** ajmiller has joined #openstack-lbaas | 15:23 | |
*** manishg has quit IRC | 15:25 | |
*** shakamunyi has joined #openstack-lbaas | 15:29 | |
*** liamji has joined #openstack-lbaas | 15:43 | |
liamji | Hello | 15:43 |
*** doug-fish has quit IRC | 15:45 | |
xgerman | Hi | 15:47 |
liamji | About https://wiki.openstack.org/wiki/Neutron/LBaaS/API#List_Traffic_Statistics_of_a_pool, is the bytes-in in the response an aggregate value? thx | 15:49 |
*** doug-fish has joined #openstack-lbaas | 15:51 | |
*** doug-fish has quit IRC | 15:56 | |
xgerman | aggregate liamji | 16:04 |
*** johnsom has quit IRC | 16:20 | |
*** prabampm has joined #openstack-lbaas | 16:25 | |
*** amitry has joined #openstack-lbaas | 16:30 | |
amitry | Good morning, happy New Year. Are any of the public OpenStack clouds running OpenStack LBaaS? We are working on an app that leverages the OpenStack LBaaS API and have verified against DevStack but would like to test against a public cloud as well. | 16:34 |
*** manishg_ has quit IRC | 16:34 | |
xgerman | I know Rackspace has plans to run LBaaS… Helion OpenStack runs LBaaS V1 and V2 | 16:35 |
xgerman | but Helion is not a public cloud | 16:35 |
amitry | xgerman: thanks, I'll double check on Rackspace, I think you are right that they have plans but what they offer now is not OpenStack LBaaS | 16:37 |
blogan | amitry: i'm from rackspace and yes we do have plans, but yes our current offering does not use openstack lbaas (or any openstack product) | 16:37 |
amitry | blogan: thanks, is there a beta we could test against? | 16:38 |
liamji | xgerman: thanks | 16:38 |
blogan | amitry: nope not right now | 16:39 |
amitry | blogan: ok, thanks | 16:39 |
blogan | amitry: np | 16:39 |
*** liamji has quit IRC | 16:39 | |
*** johnsom has joined #openstack-lbaas | 16:40 | |
*** _cjones_ has joined #openstack-lbaas | 16:49 | |
*** barra204 has joined #openstack-lbaas | 16:51 | |
*** shakamunyi has quit IRC | 16:51 | |
*** armax has joined #openstack-lbaas | 17:07 | |
*** bana_k has joined #openstack-lbaas | 17:08 | |
*** diogogmt has joined #openstack-lbaas | 17:11 | |
*** Aish has joined #openstack-lbaas | 17:13 | |
*** bharathm has joined #openstack-lbaas | 17:26 | |
*** manishg has joined #openstack-lbaas | 17:34 | |
*** Aish has quit IRC | 17:42 | |
*** Aish has joined #openstack-lbaas | 17:51 | |
*** ajmiller has quit IRC | 17:55 | |
*** ajmiller has joined #openstack-lbaas | 18:00 | |
*** sbalukoff has quit IRC | 18:02 | |
*** harlowja has quit IRC | 18:04 | |
*** harlowja has joined #openstack-lbaas | 18:04 | |
*** doug-fish has joined #openstack-lbaas | 18:10 | |
*** armax has quit IRC | 18:11 | |
*** bana_k has quit IRC | 18:14 | |
*** bharathm has quit IRC | 18:26 | |
*** manishg has quit IRC | 18:28 | |
*** openstackgerrit has quit IRC | 18:32 | |
*** manishg has joined #openstack-lbaas | 18:32 | |
*** openstackgerrit has joined #openstack-lbaas | 18:33 | |
*** prabampm has quit IRC | 18:33 | |
*** manishg has quit IRC | 18:34 | |
*** manishg has joined #openstack-lbaas | 18:36 | |
*** doug-fish has quit IRC | 18:38 | |
*** manishg has quit IRC | 18:41 | |
*** doug-fish has joined #openstack-lbaas | 18:41 | |
*** doug-fish has quit IRC | 18:49 | |
*** doug-fish has joined #openstack-lbaas | 18:50 | |
*** doug-fish has quit IRC | 18:51 | |
*** bharathm has joined #openstack-lbaas | 18:51 | |
*** madhu_ak has joined #openstack-lbaas | 18:51 | |
*** bana_k has joined #openstack-lbaas | 18:54 | |
*** bana_k has quit IRC | 18:58 | |
*** manishg has joined #openstack-lbaas | 19:05 | |
*** manishg has quit IRC | 19:09 | |
*** sc68cal has quit IRC | 19:11 | |
*** bharathm has quit IRC | 19:11 | |
*** bharathm has joined #openstack-lbaas | 19:12 | |
*** sc68cal has joined #openstack-lbaas | 19:14 | |
*** bana_k has joined #openstack-lbaas | 19:16 | |
*** intr1nsic has joined #openstack-lbaas | 19:18 | |
*** sbalukoff has joined #openstack-lbaas | 19:20 | |
*** sbalukoff has quit IRC | 19:23 | |
*** bharathm has quit IRC | 19:28 | |
*** nmagnezi has joined #openstack-lbaas | 19:30 | |
*** bharathm has joined #openstack-lbaas | 19:32 | |
*** Kiall has quit IRC | 19:36 | |
*** Kiall has joined #openstack-lbaas | 19:36 | |
*** manishg has joined #openstack-lbaas | 19:38 | |
*** manishg has quit IRC | 19:43 | |
*** manishg has joined #openstack-lbaas | 19:45 | |
*** bharathm has quit IRC | 19:46 | |
*** bdrich has joined #openstack-lbaas | 19:47 | |
*** bharathm has joined #openstack-lbaas | 19:50 | |
nmagnezi | xgerman, ping. question about lb-network | 19:53 |
*** m1dev has quit IRC | 19:59 | |
*** bharathm has quit IRC | 20:00 | |
*** manishg has quit IRC | 20:00 | |
*** bharathm has joined #openstack-lbaas | 20:05 | |
*** manishg has joined #openstack-lbaas | 20:06 | |
*** woodster_ has joined #openstack-lbaas | 20:06 | |
*** manishg has quit IRC | 20:13 | |
*** manishg has joined #openstack-lbaas | 20:13 | |
xgerman | hi | 20:14 |
*** manishg_ has joined #openstack-lbaas | 20:16 | |
*** manishg_ has quit IRC | 20:16 | |
*** manishg_ has joined #openstack-lbaas | 20:18 | |
*** manishg has quit IRC | 20:18 | |
*** bharathm has quit IRC | 20:21 | |
*** minwang2 has joined #openstack-lbaas | 20:24 | |
*** bharathm has joined #openstack-lbaas | 20:24 | |
*** bdrich_ has joined #openstack-lbaas | 20:29 | |
*** bharathm has quit IRC | 20:31 | |
*** bdrich has quit IRC | 20:31 | |
*** bharathm has joined #openstack-lbaas | 20:34 | |
*** bdrich_ has quit IRC | 20:37 | |
*** bdrich has joined #openstack-lbaas | 20:37 | |
*** bharathm has quit IRC | 20:40 | |
*** minwang2 has quit IRC | 20:40 | |
nmagnezi | xgerman, hi | 20:42 |
nmagnezi | xgerman, still around? | 20:42 |
xgerman | yep | 20:42 |
*** bharathm has joined #openstack-lbaas | 20:42 | |
nmagnezi | xgerman, great :) so re: lb-network | 20:42 |
nmagnezi | xgerman, will all amphora vms be members of that network? even if they were created from within different tenants? | 20:43 |
xgerman | yep, they all will be in that network so we can control them | 20:43 |
nmagnezi | xgerman, so what if I have a large number of amphoras? won't it be an issue? | 20:44 |
*** minwang2 has joined #openstack-lbaas | 20:44 | |
xgerman | maybe… we thought about having several networks to shard that… but it never got implemented | 20:45 |
nmagnezi | xgerman, did you test with a high number of amphoras? say I have 250 loadbalancers.. will heartbeat work well? will failover (which is new) be affected? | 20:46 |
xgerman | no, we did not | 20:46 |
xgerman | maybe blogan has more tests | 20:47 |
nmagnezi | xgerman, also, if you don't mind, a small question about active/standby. how is the scheduling done? does it make sure that both active and standby vms are not on the same compute node? | 20:47 |
nmagnezi | blogan, ^^ when you see it, please let me know :) | 20:47 |
xgerman | it’s not doing any scheduling right now | 20:48 |
nmagnezi | xgerman, so who decides where the vm is spawned? | 20:48 |
xgerman | but we wanted to tap into nova's anti-affinity.. also M is still under development ;-) | 20:48 |
nmagnezi | xgerman, care to share the patch url? :-) | 20:48 |
xgerman | well, I think nova has that feature | 20:50 |
xgerman | http://docs.openstack.org/juno/config-reference/content/section_compute-scheduler.html | 20:50 |
xgerman | so we need to code that for active-passive | 20:50 |
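[Editor's note: the nova feature referenced above is server groups with an anti-affinity policy, which ask the scheduler to place group members on different compute nodes. A minimal sketch of how a controller could use it, assuming python-novaclient and that ServerGroupAntiAffinityFilter is enabled in the nova scheduler; the credentials, names, and IDs below are hypothetical, not Octavia's actual code:]

```python
# Sketch only: place an active/standby amphora pair on distinct hosts
# via a nova server group with an anti-affinity policy.
from novaclient import client

nova = client.Client('2', 'admin', 'secret', 'demo',
                     'http://controller:5000/v2.0')

# Members of this group are scheduled onto different compute nodes.
group = nova.server_groups.create(name='amphora-ha-pair',
                                  policies=['anti-affinity'])

for role in ('active', 'standby'):
    nova.servers.create(name='amphora-%s' % role,
                        image='<amphora-image-uuid>',   # hypothetical ID
                        flavor='<amphora-flavor-id>',   # hypothetical ID
                        scheduler_hints={'group': group.id})
```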
nmagnezi | xgerman, IMHO, this is very important. two amphora vms spawned on the same compute node.. is.. well.. it's just bad practice :) | 20:54 |
xgerman | yeah, I know... | 20:54 |
nmagnezi | xgerman, ack. also today I've tested the image-builder script. worked fine with ubuntu. failed for both centos and fedora :( | 20:55 |
xgerman | yeah, we only test with ubuntu | 20:55 |
johnsom | Yeah, sadly the Redhat side has not had much testing. | 20:56 |
nmagnezi | xgerman, I will try to help with that | 20:56 |
nmagnezi | johnsom, ^ | 20:56 |
xgerman | thanks | 20:56 |
johnsom | Please feel free to put up some patches | 20:56 |
nmagnezi | will do | 20:56 |
*** doug-fish has joined #openstack-lbaas | 20:56 | |
johnsom | Cool, ping me if you have questions about the image-builder scripts | 20:56 |
nmagnezi | johnsom, sure will! | 20:57 |
nmagnezi | johnsom, could you tell me, in short, aside from haproxy and keepalived, what else is running on the amphora? (not sure what responds to the health monitor calls) | 20:58 |
xgerman | we run our amphora-agent | 20:59 |
johnsom | Other than those, and the standard cloudinit stuff, we install octavia in the image and run the amphora-agent (python code) | 20:59 |
nmagnezi | xgerman, btw, the docs about housekeeping need some elaboration (what are amphora spare pools?): http://docs.openstack.org/developer/octavia/specs/version0.5/housekeeping-manager-interface.html#problem-description | 20:59 |
xgerman | oh, the idea is that spinning up a vm takes time and so you can spin them up beforehand | 21:00 |
xgerman | and the spare pool is how many you keep around | 21:00 |
xgerman | that should make failover (if you don’t use active-standby) and provisioning faster | 21:01 |
nmagnezi | xgerman, is that configurable? (amount of amphoras etc) | 21:01 |
xgerman | yep | 21:01 |
nmagnezi | johnsom, this code? (somewhere here): https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent | 21:02 |
nmagnezi | xgerman, octavia.conf i presume | 21:02 |
xgerman | exactly | 21:02 |
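[Editor's note: a hedged sketch of what that looks like in octavia.conf; the section and option names below are recalled from the housekeeping settings of this era and should be verified against the sample config shipped in the tree:]

```ini
[house_keeping]
# Number of ready-booted spare amphorae to keep on hand
# (0 disables the spare pool).
spare_amphora_pool_size = 2
# How often, in seconds, housekeeping checks and replenishes the pool.
spare_check_interval_seconds = 30
```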
johnsom | Yes, that is the agent code | 21:03 |
blogan | nmagnezi: i have no additional tests for this yet, and we're not going to be using active-passive internally either, but for our lb-network we will be using a provider network at first, and then later doing something else to improve scalability | 21:03 |
blogan | nmagnezi: but we'll be doing an active-active thing, and yes we will need the anti-affinity as well | 21:04 |
johnsom | FYI: https://blueprints.launchpad.net/octavia/+spec/anti-affinity | 21:04 |
johnsom | Just to track it | 21:04 |
blogan | johnsom: excellent | 21:05 |
nmagnezi | blogan, hi :) i'm not 100% sure I follow that provider network thing. also in active passive what replaces lb-network? | 21:05 |
nmagnezi | johnsom, thanks!! | 21:05 |
blogan | nmagnezi: oh well i guess i mentioned the provider network because if we used an isolated network we (internally) have a limit on the number of ports that can be on that network, so we're using a provider network | 21:06 |
blogan | nmagnezi: i dont follow your second question | 21:06 |
nmagnezi | blogan, this is exactly my concern. if we have more than 256 amphoras a class C network won't be enough | 21:07 |
nmagnezi | blogan, as for my second question: "we're not going to be using active-passive internally either", did you mean you are not going to use the lb-network the way you use it today for active-passive? | 21:08 |
blogan | nmagnezi: yep we have the same limit | 21:08 |
nmagnezi | blogan, so, is there a plan to solve it? | 21:08 |
blogan | nmagnezi: yeah we won't be using it for the heartbeating bc we won't be doing active-passive, but we will be using it for the communication and configuration of the amphorae | 21:08 |
nmagnezi | blogan, so how will heartbeat communication work? | 21:09 |
blogan | nmagnezi: we do have a plan internally to solve it that we were going to discuss with the community once we have it fleshed out more, but we're not there yet | 21:09 |
nmagnezi | blogan, ack | 21:09 |
blogan | nmagnezi: sorry, i meant more the vrrp heartbeat, the heartbeat that octavia uses will be using the lb-network | 21:10 |
blogan | and we will be using that | 21:10 |
blogan | i'm sure i have been more confusing than helpful :) | 21:10 |
nmagnezi | btw I would love to join your weekly meetings, any chance to make them more Europe friendly? (timezone wise) | 21:10 |
nmagnezi | blogan, no you are not :) so how will the vrrp heartbeat work between amphoras? | 21:11 |
blogan | 10-11 pm isn't europe friendly? :) | 21:11 |
nmagnezi | lol | 21:11 |
nmagnezi | sure it is | 21:11 |
nmagnezi | :| | 21:11 |
johnsom | Right now the heartbeat is on the tenant network | 21:11 |
nmagnezi | johnsom, is that intentional? why is it like that? | 21:12 |
johnsom | It was discussed and just ended up that way. It probably should be changed in the future. I argued for its own network. | 21:13 |
nmagnezi | btw guys, one final question for you today (and thanks a lot for your answers). how is a setup admin expected to update his amphoras? say tomorrow we want to use a newer version of keepalived | 21:13 |
nmagnezi | johnsom, got it | 21:13 |
johnsom | Update the image, update the conf to use the new image, failover the amphora | 21:13 |
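[Editor's note: concretely, "update the conf" likely means pointing the controller worker at the rebuilt image; a hedged sketch, with the option name recalled from the controller_worker settings of this era and worth double-checking against the sample config:]

```ini
[controller_worker]
# Glance image ID of the freshly built amphora image; newly created
# amphorae, including failover replacements, boot from this image.
amp_image_id = <new-glance-image-uuid>
```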
nmagnezi | johnsom, no solution without killing amphoras? | 21:14 |
nmagnezi | btw how do you manually trigger a failover? (that's a sub-question :) ) | 21:14 |
johnsom | Not currently. I think the ubuntu image has security updates enabled, but that might not work in all environments. If the operator has some other infrastructure, they can always update the image to use it. | 21:15 |
*** bharathm has quit IRC | 21:16 | |
johnsom | Right now, you can stop the amphora-agent or turn off the network port. In the future that should be part of the operator API. | 21:16 |
blogan | nmagnezi: we won't be using vrrp for our amphorae, we'll be doing a "special" active/active topology that won't require vrrp (though we still will do a fast failure detection) | 21:25 |
*** bharathm has joined #openstack-lbaas | 21:25 | |
blogan | johnsom: do you know if taskflow has something akin to a foreachflow (my new name for this), for example: i want a set of tasks to run for each item in an iterable? | 21:25 |
johnsom | Yes, they do | 21:26 |
blogan | johnsom: link? | 21:26 |
johnsom | looking | 21:27 |
blogan | johnsom: can't seem to find it in the docs | 21:27 |
johnsom | I think it is this: http://docs.openstack.org/developer/taskflow/patterns.html#taskflow.flow.Flow.iter_links | 21:28 |
johnsom | But still looking for a better example | 21:28 |
johnsom | No, that isn't it. hang on | 21:29 |
johnsom | I may be getting this confused with the retry foreach. Let's ping the expert, harlowja do you have an answer for blogan? | 21:37 |
johnsom | You can set them up as parallel sub-flows at flow definition time. That's how we build multiple amphorae in parallel for the active/standby. | 21:39 |
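[Editor's note: a minimal sketch of that definition-time pattern, assuming the standard taskflow API: one linear sub-flow per item nested in an unordered flow, which a parallel engine can run concurrently. Task and flow names here are illustrative:]

```python
from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow, unordered_flow


class CreateAmphora(task.Task):
    def execute(self, amphora_id):
        print('booting amphora %s' % amphora_id)


# The item count must be known here, at flow definition time;
# that is exactly the limitation raised below.
flow = unordered_flow.Flow('create-all-amphorae')
for i in range(3):
    sub = linear_flow.Flow('create-%d' % i)
    sub.add(CreateAmphora(name='create-amphora-%d' % i,
                          inject={'amphora_id': i}))
    flow.add(sub)

engines.run(flow, engine='parallel')
```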
blogan | johnsom: yeah but flow definition time is not in time to know how many items are going to be in that list | 21:40 |
blogan | i mean its too early | 21:40 |
johnsom | Right.. | 21:41 |
blogan | it has to be known during run time | 21:41 |
nmagnezi | blogan, so when active/active gets merged, does it deprecate active/standby? or can users (admins) choose and configure? | 21:56 |
*** diogogmt has quit IRC | 21:57 | |
*** manishg_ has quit IRC | 21:58 | |
blogan | nmagnezi: pretty sure it'll be configurable, seems like it'd be a good fit for flavors so users can have different types of load balancers, but that's yet to be determined | 21:58 |
*** manishg has joined #openstack-lbaas | 22:01 | |
*** TrevorV has quit IRC | 22:01 | |
harlowja | johnsom blogan yo | 22:04 |
nmagnezi | blogan, I agree. it's kind of like a service level for an amphora | 22:04 |
nmagnezi | blogan, thanks for the answer | 22:05 |
*** bdrich has quit IRC | 22:05 | |
johnsom | harlowja blogan had a question about iterating tasks | 22:05 |
*** bharathm has quit IRC | 22:05 | |
harlowja | blogan johnsom so taskflow currently doesn't have that, thats basically a dynamic flow, all the current stuff is statically compiled down to a DAG that is then traversed over | 22:06 |
harlowja | what u are thinking seems more akin to a DAG that changes | 22:06 |
harlowja | *which isn't impossible, just doesn't exist | 22:06 |
harlowja | all current patterns get smashed together when http://docs.openstack.org/developer/taskflow/engines.html#compiling is called | 22:06 |
harlowja | and a DAG is formed (with other nodes that are used for internals as needed) which is then used during running | 22:07 |
blogan | harlowja: ah okay, so i was thinking maybe a ForEachFlow could be created that takes the name of an iterable in the __init__ of that flow, and that name should be provided by some previous atom/task in that flow (or store) | 22:07 |
harlowja | right, then it expands the DAG when its activated ? | 22:08 |
blogan | harlowja: oh i see, since its all smashed together at compile time, and this iterable's size would only be known at run time, it can't be done right now | 22:08 |
harlowja | right blogan not impossible, just would require some work | 22:08 |
harlowja | just hasn't been done, but could be | 22:08 |
harlowja | if u dare to try, def could be made possible | 22:09 |
harlowja | if u accept this mission | 22:09 |
harlowja | lol | 22:09 |
harlowja | this message will self-destruct | 22:09 |
harlowja | lol | 22:09 |
blogan | harlowja: yeah, i think having something like that would really increase the reusability of tasks and flows, just from my limited experience | 22:09 |
harlowja | right | 22:09 |
*** bharathm has joined #openstack-lbaas | 22:10 | |
blogan | bc right now i think i'm going to have to recreate, in the flow that does the looping, all the tasks from the flow i want to reuse | 22:10 |
harlowja | right | 22:10 |
blogan | DoOneA, DoOneB -> DoManyA, DoManyB | 22:10 |
johnsom | Well, you could just "run" n flows passing in the unique data | 22:11 |
blogan | but doing it that way has a hacky feel because i want to reuse the code thats in the DoOneA.execute, so I'm tempted to instantiate DoOneA and call execute | 22:11 |
harlowja | so a way that could be done, is basically u need figure out how to add a 'delayed' node into the DAG at https://github.com/openstack/taskflow/blob/master/taskflow/engines/action_engine/compiler.py#L130 | 22:11 |
blogan | inside a loop | 22:11 |
harlowja | and then at runtime, figure out what to do when that delayed node is encountered | 22:11 |
harlowja | which would involve some work around https://github.com/openstack/taskflow/blob/master/taskflow/engines/action_engine/builder.py#L123 (and subsequent code) | 22:11 |
harlowja | depends on how much work u want to do/try/explore ;) | 22:12 |
harlowja | *if u want to do it | 22:12 |
harlowja | it should be something that shouldn't be super-hard, but will require some exploration | 22:13 |
blogan | i'd actually like to, but then I may do a hacky way first and depending on how dirty i feel about it, attempt to do that | 22:13 |
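[Editor's note: for reference, a sketch of the "hacky way" under discussion, reusing the per-item task's logic inside one looping task. DoOneA/DoManyA are the names from the conversation above; this trades away per-item retry and revert:]

```python
from taskflow import task


class DoOneA(task.Task):
    def execute(self, item):
        # Stand-in for the real per-item work.
        return item * 2


class DoManyA(task.Task):
    """Run DoOneA's logic for each element of a runtime-sized iterable."""

    def execute(self, items):
        one = DoOneA()
        # Looping inside a single task keeps the statically compiled DAG
        # happy, but the engine can no longer retry, revert, or
        # parallelize the per-item steps individually.
        return [one.execute(item) for item in items]
```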
harlowja | thats fine | 22:13 |
blogan | harlowja: well there's definitely a deep dive into taskflow's guts that is required | 22:13 |
harlowja | right | 22:13 |
harlowja | i offer free consultations/deep dives | 22:13 |
harlowja | lol | 22:13 |
harlowja | for limited time only | 22:13 |
harlowja | lol | 22:13 |
harlowja | free of charge | 22:13 |
blogan | and then you require payment eh? | 22:13 |
harlowja | credit card number, that i will charge monthly after 30 days | 22:14 |
harlowja | lol | 22:14 |
blogan | sounds like a drug dealer | 22:14 |
harlowja | :-p | 22:14 |
blogan | a sophisticated drug dealer! | 22:14 |
harlowja | gotta keep up with the times u know | 22:14 |
blogan | android pay/apple pay? | 22:14 |
harlowja | sure | 22:14 |
blogan | bitcoin | 22:14 |
harlowja | meh | 22:14 |
harlowja | not that sophisticated | 22:14 |
harlowja | lol | 22:14 |
blogan | lol | 22:15 |
blogan | alright thanks for the info | 22:15 |
blogan | if i/someone did get this in taskflow i'd feel better about taskflow | 22:16 |
harlowja | taskflow-self-improvement program ftw | 22:17 |
harlowja | TFSI for short | 22:17 |
harlowja | TFSIP | 22:17 |
blogan | take a SIP of TaskFlow | 22:17 |
harlowja | lol | 22:17 |
harlowja | +2 | 22:17 |
*** rtheis has quit IRC | 22:18 | |
harlowja | http://luigi.readthedocs.org/en/stable/tasks.html#dynamic-dependencies is how another similarish library/framework does it, perhaps we can do similar stuff | 22:18 |
harlowja | although its slightly different, but something to look at/think about | 22:18 |
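[Editor's note: roughly what the linked luigi mechanism looks like: a task's run() may yield further requirements that are only computable at run time, which is the dynamic behavior taskflow's static DAG lacks. A condensed, hypothetical example:]

```python
import luigi


class ProcessItem(luigi.Task):
    item = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget('out/%s.txt' % self.item)

    def run(self):
        with self.output().open('w') as f:
            f.write(self.item)


class ProcessAll(luigi.Task):
    def run(self):
        # Dynamic dependencies: luigi suspends this task, schedules the
        # yielded tasks, and resumes once they complete; the item list
        # only has to exist at run time.
        items = ['a', 'b', 'c']
        yield [ProcessItem(item=i) for i in items]
```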
harlowja | depends on how TFSIP would work out | 22:19 |
harlowja | *in this case | 22:19 |
harlowja | anywayssss | 22:20 |
harlowja | ^ something like that would probably not work exactly, but we can work through something that would | 22:21 |
harlowja | *mainly due to out of process execution being possible in taskflow | 22:21 |
harlowja | and yielding from something out of process back to its 'engine' process is ummm, gonna be odd | 22:21 |
harlowja | odd/not work, lol | 22:22 |
harlowja | *although we could mark such 'delayed' things as not being able to run out of process | 22:22 |
*** manishg has quit IRC | 22:35 | |
*** manishg_ has joined #openstack-lbaas | 22:37 | |
*** manishg has joined #openstack-lbaas | 22:40 | |
*** minwang2 has quit IRC | 22:42 | |
*** manishg_ has quit IRC | 22:42 | |
*** bharathm has quit IRC | 22:52 | |
*** woodster_ has quit IRC | 22:56 | |
*** bharathm has joined #openstack-lbaas | 23:01 | |
nmagnezi | blogan, hi, just noticed an snat namespace when using a regular (single) amphora, why is this needed? | 23:16 |
*** sbalukoff has joined #openstack-lbaas | 23:19 | |
blogan | nmagnezi: where do you see the snat namespace? | 23:25 |
nmagnezi | blogan, i'm running an all-in-one devstack node | 23:26 |
blogan | nmagnezi: oh that might be for the L3 agent | 23:26 |
blogan | nmagnezi: bc that doesn't sound like something needed for octavia | 23:27 |
blogan | i could be wrong though | 23:27 |
nmagnezi | blogan, ack :) | 23:27 |
nmagnezi | blogan, just read about active/standby. keepalived, if I understand correctly, is running inside the amphoras - right? | 23:28 |
blogan | nmagnezi: correct | 23:28 |
nmagnezi | blogan, so both instances of keepalived communicate from within their amphoras, via.. lb-network? | 23:28 |
blogan | nmagnezi: if my memory serves me correct, and it probably doesn't, its doing it over the vip network, which i believe has its own problems, but we know about them | 23:32 |
nmagnezi | blogan, by vip network you mean the floating ip network (external)? or the tenant network? | 23:33 |
nmagnezi | and what problems? are there any bugs available to read? | 23:33 |
blogan | the network the user specifies they want their VIP to be allocated on, most likely a tenant network yes | 23:34 |
blogan | but the problem with that is the vrrp communication needs a port to listen on and that can conflict with a port a user wants their load balancer to listen on | 23:35 |
nmagnezi | blogan, yes, this is why in ha routers they create a dedicated network | 23:35 |
blogan | nmagnezi: yep and thats probably the solution we'll go with to fixing this | 23:39 |
nmagnezi | blogan, ack, i will try to follow-up | 23:39 |
nmagnezi | blogan, leaving now, thanks a lot for the answers :) | 23:39 |
blogan | nmagnezi: np, anytime | 23:40 |
*** bharathm has quit IRC | 23:40 | |
*** nmagnezi has quit IRC | 23:44 | |
rm_work | anyone know what time evgeny usually gets on? :) | 23:45 |
*** minwang2 has joined #openstack-lbaas | 23:46 | |
*** manishg has quit IRC | 23:49 | |
*** bharathm has joined #openstack-lbaas | 23:49 | |
*** yuanying has quit IRC | 23:50 | |
*** ducttape_ has quit IRC | 23:50 | |
*** yuanying has joined #openstack-lbaas | 23:51 |