*** yamamoto_ has joined #openstack-dragonflow | 00:18 | |
*** yamamoto_ has quit IRC | 00:24 | |
*** yamamoto_ has joined #openstack-dragonflow | 01:21 | |
*** yamamoto_ has quit IRC | 01:26 | |
*** yamamoto_ has joined #openstack-dragonflow | 02:22 | |
*** yamamoto_ has quit IRC | 02:28 | |
*** yamamoto has joined #openstack-dragonflow | 02:56 | |
*** yamamoto has quit IRC | 03:02 | |
*** afanti has joined #openstack-dragonflow | 03:35 | |
*** yamamoto has joined #openstack-dragonflow | 04:03 | |
*** yamamoto has quit IRC | 04:09 | |
*** yamamoto has joined #openstack-dragonflow | 05:05 | |
*** yamamoto has quit IRC | 05:10 | |
oanson | Morning | 05:31 |
oanson | dimak, Jenkins looks lovely in https://review.openstack.org/#/c/489003/ | 05:32 |
dimak | oanson, good morning, what a surprise | 05:47 |
dimak | quick, make tempest voting :{ | 05:47 |
dimak | :P | 05:47 |
dimak | oanson, the last recheck was due to a specific change? | 06:03 |
oanson | No | 06:03 |
oanson | And no to the voting tempest thing | 06:03 |
oanson | :) | 06:03 |
oanson | dimak, re https://review.openstack.org/#/c/480196/5/dragonflow/controller/df_local_controller.py@346 | 06:04 |
oanson | Do you have a suggestion? | 06:04 |
openstackgerrit | Omer Anson proposed openstack/dragonflow master: model proxy: Throw custom exception when model not cached https://review.openstack.org/495865 | 06:06 |
*** yamamoto has joined #openstack-dragonflow | 06:06 | |
dimak | oanson, consider you have an n-item-deep ref chain: a->b->....->n; if an update propagates from n back to a, that's n^2 DB gets | 06:11 |
dimak | I'd go for looking at cached copies | 06:11 |
dimak | and let sync bring updates on the next run | 06:11 |
dimak | that way you only have to do depth 1 | 06:11 |
oanson | All right. I'll do that | 06:12 |
dimak | well, not exactly depth 1 | 06:12 |
*** yamamoto has quit IRC | 06:12 | |
oanson | Not depth 1 at all :) | 06:12 |
oanson | But probably linear | 06:12 |
dimak | if we could assure there are no invalid references in db-store, it'd be depth one :P | 06:13 |
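The trade-off dimak describes, following a reference chain via per-object DB gets versus resolving it from cached copies, could be sketched like this. All names below are hypothetical illustrations, not the actual Dragonflow db_store API:

```python
# Hypothetical sketch; none of these names come from Dragonflow itself.

class NbDb:
    """Stand-in for the NB database that counts how often it is hit."""
    def __init__(self, objects):
        self.objects = objects  # id -> {'id': ..., 'ref': ...}
        self.gets = 0

    def get(self, obj_id):
        self.gets += 1
        return self.objects[obj_id]


def resolve_chain_from_db(db, start_id):
    """Naive approach: follow every reference with its own DB get."""
    chain, obj_id = [], start_id
    while obj_id is not None:
        obj = db.get(obj_id)
        chain.append(obj)
        obj_id = obj.get('ref')
    return chain


def resolve_chain_from_cache(db, cache, start_id):
    """Fetch only the updated object from the DB; resolve the rest
    from cached copies and let the periodic sync repair staleness."""
    chain = [db.get(start_id)]
    obj_id = chain[0].get('ref')
    while obj_id is not None and obj_id in cache:
        obj = cache[obj_id]
        chain.append(obj)
        obj_id = obj.get('ref')
    return chain
```

With the cached variant a stale or missing reference simply stops the walk early, and the next sync run brings the cache back up to date.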
dimak | also, there's another issue | 06:13 |
oanson | Shoot | 06:13 |
dimak | consider you have a single VM port, that belongs to tenant a | 06:13 |
dimak | and it uses a tenant b shared network through its router | 06:14 |
dimak | so you DFS there and retrieve the network | 06:14 |
dimak | and store it in db store | 06:14 |
dimak | next time sync runs it will throw it away | 06:14 |
dimak | because we're not subscribed to topic b | 06:15 |
oanson | Yes | 06:15 |
oanson | Want to implement garbage collection for models? | 06:15 |
dimak | nope | 06:16 |
dimak | we just have to make sure that for each model we fetch, we subscribe to its topic | 06:16 |
oanson | That's what we need. Subscribed models exist in their own right. Referenced models exist only as long as they are referenced | 06:16 |
oanson | Yes. Otherwise changes in the router won't be intercepted | 06:16 |
oanson | We also need to unsubscribe when references are 0 | 06:18 |
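The subscribe-on-first-reference / unsubscribe-at-zero idea could be sketched as a small reference counter. The names are hypothetical, not the real Dragonflow pub/sub interface:

```python
# Hypothetical sketch, not the actual Dragonflow pub/sub API.

class TopicSubscriptions:
    """Subscribe when a topic gains its first referencing object,
    unsubscribe when the reference count drops back to zero."""

    def __init__(self, subscribe, unsubscribe):
        self._subscribe = subscribe      # callable(topic)
        self._unsubscribe = unsubscribe  # callable(topic)
        self.refcount = {}               # topic -> live references

    def ref(self, topic):
        self.refcount[topic] = self.refcount.get(topic, 0) + 1
        if self.refcount[topic] == 1:
            self._subscribe(topic)

    def unref(self, topic):
        self.refcount[topic] -= 1
        if self.refcount[topic] == 0:
            del self.refcount[topic]
            self._unsubscribe(topic)
```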
dimak | You need the whole topic because there might be ports on another topic, and you won't be able to reach them without fetching them | 06:18 |
dimak | Or just stop maintaining topic list, and generate it each run in topology | 06:18 |
dimak | start from ports, get topics, get all objects in topic, get all references, get all topics, ... | 06:19 |
oanson | Makes sense | 06:19 |
oanson | Use topic as a coarse-grain filter, but sync/topology work bottom-up | 06:19 |
dimak | yes | 06:19 |
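The loop dimak outlines (ports, then topics, then objects, then references, then more topics, until nothing new appears) might look roughly like this; all function and parameter names are illustrative assumptions, not the actual topology service:

```python
# Illustrative sketch of the bottom-up discovery loop; none of these
# names come from the real Dragonflow topology service.

def discover_topics(local_ports, objects_by_topic, refs_of):
    """local_ports: iterable of (object_id, topic) for local OVS ports.
    objects_by_topic: topic -> ids of the objects in that topic.
    refs_of: object id -> list of (referenced_id, referenced_topic).
    Returns the fixed-point set of topics this node must pull."""
    topics = {topic for _obj, topic in local_ports}
    frontier = set(topics)
    while frontier:
        next_frontier = set()
        for topic in frontier:
            for obj in objects_by_topic.get(topic, ()):
                for _ref_id, ref_topic in refs_of.get(obj, ()):
                    if ref_topic not in topics:
                        topics.add(ref_topic)
                        next_frontier.add(ref_topic)
        frontier = next_frontier
    return topics
```

Regenerating this set on every sync run is what lets the node avoid maintaining a persistent topic list.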
oanson | And we need to identify first-class objects - OVS ports? | 06:19 |
oanson | Provider networks? | 06:20 |
dimak | what do you mean? | 06:20 |
oanson | From where do we start spanning the tree in sync/topology? | 06:20 |
dimak | I'd say start from ovs ports, and see how far that gets us ;) | 06:20 |
oanson | Anything that isn't referenced by lports isn't covered: e.g. Trunk ports, FIP | 06:21 |
dimak | as long as they share the topic it's alright | 06:22 |
dimak | One issue I could see is | 06:22 |
dimak | you connect to provider network by router from another tenant | 06:22 |
oanson | Hmm... In trunk port I guess we can assume the ChildPortSegmentation object is the same topic | 06:22 |
oanson | In FIP, I guess the same can be applied | 06:23 |
oanson | That router is identifiable by is_external:true, right? | 06:23 |
dimak | a network has router:external | 06:24 |
oanson | Bah | 06:24 |
oanson | In general, if a router connects two networks from different topics, one topic won't see it | 06:25 |
dimak | yes | 06:25 |
oanson | Multi-topic? :) | 06:26 |
dimak | how are you going to manage it? | 06:26 |
dimak | routers can be shared? | 06:26 |
oanson | In the Neutron API plugin layer. | 06:26 |
oanson | The topic doesn't *have* to be the tenant_id/project_id | 06:27 |
dimak | i.e. have shared property | 06:27 |
oanson | It can be a derived value. For routers - derived from all the tenant_ids it connects | 06:27 |
dimak | and how exactly are you going to subscribe to that? | 06:27 |
oanson | Yes, have a is_shared property, but have it fine-grained - shared between whom | 06:27 |
dimak | or just publish on all relevant topics? | 06:27 |
oanson | Dunno | 06:27 |
oanson | I want to say bitmask | 06:29 |
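The derived-topic idea for routers, publishing one update on the topic of every tenant the router connects, could be sketched as follows. The names are illustrative only, not the Neutron plugin or Dragonflow publisher API:

```python
# Illustrative only; not the actual Neutron/Dragonflow code.

def topics_for_router(router_interfaces):
    """Derive a router's topic set from the tenants owning the
    networks attached to it."""
    return sorted({iface['tenant_id'] for iface in router_interfaces})


def publish_router_update(publish, router, router_interfaces):
    """Publish the same router update once per derived topic, so
    every connected tenant's subscribers see the change."""
    for topic in topics_for_router(router_interfaces):
        publish(topic, router)
```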
dimak | oanson, https://bugs.launchpad.net/dragonflow/+bug/1712266 | 06:46 |
openstack | Launchpad bug 1712266 in DragonFlow "Selective topology distribution will not fetch shared objects" [Undecided,New] | 06:46 |
dimak | Ummm, topic is a bit off | 06:46 |
oanson | I think you're right - this feature needs a redesign | 06:47 |
dimak | Lets try again | 06:48 |
dimak | oanson, https://bugs.launchpad.net/dragonflow/+bug/1712266 | 06:48 |
openstack | Launchpad bug 1712266 in DragonFlow "Selective topology distribution will not fetch reachable objects of other tenants" [Undecided,New] | 06:48 |
dimak | This one is better | 06:48 |
oanson | dimak, this looks too big for pike | 06:51 |
oanson | Let's push it to Queens | 06:51 |
oanson | Shared objects weren't taken into account, and that's an irreconcilable design flaw. | 06:52 |
dimak | never thought it was for pike anyway | 06:52 |
dimak | xiao proposed a spec for that back in the day | 06:52 |
dimak | https://review.openstack.org/414697 | 06:53 |
oanson | Even after he left, he still contributes :) | 07:00 |
oanson | But there are some open unanswered questions there | 07:01 |
oanson | dimak, lihi, irenab, could we schedule a brainstorm for today 13:00ish utc? I want to discuss the selective proactive issue. (Here in the channel) | 07:38 |
dimak | sure | 07:39 |
irenab | ok | 07:40 |
lihi | 14:30? | 07:40 |
oanson | lihi, utc? | 07:41 |
lihi | oh utc. OK then (for 13:00) | 07:41 |
oanson | Cool. | 07:42 |
openstackgerrit | Omer Anson proposed openstack/dragonflow master: An instance update also sends an update on all referred instances https://review.openstack.org/480196 | 07:43 |
irenab | oanson, it would be helpful if you could share the context for the discussion prior to the meeting | 07:43 |
oanson | The issue I want to tackle is selective-proactive not supporting shared objects. There's a bug: https://bugs.launchpad.net/dragonflow/+bug/1712266 and the beginning of a spec: https://review.openstack.org/414697 | 07:46 |
openstack | Launchpad bug 1712266 in DragonFlow "Selective topology distribution will not fetch reachable objects of other tenants" [Undecided,New] | 07:46 |
oanson | We should discuss how to mitigate this: understand the problems we currently have, design a solution, and find a volunteer to fix it. | 07:47 |
irenab | oanson, do we have the requirements properly defined? | 07:48 |
oanson | I doubt it. | 07:48 |
irenab | Maybe it's worth starting by discussing requirements and not just solving the problems we see now | 07:49 |
oanson | Makes sense | 07:49 |
irenab | oanson, thanks for sharing the links | 07:51 |
*** Natanbro has joined #openstack-dragonflow | 08:18 | |
openstackgerrit | Merged openstack/dragonflow master: Tempest gate: Update apparmor with docker permissions https://review.openstack.org/495775 | 08:26 |
openstackgerrit | Merged openstack/dragonflow master: Move FIP status updates to L3 plugin https://review.openstack.org/493359 | 08:26 |
dimak | oanson, can we make etcd/zmq job voting? | 08:29 |
openstackgerrit | Merged openstack/dragonflow master: Reset in_port for provider ingress packets https://review.openstack.org/494164 | 08:32 |
*** yamamoto has joined #openstack-dragonflow | 08:35 | |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: DNAT: used cached floating lport https://review.openstack.org/496154 | 08:43 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: DNAT: use cached floating lport https://review.openstack.org/496154 | 08:44 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: [DNM][WIP] Fix tempest https://review.openstack.org/489003 | 08:54 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Disable l3 agent in gate https://review.openstack.org/483385 | 08:54 |
*** yamamoto has quit IRC | 08:59 | |
*** yamamoto has joined #openstack-dragonflow | 08:59 | |
*** yamamoto has quit IRC | 08:59 | |
*** kkxue has joined #openstack-dragonflow | 09:05 | |
*** kkxue_ has joined #openstack-dragonflow | 09:09 | |
*** kkxue has quit IRC | 09:10 | |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Remove enable_goto_flows configuration option https://review.openstack.org/494287 | 09:28 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Change DNAT to rely on Provider app for bridge access https://review.openstack.org/475362 | 09:28 |
*** kkxue__ has joined #openstack-dragonflow | 09:46 | |
*** kkxue_ has quit IRC | 09:47 | |
*** yamamoto has joined #openstack-dragonflow | 10:00 | |
*** yamamoto has quit IRC | 10:05 | |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Remove enable_goto_flows configuration option https://review.openstack.org/494287 | 10:10 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Change DNAT to rely on Provider app for bridge access https://review.openstack.org/475362 | 10:10 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Remove unique_key field from floating IPs https://review.openstack.org/496186 | 10:10 |
*** yamamoto has joined #openstack-dragonflow | 10:30 | |
*** kkxue__ has quit IRC | 10:30 | |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Remove unique_key field from floating IPs https://review.openstack.org/496186 | 10:44 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Remove enable_goto_flows configuration option https://review.openstack.org/494287 | 10:44 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Change DNAT to rely on Provider app for bridge access https://review.openstack.org/475362 | 10:44 |
*** lihi has quit IRC | 11:01 | |
*** lihi has joined #openstack-dragonflow | 11:05 | |
openstackgerrit | Omer Anson proposed openstack/dragonflow master: devstack: Add hooks to support deployment with Octavia https://review.openstack.org/496204 | 11:09 |
oanson | irenab, note ^^^ and https://review.openstack.org/496205 for octavia devstack integration | 11:10 |
irenab | oanson, great! | 11:11 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Remove enable_goto_flows configuration option https://review.openstack.org/494287 | 11:11 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Change DNAT to rely on Provider app for bridge access https://review.openstack.org/475362 | 11:11 |
*** yamamoto has quit IRC | 11:18 | |
*** yamamoto has joined #openstack-dragonflow | 11:30 | |
*** yamamoto has quit IRC | 11:46 | |
*** yamamoto has joined #openstack-dragonflow | 11:48 | |
dimak | oanson, we brainstorm? | 11:59 |
oanson | I think we said in an hour | 11:59 |
oanson | I was supposed to have a meeting now, but it may have been canceled | 12:00 |
oanson | Sure, let's start, and worst comes I'll take a back seat. | 12:00 |
oanson | lihi, irenab, you guys want to start now? | 12:00 |
dimak | oh, I thought it was 12, not 13 | 12:00 |
oanson | Or in an hour like we scheduled? | 12:00 |
dimak | I don't mind either | 12:01 |
irenab | now is better | 12:01 |
oanson | lihi, ? You available? | 12:02 |
*** yamamoto has quit IRC | 12:03 | |
oanson | I guess she isn't in. All right, let's wait a bit and try again later. | 12:04 |
*** yamamoto has joined #openstack-dragonflow | 12:04 | |
*** yamamoto has quit IRC | 12:10 | |
*** yamamoto has joined #openstack-dragonflow | 12:12 | |
lihi | Hi, sorry, I missed the notification. In 10 min? | 12:35 |
openstackgerrit | Dima Kuznetsov proposed openstack/dragonflow master: Disable l3 agent in gate https://review.openstack.org/483385 | 12:35 |
oanson | Sure | 12:37 |
oanson | No worries. | 12:37 |
oanson | dimak, irenab, lihi, 8 minutes? | 12:38 |
oanson | 7 now :) | 12:38 |
dimak | sure | 12:38 |
irenab | ok | 12:39 |
oanson | Hi all | 12:45 |
oanson | dimak, lihi, irenab, shall we start? | 12:45 |
dimak | o/ | 12:45 |
lihi | 👍 | 12:45 |
oanson | irenab, ? | 12:45 |
irenab | a min | 12:46 |
irenab | here | 12:48 |
oanson | All right! | 12:48 |
oanson | Let's start | 12:48 |
oanson | The issue is that selective-proactive works badly with shared objects | 12:48 |
dimak | not shared per se | 12:48 |
oanson | e.g. a network or router that's shared across tenants, like the provider network | 12:48 |
dimak | just anything that connects one tenant to another | 12:49 |
oanson | Yes. | 12:49 |
oanson | Shared as in used by objects from more than one tenant | 12:49 |
oanson | or project, even | 12:49 |
irenab | let's list the requirements we expect the SPD to provide | 12:50 |
irenab | oanson, I believe you started some etherpad | 12:50 |
oanson | I started something here: https://etherpad.openstack.org/p/dragonflow-selective-proactive | 12:50 |
oanson | irenab, yes | 12:50 |
irenab | let's just post all the cases that need to be supported | 12:51 |
oanson | In the end we'll end up with a garbage collection method: When a topic or object is no longer referenced, it should be collected | 12:52 |
oanson | Which could be done using back-refs, but in-memory, not in db | 12:53 |
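The in-memory back-ref bookkeeping oanson suggests might look roughly like this; the cache and its method names are hypothetical, not the actual db_store:

```python
# Hypothetical cache with in-memory back-references; not db_store.

class CachedStore:
    def __init__(self):
        self.objects = {}   # id -> object
        self.backrefs = {}  # id -> set of referrer ids

    def put(self, obj_id, obj, referrer=None):
        self.objects[obj_id] = obj
        if referrer is not None:
            self.backrefs.setdefault(obj_id, set()).add(referrer)

    def drop_referrer(self, obj_id, referrer):
        refs = self.backrefs.get(obj_id, set())
        refs.discard(referrer)
        if not refs:
            # Last referrer gone: collect the cached object.
            self.backrefs.pop(obj_id, None)
            self.objects.pop(obj_id, None)
```

Keeping the back-refs in memory means nothing extra has to be written to the NB database; the bookkeeping is purely local to each controller.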
dimak | but how do we decide to fetch it in the first place? | 12:53 |
lihi | Maybe only fetch it if needed | 12:53 |
irenab | some cases may lead to heavy load | 12:54 |
dimak | a separate service can generate tenant dependency graph | 12:56 |
oanson | Yes. | 12:56 |
dimak | but thats another spof | 12:56 |
oanson | But that means double retrieval | 12:56 |
oanson | spof? | 12:56 |
dimak | single point of failure | 12:56 |
dimak | and it's a full copy of the database for each service that does this | 12:57 |
dimak | doesn't have to be one on each compute | 12:57 |
oanson | Maybe the problem we're solving is too complex. Maybe not allow ad-hoc object connections | 12:57 |
dimak | neutron allows it | 12:57 |
oanson | Shared objects, such as routers and networks, are published to everyone. They always exist | 12:58 |
irenab | are we putting neutron RBAC aside? | 12:58 |
oanson | For now - Neutron rbac is limited to networks currently (according to https://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html ) unless I misunderstand | 12:59 |
irenab | it works for qos policy too as far as I remember | 12:59 |
oanson | And if we construct a dependency graph in the API layer, the effect is the same | 12:59 |
irenab | it started with network | 12:59 |
oanson | I think one of the issues we see is with routers. | 12:59 |
irenab | so this dependency is clear from the virtual topology view | 13:00 |
dimak | I don't see that routers have a shared property | 13:00 |
dimak | just networks | 13:00 |
oanson | Hmm | 13:01 |
oanson | No good then | 13:01 |
dimak | We can distribute routers to everyone | 13:01 |
irenab | I do not like the fact that we should understand the internals of neutron to make the decision | 13:02 |
oanson | Or at least we need something from within Neutron to abstract it for us | 13:02 |
oanson | Distributing routers is problematic - that would cascade to distributing networks and ports | 13:03 |
oanson | We lose the edge of selective-proactive | 13:03 |
irenab | opting to the worst case | 13:03 |
dimak | oanson, why? | 13:04 |
oanson | dimak, any network that has a router will be pulled. Any port connected to such a network will have to be pulled as well | 13:04 |
dimak | I think it's just router interfaces + their networks | 13:04 |
dimak | only if that router is relevant | 13:04 |
dimak | i.e. there's a route that connects local ports to that network | 13:05 |
oanson | If you pull the network, you're going to have to pull its ports for the l3 app | 13:05 |
irenab | oanson, maybe we can look at what information may be required locally, and given this decide when it's enough to stop pulling | 13:05 |
dimak | I won't | 13:05 |
oanson | dimak, sorry? | 13:05 |
oanson | irenab, that means every application has to report the relations it counts on | 13:06 |
dimak | oanson, you just need routers + interfaces (their networks) | 13:06 |
irenab | oanson, report to whom? | 13:06 |
oanson | l3 does l2 lookup within the destination network. Then we probably need to pass it to the tunneling app. | 13:06 |
oanson | irenab, to the new topology mechanism | 13:06 |
irenab | I missed the service decomposition phase :-) | 13:07 |
oanson | I would like to opt for a simpler solution: We do keep a dependency graph, but only between topics. i.e. if a->b, then pulling topic a also pulls b. The bookkeeping is kept to Neutron APIs, maybe using Neutron DB to store the extra data. In NB DB store only the tree. | 13:08 |
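The coarse-grained variant, a dependency graph kept only between topics, reduces to a transitive closure at pull time. This is a sketch with illustrative names; the graph itself would be maintained by the Neutron API layer as suggested above:

```python
# Sketch with illustrative names; the deps mapping would be built
# and stored by the Neutron API side, not invented locally.

def topics_to_pull(deps, start_topic):
    """deps: topic -> set of topics it depends on (a->b means that
    pulling a also requires b). Returns the transitive closure."""
    seen, stack = set(), [start_topic]
    while stack:
        topic = stack.pop()
        if topic in seen:
            continue
        seen.add(topic)
        stack.extend(deps.get(topic, ()))
    return seen
```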
oanson | irenab, it was implicit | 13:08 |
lihi | oanson, how do you know to build it? | 13:08 |
irenab | if topic is tenant/project, I do not like the granularity | 13:09 |
oanson | I should have all the info on the Neutron API side | 13:09 |
oanson | irenab, topic doesn't have to be tenant/project | 13:09 |
oanson | But I doubt we can do something more fine-grained | 13:09 |
dimak | lihi, we can map out the api actions that create dependencies | 13:09 |
dimak | for now it is mostly add/delete router interface | 13:09 |
irenab | we may have others too, such as qos policies that are shared across projects | 13:10 |
oanson | Yes. | 13:11 |
irenab | we still may have it lazily loaded, retrieved when needed | 13:12 |
*** yamamoto has quit IRC | 13:12 | |
oanson | irenab, then they need a special topic | 13:13 |
irenab | oanson, seems you already have some design in mind. Care to share? | 13:14 |
oanson | So far we have two plausible solutions: 1. Each app reports its dependencies, and topology service pulls the additional info needed. 2. We build a coarse-grain dependency graph on the Neutron side. | 13:14 |
oanson | irenab, no, I'm just making it up as I go along | 13:14 |
*** yamamoto has joined #openstack-dragonflow | 13:14 | |
oanson | If we load lazily - we have to listen to updates (e.g. register to the topic for updates for that object) | 13:15 |
dimak | oanson, 1 requires backrefs? | 13:15 |
oanson | dimak, possibly, but maybe we can keep them local to the topology service | 13:15 |
oanson | That means we need a special topic, since we don't want to pull all objects from the second topic. Or maybe we do | 13:15 |
dimak | if there's a X->Y dependency, and we have only topic Y, how will topology guess topic X? | 13:16 |
oanson | Yes, that won't work | 13:16 |
dimak | Somewhere, something needs a global view | 13:16 |
oanson | Yes. We can't get away from that | 13:16 |
dimak | or each db update should keep those updated | 13:17 |
dimak | but that requires high level of consistency | 13:17 |
oanson | I don't want to manually add back-refs to the api. It will be hell to maintain | 13:17 |
irenab | can you give some concrete example for the x->Y case? | 13:18 |
dimak | shared network, single router | 13:18 |
dimak | each tenant gets router interface | 13:18 |
oanson | Or lports on an lswitch, but of different topics | 13:18 |
irenab | in case of shared networks there will be more than one interface on the router? | 13:19 |
oanson | I think each network is limited to a single router interface | 13:19 |
irenab | in neutron for sure | 13:20 |
oanson | We have a way to support the general case, but as long as Neutron is the only API, I don't think it has value | 13:20 |
dimak | network can have several router interfaces | 13:21 |
irenab | lets not overcomplicate if not required for now | 13:21 |
irenab | dimak, yes, but on different routers. Or is there value in having them on the same one? | 13:21 |
dimak | different router | 13:22 |
*** yamamoto has quit IRC | 13:22 | |
irenab | so is there any conclusion? | 13:24 |
oanson | 1. We will need an orchestrator on the API level | 13:25 |
oanson | Which will be difficult if we don't want something neutron specific | 13:25 |
dimak | irenab, what I was trying to describe: https://paste.fedoraproject.org/paste/N9puUzIJ0WztHEwjUN4djw | 13:25 |
oanson | This is definitely something we need to support | 13:25 |
irenab | dimak, ok. Seems like typical neutron deployment | 13:26 |
oanson | 1. implies that we need a dependency graph to publish to the compute nodes | 13:26 |
oanson | So that's 2. | 13:26 |
lihi | Does not being neutron-specific make things so much more complicated? | 13:27 |
irenab | dependency graph is topic based or entity based? | 13:27 |
oanson | irenab, undecided | 13:27 |
dimak | irenab, I think the more common case is a router-per-tenant | 13:27 |
oanson | lihi, depends how we do it. I'm afraid it will become too tightly coupled with neutron, and it will be hard to generalise | 13:28 |
irenab | I think both cases are possible, depends who manages the router | 13:28 |
irenab | with "give me a network", probably the admin will be responsible for the router | 13:28 |
dimak | well yes, both are something we should support | 13:28 |
oanson | provider network is a must - we need it to support hierarchical port binding | 13:29 |
irenab | oanson, we need to stick to DF model and topics | 13:29 |
oanson | irenab, is point 3 on the etherpad sufficient? 'Dependency graph must be DF model based, to avoid Neutron tight-coupling' | 13:30 |
irenab | yes | 13:31 |
irenab | do we want to consider selective reactive? | 13:31 |
oanson | irenab, is that relevant? Performance-wise? | 13:32 |
irenab | or it fits into lazily loading | 13:32 |
irenab | this may be only on the first hit, then you have it cached | 13:32 |
oanson | Sorry? | 13:33 |
irenab | something similar to Midonet, which fetches the relevant model the first time it needs the details; later the model is in the local cache | 13:33 |
oanson | What about services reporting their required references/back-refs? do we need it? Possibly in the API layer? | 13:34 |
oanson | irenab, I think that's what we have with the db_store pattern | 13:34 |
irenab | I think it should be clear from the models | 13:34 |
oanson | Not necessarily | 13:34 |
oanson | l3 app requires the routers connected to every network. But the network model shouldn't mention routers, since routers are an app/feature/extension that sits on top of networks | 13:35 |
irenab | maybe we should check case by case, l3 is about routing | 13:37 |
oanson | Not sure I follow | 13:37 |
irenab | maybe we should check app by app | 13:38 |
oanson | We should | 13:38 |
irenab | for l3 app its about routing, so l3 will need routers data | 13:38 |
irenab | + networks that have interface on routers | 13:38 |
oanson | Yes | 13:38 |
dimak | note that a network can be a few hops away | 13:39 |
oanson | We start charting the dependency graph from OVS ports. That's the only thing we know we have. | 13:39 |
irenab | oanson, ovs port represents neutron port? | 13:40 |
oanson | dimak, currently we definitely only support one hop. I think in Neutron that's enough | 13:40 |
oanson | irenab, yes | 13:40 |
oanson | https://review.openstack.org/#/c/480195/ makes it official | 13:40 |
irenab | I have 5 mins before I must leave | 13:40 |
oanson | irenab, no worries. Worst comes we can continue tomorrow :) | 13:40 |
oanson | There's no rush. This won't be done before pike | 13:41 |
oanson | All right, new plan | 13:41 |
oanson | Let's table this discussion. Pick it up again in the vPTG | 13:41 |
irenab | speaking of pike, do we have SPD disabled by default? | 13:41 |
oanson | Not yet. Who wants to take it? | 13:42 |
oanson | For small deployments and development, the current selective proactive solution is good enough. Maybe make shared stuff an 'all topics' model, but that requires in depth testing | 13:42 |
dimak | for small deployments you don't need selective | 13:43 |
oanson | dimak, yes. That's what I meant :) | 13:43 |
oanson | Second thing is that I don't see the subnet model and dhcp model changes getting in before the tag cutoff, and I don't want to rush us | 13:43 |
lihi | SPD? | 13:43 |
irenab | Selective Proactive dist | 13:44 |
oanson | So I want to discuss upgrade path. Specifically adding a Grenade test to make sure we can upgrade from pike to queens and enforce writing correct db migration code | 13:44 |
irenab | sorry guys and girls, have to drop the discussion | 13:45 |
dimak | bye | 13:45 |
oanson | irenab, no worries. Thanks for your help! | 13:45 |
lihi | bye | 13:45 |
oanson | So this is the plan: Disable SPD by default, discuss it in PTG, and prioritize the Grenade test before patches: https://review.openstack.org/#/c/494557 and https://review.openstack.org/#/c/480196 | 13:46 |
oanson | lihi, dimak, is this all right? Do we have consensus? | 13:46 |
dimak | +1 | 13:46 |
lihi | yes | 13:46 |
oanson | All right. Any volunteers for the Grenade test, or should I take it? | 13:47 |
oanson | I am free now that my two big patches are postponed :) | 13:47 |
oanson | And I'll disable SPD by default now. | 13:47 |
oanson | Grenade* | 13:48 |
dimak | I'll take a look, see how hard it is to set up an upgrade job | 13:48 |
oanson | dimak, not too many plates in the air? | 13:48 |
dimak | i'll juggle | 13:49 |
oanson | I can take it. I'm fairly free | 13:49 |
oanson | dimak, ? | 13:52 |
dimak | It's yours if you insist | 13:53 |
oanson | I do :) | 13:53 |
oanson | My biggest task was just postponed :) | 13:53 |
*** igordc has joined #openstack-dragonflow | 14:02 | |
oanson | dimak, you still here? | 14:15 |
*** mlavalle has joined #openstack-dragonflow | 14:16 | |
*** yamamoto has joined #openstack-dragonflow | 14:23 | |
openstackgerrit | Omer Anson proposed openstack/dragonflow master: Disable Selective-Proactive Distribution by default https://review.openstack.org/496247 | 14:28 |
*** yamamoto has quit IRC | 14:28 | |
dimak | oanson, missed your ping, yes | 14:40 |
oanson | I was trying to remember what we decided about bug #1708178 | 14:41 |
openstack | bug 1708178 in DragonFlow "LBaaSv2 with 3rd party provider does not work if L3agent is disabled" [Critical,New] https://launchpad.net/bugs/1708178 | 14:41 |
dimak | irenab and I found it working with HAProxy | 14:41 |
oanson | If I recall correctly, we agreed to disable it in favor of bug #1712266 | 14:41 |
openstack | bug 1712266 in DragonFlow "Selective topology distribution will not fetch reachable objects of other tenants" [Undecided,New] https://launchpad.net/bugs/1712266 | 14:41 |
dimak | I'm not convinced those 2 are related | 14:42 |
dimak | lbaas port was different tenant? | 14:43 |
oanson | You said the first is invalid. We opened the second since the question of SPD exists | 14:43 |
dimak | even if so, both had local ports | 14:43 |
oanson | I think there was a cross-tenant issue, since disabling SPD solved the issue | 14:43 |
dimak | I believe you but I can't see why that mattered | 14:44 |
oanson | I didn't see the system. Those are the reports from you, irena and eyal | 14:44 |
dimak | I think we should try again with spd on | 14:45 |
dimak | maybe there's something else at play | 14:45 |
oanson | Sure. But as long as it's reported working, I want to bump it down to medium | 14:46 |
dimak | ok | 14:46 |
oanson | I also want to verify the l3-agent / advanced services before closing the bug completely | 14:46 |
dimak | speaking of l3 agent | 14:46 |
dimak | please review https://review.openstack.org/#/c/483385/ when you get a moment | 14:47 |
dimak | lihi too :) | 14:47 |
dimak | (just fixed a typo in the commit message) | 14:47 |
oanson | Done. Yes - I see it's only rebases | 14:48 |
dimak | I have to step outside for a while, I'll check in later | 14:48 |
oanson | No worries. I'm leaving soon, so have a good evening :) | 14:48 |
lihi | Done | 14:50 |
lihi | :) | 14:50 |
openstackgerrit | Omer Anson proposed openstack/dragonflow master: LBaaS spec https://review.openstack.org/477463 | 14:52 |
openstackgerrit | Omer Anson proposed openstack/dragonflow master: Fix docs warnings in extra_dhcp_opts spec https://review.openstack.org/496264 | 14:52 |
openstackgerrit | Omer Anson proposed openstack/dragonflow master: devstack: Add hooks to support deployment with Octavia https://review.openstack.org/496204 | 15:01 |
*** yamamoto has joined #openstack-dragonflow | 15:25 | |
*** yamamoto has quit IRC | 15:30 | |
openstackgerrit | Merged openstack/dragonflow master: model proxy: Throw custom exception when model not cached https://review.openstack.org/495865 | 15:41 |
*** afanti has quit IRC | 15:55 | |
openstackgerrit | Yuval Brik proposed openstack/dragonflow master: Redis Driver Rewrite [WIP] https://review.openstack.org/496299 | 16:00 |
*** Natanbro has quit IRC | 16:03 | |
*** yamamoto has joined #openstack-dragonflow | 16:26 | |
*** yamamoto has quit IRC | 16:32 | |
openstackgerrit | Merged openstack/dragonflow master: Disable l3 agent in gate https://review.openstack.org/483385 | 16:54 |
*** yamamoto has joined #openstack-dragonflow | 17:28 | |
*** yamamoto has quit IRC | 17:35 | |
*** yamamoto has joined #openstack-dragonflow | 18:31 | |
*** yamamoto has quit IRC | 18:37 | |
*** yamamoto has joined #openstack-dragonflow | 19:33 | |
*** yamamoto has quit IRC | 19:38 | |
*** igordc has quit IRC | 20:23 | |
*** yamamoto has joined #openstack-dragonflow | 20:35 | |
*** yamamoto has quit IRC | 20:38 | |
*** yamamoto has joined #openstack-dragonflow | 20:38 | |
*** yamamoto has quit IRC | 20:39 | |
*** yamamoto has joined #openstack-dragonflow | 21:10 | |
*** yamamoto has quit IRC | 21:15 | |
*** yamamoto_ has joined #openstack-dragonflow | 22:11 | |
*** yamamoto_ has quit IRC | 22:17 | |
*** yamamoto_ has joined #openstack-dragonflow | 23:13 | |
*** yamamoto_ has quit IRC | 23:18 | |
*** mlavalle has quit IRC | 23:38 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!