*** sripriya has quit IRC | 00:41 | |
openstackgerrit | Santosh Kodicherla proposed stackforge/tacker: Add tacker functional tests https://review.openstack.org/222329 | 01:03 |
*** santoshkumark has quit IRC | 01:26 | |
*** lhcheng_ has joined #tacker | 01:45 | |
*** lhcheng has quit IRC | 01:45 | |
*** s3wong has quit IRC | 01:59 | |
*** changzhi has joined #tacker | 02:26 | |
*** sridhar_ram1 has quit IRC | 03:00 | |
*** sridhar_ram has joined #tacker | 03:09 | |
*** sridhar_ram has quit IRC | 03:33 | |
*** sarob has quit IRC | 03:40 | |
*** lhcheng_ has quit IRC | 03:49 | |
*** lhcheng has joined #tacker | 03:49 | |
*** sridhar_ram has joined #tacker | 03:56 | |
*** sridhar_ram has quit IRC | 03:57 | |
*** sarob has joined #tacker | 04:03 | |
*** lhcheng has quit IRC | 04:30 | |
*** tbh has joined #tacker | 05:03 | |
*** santoshkumark has joined #tacker | 05:35 | |
*** lhcheng has joined #tacker | 06:18 | |
*** lhcheng has quit IRC | 06:23 | |
*** lhcheng has joined #tacker | 06:29 | |
*** tbh has quit IRC | 06:33 | |
*** tbh has joined #tacker | 07:05 | |
*** tbh has quit IRC | 07:12 | |
*** zeih has joined #tacker | 07:13 | |
*** santoshkumark has quit IRC | 07:23 | |
*** tbh has joined #tacker | 07:32 | |
*** lhcheng has quit IRC | 07:44 | |
*** tbh has quit IRC | 09:01 | |
*** tbh has joined #tacker | 09:14 | |
*** changzhi has quit IRC | 11:54 | |
*** tbh has quit IRC | 12:48 | |
*** zeih has quit IRC | 13:16 | |
*** zeih has joined #tacker | 13:18 | |
*** zeih has quit IRC | 13:19 | |
*** trozet has quit IRC | 13:22 | |
*** trozet has joined #tacker | 14:01 | |
*** sridhar_ram has joined #tacker | 14:01 | |
*** sridhar_ram has quit IRC | 14:03 | |
*** bobh has joined #tacker | 14:08 | |
*** tbh has joined #tacker | 15:42 | |
tbh | bobh, Hi | 15:42 |
tbh | bobh, I just got home | 15:42 |
tbh | bobh, I will set up my dev env now | 15:43 |
tbh | bobh, shouldn't we be able to see the heat stack when I create a VNF? | 15:43 |
*** bobh has quit IRC | 15:43 | |
*** sripriya has joined #tacker | 15:49 | |
lsp42 | stupid question time...how do I review a patchset? I can guess, but would rather know the method this group uses | 15:57 |
sripriya | lsp42: we follow the same procedure as the rest of the openstack projects. if you are satisfied with a patchset, you can leave a +1 on it. if you have questions/comments on the patchset, you can leave a -1 with your comments. also, if you would like to test the patchset in your own tacker environment, you can pull the review via 'git review -d <gerrit_review_no>' and test the changes. hope this answers your question. | 16:04 |
lsp42 | sripriya: Yes! Thank you. | 16:04 |
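A minimal sketch of the patchset-testing step sripriya describes, assuming a local tacker checkout with the git-review plugin installed; the wrapper function is purely illustrative, not project tooling:

```python
# Illustrative helper: pull a Gerrit change into the local checkout
# for hands-on testing, mirroring 'git review -d <number>' above.
import subprocess

def checkout_review(change_number: int) -> None:
    # 'git review -d <n>' downloads the latest patchset of change <n>
    # into a local branch named review/<owner>/<topic>
    subprocess.run(['git', 'review', '-d', str(change_number)], check=True)

checkout_review(222329)  # e.g. the functional-tests change above
```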
tbh | sripriya, how do you get the basic setup working? | 16:11 |
tbh | sripriya, what VNF do you generally create for dev work? | 16:11 |
*** lhcheng has joined #tacker | 16:26 | |
*** prashantD has joined #tacker | 16:52 | |
sripriya | tbh: i usually work with a cirros VNF most of the time | 16:53 |
tbh | sripriya, shouldn't we be able to see the heat stack when we create a VNF? | 16:53 |
sripriya | tbh: the heat stack is the one we spawn in the background when we create a VNF | 16:54 |
sripriya | which internally spawns a vm in nova/compute | 16:54 |
tbh | sripriya, but I can't see anything in heat stack-list, and no VM in nova list | 16:55 |
tbh | but I can see them in virsh list | 16:55 |
sripriya | all the heat stacks are created in the service tenant | 16:55 |
tbh | sripriya, oh got it | 16:55 |
tbh | thanks | 16:56 |
sripriya | you can log in as the 'tacker' user to see the stacks created in the service tenant, or as an admin under Admin -> Instances | 16:56 |
sripriya | tbh: cool | 16:56 |
tbh | sripriya, even as an admin I can't see the instances | 16:57 |
tbh | oh, Admin -> Instances; I was looking at Project -> Instances of the admin tenant | 16:57 |
sripriya | tbh: yes | 16:57 |
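For reference, a hedged sketch of inspecting those service-tenant stacks programmatically; it assumes python-heatclient with a keystoneauth1 session, and the auth URL, password, and domain values are placeholders:

```python
# Minimal sketch: list the heat stacks tacker created in the
# 'service' project, which a stack-list under your own tenant
# will not show. All credential values are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from heatclient import client as heat_client

auth = v3.Password(
    auth_url='http://controller:5000/v3',
    username='tacker',
    password='secret',
    project_name='service',
    user_domain_name='Default',
    project_domain_name='Default',
)
sess = session.Session(auth=auth)
heat = heat_client.Client('1', session=sess)

for stack in heat.stacks.list():
    print(stack.stack_name, stack.stack_status)
```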
tbh | sripriya, I forgot, I made a mistake in this review https://review.openstack.org/#/c/222715/, sorry for that | 17:01 |
tbh | sripriya, I moved the correct file, but changed the path in the wrong section | 17:01 |
*** bobh has joined #tacker | 17:01 | |
sripriya | tbh: that is fine, you need not be sorry :-) please go ahead with a follow-up patchset under the same bug and make the change clear in the commit message; that should be good... | 17:05 |
tbh | sripriya, waiting for this | 17:06 |
*** s3wong has joined #tacker | 17:09 | |
bobh | tbh: Hi - the heat stack shows up under the "service" account, at least until we start passing through the tenant credentials | 17:10 |
*** santoshkumark has joined #tacker | 17:13 | |
tbh | bobh, oh okay | 17:14 |
tbh | bobh, I am still setting up the env on my home laptop | 17:14 |
*** santoshkumark has quit IRC | 17:17 | |
openstackgerrit | bharaththiruveedula proposed stackforge/tacker: Moves NOOP driver from tests folder to tacker/vm https://review.openstack.org/223704 | 17:20 |
*** sridhar_ram has joined #tacker | 17:21 | |
lsp42 | Noob question #2 of the day. When it comes to downloading a patchset, do I just leave DevStack running in the state the stack.sh script left it in? | 17:23 |
lsp42 | so: stack.sh, download the patchset via the OpenStack methodology, tweak, and observe changes? | 17:24 |
bobh | lsp42: depends on whether it's a client or a server patch | 17:28 |
bobh | lsp42: if it's a client patch, yes that's pretty much it | 17:28 |
lsp42 | and if it's a server patch? | 17:28 |
bobh | lsp42: if it's a server patch you need to go into the screen session window for the server you are patching and ctrl-c to kill the server | 17:28 |
bobh | lsp42: then up-arrow to recall the server command and hit return to restart it | 17:29 |
lsp42 | bobh: Ah, ok. That's not so bad :) | 17:29 |
sridhar_ram | lsp42: yeah, be careful .. devstack will spoil you ;-) | 17:29 |
lsp42 | sridhar_ram: Don't joke about that! | 17:30 |
* lsp42 List of Tacker-related things is growing | 17:32 | |
tbh | bobh, I think it is better to have a monitor for each VDU | 17:34 |
tbh | let's say I want to deploy an IMS VNF | 17:35 |
tbh | it has many VDUs, and I want to monitor each VDU with a different monitoring driver and its own params | 17:36 |
tbh | so in this case it can be the same or a different monitoring driver, but definitely different params | 17:36 |
tbh | so it is better if we go with a monitor for each VDU | 17:36 |
tbh | what do you say? | 17:37 |
bobh | tbh: I think it's an architecture/design question - one person might want to write a single monitor driver that does all of the individual VDU monitoring itself, while another person might want the building blocks | 17:41 |
bobh | tbh: I think I'm leaning toward the individual VDU but that means more changes in the server | 17:41 |
bobh | tbh: it shouldn't be that hard, and there is always the possibility of doing a "VNF monitor" later that would cover all of the VDUs | 17:42 |
tbh | bobh, if we go with individual VDU drivers, the user can still write one driver and specify the same one for all the VDUs | 17:43 |
tbh | bobh, why many changes? I think we are not even storing the monitoring info in the db | 17:44 |
bobh | tbh: right, which could mean a lot of duplication in the template | 17:44 |
tbh | bobh, duplication occurs only if they want to use the same driver, but that's not the case every time | 17:45 |
bobh | tbh: Today the monitoring thread uses the Device table (I think) which has the monitor policy stored as an attribute of the VNF | 17:45 |
tbh | bobh, it is like writing a heat template to launch 5 VMs from the same image; we mention the same image each time, but that isn't duplication I guess | 17:45 |
bobh | tbh: so the change is to store the monitoring data in a list of maps probably with the VDU names as the keys | 17:46 |
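A rough sketch of the structure bobh describes here, one entry per VDU keyed by VDU name; the field names are invented for illustration, not the actual Tacker schema:

```python
# Hypothetical shape for per-VDU monitoring data, keyed by VDU name
# so each VDU can carry its own driver and params (tbh's IMS example).
# The respawn action discussed below would still apply to the whole VNF.
monitoring_policy = {
    'VDU1': {
        'driver': 'ping',
        'params': {'count': 3, 'interval': 1},
        'mgmt_ip': '192.168.120.11',  # filled in once the stack is up
    },
    'VDU2': {
        'driver': 'http_ping',
        'params': {'port': 8080, 'timeout': 5},
        'mgmt_ip': '192.168.120.12',
    },
}
```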
sridhar_ram | bobh: tbh: this is something we discussed at the mid-cycle .. and for now we agreed that the mon-driver should be per-VDU | 17:46 |
bobh | tbh: true, I just like to avoid duplication. There are ways around it in TOSCA so not a big deal | 17:46 |
bobh | sridhar_ram: thanks - I didn't recall the discussion | 17:47 |
tbh | sridhar_ram, bobh oh okay | 17:47 |
sridhar_ram | given we are early in this space and don't know operators' usage patterns, per-VDU will give enough deployment options.. | 17:47 |
tbh | sridhar_ram, that's true | 17:48 |
sridhar_ram | also .. even code-wise .. we need the mgmt-ip of each VDU to probe for health | 17:48 |
bobh | so the driver will receive the mgmt IP of the VDU along with whatever parameters are specified | 17:49 |
sridhar_ram | the only catch is the respawn logic .. which I agree w/ Bob is kind of misplaced, since it respawns the whole VNF | 17:49 |
sridhar_ram | bobh: yes | 17:49 |
bobh | pre-Kilo it was all that was available anyway | 17:50 |
bobh | but clearly an area for improvement | 17:50 |
lsp42 | Is there any movement on the whole Ve-Vnfm-Vnf reference point? For now everything is in-band. Is there a way to use oslo further down the line and do some RPC? | 17:50 |
*** prashantD has quit IRC | 17:51 | |
sridhar_ram | yeah... for the near term, to bound the scope of this effort, we can keep monitoring per-VDU while the respawn action stays per-VNF | 17:52 |
sridhar_ram | sorry, that was bobh: | 17:52 |
lsp42 | :) | 17:53 |
bobh | lsp42: I haven't seen anything in that area - I think it's probably a way down the road yet | 17:53 |
sridhar_ram | lsp42: well, your point is quite valid .. we did briefly discuss this in Tacker.. to have one or more dedicated "health-mon" agents that will be scheduled for a given monitoring task | 17:54 |
sridhar_ram | ... along the lines of neutron l3-agent | 17:54 |
lsp42 | sridhar_ram: Yeah, I guess this is a wider industry challenge. | 17:54 |
sridhar_ram | bobh: agree, this is something we need to take up down the line | 17:54 |
lsp42 | sridhar_ram: No vendor or project is going to want to run an 'agent' on a VNF | 17:55 |
sridhar_ram | lsp42: the solution should scale, but we are taking baby steps :) | 17:55 |
lsp42 | sridhar_ram: so consensus is needed from somewhere to twist arms | 17:55 |
lsp42 | sridhar_ram: Yes! Baby steps lead somewhere though :) | 17:56 |
bobh | lsp42: Agents have been an issue for a long time - people want the functionality that an agent provides but they don't want the overhead/security risks of an agent | 17:56 |
sridhar_ram | lsp42: that's a slightly different subject - (vnf) agentless vs vnf agent-based monitoring... | 17:56 |
lsp42 | sridhar_ram: Tacker could be the perfect project to cultivate industry consensus on this matter | 17:57 |
sridhar_ram | what I'm talking about here is having the health-mon agent load tacker's mgmt-driver, instead of the tacker-server loading it directly, which is how it is right now | 17:57 |
lsp42 | sridhar_ram: It's an argument that comes back to the reference point. Different parts of the industry push the blame for lack of progress onto the others. Agentless vs agent-based... NETCONF vs something new. Same old issues going round and round! With the work bobh and tbh are doing, I guess at least Tacker is in a position to react when the reference point is firmed up | 17:58 |
lsp42 | sridhar_ram: Ok, I think I understand | 18:00 |
sridhar_ram | lsp42: totally, we sure will provide a point of view which will be based on something deployed and running ;-) | 18:00 |
sridhar_ram | lsp42: it is more a scalability issue for tacker-service | 18:00 |
lsp42 | sridhar_ram: I get you now. Sorry. Totally misunderstood to start with :) | 18:01 |
lsp42 | sridhar_ram: So it would use something like oslo to talk to the health-mon agent? Would the agent be responsible for reporting what drivers were available? | 18:04 |
sridhar_ram | lsp42: np at all on the misunderstanding.. there are some more interesting aspects in this problem space to indulge in. the biggest challenge is to pace ourselves .. with the folks showing up here w/ the "coding" shovels :) | 18:04 |
lsp42 | sridhar_ram: Yes. I'm like an excited shiny thing chaser. Before long, hopefully I will also start actually coding and mending bugs. Or at least that's the plan. | 18:05 |
lsp42 | hence all the questions around dev env | 18:05 |
sridhar_ram | lsp42: yes, we could use oslo_rpc to talk between tacker-server and a future tacker health-mon agent... the health-mon agent would run the periodic mon-driver and report only selected events back to tacker-server, which would react with any healing action | 18:06 |
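A hedged sketch of what that server side could look like with oslo.messaging; the topic, endpoint class, and method names are invented for illustration, not an actual Tacker interface:

```python
# Hypothetical future design: tacker-server listens for selected
# health events cast by a separate health-mon agent over oslo RPC.
from oslo_config import cfg
import oslo_messaging as messaging

transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='tacker_health_mon', server='tacker-server')

class HealthMonEndpoint(object):
    """Receives only the events the agent deems worth reporting."""

    def vdu_down(self, ctxt, device_id, vdu_name):
        # Trigger a healing action -- today that would mean
        # respawning the whole VNF, as discussed above.
        print('VDU %s of device %s failed its probe' % (vdu_name, device_id))

server = messaging.get_rpc_server(
    transport, target, [HealthMonEndpoint()], executor='eventlet')
server.start()

# Agent side (elsewhere): fire-and-forget report of a failed probe.
# client = messaging.RPCClient(transport,
#                              messaging.Target(topic='tacker_health_mon'))
# client.cast({}, 'vdu_down', device_id='d9a8...', vdu_name='VDU1')
```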
sridhar_ram | lsp42: totally, there is no other way around :) | 18:06 |
bobh | I was thinking lsp42 meant RPC to the VNF - that was a bit more challenging :-) | 18:07 |
lsp42 | bobh: Yikes! Heh | 18:07 |
sridhar_ram | bobh: that would be agent-based monitoring | 18:07 |
bobh | sridhar_ram: That's why I was confused :-) | 18:07 |
sridhar_ram | interestingly .. there was code in tacker until a couple of weeks back that actually "tried" to do that.. | 18:08 |
sridhar_ram | I threw it away recently! | 18:08 |
lsp42 | sridhar_ram, bobh: Sorry for the confusion :-) | 18:09 |
tbh | sridhar_ram, any reason why you avoid agent-based monitoring? | 18:09 |
sridhar_ram | fyi, here is the removal code for future reference .. https://review.openstack.org/#/c/215377/ | 18:09 |
bobh | tbh: Convincing vendors to allow you to install the agent is a big challenge | 18:10 |
tbh | bobh, yeah that's true :) | 18:10 |
sridhar_ram | tbh: we sure can consider that if there is a request from operators..but 100% agree w/ bobh | 18:11 |
bobh | tbh: we used an agent-based system for several years, based on an early Cloudify product. It worked pretty well, but there were lots of cases where it wouldn't work at all | 18:11 |
bobh | lots of vendors give you a closed image that you can't modify - that presents a number of problems | 18:12 |
sridhar_ram | tbh: also the original tacker code used a model where RPCs get proxied all the way into the VNF ... that's a slippery slope for multiple reasons, including security risk | 18:13 |
lsp42 | bobh: It's a pain, and with such a loose reference point between the vnfm and vnf, I think you're left with very little choice other than to come up with a set of logic conditions based on existing technology, like icmp echo, SNMP, NETCONF, or possibly REST if the VNF supports it. It means more maintenance, though, ultimately. The industry as a 'team' has fallen short here massively | 18:13 |
sridhar_ram | you can always write a custom tacker mon-driver that probes *your* VNF in custom ways to achieve the same thing | 18:14 |
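As a sketch of that idea: a custom monitoring driver is just a small Python class the tacker-server loads. The interface below is simplified and invented for illustration (the real Tacker driver interface differs); the health URL and parameters are placeholders:

```python
# Hypothetical, simplified mon-driver: probe *your* VNF in a custom
# way (here, an HTTP health endpoint) and report up/down.
import requests

class MyVnfHealthDriver(object):
    """Illustrative custom probe, not the actual Tacker driver ABI."""

    def monitor_call(self, mgmt_ip, params):
        """Return True if the VDU at mgmt_ip passes the custom probe."""
        url = 'http://%s:%s/health' % (mgmt_ip, params.get('port', 8080))
        try:
            resp = requests.get(url, timeout=params.get('timeout', 5))
            return resp.status_code == 200
        except requests.RequestException:
            return False
```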
bobh | lsp42: Yep - there are 20 different ways to accomplish the same thing and no consensus, so nothing happens | 18:14 |
tbh | bobh, sridhar_ram got it | 18:14 |
bobh | lsp42: Sometimes I miss Bellcore.... | 18:15 |
lsp42 | bobh: LOL | 18:15 |
sridhar_ram | bobh: LOL | 18:15 |
bobh | at least they gave you a standard to work from - sometimes a lousy standard, but it beats no standard at all | 18:15 |
sridhar_ram | well.. I was at the last IETF, #93 .. guess what, most of the WGs are doing NETCONF work for each of their functional areas | 18:16 |
lsp42 | bobh: I hear you for sure | 18:16 |
lsp42 | sridhar_ram: no surprise there at all | 18:16 |
bobh | sridhar_ram: The TOSCA guy wouldn't be too happy about that | 18:16 |
sridhar_ram | bobh: so, don't lose hope.. something might actually happen around NETCONF! | 18:16 |
sridhar_ram | bobh: no, IMO they are orthogonal.. we are still chasing TOSCA for all the descriptors, but NETCONF is better suited for deep configuration of most standards-based VNFs | 18:17 |
bobh | sridhar_ram: I agree completely, but there seems to be some tension at the working group level | 18:18 |
sridhar_ram | bobh: that's true | 18:18 |
tbh | sridhar_ram, in my previous job too, we used NETCONF to configure VNFs | 18:19 |
bobh | given the installed base of NETCONF products I don't see how anyone can think it will be replaced by TOSCA | 18:19 |
sridhar_ram | tbh: that's cool, makes perfect sense... | 18:19 |
sridhar_ram | bobh: IMO, as things stand now, it doesn't make sense to consider TOSCA for that.. | 18:20 |
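For flavor, a minimal sketch of the kind of deep configuration being described, using the ncclient library; the host, credentials, and XML payload are all placeholders:

```python
# Illustrative only: push a config fragment to a NETCONF-capable VNF.
from ncclient import manager

CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <system xmlns="urn:example:system">
    <hostname>vnf-vdu1</hostname>
  </system>
</config>
"""

with manager.connect(host='192.168.120.11', port=830,
                     username='admin', password='secret',
                     hostkey_verify=False) as m:
    m.edit_config(target='running', config=CONFIG)
```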
sridhar_ram | bobh: tbh: btw, back to our near-term job ... | 18:20 |
sridhar_ram | bobh: tbh: what do you folks think about my earlier thought: per-VDU monitoring but per-VNF action? | 18:20 |
sridhar_ram | would that work and make things simpler for you folks to implement? | 18:20 |
bobh | sridhar_ram: seems like the best way forward for now | 18:21 |
bobh | sridhar_ram: it will need rework in the future but that's a given anyway | 18:21 |
tbh | I don't remember the exact link, but it said the difference between a nova VM and a VNF is that a VM can be configured through ssh or something else, but a VNF through NETCONF :) | 18:21 |
tbh | sridhar_ram, sure | 18:21 |
tbh | sridhar_ram, yeah one mon driver per VDU will do | 18:22 |
bobh | sridhar_ram: Am I correct in my understanding that the "device" structure that is passed to the monitoring thread is the same as the device and deviceattributes tables in the database? | 18:22 |
sridhar_ram | bobh: agree, it will be uneven for now.. but at least on the correct path.. | 18:22 |
tbh | sridhar_ram, checking the code changes | 18:22 |
* sridhar_ram walking to my pycharm window | 18:22 | |
bobh | lol | 18:22 |
lsp42 | sridhar_ram: You being serious? | 18:24 |
* lsp42 Can't quite figure it out... | 18:24 | |
* sridhar_ram ... walking back from the code dungeon I have in my basement | 18:25 | |
lsp42 | long walk | 18:27 |
openstackgerrit | Santosh Kodicherla proposed stackforge/tacker: Add tacker functional tests https://review.openstack.org/222329 | 18:27 |
tbh | bobh, are we storing mon driver info in the db? | 18:27 |
bobh | that's what I'm trying to figure out - my development environment is having issues so I haven't been able to check how it's actually working | 18:28 |
sridhar_ram | bobh: your understanding is correct.... though there is also an instance_id added to device_dict | 18:28 |
bobh | sridhar_ram: Thanks - so we probably need to either modify the database or just push more data under attributes - that would be easier than modifying the database | 18:28 |
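To make the preceding exchange concrete, a rough sketch of the shape being described; the field names and values are illustrative placeholders, not the exact Tacker schema:

```python
# Roughly the structure handed to the monitoring thread, per the
# exchange above: the device/deviceattributes rows plus an
# instance_id (the heat stack id) added at spawn time. Pushing
# per-VDU monitoring data under 'attributes', as bobh suggests,
# avoids a database schema change.
device_dict = {
    'id': 'c3f3e9a2-0000-0000-0000-000000000000',
    'instance_id': '7d1f0b55-0000-0000-0000-000000000000',
    'mgmt_url': '{"VDU1": "192.168.120.11"}',
    'attributes': {
        'monitoring_policy': 'ping',   # could become a per-VDU map
        'failure_policy': 'respawn',   # still applies to the whole VNF
    },
}
```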
*** sripriya_ has joined #tacker | 20:03 | |
*** sridhar_ram1 has joined #tacker | 20:03 | |
*** santoshkumark has joined #tacker | 20:03 | |
*** sridhar_ram has quit IRC | 20:04 | |
*** sripriya has quit IRC | 20:06 | |
*** santoshkumark has quit IRC | 20:45 | |
*** sridhar_ram1 has quit IRC | 20:55 | |
*** sripriya_ has quit IRC | 20:59 | |
*** sridhar_ram has joined #tacker | 21:03 | |
*** sripriya has joined #tacker | 21:03 | |
*** bobh has quit IRC | 21:12 | |
*** sripriya has quit IRC | 21:22 | |
*** sripriya has joined #tacker | 21:41 | |
*** trozet has quit IRC | 21:47 | |
*** lhcheng_ has joined #tacker | 22:04 | |
*** lhcheng has quit IRC | 22:07 | |
sridhar_ram | folks - any one to review https://review.openstack.org/#/c/222739/ (dead code removal) ? | 22:09 |
sripriya | sridhar_ram: i will be reviewing it within this week (2-3 days) | 22:13 |
sridhar_ram | sripriya: thanks! | 22:13 |
*** lhcheng_ has quit IRC | 22:37 | |
*** lhcheng has joined #tacker | 22:40 | |
*** tbh has quit IRC | 23:03 | |
*** openstackgerrit has quit IRC | 23:16 | |
*** openstackgerrit has joined #tacker | 23:16 |