Monday, 2018-06-04

openstackgerrit: wangzhh proposed openstack/cyborg master: Fix Deployable get_by_host  https://review.openstack.org/572080  11:22
*** jaypipes has joined #openstack-cyborg  13:20
*** Helloway has joined #openstack-cyborg  13:58
*** Sundar_ has joined #openstack-cyborg  13:58
shaohe_feng: #startmeeting openstack-cyborg-driver  14:02
openstack: Meeting started Mon Jun  4 14:02:54 2018 UTC and is due to finish in 60 minutes.  The chair is shaohe_feng. Information about MeetBot at http://wiki.debian.org/MeetBot.  14:02
openstack: Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  14:02
*** openstack changes topic to " (Meeting topic: openstack-cyborg-driver)"  14:02
openstack: The meeting name has been set to 'openstack_cyborg_driver'  14:02
shaohe_feng: #topic Roll Call  14:03
*** openstack changes topic to "Roll Call (Meeting topic: openstack-cyborg-driver)"  14:03
Sundar_: #info Sundar  14:03
shaohe_feng: #info shaohe  14:03
*** tony has joined #openstack-cyborg  14:03
Helloway: #info Helloway  14:04
shaohe_feng: Sundar_, let's wait a few minutes for the others?  14:05
*** wangzhh has joined #openstack-cyborg  14:06
shaohe_feng: evening, wangzhh  14:06
tony: hi  14:06
shaohe_feng: hello, tony  14:06
wangzhh: hello everyone  14:06
tony: hello everyone  14:07
Sundar_: shaohe: Sure  14:07
shaohe_feng: OK, let's start.  14:08
*** Guest4480 has joined #openstack-cyborg  14:08
Sundar_: Hi Tony  14:08
Sundar_: Hi Wangzhh  14:08
Guest4480: Hi  14:08
shaohe_feng: welcome, tony  14:08
shaohe_feng: #topic current status of drivers  14:08
*** openstack changes topic to "current status of drivers (Meeting topic: openstack-cyborg-driver)"  14:08
tony: thanks, shaohe  14:09
shaohe_feng: I have listed the tasks on the etherpad #link https://etherpad.openstack.org/p/cyborg-driver-tasks  14:09
shaohe_feng: let's go through the tasks  14:09
shaohe_feng: wangzhh, are you working on the vGPU driver?  14:10
wangzhh: OK. Let me introduce my work.  14:10
shaohe_feng: welcome.  14:10
wangzhh: I'm working on the vGPU driver.  14:11
wangzhh: And while merging my code, I found some existing bugs :(  14:11
wangzhh: cyborg-agent doesn't work well.  14:12
wangzhh: Such as https://review.openstack.org/#/c/572080/  14:12
wangzhh: So before the vGPU driver, maybe I should fix them first.  14:13
shaohe_feng: good catch.  14:14
*** xinran__ has joined #openstack-cyborg  14:14
shaohe_feng: so this is an urgent fix.  14:14
xinran__: Hi, sorry for being late  14:15
shaohe_feng: xinran__, evening.  14:15
shaohe_feng: Li_liu is not online.  14:15
shaohe_feng: he introduced the Deployable object.  14:16
shaohe_feng: Sundar_, and other developers, please help review wangzhh's bug fix  14:17
shaohe_feng: #link https://review.openstack.org/#/c/572080/  14:17
shaohe_feng: wangzhh, any other progress on the vGPU? Can you help update the task list? #link https://etherpad.openstack.org/p/cyborg-driver-tasks  14:18
shaohe_feng: ^ wangzhh, update the status.  14:19
shaohe_feng: OK, next.  14:19
wangzhh: shaohe_feng: OK  14:19
shaohe_feng: SPDK. Helloway, are you online?  14:19
Helloway: yes  14:20
shaohe_feng: so, any progress on SPDK?  14:21
Helloway: Not at the moment  14:22
shaohe_feng: Helloway, can you help update the SPDK status on #link https://etherpad.openstack.org/p/cyborg-driver-tasks  14:24
shaohe_feng: OK, next: provider reporting.  14:24
shaohe_feng: Sundar_, do we support multiple resource classes and nested providers in this release?  14:25
Sundar_: Shaohe: IMHO, it is becoming risky  14:26
shaohe_feng: Sundar_, what's the risk?  14:26
Sundar_: We have been waiting too long. Even in today's discussion in the Nova scheduler meeting, it is not clear that it is going to come soon  14:26
shaohe_feng: what should we do?  14:27
Sundar_: IMO, we should switch immediately to my originally proposed plan to use the compute node as the RP, till we get nRP  14:27
Sundar_: This means we can have multiple devices on the same host, but they must all have the same types/traits  14:27
shaohe_feng: OK. Can you discuss Cyborg's resource providers with jaypipes?  14:28
shaohe_feng: during the summit?  14:28
Sundar_: E.g. 2 GPUs, 2 FPGAs both from Xilinx with the same device family, 2 FPGAs both from Intel (say A10), etc.  14:28
Sundar_: I discussed with edleafe etc., primarily on the comments that we should not have vendor/product/device names in traits.  14:29
*** Helloway has quit IRC  14:29
Sundar_: That got resolved, because we do need vendor/device names, but not product names. The spec has been updated  14:29
Sundar_: On nRP, it is a larger discussion that is still going on in the Nova community  14:29
shaohe_feng: if we have 1 Xilinx and 1 Intel FPGA on one host, what is the resource class name, and what are the trait names?  14:30
Sundar_: We cannot have that unless we get nRPs. I'll explain why  14:30
shaohe_feng: OK, please  14:30
* edleafe is here. Didn't know the meeting time changed  14:30
Sundar_: For each device (GPU or FPGA), we will apply a trait like CUSTOM_GPU_AMD or CUSTOM_FPGA_INTEL  14:31
Sundar_: In your example, it will be CUSTOM_FPGA_INTEL and CUSTOM_FPGA_XILINX  14:31
Sundar_: We will also publish the resource class as CUSTOM_ACCELERATOR_FPGA  14:31
edleafe: I would prefer 2 nested children, each with an inventory of 1 CUSTOM_FPGA. Then use traits to distinguish them  14:32
Sundar_: However, without nRP, it will get applied on the compute node, not on the individual devices  14:32
Sundar_: shaohe: that requires nRP. We may not get that soon enough to meet our Rocky goals.  14:32
edleafe: There is currently discussion in placement on the potential upgrade problems when moving from non-nested to nested. It would be better to plan on implementing only the nested model, if possible  14:33
Sundar_: Without nRP, it will get applied on the compute node. So the compute node will advertise 2 units of CUSTOM_ACCELERATOR_FPGA, with 2 traits: CUSTOM_FPGA_INTEL and CUSTOM_FPGA_XILINX. But Placement will see that as 2 units of the RC, each with 2 traits  14:34
Sundar_: There is no way to say that one unit of the RC has one trait and the other has another trait  14:34
Sundar_: Does that make sense?  14:34
edleafe: Sundar_: precisely. That's the main reason for waiting for nested  14:35
shaohe_feng: edleafe, Sundar_: there's an issue with this approach.  14:35
Sundar_: edleafe: Thanks for joining. My concern is, we are already in June, and even today's Nova scheduler discussion indicates concerns with rolling upgrades and nRPs  14:36
Sundar_: It would be better to get something done with caveats than nothing  14:36
shaohe_feng: for example, a user claims one CUSTOM_FPGA_XILINX; then the inventory will have one Intel CUSTOM_ACCELERATOR_FPGA remaining  14:37
shaohe_feng: then the user claims one more CUSTOM_FPGA_XILINX -- what will happen?  14:37
edleafe: But besides the issues noted with moving from non-nested to nested, Cyborg will also have to re-do a lot of the naming of custom resource classes  14:38
Sundar_: shaohe: traits go with resource providers (RPs), not resource classes (RCs)  14:38
shaohe_feng: Sundar_, yes, that's the issue.  14:39
shaohe_feng: Sundar_, so the user can still get an FPGA, but not the Xilinx one he expects.  14:40
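
(For context, a sketch of the non-nested modeling being debated above. These are the two request bodies Cyborg would send to the Placement API for the compute-node RP -- PUT /resource_providers/{uuid}/inventories and PUT /resource_providers/{uuid}/traits -- written as Python dicts; the UUID and generation values are illustrative only.)

    # Non-nested (compute-node-as-RP) modeling of 1 Intel + 1 Xilinx FPGA.
    # Both payloads target the *compute node* provider, so Placement cannot
    # tell which unit of inventory carries which trait.

    compute_node_rp = "00000000-0000-0000-0000-000000000000"  # illustrative

    # PUT /resource_providers/{uuid}/inventories
    inventories_body = {
        "resource_provider_generation": 1,
        "inventories": {
            "CUSTOM_ACCELERATOR_FPGA": {"total": 2},  # both devices, one pool
        },
    }

    # PUT /resource_providers/{uuid}/traits
    traits_body = {
        "resource_provider_generation": 2,
        "traits": ["CUSTOM_FPGA_INTEL", "CUSTOM_FPGA_XILINX"],
    }

    # A request for resources=CUSTOM_ACCELERATOR_FPGA:1 with
    # required=CUSTOM_FPGA_XILINX still matches this provider even after the
    # Xilinx card is consumed: traits describe the provider as a whole, not
    # individual units of inventory -- exactly shaohe_feng's 14:37 scenario.
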
*** Helloway has joined #openstack-cyborg  14:41
*** Sundar_ has quit IRC  14:42
shaohe_feng: edleafe, any suggestion on this issue?  14:44
*** Sundar_ has joined #openstack-cyborg  14:44
Sundar_: Hi  14:44
openstackgerrit: wangzhh proposed openstack/cyborg master: Fix Deployable get_by_host  https://review.openstack.org/572080  14:45
Sundar_: Can you see my typing?  14:45
*** tony has quit IRC  14:45
shaohe_feng: Sundar_, welcome back  14:45
shaohe_feng: Sundar_, we only saw "Hi"  14:45
Sundar_: I got blocked for some reason -- whatever I typed did not show up  14:45
shaohe_feng: edleafe, Sundar_, should we support only one kind of FPGA to avoid this issue?  14:45
Sundar_: Yes, multiple devices are OK, but all of the same type  14:46
Sundar_: We cannot have a GPU and an FPGA on the same host, or 2 kinds of FPGAs  14:46
Sundar_: That should be OK for Rocky  14:46
*** tony has joined #openstack-cyborg  14:47
edleafe: shaohe_feng: sorry, had to step away for a moment  14:48
edleafe: shaohe_feng: FPGA is a resource *class*. GPU is a resource *class*  14:48
edleafe: They should be modeled that way from the start.  14:48
edleafe: specific types should be distinguished with traits  14:49
Sundar_: edleafe: That is exactly what we are doing  14:49
edleafe: Sundar_: good. I really would hate to see things like CUSTOM_FPGA_XILINX  14:49
Sundar_: It is just that, without nRPs, we will apply the traits to the compute node, for the Rocky cycle alone. That means all devices on the same host must have the same traits.  14:50
Sundar_: edleafe: CUSTOM_FPGA_XILINX would be a trait on the RP, not an RC  14:50
shaohe_feng: is CUSTOM_FPGA_XILINX a resource class or a trait?  14:50
edleafe: We are still hoping to have NRP complete in Rocky  14:50
Sundar_: shaohe: a trait, not an RC  14:51
edleafe: ah, that wasn't clear. In context I thought you were using it as an RC  14:51
Sundar_: edleafe: I understand, but with 2 months left to go, I think we are risking Cyborg's Rocky goals by waiting further  14:51
edleafe: Sure, that's understandable. I just want to make sure you know that it will make moving to an NRP design later harder  14:52
Sundar_: What makes the switch to nRPs hard?  14:53
Sundar_: The set of RCs and traits will stay the same. But we will apply the traits to individual device RPs later  14:53
edleafe: You will have an inventory of devices on the compute node. When you upgrade, somehow these must be converted to a nested design, and any existing instances that are using those resources will have to have their allocations moved. That is the upgrade discussion currently going on.  14:54
edleafe: Remember, allocations are for an instance against the RP. When you move to nested, you now have a new child RP, and the allocations should be against it  14:55
edleafe: But they will be against the compute node for any existing instances at the time of upgrade. How to reconcile all of this correctly is what we are trying to work out now  14:56
*** Helloway has quit IRC  14:58
shaohe_feng: OK  14:58
Sundar_: OK. There are 2 options: (a) do not support expectations of upgrades for Cyborg in Rocky (IMHO, Rocky addresses the basic Cyborg use cases and lays a strong foundation for further development); (b) support upgrades by providing some mechanism to delete traits on the compute node at a safe time (would appreciate your input here)  14:58
Sundar_: edleafe: What do you think?  15:00
shaohe_feng: Sundar_, can you summarize it, so we can reach a conclusion in Wednesday's meeting?  15:00
shaohe_feng: ^ edleafe, any suggestion on it?  15:00
Sundar_: shaohe: The above is the summary :). What additional info do you want?  15:01
edleafe: You would probably have to write a custom upgrade script to iterate over all machines to find old-style traits and allocations, convert them to the new style, and then delete the old ones  15:01
shaohe_feng: Sundar_, are these 2 options mutually exclusive?  15:01
Sundar_: edleafe: On an upgrade, the new agent/driver(s) will automatically create nested RPs and apply traits there, while the old traits on the compute node still exist  15:02
Sundar_: Can we then delete the traits on the compute node while instances are still running?  15:03
Sundar_: If so, we can provide a script that the operator must run after the upgrade, which deletes Cyborg traits on compute nodes  15:04
Sundar_: shaohe: they are exclusive.  15:05
edleafe: Sundar_: Traits aren't the issue. Allocations/inventory are what is important to update  15:06
edleafe: Otherwise, a compute node will have, say, an inventory of 2 FPGAs, and will now have two child RPs each with an FPGA inventory  15:07
edleafe: In that example, Placement will now see 4 FPGAs on the one compute node.  15:07
Sundar_: edleafe: True. Maybe the upgrade script can set reserved equal to total on the compute node RP?  15:08
edleafe: That's one way. The other would be to simply delete that inventory, since it really isn't the compute node's anymore  15:09
Sundar_: Can we do that while the instances are still using that inventory?  15:09
edleafe: Allocations will also have to be adjusted, because having a double allocation has an impact on things like measuring quotas  15:09
edleafe: Sundar_: :) Now you  15:10
edleafe: oops  15:10
edleafe: Now you're seeing why I don't like the non-nested to nested path  15:10
Sundar_: Can we do that while the instances are still using that inventory?  15:10
Sundar_: i.e. delete the inventory on compute node RPs  15:11
edleafe: There are a lot of issues, and we're trying to come up with a generic solution that will work for Cyborg as well as things like NUMA  15:11
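
(A rough sketch of what the operator-run cleanup script floated above might do. The `placement` object and its helper methods are hypothetical shorthand for the corresponding Placement REST calls; moving the live allocations over to the child RPs -- the step this sketch deliberately dodges -- is the open problem edleafe describes.)

    # Hypothetical post-upgrade cleanup for one compute node RP. The helper
    # names on `placement` are invented, not a real client library.
    FPGA_RC = "CUSTOM_ACCELERATOR_FPGA"

    def cleanup_compute_node(placement, cn_uuid):
        inv = placement.get_inventories(cn_uuid)
        if FPGA_RC not in inv:
            return
        # Sundar_'s 15:08 option: reserve everything, so no *new* allocation
        # lands on the compute-node RP while instances still hold old ones.
        inv[FPGA_RC]["reserved"] = inv[FPGA_RC]["total"]
        placement.set_inventories(cn_uuid, inv)
        # edleafe's 15:09 option: once the old allocations are gone (or have
        # been moved to the child RPs), drop the inventory and traits outright.
        if not placement.has_allocations(cn_uuid, FPGA_RC):
            del inv[FPGA_RC]
            placement.set_inventories(cn_uuid, inv)
            traits = [t for t in placement.get_traits(cn_uuid)
                      if not t.startswith("CUSTOM_FPGA_")]
            placement.set_traits(cn_uuid, traits)
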
shaohe_feng: edleafe, does Placement support NUMA at present?  15:11
shaohe_feng: edleafe, how do we organize it?  15:12
edleafe: shaohe_feng: yes, but in a non-nested way  15:12
edleafe: We're trying to figure out how to do that upgrade, and it doesn't look easy  15:12
shaohe_feng: one in NUMA node 0, another in NUMA node 1  15:13
edleafe: The way NUMA has been modeled in Nova is a horrible hack that was done before Placement even existed  15:14
Sundar_: Maybe it is simplest to go with option (a): for upgrades with Cyborg, first stop all instances using accelerators, run a script that cleans up, then upgrade to the new Cyborg. For other subsystems, I understand the issue. But Cyborg is new, and we can set expectations  15:14
edleafe: Sundar_: yes, that is a luxury that a generic solution wouldn't have  15:15
shaohe_feng: edleafe, ^ any suggestion on the ultimate solution for Cyborg accelerator NUMA topology?  15:16
edleafe: shaohe_feng: I would think it would look something like: compute_node -> NUMA node -> FPGA device -> FPGA regions -> FPGA function  15:18
edleafe: But from what I know of NUMA, you can configure it multiple ways.  15:18
shaohe_feng: so compute_node is a provider, the NUMA node is a provider, and the FPGA device, FPGA regions, and FPGA function are all providers?  15:20
shaohe_feng: ^ edleafe  15:20
shaohe_feng: Sundar_, have we considered NUMA topology for FPGAs?  15:21
edleafe: shaohe_feng: yes. The only inventory is the function, which is what the user wants  15:21
shaohe_feng: ?  15:21
shaohe_feng: edleafe, ok, got it. 4 levels of providers.  15:21
shaohe_feng: thanks  15:21
Sundar_: shaohe: kinda. But my suggestion is to focus on the basics for Rocky. If we try to throw in everything, we will not deliver anything  15:22
edleafe: shaohe_feng: of course, it doesn't *have to* be that way, but it is one possibility  15:23
shaohe_feng: edleafe, thanks again.  15:24
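
(For reference, edleafe's 15:18 hierarchy expressed as parented resource providers. Creating a child is a POST /resource_providers carrying a parent_provider_uuid, which Placement supports in recent microversions (1.14+); all names below are made up, and per edleafe only the function level would carry inventory.)

    # One possible nested-RP tree for a single FPGA behind NUMA node 0.
    # Each entry maps to a POST /resource_providers call; "parent" marks
    # the parent_provider_uuid link. Illustrative names only.
    tree = [
        {"name": "cn1"},                                   # compute node (root)
        {"name": "cn1_numa0", "parent": "cn1"},            # NUMA node
        {"name": "cn1_numa0_fpga0",
         "parent": "cn1_numa0"},                           # FPGA device
        {"name": "cn1_numa0_fpga0_region0",
         "parent": "cn1_numa0_fpga0"},                     # FPGA region
        {"name": "cn1_numa0_fpga0_region0_fn0",
         "parent": "cn1_numa0_fpga0_region0"},             # function: inventory
    ]
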
shaohe_feng: Sundar_, shall we go ahead?  15:24
*** tony has quit IRC  15:24
shaohe_feng: to the next task?  15:25
Sundar_: Yes, I will send an email to openstack-dev proposing this (lack of) upgrade plan  15:25
shaohe_feng: Sundar_, OK, please. Thanks.  15:25
shaohe_feng: next is the Intel FPGA driver  15:26
shaohe_feng: I see the owner is rorojo  15:26
shaohe_feng: Sundar_, rorojo is not online  15:26
shaohe_feng: can you help sync with him?  15:27
Sundar_: Yes. Discussions are ongoing about the implementation  15:27
shaohe_feng: great.  15:27
shaohe_feng: any update there?  15:27
Sundar_: Nothing significant. I helped Rodrigo get started, with code browsing etc.  15:28
Sundar_: I had issues with devstack myself :), but I got that fixed, so I have a working deployment on MCP now  15:28
Sundar_: I am now working on the agent-driver API update  15:29
shaohe_feng: Sundar_, oh, we have improved the devstack doc  15:29
shaohe_feng: let me find it for you.  15:30
shaohe_feng: #link https://docs.openstack.org/cyborg/latest/contributor/devstack_setup.html#devstack-quick-start  15:31
Sundar_: The issue was not with the Cyborg plugin. I was hitting version conflicts on various components in oslo etc.  15:31
shaohe_feng: OK, is there a bug in the oslo components?  15:31
Sundar_: I had to do a 'pip install --upgrade' on many components, because they were at lower versions than the minimum  15:32
Sundar_: Then I took everything in /opt/stack/requirements/lower-constraints.txt and tried to do a mass upgrade  15:32
shaohe_feng: should we submit a patch to upgrade the Cyborg requirements?  15:33
Sundar_: That failed because some components need Python 3. So I excluded some things manually, and otherwise hacked at it, till it worked  15:33
Sundar_: The issue is, devstack does not seem to upgrade the components to their minimum versions automatically.  15:34
openstackgerrit: wangzhh proposed openstack/cyborg master: Fix Deployable get_by_host  https://review.openstack.org/572080  15:34
Sundar_: This is not Cyborg-specific  15:34
shaohe_feng: FPGA programming: Li_liu is not online, let's skip it.  15:35
shaohe_feng: HPTS: yumeng is not online, skip it.  15:35
shaohe_feng: Cyborg/Nova interaction, Sundar_  15:36
Sundar_: You may have seen the spec updates  15:36
Sundar_: Waiting for approval  15:36
wangzhh: Sundar_: Maybe you can try PIP_UPGRADE=True in local.conf  15:36
Sundar_: wangzhh: Thanks, will try it next time  15:37
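
(For anyone hitting the same devstack problem: PIP_UPGRADE is a stock devstack setting, and in local.conf it looks like the snippet below. It makes devstack pass --upgrade to pip so already-installed packages below the required minimum get bumped.)

    [[local|localrc]]
    # Upgrade already-installed Python packages instead of keeping
    # whatever (possibly too-old) versions are present.
    PIP_UPGRADE=True
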
shaohe_feng: wangzhh, thanks. Can you submit a patch for our developer guide?  15:37
Guest4480: Hi Sundar, there is one problem we faced in our environment: suppose there are two GPUs in one host. When attaching one GPU to a VM, if the attachment fails, will Nova try to attach the second GPU in this host to the VM?  15:38
shaohe_feng: wangzhh, also with other doc bugs.  #link https://docs.openstack.org/cyborg/latest/contributor/devstack_setup.html#devstack-quick-start  15:38
wangzhh: It's a common config option. Do we need to maintain it?  15:38
shaohe_feng: wangzhh, maybe we can add some notes for developers.  15:39
Guest4480: sorry to interrupt.  15:39
shaohe_feng: Guest4480, no problem.  15:39
wangzhh: OK.  15:39
shaohe_feng: Guest4480, go ahead.  15:39
shaohe_feng: Sundar_, Guest4480 is asking for your help.  15:39
Sundar_: Guest4480: since they are independent resources, failure to attach one should not affect the other one.  15:40
shaohe_feng: next task.  15:41
shaohe_feng: Cyborg/Nova interaction (Nova side). Sundar_, any plan for it? Should we move it to the next release?  15:42
Sundar_: What does that mean?  15:42
shaohe_feng: Sundar_, I think you will talk with jaypipes, edleafe, and other Nova developers about it.  15:43
Sundar_: Do you mean Nova compute calling into os-acc?  15:43
Sundar_: Yes, we need to follow up. We have an os-acc spec, which is awaiting approval  15:43
Sundar_: I will update the specs once more this week  15:44
shaohe_feng: zhipengh[m] and zhuli_ will work on os-acc.  15:44
edleafe: Sundar_: ping me in -nova when they are ready  15:45
Sundar_: edleafe: Sure. Thanks  15:45
shaohe_feng: Sundar_, but we still need volunteers working on the Nova side.  15:45
Guest4480: OK, we will follow the spec to see where to add the work (code) we have already done.  15:45
edleafe: shaohe_feng: I can help with the Nova side  15:46
Sundar_: Excellent, edleafe. Nova compute needs to call into os-acc for specific instance events, as documented in the os-acc spec  15:46
Sundar_: #link https://review.openstack.org/#/c/566798/  15:47
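
(A purely hypothetical sketch of the call chain under discussion -- Nova compute -> os-acc -> Cyborg agent. The os-acc spec linked above was still awaiting approval at this point, so every class and method name here is invented for illustration and is not the real os-acc API.)

    # Hypothetical only: invented names sketching the intended layering.
    class OsAcc:
        """Stand-in for the os-acc library that nova-compute would import."""

        def __init__(self, cyborg_agent_client):
            self.agent = cyborg_agent_client  # the Cyborg agent (see 15:53)

        def plug(self, instance_uuid, accel_requests):
            # Invoked by nova-compute on an instance event such as spawn:
            # ask the Cyborg agent to attach the bound accelerators.
            return self.agent.attach(instance_uuid, accel_requests)

        def unplug(self, instance_uuid):
            # Invoked on instance delete: release the accelerators.
            return self.agent.detach(instance_uuid)
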
shaohe_feng: edleafe, thanks.  15:47
shaohe_feng: Sundar_, you can work out more details with edleafe.  15:48
Sundar_: Yes  15:48
shaohe_feng: thanks.  15:48
shaohe_feng: next task.  15:48
shaohe_feng: Quota for accelerators  15:48
shaohe_feng: xinran__, are you online?  15:48
xinran__: Yes  15:49
Sundar_: I need to drop off in 5 minutes  15:49
xinran__: I'm here  15:49
shaohe_feng: OK.  15:49
shaohe_feng: Cyborg/Nova/Glance interaction in the compute node, including os-acc  15:49
shaohe_feng: Sundar_, is it ready?  15:49
shaohe_feng: xinran__, how is the quota work going?  15:50
openstackgerrit: wangzhh proposed openstack/cyborg master: Fix Deployable get_by_host  https://review.openstack.org/572080  15:50
shaohe_feng: can you update the task status on #link https://etherpad.openstack.org/p/cyborg-driver-tasks  15:50
Sundar_: shaohe: I will respond to comments and update it. Hopefully we are on the last iteration  15:50
xinran__: I implemented quota reserve and commit in the API layer  15:51
shaohe_feng: Sundar_, thanks.  15:51
shaohe_feng: xinran__, ok, thanks.  15:51
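
(xinran__'s "reserve and commit" wording follows the classic two-phase quota pattern used elsewhere in OpenStack; a generic sketch of that pattern -- not Cyborg's actual code -- is below.)

    # Generic reserve/commit quota pattern (not Cyborg's implementation).
    # The API layer reserves before doing the work, then commits on success
    # or rolls back on failure, so concurrent requests cannot oversubscribe.
    class QuotaExceeded(Exception):
        pass

    class Quotas:
        def __init__(self, limits):
            self.limits = limits                      # e.g. {"fpga": 4}
            self.used = {r: 0 for r in limits}
            self.reserved = {r: 0 for r in limits}

        def reserve(self, resource, amount):
            in_flight = self.used[resource] + self.reserved[resource]
            if in_flight + amount > self.limits[resource]:
                raise QuotaExceeded(resource)
            self.reserved[resource] += amount

        def commit(self, resource, amount):
            self.reserved[resource] -= amount
            self.used[resource] += amount

        def rollback(self, resource, amount):
            self.reserved[resource] -= amount
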
shaohe_feng: that's all for today's task status  15:51
shaohe_feng: Sundar_, have a good day.  15:52
shaohe_feng: it is getting late in China.  15:52
xinran__: But I saw Sundar's comment that the first point where Nova enters Cyborg is the agent; I'm not sure about that  15:52
shaohe_feng: do you mean Nova will call the Cyborg agent directly?  15:53
Sundar_: Nova compute calls os-acc, which calls the Cyborg agent  15:53
Sundar_: Bye  15:54
*** Sundar_ has quit IRC  15:54
wangzhh: Bye  15:54
shaohe_feng: no time left for us to discuss it in today's meeting. Let's leave it for Wednesday's meeting,  15:55
xinran__: okay, bye  15:55
shaohe_feng: since it is quite late in China.  15:55
shaohe_feng: okay, meeting adjourned  15:56
shaohe_feng: #endmeeting  15:56
*** openstack changes topic to "spec review day (Meeting topic: openstack-cyborg)"  15:56
openstack: Meeting ended Mon Jun  4 15:56:26 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  15:56
openstack: Minutes:        http://eavesdrop.openstack.org/meetings/openstack_cyborg_driver/2018/openstack_cyborg_driver.2018-06-04-14.02.html  15:56
openstack: Minutes (text): http://eavesdrop.openstack.org/meetings/openstack_cyborg_driver/2018/openstack_cyborg_driver.2018-06-04-14.02.txt  15:56
openstack: Log:            http://eavesdrop.openstack.org/meetings/openstack_cyborg_driver/2018/openstack_cyborg_driver.2018-06-04-14.02.log.html  15:56
shaohe_feng: xinran__, bye. Have a good night  15:56
