Friday, 2016-09-16

*** Apoorva has joined #openstack-operators00:01
*** VW has quit IRC00:02
*** itsuugo has quit IRC00:06
*** itsuugo has joined #openstack-operators00:07
*** uxdanielle has quit IRC00:07
*** armax has quit IRC00:14
*** armax has joined #openstack-operators00:20
*** ducttape_ has joined #openstack-operators00:21
*** rmcall has joined #openstack-operators00:23
*** ducttape_ has quit IRC00:26
*** piet has quit IRC00:30
*** markvoelker has joined #openstack-operators00:34
*** itsuugo has quit IRC00:45
*** ducttape_ has joined #openstack-operators00:45
*** itsuugo has joined #openstack-operators00:46
*** piet has joined #openstack-operators00:52
*** itsuugo has quit IRC00:56
*** itsuugo has joined #openstack-operators00:58
*** ducttape_ has quit IRC01:04
*** itsuugo has quit IRC01:06
*** itsuugo has joined #openstack-operators01:08
*** ducttape_ has joined #openstack-operators01:11
*** esker[away] is now known as esker01:12
*** britthouser has quit IRC01:12
*** esker has quit IRC01:13
*** itsuugo has quit IRC01:15
*** itsuugo has joined #openstack-operators01:17
*** britthouser has joined #openstack-operators01:17
*** itsuugo has quit IRC01:22
*** piet has quit IRC01:22
*** itsuugo has joined #openstack-operators01:22
*** vinsh has joined #openstack-operators01:24
*** britthouser has quit IRC01:27
*** ducttape_ has quit IRC01:27
*** karad has quit IRC01:28
*** itsuugo has quit IRC01:30
*** ducttape_ has joined #openstack-operators01:31
*** itsuugo has joined #openstack-operators01:31
*** Apoorva_ has joined #openstack-operators01:35
*** Apoorva has quit IRC01:39
*** itsuugo has quit IRC01:40
*** Apoorva_ has quit IRC01:40
*** itsuugo has joined #openstack-operators01:41
*** itsuugo has quit IRC01:51
*** itsuugo has joined #openstack-operators01:53
*** armax has quit IRC01:59
*** vijaykc4 has joined #openstack-operators01:59
*** itsuugo has quit IRC02:03
*** itsuugo has joined #openstack-operators02:03
*** itsuugo has quit IRC02:11
*** itsuugo has joined #openstack-operators02:12
*** itsuugo has quit IRC02:17
*** itsuugo has joined #openstack-operators02:18
*** itsuugo has quit IRC02:23
*** itsuugo has joined #openstack-operators02:23
*** rstarmer has joined #openstack-operators02:26
*** vinsh has quit IRC02:27
*** catintheroof has joined #openstack-operators02:27
*** rstarmer has quit IRC02:31
*** VW has joined #openstack-operators02:32
*** vijaykc4 has quit IRC02:33
*** vinsh has joined #openstack-operators02:33
*** vinsh has quit IRC02:36
*** vinsh has joined #openstack-operators02:36
*** vinsh has quit IRC02:40
*** britthouser has joined #openstack-operators02:43
*** britthouser has quit IRC02:43
*** ducttape_ has quit IRC02:44
*** ducttape_ has joined #openstack-operators02:45
*** itsuugo has quit IRC02:51
*** sudipto_ has joined #openstack-operators02:53
*** sudipto has joined #openstack-operators02:53
*** itsuugo has joined #openstack-operators02:53
VW: anyone around for the LDT meeting?  03:00
VW: sorrison: you online?  03:00
*** itsuugo has quit IRC03:01
VW: #startmeeting Large Deployments Team  03:01
openstack: Meeting started Fri Sep 16 03:01:45 2016 UTC and is due to finish in 60 minutes.  The chair is VW. Information about MeetBot at http://wiki.debian.org/MeetBot.  03:01
openstack: Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  03:01
openstack: The meeting name has been set to 'large_deployments_team'  03:01
*** vijaykc4 has joined #openstack-operators03:01
*** itsuugo has joined #openstack-operators03:02
VW: looks like we might be short on folks for the scheduled LDT meeting, but I'll wait and see if some wander in  03:03
*** ducttape_ has quit IRC03:04
*** armax has joined #openstack-operators03:05
*** itsuugo has quit IRC03:09
*** itsuugo has joined #openstack-operators03:11
*** itsuugo has quit IRC03:15
*** itsuugo has joined #openstack-operators03:16
*** itsuugo has quit IRC03:23
*** itsuugo has joined #openstack-operators03:25
*** itsuugo has quit IRC03:30
VW: looks like not.  we'll try again next month  03:30
VW: #endmeeting  03:30
openstack: Meeting ended Fri Sep 16 03:30:23 2016 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  03:30
openstack: Minutes:        http://eavesdrop.openstack.org/meetings/large_deployments_team/2016/large_deployments_team.2016-09-16-03.01.html  03:30
openstack: Minutes (text): http://eavesdrop.openstack.org/meetings/large_deployments_team/2016/large_deployments_team.2016-09-16-03.01.txt  03:30
openstack: Log:            http://eavesdrop.openstack.org/meetings/large_deployments_team/2016/large_deployments_team.2016-09-16-03.01.log.html  03:30
*** VW_ has joined #openstack-operators03:31
*** itsuugo has joined #openstack-operators03:31
*** VW has quit IRC03:34
*** vinsh has joined #openstack-operators03:36
*** vinsh has quit IRC03:37
*** vinsh has joined #openstack-operators03:37
*** vinsh has quit IRC03:41
*** VW_ has quit IRC03:45
*** VW has joined #openstack-operators03:46
*** itsuugo has quit IRC03:48
*** itsuugo has joined #openstack-operators03:50
*** VW has quit IRC03:50
*** itsuugo has quit IRC03:54
*** mriedem has quit IRC03:55
*** itsuugo has joined #openstack-operators03:56
*** itsuugo has quit IRC04:00
*** itsuugo has joined #openstack-operators04:01
*** ducttape_ has joined #openstack-operators04:05
*** itsuugo has quit IRC04:08
*** itsuugo has joined #openstack-operators04:11
*** ducttape_ has quit IRC04:11
*** vijaykc4 has quit IRC04:12
sorrison: Hey VW, sorry, we had a going-away lunch for a workmate and couldn't make it  04:15
*** sudipto has quit IRC04:17
*** sudipto_ has quit IRC04:17
*** itsuugo has quit IRC04:20
*** itsuugo has joined #openstack-operators04:21
*** markvoelker has quit IRC04:28
*** itsuugo has quit IRC04:28
*** itsuugo has joined #openstack-operators04:29
*** rcernin has quit IRC04:37
*** itsuugo has quit IRC04:37
*** itsuugo has joined #openstack-operators04:39
*** itsuugo has quit IRC04:43
*** itsuugo has joined #openstack-operators04:45
*** harlowja has quit IRC04:45
*** itsuugo has quit IRC04:50
*** sudipto_ has joined #openstack-operators04:50
*** sudipto has joined #openstack-operators04:50
*** itsuugo has joined #openstack-operators04:51
*** itsuugo has quit IRC04:56
*** itsuugo has joined #openstack-operators04:57
*** itsuugo has quit IRC05:04
*** itsuugo has joined #openstack-operators05:05
*** itsuugo has quit IRC05:12
*** bvandenh has quit IRC05:12
*** itsuugo has joined #openstack-operators05:14
*** fragatina has quit IRC05:15
*** itsuugo has quit IRC05:18
*** itsuugo has joined #openstack-operators05:20
*** markvoelker has joined #openstack-operators05:28
*** markvoelker has quit IRC05:33
*** fragatina has joined #openstack-operators05:33
*** fragatina has quit IRC05:35
*** itsuugo has quit IRC05:36
*** itsuugo has joined #openstack-operators05:36
*** bvandenh_ has joined #openstack-operators05:38
*** itsuugo has quit IRC05:41
*** rcernin has joined #openstack-operators05:43
*** itsuugo has joined #openstack-operators05:43
*** itsuugo has quit IRC05:48
*** itsuugo has joined #openstack-operators05:49
*** rstarmer has joined #openstack-operators06:02
*** itsuugo has quit IRC06:07
*** itsuugo has joined #openstack-operators06:08
*** rcernin has quit IRC06:14
*** itsuugo has quit IRC06:15
*** itsuugo has joined #openstack-operators06:16
*** rcernin has joined #openstack-operators06:19
*** pcaruana has joined #openstack-operators06:23
*** fragatina has joined #openstack-operators06:29
*** fragatina has quit IRC06:33
*** itsuugo has quit IRC06:33
*** itsuugo has joined #openstack-operators06:34
*** vern has quit IRC06:40
*** vern has joined #openstack-operators06:43
*** itsuugo has quit IRC06:44
*** itsuugo has joined #openstack-operators06:45
*** VW has joined #openstack-operators06:48
*** VW has quit IRC06:53
*** priteau has joined #openstack-operators06:53
*** priteau has quit IRC06:54
*** itsuugo has quit IRC06:54
*** itsuugo has joined #openstack-operators06:56
*** david-lyle_ has joined #openstack-operators06:59
*** david-lyle has quit IRC07:00
*** itsuugo has quit IRC07:01
*** itsuugo has joined #openstack-operators07:02
*** sticker has quit IRC07:03
*** matrohon has joined #openstack-operators07:10
*** itsuugo has quit IRC07:15
*** sudipto_ has quit IRC07:16
*** sudipto has quit IRC07:16
*** itsuugo has joined #openstack-operators07:17
*** julian1 has quit IRC07:19
*** julian1 has joined #openstack-operators07:20
*** itsuugo has quit IRC07:22
*** itsuugo has joined #openstack-operators07:23
*** david-lyle has joined #openstack-operators07:28
*** david-lyle_ has quit IRC07:29
*** markvoelker has joined #openstack-operators07:29
*** markvoelker has quit IRC07:34
*** jkraj has joined #openstack-operators08:01
*** openstackgerrit has quit IRC08:03
*** openstackgerrit has joined #openstack-operators08:04
*** itsuugo has quit IRC08:08
*** itsuugo has joined #openstack-operators08:10
*** fragatina has joined #openstack-operators08:15
*** itsuugo has quit IRC08:15
*** itsuugo has joined #openstack-operators08:16
*** fragatina has quit IRC08:19
*** itsuugo has quit IRC08:21
*** itsuugo has joined #openstack-operators08:22
*** dbecker has quit IRC08:24
*** derekh has joined #openstack-operators08:26
*** dbecker has joined #openstack-operators08:28
*** itsuugo has quit IRC08:29
*** itsuugo has joined #openstack-operators08:31
*** racedo has joined #openstack-operators08:36
*** saneax-_-|AFK is now known as saneax08:42
*** electrofelix has joined #openstack-operators08:43
*** itsuugo has quit IRC08:49
*** itsuugo has joined #openstack-operators08:50
*** VW has joined #openstack-operators08:50
*** VW has quit IRC08:55
*** itsuugo has quit IRC08:55
*** itsuugo has joined #openstack-operators08:56
*** itsuugo has quit IRC09:08
*** itsuugo has joined #openstack-operators09:10
*** itsuugo has quit IRC09:15
*** itsuugo has joined #openstack-operators09:16
*** simon-AS559 has joined #openstack-operators09:17
*** sudipto_ has joined #openstack-operators09:25
*** sudipto has joined #openstack-operators09:25
*** itsuugo has quit IRC09:27
*** hughsaunders is now known as hushsaunders09:28
*** itsuugo has joined #openstack-operators09:29
*** hushsaunders is now known as hush09:29
*** hush is now known as hughsaunders09:30
*** itsuugo has quit IRC09:34
*** itsuugo has joined #openstack-operators09:36
*** itsuugo has quit IRC09:41
*** itsuugo has joined #openstack-operators09:42
*** itsuugo has quit IRC09:47
*** itsuugo has joined #openstack-operators09:48
*** itsuugo has quit IRC09:57
*** itsuugo has joined #openstack-operators09:58
*** itsuugo has quit IRC10:07
*** bvandenh_ has quit IRC10:08
*** itsuugo has joined #openstack-operators10:09
*** itsuugo has quit IRC10:14
*** rstarmer has quit IRC10:15
*** itsuugo has joined #openstack-operators10:15
*** itsuugo has quit IRC10:20
*** itsuugo has joined #openstack-operators10:21
*** ducttape_ has joined #openstack-operators10:26
*** ducttape_ has quit IRC10:30
*** itsuugo has quit IRC10:33
*** itsuugo has joined #openstack-operators10:36
*** itsuugo has quit IRC10:41
*** itsuugo has joined #openstack-operators10:42
*** karad has joined #openstack-operators10:44
*** itsuugo has quit IRC10:51
*** itsuugo has joined #openstack-operators10:52
*** VW has joined #openstack-operators10:52
*** itsuugo has quit IRC10:57
*** VW has quit IRC10:57
*** itsuugo has joined #openstack-operators10:58
*** itsuugo has quit IRC11:09
*** itsuugo has joined #openstack-operators11:10
*** sudswas__ has joined #openstack-operators11:12
*** sudipto has quit IRC11:13
*** sudipto has joined #openstack-operators11:13
*** sudipto_ has quit IRC11:14
*** itsuugo has quit IRC11:21
*** itsuugo has joined #openstack-operators11:23
*** ducttape_ has joined #openstack-operators11:27
*** itsuugo has quit IRC11:30
*** markvoelker has joined #openstack-operators11:31
*** itsuugo has joined #openstack-operators11:31
*** ducttape_ has quit IRC11:31
*** markvoelker has quit IRC11:35
*** paramite has joined #openstack-operators11:36
*** itsuugo has quit IRC11:38
*** itsuugo has joined #openstack-operators11:39
*** catintheroof has quit IRC11:39
*** itsuugo has quit IRC11:44
*** itsuugo has joined #openstack-operators11:45
*** jsheeren has joined #openstack-operators11:48
*** VW has joined #openstack-operators12:07
*** itsuugo has quit IRC12:07
*** itsuugo has joined #openstack-operators12:09
*** VW has quit IRC12:10
*** VW has joined #openstack-operators12:11
*** ducttape_ has joined #openstack-operators12:14
*** VW has quit IRC12:16
*** sudipto has quit IRC12:19
*** sudswas__ has quit IRC12:19
*** itsuugo has quit IRC12:20
*** sudipto has joined #openstack-operators12:21
*** itsuugo has joined #openstack-operators12:21
*** sudswas__ has joined #openstack-operators12:24
*** maticue has joined #openstack-operators12:24
*** markvoelker has joined #openstack-operators12:26
*** itsuugo has quit IRC12:26
*** itsuugo has joined #openstack-operators12:27
*** catintheroof has joined #openstack-operators12:27
catintheroof: Hi, quick question: suppose I have lots of users in a single OU in LDAP, and I need to assign each user to its own domain. I don't need a domain-specific driver for that, right? I just need multi-domain support enabled. And how do I set up a filter so that every user maps to its own domain? Can I apply some filter in keystone to achieve that?  12:29
*** itsuugo has quit IRC12:34
*** itsuugo has joined #openstack-operators12:35
*** mriedem has joined #openstack-operators12:38
*** ducttape_ has quit IRC12:39
*** itsuugo has quit IRC12:40
*** itsuugo has joined #openstack-operators12:41
*** uxdanielle has joined #openstack-operators12:43
*** makowals has joined #openstack-operators12:59
*** rstarmer has joined #openstack-operators13:00
*** makowals has quit IRC13:01
*** makowals_ has joined #openstack-operators13:02
*** makowals_ has quit IRC13:03
*** makowals has joined #openstack-operators13:04
*** rstarmer has quit IRC13:05
*** VW has joined #openstack-operators13:10
*** dminer has joined #openstack-operators13:27
*** VW has quit IRC13:27
*** VW has joined #openstack-operators13:28
*** alaski is now known as lascii13:28
*** piet has joined #openstack-operators13:45
*** vinsh has joined #openstack-operators13:52
*** saneax is now known as saneax-_-|AFK13:55
*** makowals has quit IRC13:56
*** jsheeren has quit IRC14:01
*** ducttape_ has joined #openstack-operators14:03
*** _ducttape_ has joined #openstack-operators14:18
*** ducttape_ has quit IRC14:21
*** vinsh has quit IRC14:35
*** itsuugo has quit IRC14:38
*** itsuugo has joined #openstack-operators14:41
*** ircuser-1 has quit IRC14:46
*** vinsh has joined #openstack-operators14:47
*** vinsh has quit IRC14:47
*** vinsh has joined #openstack-operators14:48
*** VW has quit IRC14:50
*** albertom has quit IRC15:03
*** albertom has joined #openstack-operators15:13
*** jkraj has quit IRC15:13
*** rcernin has quit IRC15:15
*** jkraj has joined #openstack-operators15:19
*** jkraj has quit IRC15:30
*** itsuugo has quit IRC15:39
*** itsuugo has joined #openstack-operators15:41
*** electrofelix has quit IRC15:50
*** mriedem is now known as mriedem_afk15:50
*** derekh has quit IRC16:02
*** matrohon has quit IRC16:02
*** pilgrimstack has quit IRC16:04
*** britthouser has joined #openstack-operators16:08
*** VW has joined #openstack-operators16:11
*** britthouser has quit IRC16:11
*** Benj_ has joined #openstack-operators16:11
*** britthouser has joined #openstack-operators16:12
*** VW_ has joined #openstack-operators16:13
*** Benj_ has quit IRC16:13
*** Benj_ has joined #openstack-operators16:14
*** hieulq_ has joined #openstack-operators16:15
*** fragatina has joined #openstack-operators16:16
*** VW has quit IRC16:16
*** Apoorva has joined #openstack-operators16:16
*** fragatina has quit IRC16:20
*** VW_ has quit IRC16:22
*** VW has joined #openstack-operators16:23
*** rmcall has quit IRC16:27
*** VW has quit IRC16:28
*** rmcall has joined #openstack-operators16:28
*** hieulq_ has quit IRC16:31
*** hieulq_ has joined #openstack-operators16:32
*** dminer has quit IRC16:33
*** piet has quit IRC16:34
*** piet has joined #openstack-operators16:35
*** rmcall has quit IRC16:36
*** _ducttape_ has quit IRC16:36
*** ducttape_ has joined #openstack-operators16:37
*** rmcall has joined #openstack-operators16:37
*** ducttape_ has quit IRC16:38
*** ducttape_ has joined #openstack-operators16:38
openstackgerrit: Craig Sterrett proposed openstack/osops-tools-contrib: Edited Readme added testing information  https://review.openstack.org/371705  16:40
*** rmcall has quit IRC16:41
*** fragatina has joined #openstack-operators16:46
*** fragatina has quit IRC16:49
*** fragatina has joined #openstack-operators16:49
*** hieulq_ has quit IRC16:54
*** mwturvey has quit IRC16:56
*** itsuugo has quit IRC17:00
*** itsuugo has joined #openstack-operators17:00
*** simon-AS5591 has joined #openstack-operators17:04
*** simon-AS5591 has quit IRC17:04
*** ircuser-1 has joined #openstack-operators17:05
*** fragatina has quit IRC17:05
*** fragatina has joined #openstack-operators17:06
pabelanger: afternoon  17:06
pabelanger: I'm trying to learn more about pre-caching glance images on compute nodes, interested in any documentation around the subject.  17:07
*** simon-AS559 has quit IRC17:07
pabelanger: With infracloud, we are only using the local HDD on the compute node to store the images from glance.  17:08
pabelanger: Obviously, when we upload a new image to glance, we are seeing problems launching new instances for the first time because they need to fetch the new image  17:08
pabelanger: I'm curious how other operators deal with this issue.  17:09
pabelanger: These are qcow2 images too  17:09
jlk: Rackspace used something they built, scheduled images I think.  17:10
jlk: a system to prime the system behind the scenes  17:10
jlk: there were efforts to push that upstream, but I'm not sure where it stalled out  17:10
*** ducttape_ has quit IRC17:11
jlk: at IBM private cloud we don't bother with it; image fetch time is pretty fast, and instance launch time hasn't been a customer complaint  17:11
pabelanger: Right, that is what I am finding in the googles.  People have either implemented something locally or it is not an issue because of their setup  17:14
*** rstarmer has joined #openstack-operators17:40
mgagne: yea, looking for the same here. I think pabelanger is asking on my behalf ;)  17:40
mgagne: I will be looking into implementing a swift image download for https://github.com/openstack/nova/tree/master/nova/image/download  17:41
mgagne: and see where it goes  17:41
*** jamesdenton has joined #openstack-operators17:43
*** VW has joined #openstack-operators17:45
*** ducttape_ has joined #openstack-operators17:45
notmyname: mgagne: I'm not sure of the context of that, but if you need guidance on swift, let me know  17:53
mgagne: notmyname: 1) upload a new image to Glance (with Swift backend) 2) Boot 50 instances spread on 50 computes. 3) Observe chaos where Nova and Glance struggle to download and cache the new image.  17:54
mgagne: notmyname: hmm so you are the one that wanted to be my friend at the ops meetup? =)  17:55
notmyname: mgagne: I want to be everyone's friend ;-)  17:58
mgagne: =)  17:58
notmyname: especially ops  17:58
mgagne: hehe  17:58
mgagne: so I'm looking to bypass Glance when downloading images stored in Swift and see where it goes in terms of performance  17:59
notmyname: what's the current bottleneck?  17:59
notmyname: network, cpu, drive IO?  18:00
mgagne: we have yet to fully identify the problem (we have a firewall, glance, and swift in the data path), but so far we see Glance itself as an unnecessary element in the download chain  18:00
notmyname: ok  18:00
notmyname: so one image needs to be loaded by 50 compute instances. and, because it wouldn't be fun otherwise, all 50 need the image at the same time  18:01
mgagne: yep  18:01
mgagne: exact situation  18:02
notmyname: how big is the image?  18:02
mgagne: ask infra =), maybe 10GB  18:02
notmyname: ok  18:02
pabelanger: 8GB  18:02
pabelanger: qcow2  18:02
notmyname: and the whole image is needed before any of it can be used, right? so eg you can't start using the first byte before the last byte is downloaded  18:02
pabelanger: actually not sure how that would work  18:03
pabelanger: or what other providers do  18:03
pabelanger: but, I would imagine, we need the whole image  18:03
notmyname: yeah  18:03
mgagne: so we could scale Swift and Glance. But with Glance, the cache is not shared, so it would need to warm up on all nodes first. The Glance cache is not used to its full potential since every compute node wants the image right now; Glance has no time to warm up the cache.  18:04
notmyname: ok, and how many drives are in your swift cluster? order of magnitude? more than 100?  18:04
mgagne: we can cap the network bandwidth just fine when downloading a file =)  18:04
mgagne: I think one of the network devices between Glance and Swift could be a bottleneck, and also Glance itself (need to verify that first)  18:05
notmyname: I'm thinking of drive bandwidth, and it matters if you have 8 drives or 80 drives (or 800)  18:05
mgagne: there are a lot of elements and systems in the data path. I don't think Swift itself is the bottleneck.  18:06
notmyname: I hope not :-)  18:07
mgagne: I'm sure it's not =)  18:07
*** piet has quit IRC18:08
notmyname: but with the idea I want to share, I want to make sure my suggestion matches what's possible. and it *really* matters if you have a few drives or "enough" drives  18:08
mgagne: true, will consider this aspect when adding more proxies as drives might be the next bottleneck for Swift.  18:08
*** sudipto has quit IRC18:08
*** sudswas__ has quit IRC18:08
notmyname: right. so do you have about 10 drives in the cluster or more than 100?  18:09
mgagne: would have to check, I'm not the one managing Swift =)  18:10
mgagne: more than 100  18:11
notmyname: ok, so assuming you have enough drives, here's what I'd do if I were writing a client to download 8+GB objects from swift and wanted to get 100% of the bytes to 50+ clients ASAP...  18:11
notmyname: ok, 100+ is fine for this case  18:12
notmyname: first, since there's a 5GB limit on an individual object in swift, you'll need to use large object manifests. I would definitely use static large objects (SLO)  18:12
*** harlowja has joined #openstack-operators18:13
notmyname: SLOs work by making an object whose contents are a JSON blob listing the *other* objects that make it up. think of it kinda like a table of contents  18:13
*** paramite has quit IRC18:13
*** ducttape_ has quit IRC18:13
notmyname: so I'd split the original image into small pieces before uploading it. something in the 100-250MB range  18:14
notmyname: so you'd split it locally using whatever you want (FWIW, the python-swiftclient SDK and CLI can do this for you) and upload those to /v1/myaccount/image_foo_segments/seg_00000 etc  18:15
notmyname: so with an 8GB image and 100MB segments, you'd end up with ~80 segments  18:15
mgagne: Glance manages this aspect, the user/customer doesn't manage it. I'm sure we can configure Glance to split as suggested  18:15
notmyname: right  18:15
mgagne: might already be the case by default  18:16
notmyname: then the 81st object you'd put into swift is the SLO manifest: /v1/myaccount/myimages/awesome.image  18:16
mgagne: will have to check and apply suggestions  18:16
notmyname: yeah, they used to use DLOs and then did some work to switch to SLOs at some point  18:16
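(For reference: a minimal sketch of the segmented upload plus SLO manifest described above, using python-swiftclient's Connection API. The credentials, container names, and segment size here are hypothetical, and error handling is omitted; Glance's Swift backend does its own chunking, as discussed below.)

    # Sketch only: names, credentials, and the 100MB segment size are made up.
    import json
    from swiftclient.client import Connection

    conn = Connection(authurl='https://keystone.example.com/v3',
                      user='demo', key='secret',
                      os_options={'project_name': 'demo'},
                      auth_version='3')

    SEGMENT_SIZE = 100 * 1024 * 1024  # ~100MB pieces, per the suggestion above

    # Upload the pieces to a segments container...
    manifest = []
    with open('awesome.image', 'rb') as f:
        index = 0
        while True:
            chunk = f.read(SEGMENT_SIZE)
            if not chunk:
                break
            seg_name = 'seg_%05d' % index
            etag = conn.put_object('image_foo_segments', seg_name, contents=chunk)
            manifest.append({'path': '/image_foo_segments/' + seg_name,
                             'etag': etag,
                             'size_bytes': len(chunk)})
            index += 1

    # ...then the "81st object": a static large object manifest (a JSON list
    # naming the segments), PUT with ?multipart-manifest=put.
    conn.put_object('myimages', 'awesome.image',
                    contents=json.dumps(manifest),
                    query_string='multipart-manifest=put')

(The swift CLI can do the split-and-upload in one shot with `swift upload --segment-size <bytes> --use-slo <container> <file>`, which is the python-swiftclient convenience mentioned above.)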
mgagne: I'm sure Swift isn't the bottleneck for now. As mentioned, we have network hardware and Glance in the data path that we will need to check first. Once those are fixed/removed, we will work on optimizing Swift, which might scale better than Glance =)  18:16
notmyname: ok, so now you've got the image into swift. now let's load it really really quickly using all the hardware in the swift cluster  18:17
notmyname: so the naive way is to simply have all 50 clients GET the SLO and stream the bytes  18:17
notmyname: that will have a few problems  18:17
notmyname: first is that you'll end up with a lot of drive contention as each client starts getting bytes off of a single (or small set of) drive  18:18
notmyname: an HDD is bad at serving bytes quickly to 50+ clients at the same time. HDDs are slow  18:18
mgagne: right, is there something that can be done in the proxy for that?  18:18
notmyname: the second problem is that one stream won't likely max the network bandwidth for one client  18:18
notmyname: so the trick to solve both problems is the same thing: concurrently download different parts  18:19
mgagne: is this something that is controlled on the client side? or can the swift proxy do some magic?  18:20
notmyname: so I'd make each of the 50+ VM hosts download the SLO itself (ie get the manifest) and then parse it to find the segments. then each client should randomize the list and download 10 segments at a time. then after all 80+ segments are downloaded, reassemble locally (ie cat them together)  18:20
notmyname: no, you don't want the swift proxy to do this, otherwise you'll be limited by one swift proxy. much much better to do this on the client itself  18:20
mgagne: right, so we are talking about implementation details in the nova image download driver (that doesn't exist atm)  18:20
notmyname: right  18:20
mgagne: awesome  18:21
notmyname: but a lot of that logic about concurrent downloads is already written in python-swiftclient.  18:21
mgagne: so it's just a matter of enabling the feature?  18:22
notmyname: so if this is done, you'll have 50+ clients concurrently downloading parts of the whole in such a way that 80+ drives in the cluster are used at the same time. you should be able to max out your network and not max out HDD IO in this way  18:22
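(A minimal sketch of the client-side approach notmyname describes: fetch the SLO manifest, shuffle the segment list, download a handful of segments at a time, and write them into a pre-sized local file. The conn_factory callable, names, and worker count are assumptions for illustration; retries and error handling are omitted.)

    # Sketch only: assumes the image is stored as an SLO. conn_factory is a
    # caller-supplied zero-argument callable returning a fresh
    # swiftclient.client.Connection (one per use, since Connection objects
    # should not be shared across threads; a real implementation would pool them).
    import json
    import random
    from concurrent.futures import ThreadPoolExecutor

    def fetch_slo(conn_factory, container, obj, dest_path, workers=10):
        # Ask swift for the manifest itself rather than the assembled object.
        _headers, body = conn_factory().get_object(
            container, obj, query_string='multipart-manifest=get')
        segments = json.loads(body)

        # Pre-size the destination file so each worker can write at its offset.
        offsets, total = [], 0
        for seg in segments:
            offsets.append(total)
            total += seg['bytes']
        with open(dest_path, 'wb') as f:
            f.truncate(total)

        def get_segment(work_item):
            seg, offset = work_item
            seg_container, seg_name = seg['name'].lstrip('/').split('/', 1)
            _h, data = conn_factory().get_object(seg_container, seg_name)
            with open(dest_path, 'r+b') as f:
                f.seek(offset)
                f.write(data)

        # Randomize the order so 50+ hosts don't all hit the same drives first.
        work = list(zip(segments, offsets))
        random.shuffle(work)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(get_segment, work))

(Once python-swiftclient grows native concurrent segment downloads, per the wishlist bug discussed below, this per-segment plumbing would presumably move into the library.)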
mgagne: shuffle?  18:22
mgagne: but it looks like the option is enabled by default. no?  18:23
*** timburke has joined #openstack-operators18:24
notmyname: hi timburke  18:24
mgagne: "download 10 segments" -> controlled by object_dd_threads?  18:24
notmyname: thingee: meet mgagne  18:24
notmyname: timburke:  18:24
timburke: hi mgagne!  18:25
mgagne: hi!  18:25
notmyname: timburke: ok, here's the summary: mgagne wants to download an SLO as quickly as possible to 50+ clients  18:25
notmyname: timburke: what are the options in swiftclient (service) to do the concurrent SLO download stuff you wrote?  18:25
mgagne: or with a bit more context: I want to fix the thundering herd problem when a new private image is uploaded by a customer who then boots 50 instances on as many compute nodes. =)  18:26
timburke: notmyname: i didn't *write* it yet! that's what https://bugs.launchpad.net/python-swiftclient/+bug/1621562 is about  18:26
openstack: Launchpad bug 1621562 in python-swiftclient "Download large objects concurrently" [Wishlist,Confirmed]  18:26
*** VW has quit IRC18:26
notmyname: lolo  18:26
*** VW has joined #openstack-operators18:27
notmyname: ok, yeah, that's what I was describing to mgagne as a way to do it  18:27
*** chlong_ has quit IRC18:27
notmyname: the option 1 there  18:27
timburke: basically, you'll want each client to download the manifest (rather than the large object), shuffle the segments, and download them into a sparse file  18:28
notmyname: mgagne: I knew timburke would have the answer ;-)  18:28
timburke: which also seems like what clayg prefers  18:28
notmyname: timburke: or a local file that's then concat'd with the others at the end  18:28
mgagne: right, makes a lot of sense  18:28
timburke: fwiw, we have code to fetch the manifest and figure out the list of segments already  18:29
timburke: just a matter of typing :P  18:29
notmyname: but there's still some mechanism in swiftclient to facilitate that, if a client wanted to do it now  18:29
notmyname: the download threads/concurrency thing  18:29
mgagne: yea, found it. so I'm not sure how it relates to the bug report  18:29
timburke: how are we using it? cli or python?  18:29
mgagne: or could it be that python only supports concurrent download of different objects and not large object chunks? http://docs.openstack.org/developer/python-swiftclient/swiftclient.html#swiftclient.multithreading.MultiThreadingManager  18:31
*** VW has quit IRC18:31
mgagne: and upload is supported with segment_threads but not download of chunks?  18:32
mgagne: first time I'm looking at swiftclient, so bear with me =)  18:33
notmyname: yeah, the bug report is to do that automatically on downloads. It should be possible to do it today, it's just that the magic auto-ness isn't in swiftclient downloads. but one could use the primitives to do the same thing that's in the bug report  18:33
timburke: weeeeelllll... we can find a way around it, if we're willing to do the reconstruction client-side. but yes, the dd-threads option is talking about objects, not segments within a large object  18:33
mgagne: right. brb in 5m  18:34
notmyname: timburke: so a client could build the "work list" by fetching the manifest, then use the download threads to get concurrency, then reconstruct locally, right?  18:34
* notmyname needs to step out too. will be back later  18:35
timburke: how much can we assume about the upload? eg, that it was done by swiftclient? that it was a dlo, or an slo? or do we want to discover all of this as part of it?  18:35
*** tdasilva has joined #openstack-operators18:35
*** rstarmer has quit IRC18:36
mgagne: timburke: the idea I had is to add the ability for nova-compute to download an image from Swift if possible, instead of from Glance itself, so the middleman is bypassed. So it would have to support whatever methods Glance used to upload the image to Swift.  18:38
mgagne: I have a lot on my plate for the next few weeks. So I might just throw resources/hardware at the problem for now. If I find time, I might work on the proposed idea.  18:39
timburke: makes sense. and doing it in python should make things easier  18:40
timburke: fwiw, glance would be uploading slos now, but dlos previously  18:41
mgagne: which version?  18:41
mgagne: we are running kilo atm, planning on upgrading soon (that's one of the tasks on my plate)  18:42
mgagne: but this future work would need the work described in the above bug report, right?  18:42
timburke: actually, i stand corrected. glance uses dlos, and presumably has for a while. maybe i was thinking of shade or someone else?  18:44
*** saneax-_-|AFK is now known as saneax18:46
timburke: mgagne: depends on how exactly we want the integration to work. ideally, yes, we'd fix swiftclient to do the concurrent download of segments, then get nova to use the new feature  18:47
mgagne: :D  18:49
timburke: but if you need it in a hurry, you could get nova to head the object and determine the manifest prefix, get the appropriate container listing, pipe that back through swiftclient to download, and then reassemble yourself. with the caveat that hopefully that would all be replaced fairly quickly once the feature lands in swiftclient  18:49
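(A rough sketch of that stop-gap for a DLO, where the X-Object-Manifest header names a "<segment_container>/<prefix>" and the object is the concatenation, in name order, of every object matching that prefix. Same assumptions as the SLO sketch earlier: the conn_factory callable and names are hypothetical, and error handling is omitted.)

    # Sketch only: DLO variant of the concurrent download above.
    import random
    from concurrent.futures import ThreadPoolExecutor

    def fetch_dlo(conn_factory, container, obj, dest_path, workers=10):
        conn = conn_factory()
        # HEAD the object to find "<segment_container>/<prefix>".
        headers = conn.head_object(container, obj)
        seg_container, prefix = headers['x-object-manifest'].split('/', 1)

        # The container listing (sorted by name) defines the segment order.
        _hdrs, listing = conn.get_container(seg_container, prefix=prefix,
                                            full_listing=True)

        # Pre-size the destination file so each worker can write at its offset.
        offsets, total = [], 0
        for entry in listing:
            offsets.append(total)
            total += entry['bytes']
        with open(dest_path, 'wb') as f:
            f.truncate(total)

        def get_segment(work_item):
            entry, offset = work_item
            _h, data = conn_factory().get_object(seg_container, entry['name'])
            with open(dest_path, 'r+b') as f:
                f.seek(offset)
                f.write(data)

        work = list(zip(listing, offsets))
        random.shuffle(work)  # spread the load across drives, as above
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(get_segment, work))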
timburke: having another person asking for it bumps this up *my* priority list a bit, so good to know :-)  18:50
*** pilgrimstack has joined #openstack-operators18:52
mgagne: I might not be able to invest time in the very short term but yes, it looks like a feature I would definitely use.  18:56
timburke: mgagne: fwiw, with a bit of cli magic, current swiftclient can do the concurrent download for dlos (doesn't take care of reconstruction, though, and what i've got leaves a rather messy path)  18:59
timburke: something like `swift download $(swift stat "$CONTAINER" "$DLO" | grep Manifest | sed -e 's/^[^:]*: //' -e 's!/! --prefix=!')` might be useful  18:59
*** pilgrimstack has quit IRC19:02
*** simon-AS559 has joined #openstack-operators19:05
*** VW has joined #openstack-operators19:15
*** david-lyle has quit IRC19:30
*** david-lyle has joined #openstack-operators19:30
*** david-lyle has quit IRC19:43
*** Apoorva has quit IRC19:47
*** itsuugo has quit IRC19:50
*** itsuugo has joined #openstack-operators19:50
*** VW has quit IRC20:18
*** pontusf4 has joined #openstack-operators20:30
*** pontusf has joined #openstack-operators20:32
*** pontusf3 has quit IRC20:34
*** Apoorva has joined #openstack-operators20:34
*** pontusf4 has quit IRC20:35
*** pontusf has quit IRC20:37
*** pontusf has joined #openstack-operators20:39
*** itsuugo has quit IRC20:40
*** itsuugo has joined #openstack-operators20:41
*** pontusf1 has joined #openstack-operators20:41
*** pontusf has quit IRC20:43
*** pontusf1 has quit IRC20:45
*** lascii is now known as alaski20:53
*** itsuugo has quit IRC21:02
*** itsuugo has joined #openstack-operators21:04
*** wasmum has quit IRC21:18
*** VW has joined #openstack-operators21:19
*** wasmum has joined #openstack-operators21:20
*** simon-AS559 has quit IRC21:22
*** VW has quit IRC21:24
*** simon-AS559 has joined #openstack-operators21:25
*** jamesdenton has quit IRC21:26
*** fragatin_ has joined #openstack-operators21:33
*** fragatina has quit IRC21:36
*** saneax is now known as saneax-_-|AFK21:40
*** itsuugo has quit IRC21:43
*** itsuugo has joined #openstack-operators21:44
*** fragatin_ has quit IRC21:48
*** fragatina has joined #openstack-operators21:50
*** fragatina has quit IRC22:10
*** fragatina has joined #openstack-operators22:10
*** itsuugo has quit IRC22:18
*** itsuugo has joined #openstack-operators22:19
*** simon-AS559 has quit IRC22:20
*** VW has joined #openstack-operators22:21
*** mriedem_afk is now known as mriedem22:22
*** VW has quit IRC22:26
*** makowals has joined #openstack-operators22:31
*** catintheroof has quit IRC22:31
*** markvoelker has quit IRC22:34
*** itsuugo has quit IRC22:39
*** itsuugo has joined #openstack-operators22:41
*** dtrainor has quit IRC22:43
*** david-lyle has joined #openstack-operators22:48
*** itsuugo has quit IRC22:55
*** itsuugo has joined #openstack-operators22:56
*** erhudy has quit IRC23:02
*** pontusf1 has joined #openstack-operators23:04
*** itsuugo has quit IRC23:06
*** itsuugo has joined #openstack-operators23:08
*** Benj_ has quit IRC23:34
*** rmcall has joined #openstack-operators23:46
