Friday, 2011-08-12

00:15 <nhm> anyone tried setting up heterogeneous compute nodes?
00:15 <dchalloner> Does anyone know how you go about changing the auth url openstack reports back to the client? Right now mine is https://<ip address> and I think java will never do SSL to an IP even if you set the CN correctly.
00:24 <nhm> woo, our grant got funded. I've got about $170k to spend on an openstack deployment. :D
00:35 <heckj> nhm: congrats!
00:55 <HugoKuo> Morning guys
00:59 <nhm> heckj: thanks!
01:13 <thickskin> hi all
01:13 <thickskin> does anyone know about using qcow2 images with Xen?
01:15 <HugoKuo> hi all
04:10 <techthumb> is there a tutorial to get nova-compute to talk to an esxi host?
05:06 <uvirtbot> New bug: #824967 in nova "Parent instance is not bound to a Session; lazy load operation of attribute 'instance' cannot proceed" [Undecided,In progress] https://launchpad.net/bugs/824967
06:11 <viveksnv> hi all
06:12 <viveksnv> can we use different virtualization models like kvm, Xen, qemu etc. with a single openstack setup?
06:14 <viveksnv> I have an ubuntu server with intel-VT capable hardware... how can I try different virtualization models?
06:14 <viveksnv> is it possible?
06:16 <alekibango> viveksnv: it is
06:16 <alekibango> but not in a single openstack setup, i am afraid
06:20 <viveksnv> alekibango: thanks for your reply... i am confused about different things... what is the role of nova-compute?
06:20 <viveksnv> alekibango: will one nova-compute work with one virtualization model (one for kvm, one for xen, one for qemu)?
06:32 <dhanuxe> hello...
06:32 <dhanuxe> how do I fix the error from this bug? http://j.mp/p6g4oQ
06:34 <alekibango> compute is here for managing virtual guests
06:35 <alekibango> you need to pick one hypervisor for one install
06:35 <alekibango> iirc
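
A minimal sketch of what alekibango describes, assuming the 2011-era nova flagfile syntax: the hypervisor is chosen per compute node via the libvirt_type flag, so mixing hypervisors means dedicating different compute nodes to each (values here are illustrative):

    # /etc/nova/nova.conf on one compute node: one hypervisor per nova-compute install
    --connection_type=libvirt
    --libvirt_type=kvm        # or qemu, or xen, depending on the host
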
06:58 <kidrock> Hi everyone.
06:58 <kidrock> I installed the newest nova milestone version
06:59 <kidrock> created the zip file and ran `source novarc`
06:59 <kidrock> euca-describe-instances
07:00 <kidrock> the following error occurred
07:00 <kidrock> http://paste.openstack.org/show/2148/
07:00 <kidrock> pls help me. Thanks
07:56 <uvirtbot> New bug: #825024 in glance "'glance add' treats size=xxx as a normal property" [Undecided,New] https://launchpad.net/bugs/825024
08:58 <deepest> Hi everyone!
08:58 <deepest> I want to ask you about Swift again
08:58 <CloudAche84> morning
08:59 <deepest> I have received conflicting information about Swift
08:59 <deepest> some people tell me about the limit imposed by a small disk drive in a cluster:
08:59 <deepest> if the first storage node = 100GB, the second storage node = 250GB, and the third storage node = 500GB, then I send 120GB to Swift and it fails when the first storage node is full
08:59 <deepest> that would mean the Swift architecture has no mechanism to change the location when one or more disks run out of space
08:59 <deepest> the other claim is that Swift doesn't care about the individual disk drives you have, only the total space across all drives
08:59 <deepest> meaning: if you have thousands of disk drives numbered 1 to n and you transfer data to Swift, then when the 1st drive is full, Swift will put the rest of the data in another location
08:59 <deepest> I am really confused. Do you have any document or tutorial covering this? If so, please point me to it
09:00 <deepest> hi CloudAche84
09:00 <CloudAche84> how many disks do you have in total? is it just 3?
09:00 <deepest> no
09:01 <deepest> I don't mean just 3
09:01 <deepest> there are 2 kinds of conflicting information
09:02 <deepest> first: Swift cares about the size of each drive
09:02 <deepest> second: Swift only cares about the total size of all drives
09:03 <deepest> this is what confuses me
09:03 <CloudAche84> so how many disks do you have, and how many nodes?
09:03 <deepest> ah, right now I have 5 drives
09:04 <deepest> with 5 nodes
09:04 <deepest> and different sizes
09:05 <deepest> what happens if I send more data than the size of the smallest drive?
09:06 <deepest> some guys said no problem, but some guys said it fails
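
For the record, Swift's answer to unequal drives is ring weights: each device is added to the ring with a weight, typically proportional to its capacity, so partitions (and therefore data) are spread in proportion to drive size rather than filling the smallest drive first. A minimal sketch; IPs, zones, and device names are illustrative:

    # weight roughly = capacity in GB, so larger drives receive more partitions
    swift-ring-builder object.builder create 18 3 1
    swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100
    swift-ring-builder object.builder add z2-10.0.0.2:6000/sdb1 250
    swift-ring-builder object.builder add z3-10.0.0.3:6000/sdb1 500
    swift-ring-builder object.builder rebalance
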
09:08 <anp> hi
09:08 <anp> when installing the openstack CC and a compute node on the same machine
09:08 <anp> I get errors:
09:08 <anp> Error: openstack-nova-compute-config conflicts with openstack-nova-cc-config
09:08 <anp> Error: openstack-nova-cc-config conflicts with openstack-nova-compute-config
09:08 <anp> I use CentOS 6 and the Griddynamics repo
09:08 <anp> please help me
09:14 <deepest_> back again
09:14 <deepest_> lost connection
09:14 <deepest_> CloudAche84, are you there?
09:32 <uvirtbot> New bug: #825074 in nova "Release floating IP with OS API" [Undecided,New] https://launchpad.net/bugs/825074
10:01 <oziaczek> I'm deploying openstack with the deployment tool, installing the whole package with glance and swift as well. during the nova installation I got: 2011-08-12 10:28:22,431 - ERROR - The process id of nova-volume is changing. 30254 -> 30334 2011-08-12 10:28:22,431 - ERROR - Error occured when starting the service(nova-volume).
10:01 <oziaczek> I created an LVM volume
10:02 <oziaczek> I named it nova-volumes, everything seems fine in the configuration, but it doesn't work
10:02 <oziaczek> I run `service nova-volume start` and I get told that it is running
10:03 <oziaczek> but later I can't find it in `nova-manage service list`
10:03 <oziaczek> anyone with some idea what is going on?
10:22 <viraptor> oziaczek: /var/log/nova/nova-volume will tell you the truth...
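
A minimal sanity-check sketch for this kind of nova-volume failure; the volume group name assumes the default flag value, and exact log paths vary by packaging:

    sudo vgdisplay nova-volumes                    # does the volume group exist at all?
    grep volume_group /etc/nova/nova.conf          # e.g. --volume_group=nova-volumes
    sudo tail -n 50 /var/log/nova/nova-volume.log  # the actual reason the service died
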
10:42 <oziaczek> yes, i got it! don't know why i haven't checked them before!
10:42 <oziaczek> thanks
13:48 <Dunkirk> Started from scratch, followed these instructions: http://wiki.openstack.org/RunningNova, and I can see in VNC that my VM is stuck trying to boot at the SeaBIOS screen.
13:53 <Dunkirk> I've seen this behavior before, but then I was getting messages about networking. I've scrapped the trunk packages and gone back to release, and I'm not seeing anything in the logs about networking now.
13:54 <Dunkirk> In any case, shouldn't following the instructions get me up and running? I don't understand what's wrong, so I don't have a clue as to what to try differently.
13:54 <Dunkirk> Can someone have pity on me and hit me with a cluebat?
13:56 <dilemma> I think it's a bit early in the day. Those who wield the cluebats are still sleeping/breakfasting.
13:57 <kbringard> when you say "Started from scratch", did you completely reinstall the operating system, or just reinstall OpenStack?
13:57 <kbringard> Dunkirk ^^
13:59 <Dunkirk> kbringard: Just OpenStack.
13:59 <Dunkirk> kbringard: I checked all the nova-manage options to make sure that nothing was left defined, though.
13:59 <kbringard> on your compute node, I'd check the _base directory
13:59 <kbringard> and do a glance index
13:59 <kbringard> make sure the images you've uploaded are the correct size
13:59 <kbringard> and that it matches what's in _base
14:00 <Dunkirk> kbringard: This is new info. Where's _base?
14:00 <kbringard> in /var/lib/nova/instances
14:00 <kbringard> so, the quick shibby on how it works
14:00 <kbringard> I have no idea if this is what the issue is, btw
14:00 <kbringard> but sometimes this happens
14:01 <kbringard> glance pulls the image from wherever (by default it's /var/lib/glance/images/)
14:01 <kbringard> and serves it up via the API
14:01 <kbringard> the compute node downloads the image and stores it in /var/lib/nova/instances/_base (by default)
14:01 <kbringard> it then uses this as the backing image when it builds out the individual disk for the instance
14:02 <kbringard> and it keeps it cached, so subsequent instances launch more quickly
14:02 <kbringard> I've seen it, sometimes, get a bad image in the cache in _base on the compute node
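
A sketch of the consistency check kbringard describes; the paths are the defaults he mentions, and the image-id placeholder is illustrative:

    glance index                                   # image sizes/ids as glance knows them
    ls -lh /var/lib/nova/instances/_base/          # what the compute node actually cached
    qemu-img info /var/lib/nova/instances/_base/<image-id>   # inspect a suspect backing file
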
14:02 <Dunkirk> kbringard: Well I don't know how big the image should be, but `glance index' is giving me an "unable to connect to server".
14:03 <kbringard> are you running it on the server glance is installed on?
14:03 <Dunkirk> Yeah, it's all one server
14:03 <kbringard> that would probably be your problem
14:03 <kbringard> check the glance logs
14:03 <kbringard> make sure it's running and accepting connections
14:04 <Dunkirk> OK, I've started glance-api and glance-registry...
14:05 <kbringard> does glance index work now?
14:05 <Dunkirk> kbringard: it does, but it doesn't show any images...
14:05 <kbringard> there's your second problem :-)
14:05 <kbringard> I'm guessing you used the uec-publish stuff to upload them?
14:06 <Dunkirk> Roger
14:06 <kbringard> so, real quick, the way that works
14:06 <kbringard> and sorry, i'm just trying to help you (and whomever else) understand how this works so you can troubleshoot it better in the future :-)
14:06 <kbringard> cause it can be a black box, haha
14:06 <kbringard> so, the publish stuff was written for eucalyptus
14:06 <Dunkirk> kbringard: Dude, please, treat me like I'm a noob. Cause I am.
14:06 <kbringard> which uses the ec2 api
14:07 <kbringard> so what happens in openstack is
14:07 <alekibango> kbringard: there is an etherpad page about this. let me find it
14:07 <kbringard> objectstore is what does the ec2 bundle stuff
14:07 <kbringard> ah, that would be helpful
14:07 <kbringard> it's probably more coherent than my "only 1 cup of coffee" rambling
14:07 <kbringard> but, the quick of it is
14:07 <Dunkirk> kbringard: Heh.
14:08 <kbringard> the uec-publish stuff uploads the stuff to objectstore
14:08 <alekibango> kbringard: but it might be wrong a bit, please continue there http://etherpad.openstack.org/create-instance-openstack
14:08 <kbringard> which was running
14:08 <kbringard> objectstore then unbundles the images and does all that fun stuff
14:08 <kbringard> and then uploads them to glance
14:08 <kbringard> so what most likely happened is, it shat itself because glance wasn't running
14:09 <kbringard> in theory, there should be something in the objectstore log
14:09 <kbringard> but, regardless, that is likely what happened
14:09 <kbringard> so, now that glance is running, I would try uploading your images again
14:09 <kbringard> and make sure you can see them in glance index
14:09 <alekibango> the relation between euca tools, nova-manage, glance and the s3 storage options in nova should be better documented
14:09 <kbringard> and that they have sane sizes
14:10 <kbringard> I personally upload straight into glance
14:10 <alekibango> kbringard: when i did this, it never worked right for me
14:10 <kbringard> if you have ruby and rubygems running, you can gem install ogler
14:10 <alekibango> even stackops is failing on that one somehow
14:10 <kbringard> it's a glance uploader I wrote, and, at least for my environments, it works quite nicely
14:10 <alekibango> kbringard: it's a long time since i tried
14:10 <Dunkirk> kbringard: I'll definitely check that out.
14:11 <alekibango> i should test the latest, prolly
14:11 <kbringard> or, alternatively
14:11 <Dunkirk> If I just re-do the publish command, it's telling me that the kernel image is already registered.
14:11 <kbringard> https://github.com/kevinbringard/OpenStack-tools/blob/master/glance-uploader.bash
14:11 <kbringard> not as elegant
14:11 <Dunkirk> How do I get nova to realize that it's not?
14:11 <alekibango> kbringard: i want to be able to use euca tools -- and upload images to glance using them... how?
14:11 <kbringard> but it wraps around the glance commands to upload images straight into glance
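
For reference, the direct-to-glance route kbringard describes looks roughly like this with the 2011-era glance CLI; the file names are illustrative, and kernel_id/ramdisk_id would be the ids returned by the first two commands:

    glance add name="natty-kernel" disk_format=aki container_format=aki < vmlinuz-virtual
    glance add name="natty-ramdisk" disk_format=ari container_format=ari < initrd-virtual
    glance add name="natty" disk_format=ami container_format=ami kernel_id=1 ramdisk_id=2 < natty.img
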
14:12 <kbringard> alekibango: you'll need to rely on the objectstore "middleware"
14:12 <Dunkirk> kbringard: That's fantastic! I really need to see what things are going on behind the scenes.
14:12 <kbringard> since it handles the decrypting and unbundling
14:12 <alekibango> hmm it sounds like you explained it in one line for me... :)
14:12 <alekibango> except that some example config would be very nice to have in the wiki
14:12 <kbringard> glance doesn't talk ec2
14:13 <alekibango> kbringard: i know... but i was not able to connect them together
14:13 <kbringard> in theory having objectstore running on the same machine as glance should "just work"
14:13 <alekibango> if you could please add some info to the wiki, we would all be grateful
14:13 <alekibango> kbringard: that theory didn't work for me
14:13 <alekibango> :)
14:14 <alekibango> and please provide example config(s)
14:14 <kbringard> but in truth I've not used it much… I switched to pure glance quite a long time ago and don't even have objectstore installed
14:14 <kbringard> I'd have to figure it out myself before I could provide examples
14:14 <alekibango> i had all kinds of hell with glance, trying to make it run before summer
14:14 <kbringard> but if I have some time I'll look at it
14:15 <Dunkirk> So... how do I "unregister" a tarball?
14:15 <alekibango> kbringard: thanks, i can wait a bit
14:15 <alekibango> but think about other noobs too
14:15 <alekibango> it's a pain
14:15 <Dunkirk> ...From "nova", so that I can re-do it and get it into "glance"?
14:15 <alekibango> :)
14:15 <kbringard> yea, documentation as a whole is something we're lacking… I think there is a documentation sprint scheduled pretty soon here
14:15 <kbringard> so hopefully in the near future things will get better on that front
14:17 <alekibango> you need to get more people writing... not accepting patches without docs could be a plus
14:17 <kbringard> yea… and tests too
14:17 <alekibango> functional tests
14:17 <alekibango> documented, open
14:18 <alekibango> (serving also as examples for noobs)
14:18 <alekibango> that would help bring openstack above other platforms
14:18 <kbringard> Dunkirk: I'm not sure… /me thinks
14:18 <alekibango> right now i am playing with archipel
14:18 <alekibango> and well, it's so easy to get running
14:18 <alekibango> just one command
14:18 <alekibango> :)
14:18 <kbringard> that's awesome
14:19 <alekibango> well, it lacks some features. but it's enough for many
14:19 <alekibango> and those people could get openstack, if that were easier
14:20 <alekibango> by making it hard for small businesses, you are losing a big share
14:20 <Dunkirk> kbringard: I think I found it: euca-deregister?
14:20 <alekibango> openstack is good for 40, or 100+ servers
14:20 <kbringard> Dunkirk: sounds right
14:20 <alekibango> should be easy for 3 too
14:22 <kbringard> yea, agreed
14:25 <Dunkirk> kbringard: That seemed to do the right thing, but euca-publish didn't get the images registered with glance anyway. :-(
14:27 <annegentle> alekibango: and kbringard: are you hoping to sprint on docs as well?
14:27 <kbringard> where I can
14:28 <kbringard> I have a lot of random notes and stuff I've written down about things that I've been a terrible person about and not put in wikis
14:29 <annegentle> kbringard: terrible person! :)
14:29 <kbringard> haha, I know! no need to rub it in
14:29 <kbringard> :-p
14:29 <annegentle> :)
14:30 <annegentle> I have been trying to determine a good week. I guess after Aug. 22nd would be good to sprint.
14:30 <kbringard> seems reasonable to me
14:30 <kbringard> I'll be on vacation the 19th-24th
14:30 <kbringard> not that you should plan sprints around me
14:30 <kbringard> haha
14:30 <annegentle> :)
14:30 <kbringard> haha, or perhaps planning them when I'm not around to mess all the docs up is a good idea
14:31 <kbringard> I'm like Fry's worms in that Parasites Lost episode of Futurama: I think I'm making things better, but as it turns out, you've just got worms
14:31 <uvirtbot> New bug: #825241 in nova "SQLAlchemy + Postgres + Eventlet" [Undecided,New] https://launchpad.net/bugs/825241
14:32 <kbringard> brb, need coffee
14:35 <alekibango> annegentle: unfortunately i can't now... maybe if i got paid... i need to earn some $$ this month
14:38 <alekibango> was playing with clouds for too long for free :)
14:43 <annegentle> alekibango: totally understand :)
15:13 <SCR512> Anyone experienced issues with the system router VM not properly handing out DHCP addresses to instances?
15:21 <uvirtbot> New bug: #825269 in nova "EC2 API: terminated instances still show up when describing instnaces" [Medium,New] https://launchpad.net/bugs/825269
15:25 <kbringard> that bug is a dupe
15:30 <viraptor> can nova-manage be used to downgrade the schema in any way? If I do sync --version=something_older I only get an exception
15:32 <annegentle> viraptor: not that I know of, but that's not a bad idea, maybe log it as a request
15:33 <annegentle> viraptor: and someone can say whether it's technically feasibl
15:33 <annegentle> feasible even
15:33 <viraptor> the code for downgrade is there in the migrations already
15:33 <viraptor> but db sync doesn't seem to call the right function
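
Since nova's migrations are sqlalchemy-migrate scripts that do define downgrade(), the downgrade path can in principle be driven directly with the migrate library. A hedged sketch, not a supported nova-manage path; the DB URL, repository path, and target version are all illustrative:

    # hypothetical: call sqlalchemy-migrate's downgrade directly (Python 2)
    from migrate.versioning import api

    db_url = 'mysql://nova:secret@localhost/nova'                          # illustrative
    repo = '/usr/lib/pymodules/python2.7/nova/db/sqlalchemy/migrate_repo'  # illustrative
    print api.db_version(db_url, repo)   # current schema version
    api.downgrade(db_url, repo, 42)      # roll back to version 42 (example target)
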
15:38 <nhm> Have any of you guys tried using the glusterfs connector?
15:39 <kbringard> nhm: sorry, nope
15:39 <nhm> kbringard: trying to decide what to use for a storage backend.
15:40 <kbringard> yea, me too
15:41 <nhm> kbringard: what have you been considering?
15:41 <kbringard> are you talking like S3 style?
15:41 <kbringard> or do you mean like, for shared instance directories and stuff
15:41 <nhm> kbringard: probably both
15:41 <kbringard> we're still working on the S3 stuff
15:41 <kbringard> been looking at Riak
15:42 <kbringard> supposedly they're working on an S3 compliant API
15:42 <kbringard> and, of course, swift
15:42 <nhm> I'm using swift now.
15:43 <kbringard> I'm actually working on getting a cluster setup as we speak
15:44 <nhm> kbringard: Cool, I've had a little 14-node test cluster running for a couple of months. Over all it does well except that I desperately need storage.
15:44 <nhm> We just got a nice grant to build a production cluster so I need to start getting serious. :)
15:44 <kbringard> I've not had a ton of personal experience with gluster, so I don't know if this is just me talking out of my ass
15:45 <kbringard> but, I've heard a few horror stories about gluster with lots of small files
15:47 <uvirtbot> New bug: #825288 in nova "Kernel Panic when start instance in Xen Environment" [Undecided,New] https://launchpad.net/bugs/825288
15:48 <creiht> If anyone gets the object storage layer working on gluster, I would be interested in hearing their experiences
15:50 <kbringard> hah, creiht, you and nhm should chat
15:50 <nhm> kbringard: we ran it briefly on one of our supercomputers, but due to some problems that may or may not have been gluster related we ended up moving to lustre.
15:51 <nhm> creiht: I'm probably going to be giving it a try some time soonish. I've got a 500TB lustre deployment I have to do in 2 weeks, but I'm hoping to squeeze it in.
15:52 <creiht> only so much time in a day :)
15:52 <nhm> tell me about it!
15:52 <creiht> nhm: hah, I've heard just as many horror storied about lustre :)
15:53 <creiht> stories
15:53 <nhm> creiht: oh lustre is a beast certainly. :)
15:54 <nhm> Its problems are more about maintainability though. It works, it just likes to stab you. Repeatedly.
15:54 <nhm> In the eye.
15:55 <creiht> hah
15:57 <nhm> creiht: So one thing I'm still not very clear on is whether glusterfs with openstack gives you any real benefits over swift/nfs/etc.
15:58 <nhm> I read through the install guide and it looked like they were just using the client on the compute nodes to share the VM images?
15:58 <notmyname> nhm: I've been trying to track down some gluster people to get some clarity on what their new connector actually is/does
16:02 <nhm> notmyname: yeah, I've been out of the loop for too long to know if the features they list are unique to using glusterfs or would work with NFS-backed storage mounted on the compute nodes.
16:02 <nhm> or with swift for that matter.
16:06 <creiht> hrm
16:06 <creiht> as pointed out by a friend, triggering replication on gluster is, um, interesting
16:06 <creiht> http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Triggering_Self-Heal_on_Replicate
16:09 <creiht> it is a different beast though
16:09 <nhm> creiht: yeah, I was reading about some... issues with replication.
16:10 <creiht> heh
16:11 <nhm> apparently they are going for unified object storage in 3.3: http://gluster.com/community/documentation/index.php/3.3beta
16:14 <creiht> nhm: yeah I looked at it briefly, and it looks like it is mostly a hacked up version of swift on top of their volumes
16:19 <nhm> creiht: yeah, that's kind of what it looks like. I wonder how stable it is. ;)
16:19 <nhm> Anyone played around with keystone yet?
16:19 <heckj> we've been working with keystone (on and off) for the past several weeks
16:20 <nhm> heckj: what are your impressions?
16:20 <heckj> Off it more recently (since the d3 milestone), there's a lot of motion on it.
16:20 <dolph_> nhm, heckj: feedback encouraged -keystone dev
16:20 <notmyname> I've messed with it enough to run it on my personal swift dev environment
16:20 <heckj> It's got the basics of AuthN, flirts with AuthZ, and has a service catalog component that takes a while to understand how to set up
16:20 <notmyname> dolph_: it allows anyone any access as long as you have a valid token
16:21 <heckj> docs on setting it up need work - but once you get it running, it does its job
16:21 <dolph_> heckj, in a meeting working on the service catalog api *right now*
16:21 <heckj> dolph_: how to set it up with the keystone-manage commands has been the biggest hurdle - just not clear what does what and how it impacts things
16:22 <mgius> I've gone in and executed SQL by hand a couple times because doing things through keystone-manage was too confusing
16:22 <heckj> nhm: the key for us (we were actively integrating swift into dashboard) was to get the URL endpoints correct from the sample-data
16:22 <dolph_> heckj, i'm not a fan of keystone-manage at all... i think everything needs to work through the api after bootstrap config
16:23 <nhm> dolph_: haven't touched it yet, but will probably do so soon. Just had a request to tie into a radius backend. O_O
16:23 <heckj> dolph_: would love that! I'm a huge fan of a well documented, and ideally simple, API
16:23 <dolph_> heckj, endpoints are a strange topic due to rackspace vs openstack -- i think we have some ideas to simplify them though
16:23 <nhm> dolph_: integration into dashboard would be fantastic.
16:24 <heckj> today I'm tracking down a bug with sqlalchemy-migrate on Ubuntu 11.04 - kicked my butt yesterday with a PPA install from trunk
16:24 <dolph_> nhm, a radius backend for keystone?
16:25 <nhm> dolph_: They don't know anything about keystone. More like "It'd be cool if we could set up a radius server and use it to target multiple backends for auth!"
16:26 <mgius> mmmm....swift packaging bug
16:26 <WormMan> today's project is to install a trunk version on a clean system, then if that works, to show my coworkers how it works in the next week or two
16:26 <dolph_> nhm, keystone is definitely positioned to be that solution... just don't have a radius implementation yet :P
16:26 <gnu111> i need to remove some compute nodes. where are these stored? I can't find them in the mysql db.
16:26 <nhm> dolph_: I don't really want to set up a radius server anyway. ;)
16:27 <dolph_> nhm, ha
16:27 <nhm> dolph_: Though ldap is definitely useful... so long as I don't have to get into a political battle about being able to modify it. ;)
16:27 <dolph_> nhm, modifying the ldap backend code?
16:28 <nhm> dolph_: no, the tree.
16:28 <heckj> WormMan: stick with Ubuntu 10.10 for now if you're doing a package based install
16:28 <dolph_> nhm, ah :)
16:28 <heckj> nhm: been there!
16:29 <nhm> dolph_: The group here that controls the ldap servers is rather protective of their turf. :)
16:29 <dolph_> nhm, as for dashboard, that's coming up pretty soon... keystone will probably go core right before dashboard does, which means dashboard will have to support keystone pretty quick
16:30 <nhm> heckj: indeed. Actually, that was why radius was suggested. Apparently you can do some kind of proxying or something and pass through to an existing ldap server without touching it.
16:31 <nhm> dolph_: any rough estimates for when that might happen?
16:31 <dolph_> nhm, rough estimate... i could see it happening within 6 weeks
16:31 <dolph_> nhm, it's a discussion we'll probably start up next week
16:31 <nhm> dolph_: excellent. We got a grant to deploy openstack. :)
16:32 <dolph_> nhm, nice! grats
16:32 <nhm> dolph_: Thanks, two year project. about $170k for hardware.
16:32 <dolph_> nhm, O_O
16:32 <nhm> Mostly supporting proteomics and bioinformatics research.
16:32 <dolph_> *woosh*
16:33 <creiht> nhm: cool
16:33 <nhm> dolph_: basically looking for proteins that have certain characteristics that might help with creating new medicines.
16:34 <nhm> or the presence of certain proteins that might help explain why certain illnesses behave the way they do.
16:34 <dolph_> nhm, ah, like folding at home! </geek>
16:35 <jtanner> nhm, what made you choose a "cloud" type infra instead of a traditional compute cluster?
16:35 <nhm> dolph_: yeah, folding basically is simulating how the structure works. This is using other techniques to try to identify proteins, and if there is an unknown one, guess what it does based on how similar it looks to known ones.
16:37 <nhm> jtanner: A lot of bio people are kind of skipping clusters and going directly to amazon, so some of the software is already being distributed via AMIs.
16:37 <jtanner> i see
16:37 <nhm> jtanner: Also, buzzwords win grants. ;P
16:37 <jtanner> nhm, no doubt.
16:38 <nhm> potentially another really cool possibility is being able to preserve the exact environment in which the analytics were done.
16:40 <dilemma> nhm: as in, "store the server" that was used to compute a particular item as a VM image, and when that item becomes interesting later, fire up the server, and you're guaranteed the same computing environment?
16:41 <nhm> dilemma: yeah. Or if you want to reproduce results.
16:42 <nhm> or even just collaborate on research with other people and make sure you are using the same environment even if you are located at different sites.
16:42 <heckj> nhm: when someone is processing folding, is there a lot of IO associated with it (like sequence matching?) or is it mostly internal compute?
16:42 <jtanner> should be ram+cpu
16:43 <mgius> folding@home was all cpu and sometimes heavy ram
16:43 <dilemma> he's not just talking about folding though: https://www.msi.umn.edu/
16:45 <nhm> Yeah, people here do many different kinds of research.
16:47 <jtanner> nhm, any plans for using condor?
16:47 <nhm> jtanner: we've flirted with it off and on.
16:48 <nhm> jtanner: I actually had it set up on some test hardware about a year ago.
16:48 <jtanner> nhm, will your implementation be documented publicly?
16:48 <jtanner> of openstack
16:49 <nhm> jtanner: I'd like to. I think it would be good PR for us.
16:49 <jtanner> and for openstack
16:49 <nhm> jtanner: indeed
16:49 <nhm> jtanner: maybe for whatever vendor we go with too.
16:50 <jtanner> i wonder how much dell charges for their crowbar setup
16:50 <nhm> Lots of work to do though. I need to figure out how to make all of the stars line up right regarding Auth, PCIe passthrough (for GPU nodes), Storage, hardware, etc etc.
16:51 <nhm> jtanner: I just emailed our Dell rep this morning. Waiting to hear back.
16:51 <jtanner> are you guys okay with being restricted to ubuntu?
16:51 <uvirtbot> New bug: #825338 in swift "Existing "swift" user modified on package install" [Undecided,New] https://launchpad.net/bugs/825338
16:52 <nhm> jtanner: I've fought that battle a bit. We'll use Ubuntu for the nodes, and I'll probably provide SL6.1 or CentOS6 as a VM option.
16:52 <nhm> Though the folks at the Mayo Clinic are going to be building VM images as part of this grant too.
16:52 <jtanner> it shouldn't be too hard to slap rhel/cent into crowbar
16:52 <dilemma> yeah, SL6 / CentOS6 support would have made my life a lot easier as well
16:52 <jtanner> it seems like most of the barhandles already have deb+rpm support
16:53 <jtanner> i haven't tried it though
16:53 <BK_man> nhm: if your nodes will be from top vendors you probably need RHEL instead of Ubuntu
16:53 <nhm> jtanner: you've definitely looked into it more than I have. My stuff is all Ubuntu+Kickstart+Puppe
16:53 <nhm> s/Puppe/Puppet
16:53 <BK_man> nhm: how many nodes is your target?
16:54 <nhm> BK_man: depends how many fat nodes we need. Probably 24-32ish.
16:54 <BK_man> dilemma: you mean SL/RHEL for hosts or for instances?
16:54 <dilemma> hosts
16:54 <nhm> BK_man: I'd like to have heterogeneous nodes to support different workloads.
16:54 <BK_man> dilemma: http://yum.griddynamics.net/yum/diablo-3/openstack
16:54 <BK_man> dilemma: just passed the last QA session. Ready to install and use
16:55 <kbringard> quick ? about swift
16:55 <BK_man> nhm: which tool will you use for deployment of the OS and OpenStack on bare metal?
16:55 <kbringard> the docs say:
16:55 <kbringard> Publish the local network IP address for use by scripts found later in this documentation:
16:55 <dilemma> BK_man: QA by griddynamics?
16:55 <kbringard> export STORAGE_LOCAL_NET_IP=10.1.2.3
16:55 <kbringard> export PROXY_LOCAL_NET_IP=10.1.2.4
16:55 <BK_man> dilemma: yep.
16:55 <kbringard> I'm not sure what each of those is
16:56 <uvirtbot> New bug: #825344 in openstack-ci "project watches incorrectly added to gerrit" [Undecided,New] https://launchpad.net/bugs/825344
16:56 <nhm> BK_man: For my test cluster I'm using tftpboot/kickstart/puppet. Thinking of using cobbler to tie it together but it's working pretty well as is.
16:56 <dilemma> BK_man: sadly, "official" support for RHEL was required by the decision-makers around here for us to use it
16:57 <BK_man> nhm: you will need a hw management solution. For RHEL-based OSes I recommend xCAT http://xcat.sf.net
16:57 <nhm> BK_man: We use xCAT on some of our clusters. It's alright.
16:58 <BK_man> dilemma: just use our RPMs. I got a report that they work on SL 6.1 without major issues
16:58 <BK_man> nhm: if your HW is IPMI 2.0
16:58 <nhm> BK_man: we are currently a SLES shop, but will probably be moving away from that at some point.
16:58 <BK_man> nhm: IPMI 2.0 compliant, you should have no problems installing your nodes using xCAT
16:58 <nhm> sadly our UV1000 will probably be SLES for its whole life.
16:59 <dilemma> I've already set up a small test cluster using your RPMs. I was forced to re-kick them with ubuntu due to concerns with continued RHEL support.
17:00 <BK_man> dilemma: switch to SL 6.1
17:00 <BK_man> dilemma: we're going to support it in the near future (1 month)
17:00 <dilemma> "we" being upstream openstack?
17:00 <nhm> Hopefully government funding cuts won't affect SL.
17:01 <BK_man> we means Grid Dynamics
17:01 <nhm> One of my coworkers used to work for Fermi. He's expressed concern.
17:02 <BK_man> I'm employed by Grid Dynamics and we support the RHEL builds of the OpenStack projects - Nova, Swift, Glance and now Keystone and Dashboard
17:02 <HugoKuo> is keystone easy to implement?
17:02 <lorin1> nhm: We're using OpenStack on a cluster with a UV100 and some nodes with GPUs.
17:03 <BK_man> HugoKuo: not yet, but we're working to fix that soon :)
17:03 <nhm> lorin1: wow, that's crazy. :)
17:03 <dolph_> HugoKuo, if it's not, let me know :)
17:03 <nhm> lorin1: One of our UV100s has GPUs connected to it. A recent kernel upgrade broke GPU access inside cpusets. :(
17:03 <dilemma> BK_man: right. I've been through your documentation, and see that your company contributes significantly to openstack, and whatnot. But the decision makers here are concerned with the fact that RHEL is not supported upstream.
17:03 <nhm> lorin1: how many cores in your UV100?
17:04 <lorin1> nhm: 128
17:04 <dilemma> In any case, we're too late in the deployment stage to switch our host OS now
17:04 <HugoKuo> dolph_: ok... but i'm on vacation until the end of this month XD
17:04 <lorin1> nhm: We were running SLES but recently upgraded to RHEL 6.1.
17:04 <nilsson> lorin1, are you using xen or kvm?
17:04 <nhm> lorin1: Yeah, I saw that they are supporting RHEL now, but I think there is a limit on the number of cores.
17:04 <BK_man> dilemma: upstream is interested in RHEL RPMs too.
17:04 <dolph_> HugoKuo, keystone will be easier by the end of the month!
17:04 <lorin1> nilsson: We were using kvm, we're playing with lxc right now.
17:05 <kbringard> BK_man: does that mean you're supporting Cent 6, or just RHEL and my mileage may vary?
17:05 <BK_man> kbringard: we're going to support CentOS also. So, RHEL, SL and CentOS _at least_
17:06 <lorin1> nhm: There is a guy on our team working on the issue of supporting a higher number of cores. I think he had to switch to a newer kernel.
17:06 <HugoKuo> dolph_, that's great XD
17:06 <lorin1> nhm: How many cores in your UV1000?
17:06 <nhm> lorin1: our UV1000 has 1104 and our UV100s have 72 each.
17:06 <BK_man> kbringard: but we need to wait for CentOS 6.1 to release since it contains the newer libguestfs which we need
17:06 <kbringard> yea, makes sense
17:06 <kbringard> by "support", are you contracting out support, or do you just mean you're going to keep building packages?
17:07 <WormMan> ahh, centos 6.1... AKA 2012
17:07 <YorikSar> dolph_: Hello. Have you seen my mail? Does that change look good?
17:07 <kbringard> WormMan: if we're lucky
17:07 <WormMan> I like Ubuntu, but it's also a bit annoying. 10.04 has LTS, but no newer/faster/better performing KVM. Newer stuff, shorter support, better performance.
17:08 <BK_man> kbringard: we're doing both. Build packages for the community and provide contractual support for our customers. Drop a message to cloudservices@griddynamics.com if you have a business request.
17:08 <kbringard> WormMan: yea, but you'll always have the newest <insert obscure package name here>
17:08 <kbringard> BK_man: awesome, thanks
17:08 <nhm> WormMan: Yeah, the last LTS is getting a bit long in the tooth.
17:09 <WormMan> of course, with the rapid pace of openstack, I'm not sure about trying to keep anything stable
17:10 <BK_man> WormMan: I prefer SL 6.1 rather than CentOS for now :)
17:11 <WormMan> we're a CentOS shop, suggesting Ubuntu for anything was already met with torches and pitchforks :)
17:12 <nhm> WormMan: I know how that goes. :)
17:12 <kbringard> so were we
17:13 <dilemma> WormMan: exactly what happened around here
17:13 <nhm> WormMan: though realistically Ubuntu and clusters historically haven't mixed well. For openstack though, it's a different story.
17:14 <WormMan> I've been doing Linux for a long time, I don't much care what OS it is... as long as it's not SuSE :)
17:14 <nhm> WormMan: that's what we run. ;)
17:14 <WormMan> (or SLS)
17:14 <WormMan> (or MCC Interim)
17:14 <nhm> slackware cluster. ;)
17:15 <creiht> gentoo!
17:15 <dilemma> Interestingly, since I'm only deploying openstack swift, I shouldn't have any problem slowly rekicking the cluster with a new OS in the future. Oh god, they may ask me to do that when CentOS 6.1 is out and supported
17:15 * creiht hides
17:15 <creiht> :)
17:15 <kbringard> so can anyone explain to me what the storage local net and the proxy local net IPs for swift are supposed to be?
17:15 <nhm> dilemma: you can have 100% uptime!
17:16 <creiht> kbringard: a bit of a carry over from rackspace
17:16 <creiht> kbringard: is this for stats
17:16 <creiht> ?
17:16 <dilemma> nhm: that's true. I could preserve the XFS drives on the storage nodes, and not even resync any data
17:16 <kbringard> not exactly… I'm just diving into setting up swift for the first time
17:16 <YorikSar> dolph_: Oh, sorry. I've opened the other one. I see your lgtm under that, which I linked to in the email
17:16 <kbringard> and I get the whole internal/external network thing
17:17 <creiht> oh, from the docs
17:17 <kbringard> yea
17:17 <kbringard> I'm just not sure what parts of my setup each of those corresponds to
17:17 <YorikSar> dolph_: Can you look at change 222 as well, please?
17:18 <dolph_> YorikSar, i've been in a meeting all day - 'import ldap' looked great (don't know why i had problems with it!), and i haven't had a chance to look at your other changes yet
17:18 <YorikSar> dolph_: We just hope that these changes can land in master this week so that we can tell the world how to use it.
17:18 <creiht> kbringard: so usually for a swift cluster we set up the storage nodes on a private network that isn't accessible from the outside
17:19 <creiht> so when it mentions the STORAGE_LOCAL_NET_IP, it is talking about the IP of that storage node on that private network
17:19 <dolph_> YorikSar, give me a couple hours, max
17:19 <kbringard> ahhh, ok, so that will vary from node to node
17:19 <creiht> right
17:20 <kbringard> and the proxy_local_net will be the same, since there is only one proxy
17:20 <kbringard> proxy_local_net_ip I mean
17:20 <creiht> the PROXY_LOCAL_NET_IP is the public network IP for the proxy (or the load balancer VIP if you are going to have several proxies)
17:20 <YorikSar> dolph_: Thanks a lot for your help.
17:20 <creiht> yeah
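
Putting creiht's explanation together, a minimal sketch with illustrative addresses: STORAGE_LOCAL_NET_IP differs on every storage node, while PROXY_LOCAL_NET_IP is the proxy's public-facing IP (or the LB VIP):

    # on each storage node: that node's own IP on the private storage network
    export STORAGE_LOCAL_NET_IP=10.1.2.11    # 10.1.2.12 on the next node, and so on
    # on the proxy: the public-facing IP, or the load balancer VIP with several proxies
    export PROXY_LOCAL_NET_IP=203.0.113.10
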
17:21 <dolph_> YorikSar, thanks for your contribs!
17:23 <kbringard> creiht: sweet, thanks, that helps a lot
17:24 <creiht> cool
17:24 <kbringard> the doc is a little confusing because they're on the same broadcast domain
17:24 <creiht> heh
17:24 <creiht> good point
17:25 <kbringard> so I was like "If one is internal and one is external… why are they 1 IP apart"
17:25 <kbringard> no worries though, this makes more sense now, thanks again
17:25 <dilemma> anyone have any input on putting anycast in front of openstack swift instead of a load balancer?
17:27 <creiht> pandemicsyn: -^ ?
17:27 <creiht> dilemma: at that point you are getting a bit outside my areas of expertise :)
17:28 <kbringard> creiht: since I'm just doing this for testing and it's all internal, I can make those the same, right?
17:29 <creiht> kbringard: sure
17:29 <kbringard> OK, cool
17:29 <kbringard> didn't know if that'd make it crap itself or something
17:29 <creiht> shouldn't
17:29 <kbringard> rad, I'll let you know
17:29 <kbringard> haha
17:29 <redbo> why would you put anycast in front of swift?
17:30 <dilemma> to avoid the expense of a load balancer (and its correspondingly massive throughput requirements)
17:31 <dilemma> and, depending on network topology, reduce traffic between some switches within or between some data centers
17:32 <dilemma> if swift had a mechanism to retrieve the nearest copy, I could improve that even further
17:38 <redbo> Oh, yeah it might make sense to do anycast if you have multiple DCs. I don't know if anyone's tried that though.
17:39 * exlt has thought about it ;-)
17:40 <exlt> when talking about PUTs, the various data centers would need to talk to a central auth for validation, or auth could also be anycasted
17:40 <dilemma> central auth exists, in my case
17:41 <exlt> and if that central auth is not available... anycast is useless
17:41 <dilemma> I'm actually writing the WSGI middleware for it at this exact moment
17:42 <redbo> I don't think auth is really enough traffic to worry about usually.
17:42 <exlt> traffic is not the problem with anycast - getting authenticated in each data center is
17:43 <dilemma> so long as sessions are stored outside of the proxies, you're fine
17:43 <dilemma> and if the proxies are storing your session data locally, you're doing it wrong
17:43 <exlt> anycast only for public reads would be the best use case, I think
17:44 <dilemma> What specifically would be the problem authenticating with a centralized auth system from proxies behind anycast?
17:44 <redbo> I don't see how anycast and auth really interact.
17:45 <exlt> writing to one location and replicating to other locations keeps things simple
17:45 <dilemma> redbo: I agree.
17:46 <redbo> other than... maybe if you're really worried about latency, you could replicate your user db at every anycast endpoint.
17:46 <redbo> or every DC rather
17:47 <dilemma> or just cache sessions and user data in memcached on the proxies
17:48 <redbo> well sure
17:53 <exlt> my thought for an example: bigsite.org uses anycasted swift (2 locations) with one auth (location A) to store user photos - location A becomes unavailable (for whatever reason) - the one auth is no longer reachable, so all uploads fail
17:54 <exlt> caching only helps for a previously authed user in location B
17:55 <exlt> so not all uploads fail, but any new attempts will
17:55 <exlt> if this is acceptable, that's cool
17:56 <exlt> otherwise, each location should have a local auth, also anycasted - it all depends on how bulletproof it needs to be
17:56 <dilemma> ahh, right. So you're saying auth availability becomes a concern
17:56 <exlt> availability is the secondary reason for anycast - the first is getting closer to the user
17:56 <dilemma> Yeah, our auth system here is entirely separate, and has its own redundancy
17:57 <dilemma> For my purposes, it's safe to assume that auth is always up. It's a separately managed system that I'm integrating with.
18:00 <dilemma> and around here, the auth system is far more important than the openstack swift deployment. If auth goes down, people are getting called at 4am on christmas morning to fix it.
18:00 <exlt> been there - I completely understand :-)
dilemmaunfortunately, some of the benefits of anycast will be lost, due to the fact that the proxies pick a random copy from the storage nodes to serve up18:02
dilemmaif my nodes and proxies are scattered across data centers, the initial request will go to the nearest proxy, but the copy of the requested data could come from any node18:02
dilemmacreiht or anyone: know if there's plans to have the proxies consider network topology before pulling copies?18:03
redboI don't think hacking something like a preferred zone list into the ring would be all that epic.18:03
dilemmaexactly what I was just about to suggest18:04
dilemmazone priority, on a per-proxy basis18:04
creihtdilemma: we have talked about it before18:04
creihtbut I don't think we have gone much beyond that, because for the moment it works well enough without the added complexity :)18:05
redboit would really just have to order the nodes it returned so the one in the preferred list is first.18:05
dilemmawould a solid code contribution be a good motivator for that discussion?18:05
creihtdilemma: certainly18:05
dilemmaif you can point me to a couple of integration points, I could probably get the go-ahead from my employer to look at it18:06
dilemmacross-dc traffic was a big topic of discussion here18:06
dilemmaif it has to reach into too much code, probably not though. If it's a fairly clean modification (replace a single object? hell yeah.) I don't see why not.18:08
redboI think it could be as simple as a new config option for preferred zones, and a sort call where there's currently a shuffle to fakey load balance between object nodes.18:08
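A sketch of what redbo describes, assuming a hypothetical read_preferred_zones option; the function and option name are illustrative, not actual swift code:

    import random

    def order_nodes(nodes, preferred_zones):
        # nodes: dicts from the ring, each with a 'zone' key.
        # preferred_zones: ordered list of zone ids, most preferred first.
        random.shuffle(nodes)  # current behavior: spread reads randomly
        if not preferred_zones:
            return nodes
        rank = dict((zone, i) for i, zone in enumerate(preferred_zones))
        # Stable sort: preferred zones float to the front; the shuffled
        # order is preserved within each group, so unconfigured proxies
        # behave exactly as before.
        nodes.sort(key=lambda n: rank.get(n['zone'], len(rank)))
        return nodes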
*** mattray has joined #openstack18:09
dilemmaYeah, I could see that. Wow, maybe I could have a patch ready by the end of next week. I'll be talking to my employer on Monday.18:09
*** dprince has quit IRC18:11
*** dolph_ has joined #openstack18:11
*** dprince has joined #openstack18:13
dilemmacreiht: if it were approved, how quickly does stuff like this make it into the official packages?18:14
dilemmaI'm currently using the 1.3 ppa for ubuntu18:15
creihtdilemma: usually pretty quickly, but not sure what the deadline is for the last release before diablo18:15
creihtnotmyname: ?18:15
redbothe official packages?  I've identified a potential problem.18:16
notmynameone more release (1.4.3) around the end of this month18:16
notmyname1.4.3 will be diablo18:16
notmynamedilemma: creiht: redbo: I haven't been following along...18:17
notmynamewhat's the tl;dr?18:17
*** kashyap_ has quit IRC18:17
dilemmaI'm interested in submitting a patch to allow per-proxy zone priority in swift18:17
notmynameok. cool18:18
dilemmaand wondering if I can get it into official packages fast enough to use in my deployment without worrying my employer about using a patched openstack18:18
notmynamedilemma: so proxy one writes to zones c, b, and a while proxy two writes to b, a, and c?18:18
dilemmareads are the primary concern18:18
redbodilemma: I don't know, the Release PPA might be okay.18:19
dilemmaI don't want to change the way it distributes data18:19
notmynamedilemma: so (as redbo points out) "official" packages are a curious case18:19
dilemmanot so official then?18:20
notmynamedilemma: all swift releases are production-ready. currently, our milestones are different than the nova/glance milestones18:20
*** tjikkun has joined #openstack18:20
notmynameeach swift release (milestone or otherwise) is "official" from a swift perspective (as in, ready for production at scale). but there is only one official openstack release every 6 months18:21
*** herman__ has quit IRC18:21
notmynamebecause of the different nature of openstack packages, we (Rackspace) maintain our own swift packaging that we use for production (https://github.com/crashsite/swift_debian)18:21
notmynamefor the last 6 months (openstack's diablo cycle) we have tried to ensure that we release an official swift version every time we release in production. we've done pretty well, but it's not a 100% match18:23
notmynamedilemma: so, all that being said, even if your patch doesn't make it for diablo, it can be in an "official" swift release soon after (so you can run an unpatched swift install)18:24
dilemmawell damn. That changes the game for me a bit. I was avoiding newer versions of swift, and lamenting the fact that I was probably going to miss diablo for my deployment.18:24
notmynamereally, it comes down to where you install from18:24
dilemmayeah, I wasn't aware of that repository18:25
nhmHeh, I'm still on cactus18:25
notmynameit's not a secret, but also not something we trumpet around in the openstack world :-)18:25
*** clauden_ has joined #openstack18:25
nhmHow are the diablo releases feeling so far?18:25
* dilemma updates his dev cluster18:26
dilemmamy qa team is about to get some extra work :)18:26
*** vernhart has quit IRC18:26
notmynameheh18:26
nhmdilemma: you have a QA team?  wow18:27
nhmdilemma: I sometimes have an undergraduate student if I'm lucky. ;)18:27
*** herman_ has joined #openstack18:27
*** allsystemsarego has quit IRC18:28
dilemmaand a dev team... too bad they're all perl guys18:28
nhmbut then their souls get crushed and we must feed on others.18:28
dilemmawhich is why I'm tasked with the auth middleware18:28
nhmdilemma: I'm an old perl guy. ;)18:28
* nhm strains his back and yells about kids and lawns18:29
dilemmaI once tried perl. Then my soul was crushed, and the dev team had to feed on others.18:30
notmynamedilemma: the crashsite repo is what we use for cloud files at rackspace18:30
dilemmanotmyname: that's great. I'll be switching everything over. I'll probably also be submitting a patch for per-proxy zone read preferences18:33
dilemmaas in, an optional preference that, if present, determines the order in which zones are preferred when retrieving an object from the storage nodes18:34
dilemmayou'll set weights on zones or something, and randomly choose from identical weights, with the default being all zones have identical weights18:35
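That weighting scheme might look roughly like this; zone_weights is a hypothetical per-proxy mapping, and any zone missing from it gets the default weight:

    import random

    def order_by_zone_weight(nodes, zone_weights, default=1.0):
        # Read from higher-weight zones first; ties are broken randomly,
        # so the all-equal default reproduces today's plain shuffle.
        return sorted(nodes,
                      key=lambda n: (-zone_weights.get(n['zone'], default),
                                     random.random()))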
dilemmawould that have a chance of being accepted?18:35
notmynamedilemma: yes :-)18:36
notmynamedilemma: make sure it has some docs and unit test coverage. and the code (not the tests) needs to pass pep818:36
*** redkilian has quit IRC18:36
*** mattray has quit IRC18:36
dilemmaI can do that. My employer would have to approve the time I spend on it, of course.18:37
dolph_nhm: still around?18:38
notmynamedilemma: we look forward to seeing it :-)18:38
nhmdolph_: yep18:38
dolph_nhm, just heard some news... dashboard is working on keystone integration *right now*18:38
nhmdolph_: sweet! :D18:39
YorikSardolph_: Hasn't Dashboard had Keystone for like a month already?18:40
dolph_YorikSar, no clue - i heard this week that it was starting soon... and was just corrected18:40
dolph_YorikSar, i'd be curious to know how much progress they've made18:41
YorikSardolph_: Actually, the reason I started to work on LDAP in Keystone is that the trunk version of Dashboard mandates Keystone auth18:41
nhmYorikSar: Does LDAP in keystone require write access?18:43
dolph_YorikSar, oh awesome18:43
dolph_nhm, (require write access to what?)18:44
YorikSarnhm: Yes, of course.18:44
YorikSarnhm: Well, for admin interface at least.18:44
YorikSarnhm: The current version of the LDAP backend is not complete because it cannot work with existing user and tenant trees18:45
dolph_YorikSar, does one of your changes fix that?18:46
YorikSarI'm going to fix this on Monday18:46
YorikSardolph_: No, my change just forces it to work correctly with the assumption that it has a separate playground somewhere in LDAP.18:47
*** ewindisch has quit IRC18:48
*** katkee has joined #openstack18:48
nhmYorikSar: Yeah, I'm trying to avoid having a fight with our LDAP admins. ;)18:48
YorikSardolph_: Oh, I see you pushed the second part to Jenkins, thanks!18:49
dolph_YorikSar, /salute!18:49
YorikSarnhm: As soon as Jenkins finishes merging this change into master, we'll publish a blog post about how to make it work just as it is right now.18:50
*** alandman has quit IRC18:50
*** mattray has joined #openstack18:53
*** rfz_ has quit IRC18:54
*** dprince has quit IRC18:57
*** huslage has quit IRC18:59
*** anotherjesse has quit IRC18:59
*** lvaughn_ has quit IRC19:01
*** lvaughn has joined #openstack19:01
*** mgoldmann has quit IRC19:03
*** lvaughn_ has joined #openstack19:05
*** mrjazzcat-afk is now known as mrjazzcat19:06
uvirtbotNew bug: #825419 in glance "Functional tests for private and shared images" [Undecided,New] https://launchpad.net/bugs/82541919:06
*** rnorwood has joined #openstack19:08
*** lvaughn has quit IRC19:08
*** lvaughn_ has quit IRC19:08
*** dragondm has quit IRC19:09
*** lvaughn has joined #openstack19:09
*** mrjazzcat has left #openstack19:13
kbringardanyone know how compatible the swift S3 api is at this point?19:13
*** stewart has quit IRC19:13
creihtkbringard: it should work for basic functionality19:13
creihtIt doesn't support things like ACLs19:14
kbringardare there plans for it to?19:14
creihtIt starts getting a little fuzzy there, since ACLs are a bit different between the two19:15
kbringardsorry if this is documented somewhere, I googled around and surfed the docs for a bit but didn't see anything19:15
creihtof course patches are welcomed :)19:15
kbringardhehe, of course19:15
creihtIt could use some work19:15
creihtI changed responsibilities right after getting that in19:15
creihtso it hasn't been updated a whole lot, and I never got a chance to add better docs19:16
creihtkbringard: http://swift.openstack.org/misc.html#module-swift.common.middleware.swift319:16
creihtIs the best at the moment19:16
kbringardoh, awesome, thanks19:16
annegentleand here's as far as I got with documenting it: http://docs.openstack.org/trunk/openstack-object-storage/admin/content/configuring-openstack-object-storage-with-s3_api.html19:17
annegentlecreiht: I'll have to compare mine to yours, yours is better I'm sure :)19:17
kbringardthat sounds like a game we used to play on the playground19:17
creihtannegentle: oh cool, didn't realize you had snuck that in, thanks! :)19:17
* annegentle steals creiht's19:17
kbringardthank you both, this is super helpful19:18
creihtannegentle: I think they could be merged19:18
annegentlecreiht: yeah looks like it19:18
creihtI would leave the s3curl comments in there19:18
creihtas that has come up several times19:18
creihtkbringard: another small thing is that you have to use the old-style API naming19:18
creihti.e., you can't reference the bucket in the domain name19:18
creihtsome tools don't support that19:19
creihtor at least reported so19:19
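For example, boto can be pointed at the swift3 middleware as long as it is told to use path-style requests (bucket in the URL path rather than the domain name); the host, port, and credentials here are placeholders:

    from boto.s3.connection import S3Connection, OrdinaryCallingFormat

    conn = S3Connection(aws_access_key_id='account:user',
                        aws_secret_access_key='secret',
                        host='swift-proxy.example.com',
                        port=8080,
                        is_secure=False,
                        # old-style naming: the bucket goes in the path
                        calling_format=OrdinaryCallingFormat())
    for bucket in conn.get_all_buckets():
        print bucket.name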
creihtkbringard: if you run into any glaring issues, let us know, or if there are features you would like at least submit a bug report19:20
creihtMy current work responsibilities don't provide the time to work on it anymore :/19:21
kbringardcreiht: sure thing, and we'll submit patches if/when we can too19:22
creihtkbringard: there have been a couple of contributors to it, so if you file a bug, there is a small chance they may pick it up19:23
*** dendrobates is now known as dendro-afk19:25
*** johnmark has left #openstack19:31
*** Ryan_Lane has joined #openstack19:35
kbringardcreiht: does swift work with keystone and/or are there plans to add it (if not)?19:36
kbringardI would have to assume yes...19:36
notmynamekbringard: actually, it's the other way around19:36
kbringardoh?19:36
kbringardkeystone needs swift support?19:37
notmynamekbringard: keystone needs to work with swift. there is limited support there now19:37
dolph_notmyname, what does keystone have to do to support swift, specifically??19:37
kbringardah, cool… so you pass your universal creds to keystone, which talks to swift and authorizes you and returns your token19:37
notmynamekbringard: but it's not "production ready" (for example, it doesn't tie auth tokens to swift accounts, so any valid token gets authorized to do anything to any account)19:37
notmynamekbringard: right19:37
kbringardso I guess there'd have to be a section in your ldap (or whatever auth you're using) that has your keys for keystone to pull out19:38
kbringard?19:38
notmynameI don't know the answer to that19:38
kbringardhmm, an interesting problem to be sure19:38
notmynamedolph_: the framework is there. just some...oddities (see my above comment to kbringard)19:39
*** reed has quit IRC19:40
nilssonare there any current billing apps that integrate with openstack?19:40
YorikSarnhm: Here, the post is finally ready http://mirantis.blogspot.com/2011/08/ldap-identity-store-for-openstack.html19:40
*** msivanes has quit IRC19:41
*** msivanes has joined #openstack19:41
kbringardnotmyname: thanks for the info19:43
YorikSarnotmyname: I'm confused. What does Keystone need specifically for Swift that it doesn't already have for Nova?19:45
dolph_notmyname, if i understand you, it's swift's responsibility to map keystone users to swift accounts, using keystone roles and keystone tenants19:45
YorikSardolph_: I think so too.19:45
notmynamedolph_: in the auth middleware (that ships with keystone)?19:46
notmynamedolph_: why does swift need to keep track of keystone users?19:47
*** bcwaldon has quit IRC19:47
dolph_notmyname, i haven't looked much at the middleware, but the middleware doesn't do much afaik... it's all done through the keystone admin api19:47
YorikSarnotmyname: But eventually this middleware should land in each project separately, shouldn't it?19:47
*** jdurgin has quit IRC19:47
*** carlp has quit IRC19:49
nhmYorikSar: that's great19:49
nhmYorikSar: Now I need to upgrade and actually try all of this out.19:49
notmynameYorikSar: actually, I think it belongs in the auth system. for example, if I have a different auth system, that other system should provide the swift/nova/etc glue with it19:50
YorikSarnotmyname: But I thought it was the service's responsibility to work with the common default auth system.19:52
notmynamedolph_: as an example, set up swift with keystone. get an auth token. use that token for _any_ swift account and it works. keystone is returning "authorized" regardless of the swift account19:53
YorikSarnotmyname: And by the way how can we have a complete package for a service without the auth middleware?19:54
dilemmaYorikSar: because when using that service stand-alone, you'll almost certainly need your own auth middleware19:55
notmynamewhat dilemma said :-)19:55
* dilemma is doing exactly that19:56
dolph_notmyname, can you write a functional test in keystone.test.functional, and open a gerrit review with it along with an issue?19:56
notmynamewe have many servers that either don't have auth at all or have different auth systems19:56
notmynamedolph_: ya (FWIW, this is documented in your README in the swift section)19:56
dolph_notmyname, hmm... i see the comment, but i don't know how that's possibly true today19:58
dolph_notmyname, if it is true today, it's a serious bug on either swift or keystone's end19:59
YorikSardilemma, notmyname: For example, Nova requires at least Glance to work with images. Why don't we make Keystone such a default as well?19:59
notmynamedolph_: agreed :-)19:59
*** jfluhmann has quit IRC19:59
dolph_YorikSar, i think that will happen after keystone is core?19:59
*** jfluhmann has joined #openstack20:00
notmynameYorikSar: I don't think we can make swift depend on keystone20:00
creihtYorikSar: I'm not sure how you can make keystone a hard requirement, because in a lot of places they are going to have their own auth system already20:01
*** tjikkun has quit IRC20:01
creihtfor example, rackspace :)20:01
dilemmaYorikSar: the point here though is that the middleware lives in the Keystone project because when you use any part of the system without keystone, you don't need the middleware, and when you're using it with Keystone, you have the middleware.20:01
creihtdilemma: ++20:01
YorikSardolph_: Well, yes. So people should integrate Keystone with an external custom auth system, not every service separately20:01
dolph_creiht, keystone will be the adapter between openstack and any existing auth system20:01
creihtdolph_: I disagree, keystone is a reference common auth system for openstack20:02
*** reed has joined #openstack20:02
redboThere's no reason to make swift depend on keystone, but if the swift-keystone middleware wants to live in swift's middleware package, I don't think that'd hurt.20:02
dolph_creiht, it's that too20:02
notmynameredbo: I think it makes more sense to stay in keystone. (like swauth)20:03
dolph_notmyname, it doesn't make any sense at all to me to have other project's middleware live inside keystone ;)20:04
*** dirkx_ has joined #openstack20:04
creihtI know!20:04
creihtWe should have a common middleware team that handles it!20:04
creiht;P20:04
dolph_notmyname, they have immediate dependencies on those projects (nova auth middleware imports from nova, for example... and only depends on keystone remotely)20:04
creihtas its own project with PTL and everything20:04
creiht:)20:04
creihtdolph_: maybe a better question is, who is responsible for maintaining the middleware?20:06
dolph_creiht, exactly... and with dependencies on those outside projects, it doesn't make sense for it to live in keystone20:07
*** tjikkun has joined #openstack20:07
dolph_even weirder to me, is that rackspace has proprietary code in the keystone codebase20:08
dolph_("legacy auth"... which is only "legacy" to rackspace, and completely useless to the rest of the world)20:08
*** nerdstein has quit IRC20:08
notmynamedolph_: that does sound odd. we solved that with swift/cloud files by having a separate package/codebase for the internal and legacy stuff20:09
dolph_notmyname, that would work here too20:10
notmyname"rackswift"20:11
dolph_keystone.middleware.nova_token_auth or whatever should be moved to ~ nova.middleware.keystone20:11
dolph_changes like that would certainly make keystone more intuitive for both new developers and for new operators20:12
notmynamedolph_: I don't think we're going to agree on this one :-)20:12
dolph_notmyname, are we disagreeing?20:12
YorikSardilemma: If you have separate Nova, Swift and Keystone nodes, you should not have to install Keystone on each of them just for one middleware file.20:12
creihtwhy not?20:12
dilemmaI don't have Nova or Keystone at all...20:12
creihtapt-get install keystone-middleware20:13
creiht:)20:13
dolph_apt-get install nova-middleware-keystone20:13
creihtdolph_: so back to my earlier question, who is going to write/maintain this middleware20:13
YorikSarSo we should package each middleware separately?20:13
*** jtanner has joined #openstack20:13
creihtincluding when the keystone featureset changes?20:13
YorikSarWill there be separate packages for Quantum integration points in Nova?20:14
creihtYorikSar: I would just say one20:14
dolph_YorikSar, either separately or with each independent project, but not with keystone20:14
*** mattray has quit IRC20:14
creihtlike there is a mysql-client package if you don't want the whole db20:14
dolph_creiht, each independent project20:14
*** manish has quit IRC20:14
*** dendro-afk is now known as dendrobates20:15
dilemmathat's an excellent point YorikSar.  Having the middleware in each project would work from both a dependency and installation perspective20:15
dolph_dilemma, ++20:15
*** gnu111 has quit IRC20:16
dolph_creiht, the middleware is fairly simple in terms of keystone featureset... and i don't think it would change much in the long run, if at all20:17
YorikSarAnd one more thing. If we're going to keep Keystone auth middleware out of Nova, we have to either keep the existing Nova auth system or let it live without any auth at all. Both possibilities are bad, I guess.20:18
dolph_creiht, in general, all the middleware does is direct unauthenticated requests off to keystone for authentication, and then validate that authentication when the user comes back20:18
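In rough outline, that flow might look like the sketch below; this is not the actual keystone middleware, and the /v2.0/tokens validation path and admin-token header are assumptions:

    import httplib

    class TokenAuth(object):
        # WSGI middleware sketch: pass through requests whose token
        # validates against keystone's admin API, reject everything else.
        def __init__(self, app, keystone_host, keystone_port, admin_token):
            self.app = app
            self.host, self.port = keystone_host, keystone_port
            self.admin_token = admin_token

        def _valid(self, token):
            conn = httplib.HTTPConnection(self.host, self.port)
            conn.request('GET', '/v2.0/tokens/%s' % token,
                         headers={'X-Auth-Token': self.admin_token})
            status = conn.getresponse().status
            conn.close()
            return status == 200

        def __call__(self, environ, start_response):
            token = environ.get('HTTP_X_AUTH_TOKEN')
            if token and self._valid(token):
                return self.app(environ, start_response)
            start_response('401 Unauthorized', [])
            return ['']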
*** Gordonz has quit IRC20:18
dilemmaif the outward-facing keystone API is fairly simple, that makes sense. The internal APIs used to write the middleware for each of the systems are likely much more complex.20:19
dolph_YorikSar, i think nova should be configured out of the box with keystone middleware, but if you opt to remove it, there is no longer any authentication against nova20:19
creihtdolph_: I know how auth middleware works :)20:19
dolph_creiht, well... i don't. so ha.20:19
dolph_anyway, is anyone against moving auth middleware out of keystone and into each discrete project? it would make sense to me for keystone to provide/support a very simple sample middleware20:21
*** tjikkun has quit IRC20:21
*** jtanner has quit IRC20:22
YorikSardolph_: But why is Keystone so special? Why will Nova always have Glance and Quantum integration modules, but not the Keystone one? I don't see any reason why integration with default complements should be kept out of a service's codebase or basic package.20:22
notmynamedolph_: opened an issue https://github.com/rackspace/keystone/issues/13920:24
dolph_notmyname, thanks!20:24
*** anotherjesse has joined #openstack20:25
*** elmo has joined #openstack20:28
*** YorikSar has quit IRC20:29
*** anotherjesse_ has joined #openstack20:30
*** YorikSar has joined #openstack20:30
creihtdolph_: I guess you wore me down enough. I would prefer otherwise, but if it has to, that's fine with me20:30
notmynamecreiht: you aren't even on the swift team anymore! ;-)20:31
notmynameyou don't get a vote ;-)20:31
dolph_notmyname, rofl20:31
*** jtran has joined #openstack20:31
jtranhey all.  is nova compatible w/ python2.7 now?  Or is it still supposed to be using 2.6?20:32
redbolast time I checked, we were still in swift-core :P20:32
YorikSarjtran: I use 2.7 on my dev box. Had no issues with that.20:32
notmynamelol, I know, I know. I'm not trying to steal your birthright :-)20:33
*** mfer has quit IRC20:33
jtrancool thx20:33
*** anotherjesse has quit IRC20:33
*** anotherjesse_ is now known as anotherjesse20:33
*** mfer has joined #openstack20:33
*** lorin1 has quit IRC20:35
*** ameade has quit IRC20:35
*** lpatil has left #openstack20:35
notmynameIMO, things in swift should be related to the core storage system: stubs (like tempauth) or system features (like ratelimit, recon, and cache). that's why things like swauth, slogging (and now the origin stuff) are separate projects instead of in swift proper20:35
*** ameade has joined #openstack20:38
*** danishman has joined #openstack20:38
*** dragondm has joined #openstack20:40
YorikSarnotmyname: It looks like tempauth does something more complex than the keystone middleware. So why not keep the latter in the codebase?20:47
*** mfer has quit IRC20:47
*** tjikkun has joined #openstack20:50
*** Dunkirk has quit IRC20:56
creihtnotmyname: lol :)20:57
*** cp16net has quit IRC20:57
*** AhmedSoliman has joined #openstack20:58
*** anotherjesse has quit IRC20:59
*** anotherjesse has joined #openstack20:59
*** bcwaldon has joined #openstack21:00
uvirtbotNew bug: #825493 in glance "Glance client requires gettext module" [High,Triaged] https://launchpad.net/bugs/82549321:01
notmynamedolph_: YorikSar: am I supposed to be using swiftauth or tokenauth? the keystone instructions say tokenauth, but swiftauth seems more appropriate (although I haven't been able to load it yet)21:01
dolph_notmyname, ziad says swiftauth21:02
notmynamedolph_: hmm...so the docs are out of date. any docs on it?21:03
notmynamedolph_: what is keystone_url?21:03
notmynamedolph_: and it seems that if any of these config values are missing it raises exceptions21:04
dolph_notmyname, i'm not aware of any other docs21:04
notmyname(instead of gracefully failing)21:04
dolph_keystone_url should be either the service or admin api...21:05
dolph_notmyname, where is keystone_url defined?21:05
notmynamedolph_: self.keystone_url = urlparse(conf.get('keystone_url'))21:05
*** ewindisch has joined #openstack21:07
notmynamedolph_: hmm...got the proxy loading with swiftauth, but now I'm only getting 401 responses21:07
dolph_there's an open bug that you have to create an admin token through keystone-manage... did you do that?21:08
*** FallenPegasus has joined #openstack21:08
notmynamedolph_: ah. got it. keystone_url needed port 5001, not 500021:08
dolph_notmyname, ah cool - makes sense21:09
notmynamedolph_: perhaps to you ;-)21:09
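For reference, a guess at the proxy-server.conf filter section being debugged here; only the keystone_url option and the admin port (5001, not the service port 5000) come from the conversation, the rest is illustrative:

    [filter:swiftauth]
    use = egg:keystone#swiftauth
    # must point at keystone's admin API port, not the service port
    keystone_url = http://127.0.0.1:5001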
notmynamedolph_: good news! swiftauth doesn't allow access to any account!21:09
dolph_notmyname, YAY! so what's the other middleware for lol21:10
notmynamedolph_: no idea21:10
uvirtbotNew bug: #825489 in openstack-ci "new gerrit build lost documentation links" [High,New] https://launchpad.net/bugs/82548921:10
*** FallenPegasus is now known as MarkAtwood21:12
*** anotherjesse_ has joined #openstack21:14
*** ewindisch has quit IRC21:16
*** anotherjesse has quit IRC21:17
*** anotherjesse_ is now known as anotherjesse21:17
*** lts has quit IRC21:18
*** dendrobates is now known as dendro-afk21:19
*** MarkAtwood has quit IRC21:19
*** msivanes has quit IRC21:19
nhmboo, I found out the C6100s from dell can't allocate all of the drives to a single sled.21:21
nhmI was hoping I might be able to do 3 compute nodes + 1 storage node in a chassis.21:21
*** ejat has joined #openstack21:21
*** carlp has joined #openstack21:23
*** jdurgin has joined #openstack21:27
*** dirkx_ has quit IRC21:27
*** rnirmal has quit IRC21:28
*** PeteDaGuru has left #openstack21:30
*** vernhart has joined #openstack21:31
*** kbringard has quit IRC21:33
*** jtran has quit IRC21:33
*** martine_ has joined #openstack21:41
*** imsplitbit has quit IRC21:43
*** martine has quit IRC21:44
*** grapex has quit IRC21:46
*** jdurgin has quit IRC21:48
*** carlp has quit IRC21:48
*** martine_ has quit IRC21:49
*** carlp has joined #openstack21:49
*** jdurgin has joined #openstack21:50
*** rnorwood has quit IRC21:57
*** obino has quit IRC21:59
*** obino has joined #openstack21:59
*** amccabe has quit IRC21:59
*** bsza has quit IRC22:02
*** jfluhmann has quit IRC22:05
*** MagicFab_ has joined #openstack22:06
*** LiamMac has quit IRC22:06
*** bcwaldon has quit IRC22:09
*** Ryan_Lane has quit IRC22:12
*** danishman has quit IRC22:15
*** eday has quit IRC22:17
*** ncode has quit IRC22:20
*** lborda has quit IRC22:26
*** ldlework has quit IRC22:27
*** eday has joined #openstack22:28
*** ChanServ sets mode: +v eday22:28
*** ats has joined #openstack22:38
atsI have a question about the key displayed on openstack swift:   st -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U group:user -K displayedkey upload upfiles file2.tgz22:40
atsIn this command, if somebody does "ps" on the machine while a long upload is running, the key will be visible to them.  Is there any way to hide the key?22:41
*** rnorwood has joined #openstack22:43
*** bsza has joined #openstack22:44
*** marrusl has quit IRC22:49
*** mattray has joined #openstack22:49
*** mattray has quit IRC22:49
*** mattray has joined #openstack22:49
*** stewart has joined #openstack22:49
atsActually, I see an environment variable called ST_USER in the st script.  I didn't try that.  Let me try.22:50
atsI see that setting the ST_KEY environment variable avoids that. So scratch my question.22:52
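That is, something like the following, which keeps the key out of the argument list that ps shows; ST_USER and ST_KEY are the variables the st script reads, and the values are the same placeholders as above:

    export ST_USER=group:user
    export ST_KEY=displayedkey
    st -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 upload upfiles file2.tgz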
*** mattray1 has joined #openstack22:52
*** vladimir3p has quit IRC22:52
*** mszilagyi_ has joined #openstack22:53
*** bsza has quit IRC22:54
*** mattray has quit IRC22:54
*** mszilagyi has quit IRC22:55
*** mszilagyi_ is now known as mszilagyi22:55
*** mattray1 has quit IRC22:57
*** rnorwood has quit IRC22:58
*** ejat has quit IRC23:01
*** Ryan_Lane has joined #openstack23:03
*** mszilagyi_ has joined #openstack23:04
*** freeflying has joined #openstack23:07
*** mszilagyi has quit IRC23:07
*** mszilagyi_ is now known as mszilagyi23:07
*** anotherjesse has quit IRC23:07
*** anotherjesse has joined #openstack23:07
*** dolph_ has quit IRC23:07
*** Ryan_Lane has quit IRC23:10
*** AhmedSoliman has quit IRC23:21
*** iRTermite has quit IRC23:24
*** iRTermite has joined #openstack23:25
*** ewindisch has joined #openstack23:27
*** fysa has quit IRC23:30
*** npmapn has quit IRC23:31
*** katkee has quit IRC23:36
*** heckj has quit IRC23:36
*** huslage has joined #openstack23:39
*** cereal_bars has quit IRC23:43
*** clauden_ has quit IRC23:46
*** jdurgin has quit IRC23:46
*** carlp has quit IRC23:48
*** carlp has joined #openstack23:49
*** tryggvil___ has joined #openstack23:52
*** ewindisch has quit IRC23:53
*** anotherjesse has quit IRC23:54
*** tryggvil has quit IRC23:56
*** carlp has quit IRC23:56
*** carlp has joined #openstack23:56
*** ats has quit IRC23:56
*** martine_ has joined #openstack23:59
