Thursday, 2011-05-12

*** pguth66 has quit IRC00:00
*** pguth66 has joined #openstack00:00
*** Eyk has quit IRC00:01
*** kenh has joined #openstack00:02
*** pguth66 has quit IRC00:03
*** BK_man has quit IRC00:07
*** jeffjapan has joined #openstack00:14
*** nelson has quit IRC00:14
*** nelson has joined #openstack00:15
*** dragondm has quit IRC00:19
*** dirkx_ has joined #openstack00:19
*** joearnold has quit IRC00:27
*** miclorb has quit IRC00:35
*** zedas has joined #openstack00:36
*** dirkx_ has quit IRC00:42
*** patcoll has joined #openstack00:47
*** patcoll has quit IRC01:03
*** mattray has quit IRC01:06
*** grapex has joined #openstack01:07
nelsonHey!01:08
nelsonhttp://swift.openstack.org/howto_installmultinode.html has been updated for swauth! Well done!01:08
*** miclorb has joined #openstack01:11
gholtnelson: Ah, cool. You all updated now?01:12
nelsonworking on it. I just found howto_installmultinode's auth section. I'll try following it to see if the dox are at all buggy.01:13
gholtGotcha, k.01:13
*** mattray has joined #openstack01:14
*** agarwalla has joined #openstack01:24
* nelson needs to write a program which implements howto_installmultinode and Just Does It(tm)01:27
*** jdurgin has quit IRC01:27
*** mattray has quit IRC01:30
nelsongholt: the h_im instructions say to set up a filter:swauth section in proxy-server.conf, however, proxy-server whinges if it doesn't get a filter:auth section.01:31
gholtYou probably still have auth in your pipeline, it should be swauth now01:31
nelsonindeed, yes, I have a custom pipeline, and missed that, thanks.01:32
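For reference, the pipeline change being discussed would look roughly like this in proxy-server.conf (a minimal sketch for the swift-1.3-era built-in swauth; the super_admin_key value is the one nelson uses later in this session):

```ini
[pipeline:main]
# "auth" (devauth) must be replaced with "swauth" here
pipeline = healthcheck cache swauth proxy-server

[filter:swauth]
use = egg:swift#swauth
# must match the -K passed to swauth-prep / swauth-add-user
super_admin_key = justkidding
```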
*** grapex has left #openstack01:33
notmynamecw: any thoughts on the xfs inode64 mount option? (http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F)01:34
nelsonAuth subsystem prep failed: 401 Unauthorized01:34
gholtanticw: ^^ [in case you don't see the cw]01:35
gholtnelson: That is either the wrong -K or allow_account_management not equalling true for that proxy server.01:36
nelsondon't have allow_account_management true.  Looks like I'm running into all of the 1.1 -> 1.3 changes head-on.01:37
nelsonno, wait, I do.01:38
gholtnelson: The reasoning behind that is that devauth was its own server that could be firewalled separately from the proxy. Now swauth is part of the proxy pipeline, so some like to run a special firewalled proxy with allow_account_management = true and the public proxies with it off.01:38
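A sketch of the split gholt describes, assuming one internal admin proxy plus the public ones (section name per the stock proxy-server.conf):

```ini
# on the firewalled admin proxy only:
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true

# public-facing proxies omit the option (it defaults to false),
# so account create/delete requests are rejected there
```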
nelsonand the -K is definitely correct - cut-n-paste.01:38
gholtHmm. Maybe just tail your log(s), run the prep, and post the logs and see what we see?01:38
nelsonhttp://paste.openstack.org/show/1328/01:40
gholtThat looks pretty weird, gimme a sec to digest. :)01:43
gholtI don't understand why yours is posting at that long path, mine logs POST /auth/v2/.prep01:44
nelsonswauth-prep -A http://alsted.wikimedia.org/auth/ -K justkidding01:44
nelsonshould there be a trailing slash following "auth"?01:45
gholtYeah, that should be fine01:46
gholtit'll add it if you forget it01:46
gholtLooking at the swauth-prep code, it looks like it just does a POST <-A you give>v2/.prep01:47
nelsonoh.... I'll bet my rewriter is getting in the way.01:47
gholtOh!01:47
nelsongimme a sec to take it out.01:47
gholtWell, if it adds v1/AUTH_53189d113690458f9575e97096180ef3/ that would make perfect sense. :)01:48
nelsonhehe, it's proceeding apace....01:48
nelsonI gotta teach my rewriter about the /auth URL. Should be a one-line fix.01:49
gholtI wonder if the auth_prefix option in the swauth filter section would do whatcha want as well.01:50
gholtSomething like auth_prefix = //v1/AUTH_53189d113690458f9575e97096180ef3/auth%25 hehe01:51
gholtEh, that's wrong, but maybe you get the idea. Changing your rewriter might be safer.01:52
*** bluetux has quit IRC01:53
*** mattray has joined #openstack01:53
gholtI'm just not sure if something else would grab onto the /v1 and not let swauth have it.01:53
*** mattray has quit IRC01:54
gholtanticw: The main thing we're wondering is if the lack of the xfs inode64 option is what causes the age-slowdown issues we were talking about at the conference.01:56
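For reference, inode64 (per the XFS FAQ link above) is set as a mount option, e.g. in /etc/fstab; the device and mount point here are illustrative:

```
/dev/sdb1  /srv/node/sdb1  xfs  noatime,nodiratime,logbufs=8,inode64  0 0
```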
nelsonit's not a problem to change the rewriter to just leave any /auth URL alone.01:57
*** dendrobates is now known as dendro-afk02:01
*** dendro-afk is now known as dendrobates02:02
*** adjohn has joined #openstack02:07
*** _adjohn has joined #openstack02:07
*** ike1 has joined #openstack02:08
ike1i can not connect to the lan gateway after i install nova02:08
ike1it shows the ip address as 192.168.122.102:09
ike1but the correct ip should be: 192.168.2.22802:09
*** dprince has joined #openstack02:09
*** lvaughn has quit IRC02:10
ike1i check the /etc/network/interfaces, in this config file, ip address is 192.168.2.22802:10
ike1i want to restart the network, but it says it can not read the /etc/network/interfaces file02:11
*** adjohn has quit IRC02:11
*** _adjohn is now known as adjohn02:11
*** HugoKuo has joined #openstack02:13
*** cloudgroups has joined #openstack02:13
*** cloudgroups has left #openstack02:14
*** freeflyi1g has quit IRC02:22
*** dendrobates is now known as dendro-afk02:22
*** miclorb has quit IRC02:23
*** freeflying has joined #openstack02:24
*** obino has quit IRC02:30
*** obino has joined #openstack02:30
*** Dweezahr has quit IRC02:34
*** Dweezahr has joined #openstack02:35
HugoKuoIs there any flag for auto-associate public IP ?02:39
*** JuanPerez has joined #openstack02:46
*** Juan_Perez has joined #openstack02:46
*** JuanPerez has quit IRC02:46
*** Juan_Perez has quit IRC02:46
ike1HugoKuo: what do you mean?02:52
ike1what is the flag?02:52
HugoKuothe flag in nova.conf02:53
HugoKuoor parameter02:53
ike1i check it02:53
ike1just yesterday everything was correct, i used halt to shut down the server02:53
HugoKuowhile an instance runs up , a private IP binds to the instance ....02:54
ike1today when i power on the server, i find i can not use putty ssh to it02:54
HugoKuohost or guest instance ?02:54
ike1i just install nova, not run instance yet02:54
ike1host02:54
ike1i have not install image02:55
ike1i plan today install a image for guest instance02:55
*** s1cz is now known as s1cz-02:56
ike1i don't understand why the server ip auto-changed to another ip address02:57
ike1in nova.conf dhcpbrige = /etc/bin/nova-dhcpbrige03:00
*** dprince has quit IRC03:12
*** AimanA is now known as HouseAway03:13
*** zenmatt has quit IRC03:13
*** zenmatt_ has joined #openstack03:14
*** pLr has quit IRC03:16
raggi_is there a way to forcibly rebuild novas portion of the iptables rules?03:20
raggi_http://pastie.textmate.org/private/hpkd522cnpzqnagmzuskda03:24
raggi_it's installed rules with /26 instead of /2403:24
*** hadrian has quit IRC03:26
*** adjohn has quit IRC03:29
*** mdomsch has quit IRC03:30
*** adjohn has joined #openstack03:30
HugoKuoike1 , the new ip is your instance network's gateway , it'll bind to flat_interface03:33
ike1HugoKuo: i us vi (nano maybe have problem) edit the interfaces files03:37
ike1# The loopback network interface03:38
ike1auto lo03:38
ike1iface lo inet loopback03:38
ike1auto  eth003:38
ike1iface eth0 inet static03:38
ike1        #bridge_ports eth003:38
ike1        bridge_stp off03:38
ike1        bridge_maxwait 003:38
ike1        bridge_fd 003:38
ike1        address 192.168.2.22803:38
ike1        netmask 255.255.255.003:38
ike1        broadcast 192.168.2.25503:38
ike1        gateway 192.168.2.103:38
ike1        dns-nameservers 202.103.24.6803:38
ike1now i can connect to the server03:38
HugoKuoI guess you rewrite br100 to eth0 , am I right ?03:39
ike1yes03:40
ike1you are right03:40
ike1this will bring some problem?03:41
ike1the server has only one NIC, i use this server in single node mode03:43
HugoKuook03:44
HugoKuotry to set publice_interface =eth0 and flat_interface = eth003:44
ike1in nova.conf?03:45
HugoKuodid you install from Script ?03:45
HugoKuoyes03:45
ike1yes03:46
ike1i use script install mode.03:46
ike1this is my nova.conf :03:46
ike1-------------03:46
ike1--dhcpbridge_flagfile=/etc/nova/nova.conf03:46
ike1--dhcpbridge=/usr/bin/nova-dhcpbridge03:46
ike1--logdir=/var/log/nova03:46
ike1--state_path=/var/lib/nova03:46
ike1--lock_path=/var/lock/nova03:46
ike1--verbose03:46
ike1--s3_host=192.168.2.22803:46
ike1--rabbit_host=192.168.2.22803:46
ike1--cc_host=192.168.2.22803:46
ike1--ec2_url=http://192.168.2.228:8773/services/Cloud03:46
ike1--fixed_range=192.168.2.0/1203:46
ike1--network_size=803:46
ike1--FAKE_subdomain=ec203:46
ike1--routing_source_ip=192.168.2.22803:46
*** rchavik has joined #openstack03:47
HugoKuoike103:48
ike1.03:48
HugoKuoike1 , plz paste on http://paste.openstack.org/  next time03:49
ike1ok,03:49
HugoKuohttp://wiki.openstack.org/FlagsGrouping03:49
ike1http://paste.openstack.org/raw/1329/03:51
ike1can you see what i post ?03:52
HugoKuomy fault ,03:53
HugoKuo--public_interface03:53
HugoKuo--flat_interface03:53
ike1sorry,03:53
HugoKuoyou can check flags group from the link that I post03:53
ike1ok, i read the link03:54
HugoKuoThere'r too many flags .03:54
ike1yes!03:55
ike1but i did not find the --public_interface flag in this link?03:57
ike1in "Configuring Flat DHCP Networking"04:00
ike1i find : --public_interface=eth004:00
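Putting HugoKuo's suggestion together, the flags would be added to /etc/nova/nova.conf alongside the existing ones (eth0 is taken from the interfaces file above; --flat_network_bridge assumes the install script created br100):

```ini
--public_interface=eth0
--flat_interface=eth0
--flat_network_bridge=br100
```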
*** adjohn has quit IRC04:00
*** patri0t has quit IRC04:04
*** gaveen has joined #openstack04:08
*** gaveen has joined #openstack04:08
*** miclorb_ has joined #openstack04:13
*** adjohn has joined #openstack04:20
*** omidhdl has joined #openstack04:22
*** maplebed has quit IRC04:32
*** santhosh has joined #openstack04:37
*** _adjohn has joined #openstack04:39
*** santhosh has quit IRC04:40
*** santhosh has joined #openstack04:40
*** adjohn has quit IRC04:42
*** _adjohn is now known as adjohn04:42
*** kashyap has joined #openstack04:42
*** gaveen has quit IRC04:44
*** gaveen has joined #openstack04:45
*** f4m8_ is now known as f4m804:49
*** dysinger has joined #openstack04:57
*** santhosh has quit IRC04:59
*** santhosh has joined #openstack05:01
*** miclorb_ has quit IRC05:07
*** hagarth has joined #openstack05:08
*** Zangetsue has joined #openstack05:09
*** mattray has joined #openstack05:16
*** miclorb has joined #openstack05:25
*** guynaor has joined #openstack05:27
*** guynaor has left #openstack05:27
*** arun_ has joined #openstack05:30
*** arun_ has joined #openstack05:30
*** zenmatt_ has quit IRC05:34
*** koji-iida has joined #openstack05:34
*** koji-iida has quit IRC05:37
*** ccustine has quit IRC05:39
*** ike1 has quit IRC05:41
*** ike1 has joined #openstack05:48
*** kashyap has quit IRC05:49
*** kashyap has joined #openstack06:06
*** hansin has joined #openstack06:06
*** dysinger has quit IRC06:07
*** hansin has quit IRC06:08
*** hansin has joined #openstack06:10
*** hansin has quit IRC06:15
*** infinite-scale has joined #openstack06:19
ike1Unable to run euca-describe-images.  Is euca2ools environment set up?06:21
*** omidhdl has left #openstack06:22
*** guigui has joined #openstack06:23
*** sebastianstadil has quit IRC06:23
*** miclorb has quit IRC06:25
*** dobber has joined #openstack06:36
*** nerens has joined #openstack06:40
*** keds has joined #openstack06:41
*** obino has quit IRC06:43
*** dysinger has joined #openstack06:46
*** omidhdl has joined #openstack06:47
*** keds has quit IRC06:51
*** miclorb_ has joined #openstack06:54
*** toluene has quit IRC06:57
*** obino has joined #openstack06:58
*** toluene has joined #openstack06:59
*** med_out is now known as medberry07:00
*** hggdh has joined #openstack07:08
*** mattray has quit IRC07:10
*** katkee has joined #openstack07:11
*** Kronick has joined #openstack07:12
*** katkee has quit IRC07:12
*** lionel has quit IRC07:16
*** obino has quit IRC07:16
*** obino has joined #openstack07:17
*** obino has quit IRC07:18
*** rcc has joined #openstack07:18
*** hggdh has quit IRC07:19
*** CloudChris has joined #openstack07:20
*** CloudChris has left #openstack07:20
*** lionel has joined #openstack07:21
*** ChameleonSys has quit IRC07:26
*** ChameleonSys has joined #openstack07:27
*** nacx has joined #openstack07:31
*** CloudChris has joined #openstack07:41
*** CloudChris has left #openstack07:41
*** openstackjenkins has quit IRC07:44
*** openstackjenkins has joined #openstack07:46
*** zul has joined #openstack07:48
*** daveiw has joined #openstack07:53
*** dirkx_ has joined #openstack08:05
*** dendro-afk is now known as dendrobates08:06
*** dirkx_ has quit IRC08:07
*** openstackjenkins has quit IRC08:09
*** openstackjenkins has joined #openstack08:09
*** miclorb_ has quit IRC08:13
*** xavicampa has joined #openstack08:15
*** dirkx_ has joined #openstack08:16
*** katkee has joined #openstack08:18
*** lborda has joined #openstack08:20
*** CloudChris has joined #openstack08:22
*** hggdh has joined #openstack08:23
*** dirkx_ has quit IRC08:23
*** katkee has quit IRC08:24
*** toluene has quit IRC08:27
*** toluene has joined #openstack08:29
*** katkee has joined #openstack08:32
*** dirkx_ has joined #openstack08:34
*** CloudChris has left #openstack08:36
*** dirkx__ has joined #openstack08:37
*** dirkx_ has quit IRC08:37
*** hggdh has quit IRC08:38
*** miclorb has joined #openstack08:42
*** kashyap has quit IRC08:42
*** jeffjapan has quit IRC08:44
*** dirkx__ has quit IRC08:45
*** dendrobates is now known as dendro-afk08:45
*** dirkx_ has joined #openstack08:46
*** guynaor has joined #openstack08:47
*** dirkx_ has quit IRC08:48
*** MarkAtwood has quit IRC08:48
katkeevishy: hello, your script works with maverick64 inside virtualbox08:50
*** lborda has quit IRC08:50
*** zul has quit IRC08:51
katkeevishy: we are going to try your script on a bare server, do you think your script will work with ubuntu server 11.04 ?08:51
*** ike1 has quit IRC08:51
*** citral has joined #openstack08:56
*** lborda has joined #openstack08:59
*** CloudChris has joined #openstack09:06
*** watcher has joined #openstack09:11
*** dirkx_ has joined #openstack09:11
*** Kronick has quit IRC09:12
*** Kronick has joined #openstack09:15
*** zul has joined #openstack09:16
*** Kronick has left #openstack09:18
*** adjohn has quit IRC09:19
*** katkee has quit IRC09:23
*** miclorb has quit IRC09:23
radekhi I've set up a testing installation of openstack cactus on one server; everything seems to work fine except that virtual machines show only 100Mb nic speed09:25
radekis that normal09:25
radeki'm using vlan mode09:25
*** rchavik has quit IRC09:26
*** zul has quit IRC09:29
*** CloudChris has quit IRC09:29
*** CloudChris has joined #openstack09:29
*** dirkx__ has joined #openstack09:29
dsockwellhave you double-checked the interface properties of your server and switch?09:29
*** dirkx_ has quit IRC09:32
radeketh0 is running on 1G09:34
radekon br100 I can't see speed information09:35
radekis it something that i have to configure on bridge interface ?09:36
*** cuzoka has joined #openstack09:39
*** lborda has quit IRC09:40
*** watcher has quit IRC09:40
mtaylorsoren: hey! where you at?09:44
*** kashyap has joined #openstack09:46
*** lborda has joined #openstack09:47
*** nerens has quit IRC09:47
*** nerens has joined #openstack09:50
*** perestrelka has quit IRC09:50
*** lborda has quit IRC09:51
radekdsockwell any ideas ?09:54
*** rchavik has joined #openstack09:54
*** rchavik has joined #openstack09:54
*** Binbin is now known as Binbinaway09:54
*** dysinger has quit IRC09:59
toluenemy god, QEMU has driven me crazy. I got "internal error Process exited while reading console log output: chardev opening backed "file" failed". Does anyone know how to fix it ?10:00
*** dirkx__ has quit IRC10:01
*** Eyk has joined #openstack10:04
*** smaresca has quit IRC10:06
*** daedalusflew has quit IRC10:06
*** s1cz- has quit IRC10:06
sorenmtaylor: Jokai.10:06
sorentoluene: Which version of libvirt?10:07
mtaylorsoren: ah, I'm in Arany - I was going to poke you about a rabbitmq problem I was having10:07
mtaylorsoren: it seems that I cannot install rabbitmq on maverick cloud servers :(10:08
*** daedalusflew has joined #openstack10:08
sorenmtaylor: Poke away, but I may be slow to respond.10:08
mtaylorsoren: well, for now I just changed to using lucid and we'll see if that fixes it10:08
toluenesoren, my libvirt is 0.8.310:09
sorentoluene: Stock?10:09
sorentoluene: Some versions of libvirt had problems with file backed chardevs.10:10
toluenesoren, I cd into the instance dir, and run virsh create libvirt.xml, it told me "inter nal error Process exited while reading console log output: chardev: opening backed "file" failed"10:11
sorentoluene: I understand.10:13
sorentoluene: I'm asking about libvirt versions.10:13
sorentoluene: Is it a stock 0.8.3 libvirt?10:13
toluenesoren, I apt-get installed it from the official site10:14
sorenthe official site?10:14
sorenUbuntu?10:14
sorenDebian?10:14
sorenNot libvirt, surely.10:14
toluenesoren, I am using ubuntu 10.1010:15
*** Dweezahr has quit IRC10:15
*** Dweezahr has joined #openstack10:16
sorentoluene: And you're using the libvirt from Ubuntu itself?10:18
sorentoluene: How are you installing Nova?10:18
toluenesoren I install it from the source10:18
sorentoluene: Ok. You need a newer libvirt.10:19
*** smaresca has joined #openstack10:19
*** dendro-afk is now known as dendrobates10:25
mtaylorsoren: GGAAAAHHHHHHHHHHHH10:25
mtaylorsoren: AAAAAARRRRRRRRRGGGGGGGGGGGGGHHHHHHHHHHHHHHH10:25
*** hggdh has joined #openstack10:26
*** ChameleonSys has quit IRC10:30
*** CloudChris has left #openstack10:30
*** s1cz- has joined #openstack10:31
sorenmtaylor: No, tell me how you really feel. Don't hold back.10:32
mtaylorsoren: apt-get install rabbitmq-server should not, in my opinion, hang10:32
mtaylorsoren: or fail10:32
sorenmtaylor: Sounds like a reasonable opinion to hold.10:32
sorenmtaylor: Reality holds a different opinion?10:33
mtaylorsoren: well, spinning a cloud server and trying that command so far has been quite fail10:33
sorenmtaylor: Which image id?10:33
mtaylorsoren: 69 definitely breaks - trying 49 now10:34
mtaylorsoren: (I think rackspace may be rate limiting me at the moment- spinning up servers is getting slow)10:34
sorenmtaylor: flavour?10:34
mtaylorsoren: 410:35
sorenOh.10:35
mtaylorsoren: problem?10:35
sorenNo, just that I was starting a flavour 1 one.10:35
mtaylorsoren: it _shouldn't_ make a difference...10:35
mtaylorsoren: is a 1 big enough to run nova smoketests?10:36
*** katkee has joined #openstack10:36
sorenmtaylor: Not sure. rabbitmq install worked great on the smaller one. Trying the larger one.10:37
mtaylorsoren: ok. I'll try the smaller one10:37
sorenmtaylor: ...worked fine on the larger one, too.10:37
sorenmtaylor: No problems at all.10:38
mtaylorsoren: goddammit10:38
mtaylorsoren: I hate everyone10:38
* soren hugs mtaylor 10:38
sorenmtaylor: Where are you at? I'm available for a bit of hacking on it.10:38
mtaylorsoren: Arany - I can leave and meet you somewhere though10:39
zigo-_-soren: Hello !10:40
sorenmtaylor: Exit Arany. Turn left. Walk 10 feet.10:40
sorenmtaylor: Meet you there.10:40
zigo-_-I just saw a presentation you did of Openstack, apparently in a French university.10:40
zigo-_-I was wondering if it was available, as I want to do a presentation here in China.10:41
toluenesoren, problem solved. I forgot to add the user and group permission for qemu.conf10:42
zigo-_-soren: Do you have the PDF / PPT / ODP somewhere?10:42
zigo-_-I'm not sure I'll do as well as you did, but I'll try...10:45
*** cm6051 has joined #openstack10:46
*** goatrider has joined #openstack10:46
*** hggdh has quit IRC10:49
*** Eyk has quit IRC10:50
*** ChameleonSys has joined #openstack10:51
*** dysinger has joined #openstack10:53
*** cloudgroups has joined #openstack10:55
*** dendrobates is now known as dendro-afk10:57
*** cloudgroups has left #openstack11:00
*** mgoldmann has joined #openstack11:00
*** ctennis has quit IRC11:03
*** markvoelker has joined #openstack11:04
dobberhi, i'm having trouble with swift documentation 3.3.1. - installing proxy node11:13
dobberthe rebalance the rings commands does not work11:13
dobberhttp://docs.openstack.org/cactus/openstack-object-storage/admin/content/installing-and-configuring-the-proxy-node.html11:13
dobberin point 7 i configure some storage devices, so i guess stuff should be rebalanced between them, but i never installed storage devices11:14
dobberit didnt say anywhere how or when to install them11:14
*** omidhdl has left #openstack11:19
*** ctennis has joined #openstack11:19
*** ctennis has joined #openstack11:19
*** dirkx_ has joined #openstack11:20
*** toluene has quit IRC11:23
*** katkee has quit IRC11:24
*** mgoldmann_ has joined #openstack11:26
*** mgoldmann has quit IRC11:26
*** goatrider has quit IRC11:28
*** mgoldmann_ has quit IRC11:30
zigo-_-dobber: There USED to be a swift-auth, and it's now gone.11:37
dobberzigo-_-: it's in 3.3.2. documentation11:37
dobberhttp://docs.openstack.org/cactus/openstack-object-storage/admin/content/installing-and-configuring-auth-nodes.html11:37
dobberdo i just skip it ?11:37
zigo-_-Should I just repeat?11:38
zigo-_-:)11:38
zigo-_-The doc is outdated, that's it.11:38
dobberi have no idea what swift-auth was, what it used to do and what replaces it.11:38
dobberok moving ahead11:39
zigo-_-I /think/ it's now included in swift-proxy-server...11:39
zigo-_-Not sure.11:39
zigo-_-Hum... it does, because in proxy-server.conf there's the swauth statements of that auth-server.conf ...11:40
zigo-_-Hint: use the wiki howto, not the doc, as the wiki is updated.11:40
dobberok11:43
dobberhttp://swift.openstack.org/howto_installmultinode.html11:43
dobberthis one ?11:43
zigo-_-yup11:45
*** gaveen has quit IRC11:46
dobberkk going over again11:46
*** Eyk has joined #openstack11:47
*** lurkaboo has quit IRC11:54
*** dendro-afk is now known as dendrobates12:06
*** jness has left #openstack12:13
*** aloga has joined #openstack12:15
*** zul has joined #openstack12:20
*** perestrelka has joined #openstack12:24
*** j05h has quit IRC12:26
*** zul has quit IRC12:35
*** guynaor has left #openstack12:41
*** dendrobates is now known as dendro-afk12:46
*** RoAkSoAx has quit IRC12:52
*** Binbinaway has quit IRC12:52
*** j05h has joined #openstack12:52
*** dprince has joined #openstack12:52
*** hggdh has joined #openstack12:53
*** Binbinaway has joined #openstack12:56
*** zenmatt has joined #openstack12:58
*** openstackjenkins has quit IRC13:00
*** aliguori has joined #openstack13:00
*** openstackjenkins has joined #openstack13:00
*** aliguori has quit IRC13:00
*** aliguori has joined #openstack13:01
*** NashTrash has joined #openstack13:01
NashTrashGood morning Openstack'ers13:01
NashTrashAnyone available for a Swift question?13:01
alekibangoNashTrash: try it13:02
*** hadrian has joined #openstack13:02
alekibangoNashTrash:   reading this might improve your chances :) http://catb.org/~esr/faqs/smart-questions.html13:02
NashTrashThanks, I think.  Hope I am not coming across as rude.13:04
alekibango:)13:04
alekibangonp nash13:04
alekibangojust ask13:04
NashTrashI have my Swift cluster up and running but want to add a second proxy for HA.13:05
*** lorin1 has joined #openstack13:05
NashTrashI added it according to the directions here: http://swift.openstack.org/howto_installmultinode.html.13:05
NashTrashI then put a Pound load balancer out front with a virtual IP that redirects the requests to both proxy machines.13:06
*** mgoldmann has joined #openstack13:06
NashTrashI get an error when I try the following "swauth-prep -A https://99.99.99.94:8080/auth/ -K XXXXXX" where the 99. IP is the virtual IP assigned to the LB.13:07
NashTrashThe error is: Auth subsystem prep failed: 500 Internal Server Error13:07
alekibangoNashTrash:  what OS, swift version?13:08
NashTrashUbuntu 10.10 with the Swift pulled from ppa:swift-core/trunk yesterday13:08
*** lele_ has joined #openstack13:09
alekibangoNashTrash: we have similar problems... lol13:09
NashTrashHa13:09
NashTrashNot forever alone13:09
lele_Hi all !13:09
NashTrashhola13:09
*** dendro-afk is now known as dendrobates13:09
alekibangozigo-_-, dobber    you are not alone ^^13:09
lele_Hi :) , quick question: is there a way to delete compute instances on the controller without deleting the row in mysql ?13:10
NashTrashIf I try the swauth-prep command directly to either of the proxy machines it works just fine.13:10
dobberNashTrash: same problem here13:10
*** Kronick has joined #openstack13:10
*** hagarth has quit IRC13:11
NashTrashHmm...Does it matter that the load balancer is translating the request from HTTPS to HTTP?13:11
alekibangowell,. you need https13:11
NashTrashIs there a way to set the proxy machines to take HTTP since we are offloading that workload to the LB?13:11
alekibangoi think that they will run in http mode automatically when there are no certificates13:13
alekibangoimho13:13
alekibangoyou need to check docs or source13:13
lele_Hi all, is there a way to delete compute instances "cleanly" instead of deleting the row on the controller mysql database ?13:13
alekibango(i dont know more)13:13
alekibangolele_: ... how is that unclean?13:14
NashTrashalekibango: I will take a look at that.  I am also seeing if Pound can just pass through the HTTPS and not change it to HTTP13:14
NashTrashdobber: Are you using Pound as the LB also?13:14
dobberNo13:15
alekibangoNashTrash: no, he is just running swift for the first time13:15
alekibangotoday13:15
NashTrashalekibango: That was me yesterday.13:15
alekibangoNashTrash: can you please do ls -l /etc/swift13:15
alekibangoand paste it somewhere?13:15
alekibango(on machine with proxy... and possibly also on storage node)13:16
alekibangoi think i have few files there which are not needed anymore13:16
alekibangobut i am not sure13:16
alekibangodo you have auth.db there?13:16
*** galthaus has joined #openstack13:16
NashTrashlele_: Do you mean something other than euca-terminate-instances?13:16
NashTrashalekibango: http://paste.openstack.org/show/1332/13:17
alekibangoNashTrash: ty13:17
lele_alekibango: i thought that maybe there was a command like "nova-manage ... something" ...  :)  but i flipped through the docs and "deleting a compute node" is not documented13:17
alekibangodobber: ^^13:17
NashTrashalekibango: auth.db is not needed anymore.13:17
dobberalekibango: same as me13:17
alekibangoaha, lele_ you are asking how to terminate it?13:17
alekibangolike end user command?13:18
*** cuzoka has quit IRC13:18
alekibangolele_: euca-terminate-instances13:18
NashTrashalekibango: Are you following the directions at docs.openstack.org or http://swift.openstack.org/howto_installmultinode.html?13:18
*** patcoll has joined #openstack13:18
lele_alekibango: thats for instances, i was asking for removing a compute node , on a multinode environment, from the controller database / everywhere13:18
alekibangolele_:  or using openstack tools: nova delete13:18
alekibangoah node....13:19
alekibangolele_: sorry i didn't understand you13:19
lele_alekibango: no prob :)13:19
alekibangonova-manage service13:19
alekibangothat might be waht you want13:19
alekibangodisable it13:19
lele_yep, ive already done that, just wondering if was a way to completely remove it, but i guess that the only way is to delete the row on teh controller mysql database13:20
NashTrashlele_: AFAIK the only way to completely remove any of the services is to delete the row in the database.  Otherwise you can only disable it.13:20
notmynameNashTrash: comment out the certificate stuff in the proxy configs and the proxy servers will use http13:21
alekibangolele_: i dont know about other options too13:21
notmynameat rackspace we use the LBs for SSL termination, too13:21
NashTrashnotmyname: Cool.  Here we go.13:21
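What notmyname describes, sketched against a stock proxy-server.conf (paths are illustrative):

```ini
[DEFAULT]
bind_port = 8080
# leave these commented out and the proxy listens over plain http,
# letting the load balancer terminate SSL:
# cert_file = /etc/swift/cert.crt
# key_file = /etc/swift/cert.key
```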
dobberNashTrash: i'm using the second documentation. the first one aparently is out of date13:22
lele_NashTrash: yep, thats what i thought , cause i cloned a compute instance, and started on a new xcp node with a new hostname, the compute gets registered on the controller but when i do a nova-manage service list, i see the new compute node with XXX , not happy faces :(13:22
alekibangonotmyname: is swift able to run on only one server?13:24
NashTrashnotmyname: Bingo.  I am good to go.  Thanks.  I am quite HA now.  I am adding in more disks today (one more on each storage node).  I have not seen directions for adding disks.  Do you know of any?13:24
alekibangoi mean only one device with 1 copy style13:24
*** galthaus has quit IRC13:24
alekibangoNashTrash: add them like when you started13:24
alekibangorebalance13:24
*** skiold has joined #openstack13:24
alekibangoand you are in13:24
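A sketch of alekibango's answer, assuming the new disk is sdc1 on a storage node at 10.0.0.11 (IPs, zone, ports, and weight are illustrative; ports follow the multinode howto's defaults):

```
swift-ring-builder account.builder add z1-10.0.0.11:6002/sdc1 100
swift-ring-builder container.builder add z1-10.0.0.11:6001/sdc1 100
swift-ring-builder object.builder add z1-10.0.0.11:6000/sdc1 100
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
# then copy the updated *.ring.gz files to every proxy and storage node
```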
NashTrashlele_: Yeah.  I just have a MySQL client that I use to directly edit the tables.  Much easier.13:25
NashTrashalekibango: Thanks.13:25
*** santhosh has quit IRC13:25
alekibangoNashTrash: at least thats my understanding :)13:25
*** watcher has joined #openstack13:26
lele_NashTrash: i think ill go for that too, any thoughts about why my new cloned node (with the same working config) is not showing as "happy" on the controller? the api ports are reachable and connectivity is ok ...13:26
*** santhosh has joined #openstack13:26
NashTrashlele_: Time synchronization is important and not well documented.  Make sure you have NTP installed on all nodes.13:26
*** lvaughn has joined #openstack13:27
lele_NashTrash: thats a cool tip, documentation is a big issue on this, every paper seems to get everything working easily and "magically" :)13:28
alekibangontp is a must for every server13:28
alekibangobut for cloud, its a MUST13:28
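e.g., on each Ubuntu node:

```
apt-get install ntp
ntpq -p   # confirm peers are actually being polled
```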
dobberlele_: i'm seeing the same problem too. magic is not good for my production :(13:29
alekibangodobber: thats why they call it cloud13:29
dobbernot to mention an outdated getting-started tutorial13:29
alekibangounder the fluffy cloud there is lots of magic13:29
NashTrashlele_: Documentation is key.  OpenStack has a lot of moving parts so it must be crazy trying to keep it all well documented.  Especially with most users living on /trunk.13:29
*** Zangetsue has quit IRC13:29
dobberrsync config with some tricks is not that hard to explain and understand13:29
lele_yep, we're currently managing our cloud with oracleVM and enterprise manager , has a lot of black magic, but getting openstack running the first time was a hard one13:29
alekibangolele_:  life is pain13:30
NashTrashYeah.  Two plus weeks before we had a stable pilot cloud.  Swift only took a day.13:30
*** zenmatt has quit IRC13:30
lele_we're looking forward to building a hybrid cloud , integrating our amazon ec2 instances and our production environment over oracle vm, i got everything working with chef and the ec2 xenapi, but this issue was hard to detect, the logs not showing anything about sync13:32
dobberthrough the magic of strace13:32
*** Zangetsue has joined #openstack13:32
dobberi found where my problem was13:32
alekibangodobber: ?13:32
dobberalekibango: one of my storage nodes did not have the right permissions13:32
*** santhosh has quit IRC13:32
dobberchown swift.swift /srv/node -R13:32
NashTrashdobber: d'oh13:32
dobbernow swauth-prep works13:32
*** santhosh has joined #openstack13:33
lele_anyone knows if the chef fog extension for XEN is actually available ?13:33
alekibangook, can you add user also?13:34
dobberi added a user13:34
lele_oracleVM has a Xen background, if i can get this extension i hope that interacting with our prod cloud will be easy13:34
alekibangodobber:  how many nodes you have?13:35
dobber1 proxy, 3 storage13:35
dobberall VMs13:35
alekibangoi am starting to think that zigo's problem is having only 1 node13:35
alekibango(even with 1 copy configured)13:35
alekibangois there anyone running swift with one device in one zone on one server ?13:36
dobberi think with the right config, the setup with one node is possible13:36
alekibango(with single copy)?13:36
*** adjohn has joined #openstack13:36
alekibangodobber: can you try for us please?13:36
dobberbut it's spaghetti13:36
alekibangoplease backup your config on server + nodes13:36
alekibangomaybe even publish it somewhere plz13:36
*** iammartian has joined #openstack13:36
alekibangobut i think zigo's problem is not in config13:36
dobberok, i'll see if I have some power left tonight13:37
alekibangoit looks like something external13:37
*** iammartian has left #openstack13:37
alekibangodobber: :)13:37
alekibangoit might be some package dependency13:37
*** dendrobates is now known as dendro-afk13:38
*** Zangetsue has quit IRC13:38
dobberdoes he have the same error as me ?13:39
dobberwhat I did was13:39
dobberedit proxy-server.conf13:39
alekibangodobber: imho the same or similar13:39
dobberchange workers=113:39
dobberrestart proxy13:39
dobberstrace -o /tmp/file -s 255 -f -p PID_OF_PROXY13:39
alekibangowill try13:39
dobberand on the other console - swauth-prep whatever13:39
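[ed: dobber's debugging recipe, spelled out as a sketch. The PID lookup and paths are examples, not exact commands from the log.]

```
# 1. In /etc/swift/proxy-server.conf set a single worker, so one process
#    handles every request:
#        workers = 1
# 2. Restart the proxy and attach strace to it:
swift-init proxy restart
strace -o /tmp/proxy.trace -s 255 -f -p <PID_OF_PROXY>
# 3. In another console, run the failing command (e.g. swauth-prep ...),
#    then grep the trace for the HTTP status each storage node returned:
grep -E 'HTTP/1\.1 [45][0-9][0-9]' /tmp/proxy.trace
```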
alekibangoi am on his comp now13:39
alekibangoprep now works... its the adduser which has problems13:40
*** kennethkalmer has joined #openstack13:41
dobberalso13:41
alekibangoyes good idea13:41
dobberit will try to connect to every node13:41
dobberand send stuff like "poll(fd=ID, whatever...13:42
*** aloga has quit IRC13:42
dobberrecv(ID, HTTP/1.1 202 Accepted\r\nContent-Type13:42
dobberor recv(13, "HTTP/1.1 500 Internal Server Error\r\n13:42
dobberso you know which node is the problem13:42
NashTrashHas anyone tried using "st" from an OSX client machine?13:43
alekibangodobber: he has only 1 node13:43
alekibangoi guess this might be source of the problem13:43
alekibangoone node, one everything13:43
dobberalekibango: he has three nodes with one ip13:43
dobberwith the same ip i mean13:43
alekibangoimho he has only 1 if he didnt make them last night13:43
dobber1 storage ?13:43
alekibangoyes just 113:44
alekibangothats why i ask -- is it possible to have only 1 ??13:44
dobberswift-ring-builder account.builder create 18 3 1 <- i guess this should be changed13:44
dobber18 1 1 ?13:44
alekibangoyes13:44
alekibango18 1 113:44
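[ed: the single-replica setup dobber and alekibango are converging on, as a sketch following the multinode howto's 2^18 partitions. IP, port, and device names are placeholders.]

```
cd /etc/swift
# create <part_power> <replicas> <min_part_hours>; replicas = 1 for one node
swift-ring-builder account.builder create 18 1 1
swift-ring-builder account.builder add z1-127.0.0.1:6002/sda3 100
swift-ring-builder account.builder rebalance
# repeat for container.builder (port 6001) and object.builder (port 6000)
```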
dobberwell i don't know then...13:44
dobberso it works until swauth-add-user ?13:45
alekibangoyes13:45
alekibangoprep works13:45
dobberso cool13:45
alekibangowill post it13:45
alekibangowait...13:45
alekibangoi was even able to create account13:47
*** f4m8 is now known as f4m8_13:48
dobbermagicaly working now ?13:48
alekibango... now.. i still dig in strace output :)13:49
lele_a ton of black magic out there :P13:50
*** dprince_ has joined #openstack13:50
dobberi'm gonna create a magicstack.org cloud software :)13:50
alekibangolol13:50
lele_dobber: jajaja LOL13:50
dobberusing inotify hooks and rsync :)13:51
*** dprince has quit IRC13:51
*** dprince_ has quit IRC13:51
*** zul has joined #openstack13:51
*** dprince has joined #openstack13:51
*** iammartian has joined #openstack13:51
alekibangodobber: i would use that lol13:51
*** iammartian has left #openstack13:51
alekibangosimple, working.. lovely13:52
alekibangoand even might be accessible via filesystem hierarchy13:52
dobberbtw, it is probably a fashion thing. glusterfs has poor documentation, moosefs has poor documentation13:52
alekibangoglusterfs is good13:52
alekibangobut somewhat slow13:53
dobberexcept when you restart a server13:53
dobberit's much faster than moose13:53
alekibangodobber: i for one am waiting for ceph13:53
dobbergluster gets faster by the node13:53
dobbermoose gets slower by the node13:53
dobbergo figure...13:53
alekibangoyou mean by node added?13:54
dobberyeah, sorry13:54
alekibangoi like gluster idea13:54
alekibangosimple, clean13:54
dobberthe problem with gluster is the missing metadata13:55
alekibangodobber: no, its  a feature!13:55
*** Kronick has left #openstack13:55
dobbernot when you get out of sync13:55
alekibangodobber: it does sync *MAGICALLY*13:55
alekibangolol13:55
dobberyea, if you have 5 billion files on a node13:55
dobberno magic can help you :)13:55
alekibangohehe13:56
alekibangomy guts are saying ceph... but not yet13:56
dobberwill see13:56
dobbermy swift is working perfectly i think13:57
dobberon to stress testing ... :)13:57
alekibangoswift looks great -- but not if you need filesystem13:57
lele_welll, i manage to start an instance , but is failing with this error : StorageRepositoryNotFound: Cannot find SR to read/write VDI13:57
lele_and the repo is well defined on the XCP13:57
dobberi need a filesystem yes, but if its so good, we can change our application13:57
dobberbeer time13:58
NashTrashdobber: We are currently evaluating gluster, ceph, and moose too.  I really hope that ceph gets good quickly13:58
*** hggdh has quit IRC13:58
*** santhosh has quit IRC13:59
dobbergluster is the best *working* thing i've seen lately14:00
dobberat least for my use14:00
*** zul has quit IRC14:01
dobberbut the consistency problem (that has been around forever) is not good14:01
NashTrashdobber: Our apps NEED consistency.  I am really hoping an open source solution works for us because the Isilon's of the world are wildly expensive.14:03
*** sunny has joined #openstack14:05
*** msivanes has joined #openstack14:05
*** katkee has joined #openstack14:06
dobberNashTrash: my opinion is that the sky is too cloudy for now ;(14:07
dobberat least for my use14:07
dobberbut we are growing and need to change and plan, so I have to find a solution14:07
NashTrashdobber: :-)       We shall see.14:07
*** amccabe has joined #openstack14:10
lele_guys, anyone experienced with xen integration ?14:10
alekibangonot me (kvm)14:10
lele_it seems like the compute is not seeing the SR storage on the XCP14:10
*** prudhvi has joined #openstack14:11
lele_i know that KVM is well supported; any relevant differences versus XCP?14:11
*** openpercept_ has joined #openstack14:12
alekibangolele_: i don't really know, i don't even know what SR and XCP are :)14:12
alekibangoi prefer to not dig into xen14:12
alekibangoand java14:12
lele_alekibango: jaja, ok, don't dig into it ... the documentation for integrating with xen sucks hard ...14:13
lele_alekibango: SR is storage repository / shared14:13
alekibango:)14:13
lele_How do you guys manage High Availability ?14:13
alekibangolele_: failure is part of life :)14:14
alekibangolele_: i am trying to use sheepdog14:14
alekibangoif you are talkin about ha for disks14:15
lele_alekibango: :P, actually oracleVM has live-migration and node-HA features ... we really need to replicate this setup on openstack, but the HA stuff is not well documented14:15
alekibangosome people do RBD14:15
alekibangolele_: nasa is not using ha for disks14:15
alekibangocloud ready applications do not need reliability14:15
alekibango:)14:15
alekibangolele_: look on rbd or sheepdog14:16
alekibangobut i dont know how much luck will you have with XEN14:16
lele_alekibango: i know, we need that if a physical node crashes, all the VMs migrate to an active node of the cluster ... but it seems that openstack is pretty far away from this14:16
alekibangolele_: openstack supports this14:16
alekibangoif you use aftercactus trunk14:16
alekibangobut its not yet used in production14:17
alekibangoas nasa doesnt need this14:17
lele_alekibango: wow, and it get done automagically too ?14:17
alekibangolele_: i don't think so, for now14:17
alekibangobut it will this year, i am sure14:17
alekibangoits too much magic to get running reliably without hard work14:18
alekibangoand local drives are faster14:18
lele_alekibango: thats cool, ill install a KVM test node today to see the differences and compatibility14:18
alekibangolele_: good luck :)14:18
alekibangotell me how it worked14:18
alekibango(or didnt, lol)14:19
lele_alekibango: lol, we got nfs netapp mounts, with SAS disks; i think that should be fast. i hope that KVM doesn't wear my patience out like XEN did ...14:20
*** Zangetsue has joined #openstack14:21
alekibango:)14:23
*** jkoelker has joined #openstack14:23
*** zenmatt has joined #openstack14:28
NashTrashnotmyname: It appears I have my whole Swift cluster (with HA) up and running.  I want to have S3 compatibility.  I can not find any documentation on how to add this in.  Any pointers?14:33
*** adjohn has quit IRC14:33
notmynameNashTrash: the swift3 middleware14:33
NashTrashnotmyname: Right, I saw that it exists, but nothing on how to set it up.14:34
notmynameNashTrash: however, be aware that the S3 compatibility will always lag and be a 2nd class citizen14:34
NashTrashnotmyname: Sure.  I understand.14:34
notmynamehmm...I don't see a sample config14:36
notmynamecreiht: ^?14:37
*** kakella has joined #openstack14:37
*** kakella has left #openstack14:37
*** dendro-afk is now known as dendrobates14:39
Eykhttps://answers.launchpad.net/swift/+question/154332 <-- at the end I wrote how to get s3 running14:40
xavicampahi! anyone knows how to boot an image (snapshot) created with "nova image-create"? I'm using glance, "nova image-list" does not list it, "glance index" in the glance node neither, only "glance show 38" shows it14:41
notmynameEyk: thanks. I just wasn't sure what was required in the filter section14:41
*** mgoldmann has quit IRC14:42
*** derrick_ has quit IRC14:43
Eykit took me some time to get it working, all I could find on the web was wrong ;-)14:43
NashTrashEyk: Thanks.  I will take a look at it.14:44
NashTrashnotmyname: Thanks too.14:44
creihtNashTrash: yeah that should get you going14:45
creihtbtw, the compatibility layer is still a bit experimental14:45
NashTrashcreiht: Excellent.  Thanks all.14:45
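[ed: since notmyname notes there is no sample config in the docs, here is a minimal sketch of the swift3 filter wiring from this era. The entry point and the pipeline position ahead of auth are assumptions based on the bundled middleware; see Eyk's writeup linked above for a tested walkthrough.]

```ini
[pipeline:main]
pipeline = healthcheck cache swift3 swauth proxy-server

[filter:swift3]
use = egg:swift#swift3
```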
uvirtbotNew bug: #781716 in nova "Post install check assumes dpkg-vendor" [Undecided,New] https://launchpad.net/bugs/78171614:46
*** dendrobates is now known as dendro-afk14:48
annegentleEyk: thanks for writing up the S3 config, I'll take a look and try to roll it into the docs14:48
mtaylormorning jaypipes14:50
creihtmtaylor: !!!!!14:50
creihtmtaylor: the swift ppas are still a 1.214:50
mtaylorcreiht: I thought soren was supposed to be doing your packages ...14:51
creihtmakes me a sad panda :(14:51
* creiht has no idea14:51
mtaylorcreiht: this may again be one of those times when each of us thought the other was taking care of it14:51
mtaylorcreiht: he's a few rooms over right now - I'll find him and sort it out14:51
creihtmtaylor: cool, and thanks14:51
*** dragondm has joined #openstack14:53
*** thatsdone has joined #openstack14:53
deshantmlele_: what were you trying to do with XCP that you couldn't do?14:55
lele_deshantm: hi, it seems that the nova-compute running on the XCP node is not seeing the repo14:57
lele_so the VM launches, but doest start14:57
lele_keeps in shutdown mode14:57
lele_StorageRepositoryNotFound: Cannot find SR to read/write VDI.14:58
*** thatsdone has quit IRC14:58
lele_but the SR is there ...14:58
deshantmlele_: what versions of everything are you running? did you follow the openstack wiki XenServer howto?14:58
lele_yep i followed it14:59
lele_cactus version of nova14:59
lele_XCP last version14:59
deshantmlele_: there was a thread on the openstack-operators list about this14:59
deshantmwas that you?14:59
lele_yep, that was me :)14:59
lele_i attached the stack trace14:59
*** thatsdone has joined #openstack15:00
deshantmlele_: I gotta run to a meeting, but I can try to get you in contact with the right people15:00
lele_thats cool !15:00
deshantmlele_: perhaps mcclurmc has some insights15:00
lele_great, could you reply to me email with the contact ?15:00
deshantmlele_: I was the one that responded a bit to that thread15:00
deshantm:)15:00
lele_:) yep, but no one else replied after the trace :(15:01
deshantmlele_: to me that looked like a nova issue, I don't have the experience with that part myself15:01
deshantmI'll try to pull in others who might have hints15:01
deshantmlele_: late for a meeting now though, talk to you later15:02
lele_thanks thats a big help !15:02
lele_have a great meeting15:02
*** thatsdone has quit IRC15:06
*** thatsdone has joined #openstack15:06
*** zenmatt has quit IRC15:06
*** xavicampa has quit IRC15:07
*** mancdaz has joined #openstack15:08
gholtmtaylor: I'm not sure why the planet openstack feed always shows you as the author of my posts. The planet page itself shows fine. Hopefully I'm not too embarrassing to you. ;P15:11
mtaylorgholt: hrm. that's weird15:12
*** RoAkSoAx has joined #openstack15:12
*** citral has quit IRC15:14
gholtnelson: One of the guys just noticed that https://twitter.com/#!/russnelson/status/68050129105592320 401s atm15:14
*** msivanes has quit IRC15:18
creihtNashTrash: http://swift.openstack.org/misc.html#module-swift.common.middleware.swift315:19
creihthas an example boto config15:19
creihterm connection15:19
*** zenmatt has joined #openstack15:19
*** Shentonfreude has joined #openstack15:19
*** thatsdone has quit IRC15:23
*** guigui has quit IRC15:24
dobberso, is there a way to "mount" an object in swift?15:25
*** enigma has joined #openstack15:25
notmynamedobber: not as part of swift itself, but there are some 3rd party tools that do something like that15:26
dobbercool15:26
notmynamecloudfuse is a fuse module that can talk to a swift cluster15:27
notmynamebut realize that mounting a swift cluster is generally a bad idea. it's not designed to be used as a block-level device15:28
radekall my vm's have 100M network although server nic is 1G15:28
dobberyeah i know15:28
dobberbut i'm still looking for ways to use it15:28
radekits a single server installation  with default vlan mode15:28
notmynameok :-)15:28
radekis it normal ?15:28
radeki can't find out whats wrong with it15:30
*** deepy has quit IRC15:31
NashTrashcreiht: Nice.  Thanks.15:32
*** deepy has joined #openstack15:32
radekany one seen this issue ?15:33
*** jwilmes has joined #openstack15:35
*** joearnold has joined #openstack15:36
*** maplebed has joined #openstack15:37
nelsongholt: yeah, it's in an ... uncertain state.15:40
*** rnirmal has joined #openstack15:43
*** rchavik has quit IRC15:43
gholtHeh :)15:43
*** arun_ has quit IRC15:44
*** dobber has quit IRC15:45
nhmany of you guys doing VMs on magnycours?15:45
*** magglass1 has quit IRC15:46
*** msivanes has joined #openstack15:48
jaypipesmtaylor: morning :)15:49
uvirtbotNew bug: #781756 in nova "AuthToken server_management_url spelled incorrectly" [Undecided,New] https://launchpad.net/bugs/78175615:51
*** daveiw has quit IRC15:53
katkeehello, is there a document describing ebtables iptables nat bridges and the networking modes in openstack?15:55
*** scyld has joined #openstack15:56
*** skiold has quit IRC15:58
*** scyld is now known as skiold15:58
NashTrashcreiht: Do you have a moment for two quick Swift questions?15:59
*** arun_ has joined #openstack15:59
*** arun_ has joined #openstack15:59
NashTrashAnyone open for a Swift question?16:01
notmynamedon't ask to ask. just ask16:02
joearnoldDepends on the question.16:03
*** Zangetsue has quit IRC16:03
NashTrashOk.  I followed the directions for creating the first Swift user (root).  I then added a second proxy server and followed the directions to change root's URL.  Now the stat command fails with "Account not found"16:04
NashTrashI can still curl and get responses16:04
*** zenmatt has quit IRC16:06
NashTrashMy other question is when I try swauth-add-account I get the following error: "Account creation failed: 501 Not Implemented".  Is account creation really not implemented?16:06
notmynamethe add account error is probably a config issue16:06
notmynamedo you have allow_account_management set in the proxy server?16:07
Eykmaybe  you set the wrong root url,first question16:07
*** deepy has quit IRC16:07
*** deepy has joined #openstack16:07
NashTrashnotmyname: Yes, allow_account_management = true16:07
*** mattray has joined #openstack16:09
NashTrashEyk: If I run swauth-list on the system account I see the following: {"services": {"storage": {"default": "local", "local": "https://99.99.99.94:8080/auth/"}}, "account_id": "AUTH_59d7770d-41d9-4e2d-9cd6-e4a877ebcb1d", "users": [{"name": "root"}]}16:09
vishykatkee: script should work fine on bare metal.  You will have to change libvirt_type to kvm16:10
*** zigo-_- has quit IRC16:10
*** MarkAtwood has joined #openstack16:11
NashTrashnotmyname: Here is my proxy config - http://paste.openstack.org/show/1333/16:11
*** johnpur has joined #openstack16:13
*** ChanServ sets mode: +v johnpur16:13
EykNashTrash, a mistake I made too: the new url you set needs the account at the end16:13
gholtNashTrash: I'm grepping through the code and I see nowhere a 501 can occur. You sure it was a 501?16:14
NashTrashEyk: Ah, you mean the AUTH_59... should be after /auth/?16:16
EykNashTrash, "local": "https://99.99.99.94:8080/v1/AUTH_59d7770d-41d9-4e2d-9cd6-e4a877ebcb1d"  this should be the right output16:16
NashTrashEyk: Ok, one second...16:16
*** dirkx_ has quit IRC16:16
NashTrashEyk: Hm...I think I screwed that up.  Please take a look - http://paste.openstack.org/show/1334/16:22
*** enigma has quit IRC16:23
NashTrashEyk: Ah!  Missing v1 after auth16:23
Eykcompare my string with the one you used16:24
Eykyou get it ;-)16:24
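[ed: the fix Eyk walks NashTrash through: the per-account storage URL must include /v1/ and the AUTH_ account id, not just /auth/. A sketch of correcting it with swauth-set-account-service; the key and the AUTH_ id are placeholders from the paste above.]

```
# Point the "system" account's local storage URL at the proxy,
# including the /v1/AUTH_<id> path:
swauth-set-account-service -A https://99.99.99.94:8080/auth/ -K <superadminkey> \
    system storage local \
    https://99.99.99.94:8080/v1/AUTH_59d7770d-41d9-4e2d-9cd6-e4a877ebcb1d
# Verify:
swauth-list -A https://99.99.99.94:8080/auth/ -K <superadminkey> system
```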
*** enigma has joined #openstack16:25
Eykmany things are not clear in the documentation, these were the only obstacles setting up swift ;-)16:25
NashTrashEyk: And stat is now working.  Thank you.16:29
NashTrashgholt: Were you talking about my inability to create accounts?16:29
gholtNashTrash: I think so, but just generally speaking I can't find where a 501 could happen from Swift.16:30
katkeevishy: we have 2 bare metal servers running openstack installed with your script... we are now fighting with network config to understand ebtables, iptables, vlanmanager etc16:30
radekanyone have any ideas why my vm's have only 100M nic speed? I'm using default vlan mode networking16:31
*** grapex has joined #openstack16:31
NashTrashgholt: Well, I got it.  Here is the command- swauth-add-account -A https://99.99.99.94:8080/auth/ -K XXXXXXXX testaccount16:31
NashTrashgholt: And the result was "Account creation failed: 501 Not Implemented"16:31
*** nacx has quit IRC16:32
*** photron_ has joined #openstack16:32
Eykis this the right full syntax for this command?16:33
vishyradek: you can get up to about 600MB if you use virtio16:34
radekany docs how to do it ?16:34
radekdo I have do configure nova.conf or its on server side ?16:35
gholtNashTrash: Do you have something else running on 99.99.99.94 port 8080? A load balancer, nginx or something? I deliberately broke my install and got a 500 instead of 501...16:35
NashTrashgholt: Yes.  99.99.99.94:8080 is my load balancer.  But other commands (swauth-list for example) seem to work fine.16:36
gholtAh, okay. Well I'm not sure, but if you have multiple proxies it could be one of them is running with account management off. You might've changed the proxy conf and forgot to reload or something [just guessing]16:37
vishyradek: you can edit libvirt.xml.template and take out the comments around the virtio line16:38
vishyradek: you just have to make sure that the guests you use support virtio16:38
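[ed: vishy's change as a sketch. In the Cactus-era nova, the bundled libvirt.xml.template ships the virtio network model commented out; the surrounding element and variable names below are illustrative and vary by release.]

```xml
<!-- nova/virt/libvirt.xml.template (fragment, illustrative) -->
<interface type='bridge'>
    <source bridge='${bridge_name}'/>
    <mac address='${mac_address}'/>
    <!--   <model type='virtio'/>   remove these comment markers -->
</interface>
```

After editing the template on every compute node, instances must be recreated (or rebooted via euca-reboot-instances, as vishy suggests below) so the new XML takes effect, and the guest image must carry virtio drivers.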
NashTrashgholt: Just checked and both are set to True.16:38
radeki've tried that do i need to reboot their instance after ??16:38
NashTrashI will be afk for lunch.  Thanks for all the help.16:38
radeksorry after I've changed that I restart the instance ?16:39
*** zenmatt has joined #openstack16:39
gholtNashTrash: Np, if you get the chance, just restart both proxy services and see if that helps. Just in case...16:39
gholtNashTrash: If not, check the logs on each proxy and see if they give any hints to what's wrong.16:39
raggi_is this channel moderated and i'm not noticing?16:41
*** markvoelker has quit IRC16:41
vishyradek: you need to recreate instances16:42
vishythat xml template needs to be changed on all of the compute nodes16:43
radekrecreate ?16:43
vishyyes destroy the vm16:43
vishymake a new one16:43
radekxml template is held with image ?16:44
*** enigma has quit IRC16:44
vishyyup16:44
raggi_is there a way to rebuild the `security_group_instance_association` table?16:44
*** Eyk has quit IRC16:44
vishyyou can manually change it16:44
vishyactually euca-reboot-instances would probably work after changing the template16:44
*** markvoelker has joined #openstack16:44
radekwhere is xml template stored ?16:45
vishyraggi_: automatically?  I don't think so16:45
vishyradek: depends on how you installed16:45
raggi_vishy: i have a couple of old instances, i'd like to apply the security group changes to16:45
radekdefulat ubuntu install16:45
*** dirkx_ has joined #openstack16:45
*** jkoelker has quit IRC16:45
radekfrom package manager16:45
raggi_vishy: i altered the iptables rules, which worked fine until a new (different) instance was booted, and then it regenerated all of the nova chains16:45
raggi_unfortunately, it regenerated the chains using stale data16:46
radekwhere is template xml by default ?16:46
vishyraggi_: you manually altered the rules?16:46
raggi_is there a way to re-apply a security group to an instance?16:46
vishyradek: looking16:46
radekok thx16:46
raggi_vishy: yes, but that's been corrected16:46
raggi_vishy: (iptables was flushed and rewritten by the instance boot)16:46
vishyradek: might be faster to just do a locate16:47
radekwhats the name of the template16:47
radek?16:47
vishylibvirt.xml.template16:48
radekthx16:48
vishyshould be in nova/virt dir inside of python16:48
*** katkee has quit IRC16:48
vishyraggi_: if you change a rule in a group it will update the rules on any instance that is using it16:49
raggi_vishy: that doesn't seem to be working16:49
raggi_vishy: we only have one security group16:49
vishyraggi_: really?16:49
vishyraggi_: you might need to restart nova-compute first16:49
*** kbringard has joined #openstack16:49
vishyand then run a new instance on the host16:50
vishyso that it creates the basic rules?16:50
raggi_actually, there's an odd rule in there: http://pastie.textmate.org/private/bcuinymssbtubwlmbelllw16:50
raggi_could that grpname rule be causing a problem?16:50
kbringardhey guys! quick question about zones?16:50
kbringarddo I need to set more than --node_availability_zone in my nova.conf to change the name of the zone?16:50
kbringardand/or is there any documentation about zone setup? I found some random stuff in the developer docs, but nothing super useful :-/16:51
*** jkoelker has joined #openstack16:51
raggi_vishy: right on the money, restarting nova-compute did it, thanks very much16:51
raggi_didn't occur to me to try that :)16:52
*** koolhead17 has joined #openstack16:52
*** jkoelker has quit IRC16:54
*** jkoelker has joined #openstack16:57
*** mattray has quit IRC16:59
*** obino has joined #openstack17:00
vishyraggi_: cool17:00
vishykbringard: availability_zones or distributed_zones?17:01
kbringarduhm... that is a good question17:01
kbringardavailability_zone I would assume... is there a doc explaining the difference? (sorry, new to zones in OpenStack)17:01
vishykbringard: i think just the conf file change and you might have to use the zone scheduler to get any usefulness out of them17:02
kbringardessentially, I'm looking to start implementing what's outlined in the MultiClusterZones section of the wiki17:02
kbringardhttp://wiki.openstack.org/MultiClusterZones17:02
vishykbringard: oh no that is actually distributed zones17:03
vishywhich work although the scheduler is still being finished17:03
*** mattray has joined #openstack17:05
vishykbringard: not sure if there is a good how to for how to use those zones.  dabo or sandywalsh might have some insight17:06
kbringardcool, thanks, I'll keep messing with17:06
kbringardI don't mind figuring it out, but if there are docs that help, I'm all for using them :-D17:06
kbringardif I come up with anything meaningful I'll start a wiki page about it17:07
sandywalshvishy, kbringard I added docs on Zones to the Cactus release17:07
kbringardsandywalsh: in the admin guide, or developer guide?17:08
anticwnotmyname: inode64 won't matter because you have large inodes17:08
anticw1k inodes means you can cover i think 4 or 8tb drives evenly in <= 32 bits17:09
sandywalshkbringard, little of both ... trying to find a link, sec17:09
sandywalshannegentle, where would I find a link to the zones docs I added to Cactus?17:10
notmynameanticw: we've seen older drives in the cluster slow down, even though they have the same amount of data on them as newer drives. we came across the inode64 thing last night when I ran out of inodes on my SAIO and wondered if it may help with the drive slowness17:10
*** enigma has joined #openstack17:10
anticwran out of inodes?17:11
anticwthey are dynamically allocated17:11
anticwyou probably hit imaxpct=2517:11
anticwwhich is the default17:11
notmynameerr..actually, that's a bit of a guess. I got "out of room" errors17:11
anticwand if you did that's a bit horrific17:11
notmynameand the mount had plenty of space left on it17:11
anticw2TB drive? 1k inods?17:11
notmynameno, this was on my all-in-one. running on a slicehost VM17:12
*** kashyap has quit IRC17:12
notmynameloopback device17:12
anticwok so i doubt you did17:12
anticwi think you hit imaxpct then17:12
notmynameok. like I said. just a guess17:12
anticwso you can tweak that up17:12
notmynamewhat i imaxpct?17:12
notmynames/i/is17:12
anticwhow much space inodes can use17:12
kbringardsandywalsh: I did find http://wiki.openstack.org/MultiClusterZones17:13
anticwyou sell disks blocks not inodes ...   so if you bump into htat i would be worried17:13
kbringardbut that's more of a blueprint and less documentation17:13
notmynamewe haven't seen it in production. just my really old saio17:13
sandywalshkbringard, that's the bp spec, but I documented the implementation a while ago ... still looking17:13
kbringardno worries, thanks for the hlpe17:14
kbringardhelp*17:14
anticwnotmyname: lots of deltes?17:14
anticwtombstones burn inodes ...17:14
anticwgah17:14
anticwlatency horrible here, typing sucks17:14
sandywalshkbringard, this is the file, but I don't know why it's not in the published docs: http://bazaar.launchpad.net/~hudson-openstack/nova/trunk/view/head:/doc/source/devref/zone.rst17:14
notmynameya, lots of everything. the saio was probably 6 months old, so 6 months of testing, etc17:14
*** jdurgin has joined #openstack17:14
*** dirkx_ has quit IRC17:15
anticwdf -hi17:15
kbringardsandywalsh: awesome, thanks! I'll read over it17:15
anticwand see how many you have17:15
anticw1k each ... and your block device is some size to ... check the math17:15
sandywalshkbringard, it's more admin related, my summit zones presentation is more dev focused. Let me know if you need the link17:15
notmynameya, that didn't say I was out of room, but docs I found said that df -i lies (or doesn't tell the whole truth)17:15
notmynameI rebuilt it this morning, so I can't go back and check now17:15
kbringardsandywalsh: thanks, not yet... I'm just looking to implement the building blocks at this point. We have a single zone for now but will likely be looking to implement more (in the huddle fashion you spoke of in the blueprint)17:16
sandywalshcool17:16
sandywalshlet me know if you have any questions17:17
*** watcher has quit IRC17:17
*** koolhead17 has quit IRC17:17
*** koolhead17 has joined #openstack17:17
anticwdf -i sorta lies17:18
anticwbut it's close enough17:18
anticwcw@naught:~$ sudo find / -xdev | wc -l17:19
anticw15476617:19
anticw/dev/sda5      xfs   4976768  154290 4822478    4% /17:19
anticw154766 ~ 15429017:19
anticwso lies, i dunno, that's a bit strong17:19
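[ed: the imaxpct ceiling anticw describes can be sanity-checked with back-of-the-envelope arithmetic. Illustrative only; XFS's real accounting is more involved than a flat percentage.]

```python
def max_xfs_inodes(volume_bytes, inode_size=1024, imaxpct=25):
    """Rough upper bound on inode count implied by imaxpct: at most
    imaxpct percent of the filesystem's space may hold inodes."""
    return (volume_bytes * imaxpct // 100) // inode_size

# A 2 TB volume with 1 KiB inodes at the default imaxpct=25:
print(max_xfs_inodes(2 * 2**40))  # 536870912 (~536 million inodes)
```

So hitting the inode ceiling on a big drive takes hundreds of millions of files (or tombstones), which is why running out on a small loopback SAIO is plausible while production drives are fine.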
*** mgoldmann has joined #openstack17:22
*** mszilagyi has joined #openstack17:23
kbringardsandywalsh: so is the top level zone always called nova?17:23
kbringardbasically: currently I have one zone, I set --zone-name= in the nova.conf and restarted everything on the controller17:24
sandywalshkbringard, all zones are called 'nova' by default. --zone_name=foo to change it17:24
kbringardthen I did a nova zone-add on said controller17:25
kbringardbut it's still showing the zone as nova17:25
sandywalshdo you have --zone_name or --zone-name?17:25
kbringardand zone-info is only showing the basic stuff... same as before I had any child zones17:25
*** Kronick has joined #openstack17:25
kbringardohhhhh, haha17:25
kbringard<--- dumb17:25
sandywalsh:) common mistake17:25
sandywalshand it takes about 30 seconds for the updates to come through too17:26
kbringardyea, I was noticing17:26
kbringardthat's fine though17:26
kbringardso then I just bring up API servers wherever I want a zone, name them accordingly and zone-add them to the parent17:27
kbringardright?17:27
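[ed: the gotcha from the exchange above: nova flags use underscores, not dashes. A sketch of the child-zone side; zone and az names are placeholders, and the zone-add arguments are as discussed rather than an exact CLI reference.]

```
# nova.conf flags on the child zone's services (note the underscores):
--zone_name=child-zone-1
--node_availability_zone=az1

# then, against the parent zone's API, register the child with
# `nova zone-add`, restart the services, and allow ~30 seconds for
# the zone info to propagate.
```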
*** zenmatt has quit IRC17:28
kbringardsorry, hopefully one last thing... do I set the zone properties manually in the DB?17:29
kbringardoh, and what flag is it to change the availability zone name?17:29
*** mgoldmann has quit IRC17:31
*** mgoldmann has joined #openstack17:31
kbringardor does it just set it based on the node_availability_zone of the node when it registers?17:32
*** zenmatt has joined #openstack17:33
*** dirkx_ has joined #openstack17:33
notmynameanticw: thanks. good info to know17:36
*** pguth66 has joined #openstack17:38
*** xavicampa has joined #openstack17:38
*** Eyk has joined #openstack17:40
NashTrashgholt: I am back and figured out the 501 Not Implemented error.  Turns out the Pound load balancer defaults to only allowing GET, POST, and HEAD.  swauth-add-account uses PUT.17:44
gholtNashTrash: Ah, that's no fun. But I guess not a real big deal if you can just do account management directly, shelled into one of the proxies, against 127.0.0.1.17:45
notmynameof course, that's going to make object creation hard17:46
NashTrashgholt: I was able to reconfigure Pound to allow POST.  All is good.17:46
gholtHah, I missed the kinda obvious there, yeah... :)17:46
NashTrashgholt: Showed up buried in the Pound log17:47
gholtYou'll need PUT as well. COPY is "nice" but not 100% /required/.17:47
NashTrashgholt: All set I think.  Now time for some more testing17:47
gholtDoes pound spool PUTs? If so, that'd be bad.17:47
*** markvoelker has quit IRC17:48
*** dprince has quit IRC17:48
NashTrashgholt: I do not think so.17:49
notmynameI think Pound is ok. It was our 2nd choice for LB (1st choice for open source LBs)17:49
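[ed: the fix NashTrash found, as a config sketch. Pound's xHTTP directive controls which verbs pass: the default level only allows GET/POST/HEAD, level 1 adds PUT and DELETE, and level 2 adds WebDAV-style verbs such as COPY, which Swift uses for server-side copies. Addresses below are placeholders; check the Pound manual for your version.]

```
# pound.cfg (fragment)
ListenHTTP
    Address 99.99.99.94
    Port    8080
    xHTTP   2     # 0 = GET/POST/HEAD only; 1 adds PUT/DELETE; 2 adds COPY etc.
End
```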
gholtanticw: If you ever want me to run some xfs stuff on a "was in" production drive, let me know. I'll have the ops guys pull a drive and we can mess with it to see what's going on.17:49
NashTrashCan one swift user be in multiple accounts?17:49
notmynameNashTrash: depends on your auth, but not with swauth17:49
NashTrashnotmyname: Not a big issue.  Just curious.17:50
gholtNashTrash: No, a user is bound to an account for swauth. You'd have to make the user in each account. :/ Or you could give that user access to the other accounts (better).17:50
notmynamemy point is that is an auth "Feature" not a swift-specific thing17:50
NashTrashgholt: Hm..how do you give a user access to another account?17:51
*** mahadev has joined #openstack17:51
*** mahadev has left #openstack17:51
NashTrashnotmyname: Makes sense17:51
*** dirkx_ has quit IRC17:51
gholtNashTrash: st post -r 'account:user' -w 'account:user' container17:52
NashTrashgholt: Thanks.  I will have to play around with that.17:53
gholtNashTrash: For a little more info: http://swift.openstack.org/misc.html#acls17:53
nelsonwow. Should my /srv/node/sda3/objects subdirectory have 120K files in it? That seems excessive.17:54
*** dprince has joined #openstack17:54
gholtHeh, I guess that depends on how many files you have stored.17:55
nelson300k files, but we plan to store 256 times as many.17:55
*** skiold has quit IRC17:56
anticw120k isn't that many17:56
NashTrashgholt: I created a new user in the system account.  But I get a 403 Forbidden when I try to use that user with swauth-list on system.17:56
gholtnelson: And how many nodes? 300k objects * 3 replicas / devices17:56
NashTrashgholt: The .super_admin works fine17:56
nelsonsix devices17:57
gholtNashTrash: Only the super admin and reseller admins can use swauth-list iirc17:57
anticwgholt: that would be good, ill take you up on that but probably not for a few days as there are some fires i have to put out17:57
gholtnelson: Sounds about right then. :)17:57
*** dirkx_ has joined #openstack17:57
*** skiold has joined #openstack17:57
NashTrashgholt: Ok.17:57
nelsonas long as xfs can handle that many, yeah.17:58
*** kashyap has joined #openstack17:58
gholtanticw: Cool, I have a feeling it'll take a while to get ahold of a drive anyway. Have to pull it, ship from Dallas, convince people I'm not doing something evil, etc. :)17:58
NashTrashThanks everyone.  Off to meetings!17:59
*** NashTrash has quit IRC17:59
*** katkee has joined #openstack17:59
Eykis there any filecountlimit in swift?18:01
Eykobjects in a container or so18:02
gholtEyk: It depends on your container server performance. But we've been recommending staying under 10 million objects per container. There's no hard-coded limit.18:03
notmynamerun your container servers on SSD drives and then you can have 1 billion+ objects with no problem18:04
gholtTested and true. :) ^^18:04
*** clauden_ has joined #openstack18:05
*** fabiand__ has joined #openstack18:06
*** BK_man has joined #openstack18:12
devcamca-can someone remind me how to add someone to the openstack mailing list? been a looong time since i had to deal with that18:12
*** devcamca- is now known as devcamcar18:12
*** dirkx_ has quit IRC18:12
gholtdevcamcar: http://wiki.openstack.org/MailingLists :)18:13
devcamcargholt: too easy! :)18:13
*** dirkx_ has joined #openstack18:14
annegentlesandywalsh kbringard the zones doc Sandy did is here: http://nova.openstack.org/devref/zone.html, no doc in docs.openstack.org yet.18:19
kbringardthanks annegentle~18:19
kbringard!18:19
*** infinite-scale has joined #openstack18:20
annegentlesandywalsh: I'll add zone.rst to the devref page that pulls it all into the nav, etc.18:20
nelsongholt: I have an auth question. Don't shoot me. It looks like the auth server is returning 200 OK, but php-cloudfiles is expecting a 204. You familiar with that?18:20
Eykwill the resource consumption increase a lot with more files, or is it no problem? So 1 billion objects will probably consume 1000 times more resources than 1 million, or less?18:21
btorchnelson: I think it's because on 1.3 (swauth) it returns 204s but on the old one 1.2 (devauth) it returned 200s18:21
*** galthaus has joined #openstack18:21
btorchnelson: sorry the other way around18:22
nelsonbtorch: phwew!18:22
*** galthaus has joined #openstack18:22
nelsonthat's not a problem, I'll just fix php-cloudfiles.18:23
btorchnelson:  1.3 (swauth) it returns 200s during auth18:23
nelsonconvincing php-cloudfiles to accept 200 works. now I'm down into problems in my own code. :)18:25
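The compatibility fix nelson applied boils down to treating any 2xx as success instead of demanding exactly 204. A hedged sketch of that check — illustrative Python, not php-cloudfiles' actual (PHP) code:

```python
def auth_succeeded(status):
    # devauth (swift 1.2) answered auth requests with 204;
    # swauth (1.3) answers with 200. Accepting the whole 2xx
    # range keeps a client working against both.
    return 200 <= status < 300

print(auth_succeeded(200), auth_succeeded(204), auth_succeeded(401))
# -> True True False
```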
uvirtbotNew bug: #781837 in swift "IPv6 compressed notation and replication" [Undecided,New] https://launchpad.net/bugs/78183718:26
*** cp16net has joined #openstack18:26
*** MarkAtwood has quit IRC18:27
btorchnelson: was that on the latest php api ?18:28
nelsonkinda18:29
nelsonchecked out of the git repository March 31st. So a month and a half old.18:30
nelsonbtorch: are you familiar with that code?18:31
*** mrmartin has joined #openstack18:31
mrmartinre18:31
btorchnelson: no I believe conrad takes care of that I think18:32
*** dirkx_ has quit IRC18:32
nelsonHaven't seen conrad around ever.18:32
nelsonneither here nor on #cloudfiles18:33
*** tblamer has joined #openstack18:33
kbringardI have an interesting... thing18:33
kbringardeuca-describe-availability-zones shows the zone I set18:33
kbringardbut verbose shows the zone as nova still18:34
j05hsounds buggish18:34
btorchnelson: try chmouel18:34
nelsonbtorch: I think you just did. :)18:35
btorchnelson: I see that chmouel and conrad are the ones updating the github .. if I see conrad today I'll let him know too18:35
btorchnelson: :)18:35
nelsonbtorch: I have another question: whether it's re-authenticating or not. It looks like the code to do it is commented-out.18:35
*** katkee has quit IRC18:38
*** medberry is now known as med_out18:39
gholtDamn that code that requires 204 instead of just 2xx. :)18:40
sandywalshannegentle, thanks!18:40
kbringardawesome18:42
kbringardrv = {'availabilityZoneInfo': [{'zoneName': 'nova',18:42
kbringard                                        'zoneState': 'available'}]}18:42
kbringardI believe that explains my problem18:42
kbringardhaha18:42
sandywalshkbringard, yeah, don't confuse availability zones with Zones (bad choice of names)18:43
sandywalshavailability zones are logical partitionings *within* a Zone18:43
kbringardright, but, I was trying to change the output of the availability zone names18:43
kbringardand it looks like with verbose, the parent is always hardcoded to nova18:43
sandywalshah, yes, that would do it then18:43
sandywalsh:)18:43
kbringardso, then real quick...18:44
sandywalshanother tip with novaclient is to use the --debug option to see what's coming and going to the REST interface18:44
jaypipesgah, my air conditioning is hosed.18:44
sandywalshRUN ... jaypipes is going to explode!18:45
kbringarda Zone is an entire setup with an API server, etc... and an availability zone is a way to create fault tolerance within a Zone?18:45
jaypipessandywalsh: yeah, no joke :)18:45
sandywalshkbringard, exactly18:45
kbringardjaypipes: for some reason I read that as "My hair conditioner"18:45
mrmartinGuys, is there anyone here who is involved in Lunr development / planning ?18:45
kbringardand I was a bit confused18:45
jaypipeskbringard: which would be very funny.18:45
jaypipeskbringard: :)18:45
kbringardsandywalsh: so I'd want to do something like... location as the Zone, and then location0, location1, location2 as the availability_zone18:46
*** skiold has quit IRC18:46
kbringardjaypipes: as an aside, we got the chunking stuff working18:47
kbringardthanks for the heads up on that :-D18:47
sandywalshkbringard, sounds reasonable ... I haven't tried that.18:47
jaypipeskbringard: awesome news!18:47
sandywalshgotta step out for a bit ... will read thread after18:47
kbringardhopefully in the next couple of weeks I'll stop being lazy and I'll update ogle with image upload support18:47
kbringardthanks sandywalsh18:48
creihtmrmartin: howdy18:48
*** jbryce has joined #openstack18:49
creihtmrmartin: I am the lead for lunr, how can I help you?18:49
mrmartinI was in UDS@Budapest today, and saw a presentation about Openstack.18:50
nelsonDid something change in the % encoding of containers between 1.1 and 1.3 ?18:51
nelsonBecause what used to be images%2Fswift is now being encoded as images%25252Fswift18:51
nelsonI think they're being encoded twice now.18:51
gholtnelson: I think that occurs only in the logs.18:52
nelsonI think ... maybe ... I'll just throw away the %, since it's just causing problems.18:52
nelsongholt: I can totally imagine that the logs are encoding one of them, and something else is encoding another one.18:53
nelson:)18:53
gholtHeh, but I don't know of anything API-wise that changed encoding.18:53
*** Kronick has left #openstack18:53
btorchnelson: I'm not sure about the re-authentication php code being comented out .. I think I used the php api once18:53
btorch:)18:53
nelson"but I didn't CHAAAAAAAANGE anything" :)18:53
nelsonbtorch: I'm hoping that conrad will know.18:54
*** joearnold has quit IRC18:55
*** cuzoka has joined #openstack18:55
*** anotherjesse has joined #openstack18:59
*** mattray has quit IRC19:02
*** kashyap has quit IRC19:04
*** MarkAtwood has joined #openstack19:05
*** boncos has joined #openstack19:05
*** cuzoka has quit IRC19:06
boncoshi, i'm trying to install openstack (swift) on fedora but got a problem when running 'swauth-prep -K'19:07
boncosAuth subsystem prep failed: 500 Server Error19:07
boncoscan anybody guide me through troubleshooting this problem?19:08
annegentleboncos: gholt may be able to help, though I don't know if anyone has tested the Swauth system on Fedora.19:08
*** enigma has quit IRC19:09
notmynamebryguy: look in the proxy server logs (syslog)19:10
boncosannegentle, maybe you can try to help me ? i have no knowledge about python thing19:10
*** enigma has joined #openstack19:11
bryguynotmyname: I think you meant that for someone else. :)19:11
nelsonboncos: suggestion: make sure you have set the ownership on /srv/node to swift:swift19:11
*** joearnold has joined #openstack19:11
notmynamebryguy: sorry19:11
notmynameboncos: ^19:12
nelsonst is telling me "ValueError: No JSON object could be decoded" when I try to download a file. anybody seen this before?19:12
nelsonand it's giving me a traceback. That implies a bug in 'st', because it shouldn't be throwing exceptions.19:12
boncosnelson, /srv/1 is symlink to /mnt/sdb1/1 (and yes, /mnt/sdb1/1 is owned by swift:swift)19:13
nelsonthat *ought* to work.19:14
boncosnotmyname, http://fpaste.org/kmBt/ this is my log19:14
*** fabiand__ has left #openstack19:15
*** enigma has quit IRC19:15
*** lele_ has quit IRC19:15
boncosline 6-9 ... (header, '\r\n\t'.join(values))#012TypeError: sequence item 0: expected string, int found19:17
boncoswhat trigger that error ?19:18
boncos!ping gholt19:21
openstackpong19:21
notmynameheh19:21
boncosanybody have a clue ?19:24
notmynameboncos: ya, that's the root of the issue, but I'm in a meeting now. I was hoping gholt would help (I think he's talking to contractors at his house at the moment, though)19:26
boncosnotmyname, oh okey .. take your time ... i'm not in rush19:27
*** kbringard_ has joined #openstack19:29
*** kbringard has quit IRC19:29
*** kbringard_ is now known as kbringard19:29
openstackjenkinsProject swift build #257: SUCCESS in 30 sec: http://jenkins.openstack.org/job/swift/257/19:31
openstackjenkinsTarmac: Rename swift-stats-* to swift-dispersion-* to avoid confusion with log stats stuff19:31
*** mrmartin has quit IRC19:36
*** HouseAway is now known as AimanA19:36
*** katkee has joined #openstack19:37
*** woostert has quit IRC19:40
*** moreno has joined #openstack19:40
morenohi all19:40
morenocould anyone help me with a problem?19:41
*** imsplitbit has joined #openstack19:42
morenoI have some trobles when trying to communicate with an instance19:43
*** ctennis has quit IRC19:44
*** moreno has quit IRC19:47
*** katkee has quit IRC19:48
*** katkee has joined #openstack19:49
gholtboncos: I'm not sure if it's because of python 2.7; we currently only test with 2.6.519:50
*** katkee has quit IRC19:51
gholtboncos: That seems a little suspect, but the error doesn't quite fit. So I'm still guessing right now.19:52
boncosgholt, mmhh ... ok i'll try to install on centos19:52
boncosgholt, centos is using python 2.4 ... is that ok ?19:55
notmynameboncos: py2.4 won't work19:55
boncosuuhh ..19:56
notmynamefor example, we use context managers (the with statement) a lot19:56
gholtDo you hate Ubuntu 10.04 LTS that much? ;P19:56
*** lorin1 has left #openstack19:56
boncosi'm not familiar with ubuntu19:56
*** lorin1 has joined #openstack19:57
boncosthat's it19:57
gholtJoking, btw, these things /should/ work. I'm going to do some experiments and see if I can figure what's up.19:57
boncosnot hate it19:57
boncos:19:57
boncos:)19:57
gholtOne option you can try, if you're a coder-type, is changing the putheader calls in swift/common/bufferedhttp.py to use str(value) instead of just value.19:58
*** katkee has joined #openstack19:58
gholtBut I just don't know why that'd affect you and no one else yet.19:58
*** brd_from_italy has joined #openstack20:00
boncosi'm not python coder, but i can program a bit20:01
*** anotherjesse has quit IRC20:01
*** keny has joined #openstack20:04
kenyhi everyone20:04
*** amccabe has quit IRC20:05
boncosgholt, i already changed bufferedhttp.py ... still error20:06
gholtAnd you restarted all the services after? That's weird. How would it get an int if you explicitly told it str(x)...20:06
boncosbtw, i changed this bufferedhttp.py (bufferedhttp.pyc <-- what is this file ?)20:06
boncosow, i should restart the service ?20:07
gholtThat's the auto-updated "compiled" version. Python takes care of that for you.20:07
boncosok .. i'll restart20:07
gholtOh, yeah, restart the proxy at least: swift-init proxy restart20:07
kenyI am trying to run nova on a system with xen. I have tested that xen runs fine (can create VM images, run them fine, etc). When I try to run one on openstack, spawn fails and machine is shutdown.20:08
kenyI have an extract of nova-compute.log http://pastebin.com/WgVD1AMi20:08
boncosgholt, ah... no error after i change bufferedhttp.py and restart the proxy20:09
*** Kronick has joined #openstack20:09
*** RJD22|away is now known as RJD2220:09
kenyalso, my nova.conf: http://pastebin.com/mSsT5Xkq20:09
gholtInteresting. I'm actually completely unsure why it doesn't blow up with python 2.6. But I will put in a bug and fix so it works with 2.7 (and keeps working with 2.6).20:10
vishykeny: you need -nouse_cow_images20:10
vishy-- that is20:10
*** Kronick has left #openstack20:10
kenyOne thing that caught my eye is that in the first log qemu is launched20:10
vishyalthough that doesn't look like the particular bug you are hitting20:10
kenyvishy thanks for the tip, I will include that20:10
vishycow images don't work with xen/libvirt20:10
*** dobber_ has joined #openstack20:10
boncosgholt, but thereis another error ... i'll paste it to pastebin20:10
*** infinite-scale has quit IRC20:11
*** llang629 has joined #openstack20:11
boncosgholt, http://fpaste.org/5G8T/20:12
gholtboncos: Okay, that's from the other services I didn't have you restart, lol.20:13
gholtboncos: So try a swift-init main restart20:14
boncosok20:14
*** watcher has joined #openstack20:15
*** johnpur has quit IRC20:15
boncosah you right .. no error then :D20:15
boncosthanks a lot gholt20:15
*** llang629_ has joined #openstack20:16
gholtboncos: Ah, I figured it out. Python 2.7 refactored their httplib to use one class hierarchy; in 2.6 there were two.20:16
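gholt's diagnosis can be reproduced without Swift: httplib joins multi-value headers with `'\r\n\t'.join(values)`, which raises exactly the TypeError from boncos's log whenever a header value is an int. Casting each value with str() (the bufferedhttp.py workaround suggested earlier) sidesteps it. An illustrative sketch, not the actual httplib code:

```python
def join_header_values(values):
    # Python 2.7's httplib does the equivalent of '\r\n\t'.join(values),
    # which fails with "TypeError: sequence item 0: expected string,
    # int found" if a value (say, a Content-Length) is an int.
    # str()-ing each value first avoids the crash.
    return '\r\n\t'.join(str(v) for v in values)

print(join_header_values([100]))         # -> 100
print(join_header_values(['a', 'b']))
```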
* gholt runs off to chat with home contractors, back in a bit.20:16
tr3buchetannegentle: i'm changing the default behavior of a flag, is there some specific place that is documented20:16
uvirtbotNew bug: #781878 in nova "Error during report_driver_status(): host_state_interval" [Undecided,New] https://launchpad.net/bugs/78187820:16
*** llang629_ has left #openstack20:17
*** llang629 has quit IRC20:18
kbringarddprince: I'm seeing that as well20:18
kbringardFYI20:18
kenyusing --nouse_cow_images appears to help, since I'm getting a new error now =D http://pastebin.com/fAzsnDmh (googled but found nothing :/ )20:19
*** CloudChris has joined #openstack20:21
*** llang629 has joined #openstack20:21
dprincekbringard: roger that. Waiting for sandywalsh/sirp to weigh in. But I think it came in w/ one of the distributed scheduler merges.20:21
kbringardmakes sense20:22
*** llang629 has quit IRC20:25
*** CloudChris has left #openstack20:29
*** markwash has quit IRC20:30
*** lorin1 has left #openstack20:32
*** RJD22 is now known as RJD22|away20:35
*** piken is now known as piken_afk20:35
*** watcher has quit IRC20:36
annegentletr3buchet: the flag docs are auto-generated from docstrings, but I also update them manually in openstack-manuals. Which flag and what's the change?20:37
*** photron_ has quit IRC20:37
*** dprince has quit IRC20:39
*** mgoldmann has quit IRC20:39
*** boncos has quit IRC20:41
*** markwash has joined #openstack20:43
*** kbringard has quit IRC20:46
*** kbringard has joined #openstack20:46
*** mgoldmann has joined #openstack20:46
*** katkee has quit IRC20:48
notmynamenelson: joearnold: just merged a swift change that puts each transaction id in a response header. should really help with debugging issues20:54
nelsonon the file coming back?20:56
*** Shentonfreude has quit IRC20:56
nelsonyeah, will need to write a program that grovels through the log file given the headers.20:56
nelsonalthough it's uuid enough that grep alone should do it.20:56
notmynameyup20:57
*** keny has quit IRC20:58
notmyname2 advantages: 1) you can easily find the log entries associated with the request 2) if a response doesn't have an X-Trans-ID header, it's not from swift20:58
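A sketch of the grep workflow nelson describes: pull the X-Trans-ID out of a response's headers, then search syslog on the nodes for it. The parsing helper is hypothetical; the header name is the one just merged:

```python
def transaction_id(raw_headers):
    """Find X-Trans-ID in a raw response-header blob (case-insensitive)."""
    for line in raw_headers.splitlines():
        name, _, value = line.partition(':')
        if name.strip().lower() == 'x-trans-id':
            return value.strip()
    return None  # no X-Trans-ID: the response didn't come from swift

hdrs = 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\nX-Trans-ID: tx1234abcd'
print(transaction_id(hdrs))  # -> tx1234abcd
# then, on the proxy/storage nodes:  grep tx1234abcd /var/log/syslog
```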
*** infinite-scale has joined #openstack20:59
*** mgoldmann_ has joined #openstack21:01
openstackjenkinsProject swift build #258: SUCCESS in 35 sec: http://jenkins.openstack.org/job/swift/258/21:01
openstackjenkinsTarmac: added transaction id header to every response21:01
*** AlexNeef has joined #openstack21:01
tr3buchetannegentle: the flag is vlan_interface and I've changed the default from 'eth0' to None21:02
nelsonnotmyname: yup21:02
*** mgoldmann has quit IRC21:03
tr3buchetannegentle: updated the docstring a bit21:03
*** katkee has joined #openstack21:07
annegentletr3buchet: great, thanks. So when I install nova, and use VLAN, do I now also need to change my nova.conf and add --vlan_interface with my known interface value?21:09
*** lionel has quit IRC21:10
*** llang629 has joined #openstack21:10
tr3buchetannegentle: there are two options. A) just as you've said. B) upon creating networks, you also have to pass in the bridge_interface to use21:10
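For option A, the nova.conf change would look like the fragment below. This is a sketch: eth0 was merely the old default, so use whatever interface your VLAN trunk actually sits on:

```
# nova.conf flagfile fragment -- now required in VLAN mode,
# since vlan_interface no longer defaults to eth0:
--vlan_interface=eth0
```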
*** RJD22|away is now known as RJD2221:11
*** katkee has quit IRC21:13
*** lionel has joined #openstack21:14
*** imsplitbit has quit IRC21:16
*** j05h has quit IRC21:17
annegentletr3buchet: ok, that'll affect some instructions too, I'll get on it for diablo docs21:18
annegentletr3buchet: thanks for the heads-up!21:18
kbringardannegentle: do you know if there is a list of failure scenarios along with recovery options somewhere?21:18
kbringardor recovery steps, maybe I should say21:18
annegentlekbringard: hm, not that I know of... for Compute only?21:18
nelsonmwahahaha, fetching files again, phwew.21:19
kbringardany and all21:19
annegentlekbringard: sounds like a great topic - start writing in the wiki possibly?21:19
kbringardannegentle: yessm, just wanted to make sure I wasn't duplicating efforts21:19
nelsongholt: so this twitter works again: https://twitter.com/#!/russnelson/status/6805012910559232021:19
notmynamenelson: it's so magical!21:20
nelsonyeah, damnit, isn't it??21:21
*** markwash1 has joined #openstack21:22
tr3buchetannegentle: sure no problem. we should probably also chat some about multi-nic soon21:23
*** j05h has joined #openstack21:24
*** markwash has quit IRC21:25
*** llang629 has quit IRC21:26
*** llang629 has joined #openstack21:28
*** llang629 has left #openstack21:28
vishytr3buchet: have you had a chance to check out networking branches yet?21:29
*** rcc has quit IRC21:30
*** brd_from_italy has quit IRC21:34
*** kbringard has quit IRC21:40
*** j05h has quit IRC21:41
*** infinite-scale has quit IRC21:44
*** grapex has left #openstack21:47
uvirtbotNew bug: #781909 in swift "bufferedhttp should str header values to be Python 2.7 compatible" [Undecided,New] https://launchpad.net/bugs/78190921:51
*** mcclurmc_ has quit IRC21:53
*** clauden_ has quit IRC21:56
*** mcclurmc_ has joined #openstack22:01
*** galthaus has quit IRC22:03
*** gondoi has quit IRC22:06
*** dobber_ has quit IRC22:10
*** AXiS_SharK has joined #openstack22:10
AXiS_SharKdoes anyone have a comparison between openstack and vmware's vcloud director?22:11
AXiS_SharKI'm looking for something like … how openstack does multi-tenancy, organizational hierarchies, service catalog, chargeback, vApps, user experience, etc22:12
*** j05h has joined #openstack22:18
AXiS_SharK!list22:18
openstackAXiS_SharK: Admin, Channel, ChannelLogger, Config, MeetBot, Misc, Owner, Services, and User22:18
*** patcoll has quit IRC22:21
*** matiu has joined #openstack22:22
*** rnirmal has quit IRC22:27
*** jaypipes has quit IRC22:39
*** agarwalla has quit IRC22:40
*** lionel has quit IRC22:40
*** pguth66 has quit IRC22:40
*** Dweezahr has quit IRC22:40
*** niksnut has quit IRC22:40
*** Pathin has quit IRC22:40
*** andy-hk has quit IRC22:40
*** jamiec has quit IRC22:40
*** aryan has quit IRC22:40
*** arun has quit IRC22:40
*** clayg has quit IRC22:40
*** pquerna has quit IRC22:40
*** dh has quit IRC22:40
*** cdbs has quit IRC22:40
*** romans has quit IRC22:40
*** ctennis has joined #openstack22:42
AlexNeefwho knows about keystone project?22:43
*** lionel has joined #openstack22:45
*** pguth66 has joined #openstack22:45
*** Dweezahr has joined #openstack22:45
*** niksnut has joined #openstack22:45
*** andy-hk has joined #openstack22:45
*** jamiec has joined #openstack22:45
*** aryan has joined #openstack22:45
*** arun has joined #openstack22:45
*** clayg has joined #openstack22:45
*** pquerna has joined #openstack22:45
*** dh has joined #openstack22:45
*** cdbs has joined #openstack22:45
*** romans has joined #openstack22:45
*** Pathin has joined #openstack22:46
*** Pathin_ has joined #openstack22:47
gholtAlexNeef: I think KnightHacker is the only one in channel.22:47
AlexNeefknightHacker let me know if you have a minute.22:47
*** cp16net has quit IRC22:49
*** openpercept_ has quit IRC22:53
*** openpercept1 has joined #openstack22:55
*** mgoldmann_ has quit IRC22:56
*** derrick_ has joined #openstack22:58
*** Dumfries has quit IRC23:01
*** Eyk has quit IRC23:01
*** Dumfries has joined #openstack23:01
*** miclorb has joined #openstack23:05
*** llang629 has joined #openstack23:09
*** llang629 has left #openstack23:09
*** dragondm has quit IRC23:11
*** dysinger has joined #openstack23:16
*** jguerrero__ has joined #openstack23:21
joearnoldnotmyname: Yes. That would be helpful. Is the intent to always turn on X-Trans-ID or just for debugging?23:21
*** tblamer has quit IRC23:25
*** jkoelker has quit IRC23:29
notmynamejoearnold: it's always on. there is no cost to it (a few extra bytes in headers, but no security issues)23:30
notmynamejoearnold: it's part of the catch_errors middleware23:30
notmynameso therefore the intent is to always have it on23:31
joearnoldnotmyname: Makes sense. I don't see the header size byte count being an issue. (We've already increased the token size by about that many bytes in one of our deployments. )23:34
*** obino has quit IRC23:46
*** mszilagyi has quit IRC23:48
*** obino has joined #openstack23:48
*** BK_man has quit IRC23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!