Wednesday, 2011-08-10

*** vladimir3p has quit IRC00:06
*** deirdre_ has joined #openstack00:07
*** devcamcar_ has quit IRC00:07
*** mszilagyi has quit IRC00:08
*** devcamcar_ has joined #openstack00:08
*** Bobo has quit IRC00:10
*** devcamcar_ has quit IRC00:10
*** devcamcar_ has joined #openstack00:11
*** msinhore has joined #openstack00:15
*** devcamcar_ has quit IRC00:15
*** devcamcar_ has joined #openstack00:17
*** infinite-scale has quit IRC00:23
*** msinhore has quit IRC00:24
*** msinhore has joined #openstack00:24
*** nerdstein has joined #openstack00:29
*** ton_katsu has joined #openstack00:30
*** heckj has quit IRC00:31
*** anotherjesse has quit IRC00:33
*** kashyap_ has quit IRC00:34
*** devcamcar_ has quit IRC00:34
*** deshantm_laptop has joined #openstack00:34
*** devcamcar_ has joined #openstack00:35
*** bengrue has quit IRC00:40
*** devcamcar_ has quit IRC00:40
*** devcamcar_ has joined #openstack00:41
*** Ephur has joined #openstack00:42
*** bengrue has joined #openstack00:42
*** mgius has quit IRC00:44
*** Ephur has quit IRC00:45
*** nati has joined #openstack00:47
*** hisaharu_ has joined #openstack00:48
*** nati_ has quit IRC00:48
*** obino has quit IRC00:49
*** hisaharu has quit IRC00:51
*** kashyap_ has joined #openstack00:52
*** nostromo56 has quit IRC00:54
*** devcamcar_ has quit IRC00:54
*** aliguori has joined #openstack00:54
*** infinite-scale has joined #openstack00:55
*** devcamcar_ has joined #openstack00:55
*** Pentheus_ has joined #openstack00:57
*** ton_katsu has quit IRC00:59
*** infinite-scale has quit IRC00:59
*** ton_katsu has joined #openstack01:00
*** dragondm has quit IRC01:01
*** nostromo56 has joined #openstack01:02
*** clauden has quit IRC01:03
*** anotherjesse has joined #openstack01:05
*** deirdre_ has quit IRC01:06
*** jsalisbury has quit IRC01:06
*** maplebed_ has joined #openstack01:07
*** shentonfreude has quit IRC01:08
*** devcamcar_ has quit IRC01:08
*** maplebed_ has quit IRC01:08
*** fcarsten has quit IRC01:09
*** devcamcar_ has joined #openstack01:09
*** maplebed has quit IRC01:10
*** jdurgin has quit IRC01:18
*** devcamcar_ has quit IRC01:18
*** devcamcar_ has joined #openstack01:19
*** ccc11 has joined #openstack01:19
*** msinhore has quit IRC01:20
*** hisaharu_ has quit IRC01:21
*** nati has quit IRC01:22
*** miclorb_ has quit IRC01:23
*** devcamcar_ has quit IRC01:23
*** miclorb_ has joined #openstack01:23
*** devcamcar_ has joined #openstack01:24
*** nerdstein has quit IRC01:24
*** osier has joined #openstack01:25
*** jeffjapan has joined #openstack01:27
*** devcamcar_ has quit IRC01:27
*** devcamcar_ has joined #openstack01:28
*** anotherjesse has quit IRC01:28
*** osier has quit IRC01:30
*** hadrian_ has joined #openstack01:32
*** stewart has quit IRC01:33
*** hadrian has quit IRC01:36
*** hadrian_ is now known as hadrian01:36
*** devcamcar_ has quit IRC01:36
*** devcamcar_ has joined #openstack01:36
*** ewindisch has quit IRC01:37
*** aliguori has quit IRC01:42
*** devcamcar_ has quit IRC01:42
*** devcamcar_ has joined #openstack01:42
*** pguth66_ has joined #openstack01:47
*** devcamcar_ has quit IRC01:47
*** cg01 has quit IRC01:48
*** devcamcar_ has joined #openstack01:48
*** mattray has joined #openstack01:50
*** devcamcar_ has quit IRC01:52
*** devcamcar_ has joined #openstack01:53
*** nostromo56 has quit IRC01:53
*** devcamcar_ has quit IRC01:56
*** devcamcar_ has joined #openstack01:58
*** bengrue has quit IRC01:59
*** cg01 has joined #openstack02:02
*** marrusl has joined #openstack02:05
*** devcamcar_ has quit IRC02:05
*** marrusl_ has joined #openstack02:06
*** devcamcar_ has joined #openstack02:06
*** dragondm has joined #openstack02:09
*** devcamcar_ has quit IRC02:09
*** devcamcar_ has joined #openstack02:10
*** mdomsch has joined #openstack02:16
*** devcamcar_ has quit IRC02:16
*** devcamcar_ has joined #openstack02:17
*** msinhore has joined #openstack02:21
*** nostromo56 has joined #openstack02:24
*** tsuzuki has joined #openstack02:24
mattray: any good suggestions for how to send the logs from nova and swift to a centralized server for processing? The assumption is that syslog won't scale.  02:24
notmyname: mattray: upload the logs to the swift cluster itself and download them on your central processing server  02:26
*** devcamcar_ has quit IRC02:26
mattray: notmyname: interesting idea  02:26
*** devcamcar_ has joined #openstack02:27
notmyname: mattray: more detail: use syslog to rotate logs every hour and send the hourly files to the swift cluster  02:27
*** mrrk has quit IRC02:27
mattray: yeah, I was just thinking about that  02:27
mattray: I'll bring that up with the customer, see how frequently they need the logs  02:27
*** marrusl has quit IRC02:27
notmyname: mattray: that's essentially how we do all the log processing for cloud files (many GB/hour of compressed logs)  02:27
mattray: this will be similar scale ;)  02:28
*** jj0hns0n has joined #openstack02:29
notmyname: mattray: although we are currently rewriting the actual processing piece, the basics, including many useful utilities (like swift-log-uploader), can be found in the slogging package (http://github.com/notmyname/slogging)  02:29
notmyname: I would suggest that you can find _many_ improvements :-)  02:29
*** maplebed has joined #openstack02:30
mattray: thanks a ton, remind me to buy you a drink in Boston, you just saved some serious time  02:30
mattray: or conversely, if you're in Austin I'm around  02:31
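The upload step notmyname describes above needs nothing more than the plain Swift object API. Below is a minimal sketch (Python), not the slogging swift-log-uploader itself; the proxy host, account path, auth token, and container name are placeholders.

```python
# Sketch: PUT one hourly-rotated log file into a Swift container over the
# plain object API. Host, account path, token and container are placeholders.
import httplib
import os

STORAGE_HOST = 'swift.example.com'         # hypothetical proxy host
STORAGE_PATH = '/v1/AUTH_logs'             # hypothetical account path
AUTH_TOKEN = 'replace-with-a-real-token'   # e.g. obtained from your auth system
CONTAINER = 'proxy_access_logs'


def upload_log(filename):
    """Upload a single rotated log file as an object named after the file."""
    with open(filename, 'rb') as f:
        body = f.read()
    object_path = '%s/%s/%s' % (STORAGE_PATH, CONTAINER,
                                os.path.basename(filename))
    conn = httplib.HTTPConnection(STORAGE_HOST, 8080)
    conn.request('PUT', object_path, body,
                 {'X-Auth-Token': AUTH_TOKEN,
                  'Content-Length': str(len(body))})
    resp = conn.getresponse()
    resp.read()
    if resp.status != 201:                 # Swift answers 201 Created
        raise RuntimeError('upload failed with status %d' % resp.status)


if __name__ == '__main__':
    # e.g. driven by an hourly cron job right after syslog rotates the file
    upload_log('/var/log/swift/proxy_access.log.2011081002.gz')
```

In practice something like this would run from cron on each node once per hour, with the central processing box pulling the objects back down on its own schedule.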
uvirtbotNew bug: #823676 in nova "python-nova installs /usr/share/pyshared/run_tests.py" [Undecided,New] https://launchpad.net/bugs/82367602:32
*** ewindisch has joined #openstack02:35
*** GeoDud has quit IRC02:42
*** devcamcar_ has quit IRC02:42
*** devcamcar_ has joined #openstack02:43
*** osier has joined #openstack02:44
*** devcamcar_ has quit IRC02:44
notmynamemattray: glad I could help :-)02:45
*** devcamcar_ has joined #openstack02:46
*** stewart has joined #openstack02:48
*** nostromo56 has quit IRC02:50
*** mattray has quit IRC02:52
*** obino has joined #openstack02:53
*** dendro-afk is now known as dendrobates02:54
*** mattray has joined #openstack03:00
*** devcamcar_ has quit IRC03:00
*** mattray has quit IRC03:00
*** devcamcar_ has joined #openstack03:02
*** hadrian has quit IRC03:06
*** devcamcar_ has quit IRC03:06
*** devcamcar_ has joined #openstack03:08
*** ziyadb has joined #openstack03:09
*** devcamcar_ has quit IRC03:09
*** devcamcar_ has joined #openstack03:10
*** secbitchris has joined #openstack03:10
*** dendrobates is now known as dendro-afk03:11
*** troytoman-away is now known as troytoman03:14
*** mrrk has joined #openstack03:18
*** devcamcar_ has quit IRC03:18
*** devcamcar_ has joined #openstack03:18
*** maplebed has quit IRC03:18
*** ahs3 has quit IRC03:19
*** jovy has quit IRC03:21
*** ahs3 has joined #openstack03:24
*** devcamcar_ has quit IRC03:24
*** devcamcar_ has joined #openstack03:24
*** martine has quit IRC03:27
*** jakedahn has quit IRC03:28
*** viveksnv has joined #openstack03:29
*** devcamcar_ has quit IRC03:29
*** devcamcar_ has joined #openstack03:31
*** johnmark has left #openstack03:31
*** ahs3 has quit IRC03:43
*** devcamcar_ has quit IRC03:43
*** PeteDaGuru has left #openstack03:44
*** DrHouseMD is now known as HouseAway03:45
*** devcamcar_ has joined #openstack03:45
*** ahs3 has joined #openstack03:48
*** devcamcar_ has quit IRC03:48
*** devcamcar_ has joined #openstack03:49
*** rms-ict has quit IRC03:50
*** ahs3 has quit IRC03:53
*** devcamcar_ has quit IRC03:53
*** devcamcar_ has joined #openstack03:54
*** cg01 has quit IRC03:54
*** ahs3 has joined #openstack03:58
*** shentonfreude has joined #openstack04:00
*** ahs3 has quit IRC04:05
*** viveksnv has left #openstack04:07
*** devcamcar_ has quit IRC04:07
*** msinhore has quit IRC04:07
*** devcamcar_ has joined #openstack04:08
*** ahs3 has joined #openstack04:10
*** devcamcar_ has quit IRC04:10
*** devcamcar_ has joined #openstack04:11
*** anotherjesse has joined #openstack04:15
*** devcamcar_ has quit IRC04:15
*** devcamcar_ has joined #openstack04:17
*** jakedahn has joined #openstack04:24
*** devcamcar_ has quit IRC04:24
*** jakedahn has quit IRC04:24
*** jakedahn has joined #openstack04:24
*** devcamcar_ has joined #openstack04:25
*** devcamcar_ has quit IRC04:28
*** ziyadb has quit IRC04:29
*** anotherjesse has quit IRC04:30
*** ziyadb has joined #openstack04:31
*** f4m8_ is now known as f4m804:36
*** rchavik has joined #openstack04:42
*** deshantm_laptop has quit IRC04:50
*** wariola has joined #openstack04:51
*** kashyap_ has quit IRC04:53
*** deshantm_laptop has joined #openstack04:53
*** stewart has quit IRC04:54
*** BK_man has quit IRC05:05
*** BK_man has joined #openstack05:05
*** deshantm_laptop has quit IRC05:06
*** openpercept_ has joined #openstack05:09
*** cory___ has quit IRC05:11
*** cory___ has joined #openstack05:11
*** mgoldmann has joined #openstack05:18
*** anotherjesse has joined #openstack05:24
*** mdomsch has quit IRC05:26
*** mrrk has quit IRC05:26
*** nmistry has joined #openstack05:32
*** reed has quit IRC05:42
*** HugoKuo__ has quit IRC05:42
*** mrrk has joined #openstack05:42
*** rocambole has joined #openstack05:43
*** mrrk has quit IRC05:43
*** viveksnv has joined #openstack05:46
*** nmistry has quit IRC06:00
*** nickon has joined #openstack06:00
*** nmistry has joined #openstack06:08
*** ccc11 has quit IRC06:08
*** ccc11 has joined #openstack06:12
*** kashyap has joined #openstack06:13
*** guigui1 has joined #openstack06:17
*** guigui1 has left #openstack06:17
viveksnvhi all06:25
viveksnvi am getting this error when using VLAN model - "CRITICAL nova [-] 'module' object has no attribute 'init_host' "06:25
viveksnvany thoughts ?06:26
*** nati has joined #openstack06:26
*** anotherjesse has quit IRC06:26
*** nmistry has quit IRC06:28
*** dragondm has quit IRC06:28
*** mcclurmc has joined #openstack06:41
*** rms-ict has joined #openstack06:45
*** wariola has quit IRC06:46
*** ziyadb has quit IRC06:47
*** tsuzuki has quit IRC06:52
*** tsuzuki has joined #openstack06:52
*** rms-ict has quit IRC06:52
*** ziyadb has joined #openstack06:54
*** rms-ict has joined #openstack07:00
*** ejat has joined #openstack07:02
*** ccustine has quit IRC07:03
*** siwos has joined #openstack07:07
*** FallenPegasus has joined #openstack07:07
*** ccc11 has quit IRC07:13
*** ccc11 has joined #openstack07:14
*** javiF has joined #openstack07:27
*** yamahata_dt has quit IRC07:39
*** dhanuxe has joined #openstack07:44
dhanuxehello07:45
*** dobber has joined #openstack07:45
*** nicolas2b has joined #openstack07:46
*** CloudAche84 has joined #openstack07:47
dhanuxeanybody know ? i've got error below07:50
dhanuxeroot@c1:~# euca-allocate-address07:50
dhanuxeUnknownError: An unknown error has occurred. Please try your request again.07:50
CloudAche84can you run other euca commands?07:52
CloudAche84eg euca-describe-instances07:52
*** katkee has joined #openstack07:53
*** ccc11 has quit IRC07:54
dhanuxek, wait07:55
dhanuxesorry flooding07:55
dhanuxeroot@c1:~# euca-allocate-address07:55
dhanuxeUnknownError: An unknown error has occurred. Please try your request again.07:55
dhanuxeroot@c1:~# euca-describe-instances07:55
dhanuxeRESERVATIONr-4o0cclacmyprojectdefault07:55
dhanuxeINSTANCEi-00000001ami-29266f8c10.0.0.310.0.0.3runningopenstack (myproject, c1)0m1.tiny2011-08-10T07:05:09Znova07:55
dhanuxeRESERVATIONr-au9ituwgmyprojectdefault07:55
dhanuxeINSTANCEi-00000002ami-08c917b310.0.0.410.0.0.4runningopenstack (myproject, c1)0m1.tiny2011-08-10T07:16:37Znova07:55
dhanuxeroot@c1:~#07:55
*** kidrock has joined #openstack07:56
dhanuxeany idea ?07:56
CloudAche84are you running nova-network on the same server as you are issuing the command?07:57
dhanuxeyep..07:59
dhanuxei follow the instructions from here http://j.mp/nvnSEp07:59
*** ccc11 has joined #openstack08:02
CloudAche84what does nova-manage floating list show?08:02
dhanuxehere is http://pastebin.com/9YHj4LPr08:04
CloudAche84run the command again and pastebin tail /var/log/nova/nova-network.log & nova-api.log08:07
*** ccc11 has quit IRC08:12
dhanuxenothing error..08:12
dhanuxebut when i run euca-describe-instances08:12
dhanuxei've got error08:12
dhanuxehttp://pastebin.com/UgVbn4UN08:12
*** ccc11 has joined #openstack08:13
dhanuxesorry08:13
dhanuxei means08:13
dhanuxeroot@c1:~# euca-allocate-address08:13
*** ccc11 has quit IRC08:13
*** ccc11 has joined #openstack08:14
*** viveksnv has left #openstack08:17
CloudAche84ok08:17
CloudAche84just looking08:17
*** MarcMorata has joined #openstack08:17
*** bradhedlund has quit IRC08:18
siwoseuca-allocate-address <your_nova_api_hostname>08:19
siwosthat's it - I ran into the same problem yesterday08:19
CloudAche84ooh08:19
CloudAche84that must be a bug then08:20
siwosno - not really08:20
CloudAche84well it doesnt do that in my version08:20
siwoswait a minute08:20
CloudAche84why would you need to tell it the api host?08:20
CloudAche84api server set in nova.conf08:20
*** zul has joined #openstack08:21
*** infinite-scale has joined #openstack08:21
dhanuxeno api hostname08:23
CloudAche84?08:23
dhanuxe: root@c1:~# cat /etc/nova/nova.conf  08:23
dhanuxe: --daemonize=1  08:23
dhanuxe: --dhcpbridge_flagfile=/etc/nova/nova.conf  08:23
dhanuxe: --dhcpbridge=/usr/bin/nova-dhcpbridge  08:23
dhanuxe: --logdir=/var/log/nova  08:23
siwos: so you do it like this (it worked for me - I had the same error as you)  08:23
dhanuxe: --state_path=/var/lib/nova  08:23
dhanuxe: --verbose  08:23
dhanuxe: --libvirt_type=qemu  08:23
dhanuxe: --sql_connection=mysql://root:nova@172.241.0.101/nova  08:23
dhanuxe: --s3_host=172.241.0.101  08:23
dhanuxe: --rabbit_host=172.241.0.101  08:23
dhanuxe: --ec2_host=172.241.0.101  08:23
dhanuxe: --ec2_url=http://172.241.0.101:8773/services/Cloud  08:23
dhanuxe: --fixed_range=10.0.0.0/8  08:23
dhanuxe: --network_size=64  08:23
dhanuxe: --num_networks=1  08:23
dhanuxe: --FAKE_subdomain=ec2  08:23
dhanuxe: --public_interface=eth0  08:23
dhanuxe: --state_path=/var/lib/nova  08:24
dhanuxe: --lock_path=/var/lock/nova  08:24
dhanuxe: root@c1:~#  08:24
siwos: nova-manage floating create <ip_of_nova_headnode_server> 192.168.21.2/31  08:24
siwos: then  08:24
CloudAche84: echo $NOVA_URL  08:24
CloudAche84: what does that give you?  08:24
dhanuxe: root@c1:~# echo $NOVA_URL  08:24
dhanuxe: http://172.241.0.101:8774/v1.0/  08:24
dhanuxe: root@c1:~#  08:24
CloudAche84: ah  08:24
CloudAche84: ec2 url  08:24
dhanuxe: ?  08:25
siwos: euca-allocate-address 192.168.21.2  08:25
CloudAche84: This is the API: -ec2_url=http://172.241.0.101:8773/services/Cloud  08:25
siwos: and then euca-associate-address  08:25
*** f4m8 has left #openstack08:25
siwosthis cleared the UnknownError08:25
CloudAche84he already has ip's created siwos08:25
CloudAche84as shown in nova-manage floating list (he pasted earlier)08:25
siwosyes - but when you create them without giving the api server ip, then euca-associate-address fails08:26
siwoswith UnknownError08:26
siwosso he should delete the address with nova-manage floating delete08:26
siwosand recreate it08:26
CloudAche84ah ok08:26
siwoshttp://docs.openstack.org/cactus/openstack-compute/admin/content/associating-public-ip.html08:26
CloudAche84definitely worth a try08:26
siwosbut he should give his api server address as the argument to nova-manage floating create08:27
siwosit's marked as "my-hostname" in the URL doc I pasted08:27
*** MarcMorata has quit IRC08:27
CloudAche84I created mine with hostname08:28
siwosI fought with the same error for 3 hours yesterday and finally found the answer somewhere in the Launchpad ;-)08:28
dhanuxek, wait..08:28
siwosthe hostname should be the ip or hostname of your cluster headnode (the node where api, network controller and scheduler are running)08:28
* dhanuxe afk08:28
CloudAche84: so to be clear:  08:29
CloudAche84: nova-manage floating delete 172.241.0/32  08:29
CloudAche84: nova-manage floating create 172.241.0.101 172.241.0/32  08:29
CloudAche84: euca-allocate-address  08:30
*** infinite-scale has quit IRC08:30
siwosyep08:30
*** jeffjapan has quit IRC08:32
CloudAche84dhanuxe: are you trying it?08:32
*** nati has quit IRC08:35
CloudAche84: he's gone..  08:35
siwos: it probably worked...  08:35
siwos: ;-)  08:35
CloudAche84: there's gratitude lol  08:35
CloudAche84: did your hostname resolve to the api address when you tried it yesterday  08:36
CloudAche84: ?  08:36
siwos: yes  08:36
siwos: it's not stated clearly in the doc  08:37
siwos: the doc should say clearly that "my hostname" is the hostname of the cluster headnode  08:37
CloudAche84: it looked like he had an all-in-one setup though, so it would have resolved to the API address anyway  08:38
siwos: when I first read the doc I thought it would be the hostname of a vm  08:39
CloudAche84: in the source code it looks like Vish is planning to add this functionality to the nova-manage command anyway rather than with euca-  08:39
CloudAche84: so the behaviour may change  08:39
*** alperkanat has joined #openstack08:40
dhanuxenot yet08:46
dhanuxewait08:46
CloudAche84ah, welcome back08:46
CloudAche84:P08:46
alperkanathey there.. is it possible to directly stream from storage nodes but not the proxy? like requesting from proxy but getting response from the storage node and not through proxy server08:48
*** miclorb_ has quit IRC08:50
dhanuxe: what does range mean?  08:51
dhanuxe: root@c1:~# nova-manage floating delete 172.241.0/32  08:51
dhanuxe: Command failed, please check log for more info  08:51
dhanuxe: root@c1:~# nova-manage floating delete 172.241.0.0/32  08:51
dhanuxe: Command failed, please check log for more info  08:51
dhanuxe: root@c1:~# nova-manage floating delete 172.241.0.0/25  08:51
dhanuxe: Command failed, please check log for more info  08:51
dhanuxe: root@c1:~# nova-manage floating delete 172.241.0/25  08:51
dhanuxe: Command failed, please check log for more info  08:51
dhanuxe: root@c1:~# nova-manage floating list  08:51
dhanuxe: here is the floating list  08:52
dhanuxe: http://pastebin.com/ZM8NHgNy  08:52
*** mat_angin has joined #openstack08:55
*** saa7_go has joined #openstack08:56
siwos: try nova-manage floating delete 172.241.0.0/24  08:57
dhanuxe: failed  08:58
dhanuxe: root@c1:~# nova-manage floating delete 172.241.0.1/32  08:58
dhanuxe: root@c1:~# nova-manage floating delete 172.241.0.2/32  08:58
dhanuxe: root@c1:~# nova-manage floating delete 172.241.0.3/32  08:58
dhanuxe: root@c1:~# nova-manage floating list | head  08:58
dhanuxe: cloud1  172.241.0.4  None  08:58
dhanuxe: cloud1  172.241.0.5  None  08:58
dhanuxe: cloud1  172.241.0.6  None  08:58
dhanuxe: cloud1  172.241.0.7  None  08:58
dhanuxe: cloud1  172.241.0.8  None  08:58
dhanuxe: cloud1  172.241.0.9  None  08:58
dhanuxe: cloud1  172.241.0.10  None  08:58
dhanuxe: cloud1  172.241.0.11  None  08:58
dhanuxe: cloud1  172.241.0.12  None  08:58
dhanuxe: cloud1  172.241.0.13  None  08:58
*** saa7_go has left #openstack08:58
dhanuxe: root@c1:~# nova-manage floating delete 172.241.0.0/24  08:58
dhanuxe: Command failed, please check log for more info  08:58
dhanuxe: root@c1:~# nova-manage floating delete 172.241.0.0/32  08:58
dhanuxe: Command failed, please check log for more info  08:58
dhanuxe: root@c1:~#  08:58
dhanuxe: one by one ? lol  08:58
BuZZ-Tcould you please use a pasting service like http://paste.openstack.org/ , it doesn't flood the channel and people who want to help can read the code better09:00
dhanuxek, apologize..09:00
*** vernhart has joined #openstack09:01
*** yamahata has joined #openstack09:06
*** aztek has joined #openstack09:06
*** darraghb has joined #openstack09:08
CloudAche84:q09:12
CloudAche84oops sorry :D09:12
dhanuxenope.. still working..09:13
dhanuxedelete one by one using bash xD09:13
*** aztek has left #openstack09:13
CloudAche84use a for loop09:15
*** kidrock has quit IRC09:15
doude: Hi all, is anyone using the latest version of Glance with the cache mechanism?  09:21
CloudAche84: dhanuxe: for i in {1..5};do echo nova-manage floating delete 172.241.0.$i ;done;  09:22
CloudAche84: where 1..5 is the range of IPs  09:22
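As pasted, the loop only echoes the delete commands rather than running them (perhaps intentionally, as a preview); dropping the echo executes them. A rough Python equivalent that actually performs the per-address deletes, assuming nova-manage is installed and on the PATH and mirroring the loop above:

```python
# Delete a range of floating addresses one at a time by shelling out to
# nova-manage (assumes nova-manage is installed and on the PATH).
import subprocess


def delete_floating_range(prefix, start, end):
    for i in range(start, end + 1):
        address = '%s.%d' % (prefix, i)
        # equivalent to: nova-manage floating delete <address>
        subprocess.check_call(['nova-manage', 'floating', 'delete', address])


if __name__ == '__main__':
    delete_floating_range('172.241.0', 1, 5)
```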
dhanuxei've done09:23
dhanuxestill failed09:23
dhanuxehttp://paste.openstack.org/show/2136/09:23
doude: I updated Glance to that version (rev 967), but I didn't enable the image cache: image_cache_enabled = False in the API config file  09:23
dhanuxenova-network.log http://pastebin.com/UgVbn4UN09:24
*** dobber has quit IRC09:24
doude: But the cache directory still contains cache files and takes up a lot of space on my disk  09:25
*** dobber has joined #openstack09:25
dhanuxe: ahaaaa  09:26
dhanuxe: root@c1:~# nova-manage floating create c1 172.241.0/32  09:26
dhanuxe: root@c1:~# euca-allocate-address  09:26
dhanuxe: ADDRESS  172.241.0.0  09:26
dhanuxe: root@c1:~#  09:26
dhanuxe: opss  09:26
dhanuxe: thanks guys CloudAche84 siwos  09:26
CloudAche84: so hostname worked  09:26
dhanuxe: yup  09:26
CloudAche84: what did you put in the first time you ran it?  09:26
dhanuxe: I just deleted the whole ip list  09:27
dhanuxe: then created it again with the name the same as the hostname  09:28
dhanuxe: that's it  09:28
*** lotrpy has quit IRC09:31
*** ag has joined #openstack09:38
*** po has joined #openstack09:40
dhanuxeOk, Part one has been finished. http://paste.openstack.org/show/2137/ thanks all....09:40
* dhanuxe bye.. slap mat_angin =)09:40
*** dhanuxe has quit IRC09:40
*** lotrpy has joined #openstack09:47
*** Alowishus has quit IRC09:47
*** holoway has quit IRC09:47
*** bpaluch_ has quit IRC09:47
*** jduhamel_ has quit IRC09:47
*** duffman has quit IRC09:47
*** bpaluch has joined #openstack09:47
*** duffman has joined #openstack09:47
*** qwerty has joined #openstack09:47
*** Daviey has quit IRC09:47
*** qwerty has joined #openstack09:47
*** RobertLaptop has quit IRC09:47
*** WormMan has quit IRC09:47
*** qwerty is now known as Guest5687509:47
*** kernelfreak has quit IRC09:47
*** Alowishus has joined #openstack09:48
*** sandywalsh has quit IRC09:48
lotrpy: hello, which ppa should I add, ppa:openstack-release/2011.2 or ppa:nova-core/release? I'm reading the deploy document and trying to install it on a single computer.  09:48
*** holoway has joined #openstack09:48
*** mattt has quit IRC09:48
*** RobertLaptop has joined #openstack09:49
*** mattt has joined #openstack09:49
*** tjikkun_ has quit IRC09:50
*** Daviey has joined #openstack09:51
doude: Hi all, is anyone using the latest version of Glance with the cache mechanism?  09:51
doude: I upgraded Glance to that version (rev 967), but I didn't enable the image cache: image_cache_enabled = False in the API config file.  09:51
doude: But the cache directory still contains cache files and takes up a lot of space on my disk  09:51
*** kashyap has quit IRC09:56
*** katkee has quit IRC09:57
*** rms-ict has quit IRC09:57
*** mgoldmann has quit IRC09:59
*** zigo has joined #openstack09:59
*** mgoldmann has joined #openstack09:59
*** kashyap has joined #openstack10:00
*** katkee has joined #openstack10:03
*** sandywalsh has joined #openstack10:06
*** jduhamel has joined #openstack10:06
viraptordid anyone have issues where the scheduling message seems stuck on rabbitmq for a couple of minutes, then gets forwarded as if nothing happened? no exceptions / errors - just a massive delay in "scheduling" stage?10:10
*** ziyadb has quit IRC10:11
*** rms-ict has joined #openstack10:11
*** vernhart has quit IRC10:13
*** jantje_ has joined #openstack10:15
*** PeteDaGuru has joined #openstack10:17
*** ejat has quit IRC10:17
*** jantje has quit IRC10:18
*** Aim has quit IRC10:18
*** ton_katsu has quit IRC10:21
*** duffman_ has joined #openstack10:21
*** tudamp has joined #openstack10:22
*** Matt_L_ has joined #openstack10:23
*** paltman_ has joined #openstack10:23
*** mwhooker_ has joined #openstack10:26
*** MarkAt2od has joined #openstack10:26
*** obino1 has joined #openstack10:27
*** ahs3` has joined #openstack10:27
*** duffman has quit IRC10:27
*** paltman has quit IRC10:27
*** sandywalsh has quit IRC10:27
*** arun_ has quit IRC10:27
*** ianloic has quit IRC10:27
*** taihen has quit IRC10:27
*** andyandy has quit IRC10:27
*** dirakx2 has quit IRC10:27
*** mwhooker has quit IRC10:27
*** mancdaz has quit IRC10:27
*** obino has quit IRC10:27
*** hggdh has quit IRC10:27
*** Matt_L has quit IRC10:27
*** ahs3 has quit IRC10:27
*** ivan has quit IRC10:27
*** FallenPegasus has quit IRC10:27
*** chemikadze has quit IRC10:27
*** paltman_ is now known as paltman10:27
*** taihen has joined #openstack10:30
*** ivan has joined #openstack10:31
*** arun_ has joined #openstack10:33
*** mancdaz has joined #openstack10:34
*** jantje_ has quit IRC10:34
*** jj0hns0n has quit IRC10:34
*** sandywalsh has joined #openstack10:35
*** mcclurmc has left #openstack10:35
*** hggdh has joined #openstack10:35
*** dirakx1 has joined #openstack10:35
*** tsuzuki has quit IRC10:36
*** Aim has joined #openstack10:36
*** andyandy has joined #openstack10:36
*** jantje has joined #openstack10:36
*** guigui1 has joined #openstack10:37
*** andyandy has quit IRC10:39
*** ag has quit IRC10:40
*** javiF has quit IRC10:40
*** cian has quit IRC10:40
alekibango: libvirtd 0.9.4 is out. nice new features...  10:42
*** troytoman is now known as troytoman-away10:42
*** lorin1 has joined #openstack10:42
*** rms-ict has quit IRC10:56
*** javiF has joined #openstack10:56
*** jantje has quit IRC10:59
*** jantje has joined #openstack11:00
*** rms-ict has joined #openstack11:02
*** ccc11 has quit IRC11:03
*** arun has quit IRC11:10
*** nostromo56 has joined #openstack11:10
*** ziyadb has joined #openstack11:13
*** arun has joined #openstack11:13
*** arun has joined #openstack11:13
*** lorin1 has left #openstack11:20
*** jovy has joined #openstack11:23
*** markvoelker has joined #openstack11:24
*** CloudAche84 has quit IRC11:27
*** irahgel has joined #openstack11:30
*** Pathin has quit IRC11:31
*** kernelfreak has joined #openstack11:35
*** nerdstein has joined #openstack11:38
*** fayce has quit IRC11:39
*** zigo has quit IRC11:41
*** jakedahn has quit IRC11:50
*** jakedahn has joined #openstack11:51
*** mnour has joined #openstack11:54
*** mnour has joined #openstack11:55
*** WormMan has joined #openstack11:59
*** OutBackDingo has quit IRC12:01
*** iOutBackDngo has joined #openstack12:01
*** vcaron has joined #openstack12:01
*** marrusl has joined #openstack12:09
*** nostromo56 has quit IRC12:11
*** martine has joined #openstack12:12
doude: Hi all, is anyone using the latest version of Glance with the cache mechanism?  12:14
doude: I upgraded Glance to that version (rev 967), but I didn't enable the image cache: image_cache_enabled = False in the API config file.  12:14
doude: But the cache directory contains cache files and takes up a lot of space on my disk  12:14
*** mfer has joined #openstack12:19
*** ccc11 has joined #openstack12:20
*** osier has quit IRC12:21
*** shentonfreude has quit IRC12:23
*** aliguori has joined #openstack12:28
*** arun has quit IRC12:30
*** arun has joined #openstack12:33
*** arun has joined #openstack12:33
*** kbringard has joined #openstack12:34
*** willaerk has joined #openstack12:41
*** huslage has joined #openstack12:43
*** lts has joined #openstack12:46
*** aliguori has quit IRC12:46
*** aliguori has joined #openstack12:47
*** ewindisch has quit IRC12:51
*** bsza has joined #openstack12:55
*** alperkanat has quit IRC12:57
*** pimpministerp has joined #openstack13:01
*** primeministerp has quit IRC13:02
*** YorikSar has joined #openstack13:02
*** rnts has joined #openstack13:04
*** lpatil has joined #openstack13:11
*** nati has joined #openstack13:11
*** whitt has joined #openstack13:14
*** lpatil has quit IRC13:16
*** ncode has quit IRC13:16
*** ewindisch has joined #openstack13:18
*** CloudAche84 has joined #openstack13:23
*** ahs3` is now known as ahs313:25
*** lborda has joined #openstack13:27
*** lpatil has joined #openstack13:29
*** javiF has quit IRC13:33
*** herman__ has joined #openstack13:38
herman__hi13:38
*** msivanes has joined #openstack13:39
*** ejat has joined #openstack13:40
*** yes456 has joined #openstack13:42
*** maplebed has joined #openstack13:42
*** kashyap has quit IRC13:44
*** bcwaldon has joined #openstack13:45
*** npmapn has joined #openstack13:45
yes456: how do I shut down instances in openstack  13:48
yes456: so I can reinitiate them  13:49
kbringardhow do you mean?13:49
herman__im looking into openstack to build a private cloud. And i was wondering about the "recommended" place to store virtual server images (shared storage), i assume objectstorage is not suitable for it.13:51
*** dendro-afk is now known as dendrobates13:52
*** maplebed has quit IRC13:53
*** cereal_bars has joined #openstack13:53
kbringardGlance is the preferred API interface for managing your images13:54
kbringardand most people use a Swift backend13:54
kbringardfor storage13:54
*** zz2 has joined #openstack13:54
kbringard: anymore the objectstore is just for ec2 compatibility, if I'm correct  13:54
herman__: also for "live" virtual server images, persistent  13:55
kbringardlive instances run on local disk of the compute node they're assigned to13:56
kbringardunless you're planning to support live migration13:56
kbringardin which case they have to be on some kind of shared storage13:56
kbringardlike an NFS mount, external SAN/NAS, etc13:56
herman__yes13:57
herman__that would be awesome in case of node failure13:57
*** gnu111 has joined #openstack13:57
kbringardwell, if the node fails13:57
kbringardthen your VMs are going to be boned13:57
*** emid has joined #openstack13:57
kbringardyou'll likely have access to their disks to pull any critical info off of them13:58
herman__yes but when the images are on shared storage they can easily be restored, while if on local they are totally gone till the node is back13:58
kbringardbut live migration is more a maintenance thing, imo13:58
kbringardlike, if you start to see disk errors or something, you can move them away before the node fails13:58
kbringardbut yea, you're right13:58
kbringardyou can relaunch them using the libvirt.xml13:58
kbringardhowever, I don't think openstack, at this time, supports bringing them back13:58
kbringardso you can virsh create them manually13:59
kbringardbut it's not going to update openstack with their new location, etc13:59
herman__hmmm ok13:59
kbringardin general, stuff running in cloud should be considered volatile13:59
herman__i agree with that14:00
kbringardand applications should be built or refactored to expect failure at any time14:00
kbringardfailure of N nodes, I mean14:00
herman__but in practice many still use it as "a server" and expect it to be HA.14:01
herman__And dont modify their apps14:01
kbringardof course14:01
kbringarddevs are lazy;-)14:01
kbringardour philosophy is that we do everything we can to keep your VM up as it were a metal machine14:01
herman__looks like Gluster has a "plugin" to provide shared storage for open stack14:02
kbringardbut you have to do everything you can to make your application "cloud ready"14:02
*** llang629 has joined #openstack14:02
herman__does openstack support live migration at all?14:03
*** Shentonfreude has joined #openstack14:03
*** llang629 has left #openstack14:03
kbringardyep14:03
kbringardit's pretty "simple" actually14:04
*** pimpministerp has quit IRC14:04
kbringardso long as your instances are on shared storage, it's more or less a matter of running a single command to move a VM to another compute node14:04
*** primeministerp has joined #openstack14:04
kbringardhttp://docs.openstack.org/cactus/openstack-compute/admin/content/live-migration-usage.html14:04
kbringardhttp://docs.openstack.org/cactus/openstack-compute/admin/content/configuring-live-migrations.html14:04
kbringardthose 2 docs should get you most of the way there14:05
herman__ye14:05
herman__cool14:05
kbringardthe second is written assuming you're exporting from one host to the others14:05
herman__thanks14:05
*** ncode has joined #openstack14:05
kbringardbut if you're using a netapp or gluster or something, then obviously that's not the case14:05
kbringardjust make sure all the compute nodes have read and write access to the exported mount, and it should pretty much work14:06
*** ldlework has joined #openstack14:07
*** arun has quit IRC14:07
herman__im kinda looking into moosefs, but not sure how it performs yet. And wish i didnt have to roll out 2 storage types (objectstorage&shared filesystem).14:07
herman__but i guess the latter is unavoidable :P14:07
*** LiamMac has joined #openstack14:07
kbringardyea… I don't have a lot of experience with swift14:08
kbringardit's entirely possible you can use it for the instances, but I don't think so14:08
kbringardjust given the nature of object storage vs block disk devices14:08
*** nati has quit IRC14:09
herman__afaik its just for putting and getting objects, not really changes within a file (which happens with server images)14:09
*** arun has joined #openstack14:09
kbringardright14:09
*** ejat has quit IRC14:11
*** openpercept_ has quit IRC14:12
*** nostromo56 has joined #openstack14:13
*** ejat has joined #openstack14:14
*** rms-ict has quit IRC14:16
*** nati has joined #openstack14:16
*** rms-ict has joined #openstack14:18
*** dendrobates is now known as dendro-afk14:19
*** javiF has joined #openstack14:20
*** mdomsch has joined #openstack14:23
*** nati has quit IRC14:24
*** YorikSar has quit IRC14:25
*** LiamMac has quit IRC14:25
*** pharkmillups has joined #openstack14:28
*** rnirmal has joined #openstack14:28
*** vladimir3p has joined #openstack14:33
*** reed has joined #openstack14:37
*** zz2 has quit IRC14:38
*** yes456 has quit IRC14:38
*** siwos has quit IRC14:41
*** jkoelker has joined #openstack14:45
*** jj0hns0n has joined #openstack14:48
*** dragondm has joined #openstack14:49
creihtyeah, I wouldn't use swift for your block devices14:49
creihtIt isn't designed for that14:49
*** jovy has quit IRC14:51
*** vcaron has joined #openstack14:52
*** msinhore has joined #openstack14:53
*** mattray has joined #openstack14:54
*** nickon has quit IRC14:54
*** kashyap has joined #openstack14:55
*** vladimir3p has quit IRC14:56
*** martine has quit IRC14:56
*** martine has joined #openstack14:58
*** CloudAche84 has quit IRC14:58
*** martine has quit IRC14:58
*** jj0hns0n has quit IRC14:59
*** martine has joined #openstack14:59
*** amccabe has joined #openstack14:59
vcaron: Hi! I'm trying to run VMware with Diablo-3 but the network driver nova.network.vmwareapi_net is gone. Can someone explain?  15:00
*** cp16net has joined #openstack15:01
*** imsplitbit has joined #openstack15:02
*** guigui1 has quit IRC15:08
*** gavin_b has quit IRC15:11
*** MarkAt2od is now known as MarkAtwood15:13
*** nostromo56 has quit IRC15:15
*** nostromo56 has joined #openstack15:15
vcaron: Hi! I'm trying to run VMware with the VLAN manager in Diablo-3 but the network driver nova.network.vmwareapi_net is gone. ????  15:16
*** nati has joined #openstack15:18
*** Carter1 has joined #openstack15:19
*** willaerk has quit IRC15:22
*** ejat has quit IRC15:24
*** dilemma has joined #openstack15:24
dilemmaI have a question about swift-log-stats-collector that I can't seem to find the answer to: do you only need one machine in your cluster with a cron that runs it?15:25
dilemmaThe documentation here is pretty vague on where it should run: http://swift.openstack.org/1.3/overview_stats.html15:26
uvirtbotNew bug: #824008 in nova "euca-describe-instances shows all instances in all states (even terminated)" [Undecided,New] https://launchpad.net/bugs/82400815:26
*** vernhart has joined #openstack15:26
pvo: ttx: update: we're scheduled to start on https://blueprints.launchpad.net/nova/+spec/admin-account-actions next week  15:27
ttx: pvo: that leaves very little time to complete it. Feature branches need to be merged by Aug 22.  15:27
ttx: pvo: which makes me a bit uncomfortable, given that it's considered essential  15:28
ttx: so potentially, release-breaking.  15:28
ttx: pvo: feeling confident that you can deliver it all in a week?  15:28
pvo: then we should downgrade it from essential.  15:29
pvo: it isn't hold-up-the-release worthy  15:29
pvo: anymore  15:29
pvo: this is for milestone 4, right?  15:29
*** mnour has quit IRC15:30
pvoI think we should have it by diablo. I changed it to the integrated freeze to give us time15:30
*** Ephur has joined #openstack15:33
viraptorpvo: is the list of available actions already defined for that task? it looks interesting to me...15:34
pvoviraptor: let me update it with what we think it would be. Standby for a few.15:35
viraptorthanks15:35
pvoviraptor: a lot is already here: http://wiki.openstack.org/NovaAdminAPI#A.2BAC8-accounts.2BAC8.7Baccount_id.7D.2BAC8-action15:36
*** lorin1 has joined #openstack15:37
*** RobertLaptop has quit IRC15:37
notmynamedilemma: yes. it is only set up to run on one machine currently. it can be configured to use all of your cores, though15:38
*** dobber has quit IRC15:38
viraptor: pvo: thanks - it's not what I expected unfortunately - I thought there would be a command to completely clean up an account and all related bits in one go (would help me out with cleaning up after tests)  15:41
pvoviraptor: that might be useful as well. If you can make a BP for it with details, we can take a look. Might be useful for us too.15:42
dilemmathanks notmyname. I'm having a rough time separating the stats system documentation out to a cluster, rather than SAIO15:42
notmynamedilemma: in what way?15:42
dilemmafor example, it's not initially clear that you need to add a proxy-server.conf to your storage nodes for swift-log-uploader to work15:43
*** ccc11 has quit IRC15:43
dilemmafound that answer here: https://answers.launchpad.net/swift/+question/16298615:43
notmynameah15:44
dilemmait's not initially clear how I should separate the info in log-processor.conf between proxies and nodes15:44
dilemmaor if it should all stay there15:44
dilemmaand if I need additional crons after separating15:44
*** pguth66 has quit IRC15:44
dilemmait's also not clear if I need to mess with rsyslog.conf on both proxies and nodes15:45
notmynamedilemma: FWIW, the stats/logging stuff is no longer in swift itself (it was moved to a separate project) and we are currently working on a replacement that is much better designed15:45
*** dgags has joined #openstack15:45
dilemmayeah, but that's not here yet, and current deployments are going to use 1.3, with the existing stats system in place15:46
dilemmaI mean, I'm sure the new system is great, and I'd love to switch to it in the future if there's a reliable upgrade path that doesn't interrupt an existing cluster15:48
dilemma: but yeah, the 1.3 docs for stats are pretty tough to sort through  15:48
*** LiamMac has joined #openstack15:48
*** rms-ict has quit IRC15:50
*** RobertLaptop has joined #openstack15:51
notmyname: one of the reasons it was removed is to emphasize that it's not part of the core storage system and you probably need to do some customization for your own needs (similar to swauth)  15:52
*** vcaron has quit IRC15:53
*** heckj has joined #openstack15:54
*** Alowishus has left #openstack15:55
*** rms-ict has joined #openstack15:56
*** tudamp has left #openstack15:56
*** rocambole has quit IRC15:58
*** markvoelker has quit IRC16:00
*** deirdre_ has joined #openstack16:05
*** dilemma has quit IRC16:05
*** ejat has joined #openstack16:07
*** ejat has joined #openstack16:07
*** nati has quit IRC16:08
*** jsalisbury has joined #openstack16:09
*** Guest56875 is now known as qwerty16:11
uvirtbotNew bug: #824033 in nova "SQLAlchemy w/ MySQL doesn't play nice with eventlet" [Undecided,In progress] https://launchpad.net/bugs/82403316:12
uvirtbotNew bug: #824034 in nova "Validate size of vhd files in OVF containers" [Undecided,New] https://launchpad.net/bugs/82403416:12
*** qwerty is now known as Guest8094616:12
*** ejat has quit IRC16:12
*** Guest80946 is now known as andyandy16:13
andyandy: Hi guys, when I run an instance, it stays in the "running" state but never boots, do you have any idea? thanks  16:15
*** ejat has joined #openstack16:15
*** ejat has joined #openstack16:15
pvoviraptor: when you mean "clean up account" what exactly do you mean?16:16
*** infinite-scale has joined #openstack16:16
*** kernelfreak has quit IRC16:17
*** kernelfreak has joined #openstack16:18
*** mdomsch has quit IRC16:21
*** beenz has joined #openstack16:21
*** mrrk has joined #openstack16:22
beenzanother try at LXC support:  Anyone here got LXC instances' networking to come up? I'm running the latest natty diablo release from the trunk PPA with a natty UEC image.  kvm instances come up just fine16:25
*** Carter1 has quit IRC16:26
uvirtbotNew bug: #824043 in nova "Scheduler clogs up logging with too much noise" [Medium,In progress] https://launchpad.net/bugs/82404316:26
*** rms-ict has quit IRC16:27
viraptorpvo: basically same as suspend, but delete instead (could release floating ips too)16:27
*** Harp has joined #openstack16:27
*** rms-ict has joined #openstack16:28
pvoviraptor: good idea. I'll see what I can do.16:28
HarpHello everyone16:28
pvoHarp: hi16:29
Harphello mate. Can I ask something?16:29
pvoask away16:30
Harp: I read the documentation about openstack, and found out about stackops. Also I found recommended requirements for nodes. For example:  16:30
Harp: 1 x 64-bit x86, 4GB of RAM, 2 x 10GB SAS/SATA/SSD (RAID 1 or higher), 3 x 1Gb NIC or 2 x 10GbE NIC  16:30
Harp: for the network node  16:31
*** nicolas2b has quit IRC16:31
Harp: Ok, I have a supermicro server that has 6 available slots for HDD  16:31
*** mattray has quit IRC16:31
Harp: can I populate those remaining two slots with other HDDs and use them in a cloud  16:31
Harp: sorry, 4 slots  16:32
*** jsalisbury has quit IRC16:32
pvo: Harp: I can't see why not. I run most of the software on my laptop and the equivalent of a mac mini. The hw is entirely up to you and your needs.  16:32
Harp: So anything I add on the nodes can be used as resources for my needs?  16:33
*** jdurgin has joined #openstack16:33
*** kashyap has quit IRC16:34
*** kashyap has joined #openstack16:34
HarpCan the same logic be used for additional RAM ?16:34
*** bengrue has joined #openstack16:35
*** infinite-scale has quit IRC16:36
pvoHarp: sure… depending on how you configure your hdds.16:37
pvothe ram will get taken into account by the schedulers.16:37
pvo* depending on the scheduler you use16:38
pvoand what you're using the nodes for.16:38
pvo: if you're using the extra ram on the api nodes, there isn't much to account for. if you use the ram on the compute node you can add additional instances as long as you have hdd to allocate.  16:38
*** ccustine has joined #openstack16:38
*** jsalisbury has joined #openstack16:39
Harp: Basically, what we want to accomplish is to host our own web applications. And I'm trying to understand how and what to implement.  16:39
HarpAnd what we actually need.16:39
Harplol16:40
*** irahgel has left #openstack16:41
Harppvo : can you elaborate what you mean when you said : sure… depending on how you configure your hdds.16:41
*** Pentheus_ has quit IRC16:41
pvowhat hypervisor are you planning on using?16:42
*** katkee has quit IRC16:42
pvowe use xenserver, so our disk configurations would be different.16:42
pvoand we keep the instances in a storage repository.16:43
pvoif you have raid, are you using lvm? raid5, 6? 1?16:43
pvoit all depends on your tolerance.16:43
HarpI understand what you mean. Thank you very much for your help.16:44
*** Harp has quit IRC16:45
*** ironcame12 is now known as ironcamel216:46
*** Carter1 has joined #openstack16:46
*** obino1 has quit IRC16:48
*** anotherjesse has joined #openstack16:52
*** llang629 has joined #openstack16:53
*** zul has quit IRC16:53
kernelfreakDoes anybody had some experience build its own NAS using SATA disks ?16:53
*** llang629 has left #openstack16:56
*** johnmark has joined #openstack16:57
*** KnuckleSangwich has joined #openstack16:59
*** nati has joined #openstack17:02
*** slriv has joined #openstack17:03
*** mrrk has quit IRC17:03
*** dprince has joined #openstack17:05
*** aliguori has quit IRC17:07
*** nelson____ has quit IRC17:08
*** mgius has joined #openstack17:12
*** jakedahn has quit IRC17:13
*** vernhart has quit IRC17:14
*** markvoelker has joined #openstack17:18
*** joearnold has joined #openstack17:19
*** obino has joined #openstack17:22
*** darraghb has quit IRC17:25
uvirtbotNew bug: #824083 in glance "Allow Glance to log to syslog" [Undecided,In progress] https://launchpad.net/bugs/82408317:26
*** mrjazzcat is now known as mrjazzcat-afk17:26
*** darthflatus has joined #openstack17:28
*** viraptor has quit IRC17:31
*** aliguori has joined #openstack17:31
*** katkee has joined #openstack17:36
*** bradhedlund has joined #openstack17:37
*** rms-ict has quit IRC17:40
*** katkee has quit IRC17:42
*** katkee has joined #openstack17:42
*** martine_ has joined #openstack17:43
*** martine has quit IRC17:46
*** emid has quit IRC17:49
*** jakedahn has joined #openstack17:51
*** bockmabe has joined #openstack17:55
*** corrigac has joined #openstack18:00
*** Ephur has quit IRC18:01
*** Ephur has joined #openstack18:02
*** obino1 has joined #openstack18:03
*** mattray has joined #openstack18:04
*** obino has quit IRC18:04
*** bradhedlund has quit IRC18:05
*** maplebed_ has joined #openstack18:08
*** KnuckleSangwich has quit IRC18:09
*** bradhedlund has joined #openstack18:10
*** Shentonfreude has quit IRC18:10
*** jakedahn has quit IRC18:13
*** jakedahn has joined #openstack18:13
*** rms-ict has joined #openstack18:15
*** Shentonfreude has joined #openstack18:15
*** slriv has quit IRC18:21
*** GeoDud has joined #openstack18:21
*** vincent has joined #openstack18:25
*** vincent is now known as Guest9062018:25
*** jbweeks has joined #openstack18:28
*** MarkAtwood has quit IRC18:29
*** vladimir3p has joined #openstack18:30
*** martine_ has quit IRC18:33
*** glitchpants has joined #openstack18:42
*** javiF has quit IRC18:43
*** jovy has joined #openstack18:48
*** hggdh has quit IRC18:49
*** hggdh has joined #openstack18:51
*** nostromo56_ has joined #openstack18:52
*** nostromo56 has quit IRC18:52
*** nostromo56_ is now known as nostromo5618:52
*** mxmasster has joined #openstack18:53
*** nostromo56 has quit IRC18:53
mxmassterhello18:53
*** nostromo56_ has joined #openstack18:53
*** obino1 has quit IRC18:53
mxmasster: I have a growing pool of static content (right now, after pruning, about 5 TB). I'm looking at options for object storage and swift is a large contender. In relation to serving static content directly to the net - or more specifically behind a varnish proxy - is there anything that I should be aware of?  18:56
*** jbweeks has quit IRC18:58
*** po has quit IRC18:58
notmynamemxmasster: swift will work fine for that18:59
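For the use case mxmasster describes, a container can be made world-readable with a referrer ACL so varnish (or any HTTP client) can GET objects without a token. A small sketch, assuming the auth middleware honours ".r:*" read ACLs; the host, account path, token, and object names are placeholders.

```python
# Sketch: mark a container world-readable via a referrer ACL, then fetch an
# object anonymously. Host, account path, token and names are placeholders.
import httplib

HOST = 'swift.example.com'        # hypothetical proxy host
PORT = 8080
ACCOUNT = '/v1/AUTH_static'       # hypothetical account path
TOKEN = 'replace-with-a-real-token'

# Allow anonymous reads on the container.
conn = httplib.HTTPConnection(HOST, PORT)
conn.request('POST', ACCOUNT + '/assets', '',
             {'X-Auth-Token': TOKEN,
              'X-Container-Read': '.r:*',
              'Content-Length': '0'})
print conn.getresponse().status   # 204 No Content on success

# Fetch an object with no token -- this is the path varnish would front.
conn = httplib.HTTPConnection(HOST, PORT)
conn.request('GET', ACCOUNT + '/assets/logo.png')
resp = conn.getresponse()
print resp.status, len(resp.read())
```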
uvirtbotNew bug: #824126 in nova "OS API should accept availability_zone parameter in order to launch VM instance on specific host of specified zone" [Undecided,New] https://launchpad.net/bugs/82412619:01
*** dprince has quit IRC19:01
*** nati has quit IRC19:05
*** saju_m has joined #openstack19:09
*** pguth66 has joined #openstack19:13
*** ewindisch_ has joined #openstack19:16
*** jbweeks has joined #openstack19:16
*** ewindisch has quit IRC19:18
*** ewindisch_ is now known as ewindisch19:18
*** martine has joined #openstack19:25
*** huslage has quit IRC19:27
*** obino has joined #openstack19:29
*** wilmoore has joined #openstack19:33
*** Martz has joined #openstack19:33
*** GeoDud has quit IRC19:35
*** clauden_ has joined #openstack19:35
*** FallenPegasus has joined #openstack19:36
*** DigitalFlux has joined #openstack19:38
*** DigitalFlux has joined #openstack19:38
*** FallenPegasus is now known as MarkAtwood19:38
*** lts has quit IRC19:39
*** Ephur_ has joined #openstack19:39
*** Ephur has quit IRC19:43
*** Ephur_ is now known as Ephur19:43
*** clauden_ has quit IRC19:44
*** clauden_ has joined #openstack19:44
*** po has joined #openstack19:44
*** mrrk has joined #openstack19:49
*** stewart has joined #openstack19:57
*** bcwaldon has quit IRC19:58
*** bcwaldon has joined #openstack20:00
*** mitchless has quit IRC20:05
*** pguth66 has quit IRC20:14
Carter1: would it be possible to run openstack on multiple clouds  20:14
Carter1: for example, run openstack on aws and rackspace for the same site?  20:15
Carter1: so there is redundancy if amazon goes down  20:15
kernelfreak: Does anybody have experience building their own NAS using SATA disks?  20:15
*** hingo has joined #openstack20:16
*** hingo has quit IRC20:17
*** lts has joined #openstack20:17
*** hingo has joined #openstack20:18
*** whitt has quit IRC20:21
*** aliguori has quit IRC20:21
*** nerdstein has quit IRC20:30
*** lorin1 has quit IRC20:32
*** darthflatus has quit IRC20:35
*** Shentonfreude has quit IRC20:39
*** Shentonfreude has joined #openstack20:44
*** msinhore has quit IRC20:45
*** jbweeks has quit IRC20:47
*** aliguori has joined #openstack20:48
*** Shentonfreude has quit IRC20:49
*** kbringard has quit IRC20:55
*** kbringard has joined #openstack20:56
*** sandywalsh has quit IRC20:57
*** YorikSar has joined #openstack20:59
YorikSarHello. Can anyone here help me with understanding the intended Gerrit workflow?21:01
*** saju_m has quit IRC21:03
*** lpatil has quit IRC21:04
notmynameYorikSar: as I understand it, instead of doing a github pull request or pushing to the canonical repo, you push to a git remote location that you set up for gerrit. once approved in gerrit, it gets merged into the master trunk21:07
YorikSarWell, it's a start. But I'm more interested in what happens next.21:07
notmynameYorikSar: so make branch, code, commit, code, commit, squash, push to gerrit, gerrit moves it to master21:08
*** npmapn has quit IRC21:08
notmynameYorikSar: inside of gerrit?21:08
YorikSar: My case: I proposed a change, it was partially accepted (at least part of the first commit was committed to upstream), and now I have no idea how to cleanly continue working on my branch.  21:09
YorikSarnotmyname: More like inside my local git repo.21:09
*** Guest90620 has quit IRC21:10
notmynameso if your commit is in master now, I'd assume you would start working from the tip of master again. take your existing changes and rebase them to master. when ready, resubmit them to gerrit21:10
YorikSarIt would go well if only it didn't cause a mess in commits because of partial acceptance of the first one in the queue21:11
notmynameI'm a little unclear on how your patch was partially accepted. my understanding is that you have to squash all your commits into one changeset for review. how was only part of it approved?21:12
YorikSarmtaylor, jeblair, dolphm - are any of you anywhere near?21:12
*** mrjazzcat-afk is now known as mrjazzcat21:13
mtaylorYorikSar: whazzup?21:13
mtaylorYorikSar: and yes, I ask the same question that notmyname just asked - what do you mean by partially accepted? can you give me a link to the review you're talking about?21:14
*** fabiokung has quit IRC21:15
YorikSar: I had 5 commits ready in my branch, each of them representing a separate step toward my goal. Actually, each of them is a separate "feature" to be added to the project. In the course of the process, the first of them was divided into two steps and the first of those was merged into upstream.  21:15
*** fabiokung has joined #openstack21:15
*** mfer has quit IRC21:15
YorikSar: the first "step", of course  21:15
*** Carter1 has left #openstack21:15
*** mfer has joined #openstack21:16
*** msivanes has quit IRC21:16
*** mattray has quit IRC21:16
YorikSar: There was change https://review.openstack.org/159, which was partially accepted in change https://review.openstack.org/193  21:17
dolphmYorikSar: i wanted to talk to you about that review, but...21:19
mtaylorYorikSar: AH - this was due to dolph reverting the change21:19
mtaylorYorikSar: this is not a _normal_ thing - so we don't have a best-practice written up for it yet ... gimme a sec21:19
YorikSar: As I can see, all my proposals were finally merged by dolphm, but I still can't figure out an easy way to collaborate when the minimum unit for merging is a commit, not a branch.  21:19
*** cereal_bars has quit IRC21:20
*** nati has joined #openstack21:20
dolphmYorikSar: in general, for each change you want to make to keystone, create a new branch from your master, and upload a single commit to gerrit (this implies that you can commit all you want, but you should "squash" prior to uploading a review)... this avoids the unnecessary dependencies between otherwise independent changes - and makes gerrit a lot smoother21:20
YorikSar: dolphm: I really tried to catch up to your changes in master this morning (MSK morning), but I fell into a mess with merging and restoring all those Change-Ids  21:21
dolphmYorikSar: do you have new development that hasn't made it to gerrit?21:21
*** mattray has joined #openstack21:21
dolphmYorikSar: (side note - sorry for any headaches i caused, my goal was to get all your changes in as quickly as possible this week)21:22
dolphmYorikSar: and the only thing i have left is change the "from ldap import ..." to "import ldap" which seems to fail a bunch of tests, and i'm not sure why21:22
*** cp16net has quit IRC21:22
*** mdomsch has joined #openstack21:22
*** imsplitbit has quit IRC21:23
*** Ephur has quit IRC21:23
YorikSar: dolphm: And thank you for that, I'm sure I'd have broken a lot of keyboards trying to do this. But I still don't understand how we could or should collaborate under these circumstances.  21:23
dolphmlol, just mine :) but I learned quite a bit about git in the process, so it's all good!21:24
*** jsalisbury has quit IRC21:25
YorikSardolphm: I believed that every step (commit) is independent, so that after each one of them Jenkins should check that everything is alright and reviewers should easily see that this bunch of changes is good.21:25
*** Ephur has joined #openstack21:25
*** HouseAway is now known as DrHouseMD21:25
*** pothos_ has joined #openstack21:25
dolphm: YorikSar: i totally agree with your philosophy, but gerrit looks at your chain of commits, and assumes that the reviews must be dependent on each other as well... in this case, one of the first commits had unit test issues (the import ldap stuff)  21:26
dolphmYorikSar: and it wasn't your fault that the tests failed necessarily - right after the "import ldap" stuff was merged the first time, we improved test coverage drastically21:27
*** pothos has quit IRC21:27
*** pothos_ is now known as pothos21:28
*** maplebed has joined #openstack21:28
*** jheiss has quit IRC21:28
YorikSar: dolphm, mtaylor: there was a statement by jeblair on the mailing list that Gerrit should solve problems, not create them... As I see it, the problem is that it only provides the ability to operate on a whole branch as one change...  21:28
dolphmYorikSar: and it was the improved test coverage that found errors, so i had to revert the ldap stuff to get the expanded test coverage into trunk21:28
*** bengrue has quit IRC21:28
YorikSardolphm: Oh, I see... I think, in such case the ldap stuff should first be fixed to match new tests, not reverted.21:29
*** bengrue has joined #openstack21:29
mtaylorYorikSar: well, the idea is just a codification of a git best practice about each commit that winds up in trunk being atomic and not a sequence of commits21:30
dolphmYorikSar: right, but it had already been merged and the review closed21:30
mtaylornotmyname: do you have the link to that article from last week?21:30
dolphmYorikSar: so i reverted the review, and then did a simpler commit with the simple_bind_s stuff you wrote21:30
*** maplebed has quit IRC21:30
*** rnirmal has quit IRC21:31
*** maplebed has joined #openstack21:31
YorikSardolphm: ...and that caught me lost in the woods.21:31
dolphm:(21:31
dolphmhow much work do you have that hasn't made it to gerrit?21:31
YorikSar: dolphm: There definitely should be some common guidance for such a case.  21:31
YorikSar: dolphm: Oh, if all my changes are marked as merged, all I have left is thoughts that I hope will have an easier way upstream.  21:32
mtaylorYorikSar: it will usually be much easier than this21:33
dolphmmtaylor: YorikSar: ^^ this was truly an exceptional case21:33
dolphmYorikSar: do you need any help sorting out your local repo today?21:34
mtaylor: YorikSar: yes - sorry it caused you pain - in the future, we will be better at avoiding these cases in the first place  21:34
YorikSar: mtaylor: As I recall, there was a good practice that implies that every merge made to master (or any release branch and so on) should be done non-fast-forward, so that each feature ends up as a single commit on the master branch and all the commits that brought us to that feature are kept in the topic branch (or in the history of the topic branch's existence)  21:35
mtaylorYorikSar: not entirely...21:36
YorikSar: dolphm: Unfortunately, my day is long over, but since all my work is in master, I'll start my branch from master again and see whether anything is missing.  21:37
*** ejat has quit IRC21:37
mtaylor: YorikSar: the intent here is actually that we discard the commits that led us to the trunk commit and squash them into a single commit that is the culmination of the work  21:37
dolphm: YorikSar: alright - if you can work out "import ldap" that'd be a major improvement - i don't understand why it causes problems!  21:37
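The actual cause of the keystone test failures isn't established here, but one well-known difference between the two import styles shows up when tests monkeypatch module attributes: a name bound with "from x import y" is not affected by later patches to x.y. A stand-alone illustration, using os.getcwd purely as a stand-in for something like ldap.initialize:

```python
# Illustration of how "from x import y" vs "import x" interact with
# monkeypatching in tests. os.getcwd stands in for ldap.initialize here;
# whether this is what broke the keystone tests is unknown.
import os
from os import getcwd   # binds a local name at import time


def via_bound_name():
    # Uses the name captured by "from os import getcwd".
    return getcwd()


def via_module_attribute():
    # Looks the attribute up on the module at call time.
    return os.getcwd()


if __name__ == '__main__':
    original = os.getcwd
    os.getcwd = lambda: '/patched'    # what a test monkeypatch might do
    try:
        print via_bound_name()        # real cwd: the local binding is unaffected
        print via_module_attribute()  # '/patched': picks up the patched attribute
    finally:
        os.getcwd = original
```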
*** jkoelker_ has joined #openstack21:38
*** pquerna has quit IRC21:39
*** pquerna has joined #openstack21:39
mtaylorbut there is an article I'm looking for which explains it _MUCH_ better than me21:39
YorikSar: dolphm: And one thing that I think is worth discussing is how to present the % operator for templates with one argument.  21:39
*** ejat has joined #openstack21:40
*** bcwaldon has quit IRC21:40
YorikSar: dolphm: I saw that to conform to PEP8 requirements and to shorten lines you converted the "%s" % (ss,) pieces into "%s" % ss. I may be a bit paranoid, but is it completely correct?  21:40
*** lts has quit IRC21:40
dolphmYorikSar: ah, if it's not correct - i'd love to know!21:41
dolphmYorikSar: i only use tuples when there's more than one variable to plug in21:41
YorikSarmtaylor: http://nvie.com/posts/a-successful-git-branching-model/ - that's where I got that idea.21:41
dolphmYorikSar: and as you pointed out, it was just an easy way to conform to line length21:41
*** ejat has quit IRC21:41
dolphmYorikSar: see example 3.23 http://www.diveintopython.org/native_data_types/formatting_strings.html21:43
*** jfluhmann has joined #openstack21:43
mtaylorYorikSar: ah - that is not really the model we're using here ...21:44
mtaylorYorikSar: read this: http://sandofsky.com/blog/git-workflow.html21:44
YorikSardolphm: My point is to prevent any possible misbehaviour, e.g. if the only parameter is a tuple and we leave it bare after the %, we get into trouble.21:45
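
A minimal sketch of the gotcha YorikSar is pointing at (the variable name and format string are made up, not taken from the keystone code): when the single value happens to be a tuple, the bare form treats it as the full argument list for the format string, while wrapping it in a one-element tuple is always safe.

    value = ("ldap", "sql")              # the lone argument happens to be a tuple

    print("backend: %s" % (value,))      # safe: substitutes the tuple itself

    try:
        print("backend: %s" % value)     # bare tuple is taken as the argument list
    except TypeError as exc:
        print(exc)                       # "not all arguments converted during string formatting"
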
mtaylorYorikSar: the nvie model, as it relates to openstack, is more about pulling releases off of trunk - not about how work from devs gets into trunk21:45
*** jbweeks has joined #openstack21:45
dolphmYorikSar: can you email me an example? i'm headed out for the day21:45
*** ejat has joined #openstack21:46
*** ejat has joined #openstack21:46
YorikSardolphm: OK, I'll mail you in a couple of minutes21:47
mtaylorYorikSar: (but keep in mind, we diverge from his exact steps a little bit - he just has a good explanation of why we're squashing before submitting)21:47
*** jkoelker has quit IRC21:48
*** ldlework has quit IRC21:49
*** bradhedlund has quit IRC21:49
*** bradhedlund has joined #openstack21:49
YorikSarmtaylor: I haven't finished reading yet, but as we can see in our case, our approach turned out to be not-so-good when one change is proposed and work on the topic branch continues21:49
mtaylorYorikSar: nope, that's still actually fine21:50
mtaylorYorikSar: and I do that all the time ... in _this_ case, we broke a few things21:50
mtaylorif you have work in a topic branch that is ready to go in (it is now a complete thought that works) then you can squash those commits down and submit them. at that point, make a new branch from that branch and continue your work in that new branch21:51
*** bsza has quit IRC21:51
mtaylorYorikSar: if you have to make code-review updates on your first branch, that's fine, because you have it there and you can amend the commit and submit it again21:51
*** dgags has quit IRC21:52
mtaylorYorikSar: once your second set of changes, which includes the first changes, is ready to go in - you can rebase/squash them on top of master21:52
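
A rough command sketch of the branch-and-squash flow mtaylor describes (the branch names are made up, and this is only one plausible command sequence, not a mandated procedure):

    # do the work in a topic branch, committing as often as you like
    git checkout -b ldap-work master
    # ...hack, commit, hack, commit...

    # when the work is a complete thought, squash it into a single commit on top of master
    git fetch origin
    git rebase -i origin/master      # mark the follow-up commits as "squash" or "fixup"

    # continue further work in a new branch started from the squashed result
    git checkout -b ldap-work-2 ldap-work

    # if reviewers ask for changes to the first submission, amend that commit and resubmit
    git checkout ldap-work
    git commit --amend
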
*** bradhedlund has quit IRC21:53
*** bradhedlund has joined #openstack21:53
*** gnu111 has quit IRC21:54
*** po has quit IRC21:56
*** ldlework has joined #openstack21:56
*** bradhedlund has quit IRC21:58
YorikSarmtaylor: After that article, I think the problem in my head was that I treated commits as written in stone rather than disposable.21:59
*** johnmark has left #openstack21:59
YorikSarmtaylor: And if someone else messes with my changes, there will be rebase/merge/whatever trouble in my private branches anyway22:00
*** DrHouseMD has quit IRC22:00
*** AimanA has joined #openstack22:01
*** mnour has joined #openstack22:02
YorikSarmtaylor: About your last point - if I have a branch mytopic1 that holds a complete, large step of my topic that should be merged upstream, should I both keep it and create another branch for the single squashed commit?22:03
YorikSarmtaylor: That way I'd keep access to all my private history as well as that golden squashed commit for review.22:04
*** LiamMac has quit IRC22:05
*** mfer has quit IRC22:07
*** markvoelker has quit IRC22:08
*** fysa has quit IRC22:09
*** fysa has joined #openstack22:09
*** Ephur has quit IRC22:19
*** Ephur has joined #openstack22:20
YorikSarmtaylor: One of the reasons for me not to squash commits was that two of them were common bugfixes, not LDAP-specific features. Should I create separate squashed commits for the topic-specific and for the common changes?22:22
*** hadrian has joined #openstack22:24
mtaylorYorikSar: I would say that in a perfect world you would want to make one submission to gerrit for each complete code thought... so a bugfix would want to be its own thing. that way, if someone is reading your patch, they can be sure they know what the patch does22:24
mtaylorof course, this is all within reason - and you'll develop a sense as to when you need a different branch and when you don't22:25
* mtaylor has to run out for a bit ... I'll find you again when I get back and we can keep chatting22:25
*** katkee has quit IRC22:25
*** zul has joined #openstack22:25
*** Ephur has quit IRC22:26
*** jfluhmann has quit IRC22:27
*** ldlework has quit IRC22:27
*** PeteDaGuru has left #openstack22:29
*** lvaughn has quit IRC22:30
*** lvaughn has joined #openstack22:30
*** po has joined #openstack22:31
*** stewart has quit IRC22:32
*** po has quit IRC22:32
*** lvaughn_ has joined #openstack22:33
*** kbringard has quit IRC22:33
*** llang629 has joined #openstack22:34
*** beenz has quit IRC22:34
*** cereal_bars has joined #openstack22:36
*** lvaughn has quit IRC22:37
*** jbweeks has quit IRC22:37
*** zul has quit IRC22:38
*** nati_ has joined #openstack22:43
*** nati has quit IRC22:43
*** mattray has quit IRC22:44
*** llang629 has left #openstack22:44
*** msinhore has joined #openstack22:47
*** mxmasster has quit IRC22:47
*** miclorb has joined #openstack22:57
*** kpepple_ has joined #openstack22:57
*** rlucio has joined #openstack22:58
kpepple_jk0: around ?22:58
*** mdomsch has quit IRC23:01
*** GeoDud has joined #openstack23:01
*** heckj has quit IRC23:02
*** clauden___ has joined #openstack23:03
*** clauden_ has quit IRC23:03
*** jtanner has joined #openstack23:05
*** mnour has quit IRC23:05
*** joearnold has quit IRC23:08
*** msinhore has quit IRC23:08
*** vladimir3p has quit IRC23:14
*** jovy has quit IRC23:16
*** msinhore has joined #openstack23:16
*** carlp has quit IRC23:19
*** DrHouseMD has joined #openstack23:20
*** ncode has quit IRC23:21
*** carlp has joined #openstack23:21
*** AimanA has quit IRC23:22
*** msinhore1 has joined #openstack23:26
*** msinhore has quit IRC23:26
*** amccabe has quit IRC23:26
*** mgius has quit IRC23:26
*** jeffjapan has joined #openstack23:27
uvirtbotNew bug: #824225 in nova "EC2 API get Metadata by instance type returns incorrect data" [Undecided,New] https://launchpad.net/bugs/82422523:27
*** mgius has joined #openstack23:28
*** msinhore1 has quit IRC23:35
*** huslage has joined #openstack23:38
*** DigitalFlux has quit IRC23:38
*** DigitalFlux has joined #openstack23:38
*** DigitalFlux has joined #openstack23:38
*** DigitalFlux has quit IRC23:44
*** ejat has quit IRC23:45
*** DigitalFlux has joined #openstack23:45
*** DigitalFlux has joined #openstack23:45
*** msinhore has joined #openstack23:45
*** hadrian has quit IRC23:46
*** nostromo56_ has quit IRC23:51
*** nostromo56 has joined #openstack23:52
*** mgius has quit IRC23:53
*** nati_ has quit IRC23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!