Tuesday, 2011-05-10

*** shentonfreude has joined #openstack00:00
*** dragondm has quit IRC00:01
*** mattray has quit IRC00:01
<zns> NHM: how do you differentiate between users in the central directory and the ones in the local institute directory?  00:02
*** mattray has joined #openstack00:03
*** shentonfreude has quit IRC00:05
*** DigitalFlux has quit IRC00:06
*** DigitalFlux has joined #openstack00:06
*** DigitalFlux has joined #openstack00:06
*** matiu has quit IRC00:08
*** maplebed has quit IRC00:08
*** shentonfreude has joined #openstack00:08
*** enigma1 has quit IRC00:09
*** jdurgin has quit IRC00:16
*** miclorb has quit IRC00:19
*** gregp76_ has joined #openstack00:23
*** gregp76_ has quit IRC00:24
*** DigitalFlux has quit IRC00:26
*** gregp76 has quit IRC00:26
*** DigitalFlux has joined #openstack00:26
*** DigitalFlux has joined #openstack00:26
*** jeffjapan has joined #openstack00:27
*** zns has quit IRC00:29
*** mattray has quit IRC00:30
*** Dumfries has quit IRC00:31
*** drogoh has quit IRC00:32
*** drogoh has joined #openstack00:33
*** kashyap has quit IRC00:34
*** nelson has quit IRC00:42
*** nelson has joined #openstack00:42
*** Eyk has quit IRC00:44
*** kashyap has joined #openstack00:45
*** Ryan_Lane has quit IRC00:45
*** scalability-junk has quit IRC00:46
*** msivanes has joined #openstack00:53
*** westmaas1 has joined #openstack01:01
*** adjohn has joined #openstack01:03
*** cid has joined #openstack01:12
*** j05h has quit IRC01:13
<cid> hello... I got some problems with the command euca-run-instances  01:14
<cid> euca-run-instances $emi -k openstack -t m1.tiny  01:15
<cid> euca-run-instances -k test -t m1.tiny ami-tty  01:15
<cid> ImageNotFound: Image ami-tty could not be found  01:16
<cid> any help would be great  01:17
*** anticw has quit IRC01:19
*** anticw has joined #openstack01:19
*** j05h has joined #openstack01:19
<winston-d> cid: what's the output of 'euca-describe-images'? is 'ami-tty' in that output?  01:21
<cid> ami-tty is not in the list  01:24
<cid> I just follow the instructions for the script installation  01:24
<cid> followed*  01:25
*** zigo-_- has joined #openstack01:29
<zigo-_-> alekibango: Hi there!  01:29
*** jfluhmann has joined #openstack01:34
*** DigitalFlux has quit IRC01:44
*** DigitalFlux has joined #openstack01:44
<winston-d> cid: you should first upload some images, then use them to start an instance  01:46
*** toluene has joined #openstack01:46
*** vernhart has quit IRC01:46
<toluene> hi! Has anyone here installed openstack from source? I've got a little problem.  01:46
*** kaz_ has joined #openstack01:49
*** DigitalFlux has quit IRC01:54
*** DigitalFlux has joined #openstack01:54
*** stewart has joined #openstack01:59
*** dendrobates is now known as dendro-afk02:01
*** miclorb has joined #openstack02:07
*** lorin1 has joined #openstack02:17
*** lorin1 has left #openstack02:17
*** lorin1 has joined #openstack02:17
*** DigitalFlux has quit IRC02:28
*** DigitalFlux has joined #openstack02:29
*** fantasy has quit IRC02:29
*** larry__ has joined #openstack02:32
*** Ephur has quit IRC02:32
*** DigitalFlux has quit IRC02:32
*** Ephur has joined #openstack02:32
*** DigitalFlux has joined #openstack02:33
*** DigitalFlux has joined #openstack02:33
*** lorin1 has quit IRC02:36
<HugoKuo__> cid: $emi = ami-tty by default, but you don't have any images yet ....  02:36
*** gaveen has joined #openstack02:40
*** cid has quit IRC02:41
*** DigitalFlux has quit IRC02:42
*** DigitalFlux has joined #openstack02:42
*** DigitalFlux has joined #openstack02:42
*** fantasy has joined #openstack02:42
*** jakedahn has joined #openstack02:44
*** cid has joined #openstack02:49
*** DigitalFlux has quit IRC02:50
*** DigitalFlux has joined #openstack02:51
*** DigitalFlux has joined #openstack02:51
<HugoKuo__> I found that nova-network is really hard to do HA  :<  02:52
*** Zangetsue has joined #openstack02:54
*** bcwaldon has joined #openstack02:59
*** dysinger has quit IRC03:05
*** DigitalFlux has quit IRC03:07
*** DigitalFlux has joined #openstack03:07
*** DigitalFlux has joined #openstack03:07
*** DigitalFlux has quit IRC03:12
*** DigitalFlux has joined #openstack03:13
*** DigitalFlux has joined #openstack03:13
*** joearnold has joined #openstack03:13
*** santhosh has joined #openstack03:13
*** DigitalFlux has quit IRC03:16
*** rchavik has joined #openstack03:17
*** rchavik has joined #openstack03:17
*** gaveen has quit IRC03:17
*** larry__ has quit IRC03:29
*** zns has joined #openstack03:31
<alekibango> hi zigo :)  03:31
*** gaveen has joined #openstack03:37
*** msivanes has quit IRC03:40
*** joearnold has quit IRC03:40
<uvirtbot> New bug: #780276 in nova "run_tests.sh fails test_authors_up_to_date when using git repo" [Undecided,New] https://launchpad.net/bugs/780276  03:41
*** joearnold has joined #openstack03:42
*** santhosh has quit IRC03:43
*** Ephur has quit IRC03:47
<HugoKuo__> morning  03:48
*** vernhart has joined #openstack03:48
<HugoKuo__> How's it going today?  03:49
*** gaveen has quit IRC03:50
*** bcwaldon has quit IRC03:52
<alekibango> HugoKuo__: :) waking up  03:55
*** cid has quit IRC03:55
<HugoKuo__> alekibango: It's really hot today in Taiwan....  03:57
<alekibango> sun is coming up, birds singing, looks like it will be hot in CZ too  03:57
*** joearnold has quit IRC04:01
*** joearnold has joined #openstack04:04
*** joearnold has quit IRC04:07
*** bcwaldon has joined #openstack04:09
*** joearnold has joined #openstack04:11
*** masudo has quit IRC04:11
*** joearnold has quit IRC04:15
*** bcwaldon has quit IRC04:16
<uvirtbot> New bug: #780287 in nova "nova/scheduler/host_filter.py fails pep8" [Undecided,Fix committed] https://launchpad.net/bugs/780287  04:21
*** Pyro_ has joined #openstack04:25
*** santhosh has joined #openstack04:27
*** omidhdl has joined #openstack04:35
*** med_out is now known as med04:36
*** med is now known as medberru04:36
*** medberru is now known as medberry04:36
*** grapex has quit IRC04:39
*** omidhdl has quit IRC04:42
*** omidhdl has joined #openstack04:44
*** f4m8_ is now known as f4m804:51
*** sophiap has quit IRC04:53
*** hagarth has joined #openstack05:08
*** zenmatt has quit IRC05:08
*** kraay has quit IRC05:19
*** Ryan_Lane has joined #openstack05:20
*** miclorb has quit IRC05:22
*** zns has quit IRC05:29
<zigo-_-> alekibango: Woke up?  05:54
*** vernhart has quit IRC05:56
*** fantasy has quit IRC06:03
*** mcclurmc_ has joined #openstack06:06
<HugoKuo__> zigo-_-: yes he did  06:09
*** fantasy has joined #openstack06:11
*** guigui has joined #openstack06:13
*** fantasy has quit IRC06:25
<alekibango> zigo-_-: now finished breakfast  06:29
*** allsystemsarego has joined #openstack06:30
*** allsystemsarego has joined #openstack06:30
*** nacx has joined #openstack06:32
*** fantasy has joined #openstack06:32
*** johnpur has quit IRC06:34
*** zul has joined #openstack06:34
<HugoKuo__> playing with StackOps ...  06:35
*** dendro-afk is now known as dendrobates06:36
<HugoKuo__> have monitoring features been added in the Cactus release?  06:39
<zigo-_-> alekibango: I got Glance and Swift packages ready.  06:40
<zigo-_-> Now I need to configure Swift and Glance to work together.  06:40
<zigo-_-> Can you help?  06:40
*** zul has quit IRC06:49
*** s1cz has quit IRC06:50
*** gaveen has joined #openstack06:53
*** gaveen has joined #openstack06:53
*** omidhdl has quit IRC06:55
*** rds__ has quit IRC06:55
*** omidhdl has joined #openstack06:58
*** keds has joined #openstack06:59
*** gaveen has quit IRC07:01
*** rds__ has joined #openstack07:08
*** lborda has joined #openstack07:09
*** Pyro_ has quit IRC07:12
*** arun_ has joined #openstack07:13
*** s1cz has joined #openstack07:13
*** Beens has quit IRC07:16
*** zul has joined #openstack07:16
*** mgoldmann has joined #openstack07:19
*** mgoldmann has joined #openstack07:19
*** nerens has joined #openstack07:27
*** krish|wired-in has joined #openstack07:28
*** fantasy has quit IRC07:28
*** zaitcev has quit IRC07:32
*** obino has quit IRC07:34
*** fantasy has joined #openstack07:45
*** kraay has joined #openstack07:46
*** fantasy has quit IRC07:50
*** HugoKuo__ has quit IRC07:53
*** HugoKuo has joined #openstack07:53
*** perestrelka has quit IRC07:54
*** daveiw has joined #openstack07:56
*** fantasy has joined #openstack07:57
*** lborda has quit IRC07:58
*** zul has quit IRC07:58
*** fantasy has quit IRC08:01
*** zul has joined #openstack08:01
*** jeffjapan has quit IRC08:01
*** radek has joined #openstack08:03
<radek> hi, if I have an instance of a Windows image running and I've made changes to the OS, is there a way to save it as a new image?  08:05
<radek> how would you do it?  08:05
*** toluene has quit IRC08:05
*** infinite-scale has joined #openstack08:06
<infinite-scale> anyone have an idea for a 2-server setup for nova and a 2-server setup for swift?  08:07
<infinite-scale> I imagine 2 clusters, each with an all-in-one server  08:07
<infinite-scale> in a later stage you could just add nodes to the clusters.  08:08
*** fantasy has joined #openstack08:10
*** rcc has joined #openstack08:16
*** MarkAtwood has quit IRC08:18
*** fantasy has quit IRC08:20
<radek> anyone?  08:22
*** tjikkun has joined #openstack08:25
*** tjikkun has joined #openstack08:25
*** infinite-scale has quit IRC08:26
*** DigitalFlux has joined #openstack08:27
*** DigitalFlux has joined #openstack08:27
*** hggdh has joined #openstack08:28
*** rchavik has quit IRC08:31
*** zul has quit IRC08:32
*** fantasy has joined #openstack08:34
*** kashyap has quit IRC08:34
*** fantasy has quit IRC08:35
*** hggdh has quit IRC08:43
*** arun_ has quit IRC08:48
*** infinite-scale has joined #openstack08:50
*** mcclurmc has joined #openstack08:52
*** watcher has joined #openstack08:53
*** kraay has quit IRC08:58
*** lurkaboo is now known as purkaboo09:04
*** purkaboo is now known as purpaboo09:04
*** dendrobates is now known as dendro-afk09:06
*** guigui has quit IRC09:07
*** guigui has joined #openstack09:07
*** s1cz has quit IRC09:09
*** s1cz has joined #openstack09:10
*** infinite-scale has quit IRC09:17
*** infinite-scale has joined #openstack09:19
*** jokajak` has joined #openstack09:19
*** jokajak has quit IRC09:19
*** Eyk has joined #openstack09:22
*** bkkrw has joined #openstack09:28
*** Eyk has quit IRC09:31
*** miclorb_ has joined #openstack09:37
*** infinite-scale has quit IRC09:39
*** zul has joined #openstack09:45
*** taihen_ is now known as taihen09:47
*** perestrelka has joined #openstack09:50
*** anticw has quit IRC09:55
*** zul has quit IRC09:56
*** anticw has joined #openstack09:57
*** Eyk has joined #openstack10:00
*** infinite-scale has joined #openstack10:03
<infinite-scale> scripted installation isn't working properly  10:03
<infinite-scale> I tried to follow this installation: http://docs.openstack.org/cactus/openstack-compute/admin/content/scripted-ubuntu-installation.html  10:04
*** rchavik has joined #openstack10:06
*** CloudChris has joined #openstack10:09
*** adjohn has quit IRC10:09
*** CloudChris has left #openstack10:09
*** winston-d has quit IRC10:15
*** lborda has joined #openstack10:17
*** rchavik has quit IRC10:19
*** rchavik has joined #openstack10:19
*** rchavik has joined #openstack10:19
*** guigui has quit IRC10:21
*** zul has joined #openstack10:21
*** Eyk_ has joined #openstack10:24
*** Eyk has quit IRC10:26
*** santhosh_ has joined #openstack10:30
*** santhosh has quit IRC10:30
*** santhosh_ is now known as santhosh10:30
*** Eyk_ has quit IRC10:31
*** zul has quit IRC10:33
*** lborda has quit IRC10:37
*** Eyk_ has joined #openstack10:46
*** santhosh has quit IRC10:52
*** fabiand__ has joined #openstack10:54
*** santhosh has joined #openstack10:55
*** jfluhmann has quit IRC10:56
*** pllopis has joined #openstack11:00
<pllopis> hello  11:00
*** Eyk_ has quit IRC11:02
*** fabiand__ has quit IRC11:08
*** Ryan_Lane has quit IRC11:10
*** omidhdl has left #openstack11:14
*** miclorb_ has quit IRC11:20
<alekibango> should swift be working on 1 server only? (i mean 1-copy-only style, on one machine)  11:21
*** miclorb_ has joined #openstack11:22
*** ChameleonSys has quit IRC11:25
*** ChameleonSys has joined #openstack11:25
*** ctennis has quit IRC11:28
*** markvoelker has joined #openstack11:32
*** Eyk has joined #openstack11:33
<infinite-scale> alekibango, you mean an all-in-one server solution?  11:33
<alekibango> yes... kind of  11:33
<alekibango> without replication  11:33
<alekibango> just to test it  11:33
<infinite-scale> http://swift.openstack.org/development_saio.html  11:33
<infinite-scale> should do it I think  11:34
<infinite-scale> if you wanna try a multinode setup you can still have no replication.  11:34
<alekibango> will it work with only one device and with only 1 object server?  11:34
<alekibango> not like 1/2/3/4 .conf  11:34
<infinite-scale> as the number of replicas of an object is related to the number of clusters  11:34
<alekibango> did someone try it with only 1?  11:35
<infinite-scale> you mean with one node?  11:35
<infinite-scale> or one server at all  11:35
<alekibango> yes  11:35
<alekibango> one everything  11:35
<zigo-_-> As in, one storage device.  11:36
<infinite-scale> yeah this is possible  11:36
<zigo-_-> http://paste.openstack.org/show/1313/  11:36
<infinite-scale> the link is doing it in a VM I think  11:36
<zigo-_-> I got this error...  11:36
<zigo-_-> I'm quite stuck for a long time with it.  11:36
<alekibango> zigo-_-: its more readable on one line lol  11:37
<infinite-scale> http://nova.openstack.org/devref/development.environment.html  11:37
<infinite-scale> perhaps that will help  11:38
<infinite-scale> alekibango,  11:38
<zigo-_-> infinite-scale: alekibango thinks this error is because of one device only ...  11:38
<alekibango> it was just an idea  11:38
<zigo-_-> I think it has nothing to do with it, this is a pure auth issue.  11:38
<alekibango> yes  11:38
<alekibango> i agree its auth  11:38
<zigo-_-> infinite-scale: What do you think?  11:38
<infinite-scale> Not sure about the auth stuff  11:39
<infinite-scale> whether it has anything to do with the one-device setup  11:39
<infinite-scale> shouldn't be an issue  11:39
<zigo-_-> I'll ask later when there's more people on the channel ...  11:39
<alekibango> but still i wonder if it should work with one server only :)  11:39
<infinite-scale> something to do with sql probably  11:40
<infinite-scale> alekibango, it is working  11:40
<zigo-_-> Does Swift use MySQL?  11:40
<infinite-scale> but it isn't a production environment  11:40
<infinite-scale> I thought for user stuff, yes  11:41
<infinite-scale> perhaps it was just nova, not sure at the moment  11:41
<infinite-scale> anyway I'm off. another uni course^^  11:41
*** ctennis has joined #openstack11:42
<zigo-_-> Cheers.  11:42
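The one-replica question in the exchange above can be illustrated with a toy partition ring. This is not Swift's actual ring code; the device names, partition count, and placement scheme here are invented for illustration. The point is that a ring with replicas=1 on a single device is internally consistent, it just offers no redundancy.

```python
import hashlib

def build_ring(devices, partitions=16):
    """Toy partition -> device map, loosely in the spirit of Swift's ring
    (NOT Swift's implementation; just an illustration)."""
    return {part: devices[part % len(devices)] for part in range(partitions)}

def replicas_for(ring, name, replica_count, partitions=16):
    """Pick `replica_count` devices for an object name by hashing it
    to a starting partition and walking forward."""
    start = int(hashlib.md5(name.encode()).hexdigest(), 16) % partitions
    return [ring[(start + i) % partitions] for i in range(replica_count)]

# one device, one replica, as in the SAIO-style single-machine setup
ring = build_ring(["sdb1"])
print(replicas_for(ring, "photo.jpg", 1))  # -> ['sdb1']
```

With two devices and two replicas the same walk lands on both devices, which is the behaviour the multi-node docs assume.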
*** zul has joined #openstack11:43
*** vernhart has joined #openstack11:43
*** icarus901 has quit IRC11:44
*** miclorb_ has quit IRC11:45
*** infinite-scale has quit IRC11:46
*** Jordandev has joined #openstack11:47
<ccooke> Hiya  11:47
<ccooke> Anyone know the minimum components necessary to make nova-compute work?  11:47
*** Jordandev has quit IRC11:48
*** guigui has joined #openstack11:56
*** dirkx__ has joined #openstack12:01
*** guigui has joined #openstack12:03
*** GasbaKid has joined #openstack12:03
*** hggdh has joined #openstack12:09
*** hggdh has quit IRC12:11
*** hggdh has joined #openstack12:12
*** infinite-scale has joined #openstack12:12
*** zul has quit IRC12:15
*** rchavik has quit IRC12:16
*** scalability-junk has joined #openstack12:16
*** dendro-afk is now known as dendrobates12:16
*** shentonfreude has quit IRC12:17
*** katkee has joined #openstack12:18
*** katkee has quit IRC12:20
*** rackerhacker has quit IRC12:26
*** kakella has joined #openstack12:27
*** kakella has left #openstack12:27
*** rackerhacker has joined #openstack12:33
<alekibango> ccooke: you need strong will and some luck  12:47
<alekibango> :)  12:47
*** GasbaKid has quit IRC12:47
<alekibango> ccooke: try reading this http://fnords.wordpress.com/2010/12/02/bleeding-edge-openstack-nova-on-maverick/  12:47
<alekibango> if you are on maverick  12:48
*** zul has joined #openstack12:51
*** infinite-scale has quit IRC12:53
*** dprince has joined #openstack12:55
*** KnuckleSangwich has quit IRC12:55
*** jokajak` has quit IRC12:57
*** jokajak` has joined #openstack12:57
*** jokajak` is now known as jokajak12:57
*** shentonfreude has joined #openstack13:01
*** zul has quit IRC13:06
*** aliguori has joined #openstack13:06
*** dendrobates is now known as dendro-afk13:06
*** zul has joined #openstack13:06
*** hggdh has quit IRC13:07
*** nacx has quit IRC13:09
*** zul has quit IRC13:12
*** hadrian has joined #openstack13:12
<ccooke> alekibango: I have a working openstack on natty at the moment  13:15
<alekibango> aha  13:15
<ccooke> I'm currently working on *just* a nova-compute node  13:15
<alekibango> if you are adding only  13:15
<ccooke> just double-checking what it needs to talk to  13:15
<alekibango> compute should be ok iirc -- provided you have storage space or storage nodes along  13:17
<ccooke> I'm building a natty chroot to run on XenServer nodes  13:17
*** zenmatt has joined #openstack13:17
<ccooke> but due to Fun, my Xen server is in a different security zone to the rest of nova  13:17
<ccooke> so I'm port forwarding.  13:17
<ccooke> Should be fine, though :-)  13:18
<alekibango> ah this might kill you lol  13:18
*** bcwaldon has joined #openstack13:18
<alekibango> natty chroot? why?  13:18
<alekibango> nova is best on bare metal  13:18
<ccooke> Not on a XenServer build  13:19
<alekibango> i am not sure i understand what you are trying to do  13:19
<ccooke> which is based on an out-of-date redhat with limited space  13:19
*** rchavik has joined #openstack13:19
*** rchavik has joined #openstack13:19
<ccooke> Basically, for openstack XenServer (that's Citrix Xen, not open-source Xen) you must have a nova-compute per hypervisor, and it *must* run either in the dom0 or in a domU on the hypervisor you want it to control  13:20
<ccooke> so, there are two approaches: use a VM for the nova-compute binary, or install native into dom0  13:20
<alekibango> ah, i dont know a bit about citrix xen + nova  13:21
<ccooke> I don't like the VM approach due to it being messy and awkward  13:21
<ccooke> so I'm building a sensible approach to installing it on the XenServer dom0  13:21
<alekibango> ic  13:21
<ccooke> there's limited space, and installing nova into the base image would need *huge* changes  13:22
<ccooke> very unclean  13:22
<ccooke> Or... I can build a minimised natty chroot, make it into a squashfs image, and wrap it around with scripts  13:22
<alekibango> ic now...  13:22
<ccooke> You then get a versioned, packagable "nova-compute" blob you can install onto a XenServer  13:22
<scalability-junk> who has a production environment running? with several service servers etc.  13:23
<ccooke> it's still around 260M all told, but that is reasobale  13:23
<scalability-junk> do you have a second mysql server?  13:23
<ccooke> reasonable!  13:23
<alekibango> ccooke: its possible that nova might miss some depends :) be on alert and report it when it happens  13:23
<ccooke> alekibango: yeah, caught out before on that :-)  13:23
<alekibango> dont suspect a friend! report him!  13:23
<alekibango> er.. bug  13:23
*** patcoll has joined #openstack13:23
*** santhosh_ has joined #openstack13:28
*** dendro-afk is now known as dendrobates13:28
*** infinite-scale has joined #openstack13:28
*** Zangetsue has quit IRC13:28
*** Zangetsue has joined #openstack13:29
*** santhosh has quit IRC13:29
*** santhosh_ is now known as santhosh13:29
*** msivanes has joined #openstack13:29
<scalability-junk> noone here with a production environment?  13:30
<scalability-junk> or are there any docs about the environment at rackspace?  13:30
*** amccabe has joined #openstack13:31
*** fabiand__ has joined #openstack13:35
*** lborda has joined #openstack13:36
<ccooke> scalability-junk: what are you trying to do?  13:37
<scalability-junk> implement my own private cloud  13:38
<scalability-junk> but I read in the docs that the minimum would be a 4-server setup for swift + 4 for nova  13:38
<scalability-junk> and 8 servers is a bit too much for a student right now :D  13:38
<j05h> scalability-junk: you can do it with one.  13:39
<scalability-junk> production environment with failover?  13:39
<j05h> now you have a requirement ;)  13:39
<scalability-junk> The plan was 2 clusters, each with one all-in-one server  13:39
<scalability-junk> so that each service is there twice  13:39
<scalability-junk> but I'm not sure on how this could communicate  13:40
<scalability-junk> replicated database and so on  13:40
<j05h> it seems like it depends on what kind of availability you're looking for and what your use cases are. you could also do backups and restore. if you're on a budget, i'm not sure you'd want to leave half of your hardware unused.  13:42
<scalability-junk> so what would I need to back up for me to easily get back up  13:43
<scalability-junk> ?  13:43
<scalability-junk> and what setup would you say would be alright? My budget is for a maximum of 4 servers: 2 for swift and 2 for nova  13:43
*** Zangetsue has quit IRC13:44
*** f4m8 is now known as f4m8_13:45
*** krish|wired-in has quit IRC13:45
*** santhosh has quit IRC13:47
<scalability-junk> j05h: I thought with using 2 clusters I'd have a sort of HA environment. because of the budget I would use the single server per cluster as an all-in-one (+nova-compute), but I wasn't sure how this could be done well.  13:48
<scalability-junk> So I thought seeing a bigger production environment would be great, to study the infrastructure and use it for this micro start^^  13:49
*** lborda has quit IRC13:49
*** lborda has joined #openstack13:50
<zigo-_-> :)  13:50
* scalability-junk zigo :(  13:52
<alekibango> scalability-junk: there are not enough companies using NOVA in production now  13:53
<alekibango> and 4 servers is not enough to get started  13:53
<alekibango> 5 = small test, proving that you can do it  13:53
*** robbiew has joined #openstack13:53
<alekibango> 20 = beta  13:53
<alekibango> 100+ => production  13:53
<alekibango> if you have only 4 total... maybe you should rather use virt-manager instead  13:54
*** robbiew has left #openstack13:54
<scalability-junk> alekibango: it isn't like I haven't tried around, but I wanted to try openstack now. Already used eucalyptus, nimbula, opennebula etc.  13:55
<alekibango> scalability-junk: nova doesnt care how you replicate your database (its your job)  13:55
<alekibango> nova can take care of replicating your virtual drives, but thats not enabled by default and its not trivial  13:55
*** rchavik has quit IRC13:55
<scalability-junk> Yeah I just wanted to ask if someone has a production environment and could give me a few hints on how they make it HA and stuff  13:56
<alekibango> scalability-junk: openstack is prolly hardest to get installed right (at least nova)  13:56
<alekibango> but it has greatest adaptability of all  13:56
*** lborda has quit IRC13:56
*** fabiand__ has quit IRC13:56
<alekibango> thats maybe the problem even, as there are myriads of ways how to install it  13:56
*** pllopis has left #openstack13:56
<scalability-junk> that's why I wanna try it, I love the way it's doing things and I love python :D  13:57
<scalability-junk> anyway so you think it's not at all possible to use 4 servers for a production environment with swift and nova?  13:57
<alekibango> it might be  13:57
<alekibango> but it will be 'production'  13:58
<alekibango> with little added value  13:58
<scalability-junk> the value is scalability  13:58
<alekibango> if you will have 40 servers later, ok  13:58
<scalability-junk> and experience for the most part :P  13:58
<alekibango> if not, nova might be just a big cannon for you  13:59
<alekibango> too big i mean  13:59
<alekibango> scalability-junk: but nova is fun, really  13:59
<scalability-junk> that's why I tried eucalyptus and so on, but that's not what I wanna use or try or whatever  13:59
<alekibango> scalability-junk: i have a similar setup, 4 servers  14:00
<zigo-_-> I believe that lots of hard issues of swift can be solved with a bit of debconf and default config files.  14:00
<alekibango> and it can work  14:00
<alekibango> but its not worth it for production  14:00
<zigo-_-> I don't understand why currently there's no default one ...  14:00
<scalability-junk> 4 server setup only nova?  14:00
<Eyk> to install swift 1.4dev for production, is "python setup.py develop" right, or do i have to change the "develop" to something else?  14:00
<alekibango> swift for production is great  14:00
<alekibango> nova is the hard one  14:00
<alekibango> :)  14:01
<alekibango> Eyk: i use swift cactus 1.3.  14:01
<creiht> Eyk: a good starting point is here: http://swift.openstack.org/howto_installmultinode.html  14:01
<alekibango> from packages  14:01
<alekibango> scalability-junk: you will start liking nova when you think about 100 or more nodes  14:02
<alekibango> thats when its excelling  14:02
<scalability-junk> alekibango: If I only go with easy things and things aimed at my size, how can I grow ;)  14:02
<alekibango> especially with some configuration management system  14:02
<scalability-junk> like puppet?  14:02
<alekibango> puppet or chef  14:02
<alekibango> i started playing with chef today  14:02
*** galthaus has joined #openstack14:03
<alekibango> i use fai for install :)  14:03
<Eyk> i just dont know with which command I should install the downloaded trunk version. "python setup.py develop" <-- is this right?  14:03
<alekibango> but i still have issues.. its not working as i want it to  14:03
<creiht> Eyk: python setup.py install  14:03
<creiht> will install the python packages with python  14:03
<scalability-junk> do you have failover methods?  14:03
<scalability-junk> 2 services like api and so on?  14:03
<scalability-junk> or are you just going with one server for the services, one database, and 3 clusters with single nodes in them?  14:04
<alekibango> scalability-junk: rather do 4 nodes for swift  14:04
<alekibango> 1 node for the nova controller and 3 compute nodes  14:04
<alekibango> combine into one system if you wish  14:04
<alekibango> to have headaches  14:04
<alekibango> :)  14:04
<Eyk> creiht, tnx  14:04
<alekibango> so you can test whats happening when 2 nodes go down  14:05
<scalability-junk> what would you do if the node for the nova controller dies?  14:05
<creiht> scalability-junk: short story for swift is you could get it installed on 2 nodes, but it isn't going to work optimally. You need a bare minimum of 3, and we recommend a bare minimum of 5 to handle failure scenarios  14:05
<alekibango> scalability-junk: you use a big megaphone to call the admin  14:05
<alekibango> :)  14:05
<alekibango> creiht: thanks, you said it well  14:06
<alekibango> scalability-junk: and what if the nova-network node dies?  14:06
<scalability-junk> creiht: yeah, read that in the docs. but starting small would be cool you know  14:06
<alekibango> scalability-junk: you can run more controller nodes  14:06
<alekibango> they dont care where they are  14:07
<creiht> scalability-junk: yeah I understand, but when we first wrote it, we weren't thinking that small of scale :)  14:07
<alekibango> scalability-junk: at least not for production  14:07
<alekibango> creiht: still, i like it very much  14:07
<creiht> We wrote it with the intention of starting off with petabytes of storage  14:07
<scalability-junk> creiht: I know I know, but why not try. are there any docs on how to set up a production environment with a lot of servers?  14:08
*** keny has joined #openstack14:08
<creiht> scalability-junk: the doc I linked to above gets you started  14:08
<Eyk> when I remove or add a disk to a swift ring, can I just add the disk and rebalance the ring on some node, or do I need to do something else like replicate or restart something?  14:08
<alekibango> scalability-junk: just those docs worked for me on 4  14:08
<alekibango> one afternoon and swift was ok... unlike nova lol  14:08
<creiht> http://docs.openstack.org/cactus/openstack-object-storage/admin/content/  14:09
<alekibango> nova manuals are often outdated and wrong :)  14:09
<creiht> is another good start  14:09
<creiht> scalability-junk: most of it has to do with handling failure and how much risk you are willing to take  14:09
<alekibango> scalability-junk: look at sheepdog for reliable storage...  14:10
<alekibango> you need at least 3/4 nodes to start with  14:10
<scalability-junk> did I miss something? I just found talk about the storage node failures  14:10
*** robbiew1 has joined #openstack14:10
*** tblamer has joined #openstack14:12
<creiht> scalability-junk: without going into a lot of detail: in a 2-node swift cluster, if one node goes down, a large portion of operations (especially PUTs) will not succeed until that node is brought back up  14:13
<scalability-junk> ah ok  14:14
<scalability-junk> so 4 servers minimum for swift  14:14
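creiht's point about PUTs failing on a 2-node cluster can be made concrete with a write-quorum sketch. This is a simplified model, not Swift's exact placement or proxy logic: assume 3 replicas and that a write must reach a majority of the replica copies to succeed.

```python
def write_quorum(replica_count):
    # a PUT must land on a majority of the replicas to succeed
    return replica_count // 2 + 1

def put_succeeds(replica_nodes, down_nodes):
    """replica_nodes: the node holding each replica copy (repeats happen
    on tiny clusters); down_nodes: set of failed nodes."""
    reachable = [n for n in replica_nodes if n not in down_nodes]
    return len(reachable) >= write_quorum(len(replica_nodes))

# 3 replicas squeezed onto 2 nodes: node A holds two of the copies
replicas = ["A", "A", "B"]
print(put_succeeds(replicas, down_nodes={"A"}))  # -> False: only 1 of 3 copies reachable
print(put_succeeds(replicas, down_nodes={"B"}))  # -> True: 2 of 3 copies reachable
```

Whether a given PUT survives depends on which node died, which is why "a large portion of operations" fail rather than all of them.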
*** hggdh has joined #openstack14:14
<alekibango> he said 5 :)  14:14
<scalability-junk> and 4 servers minimum for nova, that gets expensive ;)  14:14
<creiht> 3 is the bare minimum  14:14
<alekibango> recommended 5 :)  14:15
<scalability-junk> ok  14:15
<creiht> yeah  14:15
<alekibango> servers are cheap now  14:15
<scalability-junk> 29€ per month^^ per server  14:15
<OutBackDingo> alekibango: not if they're 120-core boxes :P  14:15
<alekibango> hehe  14:15
<alekibango> OutBackDingo: where can i get those?  14:15
<OutBackDingo> alekibango: from me :P  14:15
<alekibango> url?  14:15
<alekibango> i didnt see those here, maybe i was not looking much  14:16
<OutBackDingo> you want to buy?  14:16
<OutBackDingo> or lease?  14:16
<alekibango> i want to see first  14:16
<OutBackDingo> we sell SGI  14:16
<OutBackDingo> :)  14:16
<alekibango> :)  14:16
<OutBackDingo> and Intel  14:16
*** robbiew1 has quit IRC14:16
<OutBackDingo> and networking gear and high-end storage  14:17
*** cdbs is now known as cdbs-isnt-good14:17
<alekibango> OutBackDingo: i would like to see new intel boards with many cpus  14:17
*** cdbs-isnt-good is now known as cdbs14:17
<alekibango> even if i used sgi for years :)  14:17
<OutBackDingo> alekibango: ive got an Intel box here with 48 and 96 cores  14:18
<nhm> alekibango: me too. I'd like to see boards with hypercube or even 3D torus QPI setups.  14:18
<alekibango> and btw for swift - i think about making my own boxes - with a cheap arm cpu :)  14:18
<nhm> OutBackDingo: 96 cores is interesting. All on the same board?  14:18
<alekibango> such a machine can be smaller than the disk :)  14:18
<OutBackDingo> nhm: yupp  14:18
<nhm> OutBackDingo: How many QPI links per chip?  14:19
<alekibango> so i would just stack drives up, stick computers on them where appropriate ... to have storage (LOL)  14:19
<Eyk> an arm system is enough for swift?  14:19
<alekibango> why not  14:19
<alekibango> when its enough to stream video  14:19
<alekibango> ... i need to test it first, right  14:20
<alekibango> but i think it will work well  14:20
<Eyk> swift is very resource friendly?  14:20
<alekibango> imho its not a cpu hog, is it?  14:20
<alekibango> hmm but you are right, i will tell those arm-people to test it for me first :)  14:21
<OutBackDingo> nhm: looks like 2  14:21
<alekibango> having a computer for 30$ could be fun  14:22
<alekibango> especially if it has lower consumption  14:22
<OutBackDingo> nhm: actually the spec sheet says 4  14:22
<nhm> OutBackDingo: huh, that's surprising. Given how many sockets there must be on that board, I can't imagine you get very fast socket->socket communication.  14:22
*** grapex has joined #openstack14:23
*** ianp100 has joined #openstack14:23
*** johnpur has joined #openstack14:23
*** ChanServ sets mode: +v johnpur14:23
*** blamar_ has joined #openstack14:23
<ianp100> what do the swift scripts swift-stats-populate and swift-stats-report do?  14:23
*** blamar_ is now known as blamar14:23
<OutBackDingo> alekibango: question, why is 5 recommended?  14:24
*** larry__ has joined #openstack14:24
*** shentonfreude has quit IRC14:25
<alekibango> [2011-05-10 16:13] <creiht> scalability-junk: without going in to a lot of detail, in a 2 node swift cluster, if one node goes down, a large portion of operations (especially PUTs) will not succeed until that node is brought back up  14:25
<alekibango> OutBackDingo: ^^  14:25
*** CloudChris has joined #openstack14:25
*** zenmatt has quit IRC14:26
<creiht> ianp100: http://swift.openstack.org/overview_stats.html  14:26
*** CloudChris has left #openstack14:26
<annegentle> kpepple: ping  14:26
<annegentle> kpepple: ah, you're in Seoul now, will email :)  14:28
<notmyname> ianp100: creiht: swift-stats-populate and swift-stats-report are the dispersion reports (and will be renamed for clarity in the next release, I think)  14:29
<alekibango> hmm those arm boards i was thinking about have no sata... only usb2 ... hmm, need to wait for better :)  14:29
<ianp100> creiht: ive looked at that link but i cant see where swift-stats-populate and swift-stats-report are mentioned  14:29
*** grapex has quit IRC14:29
<notmyname> ianp100: http://swift.openstack.org/admin_guide.html#cluster-health  14:30
*** spectorclan_ has joined #openstack14:30
<creiht> notmyname: heh... sorry  14:30
<creiht> I always do that  14:30
<notmyname> :-) and that's why it needs to be renamed  14:30
<creiht> indeed  14:30
<notmyname> I think gholt has a merge for that now  14:31
<creiht> ahh cool  14:31
*** jkoelker has joined #openstack14:31
ianp100notmyname: perfect, thanks!14:32
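notmyname's description above can be sketched conceptually: swift-stats-populate writes objects that cover known ring partitions, and swift-stats-report then measures how many of the expected replica copies are actually reachable. A minimal illustration of that measurement (not Swift's actual code; the node names and ring layout are made up):

```python
# Conceptual sketch of what Swift's dispersion report measures
# (an illustration, not Swift's implementation).

def dispersion_coverage(partition_replicas, down_nodes):
    """partition_replicas: dict mapping partition -> list of node names.
    down_nodes: set of node names currently unreachable.
    Returns the percentage of expected replica copies that are reachable."""
    expected = 0
    found = 0
    for nodes in partition_replicas.values():
        for node in nodes:
            expected += 1
            if node not in down_nodes:
                found += 1
    return 100.0 * found / expected if expected else 100.0

# Example: a 3-replica ring with 2 sampled partitions.
replicas = {0: ["node1", "node2", "node3"],
            1: ["node2", "node3", "node4"]}
print(dispersion_coverage(replicas, down_nodes=set()))      # 100.0
print(dispersion_coverage(replicas, down_nodes={"node2"}))  # 4 of 6 copies reachable
```

A healthy cluster reports close to 100%; a down node drags the percentage toward the kind of failure creiht describes for two-node clusters.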
*** shentonfreude has joined #openstack14:33
scalability-junkdamn I hate being a student with no money to sponsor my own small production cloud :P14:33
*** zenmatt has joined #openstack14:34
*** skiold has joined #openstack14:35
scalability-junkanyway I'm trying this stuff 24/7 to get nova running on a 2-node cluster with failover :D14:40
scalability-junkwill be fun :D14:40
kenyI am trying to get openstack running on debian. I did an installation from source. Things seem to be working, except nova-compute won't start ...14:41
kenyI captured the error log: http://pastebin.com/4pBAPVVG14:41
kenyClassNotFound: Class get_connection could not be found <- I'm no python expert, but get_connection looks like a method to me14:42
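keny's traceback is the usual symptom of a dotted-path loader being handed something that isn't a class. A hedged sketch of how such a loader typically works (an illustration, not nova's actual utility): the string is split into a module part and a final attribute, and the last segment is reported as a missing "class" even when it is really a function name, which is what a mistyped or stale flag value produces.

```python
# Hedged sketch of a dotted-path class loader, similar in spirit to the
# utility behind the "ClassNotFound: Class get_connection" error above.
# This is an illustration only, not nova's code.

import importlib

class ClassNotFound(Exception):
    pass

def import_class(import_str):
    """Import a class from a dotted string like 'package.module.Class'."""
    mod_str, _, class_str = import_str.rpartition('.')
    try:
        module = importlib.import_module(mod_str)
        return getattr(module, class_str)
    except (ImportError, AttributeError, ValueError):
        raise ClassNotFound('Class %s could not be found' % class_str)

# A valid dotted path resolves to the class:
OD = import_class('collections.OrderedDict')

# A path whose last segment doesn't exist fails with the same style of error:
try:
    import_class('collections.get_connection')
except ClassNotFound as e:
    print(e)  # Class get_connection could not be found
```

So the fix is usually in configuration: some flag should name a full `module.Class` path but instead points at (or resolves to) a bare function name like `get_connection`.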
*** dendrobates is now known as dendro-afk14:42
*** grapex has joined #openstack14:45
*** niksnut has quit IRC14:46
*** rnirmal has joined #openstack14:47
Eykwhen I change a swift builder file, do I have to copy the .ring.gz file immediately to all nodes and reload their services? (I don't understand the ring management from the docs)14:48
*** hggdh has quit IRC14:53
*** skiold has quit IRC14:57
*** dragondm has joined #openstack14:59
*** shentonfreude has quit IRC14:59
*** bkkrw has quit IRC15:01
ccookeAny of the XenServer people around atm?15:01
ccookeI now have a running nova-compute on my XenServer hypervisor15:02
*** mgoldmann has quit IRC15:02
ccookebut it's complaining that it can't find a network for bridge xenbr015:02
*** photron has joined #openstack15:02
ccookemy config appears to be correct as far as the documentation goes, but...15:02
*** shentonfreude has joined #openstack15:03
*** hagarth has quit IRC15:06
*** skiold has joined #openstack15:08
*** ianp100 has quit IRC15:09
*** niksnut has joined #openstack15:13
*** zul has joined #openstack15:13
*** obino has joined #openstack15:16
galthausEyk: yes, that is my understanding.15:16
gholtEyk: galthaus: Copying the ring files out is enough. Every service that uses the ring checks the file's mtime occasionally and will reload its copy of the ring automatically.15:17
gholtWhen copying the ring out, it is best to copy it to a temporary location and then move it into place. You don't want to end up with half-rings or anything. :)15:18
galthausAh - very nice15:18
Eykthank you for the info15:20
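gholt's copy-to-a-temporary-name advice can be sketched as follows (a Python illustration; real deployments typically just use scp and mv in a shell script, and the paths are whatever your cluster uses). The key point is that `os.rename` is atomic on POSIX when source and destination are on the same filesystem, so a service that re-reads the ring by mtime never sees a half-written file.

```python
# Sketch of atomically deploying a new ring file: stage it under a
# temporary name in the destination directory, then rename into place.

import os
import shutil
import tempfile

def deploy_ring(new_ring, dest_path):
    dest_dir = os.path.dirname(dest_path) or '.'
    # Stage next to the destination so the final rename stays on one filesystem.
    fd, tmp_path = tempfile.mkstemp(dir=dest_dir, suffix='.ring.gz.tmp')
    os.close(fd)
    shutil.copy(new_ring, tmp_path)
    os.rename(tmp_path, dest_path)  # atomic replacement on POSIX
```

As gholt notes, no restart is needed afterwards: each service notices the changed mtime and reloads its copy of the ring on its own.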
*** larry__ has quit IRC15:20
*** guigui has quit IRC15:22
dprincejaypipes: So are you going to do the PPA package updates for glance API versioning? Soren?15:24
jaypipesdprince: I will work with soren once I get your latest fixes done.15:24
ccookedprince: Got a sec for a Xen-related query?15:24
jaypipesdprince: and vishy, soren, perhaps it's worth a quick call today to coordinate... the Glance API is changing (adding a version identifier) and existing Glance clients *will* break.15:25
jaypipessoren: ^^15:25
dprincejaypipes: cool. So I'm dying to try that out. We can smoke test that fairly easily with our VPC setup, so give me or Waldon a shout when that installer branch is in.15:25
jaypipesdprince: I'll have those fixes up there within half an hour. really appreciate your review.15:26
dprincejaypipes: I need to coordinate w/ you on a couple of Chef changes as well, but I can bang those out fairly quickly.15:26
dprincejaypipes: Sounds good. NP15:26
jaypipesdprince: ack on chef.15:27
dprinceccooke: what is up? I'm not the Xen expert but shoot.15:27
jaypipesdprince: mainly around the conf changes, I assume?15:27
creihtanyone know off the top of their head where the api extensions blueprint is?15:27
dprincejaypipes: yes sir.15:27
ccookedprince: you're the only name I recognise active who has previously talked to me about it :-)15:27
*** enigma1 has joined #openstack15:28
dprinceccooke: sounds like it is time to change my handle. Anyway. What you got?15:28
dprince+creiht: That was part of the OS API v1.1 blueprint right?15:28
westmaascreiht: there isn't a separate one, it's rolled into the 1.1 spec15:28
ccookedprince: I have a nova-compute that appears to be basically working on a XenServer15:28
creihtdprince, westmaas: ahh thanks!15:29
ccookedprince: but it's complaining when I try to create an instance: "Error: Network could not be found for bridge xenbr0"15:29
*** zul has quit IRC15:29
ccookedprince: as far as I can tell, I've followed the docs. Does this mean I also need a nova-network on the hypervisor?15:29
dprinceccooke: What bridges do you have?15:29
dprinceccooke: brctl show15:30
ccookethis is a standard XenServer box - xenbr0 has been preconfigured correctly15:30
ccookeand yes, it (and xenbr1-3) are in the output of brctl show15:30
dprinceccooke: sure. That is correct.15:30
*** fabiand__ has joined #openstack15:31
ccookeWhat is correct? Context unclear :-)15:31
dprinceccooke: Sorry. Just meant that it was correct that XenServer creates that by default.15:32
ccookeAh, yes.15:32
dprinceccooke: checking my setup....15:32
ccooke(I have a network configured on the box the rest of nova is running on, but I don't see any way to configure that as including any particular host or bridge. My nova config is set to the flat network manager as per the documentation, and all the xen-specific lines in the example config are present.)15:33
dprinceccooke: ifconfig xenbr015:34
dprinceccooke: does xenbr0 have an IP?15:34
ccookedprince: it's up and has an IP15:35
sorenjaypipes: I'm at the ubuntu developer summit this week, so I haven't much time for other stuff. The more specific you can be about the changes you need me to make, the better.15:35
sorenjaypipes: ...since I haven't really followed the discussion much, to be honest.15:35
*** h0cin has joined #openstack15:36
dprinceccooke: So to be clear. Xenbr0 should be on the XenServer (host machine). Not on the guest utility VM running nova-compute right?15:37
ccookedprince: I am running nova-compute directly on the hypervisor15:37
dprinceccooke: Oh.15:37
dprinceccooke: Did Ant and Pvo recommend you set things up like that the other day?15:38
jaypipessoren: no worries. so, the Glance API has been changed to add versioning into our URI structure. In addition, the previous single glance.conf has been broken into glance-api.conf and glance-registry.conf, which means the packaging/install of config files needs to be changed in coordination with the glance API version branch hitting Glance's trunk.15:38
dprinceccooke: I'm not running it that way. I'm actually not even sure the codebase supports it.15:38
sorenjaypipes: Ok. Is there a useful migration path from the single-file layout to this?15:38
ccookedprince: it was given as an equal option, and we (that is, my employer) decided that we'd much prefer to see a manageable nova-compute for the HV15:38
jaypipesbcwaldon, dprince: what are your thoughts on /v1.0/images vs. /v1/images? I'm leaning towards /v1.0/images because that is how the Swift API is structured.15:39
ccookedprince: I'm currently working on a squashfs blob with a minimal natty root in it, with some wrapper scripts.15:39
jaypipessoren: hmm... good question.15:39
ccookedprince: currently comes to a bit over 220M, which is easily managed directly on dom015:39
jaypipessoren: not sure what is possible and what isn't... perhaps I should write a simple migration script for the glance.conf?15:40
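The simple migration script jaypipes floats could look roughly like this: split the single glance.conf into api and registry files by section name. The section-name test here is a hypothetical stand-in; Glance's real section names and conf layout may differ, so treat this purely as a sketch of the approach.

```python
# Hedged sketch of a glance.conf -> glance-api.conf + glance-registry.conf
# migration. Assumption (hypothetical): registry-related sections can be
# recognized by having "registry" in their name; everything else goes to
# the API conf.

try:
    from configparser import ConfigParser                        # Python 3
except ImportError:
    from ConfigParser import SafeConfigParser as ConfigParser    # Python 2

def split_conf(src, api_out, registry_out):
    parser = ConfigParser()
    parser.read(src)
    api, registry = ConfigParser(), ConfigParser()
    for section in parser.sections():
        target = registry if 'registry' in section else api
        target.add_section(section)
        for key, value in parser.items(section):
            target.set(section, key, value)
    with open(api_out, 'w') as f:
        api.write(f)
    with open(registry_out, 'w') as f:
        registry.write(f)
```

This addresses soren's upgrade concern: run once at package-upgrade time, leaving the old single file in place as a backup.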
dprinceccooke: Hmmm. So a chroot of sorts?15:40
sorenOh, by the way, Oneiric (Ubuntu next version) will very likely have Xen dom0 support.15:40
ccookedprince: basically15:40
sorenI guess some of you will find that interesting :)15:40
ccookedprince: read-only, though, and held in a single file so it's easy to version and replace15:40
dprinceccooke: Also natty? I haven't tested it with XenServer yet.15:40
ccookedprince: *vastly* more manageable than running VMs15:41
sorenjaypipes: Not having looked at the reasons for the split it's hard for me to say something useful here. I'm just concerned about upgrade scenarios.15:41
ccookedprince: everything else I'm using is natty-based, so it seemed the best plan to keep to that.15:41
ccookedprince: openstack elsewhere seems to run fine on it. The XenServer libs are all python anyway, so there really shouldn't be any issues15:42
*** larry__ has joined #openstack15:42
jaypipessoren: so, the reason for the split was to align ourselves better with the way Swift structures its config files...15:42
ccookedprince: in your VM, do you run an instance of nova-network?15:43
sorenjaypipes: Ok.15:43
sorenjaypipes: So apart from the split, nothing has changed?15:43
jaypipessoren: it also makes it easier to a) manage the 2 different Glance servers separately, and b) eventually the aim is to be able to package the registry and API servers separately.15:43
dprinceccooke: Yes. I'm running maverick. It works great although you need to set --xenapi_remap_vbd_dev=true.15:43
sorenjaypipes: Should be a fairly simple migration.15:43
dprinceccooke: I think that is probably fixed with natty but watch out for it too perhaps.15:44
ccookedprince: wait. Your vm *does* include nova-network running?15:44
jaypipessoren: for the packaging, no, nothing else has changed. we need to work with vishy to coordinate changes that were made to the API and client, but that shouldn't affect packaging.15:44
ccookedprince: That sounds like my missing piece.15:44
dprinceccooke: No.15:44
dprinceccooke: Oh. Wait. Yeah. I see.15:44
jaypipesbaib15:45
dprinceccooke: Your nova-network also needs to use the xenbr0 bridge too I think.15:45
jaypipesbiab, even...15:45
dprinceccooke: Mine does. But it is on a different box.15:45
ccookedprince: right. Which means you need a nova-network on each hypervisor, too15:45
dprinceccooke: Yeah. That should fix your issue I think.15:45
ccookedprince: as well as a nova-compute15:45
dprinceccooke: In my setup nova-network is on a separate machine. I do however have a xenbr0 on that machine even though it isn't running XenServer.15:47
dprinceccooke: There are a couple ways to go about getting this interface. In any case I think you are on the right track.15:47
ccookedprince: .... that is a very, very broken thing to have to do :-)15:47
*** zenmatt has quit IRC15:47
dprinceccooke: maybe a bit confusing I guess. A convention of sorts. nova-network has to have some way to obtain that info.15:48
dprinceccooke: Are you using FlatDHCP?15:49
dprinceccooke: or Vlan?15:49
ccookedprince: no settings say either way15:51
dprinceccooke: What I was going to suggest is that you can have nova automatically setup the xenbr0 bridge for you on an unused interface if you use the --flat_interface flag.15:51
dprinceccooke: So if you use --network_manager=nova.network.manager.FlatDHCPManager15:52
dprinceccooke: --flat_network_bridge=xenbr015:52
dprinceccooke: --flat_interface=eth115:52
*** Ryan_Lane has joined #openstack15:52
*** fabiand__ has quit IRC15:52
dprinceccooke: Nova would then automatically create and bridge into that interface for you.15:52
dprinceccooke: Just a suggestion. I use that. Something Vish added which is quite handy.15:53
ccookedprince: huh. Okay15:53
*** zenmatt has joined #openstack15:53
ccookeI'll give that a try15:53
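Collected into a nova flagfile, dprince's suggestion reads as below. The three flags are exactly as he gives them; `eth1` stands in for whichever interface is actually unused on the host.

```
# nova.conf flagfile entries from dprince's suggestion above.
# eth1 is his example of an unused interface -- substitute your own.
--network_manager=nova.network.manager.FlatDHCPManager
--flat_network_bridge=xenbr0
--flat_interface=eth1
```

With these set, nova creates the xenbr0 bridge on the given interface automatically, so it no longer matters that the nova-network box isn't running XenServer.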
dprinceccooke: Anyway. Gotta go. I'll be online later today too.15:53
ccookethanks for the help15:54
*** krish|wired-in has joined #openstack15:55
*** medberry is now known as med_out15:56
*** zenmatt has quit IRC16:01
*** dirkx__ has quit IRC16:02
*** zenmatt has joined #openstack16:03
*** Ryan_Lane has quit IRC16:09
*** joearnold has joined #openstack16:09
*** zenmatt has quit IRC16:10
*** grapex has quit IRC16:14
*** grapex has joined #openstack16:15
*** lborda has joined #openstack16:17
*** MarkAtwood has joined #openstack16:17
*** nerens has quit IRC16:19
*** maplebed has joined #openstack16:21
*** mattray has joined #openstack16:22
*** ccustine has joined #openstack16:23
*** lborda has quit IRC16:23
*** zigo-_- has quit IRC16:28
*** nerens has joined #openstack16:29
*** jakedahn has quit IRC16:29
*** jakedahn has joined #openstack16:30
*** daveiw has quit IRC16:35
*** mgoldmann has joined #openstack16:39
*** troytoman-away is now known as troytoman16:41
*** dprince has quit IRC16:46
*** jdurgin has joined #openstack16:47
*** crescendo has quit IRC16:50
*** zenmatt has joined #openstack16:51
*** nerens has quit IRC16:55
*** dprince has joined #openstack16:57
*** purpaboo is now known as lurkaboo16:58
*** Ryan_Lane has joined #openstack16:59
*** nerens has joined #openstack17:02
*** watcher has quit IRC17:02
*** jeffk has joined #openstack17:04
*** jeffk has quit IRC17:05
*** jeffk has joined #openstack17:05
*** jeffk has quit IRC17:07
*** jeffkramer has joined #openstack17:08
*** photron has quit IRC17:11
*** jeffkramer has joined #openstack17:14
*** MotoMilind has quit IRC17:16
*** dirkx__ has joined #openstack17:18
*** Ryan_Lane is now known as Ryan_Lane|brb17:19
*** krish|wired-in has quit IRC17:20
*** jakedahn has quit IRC17:26
*** Ryan_Lane|brb is now known as Ryan_lane17:26
*** larry__ has quit IRC17:26
*** keny has quit IRC17:26
*** ChameleonSys has quit IRC17:27
*** MotoMilind has joined #openstack17:27
*** ChameleonSys has joined #openstack17:32
*** zaitcev has joined #openstack17:34
*** tjikkun has quit IRC17:37
*** Jordandev has joined #openstack17:37
*** clauden has joined #openstack17:39
*** aa___ has joined #openstack17:39
aa___howdi... got some swift questions...17:40
aa___I'm trying to figure out why swauth-prep and friends hang once in a while17:40
*** zenmatt has quit IRC17:41
aa___have folks run into similar issues?17:41
*** krish|wired-in has joined #openstack17:47
Eykis the S3 access to swift working in current trunk? If it's known to be broken, then I could stop trying ;-)17:48
*** crescendo has joined #openstack17:49
*** nelson has quit IRC17:49
*** CloudChris has joined #openstack17:49
*** nelson has joined #openstack17:50
*** CloudChris has left #openstack17:50
*** CloudChris has joined #openstack17:51
*** photron_ has joined #openstack17:52
*** krish|wired-in has quit IRC17:56
*** MotoMilind1 has joined #openstack17:59
*** MotoMilind has quit IRC17:59
*** BK_man has joined #openstack18:01
*** nijaba_afk has joined #openstack18:02
*** nijaba has quit IRC18:03
*** zenmatt has joined #openstack18:05
*** tjikkun has joined #openstack18:09
*** tjikkun has joined #openstack18:09
*** Jordandev has quit IRC18:10
*** MotoMilind1 has quit IRC18:11
*** dragondm has quit IRC18:17
*** MotoMilind has joined #openstack18:18
*** shentonfreude1 has joined #openstack18:25
*** mgius has joined #openstack18:26
*** shentonfreude has quit IRC18:28
*** mszilagyi has joined #openstack18:28
*** mgius has quit IRC18:29
*** cole has joined #openstack18:29
*** dprince_ has joined #openstack18:30
aa___are there any swift folks in the room?18:30
*** MotoMilind has quit IRC18:31
*** openfly has joined #openstack18:36
openflyhey in the nova flag file what does max_gigabytes mean?  is that maximum virtual ram or maximum virtual disk space to allocate?18:36
*** MotoMilind has joined #openstack18:38
*** mgius has joined #openstack18:38
openflyit's max volume from what i read in scheduler code18:43
* openfly &18:43
*** openfly has left #openstack18:43
coleaa___: openfly seems right..just went and looked it up18:44
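openfly's reading of the scheduler code (max_gigabytes caps volume storage, i.e. disk, not RAM) can be illustrated with a sketch of such a check. This is not nova's actual scheduler, and the default value shown is made up for the example.

```python
# Hedged sketch of a max_gigabytes-style scheduler check: the flag caps
# the total volume gigabytes a host may have allocated. Illustration
# only -- not nova's code; the default below is invented.

MAX_GIGABYTES = 10000  # illustrative default

def pick_volume_host(hosts, requested_gb, max_gigabytes=MAX_GIGABYTES):
    """hosts: dict of host name -> gigabytes already allocated.
    Return the least-loaded host that can still fit the request."""
    for host, allocated in sorted(hosts.items(), key=lambda kv: kv[1]):
        if allocated + requested_gb <= max_gigabytes:
            return host
    raise RuntimeError('no host has %d GB of volume capacity free'
                       % requested_gb)
```

The point of the sketch: the quantity being summed is per-host volume allocation, so raising max_gigabytes buys you more virtual disk, never more virtual RAM.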
*** jtran has joined #openstack18:47
*** dragondm has joined #openstack18:49
*** CloudChris has quit IRC18:51
*** skiold has quit IRC18:55
*** larry__ has joined #openstack18:55
*** katkee has joined #openstack18:57
*** daveiw has joined #openstack19:02
*** MotoMilind has quit IRC19:08
*** mattray has quit IRC19:08
*** dirkx__ has quit IRC19:09
*** joearnold has quit IRC19:11
*** scalability-junk has quit IRC19:15
*** dirkx__ has joined #openstack19:17
aa___looking for someone to chat about swift and swauth issues... anyone around?19:18
*** dprince_ has quit IRC19:18
dprinceexit19:18
dprinceexit19:18
*** dprince has quit IRC19:18
*** MotoMilind has joined #openstack19:19
*** dirkx__ has quit IRC19:19
*** dirkx__ has joined #openstack19:21
*** dirkx__ has quit IRC19:31
*** dirkx__ has joined #openstack19:32
*** MotoMilind has quit IRC19:32
*** ohnoimdead has joined #openstack19:32
*** amccabe has quit IRC19:33
*** h1nch has quit IRC19:34
*** dobber_ has joined #openstack19:35
dobber_hi, can i pxe boot into stackops ?19:36
*** MotoMilind has joined #openstack19:44
*** Eyk has quit IRC19:44
*** photron_ has quit IRC19:46
*** daveiw has left #openstack19:47
*** joearnold has joined #openstack19:51
*** amccabe has joined #openstack19:52
*** dirkx__ has quit IRC19:53
*** h1nch has joined #openstack19:54
*** kbringard has joined #openstack19:59
kbringardhowdy peoples!19:59
kbringardquick ?19:59
kbringardI've assigned a DNS name to my API's external IP19:59
kbringardwhen I connect to it that way, I get a 40319:59
kbringardand I figured it out20:00
kbringardall it took was me coming in here and asking, so I'd look like an idiot as soon as I asked20:00
kbringardhah20:00
jtranlol i hate it when that happens20:01
kbringardindeed20:02
dsockwellcan nova understand shared local storage?  performance issues aside, could I put all my local images on the same nfs share and not cause a panic?20:03
*** Ryan_lane is now known as Ryan_Lane20:04
*** bcwaldon has quit IRC20:04
*** bcwaldon has joined #openstack20:04
*** baffle has joined #openstack20:06
*** mgius has quit IRC20:09
*** vernhart has quit IRC20:11
*** mgius has joined #openstack20:11
*** mgius is now known as Guest900820:12
*** sdadh01 has quit IRC20:14
*** rcc has quit IRC20:14
*** ctennis has quit IRC20:14
*** allsystemsarego has quit IRC20:14
*** BK_man has quit IRC20:16
*** Guest9008 is now known as mgius_20:17
*** imsplitbit has joined #openstack20:18
kbringarddsockwell: yep20:19
kbringardin fact, if you want to use live migration, your instances have to reside on some kind of shared storage20:19
*** BK_man has joined #openstack20:19
dsockwellok.  the documentation says live-migration only works with kvm, is that correct?20:21
kbringardprobably... I've only used KVM so I can't say for sure20:21
kbringardbut, I can say I've live migrated machines and it does indeed work with KVM :-D20:21
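For the shared-storage requirement kbringard mentions, one common arrangement is an NFS export mounted at the instances directory on every compute node, so both the source and destination host see the same disk files during a live migration. An illustrative fstab entry (the server name and export path are hypothetical; the mount point should match your configured instances path):

```
# Mounted identically on every compute node participating in live migration.
nfs-server:/export/nova-instances  /var/lib/nova/instances  nfs  defaults  0  0
```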
*** ctennis has joined #openstack20:27
*** ctennis has joined #openstack20:27
*** bcwaldon has quit IRC20:29
*** sdadh01 has joined #openstack20:29
*** bcwaldon has joined #openstack20:29
*** pguth66 has joined #openstack20:38
*** carlp has joined #openstack20:41
*** mgoldmann has quit IRC20:43
*** ribo has joined #openstack20:43
*** perestrelka has quit IRC20:48
*** mattt has joined #openstack20:48
jaypipes#openstack-meeting in 10 minutes.20:50
dsockwelljaypipes: real quick, do you know if live migration is limited to the kvm hypervisor?20:52
*** mgius_ is now known as mgius20:53
*** mattray has joined #openstack20:53
jaypipesdsockwell: I believe so currently.20:54
jaypipesdsockwell: but I could very well be wrong on that...20:54
dsockwelljaypipes: the blueprint suggests that at least in cactus it is, thanks20:54
*** perestrelka has joined #openstack20:56
*** msivanes has quit IRC20:58
*** vernhart has joined #openstack20:58
ttxMeeting in 2 min. in #openstack-meeting20:59
*** mattray has quit IRC20:59
*** galthaus has quit IRC21:03
*** blamar_ has joined #openstack21:05
*** antenagora has joined #openstack21:07
*** watcher has joined #openstack21:13
*** watcher has quit IRC21:13
*** watcher has joined #openstack21:14
*** antenagora has quit IRC21:16
*** nati has joined #openstack21:18
*** obino has quit IRC21:20
*** Eyk has joined #openstack21:27
*** pguth66 has quit IRC21:28
*** mattray has joined #openstack21:31
*** patcoll has quit IRC21:32
*** midodan has joined #openstack21:32
*** devcamca- has left #openstack21:33
*** devcamca- has joined #openstack21:33
*** CloudChris has joined #openstack21:34
*** joearnold has quit IRC21:37
*** amccabe has quit IRC21:37
*** nerens has quit IRC21:37
*** joearnold has joined #openstack21:37
*** CloudChris has left #openstack21:37
*** mszilagyi has quit IRC21:47
*** spectorclan_ has quit IRC21:49
*** ohnoimdead has quit IRC21:51
*** nati has quit IRC21:53
*** mgius has quit IRC21:53
*** gondoi has quit IRC21:54
*** _0x44 has left #openstack21:55
*** mgius has joined #openstack21:55
*** pguth66 has joined #openstack21:56
*** mgius is now known as Guest857421:56
*** imsplitbit has quit IRC21:58
*** ChanServ changes topic to "Openstack Support Channel, Development in #openstack-dev | Wiki: http://wiki.openstack.org/ | Nova Docs: nova.openstack.org | Swift Docs: swift.openstack.org | Logs: http://eavesdrop.openstack.org/irclogs/ | http://paste.openstack.org/"21:58
*** markvoelker has quit IRC22:00
*** bcwaldon has quit IRC22:01
*** nati has joined #openstack22:02
*** kbringard has quit IRC22:02
*** tblamer has quit IRC22:04
*** grapex has quit IRC22:04
*** shentonfreude1 has quit IRC22:05
*** dobber_ has quit IRC22:20
*** grapex has joined #openstack22:20
*** grapex has quit IRC22:25
*** aa___ has quit IRC22:25
*** nelson has quit IRC22:28
*** nelson has joined #openstack22:28
*** blamar_ has quit IRC22:29
*** ohnoimdead has joined #openstack22:30
*** ohnoimdead has left #openstack22:38
*** grapex has joined #openstack22:40
*** zaitcev has quit IRC22:40
*** katkee has quit IRC22:42
*** shentonfreude has joined #openstack22:44
*** jeffkramer has quit IRC22:44
*** zaitcev has joined #openstack22:44
*** cole has quit IRC22:48
*** watcher has quit IRC22:51
*** miclorb_ has joined #openstack22:53
*** troytoman is now known as troytoman-away22:54
uvirtbotNew bug: #780784 in nova "KeyError in image snapshotting" [Undecided,New] https://launchpad.net/bugs/78078422:57
*** miclorb_ has quit IRC22:59
*** miclorb has joined #openstack22:59
*** nati has quit IRC23:03
*** rnirmal has quit IRC23:04
*** midodan has quit IRC23:12
*** jkoelker has quit IRC23:17
*** crescendo has quit IRC23:19
*** widodh has quit IRC23:19
*** widodh has joined #openstack23:19
*** Guest8574 has quit IRC23:19
*** crescendo has joined #openstack23:20
uvirtbotNew bug: #780788 in nova "EC2 API should allow query deleted objects" [Undecided,New] https://launchpad.net/bugs/78078823:22
*** mattray has quit IRC23:24
*** mattray has joined #openstack23:26
*** enigma1 has quit IRC23:26
*** grapex has quit IRC23:27
*** BK_man has quit IRC23:35
*** jtran has left #openstack23:36
*** MotoMilind has quit IRC23:40
*** johnpur has quit IRC23:48
*** zenmatt has quit IRC23:52
*** markwash_ has quit IRC23:52
*** markwash_ has joined #openstack23:53
*** BK_man has joined #openstack23:53
*** BK_man has quit IRC23:54
*** Eyk has quit IRC23:56
*** zenmatt has joined #openstack23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!