Friday, 2011-01-14

sorenjk0: Something like that.00:00
sorenjk0: I don't think it offers bridging.00:00
*** larstobi has quit IRC00:00
jk0soren: go to router.soren.com/mode.htm00:01
jk0there should be an option to switch from NAT to Bridge00:02
fcarstensoren: Might be called "RFC 1482"00:02
fcarsten1483 that is00:02
jk0(make sure you jot down your auth creds first :))00:02
sorenjk0: Uh. Yeah.00:02
sorenjk0: How'd you know?00:02
jk0know what?00:03
jk0oh and make note of the VCI/VPI settings in case those need to be set explicitly in bridge mode (shouldn't)00:04
masumotoksoren: sorry I sent you the merge request many times (since I found I had forgotten the pep8 check)00:04
masumotokvishy: sorry I sent you the merge request many times (since I found I had forgotten the pep8 check)00:04
sorenjk0: How did you know about that mode.htm?00:05
jk0soren: oh, google :)00:05
sorenAh.00:05
XenithNote that if you switch your router to bridge mode, you won't be able to switch it back without resetting it.00:05
jk0yeah, that's why jotting down those VCI/VPI settings is critical00:06
jk0that should be all you need, other than auth00:06
sorenThis is looking like a project for a different day.00:06
soren:)00:06
jk0or any static IP info if it's not DHCP00:06
sorenmasumotok: np00:06
*** adiantum has quit IRC00:07
sorenvishy: Approved.00:07
XenithYea, I think your best bet for v6 right now without munging your network is the SixXS AYIYA tunnel.00:07
jk0last I heard there were problems with the sixxs signup page00:08
XenithDunno, I've never used them, of course :)00:08
jk0can't say I have either, this is what a friend told me a week or two ago00:09
jk0HE has been good to me for many years00:10
jk0since freebsd4.2 in fact ;)00:10
XenithGlad we can help.00:12
jk0are you with HE?00:12
*** adiantum has joined #openstack00:13
XenithYep00:13
jk0oh wow, nice, I had no idea00:13
*** AndChat| has joined #openstack00:14
*** nati has quit IRC00:15
XenithI'm looking forward to convincing my company to get involved in OpenStack :)00:15
jk0you should, that would be pretty cool :)00:15
sorenvishy: Are you going to review https://code.launchpad.net/~nttdata/nova/live-migration/+merge/44940 ?00:17
openstackhudsonProject nova build #402: SUCCESS in 1 min 25 sec: http://hudson.openstack.org/job/nova/402/00:18
openstackhudsonTarmac: This modifies libvirt to use CoW images instead of raw images.  This is much more efficient and allows us to use the snapshotting capabilities available for qcow2 images.  It also changes local storage to be a separate drive instead of a separate partition.00:18
openstackhudsonI'm proposing this branch for review to get feedback.  I may have inadvertently broken a few things.  Comments and possible issues:00:18
openstackhudson1. I haven't tested the other hypervisors.  I may have broken libvirt xen support and uml support with this patch.00:18
openstackhudson2. Is it useful to have a use_cow_images param, or should it just be automatic for qemu/kvm and turned off for everything else?00:18
openstackhudson3. create_image is a large annoying method.  I tried to clean it up a bit, but it could probably use a bit more refactoring.00:18
openstackhudson4. disk.py seems to be only used by the hypervisors, so perhaps it should move into virt dir.00:18
openstackhudson5. disk.py/partition() is unused now. Should we leave it in or throw it away?00:18
openstackhudsonComments welcome00:18
XenithWhat are cow images?00:22
vishysoren: just did, that took for ever00:22
vishywoah, cow merged, nice!00:22
sorenvishy: Yup.00:22
vishyXenith: CopyOnWrite00:22
sorenvishy: I don't see your comments on the live-migration mp?00:23
sorenvishy: I've read through that patch 3 times already. It gets harder each time to stay focused.00:24
fcarstenXenith: http://en.wikipedia.org/wiki/Qcow00:25
vishyi just sent00:25
vishyt00:25
vishyh00:25
vishysoren: might take a minute to pop up.  I made a ton of comments00:25
sorenYou e-mailed it?00:26
vishysoren: i wish i had time to review earlier, there are a lot of minor issues00:26
sorenOh, I see it now.00:26
masumotokvishy: I got your comments. thank you for reviewing!00:27
vishyi really wish lp had line by line reviews a la Rietveld00:27
vishymasumotok: you're welcome. sorry it took so long, I've been super-busy00:27
masumotokvishy: I know.00:28
vishymasumotok: by the way, excellent work.  Most of the issues my review are just for style and constincency00:29
vishys/constincency/consistency00:29
XenithSo with cow images, one can do VPS-like functionality?00:30
XenithActually, I'm probably just fairly confused on the whole writing to the running image thing.00:30
XenithNot sure if my understanding is at all correct :)00:30
sorenvishy: Quoting the patch in the review seems the best approach to me. It's the most compatible with e-mail reviewing, too.00:31
sorenvishy: The worst is when people refer to line numbers. They change every time someone pushes to the branch, so it's useless when looking back through the history.00:32
vishysoren: yeah, it is just painful for long reviews.  Rietveld would show you diffs on one file at a time and show you comments inline00:32
vishysoren: it made it way easier00:32
sorenvishy: Did it allow you to review stuff by e-mail?00:33
sorenOtherwise, it's a fail :)00:33
vishysoren: yes00:34
sorenSo it'd extract your comments and show them in-line on the web?00:34
sorenThat sounds awesome.00:35
fcarstenXenith: COW e.g. allows you to provision VM images very fast. You don't have to copy the complete image for every VM. You just create a "reference" to the one image and only keep track of changes (write operations) in a VM-specific file00:35
* soren desperately needs sleep00:36
fcarstenXenith: A COW image format is what supports this "reference plus tracked write changes" approach.00:37
XenithAh.00:37
* vishy thinks soren is a trooper00:38
*** dfg_ has quit IRC00:39
vishyXenith, fcarsten: + Qcow2 supports instance snapshotting and restore inside the image format00:39
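fcarsten's description maps onto a tiny toy model. This is only an illustration of the copy-on-write idea discussed above (plain Python dicts standing in for disk blocks; nothing here is nova or qcow2 code): reads fall through to the shared base image, writes land only in the per-VM delta.

```python
class CowOverlay:
    """Toy copy-on-write view: reads fall through to the shared base
    image; writes are recorded only in this VM's private delta."""

    def __init__(self, base):
        self.base = base    # shared base image, never modified
        self.delta = {}     # per-VM changes (what the qcow2 file holds)

    def read(self, block):
        # A block written by this VM shadows the base copy.
        return self.delta.get(block, self.base.get(block))

    def write(self, block, data):
        self.delta[block] = data  # the base stays untouched

base = {0: b"kernel", 1: b"rootfs"}
vm = CowOverlay(base)
vm.write(1, b"rootfs-modified")
print(vm.read(0), vm.read(1), base[1])
# b'kernel' b'rootfs-modified' b'rootfs'
```

Provisioning a new VM is then just creating another cheap overlay over the same base, which is why it is so much faster than copying a raw image.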
masumotokvishy: thank you! I'm struggling to fix now..00:39
*** dirakx has quit IRC00:40
XenithAm I correct in my understanding that inside an instance, it's basically a ramdisk, so any writes you make are lost? Ignoring the volume and object storage for the moment.00:43
vishyXenith: no changes are saved on to disk00:43
vishyif your host crashes, some stuff may be buffered and not make it00:43
vishybut the instance on the host should get updates00:44
fcarstenvishy: you might want to add a comma ;-) : no, changes are saved to disk.00:44
XenithRight. But changes made to the running instance are not saved if you restart the instance?00:44
vishyif you do a virsh destroy <id> and virsh create /path/to/instance/dir/libvirt.xml you will00:44
vishyhave all of your changes00:45
vishyin fact reboot instances basically does that00:45
sandywalshxtoddx, thx ... having a look00:45
vishyfcarsten: yes, there was supposed to be a comma there.  Typing fast and typing well are orthogonal for me apparently.00:45
vishyXenith: btw, the same is true for amazon as well, they just don't tell you.  If you contact support i understand that they can recover crashed instances for you.  The best use of cloud would be to not have to worry about recovery though.00:47
vishyXenith: so most people act as if the instances are ephemeral00:47
XenithYea, just trying to think my way through this. I understand that generally you want to keep changing data on a volume or object storage or DB node00:48
XenithTrying to think if nova can be used in a more VPS-like fashion.00:48
vishyXenith: it can, in fact, the rackspace use-case is more vps00:49
fcarstenvishy, Xenith: This may not always be true. E.g. OpenNebula explicitly cleans (i.e. deletes) VM instance specific files (unless explicitly marked not to)00:49
vishyXenith, fcarsten: yeah, eucalyptus was notorious for randomly deleting instance files00:50
vishyon host reboot and such00:50
XenithYea, we were originally looking at euca, but I kept pushing openstack :)00:50
vishyXenith: \o/00:51
Xenithvishy: One thing I'd love to do is replacing all our older hosting servers with VPSes.00:51
XenithBecause it gets harder to find replacement hardware for them :)00:51
*** leted has joined #openstack00:53
vishyXenith: no kidding00:54
*** dragondm has quit IRC00:56
*** maplebed has quit IRC00:58
*** Lcfseth has quit IRC00:59
*** jfluhmann_ has joined #openstack01:00
*** koji-iida has joined #openstack01:04
*** trin_cz has quit IRC01:06
sandywalshtermie around?01:07
*** jdurgin has quit IRC01:13
sandywalshany bzr experts in the room?01:18
*** sophiap has quit IRC01:19
*** joearnold has quit IRC01:29
*** dubsquared1 has quit IRC01:30
*** koji-iida has quit IRC01:39
*** koji-iida has joined #openstack01:44
*** sophiap has joined #openstack01:48
Tushardoes any one know how many approvals are needed to merge the branch to trunk lp:nova?01:54
Tusharcurrently for ipv6 branch, only soren has given his approval.01:55
*** adiantum has quit IRC01:56
*** adiantum has joined #openstack01:58
XenithI believe it is two core approvals02:00
*** rlucio has quit IRC02:04
TusharXenith: Thank you.02:08
TusharVishy: If you have time , can you please approve IPV6 branch if everything is OK for you.02:08
vishygl hope the merge goes through without changes to the hudson box :)02:11
TusharVishy : Thank you very much for your approval,02:12
*** leted has quit IRC02:12
*** adiantum has quit IRC02:12
*** Tushar has quit IRC02:16
*** adiantum has joined #openstack02:17
creihtmtaylor: when you get a moment, can you install netifaces package on hudson?02:22
termiesandywalsh: around again02:23
termiesandywalsh: i do get backlog if you prefer to just ask your questions02:23
termiesandywalsh: i can answer them when i read them02:23
termiesandywalsh: if you get backlog i already answered some possible questions of yours02:23
sandywalshthat's ok, I think I got all my questions figured out02:23
sandywalshwas just running into bzr problems02:23
sandywalshdidn't get bin/stack on the pull / merge for some reason02:24
sandywalshwas wondering how to re-sync the two02:24
sandywalshtermie ^02:25
*** dirakx has joined #openstack02:26
sandywalshtermie, whoops, I know what the problem is ... ignore.02:26
vishyTushar: looks like there is a merge conflict in nova.sh02:27
dabocan someone merge https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/45537  ? It's got two core approvals and everything's hunky-dory02:33
*** adiantum has quit IRC02:34
koji-iidavishy: Thank you for your approval of IPv6 branch. We are fixing the conflict problem of nova.sh now.02:34
*** flipzagging has quit IRC02:35
*** adiantum has joined #openstack02:38
*** reldan has quit IRC02:39
masumotokvishy: are you there? I just want to ask you about your comment.02:48
masumotokvishy: in your comment, "is it really necessary to have a separate host table instead of adding host info to services table?"02:48
masumotokvishy: actually, just adding host info to services table is good.02:48
masumotokvishy: From another point of view, in nova.api.ec2.admin there is some code ("host_get", "host_get_all")02:49
masumotokvishy: that code expects a separate host table.02:49
masumotokvishy: so, I think if I just add host info to the services table, other people will later create a host table and add similar columns.02:49
masumotokvishy: Therefore, it may be better to keep a separate host table, and create the relationship Service -> Host.02:49
masumotokvishy: Or, if nova.api.ec2.admin is still under discussion, I can remove the host table, because we are not sure yet.02:49
vishymasumotok: that is fine with me02:49
masumotokOK, I keep separate host table.02:50
masumotokvishy: thanks!02:50
vishymasumotok: I don't see why we need a separate table yet, but we may at some point so I guess it is fine02:50
*** ivan has quit IRC02:50
masumotokvishy: I am not sure about future releases, but I will remove the separate host table if it is obviously unnecessary for everyone.02:53
*** jdurgin has joined #openstack02:55
*** ivan has joined #openstack02:56
HisaharuI resolved conflict of IPv6 branch.02:56
vishykoji-iida: you don't need to repropose the merge after fixing stuff, you can just make a comment02:57
vishyi'm away for a bit02:57
*** pvo_away is now known as pvo03:05
koji-iidavishy: oops, I'm sorry for the inconvenience03:05
*** fraggeln has quit IRC03:17
JordanRinkeanyone have an idea why I would be getting 'sudo: no tty present and no askpass program specified\n' in the nova-compute log when trying to launch an instance?03:17
JordanRinkeinteresting, looks like I am getting permission denied on /dev/kvm as well03:22
termieannegentle: can i get the source for openstack.org or something? I would really like this bug with the sidebar to be fixed, makes the whole project look old and/or unprofessional03:33
*** hadrian has quit IRC03:33
*** Ryan_Lane is now known as Ryan_Lane|away03:33
*** omidhdl has joined #openstack03:34
*** nati has joined #openstack03:35
annegentletermie: oh shoot I meant to ask Todd about that today. It's a Wordpress site, he's moving it to Silverstripe, and I don't have access (in a tiny nutshell). Sorry, I'll check with him again.03:35
annegentletermie: ok, email sent to Todd, again I agree, the details matter.03:37
*** nati has quit IRC03:39
*** nati has joined #openstack03:40
*** nati has quit IRC03:45
creihtwoot!03:46
creihts3 compatibility layer is now in swift trunk03:46
uvirtbotNew bug: #702722 in glance "no tty present and no askpass program specified" [Undecided,Invalid] https://launchpad.net/bugs/70272203:46
creihtit is still a bit experimental, but there if anyone would like to try it out03:46
*** fraggeln has joined #openstack03:48
uvirtbotNew bug: #702723 in nova "Launching instance fails: no tty present and no askpass program specified" [Undecided,New] https://launchpad.net/bugs/70272303:51
*** sandywalsh has quit IRC03:54
uvirtbotNew bug: #702725 in swift "Add s3 middleware support to swauth" [Low,Triaged] https://launchpad.net/bugs/70272503:57
*** jdurgin has quit IRC04:00
*** nati has joined #openstack04:01
*** mdomsch has joined #openstack04:05
*** nati has quit IRC04:06
*** nati has joined #openstack04:15
*** mdomsch has quit IRC04:16
*** nati has quit IRC04:16
*** mdomsch has joined #openstack04:16
*** nati has joined #openstack04:25
*** nati has quit IRC04:29
*** adiantum has quit IRC04:30
*** nati has joined #openstack04:30
*** Facefoxdotcom has quit IRC04:34
*** nati has quit IRC04:34
*** adiantum has joined #openstack04:35
*** nati has joined #openstack04:35
natisoren: Would you please merge ipv6 again?04:37
natisoren: Sorry, I know you are super busy, but I'm afraid of more conflicts.04:38
*** adiantum has quit IRC04:41
*** nati has quit IRC04:42
*** nati-1 has joined #openstack04:43
openstackhudsonProject nova build #403: SUCCESS in 1 min 29 sec: http://hudson.openstack.org/job/nova/403/04:43
openstackhudsonTarmac: This change introduces support for Sheepdog (distributed block storage04:43
openstackhudsonsystem) which is proposed in04:43
openstackhudsonhttps://blueprints.launchpad.net/nova/+spec/sheepdog-support04:43
openstackhudsonRequirements:04:43
openstackhudson- libvirt 0.8.7 or later04:43
openstackhudson- qemu 0.13.0 or later04:43
openstackhudsonHow to test:04:43
openstackhudson1. install Sheepdog04:43
openstackhudsonThe software is available from SourceForge.net:04:43
openstackhudsonhttps://sourceforge.net/projects/sheepdog/files/04:43
openstackhudsonSee also:04:43
openstackhudsonhttp://wiki.qemu.org/Features/Sheepdog/Getting_Started#Install04:43
openstackhudson2. run the sheepdog daemon on each host04:43
openstackhudson$ sheep /store_dir04:43
openstackhudson/store_dir is a directory to store sheepdog objects. The directory04:43
openstackhudsonmust be on a filesystem with xattr support.04:43
openstackhudson3. format the sheepdog storage04:43
openstackhudson$ collie cluster format --copies=304:43
openstackhudson4. run nova-volume04:43
openstackhudson$ nova-volume --volume_driver=nova.volume.driver.SheepdogDriver04:43
nati-1Thanks again, but another conflict :(04:45
*** adiantum has joined #openstack04:47
*** AndChat- has joined #openstack04:48
*** nati-1 has quit IRC04:49
*** nati-1 has joined #openstack04:51
*** AndChat- has quit IRC04:52
*** nati-1 has quit IRC04:55
*** adiantum has quit IRC04:56
*** howardroark has joined #openstack04:56
*** sandywalsh has joined #openstack05:00
*** nati-1 has joined #openstack05:00
nati-1soren: sorry again. We fixed conflicts05:01
nati-1Would you please merge?05:02
koji-iidasoren: yes, I fixed the merge problem of IPv6 branch.05:02
*** AndChat- has joined #openstack05:05
*** nati-1 has quit IRC05:06
*** AndChat- has quit IRC05:06
*** howardroark has quit IRC05:08
*** jt_zg has quit IRC05:08
*** adiantum has joined #openstack05:08
*** jimbaker has quit IRC05:17
*** nati has joined #openstack05:17
uvirtbotNew bug: #702741 in nova "nbd device did not show up" [Undecided,New] https://launchpad.net/bugs/70274105:31
*** adiantum has quit IRC05:33
*** adiantum has joined #openstack05:34
*** pvo is now known as pvo_away05:38
*** f4m8_ is now known as f4m805:48
*** f4m8 is now known as f4m8_05:50
nativish: Thank you for your approval again :)05:53
*** fcarsten has quit IRC05:55
*** mdomsch has quit IRC05:55
natiOoops, Hudson says "ImportError: No module named netaddr "05:56
natiWe can not fix this problem.05:56
nativish: Sorry, Would you please check it?05:56
natiWould you know who have hudson privileges online now?06:09
natimtaylor: I suppose you have hudson privileges. Would you please do  apt-get install python-netaddr ?06:10
mtaylornati: I dunno. what're you gonna give me?06:11
natimtaylor: Sorry I misread log.06:11
mtaylornati: you did? did I not want to just install python-netaddr?06:12
natiipv6 branch uses python-netaddr package which not installed in Hudson.06:12
natihttps://code.launchpad.net/~ntt-pf-lab/nova/ipv6-support/+merge/4622006:12
mtaylorgreat. well, it's installed now06:12
natimtaylor: Thank you!06:12
mtaylorwe should probably add it to the deb package depends...06:13
natiI added it on nova.sh and pep06:13
nation my branch06:13
mtaylorsoren: remind me or you do it - add python-netaddr to debian package depends for nova06:13
mtaylornati: we'll need to also add it to the debian packaging, but it's not a huge deal06:14
natimtaylor: I got it. How can I do that?06:14
natiOr some privileges needed ?06:14
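For context on what the ipv6 branch pulls in python-netaddr for: address arithmetic. A rough sketch of the kind of computation involved -- deriving an instance's IPv6 address from its MAC via the standard EUI-64 construction -- using only the stdlib ipaddress module (this is an illustration, not the branch's actual code; the prefix and MAC below are made up):

```python
import ipaddress

def mac_to_ipv6(prefix: str, mac: str) -> str:
    """EUI-64: flip the universal/local bit of the MAC's first byte,
    wedge ff:fe into the middle, and append the result to a /64 prefix."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                          # universal/local bit flip
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    host = int.from_bytes(bytes(eui64), "big")
    return str(ipaddress.IPv6Network(prefix)[host])

print(mac_to_ipv6("fd00::/64", "02:16:3e:33:44:55"))
# fd00::16:3eff:fe33:4455
```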
*** Ryan_Lane|away has quit IRC06:19
*** omidhdl has quit IRC06:33
*** miclorb_ has quit IRC06:38
*** adiantum has quit IRC06:52
*** f4m8_ is now known as f4m806:53
*** adiantum has joined #openstack06:57
anticwwhat's the lp dev cycle to push a branch back and ask for review?07:02
*** adiantum has quit IRC07:05
mtayloranticw: just push one up ... so :07:06
mtayloranticw: bzr push lp:~yournamehere/nova/branch-name07:06
*** ibarrera has joined #openstack07:07
anticwyeah, just found that07:07
anticwthat registers it as well?07:07
mtayloranticw: and then go to it online, such as https://code.edge.launchpad.net/~yournamehere/nova/branch-name ... and click "Propose for merging"07:07
fraggelnis swift ready for production-use?07:07
mtaylorfraggeln: yes - and is already in production use in fact07:08
anticwwe have clients with production use of it07:08
fraggelnanticw: what type of clients?07:08
fraggelnif you are allowed to talk about it that is :)07:09
anticwfraggeln: umm...  we (cloudscaling) do consulting...  client details aren't public yet07:09
anticw(maybe some are actually, i dont track this)07:09
*** guigui1 has joined #openstack07:09
*** adiantum has joined #openstack07:10
anticwfraggeln: anyhow, to answer your question (mtaylor did already though), yes it is07:10
mtaylorfraggeln: rackspace cloud files is running swift on the backend, AIUI07:10
anticwand some people other than rackspace have some fairly reasonable scale as well07:11
openstackhudsonProject nova build #404: SUCCESS in 1 min 27 sec: http://hudson.openstack.org/job/nova/404/07:13
openstackhudsonTarmac: OpenStack Compute (Nova) IPv4/IPv6 dual stack support07:13
openstackhudsonhttp://wiki.openstack.org/BexarIpv6supportReadme07:13
openstackhudsonTested with07:13
openstackhudson- unit test07:13
openstackhudson- smoke test07:13
openstackhudsonNo conflict with current branch r 562.07:13
openstackhudsonFixed comment by Soren and Vish07:13
natiYappi!  ipv6  Merged! Thank you Soren ,Vish, Devin and OpenStackers.07:14
anticwmtaylor: what does bzr use for author details on a commit?07:16
fraggelnanticw, mtaylor : thanks :)07:16
anticw(sorry, i've avoided bzr like the plague with success thus far so am very ignorant)07:16
mtayloranticw: the value from bzr whoami07:16
anticwmtaylor: is there a config to change that al la git?07:17
mtayloranticw: it's ok - I've been avoiding git for the same amount of time but just recently had to start using it for something, so I can empathize07:17
anticwn, found it07:17
mtayloranticw: run "bzr whoami 'Your Name <your@email.com>'"07:17
vishynati: you're welcome07:18
* vishy is very frustrated with bzr's ability to cherry-pick07:19
vishyor lack therof...07:19
vishys/therof/thereof07:19
mtaylorvishy: you can cherry pickj07:20
mtaylorvishy: bzr merge -r rev1..rev2 other-branch07:20
mtaylorvishy: you have to give both a starting and ending rev for it to treat it as a cherry-pick merge07:21
vishyyes only it isn't really cherry picking07:21
anticwmtaylor: can i diff the branch i just pushed against where i cloned it trivially?07:21
vishyit doesn't keep revision information07:21
vishyit basically just treats it like you generated a patch file07:21
mtaylorvishy: that is true, and that also bugs me07:21
mtaylorvishy: it's _slightly_ better than a patch file, as it does at least grok file moves, etc.07:22
anticwim not sure about bzr, but some SCMs have to do that07:22
anticwbecause the state is basically a directed graph of hashes, so you can't take things out of the middle robustly without taking ancestors07:22
mtayloranticw: yes07:22
*** omidhdl has joined #openstack07:23
mtayloranticw: correct -that's the reason bzr doesn't really do cherry picking - bzr is all centered around the DAG impl07:23
mtayloranticw: but as for your diff question ... there's a couple of ways to go about it07:24
vishymtaylor: it really makes maintaining multiple branches difficult07:24
anticwvishy: push back on who created the work and have them recreate patches you can merge directly07:25
mtayloranticw: but what you're looking for is bzr diff ancestor:path/to/branch07:25
anticwmtaylor: oh, i mean on the web ui07:25
anticwsorry07:25
mtayloranticw: oh. heh.07:25
mtayloranticw: the merge request you generate will generate a diff that shows what will be merged07:25
anticwi guess if i do propse it will do that07:25
mtayloryup07:25
vishygit is way better at that in particular.  It also allows for nice clean branches due to rebasing.  And it generally seems to be a little smarter about merges and replaced files.07:26
mtaylorvishy: yeah - it's sort of about a workflow... I totally hear your complaint, but I find that the main key is to NEVER mix unrelated work into the same branch07:26
anticwgit cherry diffs then reapplies though internally07:26
vishymtaylor: that is great but we are basically maintaining three production branches and we need to pull fixes back and forth07:27
mtayloryup. it's certainly not perfect... and not great at back-and-forth07:27
* mtaylor doesn't particularly put much stock in the git approach to rebase/'clean branch' though07:28
mtaylorbut would LOVE if bzr could support cherry picking patches in the workflow you're discussing07:28
mtayloryou pretty much have to wind up making a branch that's a GCA of the production branches and then apply your patches to that and pull from that to the places it needs to go... sort of blows07:29
*** miclorb has joined #openstack07:30
* ttx waves07:31
mtaylorhey ttx!07:31
ttxlet's see if we can get a few more branches in before freezing07:32
*** joearnold has joined #openstack07:35
ttxmtaylor: would you feel comfortable flipping the Approved switch on a few approved-by-core-people BMPs ?07:36
mtaylorttx: sure07:37
ttxhttps://code.launchpad.net/~termie/nova/db_migration/+merge/46073 looks like a good candidate07:37
ttxalready went into the Hudson washing machine once07:38
ttxand was fixed in consequence07:38
mtaylorhehe07:38
* mtaylor loves hudson07:38
ttxAlso https://code.launchpad.net/~ed-leafe/nova/xs-password-reset/+merge/45537 has 4 "accepted", which should be considered sufficient07:38
mtaylorttx: yup. first one looks good.07:39
ttxlooks like the others will need an small extension.07:39
ttxmtaylor: press the button !07:40
mtaylorcool. well those two are flipped07:40
ttxyay07:40
* mtaylor flexes his muscles07:40
* mtaylor hurts himself07:40
mtaylorttx: do we have anywhere online where we list when we're planning on dev meetings being?07:40
* mtaylor is working on calendar stuff07:41
ttxnice work team on cow and ipv6 !07:41
ttxhmm07:41
ttxyou mean design summits ?07:41
mtayloryes07:43
ttxSo the next one is in Santa Clara April 26 - 29, 201107:43
ttxThe one after that is not planned yet07:43
ttxAsia, probably Japan07:43
XenithMan, wish I could go to that one.07:43
mtaylorttx: cool. thanks! I have to put these things on my calendar early so I can actually get to them07:44
ttxmtaylor: for the date... probably early October07:44
mtaylorcool.07:44
mtaylorttx: you don't happen to know the next UDS date do you?07:45
ttxbut that's still very much up in the air.07:45
ttxmtaylor: you can abuse the tentative release schedules to have a relatively good idea. If Mark doesn't come up with a fun version number that requires moving everything two weeks07:45
ttxhttp://wiki.openstack.org/EReleaseSchedule07:45
mtaylorhehe07:45
ttxhttps://wiki.ubuntu.com/PReleaseSchedule07:46
ttxPlaces UDS the week of October 27th07:46
mtaylorttx: it was actually UDS-O I was wondering about07:47
* mtaylor never understands why they only announce them like, 3 months out07:47
ttxOh ! That's Budapest, probably the week of May 12th07:47
mtaylorsome of us have to plan our lives a little more than that :)07:47
mtayloroh really?07:48
* mtaylor never knows anything07:48
ttxYou don't know about Oscillating Ocelot, then ?07:48
openstackhudsonProject nova build #405: SUCCESS in 1 min 25 sec: http://hudson.openstack.org/job/nova/405/07:48
openstackhudsonTarmac: Implements the blueprint for enabling the setting of the root/admin password on an instance.07:48
openstackhudsonIt uses a new xenapi plugin 'agent' that handles communication to/from the agent running on the guest.07:48
ttxmtaylor: about the 3-month notice and the location in the woods: the plan is to prevent people from coming.07:49
mtaylorheh. that's funny07:49
mtaylorttx: I know about nothing - sabdfl never tells me anything07:50
ttxmtaylor: if you need to know stuff, just bribe Jono, he is cheaper07:51
mtaylorwow. I may have a 6 week stretch from late April to mid June where I'm essentially not home07:52
ttxmtaylor: really ?07:52
mtayloryeah. I'm too busy07:52
ttxmtaylor: maybe you should consider stopping slacking at conferences :P07:52
mtaylorttx: nah. slacking at conferences is too much fun07:53
ttxrha, db_migration needs another rinse in the washing machine07:54
*** fabiand_ has joined #openstack07:55
ttxwell, that'll be after freeze, then07:55
* mtaylor needs to be more involved in code over next cycle...07:55
vishyttx: final merge of db_migration should be post_freeze, since the last few patches that made it in changed the schema07:56
ttxvishy: sounds logical07:57
vishyand db_migration should convert austin -> bexar (not bexar - patches)07:57
* vishy has been staring at his computer screen too long07:57
*** brd_from_italy has joined #openstack07:58
* mtaylor sticks his head out of vishy's computer screen and floats around07:58
*** rcc has joined #openstack07:58
* vishy highly recommends vagrant. I've managed to get chef-server working and an install of nova from packages. Working on a multinode chef recipe. I see automated integration testing in our future.07:58
vishyfor all you packaging people: python-netaddr needs to be added to depends and qemu-nbd needs to be added to sudoers08:00
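A sketch of what those two packaging changes would look like; the file paths and surrounding lines are assumptions, not taken from the log:

```
# debian/control -- add python-netaddr to the existing Depends line:
Depends: ${python:Depends}, python-netaddr

# /etc/sudoers.d/nova-compute (hypothetical location) -- let the nova
# user run qemu-nbd without a password/tty:
nova ALL = (root) NOPASSWD: /usr/bin/qemu-nbd
```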
JordanRinkeyo08:01
vishyif anyone feels like helping out with the chef stuff ... http://unchainyourbrain.com/openstack/12-testing-nova-openstack-compute-with-vagrant-and-chef08:01
JordanRinkeoh speaking of chef, have you tried devstructure.com?08:01
vishyJordanRinke: any luck with that chardev issue?08:01
JordanRinkesandboxes used to create blueprints used to create recipes08:01
JordanRinkeyeah, i just posted the "fix"08:01
JordanRinkenot sure where to go with it though08:02
vishyJordanRinke: I think i know a guy who works there.  Also, I used some of their stuff to build a maverick64 vagrant image.  Seems pretty cool08:02
vishyJordanRinke: yes i met Richard Crowley08:03
*** miclorb has quit IRC08:05
*** adiantum has quit IRC08:05
ttxok, Feature Freeze is on. I'll go through the proposals in the pipe and approve the ones that just need the final push.08:06
vishyJordanRinke: how did you get raw images?08:06
*** allsystemsarego has joined #openstack08:07
ttxsoren: <vishy> for all you packaging people: python-netaddr needs to be added to depends and qemu-nbd needs to be added to sudoers08:07
fabiand_I read in a blog post, that objects are immutable in swift. so I wonder if they ever get deleted?08:08
JordanRinkeThey are images from Anso, I rembered seeing raw conversion as part of the compute logs previously so I decided to try it08:08
JordanRinkeand the virsh command ran08:08
mtaylorvishy: point me at anything you get in terms of multi-node testing - I REALLY want to get hudson/tarmac doing that sort of testing (pre-merge if possible)08:08
vishyJordanRinke: the new code automatically converts the images to qcow08:09
mtaylorttx: I also pinged soren about that... although I suppose I could just get off my own ass and do it08:09
vishyso perhaps kvm doesn't actually want you to specify08:09
ttxmtaylor: I could just do it too08:09
ttxmtaylor: but I'm kinda busy today08:09
*** adiantum has joined #openstack08:09
vishyJordanRinke: can you do a qemu-img info /path/to/instance_dir/disk08:10
*** elasticdog has quit IRC08:10
ttxmtaylor: can you ask for specific reviews to random BMPs in Nova ?08:10
JordanRinkeyeah, want it here or on launchpad?08:10
ttxmtaylor: or do you need to be the proposer for that ?08:10
vishyJordanRinke: just tell me if it says that it is a qcow2 image08:10
mtaylorttx: I don't think you need to be the proposer08:10
mtaylorttx: lemme check08:10
JordanRinkeindeed it does08:10
vishyyeah so kvm is R-tarded08:11
ttxmtaylor: I thought maybe members of the review team can08:11
vishyit actually booted if you changed it to raw?08:11
mtaylorttx: it appears I can request review for branches I did not propose08:11
ttxoh, cool. You have 5 minutes ?08:11
mtaylorsure08:11
JordanRinkeroot@00000r100020001:/var/lib/nova/instances/instance-00000017# virsh -d 5 create libvirt.xml08:11
JordanRinkecommand: "create libvirt.xml "08:11
ttxGo through https://code.launchpad.net/nova/+activereviews and make sure I'm specifically requested on all of them08:11
JordanRinkecreate: file(DATA): libvirt.xml08:11
ttxmtaylor: ^08:12
JordanRinkeDomain instance-00000017 created from libvirt.xml08:12
vishyand can you log in to it?08:12
ttxthat way I can preapprove without requiring people to file exceptions08:12
*** drico has quit IRC08:12
vishythat is ssh to its ip?08:12
*** calavera has joined #openstack08:12
ttxmtaylor: You can add "FFE" to review type08:13
mtaylorttx: k. are you ttx on launchpad?08:13
ttxyes08:13
JordanRinkeno, but I could have my networking screwed up right now08:13
JordanRinkeI am at home, can you link me nova-debug again? I will see if it is actually booting08:14
vishyJordanRinke: i think it isn't booting08:14
vishycan you change the line to read <driver name="qemu" type="qcow2"/>08:15
vishyand see if it still barfs08:15
JordanRinkehmm lsmod for kvm shows a count now, so I think it may have actually booted08:15
JordanRinkeit was qcow2, i changed it to make it work, I can change it back and confirm though, hold.08:16
JordanRinkevirsh # list Id Name                 State08:17
JordanRinke---------------------------------- 16 instance-00000017    running08:17
mtaylorttx: k. done08:17
ttxcool, many thanks08:17
JordanRinkeerr, that was while still raw08:17
JordanRinkeand ahh, i see the diff there with the qemu, hang on08:18
vishyJordanRinke: in my testing if i gave it a qcow2 and called it raw it would basically say it booted but not load the drive.  You'll probably see a failed to mount root filesystem if you cat console.log with your raw change.08:19
JordanRinkefailed with qemu/qcow208:20
vishybleh08:21
vishyJordanRinke: same error?08:22
JordanRinkeerror: operation failed: failed to retrieve chardev info in qemu with 'info chardev'08:22
JordanRinkeyou want a login to this box to look around?08:24
vishyyes pls08:24
mtaylorttx: ok. I made a branch adding that depend. I'm not in openstack-ubuntu-packagers, so I submitted a merge req08:25
mtaylorsoren ^^08:25
ttxmtaylor: will review08:25
ttxif soren doesn't beat me to it08:25
mtaylorit doesn't look like you guys are using bzr-builddeb there, so I'm _way_ out of practice with how I'd test :)08:26
mtaylorbut that's ok08:26
redbomtaylor: can we get python-netifaces on the hudson box when you get a chance?08:30
mtaylorredbo: sure. done08:31
redbothanks08:31
mtaylorok. ... the bed is a calling08:33
*** adiantum has quit IRC08:34
*** adiantum has joined #openstack08:39
*** taihen has quit IRC08:45
*** miclorb has joined #openstack08:46
*** fabiand_ has quit IRC08:47
*** jfluhmann__ has joined #openstack08:55
*** jfluhmann_ has quit IRC08:59
*** adiantum has quit IRC09:03
uvirtbotNew bug: #702778 in nova "Cannot log into instance when network_manager=FlatManager" [Undecided,New] https://launchpad.net/bugs/70277809:06
*** perestre1ka has joined #openstack09:06
*** perestrelka has quit IRC09:09
*** miclorb has quit IRC09:15
*** adiantum has joined #openstack09:15
sorenmtaylor: We are using bzr-builddeb there?09:19
sorenmtaylor: What makes you say we don't?09:19
*** MarkAtwood has quit IRC09:20
sorenmtaylor: Oh, and you are in openstack-ubuntu-packagers.09:20
sorenmtaylor: nova-core, ubuntu-virt and you.09:20
sorenmtaylor: At least I thought that was the case.09:20
sorenmtaylor: Hm... guess not. That's unintentional.09:21
*** joearnold has quit IRC09:22
*** rcc has quit IRC09:23
sorenmtaylor: Anyways, branch merged, added it as a runtime dep, too, and building new packages now.09:24
sorenttx: Happy feature freeze!09:25
sorenttx: Or is it merry feature freeze? I always forget.09:26
ttxso far it's not happy or merry, it's just busy.09:26
sorenTell me about it. I was up till after 2 last night.09:26
vishysoren:09:31
ttxvishy is trying to beat you but just fell asleep.09:32
vishyso apparently kvm fails to boot with two qcow2 drives09:32
vishy:)09:32
vishysoren: any idea on that one? it is failing with a permissions error09:32
vishyqemu: could not open disk image /var/lib/nova/instances/instance-00000015/local: Permission denied09:34
vishyoddly: i can't replicate the error locally09:35
sorenvishy: How are the two systems different?09:37
vishywoah hold up that was weird09:37
sorenvishy: (i.e. you local system and the one where you see this problem)09:37
vishyi tried reversing the order of the two images and it still failed on local09:38
vishymine is running in qemu mode in a vm as opposed to hardware09:39
sorenOh.09:39
sorenAre they otherwise identical?09:39
vishybut i tried switching the hardware one to qemu mode and it still fails09:39
sorenvishy: Anything in dmesg?09:40
sorenapparmour sort of stuff.09:40
vishyaudit(1294997948.075:234): apparmor="STATUS" operation="profile_remove" name="libvirt-e956df6e-2621-a872-1d45-679a1e6f36e0" pid=32351 comm="apparmor_parser"09:41
*** miclorb has joined #openstack09:41
sorenYeah, that's not it.09:41
vishythat shows up right after the permission denied in syslog09:41
sorenThe qemu process quits, right?09:42
vishythere we go09:42
vishyJan 14 03:38:35 00000r100020001 kernel: [29180.002881] type=1400 audit(1294997915.445:230): apparmor="DENIED" operation="open" parent=28913 profile="/usr/lib/libvirt/virt-aa-helper" name="/var/lib/nova/instances/instance-00000015/local" pid=32144 comm="virt-aa-helper" requested_mask="r" denied_mask="r" fsuid=0 ouid=10509:43
sorenWhich version of Ubuntu is this?09:43
sorenRather, which version of libvirt?09:43
vishywhy is apparmor being evil09:43
vishyit was the default maverick09:43
vishybut i upgraded to the ppa version09:43
ttxvishy: it's for your own good !09:43
sorenvishy: So which one is it now?08:43
sorenvishy: 0.8.3-1ubuntu14.1~ppamaverick1 ?09:44
vishylibvirt-bin                     0.8.3-1ubuntu14.1~ppamaverick109:44
sorenOk.09:44
vishybtw it loads the first image just fine09:44
vishyif i comment out local09:44
vishyand just load disk it boots09:44
*** adiantum has quit IRC09:45
sorenwhat the...09:45
sorenlol09:45
vishyapparmor="DENIED" operation="open" parent=1 profile="libvirt-e956df6e-2621-a872-1d45-679a1e6f36e0" name="/var/lib/nova/instances/_base/ami-maverick"09:46
sorenOk, give me a minute to sort this out. I know at least why it used to work.09:46
vishylooks like it was denied opening the backing file09:46
sorenYup.09:46
*** rcc has joined #openstack09:46
*** koji-iida has quit IRC09:46
* vishy gives soren a minute09:46
sorenanother minute, please.09:48
*** adiantum has joined #openstack09:50
sorenbug 69631809:51
uvirtbotLaunchpad bug 696318 in libvirt "Starting VMs on qcow2 format with base images in level three fails" [Medium,Confirmed] https://launchpad.net/bugs/69631809:51
sorenvishy: Easy workaround: Disable apparmor.09:54
soren/etc/libvirt/qemu.conf (look for security_driver)09:55
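The stanza soren is pointing at lives in /etc/libvirt/qemu.conf (libvirt-bin needs a restart afterwards). Note this disables confinement for all guests, so it is a debugging workaround, not a fix:

```
# /etc/libvirt/qemu.conf
# Default on Ubuntu is "apparmor"; "none" turns off the sVirt
# security driver for qemu/kvm guests entirely.
security_driver = "none"
```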
vishykind of odd that it only fails with two images though09:56
vishytesting the disable09:56
sorenThat's not why it fails.09:57
vishyoh?09:57
sorenlet me dig a little bit more.09:59
sorenIt's got to do with the name of the file, as it turns out.10:01
vishydisabling selinux did work btw10:01
vishyo rly?10:01
sorenapparmor.10:01
vishythe name of the backing file or the front file?10:01
sorenThe front.10:01
soren/etc/apparmor.d/usr.lib.libvirt.virt-aa-helper10:01
vishyrofl, so because i used the name local?10:01
sorenWell, because you didn't name it disk or something.qcow2 or whatnot.10:01
sorenOr put it in /var/lib/eucalyptus... :)10:02
*** MarkAtwood has joined #openstack10:02
vishylmao10:02
vishystrange that disk works10:02
sorenNo.10:02
soren  /**/disk{,.*} r,10:02
vishyso if i name the front file *.qcow2 it will work10:02
vishylmao10:03
vishyhilarious10:03
sorenThat's my hypothesis.10:03
vishyi think you are right10:03
vishyb10:03
vishybecause local failed both ways10:03
vishynow why didn't it crash on my local machine ...10:03
vishybecause it was in my home dir?10:04
vishy@{HOME}/** r10:05
larissavishy: Error: "{HOME}/**" is not a valid command.10:05
vishythat is really hilarious10:05
vishyok so i'm going to change local to local.qcow210:05
vishyso that it doesn't go boom10:05
*** fabiand has joined #openstack10:10
fabiandhey. is it correct that objects never get deleted in swift?10:10
sorenThat or we add an exception for nova to the apparmor profile.10:11
colinnichfabiand: no, that's not correct - objects are deleted as soon as you command it10:11
vishyalthough that is an annoying name if use_qcow_images is off10:11
fabiandcolinnich: are they really erased from the disk?10:11
colinnichfabiand: yes10:11
fabiandcolinnich: Because I read about that .tombstone thingy and wondered when those files get deleted ..10:11
*** miclorb has quit IRC10:12
fabiandcolinnich: okay, i will re-read that part then :)10:12
vishywill disk.local will match that regex right?10:12
sorenvishy: Yup.10:13
vishyk doing that then10:13
sorenvishy: It's an extended glob of sorts, not a regex, fwiw.10:13
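soren's "extended glob of sorts" can be illustrated: the profile rule `/**/disk{,.*} r,` brace-expands into two alternatives, `/**/disk` and `/**/disk.*`, where `**` crosses directory separators and `*` does not. A rough Python model of just that one rule (apparmor's real matcher is more involved):

```python
# Rough model of the apparmor rule:  /**/disk{,.*} r,
# Hand-translated to regexes:  ** -> .*  (may contain '/'),
#                              *  -> [^/]*  (no '/').
import re

def matches_disk_rule(path):
    patterns = [
        r"^/.*/disk$",          # /**/disk
        r"^/.*/disk\.[^/]*$",   # /**/disk.*
    ]
    return any(re.match(p, path) for p in patterns)
```

Which is exactly why `disk` and `disk.local` are readable by virt-aa-helper while a file named plain `local` is denied.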
vishyyeah i realized that after i said it but i was hoping you were going to let it slip by10:13
vishy:p10:13
*** adiantum has quit IRC10:14
sorenHeh :)10:15
vishyworks like a charm10:15
* vishy throttles apparmor10:16
*** adiantum has joined #openstack10:20
sorenOne thing I miss about university is having gigabit Internet connectivity.10:21
sorenUploading 80 GB worth of backups takes quite a while on DSL.10:22
*** elasticdog has joined #openstack10:28
*** fabiand_ has joined #openstack10:28
vishysoren: https://code.launchpad.net/~vishvananda/nova/lp702741/+merge/4624510:29
vishyfixed there so we don't have to patch libvirts app-armor profile10:29
vishy:)10:29
vishyok time for me to sleep now10:30
*** Dweezahr has quit IRC10:30
*** fabiand has left #openstack10:30
sorenvishy: Sleep tight.10:30
*** dirakx has quit IRC10:30
*** Dweezahr has joined #openstack10:31
fabiand_can I somehow tell swift that rsyncd is running on a different port?10:31
colinnichsoren: you should install your own swift cluster :-)10:35
*** allsystemsarego has quit IRC10:38
fabiand_mh okay, does not seem so ...10:38
*** hazmat has quit IRC10:39
*** allsystemsarego has joined #openstack10:41
sorencolinnich: I should do that, too, yes. But I still want off-site backups :)10:51
colinnichfabiand_: no, I don't think so. Had a quick look at the code and there doesn't appear to be any configuration there10:56
*** adiantum has quit IRC10:57
*** adiantum has joined #openstack10:59
fabiand_colinnich: It would be a nice thing in my eyes, so rsyncd could be run from userspace.11:03
fabiand_I thought about adding a key to the replicator config to specify rsyncd's port11:03
*** allsystemsarego has quit IRC11:16
*** MarkAtwood has quit IRC11:17
sorencreiht, gholt, letterj: I don't want to step on anyone's toes, so I just want to check with you guys if it's ok if I set up per-commit builds for swift in an identical manner to what we have for nova?11:26
* soren goes to lunch11:27
*** adiantum has quit IRC11:28
*** omidhdl has quit IRC11:28
*** nati has quit IRC11:30
*** adiantum has joined #openstack11:34
*** adiantum has quit IRC11:42
*** adiantum has joined #openstack11:48
sorenMmm... Lunch. Easily one of my favourite meals of the day.11:56
*** adiantum has quit IRC11:58
*** adiantum has joined #openstack12:02
*** rcc has quit IRC12:09
*** skrusty has quit IRC12:13
*** reldan has joined #openstack12:13
*** trin_cz has joined #openstack12:14
*** thatsdone has joined #openstack12:16
*** ctennis has quit IRC12:19
*** adiantum has quit IRC12:21
trin_czsoren: thanks for yesterday's answer.12:30
*** ctennis has joined #openstack12:33
*** reldan has quit IRC12:36
*** reldan has joined #openstack12:37
*** DubLo7 has quit IRC12:38
trin_czsoren: so I need later than Austin ... which means bzr. Let's go the other way around ... do you know some source of images which include ramdisk?12:40
ttxsandywalsh: please check https://code.launchpad.net/~anso/nova/wsgirouter/+merge/45330 : if you're good with it, we'll be able to merge the two paste-deploy branches12:41
*** reldan has quit IRC12:45
sorentrin_cz: You don't need bzr.12:45
sorentrin_cz: We publish tarballs for every single commit to bzr.12:45
sorentrin_cz: ..as well as ubuntu packages.12:45
sorenFor Lucid, Maverick, and Natty.12:45
trin_czsoren: ok, thanks12:47
*** omidhdl has joined #openstack12:55
*** gaveen has joined #openstack12:57
*** gaveen has joined #openstack12:57
*** arthurc has joined #openstack12:58
sandywalshttx, will do13:04
trin_czsoren: I used the ttylinux for now, so I have the ramdisk, but I still cannot run it. euca-run-instances says ECONNREFUSED . Any idea where to start?13:04
*** drico has joined #openstack13:07
sorenSounds like nova-api isn't running.13:08
trin_czyeah, looks like some recent egg update broke the scripts13:08
soren"egg update"?13:09
trin_czI'm still fighting with the correct versions of python eggs ...13:15
trin_czsoren: where do I find the bzr snapshot tarballs ... I cannot find them on launchpad13:16
trin_czsoren: also, is there a list of python egg versions which a particular nova version expects?13:18
*** DubLo7 has joined #openstack13:18
*** taihen has joined #openstack13:20
sorenWhich OS are you running?13:21
trin_czopenSUSE - I'm trying to package nova, so that it works out of the box.13:23
*** reldan has joined #openstack13:24
trin_czsoren: I know about the package version list at "Getting Started with Nova". But for example euca2ools 1.3 requires python-boto >=2.0, so I have to stick with euca2ools 1.213:25
*** fabiand_ has quit IRC13:27
sorenOh, there's a new euca2ools out?13:30
sorenThat's recent.13:30
sandywalshxtoddx around?13:31
*** fabiand_ has joined #openstack13:32
sandywalshxtoddx, nm ... got it :)13:32
*** DubLo7 has left #openstack13:35
*** westmaas has joined #openstack13:37
*** jfluhmann__ has quit IRC13:39
sandywalshttx, anything else need reviewing?13:42
ttxsandywalsh: https://code.launchpad.net/~citrix-openstack/nova/xenapi-glance-2/+merge/45977 could use some XenAPI-aware analysis13:43
sandywalshttx on it13:43
ttxsoren: mind flipping the approved switch on wsgirouter ?13:44
ttx(or any other nova-core dude ^^)13:46
trin_czsoren: what about the nova snapshot tarballs? Packaging will be easier this way13:46
*** adiantum has joined #openstack13:48
*** thatsdone has quit IRC13:49
*** pvo_away is now known as pvo13:52
creihtfabiand_, colinnich: re deletes, the file is deleted as soon as the DELETE call is made.  a 0-byte tombstone is put in its place to ensure that an older version of the file doesn't get replicated back (due to failure scenarios)13:54
creihtThe tombstone will be deleted after the reclamation age has passed, which is 7 days13:54
fabiand_creiht, thanks for this note :) It had crossed my mind that the file might just get truncated to free space.13:55
creihtfabiand_: yeah we don't currently support running rsyncd on a different port, but it probably wouldn't be difficult to add that as an option13:55
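What fabiand_ is proposing would amount to something like the following sketch. This is hypothetical only: swift had no such option at the time, and the `rsync_port` replicator key shown here is fabiand_'s suggested addition, not a real setting:

```
# Hypothetical sketch -- not a supported swift option at the time.
# Run rsyncd as an unprivileged user on a high port:
#   rsync --daemon --port=8873 --config=/etc/rsyncd.conf
#
# /etc/rsyncd.conf
port = 8873
uid = swift
gid = swift

# and a matching (proposed, hypothetical) key in the
# object-replicator section of the swift config:
#   rsync_port = 8873
```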
*** aliguori has joined #openstack13:55
creihtsoren: sounds good to me :)13:55
creihtfabiand_: As an example, say you delete an object, but you had a node down when that delete was issued.  When the failed node comes back online, it still has the old copy of the object.  Replication could try to push that old copy out to the other nodes, but the newer tombstones on the other nodes will prevent that13:58
*** arthurc has quit IRC13:58
creihteventually the other nodes will also replicate the tombstone to the node that had failed, and delete the old object13:58
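creiht's failure scenario boils down to timestamp-based conflict resolution: every write, including a DELETE (which writes a tombstone), carries a timestamp, and replication always keeps the newest entry. A toy model of that rule, not swift's actual code:

```python
# Toy model of tombstone-based replication. Each node stores a
# (timestamp, kind) entry per object, where kind is "data" or
# "tombstone". Merging keeps the newest timestamp, so a stale data
# copy on a recovered node cannot resurrect a deleted object.

def merge(local, remote):
    """Return the winning entry for one object across two nodes."""
    entries = [e for e in (local, remote) if e is not None]
    return max(entries, key=lambda e: e[0]) if entries else None
```

After the failed node replicates with a healthy one, the tombstone (newer timestamp) wins and the stale data entry is discarded; once the reclamation age passes, the tombstone itself is removed.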
*** hadrian has joined #openstack13:59
*** calavera has quit IRC13:59
*** arthurc has joined #openstack13:59
fabiand_creiht, very nice solution ... in the beginning i was sceptical about the usage of rsync ... being able to specify the rsyncd port might enable the usage of swift without root14:00
creihtfabiand_: we keep thinking that we have to replace rsync, but we keep finding ways to make it work effectively :)14:01
creihtAs to root usage, the services start as root, but then quickly drop privileges to whatever user you have defined14:02
creihtroot is needed to bind to port 80 or 44314:02
creihtWe also set a couple of rlimits in the kernel (like max file descriptors) on startup14:04
*** thatsdone has joined #openstack14:05
*** jguiditta has joined #openstack14:06
*** reldan has quit IRC14:06
annegentlettx: around?14:07
sandywalshewan or salvatore ever on here?14:07
ttxannegentle: yes14:08
*** adiantum has quit IRC14:10
*** Ryan_Lane has joined #openstack14:11
annegentlettx: I read with interest the suggestion to move automation scripts to a separate project. Can you help me set one up for documentation?14:12
*** thatsdon_ has joined #openstack14:12
annegentlettx: I need a place to check doc files in and out that's "openstack" rather than doc files for each project (nova/swift/glance)14:12
*** thatsdone has quit IRC14:13
ttxannegentle: we already have a few projects at the top level, like openstack-devel or openstack-common... maybe it can fit there ? what is the documentation about exactly ?14:15
annegentlettx: openstack-common would be perfect. It's a collection of documentation from all three projects as well as scenarios for them working together.14:16
annegentlettx: it's the documentation that will live on docs.openstack.org14:16
ttxannegentle: hm, openstack-common is the common library to all projects, so maybe it's not the perfect fit14:17
annegentlettx: I'm updating http://wiki.openstack.org/WebContentRequirements now to add more info about the steps for completion by Bexar14:17
ttxannegentle: could you check with jaypipes first ?14:18
annegentlettx: sure, what am I asking him exactly?14:18
annegentlettx: glance has glance.openstack.org set up with RST-based docs14:18
ttxannegentle: ask if he had other plans for openstack-common that make posting a doc branch there inappropriate14:19
sandywalshjaypipes around?14:19
annegentlettx: got it.14:19
sorentrin_cz: What do you mean by "packaging"?14:19
sorentrin_cz: Oh, sorry, I missed the bit about opensuse.14:19
sorentrin_cz: The tarballs are at http://nova.openstack.org/tarballs/14:20
*** ppetraki has joined #openstack14:21
trin_czsoren: perfect! thanks14:23
fabiand_may I ask what the status of glance is?14:26
*** ctennis has quit IRC14:31
*** pvo is now known as pvo_away14:34
*** Ryan_Lane has quit IRC14:39
*** johnpur has quit IRC14:39
*** f4m8 is now known as f4m8_14:45
*** gondoi has joined #openstack14:47
*** thatsdon_ has quit IRC14:48
*** dirakx has joined #openstack14:56
*** omidhdl has quit IRC14:59
*** zul has joined #openstack15:00
*** rcc has joined #openstack15:01
annegentlefabiand_: sure, though jaypipes is the one to ask. You can't install it from packages yet (at least that was the case yesterday), but there are docs at http://glance.openstack.org if you want to read more.15:02
sorenannegentle: I'm working on packages now.15:04
sorenAlong with roughly 27 other things, so it'll probably be a couple of days.15:04
sorenBut soon!15:04
*** koji-iida has joined #openstack15:05
*** dendrobates is now known as dendro-afk15:07
*** rcc has quit IRC15:08
sandywalshany docs around on getting started with Glance?15:11
*** trin_cz has quit IRC15:15
*** koji-iida has quit IRC15:16
annegentlesandywalsh: me, I'm waiting on the packages since installing seems the right first step.15:17
sandywalshannegentle, thanks. I'm going through the learning process now ... I'll be sure to keep notes for you.15:18
annegentlesandywalsh: wonderful, thank you. I'll take anything you write up.15:19
*** zul has quit IRC15:19
*** reldan has joined #openstack15:20
*** adiantum has joined #openstack15:20
*** dendro-afk is now known as dendrobates15:21
*** jfluhmann_ has joined #openstack15:34
*** hadrian has quit IRC15:38
mtaylorsoren: cool - I didn't see a .bzr-builddeb dir, so I wasn't sure how it worked in that branch15:41
*** pvo_away is now known as pvo15:44
*** reldan has quit IRC15:44
*** lvaughn_ has quit IRC15:45
*** lvaughn has joined #openstack15:45
*** hadrian has joined #openstack15:46
*** zul has joined #openstack15:48
sorenmtaylor: It doesn't need it. The defaults are fine.15:51
sorenmtaylor: "bzr bd -S" realises it's missing the orig, so it uses get-orig-tarball to grab it, which in turn uses uscan.15:51
annegentlemtaylor: I created a man page for nova-manage, but it doesn't look like it's connected in when I run man nova-manage. What's the next step for enabling the man page?15:52
sorenmtaylor: In short, it just works.15:52
annegentlemtaylor: oh and sorry for the "interruption" - was still typing when you started typing. spooky!15:52
*** hggdh has joined #openstack15:55
*** DigitalFlux has joined #openstack15:56
*** lvaughn_ has joined #openstack15:56
*** dragondm has joined #openstack15:57
*** lvaughn has quit IRC15:58
jaypipesfabiand_: working on packages and bug fixes right now...aiming to have a functional beta for Bexar...15:59
jaypipesfabiand_: also working on documentation for starting up nodes, etc15:59
*** MarkAtwood has joined #openstack16:02
jaypipesttx, annegentle: I'm not following you on the openstack-common thing...16:05
ttxjaypipes: anne is looking for a project under which to upload some openstack-wide doc branch16:05
ttxjaypipes: should she create a new project or piggyback on one of the existing "common" projects N16:06
ttx?16:06
ttxopenstack-common or openstack-devel...16:06
*** jimbaker has joined #openstack16:06
jaypipesttx: hmm, good question. :) I think openstack-dev may be more appropriate, as the future of openstack-common is not clear at this moment.16:08
ttxannegentle: so openstack-devel sounds like a good bet16:08
jaypipesttx: plus, docs that may (eventually) go into openstack-common will likely only be for any shared util libs...16:08
ttxright16:09
annegentlejaypipes: thanks for thinking it through.16:09
*** dirakx has quit IRC16:09
jaypipesholy translations, batman. Iida-san was busy last night :)16:09
annegentlejaypipes, ttx: one consideration is that these docs may not be Apache-licensed, they may be Creative Commons. Does that consideration change any part of the decision? The licensing is NOT final but something to think about.16:10
jaypipesannegentle: shouldn't be an issue... it's just the copyright headers in the rst files and in the output templates...16:11
* jaypipes notes that when creiht talks about tombstones, he just sounds creepy :)16:12
*** dirakx has joined #openstack16:12
annegentlejaypipes: ok, yup. thanks16:12
annegentlemtaylor: any thoughts on the state of the man page for nova-manage?16:15
*** rcc has joined #openstack16:17
*** guigui1 has quit IRC16:18
*** adiantum has quit IRC16:21
*** johnpur has joined #openstack16:24
*** ChanServ sets mode: +v johnpur16:24
*** DigitalFlux has quit IRC16:28
*** hggdh has quit IRC16:30
*** hggdh has joined #openstack16:30
*** DigitalFlux has joined #openstack16:31
*** adiantum has joined #openstack16:35
ttxjaypipes: any reason to hold up https://code.launchpad.net/~chris-slicehost/glance/add-s3-backend_updated/+merge/45217 ?16:37
jaypipesttx: one sec...sorry, didn't get around to that one yesterday...16:38
*** dfg_ has joined #openstack16:38
ttxok, preapproving ffe just in case you can marge it16:38
ttxmerge*16:38
jaypipesttx: looks to have merge conflicts again...16:39
*** reldan has joined #openstack16:39
jaypipesttx: lemme investigate.16:39
*** Ryan_Lane has joined #openstack16:50
*** DigitalFlux has quit IRC16:50
sorencreiht: What is the upcoming version of swift?16:58
creiht1.216:59
*** maplebed has joined #openstack16:59
*** brd_from_italy has quit IRC16:59
sorencreiht: Alright. When do you plan on bumping the version in the code?17:06
sorencreiht: In Nova, we did it when Austin was released.17:06
creihtyeah we should have done that sooner17:06
creihtI will do that soon17:06
sorenCan you ping me before you do it?17:07
creihtsure17:07
* ttx is about to call it a full week.17:07
sorenI need to adjust something in the packaging before you do.17:07
*** dendrobates is now known as dendro-afk17:09
*** ccustine has joined #openstack17:11
*** Ryan_Lane_ has joined #openstack17:15
*** Ryan_Lane has quit IRC17:17
*** Ryan_Lane_ is now known as Ryan_Lane17:17
*** dendro-afk is now known as dendrobates17:17
*** rlucio has joined #openstack17:21
vishytrin_cz: http://images.ansolabs.com/maverick.tgz17:27
mtaylorsoren: oh, ok. cool. I'm just so used to import-dsc based setups, the bare debian dir just looked really weird to me17:27
jk0can I get a couple of reviews for https://code.launchpad.net/~jk0/nova/lp702965/+merge/46291 please?17:27
mtaylorannegentle: did you add it to the manpage section of config.py?17:27
annegentlemtaylor: ah, no. Let me look at that.17:28
mtaylorannegentle: I mean conf.py17:28
mtaylorannegentle: search for "man_page"17:28
annegentlemtaylor: here's the line: http://paste.openstack.org/show/472/17:30
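The paste link has since expired, but the `man_pages` entry mtaylor is asking about is a standard Sphinx conf.py setting of this shape (the source file path and description here are illustrative, not necessarily what nova's conf.py contained):

```python
# Sphinx conf.py -- man page build configuration.
# Each tuple is: (source start file, name, description, authors, section)
man_pages = [
    ("man/novamanage", "nova-manage", "Cloud controller fabric",
     ["OpenStack"], 1),
]
```

Sphinx's man builder (`sphinx-build -b man`) turns each tuple into a section-1 manual page; as the conversation below establishes, building the page is separate from shipping it, which is a deb packaging step.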
uvirtbotNew bug: #702965 in nova "Diagnostics broken in trunk" [Undecided,In progress] https://launchpad.net/bugs/70296517:32
mtaylorannegentle: so, what's the problem?17:32
*** joearnold has joined #openstack17:32
fabiand_jaypipes, thanks.17:32
annegentlemtaylor: on a box where I have installed nova, I type man nova-manage, and get "No manual entry for nova-manage" in return17:35
annegentlemtaylor: I can run nova-manage commands no problem17:35
fabiand_jaypipes, are there thoughts about using some space by using some kind of "cow" technique when pushing images which are based on an already present image?17:35
mtaylorannegentle: installed via deb? or via python setup.py install?17:35
jaypipesfabiand_: yep, that is already done. The xs-snapshot branch in Nova does that (registers images in Glance as snapshots of XS instances"17:37
annegentleinstalled via add-apt-repository ppa:nova-core/trunk (is that via deb?)17:37
mtayloryes17:37
mtaylorok, that just means that the manpage isn't in the deb packaging17:38
mtaylorsoren: ^^^ you want me to fix?17:38
jk0ttx: bugfixes can be merged to trunk, right?17:38
annegentlemtaylor: ah, ok. Is that a last-minute thing anyway? I do have one edit to make to the man page (remove nova-manage.conf to refer to nova.conf)17:38
annegentlemtaylor: gahhh gotta run to lunch before all the good Indian buffet food is gone :) thanks for the help17:39
mtaylorannegentle: well, trunk is already creating the manpage, right?17:39
jk0thanks vishy17:39
mtaylorannegentle: sure thing17:39
vishyaye17:39
sorenmtaylor: That would be great. I need to go make some dinner.17:39
mtaylorsoosfarm_: k17:39
mtaylorsoren: k17:39
mtaylor(bad irc client tab completion!)17:40
vishywow there are some bugs in compute api17:40
vishy+ looks like some extra stuff in cloud17:40
* vishy goes to clean17:40
uvirtbotNew bug: #702973 in nova "Network injection creates an invalid /etc/network/interfaces file if no DNS is specified" [Undecided,New] https://launchpad.net/bugs/70297317:41
fabiand_jaypipes, very cool :) .. i browsed through the branch ... but could not nail it down ...17:45
jaypipesfabiand_: :)17:46
fabiand_i just noted that it is not yet supported for libvirt ;)17:47
jaypipesfabiand_: sirp- wrote the code. he'd be a good resource to talk to about it (as would ewanmellor if he's ever around in IRC ;)17:47
jaypipesfabiand_: yep, hopefully soon, though :)17:47
*** jdurgin has joined #openstack17:48
openstackhudsonProject nova build #406: SUCCESS in 1 min 29 sec: http://hudson.openstack.org/job/nova/406/17:48
openstackhudsonTarmac: Create and use a generic handler for RPC calls to compute.17:48
*** Dweezahr has quit IRC17:49
*** Dweezahr has joined #openstack17:49
*** ctennis has joined #openstack17:50
fabiand_Is there any company from Europe involved in openstack?17:51
*** reldan has quit IRC17:52
colinnichfabiand_: If you include the UK in that, then yes17:53
creihtfabiand_: And if you also include Rackspace UK :)17:54
fabiand_colinnich, yep - i meant EMEA .. so even good old britain :)17:54
colinnichcreiht: yeah yeah :-)17:54
fabiand_creiht, hehe17:54
creihtlol17:54
colinnichfabiand_: we are in Glasgow, Scotland17:54
fabiand_colinnich: may I ask what company it is? :)17:54
colinnichfabiand_: http://www.iomart.com/17:55
fabiand_colinnich: thanks17:55
colinnichfabiand_: our logo's even on http://www.openstack.org/community/ :-)17:56
fabiand_colinnich: yep - I've seen it, I just did not realize that quickly that it is in Scotland ... looked for EMEA tlds17:57
creihtfabiand_: There were some people from French telecom at the last dev summit as well17:58
fabiand_creiht, ah nice.18:00
*** Ryan_Lane has quit IRC18:00
*** adiantum has quit IRC18:01
pvofabiand_: Citrix, in Cambridge18:05
*** dirakx has quit IRC18:11
fabiand_pvo: ah that company, yes :)18:11
fabiand_thanks everybody. and good evening. :)18:12
*** maple_bed has joined #openstack18:13
*** maplebed has quit IRC18:13
*** fabiand_ has quit IRC18:13
*** maple_bed has quit IRC18:14
*** maplebed has joined #openstack18:14
*** dendrobates is now known as dendro-afk18:19
*** dendro-afk is now known as dendrobates18:20
*** leted has joined #openstack18:27
*** joearnold has quit IRC18:28
uvirtbotNew bug: #702998 in nova "attach_volume fails due to missing import." [High,In progress] https://launchpad.net/bugs/70299818:31
*** jfluhmann__ has joined #openstack18:33
*** arthurc has quit IRC18:34
*** jfluhmann_ has quit IRC18:37
*** glenc has left #openstack18:41
*** glenc has joined #openstack18:42
*** dendrobates is now known as dendro-afk18:47
*** maple_bed has joined #openstack18:54
*** maplebed has quit IRC18:54
*** nelson__ has quit IRC18:57
*** nelson__ has joined #openstack18:58
*** maple_bed is now known as maplebed19:00
*** hggdh has quit IRC19:01
*** ctennis has quit IRC19:02
*** ctennis has joined #openstack19:06
*** ctennis has joined #openstack19:06
*** kpepple has joined #openstack19:09
uvirtbotNew bug: #703012 in nova "TrialTestCase breaks running individual tests" [Medium,In progress] https://launchpad.net/bugs/70301219:11
uvirtbotNew bug: #703014 in nova "get_console_output should be in compute.api" [Undecided,New] https://launchpad.net/bugs/70301419:11
*** dendro-afk is now known as dendrobates19:12
edayvishy: ahh, you beat me to the get_console_output move. should be the last of the rpc module usage in there too now19:15
vishyeday: yeah there are crazy bugs i'm dealing with right now19:16
edayvishy: what compute.api bugs did you see? just broken imports for attach_volume?19:16
vishyrunning the unit tests individually fail19:16
edayahh19:16
*** mark_ is now known as mark19:18
vishyha my bad19:18
*** dirakx has joined #openstack19:20
*** leted has quit IRC19:34
ttxjk0: yes, as many as you can find and fix :)19:38
jk0thanks19:38
jaypipes_0x44, sirp-, ttx, dubs: pls see https://code.launchpad.net/~jaypipes/glance/s3-backend/+merge/46317.  It is Chris' s3 branch all merged up with trunk and with conflicts resolved...review at will and sirp-, please feel free to set it to Approved when ready.19:43
sirp-jaypipes: will do19:43
jaypipescheers :)19:43
ttxjaypipes: merge at will !19:44
jaypipesttx: k, thx :)19:46
jaypipesdabo: https://bugs.launchpad.net/nova/+bug/70304119:48
uvirtbotLaunchpad bug 703041 in nova "i18n strings with >1 placeholder should always use dictionary, not tuple, replacement" [Low,Confirmed]19:48
dabothx jaypipes19:48
daboI was about to put something like that together after I finished my current review19:49
jaypipesdabo:  no worries... I'll assign it to myself during cactus some time.19:49
*** rcc has quit IRC19:50
uvirtbotNew bug: #703037 in nova "Instances fail to spawn - Ubuntu 10.10 package install" [Undecided,New] https://launchpad.net/bugs/70303719:52
openstackhudsonProject nova build #407: SUCCESS in 1 min 27 sec: http://hudson.openstack.org/job/nova/407/19:53
openstackhudsonTarmac: Re-removes TrialTestCase.  It was accidentally added in by some merges and causing issues with running tests individually.19:53
*** hggdh has joined #openstack19:54
uvirtbotNew bug: #703041 in nova "i18n strings with >1 placeholder should always use dictionary, not tuple, replacement" [Low,Confirmed] https://launchpad.net/bugs/70304119:56
*** hggdh has quit IRC19:59
*** trin_cz has joined #openstack19:59
*** jaypipes has quit IRC20:02
*** jeffrey_taylor has joined #openstack20:03
*** nelson__ has quit IRC20:05
*** nelson has joined #openstack20:05
*** kpepple has left #openstack20:06
devcamcarvim ../trunk/setup.py20:09
devcamcarhaw20:09
*** pvo is now known as pvo_away20:11
*** MarkAtwood has quit IRC20:12
*** opengeard has quit IRC20:12
openstackhudsonProject nova build #408: SUCCESS in 1 min 25 sec: http://hudson.openstack.org/job/nova/408/20:13
openstackhudsonTarmac: This branch fixes two outstanding bugs in compute.  It also fixes a bad method signature in network and removes an unused method in cloud.20:13
*** opengeard has joined #openstack20:25
*** whaley is now known as al--20:25
*** al-- is now known as whaley20:27
*** MarkAtwood has joined #openstack20:35
*** EdwinGrubbs has quit IRC20:37
*** fabiand_ has joined #openstack20:38
*** leted has joined #openstack20:42
*** fabiand_ has quit IRC20:50
*** hadrian has quit IRC20:55
*** hggdh has joined #openstack20:57
jeffrey_taylorDoes it make more sense to have the load-balancers in front of the firewalls and then the proxies behind the firewalls for a Swift deployment across several data-centres? OR, should the proxies sit between the firewalls and load-balancers?21:03
creihtjeffrey_taylor: that sounds like a question for your network engineer :)21:05
*** hadrian has joined #openstack21:05
jeffrey_taylorcreiht, that's me :S21:05
creihthehe21:05
jeremybthe storage nodes talk to each other not the proxies right?21:05
jeffrey_taylorright21:05
jeremybfor replication, etc.21:05
creihtyeah21:05
jeffrey_taylorcreiht, I've got a lot on my plate :P21:05
jeremyband other services like container21:05
creihtjeffrey_taylor: we really don't have cross-datacenter replication yet in swift21:05
jeffrey_taylorcreiht, I was going to go the vlan route21:06
creihtthough it is planned21:06
jeremybcreiht: but if you have 5 DCs you could do a zone each21:06
jeffrey_taylorcreiht, what about in the scenario of one DC? should the proxies be in front of the firewalls or behind?21:06
creihtso the storage nodes need to be able to talk with each other21:06
jeremybso, i was thinking on the train last night...21:06
creihtjeremyb: as long as you have the connectivity, sure21:06
creihtand can handle the latency21:06
creihtbut we haven't done any optimizations for that scenario yet21:07
jeremybif you have multi-region set up, the different regions will be different rings21:07
creihtjeffrey_taylor: short answer is that you probably want the firewalls in front of the proxies21:07
jeffrey_taylorcreiht, thanks. Just wanted to confirm that train of thought :)21:08
creihtwhether or not that is behind or in front of the lb is more up to you21:08
jeffrey_taylorcreiht, Ya, I'm really conflicted on that. LB's in the front of the firewalls are fine to a point, but I think we'll need LB's for the proxies once we grow21:08
creihtthough it seems more logical for the firewall to be outside the lb21:09
creihtbut I'm not a network expert21:09
jeremybso, if you know you're going to hit more spindles than you ever imagined, you could move one spindle at a time to a new ring (in a new region) and eventually it's all moved (i guess you'd have to move the containers or objects too, not just spindles, but at least you don't need to double your spindle count overnight)21:09
jeffrey_taylorcreiht, you make a good point. Those FWs should be getting hammered and even if they are, they should be able to handle it. Back to the drawing board!21:10
creiht:)21:10
*** flipzagging has joined #openstack21:10
jeremybflipzagging!21:10
creihtjeremyb: There are a couple of ways that have been discussed21:10
jeffrey_taylorshouldn't*21:10
flipzaggingjeremyb: hi21:10
jeremybcreiht: is multi region cactus?21:11
creihtone of the first questions to ask is do you want it cluster wide, or optional on a per container basis21:11
jeremyblast i checked the bp said bexar21:11
creihtjeremyb: yeah we are going to have to move it to cactus21:11
jeremyb:/21:11
creihtIt is a hard problem :)21:11
creihtAnd things are busy21:11
creihtI think we are currently leaning towards it being optional on a per-container basis21:12
creihtso say you have cluster a, and cluster b21:12
creihtyou could say some container in cluster a, should also have n replicas in cluster b21:12
jeremybwhat about rax keeping one region of a private cluster? still no idea if that would be available?21:12
creihtI don't know much about private clouds at rax21:13
creihtok21:13
creihtwait21:13
creihtI see what you are asking now21:13
creihtso that is part of the idea21:14
jeremybi mean i run on my own hardware and racks and then rax does one of the regions21:14
creihtin theory you have your private cluster, but could say that one container should also have some replicas in rax cloudfiles21:14
jeremybright21:14
creihtthat has been discussed21:14
creihtbut no promises :)21:14
creihtand I think that would be ideal21:15
jeremybbtw, i've wondering and just now remembered...21:16
jeremybare objects stored on disk raw? i.e. can i just less/cat/grep/etc. directly on the xfs path?21:16
creihtjeremyb: yes21:16
jeremybiirc that is one of nelson's requirements21:16
*** pvo_away is now known as pvo21:17
jeremybcreiht: cool. same as i have now except i sometimes have to pipe to `gzip -d`21:17
creihtjeremyb: and you can use the tool swift-get-nodes to determine on which nodes and where on the filesystem a given db or file is supposed to be located21:17
creihthehe21:17
*** westmaas has quit IRC21:19
creihtand swift-object-info will tell you all the information you need about a specific file (where it should be on other nodes, metadata, etc.)21:19
jeremybso it reads xattrs?21:20
jeremyband are xattrs part of the checksum?21:21
creihtthe xattrs are not part of the checksum21:22
creihtand yeah it reads the xattrs21:22
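The swift-get-nodes lookup creiht describes above boils down to hashing the object path and mapping the hash to a partition, then the partition to devices. A toy sketch of that idea, where the partition power, device list, and assignment are made up for illustration (Swift's real ring handles replicas, zones, and a serialized ring file):

```python
import hashlib

# Hypothetical mini-ring: 2**4 = 16 partitions, each statically assigned
# to one of three devices. Real rings spread replicas across zones.
PART_POWER = 4
DEVICES = ["sda", "sdb", "sdc"]
part2dev = {p: DEVICES[p % len(DEVICES)] for p in range(2 ** PART_POWER)}

def get_nodes(account, container, obj):
    """Map an object path to (partition, device), swift-get-nodes style."""
    path = "/%s/%s/%s" % (account, container, obj)
    digest = hashlib.md5(path.encode()).digest()
    # Use the top PART_POWER bits of the 128-bit hash as the partition.
    part = int.from_bytes(digest, "big") >> (128 - PART_POWER)
    return part, part2dev[part]

part, dev = get_nodes("AUTH_test", "photos", "cat.jpg")
```

Because the partition depends only on the path hash, any proxy (or operator running the tool by hand) computes the same answer without coordination.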
jeremybhrmmmm21:23
jeremybso there's no integrity checking on the xattrs then21:24
*** EdwinGrubbs has joined #openstack21:24
jeremyband they're not raided21:24
creihtthe xattrs are stored on each copy of the object21:24
jeremybright but if the read succeeds you don't know if it's good data or not21:25
creihthrm21:25
jeremybwas there much discussion (or even a conscious decision) about not checksumming them?21:26
jeremybalso, what's the checksum algo again?21:26
creihtthe checksum is the md5 hash of the object21:26
jeremybok, weak but short21:26
creihtDo you need something stronger there?21:27
jeremybnot atm but maybe at some point21:27
creihtIt wouldn't be hard to change, if you are paranoid21:27
creihtS3 uses it as well21:27
*** EdwinGrubbs has quit IRC21:28
*** EdwinGrubbs has joined #openstack21:28
jeremybwell one scenario i've brought up here before (which I'm not actually implementing right now but may do in the future) is a ring of trusted nodes and a ring of untrusted storage nodes21:28
jeremybso, trusted nodes would do all the services and sign stuff and also some storage21:28
jeremybbut there would be a pool of servers that are just storage21:29
creihtWhat problem are you worried about?21:29
*** ctennis has quit IRC21:30
creihtIn general users don't have access to storage nodes21:30
creihtbut maybe, I don't understand the issue that you are wanting to solve21:30
*** hhoover has joined #openstack21:31
jeremybmy idea (with no immediate plans) was to have people stick a $300 box with 4 sata bays in their houses or other not entirely secure places. that could be used as one region for swift and then have the other 2 regions in DC racks21:32
creihtjeremyb: but back to the metadata, what type of corruption are you worried about in the metadata21:32
jeremybright, metadata21:32
jeremybso, if content can go bad why not metadata? unless maybe xfs provides some protection for the metadata?21:33
creihtThe main reason for the object checksum is to ensure that the data the user PUTs/GETs is the whole data21:33
jeremyboh, not corruption?21:33
creihtIt is used secondarily for that as well21:33
creihtwe have auditors that double check that21:33
jeremybright21:34
jeremybi was thinking checksum the data and make that an xattr and then checksum all xattrs (including the object checksum) and store that in an xattr21:34
creihtIt also wouldn't be hard to add to the auditors to double check the metadata21:34
jeremyband then just have auditors compare the second checksum21:34
*** hhoover has left #openstack21:35
jeremyb(was double check the metadata meaning compare to another copy? because aiui they don't even talk to more than one copy at a time now)21:36
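jeremyb's proposal above, a data checksum stored in the metadata plus a checksum over the serialized metadata itself, could look roughly like this. The function names and serialization are hypothetical, not Swift code:

```python
import hashlib
import pickle

def seal(data, user_meta):
    """Checksum the data into the metadata, then checksum the metadata."""
    meta = dict(user_meta)
    meta["ETag"] = hashlib.md5(data).hexdigest()
    # Sort items so the serialization (and thus the checksum) is stable.
    blob = pickle.dumps(sorted(meta.items()), protocol=2)
    return meta, hashlib.md5(blob).hexdigest()

def audit(data, meta, meta_sum):
    """Auditor pass: verify the metadata first, then the data."""
    blob = pickle.dumps(sorted(meta.items()), protocol=2)
    if hashlib.md5(blob).hexdigest() != meta_sum:
        return "metadata corrupt"
    if hashlib.md5(data).hexdigest() != meta["ETag"]:
        return "data corrupt"
    return "ok"

data = b"hello object"
meta, meta_sum = seal(data, {"Content-Type": "text/plain"})
```

With this layout a flipped bit in either the object or its xattrs changes some checksum, so silent corruption of the metadata is detectable without comparing against another replica.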
creihtjeremyb: and back to the public region thing, that is an entirely different problem :)21:36
creihtYou might find something like tahoefs interesting21:36
jeremybright21:36
jeremybyeah, i looked briefly21:37
jeremybare they still active?21:37
creihtjeremyb: yeah I was saying that we could add something like that, or have the auditors add the checksum21:37
creihtthey seem to be active still, but I haven't followed it very closely21:38
jeremybseems they have an update in sep 201021:38
creihthttp://tahoe-lafs.org/trac/tahoe-lafs21:38
creiht1.8.1 was released 11-28-1021:39
jeremyboh, i was at http://tahoe-lafs.org/hacktahoelafs/21:39
*** jfluhmann__ has quit IRC21:39
jeremybalso, their correction coding is nice21:40
*** ctennis has joined #openstack21:42
*** ctennis has joined #openstack21:42
creihtWe've talked about using erasure coding in the backend of swift21:43
jeremybi'm planning to have containers sharded on date so a container will only have a short read-write lifespan and then turns r/o when it's "full". then do a final "golden" backup of the r/o container and reduce replica count or even apply correction coding21:43
creihtIt would be nice to have the same amount of durability, while using less space21:43
creihtbut it also complicates things quite a bit21:43
jeremybhave you seen the hdfs erasure coding?21:43
creihtnope21:43
jeremybhttp://wiki.apache.org/hadoop/HDFS-RAID21:44
redbohave you seen hdfs hold more than a couple billion files?  me either.21:44
redbo:)21:44
creihtlol21:44
jeremybno21:44
jeremybof course not!21:44
jeremybredbo: btw, is anything you wrote for the compression tests available for me to use?21:45
creihtjeremyb: storage is cheap :)21:45
jeremybcan middleware change the container or object name for read and write requests? (ideally before auth checks)21:46
creihtjeremyb: if we used erasure coding, you wouldn't be able to go to the filesystem and cat out a file :)21:46
redbothe compression testing was never anything fancy.  https://github.com/redbo/test-storage/blob/master/test-compress21:46
creihtjeremyb: middleware can do just about anything :)21:46
creihtjeremyb: and actually that is pretty much how the s3 compatibility layer works21:47
jeremybcreiht: i figured that would change the cattability21:47
jeremybahh, cool21:47
vishymtaylor, soren: either of you here?21:47
creihtif you do something like erasure coding, then you are dealing with blocks, not files21:47
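As a concrete illustration of creiht's point that erasure coding means dealing with blocks rather than cat-able files, here is the simplest possible erasure code: a single XOR parity block that survives the loss of any one block. Real schemes (e.g. the Reed-Solomon coding used by HDFS-RAID and Tahoe-LAFS) tolerate more losses, but the reconstruction idea is the same:

```python
def xor_blocks(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data):
    """Split data into two blocks plus one XOR parity block (RAID-4 style)."""
    half = (len(data) + 1) // 2
    d1 = data[:half]
    d2 = data[half:].ljust(half, b"\x00")  # pad so the blocks align
    return d1, d2, xor_blocks(d1, d2), len(data)

def decode(d1, d2, parity, length):
    """Rebuild the object; any one missing block (None) is recoverable."""
    if d1 is None:
        d1 = xor_blocks(d2, parity)
    if d2 is None:
        d2 = xor_blocks(d1, parity)
    return (d1 + d2)[:length]

d1, d2, parity, n = encode(b"erasure coded object")
```

Note the trade-off creiht mentions: the stored blocks individually are not the file, so the "just cat the path on the storage node" workflow goes away.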
vishyhave a question on how hudson does bzr bd21:47
jeremybredbo: ohh, i was thinking you made some middleware or something21:49
jeremybredbo: anyway, that may help, thanks21:49
creihtjeremyb: he was using that to try to determine if compression would even buy us anything21:49
redbowell, that was just to see how compressible files were21:49
jeremybright, i remember. i just didn't know how you did it21:50
redboIt took all night to run on a drive21:51
jeremybjust took a spindle from production or what?21:52
*** pvo is now known as pvo_away21:52
*** jguiditta has quit IRC21:52
redbofrom your previous conversation, I'm not too worried about metadata being invisibly corrupted.  The nice thing about the auditor is making sure there aren't any sectors that die and are never read to find that out.21:53
redboyeah21:53
mtaylorvishy: yup21:54
jeremybredbo: so the auditor reads all attrs for everything it checks?21:55
redboyeah, it has to read all the xattrs to unpickle them and get the hash21:56
jeremybhaha, it's a pickle?!21:56
redboI can't remember if it's pickle or json21:56
jeremybmaybe pickle does some verification for you?21:56
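The xattr round-trip redbo describes (serialize the metadata dict, stash it in an xattr on the object file, unpickle it on audit) can be simulated without touching a real filesystem. The `user.swift.metadata` attribute name matches early Swift, but treat the exact on-disk layout here as illustrative:

```python
import pickle

# Name of the extended attribute Swift used for object metadata; the
# serialization details below are a sketch, not the production format.
XATTR_KEY = "user.swift.metadata"

def write_metadata_blob(metadata):
    """Serialize metadata as it would be written into the xattr."""
    return pickle.dumps(metadata, protocol=2)

def read_metadata_blob(blob):
    """What the auditor does: unpickle the xattr to recover the ETag etc."""
    return pickle.loads(blob)

meta = {"ETag": "d41d8cd98f00b204e9800998ecf8427e",
        "X-Timestamp": "1295000000.00000"}
blob = write_metadata_blob(meta)
```

On a real node the blob would go through `os.setxattr(path, XATTR_KEY, blob)` and back via `os.getxattr`; pickle will raise on a truncated blob, but, as discussed above, it does not detect a blob whose bytes were corrupted into another valid pickle.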
vishymtaylor: so I'm assuming it does the following: cd /trunk && bzr pull && cd /packaging && bzr merge lp:packaging && dch (some options here) && bzr commit && bzr bd --merge=True --export-upstream=/trunk21:57
vishyis that about right?21:57
jeremybok, i'm off for now21:57
jeremybthanks redbo and creiht21:57
creihtnp21:57
mtaylorvishy: pretty much - except the opposite way21:57
mtaylorwait -- actually -21:58
mtaylorsorry, soren changed things.21:58
vishycan you pastie the shell script?21:58
mtaylorvishy: it just branches packaging and does bzr bd - the packaging branch only has the packaging - so it grabs the tarball using uscan21:58
vishymtaylor: ah ok, but it must have some magic for finding the current version number to update the changelog?21:59
mtaylorvishy: I'm not sure how soren has it set up, at this point, tbh - the way I usually set this up is closer to what you were writing earlier22:00
vishymtaylor: and I would need to generate the tarball as well so maybe i'll use your method22:00
tr3bucheti'm considering the modeling for multi-nic. So far I think adding a virtual_interfaces table to hold the macs for each instance and a public boolean to networks should be all that needs to change. thoughts?22:00
* mtaylor likes my method better22:01
vishymtaylor: is there some magic in dch to make a changelog entry from the revision number of the other branch?22:01
vishyi suppose I could do it myself in a script, but if there is magic...22:01
rluciohey guys, it looks like the deb build is consistently failing22:03
rluciodh_install: python-nova missing files (usr/lib/python*/dist-packages/*), aborting22:03
rluciofor the trunk ppa22:03
rluciohere's a link to the last build for lucid: https://launchpad.net/~nova-core/+archive/trunk/+build/215306022:04
mtaylorvishy: no - I do it in a script -lemme paste you something22:05
mtaylorvishy: oh - no, none of my scripts do that specifically22:07
sirp-jaypipes: did you see my note RE: image-service-uses-glance?22:11
vishymtaylor: wth, it wants the patchfile backwards?22:12
vishyso weird22:12
mtaylorvishy: sorry - I have to run out real quick - I can give more of an actual hand with this as soon as I get back22:13
vishymtaylor: np i think i've got this22:13
uvirtbotNew bug: #703095 in swift "catch_errors middleware needs more verbose logging" [Medium,Triaged] https://launchpad.net/bugs/70309522:16
*** pvo_away is now known as pvo22:16
sorenmtaylor: I'm here now.22:17
sorenmtaylor: What's up?22:17
sorenvishy: Nope, sorry, that's not how we do packaging at all.22:19
*** troytoman has joined #openstack22:19
sorenvishy: Two completely separate branches get stitched together and I add a changelog entry, build a source package, and upload.22:20
vishysoren: yeah mtaylor said that you didn't do it that way, do you have dch magic to get the bzr revnumber or some kind of shell script?22:20
sorenvishy: Ok, so this is how it works:22:21
sorenvishy: Whenever a new commit lands on trunk, hudson notices.22:21
sorenvishy: It then builds the nova project.22:21
sorenvishy: ..which triggers the nova-tarball project...22:22
sorenvishy: ..which builds a tarball and triggers the nova-ppa project.22:22
sorenHang on, I'm going to look it up to make sure I don't lie.22:23
sorenOk.22:23
*** pvo is now known as pvo_away22:24
sorenvishy: The nova-ppa project does the following:22:24
sorenvishy: It grabs the most recent tarball from Hudson, deduces the version from its name.22:24
sorenvishy: Unpacks the tarball..22:24
sorenvishy: ..checks out the packaging branch and slaps it on top.22:24
sorenvishy: Adds a changelog entry corresponding to the bzr revision and target distribution series.22:25
sorenvishy: Builds the source package.22:25
sorenvishy: Uploads.22:25
sorenvishy: http://paste.openstack.org/show/474/22:26
sorenvishy: Those are the gory details. With me so far?22:26
vishyah gets it from the tarball22:26
vishyyup22:27
sorenYeah.22:27
sorenWell, you could just do "bzr revno".22:27
soren..but I wanted to actually match what the tarball says.22:27
vishysure22:27
sorena) to avoid a race where the something new gets added to trunk in between nova-tarball and nova-ppa run.22:27
sorenb) to keep the version string generation in one place.22:27
vishywhat do you pass in to dch?22:28
sorendch --force-distribution -v "${pkgversion}" "Automated PPA build." -D $series22:28
vishyor do you generate the changelog manually?22:28
sorenIt's all in that pastebin.22:28
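The version deduction soren mentions (the nova-ppa job grabs the latest tarball, deduces the version from its name, and feeds that to dch as `${pkgversion}`) might look like this in outline; the filename pattern is illustrative, not Hudson's exact scheme:

```python
import re

def version_from_tarball(filename):
    """Deduce the package version from a tarball name such as
    'nova-2011.1~bzr559.tar.gz' (hypothetical example name)."""
    m = re.match(r"(?P<name>[a-z]+)-(?P<version>.+?)\.tar\.gz$", filename)
    if not m:
        raise ValueError("unrecognized tarball name: %r" % filename)
    return m.group("version")

pkgversion = version_from_tarball("nova-2011.1~bzr559.tar.gz")
# pkgversion would then be passed along, roughly:
#   dch --force-distribution -v "${pkgversion}" "Automated PPA build." -D $series
```

Deriving the version from the tarball rather than running `bzr revno` again is what closes the race soren describes: the package version always matches the tarball it was built from.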
vishyoh cool22:28
vishyt22:28
vishyhs22:28
vishyx22:28
sorenThere's a couple of reasons why I do it this way.22:28
vishygah, thx that is.  enter key got stuck22:28
* vishy is going to do the simple version to start22:29
sorenSo, we (OpenStack) may want to consume nova trunk at a different pace than Ubuntu.22:29
soren..but we want to collaborate on the packaging.22:29
sorenThat's easier if they're kept completely separate.22:30
sorenThe reason I do the tarball thing is to make the process of building these snapshots be identical to the process of building packages of final releases.22:31
sorenbuilding packages of final releases is always based on tarballs.22:31
sorenI'd hate to be building awesome packages all along, but have the final packages suck because some difference in the process caused a problem.22:31
*** jdarcy has quit IRC22:32
*** ppetraki has quit IRC22:34
vishysoren: so there is no way I can easily match package numbers22:37
*** Ryan_Lane has joined #openstack22:42
*** dendrobates is now known as dendro-afk22:43
*** dendro-afk is now known as dendrobates22:44
sorenvishy: Whuh?22:46
vishysoren: we're building our own packages22:47
vishysoren: so our rev numbers will be totally different22:47
sorenvishy: Ok.22:48
sorenvishy: Sorry, I'm not sure if you're asking a question or what?22:49
vishysoren: no statement22:49
vishysoren: i think i just have to live with the fact that our packages won't match :)22:50
sorenvishy: Do you want them to?22:50
sorenI may very well be lacking some context here.22:50
vishysoren: i guess it doesn't need to22:51
*** gondoi has quit IRC22:53
*** adiantum has joined #openstack23:14
*** EdwinGrubbs has quit IRC23:17
*** nelson has quit IRC23:23
*** adiantum has quit IRC23:23
*** nelson has joined #openstack23:23
*** EdwinGrubbs has joined #openstack23:25
*** adiantum has joined #openstack23:28
sorenredbo, letterj, gholt, creiht: Why is it again that all the swift daemons are disabled by default in the packages?23:29
soren(pardon if I'm flogging a dead horse)23:29
*** adiantum has quit IRC23:36
*** pvo_away is now known as pvo23:41
*** johnpur has quit IRC23:42
creihtsoren: because that's how our ops guys wanted it23:45
sorencreiht: Ah, so it's an internal Rackspace thing?23:47
creihtsoren: that, and it makes sense for swift :)23:47
sorenHow so?23:47
*** hggdh has quit IRC23:47
* creiht sighs23:47
creihtdo we have to go through this *again*? :)23:47
letterjsoren: I believe that our guys have tried it both ways and found that they wanted more control of the services.23:48
*** adiantum has joined #openstack23:49
creihta.) There is a lot of configuration that needs to be done before starting services makes any sense on a new install23:49
*** zul has quit IRC23:49
creihtb.) We talked about having a default swift-one install that could install a working system on one machine, but that hasn't had any priority23:49
letterjNo one has more operational experience running swift clusters than our ops guys.23:50
sorenI'm just trying to understand here. Almost every single other daemon in Ubuntu and Debian is started by default. So when I see one that doesn't, it looks odd.23:51
creihtyou said the same thing last time :)23:51
sorenI'm sure I did.23:51
soren:)23:51
sorenI'm glad I'm consistent :)23:52
creihtThere may be a point some time in the future that it will make sense to do that, it just doesn't right now23:52
sorenGeneral style in Debian packaging is to enable stuff by default if there are good, safe defaults. "If you don't want the service running, don't install it" sort of stuff.23:53
sorencreiht: Fair enough.23:53
sorenI'll make a note of that.23:53
sorenThis will probably be clearer once I try to run it more.23:53
sorenSo a "# We don't yet have a default configuration, so we can't start the daemons by default" sort of comment next to the dh_installinit --no-start stuff would be accurate?23:55
creihtgood enough23:55
*** ccustine has quit IRC23:56
edayI'm not quite sure how you would do it, given the filesystem requirements23:57
*** dragondm has quit IRC23:57
redboyeah, a "default configuration" doesn't make any sense23:58
creihtwell there may be a "I just want to install a single instance to try out" package that would have a default package at some point23:58
redboyeah, that's a different animal23:58
edaysoren: do you know of any other packages that create loopback devices or partitions as part of the install?23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!