*** daemontool_ has quit IRC | 00:04 | |
*** daemontool_ has joined #openstack-freezer | 00:06 | |
*** daemontool_ has quit IRC | 00:13 | |
*** dschroeder has quit IRC | 00:38 | |
*** EinstCrazy has quit IRC | 01:01 | |
*** c00281451 has quit IRC | 01:31 | |
*** EinstCrazy has joined #openstack-freezer | 01:43 | |
*** daemontool has joined #openstack-freezer | 02:31 | |
*** SamYaple has quit IRC | 04:11 | |
*** SamYaple has joined #openstack-freezer | 04:18 | |
*** EinstCrazy has quit IRC | 04:25 | |
*** EinstCrazy has joined #openstack-freezer | 05:21 | |
*** EinstCrazy has quit IRC | 05:47 | |
*** EinstCrazy has joined #openstack-freezer | 05:48 | |
*** EinstCrazy has quit IRC | 07:12 | |
*** EinstCrazy has joined #openstack-freezer | 07:22 | |
*** EinstCrazy has quit IRC | 08:05 | |
*** EinstCrazy has joined #openstack-freezer | 08:09 | |
*** EinstCrazy has quit IRC | 09:55 | |
*** zhangjn has joined #openstack-freezer | 10:37 | |
*** emildi has joined #openstack-freezer | 11:01 | |
Slashme | SamYaple: Hi, nice to see that we are not the only ones caring about backups and things done well :) | 11:02 |
Slashme | Here are my two cents 'cause I'm known to always give my opinion. | 11:02 |
Slashme | Just to start, a quick refresh on how the freezer infra works. | 11:02 |
Slashme | The freezer-agent and freezer-scheduler are installed where you want to back up data. That can be inside a VM if you want to back up OpenStack workloads, on controllers to back up OpenStack infra, or even on your laptop to replace your dropbox if you like. | 11:02 |
Slashme | The freezer-scheduler polls the freezer-api to get the list of backup jobs it is supposed to execute and their scheduling information. The freezer-scheduler then fires the freezer-agent, which executes the required backup. We have different types of backup (the argument is --mode): file is the default, but we also support mongo, mysql, sqlserver, nova through the nova api (not ideal) and cinder through the cinder api (same caveat). The backup is then compressed | 11:02 |
Slashme | (your choice of compression algo), encrypted if necessary, and then sent to a storage medium (we support Swift, SSH and local). | 11:02 |
Slashme | You create backup jobs through the Horizon webUI or the python-freezerclient. | 11:02 |
Slashme | That's it for Freezer. | 11:02 |
Slashme | We have the short-term goal to abstract the --mode with a plugin layer (ping szaher reldan ). Meaning the freezer-agent could do any type of backup through loading plugins from a plugin folder. From what I understand, Ekko could re-use all the freezer infra, having freezer-agent installed on the compute node and using a specific Ekko mode. | 11:02 |
Slashme | Anyway and regardless of the kind of convergence Freezer and Ekko achieve, thanks for your openness and interest. It is nice to see people that are interested in the community side of OpenStack. | 11:03 |
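For context on the plugin-layer idea mentioned above, here is a minimal sketch of what such a pluggable backup-mode interface could look like. The class, method, and package names are hypothetical illustrations, not Freezer's actual code:

```python
# Hypothetical sketch of a pluggable backup "mode" layer (not Freezer's real code).
import abc
import importlib


class BackupMode(abc.ABC):
    """One backup mode: file, mysql, mongo, ... or an Ekko-style block mode."""

    @abc.abstractmethod
    def backup(self, job_conf):
        """Produce the backup for the configured source."""

    @abc.abstractmethod
    def restore(self, job_conf):
        """Restore the data described by job_conf."""


def load_mode(mode_name, plugin_package="freezer_mode_plugins"):
    """Load a mode plugin by name from a plugin folder/package (illustrative only)."""
    module = importlib.import_module(f"{plugin_package}.{mode_name}")
    return module.Mode()  # each plugin module is assumed to expose a Mode class
```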
*** pbourke has joined #openstack-freezer | 11:04 | |
*** zhangjn has quit IRC | 11:06 | |
*** zhangjn has joined #openstack-freezer | 11:09 | |
zhangjn | what's the choice, Freezer or Ekko? | 11:21 |
*** zhangjn has quit IRC | 11:58 | |
*** pbourke has quit IRC | 12:02 | |
*** pbourke has joined #openstack-freezer | 12:03 | |
*** p3r0t has joined #openstack-freezer | 12:13 | |
p3r0t | hi folks | 12:13 |
p3r0t | I have a question about where freezer is going to store the backups. Is it possible to use ceph instead of swift? | 12:14 |
*** daemontool has quit IRC | 12:19 | |
Slashme | If you use the radosgw, then yes. | 12:26 |
Slashme | When you use the radosgw, ceph replaces swift and provides the swift api | 12:26 |
Slashme | If you mean using librados and pushing things directly into ceph, then no. Or at least, not yet; it would be possible to implement, I guess | 12:27 |
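In other words, Freezer keeps speaking the Swift protocol and radosgw answers it. A rough illustration with python-swiftclient pointed at a radosgw endpoint; the URL and credentials are placeholders:

```python
# Placeholder endpoint and credentials; radosgw exposes a Swift-compatible API.
from swiftclient.client import Connection

conn = Connection(
    authurl="https://radosgw.example.com/auth/v1.0",  # radosgw auth endpoint (placeholder)
    user="backup:swift",
    key="secret",
)

# From the client's point of view this is plain Swift: same containers and objects.
conn.put_container("freezer_backups")
headers, objects = conn.get_container("freezer_backups")
```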
*** zhangjn has joined #openstack-freezer | 12:33 | |
*** zhangjn has quit IRC | 12:34 | |
*** zhangjn has joined #openstack-freezer | 12:35 | |
*** daemontool has joined #openstack-freezer | 12:40 | |
*** zhangjn has quit IRC | 12:44 | |
daemontool | Morning | 12:46 |
Slashme | morning daemontool | 12:50 |
daemontool | how is it going | 12:55 |
daemontool | :) | 12:55 |
daemontool | I'm releasing freezer 1.2.1 to pypi | 12:56 |
daemontool | after the bugfix merges: https://review.openstack.org/#/c/271702/2 | 12:56 |
p3r0t | Slashme, thanks | 13:53 |
daemontool | be back in 30 min | 13:56 |
*** daemontool has quit IRC | 14:00 | |
*** emildi has quit IRC | 14:08 | |
*** EinstCrazy has joined #openstack-freezer | 14:16 | |
*** daemontool has joined #openstack-freezer | 14:31 | |
*** EmilDi has joined #openstack-freezer | 14:36 | |
*** dims has joined #openstack-freezer | 14:44 | |
*** samuelBartel has joined #openstack-freezer | 14:46 | |
dims | daemontool: checked with rpodolyaka over on Nova about the VM snapshot capabilities. his answer was " libvirt/xenapi/hyperv/vmware should all support live snapshoting" there's this note about some restrictions - https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1499 | 14:49 |
*** EinstCrazy has quit IRC | 15:13 | |
daemontool | dims, thanks a lot appreciate your help :) | 15:42 |
*** dschroeder has joined #openstack-freezer | 15:56 | |
*** samuelBartel has quit IRC | 17:02 | |
*** dims has quit IRC | 17:58 | |
*** EmilDi has quit IRC | 18:13 | |
*** EinstCrazy has joined #openstack-freezer | 18:14 | |
SamYaple | Slashme: thanks for the run down | 18:14 |
Slashme | SamYaple: Np | 18:14 |
SamYaple | Slashme: how is retention handled with freezer? | 18:15 |
*** EinstCrazy has quit IRC | 18:18 | |
SamYaple | Or anyone on the freezer team, how are you handling retention? | 18:20 |
SamYaple | Searching through the freezer repos I find no mention of retention of any kind. Does this mean you must keep around all incrementals for the life of the backup? | 18:32 |
*** daemontool_ has joined #openstack-freezer | 18:45 | |
*** daemontool has quit IRC | 18:47 | |
*** dims has joined #openstack-freezer | 19:54 | |
*** daemontool has joined #openstack-freezer | 20:06 | |
*** daemontool_ has quit IRC | 20:09 | |
daemontool | SamYaple, did anyone provide you info regarding the retention? | 21:03 |
SamYaple | daemontool: not yet | 21:05 |
daemontool | SamYaple, you have two options you can use | 21:05 |
daemontool | remove_from_date and remove_older_than | 21:05 |
daemontool | if you do not use those, the backups will be kept for as long as the user wants | 21:05 |
SamYaple | what about middle-of-chain removal? | 21:05 |
daemontool | i.e. remove older than 30 days but keep 1 backup for 1 year? | 21:06 |
daemontool | there's a bp for that but it is not implemented | 21:06 |
daemontool | also, we never encountered an operational case for that in user-provided feedback | 21:07 |
daemontool | but if needed it can be done | 21:07 |
daemontool | probably we'd need to add a time range | 21:07 |
SamYaple | yes, but more like "take backup every hour but only keep a daily backup longterm" | 21:07 |
daemontool | for deletion | 21:07 |
daemontool | we do not have that currently | 21:08 |
SamYaple | how is retention performed? In the case of object storage does this mean all the data is downloaded, merged, and reuploaded? | 21:08 |
daemontool | nope | 21:08 |
daemontool | we record a timestamp in the object metadata on swift | 21:08 |
daemontool | so we only download the metadata | 21:08 |
daemontool | not the data | 21:08 |
daemontool | according to the metadata, the object will be removed | 21:09 |
daemontool | at the next one I'll send you a bill =P | 21:09 |
daemontool | kidding... | 21:09 |
daemontool | swift manifest is probably what you need to look at | 21:09 |
SamYaple | but the actual retention act, (removing prior to certain date) how are you doing that? | 21:09 |
daemontool | we have two ways of checking the data | 21:10 |
daemontool | one is the timestamp | 21:10 |
daemontool | in the metadata of the object | 21:10 |
daemontool | the other is the timestamp stored in the object name | 21:10 |
daemontool | so when you get the list of all the objects from swift | 21:10 |
daemontool | you know whether the timestamp of the object | 21:11 |
daemontool | is within or outside the timeframe | 21:11 |
daemontool | you need | 21:11 |
daemontool | if it is out then you delete the object from swift | 21:11 |
daemontool | and the same happens | 21:11 |
daemontool | to the other | 21:11 |
daemontool | storage media | 21:11 |
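A rough sketch of the pruning logic described above, assuming python-swiftclient and a timestamp carried in the object name; the naming convention and helper are illustrative, not Freezer's exact scheme:

```python
# Illustrative pruning: list the objects, read a timestamp out of each name,
# and delete anything older than the cutoff. The "<name>_<unix_ts>" convention
# is an assumption for this sketch, not Freezer's exact naming scheme.
import time


def prune_older_than(conn, container, max_age_days):
    cutoff = time.time() - max_age_days * 86400
    _headers, objects = conn.get_container(container, full_listing=True)
    for obj in objects:
        try:
            timestamp = float(obj["name"].rsplit("_", 1)[-1])
        except ValueError:
            continue  # no timestamp in the name; leave the object alone
        if timestamp < cutoff:
            conn.delete_object(container, obj["name"])
```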
SamYaple | thats not quite what im asking. If you take a backup day 1, 2, 3, 4.. and you delete day 1 and 2, are you merging those into day 3? | 21:11 |
daemontool | like ssh and local fs (i.e. mounted nfs) | 21:12 |
daemontool | nope | 21:12 |
daemontool | why do you want to do that? | 21:12 |
daemontool | the backups are compressed and encrypted | 21:12 |
SamYaple | because if you delete day 1 and 2 of an incremental backup you can't access day 3 | 21:12 |
daemontool | ah you mean | 21:12 |
daemontool | by 1 2 3 4 5 the incremental backup level? | 21:12 |
daemontool | you always need level 0 | 21:13 |
SamYaple | yes. have you not been talking about incremental backups? | 21:13 |
daemontool | you need level 0 for incremental backups | 21:13 |
SamYaple | so you do diffs off of level 0? | 21:13 |
daemontool | so I'm not sure if your example applies | 21:13 |
daemontool | or there's something I'm missing | 21:13 |
daemontool | yes | 21:13 |
daemontool | there are 2 options | 21:13 |
daemontool | max_level | 21:13 |
daemontool | uses incrementals | 21:13 |
daemontool | always_level is more like a differential | 21:14 |
daemontool | so the new level is always the difference between level 0 and your current level | 21:14 |
daemontool | when restoring | 21:14 |
daemontool | only 2 levels need to be restored | 21:14 |
SamYaple | so you have level 0, 1, 2, 3 and you want to remove 1 and 2, do you download 1 2 and 3 and merge them? | 21:14 |
daemontool | so it is faster | 21:14 |
daemontool | nope | 21:14 |
daemontool | why do you want to do that? | 21:14 |
daemontool | why do you want to touch encrypted, compressed backups stored on a storage medium? | 21:14 |
SamYaple | so they are all just diffs off of level 0? | 21:15 |
daemontool | yes | 21:15 |
SamYaple | esh | 21:15 |
SamYaple | that won't work long term for block backups at all | 21:15 |
daemontool | and why is that? | 21:15 |
daemontool | it works for rsync block based backups | 21:15 |
SamYaple | if level 1 has 1gb of change, then level 2 will have 1gb + whatever else is changed | 21:15 |
SamYaple | thats duplicating what you are backing up | 21:15 |
SamYaple | not to mention requiring level 0 forever | 21:16 |
daemontool | why does level 2 have 1 GB + the incremental? | 21:16 |
daemontool | level 2 has only the incremental | 21:16 |
daemontool | level 0 you need it for restore | 21:16 |
daemontool | otherwise how will you restore the data if you need it? | 21:16 |
SamYaple | level 2 has the same information you backed up in level 1, plus whatever has changed for the level 2 backup | 21:17 |
SamYaple | because its a diff on level 0 | 21:17 |
daemontool | ah you mean for the always_level option | 21:17 |
daemontool | it's only a level 1 | 21:17 |
daemontool | you always have 2 levels | 21:18 |
daemontool | 0 and 1 if you set that | 21:18 |
daemontool | while with max_level | 21:18 |
daemontool | you have 0 1 2 3 4 5 | 21:18 |
SamYaple | so for incrementals, not diffs, how do you handle removing 1 2 3 out of 0 1 2 3 4 5? | 21:18 |
daemontool | if you do that you can restore only 0 | 21:18 |
SamYaple | so you _can't_ is what you are saying? | 21:19 |
daemontool | because the differential data between 0 - 1 2 3 and 4 | 21:19 |
daemontool | is gone | 21:19 |
daemontool | so 4 is not usable | 21:19 |
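To make the dependency concrete, here is a toy model of what stays restorable under the two schemes being discussed (max_level chains versus always_level diffs against level 0); this is reasoning about the scheme, not Freezer code:

```python
# Toy model of restore dependencies (not Freezer code).

def restorable_max_level(available, target):
    """max_level: level N is an increment on N-1, so restore needs 0..N unbroken."""
    return all(level in available for level in range(target + 1))


def restorable_always_level(available, target):
    """always_level: every backup is a diff against level 0, so only 0 and N are needed."""
    return 0 in available and target in available


remaining = {0, 4, 5}                          # levels 1-3 were deleted out of 0..5
print(restorable_max_level(remaining, 4))      # False: the chain 0,1,2,3,4 is broken
print(restorable_always_level(remaining, 4))   # True: levels 0 and 4 are enough
```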
SamYaple | yea ok thats what i thought was happening | 21:19 |
SamYaple | thats the issue | 21:19 |
daemontool | not sure what's the issue there | 21:19 |
daemontool | explain it please | 21:19 |
SamYaple | so you have to keep all incrementals for the backup around forever | 21:20 |
daemontool | nope | 21:20 |
SamYaple | you just said it though | 21:20 |
daemontool | at some point you want to restart level 0 | 21:20 |
daemontool | let's say you do | 21:20 |
daemontool | daily backup | 21:20 |
daemontool | and every week you do a full backup | 21:20 |
SamYaple | restarting a level 0 is the problem | 21:20 |
daemontool | so you have 6 levels | 21:20 |
daemontool | 0..6 | 21:20 |
SamYaple | this is not something that is acceptable to the backup world | 21:20 |
daemontool | it's a point-in-time state of your system | 21:21 |
daemontool | that's how rsync works | 21:21 |
SamYaple | i understand what you are saying, its just not a good way to do backups. it doesn't work for blockbased at all | 21:21 |
daemontool | tar works | 21:21 |
daemontool | etc etc | 21:21 |
daemontool | so | 21:21 |
daemontool | for blockbased | 21:21 |
SamYaple | yes. and it doesnt work for the scale of block-based backups | 21:21 |
daemontool | you can use rolling block-based backups | 21:21 |
daemontool | which is exactly how | 21:22 |
daemontool | rsync works | 21:22 |
daemontool | so now, why doesn't the rsync algorithm work for you? | 21:22 |
SamYaple | you can't force a new "level 0" backup because of the size of the backup. its potentially 100's of GBs | 21:22 |
SamYaple | doing diffs means the size of the backup grows forever (until you do a new full) | 21:23 |
daemontool | it's the point in time of your system | 21:23 |
daemontool | of your data | 21:23 |
daemontool | if you do not want to keep them | 21:23 |
daemontool | you can delete them | 21:23 |
SamYaple | not without losing your latest backup | 21:23 |
SamYaple | requiring a new full after the first one should never happen | 21:23 |
SamYaple | with ekko we can have 0 1 2 3 4 5 6 7 8 9 and remove 2 3 6 7 without losing 0 1 4 5 8 9 | 21:24 |
daemontool | ok | 21:24 |
daemontool | that's libvirt that does that | 21:24 |
daemontool | not ekko | 21:24 |
daemontool | but you still need level 0 | 21:24 |
SamYaple | thats not true at all | 21:25 |
SamYaple | and we do not | 21:25 |
SamYaple | we can drop 0 1 2 3 4 5 6 7 8 and still have 9 | 21:25 |
daemontool | ok | 21:25 |
daemontool | so you have all the data on 9? | 21:25 |
SamYaple | libvirt has nothing to do with that part | 21:25 |
SamYaple | yup | 21:25 |
daemontool | ok | 21:25 |
daemontool | yes qemu sorry | 21:25 |
daemontool | not libvirt | 21:25 |
SamYaple | no qemu has nothing to do with that either | 21:25 |
SamYaple | thats what Ekko can do | 21:26 |
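One common way to get that property (manifest-based, deduplicating backups; not necessarily Ekko's exact implementation) is to give every backup its own manifest of content-addressed blocks and let retention garbage-collect only the blocks no surviving manifest still references. A conceptual sketch:

```python
# Conceptual sketch of manifest-based retention (not Ekko's actual code):
# each backup is a manifest listing content-addressed blocks, so deleting a
# backup only frees blocks that no remaining manifest still references.

def unreferenced_blocks(manifests, deleted_ids):
    """Block hashes that can be garbage-collected after deleting some backups."""
    kept = {b for bid, blocks in manifests.items() if bid not in deleted_ids for b in blocks}
    dropped = {b for bid, blocks in manifests.items() if bid in deleted_ids for b in blocks}
    return dropped - kept


manifests = {
    0: {"a", "b", "c"},
    1: {"a", "b", "d"},  # incremental: reuses a, b and adds d
    2: {"a", "e"},       # reuses a and adds e
}
# Drop backups 0 and 1; backup 2 stays restorable because "a" is still referenced.
print(unreferenced_blocks(manifests, {0, 1}))  # {'b', 'c', 'd'} (set order may vary)
```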
daemontool | ok | 21:26 |
daemontool | so what's your point | 21:26 |
daemontool | ? | 21:26 |
daemontool | so ? | 21:26 |
daemontool | how does that | 21:26 |
daemontool | relate to the point I asked you to clarify | 21:26 |
daemontool | on the ml? | 21:26 |
daemontool | is that your point? | 21:26 |
SamYaple | i was trying to figure out if freezer had retention for incremental backups and how it works | 21:26 |
daemontool | ok | 21:27 |
SamYaple | but from what you are telling me you dont do retention which answers that question | 21:27 |
daemontool | we do retention in the way I described | 21:27 |
SamYaple | but thats not actually retention | 21:27 |
daemontool | ok | 21:27 |
SamYaple | you arent retaining anything | 21:27 |
daemontool | I see your point | 21:27 |
daemontool | I'm retaining the data | 21:28 |
daemontool | for the time the user wants | 21:28 |
daemontool | and removing the data the user does not want | 21:28 |
daemontool | in a way that if the user wants to restore the data at the point in time he wants | 21:28 |
daemontool | he can | 21:28 |
daemontool | that is our use case | 21:28 |
daemontool | I get that you have a different idea | 21:29 |
daemontool | not trying to convince you | 21:29 |
daemontool | just answering | 21:29 |
SamYaple | I understand. What you are describing just doesn't work for block-based backups due to the size of them | 21:30 |
daemontool | I don't see it that way | 21:30 |
daemontool | the thing I'm telling you | 21:30 |
daemontool | has been in production | 21:31 |
daemontool | for some time | 21:31 |
daemontool | at reasonable scale | 21:31 |
daemontool | now | 21:31 |
daemontool | if you tell me | 21:31 |
daemontool | there's a better way | 21:31 |
SamYaple | it does not work at scale, especially when you start talking about TBs of data | 21:31 |
daemontool | that's ok | 21:31 |
SamYaple | look at the problems that arguably the two largest backup companies (StorageCraft and Veeam) have with it | 21:32 |
SamYaple | the way they have gotten around this is by merging incrementals | 21:32 |
SamYaple | but that doesn't work for object-storage and thats another limitation to scale | 21:32 |
daemontool | if the data are encrypted and compressed | 21:33 |
daemontool | that is not efficient with any storage medium | 21:33 |
daemontool | not only with object storage | 21:33 |
SamYaple | Well that's my point, I've solved that with Ekko | 21:33 |
SamYaple | but i was mainly curious to see if you already had a way to do retention | 21:33 |
daemontool | let's talk about this more | 21:34 |
daemontool | I have a meeting now | 21:34 |
daemontool | sorry | 21:34 |
SamYaple | ok | 21:34 |
daemontool | thank you | 21:34 |
daemontool | this was interesting :) | 21:34 |
SamYaple | ill bet i can bring what i use for ekko to freezer | 21:34 |
SamYaple | even for file level | 21:34 |
*** dims has quit IRC | 21:46 | |
daemontool | SamYaple, that is fantastic | 22:06 |
daemontool | and that open approach is brilliant | 22:06 |
daemontool | ++ | 22:06 |
SamYaple | the only concern is I rely on changed block tracking on the hypervisor to tell me what to backup (rather than rsync's algorithm) but that could be solved in other ways | 22:07 |
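For readers following along, the difference is that changed block tracking has the hypervisor hand over a dirty-block list, instead of re-reading and checksumming the whole device the way rsync-style scanning does. A purely conceptual contrast (not Ekko's or QEMU's API):

```python
# Toy contrast between rsync-style scanning and changed block tracking (CBT).
# Purely conceptual; real CBT data comes from the hypervisor (e.g. QEMU dirty bitmaps).
import hashlib


def changed_blocks_by_scanning(previous_hashes, device_blocks):
    """rsync-style: read every block and compare checksums with the previous run."""
    changed = []
    for index, block in enumerate(device_blocks):
        digest = hashlib.sha256(block).hexdigest()
        if previous_hashes.get(index) != digest:
            changed.append(index)
        previous_hashes[index] = digest
    return changed


def changed_blocks_by_cbt(dirty_bitmap):
    """CBT: the hypervisor already recorded which blocks were written."""
    return [index for index, dirty in enumerate(dirty_bitmap) if dirty]
```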
*** daemontool has quit IRC | 22:19 | |
*** daemontool has joined #openstack-freezer | 22:30 | |
daemontool | SamYaple, I'm here | 22:31 |
daemontool | yes I agree | 22:31 |
daemontool | I think we have to provide flexibility | 22:31 |
daemontool | if that's the best solution | 22:31 |
daemontool | to solve the vm backup in terms of efficiency | 22:31 |
daemontool | it makes sense to provide that | 22:31 |
daemontool | if there is an alternative, better approach for volumes or files | 22:32 |
daemontool | it will be proposed | 22:32 |
daemontool | but there are cases where for companies | 22:32 |
daemontool | storage efficiency is not important | 22:32 |
daemontool | time efficiency is | 22:32 |
daemontool | for others not | 22:32 |
SamYaple | "storage efficiency is not important". I work at a backup company, storage efficiency is one of the biggest concerns | 22:33 |
daemontool | space efficiency | 22:33 |
SamYaple | but luckily what I propose is more efficient space- and time-wise | 22:33 |
daemontool | better, then | 22:33 |
daemontool | in the financial industry | 22:34 |
SamYaple | so as Slashme suggested, I can probably reuse the freezer agent/scheduler if there is a better plugin functionality for it | 22:34 |
daemontool | time is | 22:34 |
daemontool | I didn't see that conversation | 22:34 |
daemontool | but Slashme has excellent skills and knowledge | 22:34 |
daemontool | so his advice most probably makes sense | 22:34 |
SamYaple | daemontool: in the case of ekko, the 'downtime' of an instance is very, very little, far less than the time it takes to do the backup | 22:34 |
SamYaple | a few seconds | 22:34 |
SamYaple | the backup happens behind the scenes without affecting the instance | 22:35 |
SamYaple | and it's not truly downtime either, not like a snapshot which pauses the instance | 22:35 |
SamYaple | more like read-only rather than hard-down | 22:35 |
*** daemontool_ has joined #openstack-freezer | 22:37 | |
daemontool_ | SamYaple, we need to write a bp | 22:37 |
daemontool_ | and get feedback | 22:37 |
daemontool_ | also from Nova | 22:38 |
daemontool_ | it's important, otherwise we are always going to have issues in prod environments placing the components we want on the compute nodes | 22:38 |
daemontool_ | have an open discussion on that | 22:38 |
daemontool_ | do you agree/ | 22:38 |
daemontool_ | ? | 22:38 |
*** daemontool has quit IRC | 22:39 | |
SamYaple | I don't know what would be needed from nova | 22:39 |
SamYaple | I don't actually speak to nova at all currently | 22:39 |
daemontool_ | we touch the hypervisor | 22:39 |
daemontool_ | without using the nova api | 22:39 |
SamYaple | right | 22:39 |
daemontool_ | I saw that many times in the past and it always was an issue | 22:39 |
SamYaple | Can you give an example? | 22:39 |
daemontool_ | for what? | 22:40 |
SamYaple | 22:40:11 < daemontool_> I saw that many times in the past and it always was an issue | 22:40 |
daemontool_ | like collectors of hypervisor metrics | 22:40 |
daemontool_ | for cloud big data analysis | 22:40 |
daemontool_ | rather than using the api | 22:40 |
daemontool_ | same apply for cinder | 22:41 |
SamYaple | But specifically what issues are you talking about? | 22:41 |
daemontool_ | the issue is that whoever manages the compute nodes in a production environment | 22:41 |
daemontool_ | does not want anything touching the hypervisor directly | 22:42 |
daemontool_ | this is the issue | 22:42 |
SamYaple | Nova should not be in charge of speaking to the hypervisor for these particular actions in my opinion | 22:42 |
daemontool_ | yes but we need to have their feedback | 22:42 |
daemontool_ | explain things | 22:42 |
SamYaple | Oh sure I agree | 22:42 |
daemontool_ | why we do that | 22:42 |
daemontool_ | why we take that decision | 22:42 |
daemontool_ | and so on | 22:42 |
SamYaple | but hinging this project on nova is a good way to get it killed ;) | 22:43 |
daemontool_ | listen to their risks and concerns | 22:43 |
daemontool_ | address them and move forward | 22:43 |
daemontool_ | I don't think so | 22:43 |
daemontool_ | If our approach | 22:43 |
daemontool_ | is good for users | 22:43 |
daemontool_ | and we explain it well | 22:43 |
daemontool_ | clearly | 22:43 |
daemontool_ | openly | 22:43 |
daemontool_ | and we address the concerns | 22:44 |
daemontool_ | it will be accepted | 22:44 |
daemontool_ | if a solution is better | 22:44 |
SamYaple | Unfortunately that has never been the case with nova. but nevertheless that is the plan of action yes | 22:44 |
daemontool_ | there's no need to hide (not saying you are saying that) | 22:44 |
daemontool_ | SamYaple, what I'm saying is let's think very well about it | 22:45 |
daemontool_ | the approach | 22:45 |
daemontool_ | and so on | 22:45 |
daemontool_ | let's do the plan together | 22:45 |
daemontool_ | deep thinking on it | 22:45 |
daemontool_ | provide a bp/spec | 22:45 |
daemontool_ | and present it seriously to the OpenStack community, TC and so on | 22:46 |
daemontool_ | it will be positive | 22:46 |
daemontool_ | and | 22:47 |
daemontool_ | probably I'd approach this | 22:47 |
daemontool_ | by proposing a solution for a specific problem | 22:47 |
daemontool_ | narrowing the scope | 22:47 |
SamYaple | There will definitely be a bp/spec with nova, but ekko won't be blocked by nova, has been my point | 22:47 |
daemontool_ | how does it sound? | 22:48 |
SamYaple | what you are proposing is offloading a lot of work onto nova, and I don't see that being useful | 22:48 |
daemontool_ | SamYaple, don't worry about the blocking things | 22:48 |
daemontool_ | ok | 22:48 |
SamYaple | nova is already large and bloated | 22:48 |
daemontool_ | let's discuss about it | 22:48 |
daemontool_ | ok | 22:48 |
daemontool_ | I have to do a meeting | 22:48 |
daemontool_ | be right back soon | 22:48 |
SamYaple | ok | 22:50 |
*** dims has joined #openstack-freezer | 22:51 | |
*** dims has quit IRC | 22:54 | |
*** daemontool has joined #openstack-freezer | 22:58 | |
*** daemontool_ has quit IRC | 23:01 | |
*** dims_ has joined #openstack-freezer | 23:30 | |
daemontool | SamYaple, I'm not proposing to offload to noav | 23:35 |
daemontool | nova | 23:35 |
SamYaple | daemontool: i think we are actually on the same page about all of this, I'm just focusing on an alpha release | 23:36 |
daemontool | let's discuss this Friday | 23:36 |
daemontool | tomorrow I can't | 23:36 |
daemontool | SamYaple, ok great | 23:36 |