Thursday, 2016-01-14

*** dschroeder has quit IRC00:18
*** EinstCrazy has joined #openstack-freezer01:14
*** c00281451 has quit IRC01:27
*** EinstCrazy has quit IRC01:57
*** EinstCrazy has joined #openstack-freezer01:59
*** reldan has quit IRC02:13
*** EinstCra_ has joined #openstack-freezer02:13
*** EinstCrazy has quit IRC02:16
*** szaher__ has joined #openstack-freezer02:52
*** szaher_ has quit IRC02:53
*** daemontool has quit IRC03:30
*** EinstCra_ has quit IRC04:13
*** EinstCrazy has joined #openstack-freezer04:13
*** EinstCrazy has quit IRC04:57
*** EinstCrazy has joined #openstack-freezer05:01
*** daemontool has joined #openstack-freezer07:14
*** daemontool has quit IRC07:22
*** ig0r_ has joined #openstack-freezer09:17
*** daemontool has joined #openstack-freezer09:23
daemontoolMorning09:34
daemontoolvannif,  ping09:34
*** ig0r_ has quit IRC09:41
vanniflo10:02
daemontoolvannif,  do you have 10 min to look together at10:06
daemontoolthis? https://review.openstack.org/#/c/260950/10:06
daemontoolI'd like to solve that asap10:06
daemontoolif possible10:06
vannifyes. just opened it10:06
daemontoolok ty10:07
*** reldan has joined #openstack-freezer10:07
daemontoolthe issue there is that the module freezer_api is never loaded10:07
daemontoolreldan,  morning10:08
*** EinstCrazy has quit IRC10:10
reldandaemontool: hi10:18
reldandaemontool: do we have a problem?10:18
reldansomething wrong after my fixes?10:18
*** samuelBartel has joined #openstack-freezer10:24
daemontoolreldan,  why?10:24
reldandaemontool: I don’t know ) You just said good morning to me, and I thought that probably something was wrong with my commit )10:25
reldanAnd we have something critical to fix )10:25
vannifexcusatio non petita, accusatio manifesta10:29
daemontoolreldan, mmhhh... did you do anything that we don't know?10:39
daemontoollol10:39
daemontool=)10:39
reldandaemontool :) Nope I just saw these messages10:39
daemontoolvannif,  any clue about testr on the freezer-api?10:39
reldandaemontool: the issue there is that the module freezer_api is never loaded10:40
reldan[10:08am] daemontool: reldan,  morning10:40
daemontoolreldan,  yes10:40
reldanAnd I supposed that it may be related :)10:40
daemontoolthat's the issue10:40
daemontoolyes10:40
reldanDo we have some log or we should add tests to load this module?10:40
daemontoolreldan,  the tests load that module10:49
daemontoolthe only log we have is this one http://logs.openstack.org/50/260950/2/check/gate-freezer-api-python27/824e2ce/console.html10:49
daemontoolreldan, what would you like to work on next?10:50
reldanI have two days before my holiday. I would like to improve logging today for parallel backup and add some documentation10:51
daemontoolah ok10:51
daemontoolyes10:51
reldanBut if we have some bugs, I’m ready to take one )10:51
daemontooltesting of parallel10:51
daemontooland logging10:51
daemontool++10:51
reldanDeal :)10:51
daemontoolreldan, I feel we need to improve the nova and cinder backups in some way10:57
daemontoolwe should have a bp to review, if I remember well?10:58
reldandaemontool: I agree. We need requirements and architecture document10:58
daemontoolyes10:58
daemontoolor we do not have the bp yet?10:58
daemontoolI remember you had some clear idea/options about it10:58
daemontoolfrescof too had some idea10:58
daemontoollet's talk about it to the meeting today10:59
reldanI don’t know, I didn’t write it. The biggest problem - we need requirements and some agreement that we are going to do it that way10:59
reldanBecause now it may be implemented with cindernative, or may be implemented non-cinder-natively10:59
reldanSome ideas with VM with freezer attached to volumes11:00
daemontoolm3m0, to the topics for the meeting today please add: elasticsearch backup (from Deklan), bp for nova&cinder backup, python-frezerclient, backup/restore listing using the scheduler (the code will be ported to the freezerclient)11:00
daemontoolreldan,  I think we should support both11:00
daemontoolIt's a bad answer I know11:01
reldanIn this case for tenant backup - we also should support both and for multi-region backup11:01
daemontoolbut by supporting both we offer flexibility11:01
daemontoolyes11:01
reldanSo we will have two tenant backups and two multi-region backup11:01
daemontoolmmhhh11:02
daemontoolfrescof, do you have any comment?11:02
m3m0daemontool, noted, https://etherpad.openstack.org/p/freezer_meetings11:03
daemontoolm3m0,  thanks11:03
*** EinstCrazy has joined #openstack-freezer11:33
daemontoolhttps://review.openstack.org/#/c/267485/11:40
*** reldan has quit IRC11:47
*** reldan has joined #openstack-freezer12:11
openstackgerritFausto Marzi proposed openstack/freezer-web-ui: Align requirements to liberty global-requirements  https://review.openstack.org/24698112:52
openstackgerritMemo Garcia proposed openstack/freezer-web-ui: Fix for sessions that point to non-existing urls  https://review.openstack.org/26757813:53
openstackgerritMemo Garcia proposed openstack/freezer: Merge vssadmin argument with snapshot  https://review.openstack.org/26759514:28
*** pennerc has joined #openstack-freezer14:29
m3m0reldan, vannif https://review.openstack.org/26759514:29
m3m0daemontool ^^14:29
reldanm3m0: +2 but let’s wait for tests14:30
m3m0so far nothing is broken locally but let's wait :)14:31
reldanm3m0: You know, probably we can completely remove vssadmin14:32
reldan    if is_windows():14:32
reldan        # vssadmin is to be deprecated in favor of the --snapshot flag14:32
reldan        if backup_opt_dict.snapshot:14:32
reldan            backup_opt_dict.vssadmin = True14:32
reldanm3m0: freezer/backup.py 21614:32
m3m0wait wait, so should I leave vssadmin but with the deprecation flag?14:33
m3m0and start the movement to snapshot?14:33
reldanm3m0: I don’t know. I just see that vssadmin is always true when we have is_windows and snapshot14:33
reldanso if you are removing it, probably you can remove it from backup.py as well14:34
m3m0yep, that's the case otherwise by default14:34
reldanm3m0:14:37
reldanhttps://gist.github.com/Reldan/c37d2a53545fce54ee1a14:37
m3m0yep i did the same14:37
m3m0I'm pushing :)14:37
reldanGreat!14:37
m3m0thanks :)14:37
vannifyes. I too think that the snapshotting code should look for the --snapshot flag, be it vss, lvm, or whatever (btrfs ?)14:39
openstackgerritMemo Garcia proposed openstack/freezer: Merge vssadmin argument with snapshot  https://review.openstack.org/26759514:39
m3m0done14:40
reldan+214:40
vannifdo we want to keep the --vssadmin flag as deprecated ?14:41
m3m0really I don't think so14:43
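The simplification the discussion above converges on (on Windows, the --snapshot flag alone implies VSS, so a separate vssadmin flag is redundant) can be sketched in isolation; `BackupOptions`, `is_windows`, and `use_vss_snapshot` here are illustrative stand-ins, not the actual freezer code:

```python
# Sketch only: on Windows the --snapshot flag alone decides whether a VSS
# snapshot is taken, so a separate vssadmin flag can be dropped.
# BackupOptions, is_windows and use_vss_snapshot are invented stand-ins.
import sys
from dataclasses import dataclass

def is_windows():
    # freezer has a similar helper; this one just checks sys.platform
    return sys.platform.startswith('win')

@dataclass
class BackupOptions:
    snapshot: bool = False

def use_vss_snapshot(opts, windows=None):
    """Return True when a VSS snapshot should be taken.

    Replaces the old pattern of copying opts.snapshot into a separate
    opts.vssadmin attribute when running on Windows.
    """
    if windows is None:
        windows = is_windows()
    return bool(windows and opts.snapshot)

print(use_vss_snapshot(BackupOptions(snapshot=True), windows=True))   # True
print(use_vss_snapshot(BackupOptions(snapshot=True), windows=False))  # False
```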
openstackgerritMemo Garcia proposed openstack/freezer-web-ui: Simplify snapshot configuration for actions  https://review.openstack.org/26761714:48
daemontoolsorry I was on a meeting15:27
daemontoolcatching up15:27
daemontoolvannif,  tests are not discovered15:37
daemontoolthat's the issue15:38
*** EinstCrazy has quit IRC15:38
*** emildi has quit IRC15:40
*** emildi has joined #openstack-freezer15:52
*** dschroeder has joined #openstack-freezer15:54
*** ddieterly has joined #openstack-freezer15:56
m3m0#startmeeting openstack-freezer 14-01-201616:01
openstackMeeting started Thu Jan 14 16:01:09 2016 UTC and is due to finish in 60 minutes.  The chair is m3m0. Information about MeetBot at http://wiki.debian.org/MeetBot.16:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:01
openstackThe meeting name has been set to 'openstack_freezer_14_01_2016'16:01
m3m0All: meetings notes available in real time at: https://etherpad.openstack.org/p/freezer_meetings16:01
m3m0hey guys ready to rumble?16:01
ddieterlyyes16:01
m3m0who is here today? please raise your hand16:01
m3m0o/16:01
ddieterlyo/16:01
reldano/16:01
m3m0ok let's start16:02
m3m0#topic elasticsearch16:03
m3m0we need to create a new mode in freezer to backup and restore elasticsearch16:03
m3m0has anyone looked at it?16:03
ddieterlyi looked at es this morning16:03
ddieterlyso, the req is to be able to backup /var/log, audit logs (whatever that means), and es16:04
m3m0in case of cluster, do we need to backup only the master one?16:04
ddieterlyi think /var/log and audit logs can already be backed up in freezer thru config16:04
ddieterlyfor es, we will need to mount a shared volume and snapshot es to that shared volume and then back the snapshot up16:05
m3m0if that's the case then no new mode is required16:05
m3m0why a shared volume?16:05
ddieterlythe alternative is to backup each snapshot on each node of the cluster (i think)16:06
m3m0is it necessary to backup each node?16:07
m3m0by the way do you want to take ownership of this ddieterly?16:08
ddieterlyi think we would technically need to backup each shard16:08
ddieterlyto get a logically consistent view of the entire db, it seems easiest to snapshot to a shared repo on a single volume and back that up16:08
ddieterlyhttps://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html16:08
ddieterly"shared file system repository" seems to be the most straight forward way to do it16:10
*** emildi has quit IRC16:10
m3m0could be a great idea to create a repository plugin for openstack16:10
ddieterlybut, i'm not an expert16:10
reldanI think that probably it may be better to just add a plugin for swift16:11
reldanyes16:11
reldanSomething like that https://github.com/elastic/elasticsearch-cloud-aws#s3-repository only for swift16:11
m3m0All: meetings notes available in real time at: https://etherpad.openstack.org/p/freezer_meetings16:12
ddieterlyso, a plugin for es that stores to swift?16:12
reldanhttps://github.com/wikimedia/search-repository-swift16:12
reldanYes16:12
ddieterlythat's probably what tsv was talking about in the email thread16:12
reldanIt seems that wikimedia already has a swift plugin16:12
m3m0but reldan, does that break the swift, ssh, local storage functionality?16:13
reldanIn that case we just don’t need freezer to do a backup16:13
reldanes will store all data in swift by itself16:13
m3m0but in the case we want ssh?16:14
ddieterlywe would need to schedule and initiate the backup, right?16:14
m3m0should we use 2 approaches for this?16:14
reldanyes, sure we can integrate it with scheduler16:15
reldanPUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true16:15
reldanto execute something like that16:15
reldanOtherwise we will use 1) ElasticSearch backup to save data on disk 2) Use Freezer to store backup on Swift16:16
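The built-in snapshot trigger quoted above (`PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true`) is a plain REST call, so a scheduler job only needs to issue it. A minimal sketch; the base URL is an assumption and `snapshot_request` is a hypothetical helper, not freezer code:

```python
# Sketch: build the PUT request that asks elasticsearch for a built-in
# snapshot, as quoted in the chat. The base URL and names are examples;
# sending the request (urlopen) is left to the scheduler job.
import urllib.request

def snapshot_request(base_url, repo, snapshot, wait=True):
    """Build the PUT request for /_snapshot/<repo>/<snapshot>."""
    url = '{}/_snapshot/{}/{}'.format(base_url.rstrip('/'), repo, snapshot)
    if wait:
        url += '?wait_for_completion=true'
    return urllib.request.Request(url, method='PUT')

req = snapshot_request('http://localhost:9200', 'my_backup', 'snapshot_1')
print(req.get_method(), req.full_url)
# PUT http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true
```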
m3m0ok, so first step is to create a bp and/or spec to review this16:16
m3m0ddieterly what do you think?16:17
ddieterlyso, is the first step to investigate the options: 1) plugin or 2) just use freezer?16:17
m3m0yes and create a spec16:17
reldanAgree16:18
ddieterlyok16:18
ddieterlyso, the first step is investigation?16:18
m3m0yes16:18
ddieterlyok16:19
ddieterlyi'm assuming that pierre can do the config in hlm to backup /var/log and the audit logs?16:20
m3m0we need to create the configuration file and Slashme can deploy it16:20
m3m0and of course we need to test it in a similar environment16:21
ddieterlydo we need to address the other questions that are in the blueprint?16:22
ddieterlyhttps://blueprints.launchpad.net/freezer/+spec/backup-centralized-logging-data16:22
m3m0yes, please feel free16:22
ddieterlywhat I mean is, do any of the topics need to be addressed at this time16:23
ddieterlyso, i'm assuming that you all are very busy, and the last thing you need is more work16:24
ddieterlyso, it looks like i'll be investigating the plugin?16:24
m3m0aaaa yes we have 4 more topics16:25
reldanYes and probably they have special requirements about incremental backups, encryption16:25
daemontool I'm here sorry16:25
m3m0so regarding elasticsearch are we clear in the next step16:25
m3m0?16:25
reldanI don’t know - can we add encryption to plugin16:25
reldanyes16:26
ddieterlyinvestigate plugin is the next step?16:26
daemontool*I think*16:26
daemontooland I might be wrong16:27
daemontoolthe snapshotting feature from es16:27
daemontoolis similar to what we do with lvm16:27
daemontoolbut the es built in snapshotting16:27
daemontooloffer a solution to execute backups of specific index/documents16:27
daemontoolso ddieterly  if you need a quick solution, I see the following options16:28
daemontool1) execute a fs backup + lvm snapshot on each elastic search node16:28
daemontool2) create a job to execute a script (e.g. with curl) that will create a snapshot using the elasticsearch builtin snapshot16:29
daemontooland then there's another job that will backup those files in the file system; we need to understand where es stores those files in the filesystem when the snapshot is triggered16:29
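Option 2) above amounts to two scheduler jobs. A sketch that only builds the commands, nothing is executed; the repository path, container name, and the freezer-agent flags shown are assumptions for illustration:

```python
# Sketch of option 2 as two scheduler jobs: job 1 asks ES for a built-in
# snapshot, job 2 backs up the snapshot repository directory with the
# freezer agent. REPO_PATH must match the `location` the ES repository
# was registered with; all paths and names here are illustrative.

REPO_PATH = '/mnt/es_snapshots'

def snapshot_job_cmd(repo='my_backup', snap='snapshot_1'):
    # Job 1: the curl call mentioned above, waiting for completion.
    url = ('http://localhost:9200/_snapshot/{}/{}'
           '?wait_for_completion=true').format(repo, snap)
    return ['curl', '-XPUT', url]

def fs_backup_job_cmd(container='es_backups'):
    # Job 2: plain filesystem backup of the snapshot repository.
    return ['freezer-agent', '--action', 'backup',
            '--path-to-backup', REPO_PATH,
            '--container', container,
            '--backup-name', 'es-snapshots']

print(' '.join(snapshot_job_cmd()))
print(' '.join(fs_backup_job_cmd()))
```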
ddieterlyi think 1 is not an option because of db consistency concerns16:30
m3m0we can use sessions for that16:30
vannifyou pass the location to es as part of the curl invocation I think16:30
daemontoolddieterly, with mongodb I did that in production in the public cloud in HP many times16:30
daemontooland every time the backup was consistent16:30
vannifI agree about the consistency issues with 1)16:30
daemontoolbut the data wasn't sharded16:30
daemontoolso16:31
daemontoolthere are two possible consistency issues there16:31
vannifs/issue/concern16:31
daemontool1) half index in memory - half data written to the disk, generating data corruption16:31
daemontool2) inconsistencies with data sharded across multiple nodes16:31
daemontooldo you agree with that?16:31
ddieterlyyes16:32
daemontoolok16:32
daemontoolfor 1) I think elasticsearch, like mongo,16:32
daemontoolwrites the journal log file in the same directory where the data is stored16:32
daemontoolso if a snapshot with lvm is created (ro snap, immutable)16:32
daemontoolthe data doesn't change16:32
daemontoolthe backup is executed16:32
daemontooland the data is crash consistent16:33
daemontoolwhich would be like the power suddenly going away on that node16:33
daemontoolanyone see any issue here?16:33
daemontoolso we need to understand if elastic search stores journal logs16:33
daemontoolI think so16:33
daemontoolbut I might be wrong16:33
vannifand that might change16:34
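Option 1) relies on the translog living next to the index files, as reasoned above; if that holds, a per-node backup job could look like this sketch (the data path, container name, and the exact spelling of the snapshot flag are assumptions, not verified freezer usage):

```python
# Sketch of option 1: per-node fs backup taken from an LVM snapshot, so
# journal and index files are frozen at the same instant and the result
# is crash-consistent. Path, container, and flag spelling are assumed.
def es_node_backup_cmd(node_name):
    return ['freezer-agent', '--action', 'backup',
            '--path-to-backup', '/var/lib/elasticsearch',
            '--snapshot',  # take an LVM snapshot before reading
            '--container', 'es_backups',
            '--backup-name', 'es-{}'.format(node_name)]

print(' '.join(es_node_backup_cmd('node1')))
```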
daemontoolall good so far?16:34
ddieterlyi think so16:34
m3m0yes, time is a concern and we have 3 more topics, should we continue with this or move forward?16:34
daemontoolvannif,  in mongodb the data is stored on the same directory16:34
daemontool/var/lib/mongo16:34
daemontoolm3m0,  one sec16:34
daemontoolthis is critical16:34
daemontoolsorry16:34
daemontoolbecause the #1 solution would be easy to implement for your needs16:35
daemontoolas no code needs to be written16:35
daemontoolfor the issue #216:35
daemontoolwe have a feature called job session ddieterly16:35
ddieterlyyes, i like 1 then ;-)16:35
daemontoolI'm just explaining, then you guys decide16:35
daemontool:)16:35
daemontoolon #216:35
daemontooljob session is used to execute backups at nearly the same time on multiple nodes16:36
daemontooland that can be used to solve the shards inconsistencies16:36
daemontoolI think16:36
daemontoolbefore writing code16:36
daemontoolit's worth to test this16:36
daemontoolbecause it's fast16:36
ddieterlyi don't think that that would give any guarantees16:36
daemontooland will help us to improve job session16:36
vannifwell. from what I understand es has 2 ways of writing data: by default it writes data to all the shards before returning a positive ack to the user. that would result in all the shards having the data in their disks or journals16:36
m3m0but I don't know why we want to backup all nodes, are not supposed to be replicas of the master node?16:37
daemontoolm3m0,  it depends,16:37
daemontoolelastic search, to scale and reduce I/O,16:37
ddieterlywe need to back up all shards16:37
daemontoolsplit the data on multiple nodes, called shards16:37
vannifanother way is less secure: write data to the master and return a positive ack to the user. *then* replicate16:37
daemontoolddieterly,  ++16:37
daemontoolI think with job session16:38
daemontoolthe solution can be acceptable16:38
daemontoolbecause we have the same issue anyway16:38
daemontooleven if we use the snapshotting16:38
daemontoolbuilt in feature in es16:38
daemontoolthat needs to be execute16:38
daemontoolat near same time16:38
daemontoolacross all the nodes16:38
ddieterlyi'm not liking that; no guarantees16:38
daemontoolvannif,  please off line if you can explain better the job session to ddieterly16:39
daemontool?16:39
ddieterlydepends on timing16:39
daemontoolddieterly,  yes I agree16:39
daemontoolin helion all the nodes are synced with an ntp node16:39
vannifsure16:39
daemontoolbut yes, you are right16:39
daemontoolno doubt about that, it is best effort16:39
daemontoolddieterly, are you comfortable to test that?16:39
daemontoolor do you want to go with other solutions?16:39
ddieterlyso, #1 seems reasonable if it guarantees consistency16:40
daemontoolI think if the writes of es are atomic16:40
daemontoolthe consistency should be OK16:40
daemontoolbut16:40
daemontool100% consistency cannot be guaranteed16:40
daemontool:(16:40
daemontoolit's a computer science challenge to execute two actions at exactly the same time on multiple nodes16:41
daemontoolnot only our problem16:41
ddieterlythe only way that 100% consistency can be guaranteed seems to be to use the snapshot feature of es16:41
daemontoolok16:41
daemontoolthen my advice would be16:41
daemontoolto write a script16:41
daemontoolthat executes the snapshot with curl16:41
daemontooland then execute the backup of data as fs backup with the agent16:42
daemontoolthat wouldn't require writing code16:42
vannifI think #1 is reasonable, even though it relies on some assumptions. It does not require any new backup-mode anyway. we can leave an elasticsearch-mode for direct interaction with es (i.e. request a snapshot)16:42
ddieterlyif we can snapshot to each node, then we can just back that up with freezer16:42
daemontoolddieterly,  yes16:42
daemontoolthat was #216:42
daemontoolnow, we can decide this even tomorrow16:42
ddieterlyso, we need to investigate whether es can do that16:43
daemontoolyes16:43
ddieterlyif so, that seems the best plan16:43
daemontoolddieterly,  ok16:43
daemontoolare you comfortable? can we move forward?16:43
ddieterlyif not, then see if we can do #116:43
daemontoolplease vannif  if you can explain job session also to ddieterly  offline?16:43
ddieterlyi'll setup a meeting16:43
daemontoolso we can move on the other topoic16:43
daemontoolwe can do a hangout meeting16:43
daemontoolso I can participate16:43
daemontoolas you want16:44
daemontoolor an irc meeting16:44
ddieterlygoogle hangout?16:44
daemontoolyes16:44
ddieterlysure, i'll set that up16:44
daemontoolhangout I thin kis better16:44
daemontoolok16:44
daemontoolty16:44
ddieterlynp16:44
daemontoolm3m0, let's run fast :)16:44
m3m0#topic cinder and nova backups16:44
m3m0what's the status on this?16:45
daemontoolMr reldan16:45
daemontool:)16:45
reldanWe have nova ephemeral disk backup (not incremental), cindernative backup (cannot be done on attached volumes; should be possible from liberty), cinder backup (non-incremental)16:46
m3m0is this working now?16:46
reldanCurrently we cannot make a backup of whole vm with attached volumes16:46
daemontoolreldan,  that what I think we need16:47
daemontoolbecause currently no one is providing a solution for taht16:47
daemontoollike nova vm + attached volumes16:47
reldanYes for nova with epehemeral, No - for nova with bootable cinder volume (can be done throug cinder-backup)16:47
m3m0can we inspect the vm and check if it has attached volumes and then execute nova and/or cinder backups?16:48
daemontoolm3m0,  yes from the API16:48
daemontoolfrom the Nova API16:48
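Inspecting attachments from the Nova API, as suggested above, could drive the choice of backup actions. A sketch: the dict imitates Nova's `os-extended-volumes:volumes_attached` server field, and `plan_backup` is a hypothetical helper, not freezer code:

```python
# Sketch of m3m0's idea: look at a server as returned by the Nova API
# and pick backup actions for the ephemeral disk plus each attached
# volume. plan_backup and the action names are invented for illustration.

def plan_backup(server):
    actions = [('nova', server['id'])]  # ephemeral disk backup
    for vol in server.get('os-extended-volumes:volumes_attached', []):
        actions.append(('cindernative', vol['id']))
    return actions

server = {
    'id': 'vm-1234',
    'os-extended-volumes:volumes_attached': [
        {'id': 'vol-a'}, {'id': 'vol-b'},
    ],
}
print(plan_backup(server))
# [('nova', 'vm-1234'), ('cindernative', 'vol-a'), ('cindernative', 'vol-b')]
```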
daemontoolfrescof, please provide your inputs if any ^^16:48
reldanAnd probably we have problem with auth_url v316:48
m3m0why?16:49
reldanI don’t know. But I saw that it cannot authorize (trying to use wrong http address or something like that)16:50
daemontoolmmhhh16:50
reldanm3m0: We can inspect attached volumes - yes16:50
daemontoolwe should be able to do that16:50
reldanBut there is still a problem with consistency16:50
daemontoolreldan,  at least the orchestration of backing up vms + attached volumes16:51
reldanAny backup/snapshot on attached volume can be corrupted16:51
daemontoolI think it should be  provided16:51
daemontoolwhy?16:51
daemontoolis crash consistent anyway16:51
reldanbecause we use --force to do so16:51
daemontoolit's like backing up /var/lib/mysql with lvm without flushing the in memory data of mysql16:52
daemontoolthere's no other way to do that from outside the vm16:53
daemontoolI think :(16:53
reldanI suppose the same.16:53
m3m0unless we define a new mode in freezer that inspects the architecture of the vm and executes internal and external backups16:53
m3m0accordingly16:54
daemontoolI think16:54
daemontoolthat make sense16:54
daemontoolbut it's up to the user16:54
daemontoolif he want's to use16:54
reldanBut if we want to have a backup that contains (let’s say) 3 cinder volumes, 1 nova instance with information about where we should mount each volume - we should define such a format16:54
m3m0but wait, each volume is a backup right?16:54
daemontoolm3m0,  yes16:55
reldanIf I understand it correctly, the goal is implementing a full backup of an instance with all attached volumes. In this case we should implement ephemeral disk backup, backup of each volume, and metainformation - how to restore it16:55
reldanhow to reassemble instance16:56
reldanIt’s like metabackup of backups16:56
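The "metabackup of backups" reldan describes is essentially a restore manifest. A minimal sketch of what it might record; every field name and value below is invented for illustration, no such format exists in freezer:

```python
# Sketch of the "metabackup of backups" idea: one JSON document recording
# the instance backup plus each volume backup and its mount point, enough
# to reassemble the instance in another installation. All fields invented.
import json

manifest = {
    'instance': {'backup_id': 'nova-bkp-001', 'flavor': 'm1.small'},
    'volumes': [
        {'backup_id': 'cinder-bkp-010', 'device': '/dev/vdb'},
        {'backup_id': 'cinder-bkp-011', 'device': '/dev/vdc'},
    ],
}
print(json.dumps(manifest, indent=2, sort_keys=True))
```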
m3m0the instance should be up and running again, it's not freezer's responsibility to do that16:56
m3m0the jobs for restore should only contain paths16:56
reldanSo if you terminate your instance- you cannot restore it?16:57
m3m0nop16:57
daemontoolmmhhh16:57
m3m0you need somewhere to restore it16:57
daemontoolI think probably we need to keep it a bit simple, or we go through a dark sea16:57
m3m0we can have this discussion offline16:58
reldanLet’s just say we have two openstack installations. If I understand the task correctly - we should be able to create a backup in installation1 and restore the same configuration in installation216:58
daemontoolyes16:58
daemontoolso we can offer disaster recovery capabilities16:59
daemontoollet's do this16:59
m3m0I disagree16:59
reldanIn this case it would be great to create and discuss blue print16:59
daemontoolI'll write a bp for this stuff16:59
daemontooland we can then discuss on that16:59
daemontoolchange it and so on16:59
daemontoolm3m0,  is that ok?16:59
m3m0yes, of course16:59
reldanyes16:59
daemontoolok16:59
m3m0we are running late16:59
daemontoollet's move forward16:59
daemontoolyep17:00
m3m0and we have 2 more topics17:00
m3m0should we do it next week?17:00
m3m0python freezer client and list of backups17:00
daemontoollet's do it for 5 minutes now17:00
daemontoolpython freezerclient17:00
daemontoollet's skip it17:00
daemontoolbut list of backups17:00
daemontoolit's fundamental that we have it in mitaka17:00
daemontoolvannif, ^^17:01
daemontoolis essential...17:01
daemontoolwe need to be able to list backups and restore using the scheduler17:01
m3m0yes, and it's not complicated, the ui has that functionality already17:01
daemontoolretrieving data at least from the api17:01
daemontoolm3m0,  yep17:01
m3m0it's a matter of replicating that17:01
daemontoolvannif, can you do that please?17:01
daemontoolor m3m0  if your workload on the web ui17:01
daemontoolit's not huge17:02
vannifyes17:02
daemontoolvannif,  ok thank you17:02
daemontoolthen we'll move that stuff17:02
daemontoolin the python-freezerclient17:02
daemontoolok17:02
vannifI17:02
daemontoolYou17:02
daemontoollol17:02
vannifI've started to look at how to use cliff for the freezerclient17:02
m3m0I'm very busy but I can do that if vannif is busy as well17:03
daemontoolvannif,  yes but we cannot do that for now17:03
daemontoolvannif,  can do that17:03
daemontoolsorry17:03
*** samuelBartel has quit IRC17:03
daemontoolwe cannot do that17:03
daemontoolfor now17:03
vannifyou mean no cliff ?17:03
daemontoolwe can do that after we split the code17:03
daemontoolyes17:03
vannifoh. ok. it's quicker then :)17:03
m3m0wait wait17:03
m3m0list from scheduler and the split?17:04
daemontoollist from scheduler can be done now17:04
daemontoolpython-freezerclient split code can be done now17:04
daemontoolpython-freezerclient using cliff after the split17:04
m3m0we can split vannif in 217:04
daemontoolhaha17:04
daemontooleven in 317:04
daemontoolwe can cut it in 317:04
m3m0the italian way of doing business :P17:05
daemontooland doing sausages17:05
m3m0ok guys what's the verdict?17:05
daemontoolok17:05
daemontoolso17:05
daemontoolvannif, implement the job listing17:05
daemontoolI do the python-freezerclient split17:05
daemontoolafter that17:05
vannifok17:06
m3m0#agree17:06
daemontoolwe can use cliff on the freezerclient17:06
daemontool++17:06
daemontoolok17:06
daemontoolis that all?17:06
m3m0yes17:06
m3m0for now...17:06
daemontoolI'm going to write17:06
m3m0ok guys thanks to all for your time17:07
daemontoolthe bp for nova and cinder?17:07
daemontoolok17:07
m3m0perfect17:07
m3m0do that daemontool17:07
daemontoolI'll do it m3m017:07
daemontoollol17:07
daemontool:)17:07
daemontoolyou please cut vannif  in 317:07
m3m0#endmeeting17:07
openstackMeeting ended Thu Jan 14 17:07:34 2016 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:07
openstackMinutes:        http://eavesdrop.openstack.org/meetings/openstack_freezer_14_01_2016/2016/openstack_freezer_14_01_2016.2016-01-14-16.01.html17:07
ddieterlyciao!17:07
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/openstack_freezer_14_01_2016/2016/openstack_freezer_14_01_2016.2016-01-14-16.01.txt17:07
openstackLog:            http://eavesdrop.openstack.org/meetings/openstack_freezer_14_01_2016/2016/openstack_freezer_14_01_2016.2016-01-14-16.01.log.html17:07
m3m0too thin17:07
daemontoolddieterly,  Salut :)17:08
ddieterlybonjour17:08
vannifbeware, I'm gonna take tai chi classes, I'll be a freaking shaolin monk soon ;)17:09
vannifare we going to meet with hangouts ? about the sessions ?17:09
ddieterlyyea, what's the best way to set that up?17:09
ddieterlyi tried inviting you via email17:10
m3m0dude I know jiu-jitsu17:12
ddieterlyvannif: what's your gmail account?17:16
vannifI don't see any email message (corporate)17:16
vanniffabrizio.vanni@gmail.com17:16
daemontoolddieterly,  yes but let's send an email17:18
daemontoolor I don't know17:18
daemontoolnow I cannot do that17:18
daemontoolcan we do that tomorrow?17:18
ddieterlysure17:18
daemontoolok I'm going now17:19
*** samuelBartel has joined #openstack-freezer17:20
vannifI won't be available tomorrow. I took the day off, but maybe I can manage to be online around this time ...17:27
ddieterlyok17:29
*** emildi has joined #openstack-freezer17:30
*** reldan has quit IRC17:42
*** reldan has joined #openstack-freezer17:45
ddieterlyvannif: something happened to google chrome17:47
ddieterlyanyway, i'll schedule a mtg for next week17:47
daemontoolddieterly, you need to pay the internet bill :P17:47
daemontoolok17:47
vannif:)17:53
*** reldan has quit IRC18:01
*** daemontool has quit IRC18:01
*** ddieterly has quit IRC18:04
*** reldan has joined #openstack-freezer18:59
*** ddieterly has joined #openstack-freezer20:53
*** pennerc has quit IRC21:04
*** reldan has quit IRC21:19
*** ddieterly has quit IRC23:51

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!