Monday, 2016-02-01

00:15 *** DuncanT has quit IRC
00:16 *** DuncanT has joined #openstack-freezer
01:36 *** arunb has joined #openstack-freezer
01:50 *** EinstCrazy has quit IRC
02:33 *** arunb has quit IRC
03:51 *** EinstCrazy has joined #openstack-freezer
03:58 *** EinstCrazy has quit IRC
05:41 *** frescof_ has joined #openstack-freezer
05:42 *** DuncanT_ has joined #openstack-freezer
05:45 *** jokke__ has joined #openstack-freezer
05:49 *** DuncanT has quit IRC
05:49 *** frescof has quit IRC
05:49 *** jokke_ has quit IRC
05:51 *** DuncanT_ is now known as DuncanT
06:56 *** EinstCrazy has joined #openstack-freezer
07:03 *** EinstCrazy has quit IRC
07:56 *** jokke__ is now known as jokke_
08:45 *** samuelBartel has joined #openstack-freezer
10:02 *** openstackgerrit has quit IRC
10:04 *** EinstCrazy has joined #openstack-freezer
10:25 *** daemontool_ has joined #openstack-freezer
10:28 *** daemontool has quit IRC
10:34 *** jokke_ has quit IRC
10:34 *** jokke_ has joined #openstack-freezer
11:00 *** reldan has joined #openstack-freezer
11:08 *** vannif has joined #openstack-freezer
11:20 *** reldan has quit IRC
11:29 *** reldan has joined #openstack-freezer
11:46 *** reldan has quit IRC
11:54 *** openstackgerrit_ has joined #openstack-freezer
11:55 *** openstackgerrit_ is now known as openstackgerrit
11:59 *** openstackgerrit has quit IRC
12:02 *** reldan has joined #openstack-freezer
12:08 *** openstackgerrit has joined #openstack-freezer
12:16 <reldan> Hi guys, what is new?
12:21 *** vannif has quit IRC
13:10 *** EmilDi has joined #openstack-freezer
14:00 <daemontool_> reldan, hi
14:01 <daemontool_> all good
14:01 <daemontool_> nothing new
14:01 <daemontool_> there was a critical bug
14:01 <daemontool_> that prevented executing restores from swift
14:01 <reldan> With which swiftclient version?
14:01 <daemontool_> 2.7.0
14:01 <daemontool_> now it's fixed
14:01 <reldan> Do we have something in review?
14:02 <reldan> Or probably we should fix something soon?
14:02 <daemontool_> it's already fixed
14:02 <daemontool_> all good
14:02 <daemontool_> it took 1 day and something
14:02 <daemontool_> to troubleshoot and fix it
14:03 <daemontool_> the fix was easy actually
14:03 <reldan> What is our priority now?
14:03 <daemontool_> for the web ui
14:03 <daemontool_> we need to do the branching and a pypi release
14:04 <daemontool_> I think
14:04 <daemontool_> we need to work on improving the volumes and vms backups
14:04 <daemontool_> and the block based incrementals
14:04 <daemontool_> there's an ongoing conversation with SamYaple about that
14:04 <daemontool_> mostly for VMs backup
14:05 <daemontool_> I think we have to provide multiple options anyway, as we always do
14:05 <daemontool_> so probably we need to use rsync based block incrementals even for files
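
[ed. note] The rsync-style block incrementals discussed here boil down to signing each block of a file and shipping only blocks whose signature changed. A minimal Python sketch of the idea, using fixed-offset blocks (a simplification of rsync's true rolling checksum); the function names and block size are illustrative, not Freezer's actual code:

    import hashlib
    import zlib

    BLOCK_SIZE = 4096  # bytes per block; real implementations tune this

    def block_signatures(path):
        """Return {index: (weak, strong)} signatures for each block of a file."""
        sigs = {}
        with open(path, 'rb') as f:
            index = 0
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                weak = zlib.adler32(block)                 # cheap first-pass checksum
                strong = hashlib.sha1(block).hexdigest()   # guards against weak collisions
                sigs[index] = (weak, strong)
                index += 1
        return sigs

    def changed_blocks(path, old_sigs):
        """Yield (index, data) only for blocks that differ from the previous backup."""
        with open(path, 'rb') as f:
            index = 0
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                sig = (zlib.adler32(block), hashlib.sha1(block).hexdigest())
                if old_sigs.get(index) != sig:
                    yield index, block   # only this block goes into the incremental
                index += 1
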
14:05 <daemontool_> what we need to do now is to get the devstack plugin
14:06 <reldan> I see - how about tenant backup? Do we have a document?
14:06 <daemontool_> fully working, and have all our integration tests executing automatically
14:06 <daemontool_> that is related to volume and vm backups too
14:06 <reldan> Agree
14:06 <daemontool_> we have the bp
14:06 <daemontool_> from last time
14:06 <daemontool_> also EinstCrazy and a colleague of his are interested
14:06 <daemontool_> in vm and volume backups
14:07 <daemontool_> so if we agree on the bp for tenant based backup
14:07 <daemontool_> we can start splitting tasks
14:07 <daemontool_> and even work in parallel
14:07 <reldan> Very good!
14:07 <reldan> And what about elasticsearch? Do we have some solution?
14:07 <daemontool_> no solution
14:07 <daemontool_> I mean
14:07 <daemontool_> no alternative solution
14:08 <daemontool_> we need to also support mysql as a backend, I suppose
14:08 <daemontool_> so we can use the service shared db
14:08 <reldan> We had two options - a plugin for es snapshots, and distributed backup of every instance of es
14:08 <daemontool_> do you mean es as the backend db for the api?
14:09 <daemontool_> ah ok
14:09 <daemontool_> you are referring to es backup
14:09 <reldan> yes
14:09 <daemontool_> I'd like it if someone could test
14:09 <reldan> I remember that it was on our agenda
14:09 <daemontool_> whether the current lvm + job session backup works
14:09 <daemontool_> yes
14:10 <daemontool_> I think Slashme could test that
14:11 <daemontool_> as he has some infrastructure available
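
[ed. note] The "plugin for es snapshots" option reldan mentions maps onto Elasticsearch's built-in snapshot/restore REST API. A hedged sketch with python-requests; the endpoint, repository name, and path are invented for illustration:

    import requests

    ES = 'http://localhost:9200'  # hypothetical ES endpoint

    # Register a shared-filesystem snapshot repository
    # (the location must be listed in path.repo on every ES node)
    requests.put(ES + '/_snapshot/freezer_repo',
                 json={'type': 'fs', 'settings': {'location': '/var/backups/es'}})

    # Take a snapshot and block until it completes
    requests.put(ES + '/_snapshot/freezer_repo/snap_1',
                 params={'wait_for_completion': 'true'})

    # Restore it later
    requests.post(ES + '/_snapshot/freezer_repo/snap_1/_restore')
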
14:11 <daemontool_> reldan, when you can, please have a conversation with SamYaple, he is proposing an alternative approach for block based vms backup
14:11 <reldan> I see, thank you. So nothing very urgent at the moment. But let me know if we have something
14:12 <reldan> Yes, sure
14:12 <daemontool_> I think the integration with devstack
14:12 <daemontool_> is quite urgent
14:12 <daemontool_> there are at least 2 big companies that will be deploying freezer in production within the next 4 weeks
14:12 <daemontool_> the sooner we have the integration tests executed automatically by the dsvm gate job, the better
14:13 <daemontool_> reldan, I think we need to move forward with the rsync based incrementals also
14:13 <reldan> Who is doing the integration with devstack right now?
14:13 <daemontool_> me and vannif, but mostly me
14:14 <daemontool_> I'll restart doing that as soon as the branching and release of the web ui are ok
14:14 <reldan> I see, do you know how to split it?
14:14 <daemontool_> it shouldn't take more than a couple of hours
14:14 <daemontool_> split what?
14:14 <reldan> Split the devstack integration. Or I can try to check rsync?
14:16 <daemontool_> I think vannif can finish the devstack integration, as he has been working on that since the beginning
14:16 <daemontool_> I think it would be good if you start on the tenant based backup
14:16 <reldan> I see, let me check rsync then
14:16 <reldan> I can
14:16 <reldan> yes, let me check the blueprints first then
14:16 <daemontool_> ok
14:16 <reldan> Thank you!
14:18 <daemontool_> https://blueprints.launchpad.net/freezer/+spec/tenant-backup
14:18 <daemontool_> also frescof_ has been working on an interesting feature for disaster recovery
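
[ed. note] The tenant-backup blueprint linked above is about backing up everything a tenant owns. A sketch of the obvious first step, enumerating the tenant's instances and volumes with the official clients; the credentials and endpoint are placeholders, and the real blueprint may structure this differently:

    from keystoneauth1 import loading, session
    from novaclient import client as nova_client
    from cinderclient import client as cinder_client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(auth_url='http://keystone:5000/v3',  # placeholder
                                    username='backup', password='secret',
                                    project_name='demo',
                                    user_domain_name='Default',
                                    project_domain_name='Default')
    sess = session.Session(auth=auth)

    nova = nova_client.Client('2.1', session=sess)
    cinder = cinder_client.Client('2', session=sess)

    # A tenant-level backup would iterate over every resource the tenant owns
    for server in nova.servers.list():
        print('instance to back up:', server.id, server.name)
    for volume in cinder.volumes.list():
        print('volume to back up:', volume.id, volume.name)
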
14:20 *** reldan has quit IRC
14:42 *** vannif has joined #openstack-freezer
14:47 *** daemontool has joined #openstack-freezer
14:47 *** daemontool_ has quit IRC
14:53 *** daemontool has quit IRC
14:59 *** daemontool has joined #openstack-freezer
15:52 *** daemontool_ has joined #openstack-freezer
15:54 *** daemontool has quit IRC
15:59 *** dschroeder has joined #openstack-freezer
16:03 *** dschroeder has quit IRC
16:04 *** dschroeder has joined #openstack-freezer
16:04 *** dschroeder has quit IRC
16:12 *** pp767df has joined #openstack-freezer
16:13 *** pp767df has quit IRC
16:13 *** reldan has joined #openstack-freezer
16:14 <daemontool_> please review https://review.openstack.org/#/c/270251/ and https://review.openstack.org/#/c/270315/
16:14 <daemontool_> vannif, can you review this please? https://review.openstack.org/#/c/266552/
16:18 *** Slashme has quit IRC
16:19 *** epheo has quit IRC
16:20 <reldan> SamYaple: Hi, how are you! My name is Eldar!
16:21 <reldan> SamYaple: I was looking for any documentation for Ekko, but unfortunately the link in the readme is dead. Do you have any other link?
16:21 *** EmilDi has quit IRC
16:23 <daemontool_> EinstCrazy, did you get the chance to take a look at https://blueprints.launchpad.net/freezer/+spec/tenant-backup ?
16:24 <daemontool_> we should start working on that based on some agreement, if you are still interested
16:31 *** dschroeder has joined #openstack-freezer
16:42 *** daemontool has joined #openstack-freezer
16:43 <openstackgerrit> Merged openstack/freezer: Change Freezer repo_url from stackforge to openstack  https://review.openstack.org/270251
16:44 *** daemontool_ has quit IRC
16:51 <SamYaple> reldan: not as of yet. the ekko team is meeting feb 9th alongside the kolla midcycle (because our teams overlap) and documentation is the top priority
16:53 <reldan> SamYaple: Thank you! I actually don't need full documentation; just 5-6 sentences with a description and the goal would be a good start.
16:54 <SamYaple> reldan: that _should_ be there, if not I'll add something quickly
16:55 <reldan> SamYaple: https://github.com/openstack/ekko
16:56 <SamYaple> reldan: that's the one
16:56 <reldan> SamYaple: The documentation link is dead for me.
16:56 <SamYaple> we also have #openstack-ekko
16:56 <SamYaple> yeah, we can't actually publish to docs.openstack.org yet
16:56 <SamYaple> that link was from the cookiecutter repo file
16:56 <daemontool> ok
16:56 <daemontool> SamYaple, did you have a conversation with your team
16:57 <reldan> SamYaple: Thanks!
16:57 <daemontool> about whether you guys want to go forward alone or converge?
16:57 <SamYaple> daemontool: that decision won't be made for some time
16:57 <SamYaple> we'll have a lot more info after the midcycle
16:57 <SamYaple> but I think what we need from freezer is that plugin architecture to even approach the issue
16:59 <reldan> SamYaple: Could you please give me a little bit more information about what you mean by the plugin architecture? I was on vacation for a couple of weeks and it seems that I missed something.
16:59 *** EinstCrazy has quit IRC
17:00 <SamYaple> reldan: daemontool had mentioned a plugin architecture for the freezer-agent that would have different types (database, file, ekko, etc)
17:01 <daemontool> SamYaple, honestly, I'm still trying to figure out whether the solution you are proposing makes sense or not, with all due respect
17:01 <daemontool> SamYaple, I'm not sure
17:02 <daemontool> how to solve the inconsistency between the nova db and the changes in the hypervisor
17:02 <daemontool> that's important
17:02 <SamYaple> daemontool: there is no inconsistency at all
17:02 <daemontool> ok
17:02 <daemontool> so
17:02 <reldan> SamYaple: I see now. But if I understand it correctly, you have an additional agent per compute node. So it should provide some API, and freezer should be able to support a plugin that connects to this API, is that right?
17:03 <SamYaple> reldan: well the idea was for ekko to reuse freezer-agent|scheduler but run it on the compute node
17:03 <SamYaple> so freezer-agent would have an ekko plugin
17:04 *** EinstCrazy has joined #openstack-freezer
17:05 <reldan> SamYaple: I see. So from freezer you need 1) Remote code invocation 2) Scheduling 3) Status of remote invocation (success, fail). Am I correct?
17:06 <reldan> 4) Plugin architecture for injecting code that works with the hypervisor
17:08 <SamYaple> I think it will be more complicated than that truthfully, reldan
17:09 <SamYaple> but I don't have the working code to show why it is
17:09 <SamYaple> for example, to do what I'm doing I need a retention database (redis or similar) so I can do retention on existing data
17:10 <SamYaple> The scheduling should probably be reusable, but I haven't actually looked at freezer's scheduling code
17:11 <reldan> SamYaple: I see now. Thank you for the explanation.
17:12 <daemontool> SamYaple, what I was mentioning before was something like: a user executes a backup of a VM in a region, then he wants to restore the same VM in another region
17:12 <daemontool> we try to follow this kind of approach with freezer
17:13 <daemontool> what puzzles me is that to use ekko we should find a way
17:13 <daemontool> to extract all the metadata from the nova db (preferably from the nova api)
17:13 <daemontool> include that data in the backup, along with the vm data
17:14 <daemontool> and store it in a media storage (i.e. local fs, ssh or swift, the ones currently supported by freezer)
17:14 <daemontool> then when we want to restore
17:14 <daemontool> we can recreate the metadata in the nova db (preferably using the nova api)
17:14 <daemontool> and restore the vm data by interacting with the hypervisor
17:15 <daemontool> if we could do this, it would be a really cool feature
17:15 <daemontool> do you agree?
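
[ed. note] daemontool's proposal amounts to serializing the nova-side description of an instance next to its disk data so both can be recreated elsewhere. A hedged sketch of the capture step with python-novaclient (reusing a keystoneauth session named `sess`; the helper and the field selection are illustrative only, not an agreed design):

    import json
    from novaclient import client as nova_client

    nova = nova_client.Client('2.1', session=sess)  # sess: keystoneauth session

    def capture_vm_metadata(instance_uuid):
        """Collect the nova-side description of a VM so it can be stored
        alongside the disk data and recreated on another cloud."""
        server = nova.servers.get(instance_uuid)
        meta = {
            'name': server.name,
            'flavor': server.flavor,        # {'id': ...}
            'networks': server.networks,    # {network_name: [addresses]}
            'metadata': server.metadata,    # user-set key/value pairs
            'security_groups': getattr(server, 'security_groups', []),
        }
        return json.dumps(meta)
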
17:15 <SamYaple> daemontool: no. the "restore" in that scenario is just launching a new instance based on a glance image
17:16 <daemontool> ok
17:17 <daemontool> reldan, so to do that we probably need to add glance as one more media storage?
17:18 <daemontool> SamYaple, I'm asking this to understand what we need to do concretely to provide the plugin architecture to you
17:18 <daemontool> then you and your team can take the direction you want
17:18 <reldan> daemontool: yes, probably we can implement it with the new plugin architecture.
17:19 <SamYaple> daemontool: I'm just as uncertain of what that means as well
17:19 <SamYaple> you mentioned plans for a plugin architecture already in the works
17:19 <daemontool> so if you want to execute your backup-agent
17:19 <daemontool> from the scheduler
17:19 <daemontool> you can do it already
17:19 <daemontool> but
17:19 <daemontool> just to give you an example
17:19 <daemontool> we support multiple media storages
17:20 <daemontool> and also in parallel
17:20 <SamYaple> well storage is different. storage would never be in glance
17:20 <daemontool> so you can upload your backed up data simultaneously to a remote ssh node + swift + local fs + another swift
17:20 <daemontool> glance in most cases will store it on Swift
17:20 <daemontool> but
17:20 <daemontool> I'd like to understand
17:20 <SamYaple> restore would reassemble the bits from storage and send it to glance (regardless of what glance is backed by)
17:21 <daemontool> whether ekko can leverage that backup data upload to multiple media storages
17:21 <daemontool> in parallel
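
[ed. note] A sketch of the parallel multi-storage upload daemontool describes, using only the standard library; the `storages` objects and their `upload`/`name` interface are stand-ins for whatever the real storage drivers expose:

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def upload_everywhere(storages, backup_path):
        """Push the same backup to every configured storage in parallel."""
        with ThreadPoolExecutor(max_workers=len(storages) or 1) as pool:
            futures = {pool.submit(s.upload, backup_path): s for s in storages}
            for future in as_completed(futures):
                storage = futures[future]
                try:
                    future.result()
                    print('uploaded to', storage.name)
                except Exception as err:  # one failing target must not kill the rest
                    print('upload to', storage.name, 'failed:', err)
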
17:21 <daemontool> ok
17:21 <daemontool> which storage do you mean there?
17:21 <daemontool> swift?
17:22 <SamYaple> storage should be object storage, s3/swift/radosgw, but for small scale testing local filesystem storage is also an option
17:22 <daemontool> so what happens if swift or keystone are not available for some reason... we need to provide a solution that enables the user to restore the data if other services fail
17:22 <daemontool> SamYaple, ok
17:22 <SamYaple> daemontool: highly disagree
17:22 <SamYaple> nothing works when keystone is down
17:22 <SamYaple> data is gone when swift is gone
17:23 <daemontool> well, that's the issue we also have to solve
17:23 <SamYaple> from a purely recovery standpoint, it is possible to reassemble all of the backups with ekko outside of the openstack environment
17:23 <SamYaple> but that's full loss of the openstack environment
17:23 <daemontool> ok
17:24 <daemontool> let's say a user redeploys a new openstack instance in an alternative location
17:24 <daemontool> and wants to restore the data
17:24 <daemontool> we'd like to provide something that allows him/her to do that
17:24 <daemontool> so
17:24 <SamYaple> it's possible to have a tool that can read all of the objects in object-storage and rebuild the ekko database, yes
17:24 <daemontool> for the VMs
17:24 <SamYaple> but it's expensive to do so
17:24 <daemontool> ok
17:24 <SamYaple> lots of queries to objects
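
[ed. note] A sketch of the rebuild-from-object-storage recovery path SamYaple is describing, with python-swiftclient; the container name and index layout are invented. The paginated listing loop is exactly where the "lots of queries" cost comes from:

    from swiftclient import client as swift

    def rebuild_index(conn, container='ekko-backups'):
        """Rebuild a local backup index purely from object listings.
        `conn` is a swiftclient Connection; the container name is illustrative."""
        index = {}
        marker = None
        while True:
            _headers, objects = conn.get_container(container, marker=marker)
            if not objects:
                break
            for obj in objects:
                index[obj['name']] = {'bytes': obj['bytes'], 'etag': obj['hash']}
            marker = objects[-1]['name']  # page through the full listing
        return index
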
17:24 <daemontool> expensive is a different thing
17:25 <daemontool> so for VMs
17:25 <daemontool> when you restore
17:25 <daemontool> on another openstack deployment
17:25 <daemontool> you also need to restore the metadata of that vm,
17:25 <daemontool> do you agree with that?
17:25 <SamYaple> no
17:25 <SamYaple> that seems to be a non-openstack way of doing it
17:26 <SamYaple> an instance can only exist in one spot anyway (unique uuid)
17:26 <SamYaple> restoring it to another location is not in the spirit of openstack in my opinion
17:26 <daemontool> what? restoring a vm that belongs to the same tenant with the same network settings onto another openstack platform?
17:26 *** samuelBartel has quit IRC
17:26 <daemontool> that is a non-openstack way? :)
17:26 <daemontool> lol
17:27 <daemontool> but anyway
17:27 <daemontool> did you get the problem we're solving?
17:27 *** pbourke has joined #openstack-freezer
17:27 <daemontool> to some degree we already offer that
17:27 <SamYaple> restoring an identical vm with the same metadata? yes
17:27 <SamYaple> at that point it could exist in two places at once
17:28 <SamYaple> you can't share a lot of things (depending on your definition of "openstack platform") like floating ips
17:28 <SamYaple> you don't want to dup mac addresses
17:29 <daemontool> I've seen many environments using anycast for HA, but that's not what we are discussing here
17:29 <SamYaple> launching a _new_ instance based on a glance image you have restored, however, is very much an openstack way of doing it
17:29 <daemontool> SamYaple, I agree
17:29 <daemontool> so your answer after all this is: you do not need to back up any vm metadata
17:29 <daemontool> as the user just restores it from glance as an image
17:29 <daemontool> right?
17:30 <daemontool> it's ok if it is that, I'm just trying to understand the solution you are proposing
17:30 <SamYaple> The instance uuid would be tracked in the ekko database, _not_ in the actual backup manifest
17:30 <SamYaple> beyond the uuid, I'm not sure anything more is needed
17:31 <SamYaple> restoring back to a running instance from which the backup was _taken_ should be possible
17:31 <daemontool> ok, so do you have a plan, for instance, if you want to restore a vm and that vm was destroyed?
17:31 <SamYaple> but assuming that instance is fully gone, the only plan currently is a glance image
17:31 <SamYaple> restore from a new glance image
17:31 <daemontool> ok
17:31 <SamYaple> there is no guarantee you can get the same ips anyway
17:32 <SamYaple> neutron would have released those
17:32 <daemontool> I agree
17:32 <daemontool> yes
17:32 <SamYaple> glance image or cinder volume (so you can just mount it to an existing instance for more manual recovery)
17:32 <SamYaple> but even direct to cinder is needed since you can do glance-cinder
17:32 <SamYaple> glance->cinder
17:33 <SamYaple> direct to cinder isn't* needed
17:34 <daemontool> ok
17:35 *** reldan has quit IRC
17:38 <daemontool> SamYaple, one sec, when reldan is back I'll ask you a couple more questions
17:41 <SamYaple> ok
17:50 *** arunb has joined #openstack-freezer
17:56 <daemontool> SamYaple, I think the day is finished for reldan
17:56 <daemontool> one question
17:56 <SamYaple> ok
17:56 <daemontool> how do you plan to upload the data to glance?
17:56 <daemontool> using the backup-agent on the compute node?
17:56 <SamYaple> no, it would most likely be a separate restore-type agent
17:57 <SamYaple> might combine it with the retention agent and call it something different
17:57 <daemontool> what would the workflow be like?
17:57 <SamYaple> for restore?
17:58 <daemontool> yes
17:58 <daemontool> if you need to think about it more, np
17:58 *** epheo has joined #openstack-freezer
17:58 <SamYaple> api call to restore backupset + incremental (possibly a name for the restored glance image), send info to restore agent, restore agent pulls down data and sends it to glance
17:59 *** Slashme has joined #openstack-freezer
17:59 <SamYaple> pretty basic
17:59 <SamYaple> restore agent would decompress/decrypt
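
[ed. note] The last hop of that restore workflow, publishing the reassembled image to glance, looks roughly like this with python-glanceclient's v2 API; the helper function and its arguments are illustrative:

    from glanceclient import Client

    def publish_restored_image(sess, image_path, name):
        """Upload an already decompressed/decrypted disk image to glance.
        `sess` is a keystoneauth session; the other names are placeholders."""
        glance = Client('2', session=sess)
        image = glance.images.create(name=name,
                                     disk_format='qcow2',
                                     container_format='bare')
        with open(image_path, 'rb') as data:
            glance.images.upload(image.id, data)
        return image.id  # booting a _new_ instance from this image is the "restore"
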
17:59 <daemontool> the data will be retrieved from the object storage, right?
17:59 *** pbourke has quit IRC
18:00 *** pbourke has joined #openstack-freezer
18:00 <SamYaple> or whatever backend (we have a local storage driver for testing)
18:00 <SamYaple> all the drivers for backend/compression/encryption are stevedorized
18:00 <SamYaple> very very pluggable
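
[ed. note] "Stevedorized" means the drivers are discovered through stevedore entry points, OpenStack's standard plugin mechanism. A generic loading sketch; the namespace, driver names, and the `put` interface below are invented, not Ekko's actual entry points:

    from stevedore import driver

    # Load whichever storage backend the configuration names
    storage = driver.DriverManager(
        namespace='ekko.storage_drivers',   # hypothetical entry-point namespace
        name='swift',                       # could be 's3', 'radosgw', 'local', ...
        invoke_on_load=True,
        invoke_kwds={'container': 'backups'},
    ).driver

    storage.put('manifest-0001', b'...')    # whatever interface the drivers share
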
18:01 <daemontool> ok
18:03 *** Slashme has quit IRC
18:04 <daemontool> I don't think it's that different from what we are doing today
18:05 <daemontool> the main difference is how the incremental blocks are computed
18:05 <daemontool> but we do not have incrementals for vms backups today anyway
18:05 <daemontool> so the sentence "I don't think it's that different from what we are doing today" is not entirely correct
18:06 <SamYaple> right, but you guys don't have real retention. that's a massive thing that has to be in the architecture from the ground up
18:06 <SamYaple> that's a big worry for integration
18:06 *** epheo has quit IRC
18:06 <SamYaple> since, as it is right now, I'll have to run my own retention agent and mechanisms
18:07 <daemontool> SamYaple, the thing is that you have to reach a compromise if you want to have a solution suitable for infrastructure backup
18:07 <daemontool> and baas
18:07 <daemontool> from my perspective
18:07 *** Slashme has joined #openstack-freezer
18:08 <daemontool> if we have a patch that will not remove a backup incremental session within a specified time frame
18:08 <daemontool> we provide retention
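
[ed. note] The time-frame retention daemontool describes, sketched in plain Python. It also shows the limitation SamYaple objects to next: without the ability to slice a chain, one recent incremental keeps the whole chain (full backup plus all its increments) alive. The data structure is illustrative only:

    from datetime import datetime, timedelta

    def prune(chains, retention_days=30, now=None):
        """Keep only incremental chains whose newest backup is inside the
        retention window. `chains` is a list of lists of datetimes
        (full backup first, then its incrementals)."""
        now = now or datetime.utcnow()
        cutoff = now - timedelta(days=retention_days)
        # a chain can only be dropped as a whole: slicing it would orphan
        # the incrementals that depend on the removed full backup
        return [chain for chain in chains if max(chain) >= cutoff]
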
18:09 <SamYaple> I disagree with you on that point
18:09 *** epheo has joined #openstack-freezer
18:09 <daemontool> and that was pretty much the requirement needed in every environment I've worked in over the last 15 years
18:09 <daemontool> only for public cloud is it debatable
18:09 <daemontool> but for vms backup
18:09 <daemontool> computing the incrementals from the hypervisor
18:10 <daemontool> is a better approach
18:10 <daemontool> I agree with you on that
18:10 <SamYaple> Without being able to slice the chain, I disagree with calling that retention in this day and age
18:11 <SamYaple> at least for imaging
18:11 <daemontool> yes, it's not space efficient, I agree with you
18:12 <SamYaple> forget space efficient
18:12 <SamYaple> it won't work with long chains
18:12 <daemontool> like what
18:12 <daemontool> it has worked in production already
18:12 <SamYaple> no, no it hasn't
18:12 <daemontool> lol
18:13 <daemontool> ok
18:13 <daemontool> it hasn't
18:13 <SamYaple> StorageCraft is arguably the largest block-based backup vendor out there (though Veeam may have overtaken them) and they can't do more than 500 incrementals
18:13 <SamYaple> neither can Veeam
18:13 <SamYaple> but file backup is very different from block backup due to the size of the data
18:15 *** epheo has quit IRC
18:16 *** Slashme has quit IRC
18:17 <daemontool> ok, interesting conversation, I have to go now, let's talk again on Thursday, and please also talk with reldan
18:23 *** reldan has joined #openstack-freezer
18:26 *** zahari has joined #openstack-freezer
20:23 *** reldan has quit IRC
20:25 *** reldan has joined #openstack-freezer
21:23 *** reldan has quit IRC
21:25 *** zahari has quit IRC
21:32 *** reldan has joined #openstack-freezer
21:54 *** reldan has quit IRC
21:58 *** reldan has joined #openstack-freezer
22:39 *** daemontool has quit IRC
23:33 *** reldan has quit IRC

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!