Wednesday, 2015-12-23

*** dschroeder has quit IRC00:01
*** daemontool has quit IRC00:08
*** daemontool has joined #openstack-freezer00:08
daemontoolvannif, ++ python update.py /opt/stack/freezer-api  00:30
daemontoolVersion change for: falcon, jsonschema, keystonemiddleware, oslo.config, oslo.i18n  00:30
daemontoolUpdated /opt/stack/freezer-api/requirements.txt:  00:30
daemontool    falcon>=0.1.6                  ->   falcon>=0.1.6,<0.2.0  00:30
daemontool    jsonschema>=2.0.0,<3.0.0,!=2.5 ->   jsonschema>=2.0.0,<3.0.0  00:30
daemontool    keystonemiddleware>=2.0.0,!=2. ->   keystonemiddleware>=1.5.0,<1.6.0  00:30
daemontool    oslo.config>=2.3.0 # Apache-2. ->   oslo.config>=1.9.3,<1.10.0  # Apache-2.0  00:30
daemontool    oslo.i18n>=1.5.0 # Apache-2.0  ->   oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0  00:30
daemontool'pytest' is not in global-requirements.txt  00:30
daemontool'pytest-cov' is not in global-requirements.txt  00:30
daemontool'pytest-xdist' is not in global-requirements.txt  00:30
daemontoolVersion change for: coverage, flake8, mock  00:30
daemontoolUpdated /opt/stack/freezer-api/test-requirements.txt:  00:30
daemontool    coverage                       ->   coverage>=3.6  00:30
daemontool    flake8>=2.2.4,<=2.4.1          ->   flake8==2.2.4  00:31
daemontool    mock>=1.2                      ->   mock>=1.0,<1.1.0  00:31
daemontoolTraceback (most recent call last):  00:31
daemontool  File "update.py", line 270, in <module>  00:31
daemontool    main()  00:31
daemontool  File "update.py", line 253, in main  00:31
daemontool    stdout, options.verbose, non_std_reqs)  00:31
daemontool  File "update.py", line 266, in _do_main  00:31
daemontool    project.write(proj, actions, stdout=stdout, verbose=verbose)  00:31
daemontool  File "/opt/stack/requirements/openstack_requirements/project.py", line 180, in write  00:31
daemontool    raise Exception("Error occured processing %s" % (project['root']))  00:31
daemontoolException: Error occured processing /opt/stack/freezer-api  00:31
*** zhonghua-lee has quit IRC01:55
*** zhonghua-lee has joined #openstack-freezer01:56
*** DuncanT has quit IRC02:01
*** c00281451 has joined #openstack-freezer02:02
*** DuncanT has joined #openstack-freezer02:04
*** c00281451_ has joined #openstack-freezer03:07
*** c00281451 has quit IRC03:09
*** memogarcia has quit IRC04:22
*** memogarcia has joined #openstack-freezer06:21
*** memogarcia has quit IRC06:25
*** openstackgerrit has quit IRC08:47
*** openstackgerrit has joined #openstack-freezer08:47
*** reldan has joined #openstack-freezer08:48
Slashme@daemontool You were looking for this a few days ago: https://www.openstack.org/summit-login/login?BackURL=/summit/austin-2016/call-for-speakers/&awesm=awe.sm_aNCPT  08:50
*** reldan has quit IRC09:31
*** reldan has joined #openstack-freezer09:50
*** reldan has quit IRC09:56
daemontoolSlashme,  ty, we should think about the talk09:58
*** daemontool has quit IRC10:16
*** daemontool has joined #openstack-freezer10:26
*** reldan has joined #openstack-freezer10:30
daemontoolvannif,  we need to totally migrate freezer-api to testr10:55
openstackgerritFausto Marzi proposed openstack/freezer-api: Switch to testr from pytest  https://review.openstack.org/260950  10:56
vannifok. it seems you found me a hobby for the Christmas holidays ;)10:58
daemontoolhaha11:00
daemontoolvannif,  that's11:00
daemontoolthat commit11:00
daemontoolshould fix that11:00
daemontoolbut there's an issue11:01
daemontoolhttp://logs.openstack.org/50/260950/1/check/gate-freezer-api-python27/d483519/console.html  11:01
daemontoolwell, there are multiple issues in the ci11:02
daemontoolbut our issue is reproducible if you use tox -r -v locally on that patchset11:03
daemontooland on the other hand11:03
daemontoolwith devstack11:03
daemontoolas we still use pytest11:03
daemontoolat build time, it returns an error11:04
daemontoolas pytest is not in the global requirements11:04
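
For reference, the boilerplate needed for the testr switch discussed above is small. A minimal sketch of a .testr.conf, assuming the standard OpenStack layout of the time (the discover path and environment variables are assumptions, not taken from the actual patchset):

    [DEFAULT]
    test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
                 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
                 ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
    test_id_option=--load-list $IDFILE
    test_list_option=--list

tox.ini would then invoke it via "python setup.py testr --slowest --testr-args='{posargs}'" instead of calling pytest.
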
daemontoolvannif, Slashme  are we going to keep using apscheduler and pepdaemon?11:04
daemontoolif yes it's ok, but I need to add them to the global requirements of mitaka11:05
daemontoolas soon as possible11:05
*** emildi has quit IRC11:05
daemontoolreldan,  ^^11:05
reldanHi11:06
reldanAre you talking about pytest?11:06
daemontoolreldan,  I'm removing pytest from freezer-api11:06
daemontoolbut it's quite easy as it is not used in the test code11:06
daemontoolwe only need to add the boilerplate to use testr/subunit11:07
daemontoolvannif,  is looking at that11:07
reldanOh, great. How can I help?11:07
daemontoolwhat I'd like to understand now is about11:07
daemontoolapscheduler and pepdaemon11:07
daemontoolcause they are not in the global requirements11:07
daemontoolso if we keep using them11:07
daemontoolwe have to add them to the global-requirements.txt of Mitaka11:08
daemontoolotherwise if we build freezer in devstack automatically11:08
daemontoolit will throw an error11:08
daemontoolbecause those modules are not part of the global-requirements11:08
vannifpepdaemon can be replaced quite easily. but the scheduler would take more time. do you know of any other library already listed in OS requirements which provides similar functionality?11:09
daemontoolszaher,  what about https://review.openstack.org/#/c/239905/ ?  11:09
daemontoolI don't know11:09
daemontoolbut if we need them11:09
daemontoolit's not a big deal11:09
daemontoolwe just add it to the globals11:09
daemontooland we are grand11:09
*** emildi has joined #openstack-freezer11:09
vannifalso, what is better: reinvent the wheel, or add a library to the requirements ?11:09
daemontooladd a library11:10
daemontoolto req11:10
vannifthen add the libraries ^^11:10
daemontoolok11:10
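
If the two libraries stay, the follow-up is a change to openstack/requirements adding lines like these to global-requirements.txt (a sketch only; the exact PyPI distribution names and any version pins would be settled in that review):

    apscheduler     # in-process job scheduling, used by freezer-scheduler
    pepdaemon       # daemonization helper discussed above
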
daemontoolnow, another issue is that freezer-web-ui currently does not build on devstack11:10
daemontoolbut this is not a high priority issue now11:11
daemontooltoday I'd like to merge the parallels11:11
daemontoolI just have to finish testing 2 ssh and 2 swift11:11
daemontooland I'll give +211:11
daemontoolreldan, ^^11:12
reldanAmazing )11:12
daemontooland please11:12
reldanThank you!11:12
daemontoolreview this https://review.openstack.org/#/c/259905/  11:12
daemontoolvannif,  Slashme  reldan  szaher frescof11:13
reldanDoing it11:13
daemontoolI can't recall for what topic we were needing a bp, other than tenant-based backups.... can anyone help me?11:14
daemontoolreldan,  was it the metadata for something?11:14
reldanI remember that it should be multiregion tenant backup11:15
reldanhttp://eavesdrop.openstack.org/irclogs/%23openstack-freezer/%23openstack-freezer.2015-12-17.log.html  11:18
reldanYes, let’s imagine that we have cindernative in two regions11:18
reldanso we should have some format of backup that describes all cindernatives in different regions11:18
daemontoolreldan,  that's right11:20
daemontoolso it's similar to what is written + managing multiple openstack client objects?11:21
daemontoolafter all, multi region is just two or more client objects11:22
daemontoolis my assumption more or less correct?11:22
reldanYou know it is more about tenant backup11:23
reldanI have read your blueprint, it is good. But something is not clear to me11:23
reldanFor example - public IP11:23
daemontoolok, please also write it there11:23
daemontoolI think the public IP probably has to be the same11:24
daemontoolbecause we backup something in a specific state with those settings11:24
daemontooland then we restore that11:24
reldanYes, but the stored IP can already be taken by someone else11:24
reldanAnd we cannot assign it11:24
daemontoolwell that can happen anyway11:25
reldanOr we are trying to restore our tenant in a different installation11:25
daemontoolregardless of the backup I think11:25
daemontoolyes11:25
daemontoolI see that11:25
daemontoolprobably in the first instance11:25
daemontoolthe ip is the same11:25
SlashmeI think as much as possible, the restore should be idempotent, so if there is a problem, you can correct it and re-try the restore.11:25
reldanYes, but what should we do - try to assign new ones, or not do anything?11:26
daemontoolat first instance we should say it in the logs, api, web ui and do nothing11:26
daemontoolat least this is my opinion11:27
daemontoolI think this is a bit different than backup and restore11:27
daemontooleven though it is brilliant11:28
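
A minimal sketch of that "report it and do nothing" behaviour for floating IPs, assuming python-neutronclient and hypothetical helper names:

    from neutronclient.v2_0 import client as neutron_client

    # neutron = neutron_client.Client(session=keystone_session)  # assumed

    def floating_ip_restorable(neutron, ip_address):
        # True if the IP recorded in the backup exists in the target
        # cloud and is currently unattached (port_id is None)
        fips = neutron.list_floatingips(
            floating_ip_address=ip_address)['floatingips']
        return bool(fips) and fips[0]['port_id'] is None

    # restore flow sketch: if not floating_ip_restorable(neutron, ip),
    # log a warning, surface it via the api / web ui, and carry on
    # without assigning the address
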
reldanOk, fine. The second question - if we are doing cindernative, our backup is stored in a swift container in the same region. And it will be impossible to restore if we lose our swift in that region.11:28
daemontoolcause now we are not thinking about restoring what we back up11:28
daemontoolbut we are thinking about how to restore the data and change settings11:28
daemontoolso it would be: how do we want the service to look after we restore our data11:28
daemontoolright?11:29
reldanYes, you are right. How it should be stored11:29
daemontoolit's interesting and useful, but I'd say, let's first focus on restoring the exact same thing11:29
daemontoolthen we can find a way to define what to change pre/post backup11:29
daemontoolit's a complex matter11:30
reldanOr let’s say swift backup (I mean backup of containers). Are we going to compress and encrypt it?11:30
daemontoolreldan,  I agree with your second question11:30
daemontoolthat's data availability11:30
daemontoolI think we should yes11:30
reldanBut it can be petabytes of data in billions of files and we have no fast mechanism of doing diff11:31
daemontoolI agree that's a good point and a different challenge11:31
reldanAnd if we would like to implement backups on a per-day basis - it will be just crazy11:31
daemontoolanyway11:31
daemontoolI agree11:31
daemontoolbut that's tenants11:31
daemontoolit is unlikely that one tenant has billions of files and petabytes11:32
daemontoolI don't think anyone would do that11:33
daemontoolbut let's say a user wants to do that every 4 months11:33
daemontoolwe still have the same challenge11:33
reldanAgree11:34
reldanWe can arrange a meeting11:35
daemontoolI think we should probably focus now on getting that first available11:36
daemontoolwith basic features11:36
daemontoolyes11:36
reldanYes, it is great that we have a blueprint now. Probably we should discuss how to split this big task into several small ones. And then mark every small task as “ready to implement” or “should be clarified: this and this”11:37
daemontoolok11:37
daemontool++11:37
daemontoolSlashme,  sounds good to you?11:37
SlashmeMy only concern is the targeted release. I don't think mitaka is realistic, even for a basic feature.11:38
daemontoolSlashme,  I agree11:39
SlashmeApart from that, the feature is great, there are lots of use-cases and potential interested users I think.11:40
*** reldan has quit IRC11:41
daemontoolSlashme,  well if we can have anything by end of February11:43
daemontoolwe have like 2 full months11:43
*** reldan has joined #openstack-freezer11:57
*** openstackgerrit has quit IRC12:17
*** openstackgerrit has joined #openstack-freezer12:18
*** memogarcia has joined #openstack-freezer12:30
daemontoolreldan,  now that I'm thinking about it, we can't do incremental for swift objects13:32
daemontoolcause there's no way to write a piece of object13:32
daemontoolto swift13:33
daemontoolwe can only use timestamp difference13:33
reldanBut it is probably possible to do file-based rather than block-based incrementals13:33
daemontooland if the timestamp changed, we back it up13:33
daemontoolyes13:33
daemontoolobject based13:33
reldanBut I don’t know any fast way to do this other than listing every container13:33
reldanAnd it may be quite slow13:34
reldaneven if we have no changes at all, we still have to list every single object in swift13:34
daemontoolwe can retrieve 10k objects at a time13:34
daemontoolper request13:34
daemontoolI mean, we can list 10k objects per request13:35
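
A sketch of that paging loop with python-swiftclient, doing the timestamp-based incremental described above (conn is assumed to be an already-authenticated swiftclient Connection):

    PAGE = 10000  # swift returns at most 10k names per listing request

    def objects_changed_since(conn, container, since):
        # `since` is an ISO 8601 string such as '2015-12-01T00:00:00.000000';
        # ISO timestamps compare correctly as plain strings
        marker = ''
        while True:
            _, listing = conn.get_container(container, marker=marker,
                                            limit=PAGE)
            if not listing:
                break
            for obj in listing:  # dicts: name, bytes, hash, last_modified
                if obj['last_modified'] > since:
                    yield obj
            marker = listing[-1]['name']
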
reldanYes, but usually I use swift for storing hundreds of millions of objects13:36
reldanLike I did at my previous work13:36
reldanWe can calculate average traffic13:36
reldanbut usually cloud providers charge for bandwidth as well13:37
reldanso it may be very expensive13:37
daemontoolgenerally the traffic within the cloud is not billed13:39
daemontoolif we move the object outside yes13:39
daemontoolbut that's part of the intrinsic cost of the backups13:40
daemontoolyou have to move the data anyway13:40
daemontoolalso this is public cloud specific13:40
daemontoolfor private cloud deployments costs are different13:40
reldanI suppose traffic between vms is not billed, but traffic between a vm and swift should be13:40
daemontoolI don't know13:41
reldanAnyway, let’s not consider price for now13:43
reldanFor listing we should get file_name, file_size and attributes13:43
reldanLet’s say it should be 100 bytes per file13:43
reldanAnd we have 1 mln files13:44
daemontooland timestamp13:44
reldanSo we need something around 100 mb traffic only for listing13:44
daemontoolI think it's more13:44
daemontoolat least 200B of metadata per file13:45
daemontoolfor 1 milion objects13:45
reldanThen 200 mb per 1 mln objects for metainformation13:45
reldanif we have 10k objects per request, we should invoke it 100 times, each partition will be 2 mb13:46
reldanIf a user has 10 million, it will be 2GB of metainformation13:47
daemontoolyes that's the math13:47
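
Spelled out, with the 200-byte-per-object figure assumed above:

    meta = 200                 # bytes of listing metadata per object
    page = 10000               # objects returned per listing request
    for objects in (10**6, 10**7):
        requests = objects // page          # 100 and 1000 requests
        total_mb = objects * meta / 1e6     # 200 MB and 2000 MB (~2 GB)
        response_mb = page * meta / 1e6     # ~2 MB per response
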
reldanNot fast, but seems doable. But there may be a good question13:49
reldanWhat if we have writes during the backup13:49
reldanSo we may write some objects twice, and some objects not at all13:49
reldanOr they have paging?13:50
reldanwith id13:50
daemontoolswift supports paging13:51
daemontoolbut there's no snapshot feature in there13:51
daemontoolso that's a question more for swift, like13:51
daemontoolwhat happens if someone writes an object while someone else concurrently13:51
daemontoolreads it?13:52
reldanYes13:53
daemontoolI don't know13:53
reldanAnd an additional question: if we would like to support encryption and compression for swift.13:54
daemontoolwe should look into object concurrency on swift13:54
daemontoolI was taking a look, the transfer can be compressed with swift13:54
daemontoolI think we should13:54
reldanShould we do it per file or per whole container13:54
daemontoolfor the incremental13:54
daemontoolI think it should be per object13:54
reldanin this case we should obfuscate names13:55
daemontoolin the metadata?13:55
daemontoolwe can just encrypt the metadata too13:55
reldanyes, but if we do compression on a per-file basis, we have a lot of small archives13:56
reldanand we should be sure that no one can read original names13:56
reldanif we do compression per object - it means we have a lot of small compressions, and it may not be very useful13:57
daemontoolyes but then if you need to restore13:57
daemontoolone object, then you need to restore one container, or the whole thing13:57
reldanYou are right13:58
daemontoolI think the metadata is one single blob13:58
daemontoolcompressed/encrypted13:58
daemontooland the compressed/encrypted objects13:58
daemontoolare per object13:58
daemontoolthe metadata would be like tar metadata13:58
reldanyes, with obfuscated names13:58
daemontoolyes with13:58
daemontoolthe metadata encrypted13:59
daemontool(which I'm not sure if we have that now)13:59
daemontoolcan't remember13:59
reldanNope, now metadata is not encrypted13:59
reldanand not compressed13:59
daemontoolok13:59
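
A minimal sketch of the scheme discussed above -- per-object compressed/encrypted payloads plus one tar-like metadata blob with obfuscated names (all helper names hypothetical; the cipher is left out because it is not decided here):

    import hashlib
    import hmac
    import json
    import zlib

    def obfuscated_name(key, original_name):
        # keyed hash: deterministic, so incremental runs map the same
        # object to the same stored name; key must be bytes
        return hmac.new(key, original_name.encode('utf-8'),
                        hashlib.sha256).hexdigest()

    def pack_metadata(name_map):
        # name_map: {original_name: obfuscated_name, ...}
        # compress first, then encrypt the resulting blob with whatever
        # cipher is chosen (encryption step intentionally omitted)
        return zlib.compress(json.dumps(name_map).encode('utf-8'))
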
*** emildi has quit IRC15:02
*** daemontool has quit IRC15:29
*** daemontool has joined #openstack-freezer15:42
*** dschroeder has joined #openstack-freezer16:22
*** reldan has quit IRC16:59
*** reldan has joined #openstack-freezer17:01
*** daemontool has quit IRC17:02
*** reldan has quit IRC18:02
*** reldan has joined #openstack-freezer18:28
*** reldan has quit IRC18:32
*** reldan has joined #openstack-freezer18:33
*** reldan has quit IRC18:37
*** reldan has joined #openstack-freezer18:56
*** zhonghua-lee has quit IRC19:27
*** zhonghua-lee has joined #openstack-freezer19:28
*** zhonghua-lee has quit IRC19:29
*** dschroeder has quit IRC19:45
*** reldan has quit IRC20:48
*** reldan has joined #openstack-freezer20:50
*** reldan has quit IRC21:32
*** reldan has joined #openstack-freezer22:08
*** openstack has joined #openstack-freezer22:23
*** reldan has quit IRC22:34
*** openstack has joined #openstack-freezer22:38
