Thursday, 2016-02-11

openstackgerritFabrizio Fresco proposed openstack/freezer: Blueprint specs for DR in freezer  https://review.openstack.org/27824200:35
*** reldan has quit IRC00:37
*** dschroeder has quit IRC00:44
*** damia_pi has joined #openstack-freezer01:11
*** reldan has joined #openstack-freezer01:13
*** damia_pi has quit IRC01:16
*** reldan has quit IRC01:46
*** damia_pi has joined #openstack-freezer01:49
*** damia_pi has quit IRC01:51
*** daemontool has quit IRC02:11
*** openstackgerrit has quit IRC02:15
*** daemontool has joined #openstack-freezer02:23
*** openstackgerrit has joined #openstack-freezer02:24
*** daemontool has quit IRC03:08
*** openstackgerrit has quit IRC08:47
*** openstackgerrit_ has joined #openstack-freezer08:47
*** openstackgerrit_ is now known as openstackgerrit08:48
*** reldan has joined #openstack-freezer08:58
*** reldan has quit IRC09:34
*** reldan has joined #openstack-freezer10:09
openstackgerritEldar Nugaev proposed openstack/freezer: Fix freezer for py3 compatibility  https://review.openstack.org/27834610:20
dmelladohey guys, what's the deal finally with py34 gate?10:52
dmelladoit's blocking a couple of patches10:52
dmelladocould you merge the fix?10:53
reldandmellado: There is a lot of changes11:01
reldandmellado: And I’m not sure that I have fixed everything11:02
*** daemontool has joined #openstack-freezer11:21
m3m0_daemoontool pong11:24
openstackgerritEldar Nugaev proposed openstack/freezer: Fix freezer for py3 compatibility  https://review.openstack.org/27834611:26
*** reldan has quit IRC11:46
openstackgerritMemo Garcia proposed openstack/freezer: Add SSL support for freezer  https://review.openstack.org/27836711:51
*** reldan has joined #openstack-freezer12:05
*** ig0r_ has joined #openstack-freezer12:12
*** daemontool has quit IRC12:39
*** daemontool has joined #openstack-freezer12:40
openstackgerritPierre Mathieu proposed openstack/freezer: Swift client does not stringify object names anymore on get_object()  https://review.openstack.org/27902712:41
openstackgerritEldar Nugaev proposed openstack/freezer: Fix freezer for py3 compatibility  https://review.openstack.org/27834612:42
daemontoolMorning12:44
*** daemontool_ has joined #openstack-freezer12:49
*** daemontool__ has joined #openstack-freezer12:49
*** daemontool has quit IRC12:51
daemontool__slashme, re: https://review.openstack.org/279027 what do you think if we ask to the swift team why that behaviour changed?12:54
*** daemontool_ has quit IRC12:54
daemontool__the fix is good, but now we are converting all the blocks from data to str12:55
daemontool__if we restore hundreds of GB it can have some impact12:55
slashmeNo we are not12:56
slashmeBackup objects have a __repr__ method !12:56
daemontool__I think the type os that object before was data12:56
daemontool__now is str12:56
daemontool__aren't we converting from data to str?12:57
slashmeNo12:57
daemontool__what type is that block before converting to str?12:57
slashmeWhen you call str() on a backup object is calls the __repr__() method of that backup object that returns a string like "<self.hostname_backup_name>_<timestamp>_<level>"12:58
slashmeThe swift client get_object() method expects a string for the name of the object12:59
daemontool__ah ok12:59
daemontool__that's the name of the object we retrieve12:59
daemontool__ty12:59
daemontool__sorry13:00
slashmeBefore 2.7.0 it seems that they were calling str() on anything passed as the object name. Which is not the behaviour anymore.13:00
daemontool__okok13:00
daemontool__I wonder if they know that13:02
*** Slashme_ has joined #openstack-freezer13:16
*** Slashme_ has quit IRC13:22
*** daemontool__ has quit IRC13:23
*** daemontool__ has joined #openstack-freezer13:35
*** daemontool_ has joined #openstack-freezer13:42
*** daemontool__ has quit IRC13:46
*** daemontool_ is now known as daemontool13:56
daemontoolplease add your topics to the meeting agenda https://etherpad.openstack.org/p/freezer_meetings13:56
daemontoolremember today the meeting happen at 2 pm UTC13:56
daemontoolas per http://eavesdrop.openstack.org/#Freezer_Meeting13:56
daemontoolaccording your GMT TZ the meeting should happen in ~3 min, right?13:57
*** ddieterly has joined #openstack-freezer13:59
m3m0_yep14:00
daemontoolthe meeting will start in 2 min, m3m0_  you chair?14:02
m3m0_let's move to the #openstack-meeting-alt channel please14:02
daemontoolyes14:02
m3m0_frescof_ are you here?14:05
daemontoolvannif, szaher_  slashme  ping please join the meeting at #openstack-meeting-alt14:09
vannifpong14:09
slashmepong14:13
openstackgerritPierre Mathieu proposed openstack/freezer: Swift client does not stringify object names anymore on get_object()  https://review.openstack.org/27902714:17
szaher_pong14:18
openstackgerritEldar Nugaev proposed openstack/freezer: Fix freezer for py3 compatibility  https://review.openstack.org/27834614:19
daemontoolfrescof_, are you around?15:00
daemontoollet's discuss about the DR please, it's a hot topic15:02
slashmeFirst question abut the DR15:03
daemontoolcan we do 5 minutes break first?15:04
daemontoolso we have also frescof_ on this15:04
daemontooland I can go drink something15:04
m3m0_sure :)15:04
daemontoolty15:04
slashmeIs the gerrit change the right place to express disagrement. I think it is, because I have the feeling that when this is merged and part of the specs, then it means we will commit to it.15:05
slashmeSo for big blueprints like this one, I'd like to see a majority of core agreeing before merging. Because this impacts the globality of the software and its developper.15:06
slashmeDefinity two persons agreeing is not enough to merge this kind of blueprints.15:07
*** fabriziof has joined #openstack-freezer15:08
daemontoolok I'm here15:09
daemontoolslashme, I think gerrit is OK to express disagreements15:10
daemontoolslashme,  ok, what should it be?15:13
daemontoollike 40%? 50%?15:13
slashmeI don't know. We need to decide about percentage and if non-core can vote.15:14
daemontoolnon core can provide feedback15:14
slashmeMaybe there is OpenStack Guidelines.15:14
daemontooleverybody can vote15:14
daemontoolbut I think is at least 40% of core agree15:14
daemontoolwe can do it15:14
daemontoolbut there's also the balance between community and companies15:15
slashmeWith a current count of 7 cores, this would mean we need 3 approval to merge a blueprint ?15:16
daemontoolI don't know15:18
m3m0_it's better if we merged if half + 1 agree15:18
daemontooleven half15:18
daemontoolis ok15:18
daemontoolanyway15:19
daemontoollet's talk about the issue15:19
daemontoolthen we see the disagreements15:19
slashmeOkay. We should agree on that at some point. I'm not trying to start a war15:19
daemontoolok so let's say we need 415:20
daemontoolI'd like to understand the disagreements15:20
daemontoolwhat are the issues_15:20
daemontool?15:20
daemontooland what are the benefists?15:21
*** ig0r_ has quit IRC15:21
slashmeFirst, let's say the truth. This is not an implementation blueprint. This is a blueprint to merge an internal HP project called Osha into Freezer.15:21
daemontoolok15:22
daemontoolso we have the implementation already15:22
daemontool?15:22
fabriziofhi15:22
fabriziofyes of the goal 115:22
daemontoolhi fabriziof15:22
slashmeThen. There is nothing common about these two projects.15:22
daemontoolwhat is the problem that osha is trying to solve?15:23
slashmeVM high availability15:23
fabriziofnope15:24
fabriziofvm high availability is a complete different thing15:24
daemontooljust that?15:24
szaher_guys  I think we need to define what freezer is and what is the scope of freezer and then let's see if this bp or Osha is relative or not15:24
daemontoolok15:24
fabriziofradically different15:24
m3m0_agree szaher_15:24
daemontoolszaher_, I think the freezer scope is well define d at this point15:24
szaher_if it's relative let's work on it and include it in freezer if not let's just ignore this15:24
*** ddieterly has quit IRC15:24
daemontoolwe do backup restore and dr in openstack15:25
daemontoolthat's our scope15:25
szaher_daemontool I mean let's go back and check not to put one now off-course it's set long time ago :D15:25
m3m0_for context... https://review.openstack.org/#/c/27824215:25
daemontoolok15:25
daemontoolfabriziof, what are you proposing with that bp in short?15:26
epheohi there15:27
daemontoolHi epheo  :)15:27
slashmeMy opinion is very simple. I love both projects, but they share not components.15:27
epheoFunny issue btw :)15:28
epheoUsually (99%) people are struggling separating projects not merging them, hahaha15:28
daemontool:)15:28
daemontoolfrom the bp15:28
daemontoolI understand15:28
daemontoola dr feature is being proposed15:28
daemontoolso the point is, if would be the case to create a new project for that15:29
daemontoolor include it in freezer right?15:29
fabriziofimho dr is a wide definition15:29
epheoYes, what do you call DR ?15:29
daemontoolwell we have to start y something15:29
fabriziofthat can start from loosing 1 physical node15:30
daemontool:)15:30
fabriziofto a rack15:30
daemontoolto a dc15:30
fabriziofto a dc15:30
daemontooletc etc15:30
fabriziofor a continent15:30
fabriziofor a planet :)15:30
epheolost plannet15:30
m3m0_but the scope of this bp involves lots more of components that: a) the scope is not for freezer b) there are tools that already does that15:31
fabriziofso to start approaching to solve the problem, there are various kind of approaches15:31
fabriziofwith different costs and time of restoring the service15:32
*** daemontool_ has joined #openstack-freezer15:32
m3m0_so make sense to develop a platform that integrate all of those components first rather than putting all in freezer15:32
daemontool_sorry got disconnected15:32
fabriziofbackup/restore is probably the cheapest one15:33
daemontool_I think, a project in openstack, solve a common set of problems15:33
daemontool_for what I understand15:33
daemontool_things like heartbeat15:33
fabriziofbut need a long time15:33
daemontool_ha15:33
daemontool_and so on15:34
daemontool_we do not do that15:34
slashmeMy point is: DR using Backup/restore and DR using what you propose in your blueprint share no common parts.15:34
*** daemontool has quit IRC15:34
fabriziofbackup/restore is not usually considered a DR solution15:34
slashmeSo merging them into the same project does not make sense.15:35
daemontool_we do need to merge things15:35
fabriziof@slashme you are repeting yourself15:35
daemontool_we can have the tooling to resolve specific problems15:35
fabriziofwe can read15:35
fabriziofand understand at first shot15:35
daemontool_fabriziof, hang on please15:36
daemontool_so15:36
epheoBut without arguing on what Freezer should or shouldn't do, OpenStack is trying as much as possible to respect KISS15:36
epheoThe guys frmo Nova are separating the scheduler15:36
epheoMost of the projects are spliting15:36
daemontool_epheo, I see that15:36
epheoMake no sense to merge 2 together15:36
daemontool_look at neutron15:36
daemontool_project like lbaas15:37
fabriziofosha don't exist15:37
daemontool_vpnass15:37
daemontool_etc etc15:37
epheoEven if that make sense on Freezer feature pov15:37
daemontool_are individual tools15:37
epheoJust my 2cents15:37
daemontool_to solve specific problems15:37
epheo:)15:37
daemontool_but under the same project15:37
fabriziofthe architecture of what the bp is proposing15:37
daemontool_because they solve the same common set of issues15:37
fabrizioffeet the freezer architecture perfectly15:37
fabrizioffeet=fit15:38
daemontool_my point is that we can have15:38
daemontool_dedicated tolling15:38
daemontool_with shared services15:38
epheoCompute HA matching backup restore? (finnaly I have hope finding the loved one :) :) )15:38
daemontool_within the same project15:39
daemontool_and that dedicated tolling15:39
slashmedaemontool_: shared services15:39
daemontool_can have a completely dedicated purpose15:39
daemontool_yes like shared db, shared api, shared webui15:39
daemontool_like neutron subprojects15:40
fabriziofshared agent15:40
daemontool_they are independent tools15:40
daemontool_shared agent I don't know15:41
daemontool_probably shared scheduler make more sense15:41
daemontool_or same python-freezerclient15:41
fabriziofwhy not ?15:41
fabriziofah sorry.... used wrong term15:41
daemontool_what did you mean?15:41
daemontool_the scheduler?15:42
fabriziofyep15:42
daemontool_so we can have like a dedicated repo15:42
daemontool_something like freezer-dr15:42
daemontool_but one sec15:42
daemontool_what do you guys think?15:42
slashmeLet's see where things would interact15:43
slashmeFreezer-web-ui: Okay, that's easy15:43
slashmeThen what ?15:44
daemontool_the api framework I think can eb shared15:44
fabriziofthe scheduler is polling over the api, and can send information on the hypervisor15:44
fabriziofalmost for free15:44
daemontool_fabriziof, I think we have to write a dedicated agent for dr15:44
daemontool_liek freezer-dr-agent15:44
daemontool_something like that15:44
daemontool_the scheduler can be reused for sure15:45
daemontool_is abstracted enough15:45
daemontool_the api15:45
fabriziofagree 100% on that15:45
daemontool_would be a dedicated endpoint15:45
daemontool_called15:45
daemontool_disaster_recovery15:45
daemontool_or dr15:45
daemontool_I don't know15:45
daemontool_I'm just thinking loud15:46
daemontool_slashme,  what should be the independent components from your point of view?15:46
daemontool_the agent for sure has nothing to do15:46
fabriziofyep the agent has noting to do15:46
slashmeEven the api. What is the point of sharing it if we are not using any common endpoint.15:47
daemontool_well we share the infrastructure15:47
daemontool_and framework behind15:47
slashmeSo do we share infra with keystone and swift15:47
daemontool_nope because with those services we are not sharing15:47
daemontool_a common set of problems15:47
daemontool_but we use the keystonemiddleware15:48
daemontool_in our api15:48
daemontool_slashme, so the agent is independent15:48
daemontool_the repo is independent15:49
daemontool_the api endpoint is independent15:49
daemontool_what is shared15:49
daemontool_the web ui15:49
daemontool_the api framework15:49
daemontool_the scheduler15:49
daemontool_the python-freezerclient15:49
daemontool_I think over time we'll find more features to share15:50
daemontool_think about the dr applied to tenants15:50
daemontool_like dr as a services15:50
*** epheo has quit IRC15:51
daemontool_it has things in common with the tenant backup we are doing15:51
fabriziofthat was my idea15:51
daemontool_fabriziof, that should be written in the bp15:51
daemontool_I think15:51
daemontool_at least for clarity15:51
fabriziofand comparing thhe split proposed for nova, that is a 5 year old project with hundred of developer15:52
fabriziofwith freezer that is very young with a very small number of contributor, is wrong15:52
slashmeThe whole infra of freezer has been though to manage  Backup and Restore. Nothing is ready to integrate this kind of completly different fonctionality.15:52
slashmetenant backup is still just a new very complex backup mode.15:53
*** epheo_ has joined #openstack-freezer15:53
slashmeEvacuation / Fencing / monitoring / Notification are completly different topics.15:53
daemontool_well we use the other services api15:54
epheo_If you need to re-use some code and share classes or fct between both project the correct way of doing it is to have it as a dependency,15:54
daemontool_we are not going to implement that15:54
daemontool_epheo_, ++15:54
daemontool_as a module dependency15:54
daemontool_like for impi I think ironic should be used15:55
daemontool_I think there's a big gap in openstack for this15:55
slashmeDon't get me wrong, I think you are both very good engineer, and that szaher_ (who wrote a POC implementing the failover part of that blueprint) is a fantastic dev. It's just that I think this is not in the best interest of the freezer project.15:55
fabriziofironic should be leveraged, if present15:55
fabriziofas much as possible15:56
daemontool_slashme, I think we are disagreeing15:56
fabriziof@slashme it's really challenging to not get you wrong15:56
daemontool_if this is a set of problem we are proposing to solve or not15:56
slashmeWe will need to complexify the core of freezer (Backup/restore) to much to integrate those new functionnality.15:57
daemontool_is this is a set of solutinos we provide15:58
daemontool_then the implementation is just a way to achieve the goal15:58
daemontool_slashme,  I think15:58
daemontool_we have a separate repo15:58
daemontool_separate agent15:58
fabriziofI thinks those features should be in a separate module15:58
daemontool_a and a dedicated api endpoint within our framework15:58
daemontool_things can go in parallel15:58
fabriziofthat get loaded if enabled15:58
daemontool_at least to start15:58
daemontool_then if at some point15:59
daemontool_the dr thing15:59
daemontool_became to big15:59
daemontool_we can split15:59
fabriziofadding very little if not no complexity at all to actual code15:59
daemontool_but there are benefits15:59
daemontool_fabriziof, does hp would allocate some resources for this?15:59
*** Felips has joined #openstack-freezer16:00
fabriziofyes, I got the foundings16:00
daemontool_fundings :)16:00
fabriziofyes sorry, the $$16:01
daemontool_well if a company allcoate resources, there's an opportunity16:01
daemontool_slashme,  epheo_  what would be the conditions16:01
daemontool_from you to have this working?16:01
openstackgerritMemo Garcia proposed openstack/freezer: Add SSL support for freezer  https://review.openstack.org/27836716:01
daemontool_or no conditions at all it is not possible at all?16:02
*** epheo has joined #openstack-freezer16:02
daemontool_I can't read nothing16:08
slashmeJust curious here: what would be the interactions with the API ?16:08
slashmeWhat kind of data would the new agent share with the new endpoints of the API ?16:08
*** epheo has quit IRC16:09
fabriziofthe status of the hypervisor16:09
daemontool_fabriziof,  explain that a bit more please16:09
fabriziofthe scheduler can gather easily the status of kvm/libvirt on the hypervisor16:10
daemontool_fabriziof,  shouldn't we get that from the nova api?16:10
fabriziofand even take actions in front of a specific action (like the freezer jobs for backup)16:10
*** epheo has joined #openstack-freezer16:11
daemontool_I think the whole scheme of the job scheduler can eb reused16:11
fabriziofnova depends on rabbit, for example16:11
daemontool_that's a shared service16:11
daemontool_that's one reason why they splitted16:11
fabriziofand we want do reduce the false positives as much as possible16:11
daemontool_slashme, epheo   do you want to think about it couple of days more to set your conditions to minimize the risks you see on this?16:12
fabriziofbut we should leverage nova as much as possible anyway16:12
daemontool_is that reasonable?16:12
fabriziofthe workflow of the "monitoring" part, should be: nova, freezer-scheduler, third party monitoring tools (ex. monasca)16:13
epheofabriziof: You know my opinion about leveraging nova for something as critical as instance lifcycle16:14
epheoOr monasca/nagios/etc (but this is a separate topic)16:14
slashmeIn the end, I'll accept any decision, as long as the persons that are going to have to develop this kind of bp are involved in the decision process.16:14
daemontool_epheo, that is good16:14
fabriziofepheo: yes, but I still don't agree16:14
daemontool_I think with that consideration we can make the solution more resiliant16:15
daemontool_slashme, appreciate a lot that, I'd like also that we clearly say the issues we see16:15
daemontool_so we can address them16:15
*** dschroeder has joined #openstack-freezer16:16
*** epheo_ has quit IRC16:16
fabriziofepheo: but I respect your opinion, and I'm still open to change my mind16:16
daemontool_if you guys share your thoughts, we can16:16
daemontool_provide our inputs16:16
daemontool_otherwise is difficult16:16
daemontool_to what you disagree fabriziof  and epheo ?16:17
fabriziofnova is the core component of OS, and we are doing OS native things16:17
fabriziofleveraging as much as possible what OS provide16:17
daemontool_so epheo says we should go directly on the hypervisor without the api?16:18
daemontool_the nova api I mean?16:18
daemontool_or what?16:18
fabriziofsayd that, if nova is not perfect, it's not a good enough justification to not using that16:18
daemontool_fabriziof, I agree with that, but we need to find a mechanism16:18
daemontool_that works even is nova is not available16:18
daemontool_that's always is our primary purpose16:19
daemontool_so we could fall back and talk directly to the hypervisor16:19
daemontool_if the nova api are not avail16:19
daemontool_or keystone16:19
daemontool_something like that?16:19
fabriziofdaemontool_: it's what I sayd before, use more mechanism to decrease nova glitches16:19
epheodaemontool_: My opinion is just that Nova isn't (and will most probably never be) relevent enough to provide decision on instance lifecycle. It return close to random values regarding the compute state.16:19
epheoMonitoring tools as Monasca can be a bit better but that still mean that the life and death of our instance depends on the status of the monitoring tool.16:19
daemontool_epheo, well yes, I agree16:20
daemontool_we need to use those services tho16:20
daemontool_at least at first instance16:20
daemontool_then fall back to more direct method16:20
daemontool_is that fails16:20
daemontool_and open bugs to the services16:20
daemontool_if we see inconsistent things16:20
daemontool_epheo,  so you say the16:21
daemontool_dr mechanism16:21
daemontool_wouldn't be reliable16:21
daemontool_because of the unreliability of nova and monasca basically16:21
epheoIMHO we should assume that the compute is down by default until it prove the opposite. A way of doing it can be to have a couple of (idealy kernel level) checks that write in a watchdog directly in the compute. If it don't we migrate, Osha API will ensure the fencing to stay coherent and proceed to the evacuate16:22
epheoRelying on any other (outside) component may or may not be relevent. The problem beiing that we can't know for sure16:23
daemontool_fabriziof, don't you agree with that as complementary approach also?16:23
fabriziofepheo: that would have a hardware requirement16:24
daemontool_well let's deep think how to do it16:24
fabriziofso, imho, this should be one of the "monitoring" method16:24
daemontool_I think epheo  point is reasonable16:24
fabriziofnot the only one16:24
fabriziofas I always shared with epheo16:24
daemontool_how do we ensure the information retrieved from the api are consistent16:24
daemontool_ok16:25
daemontool_fabriziof, please also add that to the bp16:25
daemontool_as potential issue16:25
epheofabriziof: totally agree, if customer are ok to rely on their monitoring tools we should be able to do it as well16:25
daemontool_and possible solution16:25
fabriziofsame thing is ipmi, it's hardware requirement, and my proposal is to be as independent as possible from requirements16:26
daemontool_slashme, from your comment I've read before, I understand the dev needs to be actively involved in the solution definition16:26
epheoHopefully szaher_ didi a very good job and the whole architecture of Osha is modular16:26
daemontool_as far as I can understand, in freezer we work with that model, but if that is not true, please remark it16:26
daemontool_slashme,  ^^16:26
slashmeIt has always been the case, I not saying anything like that.16:27
daemontool_ok, I'm saying it because I believe on it16:27
* fabriziof too16:28
daemontool_so, fabriziof we need to define the limits between the the backup and the dr16:28
daemontool_fromthis conversation16:28
daemontool_writing down to the bp the shared and independent components16:29
slashmeTrying to get a view on how the architecture would look like:16:29
slashme  - A shared freezer-ui with two tabs: "backup/restore" and "datacenter HA"16:29
slashme  - A shared freezer-api with diffent set of endpoints16:29
slashme  - A shared db to store different kind of informations16:29
slashme  - A shared scheduler16:29
slashme  - Different agents being called by that scheduler.16:29
slashmeThat new agent would take care of the new functionnalities: Evacuation, Fencing, Monitoring, Notification.16:29
slashmeTaking into account that the freezer-scheduler and freezer-api couple is just a JSON exchange machine.16:30
fabriziofwait slashme16:30
fabriziofyou mean that: Evacuation, Fencing, Monitoring, Notification16:31
fabriziofshould be done in the "client" side ?16:31
daemontool_I think that should be done by the dr-agent16:32
daemontool_when you can place it where you want16:32
slashmeYup16:32
daemontool_s/when/then/16:32
daemontool_client site please forget it confuse people in respect the python-freezerclient16:32
fabriziofok, let's say where the scheduler is16:33
slashmeThe scheduler is just used to : dialog with the api / execute agents16:34
slashmeIt does not take actions on its own16:34
daemontool_++16:34
fabriziofI sayd: where the scheduler is16:35
fabriziofnot: in the scheduler16:35
daemontool_ok so we agree basically16:35
daemontool_?16:35
fabriziofdunno16:35
daemontool_I think so16:35
fabriziofI think that those actions should be taken where the api are16:36
daemontool_fabriziof, do you want that the dr-agent talk with the other than freezer services api? or you want the scheduler to do that?16:36
daemontool_I think the dr-agent should do it16:36
daemontool_and the dr-agent is executed by the scheduler16:36
daemontool_did I got it well?16:37
slashmeAPI DON'T TAKE ACTIONS16:37
epheoBut still, what's the benefit of having only one project to achieve this goal ?16:37
epheoThis isn't the Linux way... And we're already leveraging oslo and os.common as shared lib.16:37
epheoKeep It Simple Stupid is the only way to have readable / easy to contribute / and maintainable software.16:37
fabriziofok, we are speaking different languages16:37
fabriziof<fabriziof> I think that those actions should be taken where the api are16:38
epheoLook at Nero for example, dead because of too much functionalities. In people mind freezer = backup. I believe we should have something else for DR16:38
daemontool_fabriziof, I think the dr-agent should be placed anyware16:38
daemontool_not necessarly on the api node16:38
fabriziofdaemontool_: agree16:38
daemontool_it should be node indepentent as long it can communicate with the api16:38
daemontool_slashme, was it that what you were saying previously?16:39
fabriziofbut I don't agree on having that where the scheduler is16:39
daemontool_ok so that isn't16:39
daemontool_I got it wrong16:39
fabriziofbecause we are evaquating failed nodes16:39
fabriziofand I dunno how a node can evacuate himself after a failure :(16:39
daemontool_ok16:39
* fabriziof is feeling stupid16:39
daemontool_but the dr-agent is executed by the freezer-scheduler, right?16:40
fabriziofyes16:40
daemontool_ok16:40
daemontool_ok16:41
fabriziofbut that agent, as I sayd before, can be useful to take actions16:41
fabriziofnot that is the place where to execute the evacuation or the fencing16:41
fabrizioffor example16:41
fabriziofbut your opinion is appreciated16:42
daemontool_fabriziof,  shouldn't be that distributed?16:43
fabriziofI can be wrong16:43
daemontool_like one scheduler and one dr-agent per compute node?16:43
fabriziofyes16:43
daemontool_for compute service16:43
fabriziofyes16:43
daemontool_so if we loose a compute node16:43
daemontool_we have the others16:43
daemontool_then we need to manage concurrency16:43
slashmeSo one scheduler and one dr-agent per compute node for the monitoring side16:43
daemontool_but I think we can manage that using the freezer-api16:44
daemontool_and that is for compute service,16:44
fabriziofslashme: yep, let's say one of the monitoring piece16:44
daemontool_same apply for block storage service16:44
slashmeThen one scheduler and one dr-agent per api for migrations/fencing/notification ?16:44
fabriziofslashme: yep16:44
fabriziofhmmmmmm16:44
fabriziofyep16:45
epheoI like the idea of the agent on the compute :) Only itself can really know what's going on.16:45
fabriziofepheo: ++16:45
daemontool_why the scheduler/dr-agent also per api?16:45
daemontool_only on the compute nodes wouldn't be enough?16:45
daemontool_doing monitoring/migra/fencing//notification16:46
daemontool_?16:46
fabriziofthat's one of the reasons the architecture I thought fit very well with the freezer one16:46
slashmeCompute nodes can't be used to trigger migrations/fencing/notification16:46
epheoThen we have the problem fabriziof explain, if it's dead it will not be able to evacuate16:46
slashmeThey would't have network access to the right resourses16:46
daemontool_nope16:46
daemontool_because we have the other dr-agent16:46
daemontool_on the compute nodes16:46
fabriziofslashme: ++16:46
epheoAnd that's why we need to assume that its dead, until the agent proove otherwise16:46
fabriziofthis is the kind of conversation I like16:46
fabriziofcostructive16:46
daemontool_so16:47
daemontool_the scheduler/dr-agent16:47
daemontool_monitor only the compute node16:47
daemontool_where is running16:47
daemontool_and not also the others?16:47
fabriziofgo on guys16:47
daemontool_I have to go to eat16:47
daemontool_in 5 min16:47
daemontool_ok16:48
daemontool_have to go, fabrizio please the only thing make sure16:48
daemontool_all the perspectives are reported in the bp please16:48
daemontool_with issues and possible solutions16:48
fabriziofonce that bp is approved, or close too16:49
daemontool_nope,16:49
daemontool_this is part of the bp16:49
fabriziofwe can work togather on an architectural detailed design16:49
daemontool_this is16:49
daemontool_the architectural detailed design16:49
daemontool_part of that bp16:49
daemontool_let's do it know16:49
daemontool_then let's share with the community16:50
daemontool_get the feedback16:50
daemontool_merge, start with the implementation16:50
fabriziofI think that that part deserve a dedicated bp16:50
daemontool_why?16:50
daemontool_let's do evrything now16:50
fabriziofbecause it's complex enough16:50
daemontool_why do we need to have 2 bp16:50
fabriziofbecause I think we need more than 2 :)16:50
daemontool_we just need to add16:51
daemontool_this considerations16:51
daemontool_from this meeting16:51
daemontool_to that bp16:51
daemontool_ask for feedback and start with the implementation16:51
fabriziofo,, if you think so16:51
daemontool_then we split16:51
daemontool_many commits in gerrit16:51
fabriziofs/,,/k/16:51
slashmeEven if I'm participating, I'm still against the whole idea for the same reason as before (there is no usefull re-usage of the freezer infra). Maybe we will merge this, but that will be against my will.16:51
daemontool_slashme, even following the scheme that you reported before?16:52
daemontool_set the limits16:52
slashmeYes. Because I still think it does not make sense.16:52
slashmeThis is an opensource project. I'm just expressing my opinion.16:53
daemontool_sure16:53
daemontool_:)16:53
daemontool_have to go for food16:53
daemontool_later16:53
daemontool_please find an agreement16:53
slashmeBon appetit. See you later16:53
fabriziofbrb need some smoke16:53
epheosame16:55
*** daemontool_ has quit IRC16:57
*** samuelBartel has joined #openstack-freezer17:01
fabriziofslashme: it's not very clear why you say: there is no usefull re-usage of the freezer infra17:18
fabriziofsince we reuse most of the actual freezer infra17:18
*** samuelBartel has quit IRC17:33
epheofabriziof: Because if the goal is to re-use an API / Agent / Scheduler architecture... all the OpenStack projects should merge into one.17:37
epheoBut which make sense is a common shared lib (oslo).17:37
m3m0_daemontool_ https://blueprints.launchpad.net/freezer/+spec/last-poll-date-from-scheduler17:51
fabriziofepheo: ok but then we need to remove DR from freezer17:59
fabriziofbecause backup/restore is not DR17:59
fabriziofat all17:59
fabriziofif we want to do DR, imho, that one is a great way of doing it18:00
fabriziofsame thing can be:18:01
fabrizioffile backup18:01
fabriziofcinder backup18:01
fabriziofnova backup18:01
fabrizioflet's start splitting freezer that way so18:01
fabriziofbecause those have REALLY nothing in common18:02
fabriziofthey don't even need an angent18:03
m3m0_we can change the labeling in the UI to be Backup and Restore rather than DR18:08
fabriziofok18:09
fabriziofand split freezer18:09
*** EmilDi_ has quit IRC18:18
epheoOr "Backup and Restore and Disaster Recovery and Compute High Availability" :) (I should stop jokking before you kill me)18:22
*** reldan has quit IRC18:30
*** daemontool has joined #openstack-freezer18:54
*** reldan has joined #openstack-freezer18:59
daemontoolreldan,  how are we doing with https://review.openstack.org/#/c/278346/ ?19:12
reldandaemontool: i have fixed a lot of staff - I hope it will be ready tomorrow19:14
daemontoolI can keep working on it today19:15
daemontoolhere is just 14:4519:15
daemontooltomorrow is the last day for m3 release19:15
daemontoolreldan,  is it ok if I'll keep working on it?19:25
reldandaemontool: yes sure!19:25
daemontoolok ty19:25
reldanI will be glad )19:25
daemontoolplease be my guest! haha19:26
daemontoolok19:26
*** fabriziof has quit IRC19:35
daemontoolreldan,19:57
daemontoolare you there?19:57
daemontoolI think to solve the issue we need to move all the functional tests in a dedicated directory19:58
daemontoolthe integration tests does not work if we do not have a devstack instance running19:58
daemontoolthose are the tests to execute with devstack19:59
reldandaemontool: I don’t know actually. But it should pass a bigger part of tests now19:59
daemontoolok20:00
daemontoolif you remove the integration directory20:00
reldanif you have time you can try to fix these tests https://review.openstack.org/#/c/278346/10/tests/test_scheduler_daemon.py20:00
daemontooleverything works correctly20:00
daemontoolvannif,20:00
daemontoolping20:00
daemontoolafaik vannif  did the integration tests20:00
reldanand these https://review.openstack.org/#/c/278346/10/tests/test_lvm.py20:00
daemontoolassuming a devstack instance was there20:00
daemontoolif I remove the tests/integration dir20:00
daemontoolnothing fails from your current patchset20:01
daemontoolit works20:01
reldangreat!20:01
reldanwe can remove test/integration for now20:01
reldanand I will fix it tomorrow20:01
reldanjust to unblock other commits20:02
daemontoolyes20:04
daemontoolI'm doing some tests20:04
daemontooland upload a new patchset20:04
reldandeal!20:06
*** daemontool_ has joined #openstack-freezer20:53
*** daemontool has quit IRC20:55
daemontool_reldan,  ping21:33
daemontool_I'm doing something like this21:33
daemontool_https://github.com/openstack/python-novaclient/blob/master/tox.ini21:33
daemontool_with a directory tree like this21:33
daemontool_https://github.com/openstack/python-novaclient/tree/master/novaclient/tests21:34
daemontool_but if I use that settings21:34
daemontool_out tests are not execute21:34
daemontool_executed21:34
daemontool_with tox21:34
daemontool_but they are execute with python setup.py testr --coverage21:34
daemontool_ah, I think I've solved21:35
daemontool_yep...21:42
openstackgerritFausto Marzi proposed openstack/freezer: Fix freezer for py3 compatibility  https://review.openstack.org/27834621:52
*** daemontool_ has quit IRC21:56
*** daemontool has joined #openstack-freezer22:34

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!