Friday, 2017-03-31

*** chsc has quit IRC00:03
*** vinsh has joined #openstack-swift00:06
*** jamielennox is now known as jamielennox|away00:25
*** zhurong has joined #openstack-swift00:30
*** jamielennox|away is now known as jamielennox00:31
*** vint_bra has joined #openstack-swift00:53
kota_good morning00:55
tone_zGood morning!01:02
tone_zmattoliverau: morning!01:02
tone_zkota_: morning!01:02
*** vinsh has quit IRC01:03
mattoliveraukota_, tone_z: morning01:03
kota_tone_z, mattoliverau: \o/01:04
*** vint_bra has quit IRC01:10
kota_mattoliverau: I did NOT see the code in patch 448480 yet; what do you think of the bug 167500?01:26
openstackbug 167500 in Inkscape "Rulers don't align with document " [High,Fix released] https://launchpad.net/bugs/167500 - Assigned to Diederik van Lierop (mail-diedenrezi)01:26
patchbothttps://review.openstack.org/#/c/448480/ - swift - Container drive error results double space usage o...01:26
kota_IDK if the patch 448480 works for the case where 2 handoff nodes attempt to place the replica db to where....01:27
patchbothttps://review.openstack.org/#/c/448480/ - swift - Container drive error results double space usage o...01:27
kota_3 other nodes excluding the handoff's local copy? or 3 nodes including itself01:28
kota_the latter one could work well but the former may be terrible because the handoff nodes push their local copies to each other (i.e. racing), if I understand the problem correctly01:29
* kota_ means the 3-replica case01:29
*** sams-gleb has joined #openstack-swift01:38
mattoliveraukota_: I'll need to test, but this version just keeps track of having to send to len(repl_nodes) nodes; on each node it decrements, and if one comes back as a DriveNotMounted it'll simply increment the counter, and thus fall into more_nodes. It's stopping the scan to find the handoff node, so it will always try and hit up the same handoff nodes. So it should scale to any number of replicas. But I'll check on my SAIO with01:40
mattoliveraudifferent replicas to be sure.01:40
mattoliverauif it hits itself as a handoff node, then like in the reconstructor it'll just continue (after decrementing), thereby skipping it since it's already a handoff node.01:41
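
A minimal sketch of the counter scheme described above (hypothetical names, not the actual patch 448480 code), assuming a swift-style db replicator where more_nodes yields handoff nodes in ring order:

    from itertools import chain

    class DriveNotMounted(Exception):
        """Stand-in for swift.common.exceptions.DriveNotMounted."""

    def replicate_out(broker, repl_nodes, more_nodes, my_node_id, push_db):
        # aim for len(repl_nodes) successful placements
        attempts_left = len(repl_nodes)
        for node in chain(repl_nodes, more_nodes):
            if attempts_left <= 0:
                break
            attempts_left -= 1
            if node['id'] == my_node_id:
                # this replicator is itself a handoff holding the db:
                # continue after decrementing, i.e. count the local copy
                continue
            try:
                push_db(broker, node)
            except DriveNotMounted:
                # the target's drive is gone: give the attempt back so
                # the loop walks one node further into more_nodes
                attempts_left += 1

Because more_nodes yields handoffs deterministically, every replicator converges on the same handoff nodes, which is why the scheme should scale to any replica count.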
kota_mattoliverau: thx for sharing the status, that's helpful to understand01:41
mattoliverauI just think we should confirm that it won't then delete it off itself.01:42
kota_sounds reasonable (just continue after decrementing)01:42
mattoliverauotherwise we'd lose redundancy01:42
kota_wondering whether the later handoff attempts to push over the prior handoff?01:42
kota_e.g.01:42
*** sams-gleb has quit IRC01:43
kota_p0, p1, p2, h1, h2, h3... and if h3 has a replica as a handoff01:43
kota_for some reason (e.g. rebalance? network failure?), anyway01:44
kota_and then p0's storage device is now failing, so p0 will raise DriveNotMounted01:45
mattoliverauso h3 will try and push to p0, p1, p2, h1, h2 in that order.01:45
kota_and then, h3 tries to p1, p2, h1?01:45
mattoliveraubut yeah, only on DriveNotMounted; on any other failure it won't push it, just error, like we currently do.01:45
kota_because h3 is unable to find itself in the order.01:45
kota_until h3 comes01:45
mattoliveraukota_: oh yeah, I guess it would push to a handoff node in that case.01:46
mattoliverauMaybe I'll need to go and add a test to see :)01:46
mattoliverauquestion: what behaviour do we want in that case? push to a handoff node? or "hey, I'm a handoff node so I'm already good"?01:48
*** JimCheung has quit IRC01:48
mattoliverauexcept in the latter case, what happens if it's been a rebalance to remove the drive?01:48
*** JimCheung has joined #openstack-swift01:49
kota_AFAIK, the object-replicator attempts to push its local copy ONLY to the primaries01:49
kota_exactly, it may be a tradeoff (racing to create handoffs vs. decreased durability if we leave the device failures alone for a long time)01:50
kota_or the middle-ground solution may be trying to push the local copy to fill 3 replicas including its own local01:51
kota_so attempt_left will start from...01:52
kota_ah.. it needs to handle the primary and the handoff cases separately; in the handoff case, attempt_left will be # of primaries - # of primary failures - 1 (the handoff itself)01:53
*** JimCheung has quit IRC01:53
kota_maybe?01:53
kota_idk, and not sure01:53
kota_ah, no01:54
kota_the condition is not true01:54
kota_so, anyway, try primaries at first.01:55
kota_and then... if 1 failure is found and the replicator is on the handoff, it sounds fine; no more handoffs you need to push to.01:55
kota_if 2 failures are found, the replicator needs to push to 1 more handoff to keep the durability01:56
mattoliverauin your simulation before: p1, p2, p3, h1, h2, h3 and h3 has the handoff, we can make it so it says "cool, we need 3, I can be one".. but if p2 is down too, then it'll place it on p1, h1, h3 (itself).. but h1 would place it on p1, h1 (itself), h2... which at least would be the same as now if p1 ran.01:56
kota_how do we express the condition01:56
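
One way to express the condition kota_ is circling, as a sketch of the arithmetic only (the function and argument names are made up, not from the patch):

    def extra_handoff_pushes(replica_count, primary_failures, i_am_handoff):
        # copies assumed to exist after trying the primaries: the healthy
        # primaries, plus our own local copy if we are a handoff ourselves
        copies = (replica_count - primary_failures) + (1 if i_am_handoff else 0)
        return max(replica_count - copies, 0)

    # 1 primary failure, replicator is a handoff: its own copy fills the gap
    assert extra_handoff_pushes(3, 1, True) == 0
    # 2 primary failures: one more handoff push is needed for durability
    assert extra_handoff_pushes(3, 2, True) == 1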
mattoliverauSo now that I'm saying it: we can still get into a situation where there is an extra handoff and it won't remove it because it doesn't know about the others.. but maybe that's ok?01:57
kota_mattoliverau: true01:57
kota_sounds terrible, doesn't it?01:57
mattoliverauhmm, it does kinda.. well until a rebalance or the primary is fixed... which would always be the problem in any case, except in this case there might be an extra handoff out there somewhere01:58
mattoliverautho that's better than now.. where we place it everywhere :P01:59
kota_mattoliverau: exactly, current one in the master is the worst :P01:59
mattoliverauwell in the current patch, h3 would push to p1, h1, h2 and then delete itself.. which would mean we still have durability (as far as it's concerned) and now using up any extra space.. but there could be some more movement. However h1 and h2 won't ever move it again until the primaries are fixed or a rebalance happens.. so maybe it is better02:01
mattoliverau*now not using up extra space02:02
mattoliveraukota_: sorry, my typing seems to be bad, which is making my english bad :P02:02
mattoliverauI'll try and read my text before I press send :)02:03
kota_mattoliverau: ok, thanks for the info. sorry, I have to leave here for a while for a meeting02:04
mattoliveraukota_: no probs :)02:04
mattoliverauI'll do some playing with the patch02:05
mattoliveraukota_: I know you're going to a meeting, but what I said before was slightly wrong: in the current patch, h3 after pushing to p1, h1, h2 wouldn't delete itself because it would have got a failure from p2 and p3. So the data would also just stick around. If we want it to clean itself, then we'd need to look for num-replica successes in responses (line 589), but that's only if we want to go with that approach :)02:14
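
A sketch of the "num replica successes" cleanup rule floated above (hypothetical; responses stands in for the per-node success flags the db replicator collects):

    def handoff_can_delete_local(responses, replica_count):
        # only drop the handoff's local db once enough peers have
        # confirmed a copy to cover the full replica count
        return sum(1 for ok in responses if ok) >= replica_count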
*** sams-gleb has joined #openstack-swift03:40
*** klrmn has quit IRC03:44
*** sams-gleb has quit IRC03:45
*** chsc has joined #openstack-swift03:57
*** JimCheung has joined #openstack-swift04:03
*** JimCheung has quit IRC04:07
*** zhurong has quit IRC04:09
*** kei_yama has quit IRC04:10
*** chsc has quit IRC04:13
*** gkadam has joined #openstack-swift04:17
*** zhurong has joined #openstack-swift04:30
*** psachin has joined #openstack-swift04:42
*** kei_yama has joined #openstack-swift04:46
*** _JZ_ has quit IRC04:53
*** ChubYann has quit IRC05:18
*** SkyRocknRoll has quit IRC05:34
*** zhurong has quit IRC05:41
*** sams-gleb has joined #openstack-swift05:43
*** rcernin has joined #openstack-swift05:44
*** sams-gleb has quit IRC05:47
*** jaosorior has joined #openstack-swift05:56
*** zhurong has joined #openstack-swift06:05
openstackgerritMatthew Oliver proposed openstack/swift master: sharding - Fix container server put/delete piv  https://review.openstack.org/45119706:12
*** jojden has joined #openstack-swift06:15
*** chmouel has left #openstack-swift06:33
*** hseipp has joined #openstack-swift06:38
PavelKkota_: mattoliverau: with patch 448480 - when h3 finds a replica that belongs to p1, p2, p3 and handoffs h1, h2, h3, then it pushes it to the primaries and, if necessary, to handoffs in the order h1, h2. It doesn't delete the replica until it is pushed to all primary nodes (as the object replicator does). I think it doesn't matter. If you rebalance then we expect all primary drives to be healthy and handoffs06:39
patchbothttps://review.openstack.org/#/c/448480/ - swift - Container drive error results double space usage o...06:39
PavelKto be cleared shortly.06:39
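
For contrast, a sketch of the rule PavelK describes (mirroring the object replicator; primary_responses is an assumed list of per-primary success flags):

    def can_delete_after_push(primary_responses):
        # keep the handoff copy until every primary confirmed the push;
        # pushes to other handoffs don't count toward deletion
        return all(primary_responses)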
*** sams-gleb has joined #openstack-swift06:47
*** d0ugal has joined #openstack-swift06:52
*** d0ugal has quit IRC06:54
*** d0ugal has joined #openstack-swift06:58
*** d0ugal has quit IRC06:58
*** d0ugal has joined #openstack-swift06:58
*** JimCheung has joined #openstack-swift07:02
*** JimCheung has quit IRC07:06
*** tesseract has joined #openstack-swift07:08
*** geaaru has joined #openstack-swift07:19
*** d0ugal has quit IRC07:22
*** d0ugal has joined #openstack-swift07:26
*** d0ugal has quit IRC07:26
*** d0ugal has joined #openstack-swift07:26
*** pcaruana has joined #openstack-swift07:27
-openstackstatus- NOTICE: Jobs in gate are failing with POST_FAILURE. Infra roots are investigating07:45
*** ChanServ changes topic to "Jobs in gate are failing with POST_FAILURE. Infra roots are investigating"07:45
*** rcernin has quit IRC07:54
*** tesseract has quit IRC07:54
*** tesseract has joined #openstack-swift07:55
*** rcernin has joined #openstack-swift07:56
-openstackstatus- NOTICE: logs.openstack.org has corrupted disks, it's being repaired. Please avoid rechecking until this is fixed08:25
*** ChanServ changes topic to "logs.openstack.org has corrupted disks, it's being repaired. Please avoid rechecking until this is fixed"08:25
*** cbartz has joined #openstack-swift08:31
mattoliverauPavelK: I'm not actually at work (i.e. I'm supposed to be with the family) but I'm not disagreeing; I'm not saying the behaviour is wrong, which is why I didn't score. I'm not against keeping the extra around. I just wanted to make sure the discussion was had as a community, and so made a probe test. Whichever way we decide, we can always change the probe test to suit, and then at least we'd have a test to keep it correct ;)09:18
PavelKmattoliverau: I understand that you do not hate it :-). I agree that it needs to be discussed more.09:25
*** peterlisak has quit IRC09:28
*** onovy has quit IRC09:28
*** psachin has quit IRC09:36
*** peterlisak has joined #openstack-swift09:39
openstackgerritRomain LE DISEZ proposed openstack/swift master: Fix encoding issue in ssync_sender.send_put()  https://review.openstack.org/45211209:39
rledisezhello all, can someone have a look at ^ please?09:40
rledisezit fixes https://bugs.launchpad.net/swift/+bug/1678018 which seems like a critical issue in the reconstructor09:40
openstackLaunchpad bug 1678018 in OpenStack Object Storage (swift) "ssync_sender encoding issue" [Undecided,New]09:40
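
The class of bug involved, illustrated in isolation on Python 2 (which Swift ran on at the time; this is not the actual patch): URL-quoting a unicode path containing non-ASCII characters fails, while quoting its UTF-8 bytes is safe.

    # -*- coding: utf-8 -*-
    from urllib import quote  # Python 2

    path = u'/a/c/' + u'sn\u00f6'      # object name with a non-ASCII char

    print quote(path.encode('utf-8'))  # safe: quote UTF-8-encoded bytes
    # quote(path)                      # raises KeyError: the unicode char
    #                                  # isn't in the byte-keyed safe map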
*** onovy has joined #openstack-swift09:43
*** psachin has joined #openstack-swift09:54
*** sputnik13 has joined #openstack-swift10:03
*** jarbod__ has joined #openstack-swift10:06
openstackgerritGábor Antal proposed openstack/swift master: Use more specific asserts in test/unit/container  https://review.openstack.org/34280810:10
*** sams-gleb has quit IRC10:12
*** sams-gleb has joined #openstack-swift10:12
*** sams-gleb has quit IRC10:16
*** zhurong has quit IRC10:18
*** zhurong has joined #openstack-swift10:41
*** psachin has quit IRC10:50
*** psachin has joined #openstack-swift10:52
*** hseipp has quit IRC11:02
*** JimCheung has joined #openstack-swift11:03
*** JimCheung has quit IRC11:08
*** sams-gleb has joined #openstack-swift11:14
*** hseipp has joined #openstack-swift11:26
*** vint_bra has joined #openstack-swift11:32
*** kei_yama has quit IRC11:51
*** zhurong has quit IRC11:59
*** SkyRocknRoll has joined #openstack-swift12:05
*** SkyRocknRoll has quit IRC12:06
*** SkyRocknRoll has joined #openstack-swift12:07
*** vint_bra has quit IRC12:21
*** SkyRocknRoll has quit IRC12:22
*** catintheroof has joined #openstack-swift12:36
*** chlong has joined #openstack-swift12:40
*** hseipp has quit IRC12:41
*** hseipp has joined #openstack-swift12:41
*** ouchkernel has quit IRC12:51
*** HW-Peter has joined #openstack-swift12:55
*** HW-Peter has quit IRC12:56
*** HW-Peter has joined #openstack-swift12:56
*** gkadam has quit IRC12:57
*** PavelK has quit IRC13:05
jrichlirledisez: I can take a look.  I can't do it in the first half of my day, but can start on it in the afternoon.13:07
*** PavelK has joined #openstack-swift13:25
*** vint_bra has joined #openstack-swift13:26
*** adu has joined #openstack-swift13:27
*** mariusv_ has quit IRC13:34
*** mariusv has joined #openstack-swift13:35
*** mariusv has joined #openstack-swift13:35
*** PavelK has quit IRC13:46
rledisezjrichli: thx :)13:51
*** amoralej is now known as amoralej|lunch13:54
*** Dinesh_Bhor has quit IRC13:55
*** JimCheung has joined #openstack-swift14:03
*** geaaru has quit IRC14:04
*** ouchkernel has joined #openstack-swift14:07
*** JimCheung has quit IRC14:08
*** jamielennox is now known as jamielennox|away14:16
*** geaaru has joined #openstack-swift14:17
*** jroll has quit IRC14:32
*** jroll has joined #openstack-swift14:34
*** amoralej|lunch is now known as amoralej14:36
*** vinsh has joined #openstack-swift14:52
*** rcernin has quit IRC15:08
*** chsc has joined #openstack-swift15:11
*** chsc has joined #openstack-swift15:11
*** psachin has quit IRC15:13
*** klrmn has joined #openstack-swift15:13
*** chsc has quit IRC15:27
*** chsc has joined #openstack-swift15:27
*** chsc has joined #openstack-swift15:27
*** jaosorior has quit IRC15:28
*** jojden has quit IRC15:30
*** klrmn has quit IRC15:35
*** pcaruana has quit IRC15:46
*** jordanP has quit IRC15:49
*** _JZ_ has joined #openstack-swift16:06
*** sams-gleb has quit IRC16:06
*** hseipp has quit IRC16:20
*** JimCheung has joined #openstack-swift16:24
*** sams-gleb has joined #openstack-swift16:28
*** geaaru has quit IRC16:47
*** oshritf has joined #openstack-swift16:53
*** cbartz has quit IRC16:54
*** klrmn has joined #openstack-swift16:56
*** gyee has joined #openstack-swift16:59
*** MVenesio has joined #openstack-swift17:18
MVenesioHi guys, I've installed a Swift cloud for a client with 1 region, 3 zones, 3 servers, and 3 copies configured using replication. The particularity of these servers is that one of them has fewer disks and less space to store partitions than the others. While I'm testing the data placement and replication, I'm finding that servers #1 and #3 are storing the 3 replicas of all objects, as is expected since those two servers have17:19
MVenesiomore disks and space. The question is: when will the system start to place data on server #2 as well? Is it related to the amount of space or partitions used, or something else?17:19
notmynameMVenesio: start with the dispersion and overload sections on https://docs.openstack.org/developer/swift/overview_ring.html17:20
MVenesio@notmyname thanks man17:21
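
The linked doc covers dispersion and overload in depth; in practice the knobs live in swift-ring-builder (the builder file name below is just an example):

    # show how well replicas are dispersed across regions/zones/servers
    swift-ring-builder object.builder dispersion -v

    # let devices take up to 10% more than their weight-proportional share,
    # so replicas can land on the smaller server despite its lower weight
    swift-ring-builder object.builder set_overload 0.1
    swift-ring-builder object.builder rebalance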
*** bikmak has quit IRC17:22
*** klrmn has quit IRC17:22
*** klrmn has joined #openstack-swift17:24
*** chsc has quit IRC17:29
*** oshritf has quit IRC17:31
*** ChubYann has joined #openstack-swift17:35
clayghello all17:41
notmynameclayg: hello17:59
*** chsc has joined #openstack-swift18:19
*** chsc has joined #openstack-swift18:19
claygnotmyname: https://wiki.openstack.org/wiki/Swift/PriorityReviews is out of date18:19
notmynameindeed18:20
*** tesseract has quit IRC18:21
notmynameclayg: reload. not helpful, but more correct18:24
clayglol18:25
claygyeah much better18:25
*** klrmn has quit IRC18:53
*** amoralej is now known as amoralej|off18:54
*** HW-Peter has quit IRC19:06
*** MVenesio has quit IRC19:10
*** hseipp has joined #openstack-swift19:11
*** klrmn has joined #openstack-swift19:33
*** hseipp has quit IRC19:45
*** klrmn has quit IRC19:45
*** klrmn has joined #openstack-swift19:55
-openstackstatus- NOTICE: lists.openstack.org will be offline from 20:00 to 23:00 UTC for planned upgrade maintenance19:59
*** klrmn has quit IRC20:05
*** stradling has joined #openstack-swift20:10
*** stradling has quit IRC20:42
*** klrmn has joined #openstack-swift21:16
*** mvk has quit IRC21:18
*** vinsh has quit IRC21:36
*** klrmn has quit IRC21:40
*** klrmn has joined #openstack-swift21:40
-openstackstatus- NOTICE: The upgrade maintenance for lists.openstack.org has been completed and it is back online.21:51
*** vint_bra has quit IRC21:55
*** sams-gleb has quit IRC22:20
*** sams-gleb has joined #openstack-swift22:20
*** sams-gleb has quit IRC22:21
*** klrmn has quit IRC22:24
*** catintheroof has quit IRC22:36
*** _JZ_ has quit IRC23:04
*** sams-gleb has joined #openstack-swift23:21
*** sams-gleb has quit IRC23:27
*** klrmn has joined #openstack-swift23:37
*** chsc has quit IRC23:51
