Tuesday, 2015-01-06

00:00 *** jmcbride has joined #openstack-dns
00:15 *** ryanpetrello has quit IRC
00:35 *** GonZo2K has joined #openstack-dns
00:44 *** EricGonczer_ has joined #openstack-dns
00:45 *** pk has quit IRC
00:47 *** EricGonc_ has joined #openstack-dns
00:49 *** EricGonczer_ has quit IRC
00:49 *** ryanpetrello has joined #openstack-dns
00:59 *** Stanley00 has joined #openstack-dns
01:03 *** rmoe has quit IRC
01:18 *** rmoe has joined #openstack-dns
01:18 *** pk has joined #openstack-dns
01:27 *** pk has quit IRC
01:47 *** stanzgy has joined #openstack-dns
01:50 *** nosnos has joined #openstack-dns
01:53 *** penick has joined #openstack-dns
01:54 *** penick_ has joined #openstack-dns
01:57 *** penick has quit IRC
01:57 *** penick_ is now known as penick
02:08 *** EricGonczer_ has joined #openstack-dns
02:10 *** EricGonc_ has quit IRC
02:14 *** ryanpetrello has quit IRC
02:18 *** GonZo2K has quit IRC
02:23 *** zhang_liang__ has quit IRC
02:26 *** pk has joined #openstack-dns
02:32 *** pk has quit IRC
02:33 *** pk has joined #openstack-dns
02:35 *** rjrjr has quit IRC
02:38 *** vinod has joined #openstack-dns
03:05 *** jmcbride has quit IRC
03:10 *** jmcbride has joined #openstack-dns
03:12 *** jmcbride has quit IRC
03:24 *** pk has quit IRC
03:34 *** nosnos has quit IRC
03:38 *** ryanpetrello has joined #openstack-dns
03:42 *** EricGonczer_ has quit IRC
03:44 *** richm has quit IRC
03:45 *** EricGonczer_ has joined #openstack-dns
03:47 *** EricGonczer_ has quit IRC
03:48 *** vinod has quit IRC
03:51 *** penick has quit IRC
03:54 *** penick has joined #openstack-dns
03:55 *** ryanpetrello has quit IRC
04:03 *** GonZo2K has joined #openstack-dns
04:29 *** nosnos has joined #openstack-dns
04:34 *** pk has joined #openstack-dns
04:39 *** pk has quit IRC
04:43 *** penick has quit IRC
05:07 *** GonZo2K has quit IRC
05:24 *** k4n0 has joined #openstack-dns
05:25 *** k4n0 has quit IRC
06:36 *** stanzgy has quit IRC
06:37 *** stanzgy has joined #openstack-dns
07:49 *** chlong has quit IRC
07:49 *** chlong has joined #openstack-dns
07:51 *** chlong has quit IRC
07:53 *** chlong has joined #openstack-dns
09:06 *** jordanP has joined #openstack-dns
10:20 *** Stanley00 has quit IRC
10:54 *** stanzgy has quit IRC
11:00 *** eandersson has joined #openstack-dns
11:02 *** untriaged-bot has joined #openstack-dns
11:02 <untriaged-bot> Untriaged bugs so far:
11:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1403267
11:02 <uvirtbot> Launchpad bug 1403267 in designate "create_domain should handle status asynchronously" [Undecided,New]
11:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1404395
11:02 <uvirtbot> Launchpad bug 1404395 in designate "Pool manager attempts to periodically sync *all* zones" [Undecided,New]
11:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1406414
11:02 <uvirtbot> Launchpad bug 1406414 in designate "Delete zone fails to propagate to all (Bind) nameservers in a pool depending on threshold_percentage" [Undecided,New]
11:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1403591
11:02 <uvirtbot> Launchpad bug 1403591 in designate "A ZeroDivisionError is Thrown Without Servers" [Undecided,New]
11:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1289444
11:02 <uvirtbot> Launchpad bug 1289444 in designate "Designate with postgres backend is having issues" [Undecided,New]
11:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1404529
11:02 <uvirtbot> Launchpad bug 1404529 in designate "DynECT is called twice when any domain action happens." [Undecided,Confirmed]
11:02 *** untriaged-bot has quit IRC
11:12 <eandersson> Kiall: So, I tried to start a second instance on a second server today, and even with pymysql it gets stuck in a deadlock again when running multiple workers. So make sure that on master you try my script with multiple workers (or instances) running.
11:12 <eandersson> I can only get it working with a single worker and pymysql.
12:46 *** ryanpetrello has joined #openstack-dns
12:48 *** mwagner_lap has quit IRC
13:29 <Kiall> eandersson: heya
13:29 <Kiall> So - with multiple separate processes (i.e. not workers), you're still seeing that deadlock?
13:31 <Kiall> And - are these database deadlocks, or code deadlocks? (The original issue was code deadlocks, from memory)
13:35 *** jmcbride has joined #openstack-dns
13:37 *** jmcbride has quit IRC
13:40 *** mwagner_lap has joined #openstack-dns
13:44 *** artom has joined #openstack-dns
13:56 <openstackgerrit> Kiall Mac Innes proposed openstack/designate: Implement default page size for V2 API  https://review.openstack.org/142505
13:59 <eandersson> Kiall: Basically I think this is a different issue.
13:59 <eandersson> So I had a single instance with a single worker using pymysql. It worked under heavy load, no issues.
14:00 <eandersson> I added a second instance on a different server with the same setup, and now I see exceptions.
14:00 <eandersson> I stopped one of the instances and it works again.
14:00 <Kiall> Okay, do you have a stacktrace of one of the exceptions?
14:00 <eandersson> yep, getting it now
14:01 <Kiall> I expect it's a real database deadlock this time (yay for the "serial" column -_-)
14:01 <eandersson> RemoteError: Remote error: DBDeadlock (InternalError) (1213, u'Deadlock found when trying to get lock; try restarting transaction') 'UPDATE domains SET updated_at=%s, serial=%s WHERE domains.id = %s AND domains.deleted = %s' (datetime.datetime(2015, 1, 6, 11, 6, 7, 967861), 1420542370, '684a6bcfd9ab4c678824a83bb9d021f9', '0')
14:01 <eandersson> I'll see if I can find the formatted exception.
14:01 <eandersson> (Since that was taken from the sink)
14:01 <Kiall> No need - it's DB contention on the serial column, as I thought
14:02 <Kiall> What version are you running again?
14:02 <Kiall> icehouse?
14:02 <eandersson> Yea, should be.
14:02 <ekarlso-> heya :P
14:03 <eandersson> I am gonna try to get it set up in the lab, so I can do some proper troubleshooting.
14:03 <eandersson> I only have one node there, so never experienced it during testing.
14:04 <Kiall> hah - yea, production has a way of showing up errors your lab can't ;)
14:04 <Kiall> So - back when you first mentioned the issues you're having.. I wrote this up: https://review.openstack.org/#/c/134524/
14:04 *** nosnos has quit IRC
14:05 <Kiall> I think we can fix that up, and it should solve the issue
14:05 <eandersson> oh, so it will simply retry a couple of times until it works?
14:06 <Kiall> Yep - that's the only thing you can do with a real database deadlock (kinda) - two queries are trying to change the same value in the DB at once, and only one can succeed
14:06 <Kiall> (The "kinda" is around the fact that we could find a way to remove the single per-domain serial column, also avoiding the deadlock)
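The approach Kiall describes - rerun the whole transaction when the database reports a deadlock, since only one of the competing writers can win - can be sketched roughly as a decorator. This is a hypothetical illustration, not the code from review 134524: the DBDeadlock stand-in class, retry count, and backoff values are all assumptions.

```python
import functools
import time


class DBDeadlock(Exception):
    """Stand-in for the DB driver's deadlock exception (e.g. MySQL error 1213)."""


def retry_on_deadlock(retries=3, delay=0.05):
    """Retry the decorated transaction when the DB reports a deadlock.

    The transaction is re-run from the top; on the final attempt the
    exception propagates to the caller.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapped(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    if attempt == retries - 1:
                        raise
                    # Back off a little before restarting the transaction.
                    time.sleep(delay * (2 ** attempt))
        return wrapped
    return decorator
```

The key property is that the *entire* transaction body is re-executed, matching the driver's advice to "try restarting transaction".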
14:07 <eandersson> Perfect.
14:07 <Kiall> Let me try to write that patch up against icehouse, and see if it works
14:07 <eandersson> I'll have Internet back tomorrow, so I'll upload the patches I apply for review.
14:08 <Kiall> Ah - well, check that review.. It should be updated tomorrow for master, and there may be a backport if I can get time before then :)
14:08 <eandersson> Sounds good. :)
14:10 *** richm has joined #openstack-dns
14:15 <openstackgerrit> Kiall Mac Innes proposed openstack/designate: Retry transactions on database deadlocks  https://review.openstack.org/134524
14:15 <Kiall> eandersson: ^ should in theory do it (for master..)
14:17 <Kiall> Not sure how easy a backport will be though.. as always :'(
14:17 <eandersson> I am done backporting now, gonna grab a coffee and test it out. :D
14:18 <Kiall> lol - not possible ;)
14:22 <Kiall> eandersson: In theory, this is a backport.. https://review.openstack.org/145233
14:22 <Kiall> totally untested
14:35 *** betsy has joined #openstack-dns
14:40 *** EricGonczer_ has joined #openstack-dns
14:51 *** vinod has joined #openstack-dns
14:52 *** vipul has quit IRC
14:52 *** vipul has joined #openstack-dns
14:52 *** timsim has joined #openstack-dns
14:54 *** vinod has quit IRC
14:55 *** vinod has joined #openstack-dns
14:58 *** jmcbride has joined #openstack-dns
14:58 *** jmcbride has quit IRC
14:58 *** jmcbride has joined #openstack-dns
14:59 <eandersson> Ok, so clearly I am a little lost after weeks of vacation.
15:01 <eandersson> I am indeed running Icehouse.... for everything besides Designate, which is running Juno. :D
15:02 <eandersson> The first patch you linked pretty much works out of the box on Juno, as it has def transaction(f):
15:03 *** artom has quit IRC
15:03 <eandersson> (hides in a corner)
15:03 <eandersson> btw hey ekarlso-! :D
15:04 <ekarlso-> eandersson: hey, man :)
15:05 <eandersson> Happy New Year!
15:05 <ekarlso-> :D
15:05 <ekarlso-> let's speak Swedish instead of English, eandersson :)
15:05 <ekarlso-> they won't understand a thing :P
15:06 <Kiall> eandersson: ah, makes sense :)
15:06 <Kiall> eandersson: BTW - seems to be some sort of issue with the patch still, but I'm wondering if it's an issue with our unit tests rather than anything else...
15:06 <eandersson> So it did retry, and initially it looked good :D
15:06 <ekarlso-> Kiall: don't you speak Swedish? ;)
15:07 <eandersson> RemoteError: Remote error: DBError (InternalError) (1364, u"Field 'data' doesn't have a default value") 'INSERT INTO records (id, version, created_at, managed, status) VALUES (%s, %s, %s, %s, %s)' ('c9448843365f48beac5dec0ce0be7a36', 1, datetime.datetime(2015, 1, 6, 14, 57, 7, 358400), 0, 'ACTIVE')
15:07 <Kiall> ekarlso-: Do you speak Irish? [in Irish]
15:07 <ekarlso-> Kiall: :P
15:07 <eandersson> I am not sure if that is caused by something else lol
15:08 <Kiall> eandersson: Humm, different issue!
15:08 <ekarlso-> +1 to Kiall on that, data doesn't seem to be populated
15:08 <eandersson> Never seen it before the patch though. Maybe I messed something up.
15:09 <ekarlso-> swedes -,,-
15:09 <eandersson> cannot be trusted!
15:09 <Kiall> Nah, it sounds like it could be caused by the retry.. Not 100% sure yet.. Writing unit tests at the moment
15:09 <eandersson> Massive traceback :D
15:09 <eandersson> but yea, it's caused by the retry
15:10 <eandersson> I see it hitting the retry, and then it throws this.
15:10 <eandersson> *it may be caused by the retry
15:11 <Kiall> I'm betting some state is lost after the retry -_-
15:12 <openstackgerrit> Kiall Mac Innes proposed openstack/designate: Retry transactions on database deadlocks  https://review.openstack.org/134524
15:12 <Kiall> ^ has 2 new tests for the retry, one succeeds, one fails..
15:12 *** nkinder has joined #openstack-dns
15:12 <Kiall> (It just verifies the retry is attempted for now, not that the retry eventually succeeds)
15:15 *** vinod has quit IRC
15:16 *** vinod has joined #openstack-dns
15:18 *** artom has joined #openstack-dns
15:20 <Kiall> eandersson: so, I see my bug... still not sure about the "data doesn't have a value" thing
15:29 <eandersson> So it fails at increment. Is the issue that it tries to insert the data again?
15:29 <eandersson> Since create_record will be called twice as well?
15:29 <Kiall> Your issue seems to be that it "loses" its data the second go around..
15:30 <eandersson> Yea. It makes no sense.
15:30 <Kiall> Mine is that a call like update_recordset, which has nested TXs, will roll back the inner TX and retry - which is an error.
15:30 <Kiall> We have to bail all the way back out to the initial call, and retry from there.. Just can't see how yet ;)
15:32 <Kiall> I can see a fix that involves reintroducing the code deadlock we just fixed.. lol
15:43 <eandersson> haha
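The nested-transaction problem Kiall describes - retrying only the inner TX re-runs part of the work, so the retry has to bail all the way out to the outermost call - can be sketched with depth tracking. This is a hypothetical illustration of the problem, not Designate's actual code; the `transaction` decorator, depth counter, and `DBDeadlock` stand-in are assumptions.

```python
import threading

_local = threading.local()


class DBDeadlock(Exception):
    """Stand-in for the DB driver's deadlock exception."""


def transaction(fn):
    """Wrap fn in a 'transaction' that only retries at the outermost level.

    A nested (inner) transaction must not retry locally: re-running only
    part of the work loses state, so the exception propagates until the
    outermost call can re-run everything from the top.
    """
    def wrapped(*args, **kwargs):
        depth = getattr(_local, "depth", 0)
        _local.depth = depth + 1
        try:
            for attempt in range(3):
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    # Inner TX, or out of attempts: bail out to the caller.
                    if depth > 0 or attempt == 2:
                        raise
        finally:
            _local.depth = depth
    return wrapped
```

When an inner call deadlocks, the whole outer transaction body is re-executed, so no per-call state is lost between attempts.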
15:50 *** EricGonc_ has joined #openstack-dns
15:50 <eandersson> The good thing with all of this is that I am getting more and more familiar with the source code :p
15:51 <openstackgerrit> Betsy Luzader proposed openstack/designate: Migrate Server table  https://review.openstack.org/136440
15:52 *** barra204_ has joined #openstack-dns
15:53 *** EricGonczer_ has quit IRC
15:53 *** barra204_ is now known as shakamunyi
16:01 *** paul_glass has joined #openstack-dns
16:02 *** paul_glass1 has joined #openstack-dns
16:02 *** paul_glass1 has quit IRC
16:05 *** paul_glass has quit IRC
16:13 *** mikedillion has joined #openstack-dns
16:14 *** mikedillion has quit IRC
16:14 <openstackgerrit> Kiall Mac Innes proposed openstack/designate: Retry transactions on database deadlocks  https://review.openstack.org/134524
16:31 <Kiall> eandersson: as near as I can tell, the "data is empty" issue doesn't happen on master.. Not sure why, or if my tests are just flawed though..
16:35 <openstackgerrit> Kiall Mac Innes proposed openstack/designate: Retry transactions on database deadlocks  https://review.openstack.org/134524
16:49 *** rjrjr has joined #openstack-dns
17:02 *** untriaged-bot has joined #openstack-dns
17:02 <untriaged-bot> Untriaged bugs so far:
17:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1403267
17:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1404395
17:02 <uvirtbot> Launchpad bug 1403267 in designate "create_domain should handle status asynchronously" [Undecided,New]
17:02 <uvirtbot> Launchpad bug 1404395 in designate "Pool manager attempts to periodically sync *all* zones" [Undecided,New]
17:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1406414
17:02 <uvirtbot> Launchpad bug 1406414 in designate "Delete zone fails to propagate to all (Bind) nameservers in a pool depending on threshold_percentage" [Undecided,New]
17:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1403591
17:02 <uvirtbot> Launchpad bug 1403591 in designate "A ZeroDivisionError is Thrown Without Servers" [Undecided,New]
17:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1289444
17:02 <uvirtbot> Launchpad bug 1289444 in designate "Designate with postgres backend is having issues" [Undecided,New]
17:02 <untriaged-bot> https://bugs.launchpad.net/designate/+bug/1404529
17:02 <uvirtbot> Launchpad bug 1404529 in designate "DynECT is called twice when any domain action happens." [Undecided,Confirmed]
17:02 *** untriaged-bot has quit IRC
17:04 *** artom has quit IRC
17:13 *** mikedillion has joined #openstack-dns
17:19 <eandersson> I'll run some tests tomorrow, and try to figure out why it is happening. :D
17:19 <Kiall> I may have just reproduced it.. Maybe...
17:22 *** betsy has quit IRC
17:27 <rjrjr> timsim: you on?
17:27 <rjrjr> vinod: you on?
17:27 <timsim> rjrjr: yep
17:28 <rjrjr> okay, i looked at the code you submitted. i have a better fix for it, which also fixes other things. i will be pushing that up shortly. in a nutshell, i'm fixing it with the fix for https://bugs.launchpad.net/designate/+bug/1403267
17:28 <uvirtbot> Launchpad bug 1403267 in designate "create_domain should handle status asynchronously" [Undecided,New]
17:28 <rjrjr> also, about removing the create and delete statuses...
17:28 <rjrjr> i think we should hold off on removing the create statuses
17:29 <rjrjr> we might need that information when we allow for new servers to be added.
17:29 <rjrjr> i do agree, however, that once a domain is deleted, we should remove the create, update, and delete statuses.
17:29 *** mikedillion has quit IRC
17:30 <rjrjr> will that be okay with you? (about the statuses)
17:30 <timsim> I thought the whole 'add a new server, sync everything' process was going to be like, hey central, dump your state and call out to do all these things, or something
17:31 <timsim> Or more generally: hey new server (or old server), what's your state, this is my state, make the necessary changes.
17:31 <rjrjr> we'll want to calculate consensus when we add a new server. if the threshold is 100% and we add a new server and it fails...
17:33 <rjrjr> we already have the logic to calculate consensus for creation. i don't think we need to add new logic for the same thing on an add by removing the status.
17:33 <rjrjr> honestly, this isn't causing any issues.
17:33 <timsim> If you've got millions of zones, keeping those around might be an issue.
17:34 <timsim> Wouldn't you pop a new status for creation of a zone when you add a new server, and then calculate consensus for the pool anyway?
17:35 <rjrjr> we are talking about 3 rows per zone (create, update, and delete status)
17:36 *** rmoe has quit IRC
17:39 <Kiall> Well - isn't it the "pool manager cache"? So, it should be safe to lose any value from it?
17:39 <Kiall> So - clearing should be safe+fine to do
17:40 <Kiall> If a value isn't present, I think it should be recreatable...
17:41 <timsim> Agreed, it kind of seemed like these values would represent one change or one logical group of changes, and when the changes were finished, they'd be deleted. The only things that would remain in the cache were pending or error'd changes.
17:41 <Kiall> timsim: ++
17:47 <timsim> Aren't there multiple create statuses for each zone if you've got more than 1 server backend?
17:47 <rjrjr> sorry, you are correct, one of each status for each server.
17:48 <rjrjr> i still need the update status for each zone per server, however.
17:48 <timsim> I'm ok with that, as long as they're deleted when they're finished. Because in theory, if you've got one entry per server per change, you can retry only the ones that need retrying.
17:49 <rjrjr> i use that to calculate consensus.
17:49 *** vinod has quit IRC
17:49 <rjrjr> we would leave the update status, though.
17:49 <rjrjr> let me explain
17:50 <rjrjr> i add a record
17:50 <rjrjr> 1 server is at serial number 2
17:50 <rjrjr> 1 server is at serial number 3
17:50 <rjrjr> 1 server is at serial number 4
17:50 <rjrjr> notifies go out and 2 servers don't respond.
17:51 <rjrjr> but the server with serial number 2 does respond, and it is now at serial number 5
17:51 <rjrjr> the new consensus is now 3 for 100%
17:51 <rjrjr> i need the old serial numbers to calculate that.
17:52 <rjrjr> man, this is not fun to explain in chat...
17:52 <timsim> Would you wait to mark the zone active until they're all at 5?
17:52 <rjrjr> it isn't a zone thing.
17:52 <rjrjr> it is marking records as active
17:53 <timsim> Alright, the records then. Doesn't the zone get marked pending when you make that change, though?
17:53 <rjrjr> any record change that occurred at serial number 3 is now active, even though 2 servers didn't respond. they already got the change, because they responded earlier with higher serial numbers.
17:53 <rjrjr> changes for serial numbers 4 and 5 will be pending, however.
17:53 <rjrjr> but all changes for serial number 3 and lower are active.
17:54 <Kiall> rjrjr: so, "the new consensus is now 3 for 100%" <-- this is where the status entries for 3 should be deleted, IMO
17:54 <Kiall> And.. if none exist, or some are missing (it's a cache, after all), they can be re-fetched
17:54 <rjrjr> i have 1 status per server for an update
17:54 <rjrjr> that status includes the serial number returned from the server.
17:55 <rjrjr> i don't store a status for every update. 1 and only 1 per server per zone.
17:55 <rjrjr> otherwise, our table could theoretically be larger than what tim had pointed out.
17:55 <Kiall> Ah.. Humm, so.. deleting them there would bork the consensus for #4
17:55 <rjrjr> correct.
17:56 <rjrjr> think of that status as a watermark. i store the highest serial number returned from a server for that zone.
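The watermark scheme rjrjr describes can be illustrated with a small sketch. This is hypothetical code for illustration only: one "highest serial seen" entry is kept per server per zone, and with a 100% threshold the consensus serial is simply the lowest watermark, so every change at or below it is active.

```python
def consensus_serial(server_serials):
    """Highest serial that ALL servers have reached (100% threshold).

    server_serials maps a server name to the highest serial it has
    acknowledged for the zone; changes at or below the returned value
    can be marked ACTIVE.
    """
    return min(server_serials.values())


# rjrjr's example: three servers at serials 2, 3 and 4. Notifies go
# out; two servers don't respond, but the one that was at 2 responds
# and is now at 5. The consensus rises from 2 to 3, so changes at
# serial 3 and below become active while 4 and 5 stay pending.
serials = {"ns1": 2, "ns2": 3, "ns3": 4}
serials["ns1"] = 5
print(consensus_serial(serials))
```

Keeping only the per-server watermark (rather than a row per change) is what bounds the cache at one entry per server per zone.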
17:57 *** rmoe has joined #openstack-dns
17:57 <Kiall> Yep, I get that now.. So... is the confusion/concern more related to how the pool manager cache is really being treated as a reliable datastore, then? e.g. should we add a memcache driver to help ensure the code doesn't treat it like that?
17:58 <rjrjr> are you suggesting the logic should all be changed?
17:58 <rjrjr> honestly, i designed it this way exactly to address the problem of too large a cache.
17:58 <Kiall> Only if necessary to ensure it's treated as a cache, rather than a persistent store
17:58 *** penick has joined #openstack-dns
17:59 <timsim> I can't operate under the assumption that the Pool Manager Cache will always be there.
17:59 <rjrjr> if we did this the way i heard from others, the logic of removing/adding/etc. would be horrendous.
17:59 <rjrjr> if servers don't respond, the cache could grow without bounds.
18:00 <Kiall> A SQL-based cache without an "expires" would; a traditional cache would clear itself out over time
18:00 *** jordanP has quit IRC
18:00 <timsim> Eventually, when things respond again, those entries will be deleted, and as it catches up, your cache size would get back down toward 0
18:00 <rjrjr> you have a server down for a day and you have a million changes in that day. that cache would be large.
18:01 <timsim> Huge. Yep
18:01 <rjrjr> this sets an upper bound. predictable. manageable.
18:01 <Kiall> rjrjr: cache entries should expire after a short period (say 1 hr)
18:01 <Kiall> after that, if you need the info 6 hours later, you do a SOA query to the nameserver to refresh the info
18:02 <rjrjr> with this design, we can tell a user what to expect for a footprint even when servers are down.
18:02 <timsim> I suppose if you had something down for an hour, then you resync everything anyway?
18:02 <rjrjr> with what you are proposing, you cannot tell the user what the footprint will be.
18:02 <timsim> Kiall: ^
18:03 <timsim> rjrjr: Not sure what you mean by footprint?
18:03 <Kiall> timsim: ideally, your thresholds haven't been hurt by 1 server being down.. so things will recover by themselves, and through the periodic sync to add/remove created/deleted zones..
18:03 <rjrjr> the size of the cache/database. you can easily predict its size based on the number of zones and servers.
18:04 <rjrjr> even when there is an outage.
18:04 <Kiall> rjrjr: sure, so - using an actual cache without an expire would result in a cache of the same size.. add an expire, and that # turns into a max size, rather than an absolute size
18:05 <Kiall> i.e. it wouldn't balloon the size; the # of entries is still fixed. It's just treating the cache as a cache - i.e. tolerating loss of cached values and recreating them on demand when necessary
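Treating the pool manager cache as a true cache, the way Kiall suggests - entries expire, and a missing or expired entry is refreshed on demand (e.g. via an SOA query to the nameserver) - could look roughly like the sketch below. The `ExpiringCache` class and its API are invented for illustration; they are not part of Designate.

```python
import time


class ExpiringCache:
    """TTL cache: missing or expired entries are refreshed on demand.

    The refresh callable stands in for whatever authoritative lookup
    recreates the value (e.g. an SOA query for a zone's serial), so
    losing a cached entry is always recoverable.
    """

    def __init__(self, ttl=3600.0):
        self.ttl = ttl
        self._data = {}  # key -> (value, stored_at)

    def get(self, key, refresh):
        entry = self._data.get(key)
        now = time.monotonic()
        if entry is None or now - entry[1] > self.ttl:
            value = refresh(key)  # re-fetch from the authoritative source
            self._data[key] = (value, now)
            return value
        return entry[0]
```

With an expiry in place, the entry count stays bounded at one per server per zone, but stale state ages out instead of living forever in SQL.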
18:08 <rjrjr> this can still be a cache. it is similar to how NTP's cache works.
18:08 <rjrjr> while it isn't tolerant of a cleaning right now, it could be with a few modifications.
18:09 <rjrjr> regardless, how do you want me to proceed?
18:09 <Kiall> I think you cleared things up for me anyway .. timsim?
18:11 <openstackgerrit> Kiall Mac Innes proposed openstack/designate: Retry transactions on database deadlocks  https://review.openstack.org/134524
18:11 <Kiall> eandersson: In theory, ^ solves the "data is empty" issue
18:11 <rjrjr> timsim?
18:11 <Kiall> timsim was asking Qs too ;)
18:11 <Kiall> Oh - you were pinging him
18:12 <Kiall> not asking why I mentioned him.. lol
18:12 * Kiall goes back to deadlocks
18:14 <rjrjr> kiall, honest opinion: should we redesign the logic for pool manager? i am under a lot of pressure right now to get this work done. i have some very real hard dates i need to meet.
18:15 <Kiall> redesign? I don't think we need to. I think there's some cleanup needed in various places, but not a redesign ..
18:15 <rjrjr> i am working on some of the bug fixes and fixing some of the things you and i talked about. i'll proceed with that until i hear back from timsim about this logic.
18:18 <timsim> Sorry, was getting some food
18:20 <timsim> I don't think it needs to be completely redesigned, but I do think we should talk it all through and see where everybody lands at the mid-cycle.
18:30 <rjrjr> okay. back to bug fixes. :)
18:34 *** pk has joined #openstack-dns
18:48 <ekarlso-> I still haven't figured out how to integrate the secondary stuff with pools -,,-
18:48 <ekarlso-> there were some quirks
19:02 *** penick has quit IRC
19:06 *** penick has joined #openstack-dns
19:07 *** shakamunyi has quit IRC
19:10 *** vinod has joined #openstack-dns
19:18 *** rickerc has joined #openstack-dns
19:25 *** ryanpetrello_ has joined #openstack-dns
19:27 *** ryanpetrello has quit IRC
19:27 *** ryanpetrello_ is now known as ryanpetrello
19:43 *** pk has quit IRC
19:47 *** GonZo2K has joined #openstack-dns
19:51 *** shakamunyi has joined #openstack-dns
19:52 *** achilles has joined #openstack-dns
19:52 *** timsim has quit IRC
19:53 *** achilles has left #openstack-dns
19:54 *** pk has joined #openstack-dns
19:54 *** timsim has joined #openstack-dns
19:55 *** barra204_ has joined #openstack-dns
19:58 *** shakamunyi has quit IRC
20:50 *** jmcbride has quit IRC
20:59 *** jmcbride has joined #openstack-dns
21:18 *** pk_ has joined #openstack-dns
21:18 *** pk_ has quit IRC
21:25 <openstackgerrit> Ron Rickard proposed openstack/designate: Handle create_domain Status Asynchronously in Pool Manager  https://review.openstack.org/145346
21:27 *** russellb has joined #openstack-dns
21:27 <rjrjr> timsim: this patch will fix the bug you were addressing, and another bug kiall and i discussed.
21:29 <timsim> Cool. I'll test it out today/tomorrow morning.
21:30 *** paul_glass1 has joined #openstack-dns
21:37 *** paul_glass1 has quit IRC
21:39 *** barra204_ has quit IRC
21:40 *** barra204_ has joined #openstack-dns
21:52 *** russellb has left #openstack-dns
21:55 *** ryanpetrello_ has joined #openstack-dns
21:56 *** ryanpetrello has quit IRC
21:56 *** ryanpetrello_ is now known as ryanpetrello
21:56 *** penick has quit IRC
22:12 *** barra204_ has quit IRC
22:13 *** barra204 has joined #openstack-dns
22:15 *** mwagner_lap has quit IRC
22:18 *** penick has joined #openstack-dns
22:37 *** jmcbride has quit IRC
22:41 *** timsim has quit IRC
22:56 *** ryanpetrello has quit IRC
23:13 *** vinod has quit IRC
23:14 *** nkinder has quit IRC
23:16 *** EricGonc_ has quit IRC
23:21 *** rickerc has quit IRC
23:21 *** eandersson has quit IRC
23:21 *** ekarlso- has quit IRC
23:21 *** timfreund has quit IRC
23:21 *** gohko has quit IRC
23:22 *** rickerc has joined #openstack-dns
23:22 *** eandersson has joined #openstack-dns
23:22 *** ekarlso- has joined #openstack-dns
23:22 *** timfreund has joined #openstack-dns
23:22 *** gohko has joined #openstack-dns
23:24 *** boris-42 has quit IRC
23:26 *** boris-42 has joined #openstack-dns
23:34 *** penick has quit IRC
23:43 *** barra204 has quit IRC
23:44 *** GonZoPT has joined #openstack-dns
23:45 *** GonZo2K has quit IRC
23:57 *** ryanpetrello has joined #openstack-dns

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!