*** rudrajit has joined #openstack-dns | 00:25 | |
*** richm has quit IRC | 00:50 | |
*** puck has quit IRC | 01:23 | |
*** puck has joined #openstack-dns | 01:26 | |
*** rudrajit has quit IRC | 02:24 | |
*** rudrajit has joined #openstack-dns | 02:26 | |
*** elarson has quit IRC | 02:29 | |
*** sonuk has joined #openstack-dns | 03:09 | |
*** stupidnic has quit IRC | 03:40 | |
*** EricGonczer_ has quit IRC | 03:43 | |
*** stupidnic has joined #openstack-dns | 03:43 | |
*** Alex_Stef has joined #openstack-dns | 04:30 | |
*** stupidnic has quit IRC | 04:35 | |
*** stupidnic has joined #openstack-dns | 04:39 | |
*** Alex_Stef has quit IRC | 06:28 | |
*** Alex_Stef has joined #openstack-dns | 06:55 | |
*** pcaruana has joined #openstack-dns | 07:21 | |
*** ekarlso has quit IRC | 07:29 | |
*** ekarlso has joined #openstack-dns | 07:37 | |
*** lbrune has joined #openstack-dns | 07:49 | |
*** maskarat has joined #openstack-dns | 07:58 | |
openstackgerrit | Davanum Srinivas (dims) proposed openstack/designate: [WIP] Testing latest u-c https://review.openstack.org/318020 | 08:10 |
*** kbyrne has quit IRC | 08:29 | |
*** rudrajit has quit IRC | 08:41 | |
*** GonZo2000 has quit IRC | 08:50 | |
*** penchal has joined #openstack-dns | 08:58 | |
openstackgerrit | qinchunhua proposed openstack/designate: Replace assertDictEqual() with assertEqual() https://review.openstack.org/348733 | 09:03 |
*** amitkqed has quit IRC | 09:04 | |
*** amitkqed has joined #openstack-dns | 09:04 | |
*** GonZo2000 has joined #openstack-dns | 09:07 | |
*** GonZo2000 has quit IRC | 09:07 | |
*** GonZo2000 has joined #openstack-dns | 09:07 | |
*** kbyrne has joined #openstack-dns | 09:21 | |
*** nyechiel has joined #openstack-dns | 09:34 | |
*** simonmcc has quit IRC | 09:44 | |
*** simonmcc has joined #openstack-dns | 09:47 | |
*** GonZo2000 has quit IRC | 10:02 | |
*** amit213 has quit IRC | 10:10 | |
*** amit213 has joined #openstack-dns | 10:12 | |
*** serverascode has quit IRC | 10:15 | |
*** bauruine has quit IRC | 10:16 | |
*** serverascode has joined #openstack-dns | 10:17 | |
*** bauruine has joined #openstack-dns | 10:24 | |
*** kbyrne has quit IRC | 11:11 | |
*** kbyrne has joined #openstack-dns | 11:12 | |
openstackgerrit | Graham Hayes proposed openstack/designate: Revert 372057bddb27716acd42a88591552a8dee7b519b https://review.openstack.org/350206 | 11:13 |
*** abalutoiu has joined #openstack-dns | 11:20 | |
*** mehul has joined #openstack-dns | 11:31 | |
*** mehul has quit IRC | 11:32 | |
*** ducttape_ has joined #openstack-dns | 12:06 | |
*** leitan has joined #openstack-dns | 12:27 | |
*** trondham has joined #openstack-dns | 12:27 | |
openstackgerrit | Merged openstack/designate-specs: Fix typo in zone-exists-event.rst https://review.openstack.org/347650 | 12:45 |
*** ducttape_ has quit IRC | 13:11 | |
*** GonZo2000 has joined #openstack-dns | 13:22 | |
*** EricGonczer_ has joined #openstack-dns | 13:41 | |
*** richm has joined #openstack-dns | 13:43 | |
*** lbrune1 has joined #openstack-dns | 13:49 | |
*** lbrune has quit IRC | 13:51 | |
*** GonZo2000 has quit IRC | 13:59 | |
*** krot_sickleave is now known as krotscheck | 14:00 | |
*** ducttape_ has joined #openstack-dns | 14:06 | |
*** EricGonczer_ has quit IRC | 14:16 | |
*** EricGonczer_ has joined #openstack-dns | 14:17 | |
*** mlavalle has joined #openstack-dns | 14:18 | |
*** penchal has quit IRC | 14:19 | |
openstackgerrit | Merged openstack/designate: Revert 372057bddb27716acd42a88591552a8dee7b519b https://review.openstack.org/350206 | 14:27 |
*** EricGonczer_ has quit IRC | 14:31 | |
*** EricGonczer_ has joined #openstack-dns | 14:31 | |
openstackgerrit | Tim Simmons proposed openstack/designate-tempest-plugin: Test that updating recordset TTL only modifies TTL https://review.openstack.org/350235 | 14:39 |
openstackgerrit | Tim Simmons proposed openstack/designate: Fix recordset changes so that they preserve object changes fields https://review.openstack.org/350621 | 14:40 |
openstackgerrit | Tim Simmons proposed openstack/designate: Make notifications pluggable https://review.openstack.org/348535 | 14:41 |
*** pglass has joined #openstack-dns | 14:46 | |
*** lbrune1 has quit IRC | 14:57 | |
*** lbrune has joined #openstack-dns | 14:57 | |
*** lbrune has quit IRC | 14:58 | |
*** lbrune has joined #openstack-dns | 14:59 | |
*** lbrune has quit IRC | 14:59 | |
*** lbrune has joined #openstack-dns | 15:01 | |
*** leitan has quit IRC | 15:06 | |
*** leitan has joined #openstack-dns | 15:09 | |
*** rudrajit has joined #openstack-dns | 15:40 | |
*** james_li has joined #openstack-dns | 15:44 | |
*** rudrajit has quit IRC | 15:49 | |
maskarat | hi, I am noticing that my machines lose their external DNS domain after a reboot | 15:52 |
maskarat | is this "normal"? | 15:52 |
maskarat | [centos@testvm1 ~]$ cat /etc/resolv.conf | 15:55 |
maskarat | ; generated by /usr/sbin/dhclient-script | 15:55 |
maskarat | search openstacklocal | 15:55 |
maskarat | nameserver 10.10.10.10 | 15:55 |
maskarat | although I have defined a domain in my neutron.conf, it seems it is reverting back to default behaviour | 15:56 |
maskarat | with openstacklocal as the domain name | 15:56 |
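The `openstacklocal` search domain in the resolv.conf above is neutron's built-in default. A hedged sketch of the override being described (the `dns_domain` option name is real; the domain value here is a placeholder), assuming the neutron services, in particular the DHCP agent, are restarted after the change so dhclient stops handing out the default:

```ini
# neutron.conf -- "openstacklocal" is the default dns_domain; instances
# fall back to it whenever this override is not actually in effect,
# e.g. if the DHCP agent was never restarted with the new config.
[DEFAULT]
dns_domain = example.internal.
```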
*** lbrune has quit IRC | 16:06 | |
*** dxu_ has joined #openstack-dns | 16:08 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/designate: Updated from global requirements https://review.openstack.org/348619 | 16:08 |
*** leitan has quit IRC | 16:20 | |
*** leitan has joined #openstack-dns | 16:32 | |
*** abalutoiu has quit IRC | 16:37 | |
greghaynes | mugsie: hey, you around to chat perf? | 16:39 |
mugsie | I am, but our IRC meeting is in 15 mins | 16:44 |
mugsie | I will be back at about 18:00 UTC ? | 16:45 |
greghaynes | works for me | 16:45 |
mugsie | cool | 16:45 |
*** krotscheck is now known as kro_focused | 16:53 | |
*** rudrajit has joined #openstack-dns | 16:58 | |
*** pcaruana has quit IRC | 17:00 | |
*** jmcbride has joined #openstack-dns | 17:01 | |
*** nyechiel has quit IRC | 17:04 | |
*** openstackgerrit_ has joined #openstack-dns | 17:06 | |
*** openstackgerrit_ has quit IRC | 17:08 | |
*** Alex_Stef has quit IRC | 17:13 | |
*** james_li has quit IRC | 17:14 | |
*** pglass has quit IRC | 17:15 | |
*** lbrune has joined #openstack-dns | 17:16 | |
openstackgerrit | Tim Simmons proposed openstack/designate: Change bind -> bind9 in docs, sample configs https://review.openstack.org/350698 | 17:18 |
*** pglass has joined #openstack-dns | 17:20 | |
*** GonZo2000 has joined #openstack-dns | 17:22 | |
*** abalutoiu has joined #openstack-dns | 17:25 | |
*** GonZo2000 has quit IRC | 17:35 | |
*** pglass has quit IRC | 17:35 | |
*** maskarat has quit IRC | 17:35 | |
*** lbrune has quit IRC | 17:45 | |
*** lbrune has joined #openstack-dns | 17:49 | |
*** EricGonc_ has joined #openstack-dns | 17:55 | |
*** EricGonczer_ has quit IRC | 17:57 | |
*** haplo37__ has joined #openstack-dns | 17:59 | |
mugsie | timsim: greghaynes around? | 18:01 |
timsim | o/ | 18:01 |
greghaynes | ohai | 18:01 |
mugsie | hey | 18:01 |
mugsie | so timsim did the perf testing post the ML thread | 18:02 |
greghaynes | So, what I was testing was a very small zone transfer getting hit with a ton of requests, I'm curious how that differs from you all's tests | 18:02 |
mugsie | we were testing zones that were big enough to start using TCP afaik | 18:02 |
mugsie | is that right timsim ? | 18:02 |
greghaynes | and I have some theories on why we might not match, but need some info to figure out if they are correct | 18:02 |
timsim | Yeah I think it was about 2k recordsets | 18:03 |
mugsie | so we have 2 kinds of usual queries - the small light weight SOA one, and heavier largish zones | 18:03 |
mugsie | SOA can definitely be improved, with just local caching of wire format, as that query will be done repeatedly | 18:04 |
mugsie | especially on zones that do not change much | 18:04 |
greghaynes | right. So my thinking was pretty simple - the bits where the process isn't blocking on i/o are the only areas where there can possibly be a perf diff between languages, and the larger the zone, the more it should be an i/o bound issue unless there's something horribly different about encoding between languages | 18:05 |
greghaynes | which, there really shouldn't be | 18:05 |
greghaynes | all that to say - if zone size makes a difference then theres something horribly wrong either in encoding or design | 18:06 |
mugsie | we use dnspython for encoding | 18:06 |
mugsie | which could be a culprit | 18:06 |
timsim | Which is pure Python right? | 18:06 |
greghaynes | and I suspect whats actually going on is either running with a ton of threads so thread starvation issues are coming in to play due to the gil, or $something_silly_with_encoding | 18:07 |
mugsie | eh, i think so | 18:07 |
*** dxu_ has quit IRC | 18:07 | |
greghaynes | yea, but its a single pass encode, it still should be fast relative to writing out to the wire | 18:07 |
*** dxu_ has joined #openstack-dns | 18:07 | |
*** leitan has quit IRC | 18:07 | |
mugsie | well, we don't do single pass afaik | 18:07 |
mugsie | we yield tcp packet sized chunks | 18:07 |
greghaynes | right - I think you read it all in to a rr array then loop over encoding it | 18:08 |
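The chunked output mugsie describes matches standard DNS-over-TCP framing. A minimal sketch (hypothetical helper, not Designate's actual code): an AXFR goes out as a stream of already-encoded DNS messages, each prefixed with its length as a two-byte big-endian integer per RFC 1035 section 4.2.2.

```python
import struct

def frame_axfr_messages(wire_messages):
    """Yield DNS-over-TCP frames: each encoded DNS message is prefixed
    with its length as a 2-byte big-endian integer."""
    for wire in wire_messages:
        if len(wire) > 0xFFFF:
            # A single DNS message cannot exceed 65535 bytes; large
            # zone transfers are split across multiple messages.
            raise ValueError("DNS message exceeds 65535 bytes")
        yield struct.pack("!H", len(wire)) + wire
```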
greghaynes | so, something I'd really like to know is - what specifically other than a largeish zone were you testing. how many threads / processes and how many parallel requests | 18:08 |
*** pglass has joined #openstack-dns | 18:09 | |
timsim | Just a sec | 18:09 |
greghaynes | because something I did was turn off threading after noticing it defaults to a 1k thread pool and that SO_REUSEPORT would 'just work' for scaling our processes, otherwise I suspect hitting the mdns with many parallel requests would just cause them to all get run on one core in parallel | 18:10 |
timsim | Yeah I think I was running mdns with one process and SO_REUSE on | 18:10 |
greghaynes | and the default thread pool count? | 18:11 |
timsim | Yep | 18:11 |
*** lbrune has quit IRC | 18:12 | |
greghaynes | ok, so this is something worth verifying, but that was kind of my original theory when you all mentioned perf falling off a cliff at larger scale - if you run 1k python threads in a single process that are all doing a bit of work the GIL will destroy throughput | 18:12 |
*** rudrajit_ has joined #openstack-dns | 18:12 | |
*** lbrune has joined #openstack-dns | 18:13 | |
timsim | That sounds about like what was happening. | 18:13 |
timsim | I think there's also probably some difference with dnspython's encoding vs the Go one too that would exacerbate that issue. | 18:15 |
*** rudrajit has quit IRC | 18:15 | |
mugsie | would doing apache style "lots of workers" help here? let the kernel deal with it? | 18:16 |
mugsie | (it may be a stupid question) | 18:16 |
greghaynes | For that - the go one will probably be a tiny bit faster but really the way it should pan out is once the zone size gets large enough the amount of time doing encoding work will be tiny compared to blocking on writing out to the network. Really the encoding issues come in to play when doing tons of small zone transfers which is why I was trying to test that | 18:16 |
greghaynes | mugsie: yep, thats exactly what I did and the way mdns is coded it 'just works' - which is awesome | 18:17 |
greghaynes | you literally just start more workers and its all good | 18:17 |
greghaynes | but about the encoding - really the tons of threads + a large zone is when you get in the deathspiral with encoding since the encoding can't release the gil | 18:17 |
mugsie | were there issues with eventlet passing work to larger numbers of workers? I seem to remember a ML thread about that | 18:18 |
mugsie | can't seem to find it in my email though | 18:18 |
greghaynes | mugsie: eventlet has no idea about the number of processes, eventlet is only operating at the thread level | 18:18 |
greghaynes | oh, the swift issue? | 18:18 |
timsim | So a relatively small pool of threads per process to limit the number of parallel encodings would be ideal. | 18:19 |
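The shape timsim and greghaynes converge on (several worker processes sharing a port via SO_REUSEPORT, each with a deliberately small thread pool) might look like the following sketch. This is a hypothetical illustration, not mdns code; `SO_REUSEPORT` needs Linux 3.9 or later.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def make_reuseport_socket(addr):
    """UDP socket with SO_REUSEPORT: several worker processes can bind
    the same port and the kernel load-balances datagrams across them."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(addr)
    return sock

def serve(handle, addr=("0.0.0.0", 5354), workers=8):
    """One worker process: a small thread pool instead of a ~1k-thread
    default, so a thousand runnable CPU-bound threads aren't all
    fighting over the GIL."""
    sock = make_reuseport_socket(addr)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while True:
            data, client = sock.recvfrom(4096)
            pool.submit(handle, sock, data, client)
```

Scaling out then means simply starting more `serve()` processes, which is the "apache style lots of workers" approach mugsie asks about.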
greghaynes | yep, do you still have some kind of testing setup? | 18:19 |
greghaynes | mine was basically a 5line bash script, I am sure anything is better | 18:19 |
greghaynes | mugsie: the eventlet deal swift was hitting is a lot more complicated - it really comes down to not being able to do async i/o on flat files in linux, which isn't an issue here | 18:20 |
timsim | I used this to send the queries https://github.com/rackerlabs/mdns/blob/master/cmd/bench.go | 18:21 |
timsim | Then I used this mysqldump, which has a small and a large zone https://github.com/rackerlabs/mdns/blob/master/test_resources/designate.sql | 18:22 |
greghaynes | ah, thats super helpful | 18:22 |
greghaynes | I spent way too long figuring out how to load up some data | 18:22 |
greghaynes | So, first is probably verifying that this death spiral with a ton of threads is really happening, which should be pretty easy to test. Then after that theres a bunch of different ways to cache the zone if you wanted to make the responses fast, and it really comes down to how much work I think you want to do | 18:23 |
*** leitan has joined #openstack-dns | 18:24 | |
greghaynes | it also could be used to fix the threading issue a bit, basically if every process did some kind of read-through cache of generating a zone then you would be doing 1k less work | 18:24 |
mugsie | the problem with caching is that these services can be distributed geographically | 18:25 |
mugsie | and an AXFR will hit them once, or twice | 18:25 |
greghaynes | thats fine, even jsut an in-process cache that checks the db result and invalidates when it changes | 18:25 |
greghaynes | because the db query is not where this is falling over, clearly | 18:25 |
greghaynes | so you'd get identical output, just do 1/num_threads less work | 18:26 |
timsim | Also I think there's some level of the wire format that you need to parameterize because it changes for every request | 18:26 |
Kiall | timsim: there is, but it's easy to "fix" as it's a fixed size fixed position int within the wireformat | 18:27 |
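Kiall's "fixed size fixed position int" is the DNS transaction ID: bytes 0-1 of the message header. A cached wire-format response can therefore be reused per query by rewriting just that field (hypothetical function name, sketch only):

```python
import struct

def personalize(cached_wire, query_id):
    """Reuse a cached wire-format DNS response for a new request by
    overwriting the transaction ID, which lives in the first two bytes
    of the header as a big-endian 16-bit integer."""
    return struct.pack("!H", query_id) + cached_wire[2:]
```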
greghaynes | Yep. Thats kind of the minimum no-brainer way to do caching, if a bit more work was wanted I am sure we could come up with something a ton smarter for invalidating it | 18:28 |
mugsie | greghaynes: well, I dont think the wire format cache will get used much, if at all | 18:28 |
greghaynes | mugsie: so one reason I could see you really wanting it is actually for memory savings - if there are actually multiple megabyte zones as-is they are read in to memory 3 times per thread | 18:29 |
greghaynes | so for a 10mb zone its 30mb ram minimum per thread | 18:29 |
mugsie | either all servers will hit the same node at about the same time after getting notified, and all threads will generate the cache, or the request will go to a completely different mdns server | 18:29 |
Kiall | mugsie: on every change, every NS (so 1 to say 15 or so before you tier things) will make the same AXFR query at nearly the same time, which means 15x CPU time vs doing it once, caching, and unblocking the 2nd to 15th query. | 18:30 |
greghaynes | mugsie: oh, no, you do front-of-line-blocking. So all the other threads lock on one generating the cache and then when its done they all write it out | 18:30 |
mugsie | unless we take a lock on a zone | 18:30 |
greghaynes | mugsie: you lock on the cache entry | 18:30 |
mugsie | OK, yeah - that makes sense. | 18:31 |
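The per-entry locking just agreed on ("front-of-line blocking") can be sketched as below, assuming an in-process cache (hypothetical class, not Designate code): the first thread asking for a zone generates the entry while every later thread for the same key blocks on that key's lock, then reads the finished result instead of re-encoding.

```python
import threading
from collections import defaultdict

class ReadThroughCache:
    """Read-through cache with one lock per key, so concurrent requests
    for the same zone trigger exactly one generation."""
    def __init__(self):
        self._entries = {}
        self._locks = defaultdict(threading.Lock)
        self._guard = threading.Lock()   # protects the lock table

    def get(self, key, generate):
        with self._guard:
            lock = self._locks[key]
        with lock:                       # front-of-line blocking per key
            if key not in self._entries:
                self._entries[key] = generate()
            return self._entries[key]

    def invalidate(self, key):
        with self._guard:
            lock = self._locks[key]
        with lock:
            self._entries.pop(key, None)
```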
mugsie | Kiall: you wrote the current TCP stuff - how easy would it be to add caching to the stuff there? | 18:31 |
Kiall | That was at least a year or two ago ;) | 18:31 |
greghaynes | I'm also happy to help some of this btw, I don't have a *ton* of time but if someone else was taking lead I could probably churn out a few patches | 18:32 |
Kiall | Caching of responses would be easy enough - we actually could drop in a new mDNS middleware that does it in all likelihood.. | 18:32 |
mugsie | greghaynes: that would be great - even reviews would help :) | 18:32 |
greghaynes | sure thing | 18:33 |
Kiall | cache AXFR's, block in the middleware if cache suggests an AXFR is in progress anywhere else, drop cache if/when SOA queries come in and result in a higher SOA serial | 18:33 |
mugsie | even a select serial would be ok, right? | 18:34 |
Kiall | We do hit issues like memcache's 1MB limit, but that can probably be fixed with key sharding | 18:34 |
Kiall | something somewhere has to hit the DB to invalidate caches | 18:34 |
mugsie | Kiall: redis. it will keep timsim happy | 18:34 |
Kiall | lol - we already use and have memcache libs in place ;) | 18:34 |
Kiall | (one day, we'll move to oslo.cache and let the deployer decide ;)) | 18:35 |
mugsie | OK. I will file a bp, and write up a spec based on ^ | 18:35 |
mugsie | thanks all for digging in! | 18:36 |
greghaynes | np :) | 18:36 |
Kiall | Anyway - I don't think any of this really helps in a meaningful way at the sorta scale RAX are running things at tho :) | 18:36 |
greghaynes | oh? | 18:36 |
greghaynes | so, I'm happy to do a bit more engineering to make this a 100% i/o bound problem | 18:37 |
Kiall | From memory, the usage profile there is A) tiered, so 2x AXFR per change rather than say, 15.. and a stupid large number of zones, including some very heavy high churn zones | 18:37 |
Kiall | their* | 18:37 |
greghaynes | its not hard I think | 18:37 |
Kiall | and B) a stupid large* | 18:37 |
Kiall | greghaynes: :) | 18:37 |
greghaynes | Yea, so really the only question is at what point does this become a purely i/o bound problem, because then its going to be about the same speed no matter the language, its simply a matter of having the i/o bandwidth | 18:38 |
timsim | Nah the bind slaves don't end up doing an axfr | 18:38 |
Kiall | timsim: ah - /me misremembers what you guys look like then ;) | 18:38 |
timsim | I think caching would work out ok | 18:39 |
pglass | caching should work well for us to tone down the number of db queries. | 18:39 |
timsim | It was mostly a matter of having many worker processes going during performance tests. We weren't spinning enough up probably so they were thread starving | 18:39 |
greghaynes | pglass: with this caching setup there won't be less queries, that could be changed though... | 18:39 |
pglass | sorry. what's the point of the caching then? | 18:40 |
greghaynes | timsim: yea, and also turning the thread count way down | 18:40 |
Kiall | won't be less per non-cached AXFR I guess greg meant | 18:40 |
greghaynes | pglass: here its purely so we don't encode things per thread | 18:40 |
mugsie | timsim: it would be interesting to see the perf results at differing numbers of workers | 18:40 |
Kiall | overall QPS on mysql would drop, as responses served from cache wouldn't hit the DB | 18:40 |
greghaynes | Kiall: oh, so what are you thinking for invalidating a cache? | 18:40 |
greghaynes | Kiall: like, when would we know that the db changed - just check serial? | 18:41 |
pglass | so we explode memory per process, in order to avoid re-encoding to dns wire format? | 18:41 |
greghaynes | (maybe I missed that?) | 18:41 |
mugsie | greghaynes: yeah, just checking serial is much lighter | 18:41 |
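The serial-check invalidation mugsie and Kiall describe: one cheap `SELECT serial`-style query per AXFR decides whether the cached wire blob is still valid, and only a serial bump forces the expensive re-encode. A hedged sketch with hypothetical callables:

```python
def get_axfr(cache, zone_name, fetch_serial, build_response):
    """Serve an AXFR from cache, invalidating on serial change.

    fetch_serial: cheap DB lookup of the zone's current serial.
    build_response: expensive query + wire-format encode of the zone.
    cache: dict mapping zone name -> (serial, wire blob).
    """
    serial = fetch_serial(zone_name)          # one light query per request
    entry = cache.get(zone_name)
    if entry is not None and entry[0] == serial:
        return entry[1]                       # cache hit, no re-encode
    wire = build_response(zone_name)          # serial moved (or cold cache)
    cache[zone_name] = (serial, wire)
    return wire
```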
greghaynes | pglass: its actually less memory per process | 18:41 |
pglass | is it? | 18:41 |
pglass | things in the cache are never freed, are they? | 18:42 |
Kiall | well - memcache or w/e grows | 18:42 |
greghaynes | pglass: ah, disregard what I said about queries, if we're checking serial then it is a ton less queries | 18:42 |
mugsie | pglass: it should go to a cache, not in memory | 18:42 |
Kiall | mugsie: caches are usually in memory ;) semantics matter :P | 18:42 |
mugsie | and we can kick things out of the cache based on changed serial | 18:42 |
mugsie | Kiall: in process | 18:42 |
pglass | oh, i thought I saw like an in memory cache per process. i haven't read too closely. if it's memcached or something that's different. | 18:42 |
greghaynes | the cache will be in mem, basically the design right now wastes a _ton_ of memory - every thread is reading in the whole zone 3 times and each one stores its own copy and theres 1k threads | 18:42 |
greghaynes | so if the cache size is HUGE it'll be more mem, but really I think you just need a tiny cache here - the way load works for this service is an axfr goes out and everyone requests the same thing for a short period of time | 18:43 |
greghaynes | so caching any more than how many simultaneous axfr's are going on is a waste | 18:43 |
greghaynes | but thats also just a tuning knob folks can figure out afterwards | 18:44 |
pglass | oh. there's usually only one process with lots of threads. | 18:44 |
pglass | okay. that makes sense. | 18:44 |
Kiall | The flip side is SOA query caching, which is a high number of DNS requests per sec with a low-ish number of queries per request... which can also be cached, if invalidation can be worked into the right places. | 18:44 |
Kiall | (and SOA has a low encoding overhead, as were talking like 50-200byte responses) | 18:45 |
greghaynes | ah, so thats the kind of thing you probably want some kind of smarter system for | 18:46 |
greghaynes | but, baby steps | 18:46 |
*** haplo37__ has quit IRC | 18:55 | |
*** haplo37__ has joined #openstack-dns | 19:07 | |
*** EricGonczer_ has joined #openstack-dns | 19:09 | |
*** EricGonc_ has quit IRC | 19:13 | |
*** abalutoiu has quit IRC | 19:20 | |
openstackgerrit | Graham Hayes proposed openstack/designate: Don't hardcode options we pass to oslo.context https://review.openstack.org/350758 | 19:32 |
*** lbrune has left #openstack-dns | 19:42 | |
openstackgerrit | Merged openstack/designate-tempest-plugin: Test that updating recordset TTL only modifies TTL https://review.openstack.org/350235 | 19:44 |
*** pglass has quit IRC | 19:53 | |
*** rudrajit has joined #openstack-dns | 19:55 | |
*** rudrajit_ has quit IRC | 19:59 | |
*** pglass has joined #openstack-dns | 20:06 | |
*** EricGonc_ has joined #openstack-dns | 20:08 | |
*** EricGonczer_ has quit IRC | 20:09 | |
*** leitan has quit IRC | 20:16 | |
*** jmcbride has quit IRC | 20:43 | |
*** GonZo2000 has joined #openstack-dns | 20:48 | |
*** GonZo2000 has joined #openstack-dns | 20:48 | |
*** v12aml has quit IRC | 20:57 | |
*** rudrajit_ has joined #openstack-dns | 21:00 | |
*** rudrajit has quit IRC | 21:03 | |
*** v12aml has joined #openstack-dns | 21:04 | |
*** EricGonc_ has quit IRC | 21:19 | |
*** EricGonczer_ has joined #openstack-dns | 21:21 | |
*** GonZo2000 has quit IRC | 21:41 | |
*** GonZo2K has joined #openstack-dns | 21:41 | |
*** rudrajit_ has quit IRC | 21:45 | |
*** rudrajit has joined #openstack-dns | 21:54 | |
*** GonZoPT has joined #openstack-dns | 21:55 | |
*** GonZo2K has quit IRC | 21:58 | |
*** pglass has quit IRC | 22:11 | |
*** tyr_ has joined #openstack-dns | 22:31 | |
tyr_ | Krenair: are you interested in contributing to a v2 UI ? | 22:32 |
Krenair | might be able to in future | 22:33 |
tyr_ | I could use some help with converting a Horizon token into a v2 client. For testing I've just been hard coding the auth in the Horizon API layer. | 22:33 |
Krenair | don't have my own local dev setup and am not really in a position to make one right now | 22:33 |
tyr_ | ok, np. Are you using DNS functionality in Horizon? Perhaps you'd be a good reviewer to validate the usability of a new panel. | 22:34 |
*** mlavalle has quit IRC | 22:37 | |
*** ducttape_ has quit IRC | 22:41 | |
Krenair | tyr_, yes | 22:44 |
tyr_ | do you mind if I add you as a reviewer on my patch? Not so much for the code review, but to remind me to ping you when I have a version ready for testing. | 22:45 |
openstackgerrit | Alexander Monk proposed openstack/designate: Fix api-ref methods for getting, updating and deleting recordsets https://review.openstack.org/350817 | 22:47 |
Krenair | tyr_, go for it | 22:47 |
Krenair | I'll have to figure out how to get it setup myself locally | 22:48 |
tyr_ | ok. Thanks! We can burn that bridge once we get to it. | 22:48 |
Krenair | That doesn't sound very encouraging | 22:48 |
tyr_ | lol | 22:48 |
Krenair | :) | 22:48 |
tyr_ | I'm aiming to have this ready for Newton. | 22:49 |
tyr_ | but it depends on a fair amount of new Horizon flux...so its "dynamic" | 22:50 |
*** EricGonczer_ has quit IRC | 22:54 | |
*** ducttape_ has joined #openstack-dns | 23:02 | |
*** jmcbride has joined #openstack-dns | 23:07 | |
Krenair | yeah we're not actually on Mitaka yet | 23:08 |
*** jmcbride has quit IRC | 23:08 | |
Krenair | our 'production' openstack system just moved to Liberty, testing system is still on Liberty but I think there's an upgrade being planned for that to go to Mitaka soonish | 23:09 |
*** jmcbride has joined #openstack-dns | 23:14 | |
*** ducttape_ has quit IRC | 23:24 | |
*** ducttape_ has joined #openstack-dns | 23:25 | |
*** tyr_ has quit IRC | 23:31 | |
Krenair | oh, he quit :( | 23:38 |
Krenair | think I had a solution to the token-to-v2-client thing | 23:38 |
Krenair | sent an email instead | 23:44 |
*** dxu_ has quit IRC | 23:47 | |
*** ducttape_ has quit IRC | 23:49 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!