Friday, 2018-05-11

*** dayou_ has joined #openstack-lbaas00:09
rm_workcool! thanks00:12
*** longkb has joined #openstack-lbaas00:32
dayouI have one question, so if we have provider landed, can each company implement its own provider such as xxx company's provider?00:32
johnsomYes00:32
johnsomwe have provider "amphora", F5 networks could have "F5"00:33
rm_workor "banana" for all we care :P00:33
johnsomThe operator can override the names if they need to00:33
dayouThat sounds cool, or chocolate00:33
rm_workreally those belong as flavors <_<00:34
*** fnaval has quit IRC00:34
johnsomI thought it, but didn't say it00:35
bzhao__hi guys, could you please have a quick look about udp patches if you have time? Thank you. https://review.openstack.org/#/c/525420/  https://review.openstack.org/#/c/529651/00:36
johnsombzhao__ Yes! I am so sorry I haven't got to it yet.  I got wrapped up in writing the driver support.00:36
johnsomThese few coverage branches are taking much longer than I planned00:37
bzhao__No problem. I'm glad Octavia is growing so quickly. :)00:38
johnsomHa, yeah, there has been a lot of work over these last few weeks00:39
*** fnaval has joined #openstack-lbaas00:41
bzhao__I saw. Octavia is still moving quickly. :)00:41
*** fnaval has quit IRC00:47
*** fnaval has joined #openstack-lbaas00:47
*** fnaval has quit IRC00:52
rm_workwe're trying :)01:02
rm_workyeah unfortunately UDP is just a lower priority than Providers (and the associated testing) since we NEED to get that done this cycle ASAP so we can really fully deprecate neutron-lbaas01:03
johnsomYep, and we need to un-block vendors01:03
rm_workyes01:03
rm_workso unfortunately, even though I would love to get UDP support in, realistically it's just going to have to wait on this provider stuff01:04
johnsomI'm trying to multi-task, so alternating some reviews in, but yeah, focus at this moment is providers01:05
openstackgerritMichael Johnson proposed openstack/octavia master: Implement provider drivers - Cleanup  https://review.openstack.org/56743101:06
*** harlowja has joined #openstack-lbaas01:06
*** ianychoi_ has quit IRC01:14
johnsomOk, I have no idea what the heck is wrong with cover and that to_dict().  I can put logging in and the line is clearly getting hit with the test, but still shows red01:19
rm_workpush it up01:25
rm_worklink me to the line01:25
rm_worki'll debug through and look01:25
openstackgerritMichael Johnson proposed openstack/octavia master: Implement provider drivers - Cleanup  https://review.openstack.org/56743101:25
johnsomhttp://logs.openstack.org/31/567431/4/check/openstack-tox-cover/d0e7a48/cover/octavia_api_drivers_data_models_py.html#n7301:25
johnsomThat red01:25
johnsomThis test fires it: octavia.tests.unit.api.drivers.test_data_models.TestProviderDataModels.test_to_dict_partial01:26
rm_workkk01:26
johnsomLike a lot01:26
openstackgerritMerged openstack/octavia master: Creates provider driver base class and exceptions  https://review.openstack.org/55801301:55
openstackgerritMerged openstack/octavia master: Create noop provider driver and data model  https://review.openstack.org/55832001:55
johnsomWahoo01:56
johnsomGuess I need to get on those comments tomorrow01:56
rm_workjohnsom: erm yeah, so ...02:00
rm_workif you expect `test_to_dict_partial` to hit line 73, it's not going to02:00
rm_workbut....02:00
rm_workbut WHY is confusing me02:01
johnsomNo, put a log message there02:01
johnsomIt hits it a bunch due to all of the unsets02:02
rm_workah yeah so i put some other stuff there02:03
rm_workand the debug will go there02:03
rm_workbut if it's just a continue... it won't step me down to that line02:03
rm_workit's super weird02:03
*** harlowja has quit IRC02:07
rm_workyeah i think it's a bug in the coverage lib02:08
johnsomMe too02:08
rm_work"cases where CPython's peephole optimizer replaces a jump to a continue with a jump to the top of the loop, so the continue line is never actually executed, even though its effects are seen"02:10
rm_workso not the coverage lib so much02:11
rm_workmaybe they could work around it? but ... it's a python oddity02:11
rm_workhttps://mail.python.org/pipermail/python-ideas/2014-May/027893.html02:12
rm_workhttps://bugs.python.org/issue2506,02:14
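[Editor's note] The CPython quirk rm_work links to above can be reproduced with a tiny loop. This is an illustrative sketch, not Octavia code: the peephole optimizer may compile the `continue` into a jump straight to the top of the loop, so coverage tools can mark the `continue` line as unexecuted even though its effect is clearly visible in the result.

```python
def skip_odds(numbers):
    """Return only the even numbers, skipping odds via `continue`."""
    evens = []
    for n in numbers:
        if n % 2:
            # This line may show as "not covered" in coverage reports,
            # despite being logically hit for every odd number.
            continue
        evens.append(n)
    return evens

print(skip_odds([1, 2, 3, 4, 5, 6]))  # [2, 4, 6]
```

Restructuring the loop (e.g. inverting the condition) usually makes the red line disappear, which is essentially the workaround in the pyca/cryptography commit linked below.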
*** rcernin has joined #openstack-lbaas02:14
rm_worklol johnsom: https://github.com/pyca/cryptography/pull/3968/commits/3b585f803891e750d0ca5861b5a29e16b779bc1602:25
rm_workthere you go02:26
johnsomLol, omg, I don’t think I can stoop to that level for coverage02:27
*** ianychoi has joined #openstack-lbaas02:29
rm_worklol02:31
rm_workanywho, learned a bit about the python compiler today :P02:32
rm_workeugh our exception handling has a tiny gap...02:39
rm_workhttps://github.com/openstack/octavia/blob/master/octavia/api/v2/controllers/load_balancer.py#L223-L23102:39
rm_workjohnsom: ^^ if neutron fails, we just raise and give the user a 500 with a stacktrace >_>02:40
rm_workwas that supposed to be actually *converting* the error to a different exception type that is API-Ready?02:40
rm_workit even says something about API-ready but the exception type of 'e' is definitely not02:40
johnsomIt solved the problem I was having with neutron exceptions and 500s02:41
johnsomrm_work: note btw, what it is catching is a network driver exception, not straight from neutron02:46
*** fnaval has joined #openstack-lbaas02:56
*** rm_work has quit IRC03:13
*** rm_work has joined #openstack-lbaas03:23
*** dayou_ has quit IRC03:53
rm_workjohnsom: right but a network driver exception IS a 50003:55
rm_workno?03:55
rm_workso I'm not clear how it would prevent you from getting 500s, when it literally raises an Exception type that is a 50004:05
rm_workOH, it's translating the code through, is how04:06
rm_workbut if the neutron code *is* a 500...04:06
rm_workAHHH also my network-driver isn't copying over the status code anyway <_< so yeah in my case it'd always be a 500 no matter what neutron returns (though in this case, that was still a 500)04:12
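[Editor's note] A minimal sketch of the translation being discussed, using hypothetical class names (not Octavia's actual exception hierarchy): catch the network-driver exception at the API layer and re-raise an API-ready exception that copies the original HTTP status code through, so a 409 from neutron stays a 409 instead of surfacing as a 500 with a stacktrace.

```python
class DriverError(Exception):
    """Hypothetical network-driver exception carrying an HTTP status."""
    def __init__(self, msg, status_code=500):
        super().__init__(msg)
        self.status_code = status_code

class APIException(Exception):
    """Hypothetical API-ready exception the controller is allowed to raise."""
    def __init__(self, msg, code=500):
        super().__init__(msg)
        self.code = code

def create_vip(allocate):
    try:
        return allocate()
    except DriverError as e:
        # Copy the status code through; default to 500 if the driver
        # forgot to set one (the gap rm_work hit in his driver).
        raise APIException(str(e), code=getattr(e, "status_code", 500))

def fail():
    raise DriverError("port quota exceeded", status_code=409)

try:
    create_vip(fail)
except APIException as e:
    print(e.code)  # 409
```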
openstackgerritAdam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/43561204:13
*** pck has joined #openstack-lbaas04:25
*** pck is now known as pckizer04:35
*** yamamoto has joined #openstack-lbaas04:37
*** yamamoto has quit IRC05:31
*** fnaval has quit IRC05:46
*** yamamoto has joined #openstack-lbaas05:46
*** fnaval has joined #openstack-lbaas06:07
*** fnaval has quit IRC06:11
*** links has joined #openstack-lbaas06:31
*** pcaruana has joined #openstack-lbaas06:32
rm_workwheee06:35
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for pools  https://review.openstack.org/56564006:35
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members  https://review.openstack.org/56619906:35
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create basic traffic balancing scenario test  https://review.openstack.org/56670006:35
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for healthmonitors  https://review.openstack.org/56768806:35
rm_workok06:35
rm_workHM List works06:35
* rm_work cries06:35
rm_worknow actually the scenario should be easy06:35
rm_work... tomorrow06:35
*** annp has quit IRC06:36
*** annp has joined #openstack-lbaas06:36
*** AlexStaf has quit IRC06:51
*** rpittau has quit IRC07:06
*** dmellado has quit IRC07:09
*** dmellado has joined #openstack-lbaas07:10
*** fnaval has joined #openstack-lbaas07:16
*** rpittau has joined #openstack-lbaas07:18
*** tesseract has joined #openstack-lbaas07:20
*** fnaval has quit IRC07:21
*** ivve has joined #openstack-lbaas07:28
*** yamamoto_ has joined #openstack-lbaas07:36
*** ltomasbo has quit IRC07:36
openstackgerritOpenStack Proposal Bot proposed openstack/octavia-dashboard master: Imported Translations from Zanata  https://review.openstack.org/56778907:37
*** yamamoto has quit IRC07:39
*** AlexStaf has joined #openstack-lbaas07:55
*** rcernin has quit IRC07:58
openstackgerritlidong proposed openstack/python-octaviaclient master: Update the outdated links  https://review.openstack.org/54811208:02
*** AlexStaf has quit IRC08:05
*** salmankhan has joined #openstack-lbaas08:26
*** openstack has joined #openstack-lbaas09:29
*** ChanServ sets mode: +o openstack09:29
*** yamamoto has joined #openstack-lbaas09:40
*** salmankhan has quit IRC09:50
*** salmankhan has joined #openstack-lbaas09:50
*** longkb has quit IRC10:33
*** yamamoto has quit IRC11:12
*** yamamoto has joined #openstack-lbaas11:20
*** Ignazio has joined #openstack-lbaas11:24
*** Ignazio has quit IRC11:25
*** yamamoto has quit IRC11:25
*** yamamoto has joined #openstack-lbaas11:28
*** ignazio has joined #openstack-lbaas11:37
ignaziohello everyone11:38
ignazioPlease, I need help on ocata octavia11:38
ignaziohealth-manager reports: Amphora 4e6d19d3-bc19-4882-aeca-4772b069c53b health message reports 0 listeners when 1 expected11:39
ignazioand continues to deploy amphora instances11:39
*** ignazio has left #openstack-lbaas11:40
*** ignazio has joined #openstack-lbaas11:42
ignazioPlease, I need help on ocata octavia11:43
*** yamamoto has quit IRC11:45
*** amotoki_ has joined #openstack-lbaas11:45
ignaziocan anyone help me with octavia?11:49
*** amotoki__ has joined #openstack-lbaas11:54
*** yamamoto has joined #openstack-lbaas11:55
*** amotoki_ has quit IRC11:55
*** amotoki_ has joined #openstack-lbaas11:59
*** amotoki__ has quit IRC12:01
*** annp has quit IRC12:01
*** amotoki__ has joined #openstack-lbaas12:08
*** amotoki_ has quit IRC12:08
*** yamamoto has quit IRC12:12
*** yamamoto has joined #openstack-lbaas12:13
*** phuoc_ has joined #openstack-lbaas12:15
ignaziohello12:16
ignaziocould anyone help me with octavia?12:16
*** phuoc has quit IRC12:17
*** yamamoto has quit IRC12:22
*** yamamoto has joined #openstack-lbaas12:23
*** yamamoto has quit IRC12:27
*** ignazio has quit IRC12:36
*** yamamoto has joined #openstack-lbaas12:46
*** yamamoto has quit IRC12:47
*** yamamoto has joined #openstack-lbaas12:47
*** amotoki__ has quit IRC12:52
*** yamamoto_ has joined #openstack-lbaas12:53
*** yamamoto has quit IRC12:57
*** atoth_ has quit IRC13:04
*** atoth has joined #openstack-lbaas13:04
*** salmankhan has quit IRC13:10
*** openstackstatus has joined #openstack-lbaas13:11
*** ChanServ sets mode: +v openstackstatus13:11
-openstackstatus- NOTICE: Due to a Zuul outage, patches uploaded to Gerrit between 09:00 UTC and 12:50 UTC were not properly added to Zuul. Please recheck any patches uploaded during this window, and apologies for the inconvenience.13:15
*** yamamoto_ has quit IRC13:21
*** salmankhan has joined #openstack-lbaas13:43
*** salmankhan has quit IRC13:48
*** salmankhan has joined #openstack-lbaas13:53
*** yamamoto has joined #openstack-lbaas14:21
*** yamamoto has quit IRC14:27
*** fnaval has joined #openstack-lbaas14:36
xgerman_ignazio wassup?14:45
*** openstackgerrit has quit IRC14:49
johnsomYeah, I checked earlier and saw they were already offline.  Friday is a rough day for coverage15:00
*** pcaruana has quit IRC15:07
*** rpittau has quit IRC15:14
*** yamamoto has joined #openstack-lbaas15:24
*** links has quit IRC15:24
*** yamamoto has quit IRC15:32
*** openstackgerrit has joined #openstack-lbaas15:34
openstackgerritMichael Johnson proposed openstack/octavia master: Implement provider drivers - Cleanup  https://review.openstack.org/56743115:34
*** salmankhan has quit IRC16:20
*** salmankhan has joined #openstack-lbaas16:23
*** yamamoto has joined #openstack-lbaas16:28
*** yamamoto has quit IRC16:34
openstackgerritMerged openstack/octavia-dashboard master: Make the display of none consistent in detail page  https://review.openstack.org/54275116:43
*** tesseract has quit IRC16:59
*** openstackstatus has quit IRC17:00
*** openstack has joined #openstack-lbaas17:04
*** ChanServ sets mode: +o openstack17:04
*** salmankhan has quit IRC17:05
*** yamamoto has joined #openstack-lbaas17:30
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for members  https://review.openstack.org/56619917:32
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create basic traffic balancing scenario test  https://review.openstack.org/56670017:32
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for healthmonitors  https://review.openstack.org/56768817:32
openstackgerritMichael Johnson proposed openstack/octavia master: Implement provider drivers - Health Monitor  https://review.openstack.org/56703717:34
johnsomAdded the depends on for HM17:35
*** yamamoto has quit IRC17:36
rm_workk17:38
rm_workjust in time for updates :P17:38
rm_workuhhh the scenario might fail17:38
rm_workworking on getting it testing17:38
rm_workerr sorry, s/might/will/17:40
rm_workso ignore that17:40
rm_workI see why people often don't bother writing tests in a way that's known to work in parallel / dirty environments17:53
rm_workit's a PITA17:53
rm_workbut worth it I think17:53
johnsomYeah, it's worth the effort to get a good base17:57
johnsomI'm doing a few reviews before lunch, then out for lunch, this afternoon I will address your comments on the LB patch, then start the driver library17:58
rm_workk17:58
rm_workthe entirety of the comments on that patch are what's in the CR on the end of the chain17:58
rm_workso you may as well just comment on that17:59
johnsomSo you are just going to +2 the LB patch? grin17:59
rm_workif you address those comments, yeah17:59
rm_workas long as they are passing the new tempest tests...18:02
rm_workI gave it a once-over and didn't see anything egregious18:02
rm_workand there's a lot in the cleanup patch18:02
rm_workbut this is another "get it in, and iterate" thing18:03
rm_workI think it's not going to be perfect anyway until a vendor tries to actually use it to write a driver18:03
johnsomMost of cleanup is a doc and moving the samples out into a dedicated samples file18:03
rm_workit seems to be written "per spec", and if there's bugs, we'll find them when someone tries to use it18:05
johnsomWell, I fixed the ones I found as I wrote the amphora driver18:05
rm_workso as long as it doesn't break the existing stuff (which tests show it does not), and there's nothing obviously dumb in there, i'm good18:05
johnsomHa, well, entirely possible18:06
rm_workright, i did a once-over looking for dumb stuff18:06
rm_workjust found the one bit i didn't like, and commented/argued 100 times accordingly18:06
rm_workeugh i want to have the HM scenario test actually make sure member statuses update properly...18:10
rm_workbut that means it'd have to use compute instances to have valid members18:10
rm_workwhich means i need to move it into the "live-tests" thing <_<18:11
rm_workunless i figure out the singleton-servers thing18:11
rm_workjohnsom: did you know `gate-recheck` is a thing?18:25
rm_workapparently when our stuff passes checks, we +A it, and the gate fails for something dumb, we can just `gate-recheck` to ONLY retry the gate, we don't have to `recheck` and go all the way through both18:26
johnsomNo, must be new18:26
* rm_work shrugs18:26
rm_worki think it's been a while, just no one told us / we didn't read doc updates :P18:27
*** yamamoto has joined #openstack-lbaas18:32
*** yamamoto has quit IRC18:36
rm_workalright i'm gonna move the HM testing stuff around a bit :(19:06
*** yamamoto has joined #openstack-lbaas19:33
*** yamamoto has quit IRC19:38
rm_workhmm wonder if i found a bug20:18
rm_workOH20:23
rm_workjohnsom: so somehow i didn't realize that a member failing healthmonitor checks goes to ERROR, I thought it would go to OFFLINE20:23
rm_workeven though i have seen this multiple times in the past, so I should know this20:23
rm_workstill it just feels counter-intuitive to me20:23
johnsomOffline is admin down20:24
rm_workahhhh20:24
rm_workok20:24
rm_work;)20:25
rm_worki should add that to my test too20:25
rm_workI think I'm happy with this model20:31
rm_workthe scenario test files named for each object will pretty much just be a collection of the same tests from the API, but designed to be run not in noop mode... then if there is something you need to test that requires actual webservers/traffic, it goes in the traffic tests file20:32
*** yamamoto has joined #openstack-lbaas20:34
*** yamamoto has quit IRC20:39
rm_workjohnsom: ah, err... so, members in admin_state "down" ...20:49
rm_workjohnsom: with the change I made to fix the status-flipping bug, they will always stay in NO_MONITOR <_< so I guess I introduced another bug there. We probably just need to actually check if there's a monitor configured or not :/20:51
johnsom???20:52
johnsomI thought the way I had you change it fixed that20:52
rm_workwith this: https://review.openstack.org/#/c/567322/3/octavia/controller/healthmanager/health_drivers/update_db.py20:52
rm_workit fixed something else20:52
rm_workbut not quite this20:52
rm_workit doesn't ever override...20:53
rm_workbut since all HMs start in NO_MONITOR regardless of anything else, it still will ignore updating them20:53
rm_workoriginally i had it overriding, which we said was bad20:53
johnsomBut if the member reports it won't hit that logic20:53
rm_workah right20:54
rm_workerr20:54
rm_workhmm maybe i am misunderstanding the bug i'm hitting20:54
rm_workone sec20:54
rm_workerr right20:54
johnsomThe way you had it first would have had that problem20:54
rm_workbecause admin_state_up=False makes the member not report20:54
rm_workright?20:54
rm_workwe just don't include it20:54
johnsomYeah, yes20:55
johnsomI see it now20:55
rm_workyeah20:55
rm_workit was non-obvious20:55
rm_worki'm noodling the fix20:55
rm_workthere's actually a couple of ways to do it20:55
rm_workbut deciding between trying to inject too much intelligence or not20:55
rm_workdon't want this to get CRAZY complex20:55
rm_workwe actually have the db_member so we can actually LOOK and see if there's an HM20:56
rm_workor, we could also update our controller code so that when you set a member to admin-down it will set the initial OFFLINE20:57
rm_workand then it will not be a problem20:57
rm_work^^ I think i like the second one20:57
rm_worksince we should never have created it in NO_MONITOR anyway20:57
rm_workright?20:57
rm_workIMO *that* is also a similar bug20:57
rm_workbecause it's also a weird status-flip20:57
rm_workthoughts?20:57
rm_workthe issue there though is it just requires a ton of touchpoints in the code to make sure we always get it right21:02
rm_workcreate_member needs to make sure to set the initial OFFLINE if there is no HM (easy peasy)21:02
rm_workupdate_member needs to update the status to OFFLINE if admin_state_up is changed to false (easy, but, could be overwritten by health, so we're back in the same boat)21:03
rm_workcreate/delete HM would also need to update affected members (erk)21:04
rm_workhonestly I think the *problem* is all stemming from this being a weird override to begin with -- we're trying to say "we're smarter than the HM system" when a member is admin-down21:05
rm_workwhich seems all great and good21:06
rm_workbut it causes all of these awkward transitions21:06
rm_workso i think I may just *have to* change it in update_db21:07
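[Editor's note] An illustrative sketch (names and defaults are assumptions, not Octavia's actual code) of the status decision rm_work is walking through: derive a member's initial operating status from its admin state and whether a health monitor exists, so an admin-down member starts OFFLINE instead of getting stuck in NO_MONITOR.

```python
OFFLINE, NO_MONITOR, ONLINE = "OFFLINE", "NO_MONITOR", "ONLINE"

def initial_member_status(admin_state_up, has_health_monitor):
    """Pick the status a member should be created/updated with."""
    if not admin_state_up:
        # Admin down always wins: the member is never rendered into
        # haproxy, so no health message will ever arrive to flip it.
        return OFFLINE
    if not has_health_monitor:
        # Nothing will ever report health for this member.
        return NO_MONITOR
    return ONLINE

print(initial_member_status(False, True))   # OFFLINE
print(initial_member_status(True, False))   # NO_MONITOR
print(initial_member_status(True, True))    # ONLINE
```

As the conversation notes, doing this at create/update time still leaves the create/delete-HM touchpoints to handle, which is why the discussion moves toward fixing it in the haproxy config instead.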
rm_workyeah so if we *actually check* to see if there's a HM, that should work, right?21:21
rm_workwe have that data...21:21
rm_workanother option would be to do members the correct way in HAProxy21:23
rm_workand actually put them into config but in maint mode21:23
rm_workwhich is IMO what "admin-down" should do21:23
rm_workthen we wouldn't have this problem at all21:23
johnsomWhoa, take an internal meeting and come back to a wall of text....21:27
rm_workwalking through logic21:27
rm_workbasically I think the real fix for this (and something we really should have done ANYWAY) is to still put admin_down members into the config, but disable them21:28
rm_workhttps://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.2-disabled21:28
rm_workthen they're in maint mode21:28
rm_workand we correctly OFFLINE them via healthchecks21:28
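[Editor's note] A hedged sketch of the haproxy idea above (backend and server names are illustrative): render an admin-down member into the backend with the `disabled` keyword so it sits in maintenance mode, rather than omitting it from the config entirely.

```
backend pool_1
    # Normal, admin-up member: health-checked and receiving traffic.
    server member_1 10.0.0.10:80 weight 1 check

    # Admin-down member: present in the config but in maint mode, so
    # haproxy reports it and it can be correctly marked OFFLINE.
    server member_2 10.0.0.11:80 weight 1 check disabled
```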
johnsomYeah, but that option wasn't in the original haproxy we started with, I think that is why we pulled the config21:29
rm_workohrly?21:30
rm_worklet me see how far it goes back21:30
rm_workthat would be annoying21:30
johnsomI was just going to look21:30
rm_worki see it in 1.521:30
rm_worki think it was just that every OTHER object did it that way21:31
rm_workso we carried it through to members21:31
johnsomWell, if it does what we need and stops sending health monitor pings, I am all for it21:32
rm_workI believe so21:33
rm_workthat last part i need to double check21:33
johnsomIt appears to say it stops, so....21:33
rm_workyeah21:33
rm_work"That means that it is marked down in maintenance mode, and no connection other than the ones allowed by persist mode will reach it."21:33
rm_workah, and21:34
rm_work"while it is still possible to test the service by making use of the force-persist mechanism."21:34
rm_workso I assume you'd need that to enable HMs21:34
rm_workalmost done with the template change, working on the testing21:34
rm_workwe still do a weird flip where in NOOP mode it would always be NO_MONITOR but in real mode it flips to OFFLINE after a health message comes in21:35
rm_workbut I guess that's ok?21:35
*** yamamoto has joined #openstack-lbaas21:35
*** yamamoto has quit IRC21:41
rm_workjohnsom: ummm i thought we had tests somewhere that tested the generated haproxy configs, but i can't find them, and i changed stuff and nothing failed >_>21:42
rm_workoh i found it ... test_jinja_config21:43
rm_workbut it must not test anything involving disabled members >_<21:43
rm_worklol21:43
xgerman_yeah, that is very barebones and just checks some global options21:48
*** AlexStaf has joined #openstack-lbaas21:57
rm_workyeah got a fix, testing22:09
*** fnaval has quit IRC22:31
openstackgerritAdam Harwell proposed openstack/octavia master: Create disabled members in haproxy  https://review.openstack.org/56795522:32
rm_work^^ johnsom let me know what you think, when you have a break22:33
rm_workgoing to test it now myself to make sure it solves my issue22:33
johnsomok22:33
rm_workI wonder where would be the best place to add some docs and maybe a chart about exactly what you can expect the status of members to be depending on HM/not22:36
rm_workbecause it was confusion/misunderstanding on my part that led to introducing this bug22:37
johnsomYeah, I agree we should have something like that.  Maybe in the user section22:37
johnsomIt would go in an API guide I guess, but we don't have one at the moment.22:38
rm_workit's kinda a 3d matrix22:39
rm_worknot even sure how i'd visualize it22:39
rm_workah I guess one table for no HM, and one table for with HM22:39
johnsomgraphviz?22:40
johnsomlol22:40
*** ivve has quit IRC22:41
rm_workhmmm that may not have fixed it, but i need to figure out why22:45
rm_workprolly missed something22:45
rm_workoh, wtf22:50
rm_workHM didn't restart in devstack due to address in use22:50
rm_worki thought we fixed that <_<22:50
johnsomNo, I have been bitching about this for weeks. The new service/process thing doesn't shut down right22:51
rm_workeugh22:52
rm_workok umm so i guess maybe my patch does work then22:52
rm_workk22:52
rm_workfinally got it restarted and it seems to be working22:53
rm_workerrr22:53
rm_workthough this means it actually never had the last patch...22:53
rm_workbecause i had applied that and restarted the HM22:53
rm_workgrrr one sec22:54
rm_workgonna run it one more time to make sure it all passes, then revert this patch and see if it passes also22:54
rm_work(I don't think it will)22:54
rm_workbut that means this bug was present even without my patch I think22:54
rm_workerr, without the one I thought broke it22:54
rm_work`systemctl restart` doesn't return an error exit code if it fails miserably to start the service?? <_<22:55
rm_workok well, it is definitely passing what i think is a pretty good tempest test, WITH both patches22:56
openstackgerritAdam Harwell proposed openstack/octavia-tempest-plugin master: Create api+scenario tests for healthmonitors  https://review.openstack.org/56768823:00
rm_workk, includes a couple of scenario tests, the traffic one i think is *good*23:00
rm_workbut I think it could still use maybe one or two more cases thrown in23:01
*** yamamoto has joined #openstack-lbaas23:01
rm_workok yes, bug exists with the first patch but without the second23:02
rm_worknow to test without either to verify it was introduced there23:02
rm_workuh-oh, it's 4:04pm, time to make myself not found23:04
johnsomsigh23:05
rm_workyeah ok verified that the last patch definitely did break it23:07
rm_workbut the other patch fixes it23:07
rm_workso23:07
rm_workI did add a reno23:07
rm_work;)23:07
johnsom+123:07
rm_workyeah ok i may have to track down why this isn't restarting properly23:11
rm_workunless, did you track it down already?23:11
rm_workI somehow missed your bitching about it :P23:11
johnsomNo, I didn't track it down23:19
rm_workk23:19
*** yamamoto has quit IRC23:26
johnsomrm_work I am so worried somebody is going to try to fill in "name" in their driver....23:34
johnsomrm_work make it _name23:39
johnsomrm_work _dont_use_me_name?23:52
johnsomOh, I know, _n_a_m_e23:53
rm_workI mean23:56
rm_workwe can use _name and a @property23:56
rm_workI almost did that23:56
rm_workI mean, you're welcome to23:56
rm_workbut i can get to it as well23:56
rm_workjust ... not tonight, I have to head out, be out of pocket for ~7h23:57
rm_workanyway if they did that, any testing would tell them it gets overridden :P23:57
rm_workWe could also add a comment23:57
rm_workanywho, bbl23:57
johnsomJust trolling back23:59
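[Editor's note] The `_name` + `@property` idea rm_work floats above can be sketched like this (class and attribute names are illustrative, not the actual provider driver interface): store the value on a "private" attribute and expose it read-only, so a driver author can't accidentally overwrite it.

```python
class ProviderDriver:
    """Illustrative base class with a read-only provider name."""

    def __init__(self, name):
        self._name = name

    @property
    def name(self):
        # No setter is defined, so `driver.name = "x"` raises
        # AttributeError instead of silently clobbering the value.
        return self._name

d = ProviderDriver("amphora")
print(d.name)  # amphora
```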

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!