Thursday, 2019-02-14

colin-that crashplan for family was probably the best solution i've ever used00:11
colin-in retrospect00:11
*** dtruong has joined #openstack-lbaas00:28
eanderssonI'll leave the patch as-is for now, feel free to contribute to the patch directly00:39
eanderssonif you want to change the general pattern00:40
eanderssonI strongly prefer to stay consistent with other projects, if you don't agree I won't get offended if you change it00:42
rm_workcool :)00:48
eanderssonor jude can fix it :p00:48
eanderssonfeel free to hit him up00:49
rm_workworth noting i get this code-smell in VSCode: [bandit] Use of assert detected. The enclosed code will be removed when compiling to optimised byte code. [B101]00:51
rm_workon those assert lines00:51
rm_workwhich is ... O_o00:51
eanderssonhaha asserts are the worst00:52
rm_workrunning tests presently and i'll push an updated version00:53
eanderssonthanks rm_work appreciate that00:53
johnsomAh, so the security scanner is also giving us an eye00:56
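For context on the B101 warning quoted above, here is a minimal sketch (not Octavia code; the function names and the 0-256 range are made up for illustration). The point bandit is making is that asserts are stripped when Python runs with -O, so they are not reliable runtime checks:

    # The assert is removed when Python is run with -O (optimised bytecode),
    # which is exactly what bandit's B101 check warns about.
    def set_weight(weight):
        assert 0 <= weight <= 256, "weight out of range"  # gone under "python -O"
        return weight

    # A pattern bandit does not flag: raise explicitly so the check always runs.
    def set_weight_checked(weight):
        if not 0 <= weight <= 256:
            raise ValueError("weight must be between 0 and 256")
        return weight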
eanderssonIs VSCode amazing for Python now?00:59
eanderssonPeople seem to love it00:59
*** sapd1 has joined #openstack-lbaas01:01
rm_worki'm giving it a fair shake01:02
rm_worki'm not sure01:02
*** fnaval has joined #openstack-lbaas01:12
openstackgerritAdam Harwell proposed openstack/octavia master: Fix oslo messaging connection leakage  https://review.openstack.org/63642801:16
rm_worki think so far i still prefer pycharm but it could be due to my experience with it, which is why i'm still trying VSCode rather than sticking with my initial reaction of "wtf is this"01:17
*** fnaval has quit IRC01:41
*** hongbin has joined #openstack-lbaas02:07
*** Dinesh_Bhor has joined #openstack-lbaas02:18
*** phuoc has joined #openstack-lbaas02:42
*** abaindur has quit IRC02:51
*** psachin has joined #openstack-lbaas02:55
*** Dinesh_Bhor has quit IRC03:40
*** Dinesh_Bhor has joined #openstack-lbaas04:03
*** ramishra has joined #openstack-lbaas04:09
*** obondarev has joined #openstack-lbaas04:15
*** hongbin has quit IRC05:44
*** yboaron_ has joined #openstack-lbaas05:58
*** ramishra_ has joined #openstack-lbaas06:00
*** ramishra has quit IRC06:01
*** gcheresh has joined #openstack-lbaas06:11
*** ramishra_ is now known as ramishra06:17
*** ccamposr has joined #openstack-lbaas07:04
*** gcheresh has quit IRC07:31
*** gcheresh has joined #openstack-lbaas07:32
*** velizarx has joined #openstack-lbaas07:40
*** velizarx has quit IRC07:56
*** velizarx has joined #openstack-lbaas07:59
*** ramishra has quit IRC08:08
*** rpittau has joined #openstack-lbaas08:17
*** ramishra has joined #openstack-lbaas08:18
*** phuoc has quit IRC08:56
*** phuoc has joined #openstack-lbaas08:57
*** salmankhan has joined #openstack-lbaas10:25
*** salmankhan has quit IRC10:26
*** salmankhan has joined #openstack-lbaas10:26
*** sapd1 has quit IRC10:29
*** salmankhan1 has joined #openstack-lbaas10:31
*** salmankhan has quit IRC10:32
*** salmankhan1 is now known as salmankhan10:32
*** yamamoto has quit IRC10:40
*** mkuf has quit IRC10:51
*** yamamoto has joined #openstack-lbaas11:21
*** sapd1 has joined #openstack-lbaas11:24
*** yamamoto has quit IRC11:28
*** mkuf has joined #openstack-lbaas11:51
*** yamamoto has joined #openstack-lbaas11:57
*** rpittau has quit IRC11:58
*** sapd1 has quit IRC12:03
*** velizarx has quit IRC12:06
*** velizarx has joined #openstack-lbaas12:08
*** yamamoto has quit IRC12:10
*** yamamoto has joined #openstack-lbaas12:14
*** yamamoto has quit IRC12:22
*** sapd1 has joined #openstack-lbaas12:23
*** yamamoto has joined #openstack-lbaas12:42
*** rpittau has joined #openstack-lbaas12:56
*** yamamoto has quit IRC13:00
*** yamamoto has joined #openstack-lbaas13:03
*** yamamoto has quit IRC13:06
*** yamamoto has joined #openstack-lbaas13:06
*** yamamoto has quit IRC13:07
*** sapd1 has quit IRC13:08
*** yamamoto has joined #openstack-lbaas13:13
*** trown|outtypewww is now known as trown13:16
*** Dinesh_Bhor has quit IRC13:18
*** yamamoto has quit IRC13:19
*** yboaron_ has quit IRC13:36
*** yboaron_ has joined #openstack-lbaas13:36
*** yamamoto has joined #openstack-lbaas13:46
*** yamamoto has quit IRC13:46
*** yamamoto has joined #openstack-lbaas13:46
*** yamamoto has quit IRC13:47
*** yamamoto has joined #openstack-lbaas13:47
*** velizarx has quit IRC14:04
*** velizarx has joined #openstack-lbaas14:04
*** yamamoto has quit IRC14:09
*** yboaron_ has quit IRC14:20
*** psachin has quit IRC14:22
*** yboaron_ has joined #openstack-lbaas14:38
*** yamamoto has joined #openstack-lbaas14:41
*** yboaron_ has quit IRC14:42
*** yboaron_ has joined #openstack-lbaas14:42
*** yamamoto has quit IRC14:46
*** velizarx has quit IRC14:52
*** gcheresh has quit IRC15:39
*** fnaval has joined #openstack-lbaas15:53
*** yboaron_ has quit IRC15:57
*** ramishra has quit IRC16:07
*** trident has quit IRC16:46
*** ianychoi has joined #openstack-lbaas17:06
*** rpittau has quit IRC17:22
*** salmankhan has quit IRC17:38
*** trown is now known as trown|lunch17:49
*** ccamposr has quit IRC18:43
*** dims has quit IRC18:47
*** numans has quit IRC19:19
*** abaindur has joined #openstack-lbaas19:47
*** KeithMnemonic has joined #openstack-lbaas20:02
*** yboaron_ has joined #openstack-lbaas20:07
*** trown|lunch is now known as trown20:17
*** abaindur has quit IRC21:09
*** abaindur has joined #openstack-lbaas21:10
*** abaindur has quit IRC21:31
*** abaindur has joined #openstack-lbaas21:32
-openstackstatus- NOTICE: Jobs are failing due to ssh host key mismatches caused by duplicate IPs in a test cloud region. We are disabling the region and will let you know when jobs can be rechecked.21:32
rm_workeandersson: could you poke jabach to look at my response to his comment?21:33
openstackgerritMichael Johnson proposed openstack/octavia master: Add client_ca_tls_container_ref to listener API  https://review.openstack.org/61226721:50
*** mloza has joined #openstack-lbaas21:52
mlozaHi, how can I delete a loadbalancer that is stuck in Pending State? I'm using Octavia21:52
mloza--cascade didn't help21:52
mlozaDo I need to update the status in the database?21:52
mlozaStatus is PENDING UPDATE21:53
rm_workmloza: yes, but most likely that will cause you some issues because PENDING means a thread is still working on it21:59
rm_workmloza: how long has it been? and do you know WHY it was stuck?21:59
johnsommloza PENDING states generally mean one of the controllers is still working on the request and has not yet given up retrying.21:59
johnsomYeah, what he said....21:59
rm_workif you have restarted the api/worker processes, then it would be safe to update the status in the DB21:59
rm_workotherwise, it should time out to ERROR after a while22:00
rm_worki forget what our default timeouts are, but i do believe they are quite long :/22:00
johnsom25+ minutes.  I really should go lower those22:01
johnsomAnd then set them high in the zuul job configs for those sssslllllloooowwww nodepool instances.22:02
*** trown is now known as trown|outtypewww22:02
rm_workyes lol22:03
cgoncalvesdoes anyone remember the reason, if any, for not allowing a forced status change? a use case like the one just mentioned (one restarts the worker/hm) will leave the resource stuck in a transient status22:04
johnsomIt is SUPER dangerous to do and 99.999999% of the time the wrong thing to do. Plus, normal restarts don't cause this.22:05
johnsomOnly kill -9 could cause that22:05
cgoncalveswhy wouldn't a normal restart cause it? aren't flows aborted?22:06
mlozarm_work: I'm testing failover by shutting down the node. This is a testing environment. It's been stuck for more than 15 minutes22:06
mlozaIs there a timeout on this that will change the status to ERROR state?22:07
johnsomcgoncalves No, they are not. They shut down gracefully.22:07
mlozas/node/controller node/22:07
mlozaI have 3 controllers22:07
cgoncalvesjohnsom, ok. thank you22:07
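To illustrate the graceful-shutdown point above, a rough sketch only (not Octavia's actual service code): a worker that catches SIGTERM finishes its in-flight flow before exiting, so the resource leaves its PENDING_* status; kill -9 (SIGKILL) cannot be caught, so the flow is cut off and the transient status is left behind.

    import signal
    import time

    shutting_down = False

    def handle_sigterm(signum, frame):
        # Stop accepting new work, but let the current flow run to completion.
        global shutting_down
        shutting_down = True

    signal.signal(signal.SIGTERM, handle_sigterm)

    while not shutting_down:
        # ... run one flow to completion, which moves the resource out of
        # its PENDING_* status before this loop is ever allowed to exit.
        time.sleep(1)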
rm_workmloza: you're testing failover of ... what? by shutting down the octavia controller nodes?22:10
johnsomIt is this configuration setting: https://docs.openstack.org/octavia/latest/configuration/configref.html#haproxy_amphora.connection_max_retries22:11
mlozarm_work: yes22:11
rm_workmloza: what are you testing failover of?22:11
rm_worki'm just worried about a possible misunderstanding :)22:12
mlozacontroller node failover22:13
mlozaI have restarted the octavia services22:13
mlozaNow I need to update the database22:14
-openstackstatus- NOTICE: The test cloud region using duplicate IPs has been removed from nodepool. Jobs can be rechecked now.22:14
mlozawhich column do I need to update?22:14
johnsomprovisioning_status on the load balancer22:16
johnsommloza If you kill -9 them while a flow is active, yes, you might hit this issue.22:17
johnsomrm_work Should we drop that to 10 minutes or 5?22:17
mlozajohnsom: is it the neutron database?22:17
johnsommloza octavia22:17
rm_workjohnsom: 10 is still safe for even most slow stuff22:18
rm_workso maybe that22:18
rm_workit's coming down from like 30, right?22:18
johnsomOk, yeah, it was 2522:18
johnsom300 retries with a 5 second interval22:19
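Putting those numbers together, a sketch of the relevant octavia.conf settings (based on the option linked above; the retry-interval option name is taken from the same [haproxy_amphora] group): 300 retries at a 5 second interval is the roughly 25 minutes mentioned earlier, while 120 retries works out to the ~10 minutes being discussed.

    [haproxy_amphora]
    # Default is 300 retries x 5 seconds = ~25 minutes before giving up;
    # 120 x 5 seconds = ~10 minutes, as suggested above.
    connection_max_retries = 120
    connection_retry_interval = 5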
mlozajohnsom: should I update the status to ACTIVE or to DELETED?22:19
johnsomI would do ERROR then cascade delete to make sure all of the resources get cleaned up22:19
mlozajohnsom: Alright. Thanks22:19
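Spelling out the recovery path described above as a sketch (only after the controllers have been restarted; the table name and the UUID placeholder are illustrative, while the column and database are the ones named above):

    -- Run against the "octavia" database; replace the placeholder UUID.
    UPDATE load_balancer
       SET provisioning_status = 'ERROR'
     WHERE id = '<load-balancer-uuid>';

After that, openstack loadbalancer delete <load-balancer-uuid> --cascade should clean up the remaining resources, as suggested above.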
*** yboaron_ has quit IRC22:21
johnsomrm_work What do you think about switching the default taskflow from serial mode to parallel?22:22
rm_workuhhh22:22
johnsomIt's another default that bugs me22:22
rm_workthat is maybe a little scary>?22:22
johnsomOk, I will leave it22:23
*** yamamoto has joined #openstack-lbaas22:30
xgermanI like parallel22:58
xgermanwe should at least change it in OSA22:58
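For reference, a small standalone sketch of the serial-versus-parallel taskflow engines being discussed, using the taskflow library directly (these are not Octavia's flows; the task and flow names are made up):

    import time
    from concurrent import futures

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import unordered_flow

    class Sleep(task.Task):
        def execute(self):
            time.sleep(1)

    flow = unordered_flow.Flow('demo')
    flow.add(Sleep(name='a'), Sleep(name='b'))

    # Serial engine (the current default): tasks run one after another.
    engines.run(flow, engine='serial')

    # Parallel engine: independent tasks can run concurrently in an executor.
    with futures.ThreadPoolExecutor(max_workers=2) as executor:
        engines.run(flow, engine='parallel', executor=executor)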
openstackgerritCarlos Goncalves proposed openstack/octavia-tempest-plugin master: WIP: Add active/standby scenario test  https://review.openstack.org/63707323:05
openstackgerritMichael Johnson proposed openstack/octavia-tempest-plugin master: Update the live jobs to set higher retries  https://review.openstack.org/63707423:06
openstackgerritCarlos Goncalves proposed openstack/octavia-tempest-plugin master: WIP: Add active/standby scenario test  https://review.openstack.org/63707323:07
*** yamamoto has quit IRC23:33
openstackgerritMichael Johnson proposed openstack/octavia master: Set the default retries down to 120  https://review.openstack.org/63707723:38
johnsomThere, you guys can decide on that.23:39
*** abaindur_ has joined #openstack-lbaas23:40
*** abaindu__ has joined #openstack-lbaas23:43
*** abaindu__ is now known as abaindur__23:43
*** abaindur has quit IRC23:43
*** abaindur_ has quit IRC23:45
*** fnaval has quit IRC23:48
*** abaindur__ has quit IRC23:50
*** abaindur has joined #openstack-lbaas23:50
*** abaindur_ has joined #openstack-lbaas23:52
*** abaindur has quit IRC23:54
*** abaindur has joined #openstack-lbaas23:55
*** abaindur_ has quit IRC23:57
*** abaindur_ has joined #openstack-lbaas23:58
