Wednesday, 2021-10-13

01:34 <opendevreview> Erik Olof Gunnar Andersson proposed openstack/designate master: [DNM] Fix dns.query and centralize implementation  https://review.opendev.org/c/openstack/designate/+/813722
01:40 <opendevreview> Erik Olof Gunnar Andersson proposed openstack/designate master: [DNM] Fix dns.query and centralize implementation  https://review.opendev.org/c/openstack/designate/+/813722
02:18 <opendevreview> Erik Olof Gunnar Andersson proposed openstack/designate master: Fix dns.query.tcp/udp not always handling ipv6 properly  https://review.opendev.org/c/openstack/designate/+/813722
02:38 <opendevreview> Erik Olof Gunnar Andersson proposed openstack/designate master: Fix dns.query.tcp/udp not always handling ipv6 properly  https://review.opendev.org/c/openstack/designate/+/813722
04:39 <opendevreview> Erik Olof Gunnar Andersson proposed openstack/designate-tempest-plugin master: [DNM] Testing  https://review.opendev.org/c/openstack/designate-tempest-plugin/+/813738
05:55 <eandersson> I don't think I fully understood the problem.
05:55 <eandersson> Makes sense johnsom
07:16 <opendevreview> Arkady Shtempler proposed openstack/designate-tempest-plugin master: Add "cleanup" for created recordsets + delete zone test  https://review.opendev.org/c/openstack/designate-tempest-plugin/+/796469
10:50 *** eandersson8 is now known as eandersson
16:00 <ozzzo_work> in http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025292.html I was advised to set up redis to fix my "DBDuplicateEntry" problem
16:01 <ozzzo_work> so we installed redis and allowed ports 6379 and 26379, and it appears that redis is working, but we still get the duplicate entry errors, and DNS fails when that error occurs
16:01 <ozzzo_work> what am I missing?
16:05 <frickler> ozzzo_work: you need to configure designate to actually use redis as the coordination backend?
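For context, frickler is referring to the tooz coordination setting in designate.conf. A minimal sketch of what that looks like, with placeholder host, password, and port (the deployment's real values appear in the next messages):

    # /etc/designate/designate.conf -- placeholder values for illustration
    [coordination]
    # tooz backend URL; designate uses this for distributed locking
    backend_url = redis://:changeme@192.0.2.10:6379?socket_timeout=60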
16:11 <ozzzo_work> it pulls from the redis_enabled value
16:14 <ozzzo_work> I looked in the designate_producer container and I see it in /etc/designate/designate.conf: backend_url = redis://admin:Z59Lekw5HODiBVbS85BHk9ruhEyrb9sT8btt2sTl@10.221.176.48:26379?sentinel=kolla&sentinel_fallback=10.221.176.173:26379&sentinel_fallback=10.221.177.38:26379&db=0&socket_timeout=60&retry_on_timeout=yes
16:15 <ozzzo_work> and I can hit the port with nc, from the container
16:15 <ozzzo_work> it seems like redis is working, but I still see the DB error
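That connectivity check might look something like this from inside the designate_producer container (address and port taken from the backend_url above; redis-cli is optional and only works if it is installed in the image):

    # zero-I/O port scan against the sentinel address from the config above
    nc -zv 10.221.176.48 26379
    # if redis-cli is available, a sentinel PING is a stronger liveness check
    redis-cli -h 10.221.176.48 -p 26379 ping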
17:53 <frickler> ozzzo_work: there's also https://bugs.launchpad.net/designate/+bug/1940976 , so there might be an issue with parallel threads. this probably needs further investigation
17:54 <johnsom> I thought the DLM changes Erik made resolved that issue as well.
18:04 <eandersson> It should have resolved the race condition, unless there is a new issue.
18:05 <eandersson> I wonder if they are using the API or the sink?
18:08 <eandersson> I would double-check that he is running 9.0.2 and not 9.0.1
18:09 <eandersson> because 9.0.2 was released with this fix back in February
18:11 <eandersson> If they are using the sink they might need this patch: https://github.com/openstack/designate/commit/4869913519e0b7bb12b4ba1ef6b7ce8aabb53825
18:11 <eandersson> It's the only time I have seen that type of database error in our deployment
18:14 <eandersson> and it does not look like it was backported to Train
18:15 <eandersson> nvm, it is in Train: https://opendev.org/openstack/designate/commit/0174797a52d8c2efa6581a97adfec95977511024
18:17 <eandersson> ozzzo_work: Are there any mentions of coordination in the logs? Also, can you make sure you are running at least 9.0.2?
20:13 <ozzzo_work> eandersson: I'll take a look, ty!
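Two quick ways to check both of those, assuming a kolla-ansible layout (the container name and log path here are illustrative and may differ per deployment):

    # confirm the installed designate version is at least 9.0.2
    docker exec designate_producer pip show designate | grep Version
    # look for coordination start-up messages in the producer log
    grep -i coordination /var/log/kolla/designate/designate-producer.log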
22:49 -opendevstatus- NOTICE: Both Gerrit and Zuul services are being restarted briefly for minor updates, and should return to service momentarily; all previously running builds will be reenqueued once Zuul is fully started again
