Thursday, 2023-01-19

opendevreviewMerged openstack/openstack-ansible stable/yoga: Add Glance tempest plugin repo to testing SHA pins list  https://review.opendev.org/c/openstack/openstack-ansible/+/87077800:43
opendevreviewchandan kumar proposed openstack/openstack-ansible-os_tempest master: Add support for whitebox-neutron-tempest-plugin  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/87081204:52
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: [doc] Fix storage architecture links  https://review.opendev.org/c/openstack/openstack-ansible/+/87105008:34
noonedeadpunklet's quickly land backport https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/87090710:16
jrosserdone10:28
*** dviroel|afk is now known as dviroel11:13
admin1i have a wildcard to *.domain.com that I use for my endpoint .. like cloud.domain.com .. i was wondering if it's possible on haproxy_extra_services to have the frontend listen to, say, s3.domain.com and have it pointed to a ceph backend? .. so that for object storage, the url becomes s3.domain.com 11:13
dok53Hi all, it's been a while :) Could anyone give me a pointer as to what could be causing this or where to look? https://paste.openstack.org/show/b7k0RYHwijjzU66SlGU1/11:19
noonedeadpunkadmin1: well, the frontend listens on an IP or interface, not on a domain. And for matching the domain name you need to add an ACL to the frontend IIRC11:21
noonedeadpunkand you can define haproxy_acls for haproxy_extra_services https://opendev.org/openstack/openstack-ansible-haproxy_server/src/branch/master/templates/service.j2#L71-L7811:22
noonedeadpunkso I'd say that should be possible11:22
noonedeadpunkdok53: oh, admin1 came by with the same trouble yesterday11:23
noonedeadpunkwere you able to sort it out in the end?11:23
dok53It passed that task once for me noonedeadpunk and failed on something else, but it was late so I decided to go back at it today, and the above is as far as I get11:29
noonedeadpunkdamn, I can totally recall seeing that behaviour of keystone, where it refuses to issue a token...11:32
noonedeadpunkoh! memcached? can it be that one of the memcached instances is down?11:33
noonedeadpunkor firewalled?11:33
dok53I'm not too sure as it's a new install so I'd imagine it should all be set up properly up to that point. I will look at that now though just in case11:37
noonedeadpunktry to telnet to the memcached port from the keystone container11:38
dok53Will do, just spotted "Failed unmounting /var/log/journal/2cce3301a44647dfa4e4644a99d5a4dc" in the memcached logs11:40
admin1dok53, i struggled with the same yesterday and did almost 10 different builds11:53
admin1do you have iptables running ? 11:53
admin1in the controllers ? 11:53
admin1look for rules ( or docker rules ) 11:53
dok53The memcached ip address is the gateway for the subnet I'm using, I guess that's not great11:54
noonedeadpunkhehe11:54
admin1and if you are using cephadm to deploy ceph and trying to make a controller also a mgr, this will also not work 11:54
admin1curl :5000 on the keystone url . does it work from everywhere ? 11:54
noonedeadpunkyeah, I think you need to take care of used_ips in openstack_user_config and re-create containers11:55
noonedeadpunkit's good though you've spotted that early enough11:55
dok53Yep I'll rebuild, I thought I had the first 20 IPs in that list but maybe I did something wrong. Thanks for the pointers as usual :)11:56
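For reference, the used_ips reservation mentioned above goes in openstack_user_config.yml; a minimal sketch with placeholder addresses (adjust to the real management subnet, gateway and VIPs):

    used_ips:
      - "172.29.236.1"                  # placeholder: subnet gateway
      - "172.29.236.2,172.29.236.20"    # placeholder: range covering static hosts and the haproxy VIPs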
opendevreviewMerged openstack/openstack-ansible-os_neutron stable/zed: Disable dhcp-agent and metadata-agent for OVN  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/87090712:46
admin1noonedeadpunk, thanks for the link and i also read the examples here https://docs.openstack.org/openstack-ansible-haproxy_server/latest/configure-haproxy.html .. but could not figure out how to do s3.domain.com -> backend 808012:47
noonedeadpunkadmin1: have you checked this? https://www.haproxy.com/blog/how-to-map-domain-names-to-backend-server-pools-with-haproxy/13:03
mgariepyyou guys probably saw it but posting anyway: https://github.blog/2023-01-17-git-security-vulnerabilities-announced-2/13:05
noonedeadpunkyeah, thanks!13:12
dok53noonedeadpunk yep seems that was the issue. I had excluded all IPs on the interfaces but not the gateway or the virtuals. Thanks for pointing me to it13:42
dok53Thanks admin1 for your input too13:43
jrosseradmin1: there is already an haproxy ACL for LetsEncrypt so you can see how a match was made on the path13:56
jrosserit should be pretty easy to extend that for matching a host as well13:57
jamesdentonFor Zed - any known issues rerunning the repo plays and getting stuck on "openstack.osa.glusterfs : Create gluster peers"?14:01
noonedeadpunkWell, I'd say that setting `haproxy_acls: {'ACL_s3': {'rule': 'hdr(host) -i s3.domain.com'}}` should work14:01
noonedeadpunkI guess14:01
noonedeadpunkbut yeah, referring to the Let's Encrypt one might be worth it14:02
jrosserhttps://github.com/openstack/openstack-ansible/blob/37813cc247ff150bb99079ee42d4caeaa136f757/inventory/group_vars/haproxy/haproxy.yml#L23814:02
jrosserleads to14:02
jrosserhttps://opendev.org/openstack/openstack-ansible-haproxy_server/src/branch/master/defaults/main.yml#L178-L18114:02
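Pulling the suggestions above together, a rough user_variables.yml sketch of such an extra service; the service name, backend group and ports are placeholders, and the exact keys should be checked against the service.j2 template and role defaults linked above:

    haproxy_extra_services:
      - service:
          haproxy_service_name: ceph-rgw              # placeholder service name
          haproxy_backend_nodes: "{{ groups['ceph-rgw'] | default([]) }}"   # assumed inventory group
          haproxy_ssl: true                           # terminate TLS with the wildcard cert
          haproxy_port: 443                           # note: a second frontend on the same VIP:443 would
                                                      # clash; in that case the ACL belongs on the existing
                                                      # frontend, as with the Let's Encrypt example
          haproxy_backend_port: 8080                  # assumed radosgw port
          haproxy_balance_type: http
          haproxy_acls:
            ACL_s3:
              rule: "hdr(host) -i s3.domain.com"      # route only requests for s3.domain.com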
noonedeadpunkjamesdenton: well I saw that but never dug deeper14:03
jamesdentonrestarting glusterd seems to get it to move along, i will let you know how the playbook progresses14:04
mgariepyfinally you are upgrading ?14:11
jamesdentoni couldn't help myself14:11
jamesdentoni just bumped the SHAs manually on a few14:12
jamesdentonInteresting... it's this command that's hanging, but only on infra01 - gluster volume status gfs-repo lab-infra01:/gluster/bricks/1 detail. And without 'detail', it works14:25
jamesdentonand it ends up hanging glusterd14:26
jrosserjamesdenton: what OS?14:35
jamesdenton20.0414:35
jrosserandrewbonney: ^ we didn't see this?14:36
andrewbonneyNo, worked fine for us14:36
jrosserhmm14:36
jamesdentondid it? ok good.14:36
jamesdentonahh, i think it's not cluster but some worse disk-related issue14:38
jamesdenton*gluster14:38
jamesdentondf -h is hanging here, too. And the 'detail' param queries disk14:38
jamesdentonmaybe i have a stale NFS mount or something14:39
jamesdentonalright, carry on. nothing to see here14:40
jamesdentonthanks jrosser andrewbonney 14:42
jrosserjamesdenton: it's trivial to replace gluster with NFS if you have that lying around anyway14:46
jrosserthere are vars in the repo server role that let you point systemd_mount role at whatever you've got14:47
jamesdentonthanks, i might consider that14:47
jrosserand there should be a bool to disable the whole gluster deploy as well14:47
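A user_variables.yml sketch of that option; the two top-level variable names are illustrative placeholders (the actual names are in the repo_server role defaults), while the mount entry follows the systemd_mount role's format:

    # NOTE: the top-level variable names below are placeholders - check the
    # repo_server role defaults for the real names before using this.
    repo_server_enable_glusterfs: false          # hypothetical bool to skip the gluster deploy
    repo_server_systemd_mounts:                  # hypothetical list handed to the systemd_mount role
      - what: "nfs.example.com:/srv/osa-repo"    # assumed NFS export
        where: "/var/www/repo"
        type: "nfs4"
        options: "_netdev,noatime"
        state: "started"
        enabled: true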
amaraoDoes anyone know which role/code writes venv_tag for hosts in facts? I see it for compute hosts and localhost, but don't see it for infra hosts...15:15
*** dviroel is now known as dviroel|lunch16:01
jrosseramarao: can you be a bit more specific? i see /etc/ansible/facts.d/openstack_ansible.fact on my infra host..... is that what you mean?16:21
*** dviroel|lunch is now known as dviroel16:57
admin1jrosser, something like this ? https://paste.openstack.org/show/bwQ4wQm949EskcQ7yT9q/17:29
admin1please ignore the first line : true17:30
jrosseradmin1: perhaps, though it's pretty confusing to re-use the var named letsencrypt for something completely different17:35
jrosserperhaps put the ansible aside for a moment and adjust the haproxy config directly first to understand how it works17:40
jrosserlike this https://cheat.readthedocs.io/en/latest/haproxy.html#route-based-on-host-request-header17:42
noonedeadpunkfolks, what could be the reason that ovs on a new compute is not listening on 6640? Does it need some special config?17:47
noonedeadpunkthat's not an osa-setup node fwiw17:47
noonedeadpunkI don't have `is_connected: true` for manager for some reason...17:50
noonedeadpunkah. seems like the wrong command for set-manager - used `ovs-vsctl set-manager tcp:127.0.0.1:6640` instead of `ovs-vsctl set-manager ptcp:6640:127.0.0.1`17:52
moha7To use two IPs from the subnet 172.17.222.0/24 for the HAProxy internal & external VIPs, how should each one be written: `172.17.222.35/32` or `172.17.222.35/24`?18:08
moha7As far as I know, we use the mask when configuring interfaces18:09
moha7I mean in the file `user_variables.yml`18:09
moha7`haproxy_keepalived_internal_vip_cidr: "<vip>/<mask>"`18:10
spatelall vip addresses should be /32 18:14
moha7+118:16
moha7172.17.222.35 is the same with or without the /32 (i.e. a single IP), right? Then I think the /32 is a bit confusing if it's not necessary there.18:17
noonedeadpunkhm. why did I even need ovs-vsctl set-manager...18:28
noonedeadpunkmoha7: no, it's not absolutely the same, as that's being parsed and fed to `ip`18:29
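For reference, the user_variables.yml form being discussed, using the /32 host mask suggested above; the external address and interface names are placeholders:

    haproxy_keepalived_internal_vip_cidr: "172.17.222.35/32"
    haproxy_keepalived_external_vip_cidr: "203.0.113.35/32"   # placeholder external VIP
    haproxy_keepalived_internal_interface: br-mgmt            # adjust to your interfaces
    haproxy_keepalived_external_interface: eth0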
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible stable/zed: Bump OpenStack-Ansible Zed  https://review.opendev.org/c/openstack/openstack-ansible/+/87115220:04
noonedeadpunkHow did we end up having 1 release note o_O20:04
noonedeadpunkor we merged directly to integrated...20:05
admin1moha7, better use 2 interfaces or ip ranges ? 20:13
admin1yeah .. or add 2 ips in the same range 20:14
moha7For security reasons? yup20:14
admin1but in prod, you are usually going to have 2 diff ones 20:14
admin1one that does 1:1 nat from public  that is routed, and the other one is 172.29 that is unrouted20:14
admin1or if you don't access the public ip but are just using internal, you can use netplan to also add 10.10.10.x on the interface, and then give that ip to the controllers 20:15
admin1so in same interface, you can have multiple ranges20:15
moha7Cool; I didn't know about two ranges on the same interface20:18
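A minimal netplan sketch of what admin1 describes, with a placeholder interface name and addresses; two ranges sit on the same interface, and the second can then carry the internal VIP:

    network:
      version: 2
      ethernets:
        eno1:                       # assumed interface name
          addresses:
            - 192.0.2.10/24         # placeholder routed/public-facing range
            - 10.10.10.10/24        # placeholder internal range on the same interface
          routes:
            - to: default
              via: 192.0.2.1        # placeholder gateway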
*** dviroel is now known as dviroel|out21:05
* jamesdenton needs to review Yoga deprecations before upgrading to Zed :|21:25
jamesdentonglance_remote_client got me21:25
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Remove "warn" parameter from command module  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/86966221:25
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-specs master: Blueprint for separated haproxy service config  https://review.opendev.org/c/openstack/openstack-ansible-specs/+/87118721:38
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Prepare haproxy role for separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/87118821:42
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: Prepare service roles for separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible/+/87118921:43
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-os_keystone master: Enable separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/87119021:47
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-os_nova master: Enable separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/87119121:48
*** arxcruz|ruck is now known as arxcruz21:50
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-galera_server master: Enable separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/87119221:50
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-os_horizon master: Enable separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/87119321:51
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: [DNM] Remove temporary tweaks related to separated haproxy service config  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/87119421:53
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible master: [DNM] Remove temporary tweaks related to separated haproxy service config  https://review.opendev.org/c/openstack/openstack-ansible/+/87119521:55
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-galera_server master: Enable separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/87119221:58
damiandabrowskihey folks, I'm leaving for a vacation, will be back on 30th Jan22:01
damiandabrowskiI wanted to push a blueprint and proposed changes for "separated haproxy service config". it can be found here: https://review.opendev.org/q/topic:separated-haproxy-service-config22:02
jrosserdamiandabrowski: why are the changes to the service roles needed, only adding a default scoped to the role that’s not otherwise used?22:55
damiandabrowskiyes, scope is the main reason because otherwise we'll probably need to define all haproxy services in group_vars/all which doesn't sound optimal23:03
damiandabrowskianother minor benefit is that instead of having haproxy_keystone_service, haproxy_placement_service etc. we just have one variable name: haproxy_services23:04
jrosserI don’t understand what those do though23:04
jrosserthe role does not use the var23:04
jrosseranyway, late now23:05
damiandabrowskithe role itself doesn't, but playbooks like galera-install.yml and os-keystone-install.yml do use it via https://review.opendev.org/c/openstack/openstack-ansible/+/871189/1/playbooks/common-tasks/haproxy-service-config.yml23:07
