Friday, 2023-01-20

opendevreviewMerged openstack/openstack-ansible-rabbitmq_server master: Remove "warn" parameter from command module  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/86966301:24
jamesdentonYoga (OVN) -> Zed (OVN) complete with minimal fuss, thanks all. neutron_ovn_ssl set to 'false' prior to maint is needed, as well as cleaning up the env.d beforehand. Only real issue (besides the stale NFS mount causing grief) is related to Ironic tftp and inspector changes that may need to be accounted for04:15
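A minimal sketch of the pre-maintenance override jamesdenton mentions, assuming the usual /etc/openstack_deploy/user_variables.yml override file; the variable name is taken from the message above, so verify it against the neutron role defaults for your release:

    # /etc/openstack_deploy/user_variables.yml
    # Disable OVN SSL ahead of the Yoga -> Zed maintenance, per the note above.
    neutron_ovn_ssl: false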
noonedeadpunkNeed some votes on https://review.opendev.org/q/topic:bump_osa+status:open :)08:49
noonedeadpunkHm, I kind of don't understand how variables from role defaults will have any effect on the playbook....09:04
noonedeadpunk(it's regarding haproxy thing)09:05
noonedeadpunkI will totally need to deploy this thing to understand, as right now I'm not following it09:09
jrossernoonedeadpunk: I did not even see yet where those role vars are supposed to be used09:28
jrosserthe ones starting with _09:28
noonedeadpunkThey're used as default for haproxy_services09:29
noonedeadpunkand haproxy_services is for haproxy role09:29
noonedeadpunkBut I don't really understand how it works....09:29
jrosseromg09:29
noonedeadpunkas last line is `haproxy_services: "{{ haproxy_nova_services | default(_haproxy_services) }}"`09:30
noonedeadpunk(in nova role defaults)09:30
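Roughly the pattern being discussed, as it would sit in the nova role defaults; the service entry and its keys below are an illustrative sketch, not the actual proposed change:

    # defaults/main.yml of the nova role (illustrative sketch)
    _haproxy_services:
      - haproxy_service_name: nova_api_os_compute
        haproxy_port: 8774
        haproxy_backend_nodes: "{{ groups['nova_api_os_compute'] | default([]) }}"

    # A deployer can still override the whole list with haproxy_nova_services;
    # otherwise the role-provided default above is what gets handed to the
    # haproxy role.
    haproxy_services: "{{ haproxy_nova_services | default(_haproxy_services) }}"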
jrossersurely these just go in the playbooks09:30
noonedeadpunkI don't feel we're reducing complexity either09:30
noonedeadpunkBut likely I just don't understand smth09:31
jrosserimho we should not touch the other roles with any of this09:33
noonedeadpunkWell, if we're not including haproxy role from other roles - there's no reason to touch them for sure09:34
noonedeadpunkso I do agree with you here09:35
jrosserI think there might be some middle ground with much less changes09:51
jrosserthough fundamentally this really seems to be a question of whether we configure haproxy up front in one go, or incrementally as the deployment proceeds09:52
jrosserand deciding which of those is preferable seems step #009:52
noonedeadpunkI kind of like the idea of configuring backends when we run a specific service, as historically there have been a lot of issues where we re-configure backends on a haproxy role run and affect services that aren't planned to run until hours later09:56
jrosserperhaps just filtering the haproxy_services list in each playbook is much simpler09:56
jrosserkind of like we do inside roles already for filtered_services09:57
jrosserthen it would only deploy a subset of the list with each service playbook09:57
noonedeadpunkI was thinking even to just feed haproxy_*_service inside the playbook...09:57
noonedeadpunkor filter, yes09:57
jrosserright yes either of those is possible09:58
noonedeadpunkbased on backend nodes or smth09:58
jrosserfilter might be better as then the playbook doesn’t have to know how many things a service has09:58
noonedeadpunkyeah, we just kind of need to find smth consistent to filter on09:59
noonedeadpunkand like nova is not that easy probably?09:59
jrosserperhaps that sort of approach leads to the same result Damian has worked on but with simplification09:59
noonedeadpunkor well, based on backend nodes might be valid thing10:00
jrossernova and ironic are probably the most complex10:00
noonedeadpunkwe're running the play against a group, and if a host is in this group we likely can run the backend... But yes, for nova it won't be perfect, as we will configure consoles when running api or metadata10:01
jrosserperhaps this is worth a prototype10:02
jrossersounds relatively simple to try for something like glance or keystone just to see what the code looks like10:02
noonedeadpunkyup10:03
jrossernoonedeadpunk> I was thinking even to just feed haproxy_*_service inside the playbook...10:21
jrosser^ you know maybe this is exactly what we need - maybe no actual benefit from making it more complicated10:22
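A minimal sketch of what that simpler approach could look like, assuming the haproxy role is invoked from each service playbook with just that service's definition; the play structure and variable names (haproxy_glance_services, haproxy_server) are illustrative, not the actual patch:

    # os-glance-install.yml (illustrative sketch)
    - name: Configure haproxy backends for glance only
      hosts: haproxy
      roles:
        - role: haproxy_server
          vars:
            # Feed only this service's entries rather than the full list.
            # Filtering the full list with selectattr(), as discussed above,
            # would be the alternative to passing a dedicated variable.
            haproxy_services: "{{ haproxy_glance_services | default([]) }}"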
noonedeadpunkthe only concern I do have about this is that historically haproxy_default_services was the only way to override any given backend. So I assume plenty of deployments fully overrode it a long time ago and rely on it10:37
noonedeadpunkBut I'm not sure how much we should worry about that. Like haproxy_*_service has been a thing for a couple of releases at least, and given a solid release note....10:38
noonedeadpunkit might be fine to say that well, it's time to revise your overrides10:39
*** dviroel|out is now known as dviroel10:41
jrosserit would also be nice to be able to run everything on port 44310:47
noonedeadpunkah, yes, that is a sweet thingy11:03
noonedeadpunkbut we need that only for frontends I believe11:03
noonedeadpunkit doesn't matter what will be on the backend side11:04
noonedeadpunkThough it's not only haproxy ACLs11:04
noonedeadpunkAs endpoints also need to be set accordingly11:04
noonedeadpunkand the tricky thing is that glance_service_port, for example, is currently used both to determine the endpoint URI and where the backend will bind11:05
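Roughly the coupling noonedeadpunk describes, sketched with only glance_service_port taken from the discussion; the surrounding variable names are illustrative:

    # Illustrative sketch: one port variable feeds two different things.
    glance_service_port: 9292
    # 1) the endpoint URI registered in keystone and used by clients
    glance_service_publicuri: "https://{{ external_lb_vip_address }}:{{ glance_service_port }}"
    # 2) the port the glance backend binds to and haproxy forwards to
    glance_api_bind_port: "{{ glance_service_port }}"
    # Moving every frontend to 443 therefore means splitting these two uses apart.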
gokhanishello folks, I can ssh to the nodes manually but ansible cannot connect to them. When I delete the control path I can run this command > SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/3e38b90850 '[x.x.x.x]'11:32
gokhanisbut with control path I can't connect 11:32
gokhanisI tried with Ansible versions 2.13.7 and 2.10.17; neither worked.  11:36
opendevreviewMerged openstack/openstack-ansible stable/yoga: Bump OpenStack-Ansible Yoga  https://review.opendev.org/c/openstack/openstack-ansible/+/87081011:58
admin1instead of having just 1 haproxy, are we moving to add a haproxy in each container? 12:10
jamesdentonthat's the juju way12:24
gokhanisit was an MTU problem, ignore my question 12:25
admin1i still could not get s3.domain.com to work :(  ..  otherwise my plan was to create sections like auth.cloud.domain.com and images.cloud.domain.com, map all endpoints to the right backend, and then block the other ports from the firewall so that everything is on https:// 12:37
admin1and update the public endpoint URL to the correct one12:37
admin1i am thinking of doing this via a post-install ansible playbook that will add the sections to haproxy and update it 12:38
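A hypothetical sketch of what admin1's post-install step could set, assuming the haproxy_server role's extra-services mechanism; every key below should be checked against the haproxy_server role defaults for the release in use before relying on it:

    # /etc/openstack_deploy/user_variables.yml (hypothetical sketch)
    haproxy_extra_services:
      - haproxy_service_name: images_cloud_domain_com
        haproxy_port: 443
        haproxy_ssl: true
        haproxy_backend_nodes: "{{ groups['glance_api'] | default([]) }}"
        haproxy_backend_port: 9292
    # Routing by Host header (images.cloud.domain.com vs auth.cloud.domain.com)
    # still needs an ACL on the shared 443 frontend, which is the part that has
    # to be wired up separately, along with updating the public endpoint URLs.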
dokeeffe85Hi all, long winded question so I dropped it here - https://paste.openstack.org/show/bdYw6r944dixmJRqZ3yq/ thanks for any response :)12:59
jrosseradmin1: share what you tried to get s3.domain.com to work13:08
jrosseradmin1: to answer your previous question, no, it is not a proposal to add haproxy in each container13:10
jrosseradmin1: but instead to make the playbook for each service set up haproxy for just that service, rather than doing all of them at the start with the haproxy playbook13:11
opendevreviewMerged openstack/openstack-ansible stable/zed: Bump OpenStack-Ansible Zed  https://review.opendev.org/c/openstack/openstack-ansible/+/87115213:16
noonedeadpunksweet ^14:02
jamesdentonsweet.14:03
jamesdentoni will bump my zed to new zed today14:04
mgariepyjamesdenton, you upgraded a day too early ;p14:04
jamesdentoni know, right? lol14:04
mgariepyhow went the ovn ssl stuff?14:04
jamesdentonWell, i skipped the SSL stuff to not break my deployment14:04
jamesdentonin prior testing, non-ssl to ssl did not go well14:05
jamesdentonand i haven't revisited it, yet14:05
mgariepyha ok14:05
jamesdentonbut, i am happy with how seamlessly the process went overall once i revisited the deprecations14:06
jamesdentonnoonedeadpunk A user on the mailing list hit a bug with the Y->Z upgrade that I also hit yesterday. https://paste.opendev.org/show/b7qIl2idoic46ewTOFDK/. Except I hit it with both the ansible-etcd and ansible-pacemaker-corosync repos. I set the version to the latest commit instead of master and that seemed to work14:24
noonedeadpunkI've seen a bug report regarding that14:25
noonedeadpunkwell, if the latest commit does work, then the new release I've pushed should work as well14:25
jamesdentonWell, the latest release doesn't touch the ansible-role-requirements block for ansible-etcd or ansible-pacemaker-corosync, so not sure if that will make a difference or not14:26
jamesdentonthis was my workaround: https://paste.opendev.org/show/bLLV9QkPIs5A8UgS1Lb7/14:27
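The shape of that workaround in ansible-role-requirements.yml: pin the two GitHub-hosted roles to a known-good commit instead of master. The role names, URLs and SHAs below are placeholders; the real values are in the paste above:

    # ansible-role-requirements.yml (placeholder values)
    - name: etcd
      scm: git
      src: https://github.com/...            # GitHub-hosted, unlike the opendev roles
      version: <known-good commit SHA>       # instead of 'master'
    - name: pacemaker_corosync
      scm: git
      src: https://github.com/...
      version: <known-good commit SHA>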
noonedeadpunkoh14:46
noonedeadpunkSeems like my bump script went south14:46
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible stable/zed: Fix bump of github repos  https://review.opendev.org/c/openstack/openstack-ansible/+/87129615:46
noonedeadpunk^ that should fix it15:46
*** dviroel is now known as dviroel|lunch15:49
noonedeadpunkwow, https://monty-says.blogspot.com/2022/12/i-want-to-wish-you-happy-new-year-with.html looks quite interesting.... Wonder how dramatically it will break....16:17
noonedeadpunkbut, I think for AA we will finally get our mariadb version updated, as 10.11 is supposed to be an LTS one16:18
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server master: Bump rabbitmq to 3.11 and erlang to 25.2  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/87130316:31
noonedeadpunkdamn it16:35
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Bump erlang version to cover CVE-2022-37026  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/87130416:42
*** dviroel|lunch is now known as dviroel16:46
moha7Hello19:02
jamesdentonzzzz19:09
joetenmusHey there21:02
joetenmusI'm gonna deploy OSA over a bunch of nodes hosted by ESXi, and here are the GlusterFS errors that I encounter each time I run the 2nd step (setup-infrastructure.yml)! Would you please review this error: https://pastebin.com/7G2xfTdw21:03
joetenmusBased on this blog post: https://satishdotpatel.github.io/openstack-ansible-glance-with-glusterfs, GlusterFS is in use for glance, but I'm using NFS for storing images!21:08
*** dviroel is now known as dviroel|out21:22
jrosserjoetenmus: i don't think that blog post is relevant for you21:34
jrosserjoetenmus: there is a small glusterfs set up internally by default as part of the OSA deployment to make a shared filesystem across your 'repo' containers21:36
joetenmusSo, is GlusterFS required for the repo containers in any case?21:41
jrosserjoetenmus: that is the default, but the requirement is to have a shared filesystem of some sort21:45
jrosserjoetenmus: anyway the key thing that i see is you use the term "ESXi" and it is very important that you disable any network security around mac/ip addresses on the virtual machines21:46
joetenmusThose dirty things are disabled21:47
jrosseryour symptom is of broken connectivity between the repo container on one VM to the repo container on the others21:47
jrosserfrom infra02_repo_container-733e8f33 it does /usr/sbin/gluster --mode=script peer probe infra01-repo-container-a051b328 and this fails21:48
jrosseryou should look at the networking and make sure that from inside the infra2 repo container you can ping the ip of the infra1 repo container with `-I eth1` to ensure that the ping runs across the mgmt network21:49
joetenmusYeah, I manually tried to probe it, but it failed; there's no ssh connection or ping issue; what kind of protocol does it use?21:50
jrosserwell gluster is a filesystem with its own protocol21:50
jrosseryou could check the log of the gluster daemon on the containers21:50
jrosserhave you first built an 'all-in-one' deployment?21:50
joetenmusall-in-one? no, at which step should it be deployed?21:52
jrosserbefore trying a multinode deployment it can be beneficial to start with something simpler21:53
joetenmusAh21:53
jrosserhttps://docs.openstack.org/openstack-ansible/zed/user/aio/quickstart.html21:53
jrosserjoetenmus: unfortunately we made a release of Zed today which included an error21:55
jrosseryou will need to apply this patch https://review.opendev.org/c/openstack/openstack-ansible/+/871296/1/ansible-role-requirements.yml21:55
joetenmusFor the all-in-one? 21:56
joetenmusAh, I missed the above line21:56
joetenmusok21:56
joetenmusIs it ok to use the master branch?21:57
jrossermaster is the development branch for the next release21:57
joetenmusThen I have no idea how to patch it! you mean editing the file manually?21:58
jrosserclone the repo, checkout stable/zed branch21:58
jrossergo here https://review.opendev.org/c/openstack/openstack-ansible/+/87129621:59
jrosserpress the "three little dots" menu top right and choose download patch21:59
jrosserpress the "copy" button at the end of the cherry-pick line and paste that into the terminal where you cloned the repo22:00
jrosseranyway, an all-in-one should deploy something that works for you without too much difficulty22:00
jrosserit generates its own config completely and is entirely self contained in one VM with one interface and one IP22:01
jrosserdownside is that some compromises are made, like the services are not highly available and the networking is quite specific to that single-VM use case22:02
jrosserwhat you do get though is a reference that you can look at when trying to understand how a multinode deployment should look22:03
joetenmusThank you jrosser22:19
