Wednesday, 2023-08-09

opendevreviewKe Niu proposed openstack/ansible-role-zookeeper master: Use TOX_CONSTRAINTS_FILE  https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/89087706:31
*** halali- is now known as halali13:10
hamidlotfi_Hi there,13:12
hamidlotfi_I want to add a new compute node, but I get an error when starting the ansible script 13:12
hamidlotfi_[WARNING]: The "ansible_collections.openstack.osa.plugins.connection.ssh" connection plugin has an improperly configured remote target value, forcing "inventory_hostname" templated value instead of the string13:13
hamidlotfi_And then it shows me an error that it can't resolve the hostname, even though the hosts file is fine and the host pings by name.13:14
hamidlotfi_And I can ssh to the host by its name13:16
hamidlotfi_@admin1 @noonedeadpunk 13:18
noonedeadpunkhamidlotfi_: I think you might have an error in openstack_user_config13:19
hamidlotfi_like what?13:20
noonedeadpunkI don't know :) Indent? Unreadable symbols?13:20
noonedeadpunkAs remote target is generated based on the data in openstack_user_config13:21
noonedeadpunkSo if it can't read up IP address from there - it might fail with the error you see13:21
*** halali is now known as halali-13:22
hamidlotfi_noonedeadpunk: Ok, Thank You13:25
noonedeadpunkDoes /opt/openstack-ansible/inventory/dynamic_inventory.py execute at all, or does it fail with a stack trace?13:26
hamidlotfi_`ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: {{ groups['utility_all'][0] }}: list object has no element 0`13:33
hamidlotfi_running /opt/openstack-ansible/inventory/dynamic_inventory.py works correctly13:34
hamidlotfi_the above error shows up when I run `openstack-ansible /opt/openstack-ansible/playbooks/setup-openstack.yml --syntax-check`13:35
hamidlotfi_@noonedeadpunk13:35
noonedeadpunkI don't think `--syntax-check` ever worked properly13:36
noonedeadpunkSo I wouldn't bother with it13:36
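A minimal, self-contained sketch of the check being discussed above: the "list object has no element 0" error means the `utility_all` group in the generated inventory is empty. A sample inventory file stands in for the real one here (on an actual deployment you would run /opt/openstack-ansible/inventory/dynamic_inventory.py and inspect its output instead); the file path and contents are illustrative.

```shell
# Simulate the generated inventory with an empty utility_all group
# (on a real deploy, inspect the output of dynamic_inventory.py).
cat > /tmp/sample_inventory.json <<'EOF'
{"utility_all": {"hosts": []}, "compute_hosts": {"hosts": ["compute01"]}}
EOF

# Check whether utility_all actually has members -- if it is empty,
# any playbook referencing groups['utility_all'][0] will fail.
python3 - <<'EOF'
import json
inv = json.load(open('/tmp/sample_inventory.json'))
hosts = inv.get('utility_all', {}).get('hosts', [])
if not hosts:
    print("utility_all is empty: groups['utility_all'][0] will be undefined")
else:
    print("utility_all:", hosts)
EOF
```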
hamidlotfi_generally when I run the ansible command I have this error `can't resolve host`13:37
noonedeadpunkI think it's partially due to dynamic_inventory, as it provides quite a few variables for hosts13:37
noonedeadpunkAnsible command like what?13:38
hamidlotfi_`ansible -m ping all`13:38
noonedeadpunkdo you run it from openstack-ansible folder?13:38
noonedeadpunkie `cd /opt/openstack-ansible; ansible -m ping all`?13:38
hamidlotfi_no, from `/opt/openstack-ansible`13:38
hamidlotfi_oh sorry from this13:39
hamidlotfi_`[WARNING]: The "ansible_collections.openstack.osa.plugins.connection.ssh" connection plugin has an improperly configured remote target value, forcing "inventory_hostname" templated value instead of the string`13:39
noonedeadpunkyeah, but it's the same as you've pasted before13:40
hamidlotfi_ok13:40
noonedeadpunkWithout seeing openstack_user_config it's hard to say what can be wrong there13:40
*** halali- is now known as halali13:42
hamidlotfi_noonedeadpunk: send to private13:42
noonedeadpunkopenstack_user_config looks pretty okeyish, except weird leading space for each line... Given it's really for *each* line - it should not be a big deal...13:52
noonedeadpunkThough it might be super easy to miss one somewhere, unless that's an artifact from the paste itself13:53
hamidlotfi_yes, it's from the paste itself13:54
noonedeadpunkalso compute02 definition looks weird and it's weirdly commented13:54
hamidlotfi_where does `./scripts/inventory-manage.py` read from to create the list?13:54
noonedeadpunkit reads from /etc/openstack_deploy/openstack_inventory.json13:55
noonedeadpunkdynamic_inventory.py writes to this file, reads from it and can update it13:55
noonedeadpunkBut it does not remove content from it, usually13:56
noonedeadpunkSo some artifacts may persist there13:56
noonedeadpunkRegarding computes (like compute01), you can safely remove them with `./scripts/inventory-manage.py -r compute01`. Never do that with any of the infrastructure hosts or containers, unless you delete the container itself first13:57
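A hedged cheat-sheet of the inventory-manage.py workflow described above. The commands are wrapped in a dry-run helper so the sketch can be inspected without an OSA deployment present; on a real deployer node you would run them directly from /opt/openstack-ansible. The host name is an example.

```shell
# Dry-run wrapper: prints each command instead of executing it.
# On a real deploy, replace with: run() { "$@"; }
run() { echo "+ $*"; }

run ./scripts/inventory-manage.py -l             # list known hosts and containers
run ./scripts/inventory-manage.py -r compute01   # remove a *compute* host only --
                                                 # never infra hosts or live containers
```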
hamidlotfi_I have installed a maxscale LXC container manually on my controllers, but I did not add it anywhere on the deployment VM.13:59
hamidlotfi_I do not know why maxscale containers are currently added to my inventory.json13:59
hamidlotfi_in fact, it seems dynamic_inventory adds these LXCs to my inventory14:00
hamidlotfi_how can I prevent them from being added to my inventory?14:00
hamidlotfi_where exactly does dynamic_inventory read its data from when creating inventory.json?14:01
admin1hamidlotfi_, i run a lot of extra lxc containers ( for example for ceph mons) and they are not added to inv json 14:03
admin1yours could be due to something else 14:03
hamidlotfi_i dont know14:04
noonedeadpunkbut eventually, you can make OSA manage these LXC containers as well, by defining quite simple custom env.d14:12
noonedeadpunkwe have plenty of custom containers that are created this way14:13
artis there any document which describes the steps which we should follow to add our containers to OSA?14:14
noonedeadpunkyeah, there is, need to find it :D But in short for maxscale usecase, you need to create a file in /etc/openstack_deploy/env.d/maxscale.yml with the content: https://paste.openstack.org/show/bRQUfDvexgOzk9CV5w7f/14:16
noonedeadpunkand then in your openstack_user_config.yml add `maxscale_hosts: *controller_hosts`14:16
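The paste linked above is not reproduced here; the following is a sketch of the documented env.d skeleton layout for a custom container, with illustrative names for the maxscale use case. It writes to a temp directory rather than /etc/openstack_deploy so it is safe to run anywhere, then does a quick sanity check that the three skeleton sections are present.

```shell
# Hypothetical env.d skeleton for a custom "maxscale" container,
# following the documented component/container/physical skel layout.
mkdir -p /tmp/openstack_deploy/env.d
cat > /tmp/openstack_deploy/env.d/maxscale.yml <<'EOF'
component_skel:
  maxscale:
    belongs_to:
      - maxscale_all
container_skel:
  maxscale_container:
    belongs_to:
      - maxscale_containers
    contains:
      - maxscale
physical_skel:
  maxscale_containers:
    belongs_to:
      - all_containers
  maxscale_hosts:
    belongs_to:
      - hosts
EOF

# Sanity check: all three skeleton sections must exist.
python3 - <<'EOF'
text = open('/tmp/openstack_deploy/env.d/maxscale.yml').read()
for key in ('component_skel', 'container_skel', 'physical_skel'):
    assert key in text, key
print('env.d skeleton ok')
EOF
```

With that in place, a `maxscale_hosts` entry in openstack_user_config.yml (as suggested above) tells the dynamic inventory where to build the containers.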
artthat's great. thank you14:18
noonedeadpunkI bet doc is somewhere here.... https://docs.openstack.org/openstack-ansible/latest/reference/inventory/inventory.html14:22
artI'd check it. thanks alot14:23
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Fix linters issue and metadata  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/88813214:42
*** dviroel__ is now known as dviroel14:43
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_placement master: Fix linters and metadata  https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/88860314:44
artdo you have any idea why I'm getting this error when running a simple ping command on my containers: 15:02
arthttps://www.irccloud.com/pastebin/TVU0z6Dc/lxc-attach%20err15:02
artalthough, I can run lxc-attach from the infra-s15:03
artthe command: `ansible compute-infra_all -m ping`15:04
admin1lxc/d forking to incus is a good thing for osa i think :) 16:29
* noonedeadpunk pretty much excited about that16:31
noonedeadpunkart: and you see container among `lxc-ls -1 --active | grep eb9be476` on your infra03 node?16:34
artyou're right. I had called dynamic-inventory, so new names have been created for the containers. 16:36
noonedeadpunkThat sounds like you did what I said not to do :D16:43
art:P16:43
noonedeadpunk(to never delete infra hosts with inventory-manage.py -r)16:43
artI had a backup. everything works smoothly now 16:44
artby the way, for removing a compute host documentation in https://docs.openstack.org/openstack-ansible/latest/admin/scale-environment.html#:~:text=Remove%20a%20compute%20host%C2%B6  suggests to use this playbook: https://opendev.org/openstack/openstack-ansible-ops/src/branch/master/ansible_tools/playbooks/remove_compute_node.yml16:47
artthe remove_compute_node.yaml script uses `neutron agent-list` and `neutron agent-delete` which I think have been deprecated16:49
artsince the openstack CLI is already used for removing the compute agent, I think the same CLI should also be used for removing the network agent. 16:51
artI don't know where I should report this issue; thought I'd inform you about it :)16:52
noonedeadpunkwe track bugs here https://bugs.launchpad.net/openstack-ansible16:53
noonedeadpunkbut yeah, looks like smth worth looking into16:53
noonedeadpunkops repo is smth we provide without any promises - just a bunch of handy things that work as long as someone supports them and is interested in maintaining them16:54
artah, good to know. thanks16:56
mgariepynoonedeadpunk, https://paste.openstack.org/show/bImkhN6wlNJv49kAlXXE/ do you see something obvious that i dont ?17:33
mgariepyonly ec-22 is working.17:33
noonedeadpunkthese are the same ceph cluster17:35
noonedeadpunk?17:35
mgariepyyes17:35
mgariepyeverything seems ok.17:35
mgariepyi can create shares and keys and everything17:35
mgariepybut mounts don't work17:35
mgariepyit makes no sense to me haha17:35
noonedeadpunkand manila user has same access to all of them?17:36
mgariepyyep17:36
mgariepymanila create the shares and all.17:36
noonedeadpunkand all of them are EC ones?17:36
mgariepyno 22 and 42 are ec the other is replicated17:36
mgariepyi did compare all configs for all of them and they are pretty much identical.17:37
noonedeadpunkand how does it fail when it tries to mount?17:37
mgariepymount error 2 = No such file or directory17:39
mgariepystrace is not helpful.17:39
mgariepyi'll try another vm.. maybe something is wrong with the ceph package on jammy .. 17:40
mgariepysame error .. 17:46
noonedeadpunkno such file or directory coming from ceph client?17:51
mgariepyyep17:51
noonedeadpunkand is there a source folder on cephfs? I'm just not sure if manila creates it on its own or not...17:51
noonedeadpunkhm17:51
mgariepyyes the subvolume is created by manila17:51
mgariepyi feel like a noob on this one haha17:52
noonedeadpunkActually, I can recall some hassle with cephfs when we had more than one cephfs on the same cluster17:52
noonedeadpunkthere was need of some option to be passed to the mount command17:52
noonedeadpunkI think it was `mds_namespace`17:54
noonedeadpunkas you have like "default" one and rest17:54
mgariepyha17:55
noonedeadpunkI don't see anything like that in manila though17:59
mgariepyi don't either.17:59
noonedeadpunkbut we were mounting things like that with systemd_mount role: https://paste.openstack.org/show/bRVYI6PivmtMO4OzzH0c/17:59
noonedeadpunkso you have share name and then namespace I guess18:00
noonedeadpunkthough they have a statement `Each share’s data is mapped to a distinct Ceph RADOS namespace` so it might be done in a way18:01
mgariepywith -o mds_namespace=cephfs_3 it works !18:01
mgariepy```If you have more than one file system on your Ceph cluster, you can mount the non-default FS on your local FS as follows:```18:01
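A sketch of the fix found above: when a Ceph cluster hosts more than one CephFS filesystem, the kernel client must be told explicitly which filesystem to mount. The monitor address, share path, and keyring user below are all illustrative, and the command is printed rather than executed so it can be inspected without a cluster. Note that newer kernel clients accept `fs=` in place of the older `mds_namespace=` option.

```shell
# Which CephFS filesystem to mount (here: the non-default one).
FSNAME=cephfs_3
# Illustrative subvolume path as created by manila.
SHAREPATH=/volumes/_nogroup/share-id

# Build the mount command; printed, not run (needs root + a cluster).
CMD="mount -t ceph 192.0.2.10:6789:${SHAREPATH} /mnt/share -o name=manila,secretfile=/etc/ceph/manila.secret,mds_namespace=${FSNAME}"
echo "$CMD"
```

Without `mds_namespace` (or `fs=`), the client tries the default filesystem and fails with "mount error 2 = No such file or directory", exactly as seen above.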
noonedeadpunkok, great, now you've found the rootcause - it's left to patch manila a bit then :D18:01
mgariepywow.18:02
mgariepymds_namespace is the config option: cephfs_filesystem_name18:03
mgariepyjust because that's so fucking convenient.18:03
mgariepythanks a lot18:04
mgariepyremind me to buy you a beer when we meet !18:05
noonedeadpunkhehe, no worries18:29
noonedeadpunkglad you've sorted it out!18:29
mgariepythe integration seems quite hacky to me.18:30
mgariepythe man page doesn't have any mention of that option.18:44
noonedeadpunkyeah, we were quite /o\ when we accidentally created another share on the same cluster18:47
noonedeadpunk(or intentionally, can't recall)18:47
mgariepyquite easy to find with the solution.. but without it, it's not so easy to find it.18:51
tuggernuts1is it possible to run this ovsdb-server on a different node other than on a compute? basically I can't have my vms routing from the computes. It seems that these ovn settings are wanting to do networking directly from the computes.20:09
tuggernuts1also I'm not sure what I did wrong here but ovn_nb_connection = ssl::6641 is not getting setup correctly in my ml2 plugin config20:10
tuggernuts1https://pastebin.com/1GEsgW9W20:11
tuggernuts1that's the full neutron config20:11
tuggernuts1well the ml2 plugin ini20:11
jrossertuggernuts1: looks like you have an empty group? https://opendev.org/openstack/openstack-ansible-os_neutron/src/branch/master/defaults/main.yml#L48120:14
jrosserssl::6641 just looks wrong20:14
admin1you can run dedicated nodes for l3 in ovn, but not sure if you can get away with not running ovsdb-server .. as ovn needs that20:15
jrosserand as far as routing goes I think that’s about which nodes you define as gateway chassis, not necessarily ovsdb-server, though mgariepy might have more experience with this20:15
admin1the name is misleading .. when you are using ovn, it does not say ovndb-server ... but ovsdb-server .. they use the ame name20:16
admin1*same20:16
admin1ovsdb-server maintains the flow and is needed for the control and management20:16
jrossertuggernuts1: read about the “neutron_ovn_gateway” inventory group here https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-ovn.html20:18
mgariepyyou can set the the neutron_ovn_gateway group to your gateway nodes like jrosser said.20:19
mgariepyhttps://github.com/openstack/openstack-ansible-os_neutron/blob/master/tasks/providers/setup_ovs_ovn.yml#L22-L2520:22
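A hedged sketch of pinning the OVN gateway role to dedicated network nodes, as suggested above. The `network-gateway_hosts` key shown here (feeding the `neutron_ovn_gateway` group) follows the os_neutron OVN app guide, but the exact mapping can vary by release, so verify against your version's docs; host names and IPs are illustrative. The fragment is written to a temp file rather than /etc/openstack_deploy.

```shell
# Illustrative openstack_user_config.yml fragment: dedicated gateway
# nodes become members of the neutron_ovn_gateway inventory group,
# so computes stop acting as north-south gateway chassis.
cat > /tmp/openstack_user_config_fragment.yml <<'EOF'
network-gateway_hosts:
  net-gw01:
    ip: 172.29.236.20
  net-gw02:
    ip: 172.29.236.21
EOF

grep -c 'ip:' /tmp/openstack_user_config_fragment.yml   # two gateway nodes defined
```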
jrosserlooks like you’d have to turn that off manually anywhere it was accidentally enabled20:24
tuggernuts1ok, that makes sense then; the names are misleading 20:30
tuggernuts1https://pastebin.com/2LKXBrKM20:30
tuggernuts1that's what I have in my config right now20:30
tuggernuts1for whatever reason I'm still not getting any neutron apis on the inf nodes either lol20:31
tuggernuts1@jrosser I did redeploy with your suggestions 20:31
tuggernuts1this time it's a source deploy and should be ovn20:31
jrosseryou still need neutron_server I think for the api?20:34
jrosserAIO is the source of truth here really20:34
tuggernuts1I thought I had just copied the prod example and filled it in20:35
tuggernuts1`grep neutron_server openstack_user_config.yml.prod.example` comes back with nothing20:35
tuggernuts1the example says `network-infra_hosts:` should give me the api 20:36
jrossersorry I’m just on a phone I can’t really look at the code20:38
tuggernuts1no worries20:38
tuggernuts1https://pastebin.com/QuMqWePH20:38
tuggernuts1that's what's in the inventory though, so it seems to think it has them20:38
tuggernuts1https://pastebin.com/Y0Xc2PXC that's northd20:40
jrosserusual steps would be to look at what the inventory_manage script says20:40
tuggernuts1k20:40
jrosserand also —list-hosts on the ansible-playbook command will tell you which hosts are in scope for the different plays20:41
tuggernuts1it sure looks like it should be setting these up correctly 20:44
tuggernuts1https://pastebin.com/Zga9NxSm20:44
tuggernuts1https://pastebin.com/rMhNhWwE20:45
tuggernuts1ok I think I see what I've done21:27
tuggernuts1https://docs.openstack.org/openstack-ansible/2023.1/reference/inventory/openstack-user-config-reference.html21:27
tuggernuts1'os-infra_hosts'21:27
tuggernuts1it seems like a pretty big oversight to not have this one in the example config 21:28
tuggernuts1ah man, it is in openstack_user_config.yml.example just not the prod example one21:46
tuggernuts1whelp21:46
tuggernuts1hopefully that's it for the apis anyways21:47
tuggernuts1has anyone seen this: https://pastebin.com/7cWJY3Aw before22:05
tuggernuts1it's weird because it seems intermittent 22:06
tuggernuts1like 3/5 runs do this22:06
tuggernuts1https://pastebin.com/fwbdLA23 also this seems to pass just fine22:08

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!