opendevreview | Ke Niu proposed openstack/ansible-role-zookeeper master: Use TOX_CONSTRAINTS_FILE https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/890877 | 06:31 |
---|---|---|
*** halali- is now known as halali | 13:10 | |
hamidlotfi_ | Hi there, | 13:12 |
hamidlotfi_ | I want to add a new compute node, but I get an error when I start the ansible script | 13:12 |
hamidlotfi_ | [WARNING]: The "ansible_collections.openstack.osa.plugins.connection.ssh" connection plugin has an improperly configured remote target value, forcing "inventory_hostname" templated value instead of the string | 13:13 |
hamidlotfi_ | And then it shows me an error saying it can't resolve the hostname, even though the hosts file is fine and the host can be pinged by name manually. | 13:14 |
hamidlotfi_ | And I can ssh to the host by its name | 13:16 |
hamidlotfi_ | @admin1 @noonedeadpunk | 13:18 |
noonedeadpunk | hamidlotfi_: I think you might have an error in openstack_user_config | 13:19 |
hamidlotfi_ | like what? | 13:20 |
noonedeadpunk | I don't know :) Indent? Unreadable symbols? | 13:20 |
noonedeadpunk | As remote target is generated based on the data in openstack_user_config | 13:21 |
noonedeadpunk | So if it can't read the IP address from there - it might fail with the error you see | 13:21 |
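For context, the remote target comes from the host entries in /etc/openstack_deploy/openstack_user_config.yml, which normally look like the minimal sketch below; the host name and IP are placeholders, not values from this conversation:

```yaml
# /etc/openstack_deploy/openstack_user_config.yml (fragment)
# The dynamic inventory derives the remote/SSH target for each host from the
# "ip" key, so a missing key or broken indentation here typically surfaces as
# the "improperly configured remote target value" warning above.
compute_hosts:
  compute03:            # hypothetical new compute node
    ip: 172.29.236.13   # management network address ansible should connect to
```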
*** halali is now known as halali- | 13:22 | |
hamidlotfi_ | noonedeadpunk: Ok, Thank You | 13:25 |
noonedeadpunk | Does /opt/openstack-ansible/inventory/dynamic_inventory.py execute at all, or does it fail with a stack trace? | 13:26 |
hamidlotfi_ | `ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: {{ groups['utility_all'][0] }}: list object has no element 0` | 13:33 |
hamidlotfi_ | running /opt/openstack-ansible/inventory/dynamic_inventory.py works correctly | 13:34 |
hamidlotfi_ | the above error shows up when I run `openstack-ansible /opt/openstack-ansible/playbooks/setup-openstack.yml --syntax-check` | 13:35 |
hamidlotfi_ | @noonedeadpunk | 13:35 |
noonedeadpunk | I don't think `--syntax-check` ever worked properly | 13:36 |
noonedeadpunk | So I wouldn't bother with it | 13:36 |
hamidlotfi_ | generally when I run the ansible command I have this error `can't resolve host` | 13:37 |
noonedeadpunk | I think it's partially due to dynamic_inventory, as it provides quite a few variables for hosts | 13:37 |
noonedeadpunk | Ansible command like what? | 13:38 |
hamidlotfi_ | `ansible -m ping all` | 13:38 |
noonedeadpunk | do you run it from openstack-ansible folder? | 13:38 |
noonedeadpunk | ie `cd /opt/openstack-ansible; ansible -m ping all`? | 13:38 |
hamidlotfi_ | no, from `/opt/openstack-ansible` | 13:38 |
hamidlotfi_ | oh sorry, I get this: | 13:39 |
hamidlotfi_ | `[WARNING]: The "ansible_collections.openstack.osa.plugins.connection.ssh" connection plugin has an improperly configured remote target value, forcing "inventory_hostname" templated value instead of the string` | 13:39 |
noonedeadpunk | yeah, but it's the same as you've pasted before | 13:40 |
hamidlotfi_ | ok | 13:40 |
noonedeadpunk | Without seeing openstack_user_config it's hard to say what can be wrong there | 13:40 |
*** halali- is now known as halali | 13:42 | |
hamidlotfi_ | noonedeadpunk: I sent it to you in private | 13:42 |
noonedeadpunk | openstack_user_config looks pretty okayish, except for a weird leading space on each line... Given it's really on *each* line - it should not be a big deal... | 13:52 |
noonedeadpunk | Though it would be super easy to miss one somewhere, unless that's an artifact from the paste itself | 13:53 |
hamidlotfi_ | yes, that's from the paste itself | 13:54 |
noonedeadpunk | also compute02 definition looks weird and it's weirdly commented | 13:54 |
hamidlotfi_ | where does `./scripts/inventory-manage.py` read from to build its list? | 13:54 |
noonedeadpunk | it reads from /etc/openstack_deploy/openstack_inventory.json | 13:55 |
noonedeadpunk | dynamic_inventory.py writes to this file, reads from it and can update it | 13:55 |
noonedeadpunk | But it does not remove content from it, usually | 13:56 |
noonedeadpunk | So some artifacts may persist there | 13:56 |
noonedeadpunk | Regarding computes (like compute01), you can safely remove them with `./scripts/inventory-manage.py -r compute01`. Never do that with any of the infrastructure hosts or containers until you have deleted the container itself | 13:57 |
hamidlotfi_ | I have installed a maxscale LXC container manually on my controllers, but I did not add it anywhere on the deployment VM. | 13:59 |
hamidlotfi_ | I do not know why maxscale containers are currently added to my inventory.json | 13:59 |
hamidlotfi_ | in fact, it seems dynamic-inventory adds these LXCs to my inventory | 14:00 |
hamidlotfi_ | how can I prevent them from being added to my inventory? | 14:00 |
hamidlotfi_ | where exactly does dynamic-inventory read its data from when creating inventory.json? | 14:01 |
admin1 | hamidlotfi_, I run a lot of extra lxc containers (for example for ceph mons) and they are not added to the inventory json | 14:03 |
admin1 | yours could be due to something else | 14:03 |
hamidlotfi_ | I don't know | 14:04 |
noonedeadpunk | but eventually, you can make OSA manage these LXC containers as well, by defining a quite simple custom env.d file | 14:12 |
noonedeadpunk | we have plenty of custom containers that are created this way | 14:13 |
art | is there any document that describes the steps we should follow to add our own containers to OSA? | 14:14 |
noonedeadpunk | yeah, there is, I need to find it :D But in short, for the maxscale use case, you need to create a file /etc/openstack_deploy/env.d/maxscale.yml with this content: https://paste.openstack.org/show/bRQUfDvexgOzk9CV5w7f/ | 14:16 |
noonedeadpunk | and then in your openstack_user_config.yml add `maxscale_hosts: *controller_hosts` | 14:16 |
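The paste contents above aren't preserved in this log, but a custom env.d definition for an extra container type usually follows the skeleton below; treat it as an illustrative sketch (the group and container names are chosen for the maxscale example, not taken from the paste):

```yaml
# /etc/openstack_deploy/env.d/maxscale.yml - illustrative sketch only
component_skel:
  maxscale:
    belongs_to:
      - maxscale_all
container_skel:
  maxscale_container:
    belongs_to:
      - maxscale_containers
    contains:
      - maxscale
physical_skel:
  maxscale_containers:
    belongs_to:
      - all_containers
  maxscale_hosts:
    belongs_to:
      - hosts
```

Note that `maxscale_hosts: *controller_hosts` in openstack_user_config.yml only works if an `&controller_hosts` YAML anchor is already defined on an existing group (for example on shared-infra_hosts); otherwise just list the controller nodes under maxscale_hosts explicitly.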
art | that's great. thank you | 14:18 |
noonedeadpunk | I bet the doc is somewhere here.... https://docs.openstack.org/openstack-ansible/latest/reference/inventory/inventory.html | 14:22 |
art | I'll check it. thanks a lot | 14:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Fix linters issue and metadata https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/888132 | 14:42 |
*** dviroel__ is now known as dviroel | 14:43 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_placement master: Fix linters and metadata https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/888603 | 14:44 |
art | do you have any idea why I'm getting this error when I run a simple ping command on my containers: | 15:02 |
art | https://www.irccloud.com/pastebin/TVU0z6Dc/lxc-attach%20err | 15:02 |
art | although I can run lxc-attach from the infra nodes | 15:03 |
art | the command: `ansible compute-infra_all -m ping` | 15:04 |
admin1 | lxc/d forking to incus is a good thing for osa i think :) | 16:29 |
* noonedeadpunk pretty much excited about that | 16:31 | |
noonedeadpunk | art: and you see container among `lxc-ls -1 --active | grep eb9be476` on your infra03 node? | 16:34 |
art | you're right. I had called dynamic-inventory, so new names have been created for the containers. | 16:36 |
noonedeadpunk | That sounds like you did what I said not to do :D | 16:43 |
art | :P | 16:43 |
noonedeadpunk | (to never delete infra hosts with inventory-manage.py -r) | 16:43 |
art | I had a backup. everything works smoothly now | 16:44 |
art | by the way, for removing a compute host, the documentation at https://docs.openstack.org/openstack-ansible/latest/admin/scale-environment.html#:~:text=Remove%20a%20compute%20host%C2%B6 suggests using this playbook: https://opendev.org/openstack/openstack-ansible-ops/src/branch/master/ansible_tools/playbooks/remove_compute_node.yml | 16:47 |
art | the remove_compute_node.yml playbook uses `neutron agent-list` and `neutron agent-delete`, which I think have been deprecated | 16:49 |
art | since the openstack CLI is already used for removing the compute agent, I think it would be better to use the same CLI for removing the network agent as well. | 16:51 |
art | I don't know where I should report this issue; I just thought I'd let you know about it :) | 16:52 |
noonedeadpunk | we track bugs here https://bugs.launchpad.net/openstack-ansible | 16:53 |
noonedeadpunk | but yeah, looks like something worth looking into | 16:53 |
noonedeadpunk | the ops repo is something we provide without any promises - just a bunch of handy things that work as long as someone supports them and is interested in maintaining them | 16:54 |
art | ah, good to know. thanks | 16:56 |
mgariepy | noonedeadpunk, https://paste.openstack.org/show/bImkhN6wlNJv49kAlXXE/ do you see something obvious that I don't? | 17:33 |
mgariepy | only ec-22 is working. | 17:33 |
noonedeadpunk | these are the same ceph cluster? | 17:35 |
mgariepy | yes | 17:35 |
mgariepy | everything seems ok. | 17:35 |
mgariepy | i can create shares and keys and everything | 17:35 |
mgariepy | but mounts don't work | 17:35 |
mgariepy | it makes no sense to me haha | 17:35 |
noonedeadpunk | and manila user has same access to all of them? | 17:36 |
mgariepy | yep | 17:36 |
mgariepy | manila creates the shares and all. | 17:36 |
noonedeadpunk | and all of them are EC ones? | 17:36 |
mgariepy | no 22 and 42 are ec the other is replicated | 17:36 |
mgariepy | i did compare all configs for all of them and they are pretty much identical. | 17:37 |
noonedeadpunk | and how does it fail when it tries to mount? | 17:37 |
mgariepy | mount error 2 = No such file or directory | 17:39 |
mgariepy | strace is not helpful. | 17:39 |
mgariepy | i'll try another vm.. maybe something is wrong with the ceph package on jammy .. | 17:40 |
mgariepy | same error .. | 17:46 |
noonedeadpunk | no such file or directory coming from ceph client? | 17:51 |
mgariepy | yep | 17:51 |
noonedeadpunk | and is there a source folder on cephfs? I'm just not sure if manila creates it on its own or not... | 17:51 |
noonedeadpunk | hm | 17:51 |
mgariepy | yes the subvolume is created by manila | 17:51 |
mgariepy | i feel like a noob on this one haha | 17:52 |
noonedeadpunk | Actually, I can recall some hassle with cephfs when we had more than one cephfs on the same cluster | 17:52 |
noonedeadpunk | an option needed to be passed to the mount command | 17:52 |
noonedeadpunk | I think it was `mds_namespace` | 17:54 |
noonedeadpunk | as you have like a "default" one and the rest | 17:54 |
mgariepy | ha | 17:55 |
noonedeadpunk | I don't see anything like that in manila though | 17:59 |
mgariepy | i don't either. | 17:59 |
noonedeadpunk | but we were mounting things like that with systemd_mount role: https://paste.openstack.org/show/bRVYI6PivmtMO4OzzH0c/ | 17:59 |
noonedeadpunk | so you have the share name and then the namespace, I guess | 18:00 |
noonedeadpunk | though they have a statement `Each share’s data is mapped to a distinct Ceph RADOS namespace` so it might be handled that way | 18:01 |
mgariepy | with -o mds_namespace=cephfs_3 it works ! | 18:01 |
mgariepy | ```If you have more than one file system on your Ceph cluster, you can mount the non-default FS on your local FS as follows:``` | 18:01 |
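For the record, this is roughly how a non-default cephfs can be mounted through the systemd_mount role mentioned above; the key names are recalled from that role and may not match it exactly, and the monitors, paths, and filesystem name are placeholders:

```yaml
# user_variables-style sketch for the openstack-ansible systemd_mount role
systemd_mounts:
  - what: "mon1:6789,mon2:6789,mon3:6789:/volumes/_nogroup/<share-id>"
    where: "/mnt/manila-share"
    type: "ceph"
    # mds_namespace selects which cephfs filesystem to mount when the cluster
    # has more than one; newer mount.ceph also accepts "fs=" for the same thing
    options: "name=manila,secretfile=/etc/ceph/manila.secret,mds_namespace=cephfs_3,_netdev"
    state: "started"
    enabled: true
```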
noonedeadpunk | ok, great, now you've found the root cause - all that's left is to patch manila a bit then :D | 18:01 |
mgariepy | wow. | 18:02 |
mgariepy | mds_namespace corresponds to the config option cephfs_filesystem_name | 18:03 |
mgariepy | just because that's so fucking convenient. | 18:03 |
mgariepy | thanks a lot | 18:04 |
mgariepy | remind me to buy you a beer when we meet ! | 18:05 |
noonedeadpunk | hehe, no worries | 18:29 |
noonedeadpunk | glad you've sorted it out! | 18:29 |
mgariepy | the integration seems quite hacky to me. | 18:30 |
mgariepy | the man page doesn't have any mention of that option. | 18:44 |
noonedeadpunk | yeah, we were quite /o\ when we accidentally created another share on the same cluster | 18:47 |
noonedeadpunk | (or intentionally, can't recall) | 18:47 |
mgariepy | quite easy to find once you know the solution.. but without it, it's not so easy to find. | 18:51 |
tuggernuts1 | is it possible to run this ovsdb-server on a different node, other than a compute? Basically I can't have my VMs routing from the computes. It seems that these OVN settings want to do networking directly from the computes. | 20:09 |
tuggernuts1 | also I'm not sure what I did wrong here, but ovn_nb_connection = ssl::6641 is not getting set up correctly in my ml2 plugin config | 20:10 |
tuggernuts1 | https://pastebin.com/1GEsgW9W | 20:11 |
tuggernuts1 | that's the full neutron config | 20:11 |
tuggernuts1 | well the ml2 plugin ini | 20:11 |
jrosser | tuggernuts1: looks like you have an empty group? https://opendev.org/openstack/openstack-ansible-os_neutron/src/branch/master/defaults/main.yml#L481 | 20:14 |
jrosser | ssl::6641 just looks wrong | 20:14 |
admin1 | you can run dedicated nodes for l3 in ovn, but not sure if you can get away with not running ovsdb-server .. as ovn needs that | 20:15 |
jrosser | and as far as routing goes I think that’s about which nodes you define as gateway chassis, not necessarily ovsdb-server, though mgariepy might have more experience with this | 20:15 |
admin1 | the name is misleading .. when you are using ovn, it does not say ovndb-server ... but ovsdb-server .. they use the same name | 20:16 |
admin1 | ovsdb-server maintains the flows and is needed for control and management | 20:16 |
jrosser | tuggernuts1: read about the “neutron_ovn_gateway” inventory group here https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-ovn.html | 20:18 |
mgariepy | you can set the neutron_ovn_gateway group to your gateway nodes like jrosser said. | 20:19 |
mgariepy | https://github.com/openstack/openstack-ansible-os_neutron/blob/master/tasks/providers/setup_ovs_ovn.yml#L22-L25 | 20:22 |
jrosser | looks like you’d have to turn that off manually anywhere it was accidentally enabled | 20:24 |
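For reference, dedicated gateway nodes end up in the neutron_ovn_gateway group through openstack_user_config.yml; the group name below follows the OVN scenario documentation linked above, while the host names and IPs are placeholders (check your release's env.d if the group name differs):

```yaml
# openstack_user_config.yml fragment - put only the dedicated network nodes
# here so the gateway chassis setup targets them rather than every compute
network-gateway_hosts:
  net-gw01:
    ip: 172.29.236.21
  net-gw02:
    ip: 172.29.236.22
```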
tuggernuts1 | ok that makes sense then the names are misleading | 20:30 |
tuggernuts1 | https://pastebin.com/2LKXBrKM | 20:30 |
tuggernuts1 | that's what I have in my config right now | 20:30 |
tuggernuts1 | for whatever reason I'm still not getting any neutron apis on the inf nodes either lol | 20:31 |
tuggernuts1 | @jrosser I did redeploy with your suggestions | 20:31 |
tuggernuts1 | this time it's a source deploy and should be ovn | 20:31 |
jrosser | you still need neutron_server I think for the api? | 20:34 |
jrosser | AIO is the source of truth here really | 20:34 |
tuggernuts1 | I thought I had just copied the prod example and filled it in | 20:35 |
tuggernuts1 | `grep neutron_server openstack_user_config.yml.prod.example` comes back with nothing | 20:35 |
tuggernuts1 | the example says `network-infra_hosts:` should give me the api | 20:36 |
jrosser | sorry I’m just on a phone I can’t really look at the code | 20:38 |
tuggernuts1 | no worries | 20:38 |
tuggernuts1 | https://pastebin.com/QuMqWePH | 20:38 |
tuggernuts1 | that's what's in the inventory though, so it seems to think it has them | 20:38 |
tuggernuts1 | https://pastebin.com/Y0Xc2PXC that's northd | 20:40 |
jrosser | usual steps would be to look at what the inventory-manage script says | 20:40 |
tuggernuts1 | k | 20:40 |
jrosser | and also --list-hosts on the ansible-playbook command will tell you which hosts are in scope for the different plays | 20:41 |
tuggernuts1 | it sure looks like it should be setting these up correctly | 20:44 |
tuggernuts1 | https://pastebin.com/Zga9NxSm | 20:44 |
tuggernuts1 | https://pastebin.com/rMhNhWwE | 20:45 |
tuggernuts1 | ok I think I see what I've done | 21:27 |
tuggernuts1 | https://docs.openstack.org/openstack-ansible/2023.1/reference/inventory/openstack-user-config-reference.html | 21:27 |
tuggernuts1 | 'os-infra_hosts' | 21:27 |
tuggernuts1 | it seems like a pretty big oversight to not have this one in the example config | 21:28 |
tuggernuts1 | ah man, it is in openstack_user_config.yml.example just not the prod example one | 21:46 |
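For anyone hitting the same thing, the missing group is a plain host mapping like the sketch below; os-infra_hosts drives the containers for most of the API services, while the neutron API itself comes from the network-infra_hosts group mentioned earlier. Host names and IPs are placeholders:

```yaml
# openstack_user_config.yml fragment - controller nodes that should run the
# shared API containers (glance, nova, heat, horizon, ...); placeholders only
os-infra_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13
```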
tuggernuts1 | whelp | 21:46 |
tuggernuts1 | hopefully that's it for the apis anyways | 21:47 |
tuggernuts1 | has anyone seen this before: https://pastebin.com/7cWJY3Aw | 22:05 |
tuggernuts1 | it's weird because it seems intermittent | 22:06 |
tuggernuts1 | like 3/5 runs do this | 22:06 |
tuggernuts1 | https://pastebin.com/fwbdLA23 also this seems to pass just fine | 22:08 |