ygk_12345 | hi all | 05:29 |
ygk_12345 | is there a way to install specific versions of nginx through OSA ? | 05:30 |
rohit02 | hi team,OSA deployment of wallaby failed with this error in setup-hosts.yml https://paste.opendev.org/show/bFenr7RYnGjPgtyttclJ/ | 10:17 |
noonedeadpunk | hey | 10:44 |
noonedeadpunk | hm, interesting. you have ansible 2.10.10, right? | 10:46 |
noonedeadpunk | *ansible-base | 10:46 |
noonedeadpunk | rohit02: also what task do you run? | 10:49 |
noonedeadpunk | *playbook | 10:49 |
noonedeadpunk | playbooks/healthcheck-hosts.yml ? | 10:49 |
noonedeadpunk | jrosser: I wonder how we managed to use ansible.netcommon.ipaddr without the collection in https://opendev.org/openstack/openstack-ansible/src/branch/master/ansible-collection-requirements.yml ? | 10:53 |
rohit02 | noonedeadpunk: i am trying to run /usr/local/bin/openstack-ansible setup-hosts.yml -vvv | 10:53 |
rohit02 | ansible version is ansible 2.10.10 | 10:54 |
noonedeadpunk | jrosser: like https://opendev.org/openstack/openstack-ansible-lxc_hosts/src/branch/master/templates/lxc-networkd-bridge.network.j2#L6 | 10:54 |
rohit02 | jrosser noonedeadpunk: is there anything i have to check, as all our deployment jobs are failing with the same error on both ubuntu and centos | 11:01 |
noonedeadpunk | rohit02: sorry. try adding https://github.com/ansible-collections/ansible.netcommon to https://opendev.org/openstack/openstack-ansible/src/branch/master/ansible-collection-requirements.yml | 11:17 |
noonedeadpunk | and re-run bootstrap-ansible.sh | 11:17 |
* noonedeadpunk caught ro (read-only) for / on the working laptop | 11:17 |
rohit02 | noonedeadpunk: if i completely redeploy the setup will it resolve automatically.... or do i need to make the changes manually | 11:24 |
noonedeadpunk | um, tbh not sure. From what I can see I guess we're missing that collection in requirements. On the other hand, I can hardly imagine how we haven't caught that | 11:36 |
rohit02 | noonedeadpunk: could you please give me the steps to resolve this issue | 11:49 |
noonedeadpunk | create /etc/openstack_deploy/user-collection-requirements.yml with content: https://paste.opendev.org/show/bacsaf5ohdh4brqYAq3B/ | 11:54 |
noonedeadpunk | and re-run bootstrap-ansible | 11:54 |
opendevreview | Marios Andreou proposed openstack/openstack-ansible-os_tempest stable/wallaby: Add centos-9 tripleo standalone job for wallaby zuul layout https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/831196 | 11:58 |
noonedeadpunk | rohit02: but tbh I don't see any issues without that collection | 12:15 |
rohit02 | noonedeadpunk: it also failed in my environment with the given steps | 12:17 |
noonedeadpunk | So basically, from what I saw, your deployment is unhappy because of a missing ipaddr filter | 12:26 |
noonedeadpunk | can you check with a simple debug if the issue is really with that module? | 12:27 |
noonedeadpunk | Like with https://paste.opendev.org/show/bYGo6bitb6hL5WrSy9Gv/ | 12:27 |
rohit02 | noonedeadpunk: do i need to simply run this playbook https://paste.opendev.org/show/bYGo6bitb6hL5WrSy9Gv/ right? | 12:45 |
noonedeadpunk | except put a valid host in it | 12:46 |
rohit02 | the module is there, here is the output of the command https://paste.opendev.org/show/bmGQuTZm6VmhKqPnN5zh/ | 12:50 |
noonedeadpunk | and is it the same if you just `container_address | ipaddr` ? | 13:00 |
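(The pasted playbook above isn't preserved in the log; a minimal debug playbook of the kind being suggested might look like the sketch below. The host, variable name, and address are placeholders, and it assumes the collection-qualified filter name.)

```yaml
# check-ipaddr.yml - hypothetical sketch, not the original paste.
# Run with: openstack-ansible check-ipaddr.yml
- hosts: localhost
  gather_facts: false
  vars:
    container_address: 172.29.236.100  # placeholder address
  tasks:
    - name: Check that the ipaddr filter resolves
      debug:
        msg: "{{ container_address | ansible.netcommon.ipaddr }}"
```

If the collection providing the filter is missing, this fails with a "no filter named 'ipaddr'"-style template error instead of printing the address.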
rohit02 | noonedeadpunk: https://paste.opendev.org/show/bJqaZCJ05lMj7kWAlt70/ | 13:10 |
noonedeadpunk | oh, that's interesting... | 13:11 |
noonedeadpunk | And I bet I know why.... | 13:11 |
noonedeadpunk | rohit02: for https://paste.opendev.org/show/bacsaf5ohdh4brqYAq3B/ replace the version with 2.5.1 | 13:13 |
rohit02 | /etc/openstack_deploy/user-collection-requirements.yml in this file right | 13:14 |
noonedeadpunk | yep | 13:15 |
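(The paste with the file contents isn't preserved; based on the instructions in the conversation — ansible.netcommon pinned to 2.5.1 — the file plausibly followed the standard Galaxy requirements format. A reconstructed sketch:)

```yaml
# /etc/openstack_deploy/user-collection-requirements.yml
# reconstructed sketch - the original paste is not preserved in the log
collections:
  - name: ansible.netcommon
    version: 2.5.1
    source: https://galaxy.ansible.com
```

followed by re-running bootstrap-ansible.sh so the pinned collection actually gets installed.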
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add ansible.utils collectoin requirement https://review.opendev.org/c/openstack/openstack-ansible/+/831525 | 13:22 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add ansible.utils collectoin requirement https://review.opendev.org/c/openstack/openstack-ansible/+/831525 | 13:26 |
noonedeadpunk | rohit02: or use that ^ | 13:26 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Change location of ipaddr filter https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/831526 | 13:30 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-haproxy_server master: Change location of ipaddr filter https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/831528 | 13:32 |
rohit02 | noonedeadpunk: thank you so much for your help.... will we get these changes directly from the stable/wallaby branch | 13:32 |
rohit02 | or do we still need to add these changes manually | 13:32 |
noonedeadpunk | If user-collection-requirements.yml helped - I'd prefer using that on W | 13:33 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-plugins master: Change location of ipaddr filter https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/831530 | 13:35 |
rohit02 | noonedeadpunk: /etc/openstack_deploy/user-collection-requirements.yml with version 2.5.1 works for me.. thanks | 13:45 |
noonedeadpunk | awesome | 13:46 |
rohit02 | do we need to create this file every time for a new deployment... or will we get the fix directly from stable/wallaby | 13:47 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/xena: Bump ansible.netcommon version https://review.opendev.org/c/openstack/openstack-ansible/+/831535 | 13:48 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Bump ansible.netcommon version https://review.opendev.org/c/openstack/openstack-ansible/+/831536 | 13:50 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add ansible.utils collectoin requirement https://review.opendev.org/c/openstack/openstack-ansible/+/831525 | 13:51 |
mgariepy | morning everyone | 13:53 |
noonedeadpunk | o/ | 13:54 |
mgariepy | centos again :( | 14:04 |
mgariepy | noonedeadpunk, what do you think about this one ? https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/831436 | 14:10 |
mgariepy | i wonder if a new var in defaults would be better. | 14:15 |
noonedeadpunk | mgariepy: commented there | 14:21 |
noonedeadpunk | I think the issue is not the var here, but that the connection might be encrypted. And then horizon_external_ssl will be false. But why should this break HTTP_X_FORWARDED_FOR? | 14:22 |
mgariepy | thanks | 14:22 |
noonedeadpunk | I guess it should not... | 14:22 |
mgariepy | yeah indeed. the location might not be optimal either :D | 14:22 |
mgariepy | most deploys would be behind a LB. | 14:23 |
noonedeadpunk | yeah, but it's more about where SSL is terminated | 14:24 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible/src/branch/master/releasenotes/notes/haproxy_ssl_terminiation-cdf0092a5bfa34b5.yaml | 14:24 |
noonedeadpunk | not sure I support the naming as it's confusing, but until we say this is changing, we should respect it | 14:25 |
noonedeadpunk | oh well, maybe I misunderstood | 14:26 |
noonedeadpunk | sorry my fault indeed, it's ok I guess | 14:27 |
spatel | noonedeadpunk morning! | 14:33 |
spatel | I am looking for HPC storage solution and looking for any advice or suggestion | 14:33 |
spatel | I have a working HPC openstack cloud and am now researching which storage suits the environment | 14:34 |
spatel | Lustre or Ceph | 14:34 |
spatel | lots of folks are saying Manila for a shared filesystem for HPC jobs | 14:35 |
spatel | How does cinder provide shared storage? | 14:35 |
mgariepy | the advice i had the other day from noonedeadpunk was manila and definitely not cinder for shared :) | 14:38 |
tbarron | spatel: cinder can provide "multiattach" storage but you still get a raw block device on which | 14:38 |
tbarron | spatel: you would need to build a clustered file system, assuming you want apps to work with filesystem paths | 14:39 |
spatel | manila with what storage? | 14:39 |
tbarron | spatel: manila on the other hand will give you shares from a networked file system, so you are all set for multi-access | 14:40 |
spatel | We need a parallel filesystem for HPC so all nodes can see it and read/write | 14:40 |
vkmc | tbarron++ | 14:40 |
mgariepy | i think cephfs would be ok after considering the security issues | 14:40 |
spatel | tbarron you are saying deploy cephfs with manila? | 14:41 |
tbarron | currently there are no manila drivers for lustre, so cephfs would be the way to go | 14:41 |
spatel | Manila is new to me so asking noob question :) | 14:41 |
tbarron | vkmc :) | 14:41 |
spatel | I have 5 servers to deploy storage on, thinking i can put Ceph on them and do cephfs and use manila | 14:42 |
spatel | How does a VM get the shared filesystem using manila? what NIC does it use for data transfer? | 14:43 |
tbarron | spatel: I'm not an OSA deployment expert so will defer to others in this channel but ideally you'd have an isolated Storage network for connecting VMs to the Ceph daemons; I | 14:45 |
tbarron | would add an extra nic to the VMs that directly attaches to that network. | 14:45 |
spatel | I do have a dedicated 10G nic attached to storage from each compute node | 14:46 |
spatel | VM direct to storage is a bad idea, correct? because it uses the virtio driver which has poor performance... | 14:46 |
tbarron | that dedicated data center storage network can be mapped to a neutron provider network so VMs can attach to it. | 14:47 |
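(In OSA terms, mapping the storage network for VM use would typically mean an entry in the provider_networks section of openstack_user_config.yml. A hypothetical sketch — bridge, interface, and group names are placeholders and would depend on the actual host networking:)

```yaml
# openstack_user_config.yml fragment - hypothetical sketch
- network:
    container_bridge: br-storage      # placeholder bridge name
    container_type: veth
    container_interface: eth2         # placeholder container interface
    type: flat                        # or vlan, depending on the fabric
    net_name: storage
    group_binds:
      - neutron_linuxbridge_agent
```

A matching neutron provider network and subnet on top of that then lets a VM attach a second port directly on the storage network.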
spatel | compute direct to VM, some kind of passthrough would be great | 14:47 |
spatel | You are talking about SRIOV NIC for storage | 14:47 |
spatel | I learned in my life that virtio is crap when it comes to high speed data transfer :( | 14:48 |
spatel | if i mount NFS directly in the VM then it will give poor performance because all traffic will pass via the virtual nic. (I may use SRIOV so the vm can directly attach to the physical nic) | 14:49 |
noonedeadpunk | "manila with what storage" -> virtfs ( ͡ᵔ ͜ʖ ͡ᵔ ) | 14:49 |
tbarron | spatel: ack, but I'm not SRIOV expert, more manila storage. Others here (or on openstack-discuss list) may be able to advise on SRIOV for storage network. | 14:50 |
spatel | I think SRIOV would be perfect for this design | 14:51 |
spatel | https://www.msi.umn.edu/sites/default/files/Pablo_CERN.pdf | 14:51 |
noonedeadpunk | btw I believe virtfs is a thing now in centos at least? | 14:51 |
tbarron | noonedeadpunk: virtiofs is in development in nova, theoretically will be much better perf than traditional virtio b/c of shared mem/DAX between guest and host but it's not ready yet | 14:51 |
spatel | noonedeadpunk virtfs ? | 14:51 |
tbarron | *virtiofs | 14:51 |
noonedeadpunk | Yeah, I know it's in development, not even sure when to expect for it to be merged... | 14:52 |
spatel | looks promising tbarron | 14:52 |
opendevreview | James Denton proposed openstack/openstack-ansible-os_ironic master: Remove [keystone] configuration block https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/831544 | 14:52 |
tbarron | it's in new kernels and guest OSes but not yet in OpenStack (nova) | 14:52 |
spatel | :( | 14:52 |
spatel | it will be win! win! whenever it's available | 14:53 |
spatel | looks like cephfs + manila + sriov will do the trick | 14:54 |
tbarron | spatel: check with the CERN folks then. They do cephfs + manila. The slides show cephfs + sriov. Are they doing all 3? | 14:55 |
noonedeadpunk | I told you end up with ceph anyway :D | 14:55 |
spatel | I love Lustre for HPC but it doesn't have (stable) openstack support | 14:55 |
rohit02 | noonedeadpunk: are the changes merged regarding this /etc/openstack_deploy/user-collection-requirements.yml | 14:56 |
spatel | tbarron i don't know if they are using sriov or not but it looks like they do RDMA (infiniband for storage data) | 14:56 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/wallaby: Bump ansible.netcommon version https://review.opendev.org/c/openstack/openstack-ansible/+/831426 | 14:59 |
tbarron | spatel: ack. They are the most likely folks to have explored this sort of integration though. | 14:59 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/victoria: Bump ansible.netcommon version https://review.opendev.org/c/openstack/openstack-ansible/+/831427 | 15:00 |
spatel | in a 5 node Ceph deployment do i need a dedicated node for cephfs to expose the filesystem? that would be a bottleneck right? | 15:01 |
tbarron | spatel: you can run multiple metadata servers. Actual data content is scaled out over lots of osds. But CephFS isn't going to match Lustre on performance. | 15:03 |
spatel | assuming cephfs is like RBD which spreads read/write across all OSDs | 15:03 |
tbarron | like rbd on the data but not on the file system metadata | 15:04 |
spatel | I have only 5 nodes for storage so thinking about how i can design that | 15:05 |
tbarron | spatel: HPC peeps keep saying they want a Lustre driver for Manila but someone who has Lustre needs to write it :) Manila drivers aren't that hard but you need the back end in question to write it. | 15:06 |
spatel | :) | 15:06 |
mgariepy | spatel, do you have multiple clients or only one client doing hpc on the cluster? | 15:53 |
spatel | I have 20 compute nodes going to share the filesystem | 15:53 |
spatel | all 20 compute nodes have an infiniband nic for vm passthrough so running vms can talk to each other over a 100gbps low latency link for MPI messages | 15:55 |
spatel | here is the design - https://satishdotpatel.github.io/HPC-on-openstack/ | 15:55 |
spatel | storage part is missing which i am doing some research | 15:56 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Use separate tmp directory https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/831550 | 15:57 |
spatel | Maybe my end users create 40 vms on this cloud and use the shared filesystem to crunch some numbers using a cluster application (Slurm or Spark or anything) | 15:58 |
spatel | goal is to have a virtual HPC so multiple interested parties can create and destroy and do experiments | 15:59 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Add galera_data_dir variable https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/831552 | 16:07 |
mgariepy | https://zuul.opendev.org/t/openstack/build/f3204771e8d34b94bc1ee7fda88beb87/log/job-output.txt#5487 | 19:33 |
mgariepy | huh ? The error was: jinja2.exceptions.TemplateRuntimeError: No filter named 'ipaddr' found. | 19:34 |
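(For context: the ipaddr filter, backed by the netaddr library, essentially validates an IP value — returning it on success and False otherwise. A rough stdlib-only analogue of that default behaviour, ignoring the filter's many query modes like 'address' or 'net':)

```python
import ipaddress

def ipaddr(value):
    """Rough analogue of the ipaddr filter's default behaviour:
    return the value if it parses as an IP address or interface
    (optionally with a prefix), otherwise return False."""
    try:
        ipaddress.ip_interface(str(value))
        return value
    except ValueError:
        return False

print(ipaddr("172.29.236.100"))   # valid address -> echoed back
print(ipaddr("172.29.236.0/22"))  # valid network/interface -> echoed back
print(ipaddr("not-an-ip"))        # invalid -> False
```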
mgariepy | oops.. | 19:34 |
opendevreview | Marc Gariépy proposed openstack/ansible-role-systemd_networkd master: Change location of ipaddr filter https://review.opendev.org/c/openstack/ansible-role-systemd_networkd/+/831603 | 19:59 |
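(Per the commit titles — "Change location of ipaddr filter" — and the ansible.utils collection-requirement patch proposed earlier in the day, the fix is to reference the filter by its new fully-qualified name. A hypothetical before/after for a template line, with a placeholder variable name:)

```jinja
{# before: bare filter name, no longer resolvable once the filter
   moved out of its old location #}
Gateway={{ gateway_address | ipaddr('address') }}

{# after: fully-qualified name from the ansible.utils collection #}
Gateway={{ gateway_address | ansible.utils.ipaddr('address') }}
```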
*** dviroel is now known as dviroel|out | 21:31 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!