Wednesday, 2022-03-02

ygk_12345hi all05:29
ygk_12345is there a way to install specific versions of nginx through OSA ?05:30
rohit02hi team, OSA deployment of wallaby failed with this error in setup-hosts.yml https://paste.opendev.org/show/bFenr7RYnGjPgtyttclJ/10:17
noonedeadpunkhey10:44
noonedeadpunkhm, interesting. you have ansible 2.10.10, right?10:46
noonedeadpunk*ansible-base10:46
noonedeadpunkrohit02: also what task you run?10:49
noonedeadpunk*playbook10:49
noonedeadpunkplaybooks/healthcheck-hosts.yml ?10:49
noonedeadpunkjrosser: I wonder how we managed to use ansible.netcommon.ipaddr without the collection in https://opendev.org/openstack/openstack-ansible/src/branch/master/ansible-collection-requirements.yml ?10:53
rohit02noonedeadpunk: i am trying to run /usr/local/bin/openstack-ansible setup-hosts.yml -vvv10:53
rohit02ansible version is ansible 2.10.1010:54
noonedeadpunkjrosser: like https://opendev.org/openstack/openstack-ansible-lxc_hosts/src/branch/master/templates/lxc-networkd-bridge.network.j2#L610:54
rohit02jrosser noonedeadpunk: is there anything i have to check, as all our deployment jobs are failing with the same error on both ubuntu and centos11:01
noonedeadpunkrohit02: sorry. try adding https://github.com/ansible-collections/ansible.netcommon to https://opendev.org/openstack/openstack-ansible/src/branch/master/ansible-collection-requirements.yml11:17
noonedeadpunkand re-run bootstrap-ansible.sh11:17
* noonedeadpunk caught a read-only / on the working laptop11:17
rohit02noonedeadpunk: if i completely redeploy the setup will it resolve automatically.... or do i need to make the changes manually11:24
noonedeadpunkum, tbh not sure. From what I can see I guess we're missing that collection in requirements. On the other hand I can hardly imagine how we haven't caught that11:36
rohit02noonedeadpunk: could you plzz give me steps to resolve this issue11:49
noonedeadpunkcreate /etc/openstack_deploy/user-collection-requirements.yml with content: https://paste.opendev.org/show/bacsaf5ohdh4brqYAq3B/11:54
noonedeadpunkand re-run bootstrap-ansible11:54
opendevreviewMarios Andreou proposed openstack/openstack-ansible-os_tempest stable/wallaby: Add centos-9 tripleo standalone job for wallaby zuul layout  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/83119611:58
noonedeadpunkrohit02: but tbh I don't see any issues without that collection12:15
rohit02noonedeadpunk: with the given steps it also failed in my environment12:17
noonedeadpunkSo basically, from what I saw, your deployment is unhappy because of the missing ipaddr filter12:26
noonedeadpunkcan you check with a simple debug if the issue is really with that module?12:27
noonedeadpunkLike with https://paste.opendev.org/show/bYGo6bitb6hL5WrSy9Gv/12:27
rohit02noonedeadpunk: do i need to simply run this playbook  https://paste.opendev.org/show/bYGo6bitb6hL5WrSy9Gv/ right?12:45
noonedeadpunkyes, except you need to put a valid host in it12:46
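The paste itself isn't reproduced in the log, but a minimal check along these lines does the same job (the filename and literal address below are placeholders; the real playbook apparently tested container_address against an inventory host):

    # test-ipaddr.yml -- confirm that the ipaddr filter can be resolved at all
    - hosts: localhost
      gather_facts: false
      tasks:
        - name: Try the ipaddr filter on a literal CIDR
          debug:
            msg: "{{ '172.29.236.100/22' | ipaddr('address') }}"

If the collection providing the filter is missing or broken, the task fails with "No filter named 'ipaddr'" instead of printing the address.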
rohit02the module is there, here is the output of the command https://paste.opendev.org/show/bmGQuTZm6VmhKqPnN5zh/12:50
noonedeadpunkand is it the same if you just use `container_address | ipaddr`?13:00
rohit02noonedeadpunk: https://paste.opendev.org/show/bJqaZCJ05lMj7kWAlt70/13:10
noonedeadpunkoh, that's interesting...13:11
noonedeadpunkAnd I bet I know why....13:11
noonedeadpunkrohit02: for https://paste.opendev.org/show/bacsaf5ohdh4brqYAq3B/ replace version with 2.5113:13
noonedeadpunk*2.5.113:13
rohit02 /etc/openstack_deploy/user-collection-requirements.yml in this file right13:14
noonedeadpunkyep13:15
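The paste contents aren't shown in the log, but from the conversation the file is an ansible-galaxy style collection list, roughly like this (the ansible.netcommon name and the 2.5.1 pin come from the discussion above; the exact layout is a sketch):

    # /etc/openstack_deploy/user-collection-requirements.yml
    collections:
      - name: ansible.netcommon
        version: 2.5.1

Re-running bootstrap-ansible.sh then installs it in addition to the in-tree ansible-collection-requirements.yml.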
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add ansible.utils collection requirement  https://review.opendev.org/c/openstack/openstack-ansible/+/83152513:22
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add ansible.utils collection requirement  https://review.opendev.org/c/openstack/openstack-ansible/+/83152513:26
noonedeadpunkrohit02: or use that ^13:26
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Change location of ipaddr filter  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/83152613:30
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-haproxy_server master: Change location of ipaddr filter  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/83152813:32
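The ipaddr filter historically shipped via ansible.netcommon but now lives in ansible.utils, which is presumably what these "change location" patches chase. A hypothetical before/after for a task or template expression (the variable name is made up):

    # before: short name, resolved through ansible.netcommon
    address: "{{ storage_cidr | ipaddr('address') }}"
    # after: fully qualified name from the ansible.utils collection
    address: "{{ storage_cidr | ansible.utils.ipaddr('address') }}"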
rohit02noonedeadpunk: thank u so much for your help.... will we get these changes directly from the stable/wallaby branch13:32
rohit02or do we still need to add these changes manually13:32
noonedeadpunkIf user-collection-requirements.yml helped - I'd prefer using that on W13:33
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-plugins master: Change location of ipaddr filter  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/83153013:35
rohit02noonedeadpunk:  /etc/openstack_deploy/user-collection-requirements.yml with version 2.5.1 works for me..thanx13:45
noonedeadpunkawesome13:46
rohit02do we need to create this file every time for a new deployment... or will we get the fix directly from stable/wallaby?13:47
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible stable/xena: Bump ansible.netcommon version  https://review.opendev.org/c/openstack/openstack-ansible/+/83153513:48
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Bump ansible.netcommon version  https://review.opendev.org/c/openstack/openstack-ansible/+/83153613:50
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add ansible.utils collection requirement  https://review.opendev.org/c/openstack/openstack-ansible/+/83152513:51
mgariepymorning everyone13:53
noonedeadpunko/13:54
mgariepycentos again :(14:04
mgariepynoonedeadpunk, what do you think about this one ? https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/831436 14:10
mgariepyi wonder if a new var in defaults would be better.14:15
noonedeadpunkmgariepy: commented there14:21
noonedeadpunkI think the issue here is not the var, but that the connection might be encrypted, and then horizon_external_ssl will be false. But why should this break HTTP_X_FORWARDED_FOR?14:22
mgariepythanks 14:22
noonedeadpunkI guess it should not...14:22
mgariepyyeah indeed. the location might not be optimal either :D14:22
mgariepymost deployments would be behind an LB.14:23
noonedeadpunkyeah, but it's more about where SSL is terminated14:24
noonedeadpunkhttps://opendev.org/openstack/openstack-ansible/src/branch/master/releasenotes/notes/haproxy_ssl_terminiation-cdf0092a5bfa34b5.yaml14:24
noonedeadpunknot sure I support the naming as it's confusing, but until we say this is changing, we should respect it14:25
noonedeadpunkoh well, maybe I misunderstood14:26
noonedeadpunksorry my fault indeed, it's ok I guess14:27
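For reference, horizon_external_ssl is normally driven from user_variables; a minimal sketch for the case being discussed, assuming SSL is terminated on the haproxy/load balancer in front of horizon rather than on the horizon backends themselves:

    # /etc/openstack_deploy/user_variables.yml -- illustrative override only,
    # assuming external SSL termination on the load balancer
    horizon_external_ssl: true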
spatelnoonedeadpunk morning! 14:33
spatelI am looking for an HPC storage solution and would appreciate any advice or suggestions 14:33
spatelI have a working HPC openstack cloud and am now researching which storage would suit the environment 14:34
spatelLustre or Ceph 14:34
spatellots of folks are saying Manila for a shared filesystem for HPC jobs14:35
spatelHow does cinder provide shared storage?14:35
mgariepythe advice i had the other day from noonedeadpunk was manila, and definitely not cinder, for shared :)14:38
tbarronspatel: cinder can provide "multiattach" storage but you still get a raw block device on which14:38
tbarronspatel: you would need to build a clustered file system assuming you want apps to work with filesystem paths14:39
spatelmanila with what storage?14:39
tbarronspatel: manila on the other hand will give you shares from a networked file system, so you are all set for multi-access14:40
spatelWe need a parallel filesystem for HPC so all nodes can see it and read/write14:40
vkmctbarron++ 14:40
mgariepyi think cephfs would be ok after considering the security issues14:40
spateltbarron you are saying deploy cephfs with manila?14:41
tbarroncurrently there are no manila drivers for lustre, so cephfs would be the way to go14:41
spatelManila is new to me so asking noob question :)14:41
tbarronvkmc :)14:41
spatelI have 5 servers to deploy storage on, thinking i can put Ceph on them and do cephfs and use manila14:42
spatelHow does a VM get a shared filesystem using manila? what NIC does it use for data transfer?14:43
tbarronspatel: I'm not an OSA deployment expert so will defer to others in this channel but ideally you'd have an isolated Storage network for connecting VMs to the Ceph daemons; I14:45
tbarronwould add an  extra nic to the VMs that directly attaches to that network.14:45
spatelI do have a dedicated 10G nic attached to storage from each compute node14:46
spatelVM directly to storage is a bad idea, correct? because it uses the virtio driver, which has poor performance... 14:46
tbarronthat dedicated data center storage network can be mapped to a neutron provider network so VMs can attach to it.14:47
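One common shape for that mapping in OSA is a provider_networks entry in openstack_user_config.yml; a rough sketch, with the bridge and interface names assumed rather than taken from this environment:

    # expose the dedicated storage network to instances as a flat provider network
    - network:
        container_bridge: "br-storage-vms"
        container_type: "veth"
        container_interface: "eth13"
        type: "flat"
        net_name: "storage"
        group_binds:
          - neutron_linuxbridge_agent

Neutron can then offer a provider network on that segment and VMs attach a second NIC to it.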
spatelsome kind of passthrough from compute direct to the VM would be great14:47
spatelYou are talking about SRIOV NIC for storage14:47
spatelI learned in my life virtio is crap when it comes to high speed data transfer :( 14:48
spatelif i mount NFS directly in the VM then it will give poor performance because all traffic will pass via the virtual nic. (I may use SRIOV so the vm can directly attach to a physical nic) 14:49
noonedeadpunk"manila with what storage" -> virtfs ( ͡ᵔ ͜ʖ ͡ᵔ )14:49
tbarronspatel: ack, but I'm not SRIOV expert, more manila storage.  Others here (or on openstack-discuss list) may be able to advise on SRIOV for storage network.14:50
spatelI think SRIOV would be perfect for this design 14:51
spatelhttps://www.msi.umn.edu/sites/default/files/Pablo_CERN.pdf14:51
noonedeadpunkbtw I believe virtfs is a thing now in centos at least?14:51
tbarronnoonedeadpunk: virtiofs is in development in nova, theoretically will be much better perf than traditional virtio b/c of shared mem/DAX between guest and host but it's not ready yet14:51
spatelnoonedeadpunk virtfs ?14:51
tbarron*virtiofs14:51
noonedeadpunkYeah, I know it's in development, not even sure when to expect it to be merged...14:52
spatellooks promising tbarron 14:52
opendevreviewJames Denton proposed openstack/openstack-ansible-os_ironic master: Remove [keystone] configuration block  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/83154414:52
tbarronit's in new kernel and guest OSes but not yet in OpenStack (nova)14:52
spatel:(14:52
spatelit will be win! win! whenever it's available14:53
spatellooks like cephfs + manila + sriov will do the trick 14:54
tbarronspatel: check with the CERN folks then.  They do cephfs + manila.  The slides show cephfs + sriov.  Are they doing all 3?14:55
noonedeadpunkI told you you'd end up with ceph anyway :D14:55
spatelI love Lustre for HPC but it doesn't have openstack support (stable support)14:55
rohit02noonedeadpunk: are the changes regarding this /etc/openstack_deploy/user-collection-requirements.yml merged?14:56
spateltbarron i don't know if they are using sriov or not but it looks like they do RDMA (infiniband for storage data) 14:56
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible stable/wallaby: Bump ansible.netcommon version  https://review.opendev.org/c/openstack/openstack-ansible/+/83142614:59
tbarronspatel: ack.  They are the most likely folks to have explored this sort of integration though.14:59
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible stable/victoria: Bump ansible.netcommon version  https://review.opendev.org/c/openstack/openstack-ansible/+/83142715:00
spatelin a 5 node Ceph deployment do i need a dedicated node for cephfs to expose the filesystem? that would be a bottleneck right?15:01
tbarronspatel: you can run multiple metadata servers.  Actual data content is scaled out over lots of osds.  But CephFS isn't going to match Lustre on performance.15:03
spatelassuming cephfs is like RBD, which spreads reads/writes across all OSDs 15:03
tbarronlike rbd on the data but not on the file system metadata15:04
spatelI have only 5 nodes for storage so I'm thinking about how i can design that15:05
tbarronspatel: HPC peeps keep saying they want a lustre driver for Manila but someone who has lustre needs to write it :)  Manila drivers aren't that hard but you need the back end in question to write it.15:06
spatel:)15:06
mgariepyspatel, do you have multiple clients or only one client doing hpc on the cluster?15:53
spatelI have 20 compute nodes that are going to share the filesystem15:53
spatelall 20 compute nodes have an infiniband nic for vm passthrough so running vms can talk to each other over a 100gbps low latency link for MPI messages 15:55
spatelhere is the design - https://satishdotpatel.github.io/HPC-on-openstack/15:55
spatelthe storage part is missing, which is what i am doing some research on 15:56
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Use separate tmp directory  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/83155015:57
spatelMaybe my end users will create 40 vms on this cloud and use the shared filesystem to crunch some numbers using a cluster application (Slurm or Spark or anything)15:58
spatelgoal is to have a virtual HPC so multiple interested parties can create and destroy and do experiments 15:59
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Add galera_data_dir variable  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/83155216:07
mgariepyhttps://zuul.opendev.org/t/openstack/build/f3204771e8d34b94bc1ee7fda88beb87/log/job-output.txt#548719:33
mgariepyhuh ? The error was: jinja2.exceptions.TemplateRuntimeError: No filter named 'ipaddr' found.19:34
mgariepyoops..19:34
opendevreviewMarc Gariépy proposed openstack/ansible-role-systemd_networkd master: Change location of ipaddr filter  https://review.opendev.org/c/openstack/ansible-role-systemd_networkd/+/83160319:59
*** dviroel is now known as dviroel|out21:31
