Thursday, 2022-11-17

[00:09] <opendevreview> Merged openstack/openstack-ansible master: Bump uWSGI version  https://review.opendev.org/c/openstack/openstack-ansible/+/864579
[07:25] *** ysandeep|out is now known as ysandeep
[08:14] *** akahat|ruck is now known as akahat|ruck|lunch
[09:22] *** akahat|ruck|lunch is now known as akahat|ruck
[10:08] *** ysandeep is now known as ysandeep|afk
[10:50] <dok53> Morning all, I think I'm nearly there with the volume storage on the EMC SAN. I'm getting the error shown here: https://paste.openstack.org/show/bYKiF0EmEJZCE7bgweGJ/ I did "pip install storops" and it's installed, but cinder doesn't see it. Do any of you know if this is expected, or how I would pip install into the cinder venv so that cinder can see it?
[11:09] *** dviroel_ is now known as dviroel
[11:17] <opendevreview> Marcus Klein proposed openstack/openstack-ansible-os_neutron master: Allow to set dnsmasq configuration options  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/864872
[11:28] <jrosser> do
[11:28] <jrosser> k
[11:28] <jrosser> noonedeadpunk:
[11:28] <jrosser> noonedeadpunk:
[11:28] <jrosser> argh
[11:29] <jrosser> dok53: what command did you use to pip install?
[11:38] *** ysandeep|afk is now known as ysandeep
[11:39] <dok53> noonedeadpunk: just "pip install storops"
[11:40] <noonedeadpunk> o/
[11:40] <noonedeadpunk> dok53: cinder runs inside a virtualenv
[11:41] <noonedeadpunk> so you need to use /openstack/venvs/cinder-${osa_version}/bin/pip
[11:42] <noonedeadpunk> Or, define all that in your user_variables plus add storops - https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/defaults/main.yml#L304-L314
[11:42] <noonedeadpunk> after that you can re-run openstack-ansible os-cinder-install.yml -e venv_rebuild=true
[11:43] <dok53> Ah, that's what I thought, I was following old instructions for our SAN. Thanks yet again :)
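
A minimal sketch of the two approaches noonedeadpunk describes above; the venv directory name "cinder-26.0.0" is hypothetical and depends on the deployed OSA version:

    # Option 1: install the driver dependency straight into the cinder venv
    /openstack/venvs/cinder-26.0.0/bin/pip install storops

    # Option 2: declare it in user_variables (see the hook below) and rebuild
    openstack-ansible os-cinder-install.yml -e venv_rebuild=true
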
[11:44] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment  https://review.opendev.org/c/openstack/openstack-ansible/+/864750
[11:44] <jrosser> i think we have a hook for user defined pip packages https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/defaults/main.yml#L317
[11:44] <jrosser> dok53: ^ that would be what you want
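
A sketch of using that hook from user_variables.yml; the variable name cinder_user_pip_packages is an assumption based on the role defaults linked above, not confirmed from the link:

    # /etc/openstack_deploy/user_variables.yml
    # (variable name assumed from the os_cinder role defaults)
    cinder_user_pip_packages:
      - storops
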
[11:45] <jrosser> noonedeadpunk: did you see the discussion of whether the uwsgi role should / should not use the repo server?
[11:46] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment  https://review.opendev.org/c/openstack/openstack-ansible/+/864750
[11:46] <noonedeadpunk> oh, didn't know we have that, sweet
[11:46] <noonedeadpunk> jrosser: no I haven't
[11:47] <noonedeadpunk> seems it was quite late yesterday?
[11:47] <jrosser> ah well damiandabrowski showed that we build uwsgi in each place we use it
[11:47] <jrosser> yes it was late :/
[11:49] <noonedeadpunk> um, I can totally recall the intention to build uwsgi wheels only once and then install them everywhere
[11:50] <noonedeadpunk> that was one of the main points of the role (except we need to install uwsgi only once for metal)
[11:50] <noonedeadpunk> And I'm quite sure wheels for it were built on the repo container back in the days
[11:51] <noonedeadpunk> though on an AIO we can end up not building wheels at all
[11:51] <noonedeadpunk> as it's single-host only, kind of
[11:51] <jrosser> LXC aio should?
[11:52] <jrosser> anyway it seems there is no place in the uwsgi role to specify a build host
[11:52] <noonedeadpunk> no, not really.
[11:52] <noonedeadpunk> well, we do build them in CI though, I think
[11:52] <jrosser> regardless it would be good not to have gcc installed everywhere
[11:54] <noonedeadpunk> do we ever specify a build host?
[11:54] <noonedeadpunk> except by relying on the python_venv_build default?
[11:55] <noonedeadpunk> tbh we have quite complex logic there for picking this up
[11:55] <noonedeadpunk> ok, now looking at the code I think we actually don't build wheels in CI
[11:56] <noonedeadpunk> There was only a patch to enable that https://review.opendev.org/c/openstack/openstack-ansible/+/752311
[11:56] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Enable venv_wheel_build_enable for CI  https://review.opendev.org/c/openstack/openstack-ansible/+/752311
[12:01] <noonedeadpunk> but yes, you're right jrosser, we need to install both on the repo and on the host I guess. I _think_ we have some "smart" variable in the role that installs packages only when we do not build wheels
[12:02] <noonedeadpunk> So basically, if wheels are not built we install ssl-dev on the destination container, and if wheels are built we install it on the repo
[12:02] <noonedeadpunk> (https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/tasks/python_venv_install.yml#L31)
[12:02] <noonedeadpunk> not sure if we need it everywhere regardless tbh
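
A rough paraphrase of that "smart" behaviour, not the literal python_venv_build task; the task and variable names here are hypothetical:

    # Build-time distro packages (e.g. ssl-dev) land on the install target
    # only when wheels are NOT pre-built on the repo host:
    - name: Install build distro packages on the install target
      package:
        name: "{{ venv_build_distro_package_list }}"
        state: present
      when: not (venv_wheel_build_enable | bool)
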
[12:05] <noonedeadpunk> jrosser: btw, have you succeeded in having mgmt net != ssh net?
[12:18] <opendevreview> Dmitriy Rabotyagov proposed openstack/ansible-role-zookeeper master: dnm  https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/864887
[13:32] <opendevreview> Dmitriy Rabotyagov proposed openstack/ansible-role-zookeeper master: Initial commit to zookeeper role  https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/864752
[13:41] *** ysandeep is now known as ysandeep|afk
[13:58] <jrosser> noonedeadpunk: yes we totally have mgmt net != ssh net on all my deployments
[14:01] <noonedeadpunk> oh, ok. But you had to override all this crap with ansible_host?
[14:02] <jrosser> no not at all
[14:02] <jrosser> maybe this means something terrible is happening :)
[14:02] <jrosser> do you have an example?
[14:06] *** ysandeep|afk is now known as ysandeep
[14:17] <noonedeadpunk> jrosser: I was looking at all things like https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/all/infra.yml#L24 or https://opendev.org/openstack/openstack-ansible-galera_server/src/branch/master/defaults/main.yml#L129 which kind of means that clusters would be over the ssh network, right?
[14:18] <noonedeadpunk> or well, maybe we set ansible_host to the management network....
[14:18] <noonedeadpunk> hm, I actually haven't checked if we do :D
[14:29] <noonedeadpunk> ah, yes, indeed, we set ansible_host == container_address https://opendev.org/openstack/openstack-ansible/src/branch/master/osa_toolkit/generate.py#L215-L216
[14:34] <jrosser> right, makes sense
[14:34] <jrosser> but kind of unclear / magical at the same time
[14:39] <noonedeadpunk> yeah... But then I will adjust the zookeeper role as well to be just as magical...
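
Roughly what the generate.py lines linked above produce per host; the container name and addresses below are illustrative (172.29.236.0/22 is the documented OSA management range):

    # Effective inventory host vars (illustrative):
    infra1_galera_container-abcdef:
      ansible_host: 172.29.236.101       # set equal to container_address
      container_address: 172.29.236.101  # management network
    # so role defaults built on ansible_host stay on the mgmt net even
    # when SSH to the deploy hosts travels over a different network
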
[14:45] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment  https://review.opendev.org/c/openstack/openstack-ansible/+/864750
[14:57] <damiandabrowski> so noonedeadpunk was right, when I explicitly set venv_wheel_build_enable=True, the uWSGI wheel was built on the repo container.
[14:57] <damiandabrowski> but I'm a bit confused now. Maybe I misunderstood the logic here: https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/vars/main.yml#L80
[14:58] <damiandabrowski> But does it mean that if I have, let's say, 3 glance containers but run the glance playbook with a limit to only one of them, then wheels won't be built on the repo container?
[15:11] <noonedeadpunk> yes, that's correct
[15:12] <noonedeadpunk> but if you run with serial - they will
[15:12] <noonedeadpunk> eventually, when you setup/upgrade, the assumption is that you don't use limit but rely on the playbook serial.
[15:13] <noonedeadpunk> as limit in a couple of roles can just break the deployment
[15:14] <noonedeadpunk> however I wonder... If wheels are already built, and you run with limit... will it utilize the already present wheels? As I guess with the current logic it won't?
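
A paraphrase of the condition in the vars/main.yml link above — not the literal code, and the use of ansible_play_hosts here is an assumption:

    # Wheels are only built when the play covers more than one target host,
    # so running with --limit against a single container skips the build:
    venv_wheel_build_enable: "{{ ansible_play_hosts | length > 1 }}"
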
[15:16] <opendevreview> Dmitriy Rabotyagov proposed openstack/ansible-role-zookeeper master: Initial commit to zookeeper role  https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/864752
[15:18] <damiandabrowski> yeah it won't: https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/defaults/main.yml#L81
[15:19] <damiandabrowski> I guess we can remove `and (venv_wheel_build_enable | bool)` from this condition?
[15:20] <damiandabrowski> these args shouldn't harm even if wheels are not present on the repo container, right?
[15:26] <noonedeadpunk> I think if wheels are not present, then the install will fail out, won't it?
[15:27] <noonedeadpunk> tbh, I think we could just hardcode venv_wheel_build_enable in user_variables or smth....
[15:27] <noonedeadpunk> or document that somewhere
[15:30] <damiandabrowski> bah, i thought there was some sort of fallback if wheels are not present
[15:30] <damiandabrowski> but I agree, hardcoding venv_wheel_build_enable=True seems reasonable
[15:32] <noonedeadpunk> well it's reasonable for the osa context, unless you try to use the role elsewhere
[15:32] <noonedeadpunk> but yeah, you will still likely do some overrides...
[15:33] <damiandabrowski> if it's reasonable from the osa context, we can override it in a place like /opt/openstack-ansible/inventory/group_vars/all/all.yml :D
[15:33] <noonedeadpunk> or just document it
[15:34] <noonedeadpunk> tbh I dunno what's better here
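
A minimal sketch of the hardcode being discussed, whether it lands in a deployer's user_variables or the integrated repo's group_vars:

    # /etc/openstack_deploy/user_variables.yml
    # (or /opt/openstack-ansible/inventory/group_vars/all/all.yml)
    venv_wheel_build_enable: true
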
[15:46] <noonedeadpunk> Hm, I'm catching this on master: https://paste.openstack.org/show/b8dfAKhU3Dffle6Oe2ua/
[15:47] <dok53> Me again :( so I got cinder talking to the backend SAN by doing what noonedeadpunk suggested with the venv pip installation. I also needed to install an old, old naviseccli, but that's because our system is so old, which is why I may move to a different backend rather than waste time on it. But before I do, has anyone any input on this: https://paste.openstack.org/show/bBilT9XJZvIx2ty303UR/ The volumes create & delete no problem and I can see it
[15:47] <dok53> created on our SAN, it's just that I can't attach it
[15:49] <noonedeadpunk> I'm not sure what triggers these warnings though
[15:49] <noonedeadpunk> dok53: sorry, no idea. Anything on the compute side?
[15:53] <noonedeadpunk> for example here: https://zuul.opendev.org/t/openstack/build/a8b1124b603944a3bf5e52c507896521/log/job-output.txt#6704
[15:54] <noonedeadpunk> and a couple more times down the line
[15:56] <noonedeadpunk> and it looks like this warning doesn't even appear while ansible runs
[15:57] <noonedeadpunk> maybe it's from zuul's ansible....
[16:00] <dok53> Not to worry noonedeadpunk, I think I'll call it quits, that system is too old. Thanks, have a good evening, I'm off
[16:00] *** dviroel is now known as dviroel|brb|lunch
[16:14] <noonedeadpunk> no, it's not zuul, as I see that in an aio as well
[16:14] <noonedeadpunk> maybe it's some ara thing?
[16:15] <noonedeadpunk> because that's totally not when our ansible runs https://paste.openstack.org/show/bkhcbXcp7FOxf4R2kRlg/
[16:15] *** ysandeep is now known as ysandeep|out
[17:09] <opendevreview> Damian Dąbrowski proposed openstack/ansible-role-uwsgi master: Install OpenSSL development headers  https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/864783
[17:11] <damiandabrowski> ^ I've fixed this patch to install openssl dev headers on the host where uWSGI is being built
[17:12] *** dviroel|brb|lunch is now known as dviroel
[17:21] <noonedeadpunk> damiandabrowski: any reason not to add these package names to vars?
[17:22] <noonedeadpunk> just add a variable uwsgi_devel_distro_packages in vars/debian.yml / vars/redhat.yml
[17:32] <damiandabrowski> a separate var may actually be a good idea, let me fix it
[17:35] <opendevreview> Damian Dąbrowski proposed openstack/ansible-role-uwsgi master: Install OpenSSL development headers  https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/864783
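
A sketch of the per-distro vars split noonedeadpunk suggests; the package names below are the usual OpenSSL development headers for each family, not confirmed from the patch:

    # vars/debian.yml
    uwsgi_devel_distro_packages:
      - libssl-dev

    # vars/redhat.yml
    uwsgi_devel_distro_packages:
      - openssl-devel
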
[17:51] <noonedeadpunk> fwiw, the zookeeper role looks quite fair, except SSL https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/864752
[17:52] <noonedeadpunk> also, thanks to fungi, the redirect is not an issue anymore
[17:52] <noonedeadpunk> I've just re-issued a recheck for the integrated repo patch
[17:55] <noonedeadpunk> I will work on SSL in a follow-up patch I guess
[17:57] <damiandabrowski> great, thanks Dmitriy
[17:58] <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add zookeeper deployment  https://review.opendev.org/c/openstack/openstack-ansible/+/864750
[18:16] *** dviroel_ is now known as dviroel
[19:18] <prometheanfire> master branch still == zed?
[21:15] *** dviroel is now known as dviroel|out
