Wednesday, 2021-05-05

[00:08] *** dave-mccowan has quit IRC
[00:47] *** dotnetted has joined #openstack-ansible
[00:50] <dotnetted> Hey all - Is the "cinder-manage" command available on a Victoria cinder install? If so, where? Thanks
[01:24] *** spatel_ has joined #openstack-ansible
[01:24] *** spatel_ is now known as spatel
[02:15] <poopcat> @dotnetted -- it's on the controller nodes. If your services are in containers, try logging into the cinder-api LXC container and sourcing '/openstack/venvs/cinder-*/bin/activate'
[02:15] <poopcat> otherwise, just do that on the baremetal controller
[02:15] *** spatel has quit IRC
[02:18] <openstackgerrit> YuehuiLei proposed openstack/openstack-ansible-haproxy_server master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/789692
[02:21] *** spatel_ has joined #openstack-ansible
[02:21] *** spatel_ is now known as spatel
[02:33] *** evrardjp has quit IRC
[02:33] *** evrardjp has joined #openstack-ansible
[02:41] *** spatel has quit IRC
[03:04] <dotnetted> poopcat: aha! thanks - was missing the venv activation
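A sketch of the procedure poopcat describes, for reference. The container name is a hypothetical example (list yours with `lxc-ls`), and the venv glob matches whichever cinder version is deployed:

```shell
# Attach to the cinder-api container (name is illustrative)
lxc-attach -n aio1_cinder_api_container-12345678
# Activate the cinder venv so cinder-manage is on PATH
source /openstack/venvs/cinder-*/bin/activate
# Then any cinder-manage subcommand works, e.g.:
cinder-manage db version
```

On a metal deployment, skip the `lxc-attach` and source the venv directly on the controller.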
[03:29] *** dotnetted has quit IRC
[04:01] *** ChipOManiac has joined #openstack-ansible
[04:03] <ChipOManiac> Hey all, anyone know what the problem is when "Slurp up constraints file for later re-deployment" in python_venv_build fails with "constraints file not found"? I've come to believe it's just that the repo is still downloading the file, but it's been a whole day already and it still throws the same error.
[04:13] *** mpjetta has joined #openstack-ansible
[04:18] *** mpjetta has quit IRC
[04:18] *** oleksandry has quit IRC
[04:46] *** shyamb has joined #openstack-ansible
[04:59] *** miloa has joined #openstack-ansible
[05:21] *** miloa has quit IRC
[05:24] *** shyamb has quit IRC
[05:25] *** shyamb has joined #openstack-ansible
[05:39] *** shyam89 has joined #openstack-ansible
[05:39] *** oleksandry has joined #openstack-ansible
[05:41] *** shyamb has quit IRC
[06:05] *** poopcat has quit IRC
[06:07] *** poopcat has joined #openstack-ansible
[06:18] *** d34dh0r53 has quit IRC
[06:20] *** d34dh0r53 has joined #openstack-ansible
[06:32] *** gyee has quit IRC
[06:48] *** oleksandry has quit IRC
[06:57] *** oleksandry has joined #openstack-ansible
[07:05] *** gokhani has joined #openstack-ansible
[07:05] <openstackgerrit> YuehuiLei proposed openstack/openstack-ansible-os_cinder master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/789712
[07:09] <openstackgerrit> YuehuiLei proposed openstack/openstack-ansible-os_cinder master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/789712
[07:12] *** rpittau|afk is now known as rpittau
[07:12] *** SiavashSardari has joined #openstack-ansible
[07:14] <openstackgerrit> YuehuiLei proposed openstack/openstack-ansible-os_horizon master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/789713
[07:16] <openstackgerrit> YuehuiLei proposed openstack/openstack-ansible-os_glance master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/789714
[07:20] <openstackgerrit> YuehuiLei proposed openstack/openstack-ansible-os_ironic master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/789715
[07:22] <jrosser> ChipOManiac: the file is grabbed from the repo server as part of building the venv, it's not downloaded from the internet
[07:22] <jrosser> it's a file that should be created as part of the deployment, not an external file
[07:22] *** andrewbonney has joined #openstack-ansible
[07:23] <openstackgerrit> YuehuiLei proposed openstack/openstack-ansible-os_tempest master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/789716
[07:28] *** oleksandry has quit IRC
[07:29] <noonedeadpunk> jrosser: regarding min facts gathering (I think we already merged that?) - do you recall how hardware facts are gathered to resolve ansible_facts['processor_vcpus']?
[07:29] <noonedeadpunk> As that's a hardware fact, not min?
[07:29] <jrosser> hmm
[07:30] <jrosser> iirc i did have to deal with this
[07:30] *** tosky has joined #openstack-ansible
[07:31] <noonedeadpunk> ok-ok, just faced some issues with that on V, and can't recall what we did in regards to that on master
[07:33] <jrosser> oh well i think that there should be no changes on V, other than facts older than 24 hours maybe leading to a surprise
[07:41] <openstackgerrit> Andrew Bonney proposed openstack/openstack-ansible-os_zun master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/780733
[07:42] <ChipOManiac> jrosser: which means that something went wrong during deployment for it not to be created?
[07:42] <jrosser> ChipOManiac: yes, that's right
[07:42] <jrosser> which release are you using?
[07:42] <noonedeadpunk> yeah, I think in my case it's just smth simple, just pushed me to some thoughts about the current state :)
[07:42] <ChipOManiac> jrosser: Upgraded from Train to Ussuri, it's a PoC. Would doing a venv_rebuild=True work?
[07:43] <jrosser> well, i'm suspecting that folklore advice to use venv_rebuild=True is actually what has made this break
[07:44] <ChipOManiac> I barely use it. I usually resort to that when an error of this sort pops up.
[07:45] <jrosser> right, well it leaves these constraints files on the repo server in an inconsistent state
[07:46] <ChipOManiac> Which makes it worse?
[07:47] <jrosser> yeah well the trouble is for the one time you use venv_rebuild=true it does indeed sometimes 'fix' things for that run
[07:47] <jrosser> but it will then leave the state broken for any subsequent runs
[07:48] <ChipOManiac> Ohwell.
[07:49] <ChipOManiac> A constraints error popped up. Thought it was the repo server, so I reinstalled that. Then it got stuck at creating wheels. So I tried it with venv_rebuild=True, and it's still stuck at creating wheels.
[07:49] <jrosser> right yes, so this is the symptom of having used venv_rebuild=true
[07:50] <jrosser> if you attach to the repo container which it's using for the venv build
[07:50] <jrosser> and look in /var/www/repo/os-releases/<version-number>
[07:51] <jrosser> i think there you will find that for some of the venvs there are missing .txt files, there should be four for each
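The check jrosser describes can be sketched as a couple of commands. The version directory name is an example (use whatever release you deployed), and the exact constraint-file names vary, so this just eyeballs the per-venv .txt counts:

```shell
# On the repo container/host used for the venv build:
cd /var/www/repo/os-releases/22.1.2    # hypothetical version number
# Count .txt files grouped by venv name prefix - per the chat there
# should be four for each venv; fewer indicates the inconsistent state.
ls -1 *.txt | cut -d- -f1 | sort | uniq -c
```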
[07:53] *** shyam89 has quit IRC
[07:58] *** shyamb has joined #openstack-ansible
[07:59] <jrosser> noonedeadpunk: on V i think this would be it https://github.com/openstack/openstack-ansible/blob/stable/victoria/scripts/openstack-ansible.rc#L38
[08:01] <noonedeadpunk> jrosser: well, just folks reported hitting http://paste.openstack.org/show/804968/ while running 22.1.2
[08:01] <noonedeadpunk> which looks super weird to me tbh
[08:02] <noonedeadpunk> but yeah, I've already found that
[08:03] <jrosser> that's interesting
[08:05] <noonedeadpunk> can't reproduce though...
[08:06] <jrosser> stale facts would do it
[08:09] <noonedeadpunk> whatever though I think. I believe they might have mixed up cherry-picking with checkout ....
[08:13] *** ianychoi_ has quit IRC
[08:13] <noonedeadpunk> would be great to have a second pair of eyes on https://c98a6745a1bc52cbe8a6-6774f2c76ec10220eac79c064c510a87.ssl.cf5.rackcdn.com/789376/7/check/openstack-tox-docs/0fd26ec/docs/admin/upgrades/compatability-matrix.html as I'm not sure how valid this support matrix is...
[08:14] <noonedeadpunk> especially in terms of the distro path
[08:15] <noonedeadpunk> probably centos for P and Q should also be marked with a warning (as it was kind of experimental there iirc)
[08:15] <ChipOManiac> jrosser: How'd i get those files added back?
[08:19] *** shyamb has quit IRC
[08:19] <noonedeadpunk> jrosser: it appeared to happen on master as there was a really weird checkout
[08:22] <noonedeadpunk> that should be easy to reproduce
[08:36] *** shyamb has joined #openstack-ansible
[08:38] *** shyam89 has joined #openstack-ansible
[08:40] *** shyam89 has quit IRC
[08:40] *** shyam89 has joined #openstack-ansible
[08:41] *** shyamb has quit IRC
[08:51] *** shyam89 has quit IRC
[09:00] *** ChipOManiac has quit IRC
[09:07] <openstackgerrit> Merged openstack/openstack-ansible-tests master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/781059
[09:14] <openstackgerrit> Xinxin Shen proposed openstack/openstack-ansible-tests master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/789761
[09:20] <noonedeadpunk> jrosser: reproduced http://paste.openstack.org/show/804970/
[09:21] <jrosser> is this an AIO?
[09:21] <noonedeadpunk> yep
[09:21] <jrosser> i expect that part of the bootstrap_host role gathers * facts
[09:21] <noonedeadpunk> they do...
[09:22] <jrosser> and then subsequently in the playbook we only gather a subset
[09:22] <noonedeadpunk> so `!all,min` kind of breaks things outside of ci now
[09:24] <noonedeadpunk> https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/bootstrap-aio.yml#L18
[09:26] <jrosser> could you check which facts subset the mounts are in?
[09:26] <noonedeadpunk> I think hardware, but checking
[09:27] <jrosser> perhaps this all needs a sanity check for the vcpu stuff too
[09:27] <noonedeadpunk> yep, hardware, same as vcpu
[09:28] <noonedeadpunk> but adding hardware without filtering expands the facts file from 6.5k to 36k
[09:32] <jrosser> we can use the filter, like i did on the dynamic_address_fact
[09:32] <jrosser> but that might need lots and lots of patches
[09:32] <noonedeadpunk> well, or just one but huge (in the integrated repo for playbooks)
[09:33] <jrosser> oh right yes, it's there isn't it
[09:34] <jrosser> https://github.com/openstack/openstack-ansible/blob/master/playbooks/common-tasks/dynamic-address-fact.yml#L19
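The filtered-gathering approach jrosser refers to looks roughly like the task below. This is an illustrative sketch, not the OSA patch: the subset string and filter glob are assumptions, and note that the `setup` module's `filter` takes a single glob pattern on older Ansible (list support came later), so covering both cpu and mounts facts may need two tasks or a newer Ansible:

```yaml
# Gather only the hardware subset, keeping just the keys needed,
# so the cached facts file stays small.
- name: Gather required hardware facts
  setup:
    gather_subset: "!all,!min,hardware"
    filter: "ansible_processor*"
```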
[09:35] <noonedeadpunk> we also need to clean up facts after bootstrap-aio I guess
[09:36] <jrosser> hopefully this https://github.com/openstack/openstack-ansible/blob/2aa71dfebcfd618b8ee937cbec428e6f35cb90e4/tests/bootstrap-aio.yml#L88
[09:36] <noonedeadpunk> oh, huh...
[09:36] <noonedeadpunk> why then does it work....
[09:36] <jrosser> though i am wondering if that is clearing facts for localhost
[09:36] <jrosser> and not aio1
[09:37] <jrosser> as the play is targeting localhost
[09:37] <noonedeadpunk> yeah, I think it does... Should we just delegate it to all hosts?
[09:37] <noonedeadpunk> but we don't have inventory during this play, right
[09:38] <jrosser> we don't, but the hostname does get changed during the play
[09:38] <jrosser> that would be interesting to see if it has an effect
[09:39] <noonedeadpunk> we can actually run `ansible -m meta -a clear_facts all` after this playbook in gate-check-commit.sh
[09:40] <jrosser> needs some care, as the inventory will now be present but the hosts are not all there
[09:41] <noonedeadpunk> I think it's not running against hosts at all
[09:41] <jrosser> maybe instead add a second play at the end of bootstrap-aio.yml with just the clear_facts task, and have it target localhost,aio1
[09:41] <noonedeadpunk> meta is run locally
[09:42] <jrosser> oh hmm
[09:42] <noonedeadpunk> it cleans up /etc/openstack_deploy/ansible_facts/
[09:42] <jrosser> `causes the gathered facts for the hosts specified in the play's list of hosts to be cleared, including the fact cache`
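jrosser's suggestion of a second play can be sketched like this (the hosts pattern is the one proposed in the chat; whether `aio1` resolves depends on the hostname change during the bootstrap play):

```yaml
# Appended to tests/bootstrap-aio.yml - clear cached facts for both
# names the bootstrap may have used, so later plays re-gather fresh facts.
- name: Clear facts gathered during AIO bootstrap
  hosts: localhost,aio1
  gather_facts: false
  tasks:
    - name: Clear in-memory facts and the fact cache
      meta: clear_facts
```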
[09:47] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Clean up gathered facts during AIO bootstrap  https://review.opendev.org/c/openstack/openstack-ansible/+/789769
[09:49] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Clean up gathered facts during AIO bootstrap  https://review.opendev.org/c/openstack/openstack-ansible/+/789769
[09:53] *** shyamb has joined #openstack-ansible
[09:53] *** shyamb has quit IRC
[09:57] <noonedeadpunk> tbh, doing filters for every playbook and extra facts gathering and maintaining it is kind of....
[10:13] <noonedeadpunk> not sure it makes much sense compared to just gathering hardware facts additionally. as mounts add up to a filesize of 12k (which is twice just min) and considering we also need some cpu info...
[10:15] <noonedeadpunk> and considering that setup will run regardless of having facts or not, we probably won't gain any performance profit at the end
[10:15] <noonedeadpunk> *of having a facts cache
[10:15] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: DNM Gather additional required facts to min  https://review.opendev.org/c/openstack/openstack-ansible/+/789776
[10:34] *** sshnaidm|afk is now known as sshnaidm
[10:38] *** tosky has quit IRC
[10:38] *** tosky has joined #openstack-ansible
[10:40] <jrosser> hmm so using --tags rabbitmq-config,rabbitmq_server-config rewrites the config file if needed but doesn't restart the service
[10:41] <noonedeadpunk> ummmmm
[10:41] <noonedeadpunk> we don't use handlers here https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/tasks/rabbitmq_post_install.yml#L37 doh
[10:42] <noonedeadpunk> that's stupid....
[10:42] <noonedeadpunk> ah, well, we restart not with handlers https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/tasks/rabbitmq_post_install.yml#L77-L78
[10:45] *** tosky_ has joined #openstack-ansible
[10:47] *** tosky has quit IRC
[10:48] <noonedeadpunk> so it seems way trickier
[10:49] *** tosky_ is now known as tosky
[10:53] <jrosser> oh, isn't it missing tags here https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/tasks/rabbitmq_post_install.yml#L77-L78
[10:55] <noonedeadpunk> yeah, and actually I'm not sure if we should also add them to https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/tasks/rabbitmq_started.yml
[10:56] <noonedeadpunk> as tags don't seem to pass through https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/tasks/rabbitmq_restart.yml#L16 as well
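The "tags don't pass through" gotcha here is that with dynamic `include_tasks`, tags on the include gate only the include itself; the tasks inside the included file are not tagged automatically, which is why the restart never runs under `--tags`. A sketch of one way to fix it (tag names follow the role links above; the exact merged patch may differ):

```yaml
# Tag the restart include AND push the tag down onto the included
# tasks with 'apply', so --tags rabbitmq-config also runs the restart.
- name: Restart rabbitmq server
  include_tasks:
    file: rabbitmq_restart.yml
    apply:
      tags:
        - rabbitmq-config
  tags:
    - rabbitmq-config
```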
[10:58] <openstackgerrit> Andrew Bonney proposed openstack/openstack-ansible-os_zun master: Use ansible_facts[] instead of fact variables  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/780733
[11:07] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Gather hardware facts by default  https://review.opendev.org/c/openstack/openstack-ansible/+/789784
[11:08] *** mgariepy has quit IRC
[11:09] <noonedeadpunk> btw, galera tags don't work as expected as well
[11:12] <jrosser> rabbit is another role with too many small files including each other
[11:22] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Add galera devel packages installation  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/789786
[11:22] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Add galera devel packages installation  https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/789786
[11:26] <openstackgerrit> Jonathan Rosser proposed openstack/ansible-role-pki master: WIP - create certificate authorities  https://review.opendev.org/c/openstack/ansible-role-pki/+/787404
[11:26] <openstackgerrit> Jonathan Rosser proposed openstack/ansible-role-pki master: WIP - Create server certificates  https://review.opendev.org/c/openstack/ansible-role-pki/+/788021
[11:27] <openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-rabbitmq_server master: DNM - Test PKI role  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/788032
[11:27] <openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-rabbitmq_server master: Fix service restart when using tags  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/789788
[11:27] <openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-rabbitmq_server master: Modernise TLS configuration  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/789789
[11:29] *** premkumarar has joined #openstack-ansible
[11:35] *** recyclehero has quit IRC
[11:35] <jrosser> noonedeadpunk: do you remember where we had the trouble with rabbitmq/ssl? would i see it with keystone in just an infra AIO?
[11:35] <premkumarar> hi all, while creating a stack i got this issue.
[11:35] <premkumarar> Create Failed
[11:35] <premkumarar> Resource Create Failed: Resourceinerror: Resources.My Instance: Went To Status Error Due To "Message: No Valid Host Was Found. , Code: 500
[11:36] <noonedeadpunk> jrosser: iirc we saw that only in tempest when an instance was spawned and some service was refusing to connect to rabbit (like nova-conductor)
[11:37] <noonedeadpunk> premkumarar: what's in the nova-scheduler log?
[11:38] <noonedeadpunk> is the nova-compute service up in openstack compute service list?
[11:38] <noonedeadpunk> (and present in openstack hypervisor list)
[11:41] <premkumarar> noonedeadpunk: nova-compute aio1 nova Enabled Up
[11:41] <premkumarar> cinder-scheduler aio1-cinder-api-container-845d8e39 nova Enabled Up
[11:42] <premkumarar> but cinder volume is down
[11:42] <premkumarar> cinder-volume aio1@lvm nova Enabled Down 2 days, 2 hours
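The checks noonedeadpunk walks through, as CLI commands (run from a host with admin credentials sourced; the pasted status lines above are from output like this):

```shell
openstack compute service list    # nova-compute should show State 'up'
openstack hypervisor list         # the compute node should be listed
openstack volume service list     # a 'down' cinder-volume explains volume scheduling failures
```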
[11:42] *** gokhani has quit IRC
[11:56] *** gokhani has joined #openstack-ansible
[12:02] *** jbadiapa has joined #openstack-ansible
[12:13] *** mgariepy has joined #openstack-ansible
[12:26] <premkumarar> Hi, i have a basic question. Installed an openstack aio using ansible on Ussuri. Everything was working fine, but once i restart the system, my services go down. Afterwards the heat service went down completely and cinder volume is also down.
[12:27] <premkumarar> Do we have any steps or a document for what i need to do after the restart?
[12:29] *** rh-jelabarre has joined #openstack-ansible
[12:32] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Install mysql client libraries  https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/777607
[12:33] <noonedeadpunk> premkumarar: nope, I don't think we have one. This happens because we don't do persistent loop mounts for an aio
[12:33] <noonedeadpunk> and we create loop drives for cinder/glance/etc
[12:33] <noonedeadpunk> without the loop drive being present, the lvm group can't activate and thus cinder-volume is down
[12:34] <noonedeadpunk> that's smth that's worth fixing, but never had time for that, as an aio is used mainly for testing and not like prod envs
[12:35] <noonedeadpunk> I have devices set like this in an aio http://paste.openstack.org/show/804973/
[12:36] <noonedeadpunk> and `/dev/loop3` is used as the cinder-volumes volume group on LVM
[12:54] <jrosser> noonedeadpunk: maybe we should convert the creation of the loop devices to systemd units
[12:54] <jrosser> then they should come back at reboot
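A minimal sketch of jrosser's idea: a oneshot unit that re-attaches the loop device at boot so the cinder-volumes VG can activate. The device, backing-file path, and unit name are illustrative for an AIO, not something OSA ships:

```ini
# /etc/systemd/system/cinder-loop.service (hypothetical)
[Unit]
Description=Attach loop device for cinder-volumes LVM
DefaultDependencies=no
Before=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/losetup /dev/loop3 /openstack/cinder.img
ExecStop=/sbin/losetup -d /dev/loop3
RemainAfterExit=yes

[Install]
WantedBy=local-fs.target
```

Enable with `systemctl enable cinder-loop.service`; one unit per loop-backed service (glance, swift, etc.) would be needed.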
*** premkumarar has quit IRC13:10
openstackgerritArx Cruz proposed openstack/openstack-ansible-os_tempest master: Moving tripleo train job to non-voting  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/78983013:18
arxcruznoonedeadpunk: jrosser ^13:19
arxcruzwe are working to fix the train job, meanwile, moving it to nv13:19
*** akahat|ruck has quit IRC13:21
*** dpawlik has quit IRC13:22
*** jbadiapa has quit IRC13:23
*** jbadiapa has joined #openstack-ansible13:23
*** akahat has joined #openstack-ansible13:33
*** macz_ has joined #openstack-ansible14:05
*** macz_ has quit IRC14:09
*** dpawlik5 has joined #openstack-ansible14:27
*** dpawlik5 is now known as dpawlik14:32
[14:42] <MrClayPole> Hi All, we are currently running OSA Rocky and have a single infrastructure node running 3 x lxc Rabbit and Galera instances. We deployed 2 additional servers and deployed Galera and Rabbit on these, so we now have 5 Rabbit and Galera configured. We would now like to scale down the 3 x lxc Rabbit and Galera instances to 1 on the original server.
[14:42] <MrClayPole> We are just wondering about the best/safest way to do this?
[14:43] <noonedeadpunk> That's a really interesting question :)
[14:43] <jrosser> galera is in some ways easier as you have the loadbalancer in the way
[14:44] <noonedeadpunk> yeah, for rabbit things might be tough, as you need to adjust all services' configuration
[14:44] <jrosser> i guess for rabbit you'd need to remove the unwanted ones from the inventory but not delete them
[14:44] <jrosser> redeploy all the services to get the configs updated
[14:44] <noonedeadpunk> well, I'd do exactly the same with galera I guess
[14:44] <jrosser> then drop the old containers
[14:45] <noonedeadpunk> redeploy in terms of re-running roles
[14:45] <jrosser> yeah
[14:45] <noonedeadpunk> for rabbit I'd first change the services' configs and only after that re-configure rabbit
[14:47] <jrosser> then i guess also the rabbit/galera roles need re-running last to reduce the size of the clusters in their config
[14:47] <noonedeadpunk> oh, but for galera I'd probably run the haproxy role first
[14:48] <noonedeadpunk> so that the "master" doesn't end up on a dropped node
[14:48] <jrosser> MrClayPole: lots of handwaving here ^^^ :) - so i guess the take-away is to get the ordering right
[14:51] <noonedeadpunk> jrosser: btw it seems we do gather facts somewhere after bootstrap-aio....
[14:51] <noonedeadpunk> in terms of all facts
[14:51] <noonedeadpunk> as otherwise I'd expect this to fail really badly https://review.opendev.org/c/openstack/openstack-ansible/+/789769
[14:52] *** Premkumarar has joined #openstack-ansible
[14:52] <Premkumarar> noonedeadpunk: is there any workaround to resolve that issue?
[14:52] <openstackgerrit> Oleksandr Yeremko proposed openstack/openstack-ansible-specs master: Protecting plaintext configs  https://review.opendev.org/c/openstack/openstack-ansible-specs/+/788829
[14:53] <MrClayPole> Thanks guys, I had a feeling this wouldn't be straightforward
[14:53] <noonedeadpunk> Premkumarar: nope, not really. we were just discussing that it would be great to get this fixed, but it's not high prio for me personally (and I guess for anybody else)
[14:54] *** mobuntu has joined #openstack-ansible
[14:55] <jrosser> noonedeadpunk: looks like all facts are present for aio1 https://aa62d61626cb9330e709-81a8be848ef91b58aa974b4cb791a408.ssl.cf2.rackcdn.com/789769/2/check/openstack-ansible-deploy-aio_lxc-ubuntu-focal/01b1208/logs/etc/host/openstack_deploy/ansible_facts/index.html
[14:55] <noonedeadpunk> yep
[14:55] <jrosser> hmm so maybe we are hiding broken things for multinode
[14:55] <jrosser> i was concerned about this tbh
[14:56] <noonedeadpunk> and we do this somewhere between bootstrap-aio and hardening
[14:56] <noonedeadpunk> so it should be not that hard to track down
[14:56] <jrosser> when i was messing with this i had a terminal open with "watch ls -l /etc/openstack_deploy/ansible_facts"
[14:56] <jrosser> and just ran the plays through one by one
[14:56] <noonedeadpunk> sounds not so fun
[14:57] <jrosser> it was very obvious when one of the files went huge
[14:57] *** macz_ has joined #openstack-ansible
[14:57] <noonedeadpunk> yeah, will try to find where we collect things
[14:57] <jrosser> wtf is Connection failed: [SSL: WRONG_VERSION_NUMBER] wrong version number
[14:58] <noonedeadpunk> do we use smth like tls1.0 or ssl3?
[14:58] <noonedeadpunk> I thought actually we even set ciphers....
[14:58] <noonedeadpunk> but we probably used tls1.1
[14:59] <noonedeadpunk> (might be worth switching to smth more decent)
[14:59] <mobuntu> Hey guys, noob here. I'm trying to create two targetable host groups for my compute, but I can't seem to get the host groups created properly in my inventory. This is how i have them defined in my openstack_user_config.yml https://pastebin.com/qJdiddyd   anyone have a working example of this?
[14:59] <jrosser> noonedeadpunk: ah, the TLS setup for rabbit was previously pretty undefined so i made this https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/789789
[15:00] <jrosser> so i am trying to use tls1.2
[15:00] <noonedeadpunk> mobuntu: you also need to create env.d files. spatel has them and already published that here one day...
[15:01] <noonedeadpunk> jrosser: ah! I believed tls1.1 should still be valid nowadays? Or is it already not?
[15:01] <noonedeadpunk> https://support.umbrella.com/hc/en-us/articles/360033350851-End-of-Life-for-TLS-1-0-1-1- hm
[15:01] <noonedeadpunk> seems it's eol as well
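For reference, pinning RabbitMQ to TLS 1.2 in the sysctl-style rabbitmq.conf looks roughly like this. The certificate paths are placeholders and the actual values OSA templates come from the review linked above, so treat this as illustrative only:

```ini
listeners.ssl.default  = 5671
ssl_options.versions.1 = tlsv1.2
ssl_options.cacertfile = /etc/rabbitmq/ssl/ca.pem
ssl_options.certfile   = /etc/rabbitmq/ssl/server_cert.pem
ssl_options.keyfile    = /etc/rabbitmq/ssl/server_key.pem
```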
[15:14] *** mgariepy has quit IRC
[15:15] *** SiavashSardari has quit IRC
[15:24] <jrosser> mobuntu: from your paste you've got an indentation problem with the yaml
[15:25] <jrosser> mobuntu: you should have something like this http://paste.openstack.org/show/804975/
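The env.d approach noonedeadpunk mentioned has roughly the shape below. Every name here (the file name, `gpu_compute`, the host and IP) is hypothetical; the paste jrosser linked shows a working layout:

```yaml
# /etc/openstack_deploy/env.d/gpu_compute.yml (hypothetical)
component_skel:
  gpu_compute:
    belongs_to:
      - compute_all
container_skel:
  gpu_compute_container:
    belongs_to:
      - gpu-compute_containers
    contains:
      - gpu_compute
    properties:
      is_metal: true
physical_skel:
  gpu-compute_containers:
    belongs_to:
      - all_containers
  gpu-compute_hosts:
    belongs_to:
      - hosts
```

Then a matching `gpu-compute_hosts:` section in openstack_user_config.yml lists the actual hosts for that group.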
[15:46] *** mgariepy has joined #openstack-ansible
[15:52] *** rpittau is now known as rpittau|afk
[16:02] * jrosser has a TLS connection to rabbitmq \o/
[16:02] <jrosser> helps if you actually connect to port 5671 instead of 5672..... doh
[16:06] <noonedeadpunk> haha
[16:06] <noonedeadpunk> yeah, that's true :)
[16:17] <noonedeadpunk> jrosser: should we also filter here? https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/openstack-hosts-setup.yml#L49
[16:20] <noonedeadpunk> hm.... interesting...
[16:28] <noonedeadpunk> feels like we don't need virtual there o_O
[16:31] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Don't collect virtual facts  https://review.opendev.org/c/openstack/openstack-ansible/+/789926
[16:31] <noonedeadpunk> sooooo. what have we decided in terms of hardware facts - gather them all or do a hook for each playbook?
[16:33] *** gyee has joined #openstack-ansible
[16:34] <noonedeadpunk> I think I'd vote for gathering all hardware facts - there aren't too many of them I think... well, it's actually 6 times more... but dunno about having all these filters. probably we can add it to the common-tasks and just do a simple include
[16:38] <jrosser> the filter could also be a var defined in group_vars
[16:38] <jrosser> that would make maintenance easier
[16:38] <noonedeadpunk> well, yeah, agree
[16:43] <noonedeadpunk> I just don't really like that setup will run each time, regardless of whether we already have things cached or not
[16:44] *** oleksandry has joined #openstack-ansible
[16:46] <fridtjof[m]> hmmm, trying to run an OSA deployment right now (osa 22.1.2), and specifically the glance and cinder containers are failing to start
[16:47] <fridtjof[m]> starting them manually in foreground mode gives me this error over and over:
[16:47] <fridtjof[m]> lxc-start: infra1_glance_container-3965f658: cgroups/cgfsng.c: cgfsng_monitor_create: 1264 Failed to create cgroup "(null)"
[16:47] <fridtjof[m]> all other containers are running fine
[16:48] <fridtjof[m]> weird thing is that this happens on two separate infrastructure hosts (albeit VMs on the same physical machine, but...) for exactly the glance and cinder_api containers
[16:49] <fridtjof[m]> hosts are all running ubuntu 20.04
[16:50] *** oleksandry has quit IRC
[16:56] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: DNM Gather additional required facts to min  https://review.opendev.org/c/openstack/openstack-ansible/+/789776
[17:07] <fridtjof[m]> update: rebooted both hosts, now it works
[17:07] <fridtjof[m]> strange
[17:15] *** Premkumarar has quit IRC
[17:25] <fridtjof[m]> ugh, it's happening again
[17:31] *** andrewbonney has quit IRC
[17:34] <fridtjof[m]> oh, both cinder-api and glance want connectivity to br-storage?
[17:34] <fridtjof[m]> i kind of get that for glance, as I set it up to use cinder as a backing store (which does explain the new requirement), but why does cinder-api have to get directly onto the storage network?
[17:38] <jrosser> fridtjof[m]: if it's the same place that cinder-volume would go when that component is not on metal, that may explain it
[17:40] <fridtjof[m]> I based my configuration off of this: https://docs.openstack.org/openstack-ansible/victoria/user/prod/example.html
[17:40] <fridtjof[m]> it worked before on stein, but I think I did connect the infra hosts to the storage network too back then
[17:40] <fridtjof[m]> (i didn't this time)
[17:41] <jrosser> cinder-volume runs on the infra hosts, so that will almost certainly need to be on the storage network
[17:42] <jrosser> unless you change the config to make it containerised
[17:42] <fridtjof[m]> I do see a discrepancy on that page - the "IP Assignments" table implies you don't have to connect the infra host(s) to br-storage, but the following openstack_user_config then binds cinder-api and glance-api to that network
[17:43] <fridtjof[m]> cinder-volume in my case is only running on a separate storage host
[17:43] <jrosser> oh right, in that example it tells you to make the adjustments in "Environment customizations" to put the cinder-volume service in containers
[17:43] <fridtjof[m]> infra hosts are only running cinder-api, which (from what i've read) is supposed to just control cinder-volume through br-mgmt, right?
[17:44] <jrosser> the cinder-volume service (which isn't necessarily where the storage is) is also on the infra hosts
[17:45] <jrosser> in the example those services are using the NFS server at 172.29.244.15
[17:45] *** oleksandry has joined #openstack-ansible
[17:45] <fridtjof[m]> mhm
[17:46] <fridtjof[m]> I get why cinder-volume needs to be on br-storage, does cinder-api need to be on there though?
[17:46] <fridtjof[m]> (same question for glance-api)
[17:47] <jrosser> i think that glance would need to talk to a ceph backend for example to upload images to the storage
[17:47] <jrosser> the same would go for NFS i expect
[17:47] <fridtjof[m]> ah, yep
[17:47] <jrosser> and really i think that the storage network connected to the cinder-api container is a generalisation, it's not strictly necessary
[17:48] <fridtjof[m]> previously I only ever ran the file backend (which i realized wasn't the best idea after i reinstalled an infra host lol) for glance
[17:48] <jrosser> unless, like in the example, the cinder-volume service is also in that container
[17:48] <jrosser> so rather than overcomplicate the "if this else this...." documentation it's easier to just connect it to the bridge
[17:49] <fridtjof[m]> in any case, seems like i'll have to connect br-storage for glance-api to work with cinder
[17:49] <jrosser> glance will use its own space on the NFS i think
[17:50] <fridtjof[m]> (not using NFS, just cinder-volume with a local LVM store)
[17:50] <jrosser> oh well that's the case where you have to run cinder-volume on the infra host itself
[17:50] <jrosser> because the iscsi stuff doesn't work containerised iirc
[17:51] <jrosser> assuming we're talking about the same thing :)
[17:52] <fridtjof[m]> i think i'm not running into _that_ problem :)
[17:52] <fridtjof[m]> (i have a single "storage" physical server with a bunch of drives for this deployment, those are one LVM VG, and cinder-volume runs on metal there)
[17:53] <fridtjof[m]> but for glance-api to be able to use cinder as a backend, it'll need access to the storage network (which I still have) to be able to store images there, right?
[17:55] <fridtjof[m]> or would iscsi not work with glance-api containerised?
[17:57] <jrosser> i think it's the server part of cinder-volume that needs to be on the host, as you have it now
[17:58] <jrosser> and yes i think you'll need the storage interface in the glance container in order to access it directly
[18:01] <fridtjof[m]> alright
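For context, pointing glance at a cinder backing store is a glance-api.conf setting along these lines (classic single-backend form; treat the option set as illustrative - in OSA this would go through config overrides rather than hand-editing):

```ini
[glance_store]
stores = cinder
default_store = cinder
cinder_catalog_info = volumev3::internalURL
```

With this, glance image data lands in cinder volumes, which is why the glance-api container then needs the storage network reachable, as discussed above.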
[18:09] *** oleksandry has quit IRC
[18:13] *** gokhani has quit IRC
[18:14] <admin0> how do I override network interfaces for flat networking when diff hypervisors have diff network interface names
[18:19] *** MrClayPole_ has joined #openstack-ansible
[18:23] *** MrClayPole has quit IRC
[18:26] <jrosser> admin0: one simple way round that is to use a veth and a new interface, that's why you see eth12 all over the example configs, to give a well-defined interface name to neutron
[18:27] <jrosser> alternatively, depending on how many different combinations you have, this could be useful too https://docs.openstack.org/openstack-ansible/latest/user/prod/provnet_groups.html
[18:28] <admin0> jrosser, exactly what i needed
[18:28] <mgariepy> what are you using for the network configuration ?
[18:28] <admin0> many thanks
[18:28] <admin0> simple bridges
[18:28] <admin0> lb
[18:29] <admin0> not ovs
[18:29] <admin0> mgariepy, or was the question diff and i did not understand
[18:29] <mgariepy> i tend to just rename the interfaces to something like: 25G-1, 25G-2 etc.
[18:29] <admin0> netplan
[18:29] <mgariepy> netplan ? systemd-networkd ?
[18:29] <admin0> focal 20
[18:29] <admin0> yep
[18:30] <admin0> hmm.. so just rename from udev :)
[18:30] <admin0> 3 methods to choose from \o/
[18:31] <mgariepy> http://paste.openstack.org/show/804980/
[18:32] *** recyclehero has joined #openstack-ansible
[18:32] <mgariepy> netplan can rename them, but sometimes you can have issues with the file generation (because netplan only does file generation for other backends); the match might not work in some corner cases, which might be fixed now.
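A minimal sketch of the netplan rename approach mgariepy describes (his paste above shows his actual config; the MAC address here is a placeholder):

```yaml
# /etc/netplan/10-rename.yaml - give the NIC a stable name per host
network:
  version: 2
  ethernets:
    25G-1:
      match:
        macaddress: "aa:bb:cc:dd:ee:01"
      set-name: 25G-1
```

Apply with `netplan apply` (a reboot may be needed for the rename to take effect on an already-up interface); bridges can then reference `25G-1` identically on every hypervisor.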
[18:44] *** Jeffrey4l has quit IRC
[18:47] *** Jeffrey4l has joined #openstack-ansible
[18:52] *** Adri2000 has joined #openstack-ansible
[19:01] *** oleksandry has joined #openstack-ansible
[19:13] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_glance master: [goal] Deprecate the JSON formatted policy file  https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/780749
[19:15] <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_aodh master: [goal] Deprecate the JSON formatted policy file  https://review.opendev.org/c/openstack/openstack-ansible-os_aodh/+/780844
[19:26] *** oleksandry has quit IRC
[19:32] <admin0> thanks mgariepy
[19:34] <openstackgerrit> Merged openstack/openstack-ansible-os_glance master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/789714
[19:39] *** oleksandry has joined #openstack-ansible
[19:40] <mgariepy> admin0, you're welcome. if you have issues, do not hesitate to ping me
[20:26] *** oleksandry has quit IRC
[21:41] *** mobuntu has quit IRC
[22:41] *** gyee has quit IRC
[23:03] *** tosky has quit IRC
[23:06] *** macz_ has quit IRC

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!