Thursday, 2023-03-16

<opendevreview> Takashi Kajinami proposed openstack/ansible-config_template master: Remove TripleO jobs  https://review.opendev.org/c/openstack/ansible-config_template/+/877486  02:09
<opendevreview> Takashi Kajinami proposed openstack/ansible-config_template master: Replace deprecated whitelist_externals  https://review.opendev.org/c/openstack/ansible-config_template/+/877570  02:09
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible master: Use a map file to select haproxy horizon backend from the base frontend  https://review.opendev.org/c/openstack/openstack-ansible/+/876851  09:03
<Losraio> Hello everyone  09:05
<Losraio> I have the following error when running setup-infrastructure.yml  09:05
<Losraio> https://paste.openstack.org/show/bR08qz7GKQAx5QJuXXRF/  09:05
<Losraio> Any help would be much appreciated  09:06
<Losraio> FYI, yes, the 10.1.0.12 IP is reachable and SSH-able from the deployment host  09:06
<jrosser> Losraio: is 10.1.0.12 your internal VIP?  09:18
<Losraio> it's the management network IP for the controller node  09:19
<Losraio> and the internal VIP  09:19
<Losraio> Tbh I think it has something to do with the system resources not being enough to run all these containers  09:19
<jrosser> so you should have haproxy bound to that IP  09:19
<Losraio> Oh, I think I omitted that in the user_config.yaml  09:20
<dokeeffe85> Sorry all, I was dragged away yesterday. Thanks for all the answers :)  09:20
<Losraio> But won't it be a problem if both haproxy_hosts and internal_lb_vip have the same IP address?  09:20
<jrosser> Losraio: you mean you have no `haproxy_hosts` defined?  09:21
<Losraio> Nope, I don't  09:22
<Losraio> Since it mentions that it's optional  09:22
<Losraio> And I'm only testing right now  09:22
<jrosser> haproxy can't really be optional  09:23
<Losraio> But I have to define it, then?  09:23
<jrosser> do you remember where it says it is optional?  09:23
<jrosser> well I mean it kind of can be optional, in that people have sometimes used a hardware F5 load balancer instead of haproxy, so in that sense it can be optional  09:24
<jrosser> but there must be a load balancer of some kind  09:24
<Losraio> Yes, it says so in openstack_user_config.yml.example  09:25
<Losraio> So... I guess I must declare it in the user_config then. But will there be a problem if it has the same IP as the internal_lb_vip?  09:26
<jrosser> hmm, well it says `Recommend at least one target host for this service if hardware load balancers are not being used.`  09:26
<jrosser> so I think it is that a load balancer is expected, it's just that it's your choice to use haproxy or something else  09:26
<Losraio> Right  09:27
<jrosser> and I would advise that you put an IP on br-mgmt of each controller  09:27
<Losraio> I do so already  09:28
<jrosser> and also a completely separate IP for the load balancer VIP, just so it's completely obvious what's going on  09:28
<Losraio> OK, thanks  09:28
<jrosser> if you have H/A controllers then that's a requirement anyway  09:29
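    [editor's note: a minimal openstack_user_config.yml sketch of the layout jrosser describes above - haproxy_hosts pointing at the controller's br-mgmt address and a dedicated internal_lb_vip_address for the VIP. The 10.1.0.10 VIP and the host name are illustrative placeholders; only 10.1.0.12 comes from this conversation.]

        global_overrides:
          # the VIP gets its own address, distinct from any controller's br-mgmt IP
          internal_lb_vip_address: 10.1.0.10
          management_bridge: "br-mgmt"

        haproxy_hosts:
          controller1:
            ip: 10.1.0.12   # controller's br-mgmt address

        shared-infra_hosts:
          controller1:
            ip: 10.1.0.12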
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_nova master: Stop installing qemu-system on debian variants  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/877604  09:58
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_ironic master: Remove deprecated support for cisco ucs and cims ironic drivers.  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/877606  10:16
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible master: Use a map file to select haproxy horizon backend from the base frontend  https://review.opendev.org/c/openstack/openstack-ansible/+/876851  10:26
<jrosser> damiandabrowski: sorry, my comment about enabling TLS backends was on the wrong patch  10:39
<jrosser> I think I meant that for the 'big patch' instead  10:40
<damiandabrowski> np, I haven't worked on it yet (just rebased it on top of your changes)  10:40
<damiandabrowski> but I get your point  10:40
<damiandabrowski> and you were probably right about temporary_service_definitions, we probably don't need them anymore  10:41
<jrosser> hopefully moving the haproxy vars to group_vars should be a pretty simple move now  10:41
<jrosser> os_ironic  10:43
<jrosser> arg  10:43
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Prepare haproxy role for separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/875779  10:51
<noonedeadpunk> damiandabrowski: can you kindly review these 2 things? https://review.opendev.org/q/topic:bump_osa+status:open  10:59
<damiandabrowski> done, I also have a few simple patches that would appreciate reviews :D  11:03
<damiandabrowski> https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/877429  11:03
<damiandabrowski> https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/871188  11:03
<damiandabrowski> https://review.opendev.org/c/openstack/openstack-ansible/+/876851  11:04
<noonedeadpunk> damiandabrowski: have you played with https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/876749 ?  11:11
<damiandabrowski> did a quick test on my AIO and it worked fine  11:11
<noonedeadpunk> ++  11:12
<opendevreview> Merged openstack/openstack-ansible master: Split haproxy horizon config into 'base' frontend and 'horizon' backend  https://review.opendev.org/c/openstack/openstack-ansible/+/876160  11:17
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_ironic master: Install socat and configure ipmtool-socat console interface  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/877618  11:18
<jrosser> damiandabrowski: are https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/871188 and https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/876749 a merge conflict?  11:24
<jrosser> the maps patch is written on top of the original code I think, with the item.service key still present  11:24
<damiandabrowski> ahh, IMO it is, wonder why gerrit does not show the "merge conflicts" section  11:26
<damiandabrowski> what do you suggest? put them in the same relation chain?  11:27
<damiandabrowski> or maybe I can just rebase 871188 after the map patch is merged  11:29
<jrosser> given that 876749 is in the gate already I think 876749 needs rebasing on top of it now, and the new map code adjusting also to remove the extra .service  11:29
<jrosser> arg  11:29
<jrosser> given that 876749 is in the gate already I think 871188 needs rebasing on top of it now, and the new map code adjusting also to remove the extra .service  11:30
<jrosser> ^ better :)  11:30
<damiandabrowski> ack  11:30
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible master: Enable TLS frontend for repo_server by default  https://review.opendev.org/c/openstack/openstack-ansible/+/876426  11:38
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_ironic master: Rename idrac interfaces to idrac-wsman  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/877627  12:14
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_ironic master: Enable raid interface implementations for ironic hardware drivers  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/877628  12:14
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_ironic master: Add a no_driver ironic driver type  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/877629  12:14
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Add support for haproxy map files  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/876749  12:27
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Simplify haproxy_service_configs structure  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/871188  12:27
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Prepare haproxy role for separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/875779  12:27
<jrosser> argh, 876749 would have been close to merging /o\  12:30
<damiandabrowski> yeah, I still don't understand what happened :| I just wanted to rebase 871188 on top of 876749  12:33
<damiandabrowski> but git review did something with 876749's commit message  12:34
<jrosser> it's like there's some missing info in git review  12:35
<jrosser> it's "push all these - are you sure" and some of the time only some of them change  12:35
<jrosser> and other times a bunch can change in a surprising way  12:35
<noonedeadpunk> oh, yes, that indeed does happen sometimes  12:39
<noonedeadpunk> I have no idea why though  12:39
<opendevreview> Merged openstack/openstack-ansible-repo_server master: Turn off absolute_redirect for nginx  https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/877429  13:05
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Simplify haproxy_service_configs structure  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/871188  13:19
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Prepare haproxy role for separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/875779  13:20
<opendevreview> Merged openstack/openstack-ansible stable/xena: Bump OpenStack-Ansible Xena  https://review.opendev.org/c/openstack/openstack-ansible/+/877488  13:31
<dokeeffe85> Hi all, quick one, can I change the keystone public endpoint from an IP address to a DNS name without breaking everything?  13:49
<dokeeffe85> It's ok, sorted :)  13:53
<opendevreview> Merged openstack/openstack-ansible stable/yoga: Bump OpenStack-Ansible Yoga  https://review.opendev.org/c/openstack/openstack-ansible/+/876982  13:55
<opendevreview> Merged openstack/openstack-ansible master: Deploy step-ca when 'stepca' is part of the deployment scenario.  https://review.opendev.org/c/openstack/openstack-ansible/+/876637  13:55
<jrosser> dokeeffe85: you'll have to run (parts of) all the service playbooks to get the service catalog entries updated (or manually adjust them)  13:57
<jrosser> ^ if you want to make the public endpoint for all the services be the FQDN  13:58
<dokeeffe85> Thanks jrosser  14:00
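    [editor's note: in config terms, the change jrosser describes usually amounts to something like the sketch below - a hedged example assuming the standard external_lb_vip_address setting; 'cloud.example.com' is a placeholder, not a value from this log.]

        # openstack_user_config.yml
        global_overrides:
          external_lb_vip_address: cloud.example.com   # FQDN instead of the public IP
        # then re-run setup-openstack.yml (or the individual os-*-install.yml playbooks)
        # so each service re-registers its keystone catalog entries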
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_ironic master: Add a no_driver ironic driver type  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/877629  14:01
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-openstack_hosts master: Add `acl` package to all hosts and containers  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/877665  14:46
<opendevreview> Jonathan Rosser proposed openstack/ansible-config_template master: Remove TripleO jobs  https://review.opendev.org/c/openstack/ansible-config_template/+/877486  14:46
<Losraio> Hey all  15:00
<opendevreview> Jonathan Rosser proposed openstack/ansible-hardening master: Disable UsePriviledgeSeparation directive for sshd  https://review.opendev.org/c/openstack/ansible-hardening/+/877666  15:00
<Losraio> Could someone please enlighten me on this error?  15:00
<Losraio> https://pasteboard.co/mqEfDxzXn3aO.png  15:00
<Losraio> Sorry for the screenshot instead of the paste  15:00
<Losraio> Should I perhaps try to get rid of the iscsi_ip declaration in the user_config.yml?  15:08
<noonedeadpunk> o/  15:15
<noonedeadpunk> Losraio: do you have anything in the cinder-api logs?  15:17
<Losraio> Hmm, let me check  15:18
<Losraio> Where should these logs be?  15:18
<noonedeadpunk> Eventually, what this task is trying to do is just execute the command `/openstack/venvs/utility-26.0.1/bin/openstack volume type create --property volume_backend_name=LVM_SCSI lvm` from the utility container  15:18
<noonedeadpunk> But the API does not respond in time and it looks like it gets stuck processing the call  15:19
<noonedeadpunk> you can check inside the cinder-api containers using journalctl -u cinder-api  15:19
<Losraio> I'm wondering whether it has something to do with my user config  15:19
<noonedeadpunk> It totally can  15:19
<Losraio> Because the IP address I have assigned to iscsi_ip is the management network address for the storage node  15:20
<Losraio> Should this cause a problem?  15:20
<noonedeadpunk> Well, cinder should be able to reach it. But it might also depend on which IP the storage node is listening on for connections  15:21
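    [editor's note: for reference, a hedged sketch of the storage_hosts stanza along the lines of openstack_user_config.yml.example, where iscsi_ip_address normally points at the storage network address of the volume node rather than its br-mgmt address. The addresses, host name and backend name below are illustrative only.]

        storage_hosts:
          storage1:
            ip: 10.1.0.30                        # management (br-mgmt) address of the storage node
            container_vars:
              cinder_backends:
                limit_container_types: cinder_volume
                lvm:
                  volume_group: cinder-volumes
                  volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
                  volume_backend_name: LVM_iSCSI
                  iscsi_ip_address: "10.1.1.30"  # storage network address, not the br-mgmt one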
<Losraio> What makes me worry is the error about 2 positional arguments though  15:21
<Losraio> Let me try to run the playbook again  15:22
<noonedeadpunk> You can try running the command I provided  15:22
<Losraio> I sure will  15:22
<noonedeadpunk> It's exactly the same  15:22
<Losraio> How do I gain access to the container from the infra node?  15:23
<Losraio> I haven't worked with lxc before so I'm completely clueless  15:23
<noonedeadpunk> lxc-attach -n <container_name>  15:23
<Losraio> Oh  15:23
<noonedeadpunk> lxc-ls --active to list them all  15:23
<noonedeadpunk> *all running  15:23
<spatel> jamesdenton: looks like my company can arrange some budget for the summit..  15:25
<spatel> looking for hotel options etc..  15:25
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_ironic master: Enable raid interface implementations for ironic hardware drivers  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/877628  15:26
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-os_ironic master: Add a no_driver ironic driver type  https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/877629  15:26
<noonedeadpunk> spatel: sweet. I hope I will get my visa in time :D  15:29
<spatel> I hope!!! you will  15:29
<spatel> Did you guys book a hotel?  15:29
<spatel> admin1: what about you?  15:30
<noonedeadpunk> Um, I guess so, but I have no idea which one  15:30
<spatel> Registration is $799 for the summit..  15:30
<noonedeadpunk> spatel: don't you have an ATC code???  15:30
<spatel> I need to reach out to someone for a discount promo  15:30
<spatel> No  15:30
<noonedeadpunk> I bet you've contributed?  15:30
<spatel> How do I find that?  15:31
<noonedeadpunk> https://www.stackalytics.io/?release=zed&user_id=satish-txt&metric=commits  15:31
<noonedeadpunk> well, depending on how they were counting  15:31
<noonedeadpunk> Don't you have an email with the subject `OpenInfra Summit Vancouver 2023 Registration Promo Code`?  15:32
<spatel> No, I don't have that email.. let me search again  15:33
<Losraio> noonedeadpunk: so, I did not get to run your suggested command just yet, but I had started the playbook execution from before and the error did not pop up...  15:37
<Losraio> Let's see how that goes  15:37
<noonedeadpunk> oh, huh, ok  15:39
<noonedeadpunk> maybe it needed more time than the configured timeout...  15:40
<Losraio> That's what I'm thinking, because the infra node is incredibly stressed right now  15:40
<Losraio> As in RAM and CPU usage  15:40
<opendevreview> Merged openstack/openstack-ansible master: Add a /etc/hosts entry for the external IP of an AIO  https://review.opendev.org/c/openstack/openstack-ansible/+/876638  16:16
<Losraio> noonedeadpunk: Yeah, I can't even SSH into the infra node anymore, so I guess I should allocate much more resources to the VM before trying out anything else  16:17
<noonedeadpunk> Well, having 12-16 GB and 4 CPU cores should be enough for the VM  16:18
<noonedeadpunk> In case of an AIO  16:18
<Losraio> I'm following the test example from the documentation  16:20
<Losraio> And the sole infra node has got 8 GB of RAM and 8 vCPUs  16:20
<Losraio> But, for some reason, just from running the playbooks the RAM usage sits at 95% or higher  16:20
<Losraio> And the CPU won't go lower than 70%  16:20
<noonedeadpunk> well, 8 GB is what we're using in CI and it's not really enough IMO.  16:21
<Losraio> Yeah, I can totally understand why :D  16:21
<noonedeadpunk> 8 CPUs should be more than okay to be frank. I wonder what is consuming CPU, as that sounds a bit off  16:22
<noonedeadpunk> Or well  16:22
<noonedeadpunk> In the AIO we do limit the number of threads for services  16:22
<Losraio> I'm suspecting that the Proxmox cluster on which my VMs are running is simply inadequate  16:22
<noonedeadpunk> You can check all these `_wsgi_threads` / `_wsgi_processes` variables we override here: https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/templates/user_variables.aio.yml.j2  16:23
<noonedeadpunk> It will save you a lot of resources if you're not doing an AIO but trying to reproduce the deployment manually  16:23
<Losraio> hmm  16:24
<Losraio> I see  16:26
<Losraio> You mostly use a single thread and a single process  16:27
<noonedeadpunk> Yeah, kinda. But again, for a POC on a small VM you don't expect concurrency anyway  16:28
<Losraio> True  16:28
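    [editor's note: a hedged sketch of the kind of user_variables.yml overrides being discussed here, for trimming per-service workers on a small proof-of-concept VM. Variable names differ per service, so treat these as examples and check the linked user_variables.aio.yml.j2 template for the authoritative list.]

        # user_variables.yml - keep API services on a single worker/thread
        keystone_wsgi_processes: 1
        keystone_wsgi_threads: 1
        nova_wsgi_processes: 1
        nova_wsgi_threads: 1
        cinder_osapi_volume_workers: 1
        glance_api_threads: 1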
<noonedeadpunk> jrosser: huh, have you seen this uploaded patch? https://bugs.launchpad.net/openstack-ansible/+bug/2009834  16:30
<noonedeadpunk> I'm not sure it's correct though  16:31
<jrosser> I don't think it is correct  16:33
<jrosser> the thing is, the whole logic with those vars is to decide whether to restart control plane services at the point that an upgrade is "complete"  16:33
<noonedeadpunk> yeah, it kind of breaks the point  16:33
<jrosser> and that is all totally not relevant when adding a compute node  16:33
<jrosser> I did try to split out the code that deploys the control plane from the code that deploys a compute node  16:34
<jrosser> *but* the thing is you need to call the whole setup-openstack.yml even when adding a compute node, as there is no idea what services need installing; you can't just isolate nova compute  16:34
<noonedeadpunk> I wonder if we should just gather facts for nova_all somewhere there...  16:34
<jrosser> even so it's not necessarily right - idk what any of this does when there is a node down, for example  16:35
<jrosser> so you can't gather facts for that, then it iterates over nova_all and -> fail  16:35
<noonedeadpunk> well, to be fair, if you're upgrading when a compute is down, or upgrading through a couple of releases - nova will fail on its own  16:36
<noonedeadpunk> but yes, adding such a limitation for adding a compute is not good  16:36
<jrosser> right - though somehow adding a compute node should be a relatively lightweight thing  16:36
<noonedeadpunk> we can add --skip-tags for example... But it's nasty  16:37
<jrosser> maybe one of the original ideas to add a `-e nova_add_compute_node=True` is perhaps not so bad after all  16:37
<jrosser> so that the whole business of restarting the control plane stuff is just skipped  16:37
<noonedeadpunk> even if we imagine dropping all of this and re-factoring the logic - it will be totally not backportable  16:42
<opendevreview> Merged openstack/openstack-ansible-haproxy_server master: Add support for haproxy map files  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/876749  16:58
<opendevreview> Merged openstack/openstack-ansible master: Use certbot to generate SSL cert for the external VIP in 'stepca' scenario  https://review.opendev.org/c/openstack/openstack-ansible/+/876639  17:00
<jrosser> noonedeadpunk: do I remember you saying you had people trying ceph-immutable-object-cache?  17:07
<noonedeadpunk> yeah, folks played with that  17:09
<noonedeadpunk> But I have no idea about the details. But I can likely ask/connect you  17:09
<jrosser> I was just trying to make it work - and it runs, but it's pretty unclear what to do next when libvirt+librbd does not appear to use it  17:09
<jrosser> (this is the image read cache btw, not the write cache)  17:10
<noonedeadpunk> ah, no, we were playing with the write cache  17:10
<noonedeadpunk> As they were trying to reduce latency for writes  17:10
<noonedeadpunk> But I'd expect the same logic to apply... But I think they were leveraging the ceph.conf that is used by nova for the rbd connection  17:12
<jrosser> which should be /etc/ceph/ceph.conf - or is there another?  17:12
<noonedeadpunk> Yeah, this one, unless you've defined another path for ceph.conf  17:12
<noonedeadpunk> So caching worked transparently for libvirt  17:13
<jrosser> ^ write?  17:14
<noonedeadpunk> yeah, it was write I believe  17:18
<noonedeadpunk> as all the fuss was about commit/apply latencies  17:23
<jrosser> weird thing is there is almost nothing on the internet about using ceph-immutable-object-cache  17:29
<jrosser> apart from the ceph docs  17:30
<admin1> spatel, not in this one  20:01
<admin1> maybe in the next one  20:01
<spatel> ohh okie  20:02
<spatel> Next one will be in the US I think  20:02
<admin1> I have finally been able to use magnum to launch a k8s cluster and use the octavia lb ingress .. next on the list is to use cinder for volumes  20:04
<opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-openstack_hosts master: Add `acl` package to all hosts and containers  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/877665  20:44
<jrosser> does anyone have an example of using ceph-immutable-object-cache with libvirt/rbd - the docs are fine for setting it up but a reproducible example of how it's expected to work would be great to see  20:54
<jrosser> arg -ECHAN  20:54
