Friday, 2023-02-03

<opendevreview> Jiri Podivin proposed openstack/openstack-ansible-os_tempest master: Switching linters to pre-commit system  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/858521  07:28
<moha7> jrosser: "cinder does not need to be on the compute hosts for ceph", but the AIO_Ceph does it too, by setting `storage_hosts:` in /etc/openstack_deploy/conf.d/cinder.yml, doesn't it?  10:07
<jrosser> moha7: because in the AIO there is only one host? so where else would it go :)  10:08
<jrosser> you don't need haproxy on all the compute hosts either, but that's also just how it has to be in the AIO  10:09
<moha7> So, you mean `storage_hosts: *controller_hosts` in my case?  10:10
<moha7> Actually, should I set `storage_hosts` at all for a multinode env with a standalone Ceph cluster as Cinder storage?  10:12
<jrosser> well you still need the cinder api and volume services to be running on the controllers  10:12
<moha7> not handled by `storage-infra_hosts:`?  10:14
<jrosser> sorry, I'm not in front of the code at all today  10:14
<moha7> Ah, ok  10:14
<jrosser> either way cinder organises some remote storage to be used by the compute node, be it iscsi or rbd or whatever else  10:15
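
For context: in a multinode OSA layout, `storage_hosts` generally points at the hosts that should run cinder-volume (commonly the controllers when an external Ceph cluster holds the data), and the Ceph backend is declared there via `container_vars`. A minimal sketch, assuming a hypothetical controller name/IP and the usual deployer-provided `cinder_ceph_client*` variables:

    # /etc/openstack_deploy/conf.d/cinder.yml (sketch; host name and IP are hypothetical)
    storage_hosts:
      infra01:
        ip: 172.29.236.11
        container_vars:
          cinder_backends:
            limit_container_types: cinder_volume
            rbd:
              volume_driver: cinder.volume.drivers.rbd.RBDDriver
              rbd_pool: volumes
              rbd_ceph_conf: /etc/ceph/ceph.conf
              volume_backend_name: rbd
              rbd_user: "{{ cinder_ceph_client }}"
              rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
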
<jrosser> part of the confusion might be that in the non-ceph AIO, cinder-volume is set up to export iscsi from a loopback lvm device, just for the purposes of a test with the simplest backend. that has to be on the host rather than in a container because the iscsi target stuff doesn't work in an lxc  10:19
<jrosser> for lvm-backed storage cinder-volume actually provides the iscsi, but for ceph it just mediates the storage backend  10:23
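
To illustrate the contrast: the non-ceph AIO's backend definition looks roughly like the sketch below (approximate, based on the standard AIO test config), with cinder-volume itself exporting iscsi from the loopback-backed volume group:

    # sketch of the lvm backend used by the non-ceph AIO
    cinder_backends:
      lvm:
        volume_backend_name: LVM_iSCSI
        volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group: cinder-volumes        # loopback-backed VG created at bootstrap
        iscsi_ip_address: "{{ cinder_storage_address }}"
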
<moha7> in /etc/openstack_deploy/env.d/cinder-volume.yml, does `is_metal: false` mean having a container for cinder-volume?  10:24
<moha7> my understanding: "Is it metal? No", so there should be a cinder-volume container, but it doesn't show up in `lxc-ls --active`!  10:26
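
(For reference: yes, `is_metal: false` tells the dynamic inventory to put the cinder_volume service in an LXC container rather than on the host. The ceph AIO override is roughly the sketch below; exact contents may differ by release:)

    # /etc/openstack_deploy/env.d/cinder-volume.yml (sketch)
    container_skel:
      cinder_volumes_container:
        properties:
          is_metal: false
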
*** dviroel|rout is now known as dviroel  11:19
<moha7> ^  14:15
*** lowercas_ is now known as Arlion  14:32
*** Arlion is now known as lowercase  14:32
<spatel> Folks, a ceph question: I have a PERC H710 controller; should I be using write-back or write-through cache?  14:34
<moha7> Update: The cinder-volume container is there! My bad.  14:48
*** dviroel is now known as dviroel|lunch  15:14
<spatel> Any idea about this error? https://paste.opendev.org/show/bmAj7IrkhbRpMbe14zht/  16:08
*** dviroel|lunch is now known as dviroel  16:14
<moha7> I still don't know where `storage_hosts` should point  16:44
<moha7> cinder in inventory.json: https://i.ibb.co/1zttDs2/Screenshot-2023-02-03-210536.png --> Is it normal to have both `infra0X` and `infra0X_cinder_api_container` under the `storage-infra_all` section? Please have a look at the `storage_all` section  17:38
<moha7> Seems yes. I compared it with the AIO inventory.  17:40
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-specs master: Blueprint for separated haproxy service config  https://review.opendev.org/c/openstack/openstack-ansible-specs/+/871187  18:19
<jrosser> damiandabrowski: ^ imho we should perhaps limit ourselves to fixing (2) in your spec  18:20
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-specs master: Blueprint for separated haproxy service config  https://review.opendev.org/c/openstack/openstack-ansible-specs/+/871187  18:35
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-specs master: Blueprint for separated haproxy service config  https://review.opendev.org/c/openstack/openstack-ansible-specs/+/871187  18:39
<damiandabrowski> omg, i can't get the opendev rst preview to render this blueprint properly  18:41
<damiandabrowski> jrosser: why do you think so?  18:42
<jrosser> my feeling is that the current changes are spreading data/responsibility all across the roles, rather than keeping them cleanly implemented  18:43
<jrosser> for example i use the haproxy_server role for things that have nothing to do with OSA  18:44
<jrosser> and now this talks about horizon and makes assumptions about port 443 that are not valid for my other use cases: https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/871188/5/tasks/haproxy_ssl_letsencrypt.yml#67  18:45
<damiandabrowski> hmm, maybe we can discuss it at the next meeting. My point was to define variables only in the groups where they are really used. For example, with separated-haproxy-service-config it makes sense to define openstack_haproxy_horizon_stick_table only for horizon, because this variable is not used anywhere else  18:48
<damiandabrowski> but your point regarding horizon and port 443 may be valid, I'll have a look at it on Monday (but it has nothing to do with variable scope)  18:49
<jrosser> i think that is a completely different problem to splitting up the way the haproxy deployment is done  18:49
<jrosser> you could leave the vars almost completely as they are and still move the haproxy service config into each playbook  18:50
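
(Roughly what that would look like, sketched with the role's existing `haproxy_service_configs` format; the glance values here are illustrative, not taken from the actual patches:)

    # hypothetical play added to a service playbook, e.g. for glance
    - name: Configure haproxy for the glance API
      hosts: haproxy
      roles:
        - role: haproxy_server
          haproxy_service_configs:
            - service:
                haproxy_service_name: glance_api
                haproxy_backend_nodes: "{{ groups['glance_api'] | default([]) }}"
                haproxy_port: 9292
                haproxy_balance_type: http
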
<damiandabrowski> but how would you access variables like glance_backend_https then?  18:50
<damiandabrowski> (if they are needed for haproxy service configuration)  18:51
<damiandabrowski> it really sucks that we now need to define glance_backend_https globally (for all hosts); point 1 from my blueprint tries to fix that  18:52
<jrosser> i don't think that there is an answer to that  18:58
<jrosser> there has to be some compromise here  18:58
<jrosser> and these patches fundamentally change the "data is wired between roles using group_vars" concept in OSA  18:58
<jrosser> and there is no escaping that some of these vars pretty much have to become globals  18:58
<jrosser> also remember that OSA supports using an external lb such as an F5, so we have to keep everything to do with haproxy separable and optional  19:02
<damiandabrowski> that's right, some of them do need to remain global (like haproxy_ssl), so I moved them to group_vars/all/haproxy.yml  19:03
<damiandabrowski> but some of them (like openstack_haproxy_horizon_stick_table) are needed only for horizon, so I moved them to group_vars/horizon_all.yml  19:03
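
(A sketch of the split being described; the stick-table value is illustrative, not copied from the patch:)

    # group_vars/all/haproxy.yml -- settings that genuinely stay global
    haproxy_ssl: true

    # group_vars/horizon_all.yml -- consumed only by the horizon haproxy service config
    openstack_haproxy_horizon_stick_table:
      - "stick-table type ip size 256k expire 30m"
      - "stick on src"
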
<damiandabrowski> sorry but I really need to go :/ have a great weekend!  19:03
<jrosser> ok, you too  19:03
*** dviroel is now known as dviroel|pto  21:03
