Friday, 2024-01-26

admin1\o10:27
admin1looking for pointers to do this in the latest osa .. old method was => haproxy_horizon_service_overrides:  haproxy_frontend_raw:  - acl cloud_keystone hdr(host) -i id.domain.com .. - use_backend keystone_service-back if cloud_keystone 10:27
noonedeadpunkhaproxy_horizon_service_overrides is still a valid variable?10:29
noonedeadpunkSo not sure what you mean here10:30
noonedeadpunkwhat has changed is that you'd need to run os-horizon-install.yml --tags haproxy-service-config10:30
noonedeadpunkinstead of haproxy-install.yml to propagate these settings10:30
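A minimal sketch of the workflow being described, assuming a standard deployment with the playbooks under /opt/openstack-ansible/playbooks (the path is an assumption):

```shell
# render only the per-service haproxy config for horizon, instead of running the full haproxy playbook
cd /opt/openstack-ansible/playbooks
openstack-ansible os-horizon-install.yml --tags haproxy-service-config
```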
admin1grep -ri haproxy_horizon_service_overrides  has no hits on /etc/ansible or /opt/openstack-ansible .. so 10:32
admin1noonedeadpunk,  sample -> https://pastebin.com/raw/rxPWAgGQ 10:35
noonedeadpunkwell, it's obviously there: https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/horizon_all/haproxy_service.yml#L47 10:36
admin1now grep gets a hit .. it did not just a few mins ago :( 10:38
noonedeadpunkI didn't put anything to your env, I promise :D10:38
admin1:) 10:39
noonedeadpunkwhat changed is the scope of the var - it should not be in the haproxy_all group, and you need to run the service playbook instead of haproxy to apply changes10:39
admin1it used to be so easy to ssh to util containers before .. now ssh is removed and I have to first go to the controller and then shell in .. 10:43
noonedeadpunkWell. I wasn't thinking it's a usecase tbh, as container IPs are kinda... were not supposed to be used like this... But what I think we can do - is create some kind of playbook that will add SSH to containers10:46
noonedeadpunk(or role)10:46
admin1it's an apt install openssh-server for now .. so not a biggie 10:46
admin1but one extra step to do 10:46
noonedeadpunkActually10:46
noonedeadpunkWe have a variable for that :)10:46
noonedeadpunk`openstack_host_extra_distro_packages`10:47
noonedeadpunkIf you define it in group_vars/utility_all - you will get package installed only there10:47
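As a sketch of that suggestion (the file path follows the usual /etc/openstack_deploy layout and is an assumption):

```yaml
# /etc/openstack_deploy/group_vars/utility_all.yml
# extra distro packages installed only on members of the utility_all group
openstack_host_extra_distro_packages:
  - openssh-server
```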
admin1aah ..   for me it would be curl, wget and vim :) 10:48
noonedeadpunkBut there's still question of key distribution10:48
noonedeadpunkBut also you can say that the utility host is localhost and get rid of the containers for utility10:48
admin1noonedeadpunk, https://pastebin.com/raw/rxPWAgGQ -- this updated the id in keystone .. but i see no entries in haproxy 10:53
jrosseri wonder what happens here https://zuul.opendev.org/t/openstack/build/9d1b1dc7776b42378157a18969c7ffb2/log/logs/host/nova-compute.service.journal-00-02-15.log.txt#7446 11:04
jrosserseems like it returns 200 from neutron https://zuul.opendev.org/t/openstack/build/9d1b1dc7776b42378157a18969c7ffb2/log/logs/openstack/aio1_neutron_server_container-92ca237a/neutron-server.service.journal-00-02-15.log.txt#17093 11:04
noonedeadpunk408 - huh11:13
noonedeadpunkadmin1: so, what playbooks did you run after changing it?11:14
noonedeadpunkand in what file have you defined `haproxy_horizon_service_overrides`?11:14
admin1user_variables.yml11:27
admin1i ran keystone playbook, haproxy playbook and the horizon playbook11:27
noonedeadpunkthat should work for sure11:27
noonedeadpunkhm, let me try this then...11:28
admin1the endpoint points to https://id.domain.com in the endpoint list now ( for keystone ) .. but there is no entry in haproxy yet 11:28
admin1this will help move all my endpoints to https:// and not have them on any specific ports -- better for customers behind restrictive firewalls11:29
noonedeadpunkadmin1: oh, I know11:32
noonedeadpunkI know11:33
noonedeadpunkfrontend for horizon is not in horizon vars....11:33
admin1i ran haproxy playbook also .. 11:34
noonedeadpunkadmin1: try replacing haproxy_horizon_service_overrides with haproxy_base_service_overrides11:34
admin1ok11:34
admin1one moment11:34
noonedeadpunkand then yes - haproxy playbook11:34
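Putting the two suggestions together, a hedged sketch of what the override could look like in user_variables.yml, reusing the ACL from admin1's earlier example (the actual paste isn't reproduced here, so the hostname and backend name are illustrative):

```yaml
# user_variables.yml
haproxy_base_service_overrides:
  haproxy_frontend_raw:
    - acl cloud_keystone hdr(host) -i id.domain.com
    - use_backend keystone_service-back if cloud_keystone
```

followed by re-running the haproxy playbook as noted above.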
admin1it added the acl , but i do not see any bind id.admin0.net 11:38
admin1so it still redirects to horizon 11:39
admin1hmm.. checking how it should be .. 11:40
gokhanhello folks, I have an offline environment. there is no internet, dns or firewall. all ip cidrs and vlans are defined on the switch. I created a provider network and subnet without dns nameservers. after that I created a vm on the provider network (it is defined on the switch) but it doesn't get an ip address. nova and neutron metadata services are working and there is no error log. what can be the reason ? 11:42
admin1check tcpdump . is it lb or ovs or ovn 11:43
admin1are the dhcp agents plugged in correctly to the right vlan 11:43
noonedeadpunkare dhcp agents even enabled - that's the question I guess11:43
gokhanit is ovs 11:43
noonedeadpunkas if the network was created with --no-dhcp you might need to use config_drive only11:43
noonedeadpunkor have an l3 router attached to the network11:44
noonedeadpunk(neutron router)11:44
gokhandhcp agents are enabled 11:45
gokhanthere is no router 11:45
gokhanI am creating with provider network 11:45
gokhanfrom nodes I can't ping dhcp agents11:46
gokhancan dhcp agents ping between themselves 11:46
admin1noonedeadpunk, https://pastebin.com/raw/pD5AMxe6    cloud.domain.net and id.domain.net point to the same ip it's listening on, the pem has a wildcard for *.domain 11:46
admin1gokhan, check where the dhcp agent is .. on which node .. is it seeing the proper vlan tag .. is the tag enabled on the switch for tagged traffic ? 11:47
admin1id.domain still goes to horizon  .. i stopped and started haproxy 11:48
noonedeadpunkgokhan: I guess it's matter to ping dhcps from the VM11:50
noonedeadpunkor VM from DHCP namespaces11:50
noonedeadpunkAnd then you indeed can check for traffic on the compute node on the bridge or tap interface11:51
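A few commands along those lines, as a sketch (the network id, VM ip and tap name are placeholders; the qdhcp namespace and tap<port-id> naming assume the usual ML2/OVS setup):

```shell
# on the node running the dhcp agent: ping the VM from inside its dhcp namespace
ip netns list | grep qdhcp
ip netns exec qdhcp-<network-id> ping <vm-ip>

# on the compute node: watch the VM's tap interface for dhcp/icmp traffic
tcpdump -nei tap<port-id-prefix> 'port 67 or port 68 or icmp'
```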
gokhanI will check it now11:53
noonedeadpunkand after all you can try spawning VM with config drive and at least access it through console given you provide root password as extra user config11:54
noonedeadpunkas then the VM should get a static network configuration IIRC. Or well. At least execute cloud-init11:55
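A hedged example of that kind of test boot (image, flavor and network names are placeholders; the cloud-config just sets a root password so the console is usable):

```shell
cat > user-data.yml <<'EOF'
#cloud-config
chpasswd:
  expire: false
  list: |
    root:ChangeMe123
EOF

openstack server create --image cirros --flavor m1.tiny \
  --network provider-net --config-drive true \
  --user-data user-data.yml test-vm
```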
admin1gokhan ,  you can also use cirros to check with tcpdump, set up the ip manually and troubleshoot 11:58
gokhanadmin1, yes now I created cirros image and I will set ip manually 11:59
gokhanadmin1, is it a problem if the network gateway is not defined on the switch ?12:02
gokhanI assigned an ip manually to the vm and pinged from the vm to one of the dhcp agent ips. I listened with tcpdump on the tap interface but there is no icmp traffic on the tap interface. 12:33
kleinihttps://docs.openstack.org/nova/2023.1/admin/aggregates.html#tenant-isolation-with-placement does somebody use that? This worked very well for me for a single project. It broke now, since I put a second project onto the host aggregate. The issue is that all other projects' VMs are also scheduled onto the host aggregate. I want to have the host aggregate exclusively for two projects. What am I overlooking?13:38
noonedeadpunkOh, I know :)13:38
noonedeadpunkkleini: so you need 2 things to work together - pre-filtering (with placement) and post-filtering with old time-proven AggregateMultiTenancyIsolation filter13:39
noonedeadpunkAnd this is not designed to work together right now13:40
noonedeadpunkBut it works somehow :D13:40
noonedeadpunkOne way - this patch: https://review.opendev.org/c/openstack/nova/+/896512 But it used to break other use-cases13:41
noonedeadpunkWhat I have right now is a weird hack-around13:41
kleiniWhy did it work for a single project and it broke by adding a second one?13:41
noonedeadpunkYeah, that's exactly the thing..... That format is incompatible between pre/post filtering13:42
kleiniEven the pre-filtering seemed to have worked.13:42
kleiniSo the issue is that adding 1 and 2 to filter_tenant_id?13:42
noonedeadpunkSo you need to have following properties: `filter_tenant_id=tenant1,tenant2 filter_tenant_id_1=tenant1 filter_tenant_id_2=tenant2`13:43
noonedeadpunkpre-filter does not split on commas13:43
noonedeadpunkpost filter does not understand suffixes13:43
kleiniand that's all?13:43
noonedeadpunkso you configure the post filter with filter_tenant_id (it's breaking the pre-filter, but in a way that there's never a match, so no biggie)13:44
noonedeadpunkand the pre filter with multiple suffixed options13:44
noonedeadpunkyes13:44
noonedeadpunkAs the pre-filter works on destination selection for those who are in the list13:45
noonedeadpunkand the post filter rejects ones who are not in the list, iirc13:46
noonedeadpunkhope this helps13:46
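As a concrete illustration of that property combination (the aggregate name and project ids are placeholders):

```shell
openstack aggregate set \
  --property filter_tenant_id=tenant1,tenant2 \
  --property filter_tenant_id_1=tenant1 \
  --property filter_tenant_id_2=tenant2 \
  my-aggregate
```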
admin1noonedeadpunk, when you have time, please try to test just 1 endpoint  id.domain.com with keystone 13:49
admin1via the overrides i am trying to do 13:49
kleiniSo, pre-filtering schedules VMs of these two projects to the host aggregate but the post-filtering failed to schedule other projects' VMs onto non-aggregate hosts, right?13:50
admin1gokhan, it's a combo of security groups ( iptables + ebtables ) and the vlan/vxlan/tunnel 13:52
admin1if you create 2 vms on the same network, can they ping each other 13:52
admin1try to create 4 vms .. using affinity .. one group with affinity and one with anti-affinity so that 2 are on the same node and 2 are on different nodes13:52
admin1and then check if they can ping each other on the same node vs different nodes 13:53
admin1for the moment, try to allow all icmp in the security group rules 13:53
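A sketch of that 2+2 test, assuming the cirros image and default security group mentioned earlier (group and VM names are placeholders):

```shell
# two server groups: one keeps VMs together, one spreads them across nodes
openstack server group create --policy affinity same-node
openstack server group create --policy anti-affinity diff-node

# boot two VMs into each group via the scheduler hint (repeat for vm-a2, vm-b1, vm-b2)
openstack server create --image cirros --flavor m1.tiny --network provider-net \
  --hint group=<same-node-group-uuid> vm-a1

# temporarily allow all icmp in the project's default security group
openstack security group rule create --protocol icmp default
```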
noonedeadpunkkleini: well, it all depends on how you set these. as any combination is possible14:05
spateljrosser  I have a blog out on rados + keystone integration - https://satishdotpatel.github.io/ceph-rados-gateway-integration-with-openstack-keystone/ 14:06
kleininoonedeadpunk, I will check the source code of Nova and Placement and try to figure out how to work around the issue.14:08
noonedeadpunkbut `filter_tenant_id=tenant1,tenant2 filter_tenant_id_1=tenant1 filter_tenant_id_2=tenant2` should really work14:08
noonedeadpunkAt least it does for me14:08
noonedeadpunkit's just that for filter_tenant_id you should have a comma-separated list, which contains no more than 6 projects, and then also each project individually through a suffixed param14:09
noonedeadpunkand it works on antelope for us14:09
noonedeadpunkadmin1: ok... So what specifically should I check? If the config is applied?14:10
admin1well. curl id.domain.com should return keystone json 14:10
admin1for me, it returns horizon 14:10
noonedeadpunkok14:10
noonedeadpunkugh, it's a really broad topic though14:11
kleininoonedeadpunk, thank you very much. I expect it to work for us, too. Only currently it does not seem to work. Maybe we need some rest first.14:11
noonedeadpunkand what behaviour do you see? That other projects can spawn instances on this aggregate, but tenants specified only spawn on these hosts?14:13
kleiniyes, exactly that14:13
admin1do you guys know a good blog on how policies work  .. for example how do I deny admin the rights to delete a project .. while the delete could be done via a diff group arbitrarily called "super-admins"14:13
kleiniVMs of other VMs are not wanted on the aggregate14:14
noonedeadpunkok, then it's post-filter14:14
kleiniof other tenants/projects14:14
kleiniand post-filter is in Nova AggregateMultiTenancyIsolationFilter, right?14:14
noonedeadpunkDo you have `AggregateMultiTenancyIsolation` in the list of enabled filters to start with?14:14
admin1spatel, all working fine ?  object storage ? 14:14
spatelyes all good 14:15
noonedeadpunkadmin1: no, but that's not complicated. in theory14:15
kleininoonedeadpunk, again, thank you very much. I need to have a rest. My sentences are already getting more and more mistakes.14:15
spatelone problem I am having is that the storage policy* stuff is not able to be defined in the horizon GUI 14:15
spatelI can create a container from the command line... but not from horizon because of the storage policy stuff which is a new feature in openstack14:16
noonedeadpunkadmin1: google keystone policies, find page like https://docs.openstack.org/keystone/latest/configuration/policy.html - look for `delete_project`. You see it's `rule:admin_required`14:16
spatelmaybe I need to ask someone on the horizon team 14:16
kleiniand I have everything configured as described here: https://docs.openstack.org/nova/2023.1/admin/aggregates.html#tenant-isolation-with-placement - except that filter_tenant_id=tenant1,tenant2 was missing14:16
noonedeadpunkok, yes, that is the vital part for the post-filter to work as it doesn't check for suffixes14:17
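For reference, a sketch of how the two halves sit in nova.conf on the scheduler (keep whatever enabled_filters list you already have and just append the isolation filter):

```ini
[scheduler]
# pre-filter: limit placement candidates by the aggregate's filter_tenant_id* metadata
limit_tenants_to_placement_aggregate = True

[filter_scheduler]
# post-filter: append to the existing list
enabled_filters = <existing filters>,AggregateMultiTenancyIsolation
```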
noonedeadpunkadmin1: and you can either define admin_required differently (it's also on that page) or replace with smth like `role:super_admin`14:17
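A hedged sketch of such an override; "super_admin" is the hypothetical role from admin1's example, and the rule name comes from the keystone policy reference linked above. It could go into keystone's policy.yaml directly or via the keystone role's policy override mechanism in OSA:

```yaml
# keystone policy.yaml (sketch)
"identity:delete_project": "role:super_admin"
```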
opendevreviewAndrew Bonney proposed openstack/openstack-ansible master: WIP: [doc] Update distribution upgrades document for 2023.1/jammy  https://review.opendev.org/c/openstack/openstack-ansible/+/906832 14:31
noonedeadpunkandrewbonney: just in case, I had another round of improvements here: https://review.opendev.org/c/openstack/openstack-ansible/+/906750 14:36
noonedeadpunkThe only thing I'm not convinced about - keystone, as it feels that by default running keystone on the primary node will rotate fernet prematurely14:36
andrewbonneyI spotted :) I'll be a while before I have any other tweaks ready to merge so happy for that to go in first and I'll fix any merge conflicts14:36
noonedeadpunkleaving all issued/cached tokens invalid14:36
noonedeadpunkWe got hit by that with rgw and users who tried to re-use tokens and cache them locally14:37
andrewbonneyLast time we did the upgrade I don't think limit worked for keystone, but I did want to re-check that this time around14:37
noonedeadpunknot to mention the need to wipe memcached....14:37
andrewbonneyIs there a reason we rotate keys every time the keystone role runs? If not it would be nice to be able to skip that14:37
noonedeadpunkYeah, limit probably does not... But maybe when "primary" container is empty - there's a way to say that it's another one which is primary14:38
noonedeadpunkto use it as a sync source for the runtime14:38
noonedeadpunkI *guess* it might be not each time14:38
noonedeadpunkbut since they're absent on "primary"....14:38
noonedeadpunkhaven't looked there so it's all just assumptions14:38
andrewbonneyOk14:38
noonedeadpunkbut that hit us really badly....14:39
noonedeadpunkI guess this just results in no file and triggers fernet being re-issued: https://opendev.org/openstack/openstack-ansible-os_keystone/src/branch/master/tasks/keystone_credential_create.yml#L16-L19 14:40
andrewbonneyI'll definitely look at this more when I get to our keystone nodes next week14:40
noonedeadpunkyeah: https://opendev.org/openstack/openstack-ansible-os_keystone/src/branch/master/tasks/keystone_credential_create.yml#L82-L83 14:41
andrewbonneyDid you hit any issues with repo servers? I've just done one of those and think the gluster role needs some tweaks for re-adding existing peers14:41
noonedeadpunkbut dunno how to work around that, as it just depends on `_keystone_is_first_play_host`14:41
noonedeadpunkum, no, but we don't have gluster - we just mount cephfs there14:42
andrewbonneyAh ok14:42
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: [doc] Slighly simplify primary node redeployment  https://review.opendev.org/c/openstack/openstack-ansible/+/906750 14:44
noonedeadpunkadmin1: sorry, I need to leave early today, so I'm not able to check on your question now.... Or maybe when I return later...14:49
admin1sure 14:49
noonedeadpunkLike grabbing a couple of beers on the way back....14:49
admin1:) happy friday 14:53
noonedeadpunkyeah, you too :)14:54
noonedeadpunkwill ping you if find anything14:54
spatelWhat do you think about Micron 5400 SSD for ceph ?19:51
mgariepyi mostly do sas ssd instead of sata ones if i don't have money for nvme ;)19:57
opendevreviewMerged openstack/openstack-ansible master: [doc] Reffer need of haproxy backend configuration in upgrade guide  https://review.opendev.org/c/openstack/openstack-ansible/+/906360 23:20
