Thursday, 2023-08-10

noonedeadpunkmornings06:27
damiandabrowskihi!07:47
noonedeadpunkjust in case - this keystone "trim" thing has been backported even to Zed: https://review.opendev.org/c/openstack/keystone/+/89041710:03
noonedeadpunkpatch with a fix "should work" https://review.opendev.org/c/openstack/keystone/+/89093610:04
noonedeadpunkonce it lands, we likely don't need https://review.opendev.org/c/openstack/openstack-ansible/+/88978110:05
noonedeadpunkas we do nothing wrong - the password limit is 72 bytes in bcrypt, not 54 like it was in the keystone patch10:05
damiandabrowskinice work noonedeadpunk 10:15
damiandabrowskibtw. i'm curious, why did you put '96' here? :D 10:15
damiandabrowskihttps://review.opendev.org/c/openstack/keystone/+/89093610:15
noonedeadpunkjust a random number. The test in question is expected to raise an exception10:16
noonedeadpunk(a random number that is higher than 72)10:16
damiandabrowskiack, thanks10:17
damiandabrowskii got confused because previously it was 64 for both max_password_length and invalid_length_password10:17
noonedeadpunkI've also looked into support of bcache_sha256, which does not have a limitation on password length10:17
noonedeadpunkhttps://review.opendev.org/c/openstack/keystone/+/89102410:17
noonedeadpunkthat actually could be a good replacement for just bcrypt10:18
noonedeadpunk*bcrypt_sha25610:18
noonedeadpunkdamiandabrowski: well, max_password_length does not apply to bcrypt if it's longer than BCRYPT_MAX_LENGTH anyway...10:21
noonedeadpunkBut yeah, might be that was an idea actually10:21
noonedeadpunkgood point actually10:22
noonedeadpunkI wasn't thinking from that angle10:22
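The 72-byte bcrypt limit discussed above can be sketched with a minimal, stdlib-only check. This is an illustration, not keystone's actual code: the function name `check_password_length` is invented here, and sha256 stands in for bcrypt (which, unlike sha256, silently ignores everything past its first 72 input bytes — hence the up-front rejection):

```python
import hashlib

# bcrypt only uses the first 72 bytes of its input; anything longer is
# silently truncated, so over-long passwords are rejected up front.
BCRYPT_MAX_LENGTH = 72

def check_password_length(password: str, max_length: int = BCRYPT_MAX_LENGTH) -> bytes:
    """Reject passwords bcrypt would silently truncate, then hash.

    sha256 is a stand-in so this sketch runs without third-party
    dependencies; real keystone hashes with bcrypt via passlib.
    """
    raw = password.encode("utf-8")
    if len(raw) > max_length:
        raise ValueError(
            f"password is {len(raw)} bytes; bcrypt only reads the first {max_length}"
        )
    return hashlib.sha256(raw).digest()

check_password_length("x" * 72)      # exactly at the limit: accepted
try:
    check_password_length("x" * 96)  # any length over 72 triggers the error
except ValueError as exc:
    print(exc)
```

This is also why the 96 in the unit test is arbitrary: the test only needs some length above 72 to trigger the exception.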
VitoHello Everybody! In an OpenStack Ansible environment, after adding a new compute host, it appears in "openstack compute service list" but not in "openstack hypervisor list" - what could be wrong? Thanks for your help13:19
noonedeadpunkVito: hey! and does the service appear as UP in the compute service list?13:22
noonedeadpunkUsually, when a service is not in the hypervisor list, nova-compute is down13:23
VitoBut it is shown Up in "compute service list" so this is weird 13:25
noonedeadpunkdoesn't this compute host accidentally have the same hostname as some other old decommissioned one?13:28
noonedeadpunkAs there could be a conflict while registering the new compute, where its uuid does not match the hostname of the previous one in placement13:29
noonedeadpunkbut then in the nova-compute logs you should see a quite explicit error about that13:30
noonedeadpunkbut it's worth checking them regardless13:30
VitoNope, it's another hostname, but i have a weird network setup: this new compute node is in another network (ip 192.168), but i created a route so it can reach all ips of the first node's network (ip 172.16.xx), so it's maybe an unusual setup i have done here 13:32
Vitook now the new compute appears down, and i have the error in nova-compute " Compute nodes ['93d12d39-9ac2-463a-9bff-6fc02b5f9659'] for host vmfarm12 were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning."13:34
noonedeadpunkwell, this record is fine for the first service startup13:36
noonedeadpunkso nova-compute must be able to communicate with nova-conductor through rabbitmq13:36
noonedeadpunkand with other computes via various protos for migrations13:36
noonedeadpunkbut hypervisors and service list are reported and recorded kinda independently iirc. 13:41
noonedeadpunkas they can also have different hostnames (fqdn vs short one)13:42
Vitoyes, rabbitmq is reachable from the nova compute node (i tried telnet with ip + port)13:49
Vito@noonedeadpunk the issue was the ceph storage, not reachable from my new compute host, so now the hypervisor is in the list15:01
tuggernuts1Hey y'all, working on getting an Antelope deployment up and I'm having some trouble with my neutron deployment. I think I have the api deploying on my infra nodes correctly now, but for whatever reason northd is not honoring my is_metal. I'm putting this: https://pastebin.com/ucLnctZn in env.d but I'm pretty sure I have a bug here.19:25
tuggernuts1also https://pastebin.com/AabdKgqf 19:26
mgariepyyour metal thing, i think it's a yaml issue. https://github.com/openstack/openstack-ansible/blob/master/inventory/env.d/neutron.yml#L88-L9519:38
mgariepywrong indentation or something, or you pasted it wrong into pastebin 19:38
tuggernuts1k I figured it was probably wrong19:41
tuggernuts1it looks like I don't get any containers anymore, but I'm still not getting anything under network-northd_hosts in my inventory 19:49
tuggernuts1I really don't understand what I'm missing here lol19:49
tuggernuts1how can I get this northd deployment not in a container and have all my infra nodes be where it gets deployed?19:50
mgariepyi do have a network-northd_hosts group in my openstack_user_config.yml.19:52
mgariepydo you have that ?19:52
mgariepydid you paste your config somewhere?19:52
tuggernuts1moment I'm getting rate limited by pastebin I can get you all that19:52
tuggernuts1I do have it in my user_config yes19:52
mgariepyhttps://paste.openstack.org/19:53
tuggernuts1https://paste.openstack.org/show/bRsAtanf6TRXDSh3cCm8/19:54
tuggernuts1oic you said network-northd_hosts and I have neutron_ovn_northd19:55
mgariepyhttps://github.com/openstack/openstack-ansible/blob/master/inventory/env.d/neutron.yml#L13019:56
mgariepytry with it. 19:57
tuggernuts1thanks for the help it's running atm19:59
tuggernuts1ok it's timing out on me but it seems it's still trying to talk to a container 20:01
tuggernuts1https://paste.openstack.org/show/bl3fmKTAI0cKBiMWAqdN/20:01
mgariepyprogress.20:02
mgariepyyou need to update the is_metal stuff now.20:02
mgariepymaybe your inventory.json file will need to be modified a bit.20:02
tuggernuts1I deleted it and ran it again20:03
mgariepyouch.20:03
tuggernuts1it's all good I don't really need it20:03
tuggernuts1https://paste.openstack.org/show/bjVWQBIXzUg9sPe0D9ws/20:03
tuggernuts1that's the task I'm running atm to see how the groups are getting built20:03
tuggernuts1I just blew away this environment so nothing is actually deployed yet20:04
mgariepyon this i got to go.20:04
mgariepyok.20:04
mgariepymaybe the is_metal stuff is not quite right and needs some fix. i don't really know.20:05
mgariepyi prefer to run my services inside LXC :D20:05
tuggernuts1I would if I could unfortunately I won't ever get any working networks if I do that20:05
tuggernuts1oh snap! @mgariepy https://paste.openstack.org/show/bUHW8OCzCTJeXJ5RNjSl/20:16
tuggernuts1adding that I _think_ worked!20:16
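The working paste above isn't preserved in the log, but based on the upstream inventory/env.d/neutron.yml linked earlier in the discussion, an override that forces neutron_ovn_northd onto metal typically looks something like the sketch below (placed under /etc/openstack_deploy/env.d/; the container name mirrors the upstream file and should be verified against your OSA release):

```yaml
# /etc/openstack_deploy/env.d/neutron.yml -- sketch only, verify the
# skeleton names against your release's inventory/env.d/neutron.yml
container_skel:
  neutron_ovn_northd_container:
    properties:
      is_metal: true
```

With is_metal: true the dynamic inventory stops generating an LXC container for the component and instead targets the hosts listed under the matching network-northd_hosts group in openstack_user_config.yml directly.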
mgariepywoohoo.20:44
tuggernuts1running a full deploy atm so hopefully it finishes 20:45

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!