Monday, 2022-11-21

04:59 *** ysandeep|out is now known as ysandeep
04:59 *** ysandeep is now known as ysandeep|ruck
08:39 <dok53> morning, so I have cinder using Quobyte at the moment; it's creating the volumes, and now when I attach it's not failing but getting stuck in the attaching phase. This is the only log I can see: https://paste.openstack.org/show/bTvfhBX9o3bNXwZW1SfB/ Any ideas what may be causing it, or where to look for a better log?
08:41 <noonedeadpunk> Well, I assume there's nothing in the cinder-volume log?
08:54 <dok53> Only that on the container: https://paste.openstack.org/show/byiJjGDetFnUMc7MS6kd/ (volume from just now)
09:46 *** ysandeep|ruck is now known as ysandeep|ruck|lunch
09:54 <dok53> noonedeadpunk, it was a nova-compute issue; the service was dead, so the volume couldn't attach. Attached now :)
09:56 <noonedeadpunk> Hmm... but if it died, you would most likely see that in the log...
09:56 <noonedeadpunk> Weird...
09:57 <noonedeadpunk> As, according to your first output, it had received the volume attachment request
10:26 *** ysandeep|ruck|lunch is now known as ysandeep|ruck
10:45 <dok53> Yep, I didn't see anything in the logs to suggest that. I just spotted myself that the compute service was dead on the host
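
[Editor's note: the failure mode above (attachment stuck in "attaching" with no obvious cinder error) is often quickest to spot by checking the compute services directly. Below is a minimal openstacksdk sketch of that check, assuming a clouds.yaml entry named "mycloud" and a placeholder volume ID; the equivalent CLI checks are `openstack compute service list` and `openstack volume show`.]

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

# Flag any nova-compute service that is enabled but not up, which is the
# condition that left the volume stuck in "attaching" in the log above.
for svc in conn.compute.services():
    if svc.binary == "nova-compute" and svc.state != "up":
        print(f"{svc.host}: nova-compute is {svc.state} ({svc.status})")

# Inspect the volume itself: its status and any half-finished attachments.
vol = conn.block_storage.get_volume("VOLUME_ID")  # placeholder volume ID
print(vol.status, vol.attachments)
```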
11:23 *** dviroel|out is now known as dviroel
12:00 *** ysandeep|ruck is now known as ysandeep|ruck|brb
12:12 *** dviroel_ is now known as dviroel
12:12 <admin1> Anyone using Ceph + Keystone auth (OpenStack)? I've got a strange issue: making a bucket private works fine, but after making a bucket public, trying to access it gives NoSuchBucket
12:12 *** ysandeep|ruck|brb is now known as ysandeep|ruck
12:44 <noonedeadpunk> admin1: I do recall an RGW issue like that, but it was quite a while ago and just a Ceph upgrade helped
12:44 <noonedeadpunk> But that was around Ceph 14 or so
12:45 <noonedeadpunk> And it was related to ACLs in general
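
[Editor's note: one way to narrow down a NoSuchBucket-on-public-access problem like admin1's is to set and then read back the bucket ACL directly against RGW's S3 API. A small boto3 sketch follows; the endpoint, credentials (e.g. EC2-style credentials created with `openstack ec2 credentials create` when RGW is backed by Keystone), and bucket name are all placeholders, not values from this log.]

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

ENDPOINT = "https://rgw.example.com:8080"  # placeholder RGW endpoint
BUCKET = "test-bucket"                     # placeholder bucket name

# Authenticated client using placeholder EC2-style credentials.
s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT,
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Flip the bucket to public-read, then read the ACL back to see what RGW stored.
s3.put_bucket_acl(Bucket=BUCKET, ACL="public-read")
for grant in s3.get_bucket_acl(Bucket=BUCKET)["Grants"]:
    grantee = grant["Grantee"]
    print(grantee.get("URI", grantee.get("ID")), grant["Permission"])

# An anonymous (unsigned) client should now be able to list the bucket;
# a NoSuchBucket error here reproduces the symptom described above.
anon = boto3.client("s3", endpoint_url=ENDPOINT,
                    config=Config(signature_version=UNSIGNED))
print(anon.list_objects_v2(Bucket=BUCKET).get("KeyCount"))
```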
14:43 <dok53> noonedeadpunk, jrosser and others, just wanted to say thanks for the help lately. I now have an OpenStack with networking and Cinder volumes on a Quobyte backend
14:44 <jrosser> dok53: excellent :)
14:47 <dok53> :)
15:11 *** ysandeep|ruck is now known as ysandeep|ruck|dinner
15:14 *** dviroel is now known as dviroel|lunch
16:13 *** ysandeep|ruck|dinner is now known as ysandeep|ruck
16:18 *** dviroel|lunch is now known as dviroel
16:23 *** ysandeep|ruck is now known as ysandeep|out
17:32 <opendevreview> Bjoern Teipel proposed openstack/openstack-ansible-os_octavia master: Adding octavia_provider_network_mtu-parameter parameter  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/864819
17:34 <opendevreview> Bjoern Teipel proposed openstack/openstack-ansible-os_octavia master: Adding octavia_provider_network_mtu-parameter parameter  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/864819
19:58 <jrosser> damiandabrowski: I had a quick look at the glance/cinder NFS jobs and this is breaking: https://zuul.opendev.org/t/openstack/build/07862b60e8b64062a432004150f9efbe/log/logs/host/cinder-volume.service.journal-19-33-56.log.txt#6762
21:15 <ElDuderino> Hi all, random question and I'm not sure where (or how) to ask it. I have 3 network nodes (bare metal) that only run neutron containers. I also have three controllers that run all the other core services. keepalived seems to honor the priority and sets the state correctly on the controllers, but the neutron nodes all seem to have the same info in /var/lib/neutron/ha_confs/<routerID>/keepalived.conf
21:16 <ElDuderino> Currently, keepalived is alive and reports the proper states via systemctl on each of my controllers, as expected.
21:17 <ElDuderino> The issue is that on my neutron hosts all three are showing 'active' L3 agents (high availability status) in the GUI, but when I check those keepalived configs they don't have the priority weights like their controller counterparts do.
21:19 <ElDuderino> So what's happening is that the router shows all three L3 agents as active, and there's no DHCP or routing happening into the VMs on those networks. Wondering what creates those ha_confs, and whether they are all supposed to be weighted the same and all say 'backup' in the configs? Still learning how keepalived is being invoked via that agent and how it all works.
21:25 <ElDuderino> From what I can tell, 'ha_confs' is created by /etc/ansible/roles/os_neutron/ and is used by keepalived, which is set up via /etc/ansible/roles/keepalived/*. Thx.
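
[Editor's note: as far as I know, neutron intentionally renders every HA router instance with state BACKUP, the same priority, and nopreempt, and lets VRRP elect the master at runtime, so identical weights in ha_confs are expected; the per-node result is what the L3 agents report (e.g. `openstack network agent list --router <id> --long`, where supported). Below is a small sketch, with a placeholder router ID, for comparing what each network node actually rendered; the name of the agent's live-state file is an assumption and may differ by release.]

```python
import re
from pathlib import Path

ROUTER_ID = "ROUTER_UUID"  # placeholder; run on each network node and compare
conf = Path(f"/var/lib/neutron/ha_confs/{ROUTER_ID}/keepalived.conf")

# Pull the VRRP state and priority that neutron rendered into keepalived.conf.
text = conf.read_text()
state = re.search(r"^\s*state\s+(\S+)", text, re.MULTILINE)
priority = re.search(r"^\s*priority\s+(\d+)", text, re.MULTILINE)
print(f"{conf}: state={state.group(1) if state else '?'}, "
      f"priority={priority.group(1) if priority else '?'}")

# The L3 agent also records the live VRRP result in a small file alongside the
# config ("master"/"backup"); the filename varies, so check a couple of names.
for candidate in ("state", "ha_state"):
    state_file = conf.with_name(candidate)
    if state_file.exists():
        print(f"{state_file}: {state_file.read_text().strip()}")
```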
22:45 *** dviroel is now known as dviroel|afk
