Wednesday, 2021-09-01

<noonedeadpunk> I'm not sure there's a path from lxb to ovs though?  [07:40]
<snadge> I'm running 21.2.6 because reasons, on a CentOS 7 deployment, and I'm seeing fibre channel issues. This v7000 SAN is the bane of my existence. The gist of the error is "unable to find a fibre channel volume", in os_brick.initiator  [07:55]
<snadge> it looks like it's looking for a lun2 in /dev/disk/by-path/pcietc-lun-2 ... where nothing lun-2 exists.. something is bugging out in the fibre channel driver, and as best as I can tell it's set up the same way that openstack train was  [08:01]
<snadge> I was getting different errors because systool wasn't installed; I've installed sysfsutils and get results from systool -c fc_host -v and systool fc_transport -v .. cinder is loaded and its status seems ok  [08:13]
*** arxcruz is now known as arxcruz|training  [08:16]
<snadge> I've been looking at os_brick/initiator/linuxfc.py and os_brick/initiator/connectors/fibre_channel.py to try to make sense of the errors, but feel I'm a bit out of my depth.. any pointers on where I could look next, or tips on how to troubleshoot volume creation issues with fibre channel/SAN, would be much appreciated  [08:17]
*** sshnaidm|afk is now known as sshnaidm  [09:08]
<noonedeadpunk> I think the #openstack-cinder folks might have more ideas, as they have more experience with different vendors and the issues related to them. So worth asking there as well  [09:11]
<ierdem> hi, I started a resize process yesterday and it is still in the "resize" state. I don't want to lose that VM, is there any way to gracefully cancel the resize operation and roll the VM back to its old state?  [10:46]
<noonedeadpunk> I guess `openstack server resize revert` doesn't really work?  [10:51]
<noonedeadpunk> I think you might need to reset the state to active and maybe shut down/power on.  [10:52]
<ierdem> noonedeadpunk, which one should I try first? revert or reset state?  [11:00]
<noonedeadpunk> I believe revert would work only when the instance is in the verify_resize state...  [11:09]
<noonedeadpunk> but it wouldn't hurt to try  [11:10]
<noonedeadpunk> reset state would just update the database with the new state. But it may bring inconsistency if some process is still running  [11:10]
<noonedeadpunk> so that's why I mentioned power off/power on - to ensure that nova can operate the VM; then most likely it will cancel all pending stuff, if any  [11:11]
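(A minimal sketch of the sequence being suggested here; exact syntax can differ between client releases and <server-uuid> is a placeholder:

    # revert only succeeds once the resize has reached verify_resize
    openstack server resize revert <server-uuid>
    # otherwise, force the state back to active (admin credentials needed)
    openstack server set --state active <server-uuid>
    # then power-cycle so nova re-asserts control over the instance
    openstack server stop <server-uuid>
    openstack server start <server-uuid>
)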
*** kleini_ is now known as kleini  [12:24]
<ierdem> noonedeadpunk, I set the state of the VM to active and tried to restart/shut off the VM. Unfortunately nova could not operate the VM, and now when I try to send a command to the VM from horizon, it gets stuck. Any suggestions?  [12:35]
<noonedeadpunk> What is in the logs?  [12:38]
<noonedeadpunk> Are you sure that the rest of the cloud operates normally?  [12:39]
<ierdem> yes, the cloud env is working normally. I can not reach the VM log via horizon; on the VM's compute node there is just this https://paste.opendev.org/show/808510/ in the nova-compute log about the VM  [12:45]
<ierdem> by the way, I can reach the VM via ssh and I can reboot it, it works for now  [12:46]
<ierdem> noonedeadpunk, I've checked the compute node. While resizing the instance, nova tried to do a live migration and couldn't succeed. Normally the VM should run on compute20, but at the time of the live migration nova changed the compute node to compute5, and failed. Now 'openstack server show' is showing the instance as running on compute20  [13:09]
<ierdem> So, I think the problem occurred when migrating the instance. How can I solve this?  [13:10]
<noonedeadpunk> anything for nova server-migration-list?  [13:10]
<noonedeadpunk> https://docs.openstack.org/nova/latest/admin/live-migration-usage.html#what-to-do-when-the-migration-times-out also might help  [13:11]
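(A sketch of inspecting and aborting the stuck migration with the commands referenced above; <instance-uuid> and <migration-id> are placeholders to be taken from the migration list output:

    # list live migrations for the affected instance
    nova server-migration-list <instance-uuid>
    # abort a live migration that is still in progress
    nova live-migration-abort <instance-uuid> <migration-id>
)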
<ierdem> noonedeadpunk, nova migration-list: https://paste.opendev.org/show/808512/, compute5 logs: https://paste.opendev.org/show/808511/  [13:13]
<noonedeadpunk> worth trying to abort it then  [13:14]
<ierdem> ok, trying now  [13:14]
<ierdem> noonedeadpunk, couldn't abort: https://paste.opendev.org/show/808513/, openstack server show: https://paste.opendev.org/show/808514/  [13:17]
<ierdem> should I change the state of the VM to 'migrating' from 'active'?  [13:18]
<noonedeadpunk> damn(  [13:19]
<noonedeadpunk> you can change that only from the DB now :(  [13:19]
<ierdem> noonedeadpunk, which should I change, vm_state or task_state?  [13:32]
<noonedeadpunk> iirc vm_state, but that's not 100%  [13:33]
<ierdem> ok, setting it to "migrating". I'll just try  [13:35]
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible master: remove unnecessary revive and add support for metal deployments  https://review.opendev.org/c/openstack/openstack-ansible/+/806933  [13:39]
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible master: Implement RabbitMQ cluster rolling restart feature  https://review.opendev.org/c/openstack/openstack-ansible/+/804109  [13:43]
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-os_tempest master: Update cirros version to 0.5.2  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/803487  [13:52]
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-ops master: A playbook to rolling restart controllers (paying special attention to galera and rabbitMQ)  https://review.opendev.org/c/openstack/openstack-ansible-ops/+/806937  [14:09]
<ierdem> noonedeadpunk, I've changed the task_state to 'migrating'; vm_state should be 'active'. Now nova could not abort the migration because the state of the live migration is "pre-migrating". ERROR (BadRequest): Migration 692 state of instance 349d84c7-866b-44da-9315-0a48afd0cca7 is pre-migrating https://paste.opendev.org/show/808519/  [14:11]
<noonedeadpunk> Sorry, I have no idea of a clean way of doing that. I think you can kind of adjust the database and set migration.status to error or smth...  [14:16]
<noonedeadpunk> But I don't have a direct answer on what exactly to do  [14:16]
<ierdem> noonedeadpunk, thank you :) If I find a solution, I will write  [14:19]
<noonedeadpunk> But what I'd do - continue messing with the DB to be able to cancel the migration with the cli  [14:22]
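(Purely as an illustration of the kind of database surgery being discussed; the table and column names are assumptions based on common nova releases, the migration id and instance UUID come from the error above, and a backup should be taken first:

    # back up the nova DB before touching anything by hand
    mysqldump nova > nova-backup.sql
    # inspect the stuck migration record
    mysql nova -e "SELECT id, status, source_compute, dest_compute FROM migrations WHERE id=692;"
    # mark it as errored so the API no longer considers it in progress
    mysql nova -e "UPDATE migrations SET status='error' WHERE id=692;"
    # clear the instance's task_state and put it back to active
    mysql nova -e "UPDATE instances SET task_state=NULL, vm_state='active' WHERE uuid='349d84c7-866b-44da-9315-0a48afd0cca7';"

After that, stopping and starting the instance as above may let nova reconcile which host the VM is actually running on.)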
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-ops master: A playbook to rolling restart controllers (paying special attention to galera and rabbitMQ)  https://review.opendev.org/c/openstack/openstack-ansible-ops/+/806937  [15:00]
<spatel> does anyone run openstack on remote rented hardware?  [15:10]
<spatel> I want to run openstack in the EU and Asia regions, so I'm talking to a rental provider about renting some servers.  [15:11]
<spatel> I have a question: what if the rental provider says "we can't give you a bunch of VLANs like br-vxlan, br-vlan, br-mgmt etc."?  [15:12]
<spatel> I heard they just give you 2x10G NICs with a public IP :(  [15:12]
<spatel> I am trying to push hard for customized VLANs  [15:13]
<mgariepy> call ovh ;p  [15:23]
<spatel> ovh?  [15:24]
<mgariepy> spatel, https://www.ovh.com/ca/en/solutions/vrack/  [15:27]
<mgariepy> but they might not have a DC everywhere you need :)  [15:28]
<spatel> neat!!!  [15:29]
<spatel> mgariepy do you have personal experience with them or do you just know of them?  [15:30]
<mgariepy> no, I don't have professional experience with them  [15:31]
<mgariepy> nor personal.  [15:31]
<mgariepy> I know a few ppl that rented some servers over the years, but not much more than this.  [15:32]
<spatel> okay! looking good so far, not sure about service and SLA  [15:32]
<noonedeadpunk> I know about hetzner in the EU, but not all feedback was positive :)  [15:41]
<opendevreview> Damian Dąbrowski proposed openstack/openstack-ansible-ops master: A playbook to rolling restart controllers (paying special attention to galera and rabbitMQ)  https://review.opendev.org/c/openstack/openstack-ansible-ops/+/806937  [15:42]
<spatel> It's not easy to deploy openstack in a remote DC :(  [15:45]
<fungi> spatel: I use ovh's openstack clouds (both personally and as a sysadmin of opendev, since they're donating a lot of quota for our nodepool), but haven't tried their dedicated server offering  [15:48]
<fungi> as far as the company itself goes, yes, they're fairly pleasant to work with, and also great outspoken supporters of openstack  [15:49]
<spatel> good to know! in the worst case I may talk to them to see what they can do for me  [15:49]
<kleini> OVH lost a complete data center in the EU due to rain, fire or something like that. They had not replicated their data to a second site and therefore lost all data.  [18:44]
<spatel> kleini good to know!  [20:17]
<mgariepy> it was a fire  [21:40]
