Monday, 2020-12-14

*** prometheanfire has joined #openstack-ansible00:20
*** tosky has quit IRC00:36
*** cshen has joined #openstack-ansible01:45
*** cshen has quit IRC01:49
ThiagoCMCNever stop! OpenStack is fun!  =P02:22
ThiagoCMCVictoria this week?!  lol02:22
*** dave-mccowan has joined #openstack-ansible02:56
*** cshen has joined #openstack-ansible03:45
*** cshen has quit IRC03:50
*** akahat is now known as akahat|ruck04:11
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-ansible05:33
*** cshen has joined #openstack-ansible05:44
*** cshen has quit IRC05:48
*** cshen has joined #openstack-ansible06:05
*** cshen has quit IRC06:09
*** cshen has joined #openstack-ansible06:12
*** cshen has quit IRC06:17
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_zun master: Update zun role to match current requirements  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/76314106:26
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_zun master: Update zun role to match current requirements  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/76314106:31
*** kukacz has quit IRC06:34
*** pto has joined #openstack-ansible06:38
*** pto has quit IRC06:38
openstackgerritMerged openstack/openstack-ansible-tests stable/ussuri: Bump virtualenv to version prior to 20.2.2  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/76680106:46
*** kukacz has joined #openstack-ansible06:57
*** pcaruana has joined #openstack-ansible07:50
*** pcaruana has quit IRC07:51
*** masterpe has quit IRC08:16
*** gundalow has quit IRC08:16
*** tbarron has quit IRC08:16
*** cshen has joined #openstack-ansible08:17
*** johanssone has quit IRC08:19
*** andrewbonney has joined #openstack-ansible08:20
*** gundalow has joined #openstack-ansible08:22
*** tbarron has joined #openstack-ansible08:22
*** johanssone has joined #openstack-ansible08:23
*** rpittau|afk is now known as rpittau08:27
*** tosky has joined #openstack-ansible08:38
*** akahat|ruck is now known as akahat|lunch09:08
noonedeadpunkmornings09:10
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: [DNM]  https://review.opendev.org/c/openstack/openstack-ansible/+/76690109:15
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_zun master: Update zun role to match current requirements  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/76314109:17
*** macz_ has joined #openstack-ansible09:20
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Apply /etc/environment for runtime after adjustment  https://review.opendev.org/c/openstack/openstack-ansible/+/76679809:21
noonedeadpunkjrosser: regarding security.txt - you decided to have both for keystone and haproxy?09:23
*** macz_ has quit IRC09:24
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_keystone stable/ussuri: Move openstack-ansible-uw_apache centos job to centos-8  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/76592809:25
jrossernoonedeadpunk: it's keystone apache/nginx that serves the actual file09:38
jrosserbut intervention is needed on haproxy to intercept https://example.com:443/security.txt to the backend which is normally listening on port 500009:39
* jrosser double checks the patch09:41
noonedeadpunkah, ok09:57
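The interception jrosser describes could be sketched as a frontend rule like the one below. This is a hedged illustration, not the merged patch: the backend name and the .well-known path are assumptions, and it is written to /tmp rather than the live haproxy config.

```shell
# hypothetical sketch of routing /security.txt on the public VIP to the
# keystone backend (which normally listens on port 5000 behind haproxy)
cat > /tmp/security-txt.cfg <<'EOF'
frontend keystone_service-front
    acl security_txt path /security.txt /.well-known/security.txt
    use_backend keystone_service-back if security_txt
EOF
cat /tmp/security-txt.cfg
```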
jrossergit breakage on train bump upgrade job, python_venv_build repo fatal: reference is not a tree: 74d3eeacc72d5d6bb7a915e83440626a8d16a1c010:07
jrosserthat is so weird10:08
*** gshippey has joined #openstack-ansible10:11
*** akahat|lunch is now known as akahat|ruck10:32
*** sshnaidm|off has quit IRC10:47
*** SecOpsNinja has joined #openstack-ansible11:06
noonedeadpunkandrewbonney: ok, so seems focal just fails with kuryr from victoria11:09
noonedeadpunkseems it's missing some other backport to victoria11:09
noonedeadpunkoh, sorry pinged too early - it's still on the passing tempest step :911:10
andrewbonney:)11:10
noonedeadpunkthat was bionic that passed11:10
andrewbonneyI've got an AIO going so I can always debug further11:10
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Remove *_git_project_group variables  https://review.opendev.org/c/openstack/openstack-ansible/+/76603911:11
noonedeadpunkI've returned the patch to the state of patchset 10, which was your last one11:12
*** sshnaidm has joined #openstack-ansible11:17
andrewbonneynoonedeadpunk: timing looks suspicious for the release of https://docs.docker.com/engine/release-notes/#20100. I'm investigating...11:49
noonedeadpunkoh, well, it really does...11:49
noonedeadpunkdo we add the docker repo? as I'm not sure ubuntu would just publish the latest....11:50
andrewbonneyYeah, the ubuntu ones tend to be a long way behind11:50
noonedeadpunkif we add the repo, probably it's worth using the apt_package_pinning role11:51
andrewbonneyI'll take a look at that once I can confirm a downgrade fixes the test11:52
noonedeadpunkgood example I guess in rabbit https://opendev.org/openstack/openstack-ansible-rabbitmq_server/src/branch/master/tasks/install_apt.yml#L16-L3011:55
andrewbonneyThanks. That definitely fixes it so I'll add a pin11:58
noonedeadpunkand drop depends on I've added then :)11:59
andrewbonneyWill do11:59
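The pin andrewbonney plans to add could look roughly like the apt preferences below. This is a hedged sketch: in OSA it would go through the apt_package_pinning role as in the linked rabbitmq example, the package names and version glob are assumptions, and it is written to /tmp rather than /etc/apt/preferences.d/.

```shell
# hypothetical pin holding docker below the 20.10 release that broke the job
cat > /tmp/docker-pin.pref <<'EOF'
Package: docker-ce docker-ce-cli containerd.io
Pin: version 5:19.03*
Pin-Priority: 1001
EOF
cat /tmp/docker-pin.pref
```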
*** rfolco has joined #openstack-ansible12:03
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: [doc] Adjut octavia docs  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/76683312:10
openstackgerritMerged openstack/openstack-ansible-os_masakari master: Add taskflow connection details  https://review.opendev.org/c/openstack/openstack-ansible-os_masakari/+/76683012:24
openstackgerritMerged openstack/openstack-ansible-os_octavia master: Delegate info gathering to setup host  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/76669312:41
openstackgerritMerged openstack/openstack-ansible-os_octavia master: Trigger service restart on cert change  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/76606212:41
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: [doc] Adjut octavia docs  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/76683312:53
openstackgerritMerged openstack/openstack-ansible-os_keystone master: Remove centos-7 conditional packages  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/76593112:58
openstackgerritMerged openstack/openstack-ansible-openstack_hosts master: Make CentOS 8 metal voting again  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/76642512:58
openstackgerritMerged openstack/openstack-ansible master: Bump SHAs for master  https://review.opendev.org/c/openstack/openstack-ansible/+/76685813:03
openstackgerritAndrew Bonney proposed openstack/openstack-ansible-os_zun master: Update zun role to match current requirements  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/76314113:11
admin0quick question ..  when using sr-iov, is it transparent in horizon ?13:14
admin0i mean can a user create instances as normal and it will get sr-iov ports13:14
*** mgariepy has joined #openstack-ansible13:18
*** redrobot has quit IRC13:29
jrosseradmin0: an sriov vm is always two steps - create the port, create the vm attached to the port13:30
jrosserso it's not the same as a non-sriov case13:31
admin0i want to be able to give users horizon and not do support in the office .. so looking for something easy for me to explain to users13:35
jrosserwell it is what it is, you can write instructions for how to do this with horizon, but if i remember right it is different to a regular vm13:37
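The two-step flow jrosser describes might look like this with the openstack CLI. The network, flavor and image names are placeholders, and since the commands need a deployed cloud they are wrapped in a function rather than run directly.

```shell
# hedged sketch of booting an SR-IOV instance in two steps
create_sriov_vm() {
  # step 1: create the SR-IOV port (vnic-type direct maps the port to a VF)
  openstack port create --network provider-vlan100 --vnic-type direct sriov-port0
  # step 2: boot the instance attached to that pre-created port
  openstack server create --flavor m1.small --image ubuntu-20.04 \
    --port sriov-port0 sriov-vm0
}
type create_sriov_vm
```

This is the part that differs from a regular VM: horizon users normally skip straight to `server create` and let nova/neutron make a normal port for them.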
admin0and i also read the sr-iov can be used with linuxbridge also .. no need for ovs13:38
admin0if you personally had a choice for a new greenfield with both ovs and sr-iov, lb as well as ovs, what would you recommend .. (for an internal cloud with 1000+ users), so trying to keep support and complexity to a minimum13:39
admin0and also, if the card support sr-iov, dpdk, is it not a good idea to use it ?13:39
jrosserthey are not the same, so if you want line speed networking to your VM then sriov is one way to do that13:39
jrosserbut if you want security groups, or vxlan, and all the other stuff, then you want linuxbridge/ovs13:40
admin0can both co-exist13:40
jrosseryes13:40
admin0like normally people will get ovs/lb .. but if they want very fast, do the sr-iov stuff13:40
jrossergenerally the recommendation is to have a dedicated nic for sriov13:40
admin0oh13:40
admin0so 2 diff vlan providers ..13:41
jrosserwell you don't have to, but you mix up a lot of things13:41
admin0one for sr-iov, one for normal13:41
admin0that is good to know as well13:41
admin0so i will go with regular lb for now for this 10g .. . and later, add another 10g and dedicate it for sr-iov .. I do not need ovs at all right ?13:43
admin0one more question .. osa does support mixed hypervisors ? like one using lb and one using ovs ?13:46
admin0this specific use case might see up to 200 (small) instances in a single hypervisor .. so trying to figure out at what point lb will be a bottleneck13:47
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Remove *_git_project_group variables  https://review.opendev.org/c/openstack/openstack-ansible/+/76603913:52
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Remove *_git_project_group variables  https://review.opendev.org/c/openstack/openstack-ansible/+/76603913:55
*** dave-mccowan has quit IRC13:55
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Remove openstack_testing.yml for RC  https://review.opendev.org/c/openstack/openstack-ansible/+/76695713:56
*** spatel has joined #openstack-ansible13:58
*** mgariepy has quit IRC14:06
*** mgariepy has joined #openstack-ansible14:06
jrosserspatel: is that a discussion for #rdo or really for here?14:12
spatelsorry i didn't realize i was in RDO :(14:14
spatelwhat is your thought on that?14:14
noonedeadpunkI think we will probably try to switch to RDO but as you might guess, there're no guarantees with CentOS these days...14:15
spatelnoonedeadpunk: that is why i am worried14:15
noonedeadpunkthere will be also Cloudlinux forks of CentOS14:15
spatelright now i have a choice to make; after 1 year i won't14:15
noonedeadpunkbut yeah...14:15
jrosseri did a Centos 8 Stream AIO this morning14:16
noonedeadpunkeventually even cPanel started development for Ubuntu and promises to release by the end of 202114:16
noonedeadpunkI'm pretty sure it just worked :)14:16
spateljrosser: what is your experience14:16
jrosserright now i see this Transaction test error:\n  file /usr/share/man/man7/systemd.net-naming-scheme.7.gz from install of systemd-239-43.el8.x86_64 conflicts with file from package systemd-networkd-246.6-1.el8.x86_6414:17
jrosserand i just put my head in my hands and sigh14:17
spateljrosser: damn it14:17
noonedeadpunkoh, rly ?14:17
noonedeadpunkcome on....14:17
openstackgerritMarc Gariépy proposed openstack/openstack-ansible-haproxy_server master: Add haproxy_frontend_only and haproxy_raw feature.  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/76650414:17
spatelI am giving second thought of ubuntu14:17
noonedeadpunkgreat CI they told14:17
spatelDebian is good but worried about hardware support14:17
noonedeadpunkthings won't be broken any more they said14:18
jrosserwell this is because stream has a newer systemd than the one in EPEL where we get the networkd bit from14:18
mgariepymorning everyone14:18
jrosseroh also amusingly in ansible you cannot differentiate between centos 8.x and Centos 8 stream14:18
spatelstream will take rolling upgrades from fedora so definitely they will get updated more frequently14:18
jrosserbecause version = "8"14:18
jrosserso as far as ansible facts is concerned its older in a version compare than 8.314:19
jrosserwhich breaks what we just merged for the kernel module renaming14:19
noonedeadpunk┻━┻︵ \(°□°)/ ︵ ┻━┻14:19
jrosseri think we have to grep in /etc/redhat-release and set a local fact14:20
jrosseroh wait14:22
noonedeadpunkor, we can just say that from the next release centos 8 is not supported and only stream is, which sucks. but leave the regular centos bits for future forks of centos...14:22
jrosserweirdly it's installed systemd-networkd from epel fine14:23
jrosseri wonder if it tries to do it one more time in lxc_hosts and that's blowing up14:23
jrosseri would like to treat it like a totally different distro14:23
openstackgerritMerged openstack/openstack-ansible-os_keystone master: Add security.txt file hosting to keystone  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/76643714:24
jrosseralready it's obvious that all our version detection stuff is just wrong14:24
noonedeadpunkI'm wondering if it has some difference in ansible_distribution_release or smth...14:28
jrosseri could not find anything to drive centos(classic) vs centos(stream) logic14:32
noonedeadpunkalso looking through a similar thread on their forum....14:34
noonedeadpunkand no solution there14:34
noonedeadpunkhow frustrating14:35
openstackgerritMerged openstack/openstack-ansible stable/ussuri: Apply /etc/environment for runtime after adjustment  https://review.opendev.org/c/openstack/openstack-ansible/+/76679814:37
spatelFolk, I have decided to rebuild my openstack using Ubuntu14:37
mgariepyspatel, ;)14:37
openstackgerritLinhui Zhou proposed openstack/openstack-ansible-os_magnum master: Replace deprecated UPPER_CONSTRAINTS_FILE variable  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/76205714:37
noonedeadpunk☜(⌒▽⌒)☞14:37
spatelI talked to my team and they gave me a thumbs up for ubuntu14:37
noonedeadpunklol14:38
spatelNo centOS hacks anymore14:38
noonedeadpunkso I'm starting to wonder - will there be anybody interested in centos in half a year?14:38
spatelI am thinking for Debian but little worried14:38
noonedeadpunkDebian is good imo14:38
spatelworried about hardware support14:38
*** mgariepy has quit IRC14:39
spatelUbuntu is more popular in openstack community (very well known)14:39
noonedeadpunkused to work for me previously14:39
spatelI didn't see anyone using Debian in production14:39
jrosser \o/ there is no tar for a container rootfs whatsoever https://cloud.centos.org/centos/8-stream/x86_64/images/14:39
admin0:)14:40
*** cshen has quit IRC14:41
noonedeadpunksome infra folks do at least (like fungi) and vexxhost used to run it as well14:41
spatelright now they are using ubuntu right?14:42
noonedeadpunkI guess still debian14:42
spatelHmm Let me try both and see how it goes.14:43
noonedeadpunkjrosser: I'm just speechless14:43
spatelgood thing is, moving forward you won't hear anything from me about CentOS :)14:43
jrosserit's kind of run this far though with just a couple of minor edits14:44
jrosserbut really this i don't know what to do14:44
noonedeadpunkspatel: we have worse CI coverage for debian though, but it's pretty much similar to ubuntu... So you might see issues but nothing serious and smth we totally should fix (and maybe add more tests)14:44
noonedeadpunkjrosser: well, we always have lxcontainers and legacy method14:44
jrosserthis is the lxc prep log http://paste.openstack.org/show/801012/14:45
noonedeadpunkbut last time I saw a really huge performance degradation14:45
jrosserperhaps the prep script runs the command to convert centos->centos stream14:45
jrosserbut we kind of only get one year out of that whichever way :(14:45
*** mgariepy has joined #openstack-ansible14:56
admin0chances also are, like how centos came to be (a downstream distro), people might just fork it and continue to make it a downstream distro15:14
admin0it will be the same, just in another name15:14
admin0which has happened to many projects in the past when decisions like this has been taken15:15
spatelI just download Ubuntu Server 20.04.1 LTS  (first time in my life)15:16
admin0spatel, \o/ yay15:18
spatelI need to setup PXE boot first to fire up my servers15:18
*** cshen has joined #openstack-ansible15:26
*** macz_ has joined #openstack-ansible15:37
*** macz_ has joined #openstack-ansible15:38
kleiniubuntu server is great, never had big issues with it. especially ZFS support in ubuntu solved my problems with filesystems getting too fragmented over time in production systems15:45
SecOpsNinjahi everyone. one quick question: is there an easy way to recreate queues in rabbitmq? im getting "nova-scheduler: amqp.exceptions.NotFound: Queue.declare: (404) NOT_FOUND - queue 'scheduler_fanout_*' in vhost '/nova' process is stopped by supervisor" which is the cause of the Connection failed: [Errno 113] EHOSTUNREACH (retrying in 32.0 seconds): OSError: [Errno 113] EHOSTUNREACH in nova-conductor.15:45
openstackgerritMerged openstack/ansible-role-systemd_service master: Use upper-constraints for all tox environments  https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/76583115:55
SecOpsNinjayep i confirm the queues exist but they don't have any messages in them... what could be the problem? the compute node not being able to connect to rabbitmq?15:57
spatelSecOpsNinja: i had same issue and re-building rabbitMQ helped - https://bugs.launchpad.net/nova/+bug/183563716:03
openstackLaunchpad bug 1835637 in OpenStack Compute (nova) "(404) NOT_FOUND - failed to perform operation on queue 'notifications.info' in vhost '/nova' due to timeout" [Undecided,Incomplete]16:03
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: [doc] Adjust octavia docs  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/76683316:03
spatelRabbitMQ is much easier to re-build than troubleshoot16:03
admin0SecOpsNinja, you can nuke the 3 rabbitmq containers and re-do it .. it will add the queues and fix itself16:05
admin0based on your build time, some agents might not retry, so you might have to locate them and manually restart the services16:05
noonedeadpunkit's way faster just to rerun `rabbitmq-install.yml -e rabbitmq_upgrade=true`16:06
noonedeadpunkat least I'd start with that in case of suspected issues with rabbit16:07
SecOpsNinjabut from what im seeing the queue exists in /nova but it doesn't have any messages; now i don't know if the problem is a connectivity one from the compute node or from nova-api regarding the rabbitmq cluster16:09
SecOpsNinjastill trying to find a way to see who is connected to which queue and see if i can find the problem16:10
spatelSecOpsNinja: tcpdump will give you idea if anything hitting RabbitMQ or not16:17
spatelRabbitMQ is complex sometime internal message routing is broken also cause issue and not visible until you debug components16:18
SecOpsNinjaim checking /var/log/rabbitmq/*cf.log after rebooting this container and seeing who is connected16:18
SecOpsNinjabut yeh atm i can't create vms because they get stuck in scheduling forever16:19
spatelSecOpsNinja: use RabbitMQ GUI management interface which is easy to understand who is connected and where16:19
openstackgerritMerged openstack/openstack-ansible-tests master: Return centos-8 jobs to voting  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/76598616:19
SecOpsNinjaspatel, what that GUI? can you give the url? im using the cli rabbitmqctl16:20
admin0anyone using netplan for declaring ovs setup on ubuntu 20  for osa ?16:20
admin0last i tried was in 18.04, but netplan was new and there was no ovs support on it16:20
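A partial answer to admin0's question, as a hedged sketch: newer netplan (>= 0.100, available on Ubuntu 20.04) grew an `openvswitch` key, so an OVS provider bridge can be declared directly. The interface and bridge names are assumptions, and it is written to /tmp rather than /etc/netplan for illustration.

```shell
# hypothetical netplan fragment declaring an OVS bridge on a spare NIC
cat > /tmp/ovs-example.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eno2: {}
  bridges:
    br-provider:
      interfaces: [eno2]
      openvswitch: {}
EOF
cat /tmp/ovs-example.yaml
```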
spatelSecOpsNinja: https://www.rabbitmq.com/management.html16:20
spatelThe management UI can be accessed using a Web browser at http://{rabbitmq_container_ip}:15672/16:21
spatelyou may need to do some kind of SSH port forwarding if container network not accessible from your desktop16:21
SecOpsNinjaspatel,  ok thanks :D16:21
spatelSecOpsNinja: you can find UI password from cat /etc/openstack_deploy/user_secrets.yml | grep rabbitmq_monitoring_password16:23
SecOpsNinjai suppose the username is admin?16:23
spatelusername monitoring16:23
SecOpsNinjaok thanks16:23
SecOpsNinjawill check it now16:24
SecOpsNinjato see if i can understand what is happening16:24
admin0what i do is use firefox and foxyproxy with patterns like *172.29.236.* via a socks port, say 17221 .. then, via ssh, do ssh user@deploy/or-any-server -D 17221 (which opens a socks tunnel on 17221)16:26
admin0then you can browse/reach any IP that the server you are doing an ssh to reaches16:26
SecOpsNinjayep i normally use the ssh tunnel but in this case im on the same management network so it's not a problem. but yes the GUI makes it a lot easier to see the connections :D16:27
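admin0's tunnel recipe, recapped as commands. The deploy host name is a placeholder; the ports match the ones mentioned above.

```shell
# -D opens a dynamic SOCKS proxy on localhost; point FoxyProxy at it with a
# pattern like *172.29.236.* so only mgmt-network URLs use the tunnel
SOCKS_PORT=17221
DEPLOY_HOST=deploy.example.com   # placeholder
echo "ssh -D ${SOCKS_PORT} -N ${DEPLOY_HOST}"
# a plain port-forward also works for a single UI such as rabbitmq mgmt:
echo "ssh -L 15672:<rabbitmq_container_ip>:15672 ${DEPLOY_HOST}"
```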
spatelI think OSA should expose rabbitmq monitoring to the external network using HAProxy :)  let me tag jrosser & noonedeadpunk16:27
*** fanfi has joined #openstack-ansible16:28
admin0should be via a user_variable16:28
spateli am not seeing any security issue in exposing that port because it's a read-only account with a password16:30
spatelI love SSH tunnel stuff but it's hard to teach every person, especially NOC people..16:31
noonedeadpunkthe problem with just the monitoring user is that only really very limited metrics can be gathered with it16:32
noonedeadpunkI usually put the admin tag on it to make a full-privilege user to gather all available data... but dunno about security...16:32
spatelnoonedeadpunk: we can give more privilege16:32
noonedeadpunkrabbit runs on mgmt network which should not be exposed16:32
spatelnoonedeadpunk: question is can we expose it via HAproxy or not?16:32
noonedeadpunkah16:33
spatelI want to just type http://openstack.example.com:<rabbit_port>/ on my browser16:33
spatelwithout any SSH tunnel hacks16:33
noonedeadpunkI think you can just do it with haproxy_extra_services16:34
spateli didn't know that16:34
spatelcan we add that example snippet in RabbitMQ troubleshooting page of OSA documents?16:34
spateli meant at this page - https://docs.openstack.org/openstack-ansible/pike/admin/maintenance-tasks/rabbitmq-maintain.html16:35
noonedeadpunkI don't have an example on my hands... but it would be pretty much the same as https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/haproxy/haproxy.yml#L64-L7416:35
noonedeadpunkI think that would be more proper place for this kind of doc https://docs.openstack.org/openstack-ansible-rabbitmq_server/latest/configure-rabbitmq.html16:36
spatelI will test that out in lab and if everyone agreed then put example in that link16:36
noonedeadpunkbut maybe your link is good too...16:37
noonedeadpunkas eventually it's really maintenance...16:37
spatelwill do there, i want all possible hacks to fix RabbitMQ in single page :)16:38
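The `haproxy_extra_services` entry noonedeadpunk suggests could be sketched like this, modelled on the repo example linked above. The key names follow the haproxy_server role, but the exact values are assumptions and untested; written to /tmp rather than /etc/openstack_deploy/user_variables.yml for illustration.

```shell
# hypothetical user_variables fragment exposing the rabbitmq mgmt UI (15672)
cat > /tmp/rabbitmq-mgmt-haproxy.yml <<'EOF'
haproxy_extra_services:
  - service:
      haproxy_service_name: rabbitmq_mgmt
      haproxy_backend_nodes: "{{ groups['rabbitmq_all'] | default([]) }}"
      haproxy_port: 15672
      haproxy_balance_type: http
      haproxy_backend_options:
        - "httpchk"
EOF
cat /tmp/rabbitmq-mgmt-haproxy.yml
```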
SecOpsNinjayep i think i will do what noonedeadpunk suggested and try running rabbitmq-install.yml -e rabbitmq_upgrade=true and see if it resolves the EHOSTUNREACH...16:38
admin0another check will be to actually curl/ping and see if it's a network issue and not rabbit16:39
SecOpsNinjain logs and haproxy i dont see any connection drops16:40
noonedeadpunkrabbit does not go through haproxy by the way16:41
spatelIn my last 3 years openstack operation i found re-building RabbitMQ fixed all kind of issue (even all monitoring showing green and cluster looking healthy).16:41
noonedeadpunkbut I'd rather run this playbook tbh16:41
noonedeadpunkit never made things worse at least for me16:42
spatelnoonedeadpunk: do you reset cluster (clear mnesia directory) before running that playbook?16:42
SecOpsNinjanoonedeadpunk, i was checking the health checks in haproxy for the rabbitmq containers to see if they drop anything, but yep, the majority of errors i have are always something in rabbitmq... ok i will rerun openstack-ansible and see if it resolves the problem or not16:43
noonedeadpunkspatel: nope, just run it :)16:43
spatelI found if you have dirty mnesia directory then rabbitMQ start to fail and playbook get stuck16:44
spatelbut again its case to case..16:44
noonedeadpunkHm, maybe... I just never faced that, but I can imagine that happening tbh16:44
noonedeadpunkand I never run on centos, so...16:44
noonedeadpunk(well actually ran but it was not so many times as for ubuntu)16:45
spatelmay be depend on what state your cluster die16:45
noonedeadpunkwell yeah16:45
noonedeadpunkI mostly experienced issues with one controller re-joining the cluster after an outage16:45
SecOpsNinjaspatel, noonedeadpunk regarding upgrading rabbitmq: are these [req-*] identifiers going to be reset when openstack-ansible finishes, or is there any way i can reset this behaviour?16:49
spatelSecOpsNinja: I don't understand your question (what is req-*?)16:51
admin0SecOpsNinja, those req-s are going to be lost16:51
admin0coz the new db will have no idea of the request16:51
admin0request-id16:51
SecOpsNinjanova-conductor[449]: 2020-12-14 16:48:16.438 449 ERROR oslo.messaging._drivers.impl_rabbit [req-4901b480-6728-4b58-994f-8ed141e7898e - - - - -] Connection failed: [Errno 113] EHOSTUNREACH (retrying in 32.0 seconds): OSError: [Errno 113] EHOSTUNREACH still showing after openstack-ansible rabbitmq-install.yml -e rabbitmq_upgrade=true16:52
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_ceilometer master: Remove centos-7 conditional configuration  https://review.opendev.org/c/openstack/openstack-ansible-os_ceilometer/+/76595616:52
SecOpsNinjayep the rabbitmq cluster shouldn't know this request id but the clients are still expecting an answer to it16:52
SecOpsNinjathat is why i asked how to reset this information from the consumers. i already deleted the server in openstack but nova-conductor is still expecting an answer to that previous request id16:53
spatelnot sure if openstack-ansible rabbitmq-install.yml -e rabbitmq_upgrade=true re-builds the cluster from scratch (like delete all)16:54
admin0it will timeout and not complain after a while16:54
spatelthe queue has a TTL and it will die after the TTL expires16:54
SecOpsNinjabecause i still have the 2 request ids from almost 5h ago and it's still complaining :D16:54
spatelYou can delete that message manually too (need to google or use the UI to delete those requests)16:55
spatelnoonedeadpunk: question for you: does openstack-ansible rabbitmq-install.yml -e rabbitmq_upgrade=true destroy the cluster and re-build it like *new*?16:56
SecOpsNinjaok i will try to find a way to delete that because the queues are empty of messages16:56
noonedeadpunkpretty close to this17:04
noonedeadpunkyes17:04
noonedeadpunkit drops queues, and rebuilds cluster17:04
spatelnoonedeadpunk: Does it preserve data during re-build because its in HA17:05
noonedeadpunkexcept it does not drop already created users, vhosts and some more of the persistent data17:05
noonedeadpunkbut it does drop all messages that were there17:06
spatelthat is why SecOpsNinja's req-* is still in the queue (because it's preserved)17:06
spatelhmm17:06
noonedeadpunk(well I'm not 100% sure about that)17:06
spatelI believe if it's in HA then it will preserve data in the queue (i would like to try that out)17:07
*** jbadiapa has joined #openstack-ansible17:08
noonedeadpunkEHOSTUNREACH ofc sounds more like networking... are you able to telnet to 5671 port to all rabbitmq containers from nova-api one?17:09
jrosserlooking at what nova-conductor is trying to connect to with strace -p <pid> then ping / check routes / telnet to whatever its trying to connect to is a good plan for these situations17:15
jrosseryou'll see the actual IP it's trying like that17:15
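The checks noonedeadpunk and jrosser describe, sketched as commands. The container IPs are placeholders; 5671 is the AMQP-over-TLS port mentioned above. Since the probes need a live deployment, they are wrapped in a function rather than run directly.

```shell
# probe the TLS AMQP port on each rabbitmq container from the nova-api one
check_rabbit() {
  for host in "$@"; do
    # nc -z exits 0 if the port accepts a connection
    nc -z -w 3 "$host" 5671 && echo "$host ok" || echo "$host unreachable"
  done
}
# usage on a deployment (placeholder IPs):
#   check_rabbit 172.29.239.11 172.29.239.12 172.29.239.13
# then watch which IP the stuck service actually dials:
#   strace -f -e trace=connect -p "$(pgrep -of nova-conductor)"
type check_rabbit
```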
*** mgariepy has quit IRC17:19
openstackgerritMerged openstack/openstack-ansible-repo_server master: Fix order for removing nginx file.  https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/76625717:35
SecOpsNinjanoonedeadpunk, spatel and jrosser yep i have a rabbitmq cluster with 3 nodes. and i see that after recreating it the queues in the /nova vhost are still the same17:36
SecOpsNinjai will try to do that with strace and see if i can find it because i dont see any log of dropped connections on the rabbitmq nodes17:37
admin0anyone doing netplan+ovs -- can share config ?17:37
SecOpsNinjathe only way to stop the ERROR oslo.messaging._drivers.impl_rabbit in nova-scheduler and nova-conductor was restarting the systemd service. going to strace both pids and make the request to create a new server and see what happens17:38
*** johanssone has quit IRC17:45
*** johanssone has joined #openstack-ansible17:47
*** rpittau is now known as rpittau|afk17:56
*** spatel has quit IRC17:57
*** maharg101 has quit IRC17:58
*** spatel has joined #openstack-ansible17:59
SecOpsNinjajrosser, one question regarding strace: if using it on the parent process of nova-scheduler or nova-conductor i only see something like select(0, NULL, NULL, NULL, {tv_sec=0, tv_usec=9973}) = 0 (Timeout). how should i use strace?18:02
*** carlosm has joined #openstack-ansible18:03
carlosmhi guys18:03
SecOpsNinjafrom what im able to see in the logs, the moment i try to create a new server using the cli, i see the validation of /v2.1/flavors/ and after that i get uwsgi[72]: Mon Dec 14 17:47:22 2020 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /v2.1/servers (ip of the host) !!!18:03
SecOpsNinjaand 2 seconds later nova and the scheduler reconnect and start giving EHOSTUNREACH errors...18:05
carlosmMy neutron has the following errors, does someone know?: Device brq3c0d52cf-11 cannot be used as it has no MAC address18:05
admin0SecOpsNinja, have you tried rebooting this host again :)18:12
SecOpsNinjayep various times, including the nova and scheduler containers18:13
SecOpsNinjaim now trying to strace the nova-api-wsgi pid to see what causes the A recoverable connection/channel error occurred, trying to reconnect: Server unexpectedly closed connection18:14
*** mgariepy has joined #openstack-ansible18:20
spatelSecOpsNinja: just curious what your tcpdump saying? it should give you all the information18:20
SecOpsNinjaok i do see some connections from the host in the strace of the uwsgi pid (/etc/uwsgi/nova-api-os-compute.ini) getting ECONNRESET (Connection reset by peer), and i see an error regarding "HTTP exception thrown: Flavor basic-small could not be found" even though it shows the flavor as public18:21
SecOpsNinjalet me try again18:22
SecOpsNinjaspatel, trying to reduce the quantity of messages because tcpdump -i the1 inside the nova-api container gets a lot of info18:26
spatelyou need to filter for port and just grab 1 call to trace and see start to finish18:33
spatelrabbitMQ use TCP so it will keep connect in Established mode (so you won't see any SYN/ACK)18:34
SecOpsNinjahow i do the trace with just one package?18:34
spateldownload pcap and use wireshark18:34
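The narrower capture spatel suggests could look like this. The interface name is an assumption, and the ports are the standard AMQP ones (5671 TLS / 5672 plain). The command needs root inside the container, so it is shown rather than executed.

```shell
# capture only AMQP traffic to a pcap that wireshark can open later
IFACE=eth1            # assumption: container's mgmt-network interface
PCAP=/tmp/rabbit.pcap
echo "tcpdump -i ${IFACE} -nn -w ${PCAP} 'port 5671 or port 5672'"
```

Following one established connection start-to-finish in wireshark ("Follow TCP Stream") is usually enough to see who closed it.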
spatelhow many compute nodes you have?18:35
SecOpsNinja318:36
SecOpsNinjaand 3 infra ones where i have 1 node of rabbitmq18:36
SecOpsNinjabut atm im seeing another error that could eb the problem (or at least narro it)18:36
SecOpsNinjahttp://paste.openstack.org/show/801020/18:38
SecOpsNinjathis part is strange GET /v2.1/flavors/basic-small18:38
SecOpsNinjabecause openstack flavor show basic-small works18:39
SecOpsNinja"GET /v2.1/flavors/basic-small" status: 404 but "GET /v2.1/flavors?is_public=None" status: 200 ?  but the flavor does have os-flavor-access:is_public : True18:41
SecOpsNinjain meantime i will try to do a pcap and use it with wireshark18:41
SecOpsNinjabecause im still learning about lxc: is there any way to copy files from inside the containers to the host?18:42
SecOpsNinjaforget the last question lol....18:42
spatelSecOpsNinja: did you see this - https://ask.openstack.org/en/question/32360/networking-issues-errno-113-ehostunreach/18:44
SecOpsNinjalet me check thaht18:45
spatelcopy file /var/lib/lxc/<contrainer_name>/rootfs/....18:46
spateli do copy in/out mostly using that path never did scp from host to container :)18:46
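spatel's path trick, sketched out: an LXC container's rootfs is just a directory on the host, so plain cp works in both directions. The container name and file are placeholders; the commands need a real deployment, so they are shown rather than executed.

```shell
# the rootfs of an LXC container lives under /var/lib/lxc/<name>/rootfs
CONTAINER=aio1_nova_api_container-12345678   # hypothetical container name
ROOTFS=/var/lib/lxc/${CONTAINER}/rootfs
# copy a file out of the container to the host:
echo "cp ${ROOTFS}/tmp/rabbit.pcap /tmp/"
# or run a command inside the container instead:
echo "lxc-attach -n ${CONTAINER} -- ls /tmp"
```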
SecOpsNinjasorry i dont understand that question, because from what i understand the compute is not able to connect to any service, the nova service log on the compute doesnt report anything, and the nova api only reports connection drops after specific calls18:47
SecOpsNinjaatm if i check, all the services show as up and running18:47
SecOpsNinjaspatel,  thanks for the cp path i normaly did scp18:48
spatelthink LXC container like folders :)18:48
SecOpsNinjaim a bit lost atm because the openstack services all show up and running, and only after a specific request i see nova-conductor/nova-scheduler reconnecting after a few seconds, but i cant find why the connection is dropping...18:50
SecOpsNinjato rabbitmq18:50
*** openstackgerrit has quit IRC18:50
admin0is it recommended to change qcow2 to raw if using ceph for cinder/glance/vms ?18:55
admin0for the image18:55
SecOpsNinjai dont know why, but the 404 in uwsgi of nova-api is causing the connection failure from rabbitmq, as you can see here http://paste.openstack.org/show/801021/18:56
*** gyee has joined #openstack-ansible18:58
SecOpsNinjaand that 172.30.0.2 is the primary ip of the haproxy, so all the requests i make with the openstack client from outside go via the haproxy ip and not mine18:59
jrosserSecOpsNinja: using internal/public would help as those are the terms in the code19:00
SecOpsNinjabut the internal and public endpoints are managed by haproxy19:01
jrosseri struggle to follow primary/outside19:01
jrosseradmin0: yes for glance images in ceph you should convert to raw19:01
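jrosser's advice for admin0 (convert qcow2 images to raw before uploading to a ceph-backed glance) can be sketched as below; the image filenames are hypothetical, and the block is guarded so it is a no-op without the tooling:

```shell
# Sketch: convert a qcow2 image to raw for ceph-backed glance, per jrosser.
# IMG is a hypothetical filename; qemu-img comes from the qemu-utils package.
IMG=focal-server-cloudimg-amd64
if command -v qemu-img >/dev/null 2>&1 && [ -f "$IMG.qcow2" ]; then
    qemu-img convert -f qcow2 -O raw "$IMG.qcow2" "$IMG.raw"
    # then upload with --disk-format raw, e.g.:
    # openstack image create --disk-format raw --container-format bare \
    #     --file "$IMG.raw" focal-raw
else
    echo "qemu-img or $IMG.qcow2 not present; skipping conversion"
fi
```

raw avoids a qcow2-to-raw conversion (or a flatten) every time ceph clones the image for a new volume or vm, which is why it is the usual recommendation with rbd.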
SecOpsNinjasorry, not haproxy but keepalived, but the public endpoints are using the vips19:02
SecOpsNinjathe public and private endpoints, as i had various haproxys19:03
SecOpsNinjai now only have 1 but im still using the vip so i dont have to reconfigure the whole cluster19:03
SecOpsNinjai will try to strace all the forked pids of uwsgi in the nova container to see if i can catch the connection, but strace is a bit unknown to me atm...19:05
*** openstackgerrit has joined #openstack-ansible19:05
openstackgerritMerged openstack/openstack-ansible master: Remove *_git_project_group variables  https://review.opendev.org/c/openstack/openstack-ansible/+/76603919:05
spateladmin0: use raw for ceph storage19:10
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-haproxy_server master: Add haproxy_frontend_only and haproxy_raw feature.  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/76650419:10
spatelmost people say it boosts performance (I personally never experienced that, so going with best practices)19:11
spatelnova talks directly to rabbitMQ (not via haproxy)19:13
spatelSecOpsNinja: ^19:13
spatelhaproxy shouldn't come into the picture for troubleshooting rabbitmq communication19:14
SecOpsNinjaspatel,  yep but the info that i have is "SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /v2.1/servers/d8508991-78d5-45e3-a7a2-77ca8c11aba0 (ip 172.30.0.2) !!!" and 172.30.0.2 is from the physical host, not the nova-api or haproxy containers, so i suppose that info says my openstack client cli dropped the connection, but that shouldn't cause the19:16
SecOpsNinjanova api to lose its connection to rabbitmq19:16
SecOpsNinjaand tcpdump doesnt show info regarding what/who dropped the connection19:17
SecOpsNinjai suppose that all the rabbitmq consumers are always connected to the various rabbitmq cluster nodes, so there must be something causing nova-scheduler and nova-conductor to reconnect19:17
spatelMake sure no MTU mismatch and no packetloss19:18
SecOpsNinjabecause they are the only ones that reconnect after the failed api call19:18
SecOpsNinjaand i see the reconnects in various rabbitmq logs19:18
spatelMTU mismatch is very complex to troubleshoot because it looks like it works but drops packets19:18
SecOpsNinjamtu is only a problem if you are using something like vlans because of the header overhead, but otherwise it shouldnt be a problem in lan communication, no?19:19
spatelIf host A has MTU 9000 and host B has 1500 then you may see issue.19:20
spatelIt has nothing to do with VLAN or VxLAN19:20
SecOpsNinjaand i didnt mess with mtu so i believe its the default 1500 that is configured19:20
SecOpsNinjalet me confirm that but i believe there all have the same19:20
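Confirming spatel's MTU-mismatch point can be done by listing every interface's MTU from sysfs and probing the path with a don't-fragment ping; the peer IP in the comment is a placeholder:

```shell
# Sketch: quick MTU sanity check across all interfaces, reading sysfs.
MTUS="$(for dev in /sys/class/net/*; do
    printf '%s %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done)"
echo "$MTUS"
# To probe a 1500-byte path end to end, ping with don't-fragment set:
# 1500 MTU - 28 bytes of IP+ICMP headers = 1472 bytes of payload.
#   ping -M do -s 1472 <peer_ip>   # "message too long" hints at a mismatch
```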
spatelI had issue with LXC container 3 years ago, everything was working but it was dropping packets and turn out it was kernel logging issue19:21
SecOpsNinjafrom what im seeing the majority is 1500 and some brq/tap interfaces are using 145019:22
spatelthat is good19:23
SecOpsNinjabut the strange part is that all these problems started when i installed additional infra nodes.... and tried to enable HA in all of them with keepalived and multiple haproxys19:23
SecOpsNinjathis has been an interesting adventure :D19:24
spatelif this is not in production then why don't you destroy container and re-build it19:24
SecOpsNinjai will make a new test and see if i can detect disconnects in all rabbitmq cluster nodes19:25
spatelre-build nova and rabbit19:25
SecOpsNinjabecause i want to understand what the problem is (sometimes i cant destroy and rebuild it)19:25
spatelyes agreed. lets us know whatever you find.19:27
SecOpsNinjalet me make a few tests before going home to rest :D but what i will try is first a request to the flavor list and then trying to create a new server with it19:28
SecOpsNinjaand see if rabbitmq cluster nodes report any reconnect/error to the current consumers list19:29
SecOpsNinjaspatel, jrosser, noonedeadpunk   yep it has to be something in the nova api container/services that is causing the connection drop http://paste.openstack.org/show/801022/ . If i understand correctly, nova shouldn't lose its connection to rabbitmq because of a 404 or an http exception being thrown. If it was a network issue, various other consumers would also19:46
SecOpsNinjareconnect, but that didn't happen... only in nova_api_container19:46
SecOpsNinjaand the flavor was recreated in the same project where the images and server are being created, so there must be some misconfiguration on my part but i can't find where...19:47
jrosserit almost suggests that the mq credentials are mismatched between the nova container and the mq cluster19:49
jrosserbecause it disconnects pretty much straight away19:49
SecOpsNinjai dont think i replaced the openstack osa secrets but let me check in the nova api conf files19:50
SecOpsNinjanot finding the password in nova conffiles19:53
SecOpsNinjayep im out of ideas for understanding what is happening here... i can try to force the recreation of all containers except rabbitmq and galera and see if that resolves it, but supposedly openstack-ansible should have done all the configuration...19:55
*** maharg101 has joined #openstack-ansible19:55
*** carlosm has quit IRC20:00
*** maharg101 has quit IRC20:00
spatelwhy you getting {handshake_timeout,handshake}20:01
spatelI have seen that error when cluster is not healthy20:02
SecOpsNinjaprobably because the http exception is thrown and the request doesnt finish?20:02
SecOpsNinjabut if i go to the cluster it shows that it has all the nodes and there isn't any split-brain20:02
*** viks____ has quit IRC20:03
spatelwhy don't you run nova in debug mode20:03
SecOpsNinjaand i make the rabbitmq install20:03
*** hindret has quit IRC20:03
*** simondodsley has quit IRC20:03
SecOpsNinjai can. let me try to put that service in debug... i suppose with --debug?20:03
*** simondodsley has joined #openstack-ansible20:04
spatelnova.conf  use debug=True20:04
*** hindret has joined #openstack-ansible20:04
SecOpsNinjawhere is the file? i could only find *.ini ones20:04
spatelinside nova-api container /etc/nova/20:05
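spatel's two pointers (debug=True, /etc/nova/ inside the nova-api container) combine into a small config fragment:

```ini
# /etc/nova/nova.conf -- inside the nova-api container
[DEFAULT]
# Verbose debug logging; restart the nova services afterwards, e.g.
#   systemctl restart nova-api-os-compute nova-conductor nova-scheduler
debug = True
```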
SecOpsNinjaok give me a minute to change that and open all rabbitmq logs20:06
*** cshen has quit IRC20:10
SecOpsNinjalol after restarting all nova services (nova-api-os-compute.service nova-api-metadata.service nova-conductor.service nova-novncproxy.service nova-scheduler.service) in the container, now the api doesnt drop the connection in the rabbitmq logs20:11
SecOpsNinjabut it still gives the 404 error on the flavor20:11
SecOpsNinjahttp://paste.openstack.org/show/801025/20:12
*** cshen has joined #openstack-ansible20:12
jrosserSecOpsNinja: where are you running the cli commands from?20:13
SecOpsNinjamy computer that is using haproxy vip endpoint as the OS_AUTH_URL20:13
jrossercan you please try from the utlity container20:14
SecOpsNinjayep one second20:14
SecOpsNinjahum, one difference that im finding in the openrc configuration is that the utility container uses the /v3 part and the one on my machine doesn't20:15
SecOpsNinjabut let me make the request20:16
*** andrewbonney has quit IRC20:17
SecOpsNinjayep same behaviour regarding the 404 and 202 - http://paste.openstack.org/show/801026/20:18
SecOpsNinjabut still no droping connections now in rabbitmq20:18
SecOpsNinjaand will try now to force the creation of a vm to see if i get more info20:19
spatelare you getting list of flavor with openstack flavor list ?20:19
SecOpsNinjayes20:20
SecOpsNinjathat is the strange part, and all of them are public20:20
spatelthe openstack flavor show command mostly doesn't interact with rabbitMQ20:20
spatelthat API call goes directly to the mysql DB20:21
SecOpsNinjayep the openstack flavor list dont20:21
spatelnot sure why a flavor issue is coming into the picture20:21
SecOpsNinjaat least i dont see anything in the logs20:21
SecOpsNinjalet me try to create a dummy vm20:21
SecOpsNinjajrosser, spatel  http://paste.openstack.org/show/801027/20:26
SecOpsNinjaand it starts getting problems with rabbitmq disconnects20:27
SecOpsNinjalet me try to repost with info regarding rabbitmq logs20:28
spatelHTTP exception thrown: Flavor basic-small could not be found.20:31
SecOpsNinjahttp://paste.openstack.org/show/801028/20:32
SecOpsNinjabut it exists, at least in the flavor list20:33
SecOpsNinjaand shows info for the specific flavor20:33
SecOpsNinjathats very strange indeed20:33
SecOpsNinjashould i force the destruction of the whole rabbitmq cluster and, after it has been recreated, force the restart of all the infra node containers?20:34
spatel2 node RabbitMQ ?20:34
spatelthat is bad20:34
SecOpsNinjayep i have 3 nodes in my rabbitmq20:34
SecOpsNinjathe first one didn't report any disconnect20:34
spatelI have strong feeling your rabbit isn't in good health20:35
SecOpsNinjaor the tail didnt update20:35
SecOpsNinjayep it didnt report any disconnect20:35
spatelJust nuke rabbitmq and re-build20:35
SecOpsNinjaso you recommend destroying all the rabbitmq cluster containers, recreating them and running the rabbitmq install ?20:36
admin0i would recommend that also20:36
spatelThis is what i do to nuke rabbitmq20:36
SecOpsNinjaand the consumers - should i restart all of them or will they be able to resolve their problems?20:36
spatelstop all services20:37
spatelkill -9 rabbit20:37
spatelun-install rabbit  (yum remove rabbitmq-server)20:37
spatelrm -rf /var/lib/rabbitmq/mnesia/*20:37
spatelRun playbook to deploy rabbitmq20:37
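spatel's five steps can be sketched as one guarded script; it is destructive by design, the package-manager line depends on the distro, and it is a no-op on a box without rabbitmq:

```shell
# Destructive sketch of the rebuild steps above -- run inside each rabbitmq
# container on the infra nodes. Guarded so it does nothing without rabbitmq.
STATE="$(command -v rabbitmqctl >/dev/null 2>&1 && echo present || echo absent)"
if [ "$STATE" = present ]; then
    systemctl stop rabbitmq-server
    pkill -9 -f beam.smp || true          # kill any leftover erlang VM
    apt-get -y remove rabbitmq-server     # or: yum remove rabbitmq-server
    rm -rf /var/lib/rabbitmq/mnesia/*     # wipe the cluster state
fi
echo "rabbitmq: $STATE"
# afterwards, from the deploy host:
#   openstack-ansible rabbitmq-install.yml
```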
SecOpsNinjawhen you say stop all services, you mean the infra host services that would be using rabbitmq, right?20:38
spatelInside rabbitmq-container20:39
spatelon infra nodes20:39
SecOpsNinjaoh ok20:39
SecOpsNinjathanks everyone for all the info, i will try that tomorrow and see if i can get this resolved .... im having nightmares with rabbits :D20:39
spatelrabbit is the worst part of openstack; the majority of the time you will see issues with rabbitmq20:40
spateli have nuked rabbitMQ multiple times (because none of the troubleshooting guides helped me)20:41
SecOpsNinjai would have thought that more nodes would make rabbitmq more stable20:41
mgariepyi thought neutron was the worst part ;).. lol20:41
SecOpsNinjamgariepy,  yep neutron with some plugins is an interesting part also20:42
spatelneutron is CPU hungry (i haven't seen any complaints about its config)20:42
SecOpsNinjathanks again and i will try to give an update tomorrow :D20:42
SecOpsNinjagn to all20:42
spatelgn20:42
spatelI hate the Rabbitmq clustering part, it's always hard to recover (whenever i tried to join a node it always did something nasty or hung on me)20:43
spatelone day i had split-brain (that was nightmare)20:44
spatelat least with neutron you don't need to deal with clustering issue.20:44
*** cshen has quit IRC20:46
*** SecOpsNinja has left #openstack-ansible20:48
*** cshen has joined #openstack-ansible20:51
mgariepysure but neutron tend to be really slow to recover from what i've seen.20:55
mgariepyi agree failure when it's the first time and you need to learn on the spot to fix it is not fun.20:56
spatelmgariepy: its easy to horizontally add more resource in neutron to spread load21:11
spatelanyone has good Ubuntu pxe boot kickstart file?21:40
spatelthis option looks good for PXE  - append initrd=/images/ubuntu/initrd ip=dhcp syslog=10.70.0.20:514 url=http://10.70.0.20/pxe_repo/ubuntu-20.04.1-live-server-amd64.iso ks=http://10.70.0.20/pxe_ks/ubuntu-20-04-1.ks21:41
spatelI found the installation works but it prompts for questions/answers :(21:42
spateli need auto-install21:42
jrosserspatel: before 20.04 there was debian-installer and preseed21:50
*** cshen has quit IRC21:50
jrosserin 20.04 there is now this https://ubuntu.com/server/docs/install/autoinstall21:50
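The autoinstall format jrosser links is a cloud-config document; a minimal user-data for a non-interactive install might look like the following, with all values being hypothetical examples:

```yaml
#cloud-config
# Minimal 20.04 autoinstall user-data (the subiquity replacement for
# preseed/kickstart). Hostname, username and hash are examples only.
autoinstall:
  version: 1
  identity:
    hostname: compute-01
    username: deploy
    # sha-512 crypt hash, e.g. generated with: mkpasswd -m sha-512
    password: "<sha512-crypt-hash>"
  ssh:
    install-server: true
```

Typically this file (plus an empty `meta-data`) is served over HTTP and the installer is pointed at it from the kernel command line with something like `autoinstall ds=nocloud-net;s=http://10.70.0.20/autoinstall/` (the URL path here is an assumption).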
jrosserit is late here but i can maybe share some stuff tomorrow21:50
spateljrosser: thanks, let me read about that21:51
jrosserit is very similar to cloud-init for a vm21:51
spatelhmm i came across some articles talking about cloud-init but i thought that wouldn't be for my setup so i ignored them21:52
spatelLet me understand how 20.04 handle that21:52
spatelwhat OS you guys running on your openstack?21:52
spatel19.x ?21:52
jawad_axdHi! Can someone please push me on this one, with a newly added compute. I can see it in 'openstack compute service list' but not in 'openstack hypervisor list'. This is the nova-compute log http://paste.openstack.org/show/801032/ . One more thing I noticed: I can not reach ceph from the compute node after installation with "rbd --user cinder ls -p pool-name", after following the openstack-ansible docs for adding a21:53
jawad_axdnew compute node. Thanks in advance for pointers.21:53
jawad_axdI am trying to make this compute host as gpu passthrough, and it has vfio-pci kernel driver enabled on the host. I am not sure if that is causing some problem.21:55
*** maharg101 has joined #openstack-ansible21:56
jawad_axdI would highly appreciate it if someone would give some hints. I have spent the last few days on it..21:58
spateljawad_axd: did you check nova-api logs and nova-placement logs?21:59
*** maharg101 has quit IRC22:00
jawad_axdI can not see any error there..22:01
jawad_axdI got http://paste.openstack.org/show/800980/ - a libvirt related error a couple of days ago. But then it didn't appear again.22:04
spateljawad_axd: can you see your compute nodes in "openstack resource provider list"22:06
spatelif not then it could be nova-placement service related issue22:07
jawad_axdI can not see it with "openstack resource provider list"22:07
*** cshen has joined #openstack-ansible22:08
spatelthere you go22:08
spatellooks like your compute node is not able to register with nova-placement, or maybe nova22:09
spateli would check your compute nova.conf file to see if you have good config and nothing missing22:09
spatelalso make sure your nova-placement is running on infra node22:10
jawad_axdThis is nova.conf from compute. http://paste.openstack.org/show/801033/22:15
spatelcan you ping or curl your endpoints - is the node able to talk to all API services?22:16
spatelits hard to say anything just looking at file nova.conf22:17
spatelrun in debug mode and see why its not able to register itself to controller nodes22:17
jawad_axdOk. Regarding nova.conf I added pcipassthrough filter and [pci] information. I never had this kinda problem before.22:19
spatelremove that option and restart nova to see22:19
spatelI am using pcipassthrough and i had no issue at all22:20
jawad_axdok22:20
spateljust do some quick hit-and-try to see if it makes any sense22:20
jawad_axdnova-compute service restart is taking forever after removing those entries.22:27
spatelhmm22:27
spatelcheck logs and see22:28
jawad_axdThis is nova-compute log http://paste.openstack.org/show/801034/ after service restarted.22:31
spatelnothing change22:34
*** jbadiapa has quit IRC22:34
spatelno error except not able to find "No compute node record found"22:34
spatelI would check again nova-placement and nova-api logs22:35
spatelwhen the compute node restarts it tries to register with nova-placement/api and will surely tell you something (run in debug mode to get more data)22:36
jawad_axdThis is placement log http://paste.openstack.org/show/801036/22:38
spatelwhat if you run tcpdump on compute node and on other terminal restart service nova-compute22:42
spatelit will tell you in tcpdump what its trying to do making call to api etc..22:42
jawad_axdOk. I do it.22:43
jawad_axdThis is nova-api log http://paste.openstack.org/show/801037/22:43
spatellooks clean, so it looks like your compute node is not making the call (if you have a single infra node then run tcpdump on nova-api also, to see if you are getting any packets from the compute)22:47
jawad_axdI have HA setup . 3 nova-api nodes.22:48
jawad_axdhttp://paste.openstack.org/show/801038/22:49
jawad_axdtcpdump on compute node while restarting services.22:49
jawad_axdI do tcpdump on nova-api22:49
spatel:) you need to filter tcpdump for specific host ip or port (otherwise you will see all garbage like SSH / ARP etc..)22:50
jawad_axdah ok22:50
spateltcpdump -i any -nn not port ssh -e -xX -s0 (i would try that)22:51
spatelGood night folks! see you tomorrow! it was wonderful troubleshooting day today.22:55
jawad_axdGoodnight!22:56
jawad_axdThanks for your time.22:56
*** spatel has quit IRC22:58
*** spatel has joined #openstack-ansible23:05
*** spatel has quit IRC23:09
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_octavia master: [doc] Adjust octavia docs  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/76683323:28

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!