Friday, 2021-02-05

*** kukacz has quit IRC00:14
*** kukacz has joined #openstack-ansible00:20
*** Underknowledge has quit IRC00:23
*** Underknowledge has joined #openstack-ansible00:24
*** tosky has quit IRC00:27
*** ianychoi__ has quit IRC00:37
*** ianychoi__ has joined #openstack-ansible00:37
*** djhankb has quit IRC00:38
*** djhankb has joined #openstack-ansible00:38
*** maharg101 has joined #openstack-ansible01:44
*** spatel has joined #openstack-ansible01:44
*** maharg101 has quit IRC01:48
*** dasp has quit IRC02:04
*** cshen has quit IRC02:27
*** dasp has joined #openstack-ansible02:32
*** LowKey has joined #openstack-ansible02:59
*** cshen has joined #openstack-ansible03:15
*** cshen has quit IRC03:19
*** cshen has joined #openstack-ansible03:22
*** cshen has quit IRC03:27
*** Underknowledge has quit IRC03:33
*** Underknowledge2 has joined #openstack-ansible03:33
*** Underknowledge2 is now known as Underknowledge03:34
*** cshen has joined #openstack-ansible05:23
*** cshen has quit IRC05:28
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-ansible05:33
*** yasemind has joined #openstack-ansible05:38
*** maharg101 has joined #openstack-ansible05:45
*** maharg101 has quit IRC05:50
*** yasemind has quit IRC06:12
*** spatel has quit IRC06:36
*** cshen has joined #openstack-ansible06:45
*** cshen has quit IRC06:50
CeeMacmorning07:27
CeeMacmgariepy: was just catching up in the channel. If you come up with a 'stable' process for upgrading R -> T directly I'd be very interested to hear about it :)07:28
*** maharg101 has joined #openstack-ansible07:39
*** miloa has joined #openstack-ansible07:44
noonedeadpunkmornings07:49
*** miloa has quit IRC07:53
*** rpittau|afk is now known as rpittau07:57
*** vesper11 has joined #openstack-ansible07:58
noonedeadpunkhm, interesting... why do we run it considering https://opendev.org/openstack/openstack-ansible-openstack_hosts/src/branch/master/tasks/main.yml#L7308:00
openstackgerritDmitriy Rabotyagov proposed openstack/ansible-role-python_venv_build master: Import wheels build only when necessary  https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/77415908:02
noonedeadpunkbecause of import instead of include?08:02
noonedeadpunkdoh08:02
noonedeadpunkI recall Jesse wrote about a workaround method for tags; I wrote down that log somewhere, need to look for it...08:04
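For context, the tag-scoping difference between the two looks roughly like this (a minimal sketch; the file name configure.yml is hypothetical):

    # import_tasks is static: tags on the import propagate to every imported task
    - import_tasks: configure.yml
      tags:
        - config

    # include_tasks is dynamic: tags only gate the include itself, so child
    # tasks need them applied explicitly -- the usual workaround pattern:
    - include_tasks:
        file: configure.yml
        apply:
          tags:
            - config
      tags:
        - always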
*** gokhani has joined #openstack-ansible08:09
*** cshen has joined #openstack-ansible08:12
*** andrewbonney has joined #openstack-ansible08:18
gokhaniHi team, I have prepared 3 NFS shares for glance, cinder and nova. When I first deployed the OpenStack Ussuri version with OSA, the systemd mount didn't work correctly: it didn't mount my NFS share to /var/lib/nova/instances. After restarting the systemd mount service, it worked. Moreover, I rebooted one of my compute nodes. After that it didn't mount my08:23
gokhaninova NFS share. When I try to mount it manually (mount /nfsserver:/var/nfs/nova /var/lib/nova/instances), it connects to my cinder NFS share. It is weird. My exports are here > http://paste.openstack.org/show/802366/ . I didn't find any solution. Maybe I am missing something or doing something wrong. Can you help me please?08:23
*** ierdem has joined #openstack-ansible08:24
*** ierdem has left #openstack-ansible08:24
*** ierdem21 has joined #openstack-ansible08:24
*** ierdem21 has quit IRC08:24
*** ierdem has joined #openstack-ansible08:24
noonedeadpunkjrosser: hm iirc you saw it with stackviz already? https://zuul.opendev.org/t/openstack/build/f12a5aea3e674328b8d3da0e43130c31/log/job-output.txt#1858608:28
noonedeadpunk"ERROR: Could not satisfy constraints for 'stackviz': installation from path or url cannot be constrained to a version"08:28
noonedeadpunkI guess it's the same thing spatel saw yesterday with barbican-ui08:28
kleinigokhani: we have massive problems with NFS shares everywhere in production. They break all the time if either client or server restarts, hiccups, or the network has issues. Consider using something else that is more resilient against such issues, like Ceph.08:29
*** vesper11 has quit IRC08:30
noonedeadpunkI totally agree that NFS in prod is a curse.08:30
* noonedeadpunk migrating NFS -> Ceph workloads right now08:30
noonedeadpunkgokhani: anyway, what does systemd status say regarding that mount?08:33
noonedeadpunkand how nova_nfs_client is defined?08:34
gokhanikleini, in fact, we are using NFS in our prod (OpenStack Pike version); it has been running for 3 years and it is working successfully. In prod we are using NetApp.08:37
noonedeadpunkoh, so it's on Pike...08:37
gokhanino, this problem is in my development environment and it is Ussuri.08:38
noonedeadpunkok, right, then my previous 2 questions: the systemctl status of the mount, and how is nova_nfs_client set?08:39
noonedeadpunkkleini: if you decide to migrate to ceph I might have some tricks for that :)08:40
gokhanithis is my nova nfs client settings > http://paste.openstack.org/show/802368/08:40
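For reference, nova_nfs_client in the os_nova role takes a list of mount definitions along these lines (placeholder values, not the actual paste):

    nova_nfs_client:
      - server: "nfsserver"
        remote_path: "/var/nfs/nova"
        local_path: "/var/lib/nova/instances"
        type: "nfs"
        options: "_netdev,auto"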
*** tosky has joined #openstack-ansible08:41
gokhaniI checked the uid and gid, they are the same as on my NFS server08:42
gokhanikleini, we will upgrade our pike environment to ussuri, and we are planning to use ceph. It will be good :)08:43
noonedeadpunkWhat does systemctl status var_lib_nova_instances.mount say (or smth like that)?08:46
noonedeadpunkmight be `-` instead of `_`08:47
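systemd derives the unit name from the mount path, and systemd-escape prints the exact spelling:

    $ systemd-escape -p --suffix=mount /var/lib/nova/instances
    var-lib-nova-instances.mount
    $ systemctl status var-lib-nova-instances.mount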
noonedeadpunkbtw I'm wondering if it's the missing bit https://opendev.org/openstack/ansible-role-systemd_mount/commit/bcbd5344cf56338adea03ad3ef41466fd8615e7008:48
kleininoonedeadpunk: thanks for your offer. for OpenStack and Proxmox we are already using Ceph since Cuttlefish release. we only have some rare cases, where still NFS is used. those will be migrated this year08:48
noonedeadpunkyeah, it has not been backported to Ussuri08:49
jrossernoonedeadpunk: I didn't see that trouble with stackviz here https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/77028108:49
noonedeadpunkgokhani: you can try cherry-picking this patch https://review.opendev.org/c/openstack/ansible-role-systemd_mount/+/75497808:50
noonedeadpunkjrosser: yeah, because we don't build wheels there08:50
jrosserah ok08:50
noonedeadpunkand that is when I'm testing with build wheels set to true08:50
noonedeadpunkI guess it brings extra complexity to the new pip resolver...08:50
jrosserok well that is coming up because somehow stackviz ends up as a requirement and a constraint and that fails08:51
gokhaninoonedeadpunk, I am getting these errors on syslog > http://paste.openstack.org/show/802371/08:51
noonedeadpunkum. that sounds like a networking thing...08:51
jrossernoonedeadpunk: this is what was needed for installs from git tarball to fix similar errors08:52
jrosserhttps://github.com/openstack/ansible-role-python_venv_build/commit/c9eb3b1c905333282e597598e73c5459a4f5c14608:52
jrosser*git repo08:52
jrosserI expect some adjustment to that is needed to handle tarballs08:53
gokhaninoonedeadpunk, when I google kernel error 107 it says "107 ENOTCONN Transport endpoint is not connected". Maybe I have problems with my NICs08:55
noonedeadpunkor maybe with mtu (but not sure)08:56
jrosserwould be interesting to know if we can use stackviz_tarball: "https://tarballs.opendev.org/openstack/stackviz/dist/stackviz-latest.tar.gz#egg=stackviz"08:58
jrosserand split the string on egg= again08:58
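A split like that could look something like this in the role (a sketch only; stackviz_tarball is the variable proposed above, and the variable name on the left is hypothetical):

    # hypothetical: recover the package name from the #egg= fragment of the URL
    stackviz_package_name: "{{ stackviz_tarball.split('egg=')[-1] }}"
    # with the URL above this yields "stackviz"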
gokhaninoonedeadpunk, MTU is 1500 on all NICs09:00
gokhaninoonedeadpunk, I also get lots of timeout errors or warnings on the rabbitmq cluster. I am working on performance tuning for rabbitmq. I tried these kernel settings > http://paste.openstack.org/show/802374/ . Do you have any ideas for rabbitmq performance tuning?09:14
gokhaninoonedeadpunk, some error logs on rabbitmq > http://paste.openstack.org/show/802375/09:16
noonedeadpunkno, not really, never tried to tune it...09:17
noonedeadpunkit's just working for me nowadays09:18
gokhaninoonedeadpunk, thanks for your help. I also managed to mount my nova NFS share after removing my cinder NFS share from the NFS server. But now I am struggling with rabbitmq errors :(09:25
*** yasemind has joined #openstack-ansible09:40
*** pto has joined #openstack-ansible09:45
ptoI think there is a bug in the letsencrypt support. The renewal fails: http://paste.openstack.org/show/802378/09:51
ptoIt seems like it's trying to bind to port 80 where haproxy runs09:51
noonedeadpunkeventually it should run with `"--http-01-address {{ ansible_host }} --http-01-port 8888"`10:10
noonedeadpunkhttps://docs.openstack.org/openstack-ansible-haproxy_server/ussuri/configure-haproxy.html#using-certificates-from-letsencrypt10:10
noonedeadpunkpto: or you're running on Victoria?10:11
noonedeadpunkbecause I changed default there indeed10:11
ptoI'm on Ussuri10:11
noonedeadpunkthen you need haproxy_ssl_letsencrypt_setup_extra_params to be set10:12
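On Ussuri that would be a user_variables.yml override along these lines (a sketch based on the flags quoted earlier; check the role defaults for the full certbot command):

    haproxy_ssl_letsencrypt_setup_extra_params: "--http-01-address {{ ansible_host }} --http-01-port 8888"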
ptohttp://paste.openstack.org/show/802379/10:15
ptoManual renewal gives: http://paste.openstack.org/show/802381/10:16
noonedeadpunkI'm wondering why `pre-hook command "/etc/letsencrypt/renewal-hooks/pre/haproxy-pre" returned error code 124`10:23
noonedeadpunkas eventually this should start a temp server behind haproxy, and haproxy should have an acl to forward to it10:24
noonedeadpunkok, I'm wrong. actually the pre-hook I guess should just sleep long enough for haproxy to see the backend on port 888810:32
noonedeadpunkpto do you have an acl for letsencrypt in haproxy config?10:34
jrosserif this is ussuri I think that the whole of haproxy_default_services needs overriding to make the ACL work10:35
noonedeadpunkthe nasty thing is that if you use horizon, you might need to override whole haproxy_default_services10:35
noonedeadpunkyeah10:35
jrosserlast section here https://github.com/openstack/openstack-ansible/blob/stable/ussuri/doc/source/user/security/ssl-certificates.rst10:36
jrosserpto: ^ this stuff is all much nicer in victoria but it wasn't really backportable10:37
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible stable/victoria: Add haproxy_*_service variables  https://review.opendev.org/c/openstack/openstack-ansible/+/77412610:37
noonedeadpunknot sure how appropriate it is to backport such a huge patch, but I think for those who already have overrides it might be ok?10:38
ptonoonedeadpunk: I am using the default config. So no ACLs in haproxy10:39
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible stable/victoria: Add haproxy_*_service variables  https://review.opendev.org/c/openstack/openstack-ansible/+/77412610:40
jrosserpto: the documentation link i gave is necessary10:40
jrosserotherwise haproxy will not redirect the challenge to certbot10:40
ptoThe plan is to use a static public SSL certificate in a short while, so the problem goes away :-)10:40
ptoI think the problem is that port 8888 is not accessible from the outside right now10:40
noonedeadpunkhttps://docs.openstack.org/openstack-ansible/ussuri/user/security/index.html#letsencrypt-certificates might be a bit more readable10:40
noonedeadpunkyes, because you don't have acl10:41
noonedeadpunksince letsencrypt always asks on port 80 for verification10:41
jrosserpto: it doesn't work like that, port 8888 is only on the backend, and an ACL on the haproxy frontend port 80 sends the acme challenge to the backend/port 888810:41
jrosserthis is needed because you have to renew all the certs on all the haproxies, but the VIP is only ever present on one of them10:42
jrosser*external VIP10:42
jrosserso the loadbalancer function of haproxy is key to ensuring that any of the nodes can renew10:43
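In haproxy terms the mechanism boils down to something like this (a generic acme-challenge ACL, not the exact OSA-rendered config):

    frontend base-frontend
        bind *:80
        # send Let's Encrypt HTTP-01 challenges to the local certbot backend
        acl letsencrypt-acl path_beg /.well-known/acme-challenge/
        use_backend letsencrypt-back if letsencrypt-acl

    backend letsencrypt-back
        # certbot's temporary standalone server, started for the renewal
        server certbot 127.0.0.1:8888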
ptojrosser: which is the VIP URL on port 8888, which haproxy then proxies back to the letsencrypt server, right?10:44
ptojrosser: Otherwise I can't see how the request would ever reach the LE server10:44
jrosserdo we talk about the renewal request from certbot to LE, or the challenge from LE to certbot?10:45
ptoI guess the challenge is present in both requests10:50
openstackgerritDmitriy Rabotyagov proposed openstack/ansible-role-python_venv_build master: Import wheels build only when necessary  https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/77415910:51
jrosserpto: certbot on all the haproxy nodes needs to be able to hit the LE API endpoint on port 80 and 443 to request a renewal10:52
jrosserso your haproxy nodes regardless of the VIP need some egress for https, be that a default route, NAT, firewall, proxy, whatever10:53
jrosseronce they hit the LE renewal API, LE then calls back to the IP looked up from DNS for the FQDN, always on port 8010:53
ptoFrom the docs: https://docs.openstack.org/openstack-ansible/ussuri/user/security/index.html#letsencrypt-certificates I think I'm missing the last part, which introduces the ACL for .well-known/acme-challenge/10:54
ptoSo it's probably just me who missed that part10:54
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Replace import with include  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77422410:54
noonedeadpunkno, it's not only you. because we don't mention that in haproxy docs10:55
noonedeadpunkand even I pointed to the wrong place :(10:55
noonedeadpunkbtw, I found what Jesse was writing about tags, and it's in 77422410:55
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Replace import with include  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77422410:57
jrosserdocs for haproxy/LE were tough because it’s a reusable role and I think I put stuff about the ACL there too?10:58
ptoI think you have updated the docs since I deployed. It was very confusing then. It's much better now, but still not a trivial task to set up, I think10:58
noonedeadpunkyeah, you did10:58
noonedeadpunkit just wasn't so "obvious" that you need to replace all the services because of that10:59
jrosseriirc it was only in a basic state for ussuri10:59
noonedeadpunkand that's ok in the context of the role docs10:59
jrossernearly made my head explode making it work at all in the first place so I’m not surprised it’s causing difficulty :/11:00
jrosserI think adding a diagram to the docs would be hugely helpful11:01
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Replace import with include  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77422411:02
noonedeadpunkbtw, https://opendev.org/openstack/ansible-role-pki has been created11:02
noonedeadpunkI guess it will be the place for haproxy code as well somehow?11:02
noonedeadpunk*letsencrypt code11:02
*** gokhani has quit IRC11:06
jrossernoonedeadpunk: from the numbers I got yesterday the venv build role accounts for quite a large proportion of the total tasks we run11:09
jrossercan we make the library symlinking tasks also be optional?11:09
jrosseranything we can do there to reduce the number of tasks gets multiplied by ~2011:10
*** gokhani has joined #openstack-ansible11:11
ptoThank you all for helping today. You are all awesome :-)11:19
noonedeadpunkjrosser: yeah, fair note about symlinking11:26
openstackgerritDmitriy Rabotyagov proposed openstack/ansible-role-python_venv_build master: Import wheels build only when necessary  https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/77415911:29
ierdemHi everyone, neutron-linuxbridge-agent throws a "queue not found" error on my compute nodes. I checked the rabbitmq queues and realized that the queues in the neutron logs don't exist. Now I cannot do anything on my cluster. How can I create these queues, or do you know the proper solution to this problem? Neutron logs:11:31
ierdemhttp://paste.openstack.org/show/802377/, Rabbitmq /neutron queue list:http://paste.openstack.org/show/802376/11:31
ierdemSimilar logs exist on other services such as heat and nova11:32
noonedeadpunkeventually neutron is the one that should create them11:34
noonedeadpunkand can you telnet 172.30.25.206 5671 from neutron container/host?11:35
noonedeadpunkas I'd say it's totally a rabbit issue11:36
ierdemnoonedeadpunk I can telnet from compute host to 172.30.25.206 567111:36
ierdemCould this problem be caused by network issues? Could recreating or reinstalling the rabbitmq cluster be a solution?11:38
noonedeadpunkI'd say network issue was my first assumption but since you can telnet...11:39
noonedeadpunkwell, I guess it's worth at least try to run openstack-ansible playbooks/rabbitmq_install.yml -e rabbitmq_upgrade=true11:39
noonedeadpunkjust in case11:39
noonedeadpunkeither it will confirm network issues or it may heal the rabbit cluster and re-create the required stuff11:40
ierdemIf I run the rabbitmq_install.yml playbook, will older messages/queues be deleted?11:40
noonedeadpunkyeah, they will, I'm afraid11:42
*** shyamb has joined #openstack-ansible11:43
ierdemI have ~20 instances and for now I can only ssh to them, not ping or curl. This problem first occurred when I tried to change their security groups, and the first error in neutron was "queue q-agent-notifier-securitygroup-update.compute06 not found". So as I understand it, neutron creates the necessary queues when it needs them11:45
noonedeadpunkyep, exactly.11:46
ierdemSo even if I reinstall rabbitmq, the necessary queues for neutron may not be created by the playbook11:46
noonedeadpunkqueues are not created by the playbook11:46
noonedeadpunkit's neutron's responsibility to create and somehow manage them11:47
noonedeadpunkand I'm absolutely sure that from neutron side things are good11:47
noonedeadpunkand it's rabbit that needs attention (or networking)11:47
ierdemhmm, is there any list of the necessary neutron queues? Can we create them manually? I know it is not a proper solution, but this is the first time I've faced this problem11:48
noonedeadpunkno, you can't11:48
noonedeadpunkI think the major thing there is `due to timeout`11:49
noonedeadpunkSo I'd say it would create them if not for the timeout11:50
ierdemSo it seems the only way is to reinstall rabbitmq as you said11:51
ierdemnoonedeadpunk I am trying it now, thank you11:54
*** gokhani has quit IRC11:57
*** yasemind has quit IRC12:01
ierdemnoonedeadpunk I want to ask something I realized minutes ago. When I check the Hypervisor List on horizon, all compute nodes are listed with their short names, like compute07, except compute06, which is listed with its FQDN (compute06.openstack.local). What causes this, any idea? http://paste.openstack.org/show/802383/12:03
noonedeadpunkyeah, I guess it's the order of records in /etc/hosts for 127.0.1.1 or 127.0.0.112:04
noonedeadpunkI can bet that output of python3 -c "import socket; print(socket.gethostname())" and python3 -c "import socket; print(socket.getfqdn())" would differ for this host12:05
*** gokhani has joined #openstack-ansible12:05
*** shyamb has quit IRC12:09
ierdemnoonedeadpunk you're right, I checked and the FQDN was different. In /etc/hosts, 127.0.1.1 was set to log01 because the compute06 server's original name was log01. I changed it to compute06, thank you12:15
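The getfqdn() result follows the first name on the matching /etc/hosts line, so for the host above that line should read something like:

    127.0.1.1 compute06.openstack.local compute06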
*** zul has joined #openstack-ansible12:15
openstackgerritMerged openstack/openstack-ansible-os_zun stable/ussuri: Update zun role to match current requirements  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/77154712:20
*** pto has quit IRC12:42
*** cshen_ has joined #openstack-ansible12:55
*** cshen has quit IRC12:59
*** strattao has joined #openstack-ansible13:05
*** spatel has joined #openstack-ansible13:48
*** cshen_ has quit IRC13:56
*** chandankumar is now known as raukadah13:59
*** LowKey has quit IRC14:07
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Adjust magnum CI image  https://review.opendev.org/c/openstack/openstack-ansible/+/77424314:14
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Adjust magnum CI image  https://review.opendev.org/c/openstack/openstack-ansible/+/77424314:15
*** spatel has quit IRC14:35
*** gokhani has quit IRC14:43
*** cshen has joined #openstack-ansible14:43
ierdemI cannot see my compute06 node when I run "openstack hypervisor list" but I can see it when I run the "openstack compute service list" command. I have 3 instances on compute06 but their disks are on the NFS server. Now I want my cluster to see compute06 as a hypervisor again, but I couldn't manage it. How can I do that?14:44
ierdemby the way noonedeadpunk I ran the rabbitmq playbook as you said and it is working now14:44
noonedeadpunkgood news!14:45
noonedeadpunkregarding compute06 I'm wondering if it cannot register with the "new" name because it's already in the openstack compute service list14:46
noonedeadpunkI guess it should report some issues in journald on compute itself14:46
noonedeadpunkand you should restart nova-compute service as well14:46
ierdemI checked the compute06 journals but there is nothing suspicious, and yes, I can restart the nova-compute service successfully14:50
ierdemis there any way to add this host as a hypervisor without losing any of the instances on it?14:50
noonedeadpunkwell eventually it needs to be discovered in nova14:51
ierdemI ran the nova-manage discover command and it gave an error, http://paste.openstack.org/show/802388/14:52
noonedeadpunkyou should run it from nova-api container14:52
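i.e. attach to the nova-api container on a controller and run the discovery from there (a sketch; container names vary per deployment):

    # on the controller host
    lxc-attach -n $(lxc-ls -1 | grep nova_api)
    # inside the container
    nova-manage cell_v2 discover_hosts --verbose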
ierdemoh, okey14:53
ierdemI ran it and nothing changed, http://paste.openstack.org/show/802389/14:55
ierdemhttp://paste.openstack.org/show/802390/14:55
noonedeadpunkis libvirtd running?14:55
noonedeadpunkand what about compute service list?14:56
ierdemthere was an error in libvirtd on compute06, Feb 05 14:22:41 compute06 libvirtd[2953]: End of file while reading data: Input/output error14:57
ierdemI restarted it and am waiting now14:57
ierdemcompute service list shows all nodes, including compute06, correctly14:58
ierdemhttp://paste.openstack.org/show/802391/14:58
ierdemOh after restarting libvirtd, I restarted nova-compute and it is working now! Thank you noonedeadpunk15:00
ierdemI can see all hypervisors and all their states are up15:00
noonedeadpunkok, great)15:02
*** LowKey has joined #openstack-ansible15:02
noonedeadpunkI think libvirt didn't like the changed hostname15:02
noonedeadpunkbut I'm not sure the instances aren't stuck there15:03
noonedeadpunksince in the DB it might be the other host15:03
ierdemI am restarting all instances now, after that I will check the instances which run on compute0615:04
ierdemNow there is another problem.. The hypervisor went into the Down state again; I think it is caused by the var-lib-nova-instances.mount unit. In this unit's logs it cannot unmount the /var/lib/nova/instances path15:11
*** gokhani has joined #openstack-ansible15:12
ierdemhttp://paste.openstack.org/show/802393/15:13
*** rpittau is now known as rpittau|afk15:17
*** macz_ has joined #openstack-ansible16:15
*** pcaruana has quit IRC16:18
*** tosky has quit IRC16:29
openstackgerritMerged openstack/openstack-ansible-os_keystone stable/train: Allow OIDCClaimDelimiter to be set in the apache config file  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/77396618:19
*** spatel has joined #openstack-ansible18:20
*** maharg101 has quit IRC18:23
*** tosky has joined #openstack-ansible18:37
*** spatel has quit IRC18:38
*** spatel has joined #openstack-ansible18:40
openstackgerritMerged openstack/openstack-ansible-os_keystone stable/victoria: Allow OIDCClaimDelimiter to be set in the apache config file  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/77396418:50
ierdemHi again, I added a new compute node to my OSA environment. Now the new compute host's nova-compute service goes into the down state; in the NFS mount point for instances there are many locks http://paste.openstack.org/show/802403/19:08
ierdemWhat causes this problem? My other compute nodes work fine. Did I miss sth?19:08
*** andrewbonney has quit IRC19:15
ierdemThe process that creates the locks is http://paste.openstack.org/show/802406/19:16
spatelI don't run NFS in my cloud but I would say check the NFS logs etc. Are you running NFSv4?19:22
ierdemI am using NFSv319:23
spatelI would say use v4, it has better lock handling19:24
spatelv3 has a long history of locking issues so I would highly recommend using v419:27
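Forcing v4 is a client-side mount option, e.g. an fstab/systemd_mount entry along these lines (server and paths taken from earlier in the log; options are illustrative):

    # "nfs4" (or the nfsvers=4 option) selects NFSv4 instead of v3
    nfsserver:/var/nfs/nova  /var/lib/nova/instances  nfs4  _netdev,auto  0  0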
*** gokhani has quit IRC19:41
*** gokhani has joined #openstack-ansible19:43
*** gokhani has quit IRC20:17
*** maharg101 has joined #openstack-ansible20:19
*** ierdem has quit IRC20:21
*** maharg101 has quit IRC20:24
*** LowKey has quit IRC20:37
openstackgerritMerged openstack/openstack-ansible-os_nova master: Move nova pip package from a constraint to a requirement  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/77027920:56
openstackgerritMerged openstack/openstack-ansible-os_cinder master: Move cinder pip package from a constraint to a requirement  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/77027221:05
openstackgerritMerged openstack/openstack-ansible-os_keystone master: Move keystone pip package from a constraint to a requirement  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/77027121:07
openstackgerritMerged openstack/openstack-ansible-os_placement master: Move placement pip package from a constraint to a requirement  https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/77028021:08
openstackgerritMerged openstack/openstack-ansible-os_glance master: Move glance pip package from a constraint to a requirement  https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/77054621:18
* noonedeadpunk wondering how heavily gates will be broken on monday with the release of 20.04.2 and kernel 5.821:27
noonedeadpunkoh, 5.8 is only for HWE kernel21:30
* noonedeadpunk not worried anymore. clean forgot that it's not CentOS21:30
noonedeadpunkthat is the whole log of upgraded things for 20.04.1 -> 20.04.2 on my workstation... http://paste.openstack.org/show/802409/21:32
*** Underknowledge3 has joined #openstack-ansible21:39
*** Underknowledge has quit IRC21:41
*** Underknowledge3 is now known as Underknowledge21:41
spatelI don't think I am going to upgrade it :)21:52
*** macz_ has quit IRC21:52
spatelnoonedeadpunk Hey, I finally finished my blog on Designate DNS implementation with OpenStack-Ansible - https://satishdotpatel.github.io/designate-integration-with-powerdns/21:53
*** spatel has quit IRC22:02
*** spatel has joined #openstack-ansible22:03
*** spatel has quit IRC22:04
*** dasp_ has joined #openstack-ansible22:18
*** dasp has quit IRC22:20
*** waxfire has quit IRC22:21
*** spatel has joined #openstack-ansible22:21
*** waxfire has joined #openstack-ansible22:21
*** fnpanic has joined #openstack-ansible22:23
*** spatel has quit IRC22:26
*** cshen has quit IRC23:27
openstackgerritMerged openstack/openstack-ansible-os_tempest master: Move tempest pip package from a constraint to a requirement  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/77028123:28
