Monday, 2020-12-28

00:57 *** tosky has quit IRC
01:01 *** jawad_axd has joined #openstack-ansible
01:06 *** jawad_axd has quit IRC
01:22 *** yann-kaelig has quit IRC
02:06 *** spatel has joined #openstack-ansible
02:09 *** spatel has quit IRC
02:09 *** spatel has joined #openstack-ansible
02:51 *** spatel has quit IRC
03:22 *** raukadah is now known as chandankumar
04:05 *** macz_ has quit IRC
05:00 *** macz_ has joined #openstack-ansible
05:33 *** evrardjp has quit IRC
05:33 *** evrardjp has joined #openstack-ansible
06:06 *** jawad_axd has joined #openstack-ansible
06:07 *** jawad_axd has quit IRC
06:07 *** jawad_axd has joined #openstack-ansible
07:46 *** PrinzElvis has quit IRC
07:47 *** PrinzElvis has joined #openstack-ansible
07:48 *** stduolc has joined #openstack-ansible
08:02 *** sshnaidm_ has joined #openstack-ansible
08:05 *** sshnaidm has quit IRC
09:08 *** shyamb has joined #openstack-ansible
09:20 *** dirk has quit IRC
09:47 *** dirk1 has joined #openstack-ansible
09:58 *** macz_ has quit IRC
10:50 *** sshnaidm_ is now known as sshnaidm|rover
10:55 *** tosky has joined #openstack-ansible
11:06 *** spatel has joined #openstack-ansible
11:07 <admin0> a new osa install on 21.2.0 gives RuntimeError: rbd python libraries not found .... while the same worked in another setup
11:07 <admin0> and i can't understand why the playbooks all go fine but nova compute fails
11:10 *** spatel has quit IRC
11:12 <kleini> rbd sounds like Ceph related
11:13 <kleini> I still have the problem with 21.2.0 that the Ceph MONs list still needs to exist although configuration from file works fine.
11:16 <kleini> maybe this is the same for you, as the ceph-client role does not run if ceph_mons is not defined
11:18 <admin0> i defined the ceph mons
11:18 <admin0> but there is no /etc/ceph created
11:18 <admin0> but the nova configs have the ceph config
11:23 <admin0> openstack_config: true -- this should make osa ssh to the mons, copy and download the configs and keys, right?
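
(For context, a minimal sketch of the ceph_mons definition being discussed; the mon addresses are illustrative, not taken from this deployment:)

    # user_variables.yml (illustrative values only)
    ceph_mons:
      - 172.29.244.11
      - 172.29.244.12
      - 172.29.244.13
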
11:23 <admin0> strange is i used the same 21.2.0 on almost 4-5 new builds involving ceph .. it worked on all others
11:23 <admin0> failed here
11:30 <admin0> unless i missed something or did something wrong here ;)
11:32 <kleini> at least your approach sounds correct to me
11:32 <kleini> sorry, I don't have any idea what could have failed
11:33 <admin0> also, 22.0.0.rc1 was released, but it fails on TASK [haproxy_server : Unarchive HATop] => fatal: [c1]: FAILED! => {"changed": false, "msg": "dest '/opt/cache/files/v0.8.0' must be an existing dir"}
11:59 *** shyamb has quit IRC
12:20 *** rfolco has joined #openstack-ansible
12:30 *** stduolc has quit IRC
12:31 *** stduolc has joined #openstack-ansible
12:58 *** janno has quit IRC
12:58 *** janno has joined #openstack-ansible
14:05 *** spatel has joined #openstack-ansible
14:05 <kleini> Is there a list or a document that describes which OSA variables can be used to get IP addresses for configuration files? E.g. I want to define IP addresses in pools.yaml for Designate and I would like to get the br-mgmt IP of the host running the Designate container.
14:07 <kleini> Or the current container's br-mgmt IP, to insert that into some configuration.
14:17 <spatel> kleini: is this what you are looking for? https://docs.openstack.org/openstack-ansible/latest/reference/configuration/using-overrides.html
14:23 <kleini> I know all of that, but it still does not help me get the br-mgmt IP address of the host running some LXC container, where in that LXC container I want to put the host's IP address into the configuration file
14:26 <kleini> I already put OSA into debug mode at the point where the configuration file is generated and had a look at the available variables, but this list is somewhat long...
14:49 *** sshnaidm|rover has quit IRC
14:52 *** spatel has quit IRC
14:57 <admin0> kleini, you are looking for this?  /opt/openstack-ansible/scripts/inventory-manage.py -l
15:02 <kleini> yeah, kind of. the br-mgmt IP of the container I am currently templating a configuration file on, and its LXC host's br-mgmt IP
15:05 *** spatel has joined #openstack-ansible
15:05 <admin0> the output of that command has the br-mgmt of the container
15:05 <admin0> exactly what you wanted
15:06 <kleini> and how is the variable named?
15:06 <kleini> containing that IP address?
15:07 <admin0> you should run that command once on the deployment host
15:07 <admin0> and you will see the output and know how to parse it
15:07 <admin0> or for ansible etc
15:09 <kleini> what is the variable name for my current container's br-mgmt IP address, independent of the container I am in
15:11 <jawad_axd> Hi folks! Question: When I have taken a snapshot of an instance_disk in ceph and then try to delete the instance, it deletes from openstack but the instance_disk does not get deleted from ceph. I can understand it has a snapshot, but shouldn't it purge the snapshot and delete the instance_disk afterwards?
15:26 <jrosser> kleini: the current container mgmt network address is container_address, see also these https://github.com/openstack/openstack-ansible/blob/master/inventory/group_vars/all/all.yml#L37-L38
15:28 <jrosser> kleini: if you want a list of the mgmt addresses for a particular container type you can do something like this https://github.com/openstack/openstack-ansible/blob/master/inventory/group_vars/keystone_all.yml#L32
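
(For reference, the pattern jrosser points at looks roughly like this; the group and variable names below are illustrative, while container_address is the per-host management address he mentions:)

    # sketch only: collect the br-mgmt addresses of every designate container
    designate_host_addresses: >-
      {{ groups['designate_all']
         | map('extract', hostvars, 'container_address')
         | list }}
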
15:30 <kleini> jrosser: thanks, that helps a lot
16:41 <ThiagoCMC> Morning! Are you guys running Victoria already? I'm looking forward to executing that `scripts/run_upgrade.sh` but I'm afraid that it'll mess up my deployment. Thing is, my deployment is 100% based on QEMU and LXD, I mean, all my Controllers are QEMU VMs and all my Network/Computes/OSDs are LXD, so I'm thinking about making a `Libvirt snapshot` of at least all of my controllers before the upgrade to Victoria. Sounds like a good idea, right?
16:42 <ThiagoCMC> If something goes wrong, I can revert my controllers back and probably reinstall the compute nodes (I don't want to make LXD snapshots and grab all the local qcows too)
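
(A sketch of the snapshot step ThiagoCMC describes, run on the KVM host; the domain and snapshot names are illustrative:)

    # snapshot each controller VM before the upgrade (repeat per controller)
    virsh snapshot-create-as controller1 pre-victoria \
        --description "before OSA Victoria upgrade"
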
16:47 <spatel> Victoria RC1 is out
16:47 <spatel> I am deploying RC1 right now on my production
16:48 <spatel> RC1 is pretty much close to stable (now it's just a matter of time)
16:48 <ThiagoCMC> Nice! But, fresh install or upgrade from Ussuri?
16:48 <spatel> fresh (because i convert centos to ubuntu)
16:48 <ThiagoCMC> That's a very smart move!  =P
16:48 <spatel> why don't you quickly set up an AIO ussuri and run the upgrade to victoria
16:49 <spatel> if your openstack isn't running production workload then you can go directly
16:50 <ThiagoCMC> It's production and I don't have a clone env to play with. Sounds like AIO is the way to go.
16:51 <ThiagoCMC> I was thinking about making a clone env of my OSA, in a Heat Template, so I can launch a stack that would be OpenStack within OpenStack, just to test new playbooks...
16:51 <ThiagoCMC> I'll start this with AIO!
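
(Roughly what the AIO test of that upgrade path looks like; see the OSA quickstart docs for the authoritative steps, and note that the branch names below simply follow the discussion above:)

    # build an Ussuri all-in-one first
    git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
    cd /opt/openstack-ansible
    git checkout stable/ussuri
    scripts/bootstrap-ansible.sh
    scripts/bootstrap-aio.sh
    cd playbooks
    openstack-ansible setup-hosts.yml setup-infrastructure.yml setup-openstack.yml
    # then switch to Victoria and run the upgrade script discussed above
    cd .. && git checkout stable/victoria
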
16:54 <admin0> ThiagoCMC, stuck with 22.0.0.rc1: it was released, but it fails on TASK [haproxy_server : Unarchive HATop] => fatal: [c1]: FAILED! => {"changed": false, "msg": "dest '/opt/cache/files/v0.8.0' must be an existing dir"} on victoria
16:54 <spatel> why are you deploying 22.0.0.rc1 ?
16:55 <spatel> you should use stable/ussuri for aio right?
16:55 <admin0> 22.0.0.rc1 is to test it and report issues
16:55 <admin0> i am on 21.2.0 for prod
16:56 <ThiagoCMC> Are there significant code differences between the "stable/victoria" (or stable/something) branch and the "22.0.0.rc1" tagged one (or 21.2.0)?
16:57 <spatel> RC1 should be close to stable
16:57 <ThiagoCMC> The `stable/something` branches are bleeding edge, right? That will be tagged in a next release...?
16:57 <spatel> I am running victoria on my lab and didn't see any issue
16:58 <spatel> Yes
16:58 <ThiagoCMC> cool
16:58 <spatel> RC1 will soon turn into stable (matter of days)
16:58 <ThiagoCMC> Okdok   =)
17:15 <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_barbican master: [doc] Add barbican configuration page  https://review.opendev.org/c/openstack/openstack-ansible-os_barbican/+/768513
17:21 *** jawad_axd has quit IRC
17:22 *** jawad_axd has joined #openstack-ansible
17:23 <kleini> admin0, spatel, jrosser: my results in dynamically defining designate_pools_yaml for having PowerDNS running on the LXC hosts that run the designate containers: http://paste.openstack.org/show/801313/
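
(Since the paste may expire, here is a rough illustration of a designate_pools_yaml override for a pdns4 backend; every name, address, port and token below is a placeholder, not kleini's actual configuration:)

    # user_variables.yml (sketch only)
    designate_pools_yaml:
      - name: default
        description: zones served by pDNS on the infra hosts
        ns_records:
          - hostname: ns1.example.com.
            priority: 1
        nameservers:
          - host: 172.29.236.11
            port: 53
        targets:
          - type: pdns4
            masters:
              - host: 172.29.236.10
                port: 5354
            options:
              host: 172.29.236.11
              port: 53
              api_endpoint: http://172.29.236.11:8081
              api_token: changeme
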
17:27 <spatel> kleini: nice! you are running pDNS on the OSA host machine?
17:29 <kleini> yes, on infra hosts. and from there the zones are transferred to our central DNS
17:30 <kleini> I had good support from my colleagues developing pDNS, otherwise pDNS deployment would have been as hard as designate deployment.
17:32 <spatel> We are running pDNS on a dedicated VMware guest machine totally isolated from openstack.
17:32 <spatel> Because DNS is the first machine we need in a datacenter before deploying any other services.
17:34 <kleini> this is similar here, too, and therefore I deployed here an additional instance being just responsible for the zones from Designate
17:37 <spatel> kleini: are you going to manage multiple zones using designate ?
17:38 <spatel> what did you set neutron_dns_domain: ?
17:42 <kleini> neutron_dns_domain: os.oxoe.int.
17:42 <kleini> but I am running here only two private clouds not even reachable from the internet, just in the company network
17:44 <kleini> both clouds provide the resources for running unit and performance tests of pDNS, dovecot and Open-Xchange App Suite
17:55 <spatel> kleini: let's say you want to add a new zone foo.bar.com, can you do that?
18:14 <admin0> kleini, thanks .. i am only on bind
18:14 <admin0> but wanted to move to pdns
18:14 <admin0> at least try pdns
18:15 <admin0> so need to manually create a container and install pdns there first ?
18:15 <admin0> give us the full howto :)
18:19 *** carlosmss has joined #openstack-ansible
18:19 <spatel> Do you guys use a tag to checkout RC1 or just master ?  (Example: git checkout tags/22.0.0.0rc1 )
18:22 <carlosmss> Hi guys, can someone help? I have a problem with bringing an interface link UP in openstack-ansible-stein, because the namespace used in the function that brings the "link" up in ha-mode is None. I edited the debug message to show the NS: "Interface brqxxx not found in namespace None" (get_link_id)
18:24 <admin0> spatel, i use tag
18:24 <admin0> always use a tag
18:25 <spatel> i mostly use the stable/<release> branch and never used tags
18:25 <spatel> In this case i want to try RC1 so maybe a tag would be good
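
(For completeness, checking out the tag rather than a branch is just the following; the tag name matches spatel's example above:)

    git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
    cd /opt/openstack-ansible
    git checkout 22.0.0.0rc1   # equivalent to: git checkout tags/22.0.0.0rc1
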
18:26 *** macz_ has joined #openstack-ansible
18:27 <admin0> good idea spatel .. on 22.0.0.rc1, for me it fails on TASK [haproxy_server : Unarchive HATop]
18:27 <spatel> Did you create the directory /var/cache/files by hand ?
18:28 <spatel> I think it's trying to untar inside /var/cache/files
18:28 <spatel> may be a bug.. i can take a look.. (currently deploying so should hit that bug)
18:31 <admin0> i should ?
18:31 <admin0> doesn't the bootstrap/playbooks take care of that :)
18:31 <admin0> i mean i don't recall doing this by hand ever in any other installs
18:31 <spatel> admin0: it does, that is why I'm saying maybe some race condition or bug
18:32 <spatel> Because HATop was broken earlier for python3 and we changed the pointer recently to use a newer binary
18:32 <masterpe> We have OSA 20.1.6 installed, we see a couple of times a day the following message: "ERROR oslo_messaging.rpc.server neutron_lib.exceptions.ProcessExecutionError: Exit code: 255; Stdin: ; Stdout: ; Stderr: Cannot find device "vxlan-10"" I have applied two patches to try to fix it (https://review.opendev.org/c/openstack/neutron/+/766939 and https://review.opendev.org/c/openstack/neutron/+/754005/) and I don't see the error any more. But why did it fix it, and which one? My feeling is that https://review.opendev.org/c/openstack/neutron/+/754005/ fixed my issue.
18:33 <spatel> I did deploy on my lab and i didn't see that bug
18:37 <spatel> admin0: https://opendev.org/openstack/openstack-ansible-haproxy_server/src/branch/master/tasks/haproxy_install.yml#L27
18:38 <admin0> and you are on the same tag ?
18:39 <spatel> In the lab i am on master (i had no issue there)
18:39 <admin0> so maybe fixed in master but not on the tag ?
18:39 <spatel> right now deploying the 22.0.0.rc1 tag so will see if i hit that bug
18:40 <admin0> ok
18:40 <admin0> reset your hosts :)
18:40 <admin0> if master already created that dir, then you will not hit that bug
18:40 <admin0> i reset and start greenfield every time
18:40 <admin0> and mine is also not AIO
18:41 <admin0> i don't know if aio or multi-node builds will have different playbooks
18:41 <spatel> i don't think aio and multi-node builds use different playbooks
18:41 <spatel> otherwise we will have a big problem :)
18:49 <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/victoria: Fix config_template trackbranch  https://review.opendev.org/c/openstack/openstack-ansible/+/768611
18:50 <admin0> https://gist.githubusercontent.com/a1git/7938dc0f22011770739c1b0917fbbd41/raw/f70046dc6e9c30097d1b4f501c0cf5a2e2a24a65/gistfile1.txt  -- this seems to always fail on this one particular host .. the URL seems to work fine
18:50 <admin0> is there a way for me to manually check/continue it
18:52 <spatel> admin0: check your repo container
18:52 <spatel> vendor.urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='172.29.236.9', port=8181): Read timed
18:53 <spatel> looks like it's not able to talk to the repo (possibly the repo service is dead or haproxy is not able to talk to it)
18:53 <admin0> curl http://172.29.236.9:8181/os-releases/21.2.0/ubuntu-20.04-x86_64/  gives me a whole lot of stuff from that same host
18:53 <admin0> also out of 3 hypervisors, 2 work fine .. just this one fails
18:53 <admin0> curl http://172.29.236.9:8181/os-releases/21.2.0/ubuntu-20.04-x86_64/ from this hypervisor also seems to work normally
18:54 <spatel> all 3 should work (if not then maybe the lsync services have an issue)
18:55 <spatel> I had lots of fun with the repo service so i would say look at that.. (remove 2 repos from haproxy, it will help to debug)
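
(A rough way to narrow that down along the lines spatel suggests; 172.29.236.9 is the address from the traceback above and the repo container address is a placeholder:)

    # from the failing hypervisor: test the wheel URL through the internal VIP/haproxy
    curl -sv --max-time 10 -o /dev/null http://172.29.236.9:8181/os-releases/21.2.0/ubuntu-20.04-x86_64/
    # then hit each repo container directly to find a dead or slow backend
    curl -sv --max-time 10 -o /dev/null http://<repo-container-ip>:8181/os-releases/21.2.0/ubuntu-20.04-x86_64/
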
18:58 <admin0> let me know if you run into the same bug as i did with hatop
18:59 <admin0> was planning to test 22 with ovn
18:59 <spatel> Doing it.. now
18:59 <spatel> admin0: what is the variable to change the Region name?
18:59 <spatel> i forgot
19:00 <spatel> is it region_name: foo in user_variables ?
19:00 <admin0> service_region
19:00 <admin0> service_region: 'foo'
19:01 <admin0> well, without the quotes
19:01 <spatel> got it
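
(In user_variables.yml that is simply the following; the region name is illustrative:)

    # user_variables.yml
    service_region: us-phx-1
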
19:11 *** sshnaidm has joined #openstack-ansible
19:11 *** macz_ has quit IRC
19:11 *** sshnaidm is now known as sshnaidm|rover
19:38 <kleini> admin0: I deploy pDNS as part of "before OSA" with the following role usage http://paste.openstack.org/show/801315/
19:40 <kleini> so I deploy it on shared-infra_hosts on metal and not in a container
20:16 <spatel> jrosser: are you around?
20:44 <admin0> kleini, thanks
20:44 <admin0> spatel, did you face the issue on that tag ?
20:44 <spatel> I am stuck in this place...
20:45 <spatel> admin0: http://paste.openstack.org/show/801317/
20:45 <spatel> very strange issue...
20:45 <spatel> trying to understand what is wrong here..
20:48 <admin0> Could not resolve hostname ostack-phx-api-1-1
20:48 <spatel> but i can
20:48 <spatel> trying to understand from which host it's not able to resolve
20:48 <admin0> are you using dns for anything and not ip .. like for nfs, glance etc
20:49 <spatel> no DNS
20:49 <spatel> no nfs etc
20:49 <spatel> it's a very simple OSA deployment which i did many times in the LAB
20:49 <spatel> never faced this issue before
20:53 <spatel> admin0: does your OSA deployment machine have its /etc/hosts file populated with all host names?
21:02 <admin0> checking
21:02 <spatel> admin0: i think i found the issue
21:03 <admin0> it does not
21:03 <spatel> it's a dns issue..
21:08 <spatel> re-running playbook
21:08 <spatel> ubuntu runs a local DNS at 127.0.0.53 and that was my issue
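
(In other words, the deploy host needs either working DNS for the node names or /etc/hosts entries like these; the address below is a placeholder:)

    # /etc/hosts on the deployment host, one line per target node
    172.29.236.11  ostack-phx-api-1-1
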
21:09 <spatel> do you upgrade ubuntu from time to time? (my motd is saying - 129 updates can be installed immediately. )
21:12 *** macz_ has joined #openstack-ansible
21:17 *** macz_ has quit IRC
21:29 *** yann-kaelig has joined #openstack-ansible
21:31 *** carlosmss has quit IRC
21:43 <spatel> admin0: i hit this bug - fatal: [ostack-phx-haproxy-1]: FAILED! => {"changed": false, "msg": "dest '/opt/cache/files/v0.8.0' must be an existing dir"}
21:44 <spatel> let me find out why we didn't see this on the master branch
21:56 <admin0> yes
21:56 <admin0> same with me
21:56 <admin0> so it is a bug :)
21:59 <admin0> i thought all tagged releases are also checked before they are released
21:59 <admin0> these bugs are not found by our ci ?
22:00 *** stduolc has quit IRC
22:00 *** stduolc has joined #openstack-ansible
22:01 <spatel> admin0: i found the bug so let me submit a patch
22:01 <spatel> this bug only impacts you if your deployment-host is a different machine (not part of infra)
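
(Pending the proper fix, one possible manual workaround, untested here and only a guess, is to pre-create the directory the unarchive task expects on the deployment host and re-run the haproxy playbook; the path is taken from the error message above:)

    mkdir -p /opt/cache/files/v0.8.0
    openstack-ansible haproxy-install.yml
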
22:05 *** rfolco has quit IRC
22:07 <openstackgerrit> Satish Patel proposed openstack/openstack-ansible-haproxy_server master: Fix HATop for haproxy  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/768615
22:07 <spatel> admin0: that is the fix
22:18 *** macz_ has joined #openstack-ansible
22:23 *** macz_ has quit IRC
22:33 *** yann-kaelig has quit IRC
22:43 *** spatel has quit IRC
23:53 *** rfolco has joined #openstack-ansible
