Monday, 2020-07-20

*** irclogbot_2 has quit IRC00:16
*** irclogbot_0 has joined #openstack-ansible00:19
openstackgerritSatish Patel proposed openstack/openstack-ansible-os_aodh master: Add centos-8 support  https://review.opendev.org/73940100:32
openstackgerritSatish Patel proposed openstack/openstack-ansible-os_gnocchi master: Add centos-8 support  https://review.opendev.org/74051300:33
*** markvoelker has joined #openstack-ansible01:36
*** markvoelker has quit IRC01:41
*** dave-mccowan has joined #openstack-ansible03:11
*** dave-mccowan has quit IRC03:57
*** raukadah is now known as chandankumar04:04
*** udesale has joined #openstack-ansible05:48
jrosserwatersj: the config file you need is templated out here https://github.com/openstack/openstack-ansible-os_masakari/blob/master/tasks/masakari_post_install.yml#L42-L4606:22
jrosserOSA uses something called 'config_template' rather than the standard ansible template module, so it's worth knowing about this https://docs.openstack.org/openstack-ansible/latest/reference/configuration/using-overrides.html#overriding-openstack-configuration-defaults06:23
jrosseryou would want something like this in your user_variables http://paste.openstack.org/show/796098/06:25
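
The paste above is ephemeral, so for reference here is a minimal sketch of the kind of user_variables override jrosser is describing, built around the restrict_to_remotes setting watersj mentions later in the day. The variable name masakari_monitors_conf_overrides is an assumption; the real override hook is defined in the os_masakari role defaults, and config_template merges the dict into the rendered config file.

    # user_variables.yml -- sketch only; the override variable name is a guess,
    # check the os_masakari role defaults for the real one.
    masakari_monitors_conf_overrides:
      host:
        restrict_to_remotes: True
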
*** this10nly has joined #openstack-ansible06:44
*** shyamb has joined #openstack-ansible07:01
*** shyamb has quit IRC07:20
*** yolanda has joined #openstack-ansible07:31
*** tosky has joined #openstack-ansible07:34
*** jbadiapa has joined #openstack-ansible07:51
*** andrewbonney has joined #openstack-ansible07:58
*** markvoelker has joined #openstack-ansible08:04
*** markvoelker has quit IRC08:09
*** markvoelker has joined #openstack-ansible08:14
*** markvoelker has quit IRC08:24
*** mjwales has joined #openstack-ansible08:25
mjwalesHi, attempting an upgrade from Rocky to Stein on the 19.1.2 tag but keystone fails to deploy with the error "No matching distribution found for keystone==15.0.2.dev3". I cannot find where this can be changed to 15.0.1 as there is no 15.0.2.dev3 listed on https://pypi.org/project/keystone/#history08:27
*** markvoelker has joined #openstack-ansible08:27
*** markvoelker has quit IRC08:32
jrossermjwales: openstack-ansible installs python packages like keystone from source code rather than pypi08:37
mjwalesjrosser: okay that ends that idea. Any ideas how to resolve "ERROR: Could not find a version that satisfies the requirement keystone==15.0.2.dev3" in the keystone task "TASK [python_venv_build : Install python packages into the venv]"08:38
jrosseri would expect that there will be better information in the venv build log08:40
mjwalesThe only thing that sticks out is " Could not fetch URL http://172.29.236.9:8181/os-releases/19.1.2/ubuntu-18.04-x86_64: 404 Client Error: Not Found for url: http://172.29.236.9:8181/os-releases/19.1.2/ubuntu-18.04-x86_64/ - skipping"08:42
mjwalesSo the ubuntu-18.04-x86_64 directory exists on the infra1 repo container but has not synced across to infra2 or infra3. Is there any way to force a sync?08:43
*** shyamb has joined #openstack-ansible08:46
mjwalesLooking at the lsyncd log on infra, the log contains "rsync: failed to open "/var/www/repo/repo_prepost_cmd.sh", continuing: Permission denied (13)"08:46
mjwalesThe permissions on that file are "-rwxr-x--- 1 root root 385 Jul 17 09:14 /var/www/repo/repo_prepost_cmd.sh". By chance does anyone know if they are correct?08:46
jrosseri have -rwxr-x---  1 root  root     471 Jun 17  2019 repo_prepost_cmd.sh08:49
mjwalesinfra1 has root:root whereas infra2 and infra3 have "-rwxr-x---  1 nginx www-data  385 Jul 17 09:14 repo_prepost_cmd.sh"08:50
*** shyam89 has joined #openstack-ansible08:55
jrossermjwales: if you systemctl restart lsyncd on infra1 it will retry the sync, then you can look at what's failing08:57
jrosseruntil it is synced across the 3 hosts you are in a bit of trouble because the loadbalancer will round-robin around the built wheels on the infra hosts08:57
jrosserand only one in three of those will succeed08:57
*** shyamb has quit IRC08:58
mjwalesThere was a syntax error in /etc/lsyncd/lsyncd.conf.lua on line 594: 'exclude = links"' should be 'exclude = "links"'09:01
admin0morning09:05
mjwalesjrosser: I don't currently have the time to patch this but the error relates to https://opendev.org/openstack/openstack-ansible-repo_server/src/branch/stable/stein/templates/lsyncd.lua.j2. Missing opening '"' on line 59609:10
jrossermjwales: https://review.opendev.org/#/c/73177909:14
mjwalesjrosser: will this get ported back to Stein? It's fixed in Train.09:15
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-repo_server stable/stein: add missing quote to lsyncd.lua template  https://review.opendev.org/74189609:15
*** kleini has joined #openstack-ansible09:16
*** sshnaidm|off is now known as sshnaidm09:27
mjwalesIs there a process for upgrading the Ceph cluster? After running run-upgrade.sh (R -> S) our Ceph cluster is still on Luminous but the Ceph clients in the containers (e.g. cinder) are running Mimic.09:39
*** also_stingrayza is now known as stingrayza09:58
*** shyamb has joined #openstack-ansible09:59
*** shyam89 has quit IRC10:01
*** tosky has quit IRC10:08
*** irclogbot_0 has quit IRC10:08
*** NewJorg has quit IRC10:08
*** vesper11 has quit IRC10:08
*** mmethot has quit IRC10:09
*** gokhani has quit IRC10:09
*** tosky has joined #openstack-ansible10:10
*** irclogbot_0 has joined #openstack-ansible10:10
*** NewJorg has joined #openstack-ansible10:10
*** vesper11 has joined #openstack-ansible10:10
*** mmethot has joined #openstack-ansible10:10
*** gokhani has joined #openstack-ansible10:10
*** this10nly has quit IRC10:17
*** arkan has joined #openstack-ansible10:18
arkanHi masters :)10:19
arkanguys I have installed "Designate"10:19
arkanand I have integrated it with neutron10:21
arkaneverything seems to work10:21
arkanmy vms have automatic "A" records added into the zone10:21
arkanand I can ping them10:22
arkanwhat remains is:10:22
*** shyamb has quit IRC10:22
arkanI see that the vms are using the domain that I've set in dns_domain = arkan.cloud. in /etc/neutron/neutron.conf10:27
arkanthe instructions are here https://docs.openstack.org/neutron/train/admin/config-dns-int.html10:27
arkanI'm just asking if this is possible10:27
arkanI want to use another project "development" and I want the vms to be named in the form10:28
arkane.g. vm1.development.arkan.cloud10:28
arkanI tried different things but I could not succeed10:29
arkanI've also created a zone development.arkan.cloud but the vms are always in the form vm1.arkan.cloud10:29
arkanthis is because I have dns_domain = arkan.cloud in /etc/neutron/neutron.conf10:30
arkanis there a solution for this ?10:30
kleiniThe names and networks for VMs are transported through metadata to the VMs. AFAIK the main source for the metadata is Nova and not Neutron. So maybe Nova needs to know about installed Designate, too.10:30
arkankleini: I just did what the docs have10:31
arkanand following the docs it worked for me10:31
arkanbut in my use case I don't know how I can have different subdomains for the vms10:31
arkanI even tried giving the vm the name test.development, and the result was test.development.arkan.cloud10:32
arkanbut it had a problem creating the networking10:32
arkanthis happens when the vm boots10:32
arkanOk, this was my first problem10:33
kleiniI just found: openstack network set --dns-domain sample.openstack.org10:33
kleiniSo DNS domain in Neutron networks needs to be linked with Zone in Designate somehow10:34
kleiniif that is possible at all10:34
arkanI did this also10:34
arkanI have added openstack network set --dns-domain development.arkan.cloud. internal-net10:35
arkanbut the vms that are using internal-net also used .arkan.cloud, not development.arkan.cloud10:35
*** trident has quit IRC10:35
*** trident has joined #openstack-ansible10:36
kleinihttps://docs.openstack.org/neutron/latest/admin/config-dns-int.html10:36
kleiniDid you redeploy all Neutron components after configuring Designate for internal DNS resolution?10:36
arkanno I did not redeploy all neutron components10:37
arkanI restarted the services to check the dns integration10:37
arkanin the beginning I didn't have DNS integration10:38
arkanafter what I've done (reading the docs), the DNS integration worked10:39
arkanafter applying what the docs say, the DNS integration worked10:40
kleiniI am sure some Neutron components now require a different configuration. Like ml2 extension_drivers = dns_domain_ports10:40
*** stingrayza has quit IRC10:40
arkanI have them configured10:40
jrossermjwales: https://review.opendev.org/#/c/711440/10:40
arkankleini: I have them configured in /etc/neutron/plugins/ml2/ml2_conf.ini10:41
arkan[ml2]10:41
arkanextension_drivers = port_security,dns_domain_ports10:41
kleiniThen I am out of ideas.10:41
*** npalladium has joined #openstack-ansible10:42
arkanwhat you have in the doc, is configured in my case10:42
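
In an OSA-managed deployment the settings discussed above (dns_domain in neutron.conf and the dns_domain_ports extension driver in ml2_conf.ini) would normally be expressed as user_variables overrides rather than edited in place. A sketch, assuming the usual os_neutron override variables; check the names against the role defaults:

    # user_variables.yml -- sketch only
    neutron_neutron_conf_overrides:
      DEFAULT:
        dns_domain: arkan.cloud.          # global fallback domain
        external_dns_driver: designate    # publish records to Designate
    neutron_ml2_conf_ini_overrides:
      ml2:
        # dns_domain_ports lets dns_domain be set on individual ports, taking
        # precedence over the global neutron.conf value
        extension_drivers: port_security,dns_domain_ports
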
mjwalesjrosser: thank you10:42
arkankleini: ok, this was my first problem10:43
arkanthe other one10:43
arkanhow can I configure dnsmasq so it does not go outside to resolve my domain10:43
jrossermjwales: that patch is not merged yet but if you are able to see if the method is working it would help with knowing it's correct10:43
arkankleini: for example my public domain is openstack.arkan.cloud which has a public (ISP) IP10:44
arkanI want to set dnsmasq to not resolve to that IP, but to 192.168.50.210 instead10:44
arkanI want my vms to use this IP, otherwise they will go outside and hit the public IP (assigned by the ISP)10:45
npalladiumI'm having some trouble understanding internal_lb_vip_address/external_lb_vip_address. I've gone through docs/playbook comments but I'm still unable to understand them. Could someone please give a simple explanation of what they are?10:45
arkankleini: on the controller node I have it in /etc/hosts10:46
arkanbut the vms are using dnsmasq, I need to set it somewhere10:46
arkanI searched the net to find some place to add:10:47
jrossernpalladium: in general, production deployments of openstack (rather than very simple test environments) have a quite distinct separation between "internal" and "external" networks10:47
arkanaddress=/openstack.arkan.cloud/192.168.50.21010:47
arkanbut I think lxc-dnsmasq is not using /etc/dnsmasq/dnsmasq.conf10:48
npalladiumMore specifically, does openstack-ansible (Ussuri) set up the lbs? If not, how do I configure them? (For context, I'm getting the pip error described in https://ask.openstack.org/en/question/104307/openstack-ansible-pip-issues-while-installing-the-infrastructure/ as the load balancer IPs haven't been configured by me.)10:48
jrossernpalladium: the inner workings of the cloud all communicate on the internal network, like rabbitmq, galera and the API services10:48
jrosseran end user of your cloud will only be able to see the external network10:48
openstackgerritChandan Kumar (raukadah) proposed openstack/openstack-ansible master: [WIP] improve ironic tempest testing  https://review.opendev.org/73650710:50
jrossernpalladium: openstack-ansible will configure a haproxy on your controller, that is the loadbalancer10:50
jrossernpalladium: you need to give an address for haproxy on the internal (br-mgmt) network, and also whatever your external network is10:51
npalladiumjrosser: Thanks! Quick clarification that this is a test setup. The internal and external explanation makes sense. I have set both the internal and external lb vips to IPs (which are not being used) on the management network. The HAProxy host is the same as my infra host. I don't see a HAProxy service going up on it.10:55
jrosserthere is a playbook to deploy haproxy as part of setup_infrastructure.yml10:56
jrosserperhaps run the haproxy playbook on its own and try to see what it does10:56
mjwalesjrosser: will give the Ceph upgrade a go once setup-openstack.yml is done10:56
npalladiumThe HAProxy play has run successfully.10:57
jrosserthen you should see haproxy running on the controller?11:00
*** shyamb has joined #openstack-ansible11:01
CeeMacnpalladium: it's worth noting, if you haven't done so already, that the ip used for the internal vip should also be added to the 'used ips' list in openstack_user_config, so you don't end up accidentally having it allocated out to a container11:02
CeeMacthat totally didn't happen to me once, honestly >_>11:03
jrosseroh yes! out of a whole /24 or greater it does seem to have an uncanny way of allocating the one you picked by hand11:06
CeeMacindeed11:06
npalladiumjrosser: Yup. I goofed up with that. Although I don't think that IP is occupied, I'll fix it right away. Assuming the external vip should be added too? (We don't really have an external network; so just allotting an IP in the management network itself.)11:09
jrossernpalladium: yes, add them both to used_ips11:10
CeeMacnpalladium: if the 'external' ip is from one of your defined container networks, yes11:10
jrosserit's completely fine to have them both on the management network11:10
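
For reference, a minimal sketch of what that looks like in openstack_user_config.yml; the addresses are placeholders for whatever VIPs were actually chosen:

    # openstack_user_config.yml (fragment)
    used_ips:
      - "172.29.236.101"                    # internal_lb_vip_address
      - "172.29.236.102"                    # external_lb_vip_address
      - "172.29.236.110,172.29.236.120"     # ranges can be reserved too
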
watersjjrosser, ya i found it - by setting restrict_to_remotes = True under host section. thx for response11:12
jrosserwatersj: do you think this should be a default in the template? i've not really any experience with masakari to comment on that11:13
watersjjrosser, for me I am going to have about 200 nodes, past the 16 node limit.11:14
watersjyes11:14
watersjjrosser, osa is not setting up corosync|pacemaker (yet). Maybe having it in the docs would be a start.11:15
jrosserwatersj: yes it looks like noonedeadpunk is still working on the corosync part11:16
npalladiumjrosser: Thanks again! Anything else I need to look out for?11:16
*** mjwales has quit IRC11:17
jrossernpalladium: you should be fine i think, you need to have put the IP for the internal/external endpoint on an interface yourself if it's just a single controller node11:18
npalladiumjrosser: Didn't quite understand the last message. Could you please elaborate on that?11:21
jrosseryou have just a single controller don't you?11:21
npalladiumMy setup has an infra node and a compute node.11:23
jrossernpalladium: when you have the normal 3 controllers, haproxy is installed onto each of them11:24
jrosserthen keepalived is also installed automatically but only when controllers > 111:25
jrosserkeepalived normally is responsible for assigning the VIP to an interface that you specify11:25
jrosserbut with only one controller keepalived is not deployed, so you must manually assign the internal/external endpoint IP addresses to an interface on your single controller node11:26
npalladiumjrosser: I did not do the assignment. Must be the reason why it's unreachable.11:28
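
What "manually assign" looks like depends on the distro. A sketch assuming an Ubuntu host managed with netplan; the bridge name, member interface and addresses are all placeholders:

    # /etc/netplan/50-br-mgmt.yaml -- sketch only, activate with 'netplan apply'
    network:
      version: 2
      bridges:
        br-mgmt:
          interfaces: [ eth1 ]
          addresses:
            - 172.29.236.11/22     # the host's own br-mgmt address
            - 172.29.236.101/22    # internal_lb_vip_address
            - 172.29.236.102/22    # external_lb_vip_address (same network here)
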
*** stingrayza has joined #openstack-ansible11:36
*** shyamb has quit IRC11:38
*** pcaruana has quit IRC11:40
*** shyamb has joined #openstack-ansible11:41
*** soren has left #openstack-ansible11:47
*** pcaruana has joined #openstack-ansible11:51
*** rh-jelabarre has joined #openstack-ansible12:03
*** shyamb has quit IRC12:07
*** mjwales has joined #openstack-ansible12:10
*** yolanda has quit IRC12:14
*** yolanda has joined #openstack-ansible12:15
*** rh-jelabarre has quit IRC12:23
*** rh-jelabarre has joined #openstack-ansible12:28
*** also_stingrayza has joined #openstack-ansible12:35
*** stingrayza has quit IRC12:37
*** also_stingrayza is now known as stingrayza12:37
*** spatel has joined #openstack-ansible12:53
spateljrosser: I am stuck here, i did as you mentioned but now it's failing on something else https://review.opendev.org/#/c/739400/12:56
spatelthinking of deleting all these patches and starting with a fresh one12:56
jrosserspatel: don't delete them12:58
spatelok12:58
jrosserlook at this file https://github.com/openstack/openstack-ansible/blob/master/ansible-role-requirements.yml12:59
jrosserthen look at it for the ussuri branch https://github.com/openstack/openstack-ansible/blob/stable/ussuri/ansible-role-requirements.yml12:59
jrosserall of the versions are switched from master to a specific SHA on ussuri13:00
spateloh yeahh13:00
spatelZuul only works with master, right?13:01
spatelwhy did it get switched from master to a sha?13:02
*** udesale_ has joined #openstack-ansible13:03
*** udesale has quit IRC13:06
spateljrosser: this is what Zuul is fetching right? https://87a1a9986bdad83ca04a-c8f1fa3a42531750546fabcdaf4f3d24.ssl.cf1.rackcdn.com/739400/3/check/openstack-ansible-deploy-aio_metal-centos-8/14cdd3b/logs/etc/host/openstack_deploy/user_variables_zuulrepos.yml.txt13:08
jrosserspatel: it is switched from master to a sha because it is a stable branch, that is how we do releases in OSA13:09
jrosserit's not really anything to do with zuul13:09
jrosseron a stable branch it's necessary to update the SHA in openstack-ansible/ansible-role-requirements in order to update the versions of the ansible roles used on that branch13:09
spatelgot it13:11
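
For illustration, the shape of an entry in ansible-role-requirements.yml; the SHA below is a placeholder, not a real pin:

    # on master the roles track their own master branch
    - name: os_keystone
      scm: git
      src: https://opendev.org/openstack/openstack-ansible-os_keystone
      version: master
    # on stable/ussuri the same entry is pinned to a specific commit
    - name: os_keystone
      scm: git
      src: https://opendev.org/openstack/openstack-ansible-os_keystone
      version: 0123456789abcdef0123456789abcdef01234567   # placeholder SHA
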
spateljrosser: you think that is my issue ?13:11
jrosseryes, because you need fixes for centos-8 from all sorts of ansible roles13:12
jrosserwe need to move the SHA forward for all the roles on ussuri in order to merge the main centos-8 patch13:12
jrosseri was trying to find the script/instructions for doing this but didn't find it13:12
spateloh! okay13:13
spateli am on standby until you say to do something :)13:13
*** dave-mccowan has joined #openstack-ansible13:17
openstackgerritJonathan Rosser proposed openstack/openstack-ansible stable/ussuri: Bump role SHA for ussuri  https://review.opendev.org/74193913:31
spatel+113:37
jrosserit's -2 tbh13:37
spatelouch!13:38
openstackgerritGeorgina Shippey proposed openstack/openstack-ansible-os_keystone master: Add CADF notifications for federated keystone  https://review.opendev.org/74194613:45
*** mjwales has quit IRC13:51
*** d34dh0r53 has joined #openstack-ansible13:55
*** spatel has quit IRC14:04
chandankumarjrosser: from ironic testing https://dcdef67f370bf151c86e-9e1e29a469f8f669289667ecce245a85.ssl.cf1.rackcdn.com/727067/4/check/openstack-ansible-deploy-aio_metal-centos-7/67c4e85/logs/openstack/aio1-utility/stestr_results.html without any patch14:04
chandankumarjrosser: let me try with https://review.opendev.org/727692 and https://review.opendev.org/#/c/727692/ with additional config in https://review.opendev.org/736507 and see how it goes14:08
*** aedc has joined #openstack-ansible14:10
openstackgerritJonathan Rosser proposed openstack/openstack-ansible stable/ussuri: Bump role SHA for ussuri  https://review.opendev.org/74193914:33
*** mgariepy has quit IRC14:41
*** mgariepy has joined #openstack-ansible14:51
*** kaiokmo has joined #openstack-ansible15:01
kaiokmohi folks15:02
kaiokmodoes anyone know a way to hide openstack openrc variables from an ansible execution in debug mode? that is, if I have openstack env vars (OS_*) already exported and I run some playbook with verbosity 3 (either with -vvv or having it set in ansible.cfg), the playbook will output the env vars.15:05
kaiokmofor instance: <127.0.0.1> EXEC /bin/sh -c 'OS_PROJECT_NAME=xpto OS_USERNAME=foo OS_USER_DOMAIN_NAME=bar OS_PASSWORD=passwd .....'15:05
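
No answer is given here, but one common mitigation (a sketch, not specific to kaiokmo's playbook) is to pass the credentials per task and mark that task with no_log so Ansible does not print its arguments or results; it is worth re-checking at -vvv that nothing still leaks via connection-level debug output:

    # Sketch: credential-bearing task with logging suppressed; the variables
    # are hypothetical (e.g. loaded from an Ansible vault).
    - name: Run an openstack CLI command without echoing credentials
      command: openstack server list
      environment:
        OS_AUTH_URL: "{{ os_auth_url }}"
        OS_USERNAME: "{{ os_username }}"
        OS_PASSWORD: "{{ os_password }}"
      no_log: true
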
*** jgwentworth is now known as melwitt15:19
*** mjwales has joined #openstack-ansible15:28
mjwalesjrosser: trying to run ceph-mimic-upgrades.yml results in "'ansible_pkg_mgr' is undefined". Does an extra argument need to be provided to the openstack-ansible command?15:29
*** spatel has joined #openstack-ansible15:29
jrossermjwales: that should be an ansible fact i think15:38
jrosserperhaps the ansible setup module is not being run at some point?15:39
mjwalesgather_facts: false is declared in the yaml. Assuming this is incorrect and it needs to gather facts to determine package manager?15:40
jrosseras it's only targeting the mon and mgr you could just switch that to be true15:50
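
A sketch of the suggested change to the play header; the host pattern shown is an assumption, keep whatever the playbook already targets:

    # ceph-mimic-upgrades.yml (play header sketch)
    - hosts: ceph-mon:ceph-mgr   # assumption: the hosts the play already targets
      gather_facts: true         # was false; ansible_pkg_mgr only exists once facts are gathered
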
*** gyee has joined #openstack-ansible15:51
*** sshnaidm is now known as sshnaidm|afk15:57
*** udesale_ has quit IRC16:28
*** andrewbonney has quit IRC17:06
openstackgerritJonathan Rosser proposed openstack/openstack-ansible stable/ussuri: Add Centos-8 support  https://review.opendev.org/74028917:33
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_ceilometer master: Remove installation of python2 distro packages for debian  https://review.opendev.org/74199617:47
openstackgerritGeorgina Shippey proposed openstack/openstack-ansible-ops master: Install virtualenv  https://review.opendev.org/74199717:50
openstackgerritGeorgina Shippey proposed openstack/openstack-ansible-ops master: Collect keystone apache federation files  https://review.opendev.org/74123617:51
jrosserspatel: for your ceilometer patch i think this is important to understand https://review.opendev.org/#/c/712096/18:05
jrosserit is probably an example of a place that we do need redhat-7.yml and redhat-8.yml variables files so that the conditional logic stays simple18:06
spateljrosser: oh! that is nice catch18:07
jrosseryeah so i think split that into two files18:07
spateljrosser: will do that sure18:08
jrossermaybe it is possible to use just one file18:09
jrosserthe actual error on centos-8 is because it is trying to build the wheel for libvirt-python18:10
jrosserbecause this is unconditional https://github.com/openstack/openstack-ansible-os_ceilometer/blob/master/vars/redhat-7.yml#L46-L4718:10
spateljrosser: but the same patch works in my lab (but it failed in zuul)18:11
jrosserthe error is because the distro package libvirt-devel is not installed18:12
jrosserperhaps that was present on your test18:12
jrosserhere is the log http://paste.openstack.org/show/796136/18:13
jrosser2020-07-19T22:21:30,233     Package libvirt was not found in the pkg-config search path.18:14
jrosser2020-07-19T22:21:30,234     Perhaps you should add the directory containing `libvirt.pc'18:14
jrosserthat's the root cause18:14
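
A rough sketch of what an os_ceilometer vars/redhat-8.yml could add to address that; the list name follows the usual <role>_distro_packages convention but is a guess, so copy the real variable name from the existing vars/redhat-7.yml:

    # vars/redhat-8.yml -- sketch only; variable name is assumed, not verified
    ceilometer_distro_packages:
      - libvirt-devel   # provides libvirt.pc so the libvirt-python wheel can build
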
mjwalesAfter upgrading from Rocky to Stein we are getting 'OSError: [Errno 22] failed to open netns' errors logged against neutron services. Is this a known issue?18:15
spateljrosser: i noticed that error also but it passed in my lab so i thought it should be something else18:15
mjwalesThe netns error is preventing any networks from starting so our cluster is down :(18:39
*** spatel has quit IRC18:42
openstackgerritGeorgina Shippey proposed openstack/openstack-ansible-ops master: Install virtualenv  https://review.opendev.org/74199718:44
*** tosky has quit IRC18:57
*** spatel has joined #openstack-ansible19:49
arkanhi guys19:51
arkanI've solved the problem19:51
arkannow I can ping my openstack.arkan.cloud from inside (not from outside world through my ISP IP)19:52
arkannow it works from vms and also it works from containers19:52
arkanI solved it with dnsmasq19:52
arkanthis is important because I had a problem with magnum kube-master, which was connecting to my openstack.arkan.cloud from outside19:54
arkannow I will see what will happen, maybe it will work :)19:54
*** mjwales has quit IRC20:18
*** dave-mccowan has quit IRC20:44
*** dave-mccowan has joined #openstack-ansible20:46
*** markvoelker has joined #openstack-ansible20:58
*** markvoelker has quit IRC21:02
*** spatel has quit IRC21:06
*** aedc has quit IRC21:14
*** aedc has joined #openstack-ansible21:15
npalladiumjrosser: I have a working OpenStack setup now. Thanks a lot!21:17
*** markvoelker has joined #openstack-ansible21:19
*** aedc has quit IRC21:20
jrossernpalladium: awesome :) well done!21:21
*** markvoelker has quit IRC21:23
*** jbadiapa has quit IRC21:24
*** arkan has quit IRC21:47
*** arkan has joined #openstack-ansible21:49
*** npalladium has quit IRC21:51
*** kaiokmo has quit IRC22:36
*** markvoelker has joined #openstack-ansible22:53
*** markvoelker has quit IRC23:03
