Friday, 2018-10-12

*** mma has quit IRC00:05
*** gyee has quit IRC00:29
*** mattoliverau has joined #openstack-ansible00:37
openstackgerritAlex Redinger proposed openstack/openstack-ansible-os_keystone master: Add memcache flushing handler on db migrations  https://review.openstack.org/60806600:43
*** mma has joined #openstack-ansible01:02
*** mma has quit IRC01:06
*** cshen has joined #openstack-ansible01:14
*** mmercer has quit IRC01:15
*** cshen has quit IRC01:19
*** jonher has quit IRC01:26
*** jonher has joined #openstack-ansible01:26
*** faizy98 has joined #openstack-ansible01:29
*** faizy_ has quit IRC01:32
*** spatel has joined #openstack-ansible01:34
*** mma has joined #openstack-ansible01:40
*** fatdragon has quit IRC01:41
*** mma has quit IRC01:44
*** macza has joined #openstack-ansible01:58
*** macza has quit IRC02:02
*** francois has quit IRC02:31
*** francois has joined #openstack-ansible02:31
openstackgerritAlex Redinger proposed openstack/openstack-ansible-os_keystone master: Add memcache flushing handler on db migrations  https://review.openstack.org/60806602:40
*** mma has joined #openstack-ansible02:41
*** ram5391 has joined #openstack-ansible02:43
*** mma has quit IRC02:45
ram5391is there a known issue in the keystone deployment phase where haproxy only accepts https, but the deployment tests use http?02:45
*** lbragstad has joined #openstack-ansible02:45
ram5391or some config I'm missing that sets what to use?02:46
*** cshen has joined #openstack-ansible02:57
ram5391hoping I found the solution using keystone_service_publicuri_proto as per: https://docs.openstack.org/openstack-ansible-os_keystone/ocata/02:59
*** fatdragon has joined #openstack-ansible02:59
*** cshen has quit IRC03:02
*** fatdragon has quit IRC03:08
*** macza has joined #openstack-ansible03:18
*** macza has quit IRC03:22
*** mma has joined #openstack-ansible03:22
*** mma has quit IRC03:27
*** jonher has quit IRC03:30
*** jonher has joined #openstack-ansible03:30
*** vnogin has joined #openstack-ansible03:31
ram5391It seems like the tests for keystone out of the box are configured to check for 'http', not 'https', whereas the otb config for keystone is to use 'https'. I can verify that the service is running both on the container and via haproxy over 'https', but not 'http'. If this isn't a known issue, I'll create a ticket for it03:31
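For anyone hitting the same mismatch, a minimal override sketch for user_variables.yml, assuming the keystone_service_publicuri_proto variable from the os_keystone docs linked above:

```yaml
# /etc/openstack_deploy/user_variables.yml -- sketch only, assuming the
# variable documented for the os_keystone role; the idea is to make the
# endpoint protocol match what haproxy actually terminates.
keystone_service_publicuri_proto: https
```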
*** vnogin has quit IRC03:36
*** fatdragon has joined #openstack-ansible03:38
*** fatdragon has quit IRC03:49
*** ram5391 has quit IRC03:52
*** udesale has joined #openstack-ansible03:54
*** spatel has quit IRC04:00
*** macza has joined #openstack-ansible04:04
*** canori01 has quit IRC04:06
*** faizy_ has joined #openstack-ansible04:07
*** macza has quit IRC04:10
*** macza has joined #openstack-ansible04:10
*** faizy98 has quit IRC04:11
*** macza has quit IRC04:15
*** macza has joined #openstack-ansible04:16
*** fatdragon has joined #openstack-ansible04:17
*** macza has quit IRC04:20
*** lbragstad has quit IRC04:21
*** mma has joined #openstack-ansible04:24
*** mma has quit IRC04:28
*** fatdragon has quit IRC04:29
*** defionscode has quit IRC04:30
*** yetiszaf has quit IRC04:30
*** defionscode has joined #openstack-ansible04:33
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added support for installing tempest from distro  https://review.openstack.org/59142404:56
*** cshen has joined #openstack-ansible04:57
*** macza has joined #openstack-ansible04:57
*** fatdragon has joined #openstack-ansible04:57
*** macza has quit IRC05:02
*** cshen has quit IRC05:03
openstackgerritAlex Redinger proposed openstack/openstack-ansible-os_keystone master: Add memcache flushing handler on db migrations  https://review.openstack.org/60806605:09
*** fatdragon has quit IRC05:09
*** cshen has joined #openstack-ansible05:13
*** pcaruana has joined #openstack-ansible05:15
*** cshen has quit IRC05:18
*** mma has joined #openstack-ansible05:23
*** olivierb has joined #openstack-ansible05:30
*** chkumar|off is now known as chandankumar05:34
*** fatdragon has joined #openstack-ansible06:11
chandankumarodyssey4me: Good morning06:16
chandankumarodyssey4me: http://logs.openstack.org/24/591424/21/check/openstack-ansible-functional-centos-7/2045287/job-output.txt.gz#_2018-10-12_06_00_11_01535706:16
chandankumarodyssey4me: while installing python packages in the venv it is failing06:17
openstackgerritAlex Redinger proposed openstack/openstack-ansible-os_keystone master: Add memcache flushing handler on db migrations  https://review.openstack.org/60806606:17
chandankumarplease have a look06:18
*** cshen has joined #openstack-ansible06:21
*** fatdragon has quit IRC06:22
*** DanyC has quit IRC06:34
*** hamzaachi has joined #openstack-ansible06:37
*** hamzaachi has quit IRC06:40
*** hamzaachi has joined #openstack-ansible06:45
*** hamzaachi has quit IRC06:46
*** hamzaachi has joined #openstack-ansible06:46
*** shardy has joined #openstack-ansible07:18
*** hamzaachi has quit IRC07:25
*** fatdragon has joined #openstack-ansible07:29
*** faizy98 has joined #openstack-ansible07:37
*** faizy_ has quit IRC07:41
*** fatdragon has quit IRC07:41
deployer2Hi! odyssey4me, mgariepy, chandankumar something very similar here - markers 'python_version == "3.4"' don't match your environment07:43
deployer2I have applied patch https://review.openstack.org/#/c/608042/3 to 18.0.0.0rc3, now failing on TASK [repo_build : Create OpenStack-Ansible requirement wheels] https://pastebin.com/raw/20jaHnUT07:44
*** tosky has joined #openstack-ansible07:45
*** mma has quit IRC07:45
*** mma has joined #openstack-ansible07:45
odyssey4medeployer2: you need to apply that patch to the head of stable/rocky, not just to 18.0.0.0rc307:46
*** mma has quit IRC07:47
odyssey4memorning folks, looks like opensuse is broken again :/07:48
chandankumarodyssey4me: for this one centos failure https://review.openstack.org/#/c/591424/07:48
chandankumarwhat to do?07:48
deployer2ok, will try to figure out how to do that. Meanwhile - has anyone got successful run of all playbooks for rocky on ubuntu 18.04?07:49
chandankumarodyssey4me: how to fix this part https://review.openstack.org/#/c/591424/21/tasks/tempest_install.yml@128 ? if we pass tempest_install_method = distro it is going to execute all the steps07:51
chandankumarfor source, shall I move it to a separate yaml?07:52
odyssey4mechandankumar: reviewed, much the same comments as done before07:56
chandankumarodyssey4me: updating thanks :-)07:59
odyssey4mechandankumar: also, the include_vars has gone - that needs to be returned08:00
*** electrofelix has joined #openstack-ansible08:04
*** rpittau has quit IRC08:09
*** rpittau has joined #openstack-ansible08:10
*** rpittau has quit IRC08:10
*** rpittau has joined #openstack-ansible08:11
*** macza has joined #openstack-ansible08:18
*** macza has quit IRC08:19
*** macza has joined #openstack-ansible08:19
*** faizy_ has joined #openstack-ansible08:20
*** faizy98 has quit IRC08:20
*** cshen has quit IRC08:23
*** macza has quit IRC08:24
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible master: Use the compute kit + horizon for all distros  https://review.openstack.org/60932908:29
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible master: Restore OpenSUSE voting jobs  https://review.openstack.org/60935308:29
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible master: Use the compute kit + horizon for all distros  https://review.openstack.org/60932908:31
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible master: Restore OpenSUSE voting jobs  https://review.openstack.org/60935308:34
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible master: Restore bionic/ceph voting jobs  https://review.openstack.org/60996508:34
*** devx has quit IRC08:39
*** devx has joined #openstack-ansible08:40
*** cshen has joined #openstack-ansible08:51
chandankumarodyssey4me: by the way python-tempestconf and stackviz package are now available in openstack/rpm-packaging :-)08:53
*** DanyC has joined #openstack-ansible08:55
*** cshen has quit IRC08:55
openstackgerritVieri proposed openstack/openstack-ansible-memcached_server master: fix tox python3 overrides  https://review.openstack.org/60997309:03
openstackgerritVieri proposed openstack/openstack-ansible-os_gnocchi master: fix tox python3 overrides  https://review.openstack.org/60997509:06
openstackgerritVieri proposed openstack/openstack-ansible-os_heat master: fix tox python3 overrides  https://review.openstack.org/60997609:11
openstackgerritVieri proposed openstack/openstack-ansible-os_almanach master: fix tox python3 overrides  https://review.openstack.org/60997709:13
openstackgerritVieri proposed openstack/openstack-ansible-os_ceilometer master: fix tox python3 overrides  https://review.openstack.org/60997909:16
evrardjpthanks for the rechecks odyssey4me09:18
openstackgerritVieri proposed openstack/openstack-ansible-openstack_hosts master: fix tox python3 overrides  https://review.openstack.org/60998009:18
odyssey4meevrardjp: yeah, no worries - it's a bit frustrating :/09:18
evrardjpI am sorry, I should be on the ball on this -- I just realised my notifications need to be improved :p09:19
evrardjpI see what goes to gating, not what fails in gating09:20
evrardjpso I always assume it's passing which is not the case :p09:20
*** cshen has joined #openstack-ansible09:29
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added support for installing tempest from distro  https://review.openstack.org/59142409:30
*** cshen has quit IRC09:34
*** faizy98 has joined #openstack-ansible09:35
*** faizy_ has quit IRC09:37
odyssey4mechandankumar: almost there, just a few edits to go09:38
chandankumarodyssey4me: sure09:38
chandankumarodyssey4me: in openstack-ansible do we use lxc containers or kolla containers?09:40
odyssey4mechandankumar: lxc by default for now, we're switching to nspawn soon09:40
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-ops master: MNAIO: Tidy up README image use instructions  https://review.openstack.org/60998309:42
chandankumarodyssey4me: one more thing: can this one https://github.com/openstack/openstack-ansible-os_tempest/blob/master/meta/main.yml be switched to centos also?09:42
odyssey4mechandankumar: it already has EL 7, which covers CentOS and RHEL09:43
chandankumarodyssey4me: so apt_package_pinning is just for ubuntu09:44
chandankumarna?09:44
chandankumarI mean ansible galaxy dependencies09:44
odyssey4meoh, but that role will do nothing on  any platform other than ubuntu anyway - it'll just skip it09:44
odyssey4mein fact, in another patch, I think we can remove it because that role doesn't do any pinning09:45
chandankumarodyssey4me: sure09:45
*** cshen has joined #openstack-ansible09:51
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added support for installing tempest from distro  https://review.openstack.org/59142409:54
*** fatdragon has joined #openstack-ansible09:59
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Remove apt_package_pinning dependency from os_tempest role  https://review.openstack.org/60999210:00
chandankumarodyssey4me: ^^10:00
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Remove apt_package_pinning dependency from os_tempest role  https://review.openstack.org/60999210:01
*** vnogin has joined #openstack-ansible10:03
*** fatdragon has quit IRC10:03
*** suggestable has joined #openstack-ansible10:06
deployer2odyssey4me btw is it expected that those who are now on 18.0.0.0rc3 will be able to upgrade to 18.0.0 stable when it is released, or will a full reinstall be needed?10:07
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Remove apt_package_pinning dependency from os_tempest role  https://review.openstack.org/60999210:11
*** udesale has quit IRC10:36
*** fatdragon has joined #openstack-ansible10:39
*** cshen has quit IRC10:39
*** pabelanger has quit IRC10:40
*** fatdragon has quit IRC10:44
odyssey4medeployer2: it should work, but we don't test it so YMMV depending on what your deployment looks like10:45
openstackgerritMerged openstack/openstack-ansible-ops master: MNAIO: Tidy up README image use instructions  https://review.openstack.org/60998310:58
*** cshen has joined #openstack-ansible11:12
*** fatdragon has joined #openstack-ansible11:13
*** cshen has quit IRC11:16
*** fatdragon has quit IRC11:18
*** faizy_ has joined #openstack-ansible11:20
*** faizy98 has quit IRC11:24
*** vnogin has quit IRC11:31
*** dave-mccowan has joined #openstack-ansible11:38
*** vnogin has joined #openstack-ansible11:39
deployer2I have defined 3 repo containers, but only 2 of them deploy successfully; the third one fails with "Failed to establish a new connection: [Errno 111] Connection refused".11:46
deployer2TASK [repo_server : Install pip packages (from repo)]. Have destroyed the container with containers-lxc-destroy.yml + deleted facts & recreated, but the same. Tested with wget - the rest of the repo containers get http; this one gets connection refused.11:46
deployer2what else to delete to make the third container identical to the others? Does haproxy have some access list?11:47
deployer2It cannot get the http service from haproxy VIP:818111:48
*** juhak has left #openstack-ansible11:48
odyssey4medeployer2: is haproxy running, are the backends available? also, are you sure that you don't have bad CIDR's or conflicting IP's?11:49
suggestableHi everyone! We've managed to deploy OSA without any warnings during the playbook runs, but are struggling to get l3ha routing working. Running on stable/queens on Ubuntu 16.04. Three controller nodes handling all infrastructure/network/storage services. We've even tried running neutron-server on metal, and encountered an issue where the bind host is hard set in the j2 template. Anyone able to point us in the right direction?11:50
deployer2odyssey4me haproxy runs and the VIP is only up on 1 infra node - that should be so. The rest of the repo containers get the service, only one gets refused11:51
*** vnogin has quit IRC11:51
*** vnogin has joined #openstack-ansible11:52
odyssey4medeployer2: ok, and in the playbook output that ran, did anything happen there to show a failure?11:52
*** cshen has joined #openstack-ansible11:55
chandankumarodyssey4me: for setting default to distro for centos where can i make changes?11:55
deployer2odyssey4me this is the output https://pastebin.com/raw/GHv5WSWN11:55
*** vnogin has quit IRC11:58
deployer2cannot understand how one container is different from the rest even after complete recreation11:58
*** vnogin has joined #openstack-ansible11:58
openstackgerritMerged openstack/openstack-ansible-ops master: Update MNAIO to deploy systemd-networkd  https://review.openstack.org/60982612:02
deployer2odyssey4me to recreate a container, is it enough to destroy it with containers-lxc-destroy.yml + deleted facts, or could there be any other remnants somewhere? Or maybe I need to reinstall haproxy also if the repo container is recreated from scratch?12:05
odyssey4medeployer2: no, using the destroy playbook is just fine - and haproxy is already configured and the container names & ip's aren't changing, so it shouldn't need to be setup again12:08
odyssey4mechandankumar: the default must not be distro, the default must be source - the tox env sets the override to distro for the distro test12:09
odyssey4medeployer2: what release are you using to test with?12:09
*** vnogin has quit IRC12:09
odyssey4medeployer2: also, that task has a fallback - did the fallback not work? https://github.com/openstack/openstack-ansible-repo_server/blob/master/tasks/repo_install.yml#L90-L12412:11
*** sawblade6 has joined #openstack-ansible12:13
odyssey4medeployer2: if it's queens or later, you'll also need to run the repo-use playbook after deleting those containers12:13
odyssey4me(I think)12:13
deployer2odyssey4me Rocky on bionic. Using git clone 18.0.0.0rc3 from docs, then git checkout stable/rocky, then applying patch https://review.openstack.org/#/c/608042/3 and going from there12:13
odyssey4medeployer2: ok, did the fallback task not work? your log only shows the first task failing12:14
deployer2need to check logs12:15
*** fatdragon has joined #openstack-ansible12:18
deployer2odyssey4me what should I look for? not finding "Install pip packages (from pypi mirror)" from rescue task name12:18
odyssey4medeployer2: that's odd, but if there's a /root/.pip/pip.conf in those containers, remove it12:19
odyssey4methis is unusual, and I'm not sure how you got into this mess12:20
*** fatdragon has quit IRC12:23
*** ansmith has joined #openstack-ansible12:24
deployer2odyssey4me will try, thanks. This is already the n-th attempt to bring up rocky on bionic. First I had a mistake in my user_variables.yml regarding the VIP netmask and had to bring the internal VIP up manually, but then corrected it. Don't know, need to rethink12:27
odyssey4medeployer2: rather than destroy and recreate containers next time, try to properly figure out the cause of the fail - there's a fair chance that you have some sort of bad networking config if there was a comms failure12:28
odyssey4menetworking config is always the stumbling block for a first-time deployer12:28
odyssey4mealso, you'll save yourself a lot of pain by restarting from scratch with a fresh host if you change any network configs12:29
*** sawblade6 has quit IRC12:42
*** sawblade6 has joined #openstack-ansible12:43
*** sawblade6 has quit IRC12:46
*** sawblade6 has joined #openstack-ansible12:46
*** sawblade6 has quit IRC12:49
*** sawblade6 has joined #openstack-ansible12:49
*** sawblade6 has quit IRC12:51
*** sawblade6 has joined #openstack-ansible12:51
*** fatdragon has joined #openstack-ansible12:53
*** sawblade6 has quit IRC12:53
*** sawblade6 has joined #openstack-ansible12:53
*** sawblade6 has quit IRC12:54
*** sawblade6 has joined #openstack-ansible12:54
*** sawblade6 has quit IRC12:55
*** sawblade6 has joined #openstack-ansible12:55
*** sawblade6 has quit IRC12:56
*** fatdragon has quit IRC12:57
*** vnogin has joined #openstack-ansible13:03
*** thuydang has joined #openstack-ansible13:08
*** markvoelker has quit IRC13:15
*** munimeha1 has joined #openstack-ansible13:22
*** lbragstad has joined #openstack-ansible13:23
*** fatdragon has joined #openstack-ansible13:26
*** fatdragon has quit IRC13:31
chandankumarodyssey4me: http://logs.openstack.org/24/591424/23/check/openstack-ansible-functional-ubuntu-bionic/6e73cfa/job-output.txt.gz#_2018-10-12_10_53_09_755379 at this part it is failing at all places13:33
chandankumarodyssey4me: https://review.openstack.org/#/c/591424/13:34
odyssey4mechandankumar: commented in review13:36
mgariepyhwoarang, evrardjp : opensuse-423, gives Retry limit in 1m 08s for the my patch in nova. is that something that you are aware ? https://review.openstack.org/#/c/60578913:38
*** munimeha1 has quit IRC13:38
*** vrobert has joined #openstack-ansible13:42
vroberthi13:43
vrobertthis calculation caused troubles to me in nova-api: nova_wsgi_processes: "{{ [[ansible_processor_vcpus|default(1), 1] | max * 2, nova_wsgi_processes_max] | min }}"13:43
vrobertcaused slowdowns and timeouts in nova-api13:44
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added support for installing tempest from distro  https://review.openstack.org/59142413:44
vrobertAre you sure the max*2 is correct in the formula?13:45
vrobertThe calculation of nova_api_threads: "{{ [[ansible_processor_vcpus|default(2) // 2, 1] | max, nova_api_threads_max] | min }}" seems correct to me.13:51
vrobertHere there is a division by 2, causing these threads to be set to 10 if I have 20 vcpus.13:51
vrobertBut nova_wsgi_processes ended with 16 which is the default for nova_wsgi_processes_max.13:52
mgariepyyou can set the variables to change it if you need to13:53
vrobertOf course and I did it.13:53
vrobertJust want to be shared with you.13:53
vrobertMaybe I am wrong and my controller nodes are overloaded...13:53
mgariepyI guess it depends on the hw you have for you control plane.13:53
vrobertYes, it heavily depends on that, you are right.13:53
*** rpittau has quit IRC13:54
vrobertBut I think this calculation for nova_wsgi_processes: "{{ [[ansible_processor_vcpus|default(1), 1] | max * 2, nova_wsgi_processes_max] | min }}" is not correct.13:54
vrobertWhy is the *2 multiplier there at all?13:54
vrobertI think lots of deployments out there running a lot of things in the controller nodes.13:55
vrobertI tested it on a 59 nodes cluster with 3 controller nodes.13:55
mgariepyin my case i have 24 vcpus , and end up with 16 which is ok for me.13:55
vrobertIt was okay for me in the beginning.13:55
vrobertBut when I started to do full tempest runs it was slowing down.13:56
vrobertAfter 2-3 tempest run its started to give me timeouts.13:56
vrobertIt happened after all my 16 uwsgi workers were connected to rabbitmq on all nodes.13:57
*** munimeha1 has joined #openstack-ansible13:57
vrobertSo it was a bit of a hidden problem for me.13:57
vrobertBut right now, setting nova_wsgi_processes_max: 10 has solved the problem for me.13:57
mgariepymy config is : thread 1, process 16 on a 24 thread cpu13:58
vrobertYes it is the default.13:58
vrobertI found that each worker starts 5-6 threads if there is a load on the nova-api.13:59
vrobert3 rabbitmq-heartbeats, 1 other, 1 rabbitmq read() which is a blocking read.14:01
mgariepylol, the nova_api_threads calculation doesn't make much sense.. haha14:01
vrobertwhy?14:02
*** sum12 has quit IRC14:02
mgariepyok i'm confused about it. nova-api thread != wsgi thread haha14:02
vrobertyes.14:02
vrobertyes, I think so.14:03
vrobertthere is always 1 wsgi thread per wsgi process and the others are nova threads I think...14:03
*** sum12 has joined #openstack-ansible14:03
mgariepyyeah14:03
mgariepyso each wsgi process will start 16 nova-api threads14:04
vrobertNo, I didn't say that.14:05
mgariepyis the issue with the number of wsgi process or the number of thread in nova-api ?14:05
vrobertI had issue with the number of max wsgi processes.14:06
vrobertI tried to modify nova api threads and rpc pool threads, but the only thing which worked for me is that I reduced the max nova-api wsgi processes from 16 to 10.14:07
*** fatdragon has joined #openstack-ansible14:07
mgariepymaybe a physical core count would be better ?14:08
mgariepylike, the min between 16 or the # of core ?14:09
vrobertyes, it makes sense.14:09
mgariepycan you write a patch ?14:10
vrobertThis problem was hidden until I started to stresstest my cloud.14:10
vrobertSo maybe there are lots of deployments out there where this problem is hiding...14:10
mgariepyyeah indeed.14:11
odyssey4mecloudnull: when you're in, I have a slightly perplexing issue in downstream CI to figure out relating to systemd-resolved when executing an AIO build14:11
vrobertBecause in the beginning, when not all my wsgi processes were utilized, everything was ok.14:11
*** weezS has joined #openstack-ansible14:11
*** fatdragon has quit IRC14:11
vrobertOkay, let's say that I modify only one thing, from vcpu to cpu count, in the formula: {{ [[ansible_processor_cores|default(1), 1] | max * 2, nova_wsgi_processes_max] | min }}14:13
openstackgerritweezer su proposed openstack/openstack-ansible master: Add one test case to the TestMergeDictUnit for same key dict merge  https://review.openstack.org/60974514:13
vrobertIt would be min(10*2,16) which would be 16 again, not good again...14:14
vrobertIf I remove the *2 multiplier: {{ [[ansible_processor_cores|default(1), 1] | max, nova_wsgi_processes_max] | min }}14:14
vrobertIt would be min(10,16), which would be 10, which is good for me on busy controller nodes14:15
vrobertBut yes, it heavily depends on the load and the busy processes and threads on the actual controller node, so it's very hard to prove it...14:16
mgariepyi would not multiply per 2.14:18
vrobertI think the *2 multiplier on vcpus doesn't make sense14:21
vrobertmaybe they wanted to do it on cpu cores not vcpu cores...14:21
mgariepyi would do something like: "{{ [[ansible_processor_count * ansible_processor_cores, 1] | max, nova_wsgi_processes_max] | min }}"14:21
mgariepyor ansible_processor_vcpus / ansible_processor_threads_per_core14:22
vrobertyes, seems correct to me14:22
vrobertI like it!14:22
mgariepycan you create the patch and explain the why in the commit message please ?14:23
vrobertYes, I can try that.14:23
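For reference, the Jinja expressions under discussion reduce to plain min/max arithmetic. This sketch mirrors them in Python, using the role's stated default cap of 16 and the vcpu counts from the conversation; the "proposed" variant is mgariepy's core-based suggestion, not merged code:

```python
# Sketch of the nova_wsgi_processes / nova_api_threads math from the role
# defaults being debated above. The cap of 16 is the default mentioned in
# the discussion; nothing here is a real role implementation.

NOVA_WSGI_PROCESSES_MAX = 16

def wsgi_processes_current(vcpus):
    """Current formula: min(max(vcpus, 1) * 2, nova_wsgi_processes_max)."""
    return min(max(vcpus, 1) * 2, NOVA_WSGI_PROCESSES_MAX)

def api_threads(vcpus, threads_max=16):
    """nova_api_threads: min(max(vcpus // 2, 1), nova_api_threads_max)."""
    return min(max(vcpus // 2, 1), threads_max)

def wsgi_processes_proposed(processor_count, cores_per_processor):
    """mgariepy's suggestion: base the count on physical cores, without *2."""
    return min(max(processor_count * cores_per_processor, 1),
               NOVA_WSGI_PROCESSES_MAX)

# vrobert's 20-vcpu controller: the current formula saturates at the cap...
assert wsgi_processes_current(20) == 16
# ...while nova_api_threads lands at 10.
assert api_threads(20) == 10
# Counting physical cores instead (e.g. 1 socket x 10 cores) yields 10.
assert wsgi_processes_proposed(1, 10) == 10
```

This makes visible why the *2 on hyperthreaded vcpus pins busy controllers at the maximum, while the core-based variant scales down on smaller boxes.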
*** thuydang has quit IRC14:23
*** lbragstad is now known as elbragstad14:25
*** sawblade6 has joined #openstack-ansible14:27
mgariepyif you need help let us know.14:27
vrobertOkay thank you mgariepy!14:31
*** munimeha1 has quit IRC14:32
mgariepyvrobert, have you submitted a patch in the past ?14:33
*** faizy_ has quit IRC14:35
vrobertNo I didn't.14:36
vrobertIf you can help me to do this this it can save me a lot of time.14:37
mgariepyspotz, do you still have that handy doc on how to start submitting code ?14:38
*** fatdragon has joined #openstack-ansible14:39
spotzmgariepy: You mean the git and gerrit stuff?14:39
mgariepyyep to help vrobert getting started :D14:39
vrobertthanks in advance :)14:39
*** ansmith has quit IRC14:40
*** markvoelker has joined #openstack-ansible14:41
openstackgerritjacky06 proposed openstack/openstack-ansible-repo_server stable/rocky: Replace Chinese punctuation with English punctuation  https://review.openstack.org/61006214:43
mgariepyvrobert, do you have a gerrit account ?14:43
openstackgerritjacky06 proposed openstack/openstack-ansible-os_neutron stable/rocky: Replace Chinese punctuation with English punctuation  https://review.openstack.org/61006314:43
vrobertNo, I dont.14:43
mgariepyhttps://docs.openstack.org/infra/manual/developers.html14:43
vrobertthx14:43
mgariepyfollow this to setup your account.14:44
*** fatdragon has quit IRC14:45
*** canori01 has joined #openstack-ansible14:49
openstackgerritjacky06 proposed openstack/openstack-ansible-os_swift master: Revert "use include_tasks instead of include"  https://review.openstack.org/61006914:52
*** spatel has joined #openstack-ansible14:55
*** sawblade6 has quit IRC14:55
vrobertwoo, it's a bit too much for me at this time for a one-line change14:56
vrobertmaybe I need to open a discussion around this by opening a thread somewhere14:56
*** ansmith has joined #openstack-ansible14:57
vrobertSomebody needs to verify I am right when I said that the nova_wsgi_processes calculation is not correct. :)14:57
*** weezS has quit IRC14:57
*** sawblade6 has joined #openstack-ansible14:58
vrobertmgariepy what do you think where should I open a discussion for this?14:59
*** sawblade6 has quit IRC14:59
*** weezS has joined #openstack-ansible14:59
*** weezS has joined #openstack-ansible15:00
mgariepycloudnull, any idea about this ? ^^15:01
*** weezS has joined #openstack-ansible15:01
mgariepyvrobert, you can open a bug in LP15:01
mgariepythen it will be reviewed15:01
*** sawblade6 has joined #openstack-ansible15:01
*** sawblade6 has quit IRC15:03
*** thuydang has joined #openstack-ansible15:04
*** ansmith has quit IRC15:05
openstackgerritAndy Smith proposed openstack/openstack-ansible master: Add documentation for hybrid messaging configuration  https://review.openstack.org/61007915:05
vrobertOkay, I will follow that way. Thanks for your help and thx for your time.15:05
*** sawblade6 has joined #openstack-ansible15:05
*** sawblade6 has quit IRC15:08
*** sawblade6 has joined #openstack-ansible15:08
*** sawblade6 has quit IRC15:09
*** sawblade6 has joined #openstack-ansible15:09
*** sawblade6 has quit IRC15:11
*** cshen has quit IRC15:15
*** vrobert has left #openstack-ansible15:17
*** ansmith has joined #openstack-ansible15:18
*** gyee has joined #openstack-ansible15:19
*** fatdragon has joined #openstack-ansible15:20
*** vnogin has quit IRC15:21
*** fatdragon has quit IRC15:24
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-ops master: MNAIO: Do not use apt-cacher-ng on the MNAIO host  https://review.openstack.org/61008315:30
*** thuydang has quit IRC15:31
*** thuydang has joined #openstack-ansible15:31
openstackgerritcaoyuan proposed openstack/openstack-ansible-os_masakari stable/rocky: use include_tasks instead of include  https://review.openstack.org/61008415:31
openstackgerritcaoyuan proposed openstack/openstack-ansible-os_horizon stable/rocky: Fix the UI Panel name of ironic  https://review.openstack.org/61008515:32
*** fatdragon has joined #openstack-ansible15:34
*** thuydang has quit IRC15:35
*** thuydang has joined #openstack-ansible15:36
*** vollman has quit IRC15:38
*** goldenfri has joined #openstack-ansible15:39
benkohlodyssey4me: after checking out stable/rocky and cherry-pick https://review.openstack.org/#/c/608042/ I get this: https://snag.gy/qKFcA7.jpg15:40
*** vnogin has joined #openstack-ansible15:40
odyssey4mebenkohl: take a look at the wheel build log in the repo container to see what broke15:41
benkohlmaybe I should wait until osa rocky is stable enough... the semester started and I don't have enough time for all the issues :/15:42
*** vnogin has quit IRC15:44
odyssey4mebenkohl: either use queens, which is stable, or wait for rocky's release15:47
benkohlodyssey4me: queens is no option because I want to use ubuntu 18.04, so I will wait... Thanks anyway :)15:49
*** markvoelker has quit IRC15:51
*** ansmith has quit IRC15:51
*** markvoelker has joined #openstack-ansible15:52
*** ansmith has joined #openstack-ansible15:57
spatelQuick question: how does anti-affinity understand that an application should be spread across other hypervisors?15:57
spatelIs it based on the instance name?15:57
spatelif I create web1, web2, web3 and db1, db2, db3, how does that fit in an anti-affinity server group?15:58
*** deployer2 has quit IRC16:00
*** cshen has joined #openstack-ansible16:01
*** skiedude has joined #openstack-ansible16:08
skiedudeSo I've finally figured out which container is causing me 504s. I currently have a 3-infra cluster, and when I shut down only the memcached container on the 3rd infra host, everything slows to a crawl16:12
skiedudeLooking at the other 2 up memcached containers, there is a log folder in /var/log/memcached, however there is no log file to watch errors for16:12
odyssey4meskiedude: memcache doesn't cluster, so if there's a switch from one to the other, everything cached has to be re-cached16:15
*** suggestable has quit IRC16:15
skiedudeso in theory once everything gets recached, it should speed back up16:15
skiedudebut it seems to never do that16:16
*** mmercer has joined #openstack-ansible16:16
odyssey4meskiedude: have you checked whether the openstack services saw the disconnect and failed over to the next in line? although I think this may go through haproxy - I can't recall16:19
*** dariko has joined #openstack-ansible16:19
skiedudehow exactly would I go about checking that?16:21
skiedudeI don't see any vips or data in haproxy for memcache, looking through the roles, it appears most services just have the list of IPs16:21
skiedudefor the memcache containers16:21
*** fatdragon has quit IRC16:24
*** dariko has quit IRC16:25
skiedudeso I'm thinking it's more the other containers trying to use the downed memcached container, rather than the container itself being the issue16:28
odyssey4meskiedude: yep, so that means the services themselves are supposed to handle failover16:32
odyssey4meso in their debug logs you might find some clues16:32
odyssey4meit might be a misconfig16:32
spatelodyssey4me: is it possible to do --limit compute1,compute2,compute3 with multiple values in the openstack-ansible command?16:33
*** irclogbot_0 has joined #openstack-ansible16:35
*** flaviosr_ has quit IRC16:36
*** shardy has quit IRC16:37
odyssey4mespatel: yes, you can also use groups - eg 'compute_hosts' and exclusions like 'compute_hosts:!compute1'16:40
odyssey4mespatel: https://docs.ansible.com/ansible/2.6/user_guide/intro_patterns.html16:41
*** irclogbot_0 has quit IRC16:42
*** cshen has quit IRC16:43
*** cshen has joined #openstack-ansible16:45
*** spatel has quit IRC16:48
*** spatel has joined #openstack-ansible16:48
spatelperfect!16:48
spatelI just want to run the playbook on a set of compute machines16:48
*** DanyC has quit IRC16:49
spatelThis should work for me --limit compute1,compute2,compute316:49
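A few examples of how those patterns combine on the command line (the playbook name and host/group names are illustrative):

```shell
# Comma-separated list of hosts
openstack-ansible os-nova-install.yml --limit compute1,compute2,compute3

# An inventory group
openstack-ansible os-nova-install.yml --limit compute_hosts

# A group minus one host (quoted so the shell leaves '!' alone)
openstack-ansible os-nova-install.yml --limit 'compute_hosts:!compute1'
```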
goldenfriI am adding some compute nodes and they are getting stuck on Add keys (primary keyserver), and the fallback is not working either. It's showing the url as: u'hkp://keyserver.ubuntu.com:80' - is that normal?17:06
cloudnullo/ all17:07
*** Bhujay has joined #openstack-ansible17:07
cloudnullmgariepy vrobert if there's something that we can do to improve that calculation (the nova_wsgi_processes calculation) it'd be wonderful17:08
cloudnullodyssey4me im around now17:08
cloudnullstill seeing that resolved issue?17:08
odyssey4mecloudnull: I managed to work it out, thanks.17:09
cloudnullwhat was it ?17:09
mgariepycloudnull, 16 processes if you have only 10 cores might be a bit too much? basing the count on cores instead of threads would probably be a better default.17:11
odyssey4meWell, in nodepool we use glean so that we can write out the network/resolver configs from config-drive... and glean is massively lighter than cloud-init + nova-agent17:11
cloudnullmgariepy ++17:11
cloudnullthat probably makes a lot more sense17:11
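A sketch of the clamped, core-based calculation mgariepy is suggesting: halve the core count and cap the result, rather than scaling with thread count (the divisor and the bounds of 2 and 16 are illustrative, not the role's actual defaults):

```shell
# Derive a WSGI process count from core count, clamped to the range [2, 16]
cores=$(nproc)
wsgi_processes=$(( cores / 2 ))
if [ "$wsgi_processes" -lt 2 ]; then wsgi_processes=2; fi
if [ "$wsgi_processes" -gt 16 ]; then wsgi_processes=16; fi
echo "nova_wsgi_processes=${wsgi_processes}"
```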
cloudnullah that's cool!17:11
odyssey4mebut in bionic systemd-resolved is turned on by default, and when glean writes out /etc/resolv.conf, systemd-resolved replaces it... so I had to make our image builds disable systemd-resolved to prevent that17:11
cloudnullodyssey4me ^17:12
odyssey4meotherwise sh*t don't work, yo17:12
cloudnullcan you get glean to update the `/etc/systemd/resolved.conf` file ?17:12
cloudnullvia glean ?17:12
odyssey4mecloudnull: well, prometheanfire is working on making glean detect and do the right things17:13
cloudnullhttps://www.freedesktop.org/software/systemd/man/resolved.conf.html17:13
cloudnullah cool17:13
odyssey4methanks prometheanfire :)17:14
prometheanfireodyssey4me: cloudnull: https://review.openstack.org/610105 is part 117:16
prometheanfiretoo bad no providers do that though17:16
prometheanfireI suppose I should only write out to /etc/systemd/resolved.conf if that file already exists (aka if resolved is installed)17:17
cloudnullprometheanfire = Add icons in the PWA manifest ?17:17
prometheanfirecloudnull: wat17:17
goldenfriplease continue to ignore my question, I'm dumb and figured it out. :)17:17
cloudnullthat review number17:18
odyssey4melol17:18
prometheanfirewrong link :|17:18
prometheanfirehttps://review.openstack.org/61010717:18
*** fatdragon has joined #openstack-ansible17:18
prometheanfirethat's just if using glean with networkd (which you can force with 'glean --distro networkd')17:19
*** fatdragon has quit IRC17:19
prometheanfirethe generic stuff will follow this afternoon17:19
cloudnullyes that resolved.conf file is part of systemd17:19
cloudnulland systemd-resolved will use that for all of its config as needed17:19
prometheanfireit wasn't always - I forget when it was added, but trusty may not have it, which means I should only write the file if /etc/systemd/resolved.conf exists (and then only set DNS= within that file, leaving other values)17:21
prometheanfireglean runs on more than one OS17:21
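Assuming the file exists, the result would look like this: only the DNS= line is managed, and the rest of resolved.conf is left untouched (the server addresses are placeholders):

```ini
# /etc/systemd/resolved.conf - glean would manage only the DNS= line
[Resolve]
DNS=203.0.113.10 203.0.113.11
```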
*** faizy98 has joined #openstack-ansible17:25
*** mmercer has quit IRC17:33
*** mmercer has joined #openstack-ansible17:44
*** Bhujay has quit IRC17:50
mgariepyanyone here tested neutron networking-ovn ?17:53
openstackgerritMerged openstack/openstack-ansible-ops master: MNAIO: Do not use apt-cacher-ng on the MNAIO host  https://review.openstack.org/61008317:54
*** mmercer has quit IRC17:57
*** mmercer has joined #openstack-ansible17:58
odyssey4memgariepy: I think jamesdenton has.18:00
*** skiedude has quit IRC18:07
*** mmercer has quit IRC18:21
*** olivierb has quit IRC18:21
jamesdentonyo19:36
jamesdentonmgariepy define "testing"19:36
*** electrofelix has quit IRC19:37
mgariepydid you try it?19:39
mgariepyis it working?19:39
mgariepyjamesdenton, ^^19:39
jamesdentonNeeds this: https://review.openstack.org/#/c/584069/19:40
jamesdentonwhich, surprisingly, just passed checks this week19:40
jamesdenton(thanks odyssey4me)19:40
jamesdentoni would also say that it does not fully address NB DB HA19:40
jamesdentonbut it should be functional, albeit not production ready19:41
mgariepywhat kind of tests did you do with it? did you do any load testing on it?19:41
jamesdentonno, i did not perform any official load testing, only functional tests (i.e. connectivity)19:42
jamesdentonand security groups IIRC19:42
mgariepyok19:43
jamesdentonare you interested in deploying it?19:43
mgariepymaybe, I was mostly wondering if somebody here had tested it a bit.19:47
jamesdentoni gotcha. I'm hoping to dedicate some cycles to it soon, if that helps19:48
mgariepyok, if I get time to test it a bit, I'll probably ping you then. but it seems interesting for simplifying the network a bit.19:51
jamesdentonagreed. getting rid of DHCP and L3 agents would be nice.19:52
*** DanyC has joined #openstack-ansible20:00
*** vnogin has joined #openstack-ansible20:22
*** vnogin has quit IRC20:26
*** ianychoi has joined #openstack-ansible20:35
*** ansmith has quit IRC20:40
jrosserodyssey4me: you around?20:46
*** dave-mccowan has quit IRC20:57
*** spatel has quit IRC20:57
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-nspawn_container_create master: Add a guard so we don't allow for duplicate config  https://review.openstack.org/61016220:58
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-nspawn_container_create master: Add a guard so we don't allow for duplicate config  https://review.openstack.org/61016220:59
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Upgrade ceph to mimic release  https://review.openstack.org/61016521:00
*** spatel has joined #openstack-ansible21:03
*** DanyC_ has joined #openstack-ansible21:03
*** cshen has quit IRC21:04
*** strattao has joined #openstack-ansible21:06
*** DanyC has quit IRC21:07
*** nsmeds has joined #openstack-ansible21:24
*** jamesdenton has quit IRC21:24
*** DanyC_ has quit IRC21:32
*** thuydang has quit IRC21:41
*** thuydang has joined #openstack-ansible21:41
*** ansmith has joined #openstack-ansible21:43
*** thuydang has quit IRC21:45
*** thuydang has joined #openstack-ansible21:46
*** tosky has quit IRC22:02
openstackgerritMerged openstack/openstack-ansible-nspawn_container_create master: Add a guard so we don't allow for duplicate config  https://review.openstack.org/61016222:08
*** spatel has quit IRC22:11
*** pcaruana has quit IRC22:20
openstackgerritMerged openstack/openstack-ansible-ops master: add lxc3 support  https://review.openstack.org/60980022:25
*** strattao has quit IRC22:34
*** strattao has joined #openstack-ansible22:41
*** strattao has quit IRC22:41
*** weezS has quit IRC22:45
*** elbragstad has quit IRC22:56
*** spatel has joined #openstack-ansible22:57
*** nsmeds has quit IRC23:01
*** spatel has quit IRC23:01
*** spatel has joined #openstack-ansible23:03
*** spatel has quit IRC23:08
*** elbragstad has joined #openstack-ansible23:15
*** elbragstad has quit IRC23:15
*** gyee has quit IRC23:52
*** spatel has joined #openstack-ansible23:55
*** spatel has quit IRC23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!