Thursday, 2020-10-08

*** maharg101 has joined #openstack-ansible  00:20
*** rf0lc0 has quit IRC  00:22
*** maharg101 has quit IRC  00:25
*** gyee has quit IRC  00:31
*** ianychoi__ is now known as ianychoi  00:44
*** cshen has joined #openstack-ansible  01:07
*** cshen has quit IRC  01:12
*** maharg101 has joined #openstack-ansible  02:21
*** maharg101 has quit IRC  02:26
*** macz_ has joined #openstack-ansible  02:41
*** macz_ has quit IRC  02:46
*** NewJorg has quit IRC  02:58
*** NewJorg has joined #openstack-ansible  02:59
*** cshen has joined #openstack-ansible  03:07
*** cshen has quit IRC  03:12
*** maharg101 has joined #openstack-ansible  04:22
*** maharg101 has quit IRC  04:26
*** evrardjp has quit IRC  04:33
*** evrardjp has joined #openstack-ansible  04:33
*** cshen has joined #openstack-ansible  05:08
*** cshen has quit IRC  05:12
*** nurdie has joined #openstack-ansible  05:43
*** cshen has joined #openstack-ansible  05:45
*** nurdie has quit IRC  05:47
*** cshen has quit IRC  05:50
*** nurdie has joined #openstack-ansible  06:03
*** nurdie has quit IRC  06:08
*** macz_ has joined #openstack-ansible  06:18
*** miloa has joined #openstack-ansible  06:20
*** macz_ has quit IRC  06:22
*** maharg101 has joined #openstack-ansible  06:23
*** maharg101 has quit IRC  06:27
*** masterpe has quit IRC  06:35
*** ioni has quit IRC  06:35
*** csmart has quit IRC  06:35
*** fridtjof[m] has quit IRC  06:35
*** cshen has joined #openstack-ansible  06:38
*** openstackgerrit has joined #openstack-ansible  06:40
<openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-lxc_hosts master: Include libpython and rsync for centos in lxc base image  https://review.opendev.org/756587  06:40
*** cshen has quit IRC  06:42
*** csmart has joined #openstack-ansible  06:44
*** ioni has joined #openstack-ansible  07:10
*** fridtjof[m] has joined #openstack-ansible  07:10
*** masterpe has joined #openstack-ansible  07:10
*** maharg101 has joined #openstack-ansible  07:22
*** cshen has joined #openstack-ansible  07:23
*** rpittau|afk is now known as rpittau  07:27
*** andrewbonney has joined #openstack-ansible  07:27
*** miloa has quit IRC  07:33
*** jamesdenton has quit IRC  07:33
*** miloa has joined #openstack-ansible  07:33
<jrosser> morning  07:35
*** jamesdenton has joined #openstack-ansible  07:38
<openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-os_cinder master: Add rsync to required packages for redhat based OS  https://review.opendev.org/756651  07:45
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Add infra testing scenario  https://review.opendev.org/755497  07:51
*** tosky has joined #openstack-ansible  07:54
<recyclehero> morning  08:06
<recyclehero> images don't get uploaded. I have set debug: true and didn't do any glance-specific configs. I see up until the POST request in the logs, but after that nothing.  08:14
<recyclehero> How can I investigate?  08:14
<recyclehero> I didn't create the LV that was described in the docs  08:16
<recyclehero> but I think glance will then use file, right?  08:16
<recyclehero> do I need to override anything?  08:16
*** pto has joined #openstack-ansible  08:21
*** pto has joined #openstack-ansible  08:24
<pto> The Ussuri docs state that you should clone master (git clone -b master https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible). Should that not be a tag instead?  08:25
<noonedeadpunk> recyclehero: I can suggest setting `glance_use_uwsgi: false` and trying that out  08:25
<noonedeadpunk> Not sure, but maybe you're trying to use interoperable import as an image upload method, which is known not to work with uwsgi  08:26
<noonedeadpunk> pto: it should  08:26
<pto> noonedeadpunk: 21.0.1 tag?  08:26
<recyclehero> noonedeadpunk: thanks. I was checking in the browser and saw some CORS error. I was using the internal LB address to connect to it.  08:27
<recyclehero> yes, 21.0.1  08:27
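The override suggested here would go in `user_variables.yml` on the deploy host, followed by a re-run of the glance playbook; a minimal sketch (variable name taken from the suggestion above):

```yaml
# user_variables.yml -- run the glance playbook afterwards to apply
glance_use_uwsgi: false
```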
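Putting the two answers together, the docs' clone command would point at the release tag rather than master; a sketch using the 21.0.1 tag mentioned above:

```shell
git clone -b 21.0.1 https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
```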
*** pto has quit IRC  08:27
<noonedeadpunk> oh, it's totally time for a release...  08:27
*** CeeMac has joined #openstack-ansible  08:31
<recyclehero> it's a CORS error which prevents me from uploading the image; trying to disable CORS in firefox  08:41
<jrosser> recyclehero: https://github.com/openstack/openstack-ansible-os_glance/blob/master/releasenotes/notes/add-cors-config-6326223fe7fa7423.yaml  08:45
<jrosser> as an end user with a browser it's probably best to use the external lb address  08:46
<recyclehero> I changed to use the load balancer address, like 200.200.200.110  08:46
<recyclehero> but CORS complains about a port mismatch I guess  08:47
<recyclehero> 200.200.200.200.119:9292  08:47
<recyclehero> 0  08:47
<recyclehero> 200.200.200.110:9292  08:47
<recyclehero> I know the address is not private; I was wrong when I set up that network  08:47
<recyclehero> https://200.200.200.110:9292/v2/images/ba0d5ec8-bc4b-4b17-91f2-ec968445910d/file  08:48
<jrosser> anyway, my point is that there is already provision in the glance role to set the cors headers  08:48
<recyclehero> jrosser: thank you, I will get into that  08:48
*** johnsom has quit IRC  09:25
*** johnsom has joined #openstack-ansible  09:25
*** macz_ has joined #openstack-ansible  09:55
*** macz_ has quit IRC  09:59
*** SecOpsNinja has joined #openstack-ansible  10:01
<recyclehero> are these still relevant? https://docs.openstack.org/project-deploy-guide/openstack-ansible/ocata/app-advanced-config-override.html  10:03
<recyclehero> The general format for the variable names used for overrides is <service>_<filename>_<file extension>_overrides ?  10:03
<noonedeadpunk> generally yes, but you'd better double-check the naming inside the role where the overrides will be applied, i.e. for os_nova it will be https://opendev.org/openstack/openstack-ansible-os_nova/src/branch/master/defaults/main.yml#L391-L398  10:07
<noonedeadpunk> sorry, wrong selection  10:08
<noonedeadpunk> this one is valid: https://opendev.org/openstack/openstack-ansible-os_nova/src/branch/master/defaults/main.yml#L498-L504  10:08
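Following the `<service>_<filename>_<file extension>_overrides` pattern discussed above, an override for nova.conf would look something like this in `user_variables.yml` (the section/option shown is an illustrative example, not from the log):

```yaml
# user_variables.yml -- merged into nova.conf by the os_nova role
nova_nova_conf_overrides:
  DEFAULT:
    # hypothetical example option
    cpu_allocation_ratio: 4.0
```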
<recyclehero> noonedeadpunk: thank you. after doing this override, which one of setup_hosts, infrastructure or openstack should be executed?  10:11
<recyclehero> all?  10:11
<recyclehero> I think just openstack  10:13
<noonedeadpunk> well, I'd say it depends on what exactly you're doing :) in the case of overriding one of the nova configs, it's worth running just os-nova-install.yml  10:13
<noonedeadpunk> and we have a playbook and role for each specific service  10:27
<recyclehero> noonedeadpunk: can you please chew it a little bit more for me? how do I run a one-off playbook?  10:29
<recyclehero> openstack-ansible os-glance.yml  10:29
<recyclehero> or should I use ansible commands? ansible / ansible-playbook  10:29
<recyclehero> openstack-ansible /etc/ansible/roles/glance/tasks/main.yml ?  10:31
<recyclehero> ERROR! 'include_vars' is not a valid attribute for a Play  10:33
<noonedeadpunk> openstack-ansible /opt/openstack-ansible/playbooks/os-glance-install.yml  10:37
<noonedeadpunk> openstack-ansible as well as ansible-playbook (openstack-ansible is eventually just a wrapper around ansible-playbook) can launch plays, which in their turn include roles, which are stored in /etc/ansible/roles/  10:39
<noonedeadpunk> but you can't run a role directly without a play  10:39
<recyclehero> so I should write a play myself indicating hosts, which are the glance container in my case  10:44
<noonedeadpunk> well, as I said, we have a full set of playbooks  10:45
<noonedeadpunk> i.e. the glance one is here https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/os-glance-install.yml  10:46
<recyclehero> yes, I just saw them. thank you.  10:46
<noonedeadpunk> and setup-openstack.yml is just an include of all these playbooks in the correct order for deployment/upgrade  10:46
*** rh-jlabarre has quit IRC  10:49
<recyclehero> yeah it went ok and I saw it replacing configs. but does it restart the service or make it re-read its configuration?  10:57
<recyclehero> cause it's still complaining about CORS  10:57
<noonedeadpunk> yep, it should do that via handlers in case the config is changed  10:57
<noonedeadpunk> you should see the service restart in the output  10:57
<noonedeadpunk> well, cors is pretty tricky. if you reach it via a domain name, you should put that in the CORS config  10:58
<recyclehero> no, I reach it via IP and put the IP in there: allowed_origin: https://200.200.200.110. I checked the request in the browser and it's what I have specified  10:59
<noonedeadpunk> if you upload via horizon, you may also want to try setting the legacy upload method  11:00
<recyclehero> noonedeadpunk: handlers don't write in their output things like [ok] [changed] [skipped]?  11:00
<noonedeadpunk> they do when executed  11:00
<recyclehero> so for stop and start they didn't write anything under them  11:01
<noonedeadpunk> it should be called `Execute service restart`  11:02
<noonedeadpunk> ok, we might have a bug here...  11:03
<noonedeadpunk> can you check `systemctl status gnocchi-api`?  11:03
<recyclehero> on infra1?  11:03
<recyclehero> it seems I don't have this service  11:04
<recyclehero> I commented when on "Execute service restart", that's what I can do ;d  11:06
<recyclehero> noonedeadpunk: TASK [Disable the service restart requirement]  11:16
*** lkoranda has joined #openstack-ansible  11:17
<noonedeadpunk> recyclehero: on infra, inside the glance container  11:19
<noonedeadpunk> oh well  11:19
<noonedeadpunk> `systemctl status glance-api` ofc, sorry  11:19
<recyclehero> it's active  11:20
<recyclehero> but I just restarted it to check my config  11:20
<noonedeadpunk> yeah, I mean the time when it was launched (when it was restarted last)  11:20
<recyclehero> it seems it was restarted, but I have played with the playbook  11:22
<recyclehero> do you want me to check with the orig playbook and report back?  11:22
<SecOpsNinja> hello. atm in my openstack deployment I have only 1 infra node + 1 compute + storage, but I'm seeing that the majority of memory on the infra host is consumed by uwsgi. is there any way to reduce this? the infra node has 16GB but with only 349 free mem and it's using swap. I was checking each lxc container's memory there and I saw the biggest ones were nova_api (3.36G), heat-api (2.17G), cinder  11:22
<SecOpsNinja> (1.57G), keystone (1.42G), glance (1.23G) and magnum (1.06G)  11:22
<recyclehero> but why isn't it respecting my config  11:22
<noonedeadpunk> recyclehero: you won't get it restarted unless the config is changed  11:22
<noonedeadpunk> with the original playbook/role  11:23
<noonedeadpunk> SecOpsNinja: every role with uwsgi has a variable like `{{ service_name }}_wsgi_processes` which I guess is set to 16. Decreasing this number will help in decreasing ram usage as well  11:25
<noonedeadpunk> recyclehero: you can try setting `horizon_images_upload_mode: legacy` and running the os-horizon-install playbook  11:26
<recyclehero> noonedeadpunk: I changed the config and it's playing with the orig conf  11:27
<noonedeadpunk> this will make uploads go through the horizon container and does not require cors  11:27
<recyclehero> noonedeadpunk: ok, I will try that  11:27
<noonedeadpunk> it's not the best solution, but it should work. afterwards you can roll back and continue playing with cors :)  11:28
<recyclehero> I changed the config and it hasn't restarted. thought you should know  11:29
<noonedeadpunk> ok, I see....  11:30
<SecOpsNinja> noonedeadpunk, thanks. I was checking nova defaults/main and yep I found nova_wsgi_processes: "{{ [[ansible_processor_vcpus|default(1), 1] | max * 2, nova_wsgi_processes_max] | min }}" but couldn't understand what the value would be, for example, on a machine with 8 processors (/proc/cpuinfo)  11:30
<SecOpsNinja> and I need to set them in all container roles, right? there isn't a global variable to override them all  11:31
<noonedeadpunk> no, there's no global setting unfortunately  11:32
<noonedeadpunk> I think it will be 16  11:33
<noonedeadpunk> you can also set nova_wsgi_processes_max but it will be kind of the same result here  11:33
<SecOpsNinja> yep, from what I understand the value is going to pick the minimum of cpus * 2 and the max value defined, right?  11:34
<noonedeadpunk> yep  11:35
<recyclehero> noonedeadpunk: horizon_images_upload_mode in user_variables, just like that? and play horizon afterwards?  11:36
<admin0> checking if anyone can guide me in removing .novalocal from hostnames -- new and existing  11:36
<SecOpsNinja> ok I will make a few tests to try to reduce this memory usage. I'm going to add 2 more infra hosts, but I would like to have the same lxc containers in all of them for redundancy; that is why I'm trying to tune each service to reduce its memory footprint  11:36
<noonedeadpunk> admin0: use nova overrides and set `dhcp_domain` in the DEFAULT section  11:38
<noonedeadpunk> regarding existing - not sure  11:38
<noonedeadpunk> SecOpsNinja: well, 16GB is not much for controllers tbh  11:40
<noonedeadpunk> I think it's ok while you have a really basic set of services  11:40
<noonedeadpunk> but also keep in mind that decreasing the number of processes will reduce the number of requests that can be processed at any moment in time  11:41
<recyclehero> SecOpsNinja:   [[ansible_processor_vcpus|default(1), 1] | max * 2,  11:41
<recyclehero> if you find out how to read this please share it with me too  11:41
<recyclehero> is it yaml?  11:42
<noonedeadpunk> it's jinja I think)  11:43
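The expression being puzzled over is Jinja2 filter syntax. A small Python sketch of what it evaluates to; the default of 16 for `nova_wsgi_processes_max` is assumed from noonedeadpunk's "I think it will be 16" above:

```python
# Python equivalent of the Jinja expression:
#   "{{ [[ansible_processor_vcpus|default(1), 1] | max * 2, nova_wsgi_processes_max] | min }}"
def wsgi_processes(vcpus: int, processes_max: int = 16) -> int:
    # max(vcpus, 1) guards against a missing/zero fact, the result is
    # doubled, and min() caps it at the configured maximum.
    return min(max(vcpus, 1) * 2, processes_max)

# On the 8-processor machine asked about above: min(max(8, 1) * 2, 16) = 16
print(wsgi_processes(8))
```

So on an 8-core controller the default lands exactly on the cap, which is why lowering either `*_wsgi_processes` or `*_wsgi_processes_max` reduces the worker count.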
*** macz_ has joined #openstack-ansible  11:43
<noonedeadpunk> imo, the max filter here is pointless  11:43
<noonedeadpunk> recyclehero: you're right, we have a pretty serious bug here....  11:44
* recyclehero receives bug bounty  11:46
<recyclehero> noonedeadpunk: can you give me a link so I can follow what you are working on  11:46
<noonedeadpunk> well, I didn't create one yet:)  11:47
<recyclehero> I meant where you have identified the bug  11:48
<recyclehero> but ok  11:48
*** macz_ has quit IRC  11:48
<noonedeadpunk> we don't trigger a service restart when glance is run with uwsgi  11:49
<noonedeadpunk> I will push a PR to cover that  11:50
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_glance master: Trigger uwsgi restart  https://review.opendev.org/756681  11:53
<noonedeadpunk> recyclehero: ^  11:53
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_aodh master: Trigger uwsgi restart  https://review.opendev.org/756684  12:04
*** shyam89 has joined #openstack-ansible  12:04
*** shyamb has joined #openstack-ansible  12:04
*** shyamb has quit IRC  12:05
*** shyam89 has quit IRC  12:05
*** shyamb has joined #openstack-ansible  12:06
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_barbican master: Trigger uwsgi restart  https://review.opendev.org/756686  12:08
*** lkoranda has quit IRC  12:10
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_barbican master: Cleanup stop handler  https://review.opendev.org/756689  12:11
*** cshen has quit IRC  12:14
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_cinder master: Remove unecessary apache tasks  https://review.opendev.org/756690  12:15
*** rf0lc0 has joined #openstack-ansible  12:22
<recyclehero> noonedeadpunk: but the problem with my situation concerning CORS doesn't relate to this, right? I have restarted glance-api itself. I think uWSGI is just a medium in between  12:22
<openstackgerrit> Merged openstack/openstack-ansible-os_nova master: Bind novncproxy host and port to defined variables  https://review.opendev.org/753774  12:23
<noonedeadpunk> recyclehero: well yeah, that was regarding the service not restarting  12:23
<noonedeadpunk> regarding cors - I can only suggest that you set it in some wrong way...  12:23
<recyclehero> I will figure it out later. first I need to have a basic setup working  12:25
<recyclehero> I did this btw http://paste.openstack.org/show/798849/  12:25
<recyclehero> at the end of user_variables.yml  12:25
<noonedeadpunk> recyclehero: well, you could just provide `glance_cors_allowed_origin: https://200.200.200.110` as we have a specific variable for it:)  12:26
<noonedeadpunk> and you're accessing horizon with https://200.200.200.110?  12:26
<recyclehero> yes  12:26
<recyclehero> didn't see that, but they have the same result I guess  12:27
<noonedeadpunk> yeah, they do  12:28
<recyclehero> I went into the glance container and checked the config  12:28
*** shyamb has quit IRC  12:28
<recyclehero> noonedeadpunk: I am uploading to glance via legacy. I was wondering, do both legacy and direct use haproxy and br-mgmt?  12:33
<noonedeadpunk> yeah  12:33
<noonedeadpunk> the difference is that in the case of direct, horizon doesn't take part and you stream content directly to the glance endpoint  12:33
<noonedeadpunk> while with legacy you upload to the horizon container, and then horizon uploads to glance  12:34
<recyclehero> great, but for both situations haproxy is involved and br-mgmt is used?  12:37
<noonedeadpunk> yes  12:37
<recyclehero> thank you  12:37
*** lkoranda has joined #openstack-ansible  12:39
<admin0> noonedeadpunk, can it be left blank?  12:43
<admin0> in dhcp_domain?  12:43
<noonedeadpunk> have no idea  12:43
<openstackgerrit> Merged openstack/openstack-ansible-rabbitmq_server master: Require the use of community.rabbitmq ansible collection  https://review.opendev.org/754657  12:50
*** cshen has joined #openstack-ansible  12:53
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Remove glance_registry from inventory  https://review.opendev.org/756318  13:02
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Switch integrated linters to focal  https://review.opendev.org/755759  13:03
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Fix octavia tempest tests  https://review.opendev.org/755737  13:03
*** shyamb has joined #openstack-ansible  13:05
*** shyam89 has joined #openstack-ansible  13:09
*** shyamb has quit IRC  13:12
<mgariepy> https://zuul.opendev.org/t/openstack/build/3b9ab58deab3468696dd8618c439b555/log/job-output.txt#1569-1612  13:16
<mgariepy> some error on the ara generation dmsimard ^?  13:17
*** vakuznet has quit IRC  13:19
<mgariepy> galaxy.ansible.com is having issues?  13:23
<mgariepy> https://zuul.opendev.org/t/openstack/build/3b9ab58deab3468696dd8618c439b555/log/job-output.txt#1528-1529  13:23
<dmsimard> mgariepy: it looks like it's trying to generate reports but there's no data (or database)  13:27
<dmsimard> I think the real issue is slightly above with the timed-out download, yeah  13:28
*** macz_ has joined #openstack-ansible  13:31
*** macz_ has quit IRC  13:36
*** shyam89 has quit IRC  13:40
*** nurdie has joined #openstack-ansible  13:44
<jrosser> looks like ansible galaxy is really broken somehow  14:01
*** lkoranda has quit IRC  14:20
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_cinder master: Trigger uwsgi restart  https://review.opendev.org/756719  14:20
<noonedeadpunk> jrosser: yeah it is, which is pretty disappointing...  14:21
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_cloudkitty master: Remove unused api handler  https://review.opendev.org/756721  14:27
*** soutr has joined #openstack-ansible  14:31
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_heat master: Trigger uwsgi restart  https://review.opendev.org/756724  14:33
*** MickyMan77 has joined #openstack-ansible  14:35
<soutr> To retrieve the value of the password from the below file: [clouds:  dal-prod-openstackv1:    auth:      username: 'user'      password: 'XXX'      auth_url: 'https://authurl:5000/v3'] I am using the below code: [- name: Parse auth file  set_fact:    auth_content: "{{ lookup('file', 'user.ini') | from_yaml }}"  delegate_to: localhost]  14:39
<soutr> [- name: Obtain password  set_fact:    cluster_pass: "{{ auth_content['clouds']['dal-prod-openstackv1']['auth']['password'] }}"] Is there any way I can combine the 2 into one?  14:39
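The two tasks soutr pasted can be chained into a single `set_fact` by parenthesizing the lookup and indexing into it directly (file name and cloud name taken from the paste above):

```yaml
# Hypothetical one-step replacement for the two set_fact tasks above
- name: Obtain password
  set_fact:
    cluster_pass: "{{ (lookup('file', 'user.ini') | from_yaml)['clouds']['dal-prod-openstackv1']['auth']['password'] }}"
  delegate_to: localhost
```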
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ironic master: Trigger uwsgi restart  https://review.opendev.org/756780  14:41
<noonedeadpunk> soutr: um, I'm not sure why you need this tbh. in the case of OSA you have all passwords already defined as variables. Even if not, openstack ansible modules use clouds.yaml nicely and you can just specify the cloud name there  14:45
*** soutr has quit IRC  14:50
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_nova master: Trigger uwsgi restart  https://review.opendev.org/756861  14:50
<noonedeadpunk> jrosser: btw it seems functional tests are somehow broken in train :(  14:51
<noonedeadpunk> specifically tox and py2  14:51
*** soutr has joined #openstack-ansible  14:52
<soutr> noonedeadpunk: I am trying to automate the creation of projects and users using ansible. The creation of users and projects will be done as the openstack admin user. I am saving the auth details for users and projects, and then parsing the file to retrieve the password, so that I can use them with the os_user module.  14:54
<soutr> - name: Add openstack users  os_user:    domain: "{{ item.user_domain }}"    default_project: "{{ item.default_project }}"    name: "{{ item.name }}"    description: "{{ item.description }}"    password: "{{ item.password }}"  # update_password: always  environment: "{{ env }}"  with_items: "{{ openstack_common.users }}"  14:54
<noonedeadpunk> soutr: well, you can avoid retrieving the password for this for sure. you may look at the example https://opendev.org/openstack/openstack-ansible-tests/src/branch/stable/train/sync/tasks/service_setup.yml  14:55
<noonedeadpunk> so the thing that needs to be done - this file should be put in /etc/openstack/clouds.yaml or ~/.config/openstack/clouds.yaml, i.e. https://docs.openstack.org/python-openstackclient/ussuri/configuration/index.html  14:56
<noonedeadpunk> and the ansible modules will just pick them up when you provide the cloud name as a parameter  14:57
<noonedeadpunk> `cloud: default` in our example  14:57
<noonedeadpunk> for you it will be `dal-prod-openstackv1`  14:57
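Putting noonedeadpunk's suggestion together with soutr's task: the module can authenticate from the clouds.yaml entry by name, so no password parsing is needed for auth at all. A sketch, assuming the `dal-prod-openstackv1` entry lives in /etc/openstack/clouds.yaml (the `password` field here is the new user's password from the loop item, not the admin credential):

```yaml
# Hypothetical rework of the os_user task above using clouds.yaml
- name: Add openstack users
  os_user:
    cloud: dal-prod-openstackv1
    domain: "{{ item.user_domain }}"
    default_project: "{{ item.default_project }}"
    name: "{{ item.name }}"
    description: "{{ item.description }}"
    password: "{{ item.password }}"
  with_items: "{{ openstack_common.users }}"
```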
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_panko master: Cleanup apache tasks  https://review.opendev.org/756864  14:59
<openstackgerrit> Merged openstack/openstack-ansible-os_mistral master: Remove support for lxc2 config keys  https://review.opendev.org/756249  15:00
<soutr> Thank you noonedeadpunk. I will take a look at those documents.  15:00
<noonedeadpunk> you might need them if you will be using some specific service clients which don't read clouds.yaml, but I can't think of any which don't currently  15:02
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_placement master: Trigger service restart  https://review.opendev.org/756866  15:05
<jrosser> noonedeadpunk: do you have a link to the failing train tests?  15:05
<noonedeadpunk> https://review.opendev.org/#/c/690342/  15:06
<noonedeadpunk> btw I dunno how we really lived without service restart on config change....  15:06
<jrosser> noonedeadpunk: the internet seems to say virtualenv 20 is maybe the problem  15:16
<noonedeadpunk> yeah, I googled  15:17
<jrosser> we are still running python2 there for those functional tests I think  15:17
<noonedeadpunk> but I can't recall that we've bumped it since release?  15:17
<noonedeadpunk> we do for sure  15:17
<jrosser> maybe not on train I guess  15:17
<noonedeadpunk> and tox should be bumped as well...  15:18
<noonedeadpunk> I mean tox should be fixed  15:18
<jrosser> where is this set for the openstack-ansible-tests repo.....  15:19
<noonedeadpunk> oh...  15:19
*** macz_ has joined #openstack-ansible  15:19
<noonedeadpunk> I think we take https://opendev.org/openstack/openstack-ansible/src/branch/master/global-requirement-pins.txt but there's neither tox nor virtualenv  15:20
<noonedeadpunk> well... we don't bump tox  15:22
<noonedeadpunk> https://opendev.org/openstack/openstack-ansible-tests/src/branch/stable/train/run_tests_common.sh#L85  15:22
*** soutr has quit IRC  15:24
*** macz_ has quit IRC  15:24
*** macz_ has joined #openstack-ansible  15:29
<masterpe> I'm upgrading rabbitmq from Stein to Train, and see that it brings down every rabbitmq node for the upgrade. Is that by design?  15:30
<noonedeadpunk> masterpe: it should bring down one node at a time  15:31
<noonedeadpunk> as we have serial: 1 here  https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/rabbitmq-install.yml#L24  15:31
<masterpe> I use the -e 'rabbitmq_upgrade=true' -e 'placement_migrate_flag=true' options and I followed rabbitmqctl cluster_status on every node  15:32
<noonedeadpunk> ah, well, yes, there might be a short downtime of all members when upgrading the first one  15:33
<noonedeadpunk> as we need to stop all instances except the single one, and while upgrading it, it might be restarted; during this time period rabbit might be down  15:34
<noonedeadpunk> https://www.rabbitmq.com/upgrade.html#rabbitmq-cluster-configuration  15:35
<masterpe> thanks  15:36
<noonedeadpunk> but it should be really ~1min downtime iirc  15:36
<noonedeadpunk> but probably you're right and we can do rolling upgrades nowadays since we're on 3.8  15:37
<masterpe> downtime was a little bit more than one minute but I did not clock it.  15:38
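The `serial: 1` behaviour referenced above comes from the play header in the linked rabbitmq-install.yml; a sketch of the relevant structure (play name and role list are illustrative, not copied from the real file):

```yaml
# Sketch: serial limits the play to one host per batch, so cluster
# members are upgraded one at a time rather than all at once.
- name: Install rabbitmq server
  hosts: rabbitmq_all
  serial: 1
  roles:
    - role: rabbitmq_server
```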
<jrosser> noonedeadpunk: maybe we need to pin virtualenv < 20 on train  15:42
<jrosser> just tons of complexity with where to put that  15:42
<noonedeadpunk> and we had a pin in python_venv_build only?  15:43
<jrosser> also we bring a *very* old version of pip in the functional tests  15:43
<jrosser> I think the error you see is just in getting the tox environment to work though  15:44
<jrosser> well before anything actually to do with ansible stuff  15:44
<noonedeadpunk> it is  15:45
<noonedeadpunk> I'm just trying to recall where I saw the virtualenv pinning :)  15:45
<jrosser> kind of unrelated I think, but https://bootstrap.pypa.io/3.4/ would get us to pip version 19.x.y  15:47
*** gyee has joined #openstack-ansible  16:00
*** rpittau is now known as rpittau|afk  16:01
<noonedeadpunk> jrosser: virtualenv==16.7.10 works nicely  16:02
<jrosser> noonedeadpunk: I think we are doing identical things :) http://paste.openstack.org/show/798875/  16:03
<jrosser> I get the same version 'Successfully installed virtualenv-16.7.10'  16:04
<noonedeadpunk> pip version is really unrelated  16:04
<jrosser> yeah, was just interested to see what happened  16:05
<noonedeadpunk> but yeah, I like your solution  16:05
<noonedeadpunk> but I didn't get what exactly installs virtualenv  16:05
<noonedeadpunk> a dependency of tox?  16:05
<jrosser> it's a dependency of tox  16:05
<noonedeadpunk> yeah, I see)  16:05
<jrosser> https://zuul.opendev.org/t/openstack/build/5c5a7a206ef346ba9e0fdc9949b42f19/log/job-output.txt#1227  16:06
<noonedeadpunk> well, I'd say let's bump tox as well?  16:06
<noonedeadpunk> as they're pretty close with >16.0.0 <20.0.0  16:07
<noonedeadpunk> it's kind of one major release  16:07
<jrosser> looks like the latest tox still supports python2  16:08
<noonedeadpunk> it does  16:08
<noonedeadpunk> suggest we return to this once it doesn't?  16:08
<jrosser> if we want to bump the tox version that sounds like something to do on master and backport  16:09
<jrosser> fixing the virtualenv pin we should just do straight to train right now  16:09
<noonedeadpunk> well, if we want to bump it in master, we need to do that in test-requirements.txt and include it as constraints  16:10
<noonedeadpunk> yeah, ok, let's bump virtualenv, backport to stein  16:10
<noonedeadpunk> and return to this later :p  16:10
<jrosser> are you happy if we do it like my diff? we don't really have a constraints file in the tests repo  16:11
<noonedeadpunk> I'm pretty happy  16:13
* jrosser makes patch  16:14
<openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-tests stable/train: Pin virtualenv<20 for python2 functional tests  https://review.opendev.org/756883  16:14
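The fix discussed above boils down to a one-line requirements pin; a sketch of what it would look like wherever the test requirements end up being declared (the exact location is what the conversation was about):

```
# Pin sketched from the discussion: virtualenv 20.x breaks the
# python2 functional test env; 16.7.10 is the version known to work.
virtualenv<20
```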
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-lxc_hosts stable/train: Updated from OpenStack Ansible Tests  https://review.opendev.org/690342  16:18
<noonedeadpunk> ah, well, Depends-On doesn't work for the tests repo  16:18
<jrosser> CI seems wedged up for this https://review.opendev.org/#/c/756587/ - looks like the centos-8 job is stuck waiting to timeout  16:25
<jrosser> well there is a lesson - no sooner do I say that, the job passes  16:27
<noonedeadpunk> heh:)  16:28
<openstackgerrit> Merged openstack/openstack-ansible-lxc_hosts master: Include libpython and rsync for centos in lxc base image  https://review.opendev.org/756587  16:47
*** cshen has quit IRC  17:19
*** maharg101 has quit IRC  17:24
*** cshen has joined #openstack-ansible  17:25
*** MickyMan77 has quit IRC  17:28
*** MickyMan77 has joined #openstack-ansible  17:29
*** cshen has quit IRC  17:29
<openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible master: Update integrated ansible-lint rules  https://review.opendev.org/756121  17:31
<openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible master: Use nodepool epel mirror in CI for systemd-networkd package  https://review.opendev.org/754706  17:31
<openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible master: Deprecate os_congress role  https://review.opendev.org/742521  17:31
<openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible master: Remove "when" statement from vars_prompt  https://review.opendev.org/755824  17:31
<openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible master: Bump SHAs for master  https://review.opendev.org/755973  17:32
<openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible master: Added Openstack Adjutant role deployment  https://review.opendev.org/756310  17:32
*** fghaas has joined #openstack-ansible  17:41
*** fghaas has quit IRC  17:44
<recyclehero> hey guys, sometimes I get a 504 when I want to log in. and this deployment is slower than my queens manual deployment.  18:04
<recyclehero> I remember noonedeadpunk told me before about adjusting the number of something  18:05
<recyclehero> is that something processes or threads or both?  18:05
<recyclehero> the system is mostly used by me, what should I consider?  18:06
<recyclehero> another housekeeping q from me: is heat a totally optional service? can I remove neutron-metering-agent?  18:06
<recyclehero> should I focus mostly on RAM or on CPU for the controller?  18:07
<recyclehero> I use an NFS cinder volume; is it better for it to reside on infra1 or compute1? give me some reasoning for this one.  18:08
<recyclehero> uWSGI  18:08
*** maharg101 has joined #openstack-ansible  18:24
*** maharg101 has quit IRC  18:31
<openstackgerrit> Georgina Shippey proposed openstack/openstack-ansible-galera_server master: Ability to take mariadb backups using mariabackup  https://review.opendev.org/755261  18:40
*** SecOpsNinja has left #openstack-ansible  18:48
<djhankb> recyclehero: I had run into the 504 issue quite a bit back when I was doing my first test cluster on basic desktop-class hardware. I set some of the services to use fewer "workers" to try and save on resources, which helped a bit. You can add some stuff from here http://paste.openstack.org/show/798881/ to user_variables.yml  18:49
<recyclehero> djhankb: thanks, I needed something to look for. you set this for an 8-core cpu?  18:52
<recyclehero> in general, what's the reasoning for choosing threads and processes? or rather, the process-to-thread ratio?  18:55
<djhankb> recyclehero: I think so - they were i5 desktop machines, IIRC  18:55
<recyclehero> I think the i5 has 4 cores though. but I am going for an i5 anyway  18:55
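The referenced paste is not reproduced in the log, but a user_variables.yml fragment in the same spirit would lower the worker counts using the `{{ service_name }}_wsgi_processes` pattern noonedeadpunk described earlier; variable names follow that pattern and the values here are illustrative for small desktop-class hardware:

```yaml
# user_variables.yml -- hypothetical reduced uwsgi worker counts
# (names follow the <service>_wsgi_processes pattern; values are examples)
keystone_wsgi_processes: 2
glance_wsgi_processes: 2
nova_wsgi_processes_max: 2
```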
*** spatel has joined #openstack-ansible  18:56
*** MickyMan77 has quit IRC  18:56
*** andrewbonney has quit IRC  19:00
<recyclehero> this is good: https://www.mathworks.com/help/parallel-computing/choose-between-thread-based-and-process-based-environments.html  19:01
*** miloa has quit IRC  19:21
<noonedeadpunk> damn it, 756244 failed in gates (╯°□°)╯︵ ┻━┻  19:29
<jrosser> oh dear  19:36
*** d34dh0r53 has quit IRC  19:49
*** d34dh0r53 has joined #openstack-ansible  19:52
*** nurdie has quit IRC  20:00
*** nurdie has joined #openstack-ansible  20:39
*** nurdie has quit IRC  20:45
*** spatel has quit IRC  20:57
<recyclehero> I followed this variable for the cors setting of glance:  21:05
<recyclehero> glance_cors_allowed_origin: "{{ (glance_show_multiple_locations | bool) | ternary(openstack_service_publicuri_proto | default('http') + '://' + external_lb_vip_address, None) }}"  21:05
<recyclehero> I found the defaults not good  21:05
<recyclehero> glance_show_multiple_locations: "{{ glance_default_store == 'rbd' }}"  21:06
<recyclehero> it means if you have the default file store you can't upload from horizon!  21:06
<recyclehero> I will set glance_show_multiple_locations: true and leave the rest as is.  21:09
<recyclehero> cause, you see, cors was complaining about the port mismatch.  21:10
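recyclehero's planned workaround as a user_variables.yml fragment; the explicit-origin alternative uses the variable noonedeadpunk pointed out earlier, and the IP is the deployment-specific one from this log:

```yaml
# Option described above: flip the ternary's condition so the
# default glance_cors_allowed_origin actually emits an origin...
glance_show_multiple_locations: true
# ...or set the CORS origin explicitly instead of relying on the default:
# glance_cors_allowed_origin: "https://200.200.200.110"
```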
*** cshen has joined #openstack-ansible  21:16
*** cshen has quit IRC  21:21
*** d34dh0r53 has quit IRC  21:28
*** d34dh0r53 has joined #openstack-ansible  21:28
*** rf0lc0 has quit IRC  21:30
*** MickyMan77 has joined #openstack-ansible  21:31
*** maharg101 has joined #openstack-ansible  22:28
*** maharg101 has quit IRC  22:37
*** nurdie has joined #openstack-ansible  22:40
*** nurdie has quit IRC  22:45
*** tosky has quit IRC  22:59
*** pmannidi has joined #openstack-ansible  23:23
*** macz_ has quit IRC  23:32
*** rfolco has joined #openstack-ansible  23:38
*** rfolco has quit IRC  23:40
*** rfolco has joined #openstack-ansible  23:41
*** rfolco has quit IRC  23:45
*** gyee has quit IRC  23:50

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!