Tuesday, 2021-02-09

*** tosky has quit IRC00:20
*** openstackgerrit has quit IRC00:49
*** cshen has joined #openstack-ansible01:16
*** openstackstatus has quit IRC01:20
*** openstack has joined #openstack-ansible01:23
*** ChanServ sets mode: +o openstack01:23
*** cshen has joined #openstack-ansible03:01
*** cshen has quit IRC03:06
*** mwhahaha has quit IRC03:26
*** PrinzElvis has quit IRC03:27
*** CeeMac has quit IRC03:27
*** PrinzElvis has joined #openstack-ansible03:28
*** mwhahaha has joined #openstack-ansible03:29
*** CeeMac has joined #openstack-ansible03:31
*** spatel has joined #openstack-ansible03:33
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-ansible05:33
*** gyee has quit IRC06:11
*** spatel has quit IRC07:01
*** cshen has joined #openstack-ansible07:02
CeeMacmorning07:12
*** miloa has joined #openstack-ansible07:17
*** lkoranda has joined #openstack-ansible07:56
*** rpittau|afk is now known as rpittau07:59
*** maharg101 has joined #openstack-ansible08:05
*** andrewbonney has joined #openstack-ansible08:19
*** lkoranda has quit IRC08:20
*** lkoranda has joined #openstack-ansible08:27
jrossermorning08:30
*** openstackgerrit has joined #openstack-ansible08:33
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: WIP - use the cached copy of the u-c file from the repo server  https://review.opendev.org/c/openstack/openstack-ansible/+/77452308:33
*** lkoranda has quit IRC08:37
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Add reference to netplan config example  https://review.opendev.org/c/openstack/openstack-ansible/+/77442508:50
jrossernoonedeadpunk: i put -W against my cached u-c patches https://review.opendev.org/q/topic:%22osa-uc-cache%2208:52
jrosseri think it's going to work certainly for CI / AIO cases08:53
*** lkoranda has joined #openstack-ansible08:53
*** lkoranda has quit IRC08:53
jrosserbut if you run master branch magnum on victoria for example, we need to be able to somehow get a cache of the V u-c file as well to point magnum_upper_constraints_url to08:54
jrosserah master not V of course08:55
noonedeadpunkI don't fully understand why we use content at the moment. to filter out things like tempest from it?08:55
noonedeadpunkbut I don't see this happening... It would be much easier just to use copy from localhost location (path of which we know)?08:57
jrosserit's not just a CI time thing though08:58
noonedeadpunknasty thing here is that we link and rely on really different steps08:58
jrosserit also has to work for multinode, and also when deploy host is seperate08:58
noonedeadpunkyeah, but we store file for non-ci as well08:58
jrosseryeah well this is kind of why i -W it, as theres a lot to consider08:59
noonedeadpunkyeah, it's a bit hacky I guess at the moment... and if you change requirements SHA - it won't get updated09:00
jrosserif the repo server role is done it should, but thats kind of not obvious09:01
*** jbadiapa has joined #openstack-ansible09:03
jrosseri'm wondering if this should all be some common_tasks stuff put into each role playbook, really not sure what is the right approach09:06
*** tosky has joined #openstack-ansible09:10
openstackgerritAlfredo Moralejo proposed openstack/openstack-ansible-os_tempest master: Remove horizon tempest plugin installation for EL8 distro  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/77460409:13
noonedeadpunkI think the approach with repo_server is good. But I think I'd move more logic into the role. we can just always try using copy to get constraints from /opt/ansible-runtime-constraints-{{ requirements_git_install_branch }}.txt and if it fails - get_url09:14
noonedeadpunkbut yeah, good question in this is how to determine upstream url for repo container only, and point the same variable to repo container for all other hosts (except localhost as well)09:15
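The copy-then-get_url ordering noonedeadpunk suggests can be sketched in plain shell (the cache path is taken from this discussion, and the function name and arguments are illustrative, not a committed interface): prefer the locally cached upper-constraints file, and only go to the network when it is missing.

```shell
# Sketch of the fallback described above: use the cached u-c file when the
# deploy/repo host has one, otherwise fetch it from the given URL.
fetch_constraints() {
    cache="$1"   # e.g. /opt/ansible-runtime-constraints-master.txt
    url="$2"     # e.g. the repo server (or upstream) constraints URL
    dest="$3"
    if [ -f "$cache" ]; then
        cp "$cache" "$dest"
    else
        curl -fsSL "$url" -o "$dest"
    fi
}
```

In role terms this is a `copy` task attempted first with `get_url` as the fallback, which matches the behaviour being proposed here.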
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Pass upper-constraints content to the repo_server role  https://review.opendev.org/c/openstack/openstack-ansible/+/77451809:45
*** lkoranda has joined #openstack-ansible09:55
*** gokhani has joined #openstack-ansible10:02
gokhanihi folks, can I ask how to change lxc containers' timezone ?10:03
noonedeadpunkthat's funny, but I never tried to do that so have no idea. I guess they're tied to the host timezone?10:04
noonedeadpunkoh, I think you can set it in lxc config10:05
*** lkoranda has quit IRC10:05
noonedeadpunkI think you can set environment.TZ for lxc_container_config_list variable10:07
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: WIP - use the cached copy of the u-c file from the repo server  https://review.opendev.org/c/openstack/openstack-ansible/+/77452310:08
gokhaninoonedeadpunk , I think I need to add the environment.TZ variable to /etc/lxc/lxc-openstack.conf10:10
gokhaninoonedeadpunk, it doesn't change anything :810:19
openstackgerritDmitriy Rabotyagov proposed openstack/ansible-role-uwsgi stable/train: Run uwsgi tasks only when uwsgi_services defined  https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/77449410:19
*** lkoranda has joined #openstack-ansible10:20
noonedeadpunkAs I said, most likely setting lxc_container_config_list and re-running the lxc-containers-create.yml playbook should apply the config. but that may also require a containers restart10:20
openstackgerritDmitriy Rabotyagov proposed openstack/ansible-role-uwsgi stable/train: Run uwsgi tasks only when uwsgi_services defined  https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/77449410:20
noonedeadpunkso it will result in adding config to "/var/lib/lxc/{{ inventory_hostname }}/config" https://opendev.org/openstack/openstack-ansible-lxc_container_create/src/branch/master/tasks/lxc_container_config.yml#L17-L2610:21
noonedeadpunkand yeah, container restart will be performed by the role10:21
gokhaninoonedeadpunk, how to set key/value pair "environment.TZ = Europe/Istanbul" to lxc_container_config_list. This variable is list.10:27
jrossergokhani: see the way it is used here https://codesearch.opendev.org/?q=lxc_container_config_list10:29
jrosseralso that some values are already set there so perhaps be careful with overriding that var10:29
*** lkoranda has quit IRC10:33
gokhanijrosser, it didn't work for me. I override this variable and in tasks I can see it add this variable > http://paste.openstack.org/show/802461/10:39
gokhanimay be I need to set lxc.environment.TZ=Europe/Istanbul instead of environment.TZ=Europe/Istanbul10:40
jrosseryou need to set whatever it is that the lxc config file expects10:42
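What is being attempted here could look like the following (a sketch only: the `lxc.environment=TZ=...` key naming is the LXC 2.1+ convention, and whether a running container's services pick it up is exactly what gokhani is debugging; written to a scratch file so the snippet is safe to run anywhere, whereas a real deploy would append to /etc/openstack_deploy/user_variables.yml):

```shell
# Hypothetical override; the real target file is user_variables.yml.
USER_VARS="$(mktemp)"
cat >> "$USER_VARS" <<'EOF'
lxc_container_config_list:
  - "lxc.environment=TZ=Europe/Istanbul"
EOF
# Then re-run the create playbook so the line lands in
# /var/lib/lxc/<container>/config and the role restarts the container:
#   openstack-ansible lxc-containers-create.yml
```

Note jrosser's caveat above: this variable may already carry values, so overriding it wholesale can drop defaults.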
openstackgerritJonathan Rosser proposed openstack/ansible-role-pki master: Add boilerplate ansible role components  https://review.opendev.org/c/openstack/ansible-role-pki/+/77462010:45
openstackgerritMerged openstack/openstack-ansible-os_zun master: Move zun pip packages from constraints to requirements  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/77230010:46
jrossernoonedeadpunk: did you try this on something with containers? https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77422410:50
jrosserthe CI jobs look all like metal10:50
noonedeadpunkfunctional are lxc?10:51
noonedeadpunkoh, hm...10:52
noonedeadpunkseems like they're not10:52
jrosserwell thats the funny thing, i'm not seeing that10:52
jrosseryeah surprising, as i'm sure they used to be10:52
noonedeadpunkbut according to recap they are lxc... https://zuul.opendev.org/t/openstack/build/57ac419aa1424096aa07f9dd514c3ae8/log/job-output.txt#367510:54
gokhanijrosser noonedeadpunk , lxc.environment.TZ didn't work. I only changed the timezone by installing tzdata on the lxc container. And maybe if we copy the /usr/share/zoneinfo directory to the container and link it with /etc/localtime it will work.10:54
noonedeadpunkwe do install tzdata by default nowadays... maybe we should backport its installation...10:56
jrosserseems [test1 -> localhost] everywhere10:56
noonedeadpunkhttps://opendev.org/openstack/openstack-ansible-lxc_hosts/src/branch/stable/victoria/vars/debian.yml#L5010:57
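What ended up working for gokhani (tzdata inside the container rather than LXC environment keys) amounts to the standard Debian/Ubuntu timezone switch; sketched here as a function with a root-dir argument so it can be rehearsed outside a container (the helper name and the `root` parameter are illustrative):

```shell
# set_timezone ZONE ROOT: point ROOT/etc/localtime at the zoneinfo file and
# record the zone name.  ROOT="" targets the running system; tzdata must be
# installed so /usr/share/zoneinfo exists.
set_timezone() {
    zone="$1"; root="$2"
    mkdir -p "$root/etc"
    ln -sf "/usr/share/zoneinfo/$zone" "$root/etc/localtime"
    printf '%s\n' "$zone" > "$root/etc/timezone"
}
# e.g. inside the container:  apt-get install -y tzdata && set_timezone Europe/Istanbul ""
```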
openstackgerritMerged openstack/openstack-ansible master: Do not prepare upstream repos for distro jobs  https://review.opendev.org/c/openstack/openstack-ansible/+/77437210:57
noonedeadpunkjrosser: btw I guess we're missing https://review.opendev.org/c/openstack/project-config/+/773387 for pki CI10:59
jrosseroh hrrm yes11:00
jrosseri didn't have much of a plan with that other than trying to get the CI to be alive11:00
noonedeadpunkaha, I see. I was wondering if we need integrated tests right now while not having playbook in place11:01
jrosseri was also thinking maybe just the infra tests11:01
jrosserperhaps stage one is just getting to the point we can fix up rabbitmq11:01
gokhaninoonedeadpunk, thanks I will add tzdata to ubuntu vars file for ussuri11:01
noonedeadpunkoh, yes, actually that's also good!11:01
* noonedeadpunk checking lxc11:07
*** evrardjp has quit IRC11:10
*** guilhermesp has quit IRC11:10
*** nicolasbock has quit IRC11:11
*** guilhermesp has joined #openstack-ansible11:11
*** nicolasbock has joined #openstack-ansible11:11
*** evrardjp has joined #openstack-ansible11:12
noonedeadpunkthat looks weird https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/containers-lxc-create.yml#L8111:12
*** miloa has quit IRC11:12
noonedeadpunkI think we should have smth more reliable in inventory nowadays?11:13
noonedeadpunklike `is_metal`11:15
*** gokhani has quit IRC11:16
*** gokhani has joined #openstack-ansible11:18
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Replace import with include  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77422411:19
noonedeadpunkwell, in sandbox role is running ok11:19
noonedeadpunk(against lxc)11:19
*** miloa has joined #openstack-ansible11:22
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Replace import with include  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77422411:23
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Adjust openstack_distrib_code_name  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77462411:25
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts stable/victoria: Adjust openstack_distrib_code_name  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77462511:26
openstackgerritMerged openstack/openstack-ansible-os_aodh master: Move aodh pip packages from constraints to requirements  https://review.opendev.org/c/openstack/openstack-ansible-os_aodh/+/77225911:45
*** lkoranda has joined #openstack-ansible11:52
*** zul has quit IRC11:54
*** andrewbonney has quit IRC12:08
*** guilhermesp has quit IRC12:08
*** nicolasbock has quit IRC12:09
*** andrewbonney has joined #openstack-ansible12:10
*** guilhermesp has joined #openstack-ansible12:10
*** nicolasbock has joined #openstack-ansible12:12
openstackgerritMerged openstack/openstack-ansible stable/victoria: Add haproxy_*_service variables  https://review.opendev.org/c/openstack/openstack-ansible/+/77412612:17
*** cshen has quit IRC12:22
*** cshen has joined #openstack-ansible12:26
jrosserhmm weird error https://zuul.opendev.org/t/openstack/build/f7594d20ed424213b6b7478c8c634e1412:30
* jrosser wonders if this is pip being too old12:30
fricklerjrosser: is that distro pip? then very likely yes12:32
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Pass upper-constraints content to the repo_server role  https://review.opendev.org/c/openstack/openstack-ansible/+/77451812:35
fricklerjrosser: if you run with "-vvv" you can see how pip sees the wheels but deems them incompatible12:39
jrosserhrrm thats ugly12:41
*** CeeMac has quit IRC12:58
*** CeeMac has joined #openstack-ansible13:01
noonedeadpunkuh, that's nasty as well https://zuul.opendev.org/t/openstack/build/7bf27b24d95c410ab9c22fb44c527ca9/log/job-output.txt#223113:28
noonedeadpunkthe new python cryptography module requires rust now...13:29
andrewbonneyThink I just hit that same thing in our internal CI13:29
noonedeadpunkfor ubuntu fixed with apt install rustc setuptools-rust13:30
jrosserthere was some chat in the infra irc about this yesterday13:31
jrossermay be that old pip isn't smart enough to find the built wheel13:31
noonedeadpunk* apt install rustc && pip3 install setuptools-rust13:31
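A quick probe for the failure mode being discussed: cryptography >= 3.4 builds a Rust extension when pip falls back to the sdist, so a missing rustc breaks the install. The remedy noonedeadpunk gives is the apt/pip pair above; pinning `cryptography<3.4` is the usual alternative when a Rust toolchain is not an option (that pin is a general workaround, not something proposed in this channel).

```shell
# Check whether a source build of cryptography >= 3.4 could succeed here.
if command -v rustc >/dev/null 2>&1; then
    echo "rustc found: cryptography can build from source"
else
    echo "no rustc: apt install rustc && pip3 install setuptools-rust first,"
    echo "or pin cryptography<3.4 to avoid the Rust build entirely"
fi
```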
* jrosser will try in VM13:31
jrosserwe currently pin virtualenv back in the functional tests which could be the cause13:32
openstackgerritMerged openstack/openstack-ansible master: Disable ssl for rabbitmq  https://review.opendev.org/c/openstack/openstack-ansible/+/77337713:32
* noonedeadpunk trying to get rid of functional tests for openstack-hosts and lxc roles13:39
*** lkoranda has quit IRC13:44
jrosserreally we need to merge this i think https://review.opendev.org/c/openstack/openstack-ansible-tests/+/77386213:46
jrosserno point fixing things on bionic13:46
noonedeadpunkyeah13:46
noonedeadpunkbut that patch is more about adding things to gates13:47
noonedeadpunkthat we're missing13:47
noonedeadpunkit doesn't drop bionic13:47
noonedeadpunkwe can drop it though there13:48
*** lkoranda has joined #openstack-ansible13:49
openstackgerritMerged openstack/openstack-ansible master: Add reference to netplan config example  https://review.opendev.org/c/openstack/openstack-ansible/+/77442513:55
*** spatel has joined #openstack-ansible13:59
*** sshnaidm is now known as sshnaidm|afk14:00
spatelI am trying to deploy ceph using OSA and successfully deployed the mon container, but when i try to run the playbook on the OSD node it gets stuck here - http://paste.openstack.org/show/802471/14:03
spateldeploying octopus14:03
spateli checked and i do have a /dev/sdb disk but it says it's not able to find it, so it looks like something else is going on14:04
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-tests master: Unpin virtualenv version  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/77465114:04
admin0is scaling from 1 -> 3 controllers as easy as defining them in the config and running the playbooks ?14:04
jrosser^ that allows cryptography to install on bionic but it does mean moving to the new resolver at the same tie14:05
jrosser*time14:05
noonedeadpunkdoh14:06
jrosserspatel: thats an error from ceph_volume Failed to find physical volume "/dev/sdb1"14:07
jrosserrelated to LVM pvs, you need to check what ceph_volume wants your disk setup to be (this is not an OSA thing)14:07
jrosserspatel: as usual there is goldmine of info in the AIO :) https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/tasks/prepare_ceph.yml#L65-L9114:08
spatelI have created /dev/sdb1 using fdisk because i ran out of options, otherwise i told ceph to use the /dev/sdb raw disk14:11
jrosserwell first check that ceph-volume works with a raw partition or if you need to put some LVM on that first14:20
mgariepyceph-volume can manage the bare drives if you let it.14:21
*** pcaruana has joined #openstack-ansible14:24
admin022.0.0 -- on a 3x controller setup, mysql fails to come up .. the error is: WSREP: Member 1.0 (m2b10_galera_container-ccaa6815) requested state transfer from '*any*', but it is impossible to select State Transfer donor: Resource temporarily unavailable14:39
spatelmgariepy jrosser when i am running this command manually "/usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb" it just hang and not returning any output so trying to understand what its doing14:41
admin0spatel, why not run a decoupled build ? where u bring up ceph separately to osa14:45
admin0and then just integrate the two14:45
spatelI have very small ceph deployment, just for glance images and small backup stuff that is why deploying using OSA + ceph14:46
kleinispatel: I saw your blog post regarding PowerDNS and Designate. Does your setup work to upgrade both PowerDNS instances from Designate?14:50
spatelupgrade both PowerDNS instances from Designate? i don't know what you're trying to say14:51
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Remove neutron_keepalived_no_track variable  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/77080814:52
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Switch neutron functional jobs to focal  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/77397914:54
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Switch neutron functional jobs to focal  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/77397914:55
kleinisorry, the question is: do both PowerDNS instances transfer new zone data from Designate successfully? In my case only one PowerDNS instance is notified through Web API, although I have two targets defined.15:01
spatelYes it notify both PowerDNS using designate15:03
spatelkleini15:03
kleiniokay, then that is different for your setup. maybe something fixed with Victoria and broken in Ussuri.15:04
spatelif you have 10 PowerDNS they will all get notified as long as they are in the target list. (I have noticed the first target DNS always gets updated immediately but the next target takes a few more seconds (in my case a 20 to 30 second delay))15:05
kleinimaybe I just need more time to test this again in my environment15:06
spatelI never tested designate on Ussuri but i can tell you victoria its working15:31
spatelkleini :)15:31
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add hosts integrated tests  https://review.opendev.org/c/openstack/openstack-ansible/+/77468515:37
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Use integrated tests  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77468815:41
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add hosts integrated tests  https://review.opendev.org/c/openstack/openstack-ansible/+/77468515:41
*** sshnaidm|afk is now known as sshnaidm15:44
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Use integrated tests  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/77468815:45
*** gokhani has quit IRC15:48
*** spatel has quit IRC15:59
*** macz_ has joined #openstack-ansible16:02
noonedeadpunk#startmeeting openstack_ansible_meeting16:02
openstackMeeting started Tue Feb  9 16:02:54 2021 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.16:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:02
*** openstack changes topic to " (Meeting topic: openstack_ansible_meeting)"16:02
openstackThe meeting name has been set to 'openstack_ansible_meeting'16:02
*** macz_ has joined #openstack-ansible16:03
noonedeadpunktrying to check if we have some bugs to discuss...16:05
noonedeadpunkI think jrosser covered most of them today :)16:06
noonedeadpunk#topic office hours16:07
*** openstack changes topic to "office hours (Meeting topic: openstack_ansible_meeting)"16:07
jrossero/16:09
jrosserhello16:09
noonedeadpunk\o/16:09
noonedeadpunkI don't have really much from my side to say, since I had pretty little time on my hands :(16:10
jrosserfeels like we need to get all this new-pip stuff merged16:11
noonedeadpunkI'd say we almost did?16:11
noonedeadpunkhttps://review.opendev.org/q/topic:%22osa-new-pip%22+(status:open)16:11
noonedeadpunkit's super closr16:11
noonedeadpunk*close16:12
jrosserwe don't yet land the patch to the integrated repo which turns it on16:12
jrosserthis is related for the tests repo https://review.opendev.org/c/openstack/openstack-ansible-tests/+/77465116:12
noonedeadpunkwe're stuck on neutron pretty much16:13
noonedeadpunkand tests repo does not make this easy for us16:13
jrosseryeah lots of things there, the tests repo patch will help16:13
jrosserthen we need the bionic->focal patch for os_neutron16:13
noonedeadpunkwhich just doesn't work actually...16:13
*** spatel has joined #openstack-ansible16:14
jrosserindeed, the functional tests are all generally unhappy16:14
jrosserhttps://review.opendev.org/773979 is failing horribly in CI just now16:15
jrosseroh right16:16
jrosserwe can't and the change to the tests repo + bionic->focal without also the constraints->requirements changes for os_neutron16:17
jrossersome of these patches are going to need to be squashed16:17
jrosser*land16:17
noonedeadpunkwhy constraints->requirements changes relate to bionic vs focal? I guess they will get same versions during play?16:18
noonedeadpunkbut see no issues in merging as well if it will be required16:18
noonedeadpunkalso I'm wondering what to do with octavia on centos16:19
noonedeadpunkshould we jsut mark it nv now?16:19
jrosseri wonder if johnsom is around?16:20
johnsomHi16:20
jrosserwoah16:20
jrosser:)16:20
johnsomYou rang?16:20
johnsomWhat is up?16:20
jrosserdid you see this http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020218.html16:21
jrosserwe are a bit stuck on our centos-8 CI jobs16:21
johnsomHmm, reading through. The initial report is a nova bug it looks like. Let me read all the way down16:22
noonedeadpunkthe issue here is that nova and neutron tempest tests are passing for us..16:23
noonedeadpunkmaybe we're testing wrong things...16:23
jrosserwe should check they actually boot something :)16:24
johnsomWell, Octavia tends to actually test more than other projects. We have true end-to-end tests, where some gloss over things16:24
johnsomIs there a patch with logs I can dig in?16:24
noonedeadpunksure https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/76995216:25
johnsomThanks, give me a few minutes.16:25
noonedeadpunk`tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops` should boot smth I guess16:26
noonedeadpunkhttps://zuul.opendev.org/t/openstack/build/0a123e189be8445da96927be09220d7a/log/logs/openstack/aio1-utility/tempest_run.log.txt#135 (it's for nova role CI)16:26
johnsomHmm, those logs have expired. Another patch maybe?16:26
johnsomAh, nevermind, I had the wrong link16:26
noonedeadpunkjrosser: yeah they do spawn an instance https://zuul.opendev.org/t/openstack/build/0a123e189be8445da96927be09220d7a/log/logs/host/nova-compute.service.journal-20-52-15.log.txt#524116:27
jrossercool16:28
noonedeadpunkjohnsom: you an also check this if previous has expired https://zuul.opendev.org/t/openstack/build/df371a76c1ab4e76b97e4b6b974fe29a16:29
noonedeadpunkbtw for the last patch debian also failed in pretty much the same way I'd say...16:30
*** chandankumar is now known as raukadah16:33
jrossernoonedeadpunk: i did not know what to do about the 0.0.0 version here https://bugs.launchpad.net/openstack-ansible/+bug/191512816:36
openstackLaunchpad bug 1915128 in openstack-ansible "OpenStack Swift-proxy-server do not start" [Undecided,New]16:36
jrosserother than say we're not really supporting rocky.....16:36
noonedeadpunkI'm wondering if it's because they checked-out to rocky-em tag16:38
noonedeadpunkI could imagine that pbr might go crazy about that16:39
jrosseroh interesting, could be16:39
jrosserperhaps an assumption that a tag is a number16:39
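The suspicion above (pbr deriving 0.0.0 when the checkout sits on the non-numeric `rocky-em` tag) can be illustrated with a throwaway repo; this only shows that such a tag carries no version number for pbr to parse — the exact pbr behaviour on that deployment is still the open question here:

```shell
# Build a repo whose only tag is "rocky-em" and see what describe yields.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m base
git tag rocky-em
# pbr walks tags looking for a semver-ish version; "rocky-em" has none:
git describe --tags
```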
jrosserwhilst we are in meeting time i guess we should also talk about CI resource use?16:40
noonedeadpunkyeah16:40
noonedeadpunkI think the best we can do, except reducing time, is also move bionic tests to experimental16:41
jrosseri think that the conclusion on the ML is a good one, reducing job failures is the biggest win16:41
noonedeadpunknot sure if we should actively carry on bionic16:41
jrosserbecause that may be even 100% overhead right now, or more16:41
noonedeadpunkand main issue with failures I guess is galera16:41
noonedeadpunkyeah, there were another ones, like auditd bug...16:42
jrosseri'm going to try and be a bit more disciplined with recheck to note on the etherpad (https://etherpad.opendev.org/p/osa-ci-failures) when there is some systematic error16:42
noonedeadpunkand I guess looking into gnocchi is also useful16:42
jrosseroh yes there is a whole lot of mess there16:42
jrossersomething very strange with the db access unless i'm reading the log badly16:43
noonedeadpunk+1 to having that etherpad16:43
noonedeadpunkI think I need to deploy it to see what's going on16:43
*** rh-jelabarre has quit IRC16:45
jrosserwhat to do with mariadb? is this an irc sort of thing?16:45
noonedeadpunkI actually have no idea except asking, yeah.16:47
* noonedeadpunk goes to #mariadb for this16:47
noonedeadpunk* #maria16:47
admin0hi all .. i am getting an issue in setup-infra that i cannot understand .. this is the error:  https://gist.githubusercontent.com/a1git/bf7c55a1befd59e3682be485bc4b1e88/raw/785c1d0a32fc05ae23e5fa5dbd859d3934f6930a/gistfile1.txt -- does it mean i need to downgrade my pip ?16:48
admin0i tried 22.0.0 .. but it fails on galera setup .. so going back on 21.2.216:48
*** rh-jelabarre has joined #openstack-ansible16:50
jrosseradmin0: have you used venv_rebuild=true ever on that deployment?16:52
noonedeadpunkuh....16:53
admin0i have not .. this is a new greenfield16:53
noonedeadpunkwe need to merge https://review.opendev.org/q/I6bbe66b699ce5ab245bb9779b61b5c4625eba92716:53
admin0on one line in the log inside the python_venv_build  log, I find 021-02-09T22:13:01,803     error: command 'x86_64-linux-gnu-gcc' failed with exit status 116:54
spatelnoonedeadpunk ++++++1 for that patch16:54
admin0aren't those installed by ansible inside the container ?16:54
noonedeadpunkI guess it should be installed only on repo container where we usually delegate16:55
admin0i will lxc-containers-destroy .. and retry once more16:55
spateladmin0 it cooks everything on the repo and then just deploys on the other containers to reduce duplicated work16:56
openstackgerritMerged openstack/openstack-ansible-tests master: Unpin virtualenv version  https://review.opendev.org/c/openstack/openstack-ansible-tests/+/77465116:56
johnsomjrosser noonedeadpunk I think we need to bring in a nova expert on this. I don't see why nova is going out to lunch, but there are a bunch of errors in the nova logs.  This seems to be related: https://zuul.opendev.org/t/openstack/build/df371a76c1ab4e76b97e4b6b974fe29a/log/logs/host/nova-api-os-compute.service.journal-12-56-44.log.txt#689316:56
spatelvenv_rebuild can be evil without that patch :) I learnt that the hard way16:57
jrosserit should never be trying to build that wheel on the utility container like spatel says16:57
jrosserit means that for some reason it is not being taken from the repo server16:57
johnsomThis is the other key message: https://zuul.opendev.org/t/openstack/build/df371a76c1ab4e76b97e4b6b974fe29a/log/logs/host/nova-compute.service.journal-12-56-44.log.txt#597016:57
johnsomBut that may be a side effect of the cleanup/error handling related to the above error16:58
* jrosser sees eventlet......17:00
johnsomYeah17:03
noonedeadpunkhm, that seems like libvirt issue indeed17:03
noonedeadpunkwondering why we don't see it anywhere else...17:03
johnsomWell, I really think it's related to the messaging queue problem. The libvirt very well may be a side effect17:03
johnsomI'm just not sure what it is trying to message there.17:04
jrosserrabbitmq log is totally unhelpful :(17:04
noonedeadpunkeventually I saw this messages in my deployment with ceilometer17:06
noonedeadpunkwhen it agent tries to poll libvirt17:06
noonedeadpunkand the metric it's polling is not supported by libvirt17:07
noonedeadpunkbut here we don't have any pollster I guess (except nova)17:07
*** spatel has quit IRC17:07
*** spatel has joined #openstack-ansible17:08
noonedeadpunkwell anyway, thanks for taking time and watching johnsom!17:08
noonedeadpunk#endmeeting17:08
*** openstack changes topic to "Launchpad: https://launchpad.net/openstack-ansible || Weekly Meetings: https://wiki.openstack.org/wiki/Meetings/openstack-ansible || Review Dashboard: http://bit.ly/osa-review-board-v3"17:08
openstackMeeting ended Tue Feb  9 17:08:56 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:08
johnsomYeah, sorry I can't be of more help17:08
openstackMinutes:        http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-02-09-16.02.html17:08
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-02-09-16.02.txt17:09
openstackLog:            http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-02-09-16.02.log.html17:09
johnsomWe need nova team level knowledge on that one.17:09
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Switch neutron functional jobs to focal  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/77397917:12
jrossermgariepy: can you take a look at this if you have a moment https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/774159/417:20
*** gyee has joined #openstack-ansible17:20
*** maharg101 has quit IRC17:39
*** lkoranda has quit IRC17:46
*** rpittau is now known as rpittau|afk17:56
spatelI think i am totally stuck with ceph, ceph-ansible is running this command and it's just hanging forever   " ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb "18:07
spatelthat command supposed to create LVM etc.. right18:07
spateli wonder if this is octopus issue18:08
admin0spatel, i have octopus using ceph-ansible18:08
admin0worked fine for me18:08
spatelhmm ubuntu 20.04?18:08
admin0i used sgdisk to zap the partitions .. so the disk is clear and used auto detect18:09
admin0yep18:09
spateli did zap to wipe out all partition18:09
spateldid you use raw disk or LVM?18:09
admin0it will do the lvm itself18:09
admin0let me share my configs18:10
spatelplease18:10
admin0spatel,  v18:12
admin0https://gist.github.com/a1git/1f0b75a169a74e463b92214c6148bb5518:12
spatelreading18:12
spatelwhere are data disk in config?18:13
admin0no need :)18:14
admin0thats the magic18:14
spatelhow does ceph know which is OS disk and which is spare disk18:14
admin0if there is no partition table, it uses the disk18:14
admin0so your main disk which has data and partitions is untouched18:14
spatelhmm18:14
admin0this means i can have uneven disks and i don't have to worry about making a config18:14
admin0else you have to write every disk and thats a pain18:15
admin0https://gist.github.com/a1git/7f491039325bd735d3e93b2cae418da718:15
spatelin my old ceph deployment i have each disk in config because some disk are not part of ceph18:15
admin0one of the server where there are disks18:15
admin0for those disks, just create a partition .. like  fdisk .. 1 partition .. and ceph will not touch it18:15
admin0it will only add those disks which have no partion table .. so  sgdisk --zap ones18:16
spatelhmm18:16
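admin0's scheme can be sketched as two helpers (device names in the usage comments are examples, and the helpers themselves are hypothetical wrappers around the commands he names): disks that should become OSDs get their partition table zapped so auto-discovery picks them up; disks ceph must leave alone get a partition table, since partitioned disks are skipped.

```shell
# DESTRUCTIVE when invoked on a real device; defined here, not run.
zap_for_ceph() {        # blank the disk -> eligible for osd_auto_discovery
    sgdisk --zap-all "$1"
}
protect_from_ceph() {   # give the disk a partition table -> skipped by ceph
    parted -s "$1" mklabel gpt mkpart keep 0% 100%
}
# usage (example devices):
#   zap_for_ceph /dev/sdb
#   protect_from_ceph /dev/sdc
```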
*** jpvlsmv has joined #openstack-ansible18:18
spateladmin0 this is what i have http://paste.openstack.org/show/802489/18:19
jrosserpublic and cluster networks overlap there weirdly18:21
admin0it failed again ..     unable to execute 'x86_64-linux-gnu-gcc': No such file or directory ..18:22
admin0can't seem to make utility-install work18:23
jrosseradmin0: CI tests are passing, it is usually something in your environment18:23
jrosseri built an AIO today which worked18:24
admin0if i have to redo from scratch, i do a rm -rf /openstack  on all servers,  rm -rf /etc/ansible .. rm -rf /etc/openstack_deploy and then  start again18:24
jrosserany my colleague upgraded a region U->V today too18:24
jrosserthats not cleaning out the repo server18:25
jrosserreally you should read the detail of the bug i pasted earlier18:25
jrosserunderstand what it breaks and why18:25
admin0this seems like the error i have https://bugs.launchpad.net/rally/+bug/161871418:26
openstackLaunchpad bug 1618714 in Rally "Running setup.py install for netifaces: finished with status 'error'" [Undecided,Fix released]18:26
jrosserthis is from 201618:27
admin0jrosser,    you asked whether _rebuild=true is used or not .. but which bug did you paste earlier18:27
admin0i seem to have missed it18:27
jrosserhttps://bugs.launchpad.net/bugs/191430118:29
openstackLaunchpad bug 1914301 in openstack-ansible "passing venv_rebuild=true leaves repo server in unusable state" [Undecided,Fix released]18:29
jrosserif you just delete the directories that you listed then you are not doing a clean deployment18:29
mgariepyjrosser, looking at it now.18:30
jrosserperhaps this explains why you experience so much failure when switching between tags?18:30
jrosserspecifically, if you leave the /var/www contents available to the repo server, then next deployment you may inherit bad state for that18:31
jrosserand if at some point you have run with venv_rebuild=true, then the trouble caused by that bug will propagate into the next deploy18:31
spateljrosser  oh! i can see my mon ips are wrong. thanks for pointing that out18:31
spatelafter fixing the mon IP it is still hanging on the ceph-volume lvm create command, damn it!18:35
spateladmin0 can you send me output of ceph-volume inventory /dev/sdX   command18:35
admin0i do lxc-container-destroy to destroy all containers first . then delete the /openstack  on all controllers and computes18:36
spatelI don't have a "sas address" value18:36
admin0what would be the best way to clean and redo it ?18:36
admin0and i also delete the inventory file, so that the new containers will be new ones ( and not reusing old ips or folders by any chance)18:37
jrosseradmin0: you should do whatever works best for you :) here we tend to just pxeboot stuff again if it gets messed up18:39
admin0i have that too . trying to avoid that18:39
admin0one final question18:40
jrosserthe answer lies in your repo server18:40
jrosserwhy did it not build the wheel for netifaces18:40
jrosserthats the actual trouble18:40
admin0is expanding from 1 to 3 controllers only a matter of adding them into the config file and running the playbooks ?18:40
admin0my 22.0.0 failed on galera .. so i can try to delete all and go back to victoria with only 1 controller added18:40
jrosserwith one controller you will have the internal/external IP on an interface yourself18:40
jrosserthat needs converting to keepalived managing the VIP18:41
admin0i also have keepalived/haproxy on a single controller with a VIP18:41
admin0so that is taken care of18:41
jrosserit should be easy to add more then18:41
jrossergalera does sometimes fail to start properly on focal18:41
jrosserdid you check that?18:42
admin0spatel, https://gist.github.com/a1git/0b8b1906382318e021f9d6a782af558e18:43
admin0one is from virtual(kvm) based ceph install, another one is on metal18:43
spatelthanks, so you don't have a "sas address" either; the error i am getting may be safe to ignore18:46
admin0the last time i did ceph+osa was in 2016 :)18:50
admin0always decoupled after that18:50
admin0that too from copying the rackspace playbooks18:50
admin0not the osa one, but their other one18:50
admin0i forgot which one18:51
admin0i think i may have a clue/hunch :D18:52
*** andrewbonney has quit IRC18:52
admin0i copied my config from an identical m1000e .. but this one does not have mtu9k, while that one has ..  now i realize it may have been the cause of this and also why the galera was not syncing18:53
admin0going back on 22.0.0 to retry this again18:53
admin0i had already destroyed the containers before i could validate that it had the 9000 mtu that would not have worked18:54
*** jpvlsmv has quit IRC19:00
openstackgerritMerged openstack/ansible-role-uwsgi stable/train: Run uwsgi tasks only when uwsgi_services defined  https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/77449419:11
*** miloa has quit IRC19:23
admin0it was the mtu :(19:34
admin0all working good now19:34
admin0"<jrosser> the answer lies in your repo server" \o/19:34
*** maharg101 has joined #openstack-ansible19:36
*** jpvlsmv has joined #openstack-ansible19:36
spateljrosser admin0 my ceph problem resolved :)19:39
*** macz_ has quit IRC19:39
spateldon't ask me what was the issue otherwise you will beat me.. lol19:39
jrosserawesome19:39
jrosserwrong host?!19:40
spatelproblem was mtu19:40
jrosserlol19:40
*** macz_ has joined #openstack-ansible19:40
*** maharg101 has quit IRC19:40
spatelin netplan i set the mtu on the bridge but that doesn't work, you have to set it on bond0 or the vlan interface19:40
spatelit would be good if our ansible playbook checked the MTU and did a ping -s 9000 kind of test to see if it gets through, otherwise throw an error :)19:42
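The netplan pitfall spatel hit can be sketched like this (interface names and the VLAN id are placeholders for whatever the deployment actually uses): setting `mtu` only on the bridge is not enough, the bond and VLAN interfaces underneath need it too.

```yaml
# /etc/netplan/01-netcfg.yaml (sketch; eno1/eno2/bond0/br-mgmt and the
# VLAN id are placeholders)
network:
  version: 2
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      mtu: 9000          # jumbo frames on the bond ...
  vlans:
    bond0.30:
      id: 30
      link: bond0
      mtu: 9000          # ... and on the VLAN interface
  bridges:
    br-mgmt:
      interfaces: [bond0.30]
      mtu: 9000          # the bridge setting alone does not propagate down
```

A quick end-to-end check along the lines spatel suggests: `ping -M do -s 8972 <peer>` — 8972 because the 9000-byte MTU still has to fit a 20-byte IP header and an 8-byte ICMP header.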
jpvlsmvHello, I'm a first time caller...19:45
spateljpvlsmv hello19:46
jpvlsmvtrying to deploy Victoria, with some security constraints... specifically, root must not be able to log in via ssh.  There are many tasks that specify user:root19:47
jpvlsmvwhich overrides the -u openstackuser and fails19:47
admin0spatel, yours also :D19:51
jpvlsmvFor example, in openstack-hosts-setup.yml task Install Ansible prerequisites, changing the user:root to become: true fixed my situation19:51
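The pattern jpvlsmv describes, sketched on a hypothetical task (the real tasks in openstack-ansible differ in detail; the package task here is illustrative):

```yaml
# Before (sketch): the hard-coded connection user overrides
# "-u openstackuser" and fails wherever sshd_config has
# "PermitRootLogin no".
#
# - name: Install Ansible prerequisites
#   user: root
#   package:
#     name: python3
#     state: present

# After: connect as the unprivileged deploy user and escalate via sudo.
- name: Install Ansible prerequisites
  become: true
  package:
    name: python3
    state: present
```

This assumes the deploy user has passwordless sudo on the targets; `become` escalates per-task instead of forcing the ssh login user.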
admin0i burned 6 hours on mtu today :) before i realized it19:51
spatelit would be good to have that as part of the tests in ansible-playbook, so one less thing to worry about19:52
jrosserjpvlsmv: currently we don't really have support for deploying as the non root user19:52
admin0jpvlsmv, is there an issue just logging in as root, or adding the keys to root directly ?19:52
jrosserhowever as you've seen it's actually not so hard to fix things up19:53
jrosserif you are wanting to make some contributions to openstack-ansible this would be excellent19:53
openstackgerritGhanshyam proposed openstack/openstack-ansible-os_horizon master: Use Tempest for dashboard test instead of tempest-horizon  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/77471919:54
jpvlsmvwell, I have a key to log in to the target, but it's as my own userid rather than root, and the security folks have blocked root via sshd_config19:54
admin0jpvlsmv, then why do you make this your issue ? go back to the security folks and say you need root to continue19:57
admin0or send them the playbooks and ask them to help you write a workaround for you19:57
jpvlsmvUnfortunately, they require "PermitRootLogin no" (because that line is specified in their controls)19:59
jpvlsmvok, well, I'll look at how to make such a contribution...20:00
jrosserjpvlsmv: have you built an all-in-one?20:02
jpvlsmv@jrosser I have before, but currently working to expand out to a proper-sized grid20:03
jrosseryou can probably do a lot of this in an AIO by disabling root login and seeing what breaks20:03
spatelI have an openstack design question: the last couple of years i have been building openstack in my own datacenter where i have full control of the network / servers etc. but now i want to build openstack in a remote datacenter where i am renting some servers and don't have control of the network. what options do i have to build openstack there?20:05
admin0its not that hard20:05
spatelI am thinking about network design20:06
spatelcurrently i have all VLAN base provider20:06
admin0you need to create your own OOB network ..20:06
admin0so throw in 1 server ( i call it infra node) .. and install ubuntu, vyos, or pfsense .. where you can create vlans, tags, multiple networks20:06
admin0then you can vpn here and reach the idrac/cimc of the servers and switches20:07
spateli need VLAN right on physical network br-mgmt, br-vxlan etc..20:07
admin0yes20:07
admin0you will need to rent/colo the switch .. so it needs to be able to do vlans20:07
admin0and also based on your traffic and how you get the ips, you can have either the DC have .1 on the subnet, or you can ebgp to the dc and split their /24 to /25 internally on your own vlan tag for br-vlan20:08
spatellets say i don't have control of the DC network (you can't put in any switch or have vlans), in that case what options do i have?20:10
admin0you mean pure dedicated servers only ?20:10
spatelcurrently collecting all information20:10
spatelYes pure 100% metal server20:11
spatelwith public IPs20:11
admin0if they are in the same L2 , you can still run your network between them20:11
spatelthat is what i am thinking and trying to understand what i can do with minimum resource.20:12
admin0else, you need something else like openvpn where on a subinterface you run the br-mgmt br-vlan br-vxlan etc to make it work .. but you need to do some NAT tricks for the ip addresses20:12
spatelI am sure its not going to be easy20:13
spatelI need DPDK or SRIOV also because my network profile is bursty20:13
jrosseri have seen this done using linux vxlan to overlay on top of no-vlans metal deployment20:18
spateljrosser that would be overkill right?20:24
admin0ext-net can also be on a vxlan right ?20:25
spatelrunning all VLANs on top of overlay network like spine-leaf20:25
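A rough sketch of the vxlan-over-routed-metal idea jrosser mentions (all addresses, VNIs and interface names are placeholders, and the commands need root, so treat this as an illustration rather than a recipe):

```
# On each host: carry br-mgmt traffic over the only routed network
# available. 203.0.113.1 / 203.0.113.2 stand in for the hosts' public IPs.
ip link add vxlan-mgmt type vxlan id 100 dstport 4789 \
    local 203.0.113.1 remote 203.0.113.2
ip link add br-mgmt type bridge
ip link set vxlan-mgmt master br-mgmt
ip addr add 172.29.236.11/22 dev br-mgmt
ip link set vxlan-mgmt up
ip link set br-mgmt up
```

Every tunnelled frame pays roughly 50 bytes of vxlan/UDP/IP overhead, so the MTU discussion from earlier in the day applies here too.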
spatelI need to find a good datacenter that can provide me dedicated VLANs for my OSA control plane20:26
jrosserspatel: depends if you think it is overkill if all you have is public IPs20:26
jrosserlike where your internal vip would go, for example20:27
jrosserall http and insecure.....20:27
spatelcurrently i am thinking through my head.. not sure what kind of deal we will get, currently i am preparing all Question/Answer to ask when going to hunt for DC racks20:28
spatelif its very complicated then i have to go for renting racks instead of servers20:29
*** waxfire has quit IRC20:39
*** waxfire has joined #openstack-ansible20:40
*** d34dh0r53 has quit IRC21:07
*** d34dh0r53 has joined #openstack-ansible21:08
*** jpvlsmv has quit IRC21:10
*** poopcat has quit IRC21:22
*** poopcat has joined #openstack-ansible21:22
spateljrosser do you know how or what part of the playbooks creates the default ceph pools like vms, images and metrics etc?21:27
spatelI am seeing it didn't create any pools on the ceph storage so wondering if i am missing something21:28
*** ianychoi__ has joined #openstack-ansible21:40
*** ianychoi_ has quit IRC21:43
*** cshen has quit IRC21:59
jrosserspatel: https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/templates/user_variables_ceph.yml.j2#L2522:23
spatelI do have that setting in user_variables.yml22:24
spatelopenstack_config: true22:24
jrosserthis is all ceph-ansible stuff, not OSA22:24
jrosserhttps://github.com/ceph/ceph-ansible/blob/b1f37c4b3ddec4ff564514c079026f1e020575ea/roles/ceph-defaults/defaults/main.yml#L578-L63322:24
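An abridged sketch of what those ceph-ansible defaults look like (keys trimmed; check the linked roles/ceph-defaults/defaults/main.yml for the exact shape in your version):

```yaml
# ceph-ansible ceph-defaults sketch (abridged, keys vary by version)
openstack_config: false          # OSA's user_variables flip this to true
openstack_glance_pool:
  name: "images"
  application: "rbd"
openstack_cinder_pool:
  name: "volumes"
  application: "rbd"
openstack_pools:
  - "{{ openstack_glance_pool }}"
  - "{{ openstack_cinder_pool }}"
```

With `openstack_config: true`, ceph-ansible iterates over `openstack_pools` and creates each entry, which is where the vms/images/volumes pools come from.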
spatelis it possible i don't have enough disk or space on ceph that is why it didn't create default pools?22:25
spatelTotally understand it's not OSA, but just trying to get some help to see if i missed something22:25
spatelanyway i may re-build ceph again to verify because i did lots of poking and not sure if something went wrong22:26
admin0spatel, if you want to do it separately, buzz me :)22:26
spateldoes it make any difference if i do it separately ?22:27
spatelit's the same playbook even if i do it separately22:27
admin0are the variables also the same ?22:28
admin0i have not looked into our inbuilt playbooks tbh ..22:28
spatelMy ceph deployment is very small and limited usage so i don't want to give it more hardware22:28
spatelEverything is same22:28
admin0it can be from the same deploy container22:28
admin0git clone into /opt/ceph-ansible and thats it22:29
spatelOSA use ceph-ansible22:29
spateladvantage of using OSA is i can run mon nodes on containers22:29
jrosserspatel: https://zuul.opendev.org/t/openstack/build/1546dae2f15d456fb98eff4a4860fec9/log/job-output.txt#30984-3101022:30
jrosserevery patch to openstack-ansible runs a ceph job22:30
jrosseryou can compare the logs22:30
spatelPerfect! i can see the TASK now which creates the pools22:31
*** cshen has joined #openstack-ansible22:32
jrosseri found it by looking in the ceph-ansible code here https://github.com/ceph/ceph-ansible/blob/371d854a5c03cfb30d27d5cdbaaad61f7f8d6c58/roles/ceph-osd/tasks/openstack_config.yml#L422:33
jrosserthen finding some random patch to openstack-ansible in the review dashboard and searching the log for that task name22:33
spatelI have noticed ansible is not executing that TASK22:37
jrosserthen you would look at the conditionals here https://github.com/ceph/ceph-ansible/blob/371d854a5c03cfb30d27d5cdbaaad61f7f8d6c58/roles/ceph-osd/tasks/main.yml#L97-L10322:38
admin0the cloud is up \o/ .. thanks guys for helping22:38
spatelin that block i have  openstack_config: true22:40
spatelnot sure about this part  not add_osd | bool22:40
spateljrosser thanks for the clue, let me take it from here and start investigating22:45
jrosserno worries22:45
jrosseralways worth throwing in a debug: task to print those, followed by a fail: to make it stop22:45
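jrosser's suggestion as a throwaway pair of tasks, dropped just before the conditional being investigated (variable names taken from the ceph-osd conditional linked above; the defaults guard against either being undefined):

```yaml
# Temporary debug aid: print the values feeding the "when:" clause,
# then stop the play so the output is easy to spot.
- name: Show values behind the openstack_config conditional
  debug:
    msg: >-
      openstack_config={{ openstack_config | default('undefined') }}
      add_osd={{ add_osd | default('undefined') }}

- name: Intentional stop while debugging
  fail:
    msg: "remove these two tasks once the conditional is understood"
```

Both tasks are meant to be deleted once the skipped conditional is explained.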
spatelyup! let me start debugging to see how that condition will be true22:46
*** spatel has quit IRC22:50
*** spatel has joined #openstack-ansible23:02
*** maharg101 has joined #openstack-ansible23:37
*** maharg101 has quit IRC23:42
*** tosky has quit IRC23:48

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!