Friday, 2020-10-23

*** nurdie has joined #openstack-ansible00:32
*** macz_ has joined #openstack-ansible00:37
*** macz_ has quit IRC00:42
*** gyee has quit IRC00:44
*** MickyMan77 has joined #openstack-ansible01:03
*** MickyMan77 has quit IRC01:11
*** cshen has joined #openstack-ansible01:16
*** cshen has quit IRC01:21
*** Muran has joined #openstack-ansible01:26
*** Muran has quit IRC01:30
*** MickyMan77 has joined #openstack-ansible01:44
*** recyclehero has quit IRC01:45
*** MickyMan77 has quit IRC01:53
*** jawad_axd has joined #openstack-ansible01:54
*** jawad_axd has quit IRC01:58
*** jawad_axd has joined #openstack-ansible02:15
*** cshen has joined #openstack-ansible02:17
*** jawad_axd has quit IRC02:20
*** cshen has quit IRC02:21
*** MickyMan77 has joined #openstack-ansible02:23
*** MickyMan77 has quit IRC02:32
*** ChiTo has quit IRC02:34
*** jawad_axd has joined #openstack-ansible02:35
*** jawad_axd has quit IRC02:40
*** jawad_axd has joined #openstack-ansible02:56
*** jawad_axd has quit IRC03:01
*** MickyMan77 has joined #openstack-ansible03:07
*** jawad_axd has joined #openstack-ansible03:17
*** jawad_axd has quit IRC03:21
*** jawad_axd has joined #openstack-ansible03:38
*** jawad_axd has quit IRC03:42
*** d34dh0r53 has quit IRC03:44
*** jawad_axd has joined #openstack-ansible03:58
*** jawad_axd has quit IRC04:03
*** evrardjp has quit IRC04:34
*** evrardjp has joined #openstack-ansible04:34
*** nurdie has quit IRC04:38
*** gillesMo has quit IRC04:43
*** recyclehero has joined #openstack-ansible04:57
*** jawad_axd has joined #openstack-ansible05:00
*** jawad_axd has quit IRC05:05
*** gillesMo has joined #openstack-ansible05:07
*** MickyMan77 has quit IRC05:16
*** nurdie has joined #openstack-ansible05:39
*** nurdie has quit IRC05:43
*** macz_ has joined #openstack-ansible06:02
*** macz_ has quit IRC06:07
*** yolanda has quit IRC06:14
*** rpittau|afk is now known as rpittau06:21
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-lxc_hosts master: Use lxc_image_cache_server_mirrors as image source  https://review.opendev.org/759310  06:23
*** jawad_axd has joined #openstack-ansible06:40
*** Muran has joined #openstack-ansible06:50
*** Muran has quit IRC06:53
*** Muran has joined #openstack-ansible06:54
<noonedeadpunk> recyclehero: this means that the allocation ratio, the way you've configured it, is not working properly... as here's what I see in my sandbox horizon https://imgur.com/a/rHKnpcL  06:55
<noonedeadpunk> so horizon should not be the point of verification for whether allocation ratios are set correctly or not - that's what I'm saying  06:56
<noonedeadpunk> well, it's really interesting why it gets reverted back  06:59
<noonedeadpunk> I think it's worth asking in #openstack-nova - maybe they will suggest something regarding the config  07:00
<noonedeadpunk> and morning everyone :)  07:06
*** andrewbonney has joined #openstack-ansible07:13
*** cshen has joined #openstack-ansible07:27
*** tosky has joined #openstack-ansible07:36
*** Muran has quit IRC07:39
*** Muran has joined #openstack-ansible07:40
*** Muran has quit IRC07:41
*** Muran has joined #openstack-ansible07:41
*** shyamb has joined #openstack-ansible07:44
*** Muran has quit IRC07:49
*** Muran has joined #openstack-ansible07:50
*** shyamb has quit IRC07:53
*** shyamb has joined #openstack-ansible07:54
*** Muran has quit IRC07:54
<CeeMac> morning  07:56
*** shyamb has quit IRC07:57
<CeeMac> I think there have been ongoing 'discussions' between the horizon and nova people over the years about the fact that the CPU graphs in Horizon don't reflect the commit ratio of CPUs. It's something to do with the APIs Horizon uses to pull the resource information: they talk to a component that doesn't have any visibility of the overcommit values, so it can only report the true CPU/core count  07:57
<CeeMac> I looked into it a while back but never reached a solid conclusion on how best to check the 'actual' used/free CPUs based on the overcommit  07:58
<CeeMac> recyclehero: ^  07:58
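As a side note on checking the "actual" numbers: the allocation ratios, and the usage counted against them, live in the placement service, so they can be read there instead of Horizon. A minimal sketch, assuming the osc-placement CLI plugin is installed and admin credentials are loaded:

```shell
# list the compute resource providers and note the UUID of the hypervisor
openstack resource provider list

# the inventory shows total, reserved and allocation_ratio per resource class (VCPU, MEMORY_MB, DISK_GB)
openstack resource provider inventory list <provider-uuid>

# current consumption per resource class for that provider
openstack resource provider usage show <provider-uuid>
```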
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Use parallel git clone  https://review.opendev.org/588372  07:59
*** jawad_axd has quit IRC08:37
*** jawad_axd has joined #openstack-ansible08:38
<noonedeadpunk> wondering what `WARNING urllib3.connectionpool: Connection pool is full, discarding connection: localhost` means during ansible runtime...  08:44
<noonedeadpunk> i.e. https://zuul.openstack.org/stream/6db0a6695ce243eea881113370cb16f1?logfile=console.log  08:44
<noonedeadpunk> oh, can this be ara?  08:45
<noonedeadpunk> dmsimard ?  08:46
<noonedeadpunk> output from the console http://paste.openstack.org/show/799307/  08:47
<noonedeadpunk> and the play is pretty much stuck that way  08:48
<noonedeadpunk> I saw that for other tasks in random places as well  08:51
<noonedeadpunk> oh, well, https://2e774ffe9e481649075b-3c2a18acb5109e625907972e3aa6a592.ssl.cf1.rackcdn.com/759229/4/check/openstack-ansible-lxc-btrfs-ubuntu-bionic/496af20/job-output.txt  08:52
*** cshen_ has joined #openstack-ansible08:56
*** cshen has quit IRC08:59
*** jawad_ax_ has joined #openstack-ansible09:13
*** jawad_axd has quit IRC09:15
*** Muran has joined #openstack-ansible09:38
*** odyssey4me has joined #openstack-ansible09:40
*** hamzaachi has joined #openstack-ansible09:42
<openstackgerrit> Arx Cruz proposed openstack/openstack-ansible-os_tempest master: Re-adding redhat-7.yml distro var  https://review.opendev.org/758823  09:42
*** sep has quit IRC09:43
<openstackgerrit> Arx Cruz proposed openstack/openstack-ansible-os_tempest master: Re-adding redhat-7.yml distro var  https://review.opendev.org/758823  09:43
*** Muran has quit IRC09:45
*** Muran has joined #openstack-ansible09:48
*** Muran has quit IRC09:53
*** sep has joined #openstack-ansible10:03
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_barbican master: Clean up barbican.conf  https://review.opendev.org/759084  10:06
*** sep has quit IRC10:07
*** sep has joined #openstack-ansible10:08
*** Muran has joined #openstack-ansible10:08
*** pto has joined #openstack-ansible10:10
*** Muran has quit IRC10:20
*** pto has quit IRC10:22
*** pto has joined #openstack-ansible10:22
*** ioni has quit IRC10:24
*** masterpe has quit IRC10:24
*** fridtjof[m] has quit IRC10:24
<recyclehero> morning guys, it was a long night!  10:32
<recyclehero> thanks noonedeadpunk CeeMac  10:32
<recyclehero> I will head on to nova  10:32
<noonedeadpunk> jrosser: sorry, I know you are away today, but I need to ask about https://review.opendev.org/#/c/759229/ vs https://review.opendev.org/#/c/759310/ as we need to unblock gates and do point releases... I feel like 759310 is more reliable tbh, even though it's the minimal image rather than the base one. And we should probably move debian to it as well, but there's a circular dependency with lxc_containers_create so I'm planning to do that for master only  10:33
<noonedeadpunk> the only thing that really concerns me is the centos execution time  10:33
<noonedeadpunk> not sure if it's because of the image though  10:34
*** ioni has joined #openstack-ansible10:34
*** pto has quit IRC10:37
*** pto has joined #openstack-ansible10:38
<jrosser> noonedeadpunk: hi, I'm only on my phone for a while, but the runtime is quite concerning  10:46
<jrosser> even just the infra job takes very, very long  10:47
<noonedeadpunk> I think we might have an issue with ara as well, but I have no idea what exactly influences it so badly  10:47
<jrosser> we can always pin it back just to take out a variable  10:48
<noonedeadpunk> not really... as to do that we need one of these patches to merge :)  10:49
<noonedeadpunk> I think I will try, in a single sandbox, to get containers on the new and the old image to see the difference  10:50
*** yolanda has joined #openstack-ansible10:57
*** masterpe has joined #openstack-ansible10:59
*** fridtjof[m] has joined #openstack-ansible10:59
*** pto has quit IRC10:59
<noonedeadpunk> what is interesting - the functional job does not stand out, and in the console the issues seemed to happen only when interacting with containers  11:06
*** shyamb has joined #openstack-ansible11:08
*** shyam89 has joined #openstack-ansible11:08
*** shyam89 has quit IRC11:08
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_manila master: Switch CI job for centos to centos-8  https://review.opendev.org/754652  11:15
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_manila stable/ussuri: Add centos-8 support  https://review.opendev.org/759396  11:16
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_manila master: Add Ubuntu Focal CI jobs  https://review.opendev.org/754653  11:18
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_manila master: Add image download for manila  https://review.opendev.org/704972  11:19
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_manila master: Add image download for manila  https://review.opendev.org/704972  11:21
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_manila master: Updated from OpenStack Ansible Tests  https://review.opendev.org/754172  11:22
*** pto has joined #openstack-ansible11:24
<dmsimard> noonedeadpunk: the connection pool warning comes from ara  11:27
<dmsimard> it stays stuck?  11:27
<noonedeadpunk> yep  11:27
<dmsimard> damn, https://github.com/ansible-community/ara/blob/master/ara/plugins/callback/ara_default.py#L180  11:28
<dmsimard> I didn't seem to be able to reproduce the issue with 10, I guess it needs to be reduced  11:29
<noonedeadpunk> well, we probably have more ansible threads now than 10  11:29
<dmsimard> the number of ansible forks shouldn't matter  11:30
<noonedeadpunk> or exactly 10 (not sure what has merged)  11:30
<dmsimard> I need to send the kids to school and then I'll figure out a fix, sorry about that  11:31
*** gshippey has joined #openstack-ansible11:31
<dmsimard> if it's super problematic, please pin to 1.5.1 temporarily  11:31
<dmsimard> I was too optimistic :/  11:32
<noonedeadpunk> I wish we could lol  11:32
<dmsimard> what do you mean?  11:33
<noonedeadpunk> ubuntu dropped their base images several days ago and the gates are broken. so to fix them and pin ara we need to unblock the gates first, which is hard because of ara  11:34
<noonedeadpunk> but yeah, we will try :)  11:34
<dmsimard> oh no  11:34
<noonedeadpunk> anyway, thanks for looking into that  11:34
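For anyone hitting the same lockup before a fixed ara release lands, the temporary pin suggested above is just a pip operation wherever the callback runs; a minimal sketch, assuming ara was installed with pip on the deploy host:

```shell
# pin ara back to the last known-good release until the fix is tagged
python3 -m pip install 'ara==1.5.1'
```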
<pto> Has something been changed? I am doing a fresh install of 21.0.1 on ubuntu focal. The TASK [lxc_hosts : Unpack base image] fails with fatal: [os_infra1]: FAILED! => {"changed": false, "msg": "Source '/tmp/ubuntu-base-20.04-base-amd64.tar.gz' does not exist"}  11:44
*** shyam89 has joined #openstack-ansible11:44
<noonedeadpunk> pto: yeah, ubuntu dropped images....  11:44
*** shyam89 has quit IRC11:44
<pto> noonedeadpunk: oh... How to get around it?  11:44
<noonedeadpunk> try setting lxc_hosts_container_image_download_legacy: true  11:44
<noonedeadpunk> we will make a release that fixes it soon  11:45
<pto> Should I go for the 21.0.1 tag or the latest ussuri/master?  11:45
<noonedeadpunk> as for now it's not even merged...  11:47
<pto> Generally speaking, which is most "stable"?  11:47
<noonedeadpunk> so enabling lxc_hosts_container_image_download_legacy or defining a valid url with lxc_hosts_container_image_url  11:47
*** shyamb has quit IRC11:47
<noonedeadpunk> are the best options at the moment tbh  11:48
<noonedeadpunk> we have several patches which cover that, but none of them has merged yet  11:48
<noonedeadpunk> so all current releases are broken in the same way at the moment  11:49
<pto> Thanks for clarifying this.  11:52
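To summarise the workaround discussed above as configuration, a hedged user_variables.yml sketch (both variable names are taken from the conversation; the URL in the commented alternative is only a placeholder, not a verified mirror):

```yaml
# /etc/openstack_deploy/user_variables.yml
# Switch the lxc_hosts role back to its legacy image download path
# as a stop-gap until the fixed releases are tagged.
lxc_hosts_container_image_download_legacy: true

# Alternative: point the role at a mirror that still serves a valid image.
# lxc_hosts_container_image_url: "https://example.com/ubuntu-base-20.04-base-amd64.tar.gz"
```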
*** rfolco has joined #openstack-ansible11:53
*** pto has quit IRC11:53
*** pto has joined #openstack-ansible11:53
*** rh-jelabarre has joined #openstack-ansible11:59
*** pto_ has joined #openstack-ansible12:04
*** pto has quit IRC12:05
*** NewJorg has quit IRC12:10
*** nurdie has joined #openstack-ansible12:16
*** NewJorg has joined #openstack-ansible12:16
<kleini> for me, applying patch https://review.opendev.org/759229 to /etc/ansible/roles/lxc_hosts solves the problem for 21.1.0 on Ubuntu 18.04 very well, but it gets overwritten with every bootstrapping of Ansible  12:22
<noonedeadpunk> we also have https://review.opendev.org/#/c/759310 as a second option, but there's an issue with the centos image at the moment  12:31
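If a locally patched role keeps getting overwritten by bootstrap-ansible.sh, one way to hold onto the fix is to pin that role in ansible-role-requirements.yml to a ref that already contains it. A hedged sketch of the single entry involved; the version value here is a placeholder for the commit SHA carrying the fix:

```yaml
# /opt/openstack-ansible/ansible-role-requirements.yml (excerpt)
- name: lxc_hosts
  scm: git
  src: https://opendev.org/openstack/openstack-ansible-lxc_hosts
  version: "<commit-sha-that-contains-the-fix>"
```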
*** nurdie has quit IRC12:37
<dmsimard> noonedeadpunk: hey o/ I reproduced the "oddly getting stuck" issue and I can't reproduce it anymore after lowering the number of threads, so we'll go with that for now  12:52
<dmsimard> it only happens with the default offline client, it doesn't occur with the http client :/  12:52
<noonedeadpunk> we're lucky...  12:54
<dmsimard> hmmm, nope, I managed to reproduce the stuck issue with fewer threads too  13:00
<dmsimard> it doesn't happen all the time  13:00
*** pto_ has quit IRC13:05
*** pto has joined #openstack-ansible13:05
<noonedeadpunk> it totally does not!  13:09
<noonedeadpunk> like 1 out of 6 jobs gets stuck  13:09
*** jawad_ax_ has quit IRC13:10
*** jawad_axd has joined #openstack-ansible13:11
*** jawad_axd has quit IRC13:12
*** jawad_axd has joined #openstack-ansible13:12
*** jawad_axd has quit IRC13:13
*** ianychoi_ has joined #openstack-ansible13:13
*** jawad_axd has joined #openstack-ansible13:13
*** ianychoi has quit IRC13:16
<dmsimard> I can reproduce the issue with the offline client and >=2 threads, but not with one thread  13:16
<dmsimard> the http client can use >=4 threads without any problems :(  13:17
<noonedeadpunk> hm, pretty weird...  13:18
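Since the trouble seems tied to the offline client with more than one thread, a possible deployer-side workaround is to point the callback at the http client and/or turn the callback threading down. A hedged ansible.cfg sketch; the option names follow the ara 1.5.x callback settings (api_client, api_server, callback_threads), so double-check them against the ara documentation for the installed version:

```ini
[ara]
# talk to a running API server over HTTP instead of the default offline client
api_client = http
api_server = http://127.0.0.1:8000
# run the callback without extra worker threads
callback_threads = 1
```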
*** jeh has quit IRC13:28
*** akahat has quit IRC13:29
*** nurdie has joined #openstack-ansible13:29
*** akahat has joined #openstack-ansible13:31
*** nurdie has quit IRC13:38
*** gshippey has quit IRC13:49
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible stable/ussuri: Fix manila tempest jobs  https://review.opendev.org/759426  13:53
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_manila stable/ussuri: Add centos-8 support  https://review.opendev.org/759396  13:54
<noonedeadpunk> jrosser: so, in my vm the old centos image took 13m for lxc-containers-create.yml and the new one 23m o_O  13:56
<noonedeadpunk> I have no idea what this can be...  13:56
<noonedeadpunk> some selinux thing or what... I'd say ssh config, if we were using ssh for the connection...  13:58
<noonedeadpunk> I will try to create 2 empty containers and compare them....  13:59
*** d34dh0r53 has joined #openstack-ansible14:00
*** jawad_axd has quit IRC14:00
*** rpittau is now known as rpittau|afk14:06
*** cshen_ has quit IRC14:16
*** jawad_axd has joined #openstack-ansible14:20
<noonedeadpunk> well, the new image is +300mb in size...  14:38
<noonedeadpunk> no difference in the case of a regular fact-gathering task...  14:42
*** nurdie has joined #openstack-ansible14:46
<dmsimard> noonedeadpunk: I have a patch up to fix the locking https://review.opendev.org/#/c/759439/  14:47
<dmsimard> I will merge it and then I'd like to test OSA against master before releasing it  14:47
<dmsimard> I couldn't reproduce locally, but you never know  14:48
<dmsimard> "it worked in devstack", "it works on my machine", etc. :p  14:48
<jrosser> noonedeadpunk: that's weird with the new image being slow  14:48
<jrosser> I wonder if it's somehow the same thing that always made centos-7 slow; we just never found what it was  14:49
<noonedeadpunk> might be...  14:49
<noonedeadpunk> I was also thinking about that  14:49
*** yann-kaelig has joined #openstack-ansible14:52
*** spatel has joined #openstack-ansible14:58
<jrosser> I figure we should fix the release note for this https://review.opendev.org/#/c/758029/2/templates/nova.conf.j2  14:59
<jrosser> or even revert the whole patch, I'm not sure really, as it was possible all along with 0.0 values :(  14:59
<noonedeadpunk> well, it's not merged yet  15:02
<noonedeadpunk> put -W  15:03
<noonedeadpunk> I thought that we probably don't need it as well, but it actually doesn't make things worse  15:03
<noonedeadpunk> that's what sean-k-mooney said, and he referenced https://github.com/openstack/nova/blob/master/nova/conf/compute.py#L420-L453  15:04
*** nurdie has quit IRC15:05
*** nurdie has joined #openstack-ansible15:06
<noonedeadpunk> well, what's worse is that removing the variable after the first deployment won't have any effect...  15:07
*** nurdie has quit IRC15:10
*** macz_ has joined #openstack-ansible15:10
*** nurdie has joined #openstack-ansible15:12
<noonedeadpunk> so probably we should just comment these variables out by default and check if they are defined... or even just leave that for overrides  15:16
<noonedeadpunk> jrosser: or should I release the -W?  15:16
<jrosser> well, the commit message and reno text are kind of wrong  15:17
<noonedeadpunk> what worries me - `Once set to a non-default value, it is not possible to "unset" the config to get back to the default behavior`  15:17
<jrosser> where's that from?  15:18
<noonedeadpunk> https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.cpu_allocation_ratio  15:18
<jrosser> oh yes  15:18
* jrosser reads again  15:18
<noonedeadpunk> same for ram_allocation_ratio and disk_allocation_ratio  15:18
<noonedeadpunk> it's in a note  15:18
<noonedeadpunk> well, we can discuss that on Monday then  15:19
<jrosser> urgh, "the default behaviour" - like what does that actually mean?  15:20
<noonedeadpunk> I think it populates the inventory, and then overrides set via the api may not work properly?  15:21
<noonedeadpunk> as initial_cpu_allocation_ratio is read only during node discovery and population, and further changes of the value will have no effect  15:21
<noonedeadpunk> Sean was trying to explain this today :)  15:22
<jrosser> I saw  15:22
<jrosser> kind of complicated logic  15:23
<jrosser> complicated / unexpected  15:23
<noonedeadpunk> it is... but now numa-pinned vms can live migrate, so it probably was worth it  15:23
<jrosser> so I guess the question is whether there is a need to change the current behaviour or defaults  15:28
<jrosser> setting defaults of 0.0 and overrides will make the scheme via placement work, but only if you do that the first time  15:29
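For deployers who do want non-default ratios, one way to express the scheme discussed above in OSA is through the generic nova config overrides rather than role defaults. A hedged user_variables.yml sketch; nova_nova_conf_overrides is the usual os_nova override hook and the numbers are purely illustrative:

```yaml
# /etc/openstack_deploy/user_variables.yml
nova_nova_conf_overrides:
  DEFAULT:
    # initial_* values seed placement when a compute node is first discovered;
    # the non-initial values keep overriding placement afterwards.
    initial_cpu_allocation_ratio: 8.0
    initial_ram_allocation_ratio: 1.5
    cpu_allocation_ratio: 8.0
    ram_allocation_ratio: 1.5
```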
*** gyee has joined #openstack-ansible15:48
<openstackgerrit> David Moreau Simard proposed openstack/openstack-ansible master: DNM: Test OSA with ara on master  https://review.opendev.org/759464  15:53
<dmsimard> ^ let's see if anything in there gets stuck  15:56
<noonedeadpunk> jrosser: yeh  15:56
<noonedeadpunk> and you should know what you're doing before deploying osa  15:56
<noonedeadpunk> which is almost never the case :)  15:57
<jrosser> perhaps really we could drop the patch and do a nice explanation in the os_nova role docs  15:57
<noonedeadpunk> returning to centos - this really simple task took 2 mins - whaaat https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d55/759310/7/check/openstack-ansible-deploy-infra_lxc-centos-8/d552d15/logs/ara-report/results/1613.html  16:01
<noonedeadpunk> just trying to somehow reproduce the issue and find a way to trigger it  16:02
<noonedeadpunk> but nothing close in the sandbox  16:05
<noonedeadpunk> wow. the new image somehow has a jumbo frame o_O  16:06
<noonedeadpunk> http://paste.openstack.org/show/799331/  16:06
<noonedeadpunk> feels like $reason  16:06
<jrosser> that's interesting  16:07
<noonedeadpunk> and the old one on the same host has 1500  16:08
<jrosser> I have seen other jobs in one of our repos failing functional tests with mtu  16:08
<jrosser> sorry, can't remember which  16:08
<jrosser> but iirc that was also centos  16:08
<noonedeadpunk> I'm wondering why it decided to set it to 9000... Maybe NetworkManager.....  16:10
*** MickyMan77 has joined #openstack-ansible16:11
<jrosser> I wonder if there is a conflict between what networkd and nm are doing  16:11
*** _mmethot_ has quit IRC16:12
<jrosser> after all, we do insert networkd into the image ourselves  16:12
<noonedeadpunk> well, a container restart did reset it to 1500  16:19
<noonedeadpunk> another restart and.... http://paste.openstack.org/show/799333/ lol  16:19
*** MickyMan77 has quit IRC16:20
<guilhermesp> noonedeadpunk: I think that deserves a backport https://review.opendev.org/#/c/751767/  16:23
<guilhermesp> at least down to train  16:23
<noonedeadpunk> well, I have nothing against it  16:24
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_magnum stable/ussuri: Add deployment of keystone_auth_default_policy  https://review.opendev.org/759471  16:25
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_magnum stable/train: Add deployment of keystone_auth_default_policy  https://review.opendev.org/759472  16:25
<guilhermesp> appreciated, noonedeadpunk  16:26
<noonedeadpunk> sure, np  16:30
<noonedeadpunk> well, it's not nm, as once I dropped it, eth0 stopped coming up until a manual systemd-networkd restart  16:31
<noonedeadpunk> so something is fighting, but it seems to not be nm  16:32
*** MickyMan77 has joined #openstack-ansible16:44
*** recyclehero has quit IRC16:50
*** gundalow_ has joined #openstack-ansible16:50
*** persia_ has joined #openstack-ansible16:51
*** jawad_axd has quit IRC16:52
*** yann-kaelig has quit IRC16:56
*** persia has quit IRC16:57
*** gundalow has quit IRC16:57
*** gundalow_ is now known as gundalow16:57
<noonedeadpunk> oh, well, the other container can also randomly get 9000 mtu  17:05
<noonedeadpunk> despite what is set in the lxc interface config  17:06
*** andrewbonney has quit IRC17:06
<noonedeadpunk> oh, well, lxcbr0 also has 9000 mtu  17:07
*** recyclehero has joined #openstack-ansible17:10
<noonedeadpunk> I recall some race condition where we had to wait for networkd for some reason....  17:16
<noonedeadpunk> ah, it was https://review.opendev.org/#/c/734089/  17:17
*** nurdie has quit IRC17:19
*** MickyMan77 has quit IRC17:19
*** mmercer has joined #openstack-ansible17:21
<noonedeadpunk> ok, so it's us who set the mtu to 9000 https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/tasks/prepare_networking.yml#L101  17:21
<noonedeadpunk> except I'm not sure how lxcbr0 gets 9000  17:22
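For anyone following along, the MTUs involved can be inspected from the host without ssh, the same way the lxc connection plugin attaches; a small sketch (the container name is a placeholder):

```shell
# bridge MTU on the host
ip link show lxcbr0

# interface MTU inside a container, attached the same way the connection plugin does
lxc-attach -n <container-name> -- ip link show eth0

# temporary workaround: force the bridge back to 1500
ip link set dev lxcbr0 mtu 1500
```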
<nsmeds> There are more specific channels for this - however I've had better luck with y'all :P has anyone ever seen `resolvconf-pull-resolved.service: Failed with result 'start-limit-hit'.` repeated in their logs anytime they create/delete a network?  17:23
<nsmeds> Running Ubuntu 18.04 with Train, deployed with openstack-ansible.  17:23
<nsmeds> There doesn't appear to be any issue with the networks - but I'm trying to figure out _why_ this is being restarted so many times that it hits a rate limit.  17:23
<noonedeadpunk> never noticed  17:23
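For the start-limit-hit message above, the usual systemd places to look are the unit's recent journal and its start-limit settings; a hedged sketch using the unit name from the log line:

```shell
# recent state and the last start attempts of the unit
systemctl status resolvconf-pull-resolved.service
journalctl -u resolvconf-pull-resolved.service -n 100

# the rate limit itself comes from these two unit properties
systemctl show resolvconf-pull-resolved.service -p StartLimitIntervalUSec -p StartLimitBurst

# clear the start-limit/failed state so the unit can be started again
systemctl reset-failed resolvconf-pull-resolved.service
```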
<noonedeadpunk> wondering why I don't see mtu 9000 on an ubuntu aio  17:29
*** cshen has joined #openstack-ansible17:29
*** renich has joined #openstack-ansible17:31
*** cshen has quit IRC17:34
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Set container MTU equal to dummy interfaces  https://review.opendev.org/759498  17:43
<openstackgerrit> Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-lxc_hosts master: Use lxc_image_cache_server_mirrors as image source  https://review.opendev.org/759310  17:44
<jrosser> noonedeadpunk: I am thinking that lxcbr0 ever being mtu 9000 is wrong  17:50
<jrosser> eth0->lxcbr0 is the default route and so should always be mtu 1500  17:50
<noonedeadpunk> well, it's src-natted so it probably can be 9000?  17:52
<noonedeadpunk> (not sure)  17:52
<jrosser> nat will just rewrite some of the ip headers  17:53
<jrosser> the packet needs to be small enough to leave the host and maybe go out to the internet  17:53
*** cshen has joined #openstack-ansible17:54
<noonedeadpunk> I'm kind of out of ideas why in the world this centos 8 image executes simple tasks like template for 2 mins. It feels like a networking thing, I guess... except there's no networking involved, as the connection plugin just attaches with the lxc command?  17:54
<noonedeadpunk> and removing NM makes things even worse for some reason  17:54
<noonedeadpunk> so the interface doesn't get an ip addr  17:55
<jrosser> does it reproduce locally?  17:55
<jrosser> the 2 mins for template, I mean  17:55
<noonedeadpunk> no, not if it's a single task  17:55
<noonedeadpunk> but for example lxc-containers-create took 23m instead of 13m for the old image  17:56
<noonedeadpunk> and the same for setup-infrastructure  17:56
<noonedeadpunk> so it reproduces locally, but no idea what triggers it...  17:56
<noonedeadpunk> maybe I should try to remove ara  17:56
<noonedeadpunk> just to be super sure  17:57
*** nurdie has joined #openstack-ansible18:06
*** cshen has quit IRC18:10
*** cshen has joined #openstack-ansible18:11
<dmsimard> fwiw my test with the fix in master hasn't locked up https://review.opendev.org/#/c/759464/  18:17
<dmsimard> I'll do a recheck just to be sure  18:17
*** elduderino80 has joined #openstack-ansible18:29
*** cshen has quit IRC18:40
*** nurdie has quit IRC18:49
*** nurdie has joined #openstack-ansible18:55
*** MickyMan77 has joined #openstack-ansible18:56
*** cshen has joined #openstack-ansible19:22
*** nurdie has quit IRC19:27
*** MickyMan77 has quit IRC19:31
*** nurdie has joined #openstack-ansible19:32
*** cloudnull has quit IRC19:47
*** cloudnull has joined #openstack-ansible19:47
*** klamath_atx has quit IRC19:48
*** cloudnull has quit IRC19:49
*** klamath_atx has joined #openstack-ansible19:56
*** cloudnull has joined #openstack-ansible19:58
*** cloudnull has quit IRC20:04
*** cloudnull has joined #openstack-ansible20:05
*** klamath_atx has quit IRC20:05
*** MickyMan77 has joined #openstack-ansible20:07
*** klamath_atx has joined #openstack-ansible20:15
*** klamath_atx has quit IRC20:22
*** MickyMan77 has quit IRC20:27
*** klamath_atx has joined #openstack-ansible20:33
*** sc has quit IRC20:36
*** sc has joined #openstack-ansible20:36
*** klamath_atx has quit IRC20:42
*** vesper has joined #openstack-ansible20:44
*** mgariepy has quit IRC20:50
*** prometheanfire has quit IRC20:50
*** vesper11 has quit IRC20:50
*** prometheanfire has joined #openstack-ansible20:50
*** nurdie has quit IRC20:52
*** rf0lc0 has joined #openstack-ansible20:52
*** nurdie has joined #openstack-ansible20:52
*** rfolco has quit IRC20:53
*** nurdie has quit IRC20:53
*** rf0lc0 has quit IRC20:56
*** mgariepy has joined #openstack-ansible21:00
*** cshen has quit IRC21:13
*** klamath_atx has joined #openstack-ansible21:24
*** cshen has joined #openstack-ansible21:26
*** cshen has quit IRC21:30
*** klamath_atx has quit IRC21:37
*** macz_ has quit IRC21:40
*** spatel has quit IRC21:40
*** elduderino80 has quit IRC22:12
<dmsimard> the second round of jobs came back without locking either, and I can no longer reproduce it locally, so I'll tag a release with the fix in  22:16
*** klamath_atx has joined #openstack-ansible22:29
*** NewJorg has quit IRC22:31
*** NewJorg has joined #openstack-ansible22:33
<dmsimard> ara 1.5.3 is tagged and it'll be on pypi/mirrors soon enough; I've created this issue for the random lock up: https://github.com/ansible-community/ara/issues/183  22:49
<dmsimard> thanks for pointing it out <3  22:49
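Once 1.5.3 reaches PyPI and the mirrors, picking up the fix is the reverse of the earlier pin; again a minimal sketch assuming a pip-based install:

```shell
python3 -m pip install --upgrade 'ara>=1.5.3'
```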
*** macz_ has joined #openstack-ansible23:03
*** macz_ has quit IRC23:07
*** cshen has joined #openstack-ansible23:09
*** cshen has quit IRC23:14
*** klamath_atx has quit IRC23:16
*** MickyMan77 has joined #openstack-ansible23:27
*** cshen has joined #openstack-ansible23:35
*** owalsh_ has joined #openstack-ansible23:40
*** cshen has quit IRC23:40
*** owalsh has quit IRC23:43
*** owalsh has joined #openstack-ansible23:45
*** tosky has quit IRC23:46
*** owalsh_ has quit IRC23:48
*** klamath_atx has joined #openstack-ansible23:51

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!