Monday, 2019-02-18

*** hwoarang has joined #openstack-ansible00:01
openstackgerrit: Shannon Mitchell proposed openstack/openstack-ansible master: Dynamic inventory backup corruption fix  https://review.openstack.org/637441 00:04
*** shananigans has quit IRC00:30
*** markvoelker has joined #openstack-ansible00:33
*** markvoelker has quit IRC01:07
*** dave-mccowan has joined #openstack-ansible01:21
guilhermesp: any core to push this up? :) https://review.openstack.org/#/c/637352/ 01:38
openstackgerrit: Chandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added dependency of os_tempest role  https://review.openstack.org/632726 01:52
*** sdake has joined #openstack-ansible02:00
*** markvoelker has joined #openstack-ansible02:04
*** sdake has quit IRC02:18
*** sdake has joined #openstack-ansible02:20
*** Soopaman has quit IRC02:32
*** markvoelker has quit IRC02:38
*** sdake has quit IRC02:44
*** hwoarang has quit IRC02:49
*** hwoarang has joined #openstack-ansible02:51
*** hwoarang has quit IRC02:56
*** hwoarang has joined #openstack-ansible03:00
*** hwoarang has quit IRC03:05
*** hwoarang has joined #openstack-ansible03:06
*** markvoelker has joined #openstack-ansible03:34
*** udesale has joined #openstack-ansible03:44
*** markvoelker has quit IRC04:06
*** dave-mccowan has quit IRC04:07
*** hwoarang has quit IRC04:15
*** hwoarang has joined #openstack-ansible04:17
openstackgerrit: Kevin Carter (cloudnull) proposed openstack/monitorstack master: add default pickle cache version  https://review.openstack.org/637451 04:30
openstackgerrit: Merged openstack/openstack-ansible-os_nova master: Add nova_user_pip_packages variable  https://review.openstack.org/635579 04:30
*** hwoarang has quit IRC04:46
*** hwoarang has joined #openstack-ansible04:47
openstackgerrit: Kevin Carter (cloudnull) proposed openstack/monitorstack master: add default pickle cache version  https://review.openstack.org/637451 04:56
*** markvoelker has joined #openstack-ansible05:03
openstackgerrit: Kevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Add monitorstack data collection into ES  https://review.openstack.org/637455 05:11
openstackgerrit: Merged openstack/monitorstack master: add default pickle cache version  https://review.openstack.org/637451 05:18
openstackgerrit: Merged openstack/openstack-ansible-os_nova master: Remove save directory creation  https://review.openstack.org/637352 05:29
*** shyamb has joined #openstack-ansible05:32
*** markvoelker has quit IRC05:37
*** jbadiapa has joined #openstack-ansible06:02
*** shyamb has quit IRC06:04
*** shyamb has joined #openstack-ansible06:04
*** chandankumar is now known as chkumar|ruck06:17
*** hwoarang has quit IRC06:23
*** shyamb has quit IRC06:23
*** shyamb has joined #openstack-ansible06:26
*** hwoarang has joined #openstack-ansible06:29
*** markvoelker has joined #openstack-ansible06:34
*** sdake has joined #openstack-ansible06:38
*** hamzaachi has joined #openstack-ansible06:48
*** phasespace has quit IRC07:01
*** markvoelker has quit IRC07:06
*** hwoarang has quit IRC07:12
*** hwoarang has joined #openstack-ansible07:16
*** rgogunskiy has joined #openstack-ansible07:23
*** shyamb has quit IRC07:24
*** shyamb has joined #openstack-ansible07:27
*** sdake has quit IRC07:30
*** DanyC has quit IRC07:33
*** rgogunskiy has quit IRC07:38
*** shyamb has quit IRC07:42
*** pcaruana has joined #openstack-ansible07:43
*** shyamb has joined #openstack-ansible07:59
*** rgogunskiy has joined #openstack-ansible08:03
*** markvoelker has joined #openstack-ansible08:03
*** DanyC has joined #openstack-ansible08:04
*** kopecmartin|off is now known as kopecmartin08:04
*** gkadam has joined #openstack-ansible08:06
*** gkadam has quit IRC08:09
*** iurygregory has joined #openstack-ansible08:14
*** tosky has joined #openstack-ansible08:33
*** shyamb has quit IRC08:34
*** markvoelker has quit IRC08:37
*** ArchiFleKs has quit IRC08:41
*** rgogunskiy has quit IRC08:42
*** phasespace has joined #openstack-ansible08:51
*** ArchiFleKs has joined #openstack-ansible09:00
*** pcaruana|afk| has joined #openstack-ansible09:01
*** noonedeadpunk[h] is now known as noonedeadpunk09:02
*** pcaruana has quit IRC09:02
*** shyamb has joined #openstack-ansible09:04
openstackgerrit: Georgina Shippey proposed openstack/openstack-ansible-plugins stable/rocky: Remove whitespace before comments  https://review.openstack.org/637495 09:07
*** fnpanic has joined #openstack-ansible09:13
*** sdake has joined #openstack-ansible09:16
fnpanic: morning 09:17
*** rgogunskiy has joined #openstack-ansible09:25
openstackgerrit: Jonathan Rosser proposed openstack/openstack-ansible master: Add a known issue release note regarding inotify watch limits  https://review.openstack.org/637498 09:25
*** markvoelker has joined #openstack-ansible09:34
*** trident has joined #openstack-ansible09:38
openstackgerrit: Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_tempest master: Ensure stackviz wheel build is isolated  https://review.openstack.org/637503 09:40
*** mbi has quit IRC09:40
*** electrofelix has joined #openstack-ansible09:42
openstackgerrit: Merged openstack/openstack-ansible-os_nova stable/rocky: Update deprecated option for pci passthrough  https://review.openstack.org/636390 09:55
*** hamzaachi has quit IRC10:00
*** hamzaachi has joined #openstack-ansible10:01
*** pcaruana|afk| has quit IRC10:01
openstackgerrit: Merged openstack/openstack-ansible-os_neutron master: Revert "Avoid distro installing unused services"  https://review.openstack.org/637393 10:04
*** pcaruana has joined #openstack-ansible10:07
*** markvoelker has quit IRC10:07
*** hamzaachi has quit IRC10:13
*** hamzaachi has joined #openstack-ansible10:15
openstackgerrit: Merged openstack/openstack-ansible-os_cinder master: Revert "Avoid distro installing unused services"  https://review.openstack.org/637392 10:15
*** shyamb has quit IRC10:18
*** hamzaachi has quit IRC10:21
*** hamzaachi has joined #openstack-ansible10:22
*** rgogunskiy has quit IRC10:24
openstackgerrit: Merged openstack/openstack-ansible-os_nova stable/queens: Update deprecated option for pci passthrough  https://review.openstack.org/636396 10:25
openstackgerrit: Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_glance master: Update role for new source build process  https://review.openstack.org/620340 10:28
openstackgerrit: Chandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added dependency of os_tempest role  https://review.openstack.org/632726 10:42
openstackgerrit: Merged openstack/openstack-ansible-os_cinder stable/queens: Add missing CLI_OPTIONS when setting up qos volume types  https://review.openstack.org/637281 10:43
openstackgerrit: Merged openstack/openstack-ansible-os_cinder stable/rocky: Add missing CLI_OPTIONS when setting up qos volume types  https://review.openstack.org/637279 10:43
CeeMac: morning 10:55
openstackgerrit: Chandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added dependency of os_tempest role  https://review.openstack.org/632726 11:03
*** markvoelker has joined #openstack-ansible11:04
*** sdake has quit IRC11:16
*** sdake has joined #openstack-ansible11:21
*** udesale has quit IRC11:33
*** udesale has joined #openstack-ansible11:34
*** markvoelker has quit IRC11:37
*** shyamb has joined #openstack-ansible11:37
*** sdake has quit IRC12:12
*** sdake has joined #openstack-ansible12:14
*** shyamb has quit IRC12:24
*** kaiokmo has joined #openstack-ansible12:24
chkumar|ruck: odyssey4me: hello 12:26
chkumar|ruck: odyssey4me: is it possible to use https://github.com/openstack/openstack-ansible/blob/master/scripts/gate-check-commit.sh locally by passing the scenario, deploy action and install_method locally? 12:26
chkumar|ruck: or is it just used in CI? 12:26
odyssey4me: chkumar|ruck CI just runs gate-check-commit: https://github.com/openstack/openstack-ansible/blob/master/zuul.d/playbooks/run.yml 12:27
odyssey4me: chkumar|ruck so you can do the same 12:27
chkumar|ruck: odyssey4me: and what are the system requirements for these scenarios - is 8 GB fine? 12:28
odyssey4me: chkumar|ruck same as openstack infra - 8 vCPU / 8 GB RAM & ~80 GB HDD 12:28
chkumar|ruck: ok 12:28
*** gillesMo has joined #openstack-ansible12:29
odyssey4me: chkumar|ruck there is also a Vagrantfile - so vagrant can also be used 12:30
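A local run might look like the sketch below. The variable names (SCENARIO, ACTION, INSTALL_METHOD) and their values are assumptions based on this exchange, not verified against the script; check scripts/gate-check-commit.sh itself for the authoritative names and defaults.

```shell
# Hypothetical sketch: run OSA's CI entry point by hand on a throwaway
# 8 vCPU / 8 GB RAM / ~80 GB disk VM, as CI does via run.yml.
# Variable names are assumptions -- verify against gate-check-commit.sh.
export SCENARIO=aio_lxc        # which test scenario to build
export ACTION=deploy           # e.g. deploy vs upgrade
export INSTALL_METHOD=source   # source- or distro-based install
# then, from a clone of openstack/openstack-ansible:
#   sudo -E ./scripts/gate-check-commit.sh
echo "would run: SCENARIO=$SCENARIO ACTION=$ACTION INSTALL_METHOD=$INSTALL_METHOD"
```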
*** mhayden has quit IRC13:05
chkumar|ruck: arxcruz: kopecmartin: please have a look at this failure when free: http://logs.openstack.org/26/632726/11/check/openstack-ansible-functional-tempestconf-centos-7/789fce5/logs/ara-report/result/9d6cff3b-c643-40dc-93b5-a9ae084d8997/ 13:06
*** mhayden has joined #openstack-ansible13:06
*** mhayden has quit IRC13:13
openstackgerrit: Guilherme Steinmuller Pimentel proposed openstack/openstack-ansible master: wip: upgrade jobs  https://review.openstack.org/627782 13:16
arxcruz: chkumar|ruck: Details: {u'message': u'Only volume-backed servers are allowed for flavors with zero disk.', u'code': 403} 13:25
*** priteau has joined #openstack-ansible13:26
*** mhayden has joined #openstack-ansible13:27
*** sdake has quit IRC13:28
*** hamzaachi_ has joined #openstack-ansible13:28
*** hamzaachi_ has quit IRC13:29
*** hamzaachi_ has joined #openstack-ansible13:29
*** hamzaachi has quit IRC13:29
*** priteau has quit IRC13:45
*** sdake has joined #openstack-ansible13:55
*** sum12 has quit IRC13:56
*** priteau has joined #openstack-ansible14:02
*** sum12 has joined #openstack-ansible14:04
*** jroll has quit IRC14:13
*** jroll has joined #openstack-ansible14:14
*** hamzaachi__ has joined #openstack-ansible14:15
*** hamzaachi_ has quit IRC14:18
*** chkumar|ruck is now known as chandankumar14:28
guilhermesp: what's the best way to submit a test PR? Like adding something like [test] in the title? 14:28
*** ansmith has joined #openstack-ansible14:31
*** tacco has joined #openstack-ansible14:35
tacco: hi there.. is there any preferred way to redeploy a compute node? I'd like to clean up an old compute node and just redeploy the same host from scratch. Is there anything to take care of before the redeploy, or can I just redeploy as I would a new one? Hostname and IP will stay the same, and the node is already disabled as a compute host 14:37
*** priteau has quit IRC14:47
openstackgerrit: Kevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Add monitorstack data collection into ES  https://review.openstack.org/637455 14:47
cloudnull: mornings 14:48
cloudnull: tacco you can just redeploy it. I'd say make sure that all of the workloads have been evacuated from the node and mark the node as disabled before rekicking it 14:49
cloudnull: however, there's nothing from the OSA side you have to do to simply rekick it, especially if the hostname and IP will remain the same. 14:49
*** sdake has quit IRC14:58
*** dave-mccowan has joined #openstack-ansible14:59
*** priteau has joined #openstack-ansible14:59
odyssey4me: guilhermesp use [TEST] or [DNM] 15:01
guilhermesp: thanks odyssey4me! 15:01
odyssey4me: tacco just wipe out the gathered facts for the node to make sure you get some fresh ones 15:01
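Clearing the cached facts might look like the following sketch. The cache path is OSA's conventional fact-cache location on the deploy host and "compute1" is an example hostname; both are assumptions, so adjust them to your deployment.

```shell
# Hypothetical sketch: remove cached facts for one host so the next
# playbook run gathers fresh ones. FACT_CACHE is an assumed default
# cache path and compute1 an example hostname -- adjust both.
FACT_CACHE=${FACT_CACHE:-/etc/openstack_deploy/ansible_facts}
HOST=${HOST:-compute1}
rm -f "${FACT_CACHE}/${HOST}"*   # rm -f: succeeds even if nothing is cached
```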
*** dave-mccowan has quit IRC15:03
openstackgerrit: Kevin Carter (cloudnull) proposed openstack/openstack-ansible-os_glance master: Cleanup files and templates using smart sources  https://review.openstack.org/588959 15:06
*** shananigans has joined #openstack-ansible15:09
tacco: odyssey4me: thanks, will do. The host is already evacuated and nothing is on it anymore, so I'll make sure the facts are fresh and then redeploy. :D thanks mate 15:12
openstackgerrit: Merged openstack/openstack-ansible-galera_server master: Iterate over list of values of PPC packages dict  https://review.openstack.org/637275 15:14
CeeMac: odyssey4me hi 15:15
CeeMac: i've had a bit of an oddity during a minor upgrade 15:15
CeeMac: 2 of the nodes lost network connectivity during setup-hosts, the others were fine 15:16
CeeMac: network config and hardware is mostly the same across all of the nodes 15:16
CeeMac: seen anything like that before? 15:16
jrosser: CeeMac: do you mean the nodes themselves, or the containers? 15:18
*** priteau has quit IRC15:18
CeeMac: physical node 15:18
jrosser: hmm, i've seen containers do that at upgrade but never nodes 15:19
CeeMac: i had to hard reset the switch ports under the bond to bring it back up 15:19
CeeMac: just scouring the logs now to try to work out if it's a physical config issue or not 15:19
CeeMac: just after an install apt task 15:19
jrosser: so it may depend how your host is set up 15:20
jrosser: but afaik the host setup and OSA are supposed to be independent 15:20
CeeMac: i need to double check, but it could be the bond 15:20
jrosser: give or take a bit of systemd-networkd :) 15:20
CeeMac: it's the networkd stuff that it's falling over at, i think 15:20
CeeMac: ansible-apt gets invoked to install some packages 15:21
* jrosser scurries back to the safety of ifupdown 15:21
CeeMac: then systemd stops and waits for the network to be configured 15:21
CeeMac: at which point the network connectivity is lost 15:22
CeeMac: don't know if it's netplan getting upset because ansible is installing ifupdown or what 15:22
CeeMac: i'm highly tempted to revert to ifupdown....... 15:22
CeeMac: just need to check if the other hosts that weren't affected have bonded nics or not 15:23
CeeMac: "kernel: [272496.554626] br-mgmt: port 1(bond0.211) entered disabled state" would probably be the bit 15:24
CeeMac: hmm 15:26
CeeMac: i'm going to blame the network setup and worry about it later 15:26
jrosser: it would be very useful if you could identify the point/thing which made systemd-networkd restart 15:26
CeeMac: wouldn't it 15:28
* CeeMac isn't sure where to begin 15:29
CeeMac: i was assuming there was a task in the playbook that did it 15:29
*** phasespace has quit IRC15:31
*** kmadac has joined #openstack-ansible15:37
*** udesale has quit IRC15:41
*** sdake has joined #openstack-ansible15:45
CeeMac: hmmm 15:49
CeeMac: think i've worked out the issue on 2 of the nodes, but not the 1st one that failed. Still trying to track down the task that restarted systemd-networkd 15:50
*** mathlin has quit IRC15:50
CeeMac: for the record, ubuntu bionic doesn't seem to like native vlan tags on cisco blade switch ports 15:51
CeeMac: i'd already taken it off the primary port when I built the blades but forgot to take it off the second 15:51
CeeMac: looks like port 0 is bounced first, then network is lost because of the native vlan issue and doesn't come back until port 1 is bounced physically 15:52
CeeMac: still digging 15:52
CeeMac: jrosser 15:55
CeeMac: TASK [lxc_hosts : Remove conflicting packages] completed successfully 15:55
CeeMac: then TASK [lxc_hosts : Install apt packages] timed out 15:55
CeeMac: so the network went down in between those 2 tasks 15:55
CeeMac: or during the latter 15:56
CeeMac: and that's where i get lost in the rabbit hole trying to decipher ansible 16:00
CeeMac: i get as far as lxc_install_apt.yml, which has the named tasks 16:01
CeeMac: but i don't know where to look for apt: pkg: "{{ lxc_hosts_remove_distro_packages }}" 16:01
*** pcaruana has quit IRC16:10
CeeMac: would the systemd upgrade cause systemd-networkd to restart? the timings seem to coincide 16:13
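One way to check that correlation is to compare apt's upgrade timestamps against the networkd unit's own log. The log path below is Ubuntu's default and the grep patterns are illustrative only.

```shell
# Hypothetical sketch: did an apt run upgrade systemd around the time
# systemd-networkd bounced? APT_LOG is Ubuntu's default history log.
APT_LOG=${APT_LOG:-/var/log/apt/history.log}
grep -h -A1 'Start-Date' "$APT_LOG" 2>/dev/null | grep -i 'systemd' || true
# then compare those timestamps against the unit's own journal:
#   journalctl -u systemd-networkd --no-pager | grep -Ei 'starting|stopping'
```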
*** phasespace has joined #openstack-ansible16:27
CeeMac: heading out shortly, will keep digging in the morning 16:38
*** ppetit has joined #openstack-ansible16:50
*** ppetit has quit IRC16:53
*** dswebb has joined #openstack-ansible16:54
*** ostackz has joined #openstack-ansible16:55
*** sdake has quit IRC17:00
*** DanyC has quit IRC17:00
*** DanyC has joined #openstack-ansible17:01
*** DanyC has quit IRC17:03
*** ppetit has joined #openstack-ansible17:03
openstackgerrit: Merged openstack/openstack-ansible-ops master: Correct formatting in README.rst  https://review.openstack.org/637117 17:03
*** sdake has joined #openstack-ansible17:05
*** ppetit_ has joined #openstack-ansible17:06
*** DanyC has joined #openstack-ansible17:12
*** ppetit_ has quit IRC17:14
openstackgerrit: Maciej Kucia proposed openstack/ansible-hardening master: setup.cfg: Define data files  https://review.openstack.org/637595 17:40
bgmccollum: The Rocky AIO doesn't appear to be setting up the cinder-volume loop device / VG, preventing the service from starting. Anyone seen this? 17:41
bgmccollum: Although it might be due to a restart, so something with the loop-cinder.service unit. 17:43
*** DanyC has quit IRC17:44
*** DanyC has joined #openstack-ansible17:44
bgmccollum: restarting loop-cinder.service and cinder-volume.service allows cinder-volume to properly come up... 17:45
*** hwoarang has quit IRC17:47
*** hwoarang has joined #openstack-ansible17:47
*** DanyC has quit IRC17:49
*** ppetit has quit IRC17:50
*** DanyC has joined #openstack-ansible17:51
*** cmart has joined #openstack-ansible17:51
*** idlemind has quit IRC17:52
*** priteau has joined #openstack-ansible17:53
*** DanyC has joined #openstack-ansible17:54
*** kopecmartin is now known as kopecmartin|off17:56
*** phasespace has quit IRC17:57
*** hamzaachi__ has quit IRC17:59
cloudnull: bgmccollum is that unit enabled? 18:02
bgmccollum: let me check 18:02
cloudnull: also could be a wait condition 18:02
cloudnull: but if it's not enabled that would make sense 18:02
bgmccollum: yes, it's enabled 18:02
cloudnull: if it's not coming up after a reboot I'd be curious what the failure was 18:03
bgmccollum: i'll reboot the host and verify that's the case 18:03
cloudnull: if it's something not existing or a timing issue we can adjust the units to have additional conditions to try and resolve that 18:04
*** kmadac has quit IRC18:05
dswebb: bgmccollum I'm facing a somewhat similar issue now with the ceph backend for the Rocky AIO (installed via ansible-ops mnaio). I don't even have a cinder-volume.service 18:06
dswebb: I managed to get it started by hand after installing some of the rados / rbd python modules, but am having further complications now 18:07
dswebb: root@infra1-cinder-api-container-b195e794:~# systemctl list-unit-files | grep cind 18:08
dswebb: cinder-api.service                         enabled 18:08
dswebb: cinder-scheduler.service                   enabled 18:08
dswebb: cinder-volume-usage-audit.service          disabled 18:08
dswebb: cinder-volume-usage-audit.timer            disabled 18:08
bgmccollum: cloudnull reboot shows the loop and cinder-volume came up... i'll try a few more reboots to see if it's transient... 18:10
*** gillesMo has quit IRC18:26
*** ansmith_ has joined #openstack-ansible18:28
*** sdake_ has joined #openstack-ansible18:29
*** sdake has quit IRC18:29
*** ansmith has quit IRC18:30
cloudnull: interesting 18:32
cloudnull: could be a terrible race condition 18:32
bgmccollum: 1st reboot worked, 2nd and 3rd reboots did not work 18:34
bgmccollum: http://paste.openstack.org/show/745277/ 18:35
bgmccollum: cloudnull appears to race with the "/openstack" mount 18:37
bgmccollum: "losetup: /openstack/cinder.img: failed to set up loop device: No such file or directory" 18:37
cloudnull: hum . /openstack is a mount? 18:38
cloudnull: is that on baremetal? 18:38
bgmccollum: data device 18:38
bgmccollum: for the AIO 18:38
cloudnull: oh! 18:39
cloudnull: so we likely need to add an After clause in the loop device units? 18:40
cloudnull: we could probably add something like ConditionPathIsMountPoint or ConditionPathExists to the loop device units 18:41
bgmccollum: Yes, but only if a data device has been defined... 18:41
bgmccollum: right 18:41
bgmccollum: also, swift keeps its loop images in /srv which isn't affected by this 18:41
bgmccollum: but isn't honoring the data device, technically 18:42
bgmccollum: oh nevermind 18:42
bgmccollum: that's the mount point 18:42
cloudnull: I guess we could generally add a ConditionPathExists=/path/to/loop/file which should do what's required 18:42
bgmccollum: ok, i'll try adding that, and do some reboots 18:42
bgmccollum: does that go under [Unit]? 18:44
cloudnull: yes 18:44
bgmccollum: ok, about to fire it off 18:44
* cloudnull crosses fingers waiting in suspense 18:45
*** hamzaachi__ has joined #openstack-ansible18:46
mnaser: hey so i guess our gates are broken 18:47
mnaser: arxcruz: mentioned this a bit ago 18:47
mnaser: http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002842.html 18:47
*** ansmith_ has quit IRC18:50
*** ansmith_ has joined #openstack-ansible18:51
bgmccollum: cloudnull -- http://paste.openstack.org/show/745278/ 19:00
cloudnull: mnaser RUTRHO! 19:02
mnaser: cloudnull: i broke it ;( 19:02
mnaser: but i fixed it 19:02
cloudnull: bgmccollum did the service not eventually start? 19:02
mnaser: but i fixed it by breaking it 19:02
cloudnull: mnaser nice! 19:03
cloudnull: break fixing is the best 19:03
bgmccollum: cloudnull it did not start 19:03
cloudnull: hum. maybe we need to figure out an After=... when a data disk is being used 19:06
bgmccollum: After=openstack.mount ? 19:07
bgmccollum: and only template that when a data device is being used 19:07
bgmccollum: testing 19:07
chandankumar: guilhermesp: i think we need to revert this patch https://review.openstack.org/#/c/633549/ - after moving the tempest workspace the directory is empty, so we need to do tempest init again 19:08
chandankumar: guilhermesp: http://logs.openstack.org/82/627782/7/check/openstack-ansible-upgrade-aio_lxc-ubuntu-bionic/7a5a7b4/logs/ara-report/result/5d5f132f-0e5e-4bc6-904b-35488f1da75d/ 19:08
*** electrofelix has quit IRC19:15
*** dswebb has quit IRC19:18
*** DanyC has quit IRC19:20
*** DanyC has joined #openstack-ansible19:20
*** DanyC_ has joined #openstack-ansible19:22
*** DanyC has quit IRC19:25
*** priteau has quit IRC19:25
*** DanyC_ has quit IRC19:26
bgmccollum: cloudnull adding `After = openstack.mount` did not work (in fact, none of the swift mounts loaded either). Instead, I replaced `After = systemd-udev-settle.service` with `After = openstack.mount`, and that seems to have worked. 19:28
cloudnull: ah. and that would work regardless, data disk or not 19:29
cloudnull: maybe we add that to the systemd_mounts role as a general rule? 19:30
bgmccollum: is that used by the bootstrap-aio role? 19:30
bgmccollum: looking 19:31
bgmccollum: ok, it does appear to use it 19:32
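Sketched as a systemd drop-in, the ordering being tested here might look like the following. The drop-in path is illustrative (not taken from the actual role templates), while the unit name, mount name, and backing-file path come from the discussion above.

```ini
# /etc/systemd/system/loop-cinder.service.d/override.conf (illustrative)
[Unit]
# Start only after the data-disk mount that holds the backing file...
After=openstack.mount
# ...and skip the unit entirely if the backing file is absent.
ConditionPathExists=/openstack/cinder.img
```

As the rest of the log shows, this ordering alone still raced occasionally in bgmccollum's testing, so it is a starting point rather than a confirmed fix.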
*** ansmith has joined #openstack-ansible19:34
cloudnull: bgmccollum we could probably just add that to https://github.com/openstack/ansible-role-systemd_mount/blob/master/templates/systemd-mount.j2 19:35
*** ansmith_ has quit IRC19:35
cloudnull: which would add that option as a general rule 19:35
bgmccollum: but loop-cinder is a service, not a mount haha 19:39
*** DanyC has joined #openstack-ansible19:40
bgmccollum: probably because it does more than just set up a mount 19:40
bgmccollum: pvscan so lvm picks it up 19:40
cloudnull: ah... 19:43
cloudnull: so then maybe it just needs to go into the service definition 19:43
bgmccollum: either way, it's still not reliable... just had a reboot where it didn't work... 19:44
bgmccollum: it's racing with something non-obvious to my eyes 19:44
*** DanyC has quit IRC19:44
cloudnull: maybe `After = openstack.mount` and `After = systemd-udev-settle.service`? 19:45
bgmccollum: i tried with both, and it didn't seem to make a difference 19:45
bgmccollum: i'll keep banging on it 19:46
bgmccollum: i think something more systemic is at play, because sometimes SSH doesn't even start... :/ 19:46
bgmccollum: might need to do the cinder mount like the swift mount, then have another service that does the pvscan after that 19:47
bgmccollum: wait... that won't work. we just need the loop device set up, and mount units wouldn't be able to do that. 19:50
*** sdake_ has quit IRC19:54
*** hamzaachi__ has quit IRC20:26
*** hamzaachi has joined #openstack-ansible20:27
*** hamzaachi has quit IRC20:28
*** hamzaachi has joined #openstack-ansible20:31
*** hamzaachi has quit IRC20:31
*** hamzaachi has joined #openstack-ansible20:32
*** cmart has quit IRC20:37
*** hamzaachi has quit IRC20:37
*** cmart has joined #openstack-ansible20:38
*** hamzaachi has joined #openstack-ansible20:38
*** hamzaachi has quit IRC20:39
*** hamzaachi has joined #openstack-ansible20:39
*** hamzaachi has quit IRC20:51
guilhermesp: yes chandankumar, I was about to test something but I didn't have the opportunity 21:32
guilhermesp: I noticed that the tempest move command wasn't actually moving the workspace, which is why the upgrade jobs still fail 21:33
guilhermesp: I was thinking of moving the content to the new workspace manually, but do you think adding a tempest init could solve the problem? 21:33
guilhermesp: I still have the hold on that job, but it's quite messy in there. I think I will ask the openstack-infra folks to remove it and request a brand new one in case of failures again 21:34
*** DanyC has joined #openstack-ansible21:51
cmart: Hi folks. Is there any information out there about some of the largest single OpenStack deployments that are managed by OpenStack-Ansible? On the scale of 10^3 compute nodes? 21:56
*** DanyC has quit IRC21:57
cmart: I know other things would tend to bottleneck before your deployment tooling, but just curious to learn how well it works from an ops perspective. 21:58
*** ansmith has quit IRC22:05
*** dave-mccowan has joined #openstack-ansible22:17
*** dave-mccowan has quit IRC22:21
*** sdake has joined #openstack-ansible22:25
*** sdake has quit IRC22:27
*** sdake has joined #openstack-ansible22:33
*** shananigans has quit IRC22:36
*** cmart has quit IRC22:41
*** sdake has quit IRC22:46
*** sdake has joined #openstack-ansible22:49
*** dhellmann has left #openstack-ansible22:55
*** cmart has joined #openstack-ansible22:58
*** ArchiFleKs has quit IRC23:04
*** aedc has quit IRC23:06
*** DanyC has joined #openstack-ansible23:13
*** DanyC has joined #openstack-ansible23:15
*** DanyC has quit IRC23:22
*** DanyC_ has joined #openstack-ansible23:22
*** sdake has quit IRC23:24
*** sdake has joined #openstack-ansible23:25
*** ArchiFleKs has joined #openstack-ansible23:25
*** sdake has quit IRC23:32
*** aedc has joined #openstack-ansible23:44
*** cmart has quit IRC23:48
*** cmart has joined #openstack-ansible23:48
*** spsurya has quit IRC23:58

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!