Wednesday, 2020-07-15

00:02 *** cshen has joined #openstack-ansible
00:06 *** rmcall has quit IRC
00:07 *** cshen has quit IRC
00:19 *** markvoelker has joined #openstack-ansible
00:29 *** jbadiapa has quit IRC
00:36 *** gyee has quit IRC
02:03 *** cshen has joined #openstack-ansible
02:08 *** cshen has quit IRC
02:10 *** spatel has joined #openstack-ansible
02:19 *** markvoelker has quit IRC
02:31 *** rmcall has joined #openstack-ansible
02:49 *** spatel has quit IRC
03:22 *** spatel has joined #openstack-ansible
03:22 *** spatel has quit IRC
03:23 *** spatel has joined #openstack-ansible
03:23 *** spatel has quit IRC
03:30 *** spatel has joined #openstack-ansible
03:30 *** spatel has quit IRC
03:36 *** redrobot has quit IRC
04:04 *** cshen has joined #openstack-ansible
04:09 *** cshen has quit IRC
04:11 *** markvoelker has joined #openstack-ansible
04:15 *** markvoelker has quit IRC
04:33 *** evrardjp has quit IRC
04:33 *** evrardjp has joined #openstack-ansible
05:01 *** markvoelker has joined #openstack-ansible
05:02 *** cshen has joined #openstack-ansible
05:05 *** markvoelker has quit IRC
05:07 *** cshen has quit IRC
05:22 *** udesale has joined #openstack-ansible
06:07 *** udesale has quit IRC
06:15 *** cshen has joined #openstack-ansible
06:20 *** udesale has joined #openstack-ansible
06:38 <jrosser> tow: can you give some more information? Which operating system are you using?
06:44 <janno> when adding designate to an existing cluster, which part would be responsible for adding the queues to rabbitmq?
06:46 <janno> we are currently seeing errors like these: http://paste.openstack.org/show/795931/
06:48 <CeeMac> morning
06:51 <openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-os_ceilometer stable/ussuri: [New files - needs update] Update paste, policy and rootwrap configurations  https://review.opendev.org/741096
06:54 <openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-os_ceilometer stable/ussuri: [New files - needs update] Update paste, policy and rootwrap configurations  https://review.opendev.org/741098
06:55 *** this10nly has joined #openstack-ansible
06:56 <jrosser> seems i have no idea how to bump the role SHA using the release scripts :(
07:15 <CeeMac> jrosser: based on https://opendev.org/openstack/openstack-ansible/src/branch/master/osa_toolkit/generate.py#L670-L677
07:16 <CeeMac> and in o_u_c the br-mgmt provnet uses group_binds: hosts
07:16 <CeeMac> but hosts isn't a member of 'physical_host_group'
07:16 <CeeMac> so it's falling back to the ansible_host ip?
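For reference, this is the kind of br-mgmt provider network stanza being discussed, as it typically appears in openstack_user_config.yml. A minimal sketch following the standard OSA examples; the bridge, interface, and bind groups below are the usual documented defaults, not a copy of CeeMac's actual config:

```yaml
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        # Binding to both container and host groups attaches the network
        # to LXC containers and to the physical hosts themselves.
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
```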
07:17 *** andrewbonney has joined #openstack-ansible
07:18 *** cshen has quit IRC
07:18 <CeeMac> i'm also not sure about this comment https://opendev.org/openstack/openstack-ansible/src/branch/master/osa_toolkit/generate.py#L560-L561
07:19 <CeeMac> which seems to imply only the first IP will be used?
07:19 <CeeMac> although i haven't found the logic for that in the code yet
07:20 <jrosser> isn't that more to do with making sure that the ip to ssh to for ansible, for a container, is the physical host
07:21 <jrosser> rather than the IP of the mgmt net inside the container
07:21 <jrosser> remember, everything that ansible does to the containers is kind of proxied via the host
07:22 <jrosser> CeeMac: give us a minute and my colleague will be along, who's also been investigating
07:24 <CeeMac> sure
07:46 *** tosky has joined #openstack-ansible
07:47 *** udesale has quit IRC
07:47 *** udesale has joined #openstack-ansible
07:54 *** also_stingrayza is now known as stingrayza
08:12 <jrosser> CeeMac: my colleague andrewbonney thinks he has a solution
08:12 *** markvoelker has joined #openstack-ansible
08:13 <andrewbonney> Hi. So I haven't fully traced how this worked before, but I've been tracing back and taking a look at the nova source
08:14 <andrewbonney> The nova docs suggest that 'live_migration_inbound_addr' can't be used if 'live_migration_tunnelled' is enabled (https://docs.openstack.org/nova/latest/configuration/config.html)
08:14 <andrewbonney> But whilst tunnelled was enabled in https://github.com/openstack/openstack-ansible-os_nova/commit/12e09a3402cb810c53188a94ad1c820086d8e302
08:15 <andrewbonney> the nova source suggested the inbound_addr config option may still get used: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L8894
08:15 <andrewbonney> Having set that config on our live instance (after also fixing the Nova 'my_ip' setting to use the correct network), our migrations use the correct interface
08:17 *** markvoelker has quit IRC
08:21 <CeeMac> hi andrewbonney
08:22 <CeeMac> i'm not sure i'm following that last bit in driver.py, that still seems to suggest that tunnelling should be disabled?
08:23 <andrewbonney> I'm afraid I'm not familiar enough with that to say at the moment
08:24 *** mmethot_ has joined #openstack-ansible
08:24 <CeeMac> so in your environment you set live_migration_inbound_addr? did you take out the live_migration_uri and live_migration_tunnelled settings?
08:25 *** dmsimard has quit IRC
08:25 <andrewbonney> No, I left those in. As far as I can tell (admittedly I haven't traced the full path) the inbound_addr is passed as metadata to the host, which ultimately uses the live_migration_uri
08:26 <andrewbonney> This replaces the %s
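A sketch of the substitution being described, expressed as a user_variables.yml override in OSA terms. The qemu+ssh URI form and the example address are assumptions for illustration, not the exact values from this deployment:

```yaml
nova_nova_conf_overrides:
  libvirt:
    # Template URI; nova fills in an address for the %s when building the
    # migration target URI on the source host.
    live_migration_uri: "qemu+ssh://nova@%s/system"
    # With this set, the %s above is replaced by this address rather than
    # the target's hostname, steering migration traffic onto this network.
    live_migration_inbound_addr: "172.29.236.11"
```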
08:26 *** dmsimard has joined #openstack-ansible
08:26 <CeeMac> oh, interesting
08:27 <andrewbonney> It's a little confusing that they're all grouped together in the docs
08:27 <CeeMac> and you used the br-mgmt ip for both inbound_addr and my_ip?
08:27 <CeeMac> yes, the docs aren't clear in a lot of places
08:28 <jrosser> andrewbonney: do you think that the nova docs don't represent what the code actually does?
08:28 <andrewbonney> Yes, that's how I've got it set at present. It may be that you could just set inbound_addr to make live migration work, but we adjusted my_ip too as the git history suggested it was broken
08:28 <andrewbonney> jrosser: my impression is that there is an error, yes, but I'm willing to accept I could be missing a detail
08:29 *** mmethot has quit IRC
08:29 <CeeMac> seems sensible
08:29 <CeeMac> did you happen to look into how osa is selecting the container_address incorrectly in the dynamic inventory?
08:32 <andrewbonney> Yeah, I traced that back to this commit: https://github.com/openstack/openstack-ansible/commit/4c04c688e70ff16ebed4ddcaf20e8e8d712a47b0#diff-6d3cb25b2133a32fce9452487bf728b2R24
08:32 <andrewbonney> It looks like there may have been some confusion between 'management_bridge' and 'container_address'
08:34 <andrewbonney> A local patch to use 'container_address' in a couple of places in playbooks/common-playbooks/nova.yml seems to fix it, but I'd like to see if there's a cleaner way
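A minimal sketch of the kind of local patch being described, assuming the Train-era layout where common-playbooks/nova.yml derives nova_management_address in a pre_task. The exact lookup path is an assumption here, not andrewbonney's actual diff:

```yaml
- name: Derive the nova management address from the container network map
  set_fact:
    # Prefer the br-mgmt address recorded under the inventory's
    # container_networks; fall back to the generic management_address fact.
    nova_management_address: >-
      {{ container_networks['container_address']['address']
         | default(management_address) }}
```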
08:34 <CeeMac> it also looks like manage_address used to be in a provnet too
08:38 <andrewbonney> I'll put some patches together to share the end result for further comment. It starts to get confusing following all of the threads
08:41 *** shyamb has joined #openstack-ansible
08:42 <CeeMac> cool
08:51 *** cshen has joined #openstack-ansible
08:56 *** cshen has quit IRC
08:57 <CeeMac> sorry, was multi-tasking badly while in a call there
08:58 <CeeMac> I saw here https://opendev.org/openstack/openstack-ansible/src/branch/master/osa_toolkit/generate.py#L1141-L1145
08:58 *** aedc has joined #openstack-ansible
08:58 <CeeMac> that there must have originally been a management ip_q and provnet, which is probably where the "management_address" logic is derived from
09:04 <CeeMac> andrewbonney: dynamic-address-fact.yml looks like it was changed again after that commit, some time between queens and rocky
09:04 <CeeMac> not sure how best to track that
09:05 <CeeMac> but the logic in it now seems to be causing container_networks: container_address: address: to be populated by the ansible_host address, not the br-mgmt one
09:06 <CeeMac> by, or from, can't fathom which
09:09 <CeeMac> i'm still trying to get my head around what is happening in generate.py
09:11 *** markvoelker has joined #openstack-ansible
09:15 *** markvoelker has quit IRC
09:17 *** shyam89 has joined #openstack-ansible
09:17 *** shyamb has quit IRC
09:25 *** jbadiapa has joined #openstack-ansible
09:57 *** dpaclt has joined #openstack-ansible
09:57 *** jbadiapa has quit IRC
10:05 <dpaclt> Hi all, good day. I am unable to launch new VMs, getting this error: http://paste.openstack.org/show/795937/
10:05 *** shyamb has joined #openstack-ansible
10:06 <dpaclt> Can anyone suggest anything?
10:07 *** shyam89 has quit IRC
10:09 *** dpaclt has quit IRC
10:12 *** dpaclt has joined #openstack-ansible
10:18 *** jbadiapa has joined #openstack-ansible
10:19 *** jbadiapa has quit IRC
10:19 *** jbadiapa has joined #openstack-ansible
10:23 *** shyamb has quit IRC
10:24 *** sshnaidm is now known as sshnaidm|afk
10:24 *** shyamb has joined #openstack-ansible
10:27 *** shyamb has quit IRC
10:27 *** shyamb has joined #openstack-ansible
10:29 <openstackgerrit> Andrew Bonney proposed openstack/openstack-ansible-os_nova master: Add nova_management_address to defaults  https://review.opendev.org/741146
10:52 *** cshen has joined #openstack-ansible
10:57 *** cshen has quit IRC
10:57 *** shyam89 has joined #openstack-ansible
11:00 *** shyamb has quit IRC
11:04 <openstackgerrit> Andrew Bonney proposed openstack/openstack-ansible-os_nova master: Use Nova management IP for live migrations  https://review.opendev.org/741155
11:05 *** tosky has quit IRC
11:06 <openstackgerrit> Andrew Bonney proposed openstack/openstack-ansible-os_nova master: Use nova_management_address as a default VNC bind address  https://review.opendev.org/741156
11:10 *** tosky has joined #openstack-ansible
11:31 *** udesale_ has joined #openstack-ansible
11:34 *** udesale has quit IRC
11:43 *** dpaclt has quit IRC
11:44 <CeeMac> andrewbonney: so the nova management address will default to loopback unless overridden in user vars?
11:48 *** dkopper has joined #openstack-ansible
11:50 *** dkopper has quit IRC
12:04 *** shyam89 has quit IRC
12:09 *** mgariepy has joined #openstack-ansible
12:14 <andrewbonney> As far as I can see at present, nova_management_address only comes from playbooks/common-playbooks/nova.yml. The default addition is just intended as a cleanup, and I picked loopback to match up with a similar case for cinder
12:14 <andrewbonney> I'd be happy to take advice on a better default. These patches are just some prep to make the fix around the dynamic address fact a little easier
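For context, a minimal sketch of what such a default would look like in the role's defaults/main.yml. The pattern is inferred from the discussion and the linked review 741146; the exact line in the merged patch may differ:

```yaml
# Fall back to loopback when the playbook does not supply a management
# address, mirroring the equivalent cinder default mentioned above.
nova_management_address: "{{ management_address | default('127.0.0.1') }}"
```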
12:35 *** vapjes has joined #openstack-ansible
12:35 <CeeMac> Makes sense, just checking I understand correctly. I can't remember if there was a push recently to move away from binding loopback in favour of binding the management address. But there is the outstanding issue of the management address being incorrect when using an OOB IP for physical hosts.
12:36 <CeeMac> Your principles seem sound though, and having it overridable gives the user control if/where they need it
12:37 <andrewbonney> I wouldn't be surprised if I'm missing some history in some places as I'm quite new to the detail, so it may be that one or two things will need swapping to match current best practice
12:38 <CeeMac> I'm still trying to work my head around a lot of it too
12:38 <CeeMac> I believe jrosser had been doing some work on the loopback ip binding issue, iirc
12:39 <openstackgerrit> Andrew Bonney proposed openstack/openstack-ansible master: Fix management address lookup for metal hosts in some deployments  https://review.opendev.org/741167
12:53 *** sshnaidm|afk is now known as sshnaidm
12:53 *** cshen has joined #openstack-ansible
12:58 *** cshen has quit IRC
12:59 *** spatel has joined #openstack-ansible
13:02 <openstackgerrit> Andrew Bonney proposed openstack/openstack-ansible-os_neutron master: Remove unused neutron_management_ip  https://review.opendev.org/741177
13:03 *** dkopper has joined #openstack-ansible
13:04 <admin0> when doing a minor upgrade from 20.0.2 to 20.1.3, on setup-infrastructure i get this error message: "The galera_cluster_name variable does not match what is set in mysql"
13:04 <admin0> how is that even possible :D
13:04 <admin0> it says: To ignore the cluster state set '-e galera_ignore_cluster_state=true'
13:04 <admin0> how safe is it .. or how do I fix this up?
13:05 <admin0> bootstrap and setup-hosts showed no errors
13:05 <openstackgerrit> Andrew Bonney proposed openstack/openstack-ansible master: Conform cinder management address to pattern used for nova  https://review.opendev.org/741180
13:06 *** dkopper has quit IRC
13:25 *** watersj has joined #openstack-ansible
13:27 <watersj> for masakari hostmonitor, what parts are needed to get that working? Looks like it needs pacemaker(-remote?) and corosync, but I have not found a document on setup
13:27 <watersj> any pointers would be a great help
13:31 <jrosser> watersj: there are patches not yet merged to add corosync/pacemaker for this, so i'd say that's not available yet
13:32 <jrosser> CeeMac: my work on binding was to make sure the services all bound to the openstack management network IP rather than 0.0.0.0
13:34 <jrosser> logan-: would be interested to know what you think of this https://review.opendev.org/#/c/741155/
13:37 <watersj> jrosser, patches for masakari, or for the install/setup of corosync/pacemaker?
13:38 <jrosser> watersj: all i know is what's in here https://review.opendev.org/#/c/739146/ :)
13:39 <watersj> jrosser, that confirms some of what I saw about corosync, ty
13:40 <jrosser> watersj: if you were in a position to test that and comment on the patch it would be great
13:42 <admin0> the error I am facing is here: https://gist.github.com/a1git/712b8ed02c64a82fa73690bdd70bf4d7
13:43 *** Guest14648 has joined #openstack-ansible
13:44 <jrosser> admin0: you have errors related to haproxy before the galera error?
13:44 <admin0> jrosser, nova_libvirt_live_migration_inbound_addr .. isn't nova_libvirt_live_migration_inbound_interface better?
13:44 <admin0> as an operator, i would not know what address to put in that
13:45 <admin0> jrosser, the haproxy playbook runs fine .. and is working fine
13:45 *** Guest14648 is now known as redrobot
13:46 <jrosser> admin0: the first thing in your paste is "RUNNING HANDLER [haproxy_endpoints : Set haproxy service state]" failing
13:47 <admin0> rerunning the haproxy playbook now to see if i can catch this
13:48 <admin0> it ran fine .. haproxy itself runs fine .. but when i run the galera playbook / setup-infra, i think at one point it checks for the service in the container .. which is not up yet because mysql could not be installed
13:49 <admin0> the haproxy playbook recap is ok=40, changed=0 on all 3 controllers it runs on
13:52 <admin0> jrosser, https://asciinema.org/a/FI5KvhP2VTW4KOIfHg3ciTMCl --- recorded this
13:56 <jrosser> admin0: i can't really offer anything but debugging tips
13:56 <jrosser> the tasks are here https://opendev.org/openstack/openstack-ansible-galera_server/src/branch/stable/train/tasks/galera_cluster_state.yml
13:57 <jrosser> and adding some -vv (or more) to the cli should print out the actual and expected cluster names
13:58 <admin0> rerunning with -vvvv
14:00 <admin0> https://asciinema.org/a/kFkWI1RPj6gIpHdsZQ6VST1kb
14:00 <admin0> as far as i can understand, it's failing because it's unable to find "show variables"-style output like the cluster name, but no mysql has been installed there yet
14:02 <admin0> do you guys use anything to push terminal output to gist?
14:06 <jrosser> admin0: sorry for the silly question, but if it's an upgrade from 20.0.2 to 20.1.3, why is mysql not already installed?
14:07 <admin0> during the playbook run it said container name mismatch, so without using -vvv or extra thinking, i just nuked the c1 galera container and recreated it
14:07 *** this10nly has quit IRC
14:11 <jrosser> i think there is some statefulness about whether galera is installed or not
14:12 <jrosser> so that it doesn't get re-installed / restarted unless you absolutely need it
14:12 <jrosser> so this might now be more like a major upgrade where you need to pass -e 'galera_upgrade=true'
14:12 <jrosser> but i'm kind of guessing a bit
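The two escape hatches mentioned in this exchange, collected as a sketch. The flag names are taken verbatim from the error message and jrosser's suggestion for the Train-era galera_server role; verify them against your own checkout before use:

```yaml
# Normally passed on the CLI, e.g.:
#   openstack-ansible galera-install.yml -e galera_upgrade=true
# Treat the run like a major upgrade, allowing galera to be re-installed:
galera_upgrade: true
# Or, to skip only the cluster-state/name assertion that failed above:
galera_ignore_cluster_state: true
```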
14:13 <admin0> trying
14:15 <jrosser> admin0: look here, it sets a fact on installation https://opendev.org/openstack/openstack-ansible-galera_server/src/branch/stable/train/tasks/galera_install.yml#L22-L31
14:15 <jrosser> not sure if that got deleted or not when the container was destroyed
14:16 <CeeMac> jrosser: ah, OK, I knew it was something to do with the management address but couldn't remember the details
14:16 <admin0> jrosser, https://asciinema.org/a/WQV8nSiPwDtjlAreaaxVdmuhr --
14:17 <admin0> looks like a haproxy/galera ping-pong error in each run
14:17 <CeeMac> nova_libvirt_live_migration_inbound_addr is derived from the nova variable live_migration_inbound_addr, so it makes sense in context
14:17 <CeeMac> admin0: ^
14:17 *** also_stingrayza has joined #openstack-ansible
14:19 <admin0> CeeMac, as an operator, in user_variables i put stuff that is common .... so for a variable in defaults/main.yml like nova_libvirt_live_migration_inbound_addr: .. how would I know what to put there? the addr will be different for each host. if it was _cidr, it would make sense
14:19 <admin0> i am looking at it from the perspective of an operator who goes to main.yml trying to figure out what to put where
14:20 <jrosser> admin0: we have lots of things like that in defaults which are potentially different for each host
14:20 <spatel> jamesdenton: are you around?
14:21 *** stingrayza has quit IRC
14:21 <CeeMac> admin0: the challenge is the same as if you were manually adding the live_migration_inbound_addr variable to nova.conf
14:21 <CeeMac> that is the variable that nova uses, so to set it programmatically it is good practice to use the same naming
14:21 <admin0> i mean, if i have a br-transfer interface at 172.29.229.0/23, will that variable allow me to tell nova to use this network/interface for the vm transfers?
14:21 <jrosser> admin0: and also remember that all the things in role defaults can also be overridden in /etc/openstack_deploy group / host vars, which are by definition host specific
14:22 <CeeMac> admin0: that is part of the challenge we're facing
14:23 <CeeMac> you would need the specific IP of the interface on the host whose nova.conf variable you were setting
14:23 <CeeMac> or an equivalent dynamic inventory value
14:24 <admin0> will this use --p2p ?
14:24 <jrosser> it would be nova_libvirt_live_migration_inbound_addr: "{{ hostvars[inventory_hostname]['ansible_br_transfer']['ipv4']['address'] }}"
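Expanding that one-liner into the kind of override file jrosser describes. A sketch: the bridge name br-transfer comes from admin0's example, and note that Ansible fact names replace '-' with '_':

```yaml
# /etc/openstack_deploy/user_variables.yml (or a host_vars file for
# per-host values): resolve each host's own br-transfer address at runtime.
nova_libvirt_live_migration_inbound_addr: "{{ ansible_br_transfer['ipv4']['address'] }}"
```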
14:24 <CeeMac> admin0: as an osa operator I actually do spend a lot of time looking in defaults/main.yml to see what variables are set that I can override to do what I want/need :)
14:25 <admin0> yep ..
14:25 <admin0> i try not to play too much :D
14:25 <CeeMac> looking isn't playing :p
14:30 <admin0> fair enough
14:30 <CeeMac> :D
14:30 <CeeMac> I understand your issue though
14:30 <admin0> jrosser, i am destroying the container and rebuilding it again .. last time i failed to rm -rf the /openstack/galera folder data
14:30 <admin0> hopefully it will rejoin and fix itself this time
14:31 <CeeMac> wading through the docs for config variables for the various openstack projects can be a challenge
14:32 <admin0> in my case, different departments have their different clusters with different settings .. so keeping track of it all together is also a challenge
14:34 <jrosser> i don't think it's correct that every overridable variable has a single constant value
14:34 <jrosser> a lot of them are host dependent
14:43 *** dave-mccowan has quit IRC
14:47 *** dave-mccowan has joined #openstack-ansible
14:48 *** mmethot_ is now known as mmethot
14:51 <CeeMac> agreed
14:54 *** cshen has joined #openstack-ansible
14:59 *** cshen has quit IRC
15:08 <tow> jrosser: the base os is an up-to-date CentOS 7
15:09 <openstackgerrit> Georgina Shippey proposed openstack/openstack-ansible-ops master: Collect keystone apache federation files  https://review.opendev.org/741236
15:09 <tow> jrosser: it always fails with the placement, heat_api and aodh containers
15:09 <jrosser> tow: and you mentioned it was missing git and other packages?
15:11 <tow> jrosser: apparently just git. The setup-openstack playbook halts; then if we go into the container and yum install git, rerunning the playbook goes through
15:12 <jrosser> can you show me the log from where it stops?
15:15 <jrosser> tow: if you are able to paste something at paste.openstack.org to give some context it would be really helpful
15:18 <tow> jrosser: http://paste.openstack.org/show/795951/
15:22 <jrosser> tow: there are a few odd things. i would expect the name of the venv to include the openstack-ansible release version number rather than be /openstack/venvs/placement-train
15:22 <jrosser> then it's using python2, which it really shouldn't be
15:23 <jrosser> and lastly, the clone of the placement repo with git on the placement container itself is not what i'd expect
15:23 <jrosser> normally the clone and build of the python wheels would happen on the repo server container
15:24 <jrosser> tow: what stage are you at with this? is it an early attempt in a lab environment, single node, multinode, .....
15:25 <tow> this is a production deployment: multinode, 3-node control plane, 16 compute nodes, connecting to external ceph
15:26 <tow> yes, it is very strange indeed. we've used OSA in the past, that's why we are at a loss
15:26 <jrosser> perhaps the first thing to check would be that you have the repo containers configured properly https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.prod.example#L93-L100
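The relevant stanza from that linked example, reproduced as a sketch; the host names and IPs follow the prod example's conventions and are illustrative:

```yaml
# openstack_user_config.yml: the repo containers must be declared so that
# the repo_all group is populated and wheel builds are routed to them.
repo-infra_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13
```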
15:27 <jrosser> OSA uses the presence of the ansible group for the repo containers as the thing that decides if the wheels are built locally or deferred to the repo server
15:31 <tow> hmmm interesting
15:31 <jrosser> tow: it is ultimately decided here where the wheels are built https://github.com/openstack/ansible-role-python_venv_build/blob/897e97eb58cfc78cadf8e9e183ed0eb1297535b6/vars/main.yml#L65-L78
15:31 <jrosser> it's a bit complicated, but it finds an operating system and architecture match in the repo_all group
15:32 <tow> ok, let us look into it. I'll let you know, but it makes sense, it has to be related to the repo containers
15:32 <jrosser> and that then ends up being used here https://github.com/openstack/ansible-role-python_venv_build/blob/aabd3c07c29fd89a56cadc01ec3a2e3b682e5e14/tasks/python_venv_wheel_build.yml#L24
15:32 <jrosser> i'm thinking that for some reason that's running against localhost
15:33 <jrosser> no, not localhost, i mean ansible_host
15:33 <jrosser> i.e. the target
15:38 <tow> jrosser: btw, is there any recommendation on using the stable tag vs the version number tag? e.g. stable/train or 20.1.3
15:39 <jrosser> stable/train is always the tip of the stable branch
15:39 <jrosser> every two weeks or so a tag will get dropped on that branch to be a 'minor release'
15:40 <jrosser> and at that point the SHAs for all the underlying openstack services (cinder/nova) are moved to the head of whatever their stable branch is
15:40 <jrosser> and the SHAs for all the OSA ansible roles are moved forward to the head of their stable branches
15:40 <jrosser> what the tags are saying is that all of that in combination passed the CI tests
15:46 <tow> ok, understood, looking into the repo containers as we speak
16:22 *** mgariepy has quit IRC
16:22 *** vblando has joined #openstack-ansible
16:38 *** udesale_ has quit IRC
16:55 *** cshen has joined #openstack-ansible
17:00 *** cshen has quit IRC
17:09 *** mgariepy has joined #openstack-ansible
17:11 *** watersj has quit IRC
17:30 *** gyee has joined #openstack-ansible
17:36 <logan-> jrosser: re https://review.opendev.org/#/c/741155/, i'm not sure. as of rocky, where my deployments are currently marooned, it seems to still use the %s replacement in live_migration_uri when tunneling is enabled. I'll have to look through the nova repo to see if that changes in later releases. they've been kicking around ideas for removing live_migration_uri for a long time, so it's certainly possible something changed.
17:44 *** cshen has joined #openstack-ansible
17:57 *** andrewbonney has quit IRC
18:04 *** mloza has quit IRC
18:06 <jrosser> logan-: there certainly seems to be a difference between the nova docs and the actual behaviour
18:07 <jrosser> in our deployment, setting live_migration_inbound_addr to the IP of the interface we actually want seems to make the migration traffic go the right way
18:07 <jrosser> which seems contrary to the docs
18:08 <jrosser> this is whilst leaving live_migration_uri as it is currently
18:20 *** cshen has quit IRC
18:25 *** tosky has quit IRC
18:25 *** tosky has joined #openstack-ansible
18:50 *** cshen has joined #openstack-ansible
18:55 *** cshen has quit IRC
18:58 *** arkan has joined #openstack-ansible
19:05 <CeeMac> It is most definitely strange
19:06 <CeeMac> Out of interest, was the variable introduced and the outcome validated in isolation before my_ip etc. was also updated?
19:07 <CeeMac> jrosser: ^
19:08 <jrosser> no, I think we changed it all together
19:12 *** cshen has joined #openstack-ansible
19:12 *** KeithMnemonic has joined #openstack-ansible
19:18 <CeeMac> Is it worth retesting by maybe commenting that one variable out and validating the outcome? Interests of science and all that
19:22 *** rmcall has quit IRC
19:23 *** rmcall has joined #openstack-ansible
19:25 *** rmcallis has joined #openstack-ansible
19:28 *** rmcall has quit IRC
19:28 *** rmcallis__ has joined #openstack-ansible
19:31 *** rmcallis has quit IRC
19:36 *** cshen has quit IRC
20:02 *** cshen has joined #openstack-ansible
20:07 *** cshen has quit IRC
20:52 *** cshen has joined #openstack-ansible
20:56 *** cshen has quit IRC
21:07 <arkan> is this still active? https://bugs.launchpad.net/openstack-ansible/+bug/1877421
21:07 <openstack> Launchpad bug 1877421 in openstack-ansible "Cinder-volume is not able to recognize a ceph cluster on OpenStack Train." [Undecided,New]
21:07 <arkan> I'm getting this error from cinder volumes
21:07 <arkan> cinder.exception.ClusterNotFound: Cluster {'name': 'ceph@ceph'} could not be found.
21:08 <arkan> but I can do this from the container:
21:08 <arkan> rbd -p cinder-volumes --id cinder -k /etc/ceph/ceph.client.cinder.keyring ls
21:08 <arkan> it's working, but the service is still throwing that error
21:10 *** gyee has quit IRC
21:11 <arkan> I think I solved the authorisation problem with magnum: the user stack_domain_admin was not added to the heat domain with the admin role
21:11 *** gyee has joined #openstack-ansible
21:11 <arkan> I ran this: openstack role add --domain heat --user-domain heat --user stack_domain_admin admin
21:27 *** rmcall has joined #openstack-ansible
21:28 *** rmcallis__ has quit IRC
21:30 *** rmcall has quit IRC
21:31 *** rmcall has joined #openstack-ansible
21:35 *** rmcall has quit IRC
21:36 *** rmcallis has joined #openstack-ansible
21:42 *** rmcallis has quit IRC
22:15 *** spatel has quit IRC
22:17 *** logan- has quit IRC
22:19 *** logan- has joined #openstack-ansible
22:29 *** spatel has joined #openstack-ansible
22:33 *** vapjes has quit IRC
22:34 *** spatel has quit IRC
22:48 *** tosky has quit IRC
22:50 *** watersj has joined #openstack-ansible
22:52 *** cshen has joined #openstack-ansible
22:57 *** cshen has quit IRC
23:11 *** markvoelker has joined #openstack-ansible
23:15 *** markvoelker has quit IRC
