Friday, 2020-10-09

*** djhankb has quit IRC00:01
*** djhankb has joined #openstack-ansible00:02
*** spatel has joined #openstack-ansible00:23
*** spatel has quit IRC00:28
openstackgerritMerged openstack/openstack-ansible-tests stable/train: Pin virtualenv<20 for python2 functional tests  https://review.opendev.org/756883 00:32
*** maharg101 has joined #openstack-ansible00:33
*** maharg101 has quit IRC00:38
*** nurdie has joined #openstack-ansible00:41
*** nurdie has quit IRC00:46
openstackgerritJimmy McCrory proposed openstack/ansible-role-python_venv_build master: Install git package on hosts building venvs  https://review.opendev.org/756939 00:57
openstackgerritMerged openstack/openstack-ansible-os_neutron master: Remove support for lxc2 config keys  https://review.opendev.org/756251 01:22
*** cshen has joined #openstack-ansible02:26
*** cshen has quit IRC02:31
*** nurdie has joined #openstack-ansible02:42
*** nurdie has quit IRC02:47
*** maharg101 has joined #openstack-ansible03:06
*** maharg101 has quit IRC03:11
*** macz_ has joined #openstack-ansible03:22
*** macz_ has quit IRC03:26
*** pmannidi has quit IRC04:16
*** mpsairam has joined #openstack-ansible04:16
*** cshen has joined #openstack-ansible04:27
*** cshen has quit IRC04:31
*** evrardjp has quit IRC04:33
*** evrardjp has joined #openstack-ansible04:33
*** nurdie has joined #openstack-ansible04:43
*** nurdie has quit IRC04:48
*** maharg101 has joined #openstack-ansible05:07
*** maharg101 has quit IRC05:12
*** miloa has joined #openstack-ansible05:40
*** rpittau|afk is now known as rpittau05:43
*** sshnaidm is now known as sshnaidm|off06:06
*** nurdie has joined #openstack-ansible06:44
*** nurdie has quit IRC06:49
noonedeadpunkfinally06:58
noonedeadpunkmornings06:59
noonedeadpunkuh, I thought that lxc merged....06:59
recycleheromorning07:00
recycleheroI have commented out the belongs_to entries in heat.yml in /opt/openstack-ansible/env.d07:01
recycleherobut it still gets included when I run dynamic_inventory.py?07:01
recycleherowhy is that, and is this a good way to exclude services?07:02
noonedeadpunkrecyclehero: yeah, we don't clean up the inventory automatically.07:02
noonedeadpunkso you need to run scripts/inventory-manage.py07:02
recycleheroaha, thank you. Then I will run the lxc container destroy playbook and then setup-hosts, infra and openstack07:03
recycleheroseems good for major reconfiguration?07:03
noonedeadpunkum...07:04
noonedeadpunkif you need to drop the heat containers, then you need to run the lxc container destroy playbook with `--limit heat_all`07:04
recycleherogreat, thanks07:05
noonedeadpunkand after that just remove hosts from inventory with scripts/inventory-manage.py -r container_name07:05
recycleheroso what is dynamic_inventory.py07:06
recycleheroI got it07:07
noonedeadpunkdynamic_inventory.py generates the inventory and is passed to ansible-playbook as the inventory source07:07
*** maharg101 has joined #openstack-ansible07:08
noonedeadpunkbut it can only add hosts, not drop them07:11
noonedeadpunkinventory-manage.py is for viewing the inventory in table form07:11
*** cshen has joined #openstack-ansible07:11
*** maharg101 has quit IRC07:12
recycleheronoonedeadpunk: what's their relationship with openstack_inventory.json in /etc/openstack_deploy?07:12
noonedeadpunkthat's a good question :)07:13
noonedeadpunkconsider openstack_inventory.json as cache file for dynamic_inventory.py07:13
recycleheroif it isn't there it will make one07:13
noonedeadpunkyep07:14
noonedeadpunkbut it will use different container names07:14
noonedeadpunkwhich will be really bad if you have containers already running07:15
noonedeadpunkthere's also /etc/openstack_deploy/openstack_hostnames_ips.yml which stores the hostname-to-IP mapping07:15
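A minimal sketch of the workflow described above, assuming a standard /opt/openstack-ansible checkout (the container name in the last command is illustrative):

    # destroy only the heat containers
    cd /opt/openstack-ansible/playbooks
    openstack-ansible lxc-containers-destroy.yml --limit heat_all

    # inspect the generated inventory, then forget the stale entries
    ../scripts/inventory-manage.py -l
    ../scripts/inventory-manage.py -r infra1_heat_api_container-0a1b2c3d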
jrosserrecyclehero: if you are trying to not deploy heat then the best thing to do is not define orchestration_hosts in openstack_user_config07:16
jrosseradjusting env.d should not be needed for that07:16
noonedeadpunk(it may be in conf.d as alternative to openstack_user_config)07:17
recycleherojrosser: I used os-infra_hosts07:17
jrosserenv.d says what is going where07:17
jrosseropenstack_user_config and conf.d say what you have / have not got07:18
jrosserso layout vs. presence07:18
noonedeadpunkjrosser: heat is part of os-infra_hosts I think because of that https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/env.d/heat.yml#L32 07:19
recycleheronoonedeadpunk: exactly, that's what I have commented out07:19
noonedeadpunkso I'd say worth replacing os-infra_hosts with specific groups07:20
recycleherojrosser: I dont know how to use conf.d yet07:20
jrosserisn't it this (for AIO) which makes those exist though https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/conf.d/heat.yml.aio 07:21
noonedeadpunkrecyclehero: so, instead of editing env.d I'd probably just define openstack_user_config this way: https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/openstack_user_config.yml.pod.example#L327 07:22
noonedeadpunkinstead of setting os-infra_hosts, which includes the set needed for passing refstack (I guess that was the idea behind including heat)07:23
*** MickyMan77 has quit IRC07:24
jrosseryeah i see we don't make this totally clear07:24
jrosseros-infra_hosts is a group including a bunch of stuff07:25
noonedeadpunkyep07:25
jrosserbut nothing stops you individually defining the hosts for each service07:25
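A rough sketch of what individually defining the hosts per service can look like in openstack_user_config.yml instead of using os-infra_hosts (host name and IP are illustrative; leaving out orchestration_hosts means no heat containers are created):

    image_hosts:
      infra1:
        ip: 172.29.236.11
    compute-infra_hosts:
      infra1:
        ip: 172.29.236.11
    dashboard_hosts:
      infra1:
        ip: 172.29.236.11
    # no orchestration_hosts entry, so heat is skipped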
*** maharg101 has joined #openstack-ansible07:33
recycleheroI destroyed all lxc containers last night and let setup-hosts/infrastructure/openstack run. Now I am getting connection refused errors all over from rabbitmq07:41
recycleherowhy could this be?07:41
recycleheroI wanted to do some tuning and also remove heat like I said07:44
recycleheroI expected not to see the neutron-metering agent, but I am seeing it in between other connection refused errors07:45
recycleherohttp://paste.openstack.org/show/798892/07:45
*** tosky has joined #openstack-ansible07:46
gillesMoHello, I already asked without success, but has anyone used the ceph-rgw-install.yml playbook in Stein? Ansible refuses the always_run option07:52
*** gillesMo has left #openstack-ansible07:54
*** gillesMo has joined #openstack-ansible07:54
noonedeadpunkgillesMo: I think we had it in CI...08:31
noonedeadpunkgillesMo: but I can't find where always_run is used there08:35
noonedeadpunkwe don't have `always_run` in the playbook itself, and the ceph-ansible version we use doesn't have it either08:36
noonedeadpunk(on stable-3.2 branch which is used for stein)08:36
ebbexI'm getting this from cinder-volume; Error starting thread.: cinder.exception.ClusterNotFound: Cluster {'name': 'ceph@RBD'} could not be found.08:43
*** nurdie has joined #openstack-ansible08:45
ebbexis this a rabbitmq queue thing gone missing?08:46
recycleheroI am doing an evaluation so I don't f... up in production. My openstack_inventory.json and openstack_hostnames_ips.yml are completely out of sync with what's there. What can I do to delete the containers?08:48
*** nurdie has quit IRC08:50
gillesMonoonedeadpunk: Have the roles been changed? Perhaps I have old ones in /etc/ansible/roles, I'll check08:50
noonedeadpunkgillesMo: you specifically need the ceph-ansible repo to be at origin/stable-3.208:51
noonedeadpunkrecyclehero: well, if you've dropped openstack_inventory.json before dropping containers - you can remove them now only manually08:52
noonedeadpunkas ansible does not know anything about current containers08:52
recycleherothat's what I thought. What about IPs? What will happen to them?08:53
noonedeadpunkebbex: I can recall facing this but can't recall how I've fixed it...08:53
ebbexnoonedeadpunk: https://bugs.launchpad.net/openstack-ansible/+bug/1877421 08:53
openstackLaunchpad bug 1877421 in openstack-ansible "Cinder-volume is not able to recognize a ceph cluster on OpenStack Train." [Undecided,New]08:53
ebbexI can't remember having deployed stable/train directly before, always upgraded from a previous release.08:54
noonedeadpunkebbex: well I deployed08:55
noonedeadpunkLet me check what I have in vars08:56
ebbexso never encountered this before, the others on this bug-report claim their config worked on stein.08:56
noonedeadpunkwell, I think the point here is that backend name != 'ceph'08:57
noonedeadpunkas 'ceph' is kind of reserved name for cluster08:57
noonedeadpunkbut according to bug it worked as well08:58
noonedeadpunkebbex: can you share your cinder config?09:00
gillesMonoonedeadpunk: For info, I'm not deploying ceph, just using it. I only have client config (mon IPs, rgw config, a cinder override to use EC pools with specific user defaults)09:01
ebbexnoonedeadpunk: http://paste.openstack.org/show/0bxC0x8n5GcoiajxFnW4/09:02
gillesMoDo I need to remove all the ceph roles I have in /etc/ansible/roles and define the ceph-ansible path somewhere?09:03
gillesMoI haven't seen something about that in the (Stein) release notes09:04
noonedeadpunkgillesMo: you should go to /etc/ansible/roles/ceph-ansible and try to use git pull or git reset origin/stable-3.2 --hard09:04
gillesMonoonedeadpunk: I also have several roles like ceph-defaults, ceph-osd, ceph-mons, which are certainly older versions of what is now under ceph-ansible/roles?09:07
noonedeadpunkebbex: hm, I have a really similar one and it is working....  http://paste.openstack.org/show/798900/09:09
noonedeadpunkoh, well09:09
noonedeadpunkebbex: try adding `backend_host = rbd` to each backend09:09
noonedeadpunkor did the pasted one work for you?09:20
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: [doc] Define backend_host for Ceph backends  https://review.opendev.org/757043 09:32
ebbexnoonedeadpunk: adding `backend_host = rbd` seems to have worked.09:39
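For reference, a hedged sketch of what the doc patch proposed just above suggests, expressed as an OSA-style cinder_backends definition (backend, pool and user names are illustrative):

    cinder_backends:
      RBD:
        volume_driver: cinder.volume.drivers.rbd.RBDDriver
        rbd_pool: volumes
        rbd_ceph_conf: /etc/ceph/ceph.conf
        rbd_user: cinder
        volume_backend_name: RBD
        # active/passive style: all cinder-volume services present one host name
        backend_host: rbd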
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_barbican master: Cleanup stop handler  https://review.opendev.org/756689 09:39
ebbexWhich happens to be the last change commited to os_cinder on train, https://review.opendev.org/#/c/672078/09:42
recycleheroAny changes to the containers must also be reflected in the deployment’s load balancer.09:43
recycleherohttps://docs.openstack.org/openstack-ansible/latest/reference/inventory/manage-inventory.html09:43
recycleherobut how?09:43
noonedeadpunkwell, you will need to run haproxy-install.yml playbook09:43
recycleheroor setup-openstack.yml09:44
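A quick sketch of the playbook runs being referred to (standard /opt/openstack-ansible paths assumed):

    cd /opt/openstack-ansible/playbooks
    # refresh the load balancer configuration after containers change
    openstack-ansible haproxy-install.yml
    # or re-run the whole chain after recreating containers
    openstack-ansible setup-hosts.yml
    openstack-ansible setup-infrastructure.yml
    openstack-ansible setup-openstack.yml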
recycleheroI will now try to delete the containers manually and also delete the inventory json and ips yaml, then try to redeploy07:45
noonedeadpunkebbex: hm, I can recall such discussion...09:45
ebbexnoonedeadpunk: I wonder how that discussion went ;)09:49
noonedeadpunkebbex: here you are http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2019-07-22.log.html#t2019-07-22T14:03:11 09:51
ebbexAdding backend_host=rbd to the config on just 1 node fixed the issue with cinder-volume on two other hosts...09:53
noonedeadpunkhm....09:53
noonedeadpunkhttps://docs.openstack.org/cinder/latest/contributor/high_availability.html#cinder-volume 09:54
noonedeadpunk`The name of the cluster must be unique and cannot match any of the host or backend_host values. Non unique values will generate duplicated names for message queues.`09:55
noonedeadpunkwell, at least I recalled that the cluster name and backend name can't be the same09:56
jrosseri seem to remember someone else having a very similar thing possibly with the name 'rbd'09:58
jrosserand it all started working when changing the name09:58
jrosserthis may be something we could add an assert for in the cinder role09:59
noonedeadpunkebbex had `RBD`, and in the bug report ppl claimed that changing to `RBD` worked for them10:01
noonedeadpunkso could it be the point to just rename backend lol?:)10:02
jrosserwe'd have to look back in eavesdrop but we had an OSA user having terrible difficulty with this10:02
noonedeadpunkwell, I read it through, and patch seems valid10:06
noonedeadpunkwould be interesting to reproduce that10:08
gillesMo[OSA pb with ceph playbook] I removed all the ceph roles in /etc/ansible/roles and relaunched the bootstrap-ansible script. There are now only the ceph_client and ceph-ansible roles. That was the problem. The bootstrap script or the release notes should mention that we must remove the old ceph roles10:16
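A small sketch of the cleanup being described, assuming the roles live under /etc/ansible/roles and the Stein-era ceph-ansible branch is stable-3.2 as mentioned earlier (the exact list of stale role directories may differ):

    # remove standalone ceph-* roles left over from an older release
    cd /etc/ansible/roles
    rm -rf ceph-defaults ceph-osd ceph-mon

    # or reset the bundled ceph-ansible checkout to the expected branch
    cd /etc/ansible/roles/ceph-ansible
    git fetch origin && git reset --hard origin/stable-3.2

    # then re-run the bootstrap to restore the expected role set
    cd /opt/openstack-ansible && ./scripts/bootstrap-ansible.sh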
recycleheroWhen I deleted the containers, obviously haproxy started complaining there are no backends and broadcasted it.10:17
recycleheroI found it annoying and stopped it10:17
recycleherorunning setup-infrastructure doesn't start the service?10:17
ebbexnoonedeadpunk: thanks for the read :)10:24
noonedeadpunkwell, it seems that setting backend_host was a bad call after all10:25
recyclehero (item.service.state is defined and item.service.state == 'absent')10:27
recycleheroin haproxy service config10:27
ebbexso, the weird thing is i've since removed the line, restarted the services, and everything seems fine.10:27
noonedeadpunkum10:28
recycleherowhats item.service.state10:28
noonedeadpunkebbex: any chance you can drop volume containers along with cinder db and re-deploy them?:)10:28
noonedeadpunkjust really would be great to understand how this should be fixed for ral...10:29
noonedeadpunk*real10:29
noonedeadpunkbut whatever, I think I can spawn aio with multiple cinder containers on it10:29
ebbexso something might be created during the active/passive `backend_host` thing that's not getting created with the active/active `cluster` option.10:29
noonedeadpunkit looks like this yes10:30
ebbexyeah, i can try a redeploy in an hour or two.10:30
noonedeadpunkmaybe it's really better to spawn an aio and not play around in your deployment10:45
*** nurdie has joined #openstack-ansible10:47
*** nurdie has quit IRC10:51
*** mensis has joined #openstack-ansible10:58
*** ioni has quit IRC11:02
*** fridtjof[m] has quit IRC11:02
*** csmart has quit IRC11:02
*** masterpe has quit IRC11:02
*** mensis21 has joined #openstack-ansible11:09
*** mensis21 has quit IRC11:10
*** mensis2 has joined #openstack-ansible11:10
ebbexnoonedeadpunk: my deployment is for fun/testing features, it gets redeployed once it's up and running. Which it finally is :)11:11
*** ioni has joined #openstack-ansible11:11
ebbexJust gonna add some stuff, and try upgrade to ussuri.11:11
*** mensis has quit IRC11:12
*** lkoranda has joined #openstack-ansible11:14
*** watersj has joined #openstack-ansible11:21
*** jamesdenton has quit IRC11:34
*** fridtjof[m] has joined #openstack-ansible11:37
*** masterpe has joined #openstack-ansible11:37
*** csmart has joined #openstack-ansible11:37
*** jamesdenton has joined #openstack-ansible11:40
*** jbadiapa has joined #openstack-ansible11:45
*** sum12 has quit IRC11:57
*** lkoranda has quit IRC11:58
*** lkoranda has joined #openstack-ansible12:01
*** cshen has quit IRC12:13
*** itsjg has quit IRC12:18
*** nurdie has joined #openstack-ansible12:48
*** nurdie has quit IRC12:52
openstackgerritMerged openstack/openstack-ansible master: Convert lxc2 config keys to lxc3 format  https://review.opendev.org/756244 12:55
*** nurdie has joined #openstack-ansible13:00
*** rpittau is now known as rpittau|afk13:03
*** nurdie has quit IRC13:15
*** mmethot_ is now known as mmethot13:15
*** cshen has joined #openstack-ansible13:21
noonedeadpunkebbex: if you're somewhere around, it would be great to have a vote on this https://review.opendev.org/#/c/755497 13:26
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Remove glance_registry from inventory  https://review.opendev.org/756318 13:32
openstackgerritMerged openstack/openstack-ansible-lxc_hosts stable/train: Updated from OpenStack Ansible Tests  https://review.opendev.org/690342 13:48
ebbexThis letsencrypt part of haproxy: how bad is it (in regard to the limits imposed by letsencrypt) when you're ordering a certificate for the same domain on each haproxy node?13:56
noonedeadpunkwell, we don't ask for each)13:56
noonedeadpunkwe ask for the first one and then distribute among others iirc13:56
ebbexThat's what I would expect as well, but my deployment creates an account and fires up certbot on each node with its own request/response. Have I missed a patch?13:59
ebbexError creating new order :: too many certificates already issued for exact set of domains.13:59
noonedeadpunkwell, it's probably for renewal?14:00
ebbexhttps://github.com/openstack/openstack-ansible-haproxy_server/blob/master/tasks/main.yml#L46-L50 14:00
ebbexThat doesn't look like a run-once-and-distribute-to-others setup...14:01
ebbexNor does anything in https://github.com/openstack/openstack-ansible-haproxy_server/blob/master/tasks/haproxy_ssl_letsencrypt.yml14:03
noonedeadpunkhm yes14:03
jrosserif you're just messing then you should use --staging14:03
ebbexShould I perhaps take a stab at implementing it?14:03
jrosserthen there is no rate limit14:03
jrossereffectively14:03
jrosserthe idea was to not do any distribution because it is hard14:04
noonedeadpunk`There is no certificate distribution implementation at this time, so this will only work for a single haproxy-server environment. `14:04
jrosserhold on14:04
jrosserpreviously there was a patch for LE which worked only non-HA14:04
jrosserwhat i did was make it work HA with an independent certbot on each haproxy14:05
jrosserit would be possible to improve it so that there is only one account used across all of them14:05
jrosserbut i don't think that changes the issuance limits at all14:05
openstackgerritDmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Simplify path for letsencrypt usage  https://review.opendev.org/751327 14:06
ebbexWell, I actually like the security-aspect of the keys not leaving the server. So having multiple made sense.14:06
ebbexDistribution could be something a little more complicated, but along the lines of this; https://github.com/eb4x/openstack-playbooks/blob/master/openstack-ansible/deployment-key.yml14:06
jrosserfor acoount keys?14:07
jrosser*account14:07
noonedeadpunkebbex: I mixed that up with self-signed yeah https://opendev.org/openstack/openstack-ansible-haproxy_server/src/branch/master/tasks/haproxy_ssl_self_signed.yml14:07
ebbexjrosser: Yeah, account keys, and the key used in the csr.14:08
jrosserwe'd need a way of initialising it, which i guess could be done on the first haproxy14:09
jrosserthen the relevant keys recovered to the deploy host and moved across to the remaining ones14:09
jrosserthe only benefit from a consistent account seemed to be if you were a really heavy user you could ask for the issuance limit to be increased for a specific account14:10
ebbexletsencrypt seems to have a limit of 5 requests for cert for the same domain, and it went fine the first time around, 3 accounts generated, 3 separate challenge/responses issued, the haproxy registers when a backend goes up and forwards correctly.14:10
ebbexbut if it's not this way by design, I'll be willing to look into the distribution of accounts/keys/certs.14:12
jrosseri guess you are running into LE limiting to 5 duplicate certs per week for the same exact domain14:13
jrosserwhich for a production deployment will be fine14:14
jrosserfor any testing i'd really be using this https://letsencrypt.org/docs/staging-environment/14:14
jrosseras a workaround for now i think you may be able to add an additional -d <another-domain> to the certbot command and you get another 5 issuances14:15
ebbexjrosser: hehe, noice :)14:15
ebbexYeah, cause I kinda like it the way it is, just dumb of me to request new ones on a fresh deploy.14:17
ebbexI'll look into using staging.14:17
jrosserebbex: i wonder if we had -d <openstack.example.com> and -d <haproxy<N>.example.com> as a SAN you'd be requesting non-unique certs per haproxy14:17
jrosserthat does maybe assume DNS records and an IP per haproxy on the public side though14:18
jrossersorry *unique* certs per haproxy14:18
ebbexprobably just wants the dns entries to be there; pointing all <haproxy<N>.example.com> to the same external_lb_vip might work fine as well.14:21
jrosserthe other thing to be mindful of is every cert you issue with certbot gets recorded in the certificate transparency logs14:21
jrosserthat may be something you do or don't care about being public14:21
*** nurdie has joined #openstack-ansible14:22
jrosserthe LE staging endpoint keeps your dev/test environment activity out of those logs14:23
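A hedged sketch of the certbot invocation being discussed (domains and e-mail are placeholders; --staging avoids the production duplicate-certificate limit, and the extra -d changes the exact set of domains so it no longer counts as a duplicate):

    certbot certonly --standalone --non-interactive --agree-tos \
      -m admin@example.com \
      --staging \
      -d openstack.example.com \
      -d haproxy1.example.com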
openstackgerritMerged openstack/openstack-ansible master: Use nodepool epel mirror in CI for systemd-networkd package  https://review.opendev.org/754706 14:32
*** d34dh0r53 has quit IRC15:07
*** d34dh0r53 has joined #openstack-ansible15:20
*** macz_ has joined #openstack-ansible15:41
*** d34dh0r53 has quit IRC15:42
*** maharg101 has quit IRC15:50
*** d34dh0r53 has joined #openstack-ansible16:01
*** d34dh0r53 has quit IRC16:07
*** d34dh0r53 has joined #openstack-ansible16:07
*** lkoranda has quit IRC16:24
*** gyee has joined #openstack-ansible16:32
*** tosky has quit IRC16:35
*** maharg101 has joined #openstack-ansible16:38
recycleheroi am getting this log at a crazy rate16:42
recyclehero[ValueError("unsupported format character '{' (0x7b) at index 1")] _periodic_resync_helper /openstack/venvs/neutron-21.0.1/lib/python3.7/site-packages/neutron/agent/dhcp/agent.py:32116:42
recycleheroany ideas16:43
*** maharg101 has quit IRC16:45
*** recyclehero has quit IRC16:53
*** recyclehero has joined #openstack-ansible17:10
jrosserrecyclehero: that would be here https://github.com/openstack/neutron/blob/stable/ussuri/neutron/agent/dhcp/agent.py#L320-L321 17:25
jrosserand the error suggests that the network name has a '{' character in it, which suspiciously points to something being templated with a '{' in it, maybe in a neutron config file17:25
*** jbadiapa has quit IRC17:56
recycleherothank you jrosser18:00
recycleherojrosser: I have added this to user_variables and this error came up. Do you see anything wrong here? I need an extra pair of eyes18:05
recycleherohttp://paste.openstack.org/show/798892/18:05
recycleheroalso I deleted all the containers manually, then ran setup-hosts/infra/openstack. After that, when I sign in as admin I only have the admin project and no service project anymore18:08
*** spatel has joined #openstack-ansible18:19
spateljrosser: hey18:20
spatelDo you have both br-host and br-mgmt interfaces in your cloud?18:20
spatelor just br-mgmt (routed?)18:21
jrosserjust br-mgmt, routed18:21
jrosserbut also each host has a 1G oob interface18:21
jrosserfor ssh and whatever18:21
spatelcurrently i have br-host and br-mgmt both and br-host is routed18:22
spatelproblem is i can't ssh or connect directly to br-mgmt18:22
jrosserdo you need to? i can't do that either btw18:22
spateli have to ssh via br-host to get access to the br-mgmt network18:22
spatelI am building a new cloud so I'm thinking to remove br-host and just stay on br-mgmt (routed)18:23
jrosseri do have web reverse proxy from mgmt net to 'internal' network for visiting ceph dashboard and things18:23
spateli have 10G+10G bonded interface so enough speed18:23
spateli like the br-mgmt-only idea (fewer interfaces, so less complexity)18:24
jrosserso long as you can pxeboot sensibly - i have some hosts with only 10G+10G bond and that was tricky18:25
spatelwhy tricky?18:25
jrosserneed careful config on the switch to make it fall back out of lacp mode correctly when the bond isn't up (i.e. during boot)18:25
spateli do native VLAN for pxe traffic18:26
spatelI do pxe boot and then I have a custom bash script which takes a compute number (e.g. setup-compute.sh 123)18:27
jrossersee 'no lacp suspend-individual'18:27
spatelthat script sets the hostname to compute-123, and also uses the same compute number to set an IP address like 10.71.0.12318:27
spatelhmm18:29
spatelwe have an HP7000 blade center and those switches don't support vPC so i am running active-standby on my bond018:30
spatelHP7000 blade switches connected to my cisco nexus vPC in spine-leaf18:30
*** miloa has quit IRC18:31
spateljrosser: quick question: if you are using a br-mgmt-only interface, then what do you set as the haproxy external and internal IPs?18:31
*** tosky has joined #openstack-ansible18:32
*** maharg101 has joined #openstack-ansible18:43
*** maharg101 has quit IRC18:47
recycleheroUnable to sync network state on deleted network 1291c921-d97e-4444-bf11-c277d6243ec918:49
recycleheroI deleted every container and redeployed. I don't know how it found a deleted network. I would say rabbitmq, but its container was deleted too18:49
recycleheroI want to start clean without reinstalling the host OS.18:50
recycleheroneutron is on metal aha18:51
recycleherodoes it store state anywhere else besides galera, which I have also deleted?18:51
ebbexnoonedeadpunk: So I couldn't reproduce the cinder-volume issue on a new deployment :/18:53
*** cshen has quit IRC19:01
*** cshen has joined #openstack-ansible19:04
*** SecOpsNinja has joined #openstack-ansible19:45
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_keystone master: Do not manage /etc/ssl or /etc/pki directories or symlinks  https://review.opendev.org/754092 19:59
spateljrosser: Do you guys have multiple openstack clouds or a single cloud?20:01
jrosserhello :)20:02
jrosseri have two production clouds right now20:02
spatelI am looking for a multi-site openstack solution (not sure how to sync keystone)20:02
spateljrosser: how do you sync keystone or are you running keystone in federation ?20:02
jrosseri went with 'shared nothing'20:02
spateloh!20:03
jrosserand federation to external identity provider, SSO20:03
spateli am looking for a unified openstack cloud where we can just pick a region to deploy vms20:04
spatelcurrently i have isolated clouds so i have to create the foo user everywhere20:04
jrossermy use case is for engineering / r&d users20:05
jrossercompany SSO provides auth to keystone and keystone mappings auto-create a project for each user when they first log in20:06
jrosserthen I have a git repo and ansible for shared projects and groups/quotas20:06
jrosserbut there are many many ways to do this20:07
spatelhmm20:07
jrossersome you can share the keystone if you want, depends what you need to achieve really20:07
spatelI want to sync keystone between two geo datacenters so my projects/roles/users all get replicated20:08
spatelI may need to look into that side.. i will run some tests in the lab to see how it goes20:09
jrossermaybe start here? https://www.slideshare.net/Colleen_Murphy/bridging-clouds-with-keystone-to-keystone-federation-143258350 20:10
jrosserlike i say lots and lots of choices here20:10
spatellooking promising, i will try that in lab20:12
spatelthank you jrosser20:15
jrosserno worries - AIO is good for testing some of this federation stuff, don’t need a bit deploy for that20:16
jrosser*big20:16
SecOpsNinjahi. how can we enable automatic backups of mariadb?20:18
SecOpsNinjawhen i tried to add more infra nodes to my galera cluster (which only had 1 node) it didn't start normally because it couldn't connect to the other nodes. now i was able to start it using the recover option and mariadbd-safe --wsrep_cluster_address=gcomm://  but now i would like to make a backup before trying to add the next host with the ansible playbooks20:19
SecOpsNinjabut i can't find the correct information regarding backing up the galera db20:20
*** SmearedBeard has joined #openstack-ansible20:26
spatelSecOpsNinja: it should be the standard method, mysqldump --20:28
spateli don't think galera has any special method to take backups. Just dump the full database including users/passwords/permissions etc..20:29
SecOpsNinjaspatel, ok i will check. the problem is that i don't understand what i did wrong because i was following this information https://docs.openstack.org/openstack-ansible/latest/admin/scale-environment.html to add new infra nodes. now my first node has the information regarding the other 3 nodes, but in the other galera containers mariadb isn't installed ....20:30
SecOpsNinjai'm trying to see what is the best way to recover the mariadb data if i need to recreate all the galera containers20:31
spateldid you see this - https://docs.openstack.org/openstack-ansible/newton/developer-docs/ops-galera-recovery.html20:33
spateli would say take a full backup first and also restore it somewhere just to verify it's working (if this is a first installation then there's nothing to worry about)20:34
SecOpsNinjai was checking this ... https://docs.openstack.org/openstack-ansible/latest/admin/maintenance-tasks.html#galera-cluster-recovery 20:34
SecOpsNinjabut it only talks about recovery and doesn't say how... i will check that link20:35
spatelyou are saying mysql isn't get installed on container right?20:37
SecOpsNinjaon the second and third, no20:37
spatelyou didn't get any error during playbook run?20:38
SecOpsNinjabecause openstack-ansible crashed when it tried to start mariadb on the 1st node; it failed because it couldn't connect to any other nodes and from what i understand it didn't have quorum to start20:38
SecOpsNinjaso i used "mariadbd --wsrep-recover --skip-networking" and "mariadbd-safe --wsrep_cluster_address=gcomm://" to be able to start it on the first node20:38
spatelyou need to bootstrap galera to get everything running20:38
spatelthe playbook takes care of everything, you don't need to do anything manually20:39
SecOpsNinjayep, but my problem is whether, with mariadbd-safe running, the playbook will continue to run and configure the other nodes correctly20:40
spatelsometimes i have seen issues with /var/lib/mysql/grastate.dat (i mostly move this file out)20:40
spatelfirst you need to find out why the playbook is not installing mysql on node-2 and 320:41
spateli would start from there before troubleshooting node-120:41
SecOpsNinjayep, but this https://docs.openstack.org/openstack-ansible/latest/admin/backup-restore.html doesn't say much about recovering the whole galera cluster if i need to recreate all the nodes20:42
SecOpsNinjaso i will try to backup mariadb and see if i can save some data20:42
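A minimal sketch of the backup and, if needed, the re-bootstrap being discussed (the dump is taken on the node that still has the data; the galera-bootstrap tag is what the OSA maintenance docs suggest iirc, so it is worth double-checking against the deployed release):

    # logical backup of all databases, run inside the galera container
    mysqldump --all-databases --single-transaction --routines --events \
      > /root/galera-backup-$(date +%F).sql

    # from the deploy host, let the playbook re-bootstrap the cluster
    cd /opt/openstack-ansible/playbooks
    openstack-ansible galera-install.yml --tags galera-bootstrap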
jrosserSecOpsNinja: 1 infra node to >1 infra node you have to get keepalived handling the vip20:43
jrossermake sure thats all working properly first20:43
SecOpsNinjayep i already have that working20:43
*** maharg101 has joined #openstack-ansible20:43
spatelHave a good weekend guys! see you on Monday20:44
recycleherois there a way to uninstall neutron?20:44
SecOpsNinjaspatel,  thanks for the help :)20:44
*** spatel has quit IRC20:45
*** maharg101 has quit IRC20:48
*** mensis2 has quit IRC21:04
*** SmearedBeard has quit IRC21:06
*** nurdie has quit IRC21:31
*** SecOpsNinja has left #openstack-ansible21:43
*** nurdie has joined #openstack-ansible22:44
*** nurdie has quit IRC22:50
*** macz_ has quit IRC23:06
*** tosky has quit IRC23:26

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!