Tuesday, 2015-05-05

*** sdake has quit IRC  [00:03]
*** stevemar has joined #openstack-ansible  [00:06]
*** galstrom_zzz is now known as galstrom  [00:19]
*** BjoernT has quit IRC  [00:31]
*** stevemar has quit IRC  [00:32]
*** daneyon_ has quit IRC  [00:55]
*** darrenc is now known as darrenc_afk  [01:40]
*** darrenc_afk is now known as darrenc  [01:52]
*** galstrom is now known as galstrom_zzz  [01:58]
*** sigmavirus24 is now known as sigmavirus24_awa  [02:33]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Updated for the 11.0.1 tag  https://review.openstack.org/179906  [02:38]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Allow configuration of Nova SQLAlchemy options  https://review.openstack.org/178870  [02:41]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Updated RabbitMQ to the new release version  https://review.openstack.org/178230  [02:42]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Updated RabbitMQ to the new release version  https://review.openstack.org/178230  [02:43]
*** galstrom_zzz is now known as galstrom  [02:49]
*** JRobinson__ is now known as JRobinson__afk  [03:22]
*** JRobinson__afk is now known as JRobinson__  [03:33]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Set the neutron default workers  https://review.openstack.org/180020  [03:37]
*** stevemar has joined #openstack-ansible  [03:43]
*** galstrom is now known as galstrom_zzz  [03:56]
*** galstrom_zzz is now known as galstrom  [04:17]
*** sdake_ has quit IRC  [04:38]
*** galstrom is now known as galstrom_zzz  [04:49]
*** sdake has joined #openstack-ansible  [04:57]
*** JRobinson__ has quit IRC  [05:49]
*** openstackgerrit has quit IRC  [06:23]
*** openstackgerrit has joined #openstack-ansible  [06:23]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Update the tempest install environment  https://review.openstack.org/180044  [07:06]
*** sandywalsh has quit IRC  [07:11]
*** sandywalsh has joined #openstack-ansible  [07:12]
*** stevemar has quit IRC  [07:22]
*** JRobinson__ has joined #openstack-ansible  [08:14]
<openstackgerrit> Serge van Ginderachter proposed stackforge/os-ansible-deployment: [WIP] first shot at implementing Ceph/RBD support  https://review.openstack.org/173229  [09:09]
*** JRobinson__ has quit IRC  [09:44]
*** markvoelker has quit IRC  [09:48]
<openstackgerrit> Matt Thompson proposed stackforge/os-ansible-deployment: Update the tempest install environment  https://review.openstack.org/180044  [10:26]
<openstackgerrit> Matt Thompson proposed stackforge/os-ansible-deployment: Update the tempest install environment  https://review.openstack.org/180044  [10:28]
<openstackgerrit> Matt Thompson proposed stackforge/os-ansible-deployment: Update the tempest install environment  https://review.openstack.org/180044  [10:37]
*** sandywalsh has quit IRC  [11:12]
*** sandywalsh has joined #openstack-ansible  [11:14]
*** markvoelker has joined #openstack-ansible  [11:51]
<openstackgerrit> Matt Thompson proposed stackforge/os-ansible-deployment: Update the tempest install environment  https://review.openstack.org/180044  [11:54]
*** galstrom_zzz is now known as galstrom  [11:59]
*** galstrom is now known as galstrom_zzz  [12:53]
*** KLevenstein has joined #openstack-ansible  [13:00]
<cloudnull> mornings  [13:00]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Update the tempest install environment  https://review.openstack.org/180044  [13:51]
<cloudnull> mattt ^ same changes you made, however on a local AIO test i found that if the reqs are moved to the build server everything works while also not adding additional installed pkgs on the deployment host.  [13:52]
<cloudnull> and i figured while all of gating seems jammed up this was a good time to make the change.  [13:53]
<mattt> cloudnull: dang, didn't realise that was necessary, thanks !  [13:53]
<mattt> (s/necessary/possible/)  [13:54]
<cloudnull> its not really necessary as in it didn't work before . what you had was 100% functional, It just added deps to the deployment host. i couldve left it alone (sorry for being OCD) :\ .  [13:55]
<mattt> cloudnull: now that i think about it, do we need to explicitly install those in the heat_engine container ?  [13:55]
*** Mudpuppy has joined #openstack-ansible  [13:55]
<mattt> cloudnull: no that makes perfect sense, i didn't like having to install those on teh deployment host  [13:55]
<cloudnull> no. theyll be build as part of the heat deps / wheels  [13:55]
<mattt> k  [13:55]
<cloudnull> pbr makes me sad , in a lot of ways.  [13:56]
<mattt> i'm trying to understand what we need to do w/ https://review.openstack.org/#/c/179671/  [13:58]
<mattt> seems the heat devs are reluctant to specify versions  [13:58]
<mattt> which results in a wheel that isn't functional  [13:58]
*** fawadkhaliq has joined #openstack-ansible  [13:59]
<cloudnull> well we'd have to change our setup to make that happy.  [14:00]
<cloudnull> we build the wheel using the setup.py  [14:00]
<cloudnull> however when the contrib plugin is deployed we clone and install using pip.  [14:00]
<cloudnull> so we could remove the clone part and simply install the built wheel.  [14:01]
<mattt> oh derp  [14:01]
<mattt> i thought that's what we were doing  [14:01]
<cloudnull> but thats a change on our part.  [14:01]
* cloudnull double checking.  [14:01]
<cloudnull> https://github.com/stackforge/os-ansible-deployment/blob/master/playbooks/roles/os_heat/tasks/heat_install_plugins.yml  [14:01]
<mattt> whatever we do isn't working  [14:01]
<mattt> but if i remove and manually install with pip it looks OK  [14:02]
*** sigmavirus24_awa is now known as sigmavirus24  [14:03]
<cloudnull> yea, i think if we just specifiy the package name without using the cloned source , we should be ok.  [14:03]
<cloudnull> which i think is "heat_contrib_extraroute"  [14:03]
* cloudnull looking  [14:03]
<mattt> cloudnull: yep  [14:03]
<mattt> because it's getting built as a wheel so that whole task file is not needed  [14:04]
<cloudnull> yup.  [14:04]
<cloudnull> just the entry item in the pip list  [14:04]
<cloudnull> we could make the change now and have it depend on https://review.openstack.org/180044 because we know its coming in some form or another.  [14:05]
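The change discussed above — dropping the git clone in heat_install_plugins.yml and installing the pre-built wheel by package name — could look roughly like the sketch below. The task name and the "repo_pip_index" variable are assumptions for illustration, not the contents of the actual review:

```yaml
# Hypothetical sketch only: install the plugin wheel by name from the repo's
# pip index instead of cloning the heat contrib source tree.
# "repo_pip_index" is an assumed variable name.
- name: Install heat extraroute plugin from the built wheel
  pip:
    name: heat_contrib_extraroute
    extra_args: "--index-url {{ repo_pip_index }}"
```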
<mattt> i dislike this version tho: heat-contrib-extraroute==2015.1.0rc1.post224  [14:07]
<mattt> that's really misleading  [14:07]
<sigmavirus24> mattt: it is  [14:07]
<cloudnull> it is indeed.  [14:07]
<mattt> cloudnull: i'll put together the PR tho and we can have it there incase we want to put it through  [14:08]
<cloudnull> +1  [14:08]
<sigmavirus24> Anyone else still having problems with testrepository: http://logs.openstack.org/29/178429/9/check/os-ansible-deployment-dsvm-check-commit/cf9aeba/logs/ansible-logging/ansible.log ? I rebased onto cloudnull's review from yesterday (which merged already) and I'm still seeing failures  [14:22]
<sigmavirus24> Oh testscenarios too now  [14:23]
<mattt> and fixtures  [14:25]
<cloudnull> sigmavirus24 the trail of fail is strong these last few days  [14:29]
<sigmavirus24> Yep  [14:30]
<cloudnull> mattt found the extra pins that we needed and added them to https://review.openstack.org/180044  [14:30]
<cloudnull> all because pbr > 0.11.0  [14:30]
<cloudnull> and the fact that we are using a heat contrib plugin  [14:31]
<cloudnull> which turns out is not really anything that people in the heat community want to maintain...  [14:31]
<cloudnull> :\  [14:31]
</cloudnull>
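The "extra pins" workaround amounts to capping pbr and the packages it pulls in. The exact bounds below are an assumption patterned on the kilo-era OpenStack global requirements, not a quote from the review under discussion:

```
# Assumed pin, for illustration only -- the real pins landed in review 180044:
pbr>=0.6,!=0.7,<1.0
```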
<openstackgerrit> Matt Thompson proposed stackforge/os-ansible-deployment: Install heat-contrib-extraroute wheel  https://review.openstack.org/180170  [14:33]
<sigmavirus24> cloudnull: yeah, that stuff is just disappointing  [14:43]
<sigmavirus24> If you don't want to maintain it move it to the -archive section of gerrit  [14:43]
<sigmavirus24> release a last packaged version of it and then mark it as deprecated  [14:43]
<sigmavirus24> If people want to maintain it, they'll step up to do so  [14:43]
*** sdake_ has joined #openstack-ansible  [14:43]
*** yaya has joined #openstack-ansible  [14:47]
*** sdake has quit IRC  [14:47]
<cloudnull> +1  [14:52]
<cloudnull> i think that those contrib packages should be moved into stackforge and be pulled into mainline heat due to some method of promotion  [14:52]
<sigmavirus24> cloudnull: so you're volunteering to write a heat spec for it? Cool =P  [14:53]
<cloudnull> sure.  [14:53]
<cloudnull> in a week or two  [14:53]
<cloudnull> im all over it  [14:53]
<sigmavirus24> :D  [14:57]
<sigmavirus24> b3rnard0: ^  [14:57]
* b3rnard0 adds it as an action item  [15:01]
*** sdake_ is now known as sdake  [15:01]
*** erikmwilson is now known as Guest32970  [15:04]
*** Guest32970 has quit IRC  [15:04]
*** erikmwilson_ has joined #openstack-ansible  [15:04]
*** galstrom_zzz is now known as galstrom  [15:04]
*** jaypipes has quit IRC  [15:13]
<sigmavirus24> b3rnard0++  [15:18]
*** jwagner_away is now known as jwagner  [15:36]
*** sdake has quit IRC  [15:37]
*** sdake has joined #openstack-ansible  [15:40]
<cloudnull> meeting times  [15:59]
<cloudnull> ^ cloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, mancdaz, dolphm, _shaps_, BjoernT, claco, echiu, dstanek, jwagner  [16:00]
*** yaya has quit IRC  [16:00]
<dstanek> o/  [16:00]
<stevelle> o/  [16:00]
<Apsu> o/  [16:01]
<d34dh0r53> %  [16:01]
*** daneyon has joined #openstack-ansible  [16:02]
<b3rnard0> oh oh  [16:02]
<jwagner> present  [16:02]
<d34dh0r53> bah-lak-e?  [16:02]
<Sam-I-Am> helloooo  [16:02]
<rromans> hi  [16:02]
*** daneyon_ has joined #openstack-ansible  [16:03]
<cloudnull> so to start we only have two new issues that need to be brought up  [16:03]
<cloudnull> https://bugs.launchpad.net/openstack-ansible/+bug/1451217  [16:03]
<openstack> Launchpad bug 1451217 in openstack-ansible "net.netfilter.nf_conntrack_max not changed on swift server" [Undecided,New]  [16:03]
<palendae> Hi  [16:04]
<cloudnull> according to the issue bjorne wants to up the net.netfilter.nf_conntrack_max to > 256K  [16:05]
<cloudnull> when it pertains to swift storage nodes.  [16:05]
<cloudnull> in master /kilo we set # playbooks/roles/openstack_hosts/defaults/main.yml:82:  - { key: 'net.netfilter.nf_conntrack_max', value: 262144 }  [16:05]
<cloudnull> same in juno  [16:05]
*** daneyon has quit IRC  [16:06]
<cloudnull> check that we set it on neutron and compute nodes in juno  [16:06]
<cloudnull> so it looks like we could add another var to swift container nodes for juno and call it good? Apsu thoughts ?  [16:07]
<Apsu> cloudnull: looking  [16:07]
<cloudnull> in juno # rpc_deployment/vars/config_vars/container_config_nova_compute.yml:61:  - { key: 'net.netfilter.nf_conntrack_max', value: 262144 }  [16:07]
<Apsu> cloudnull: Yep. Agreed. Just apply to the swift container config vars too  [16:08]
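Mirroring the nova-compute entry quoted above, the juno fix would add the same tunable to the swift container config vars. The file path and the top-level key below are assumptions patterned on container_config_nova_compute.yml, not the actual patch:

```yaml
# e.g. rpc_deployment/vars/config_vars/container_config_swift.yml (assumed path)
kernel_options:
  # Raise the conntrack table ceiling for swift storage containers, matching
  # the value already applied to nova-compute nodes.
  - { key: 'net.netfilter.nf_conntrack_max', value: 262144 }
```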
*** yaya has joined #openstack-ansible  [16:08]
<cloudnull> next https://bugs.launchpad.net/openstack-ansible/+bug/1450580  [16:09]
<openstack> Launchpad bug 1450580 in openstack-ansible "run-upgrade/plays should remove existing rpc_release.link" [Undecided,New]  [16:09]
<cloudnull> this issue is a matter of cleaning up the rpc_release llnk file  [16:10]
<cloudnull> while it was left for redundancy in an environment that may be running "other" links besides what we use upstream, it seems that its not something that we need.  [16:11]
<cloudnull> and removing the rpc_release.link file shouldn't impact any other links that may be present.  [16:11]
<cloudnull> i think we can do something like # ansible hosts:all_containers -m shell -a "rm /root/.pip/links.d/rpc_release.link" somewhere around https://github.com/stackforge/os-ansible-deployment/blob/master/scripts/run-upgrade.sh#L291 and it should be all good.  [16:14]
<Apsu> Sounds reasonable to me.  [16:14]
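Expressed as a play rather than the ad-hoc shell command quoted above, the same cleanup could use the file module, which is also idempotent when the link file is already gone. This is a sketch of the idea, not the actual upgrade change:

```yaml
# Sketch: remove the stale rpc_release pip links file on every container.
- name: Remove stale rpc_release pip links file
  hosts: all_containers
  tasks:
    - name: Delete rpc_release.link if present
      file:
        path: /root/.pip/links.d/rpc_release.link
        state: absent
```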
*** erikmwilson has joined #openstack-ansible  [16:15]
<cloudnull> done.  [16:15]
<cloudnull> any other issues that we want to talk about ?  [16:15]
<cloudnull> or any other things we want to talk about ?  [16:15]
<cloudnull> we're waiting on zuul http://status.openstack.org/zuul/  [16:16]
<Apsu> I like turtles  [16:16]
<cloudnull> which seems like its very much stuck  [16:16]
<Apsu> Zuul will run on Zuul's own time  [16:16]
*** Bjoern__ has joined #openstack-ansible  [16:17]
<stevelle> adjust your clocks, Zuul is the reference for all time  [16:17]
<Apsu> #WWZD  [16:17]
<palendae> Probably crash  [16:18]
<Apsu> More like WWZR? When Will Zuul Run?  [16:18]
<palendae> Never  [16:18]
<palendae> Or always  [16:18]
<cloudnull> or always never  [16:18]
<palendae> When Will Zuul Finish  [16:18]
<d34dh0r53> almost always never  [16:18]
<Bjoern__> Hey  [16:18]
*** Bjoern__ is now known as BjoernT  [16:18]
<cloudnull> palendae: tick question.  [16:19]
<cloudnull> *trick  [16:19]
<cloudnull> hi BjoernT  [16:20]
<cloudnull> did you have anything that you wanted to highlight ?  [16:20]
<BjoernT> let me look  [16:20]
<BjoernT> I think we covered the cinder backup job  [16:20]
*** daneyon_ has quit IRC  [16:21]
*** daneyon has joined #openstack-ansible  [16:21]
<BjoernT> #1440784 Spice console is freezing  [16:22]
<BjoernT> I believe we were not able to reproduce it  [16:22]
<BjoernT> did we look in the code what might cause the situation ?  [16:23]
*** yaya has quit IRC  [16:23]
<cloudnull> if the spice console freezes in production and nobody's able to reproduce it, did it really freeze? :)  [16:23]
<Apsu> NOTABUG  [16:24]
<cloudnull> BjoernT i have not.  [16:24]
<d34dh0r53> OPINION  [16:24]
<BjoernT> what happened was that port 6082 is open but not giving back the normal directory listing, if you just do a curl http://localhost:6082  [16:24]
<BjoernT> not even using the spice_auto.html, it did not serve any files basically  [16:25]
<BjoernT> so my hope was that we look at the code to get an idea what to check next time  [16:26]
<cloudnull> so nope nothing new on that front.  [16:27]
<BjoernT> ok apsu :  #1436999 LVM/DM/UDEV out of sync inside cinder-volumes container  [16:27]
* cloudnull not it  [16:28]
<Apsu> BjoernT: Ah yes, our old friend.  [16:28]
<BjoernT> yeah, won't just die  [16:29]
<BjoernT> I like to get udev working right in containers, at least for cinder volumes  [16:29]
<Apsu> I can provide slightly more info, but I'm not sure what the solution is just yet. I assume apparmor profiles combined with lvm.conf using udev.  [16:29]
<Apsu> But... I was helping you guys troubleshoot an issue like this and we noticed that there was a mismatch between what dmsetup though the device major/minor was and the actual device file in /dev  [16:30]
<Apsu> Which pretty much has to mean udev not triggering to rewrite the device file  [16:30]
*** yaya has joined #openstack-ansible  [16:31]
<BjoernT> right, I think it is a simple as /dev is not mounted over udev  [16:31]
<Apsu> Well udev is enabled in the containers already. The lvm.conf has it disabled for LVM currently, but...  [16:32]
<BjoernT> other than that, I don't have anything else  [16:32]
<Apsu> Supposedly Ubuntu's apparmor profiles for LXC already deal with udev cascading issues  [16:32]
*** yaya has quit IRC  [16:32]
<Apsu> But we'll have to test the current state of the union there, with udev enabled for lvm  [16:32]
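For reference, "udev enabled for lvm" comes down to a few lvm.conf settings; whether these alone resolve the dmsetup/device-file mismatch inside the containers is exactly what is being left to test here:

```
devices {
    # Ask udev for the device list instead of scanning /dev directly
    obtain_device_list_from_udev = 1
}
activation {
    # Wait for udev to create/rewrite device nodes before proceeding,
    # and allow udev rules to manage the nodes
    udev_sync = 1
    udev_rules = 1
}
```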
<BjoernT> ok  [16:33]
<openstackgerrit> Ian Cordasco proposed stackforge/os-ansible-deployment: Harden our copy of Glance's policy  https://review.openstack.org/178429  [16:33]
<sigmavirus24> (Sorry for disrupting the meeting)  [16:33]
<cloudnull> so https://bugs.launchpad.net/openstack-ansible/+bug/1436999  [16:36]
<openstack> Launchpad bug 1436999 in openstack-ansible "LVM/DM/UDEV out of sync inside cinder-volumes container" [Medium,Confirmed] - Assigned to Evan Callicoat (apsu-2)  [16:36]
<cloudnull> is a wip  [16:36]
<Apsu> yep  [16:36]
<cloudnull> b3rnard0 ^ action item  [16:36]
<b3rnard0> k  [16:36]
<cloudnull> anything else ?  [16:36]
<Apsu> Three cheers for device file shenanigans  [16:37]
<cloudnull> pi pi ta!  [16:37]
*** jaypipes has joined #openstack-ansible  [16:38]
<cloudnull> ok so we're done here ?  [16:38]
<Apsu> I sure am :D  [16:38]
<cloudnull> thanks everyone  [16:41]
<b3rnard0> thank you  [16:41]
<b3rnard0> notes here: https://etherpad.openstack.org/p/openstack_ansible_bug_triage.2015-05-05-16.00  [16:42]
<cloudnull> tyvm b3rnard0  [16:43]
*** sdake_ has joined #openstack-ansible  [16:45]
*** sdake has quit IRC  [16:49]
*** erikmwilson has left #openstack-ansible  [17:05]
*** jwagner is now known as jwagner_away  [17:11]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Update the tempest install environment  https://review.openstack.org/180044  [17:12]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Harden our copy of Glance's policy  https://review.openstack.org/178429  [17:13]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Install heat-contrib-extraroute wheel  https://review.openstack.org/180170  [17:13]
*** sdake_ has quit IRC  [17:14]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Updated RabbitMQ to the new release version  https://review.openstack.org/178230  [17:19]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Update the tempest install environment  https://review.openstack.org/180044  [17:26]
<cloudnull> ^ mattt i squashed your heat-contrib commit into that one because its still blocked due to the clone .  [17:28]
<openstackgerrit> Kevin Carter proposed stackforge/os-ansible-deployment: Updated RabbitMQ to the new release version  https://review.openstack.org/178230  [17:29]
<cloudnull> sigmavirus24/miguelgrinberg:  i hate your heat and your pbr.  [17:30]
<cloudnull> :)  [17:30]
<miguelgrinberg> cloudnull: :)  [17:31]
<sigmavirus24> hey hey hey  [17:31]
<sigmavirus24> Don't lump me in with either heat or pbr  [17:31]
<miguelgrinberg> I sort of got in a beef with a core dev over this  [17:31]
<sigmavirus24> I don't work on either  [17:31]
<cloudnull> sigmavirus24 your fault via proxy  [17:31]
<cloudnull> :D  [17:31]
* sigmavirus24 throws arms up  [17:32]
<cloudnull> hahahah  [17:32]
<miguelgrinberg> cloudnull: are we okay staying with the old pbr, or do we need to change how we install the plugin?  [17:32]
<miguelgrinberg> I assume they are going to remove the versions from the plugins  [17:32]
<cloudnull> i built the plugin and added it to our index. mattt made a commit that simply installs the plugin instead of attempting to build it.  [17:33]
<cloudnull> and i recently squashed that into the above commit .  [17:33]
<miguelgrinberg> ah okay, I thought I saw you guys pinned pbr to the previous release  [17:33]
<cloudnull> we did. but heat has a run time dependency on pbr which requires lover versions of an ever increasing set of newer packages.  [17:34]
<cloudnull> *lower  [17:34]
<miguelgrinberg> fun  [17:34]
<cloudnull> so attempting to pin pbr is next to impossible over any given period of time.  [17:34]
<cloudnull> it worked yesterday, but not today.  [17:35]
<Sam-I-Am> cloudnull: you're adding too much hipster  [17:36]
<cloudnull> this is true .  [17:36]
<cloudnull> so maybe its not all sigmavirus24 's fault. :D  [17:36]
<miguelgrinberg> there is a summit design session to discuss bringing all those plugins into the main codebase, some of the cores feel like they don't want to officially support them, so not sure if it'll happen  [17:37]
<cloudnull> add in extra route move the rest to stackforge  [17:37]
<cloudnull> and then we can forget that they even exist.  [17:37]
<miguelgrinberg> that'd be nice  [17:38]
<cloudnull> it would, but that'd be too easy .  [17:38]
<cloudnull> so im sure theres a way that we can make this all more complicated.  [17:38]
<cloudnull> and drag out the migration for a few more releases  [17:39]
*** sdake has joined #openstack-ansible  [17:41]
*** yaya has joined #openstack-ansible  [17:57]
*** logan2 has quit IRC  [18:28]
*** jwagner_away is now known as jwagner  [18:29]
*** fawadkhaliq has quit IRC  [18:30]
*** logan2 has joined #openstack-ansible  [18:31]
<openstackgerrit> Ian Cordasco proposed stackforge/os-ansible-deployment: Harden our copy of Glance's policy  https://review.openstack.org/178429  [18:37]
*** sdake_ has joined #openstack-ansible  [18:43]
*** fawadkhaliq has joined #openstack-ansible  [18:43]
*** sdake has quit IRC  [18:47]
*** BjoernT has quit IRC  [18:53]
<openstackgerrit> Merged stackforge/os-ansible-deployment: Set the neutron default workers  https://review.openstack.org/179973  [18:58]
<openstackgerrit> Merged stackforge/os-ansible-deployment: Set the neutron default workers  https://review.openstack.org/180020  [18:58]
*** yaya has quit IRC  [19:01]
<d34dh0r53> cloudnull: think we may have problems with rsyslog logging, do you have a 'trusted' install running somewhere?  [19:04]
<cloudnull> Trusted ?  [19:05]
<d34dh0r53> :)  [19:09]
<d34dh0r53> been running for a while?  [19:09]
*** KLevenstein__ has joined #openstack-ansible  [19:12]
*** KLevenstein has quit IRC  [19:12]
*** KLevenstein__ is now known as KLevenstein  [19:12]
<cloudnull> Would the iad3 lab work?  [19:13]
<cloudnull> Its not running current but its close enough I'd say?  [19:13]
<d34dh0r53> cloudnull: yeah, maybe, I'll look there  [19:13]
<cloudnull> OK.  [19:13]
*** subscope_ has joined #openstack-ansible  [19:27]
*** Bjoern__ has joined #openstack-ansible  [19:32]
*** sdake has joined #openstack-ansible  [19:42]
*** sdake has quit IRC  [19:42]
*** sdake_ has quit IRC  [19:45]
*** sdake has joined #openstack-ansible  [19:46]
<d34dh0r53> cloudnull: yep, no mysql logs in the /var/log/log-storage/*galera direcories, spool files are in the containers and everything looks to be setup correctly but no logs  [19:47]
<d34dh0r53> IAD3 btw  [19:47]
*** sdake_ has joined #openstack-ansible  [19:47]
*** sdake has quit IRC  [19:51]
<sigmavirus24> d34dh0r53: wrong channel?  [19:52]
<d34dh0r53> it was mentioned earlier so I thought I'd put it here, this is referring to os-a-d logging not rpc-extras work  [19:53]
<sigmavirus24> okay  [19:53]
<openstackgerrit> Matthew Kassawara proposed stackforge/os-ansible-deployment: Add documentation to user config file  https://review.openstack.org/180296  [19:57]
*** sdake has joined #openstack-ansible  [20:01]
*** sdake_ has quit IRC  [20:05]
*** sdake has quit IRC  [20:10]
*** sdake has joined #openstack-ansible  [20:10]
<cloudnull> d34dh0r53 we're you able to figure out the issue?  [20:18]
<d34dh0r53> cloudnull: not yet, still looking into it  [20:21]
<cloudnull> rerunning the client role didn  [20:22]
<cloudnull> *didn't make it happy  [20:22]
<cloudnull> ?  [20:22]
<cloudnull> also what was spooling ?  [20:22]
<d34dh0r53> cloudnull: there are spool files, but no log files on the logging server  [20:23]
<d34dh0r53> cloudnull: the only files on the logging server are cron.log,dhclient.log,kernel.log etc...  [20:23]
<cloudnull> do we see this behavior on an AIO ?  [20:27]
<d34dh0r53> yep  [20:27]
*** KLevenstein has quit IRC  [20:31]
<cloudnull> d34dh0r53 i had an env up running kilo this morning where i had all the logs on the log server.  [20:35]
<cloudnull> im kicking kilo & master right now  [20:35]
<cloudnull> will know more in a min  [20:35]
*** persia has quit IRC  [20:40]
*** d34dh0r53 has quit IRC  [20:41]
*** persia has joined #openstack-ansible  [20:41]
*** persia has quit IRC  [20:41]
*** persia has joined #openstack-ansible  [20:41]
*** d34dh0r53 has joined #openstack-ansible  [20:45]
*** fawadkhaliq has quit IRC  [20:51]
*** KLevenstein has joined #openstack-ansible  [20:52]
*** jwagner is now known as jwagner_away  [21:05]
<cloudnull> d34dh0r53 on my two AIO's i have a whole mess of logs in [ /openstack/aio1_rsyslog_container-*/log-storage/**/* ]  [21:07]
<d34dh0r53> cloudnull: look in log-storage/<one of your galera containers> and see if you see galara_server*  [21:08]
<cloudnull> i put keys on 162.209.124.25  [21:08]
<d34dh0r53> cloudnull: http://paste.openstack.org/show/215069/  [21:10]
<d34dh0r53> cloudnull: and then http://paste.openstack.org/show/215071/  [21:11]
<d34dh0r53> I've tried changing all kinds of stuff in the rsyslog_client config and nothing so far is making it go  [21:11]
<cloudnull> i see the entries in /etc/rsyslog.d/99-galera-rsyslog-client.conf  [21:11]
<d34dh0r53> yep  [21:12]
<d34dh0r53> that file looks fine, I've tried moving the ip down to the bottom, removing the RFC tag thing, changing the tag to be unique, cant figure it out, rsyslog is definitely shipping logs, just not those logs  [21:12]
<cloudnull> -rw-r----- 1 mysql root     39638 May  5 21:12 galera_server_error.log  [21:13]
<cloudnull> -rw-rw---- 1 mysql mysql  1599759 May  5 21:13 mysql-slow.log  [21:14]
<cloudnull> maybe a perms issue as to why its not being shipped ?  [21:14]
<d34dh0r53> possibly, although I just got it to ship on IAD3  [21:15]
<d34dh0r53> one sed, gonna muck around with your aio  [21:16]
<d34dh0r53> s/sed/sec  [21:16]
<cloudnull> d34dh0r53 changing to the perms to 644 made the logs ship  [21:20]
<cloudnull> look on my server under /openstack/aio1_rsyslog_container-cd7d7179/log-storage/aio1_galera_container-6f9a76ba/galera_server_error.log  [21:20]
<d34dh0r53> yep, appears so  [21:21]
<cloudnull> so d34dh0r53 maybe the correct action here is to, within the rsyslog_client role, ensure that permissions are 0644 ?  [21:25]
<cloudnull> that way all logs regestered via the client are ensured to log correctly.  [21:25]
<cloudnull> or we can move the logs into /var/log from within the galera_server role instead of having them in the /var/log/mysql directory.  [21:26]
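The first option floated here — having the rsyslog_client role force world-readable files so rsyslog's file reader can open them — might look like the sketch below. The variable and key names are assumptions for illustration, not the role's actual interface:

```yaml
# Sketch: make every log file registered with the rsyslog client readable
# by the syslog user. "rsyslog_client_log_files" / "file_path" are assumed names.
- name: Ensure registered log files are readable by rsyslog
  file:
    path: "{{ item.file_path }}"
    mode: "0644"
  with_items: "{{ rsyslog_client_log_files | default([]) }}"
```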
<d34dh0r53> hmm  [21:30]
<d34dh0r53> any security implications to making them 644?  [21:31]
<cloudnull> yes.  [21:32]
<palendae> Would those security implications be the same if you put them in /var/log?  [21:32]
<d34dh0r53> :) that's what I thought  [21:32]
*** KLevenstein__ has joined #openstack-ansible  [21:33]
<cloudnull> d34dh0r53 if the logs were in /var/log/ instead they'd be craeted with owner "mysql:adm"  which syslog is a member of adm so that should work  [21:34]
<d34dh0r53> ok, that works  [21:34]
*** sdake_ has joined #openstack-ansible  [21:35]
*** KLevenstein has quit IRC  [21:35]
*** KLevenstein__ is now known as KLevenstein  [21:35]
<d34dh0r53> is syslog set in [mysqld_safe] in my.cnf?  [21:35]
<cloudnull> we have nothing there.  [21:36]
<cloudnull> except [mysqld_safe]  [21:36]
<cloudnull> socket = /var/run/mysqld/mysqld.sock  [21:36]
<cloudnull> nice = 0  [21:36]
<d34dh0r53> if we set syslog that may work as well  [21:36]
*** JRobinson__ has joined #openstack-ansible  [21:37]
<d34dh0r53> that will work for the error log, not sure about the slow-query-log  [21:37]
<cloudnull> if we simply put syslog under the [mysqld_safe] section error logging should go there by defualt.  [21:38]
<d34dh0r53> no, log-error takes precedence  [21:38]
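For context, the option under discussion sits in the [mysqld_safe] section alongside the two settings cloudnull quoted; as d34dh0r53 notes, an explicit log-error setting wins over syslog, and it does nothing for the slow query log either way:

```ini
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
# Route mysqld_safe error output to syslog. Ignored if log-error is also
# configured, and it has no effect on the slow query log.
syslog
```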
<d34dh0r53> just read that  [21:38]
*** sdake has quit IRC  [21:39]
<d34dh0r53> I think /var/log is the best option  [21:39]
<d34dh0r53> going to try that on my aio  [21:41]
<cloudnull> im am too :)  [21:42]
*** stevemar has joined #openstack-ansible  [21:47]
<cloudnull> so that didnt work  [21:47]
<d34dh0r53> really?  [21:49]
<d34dh0r53> same perms?  [21:49]
<cloudnull> yup. mysql:root  [21:49]
<d34dh0r53> we could chgrp to adm later in the play  [21:49]
*** sdake has joined #openstack-ansible  [21:50]
<d34dh0r53> bbl, going to gym  [21:51]
<cloudnull> d34dh0r53 https://gist.github.com/cloudnull/c50f14f2b82f6850fe48  [21:52]
<cloudnull> that made it go for me  [21:52]
<cloudnull> i moved the logs to a directory with a stick adm group.  [21:53]
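The gist isn't reproduced here, but a "sticky adm group" is a setgid directory: files created inside inherit the directory's group, so with group adm (which syslog belongs to) new log files become readable by rsyslog without a chmod step. A minimal sketch, using a temp dir as a stand-in since the real path and a chown to mysql:adm would need root:

```shell
# Stand-in for the real log directory (an assumption; the gist targets the
# galera container's log path).
LOG_DIR="${LOG_DIR:-$(mktemp -d)}"

# The leading 2 sets the setgid bit: new files inherit the directory's group.
chmod 2755 "$LOG_DIR"

# Simulate mysqld creating its error log, then make it group-readable.
touch "$LOG_DIR/galera_server_error.log"
chmod 0644 "$LOG_DIR/galera_server_error.log"

stat -c '%a' "$LOG_DIR"   # prints 2755
```

On a live host the directory would additionally be owned mysql:adm so mysqld can write and syslog can read.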
*** sdake__ has joined #openstack-ansible  [21:54]
*** sdake_ has quit IRC  [21:54]
*** Bjoern__ has left #openstack-ansible  [21:55]
*** sdake has quit IRC  [21:58]
*** galstrom is now known as galstrom_zzz  [22:22]
*** stevemar has quit IRC  [22:24]
*** KLevenstein has quit IRC  [22:31]
*** subscope_ has quit IRC  [22:34]
*** erikmwilson_ is now known as erikmwilson  [22:35]
*** Mudpuppy has quit IRC  [22:45]
*** jbweber has quit IRC  [23:18]
*** jbweber has joined #openstack-ansible  [23:18]
<d34dh0r53> cloudnull: nice, want me to PR that?  [23:20]
<cloudnull> sure.  [23:20]
<cloudnull> unless you have a better solution  [23:20]
<cloudnull> its up to you brother.  [23:21]
<cloudnull> :)  [23:21]
<d34dh0r53> not at the moment, I'll PR that in the morning, maybe I'll come up with something between now and then  [23:21]
<cloudnull> kk  [23:21]
<d34dh0r53> have a good one sir, I'm getting out of here  [23:21]
<cloudnull> have a good night.  [23:22]
*** sdake__ has quit IRC  [23:42]

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!