Friday, 2015-09-11

*** rromans has joined #openstack-ansible00:02
*** cfarquhar has quit IRC00:04
*** cfarquhar has joined #openstack-ansible00:15
*** cfarquhar has quit IRC00:15
*** cfarquhar has joined #openstack-ansible00:15
*** shoutm has quit IRC00:22
*** darrenc is now known as darrenc_afk00:22
*** shoutm has joined #openstack-ansible00:23
*** woodard has joined #openstack-ansible00:34
*** woodard has quit IRC00:38
*** mrstanwell has quit IRC00:39
*** shoutm_ has joined #openstack-ansible00:40
*** shoutm has quit IRC00:41
*** tlian2 has joined #openstack-ansible00:54
*** skamithi12 has joined #openstack-ansible00:56
*** skamithi12 has quit IRC00:57
*** tlian has quit IRC00:58
*** skamithi12 has joined #openstack-ansible01:10
*** darrenc_afk is now known as darrenc01:12
*** Mudpuppy has joined #openstack-ansible01:13
*** Mudpuppy has quit IRC01:16
*** Mudpuppy has joined #openstack-ansible01:16
*** skamithi12 has quit IRC01:23
*** sdake has quit IRC01:26
*** tlian2 has quit IRC01:28
*** daneyon_ has quit IRC01:32
*** tlian has joined #openstack-ansible01:34
*** woodard has joined #openstack-ansible01:35
*** woodard has quit IRC01:39
*** jlvillal has quit IRC01:49
*** shoutm_ has quit IRC02:02
*** shoutm has joined #openstack-ansible02:09
*** daneyon has joined #openstack-ansible02:09
*** daneyon has quit IRC02:10
*** daneyon has joined #openstack-ansible02:12
*** daneyon has quit IRC02:13
*** daneyon has joined #openstack-ansible02:13
*** jlvillal has joined #openstack-ansible02:18
*** daneyon has quit IRC02:19
*** jlvillal has quit IRC02:35
*** arbrandes1 has joined #openstack-ansible02:48
*** arbrandes has quit IRC02:49
*** sigmavirus24_awa is now known as sigmavirus2402:56
*** sigmavirus24 is now known as sigmavirus24_awa02:58
*** tlian has quit IRC03:31
*** sdake has joined #openstack-ansible03:55
*** daneyon has joined #openstack-ansible04:00
*** jbweber has joined #openstack-ansible04:42
*** skamithi12 has joined #openstack-ansible04:53
*** daneyon has quit IRC04:54
*** daneyon has joined #openstack-ansible04:56
*** daneyon has quit IRC04:58
*** daneyon has joined #openstack-ansible05:00
*** jlvillal has joined #openstack-ansible05:06
*** sdake has quit IRC05:13
*** daneyon has quit IRC05:30
*** shausy has joined #openstack-ansible05:37
*** shausy has quit IRC05:48
*** javeriak has joined #openstack-ansible05:57
*** javeriak has quit IRC06:05
*** javeriak has joined #openstack-ansible06:11
*** markvoelker has quit IRC06:16
*** skamithi12 has quit IRC06:27
*** javeriak_ has joined #openstack-ansible06:55
*** javeriak has quit IRC06:57
*** markvoelker has joined #openstack-ansible07:17
*** markvoelker has quit IRC07:22
*** javeriak has joined #openstack-ansible07:25
*** javeriak_ has quit IRC07:25
*** javeriak_ has joined #openstack-ansible07:35
*** javeriak has quit IRC07:37
*** gparaskevas has joined #openstack-ansible07:41
*** javeriak_ has quit IRC08:19
*** cbaesema_ has left #openstack-ansible08:25
*** shoutm has quit IRC08:32
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment: Implementation of keepalived for haproxy  https://review.openstack.org/21881808:34
*** Mudpuppy has quit IRC08:36
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment: [WIP] Implementation of keepalived for haproxyiImplementation of keepalived for haproxy  https://review.openstack.org/21881808:37
evrardjpgood morning08:37
evrardjpsorry for this ^(my vim went AWOL)08:37
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment: [WIP] Implementation of keepalived for haproxy  https://review.openstack.org/21881808:38
*** javeriak has joined #openstack-ansible08:41
*** javeriak has quit IRC08:47
hughsaundersodyssey4me: I may be missing something here, but why do we need a blank line before </virtualhost> ? https://review.openstack.org/#/c/221974/7/playbooks/roles/os_keystone/templates/keystone-httpd.conf.j2,cm08:51
*** tiagogomes has joined #openstack-ansible08:57
tiagogomeshi08:57
hughsaundersHI tiagogomes08:58
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Add neutron_migrations_facts module  https://review.openstack.org/21975909:05
tiagogomesI am trying to use os-ansible-deployment to install a new stack. I had configured my hosts with some interfaces with IP addresses with a /16 prefix for the different required networks (management, vlanx, storage). However after running the setup-hosts playbook I verified the prefix changed to be /24. This is causing the packets to not be routed to the container properly. Anyone knows what could have caused the change of prefix?09:05
tiagogomesOn the cidr_networks key of /etc/openstack_deploy/openstack_user_config.yml, I am using a /16 prefix as well09:06
andymccrtiagogomes: you mean the cidr of the interfaces inside the containers is a /24 when it should be a /16?09:15
odyssey4mehughsaunders we don't, but what I have seen sometimes is that the whitespace removal (ie the dash in {%-) ends up removing whitespace including the newline, so the </VirtualHost> tag ends up on the same line as the WSGI...09:19
odyssey4mehughsaunders I was not really in agreement with the first patch, but it was voted through - this is a backport09:19
hughsaundersodyssey4me: I thought it was totally unnecessary so I tested it, but it does fail as described :/09:20
hughsaundersHowever the cherry pick id doesn't match the commit it was actually cherry picked from09:21
hughsaundersreally unintuitive that the - on the tag affects newlines that are visually outside the tag.09:21
odyssey4mehughsaunders the original patch was done as a patch to master and kilo at the same time - I edited the commit message once the master patch had merged09:22
tiagogomesandyhky, the interfaces outside the containers09:22
odyssey4medid I do a bad edit?09:22
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment: [WIP] Implementation of keepalived for haproxy  https://review.openstack.org/21881809:22
odyssey4mehughsaunders if you can just fix up the commit message that'd be great - the content of the patch itself is the same09:23
*** arbrandes has joined #openstack-ansible09:23
*** arbrandes1 has quit IRC09:24
openstackgerritHugh Saunders proposed stackforge/os-ansible-deployment: Fixes jinja typo in keystone-httpd.conf.j2  https://review.openstack.org/22197409:24
odyssey4mehughsaunders looks good to me09:26
hughsaundershaha +2 race09:27
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Add neutron_migrations_facts module  https://review.openstack.org/21975909:27
andymccrtiagogomes: the existing interfaces on the physical hosts aren't managed by os-ansible-deployment at all, so i dont think they could/would've been changed by a setup-hosts run. The only change that happens there would be the lxc-br0 which is added by the run, but that shouldn't determine container connectivity on the other interfaces.09:29
tiagogomesandymccr right, ok thanks. That's what I thought but I wanted to be sure. I was deploying to another host with identical network setup and the prefixes haven't changed. Odd, but restarting the network fixed the issue09:31
andymccrtiagogomes: no problem! glad thats working now09:31
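For context, a minimal sketch of the cidr_networks section tiagogomes refers to above, assuming the Kilo-era layout of /etc/openstack_deploy/openstack_user_config.yml; the queue names and addresses are illustrative, not the actual values from this deployment:

# Illustrative sketch -- key names assume the Kilo example file, addresses are invented.
cidr_networks:
  container: 172.29.0.0/16   # management network carried on br-mgmt
  tunnel: 172.30.0.0/16      # VXLAN overlay network carried on br-vxlan
  storage: 172.31.0.0/16     # storage network carried on br-storage
used_ips:
  - "172.29.0.1,172.29.0.50" # addresses already assigned on the hosts

These prefixes only govern the addresses handed out to containers; as andymccr notes above, the host-side interfaces and bridges are not managed by the setup-hosts play at all.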
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment: Implementation of keepalived for haproxy  https://review.openstack.org/21881809:32
tiagogomesI have another question, does the deployment host have to have access to the container management network directly? Or does it ssh first to the container host using the IP in the inventory, and then to the container09:32
tiagogomesIt looks like that is trying to SSH to the container directly09:33
evrardjpcloudnull: when you're around, could you check whether the inventory of this is what you expected? : https://review.openstack.org/21881809:33
evrardjpit works here for me09:34
andymccrtiagogomes: it needs access to the management network directly - if you look in the openstack_inventory.json file, which is the inventory file that is created by taking your user_config.yml and the environment config files, you'll see each container is actually a host.09:40
andymccrthere is a file in the scripts directory called "inventory-manage.py" you can run this with a --list and you'll see all the hosts (which includes containers) as well as their ansible_ssh_address09:40
andymccrwhich is the address ansible will try connect directly to.09:41
tiagogomesbugger, it would be nice to mention in the docs that the deployment host has to have access to the management network09:41
tiagogomesI'll have to change the deployment to one of the target hosts then09:41
tiagogomesthanks andymccr09:41
andymccrtiagogomes: thats a good point - that should be mentioned in the docs/guide. i guess your deployment host would only need access on that network, but none the less it does require access to that network.09:42
openstackgerritSerge van Ginderachter proposed stackforge/os-ansible-deployment: Allow Horizon setup with external SSL termination  https://review.openstack.org/21464709:42
hughsaundersmattt: your tempest cinder multi backend patch, allows a deployer to test multi backend but without changing anything for the current gate?09:54
odyssey4mehughsaunders yup09:58
odyssey4meit enables alternative gate tests09:59
openstackgerritgit-harry proposed stackforge/os-ansible-deployment: Remove unused Juno-specific groups from inventory  https://review.openstack.org/22218610:08
tiagogomeshi again, anyone knows what is going on here: http://paste.openstack.org/show/456579/10:22
*** javeriak has joined #openstack-ansible10:34
andymccrtiagogomes: ansible cant reach those 2 memcache containers so it fails out and then has no hosts to run the next task against (because the 2 eligible hosts have already failed)10:36
tiagogomesI got that, but I managed to ssh to the containers without ansible, so I don't know why ansible can't10:37
andymccrhmm.10:40
andymccrif you run the ansible command with a -vvv it could give you more of a clue. I have had an issue before where it tries to execute /usr/bin/python and the containers i had were not deployed properly (mostly because i stopped a run midway through).10:42
tiagogomesI don't have a python executable on the container, only python310:43
andymccrif you check with -vvv it'll give a better idea, but if there is an issue during the container deployment process that could be a cause10:45
tiagogomesThe output that I posted is already with the verbose option10:47
tiagogomesright, there is a broken symbolic link in the container rootfs: python -> /usr/bin/python2.7. The python2.7 file is not in the container rootfs10:51
openstackgerritMerged stackforge/os-ansible-deployment: Add profiling for Ansible tasks  https://review.openstack.org/22234810:51
tiagogomeswould it be worth running teardown and starting all over? Is this a known bug?10:52
* tiagogomes notes that there is a task that created the python symbolic link10:55
hughsaunderstiagogomes: ansible has problems with ssh connections sometimes, and doesn't have a mechanism to retry them until v2 when ssh-retry got merged: https://github.com/ansible/ansible/commit/42467777593e3a4897c86362d3ec9fb09f51786210:59
*** javeriak_ has joined #openstack-ansible11:00
mattthughsaunders: yeah, that backend1 stuff we were setting was actually not being used11:01
hughsaundersmattt: are you using those vars in something?11:02
odyssey4metiagogomes that's odd, the task right before that installs the python2.7 package11:02
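The symptom in this thread is the classic one where a target has no working /usr/bin/python, so fact gathering and normal Ansible modules cannot execute there. A minimal sketch of the usual workaround, assuming the raw module is used to bootstrap the interpreter first (illustrative only, not the project's actual container task):

# Illustrative tasks -- the play must run with gather_facts disabled until python exists.
- name: Bootstrap a python interpreter inside the container
  raw: "which python || (apt-get update && apt-get install -y python2.7)"
- name: Repair the /usr/bin/python symlink if it is dangling
  raw: "ln -sf /usr/bin/python2.7 /usr/bin/python"

As the thread later shows, the underlying cause here was the container image lacking apt-transport-https, so the https apt sources could never install python2.7 in the first place.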
*** javeriak has quit IRC11:02
mattthughsaunders: in a gate?  no11:03
mattthughsaunders: it was a case of removing the pointless entries from tempest.conf, or adding some vars so that functionality can actually be used11:03
mattthughsaunders: and since we may have deployments w/ lvm and ceph backends, i figured it may get used down the line11:03
*** markvoelker has joined #openstack-ansible11:04
hughsaundersAh, it looked like prep work for something, but it was actually tidy-up11:04
mattthughsaunders: yep, just cleanup, that's why i was on the fence about backporting to kilo11:05
*** Mudpuppy has joined #openstack-ansible11:08
*** markvoelker has quit IRC11:09
*** Mudpuppy has quit IRC11:12
*** shoutm has joined #openstack-ansible11:13
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Add neutron_migrations_facts module  https://review.openstack.org/21975911:13
openstackgerritMerged stackforge/os-ansible-deployment: Fixes jinja typo in keystone-httpd.conf.j2  https://review.openstack.org/22197411:14
*** ggillies has quit IRC11:23
*** Mudpuppy has joined #openstack-ansible11:26
hughsaundersjust looking at log slave updates patch for kilo, does that make sense with galera?11:28
hughsaundersIf a galera node dies, when its rebuilt, it can be brought back up to speed via galera's SST mechanism and doesn't need local binlogs11:29
hughsaundersSo why add load making every node write binlogs for each tx?11:30
hughsaunders^ mancdaz11:31
mancdazhughsaunders I keep meaning to follow up with DBAs on why this was recommended11:31
mancdazthis and server_id uniqueness11:32
*** Mudpuppy has quit IRC11:32
mancdazwhich is generally not recommended with galera clusters11:32
*** Mudpuppy has joined #openstack-ansible11:33
mancdazhughsaunders I guess in a case where the entire db is screwed, being able to restore from binlogs up to a point is useful11:33
mancdazscrewed and also replicated in its screwed state11:33
mancdazso other nodes are not good donors here11:33
hughsaundersahh, yes in the raid-is-not-backup sense11:34
hughsaundershowever binlogs are not backups either11:34
mancdazhughsaunders they kind of are a bit11:34
hughsaundersif you happen to catch them at the right time11:34
mancdazhughsaunders well you can replay up to whenever you like11:34
mancdazie before catastrophe11:35
hughsaundersyeah, depending on how long ago it was and how much binlog you store11:35
mancdazwell quite11:35
hughsaundersbut I can see there can be a use and that it would be hard to try and reconstruct that from multiple nodes11:35
mancdazI'm guessing though that a full restore on all nodes because of catastrophe will be done inside 5 days11:35
mancdazso this is why we need server_id uniques11:36
mancdazotherwise all updates get logged multiple times11:36
mancdazelse you can't decipher your own updates11:36
mancdazor something11:36
mancdazI think possibly I just talked myself into understanding why this change is needed, but also that without the server_id change this could be bad...?11:37
mancdazhughsaunders walk down that logic path with me11:37
hughsaundersbut you wouldn't restore on multiple nodes, you'd restore on one, re initialise it as a new cluster, and let it update the others11:39
hughsaundersthen you don't have to sync positions and whatnot, let galera do its thang11:39
mancdazhughsaunders sure. I don't think that's what I was saying11:39
mancdazI think the server_id helps in determining which other updates to log/ignore11:40
hughsaunderssounds plausible, don't know though11:41
*** woodard has joined #openstack-ansible11:42
*** sdake has joined #openstack-ansible11:42
*** sdake has quit IRC11:44
*** woodard has quit IRC11:46
openstackgerritMerged stackforge/os-ansible-deployment: Replaced the copy_update module  https://review.openstack.org/21679011:50
hughsaundersgit-harry: didn't know about viewitems, interesting.11:58
git-harryhughsaunders: yeah it's python3 items12:01
mgariepygood morning12:02
openstackgerritMerged stackforge/os-ansible-deployment: Allow testing of cinder multi backends w/ tempest  https://review.openstack.org/22211912:10
*** markvoelker has joined #openstack-ansible12:15
*** kukacz has joined #openstack-ansible12:17
tiagogomesI think that python2.7 was not installed on the containers because apt-transport-https was not installed in the rootfs, and the repository URLs were using https12:28
*** tlian has joined #openstack-ansible12:30
openstackgerritMerged stackforge/os-ansible-deployment: Enable log_slave_updates  https://review.openstack.org/22219812:39
odyssey4memancdaz hughsaunders as I recall the binlogs are useful when a server has simply been offline for a period of time, not entirely destroyed12:49
mancdazodyssey4me kind of, except when you have galera you don't need them12:50
mancdazin a normal master/slave replication env, sure12:50
hughsaundersodyssey4me: very much so in the mysql replication case, in the g.... what mancdaz said12:50
odyssey4metiagogomes both of those packages are pre-installed into the container image that is downloaded, so how is it that your container image doesn't have those?12:50
*** sigmavirus24_awa is now known as sigmavirus2412:50
odyssey4memancdaz hughsaunders I'm just giving you the justification that was given me. :p12:51
*** tlian has quit IRC12:51
mancdazodyssey4me in this case, I think we're looking at using them to restore an entire cluster from "last nightly full + latest binlogs"12:51
odyssey4memancdaz that seems odd though, why bother restore when you can just add the node to the cluster?12:52
*** javeriak_ has quit IRC12:54
*** tlian has joined #openstack-ansible12:56
*** javeriak has joined #openstack-ansible12:59
mancdazentire dead cluster data...need to start from last good point in time13:04
*** cloudtrainme has joined #openstack-ansible13:05
*** cloudtrainme has quit IRC13:05
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Adds the config_template to OpenStack Services  https://review.openstack.org/21703013:09
odyssey4memattt in light of our earlier discussion about independent role repositories, can you please review https://review.openstack.org/213779 ?13:10
kukaczhi, I've just recently noticed this project and on first look I'm excited of it's scope13:10
kukaczI wonder what is the maturity of the project, is it something being used in production already?13:11
odyssey4mekukacz yes, absolutely13:12
matttodyssey4me: yeah sure13:12
*** hrvojem has left #openstack-ansible13:12
*** javeriak_ has joined #openstack-ansible13:12
kukaczoddysey4me, mattt: that's so nice to hear :-)13:12
odyssey4meit's been used for Rackspace Private Cloud in production for over a year now (since Icehouse) and has been growing and maturing over that time. In the Liberty cycle we have had several other deployers adopt it to run Kilo.13:12
*** javeria__ has joined #openstack-ansible13:13
*** javeria__ has quit IRC13:13
mgariepyout of curiosity, how big are private clouds at rackspace?13:14
kukaczwhen considering deploying it for own production - do I have to bother with some Rackspace specifics?13:14
odyssey4mekukacz our docs are a little dated and need some attention, but if you're able to explore the ansible defaults files you can often find the right changes - we'd love to get more doc update patches!13:14
*** logan2 has quit IRC13:15
odyssey4mekukacz There are no Rackspace specifics - those were pulled out for Kilo and RPC now consumes openstack-ansible as an upstream project. Anything RPC specific is handled in rpc-openstack and not in openstack-ansible13:15
*** javeriak has quit IRC13:15
kukaczodyssey4me: if I consider using OSAD in place of eg. RedHat's OSP - is it something one would seriously recommend? (ignoring the support)13:15
*** javeriak has joined #openstack-ansible13:15
openstackgerritMerged stackforge/os-ansible-deployment: Ensure cinder-backup-related variables are defined  https://review.openstack.org/22170513:16
openstackgerritMerged stackforge/os-ansible-deployment: Be more clear about used_ips, mostly in the example file  https://review.openstack.org/22212113:16
odyssey4mekukacz take note that the repository is being renamed to openstack/openstack-ansible tonight - it will no longer be a stackforge project13:16
kukaczodyssey4me:yeah, I've noticed. congratulations to such a move!13:16
odyssey4mekukacz if you're happy with using Ubuntu instead of RHEL then yes13:16
*** javeriak_ has quit IRC13:16
odyssey4meif not, then we have some work being done in the Mitaka cycle to implement broader OS support and we'd love more hands and eyes to be involved13:17
odyssey4memgariepy big in terms of people, or in terms of deployments?13:17
mgariepyin size of the cluster13:18
kukaczodyssey4me: so similarly to RAX, one can think of consuming OSAD as an upstream source for one's own private cloud solution?13:18
odyssey4memgariepy RPC has builds that vary in size from small (ie 10 compute nodes or less) to large (50+ nodes)13:18
*** javeriak has quit IRC13:18
odyssey4memgariepy some builds are in RAX DC's and others are in Customer DC's13:18
mgariepyok13:18
odyssey4mekukacz yes, absolutely - you can learn from how RAX does it by looking at https://github.com/rcbops/rpc-openstack/ or you can do it completely your own way13:19
odyssey4mekukacz we're also doing work in the Mitaka cycle to make it easier to consume openstack-ansible in more flexible ways - see the spec in https://review.openstack.org/21377913:20
*** javeriak has joined #openstack-ansible13:20
odyssey4memgariepy I don't know if you've seen the news about the large clusters being built for the OpenStack Innovation Centre? That's 2 x 1000 node clusters, all being built with openstack-ansible13:21
kukaczodyssey4me: amazing. I'll check those links13:21
mgariepynop i havent seen it.13:21
*** javeriak has quit IRC13:21
odyssey4methe big clusters are specifically being built for larger scale testing, cross DC testing, federation, global clustering, etc13:21
odyssey4memgariepy http://www.rackspace.com/blog/newsarticles/rackspace-collaborates-with-intel-to-accelerate-openstack-enterprise-feature-development-and-adoption/13:22
*** KLevenstein has joined #openstack-ansible13:24
kukaczodyssey4me: I start considering using it for our company's private cloud offering. we're just beginning with openstack, consuming distro and support from third parties, but in short term window we'll need to move to customized solutions offered to our customers13:24
kukaczodyssey4me: .... doing those customizations in house. OSAD seems to be a great foundation for this13:25
odyssey4mekukacz well, what we offer is really flexible - and ansible is easy to understand and quick to learn and work with, so it makes it really easy to get involved in doing improvements in the project which filter down into your environment after going through feedback, iteration and testing13:26
kukaczwill probably spend some time this weekend running an OSAD eval deployment on top our pilot OS env13:26
odyssey4mewe understand that some deployers prefer to fork for internal needs, but we do try and encourage upstream commits of whatever's been forked so that we can all help you make it better and everyone can benefit from it13:27
kukaczodyssey4me: I'm not familiar with Ansible but this is a good reason to start learning it13:27
*** mrstanwell has joined #openstack-ansible13:28
kukaczodyssey4me: I understand. I mostly think of those customizations as doing something required by our customers (integrate ELK stack or whatever....) while contributing that back to the project. that should not be a problem13:28
odyssey4mekukacz if you look carefully, rpc-openstack already has an ELK stack :)13:29
odyssey4meyou can always borrow from there, considering that the source is open :)13:29
kukaczodyssey4me: OK, will have to grab for some other task :-)13:29
kukaczodyssey4me: I've noticed a ceph support too13:30
mrstanwellI tried making that ELK go in osad kilo, but failed.13:30
odyssey4meyou could also join evrardjp in the call to move those roles into the openstack-ansible umbrella so that we can all participate in making it better :)13:30
odyssey4mekukacz yup, ceph support was merged a month or two ago and has been baking for a few weeks13:30
odyssey4memrstanwell oh? I'm pretty sure that it works as the team working on Ceph in rpc-openstack included some new filters for Ceph... so I'm fairly certain that it works - there may just be some implementation details that need clarification13:31
kukaczodyssey4me: (thinkin of what else we need in the core) - any experience with OpenContrail integrations?13:31
kukaczI mean with OSAD specifically13:32
tiagogomesodyssey4me I don't know13:33
odyssey4mekukacz nope, we've had no-one work with that - some work is being done for plumgrid enablement - not sure if that helps13:33
*** jroll is now known as jtroll13:33
*** jtroll is now known as Guest1373013:34
*** Guest13730 is now known as jroll13:34
kukaczodyssey4me: nevermind, still I'm excited :-)13:34
kukaczodyssey4me: thanks for the chat! I'll prepare an environment to test it out myself13:35
mrstanwellodyssey4me: could be.  in my case, it looked like the cloud-init scripts defined in the heat template weren't being applied when the master node was being created.  could never figure out why.  chalked it up to some moving part having revved since then or something.  but, I'd never done anything with heat before.  so, it's also very possibly a n00b error.13:36
mrstanwellgot distracted with ipv6.13:37
mrstanwelland, after all, the rsyslog container in osad gathers all the logs up sooooo nicely!!13:37
odyssey4memrstanwell I'm confused - was it the heat template that didn't work, or was it the ELK stack that didn't work?13:37
mrstanwellodyssey4me: sorry.  it was the heat template.13:38
mrstanwell(the rpc-openstack one)13:38
odyssey4memrstanwell which heat template did you use? mattt is the primary maintainer of it so he may help you make it go13:38
mrstanwellodyssey4me: it was from the RPC-Heat-ELK project on github. at commit 6f4d52b63d, looks like.13:41
odyssey4memrstanwell ah, that's meant to be used from inside an RPC installation - it's part of a Horizon solutions tab13:42
odyssey4meit doesn't deploy the same thing as is present in the ELK roles in rpc-openstack (which is developed for an openstack back-end)13:43
*** logan2 has joined #openstack-ansible13:43
odyssey4memattt can you backport https://review.openstack.org/221705 please?13:43
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Ensure cinder-backup-related variables are defined  https://review.openstack.org/22261013:44
mrstanwellodyssey4me:  aaah.  ok.  thanks for clarifying!13:46
odyssey4memattt perhaps it's worth adding a subsequent patch to clean-up the old users - re: https://review.openstack.org/20282213:48
matttodyssey4me: are you asking me to do the follow-up patch?13:51
matttif so, i'd rather it just get done in that review :P13:51
evrardjpabout opencontrail: At some point we'll get there13:55
evrardjpI mean: opencontrail is something we think of, where I work ;)13:56
kukaczevrardjp: same with us.13:56
evrardjpso if we need to integrate it with openstack, the odds are high that we are going to use ansible to deploy it, so why not integrate it with osad (if relevant) or in a close repo13:57
kukaczevrardjp: the network guys find it the most suitable and flexible enough for their needs13:57
evrardjpit makes sense ;)13:57
openstackgerritMajor Hayden proposed stackforge/os-ansible-deployment-specs: Adding security hardening spec  https://review.openstack.org/22261913:57
evrardjpworking with juniper hardware?13:57
evrardjpnot mandatory, but still interesting to know13:58
kukaczpartially, we are a mix of cisco/juniper13:58
kukaczbut they favor contrail for the real features, without any preference regarding the hardware vendor13:59
*** woodard has joined #openstack-ansible13:59
openstackgerritMajor Hayden proposed stackforge/os-ansible-deployment-specs: Adding security hardening spec  https://review.openstack.org/22261913:59
kukaczwe need to connect various external networks and extend them into openstack as tenant networks. contrail has good ways how to ensure this14:00
kukacz... among others14:00
evrardjpsame use case here14:00
kukaczso the network guys appreciate MPBGP, EVPN and these protocol options14:01
*** cloudtrainme has joined #openstack-ansible14:03
kukaczis there any fast way to deploy a small environment for evaluating OSAD deployment?14:06
kukaczI thought I would start few openstack instances, to deploy control node cluster and a pair of compute nodes14:07
kukaczbut seems that I'll spend some time configuring several networks on those instances first14:07
evrardjpI'm afraid I don't have time to help you right now :/ I must go, maybe on Monday if everything goes fine14:08
evrardjpI'm off!14:08
kukaczevrardjp:  no problem, enjoy the weekend!14:08
kukaczanybody else? I wonder what people do to repeatedly build and destroy OSAD environment for dev/test purposes?14:09
mrstanwellyou can build a basic all-in-one pretty easily.  I've got a multi-node that was pretty easy to deploy once I worked out how to set up the network interfaces.  the teardown script wipes the containers out, and I've generally been able to just redeploy using osad.14:11
gparaskevastear down script does the work14:13
gparaskevasand then redeploy14:13
gparaskevasi believe although i am not sure that it also deletes the openstack_deploy folder14:13
kukaczbut first I probably need to create the required bridges and interfaces on my instances, right?14:16
mrstanwellkukacz: the bootstrap-aio script will set up your bridges.  then you can just run the setup-everything playbook.14:17
mrstanwellwait.  you said on instances...?14:17
openstackgerritMerged stackforge/os-ansible-deployment: Used named veth pairs that match container  https://review.openstack.org/22229514:17
kukaczmrstanwell: yes, I would like to deploy multinode, using existing openstack environment14:18
kukaczmrstanwell:I thought I will run some 5 instances and run OSAD on top of them14:18
*** javeriak has joined #openstack-ansible14:19
palendaekukacz: That's do-able, we run multinode on top of rackspace public cloud for testing. You need to define the bridges and interfaces, though14:19
kukaczmrstanwell:the bootstrap-aio - I assume that works on single node only, does it?14:19
mrstanwellkukacz:  Ah.  I've never deployed ostack on ostack.  But, yeah, for an all-in-one, the bootstrap-aio will set up the bridges.  for a multi-node, you need to configure them yourself, then update the user-settings.yml accordingly.14:19
palendaekukacz: Yes, bootstrap-aio.sh makes the assumption that is one node only14:19
palendaekukacz: For reference, this is what we use to deploy across the rackspace public cloud for multinode: https://github.com/rcbops/rpc-heat14:20
kukaczpalendae: are there any heat teplates or ansible scripts available to help with the prepare phase?14:20
kukaczpalendae: thanks, will look at it14:21
palendaeWithin openstack-ansible/OSAD, no, there's no host configuration present right now14:21
kukaczand just to confirm I got it right - I need to create 4 interfaces + bridges14:22
kukacz"br-mgmt" "br-storage" "br-vxlan" and "br-vlan"14:22
*** spotz_zzz is now known as spotz14:23
*** javeriak has quit IRC14:23
*** javeriak has joined #openstack-ansible14:24
*** woodard has quit IRC14:24
mrstanwellkukacz: sounds right.  you can use the examples in os-ansible-deployment/etc/network/interfaces.d.  Frankly, my interface files for multinode don't look much different from aio, just with different addressing.  and bonding, if you're into that sort of thing.14:27
kukaczmrstanwell:yeah. now I'm checking that example interfaces.d config and completing my openstack_user_config.yml14:30
kukaczmrstanwell: I noticed the example uses linuxbridge - is that a default or only integrated networking option?14:31
palendaekukacz: Currently the only one14:32
kukaczok14:32
*** javeriak_ has joined #openstack-ansible14:34
*** javeriak has quit IRC14:34
mrstanwellkukacz: and, technically, you don't need 4 interfaces.  I've done it with just one, using vlans.  (intel nucs only have one wired ifc...)14:35
*** phalmos has joined #openstack-ansible14:35
palendaemrstanwell: I've avoided getting a NUC in my homelab for that reason, but sure would reduce the amount of NIC packaging I have to deal with14:36
mrstanwellpalendae: I've got some startech usb enet adapters that worked out-of-the-box w/ ubuntu.  but they do drive the cpu load a bit.  and for some reason they get pretty warm...14:37
kukaczmrstanwell: I see. I think I'll create 4 OS non-routed networks so those instances will have simple NIC/bridge pairs without vlans in config - to make my config task easier14:37
palendaemrstanwell == Prof Denton?14:37
mrstanwellnope14:37
mrstanwellkukacz: yeah, if you're doing it ostack already, no point in complicating your life with vlans if you don't have to.  extra interfaces are cheap!  ;-)14:39
kukaczmrstanwell: but in the end  - I need to always have those 4 bridges in place to satisfy the default/example config, right?14:39
mrstanwellkukacz: I believe so, yes.  or depending on whatever you set up in openstack_user_config.yml.14:40
kukaczmrstanwell: I will leave it as much default as I can to make it work for the first tries14:41
kukaczas I'm new both to OSAD and Ansible14:41
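To make the bridge-to-config relationship above concrete: a rough sketch of how those bridges are referenced from openstack_user_config.yml, assuming the Kilo-era provider_networks layout. The interface names and ranges are placeholders, and only two of the four networks are shown:

global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_interface: "eth1"
        type: "raw"
        ip_from_q: "container"
        group_binds:
          - all_containers
          - hosts
    - network:
        container_bridge: "br-vxlan"
        container_interface: "eth10"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        ip_from_q: "tunnel"
        group_binds:
          - neutron_linuxbridge_agent
    # ... similar entries for br-storage and br-vlan ...

The bridges themselves still have to exist on the hosts beforehand, e.g. built from the interfaces.d examples mrstanwell points to above.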
mrstanwellpalendae: so far, nucs make a great little platform for osad.  if you don't mind the vlan config hassle and don't need serious performance, you can get by with the single nic. or go usb.14:42
mrstanwellkukacz: I'm not much farther ahead of you, frankly.  ;-)  But osad is hands-down the easiest ostack install I've ever done.14:43
mrstanwellkukacz: I went from git clone to running aio in less than an hour.  outstanding!14:44
kukaczmrstanwell: good to hear. as I need to spare some time of the weekend for kids too :-)14:44
kukaczmrstanwell: congrats, I won't beat you. already spent 2 hours chatting here and checking the configs14:45
mrstanwellkukacz: well, multinode took me quite a bit longer.  had to figure out what aio had been doing for me, networkwise.  ;-)14:46
kukacznetwork is always the pain. we should get rid of it14:47
openstackgerritMerged stackforge/os-ansible-deployment: Enabling log compression by default  https://review.openstack.org/22192414:49
mhaydenshould i cherry pick this one into kilo ^^ ?14:51
*** KLevenstein has quit IRC14:53
*** KLevenstein has joined #openstack-ansible14:56
odyssey4memhayden yeah, I think so - it looks like a good candidate for a backport14:56
mhaydenalrighty14:56
odyssey4mekukacz yes, openstack-ansible assumes that the hosts are setup already with the networking and disks already configured in the way you want to consume them14:56
openstackgerritMajor Hayden proposed stackforge/os-ansible-deployment: Enabling log compression by default  https://review.openstack.org/22264614:56
* mhayden did a thing14:57
odyssey4mekukacz it may be helpful to you to try out some basic ansible work on the side to understand how it works - it really helps you navigate the openstack-ansible project when you understand ansible a little better14:58
cloudnullmorning.14:58
cloudnullfinally at a computer.14:58
kukaczodyssey4me: thanks. actually, I've just started trying to use Ansible to configure the networks on my hosts15:00
tiagogomesanyone has seen this http://paste.openstack.org/show/456863/ ?15:00
odyssey4meo/ cloudnull :)15:00
odyssey4metiagogomes yes, that may have come up if you're using yesterday's15:00
odyssey4methe last tag of kilo? there was an upstream issue - we'll likely bump the upstream sha today for both juno and kilo to take care of that and rev the tag15:01
odyssey4mecloudnull shall I go ahead with that?15:01
cloudnulltypie typie, make it go ! =)15:02
odyssey4mecloudnull on it :)15:02
tiagogomesoh, I was using master15:02
odyssey4metiagogomes haha, living dangerously :p15:02
cloudnullDO IT LIVE!15:03
odyssey4metiagogomes I'm not sure if you're aware of this, but master tracks upstream master - so you're deploying Liberty-3 code with that. :)15:03
tiagogomesI had to clone again the repo when moving from separate deployment host to deploy from a target host and I forgot to checkout the branch15:03
odyssey4meit may break at any time15:03
*** agireud has joined #openstack-ansible15:03
tiagogomesright, I'll try with Kilo15:04
odyssey4metiagogomes hang ten with that - let us get that upstream sha update done15:04
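For readers wondering what "bump the upstream sha" means in practice: the project pins each upstream OpenStack service to a git repository and a frozen commit in its defaults, and fixes like this are typically just an update to those pins. A rough sketch with assumed variable names and an invented sha:

# Variable names follow the repo_packages convention as I recall it; the sha is fictional.
swift_git_repo: https://git.openstack.org/openstack/swift
swift_git_install_branch: 1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b7c8d9e0f  # frozen commit, not a branch name

The two reviews that follow (222656 for kilo, 222666 for juno) are that kind of update.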
*** phalmos has quit IRC15:04
*** agireud has quit IRC15:08
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update kilo to resolve PyECLib compile issue  https://review.openstack.org/22265615:11
*** cloudtra_ has joined #openstack-ansible15:12
*** cloudtrainme has quit IRC15:13
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update juno to resolve PyECLib compile issue  https://review.openstack.org/22266615:21
openstackgerritMerged stackforge/os-ansible-deployment: Adds the config_template to OpenStack Services  https://review.openstack.org/21703015:25
*** jwagner_away is now known as jwagner15:26
*** phalmos has joined #openstack-ansible15:26
*** javeriak_ has quit IRC15:40
*** javeriak has joined #openstack-ansible15:41
*** gparaskevas has quit IRC15:43
*** javeriak_ has joined #openstack-ansible15:45
*** javeriak has quit IRC15:47
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Add neutron_migrations_facts module  https://review.openstack.org/21975915:56
*** alop has joined #openstack-ansible15:56
*** javeriak has joined #openstack-ansible16:05
*** devlaps has joined #openstack-ansible16:06
*** daneyon has joined #openstack-ansible16:06
*** javeriak_ has quit IRC16:06
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Compartmentalizing RabbitMQ  https://review.openstack.org/20282216:09
*** KLevenstein has quit IRC16:10
*** KLevenstein_ has joined #openstack-ansible16:10
cloudnullmattt:  ^^ updated that patch per your review. that removes the "openstack" rabbitmq user16:11
*** phalmos has quit IRC16:13
*** phalmos has joined #openstack-ansible16:15
matttcloudnull: brilliant, thanks kevin16:17
*** woodard has joined #openstack-ansible16:17
cloudnullthank you sir, that was a good catch16:17
*** javeriak_ has joined #openstack-ansible16:17
matttcloudnull: we need to write an upgrade task to remove it on existing deploys, palendae had a review in flight to clean up the upgrade script ... any progress on that one palendae so we can start adding to it?16:18
matttbut i have to bounce, i'll be back online this evening16:19
* mattt is afk16:19
cloudnullkk take care mattt16:19
*** javeriak has quit IRC16:19
palendaemattt: Nope, still fixing Juno -> kilo upgrades16:20
openstackgerritMerged stackforge/os-ansible-deployment: Added post and pre hook script for veth cleanup  https://review.openstack.org/22034216:21
odyssey4mepalendae do you mind if I quickly fix the nit on that patch so that we can get on with building upgrade tasks into a fresh version?16:22
palendaeodyssey4me: Nope - feel free to take it16:22
*** javeriak has joined #openstack-ansible16:22
*** cloudtra_ has quit IRC16:22
*** javeriak_ has quit IRC16:23
*** cloudtrainme has joined #openstack-ansible16:24
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Remove Juno to Kilo logic in upgrade script  https://review.openstack.org/21529116:25
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Added post and pre hook script for veth cleanup  https://review.openstack.org/22269516:36
odyssey4mecloudnull happy to make https://review.openstack.org/215291 go?16:38
cloudnullha! already did it16:38
*** sdake has joined #openstack-ansible16:51
*** jwagner is now known as jwagner_away16:56
*** sdake_ has joined #openstack-ansible16:57
*** woodard has quit IRC16:58
*** javeriak_ has joined #openstack-ansible16:59
*** sdake has quit IRC17:01
*** javeriak has quit IRC17:01
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment-specs: Add spec for Gate Split  https://review.openstack.org/22100917:05
*** javeriak has joined #openstack-ansible17:05
*** javeriak_ has quit IRC17:06
*** woodard has joined #openstack-ansible17:08
odyssey4mehughsaunders what do you think about skipping the submodule thing and simply switching over to using the role requirements and a sha reference?17:08
odyssey4meas I recall cloudnull  favored that approach17:08
*** sdake_ has quit IRC17:10
odyssey4mecloudnull mattt hughsaunders can we make these two go please? https://review.openstack.org/22266617:15
odyssey4mehttps://review.openstack.org/22265617:15
odyssey4med34dh0r53 if you're around ^17:18
*** sdake has joined #openstack-ansible17:19
*** phalmos has quit IRC17:30
*** KLevenstein_ has quit IRC17:32
*** javeriak_ has joined #openstack-ansible17:36
*** javeriak has quit IRC17:38
*** jwagner_away is now known as jwagner17:39
*** woodard has quit IRC17:43
*** javeriak has joined #openstack-ansible17:44
*** javeriak_ has quit IRC17:47
*** abitha has joined #openstack-ansible17:47
d34dh0r53odyssey4me: looking at it now17:49
*** KLevenstein has joined #openstack-ansible17:55
*** mgariepy has quit IRC18:03
*** mgariepy has joined #openstack-ansible18:06
*** tiagogomes has quit IRC18:23
*** phalmos has joined #openstack-ansible18:35
*** javeriak has quit IRC18:37
cloudnullodyssey4me:  I do favor the approach of skipping submodules and using the requirement files.19:07
*** lkoranda_ has joined #openstack-ansible19:07
*** sdake_ has joined #openstack-ansible19:09
palendae^ +119:10
*** lkoranda has quit IRC19:11
*** lkoranda_ is now known as lkoranda19:11
odyssey4mecloudnull palendae :) if you could put that into the review it'd be awesome :)19:13
*** sdake has quit IRC19:13
palendaeodyssey4me: Which spec was that? The gate spliit one?19:14
odyssey4mepalendae yep19:14
odyssey4mepalendae cloudnull https://review.openstack.org/22100919:14
cloudnullodyssey4me:  isnt that covered here https://review.openstack.org/#/c/213779/19:16
odyssey4mepalendae cloudnull whoops, you're right19:17
palendaeI was searching the page for submodule, wasn't seeing it19:17
odyssey4meI clearly need to have my first sip of scotch so that I can think straight. :p19:17
cloudnullPatch set 3: Line 166: "I don't think we need to or should do the submoduling for anything that is Ansible galaxy compatible. When the role is moved to the external repo we should update our role-requirements.yml file with the data required to make it go using our scripts."19:18
odyssey4mecloudnull ah yes, my bad - youdunnit already19:18
cloudnullbut easy to miss , my review was like a stream of consciousness19:19
odyssey4meI am considering taking that whole piece out and just leaving the line that says it should be blueprinted and spec'd per role, and leave it at that19:19
cloudnullyea that makes sense19:19
odyssey4methe method for splitting roles out is contentious and distracting from the main purpose of the spec19:19
odyssey4mewhich is to allow other repositories for roles in our umbrella19:20
*** jmckind has joined #openstack-ansible19:20
*** sdake_ is now known as sdake19:20
cloudnullas things move out it may be necessary to submodule something like the shared libs or whatever. so it makes sense to have the decision made within the role migration spec.19:20
odyssey4mecloudnull sure, that's under playbook/role impact19:21
*** jwagner is now known as jwagner_away19:22
*** javeriak has joined #openstack-ansible19:26
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment-specs: Add spec for Independent Role Repositories  https://review.openstack.org/21377919:27
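A sketch of what a pinned entry in the role-requirements.yml file discussed above could look like, assuming the Ansible Galaxy requirements format the proposal leans on; the role name and URL are placeholders, since the split-out repositories do not exist yet at this point:

# Hypothetical entry -- name and src are placeholders, not real repositories.
- name: os_keystone
  scm: git
  src: https://git.openstack.org/openstack/openstack-ansible-os_keystone
  version: "1a2b3c4d"   # a pinned sha or tag rather than a moving branch

Pinning to a sha is what makes this roughly equivalent to, but lighter than, the submodule approach it would replace.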
*** jwagner_away is now known as jwagner19:28
*** lkoranda has quit IRC19:34
*** lkoranda has joined #openstack-ansible19:37
*** shoutm has quit IRC19:46
*** sdake_ has joined #openstack-ansible19:46
*** sdake has quit IRC19:50
*** javeriak has quit IRC19:50
openstackgerritMerged stackforge/os-ansible-deployment: Update kilo to resolve PyECLib compile issue  https://review.openstack.org/22265620:06
mhaydenwhen i finish a draft of a spec, do i flip the blueprint to review or pending approval?20:15
mhaydenor neither?20:15
*** jmckind has quit IRC20:18
cloudnullyes20:21
cloudnull:)20:21
spotzheheh20:21
cloudnullI'd flip it to pending approval. it cant hurt20:22
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Remove Juno to Kilo logic in upgrade script  https://review.openstack.org/21529120:26
mhaydencloudnull: thanks20:26
odyssey4memhayden I'd suggest review20:29
odyssey4meit's likely to go through a few iterations before being accepted, so until you're fairly certain that it's almost done it should be in review.20:29
odyssey4mechanging those values is supposed to be a PTL activity though :p20:30
*** jmckind has joined #openstack-ansible20:30
*** jmckind has quit IRC20:31
*** jmckind has joined #openstack-ansible20:32
*** lkoranda has quit IRC20:37
*** lkoranda has joined #openstack-ansible20:42
*** phalmos has quit IRC20:43
openstackgerritMerged stackforge/os-ansible-deployment: Update juno to resolve PyECLib compile issue  https://review.openstack.org/22266620:54
*** jmckind has quit IRC21:00
*** pradk has quit IRC21:10
*** alop has quit IRC21:32
*** alop has joined #openstack-ansible21:38
*** jwagner is now known as jwagner_away21:47
odyssey4mecloudnull sigmavirus24 d34dh0r53 hughsaunders mattt I would recommend holding off submitting any new reviews or doing any +w on patches from now on.21:55
odyssey4mewe're close to the rename timeframe, so I think it'd be prudent to hold off further changes21:55
cloudnull++21:55
prometheanfire:D21:56
sigmavirus24thanks odyssey4me21:56
odyssey4meI think it's safe to quickly do a juno/kilo tag though :)21:57
ApsuOh man, renamery ahoy21:57
odyssey4melet me get on that21:57
Apsugit clone while you can!21:57
*** daneyon has quit IRC22:01
d34dh0r53odyssey4me: tys22:09
*** woodard has joined #openstack-ansible22:10
*** kukacz has quit IRC22:13
*** woodard has quit IRC22:18
odyssey4mefyi everyone 11.2.2 has been released22:19
-cloudnull- PTL nominations for the upcoming cycle have started flowing into the mailing list. If you're interested in the position please submit your candidacy directly to the openstack-dev@lists.openstack.org list.22:23
cloudnullyay! 11.2.2 thanks odyssey4me!22:24
odyssey4mecloudnull I do find the whole ML candidacy a little odd as it seems it's supposed to be done differently this round: http://lists.openstack.org/pipermail/openstack-dev/2015-September/074171.html22:25
cloudnullyea its a bit odd22:25
odyssey4meI think that people may be misreading, or not reading. ;)22:26
cloudnullit seems that they're following the old process22:26
cloudnullbut its also likely that they've sent the message to the ML22:27
matttso who's going for PTL?22:27
cloudnulland the file to the repo22:27
cloudnullmattt:  you said it last22:27
cloudnull:)22:27
odyssey4melol, I'm happy to admit that I've put my hat into the ring22:27
odyssey4meI do still need to write it up though22:27
* odyssey4me watches the clock... 2 mins22:29
palendaeThey couldn't possibly run late :)22:29
odyssey4mefyi http://lists.openstack.org/pipermail/openstack-dev/2015-September/074235.html22:30
-openstackstatus- NOTICE: 30 minute warning, Gerrit will be offline from 23:00 to 23:30 UTC while some projects are renamed http://lists.openstack.org/pipermail/openstack-dev/2015-September/074235.html22:30
dolphmodyssey4me: i assume people still want to campaign per usual22:31
dolphmodyssey4me: "vote for me!" is never going away lol22:31
odyssey4mehaha dolphm is that an american thing? :p22:31
odyssey4mehaha, ok so my timing was out - apparently 23:00 UTC is in 30 mins :p22:32
odyssey4mewell, 28 mins22:32
mrstanwellodyssey4me: the American thing is to screw campaigning, and just buy what you need.22:33
odyssey4mebest I refresh my scotch :)22:33
dolphmodyssey4me: how many dollars must i give you for your vote, dear sir?22:33
odyssey4medolphm you're contesting stevemar ?22:33
dolphmodyssey4me: here, just take this blank check, i don't have time to talk petty matters with my constituency22:34
dolphmodyssey4me: no, just being american22:34
odyssey4medolphm I'll PM you my postal address. Money orders welcome. :p22:35
*** spotz is now known as spotz_zzz22:38
cloudnullconfirmed people are campaigning :) https://review.openstack.org/#/q/status:open+project:openstack/election+branch:master,n,z -- Vote for Pedro!22:40
odyssey4meheh, how about you +2 your own patch? https://review.openstack.org/22265422:40
cloudnullthats how you do it !22:41
palendaecloudnull: In Juno, do you know if I can tell Glance to use a swift proxy URL that's different from the VIP's?22:41
cloudnulli want to say no. but let me go look22:42
palendaeCrap22:42
odyssey4mepalendae I want to say yes because you can make glance use cloudfiles22:44
*** Mudpuppy_ has joined #openstack-ansible22:44
palendaeodyssey4me: Yeah, it's a special case though22:45
palendaeBasically I have 1 swift proxy node, want to use its IP and not redo the F522:46
palendaeI'm not seeing a swift-specific URL in glance22:46
palendaeOh...is it taking it from the service catalog again?22:47
palendaesigmavirus24: ^22:47
palendaeThough cloudfiles wouldn't be in the service catalog22:47
matttcloudnull: lolzusofunny22:47
*** Mudpuppy has quit IRC22:47
odyssey4meif it's cloudfiles then you probably won't have object storage in the service catalogue22:47
palendaeYeah...22:47
palendaeNot seeing a URL in here22:48
cloudnullso the auth address is defined here: https://github.com/stackforge/os-ansible-deployment/blob/juno/rpc_deployment/roles/glance_common/templates/glance-api.conf#L7222:48
palendaeYeah, got the auth one22:48
cloudnulland in juno swift_store_auth_address is defined through rax uservars22:48
palendaeOh, is that the *swift* auth, not keystone?22:49
palendaeHrm, not necessarily, looks like it's cloud identity22:49
cloudnullnevermind thats keystone and not the specific proxy address.22:50
odyssey4mewell, if that auth address is keystone then it should discover through the service catalogue, surely?22:50
*** arbrandes has quit IRC22:50
odyssey4meI'd suggest changing up the service catalogue for the object store22:50
cloudnullif you want to change the proxy address as stored in the service catalog, in juno, youll need to modify the ./rpc_deployment/playbooks/openstack/swift-proxy.yml file with the value you want22:52
cloudnullremove the endpoint from the service catalog22:52
cloudnulland then run the swift-proxy.yml plan22:52
cloudnull*play22:52
palendaeYeah, so, the auth url is the LB at Keystone's port22:52
cloudnullpalendae:  https://github.com/stackforge/os-ansible-deployment/blob/juno/rpc_deployment/vars/openstack_service_vars/swift_proxy_endpoint.yml22:53
sigmavirus24palendae: sorry22:54
palendaeHrm, yeah,  my swift changes didn't make it work22:58
*** tlian has quit IRC22:59
*** cloudtrainme has quit IRC23:00
-openstackstatus- NOTICE: Gerrit is offline from 23:00 to 23:30 UTC while some projects are renamed. http://lists.openstack.org/pipermail/openstack-dev/2015-September/074235.html23:00
*** ChanServ changes topic to "Gerrit is offline from 23:00 to 23:30 UTC while some projects are renamed. http://lists.openstack.org/pipermail/openstack-dev/2015-September/074235.html"23:00
odyssey4mepopcorn time :)23:01
KLevensteinodyssey4me: it begons?23:02
KLevenstein*begins23:02
*** meteorfox has quit IRC23:02
odyssey4meKLevenstein yep23:03
odyssey4mefollow progress in openstack-infra and on https://etherpad.openstack.org/p/project-renames-Septemeber-201523:03
* KLevenstein goes to pour a drink23:03
*** jlvillal has quit IRC23:03
*** meteorfox has joined #openstack-ansible23:06
odyssey4meI suppose I may as well get on with changing up the wiki and such.23:07
*** arbrandes has joined #openstack-ansible23:07
palendaeDoes swift-all register with keystone?23:09
palendaeNot seeing my IP in there23:10
*** sdake has joined #openstack-ansible23:10
*** sdake_ has quit IRC23:14
odyssey4meKLevenstein it seems from current discussion that the old repo should be redirected automatically by github - so we may be in some luck with the older deployments23:18
KLevensteinthat would be helpful23:18
odyssey4me(where github was the referential git source)23:19
odyssey4mewe can double-check when it's all done23:19
*** markvoelker has quit IRC23:19
odyssey4mewe can also try to validate whether git.o.o redirects23:19
prometheanfirehttps://github.com/openstack/openstack-ansible23:21
odyssey4meKLevenstein heh, in a browser enter https://github.com/stackforge/os-ansible-deployment right now23:22
KLevensteinsweet!23:22
odyssey4meand https://github.com/stackforge/os-ansible-deployment-specs23:23
palendaeGonna re-push my rpc-openstack PR and see what happens23:23
KLevensteinI’ll see what happens with the docs.23:24
odyssey4mepalendae hang ten - stuff's still going on23:25
KLevensteinyeah, docs still not building right yet.23:25
odyssey4meKLevenstein are you able to poke some people to get attention on https://review.openstack.org/222323 some time this w/end?23:25
KLevensteinI can ping Lana23:26
KLevensteinif you want, I’ll add her to the reviewers?23:26
odyssey4menot right now - we can wait for the target folder to actually be created, but ideally in the Aus Monday?23:26
odyssey4meah, good plan :)23:26
KLevensteinconsider it done23:26
odyssey4mehappy to wait for a work day, just wanting it to be the earliest work day possible :)23:27
palendaeAh, yeah, it clones from git.openstack.org which isn't working23:27
odyssey4mepalendae yep, they're still busy there23:28
odyssey4meI have to say, this is pretty impressive operational co-ordination.23:28
odyssey4meI've seen teams working in the same room which can't do this well.23:29
KLevensteinodyssey4me: added her; she’ll see it on her Monday/our Sunday23:29
odyssey4meKLevenstein perfect, thanks - you're a star!23:29
odyssey4meand https://review.openstack.org/#/admin/groups/490 and https://review.openstack.org/#/admin/groups/491 :)23:31
odyssey4menow for the project-config / system-config / nodepool changes to merge23:32
cloudnullgerrit is reflecting the change https://review.openstack.org/#/q/status:open+project:openstack/openstack-ansible,n,z23:33
odyssey4meanyone keen to submit the review to gerrit dashboard to change the dashboard?23:34
odyssey4meI forgot about that23:34
cloudnullwhat ?23:35
odyssey4meie https://github.com/stackforge/gerrit-dash-creator/blob/master/dashboards/os-ansible-deployment.dash23:35
*** alop has quit IRC23:35
odyssey4medid y'all even know that existed?23:35
cloudnullnope23:35
KLevensteinoh cool, our docs are building23:35
odyssey4meit's the secret sauce behind bit.ly/os-ansible-deployment-review23:36
palendaeodyssey4me: Did not23:36
cloudnullalso it doesnt seem that there is an equivalent repo in /openstack/23:36
odyssey4mewhich I guess needs to change now too :)23:36
cloudnullill review that23:36
odyssey4mepalendae it's a convenience dashboard, primarily for core reviewers :)23:37
odyssey4meit hides the stuff that hasn't successfully built, or already has negative votes23:37
odyssey4meI should probably add the updated version to the wiki or something :)23:37
*** ChanServ changes topic to "Topic: Launchpad: https://launchpad.net/openstack-ansible Weekly Meetings: https://wiki.openstack.org/wiki/Meetings/openstack-ansible || Repo rename from stackforge/os-ansible-deployment to openstack/openstack-ansible happens Sept 11 2015 23:00 to 23:30. See https://review.openstack.org/#/c/200730/"23:38
odyssey4meok, so the bulk of the work is done - now it's just down to the proposed change merging23:38
odyssey4me*changes23:38
odyssey4meonce that's done - meaning that our ACL's will be updated, we should have our full capability back23:39
palendaeodyssey4me: I know, I saw that23:39
palendaethem*23:39
palendaeAlso the 'new' view in gerrit is soooooooooooooo much nicer23:39
cloudnullodyssey4me:  https://review.openstack.org/#/c/222795/23:40
palendaehttps://review.openstack.org/#/settings/preferences > change view to 'New Screen'23:40
odyssey4mepalendae yep - it takes some getting used to, but it is great23:41
odyssey4meespecially for seeing the history of comments :)23:41
odyssey4meKLevenstein ok, so for the sake of information - whether it's useful or not...23:41
dolphmi had not realized the change in acronym resulting from openstack/ until just now.23:41
odyssey4meopenstack-ansible 11.1.2 referred to github for its git source23:42
palendaedolphm: OA23:42
odyssey4me11.2.0 switched to using git.openstack.org23:42
cloudnullyep OSAD is a clasic23:42
cloudnull*classic23:42
palendaedeprecated23:42
KLevensteinodyssey4me: ok23:42
dolphmsosad nomore osad23:42
* dolphm apologizes23:42
cloudnullhahaha23:42
palendaeNot sad23:42
cloudnullthat'll be the next shirt23:42
odyssey4meI expect that rpc-openstack r11 is using git.openstack.org - so there's no redirect there23:42
palendaeNow everything matches23:42
dolphmhow do i upgrade my legacy shirt?23:43
palendaeodyssey4me: Correct23:43
KLevensteinodyssey4me: not sure where the docs will be affected but I’ll have a look23:43
rromansdolphm: turn it upside down23:43
dolphmplz assign bp zero-downtime-shirt-upgrades to claco kthx23:43
palendae>:(23:44
odyssey4meKLevenstein so for rpc-openstack, nothing changes - the osad rename will be handled transparently by a change in the rpc-openstack submodule pointer23:44
cloudnullOSA is the new acronym. the D has been dropped. I'm sure Poettering is crying right now.23:44
*** openstackgerrit has quit IRC23:46
*** openstackgerrit has joined #openstack-ansible23:46
